.TH "LVMCACHE" "7" "LVM TOOLS #VERSION#" "Red Hat, Inc" "\""
.
.SH NAME
.
lvmcache \(em LVM caching
.
.SH DESCRIPTION
.
\fBlvm\fP(8) includes two kinds of caching that can be used to improve the
performance of a Logical Volume (LV). When caching, varying subsets of an
LV's data are temporarily stored on a smaller, faster device (e.g. an SSD)
to improve the performance of the LV.
.P
To do this with lvm, a new special LV is first created from the faster
device. This LV will hold the cache. Then, the new fast LV is attached to
the main LV by way of an lvconvert command. lvconvert inserts one of the
device mapper caching targets into the main LV's i/o path. The device
mapper target combines the main LV and fast LV into a hybrid device that looks
like the main LV, but has better performance. While the main LV is being
used, portions of its data will be temporarily and transparently stored on
the special fast LV.
.P
The two kinds of caching are:
.P
.IP \[bu] 2
A read and write hot-spot cache, using the dm-cache kernel module.
This cache tracks access patterns and adjusts its content deliberately so
that commonly used parts of the main LV are likely to be found on the fast
storage. LVM refers to this using the LV type \fBcache\fP.
.
.IP \[bu]
A write cache, using the dm-writecache kernel module. This cache can be
used with SSD or PMEM devices to speed up all writes to the main LV. Data
read from the main LV is not stored in the cache, only newly written data.
LVM refers to this using the LV type \fBwritecache\fP.
.
.SH USAGE
.
.SS 1. Identify main LV that needs caching
.
The main LV may already exist, and is located on larger, slower devices.
A main LV would be created with a command like:
.P
# lvcreate -n main -L Size vg /dev/slow_hdd
.
.SS 2. Identify fast LV to use as the cache
.
A fast LV is created using one or more fast devices, like an SSD. This
special LV will be used to hold the cache:
.P
# lvcreate -n fast -L Size vg /dev/fast_ssd
.P
# lvs -a
.nf
LV   Attr       Type   Devices
fast -wi------- linear /dev/fast_ssd
main -wi------- linear /dev/slow_hdd
.fi
.
.SS 3. Start caching the main LV
.
To start caching the main LV, convert the main LV to the desired caching
type, and specify the fast LV to use as the cache:
.P
using dm-cache (with cachepool):
.P
# lvconvert --type cache --cachepool fast vg/main
.P
using dm-cache (with cachevol):
.P
# lvconvert --type cache --cachevol fast vg/main
.P
using dm-writecache (with cachevol):
.P
# lvconvert --type writecache --cachevol fast vg/main
.P
For more alternatives see:
.br
dm-cache command shortcut
.br
dm-cache with separate data and metadata LVs
.
.SS 4. Display LVs
.
Once the fast LV has been attached to the main LV, lvm reports the main LV
type as either \fBcache\fP or \fBwritecache\fP depending on the type used.
While attached, the fast LV is hidden, and renamed with a _cvol or _cpool
suffix. It is displayed by lvs -a. The _corig or _wcorig LV represents
the original LV without the cache.
.sp
using dm-cache (with cachepool):
.P
# lvs -ao+devices
.nf
LV                 Pool         Type       Devices
main               [fast_cpool] cache      main_corig(0)
[fast_cpool]                    cache-pool fast_cpool_cdata(0)
[fast_cpool_cdata]              linear     /dev/fast_ssd
[fast_cpool_cmeta]              linear     /dev/fast_ssd
[main_corig]                    linear     /dev/slow_hdd
.fi
.sp
using dm-cache (with cachevol):
.P
# lvs -ao+devices
.P
.nf
LV           Pool        Type   Devices
main         [fast_cvol] cache  main_corig(0)
[fast_cvol]              linear /dev/fast_ssd
[main_corig]             linear /dev/slow_hdd
.fi
.sp
using dm-writecache (with cachevol):
.P
# lvs -ao+devices
.P
.nf
LV            Pool        Type       Devices
main          [fast_cvol] writecache main_wcorig(0)
[fast_cvol]               linear     /dev/fast_ssd
[main_wcorig]             linear     /dev/slow_hdd
.fi
.
.SS 5. Use the main LV
.
Use the LV until the cache is no longer wanted, or needs to be changed.
.
.SS 6. Stop caching
.
To stop caching the main LV and also remove the unneeded cache pool, use
--uncache:
.P
# lvconvert --uncache vg/main
.P
# lvs -a
.nf
LV   VG Attr       Type   Devices
main vg -wi------- linear /dev/slow_hdd
.fi
.P
To stop caching the main LV, separate the fast LV from the main LV. This
changes the type of the main LV back to what it was before the cache was
attached.
.P
# lvconvert --splitcache vg/main
.P
# lvs -a
.nf
LV   VG Attr       Type   Devices
fast vg -wi------- linear /dev/fast_ssd
main vg -wi------- linear /dev/slow_hdd
.fi
.
.SS 7. Create a new LV with caching
.
A new LV can be created with caching attached at the time of creation
using the following command:
.P
.nf
# lvcreate --type cache|writecache -n Name -L Size
        --cachedevice /dev/fast_ssd vg /dev/slow_hdd
.fi
.P
The main LV is created with the specified Name and Size from the slow_hdd.
A hidden fast LV is created on the fast_ssd and is then attached to the
new main LV. If the fast_ssd is unused, the entire disk will be used as
the cache unless the --cachesize option is used to specify a size for the
fast LV. The --cachedevice option can be repeated to use multiple disks
for the fast LV.
.
.SH OPTIONS
.
.SS option args
.
.B --cachepool
.IR CachePoolLV | LV
.P
Pass this option a cachepool LV or a standard LV. When using a cache
pool, lvm places cache data and cache metadata on different LVs. The two
LVs together are called a cache pool. This gives slightly better
performance for dm-cache and permits specific placement and segment type
selection for the data and metadata volumes.
A cache pool is represented as a special type of LV
that cannot be used directly. If a standard LV is passed with this
option, lvm will first convert it to a cache pool by combining it with
another LV to use for metadata. This option can be used with dm-cache.
.P
.B --cachevol
.I LV
.P
Pass this option a fast LV that should be used to hold the cache. With a
cachevol, cache data and metadata are stored in different parts of the
same fast LV. This option can be used with dm-writecache or dm-cache.
.P
.B --cachedevice
.I PV
.P
This option can be used in place of --cachevol, in which case a cachevol
LV will be created using the specified device. This option can be
repeated to create a cachevol using multiple devices, or a tag name can be
specified in which case the cachevol will be created using any of the
devices with the given tag. If a named cache device is unused, the entire
device will be used to create the cachevol. To create a cachevol of a
specific size from the cache devices, include the --cachesize option.
.
.SS dm-cache block size
.
A cache pool will have a logical block size of 4096 bytes if it is created
on a device with a logical block size of 4096 bytes.
.P
If a main LV has logical block size 512 (with an existing xfs file system
using that size), then it cannot use a cache pool with a 4096 logical
block size. If the cache pool is attached, the main LV will likely fail
to mount.
.P
To avoid this problem, use a mkfs option to specify a 4096 block size for
the file system, or attach the cache pool before running mkfs.
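As a rough illustration (this is a sketch, not an lvm tool; the function name is hypothetical), the mount compatibility rule above reduces to a one-line check: the cached device exposes the cache pool's logical block size, so that size must not exceed the block size the file system was created with.

```python
def mountable(fs_block_size: int, cache_pool_block_size: int) -> bool:
    """Return True if the fs will likely still mount behind the cache pool."""
    # The hybrid device exposes the cache pool's logical block size;
    # a file system built on smaller blocks will likely fail to mount.
    return cache_pool_block_size <= fs_block_size

print(mountable(512, 4096))   # False: 512-byte fs behind 4096-byte pool
print(mountable(4096, 4096))  # True
```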
.
.SS dm-writecache block size
.
The dm-writecache block size can be 4096 bytes (the default), or 512
bytes. The default 4096 has better performance and should be used except
when 512 is necessary for compatibility. The dm-writecache block size is
specified with --cachesettings block_size=4096|512 when caching is started.
.P
When a file system like xfs already exists on the main LV prior to
caching, and the file system is using a block size of 512, then the
writecache block size should be set to 512. (The file system will likely
fail to mount if a writecache block size of 4096 is used in this case.)
.P
Check the xfs sector size while the fs is mounted:
.P
# xfs_info /dev/vg/main
.nf
Look for sectsz=512 or sectsz=4096
.fi
.P
The writecache block size should be chosen to match the xfs sectsz value.
.P
It is also possible to specify a sector size of 4096 to mkfs.xfs when
creating the file system. In this case the writecache block size of 4096
can be used.
.P
The writecache block size is displayed by the command:
.br
lvs -o writecacheblocksize VG/LV
.P
.SS dm-writecache memory usage
.P
The amount of main system memory used by dm-writecache can be a factor
when selecting the writecache cachevol size and the writecache block size.
.P
.IP \[bu] 2
writecache block size 4096: each 100 GiB of writecache cachevol uses
slightly over 2 GiB of system memory.
.IP \[bu] 2
writecache block size 512: each 100 GiB of writecache cachevol uses
a little over 16 GiB of system memory.
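The figures above imply a simple rule of thumb: memory grows linearly with cachevol size and inversely with block size. The following sketch (a hypothetical helper, not part of lvm, and assuming purely linear scaling) extrapolates from the documented 2 GiB per 100 GiB at block size 4096:

```python
GiB = 1 << 30

def writecache_mem_estimate(cachevol_bytes: int, block_size: int) -> float:
    """Rough RAM estimate in bytes for dm-writecache bookkeeping.

    Scaled from the documented figure of ~2 GiB of system memory per
    100 GiB of cachevol at block_size=4096; halving the block size
    doubles the number of tracked blocks, and so the memory.
    """
    per_100gib_at_4096 = 2 * GiB
    return cachevol_bytes / (100 * GiB) * per_100gib_at_4096 * (4096 / block_size)

print(writecache_mem_estimate(100 * GiB, 4096) / GiB)  # 2.0
print(writecache_mem_estimate(100 * GiB, 512) / GiB)   # 16.0
```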
.
.SS dm-writecache settings
.
To specify dm-writecache tunable settings on the command line, use:
.br
--cachesettings 'option=N' or
.br
--cachesettings 'option1=N option2=N ...'
.P
For example, --cachesettings 'high_watermark=90 writeback_jobs=4'.
.P
To include settings when caching is started, run:
.P
.nf
# lvconvert --type writecache --cachevol fast \\
        --cachesettings 'option=N' vg/main
.fi
.P
To change settings for an existing writecache, run:
.P
.nf
# lvchange --cachesettings 'option=N' vg/main
.fi
.P
To clear all settings that have been applied, run:
.P
.nf
# lvchange --cachesettings '' vg/main
.fi
.P
To view the settings that are applied to a writecache LV, run:
.P
.nf
# lvs -o cachesettings vg/main
.fi
.P
Tunable settings are:
.
.TP
high_watermark = <percent>
Start writeback when the writecache usage reaches this percent (0-100).
.
.TP
low_watermark = <percent>
Stop writeback when the writecache usage reaches this percent (0-100).
.
.TP
writeback_jobs = <count>
Limit the number of blocks that are in flight during writeback. Setting
this value reduces writeback throughput, but it may improve latency of
read requests.
.
.TP
autocommit_blocks = <count>
When the application writes this amount of blocks without issuing the
FLUSH request, the blocks are automatically committed.
.
.TP
autocommit_time = <milliseconds>
The data is automatically committed if this time passes and no FLUSH
request is received.
.
.TP
fua = 0|1
Use the FUA flag when writing data from persistent memory back to the
underlying device.
Applicable only to persistent memory.
.
.TP
nofua = 0|1
Don't use the FUA flag when writing back data and send the FLUSH request
afterwards. Some underlying devices perform better with fua, some with
nofua. Testing is necessary to determine which.
Applicable only to persistent memory.
.
.TP
cleaner = 0|1
Setting cleaner=1 enables the writecache cleaner mode in which data is
gradually flushed from the cache. If this is done prior to detaching the
writecache, then the splitcache command will have little or no flushing to
perform. If not done beforehand, the splitcache command enables the
cleaner mode and waits for flushing to complete before detaching the
writecache. Adding cleaner=0 to the splitcache command will skip the
cleaner mode, and any required flushing is performed in device suspend.
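The high_watermark and low_watermark settings above act as a hysteresis pair: writeback starts when usage crosses the high watermark and keeps draining until usage falls to the low watermark. A minimal sketch of that behavior (a hypothetical function, not lvm code; the default values of 50 and 45 used here are assumptions):

```python
def writeback_active(usage_pct: float, active: bool,
                     high_watermark: float = 50,
                     low_watermark: float = 45) -> bool:
    """Hysteresis: start at high_watermark, drain down to low_watermark."""
    if usage_pct >= high_watermark:
        return True   # crossed the high watermark: start writeback
    if usage_pct <= low_watermark:
        return False  # drained to the low watermark: stop writeback
    return active     # between the watermarks: keep the current state

print(writeback_active(60, active=False))  # True: crossed high watermark
print(writeback_active(47, active=True))   # True: still draining
print(writeback_active(40, active=True))   # False: reached low watermark
```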
.SS dm-writecache using metadata profiles
.
In addition to specifying writecache settings on the command line, they
can also be set in lvm.conf, or in a profile file, using the
allocation/cache_settings/writecache config structure shown below.
.P
It's possible to prepare a number of different profile files in the
\fI#DEFAULT_SYS_DIR#/profile\fP directory and specify the file name
of the profile when starting writecache.
.P
.I Example
.nf
# cat <<EOF > #DEFAULT_SYS_DIR#/profile/cache_writecache.profile
allocation {
.RS
cache_settings {
.RS
writecache {
.RS
high_watermark=60
writeback_jobs=1024
.RE
}
.RE
}
.RE
}
EOF
.P
# lvcreate -an -L10G --name fast vg /dev/fast_ssd
# lvcreate --type writecache -L10G --name main --cachevol fast \\
        --metadataprofile cache_writecache vg /dev/slow_hdd
.fi
.
.SS dm-cache with separate data and metadata LVs
.
The preferred way of using dm-cache is to place the cache metadata and
cache data on separate LVs. To do this, a "cache pool" is created, which
is a special LV that references two sub LVs, one for data and one for
metadata.
.P
To create a cache pool of a given data size and let lvm2 calculate an
appropriate metadata size:
.P
# lvcreate --type cache-pool -L DataSize -n fast vg /dev/fast_ssd1
.P
To create a cache pool from a separate LV and let lvm2 calculate an
appropriate cache metadata size:
.P
# lvcreate -n fast -L DataSize vg /dev/fast_ssd1
.br
# lvconvert --type cache-pool vg/fast /dev/fast_ssd1
.P
To create a cache pool from two separate LVs:
.P
# lvcreate -n fast -L DataSize vg /dev/fast_ssd1
.br
# lvcreate -n fastmeta -L MetadataSize vg /dev/fast_ssd2
.br
# lvconvert --type cache-pool --poolmetadata fastmeta vg/fast
.P
Then use the cache pool LV to start caching the main LV:
.P
# lvconvert --type cache --cachepool fast vg/main
.P
A variation of the same procedure automatically creates a cache pool when
caching is started. To do this, use a standard LV as the --cachepool
(this will hold cache data), and use another standard LV as the
--poolmetadata (this will hold cache metadata). LVM will create a
cache pool LV from the two specified LVs, and use the cache pool to start
caching the main LV.
.P
.nf
# lvcreate -n fast -L DataSize vg /dev/fast_ssd1
# lvcreate -n fastmeta -L MetadataSize vg /dev/fast_ssd2
# lvconvert --type cache --cachepool fast \\
        --poolmetadata fastmeta vg/main
.fi
.
.SS dm-cache cache modes
.
The default dm-cache cache mode is "writethrough". Writethrough ensures
that any data written will be stored both in the cache and on the origin
LV. The loss of a device associated with the cache in this case would not
mean the loss of any data.
.P
A second cache mode is "writeback". Writeback delays writing data blocks
from the cache back to the origin LV. This mode will increase
performance, but the loss of a cache device can result in lost data.
.P
With the --cachemode option, the cache mode can be set when caching is
started, or changed on an LV that is already cached. The current cache
mode can be displayed with the cache_mode reporting option:
.P
.B lvs -o+cache_mode VG/LV
.P
.BR lvm.conf (5)
.B allocation/cache_mode
.br
defines the default cache mode.
.P
.nf
# lvconvert --type cache --cachemode writethrough \\
        --cachepool fast vg/main
.P
# lvconvert --type cache --cachemode writethrough \\
        --cachevol fast vg/main
.fi
.
.SS dm-cache chunk size
.
The size of data blocks managed by dm-cache can be specified with the
--chunksize option when caching is started. The default unit is KiB. The
value must be a multiple of 32 KiB between 32 KiB and 1 GiB. Cache chunks
larger than 512 KiB should be used only when necessary.
.P
Using a chunk size that is too large can result in wasteful use of the
cache, in which small reads and writes cause large sections of an LV to be
stored in the cache. It can also require increasing the migration
threshold, which defaults to 2048 sectors (1 MiB); lvm2 ensures the
migration threshold is at least 8 chunks in size. This may in some cases
result in a very high bandwidth load when transferring data between the
cache LV and its origin LV. However, choosing a chunk size that is too
small can result in more overhead trying to manage the numerous chunks
that become mapped into the cache. Overhead can include both excessive
CPU time searching for chunks, and excessive memory tracking chunks.
.P
Command to display the chunk size:
.P
.B lvs -o+chunksize VG/LV
.P
.BR lvm.conf (5)
.B allocation/cache_pool_chunk_size
.P
controls the default chunk size.
.P
The default value is shown by:
.P
.B lvmconfig --type default allocation/cache_pool_chunk_size
.P
Checking the migration threshold (in sectors) of a running cached LV:
.br
.B lvs -o+kernel_cache_settings VG/LV
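The --chunksize constraints above (a multiple of 32 KiB between 32 KiB and 1 GiB) can be expressed as a small validation helper. This is an illustrative sketch, not lvm code; values are in KiB as on the command line:

```python
def valid_chunk_size(kib: int) -> bool:
    """--chunksize must be a multiple of 32 KiB between 32 KiB and 1 GiB."""
    return kib % 32 == 0 and 32 <= kib <= 1024 * 1024  # 1024*1024 KiB = 1 GiB

print(valid_chunk_size(64))           # True
print(valid_chunk_size(48))           # False: not a multiple of 32
print(valid_chunk_size(2 * 1024**2))  # False: larger than 1 GiB
```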
.
.SS dm-cache migration threshold
.
Migrating data between the origin and cache LV uses bandwidth.
The user can set a throttle to prevent more than a certain amount of
migration occurring at any one time. Currently dm-cache does not take
normal io traffic going to the devices into account.
.P
The migration threshold can be set via the cache policy settings as
"migration_threshold=<#sectors>" to set the maximum number
of sectors being migrated, the default being 2048 sectors (1 MiB).
.P
Command to set the migration threshold to 2 MiB (4096 sectors):
.P
.B lvchange --cachesettings 'migration_threshold=4096' VG/LV
.P
Command to display the migration threshold:
.P
.B lvs -o+kernel_cache_settings,cache_settings VG/LV
.br
.B lvs -o+chunksize VG/LV
.
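The interaction between the migration threshold and the chunk size (see "dm-cache chunk size" above, where lvm2 raises the threshold to at least 8 chunks) can be sketched as follows. This is an illustration of the stated rule, not lvm's implementation, and the function name is hypothetical:

```python
def effective_migration_threshold(requested_sectors: int,
                                  chunk_size_sectors: int) -> int:
    """Apply the rule that the threshold is at least 8 chunks.

    All values are in 512-byte sectors, matching migration_threshold.
    """
    return max(requested_sectors, 8 * chunk_size_sectors)

# 64 KiB chunks = 128 sectors; 8 chunks = 1024, so the 2048 default holds
print(effective_migration_threshold(2048, 128))   # 2048
# 1 MiB chunks = 2048 sectors; 8 chunks forces 16384 sectors (8 MiB)
print(effective_migration_threshold(2048, 2048))  # 16384
```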
.SS dm-cache cache policy
.
The dm-cache subsystem has additional per-LV parameters: the cache policy
to use, and possibly tunable parameters for the cache policy. Three
policies are currently available: "smq" is the default policy, "mq" is an
older implementation, and "cleaner" is used to force the cache to write
back (flush) all cached writes to the origin LV.
.P
The older "mq" policy has a number of tunable parameters. The defaults are
chosen to be suitable for the majority of systems, but in special
circumstances, changing the settings can improve performance.
.P
With the --cachepolicy and --cachesettings options, the cache policy and
settings can be set when caching is started, or changed on an existing
cached LV (both options can be used together). The current cache policy
and settings can be displayed with the cache_policy and cache_settings
reporting options:
.P
.B lvs -o+cache_policy,cache_settings VG/LV
.P
Change the cache policy and settings of an existing LV:
.nf
# lvchange --cachepolicy mq --cachesettings \\
        \(aqmigration_threshold=2048 random_threshold=4\(aq vg/main
.fi
.P
.BR lvm.conf (5)
.B allocation/cache_policy
.br
defines the default cache policy.
.P
.BR lvm.conf (5)
.B allocation/cache_settings
.br
defines the default cache settings.
.
.SS dm-cache using metadata profiles
.
Cache pools allow setting a variety of options. Many of these settings
can be specified in lvm.conf or profile settings. You can prepare
a number of different profiles in the \fI#DEFAULT_SYS_DIR#/profile\fP directory
and just specify the metadata profile file name when caching an LV or
creating a cache pool.
Check the output of \fBlvmconfig --type default --withcomments\fP
for a detailed description of all individual cache settings.
.P
.I Example
.nf
# cat <<EOF > #DEFAULT_SYS_DIR#/profile/cache_big_chunk.profile
allocation {
.RS
cache_pool_metadata_require_separate_pvs=0
cache_pool_chunk_size=512
cache_metadata_format=2
cache_mode="writethrough"
cache_policy="smq"
cache_settings {
.RS
smq {
.RS
migration_threshold=8192
random_threshold=4096
.RE
}
.RE
}
.RE
}
EOF
.P
# lvcreate --cache -L10G --metadataprofile cache_big_chunk vg/main \\
        /dev/fast_ssd
# lvcreate --cache -L10G vg/main --config \\
        'allocation/cache_pool_chunk_size=512' /dev/fast_ssd
.fi
.
.SS dm-cache spare metadata LV
.
See
.BR lvmthin (7)
for a description of the "pool metadata spare" LV.
The same concept is used for cache pools.
.
.SS dm-cache metadata formats
.
There are two disk formats for dm-cache metadata. The metadata format can
be specified with --cachemetadataformat when caching is started, and
cannot be changed. Format \fB2\fP has better performance; it is more
compact, and stores dirty bits in a separate btree, which improves the
speed of shutting down the cache. With \fBauto\fP, lvm selects the best
option provided by the current dm-cache kernel module.
.
.SS RAID1 cache device
.
RAID1 can be used to create the fast LV holding the cache so that it can
tolerate a device failure. (When using dm-cache with separate data
and metadata LVs, each of the sub-LVs can use RAID1.)
.P
.nf
# lvcreate -n main -L Size vg /dev/slow
# lvcreate --type raid1 -m 1 -n fast -L Size vg /dev/ssd1 /dev/ssd2
# lvconvert --type cache --cachevol fast vg/main
.fi
.
.SS dm-cache command shortcut
.
A single command can be used to cache a main LV with automatic
creation of a cache pool:
.P
.nf
# lvcreate --cache --size CacheDataSize VG/LV [FastPVs]
.fi
.P
or the longer variant:
.P
.nf
# lvcreate --type cache --size CacheDataSize \\
        --name NameCachePool VG/LV [FastPVs]
.fi
.P
In this command, the specified LV already exists, and is the main LV to be
cached. The command creates a new cache pool of the given size and name
(if no name is given, one is automatically selected from the sequence
lvolX_cpool), using the optionally specified fast PV(s) (typically an
ssd). Then it attaches the new cache pool to the existing main LV to
begin caching.
.P
(Note: ensure that the specified main LV is a standard LV. If a cache
pool LV is mistakenly specified, then the command does something
different.)
.P
(Note: the --type option is interpreted differently by this command than
by normal lvcreate commands in which --type specifies the type of the
newly created LV. In this case, an LV with type cache-pool is being
created, and the existing main LV is being converted to type cache.)
.
.SH SEE ALSO
.
.nh
.ad l
.BR lvm.conf (5),
.BR lvchange (8),
.BR lvcreate (8),
.BR lvdisplay (8),
.BR lvextend (8),
.BR lvremove (8),
.BR lvrename (8),
.BR lvresize (8),
.BR lvs (8),
.br
.BR vgchange (8),
.BR vgmerge (8),
.BR vgreduce (8),
.BR vgsplit (8),
.P
.BR cache_check (8),
.BR cache_dump (8),
.BR cache_repair (8)