mirror of git://sourceware.org/git/lvm2.git synced 2024-12-21 13:34:40 +03:00

man: describe profile support in lvmcache.7

Add the missing description of profile usage with a cache pool.
List cache-pools as the first option for dm-cache, as they provide
better performance and more functionality than cachevols.
Zdenek Kabelac 2021-03-29 18:55:47 +02:00
parent 2aaea13aaa
commit ff5776024f


@@ -63,17 +63,21 @@ To start caching the main LV, convert the main LV to the desired caching
 type, and specify the fast LV to use as the cache:

 .nf
-using dm-cache:
-$ lvconvert --type cache --cachevol fast vg/main
-
-using dm-writecache:
-$ lvconvert --type writecache --cachevol fast vg/main
-
 using dm-cache (with cachepool):
 $ lvconvert --type cache --cachepool fast vg/main
+
+using dm-cache (with cachevol):
+$ lvconvert --type cache --cachevol fast vg/main
+
+using dm-writecache (with cachevol):
+$ lvconvert --type writecache --cachevol fast vg/main
+
+For more alternatives see:
+  dm-cache command shortcut
+  dm-cache with separate data and metadata LVs
 .fi

 .B 4. Display LVs
@@ -85,31 +89,31 @@ suffix. It is displayed by lvs -a. The _corig or _wcorig LV represents
 the original LV without the cache.

 .nf
-using dm-cache:
-$ lvs -a
-  LV            Pool        Type       Devices
-  main          [fast_cvol] cache      main_corig(0)
-  [fast_cvol]               linear     /dev/fast_ssd
-  [main_corig]              linear     /dev/slow_hhd
-
-using dm-writecache:
-$ lvs -a
-  LV             Pool        Type       Devices
-  main           [fast_cvol] writecache main_wcorig(0)
-  [fast_cvol]                linear     /dev/fast_ssd
-  [main_wcorig]              linear     /dev/slow_hhd
-
 using dm-cache (with cachepool):
-$ lvs -a
+$ lvs -ao+devices
   LV                 Pool         Type       Devices
   main               [fast_cpool] cache      main_corig(0)
   [fast_cpool]                    cache-pool fast_pool_cdata(0)
   [fast_cpool_cdata]              linear     /dev/fast_ssd
   [fast_cpool_cmeta]              linear     /dev/fast_ssd
   [main_corig]                    linear     /dev/slow_hhd
+
+using dm-cache (with cachevol):
+$ lvs -ao+devices
+  LV            Pool        Type       Devices
+  main          [fast_cvol] cache      main_corig(0)
+  [fast_cvol]               linear     /dev/fast_ssd
+  [main_corig]              linear     /dev/slow_hhd
+
+using dm-writecache (with cachevol):
+$ lvs -ao+devices
+  LV             Pool        Type       Devices
+  main           [fast_cvol] writecache main_wcorig(0)
+  [fast_cvol]                linear     /dev/fast_ssd
+  [main_wcorig]              linear     /dev/slow_hhd
 .fi

 .B 5. Use the main LV
@@ -118,6 +122,16 @@ Use the LV until the cache is no longer wanted, or needs to be changed.
 .B 6. Stop caching
+
+To stop caching the main LV and also remove the unneeded cache pool,
+use the --uncache option:
+
+.nf
+$ lvconvert --uncache vg/main
+$ lvs -a
+  LV   VG Attr       Type   Devices
+  main vg -wi------- linear /dev/slow_hhd
+.fi

 To stop caching the main LV, separate the fast LV from the main LV. This
 changes the type of the main LV back to what it was before the cache was
 attached.
@@ -130,16 +144,6 @@ attached.
   fast vg -wi------- linear /dev/fast_ssd
   main vg -wi------- linear /dev/slow_hhd
-
-To stop caching the main LV and also remove unneeded cache pool,
-use the --uncache:
-
-.nf
-$ lvconvert --uncache vg/main
-$ lvs -a
-  LV   VG Attr       Type   Devices
-  main vg -wi------- linear /dev/slow_hhd
-
 .fi

 .SS Create a new LV with caching.
@@ -167,14 +171,6 @@ for the fast LV.
 \&
-.B --cachevol
-.I LV
-.br
-Pass this option a fast LV that should be used to hold the cache. With a
-cachevol, cache data and metadata are stored in different parts of the
-same fast LV. This option can be used with dm-writecache or dm-cache.
-
 .B --cachepool
 .IR CachePoolLV | LV
 .br
@@ -189,6 +185,14 @@ that cannot be used directly. If a standard LV is passed with this
 option, lvm will first convert it to a cache pool by combining it with
 another LV to use for metadata. This option can be used with dm-cache.
+
+.B --cachevol
+.I LV
+.br
+Pass this option a fast LV that should be used to hold the cache. With a
+cachevol, cache data and metadata are stored in different parts of the
+same fast LV. This option can be used with dm-writecache or dm-cache.
+
 .B --cachedevice
 .I PV
 .br
@@ -318,10 +322,25 @@ cleaner mode, and any required flushing is performed in device suspend.
 \&
-When using dm-cache, the cache metadata and cache data can be stored on
-separate LVs. To do this, a "cache pool" is created, which is a special
+The preferred way of using dm-cache is to place the cache metadata and cache
+data on separate LVs. To do this, a "cache pool" is created, which is a special
 LV that references two sub LVs, one for data and one for metadata.
+
+To create a cache pool of a given data size and let lvm2 calculate an
+appropriate metadata size:
+
+.nf
+$ lvcreate --type cache-pool -L DataSize -n fast vg /dev/fast_ssd1
+.fi
+
+To create a cache pool from an existing LV and let lvm2 calculate an
+appropriate cache metadata size:
+
+.nf
+$ lvcreate -n fast -L DataSize vg /dev/fast_ssd1
+$ lvconvert --type cache-pool vg/fast /dev/fast_ssd1
+.fi

 To create a cache pool from two separate LVs:
 .nf
@@ -374,8 +393,12 @@ mode can be displayed with the cache_mode reporting option:
 defines the default cache mode.

 .nf
-$ lvconvert --type cache --cachevol fast \\
-    --cachemode writethrough vg/main
+$ lvconvert --type cache --cachemode writethrough \\
+    --cachepool fast vg/main
+
+$ lvconvert --type cache --cachemode writethrough \\
+    --cachevol fast vg/main
 .nf

 .SS dm-cache chunk size
@@ -480,6 +503,39 @@ defines the default cache policy.
 .br
 defines the default cache settings.
+
+.SS dm-cache using metadata profiles
+
+\&
+
+Cache pools allow a variety of options to be set. Many of these settings
+can be specified in lvm.conf or profile settings. You can prepare
+a number of different profiles in the #DEFAULT_SYS_DIR#/profile directory
+and just specify the metadata profile file name when caching an LV
+or creating a cache-pool.
+Check the output of \fBlvmconfig --type default --withcomments\fP
+for a detailed description of all individual cache settings.
+
+.I Example
+.nf
+# cat <<EOF > #DEFAULT_SYS_DIR#/profile/cache_big_chunk.profile
+allocation {
+	cache_pool_metadata_require_separate_pvs=0
+	cache_pool_chunk_size=512
+	cache_metadata_format=2
+	cache_mode="writethrough"
+	cache_policy="smq"
+	cache_settings {
+		smq {
+			migration_threshold=8192
+			random_threshold=4096
+		}
+	}
+}
+EOF
+
+# lvcreate --cache -L10G --metadataprofile cache_big_chunk vg/main /dev/fast_ssd
+# lvcreate --cache -L10G --config 'allocation/cache_pool_chunk_size=512' vg/main /dev/fast_ssd
+.fi
+
 .SS dm-cache spare metadata LV

 \&
@@ -518,16 +574,24 @@ $ lvconvert --type cache --cachevol fast vg/main
 \&
-A single command can be used to create a cache pool and attach that new
-cache pool to a main LV:
+A single command can be used to cache the main LV with automatic
+creation of a cache-pool:

 .nf
-$ lvcreate --type cache --name Name --size Size VG/LV [PV]
+$ lvcreate --cache --size CacheDataSize VG/LV [FastPVs]
+.fi
+
+or the longer variant:
+
+.nf
+$ lvcreate --type cache --size CacheDataSize \\
+	--name NameCachePool VG/LV [FastPVs]
 .fi

 In this command, the specified LV already exists, and is the main LV to be
-cached. The command creates a new cache pool with the given name and
-size, using the optionally specified PV (typically an ssd). Then it
+cached. The command creates a new cache pool with the given size and name
+(or a name selected automatically from the sequence lvolX_cpool),
+using the optionally specified fast PV(s) (typically an ssd). Then it
 attaches the new cache pool to the existing main LV to begin caching.

 (Note: ensure that the specified main LV is a standard LV. If a cache