mirror of git://sourceware.org/git/lvm2.git synced 2025-03-13 00:58:47 +03:00

man: typography

Switch to .TP where it is easy and does not change the layout
(since .HP is marked as deprecated), though .TP is not always a perfect match.

Avoid submitting empty lines to troff: replace them mostly with .P,
and use '.' at the start of a line to preserve the 'visual' presence
of an empty line while editing the man page manually when no extra
vertical space is needed.

Fix some markup.

Add some missing SEE ALSO sections.

Drop some trailing whitespace at ends of lines.

Improve the hyphenation logic so that options are not split across lines.

Use the '.IP number' indent only on the first entry in a row
(the other entries in the row automatically derive this value).

Use automatic enumeration for .SH titles.

Guidelines in use:
https://man7.org/linux/man-pages/man7/groff.7.html
https://www.gnu.org/software/groff/manual/html_node/Man-usage.html
https://www.gnu.org/software/groff/manual/html_node/Lists-in-ms.html
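The rules above can be sketched in a short groff man fragment. This is a hypothetical page written purely for illustration (the requests are real groff_man macros, but the option and variable names are invented or borrowed loosely from the diffs below):

```groff
.\" Sketch only: illustrates the typography conventions, not a real page.
.SH OPTIONS
.
.\" A lone '.' is an empty request: it keeps a visible gap in the source
.\" without submitting an empty line to troff.
.TP
.\" .TP replaces the deprecated .HP for tagged paragraphs.
.BR -v | --verbose
Run in verbose mode.
.P
.\" .P replaces a literal empty line between paragraphs.
A long name can carry an explicit break point \: so that troff
never hyphenates in the middle of an option, e.g.
.BR DMEVENTD_THIN_POOL_\:METADATA .
.\" .nh/.hy switch hyphenation off and back on around a synopsis:
.nh
.B blkdeactivate -u -d retry
.hy
```

Rendering can be checked with, e.g., `groff -man -Tutf8 page.8` or `man ./page.8`.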
Zdenek Kabelac 2021-04-13 15:26:54 +02:00
parent 0004ffa73a
commit 353718785f
22 changed files with 1970 additions and 2061 deletions

View File

@@ -1,19 +1,34 @@
 .TH "BLKDEACTIVATE" "8" "LVM TOOLS #VERSION#" "Red Hat, Inc" "\""
-.SH "NAME"
+.
+.SH NAME
+.
 blkdeactivate \(em utility to deactivate block devices
+.
 .SH SYNOPSIS
+.
+.ad l
+.nh
 .B blkdeactivate
-.RB [ -d \ \fIdm_options\fP ]
+.RB [ -d
+.IR dm_options ]
 .RB [ -e ]
 .RB [ -h ]
-.RB [ -l \ \fIlvm_options\fP ]
-.RB [ -m \ \fImpath_options\fP ]
-.RB [ -r \ \fImdraid_options\fP ]
-.RB [ -o \ \fIvdo_options\fP ]
+.RB [ -l
+.IR lvm_options ]
+.RB [ -m
+.IR mpath_options ]
+.RB [ -r
+.IR mdraid_options ]
+.RB [ -o
+.IR vdo_options ]
 .RB [ -u ]
 .RB [ -v ]
 .RI [ device ]
+.hy
+.ad b
+.
 .SH DESCRIPTION
+.
 The blkdeactivate utility deactivates block devices. For mounted
 block devices, it attempts to unmount it automatically before
 trying to deactivate. The utility currently supports
@@ -22,9 +37,11 @@ software RAID MD devices. LVM volumes are handled directly
 using the \fBlvm\fP(8) command, the rest of device-mapper
 based devices are handled using the \fBdmsetup\fP(8) command.
 MD devices are handled using the \fBmdadm\fP(8) command.
+.
 .SH OPTIONS
+.
 .TP
-.BR -d ", " --dmoptions \ \fIdm_options\fP
+.BR -d | --dmoptions " " \fIdm_options
 Comma separated list of device-mapper specific options.
 Accepted \fBdmsetup\fP(8) options are:
 .RS
@@ -33,17 +50,20 @@ Retry removal several times in case of failure.
 .IP \fIforce\fP
 Force device removal.
 .RE
+.
 .TP
-.BR -e ", " --errors
+.BR -e | --errors
 Show errors reported from tools called by \fBblkdeactivate\fP. Without this
 option, any error messages from these external tools are suppressed and the
 \fBblkdeactivate\fP itself provides only a summary message to indicate
 the device was skipped.
+.
 .TP
-.BR -h ", " --help
+.BR -h | --help
 Display the help text.
+.
 .TP
-.BR -l ", " --lvmoptions \ \fIlvm_options\fP
+.BR -l | --lvmoptions " " \fIlvm_options
 Comma-separated list of LVM specific options:
 .RS
 .IP \fIretry\fP
@@ -53,8 +73,9 @@ Deactivate the whole LVM Volume Group when processing a Logical Volume.
 Deactivating the Volume Group as a whole is quicker than deactivating
 each Logical Volume separately.
 .RE
+.
 .TP
-.BR -m ", " --mpathoptions \ \fImpath_options\fP
+.BR -m | --mpathoptions " " \fImpath_options
 Comma-separated list of device-mapper multipath specific options:
 .RS
 .IP \fIdisablequeueing\fP
@@ -63,68 +84,74 @@ This avoids a situation where blkdeactivate may end up waiting if
 all the paths are unavailable for any underlying device-mapper multipath
 device.
 .RE
+.
 .TP
-.BR -r ", " --mdraidoptions \ \fImdraid_options\fP
+.BR -r | --mdraidoptions " " \fImdraid_options
 Comma-separated list of MD RAID specific options:
 .RS
 .IP \fIwait\fP
 Wait MD device's resync, recovery or reshape action to complete
 before deactivation.
 .RE
+.
 .TP
-.BR -o ", " --vdooptions \ \fIvdo_options\fP
+.BR -o | --vdooptions " " \fIvdo_options
 Comma-separated list of VDO specific options:
 .RS
 .IP \fIconfigfile=file\fP
 Use specified VDO configuration file.
 .RE
+.
 .TP
-.BR -u ", " --umount
+.BR -u | --umount
 Unmount a mounted device before trying to deactivate it.
 Without this option used, a device that is mounted is not deactivated.
+.
 .TP
 .BR -v ", " --verbose
-Run in verbose mode. Use --vv for even more verbose mode.
+Run in verbose mode. Use \fB-vv\fP for even more verbose mode.
+.
 .SH EXAMPLES
 .
 Deactivate all supported block devices found in the system, skipping mounted
 devices.
-.BR
+.br
 #
 .B blkdeactivate
-.BR
+.br
 .P
 Deactivate all supported block devices found in the system, unmounting any
 mounted devices first, if possible.
-.BR
+.br
 #
 .B blkdeactivate -u
-.BR
+.br
 .P
 Deactivate the device /dev/vg/lvol0 together with all its holders, unmounting
 any mounted devices first, if possible.
-.BR
+.br
 #
 .B blkdeactivate -u /dev/vg/lvol0
-.BR
+.br
 .P
 Deactivate all supported block devices found in the system. If the deactivation
 of a device-mapper device fails, retry it. Deactivate the whole
 Volume Group at once when processing an LVM Logical Volume.
-.BR
+.br
 #
 .B blkdeactivate -u -d retry -l wholevg
-.BR
+.br
 .P
 Deactivate all supported block devices found in the system. If the deactivation
 of a device-mapper device fails, retry it and force removal.
-.BR
+.br
 #
 .B blkdeactivate -d force,retry
 .
 .SH SEE ALSO
+.
+.nh
+.ad l
 .BR dmsetup (8),
 .BR lsblk (8),
 .BR lvm (8),

View File

@@ -1,35 +1,45 @@
 .TH CMIRRORD 8 "LVM TOOLS #VERSION#" "Red Hat Inc" \" -*- nroff -*-
+.
 .SH NAME
+.
 cmirrord \(em cluster mirror log daemon
+.
 .SH SYNOPSIS
-\fBcmirrord\fR [\fB-f\fR] [\fB-h\fR]
+.
+.B cmirrord
+.RB [ -f | --foreground ]
+.RB [ -h | --help ]
+.
 .SH DESCRIPTION
+.
 \fBcmirrord\fP is the daemon that tracks mirror log information in a cluster.
 It is specific to device-mapper based mirrors (and by extension, LVM
 cluster mirrors). Cluster mirrors are not possible without this daemon
 running.
+.P
 This daemon relies on the cluster infrastructure provided by the corosync,
 which must be set up and running in order for cmirrord to function.
+.P
 Output is logged via \fBsyslog\fP(3). The \fBSIGUSR1 signal\fP(7) can be
 issued to \fBcmirrord\fP to gather current status information for debugging
 purposes.
+.P
 Once started, \fBcmirrord\fP will run until it is shutdown via \fBSIGINT\fP
 signal. If there are still active cluster mirrors, however, the signal will be
 ignored. Active cluster mirrors should be shutdown before stopping the cluster
 mirror log daemon.
+.
 .SH OPTIONS
-.IP "\fB-f\fR, \fB--foreground\fR" 4
+.
+.TP
+.BR -f | --foreground
 Do not fork and log to the terminal.
-.IP "\fB-h\fR, \fB--help\fR" 4
+.TP
+.BR -h | --help
 Print usage.
+.
 .SH SEE ALSO
+.
 .BR lvmlockd (8),
 .BR lvm (8),
 .BR syslog (3),

View File

@@ -23,70 +23,63 @@ dmeventd is the event monitoring daemon for device-mapper devices.
 Library plugins can register and carry out actions triggered when
 particular events occur.
 .
-.
 .SH OPTIONS
 .
-.HP
-.BR -d
-.br
-Repeat from 1 to 3 times (
-.BR -d ,
+.TP
+.B -d
+Repeat from 1 to 3 times
+.RB ( -d ,
 .BR -dd ,
-.BR -ddd
-) to increase the detail of
+.BR -ddd )
+to increase the detail of
 debug messages sent to syslog.
 Each extra d adds more debugging information.
 .
-.HP
-.BR -f
-.br
+.TP
+.B -f
 Don't fork, run in the foreground.
 .
-.HP
-.BR -h
-.br
+.TP
+.B -h
 Show help information.
 .
-.HP
-.BR -l
-.br
+.TP
+.B -l
 Log through stdout and stderr instead of syslog.
 This option works only with option -f, otherwise it is ignored.
 .
-.HP
-.BR -?
-.br
+.TP
+.B -?
 Show help information on stderr.
 .
-.HP
-.BR -R
-.br
+.TP
+.B -R
 Replace a running dmeventd instance. The running dmeventd must be version
 2.02.77 or newer. The new dmeventd instance will obtain a list of devices and
 events to monitor from the currently running daemon.
 .
-.HP
-.BR -V
-.br
+.TP
+.B -V
 Show version of dmeventd.
 .
 .SH LVM PLUGINS
 .
-.HP
-.BR Mirror
+.TP
+.B Mirror
+Attempts to handle device failure automatically.
 .br
-Attempts to handle device failure automatically. See
+See
 .BR lvm.conf (5).
 .
-.HP
-.BR Raid
+.TP
+.B Raid
+Attempts to handle device failure automatically.
 .br
-Attempts to handle device failure automatically. See
+See
 .BR lvm.conf (5).
 .
-.HP
-.BR Snapshot
-.br
+.TP
+.B Snapshot
 Monitors how full a snapshot is becoming and emits a warning to
 syslog when it exceeds 80% full.
 The warning is repeated when 85%, 90% and 95% of the snapshot is filled.
@@ -95,9 +88,8 @@ See
 Snapshot which runs out of space gets invalid and when it is mounted,
 it gets umounted if possible.
 .
-.HP
-.BR Thin
-.br
+.TP
+.B Thin
 Monitors how full a thin pool data and metadata is becoming and emits
 a warning to syslog when it exceeds 80% full.
 The warning is repeated when more then 85%, 90% and 95%
@@ -123,12 +115,11 @@ Command is executed with environmental variable
 in this environment will not try to interact with dmeventd.
 To see the fullness of a thin pool command may check these
 two environmental variables
-\fBDMEVENTD_THIN_POOL_DATA\fP and \fBDMEVENTD_THIN_POOL_METADATA\fP.
+\fBDMEVENTD_THIN_POOL_DATA\fP and \fBDMEVENTD_THIN_POOL_\:METADATA\fP.
 Command can also read status with tools like \fBlvs\fP(8).
-.HP
-.BR Vdo
-.br
+.
+.TP
+.B Vdo
 Monitors how full a VDO pool data is becoming and emits
 a warning to syslog when it exceeds 80% full.
 The warning is repeated when more then 85%, 90% and 95%

View File

@@ -1,23 +1,23 @@
 .TH DMFILEMAPD 8 "Dec 17 2016" "Linux" "MAINTENANCE COMMANDS"
+.
 .de OPT_FD
-. RB [ file_descriptor ]
+. I file_descriptor
 ..
 .
 .de OPT_GROUP
-. RB [ group_id ]
+. I group_id
 ..
 .
 .de OPT_PATH
-. RB [ abs_path ]
+. I abs_path
 ..
 .
 .de OPT_MODE
-. RB [ mode ]
+. BR inode | path
 ..
 .
 .de OPT_DEBUG
-. RB [ foreground [ verbose ]]
+. RI [ foreground " [" verbose ]]
 ..
 .
 .SH NAME
@@ -29,7 +29,7 @@ dmfilemapd \(em device-mapper filemap monitoring daemon
 .de CMD_DMFILEMAPD
 . ad l
 . nh
-. IR dmfilemapd
+. BR dmfilemapd
 . OPT_FD
 . OPT_GROUP
 . OPT_PATH
@@ -41,15 +41,14 @@ dmfilemapd \(em device-mapper filemap monitoring daemon
 .CMD_DMFILEMAPD
 .
 .PD
-.ad b
 .
 .SH DESCRIPTION
 .
-The dmfilemapd daemon monitors groups of \fIdmstats\fP regions that
+The dmfilemapd daemon monitors groups of \fBdmstats\fP regions that
 correspond to the extents of a file, adding and removing regions to
 reflect the changing state of the file on-disk.
-
-The daemon is normally launched automatically by the \fPdmstats
+.P
+The daemon is normally launched automatically by the \fBdmstats
 create\fP command, but can be run manually, either to create a new
 daemon where one did not previously exist, or to change the options
 previously used, by killing the existing daemon and starting a new
@@ -57,49 +56,48 @@ one.
 .
 .SH OPTIONS
 .
-.HP
-.BR file_descriptor
-.br
+.TP
+.OPT_FD
 Specify the file descriptor number for the file to be monitored.
 The file descriptor must reference a regular file, open for reading,
 in a local file system that supports the FIEMAP ioctl, and that
 returns data describing the physical location of extents.
+.sp
 The process that executes \fBdmfilemapd\fP is responsible for
 opening the file descriptor that is handed to the daemon.
 .
-.HP
-.BR group_id
-.br
+.TP
+.OPT_GROUP
 The \fBdmstats\fP group identifier of the group that \fBdmfilemapd\fP
 should update. The group must exist and it should correspond to
 a set of regions created by a previous filemap operation.
 .
-.HP
-.BR abs_path
-.br
+.TP
+.OPT_PATH
 The absolute path to the file being monitored, at the time that
-it was opened. The use of \fBpath\fP by the daemon differs,
+it was opened. The use of \fIabs_path\fP by the daemon differs,
 depending on the filemap following mode in use; see \fBMODES\fP
-and the \fBmode\fP option for more information.
-.br
-.HP
-.BR mode
-.br
-The filemap monitoring mode the daemon should use: either "inode"
-(\fBDM_FILEMAP_FOLLOW_INODE\fP), or "path"
+and the \fImode\fP option for more information.
+.
+.TP
+.OPT_MODE
+The filemap monitoring mode the daemon should use.
+Use either
+.B inode
+(\fBDM_FILEMAP_FOLLOW_INODE\fP), or
+.B path
 (\fBDM_FILEMAP_FOLLOW_PATH\fP), to enable follow-inode or
 follow-path mode respectively.
 .
-.HP
-.BR [foreground]
+.TP
+.RI [ foreground ]
 .br
 If set to 1, disable forking and allow the daemon to run in the
 foreground.
 .
-.HP
-.BR [verbose]
-.br
+.TP
+.RI [ verbose ]
 Control daemon logging. If set to zero, the daemon will close all
 stdio streams and run silently. If \fBverbose\fP is a number
 between 1 and 3, stdio will be retained and the daemon will log
@@ -112,7 +110,7 @@ The file map monitoring daemon can monitor files in two distinct
 ways: the mode affects the behaviour of the daemon when a file
 under monitoring is renamed or unlinked, and the conditions which
 cause the daemon to terminate.
-
+.P
 In both modes, the daemon will always shut down when the group
 being monitored is deleted.
 .P
@@ -123,7 +121,7 @@ daemon started. The file descriptor referencing the file is kept
 open at all times, and the daemon will exit when it detects that
 the file has been unlinked and it is the last holder of a reference
 to the file.
-
+.P
 This mode is useful if the file is expected to be renamed, or moved
 within the file system, while it is being monitored.
 .P
@@ -134,7 +132,7 @@ line. The file descriptor referencing the file is re-opened on each
 iteration of the daemon, and the daemon will exit if no file exists
 at this location (a tolerance is allowed so that a brief delay
 between removal and replacement is permitted).
-
+.P
 This mode is useful if the file is updated by unlinking the original
 and placing a new file at the same path.
 .
@@ -146,14 +144,14 @@ daemon can only react to new allocations once they have been written,
 there are inevitably some IO events that cannot be counted when a
 file is growing, particularly if the file is being extended by a
 single thread writing beyond EOF (for example, the \fBdd\fP program).
-
+.P
 There is a further loss of events in that there is currently no way
 to atomically resize a \fBdmstats\fP region and preserve its current
 counter values. This affects files when they grow by extending the
 final extent, rather than allocating a new extent: any events that
 had accumulated in the region between any prior operation and the
 resize are lost.
-
+.P
 File mapping is currently most effective in cases where the majority
 of IO does not trigger extent allocation. Future updates may address
 these limitations when kernel support is available.
@@ -206,8 +204,7 @@ Bryn M. Reeves <bmr@redhat.com>
 .SH SEE ALSO
 .
 .BR dmstats (8)
-.P
+.
 LVM2 resource page: https://www.sourceware.org/lvm2/
 .br
 Device-mapper resource page: http://sources.redhat.com/dm/
-.br

View File

@@ -24,13 +24,13 @@ dmsetup \(em low level logical volume management
 . nh
 . BR create
 . IR device_name
+. RB [ -u | --uuid
+. IR uuid ]
+. RB [ --addnodeoncreate |\: --addnodeonresume ]
 . RB [ -n | --notable |\: --table
 . IR table |\: table_file ]
 . RB [ --readahead
 . RB [ + ] \fIsectors |\: auto | none ]
-. RB [ -u | --uuid
-. IR uuid ]
-. RB [ --addnodeoncreate |\: --addnodeonresume ]
 . hy
 . ad b
 ..
@@ -86,12 +86,12 @@ dmsetup \(em low level logical volume management
 . IR count ]
 . RB [ --interval
 . IR seconds ]
+. RB [ --nameprefixes ]
 . RB [ --noheadings ]
 . RB [ -o
 . IR fields ]
 . RB [ -O | --sort
 . IR sort_fields ]
-. RB [ --nameprefixes ]
 . RB [ --separator
 . IR separator ]
 . RI [ device_name ]
@@ -120,11 +120,11 @@ dmsetup \(em low level logical volume management
 . BR ls
 . RB [ --target
 . IR target_type ]
+. RB [ -o
+. IR options ]
 . RB [ --exec
 . IR command ]
 . RB [ --tree ]
-. RB [ -o
-. IR options ]
 . hy
 . ad b
 ..
@@ -391,10 +391,10 @@ dmsetup \(em low level logical volume management
 .CMD_WIPE_TABLE
 .PD
 .P
-.HP
 .PD 0
+.TP
 .B devmap_name \fImajor minor
-.HP
+.TP
 .B devmap_name \fImajor:minor
 .PD
 .ad b
@@ -404,10 +404,10 @@ dmsetup \(em low level logical volume management
 dmsetup manages logical devices that use the device-mapper driver.
 Devices are created by loading a table that specifies a target for
 each sector (512 bytes) in the logical device.
-
+.P
 The first argument to dmsetup is a command.
 The second argument is the logical device name or uuid.
-
+.P
 Invoking the dmsetup tool as \fBdevmap_name\fP
 (which is not normally distributed and is supported
 only for historical reasons) is equivalent to
@@ -417,66 +417,53 @@ only for historical reasons) is equivalent to
 .
 .SH OPTIONS
 .
-.HP
-.BR --addnodeoncreate
-.br
+.TP
+.B --addnodeoncreate
 Ensure \fI/dev/mapper\fP node exists after \fBdmsetup create\fP.
 .
-.HP
-.BR --addnodeonresume
-.br
+.TP
+.B --addnodeonresume
 Ensure \fI/dev/mapper\fP node exists after \fBdmsetup resume\fP (default with udev).
 .
-.HP
-.BR --checks
-.br
+.TP
+.B --checks
 Perform additional checks on the operations requested and report
 potential problems. Useful when debugging scripts.
 In some cases these checks may slow down operations noticeably.
 .
-.HP
+.TP
 .BR -c | -C | --columns
-.br
 Display output in columns rather than as Field: Value lines.
 .
-.HP
-.BR --count
-.IR count
-.br
+.TP
+.B --count \fIcount
 Specify the number of times to repeat a report. Set this to zero
 continue until interrupted. The default interval is one second.
 .
-.HP
+.TP
 .BR -f | --force
-.br
 Try harder to complete operation.
 .
-.HP
+.TP
 .BR -h | --help
-.br
 Outputs a summary of the commands available, optionally including
 the list of report fields (synonym with \fBhelp\fP command).
 .
-.HP
-.BR --inactive
-.br
+.TP
+.B --inactive
 When returning any table information from the kernel report on the
 inactive table instead of the live table.
 Requires kernel driver version 4.16.0 or above.
 .
-.HP
-.BR --interval
-.IR seconds
-.br
+.TP
+.B --interval \fIseconds
 Specify the interval in seconds between successive iterations for
 repeating reports. If \fB--interval\fP is specified but \fB--count\fP
 is not, reports will continue to repeat until interrupted.
 The default interval is one second.
 .
-.HP
-.BR --manglename
-.BR auto | hex | none
-.br
+.TP
+.BR --manglename " " auto | hex | none
 Mangle any character not on a whitelist using mangling_mode when
 processing device-mapper device names and UUIDs. The names and UUIDs
 are mangled on input and unmangled on output where the mangling mode
@@ -493,26 +480,20 @@ Mangling mode could be also set through
 \fBDM_DEFAULT_NAME_MANGLING_MODE\fP
 environment variable.
 .
-.HP
-.BR -j | --major
-.IR major
-.br
+.TP
+.BR -j | --major " " \fImajor
 Specify the major number.
 .
-.HP
-.BR -m | --minor
-.IR minor
-.br
+.TP
+.BR -m | --minor " " \fIminor
 Specify the minor number.
 .
-.HP
+.TP
 .BR -n | --notable
-.br
 When creating a device, don't load any table.
 .
-.HP
-.BR --nameprefixes
-.br
+.TP
+.B --nameprefixes
 Add a "DM_" prefix plus the field name to the output. Useful with
 \fB--noheadings\fP to produce a list of
 field=value pairs that can be used to set environment variables
@@ -520,45 +501,37 @@ field=value pairs that can be used to set environment variables
 .BR udev (7)
 rules).
 .
-.HP
-.BR --noheadings
+.TP
+.B --noheadings
 Suppress the headings line when using columnar output.
 .
-.HP
-.BR --noflush
+.TP
+.B --noflush
 Do not flush outstanding I/O when suspending a device, or do not
 commit thin-pool metadata when obtaining thin-pool status.
 .
-.HP
-.BR --nolockfs
-.br
+.TP
+.B --nolockfs
 Do not attempt to synchronize filesystem eg, when suspending a device.
 .
-.HP
-.BR --noopencount
-.br
+.TP
+.B --noopencount
 Tell the kernel not to supply the open reference count for the device.
 .
-.HP
-.BR --noudevrules
-.br
+.TP
+.B --noudevrules
 Do not allow udev to manage nodes for devices in device-mapper directory.
 .
-.HP
-.BR --noudevsync
-.br
+.TP
+.B --noudevsync
 Do not synchronise with udev when creating, renaming or removing devices.
 .
-.HP
-.BR -o | --options
-.IR options
-.br
+.TP
+.BR -o | --options " " \fIoptions
 Specify which fields to display.
 .
-.HP
-.BR --readahead
-.RB [ + ] \fIsectors | auto | none
-.br
+.TP
+.BR --readahead \ [ + ] \fIsectors | auto | none
 Specify read ahead size in units of sectors.
 The default value is \fBauto\fP which allows the kernel to choose
 a suitable value automatically. The \fB+\fP prefix lets you
@ -566,15 +539,12 @@ specify a minimum value which will not be used if it is
smaller than the value chosen by the kernel. smaller than the value chosen by the kernel.
The value \fBnone\fP is equivalent to specifying zero. The value \fBnone\fP is equivalent to specifying zero.
. .
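The minimum-value behaviour of a \fB+\fP-prefixed \fB--readahead\fP can be sketched as follows (an illustrative Python helper, not dmsetup's actual implementation; `kernel_choice` stands in for the value the kernel would pick with \fBauto\fP):

```python
def effective_readahead(requested: str, kernel_choice: int) -> int:
    """Sketch of --readahead value resolution, in sectors."""
    if requested == "auto":
        return kernel_choice                 # kernel chooses automatically
    if requested == "none":
        return 0                             # 'none' is equivalent to zero
    if requested.startswith("+"):
        minimum = int(requested[1:])
        # the minimum is not used if it is smaller than the kernel's choice
        return max(minimum, kernel_choice)
    return int(requested)                    # absolute value in sectors
```
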
.HP .TP
.BR -r | --readonly .BR -r | --readonly
.br
Set the table being loaded read-only. Set the table being loaded read-only.
. .
.HP .TP
.BR -S | --select .BR -S | --select " " \fIselection
.IR selection
.br
Process only items that match \fIselection\fP criteria. If the command is Process only items that match \fIselection\fP criteria. If the command is
producing report output, adding the "selected" column (\fB-o producing report output, adding the "selected" column (\fB-o
selected\fP) displays all rows and shows 1 if the row matches the selected\fP) displays all rows and shows 1 if the row matches the
@@ -584,49 +554,38 @@ comparison operators. As a quick help and to see full list of column names that
can be used in selection and the set of supported selection operators, check can be used in selection and the set of supported selection operators, check
the output of \fBdmsetup\ info\ -c\ -S\ help\fP command. the output of \fBdmsetup\ info\ -c\ -S\ help\fP command.
. .
.HP .TP
.BR --table .B --table \fItable
.IR table
.br
Specify a one-line table directly on the command line. Specify a one-line table directly on the command line.
See below for more information on the table format. See below for more information on the table format.
. .
.HP .TP
.BR --udevcookie .B --udevcookie \fIcookie
.IR cookie
.br
Use cookie for udev synchronisation. Use cookie for udev synchronisation.
Note: Same cookie should be used for same type of operations i.e. creation of Note: Same cookie should be used for same type of operations i.e. creation of
multiple different devices. It's not adviced to combine different multiple different devices. It's not adviced to combine different
operations on the single device. operations on the single device.
. .
.HP .TP
.BR -u | --uuid .BR -u | --uuid " " \fIuuid
.br
Specify the \fIuuid\fP. Specify the \fIuuid\fP.
. .
.HP .TP
.BR -y | --yes .BR -y | --yes
.br
Answer yes to all prompts automatically. Answer yes to all prompts automatically.
. .
.HP .TP
.BR -v | --verbose .BR -v | --verbose " [" -v | --verbose ]
.RB [ -v | --verbose ]
.br
Produce additional output. Produce additional output.
. .
.HP .TP
.BR --verifyudev .B --verifyudev
.br
If udev synchronisation is enabled, verify that udev operations get performed If udev synchronisation is enabled, verify that udev operations get performed
correctly and try to fix up the device nodes afterwards if not. correctly and try to fix up the device nodes afterwards if not.
. .
.HP .TP
.BR --version .B --version
.br
Display the library and kernel driver version. Display the library and kernel driver version.
.br
. .
.SH COMMANDS .SH COMMANDS
. .
@@ -656,7 +615,7 @@ Flags defaults to read-write (rw) or may be read-only (ro).
Uuid, minor number and flags are optional so those fields may be empty. Uuid, minor number and flags are optional so those fields may be empty.
A semi-colon separates specifications of different devices. A semi-colon separates specifications of different devices.
Use a backslash to escape the following character, for example a comma or semi-colon in a name or table. See also CONCISE FORMAT below. Use a backslash to escape the following character, for example a comma or semi-colon in a name or table. See also CONCISE FORMAT below.
. .
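The escaping rule above (a backslash protects the following character from being treated as a field or device separator) can be sketched with a minimal parser for the concise format — illustrative Python only, not part of dmsetup:

```python
def split_concise(spec: str):
    """Split a concise device spec into per-device field lists.

    Fields are comma-separated, devices are separated by semi-colons,
    and a backslash escapes the following character.
    """
    devices, fields, cur = [], [], []
    it = iter(spec)
    for ch in it:
        if ch == "\\":                       # escape: keep next char literally
            cur.append(next(it, ""))
        elif ch == ",":                      # field separator
            fields.append("".join(cur)); cur = []
        elif ch == ";":                      # device separator
            fields.append("".join(cur)); cur = []
            devices.append(fields); fields = []
        else:
            cur.append(ch)
    fields.append("".join(cur))
    devices.append(fields)
    return devices
```

For example, `split_concise("dev1,uuid1,0,rw,0 100 linear /dev/sda 0;dev2,,,,0 100 error")` yields two devices, the second with empty uuid, minor and flags fields.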
.HP .HP
.CMD_DEPS .CMD_DEPS
.br .br
@@ -701,11 +660,11 @@ Fields are comma-separated and chosen from the following list:
.BR events , .BR events ,
.BR uuid . .BR uuid .
Attributes are: Attributes are:
.RI ( L )ive, .RB ( L )ive,
.RI ( I )nactive, .RB ( I )nactive,
.RI ( s )uspended, .RB ( s )uspended,
.RI ( r )ead-only, .RB ( r )ead-only,
.RI read-( w )rite. .RB read-( w )rite.
Precede the list with '\fB+\fP' to append Precede the list with '\fB+\fP' to append
to the default selection of columns instead of replacing it. to the default selection of columns instead of replacing it.
Precede any sort field with '\fB-\fP' for a reverse sort on that column. Precede any sort field with '\fB-\fP' for a reverse sort on that column.
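The '\fB+\fP' append and '\fB-\fP' reverse-sort prefixes described above can be sketched as follows (hypothetical helpers, not dmsetup code):

```python
def resolve_columns(spec: str, default: list) -> list:
    """A leading '+' appends the listed fields to the default
    column selection instead of replacing it."""
    if spec.startswith("+"):
        return default + spec[1:].split(",")
    return spec.split(",")

def parse_sort(spec: str):
    """Return (field, reverse) pairs; a leading '-' on a field
    requests a reverse sort on that column."""
    return [(f.lstrip("-"), f.startswith("-")) for f in spec.split(",")]
```
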
@@ -838,7 +797,7 @@ Outputs status information for each of the device's targets.
With \fB--target\fP, only information relating to the specified target type, With \fB--target\fP, only information relating to the specified target type,
if any, is displayed. With \fB--noflush\fP, the thin target (from version 1.3.0) if any, is displayed. With \fB--noflush\fP, the thin target (from version 1.3.0)
doesn't commit any outstanding changes to disk before reporting its statistics. doesn't commit any outstanding changes to disk before reporting its statistics.
.
.HP .HP
.CMD_SUSPEND .CMD_SUSPEND
.br .br
@@ -964,14 +923,13 @@ Creates a striped area.
e.g. striped 2 32 /dev/hda1 0 /dev/hdb1 0 e.g. striped 2 32 /dev/hda1 0 /dev/hdb1 0
will map the first chunk (16k) as follows: will map the first chunk (16k) as follows:
.RS .RS
.RS .IP
LV chunk 1 -> hda1, chunk 1 LV chunk 1 -> hda1, chunk 1
LV chunk 2 -> hdb1, chunk 1 LV chunk 2 -> hdb1, chunk 1
LV chunk 3 -> hda1, chunk 2 LV chunk 3 -> hda1, chunk 2
LV chunk 4 -> hdb1, chunk 2 LV chunk 4 -> hdb1, chunk 2
etc. etc.
.RE .RE
.RE
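The round-robin chunk layout shown above can be computed directly — a small illustrative Python function, not part of the striped target itself:

```python
def striped_target(lv_chunk: int, devices: list):
    """Map a 1-based LV chunk number onto (device, device chunk)
    for a striped target spread across 'devices'."""
    n = len(devices)
    return devices[(lv_chunk - 1) % n], (lv_chunk - 1) // n + 1
```

With two stripes over hda1 and hdb1 this reproduces the table above: chunk 1 lands on hda1 chunk 1, chunk 2 on hdb1 chunk 1, chunk 3 on hda1 chunk 2, and so on.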
.TP .TP
.B error .B error
Errors any I/O that goes to this area. Useful for testing or Errors any I/O that goes to this area. Useful for testing or
View File
@@ -1,12 +1,12 @@
.TH DMSTATS 8 "Jun 23 2016" "Linux" "MAINTENANCE COMMANDS" .TH DMSTATS 8 "Jun 23 2016" "Linux" "MAINTENANCE COMMANDS"
.
.de OPT_PROGRAMS .de OPT_PROGRAMS
. RB \%[ --allprograms | --programid . RB [ --allprograms | --programid
. IR id ] . IR id ]
.. ..
. .
.de OPT_REGIONS .de OPT_REGIONS
. RB \%[ --allregions | --regionid . RB [ --allregions | --regionid
. IR id ] . IR id ]
.. ..
.de OPT_OBJECTS .de OPT_OBJECTS
@ -55,15 +55,17 @@ dmstats \(em device-mapper statistics management
.B dmstats .B dmstats
.de CMD_COMMAND .de CMD_COMMAND
. ad l . ad l
. nh
. IR command . IR command
. IR device_name " |" . IR device_name \ |
. BR --major . BR --major
. IR major . IR major
. BR --minor . BR --minor
. IR minor " |" . IR minor \ |
. BR -u | --uuid . BR -u | --uuid
. IR uuid . IR uuid
. RB \%[ -v | --verbose] . RB [ -v | --verbose ]
. hy
. ad b . ad b
.. ..
.CMD_COMMAND .CMD_COMMAND
@@ -72,10 +74,12 @@ dmstats \(em device-mapper statistics management
.B dmstats .B dmstats
.de CMD_CLEAR .de CMD_CLEAR
. ad l . ad l
. nh
. BR clear . BR clear
. IR device_name . IR device_name
. OPT_PROGRAMS . OPT_PROGRAMS
. OPT_REGIONS . OPT_REGIONS
. hy
. ad b . ad b
.. ..
.CMD_CLEAR .CMD_CLEAR
@@ -84,13 +88,14 @@ dmstats \(em device-mapper statistics management
.B dmstats .B dmstats
.de CMD_CREATE .de CMD_CREATE
. ad l . ad l
. nh
. BR create . BR create
. IR device_name... | file_path... | \fB--alldevices . IR device_name... | file_path... | \fB--alldevices
. RB [ --areas . RB [ --areas
. IR nr_areas | \fB--areasize . IR nr_areas | \fB--areasize
. IR area_size ] . IR area_size ]
. RB [ --bounds . RB [ --bounds
. IR \%histogram_boundaries ] . IR histogram_boundaries ]
. RB [ --filemap ] . RB [ --filemap ]
. RB [ --follow . RB [ --follow
. IR follow_mode ] . IR follow_mode ]
@@ -102,10 +107,11 @@ dmstats \(em device-mapper statistics management
. IR start_sector . IR start_sector
. BR --length . BR --length
. IR length | \fB--segments ] . IR length | \fB--segments ]
. RB \%[ --userdata . RB [ --userdata
. IR user_data ] . IR user_data ]
. RB [ --programid . RB [ --programid
. IR id ] . IR id ]
. hy
. ad b . ad b
.. ..
.CMD_CREATE .CMD_CREATE
@@ -114,10 +120,12 @@ dmstats \(em device-mapper statistics management
.B dmstats .B dmstats
.de CMD_DELETE .de CMD_DELETE
. ad l . ad l
. nh
. BR delete . BR delete
. IR device_name | \fB--alldevices . IR device_name | \fB--alldevices
. OPT_PROGRAMS . OPT_PROGRAMS
. OPT_REGIONS . OPT_REGIONS
. hy
. ad b . ad b
.. ..
.CMD_DELETE .CMD_DELETE
@@ -126,12 +134,14 @@ dmstats \(em device-mapper statistics management
.B dmstats .B dmstats
.de CMD_GROUP .de CMD_GROUP
. ad l . ad l
. nh
. BR group . BR group
. RI [ device_name | \fB--alldevices ] . RI [ device_name | \fB--alldevices ]
. RB [ --alias . RB [ --alias
. IR name ] . IR name ]
. RB [ --regions . RB [ --regions
. IR regions ] . IR regions ]
. hy
. ad b . ad b
.. ..
.CMD_GROUP .CMD_GROUP
@@ -149,6 +159,7 @@ dmstats \(em device-mapper statistics management
.B dmstats .B dmstats
.de CMD_LIST .de CMD_LIST
. ad l . ad l
. nh
. BR list . BR list
. RI [ device_name ] . RI [ device_name ]
. RB [ --histogram ] . RB [ --histogram ]
@@ -156,9 +167,10 @@ dmstats \(em device-mapper statistics management
. RB [ --units . RB [ --units
. IR units ] . IR units ]
. OPT_OBJECTS . OPT_OBJECTS
. RB \%[ --nosuffix ] . RB [ --nosuffix ]
. RB [ --notimesuffix ] . RB [ --notimesuffix ]
. RB \%[ -v | --verbose] . RB [ -v | --verbose ]
. hy
. ad b . ad b
.. ..
.CMD_LIST .CMD_LIST
@@ -167,11 +179,13 @@ dmstats \(em device-mapper statistics management
.B dmstats .B dmstats
.de CMD_PRINT .de CMD_PRINT
. ad l . ad l
. nh
. BR print . BR print
. RI [ device_name ] . RI [ device_name ]
. RB [ --clear ] . RB [ --clear ]
. OPT_PROGRAMS . OPT_PROGRAMS
. OPT_REGIONS . OPT_REGIONS
. hy
. ad b . ad b
.. ..
.CMD_PRINT .CMD_PRINT
@@ -180,6 +194,7 @@ dmstats \(em device-mapper statistics management
.B dmstats .B dmstats
.de CMD_REPORT .de CMD_REPORT
. ad l . ad l
. nh
. BR report . BR report
. RI [ device_name ] . RI [ device_name ]
. RB [ --interval . RB [ --interval
@@ -199,7 +214,8 @@ dmstats \(em device-mapper statistics management
. RB [ --units . RB [ --units
. IR units ] . IR units ]
. RB [ --nosuffix ] . RB [ --nosuffix ]
. RB \%[ --notimesuffix ] . RB [ --notimesuffix ]
. hy
. ad b . ad b
.. ..
.CMD_REPORT .CMD_REPORT
@@ -207,10 +223,12 @@ dmstats \(em device-mapper statistics management
.B dmstats .B dmstats
.de CMD_UNGROUP .de CMD_UNGROUP
. ad l . ad l
. nh
. BR ungroup . BR ungroup
. RI [ device_name | \fB--alldevices ] . RI [ device_name | \fB--alldevices ]
. RB [ --groupid . RB [ --groupid
. IR id ] . IR id ]
. hy
. ad b . ad b
.. ..
.CMD_UNGROUP .CMD_UNGROUP
@@ -218,6 +236,7 @@ dmstats \(em device-mapper statistics management
.B dmstats .B dmstats
.de CMD_UPDATE_FILEMAP .de CMD_UPDATE_FILEMAP
. ad l . ad l
. nh
. BR update_filemap . BR update_filemap
. IR file_path . IR file_path
. RB [ --groupid . RB [ --groupid
@@ -225,6 +244,7 @@ dmstats \(em device-mapper statistics management
. RB [ --follow . RB [ --follow
. IR follow_mode ] . IR follow_mode ]
. OPT_FOREGROUND . OPT_FOREGROUND
. hy
. ad b . ad b
.. ..
.CMD_UPDATE_FILEMAP .CMD_UPDATE_FILEMAP
@@ -237,334 +257,272 @@ dmstats \(em device-mapper statistics management
The dmstats program manages IO statistics regions for devices that use The dmstats program manages IO statistics regions for devices that use
the device-mapper driver. Statistics regions may be created, deleted, the device-mapper driver. Statistics regions may be created, deleted,
listed and reported on using the tool. listed and reported on using the tool.
.P
The first argument to dmstats is a \fIcommand\fP. The first argument to dmstats is a \fIcommand\fP.
.P
The second argument is the \fIdevice name\fP, The second argument is the \fIdevice name\fP,
\fIuuid\fP or \fImajor\fP and \fIminor\fP numbers. \fIuuid\fP or \fImajor\fP and \fIminor\fP numbers.
.P
Further options permit the selection of regions, output format Further options permit the selection of regions, output format
control, and reporting behaviour. control, and reporting behaviour.
.P
When no device argument is given dmstats will by default operate on all When no device argument is given dmstats will by default operate on all
device-mapper devices present. The \fBcreate\fP and \fBdelete\fP device-mapper devices present. The \fBcreate\fP and \fBdelete\fP
commands require the use of \fB--alldevices\fP when used in this way. commands require the use of \fB--alldevices\fP when used in this way.
. .
.SH OPTIONS .SH OPTIONS
. .
.HP .TP
.BR --alias .B --alias \fIname
.IR name
.br
Specify an alias name for a group. Specify an alias name for a group.
. .
.HP .TP
.BR --alldevices .B --alldevices
.br
If no device arguments are given allow operation on all devices when If no device arguments are given allow operation on all devices when
creating or deleting regions. creating or deleting regions.
. .
.HP .TP
.BR --allprograms .B --allprograms
.br
Include regions from all program IDs for list and report operations. Include regions from all program IDs for list and report operations.
.br .
.HP .TP
.BR --allregions .B --allregions
.br
Include all present regions for commands that normally accept a single Include all present regions for commands that normally accept a single
region identifier. region identifier.
. .
.HP .TP
.BR --area .B --area
.br
When performing a list or report, include objects of type area in the When performing a list or report, include objects of type area in the
results. results.
. .
.HP .TP
.BR --areas .B --areas \fInr_areas
.IR nr_areas
.br
Specify the number of statistics areas to create within a new region. Specify the number of statistics areas to create within a new region.
. .
.HP .TP
.BR --areasize .B --areasize \fIarea_size\fR[\c
.IR area_size \c
.RB [ \c
.UNITS .UNITS
.br
Specify the size of areas into which a new region should be divided. An Specify the size of areas into which a new region should be divided. An
optional suffix selects units of: optional suffix selects units of:
.HELP_UNITS .HELP_UNITS
. .
.HP .TP
.BR --clear .B --clear
.br
When printing statistics counters, also atomically reset them to zero. When printing statistics counters, also atomically reset them to zero.
. .
.HP .TP
.BR --count .B --count \fIcount
.IR count
.br
Specify the iteration count for repeating reports. If the count Specify the iteration count for repeating reports. If the count
argument is zero reports will continue to repeat until interrupted. argument is zero reports will continue to repeat until interrupted.
. .
.HP .TP
.BR --group .B --group
.br
When performing a list or report, include objects of type group in the When performing a list or report, include objects of type group in the
results. results.
. .
.HP .TP
.BR --filemap .B --filemap
.br
Instead of creating regions on a device as specified by command line Instead of creating regions on a device as specified by command line
options, open the file found at each \fBfile_path\fP argument, and options, open the file found at each \fBfile_path\fP argument, and
create regions corresponding to the locations of the on-disk extents create regions corresponding to the locations of the on-disk extents
allocated to the file(s). allocated to the file(s).
. .
.HP .TP
.BR --nomonitor .B --nomonitor
.br
Disable the \fBdmfilemapd\fP daemon when creating new file mapped Disable the \fBdmfilemapd\fP daemon when creating new file mapped
groups. Normally the device-mapper filemap monitoring daemon, groups. Normally the device-mapper filemap monitoring daemon,
\fBdmfilemapd\fP, is started for each file mapped group to update the \fBdmfilemapd\fP, is started for each file mapped group to update the
set of regions as the file changes on-disk: use of this option set of regions as the file changes on-disk: use of this option
disables this behaviour. disables this behaviour.
.P
Regions in the group may still be updated with the Regions in the group may still be updated with the
\fBupdate_filemap\fP command, or by starting the daemon manually. \fBupdate_filemap\fP command, or by starting the daemon manually.
. .
.HP .TP
.BR --follow .B --follow \fIfollow_mode
.IR follow_mode
.br
Specify the \fBdmfilemapd\fP file following mode. The file map Specify the \fBdmfilemapd\fP file following mode. The file map
monitoring daemon can monitor files in two distinct ways: the mode monitoring daemon can monitor files in two distinct ways: the mode
affects the behaviour of the daemon when a file under monitoring is affects the behaviour of the daemon when a file under monitoring is
renamed or unlinked, and the conditions which cause the daemon to renamed or unlinked, and the conditions which cause the daemon to
terminate. terminate.
.P
The \fBfollow_mode\fP argument is either "inode", for follow-inode The \fBfollow_mode\fP argument is either "inode", for follow-inode
mode, or "path", for follow-path. mode, or "path", for follow-path.
.P
If follow-inode mode is used, the daemon will hold the file open, and If follow-inode mode is used, the daemon will hold the file open, and
continue to update regions from the same file descriptor. This means continue to update regions from the same file descriptor. This means
that the mapping will follow rename, move (within the same file that the mapping will follow rename, move (within the same file
system), and unlink operations. This mode is useful if the file is system), and unlink operations. This mode is useful if the file is
expected to be moved, renamed, or unlinked while it is being expected to be moved, renamed, or unlinked while it is being
monitored. monitored.
.P
In follow-inode mode, the daemon will exit once it detects that the In follow-inode mode, the daemon will exit once it detects that the
file has been unlinked and it is the last holder of a reference to it. file has been unlinked and it is the last holder of a reference to it.
.P
If follow-path is used, the daemon will re-open the provided path on If follow-path is used, the daemon will re-open the provided path on
each monitoring iteration. This means that the group will be updated each monitoring iteration. This means that the group will be updated
to reflect a new file being moved to the same path as the original to reflect a new file being moved to the same path as the original
file. This mode is useful for files that are expected to be updated file. This mode is useful for files that are expected to be updated
via unlink and rename. via unlink and rename.
.P
In follow-path mode, the daemon will exit if the file is removed and In follow-path mode, the daemon will exit if the file is removed and
not replaced within a brief tolerance interval. not replaced within a brief tolerance interval.
.P
In either mode, the daemon exits automatically if the monitored group In either mode, the daemon exits automatically if the monitored group
is removed. is removed.
. .
.HP .TP
.BR --foreground .B --foreground
.br
Specify that the \fBdmfilemapd\fP daemon should run in the foreground. Specify that the \fBdmfilemapd\fP daemon should run in the foreground.
The daemon will not fork into the background, and will replace the The daemon will not fork into the background, and will replace the
\fBdmstats\fP command that started it. \fBdmstats\fP command that started it.
. .
.HP .TP
.BR --groupid .B --groupid \fIid
.IR id
.br
Specify the group to operate on. Specify the group to operate on.
. .
.HP .TP
.BR --bounds .B --bounds \fIhistogram_boundaries\c
.IR histogram_boundaries \c
.RB [ ns | us | ms | s ] .RB [ ns | us | ms | s ]
.br
Specify the boundaries of a latency histogram to be tracked for the Specify the boundaries of a latency histogram to be tracked for the
region as a comma separated list of latency values. Latency values are region as a comma separated list of latency values. Latency values are
given in nanoseconds. An optional unit suffix of given in nanoseconds. An optional unit suffix of
.BR ns , .BR ns , us , ms ,
.BR us ,
.BR ms ,
or \fBs\fP may be given after each value to specify units of or \fBs\fP may be given after each value to specify units of
nanoseconds, microseconds, milliseconds or seconds respectively. nanoseconds, microseconds, milliseconds or seconds respectively.
. .
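Converting a \fB--bounds\fP list into the nanosecond values actually tracked can be sketched as follows (illustrative Python, not part of dmstats):

```python
def bounds_to_ns(bounds: str) -> list:
    """Convert a comma-separated --bounds list such as '10ms,100ms,1s'
    to nanoseconds; bare numbers are already in nanoseconds."""
    scale = {"ns": 1, "us": 1_000, "ms": 1_000_000, "s": 1_000_000_000}
    out = []
    for tok in bounds.split(","):
        for suffix in ("ns", "us", "ms", "s"):   # check 's' last
            if tok.endswith(suffix):
                out.append(int(tok[: -len(suffix)]) * scale[suffix])
                break
        else:
            out.append(int(tok))                 # no suffix: nanoseconds
    return out
```
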
.HP .TP
.BR --histogram .B --histogram
.br
When used with the \fBreport\fP and \fBlist\fP commands select default When used with the \fBreport\fP and \fBlist\fP commands select default
fields that emphasize latency histogram data. fields that emphasize latency histogram data.
. .
.HP .TP
.BR --interval .B --interval \fIseconds
.IR seconds
.br
Specify the interval in seconds between successive iterations for Specify the interval in seconds between successive iterations for
repeating reports. If \fB--interval\fP is specified but repeating reports. If \fB--interval\fP is specified but
\fB--count\fP is not, \fB--count\fP is not,
reports will continue to repeat until interrupted. reports will continue to repeat until interrupted.
. .
.HP .TP
.BR --length .B --length \fIlength\fR[\c
.IR length \c
.RB [ \c
.UNITS .UNITS
.br
Specify the length of a new statistics region in sectors. An optional Specify the length of a new statistics region in sectors. An optional
suffix selects units of: suffix selects units of:
.HELP_UNITS .HELP_UNITS
. .
.HP .TP
.BR -j | --major .BR -j | --major " " \fImajor
.IR major
.br
Specify the major number. Specify the major number.
. .
.HP .TP
.BR -m | --minor .BR -m | --minor " " \fIminor
.IR minor
.br
Specify the minor number. Specify the minor number.
. .
.HP .TP
.BR --nogroup .B --nogroup
.br
When creating regions mapping the extents of a file in the file When creating regions mapping the extents of a file in the file
system, do not create a group or set an alias. system, do not create a group or set an alias.
. .
.HP .TP
.BR --nosuffix .B --nosuffix
.br
Suppress the suffix on output sizes. Use with \fB--units\fP Suppress the suffix on output sizes. Use with \fB--units\fP
(except h and H) if processing the output. (except h and H) if processing the output.
. .
.HP .TP
.BR --notimesuffix .B --notimesuffix
.br
Suppress the suffix on output time values. Histogram boundary values Suppress the suffix on output time values. Histogram boundary values
will be reported in units of nanoseconds. will be reported in units of nanoseconds.
. .
.HP .TP
.BR -o | --options .BR -o | --options
.br
Specify which report fields to display. Specify which report fields to display.
. .
.HP .TP
.BR -O | --sort .BR -O | --sort " " \fIsort_fields
.IR sort_fields
.br
Sort output according to the list of fields given. Precede any Sort output according to the list of fields given. Precede any
sort field with '\fB-\fP' for a reverse sort on that column. sort field with '\fB-\fP' for a reverse sort on that column.
. .
.HP .TP
.BR --precise .B --precise
.br
Attempt to use nanosecond precision counters when creating new Attempt to use nanosecond precision counters when creating new
statistics regions. statistics regions.
. .
.HP .TP
.BR --programid .B --programid \fIid
.IR id
.br
Specify a program ID string. When creating new statistics regions this Specify a program ID string. When creating new statistics regions this
string is stored with the region. Subsequent operations may supply a string is stored with the region. Subsequent operations may supply a
program ID in order to select only regions with a matching value. The program ID in order to select only regions with a matching value. The
default program ID for dmstats-managed regions is "dmstats". default program ID for dmstats-managed regions is "dmstats".
. .
.HP .TP
.BR --region .B --region
.br
When performing a list or report, include objects of type region in the When performing a list or report, include objects of type region in the
results. results.
. .
.HP .TP
.BR --regionid .B --regionid \fIid
.IR id
.br
Specify the region to operate on. Specify the region to operate on.
. .
.HP .TP
.BR --regions .B --regions \fIregion_list
.IR region_list
.br
Specify a list of regions to group. The group list is a comma-separated Specify a list of regions to group. The group list is a comma-separated
list of region identifiers. Continuous sequences of identifiers may be list of region identifiers. Continuous sequences of identifiers may be
expressed as a hyphen separated range, for example: '1-10'. expressed as a hyphen separated range, for example: '1-10'.
. .
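Expanding a \fB--regions\fP list with hyphen-separated ranges can be sketched as follows (illustrative Python helper, not dmstats code):

```python
def expand_regions(region_list: str) -> list:
    """Expand a --regions list such as '0,3,5-7' into region ids;
    hyphen-separated ranges are inclusive."""
    ids = []
    for tok in region_list.split(","):
        if "-" in tok:
            lo, hi = tok.split("-")
            ids.extend(range(int(lo), int(hi) + 1))
        else:
            ids.append(int(tok))
    return ids
```
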
.HP .TP
.BR --relative .B --relative
.br
If displaying the histogram report show relative (percentage) values If displaying the histogram report show relative (percentage) values
instead of absolute counts. instead of absolute counts.
. .
.HP .TP
.BR -S | --select .BR -S | --select " " \fIselection
.IR selection
.br
Display only rows that match \fIselection\fP criteria. All rows are displayed, with the Display only rows that match \fIselection\fP criteria. All rows are displayed, with the
additional "selected" column (\fB-o selected\fP) showing 1 if the row matches additional "selected" column (\fB-o selected\fP) showing 1 if the row matches
the \fIselection\fP and 0 otherwise. The selection criteria are defined by the \fIselection\fP and 0 otherwise. The selection criteria are defined by
specifying column names and their valid values while making use of specifying column names and their valid values while making use of
supported comparison operators. supported comparison operators.
. .
.HP .TP
.BR --start .B --start \fIstart\fR[\c
.IR start \c
.RB [ \c
.UNITS .UNITS
.br
Specify the start offset of a new statistics region in sectors. An Specify the start offset of a new statistics region in sectors. An
optional suffix selects units of: optional suffix selects units of:
.HELP_UNITS .HELP_UNITS
. .
.HP .TP
.BR --segments .B --segments
.br
When used with \fBcreate\fP, create a new statistics region for each When used with \fBcreate\fP, create a new statistics region for each
target contained in the given device(s). This causes a separate region target contained in the given device(s). This causes a separate region
to be allocated for each segment of the device. to be allocated for each segment of the device.
.P
The newly created regions are automatically placed into a group unless The newly created regions are automatically placed into a group unless
the \fB--nogroup\fP option is given. When grouping is enabled a group the \fB--nogroup\fP option is given. When grouping is enabled a group
alias may be specified using the \fB--alias\fP option. alias may be specified using the \fB--alias\fP option.
. .
.HP .TP
.BR --units .B --units \c
.RI [ units ] \c .RI [ units ] \c
.RB [ h | H | \c .RB [ h | H | \c
.UNITS .UNITS
.br
Set the display units for report output. Set the display units for report output.
All sizes are output in these units: All sizes are output in these units:
.RB ( h )uman-readable, .RB ( h )uman-readable,
.HELP_UNITS .HELP_UNITS
Can also specify custom units e.g. \fB--units\ 3M\fP. Can also specify custom units e.g. \fB--units\ 3M\fP.
. .
.HP .TP
.BR --userdata .B --userdata \fIuser_data
.IR user_data
.br
Specify user data (a word) to be stored with a new region. The value Specify user data (a word) to be stored with a new region. The value
is added to any internal auxiliary data (for example, group is added to any internal auxiliary data (for example, group
information), and stored with the region in the aux_data field provided information), and stored with the region in the aux_data field provided
by the kernel. Whitespace is not permitted. by the kernel. Whitespace is not permitted.
. .
.HP .TP
.BR -u | --uuid .BR -u | --uuid
.br
Specify the uuid. Specify the uuid.
. .
.HP .TP
.BR -v | --verbose " [" -v | --verbose ] .BR -v | --verbose \ [ -v | --verbose ]
.br
Produce additional output. Produce additional output.
. .
.SH COMMANDS .SH COMMANDS
@@ -579,23 +537,23 @@ regions (with the exception of in-flight IO counters).
.CMD_CREATE .CMD_CREATE
.br .br
Creates one or more new statistics regions on the specified device(s). Creates one or more new statistics regions on the specified device(s).
.P
The region will span the entire device unless \fB--start\fP and The region will span the entire device unless \fB--start\fP and
\fB--length\fP or \fB--segments\fP are given. The \fB--start\fP and \fB--length\fP or \fB--segments\fP are given. The \fB--start\fP and
\fB--length\fP options allow a region of arbitrary length to be placed \fB--length\fP options allow a region of arbitrary length to be placed
at an arbitrary offset into the device. The \fB--segments\fP option at an arbitrary offset into the device. The \fB--segments\fP option
causes a new region to be created for each target in the corresponding causes a new region to be created for each target in the corresponding
device-mapper device's table. device-mapper device's table.
.P
If the \fB--precise\fP option is used the command will attempt to If the \fB--precise\fP option is used the command will attempt to
create a region using nanosecond precision counters. create a region using nanosecond precision counters.
.P
If \fB--bounds\fP is given a latency histogram will be tracked for If \fB--bounds\fP is given a latency histogram will be tracked for
the new region. The boundaries of the histogram bins are given as a the new region. The boundaries of the histogram bins are given as a
comma separated list of latency values. There is an implicit lower bound comma separated list of latency values. There is an implicit lower bound
of zero on the first bin and an implicit upper bound of infinity (or the of zero on the first bin and an implicit upper bound of infinity (or the
configured interval duration) on the final bin. configured interval duration) on the final bin.
.P
Latencies are given in nanoseconds. An optional unit suffix of ns, us, Latencies are given in nanoseconds. An optional unit suffix of ns, us,
ms, or s may be given after each value to specify units of nanoseconds, ms, or s may be given after each value to specify units of nanoseconds,
microseconds, miliseconds or seconds respectively, so for example, 10ms microseconds, miliseconds or seconds respectively, so for example, 10ms
@@ -603,19 +561,19 @@ is equivalent to 10000000. Latency values with a precision of less than
one millisecond can only be used when precise timestamps are enabled: if one millisecond can only be used when precise timestamps are enabled: if
\fB--precise\fP is not given and values less than one millisecond are \fB--precise\fP is not given and values less than one millisecond are
used it will be enabled automatically. used it will be enabled automatically.
.P
An optional \fBprogram_id\fP or \fBuser_data\fP string may be associated An optional \fBprogram_id\fP or \fBuser_data\fP string may be associated
with the region. A \fBprogram_id\fP may then be used to select regions with the region. A \fBprogram_id\fP may then be used to select regions
for subsequent list, print, and report operations. The \fBuser_data\fP for subsequent list, print, and report operations. The \fBuser_data\fP
stores an arbitrary string and is not used by dmstats or the stores an arbitrary string and is not used by dmstats or the
device-mapper kernel statistics subsystem. device-mapper kernel statistics subsystem.
.P
By default dmstats creates regions with a \fBprogram_id\fP of By default dmstats creates regions with a \fBprogram_id\fP of
"dmstats". "dmstats".
.P
On success the \fBregion_id\fP of the newly created region is printed On success the \fBregion_id\fP of the newly created region is printed
to stdout. to stdout.
.P
If the \fB--filemap\fP option is given with a regular file, or list If the \fB--filemap\fP option is given with a regular file, or list
of files, as the \fBfile_path\fP argument, instead of creating regions of files, as the \fBfile_path\fP argument, instead of creating regions
with parameters specified on the command line, \fBdmstats\fP will open with parameters specified on the command line, \fBdmstats\fP will open
@@ -623,20 +581,20 @@ the files located at \fBfile_path\fP and create regions corresponding to
the physical extents allocated to the file. This can be used to monitor the physical extents allocated to the file. This can be used to monitor
statistics for individual files in the file system, for example, virtual statistics for individual files in the file system, for example, virtual
machine images, swap areas, or large database files. machine images, swap areas, or large database files.
.P
To work with the \fB--filemap\fP option, files must be located on a To work with the \fB--filemap\fP option, files must be located on a
local file system, backed by a device-mapper device, that supports local file system, backed by a device-mapper device, that supports
physical extent data using the FIEMAP ioctl (e.g. Ext4 and XFS). physical extent data using the FIEMAP ioctl (e.g. Ext4 and XFS).
.P
By default regions that map a file are placed into a group and the By default regions that map a file are placed into a group and the
group alias is set to the basename of the file. This behaviour can be group alias is set to the basename of the file. This behaviour can be
overridden with the \fB--alias\fP and \fB--nogroup\fP options. overridden with the \fB--alias\fP and \fB--nogroup\fP options.
.P
Creating a group that maps a file automatically starts a daemon, Creating a group that maps a file automatically starts a daemon,
\fBdmfilemapd\fP to monitor the file and update the mapping as the \fBdmfilemapd\fP to monitor the file and update the mapping as the
extents allocated to the file change. This behaviour can be disabled extents allocated to the file change. This behaviour can be disabled
using the \fB--nomonitor\fP option. using the \fB--nomonitor\fP option.
.P
Use the \fB--group\fP option to only display information for groups Use the \fB--group\fP option to only display information for groups
when listing and reporting. when listing and reporting.
. .
@ -646,17 +604,17 @@ when listing and reporting.
Delete the specified statistics region. All counters and resources used Delete the specified statistics region. All counters and resources used
by the region are released and the region will not appear in the output by the region are released and the region will not appear in the output
of subsequent list, print, or report operations. of subsequent list, print, or report operations.
.P
All regions registered on a device may be removed using All regions registered on a device may be removed using
\fB--allregions\fP. \fB--allregions\fP.
.P
To remove all regions on all devices both \fB--allregions\fP and To remove all regions on all devices both \fB--allregions\fP and
\fB--alldevices\fP must be used. \fB--alldevices\fP must be used.
.P
If a \fB--groupid\fP is given instead of a \fB--regionid\fP the If a \fB--groupid\fP is given instead of a \fB--regionid\fP the
command will attempt to delete the group and all regions that it command will attempt to delete the group and all regions that it
contains. contains.
.P
If a deleted region is the first member of a group of regions the group If a deleted region is the first member of a group of regions the group
will also be removed. will also be removed.
. .
@ -665,19 +623,19 @@ will also be removed.
.br .br
Combine one or more statistics regions on the specified device into a Combine one or more statistics regions on the specified device into a
group. group.
.P
The list of regions to be grouped is specified with \fB--regions\fP The list of regions to be grouped is specified with \fB--regions\fP
and an optional alias may be assigned with \fB--alias\fP. The set of and an optional alias may be assigned with \fB--alias\fP. The set of
regions is given as a comma-separated list of region identifiers. A regions is given as a comma-separated list of region identifiers. A
continuous range of identifiers spanning from \fBR1\fP to \fBR2\fP may continuous range of identifiers spanning from \fBR1\fP to \fBR2\fP may
be expressed as '\fBR1\fP-\fBR2\fP'. be expressed as '\fBR1\fP-\fBR2\fP'.
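The accepted region-list syntax can be illustrated with a small sketch (the helper name is hypothetical, not part of dmstats):

```python
def expand_regions(spec):
    """Expand a dmstats-style region list such as "0,2-4" into a
    sorted list of region identifiers.  A bare number selects one
    region; "R1-R2" selects the continuous range R1..R2 inclusive."""
    ids = set()
    for part in spec.split(","):
        if "-" in part:
            lo, hi = part.split("-", 1)
            ids.update(range(int(lo), int(hi) + 1))
        else:
            ids.add(int(part))
    return sorted(ids)
```

For example, `--regions 0,2-4` would select regions 0, 2, 3 and 4.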
.P
Regions that have a histogram configured can be grouped: in this case Regions that have a histogram configured can be grouped: in this case
the number of histogram bins and their bounds must match exactly. the number of histogram bins and their bounds must match exactly.
.P
On success the group list and newly created \fBgroup_id\fP are On success the group list and newly created \fBgroup_id\fP are
printed to stdout. printed to stdout.
.P
The group metadata is stored with the first (lowest numbered) The group metadata is stored with the first (lowest numbered)
\fBregion_id\fP in the group: deleting this region will also delete \fBregion_id\fP in the group: deleting this region will also delete
the group and other group members will be returned to their prior the group and other group members will be returned to their prior
@ -695,18 +653,18 @@ the list of report fields.
List the statistics regions, areas, or groups registered on the device. List the statistics regions, areas, or groups registered on the device.
If the \fB--allprograms\fP switch is given all regions will be listed If the \fB--allprograms\fP switch is given all regions will be listed
regardless of region program ID values. regardless of region program ID values.
.P
By default only regions and groups are included in list output. If By default only regions and groups are included in list output. If
\fB-v\fP or \fB--verbose\fP is given the report will also include a \fB-v\fP or \fB--verbose\fP is given the report will also include a
row of information for each configured group and for each area contained row of information for each configured group and for each area contained
in each region displayed. in each region displayed.
.P
Regions that contain a single area are by default omitted from the Regions that contain a single area are by default omitted from the
verbose list since their properties are identical to the area that they verbose list since their properties are identical to the area that they
contain; to view all regions regardless of the number of areas present contain; to view all regions regardless of the number of areas present
use \fB--region\fP. To also view the areas contained within regions use \fB--region\fP. To also view the areas contained within regions
use \fB--area\fP. use \fB--area\fP.
.P
If \fB--histogram\fP is given the report will include the bin count If \fB--histogram\fP is given the report will include the bin count
and latency boundary values for any configured histograms. and latency boundary values for any configured histograms.
.HP .HP
@ -722,16 +680,16 @@ Start a report for the specified object or for all present objects. If
the count argument is specified, the report will repeat at a fixed the count argument is specified, the report will repeat at a fixed
interval set by the \fB--interval\fP option. The default interval is interval set by the \fB--interval\fP option. The default interval is
one second. one second.
.P
If the \fB--allprograms\fP switch is given, all regions will be If the \fB--allprograms\fP switch is given, all regions will be
listed, regardless of region program ID values. listed, regardless of region program ID values.
.P
If the \fB--histogram\fP option is given the report will include the histogram If the \fB--histogram\fP option is given the report will include the histogram
values and latency boundaries. values and latency boundaries.
.P
If the \fB--relative\fP option is used the default histogram field displays If the \fB--relative\fP option is used the default histogram field displays
bin values as a percentage of the total number of I/Os. bin values as a percentage of the total number of I/Os.
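The percentage calculation implied by the relative display can be sketched as follows (the function name is hypothetical; the real report fields are computed by dmstats itself):

```python
def relative_bins(counts):
    """Convert absolute histogram bin counts into percentages of the
    total number of I/Os, as the relative histogram display does."""
    total = sum(counts)
    if total == 0:
        return [0.0 for _ in counts]
    return [100.0 * c / total for c in counts]
```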
.P
Object types (areas, regions and groups) to include in the report are Object types (areas, regions and groups) to include in the report are
selected using the \fB--area\fP, \fB--region\fP, and \fB--group\fP selected using the \fB--area\fP, \fB--region\fP, and \fB--group\fP
options. options.
@ -741,7 +699,7 @@ options.
.br .br
Remove an existing group and return all the group's regions to their Remove an existing group and return all the group's regions to their
original state. original state.
.P
The group to be removed is specified using \fB--groupid\fP. The group to be removed is specified using \fB--groupid\fP.
.HP .HP
.CMD_UPDATE_FILEMAP .CMD_UPDATE_FILEMAP
@ -749,19 +707,19 @@ The group to be removed is specified using \fB--groupid\fP.
Update a group of \fBdmstats\fP regions specified by \fBgroup_id\fP, Update a group of \fBdmstats\fP regions specified by \fBgroup_id\fP,
that were previously created with \fB--filemap\fP, either directly, that were previously created with \fB--filemap\fP, either directly,
or by starting the monitoring daemon, \fBdmfilemapd\fP. or by starting the monitoring daemon, \fBdmfilemapd\fP.
.P
This will add and remove regions to reflect changes in the allocated This will add and remove regions to reflect changes in the allocated
extents of the file on-disk, since the time that it was created or last extents of the file on-disk, since the time that it was created or last
updated. updated.
.P
Use of this command is not normally needed since the \fBdmfilemapd\fP Use of this command is not normally needed since the \fBdmfilemapd\fP
daemon will automatically monitor filemap groups and perform these daemon will automatically monitor filemap groups and perform these
updates when required. updates when required.
.P
If a filemapped group was created with \fB--nomonitor\fP, or the If a filemapped group was created with \fB--nomonitor\fP, or the
daemon has been killed, the \fBupdate_filemap\fP can be used to daemon has been killed, the \fBupdate_filemap\fP can be used to
manually force an update or start a new daemon. manually force an update or start a new daemon.
.P
Use \fB--nomonitor\fP to force a direct update and disable starting Use \fB--nomonitor\fP to force a direct update and disable starting
the monitoring daemon. the monitoring daemon.
. .
@ -773,55 +731,54 @@ span any range: from a single sector to the whole device. A region may
be further sub-divided into a number of distinct areas (one or more), be further sub-divided into a number of distinct areas (one or more),
each with its own counter set. In this case a summary value for the each with its own counter set. In this case a summary value for the
entire region is also available for use in reports. entire region is also available for use in reports.
.P
In addition, one or more regions on one device can be combined into In addition, one or more regions on one device can be combined into
a statistics group. Groups allow several regions to be aggregated and a statistics group. Groups allow several regions to be aggregated and
reported as a single entity; counters for all regions and areas are reported as a single entity; counters for all regions and areas are
summed and used to report totals for all group members. Groups also summed and used to report totals for all group members. Groups also
permit the assignment of an optional alias, allowing meaningful names permit the assignment of an optional alias, allowing meaningful names
to be associated with sets of regions. to be associated with sets of regions.
.P
The group metadata is stored with the first (lowest numbered) The group metadata is stored with the first (lowest numbered)
\fBregion_id\fP in the group: deleting this region will also delete \fBregion_id\fP in the group: deleting this region will also delete
the group and other group members will be returned to their prior the group and other group members will be returned to their prior
state. state.
.P
By default new regions span the entire device. The \fB--start\fP and By default new regions span the entire device. The \fB--start\fP and
\fB--length\fP options allow a region of any size to be placed at any \fB--length\fP options allow a region of any size to be placed at any
location on the device. location on the device.
.P
Using offsets it is possible to create regions that map individual Using offsets it is possible to create regions that map individual
objects within a block device (for example: partitions, files in a file objects within a block device (for example: partitions, files in a file
system, or stripes or other structures in a RAID volume). Groups allow system, or stripes or other structures in a RAID volume). Groups allow
several non-contiguous regions to be assembled together for reporting several non-contiguous regions to be assembled together for reporting
and data aggregation. and data aggregation.
.P
A region may be either divided into the specified number of equal-sized A region may be either divided into the specified number of equal-sized
areas, or into areas of the given size by specifying one of areas, or into areas of the given size by specifying one of
\fB--areas\fP or \fB--areasize\fP when creating a region with the \fB--areas\fP or \fB--areasize\fP when creating a region with the
\fBcreate\fP command. Depending on the size of the areas and the device \fBcreate\fP command. Depending on the size of the areas and the device
region the final area within the region may be smaller than requested. region the final area within the region may be smaller than requested.
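The subdivision rule above, including the possibly smaller final area, can be sketched like this (a toy model working in sectors; the helper name is hypothetical):

```python
def split_areas(region_len, area_size):
    """Divide a region of region_len sectors into areas of area_size
    sectors each.  Depending on the region and area sizes, the final
    area may be smaller than requested."""
    sizes = []
    remaining = region_len
    while remaining > 0:
        sizes.append(min(area_size, remaining))
        remaining -= area_size
    return sizes
```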
.P .
.B Region identifiers .SS Region identifiers
.P .
Each region is assigned an identifier when it is created that is used to Each region is assigned an identifier when it is created that is used to
reference the region in subsequent operations. Region identifiers are reference the region in subsequent operations. Region identifiers are
unique within a given device (including across different \fBprogram_id\fP unique within a given device (including across different \fBprogram_id\fP
values). values).
.P
Depending on the sequence of create and delete operations, gaps may Depending on the sequence of create and delete operations, gaps may
exist in the sequence of \fBregion_id\fP values for a particular device. exist in the sequence of \fBregion_id\fP values for a particular device.
.P
The \fBregion_id\fP should be treated as an opaque identifier used to The \fBregion_id\fP should be treated as an opaque identifier used to
reference the region. reference the region.
. .
.P .SS Group identifiers
.B Group identifiers .
.P
Groups are also assigned an integer identifier at creation time; Groups are also assigned an integer identifier at creation time;
like region identifiers, group identifiers are unique within the like region identifiers, group identifiers are unique within the
containing device. containing device.
.P
The \fBgroup_id\fP should be treated as an opaque identifier used to The \fBgroup_id\fP should be treated as an opaque identifier used to
reference the group. reference the group.
. .
@ -832,82 +789,80 @@ correspond to the extents of a file in the file system. This allows
IO statistics to be monitored on a per-file basis, for example to IO statistics to be monitored on a per-file basis, for example to
observe large database files, virtual machine images, or other files observe large database files, virtual machine images, or other files
of interest. of interest.
.P
To be able to use file mapping, the file must be backed by a To be able to use file mapping, the file must be backed by a
device-mapper device, and in a file system that supports the FIEMAP device-mapper device, and in a file system that supports the FIEMAP
ioctl (and which returns data describing the physical location of ioctl (and which returns data describing the physical location of
extents). This currently includes \fBxfs\fP(5) and \fBext4\fP(5). extents). This currently includes \fBxfs\fP(5) and \fBext4\fP(5).
.P
By default the regions making up a file are placed together in a By default the regions making up a file are placed together in a
group, and the group alias is set to the \fBbasename(3)\fP of the group, and the group alias is set to the \fBbasename(3)\fP of the
file. This allows statistics to be reported for the file as a whole, file. This allows statistics to be reported for the file as a whole,
aggregating values for the regions making up the group. To see only aggregating values for the regions making up the group. To see only
the whole file (group) when using the \fBlist\fP and \fBreport\fP the whole file (group) when using the \fBlist\fP and \fBreport\fP
commands, use \fB--group\fP. commands, use \fB--group\fP.
.P
Since it is possible for the file to change after the initial Since it is possible for the file to change after the initial
group of regions is created, the \fBupdate_filemap\fP command, and group of regions is created, the \fBupdate_filemap\fP command, and
\fBdmfilemapd\fP daemon are provided to update file mapped groups \fBdmfilemapd\fP daemon are provided to update file mapped groups
either manually or automatically. either manually or automatically.
. .
.P .SS File follow modes
.B File follow modes .
.P
The file map monitoring daemon can monitor files in two distinct ways: The file map monitoring daemon can monitor files in two distinct ways:
follow-inode mode, and follow-path mode. follow-inode mode, and follow-path mode.
.P
The mode affects the behaviour of the daemon when a file under The mode affects the behaviour of the daemon when a file under
monitoring is renamed or unlinked, and the conditions which cause the monitoring is renamed or unlinked, and the conditions which cause the
daemon to terminate. daemon to terminate.
.P
If follow-inode mode is used, the daemon will hold the file open, and If follow-inode mode is used, the daemon will hold the file open, and
continue to update regions from the same file descriptor. This means continue to update regions from the same file descriptor. This means
that the mapping will follow rename, move (within the same file that the mapping will follow rename, move (within the same file
system), and unlink operations. This mode is useful if the file is system), and unlink operations. This mode is useful if the file is
expected to be moved, renamed, or unlinked while it is being expected to be moved, renamed, or unlinked while it is being
monitored. monitored.
.P
In follow-inode mode, the daemon will exit once it detects that the In follow-inode mode, the daemon will exit once it detects that the
file has been unlinked and it is the last holder of a reference to it. file has been unlinked and it is the last holder of a reference to it.
.P
If follow-path is used, the daemon will re-open the provided path on If follow-path is used, the daemon will re-open the provided path on
each monitoring iteration. This means that the group will be updated each monitoring iteration. This means that the group will be updated
to reflect a new file being moved to the same path as the original to reflect a new file being moved to the same path as the original
file. This mode is useful for files that are expected to be updated file. This mode is useful for files that are expected to be updated
via unlink and rename. via unlink and rename.
.P
In follow-path mode, the daemon will exit if the file is removed and In follow-path mode, the daemon will exit if the file is removed and
not replaced within a brief tolerance interval (one second). not replaced within a brief tolerance interval (one second).
.P
To stop the daemon, delete the group containing the mapped regions: To stop the daemon, delete the group containing the mapped regions:
the daemon will automatically shut down. the daemon will automatically shut down.
.P
The daemon can also be safely killed at any time and the group kept: The daemon can also be safely killed at any time and the group kept:
if the file is still being allocated the mapping will become if the file is still being allocated the mapping will become
progressively out-of-date as extents are added and removed (in this progressively out-of-date as extents are added and removed (in this
case the daemon can be re-started or the group updated manually with case the daemon can be re-started or the group updated manually with
the \fBupdate_filemap\fP command). the \fBupdate_filemap\fP command).
.P
See the \fBcreate\fP command and \fB--filemap\fP, \fB--follow\fP, See the \fBcreate\fP command and \fB--filemap\fP, \fB--follow\fP,
and \fB--nomonitor\fP options for further information. and \fB--nomonitor\fP options for further information.
. .
.P .SS Limitations
.B Limitations .
.P
The daemon attempts to maintain good synchronisation between the file The daemon attempts to maintain good synchronisation between the file
extents and the regions contained in the group; however, since it can extents and the regions contained in the group; however, since it can
only react to new allocations once they have been written, there are only react to new allocations once they have been written, there are
inevitably some IO events that cannot be counted when a file is inevitably some IO events that cannot be counted when a file is
growing, particularly if the file is being extended by a single thread growing, particularly if the file is being extended by a single thread
writing beyond end-of-file (for example, the \fBdd\fP program). writing beyond end-of-file (for example, the \fBdd\fP program).
.P
There is a further loss of events in that there is currently no way There is a further loss of events in that there is currently no way
to atomically resize a \fBdmstats\fP region and preserve its current to atomically resize a \fBdmstats\fP region and preserve its current
counter values. This affects files when they grow by extending the counter values. This affects files when they grow by extending the
final extent, rather than allocating a new extent: any events that final extent, rather than allocating a new extent: any events that
had accumulated in the region between any prior operation and the had accumulated in the region between any prior operation and the
resize are lost. resize are lost.
.P
File mapping is currently most effective in cases where the majority File mapping is currently most effective in cases where the majority
of IO does not trigger extent allocation. Future updates may address of IO does not trigger extent allocation. Future updates may address
these limitations when kernel support is available. these limitations when kernel support is available.
@ -916,7 +871,7 @@ these limitations when kernel support is available.
. .
The dmstats report provides several types of field that may be added to The dmstats report provides several types of field that may be added to
the default field set, or used to create custom reports. the default field set, or used to create custom reports.
.P
All performance counters and metrics are calculated per-area. All performance counters and metrics are calculated per-area.
. .
.SS Derived metrics .SS Derived metrics
@ -1273,12 +1228,11 @@ Bryn M. Reeves <bmr@redhat.com>
.SH SEE ALSO .SH SEE ALSO
. .
.BR dmsetup (8) .BR dmsetup (8)
.P
LVM2 resource page: https://www.sourceware.org/lvm2/ LVM2 resource page: https://www.sourceware.org/lvm2/
.br .br
Device-mapper resource page: http://sources.redhat.com/dm/ Device-mapper resource page: http://sources.redhat.com/dm/
.br .P
Device-mapper statistics kernel documentation Device-mapper statistics kernel documentation
.br .br
.I Documentation/device-mapper/statistics.txt .I Documentation/device-mapper/statistics.txt
@ -1,24 +1,27 @@
.TH "FSADM" "8" "LVM TOOLS #VERSION#" "Red Hat, Inc" "\"" .TH "FSADM" "8" "LVM TOOLS #VERSION#" "Red Hat, Inc" "\""
.
.SH "NAME" .SH "NAME"
.
fsadm \(em utility to resize or check filesystem on a device fsadm \(em utility to resize or check filesystem on a device
.
.SH SYNOPSIS .SH SYNOPSIS
. .
.PD 0 .PD 0
.ad l .ad l
.HP 5 .TP 6
.B fsadm .B fsadm
.RI [ options ] .RI [ options ]
.BR check .BR check
.IR device .IR device
. .
.HP .TP
.B fsadm .B fsadm
.RI [ options ] .RI [ options ]
.BR resize .BR resize
.IR device .IR device
.RI [ new_size ] .RI [ new_size ]
.PD
.ad b .ad b
.PD
. .
.SH DESCRIPTION .SH DESCRIPTION
. .
@ -34,44 +37,36 @@ filesystem.
. .
.SH OPTIONS .SH OPTIONS
. .
.HP .TP
.BR -e | --ext-offline .BR -e | --ext-offline
.br
Unmount ext2/ext3/ext4 filesystem before doing resize. Unmount ext2/ext3/ext4 filesystem before doing resize.
. .
.HP .TP
.BR -f | --force .BR -f | --force
.br
Bypass some sanity checks. Bypass some sanity checks.
. .
.HP .TP
.BR -h | --help .BR -h | --help
.br
Display the help text. Display the help text.
. .
.HP .TP
.BR -n | --dry-run .BR -n | --dry-run
.br
Print commands without running them. Print commands without running them.
. .
.HP .TP
.BR -v | --verbose .BR -v | --verbose
.br
Be more verbose. Be more verbose.
. .
.HP .TP
.BR -y | --yes .BR -y | --yes
.br
Answer "yes" at any prompts. Answer "yes" at any prompts.
. .
.HP .TP
.BR -c | --cryptresize .BR -c | --cryptresize
.br
Resize dm-crypt mapping together with the filesystem detected on the device. The dm-crypt device must be recognizable by \fBcryptsetup\fP(8). Resize dm-crypt mapping together with the filesystem detected on the device. The dm-crypt device must be recognizable by \fBcryptsetup\fP(8).
. .
.HP .TP
.BR \fInew_size [ B | K | M | G | T | P | E ] .BR \fInew_size [ B | K | M | G | T | P | E ]
.br
Absolute number of filesystem blocks to be in the filesystem, Absolute number of filesystem blocks to be in the filesystem,
or an absolute size using a suffix (in powers of 1024). or an absolute size using a suffix (in powers of 1024).
If new_size is not supplied, the whole device is used. If new_size is not supplied, the whole device is used.
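The size interpretation described above can be sketched as follows (the function name and the default block size are illustrative assumptions, not fsadm internals):

```python
# Power-of-1024 suffix multipliers accepted for new_size.
SUFFIXES = {"B": 1, "K": 1024, "M": 1024**2, "G": 1024**3,
            "T": 1024**4, "P": 1024**5, "E": 1024**6}

def parse_size(text, block_size=1024):
    """Interpret an fsadm-style new_size argument as bytes: a plain
    number counts filesystem blocks (block_size is an assumed
    default here), while a trailing suffix gives an absolute size."""
    suffix = text[-1].upper() if text else ""
    if suffix in SUFFIXES:
        return int(text[:-1]) * SUFFIXES[suffix]
    return int(text) * block_size
```

For example, `1000M` is 1000 × 1024² bytes regardless of the filesystem block size.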
@ -91,30 +86,37 @@ Resize the filesystem on logical volume \fI/dev/vg/test\fP to 1000 megabytes.
If \fI/dev/vg/test\fP contains ext2/ext3/ext4 If \fI/dev/vg/test\fP contains ext2/ext3/ext4
filesystem it will be unmounted prior to the resize. filesystem it will be unmounted prior to the resize.
All [y/n] questions will be answered 'y'. All [y/n] questions will be answered 'y'.
.sp .P
#
.B fsadm -e -y resize /dev/vg/test 1000M .B fsadm -e -y resize /dev/vg/test 1000M
. .
.SH ENVIRONMENT VARIABLES .SH ENVIRONMENT VARIABLES
. .
.TP .TP
.B "TMPDIR " .B TMPDIR
The temporary directory name for mount points. Defaults to "\fI/tmp\fP". The temporary directory name for mount points. Defaults to "\fI/tmp\fP".
.TP .TP
.B DM_DEV_DIR .B DM_DEV_DIR
The device directory name. The device directory name.
Defaults to "\fI/dev\fP" and must be an absolute path. Defaults to "\fI/dev\fP" and must be an absolute path.
.
.SH SEE ALSO .SH SEE ALSO
.
.nh .nh
.ad l
.BR lvm (8), .BR lvm (8),
.BR lvresize (8), .BR lvresize (8),
.BR lvm.conf (5), .BR lvm.conf (5),
.P
.BR fsck (8), .BR fsck (8),
.BR tune2fs (8), .BR tune2fs (8),
.BR resize2fs (8), .BR resize2fs (8),
.P
.BR reiserfstune (8), .BR reiserfstune (8),
.BR resize_reiserfs (8), .BR resize_reiserfs (8),
.P
.BR xfs_info (8), .BR xfs_info (8),
.BR xfs_growfs (8), .BR xfs_growfs (8),
.BR xfs_check (8), .BR xfs_check (8),
.P
.BR cryptsetup (8) .BR cryptsetup (8)
@ -11,7 +11,6 @@ lvm \(em LVM2 tools
. .
.SH DESCRIPTION .SH DESCRIPTION
. .
The Logical Volume Manager (LVM) provides tools to create virtual block The Logical Volume Manager (LVM) provides tools to create virtual block
devices from physical devices. Virtual devices may be easier to manage devices from physical devices. Virtual devices may be easier to manage
than physical devices, and can have capabilities beyond what the physical than physical devices, and can have capabilities beyond what the physical
@ -22,7 +21,6 @@ applications. Each block of data in an LV is stored on one or more PV in
the VG, according to algorithms implemented by Device Mapper (DM) in the the VG, according to algorithms implemented by Device Mapper (DM) in the
kernel. kernel.
.P .P
The lvm command, and other commands listed below, are the command-line The lvm command, and other commands listed below, are the command-line
tools for LVM. A separate manual page describes each command in detail. tools for LVM. A separate manual page describes each command in detail.
.P .P
@ -40,7 +38,7 @@ On invocation, \fBlvm\fP requires that only the standard file descriptors
stdin, stdout and stderr are available. If others are found, they stdin, stdout and stderr are available. If others are found, they
get closed and messages are issued warning about the leak. get closed and messages are issued warning about the leak.
This warning can be suppressed by setting the environment variable This warning can be suppressed by setting the environment variable
.B LVM_SUPPRESS_FD_WARNINGS\fP. .BR LVM_SUPPRESS_FD_WARNINGS .
.P .P
Where commands take VG or LV names as arguments, the full path name is Where commands take VG or LV names as arguments, the full path name is
optional. An LV called "lvol0" in a VG called "vg0" can be specified optional. An LV called "lvol0" in a VG called "vg0" can be specified
@ -67,7 +65,7 @@ The following commands are built into lvm without links
normally being created in the filesystem for them. normally being created in the filesystem for them.
.sp .sp
.PD 0 .PD 0
.TP 14 .TP 16
.B config .B config
The same as \fBlvmconfig\fP(8) below. The same as \fBlvmconfig\fP(8) below.
.TP .TP
@ -112,7 +110,7 @@ Display version information.
The following commands implement the core LVM functionality. The following commands implement the core LVM functionality.
.sp .sp
.PD 0 .PD 0
.TP 14 .TP 16
.B pvchange .B pvchange
Change attributes of a Physical Volume. Change attributes of a Physical Volume.
.TP .TP
@ -296,19 +294,18 @@ original VG, LV and internal layer names.
. .
.SH UNIQUE NAMES .SH UNIQUE NAMES
. .
VG names should be unique. vgcreate will produce an error if the VG names should be unique. vgcreate will produce an error if the
specified VG name matches an existing VG name. However, there are cases specified VG name matches an existing VG name. However, there are cases
where different VGs with the same name can appear to LVM, e.g. after where different VGs with the same name can appear to LVM, e.g. after
moving disks or changing filters. moving disks or changing filters.
.P
When VGs with the same name exist, commands operating on all VGs will When VGs with the same name exist, commands operating on all VGs will
include all of the VGs with the same name. If the ambiguous VG name is include all of the VGs with the same name. If the ambiguous VG name is
specified on the command line, the command will produce an error. The specified on the command line, the command will produce an error. The
error states that multiple VGs exist with the specified name. To process error states that multiple VGs exist with the specified name. To process
one of the VGs specifically, the --select option should be used with the one of the VGs specifically, the --select option should be used with the
UUID of the intended VG: '--select vg_uuid=<uuid>'. UUID of the intended VG: --select vg_uuid=<uuid>
.P
An exception is if all but one of the VGs with the shared name is foreign An exception is if all but one of the VGs with the shared name is foreign
(see (see
.BR lvmsystemid (7).) .BR lvmsystemid (7).)
@ -317,18 +314,17 @@ VG and is processed.
.P .P
LV names are unique within a VG. The name of an historical LV cannot be LV names are unique within a VG. The name of an historical LV cannot be
reused until the historical LV has itself been removed or renamed. reused until the historical LV has itself been removed or renamed.
. .
.SH ALLOCATION .SH ALLOCATION
. .
When an operation needs to allocate Physical Extents for one or more When an operation needs to allocate Physical Extents for one or more
Logical Volumes, the tools proceed as follows: Logical Volumes, the tools proceed as follows:
.P
First of all, they generate the complete set of unallocated Physical Extents First of all, they generate the complete set of unallocated Physical Extents
in the Volume Group. If any ranges of Physical Extents are supplied at in the Volume Group. If any ranges of Physical Extents are supplied at
the end of the command line, only unallocated Physical Extents within the end of the command line, only unallocated Physical Extents within
those ranges on the specified Physical Volumes are considered. those ranges on the specified Physical Volumes are considered.
.P
Then they try each allocation policy in turn, starting with the strictest Then they try each allocation policy in turn, starting with the strictest
policy (\fBcontiguous\fP) and ending with the allocation policy specified policy (\fBcontiguous\fP) and ending with the allocation policy specified
using \fB--alloc\fP or set as the default for the particular Logical using \fB--alloc\fP or set as the default for the particular Logical
@ -337,14 +333,14 @@ lowest-numbered Logical Extent of the empty Logical Volume space that
needs to be filled, they allocate as much space as possible according to needs to be filled, they allocate as much space as possible according to
the restrictions imposed by the policy. If more space is needed, the restrictions imposed by the policy. If more space is needed,
they move on to the next policy. they move on to the next policy.
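The policy can be chosen per command or stored as the VG default. A hypothetical illustration (the VG and LV names are examples only):

```
# lvcreate -n lv1 -L 10G --alloc cling vg
# vgchange --alloc cling vg
```

The first command requests the cling policy for a single allocation; the second records it as the default policy for future allocations in that VG.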
.P
The restrictions are as follows: The restrictions are as follows:
.P
\fBContiguous\fP requires that the physical location of any Logical \fBContiguous\fP requires that the physical location of any Logical
Extent that is not the first Logical Extent of a Logical Volume is Extent that is not the first Logical Extent of a Logical Volume is
adjacent to the physical location of the Logical Extent immediately adjacent to the physical location of the Logical Extent immediately
preceding it. preceding it.
.P
\fBCling\fP requires that the Physical Volume used for any Logical \fBCling\fP requires that the Physical Volume used for any Logical
Extent to be added to an existing Logical Volume is already in use by at Extent to be added to an existing Logical Volume is already in use by at
least one Logical Extent earlier in that Logical Volume. If the least one Logical Extent earlier in that Logical Volume. If the
@ -353,31 +349,31 @@ Physical Volumes are considered to match if any of the listed tags is
present on both Physical Volumes. This allows groups of Physical present on both Physical Volumes. This allows groups of Physical
Volumes with similar properties (such as their physical location) to be Volumes with similar properties (such as their physical location) to be
tagged and treated as equivalent for allocation purposes. tagged and treated as equivalent for allocation purposes.
.P
When a Logical Volume is striped or mirrored, the above restrictions are When a Logical Volume is striped or mirrored, the above restrictions are
applied independently to each stripe or mirror image (leg) that needs applied independently to each stripe or mirror image (leg) that needs
space. space.
.P
\fBNormal\fP will not choose a Physical Extent that shares the same Physical \fBNormal\fP will not choose a Physical Extent that shares the same Physical
Volume as a Logical Extent already allocated to a parallel Logical Volume as a Logical Extent already allocated to a parallel Logical
Volume (i.e. a different stripe or mirror image/leg) at the same offset Volume (i.e. a different stripe or mirror image/leg) at the same offset
within that parallel Logical Volume. within that parallel Logical Volume.
.P
When allocating a mirror log at the same time as Logical Volumes to hold When allocating a mirror log at the same time as Logical Volumes to hold
the mirror data, Normal will first try to select different Physical the mirror data, Normal will first try to select different Physical
Volumes for the log and the data. If that's not possible and the Volumes for the log and the data. If that's not possible and the
.B allocation/mirror_logs_require_separate_pvs .B allocation/mirror_logs_require_separate_pvs
configuration parameter is set to 0, it will then allow the log configuration parameter is set to 0, it will then allow the log
to share Physical Volume(s) with part of the data. to share Physical Volume(s) with part of the data.
.P
When allocating thin pool metadata, similar considerations to those of a When allocating thin pool metadata, similar considerations to those of a
mirror log in the last paragraph apply based on the value of the mirror log in the last paragraph apply based on the value of the
.B allocation/thin_pool_metadata_require_separate_pvs .B allocation/thin_pool_metadata_require_separate_pvs
configuration parameter. configuration parameter.
.P
If you rely upon any layout behaviour beyond that documented here, be If you rely upon any layout behaviour beyond that documented here, be
aware that it might change in future versions of the code. aware that it might change in future versions of the code.
.P
For example, if you supply on the command line two empty Physical For example, if you supply on the command line two empty Physical
Volumes that have an identical number of free Physical Extents available for Volumes that have an identical number of free Physical Extents available for
allocation, the current code considers using each of them in the order allocation, the current code considers using each of them in the order
@ -387,7 +383,7 @@ for a particular Logical Volume, then you should build it up through a
sequence of \fBlvcreate\fP(8) and \fBlvconvert\fP(8) steps such that the sequence of \fBlvcreate\fP(8) and \fBlvconvert\fP(8) steps such that the
restrictions described above applied to each step leave the tools no restrictions described above applied to each step leave the tools no
discretion over the layout. discretion over the layout.
.P
To view the way the allocation process currently works in any specific To view the way the allocation process currently works in any specific
case, read the debug logging output, for example by adding \fB-vvvv\fP to case, read the debug logging output, for example by adding \fB-vvvv\fP to
a command. a command.
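For instance, a sketch of capturing and inspecting that debug output (the LV name and log file are hypothetical):

```
# lvcreate -vvvv -n lv1 -L 1G vg 2> lvcreate.log
# grep -i alloc lvcreate.log
```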
@ -501,7 +497,7 @@ Prepends source file name and code line number with libdm debugging.
.BR lvm (8), .BR lvm (8),
.BR lvm.conf (5), .BR lvm.conf (5),
.BR lvmconfig (8), .BR lvmconfig (8),
.P
.BR pvchange (8), .BR pvchange (8),
.BR pvck (8), .BR pvck (8),
.BR pvcreate (8), .BR pvcreate (8),
@ -511,7 +507,7 @@ Prepends source file name and code line number with libdm debugging.
.BR pvresize (8), .BR pvresize (8),
.BR pvs (8), .BR pvs (8),
.BR pvscan (8), .BR pvscan (8),
.P
.BR vgcfgbackup (8), .BR vgcfgbackup (8),
.BR vgcfgrestore (8), .BR vgcfgrestore (8),
.BR vgchange (8), .BR vgchange (8),
@ -531,7 +527,7 @@ Prepends source file name and code line number with libdm debugging.
.BR vgs (8), .BR vgs (8),
.BR vgscan (8), .BR vgscan (8),
.BR vgsplit (8), .BR vgsplit (8),
.P
.BR lvcreate (8), .BR lvcreate (8),
.BR lvchange (8), .BR lvchange (8),
.BR lvconvert (8), .BR lvconvert (8),
@ -543,26 +539,26 @@ Prepends source file name and code line number with libdm debugging.
.BR lvresize (8), .BR lvresize (8),
.BR lvs (8), .BR lvs (8),
.BR lvscan (8), .BR lvscan (8),
.P
.BR lvm-fullreport (8), .BR lvm-fullreport (8),
.BR lvm-lvpoll (8), .BR lvm-lvpoll (8),
.BR lvm2-activation-generator (8), .BR lvm2-activation-generator (8),
.BR blkdeactivate (8), .BR blkdeactivate (8),
.BR lvmdump (8), .BR lvmdump (8),
.P
.BR dmeventd (8), .BR dmeventd (8),
.BR lvmpolld (8), .BR lvmpolld (8),
.BR lvmlockd (8), .BR lvmlockd (8),
.BR lvmlockctl (8), .BR lvmlockctl (8),
.BR cmirrord (8), .BR cmirrord (8),
.BR lvmdbusd (8), .BR lvmdbusd (8),
.P
.BR lvmsystemid (7), .BR lvmsystemid (7),
.BR lvmreport (7), .BR lvmreport (7),
.BR lvmraid (7), .BR lvmraid (7),
.BR lvmthin (7), .BR lvmthin (7),
.BR lvmcache (7), .BR lvmcache (7),
.P
.BR dmsetup (8), .BR dmsetup (8),
.BR dmstats (8), .BR dmstats (8),
.BR readline (3) .BR readline (3)
View File
@ -1,28 +1,35 @@
.TH LVM.CONF 5 "LVM TOOLS #VERSION#" "Red Hat, Inc." \" -*- nroff -*- .TH LVM.CONF 5 "LVM TOOLS #VERSION#" "Red Hat, Inc." \" -*- nroff -*-
.
.SH NAME .SH NAME
.
lvm.conf \(em Configuration file for LVM2 lvm.conf \(em Configuration file for LVM2
.
.SH SYNOPSIS .SH SYNOPSIS
.
.B #DEFAULT_SYS_DIR#/lvm.conf .B #DEFAULT_SYS_DIR#/lvm.conf
.
.SH DESCRIPTION .SH DESCRIPTION
.
\fBlvm.conf\fP is loaded during the initialisation phase of \fBlvm.conf\fP is loaded during the initialisation phase of
\fBlvm\fP(8). This file can in turn lead to other files \fBlvm\fP(8). This file can in turn lead to other files
being loaded - settings read in later override earlier being loaded - settings read in later override earlier
settings. File timestamps are checked between commands and if settings. File timestamps are checked between commands and if
any have changed, all the files are reloaded. any have changed, all the files are reloaded.
.P
For a description of each lvm.conf setting, run: For a description of each lvm.conf setting, run:
.P
.B lvmconfig --typeconfig default --withcomments --withspaces .B lvmconfig --typeconfig default --withcomments --withspaces
.P
The settings defined in lvm.conf can be overridden by any The settings defined in lvm.conf can be overridden by any
of these extended configuration methods: of these extended configuration methods:
.
.TP .TP
.B direct config override on command line .B direct config override on command line
The \fB--config ConfigurationString\fP command line option takes the The \fB--config ConfigurationString\fP command line option takes the
ConfigurationString as direct string representation of the configuration ConfigurationString as direct string representation of the configuration
to override the existing configuration. The ConfigurationString is of to override the existing configuration. The ConfigurationString is of
exactly the same format as used in any LVM configuration file. exactly the same format as used in any LVM configuration file.
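A minimal sketch of such an override (the filter value is illustrative; devices/filter is an existing lvm.conf setting):

```
# pvs --config 'devices { filter=["a|/dev/sdb|", "r|.*|"] }'
```

The string overrides the configured filter for this one command only.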
.
.TP .TP
.B profile config .B profile config
.br .br
@ -30,17 +37,17 @@ A profile is a set of selected customizable configuration settings
that are aimed at achieving certain characteristics in various that are aimed at achieving certain characteristics in various
environments or uses. It's used to override existing configuration. environments or uses. It's used to override existing configuration.
Normally, the name of the profile should reflect that environment or use. Normally, the name of the profile should reflect that environment or use.
.P
There are two groups of profiles recognised: \fBcommand profiles\fP and There are two groups of profiles recognised: \fBcommand profiles\fP and
\fBmetadata profiles\fP. \fBmetadata profiles\fP.
.P
The \fBcommand profile\fP is used to override selected configuration The \fBcommand profile\fP is used to override selected configuration
settings at global LVM command level - it is applied at the very beginning settings at global LVM command level - it is applied at the very beginning
of LVM command execution and it is used throughout the whole time of LVM of LVM command execution and it is used throughout the whole time of LVM
command execution. The command profile is applied by using the command execution. The command profile is applied by using the
\fB--commandprofile ProfileName\fP command line option that is recognised by \fB--commandprofile ProfileName\fP command line option that is recognised by
all LVM2 commands. all LVM2 commands.
.P
The \fBmetadata profile\fP is used to override selected configuration The \fBmetadata profile\fP is used to override selected configuration
settings at Volume Group/Logical Volume level - it is applied independently settings at Volume Group/Logical Volume level - it is applied independently
for each Volume Group/Logical Volume that is being processed. As such, for each Volume Group/Logical Volume that is being processed. As such,
@ -56,12 +63,12 @@ option during creation when using \fBvgcreate\fP or \fBlvcreate\fP command.
The \fBvgs\fP and \fBlvs\fP reporting commands provide \fB-o vg_profile\fP The \fBvgs\fP and \fBlvs\fP reporting commands provide \fB-o vg_profile\fP
and \fB-o lv_profile\fP output options to show the metadata profile and \fB-o lv_profile\fP output options to show the metadata profile
currently attached to a Volume Group or a Logical Volume. currently attached to a Volume Group or a Logical Volume.
.P
The set of options allowed for command profiles is mutually exclusive The set of options allowed for command profiles is mutually exclusive
when compared to the set of options allowed for metadata profiles. The when compared to the set of options allowed for metadata profiles. The
settings that belong to either of these two sets can't be mixed together settings that belong to either of these two sets can't be mixed together
and LVM tools will reject such profiles. and LVM tools will reject such profiles.
.P
LVM itself provides a few predefined configuration profiles. LVM itself provides a few predefined configuration profiles.
Users are allowed to add more profiles with different values if needed. Users are allowed to add more profiles with different values if needed.
For this purpose, there's the \fBcommand_profile_template.profile\fP For this purpose, there's the \fBcommand_profile_template.profile\fP
@ -74,31 +81,36 @@ or \fBlvmconfig --file <ProfileName.profile> --type profilable-metadata <section
can be used to generate a configuration with profilable settings of either can be used to generate a configuration with profilable settings of either
type for the given section and save it to a new ProfileName.profile type for the given section and save it to a new ProfileName.profile
(if the section is not specified, all profilable settings are reported). (if the section is not specified, all profilable settings are reported).
.P
The profiles are stored in #DEFAULT_PROFILE_DIR# directory by default. The profiles are stored in \fI#DEFAULT_PROFILE_DIR#\fP directory by default.
This location can be changed by using the \fBconfig/profile_dir\fP setting. This location can be changed by using the \fBconfig/profile_dir\fP setting.
Each profile configuration is stored in \fBProfileName.profile\fP file Each profile configuration is stored in \fBProfileName.profile\fP file
in the profile directory. When referencing the profile, the \fB.profile\fP in the profile directory. When referencing the profile, the \fB.profile\fP
suffix is left out. suffix is left out.
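For example, attaching a profile at creation time and then inspecting it (assuming the thin-performance profile shipped with LVM is present; VG and pool names are hypothetical):

```
# lvcreate --metadataprofile thin-performance -L 10G -T vg/pool
# lvs -o name,lv_profile vg
```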
.
.TP .TP
.B tag config .B tag config
.br .br
See \fBtags\fP configuration setting description below. See \fBtags\fP configuration setting description below.
.P
.LP
When several configuration methods are used at the same time When several configuration methods are used at the same time
and when LVM looks for the value of a particular setting, it traverses and when LVM looks for the value of a particular setting, it traverses
this \fBconfig cascade\fP from left to right: this \fBconfig cascade\fP from left to right:
.P
\fBdirect config override on command line\fP -> \fBcommand profile config\fP -> \fBmetadata profile config\fP -> \fBtag config\fP -> \fBlvmlocal.conf\fB -> \fBlvm.conf\fP \fBdirect config override on command line\fP ->
\fBcommand profile config\fP ->
\fBmetadata profile config\fP ->
\fBtag config\fP ->
\fBlvmlocal.conf\fP ->
\fBlvm.conf\fP
.P
No part of this cascade is compulsory. If there's no setting value found at No part of this cascade is compulsory. If there's no setting value found at
the end of the cascade, a default value is used for that setting. the end of the cascade, a default value is used for that setting.
Use \fBlvmconfig\fP to check what settings are in use and what Use \fBlvmconfig\fP to check what settings are in use and what
the default values are. the default values are.
.
.SH SYNTAX .SH SYNTAX
.LP .
This section describes the configuration file syntax. This section describes the configuration file syntax.
.LP .LP
Whitespace is not significant unless it is within quotes. Whitespace is not significant unless it is within quotes.
@ -109,15 +121,12 @@ They are treated as whitespace.
Here is an informal grammar: Here is an informal grammar:
.TP .TP
.BR file " = " value * .BR file " = " value *
.br
A configuration file consists of a set of values. A configuration file consists of a set of values.
.TP .TP
.BR value " = " section " | " assignment .BR value " = " section " | " assignment
.br
A value can either be a new section, or an assignment. A value can either be a new section, or an assignment.
.TP .TP
.BR section " = " identifier " '" { "' " value "* '" } ' .BR section " = " identifier " '" { "' " value "* '" } '
.br
A section groups associated values together. If the same section is A section groups associated values together. If the same section is
encountered multiple times, the contents of all instances are concatenated encountered multiple times, the contents of all instances are concatenated
together in the order of appearance. together in the order of appearance.
@ -142,60 +151,58 @@ e.g. \fBlevel = 7\fP
.br .br
.TP .TP
.BR array " = '" [ "' ( " type " '" , "')* " type " '" ] "' | '" [ "' '" ] ' .BR array " = '" [ "' ( " type " '" , "')* " type " '" ] "' | '" [ "' '" ] '
.br
Inhomogeneous arrays are supported. Inhomogeneous arrays are supported.
.br .br
Elements must be separated by commas. Elements must be separated by commas.
.br .br
An empty array is acceptable. An empty array is acceptable.
.TP .TP
.BR type " = " integer " | " float " | " string .BR type " = " integer | float | string
.BR integer " = [0-9]*" .BR integer " = [" 0 - 9 "]*"
.br .br
.BR float " = [0-9]*'" . '[0-9]* .BR float " = [" 0 - 9 "]*'" . "'[" 0 - 9 ]*
.br .br
.B string \fR= '\fB"\fR'.*'\fB"\fR' .BR string " = '" \(dq "' .* '" \(dq '
.IP .IP
Strings with spaces must be enclosed in double quotes, single words that start Strings with spaces must be enclosed in double quotes, single words that start
with a letter can be left unquoted. with a letter can be left unquoted.
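Putting the grammar together, a small hypothetical fragment in valid lvm.conf syntax (two sections, integer and quoted-string assignments):

```
log {
	verbose = 0
	file = "/var/log/lvm2.log"
}
tags {
	hosttags = 1
}
```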
.
.SH SETTINGS .SH SETTINGS
.
The The
.B lvmconfig .B lvmconfig
command prints the LVM configuration settings in various ways. command prints the LVM configuration settings in various ways.
See the man page See the man page
.BR lvmconfig (8). .BR lvmconfig (8).
.P
Command to print a list of all possible config settings, with their Command to print a list of all possible config settings, with their
default values: default values:
.br .br
.B lvmconfig --type default .B lvmconfig --type default
.P
Command to print a list of all possible config settings, with their Command to print a list of all possible config settings, with their
default values, and a full description of each as a comment: default values, and a full description of each as a comment:
.br .br
.B lvmconfig --type default --withcomments .B lvmconfig --type default --withcomments
.P
Command to print a list of all possible config settings, with their Command to print a list of all possible config settings, with their
current values (configured, non-default values are shown): current values (configured, non-default values are shown):
.br .br
.B lvmconfig --type current .B lvmconfig --type current
.P
Command to print all config settings that have been configured with a Command to print all config settings that have been configured with a
different value than the default (configured, non-default values are different value than the default (configured, non-default values are
shown): shown):
.br .br
.B lvmconfig --type diff .B lvmconfig --type diff
.P
Command to print a single config setting, with its default value, Command to print a single config setting, with its default value,
and a full description, where "Section" refers to the config section, and a full description, where "Section" refers to the config section,
e.g. global, and "Setting" refers to the name of the specific setting, e.g. global, and "Setting" refers to the name of the specific setting,
e.g. umask: e.g. umask:
.br .br
.B lvmconfig --type default --withcomments Section/Setting .B lvmconfig --type default --withcomments Section/Setting
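For instance, instantiated with the global section and umask setting mentioned above:

```
# lvmconfig --type default --withcomments global/umask
```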
.
.SH FILES .SH FILES
.I #DEFAULT_SYS_DIR#/lvm.conf .I #DEFAULT_SYS_DIR#/lvm.conf
.br .br
@ -210,8 +217,8 @@ e.g. umask:
.I #DEFAULT_LOCK_DIR# .I #DEFAULT_LOCK_DIR#
.br .br
.I #DEFAULT_PROFILE_DIR# .I #DEFAULT_PROFILE_DIR#
.
.SH SEE ALSO .SH SEE ALSO
.BR lvm (8) .
.BR lvm (8),
.BR lvmconfig (8) .BR lvmconfig (8)
View File
@ -1,49 +1,58 @@
.TH "LVM2-ACTIVATION-GENERATOR" "8" "LVM TOOLS #VERSION#" "Red Hat, Inc" "\"" .TH "LVM2-ACTIVATION-GENERATOR" "8" "LVM TOOLS #VERSION#" "Red Hat, Inc" "\""
.
.SH "NAME" .SH "NAME"
.
lvm2-activation-generator - generator for systemd units to activate LVM volumes on boot lvm2-activation-generator - generator for systemd units to activate LVM volumes on boot
.
.SH SYNOPSIS .SH SYNOPSIS
.
.B #SYSTEMD_GENERATOR_DIR#/lvm2-activation-generator .B #SYSTEMD_GENERATOR_DIR#/lvm2-activation-generator
.sp .
.SH DESCRIPTION .SH DESCRIPTION
.
The lvm2-activation-generator is called by \fBsystemd\fP(1) on boot to The lvm2-activation-generator is called by \fBsystemd\fP(1) on boot to
generate systemd units at runtime to activate LVM Logical Volumes (LVs) generate systemd units at runtime to activate LVM Logical Volumes (LVs)
when global/event_activation=0 is set in \fBlvm.conf\fP(5). These units use when global/event_activation=0 is set in \fBlvm.conf\fP(5). These units use
\fBvgchange -aay\fP to activate LVs. \fBvgchange -aay\fP to activate LVs.
.P
If event_activation=1, the lvm2-activation-generator exits immediately without If event_activation=1, the lvm2-activation-generator exits immediately without
generating any systemd units, and LVM fully relies on event-based generating any systemd units, and LVM fully relies on event-based
activation to activate LVs. In this case, event-generated \fBpvscan activation to activate LVs. In this case, event-generated
--cache -aay\fP commands activate LVs. .B pvscan --cache -aay
commands activate LVs.
.P
These systemd units are generated by lvm2-activation-generator: These systemd units are generated by lvm2-activation-generator:
.sp .P
\fIlvm2-activation-early.service\fP .I lvm2-activation-early.service
is run before systemd's special \fBcryptsetup.target\fP to activate is run before systemd's special \fBcryptsetup.target\fP to activate
LVs that are not layered on top of encrypted devices. LVs that are not layered on top of encrypted devices.
.P
\fIlvm2-activation.service\fP .I lvm2-activation.service
is run after systemd's special \fBcryptsetup.target\fP to activate is run after systemd's special \fBcryptsetup.target\fP to activate
LVs that are layered on top of encrypted devices. LVs that are layered on top of encrypted devices.
.P
\fIlvm2-activation-net.service\fP .I lvm2-activation-net.service
is run after systemd's special \fBremote-fs-pre.target\fP to activate is run after systemd's special \fBremote-fs-pre.target\fP to activate
LVs that are layered on attached remote devices. LVs that are layered on attached remote devices.
.P
Note that all the underlying LVM devices (Physical Volumes) need to be Note that all the underlying LVM devices (Physical Volumes) need to be
present when the service is run. If there are any devices that appear present when the service is run. If there are any devices that appear
to the system later, LVs using these devices need to be activated directly to the system later, LVs using these devices need to be activated directly
by \fBlvchange\fP(8) or \fBvgchange\fP(8). by \fBlvchange\fP(8) or \fBvgchange\fP(8).
.P
The lvm2-activation-generator implements the \fBGenerators Specification\fP The lvm2-activation-generator implements the \fBGenerators Specification\fP
as referenced in \fBsystemd\fP(1). as referenced in \fBsystemd\fP(1).
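A sketch of how the current activation mode and any generated units could be inspected (the unit name pattern is assumed from the unit names listed above):

```
# lvmconfig global/event_activation
# systemctl list-units 'lvm2-activation*'
```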
.sp .
.SH SEE ALSO .SH SEE ALSO
.BR lvm.conf (5) .nh
.BR vgchange (8) .ad l
.BR lvchange (8) .BR lvm.conf (5),
.BR pvscan (8) .BR vgchange (8),
.BR lvchange (8),
.BR pvscan (8),
.P
.BR systemd (1),
.BR systemd.target (5),
.BR systemd.special (7),
.P
.BR udev (7) .BR udev (7)
.BR systemd (1)
.BR systemd.target (5)
.BR systemd.special (7)
View File
@ -1,14 +1,16 @@
.TH "LVMCACHE" "7" "LVM TOOLS #VERSION#" "Red Hat, Inc" "\"" .TH "LVMCACHE" "7" "LVM TOOLS #VERSION#" "Red Hat, Inc" "\""
.
.SH NAME .SH NAME
.
lvmcache \(em LVM caching lvmcache \(em LVM caching
.
.SH DESCRIPTION .SH DESCRIPTION
.
\fBlvm\fP(8) includes two kinds of caching that can be used to improve the \fBlvm\fP(8) includes two kinds of caching that can be used to improve the
performance of a Logical Volume (LV). When caching, varying subsets of an performance of a Logical Volume (LV). When caching, varying subsets of an
LV's data are temporarily stored on a smaller, faster device (e.g. an SSD) LV's data are temporarily stored on a smaller, faster device (e.g. an SSD)
to improve the performance of the LV. to improve the performance of the LV.
.P
To do this with lvm, a new special LV is first created from the faster To do this with lvm, a new special LV is first created from the faster
device. This LV will hold the cache. Then, the new fast LV is attached to device. This LV will hold the cache. Then, the new fast LV is attached to
the main LV by way of an lvconvert command. lvconvert inserts one of the the main LV by way of an lvconvert command. lvconvert inserts one of the
@ -17,164 +19,162 @@ mapper target combines the main LV and fast LV into a hybrid device that looks
like the main LV, but has better performance. While the main LV is being like the main LV, but has better performance. While the main LV is being
used, portions of its data will be temporarily and transparently stored on used, portions of its data will be temporarily and transparently stored on
the special fast LV. the special fast LV.
.P
The two kinds of caching are: The two kinds of caching are:
.P
.IP \[bu] 2 .IP \[bu] 2
A read and write hot-spot cache, using the dm-cache kernel module. A read and write hot-spot cache, using the dm-cache kernel module.
This cache tracks access patterns and adjusts its content deliberately so This cache tracks access patterns and adjusts its content deliberately so
that commonly used parts of the main LV are likely to be found on the fast that commonly used parts of the main LV are likely to be found on the fast
storage. LVM refers to this using the LV type \fBcache\fP. storage. LVM refers to this using the LV type \fBcache\fP.
.
.IP \[bu] 2 .IP \[bu]
A write cache, using the dm-writecache kernel module. This cache can be A write cache, using the dm-writecache kernel module. This cache can be
used with SSD or PMEM devices to speed up all writes to the main LV. Data used with SSD or PMEM devices to speed up all writes to the main LV. Data
read from the main LV is not stored in the cache, only newly written data. read from the main LV is not stored in the cache, only newly written data.
LVM refers to this using the LV type \fBwritecache\fP. LVM refers to this using the LV type \fBwritecache\fP.
.
.SH USAGE .SH USAGE
.
.B 1. Identify main LV that needs caching .SS 1. Identify main LV that needs caching
.
The main LV may already exist, and is located on larger, slower devices. The main LV may already exist, and is located on larger, slower devices.
A main LV would be created with a command like: A main LV would be created with a command like:
.P
.nf # lvcreate -n main -L Size vg /dev/slow_hhd
$ lvcreate -n main -L Size vg /dev/slow_hhd .
.fi .SS 2. Identify fast LV to use as the cache
.
.B 2. Identify fast LV to use as the cache
A fast LV is created using one or more fast devices, like an SSD. This A fast LV is created using one or more fast devices, like an SSD. This
special LV will be used to hold the cache: special LV will be used to hold the cache:
.P
.nf # lvcreate -n fast -L Size vg /dev/fast_ssd
$ lvcreate -n fast -L Size vg /dev/fast_ssd .P
# lvs -a
$ lvs -a
LV Attr Type Devices LV Attr Type Devices
fast -wi------- linear /dev/fast_ssd fast -wi------- linear /dev/fast_ssd
main -wi------- linear /dev/slow_hhd main -wi------- linear /dev/slow_hhd
.fi .fi
.
.B 3. Start caching the main LV .SS 3. Start caching the main LV
.
To start caching the main LV, convert the main LV to the desired caching To start caching the main LV, convert the main LV to the desired caching
type, and specify the fast LV to use as the cache: type, and specify the fast LV to use as the cache:
.P
.nf
using dm-cache (with cachepool): using dm-cache (with cachepool):
.P
$ lvconvert --type cache --cachepool fast vg/main # lvconvert --type cache --cachepool fast vg/main
.P
using dm-cache (with cachevol): using dm-cache (with cachevol):
.P
$ lvconvert --type cache --cachevol fast vg/main # lvconvert --type cache --cachevol fast vg/main
.P
using dm-writecache (with cachevol): using dm-writecache (with cachevol):
.P
$ lvconvert --type writecache --cachevol fast vg/main # lvconvert --type writecache --cachevol fast vg/main
.P
For more alternatives see: For more alternatives see:
.br
dm-cache command shortcut dm-cache command shortcut
.br
dm-cache with separate data and metadata LVs dm-cache with separate data and metadata LVs
.fi .
.SS 4. Display LVs
.B 4. Display LVs .
Once the fast LV has been attached to the main LV, lvm reports the main LV Once the fast LV has been attached to the main LV, lvm reports the main LV
type as either \fBcache\fP or \fBwritecache\fP depending on the type used. type as either \fBcache\fP or \fBwritecache\fP depending on the type used.
While attached, the fast LV is hidden, and renamed with a _cvol or _cpool While attached, the fast LV is hidden, and renamed with a _cvol or _cpool
suffix. It is displayed by lvs -a. The _corig or _wcorig LV represents suffix. It is displayed by lvs -a. The _corig or _wcorig LV represents
the original LV without the cache. the original LV without the cache.
.sp
.nf
using dm-cache (with cachepool): using dm-cache (with cachepool):
.P
$ lvs -ao+devices # lvs -ao+devices
.nf
LV Pool Type Devices LV Pool Type Devices
main [fast_cpool] cache main_corig(0) main [fast_cpool] cache main_corig(0)
[fast_cpool] cache-pool fast_pool_cdata(0) [fast_cpool] cache-pool fast_pool_cdata(0)
[fast_cpool_cdata] linear /dev/fast_ssd [fast_cpool_cdata] linear /dev/fast_ssd
[fast_cpool_cmeta] linear /dev/fast_ssd [fast_cpool_cmeta] linear /dev/fast_ssd
[main_corig] linear /dev/slow_hhd [main_corig] linear /dev/slow_hhd
.fi
.sp
using dm-cache (with cachevol): using dm-cache (with cachevol):
.P
$ lvs -ao+devices # lvs -ao+devices
.P
.nf
LV Pool Type Devices LV Pool Type Devices
main [fast_cvol] cache main_corig(0) main [fast_cvol] cache main_corig(0)
[fast_cvol] linear /dev/fast_ssd [fast_cvol] linear /dev/fast_ssd
[main_corig] linear /dev/slow_hhd [main_corig] linear /dev/slow_hhd
.fi
.sp
using dm-writecache (with cachevol): using dm-writecache (with cachevol):
.P
$ lvs -ao+devices # lvs -ao+devices
.P
.nf
LV Pool Type Devices LV Pool Type Devices
main [fast_cvol] writecache main_wcorig(0) main [fast_cvol] writecache main_wcorig(0)
[fast_cvol] linear /dev/fast_ssd [fast_cvol] linear /dev/fast_ssd
[main_wcorig] linear /dev/slow_hhd [main_wcorig] linear /dev/slow_hhd
.fi .fi
.
.B 5. Use the main LV .SS 5. Use the main LV
.
Use the LV until the cache is no longer wanted, or needs to be changed. Use the LV until the cache is no longer wanted, or needs to be changed.
.
.B 6. Stop caching .SS 6. Stop caching
.
To stop caching the main LV and also remove unneeded cache pool, To stop caching the main LV and also remove unneeded cache pool,
use the --uncache option: use the --uncache option:
.P
# lvconvert --uncache vg/main
.P
# lvs -a
.nf .nf
$ lvconvert --uncache vg/main
$ lvs -a
LV VG Attr Type Devices LV VG Attr Type Devices
main vg -wi------- linear /dev/slow_hhd main vg -wi------- linear /dev/slow_hhd
.fi
.P
To stop caching the main LV, separate the fast LV from the main LV. This To stop caching the main LV, separate the fast LV from the main LV. This
changes the type of the main LV back to what it was before the cache was changes the type of the main LV back to what it was before the cache was
attached. attached.
.P
# lvconvert --splitcache vg/main
.P
# lvs -a
.nf .nf
$ lvconvert --splitcache vg/main
$ lvs -a
LV VG Attr Type Devices LV VG Attr Type Devices
fast vg -wi------- linear /dev/fast_ssd fast vg -wi------- linear /dev/fast_ssd
main vg -wi------- linear /dev/slow_hhd main vg -wi------- linear /dev/slow_hhd
.fi .fi
.
.SS Create a new LV with caching. .SS 7. Create a new LV with caching
.
A new LV can be created with caching attached at the time of creation A new LV can be created with caching attached at the time of creation
using the following command: using the following command:
.P
.nf .nf
$ lvcreate --type cache|writecache -n Name -L Size # lvcreate --type cache|writecache -n Name -L Size
--cachedevice /dev/fast_ssd vg /dev/slow_hhd --cachedevice /dev/fast_ssd vg /dev/slow_hhd
.fi .fi
.P
The main LV is created with the specified Name and Size from the slow_hhd. The main LV is created with the specified Name and Size from the slow_hhd.
A hidden fast LV is created on the fast_ssd and is then attached to the A hidden fast LV is created on the fast_ssd and is then attached to the
new main LV. If the fast_ssd is unused, the entire disk will be used as new main LV. If the fast_ssd is unused, the entire disk will be used as
the cache unless the --cachesize option is used to specify a size for the the cache unless the --cachesize option is used to specify a size for the
fast LV. The --cachedevice option can be repeated to use multiple disks fast LV. The --cachedevice option can be repeated to use multiple disks
for the fast LV. for the fast LV.
.
.SH OPTIONS .SH OPTIONS
.
\&
.SS option args .SS option args
.
\&
.B --cachepool .B --cachepool
.IR CachePoolLV | LV .IR CachePoolLV | LV
.br .P
Pass this option a cachepool LV or a standard LV. When using a cache Pass this option a cachepool LV or a standard LV. When using a cache
pool, lvm places cache data and cache metadata on different LVs. The two pool, lvm places cache data and cache metadata on different LVs. The two
LVs together are called a cache pool. This gives slightly better performance LVs together are called a cache pool. This gives slightly better performance
@ -184,19 +184,17 @@ A cache pool is represented as a special type of LV
that cannot be used directly. If a standard LV is passed with this that cannot be used directly. If a standard LV is passed with this
option, lvm will first convert it to a cache pool by combining it with option, lvm will first convert it to a cache pool by combining it with
another LV to use for metadata. This option can be used with dm-cache. another LV to use for metadata. This option can be used with dm-cache.
.P
.B --cachevol .B --cachevol
.I LV .I LV
.br .P
Pass this option a fast LV that should be used to hold the cache. With a Pass this option a fast LV that should be used to hold the cache. With a
cachevol, cache data and metadata are stored in different parts of the cachevol, cache data and metadata are stored in different parts of the
same fast LV. This option can be used with dm-writecache or dm-cache. same fast LV. This option can be used with dm-writecache or dm-cache.
.P
.B --cachedevice .B --cachedevice
.I PV .I PV
.br .P
This option can be used in place of --cachevol, in which case a cachevol This option can be used in place of --cachevol, in which case a cachevol
LV will be created using the specified device. This option can be LV will be created using the specified device. This option can be
repeated to create a cachevol using multiple devices, or a tag name can be repeated to create a cachevol using multiple devices, or a tag name can be
@ -204,112 +202,96 @@ specified in which case the cachevol will be created using any of the
devices with the given tag. If a named cache device is unused, the entire devices with the given tag. If a named cache device is unused, the entire
device will be used to create the cachevol. To create a cachevol of a device will be used to create the cachevol. To create a cachevol of a
specific size from the cache devices, include the --cachesize option. specific size from the cache devices, include the --cachesize option.
.
\&
.SS dm-cache block size .SS dm-cache block size
.
\&
A cache pool will have a logical block size of 4096 bytes if it is created A cache pool will have a logical block size of 4096 bytes if it is created
on a device with a logical block size of 4096 bytes. on a device with a logical block size of 4096 bytes.
.P
If a main LV has logical block size 512 (with an existing xfs file system If a main LV has logical block size 512 (with an existing xfs file system
using that size), then it cannot use a cache pool with a 4096 logical using that size), then it cannot use a cache pool with a 4096 logical
block size. If the cache pool is attached, the main LV will likely fail block size. If the cache pool is attached, the main LV will likely fail
to mount. to mount.
.P
To avoid this problem, use a mkfs option to specify a 4096 block size for To avoid this problem, use a mkfs option to specify a 4096 block size for
the file system, or attach the cache pool before running mkfs. the file system, or attach the cache pool before running mkfs.
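The compatibility rule above can be sketched as a small check. This is an illustrative helper, not an lvm command; the block sizes are supplied by hand rather than queried from real devices:

```shell
# Sketch of the rule described above: a main LV with a 512-byte logical
# block size cannot use a cache pool that has a 4096-byte logical block
# size. Arguments are the main LV and cache pool logical block sizes.
compatible() {
  main_bs=$1
  pool_bs=$2
  [ "$pool_bs" -le "$main_bs" ]
}

compatible 512 4096 || echo "incompatible: attach cache before mkfs, or mkfs with a 4096 block size"
compatible 4096 4096 && echo "compatible"
```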
.
.SS dm-writecache block size .SS dm-writecache block size
.
\&
The dm-writecache block size can be 4096 bytes (the default), or 512 The dm-writecache block size can be 4096 bytes (the default), or 512
bytes. The default 4096 has better performance and should be used except bytes. The default 4096 has better performance and should be used except
when 512 is necessary for compatibility. The dm-writecache block size is when 512 is necessary for compatibility. The dm-writecache block size is
specified with --cachesettings block_size=4096|512 when caching is started. specified with --cachesettings block_size=4096|512 when caching is started.
.P
When a file system like xfs already exists on the main LV prior to When a file system like xfs already exists on the main LV prior to
caching, and the file system is using a block size of 512, then the caching, and the file system is using a block size of 512, then the
writecache block size should be set to 512. (The file system will likely writecache block size should be set to 512. (The file system will likely
fail to mount if writecache block size of 4096 is used in this case.) fail to mount if writecache block size of 4096 is used in this case.)
.P
Check the xfs sector size while the fs is mounted: Check the xfs sector size while the fs is mounted:
.P
# xfs_info /dev/vg/main
.nf .nf
$ xfs_info /dev/vg/main
Look for sectsz=512 or sectsz=4096 Look for sectsz=512 or sectsz=4096
.fi .fi
.P
The writecache block size should be chosen to match the xfs sectsz value. The writecache block size should be chosen to match the xfs sectsz value.
.P
It is also possible to specify a sector size of 4096 to mkfs.xfs when It is also possible to specify a sector size of 4096 to mkfs.xfs when
creating the file system. In this case the writecache block size of 4096 creating the file system. In this case the writecache block size of 4096
can be used. can be used.
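The sectsz check above can be scripted. The sample line below is a stand-in for real `xfs_info /dev/vg/main` output (the device path and field values are illustrative):

```shell
# Sketch: extract the xfs sector size to choose the matching
# dm-writecache block_size (512 or 4096). "sample" stands in for the
# output of: xfs_info /dev/vg/main
sample="meta-data=/dev/vg/main isize=512 agcount=4 sectsz=512 attr=2"
sectsz=$(printf '%s\n' "$sample" | grep -o 'sectsz=[0-9]*' | cut -d= -f2)
echo "--cachesettings block_size=$sectsz"
```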
.
.SS dm-writecache settings .SS dm-writecache settings
.
\&
Tunable parameters can be passed to the dm-writecache kernel module using Tunable parameters can be passed to the dm-writecache kernel module using
the --cachesettings option when caching is started, e.g. the --cachesettings option when caching is started, e.g.
.P
.nf .nf
$ lvconvert --type writecache --cachevol fast \\ # lvconvert --type writecache --cachevol fast \\
--cachesettings 'high_watermark=N writeback_jobs=N' vg/main --cachesettings 'high_watermark=N writeback_jobs=N' vg/main
.fi .fi
.P
Tunable options are: Tunable options are:
.
.IP \[bu] 2 .TP
high_watermark = <percent> high_watermark = <percent>
Start writeback when the writecache usage reaches this percent (0-100). Start writeback when the writecache usage reaches this percent (0-100).
.
.IP \[bu] 2 .TP
low_watermark = <percent> low_watermark = <percent>
Stop writeback when the writecache usage reaches this percent (0-100). Stop writeback when the writecache usage reaches this percent (0-100).
.
.IP \[bu] 2 .TP
writeback_jobs = <count> writeback_jobs = <count>
Limit the number of blocks that are in flight during writeback. Setting Limit the number of blocks that are in flight during writeback. Setting
this value reduces writeback throughput, but it may improve latency of this value reduces writeback throughput, but it may improve latency of
read requests. read requests.
.
.IP \[bu] 2 .TP
autocommit_blocks = <count> autocommit_blocks = <count>
When the application writes this amount of blocks without issuing the When the application writes this amount of blocks without issuing the
FLUSH request, the blocks are automatically committed. FLUSH request, the blocks are automatically committed.
.
.IP \[bu] 2 .TP
autocommit_time = <milliseconds> autocommit_time = <milliseconds>
The data is automatically committed if this time passes and no FLUSH The data is automatically committed if this time passes and no FLUSH
request is received. request is received.
.
.IP \[bu] 2 .TP
fua = 0|1 fua = 0|1
Use the FUA flag when writing data from persistent memory back to the Use the FUA flag when writing data from persistent memory back to the
underlying device. underlying device.
Applicable only to persistent memory. Applicable only to persistent memory.
.
.IP \[bu] 2 .TP
nofua = 0|1 nofua = 0|1
Don't use the FUA flag when writing back data and send the FLUSH request Don't use the FUA flag when writing back data and send the FLUSH request
afterwards. Some underlying devices perform better with fua, some with afterwards. Some underlying devices perform better with fua, some with
nofua. Testing is necessary to determine which. nofua. Testing is necessary to determine which.
Applicable only to persistent memory. Applicable only to persistent memory.
.
.IP \[bu] 2 .TP
cleaner = 0|1 cleaner = 0|1
Setting cleaner=1 enables the writecache cleaner mode in which data is Setting cleaner=1 enables the writecache cleaner mode in which data is
gradually flushed from the cache. If this is done prior to detaching the gradually flushed from the cache. If this is done prior to detaching the
writecache, then the splitcache command will have little or no flushing to writecache, then the splitcache command will have little or no flushing to
@ -317,99 +299,88 @@ perform. If not done beforehand, the splitcache command enables the
cleaner mode and waits for flushing to complete before detaching the cleaner mode and waits for flushing to complete before detaching the
writecache. Adding cleaner=0 to the splitcache command will skip the writecache. Adding cleaner=0 to the splitcache command will skip the
cleaner mode, and any required flushing is performed in device suspend. cleaner mode, and any required flushing is performed in device suspend.
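Several of the tunables above can be combined into one --cachesettings string. This hypothetical helper only echoes the lvconvert command it would run; vg/main, the LV name, and the values are illustrative:

```shell
# Assemble a --cachesettings string from the dm-writecache tunables
# listed above, then print (not execute) the resulting command.
settings="high_watermark=50 writeback_jobs=1024 cleaner=0"
echo lvconvert --type writecache --cachevol fast \
     --cachesettings "$settings" vg/main
```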
.
.SS dm-cache with separate data and metadata LVs .SS dm-cache with separate data and metadata LVs
.
\&
The preferred way of using dm-cache is to place the cache metadata and cache data The preferred way of using dm-cache is to place the cache metadata and cache data
on separate LVs. To do this, a "cache pool" is created, which is a special on separate LVs. To do this, a "cache pool" is created, which is a special
LV that references two sub LVs, one for data and one for metadata. LV that references two sub LVs, one for data and one for metadata.
.P
To create a cache pool of a given data size and let lvm2 calculate an appropriate To create a cache pool of a given data size and let lvm2 calculate an appropriate
metadata size: metadata size:
.P
.nf # lvcreate --type cache-pool -L DataSize -n fast vg /dev/fast_ssd1
$ lvcreate --type cache-pool -L DataSize -n fast vg /dev/fast_ssd1 .P
.fi
To create a cache pool from a separate LV and let lvm2 calculate To create a cache pool from a separate LV and let lvm2 calculate
appropriate cache metadata size: appropriate cache metadata size:
.P
.nf # lvcreate -n fast -L DataSize vg /dev/fast_ssd1
$ lvcreate -n fast -L DataSize vg /dev/fast_ssd1 .br
$ lvconvert --type cache-pool vg/fast /dev/fast_ssd1 # lvconvert --type cache-pool vg/fast /dev/fast_ssd1
.fi .br
.P
To create a cache pool from two separate LVs: To create a cache pool from two separate LVs:
.P
.nf # lvcreate -n fast -L DataSize vg /dev/fast_ssd1
$ lvcreate -n fast -L DataSize vg /dev/fast_ssd1 .br
$ lvcreate -n fastmeta -L MetadataSize vg /dev/fast_ssd2 # lvcreate -n fastmeta -L MetadataSize vg /dev/fast_ssd2
$ lvconvert --type cache-pool --poolmetadata fastmeta vg/fast .br
.fi # lvconvert --type cache-pool --poolmetadata fastmeta vg/fast
.P
Then use the cache pool LV to start caching the main LV: Then use the cache pool LV to start caching the main LV:
.P
.nf # lvconvert --type cache --cachepool fast vg/main
$ lvconvert --type cache --cachepool fast vg/main .P
.fi
A variation of the same procedure automatically creates a cache pool when A variation of the same procedure automatically creates a cache pool when
caching is started. To do this, use a standard LV as the --cachepool caching is started. To do this, use a standard LV as the --cachepool
(this will hold cache data), and use another standard LV as the (this will hold cache data), and use another standard LV as the
--poolmetadata (this will hold cache metadata). LVM will create a --poolmetadata (this will hold cache metadata). LVM will create a
cache pool LV from the two specified LVs, and use the cache pool to start cache pool LV from the two specified LVs, and use the cache pool to start
caching the main LV. caching the main LV.
.P
.nf .nf
$ lvcreate -n fast -L DataSize vg /dev/fast_ssd1 # lvcreate -n fast -L DataSize vg /dev/fast_ssd1
$ lvcreate -n fastmeta -L MetadataSize vg /dev/fast_ssd2 # lvcreate -n fastmeta -L MetadataSize vg /dev/fast_ssd2
$ lvconvert --type cache --cachepool fast --poolmetadata fastmeta vg/main # lvconvert --type cache --cachepool fast --poolmetadata fastmeta vg/main
.fi .fi
.
.SS dm-cache cache modes .SS dm-cache cache modes
.
\&
The default dm-cache cache mode is "writethrough". Writethrough ensures The default dm-cache cache mode is "writethrough". Writethrough ensures
that any data written will be stored both in the cache and on the origin that any data written will be stored both in the cache and on the origin
LV. The loss of a device associated with the cache in this case would not LV. The loss of a device associated with the cache in this case would not
mean the loss of any data. mean the loss of any data.
.P
A second cache mode is "writeback". Writeback delays writing data blocks A second cache mode is "writeback". Writeback delays writing data blocks
from the cache back to the origin LV. This mode will increase from the cache back to the origin LV. This mode will increase
performance, but the loss of a cache device can result in lost data. performance, but the loss of a cache device can result in lost data.
.P
With the --cachemode option, the cache mode can be set when caching is With the --cachemode option, the cache mode can be set when caching is
started, or changed on an LV that is already cached. The current cache started, or changed on an LV that is already cached. The current cache
mode can be displayed with the cache_mode reporting option: mode can be displayed with the cache_mode reporting option:
.P
.B lvs -o+cache_mode VG/LV .B lvs -o+cache_mode VG/LV
.P
.BR lvm.conf (5) .BR lvm.conf (5)
.B allocation/cache_mode .B allocation/cache_mode
.br .br
defines the default cache mode. defines the default cache mode.
.P
.nf .nf
$ lvconvert --type cache --cachemode writethrough \\ # lvconvert --type cache --cachemode writethrough \\
--cachepool fast vg/main --cachepool fast vg/main
.P
# lvconvert --type cache --cachemode writethrough \\
$ lvconvert --type cache --cachemode writethrough \\
--cachevol fast vg/main --cachevol fast vg/main
.fi .fi
.
.SS dm-cache chunk size .SS dm-cache chunk size
.
\&
The size of data blocks managed by dm-cache can be specified with the The size of data blocks managed by dm-cache can be specified with the
--chunksize option when caching is started. The default unit is KiB. The --chunksize option when caching is started. The default unit is KiB. The
value must be a multiple of 32KiB between 32KiB and 1GiB. Cache chunks value must be a multiple of 32KiB between 32KiB and 1GiB. Cache chunks
bigger than 512KiB should only be used when necessary. bigger than 512KiB should only be used when necessary.
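The size constraint can be expressed as a quick sanity check. This is an illustrative sketch, not an lvm tool; values are in KiB:

```shell
# Sketch of the constraint stated above: a dm-cache chunk size must be
# a multiple of 32KiB, between 32KiB and 1GiB (1048576 KiB).
valid_chunksize() {
  [ "$1" -ge 32 ] && [ "$1" -le 1048576 ] && [ $(( $1 % 32 )) -eq 0 ]
}

valid_chunksize 64 && echo "64 KiB: ok"
valid_chunksize 48 || echo "48 KiB: rejected (not a multiple of 32)"
```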
.P
Using a chunk size that is too large can result in wasteful use of the Using a chunk size that is too large can result in wasteful use of the
cache, in which small reads and writes cause large sections of an LV to be cache, in which small reads and writes cause large sections of an LV to be
stored in the cache. It can also require increasing migration threshold stored in the cache. It can also require increasing migration threshold
@ -420,100 +391,90 @@ cache origin LV. However, choosing a chunk size that is too small
can result in more overhead trying to manage the numerous chunks that can result in more overhead trying to manage the numerous chunks that
become mapped into the cache. Overhead can include both excessive CPU become mapped into the cache. Overhead can include both excessive CPU
time searching for chunks, and excessive memory tracking chunks. time searching for chunks, and excessive memory tracking chunks.
.P
Command to display the chunk size: Command to display the chunk size:
.br .P
.B lvs -o+chunksize VG/LV .B lvs -o+chunksize VG/LV
.P
.BR lvm.conf (5) .BR lvm.conf (5)
.B cache_pool_chunk_size .B cache_pool_chunk_size
.br .P
controls the default chunk size. controls the default chunk size.
.P
The default value is shown by: The default value is shown by:
.br .P
.B lvmconfig --type default allocation/cache_pool_chunk_size .B lvmconfig --type default allocation/cache_pool_chunk_size
.P
Checking the migration threshold (in sectors) of a running cached LV: Checking the migration threshold (in sectors) of a running cached LV:
.br .br
.B lvs -o+kernel_cache_settings VG/LV .B lvs -o+kernel_cache_settings VG/LV
.
.SS dm-cache migration threshold .SS dm-cache migration threshold
.
\&
Migrating data between the origin and cache LV uses bandwidth. Migrating data between the origin and cache LV uses bandwidth.
The user can set a throttle to prevent more than a certain amount of The user can set a throttle to prevent more than a certain amount of
migration occurring at any one time. Currently dm-cache does not take migration occurring at any one time. Currently dm-cache does not take
account of normal I/O traffic going to the devices. account of normal I/O traffic going to the devices.
.P
The user can set the migration threshold via cache policy settings as The user can set the migration threshold via cache policy settings as
"migration_threshold=<#sectors>" to set the maximum number "migration_threshold=<#sectors>" to set the maximum number
of sectors being migrated, the default being 2048 sectors (1MiB). of sectors being migrated, the default being 2048 sectors (1MiB).
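The sector arithmetic above can be sketched as a one-line converter (512-byte sectors, so 2048 sectors per MiB), an illustrative helper only:

```shell
# Sketch: convert a migration threshold in MiB to the 512-byte sector
# count expected by "migration_threshold=<#sectors>".
mib_to_sectors() { echo $(( $1 * 2048 )); }

mib_to_sectors 1   # the 1MiB default
mib_to_sectors 2
```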
.P
Command to set migration threshold to 2MiB (4096 sectors): Command to set migration threshold to 2MiB (4096 sectors):
.br .P
.B lvcreate --cachepolicy 'migration_threshold=4096' VG/LV .B lvcreate --cachepolicy 'migration_threshold=4096' VG/LV
.P
Command to display the migration threshold: Command to display the migration threshold:
.br .P
.B lvs -o+kernel_cache_settings,cache_settings VG/LV .B lvs -o+kernel_cache_settings,cache_settings VG/LV
.br .br
.B lvs -o+chunksize VG/LV .B lvs -o+chunksize VG/LV
.
.SS dm-cache cache policy .SS dm-cache cache policy
.
\&
The dm-cache subsystem has additional per-LV parameters: the cache policy The dm-cache subsystem has additional per-LV parameters: the cache policy
to use, and possibly tunable parameters for the cache policy. Three to use, and possibly tunable parameters for the cache policy. Three
policies are currently available: "smq" is the default policy, "mq" is an policies are currently available: "smq" is the default policy, "mq" is an
older implementation, and "cleaner" is used to force the cache to write older implementation, and "cleaner" is used to force the cache to write
back (flush) all cached writes to the origin LV. back (flush) all cached writes to the origin LV.
.P
The older "mq" policy has a number of tunable parameters. The defaults are The older "mq" policy has a number of tunable parameters. The defaults are
chosen to be suitable for the majority of systems, but in special chosen to be suitable for the majority of systems, but in special
circumstances, changing the settings can improve performance. circumstances, changing the settings can improve performance.
.P
With the --cachepolicy and --cachesettings options, the cache policy and With the --cachepolicy and --cachesettings options, the cache policy and
settings can be set when caching is started, or changed on an existing settings can be set when caching is started, or changed on an existing
cached LV (both options can be used together). The current cache policy cached LV (both options can be used together). The current cache policy
and settings can be displayed with the cache_policy and cache_settings and settings can be displayed with the cache_policy and cache_settings
reporting options: reporting options:
.P
.B lvs -o+cache_policy,cache_settings VG/LV .B lvs -o+cache_policy,cache_settings VG/LV
.P
.nf
Change the cache policy and settings of an existing LV. Change the cache policy and settings of an existing LV.
.nf
$ lvchange --cachepolicy mq --cachesettings \\ # lvchange --cachepolicy mq --cachesettings \\
\(aqmigration_threshold=2048 random_threshold=4\(aq vg/main \(aqmigration_threshold=2048 random_threshold=4\(aq vg/main
.fi .fi
.P
.BR lvm.conf (5) .BR lvm.conf (5)
.B allocation/cache_policy .B allocation/cache_policy
.br .br
defines the default cache policy. defines the default cache policy.
.P
.BR lvm.conf (5) .BR lvm.conf (5)
.B allocation/cache_settings .B allocation/cache_settings
.br .br
defines the default cache settings. defines the default cache settings.
.
.SS dm-cache using metadata profiles .SS dm-cache using metadata profiles
.
\&
Cache pools allow setting a variety of options. Many of these settings Cache pools allow setting a variety of options. Many of these settings
can be specified in lvm.conf or profile settings. You can prepare can be specified in lvm.conf or profile settings. You can prepare
a number of different profiles in the #DEFAULT_SYS_DIR#/profile directory a number of different profiles in the #DEFAULT_SYS_DIR#/profile directory
and just specify the metadata profile file name when caching LV or creating cache-pool. and just specify the metadata profile file name when caching LV or creating cache-pool.
Check the output of \fBlvmconfig --type default --withcomments\fP Check the output of \fBlvmconfig --type default --withcomments\fP
for a detailed description of all individual cache settings. for a detailed description of all individual cache settings.
.P
.I Example .I Example
.nf .nf
# cat <<EOF > #DEFAULT_SYS_DIR#/profile/cache_big_chunk.profile # cat <<EOF > #DEFAULT_SYS_DIR#/profile/cache_big_chunk.profile
@ -531,80 +492,74 @@ allocation {
} }
} }
EOF EOF
.P
# lvcreate --cache -L10G --metadataprofile cache_big_chunk vg/main /dev/fast_ssd # lvcreate --cache -L10G --metadataprofile cache_big_chunk vg/main /dev/fast_ssd
# lvcreate --cache -L10G --config 'allocation/cache_pool_chunk_size=512' vg/main /dev/fast_ssd # lvcreate --cache -L10G --config 'allocation/cache_pool_chunk_size=512' vg/main /dev/fast_ssd
.fi .fi
.
.SS dm-cache spare metadata LV .SS dm-cache spare metadata LV
.
\&
See See
.BR lvmthin (7) .BR lvmthin (7)
for a description of the "pool metadata spare" LV. for a description of the "pool metadata spare" LV.
The same concept is used for cache pools. The same concept is used for cache pools.
.
.SS dm-cache metadata formats .SS dm-cache metadata formats
.
\&
There are two disk formats for dm-cache metadata. The metadata format can There are two disk formats for dm-cache metadata. The metadata format can
be specified with --cachemetadataformat when caching is started, and be specified with --cachemetadataformat when caching is started, and
cannot be changed. Format \fB2\fP has better performance; it is more cannot be changed. Format \fB2\fP has better performance; it is more
compact, and stores dirty bits in a separate btree, which improves the compact, and stores dirty bits in a separate btree, which improves the
speed of shutting down the cache. With \fBauto\fP, lvm selects the best speed of shutting down the cache. With \fBauto\fP, lvm selects the best
option provided by the current dm-cache kernel module. option provided by the current dm-cache kernel module.
.
.SS RAID1 cache device .SS RAID1 cache device
.
\&
RAID1 can be used to create the fast LV holding the cache so that it can RAID1 can be used to create the fast LV holding the cache so that it can
tolerate a device failure. (When using dm-cache with separate data tolerate a device failure. (When using dm-cache with separate data
and metadata LVs, each of the sub-LVs can use RAID1.) and metadata LVs, each of the sub-LVs can use RAID1.)
.P
.nf .nf
$ lvcreate -n main -L Size vg /dev/slow # lvcreate -n main -L Size vg /dev/slow
$ lvcreate --type raid1 -m 1 -n fast -L Size vg /dev/ssd1 /dev/ssd2 # lvcreate --type raid1 -m 1 -n fast -L Size vg /dev/ssd1 /dev/ssd2
$ lvconvert --type cache --cachevol fast vg/main # lvconvert --type cache --cachevol fast vg/main
.fi .fi
.
.SS dm-cache command shortcut .SS dm-cache command shortcut
.
\&
A single command can be used to cache the main LV with automatic A single command can be used to cache the main LV with automatic
creation of a cache-pool: creation of a cache-pool:
.P
.nf .nf
$ lvcreate --cache --size CacheDataSize VG/LV [FastPVs] # lvcreate --cache --size CacheDataSize VG/LV [FastPVs]
.fi .fi
.P
or the longer variant or the longer variant
.P
.nf .nf
$ lvcreate --type cache --size CacheDataSize \\ # lvcreate --type cache --size CacheDataSize \\
--name NameCachePool VG/LV [FastPVs] --name NameCachePool VG/LV [FastPVs]
.fi .fi
.P
In this command, the specified LV already exists, and is the main LV to be In this command, the specified LV already exists, and is the main LV to be
cached. The command creates a new cache pool with the given size and name cached. The command creates a new cache pool with the given size and name
(or a name automatically selected from the sequence lvolX_cpool), (or a name automatically selected from the sequence lvolX_cpool),
using the optionally specified fast PV(s) (typically an ssd). Then it using the optionally specified fast PV(s) (typically an ssd). Then it
attaches the new cache pool to the existing main LV to begin caching. attaches the new cache pool to the existing main LV to begin caching.
.P
(Note: ensure that the specified main LV is a standard LV. If a cache (Note: ensure that the specified main LV is a standard LV. If a cache
pool LV is mistakenly specified, then the command does something pool LV is mistakenly specified, then the command does something
different.) different.)
.P
(Note: the type option is interpreted differently by this command than by (Note: the type option is interpreted differently by this command than by
normal lvcreate commands in which --type specifies the type of the newly normal lvcreate commands in which --type specifies the type of the newly
created LV. In this case, an LV with type cache-pool is being created, created LV. In this case, an LV with type cache-pool is being created,
and the existing main LV is being converted to type cache.) and the existing main LV is being converted to type cache.)
.
.SH SEE ALSO .SH SEE ALSO
.
.nh
.ad l
.BR lvm.conf (5), .BR lvm.conf (5),
.BR lvchange (8), .BR lvchange (8),
.BR lvcreate (8), .BR lvcreate (8),
@ -614,7 +569,12 @@ and the existing main LV is being converted to type cache.)
.BR lvrename (8), .BR lvrename (8),
.BR lvresize (8), .BR lvresize (8),
.BR lvs (8), .BR lvs (8),
.br
.BR vgchange (8), .BR vgchange (8),
.BR vgmerge (8), .BR vgmerge (8),
.BR vgreduce (8), .BR vgreduce (8),
.BR vgsplit (8) .BR vgsplit (8),
.P
.BR cache_dump (8),
.BR cache_check (8),
.BR cache_repair (8)
@ -8,31 +8,28 @@ lvmdbusd \(em LVM D-Bus daemon
. .
.ad l .ad l
.B lvmdbusd .B lvmdbusd
.RB [ --debug \] .RB [ --debug ]
.RB [ --udev \] .RB [ --udev ]
.ad b .ad b
. .
.SH DESCRIPTION .SH DESCRIPTION
. .
lvmdbusd is a service which provides a D-Bus API to the logical volume manager (LVM). lvmdbusd is a service which provides a D-Bus API to the logical volume manager (LVM).
Run Run
.BR lvmdbusd (8) .BR lvmdbusd (8)
as root. as root.
. .
.SH OPTIONS .SH OPTIONS
. .
.HP .TP 8
.BR --debug .B --debug
.br Enable debug statements
Enable debug statements
. .
.HP .TP
.BR --udev .B --udev
.br
Use udev events to trigger updates Use udev events to trigger updates
. .
.SH SEE ALSO .SH SEE ALSO
. .
.nh
.BR dbus-send (1), .BR dbus-send (1),
.BR lvm (8) .BR lvm (8)
@ -1,7 +1,11 @@
.TH LVMDUMP 8 "LVM TOOLS #VERSION#" "Red Hat, Inc." .TH LVMDUMP 8 "LVM TOOLS #VERSION#" "Red Hat, Inc."
.
.SH NAME .SH NAME
.
lvmdump \(em create lvm2 information dumps for diagnostic purposes lvmdump \(em create lvm2 information dumps for diagnostic purposes
.
.SH SYNOPSIS .SH SYNOPSIS
.
.B lvmdump .B lvmdump
.RB [ -a ] .RB [ -a ]
.RB [ -c ] .RB [ -c ]
@ -13,46 +17,54 @@ lvmdump \(em create lvm2 information dumps for diagnostic purposes
.RB [ -p ] .RB [ -p ]
.RB [ -s ] .RB [ -s ]
.RB [ -u ] .RB [ -u ]
.
.SH DESCRIPTION .SH DESCRIPTION
.
lvmdump is a tool to dump various information concerning LVM2. lvmdump is a tool to dump various information concerning LVM2.
By default, it creates a tarball suitable for submission along By default, it creates a tarball suitable for submission along
with a problem report. with a problem report.
.PP .P
The content of the tarball is as follows: The content of the tarball is as follows:
.br .ad l
- dmsetup info .PD 0
.br .IP \[bu] 2
- table of currently running processes dmsetup info
.br .IP \[bu]
- recent entries from /var/log/messages (containing system messages) table of currently running processes
.br .IP \[bu]
- complete lvm configuration and cache (content of #DEFAULT_SYS_DIR#) recent entries from /var/log/messages (containing system messages)
.br .IP \[bu]
- list of device nodes present under /dev complete lvm configuration and cache (content of #DEFAULT_SYS_DIR#)
.br .IP \[bu]
- list of files present /sys/block list of device nodes present under /dev
.br .IP \[bu]
- list of files present /sys/devices/virtual/block list of files present /sys/block
.br .IP \[bu]
- if enabled with -m, metadata dump will be also included list of files present /sys/devices/virtual/block
.br .IP \[bu]
- if enabled with -a, debug output of vgscan, pvscan and list of all available volume groups, physical volumes and logical volumes will be included if enabled with -m, metadata dump will be also included
.br .IP \[bu]
- if enabled with -l, lvmetad state if running if enabled with -a, debug output of vgscan, pvscan and list of all available volume groups, physical volumes and logical volumes will be included
.br .IP \[bu]
- if enabled with -p, lvmpolld state if running if enabled with -l, lvmetad state if running
.br .IP \[bu]
- if enabled with -s, system info and context if enabled with -p, lvmpolld state if running
.br .IP \[bu]
- if enabled with -u, udev info and context if enabled with -s, system info and context
.IP \[bu]
if enabled with -u, udev info and context
.PD
.ad b
.
.SH OPTIONS .SH OPTIONS
.
.TP .TP
.B -a .B -a
Advanced collection. Advanced collection.
\fBWARNING\fR: if lvm is already hung, then this script may hang as well \fBWARNING\fR: if lvm is already hung, then this script may hang as well
if \fB-a\fR is used. if \fB-a\fR is used.
.TP .TP
.B -d \fIdirectory .B -d \fIdirectory
Dump into a directory instead of a tarball Dump into a directory instead of a tarball
By default, lvmdump will produce a single compressed tarball containing By default, lvmdump will produce a single compressed tarball containing
all the information. Using this option, it can be instructed to only all the information. Using this option, it can be instructed to only
@ -92,16 +104,19 @@ Gather udev info and context: /etc/udev/udev.conf file, udev daemon version
(content of /lib/udev/rules.d and /etc/udev/rules.d directory), (content of /lib/udev/rules.d and /etc/udev/rules.d directory),
list of files in /lib/udev directory and dump of current udev list of files in /lib/udev directory and dump of current udev
database content (the output of 'udevadm info --export-db' command). database content (the output of 'udevadm info --export-db' command).
.
.SH ENVIRONMENT VARIABLES
.
.TP
.B LVM_BINARY
The LVM2 binary to use.
Defaults to "lvm".
Sometimes you might need to set this to "#LVM_PATH#.static", for example.
.TP
.B DMSETUP_BINARY
The dmsetup binary to use.
Defaults to "dmsetup".
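A minimal sketch of how these defaults behave, using ordinary shell parameter expansion (this mirrors the documented fallback values, not lvmdump's actual code):

```shell
# When the variables are unset, the documented defaults apply.
unset LVM_BINARY DMSETUP_BINARY
lvm_bin="${LVM_BINARY:-lvm}"
dmsetup_bin="${DMSETUP_BINARY:-dmsetup}"
echo "$lvm_bin $dmsetup_bin"
```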
.
.SH SEE ALSO
.
.BR lvm (8)
@@ -1,69 +1,79 @@
.TH "LVMLOCKCTL" "8" "LVM TOOLS #VERSION#" "Red Hat, Inc" "\""
.
.SH NAME
.
lvmlockctl \(em Control for lvmlockd
.
.SH SYNOPSIS
.
.BR lvmlockctl " [" \fIoptions ]
.
.SH DESCRIPTION
.
This command interacts with
.BR lvmlockd (8).
.
.SH OPTIONS
.
.TP
.BR -h | --help
Show this help information.
.
.TP
.BR -q | --quit
Tell lvmlockd to quit.
.
.TP
.BR -i | --info
Print lock state information from lvmlockd.
.
.TP
.BR -d | --dump
Print log buffer from lvmlockd.
.
.TP
.BR -w | --wait\ 0 | 1
Wait option for other commands.
.
.TP
.BR -f | --force\ 0 | 1
Force option for other commands.
.
.TP
.BR -k | --kill " " \fIvgname
Kill access to the VG when sanlock cannot renew lease.
.
.TP
.BR -r | --drop " " \fIvgname
Clear locks for the VG when it is unused after kill (-k).
.
.TP
.BR -E | --gl-enable " " \fIvgname
Tell lvmlockd to enable the global lock in a sanlock VG.
.
.TP
.BR -D | --gl-disable " " \fIvgname
Tell lvmlockd to disable the global lock in a sanlock VG.
.
.TP
.BR -S | --stop-lockspaces
Stop all lockspaces.
.
.SH USAGE
.
.SS info
.
This collects and displays lock state from lvmlockd. The display is
primitive, incomplete and will change in a future version. To print the raw
lock state from lvmlockd, combine this option with --dump|-d.
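For example, the raw state described above could be requested like this (assembled as a string only, since executing it needs a running lvmlockd):

```shell
# Combine -i|--info with -d|--dump to get the raw lock state.
cmd="lvmlockctl --info --dump"
echo "$cmd"
```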
.
.SS dump
.
This collects the circular log buffer of debug statements from lvmlockd
and prints it.
.
.SS kill
.
This is run by sanlock when it loses access to the storage holding leases
for a VG. It runs the command specified in lvm.conf
lvmlockctl_kill_command to deactivate LVs in the VG. If the specified
@@ -73,34 +83,37 @@ is specified, or the command fails, then the user must intervene
to forcefully deactivate LVs in the VG, and if successful, run
lvmlockctl --drop. For more, see
.BR lvmlockd (8).
.
.SS drop
.
This should only be run after a VG has been successfully deactivated
following an lvmlockctl --kill command. It clears the stale lockspace
from lvmlockd. When lvmlockctl_kill_command is used, the --kill
command may run drop automatically. For more, see
.BR lvmlockd (8).
.
.SS gl-enable
.
This enables the global lock in a sanlock VG. This is necessary if the VG
that previously held the global lock is removed. For more, see
.BR lvmlockd (8).
.
.SS gl-disable
.
This disables the global lock in a sanlock VG. This is necessary if the
global lock has mistakenly been enabled in more than one VG. The global
lock should be disabled in all but one sanlock VG. For more, see
.BR lvmlockd (8).
.
.SS stop-lockspaces
.
This tells lvmlockd to stop all lockspaces. It can be useful to stop
lockspaces for VGs that the vgchange --lock-stop command can no longer
see, or to stop the dlm global lockspace which is not directly stopped by
the vgchange command. The wait and force options can be used with this
command.
.
.SH SEE ALSO
.
.BR lvm (8),
.BR lvmlockd (8)
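As a concrete illustration of combining the wait and force options with stop-lockspaces (hypothetical values; a running lvmlockd is required to actually execute it, so the command is only assembled here):

```shell
# Stop all lockspaces, waiting for completion and without forcing.
cmd="lvmlockctl --stop-lockspaces --wait 1 --force 0"
echo "$cmd"
```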
File diff suppressed because it is too large
@@ -1,10 +1,16 @@
.TH LVMPOLLD 8 "LVM TOOLS #VERSION#" "Red Hat Inc" \" -*- nroff -*-
.
.SH NAME
.
lvmpolld \(em LVM poll daemon
.
.SH SYNOPSIS
.
.B lvmpolld
.nh
.ad l
.RB [ -l | --log
.BR all | wire | debug ]
.RB [ -p | --pidfile
.IR pidfile_path ]
.RB [ -s | --socket
@@ -16,75 +22,91 @@ lvmpolld \(em LVM poll daemon
.RB [ -f | --foreground ]
.RB [ -h | --help ]
.RB [ -V | --version ]
.ad b
.hy
.P
.B lvmpolld
.RB [ --dump ]
.
.SH DESCRIPTION
.
lvmpolld is a polling daemon for LVM. The daemon receives requests for polling
of already initialised operations originating in the LVM2 command line tool.
The requests for polling originate in the \fBlvconvert\fP, \fBpvmove\fP,
\fBlvchange\fP or \fBvgchange\fP LVM2 commands.
.P
The purpose of lvmpolld is to reduce the number of spawned background processes
per otherwise unique polling operation. There should be only one. It also
eliminates the possibility of unsolicited termination of a background process by
external factors.
.P
lvmpolld is used by LVM only if it is enabled in \fBlvm.conf\fP(5) by
specifying the \fBglobal/use_lvmpolld\fP setting. If this is not defined in the
LVM configuration explicitly then the default setting is used instead (see the
output of the \fBlvmconfig --type default global/use_lvmpolld\fP command).
.
.SH OPTIONS
.
To run the daemon in a test environment both the pidfile_path and the
socket_path should be changed from the defaults.
.
.TP
.BR -f | --foreground
Don't fork, but run in the foreground.
.TP
.BR -h | --help
Show help information.
.
.TP
.BR -l | --log " " all | wire | debug
Select the type of log messages to generate.
Messages are logged by syslog.
Additionally, when \fB-f\fP is given they are also sent to standard error.
There are two classes of messages: wire and debug. Selecting '\fBall\fP' supplies both
and is equivalent to a comma-separated list \fB-l wire,debug\fP.
.
.TP
.BR -p | --pidfile " " \fIpidfile_path
Path to the pidfile. This overrides both the built-in default
(#DEFAULT_PID_DIR#/lvmpolld.pid) and the environment variable
\fBLVM_LVMPOLLD_PIDFILE\fP. This file is used to prevent more
than one instance of the daemon running simultaneously.
.
.TP
.BR -s | --socket " " \fIsocket_path
Path to the socket file. This overrides both the built-in default
(#DEFAULT_RUN_DIR#/lvmpolld.socket) and the environment variable
\fBLVM_LVMPOLLD_SOCKET\fP.
.
.TP
.BR -t | --timeout " " \fItimeout_value
The daemon may shut down after being idle for the given time (in seconds). When the
option is omitted or the value given is zero the daemon never shuts down on idle.
.
.TP
.BR -B | --binary " " \fIlvm_binary_path
Optional path to an alternative LVM binary (default: #LVM_PATH#). Use for
testing purposes only.
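Putting the test-environment advice above together, a sketch of a sandboxed invocation (all paths and the timeout are example values, and the command is only assembled as a string since starting the daemon needs lvmpolld installed):

```shell
# Foreground daemon with full logging, a private pidfile and socket,
# and a 60 second idle timeout.
pidfile="/tmp/lvmpolld-test.pid"
socket="/tmp/lvmpolld-test.socket"
cmd="lvmpolld -f -l all -p $pidfile -s $socket -t 60"
echo "$cmd"
```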
.
.TP
.BR -V | --version
Display the version of the lvmpolld daemon.
.TP
.B --dump
Contact the running lvmpolld daemon to obtain the complete state and print it
out in a raw format.
.
.SH ENVIRONMENT VARIABLES
.
.TP
.B LVM_LVMPOLLD_PIDFILE
Path for the pid file.
.
.TP
.B LVM_LVMPOLLD_SOCKET
Path for the socket file.
.
.SH SEE ALSO
.
.BR lvm (8),
.BR lvm.conf (5)
@@ -874,7 +874,7 @@ segs_sort="vg_name,lv_name,seg_start"
.nf
# pvs
  PV VG Fmt Attr PSize PFree
  /dev/sda vg lvm2 a-- 100.00m 88.00m
  /dev/sdb vg lvm2 a-- 100.00m 92.00m
@@ -889,13 +889,13 @@ segs_sort="vg_name,lv_name,seg_start"
  /dev/sdb vg lvm2 a-- 100.00m 92.00m 2 23
# vgs
  VG #PV #LV #SN Attr VSize VFree
  vg 2 2 0 wz--n- 200.00m 180.00m
# lvs
  LV VG Attr LSize Pool Origin Move Log Cpy%Sync Convert
  lvol0 vg -wi-a----- 4.00m
  lvol1 vg rwi-a-r--- 4.00m 100.00
# lvs --segments
  LV VG Attr #Str Type SSize
@@ -917,8 +917,8 @@ lvs_sort="-lv_time"
# lvs
  LV LSize Origin Pool Cpy%Sync
  lvol1 4.00m 100.00
  lvol0 4.00m
.fi
You can use \fB-o|--options\fP command line option to override current
@@ -931,18 +931,18 @@ configuration directly on command line.
  lvol0 4.00m
# lvs -o+lv_layout
  LV LSize Origin Pool Cpy%Sync Layout
  lvol1 4.00m 100.00 raid,raid1
  lvol0 4.00m linear
# lvs -o-origin
  LV LSize Pool Cpy%Sync
  lvol1 4.00m 100.00
  lvol0 4.00m
# lvs -o lv_name,lv_size,origin -o+lv_layout -o-origin -O lv_name
  LV LSize Layout
  lvol0 4.00m linear
  lvol1 4.00m raid,raid1
.fi
@@ -1012,11 +1012,11 @@ compact_output=1
# lvs
  LV LSize Cpy%Sync
  lvol1 4.00m 100.00
  lvol0 4.00m
# lvs vg/lvol0
  LV LSize
  lvol0 4.00m
.fi
@@ -1031,17 +1031,17 @@ compact_output_cols="origin"
# lvs
  LV LSize Pool Cpy%Sync
  lvol1 4.00m 100.00
  lvol0 4.00m
# lvs vg/lvol0
  LV LSize Pool
  lvol0 4.00m
# lvs -o#pool_lv
  LV LSize Origin Cpy%Sync
  lvol1 4.00m 100.00
  lvol0 4.00m
.fi
We will use \fBreport/compact_output=1\fP for subsequent examples.
@@ -1057,8 +1057,8 @@ configuration setting (or \fB--nosuffix\fP command line option) to change this.
.nf
# lvs --units b --nosuffix
  LV LSize Cpy%Sync
  lvol1 4194304 100.00
  lvol0 4194304
.fi
If you want to configure whether report headings are displayed or not, use
@@ -1067,8 +1067,8 @@ line option).
.nf
# lvs --noheadings
  lvol1 4.00m 100.00
  lvol0 4.00m
.fi
In some cases, it may be useful to display report content as key=value pairs
@@ -1124,12 +1124,12 @@ properly.
# lvs --separator " | "
  LV | LSize | Cpy%Sync
  lvol1 | 4.00m | 100.00
  lvol0 | 4.00m |
# lvs --separator " | " --aligned
  LV | LSize | Cpy%Sync
  lvol1 | 4.00m | 100.00
  lvol0 | 4.00m |
.fi
Let's display one more field in addition ("lv_tags" in this example)
@@ -1137,8 +1137,8 @@ for the lvs report output.
.nf
# lvs -o+lv_tags
  LV LSize Cpy%Sync LV Tags
  lvol1 4.00m 100.00
  lvol0 4.00m tagA,tagB
.fi
@@ -1152,8 +1152,8 @@ definition.
list_item_separator=";"
# lvs -o+tags
  LV LSize Cpy%Sync LV Tags
  lvol1 4.00m 100.00
  lvol0 4.00m tagA;tagB
.fi
@@ -1169,9 +1169,9 @@ and time is displayed, including timezone.
time_format="%Y-%m-%d %T %z"
# lvs -o+time
  LV LSize Cpy%Sync CTime
  lvol1 4.00m 100.00 2016-08-29 12:53:36 +0200
  lvol0 4.00m 2016-08-29 10:15:17 +0200
.fi
We can change the time format in a similar way as we do when using \fBdate\fP(1)
@@ -1185,9 +1185,9 @@ below, we decided to use %s for number of seconds since Epoch (1970-01-01 UTC).
time_format="%s"
# lvs
  LV Attr LSize Cpy%Sync LV Tags CTime
  lvol1 rwi-a-r--- 4.00m 100.00 1472468016
  lvol0 -wi-a----- 4.00m tagA,tagB 1472458517
.fi
The \fBlvs\fP does not display hidden LVs by default - to include these LVs
@@ -1197,12 +1197,12 @@ these hidden LVs are displayed within square brackets.
.nf
# lvs -a
  LV LSize Cpy%Sync
  lvol1 4.00m 100.00
  [lvol1_rimage_0] 4.00m
  [lvol1_rmeta_0] 4.00m
  [lvol1_rimage_1] 4.00m
  [lvol1_rmeta_1] 4.00m
  lvol0 4.00m
.fi
You can configure LVM to display the square brackets for hidden LVs or not with
@@ -1214,12 +1214,12 @@ mark_hidden_devices=0
# lvs -a
  LV LSize Cpy%Sync
  lvol1 4.00m 100.00
  lvol1_rimage_0 4.00m
  lvol1_rmeta_0 4.00m
  lvol1_rimage_1 4.00m
  lvol1_rmeta_1 4.00m
  lvol0 4.00m
.fi
It's not recommended to use LV marks for hidden devices to decide whether the
@@ -1229,13 +1229,13 @@ used by LVM only and they should not be accessed directly by end users.
.nf
# lvs -a -o+lv_role
  LV LSize Cpy%Sync Role
  lvol1 4.00m 100.00 public
  lvol1_rimage_0 4.00m private,raid,image
  lvol1_rmeta_0 4.00m private,raid,metadata
  lvol1_rimage_1 4.00m private,raid,image
  lvol1_rmeta_1 4.00m private,raid,metadata
  lvol0 4.00m public
.fi
Some of the reporting fields that LVM reports are of binary nature. For such
@@ -1245,7 +1245,7 @@ undefined).
.nf
# lvs -o+lv_active_locally
  LV LSize Cpy%Sync ActLocal
  lvol1 4.00m 100.00 active locally
  lvol0 4.00m active locally
.fi
@@ -1258,7 +1258,7 @@ We can change the way how these binary values are displayed with
binary_values_as_numeric=1
# lvs -o+lv_active_locally
  LV LSize Cpy%Sync ActLocal
  lvol1 4.00m 100.00 1
  lvol0 4.00m 1
.fi
@@ -1342,11 +1342,11 @@ addition to lvol0 and lvol1 we used in our previous examples.
.nf
# lvs -o name,size,origin,snap_percent,tags,time
  LV LSize Origin Snap% LV Tags CTime
  lvol4 4.00m lvol2 24.61 2016-09-09 16:57:44 +0200
  lvol3 4.00m lvol2 5.08 2016-09-09 16:56:48 +0200
  lvol2 8.00m tagA,tagC,tagD 2016-09-09 16:55:12 +0200
  lvol1 4.00m 2016-08-29 12:53:36 +0200
  lvol0 4.00m tagA,tagB 2016-08-29 10:15:17 +0200
.fi
@@ -1359,29 +1359,29 @@ together.
.nf
# lvs -o name,size,snap_percent -S 'size=8m'
  LV LSize
  lvol2 8.00m
# lvs -o name,size,snap_percent -S 'size=8'
  LV LSize
  lvol2 8.00m
# lvs -o name,size,snap_percent -S 'size < 5000k'
  LV LSize Snap%
  lvol4 4.00m 24.61
  lvol3 4.00m 5.08
  lvol1 4.00m
  lvol0 4.00m
# lvs -o name,size,snap_percent -S 'size < 5000k && snap_percent > 20'
  LV LSize Snap%
  lvol4 4.00m 24.61
# lvs -o name,size,snap_percent \\
      -S '(size < 5000k && snap_percent > 20%) || name=lvol2'
  LV LSize Snap%
  lvol4 4.00m 24.61
  lvol2 8.00m
.fi
You can also use selection together with processing-oriented commands.
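For instance, the same -S|--select strings shown for lvs can drive a processing command; a hedged sketch (the tag name is taken from the examples above, and the command is only assembled as a string since running it needs a real VG):

```shell
# Deactivate only the LVs carrying tag "tagA".
cmd="lvchange -an -S 'lv_tags={tagA}'"
echo "$cmd"
```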
@@ -1409,14 +1409,14 @@ that matches. If the subset has only one item, we can leave out { }.
.nf
# lvs -o name,tags -S 'tags={tagA}'
  LV LV Tags
  lvol2 tagA,tagC,tagD
  lvol0 tagA,tagB
# lvs -o name,tags -S 'tags=tagA'
  LV LV Tags
  lvol2 tagA,tagC,tagD
  lvol0 tagA,tagB
.fi
Depending on whether we use "&&" (or ",") or "||" (or "#") as delimiter
@@ -1425,23 +1425,23 @@ we either match subset ("&&" or ",") or even intersection ("||" or "#").
.nf
# lvs -o name,tags -S 'tags={tagA,tagC,tagD}'
  LV LV Tags
  lvol2 tagA,tagC,tagD
# lvs -o name,tags -S 'tags={tagA || tagC || tagD}'
  LV LV Tags
  lvol2 tagA,tagC,tagD
  lvol0 tagA,tagB
.fi
To match the complete set, use [ ] with "&&" (or ",") as delimiter for items.
Also note that the order in which we define items in the set is not relevant.
.nf
# lvs -o name,tags -S 'tags=[tagA]'
# lvs -o name,tags -S 'tags=[tagB,tagA]'
  LV LV Tags
  lvol0 tagA,tagB
.fi
@@ -1449,9 +1449,9 @@ If you use [ ] with "||" (or "#"), this is exactly the same as using { }.
.nf
# lvs -o name,tags -S 'tags=[tagA || tagC || tagD]'
  LV LV Tags
  lvol2 tagA,tagC,tagD
  lvol0 tagA,tagB
.fi
To match a set with no items, use "" to denote this (note that we have
@@ -1460,15 +1460,15 @@ the example below because it's blank and so it gets compacted).
.nf
# lvs -o name,tags -S 'tags=""'
  LV
  lvol4
  lvol3
  lvol1
# lvs -o name,tags -S 'tags!=""'
  LV LV Tags
  lvol2 tagA,tagC,tagD
  lvol0 tagA,tagB
.fi
When doing selection based on time fields, we can use either standard,
@@ -1477,40 +1477,40 @@ are using standard forms.
.nf
# lvs -o name,time
  LV CTime
  lvol4 2016-09-09 16:57:44 +0200
  lvol3 2016-09-09 16:56:48 +0200
  lvol2 2016-09-09 16:55:12 +0200
  lvol1 2016-08-29 12:53:36 +0200
  lvol0 2016-08-29 10:15:17 +0200
# lvs -o name,time -S 'time since "2016-09-01"'
  LV CTime
  lvol4 2016-09-09 16:57:44 +0200
  lvol3 2016-09-09 16:56:48 +0200
  lvol2 2016-09-09 16:55:12 +0200
# lvs -o name,time -S 'time since "2016-09-09 16:56"'
  LV CTime
  lvol4 2016-09-09 16:57:44 +0200
  lvol3 2016-09-09 16:56:48 +0200
# lvs -o name,time -S 'time since "2016-09-09 16:57:30"'
  LV CTime
  lvol4 2016-09-09 16:57:44 +0200
# lvs -o name,time \\
      -S 'time since "2016-08-29" && time until "2016-09-09 16:55:12"'
  LV CTime
  lvol2 2016-09-09 16:55:12 +0200
  lvol1 2016-08-29 12:53:36 +0200
  lvol0 2016-08-29 10:15:17 +0200
# lvs -o name,time \\
      -S 'time since "2016-08-29" && time before "2016-09-09 16:55:12"'
  LV CTime
  lvol1 2016-08-29 12:53:36 +0200
  lvol0 2016-08-29 10:15:17 +0200
.fi
Time operators have synonyms: ">=" for since, "<=" for until,
@@ -1519,75 +1519,75 @@ Time operators have synonyms: ">=" for since, "<=" for until,
.nf
# lvs -o name,time \\
      -S 'time >= "2016-08-29" && time <= "2016-09-09 16:55:30"'
  LV CTime
  lvol2 2016-09-09 16:55:12 +0200
  lvol1 2016-08-29 12:53:36 +0200
  lvol0 2016-08-29 10:15:17 +0200
# lvs -o name,time \\
      -S 'time since "2016-08-29" && time < "2016-09-09 16:55:12"'
  LV CTime
  lvol1 2016-08-29 12:53:36 +0200
  lvol0 2016-08-29 10:15:17 +0200
.fi
The example below demonstrates using an absolute time expression.
.nf
# lvs -o name,time --config report/time_format="%s"
  LV CTime
  lvol4 1473433064
  lvol3 1473433008
  lvol2 1473432912
  lvol1 1472468016
  lvol0 1472458517
# lvs -o name,time -S 'time since @1473433008'
  LV CTime
  lvol4 2016-09-09 16:57:44 +0200
  lvol3 2016-09-09 16:56:48 +0200
.fi
The examples below demonstrate using freeform time expressions.
.nf
# lvs -o name,time -S 'time since "2 weeks ago"'
  LV CTime
  lvol4 2016-09-09 16:57:44 +0200
  lvol3 2016-09-09 16:56:48 +0200
  lvol2 2016-09-09 16:55:12 +0200
  lvol1 2016-08-29 12:53:36 +0200
  lvol0 2016-08-29 10:15:17 +0200
# lvs -o name,time -S 'time since "1 week ago"'
  LV CTime
  lvol4 2016-09-09 16:57:44 +0200
  lvol3 2016-09-09 16:56:48 +0200
  lvol2 2016-09-09 16:55:12 +0200
# lvs -o name,time -S 'time since "2 weeks ago"'
  LV CTime
  lvol1 2016-08-29 12:53:36 +0200
  lvol0 2016-08-29 10:15:17 +0200
# lvs -o name,time -S 'time before "1 week ago"'
  LV CTime
  lvol1 2016-08-29 12:53:36 +0200
  lvol0 2016-08-29 10:15:17 +0200
# lvs -o name,time -S 'time since "68 hours ago"'
  LV CTime
  lvol4 2016-09-09 16:57:44 +0200
  lvol3 2016-09-09 16:56:48 +0200
  lvol2 2016-09-09 16:55:12 +0200
# lvs -o name,time -S 'time since "1 year 3 months ago"'
  LV CTime
  lvol4 2016-09-09 16:57:44 +0200
  lvol3 2016-09-09 16:56:48 +0200
  lvol2 2016-09-09 16:55:12 +0200
  lvol1 2016-08-29 12:53:36 +0200
  lvol0 2016-08-29 10:15:17 +0200
.fi
.SS Command log reporting
@@ -1615,9 +1615,9 @@ command_log_selection="!(log_type=status && message=success)"
  Logical Volume
  ==============
  LV LSize Cpy%Sync
  lvol1 4.00m 100.00
  lvol0 4.00m
  Command Log
  ===========
  Seq LogType Context ObjType ObjName ObjGrp Msg Errno RetCode
@@ -1638,9 +1638,9 @@ command_log_selection="all"
  Logical Volume
  ==============
  LV LSize Cpy%Sync
  lvol1 4.00m 100.00
  lvol0 4.00m
  Command Log
  ===========
  Seq LogType Context ObjType ObjName ObjGrp Msg Errno RetCode
@@ -1670,7 +1670,7 @@ To configure the log report directly on command line, we need to use
  LV LSize
  lvol1 4.00m
  lvol0 4.00m
  Command Log
  ===========
  ObjType ObjName Msg RetCode
@@ -1763,9 +1763,9 @@ lvm> lvs
  Logical Volume
  ==============
  LV LSize Cpy%Sync
  lvol1 4.00m 100.00
  lvol0 4.00m
  Command Log
  ===========
  Seq LogType Context ObjType ObjName ObjGrp Msg Errno RetCode
@@ -1774,7 +1774,7 @@ lvm> lvs
  3 status processing vg vg success 0 1
  4 status shell cmd lvs success 0 1
lvm> lastlog
  Command Log
  ===========
  Seq LogType Context ObjType ObjName ObjGrp Msg Errno RetCode
@@ -1,12 +1,20 @@
.TH "LVMSADC" "8" "LVM TOOLS #VERSION#" "Red Hat, Inc" "\""
.
.SH NAME
.
lvmsadc \(em LVM system activity data collector
.
.SH SYNOPSIS
.
.B lvmsadc
.
.SH DESCRIPTION
.
lvmsadc is not supported under LVM2. The device-mapper statistics
facility provides similar performance metrics using the \fBdmstats\fP(8)
command.
.
.SH SEE ALSO
.
.BR dmstats (8),
.BR lvm (8)
@@ -1,12 +1,20 @@
.TH "LVMSAR" "8" "LVM TOOLS #VERSION#" "Red Hat, Inc" "\"" .TH "LVMSAR" "8" "LVM TOOLS #VERSION#" "Red Hat, Inc" "\""
.SH "NAME" .
.SH NAME
.
lvmsar \(em LVM system activity reporter lvmsar \(em LVM system activity reporter
.SH "SYNOPSIS" .
.SH SYNOPSIS
.
.B lvmsar .B lvmsar
.SH "DESCRIPTION" .
.SH DESCRIPTION
.
lvmsar is not supported under LVM2. The device-mapper statistics lvmsar is not supported under LVM2. The device-mapper statistics
facility provides similar performance metrics using the \fBdmstats(8)\fP facility provides similar performance metrics using the \fBdmstats(8)\fP
command. command.
.SH "SEE ALSO" .
.BR dmstats (8) .SH SEE ALSO
.
.BR dmstats (8),
.BR lvm (8) .BR lvm (8)
@@ -1,38 +1,39 @@
.TH "LVMSYSTEMID" "7" "LVM TOOLS #VERSION#" "Red Hat, Inc" "\"" .TH "LVMSYSTEMID" "7" "LVM TOOLS #VERSION#" "Red Hat, Inc" "\""
.
.SH NAME .SH NAME
.
lvmsystemid \(em LVM system ID lvmsystemid \(em LVM system ID
.
.SH DESCRIPTION .SH DESCRIPTION
.
The \fBlvm\fP(8) system ID restricts Volume Group (VG) access to one host. The \fBlvm\fP(8) system ID restricts Volume Group (VG) access to one host.
This is useful when a VG is placed on shared storage devices, or when This is useful when a VG is placed on shared storage devices, or when
local devices are visible to both host and guest operating systems. In local devices are visible to both host and guest operating systems. In
cases like these, a VG can be visible to multiple hosts at once, and some cases like these, a VG can be visible to multiple hosts at once, and some
mechanism is needed to protect it from being used by more than one host at mechanism is needed to protect it from being used by more than one host at
a time. a time.
.P
A VG's system ID identifies one host as the VG owner. The host with a A VG's system ID identifies one host as the VG owner. The host with a
matching system ID can use the VG and its LVs, while LVM on other hosts matching system ID can use the VG and its LVs, while LVM on other hosts
will ignore it. This protects the VG from being accidentally used from will ignore it. This protects the VG from being accidentally used from
other hosts. other hosts.
.P
The system ID is a string that uniquely identifies a host. It can be The system ID is a string that uniquely identifies a host. It can be
configured as a custom value, or it can be assigned automatically by LVM configured as a custom value, or it can be assigned automatically by LVM
using some unique identifier already available on the host, e.g. using some unique identifier already available on the host, e.g.
machine-id or uname. machine-id or uname.
.P
When a new VG is created, the system ID of the local host is recorded in When a new VG is created, the system ID of the local host is recorded in
the VG metadata. The creating host then owns the new VG, and LVM on other the VG metadata. The creating host then owns the new VG, and LVM on other
hosts will ignore it. When an existing, exported VG is imported hosts will ignore it. When an existing, exported VG is imported
(vgimport), the system ID of the local host is saved in the VG metadata, (vgimport), the system ID of the local host is saved in the VG metadata,
and the importing host owns the VG. and the importing host owns the VG.
.P
A VG without a system ID can be used by LVM on any host where the VG's A VG without a system ID can be used by LVM on any host where the VG's
devices are visible. When system IDs are not used, device filters should devices are visible. When system IDs are not used, device filters should
be configured on all hosts to exclude the VG's devices from all but one be configured on all hosts to exclude the VG's devices from all but one
host. host.
.P
A A
.B foreign VG .B foreign VG
is a VG seen by a host with a non-matching system ID, i.e. the system ID is a VG seen by a host with a non-matching system ID, i.e. the system ID
@@ -40,195 +41,194 @@ in the VG metadata does not match the system ID configured on the host.
If the host has no system ID, and the VG does, the VG is foreign and LVM If the host has no system ID, and the VG does, the VG is foreign and LVM
will ignore it. If the VG has no system ID, access is unrestricted, and will ignore it. If the VG has no system ID, access is unrestricted, and
LVM can access it from any host, whether the host has a system ID or not. LVM can access it from any host, whether the host has a system ID or not.
.P
Changes to a host's system ID and a VG's system ID can be made in limited Changes to a host's system ID and a VG's system ID can be made in limited
circumstances (see vgexport and vgimport). Improper changes can result in circumstances (see vgexport and vgimport). Improper changes can result in
a host losing access to its VG, or a VG being accidentally damaged by a host losing access to its VG, or a VG being accidentally damaged by
access from an unintended host. Even limited changes to the VG system ID access from an unintended host. Even limited changes to the VG system ID
may not be perfectly reflected across hosts. A more coherent view of may not be perfectly reflected across hosts. A more coherent view of
shared storage requires an inter-host locking system to coordinate access. shared storage requires an inter-host locking system to coordinate access.
.P
Valid system ID characters are the same as valid VG name characters. If a Valid system ID characters are the same as valid VG name characters. If a
system ID contains invalid characters, those characters are omitted and system ID contains invalid characters, those characters are omitted and
remaining characters are used. If a system ID is longer than the maximum remaining characters are used. If a system ID is longer than the maximum
name length, the characters up to the maximum length are used. The name length, the characters up to the maximum length are used. The
maximum length of a system ID is 128 characters. maximum length of a system ID is 128 characters.
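The sanitization rules above (drop invalid characters, then truncate to the maximum length) can be sketched in Python. This is a toy model: the character set is assumed here to be alphanumerics plus "+_.-" (the usual LVM name characters); the authoritative set is defined by LVM's own name validation, not by this sketch.

```python
import string

# Assumed valid characters: alphanumerics plus "+_.-" (LVM defines the real set).
VALID_CHARS = set(string.ascii_letters + string.digits + "+_.-")
MAX_SYSTEM_ID_LEN = 128  # maximum system ID length per the text above


def sanitize_system_id(raw: str) -> str:
    """Omit invalid characters, then keep at most the first 128 characters."""
    kept = "".join(c for c in raw if c in VALID_CHARS)
    return kept[:MAX_SYSTEM_ID_LEN]
```

For example, `sanitize_system_id("host one!")` drops the space and "!" and keeps the rest.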
.P
Print the system ID of a VG to check if it is set: Print the system ID of a VG to check if it is set:
.P
.B vgs -o systemid .B vgs -o systemid
.I VG .I VG
.P
Print the system ID of the local host to check if it is configured: Print the system ID of the local host to check if it is configured:
.P
.B lvm systemid .B lvm systemid
.
.SS Limitations and warnings .SS Limitations and warnings
.
To benefit fully from system ID, all hosts should have a system ID To benefit fully from system ID, all hosts should have a system ID
configured, and all VGs should have a system ID set. Without any method configured, and all VGs should have a system ID set. Without any method
to restrict access, e.g. system ID or device filters, a VG that is visible to restrict access, e.g. system ID or device filters, a VG that is visible
to multiple hosts can be accidentally damaged or destroyed. to multiple hosts can be accidentally damaged or destroyed.
.
.IP \[bu] 2 .IP \[bu] 2
A VG without a system ID can be used without restriction from any host A VG without a system ID can be used without restriction from any host
where it is visible, even from hosts that have a system ID. where it is visible, even from hosts that have a system ID.
.
.IP \[bu] 2 .IP \[bu]
Many VGs will not have a system ID set because LVM has not enabled it by Many VGs will not have a system ID set because LVM has not enabled it by
default, and even when enabled, many VGs were created before the feature default, and even when enabled, many VGs were created before the feature
was added to LVM or enabled. A system ID can be assigned to these VGs by was added to LVM or enabled. A system ID can be assigned to these VGs by
using vgchange --systemid (see below). using vgchange --systemid (see below).
.
.IP \[bu] 2 .IP \[bu]
Two hosts should not be assigned the same system ID. Doing so defeats Two hosts should not be assigned the same system ID. Doing so defeats
the purpose of distinguishing different hosts with this value. the purpose of distinguishing different hosts with this value.
.
.IP \[bu] 2 .IP \[bu]
Orphan PVs (or unused devices) on shared storage are unprotected by the Orphan PVs (or unused devices) on shared storage are unprotected by the
system ID feature. Commands that use these PVs, such as vgcreate or system ID feature. Commands that use these PVs, such as vgcreate or
vgextend, are not prevented from performing conflicting operations and vgextend, are not prevented from performing conflicting operations and
corrupting the PVs. See the corrupting the PVs. See the
.B orphans .B orphans
section for more information. section for more information.
.
.IP \[bu] 2 .IP \[bu]
The system ID does not protect devices in a VG from programs other than LVM. The system ID does not protect devices in a VG from programs other than LVM.
.
.IP \[bu] 2 .IP \[bu]
A host using an old LVM version (without the system ID feature) will not A host using an old LVM version (without the system ID feature) will not
recognize a system ID set in VGs. The old LVM can read a VG with a recognize a system ID set in VGs. The old LVM can read a VG with a
system ID, but is prevented from writing to the VG (or its LVs). system ID, but is prevented from writing to the VG (or its LVs).
The system ID feature changes the write mode of a VG, making it appear The system ID feature changes the write mode of a VG, making it appear
read-only to previous versions of LVM. read-only to previous versions of LVM.
.sp
This also means that if a host downgrades to the old LVM version, it would This also means that if a host downgrades to the old LVM version, it would
lose access to any VGs it had created with a system ID. To avoid this, lose access to any VGs it had created with a system ID. To avoid this,
the system ID should be removed from local VGs before downgrading LVM to a the system ID should be removed from local VGs before downgrading LVM to a
version without the system ID feature. version without the system ID feature.
.
.SS Types of VG access .SS Types of VG access
.
A local VG is meant to be used by a single host. A local VG is meant to be used by a single host.
.P
A shared or clustered VG is meant to be used by multiple hosts. A shared or clustered VG is meant to be used by multiple hosts.
.P
These can be further distinguished as: These can be further distinguished as:
.
.TP
.B Unrestricted: .B Unrestricted:
A local VG that has no system ID. This VG type is unprotected and A local VG that has no system ID. This VG type is unprotected and
accessible to any host. accessible to any host.
.
.TP
.B Owned: .B Owned:
A local VG that has a system ID set, as viewed from the host with a A local VG that has a system ID set, as viewed from the host with a
matching system ID (the owner). This VG type is accessible to the host. matching system ID (the owner). This VG type is accessible to the host.
.
.TP
.B Foreign: .B Foreign:
A local VG that has a system ID set, as viewed from any host with an A local VG that has a system ID set, as viewed from any host with an
non-matching system ID (or no system ID). It is owned by another host. non-matching system ID (or no system ID). It is owned by another host.
This VG type is not accessible to the host. This VG type is not accessible to the host.
.
.TP
.B Exported: .B Exported:
A local VG that has been exported with vgexport and has no system ID. A local VG that has been exported with vgexport and has no system ID.
This VG type can only be accessed by vgimport which will change it to This VG type can only be accessed by vgimport which will change it to
owned. owned.
.
.TP
.B Shared: .B Shared:
A shared or "lockd" VG has the lock_type set and has no system ID. A shared or "lockd" VG has the lock_type set and has no system ID.
A shared VG is meant to be used on shared storage from multiple hosts, A shared VG is meant to be used on shared storage from multiple hosts,
and is only accessible to hosts using lvmlockd. Applicable only if LVM and is only accessible to hosts using lvmlockd. Applicable only if LVM
is compiled with lvmlockd support. is compiled with lvmlockd support.
.
.TP
.B Clustered: .B Clustered:
A clustered or "clvm" VG has the clustered flag set and has no system ID. A clustered or "clvm" VG has the clustered flag set and has no system ID.
A clustered VG is meant to be used on shared storage from multiple hosts, A clustered VG is meant to be used on shared storage from multiple hosts,
and is only accessible to hosts using clvmd. Applicable only if LVM and is only accessible to hosts using clvmd. Applicable only if LVM
is compiled with clvm support. is compiled with clvm support.
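The access-type taxonomy above can be sketched as a small decision function. This is a simplification for illustration only: real LVM consults more state (metadata flags, lvmlockd/clvmd availability, the exported flag's on-disk representation) than the few parameters modeled here.

```python
def classify_vg(vg_system_id, host_system_id, extra_system_ids=(),
                lock_type=None, clustered=False, exported=False):
    """Classify a VG as seen from one host, per the categories above.

    A simplified sketch; parameter names are illustrative, not LVM API names.
    """
    if lock_type:          # "lockd" VG: no system ID, coordinated by lvmlockd
        return "shared"
    if clustered:          # "clvm" VG: no system ID, coordinated by clvmd
        return "clustered"
    if exported:           # vgexport cleared the system ID; vgimport reclaims it
        return "exported"
    if not vg_system_id:   # no system ID at all: any host may use it
        return "unrestricted"
    if vg_system_id == host_system_id or vg_system_id in extra_system_ids:
        return "owned"
    return "foreign"
```

Note how extra_system_ids (described under "Overriding system ID" below in the original page) widens the "owned" match without changing the host's own ID.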
.
.SS Host system ID configuration
.SS Host system ID configuration .
A host's own system ID can be defined in a number of ways. lvm.conf A host's own system ID can be defined in a number of ways. lvm.conf
global/system_id_source defines the method LVM will use to find the local global/system_id_source defines the method LVM will use to find the local
system ID: system ID:
.
.TP .TP
.B none .B none
.br .br
LVM will not use a system ID. LVM is allowed to access VGs without a LVM will not use a system ID. LVM is allowed to access VGs without a
system ID, and will create new VGs without a system ID. An undefined system ID, and will create new VGs without a system ID. An undefined
system_id_source is equivalent to none. system_id_source is equivalent to none.
.sp
.I lvm.conf .I lvm.conf
.nf .nf
global { global {
system_id_source = "none" system_id_source = "none"
} }
.fi .fi
.
.TP .TP
.B machineid .B machineid
.br .br
The content of /etc/machine-id is used as the system ID if available. The content of /etc/machine-id is used as the system ID if available.
See See
.BR machine-id (5) .BR machine-id (5)
and and
.BR systemd-machine-id-setup (1) .BR systemd-machine-id-setup (1)
to check if machine-id is available on the host. to check if machine-id is available on the host.
.sp
.I lvm.conf .I lvm.conf
.nf .nf
global { global {
system_id_source = "machineid" system_id_source = "machineid"
} }
.fi .fi
.
.TP .TP
.B uname .B uname
.br .br
The string utsname.nodename from The string utsname.nodename from
.BR uname (2) .BR uname (2)
is used as the system ID. A uname beginning with "localhost" is used as the system ID. A uname beginning with "localhost"
is ignored and equivalent to none. is ignored and equivalent to none.
.sp
.I lvm.conf .I lvm.conf
.nf .nf
global { global {
system_id_source = "uname" system_id_source = "uname"
} }
.fi .fi
.
.TP .TP
.B lvmlocal .B lvmlocal
.br .br
The system ID is defined in lvmlocal.conf local/system_id. The system ID is defined in lvmlocal.conf local/system_id.
.sp
.I lvm.conf .I lvm.conf
.nf .nf
global { global {
system_id_source = "lvmlocal" system_id_source = "lvmlocal"
} }
.fi .fi
.sp
.I lvmlocal.conf .I lvmlocal.conf
.nf .nf
local { local {
system_id = "example_name" system_id = "example_name"
} }
.fi .fi
.
.TP .TP
.B file .B file
.br .br
The system ID is defined in a file specified by lvm.conf The system ID is defined in a file specified by lvm.conf
global/system_id_file. global/system_id_file.
.sp
.I lvm.conf .I lvm.conf
.nf .nf
global { global {
@@ -236,132 +236,125 @@ global {
system_id_file = "/path/to/file" system_id_file = "/path/to/file"
} }
.fi .fi
.LP .LP
Changing system_id_source will likely cause the system ID of the host to Changing system_id_source will likely cause the system ID of the host to
change, which will prevent the host from using VGs that it previously used change, which will prevent the host from using VGs that it previously used
(see extra_system_ids below to handle this). (see extra_system_ids below to handle this).
.P
If a system_id_source other than none fails to produce a system ID value, If a system_id_source other than none fails to produce a system ID value,
it is the equivalent of having none. The host will be allowed to access it is the equivalent of having none. The host will be allowed to access
VGs with no system ID, but will not be allowed to access VGs with a system VGs with no system ID, but will not be allowed to access VGs with a system
ID set. ID set.
.
.SS Overriding system ID .SS Overriding system ID
.
In some cases, it may be necessary for a host to access VGs with different In some cases, it may be necessary for a host to access VGs with different
system IDs, e.g. if a host's system ID changes, and it wants to use VGs system IDs, e.g. if a host's system ID changes, and it wants to use VGs
that it created with its old system ID. To allow a host to access VGs that it created with its old system ID. To allow a host to access VGs
with other system IDs, those other system IDs can be listed in with other system IDs, those other system IDs can be listed in
lvmlocal.conf local/extra_system_ids. lvmlocal.conf local/extra_system_ids.
.P
.I lvmlocal.conf .I lvmlocal.conf
.nf .nf
local { local {
extra_system_ids = [ "my_other_name" ] extra_system_ids = [ "my_other_name" ]
} }
.fi .fi
.P
A safer option may be configuring the extra values as needed on the A safer option may be configuring the extra values as needed on the
command line as: command line as:
.br .br
\fB--config 'local/extra_system_ids=["\fP\fIid\fP\fB"]'\fP \fB--config 'local/extra_system_ids=["\fP\fIid\fP\fB"]'\fP
.
.SS vgcreate .SS vgcreate
.
In vgcreate, the host running the command assigns its own system ID to the In vgcreate, the host running the command assigns its own system ID to the
new VG. To override this and set another system ID: new VG. To override this and set another system ID:
.P
.B vgcreate --systemid .B vgcreate --systemid
.I SystemID VG PVs .I SystemID VG PVs
.P
Overriding the host's system ID makes it possible for a host to create a Overriding the host's system ID makes it possible for a host to create a
VG that it may not be able to use. Another host with a system ID matching VG that it may not be able to use. Another host with a system ID matching
the one specified may not recognize the new VG without manually rescanning the one specified may not recognize the new VG without manually rescanning
devices. devices.
.P
If the --systemid argument is an empty string (""), the VG is created with If the --systemid argument is an empty string (""), the VG is created with
no system ID, making it accessible to other hosts (see warnings above). no system ID, making it accessible to other hosts (see warnings above).
.
.SS report/display .SS report/display
.
The system ID of a VG is displayed with the "systemid" reporting option. The system ID of a VG is displayed with the "systemid" reporting option.
.P
Report/display commands ignore foreign VGs by default. To report foreign Report/display commands ignore foreign VGs by default. To report foreign
VGs, the --foreign option can be used. This causes the VGs to be read VGs, the --foreign option can be used. This causes the VGs to be read
from disk. from disk.
.P
.B vgs --foreign -o +systemid .B vgs --foreign -o +systemid
.P
When a host with no system ID sees foreign VGs, it warns about them as When a host with no system ID sees foreign VGs, it warns about them as
they are skipped. The host should be assigned a system ID, after which they are skipped. The host should be assigned a system ID, after which
standard reporting commands will silently ignore foreign VGs. standard reporting commands will silently ignore foreign VGs.
.
.SS vgexport/vgimport .SS vgexport/vgimport
.
vgexport clears the VG system ID when exporting the VG. vgexport clears the VG system ID when exporting the VG.
.P
vgimport sets the VG system ID to the system ID of the host doing the vgimport sets the VG system ID to the system ID of the host doing the
import. import.
.
.SS vgchange .SS vgchange
.
A host can change the system ID of its own VGs, but the command requires A host can change the system ID of its own VGs, but the command requires
confirmation because the host may lose access to the VG being changed: confirmation because the host may lose access to the VG being changed:
.P
.B vgchange --systemid .B vgchange --systemid
.I SystemID VG .I SystemID VG
.P
The system ID can be removed from a VG by specifying an empty string ("") The system ID can be removed from a VG by specifying an empty string ("")
as the new system ID. This makes the VG accessible to other hosts (see as the new system ID. This makes the VG accessible to other hosts (see
warnings above). warnings above).
.P
A host cannot directly change the system ID of a foreign VG. A host cannot directly change the system ID of a foreign VG.
.P
To move a VG from one host to another, vgexport and vgimport should be To move a VG from one host to another, vgexport and vgimport should be
used. used.
.P
To forcibly gain ownership of a foreign VG, a host can temporarily add the To forcibly gain ownership of a foreign VG, a host can temporarily add the
foreign system ID to its extra_system_ids list, and change the system ID foreign system ID to its extra_system_ids list, and change the system ID
of the foreign VG to its own. See Overriding system ID above. of the foreign VG to its own. See Overriding system ID above.
.
.SS shared VGs .SS shared VGs
.
A shared VG has no system ID set, allowing multiple hosts to use it A shared VG has no system ID set, allowing multiple hosts to use it
via lvmlockd. Changing a VG to shared will clear the existing via lvmlockd. Changing a VG to shared will clear the existing
system ID. Applicable only if LVM is compiled with lvmlockd support. system ID. Applicable only if LVM is compiled with lvmlockd support.
.
.SS clustered VGs .SS clustered VGs
.
A clustered/clvm VG has no system ID set, allowing multiple hosts to use A clustered/clvm VG has no system ID set, allowing multiple hosts to use
it via clvmd. Changing a VG to clustered will clear the existing system it via clvmd. Changing a VG to clustered will clear the existing system
ID. Changing a VG to not clustered will set the system ID to the host ID. Changing a VG to not clustered will set the system ID to the host
running the vgchange command. running the vgchange command.
.
.SS creation_host .SS creation_host
.
In vgcreate, the VG metadata field creation_host is set by default to the In vgcreate, the VG metadata field creation_host is set by default to the
host's uname. The creation_host cannot be changed, and is not used to host's uname. The creation_host cannot be changed, and is not used to
control access. When system_id_source is "uname", the system_id and control access. When system_id_source is "uname", the system_id and
creation_host fields will be the same. creation_host fields will be the same.
.
.SS orphans .SS orphans
.
Orphan PVs are unused devices; they are not currently used in any VG. Orphan PVs are unused devices; they are not currently used in any VG.
Because of this, they are not protected by a system ID, and any host can Because of this, they are not protected by a system ID, and any host can
use them. Coordination of changes to orphan PVs is beyond the scope of use them. Coordination of changes to orphan PVs is beyond the scope of
system ID. The same is true of any block device that is not a PV. system ID. The same is true of any block device that is not a PV.
.
.SH SEE ALSO .SH SEE ALSO
.
.nh
.ad l
.BR vgcreate (8), .BR vgcreate (8),
.BR vgchange (8), .BR vgchange (8),
.BR vgimport (8), .BR vgimport (8),
@@ -371,4 +364,3 @@ system ID. The same is true of any block device that is not a PV.
.BR lvm.conf (5), .BR lvm.conf (5),
.BR machine-id (5), .BR machine-id (5),
.BR uname (2) .BR uname (2)
File diff suppressed because it is too large
@@ -1,19 +1,22 @@
.TH "LVMVDO" "7" "LVM TOOLS #VERSION#" "Red Hat, Inc" "\"" .TH "LVMVDO" "7" "LVM TOOLS #VERSION#" "Red Hat, Inc" "\""
.
.SH NAME .SH NAME
.
lvmvdo \(em Support for Virtual Data Optimizer in LVM lvmvdo \(em Support for Virtual Data Optimizer in LVM
.
.SH DESCRIPTION .SH DESCRIPTION
.
VDO is software that provides inline VDO is software that provides inline
block-level deduplication, compression, and thin provisioning capabilities block-level deduplication, compression, and thin provisioning capabilities
for primary storage. for primary storage.
.P
Deduplication is a technique for reducing the consumption of storage Deduplication is a technique for reducing the consumption of storage
resources by eliminating multiple copies of duplicate blocks. Compression resources by eliminating multiple copies of duplicate blocks. Compression
takes the individual unique blocks and shrinks them. These reduced blocks are then efficiently packed together into takes the individual unique blocks and shrinks them. These reduced blocks are then efficiently packed together into
physical blocks. Thin provisioning manages the mapping from logical blocks physical blocks. Thin provisioning manages the mapping from logical blocks
presented by VDO to where the data has actually been physically stored, presented by VDO to where the data has actually been physically stored,
and also eliminates any blocks of all zeroes. and also eliminates any blocks of all zeroes.
.P
With deduplication, instead of writing the same data more than once, VDO detects and records each With deduplication, instead of writing the same data more than once, VDO detects and records each
duplicate block as a reference to the original duplicate block as a reference to the original
block. VDO maintains a mapping from Logical Block Addresses (LBA) (used by the block. VDO maintains a mapping from Logical Block Addresses (LBA) (used by the
@@ -21,31 +24,33 @@ storage layer above VDO) to physical block addresses (used by the storage
layer under VDO). After deduplication, multiple logical block addresses layer under VDO). After deduplication, multiple logical block addresses
may be mapped to the same physical block address; these are called shared may be mapped to the same physical block address; these are called shared
blocks and are reference-counted by the software. blocks and are reference-counted by the software.
.P
With compression, VDO compresses multiple blocks (or shared blocks) With compression, VDO compresses multiple blocks (or shared blocks)
with the fast LZ4 algorithm, and bins them together where possible so that with the fast LZ4 algorithm, and bins them together where possible so that
multiple compressed blocks fit within a 4 KB block on the underlying multiple compressed blocks fit within a 4 KB block on the underlying
storage. Mapping from LBA is to a physical block address and index within storage. Mapping from LBA is to a physical block address and index within
it for the desired compressed data. All compressed blocks are individually it for the desired compressed data. All compressed blocks are individually
reference counted for correctness. reference counted for correctness.
.P
Block sharing and block compression are invisible to applications using Block sharing and block compression are invisible to applications using
the storage, which read and write blocks as they would if VDO were not the storage, which read and write blocks as they would if VDO were not
present. When a shared block is overwritten, a new physical block is present. When a shared block is overwritten, a new physical block is
allocated for storing the new block data to ensure that other logical allocated for storing the new block data to ensure that other logical
block addresses that are mapped to the shared physical block are not block addresses that are mapped to the shared physical block are not
modified. modified.
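The three paragraphs above describe an LBA-to-physical mapping with reference-counted shared blocks, where overwriting a shared block allocates a new physical block rather than mutating one other logical addresses still reference. A hypothetical toy model in Python (content-hash dedup with refcounts; this is not VDO's actual index or on-disk format):

```python
import hashlib


class DedupMap:
    """Toy model: logical block addresses map to shared, reference-counted
    physical blocks; an overwrite never mutates a block in place."""

    def __init__(self):
        self.lba_to_pba = {}    # logical block address -> physical block address
        self.pba_refcount = {}  # physical block address -> reference count
        self.hash_to_pba = {}   # content hash -> physical block address
        self.next_pba = 0

    def write(self, lba, data: bytes):
        digest = hashlib.sha256(data).hexdigest()
        old = self.lba_to_pba.get(lba)
        if digest in self.hash_to_pba:   # duplicate content: share the block
            pba = self.hash_to_pba[digest]
            self.pba_refcount[pba] += 1
        else:                            # new content: allocate a fresh block
            pba = self.next_pba
            self.next_pba += 1
            self.hash_to_pba[digest] = pba
            self.pba_refcount[pba] = 1
        self.lba_to_pba[lba] = pba
        if old is not None:              # drop the reference the old mapping held
            self.pba_refcount[old] -= 1
```

Writing identical data to two logical addresses yields one physical block with refcount 2; overwriting one of them allocates a new block and leaves the other mapping untouched, mirroring the overwrite behavior described above.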
.P
To use VDO with \fBlvm\fP(8), you must install the standard VDO user-space tools To use VDO with \fBlvm\fP(8), you must install the standard VDO user-space tools
\fBvdoformat\fP(8) and the currently non-standard kernel VDO module \fBvdoformat\fP(8) and the currently non-standard kernel VDO module
"\fIkvdo\fP". "\fIkvdo\fP".
.P
The "\fIkvdo\fP" module implements fine-grained storage virtualization, The "\fIkvdo\fP" module implements fine-grained storage virtualization,
thin provisioning, block sharing, and compression. thin provisioning, block sharing, and compression.
The "\fIuds\fP" module provides memory-efficient duplicate The "\fIuds\fP" module provides memory-efficient duplicate
identification. The user-space tools include \fBvdostats\fP(8) identification. The user-space tools include \fBvdostats\fP(8)
for extracting statistics from VDO volumes. for extracting statistics from VDO volumes.
.
.SH VDO TERMS .SH VDO TERMS
.
.TP .TP
VDODataLV VDODataLV
.br .br
@@ -54,6 +59,7 @@ VDO data LV
A large hidden LV with the _vdata suffix. It is created in a VG A large hidden LV with the _vdata suffix. It is created in a VG
.br .br
used by the VDO kernel target to store all data and metadata blocks. used by the VDO kernel target to store all data and metadata blocks.
.
.TP .TP
VDOPoolLV VDOPoolLV
.br .br
@@ -62,6 +68,7 @@ VDO pool LV
A pool for virtual VDOLV(s), which are the size of used VDODataLV. A pool for virtual VDOLV(s), which are the size of used VDODataLV.
.br .br
Only a single VDOLV is currently supported. Only a single VDOLV is currently supported.
.
.TP .TP
VDOLV VDOLV
.br .br
@@ -70,9 +77,14 @@ VDO LV
Created from VDOPoolLV. Created from VDOPoolLV.
.br .br
Appears blank after creation. Appears blank after creation.
.
.SH VDO USAGE .SH VDO USAGE
.
The primary methods for using VDO with lvm2: The primary methods for using VDO with lvm2:
.SS 1. Create a VDOPoolLV and a VDOLV .nr step 1 1
.
.SS \n[step]. Create a VDOPoolLV and a VDOLV
.
Create a VDOPoolLV that will hold VDO data, and a Create a VDOPoolLV that will hold VDO data, and a
virtual size VDOLV that the user can use. If you do not specify the virtual size, virtual size VDOLV that the user can use. If you do not specify the virtual size,
then the VDOLV is created with the maximum size that then the VDOLV is created with the maximum size that
@@ -81,23 +93,25 @@ deduplication or compression can happen
(i.e. it can hold the incompressible content of /dev/urandom). (i.e. it can hold the incompressible content of /dev/urandom).
If you do not specify the name of VDOPoolLV, it is taken from If you do not specify the name of VDOPoolLV, it is taken from
the sequence of vpool0, vpool1 ... the sequence of vpool0, vpool1 ...
.P
Note: The performance of TRIM/Discard operations is slow for large Note: The performance of TRIM/Discard operations is slow for large
volumes of VDO type. Please try to avoid sending discard requests unless volumes of VDO type. Please try to avoid sending discard requests unless
necessary because it might take a considerable amount of time to finish the discard necessary because it might take a considerable amount of time to finish the discard
operation. operation.
.P
.nf .nf
.B lvcreate --type vdo -n VDOLV -L DataSize -V LargeVirtualSize VG/VDOPoolLV .B lvcreate --type vdo -n VDOLV -L DataSize -V LargeVirtualSize VG/VDOPoolLV
.B lvcreate --vdo -L DataSize VG .B lvcreate --vdo -L DataSize VG
.fi .fi
.P
.I Example .I Example
.nf .nf
# lvcreate --type vdo -n vdo0 -L 10G -V 100G vg/vdopool0 # lvcreate --type vdo -n vdo0 -L 10G -V 100G vg/vdopool0
# mkfs.ext4 -E nodiscard /dev/vg/vdo0 # mkfs.ext4 -E nodiscard /dev/vg/vdo0
.fi .fi
.SS 2. Convert an existing LV into VDOPoolLV .
.SS \n+[step]. Convert an existing LV into VDOPoolLV
.
Convert an already created or existing LV into a VDOPoolLV, which is a volume Convert an already created or existing LV into a VDOPoolLV, which is a volume
that can hold data and metadata. that can hold data and metadata.
You will be prompted to confirm such a conversion because it \fBIRREVERSIBLY You will be prompted to confirm such a conversion because it \fBIRREVERSIBLY
@ -106,24 +120,26 @@ formatted by \fBvdoformat\fP(8) as a VDO pool data volume. You can
specify the virtual size of the VDOLV associated with this VDOPoolLV. specify the virtual size of the VDOLV associated with this VDOPoolLV.
If you do not specify the virtual size, it will be set to the maximum size If you do not specify the virtual size, it will be set to the maximum size
that can keep 100% incompressible data there. that can keep 100% incompressible data there.
.P
.nf .nf
.B lvconvert --type vdo-pool -n VDOLV -V VirtualSize VG/VDOPoolLV .B lvconvert --type vdo-pool -n VDOLV -V VirtualSize VG/VDOPoolLV
.B lvconvert --vdopool VG/VDOPoolLV .B lvconvert --vdopool VG/VDOPoolLV
.fi .fi
.P
.I Example .I Example
.nf .nf
# lvconvert --type vdo-pool -n vdo0 -V10G vg/ExistingLV # lvconvert --type vdo-pool -n vdo0 -V10G vg/ExistingLV
.fi .fi
.SS 3. Change the default settings used for creating a VDOPoolLV .
.SS \n+[step]. Change the default settings used for creating a VDOPoolLV
.
VDO allows setting a large variety of options. Many of these settings VDO allows setting a large variety of options. Many of these settings
can be specified in lvm.conf or profile settings. You can prepare can be specified in lvm.conf or profile settings. You can prepare
a number of different profiles in the #DEFAULT_SYS_DIR#/profile directory a number of different profiles in the #DEFAULT_SYS_DIR#/profile directory
and just specify the profile file name. and just specify the profile file name.
Check the output of \fBlvmconfig --type default --withcomments\fP Check the output of \fBlvmconfig --type default --withcomments\fP
for a detailed description of all individual VDO settings. for a detailed description of all individual VDO settings.
.P
.I Example .I Example
.nf .nf
# cat <<EOF > #DEFAULT_SYS_DIR#/profile/vdo_create.profile # cat <<EOF > #DEFAULT_SYS_DIR#/profile/vdo_create.profile
@ -149,43 +165,45 @@ allocation {
vdo_max_discard=1 vdo_max_discard=1
} }
EOF EOF
.P
# lvcreate --vdo -L10G --metadataprofile vdo_create vg/vdopool0 # lvcreate --vdo -L10G --metadataprofile vdo_create vg/vdopool0
# lvcreate --vdo -L10G --config 'allocation/vdo_cpu_threads=4' vg/vdopool1 # lvcreate --vdo -L10G --config 'allocation/vdo_cpu_threads=4' vg/vdopool1
.fi .fi
.SS 4. Change the compression and deduplication of a VDOPoolLV .
.SS \n+[step]. Change the compression and deduplication of a VDOPoolLV
.
Disable or enable the compression and deduplication for VDOPoolLV Disable or enable the compression and deduplication for VDOPoolLV
(the volume that maintains all VDO LV(s) associated with it). (the volume that maintains all VDO LV(s) associated with it).
.P
.nf
.B lvchange --compression [y|n] --deduplication [y|n] VG/VDOPoolLV .B lvchange --compression [y|n] --deduplication [y|n] VG/VDOPoolLV
.fi .P
.I Example .I Example
.nf .nf
# lvchange --compression n vg/vdopool0 # lvchange --compression n vg/vdopool0
# lvchange --deduplication y vg/vdopool1 # lvchange --deduplication y vg/vdopool1
.fi .fi
.SS 5. Checking the usage of VDOPoolLV .
.SS \n+[step]. Checking the usage of VDOPoolLV
.
To quickly check how much data on a VDOPoolLV is already consumed, To quickly check how much data on a VDOPoolLV is already consumed,
use \fBlvs\fP(8). The Data% field reports how much of the virtual data use \fBlvs\fP(8). The Data% field reports how much of the virtual data
of the VDOLV is already occupied, and how much space is of the VDOLV is already occupied, and how much space is
consumed by all the data and metadata blocks in the VDOPoolLV. consumed by all the data and metadata blocks in the VDOPoolLV.
For a detailed description, use the \fBvdostats\fP(8) command. For a detailed description, use the \fBvdostats\fP(8) command.
.P
Note: \fBvdostats\fP(8) currently understands only /dev/mapper device names. Note: \fBvdostats\fP(8) currently understands only /dev/mapper device names.
.P
.I Example .I Example
.nf .nf
# lvcreate --type vdo -L10G -V20G -n vdo0 vg/vdopool0 # lvcreate --type vdo -L10G -V20G -n vdo0 vg/vdopool0
# mkfs.ext4 -E nodiscard /dev/vg/vdo0 # mkfs.ext4 -E nodiscard /dev/vg/vdo0
# lvs -a vg # lvs -a vg
.P
LV VG Attr LSize Pool Origin Data% LV VG Attr LSize Pool Origin Data%
vdo0 vg vwi-a-v--- 20.00g vdopool0 0.01 vdo0 vg vwi-a-v--- 20.00g vdopool0 0.01
vdopool0 vg dwi-ao---- 10.00g 30.16 vdopool0 vg dwi-ao---- 10.00g 30.16
[vdopool0_vdata] vg Dwi-ao---- 10.00g [vdopool0_vdata] vg Dwi-ao---- 10.00g
.P
# vdostats --all /dev/mapper/vg-vdopool0-vpool # vdostats --all /dev/mapper/vg-vdopool0-vpool
/dev/mapper/vg-vdopool0 : /dev/mapper/vg-vdopool0 :
version : 30 version : 30
@ -193,76 +211,88 @@ Note: \fBvdostats\fP(8) currently understands only /dev/mapper device names.
data blocks used : 79 data blocks used : 79
... ...
.fi .fi
.SS 6. Extending the VDOPoolLV size .
.SS \n+[step]. Extending the VDOPoolLV size
.
You can add more space to hold VDO data and metadata by You can add more space to hold VDO data and metadata by
extending the VDODataLV using the commands extending the VDODataLV using the commands
\fBlvresize\fP(8) and \fBlvextend\fP(8). \fBlvresize\fP(8) and \fBlvextend\fP(8).
The extension needs to add at least one new VDO slab. You can configure The extension needs to add at least one new VDO slab. You can configure
the slab size with the \fBallocation/vdo_slab_size_mb\fP setting. the slab size with the \fBallocation/vdo_slab_size_mb\fP setting.
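Since an extension is only useful when it adds at least one whole slab, the number of slabs a given extension adds can be estimated with simple arithmetic. A sketch in shell, assuming the default 2 GiB slab size (allocation/vdo_slab_size_mb = 2048); the 50 GiB extension is only an example:

```shell
# Estimate how many new VDO slabs an extension adds (integer MiB math).
# Assumes the default slab size of 2 GiB (allocation/vdo_slab_size_mb=2048).
slab_size_mb=2048
extend_mb=$((50 * 1024))            # e.g. lvextend -L+50G

new_slabs=$((extend_mb / slab_size_mb))
echo "$new_slabs"                   # → 25
```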
.P
You can also enable automatic size extension of a monitored VDOPoolLV You can also enable automatic size extension of a monitored VDOPoolLV
with the \fBactivation/vdo_pool_autoextend_percent\fP and with the \fBactivation/vdo_pool_autoextend_percent\fP and
\fBactivation/vdo_pool_autoextend_threshold\fP settings. \fBactivation/vdo_pool_autoextend_threshold\fP settings.
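As a sketch, the autoextend behaviour could be configured with a profile or lvm.conf fragment like the following; the threshold and percent values here are purely illustrative:

```
activation {
        # Start extending once the VDOPoolLV is 70% full ...
        vdo_pool_autoextend_threshold = 70
        # ... and grow it by 20% of its current size each time.
        vdo_pool_autoextend_percent = 20
}
```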
.P
Note: You cannot reduce the size of a VDOPoolLV. Note: You cannot reduce the size of a VDOPoolLV.
.P
.nf
.B lvextend -L+AddingSize VG/VDOPoolLV .B lvextend -L+AddingSize VG/VDOPoolLV
.fi .P
.I Example .I Example
.nf .nf
# lvextend -L+50G vg/vdopool0 # lvextend -L+50G vg/vdopool0
# lvresize -L300G vg/vdopool1 # lvresize -L300G vg/vdopool1
.fi .fi
.SS 7. Extending or reducing the VDOLV size .
.SS \n+[step]. Extending or reducing the VDOLV size
.
You can extend or reduce a virtual VDO LV as a standard LV with the You can extend or reduce a virtual VDO LV as a standard LV with the
\fBlvresize\fP(8), \fBlvextend\fP(8), and \fBlvreduce\fP(8) commands. \fBlvresize\fP(8), \fBlvextend\fP(8), and \fBlvreduce\fP(8) commands.
.P
Note: The reduction needs to process TRIM for reduced disk area Note: The reduction needs to process TRIM for reduced disk area
to unmap used data blocks from the VDOPoolLV, which might take to unmap used data blocks from the VDOPoolLV, which might take
a long time. a long time.
.P
.nf
.B lvextend -L+AddingSize VG/VDOLV .B lvextend -L+AddingSize VG/VDOLV
.br
.B lvreduce -L-ReducingSize VG/VDOLV .B lvreduce -L-ReducingSize VG/VDOLV
.fi .P
.I Example .I Example
.nf .nf
# lvextend -L+50G vg/vdo0 # lvextend -L+50G vg/vdo0
# lvreduce -L-50G vg/vdo1 # lvreduce -L-50G vg/vdo1
# lvresize -L200G vg/vdo2 # lvresize -L200G vg/vdo2
.fi .fi
.SS 8. Component activation of a VDODataLV .
.SS \n+[step]. Component activation of a VDODataLV
.
You can activate a VDODataLV separately as a component LV for examination You can activate a VDODataLV separately as a component LV for examination
purposes. Activating the VDODataLV as a component puts the data LV in read-only mode, purposes. Activating the VDODataLV as a component puts the data LV in read-only mode,
and the data LV cannot be modified. and the data LV cannot be modified.
If the VDODataLV is active as a component, any upper LV using this volume CANNOT If the VDODataLV is active as a component, any upper LV using this volume CANNOT
be activated. You have to deactivate the VDODataLV first to continue to use the VDOPoolLV. be activated. You have to deactivate the VDODataLV first to continue to use the VDOPoolLV.
.P
.I Example .I Example
.nf .nf
# lvchange -ay vg/vpool0_vdata # lvchange -ay vg/vpool0_vdata
# lvchange -an vg/vpool0_vdata # lvchange -an vg/vpool0_vdata
.fi .fi
.
.SH VDO TOPICS .SH VDO TOPICS
.SS 1. Stacking VDO .
.nr step 1 1
.
.SS \n[step]. Stacking VDO
.
You can convert or stack a VDOPoolLV with these currently supported You can convert or stack a VDOPoolLV with these currently supported
volume types: linear, stripe, raid, and cache with cachepool. volume types: linear, stripe, raid, and cache with cachepool.
.SS 2. VDOPoolLV on top of raid .
.SS \n+[step]. VDOPoolLV on top of raid
.
Using a raid type LV for a VDODataLV. Using a raid type LV for a VDODataLV.
.P
.I Example .I Example
.nf .nf
# lvcreate --type raid1 -L 5G -n vdopool vg # lvcreate --type raid1 -L 5G -n vdopool vg
# lvconvert --type vdo-pool -V 10G vg/vdopool # lvconvert --type vdo-pool -V 10G vg/vdopool
.fi .fi
.SS 3. Caching a VDODataLV or a VDOPoolLV .
.SS \n+[step]. Caching a VDODataLV or a VDOPoolLV
.
Caching a VDODataLV (a VDOPoolLV is also accepted) provides a mechanism Caching a VDODataLV (a VDOPoolLV is also accepted) provides a mechanism
to accelerate reads and writes of already compressed and deduplicated to accelerate reads and writes of already compressed and deduplicated
data blocks together with VDO metadata. data blocks together with VDO metadata.
.P
.I Example .I Example
.nf .nf
# lvcreate --type vdo -L 5G -V 10G -n vdo1 vg/vdopool # lvcreate --type vdo -L 5G -V 10G -n vdo1 vg/vdopool
@ -270,10 +300,12 @@ data blocks together with VDO metadata.
# lvconvert --cache --cachepool vg/cachepool vg/vdopool # lvconvert --cache --cachepool vg/cachepool vg/vdopool
# lvconvert --uncache vg/vdopool # lvconvert --uncache vg/vdopool
.fi .fi
.SS 4. Caching a VDOLV .
.SS \n+[step]. Caching a VDOLV
.
A VDO LV cache allows you to cache a device for better performance before A VDO LV cache allows you to cache a device for better performance before
requests reach the VDO Pool LV layer. requests reach the VDO Pool LV layer.
.P
.I Example .I Example
.nf .nf
# lvcreate --type vdo -L 5G -V 10G -n vdo1 vg/vdopool # lvcreate --type vdo -L 5G -V 10G -n vdo1 vg/vdopool
@ -281,12 +313,14 @@ it hits the processing of the VDO Pool LV layer.
# lvconvert --cache --cachepool vg/cachepool vg/vdo1 # lvconvert --cache --cachepool vg/cachepool vg/vdo1
# lvconvert --uncache vg/vdo1 # lvconvert --uncache vg/vdo1
.fi .fi
.SS 5. Usage of Discard/TRIM with a VDOLV .
.SS \n+[step]. Usage of Discard/TRIM with a VDOLV
.
You can discard data on a VDO LV and reduce used blocks on a VDOPoolLV. You can discard data on a VDO LV and reduce used blocks on a VDOPoolLV.
However, the current performance of discard operations is still not optimal However, the current performance of discard operations is still not optimal
and takes a considerable amount of time and CPU. and takes a considerable amount of time and CPU.
Unless you really need it, you should avoid using discard. Unless you really need it, you should avoid using discard.
.P
When a block device is going to be rewritten, When a block device is going to be rewritten,
its blocks will be automatically reused for new data. its blocks will be automatically reused for new data.
Discard is useful in situations when the user knows that the given portion of a VDO LV Discard is useful in situations when the user knows that the given portion of a VDO LV
@ -295,55 +329,59 @@ provisioning in other regions of the VDO LV.
For the same reason, you should avoid using mkfs with discard for For the same reason, you should avoid using mkfs with discard for
a freshly created VDO LV to save a lot of time that this operation would a freshly created VDO LV to save a lot of time that this operation would
take otherwise, as the device is already expected to be empty. take otherwise, as the device is already expected to be empty.
.SS 6. Memory usage .
.SS \n+[step]. Memory usage
.
The VDO target requires 370 MiB of RAM plus an additional 268 MiB The VDO target requires 370 MiB of RAM plus an additional 268 MiB
for each 1 TiB of physical storage managed by the volume. for each 1 TiB of physical storage managed by the volume.
.P
UDS requires a minimum of 250 MiB of RAM, UDS requires a minimum of 250 MiB of RAM,
which is also the default amount that deduplication uses. which is also the default amount that deduplication uses.
.P
The memory required for the UDS index is determined by the index type The memory required for the UDS index is determined by the index type
and the required size of the deduplication window and and the required size of the deduplication window and
is controlled by the \fBallocation/vdo_use_sparse_index\fP setting. is controlled by the \fBallocation/vdo_use_sparse_index\fP setting.
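For instance, sparse indexing could be selected through a profile or lvm.conf fragment such as the following (illustrative):

```
allocation {
        # 1 enables the sparse UDS index, 0 selects the dense index.
        vdo_use_sparse_index = 1
}
```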
.P
When UDS sparse indexing is enabled, it relies on the temporal locality of data When UDS sparse indexing is enabled, it relies on the temporal locality of data
and attempts to retain only the most relevant index entries in memory and and attempts to retain only the most relevant index entries in memory and
can maintain a deduplication window that is ten times larger can maintain a deduplication window that is ten times larger
than a dense index can while using the same amount of memory. than a dense index can while using the same amount of memory.
.P
Although the sparse index provides the greatest coverage, Although the sparse index provides the greatest coverage,
the dense index provides more deduplication advice. the dense index provides more deduplication advice.
For most workloads, given the same amount of memory, For most workloads, given the same amount of memory,
the difference in deduplication rates between dense the difference in deduplication rates between dense
and sparse indexes is negligible. and sparse indexes is negligible.
.P
A dense index with 1 GiB of RAM maintains a 1 TiB deduplication window, A dense index with 1 GiB of RAM maintains a 1 TiB deduplication window,
while a sparse index with 1 GiB of RAM maintains a 10 TiB deduplication window. while a sparse index with 1 GiB of RAM maintains a 10 TiB deduplication window.
In general, 1 GiB is sufficient for 4 TiB of physical space with In general, 1 GiB is sufficient for 4 TiB of physical space with
a dense index and 40 TiB with a sparse index. a dense index and 40 TiB with a sparse index.
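The sizing rules above can be combined into a quick estimate. A sketch in shell using the figures from this section (370 MiB base, 268 MiB per TiB of physical storage, plus the RAM given to the UDS index); the 4 TiB volume and 1 GiB dense index are example inputs:

```shell
# Rough VDO RAM estimate from the rules above (all values in MiB).
physical_tib=4                      # physical storage managed by the volume
index_ram_mb=1024                   # RAM dedicated to a dense UDS index

vdo_ram_mb=$((370 + 268 * physical_tib))
total_mb=$((vdo_ram_mb + index_ram_mb))
echo "VDO target: ${vdo_ram_mb} MiB, with index: ${total_mb} MiB"
# → VDO target: 1442 MiB, with index: 2466 MiB
```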
.SS 7. Storage space requirements .
.SS \n+[step]. Storage space requirements
.
You can configure a VDOPoolLV to use up to 256 TiB of physical storage. You can configure a VDOPoolLV to use up to 256 TiB of physical storage.
Only a certain part of the physical storage is usable to store data. Only a certain part of the physical storage is usable to store data.
This section provides the calculations to determine the usable size This section provides the calculations to determine the usable size
of a VDO-managed volume. of a VDO-managed volume.
.P
The VDO target requires storage for two types of VDO metadata and for the UDS index: The VDO target requires storage for two types of VDO metadata and for the UDS index:
.TP .IP \(bu 2
\(bu
The first type of VDO metadata uses approximately 1 MiB for each 4 GiB The first type of VDO metadata uses approximately 1 MiB for each 4 GiB
of physical storage plus an additional 1 MiB per slab. of physical storage plus an additional 1 MiB per slab.
.TP .IP \(bu
\(bu
The second type of VDO metadata consumes approximately 1.25 MiB The second type of VDO metadata consumes approximately 1.25 MiB
for each 1 GiB of logical storage, rounded up to the nearest slab. for each 1 GiB of logical storage, rounded up to the nearest slab.
.TP .IP \(bu
\(bu
The amount of storage required for the UDS index depends on the type of index The amount of storage required for the UDS index depends on the type of index
and the amount of RAM allocated to the index. For each 1 GiB of RAM, and the amount of RAM allocated to the index. For each 1 GiB of RAM,
a dense UDS index uses 17 GiB of storage and a sparse UDS index will use a dense UDS index uses 17 GiB of storage and a sparse UDS index will use
170 GiB of storage. 170 GiB of storage.
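The three rules can be combined into a rough estimate of the metadata overhead. A sketch in shell; the 1 TiB physical / 2 TiB logical volume, 2 GiB slab size, and 1 GiB dense index are illustrative inputs, and real numbers differ slightly because of per-slab rounding:

```shell
# Rough estimate of VDO metadata + UDS index space, in MiB.
physical_gib=1024                    # 1 TiB physical storage
logical_gib=2048                     # 2 TiB logical (virtual) size
slab_gib=2                           # assumed slab size
slabs=$((physical_gib / slab_gib))

meta1_mb=$((physical_gib / 4 + slabs))   # ~1 MiB per 4 GiB + 1 MiB per slab
meta2_mb=$((logical_gib * 5 / 4))        # ~1.25 MiB per 1 GiB logical
uds_mb=$((17 * 1024))                    # dense index built with 1 GiB RAM

echo $((meta1_mb + meta2_mb + uds_mb))   # → 20736 (about 20.25 GiB)
```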
.
.SH SEE ALSO .SH SEE ALSO
.
.nh
.ad l
.BR lvm (8), .BR lvm (8),
.BR lvm.conf (5), .BR lvm.conf (5),
.BR lvmconfig (8), .BR lvmconfig (8),
@ -355,7 +393,9 @@ a dense UDS index uses 17 GiB of storage and a sparse UDS index will use
.BR lvresize (8), .BR lvresize (8),
.BR lvremove (8), .BR lvremove (8),
.BR lvs (8), .BR lvs (8),
.P
.BR vdo (8), .BR vdo (8),
.BR vdoformat (8), .BR vdoformat (8),
.BR vdostats (8), .BR vdostats (8),
.P
.BR mkfs (8) .BR mkfs (8)