.TH LVCHANGE 8 "LVM TOOLS #VERSION#" "Sistina Software UK" \" -*- nroff -*-
.de UNITS
..
.
.SH NAME
.
lvchange \(em change attributes of a logical volume
.
.SH SYNOPSIS
.
.ad l
.B lvchange
.RB [ \-a | \-\-activate
.RB [ a ][ e | s | l ]{ y | n }]
.RB [ \-\-activationmode
.RB { complete | degraded | partial }]
.RB [ \-\-addtag
.IR Tag ]
.RB [ \-K | \-\-ignoreactivationskip ]
.RB [ \-k | \-\-setactivationskip
.RB { y | n }]
.RB [ \-\-alloc
.IR AllocationPolicy ]
.RB [ \-A | \-\-autobackup
.RB { y | n }]
.RB [ \-\-cachepolicy
.IR policy ]
.RB [ \-\-cachesettings
.IR key \fB= value ]
.RB [ \-\-commandprofile
.IR ProfileName ]
.RB [ \-C | \-\-contiguous
.RB { y | n }]
.RB [ \-d | \-\-debug ]
.RB [ \-\-degraded ]
.RB [ \-\-deltag
.IR Tag ]
.RB [ \-\-detachprofile ]
.RB [ \-\-discards
.RB { ignore | nopassdown | passdown }]
.RB [ \-\-errorwhenfull
.RB { y | n }]
.RB [ \-h | \-? | \-\-help ]
.RB \%[ \-\-ignorelockingfailure ]
.RB \%[ \-\-ignoremonitoring ]
.RB \%[ \-\-ignoreskippedcluster ]
.RB \%[ \-\-metadataprofile
.IR ProfileName ]
.RB [ \-\-monitor
.RB { y | n }]
.RB [ \-\-noudevsync ]
.RB [ \-P | \-\-partial ]
.RB [ \-p | \-\-permission
.RB { r | rw }]
.RB [ \-M | \-\-persistent
.RB { y | n }
.RB [ \-\-major
.IR major ]
.RB [ \-\-minor
.IR minor ]]
.RB [ \-\-poll
.RB { y | n }]
.RB [ \-\-[ raid ] maxrecoveryrate
.IR Rate ]
.RB [ \-\-[ raid ] minrecoveryrate
.IR Rate ]
.RB [ \-\-[ raid ] syncaction
.RB { check | repair }]
.RB [ \-\-[ raid ] writebehind
.IR IOCount ]
.RB [ \-\-[ raid ] writemostly
.BR \fIPhysicalVolume [ : { y | n | t }]]
.RB [ \-r | \-\-readahead
.RB { \fIReadAheadSectors | auto | none }]
.RB [ \-\-refresh ]
.RB [ \-\-resync ]
.RB [ \-S | \-\-select
.IR Selection ]
.RB [ \-\-sysinit ]
.RB [ \-t | \-\-test ]
.RB [ \-v | \-\-verbose ]
.RB [ \-Z | \-\-zero
.RB { y | n }]
.RI [ LogicalVolumePath ...]
.ad b
.
.SH DESCRIPTION
.
lvchange allows you to change the attributes of a logical volume,
including making it known to the kernel and ready for use.
.
.SH OPTIONS
.
See \fBlvm\fP(8) for common options.
.
.HP
.BR \-a | \-\-activate
.RB [ a ][ e | s | l ]{ y | n }
.br
Controls the availability of the logical volumes for use.
Communicates with the kernel device-mapper driver via
libdevmapper to activate (\fB\-ay\fP) or deactivate (\fB\-an\fP) the
logical volumes.
.br
Activation of a logical volume creates a symbolic link
\fI/dev/VolumeGroupName/LogicalVolumeName\fP pointing to the device node.
This link is removed on deactivation.
All software and scripts should access the device through
this symbolic link and present this as the name of the device.
The location and name of the underlying device node may depend on
the distribution and configuration (e.g. udev) and might change
from release to release.
.br
If the autoactivation option is used (\fB\-aay\fP),
the logical volume is activated only if it matches an item in
the \fBactivation/auto_activation_volume_list\fP
set in \fBlvm.conf\fP(5).
If this list is not set, then all volumes are considered for
activation. The \fB\-aay\fP option should also be used during system
boot so it's possible to select which volumes to activate using
the \fBactivation/auto_activation_volume_list\fP setting.
.br
In a clustered VG, clvmd is used for activation, and the
following options are possible:
.sp
With \fB\-aey\fP, clvmd activates the LV in exclusive mode
(with an exclusive lock), allowing a single node to activate the LV.
.sp
With \fB\-asy\fP, clvmd activates the LV in shared mode
(with a shared lock), allowing multiple nodes to activate the LV concurrently.
If the LV type prohibits shared access, such as an LV with a snapshot,
the '\fBs\fP' option is ignored and an exclusive lock is used.
.sp
With \fB\-ay\fP (no mode specified), clvmd activates the LV in shared mode
if the LV type allows concurrent access, such as a linear LV.
Otherwise, clvmd activates the LV in exclusive mode.
.sp
With \fB\-aey\fP, \fB\-asy\fP, and \fB\-ay\fP, clvmd attempts to activate the LV
on all nodes. If exclusive mode is used, then only one of the
nodes will be successful.
.sp
With \fB\-an\fP, clvmd attempts to deactivate the LV on all nodes.
.sp
With \fB\-aly\fP, clvmd activates the LV only on the local node, and \fB\-aln\fP
deactivates only on the local node. If the LV type allows concurrent
access, then shared mode is used, otherwise exclusive.
LVs with snapshots are always activated exclusively because they can only
be used on one node at once.
.sp
For local VGs \fB\-ay\fP, \fB\-aey\fP, and \fB\-asy\fP are all equivalent.
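For example, to activate a volume exclusively on the local node and then
deactivate it again (vg00/lvol1 is an illustrative name):
.sp
.B lvchange \-aey vg00/lvol1
.br
.B lvchange \-an vg00/lvol1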
.
.HP
.BR \-\-activationmode
.RB { complete | degraded | partial }
.br
The activation mode determines whether logical volumes are allowed to
activate when there are physical volumes missing (e.g. due to a device
failure). \fBcomplete\fP is the most restrictive, allowing only those
logical volumes to be activated that are not affected by the missing
PVs. \fBdegraded\fP allows RAID logical volumes to be activated even if
they have PVs missing. (Note that the "\fImirror\fP" segment type is not
considered a RAID logical volume. The "\fIraid1\fP" segment type should
be used instead.) Finally, \fBpartial\fP allows any logical volume to
be activated even if portions are missing due to a missing or failed
PV. This last option should only be used when performing recovery or
repair operations. \fBdegraded\fP is the default mode. To change it,
modify \fBactivation_mode\fP in \fBlvm.conf\fP(5).
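For example, to attempt activation of a RAID logical volume even though
some of its devices are missing (vg00/raidlv is an illustrative name):
.sp
.B lvchange \-ay \-\-activationmode degraded vg00/raidlv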
.
.HP
.BR \-K | \-\-ignoreactivationskip
.br
Ignore the flag to skip Logical Volumes during activation.
.
.HP
.BR \-k | \-\-setactivationskip
.RB { y | n }
.br
Controls whether Logical Volumes are persistently flagged to be
skipped during activation. By default, thin snapshot volumes are
flagged for activation skip. To activate such volumes,
an extra \fB\-\-ignoreactivationskip\fP option must be used.
The flag is not applied during deactivation. To see whether
the flag is attached, use the \fBlvs\fP(8) command, where the state
of the flag is reported within the \fBlv_attr\fP bits.
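For example, to flag a volume to be skipped during activation and then
activate it anyway (vg00/thinsnap is an illustrative name):
.sp
.B lvchange \-ky vg00/thinsnap
.br
.B lvchange \-ay \-K vg00/thinsnap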
.
.HP
.BR \-\-cachepolicy
.IR policy ,
.BR \-\-cachesettings
.IR key \fB= value
.br
Only applicable to cached LVs; see also \fBlvmcache\fP(7). Sets
the cache policy and its associated tunable settings. In most use-cases,
default values should be adequate.
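For example, to switch a cached LV to the smq policy and raise its
migration threshold, assuming the running kernel offers the smq policy
and the migration_threshold setting (vg00/cachedlv is an illustrative name):
.sp
.B lvchange \-\-cachepolicy smq \-\-cachesettings migration_threshold=2048 vg00/cachedlv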
.
.HP
.BR \-C | \-\-contiguous
.RB { y | n }
.br
Tries to set or reset the contiguous allocation policy for
logical volumes. It's only possible to change a non-contiguous
logical volume's allocation policy to contiguous, if all of the
allocated physical extents are already contiguous.
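For example, to request contiguous allocation for a volume whose extents
are already contiguous (vg00/lvol1 is an illustrative name):
.sp
.B lvchange \-Cy vg00/lvol1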
.
.HP
.BR \-\-detachprofile
.br
Detach any metadata configuration profiles attached to given
Logical Volumes. See \fB lvm.conf\fP (5) for more information
about metadata profiles.
.
.HP
.BR \-\-discards
.RB { ignore | nopassdown | passdown }
.br
Set this to \fBignore\fP to ignore any discards received by a
thin pool Logical Volume. Set to \fBnopassdown\fP to process such
discards within the thin pool itself and allow the no-longer-needed
extents to be overwritten by new data. Set to \fBpassdown\fP (the
default) to process them both within the thin pool itself and to
pass them down to the underlying device.
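For example, to process discards inside a thin pool without passing them
down to the underlying devices (vg00/thinpool is an illustrative name):
.sp
.B lvchange \-\-discards nopassdown vg00/thinpool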
.
.HP
.BR \-\-errorwhenfull
.RB { y | n }
.br
Sets thin pool behavior when data space is exhausted. See
.BR lvcreate (8)
for information.
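For example, to make a thin pool return errors as soon as its data space
is exhausted (vg00/thinpool is an illustrative name):
.sp
.B lvchange \-\-errorwhenfull y vg00/thinpool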
.
.HP
.BR \-\-ignoremonitoring
.br
Make no attempt to interact with dmeventd unless \fB\-\-monitor\fP
is specified.
Do not use this if dmeventd is already monitoring a device.
.
.HP
.BR \-\-major
.IR major
.br
Sets the major number. This option is supported only on older systems
(kernel version 2.4) and is ignored on modern Linux systems where major
numbers are dynamically assigned.
.
.HP
.BR \-\-minor
.IR minor
.br
Set the minor number.
.
.HP
.BR \-\-metadataprofile
.IR ProfileName
.br
Uses and attaches the \fIProfileName\fP configuration profile to the logical
volume metadata. The profile is then applied automatically whenever the
logical volume is processed. If the volume group has another
profile attached, the logical volume profile is preferred.
See \fBlvm.conf\fP(5) for more information about metadata profiles.
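For example, to attach a profile named thinprofile to a logical volume
(both names are illustrative):
.sp
.B lvchange \-\-metadataprofile thinprofile vg00/lvol1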
.
.HP
.BR \-\-monitor
.RB { y | n }
.br
Start or stop monitoring a mirrored or snapshot logical volume with
dmeventd, if it is installed.
If a device used by a monitored mirror reports an I/O error,
the failure is handled according to
\%\fBmirror_image_fault_policy\fP and \fBmirror_log_fault_policy\fP
set in \fBlvm.conf\fP(5).
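For example, to stop dmeventd monitoring of a mirrored volume
(vg00/mirrorlv is an illustrative name):
.sp
.B lvchange \-\-monitor n vg00/mirrorlv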
.
.HP
.BR \-\-noudevsync
.br
Disable udev synchronisation. The
process will not wait for notification from udev.
It will continue irrespective of any possible udev processing
in the background. You should only use this if udev is not running
or has rules that ignore the devices LVM2 creates.
.
.HP
.BR \-p | \-\-permission
.RB { r | rw }
.br
Change access permission to read-only or read/write.
.
.HP
.BR \-M | \-\-persistent
.RB { y | n }
.br
Set to \fBy\fP to make the minor number specified persistent.
Change of persistent numbers is not supported for pool volumes.
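For example, to assign a persistent minor number to a volume (the minor
number 120 and the volume name are illustrative):
.sp
.B lvchange \-My \-\-minor 120 vg00/lvol1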
.
.HP
.BR \-\-poll
.RB { y | n }
.br
Without polling, a logical volume's backgrounded transformation process
will never complete. If there is an incomplete pvmove or lvconvert (for
example, on rebooting after a crash), use \fB\-\-poll y\fP to restart the
process from its last checkpoint. However, it may not be appropriate to
immediately poll a logical volume when it is activated; use
\fB\-\-poll n\fP to defer and then \fB\-\-poll y\fP to restart the process.
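For example, to resume an interrupted pvmove or lvconvert affecting an
already active volume (vg00/lvol1 is an illustrative name):
.sp
.B lvchange \-\-poll y vg00/lvol1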
.
.HP
.BR \-\-[ raid ] maxrecoveryrate
.BR \fIRate [ b | B | s | S | k | K | m | M | g | G ]
.br
Sets the maximum recovery rate for a RAID logical volume. \fIRate\fP
is specified as an amount per second for each device in the array.
If no suffix is given, then KiB/sec/device is assumed. Setting the
recovery rate to \fB0\fP means it will be unbounded.
.
.HP
.BR \-\-[ raid ] minrecoveryrate
.BR \fIRate [ b | B | s | S | k | K | m | M | g | G ]
.br
Sets the minimum recovery rate for a RAID logical volume. \fIRate\fP
is specified as an amount per second for each device in the array.
If no suffix is given, then KiB/sec/device is assumed. Setting the
recovery rate to \fB0\fP means it will be unbounded.
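For example, to keep recovery I/O between 10 MiB and 100 MiB per second
per device (vg00/raidlv is an illustrative name):
.sp
.B lvchange \-\-minrecoveryrate 10M \-\-maxrecoveryrate 100M vg00/raidlv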
.
.HP
.BR \-\-[ raid ] syncaction
.RB { check | repair }
.br
This argument is used to initiate various RAID synchronization operations.
The \fBcheck\fP and \fBrepair\fP options provide a way to check the
integrity of a RAID logical volume (often referred to as "scrubbing").
These options cause the RAID logical volume to
read all of the data and parity blocks in the array and check for any
discrepancies (e.g. mismatches between mirrors or incorrect parity values).
If \fBcheck\fP is used, the discrepancies will be counted but not repaired.
If \fBrepair\fP is used, the discrepancies will be corrected as they are
encountered. The \fBlvs\fP(8) command can be used to show the number of
discrepancies found or repaired.
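For example, to scrub a RAID logical volume and then inspect the result
with \fBlvs\fP(8) (vg00/raidlv is an illustrative name):
.sp
.B lvchange \-\-syncaction check vg00/raidlv
.br
.B lvs \-o +raid_sync_action,raid_mismatch_count vg00/raidlv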
.
.HP
.BR \-\-[ raid ] writebehind
.IR IOCount
.br
Specify the maximum number of outstanding writes that are allowed to
devices in a RAID1 logical volume that are marked as write-mostly.
Once this value is exceeded, writes become synchronous (i.e. all writes
to the constituent devices must complete before the array signals the
write has completed). Setting the value to zero clears the preference
and allows the system to choose the value arbitrarily.
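For example, to allow up to 512 outstanding writes to the write-mostly
devices of a RAID1 volume (the count and name are illustrative):
.sp
.B lvchange \-\-writebehind 512 vg00/raid1lv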
.
.HP
.BR \-\-[ raid ] writemostly
.BR \fIPhysicalVolume [ : { y | n | t }]
.br
Mark a device in a RAID1 logical volume as write-mostly. All reads
to these drives will be avoided unless absolutely necessary. This keeps
the number of I/Os to the drive to a minimum. The default behavior is to
set the write-mostly attribute for the specified physical volume in the
logical volume. It is possible to also remove the write-mostly flag by
appending a "\fB:n\fP" to the physical volume or to toggle the value by specifying
"\fB:t\fP". The \fB\-\-writemostly\fP argument can be specified more than once
in a single command, making it possible to toggle the write-mostly attributes
for all the physical volumes in a logical volume at once.
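For example, to mark one device of a RAID1 volume as write-mostly and
later clear the flag again (the device and volume names are illustrative):
.sp
.B lvchange \-\-writemostly /dev/sdb1 vg00/raid1lv
.br
.B lvchange \-\-writemostly /dev/sdb1:n vg00/raid1lv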
.
.HP
.BR \-r | \-\-readahead
.RB { \fIReadAheadSectors | auto | none }
.br
Set read ahead sector count of this logical volume.
For volume groups with metadata in lvm1 format, this must
be a value between 2 and 120 sectors.
The default value is "\fBauto\fP" which allows the kernel to choose
a suitable value automatically.
"\fBnone\fP" is equivalent to specifying zero.
.
.HP
.BR \-\-refresh
.br
If the logical volume is active, reload its metadata.
This is not necessary in normal operation, but may be useful
if something has gone wrong or if you're doing clustering
manually without a clustered lock manager.
.
.HP
.BR \-\-resync
.br
Forces the complete resynchronization of a mirror. In normal
circumstances you should not need this option because synchronization
happens automatically. Data is read from the primary mirror device
and copied to the others, so this can take a considerable amount of
time - and during this time you are without a complete redundant copy
of your data.
.
.HP
.BR \-\-sysinit
.br
Indicates that \fBlvchange\fP(8) is being invoked from early system
initialisation scripts (e.g. rc.sysinit or an initrd),
before writeable filesystems are available. As such,
some functionality needs to be disabled and this option
acts as a shortcut which selects an appropriate set of options. Currently
this is equivalent to using \fB\-\-ignorelockingfailure\fP,
\fB\-\-ignoremonitoring\fP, \fB\-\-poll n\fP and setting the
\fBLVM_SUPPRESS_LOCKING_FAILURE_MESSAGES\fP
environment variable.
.sp
If \fB\-\-sysinit\fP is used in conjunction with
\fBlvmetad\fP(8) enabled and running,
autoactivation is preferred over manual activation via direct lvchange call.
Logical volumes are autoactivated according to
\fBauto_activation_volume_list\fP set in \fBlvm.conf\fP(5).
.
.HP
.BR \-Z | \-\-zero
.RB { y | n }
.br
Set zeroing mode for thin pool. Note: already provisioned blocks from pool
in non-zero mode are not cleared in unwritten parts when setting zero to
\fBy\fP.
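For example, to enable zeroing of newly provisioned blocks in a thin pool
(vg00/thinpool is an illustrative name):
.sp
.B lvchange \-Zy vg00/thinpool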
.
.SH ENVIRONMENT VARIABLES
.
.TP
.B LVM_SUPPRESS_LOCKING_FAILURE_MESSAGES
Suppress locking failure messages.
.
.SH EXAMPLES
.
Changes the permission on volume lvol1 in volume group vg00 to be read-only:
.sp
.B lvchange \-pr vg00/lvol1
.
.SH SEE ALSO
.
.nh
.BR lvm (8),
.BR lvmetad (8),
.BR lvs (8),
.BR lvcreate (8),
.BR vgchange (8),
.BR lvmcache (7),
.BR lvmthin (7),
.BR lvm.conf (5)