.TH LVCHANGE 8 "LVM TOOLS #VERSION#" "Sistina Software UK" \" -*- nroff -*-
.SH NAME
lvchange \(em change attributes of a logical volume
.SH SYNOPSIS
.B lvchange
.RB [ \-\-addtag
.IR Tag ]
.RB [ \-A | \-\-autobackup
.RI { y | n }]
.RB [ \-a | \-\-activate
.RI [ a | e | s | l ]{ y | n }]
.RB [ \-\-activationmode
.RI { complete | degraded | partial }]
.RB [ \-k | \-\-setactivationskip
.RI { y | n }]
.RB [ \-K | \-\-ignoreactivationskip ]
.RB [ \-\-alloc
.IR AllocationPolicy ]
.RB [ \-\-cachepolicy
.IR policy ]
.RB [ \-\-cachesettings
.IR key=value ]
.RB [ \-\-commandprofile
.IR ProfileName ]
.RB [ \-C | \-\-contiguous
.RI { y | n }]
.RB [ \-d | \-\-debug ]
.RB [ \-\-degraded ]
.RB [ \-\-deltag
.IR Tag ]
.RB [ \-\-detachprofile ]
.RB [ \-\-discards
.RI { ignore | nopassdown | passdown }]
.RB [ \-\-errorwhenfull
.RI { y | n }]
.RB [ \-\-resync ]
.RB [ \-h | \-? | \-\-help ]
.RB [ \-\-ignorelockingfailure ]
.RB [ \-\-ignoremonitoring ]
.RB [ \-\-ignoreskippedcluster ]
.RB [ \-\-monitor
.RI { y | n }]
.RB [ \-\-poll
.RI { y | n }]
.RB [ \-\-[raid]maxrecoveryrate
.IR Rate ]
.RB [ \-\-[raid]minrecoveryrate
.IR Rate ]
.RB [ \-\-[raid]syncaction
.RI { check | repair }]
.RB [ \-\-[raid]writebehind
.IR IOCount ]
.RB [ \-\-[raid]writemostly
.IR PhysicalVolume [ : { y | n | t }]]
.RB [ \-\-sysinit ]
.RB [ \-\-noudevsync ]
.RB [ \-\-metadataprofile
.IR ProfileName ]
.RB [ \-M | \-\-persistent
.RI { y | n }
.RB [ \-\-minor
.IR minor ]
.RB [ \-\-major
.IR major ]]
.RB [ \-P | \-\-partial ]
.RB [ \-p | \-\-permission
.RI { r | rw }]
.RB [ \-r | \-\-readahead
.RI { ReadAheadSectors | auto | none }]
.RB [ \-\-refresh ]
.RB [ \-S | \-\-select
.IR Selection ]
.RB [ \-t | \-\-test ]
.RB [ \-v | \-\-verbose ]
.RB [ \-Z | \-\-zero
.RI { y | n }]
.RI [ LogicalVolumePath ...]
.SH DESCRIPTION
lvchange allows you to change the attributes of a logical volume,
including making it known to the kernel and ready for use.
.SH OPTIONS
See \fBlvm\fP(8) for common options.
.TP
.BR \-a ", " \-\-activate " [" \fIa | \fIe | \fIs | \fIl ]{ \fIy | \fIn }
Controls the availability of the logical volumes for use.
Communicates with the kernel device-mapper driver via
libdevmapper to activate (\-ay) or deactivate (\-an) the
logical volumes.
.IP
Activation of a logical volume creates a symbolic link
/dev/VolumeGroupName/LogicalVolumeName pointing to the device node.
This link is removed on deactivation.
All software and scripts should access the device through
this symbolic link and present this as the name of the device.
The location and name of the underlying device node may depend on
the distribution and configuration (e.g. udev) and might change
from release to release.
.IP
If the autoactivation option is used (\-aay),
the logical volume is activated only if it matches an item in
the activation/auto_activation_volume_list set in lvm.conf.
If this list is not set, then all volumes are considered for
activation. The \-aay option should also be used during system
boot so that it is possible to select which volumes to activate using
the activation/auto_activation_volume_list setting.
.IP
In a clustered VG, clvmd is used for activation, and the
following options are possible:
With \-aey, clvmd activates the LV in exclusive mode
(with an exclusive lock), allowing a single node to activate the LV.
With \-asy, clvmd activates the LV in shared mode
(with a shared lock), allowing multiple nodes to activate the LV concurrently.
If the LV type prohibits shared access, such as an LV with a snapshot,
the 's' option is ignored and an exclusive lock is used.
With \-ay (no mode specified), clvmd activates the LV in shared mode
if the LV type allows concurrent access, such as a linear LV.
Otherwise, clvmd activates the LV in exclusive mode.
With \-aey, \-asy, and \-ay, clvmd attempts to activate the LV
on all nodes. If exclusive mode is used, then only one of the
nodes will be successful.
With \-an, clvmd attempts to deactivate the LV on all nodes.
With \-aly, clvmd activates the LV only on the local node, and \-aln
deactivates only on the local node. If the LV type allows concurrent
access, then shared mode is used, otherwise exclusive.
LVs with snapshots are always activated exclusively because they can only
be used on one node at once.
For local VGs, \-ay, \-aey, and \-asy are all equivalent.
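.IP
For example, a minimal invocation to activate and then deactivate a
logical volume (the names "vg00" and "lvol0" are placeholders):
.sp
.B lvchange \-ay vg00/lvol0
.sp
.B lvchange \-an vg00/lvol0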
.TP
.BR \-\-activationmode " {" \fIcomplete | \fIdegraded | \fIpartial }
The activation mode determines whether logical volumes are allowed to
activate when there are physical volumes missing (e.g. due to a device
failure). \fIcomplete\fP is the most restrictive, allowing only those
logical volumes to be activated that are not affected by the missing
PVs. \fIdegraded\fP allows RAID logical volumes to be activated even if
they have PVs missing. (Note that the "\fImirror\fP" segment type is not
considered a RAID logical volume. The "\fIraid1\fP" segment type should
be used instead.) Finally, \fIpartial\fP allows any logical volume to
be activated even if portions are missing due to a missing or failed
PV. This last option should only be used when performing recovery or
repair operations. \fIdegraded\fP is the default mode. To change it, modify
.B activation_mode
in
.BR lvm.conf (5).
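.IP
For example, to attempt activation of a RAID logical volume that has
lost a device (the names "vg00" and "raid_lv" are placeholders):
.sp
.B lvchange \-ay \-\-activationmode degraded vg00/raid_lv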
.TP
.BR \-k ", " \-\-setactivationskip " {" \fIy | \fIn }
Controls whether Logical Volumes are persistently flagged to be
skipped during activation. By default, thin snapshot volumes are
flagged for activation skip. To activate such volumes,
an extra \fB\-K/\-\-ignoreactivationskip\fP option must be used.
The flag is not applied during deactivation. To see whether
the flag is attached, use \fBlvs\fP command where the state
of the flag is reported within \fBlv_attr\fP bits.
.TP
.BR \-K ", " \-\-ignoreactivationskip
Ignore the flag to skip Logical Volumes during activation.
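.IP
For example, to activate a thin snapshot that carries the activation
skip flag (the name "vg00/thin_snap" is a placeholder):
.sp
.B lvchange \-ay \-K vg00/thin_snap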
.TP
.BR \-\-cachepolicy " " policy ", " \-\-cachesettings " " key=value
Only applicable to cached LVs; see also \fBlvmcache(7)\fP. Sets
the cache policy and its associated tunable settings. In most use-cases,
default values should be adequate.
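.IP
For example, to set a cache policy together with one of its tunables
(the policy name and setting shown are illustrative; the values actually
supported depend on the kernel and are described in \fBlvmcache\fP(7)):
.sp
.B lvchange \-\-cachepolicy mq \-\-cachesettings migration_threshold=2048 vg00/cached_lv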
.TP
.BR \-C ", " \-\-contiguous " {" \fIy | \fIn }
Tries to set or reset the contiguous allocation policy for
logical volumes. It's only possible to change a non-contiguous
logical volume's allocation policy to contiguous if all of the
allocated physical extents are already contiguous.
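.IP
For example, to request the contiguous allocation policy for a volume
(the name "vg00/lvol0" is a placeholder):
.sp
.B lvchange \-Cy vg00/lvol0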
.TP
.BR \-\-detachprofile
Detach any metadata configuration profiles attached to the given
Logical Volumes. See \fBlvm.conf\fP(5) for more information
about \fBmetadata profiles\fP.
.TP
.BR \-\-discards " {" \fIignore | \fInopassdown | \fIpassdown }
Set this to \fIignore\fP to ignore any discards received by a
thin pool Logical Volume. Set to \fInopassdown\fP to process such
discards within the thin pool itself and allow the no-longer-needed
extents to be overwritten by new data. Set to \fIpassdown\fP (the
default) to process them both within the thin pool itself and to
pass them down to the underlying device.
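.IP
For example, to keep discards inside a thin pool without passing them
down to the underlying devices (the pool name is a placeholder):
.sp
.B lvchange \-\-discards nopassdown vg00/thin_pool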
.TP
.BR \-\-errorwhenfull " {" \fIy | \fIn }
Sets the thin pool behavior when data space is exhausted. See
.BR lvcreate (8)
for more information.
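.IP
For example, to make a thin pool return errors immediately when it runs
out of data space (the pool name is a placeholder):
.sp
.B lvchange \-\-errorwhenfull y vg00/thin_pool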
.TP
.B \-\-resync
Forces the complete resynchronization of a mirror. In normal
circumstances you should not need this option because synchronization
happens automatically. Data is read from the primary mirror device
and copied to the others, so this can take a considerable amount of
time - and during this time you are without a complete redundant copy
of your data.
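.IP
For example, to force a full resynchronisation of a mirrored volume
(the name "vg00/mirror_lv" is a placeholder):
.sp
.B lvchange \-\-resync vg00/mirror_lv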
.TP
.B \-\-metadataprofile \fIProfileName
Uses and attaches the ProfileName configuration profile to the logical
volume metadata. Whenever the logical volume is subsequently processed,
the profile is automatically applied. If the volume group has another
profile attached, the logical volume profile is preferred.
See \fBlvm.conf\fP(5) for more information about \fBmetadata profiles\fP.
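.IP
For example, to attach a configuration profile to a logical volume
(the profile name "thin_perf" is purely illustrative):
.sp
.B lvchange \-\-metadataprofile thin_perf vg00/lvol0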
.TP
.B \-\-minor \fIminor
Set the minor number.
.TP
.B \-\-major \fImajor
Sets the major number. This option is supported only on older systems
(kernel version 2.4) and is ignored on modern Linux systems where major
numbers are dynamically assigned.
.TP
.BR \-\-monitor " {" \fIy | \fIn }
Start or stop monitoring a mirrored or snapshot logical volume with
dmeventd, if it is installed.
If a device used by a monitored mirror reports an I/O error,
the failure is handled according to
\fBmirror_image_fault_policy\fP and \fBmirror_log_fault_policy\fP
set in \fBlvm.conf\fP.
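.IP
For example, to start dmeventd monitoring of a mirrored volume
(the name "vg00/mirror_lv" is a placeholder):
.sp
.B lvchange \-\-monitor y vg00/mirror_lv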
.TP
.BR \-\-poll " {" \fIy | \fIn }
Without polling, a logical volume's background transformation process
will never complete. If there is an incomplete pvmove or lvconvert (for
example, on rebooting after a crash), use \fB\-\-poll y\fP to restart the
process from its last checkpoint. However, it may not be appropriate to
poll a logical volume immediately when it is activated; use
\fB\-\-poll n\fP to defer and then \fB\-\-poll y\fP to restart the process.
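.IP
For example, to resume an interrupted background transformation on a
volume (the name "vg00/lvol0" is a placeholder):
.sp
.B lvchange \-\-poll y vg00/lvol0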
.TP
.IR \fB\-\-[raid]maxrecoveryrate " " \fIRate [ bBsSkKmMgG ]
Sets the maximum recovery rate for a RAID logical volume. \fIRate\fP
is specified as an amount per second for each device in the array.
If no suffix is given, then KiB/sec/device is assumed. Setting the
recovery rate to 0 means it will be unbounded.
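.IP
For example, to cap recovery traffic at 10MiB per second per device
(the name "vg00/raid_lv" is a placeholder):
.sp
.B lvchange \-\-maxrecoveryrate 10M vg00/raid_lv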
.TP
.IR \fB\-\-[raid]minrecoveryrate " " \fIRate [ bBsSkKmMgG ]
Sets the minimum recovery rate for a RAID logical volume. \fIRate\fP
is specified as an amount per second for each device in the array.
If no suffix is given, then KiB/sec/device is assumed. Setting the
recovery rate to 0 means it will be unbounded.
.TP
.BR \-\-[raid]syncaction " {" \fIcheck | \fIrepair }
This argument is used to initiate various RAID synchronization operations.
The \fIcheck\fP and \fIrepair\fP options provide a way to check the
integrity of a RAID logical volume (often referred to as "scrubbing").
These options cause the RAID logical volume to
read all of the data and parity blocks in the array and check for any
discrepancies (e.g. mismatches between mirrors or incorrect parity values).
If \fIcheck\fP is used, the discrepancies will be counted but not repaired.
If \fIrepair\fP is used, the discrepancies will be corrected as they are
encountered. The 'lvs' command can be used to show the number of
discrepancies found or repaired.
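.IP
For example, to scrub a RAID logical volume without repairing it
(the name "vg00/raid_lv" is a placeholder):
.sp
.B lvchange \-\-syncaction check vg00/raid_lv
.sp
Any mismatches found can afterwards be inspected with \fBlvs\fP(8); the
exact report field names depend on the LVM version in use.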
.TP
.BR \-\-[raid]writebehind " " \fIIOCount
Specify the maximum number of outstanding writes that are allowed to
devices in a RAID1 logical volume that are marked as \fIwrite-mostly\fP.
Once this value is exceeded, writes become synchronous (i.e. all writes
to the constituent devices must complete before the array signals the
write has completed). Setting the value to zero clears the preference
and allows the system to choose the value arbitrarily.
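.IP
For example, to allow at most 512 outstanding writes to write-mostly
devices (the value and the name "vg00/raid1_lv" are illustrative):
.sp
.B lvchange \-\-writebehind 512 vg00/raid1_lv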
.TP
.IR \fB\-\-[raid]writemostly " " PhysicalVolume [ : { y | n | t }]
Mark a device in a RAID1 logical volume as \fIwrite-mostly\fP. All reads
to these drives will be avoided unless absolutely necessary. This keeps
the number of I/Os to the drive to a minimum. The default behavior is to
set the write-mostly attribute for the specified physical volume in the
logical volume. It is also possible to remove the write-mostly flag by
appending a "\fI:n\fP" to the physical volume or to toggle the value by specifying
"\fI:t\fP". The \fB\-\-writemostly\fP argument can be specified more than once
in a single command, making it possible to toggle the write-mostly attributes
for all the physical volumes in a logical volume at once.
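.IP
For example, to mark one device as write-mostly and later clear the
flag again (the device and volume names are placeholders):
.sp
.B lvchange \-\-writemostly /dev/sdb1 vg00/raid1_lv
.sp
.B lvchange \-\-writemostly /dev/sdb1:n vg00/raid1_lv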
.TP
.B \-\-sysinit
Indicates that \fBlvchange\fP(8) is being invoked from early system
initialisation scripts (e.g. rc.sysinit or an initrd),
before writeable filesystems are available. As such,
some functionality needs to be disabled and this option
acts as a shortcut which selects an appropriate set of options. Currently
this is equivalent to using \fB\-\-ignorelockingfailure\fP,
\fB\-\-ignoremonitoring\fP, \fB\-\-poll n\fP and setting the
\fBLVM_SUPPRESS_LOCKING_FAILURE_MESSAGES\fP
environment variable.
If \fB\-\-sysinit\fP is used in conjunction with lvmetad(8) enabled and running,
autoactivation is preferred over manual activation via a direct lvchange call.
Logical volumes are autoactivated according to the auto_activation_volume_list
setting in lvm.conf(5).
.TP
.B \-\-noudevsync
Disable udev synchronisation. The
process will not wait for notification from udev.
It will continue irrespective of any possible udev processing
in the background. You should only use this if udev is not running
or has rules that ignore the devices LVM2 creates.
.TP
.B \-\-ignoremonitoring
Make no attempt to interact with dmeventd unless \fB\-\-monitor\fP
is specified.
Do not use this if dmeventd is already monitoring a device.
.TP
.BR \-M ", " \-\-persistent " {" \fIy | \fIn }
Set to y to make the specified minor number persistent.
Changing persistent numbers is not supported for pool volumes.
.TP
.BR \-p ", " \-\-permission " {" \fIr | \fIrw }
Change access permission to read-only or read/write.
.TP
.BR \-r ", " \-\-readahead " {" \fIReadAheadSectors | \fIauto | \fInone }
Set the read-ahead sector count of this logical volume.
For volume groups with metadata in lvm1 format, this must
be a value between 2 and 120 sectors.
The default value is "auto" which allows the kernel to choose
a suitable value automatically.
"None" is equivalent to specifying zero.
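.IP
For example, to set an explicit read-ahead of 256 sectors, or to return
to the automatic default (the name "vg00/lvol0" is a placeholder):
.sp
.B lvchange \-r 256 vg00/lvol0
.sp
.B lvchange \-r auto vg00/lvol0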
.TP
.B \-\-refresh
If the logical volume is active, reload its metadata.
This is not necessary in normal operation, but may be useful
if something has gone wrong or if you're doing clustering
manually without a clustered lock manager.
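.IP
For example, to reload the metadata of an active volume
(the name "vg00/lvol0" is a placeholder):
.sp
.B lvchange \-\-refresh vg00/lvol0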
.TP
.BR \-Z ", " \-\-zero " {" \fIy | \fIn }
Set the zeroing mode for a thin pool. Note: blocks that were already
provisioned while the pool was in non-zero mode are not cleared in their
unwritten parts when zeroing is later set to \fIy\fP.
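.IP
For example, to enable zeroing of newly provisioned blocks in a thin
pool (the pool name is a placeholder):
.sp
.B lvchange \-Zy vg00/thin_pool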
.SH ENVIRONMENT VARIABLES
.TP
.B LVM_SUPPRESS_LOCKING_FAILURE_MESSAGES
Suppress locking failure messages.
.SH Examples
Changes the permission on volume lvol1 in volume group vg00 to be read-only:
.sp
.B lvchange \-pr vg00/lvol1
.SH SEE ALSO
.BR lvm (8),
.BR lvmcache (7),
.BR lvmthin (7),
.BR lvcreate (8),
.BR vgchange (8)