.TH VGCHANGE 8 "LVM TOOLS #VERSION#" "Sistina Software UK" \" -*- nroff -*-
.SH NAME
vgchange \(em change attributes of a volume group
.SH SYNOPSIS
.B vgchange
.RB [ \-\-addtag
.IR Tag ]
.RB [ \-\-alloc
.IR AllocationPolicy ]
.RB [ \-A | \-\-autobackup
.RI { y | n }]
.RB [ \-a | \-\-activate
.RI [ a | e | s | l ]
.RI { y | n }]
.RB [ \-\-activationmode
.IB { complete | degraded | partial } ]
.RB [ \-K | \-\-ignoreactivationskip ]
.RB [ \-\-monitor
.RI { y | n }]
.RB [ \-\-poll
.RI { y | n }]
.RB [ \-c | \-\-clustered
.RI { y | n }]
.RB [ \-u | \-\-uuid ]
.RB [ \-\-commandprofile
.IR ProfileName ]
.RB [ \-d | \-\-debug ]
.RB [ \-\-deltag
.IR Tag ]
.RB [ \-\-detachprofile ]
.RB [ \-h | \-\-help ]
.RB [ \-\-ignorelockingfailure ]
.RB [ \-\-ignoremonitoring ]
.RB [ \-\-ignoreskippedcluster ]
.RB [ \-\-sysinit ]
.RB [ \-\-noudevsync ]
.RB [ \-\-lock\-start ]
.RB [ \-\-lock\-stop ]
.RB [ \-\-lock\-type
.IR LockType ]
.RB [ \-l | \-\-logicalvolume
.IR MaxLogicalVolumes ]
.RB [ \-p | \-\-maxphysicalvolumes
.IR MaxPhysicalVolumes ]
.RB [ \-\-metadataprofile
.IR ProfileName ]
.RB [ \-\- [ vg ] metadatacopies
.IR NumberOfCopies | unmanaged | all ]
.RB [ \-P | \-\-partial ]
.RB [ \-s | \-\-physicalextentsize
.IR PhysicalExtentSize [ bBsSkKmMgGtTpPeE ]]
.RB [ \-\-reportformat
.RB { basic | json }]
.RB [ \-S | \-\-select
.IR Selection ]
.RB [ \-\-systemid
.IR SystemID ]
.RB [ \-\-refresh ]
.RB [ \-t | \-\-test ]
.RB [ \-v | \-\-verbose ]
.RB [ \-\-version ]
.RB [ \-x | \-\-resizeable
.RI { y | n }]
.RI [ VolumeGroupName ...]
.SH DESCRIPTION
vgchange allows you to change the attributes of one or more
volume groups. Its main purpose is to activate and deactivate
.IR VolumeGroupName ,
or all volume groups if none is specified. Only active volume groups
are subject to changes and allow access to their logical volumes.
[Not yet implemented: During volume group activation, if
.B vgchange
recognizes snapshot logical volumes which were dropped because they ran
out of space, it displays a message informing the administrator that such
snapshots should be removed (see
.BR lvremove (8)).
]
.SH OPTIONS
See \fBlvm\fP(8) for common options.
.TP
.BR \-A ", " \-\-autobackup " {" \fIy | \fIn }
Controls automatic backup of metadata after the change. See
.BR vgcfgbackup (8).
Default is yes.
.TP
.BR \-a ", " \-\-activate " [" \fIa | \fIe | \fIs | \fIl ]{ \fIy | \fIn }
Controls the availability of the logical volumes in the volume
group for input/output.
In other words, makes the logical volumes known/unknown to the kernel.
If the autoactivation option is used (\-aay), each logical volume in
the volume group is activated only if it matches an item in the
activation/auto_activation_volume_list set in lvm.conf. If this
list is not set, then all volumes are considered for activation.
The \-aay option should also be used during system boot so it is
possible to select which volumes to activate using the
activation/auto_activation_volume_list setting.
.IP
Activation of a logical volume creates a symbolic link
/dev/VolumeGroupName/LogicalVolumeName pointing to the device node.
This link is removed on deactivation.
All software and scripts should access the device through
this symbolic link and present this as the name of the device.
The location and name of the underlying device node may depend on
the distribution and configuration (e.g. udev) and might change
from release to release.
.IP
In a clustered VG, clvmd is used for activation, and the
following options are possible:
With \-aey, clvmd activates the LV in exclusive mode
(with an exclusive lock), allowing a single node to activate the LV.
With \-asy, clvmd activates the LV in shared mode
(with a shared lock), allowing multiple nodes to activate the LV concurrently.
If the LV type prohibits shared access, such as an LV with a snapshot,
the 's' option is ignored and an exclusive lock is used.
With \-ay (no mode specified), clvmd activates the LV in shared mode
if the LV type allows concurrent access, such as a linear LV.
Otherwise, clvmd activates the LV in exclusive mode.
With \-aey, \-asy, and \-ay, clvmd attempts to activate the LV
on all nodes. If exclusive mode is used, then only one of the
nodes will be successful.
With \-an, clvmd attempts to deactivate the LV on all nodes.
With \-aly, clvmd activates the LV only on the local node, and \-aln
deactivates only on the local node. If the LV type allows concurrent
access, then shared mode is used, otherwise exclusive.
LVs with snapshots are always activated exclusively because they can only
be used on one node at once.
For local VGs, \-ay, \-aey, and \-asy are all equivalent.
.IP
In a shared VG, lvmlockd is used for locking if LVM is compiled with lockd
support, and the following options are possible:
With \-aey, the command activates the LV in exclusive mode, allowing a
single host to activate the LV (the host running the command). Before
activating the LV, the command uses lvmlockd to acquire an exclusive lock
on the LV. If the lock cannot be acquired, the LV is not activated and an
error is reported. This would happen if the LV is active on another host.
With \-asy, the command activates the LV in shared mode, allowing multiple
hosts to activate the LV concurrently. Before activating the LV, the
command uses lvmlockd to acquire a shared lock on the LV. If the lock
cannot be acquired, the LV is not activated and an error is reported.
This would happen if the LV is active exclusively on another host. If the
LV type prohibits shared access, such as a snapshot, the command will
report an error and fail.
With \-an, the command deactivates the LV on the host running the command.
After deactivating the LV, the command uses lvmlockd to release the
current lock on the LV.
With lvmlockd, an unspecified mode is always exclusive; \-ay defaults to
\-aey.
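.IP
For example, to autoactivate the logical volumes of a volume group named
vg00, matching them against activation/auto_activation_volume_list:
.sp
.B vgchange \-aay vg00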
.TP
.BR \-\-activationmode " {" \fIcomplete | \fIdegraded | \fIpartial }
The activation mode determines whether logical volumes are allowed to
activate when there are physical volumes missing (e.g. due to a device
failure). \fIcomplete\fP is the most restrictive, allowing only those
logical volumes to be activated that are not affected by the missing
PVs. \fIdegraded\fP allows RAID logical volumes to be activated even if
they have PVs missing. (Note that the "mirror" segment type is not
considered a RAID logical volume. The "raid1" segment type should
be used instead.) Finally, \fIpartial\fP allows any logical volume to
be activated even if portions are missing due to a missing or failed
PV. This last option should only be used when performing recovery or
repair operations. \fIdegraded\fP is the default mode. To change it, modify
.B activation_mode
in
.BR lvm.conf (5).
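.IP
For example, to activate the RAID logical volumes of vg00 even while some
of its physical volumes are missing:
.sp
.B vgchange \-ay \-\-activationmode degraded vg00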
.TP
.BR \-K ", " \-\-ignoreactivationskip
Ignore the flag to skip Logical Volumes during activation.
.TP
.BR \-c ", " \-\-clustered " {" \fIy | \fIn }
If clustered locking is enabled, this indicates whether this
Volume Group is shared with other nodes in the cluster or whether
it contains only local disks that are not visible on the other nodes.
If the cluster infrastructure is unavailable on a particular node at a
particular time, you may still be able to use Volume Groups that
are not marked as clustered.
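.IP
For example, to mark vg00 as a clustered Volume Group shared with the
other nodes in the cluster:
.sp
.B vgchange \-c y vg00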
.TP
.BR \-\-detachprofile
Detach any metadata configuration profiles attached to the given
Volume Groups. See \fBlvm.conf\fP(5) for more information
about \fBmetadata profiles\fP.
.TP
.BR \-u ", " \-\-uuid
Generate a new random UUID for the specified Volume Groups.
.TP
.BR \-\-monitor " {" \fIy | \fIn }
Start or stop monitoring a mirrored or snapshot logical volume with
dmeventd, if it is installed.
If a device used by a monitored mirror reports an I/O error,
the failure is handled according to
.B mirror_image_fault_policy
and
.B mirror_log_fault_policy
set in
.BR lvm.conf (5).
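.IP
For example, to start monitoring the mirrored and snapshot logical
volumes of vg00 with dmeventd:
.sp
.B vgchange \-\-monitor y vg00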
.TP
.BR \-\-poll " {" \fIy | \fIn }
Without polling, a logical volume's backgrounded transformation process
will never complete. If there is an incomplete pvmove or lvconvert (for
example, on rebooting after a crash), use \fB\-\-poll y\fP to restart the
process from its last checkpoint. However, it may not be appropriate to
poll a logical volume immediately when it is activated; use
\fB\-\-poll n\fP to defer and then \fB\-\-poll y\fP to restart the process.
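.IP
For example, to restart any interrupted pvmove or lvconvert operations
in vg00:
.sp
.B vgchange \-\-poll y vg00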
.TP
.BR \-\-sysinit
Indicates that vgchange(8) is being invoked from early system initialisation
scripts (e.g. rc.sysinit or an initrd), before writeable filesystems are
available. As such, some functionality needs to be disabled and this option
acts as a shortcut which selects an appropriate set of options. Currently
this is equivalent to using
.BR \-\-ignorelockingfailure ,
.BR \-\-ignoremonitoring ,
.B \-\-poll n
and setting the \fBLVM_SUPPRESS_LOCKING_FAILURE_MESSAGES\fP
environment variable.
If \fB\-\-sysinit\fP is used while \fBlvmetad\fP(8) is enabled and running,
autoactivation is preferred over manual activation via a direct vgchange call.
Logical volumes are autoactivated according to auto_activation_volume_list
set in lvm.conf(5).
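.IP
A typical invocation from an init script might look like the following;
the exact arguments depend on the distribution:
.sp
.B vgchange \-a y \-\-sysinit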
.TP
.BR \-\-noudevsync
Disable udev synchronisation. The
process will not wait for notification from udev.
It will continue irrespective of any possible udev processing
in the background. You should only use this if udev is not running
or has rules that ignore the devices LVM2 creates.
.TP
.BR \-\-ignoremonitoring
Make no attempt to interact with dmeventd unless
.BR \-\-monitor
is specified.
Do not use this if dmeventd is already monitoring a device.
.TP
.BR \-\-lock\-start
Start the lockspace of a shared VG in lvmlockd. lvmlockd locks become
available for the VG, allowing LVM to use the VG. See
.BR lvmlockd (8).
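.IP
For example, to start the lockspace of the shared volume group vg00:
.sp
.B vgchange \-\-lock\-start vg00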
.TP
.BR \-\-lock\-stop
Stop the lockspace of a shared VG in lvmlockd. lvmlockd locks become
unavailable for the VG, preventing LVM from using the VG. See
.BR lvmlockd (8).
.TP
.BR \-\-lock\-type " " \fILockType
Change the VG lock type to or from a shared lock type used with lvmlockd. See
.BR lvmlockd (8).
.TP
.BR \-l ", " \-\-logicalvolume " " \fIMaxLogicalVolumes
Changes the maximum number of logical volumes allowed in an existing,
inactive volume group.
.TP
.BR \-p ", " \-\-maxphysicalvolumes " " \fIMaxPhysicalVolumes
Changes the maximum number of physical volumes that can belong
to this volume group.
For volume groups with metadata in lvm1 format, the limit is 255.
If the metadata uses lvm2 format, the value 0 removes this restriction:
there is then no limit. If you have a large number of physical volumes in
a volume group with metadata in lvm2 format, for tool performance reasons,
you should consider some use of \fB\-\-pvmetadatacopies 0\fP as described in
\fBpvcreate(8)\fP, and/or use \fB\-\-vgmetadatacopies\fP.
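.IP
For example, to remove the limit on the number of physical volumes in the
lvm2-format volume group vg00:
.sp
.B vgchange \-p 0 vg00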
.TP
.BR \-\-metadataprofile " " \fIProfileName
Uses and attaches the \fIProfileName\fP configuration profile to the volume
group metadata. The next time the volume group is processed, the profile
is applied automatically. The profile is inherited by all logical volumes
in the volume group unless the logical volume itself has its own profile
attached. See \fBlvm.conf\fP(5) for more information about \fBmetadata profiles\fP.
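.IP
For example, to attach a configuration profile named vg00profile (an
illustrative name) to vg00:
.sp
.B vgchange \-\-metadataprofile vg00profile vg00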
.TP
.BR \-\- [ vg ] metadatacopies " " \fINumberOfCopies | \fIunmanaged | \fIall
Sets the desired number of metadata copies in the volume group. If set to
a non-zero value, LVM will automatically manage the 'metadataignore'
flags on the physical volumes (see \fBpvchange\fP or \fBpvcreate \-\-metadataignore\fP) in order
to achieve \fINumberOfCopies\fP copies of metadata. If set to \fIunmanaged\fP,
LVM will not automatically manage the 'metadataignore' flags. If set to
\fIall\fP, LVM will first clear all of the 'metadataignore' flags on all
metadata areas in the volume group, then set the value to \fIunmanaged\fP.
The \fBvgmetadatacopies\fP option is useful for volume groups containing
large numbers of physical volumes with metadata as it may be used to
minimize metadata read and write overhead.
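.IP
For example, to have LVM maintain exactly two copies of the metadata
in vg00:
.sp
.B vgchange \-\-vgmetadatacopies 2 vg00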
.TP
.BR \-s ", " \-\-physicalextentsize " " \fIPhysicalExtentSize [ \fIBbBsSkKmMgGtTpPeE ]
Changes the physical extent size on physical volumes of this volume group.
A size suffix (k for kilobytes up to t for terabytes) is optional;
megabytes are the default if no suffix is present. For LVM2 format,
the value must be a
power of 2 of at least 1 sector (where the sector size is the largest sector
size of the PVs currently used in the VG) or, if not a power of 2, at least
128KiB. For the older LVM1 format, it must be a power of 2 of at least 8KiB.
The default is 4 MiB.
Before increasing the physical extent size, you might need to use lvresize,
pvresize and/or pvmove so that everything fits. For example, every
contiguous range of extents used in a logical volume must start and
end on an extent boundary.
If the volume group metadata uses lvm1 format, extents can vary in size from
8KiB to 16GiB and there is a limit of 65534 extents in each logical volume.
The default of 4 MiB leads to a maximum logical volume size of around 256GiB.
If the volume group metadata uses lvm2 format, those restrictions do not apply;
however, a large number of extents will slow down the tools, though it has no
impact on I/O performance to the logical volume. The smallest PE is 1KiB.
The 2.4 kernel has a limitation of 2TiB per block device.
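.IP
For example, to change the physical extent size of vg00 to 8MiB
(provided all existing extent ranges remain aligned):
.sp
.B vgchange \-s 8m vg00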
.TP
.BR \-\-systemid " " \fISystemID
Changes the system ID of the VG. Using this option requires caution
because the VG may become foreign to the host running the command,
leaving the host unable to access it. See
.BR lvmsystemid (7).
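.IP
For example, to assign the system ID host1 (an illustrative value;
normally the local host's system ID) to vg00:
.sp
.B vgchange \-\-systemid host1 vg00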
.TP
.BR \-\-refresh
If any logical volume in the volume group is active, reload its metadata.
This is not necessary in normal operation, but may be useful
if something has gone wrong or if you're doing clustering
manually without a clustered lock manager.
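.IP
For example, to reload the metadata of all active logical volumes in vg00:
.sp
.B vgchange \-\-refresh vg00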
.TP
.BR \-x ", " \-\-resizeable " {" \fIy | \fIn }
Enables or disables the extension or reduction of this volume group
with physical volumes.
.SH Examples
To activate all known volume groups in the system:
.sp
.B vgchange \-a y
.P
To change the maximum number of logical volumes of inactive volume group
vg00 to 128:
.sp
.B vgchange \-l 128 /dev/vg00
.SH SEE ALSO
.BR lvchange (8),
.BR lvm (8),
.BR vgcreate (8)