mirror of git://sourceware.org/git/lvm2.git
synced 2025-01-02 01:18:26 +03:00

man: updates to lvmlockd

The terminology has migrated toward using "shared VG" rather than
"lockd VG". Also improve the wording in a number of places.

This commit is contained in:
parent e84e9cd115
commit b5f444d447
@@ -84,8 +84,8 @@ For default settings, see lvmlockd -h.
 
 .SS Initial set up
 
-Using LVM with lvmlockd for the first time includes some one-time set up
-steps:
+Setting up LVM to use lvmlockd and a shared VG for the first time includes
+some one-time set up steps:
 
 .SS 1. choose a lock manager
 
@@ -94,7 +94,7 @@ steps:
 If dlm (or corosync) are already being used by other cluster
 software, then select dlm. dlm uses corosync which requires additional
 configuration beyond the scope of this document. See corosync and dlm
-documentation for instructions on configuration, setup and usage.
+documentation for instructions on configuration, set up and usage.
 
 .I sanlock
 .br
@@ -117,7 +117,9 @@ Assign each host a unique host_id in the range 1-2000 by setting
 
 .SS 3. start lvmlockd
 
-Use a unit/init file, or run the lvmlockd daemon directly:
+Start the lvmlockd daemon.
+.br
+Use systemctl, a cluster resource agent, or run directly, e.g.
 .br
 systemctl start lvm2-lvmlockd
 
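The settings referenced in the steps above (global/use_lvmlockd in lvm.conf, local/host_id in lvmlocal.conf) can be sketched as a minimal fragment. This is an assumed example, not defaults: host_id is only needed for sanlock, and the value 1 is arbitrary.

```shell
# Minimal configuration fragment for lvmlockd set up (example values;
# host_id must be unique per host, range 1-2000, sanlock only).
conf=$(mktemp)
cat > "$conf" <<'EOF'
global {
    use_lvmlockd = 1
}
local {
    host_id = 1
}
EOF

# quick sanity check before copying the settings into /etc/lvm
grep -q 'use_lvmlockd = 1' "$conf" && grep -q 'host_id' "$conf" && echo ok
```

On a real system these settings go in /etc/lvm/lvm.conf and /etc/lvm/lvmlocal.conf respectively; the temp file here only illustrates the shape.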
@@ -125,14 +127,17 @@ systemctl start lvm2-lvmlockd
 
 .I sanlock
 .br
-Use unit/init files, or start wdmd and sanlock daemons directly:
+Start the sanlock and wdmd daemons.
+.br
+Use systemctl or run directly, e.g.
 .br
 systemctl start wdmd sanlock
 
 .I dlm
 .br
-Follow external clustering documentation when applicable, or use
-unit/init files:
+Start the dlm and corosync daemons.
+.br
+Use systemctl, a cluster resource agent, or run directly, e.g.
 .br
 systemctl start corosync dlm
 
@@ -141,18 +146,17 @@ systemctl start corosync dlm
 vgcreate --shared <vgname> <devices>
 
 The shared option sets the VG lock type to sanlock or dlm depending on
-which lock manager is running. LVM commands will perform locking for the
-VG using lvmlockd. lvmlockd will use the chosen lock manager.
+which lock manager is running. LVM commands acquire locks from lvmlockd,
+and lvmlockd uses the chosen lock manager.
 
 .SS 6. start VG on all hosts
 
 vgchange --lock-start
 
-lvmlockd requires shared VGs to be started before they are used. This is
-a lock manager operation to start (join) the VG lockspace, and it may take
-some time. Until the start completes, locks for the VG are not available.
-LVM commands are allowed to read the VG while start is in progress. (A
-unit/init file can also be used to start VGs.)
+Shared VGs must be started before they are used. Starting the VG performs
+lock manager initialization that is necessary to begin using locks (i.e.
+creating and joining a lockspace). Starting the VG may take some time,
+and until the start completes the VG may not be modified or activated.
 
 .SS 7. create and activate LVs
 
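The create/start split above can be sketched as a per-host plan. The VG name vg1 and device /dev/sdb are hypothetical, and the commands are emitted rather than executed, since the real ones need root and a running lock manager:

```shell
# Sketch of the first-time sequence across hosts (hypothetical names):
# the creating host runs vgcreate --shared, every other host must then
# join the VG lockspace with vgchange --lock-start before using the VG.
plan_for_host() {
  case "$1" in
    host1) echo "vgcreate --shared vg1 /dev/sdb" ;;   # creator
    *)     echo "vgchange --lock-start vg1" ;;        # all other hosts
  esac
}

plan_for_host host1
plan_for_host host2
```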
@@ -168,9 +172,9 @@ multiple hosts.)
 
 .SS Normal start up and shut down
 
-After initial set up, start up and shut down include the following general
-steps. They can be performed manually or using the system service
-manager.
+After initial set up, start up and shut down include the following steps.
+They can be performed directly or may be automated using systemd or a
+cluster resource manager/agents.
 
 \[bu]
 start lvmlockd
@@ -204,106 +208,64 @@ stop lvmlockd
 
 .SH TOPICS
 
-.SS VG access control
+.SS Protecting VGs on shared devices
 
-The following terms are used to describe different forms of VG access
-control.
+The following terms are used to describe the different ways of accessing
+VGs on shared devices.
 
-.I "lockd VG"
+.I "shared VG"
 
-A "lockd VG" is a shared VG that has a "lock type" of dlm or sanlock.
-Using it requires lvmlockd. These VGs exist on shared storage that is
-visible to multiple hosts. LVM commands use lvmlockd to perform locking
-for these VGs when they are used.
+A shared VG exists on shared storage that is visible to multiple hosts.
+LVM acquires locks through lvmlockd to coordinate access to shared VGs.
+A shared VG has lock_type "dlm" or "sanlock", which specifies the lock
+manager lvmlockd will use.
 
-If the lock manager for the lock type is not available (e.g. not started
-or failed), lvmlockd is unable to acquire locks for LVM commands. LVM
-commands that only read the VG will generally be allowed to continue
-without locks in this case (with a warning). Commands to modify or
-activate the VG will fail without the necessary locks.
+When the lock manager for the lock type is not available (e.g. not started
+or failed), lvmlockd is unable to acquire locks for LVM commands. In this
+situation, LVM commands are only allowed to read and display the VG;
+changes and activation will fail.
 
 .I "local VG"
 
-A "local VG" is meant to be used by a single host. It has no lock type or
-lock type "none". LVM commands and lvmlockd do not perform locking for
-these VGs. A local VG typically exists on local (non-shared) devices and
-cannot be used concurrently from different hosts.
+A local VG is meant to be used by a single host. It has no lock type or
+lock type "none". A local VG typically exists on local (non-shared)
+devices and cannot be used concurrently from different hosts.
 
 If a local VG does exist on shared devices, it should be owned by a single
-host by having its system ID set, see
+host by having the system ID set, see
 .BR lvmsystemid (7).
-Only the host with a matching system ID can use the local VG. A VG
-with no lock type and no system ID should be excluded from all but one
-host using lvm.conf filters. Without any of these protections, a local VG
-on shared devices can be easily damaged or destroyed.
+The host with a matching system ID can use the local VG and other hosts
+will ignore it. A VG with no lock type and no system ID should be
+excluded from all but one host using lvm.conf filters. Without any of
+these protections, a local VG on shared devices can be easily damaged or
+destroyed.
 
 .I "clvm VG"
 
-A "clvm VG" is a VG on shared storage (like a lockd VG) that requires
-clvmd for clustering. See below for converting a clvm VG to a lockd VG.
+A clvm VG (or clustered VG) is a VG on shared storage (like a shared VG)
+that requires clvmd for clustering and locking. See below for converting
+a clvm/clustered VG to a shared VG.
 
 
-.SS lockd VGs from hosts not using lvmlockd
+.SS shared VGs from hosts not using lvmlockd
 
-Only hosts that use lockd VGs should be configured to run lvmlockd.
-However, shared devices in lockd VGs may be visible from hosts not
-using lvmlockd. From a host not using lvmlockd, lockd VGs are ignored
-in the same way as foreign VGs (see
+Hosts that do not use shared VGs will not be running lvmlockd. In this
+case, shared VGs that are still visible to the host will be ignored
+(like foreign VGs, see
 .BR lvmsystemid (7).)
 
-The --shared option for reporting and display commands causes lockd VGs
+The --shared option for reporting and display commands causes shared VGs
 to be displayed on a host not using lvmlockd, like the --foreign option
 does for foreign VGs.
 
 
-.SS vgcreate comparison
-
-The type of VG access control is specified in the vgcreate command.
-See
-.BR vgcreate (8)
-for all vgcreate options.
-
-.B vgcreate <vgname> <devices>
-
-.IP \[bu] 2
-Creates a local VG with the local host's system ID when neither lvmlockd nor clvm are configured.
-.IP \[bu] 2
-Creates a local VG with the local host's system ID when lvmlockd is configured.
-.IP \[bu] 2
-Creates a clvm VG when clvm is configured.
-
-.P
-
-.B vgcreate --shared <vgname> <devices>
-.IP \[bu] 2
-Requires lvmlockd to be configured and running.
-.IP \[bu] 2
-Creates a lockd VG with lock type sanlock|dlm depending on which lock
-manager is running.
-.IP \[bu] 2
-LVM commands request locks from lvmlockd to use the VG.
-.IP \[bu] 2
-lvmlockd obtains locks from the selected lock manager.
-
-.P
-
-.B vgcreate -c|--clustered y <vgname> <devices>
-.IP \[bu] 2
-Requires clvm to be configured and running.
-.IP \[bu] 2
-Creates a clvm VG with the "clustered" flag.
-.IP \[bu] 2
-LVM commands request locks from clvmd to use the VG.
-
-.P
-
 .SS creating the first sanlock VG
 
 Creating the first sanlock VG is not protected by locking, so it requires
 special attention. This is because sanlock locks exist on storage within
-the VG, so they are not available until the VG exists. The first sanlock
-VG created will automatically contain the "global lock". Be aware of the
-following special considerations:
+the VG, so they are not available until after the VG is created. The
+first sanlock VG that is created will automatically contain the "global
+lock". Be aware of the following special considerations:
 
 .IP \[bu] 2
 The first vgcreate command needs to be given the path to a device that has
@@ -318,54 +280,48 @@ to be accessible to all hosts that will use sanlock shared VGs. All hosts
 will need to use the global lock from the first sanlock VG.
 
 .IP \[bu] 2
-While running vgcreate for the first sanlock VG, ensure that the device
-being used is not used by another LVM command. Allocation of shared
-devices is usually protected by the global lock, but this cannot be done
-for the first sanlock VG which will hold the global lock.
-
-.IP \[bu] 2
-While running vgcreate for the first sanlock VG, ensure that the VG name
-being used is not used by another LVM command. Uniqueness of VG names is
-usually ensured by the global lock.
+The device and VG name used by the initial vgcreate will not be protected
+from concurrent use by another vgcreate on another host.
 
 See below for more information about managing the sanlock global lock.
 
 
-.SS using lockd VGs
+.SS using shared VGs
 
-There are some special considerations when using lockd VGs.
+There are some special considerations when using shared VGs.
 
-When use_lvmlockd is first enabled in lvm.conf, and before the first lockd
-VG is created, no global lock will exist. In this initial state, LVM
-commands try and fail to acquire the global lock, producing a warning, and
-some commands are disallowed. Once the first lockd VG is created, the
-global lock will be available, and LVM will be fully operational.
+When use_lvmlockd is first enabled in lvm.conf, and before the first
+shared VG is created, no global lock will exist. In this initial state,
+LVM commands try and fail to acquire the global lock, producing a warning,
+and some commands are disallowed. Once the first shared VG is created,
+the global lock will be available, and LVM will be fully operational.
 
-When a new lockd VG is created, its lockspace is automatically started on
-the host that creates it. Other hosts need to run 'vgchange
---lock-start' to start the new VG before they can use it.
+When a new shared VG is created, its lockspace is automatically started on
+the host that creates it. Other hosts need to run 'vgchange --lock-start'
+to start the new VG before they can use it.
 
-From the 'vgs' command, lockd VGs are indicated by "s" (for shared) in the
-sixth attr field. The specific lock type and lock args for a lockd VG can
-be displayed with 'vgs -o+locktype,lockargs'.
+From the 'vgs' command, shared VGs are indicated by "s" (for shared) in
+the sixth attr field, and by "shared" in the "--options shared" report
+field. The specific lock type and lock args for a shared VG can be
+displayed with 'vgs -o+locktype,lockargs'.
 
-lockd VGs need to be "started" and "stopped", unlike other types of VGs.
+Shared VGs need to be "started" and "stopped", unlike other types of VGs.
 See the following section for a full description of starting and stopping.
 
-vgremove of a lockd VG will fail if other hosts have the VG started.
-Run vgchange --lock-stop <vgname> on all other hosts before vgremove.
-(It may take several seconds before vgremove recognizes that all hosts
-have stopped a sanlock VG.)
+Removing a shared VG will fail if other hosts have the VG started. Run
+vgchange --lock-stop <vgname> on all other hosts before vgremove. (It may
+take several seconds before vgremove recognizes that all hosts have
+stopped a sanlock VG.)
 
 .SS starting and stopping VGs
 
-Starting a lockd VG (vgchange --lock-start) causes the lock manager to
+Starting a shared VG (vgchange --lock-start) causes the lock manager to
 start (join) the lockspace for the VG on the host where it is run. This
 makes locks for the VG available to LVM commands on the host. Before a VG
 is started, only LVM commands that read/display the VG are allowed to
 continue without locks (and with a warning).
 
-Stopping a lockd VG (vgchange --lock-stop) causes the lock manager to
+Stopping a shared VG (vgchange --lock-stop) causes the lock manager to
 stop (leave) the lockspace for the VG on the host where it is run. This
 makes locks for the VG inaccessible to the host. A VG cannot be stopped
 while it has active LVs.
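The 'vgs' attr convention described above (sixth attr character "s" for a shared VG) can be checked from a script. The sample output below is hypothetical, and the parsing is a sketch, not an LVM-provided interface:

```shell
# Hypothetical 'vgs --noheadings -o name,attr' output; per this man page,
# the sixth attr character is "s" for a shared VG.
vgs_output='vg0 wz--n-
vgshared wz--ns'

# classify each VG as shared or local from its attr string
classify() {
  while read -r name attr; do
    case "$attr" in
      ?????s) echo "$name shared" ;;
      *)      echo "$name local" ;;
    esac
  done
}

printf '%s\n' "$vgs_output" | classify
```

In practice, 'vgs -o name,locktype' is the more robust way to query this; the attr parsing above only illustrates the flag position.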
@@ -374,7 +330,7 @@ When using the lock type sanlock, starting a VG can take a long time
 (potentially minutes if the host was previously shut down without cleanly
 stopping the VG.)
 
-A lockd VG can be started after all the following are true:
+A shared VG can be started after all the following are true:
 .br
 \[bu]
 lvmlockd is running
@@ -386,9 +342,9 @@ the lock manager is running
 the VG's devices are visible on the system
 .br
 
-A lockd VG can be stopped if all LVs are deactivated.
+A shared VG can be stopped if all LVs are deactivated.
 
-All lockd VGs can be started/stopped using:
+All shared VGs can be started/stopped using:
 .br
 vgchange --lock-start
 .br
@@ -407,12 +363,12 @@ vgchange --lock-start --lock-opt nowait ...
 
 lvmlockd can be asked directly to stop all lockspaces:
 .br
-lvmlockctl --stop-lockspaces
+lvmlockctl -S|--stop-lockspaces
 
-To start only selected lockd VGs, use the lvm.conf
+To start only selected shared VGs, use the lvm.conf
 activation/lock_start_list. When defined, only VG names in this list are
 started by vgchange. If the list is not defined (the default), all
-visible lockd VGs are started. To start only "vg1", use the following
+visible shared VGs are started. To start only "vg1", use the following
 lvm.conf configuration:
 
 .nf
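The lock_start_list setting above can be written out and checked as a fragment. "vg1" is the example name from the text; the matching logic here is a deliberate simplification, not LVM's config parser:

```shell
# Write the activation/lock_start_list fragment described in the text.
conf=$(mktemp)
cat > "$conf" <<'EOF'
activation {
    lock_start_list = [ "vg1" ]
}
EOF

# simplified stand-in for the "is this VG in the start list?" decision
should_start() {
  grep -q "\"$1\"" "$conf"
}

should_start vg1 && echo "vg1: start"
should_start vg2 || echo "vg2: skip"
```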
@@ -435,7 +391,7 @@ The "auto" option causes the command to follow the lvm.conf
 activation/auto_lock_start_list. If auto_lock_start_list is undefined,
 all VGs are started, just as if the auto option was not used.
 
-When auto_lock_start_list is defined, it lists the lockd VGs that should
+When auto_lock_start_list is defined, it lists the shared VGs that should
 be started by the auto command. VG names that do not match an item in the
 list will be ignored by the auto start command.
 
@@ -443,23 +399,20 @@ list will be ignored by the auto start command.
 commands, i.e. with or without the auto option. When the lock_start_list
 is defined, only VGs matching a list item can be started with vgchange.)
 
-The auto_lock_start_list allows a user to select certain lockd VGs that
+The auto_lock_start_list allows a user to select certain shared VGs that
 should be automatically started by the system (or indirectly, those that
 should not).
 
-To use auto activation of lockd LVs (see auto_activation_volume_list),
-auto starting of the corresponding lockd VGs is necessary.
-
 
 .SS internal command locking
 
 To optimize the use of LVM with lvmlockd, be aware of the three kinds of
 locks and when they are used:
 
-.I GL lock
+.I Global lock
 
-The global lock (GL lock) is associated with global information, which is
-information not isolated to a single VG. This includes:
+The global lock is associated with global information, which is
+information not isolated to a single VG. This includes:
 
 \[bu]
 The global VG namespace.
@@ -484,61 +437,58 @@ acquired.
 
 .I VG lock
 
-A VG lock is associated with each lockd VG. The VG lock is acquired in
-shared mode to read the VG and in exclusive mode to change the VG (modify
-the VG metadata or activating LVs). This lock serializes access to a VG
-with all other LVM commands accessing the VG from all hosts.
+A VG lock is associated with each shared VG. The VG lock is acquired in
+shared mode to read the VG and in exclusive mode to change the VG or
+activate LVs. This lock serializes access to a VG with all other LVM
+commands accessing the VG from all hosts.
 
-The command 'vgs' will not only acquire the GL lock to read the list of
-all VG names, but will acquire the VG lock for each VG prior to reading
-it.
-
-The command 'vgs <vgname>' does not acquire the GL lock (it does not need
-the list of all VG names), but will acquire the VG lock on each VG name
-argument.
+The command 'vgs <vgname>' does not acquire the global lock (it does not
+need the list of all VG names), but will acquire the VG lock on each VG
+name argument.
 
 .I LV lock
 
 An LV lock is acquired before the LV is activated, and is released after
 the LV is deactivated. If the LV lock cannot be acquired, the LV is not
-activated. LV locks are persistent and remain in place when the
-activation command is done. GL and VG locks are transient, and are held
-only while an LVM command is running.
+activated. (LV locks are persistent and remain in place when the
+activation command is done. Global and VG locks are transient, and are
+held only while an LVM command is running.)
 
 .I lock retries
 
-If a request for a GL or VG lock fails due to a lock conflict with another
-host, lvmlockd automatically retries for a short time before returning a
-failure to the LVM command. If those retries are insufficient, the LVM
-command will retry the entire lock request a number of times specified by
-global/lvmlockd_lock_retries before failing. If a request for an LV lock
-fails due to a lock conflict, the command fails immediately.
+If a request for a Global or VG lock fails due to a lock conflict with
+another host, lvmlockd automatically retries for a short time before
+returning a failure to the LVM command. If those retries are
+insufficient, the LVM command will retry the entire lock request a number
+of times specified by global/lvmlockd_lock_retries before failing. If a
+request for an LV lock fails due to a lock conflict, the command fails
+immediately.
 
 
 .SS managing the global lock in sanlock VGs
 
 The global lock exists in one of the sanlock VGs. The first sanlock VG
 created will contain the global lock. Subsequent sanlock VGs will each
-contain disabled global locks that can be enabled later if necessary.
+contain a disabled global lock that can be enabled later if necessary.
 
 The VG containing the global lock must be visible to all hosts using
-sanlock VGs. This can be a reason to create a small sanlock VG, visible
-to all hosts, and dedicated to just holding the global lock. While not
-required, this strategy can help to avoid difficulty in the future if VGs
-are moved or removed.
+sanlock VGs. For this reason, it can be useful to create a small sanlock
+VG, visible to all hosts, and dedicated to just holding the global lock.
+While not required, this strategy can help to avoid difficulty in the
+future if VGs are moved or removed.
 
 The vgcreate command typically acquires the global lock, but in the case
 of the first sanlock VG, there will be no global lock to acquire until the
 first vgcreate is complete. So, creating the first sanlock VG is a
 special case that skips the global lock.
 
-vgcreate for a sanlock VG determines it is the first one to exist if no
-other sanlock VGs are visible. It is possible that other sanlock VGs do
-exist but are not visible on the host running vgcreate. In this case,
-vgcreate would create a new sanlock VG with the global lock enabled. When
-the other VG containing a global lock appears, lvmlockd will see more than
-one VG with a global lock enabled, and LVM commands will report that there
-are duplicate global locks.
+vgcreate determines that it's creating the first sanlock VG when no other
+sanlock VGs are visible on the system. It is possible that other sanlock
+VGs do exist, but are not visible when vgcreate checks for them. In this
+case, vgcreate will create a new sanlock VG with the global lock enabled.
+When the other VG containing a global lock appears, lvmlockd will then
+see more than one VG with a global lock enabled. LVM commands will report
+that there are duplicate global locks.
 
 If the situation arises where more than one sanlock VG contains a global
 lock, the global lock should be manually disabled in all but one of them
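The retry behavior described in the "lock retries" text above can be modeled in a few lines. This is an illustrative sketch, not lvmlockd's implementation: acquire_lock is a stand-in that fails while a conflicting holder exists, and retries mirrors the role of global/lvmlockd_lock_retries:

```shell
# Illustrative model of command-level lock retries (assumed names; only
# lvmlockd_lock_retries is a real lvm.conf setting).
retries=3
conflicts_remaining=2   # pretend another host holds the lock twice

acquire_lock() {
  if [ "$conflicts_remaining" -gt 0 ]; then
    conflicts_remaining=$((conflicts_remaining - 1))
    return 1            # lock conflict
  fi
  return 0              # lock granted
}

attempt=0
until acquire_lock; do
  attempt=$((attempt + 1))
  if [ "$attempt" -ge "$retries" ]; then
    echo "lock failed after $attempt retries"
    exit 1
  fi
done
echo "lock acquired after $attempt retries"
```

With two simulated conflicts and three allowed retries, the lock is acquired on the third attempt; had the conflicts outlasted the retry budget, the command would fail, which matches the behavior the text describes for Global and VG locks.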
@@ -556,8 +506,8 @@ VGs with the command:
 
 lvmlockctl --gl-enable <vgname>
 
-A small sanlock VG dedicated to holding the global lock can avoid the case
-where the GL lock must be manually enabled after a vgremove.
+(Using a small sanlock VG dedicated to holding the global lock can avoid
+the case where the global lock must be manually enabled after a vgremove.)
 
 
 .SS internal lvmlock LV
@@ -574,8 +524,8 @@ device, then use vgextend to add other devices.
 
 .SS LV activation
 
-In a shared VG, activation changes involve locking through lvmlockd, and
-the following values are possible with lvchange/vgchange -a:
+In a shared VG, LV activation involves locking through lvmlockd, and the
+following values are possible with lvchange/vgchange -a:
 
 .IP \fBy\fP|\fBey\fP
 The command activates the LV in exclusive mode, allowing a single host
@@ -596,10 +546,6 @@ The shared mode is intended for a multi-host/cluster application or
 file system.
 LV types that cannot be used concurrently
 from multiple hosts include thin, cache, raid, and snapshot.
-lvextend on LV with shared locks is not yet allowed. The LV must be
-deactivated, or activated exclusively to run lvextend. (LVs with
-the mirror type can be activated in shared mode from multiple hosts
-when using the dlm lock type and cmirrord.)
 
 .IP \fBn\fP
 The command deactivates the LV. After deactivating the LV, the command
@@ -654,7 +600,7 @@ with the expiring lease before other hosts can acquire its locks.
 
 When the sanlock daemon detects that the lease storage is lost, it runs
 the command lvmlockctl --kill <vgname>. This command emits a syslog
-message stating that lease storage is lost for the VG and LVs must be
+message stating that lease storage is lost for the VG, and LVs must be
 immediately deactivated.
 
 If no LVs are active in the VG, then the lockspace with an expiring lease
@@ -666,10 +612,10 @@ If the VG has active LVs when the lock storage is lost, the LVs must be
 quickly deactivated before the lockspace lease expires. After all LVs are
 deactivated, run lvmlockctl --drop <vgname> to clear the expiring
 lockspace from lvmlockd. If all LVs in the VG are not deactivated within
-about 40 seconds, sanlock will reset the host using the local watchdog.
-The machine reset is effectively a severe form of "deactivating" LVs
-before they can be activated on other hosts. The reset is considered a
-better alternative than having LVs used by multiple hosts at once, which
+about 40 seconds, sanlock uses wdmd and the local watchdog to reset the
+host. The machine reset is effectively a severe form of "deactivating"
+LVs before they can be activated on other hosts. The reset is considered
+a better alternative than having LVs used by multiple hosts at once, which
 could easily damage or destroy their content.
 
 In the future, the lvmlockctl kill command may automatically attempt to
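The recovery sequence described in the hunk above can be sketched as a short script. This is an illustrative sketch, not part of the commit: the VG name vg0 is hypothetical, and run() only prints each command so the sequence can be reviewed without touching a real VG.

```shell
#!/bin/sh
# Sketch of the lease-loss recovery steps described above.
# run() echoes each command instead of executing it; drop the echo
# (or replace run with the bare command) to run for real.
run() { echo "+ $*"; }

vg=vg0   # hypothetical VG name

# 1. Deactivate all LVs in the VG before the ~40 second lease expiry.
run vgchange -an "$vg"

# 2. Clear the expiring lockspace from lvmlockd.
run lvmlockctl --drop "$vg"
```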
@@ -681,8 +627,7 @@ sanlock resets the machine.
 
 If the sanlock daemon fails or exits while a lockspace is started, the
 local watchdog will reset the host. This is necessary to protect any
-application resources that depend on sanlock leases which will be lost
-without sanlock running.
+application resources that depend on sanlock leases.
 
 
 .SS changing dlm cluster name
@@ -762,14 +707,14 @@ Start the VG on hosts to use it:
 vgchange --lock-start <vgname>
 
 
-.SS changing a local VG to a lockd VG
+.SS changing a local VG to a shared VG
 
 All LVs must be inactive to change the lock type.
 
 lvmlockd must be configured and running as described in USAGE.
 
 .IP \[bu] 2
-Change a local VG to a lockd VG with the command:
+Change a local VG to a shared VG with the command:
 .br
 vgchange --lock-type sanlock|dlm <vgname>
 
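The local-to-shared conversion in the hunk above can be sketched end to end. Again an editor's sketch, not part of the commit: vg0 is a hypothetical VG name, sanlock is one of the two lock types the man page offers, and run() prints rather than executes.

```shell
#!/bin/sh
# Sketch of converting a local VG to a shared VG, per the steps above.
# run() echoes each command instead of executing it.
run() { echo "+ $*"; }

vg=vg0   # hypothetical VG name

run vgchange --lock-type sanlock "$vg"   # or: --lock-type dlm
run vgchange --lock-start "$vg"          # start the VG on each host using it
```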
@@ -780,7 +725,7 @@ vgchange --lock-start <vgname>
 
 .P
 
-.SS changing a lockd VG to a local VG
+.SS changing a shared VG to a local VG
 
 All LVs must be inactive to change the lock type.
 
@@ -806,11 +751,11 @@ type can be forcibly changed to none with:
 
 vgchange --lock-type none --lock-opt force <vgname>
 
-To change a VG from one lockd type to another (i.e. between sanlock and
+To change a VG from one lock type to another (i.e. between sanlock and
 dlm), first change it to a local VG, then to the new type.
 
 
-.SS changing a clvm VG to a lockd VG
+.SS changing a clvm/clustered VG to a shared VG
 
 All LVs must be inactive to change the lock type.
 
@@ -823,15 +768,15 @@ If the clvm cluster is no longer running on any nodes, then extra options
 can be used to forcibly make the VG local. Caution: this is only safe if
 all nodes have stopped using the VG:
 
 vgchange --lock-type none --lock-opt force <vgname>
 
 After the VG is local, follow the steps described in "changing a local VG
-to a lockd VG".
+to a shared VG".
 
 
-.SS limitations of lockd VGs
+.SS limitations of shared VGs
 
-Things that do not yet work in lockd VGs:
+Things that do not yet work in shared VGs:
 .br
 \[bu]
 using external origins for thin LVs
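The forcible clvm-to-shared conversion in the hunk above can be sketched as follows. An illustrative sketch only: vg0 is hypothetical, run() prints rather than executes, and per the man page the force step is only safe when all nodes have stopped using the VG.

```shell
#!/bin/sh
# Sketch of forcibly converting a clvm/clustered VG to a shared VG,
# per the steps above. run() echoes each command instead of executing it.
run() { echo "+ $*"; }

vg=vg0   # hypothetical VG name

run vgchange --lock-type none --lock-opt force "$vg"   # make the VG local
run vgchange --lock-type sanlock "$vg"                 # then make it shared
```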
@@ -851,22 +796,22 @@ vgsplit and vgmerge (convert to a local VG to do this)
 
 .SS lvmlockd changes from clvmd
 
-(See above for converting an existing clvm VG to a lockd VG.)
+(See above for converting an existing clvm VG to a shared VG.)
 
 While lvmlockd and clvmd are entirely different systems, LVM command usage
 remains similar. Differences are more notable when using lvmlockd's
 sanlock option.
 
-Visible usage differences between lockd VGs (using lvmlockd) and clvm VGs
-(using clvmd):
+Visible usage differences between shared VGs (using lvmlockd) and
+clvm/clustered VGs (using clvmd):
 
 .IP \[bu] 2
 lvm.conf must be configured to use either lvmlockd (use_lvmlockd=1) or
 clvmd (locking_type=3), but not both.
 
 .IP \[bu] 2
-vgcreate --shared creates a lockd VG, and vgcreate --clustered y
-creates a clvm VG.
+vgcreate --shared creates a shared VG, and vgcreate --clustered y
+creates a clvm/clustered VG.
 
 .IP \[bu] 2
 lvmlockd adds the option of using sanlock for locking, avoiding the
@@ -887,11 +832,11 @@ lvmlockd works with thin and cache pools and LVs.
 lvmlockd works with lvmetad.
 
 .IP \[bu] 2
-lvmlockd saves the cluster name for a lockd VG using dlm. Only hosts in
+lvmlockd saves the cluster name for a shared VG using dlm. Only hosts in
 the matching cluster can use the VG.
 
 .IP \[bu] 2
-lvmlockd requires starting/stopping lockd VGs with vgchange --lock-start
+lvmlockd requires starting/stopping shared VGs with vgchange --lock-start
 and --lock-stop.
 
 .IP \[bu] 2
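The start/stop cycle mentioned in the hunk above, which clvmd did not require, can be sketched as a pair of commands. Editor's sketch only: vg0 is hypothetical and run() prints rather than executes.

```shell
#!/bin/sh
# Sketch of the lock-start/lock-stop cycle lvmlockd requires for shared VGs.
# run() echoes each command instead of executing it.
run() { echo "+ $*"; }

vg=vg0   # hypothetical VG name

run vgchange --lock-start "$vg"   # before using the VG on this host
run vgchange --lock-stop "$vg"    # after the host is done with the VG
```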
@@ -914,7 +859,7 @@ reporting option lock_args to view the corresponding metadata fields.
 
 .IP \[bu] 2
 In the 'vgs' command's sixth VG attr field, "s" for "shared" is displayed
-for lockd VGs.
+for shared VGs.
 
 .IP \[bu] 2
 If lvmlockd fails or is killed while in use, locks it held remain but are
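The sixth-attr-character check in the hunk above can be sketched without a live system. This sketch uses a sample attr string rather than real vgs output; the value "wz--ns" is a hypothetical example of what `vgs --noheadings -o vg_attr <vgname>` might print for a shared VG.

```shell
#!/bin/sh
# Sketch: detect "s" (shared) in the sixth VG attr character, per the
# description above. attr is a sample value, not live vgs output.
attr="wz--ns"

sixth=$(printf '%s' "$attr" | cut -c6)
if [ "$sixth" = "s" ]; then
    echo "VG is shared (lvmlockd)"
fi
```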