mirror of git://sourceware.org/git/lvm2.git
Commit Graph

177 Commits

Author SHA1 Message Date
David Teigland
390ff5be2f raidintegrity: allow writecache and cache
Allow writecache|cache over raid+integrity LV.
2023-04-05 14:24:07 -05:00
Zdenek Kabelac
ba3707d953 archiving: take archive automatically
Instead of calling archive explicitly from the command-processing
logic, move this step to the first vg_write() call, which will
automatically store an archive of the committed metadata.

This slightly changes some error paths: where an archiving error was
previously detected early in the command, some ongoing command
'actions' may now already have taken place, but they are simply
scrapped in case of error (since no new metadata would have been
written anyway).

So the general effect should only be a change in the ordering of some
command messages.
2021-06-09 14:56:13 +02:00
David Teigland
e7f107c246 writecache: don't pvmove device used by writecache
The existing check didn't cover the unusual case where the
cachevol exists on the same device as the origin LV.
2021-06-02 11:12:20 -05:00
Wu Guanghao
d71199920f pvmove: check return value of top_level_lv_name()
The return value of top_level_lv_name() may be NULL, so we should
check the return value of top_level_lv_name() before calling
strcmp(lv->name, top_level_lv_name(vg, lv_name)).
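A simplified sketch of the fix (illustrative only; the surrounding
logic and return values are not the actual lvm2 code):

  const char *top_name = top_level_lv_name(vg, lv_name);

  if (!top_name)
          return 0;                 /* lookup failed */

  if (strcmp(lv->name, top_name))
          return 1;                 /* lv is not the top-level LV */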

Signed-off-by: Wu Guanghao <wuguanghao3@huawei.com>
Signed-off-by: Zhiqiang Liu <liuzhiqiang26@huawei.com>
2020-09-11 21:43:08 +02:00
David Teigland
d8bb85d963 writecache: allow pvmove on origin
The removed check didn't actually prevent pvmoving the origin,
which was possible by naming the wcorig LV, or by naming no LV.
2020-09-02 14:45:52 -05:00
David Teigland
d9e8895a96 Allow dm-integrity to be used for raid images
dm-integrity stores checksums of the data written to an
LV, and returns an error if data read from the LV does
not match the previously saved checksum.  When used on
raid images, dm-raid will correct the error by reading
the block from another image, and the device user sees
no error.  The integrity metadata (checksums) are stored
on an internal LV allocated by lvm for each linear image.
The internal LV is allocated on the same PV as the image.

Create a raid LV with an integrity layer over each
raid image (for raid levels 1,4,5,6,10):

lvcreate --type raidN --raidintegrity y [options]

Add an integrity layer to images of an existing raid LV:

lvconvert --raidintegrity y LV

Remove the integrity layer from images of a raid LV:

lvconvert --raidintegrity n LV

Settings

Use --raidintegritymode journal|bitmap (journal is default)
to configure the method used by dm-integrity to ensure
crash consistency.
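
For example, to select bitmap mode when creating the LV:

lvcreate --type raidN --raidintegrity y --raidintegritymode bitmap [options]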

Initialization

When integrity is added to an LV, the kernel needs to
initialize the integrity metadata/checksums for all blocks
in the LV.  The data corruption checking performed by
dm-integrity will only operate on areas of the LV that
are already initialized.  The progress of integrity
initialization is reported by the "syncpercent" LV
reporting field (and under the Cpy%Sync lvs column.)

Example: create a raid1 LV with integrity:

$ lvcreate --type raid1 -m1 --raidintegrity y -n rr -L1G foo
  Creating integrity metadata LV rr_rimage_0_imeta with size 12.00 MiB.
  Logical volume "rr_rimage_0_imeta" created.
  Creating integrity metadata LV rr_rimage_1_imeta with size 12.00 MiB.
  Logical volume "rr_rimage_1_imeta" created.
  Logical volume "rr" created.
$ lvs -a foo
  LV                  VG  Attr       LSize  Origin              Cpy%Sync
  rr                  foo rwi-a-r---  1.00g                     4.93
  [rr_rimage_0]       foo gwi-aor---  1.00g [rr_rimage_0_iorig] 41.02
  [rr_rimage_0_imeta] foo ewi-ao---- 12.00m
  [rr_rimage_0_iorig] foo -wi-ao----  1.00g
  [rr_rimage_1]       foo gwi-aor---  1.00g [rr_rimage_1_iorig] 39.45
  [rr_rimage_1_imeta] foo ewi-ao---- 12.00m
  [rr_rimage_1_iorig] foo -wi-ao----  1.00g
  [rr_rmeta_0]        foo ewi-aor---  4.00m
  [rr_rmeta_1]        foo ewi-aor---  4.00m
2020-04-15 12:10:32 -05:00
David Teigland
c1ee6f0eef pvmove: prevent moving writecache device 2020-02-03 15:59:12 -06:00
David Teigland
fe16d296b0 pvmove: remove some cmirror related code
which is no longer used
2019-10-11 11:31:42 -05:00
David Teigland
8c87dda195 locking: unify global lock for flock and lockd
There have been two file locks used to protect lvm
"global state": "ORPHANS" and "GLOBAL".

Commands that used the ORPHAN flock in exclusive mode:
  pvcreate, pvremove, vgcreate, vgextend, vgremove,
  vgcfgrestore

Commands that used the ORPHAN flock in shared mode:
  vgimportclone, pvs, pvscan, pvresize, pvmove,
  pvdisplay, pvchange, fullreport

Commands that used the GLOBAL flock in exclusive mode:
  pvchange, pvscan, vgimportclone, vgscan

Commands that used the GLOBAL flock in shared mode:
  pvscan --cache, pvs

The ORPHAN lock covers the important cases of serializing
the use of orphan PVs.  It also partially covers the
reporting of orphan PVs (although not correctly as
explained below.)

The GLOBAL lock doesn't seem to have a clear purpose
(it may have eroded over time.)

Neither lock correctly protects the VG namespace, or
orphan PV properties.

To simplify and correct these issues, the two separate
flocks are combined into the one GLOBAL flock, and this flock
is used from the locking sites that are in place for the
lvmlockd global lock.

The logic behind the lvmlockd (distributed) global lock is
that any command that changes "global state" needs to take
the global lock in ex mode.  Global state in lvm is: the list
of VG names, the set of orphan PVs, and any properties of
orphan PVs.  Reading this global state can use the global lock
in sh mode to ensure it doesn't change while being reported.

The locking of global state now looks like:

lockd_global()
  previously named lockd_gl(), acquires the distributed
  global lock through lvmlockd.  This is unchanged.
  It serializes distributed lvm commands that are changing
  global state.  This is a no-op when lvmlockd is not in use.

lockf_global()
  acquires an flock on a local file.  It serializes local lvm
  commands that are changing global state.

lock_global()
  first calls lockf_global() to acquire the local flock for
  global state, and if this succeeds, it calls lockd_global()
  to acquire the distributed lock for global state.
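
A minimal sketch of that ordering (illustrative only; the real lvm2
functions take a cmd_context and differ in signature and error
handling):

  static int lock_global(struct cmd_context *cmd, const char *mode)
  {
          /* Local flock first: serializes local commands. */
          if (!lockf_global(cmd, mode))
                  return 0;

          /* Then the distributed lock; a no-op without lvmlockd. */
          if (!lockd_global(cmd, mode)) {
                  lockf_global(cmd, "un"); /* drop the local flock */
                  return 0;
          }

          return 1;
  }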

Replace instances of lockd_gl() with lock_global(), so that the
existing sites for lvmlockd global state locking are now also
used for local file locking of global state.  Remove the previous
file locking calls lock_vol(GLOBAL) and lock_vol(ORPHAN).

The following commands which change global state are now
serialized with the exclusive global flock:

pvchange (of orphan), pvresize (of orphan), pvcreate, pvremove,
vgcreate, vgextend, vgremove, vgreduce, vgrename,
vgcfgrestore, vgimportclone, vgmerge, vgsplit

Commands that use a shared flock to read global state (and will
be serialized against the prior list) are those that use
process_each functions that are based on processing a list of
all VG names, or all PVs.  The list of all VGs or all PVs is
global state and the shared lock prevents those lists from
changing while the command is processing them.

The ORPHAN lock previously attempted to produce an accurate
listing of orphan PVs, but it was only acquired at the end of
the command during the fake vg_read of the fake orphan vg.
This is not when orphan PVs were determined; they were
determined by elimination beforehand by processing all real
VGs, and subtracting the PVs in the real VGs from the list
of all PVs that had been identified during the initial scan.
This is fixed by holding the single global lock in shared mode
while processing all VGs to determine the list of orphan PVs.
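
An illustrative, self-contained sketch of that elimination
(hypothetical data and names, not lvm2 APIs):

  #include <stdio.h>
  #include <string.h>

  /* Devices found by the initial scan, and devices claimed by real VGs. */
  static const char *all_pvs[] = { "/dev/sda", "/dev/sdb", "/dev/sdc" };
  static const char *vg_pvs[] = { "/dev/sda" };

  int main(void)
  {
          /* Any scanned PV not claimed by a real VG is an orphan; the
           * shared global lock keeps both lists stable meanwhile. */
          for (size_t i = 0; i < sizeof(all_pvs) / sizeof(*all_pvs); i++) {
                  int orphan = 1;
                  for (size_t j = 0; j < sizeof(vg_pvs) / sizeof(*vg_pvs); j++)
                          if (!strcmp(all_pvs[i], vg_pvs[j]))
                                  orphan = 0;
                  if (orphan)
                          printf("orphan PV: %s\n", all_pvs[i]);
          }
          return 0;
  }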
2019-04-29 13:01:05 -05:00
David Teigland
18259d5559 Remove unused clvm variations for active LVs
Different flavors of activate_lv() and lv_is_active()
which are meaningful in a clustered VG can be eliminated
and replaced with whatever that flavor already falls back
to in a local VG.

e.g. lv_is_active_exclusive_locally() is distinct from
lv_is_active() in a clustered VG, but in a local VG they
are equivalent.  So, all instances of the variant are
replaced with the basic local equivalent.

For local VGs, the same behavior remains as before.
For shared VGs, lvmlockd was written with the explicit
requirement of local behavior from these functions
(lvmlockd requires locking_type 1), so the behavior
in shared VGs also remains the same.
2018-06-07 16:17:04 +01:00
David Teigland
3e781ea446 Remove clvmd and associated code
More code reduction and simplification can follow.
2018-06-05 11:09:13 -05:00
Zdenek Kabelac
6a1f458bb7 build: compile fixes 2018-06-01 21:12:31 +02:00
David Teigland
b6f0f20da2 lvmlockd: primarily use vg_is_shared
to check if a vg uses an lvmlockd lock_type,
instead of the equivalent but longer is_lockd_type.
2018-06-01 13:15:22 -05:00
Joe Thornber
7f97c7ea9a build: Don't generate symlinks in include/ dir
As we start refactoring the code to break dependencies (see doc/refactoring.txt),
I want us to use full paths in the includes (eg, #include "base/data-struct/list.h").
This makes it more obvious when we're breaking abstraction boundaries, eg, including a file in
metadata/ from base/
2018-05-14 10:30:20 +01:00
Zdenek Kabelac
d437bd86ff cleanup: display_lvname update message
Add more display_lvname usage.
Update some error messages.
Indent.
2018-04-20 12:17:01 +02:00
Zdenek Kabelac
66400d003d mirror: fix region_size for clustered VG
When adjusting the region size for a clustered VG, it always needs to
fit 2 full bitsets into 1MB due to old CPG limits.

This is a relatively large number of bits, but we still have the
limitation that the region size must fit into 32 bits (0x8000000).

So for mirrors that are too big this operation needs to fail: whenever
the function now returns 0, it means we cannot find a matching
region_size.

Since a return of 0 is now an error, we also need to pass a proper
region_size when creating the pvmove mirror.
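
As a rough illustration of the constraint (assuming one bit of the
sync bitmap per region):

  2 bitsets must fit in 1MB  =>  each bitset <= 512KiB = 4194304 bits
                             =>  at most ~4.2 million regions per mirror
                             =>  region_size >= lv_size / 4194304,
                                 rounded up to a power of 2

  e.g. a 16TiB mirror LV needs region_size >= 2^44 / 2^22 = 4MiB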
2018-04-20 12:13:48 +02:00
Zdenek Kabelac
ace97c9f9c pvmove: support properly subLV locking
Since we support snapshots of mirrors, we need to properly check for
the stacked lock holder - this fixes a problem with pvmove in a
cluster with mirrors under snapshot.

WHATS_NEW for this patch goes with 'Restore pvmove support...'
2018-04-20 12:03:16 +02:00
Zdenek Kabelac
552e60b3a1 pvmove: enhance accepted states of active LVs
Improve pvmove to accept 'locally' active LVs together with
exclusively active LVs.

In the 1st phase it now recognizes whether an exclusive pvmove is
needed. In that case only 'exclusively' or 'locally-only without
remote activation state' LVs are acceptable, and all others are
skipped.

During the build-up of pvmove, 'activation' steps are taken, so if
there is any problem we can now 'skip' LVs from the pvmove operation
rather than giving up the whole pvmove operation.

Also, when pvmove is restarted, recognize the need for an exclusive
pvmove, and use it whenever there is an LV that requires exclusive
activation.
2018-02-15 13:55:38 +01:00
Zdenek Kabelac
083c221cbe pvmove: reinstantiate clustered pvmove
In fact pvmove does support the 'clustered-core' target for clustered
pvmove of LVs activated on multiple nodes.

This patch restores support for activation of pvmove on all nodes
for LVs that are also activated on all nodes.
2018-02-01 21:55:20 +01:00
Zdenek Kabelac
3aedaa7f2a cleanup: drop unused code 2018-01-17 14:45:48 +01:00
Zdenek Kabelac
38b81e6537 cleanup: enhance messages
Add extra info about failing local exclusive activation
(as in a cluster the LV can be active on some other nodes).
2018-01-17 14:45:48 +01:00
Zdenek Kabelac
02621cffb0 pvmove: drop misleading pvmove restriction for cluster
pvmove handles locked LVs in a cluster properly, and this extra check
actually caused misbehavior, as some LVs were silently skipped from
the operation scope.
2018-01-17 14:44:33 +01:00
Zdenek Kabelac
5a961d3411 pvmove: better check for exclusive LV 2018-01-17 14:44:33 +01:00
Zdenek Kabelac
7c6fb63041 pvmove: fix _remove_sibling_pvs_from_trim_list
Fix the function to really check its sibling raid image LV.
For LV_rmeta_0, check for LV_rimage_0 instead of
LV_rmeta_0rimage_0.
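
An illustrative sketch of deriving the sibling name (not the lvm2
implementation):

  #include <stdio.h>
  #include <string.h>

  /* "lv_rmeta_0" -> "lv_rimage_0" */
  static int sibling_rimage_name(const char *rmeta, char *buf, size_t len)
  {
          const char *p = strstr(rmeta, "_rmeta_");

          if (!p)
                  return 0;

          return snprintf(buf, len, "%.*s_rimage_%s",
                          (int)(p - rmeta), rmeta,
                          p + strlen("_rmeta_")) < (int)len;
  }

  int main(void)
  {
          char buf[64];

          if (sibling_rimage_name("lv_rmeta_0", buf, sizeof(buf)))
                  printf("%s\n", buf); /* prints "lv_rimage_0" */
          return 0;
  }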
2018-01-17 14:44:31 +01:00
Zdenek Kabelac
0f0dc1a2a5 pvmove: remove unused code
Support for snapshot and cache LVs should now work.
Remove protection rejecting pvmove for them.
2017-11-15 21:00:29 +01:00
Zdenek Kabelac
838592a171 activate_lvs: use exclusive activation
There is no need to differentiate between a clustered VG and a normal
VG, as the activation depends on the locking type.

Unconditionally use locally exclusive activation for pvmove.
2017-11-15 14:03:22 +01:00
Zdenek Kabelac
7a28b243fa cleanup: pvmove messages
Just add some dots to messages and remove an unneeded
stack trace from the return after log_error.
2017-11-01 00:58:31 +01:00
Zdenek Kabelac
0ba3939542 pvmove: simplify name generation 2017-11-01 00:55:24 +01:00
Alasdair G Kergon
f1cc5b12fd tidy: Add missing underscores to statics. 2017-10-18 15:58:13 +01:00
Marian Csontos
090db98828 pvmove: Change error message
Change the error message to match the one previously used by tests.
2017-09-27 13:20:25 +02:00
David Teigland
f2ee0e7aca pvmove: require LV name in a shared VG
In a shared VG, only allow pvmove with a named LV,
so that only PEs used by the LV will be moved.
The LV is then activated exclusively, ensuring that
the PEs being moved are not used from another host.

Previously, pvmove was mistakenly allowed on a full PV.
This won't work when LVs using that PV are active on
other hosts.
2017-09-20 09:56:51 -05:00
Zdenek Kabelac
1d58074d9f debug: more stacktrace corrections
Continue the previous patch, dropping some unneeded stack traces
after printed log_error/warn messages.
2016-11-25 14:58:28 +01:00
Heinz Mauelshagen
d83f2d766d pvmove: fix regression introduced with 8e9d5d12ae
'pvmove -n name pv1 pv2' called with the name of a top-level LV
failed with the mentioned commit.

Enhance pvmove-raid-segtypes.sh to test for prohibited RAID SubLV moves.
2016-08-15 19:31:04 +02:00
Heinz Mauelshagen
8e9d5d12ae pvmove: prohibit non-resilient collocation of RAID SubLVs
'pvmove -n name pv1 pv2' allowed collocating multiple RAID SubLVs
on pv2 (e.g. resulting in collocated raidlv_rimage_0 and raidlv_rimage_1),
thus causing loss of resilience and/or performance of the RaidLV.

Fix this pvmove flaw, which could lead to data loss in case of PV failure,
by preventing any SubLVs from being collocated on any PVs of the RaidLV.
Still allow collocating any DataLVs of a RaidLV with their sibling MetaLVs,
and vice versa (e.g. raidlv_rmeta_0 on pv1 may still be moved to pv2
already holding raidlv_rimage_0).

Because access to the top-level RaidLV name is needed,
promote local _top_level_lv_name() from raid_manip.c
to global top_level_lv_name().

- resolves rhbz1202497
2016-08-15 18:22:32 +02:00
Alasdair G Kergon
7e671e5dd0 tools: Use arg_is_set instead of arg_count. 2016-06-21 22:24:52 +01:00
Peter Rajnoha
f752a95302 toollib: add 'parent' field to processing_handle; init report format only if there's no parent
If there's a parent processing handle, we don't need to create a
completely new report group and status report - we'll just reuse the
one already initialized for the parent.

Currently, the situation where this matters is when doing an internal
report for the selection part of processing commands, where we have a
parent processing handle for the command itself and a processing
handle for the selection part (that is, selection for non-reporting
tools).
2016-06-20 11:33:41 +02:00
Zdenek Kabelac
5c29b54d4d cleanup: poll better check for internal errors 2016-02-25 23:30:25 +01:00
David Teigland
71671778ab toollib: add two phase pv processing code
This is common code for handling PV create/remove
that can be shared by pvcreate/vgcreate/vgextend/pvremove.
This does not change any commands to use the new code.

- Pull out the hidden equivalent of process_each_pv
  into an actual top level process_each_pv.

- Pull the prompts to the top level, and do not
  run any prompts while locks are held.
  The orphan lock is reacquired after any prompts are
  done, and the devices being created are checked for
  any change made while the lock was not held.

Previously, pvcreate_vol() was the shared function for
creating a PV for pvcreate, vgcreate, vgextend.
Now, it will be toollib function pvcreate_each_device().

pvcreate_vol() was called effectively as a helper, from
within vgcreate and vgextend code paths.
pvcreate_each_device() will be called at the same level
as other process_each functions.

One of the main problems with pvcreate_vol() is that
it included a hidden equivalent of process_each_pv for
each device being created:

  pvcreate_vol() -> _pvcreate_check() ->

   find_pv_by_name() -> get_pvs() ->

     get_pvs_internal() -> _get_pvs() -> get_vgids() ->

       /* equivalent to process_each_pv */
       dm_list_iterate_items(vgids)
         vg = vg_read_internal()
         dm_list_iterate_items(&vg->pvs)

pvcreate_each_device() reorganizes the code so that
each-VG-each-PV loop is done once, and uses the standard
process_each_pv function at the top level of the function.
2016-02-25 09:14:09 -06:00
Zdenek Kabelac
fcbef05aae doc: change fsf address
Hmm, rpmlint suggests the FSF is using a different address these days,
so let's keep it up-to-date.
2016-01-21 12:11:37 +01:00
David Teigland
6d7dc87cb3 pvmove: use toollib
Previously, pvmove used the function find_pv_in_vg() which did the
equivalent of process_each_pv() by doing:

  find_pv_by_name() -> get_pvs() ->

  get_pvs_internal() -> _get_pvs() -> get_vgids() ->

  /* equivalent to process_each_pv */
  dm_list_iterate_items(vgids)
    vg = vg_read_internal()
    dm_list_iterate_items(&vg->pvs)

With the found 'pv', it would do vg_read() on pv_vg_name(pv),
and then do the actual pvmove processing.

This commit simplifies by using process_each_pv() and putting
the actual pvmove processing into the "single" function.
This eliminates both find_pv_by_name() and the vg_read().
The processing code that followed vg_read remains the same.

The return code for the pvmove command is not based on the
process_each_pv return code, but is based on the success/fail
conditions in the existing code.
2016-01-18 14:48:30 -06:00
David Teigland
0c6946b4ce pvmove: disallow moving PVs under sanlock leases
Fail with an error message if pvmove tries to move PVs
under the lvmlock LV.
2016-01-12 11:53:33 -06:00
Alasdair G Kergon
214e2cddf6 segtypes: Use SEG_TYPE_NAME_ string constants. 2015-09-22 19:04:12 +01:00
Ondrej Kozina
a9d954cb3c pvmove: skip polling later in test mode 2015-09-02 17:25:37 +02:00
David Teigland
7b570840cd lockd: no error when unlock fails
The unlock call will fail in expected and normal cases,
and should not cause the command to fail.  (An actual
unlock in the lock manager should never fail.)
2015-08-18 11:18:40 -05:00
Zdenek Kabelac
ae4db9f302 lockd: check for failing unlock
Avoid ignoring unlocking error.
2015-08-18 15:00:07 +02:00
Peter Rajnoha
3ec4813ba2 coverity: fix missing initialization
... Using uninitialized value "lockd_state" when calling "lockd_vg"
(even though lockd_vg assigns 0 to lockd_state, it looks at the
previous state of lockd_state just before that, so we need to have it
properly initialized!)
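
The fix is essentially to initialize the variable before the call
(sketch; signature simplified):

  uint32_t lockd_state = 0;  /* lockd_vg() reads the old value first */

  if (!lockd_vg(cmd, vg_name, "sh", 0, &lockd_state))
          return 0;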

libdm/libdm-report.c:2934: uninit_use_in_call: Using uninitialized value "tm". Field "tm.tm_gmtoff" is uninitialized when calling "_get_final_time".

daemons/lvmlockd/lvmlockctl.c:273: uninit_use_in_call: Using uninitialized element of array "r_name" when calling "format_info_r_action". (just added FIXME as this looks unfinished?)
2015-07-08 14:53:30 +02:00
Alasdair G Kergon
dfe3eb12d0 include: Standardise around new tool.h. 2015-07-06 17:30:18 +01:00
David Teigland
fe70b03de2 Add lvmlockd 2015-07-02 15:42:26 -05:00
David Teigland
f400f9db19 pvmove.c: code cleanup 2015-05-19 20:56:24 +02:00
David Teigland
7a8ce8dbf7 polldaemon: remove get_copy_vg and get_copy_lv wrappers
These wrappers have been replaced by direct calls
to vg_read() and find_lv() in previous commits.

This commit should have no functional impact since
all bits were already unreachable.
2015-05-19 20:56:15 +02:00