mirror of git://sourceware.org/git/lvm2.git
Commit Graph

2565 Commits

Marian Csontos
dd19fa9ff3 tests: Fix unbound variable
Test `aux kernel_at_least 5 1` fails even for a newer kernel
with `$3: unbound variable` when using `set -u`.
2019-07-24 16:30:15 +02:00
David Teigland
aa58f9bd9b tests: lvm-on-md use variable run dir
for hints file
2019-07-12 16:51:49 -05:00
David Teigland
bbca70a0b7 tests: metadata-zero-space
Test zero padding between copies of metadata.
2019-07-12 15:03:47 -05:00
David Teigland
7230aa891c tests: pvscan-autoactivate test unmatching dev and PV size 2019-07-11 11:38:12 -05:00
David Teigland
4eb0e65693 tests: extend lvm-on-md 2019-07-11 11:20:06 -05:00
David Teigland
f938545687 cache: warn and prompt for writeback with cachevol
The cache repair utility does not yet work with a cachevol
(where metadata and data exist on the same LV.)  So, warn
and prompt if writeback is specified with a cachevol.
2019-07-02 11:03:03 -05:00
Marian Csontos
ba9d152aa5 test: Remove now useless clvmd test 2019-06-27 11:14:00 +02:00
Marian Csontos
09f29570f2 test: Fix unbound variable
Test `aux kernel_at_least 5 1` fails even for a newer kernel
with `$3: unbound variable` when using `set -u`.
2019-06-27 10:41:21 +02:00
David Teigland
9ba45d824a tests: add exported.sh
to test how commands work with exported VGs/PVs.
2019-06-25 15:45:47 -05:00
David Teigland
8fecd9c14e metadata: include description with command in metadata areas
Previously the VG metadata description field (which contains
the command line) was only included in backup/archive copies
of the metadata.  Now also include it in the metadata written
to the metadata areas.
2019-06-20 16:09:05 -05:00
David Teigland
208a09745d tests: aux have_writecache
function was never defined, causing writecache.sh to be skipped
2019-06-13 11:36:18 -05:00
Zdenek Kabelac
c9203a6106 tests: correct checked target name
So when the target name happened to be a suffix of another one,
the grep was filtering incorrect line
(i.e. dm-cache && dm-writecache) - so do a line head matching.
2019-06-11 16:43:14 +02:00
David Teigland
d7c1168c6a tests: add metadata-bad-mdaheader.sh
needs xxd command
2019-06-07 15:54:04 -05:00
David Teigland
878741502a tests: add metadata-bad-text.sh 2019-06-07 15:54:04 -05:00
David Teigland
4fa1638301 tests: add outdated-pv.sh 2019-06-07 15:54:04 -05:00
David Teigland
9156640b60 tests: add metadata-old.sh 2019-06-07 15:54:04 -05:00
David Teigland
d3636ff832 tests: add missing-pv missing-pv-unused 2019-06-07 15:54:04 -05:00
David Teigland
ba7ff96faf improve reading and repairing vg metadata
The fact that vg repair is implemented as a part of vg read
has led to a messy and complicated implementation of vg_read,
and limited and uncontrolled repair capability.  This splits
read and repair apart.

Summary
-------

- take all kinds of various repairs out of vg_read
- vg_read no longer writes anything
- vg_read now simply reads and returns vg metadata
- vg_read ignores bad or old copies of metadata
- vg_read proceeds with a single good copy of metadata
- improve error checks and handling when reading
- keep track of bad (corrupt) copies of metadata in lvmcache
- keep track of old (seqno) copies of metadata in lvmcache
- keep track of outdated PVs in lvmcache
- vg_write will do basic repairs
- new command vgck --updatemetadata will do all repairs

Details
-------

- In scan, do not delete dev from lvmcache if reading/processing fails;
  the dev is still present, and removing it makes it look like the dev
  is not there.  Records are now kept about the problems with each PV
  so they can be fixed/repaired in the appropriate places.

- In scan, record a bad mda on failure, and delete the mda from
  the in-use mda list so it will not be used by vg_read or vg_write,
  only by repair.

- In scan, succeed if any good mda on a device is found, instead of
  failing if any is bad.  The bad/old copies of metadata should not
  interfere with normal usage while good copies can be used.

- In scan, add a record of old mdas in lvmcache for later, do not repair
  them while reading, and do not let them prevent us from finding and
  using a good copy of metadata from elsewhere.  One result is that
  "inconsistent metadata" is no longer a read error, but instead a
  record in lvmcache that can be addressed separately from the read.

- Treat a dev with no good mdas like a dev with no mdas, which is an
  existing case we already handle.

- Don't use a fake vg "handle" for returning an error from vg_read,
  or the vg_read_error function for getting that error number;
  just return NULL if the vg cannot be read or used, and an error_flags
  arg with flags set for the specific kind of error (which can be used
  later for determining the kind of repair; see the sketch after this list.)

- Saving an original copy of the vg metadata, for purposes of reverting
  a write, is now done explicitly in vg_read instead of being hidden in
  the vg_make_handle function.

- When a vg is not accessible due to "access restrictions" but is
  otherwise fine, return the vg through the new error_vg arg so that
  process_each_pv can skip the PVs in the VG while processing.
  (This is a temporary accommodation for the way process_each_pv
  tracks which devs have been looked at, and can be dropped later
  when process_each_pv implementation dev tracking is changed.)

- vg_read does not try to fix or recover a vg, but now just reads the
  metadata, checks access restrictions and returns it.
  (Checking access restrictions might be better done outside of vg_read,
   but this is a later improvement.)

- _vg_read now simply makes one attempt to read metadata from
  each mda, and uses the most recent copy to return to the caller
  in the form of a 'vg' struct.
  (bad mdas were excluded during the scan and are not retried)
  (old mdas were not excluded during scan and are retried here)

- vg_read uses _vg_read to get the latest copy of metadata from mdas,
  and then makes various checks against it to produce warnings,
  and to check if VG access is allowed (access restrictions include:
  writable, foreign, shared, clustered, missing pvs).

- Things that were previously silently/automatically written by vg_read
  that are now done by vg_write, based on the records made in lvmcache
  during the scan and read:
  . clearing the missing flag
  . updating old copies of metadata
  . clearing outdated pvs
  . updating pv header flags

- Bad/corrupt metadata can now be repaired; it could not be before.
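
The earlier item about returning NULL together with an error_flags arg is
easier to see in code.  A minimal standalone sketch of that pattern follows;
the struct, the flag names and the vg_read_sketch function are hypothetical
illustrations, not lvm2's actual definitions:

    /* Hedged sketch of the NULL-plus-error_flags return pattern. */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define ERR_VG_NOT_FOUND     0x01u   /* hypothetical flag bits */
    #define ERR_VG_NOT_WRITABLE  0x02u

    struct vg {
        char name[64];
        uint32_t seqno;
    };

    /* Returns a vg on success, or NULL with *error_flags saying why. */
    static struct vg *vg_read_sketch(const char *name, int want_write,
                                     uint32_t *error_flags)
    {
        *error_flags = 0;

        if (strcmp(name, "vg0") != 0) {       /* pretend only vg0 exists */
            *error_flags |= ERR_VG_NOT_FOUND;
            return NULL;
        }
        if (want_write) {                     /* pretend vg0 is read-only */
            *error_flags |= ERR_VG_NOT_WRITABLE;
            return NULL;
        }

        struct vg *vg = calloc(1, sizeof(*vg));
        if (!vg)
            return NULL;
        snprintf(vg->name, sizeof(vg->name), "%s", name);
        vg->seqno = 7;
        return vg;
    }

    int main(void)
    {
        uint32_t flags;
        struct vg *vg = vg_read_sketch("vg0", 1, &flags);

        if (!vg)
            printf("vg_read failed, error_flags=0x%x\n", flags);
        else
            printf("got %s seqno %u\n", vg->name, vg->seqno);
        free(vg);
        return 0;
    }

The caller can inspect the flag bits later to decide what kind of repair or
fallback is appropriate, instead of unpacking a fake vg handle.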

Test changes
------------

- A read command no longer writes the VG to repair it, so add a write
  command to do a repair.
  (inconsistent-metadata, unlost-pv)

- When a missing PV is removed from a VG, and then the device is
  enabled again, vgck --updatemetadata is now needed to clear the
  outdated PV before it can be used again; it wasn't needed before.
  (lvconvert-repair-policy, lvconvert-repair-raid, lvconvert-repair,
   mirror-vgreduce-removemissing, pv-ext-flags, unlost-pv)

Reading bad/old metadata
------------------------

- "bad metadata": the mda_header or metadata text has invalid fields
  or can't be parsed by lvm.  This is a form of corruption that would
  not be caused by known failure scenarios.  A checksum error is
  typically included among the errors reported.

- "old metadata": a valid copy of the metadata that has a smaller seqno
  than other copies of the metadata.  This can happen if the device
  failed, or io failed, or lvm failed while committing new metadata
  to all the metadata areas.  Old metadata on a PV that has been
  removed from the VG is the "outdated" case below.

When a VG has some PVs with bad/old metadata, lvm can simply ignore
the bad/old copies, and use a good copy.  This is why there are
multiple copies of the metadata -- so it's available even when some
of the copies cannot be used.  The bad/old copies do not have to be
repaired before the VG can be used (the repair can happen later.)

A PV with no good copies of the metadata simply falls back to being
treated like a PV with no mdas, a common and harmless configuration.

When bad/old metadata exists, lvm warns the user about it, and
suggests repairing it using a new metadata repair command.
Bad metadata in particular is something that users will want to
investigate and repair themselves, since it should not happen and
may indicate some other problem that needs to be fixed.

PVs with bad/old metadata are not the same as missing devices.
Missing devices will block various kinds of VG modification or
activation, but bad/old metadata will not.

Previously, lvm would attempt to repair bad/old metadata whenever
it was read.  This was unnecessary since lvm does not require every
copy of the metadata to be used.  It would also hide potential
problems that should be investigated by the user.  It was also
dangerous in cases where the VG was on shared storage.  The user
is now allowed to investigate potential problems and decide how
and when to repair them.
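
To make the bad-vs-old distinction concrete, here is a small standalone
sketch that classifies one copy of metadata by checksum and seqno.  The
struct, the toy checksum and the classify function are stand-ins invented
for illustration, not lvm2's on-disk format or code:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    struct metadata_copy {
        char     text[128];        /* metadata text as read from an mda */
        uint32_t stored_checksum;  /* checksum recorded in the mda_header */
        uint32_t seqno;            /* sequence number parsed from the text */
    };

    enum mda_state { MDA_GOOD, MDA_BAD, MDA_OLD };

    /* Toy checksum; lvm actually uses a CRC over the metadata text. */
    static uint32_t checksum(const char *buf, size_t len)
    {
        uint32_t sum = 0;
        for (size_t i = 0; i < len; i++)
            sum = sum * 31 + (unsigned char)buf[i];
        return sum;
    }

    /* "bad": checksum mismatch.  "old": valid but behind the latest seqno. */
    static enum mda_state classify(const struct metadata_copy *m,
                                   uint32_t latest_seqno)
    {
        if (checksum(m->text, strlen(m->text)) != m->stored_checksum)
            return MDA_BAD;
        if (m->seqno < latest_seqno)
            return MDA_OLD;
        return MDA_GOOD;
    }

    int main(void)
    {
        struct metadata_copy a = { "vg0 { seqno = 5 }", 0, 5 };
        a.stored_checksum = checksum(a.text, strlen(a.text));

        struct metadata_copy b = a;
        b.stored_checksum ^= 1;            /* simulate corruption */

        struct metadata_copy c = { "vg0 { seqno = 4 }", 0, 4 };
        c.stored_checksum = checksum(c.text, strlen(c.text));

        const char *names[] = { "good", "bad", "old" };
        printf("a=%s b=%s c=%s\n", names[classify(&a, 5)],
               names[classify(&b, 5)], names[classify(&c, 5)]);
        return 0;
    }

A copy classified "bad" is set aside for the repair command; a copy
classified "old" is simply ignored in favor of the latest one.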

Repairing bad/old metadata
--------------------------

When label scan sees bad metadata in an mda, that mda is removed
from the lvmcache info->mdas list.  This means that vg_read will
skip it, and not attempt to read/process it again.  If it was
the only in-use mda on a PV, that PV is treated like a PV with
no mdas.  It also means that vg_write will skip the bad mda,
and not attempt to write new metadata to it.  The only way to
repair bad metadata is with the metadata repair command.

When label scan sees old metadata in an mda, that mda is kept
in the lvmcache info->mdas list.  This means that vg_read will
read/process it again, and likely see the same mismatch with
the other copies of the metadata.  Like the label_scan, the
vg_read will simply ignore the old copy of the metadata and
use the latest copy.  If the command is modifying the vg
(e.g. lvcreate), then vg_write, which writes new metadata to
every mda on info->mdas, will write the new metadata to the
mda that had the old version.  If successful, this will resolve
the old metadata problem (without needing to run a metadata
repair command.)
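
A minimal sketch of that write loop follows; the mda struct is invented for
illustration, and the bad flag here stands in for the mda having been
dropped from the in-use list at scan time:

    #include <stdint.h>
    #include <stdio.h>

    struct mda {
        int      bad;     /* set at label scan on checksum/parse failure */
        uint32_t seqno;   /* seqno of the copy currently stored here */
    };

    /* vg_write-style loop: write the new version to every usable mda. */
    static void write_all_mdas(struct mda *mdas, int n, uint32_t new_seqno)
    {
        for (int i = 0; i < n; i++) {
            if (mdas[i].bad)              /* bad mdas are never rewritten */
                continue;
            mdas[i].seqno = new_seqno;    /* old copies are overwritten */
        }
    }

    int main(void)
    {
        struct mda mdas[3] = { { 0, 7 }, { 0, 6 /* old */ }, { 1, 0 /* bad */ } };

        write_all_mdas(mdas, 3, 8);
        for (int i = 0; i < 3; i++)
            printf("mda%d: bad=%d seqno=%u\n", i, mdas[i].bad, mdas[i].seqno);
        return 0;
    }

After the write, the mda that held seqno 6 carries the new metadata, while
the bad mda is untouched until the repair command is run.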

Outdated PVs
------------

An outdated PV is a PV that has an old copy of VG metadata
that shows it is a member of the VG, but the latest copy of
the VG metadata does not include this PV.  This happens if
the PV is disconnected, vgreduce --removemissing is run to
remove the PV from the VG, then the PV is reconnected.
In this case, the outdated PV needs to have its outdated metadata
removed and the PV used flag needs to be cleared.  This repair
will be done by the subsequent repair command.  It is also done
if vgremove is run on the VG.

MISSING PVs
-----------

When a device is missing, most commands will refuse to modify
the VG.  This is the simple case.  More complicated is when
a command is allowed to modify the VG while it is missing a
device.

When a VG is written while a device is missing for one of its PVs,
the VG metadata is written to disk with the MISSING flag on the PV
with the missing device.  When the VG is next used, it is treated
as if the PV with the MISSING flag still has a missing device, even
if that device has reappeared.

If all LVs that were using a PV with the MISSING flag are removed
or repaired so that the MISSING PV is no longer used, then the
next time the VG metadata is written, the MISSING flag will be
dropped.

Alternative methods of clearing the MISSING flag are:

vgreduce --removemissing will remove PVs with missing devices,
or PVs with the MISSING flag where the device has reappeared.

vgextend --restoremissing will clear the MISSING flag on PVs
where the device has reappeared, allowing the VG to be used
normally.  This must be done with caution since the reappeared
device may have old data that is inconsistent with data on other PVs.
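
The rule for dropping the MISSING flag can be sketched as follows; the pv
and lv structures here are invented for illustration and only model the
"is any LV still using this PV" check:

    #include <stdio.h>

    #define MAX_PVS 4

    struct pv { const char *name; int missing_flag; };
    struct lv { const char *name; const struct pv *pvs_used[MAX_PVS]; };

    /* Return 1 if any LV still maps onto this PV. */
    static int pv_in_use(const struct pv *pv, const struct lv *lvs, int nlvs)
    {
        for (int i = 0; i < nlvs; i++)
            for (int j = 0; j < MAX_PVS && lvs[i].pvs_used[j]; j++)
                if (lvs[i].pvs_used[j] == pv)
                    return 1;
        return 0;
    }

    /* At metadata-write time: clear MISSING only for PVs no LV depends on. */
    static void maybe_clear_missing(struct pv *pvs, int npvs,
                                    const struct lv *lvs, int nlvs)
    {
        for (int i = 0; i < npvs; i++)
            if (pvs[i].missing_flag && !pv_in_use(&pvs[i], lvs, nlvs))
                pvs[i].missing_flag = 0;
    }

    int main(void)
    {
        struct pv pvs[2] = { { "pv1", 1 }, { "pv2", 1 } };
        struct lv lvs[1] = { { "lv0", { &pvs[0] } } };   /* lv0 uses pv1 only */

        maybe_clear_missing(pvs, 2, lvs, 1);
        for (int i = 0; i < 2; i++)
            printf("%s MISSING=%d\n", pvs[i].name, pvs[i].missing_flag);
        return 0;
    }

pv2's flag is dropped at the next write because nothing uses it any more,
while pv1 keeps the flag until its LV is removed or repaired.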

Bad mda repair
--------------

The new command:
vgck --updatemetadata VG

first uses vg_write to repair the basic issues mentioned
above (old metadata, outdated PVs, pv_header
flags, MISSING_PV flags).  It will also go further and repair
bad metadata:

. text metadata that has a bad checksum
. text metadata that is not parsable
. corrupt mda_header checksum and version fields

(To keep a clean diff, #if 0 is added around functions that
are replaced by new code.  These commented functions are
removed by the following commit.)
2019-06-07 15:54:04 -05:00
David Teigland
db98a6e362 Additional MD component checking
If udev info, which would indicate whether a device is an MD
component, is missing for that device, then do an end-of-device
read to check if a PV is an MD component.  (This is skipped when
using hints since we already know devs in hints are good.)

A new config setting md_component_checks can be used to
disable the additional end-of-device MD checks, or to
always enable end-of-device MD checks.

When both hints and udev info are disabled/unavailable,
the end of PVs will now be scanned by default.  If md
devices with end-of-device superblocks are not being
used, the extra I/O overhead can be avoided by setting
md_component_checks="start".
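
For illustration, an end-of-device check could look roughly like the sketch
below.  It assumes the MD v0.90 superblock layout (a superblock placed at
the device size rounded down to 64 KiB, minus 64 KiB, beginning with the
magic 0xa92b4efc) and checks the magic in both byte orders; lvm's real MD
component detection covers more superblock versions than this:

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <linux/fs.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    #define MD_RESERVED_BYTES (64ULL * 1024)
    #define MD_SB_MAGIC 0xa92b4efcU

    /* Returns 1 if an MD superblock magic is found near the end, 0 if not,
     * -1 on error.  Sketch only. */
    static int dev_ends_with_md_sb(const char *path)
    {
        int fd = open(path, O_RDONLY);
        if (fd < 0)
            return -1;

        uint64_t size = 0;
        if (ioctl(fd, BLKGETSIZE64, &size) < 0 || size < 2 * MD_RESERVED_BYTES) {
            close(fd);
            return -1;
        }

        uint64_t sb_offset = (size & ~(MD_RESERVED_BYTES - 1)) - MD_RESERVED_BYTES;
        unsigned char b[4];
        int found = 0;

        if (pread(fd, b, sizeof(b), (off_t)sb_offset) == (ssize_t)sizeof(b)) {
            uint32_t le = b[0] | b[1] << 8 | b[2] << 16 | (uint32_t)b[3] << 24;
            uint32_t be = (uint32_t)b[0] << 24 | b[1] << 16 | b[2] << 8 | b[3];
            found = (le == MD_SB_MAGIC || be == MD_SB_MAGIC);
        }
        close(fd);
        return found;
    }

    int main(int argc, char **argv)
    {
        if (argc < 2)
            return 1;
        int r = dev_ends_with_md_sb(argv[1]);
        printf("%s: %s\n", argv[1],
               r == 1 ? "MD superblock found at end" :
               r == 0 ? "no MD superblock at end" : "check failed");
        return 0;
    }

This is the extra end-of-device read that md_component_checks="start"
avoids by limiting detection to superblocks at the start of the device.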
2019-06-07 13:27:16 -05:00
David Teigland
c315112a3b tests: pvscan-autoactivate check for machine-id 2019-06-06 15:32:42 -05:00
David Teigland
356ea897cc tests: pvck-dump 2019-06-05 13:58:26 -05:00
Zdenek Kabelac
4d9f41b119 tests: check no_discard_passdown
Check reporting works
2019-06-05 15:48:44 +02:00
Zdenek Kabelac
ddd68fbead tests: automatically set scan_lvs when using extend_filter
When using 'aux extend_filter' we always want to use an LV as a PV.
2019-06-05 15:48:44 +02:00
Marian Csontos
669a834981 test: Increase latency in pvmove-resume-multiseg
The test was failing consistently on some VMs (F25), and inconsistently
on Rawhide.

With increased latency these failures are no longer reproducible.

Reproducer:

    make check_lvmpolld T=pvmove-resume-multiseg.sh
2019-06-03 16:57:49 +02:00
Marian Csontos
a9907bef99 test: Restore testing of D-Bus API 2019-05-31 08:58:30 +02:00
David Teigland
eebb5e9fff tests: add debug to pvscan-cache deactivation 2019-05-23 15:32:46 -05:00
David Teigland
e055b89d28 tests: pvscan-cache more attempts to fix 2019-05-23 14:55:57 -05:00
David Teigland
1022b88a66 tests: change mkfs usage in lvconvert raid tests
The "echo y | mkfs" was failing at times from echo y.
Remove echo y and replace with wipefs -a prior to mkfs.
2019-05-23 11:45:26 -05:00
David Teigland
6169c0a51b tests: fix error detection in lvconvert-raid-takeover.sh 2019-05-23 10:29:52 -05:00
David Teigland
2036608423 tests: pvscan-cache try to fix teardown problems
teardown after the test was failing, probably because
of uncoordinated udev actions running on the test
system.  Try to avoid this by doing some work before
teardown.
2019-05-22 11:55:48 -05:00
David Teigland
78afe75b08 tests: fsadm-crypt.sh update mkfs parameter
mkfs.xfs was rejecting a previously working value
2019-05-21 14:46:01 -05:00
David Teigland
cf3f463929 tests: pvscan-autoactivate.sh switch system_id_source
to machineid instead of uname, which would break if
the test system had no proper uname set.
2019-05-21 14:37:55 -05:00
David Teigland
99ca06ca46 tests: hints check if strace exists
avoid test failure if the test system does not
have strace
2019-05-21 14:24:57 -05:00
Zdenek Kabelac
0c26aa13ca tests: check accepting out-of-range creation_time 2019-05-10 15:00:21 +02:00
Zdenek Kabelac
1f7c9da554 tests: split args
Here we want args to be split into individual strings.
2019-05-06 13:02:45 +02:00
Zdenek Kabelac
4ff472b907 tests: drop call of wipefs
wipefs might not be present on the test system.
Devices are also already zeroed by cleanup_md_dev
(which 'fakes' missing wipefs eventually)
2019-05-04 19:11:00 +02:00
David Teigland
6ff1583c1b tests: expand lvm-on-md
test both md raid0 and raid1
2019-05-03 14:39:42 -05:00
Zdenek Kabelac
ac627fd1ce tests: use luks1 for test
Since we do not need luks2 anywhere, pick the older format,
which does not require a password for resize, to keep
the rest of the test unmodified.
2019-05-03 13:17:22 +02:00
Zdenek Kabelac
8c56e31134 tests: update resize value
Since we now also properly extend _pmspare, there was not enough free
space to add 8 extents to both volumes.
2019-05-03 13:17:22 +02:00
David Teigland
8c87dda195 locking: unify global lock for flock and lockd
There have been two file locks used to protect lvm
"global state": "ORPHANS" and "GLOBAL".

Commands that used the ORPHAN flock in exclusive mode:
  pvcreate, pvremove, vgcreate, vgextend, vgremove,
  vgcfgrestore

Commands that used the ORPHAN flock in shared mode:
  vgimportclone, pvs, pvscan, pvresize, pvmove,
  pvdisplay, pvchange, fullreport

Commands that used the GLOBAL flock in exclusive mode:
  pvchange, pvscan, vgimportclone, vgscan

Commands that used the GLOBAL flock in shared mode:
  pvscan --cache, pvs

The ORPHAN lock covers the important cases of serializing
the use of orphan PVs.  It also partially covers the
reporting of orphan PVs (although not correctly as
explained below.)

The GLOBAL lock doesn't seem to have a clear purpose
(it may have eroded over time.)

Neither lock correctly protects the VG namespace, or
orphan PV properties.

To simplify and correct these issues, the two separate
flocks are combined into the one GLOBAL flock, and this flock
is used from the locking sites that are in place for the
lvmlockd global lock.

The logic behind the lvmlockd (distributed) global lock is
that any command that changes "global state" needs to take
the global lock in ex mode.  Global state in lvm is: the list
of VG names, the set of orphan PVs, and any properties of
orphan PVs.  Reading this global state can use the global lock
in sh mode to ensure it doesn't change while being reported.

The locking of global state now looks like:

lockd_global()
  previously named lockd_gl(), acquires the distributed
  global lock through lvmlockd.  This is unchanged.
  It serializes distributed lvm commands that are changing
  global state.  This is a no-op when lvmlockd is not in use.

lockf_global()
  acquires a flock on a local file.  It serializes local lvm
  commands that are changing global state.

lock_global()
  first calls lockf_global() to acquire the local flock for
  global state, and if this succeeds, it calls lockd_global()
  to acquire the distributed lock for global state.
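
A minimal sketch of that layering follows, using flock(2) on a lock file for
the local step and a stub for the lvmlockd step; the lock file path and the
simplified signatures are illustrative, not lvm2's actual ones:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/file.h>
    #include <unistd.h>

    #define GLOBAL_LOCK_FILE "/tmp/lvm_global_lock_demo"  /* illustrative path */

    static int lock_fd = -1;

    /* Local serialization: flock on a file, exclusive or shared. */
    static int lockf_global(int exclusive)
    {
        lock_fd = open(GLOBAL_LOCK_FILE, O_CREAT | O_RDWR, 0600);
        if (lock_fd < 0)
            return 0;
        if (flock(lock_fd, exclusive ? LOCK_EX : LOCK_SH) < 0) {
            close(lock_fd);
            lock_fd = -1;
            return 0;
        }
        return 1;
    }

    /* Distributed serialization via lvmlockd; a no-op stand-in here. */
    static int lockd_global(int exclusive)
    {
        (void)exclusive;
        return 1;
    }

    /* Take the local flock first, then the distributed lock. */
    static int lock_global(int exclusive)
    {
        if (!lockf_global(exclusive))
            return 0;
        if (!lockd_global(exclusive)) {
            flock(lock_fd, LOCK_UN);
            close(lock_fd);
            lock_fd = -1;
            return 0;
        }
        return 1;
    }

    int main(void)
    {
        if (lock_global(1)) {
            printf("holding the global lock (ex); global state may change\n");
            flock(lock_fd, LOCK_UN);
            close(lock_fd);
        }
        return 0;
    }

Taking the local flock first means commands on the same host serialize
locally before touching the distributed lock.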

Replace instances of lockd_gl() with lock_global(), so that the
existing sites for lvmlockd global state locking are now also
used for local file locking of global state.  Remove the previous
file locking calls lock_vol(GLOBAL) and lock_vol(ORPHAN).

The following commands which change global state are now
serialized with the exclusive global flock:

pvchange (of orphan), pvresize (of orphan), pvcreate, pvremove,
vgcreate, vgextend, vgremove, vgreduce, vgrename,
vgcfgrestore, vgimportclone, vgmerge, vgsplit

Commands that use a shared flock to read global state (and will
be serialized against the prior list) are those that use
process_each functions that are based on processing a list of
all VG names, or all PVs.  The list of all VGs or all PVs is
global state and the shared lock prevents those lists from
changing while the command is processing them.

The ORPHAN lock previously attempted to produce an accurate
listing of orphan PVs, but it was only acquired at the end of
the command during the fake vg_read of the fake orphan vg.
This is not when orphan PVs were determined; they were
determined by elimination beforehand by processing all real
VGs, and subtracting the PVs in the real VGs from the list
of all PVs that had been identified during the initial scan.
This is fixed by holding the single global lock in shared mode
while processing all VGs to determine the list of orphan PVs.
2019-04-29 13:01:05 -05:00
David Teigland
aa75b31db5 pvscan: handle case of scanning PV without metadata last
Handle the case where pvscan --cache -aay (with no dev args)
gets to the final PV, completing the VG, but that final PV does not
have VG metadata.  In this case, we need to use VG metadata from a
previously scanned PV in the same VG, which we saved for this
possibility.  Using this saved metadata, we can find which VG
this PVID belongs to, and then check if that VG is now complete,
and if so add the VG name to the list of complete VGs to be
autoactivated.
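
A rough sketch of that completeness check: metadata saved from an earlier
PV in the VG lists all of the VG's PVIDs, so when the final, metadata-less
PV comes online its PVID can be matched against that list.  The structures
and names below are invented for illustration:

    #include <stdio.h>
    #include <string.h>

    #define MAX_PVS 8

    /* VG metadata saved from a previously scanned PV in the same VG. */
    struct saved_vg {
        const char *vgname;
        const char *pvids[MAX_PVS];  /* PVIDs the metadata says belong to the VG */
        int online[MAX_PVS];         /* which of them pvscan has seen so far */
    };

    /* Mark a scanned PV online; return the VG name once the VG is complete. */
    static const char *pv_online(struct saved_vg *vg, const char *pvid)
    {
        int complete = 1;
        for (int i = 0; i < MAX_PVS && vg->pvids[i]; i++) {
            if (!strcmp(vg->pvids[i], pvid))
                vg->online[i] = 1;
            if (!vg->online[i])
                complete = 0;
        }
        return complete ? vg->vgname : NULL;
    }

    int main(void)
    {
        /* Saved when pvid-A was scanned; pvid-C itself carries no metadata. */
        struct saved_vg vg = { "vg0", { "pvid-A", "pvid-B", "pvid-C" }, { 0 } };
        const char *done;

        pv_online(&vg, "pvid-A");
        pv_online(&vg, "pvid-B");
        done = pv_online(&vg, "pvid-C");   /* final PV, no metadata of its own */
        if (done)
            printf("%s complete, add to autoactivation list\n", done);
        return 0;
    }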
2019-04-15 11:27:49 -05:00
David Teigland
41ba2b568b tests: disable unworking pvscan case
and add corresponding fixme in the code
2019-04-12 15:40:38 -05:00
David Teigland
48e9f116ae tests: update pvscan-autoactivate for init change 2019-04-05 14:04:42 -05:00
Zdenek Kabelac
5d6fe796bd tests: check auto-growth of thin-pool meta 2019-04-03 13:28:56 +02:00
Zdenek Kabelac
7f757ab616 tests: vdo caching tests 2019-03-20 14:39:11 +01:00
Zdenek Kabelac
5139e5f1b3 tests: vdo dmevent autoresize 2019-03-20 14:39:11 +01:00
Zdenek Kabelac
ac31bfd6fd tests: check vgsplit works with cache 2019-03-20 14:38:05 +01:00
David Teigland
d84134c75b pvscan: fix ignoring foreign PVs
Fix to previous commit
  "pvscan: ignore online for shared and foreign PVs"

which was incorrectly considering a PV foreign if its
VG had no system ID when the host did have a system ID.
2019-03-13 16:03:02 -05:00
David Teigland
98b7a3a42d tests: check that pvscan --cache ignores certain PVs 2019-03-06 12:17:47 -06:00
David Teigland
a9eaab6beb Use "cachevol" to refer to cache on a single LV
and "cachepool" to refer to a cache on a cache pool object.

The problem was that the --cachepool option was being used
to refer to both a cache pool object, and to a standard LV
used for caching.  This could be somewhat confusing, and it
made it less clear when each kind would be used.  By
separating them, it's clear when a cachepool or a cachevol
should be used.

Previously:

- lvm would use the cache pool approach when the user passed
  a cache-pool LV to the --cachepool option.

- lvm would use the cache vol approach when the user passed
  a standard LV in the --cachepool option.

Now:

- lvm will always use the cache pool approach when the user
  uses the --cachepool option.

- lvm will always use the cache vol approach when the user
  uses the --cachevol option.
2019-02-27 08:52:34 -06:00