mirror of git://sourceware.org/git/lvm2.git
Commit Graph

27 Commits

Author SHA1 Message Date
David Teigland
ba7ff96faf improve reading and repairing vg metadata
The fact that vg repair is implemented as a part of vg read
has led to a messy and complicated implementation of vg_read,
and limited and uncontrolled repair capability.  This splits
read and repair apart.

Summary
-------

- take all kinds of various repairs out of vg_read
- vg_read no longer writes anything
- vg_read now simply reads and returns vg metadata
- vg_read ignores bad or old copies of metadata
- vg_read proceeds with a single good copy of metadata
- improve error checks and handling when reading
- keep track of bad (corrupt) copies of metadata in lvmcache
- keep track of old (seqno) copies of metadata in lvmcache
- keep track of outdated PVs in lvmcache
- vg_write will do basic repairs
- new command vgck --updatemetadata will do all repairs

Details
-------

- In scan, do not delete dev from lvmcache if reading/processing fails;
  the dev is still present, and removing it makes it look like the dev
  is not there.  Records are now kept about the problems with each PV
  so they can be fixed/repaired in the appropriate places.

- In scan, record a bad mda on failure, and delete the mda from the
  in-use mda list so it will not be used by vg_read or vg_write,
  only by repair.

- In scan, succeed if any good mda on a device is found, instead of
  failing if any is bad.  The bad/old copies of metadata should not
  interfere with normal usage while good copies can be used.

- In scan, add a record of old mdas in lvmcache for later, do not repair
  them while reading, and do not let them prevent us from finding and
  using a good copy of metadata from elsewhere.  One result is that
  "inconsistent metadata" is no longer a read error, but instead a
  record in lvmcache that can be addressed separately from the read.

- Treat a dev with no good mdas like a dev with no mdas, which is an
  existing case we already handle.

- Don't use a fake vg "handle" for returning an error from vg_read,
  or the vg_read_error function for getting that error number;
  just return NULL if the vg cannot be read or used, and set flags in
  an error_flags arg for the specific kind of error (which can be used
  later for determining the kind of repair).

- Saving an original copy of the vg metadata, for purposes of reverting
  a write, is now done explicitly in vg_read instead of being hidden in
  the vg_make_handle function.

- When a vg is not accessible due to "access restrictions" but is
  otherwise fine, return the vg through the new error_vg arg so that
  process_each_pv can skip the PVs in the VG while processing.
  (This is a temporary accommodation for the way process_each_pv
  tracks which devs have been looked at, and can be dropped later
  when the dev tracking in the process_each_pv implementation is changed.)

- vg_read does not try to fix or recover a vg, but now just reads the
  metadata, checks access restrictions and returns it.
  (Checking access restrictions might be better done outside of vg_read,
   but this is a later improvement.)

- _vg_read now simply makes one attempt to read metadata from
  each mda, and uses the most recent copy to return to the caller
  in the form of a 'vg' struct.
  (bad mdas were excluded during the scan and are not retried)
  (old mdas were not excluded during scan and are retried here)

- vg_read uses _vg_read to get the latest copy of metadata from mdas,
  and then makes various checks against it to produce warnings,
  and to check if VG access is allowed (access restrictions include:
  writable, foreign, shared, clustered, missing pvs).

- Things that were previously silently/automatically written by vg_read
  that are now done by vg_write, based on the records made in lvmcache
  during the scan and read:
  . clearing the missing flag
  . updating old copies of metadata
  . clearing outdated pvs
  . updating pv header flags

- Bad/corrupt metadata are now repaired; they were not before.

Test changes
------------

- A read command no longer writes the VG to repair it, so add a write
  command to do a repair.
  (inconsistent-metadata, unlost-pv)

- When a missing PV is removed from a VG, and then the device is
  enabled again, vgck --updatemetadata is now needed to clear the
  outdated PV before it can be used again; it was not needed before.
  See the sketch after this list.
  (lvconvert-repair-policy, lvconvert-repair-raid, lvconvert-repair,
   mirror-vgreduce-removemissing, pv-ext-flags, unlost-pv)
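
A rough sketch of that test update (hedged; the VG/device variables are
placeholders in the style of the test suite, not taken from any specific
test):

  aux disable_dev "$dev2"        # one PV goes missing
  vgreduce --removemissing $vg   # drop the missing PV from the VG
  aux enable_dev "$dev2"         # the device reappears, now outdated
  vgck --updatemetadata $vg      # new step: clear the outdated PV
  vgextend $vg "$dev2"           # the PV can be used again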

Reading bad/old metadata
------------------------

- "bad metadata": the mda_header or metadata text has invalid fields
  or can't be parsed by lvm.  This is a form of corruption that would
  not be caused by known failure scenarios.  A checksum error is
  typically included among the errors reported.

- "old metadata": a valid copy of the metadata that has a smaller seqno
  than other copies of the metadata.  This can happen if the device
  failed, or io failed, or lvm failed while committing new metadata
  to all the metadata areas.  Old metadata on a PV that has been
  removed from the VG is the "outdated" case below.

When a VG has some PVs with bad/old metadata, lvm can simply ignore
the bad/old copies, and use a good copy.  This is why there are
multiple copies of the metadata -- so it's available even when some
of the copies cannot be used.  The bad/old copies do not have to be
repaired before the VG can be used (the repair can happen later.)

A PV with no good copies of the metadata simply falls back to being
treated like a PV with no mdas, a common and harmless configuration.

When bad/old metadata exists, lvm warns the user about it, and
suggests repairing it using a new metadata repair command.
Bad metadata in particular is something that users will want to
investigate and repair themselves, since it should not happen and
may indicate some other problem that needs to be fixed.

PVs with bad/old metadata are not the same as missing devices.
Missing devices will block various kinds of VG modification or
activation, but bad/old metadata will not.

Previously, lvm would attempt to repair bad/old metadata whenever
it was read.  This was unnecessary since lvm does not require every
copy of the metadata to be used.  It would also hide potential
problems that should be investigated by the user.  It was also
dangerous in cases where the VG was on shared storage.  The user
is now allowed to investigate potential problems and decide how
and when to repair them.

Repairing bad/old metadata
--------------------------

When label scan sees bad metadata in an mda, that mda is removed
from the lvmcache info->mdas list.  This means that vg_read will
skip it, and not attempt to read/process it again.  If it was
the only in-use mda on a PV, that PV is treated like a PV with
no mdas.  It also means that vg_write will skip the bad mda,
and not attempt to write new metadata to it.  The only way to
repair bad metadata is with the metadata repair command.

When label scan sees old metadata in an mda, that mda is kept
in the lvmcache info->mdas list.  This means that vg_read will
read/process it again, and likely see the same mismatch with
the other copies of the metadata.  Like label_scan, vg_read
will simply ignore the old copy of the metadata and
use the latest copy.  If the command is modifying the vg
(e.g. lvcreate), then vg_write, which writes new metadata to
every mda on info->mdas, will write the new metadata to the
mda that had the old version.  If successful, this will resolve
the old metadata problem (without needing to run a metadata
repair command.)
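
For illustration (a hedged sketch with placeholder VG/LV names), any
command that modifies the VG refreshes the stale copy as a side effect:

  vgs vg0                       # a warning about old metadata is expected
  lvcreate -n lv1 -L 100m vg0   # vg_write rewrites every in-use mda,
                                # including the one with the old copy
  vgs vg0                       # the old-metadata warning should be gone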

Outdated PVs
------------

An outdated PV is a PV that has an old copy of VG metadata
that shows it is a member of the VG, but the latest copy of
the VG metadata does not include this PV.  This happens if
the PV is disconnected, vgreduce --removemissing is run to
remove the PV from the VG, then the PV is reconnected.
In this case, the outdated PV needs to have its outdated metadata
removed and the PV used flag needs to be cleared.  This repair
will be done by the subsequent repair command.  It is also done
if vgremove is run on the VG.
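
A minimal sketch of that sequence (device and VG names are placeholders):

  # /dev/sdb disappears while vg0 is in use
  vgreduce --removemissing vg0   # the missing PV is dropped from vg0
  # /dev/sdb reappears, still carrying old metadata that names vg0
  vgck --updatemetadata vg0      # removes the outdated metadata and
                                 # clears the PV used flag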

MISSING PVs
-----------

When a device is missing, most commands will refuse to modify
the VG.  This is the simple case.  More complicated is when
a command is allowed to modify the VG while it is missing a
device.

When a VG is written while a device is missing for one of its PVs,
the VG metadata is written to disk with the MISSING flag on the PV
with the missing device.  When the VG is next used, it is treated
as if the PV with the MISSING flag still has a missing device, even
if that device has reappeared.

If all LVs that were using a PV with the MISSING flag are removed
or repaired so that the MISSING PV is no longer used, then the
next time the VG metadata is written, the MISSING flag will be
dropped.

Alternative methods of clearing the MISSING flag are:

vgreduce --removemissing will remove PVs with missing devices,
or PVs with the MISSING flag where the device has reappeared.

vgextend --restoremissing will clear the MISSING flag on PVs
where the device has reappeared, allowing the VG to be used
normally.  This must be done with caution since the reappeared
device may have old data that is inconsistent with data on other PVs.
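
For example (placeholder names), the two alternatives differ in whether
the PV remains in the VG:

  vgreduce --removemissing vg0              # drop missing/MISSING PVs
  # or, if the reappeared device's data is known to be safe:
  vgextend --restoremissing vg0 /dev/sdb    # clear MISSING, keep the PV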

Bad mda repair
--------------

The new command:
vgck --updatemetadata VG

first uses vg_write to repair old metadata, and other basic
issues mentioned above (old metadata, outdated PVs, pv_header
flags, MISSING_PV flags).  It will also go further and repair
bad metadata:

. text metadata that has a bad checksum
. text metadata that is not parsable
. corrupt mda_header checksum and version fields
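
A hypothetical invocation (the VG name is a placeholder):

  vgck --updatemetadata vg0   # rewrites the metadata: refreshes old
                              # copies, clears outdated PVs, fixes
                              # pv_header/MISSING flags, and rewrites
                              # mdas with bad checksums or headers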

(To keep a clean diff, #if 0 is added around functions that
are replaced by new code.  These commented functions are
removed by the following commit.)
2019-06-07 15:54:04 -05:00
David Teigland
595196bc29 tests: enable lvmlockd for passing tests 2018-05-30 09:25:45 -05:00
Heinz Mauelshagen
b74e7f6a78 test: allow to succeed in the cluster
Avoiding "$(get first_extent_sector "$d")" in the loop
allows the test to succeed in the cluster.  Further cluster
analysis is needed to get to the core reason.
2017-12-01 18:59:55 +01:00
Zdenek Kabelac
c64e2a85cb tests: use get_devs 2017-07-15 00:13:33 +02:00
Zdenek Kabelac
e55bae2b2c tests: use bash 2017-07-10 14:23:53 +02:00
Zdenek Kabelac
24751b45bd tests: double quote 2017-07-10 14:23:53 +02:00
Zdenek Kabelac
1598bf154e tests: add transient failure test
Check raid repair of a still 'present' but failed device.

TODO: tests need more checking that the repair actually does its work.
2017-06-23 23:32:44 +02:00
Heinz Mauelshagen
8daad11666 test: "lvconvert --repair" after vgreduce
Add test for commit 921b496fff.
2017-03-09 02:35:57 +01:00
Heinz Mauelshagen
55eaabd118 lvreduce/lvresize: add ability to reduce the size of a RaidLV
- support shrinking of raid0/1/4/5/6/10 LVs
- enhance lvresize-raid.sh tests: add raid0* and raid10
- fix have_raid4 in aux.sh to allow lv_resize-raid.sh
  and other scripts to test raid4

Resolves: rhbz1394048
2017-02-09 22:42:03 +01:00
Zdenek Kabelac
15e657f110 tests: ignore racy test failure
When the test fails here, make it just a warning instead of failing the
whole test.
2017-01-06 23:39:53 +01:00
Alasdair G Kergon
6dce0e9489 tests: Disable 3-leg raid1 pre-sync repair attempt
The test fails as it doesn't ensure the device isn't already in sync.
2016-11-04 00:23:08 +00:00
Heinz Mauelshagen
5d455b28fc lvconvert: fix (automatic) raid repair regression
The dm-raid target now rejects device rebuild requests during ongoing
resynchronization, causing 'lvconvert --repair ...' to fail with
a kernel error message.  This is also a regression for automatic repair
via the dmeventd RAID plugin when raid_fault_policy="allocate"
is configured in lvm.conf.

Previously, allowing such a repair request required cancelling the
resynchronization of any still accessible DataLVs, hence risking
potential data loss.

The patch allows the resynchronization of still accessible DataLVs to
finish by rejecting any 'lvconvert --repair ...'.

It enhances the dmeventd RAID plugin to repair automatically by
postponing the repair until after synchronization has ended.

More tests are added to lvconvert-rebuild-raid.sh to cover single
and multiple DataLV failure cases for the different RAID levels.

- resolves: rhbz1371717
2016-09-21 00:39:29 +02:00
Zdenek Kabelac
59b0bd7b64 tests: use bigger raid size
To avoid a problem with the array being reported as in-sync, use a
bigger raid size of 64M for now.
2016-04-18 23:09:17 +02:00
David Teigland
f54253d396 tests: add SKIP_WITH_LVMLOCKD
to all tests that don't already use vgcreate $SHARED
2016-02-23 09:28:48 -06:00
Zdenek Kabelac
fcbef05aae doc: change fsf address
Hmm, rpmlint suggests the FSF is using a different address these days,
so let's keep it up-to-date
2016-01-21 12:11:37 +01:00
Zdenek Kabelac
4159680a0e tests: use more SKIP
Speed-up check_lvmpolld.
2015-10-27 16:00:09 +01:00
Ondrej Kozina
e587b0677b lvmpolld: Add standalone polldaemon.
See doc/lvmpolld_overview.txt
2015-05-09 00:59:18 +01:00
Zdenek Kabelac
ebde60beab tests: use single lvmconf call 2015-04-08 23:19:37 +02:00
Zdenek Kabelac
5c5177c37c tests: rename test to inittest
We are getting into problems when we use 'test' for commands like
should/not/...

So avoid overloading the name 'test' and change it to inittest.
2014-06-10 10:51:27 +02:00
Zdenek Kabelac
c5c3995ed5 tests: increase min version for raid testing
Smaller versions seem to be causing weird kernel lockups.
2014-05-23 23:35:42 +02:00
Zdenek Kabelac
65b0948c1e tests: swap tests 2014-05-22 12:01:44 +02:00
Zdenek Kabelac
2e9792121f tests: add have_cache and have_raid
Need to be aware of build options when the system is configured
without raid or cache support
2014-05-20 21:50:30 +02:00
Zdenek Kabelac
4adbb85c37 tests: disable test for broken kernel raid target
Since raid5 is used, validate it's usable on the system
2014-03-26 00:04:59 +01:00
Jonathan Brassow
7763607f36 TEST: Test was trying to kill 2 devices in RAID5 instead of RAID6
The segment type being used for the test should have been 'raid6'.
2013-10-18 09:33:37 -05:00
Zdenek Kabelac
362d8ead64 tests: more tests run in cluster mode
aux updates:

prepare_vg now creates a clustered VG for cluster tests.

Since dm-raid doesn't work in a cluster, skip the cluster
test when someone checks for the dm-raid target, until fixed.
2013-06-16 00:07:33 +02:00
Zdenek Kabelac
c5f7d401e5 tests: missed skip in test 2013-05-31 21:58:51 +02:00
Zdenek Kabelac
e9e7421c8e tests: move raid test to separate file 2013-05-31 21:42:32 +02:00