mirror of git://sourceware.org/git/lvm2.git
Commit Graph

24 Commits

Author SHA1 Message Date
Zdenek Kabelac
24751b45bd tests: double quote 2017-07-10 14:23:53 +02:00
Jonathan Brassow
c87907dcd5 lvconvert: linear -> raid1 upconvert should cause "recover" not "resync"
Two of the sync actions performed by the kernel (aka MD runtime) are
"resync" and "recover".  The "resync" refers to when an entirely new array
is going through the process of initializing (or resynchronizing after an
unexpected shutdown).  The "recover" is the process of initializing a new
member device to the array.  So, a brand new array with all new devices
will undergo "resync".  An array with replaced or added sub-LVs will undergo
"recover".

These two states are treated very differently when failures happen.  If any
device is lost or replaced during a "resync", there are no worries.  This is
because any writes made since the inception of the array have reached
all the devices and can be safely recovered.  Even though non-initialized
portions will still be resync'ed with uninitialized data, it is ok.  However,
if a pre-existing device is lost (aka, the original linear device in a
linear -> raid1 convert) during a "recover", data loss can be the result.
Thus, writes are errored by the kernel and recovery is halted.  The failed
device must be restored or removed.  This is the correct behavior.

Unfortunately, we were treating an up-convert from linear as a "resync"
when we should have been treating it as a "recover".  This patch
removes the special case for linear upconvert.  It allows each new image
sub-LV to be marked with a rebuild flag and treats the array as 'in-sync'.
This has the correct effect of causing the upconvert to be treated as a
"recover" rather than a "resync".  There is no need to flag these two states
differently in LVM metadata, because they are already considered differently
by the kernel RAID metadata.  (Any activation/deactivation will properly
resume the "recover" process and not a "resync" process.)

We make this behavior change based on the presence of dm-raid target
version 1.9.0+.
2017-06-14 08:35:22 -05:00
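
A minimal sketch of the scenario above (VG/LV names and the lvs field list are illustrative, not taken from the patch):

    lvcreate -L 64M -n $lv1 $vg                 # plain linear LV
    lvconvert --type raid1 -m 1 $vg/$lv1        # upconvert; the new image carries the rebuild flag
    # with dm-raid 1.9.0+ the kernel should report "recover" rather than "resync"
    lvs -a -o name,sync_percent,raid_sync_action $vg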
Heinz Mauelshagen
0f65d7ec3a lvconvert: prompt on raid1 image changes
Don't change resilience of raid1 LVs without --yes.

Adjust respective tests.
2017-04-06 18:47:41 +02:00
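
A short sketch of the new prompt behavior (LV name illustrative):

    lvconvert -m 0 $vg/$lv1          # now asks for confirmation before reducing raid1 resilience
    lvconvert --yes -m 0 $vg/$lv1    # non-interactive form, as used by the adjusted tests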
Heinz Mauelshagen
a9651adc84 test: add raid4 checks to respective tests
Add missing checks for valid raid4 mapping to tests.
2016-10-28 21:54:10 +02:00
Zdenek Kabelac
d34e315b8d tests: use just single raid_leg_status
Simplify the 'check' test and correct the printed tracing output
2016-09-13 12:25:36 +02:00
Zdenek Kabelac
5b55f9616d tests: add check function for raid leg status
2 new check test functions:

raid_leg_status    - to just compare
raid_leg_status_is - to compare and die when different
2016-09-12 16:51:53 +02:00
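
A hedged sketch of how such helpers are typically invoked in the test suite; the argument order and the "AA" (both legs in sync) status string are assumptions:

    check raid_leg_status    $vg $lv1 "AA"    # just compare and report
    check raid_leg_status_is $vg $lv1 "AA"    # compare and die when different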
Zdenek Kabelac
e5ec348d68 tests: lowering disk usage
Correction for aux test result  ([] -> if;then;fi)

Use issue_discard to lower memory demands on discardable test devices
Use large devices directly through prepare_pvs

I'm still observing more than 0.5G of data usage, though.

Particularly:

'lvcreate' followed by 'lvconvert' (which doesn't yet support the --nosync
option) is quite demanding, and resume returns quite 'late' when
a lot of data has already been written to the PV.
2016-09-09 20:55:00 +02:00
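
A hedged sketch of the two measures mentioned above; 'aux lvmconf' and the exact prepare_pvs arguments are assumptions about the test harness:

    aux lvmconf 'devices/issue_discards = 1'   # discard freed extents on the test devices
    aux prepare_pvs 5 1000                     # use large PVs directly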
Heinz Mauelshagen
bb9789f2b3 test: fix/enhance lvcreate-large-raid*.sh
Multi-step extend to an even larger raid10 LV in lvcreate-large-raid.sh.

Comment fixes.
2016-08-09 18:16:01 +02:00
Heinz Mauelshagen
48e14390c1 test: fix lvcreate-large-raid.sh
RAID6 LVs may not be created with --nosync or data corruption
may occur in case of device failures.  The underlying MD raid6
personality used to drive the RaidLV performs read-modify-write
updates on stripes and thus relies on properly written parity
(P and Q Syndromes) during initial synchronization.

While at it, enhance the test to create/extend more and
larger RaidLVs and check sync/nosync status.
2016-08-09 17:45:37 +02:00
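
A minimal sketch of the rule (names and sizes illustrative): raid6 always gets a full initial sync, never --nosync:

    lvcreate --type raid6 -i 3 -L 1G -n $lv1 $vg    # no --nosync here
    lvs -o name,sync_percent $vg/$lv1               # wait for 100.00 before relying on P/Q parity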
David Teigland
f54253d396 tests: add SKIP_WITH_LVMLOCKD
to all tests that don't already use vgcreate $SHARED
2016-02-23 09:28:48 -06:00
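
A hedged sketch of the usual test preamble, assuming the lib/inittest convention:

    SKIP_WITH_LVMLOCKD=1
    . lib/inittest
    vgcreate $SHARED $vg "$dev1" "$dev2"   # tests already passing $SHARED are exempt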
Zdenek Kabelac
fcbef05aae doc: change fsf address
Hmm, rpmlint suggests the FSF is using a different address these days,
so let's keep it up-to-date.
2016-01-21 12:11:37 +01:00
Zdenek Kabelac
4159680a0e tests: use more SKIP
Speed up check_lvmpolld.
2015-10-27 16:00:09 +01:00
Ondrej Kozina
e587b0677b lvmpolld: Add standalone polldaemon.
See doc/lvmpolld_overview.txt
2015-05-09 00:59:18 +01:00
Zdenek Kabelac
4607cbcb0d tests: use old virt snaps in the test
Don't use thin with its thin requirements for the test.
2014-11-22 18:51:02 +01:00
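
A hedged sketch of an old-style virtual (sparse) snapshot used instead of thin; names and sizes are illustrative:

    lvcreate -s --virtualsize 1T -L 8M -n $lv1 $vg   # sparse LV, no thin pool required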
Zdenek Kabelac
5c5177c37c tests: rename test to inittest
We are getting into problems when we use 'test' alongside commands like
should/not/...

So avoid overloading the name 'test' and change it to inittest.
2014-06-10 10:51:27 +02:00
Zdenek Kabelac
c5c3995ed5 tests: increase min version for raid testing
Seems smaller versions are causing weird kernel lookups.
2014-05-23 23:35:42 +02:00
Zdenek Kabelac
2e9792121f tests: add have_cache and have_raid
Need to be aware of build options when the system is
configured without raid or cache support.
2014-05-20 21:50:30 +02:00
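
A hedged sketch of the guard at the top of a test; the version triplets are illustrative:

    aux have_raid 1 3 0 || skip    # skip when lvm2/kernel lack raid support
    aux have_cache 1 3 0 || skip   # likewise for cache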
Zdenek Kabelac
47b15b805e tests: updates
Add some vgremove calls.
Remove unneeded test for some unused commands.
Add tests for missing commands.
2014-02-27 13:01:04 +01:00
Zdenek Kabelac
9c9f4515ae tests: add some quotes
Use quotes for DM_DEV_DIR
2014-02-24 21:13:36 +01:00
Zdenek Kabelac
fe609141a8 tests: on 32bit test with <16T devs
Add  'can_use_16T' to detect systems where we could
safely use 16T devices without causing system deadlocks.

A 16T size leads, on those systems, to endless loops in udevd
- it calls blkid, which tries a cached read from such a device
- and this ends in an endless loop.

Related problems:
https://bugzilla.redhat.com/show_bug.cgi?id=1015028
2013-11-19 10:55:14 +01:00
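
A hedged sketch of how the guard would be used; the 'aux' prefix is an assumption:

    aux can_use_16T || skip        # avoid the udevd/blkid endless loop on 32bit
    lvcreate -L 16T -n $lv1 $vg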
Jonathan Brassow
2691f1d764 RAID: Make RAID single-machine-exclusive capable in a cluster
Creation, deletion, [de]activation, repair, conversion, scrubbing
and changing operations are all now available for RAID LVs in a
cluster - provided that they are activated exclusively.

The code has been changed to ensure that no LV or sub-LV activation
is attempted cluster-wide.  This includes the often overlooked
operations of activating metadata areas for the brief time it takes
to clear them.  Additionally, some 'resume_lv' operations were
replaced with 'activate_lv_excl_local' when sub-LVs were promoted
to top-level LVs for removal, clearing or extraction.  This was
necessary because it forces the appropriate renaming actions that
occur via resume in the single-machine case, but won't happen in
a cluster due to the necessity of acquiring a lock first.

The *raid* tests have been updated to allow testing in a cluster.
For the most part, this meant creating devices with '-aey' if they
were to be converted to RAID.  (RAID requires the converting LV to
be EX because it is a condition of activation for the RAID LV in
a cluster.)
2013-09-10 16:33:22 -05:00
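
A hedged sketch of the '-aey' pattern (names illustrative):

    lvcreate -aey -L 64M -n $lv1 $vg        # activate exclusively on this node
    lvconvert --type raid1 -m 1 $vg/$lv1    # RAID conversion needs EX activation in a cluster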
Petr Rockai
53fbf2bea3 tests: make filter extension more robust 2013-06-02 00:50:09 +02:00
Zdenek Kabelac
db2b65704c tests: test mirrors in clustered way
Make clustered testing use cluster locking for most of the tests.
Use exclusive activation for mirrors and snapshot origins.
2013-05-31 21:42:32 +02:00
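
A hedged sketch of exclusive activation (names illustrative):

    lvchange -an  $vg/$lv1     # deactivate everywhere
    lvchange -aey $vg/$lv1     # activate exclusively on the local node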
Zdenek Kabelac
3877ccfe1b test: move raid test to separate tests
Revert changes to the original lvcreate-large test and use separate
test scripts for raid, so they can be properly skipped when the
kernel doesn't support raid targets.
2012-10-08 14:49:21 +02:00