mirror of git://sourceware.org/git/lvm2.git synced 2024-10-28 11:55:55 +03:00
Commit Graph

8012 Commits

Author SHA1 Message Date
Zdenek Kabelac
61427af377 tests: do not activate LVs
Since our current vgcfgbackup/restore doesn't deal
with differences in active volumes between the current and
restored sets of volumes, run the test with inactive LVs.
2013-09-16 15:38:42 +02:00
Zdenek Kabelac
2abaf87a9b tests: update vgcfgrestore test
Test vgcfgrestore when the seqno of the restored mda is smaller than the current
metadata on the device.
2013-09-16 13:48:44 +02:00
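A minimal sketch of the scenario this test exercises, assuming an existing VG "$vg" with some free space (illustration only, not the actual test script):

vgcfgbackup -f backup.vg "$vg"      # save the current metadata
lvcreate -l1 -n bump "$vg"          # any metadata change bumps the VG seqno past the backup
vgcfgrestore -f backup.vg "$vg"     # restored metadata now carries a smaller seqno than on disk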
Peter Rajnoha
9742c5192e systemd: run lvm2-activation-net.service after lvm2-activation.service
The lvm2-activation-net.service was previously ordered only with respect to the
iscsi and fcoe services. In addition to that, we also need ordering
with respect to lvm2-activation.service to prevent parallel vgchange -aay
runs which may cause problems during activation.
See also https://bugs.gentoo.org/show_bug.cgi?id=480066.

With this patch, the ordering is firmly set to:
lvm2-activation-early.service -> lvm2-activation.service -> lvm2-activation-net.service

Thanks to Alexander Tsoy for the original patch (modified a bit here):
https://www.redhat.com/archives/lvm-devel/2013-September/msg00049.html
2013-09-16 11:47:09 +02:00
Zdenek Kabelac
fbb732bd8e tests: enhance pvmove testing
Rewrite the lv_on check and add a new lv_tree_on check.
Move more pvmove-unrelated test code out to the check & get sections
(so they do not obfuscate the trace output unnecessarily).

Use the new lv_tree_on().
NOTE: unsure how the snapshot origin should be accounted for here.

Split pvmove-all-segments into separate tests for raid and thins
(so the test output properly shows what has been skipped in the test).
2013-09-16 11:22:04 +02:00
Zdenek Kabelac
47b7cc6850 tests: use check for raid test
Use the test-suite built-in check command for this test
(it creates better logs and reduces the number of valgrinded commands).
2013-09-16 11:22:04 +02:00
Zdenek Kabelac
9df27c4279 tests: update check and get
Update usage of "" around shell vars.
trim now needs to trim both sides.
trim also removes debug.log since it's only called when the lvm command
has finished properly (so if something fails afterward, there
is no misleading debug trace in the log).
2013-09-16 11:22:04 +02:00
Zdenek Kabelac
51d3667cb5 tests: update die
'die' evaluates the given string - so \n can be used for
multiline error reports.

Also remove debug.log, since the command finished properly when we
call 'die'.

Note: we should not call 'die' after an lvm command failure.
2013-09-16 11:22:04 +02:00
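A minimal sketch of what such a 'die' helper might look like (hypothetical shape, not the actual test-suite code; echo -e is what expands the \n escapes, and $dev1 in the usage line is just an assumed variable):

die() {
        rm -f debug.log        # the last lvm command succeeded, so its trace is not needed
        echo -e "$@" >&2       # expand \n escapes, allowing multiline error reports
        exit 1
}

test -b "$dev1" || die "device setup failed:\nmissing block device $dev1"   # $dev1: assumed test device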
Zdenek Kabelac
d6090a10f0 tests: add helper functions
Add mkdev_md5sum to create and checksum a given LV.
Add dev_md5sum to verify that a device has a matching md5 sum.
2013-09-16 11:22:04 +02:00
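A rough sketch of what such helpers could look like (hypothetical shape, assuming the test suite's $DM_DEV_DIR prefix and the 'die' helper; not the actual implementation):

mkdev_md5sum() {       # create an LV, fill it with data and record its checksum
        local vg=$1 lv=$2
        lvcreate -L8 -n "$lv" "$vg"
        dd if=/dev/urandom of="$DM_DEV_DIR/$vg/$lv" bs=1M count=8 conv=fdatasync
        md5sum < "$DM_DEV_DIR/$vg/$lv" > "md5.$vg-$lv"
}

dev_md5sum() {         # verify the device still matches the recorded checksum
        local vg=$1 lv=$2
        [ "$(md5sum < "$DM_DEV_DIR/$vg/$lv")" = "$(cat "md5.$vg-$lv")" ] || \
                die "LV $vg/$lv does not match its recorded md5 sum"
}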
Jonathan Brassow
1d06868240 TEST: Unaccounted possible output causing failure
lvchange-raid.sh checks to ensure that the 'p'artial flag takes
precedence over the 'w'ritemostly flag by disabling and reenabling
a device in the array.  Most of the time this works fine, but
sometimes the kernel can notice the device failure before it is
reenabled.  In that case, the attr flag will not return to 'w', but
to 'r'efresh.  This is because 'r'efresh also takes precedence over
the 'w'ritemostly flag.  So, we also do a quick check for 'r' and
not just 'w'.
2013-09-12 13:23:53 -05:00
Peter Rajnoha
e2268aeb92 WHATS_NEW: some missing lines 2013-09-12 14:16:54 +02:00
Peter Rajnoha
6940e24aee udev: keep DM_ACTIVATION and DM_UDEV_PRIMARY_SOURCE_FLAG meaning as before commit 8d1d835
The DM_ACTIVATION and DM_UDEV_PRIMARY_SOURCE_FLAG need to be kept the
way they were for backward compatibility (e.g. the old rules are still
in the initramfs). This also makes the check whether the device should be
scanned in 69-dm-lvmetad.rules even easier.
2013-09-12 13:49:02 +02:00
Zdenek Kabelac
4d54400486 tests: extend harness with output of /var/log/messages
Add a very simple hack for embedding /var/log/messages into
the test output - it's not ideal since it sometimes breaks lines,
but it still gives valuable info.
2013-09-12 13:30:12 +02:00
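A minimal sketch of the idea (the harness itself does this internally; run_the_test is a placeholder and the log path may differ):

start=$(stat -c %s /var/log/messages 2>/dev/null || echo 0)   # where the log ended before the test

run_the_test   # placeholder for the actual test invocation

# append whatever syslog gathered meanwhile to the test output
tail -c "+$((start + 1))" /var/log/messages 2>/dev/null | sed 's/^/<syslog> /'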
Zdenek Kabelac
4dc1668467 tests: singlenode cleanup for prev commit
Add a few more comments and clean up some warnings.
2013-09-12 11:29:18 +02:00
Zdenek Kabelac
2a6abcb80a tests: singlenode updates
Add a more 'realistic' simulation of dlm locking.
The previous version was not capable of maintaining multiple locks.
The current version doesn't handle multiple queues for locks,
so the ordering is different.
2013-09-12 10:40:39 +02:00
Zdenek Kabelac
7b5f2e7f34 clvmd: add missing debug newline
Just a missing newline.
2013-09-12 10:38:49 +02:00
Zdenek Kabelac
b8ea27ac97 cleanup: hide gcc warning
Older gcc gives a misleading warning:

metadata/lv_manip.c:4018: warning: ‘seg’ may be used uninitialized in
this function

But warning-free compilation is better.
2013-09-11 23:40:45 +02:00
Jonathan Brassow
82228acfc9 Mirror/Thin: Disallow thinpools on mirror logical volumes
The same corner cases that exist for snapshots on mirrors exist for
any logical volume layered on top of mirror.  (One example is when
a mirror image fails and a non-repair LVM command is the first to
detect it via label reading.  In this case, the LVM command will hang
and prevent the necessary LVM repair command from running.)  When
a better alternative exists, it makes no sense to allow a new target
to stack on mirrors as a new feature.  Since RAID is now capable of
running EX in a cluster and thin is not active-active aware, it makes
sense to pair these two rather than mirror+thinpool.

As further background, here are some additional comments that I made
when addressing a bug related to mirror+thinpool:
(https://bugzilla.redhat.com/show_bug.cgi?id=919604#c9)
I am going to disallow thin* on top of mirror logical volumes.
Users will have to use the "raid1" segment type if they want this.

This bug has come down to a choice between:
1) Disallowing thin-LVs from being used as PVs.
2) Disallowing thinpools on top of mirrors.

The problem is that the code in dev_manager.c:device_is_usable() is unable
to tell whether there is a mirror device lower in the stack from the device
being checked.  Pretty much anything layered on top of a mirror will suffer
from this problem.  (Snapshots are a good example of this; and option #1
above has been chosen to deal with them.  This can also be seen in
dev_manager.c:device_is_usable().)  When a mirror failure occurs, the
kernel blocks all I/O to it.  If there is an LVM command that comes along
to do the repair (or a different operation that requires label reading), it
would normally avoid the mirror when it sees that it is blocked.  However,
if there is a snapshot or a thin-LV that is on a mirror, the above code
will not detect the mirror underneath and will issue label reading I/O.
This causes the command to hang.

Choosing #1 would mean that thin-LVs could never be used as PVs - even if
they are stacked on something other than mirrors.

Choosing #2 means that thinpools can never be placed on mirrors.  This is
probably better than we think, since it is preferred that people use the
"raid1" segment type in the first place.  However, RAID* cannot currently
be used in a cluster volume group - even in EX-only mode.  Thus, a complete
solution for option #2 must include the ability to activate RAID logical
volumes (and perform RAID operations) in a cluster volume group.  I've
already begun working on this.
2013-09-11 15:58:44 -05:00
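For illustration, the recommended alternative stacking (thin pool on top of raid1 rather than mirror) looks roughly like this - a sketch only, with arbitrary names and sizes:

lvcreate --type raid1 -m1 -L1G -n pool vg
lvcreate --type raid1 -m1 -L16M -n poolmeta vg
lvconvert --thinpool vg/pool --poolmetadata vg/poolmeta
lvcreate -V10G -T vg/pool -n thinlv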
Peter Rajnoha
1f4bc637b4 WHATS_NEW: one more for commit 8d1d835 2013-09-11 13:16:36 +02:00
Peter Rajnoha
fe227d14a5 WHATS_NEW: for commit 8d1d8350 and 72a9d4f 2013-09-11 13:01:01 +02:00
Peter Rajnoha
72a9d4f879 udev: override new udev default timeout of 30s to original 3min
New versions of udev changed the default event timeout to 30s
from the original 3min. This causes problems with LVM processes that
starve because of the I/O load caused by some LVM actions (e.g.
mirror/raid synchronization).

Reinstate the 3min udev timeout for now, until we optimize this
in a way that makes even the 30s timeout sufficient.
2013-09-11 12:47:38 +02:00
Jonathan Brassow
2691f1d764 RAID: Make RAID single-machine-exclusive capable in a cluster
Creation, deletion, [de]activation, repair, conversion, scrubbing
and changing operations are all now available for RAID LVs in a
cluster - provided that they are activated exclusively.

The code has been changed to ensure that no LV or sub-LV activation
is attempted cluster-wide.  This includes the often overlooked
operations of activating metadata areas for the brief time it takes
to clear them.  Additionally, some 'resume_lv' operations were
replaced with 'activate_lv_excl_local' when sub-LVs were promoted
to top-level LVs for removal, clearing or extraction.  This was
necessary because it forces the appropriate renaming actions that
occur via resume in the single-machine case, but won't happen in
a cluster due to the necessity of acquiring a lock first.

The *raid* tests have been updated to allow testing in a cluster.
For the most part, this meant creating devices with '-aey' if they
were to be converted to RAID.  (RAID requires the converting LV to
be EX because it is a condition of activation for the RAID LV in
a cluster.)
2013-09-10 16:33:22 -05:00
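A rough sketch of the pattern used in the updated tests (assuming a clustered VG with clvmd running; names and sizes are placeholders):

lvcreate -aey -L64M -n $lv1 $vg          # activate exclusively on the local node
lvconvert --type raid1 -m1 $vg/$lv1      # conversion to RAID requires the LV to be active EX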
Peter Rajnoha
8d1d83504d udev: fix pvscan --cache -aay to trigger on relevant events
This patch fixes the way the special devices are handled
(special in this context means that they're not usable
after the usual ADD event like other generic devices):

  - DM and MD devices are pvscanned only when they are just set up.
    This is the first CHANGE event that makes the device usable
    (the DM_UDEV_PRIMARY_SOURCE_FLAG is set for DM and the
     md/array_state sysfs attribute is present for MD).
    Whether the device is activated is remembered via the
    DM_ACTIVATED (for DM) and LVM_MD_PV_ACTIVATED (for MD)
    udev environment variables. This is then used to decide
    whether we should fire the pvscan on ADD event to
    support coldplugging. For any (artificial) ADD event
    generated during coldplug, the device must be already
    set up properly to fire the pvscan on it.

  - Similar for loop devices. For loop devices, only CHANGE
    events are relevant (so there's a CHANGE after the loop
    device is set up as well as detached). Whether the loop
    has just been activated is detected via loop/backing_file
    sysfs attribute presence. The activation state is remembered
    via LVM_LOOP_PV_ACTIVATED udev environment variable.

  - Do not pvscan multipath device components (underlying paths).

  - Do not pvscan RAID device components.

  - Also, set LVM_SCANNED="1" udev environment variable for
    debug purposes (it's visible in the lvmdump -u that takes
    the current udev database). This variable is set once
    the pvscan is triggered.

The table below summarises when the pvscan is triggered
(marked with X, X* means fire only if the special dev is properly set up):

      | real ADD | real CHANGE | artificial ADD | artificial CHANGE | remove
=============================================================================
DM    |          |      X      |       X*       |                   |   X
MD    |          |      X      |       X*       |                   |
loop  |          |      X      |       X*       |                   |
other |    X     |             |       X        |                   |   X
2013-09-10 16:27:58 +02:00
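To check these markers on a particular device, the udev database can be queried directly (the device name below is only an example):

udevadm info --query=property --name=/dev/md0 | grep -E 'LVM_SCANNED|LVM_MD_PV_ACTIVATED'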
Peter Rajnoha
9ba40da7ae udev: DM_ID_FS_TYPE should be ID_FS_TYPE when comparing with old value 2013-09-10 12:39:23 +02:00
Jonathan Brassow
ca51435153 Misc/RAID: Enable resume_lv to handle some renaming conflicts.
When images and their associated metadata are removed from a RAID1 LV,
the remaining sub-LVs are "shifted" down to fill the gaps.  For
example, if there is a 3-way mirror:
	[0][1][2]
and we remove device#0, the devices will be shifted down
	[1][2]
and renamed.
	[0][1]

This can create a problem for resume_lv (specifically,
dm_tree_activate_children) during the renaming process though.  This
is because it will attempt to rename the higher indexed sub-LVs first
and find that it cannot because there are currently other sub-LVs with
that name.  The solution is to check for a conflicting name before
attempting to rename.  If a conflict is found and that conflicting
sub-LV is also in the process of renaming, we can defer the current
rename until the conflicting sub-LV has renamed and cleared the
conflict.

Now that resume_lv can handle these types of rename conflicts, we can
remove the workaround in RAID that was attempting to resume a RAID1
LV from the bottom up in order to force a proper rename in ascending
order before attempting a resume on the top-level LV.  This "hack"
only worked for single-machine use cases of LVM.  Clearing this up
paves the way for exclusive activation of RAID LVs in a cluster.
2013-09-09 15:07:28 -05:00
Peter Rajnoha
9f2fc2471c udev: also inform lvmetad about lost LVM1 PV label
Addendum to 4d3b5724e0
which covered only LVM2 PV labels.
2013-09-09 13:48:27 +02:00
Zdenek Kabelac
d89b514e06 cleanup: drop within comment gcc warning
toollib.c:69:24: warning: "/*" within comment
2013-09-09 12:17:11 +02:00
Zdenek Kabelac
ac2de55d69 test: timeout when no write happens since last written line
Change the current test abort after 3 minutes to an abort after 3 minutes
without a written output line.
2013-09-09 12:17:11 +02:00
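A simplified sketch of such a watchdog (the real harness implements this itself; the function and its arguments here are placeholders):

watchdog() {
        local log=$1 pid=$2 idle=0 last=0 size
        while kill -0 "$pid" 2>/dev/null; do
                sleep 10
                size=$(stat -c %s "$log" 2>/dev/null || echo 0)
                if [ "$size" -eq "$last" ]; then
                        idle=$((idle + 10))
                        # no new output for 3 minutes - abort the test
                        [ "$idle" -ge 180 ] && kill "$pid"
                else
                        idle=0
                        last=$size
                fi
        done
}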
Zdenek Kabelac
f5832d8c49 deactivate: drop readahead calc in deactivation
Skip the readahead calculation when the device is about to be deactivated.
2013-09-07 09:13:20 +02:00
Zdenek Kabelac
0670bfeb59 thin: validation catch multiseg thin pool/volumes
Multisegment thin pools and volumes are not supported.
Catch this error path early in validation.
2013-09-07 03:32:07 +02:00
Zdenek Kabelac
655296609e thin: fix monitoring of thin pool volume
Properly skip unmonitoring of the thin pool volume in the deactivation code
path. The code makes sure that if there is any thin pool user,
the pool stays monitored with all its resources.
2013-09-07 03:31:04 +02:00
Zdenek Kabelac
4c001a7854 thin: fix resize of stacked thin pool volume
When the pool is created from a non-linear target, more complex rules
have to be used and the stacking needs to properly decode args for the _tdata
LV. Proper allocation policies are also used, according to those
set in lvm2 metadata for the data and metadata LVs.

Also properly check for an active pool and add extra code to activate it
temporarily.

With this fix it's now possible to use:

lvcreate -L20 -m2 -n pool vg  --alloc anywhere
lvcreate -L10 -m2 -n poolm vg --alloc anywhere
lvconvert --thinpool vg/pool --poolmetadata vg/poolm

lvresize -L+10 vg/pool
2013-09-07 03:24:48 +02:00
Petr Rockai
fe6b19a6a3 test: Add the 64b fc17 kernel to the mirror recovery blacklist. 2013-09-06 16:50:05 +02:00
Alasdair G Kergon
10a5838a60 toollib: tweak background forking
Log what is forked and replace #if 1 with DEBUG_CHILD.
2013-09-06 01:49:43 +01:00
Alasdair G Kergon
96880102a3 logging: Write Completed message before resetting. 2013-09-06 01:47:41 +01:00
Alasdair G Kergon
5face2010d tools: Use backgroundfork_ARG for pvscan -b
Change pvscan -b to use a new backgroundfork_ARG instead of
background_ARG so as not to affect pvmove -b and lvconvert -b.
2013-09-06 01:43:24 +01:00
Petr Rockai
374653f2b5 test: Include tests that timed out in the final summary. 2013-09-04 16:21:08 +02:00
Jonathan Brassow
cc66dedc0e pvmove: Skip pvmove of RAID, thin, snapshot, origin, and mirror LVs in cluster
pvmove of the above types should only have been enabled in single-machine
mode.
2013-09-03 13:17:01 -05:00
Petr Rockai
7918217c75 test: Fix a spurious failure in skip_if_mirror_recovery_broken. 2013-09-03 20:06:24 +02:00
Jonathan Brassow
848e8026d6 TEST: pvmove-all-segtypes.sh should not be run in a cluster 2013-09-03 10:54:42 -05:00
Peter Rajnoha
3b51f298bb reinstate: commit 82d83a01ce
It now works as intended. The source of the problem is fixed
by the previous commit d2d6a9da52e04f28e1916bcea3f9fda356b6df29.
2013-09-03 16:49:21 +02:00
Peter Rajnoha
008c33a21b tools: add -b/--background for pvscan --cache -aay
The udev daemon has recently introduced a limit on the number of udev
processes (there was no limit before). This causes a problem
when calling pvscan --cache -aay from the lvmetad udev rules which
is supposed to activate the volumes. This activation is itself
synced with udev and so it waits for the activation to complete
before the pvscan finishes. The event processing can't continue
until this pvscan call is finished.

But if we're at the limit with the udev process count, we can't
instantiate any more udev processes; all such events are queued
and so we can't process the lvm activation event for which the
pvscan is waiting.

Then we're in a deadlock since the udev process with the
pvscan --cache -aay call waits for the lvm activation udev
processing to complete, but that will never happen as there's
this limit hit with the number of udev processes.

The process with pvscan --cache -aay actually times out eventually
(3min or 30sec, depending on the version of udev).

This patch makes it possible to run the pvscan --cache -aay
in the background so the udev processing can continue and hence
we can avoid the deadlock mentioned above.
2013-09-03 16:49:21 +02:00
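For illustration, the background form as it might be invoked from a udev rule handler (the major/minor values are placeholders):

pvscan -b --cache -aay --major 253 --minor 2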
Petr Rockai
ea1e8166d5 test: Skip tests involving mirror recovery on known bad kernels. 2013-09-03 16:24:32 +02:00
Peter Rajnoha
6a5838a69c pvscan: show -aay with --cache for help 2013-09-03 09:51:30 +02:00
Peter Rajnoha
44c1a02c18 revert: commit 82d83a01ce
The commit 82d83a01ce
"autoactivation: refresh existing VG before autoactivation"
causes problems (dangling udev_sync cookies, slow processing
of the pvscan --cache --major --minor call from udev rules)
when the autoactivation handler is run in parallel on
several PVs that belong to the same VG. Revert this patch
until the exact source of the problem is found and then
properly fixed and handled.
2013-09-02 13:53:27 +02:00
Zdenek Kabelac
039585bb0d tests: test pvmove behavior after restart
Simulate a crash of the system and a restarted pvmove after the next VG
activation.

The test catches a regression introduced in 2.02.99 by the partial tree
creation changes.
2013-08-31 21:40:51 +02:00
Zdenek Kabelac
7cc36a93f6 tests: add delay_dev
Add a function to create a slower-responding device.

Useful for testing things which need to happen during an ongoing
operation - with a 'delayed' device, much smaller device sizes
are needed and it is much more deterministic (though still not optimal).
2013-08-31 21:40:51 +02:00
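A rough sketch of how such a device can be built on the device-mapper 'delay' target (the backing device, size handling and 100 ms delay are arbitrary examples, not the actual helper):

backing=/dev/loop0                        # any block device to slow down
size=$(blockdev --getsz "$backing")       # size in 512-byte sectors
# every I/O to /dev/mapper/delayed is held back by 100 ms
dmsetup create delayed --table "0 $size delay $backing 0 100"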
Zdenek Kabelac
350cf7968c libdm: new name can't be empty
Do not allow passing '' names to the kernel.

This check was also missing in the kernel, so it was possible
to create a device with an '' name.  This then confused the dmsetup tool,
since such a name is unexpected and unsupported. To remove
such a name from the table, the user has to use -j -m to specify which
device should be removed.

This patch removes the possibility of running this operation:

dmsetup rename existingdev ''

after which commands like 'dmsetup table' fail.
This patch prohibits the use of such a name.
2013-08-31 21:40:28 +02:00
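If a device has already ended up with an empty name, it can still be removed by addressing it by major:minor instead of by name (the numbers below are only an example):

dmsetup info -c -o major,minor,name      # locate the nameless device
dmsetup remove -j 253 -m 4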
David Teigland
eee3aeeb61 test: fix process-each-duplicate-vgnames
After enable_dev, the following commands were not
consistently seeing the PV on it.

Alasdair explained: "whenever enabling/disabling devs
outside the tools (and you aren't trying to test how
the tools cope with suddenly appearing/disappearing
devices) use "vgscan""
2013-08-30 11:53:10 -05:00
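A minimal sketch of the resulting pattern (aux enable_dev/disable_dev are the test-suite helpers referenced above; the surrounding commands are illustrative):

aux disable_dev "$dev1"
# ... exercise the duplicate-VG-name handling ...
aux enable_dev "$dev1"
vgscan          # rescan so the following commands consistently see the PV again
pvs "$dev1"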
Peter Rajnoha
c36dcc1728 man: lvmdump -u -l 2013-08-29 14:20:57 +02:00
Alasdair G Kergon
78647da1c6 toolcontext: Only reopen stdin if readable.
Don't fail when running lvm commands under versions of nohup that set
up stdin as O_WRONLY!
2013-08-28 23:55:14 +01:00