Mirror of git://sourceware.org/git/lvm2.git (synced 2024-12-22 17:35:59 +03:00)
Commit Graph

8199 Commits

Author SHA1 Message Date
Zdenek Kabelac
2a6abcb80a tests: singlenode updates
Add a more 'realistic' simulation of dlm locking.
The previous version was not capable of maintaining multiple locks.
The current version doesn't handle multiple queues for locks,
so the ordering is different.
2013-09-12 10:40:39 +02:00
Zdenek Kabelac
7b5f2e7f34 clvmd: add missing debug newline
Just a missing newline.
2013-09-12 10:38:49 +02:00
Zdenek Kabelac
b8ea27ac97 cleanup: hide gcc warning
Older gcc gives a misleading warning:

metadata/lv_manip.c:4018: warning: ‘seg’ may be used uninitialized in
this function

But warning-free compilation is better.
2013-09-11 23:40:45 +02:00
Jonathan Brassow
82228acfc9 Mirror/Thin: Disallow thinpools on mirror logical volumes
The same corner cases that exist for snapshots on mirrors exist for
any logical volume layered on top of mirror.  (One example is when
a mirror image fails and a non-repair LVM command is the first to
detect it via label reading.  In this case, the LVM command will hang
and prevent the necessary LVM repair command from running.)  When
a better alternative exists, it makes no sense to allow a new target
to stack on mirrors as a new feature.  Since RAID is now capable of
running EX in a cluster and thin is not active-active aware, it makes
sense to pair these two rather than mirror+thinpool.

As further background, here are some additional comments that I made
when addressing a bug related to mirror+thinpool:
(https://bugzilla.redhat.com/show_bug.cgi?id=919604#c9)
I am going to disallow thin* on top of mirror logical volumes.
Users will have to use the "raid1" segment type if they want this.

This bug has come down to a choice between:
1) Disallowing thin-LVs from being used as PVs.
2) Disallowing thinpools on top of mirrors.

The problem is that the code in dev_manager.c:device_is_usable() is unable
to tell whether there is a mirror device lower in the stack from the device
being checked.  Pretty much anything layered on top of a mirror will suffer
from this problem.  (Snapshots are a good example of this; and option #1
above has been chosen to deal with them.  This can also be seen in
dev_manager.c:device_is_usable().)  When a mirror failure occurs, the
kernel blocks all I/O to it.  If there is an LVM command that comes along
to do the repair (or a different operation that requires label reading), it
would normally avoid the mirror when it sees that it is blocked.  However,
if there is a snapshot or a thin-LV that is on a mirror, the above code
will not detect the mirror underneath and will issue label reading I/O.
This causes the command to hang.

Choosing #1 would mean that thin-LVs could never be used as PVs - even if
they are stacked on something other than mirrors.

Choosing #2 means that thinpools can never be placed on mirrors.  This is
probably better than we think, since it is preferred that people use the
"raid1" segment type in the first place.  However, RAID* cannot currently
be used in a cluster volume group - even in EX-only mode.  Thus, a complete
solution for option #2 must include the ability to activate RAID logical
volumes (and perform RAID operations) in a cluster volume group.  I've
already begun working on this.
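
For illustration, the preferred stacking might be set up like this
(a sketch only; the VG and LV names are hypothetical):

lvcreate --type raid1 -m 1 -L 1G -n pool vg
lvconvert --thinpool vg/pool
lvcreate -T vg/pool -V 10G -n thinlv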
2013-09-11 15:58:44 -05:00
Peter Rajnoha
1f4bc637b4 WHATS_NEW: one more for commit 8d1d835 2013-09-11 13:16:36 +02:00
Peter Rajnoha
fe227d14a5 WHATS_NEW: for commit 8d1d8350 and 72a9d4f 2013-09-11 13:01:01 +02:00
Peter Rajnoha
72a9d4f879 udev: override new udev default timeout of 30s to original 3min
New versions of udev changed the default event timeout to 30s
from the original 3min. This causes problems for LVM processes that
starve because of the I/O load caused by some LVM actions (e.g.
mirror/raid synchronization).

Reinstate the 3min udev timeout for now until we optimize this
in a way that even the 30s timeout is sufficient.
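
A sketch of the kind of rule override involved, assuming the
event_timeout rule option that udev supported at the time (the rule
file placement is illustrative):

# udev rule (sketch)
OPTIONS+="event_timeout=180"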
2013-09-11 12:47:38 +02:00
Jonathan Brassow
2691f1d764 RAID: Make RAID single-machine-exclusive capable in a cluster
Creation, deletion, [de]activation, repair, conversion, scrubbing
and changing operations are all now available for RAID LVs in a
cluster - provided that they are activated exclusively.

The code has been changed to ensure that no LV or sub-LV activation
is attempted cluster-wide.  This includes the often overlooked
operations of activating metadata areas for the brief time it takes
to clear them.  Additionally, some 'resume_lv' operations were
replaced with 'activate_lv_excl_local' when sub-LVs were promoted
to top-level LVs for removal, clearing or extraction.  This was
necessary because it forces the appropriate renaming actions that
occur via resume in the single-machine case, but won't happen in
a cluster due to the necessity of acquiring a lock first.

The *raid* tests have been updated to allow testing in a cluster.
For the most part, this meant creating devices with '-aey' if they
were to be converted to RAID.  (RAID requires the converting LV to
be EX because it is a condition of activation for the RAID LV in
a cluster.)
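
As a hedged illustration, exclusive activation in a cluster might look
like this (VG/LV names are hypothetical):

lvcreate --type raid1 -m 1 -L 1G -n rlv -aey vg
lvchange -aey vg/rlv   # activate exclusively on the local node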
2013-09-10 16:33:22 -05:00
Peter Rajnoha
8d1d83504d udev: fix pvscan --cache -aay to trigger on relevant events
This patch fixes the way the special devices are handled
(special in this context means that they're not usable
after the usual ADD event like other generic devices):

  - DM and MD devices are pvscanned only when they are just set up.
    This is the first CHANGE event that makes the device  usable
    (the DM_UDEV_PRIMARY_SOURCE_FLAG is set for DM and the
     md/array_state sysfs attribute is present for MD).
    Whether the device is activated is remembered via
    DM_ACTIVATED (for DM) and LVM_MD_PV_ACTIVATED (for MD)
    udev environment variable. This is then used to decide
    whether we should fire the pvscan on ADD event to
    support coldplugging. For any (artificial) ADD event
    generated during coldplug, the device must be already
    set up properly to fire the pvscan on it.

  - Similarly, for loop devices only CHANGE events are relevant
    (so there's a CHANGE after the loop device is set up as well
    as after it is detached). Whether the loop
    has just been activated is detected via loop/backing_file
    sysfs attribute presence. The activation state is remembered
    via LVM_LOOP_PV_ACTIVATED udev environment variable.

  - Do not pvscan multipath device components (underlying paths).

  - Do not pvscan RAID device components.

  - Also, set LVM_SCANNED="1" udev environment variable for
    debug purposes (it's visible in the lvmdump -u that takes
    the current udev database). This variable is set once
    the pvscan is triggered.

The table below summarises when the pvscan is triggered
(marked with X; X* means fire only if the special dev is properly set up):

      | real ADD | real CHANGE | artificial ADD | artificial CHANGE | remove
=============================================================================
DM    |          |      X      |       X*       |                   |   X
MD    |          |      X      |       X*       |                   |
loop  |          |      X      |       X*       |                   |
other |    X     |             |       X        |                   |   X
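
For example, the rules end up firing roughly this call for a device
that passes the checks above (the major/minor numbers are illustrative):

pvscan --cache -aay --major 253 --minor 2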
2013-09-10 16:27:58 +02:00
Peter Rajnoha
9ba40da7ae udev: DM_ID_FS_TYPE should be ID_FS_TYPE when comparing with old value 2013-09-10 12:39:23 +02:00
Jonathan Brassow
ca51435153 Misc/RAID: Enable resume_lv to handle some renaming conflicts.
When images and their associated metadata are removed from a RAID1 LV,
the remaining sub-LVs are "shifted" down to fill the gaps.  For
example, if there is a 3-way mirror:
	[0][1][2]
and we remove device#0, the devices will be shifted down
	[1][2]
and renamed.
	[0][1]

This can create a problem for resume_lv (specifically,
dm_tree_activate_children) during the renaming process, though.  This
is because it will attempt to rename the higher-indexed sub-LVs first
and find that it cannot because there are currently other sub-LVs with
that name.  The solution is to check for a conflicting name before
attempting to rename.  If a conflict is found and that conflicting
sub-LV is also in the process of renaming, we can defer the current
rename until the conflicting sub-LV has renamed and cleared the
conflict.

Now that resume_lv can handle these types of rename conflicts, we can
remove the workaround in RAID that was attempting to resume a RAID1
LV from the bottom-up in order to force a proper rename in ascending
order before attempting a resume on the top-level LV.  This "hack"
only worked for single machine use-cases of LVM.  Clearing this up
paves the way for exclusive activation of RAID LVs in a cluster.
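
A hedged sketch of the shifting scenario (names and PVs hypothetical):

lvcreate --type raid1 -m 2 -L 100M -n lv vg   # sub-LVs lv_rimage_0..2
lvconvert -m 1 vg/lv /dev/sda1                # remove one image
lvs -a vg                                     # survivors renamed to lv_rimage_0..1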
2013-09-09 15:07:28 -05:00
Peter Rajnoha
9f2fc2471c udev: also inform lvmetad about lost LVM1 PV label
Addendum to 4d3b5724e0
which covered only LVM2 PV labels.
2013-09-09 13:48:27 +02:00
Zdenek Kabelac
d89b514e06 cleanup: drop within comment gcc warning
toollib.c:69:24: warning: "/*" within comment
2013-09-09 12:17:11 +02:00
Zdenek Kabelac
ac2de55d69 test: timeout when no write happens since last written line
Change the current test abort after 3 minutes to an abort after
3 minutes without a written output line.
2013-09-09 12:17:11 +02:00
Zdenek Kabelac
f5832d8c49 deactivate: drop readahead calc in deactivation
Skip the readahead calculation when the device will be deactivated.
2013-09-07 09:13:20 +02:00
Zdenek Kabelac
0670bfeb59 thin: validation catches multisegment thin pools/volumes
Multisegment thin pools and volumes are not supported.
Catch such an error code path early.
2013-09-07 03:32:07 +02:00
Zdenek Kabelac
655296609e thin: fix monitoring of thin pool volume
Properly skip unmonitoring of the thin pool volume in the deactivation
code path. The code makes sure that if there is any thin pool user,
the pool stays monitored with all its resources.
2013-09-07 03:31:04 +02:00
Zdenek Kabelac
4c001a7854 thin: fix resize of stacked thin pool volume
When the pool is created from a non-linear target, more complex rules
have to be used and stacking needs to properly decode args for the
_tdata LV. Proper allocation policies are also used according to those
set in lvm2 metadata for the data and metadata LVs.

Also properly check for an active pool, with extra code to activate it
temporarily.

With this fix it's now possible to use:

lvcreate -L20 -m2 -n pool vg  --alloc anywhere
lvcreate -L10 -m2 -n poolm vg --alloc anywhere
lvconvert --thinpool vg/pool --poolmetadata vg/poolm

lvresize -L+10 vg/pool
2013-09-07 03:24:48 +02:00
Petr Rockai
fe6b19a6a3 test: Add the 64b fc17 kernel to the mirror recovery blacklist. 2013-09-06 16:50:05 +02:00
Alasdair G Kergon
10a5838a60 toollib: tweak background forking
Log what is forked and replace #if 1 with DEBUG_CHILD.
2013-09-06 01:49:43 +01:00
Alasdair G Kergon
96880102a3 logging: Write Completed message before resetting. 2013-09-06 01:47:41 +01:00
Alasdair G Kergon
5face2010d tools: Use backgroundfork_ARG for pvscan -b
Change pvscan -b to use a new backgroundfork_ARG instead of
background_ARG so as not to affect pvmove -b and lvconvert -b.
2013-09-06 01:43:24 +01:00
Petr Rockai
374653f2b5 test: Include tests that timed out in the final summary. 2013-09-04 16:21:08 +02:00
Jonathan Brassow
cc66dedc0e pvmove: Skip pvmove of RAID, thin, snapshot, origin, and mirror LVs in cluster
pvmove of the above types should only have been enabled in
single-machine mode.
2013-09-03 13:17:01 -05:00
Petr Rockai
7918217c75 test: Fix a spurious failure in skip_if_mirror_recovery_broken. 2013-09-03 20:06:24 +02:00
Jonathan Brassow
848e8026d6 TEST: pvmove-all-segtypes.sh should not be run in a cluster 2013-09-03 10:54:42 -05:00
Peter Rajnoha
3b51f298bb reinstate: commit 82d83a01ce
It now works as expected. The source of the problem is fixed
by the previous commit d2d6a9da52e04f28e1916bcea3f9fda356b6df29.
2013-09-03 16:49:21 +02:00
Peter Rajnoha
008c33a21b tools: add -b/--background for pvscan --cache -aay
The udev daemon has recently introduced a limit on the number of
udev processes (there was no limit before). This causes a problem
when calling pvscan --cache -aay in the lvmetad udev rules, which
is supposed to activate the volumes. This activation is itself
synced with udev and so it waits for the activation to complete
before the pvscan finishes. The event processing can't continue
until this pvscan call is finished.

But if we're at the limit of the udev process count, we can't
instantiate any more udev processes; all such events are queued
and so we can't process the lvm activation event for which the
pvscan is waiting.

Then we're in a deadlock since the udev process with the
pvscan --cache -aay call waits for the lvm activation udev
processing to complete, but that will never happen because the
udev process limit has been hit.

The process with pvscan --cache -aay actually times out eventually
(3min or 30sec, depending on the version of udev).

This patch makes it possible to run the pvscan --cache -aay
in the background so the udev processing can continue and hence
we can avoid the deadlock mentioned above.
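
A usage sketch mirroring the udev rule call (the device numbers are
illustrative):

pvscan --cache -aay -b --major 253 --minor 3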
2013-09-03 16:49:21 +02:00
Petr Rockai
ea1e8166d5 test: Skip tests involving mirror recovery on known bad kernels. 2013-09-03 16:24:32 +02:00
Peter Rajnoha
6a5838a69c pvscan: show -aay with --cache for help 2013-09-03 09:51:30 +02:00
Peter Rajnoha
44c1a02c18 revert: commit 82d83a01ce
The commit 82d83a01ce
"autoactivation: refresh existing VG before autoactivation"
causes problems (dangling udev_sync cookies, slow processing
of the pvscan --cache --major --minor call from udev rules)
when the autoactivation handler is run in parallel on
several PVs that belong to the same VG. Revert this patch
until the exact source of the problem is found and then
properly fixed and handled.
2013-09-02 13:53:27 +02:00
Zdenek Kabelac
039585bb0d tests: test pvmove behavior after restart
Simulate a crash of the system and a restarted pvmove after the next
VG activation.

The test catches a regression introduced in 2.02.99 by the partial
tree creation changes.
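
A hedged sketch of the tested scenario (the PV name is hypothetical):

pvmove /dev/sda1    # interrupted by the simulated crash
vgchange -ay vg     # the next activation restarts the pending pvmove
pvmove --abort      # alternatively, abandon the move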
2013-08-31 21:40:51 +02:00
Zdenek Kabelac
7cc36a93f6 tests: add delay_dev
Function to create a device with slower response.

Useful for testing things which need to happen during an ongoing
operation - with a 'delayed' device, much smaller device sizes are
needed and behaviour is much more deterministic (though still not optimal).
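
The helper presumably builds on the device-mapper 'delay' target;
a minimal sketch with hypothetical names and sizes:

dmsetup create delayed --table "0 2097152 delay /dev/loop0 0 100"  # 100ms delay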
2013-08-31 21:40:51 +02:00
Zdenek Kabelac
350cf7968c libdm: new name can't be empty
Do not allow passing '' names to the kernel.

This check was also missing in the kernel, which allowed a device
with the name '' to be created.  This then confused the dmsetup tool,
since such a name is unexpected and unsupported. To remove such a
name from the table, the user has to use -j -m to specify which
device should be removed.

Previously it was possible to run this operation:

dmsetup rename existingdev ''

after which commands like 'dmsetup table' fail.
This patch prohibits the use of such a name.
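
For reference, such a device can still be removed by number
(the major/minor values are illustrative):

dmsetup remove -j 253 -m 4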
2013-08-31 21:40:28 +02:00
David Teigland
eee3aeeb61 test: fix process-each-duplicate-vgnames
After enable_dev, the following commands were not
consistently seeing the PV on it.

Alasdair explained, "whenever enabling/disabling devs
outside the tools (and you aren't trying to test how
the tools cope with suddenly appearing/disappearing
devices) use 'vgscan'".
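
A hedged sketch of the resulting test pattern ('aux' is assumed to be
the test-suite helper wrapper; names are illustrative):

aux enable_dev "$dev1"
vgscan   # rescan so the following commands see the PV again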
2013-08-30 11:53:10 -05:00
Peter Rajnoha
c36dcc1728 man: lvmdump -u -l 2013-08-29 14:20:57 +02:00
Alasdair G Kergon
78647da1c6 toolcontext: Only reopen stdin if readable.
Don't fail when running lvm commands under versions of nohup that set
up stdin as O_WRONLY!
2013-08-28 23:55:14 +01:00
Alasdair G Kergon
c0f987949b activation: Fix segfault with inactive pvmove LV.
Set flag to avoid recursion back through an inactive pvmove LV when
populating deptree.
2013-08-28 22:56:23 +01:00
Peter Rajnoha
0acd7173d1 systemd: lvm2-activation-generator: remove default dir if args not specified and require all args to be given
Remove the default "/tmp" destination directory used when no args
are specified for lvm2-activation-generator. Require all the
args to be specified directly for proper functionality.
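
For context, systemd generators are invoked with three target
directories; a sketch of a direct invocation (paths illustrative):

/usr/lib/systemd/system-generators/lvm2-activation-generator \
    /run/systemd/generator /run/systemd/generator.early /run/systemd/generator.late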
2013-08-28 16:06:51 +02:00
Peter Rajnoha
c9258e7f2e man: lvmdump: add doc for -l and -u 2013-08-28 14:59:15 +02:00
Petr Rockai
8b3664dc8d test: Set the timeout to 3 minutes (was 5s accidentally). 2013-08-28 14:53:23 +02:00
Petr Rockai
64fe17dc94 test: Add a new "check_full" target, which also tests with real /dev.
The original "check" target stays confined to a local device directory, while
check_full does 6 flavours, 3 with a local device directory and 3 with the
global /dev directory (the latter are prefixed with "s" for
"system"). I.e.: normal, cluster, lvmetad, snormal, scluster, slvmetad.
2013-08-28 14:53:23 +02:00
Petr Rockai
b516a72b11 test: Check for flavoured variables earlier.
This is necessary to make LVM_TEST_DEVDIR flavourable, and in turn have flavours
that use the global /dev (which can in turn be managed by udev).
2013-08-28 14:53:23 +02:00
Petr Rockai
d07cf851e5 test: Remove a redundant drain() from the timeout path. 2013-08-28 14:53:23 +02:00
Petr Rockai
c1217e6881 test: Make timeouts a little more robust & verbose. 2013-08-28 14:53:23 +02:00
Petr Rockai
9a1da7b262 TEST: Add a timeout to the harness, killing tests after 2 minutes. 2013-08-28 14:53:23 +02:00
Jonathan Brassow
e72b2d047d TEST: Add tests for lvchange actions of RAID under thin
Patch includes RAID1,4,5,6,10 tests for:
- setting writemostly/writebehind
* syncaction changes (i.e. scrubbing operations)
- refresh (i.e. reviving devices after transient failures)
- setting recovery rate (sync I/O throttling)
while the RAID LVs are under a thin-pool (both data and metadata)

* not fully tested because I haven't found a way to force bad
  blocks to be noticed in the testsuite yet.  Works just fine
  when dealing with "real" devices.
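
A hedged sketch of the operations exercised (VG/LV/PV names are
hypothetical):

lvchange --writemostly /dev/sda1 vg/raid1_lv
lvchange --writebehind 512 vg/raid1_lv
lvchange --syncaction check vg/raid_lv
lvchange --refresh vg/raid_lv
lvchange --maxrecoveryrate 128 vg/raid_lv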
2013-08-27 16:46:40 -05:00
Jonathan Brassow
0799e81ee0 test: pvmove tests for all the different segment types.
Test moving linear, mirror, snapshot, RAID1,5,10, thinpool, thin
and thin on RAID.  Perform the moves along with a dummy LV and
also without the dummy LV by specifying a logical volume name as
an argument to pvmove.
2013-08-26 16:38:54 -05:00
Jonathan Brassow
2ef48b91ed pvmove: Allow moving snapshot/origin. Disallow converting and merging LVs
The patch allows the user to also pvmove snapshots and origin logical
volumes.  This means pvmove should be able to move all segment types.
I have, however, disallowed moving LVs that are being converted or merged.
2013-08-26 16:36:30 -05:00
Jonathan Brassow
caa77b33f2 pvmove: Fix inability to specify LV name when moving RAID, mirror, or thin LV
Top-level LVs (like RAID, mirror or thin) are ignored when determining which
portions of an LV to pvmove.  If the user specified the name of an LV to
move and it was one of the above types, it would be skipped.  The code would
never move on to check whether its sub-LVs needed moving because their names
did not match what the user specified.

The solution is to check whether a sub-LV is part of the LV whose name was
specified by the user - not just whether there is a name match.
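
A usage sketch of naming the LV to move (names hypothetical):

pvmove -n thinpool /dev/sda1 /dev/sdb1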
2013-08-26 14:12:31 -05:00