mirror of git://sourceware.org/git/lvm2.git synced 2024-12-22 17:35:59 +03:00
Commit Graph

7986 Commits

Author SHA1 Message Date
Zdenek Kabelac
ac2de55d69 test: timeout when no write happens since last written line
Change the current test abort: instead of aborting after 3 minutes of
total runtime, abort after 3 minutes without a newly written output line.
2013-09-09 12:17:11 +02:00
Zdenek Kabelac
f5832d8c49 deactivate: drop readahead calc in deactivation
Skip the readahead calculation when the device is about to be deactivated.
2013-09-07 09:13:20 +02:00
Zdenek Kabelac
0670bfeb59 thin: validation catch multiseg thin pool/volumes
Multisegment thin pools and volumes are not supported.
Catch this error code path early.
2013-09-07 03:32:07 +02:00
Zdenek Kabelac
655296609e thin: fix monitoring of thin pool volume
Properly skip unmonitoring of the thin pool volume in the deactivation
code path. The code makes sure that as long as the thin pool has any
user, it stays monitored together with all its resources.
2013-09-07 03:31:04 +02:00
Zdenek Kabelac
4c001a7854 thin: fix resize of stacked thin pool volume
When the pool is created from a non-linear target, more complex rules
have to be used and the stacking code needs to properly decode the
arguments for the _tdata LV. Proper allocation policies are also
applied, according to those set in the lvm2 metadata for the data and
metadata LVs.

Also properly check for an active pool, and add extra code to
activate it temporarily.

With this fix it's now possible to use:

lvcreate -L20 -m2 -n pool vg  --alloc anywhere
lvcreate -L10 -m2 -n poolm vg --alloc anywhere
lvconvert --thinpool vg/pool --poolmetadata vg/poolm

lvresize -L+10 vg/pool
2013-09-07 03:24:48 +02:00
Petr Rockai
fe6b19a6a3 test: Add the 64b fc17 kernel to the mirror recovery blacklist. 2013-09-06 16:50:05 +02:00
Alasdair G Kergon
10a5838a60 toollib: tweak background forking
Log what is forked and replace #if 1 with DEBUG_CHILD.
2013-09-06 01:49:43 +01:00
Alasdair G Kergon
96880102a3 logging: Write Completed message before resetting. 2013-09-06 01:47:41 +01:00
Alasdair G Kergon
5face2010d tools: Use backgroundfork_ARG for pvscan -b
Change pvscan -b to use a new backgroundfork_ARG instead of
background_ARG so as not to affect pvmove -b and lvconvert -b.
2013-09-06 01:43:24 +01:00
Petr Rockai
374653f2b5 test: Include tests that timed out in the final summary. 2013-09-04 16:21:08 +02:00
Jonathan Brassow
cc66dedc0e pvmove: Skip pvmove of RAID, thin, snapshot, origin, and mirror LVs in cluster
pvmove of the above types should only have been enabled in single machine
mode.
2013-09-03 13:17:01 -05:00
Petr Rockai
7918217c75 test: Fix a spurious failure in skip_if_mirror_recovery_broken. 2013-09-03 20:06:24 +02:00
Jonathan Brassow
848e8026d6 TEST: pvmove-all-segtypes.sh should not be run in a cluster 2013-09-03 10:54:42 -05:00
Peter Rajnoha
3b51f298bb reinstate: commit 82d83a01ce
It now works as intended. The source of the problem is fixed
by the previous commit d2d6a9da52e04f28e1916bcea3f9fda356b6df29.
2013-09-03 16:49:21 +02:00
Peter Rajnoha
008c33a21b tools: add -b/--background for pvscan --cache -aay
The udev daemon has recently introduced a limit on the number of udev
processes (there was no limit before). This causes a problem when
pvscan --cache -aay is called from the lvmetad udev rules to activate
the volumes. The activation is itself synced with udev, so pvscan
waits for the activation to complete before it finishes, and event
processing can't continue until this pvscan call is finished.

But if we're already at the limit on the udev process count, we can't
instantiate any more udev processes; all such events are queued, so
the lvm activation event the pvscan is waiting for can't be processed.

We're then in a deadlock: the udev process running the
pvscan --cache -aay call waits for the lvm activation udev processing
to complete, but that can never happen because the limit on the
number of udev processes has been hit.

The process with pvscan --cache -aay does time out eventually
(3 min or 30 s, depending on the udev version).

This patch makes it possible to run pvscan --cache -aay in the
background so that udev processing can continue, avoiding the
deadlock described above.
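A hedged illustration of how the new option changes the call made from
the udev rules (the major/minor numbers are placeholders and the exact
rule wording may differ):

pvscan --cache -aay --major 253 --minor 3      # old: blocks until the synced activation completes
pvscan -b --cache -aay --major 253 --minor 3   # new: forks to the background so event processing continues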
2013-09-03 16:49:21 +02:00
Petr Rockai
ea1e8166d5 test: Skip tests involving mirror recovery on known bad kernels. 2013-09-03 16:24:32 +02:00
Peter Rajnoha
6a5838a69c pvscan: show -aay with --cache for help 2013-09-03 09:51:30 +02:00
Peter Rajnoha
44c1a02c18 revert: commit 82d83a01ce
The commit 82d83a01ce
"autoactivation: refresh existing VG before autoactivation"
causes problems (dangling udev_sync cookies, slow processing
of the pvscan --cache --major --minor call from udev rules)
when the autoactivation handler is run in parallel on
several PVs that belong to the same VG. Revert this patch
until the exact source of the problem is found and then
properly fixed and handled.
2013-09-02 13:53:27 +02:00
Zdenek Kabelac
039585bb0d tests: test pvmove behavior after restart
Simulate a system crash and a restarted pvmove after the next VG
activation.

The test catches a regression introduced in 2.02.99 by the partial
tree creation changes.
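A rough sketch of the scenario being exercised (not the literal test
script; device and VG names are invented):

pvmove -b "$dev1" "$dev2"     # start a move in the background
# simulate a crash before the move finishes (e.g. kill lvm, tear down the dm devices)
vgchange -an $vg
vgchange -ay $vg              # the next activation should restart the interrupted pvmove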
2013-08-31 21:40:51 +02:00
Zdenek Kabelac
7cc36a93f6 tests: add delay_dev
Add a function to create a device that responds more slowly.

Useful for testing things that need to happen during an ongoing
operation: with a 'delayed' device, much smaller device sizes are
needed and the test is much more deterministic (though still not optimal).
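The helper presumably builds on the device-mapper 'delay' target; a
hand-rolled equivalent might look roughly like this (device name and
the 100 ms delay are illustrative):

SIZE=$(blockdev --getsz /dev/vg/lvol0)
echo "0 $SIZE delay /dev/vg/lvol0 0 100" | dmsetup create delayed
# I/O to /dev/mapper/delayed now completes ~100 ms later than on the backing device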
2013-08-31 21:40:51 +02:00
Zdenek Kabelac
350cf7968c libdm: new name can't be empty
Do not allow passing '' names to the kernel.

This check was also missing in the kernel, so it was possible to
create a device with the name ''. That confused the dmsetup tool,
since such a name is unexpected and unsupported; to remove it from
the table, the user has to use -j and -m to specify which device
should be removed.

Previously it was possible to run:

dmsetup rename existingdev ''

after which commands like 'dmsetup table' fail. This patch prohibits
such names.
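For reference, a device left with an empty name can only be addressed
by its device number, along these lines (major/minor are illustrative):

dmsetup remove -j 253 -m 4    # remove the nameless device by major:minor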
2013-08-31 21:40:28 +02:00
David Teigland
eee3aeeb61 test: fix process-each-duplicate-vgnames
After enable_dev, the following commands did not consistently
see the PV on it.

Alasdair explained: "whenever enabling/disabling devs outside
the tools (and you aren't trying to test how the tools cope
with suddenly appearing/disappearing devices) use vgscan".
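In test-suite terms the fix presumably amounts to something like this
(helper names follow the usual lvm2 test conventions; treat it as a
sketch, not the committed diff):

aux enable_dev "$dev2"
vgscan                        # rescan so the re-enabled PV is seen consistently
pvs "$dev2"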
2013-08-30 11:53:10 -05:00
Peter Rajnoha
c36dcc1728 man: lvmdump -u -l 2013-08-29 14:20:57 +02:00
Alasdair G Kergon
78647da1c6 toolcontext: Only reopen stdin if readable.
Don't fail when running lvm commands under versions of nohup that set
up stdin as O_WRONLY!
2013-08-28 23:55:14 +01:00
Alasdair G Kergon
c0f987949b activation: Fix segfault with inactive pvmove LV.
Set flag to avoid recursion back through an inactive pvmove LV when
populating deptree.
2013-08-28 22:56:23 +01:00
Peter Rajnoha
0acd7173d1 systemd: lvm2-activation-generator: remove default dir if args not specified and require all args to be given
Remove default "/tmp" as destination directory if no args
specified for lvm2-activation-generator. Require all the
args to be specified directly for proper functionality.
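Systemd passes generators three output directories (normal, early,
late); after this change lvm2-activation-generator insists on getting
all of them instead of falling back to /tmp. A hedged example
invocation (paths illustrative):

/usr/lib/systemd/system-generators/lvm2-activation-generator \
        /run/systemd/generator /run/systemd/generator.early /run/systemd/generator.late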
2013-08-28 16:06:51 +02:00
Peter Rajnoha
c9258e7f2e man: lvmdump: add doc for -l and -u 2013-08-28 14:59:15 +02:00
Petr Rockai
8b3664dc8d test: Set the timeout to 3 minutes (was 5s accidentally). 2013-08-28 14:53:23 +02:00
Petr Rockai
64fe17dc94 test: Add a new "check_full" target, which also tests with real /dev.
The original "check" target stays confined to a local device directory, while
check_full does 6 flavours, 3 with a local device directory and 3 with the
global /dev directory (the latter are prefixed with "s" for
"system"). I.e.: normal, cluster, lvmetad, snormal, scluster, slvmetad.
2013-08-28 14:53:23 +02:00
Petr Rockai
b516a72b11 test: Check for flavoured variables earlier.
This is necessary to make LVM_TEST_DEVDIR flavourable, and in turn have flavours
that use the global /dev (which can in turn be managed by udev).
2013-08-28 14:53:23 +02:00
Petr Rockai
d07cf851e5 test: Remove a redundant drain() from the timeout path. 2013-08-28 14:53:23 +02:00
Petr Rockai
c1217e6881 test: Make timeouts a little more robust & verbose. 2013-08-28 14:53:23 +02:00
Petr Rockai
9a1da7b262 TEST: Add a timeout to the harness, killing tests after 2 minutes. 2013-08-28 14:53:23 +02:00
Jonathan Brassow
e72b2d047d TEST: Add tests for lvchange actions of RAID under thin
Patch includes RAID1,4,5,6,10 tests for:
- setting writemostly/writebehind
* syncaction changes (i.e. scrubbing operations)
- refresh (i.e. reviving devices after transient failures)
- setting recovery rate (sync I/O throttling)
while the RAID LVs are under a thin-pool (both data and metadata)

* not fully tested because I haven't found a way to force bad
  blocks to be noticed in the testsuite yet.  Works just fine
  when dealing with "real" devices.
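A hedged sampling of the operations these tests exercise, run against
the thin pool's hidden RAID data LV (VG, LV and device names invented):

lvchange --writemostly "$dev2" $vg/pool_tdata
lvchange --writebehind 512     $vg/pool_tdata
lvchange --syncaction check    $vg/pool_tdata
lvchange --refresh             $vg/pool_tdata
lvchange --minrecoveryrate 8   $vg/pool_tdata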
2013-08-27 16:46:40 -05:00
Jonathan Brassow
0799e81ee0 test: pvmove tests for all the different segment types.
Test moving linear, mirror, snapshot, RAID1,5,10, thinpool, thin
and thin on RAID.  Perform the moves along with a dummy LV and
also without the dummy LV by specifying a logical volume name as
an argument to pvmove.
2013-08-26 16:38:54 -05:00
Jonathan Brassow
2ef48b91ed pvmove: Allow moving snapshot/origin. Disallow converting and merging LVs
The patch allows the user to also pvmove snapshot and origin logical
volumes.  This means pvmove should be able to move all segment types.
I have, however, disallowed moving LVs that are being converted or merged.
2013-08-26 16:36:30 -05:00
Jonathan Brassow
caa77b33f2 pvmove: Fix inability to specify LV name when moving RAID, mirror, or thin LV
Top-level LVs (like RAID, mirror or thin) are ignored when determining which
portions of an LV to pvmove.  If the user specified the name of an LV to
move and it was one of the above types, it would be skipped.  The code would
never move on to check whether its sub-LVs needed moving because their names
did not match what the user specified.

The solution is to check whether a sub-LV is part of the LV whose name
was specified by the user, not just whether there is a name match.
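After the fix, naming the top-level LV behaves as expected; a hedged
example (names invented):

pvmove -n my_raid1 /dev/sdb1 /dev/sdc1   # extents of my_raid1's sub-LVs on /dev/sdb1 are moved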
2013-08-26 14:12:31 -05:00
Peter Rajnoha
d34ab5e0d3 WHATS_NEW: for 4d3b5724e0 2013-08-26 15:52:15 +02:00
Peter Rajnoha
4d3b5724e0 udev: inform lvmetad about lost PV label
In a stacked environment where a PV is layered on top of a snapshot
LV and the LV is then removed, lvmetad still keeps information
about the PV:

[0] raw/~ $ pvcreate /dev/sda
  Physical volume "/dev/sda" successfully created
[0] raw/~ $ vgcreate vg /dev/sda
  Volume group "vg" successfully created
[0] raw/~ $ lvcreate -L32m vg
  Logical volume "lvol0" created
[0] raw/~ $ lvcreate -L32m -s vg/lvol0
  Logical volume "lvol1" created
[0] raw/~ $ pvcreate /dev/vg/lvol1
  Physical volume "/dev/vg/lvol1" successfully created
[0] raw/~ $ lvremove -ff vg/lvol1
  Logical volume "lvol1" successfully removed
[0] raw/~ $ pvs
  No device found for PV BdNlu2-7bHV-XcIp-mFFC-PPuR-ef6K-yffdzO.
  PV         VG         Fmt  Attr PSize   PFree
  /dev/sda   vg         lvm2 a--  124.00m 92.00m
[0] raw/~ $ pvscan --cache --major 253 --minor 3
  Device 253:3 not found. Cleared from lvmetad cache.

This is because of the reactivation that is done just before
snapshot removal as part of the process (vg/lvol1 in the example
above). The reactivation generates a CHANGE event, but any scan done
on the LV no longer sees the original data (in this case the stacked
PV label on top), so the ID_FS_TYPE="LVM2_member" value (provided by
the blkid scan) is no longer stored in the udev db for the LV.
Consequently, pvscan --cache is not run anymore, because the device
is not identified as an LVM PV by the "LVM2_member" id; lvmetad is
never informed and still keeps its records about the PV.

We can run into a very similar problem by erasing the PV label directly:

[0] raw/~ $ lvcreate -L32m vg
  Logical volume "lvol0" created
[0] raw/~ $ pvcreate /dev/vg/lvol0
  Physical volume "/dev/vg/lvol0" successfully created
[0] raw/~ $ dd if=/dev/zero of=/dev/vg/lvol0 bs=1M
dd: error writing '/dev/vg/lvol0': No space left on device
33+0 records in
32+0 records out
33554432 bytes (34 MB) copied, 0.380921 s, 88.1 MB/s
[0] raw/~ $ pvs
  PV            VG         Fmt  Attr PSize   PFree
  /dev/sda      vg         lvm2 a--  124.00m 92.00m
  /dev/vg/lvol0            lvm2 a--   32.00m 32.00m
[0] raw/~ $ pvscan --cache --major 253 --minor 2
  No PV label found on /dev/vg/lvol0.

This patch adds detection of this change from ID_FS_TYPE="LVM2_member"
to ID_FS_TYPE="<whatever_else>" and hence informs lvmetad that
the PV is gone.
2013-08-26 15:40:16 +02:00
Zdenek Kabelac
6b416f837f thin: support lvchange for data and metadata
Support lvchange operation on stacked thin pool data and metadata
volumes.
2013-08-26 14:55:22 +02:00
David Teigland
7d6a125e97 test: add process-each-vg and process-each-lv
These test the toollib functions that select the
vgs/lvs to process based on command line args:
empty, vg name(s), lv name(s), vg tag(s),
lv tag(s), and combinations of all of these.
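The selection forms being covered look roughly like this (VG, LV and
tag names invented):

vgs                      # empty: process every VG
vgs vg1 vg2              # explicit VG names
vgs @backup              # VGs carrying a tag
lvs vg1                  # every LV in a VG
lvs vg1/lv1 @nightly     # an explicit LV combined with an LV tag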
2013-08-23 14:38:48 -05:00
David Teigland
506bc045b5 test: add process-each-duplicate-vgnames
Test that vgs shows both vgs when two vgs
exist with the same name but different uuids.
2013-08-23 14:19:59 -05:00
David Teigland
8c511122f4 test: add vg-name-from-env
The VG name should come from the env var LVM_VG_NAME
for commands that take a VG name and an LV name
when the VG name is not specified on the command line.
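A hedged illustration of what the test covers (names invented):

export LVM_VG_NAME=vg1
lvcreate -L8 -n lv1      # VG taken from the environment, not the command line
lvremove lv1             # likewise for other commands that normally need vg/lv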
2013-08-23 14:10:29 -05:00
Jonathan Brassow
72d6bdd6b9 misc: make lv_is_on_pv use for_each_sub_lv to walk LV tree
Make lv_is_on_pv use for_each_sub_lv to walk the LV tree.  This
reduces code duplication.
2013-08-23 11:03:28 -05:00
Jonathan Brassow
448ff0119f pvmove: Ability to move thin volumes
The previous commit was missing the code to allow moving thin
volumes.
2013-08-23 09:13:14 -05:00
Jonathan Brassow
c59167ec13 pvmove: Add support for RAID, mirror, and thin
This patch allows pvmove to operate on RAID, mirror and thin LVs.
The key component is the ability to avoid moving a RAID or mirror
sub-LV onto a PV that already has another RAID sub-LV on it.
(e.g. Avoid placing both images of a RAID1 LV on the same PV.)

Top-level LVs are processed to determine which PVs to avoid for
the sake of redundancy, while bottom-level LVs are processed
to determine which segments/extents to move.

This approach does have some drawbacks.  By eliminating whole PVs
from the allocation list, we might miss the opportunity to perform
pvmove in some scenarios.  For example, if we have 3 devices and
a linear uses half of the first, a RAID1 uses half of the first and
half of the second, and a linear uses half of the third (FIGURE 1);
we should be able to pvmove the first device (FIGURE 2).
	FIGURE 1:
        [ linear ] [ -RAID- ] [ linear ]
        [ -RAID- ] [        ] [        ]

	FIGURE 2:
        [  moved ] [ -RAID- ] [ linear ]
        [  moved ] [ linear ] [ -RAID- ]
However, the approach we are using would eliminate the second
device from consideration and would leave us with too little space
for allocation.  In these situations, the user does have the ability
to specify LVs and move them one at a time.
2013-08-23 08:57:16 -05:00
Jonathan Brassow
e5c0213168 Thin: Make 'lv_is_on_pv(s)' work with thin types
The pool metadata LV must be accounted for when determining what PVs
are in a thin-pool.  The pool LV must also be accounted for when
checking thin volumes.

This is a prerequisite for pvmove working with thin types.
2013-08-23 08:49:16 -05:00
Jonathan Brassow
f1e3640df3 Misc: Make get_pv_list_for_lv() available to more than just RAID
The function 'get_pv_list_for_lv' will assemble all the PVs that are
used by the specified LV.  It uses 'for_each_sub_lv' to traverse all
of the sub-lvs which may compose it.
2013-08-23 08:40:13 -05:00
Peter Rajnoha
be9f4c77c9 conf: more comments about use_lvmetad + autoactivation relation 2013-08-22 08:29:20 +02:00
Peter Rajnoha
99fe3b88d2 systemd: lvm2-activation-generator: report only error otherwise be silent
Do not print success status for lvm2-activation-generator:

  "LVM: Activation generator successfully completed."
  "LVM: Logical Volume autoactivation enabled." (if use_lvmetad=1)

Though this information is quite useful during boot, it may
be confusing for users if it appears at any later time, and it
does appear whenever systemd reloads. This usually happens on
package update, to refresh the systemd state and load any newly
installed units. The systemd reload is global, so any existing
generators are rerun at that moment too.
2013-08-22 08:27:51 +02:00