mirror of git://sourceware.org/git/lvm2.git synced 2025-01-17 06:04:23 +03:00

1632 Commits

Author SHA1 Message Date
Leo Yan
38abd6bb2c tests: idm: Add testing for the fabric's half brain failure
If the fabric breaks instantly, some of the drives connected on the
fabric disappear from the system.  In this case, according to the idm
locking algorithm, the lease is not lost: half of the drives are still
alive, so the lease can be renewed on them.  On the other hand, since
taking the VG lock requires acquiring a majority of the drives, and
half of the drives cannot form a majority, the VG lock cannot be
acquired and thus the VG metadata cannot be changed.

This patch adds a half-brain failure test for idm; the test command is
as below:

  # make check_lvmlockd_idm \
	LVM_TEST_BACKING_DEVICE=/dev/sdp3,/dev/sdo3 LVM_TEST_FAILURE=1 \
	T=idm_fabric_failure_half_brain.sh

Signed-off-by: Leo Yan <leo.yan@linaro.org>
2021-06-03 09:39:32 -05:00
Leo Yan
91d3b56875 tests: idm: Add testing for the fabric failure and timeout
If the fabric breaks instantly, the drives connected on the fabric
disappear from the system.  In the worst case, the lease times out and
the drives cannot be recovered.  So a new test is added to emulate this
scenario: it uses a drive for LVM operations, and the same drive is
also used for the locking scheme; if the drive and all its associated
paths (if the drive supports multiple paths) are disconnected, the lock
manager should stop the lockspace for the VG/LVs.

Afterwards, if the drive recovers, the VG/LVs residing on the drive
should operate properly.  The test command is as below:

  # make check_lvmlockd_idm \
	LVM_TEST_BACKING_DEVICE=/dev/sdp3 LVM_TEST_FAILURE=1 \
	T=idm_fabric_failure_timeout.sh

Signed-off-by: Leo Yan <leo.yan@linaro.org>
2021-06-03 09:39:32 -05:00
Leo Yan
fc0495ea04 tests: idm: Add testing for the fabric failure
When a fabric failure occurs, the connection with the hosts is lost
instantly; after a while the fabric can recover so that the hosts can
continue to access the drives.

The lock manager should be reliable in this case, handle it
dynamically, and allow the user to continue using the VG/LVs with the
associated locking scheme.

This patch adds a test to emulate the fabric failure and verify LVM
commands in this case.  The test usage is:

  # make check_lvmlockd_idm \
	LVM_TEST_BACKING_DEVICE=/dev/sdo3,/dev/sdp3,/dev/sdp4 \
	LVM_TEST_FAILURE=1 T=idm_fabric_failure.sh

Signed-off-by: Leo Yan <leo.yan@linaro.org>
2021-06-03 09:39:32 -05:00
Leo Yan
874001ee6e tests: Add testing for lvmlockd failure
After lvmlockd exits abnormally and the daemon is relaunched, if LVM
commands continue to run, lvmlockd and the backend lock manager (e.g.
the sanlock or IDM lock manager) should continue to serve the requests
from LVM commands.

This patch adds a test to emulate lvmlockd failure and verify the LVM
commands after lvmlockd recovers.  Below is an example of running the
test:

  # make check_lvmlockd_idm \
	LVM_TEST_BACKING_DEVICE=/dev/sdo3,/dev/sdp3,/dev/sdp4 \
	LVM_TEST_FAILURE=1 T=lvmlockd_failure.sh

Signed-off-by: Leo Yan <leo.yan@linaro.org>
2021-06-03 09:39:32 -05:00
Leo Yan
f83e11ff43 tests: stress: Add multi-threads stress testing for PV/VG/LV
This patch adds a stress test which launches three threads: one for
creating/removing PVs, one for creating/removing VGs, and the last one
for LV operations.
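
A minimal sketch of the three loops in hypothetical shell (device, VG
and LV names are placeholders; the actual test script differs):

  # three concurrent stress loops: PV churn, VG churn, LV churn
  for i in $(seq 100); do pvcreate "$dev1"; pvremove "$dev1"; done &
  for i in $(seq 100); do vgcreate $vg1 "$dev2"; vgremove -y $vg1; done &
  for i in $(seq 100); do
	lvcreate -an -l1 -n $lv1 $vg2    # create without activating
	lvchange -ay $vg2/$lv1           # activate
	lvchange -an $vg2/$lv1           # deactivate
	lvremove -y $vg2/$lv1
  done &
  wait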

Signed-off-by: Leo Yan <leo.yan@linaro.org>
2021-06-03 09:39:32 -05:00
Leo Yan
692fe7bb31 tests: stress: Add multi-threads stress testing for VG/LV
This patch adds a stress test which launches two threads; each thread
creates an LV, then activates and deactivates it in a loop, exercising
multi-threading in lvmlockd and its backend lock manager.
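
A minimal sketch of the two threads in hypothetical shell (VG/LV names
are placeholders):

  stress_loop() {
	lvcreate -an -l1 -n "$1" $vg     # create an inactive LV
	for i in $(seq 100); do
		lvchange -ay $vg/"$1"    # activate
		lvchange -an $vg/"$1"    # deactivate
	done
	lvremove -y $vg/"$1"
  }
  stress_loop $lv1 &                     # thread 1
  stress_loop $lv2 &                     # thread 2
  wait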

Signed-off-by: Leo Yan <leo.yan@linaro.org>
2021-06-03 09:39:32 -05:00
Leo Yan
fe660467fa tests: stress: Add single thread stress testing
This patch adds a stress test which loops creating an LV, then
activating and deactivating it, in a single thread.
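
A minimal sketch of the loop in hypothetical shell (names are
placeholders):

  for i in $(seq 100); do
	lvcreate -an -l1 -n $lv1 $vg
	lvchange -ay $vg/$lv1
	lvchange -an $vg/$lv1
	lvremove -y $vg/$lv1
  done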

Signed-off-by: Leo Yan <leo.yan@linaro.org>
2021-06-03 09:39:32 -05:00
Leo Yan
c64dbc7ee8 tests: Enable the testing for IDM locking scheme
This patch introduces the testing option LVM_TEST_LOCK_TYPE_IDM; when
this option is specified, the Seagate IDM lock manager is launched as
the backend for testing.  Also add the prepare and remove shell scripts
for IDM.
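
Example usage, following the same invocation pattern as the idm test
commits earlier in this log (the test name xxx.sh is a placeholder):

  # make check_lvmlockd_idm \
	LVM_TEST_BACKING_DEVICE=/dev/sdo3,/dev/sdp3 T=xxx.sh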

Signed-off-by: Leo Yan <leo.yan@linaro.org>
2021-06-03 09:39:32 -05:00
David Teigland
4a746f7ffc lvremove: fix removing thin pool with writecache on data 2021-05-24 16:09:35 -05:00
David Teigland
00f603de2c tests: add lvextend-caches-on-thindata
to test lvextend of thin pool data while it has
cache|writecache attached
2021-05-06 16:23:26 -05:00
David Teigland
92fcfc59b2 tests: new lvextend-caches
to test lvextend of LVs with attached cache|writecache
2021-05-06 14:43:10 -05:00
Zdenek Kabelac
64a8505b96 tests: use should for expected state
While we try hard to spot arrays that are not yet in sync, some
kernels tend to block our lvm2 command in the kernel while we resume
these smaller raid arrays, even for 5 seconds.

But since the result is not really wrong, report these check failures
only as a TEST WARNING.
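
For example, a sync check that may legitimately lag can be wrapped in
'should', which reports a failure only as a TEST WARNING (the check
shown is illustrative):

  should check in_sync $vg $lv1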
2021-04-23 23:00:55 +02:00
Zdenek Kabelac
94c264b975 Revert "tests: add check for lvconvert without zeroing"
This reverts commit accf324ccba681ad06cd8bcb27ead17ec191a471.
2021-04-14 10:53:34 +02:00
Zdenek Kabelac
9a33388c1a tests: race on md raid still being hit on 5.12-rc6
Still hits the race in initialization:

kernel BUG at drivers/md/raid5.c:7549!
invalid opcode: 0000 [#1] SMP PTI
CPU: 0 PID: 525149 Comm: dmsetup Tainted: G           OEi
    --------- ---  5.12.0-0.rc6.184.fc35.x86_64 #1
Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2007
RIP: 0010:raid5_run+0x40b/0x4b0 [raid456]
Code: 00 8b 83 3c 01 00 00 39 83 bc 00 00 00 0f 85 ac 00 00 00
      48 c7 44 24 08 00 00 00 00 8b bb 30 01 00 00 85 ff 0f 84
      88 fd ff ff <0f> 0b 48 8b 43 48 48 c7 c6 40 93 92 c0 48
      c7 c7 70 2c 93 c0 48 85
Call Trace:
 md_run+0x4d6/0xbc0
 ? super_validate+0x2e1/0x4b0 [dm_raid]
 raid_ctr+0x133e/0x281b [dm_raid]
 dm_table_add_target+0x167/0x330
 table_load+0x103/0x350
 ctl_ioctl+0x1b4/0x430
 ? dev_suspend+0x2c0/0x2c0
 dm_ctl_ioctl+0xa/0x10
 __x64_sys_ioctl+0x82/0xb0
 do_syscall_64+0x33/0x40
 entry_SYSCALL_64_after_hwframe+0x44/0xae
2021-04-12 12:04:50 +02:00
Zdenek Kabelac
396d93937d tests: enable for 5.12+ kernels
Should no longer kill the kernel.
2021-04-12 10:47:06 +02:00
David Teigland
01f108c4d0 tests: skip autoactivation-metadata with lvmlockd
shared VGs are not autoactivated
2021-04-08 16:08:45 -05:00
David Teigland
0a28e3c44b Add metadata-based autoactivation property for VG and LV
The autoactivation property can be specified in lvcreate
or vgcreate for new LVs/VGs, and the property can be changed
by lvchange or vgchange for existing LVs/VGs.

 --setautoactivation y|n
 enables|disables autoactivation of a VG or LV.

Autoactivation is enabled by default, which is consistent with
past behavior.  The disabled state is stored as a new flag
in the VG metadata, and the absence of the flag allows
autoactivation.

If autoactivation is disabled for the VG, then no LVs in the VG
will be autoactivated (the LV autoactivation property will have
no effect.)  When autoactivation is enabled for the VG, then
autoactivation can be controlled on individual LVs.

The state of this property can be reported for LVs/VGs using
the "-o autoactivation" option in lvs/vgs commands, which will
report "enabled", or "" for the disabled state.

Previous versions of lvm do not recognize this property.  Since
autoactivation is enabled by default, the disabled setting will
have no effect in older lvm versions.  If the VG is modified by
older lvm versions, the disabled state will also be dropped from
the metadata.

The autoactivation property is an alternative to using the lvm.conf
auto_activation_volume_list, which is still applied to VGs/LVs
in addition to the new property.

If VG or LV autoactivation is disabled either in metadata or in
auto_activation_volume_list, it will not be autoactivated.

An autoactivation command will silently skip activating an LV
when the autoactivation property is disabled.

To determine the effective autoactivation behavior for a specific
LV, multiple settings would need to be checked:
the VG autoactivation property, the LV autoactivation property,
the auto_activation_volume_list.  The "activation skip" property
would also be relevant, since it applies to both normal and auto
activation.
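
For example, using the options described above ($vg and $lv are
placeholders):

  # vgchange --setautoactivation n $vg       # disable for the whole VG
  # lvchange --setautoactivation y $vg/$lv   # per-LV control
  # vgs -o name,autoactivation $vg           # reports "enabled" or ""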
2021-04-07 15:32:49 -05:00
Zdenek Kabelac
79a168d119 tests: pvmove updates
Switch to plain 'kill'; we should no longer need SIGKILL,
as polling can be interrupted.

Resolve a problem in aux wait_pvmove_lv_ready() that was using an
lvm command to check for the UUID - but this was interfering with
the VG lock and delaying confirmation.

This reduces the slow-down of the test, so it can run faster.
2021-04-06 22:02:31 +02:00
Samanta Navarro
01d5e4d1ca all: fix typos 2021-03-30 13:08:14 +02:00
Zdenek Kabelac
1a17a5ab80 tests: sleep tuning
Check different sleep properties for lvmpolld.
Use aux remove_dm_devs.
2021-03-28 14:22:11 +02:00
Zdenek Kabelac
0ddbc4c5cd tests: bash quotes 2021-03-28 11:39:58 +02:00
Zdenek Kabelac
afbaab20c7 tests: fix unfinished check for 4.18 kernel 2021-03-28 00:21:38 +01:00
Zdenek Kabelac
f584f0cd9e tests: ensure raid is synchronized 2021-03-27 23:19:08 +01:00
Zdenek Kabelac
5ec7992e29 tests: reorder killing order
We need to stop pvmove while it is still in progress,
so restart lvmpolld after the pvmoving devices are gone.
2021-03-27 23:19:08 +01:00
Zdenek Kabelac
feb7fef6c8 tests: fight with losetup creation error
Retry losetup a few times in a loop until it succeeds.
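
A minimal sketch of the retry (hypothetical; the loop device and
backing file are placeholders):

  for i in 1 2 3 4 5; do
	losetup "$loopdev" "$backing_file" && break
	sleep .1
  done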
2021-03-27 23:19:08 +01:00
Zdenek Kabelac
37d603268f tests: for 4.18 use already 1.20 logic
Looks like there was some missed versioning increase during devel.
So with kernel >= 4.18 version 1.19 is enough to look like 1.20

However backported 1.19 targets seems to not provide all
the capabilities.
2021-03-27 23:16:52 +01:00
Zdenek Kabelac
f07a793813 tests: correct thin-pool version
Use thin-pool target version 1.20 for changed behavior.
2021-03-27 00:34:00 +01:00
Zdenek Kabelac
53338cf566 tests: increase mirror throttling 2021-03-27 00:29:28 +01:00
Zdenek Kabelac
8e9bc52b15 tests: more skipped tests for lvmpolld 2021-03-26 22:13:37 +01:00
Zdenek Kabelac
a55f4a8fe2 tests: use shell comment 2021-03-26 22:12:42 +01:00
Zdenek Kabelac
1d6e1d08a8 tests: update for newer thin-pool
Newer thin-pools handle metadata read-only recovery better.
2021-03-26 20:39:41 +01:00
Zdenek Kabelac
51ac56a05e tests: use blkid without caching
Always use blkid without caching to avoid polluting the cache
stored in /run/blkid (or /etc on older distros).
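
For example, pointing blkid at /dev/null as its cache file avoids
reading or writing any cache (the queried field is illustrative):

  # blkid -c /dev/null -o value -s UUID "$dev"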
2021-03-26 20:39:41 +01:00
Zdenek Kabelac
02e02a5ccc tests: use aux mdadm_assemble wrapper 2021-03-26 20:39:41 +01:00
Zdenek Kabelac
5ef8d84569 tests: better reporting of problematic services 2021-03-26 20:39:40 +01:00
Zdenek Kabelac
49575a6ce1 tests: skip more tests for lvmpolld pass
These tests do not test polling, so skip them for the lvmpolld pass.
2021-03-26 20:39:40 +01:00
Zdenek Kabelac
6db533c439 tests: try to observe some unusual problem
Let's see why it is very occasionally able to activate the LV.
2021-03-26 11:36:22 +01:00
Zdenek Kabelac
3ed79d8dfe tests: move setting of dmeventd pid
Added a comment that 'lvs' already initiates dmeventd.

Note: we don't have any query mechanism to check whether dmeventd
is already running, except accessing its socket, which basically
starts dmeventd if it's not running.
2021-03-26 11:16:32 +01:00
Zdenek Kabelac
85fae836c0 tests: add basic validation of running services
For deterministic test results, lvm2/dm services shall not be present
and running in the system, as they may randomize test results.

In case they are found present, this test ends with a warning (not a failure).
2021-03-26 11:13:56 +01:00
Zdenek Kabelac
9bcc76b63c tests: add should for racy test
Depending on kernel, the race may or may not happen.
2021-03-26 00:43:44 +01:00
Zdenek Kabelac
5feb99dda6 tests: add workaround for older mdadm
Some older instances of 'mdadm' opened legs in RW mode, closed and
opened them again, and expected exclusive access.  But here a udev
rule can be fired - so on these versions slow down the whole mdadm
runtime by using strace, to give the system a bit more time to finish
the udev rule.
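
A sketch of the slowdown trick (hypothetical wrapper; the trace output
is discarded):

  # run mdadm under strace purely to slow it down
  strace -f -o /dev/null mdadm "$@"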
2021-03-26 00:35:28 +01:00
Zdenek Kabelac
2151b71819 tests: check fsadm with missing filesystem 2021-03-24 16:38:12 +01:00
Zdenek Kabelac
9684e82cc4 tests: ignore inconsistent raid status
Just like the lvm command, ignore a 0/xxxx report when judging the status.
Avoid an infinite loop and limit report checking to 100 checks.
If it needs more - something is not right.
2021-03-24 12:40:17 +01:00
Zdenek Kabelac
afd43a75f2 tests: skip stray testing on real dev dir
Do not modify the /dev dir maintained by udev.
2021-03-24 12:23:07 +01:00
Zdenek Kabelac
18f2475fa1 tests: query info instead of table
No need to access the table when we just check presence;
this also generates a smaller error message about a missing device.
2021-03-24 12:22:27 +01:00
Zdenek Kabelac
8df0a32abb tests: this test has race in it depending on kernel
Some kernels seem to keep 'lvextend' busy for so long that the
actual resize already happens.

So for now use 'should' until something better is invented.
2021-03-23 21:32:51 +01:00
Zdenek Kabelac
14a3c34983 tests: increase required version
It seems version 1.13.2 still crashes the kernel - so increase
the required version for this reshaping test.
2021-03-23 14:39:13 +01:00
Zdenek Kabelac
d0644fb2c3 tests: use prefix for VG name 2021-03-23 14:38:54 +01:00
Zdenek Kabelac
26d76d31c5 tests: use mirror throttling
The combination of throttling and a slowed device is a bit faster.

Also add a FIXME about the multiple polling processes spawned
when activating individual LVs for a pvmove.
2021-03-23 11:34:34 +01:00
Zdenek Kabelac
0b2a037c80 tests: try to move more data
Throttling was not helping with the race - try to use more data.
2021-03-23 10:53:02 +01:00
Zdenek Kabelac
acac3cb524 tests: test needs to have playable locking dir 2021-03-23 09:48:47 +01:00