This patch adds VG testing on multiple hosts. There are two scripts:
the script multi_hosts_vg_hosta.sh creates VGs on one host, and the
second script multi_hosts_vg_hostb.sh afterwards acquires the global
lock and the VG locks, and removes the VGs. The testing flow verifies
the locking operations between two hosts with lvmlockd and the backend
locking manager.
On host A:
make check_lvmlockd_idm \
LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3,/dev/sdl3 \
LVM_TEST_MULTI_HOST=1 T=multi_hosts_vg_hosta.sh
On host B:
make check_lvmlockd_idm \
LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3,/dev/sdl3 \
LVM_TEST_MULTI_HOST=1 T=multi_hosts_vg_hostb.sh
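The essential flow of the two scripts is roughly the sketch below (a
minimal sketch; the VG names and device variables are placeholders,
and the actual scripts contain more checks):

# Host A: create shared VGs on the backing devices
vgcreate --shared TESTVG1 "$dev1"
vgcreate --shared TESTVG2 "$dev2"

# Host B: start the lockspaces, then remove the VGs, which takes
# the global lock and the VG locks
vgchange --lock-start
vgremove -ff TESTVG1
vgremove -ff TESTVG2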
Signed-off-by: Leo Yan <leo.yan@linaro.org>
After lvmlockd abnormally exits and the daemon is relaunched, if LVM
commands continue to run, lvmlockd and the backend lock manager (e.g.
the sanlock lock manager or the IDM lock manager) should continue to
serve the requests from LVM commands.
This patch adds a test to emulate lvmlockd failure and verify LVM
commands after lvmlockd recovers. Below is an example of running the
test case:
# make check_lvmlockd_idm \
LVM_TEST_BACKING_DEVICE=/dev/sdo3,/dev/sdp3,/dev/sdp4 \
LVM_TEST_FAILURE=1 T=lvmlockd_failure.sh
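A minimal sketch of what the failure emulation does (the kill and
relaunch steps shown here are illustrative; the test script may differ
in its details):

# Emulate an abnormal lvmlockd exit, then relaunch the daemon
pkill -9 lvmlockd
lvmlockd

# LVM commands should still be served after the recovery
lvcreate -l1 -n $lv1 $vg
lvremove -f $vg/$lv1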
Signed-off-by: Leo Yan <leo.yan@linaro.org>
When a drive failure occurs, the IDM lock manager and lvmlockd should
handle this case properly. E.g. when the IDM lock manager detects a
lease renewal failure caused by I/O errors, it should invoke the kill
path which is predefined by lvmlockd, so that the kill path program
(like lvmlockctl) can send requests to lvmlockd to stop and drop the
locks for the relevant VG/LVs.
To verify the failure handling flow, this patch introduces an IDM
failure injection program; it takes a "percentage" of drive failures
as input, so different failure cases can be emulated.
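As an illustration of the kill path itself (not of the injection
program's interface), the kill path program can ask lvmlockd to stop
and drop the locks of a VG via lvmlockctl; a minimal sketch, assuming
the VG name in $vg:

# Stop access to the VG after the lease renewal failure
lvmlockctl --kill $vg
# Drop the VG locks once the VG is no longer used
lvmlockctl --drop $vg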
Signed-off-by: Leo Yan <leo.yan@linaro.org>
Add checking for the lvmlockd log; this can be used by test cases
that are interested in the interaction with lvmlockd.
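For instance, a test case could assert on the captured daemon output
roughly as below; the log file name and the grep pattern are
illustrative assumptions, with die as the test suite's failure helper:

# Fail the test if lvmlockd never logged the expected lock operation
grep "lock op" lvmlockd.log || die "missing expected lvmlockd log line"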
Signed-off-by: Leo Yan <leo.yan@linaro.org>
For testing the IDM locking scheme, it's good to clean up the IDM
context before running the test cases. This gives a clean environment
for the testing.
Signed-off-by: Leo Yan <leo.yan@linaro.org>
In the current implementation, the option "LVM_TEST_BACKING_DEVICE"
only supports specifying one backing device; this patch extends the
option to support multiple backing devices, using a comma as the
separator. E.g. the command below specifies two backing devices:
make check_lvmlockd_idm LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3
This allows the testing to work on multiple drives and verifies that
the locking scheme works as expected in the multiple-drive case. For
example, with the Seagate IDM locking scheme, if a VG uses two PVs and
each PV resides on its own drive, the locking operations will be sent
to the two drives respectively; so the extension of
"LVM_TEST_BACKING_DEVICE" helps to verify different drive
configurations for locking.
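A comma-separated value like this can be split in the test harness
roughly as in the sketch below (a sketch, not the exact
implementation; BACKING_DEVICES is a name chosen here and die is the
test suite's failure helper):

# Split LVM_TEST_BACKING_DEVICE on commas into an array of devices
IFS=',' read -r -a BACKING_DEVICES <<< "$LVM_TEST_BACKING_DEVICE"
for dev in "${BACKING_DEVICES[@]}"; do
	test -b "$dev" || die "backing device $dev is not a block device"
done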
Signed-off-by: Leo Yan <leo.yan@linaro.org>
This patch introduces the testing option LVM_TEST_LOCK_TYPE_IDM; when
this option is specified, the Seagate IDM lock manager is launched as
the backend for testing. Also add the prepare and remove shell scripts
for IDM.
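An invocation might then look like the commands above (an assumed
example, by analogy with the existing LVM_TEST_LOCK_TYPE_SANLOCK and
LVM_TEST_LOCK_TYPE_DLM options):

make check_lvmlockd_idm \
	LVM_TEST_LOCK_TYPE_IDM=1 LVM_TEST_BACKING_DEVICE=/dev/sdj3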
Signed-off-by: Leo Yan <leo.yan@linaro.org>
Switch to plain 'kill'; we should no longer need SIGKILL, as polling
can be interrupted.
Resolve a problem in aux wait_pvmove_lv_ready(), which was using an
lvm command to check for the UUID - but this was interfering with the
VG lock and delaying the confirmation. This reduces the slow-down of
the test, so it can run faster.
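One way to check for the UUID without competing for the VG lock is to
query device-mapper directly; a minimal sketch, assuming the DM name
in $vg-$lv1:

# Ask device-mapper for the UUID instead of running an lvm command;
# an LVM- prefixed uuid means the mapping is ready
dmsetup info --noheadings -c -o uuid "$vg-$lv1" | grep LVM-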
Make check_lvmpolld_init_rq_count() more compatible with older gawk,
where some functionality was not working properly.
Also change the double-negative 'not not' condition.
Our tests may result in the production of a huge set of invalid links
in the /dev/disk directory, depending on the version of udev and on
various kinds of failures. We also happen to invoke some on-system
pvscans, generating local /etc/lvm/archive & backup files - remove
them when the test is finished.
Try to synchronize with colliding udev. Also retry once if there is
some failure, with a sleep before the next retry.
Use oflag=direct for wiping without wipefs.
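E.g. wiping the first MiB of a device without wipefs could look like
this (a sketch; $dev is a placeholder for the device being wiped):

# Direct I/O bypasses the page cache, so no stale copy lingers there
dd if=/dev/zero of="$dev" bs=1M count=1 oflag=direct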
Add a generic wrapper for mdadm --create which takes normal 'mdadm'
args - but allows us to handle differences in mdadm usage across
various versions of the mdadm tool.
The resulting MD device is available in $(< MD_DEV).
Automatic cleanup is done through cleanup_md_dev.
Calling mdadm_create cleans up the previous MD device if it exists.
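Usage might look like the sketch below (the mdadm options and device
variables are illustrative):

# Create a RAID1 array from two test devices via the wrapper
mdadm_create --metadata=1.2 --level=raid1 --raid-devices=2 "$dev1" "$dev2"
mddev=$(< MD_DEV)
# ... run tests against "$mddev" ...
cleanup_md_dev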
When testing installed binaries on a system, use more 'built-in'
predefined settings to use them with their compiled-in values.
It's also better to use the same locking dir so the system's pvscan
does not unexpectedly interfere with test commands.
Always shift created virtual PVs on the backing device by 1MiB and
leave 1MiB of free space at the end of the device.
This way the system doesn't see the same PV headers on multiple
devices.
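Expressed as a device-mapper linear mapping, the shift amounts to
something like the sketch below (sizes in 512-byte sectors; the
variable names are placeholders):

# Map all but the first and last 1MiB (2048 sectors) of the backing device
total=$(blockdev --getsz "$backing_dev")
echo "0 $((total - 4096)) linear $backing_dev 2048" | dmsetup create "$pv_name"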
The user creates a file listing the real devices they want the lvm
tests to use, and sets LVM_TEST_DEVICE_LIST.
lvm tests can use these with prepare_real_devs and get_real_devs.
Other aux functions do not work with these devices.
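For example (the file path and device names are placeholders):

# List the real devices the tests may use
cat > /tmp/devs <<EOF
/dev/sdb1
/dev/sdc1
EOF

LVM_TEST_DEVICE_LIST=/tmp/devs make check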
When ERR_DEV and ZERO_DEV are used, they are automatically taken down
when the last user no longer needs them, so hide them from the
'forgotten' device check.
As there could be a few invocations of stacktrace, avoid repeatedly
displaying logs from commands.
So after the first display, rename debug.log* -> debug_log, so the
file can still remain for reading in the test dir.
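A minimal sketch of the rename step (the glob is assumed to match the
per-command logs in the test dir):

# Rename so later stacktrace runs do not dump the same logs again,
# while keeping them readable in the test dir
for f in debug.log*; do
	mv "$f" "${f//debug.log/debug_log}"
done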