If the fabric is broken instantly, half of the drives connected on the
fabric disappear from the system. In this case, according to the
locking algorithm in IDM, the lease is not lost: the remaining half of
the drives are still alive, so the lease can be renewed on them. On
the other hand, taking the VG lock requires acquiring a majority of
the drives, and after a half-drive failure a majority cannot be
reached, so the VG lock cannot be acquired and thus the VG metadata
cannot be changed.
This patch adds a half-brain failure test for IDM; the test command is
as below:
# make check_lvmlockd_idm \
LVM_TEST_BACKING_DEVICE=/dev/sdp3,/dev/sdo3 LVM_TEST_FAILURE=1 \
T=idm_fabric_failure_half_brain.sh
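For illustration, the half-brain failure can be emulated by hand along
these lines (a sketch only; device and VG names are hypothetical, and
the sysfs "delete" attribute removes a SCSI disk from the system):
  # drop one of the two drives backing the IDM lockspace (half brain)
  echo 1 > /sys/block/sdo/device/delete
  # an already-activated LV keeps working, since its lease can still be
  # renewed on the surviving drive
  dd if=/dev/vgtest/lvtest of=/dev/null bs=4k count=1
  # a VG metadata change needs the VG lock, which requires a majority
  # of the drives, so this is expected to fail
  vgextend vgtest /dev/sdq1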
Signed-off-by: Leo Yan <leo.yan@linaro.org>
If the fabric is broken instantly, the drives connected on the fabric
will disappear from the system. In the worst case, the lease times out
and the drives cannot recover in time. So a new test is added to
emulate this scenario: it uses a drive for LVM operations, and the
same drive is also used for the locking scheme; if the drive and all
its associated paths (if the drive supports multiple paths) are
disconnected, the lock manager should stop the lockspace for the
VG/LVs.
Afterwards, if the drive comes back, the VG/LVs resident on the drive
should operate properly. The test command is as below:
# make check_lvmlockd_idm \
LVM_TEST_BACKING_DEVICE=/dev/sdp3 LVM_TEST_FAILURE=1 \
T=idm_fabric_failure_timeout.sh
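The drive disappearance and recovery can be emulated roughly as
follows (a sketch only; device names, the host number and the sleep
value are illustrative, and "- - -" triggers a SCSI host rescan):
  echo 1 > /sys/block/sdp/device/delete            # drive and paths vanish
  sleep 70                                         # let the IDM lease expire
  echo '- - -' > /sys/class/scsi_host/host0/scan   # bring the drive back
  vgchange --lock-start vgtest                     # restart the stopped lockspace
  lvcreate -L4M -n lvtest vgtest                   # VG/LV usable again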
Signed-off-by: Leo Yan <leo.yan@linaro.org>
When a fabric failure occurs, the connection with the hosts is lost
instantly, and after a while it can recover so that the hosts can
continue to access the drives.
In this case, the lock manager should be reliable, handle the failure
dynamically, and allow the user to continue to use the VG/LV with the
associated locking scheme.
This patch adds a test to emulate the fabric failure and verify LVM
commands in this case. The test usage is:
# make check_lvmlockd_idm \
LVM_TEST_BACKING_DEVICE=/dev/sdo3,/dev/sdp3,/dev/sdp4 \
LVM_TEST_FAILURE=1 T=idm_fabric_failure.sh
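A transient fabric failure can be emulated without removing the
device, e.g. by toggling the SCSI device state (a sketch only; device
and VG names are hypothetical):
  echo offline > /sys/block/sdo/device/state   # connection lost
  sleep 5
  echo running > /sys/block/sdo/device/state   # fabric recovers
  lvcreate -L4M -n lvtest vgtest               # locking keeps working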
Signed-off-by: Leo Yan <leo.yan@linaro.org>
After lvmlockd exits abnormally and the daemon is relaunched, if LVM
commands continue to run, lvmlockd and the backend lock manager (e.g.
the sanlock or IDM lock manager) should be able to continue serving
the requests from LVM commands.
This patch adds a test to emulate an lvmlockd failure and verify LVM
commands after lvmlockd recovers. Below is an example of running the
test:
# make check_lvmlockd_idm \
LVM_TEST_BACKING_DEVICE=/dev/sdo3,/dev/sdp3,/dev/sdp4 \
LVM_TEST_FAILURE=1 T=lvmlockd_failure.sh
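The failure can also be emulated by hand roughly as below (a sketch
only; VG/LV names are hypothetical, and the adopt option, per
lvmlockd(8), lets a relaunched daemon re-acquire locks from the
previous instance):
  pkill -9 lvmlockd               # abnormal daemon exit
  lvmlockd --adopt 1              # relaunch, adopting existing locks
  vgs                             # LVM commands keep being served
  lvcreate -L4M -n lvnew vgtest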
Signed-off-by: Leo Yan <leo.yan@linaro.org>
This patch adds a stress test which launches three threads: one for
creating/removing PVs, one for creating/removing VGs, and the last one
for LV operations.
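A minimal sketch of the structure (not the actual test script; device
and VG names are hypothetical):
  (for i in $(seq 20); do pvcreate -y /dev/sdx1 && pvremove -y /dev/sdx1; done) &
  (for i in $(seq 20); do vgcreate -y vgb /dev/sdy1 && vgremove -y vgb; done) &
  (for i in $(seq 20); do
      lvcreate -y -L4M -n lva vga
      lvchange -an vga/lva
      lvchange -ay vga/lva
      lvremove -y vga/lva
   done) &
  wait   # all three loops run concurrently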
Signed-off-by: Leo Yan <leo.yan@linaro.org>
This patch adds a stress test which launches two threads; each thread
creates an LV, then activates and deactivates the LV in a loop. This
exercises multi-threading in lvmlockd and its backend lock manager.
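A sketch of the two concurrent loops (names are hypothetical; each job
uses its own LV in a shared VG):
  stress_loop() {
      for i in $(seq 50); do
          lvcreate -y -L4M -n "$1" vgtest
          lvchange -an "vgtest/$1"
          lvchange -ay "vgtest/$1"
          lvremove -y "vgtest/$1"
      done
  }
  stress_loop lv1 &   # two jobs hammer the lock manager concurrently
  stress_loop lv2 &
  wait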
Signed-off-by: Leo Yan <leo.yan@linaro.org>
This patch adds a stress test which loops creating an LV, then
activating and deactivating it, in a single thread.
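A sketch of the loop (names are hypothetical):
  for i in $(seq 100); do
      lvcreate -y -L4M -n lvtest vgtest   # created and activated
      lvchange -an vgtest/lvtest          # deactivate
      lvchange -ay vgtest/lvtest          # activate again
      lvremove -y vgtest/lvtest
  done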
Signed-off-by: Leo Yan <leo.yan@linaro.org>
This patch introduces the test option LVM_TEST_LOCK_TYPE_IDM. When
this option is specified, the Seagate IDM lock manager is launched as
the backend for testing. Also add the prepare and remove shell scripts
for IDM.
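A hypothetical invocation (the check_lvmlockd_idm target used in the
commits above is assumed to set this option implicitly):
# make check LVM_TEST_LOCK_TYPE_IDM=1 \
	LVM_TEST_BACKING_DEVICE=/dev/sdp3 T=idm_fabric_failure.sh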
Signed-off-by: Leo Yan <leo.yan@linaro.org>
Although we heavily try to spot arrays that are not yet in sync, some
kernels tend to block our lvm2 command in the kernel for as long as 5
seconds while we resume these smaller raid arrays. But since the
result is not really wrong, report these check failures only as
TEST WARNING.
The autoactivation property can be specified in lvcreate
or vgcreate for new LVs/VGs, and the property can be changed
by lvchange or vgchange for existing LVs/VGs.
--setautoactivation y|n
enables|disables autoactivation of a VG or LV.
Autoactivation is enabled by default, which is consistent with
past behavior. The disabled state is stored as a new flag
in the VG metadata, and the absence of the flag allows
autoactivation.
If autoactivation is disabled for the VG, then no LVs in the VG
will be autoactivated (the LV autoactivation property will have
no effect.) When autoactivation is enabled for the VG, then
autoactivation can be controlled on individual LVs.
The state of this property can be reported for LVs/VGs using
the "-o autoactivation" option in lvs/vgs commands, which will
report "enabled", or "" for the disabled state.
Previous versions of lvm do not recognize this property. Since
autoactivation is enabled by default, the disabled setting will
have no effect in older lvm versions. If the VG is modified by
older lvm versions, the disabled state will also be dropped from
the metadata.
The autoactivation property is an alternative to using the lvm.conf
auto_activation_volume_list, which is still applied to VGs/LVs in
addition to the new property.
If VG or LV autoactivation is disabled either in metadata or in
auto_activation_volume_list, it will not be autoactivated.
An autoactivation command will silently skip activating an LV
when the autoactivation property is disabled.
To determine the effective autoactivation behavior for a specific
LV, multiple settings would need to be checked:
the VG autoactivation property, the LV autoactivation property,
the auto_activation_volume_list. The "activation skip" property
would also be relevant, since it applies to both normal and auto
activation.
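A sketch of inspecting each relevant setting for one LV (assuming
lv_skip_activation reports the "activation skip" flag and that
auto_activation_volume_list is only printed when configured):
# vgs -o name,autoactivation vg0
# lvs -o name,autoactivation,lv_skip_activation vg0/lv0
# lvmconfig activation/auto_activation_volume_list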
Switch to plain 'kill'; we should no longer need SIGKILL, as polling
can be interrupted.
Resolve a problem in aux wait_pvmove_lv_ready() that was using an lvm
command to check for the UUID - but this was interfering with the VG
lock and delaying confirmation. This reduces the slow-down of the
test, so it can run faster.
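A minimal sketch of the idea (not necessarily the actual aux helper):
query the device-mapper UUID with dmsetup, so no lvm command runs and
no VG lock is touched:
  wait_pvmove_lv_ready() {
      local dmname=$1   # e.g. vg-pvmove0
      for i in $(seq 100); do
          # LV dm UUIDs start with the "LVM-" prefix
          dmsetup info --noheadings -c -o uuid "$dmname" 2>/dev/null |
              grep -q '^LVM-' && return 0
          sleep .1
      done
      return 1
  }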
Looks like a versioning increase was missed during development. So
with kernel >= 4.18, version 1.19 is enough to look like 1.20.
However, backported 1.19 targets seem to not provide all the
capabilities.
Added a comment that 'lvs' already initiates dmeventd.
Note: we don't have any query mechanism to check whether dmeventd is
already running, except accessing its socket, which basically starts
dmeventd if it's not running.
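A side-effect-free check could look like this (a sketch, assuming it
is enough to look for the process instead of touching the socket):
  if pgrep -x dmeventd >/dev/null; then
      echo "dmeventd is already running"
  fi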
For deterministic test results, no lvm2/dm services shall be present
and running on the system, as they may randomize test results. In case
they are found present, this test ends with a warning (not a failure).
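A sketch of such a check, assuming the usual systemd unit names:
  for unit in lvm2-monitor.service dm-event.service dm-event.socket; do
      systemctl is-active --quiet "$unit" &&
          echo "TEST WARNING: $unit is active"
  done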
Some older instances of 'mdadm' opened legs in RW mode, closed them,
and opened them again, expecting exclusive access. But a udev rule can
fire in between - so on these versions, slow down the whole mdadm run
by using strace, to give the system a bit more time to finish the udev
rule.
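The slow-down can look roughly like this (a sketch; devices are
hypothetical and the strace output is discarded):
  strace -f -o /dev/null mdadm --create /dev/md127 --run \
      --level=1 --raid-devices=2 "$dev1" "$dev2"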
Just like the lvm command, ignore a 0/xxxx report when judging the
status. Avoid using an infinite loop and limit report checking to 100
checks; if it would need more, something is not right.
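A sketch of the bounded polling (names are hypothetical):
  for i in $(seq 100); do
      s=$(dmsetup status "$vg-$lv")
      case $s in
      *' 0/'*) sleep .1 ;;   # 0/xxxx means no progress yet, keep polling
      *) break ;;
      esac
  done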
A combination of throttling and a slowed device is a bit faster. Also
add a FIXME about the multiple spawned polling processes when
activating an individual LV for a pvmove.