- support shrinking of raid0/1/4/5/6/10 LVs
- enhance lvresize-raid.sh tests: add raid0* and raid10
- fix have_raid4 in aux.sh to allow lvresize-raid.sh
  and other scripts to test raid4
Resolves: rhbz1394048
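For illustration, a minimal sketch of the newly supported shrink (VG/LV names and sizes are made up; any filesystem on the LV would have to be shrunk first):
lvcreate --type raid5 -i 3 -L 24M -n lv vg
lvreduce -f -L -8M vg/lv    # shrink the raid5 LV by 8M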
Commit cfb6ef654d introduced
support to change RAID region size.
Add:
- missing conditions to make any segment type work with
  it in lv_raid_convert(); a temporary workaround used until
  the cli validation patches get merged
- tests requesting "-R " to lvconvert-raid-takeover.sh,
  involving a cleanup of the script
Related: rhbz1392947
Add:
- support to change region size of existing RaidLVs
(all RAID LV types but raid0/raid0_meta)
- lvconvert-raid-regionsize.sh with test variations
for different RAID types and region sizes
Resolves: rhbz1392947
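For illustration, a hedged sketch of the new conversion (names are made up; -R is the short form of --regionsize exercised by the tests):
lvconvert -y -R 1M vg/raid_lv
lvconvert -y --regionsize 2M vg/raid_lv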
Waiting for zeroing may take long enough for the 'raid' sync to finish.
So make the test run faster without zeroing, and better avoid giving
the race a chance to happen (i.e. lvcreate finishing only after the
array is already in sync).
Add:
- support for segment types raid6_{ls,rs,la,ra}_6
(striped raid with dedicated last Q-Syndrome SubLVs)
- conversion support from raid5_{ls,rs,la,ra} to/from raid6_{ls,rs,la,ra}_6
- setting convenient segtypes on conversions from/to raid4/5/6
- related tests to lvconvert-raid-takeover.sh, factoring
  out _lvcreate,_lvconvert functions
Related: rhbz1366296
Add:
- support for segment type raid6_n_6 (striped raid with dedicated last parity/Q-Syndrome SubLVs)
- conversion support from striped/raid0/raid0_meta/raid4 to/from raid6_n_6
- related tests to lvconvert-raid-takeover.sh
Related: rhbz1366296
Add:
- support for segment type raid5_n (striped raid with dedicated last parity SubLVs)
- conversion support from striped/raid0/raid0_meta/raid4 to/from raid5_n
- related tests to lvconvert-raid-takeover.sh
Related: rhbz1366296
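A rough sketch of the takeover this enables (names and sizes are illustrative; the array has to be in sync before converting back):
lvcreate -i 3 -L 48M -n lv vg           # plain striped LV
lvconvert -y --type raid5_n vg/lv       # striped -> raid5_n
lvconvert -y --type striped vg/lv       # ... and back, once synchronized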
Since we still experience occasional test failures, slow
things down even more to avoid the race.
Add support for 'quick' table changes between normal & delayed tables.
Solves: https://bugzilla.redhat.com/1280496
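A hedged illustration of the idea with plain dmsetup (device name, size and delay are assumptions, not the test suite's actual helper):
dmsetup create test-dev --table "0 2048 linear /dev/sdb 0"
dmsetup load test-dev --table "0 2048 delay /dev/sdb 0 200"
dmsetup resume test-dev     # swap in the delayed table
dmsetup load test-dev --table "0 2048 linear /dev/sdb 0"
dmsetup resume test-dev     # and back to the normal one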
The only reasonable behaviour here is to error on
any number out of the accepted range (i.e. no numbers
wrapping around with some hidden logic).
As this is a plain bug, there is no support for
backward compatibility, since no one should
set numbers >UINT32_MAX and expect 0 or an error
depending on how big a number was used.
TODO: more fields might need to be converted.
An addition to commits 87117c2b25 and 0b8bf73a63 to avoid refreshing two
times altogether, thus avoiding issues related to a clustered, remotely
activated RaidLV. Avoid the need to repeat "lvchange --refresh RaidLV"
two times as a workaround to refresh a RaidLV. The fix handles removal
of the temporary *-missing-* devices created for any missing segments
in RAID SubLVs during activation.
Because the kernel dm-raid target isn't able to handle transiently
failing devices properly we need
"[dm-devel][PATCH] dm raid: fix transient device failure processing"
as well.
test: add lvchange-raid-transient-failures.sh
and enhance lvconvert-raid.sh
Resolves: rhbz1025322
Related: rhbz1265191
Related: rhbz1399844
Related: rhbz1404425
To be able to detect whether an lvm2 command leaks some
'unexpected' device, remove all devices with the test's
own commands before the test exits, so the test teardown
can now check what was unexpectedly 'left' behind.
Use the LVM_DBUSD_TEST_MODE env variable to customize what we test.
The default is the same as before, where we try to test all combinations
of all modes. Renamed to make it consistent with the other env variables
that are used in the unit test.
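A hypothetical invocation; the accepted values of the variable and the test entry point are assumptions here:
LVM_DBUSD_TEST_MODE=1 python3 lvmdbustest.py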
- We check that all properties match the introspection data. We
don't verify values for every property as only lvm knows what they
should be.
- We are testing vg.Move
The default dbus python library mode of operation is to leverage
introspection. However, this introspection data isn't accessible
for users of the library and they have to specifically retrieve
the introspection data too. This resulted in many introspection
calls being made. This change eliminates introspection calls if
we are testing multiple concurrent test clients. If it's a single
client we will leverage a reduced amount of introspection data to
verify the introspection data is correct. Typically clients don't
leverage introspection data nearly as much as this test client.
When the env variable LVM_DBUSD_PV_DEVICE_LIST is present and filled in
with at least 4 physical devices, the test will run concurrently with other
instances as long as they specify different devices in their
env variable.
When the env variable is not present, the test runs as it did before.
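For illustration, a hedged sketch of starting such an isolated run (device names, the list separator and the test entry point are assumptions):
export LVM_DBUSD_PV_DEVICE_LIST="/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde"
python3 lvmdbustest.py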
Looks like this isn't supported across versions. Need to add functionality
to the service to return the supported segment types, so we only use the
supported ones.
A 0 interval currently leads to a busy loop with lvmetad and the command.
Avoid testing this pathological case.
TODO: The code should possibly translate a zero interval into some small
sleep. With lvmpolld it's already 1/10s.
Add new targets:
make check_lvmpolld
make check_cluster_lvmpolld
make check_lvmetad_lvmpolld
make check_all_lvmpolld
So check_lvmetad runs only the base lvmetad test - there is too much
logic in the remaining targets.
The previous behaviour is available via check_all_lvmpolld.
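Typical usage; the T= test selection shown here is the test suite's existing convention and is assumed to apply to the new targets as well:
make check_lvmpolld
make check_lvmetad_lvmpolld
make check_lvmpolld T=lvconvert-raid.sh    # run a single test under lvmpolld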
Check for dm-raid target versions with the non-standard raid4 mapping that expects the dedicated
parity device in the last rather than the first slot, and prohibit creating, activating or
converting to such LVs from striped/raid0* or vice versa in order to avoid data corruption.
Add related tests to lvconvert-raid-takeover.sh.
Resolves: rhbz1388962
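For reference, a hand-run equivalent of the version check (the library performs an equivalent check internally at runtime):
dmsetup targets | grep ^raid    # prints the loaded dm-raid target version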
On conversions between striped/raid0* and raid4, the kernel expects
the dedicated raid4 parity SubLV in the first segment area rather than
in the last one it has been allocated to, so the data mapping is not correct.
Enhance lvconvert (lib/metadata/raid_manip.c) to shift the dedicated
parity SubLVs on conversions from striped/raid0* to raid4 and vice versa.
In case of raid0_meta -> raid4, where the MD raid0 personality has already
stored the RAID array device positions in the superblocks, the MetaLVs have to
be cleared so that the kernel doesn't fail validating the array positions
after lvm has shifted them up by one.
Add more tests to lvconvert-raid-takeover.sh, including one to check for
mapping flaws by converting a created raid4 with a filesystem -> striped
and fsck'ing it (see the sketch after this message).
Whilst on it:
- add missing direct striped -> raid4 conversion to the takeover array
  to avoid an interim conversion from striped -> raid0*
- clean up the takeover array
- allow lvconvert to actually call lv_raid_convert() on all takeover requests
in order to check parameters and display messages provided by takeover
functions rather than just "...not supported" from within lvconvert
- fix a typo
Resolves: rhbz1386148
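A rough reproduction of that mapping check (VG/LV names, sizes and filesystem are illustrative; the array must be in sync before the takeover, and interim steps may be needed depending on the lvm2 version):
lvcreate --type raid4 -i 3 -L 64M -n lv vg
mkfs.ext4 /dev/vg/lv
lvconvert -y --type striped vg/lv
fsck -fn /dev/vg/lv     # must come out clean if the data mapping is correct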
This test never passes on loopback devices, so we will skip it unless the
pv devices are real devices whose names contain `/dev/sd`.
We always fail otherwise because we need lvm to run slowly enough for a
timer to pop, and loopback devices are too fast.
If you run multiple invocations of unittest.main, you have to pass exit=False,
and then the test run always ends with a 0 exit code. Add the ability to store the
result of each invocation of the test and exit with a non-zero exit code
if any one of them fails.
Fix a missing wait so we have paired waiting.
Also 'wait' for the precise PID to get its 'exit' code.
Test for 'error' replacing only with newer snapshot targets.
The old one will wait for the resume.
Note: 'wait -n' is not always available, so it can't be used.
Pre 1.9 dm-raid targets status output was racy, which caused
the device status chars to be unreliable _during_ synchronization.
This shows particularly with the tiny test devices used.
Enhance lvchange-rebuild-raid.sh to not check status
chars _during_ synchronization. Just check afterwards.
Though I'm not quite sure why we push this limit onto the user,
the current --repair logic requires the user to wait outside of the command.
TODO: I'm not quite sure this repair logic is 'the most wanted'.
The dm-raid target now rejects device rebuild requests during ongoing
resynchronization thus causing 'lvconvert --repair ...' to fail with
a kernel error message. This also regresses automatic
repair via the dmeventd RAID plugin in case raid_fault_policy="allocate"
is configured in lvm.conf.
Previously, allowing such a repair request required cancelling the
resynchronization of any still accessible DataLVs, hence risking
potential data loss.
The patch allows the resynchronization of still accessible DataLVs to
finish by rejecting any 'lvconvert --repair ...' in the meantime.
It enhances the dmeventd RAID plugin to be able to repair automatically
by postponing the repair until synchronization has ended.
More tests are added to lvconvert-rebuild-raid.sh to cover single
and multiple DataLV failure cases for the different RAID levels.
- resolves: rhbz1371717
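For context, the dmeventd behaviour above is driven by this lvm.conf setting, which can be inspected with lvmconfig:
lvmconfig activation/raid_fault_policy
# prints raid_fault_policy="allocate" when automatic repair is configured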
Commit 199697accf rerouted the function
for printing the cache volume origin to an lvm2app function - which
however had a bug. So restore the original functionality
and print the correct LV as the cache origin LV.
Because of the different code paths we need to test job handling with
all operations. Test now runs virtually everything with timeout == 0
and timeout == 15 so that we test both use cases.
Note: If a client passes -1 for the timeout value they need to make
sure that their connection time out is also infinite. Otherwise, if the client
times out the service side hangs any new dbus calls until the job
that is in progress completes. Not sure why this behavior is occurring
at this time, but it appears to be a limitation/bug of the dbus-python library.
The methods were returning an instance of the object instead of the
object path which was causing an exception when the result was returned
with the job object as we are explicitly trying to return an object path.
Unit test added which re-creates the issue and verifies the fix.
Add a simple helper/wrapper 'check' function to check the result
of a dmsetup call, e.g.:
check grep_dmsetup table vg-lv "grep_expected"
check grep_dmsetup status vg-lv -v "grep_unexpected"
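A minimal sketch of how such a wrapper could look (the real helper lives in the test harness; names and details here are assumptions):
grep_dmsetup() {
    local subcmd=$1 dev=$2
    shift 2
    dmsetup "$subcmd" "$dev" | tee out    # capture the table/status output
    grep "$@" out                         # pass remaining args straight to grep
}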