Reverts previously added udevsettle call.
It seems to be unrelated: while udev on an old system may take over 10
minutes to finish its very slow and CPU-intensive work, it does not
interact directly with the created device, only accessing the
/dev/mapper/control node via dmsetup, so the device is occasionally
blocked by something else.
The patch helps a bit when lvm2 is built with udev_sync support disabled
but udevd is running on the system - in that case it randomly influences
even unrelated tests - so before every test wait at least until udevd
has settled.
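A minimal sketch of such a per-test wait, assuming udevadm is available on
the test host (the exact helper the suite ends up using may differ):

    # Let udevd finish any outstanding event processing before the test starts.
    udevadm settle --timeout=15 || true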
Initial testing of thin pool's metadata with thin repairing tools.
Try to use the tools from the configuration settings, but allow them
to be overridden by setting these variables:
LVM_TEST_THIN_CHECK_CMD,
LVM_TEST_THIN_DUMP_CMD,
LVM_TEST_THIN_REPAIR_CMD
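For illustration, a hedged sketch of how such an override could look in a
shell test helper (the fallback command names and the metadata_dev variable
are assumptions, not the suite's actual code):

    # Prefer the user-supplied command, otherwise fall back to a default tool.
    THIN_CHECK=${LVM_TEST_THIN_CHECK_CMD:-thin_check}
    THIN_DUMP=${LVM_TEST_THIN_DUMP_CMD:-thin_dump}
    THIN_REPAIR=${LVM_TEST_THIN_REPAIR_CMD:-thin_repair}
    "$THIN_CHECK" "$metadata_dev"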
FIXME: test reveals some more important bugs:
pvremove -ff also needs --yes
vgremove -ff does not remove metadata when there are no real LVs.
vgreduce is not able to reduce a VG with a pool without the pool's PVs.
Reshape the code a bit to make socketpair 'swappable' with a plain old
pipe call.
Display status for FAILED error.
Increase the buffer to always hold at least 1 page size.
Print error results with capitals.
1) When converting from an x-way mirror/raid1 to a y-way mirror/raid1,
the default behaviour should be to keep the same segment type.
2) When converting from linear to mirror or raid1, the default behaviour
should honor the mirror_segtype_default.
3) When converting and the '--type' argument is specified, the '--type'
argument should be honored.
The tests should have caught such conditions, but errors in the tests
caused the issue to go unnoticed. The code has been fixed to perform #2
properly, the tests
have been corrected to properly test for #2, and a few other tests
were changed to explicitly specify the '--type mirror' when necessary.
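As a hedged illustration of cases #2 and #3 above (VG and LV names are
placeholders):

    lvconvert -m 1 $vg/$lv1                  # default segtype follows mirror_segtype_default (#2)
    lvconvert --type mirror -m 1 $vg/$lv1    # explicit '--type mirror' must be honored (#3)
    lvconvert --type raid1 -m 1 $vg/$lv1     # explicit '--type raid1' must be honored (#3)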
A known issue with kmem_cache is causing failures while testing
RAID 4/5/6 device replacement. Blacklist the offending kernel
so that these tests are not performed there.
Since our current vgcfgbackup/restore doesn't deal with differences
in active volumes between the current and the restored set of
volumes, run the test with inactive LVs.
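A rough sketch of the intended test flow, assuming the usual
vgcfgbackup/vgcfgrestore options (the backup file name is arbitrary):

    vgchange -an $vg                        # keep LVs inactive so activation state cannot differ
    vgcfgbackup -f "$TESTDIR/backup.vg" $vg
    vgcfgrestore -f "$TESTDIR/backup.vg" $vg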
Rewrite check lv_on and add new lv_tree_on
Move more code unrelated to the pvmove test out to the check & get sections
(so they do not obfuscate the trace output unnecessarily).
Use new lv_tree_on()
NOTE: unsure how the snapshot origin should be accounted here.
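Hedged usage sketch (helper names follow the ones mentioned above; the
argument layout is an assumption):

    check lv_on $vg $lv1 "$dev2"        # the LV itself must live on $dev2
    check lv_tree_on $vg $lv1 "$dev2"   # the whole LV tree (images, metadata, ...) must live on $dev2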
Split pvmove-all-segments into separate tests for raid and thins
(so the test output properly shows what has been skipped in the test).
Update usage of "" around shell vars.
trim needs to trim both sides now.
trim also removes debug.log, since it is only called when the lvm command
has finished properly (so if something fails afterwards, there
is no misleading debug trace in the log).
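A minimal sketch of such a trim helper, assuming bash and the debug.log
name used above (the real implementation may differ):

    trim() {
        local s=$1
        s=${s#"${s%%[![:space:]]*}"}    # strip leading whitespace
        s=${s%"${s##*[![:space:]]}"}    # strip trailing whitespace
        rm -f debug.log                 # the command finished properly, drop its debug trace
        echo "$s"
    }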
'die' evaluates the given string - so \n can be used for
multiline error reports.
Also remove debug.log, since the command finished properly when we
call 'die'.
Note: we should not call 'die' after an lvm command failure.
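A hedged sketch of what 'die' amounts to; echo -e is just one way to get
the described \n expansion, not necessarily the suite's exact code:

    die() {
        rm -f debug.log     # the last lvm command finished properly, its trace is not useful
        echo -e "$@" >&2    # -e expands \n for multiline error reports
        exit 1
    }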
lvchange-raid.sh checks to ensure that the 'p'artial flag takes
precedence over the 'w'ritemostly flag by disabling and reenabling
a device in the array. Most of the time this works fine, but
sometimes the kernel can notice the device failure before it is
reenabled. In that case, the attr flag will not return to 'w', but
to 'r'efresh. This is because 'r'efresh also takes precedence over
the 'w'ritemostly flag. So, we also do a quick check for 'r' and
not just 'w'.
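A hedged sketch of the relaxed check; the position of the health character
within lv_attr and the sub-LV name are assumptions here:

    attr=$(lvs --noheadings -o lv_attr "$vg/${lv1}_rimage_1" | tr -d ' ')
    health=${attr:8:1}      # assumed: 9th attr character is the health bit
    case "$health" in
        w|r) ;;             # 'w'ritemostly or 'r'efresh are both acceptable
        *) die "unexpected health bit '$health' in attr '$attr'" ;;
    esac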
Add a very simple hack for embedding /var/log/messages into
the test output - it's not ideal since it sometimes breaks lines,
but still gives valuable info.
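One simple way to do this, sketched here (the path and the cleanup via trap
are assumptions; the real hack may differ):

    # Follow the syslog in the background so kernel/udev messages end up
    # interleaved with the test output.
    tail -F /var/log/messages &
    SYSLOG_TAIL_PID=$!
    trap 'kill "$SYSLOG_TAIL_PID" 2>/dev/null' EXIT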
The same corner cases that exist for snapshots on mirrors exist for
any logical volume layered on top of a mirror. (One example is when
a mirror image fails and a non-repair LVM command is the first to
detect it via label reading. In this case, the LVM command will hang
and prevent the necessary LVM repair command from running.) When
a better alternative exists, it makes no sense to allow a new target
to stack on mirrors as a new feature. Since RAID is now capable of
running EX in a cluster and thin is not active-active aware, it makes
sense to pair these two rather than mirror+thinpool.
As further background, here are some additional comments that I made
when addressing a bug related to mirror+thinpool:
(https://bugzilla.redhat.com/show_bug.cgi?id=919604#c9)
I am going to disallow thin* on top of mirror logical volumes.
Users will have to use the "raid1" segment type if they want this.
This bug has come down to a choice between:
1) Disallowing thin-LVs from being used as PVs.
2) Disallowing thinpools on top of mirrors.
The problem is that the code in dev_manager.c:device_is_usable() is unable
to tell whether there is a mirror device lower in the stack from the device
being checked. Pretty much anything layered on top of a mirror will suffer
from this problem. (Snapshots are a good example of this; and option #1
above has been chosen to deal with them. This can also be seen in
dev_manager.c:device_is_usable().) When a mirror failure occurs, the
kernel blocks all I/O to it. If there is an LVM command that comes along
to do the repair (or a different operation that requires label reading), it
would normally avoid the mirror when it sees that it is blocked. However,
if there is a snapshot or a thin-LV that is on a mirror, the above code
will not detect the mirror underneath and will issue label reading I/O.
This causes the command to hang.
Choosing #1 would mean that thin-LVs could never be used as PVs - even if
they are stacked on something other than mirrors.
Choosing #2 means that thinpools can never be placed on mirrors. This is
probably better than we think, since it is preferred that people use the
"raid1" segment type in the first place. However, RAID* cannot currently
be used in a cluster volume group - even in EX-only mode. Thus, a complete
solution for option #2 must include the ability to activate RAID logical
volumes (and perform RAID operations) in a cluster volume group. I've
already begun working on this.
Creation, deletion, [de]activation, repair, conversion, scrubbing
and changing operations are all now available for RAID LVs in a
cluster - provided that they are activated exclusively.
The code has been changed to ensure that no LV or sub-LV activation
is attempted cluster-wide. This includes the often overlooked
operations of activating metadata areas for the brief time it takes
to clear them. Additionally, some 'resume_lv' operations were
replaced with 'activate_lv_excl_local' when sub-LVs were promoted
to top-level LVs for removal, clearing or extraction. This was
necessary because it forces the appropriate renaming actions that
occur via resume in the single-machine case, but won't happen in
a cluster due to the necessity of acquiring a lock first.
The *raid* tests have been updated to allow testing in a cluster.
For the most part, this meant creating devices with '-aey' if they
were to be converted to RAID. (RAID requires the converting LV to
be EX because it is a condition of activation for the RAID LV in
a cluster.)
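For illustration, the typical test adjustment looks roughly like this
(names and sizes are placeholders):

    lvcreate -aey -L 16M -n $lv1 $vg       # exclusive activation, a precondition for RAID in a cluster
    lvconvert --type raid1 -m 1 $vg/$lv1   # conversion can now be tested cluster-wide as well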
Simulate a system crash and the restart of pvmove after the next VG
activation.
The test catches a regression introduced in 2.02.99 by the partial tree
creation changes.
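Sketch of the simulated crash, under the assumption that killing a
backgrounded pvmove and reactivating the VG is an adequate stand-in for a
real crash:

    pvmove -i 100 "$dev1" &     # run pvmove in the shell background
    PVMOVE_PID=$!
    kill -9 "$PVMOVE_PID"       # "crash": pvmove dies without cleaning up
    vgchange -an $vg
    vgchange -ay $vg            # the next activation should restart the interrupted pvmove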
Function to create a slower, less responsive device.
Useful for testing things which need to happen during an ongoing
operation - with a 'delayed' device, much smaller device sizes are
needed and the test is much more deterministic (though still not optimal).
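A hedged sketch using the device-mapper 'delay' target (the device name,
target name and delay value are arbitrary):

    # Build a device that answers 200 ms late, on top of an existing test device.
    SIZE=$(blockdev --getsz "$dev1")
    echo "0 $SIZE delay $dev1 0 200" | dmsetup create delayed_dev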
After enable_dev, the following commands were not
consistently seeing the PV on it.
Alasdair explained, "whenever enabling/disabling devs
outside the tools (and you aren't trying to test how
the tools cope with suddenly appearing/disappearing
devices) use "vgscan""
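So the pattern in the tests becomes roughly the following (the aux
enable_dev helper name is assumed from the suite's conventions):

    aux enable_dev "$dev1"   # bring the device back outside of the lvm tools
    vgscan                   # rescan so subsequent commands consistently see the PV again
    pvs "$dev1"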
The original "check" target stays confined to a local device directory, while
check_full does 6 flavours, 3 with a local device directory and 3 with the
global /dev directory (the latter are prefixed with "s" for
"system"). I.e.: normal, cluster, lvmetad, snormal, scluster, slvmetad.