Instead of rereading the device list via cat, keep
the list in a bash array. (This also solves the
problem of spaces in device paths.)
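A minimal sketch of the intended pattern (the file name
DEVICES_FILE and the vgcreate use are hypothetical):

  # Read the device list into a bash array once, instead of
  # re-running cat each time; quoting "${DEVICES[@]}" keeps
  # paths containing spaces intact.
  declare -a DEVICES
  while IFS= read -r dev; do
      DEVICES+=("$dev")
  done < DEVICES_FILE

  vgcreate "$vg" "${DEVICES[@]}"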
Move usage of "$path" out of lvm shell usage,
since we don't support such thing there...
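A hypothetical illustration of the problem and the fix:

  # The lvm shell performs no shell-style variable expansion, so
  # "$path" typed inside it would be taken literally:
  #   lvm> pvcreate "$path"      <-- broken
  # Let bash expand the variable before the command reaches the shell:
  cmd="pvcreate $path"           # expanded here, by bash
  echo "$cmd" | lvm              # lvm shell sees the real device path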
When directly corrupting RAID images for the purpose of testing,
we must use direct I/O (or a 'sync' after the 'dd') to ensure that
the writes are not caught in the buffer cache in a way that is not
reachable by the top-level RAID device.
Unsure whether this is a feature or a bug of syncaction,
but the corruption needs to be physically present on the
media, as syncaction ignores the content of the buffer cache.
(Maybe lvchange should implicitly fsync all disks
that are members of the raid array before starting the test?)
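For example, a corruption write along these lines ($dev1, the
offset and the LV name are placeholders):

  # Bypass the buffer cache so the corruption lands on the media
  # itself, where a later 'lvchange --syncaction check vg/lv' will
  # actually see it:
  dd if=/dev/urandom of="$dev1" bs=512 count=8 seek=2048 oflag=direct
  # ...or, without direct I/O, flush the write explicitly:
  dd if=/dev/urandom of="$dev1" bs=512 count=8 seek=2048 conv=fsync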
Check that we know how to handle PVs with the same UUID.
The test currently does NOT work with lvmetad
(and it's unclear whether it even should - thus the test
error is currently lowered to a 'test warning').
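One way such a duplicate can be produced in a test (the commands
are illustrative, not the actual test body):

  # Clone the PV label and metadata onto a second device so that both
  # devices report the same PV UUID, then check the tools notice it:
  pvcreate "$dev1"
  dd if="$dev1" of="$dev2" bs=1M count=1 conv=fsync
  pvs 2>&1 | grep -i duplicate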
TODO: replace lib/test with a better shell script name
While the 'raid1/10' segment types were being handled inadvertently
by '_move_mirrors()', the parity RAIDs were not being properly checked
to ensure that the user had specified all necessary PVs when moving
them. Thus, internal errors were being triggered when only part of
a RAID LV was moved to the new VG. I've added a new function,
'_move_raid()', which properly checks over any affected RAID LVs and
ensures that all the necessary PVs are being moved.
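A hypothetical reproduction, assuming a 4-device raid5 LV and the
test suite's 'not' helper for asserting failure:

  # Moving only part of the RAID LV's PVs used to trigger internal
  # errors; with _move_raid() the incomplete case is rejected cleanly:
  lvcreate --type raid5 -i 3 -l 3 -n $lv1 $vg "$dev1" "$dev2" "$dev3" "$dev4"
  lvchange -an $vg/$lv1   # vgsplit requires inactive LVs
  not vgsplit $vg $vg1 "$dev1"                      # incomplete PV set
  vgsplit $vg $vg1 "$dev1" "$dev2" "$dev3" "$dev4"  # complete PV set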
vgsplit of RAID volumes was problematic because the metadata area
and data areas were always on the same PVs. This problem is similar
to one that was just fixed for mirrors that had log and images sharing
a PV (commit 9ac858f). We can now add RAID LVs to the tests for
vgsplit.
Note that there still seems to be an issue when specifying an
incomplete set of PVs when moving RAID LVs.
Given a named mirror LV, vgsplit will look for the PVs that compose it
and move them to a new VG. It does this by first looking at the log
and then the legs. If the log is on the same device as one of the mirror
images, a problem occurs. This is because the PV is moved to the new VG
as the log is processed and thus cannot be found in the current VG when
the image is processed. The solution is to check and see if the PV we are
looking for has already been moved to the new VG. If so, it is not an
error.
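A sketch of the scenario (layout forced with --alloc anywhere;
names are hypothetical):

  # Put the mirror log on the same PV as one of the images, then split
  # by LV name; the shared PV is moved while the log is processed and
  # must not be reported as missing when the image is processed:
  lvcreate --type mirror -m 1 -l 2 --mirrorlog disk --alloc anywhere \
      -n $lv1 $vg "$dev1" "$dev2"
  lvchange -an $vg/$lv1
  vgsplit -n $lv1 $vg $vg1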
Since the kill may take a varying amount of time
(especially when running with valgrind),
check that the LV has really been pvmoved.
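e.g. polling along these lines rather than a fixed sleep
(names hypothetical):

  # The killed command may complete the move late (valgrind slows it
  # down considerably), so poll until the LV really sits on $dev2:
  for i in $(seq 1 300); do
      lvs --noheadings -o devices $vg/$lv1 | grep -q "$dev2" && break
      sleep .1
  done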
Restore the initial restart of clvmd - it's currently
broken at various points; basically, a killed lvm2
command may leave clvmd in a confused state, leading
to reports of internal errors.