Fix stripe count and stripe size parameter validation for RAID LVs and
include the existing automatic setting of these parameters, based on
the current shape of the RAID LV, in case they are not fully set on
the command line.
Previously, this was done only for a certain subset of cases given by this
condition (where 'stripes' is the '-i|--stripes' cmd line arg
and 'stripe_size' is the '-I|--stripesize' cmd line arg):
!(stripes == 1 || (stripes > 1 && stripe_size))
This condition is a bit hard to follow at first sight and there
are no comments around explaining why it is used,
so let's analyze it a bit more.
First, let's convert it to an equivalent condition (De Morgan's law)
so it's easier to read for humans:
stripes != 1 && !(stripes > 1 && stripe_size)
Note: Both stripes and stripe_size are unsigned integers, so they can't be negative.
Now, based on that condition, we were running the code to deduce the
stripes/stripe_size values and do the checks ("the code") only if both of
these were true:
- stripes is different from 1
- we don't have stripes > 1 and stripe_size defined at the same time
But this is not correct in all cases, because:
A) if someone uses stripes = 0, then "the code" is executed
(correct)
B) if someone uses stripes = 1, then "the code" is not executed
(wrong: we still need to be able to check the args against the
existing RAID LV stripes to see whether they match)
- if someone uses stripes > 1, then "the code" is:
C) if stripe_size = 0, executed
(correct)
D) if stripe_size > 0, not executed
(wrong: we still want to check against existing RAID LV stripes)
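To double-check the case analysis above, here is a small standalone C
sketch (illustration only, not the actual lvm2 code) that evaluates the
original guard and its rewritten equivalent for all four cases:

#include <assert.h>
#include <stdint.h>

/* Original guard and its De Morgan equivalent; 'stripes' and
 * 'stripe_size' are the unsigned values from -i|--stripes and
 * -I|--stripesize. */
static int guard_original(uint32_t stripes, uint32_t stripe_size)
{
        return !(stripes == 1 || (stripes > 1 && stripe_size));
}

static int guard_equivalent(uint32_t stripes, uint32_t stripe_size)
{
        return stripes != 1 && !(stripes > 1 && stripe_size);
}

int main(void)
{
        assert(guard_original(0, 0) == 1);   /* A) "the code" runs */
        assert(guard_original(1, 0) == 0);   /* B) "the code" is skipped */
        assert(guard_original(3, 0) == 1);   /* C) "the code" runs */
        assert(guard_original(3, 128) == 0); /* D) "the code" is skipped */

        /* The rewritten form behaves identically for any input. */
        for (uint32_t s = 0; s < 4; s++)
                for (uint32_t z = 0; z < 2; z++)
                        assert(guard_original(s, z) == guard_equivalent(s, z));

        return 0;
}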
Current issues with this condition:
Case B) ends up with a segfault:
❯ lvextend -i 1 -l+1 vg/lvol0
Rounding size 4.00 MiB (1 extents) up to stripe boundary size 8.00 MiB (2 extents).
Segmentation fault (core dumped)
Case D) ends up with errors like:
❯ lvextend -i 3 -l+1 -I128k vg/lvol0
Rounding size 4.00 MiB (1 extents) up to stripe boundary size 8.00 MiB (2 extents).
Rounding size (4 extents) up to stripe boundary size for segment (5 extents).
Size of logical volume vg/lvol0 changed from 8.00 MiB (2 extents) to 20.00 MiB (5 extents).
LV lvol0: segment 1 with len=5 has inconsistent area_len 3
Couldn't read all logical volumes for volume group vg.
Failed to write VG vg.
Conclusion:
The condition needs to be removed so we always run "the code" to check
the striping args given on the command line against the existing RAID LV
striping. The reason is that we don't want to allow changing the stripe
count for RAID LVs through lvextend, so we need to end up with the
error:
"Unable to extend <RAID segment type> segment type with different number of stripes"
(We support changing the striping only through lvconvert's reshaping functionality.)
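For illustration, a minimal standalone sketch of the always-run check
(hypothetical simplified function, not the actual lvm2 code) could look
like this:

#include <stdio.h>
#include <stdint.h>

/* Compare striping requested on the command line with the striping of
 * the existing RAID LV and refuse a stripe count change on lvextend. */
static int check_raid_striping(const char *segtype_name,
                               uint32_t existing_stripes,
                               uint32_t requested_stripes)
{
        if (!requested_stripes)
                return 1;       /* not given, keep the existing striping */

        if (requested_stripes != existing_stripes) {
                fprintf(stderr, "Unable to extend %s segment type "
                        "with different number of stripes.\n", segtype_name);
                return 0;
        }

        return 1;
}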
When 'lvresize -r' is used to resize the volume, it is valid to
resize an LV even to its current size, as the command then runs
the fs-resize utility to eventually upsize the fs to the current
volume size.
The return code of such a command then reflects the return value
of this fs-resize tool.
This fixes a regression introduced when support
for the --fs option was added (2.03.17).
Replace usage of dm_hash with radix_tree to quickly find an LV name
within a VG, and also to index PV names for the set of available PVs.
This PV index is only needed during the import, but instead
of passing a 'radix_tree *' everywhere, just keep it within
the VG struct as well and release this PV index radix_tree
once the parsing is finished.
This also makes it easier to replace this structure
in the future if needed.
lv_set_name now uses radix_tree remove+insert to keep the lv_names
tree in sync and usable for find_lv queries.
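A rough standalone sketch of the rename pattern (a fixed array stands in
for the radix_tree here just to keep the example self-contained; the names
are made up):

#include <string.h>

struct logical_volume { char name[128]; };

/* Stand-in name index; lvm2 keys a radix_tree by LV name for this. */
struct name_index {
        struct logical_volume *lvs[64];
        unsigned count;
};

static struct logical_volume *index_find(struct name_index *idx, const char *name)
{
        for (unsigned i = 0; i < idx->count; i++)
                if (!strcmp(idx->lvs[i]->name, name))
                        return idx->lvs[i];
        return NULL;
}

static int index_remove(struct name_index *idx, const char *name)
{
        for (unsigned i = 0; i < idx->count; i++)
                if (!strcmp(idx->lvs[i]->name, name)) {
                        idx->lvs[i] = idx->lvs[--idx->count];
                        return 1;
                }
        return 0;
}

static int index_insert(struct name_index *idx, struct logical_volume *lv)
{
        if (idx->count >= 64)
                return 0;
        idx->lvs[idx->count++] = lv;
        return 1;
}

/* Renaming must drop the old key and insert the new one, otherwise
 * later find_lv-style lookups would still resolve the stale name. */
static int lv_set_name_sketch(struct name_index *idx,
                              struct logical_volume *lv, const char *new_name)
{
        if (!index_remove(idx, lv->name))
                return 0;
        strncpy(lv->name, new_name, sizeof(lv->name) - 1);
        lv->name[sizeof(lv->name) - 1] = '\0';
        return index_insert(idx, lv);
}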
Since we detect the 'debug' level only after calling 'log_debug()', all
the arguments are evaluated, so in this case display_lvname() was
preparing a string that is never used when debugging is not enabled.
Since these strings are on a 'hot path' and it's already known
which VG is being worked on, in these few cases just use lv->name.
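A standalone illustration of that evaluation cost (the logging below is a
simplified stand-in, not the lvm2 log machinery):

#include <stdarg.h>
#include <stdio.h>

static int _current_level = 4;          /* debug would be e.g. 7 */

/* The level is checked only inside the call, so every argument passed
 * to log_debug() has already been evaluated by then. */
static void _log(int level, const char *fmt, ...)
{
        va_list ap;

        if (level > _current_level)
                return;

        va_start(ap, fmt);
        vfprintf(stderr, fmt, ap);
        va_end(ap);
        fputc('\n', stderr);
}

#define log_debug(fmt, ...) _log(7, fmt, ##__VA_ARGS__)

struct logical_volume { const char *vg_name; const char *name; };

/* Stand-in for display_lvname(): builds "vg/lv" on every call, which is
 * wasted work on a hot path when debugging is off. */
static const char *display_lvname_sketch(const struct logical_volume *lv)
{
        static char buf[256];
        snprintf(buf, sizeof(buf), "%s/%s", lv->vg_name, lv->name);
        return buf;
}

static void hot_path(const struct logical_volume *lv)
{
        /* Before: the "vg/lv" string is prepared even when thrown away. */
        log_debug("Processing %s.", display_lvname_sketch(lv));

        /* After: the VG is known from context, lv->name costs nothing. */
        log_debug("Processing %s.", lv->name);
}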
When the BLKZEROOUT ioctl fails, it should not stop us from trying
direct zeroing as a fallback action, since the ioctl is only an optimization.
We should be able to continue with the new LV creation if that
direct fallback then succeeds.
Related report: https://issues.redhat.com/browse/RHEL-58737
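A minimal standalone sketch of that fallback order (simplified, not the
actual lvm2 zeroing code):

#include <stdint.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>           /* BLKZEROOUT */

/* Try the BLKZEROOUT ioctl first as an optimization; if it fails for any
 * reason, fall back to writing zeroed buffers instead of failing the
 * whole LV creation. */
static int zero_range(int fd, uint64_t offset, uint64_t len)
{
        uint64_t range[2] = { offset, len };
        char buf[4096] = { 0 };

        if (!ioctl(fd, BLKZEROOUT, range))
                return 1;                       /* fast path succeeded */

        if (lseek(fd, (off_t) offset, SEEK_SET) < 0)
                return 0;

        while (len) {
                size_t n = len < sizeof(buf) ? (size_t) len : sizeof(buf);
                ssize_t w = write(fd, buf, n);

                if (w <= 0)
                        return 0;
                len -= (uint64_t) w;
        }

        return 1;
}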
commit a125a3bb50 "lv_remove: reduce commits for removed LVs"
changed "lvremove <vgname>" from removing one LV at a time,
to removing all LVs in one vg write/commit. It also changed
the behavior if some of the LVs could not be removed, from
removing those LVs that could be removed, to removing nothing
if any LV could not be removed. This caused a regression in
shared VGs using sanlock, in which the on-disk lease was
removed for any LV that could be removed, even if the command
decided to remove nothing. This would leave LVs without a
valid on-disk lease, and "lock failed: error -221" would be
returned for any command attempting to lock the LV.
Fix this by not freeing the on-disk leases until after the
command has decided to go ahead and remove everything, and
has written the VG metadata.
Before the fix:
node1: lvchange -ay vg/lv1
node2: lvchange -ay vg/lv2
node1: lvs
lv1 test -wi-a----- 4.00m
lv2 test -wi------- 4.00m
node2: lvs
lv1 test -wi------- 4.00m
lv2 test -wi-a----- 4.00m
node1: lvremove -y vg/lv1 vg/lv2
LV locked by other host: vg/lv2
(lvremove removed neither of the LVs, but it freed
the lock for lv1, which could have been removed
except for the proper locking failure on lv2.)
node1: lvs
lv1 test -wi------- 4.00m
lv2 test -wi------- 4.00m
node1: lvremove -y vg/lv1
LV vg/lv1 lock failed: error -221
(The lock for lv1 is gone, so nothing can be done with it.)
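A minimal sketch of the corrected ordering (hypothetical helpers, not the
actual lvm2/lvmlockd code):

struct lv_item { const char *name; int removable; };

static int can_remove(const struct lv_item *lv) { return lv->removable; }
static void remove_lv(const struct lv_item *lv) { (void) lv; }
static void free_ondisk_lease(const struct lv_item *lv) { (void) lv; }
static int write_and_commit_vg(void) { return 1; }

static int lvremove_all(const struct lv_item *lvs, unsigned count)
{
        unsigned i;

        /* Decide first: if any LV cannot be removed, remove nothing and,
         * crucially, free no on-disk lease. */
        for (i = 0; i < count; i++)
                if (!can_remove(&lvs[i]))
                        return 0;

        for (i = 0; i < count; i++)
                remove_lv(&lvs[i]);

        if (!write_and_commit_vg())
                return 0;

        /* Only now, after the VG metadata has been written, drop leases. */
        for (i = 0; i < count; i++)
                free_ondisk_lease(&lvs[i]);

        return 1;
}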
This provides better hints when trying to resize the fs on top of an LV.
Also needs a3f6d2f593 for proper operation.
❯ lvs -o name,size vg/swap
lv_name lv_size
swap 60.00m
Before:
❯ lvextend -L72m vg/swap
Size of logical volume vg/swap changed from 60.00 MiB (15 extents) to 72.00 MiB (18 extents).
Logical volume vg/swap successfully resized.
❯ lvreduce -L60m vg/swap
File system swap found on vg/swap.
File system device usage is not available from libblkid.
❯ lvreduce -L50m vg/swap
Rounding size to boundary between physical extents: 52.00 MiB.
File system swap found on vg/swap.
File system device usage is not available from libblkid.
After:
❯ lvextend -L72m vg/swap
Size of logical volume vg/swap changed from 60.00 MiB (15 extents) to 72.00 MiB (18 extents).
Logical volume vg/swap successfully resized.
❯ lvreduce -L60m vg/swap
File system swap found on vg/swap.
File system size (60.00 MiB) is equal to the requested size (60.00 MiB).
File system reduce is not needed, skipping.
Size of logical volume vg/swap changed from 72.00 MiB (18 extents) to 60.00 MiB (15 extents).
Logical volume vg/swap successfully resized.
❯ lvreduce -L50m vg/swap
Rounding size to boundary between physical extents: 52.00 MiB.
File system swap found on vg/swap.
File system size (60.00 MiB) is larger than the requested size (52.00 MiB).
File system reduce is required and not supported (swap).
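A hypothetical simplified sketch of the decision behind those messages
(not the actual lvm2 fs-handling code):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Compare the detected file system size with the requested LV size
 * before reducing; swap cannot be shrunk, so a required reduce of a
 * swap device is refused. */
static int fs_reduce_allowed(const char *fstype, uint64_t fs_size,
                             uint64_t requested_size)
{
        if (fs_size <= requested_size) {
                printf("File system reduce is not needed, skipping.\n");
                return 1;               /* LV reduce can proceed */
        }

        if (!strcmp(fstype, "swap")) {
                printf("File system reduce is required and not supported (%s).\n",
                       fstype);
                return 0;
        }

        return 1;                       /* other fs types are resized first */
}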
lvremove of a thin lv while the pool is inactive would
leave the pool locked but inactive.
lvcreate of a thin snapshot while the pool is inactive
would leave the pool locked but inactive.
lvcreate of a thin lv could activate the pool to check
a threshold before the pool lock was acquired in lvmlockd.
Revert 373372c8ab and instead update
our validation code to handle LVs with an empty segment - currently
we should need this only for the pvmove operation, thus such an LV should
have the name 'pvmove%u'.
This fixes a problem where a user tried e.g. pvmove on a VG with a single
PV - as reported: https://github.com/lvmteam/lvm2/issues/148
Reported-by: bob@redhat.com
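A hypothetical simplified form of that validation rule (not the actual
lvm2 code):

#include <stdio.h>

/* An LV with no segments is only accepted while it is the temporary
 * pvmove LV, i.e. its name has the form "pvmove%u". */
static int empty_lv_allowed(const char *lv_name, unsigned seg_count)
{
        unsigned n;
        char c;

        if (seg_count)
                return 1;               /* nothing special to allow */

        return sscanf(lv_name, "pvmove%u%c", &n, &c) == 1;
}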
In case of different PV sizes in a VG, the lvm2 allocator falls short
when asked to resiliently define extended segments for a 100%FREE RaidLV
extension, and a RAID distinct-allocation check fails. The fix is to
release a memory pool on the resulting error path.
Until the lvm2 allocator gets enhanced (WIP) to do such complex (and other)
allocations properly, a workaround is to extend a RaidLV to any free space on
its already allocated PVs by defining those PVs on the lvextend command line,
then iteratively running further such lvextend commands to extend it to its
final intended size. Mind, this may be a non-trivial extension iteration.
Shorten processing of the 'lvcreate -L -V -T' command and
avoid the deactivation and reactivation with thin_check of the newly
created empty thin-pool that will be used for the new thin volume
made with a single lvcreate command.
Fix/support creation and usage of an external origin
across thin-pools - so a thin LV can use a thin LV from
some other thin-pool as an external origin (read-only).
When creating an external origin via 'lvcreate --type thin',
add validation that the LV is usable as an external origin,
since certain LVs cannot really be used this way.
Also call this function early during lvcreate cmdline arg
validation so we do not need to do unnecessary operations.
Introduce vdo_convert_params and use vdo_params from this structure
also with lvcreate_params.
Later we will use this for conversion of a thin-pool data volume to VDO.
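A rough illustration of the intended grouping (all field and struct names
below are assumptions for the sketch, not the actual lvm2 definitions):

/* VDO settings shared between creation and a future conversion of a
 * thin-pool data volume to VDO. */
struct vdo_params_sketch {
        unsigned use_compression;
        unsigned use_deduplication;
        unsigned long long slab_size_mb;
};

struct vdo_convert_params_sketch {
        struct vdo_params_sketch vdo_params;    /* shared VDO settings */
        const char *lv_name;                    /* volume being converted */
        int do_activate;                        /* activate after conversion */
};

struct lvcreate_params_sketch {
        /* ...other creation options... */
        struct vdo_convert_params_sketch vcp;   /* lvcreate reuses the same params */
};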
Apply the same 'lvreduce' logic which exists for newer
systems (compiled with HAVE_BLKID_SUBLKS_FSINFO)
also to older systems, for one very common practical case where
the active LV does not have any blkid-known signature/filesystem.
The new variant recognized this situation and allowed proceeding
without requesting a prompt, while the older variant always
requested a confirmation prompt.
With this patch the command now works equally for both variants
for an 'active LV' without a signature and allows reducing the LV
without prompting.
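A minimal sketch of that decision (hypothetical helper, not the actual
lvm2 code):

/* Older builds (without HAVE_BLKID_SUBLKS_FSINFO): an active LV carrying
 * no blkid-known signature can be reduced without a confirmation prompt,
 * matching what the newer fs-aware code path already allows. */
static int skip_reduce_prompt(int lv_is_active, int has_blkid_signature)
{
        return lv_is_active && !has_blkid_signature;
}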
Update the pool conversion function to also handle conversion of
a thick LV to a thin LV by moving the thick LV into the thin pool data LV
and creating a fully provisioned thin LV on top of this volume.
Rework the existing conversion to use insert_layer_for_lv so
the uuid is now kept with the thin-pool - this should however not
really matter as we are doing a full deactivation & activation cycle.
With conversion to a thin LV, the user can use the same set of arguments
to set the chunk size.
TODO: add some smart code to decide the best values for chunk sizes.
For proper functionality of insert_layer_for_lv we need to
move more bits to the layered LV.
Add some missing new types and correct the usage in the caller,
so the new LV type is set after the movement.
When lvm2 calculates the maximal usable COW size and crops the
user-requested size to this value, don't return an error result from
the 'lvextend' operation.
We already apply the same logic when resizing a thin-pool beyond
the supported maximal size.
FIXME: The return code error logic here is somewhat fuzzy.
The recent fix 05c2b10c5d ensures that raid LV images are not
using the same devices. This was happening in the lvextend commands
used by this test, so fix the test to use more devices to ensure
redundancy.
In the case of e.g. 3 PVs, creating or extending a RaidLV causes SubLV
collocation, thus putting segments of different rimage (and potentially
larger rmeta) SubLVs onto the same PV. For redundant RaidLVs this will
compromise redundancy. Fix by detecting such a bogus allocation on
lvcreate/lvextend and rejecting the request.
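A hypothetical simplified version of such a check (not the actual lvm2
allocator code):

#include <string.h>

/* No two rimage SubLVs of a redundant RaidLV may share a PV, otherwise
 * losing that one PV takes out more than one leg. */
static int raid_allocation_is_redundant(const char *const *rimage_pvs,
                                        unsigned rimage_count)
{
        unsigned i, j;

        for (i = 0; i < rimage_count; i++)
                for (j = i + 1; j < rimage_count; j++)
                        if (!strcmp(rimage_pvs[i], rimage_pvs[j]))
                                return 0;       /* collocated: reject the request */

        return 1;
}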