When the BLKZEROOUT ioctl fails, it should not stop us from trying
direct zeroing as a fallback, since the ioctl is only an optimization.
If that direct fallback succeeds, we can continue with creating the
new LV.
Related report: https://issues.redhat.com/browse/RHEL-58737
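
Schematically, the fallback looks like the sketch below (illustrative
only, not lvm2's actual code; zero_range() is a made-up helper and the
error handling is simplified):

    #include <stdint.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/fs.h>           /* BLKZEROOUT */

    /* Zero 'len' bytes at 'offset' on the block device 'fd'.
     * Try the BLKZEROOUT ioctl first; if it fails for any reason,
     * fall back to plain writes of a zeroed buffer. */
    static int zero_range(int fd, uint64_t offset, uint64_t len)
    {
            uint64_t range[2] = { offset, len };
            char buf[4096] = { 0 };

            if (ioctl(fd, BLKZEROOUT, range) == 0)
                    return 0;                       /* fast path worked */

            /* The optimization failed - zero directly instead. */
            while (len) {
                    size_t n = len > sizeof(buf) ? sizeof(buf) : len;
                    ssize_t w = pwrite(fd, buf, n, (off_t) offset);

                    if (w <= 0)
                            return -1;              /* direct zeroing failed too */
                    offset += (uint64_t) w;
                    len -= (uint64_t) w;
            }

            return 0;
    }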
commit a125a3bb50 "lv_remove: reduce commits for removed LVs"
changed "lvremove <vgname>" from removing one LV at a time,
to removing all LVs in one vg write/commit. It also changed
the behavior if some of the LVs could not be removed, from
removing those LVs that could be removed, to removing nothing
if any LV could not be removed. This caused a regression in
shared VGs using sanlock, in which the on-disk lease was
removed for any LV that could be removed, even if the command
decided to remove nothing. This would leave LVs without a
valid on-disk lease, and "lock failed: error -221" would be
returned for any command attempting to lock the LV.
Fix this by not freeing the on-disk leases until after the
command has decided to go ahead and remove everything, and
has written the VG metadata.
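
Schematically, the fixed ordering is: decide about all LVs, write and
commit the metadata, and only then drop the leases. A self-contained
illustration (hypothetical types and callbacks, not the actual
lvremove/lvmlockd code):

    #include <stdbool.h>
    #include <stddef.h>

    struct lv { bool removable; };

    /* Remove all LVs or nothing.  Freeing a lease before the final
     * decision is what left LVs unlockable ("error -221"). */
    static bool remove_all_or_nothing(struct lv *lvs, size_t n,
                                      bool (*commit_metadata)(void),
                                      void (*free_ondisk_lease)(struct lv *))
    {
            for (size_t i = 0; i < n; i++)
                    if (!lvs[i].removable)          /* e.g. locked by another host */
                            return false;           /* remove nothing, leases untouched */

            if (!commit_metadata())                 /* vg write + commit */
                    return false;

            for (size_t i = 0; i < n; i++)
                    free_ondisk_lease(&lvs[i]);     /* safe only after the commit */

            return true;
    }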
Before the fix:
node1: lvchange -ay vg/lv1
node2: lvchange -ay vg/lv2
node1: lvs
lv1 test -wi-a----- 4.00m
lv2 test -wi------- 4.00m
node2: lvs
lv1 test -wi------- 4.00m
lv2 test -wi-a----- 4.00m
node1: lvremove -y vg/lv1 vg/lv2
LV locked by other host: vg/lv2
(lvremove removed neither of the LVs, but it freed
the lock for lv1, which could have been removed
except for the proper locking failure on lv2.)
node1: lvs
lv1 test -wi------- 4.00m
lv2 test -wi------- 4.00m
node1: lvremove -y vg/lv1
LV vg/lv1 lock failed: error -221
(The lock for lv1 is gone, so nothing can be done with it.)
This provides better hints when trying to resize the fs on top of an LV.
Also needs a3f6d2f593 for proper operation.
❯ lvs -o name,size vg/swap
lv_name lv_size
swap 60.00m
Before:
❯ lvextend -L72m vg/swap
Size of logical volume vg/swap changed from 60.00 MiB (15 extents) to 72.00 MiB (18 extents).
Logical volume vg/swap successfully resized.
❯ lvreduce -L60m vg/swap
File system swap found on vg/swap.
File system device usage is not available from libblkid.
❯ lvreduce -L50m vg/swap
Rounding size to boundary between physical extents: 52.00 MiB.
File system swap found on vg/swap.
File system device usage is not available from libblkid.
After:
❯ lvextend -L72m vg/swap
Size of logical volume vg/swap changed from 60.00 MiB (15 extents) to 72.00 MiB (18 extents).
Logical volume vg/swap successfully resized.
❯ lvreduce -L60m vg/swap
File system swap found on vg/swap.
File system size (60.00 MiB) is equal to the requested size (60.00 MiB).
File system reduce is not needed, skipping.
Size of logical volume vg/swap changed from 72.00 MiB (18 extents) to 60.00 MiB (15 extents).
Logical volume vg/swap successfully resized.
❯ lvreduce -L50m vg/swap
Rounding size to boundary between physical extents: 52.00 MiB.
File system swap found on vg/swap.
File system size (60.00 MiB) is larger than the requested size (52.00 MiB).
File system reduce is required and not supported (swap).
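
The new behaviour boils down to comparing the detected file system size
with the requested LV size before deciding whether an fs reduce is
needed or even possible. A rough sketch (made-up helper and parameters,
not the actual lvresize fs-handling code):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Decide whether the file system must shrink before the LV is
     * reduced to 'requested' bytes, and whether that is supported. */
    static bool fs_reduce_allowed(const char *fstype, uint64_t fs_size,
                                  uint64_t requested, bool fs_can_shrink)
    {
            if (fs_size <= requested) {
                    printf("File system reduce is not needed, skipping.\n");
                    return true;                    /* reduce the LV right away */
            }

            if (!fs_can_shrink) {
                    printf("File system reduce is required and not supported (%s).\n",
                           fstype);
                    return false;                   /* refuse to reduce the LV */
            }

            return true;                            /* shrink the fs first, then the LV */
    }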
lvremove of a thin lv while the pool is inactive would
leave the pool locked but inactive.
lvcreate of a thin snapshot while the pool is inactive
would leave the pool locked but inactive.
lvcreate of a thin lv could activate the pool to check
a threshold before the pool lock was acquired in lvmlockd.
Revert 373372c8ab and instead update
our validation code to handle LVs with an empty segment - currently
we should need this only for the pvmove operation, so such an LV should
have the name 'pvmove%u'.
This fixes a problem where a user tried e.g. pvmove on a VG with a single
PV - as reported: https://github.com/lvmteam/lvm2/issues/148
Reported-by: bob@redhat.com
When the PVs in a VG have different sizes, the lvm2 allocator falls short
when asked to define resilient extended segments for a 100%FREE RaidLV
extension, and the RAID distinct-allocation check fails. The fix is to
release a memory pool on the resulting error path.
Until the lvm2 allocator gets enhanced (WIP) to do such complex (and other)
allocations properly, a workaround is to extend a RaidLV into any free space
on its already allocated PVs by naming those PVs on the lvextend command line,
then iteratively run further such lvextend commands to extend it to its
final intended size. Mind, this may be a non-trivial extension iteration.
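
The error-path part of the fix is the usual cleanup pattern; a minimal
sketch using the libdevmapper pool API (illustrative only, assuming
libdevmapper is available; try_allocation is a made-up callback):

    #include <stdbool.h>
    #include <libdevmapper.h>

    static int allocate_extents(bool (*try_allocation)(struct dm_pool *mem))
    {
            struct dm_pool *mem;

            if (!(mem = dm_pool_create("allocation", 1024)))
                    return 0;

            if (!try_allocation(mem))
                    goto bad;               /* e.g. distinct-allocation check failed */

            dm_pool_destroy(mem);
            return 1;

    bad:
            dm_pool_destroy(mem);           /* the release that was missing */
            return 0;
    }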
Shorten processing of the 'lvcreate -L -V -T' command and
avoid deactivation and reactivation (with thin_check) of the freshly
created, still empty thin-pool that will be used for the new thin volume
made with a single lvcreate command.
Fix/support creation and use of an external origin
across thin-pools - so a thin LV can use a thin LV from
some other thin-pool as its external origin (read-only).
When creating an external origin via 'lvcreate --type thin',
add validation that the LV is usable as an external origin,
since certain LVs cannot really be used this way.
Also call this function early during lvcreate cmdline arg
validation so we do not need to do unnecessary operations.
Introduce vdo_convert_params and use vdo_params from this structure
also with lvcreate_params.
Later we will use this for conversion of the thin-pool data volume to VDO.
Apply the same 'lvreduce' logic that exists for newer
systems (compiled with HAVE_BLKID_SUBLKS_FSINFO)
also to older systems, for one very common practical case where
the active LV does not have any signature/filesystem known to blkid.
The new variant recognized this situation and allowed the command to
proceed without requesting a prompt, while the older variant always
requested a confirmation prompt.
With this patch the command now works equally for both variants
for an active LV without a signature and allows reducing the LV
without prompting.
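
Schematically, the prompt decision now looks the same for both variants
(made-up helper, simplified; the real code inspects blkid data or wipe
signatures):

    #include <stdbool.h>

    /* An active LV carrying no signature/filesystem known to blkid can be
     * reduced without asking; anything else still needs confirmation
     * (or the full fs-info handling where HAVE_BLKID_SUBLKS_FSINFO is set). */
    static bool lvreduce_needs_prompt(bool lv_is_active, bool has_known_signature)
    {
            if (lv_is_active && !has_known_signature)
                    return false;           /* nothing detected, proceed silently */

            return true;
    }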
Update the pool conversion function to also handle conversion of
a thick LV to a thin LV by moving the thick LV into the thin-pool data LV
and creating a fully provisioned thin LV on top of this volume.
Rework the existing conversion to use insert_layer_for_lv so
the uuid is now kept with the thin-pool - this should however not
really matter as we are doing a full deactivation & activation cycle.
With conversion to a thin LV the user can use the same set of arguments
to set the chunk size.
TODO: add some smart code to decide the best values for chunk sizes.
For proper functionality of insert_layer_for_lv we need to
move more bits to the layered LV.
Add some missing new types and correct the usage in the caller,
so the new LV type is set after the movement.
When lvm2 calculates the maximal usable COW size and crops the
user-requested size to this value, don't return an error result from
the 'lvextend' operation.
We already apply the same logic when resizing thin-pool beyond
the supported maximal size.
FIXME: The return code error logic here is somewhat fuzzy.
The recent fix 05c2b10c5d ensures that raid LV images are not
using the same devices. This was happening in the lvextend commands
used by this test, so fix the test to use more devices to ensure
redundancy.
In the case of e.g. 3 PVs, creating or extending a RaidLV causes SubLV
collocation, putting segments of different rimage (and potentially
larger rmeta) SubLVs onto the same PV. For redundant RaidLVs this
compromises redundancy. Fix by detecting such bogus allocation on
lvcreate/lvextend and rejecting the request.
lvreduce uses _lvseg_get_stripes() which was unable to get raid stripe
info with an integrity layer present. This caused lvreduce on a
raid+integrity LV to fail prematurely when checking stripe parameters.
An unhelpful error message about stripe size would be printed.
There is no easy way to detect whether a device supports zeroing,
and the kernel also zeroes the device when it's not directly supported,
but with an extra message:
operation not supported error, dev X, sector Y op 0x9:(WRITE_ZEROES)...
So to avoid generating such a message with every 'lvcreate', use a
plain write of a zeroed page for zeroing of up to 8K.
(Maybe we can go with even larger sizes.)
It looks like force was not being used to enable crypt resize,
but rather to force an inconsistency between LV and crypt
sizes, so this is either not needed or force in this case
should have some other meaning.
This reverts commit ed808a9b54.
Update previous commit
"lvresize: only resize crypt when fs resize is enabled"
to enable crypt resizing when --force is set and --resizefs
is not set. This is because it's been allowed in the past
and people have used it, but it's not a good idea.
There were a couple of cases where lvresize, without --fs resize,
was resizing the crypt layer above the LV. Resizing the crypt
layer should only be done when fs resizing is enabled (even if the
fs is already small enough due to being independently reduced.)
Also, check the size of the crypt device to see if it's already
been reduced independently, and skip the cryptsetup resize if
it's not needed.
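
The resulting decision can be summarized as below (illustrative helper
only, not the actual lvresize code):

    #include <stdbool.h>
    #include <stdint.h>

    /* Only touch the crypt layer when fs resizing is enabled, and skip the
     * cryptsetup resize when the dm-crypt device already has the wanted size
     * (e.g. it was reduced independently beforehand). */
    static bool crypt_resize_needed(bool fs_resize_enabled,
                                    uint64_t crypt_size, uint64_t wanted_size)
    {
            if (!fs_resize_enabled)
                    return false;

            return crypt_size != wanted_size;
    }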
18722dfdf4 lvresize: restructure code
mistakenly changed the overprovisioning check from applying
to all lv_is_thin_type lvs to only lv_is_thin_pool lvs, so
it no longer applied when extending thin lvs. The result
was missing warning messages when extending thin lvs.
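
Schematically (a self-contained illustration, not the lvm2 macros
themselves): lv_is_thin_type() covers thin pools and thin volumes, while
lv_is_thin_pool() matches only pools, so the narrowed check silently
dropped the warning for thin volumes.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    enum lv_kind { LV_LINEAR, LV_THIN_POOL, LV_THIN_VOLUME };

    /* Restored condition: warn for any thin-type LV, not only for pools. */
    static bool is_thin_type(enum lv_kind k)
    {
            return k == LV_THIN_POOL || k == LV_THIN_VOLUME;
    }

    static void warn_if_overprovisioned(enum lv_kind k,
                                        uint64_t sum_virtual, uint64_t pool_size)
    {
            if (!is_thin_type(k))   /* the regression checked only LV_THIN_POOL */
                    return;

            if (sum_virtual > pool_size)
                    fprintf(stderr, "WARNING: sum of all thin volume sizes "
                            "exceeds the size of the thin pool.\n");
    }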