The first lv_attr flag is 'i' or 'I' for a raid image.
(i: raid image, I: out of sync raid image)
For integrity raid images (_iorig), the flag was not being set.
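As an illustration (VG/LV names here are hypothetical), the flag can be inspected with:
    lvs -a -o lv_name,lv_attr vg
where the hidden _iorig images should now report 'i' (or 'I' when out of sync) in the first attr position.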
With commit d7e922480e, 'lvconvert -m' may fail if we try to remove the
first leg while it is out-of-sync and the other leg is in-sync.
This hotfix allows such a down-conversion to proceed.
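For illustration (assuming a 2-leg raid1 LV vg/lv whose first leg is still out of sync), the down-conversion is something like:
    lvconvert -m0 vg/lv
optionally naming the PV whose leg should be removed.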
Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>
Apply the same 'lvreduce' logic that exists on newer systems
(compiled with HAVE_BLKID_SUBLKS_FSINFO) also on older systems,
for one very common practical case: the active LV does not have any
signature/filesystem known to blkid.
The new variant recognized this situation and proceeded without
requesting a prompt, while the older variant always requested a
confirmation prompt.
With this patch the command now works equally for both variants:
an active LV without a signature can be reduced without prompting.
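For example (names and sizes are illustrative), reducing an active LV that carries no filesystem signature:
    lvreduce -L-1G vg/lv
should now proceed without a confirmation prompt on both build variants.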
Before checking the seg_type of the first area, check that some
area actually exists.
Since we now support the error and zero segtypes, these do not have
any PV areas present.
While creating a new LV for a pool volume, use a name from the 'pool_metadata%d'
naming sequence. This LV is later renamed to the pool's _tmeta/_cmeta, but if
there is any error in the middle, we may eventually leave such a 'volume' behind.
With this name it is slightly more obvious how it got there,
and when we handle the _pmspare name we also get a slightly more predictable
name for it.
For standard usage, however, this commit has no visible impact, as the
name is only used temporarily for the LV being cleaned.
Commit 3596558861 introduced a more fine-grained description.
However, 'disabled' might actually be more confusing than an empty field,
so keep only the info about 'not enabled', i.e. dmeventd is not allowed
to monitor an LV which otherwise could be monitored.
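As an illustration (assuming the reporting field name), the monitoring state can be checked with:
    lvs -o+seg_monitor vg/lv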
Update the pool conversion function to also handle conversion of a
thick LV to a thin LV, by moving the thick LV into the thin pool data LV
and creating a fully provisioned thin LV on top of this volume.
Rework the existing conversion to use insert_layer_for_lv, so
the UUID is now kept with the thin-pool; this should however not
really matter, as we are doing a full deactivation & activation cycle.
With conversion to a thin LV the user can use the same set of arguments
to set the chunk size.
TODO: add some smart code to decide the best values for chunk sizes.
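A hedged sketch of such a conversion (the exact option set for this new path is not spelled out here; the command line and chunk size below are illustrative only):
    lvconvert --type thin --chunksize 256k vg/lv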
For proper functionality of insert_layer_for_lv we need to
move more bits to the layered LV.
Add some missing new types and correct the caller's usage,
so the new LV type is set after the movement.
Validate the cache origin before the prompt.
Also add some rules to the command description file.
TODO:
more validation is also needed for lvcreate,
more complex rules with "OR" seem to be needed.
Avoid activation when we are going to skip zeroing for the 'error' segtype
(so it does not error out).
Also skip zeroing for a 'zero' segtype LV (it already reads back as zeros).
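For illustration (assuming these segtypes are created directly, as is done e.g. in test scripts; names and sizes are made up):
    lvcreate --type error -L1G -n err vg
    lvcreate --type zero  -L1G -n zero vg
Neither LV needs the zeroing pass, and the error LV is no longer activated just for it.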
When lvm2 calculates the maximal usable COW size and crops the
user-requested size to this value, don't return an error result from
the 'lvextend' operation.
We already apply the same logic when resizing a thin-pool beyond
the supported maximal size.
FIXME: The return code error logic here is somewhat fuzzy.
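For example (hypothetical names; the origin is assumed to be small enough that the requested COW size exceeds the usable maximum):
    lvextend -L+100G vg/snapshot_of_small_lv
now crops the snapshot's COW size to the maximal usable value and returns success instead of an error.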
This VDO parameter existed in the early stages of integrating VDO into lvm2,
but it was later removed from the vdoformat tool, so any non-zero value
would actually cause an error on lvcreate.
The option was not stored on disk in lvm2 metadata.
Remove this VDO parameter from the lvm2 sources.
(The parameter will still be accepted on the command line through the
--vdosettings option, but it will be ignored.)
The recent fix 05c2b10c5d ensures that raid LV images do not
use the same devices. This was happening in the lvextend commands
used by this test, so fix the test to use more devices to ensure
redundancy.
With e.g. 3 PVs, creating or extending a RaidLV can cause SubLV
collocation, thus putting segments of different rimage (and potentially
larger rmeta) SubLVs onto the same PV. For redundant RaidLVs this
compromises redundancy. Fix by detecting such bogus allocation on
lvcreate/lvextend and rejecting the request.
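A hypothetical illustration (assuming a VG with only 3 PVs): a request such as
    lvcreate --type raid5 -i 3 -L1G -n r5 vg
needs 4 rimage/rmeta SubLV pairs, so two of them would have to share a PV; with this fix such an allocation is detected as bogus and the command is rejected.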
lvreduce uses _lvseg_get_stripes() which was unable to get raid stripe
info with an integrity layer present. This caused lvreduce on a
raid+integrity LV to fail prematurely when checking stripe parameters.
An unhelpful error message about stripe size would be printed.
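For illustration (the options are existing lvcreate/lvreduce options, but the exact scenario is assumed): with a raid LV created with integrity, e.g.
    lvcreate --type raid1 -m1 --raidintegrity y -L1G -n rr vg
a subsequent
    lvreduce -L-512M vg/rr
previously failed with an unhelpful stripe-size error instead of reducing the LV.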
There is no easy way to detect whether a device supports zeroing,
and the kernel also zeroes a device when it is not directly supported,
but with an extra message:
operation not supported error, dev X, sector Y op 0x9:(WRITE_ZEROES)...
So, to avoid generating such a message with every 'lvcreate', use a
standard write of a zeroed page for zeroing of up to 8K.
(Maybe we can go with even larger sizes.)
Remove old code that became incorrect at some point.
It's probably a fragment of an old condition that was left
behind because it wasn't understood. We don't want to drop
the MISSING_PV flag just because the PV has no mda in use.
The device that was missing may have stale data, so the user
needs to decide if the device should be removed or restored.