The previous patch that introduced support for thin-pool with VDO
did not correctly handle the header size - this part is not fully usable
yet. We are going to try to use 0, but the current state of the code is
not yet compliant with this logic, so keep vdo_header_size during
conversion and also correctly pass through virtual_extents to VDO
formatting.
Add code to handle creation of a thin-pool with a VDO data backend,
which can be seen as a compressed deduplicated thin-pool.
To avoid the need to change too many internal APIs, pass the conversion
parameters for creating the thin-pool data volume via cmd_context.
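A minimal sketch of this "stash it on the context" idea, assuming a
hypothetical settings struct; this is illustrative only and not lvm2's
actual cmd_context layout:

  #include <stdio.h>

  /* Hypothetical stand-ins; lvm2's real cmd_context is far richer. */
  struct pool_data_settings {
      unsigned long long virtual_extents;
      int use_vdo;
  };

  struct cmd_context {
      /* Parameters stashed here so deeply nested allocation code can
       * read them without changing every function signature in between. */
      struct pool_data_settings *pool_data;
  };

  static void allocate_pool_data_volume(struct cmd_context *cmd)
  {
      if (cmd->pool_data && cmd->pool_data->use_vdo)
          printf("creating VDO data volume with %llu virtual extents\n",
                 cmd->pool_data->virtual_extents);
  }

  int main(void)
  {
      struct pool_data_settings settings = { .virtual_extents = 2560, .use_vdo = 1 };
      struct cmd_context cmd = { .pool_data = &settings };

      allocate_pool_data_volume(&cmd);
      return 0;
  }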
Introduce struct vdo_convert_params {} to pass in all the parameters
needed for the conversion of an LV to a vdopool + vdo LV.
Function convert_vdo_lv() is also able to create a new LV and swap
segments, so the passed-in LV can later be used for further
conversion; this refactoring makes it ready for more enhanced
usage.
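A rough, self-contained illustration of the parameter-struct pattern
described above; the field names below are assumptions for the sketch,
not the actual definition of struct vdo_convert_params:

  #include <stdint.h>
  #include <stdio.h>

  /* Illustrative only - field names are guesses, not lvm2's definition. */
  struct vdo_convert_params {
      const char *lv_name;        /* name of the new virtual LV on top of the pool */
      uint64_t virtual_extents;   /* size of the virtual LV in extents */
      uint32_t header_size;       /* kept non-zero during conversion for now */
      int do_zero;                /* wipe the start of the converted volume */
      int yes;                    /* skip interactive prompts */
  };

  /* Hypothetical consumer: one struct argument instead of many scalars,
   * so adding a new conversion knob does not ripple through every caller. */
  static int convert_lv_to_vdo(const char *lv, const struct vdo_convert_params *vcp)
  {
      printf("converting %s: name=%s extents=%llu header=%u\n",
             lv, vcp->lv_name,
             (unsigned long long)vcp->virtual_extents,
             (unsigned)vcp->header_size);
      return 0;
  }

  int main(void)
  {
      struct vdo_convert_params vcp = {
          .lv_name = "lvol0",
          .virtual_extents = 2560,
          .header_size = 512,
          .do_zero = 1,
          .yes = 1,
      };

      return convert_lv_to_vdo("vg/pool_data", &vcp);
  }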
When creating a thin pool or cache pool, an LV is allocated for
metadata, and for such an allocation the user should be able to
specify a list of PVs on the command line.
Also fix the unused passed list of PVs for thick-to-thin conversion,
where the code was using the whole PV set from the VG (but since this
was not enabled on the command line, the user could not hit this issue).
Also remove unneeded initialization of use_pvh.
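The intended selection logic could look roughly like the sketch below
(purely illustrative; pick_allocatable_pvs() and the string arrays are
stand-ins for lvm2's PV list handling):

  #include <stdio.h>

  /* Prefer the PVs the user named on the command line; only fall back
   * to the whole PV set of the VG when nothing was specified. */
  static const char **pick_allocatable_pvs(const char **cmdline_pvs,
                                           const char **vg_pvs)
  {
      return (cmdline_pvs && cmdline_pvs[0]) ? cmdline_pvs : vg_pvs;
  }

  int main(void)
  {
      const char *vg_pvs[] = { "/dev/sda1", "/dev/sdb1", "/dev/sdc1", NULL };
      const char *user_pvs[] = { "/dev/sdc1", NULL };

      const char **use_pvh = pick_allocatable_pvs(user_pvs, vg_pvs);
      printf("pool metadata allocated from: %s\n", use_pvh[0]);
      return 0;
  }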
Set the lock_args string in addition to doing initialization.
lvconvert calls lockd_init_lv_args() directly, skipping
the normal lockd_init_lv() which usually sets lock_args.
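A toy sketch of the fix (the struct and function below are invented
stand-ins, not lvmlockd's real API): the args-level init must also
record the lock_args string on the LV, because lvconvert bypasses the
wrapper that normally does it:

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  struct toy_lv {
      char *lock_args;    /* string later written into the VG metadata */
  };

  static int toy_lockd_init_lv_args(struct toy_lv *lv, const char *args)
  {
      /* previously only the lock itself was initialized here;
       * now also store the resulting lock_args string */
      if (!(lv->lock_args = strdup(args)))
          return -1;
      return 0;
  }

  int main(void)
  {
      struct toy_lv lv = { 0 };

      if (toy_lockd_init_lv_args(&lv, "1.0.0:70254592"))
          return 1;
      printf("lock_args=%s\n", lv.lock_args);
      free(lv.lock_args);
      return 0;
  }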
Adjust lvmlockd locking for swapmetadata to follow commit
ac36153e99 ("lvconvert: preserve UUID for swapped metadata").
Now that the LV uuid is swapped between LVs, the lvmlockd lock can
simply be moved between them, and the same lock can continue to be
used for the LV outside of the pool.
Once lvm2 repairs a pool's metadata LV and preserves the original
metadata LV with unmodified metadata, for such an LV in the VG use a
newly created UUID for the new _pmspare, and actually preserve the UUID
for this hidden _pmspare (if it exists).
The lockd lock needs to be freed for the LV that is becoming
the new metadata LV, and a new lockd lock needs to be created
for the old metadata LV that is becoming an independent LV.
Fixes b3e45219c2
Use the cachepool name to create the name for the metadata backup LV
(so we do not generate 2 'sequences' of metadata filenames).
Move path preparation before handling of _pmspare.
Also drop the extra call to sync_local_dev_names() as everything
already got in sync with the call of exec_cmd().
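Hedged sketch only - the helper name and format string below are
assumptions - but it illustrates deriving the backup name from the
pool's own name so that repairs produce a single numbered sequence:

  #include <stddef.h>
  #include <stdio.h>

  /* e.g. "cpool" + seq 0 -> "cpool_meta0"; using the pool's own name
   * keeps one consistent sequence instead of two competing ones. */
  static void format_meta_backup_name(char *buf, size_t len,
                                      const char *pool_name, unsigned seq)
  {
      snprintf(buf, len, "%s_meta%u", pool_name, seq);
  }

  int main(void)
  {
      char name[64];

      format_meta_backup_name(name, sizeof(name), "cpool", 0);
      printf("%s\n", name);
      return 0;
  }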
When repairing a thinpool or cachepool, lvm2 leaves a backup of the
original metadata volume. To avoid potential damage to this data, mark
such a volume as 'read-only' and also allow the user to use the
--setactivationskip option for this volume.
TODO: likely a better default would be to skip automatically, but
that might need some more thinking about the recovery reporting doc.
Replace the use of internal /dev/mapper names with the use of
public LV names /dev/vg/lv for use with repair tools.
For this, make the activation of the _pmspare LV be handled as
a component activation with a public name.
Metadata is already automatically activated this way (as read-only).
So if any 'error' happens, we leave public LVs in the
system.
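For illustration of why the public names are simpler for external
tools (a sketch under assumptions; the mangled internal name is shown
literally rather than computed):

  #include <stdio.h>

  int main(void)
  {
      const char *vg = "vg-test", *lv = "pool_tmeta";
      char pub[128];

      /* public path handed to repair tools such as thin_check */
      snprintf(pub, sizeof(pub), "/dev/%s/%s", vg, lv);

      /* the internal device-mapper name would need '-' -> '--' mangling,
       * e.g. /dev/mapper/vg--test-pool_tmeta */
      printf("public:   %s\n", pub);
      printf("internal: %s\n", "/dev/mapper/vg--test-pool_tmeta");
      return 0;
  }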
Update the pool conversion function to also handle conversion of a
thick LV to a thin LV by moving the thick LV into the thin-pool data LV
and creating a fully provisioned thin LV on top of this volume.
Rework the existing conversion to use insert_layer_for_lv so
the uuid is now kept with the thin-pool - this should however not
really matter as we are doing a full deactivation & activation cycle.
With the conversion to a thin LV the user can use the same set of
arguments to set the chunk size.
TODO: add some smart code to decide the best values for chunk sizes.
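A conceptual sketch of the layering step, with toy structures standing
in for lvm2's LV and segment lists (this is not insert_layer_for_lv
itself):

  #include <stdio.h>

  struct toy_lv {
      const char *name;
      const char *mapping;          /* stand-in for the real segment list */
      struct toy_lv *layer_below;   /* stand-in for the sub-LV reference */
  };

  /* The visible LV keeps its identity while its mapping moves to a new
   * hidden layer LV; that layer can then serve as the thin-pool data
   * volume, and a fully provisioned thin LV appears under the old name. */
  static void insert_layer(struct toy_lv *visible, struct toy_lv *layer)
  {
      layer->mapping = visible->mapping;
      visible->mapping = "single area over the layer LV";
      visible->layer_below = layer;
  }

  int main(void)
  {
      struct toy_lv thick = { "lvol1", "striped segments 0..1023", NULL };
      struct toy_lv data  = { "lvol1_tdata", NULL, NULL };

      insert_layer(&thick, &data);
      printf("%s now sits above %s (%s)\n",
             thick.name, thick.layer_below->name, data.mapping);
      return 0;
  }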
For proper functionality of insert_layer_for_lv we need to
move more bits to the layered LV.
Add some missing new types and correct the usage of the caller,
so the new LV type is set after the movement.
Validate the cache origin before the prompt.
Also add some rules to the command description file.
TODO:
more validation is also needed for lvcreate,
more complex rules with "OR" seem to be needed.