The previous patch that introduced support for thin-pool with VDO
did not correctly handle the header size, as this part is not fully
usable yet. We are going to try to use 0, but the current state of
the code does not yet comply with this logic, so keep vdo_header_size
during conversion and also correctly pass through virtual_extents to
VDO formatting.
Add code to handle creation of a thin-pool with a VDO data backend,
which can be seen as a compressed, deduplicated thin-pool.
To avoid the need to change too many internal APIs, pass the
conversion parameters for creating the thin-pool data volume via
cmd_context.
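If this works as described, creating such a pool from the command
line would look roughly like the following (a sketch only; the exact
option name and syntax should be checked against lvcreate(8)):

    # create a 100G thin-pool whose data volume sits on VDO,
    # i.e. compressed and deduplicated (sketch, not verified syntax)
    lvcreate --type thin-pool --pooldatavdo y -L 100G --name pool vg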
Introduce struct vdo_convert_params {} to pass in all the parameters
needed for the conversion of an LV to a vdopool + vdo LV.
The function convert_vdo_lv() is also able to create a new LV and
swap segments, so the passed-in LV can later be used for further
conversion; this refactoring makes it ready for more advanced usage.
Introduce vdo_convert_params and use vdo_params from this structure
with lvcreate_params as well.
Later we will use this for the conversion of a thin-pool data volume
to VDO.
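A rough C sketch of the idea (struct and field names here are
illustrative, not the actual lvm2 definitions): bundle everything
needed for the LV -> vdopool + vdo LV conversion into one structure,
and embed it in the creation parameters so the lvcreate and lvconvert
paths share it:

    #include <stdint.h>

    /* Illustrative only: all conversion inputs travel together. */
    struct vdo_convert_params_sketch {
            uint64_t virtual_extents; /* size of the resulting vdo LV */
            uint32_t vdo_header_size; /* kept as-is during conversion */
            int do_zero;              /* hypothetical: wipe new volume */
    };

    /* Illustrative only: the creation path reuses the same struct. */
    struct lvcreate_params_sketch {
            /* ... other creation parameters ... */
            struct vdo_convert_params_sketch vcp;
    };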
If a RaidLV mapping needs to be refreshed because a RAID leg device
(or pair of devices) temporarily failed and reappeared while writes
to the LV occurred during the failure, the requirement is reported by
the volume health character 'r' in position 9 of the LV's attribute
field (see 'man lvs' for the other volume health characters).
As this character is easily overlooked, this patch adds messages at
the top of the lvs command output informing the user explicitly about
the fact.
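For illustration, the check amounts to looking at the 9th character
of the attribute string; a minimal sketch (hypothetical helper, not
lvm2 code):

    #include <string.h>

    /* Return 1 if an lv_attr string (e.g. "rwi-a-r-r-") reports that
     * the RaidLV needs a refresh: health character 'r', position 9. */
    static int raid_needs_refresh(const char *lv_attr)
    {
            return strlen(lv_attr) >= 9 && lv_attr[8] == 'r';
    }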
Fix typos in previous commit 3589e515d.
Both commands default [raid_](min|max)_recovery_rate to 0 but ensure
min_recovery_rate is not larger than max_recovery_rate; this results
in command failure without asking the user to also define
max_recovery_rate >= min_recovery_rate.
Fix both commands by setting max_recovery_rate = min_recovery_rate
in case "lvcreate/lvchange --minrecoveryrate Size ..." requests a
larger value than the current max_recovery_rate without also giving
option "--maxrecoveryrate Size ..." with a size greater than or equal
to the minimum.
The first lv_attr flag is 'i' or 'I' for a raid image
('i': raid image, 'I': out-of-sync raid image).
For integrity raid images (_iorig), the flag was not being set.
pvs -A|--allpvs
Show PVs that would otherwise be excluded by the devices file.
pvscan -A|--allpvs
Show PVs that would otherwise be excluded by the devices file.
For devices that are included in the devices file, the
device ID is displayed in place of the usual "lvm2"
format and size.
(pvs -a|--all is unchanged, and shows devices not formatted as PVs.)
A pvid string read from system.devices could be less
than ID_LEN, since system.devices fields can be edited.
Ensure the pvid buffer is ID_LEN+1 bytes even if the
string read from the file is shorter.
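A minimal sketch of the idea (hypothetical helper; ID_LEN is 32 in
lvm2): copy the string into a zero-filled ID_LEN+1 buffer so a short
pvid is still NUL-terminated and safe to compare:

    #include <string.h>

    #define ID_LEN 32

    /* Copy a possibly-short pvid string from system.devices into a
     * fixed, zero-padded buffer of ID_LEN+1 bytes. */
    static void copy_pvid(char buf[ID_LEN + 1], const char *pvid)
    {
            memset(buf, 0, ID_LEN + 1); /* guarantees NUL termination */
            strncpy(buf, pvid, ID_LEN); /* never writes past ID_LEN */
    }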
Include info in the temp file to confirm that it should be used.
The temp file is meant to suppress repeated, identical searches
for the same PVIDs on the same set of devices. Write to the file
a count and hash of the missing PVIDs and a count and hash of the
devices to search. A subsequent command will ignore and remove
the temp file if any of these values differ. We don't want to
suppress a search if a change has occurred, and a missing PV could
be found by scanning devices.
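A sketch of what gets recorded (the hash shown is a stand-in; the
real code may use a different hash function and file layout):

    #include <stdint.h>
    #include <stdio.h>

    /* djb2-style hash over a list of strings (illustrative only). */
    static uint32_t hash_strings(const char **strs, unsigned count)
    {
            uint32_t h = 5381;
            for (unsigned i = 0; i < count; i++)
                    for (const char *p = strs[i]; *p; p++)
                            h = h * 33 + (unsigned char)*p;
            return h;
    }

    /* Record what the search covered; a later command ignores and
     * removes the temp file if any of these values differ. */
    static void write_search_info(FILE *fp,
                                  const char **pvids, unsigned npvids,
                                  const char **devs, unsigned ndevs)
    {
            fprintf(fp, "pvid_count=%u\npvid_hash=%u\n",
                    npvids, hash_strings(pvids, npvids));
            fprintf(fp, "dev_count=%u\ndev_hash=%u\n",
                    ndevs, hash_strings(devs, ndevs));
    }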
Problematic scenario:
. the device for a PV has no wwid, so it's identified in system.devices
with IDTYPE=devname IDNAME=/dev/foo
. user adds/enables a wwid for the device
. on reboot, the device name changes, e.g. now /dev/bar
. the code that searches for the new device name includes an
optimization to skip looking on devs that have a wwid, on
the basis that a device with a wwid won't have IDTYPE=devname
. this optimization causes lvm to not look for the PV on /dev/bar
since that device now has a wwid, so the PV is not found
. the optimization is enabled by search_for_devnames="auto"
. change the default to search_for_devnames="all" which does not
use the problematic optimization
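The setting lives in the devices section of lvm.conf; the new default
corresponds to:

    devices {
            # "all": search every device when a PVID with
            #        IDTYPE=devname is missing (new default).
            # "auto": skip devices that have a wwid (the
            #        optimization that can miss a renamed
            #        device which has gained a wwid).
            search_for_devnames = "all"
    }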
- add new comparison between old and new entries, and use this
as the basis for new dedicated output for check and update
- add new --refresh option to search for missing PVIDs on all
devices, and possibly update the device ID
- internally, only use the term "refresh" for cases where a
new device ID may be found and assigned for a missing PVID
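Based on the descriptions above, usage would look roughly like this
(a sketch; see lvmdevices(8) for the authoritative syntax):

    # compare old and new entries and report differences
    lvmdevices --check

    # apply updates to the devices file
    lvmdevices --update

    # also search all devices for missing PVIDs and possibly
    # assign a new device ID
    lvmdevices --refresh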