When PVs are created on LVs, remove the devices file entries
for the PVs when the LVs are removed. In general, the devices
file entries should be removed with lvmdevices --deldev when
the LVs are removed (lvremove is the equivalent of detaching
a device from the system when layering PVs on LVs.)
This change is effectively an automatic lvmdevices --deldev
command that is built into lvremove when the LV has a PV on it.
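For illustration, a minimal sketch of the sequence this affects (vg0/lv1 is a
hypothetical name; lvcreate, pvcreate, lvmdevices and lvremove are standard
lvm commands):

  # stack a PV on top of an LV and add it to the devices file
  lvcreate -L1G -n lv1 vg0
  pvcreate /dev/vg0/lv1
  lvmdevices --adddev /dev/vg0/lv1

  # previously the entry had to be dropped manually with
  # 'lvmdevices --deldev /dev/vg0/lv1'; with this change lvremove
  # removes the devices file entry itself
  lvremove -y vg0/lv1
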
Add a slight delay to wait until the 'started' dmeventd is
responding to another 'dmeventd -i' command.
TODO: the race observed here is somewhat unclear and might need
some more details.
There is no reason to wait for 10 seconds when removing 'vg' at test
exit - we just need to kill the 'sleep 10' process forked from flock.
(Not using 'flock -F' as this flag is not available on all machines
used for testing.)
Look not only for the whole 64-byte sequence,
but also for 32-byte, 16-byte and 8-byte parts of the key.
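A hedged sketch of such a search (KEY holding the key as a hex string and
DUMP naming the scanned data are hypothetical, and only leading fragments
are checked here for brevity):

  # look for the full 64-byte key and progressively shorter leading fragments
  for bytes in 64 32 16 8; do
          grep -q "${KEY:0:$((bytes * 2))}" "$DUMP" && echo "found ${bytes}-byte fragment"
  done
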
Currently, to get past memcpy ZMM problems, add a possible
workaround in the form of a GLIBC_TUNABLES setting.
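A hedged example of such a setting (the exact tunable used is an assumption;
glibc.cpu.hwcaps is the documented glibc knob for masking CPU features in its
string/memory routines, and './test' is a placeholder):

  # assumption: disabling AVX-512 keeps glibc memcpy from touching ZMM registers
  GLIBC_TUNABLES=glibc.cpu.hwcaps=-AVX512F ./test
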
Avoid using 'lvs' from the 'get' shell helper - as that would wait
until the whole group of processes is finished.
TODO: rethink what the point of starting 'dmeventd' with lvs would be.
It seems to break some rules.
Handle the case where we 'kill -9' a running dmeventd,
and a restarting 'dmeventd -R' still manages to contact this
'zombie' daemon, gets the list of monitored devices out of it,
and eventually 'runs' a new and, in this case,
unexpected instance of dmeventd.
Some tests do expect event_activation to be set.
So add explicit configuration of this setting in those tests,
but also add a new default which effectively sets it globally,
as it is the expected default (yet our testing rpms might
be built with event_activation disabled).
By adding this to each test individually - it's now easy
to locate such tests...
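For reference, a hedged way to inspect the effective value of the setting
involved (lvmconfig and global/event_activation are standard; whether a given
build defaults it to 1 is exactly what this change addresses):

  # show the event_activation value the commands will actually use
  lvmconfig --typeconfig current global/event_activation
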
Occasionally this test fails, as the UUID may actually
contain the LV[d] string, causing grep to mismatch the
UUID and the LV name and eventually fail the test for the wrong reason.
As a simple workaround, print the LV name first and
check that the name is followed by a space character.
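A hedged sketch of the workaround (vg/lvol1 are hypothetical names, not the
exact test code):

  # print the name column first, then match "name + space" so the
  # pattern can no longer hit a substring of the UUID
  lvs --noheadings -o name,uuid vg | grep "lvol1 "
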
Here we actually need to slow down only $dev2 - since the repair operation
only reads data from this device and compares it with the origin $dev1;
if they match, there is no write...
If lvm.conf has use_devicesfile=0 and /etc/lvm/devices/system.devices
exists, then rename it to system.devices-unused.YYYYMMDD.HHMMSS.
This prevents an old, incorrect system.devices from being used in
the future if lvm.conf is changed to use_devicesfile=1.
Create backup copies of system.devices in /etc/lvm/devices/backup
named system.devices-YYYYMMDD.HHMMSS.NNNN. NNNN is the version
counter from the file.
Each time that an lvm command writes a new system.devices file,
it also writes the same file in the backup directory.
A new comment line is added to system.devices with HASH=<num>
where <num> is a crc calculated from the uncommented lines in
system.devices. This lets lvm detect if the file has been
modified outside of lvm itself.
If system.devices is edited directly, the next time a command
reads the file, the crc will not match the HASH value. The
command will then rewrite system.devices with the correct HASH
value, and create a backup reflecting the edits.
A default limit of 50 backup files is kept, configurable by the
lvm.conf setting devicesfile_backup_limit (set to 0 to disable backups).
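A hedged example of inspecting this (the backup directory path follows the
text above; placing devicesfile_backup_limit in the devices section of
lvm.conf is an assumption):

  # list the timestamped backup copies written by lvm commands
  ls -l /etc/lvm/devices/backup/

  # show the configured backup limit (0 disables backups)
  lvmconfig --typeconfig current devices/devicesfile_backup_limit
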
- add new comparison between old and new entries, and use this
as the basis for new dedicated output for check and update
- add new --refresh option to search for missing PVIDs on all
devices, and possibly update the device ID
- internally, only use the term "refresh" for cases where a
new device ID may be found and assigned for a missing PVID
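A hedged usage sketch (--check and --update are existing lvmdevices options;
--refresh is the option added here):

  # report differences between devices file entries and the devices found
  lvmdevices --check

  # rewrite the devices file with corrected entries
  lvmdevices --update

  # additionally search all devices for missing PVIDs and, where a PV is
  # found on another device, update the stored device ID
  lvmdevices --refresh
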
It looks like there is some kernel bug/limitation
that may cause invalid table load processing:
dmsetup load LVMTEST-LV1
device-mapper: reload ioctl on LVMTEST-LV1 failed: Invalid argument
md/raid:mdX: reshape_position too early for auto-recovery - aborting.
md: pers->run() failed ...
device-mapper: table: 253:38: raid: Failed to run raid array (-EINVAL)
device-mapper: ioctl: error adding target to table
However, ATM there is not much we can do other than make the delays bigger.
TODO: fix md core...