Add two new options for linking libnvme with lvm2:
--without-libnvme and --disable-nvme-wwid.
(cherry picked from commit cb87e184bcbade1ac2da8fb611177f520169decd)
Device quirks may cause the sysfs wwid file to change what it
displays, from a bogus eui... string to an nvme... string.
The old wwid may be saved in system.devices, so recognizing
the device requires finding the old value from libnvme.
After matching the old bogus value using libnvme, system.devices
is updated with the current sysfs wwid value.
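A rough sketch of that matching flow is below; the helper functions and
wwid strings are hypothetical stand-ins for the real sysfs and libnvme
lookups, not lvm's actual code:

  #include <stdbool.h>
  #include <stdio.h>
  #include <string.h>

  /* Hypothetical stand-ins for illustration only: in lvm the current
   * value comes from the device's sysfs wwid file and the old-style
   * value is derived via libnvme. */
  static const char *sysfs_wwid(const char *devname)
  {
      (void) devname;
      return "nvme.0000-0000000000000001-Example-00000001";
  }

  static const char *libnvme_wwid(const char *devname)
  {
      (void) devname;
      return "eui.0000000000000001";   /* bogus value from a quirky device */
  }

  /* If the wwid stored in system.devices no longer matches sysfs, try
   * the old libnvme-derived value; on a match, refresh the stored entry
   * with the current sysfs wwid. */
  static bool match_and_update(const char *devname, char *stored, size_t len)
  {
      const char *cur = sysfs_wwid(devname);
      const char *old = libnvme_wwid(devname);

      if (!strcmp(stored, cur))
          return true;                 /* already current */

      if (!strcmp(stored, old)) {
          snprintf(stored, len, "%s", cur);   /* update the entry */
          return true;
      }

      return false;                    /* not this device */
  }

  int main(void)
  {
      char stored[128] = "eui.0000000000000001";  /* old saved value */

      if (match_and_update("/dev/nvme0n1", stored, sizeof(stored)))
          printf("matched, stored wwid now: %s\n", stored);
      return 0;
  }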
(cherry picked from commit d952358636887348c390784a8ca5efb87b26784f)
Call opendir() after the new file is stored within the dir,
otherwise this new file would not be accounted for.
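A minimal POSIX sketch of the ordering the fix enforces (the path and
file name are illustrative):

  #include <dirent.h>
  #include <stdio.h>
  #include <sys/stat.h>

  /* Count visible entries in 'path'. */
  static int count_entries(const char *path)
  {
      DIR *d = opendir(path);
      struct dirent *de;
      int count = 0;

      if (!d)
          return -1;

      while ((de = readdir(d)))
          if (de->d_name[0] != '.')
              count++;

      closedir(d);
      return count;
  }

  int main(void)
  {
      mkdir("/tmp/demo_dir", 0755);

      /* Store the new file first ... */
      FILE *f = fopen("/tmp/demo_dir/newfile", "w");
      if (f)
          fclose(f);

      /* ... and only then opendir() the directory, so readdir() is sure
       * to account for the file; a directory stream opened beforehand
       * may not show entries added after opendir() (POSIX leaves this
       * unspecified). */
      printf("entries: %d\n", count_entries("/tmp/demo_dir"));
      return 0;
  }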
(cherry picked from commit 2c5bf25187be88f35b807431c763a4aef038a48f)
In case of different PV sizes in a VG, the lvm2 allocator falls short
when asked to define extended segments for a 100%FREE RaidLV extension,
and a RAID distinct allocation check fails. The fix is to release a
memory pool on the resulting error path.
Until the lvm2 allocator gets enhanced (WIP) to handle such complex (and
other) allocations properly, a workaround is to extend a RaidLV into any
free space on its already allocated PVs by naming those PVs on the
lvextend command line, then iteratively run further such lvextend
commands to extend it to its final intended size. Mind, this may be a
non-trivial extension iteration.
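The fix itself is only the cleanup on the failing path; a minimal sketch
of that pattern with libdevmapper's memory pools (the failing condition
here is contrived):

  #include <libdevmapper.h>
  #include <stdio.h>

  /* Whatever was allocated from the pool for the attempted allocation
   * must be released when the allocation check fails, instead of being
   * leaked on the error path. */
  static int allocate_segments(int pvs_have_equal_size)
  {
      struct dm_pool *mem = dm_pool_create("alloc", 1024);

      if (!mem)
          return 0;

      if (!dm_pool_alloc(mem, 256)) {    /* scratch data for the attempt */
          dm_pool_destroy(mem);
          return 0;
      }

      if (!pvs_have_equal_size) {
          /* Contrived stand-in for the failing RAID allocation check:
           * release the memory pool before taking the error path. */
          dm_pool_destroy(mem);
          return 0;
      }

      dm_pool_destroy(mem);              /* real code would keep using it */
      return 1;
  }

  int main(void)
  {
      printf("allocation %s\n", allocate_segments(0) ? "succeeded" : "failed");
      return 0;
  }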
(cherry picked from commit 557b2850cef7fa49e2cbacd36e77f679181f09ae)
vgchange --locktype sanlock|dlm --lockopt force <vgname>
can be used to change the lock type without lvmlockd or
the lock manager involved.
(cherry picked from commit 4dc009c87227a137c8be50686b1104cebb9a88e2)
Previously, a command would call lockd_vg() for a local VG,
which would go to lvmlockd, which would send back ENOLS,
and the command would not care when it saw the VG was local.
The pointless back-and-forth to lvmlockd for local VGs can
be avoided by checking the VG lock_type in lvmcache (which
label_scan now saves there; this wasn't the case back when
the original lockd_vg logic was added.) If the lock_type
saved during label_scan indicates a local VG, then the
lockd_vg step is skipped.
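A sketch of that short-circuit; the lookup and lockd call are
hypothetical stand-ins for lvmcache and lvmlockd:

  #include <stdio.h>
  #include <string.h>

  /* Hypothetical stand-ins: in lvm the lock_type is saved in lvmcache
   * by label_scan, and lockd_vg() performs the round trip to lvmlockd. */
  static const char *cached_lock_type(const char *vgname)
  {
      return strcmp(vgname, "shared_vg") ? "" : "sanlock";
  }

  static int lockd_vg(const char *vgname)
  {
      printf("asking lvmlockd about %s\n", vgname);
      return 1;
  }

  /* A VG whose saved lock_type is empty or "none" is local, so the
   * round trip to lvmlockd (which would only answer ENOLS) is skipped. */
  static int lock_vg_if_shared(const char *vgname)
  {
      const char *lt = cached_lock_type(vgname);

      if (!lt || !lt[0] || !strcmp(lt, "none"))
          return 1;                      /* local VG: skip lockd_vg */

      return lockd_vg(vgname);           /* shared VG: lock as before */
  }

  int main(void)
  {
      lock_vg_if_shared("local_vg");
      lock_vg_if_shared("shared_vg");
      return 0;
  }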
(cherry picked from commit bf60cb4da23cac2f6b721170dd0d8bfd38b16466)
Services introduced in commit c609dedc2f035f770b5f645c4695924abf15c2ca
need installing.
(cherry picked from commit 1b9bf5007bbfba5bcd622f039a099e6f7e0a8162)
Fix commit b65a2c3f3a767 "vgimportdevices: skip lvmlockd locking"
which intended to disable lvmlockd locking, but the lockd_gl_disable
flag was mistakenly set after lock_global() so it wasn't effective.
This caused vgimportdevices to fail unless locking was started.
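An illustrative reduction of the ordering problem; the struct and
function here only mirror the names used in the message, not lvm's real
definitions:

  #include <stdio.h>

  struct cmd {
      int lockd_gl_disable;
  };

  /* Stand-in for the global lock: the disable flag only helps if it is
   * already set when the call is made. */
  static int lock_global(struct cmd *cmd)
  {
      if (cmd->lockd_gl_disable)
          return 1;              /* lvmlockd locking skipped as intended */

      printf("contacting lvmlockd (fails if locking was not started)\n");
      return 0;
  }

  int main(void)
  {
      struct cmd cmd = { 0 };

      /* Fixed order: disable lvmlockd locking before taking the lock.
       * The broken order set the flag only after lock_global(), so it
       * never had any effect. */
      cmd.lockd_gl_disable = 1;
      return lock_global(&cmd) ? 0 : 1;
  }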
(cherry picked from commit a8b8e1f074598d080bfb34e1fd04fe36ec122f93)
Previous commit 82617852a4d3c89b09124eddedcc2c1859b9d50e
introduced a bug in completion, as rl_completion_matches()
needs the generator to always advance to the next element,
with the index held in a static variable.
Add a comment about this usage.
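The generator contract is easiest to see in a standalone readline
example; the command list below is illustrative, not lvm's (link with
-lreadline):

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <readline/readline.h>

  static const char *_cmds[] = { "lvcreate", "lvextend", "lvremove",
                                 "vgchange", NULL };

  /* Generator for rl_completion_matches(): readline calls it repeatedly,
   * so the static index must advance on every call to return the next
   * element; it is reset only when state == 0. */
  static char *_cmd_generator(const char *text, int state)
  {
      static int idx;
      static size_t len;
      const char *name;

      if (!state) {              /* first call of this completion attempt */
          idx = 0;
          len = strlen(text);
      }

      while ((name = _cmds[idx])) {
          idx++;                 /* always advance, even when returning a match */
          if (!strncmp(name, text, len))
              return strdup(name);
      }

      return NULL;               /* no more matches */
  }

  static char **_completion(const char *text, int start, int end)
  {
      (void) start;
      (void) end;
      return rl_completion_matches(text, _cmd_generator);
  }

  int main(void)
  {
      rl_attempted_completion_function = _completion;
      free(readline("demo> "));
      return 0;
  }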
(cherry picked from commit 73298635b9db2c2a11bc4cc291b15d0f21907598)
(cherry picked from commit c33b0e11878a52aeaa42b4ebfd0692e5da7f5e07)
An OS installer can create system.devices for the system and
disks, but an OS image cannot create the system-specific
system.devices. The OS image can instead configure the
image so that lvm will create system.devices on first boot.
Image preparation steps to enable auto creation of system.devices:
- create empty file /etc/lvm/devices/auto-import-rootvg
- remove any existing /etc/lvm/devices/system.devices
- enable lvm-devices-import.path
- enable lvm-devices-import.service
On first boot of the prepared image:
- udev triggers vgchange -aay --autoactivation event <rootvg>
- vgchange activates LVs in the root VG
- vgchange finds the file /etc/lvm/devices/auto-import-rootvg,
and no /etc/lvm/devices/system.devices, so it creates
/run/lvm/lvm-devices-import
- lvm-devices-import.path is run when /run/lvm/lvm-devices-import
appears, and triggers lvm-devices-import.service
- lvm-devices-import.service runs vgimportdevices --rootvg --auto
- vgimportdevices finds /etc/lvm/devices/auto-import-rootvg,
and no system.devices, so it creates system.devices containing
PVs in the root VG, and removes /etc/lvm/devices/auto-import-rootvg
and /run/lvm/lvm-devices-import
Run directly, vgimportdevices --rootvg (without --auto) will create
a new system.devices for the root VG, or will add devices for the
root VG to an existing system.devices.
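A sketch of the first-boot check on the vgchange side, using the paths
listed above (the real logic lives in lvm's autoactivation code; this
only illustrates the decision):

  #include <stdio.h>
  #include <unistd.h>

  static void maybe_request_devices_import(void)
  {
      /* Image was not prepared for auto import. */
      if (access("/etc/lvm/devices/auto-import-rootvg", F_OK) != 0)
          return;

      /* system.devices already exists, nothing to do. */
      if (access("/etc/lvm/devices/system.devices", F_OK) == 0)
          return;

      /* Creating this file is what lvm-devices-import.path watches for;
       * it then triggers lvm-devices-import.service. */
      FILE *f = fopen("/run/lvm/lvm-devices-import", "w");
      if (f)
          fclose(f);
  }

  int main(void)
  {
      maybe_request_devices_import();
      return 0;
  }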
(cherry picked from commit c609dedc2f035f770b5f645c4695924abf15c2ca)
(cherry picked from commit 3321a669d8f2df99df9d6dcd4ebb2b4d30731c7a)
lvm2 has for a while already optimized the 'vgremove' operation to
a single commit when possible, if all LVs can be
easily deactivated.
So the number of LVs doesn't matter much - but the tested
case 'test_delete_non_complete_job' seems to be taking
some time anyway to capture the exception.
So just reduce the running time of the test significantly,
as we don't need to create 64 LVs for 4 'execution mode' runs.
The order is used for man page generation (although not completely),
so place 'zero & error' at the end of the list.
Keep linear, striped, snapshot in front.
For the rest, use alphabetical order.