sscanf adds a terminating '\0' after the field width characters,
and since it's not exactly trivial to do a macro calculation
for PATH_MAX - 1, rather make the buffer for sscanf results bigger.
Also use the matching FSTYPE_MAX as the field width specifier.
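A minimal illustration of the point (the constant value and the input
string are assumptions, only for demonstration): a %Ns conversion
stores up to N characters plus the terminating '\0', so the destination
buffer needs N + 1 bytes, and the width itself cannot be trivially
generated from a macro such as PATH_MAX - 1:

  #include <stdio.h>

  #define FSTYPE_MAX 16                  /* assumed value for illustration */

  int main(void)
  {
          char fstype[FSTYPE_MAX + 1];   /* +1 for the '\0' sscanf appends */

          /* "%16s" has to match FSTYPE_MAX by hand - expanding the macro
           * into the format string is the non-trivial part mentioned above */
          if (sscanf("ext4", "%16s", fstype) == 1)
                  printf("fstype: %s\n", fstype);

          return 0;
  }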
lvm2 caches DM nodes obtained with the DM_LIST_DEVICES ioctl()
and tries to preserve the cached structure for the same list.
However, there was one case where the cache was empty while a new
LIST ioctl returned some elements - if such a DM table change
happened at the moment of 'scanning' and locking, lvm2 then
continued to use the 'invalid' empty cache.
Fix by catching this missed case and updating the cache properly.
TODO: we could possibly use a plain memcmp() with the previous ioctl result.
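A minimal sketch of the missed case, not the actual lvm2 code (the
function and its arguments are made up for illustration):

  #include <stdbool.h>
  #include <stddef.h>

  /* Decide whether the cached DM device list can still be reused or must
   * be rebuilt from a fresh DM_LIST_DEVICES result. */
  static bool dm_cache_needs_refresh(size_t cached_count, size_t fresh_count)
  {
          /* previously missed case: empty cache, non-empty fresh listing */
          if (!cached_count && fresh_count)
                  return true;

          /* the TODO above: a plain memcmp() of the raw ioctl buffers
           * could also catch changes when both listings are non-empty */
          return false;
  }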
Since we now use the DM cache also for basic checks of whether
a DM device is present in the DM table, this cache needs to be
actually refreshed when the LOCK is taken.
This happened implicitly when 'scan_lvs' was enabled, however
still not at the right place.
Move this explicit cache update call right after the moment
vg_read grabs the lock.
TODO: in the optimal case, we should mark the cache invalid
and refresh it later, when the first reader appears,
but since this would be a large patch, do this small fix-up step
first and improve performance later.
There were a lot of messy and inefficient locking calls sent from
a command to lvmlockd when working with thin volumes, e.g.
- requesting a lock numerous times that was already held
- releasing a lock numerous times that was already unlocked
- repeating lock/unlock/lock/unlock rather than holding the
lock until it was no longer needed
Mistakes in the locking could easily hide among all the noise.
The mess was largely because thin-related commands involve a
lot of internal LV manipulations, and lvmlockd calls were done
at the lower level of LV activation/deactivation. This change
adds locking code that is more specific to the thin command
being run, so it can be more intelligent in acquiring and
releasing locks where needed.
Fix regression introduced with commit:
964012fdb9
that effectively disabled memory locking before suspending volumes.
From merging/testing a wrong condition remained, as we really
want to check for a 0 memory reservation value for both checked
settings.
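A tiny sketch of one plausible reading of the intended check (the
parameter names follow lvm.conf's activation/reserved_memory and
activation/reserved_stack settings, the function itself is only
illustrative): memory locking is skipped only when both reservation
values are 0.

  #include <stdbool.h>

  static bool memory_locking_wanted(unsigned reserved_memory,
                                    unsigned reserved_stack)
  {
          /* only an explicit 0 for BOTH settings disables memory locking */
          return !(reserved_memory == 0 && reserved_stack == 0);
  }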
It seems we need new_size_bytes in places where struct fs_info is also
passed. Store new_size_bytes inside struct fs_info so we can just
pass that one struct to all the functions we call and hence make
the code a bit cleaner and easier to follow.
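An illustrative sketch of the idea (the struct layout is heavily
simplified and is not lvm2's real definition):

  #include <stdint.h>

  struct fs_info {
          char fstype[16];
          int mounted;
          uint64_t new_size_bytes;   /* requested size travels with the rest */
  };

  /* helpers read the requested size from the struct instead of taking
   * an extra new_size_bytes argument */
  static int fs_needs_extend(const struct fs_info *fsi,
                             uint64_t current_size_bytes)
  {
          return fsi->new_size_bytes > current_size_bytes;
  }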
This avoids a situation where we would extend an LV and then not do
anything to the FS on it because the FS info check failed for some
reason, like the FS type not being supported (e.g. swap) or the FS
not being in a state supported for resize (e.g. XFS must be mounted
for xfs_growfs to work).
Before this patch (LV resized, FS not resized):
❯ lvextend --fs resize -L+4M vg/swap
Size of logical volume vg/swap changed from 32.00 MiB (8 extents) to 36.00 MiB (9 extents).
File system extend is not supported (swap).
File system extend error.
Logical volume vg/swap successfully resized.
With this patch (LV not resized, FS not resized):
❯ lvextend --fs resize -L+4M vg/swap
File system extend is not supported (swap).
Device quirks may cause the sysfs wwid file to change what it
displays, from a bogus eui... string to an nvme... string.
The old wwid may be saved in system.devices, so recognizing
the device requires finding the old value from libnvme.
After matching the old bogus value using libnvme, system.devices
is updated with the current sysfs wwid value.
Since we now keep LV names valid all the time (as they are part
of the radix_tree), there is a problem with this renaming code, which
for a moment used a duplicated name in the vg struct.
Fix it by iterating LVs backward - which avoids breaking consistency
and also actually makes the code simpler.
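A purely illustrative example of the principle (this is not the lvm2
renaming code): when the new names overlap the old ones, a forward
pass would momentarily create a duplicate, while a backward pass never
does.

  #include <stdio.h>

  int main(void)
  {
          char names[3][16] = { "lvol0", "lvol1", "lvol2" };
          int i;

          /* rename lvolN -> lvolN+1; iterating backward, each name we
           * assign is no longer in use, so no duplicate ever appears */
          for (i = 2; i >= 0; i--) {
                  snprintf(names[i], sizeof(names[i]), "lvol%d", i + 1);
                  printf("after step %d: %s %s %s\n", 2 - i,
                         names[0], names[1], names[2]);
          }

          return 0;
  }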
Fix stripe count and stripe size parameter validation for RAID LVs and
include the existing automatic setting of these parameters based
on the current shape of the RAID LV in case they are not fully set
on the command line.
Previously, this was done only for a certain subset given by this
condition (where the 'stripes' is the '-i|--stripes' cmd line arg
and the 'stripe_size' is actually the '-I|--stripesize' cmd line arg):
!(stripes == 1 || (stripes > 1 && stripe_size))
This condition is a bit hard to follow at first sight and there
are no comments around explaining why it is used,
so let's analyze it a bit more.
First, let's convert this to an equivalent condition (De Morgan law)
so it's easier to read for humans:
stripes != 1 && !(stripes > 1 && stripe_size)
Note: Both stripes and stripe_size are unsigned integers, so they can't be negative.
Now, based on that condition, we were running the code to deduce the
stripes/stripe_size values and do the checks ("the code") only if both
of these are true:
- stripes is different from 1
- we don't have both stripes > 1 and stripe_size defined at the same time
But this is not correct in all cases, because:
A) if someone uses stripes = 0, then "the code" is executed
   (correct)
B) if someone uses stripes = 1, then "the code" is not executed
   (wrong: we still need to be able to check the args against the
   existing RAID LV stripes to see whether they match)
if someone uses stripes > 1, then "the code" is:
C) executed if stripe_size = 0
   (correct)
D) not executed if stripe_size > 0
   (wrong: we still want to check against the existing RAID LV stripes;
   see the sketch after this list)
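A small sketch evaluating the original condition for the four cases
above (nothing lvm2-specific, just the boolean logic):

  #include <stdio.h>

  static int runs_the_code(unsigned stripes, unsigned stripe_size)
  {
          return !(stripes == 1 || (stripes > 1 && stripe_size));
  }

  int main(void)
  {
          printf("A) stripes=0           -> %d (runs, correct)\n",
                 runs_the_code(0, 0));
          printf("B) stripes=1           -> %d (skipped, wrong)\n",
                 runs_the_code(1, 0));
          printf("C) stripes=3, size=0   -> %d (runs, correct)\n",
                 runs_the_code(3, 0));
          printf("D) stripes=3, size=128 -> %d (skipped, wrong)\n",
                 runs_the_code(3, 128));
          return 0;
  }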
Current issues with this condition:
Case B) ends up with a segfault.
❯ lvextend -i 1 -l+1 vg/lvol0
Rounding size 4.00 MiB (1 extents) up to stripe boundary size 8.00 MiB (2 extents).
Segmentation fault (core dumped)
Case D) ends up with errors like:
❯ lvextend -i 3 -l+1 -I128k vg/lvol0
Rounding size 4.00 MiB (1 extents) up to stripe boundary size 8.00 MiB (2 extents).
Rounding size (4 extents) up to stripe boundary size for segment (5 extents).
Size of logical volume vg/lvol0 changed from 8.00 MiB (2 extents) to 20.00 MiB (5 extents).
LV lvol0: segment 1 with len=5 has inconsistent area_len 3
Couldn't read all logical volumes for volume group vg.
Failed to write VG vg.
Conclusion:
The condition needs to be removed so we always run "the code" to check
the striping args given on the command line against the existing RAID LV
striping. The reason is that we don't want to allow changing the stripe
count for RAID LVs through lvextend and we need to end up with the
error:
"Unable to extend <RAID segment type> segment type with different number of stripes"
(Changing the striping is supported only through lvconvert's reshaping
functionality.)
When converting a VG to locktype sanlock, a new
lease is allocated for each existing LV. Finding
a new lease location involved searching the lvmlock
LV from the start for an unused location, which
would be very slow with many LVs. Improve this by
starting each search from the last used location.
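A minimal sketch of the optimization (a toy free-slot search, not the
lvmlockd/sanlock code): remember where the previous allocation landed
and start the next scan there instead of at offset 0.

  #include <stdbool.h>
  #include <stddef.h>

  static size_t last_used;   /* slot index of the most recent allocation */

  /* returns the index of a newly claimed slot, or nslots if all are used */
  static size_t claim_free_slot(bool *used, size_t nslots)
  {
          size_t i, idx;

          for (i = 0; i < nslots; i++) {
                  idx = (last_used + i) % nslots;   /* start at the last hit */
                  if (!used[idx]) {
                          used[idx] = true;
                          last_used = idx;
                          return idx;
                  }
          }
          return nslots;
  }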
When searching for a committed LV by uuid, this search can
be expensive for commands like 'vgremove' - so for
this part introduce an 'lv_uuids' radix_tree that is
built on the first access to lv_committed().
Since there is a group of commands that need to access 'lv_list'
while still needing to search for an LV by its name, make the whole
struct lv_list a member of the logical_volume structure.
This also makes it easy to return the 'lv_list' entry that links
this LV within the VG.
The patch should not use more memory either, since we were allocating
an lv_list for each LV anyway when linking the LV to the VG.
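An illustrative, heavily simplified sketch of the layout (these are
not the real lvm2 structure definitions):

  struct logical_volume;

  struct dm_list {
          struct dm_list *n, *p;
  };

  struct lv_list {
          struct dm_list list;           /* node linking the LV into the VG */
          struct logical_volume *lv;
  };

  struct logical_volume {
          const char *name;
          struct lv_list lvl;            /* embedded: no separate allocation,
                                            and the entry is easy to hand back */
  };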
Since find_lv_by_name() is now using radix_tree(),
use the same 'search for /' in the LV name for both
find_lv() & find_lv_in_vg().
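A small sketch of that shared lookup behaviour (an illustrative helper,
not the real lvm2 code): accept both "lv" and "vg/lv" by keeping only
the part after the '/'.

  #include <string.h>

  static const char *lv_name_only(const char *name)
  {
          const char *slash = strchr(name, '/');

          return slash ? slash + 1 : name;
  }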
TODO: Possibly refactor the code and use only dm_list
instead of lv_list and dereference the LV with container_of()
(thus saving a pointer within struct logical_volume) - but
we currently use 'lv_list' in many places...
Add the lvm.conf config/validate_metadata configurable setting.
It allows disabling validation of the volume_group structure before
writing it to disk.
The call of vg_validate() is supposed to catch any inconsistency
of the in-memory volume group structure and possibly abort the
command early, before making any more 'damage', in case the VG struct
is found inconsistent after some metadata manipulation.
This is almost always useful for development - and also for a normal
user, as for a small metadata size this doesn't add too much overhead.
However if the volume_group size is large and the operations are just
adding/removing simple LVs, this validation time may add noticeably
to the final command running time.
So if the user seeks the highest performance of a command and does
not do any 'complex' metadata manipulation, it's reasonably safe
to disable validation here (with the use of the setting "none").
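For example (a hypothetical lvm.conf snippet; the section and the
"none" value follow the setting name mentioned above):

  config {
          # skip vg_validate() before metadata is written to disk
          validate_metadata = "none"
  }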
With the presence of uniq_insert, use this function also
here for extra protection and check for a duplicate lv_name
when inserting a new name into the radix_tree.
When 'lvresize -r' is used to resize the volume, it's valid to
resize an LV even to its current size, as the command then runs the
fs-resize utility to eventually upsize the fs to the current
volume size.
The return code of such a command then reflects the return value
of this fs-resize tool.
This fixes the regression introduced when the support
for the --fs option was added (2.03.17).