In most cases a header should be self-compilable, i.e. it should not
expect other header files to be included beforehand in order
to compile.
No functional change.
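A minimal sketch of such a self-compilable header (file and type
names are illustrative): it includes everything its own declarations
depend on, so it compiles no matter what was included before it.

/* foo.h */
#ifndef FOO_H
#define FOO_H

#include <stdint.h>    /* uint64_t used below */
#include <stddef.h>    /* size_t used below */

struct foo {
        uint64_t id;
        size_t len;
};

int foo_init(struct foo *f);

#endif /* FOO_H */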
Fix the code that limited the total number of backup files.
It failed, and left excess files, when the file version
number was greater than 9999, exceeding the four digit suffix.
Now, after version 9999, the suffix intentionally grows beyond
four digits as needed, and is no longer fixed width or zero padded.
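A standalone illustration of the widening behavior (the format string
is an assumption, not necessarily the one the backup code uses):
"%04u" keeps the zero-padded four-digit form up to 9999 and then
simply widens.

#include <stdio.h>

int main(void)
{
        char name[64];

        snprintf(name, sizeof(name), "backup.%04u", 9999u);
        printf("%s\n", name);   /* backup.9999  */

        /* beyond 9999 the suffix widens and is no longer padded */
        snprintf(name, sizeof(name), "backup.%04u", 10000u);
        printf("%s\n", name);   /* backup.10000 */

        return 0;
}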
Previous commit 5f71cebcbecc37b5291018da35b3efe9636bf6c3 was not
correct. The 4K requirement cannot be imposed on attribute_offset,
where it is valid for the value to be only 512B aligned.
The rule for recognizing invalid values might get more complicated.
For the moment, however, add an easier requirement: impose the 4K
restriction on the minimal and optimal I/O sizes if they are bigger
than 1 sector (512B).
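A sketch of that check (names and exact placement are assumptions,
not the actual lvm2 code):

#include <stdint.h>

#define SECTOR_SIZE 512u

/* An I/O size hint larger than one sector is expected to be a
 * multiple of 4K; anything else is treated as an invalid hint. */
static int io_hint_is_valid(uint32_t io_size)
{
        if (io_size <= SECTOR_SIZE)
                return 1;
        return (io_size % 4096u) == 0;
}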
sscanf() automatically appends a terminating '\0' after reading up to
the field width's worth of characters, and since it's not exactly
trivial to do a macro calculation for PATH_MAX - 1 in the format
string, rather make the buffer for sscanf results bigger.
Also use the matching FSTYPE_MAX as the field width specifier.
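A self-contained illustration of the sizing rule (the value of
FSTYPE_MAX here is made up; the literal 16 in the format string must
match it):

#include <stdio.h>

#define FSTYPE_MAX 16

int main(void)
{
        const char *line = "ext4 rw,relatime";
        /* sscanf() stores at most FSTYPE_MAX characters and then
         * appends '\0', so the buffer needs FSTYPE_MAX + 1 bytes. */
        char fstype[FSTYPE_MAX + 1];

        if (sscanf(line, "%16s", fstype) == 1)
                printf("fstype: %s\n", fstype);

        return 0;
}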
lvm2 caches DM nodes using the DM_LIST_DEVICES ioctl() and tries to
preserve the cached structure when the list is unchanged. However,
there was one missed case where the cache was empty while a new LIST
ioctl returned some elements: if the DM table changed between the
moments of 'scanning' and locking, lvm2 then continued to use the
'invalid' empty cache.
Fix by catching this missed case and updating the cache properly.
TODO: we could possibly use a plain memcmp() with the previous ioctl result.
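A minimal sketch of the missed case (types and names are
hypothetical, not lvm2's actual structures):

#include <stddef.h>

struct dm_dev_list {
        size_t count;
        /* ... device entries ... */
};

/* The cached list must not be trusted when it is empty but a fresh
 * DM_LIST_DEVICES result is not: that is the race described above. */
static int cache_missed_update(const struct dm_dev_list *cached,
                               const struct dm_dev_list *fresh)
{
        return cached->count == 0 && fresh->count > 0;
}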
Since we now use the DM cache also for basic checks of whether a DM
device is present in the DM table, the cache needs to be actually
refreshed when the lock is taken.
This happened implicitly when 'scan_lvs' was enabled, however still
not at the right place.
Move this explicit cache update call to right after the moment
vg_read grabs the lock.
TODO: in the optimal case, we should mark the cache invalid and
refresh it later, when the first reader appears, but since that would
be a large patch, do this small fix-step patch first and improve
performance later.
It seems we need new_size_bytes in places where struct fs_info is also
passed. Store new_size_bytes inside struct fs_info so we can pass
just that one struct to all the functions we call, which makes the
code a bit cleaner and easier to follow.
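A before/after sketch of the refactor (members other than
new_size_bytes, and the callee's check, are illustrative only):

#include <stdint.h>

struct fs_info {
        uint64_t fs_last_byte;    /* illustrative existing member */
        uint64_t new_size_bytes;  /* newly stored inside the struct */
};

/* Callees now take only the struct instead of an extra argument: */
static int fs_shrink_is_safe(const struct fs_info *fsi)
{
        return fsi->new_size_bytes >= fsi->fs_last_byte;
}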
Device quirks may cause the sysfs wwid file to change what it
displays, from a bogus eui... string to an nvme... string.
The old wwid may be saved in system.devices, so recognizing
the device requires finding the old value via libnvme.
After matching the old bogus value using libnvme, system.devices
is updated with the current sysfs wwid value.
With the existing code, the cache was working only until the second
locking. So e.g. when 'lvs' scans a system with more than one VG,
the caching was effectively not working.
Update the code so the label invalidation code is able to update the
DM cache - so whenever we take a new lock, we refresh the cache.
TODO: the refresh ATM does a very simple comparison of the old and
new lists of cached DM devices, and at the first spotted difference
it just falls back to a full rebuild of the DM cache - with a large
number of active devices this might not be the most efficient way...
Detect when we have a dos partition table mixed with GPT's PMBR
partition. This is not a sane configuration, but detect it anyway,
just in case someone configures such a partition layout manually and
forcefully, incorrectly defining one of the partition types to be
the GPT PMBR.
For example:
❯ fdisk -l /dev/sdc
Device    Boot  Start    End Sectors Size Id Type
/dev/sdc1        2048  67583   65536  32M 83 Linux
/dev/sdc2       67584 262143  194560  95M ee GPT
Before:
(The partition filter passes even though there's a real existing dos
partition - the empty GPT PMBR overrides it.)
❯ pvcreate /dev/sdc
WARNING: PMBR signature detected on /dev/sdc at offset 510. Wipe it? [y/n]:
Wiping PMBR signature on /dev/sdc.
Physical volume "/dev/sdc" successfully created.
With this patch applied:
(The GPT PMBR does not override the existence of the dos partition.)
❯ pvcreate /dev/sdc
Cannot use /dev/sdc: device is partitioned
blkid does not report FSLASTBLOCK for a swap device. However, blkid
does report FSSIZE for swap devices, so use this field instead (also
including the header size, which is one FSBLOCKSIZE for swap) to
set the "filesystem last block", which is used subsequently for
further calculations and conditions.
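The resulting arithmetic, as a sketch (the exact way the header is
accounted for is inferred from the description above):

#include <stdint.h>
#include <stdio.h>

/* Swap's FSSIZE excludes its one-block header, so the last block is
 * (FSSIZE + FSBLOCKSIZE) expressed in FSBLOCKSIZE units. */
static uint64_t swap_last_block(uint64_t fssize, uint64_t fsblocksize)
{
        return (fssize + fsblocksize) / fsblocksize;
}

int main(void)
{
        /* e.g. a 100MiB swap area with 4K pages: FSSIZE = 100MiB - 4K */
        printf("%llu\n", (unsigned long long)
               swap_last_block(104853504ULL, 4096ULL)); /* 25600 */
        return 0;
}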
We already detect an msdos partition table. If it is empty, that is,
there is just the partition table header and no actual partitions
defined, then the filter-partitioned check passes, otherwise it
does not.
Do the same for a GPT partition table.
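A sketch of the rule (hypothetical names, not lvm2's filter code):

/* A device passes the partitioned filter if it either has no
 * partition table at all, or the table (msdos or GPT) is empty. */
static int passes_partitioned_filter(int has_table, int partition_count)
{
        if (!has_table)
                return 1;
        return partition_count == 0;
}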
The cmd struct is now required in many more functions, and
it's added as a function arg for most direct dev-cache function
calls. The cmd struct is also added to struct device (dev->cmd) so
that it can be accessed in the many other cases where dev-cache
functions are called from places where obtaining the cmd struct is
too difficult.
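A sketch of the dev->cmd back-pointer (surrounding members are
illustrative, not lvm2's exact definitions):

struct cmd_context;               /* opaque here */

struct device {
        int fd;                   /* illustrative existing member */
        struct cmd_context *cmd;  /* back-pointer for deep callees */
};

/* Code far from the command entry point can recover cmd from dev: */
static struct cmd_context *dev_cmd(const struct device *dev)
{
        return dev->cmd;
}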
The dm devs cache is separate from the ordinary dev cache,
so give the function names distinct prefixes, using
"dm_devs_cache" to prefix dm devs cache functions.
When a PV is stacked on an LV, the PV needs to be
dropped from bcache before the LV is processed.
The LV can be found in dev-cache using its name
rather than the devno.
The list of dm devs was in the cmd struct and had a
different lifetime than the radix trees referencing
those dm devs. Now the list and radix trees are
created and destroyed together.
In the context of dm, 'device' refers to a dm device, but
in the context of lvm, 'device' refers to struct device.
Change some lvm function names to make that difference clearer.
dev_manager_get_device_list() -> dev_manager_get_dm_active_devices()
get_device_list() -> get_dm_active_devices()
device_get_uuid() -> dev_dm_uuid(), devno_dm_uuid()
For large device sets our dm_hash can produce a larger amount of
mapping collisions, and we would need to further increase our hash
size. So instead use the radix_tree code, which is immune against a
growing number of devices and uses memory more efficiently to store
all the paths.
Instead of the less efficient 'btree', switch dev_cache to use
radix_tree, which generates a more efficient tree mapping.
Replace some direct uses of btree iteration with our dev_iter code.
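To illustrate why a radix-style tree suits path keys, here is a
deliberately simple 256-way trie (a degenerate radix tree; lvm2's
real radix_tree additionally compresses single-child chains, and this
sketch skips allocation-failure handling and cleanup):

#include <stdlib.h>
#include <stdio.h>

struct node {
        struct node *child[256];
        const char *value;
};

static struct node *node_new(void)
{
        return calloc(1, sizeof(struct node));
}

/* Key bytes index the tree directly, so lookups cannot collide and
 * common prefixes like "/dev/" are shared between all entries. */
static void trie_insert(struct node *root, const char *key,
                        const char *value)
{
        for (const unsigned char *p = (const unsigned char *)key; *p; p++) {
                if (!root->child[*p])
                        root->child[*p] = node_new();
                root = root->child[*p];
        }
        root->value = value;
}

static const char *trie_lookup(struct node *root, const char *key)
{
        for (const unsigned char *p = (const unsigned char *)key; *p; p++) {
                root = root->child[*p];
                if (!root)
                        return NULL;
        }
        return root->value;
}

int main(void)
{
        struct node *root = node_new();

        trie_insert(root, "/dev/sda", "disk A");
        trie_insert(root, "/dev/sdb", "disk B");
        printf("%s\n", trie_lookup(root, "/dev/sdb")); /* disk B */
        return 0;
}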