With the existing code, the cache was working only until the second
locking. So e.g. when 'lvs' scans a system with more than one VG, the
caching was effectively not working.
Update the code so the label invalidate code is able to update the DM
cache - whenever we take a new lock, we refresh the cache.
TODO: the refresh ATM does a very simple compare of the old and new list
of cached DM devices, and at the first spotted difference it just falls
back to a full rebuild of the DM cache - with a large number of active
devices this might not be the most efficient way...
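A minimal sketch of the compare-and-fallback idea (types and names here
are illustrative, not the actual lvm2 code):

  #include <stdbool.h>
  #include <stddef.h>
  #include <string.h>

  /* Illustrative entry: devno and name of one cached DM device. */
  struct dm_dev_info {
          unsigned major, minor;
          char name[128];
  };

  /* Returns true while the cached list still matches the freshly
   * scanned one; any difference makes the caller drop the whole
   * DM cache and rebuild it from scratch. */
  static bool _dm_devs_lists_match(const struct dm_dev_info *cached, size_t n_cached,
                                   const struct dm_dev_info *fresh, size_t n_fresh)
  {
          if (n_cached != n_fresh)
                  return false;

          for (size_t i = 0; i < n_cached; i++)
                  if (cached[i].major != fresh[i].major ||
                      cached[i].minor != fresh[i].minor ||
                      strcmp(cached[i].name, fresh[i].name))
                          return false; /* first spotted difference */

          return true;
  }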
Detect when we have a mixed dos partition with GPT's PMBR partition.
This is not a sane configuration, but detect it anyway, just in case
someone configures such a partition layout manually and forcefully,
incorrectly defining one of the partition types to be the GPT's PMBR.
For example:
❯ fdisk -l /dev/sdc
Device     Boot  Start     End Sectors Size Id Type
/dev/sdc1         2048   67583   65536  32M 83 Linux
/dev/sdc2        67584  262143  194560  95M ee GPT
Before:
(The partition filter passes even though there's a real existing dos
partition - the empty GPT PMBR overrides it.)
❯ pvcreate /dev/sdc
WARNING: PMBR signature detected on /dev/sdc at offset 510. Wipe it? [y/n]:
Wiping PMBR signature on /dev/sdc.
Physical volume "/dev/sdc" successfully created.
With this patch applied:
(The GPT PMBR does not override the existence of the dos partition.)
❯ pvcreate /dev/sdc
Cannot use /dev/sdc: device is partitioned
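A hedged sketch of the detection logic (the struct is a trimmed
illustration of an MBR entry, not lvm2's real parsing code):

  #include <stdbool.h>
  #include <stdint.h>

  #define MBR_PART_COUNT    4
  #define MBR_TYPE_GPT_PMBR 0xee

  /* Only the fields needed for this check. */
  struct mbr_part {
          uint8_t  type;      /* partition type id */
          uint32_t n_sectors; /* 0 marks an unused slot */
  };

  /* Returns true when a GPT PMBR entry (0xee) coexists with real
   * dos partitions - the mixed layout shown above. */
  static bool _is_mixed_dos_and_pmbr(const struct mbr_part parts[MBR_PART_COUNT])
  {
          bool has_pmbr = false, has_dos = false;

          for (int i = 0; i < MBR_PART_COUNT; i++) {
                  if (!parts[i].n_sectors)
                          continue;
                  if (parts[i].type == MBR_TYPE_GPT_PMBR)
                          has_pmbr = true;
                  else
                          has_dos = true;
          }

          return has_pmbr && has_dos;
  }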
blkid does not report FSLASTBLOCK for a swap device. However, blkid
does report FSSIZE for swap devices, so use this field instead (adding
the header size, which equals FSBLOCKSIZE for swap) to set the
"filesystem last block", which is subsequently used for further
calculations and conditions.
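A small sketch of the described calculation (the helper name is made
up; FSSIZE and FSBLOCKSIZE are the blkid-reported values):

  #include <stdint.h>

  /* For swap, blkid's FSSIZE excludes the header, whose size equals
   * FSBLOCKSIZE. Add the header back and divide by the block size
   * to get the "filesystem last block". Illustrative helper only. */
  static uint64_t _swap_fs_last_block(uint64_t fssize, uint64_t fsblocksize)
  {
          if (!fsblocksize)
                  return 0;

          return (fssize + fsblocksize) / fsblocksize;
  }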
We already detect an msdos partition table. If it is empty, that is,
there is just the partition header and no actual partitions defined,
then the filter-partitioned passes; otherwise it does not.
Do the same for a GPT partition table.
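For illustration, an empty GPT table could be recognized roughly like
this (the entry layout is the standard 128-byte GPT entry; this is an
assumption-laden sketch, not lvm2's code):

  #include <stdbool.h>
  #include <stddef.h>
  #include <stdint.h>
  #include <string.h>

  /* A GPT partition entry starts with a 16-byte type GUID; an
   * all-zero GUID marks an unused slot. */
  struct gpt_part_entry {
          uint8_t type_guid[16];
          uint8_t rest[112];
  };

  /* filter-partitioned logic: pass only when every slot is unused. */
  static bool _gpt_table_is_empty(const struct gpt_part_entry *entries, size_t count)
  {
          static const uint8_t zero_guid[16];

          for (size_t i = 0; i < count; i++)
                  if (memcmp(entries[i].type_guid, zero_guid, sizeof(zero_guid)))
                          return false;

          return true;
  }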
The cmd struct is now required in many more functions, and
it's added as a function arg for most direct dev-cache function
calls. The cmd struct is added to struct device (dev->cmd) so
that it can be accessed in many other cases where dev-cache
functions are being called from places where getting the cmd
struct is too difficult.
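The shape of the change, roughly (a heavily trimmed struct device; the
real one has many more fields):

  struct cmd_context; /* main per-command state */

  /* Each device carries a back-pointer to the cmd struct, so
   * dev-cache helpers called deep in the stack can reach it via
   * dev->cmd instead of threading it through every function. */
  struct device {
          struct cmd_context *cmd; /* back-pointer added here */
          int fd;
          unsigned major, minor;
          /* ... many more fields in the real struct ... */
  };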
The dm devs cache is separate from the ordinary dev cache,
so give the function names distinct prefixes, using
"dm_devs_cache" to prefix dm devs cache functions.
When a PV is stacked on an LV, the PV needs to be
dropped from bcache before the LV is processed.
The LV can be found in dev-cache using its name
rather than the devno.
The list of dm devs was in the cmd struct and had a
different lifetime than the radix trees referencing
those dm devs. Now the list and radix trees are
created and destroyed together.
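A sketch of the coupled lifetime (types are illustrative):

  struct radix_tree;       /* lvm2's radix tree, opaque here */
  struct dm_active_device; /* one cached dm device */

  /* The dm devs list and the radix trees indexing it now live in
   * one object, created and destroyed together, so the trees can
   * never outlive (or dangle behind) the entries they reference. */
  struct dm_devs_cache {
          struct dm_active_device *devs; /* the list itself */
          struct radix_tree *by_devno;   /* index: devno -> dev */
          struct radix_tree *by_uuid;    /* index: uuid  -> dev */
  };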
In the context of dm, 'device' refers to a dm device, but
in the context of lvm, 'device' refers to struct device.
Change some lvm function names to make that difference clearer.
dev_manager_get_device_list() -> dev_manager_get_dm_active_devices()
get_device_list() -> get_dm_active_devices()
device_get_uuid() -> dev_dm_uuid(), devno_dm_uuid()
For large device sets our dm_hash can produce a larger amount of
mapping collisions and we would need to keep increasing our hash size.
So instead use the radix_tree code, which is immune against a growing
number of devices and uses memory more efficiently to store all the
paths.
Switch dev_cache from the less efficient 'btree' to radix_tree, which
generates a more efficient tree mapping.
Replace some direct uses of btree iteration with our dev_iter code.
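Why radix_tree suits path keys - a sketch of indexing a device under
its path (the insert signature follows the keylen-based interface
described further below in this log and is an assumption):

  #include <stdbool.h>
  #include <stddef.h>
  #include <stdint.h>
  #include <string.h>

  union radix_value { void *ptr; uint64_t n; };
  struct radix_tree;
  struct device;

  /* Assumed prototype; see the radix_tree key API change below. */
  bool radix_tree_insert(struct radix_tree *rt, const void *key,
                         size_t keylen, union radix_value v);

  /* Unlike a hash table, the radix tree shares common prefixes
   * ("/dev/..."), so dense path sets cost less memory and lookups
   * never degrade through collisions. */
  static bool _index_dev_by_path(struct radix_tree *rt,
                                 const char *path, struct device *dev)
  {
          union radix_value v = { .ptr = dev };

          return radix_tree_insert(rt, path, strlen(path), v);
  }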
Move the code for caching active dm device devno, name and uuid from
device_mapper/libdm-iface to the dev_cache file - the libdm layer cares
about 'decoding' ioctl data from the kernel, while caching for use by
lvm stays within lvm.
Introduce:
dev_cache_update_dm_devs
dev_cache_get_dm_dev_by_devno
dev_cache_get_dm_dev_by_uuid
Use radix_tree for searching.
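The new entry points might look like this (assumed prototypes based on
the description above):

  #include <sys/types.h>

  struct cmd_context;
  struct dm_active_device;

  int dev_cache_update_dm_devs(struct cmd_context *cmd);
  const struct dm_active_device *
  dev_cache_get_dm_dev_by_devno(struct cmd_context *cmd, dev_t devno);
  const struct dm_active_device *
  dev_cache_get_dm_dev_by_uuid(struct cmd_context *cmd, const char *dm_uuid);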
Instead of using a 'key state & key end' uint8_t* pair, switch to a
void* key & size_t keylen. This allows easier adaptation in the lvm
code base, which otherwise needed way too much casting with every use
of the function.
Also correctly mark const buffers to avoid compiler warnings and
casting.
Adapt the only bcache user ATM for the API change.
Adapt the unit test to match the changed API.
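Before/after of the key interface as described (a sketch; exact lvm2
declarations may differ):

  #include <stdbool.h>
  #include <stddef.h>
  #include <stdint.h>

  union radix_value { void *ptr; uint64_t n; };
  struct radix_tree;

  /* Old: a key as a [begin, end) byte range, forcing callers to
   * cast to uint8_t* and compute end pointers at every call site. */
  bool radix_tree_insert_old(struct radix_tree *rt,
                             const uint8_t *kb, const uint8_t *ke,
                             union radix_value v);

  /* New: plain pointer + length, const-correct, so callers pass
   * any buffer without casts or compiler warnings. */
  bool radix_tree_insert(struct radix_tree *rt,
                         const void *key, size_t keylen,
                         union radix_value v);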
The size of these hashes was quite small, so raise the number of
hashed entries to reduce the amount of hash collisions.
Select some unique/unused number below 8192 for each hash_create.
Replace the call to get_dm_uuid_from_sysfs() with device_get_uuid(),
which gets the same information, but instead of several syscalls it
needs either one or even zero when the information is cached with newer
kernels.
For better use of cached data located within cmd_context,
pass this structure from the top level function.
Also add missing '_' for static _dev_cache_index_devs.
No other change here.
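A sketch of the cached-first strategy behind device_get_uuid() (the
cache lookup and sysfs fallback shapes are assumptions):

  #include <sys/types.h>
  #include <sys/sysmacros.h>

  struct cmd_context;
  struct dm_active_device { const char *uuid; /* ... */ };

  /* Assumed prototypes for the cache hit and the slower fallback. */
  const struct dm_active_device *
  dev_cache_get_dm_dev_by_devno(struct cmd_context *cmd, dev_t devno);
  const char *get_dm_uuid_from_sysfs(unsigned major, unsigned minor);

  /* Zero extra syscalls on a cache hit, one sysfs read otherwise. */
  static const char *_device_get_uuid(struct cmd_context *cmd,
                                      unsigned major, unsigned minor)
  {
          const struct dm_active_device *dm_dev =
                  dev_cache_get_dm_dev_by_devno(cmd, makedev(major, minor));

          if (dm_dev && dm_dev->uuid)
                  return dm_dev->uuid; /* cached: no syscall needed */

          return get_dm_uuid_from_sysfs(major, minor);
  }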
Skip scan and stat() for dirs and nodes within known /dev/ paths,
where no block devices are located.
Also call strlen(_cache.dev_dir) just once.
TODO: add more dirs to _no_scan (configurable via lvm.conf ?)
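The skip list might look like this (directory names here are guesses
for illustration; the real list would live in dev-cache.c):

  #include <string.h>

  /* /dev subdirectories known to contain no block device nodes. */
  static const char *_no_scan[] = {
          "/dev/cpu/",
          "/dev/dri/",
          "/dev/input/",
          "/dev/net/",
          "/dev/snd/",
  };

  /* Returns 1 when 'path' falls under a directory we never scan
   * or stat() while looking for block devices. */
  static int _dev_path_skipped(const char *path)
  {
          for (unsigned i = 0; i < sizeof(_no_scan) / sizeof(_no_scan[0]); i++)
                  if (!strncmp(path, _no_scan[i], strlen(_no_scan[i])))
                          return 1;

          return 0;
  }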