The size of these hash tables was quite small, so raise the number of
hashed entries to reduce the amount of hash collisions.
Select some unique/unused number below 8192 for each hash_create call.
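As a rough illustration only (the size_hint value here is made up, not
the one lvm2 picked), a larger, distinctive argument to dm_hash_create()
gives fewer collisions and makes the table identifiable while debugging:

  #include <libdevmapper.h>

  /* Illustrative only: 2048 is a made-up size_hint, not lvm2's choice. */
  static struct dm_hash_table *_create_example_index(void)
  {
          return dm_hash_create(2048);
  }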
Replace the call to get_dm_uuid_from_sysfs() with device_get_uuid(),
which obtains the same information, but instead of several syscalls it
needs either one or even none when the information is cached with
newer kernels.
We've got the cached DM list before grabbing the lock, so there is some
chance that the DM table has changed and we would need to refresh this
info.
TODO: benchmark whether it would even make sense to refresh the cache
and keep its content instead of using individual ioctl() calls for the
tree build.
The function working with the DM target is located within the
lib/activate directory. It is able to use the cached dm_device_list
when possible to quickly resolve checks for a device's UUID.
It can fully replace get_dm_uuid_from_sysfs() and, instead of
open/read/close syscalls, obtain the UUID with a single ioctl.
When there is a cached DM devs list, we can get many UUIDs from
a single syscall.
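A minimal sketch of the single-ioctl idea (this is not lvm2's
device_get_uuid() implementation, just the plain libdevmapper calls it
builds on):

  #include <string.h>
  #include <libdevmapper.h>

  /* Fetch a DM device UUID with one DM_DEVICE_INFO ioctl instead of
   * open/read/close on /sys/dev/block/<major>:<minor>/dm/uuid. */
  static int _uuid_by_devno(int major, int minor, char *buf, size_t len)
  {
          struct dm_task *dmt;
          const char *uuid;
          int r = 0;

          if (!(dmt = dm_task_create(DM_DEVICE_INFO)))
                  return 0;

          if (!dm_task_set_major(dmt, major) ||
              !dm_task_set_minor(dmt, minor) ||
              !dm_task_run(dmt))
                  goto out;

          if ((uuid = dm_task_get_uuid(dmt)) && *uuid) {
                  strncpy(buf, uuid, len - 1);
                  buf[len - 1] = '\0';
                  r = 1;
          }
  out:
          dm_task_destroy(dmt);
          return r;
  }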
For better use of the cached data located within cmd_context,
pass this structure down from the top-level function.
Also add the missing '_' prefix for static _dev_cache_index_devs.
No other change here.
Skip the scan and stat() for dirs and nodes within known /dev/ paths
where no block devices are located.
Also call strlen(_cache.dev_dir) just once.
TODO: add more dirs to _no_scan (configurable via lvm.conf?)
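A sketch of such a skip list (the directory names and helper below are
illustrative, not lvm2's actual _no_scan content):

  #include <string.h>

  /* Illustrative list of /dev subdirectories that hold no block devices;
   * paths are relative to the configured dev_dir (usually "/dev/"). */
  static const char *_no_scan[] = {
          "bsg/",
          "bus/",
          "char/",
          "dri/",
          "input/",
          "snd/",
  };

  /* Return 1 when 'path' (already stripped of the dev_dir prefix) lies
   * under a directory known to hold no block devices, so its scan and
   * stat() calls can be skipped. */
  static int _skip_dir(const char *path)
  {
          unsigned i;

          for (i = 0; i < sizeof(_no_scan) / sizeof(_no_scan[0]); ++i)
                  if (!strncmp(path, _no_scan[i], strlen(_no_scan[i])))
                          return 1;

          return 0;
  }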
When PVs are created on LVs, remove the devices file entries
for the PVs when the LVs are removed. In general, the devices
file entries should be removed with lvmdevices --deldev when
the LVs are removed (lvremove is the equivalent of detaching
a device from the system when layering PVs on LVs.)
This change is effectively an automatic lvmdevices --deldev
command that is built into lvremove when the LV has a PV on it.
An OS installer can create system.devices for the system and
disks, but an OS image cannot create the system-specific
system.devices. The OS image can instead configure the
image so that lvm will create system.devices on first boot.
Image preparation steps to enable auto creation of system.devices:
- create empty file /etc/lvm/devices/auto-import-rootvg
- remove any existing /etc/lvm/devices/system.devices
- enable lvm-devices-import.path
- enable lvm-devices-import.service
On first boot of the prepared image:
- udev triggers vgchange -aay --autoactivation event <rootvg>
- vgchange activates LVs in the root VG
- vgchange finds the file /etc/lvm/devices/auto-import-rootvg,
and no /etc/lvm/devices/system.devices, so it creates
/run/lvm/lvm-devices-import
- lvm-devices-import.path is run when /run/lvm/lvm-devices-import
appears, and triggers lvm-devices-import.service
- lvm-devices-import.service runs vgimportdevices --rootvg --auto
- vgimportdevices finds /etc/lvm/devices/auto-import-rootvg,
and no system.devices, so it creates system.devices containing
PVs in the root VG, and removes /etc/lvm/devices/auto-import-rootvg
and /run/lvm/lvm-devices-import
Run directly, vgimportdevices --rootvg (without --auto) will create
a new system.devices for the root VG, or will add devices for the
root VG to an existing system.devices.
The function _allocate_memory() was not compiled in when lvm2 was
built with support for better memory pool tracking with valgrind.
Instead, now correctly avoid this function only when actually running
within a valgrind environment.
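A sketch of the runtime check (the HAVE_VALGRIND guard name and the
function here are illustrative; RUNNING_ON_VALGRIND comes from
valgrind's client request header):

  #ifdef HAVE_VALGRIND            /* assumed configure-time define */
  #include <valgrind/valgrind.h>
  #endif

  static void _preallocate_example(void)
  {
  #ifdef HAVE_VALGRIND
          /* RUNNING_ON_VALGRIND is 0 in a native run and non-zero when
           * the process executes under valgrind, so the preallocation
           * stays compiled in but is skipped only at runtime. */
          if (RUNNING_ON_VALGRIND)
                  return;
  #endif
          /* ... preallocate and lock memory as usual ... */
  }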
Do not modify the flags field of 'struct command_name' and instead
control this via cmd_context's get_vgname_from_options.
The flag GET_VGNAME_FROM_OPTIONS is currently used only by lvconvert.
In the past it was common for a single command to run
lvmcache_label_scan multiple times, and this code was a way
to make each call select the same duplicate PVs. Now
commands run a single lvmcache_label_scan, so this is
not needed.
When resetting the stream buffer, check whether we are running within
valgrind and skip this code only in that case.
The define VALGRIND_POOL was incorrectly used for this logic.
Let's move 'proc' into include/configure.h so this define can
be easily used across the source base.
Configure defines it - but ATM we do not provide any configure
option to change it - there should be no need to ever change it.
Let's stop Coverity from thinking we are using the freed FILE*
here for anything other than comparing pointer values.
For this, use the original source of the old_stream pointer.
Add a new configure option for building lvm2 with the default
autoactivation setting enabled or disabled.
Might be useful when building e.g. rpms for systems where
event_activation is not desired.
Thin-pool and cache-pool targets have already become quite stable, so
let's try to remove the checking of pools when using the lvremove or
vgremove commands.
This skips checking pools when they are going to be removed, but also
when removing a thin volume that was the only user of a thin-pool.
In that case the thin-pool will still be there and could be activated
again with another thin volume, at which point thin_check will be
executed. This can delay discovery of metadata damage.
This shortens processing of the 'lvcreate -L -V -T' command and
avoids deactivation and reactivation (with thin_check) of the freshly
created empty thin-pool that will be used for the new thin volume
made with a single lvcreate command.
Shuffle the code for parsing VDO messages so it is also used for the
lvs segment status and can correctly report data usage for VDO LVs.
For this change, move the code and also change its API to use just a
mempool.
Fixes usage with the upstream 6.9 vdo target driver.
When using the message API for parsing VDO stats info, 0 was wrongly
used as the fallback trigger for trying the old sysfs API.
Switch to using ULLONG_MAX for values that could not be obtained
through the message call.
Fixes lvdisplay info for a freshly created VDO volume with 0 used data
blocks.
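The sentinel idea in a minimal sketch (struct and helper names are made
up for illustration):

  #include <limits.h>
  #include <stdint.h>

  struct vdo_stats_example {
          uint64_t used_blocks;   /* ULLONG_MAX == not obtained yet */
  };

  static void _init_stats(struct vdo_stats_example *s)
  {
          /* 0 is a legitimate value for a fresh VDO volume, so it must
           * not double as the "missing" marker; use ULLONG_MAX instead. */
          s->used_blocks = ULLONG_MAX;
  }

  static int _need_sysfs_fallback(const struct vdo_stats_example *s)
  {
          return s->used_blocks == ULLONG_MAX;
  }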
Make a separate function to decode which ID should be used for the
cvol meta or data volume - this also avoids code duplication.
As a result it's now also easier to see how the lvid is built.
Add an internal inline function wrapper for dm_strncpy().
Use it for calls where we test the result.
This avoids Coverity warnings about unchecked usage.
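A minimal sketch of such a wrapper (the name _dm_strncpy is assumed
here): routing the checked call sites through a separate inline name
keeps Coverity's checked-return heuristic from flagging the plain
dm_strncpy() calls whose result is intentionally ignored.

  #include <libdevmapper.h>

  /* Thin wrapper used where the result is tested; illustrative only. */
  static inline int _dm_strncpy(char *dest, const char *src, size_t n)
  {
          return dm_strncpy(dest, src, n);
  }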
Avoid problems with other libcs like musl and use dm_basename.
The prototype for basename() has been removed from string.h in the
latest musl [1]; compilers such as clang-18 flag the absence of a
prototype as an error, therefore include libgen.h to provide it.
[1] https://git.musl-libc.org/cgit/musl/commit/?id=725e17ed6dff4d0cd22487bb64470881e86a92e7
Reported-by: Khem Raj <raj.khem@gmail.com>
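For reference, a minimal standalone example of the portable include
(POSIX declares basename() in libgen.h, and it may modify its argument):

  #include <libgen.h>     /* POSIX location of basename(), not <string.h> */
  #include <stdio.h>

  int main(void)
  {
          char path[] = "/dev/mapper/vg-lv";  /* writable: basename() may modify it */

          printf("%s\n", basename(path));     /* prints "vg-lv" */
          return 0;
  }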
Add debug tracing for syscall failures.
Also switch some log_error calls to log_warn when the command does not
exit with an 'error' result and only warns the user.
Easier error path handling.
Initialize some vars at declaration time.
When using a cached LV with cachevols (so not with a cachepool),
the loaded table could have been using more than one mapping line
for sub devices - resulting in data corruption in some cases,
e.g. when taking a snapshot of such a cached LV: instead of a
single line, 2 lines were generated in the DM table because the code
skipped the protection against repeated addition.
vg-fast_cvol-cdata: 0 16384 linear 253:2 16384
vg-fast_cvol-cdata: 16384 16384 linear 253:2 16384
The new code is also refactored to use _add_new_cvol_subdev_to_dtree
(similar to _add_cvol_subdev..) and the addition of the subdev has
been moved after the check for an already processed node.
Also the cachevol sub devices are now added together with the insertion
of the cachevol with the cached LV.
Improve support for building the DM tree when there is a chain
of external origins used by an LV.
For this we cannot use track_external_lv_deps, as it works
only for an LV with just one external origin in its device tree.
Instead, add the 'dev' directly to the tree instead of adding the
whole LV. This avoids a possibly endless recursive loop, however we
may eventually have some problems with undiscovered/missing devices
in the DM tree.
Fix/support creation and usage of an external origin
across thin-pools - so a thin LV can use a thin LV from
some other thin-pool as a (read-only) external origin.
When creating an external origin via 'lvcreate --type thin',
add validation that the LV is usable as an external origin,
since certain LVs cannot really be used this way.
Also call this function early during lvcreate cmdline arg
validation so we do not need to do unnecessary operations.
Over time, the code for preloading detached LVs got unnecessarily
complicated. But actually we need to preload only LVs that
were previously non-toplevel (invisible) LVs and became visible
toplevel LVs in the precommitted metadata.
If some other rule were needed, it would likely be a bug in
conversion code forgetting to set the visibility flag on the
detached LV.
This reduces the number of unnecessary repeated DM tree preloads.
External origins for thin volumes can also be used at the same time
as old (thick) snapshot origins. However, in this case it's possible
the LV is only active as the 'external' origin while the old snapshot
LVs are not active. For this case, check the active state of the
origin LV before handling these LVs for un/monitoring.
This should prevent warnings about monitoring failures.
Make recursive directory path creation reusable via
dir_create_recursive.
While we already have dm_create_dir(), it does not take a mode arg,
so let's make lvm's own internal file helper function.
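A rough sketch of such a helper (this is not lvm2's
dir_create_recursive(), just the usual recursive mkdir-with-mode
pattern it describes):

  #include <errno.h>
  #include <limits.h>
  #include <string.h>
  #include <sys/stat.h>
  #include <sys/types.h>

  /* Create every missing component of 'path' with the given 'mode'. */
  static int _dir_create_recursive(const char *path, mode_t mode)
  {
          char buf[PATH_MAX];
          char *p;

          if (strlen(path) >= sizeof(buf)) {
                  errno = ENAMETOOLONG;
                  return 0;
          }
          strcpy(buf, path);

          for (p = buf + 1; *p; ++p)
                  if (*p == '/') {
                          *p = '\0';
                          if (mkdir(buf, mode) && errno != EEXIST)
                                  return 0;
                          *p = '/';
                  }

          return !mkdir(buf, mode) || errno == EEXIST;
  }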
Instead of parsing the whole /proc/kallsyms, use the faster variant
based on the modprobe tool's logic.
lvm2 here wants to know whether a particular DM cache policy is
present in the kernel - however, since the cache policy does not have
any kernel module parameters and it can be built into the kernel,
there is no /sys/module directory in that case and we would need to
call modprobe every time we want to detect this.
The old solution tried to look for a particular kernel symbol
(and likely not the right way, as smq_exit might actually be omitted).
The new version checks MODULES_PATH/`uname -r`/modules.builtin for
whether the cache policy module is present, instead of the CPU
expensive parsing of kallsyms.
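A sketch of the lookup (the /lib/modules path, buffer sizes and
function name are assumptions for illustration, not lvm2's code):

  #include <stdio.h>
  #include <string.h>
  #include <sys/utsname.h>

  /* Return 1 when modules.builtin of the running kernel lists the
   * given module, e.g. "dm-cache-smq". */
  static int _module_is_builtin(const char *module)
  {
          struct utsname uts;
          char path[512], line[512];
          FILE *fp;
          int found = 0;

          if (uname(&uts))
                  return 0;

          snprintf(path, sizeof(path), "/lib/modules/%s/modules.builtin",
                   uts.release);

          if (!(fp = fopen(path, "r")))
                  return 0;

          while (!found && fgets(line, sizeof(line), fp))
                  found = (strstr(line, module) != NULL);

          (void) fclose(fp);

          return found;
  }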
If lvm.conf has use_devicesfile=0 and /etc/lvm/devices/system.devices
exists, then rename it to system.devices-unused.YYYYMMDD.HHMMSS.
This prevents an old, incorrect system.devices from being used in
the future if lvm.conf is changed to use_devicesfile=1.
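A small sketch of building that timestamped name (directory and prefix
taken from the text above; the helper itself is illustrative):

  #include <stdio.h>
  #include <time.h>

  /* Fill 'buf' with e.g.
   * "/etc/lvm/devices/system.devices-unused.20240131.092500". */
  static int _unused_devices_path(char *buf, size_t len)
  {
          time_t t = time(NULL);
          struct tm tm;
          char stamp[32];

          if (!localtime_r(&t, &tm) ||
              !strftime(stamp, sizeof(stamp), "%Y%m%d.%H%M%S", &tm))
                  return 0;

          return snprintf(buf, len,
                          "/etc/lvm/devices/system.devices-unused.%s",
                          stamp) < (int) len;
  }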
Create backup copies of system.devices in /etc/lvm/devices/backup
named system.devices-YYYYMMDD.HHMMSS.NNNN. NNNN is the version
counter from the file.
Each time that an lvm command writes a new system.devices file,
it also writes the same file in the backup directory.
A new comment line is added to system.devices with HASH=<num>
where <num> is a crc calculated from the uncommented lines in
system.devices. This lets lvm detect if the file has been
modified outside of lvm itself.
If system.devices is edited directly, the next time a command
reads the file, the crc will not match the HASH value. The
command will then rewrite system.devices with the correct HASH
value, and create a backup reflecting the edits.
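To make the HASH idea concrete, here is a purely illustrative checksum
over the uncommented lines; the actual algorithm, seed and formatting
lvm uses for HASH=<num> are not reproduced here:

  #include <stdio.h>
  #include <stdint.h>

  /* Illustrative CRC-32 over every line not starting with '#';
   * the stored "HASH=<num>" value would be compared to this on read. */
  static uint32_t _devices_file_crc(FILE *fp)
  {
          char line[4096];
          uint32_t crc = 0xffffffffu;
          size_t i;
          int b;

          while (fgets(line, sizeof(line), fp)) {
                  if (line[0] == '#')
                          continue;       /* comment lines are not hashed */
                  for (i = 0; line[i]; i++) {
                          crc ^= (unsigned char) line[i];
                          for (b = 0; b < 8; b++)
                                  crc = (crc & 1) ? (crc >> 1) ^ 0xedb88320u
                                                  : crc >> 1;
                  }
          }

          return crc ^ 0xffffffffu;
  }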
A default limit of 50 backup files is kept, configurable by
lvm.conf devicesfile_backup_limit (set to 0 to disable backups.)