Older gcc doesn't really like complex members (buffer, struct) to be
initialized without extra {} around such a type.
So pick any other 'single type' member from the struct and set it to 0;
the compiler will zero-initialize the rest without emitting a warning.
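A minimal sketch of the pattern (struct and member names are illustrative):

    #include <time.h>

    struct config {
        char buffer[64];     /* complex member: array */
        struct timespec ts;  /* complex member: nested struct */
        int flags;           /* scalar 'single type' member */
    };

    /* Older gcc warns "missing braces around initializer" here:
     *   struct config c = { 0 };
     * Zeroing one scalar member instead keeps it quiet; the compiler
     * zero-initializes all remaining members anyway: */
    struct config c = { .flags = 0 };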
The cmd struct is now required in many more functions, and
it's added as a function arg for most direct dev-cache function
calls. The cmd struct is added to struct device (dev->cmd) so
that it can be accessed in many other cases where dev-cache
functions are being called from places where getting the cmd
struct is too difficult.
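Roughly (only dev->cmd comes from this change; the member type name
and surrounding members are assumptions):

    struct device {
        dev_t dev;
        /* ... */
        struct cmd_context *cmd; /* back-pointer, so dev-cache functions
                                    reachable only through a device can
                                    still get at the cmd struct */
    };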
The dm devs cache is separate from the ordinary dev cache,
so give the function names distinct prefixes, using
"dm_devs_cache" to prefix dm devs cache functions.
The list of dm devs was in the cmd struct and had a
different lifetime than the radix trees referencing
those dm devs. Now the list and radix trees are
created and destroyed together.
In the context of dm, 'device' refers to a dm device, but
in the context of lvm, 'device' refers to struct device.
Change some lvm function names to make that difference clearer.
dev_manager_get_device_list() -> dev_manager_get_dm_active_devices()
get_device_list() -> get_dm_active_devices()
device_get_uuid() -> dev_dm_uuid(), devno_dm_uuid()
The idea from patch 6e6d4c62b of handling the -suffix as an
indication of a private device needs to be disabled.
Some problematic cases are currently not resolvable and some
more thinking is needed.
Once fixed, we can revert this patch.
When the DM uuid cache is available, we can possibly avoid an unnecessary
status ioctl() when we check the device for a 'usable' uuid.
If this test passes, the existing code will go through the full check.
Move the code for caching active dm device devno, name and uuid
from device_mapper/libdm-iface to the dev_cache file - the libdm layer
takes care of 'decoding' the ioctl data from the kernel, while caching
for use by lvm stays within lvm.
Introduce:
dev_cache_update_dm_devs
dev_cache_get_dm_dev_by_devno
dev_cache_get_dm_dev_by_uuid
Use radix_tree for searching.
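A hypothetical usage sketch of the new calls (the argument lists are
assumptions, not the actual signatures):

    /* refresh the cached list of active dm devices */
    dev_cache_update_dm_devs(cmd);

    /* lookups are then served from radix_tree indexes */
    dm_dev = dev_cache_get_dm_dev_by_devno(cmd, devno);
    if (!dm_dev)
        dm_dev = dev_cache_get_dm_dev_by_uuid(cmd, uuid);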
Do not attach a layer suffix to the UUID when activating a component LV.
In this case we want to allow this LV to be public, thus
such an LV should not be using the -layer suffix in its UUID.
This also requires that our 'cached' access checks for
both UUIDs (with & without suffix), which was a previously unnoticed issue.
This change is now necessary since our udev rule automatically expects
any LV with a -layer suffix to be private and will prevent generating
any systemd unit, even when there are no 'DM' flag bits passed via
the cookie mechanism while creating such an LV.
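For example, the component LV's DM UUID now stays in the plain form
(the suffix name below is only illustrative):

    LVM-<vg_uuid><lv_uuid>          (public, udev generates units)
instead of
    LVM-<vg_uuid><lv_uuid>-cmeta    (private, udev ignores it)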
We've got the cached DM list before grabbing the lock, so there
is some chance that the DM table has changed and we would
need to refresh this info.
TODO: benchmark whether it would even make sense to refresh the cache
and keep its content instead of using individual ioctl() calls for the tree build.
The function that works with the DM target is located within the
lib/activate directory.
This function is able to use the cached dm_device_list when possible
to quickly resolve checks for a device's UUID.
The function can fully replace get_dm_uuid_from_sysfs() and, instead
of open/read/close syscalls, get the UUID with a single ioctl.
When there is a cached dm devs list, we can get many UUIDs from
a single syscall.
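Roughly, the difference looks like this (the signature shown is an
assumption):

    /* old path: 3 syscalls per device */
    /*   open("/sys/dev/block/MAJOR:MINOR/dm/uuid") + read() + close() */

    /* new path: one device-mapper ioctl, or no syscall at all
     * when the dm devs list is already cached */
    uuid = devno_dm_uuid(cmd, major, minor);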
Thin-pool and cache-pool targets have already become quite stable, so let's try
to skip the checking of pools when using the lvremove or vgremove commands.
This skips checking pools when they are going to be removed, but
also when removing a thin volume that was the only user of a thin-pool.
In that case the thin-pool will still be there and could be activated
again with another thin volume, and thin_check will be executed
at that moment - which can delay discovery of metadata damage.
Shuffle the code for parsing VDO messages so it is also used for the lvs
segment status, so it can correctly report data usage for VDO LVs.
For this change, move the code and also change its API to use just a mempool.
Fixes usage with the upstream 6.9 vdo target driver.
When using the message API for parsing VDO stats info, 0 was wrongly
used as the fallback for trying the old sysfs API.
Switch to using ULLONG_MAX for values that could not have been obtained
through the message call.
Fixes lvdisplay info for a freshly created VDO volume with 0 used data
blocks.
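The convention now looks roughly like this (the variable and helper
names are hypothetical):

    #include <limits.h>

    /* values the message API could not provide are set to ULLONG_MAX,
     * which cannot be confused with a legitimate 0 (e.g. a freshly
     * created VDO volume with 0 used data blocks) */
    if (stats.used_blocks == ULLONG_MAX)
        stats.used_blocks = _used_blocks_from_sysfs(lv); /* old sysfs API */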
Make a separate function to decode which ID should be used
for a cvol meta or data volume - this also avoids duplication of code.
As a result it's now also easier to see how the lvid is built.
When using a cached LV with cachevols (so not with a cachepool),
the loaded table could have been using more than one mapping line
for sub devices - resulting in data corruption in some cases,
e.g. when taking a snapshot of such a cached LV: instead of a
single line, 2 lines were generated into the DM table, as the code
skipped the protection against repeated addition.
vg-fast_cvol-cdata: 0 16384 linear 253:2 16384
vg-fast_cvol-cdata: 16384 16384 linear 253:2 16384
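whereas the expected table presumably maps the subdev with a single line:

vg-fast_cvol-cdata: 0 16384 linear 253:2 16384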
The new code is also refactored to use _add_new_cvol_subdev_to_dtree
(similar to _add_cvol_subdev..), and the addition of the subdev has
been moved after the check for an already processed node.
Also the cachevol sub devices are now added together with the insertion
of the cachevol with the cached LV.
Improve support for building the DM tree when there is a chain
of external origins used for an LV.
For this we cannot use track_external_lv_deps, as it works
only for an LV with just one external origin in its device tree.
Instead, add the 'dev' directly to the tree instead of adding the whole LV.
This avoids a possibly recursive endless loop; however, we may eventually
have some problems with undiscovered/missing devices in the DM tree.
Instead of using the size of the 'empty header' in the vdopool, use a fixed
size of 4K for the 'wrapping' vdo-pool device.
This fixes the issue when a user tried to activate a vdo-pool after
a conversion from a vdo managed device with 'vgchange -ay' - where
this command activated all LVs, with the 'vdo-pool' wrapping device as well,
but this converted pool uses a 0-length header.
This 4k size should usually prevent other tools like 'blkid' from recognizing
such a device as anything - so it shouldn't cause any problems with
duplicate identification of devices.
Introduce struct vdo_pool_size_config, usable to calculate the necessary
memory size for an active VDO volume.
The function lv_vdo_pool_size_config() is able to read this
configuration out of the runtime DM table line.
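Illustrative use (the exact signature and field set are assumptions):

    struct vdo_pool_size_config cfg = { 0 };

    if (!lv_vdo_pool_size_config(lv, &cfg))
        return 0;

    /* cfg now reflects the pool sizes parsed from the runtime
     * DM table line, from which the memory footprint of the
     * active VDO volume can be estimated */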
The new option "--fs String" for lvresize/lvreduce/lvextend
controls the handling of file systems before/after resizing
the LV. --resizefs is the same as --fs resize.
The new option "--fsmode String" can be used to control
mounting and unmounting of the fs during resizing.
Possible --fs values:

checksize
  Only applies to reducing size; does nothing for extend.
  Check the fs size and reduce the LV if the fs is not using
  the affected space, i.e. the fs does not need to be shrunk.
  Fail the command without reducing the fs or LV if the fs is
  using the affected space.

resize
  Resize the fs using the fs-specific resize command.
  This may include mounting, unmounting, or running fsck.
  See --fsmode to control mounting behavior, and --nofsck to
  disable fsck.

resize_fsadm
  Use the old method of calling fsadm to handle the fs
  (deprecated.) Warning: this option does not prevent lvreduce
  from destroying file systems that are unmounted (or mounted
  if prompts are skipped.)

ignore
  Resize the LV without checking for or handling a file system.
  Warning: using ignore when reducing the LV size may destroy the
  file system.

Possible --fsmode values:

manage
  Mount or unmount the fs as needed to resize the fs,
  and attempt to restore the original mount state at the end.

nochange
  Do not mount or unmount the fs. If mounting or unmounting
  is required to resize the fs, then do not resize the fs or
  the LV and fail the command.

offline
  Unmount the fs if it is mounted, and resize the fs while it
  is unmounted. If mounting is required to resize the fs,
  then do not resize the fs or the LV and fail the command.
Notes on lvreduce:
When no --fs or --resizefs option is specified:
. lvextend default behavior is fs ignore.
. lvreduce default behavior is fs checksize
(includes activating the LV.)
With the exception of --fs resize_fsadm|ignore, lvreduce requires
the recent libblkid fields FSLASTBLOCK and FSBLOCKSIZE.
FSLASTBLOCK*FSBLOCKSIZE is the last byte used by the fs on the LV,
which determines if reducing the fs is necessary.
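For example (vg/lv is a placeholder LV name, sizes are illustrative):

    lvreduce --fs checksize -L-1G vg/lv
        fail instead of reducing if the fs uses the last 1G

    lvreduce --fs resize --fsmode offline -L-1G vg/lv
        shrink the fs while it is unmounted, then reduce the LV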
When the volume size is extended, there is no need to flush
IO operations (nothing can be targeting the new space yet).
The VDO target is supported as a target that can safely work with
this condition.
Such support is also needed when extending the VDOPOOL size
while the pool is reaching its capacity - since this allows it
to continue working without hitting the 'out-of-space' condition
that flushing all in-flight IO would otherwise cause.