An LV with the pvmove_ prefix is not allowed to be created by a user,
so there is a good chance our selected name will never exist.
TODO: probably add code to get a generic unused LV name...
Older gcc doesn't really like complex types (buffer, struct) being
initialized without extra {} around such a type.
So pick any other 'single type' member from the struct and set it to 0;
the compiler will handle the rest without emitting a warning.
Revert 373372c8ab and instead update
our validation code to handle LVs with an empty segment - currently
we should need this only for the pvmove operation, thus such an LV should
have the name 'pvmove%u'.
This fixes a problem where a user tried e.g. pvmove on a VG with a single
PV - as reported: https://github.com/lvmteam/lvm2/issues/148
Reported-by: bob@redhat.com
The .cache and compile_commands.json are used by popular source crawling and
indexing clang tools, which in turn may be integrated with source code editors.
We may reuse the .cache directory for other caches and temporary
files.
The /doc/.ikiwiki and /public are related to ikiwiki.
Avoid a possible udev race - since dmsetup create does not
use the same cookie logic as lvm2 commands,
try to avoid racing with udev scanning on some systems.
The option can be used in multiple ways (like --cachesettings):
--integritysettings key=val
--integritysettings 'key1=val1 key2=val2'
--integritysettings key1=val1 --integritysettings key2=val2
Use with lvcreate or lvconvert when integrity is first enabled
to configure:
journal_sectors
journal_watermark
commit_time
bitmap_flush_interval
allow_discards
Use with lvchange to configure (only while inactive):
journal_watermark
commit_time
bitmap_flush_interval
allow_discards
lvchange --integritysettings "" clears any previously configured
settings, so dm-integrity will use its own defaults.
lvs -a -o integritysettings displays configured settings.
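A minimal usage sketch (VG/LV names, sizes and setting values are only
illustrative):
lvcreate --type raid1 -m1 -L1G -n lv1 --raidintegrity y \
    --integritysettings 'journal_watermark=60 commit_time=200' vg
lvchange --integritysettings bitmap_flush_interval=5000 vg/lv1
lvchange --integritysettings "" vg/lv1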
The log/command_log_report config setting now defaults to 1 if the json or
json_std output format is used (either by setting the report/output_format
config setting or using the --reportformat cmd line arg).
This means that if we use the json/json_std output format, the command log
messages are part of the json output too, not interleaved as
unstructured text mixed with it.
If log/command_log_report is set explicitly in the config, then we still
respect that, no matter what output format is used currently. In this
case, users can still separate and redirect the output by using
LVM_OUT_FD, LVM_ERR_FD and LVM_REPORT_FD so that the different types
do not interleave with the json/json_std output.
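For example (a sketch; the fd number is arbitrary):
lvs --reportformat json_std vg
LVM_REPORT_FD=3 lvs --reportformat json_std vg 3>report.json
The first form embeds the command log in the json document; the second keeps
the report on its own descriptor so other output cannot interleave with it.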
In case of different PV sizes in a VG, the lvm2 allocator falls short
defining extent segments resiliently when asked for a 100%FREE RaidLV
extension, and a RAID distinct-allocation check fails. The fix is to release
a memory pool on the resulting error path.
Until the lvm2 allocator gets enhanced (WIP) to do such complex (and other)
allocations properly, a workaround is to extend a RaidLV to any free space on
its already allocated PVs by listing those PVs on the lvextend command line,
then iteratively run further such lvextend commands to extend it to its
final intended size. Mind, this may be a non-trivial extension iteration.
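A sketch of the workaround (PV/LV names and sizes are illustrative):
lvextend -L+10G vg/raid_lv /dev/sda /dev/sdb
lvextend -L+10G vg/raid_lv /dev/sda /dev/sdb
i.e. restrict each step to the PVs already allocated to the RaidLV and repeat
until the final intended size is reached.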
The cmd struct is now required in many more functions, and
it's added as a function arg for most direct dev-cache function
calls. The cmd struct is added to struct device (dev->cmd) so
that it can be accessed in many other cases where dev-cache
functions are being called from places where getting the cmd
struct is too difficult.
The dm devs cache is separate from the ordinary dev cache,
so give the function names distinct prefixes, using
"dm_devs_cache" to prefix dm devs cache functions.
When a PV is stacked on an LV, the PV needs to be
dropped from bcache before the LV is processed.
The LV can be found in dev-cache using its name
rather than the devno.
The list of dm devs was in the cmd struct and had a
different lifetime than the radix trees referencing
those dm devs. Now the list and radix trees are
created and destroyed together.
In the context of dm, 'device' refers to a dm device, but
in the context of lvm, 'device' refers to struct device.
Change some lvm function names to make that difference clearer.
dev_manager_get_device_list() -> dev_manager_get_dm_active_devices()
get_device_list() -> get_dm_active_devices()
device_get_uuid() -> dev_dm_uuid(), devno_dm_uuid()
The comment explained that the ex (exclusive) global lock was just
used to trigger global cache invalidation, which is no
longer needed. This extra locking can cause problems
with LVM-activate when local and shared VGs are mixed
(and the incorrect exit code for errors was causing
problems.)
vgchange -an vg is permitted when the vg lockspace
is not available, because LVs could still be active
for some reason, and they should be inactive when not
properly locked. In case lvmlockd was not running, or
the lockspace was not started, the command was
unnecessarily trying and failing to unlock every LV,
printing errors for every LV. We can skip this when
the lockspace is known to not be available.
vgchange --lockstop will fail with EBUSY if orphan locks in the
lock manager prevent stopping the lockspace. The orphan locks
can then be adopted and released, and the lockspace then stopped
cleanly.
Lock adoption is not part of standard command behavior, but can
be used for manual recovery or cleanup from unexpected failure
cases. Like other lockopt values, they are hidden options for
--lockopt. Different lock managers will behave differently.
Adopting locks with lvmlockd -A1 is more accurate and automatic.
--lockopt adoptls
. for vgchange --lockstart
. adopt existing ls, or fail if no existing lockspace is found
--lockopt adoptgl | adoptvg | adoptlv
. for commands using lvmlockd locks
. adopt orphan gl/vg/lv lock, or fail the lock request if
no orphan lock is found
. will fail if orphan lock exists with a different lock mode
. command may still continue with a failed shared lock request
--lockopt adopt
. for lockstart or any command using lvmlockd locks
. adopt existing lockspace, or start lockspace if none exists
. adopt orphan gl/vg/lv lock, or acquire new lock if no orphan found
. will fail if orphan lock exists with a different lock mode
. command may still continue with a failed shared lock request
. with dlm this option only works for ls
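A sketch of possible invocations (VG/LV names are illustrative):
vgchange --lockstart --lockopt adoptls vg
lvchange -ay --lockopt adoptlv vg/lv
vgchange --lockstop vg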
Stop printing "Skipping global lock: lockspace not found or started"
for vgchange --lockstart, since it's generally an inherent limitation
that the global lock isn't available until after locking is started.
Update the start delay warning to "a few seconds".
The lvb is used to hold lock versions, but lock versions are
no longer used (since the removal of lvmetad), so the lvb
is not actually useful. Disable its use for sanlock to
avoid the extra i/o required to maintain the lvb.
vgremove with --lockopt force should skip lvmlockd-related
steps and allow a forced vg cleanup, in addition to using
--nolocking to skip normal locking calls.
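For example, a forced cleanup might then look like (VG name illustrative):
vgremove --nolocking --lockopt force vg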
Previously, a command would call lockd_vg() for a local VG,
which would go to lvmlockd, which would send back ENOLS,
and the command would not care when it saw the VG was local.
The pointless back-and-forth to lvmlockd for local VGs can
be avoided by checking the VG lock_type in lvmcache (which
label_scan now saves there; this wasn't the case back when
the original lockd_vg logic was added.) If the lock_type
saved during label_scan indicates a local VG, then the
lockd_vg step is skipped.
The idea in patch 6e6d4c62b of handling a -suffix as an
indication of a private device needs to be disabled.
Some problematic cases are currently not resolvable and some
more thinking is needed.
Once fixed, we can revert this patch.
For large device sets our dm_hash can produce a larger amount of mapping
collisions and we would need to further increase our hash size.
So instead use the radix_tree code, which is immune against a growing number
of devices and uses memory more efficiently to store all the paths.
Instead of the less efficient 'btree', switch dev_cache to use
radix_tree, which generates a more efficient tree mapping.
Some direct uses of btree iteration are replaced with our dev_iter code.
Convert the persistent filter to use the more memory-compact radix_tree, as
dm_hash is bound to a preallocated number of slots and stores the whole
key together with the value.
When the DM uuid cache is available, we can possibly avoid an unnecessary
status ioctl() when we check the device for a 'usable' uuid.
If this test passes, the existing code will still go through the full check.
Move the code for caching active dm device devno, name and uuid
from device_mapper/libdm-iface to the dev_cache file - the libdm layer
takes care of 'decoding' ioctl data from the kernel, while caching for use by
lvm stays within lvm.
Introduce:
dev_cache_update_dm_devs
dev_cache_get_dm_dev_by_devno
dev_cache_get_dm_dev_by_uuid
Use radix_tree for searching.
Do not attach a layer suffix to the UUID when activating a component LV.
In this case we want to allow this LV to be public, thus
such an LV should not be using a -layer suffix in its UUID.
This also requires that our 'cached' access checks for
both UUIDs (with & without suffix), which was an unnoticed issue before.
This change is now necessary since our udev rule automatically treats
any LV with a -layer suffix as private and will prevent generating
any systemd unit even when there are no 'DM' flag bits passed via the
cookie mechanism while creating such an LV.
In order to free SubLVs after a stripe-removing reshape, lvconvert has
to be run without layout changes. Prevent a layout-changing request
in case any such freed SubLVs exist and inform the user about the fact,
requesting that they be released first.
Calculates expected size before/after reshapes adding/removing stripes
to/from RaidLVs with levels 4/5/6/10 and compares it with the actual
one the block layer shows. Stripes reshaped to are listed in the
tst_stripes variable. mkfs/fsck/resize2fs the respective RaidLVs
to confirm ext4 can be resized accordingly without issues.
We automatically ignore these devs when lvm2 creates them;
however, when the lvm2 database is dropped or someone just
created these devs with such a formatted UUID, there is no
other information than the DM UUID to check.
SUSE kernel distributions enable CONFIG_PRINTK_CALLER by default.
A line of cat /dev/kmsg then looks like:
6,904,9506214456,-,caller=T24012;loop8: detected capacity change from 0
to 354304
If CONFIG_PRINTK_CALLER is off:
6,721,53563833,-;loop0: detected capacity change from 0 to 354304
',caller=T24012' is the redundant part that needs to be handled.
Otherwise pos will be larger than the buffer size, causing sz to underflow.
The std::string constructor then throws an exception, breaking the
tests:
$ make check_local T=shell/000-basic.sh
VERBOSE=0 ./lib/runner \
--testdir . --outdir results \
--flavours ndev-vanilla --only shell/000-basic.sh --skip @
running 1 tests
running: [ndev-vanilla] shell/000-basic.sh
0:00.000Exception: basic_string::_M_create
make[1]: *** [Makefile:148: check_local] Error 1
make[1]: Leaving directory '/root/lvm2/test'
make: *** [Makefile:89: check_local] Error 2
Fix it by using strchr to find the ';' delimiter and compute pos.
Reported-by: Su Yue <glass.su@suse.com>
To more easily adopt radix_tree over the existing code base, add an
abstraction over 'radix_tree_iterate' which basically builds
an array of all traversed values, making it easy to
go over all array elements.
TODO: the code should be converted to use radix_tree_iterate()
directly, as it's more efficient.
Note: it may be possible to rewrite the recursive _iterate() usage
to a linear traversal; not sure whether it's worth the effort
as conversion to 'radix_tree_iterator' is relatively simple.
Add simple 'wrapper' inline functions to insert or return a ptr lookup value.
(So the user doesn't need to deal with 'union radix_value' locally, and it
also makes it easier to translate some lvm2 functions to use radix_tree.)
Note: if the stored 'value' were NULL, it could not be distinguished
from the 'not found' case. So this is usable only when the 'values' stored
in the tree are not NULL.
Some updates to the _dump() debugging function so the printed result
can be more easily examined by a human.
Also print 'prefix' as a string when all chars are printable.
Add the 'simple' variant of _dump also to the 'simple' version of the radix
tree (which is however normally not compiled).
Instead of using 'key state & key end' uint8_t* pointers, switch to a
void* key and size_t keylen. This allows easier adaptation in the lvm
code base, which otherwise needs way too much casting with every use of
the function.
Also correctly mark const buffers to avoid compiler warnings and
casting.
Adapt the only bcache user ATM to the API change.
Adapt the unit test to match the changed API.
Fix commit b65a2c3f3a "vgimportdevices: skip lvmlockd locking"
which intended to disable lvmlockd locking, but the lockd_gl_disable
flag was mistakenly set after lock_global() so it wasn't effective.
This caused vgimportdevices to fail unless locking was started.
Enums are single 'values', so they are not a proper type for bitfields.
(Probably better to use such values as defines.)
Although 'daemon_talk()' is part of the library API here, it's a hidden,
non-public API call - and moreover 'enum' and 'unsigned' have
the same size, so the linker shouldn't have any issue with
this symbol usage.
For this reason there are no 'versioning' tricks applied.
The size of these hashes was quite small, so raise the number of
hashed entries to reduce the amount of hash collisions.
Select some unique/unused number below 8192 for hash_create.
Replace the call to get_dm_uuid_from_sysfs() with use of
device_get_uuid(), which gets the same information,
but instead of several syscalls it needs either 1 or even 0
when the information is cached with newer kernels.
We've got the cached DM list before grabbing the lock, so there
is some chance the DM table has changed and we would
need to refresh this info.
TODO: benchmark whether it would even make sense to refresh the cache
and keep its content instead of using individual ioctl() calls for the
tree build.
A function that works with a DM target is located within the
lib/activate directory.
This function is able to use the cached dm_device_list when possible
to quickly resolve checks for a device's UUID.
The function can fully replace get_dm_uuid_from_sysfs() and, instead
of open/read/close syscalls, gets the UUID with a single ioctl.
When there is a cached dm devs list, we can get many UUIDs from
a single syscall.
For better use of cached data located within cmd_context,
pass this structure from the top level function.
Also add missing '_' for static _dev_cache_index_devs.
No other change here.
Introduce a function to find a device's name and uuid for
a given major:minor.
This information is cached with dm_device_list, which reads all the
info from a single ioctl(DM_DEVICE_LIST).
Lvm keeps the major:minor, name & uuid for active devices in the system.
Skip scanning and stat() for dirs and nodes within known /dev/ paths
where no block devices are located.
Also call strlen(_cache.dev_dir) just once.
TODO: add more dirs to _no_scan (configurable via lvm.conf ?)
Just like the _VAL strings, the _ARG strings do not need to
be present - we can easily check for the LONG opt version just
by adding an attribute.
With the ARG_LONG_OPT attribute, the string arg name[] becomes unused
and can be safely removed.
Also, within _find_command_id_function() we do not need to handle
'command_enum == CMD_NONE' as a separate case and can just use a single loop.
Previous commit 82617852a4
introduced a bug in completion - as rl_completion_matches()
needs to always advance to the next element, where the index
is held in a static variable.
Add a comment about this usage.
When PVs are created on LVs, remove the devices file entries
for the PVs when the LVs are removed. In general, the devices
file entries should be removed with lvmdevices --deldev when
the LVs are removed (lvremove is the equivalent of detaching
a device from the system when layering PVs on LVs.)
This change is effectively an automatic lvmdevices --deldev
command that is built into lvremove when the LV has a PV on it.
An OS installer can create system.devices for the system and
disks, but an OS image cannot create the system-specific
system.devices. The OS image can instead configure the
image so that lvm will create system.devices on first boot.
Image preparation steps to enable auto creation of system.devices:
- create empty file /etc/lvm/devices/auto-import-rootvg
- remove any existing /etc/lvm/devices/system.devices
- enable lvm-devices-import.path
- enable lvm-devices-import.service
On first boot of the prepared image:
- udev triggers vgchange -aay --autoactivation event <rootvg>
- vgchange activates LVs in the root VG
- vgchange finds the file /etc/lvm/devices/auto-import-rootvg,
and no /etc/lvm/devices/system.devices, so it creates
/run/lvm/lvm-devices-import
- lvm-devices-import.path is run when /run/lvm/lvm-devices-import
appears, and triggers lvm-devices-import.service
- lvm-devices-import.service runs vgimportdevices --rootvg --auto
- vgimportdevices finds /etc/lvm/devices/auto-import-rootvg,
and no system.devices, so it creates system.devices containing
PVs in the root VG, and removes /etc/lvm/devices/auto-import-rootvg
and /run/lvm/lvm-devices-import
Run directly, vgimportdevices --rootvg (without --auto), will create
a new system.devices for the root VG, or will add devices for the
root VG to an existing system.devices.
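The preparation steps above translate to something like (a sketch; the paths
and unit names are taken from the description):
touch /etc/lvm/devices/auto-import-rootvg
rm -f /etc/lvm/devices/system.devices
systemctl enable lvm-devices-import.path lvm-devices-import.service
and a manual run on an installed system:
vgimportdevices --rootvg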
For a while now, lvm2 has optimized the 'vgremove' operation to
a single commit when possible, if all LVs can be
easily deactivated.
So the number of LVs doesn't matter much - but the tested
case 'test_delete_non_complete_job' seems to be taking
some time anyway to capture the exception.
So just reduce the running time of the test significantly,
as we don't need to create 64 LVs for 4 'execution mode' runs.
The order is used for man page generation (although not completely).
So place 'zero & error' at the end of the list.
Keep linear, striped, snapshot in front.
For the rest use alphabetical order.
Move the part of the 'inner' loop which would otherwise
always produce the same results for all 'opt_enum' values
out of the loop, so it can be evaluated just once.
Instead of resolving and storing 'command_fn'
within 'struct command', use just a function enum
and resolve the function pointer in place,
where it is really needed - first try to resolve
the 'new style' name and fall back to the 'old style' name.
Instead of storing command_id as a string, directly
translate the string to an enum index and use 'command_enum()'
to get the string when needed for printing.
This way we can easily detect an error in the structure
while parsing it - and we can later avoid a separate
'translation' loop.
We can more efficiently use the command_name struct to
look up the lvm_command_enum and avoid many repeated
command name searches, since we already know
the enum index that is now stored in 'struct command'.
Constify members in cmdline_context; it would be nice to replace
this static struct with a couple of function calls.
Also replace some 'while' loops with for loops, so the code
is more readable.
Split struct command_name into the constant part (keeping the name)
and a new 'struct command_name_args' which holds runtime-computed
info. To get to the _args part we can easily use the
lvm_command_enum as an equivalent index.
The constified 'struct command_name' part is now fully stored
in the .data.rel.ro segment, while the command_name_args part goes
to the .bss segment.
Code will be further reduced with next refactoring.
Refactor the code to not allocate memory for rule->opts;
instead use a uint16_t array of MAX_RULE_OPTS within cmd_rule.
Also print more info if the array would not be big enough (>= 8).
Add a slight delay, waiting until the 'started' dmeventd is
responding to another 'dmeventd -i' command.
TODO: the race observed here is somewhat unclear, might need some more
details.
There is no reason to wait 10 sec when removing 'vg' at test
exit - we just need to kill the 'sleep 10' process forked from flock.
(Not using 'flock -F' as this flag is not available across all machines used
for testing.)
This reverts commit d16a8f80e9.
So the correction was OK, however we also missed fixing a
cut&paste bug here - the second check should
actually be checking field->type.
Update definitions to add support for some more VDO options
when converting volumes to be used as a thin-pool with a vdo data volume.
Split some options from the existing OO_LVCONVERT_VDO into
OO_LVCONVERT_VDO_POOL and reuse them with OO_LVCONVERT_THINPOOL.
When converting an already existing vdopool to be used as a
thin-pool backend and the user is passing options for VDO configuration,
process them - as we know the converted LV is offline, we can make such a
change easily instead of telling the user to run a separate lvchange later.
When converting volumes to be used for a thin-pool with VDO, allow
users to control the wipesignature behaviour.
By default volumes should be checked for signatures, and if
they are present, we prompt the user whether they want to proceed with the
conversion and lose e.g. a filesystem present on such a volume.
Users that want to bypass the prompt in a script can either use --yes
or disable wipe signature with -Wn.
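A sketch of the non-interactive form (VG/LV names are illustrative, and it
assumes -Wn is accepted by the conversion command as described above):
lvconvert --type thin-pool -Wn --yes vg/vdopool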
Resolve a reported leak by renaming the existing pvck() to pvck_mf()
and wrapping pvck() around it. This way the function can easily
free the allocated buffer within the subfunction.
When we switch supported_reserved_types_with_range to const,
gcc reports this problem:
warning: ‘and’ of mutually exclusive equal-tests is always 0
!(iter->type & supported_reserved_types_with_range))) {
It's not clear from the history what the actual intention of this
internal error test was; let's assume the check wanted to make sure
that when DM_REPORT_FIELD_RESERVED_VALUE_RANGE is set,
some other fields from the supported_reserved_types_with_range set
are also selected.
This was rather an API mistake - the internals of libdm
do handle the suffixes list as const strings, just the API
was only using 'const char **'.
So the user may safely pass a casted 'const char * const *'.
man-generate --check actually noticed 2 identical definitions
for snapshot creation with 'lvcreate -T [--snapshot]'
and 'lvcreate --snapshot [-T]'.
So drop the '-T' from the second alternative variant, as the
thin type is already implied there.
Refactor the code so the definitions may become 'static const',
and with configure_command_option_values() we update an option's
val_enum for the actually running command when used.
Also update _update_relative_opt(), which is used for
generating man pages and command help.
Introduce an enumeration for lvm2 commands - so we may
use enum cmd_COMMAND instead of string checking.
The running command now does not modify opt_names.
The _allocate_memory() function was not compiled in when lvm2 was
built with support for better memory pool tracking with valgrind.
Now correctly avoid this function only when running
within a valgrind environment.
Do not modify the flags field of 'struct command_name';
instead control this via cmd_context's get_vgname_from_options.
The GET_VGNAME_FROM_OPTIONS flag is currently used only by lvconvert.
Create leading dirs for lvmdbusd lock file, otherwise it fails to start:
| systemd[1]: Starting LVM2 D-Bus service...
| lvmdbusd[1602]: [1602]: Error during creation of lock file(/var/lock/lvm/lvmdbusd): errno(2), exiting!
Signed-off-by: Kai Kang <kai.kang@windriver.com>
In the past it was common for a single command to run
multiple lvmcache_label_scan, and this code was a way
to make each call select the same duplicate pvs. Now
commands run a single lvmcache_label_scan, so this is
not needed.
When resetting the stream buffer, check for being run within valgrind
and only in this case skip this code.
The VALGRIND_POOL define was incorrectly used for this logic.
We can build and run this code just fine with VALGRIND_POOL
and just rely on runtime detection - so we do not close
descriptors when running within a valgrind session.
Old systems do not work well with -l% in findstring,
so use a different strategy to recognize whether -lreadline needs
another library (-ltinfo, -lncurses...).
(So we don't need to solve this via 'configure'.)
Also, for now, comment out -Wl,-z,pack-relative-relocs and leave it
to the package maintainer whether they want to use it.
Possibly add some 'configure' autodetection for this switch,
as it's a relatively new feature.
Add some -Wl flags separately and avoid their duplication.
Also add --as-needed only when the system is using a 'newer' readline
library - on older systems the usage of '--as-needed'
seems to cause some hard-to-solve problems - so avoid it.
Attach -Wl,-z,relro,-z,now,-z,pack-relative-relocs,--as-needed
to LDFLAGS, but only if LDFLAGS doesn't already contain 'relro'
(so it's not given repeatedly).
Also start to use -z,now linkage when building libraries
with default compilation - this avoids calling the symbol resolver
while library functions use functions needing resolution.
Note: Fedora or RHEL rpm building is using:
CFLAGS=$(rpm --eval %{build_cflags})
LDFLAGS=$(rpm --eval %{build_ldflags})
Also split -DUSE_SD_NOTIFY out of CFLAGS into DEFS.
Use LDFLAGS separately with every use of CLDFLAGS and leave
that flag only for handling versioning.
This will reflect any LDFLAGS setting used during make.
Look not only for the whole 64-byte sequence,
but also seek 32-byte, 16-byte and 8-byte parts of the
key.
Currently, to get past the memcpy ZMM problems, add a possible
workaround in the form of a GLIBC_TUNABLES setting.
Avoid using 'lvs' from the 'get' shell helper - as that would wait until
the whole group of processes is finished.
TODO: rethink what the point of starting 'dmeventd' with lvs would be.
It seems to break some rules.
Refactor _start_daemon and add a synchronization delay, waiting
until the newly forked dmeventd actually opens its fifos and is ready
to process messages.
This closes a race window in testing.
When no lock manager for the global lock had been set yet,
and the first global lock request was received, and both
dlm and sanlock were running, lvmlockd would assume it
should use the dlm for the global lock, and would start
the "lvm_global" dlm lockspace automatically. That's
not always correct, so don't assume the dlm should be used,
fail the lock request, and wait until a VG with a specific
lock type is started to determine the lock manager to use.
Also detect the presence of the 'vdoformat' tool.
Fall back to 'kvdo' modprobe only when dm-vdo fails
(removes an ugly error message from the log).
Also add an extra check for the scsi model being present
so the test can wait a bit if 'scsi_debug' takes a longer time.
No need for 'aux' within aux.
Refactor existing code from tools/lvmcmdline.c into
libdaemon/server/daemon-stray.h daemon_close_stray_fds(),
used to close stray descriptors above some specified fd.
This code parses the content of the /proc dir to minimize 'blind' closing
of all possible descriptors within the rlimits range.
We had the same code in a few other places in a more 'trivial'
version - those were actually sensitive to the high number of descriptors
which might be configured on some systems.
With this patch we effectively resolve this reported gitlab issue:
https://gitlab.com/lvmteam/lvm2/-/issues/5
TODO: The current placement might not be ideal - however, considering
existing code base constraints, it's not so simple.
ATM it uses lib/misc/lvm-file.h for the custom_fds declaration
and the rest of the functionality is included in the daemon header file.
Let's move proc into include/configure.h so this define can
be easily used across the source base.
Configure defines it - but ATM we do not provide any configure
option to change it - there should be no need to ever change it.
Handle the case where we 'kill -9' a running dmeventd,
and a restarting 'dmeventd -R' still manages to contact this
'zombie' daemon, manages to get the list of monitored devices
out of it, and eventually 'runs' a new and in this case
unexpected instance of dmeventd.
Some tests do expect event_activation to be set.
So add explicit configuration of this setting in tests,
but also add a new default which kind of does it globally,
as it's the expected default (yet our testing rpms might
be created with event_activation disabled).
By adding this to each test individually, it's now easy
to locate such tests...
Move statics to main() local vars.
Initialize libdm logging before starting _restart_dmeventd()
so functions there can also be logged.
Move the call of _info_dmevent() out of option processing - so some
options like -V are processed with higher priority.
Use _restart_dmeventd() with return values 0,1,2.
Also let's use the already created fifos struct.
Make sure the _systemd_activation variable is properly initialized
from _systemd_handover before calling _restart_dmeventd(), as
this variable is used there to decide whether 1 or 2 should
be returned - i.e. either letting systemd initialize
or restart dmeventd itself.
This is an initial test of how to disable event_activation on
machines where lvm2-testsuite packages are installed
with their own lvm.conf file.
TODO: find a more elegant mechanism.
Occasionally this test fails, as sometimes the UUID may actually
contain the string LV[d], causing grep to mismatch the
UUID and LV name and eventually fail the test for the wrong reason.
As a simple workaround, print the LV name first and
check that the name is followed by a space character.
This code is somewhat complex and involves recursion and pointer
shuffling, which confuses Coverity here.
Let's add a masking comment for this warning, as there is no double
free in this code.
Let's stop Coverity from thinking we are using a freed FILE*
for anything else than comparing numbers.
For this, use the original source of the old_stream pointer.
Add a new configure option for building lvm2 with the
default autoactivation setting enabled or disabled.
Might be useful for building e.g. rpms for systems where
event_activation is not desired.
Parse the timestamps included in kernel kmsg lines and add the current system
time to the message as well - this makes it easier to look things over
when chasing messages in the journal after some time...
Before initiating fifo communication, check whether there is a
running dmeventd - if there is no running instance, we would
otherwise just be blocked for 5 seconds trying to open the fifo.
Also, awk got annoyed by the \# char sequence, which it reports
as incorrect, however older rpm builders were failing without it.
So let's just try another variant.
The thin-pool and cache-pool targets are already quite stable, so let's try
to remove the checking of pools when using the lvremove or vgremove commands.
This skips checking pools when they are going to be removed, but
also when removing a thin volume that was the only user of a thin-pool.
In this case the thin-pool will still be there and could be activated
again with another thin volume, and thin_check will be executed
at that moment. This can delay discovery of metadata damage.
Shortens processing of the 'lvcreate -L -V -T' command and
avoids deactivation, and reactivation with thin_check, of the empty
freshly created thin-pool that will be used for the new thin volume
made with a single lvcreate command.
Shuffle code so the VDO message is also parsed for the lvs segment status,
so it can correctly report data usage for VDO LVs.
For this change, move the code and also change its API to use just a mempool.
Fixes usage with the upstream 6.9 vdo target driver.
When using the message API for parsing VDO stats info, 0 was wrongly
used as the fallback for trying the old sysfs API.
Switch to ULLONG_MAX for values that could not be obtained
through the message call.
Fixes lvdisplay info for a freshly created VDO volume with 0 used data
blocks.
Make a separate function to decode which ID should be used
for a cvol meta or data volume - this also avoids duplication of code.
As a result it's now also easier to see how the lvid is built.
Fix cutting away 1 character via incorrect usage of dm_strncpy,
introduced in the last batch of commits, and use sizeof(buffer) to
get proper sizes.
In the case of strcpy_name_len() the replacement was invalid,
so we need to restore this case, since sanlock uses a buffer without
a nul ending and we would strip one more character from the buffer.
Also start to use dm_strncpy() without (void) for unchecked usage.
Add internal inline function wrapper for dm_strncpy().
Use it for calls where we test the result.
Avoids emitting warnings in Coverity for unchecked usage.
This sanity check actually confused Coverity in some way,
giving it some assumption about array access.
Since these two checks basically validated the compiler's capability
to add and then subtract a number from a char pointer, we likely
don't really need them - if this basic math did not work, the
compiler would have far more troubles...
So drop them - and get rid of the report:
Event overrun-call: Overrunning callee's array of size 513 by...
Reduce #lv and intervals.
We are trying to ensure that the daemon stops while it's busy processing
its internal queues. Decrease the number of LVs and intervals to allow
this test to complete in less time.
Add a global timeout value to be used for the threads to end waiting for
whatever it is they are blocked on. The values varied from 2-5 seconds,
which is way longer than needed. Value of 0.5 shows no CPU load when
service is running and is idle.
Shuffle code to avoid using a static variable to pass a parsed option.
The code is now easier to follow and a number of Coverity reports
will go away.
There should be no functional change.
Add debug tracing for syscall failures.
Also switch some log_error calls to log_warn when the command does not exit
with an 'error' result and only warns the user.
Easier error path handling.
Initialize some vars at declaration time.
Here we actually need to slow down only $dev2 - since the repair operation
only reads data from this device and compares it with the origin $dev1,
and if they match there is no write...
Move memset() to the initialization function define_commands().
There is also not much point in clearing memory on a command's exit,
so drop zeroing of ~2M of RAM.
Use \0 as the EOL in the compiled-in syntax description to avoid
an unnecessary line copy that just replaced \n with \0.
Also use the already split lines when possible.
Handle a mismatch in the reported 'dm raid' status, where the active
raid LV can actually show a higher number of raid leg devices
than the number of shown status characters.
This can happen if a raid leg is dropped during the initial
resynchronization.
Setting db_persist is required for dm devices so that their properties
are carried over on switch-root from the initrd to the rootfs. This
logic has always lived in dracut
(https://github.com/dracutdevs/dracut/blob/master/modules.d/90dm/11-dm.rules).
However, this means that other initramfs generators each have to
implement and maintain the same rule which leads to unnecessary
duplication.
Instead, let's make the rule part of the upstream lvm rules, which
will ensure that generated initramfses will just work if they make
sure the lvm udev rules are installed, without having to figure out
that they have to add an extra rule themselves on top.
Identical rule in Arch Linux's lvm2 package: https://gitlab.archlinux.org/archlinux/packaging/packages/lvm2/-/blob/main/11-dm-initramfs.rules?ref_type=heads
For DM devices, the add/change/remove actions can appear for genuine
udev events.
However, there are more action types (bind, unbind, move, online, offline)
which never appear as actions for genuine DM udev events, but they can
still be synthesized (e.g. by writing "<action>" to "/sys/.../uevent" file
or by calling "udevadm trigger --action=<action>").
Let's also process these extra action types so that the udev-related content
is not lost completely, keeping all the symlinks and udev db entries just like
this was a synthetic udev event with "change" action.
Related to https://gitlab.com/lvmteam/lvm2/-/issues/4.
Bump the rules version in order to indicate that upper level rules
should consume DM_UDEV_DISABLE_OTHER_RULES_FLAG rather than DM_NOSCAN
and DM_SUSPENDED.
Also update the comments at the top of the file that describe the
exported properties, and add a note about internal device-mapper
properties.
Signed-off-by: Martin Wilck <mwilck@suse.com>
Reviewed-by: Peter Rajnoha <prajnoha@redhat.com>
DM_NOSCAN is not an official API any more and doesn't have to be
restored from the udev db. Rename it to .DM_NOSCAN.
Signed-off-by: Martin Wilck <mwilck@suse.com>
Reviewed-by: Peter Rajnoha <prajnoha@redhat.com>
DM_SUSPENDED is a device-mapper internal flag, which is not intended to be
used by other rules, and which is determined by 10-dm.rules from sysfs for
every uevent. Rename it to ".DM_SUSPENDED", so that it won't be saved in the
udev database.
Known consumers of DM_SUSPENDED are 66-kpartx.rules (from multipath-tools) and
99-systemd.rules (from systemd). These will have to be adapted.
11-dm-mpath.rules will be changed to use .DM_SUSPENDED.
Signed-off-by: Martin Wilck <mwilck@suse.com>
Reviewed-by: Peter Rajnoha <prajnoha@redhat.com>
DM_UDEV_DISABLE_OTHER_RULES_FLAG is used as the "output" flag of the
device-mapper rules, to be consumed by non-dm rules. It is a logical OR of
several conditions that might make dm devices inaccessible. 10-dm.rules
calculates it for every uevent, whether it's genuine or spurious.
DM_SUBSYSTEM_UDEV_FLAG0 is just another flag that needs to be or'd in. We
don't need to restore the previous state of DM_UDEV_DISABLE_OTHER_RULES_FLAG.
Actually, doing so is wrong if the flag has previously been set because the
device was suspended, and the device isn't suspended anymore.
Signed-off-by: Martin Wilck <mwilck@suse.com>
Reviewed-by: Peter Rajnoha <prajnoha@redhat.com>
We use DM_UDEV_DISABLE_OTHER_RULES_FLAG to tell upper non-DM layers
to keep their hands off the device in question, for any reason.
One possible reason is that the device is suspended; another is that
the cookie carries the flag of the same name.
DM_SUSPENDED is not restored from the db, but evaluated anew for every
uevent. Therefore DM_UDEV_DISABLE_OTHER_RULES_FLAG shouldn't be
restored, either. Use a new variable DM_COOKIE_DISABLE_OTHER_RULES_FLAG
to save and restore the original value from the cookie.
Signed-off-by: Martin Wilck <mwilck@suse.com>
Reviewed-by: Peter Rajnoha <prajnoha@redhat.com>
DISK_RO is set in the environment of a block-device uevent if and only if
the read-only (ro) attribute of the device just changed (the kernel
function set_disk_ro() was called). It is not synonymous with the "ro" sysfs
attribute; the device could very well be write-protected if DISK_RO is not
set. Device mapper-level probing is possible for DISK_RO events, but it makes
little sense, because the device properties haven't changed as far as dm is
concerned. But we should import possible previously set device properties
to avoid confusing follow-up rules. We should do this for both DISK_RO=1
and DISK_RO=0 events.
Signed-off-by: Martin Wilck <mwilck@suse.com>
Reviewed-by: Peter Rajnoha <prajnoha@redhat.com>
ID_FS_TYPE is the most important udev property for most follow-up
rules. It must be imported from the udev db if blkid can't be run.
Signed-off-by: Martin Wilck <mwilck@suse.com>
Reviewed-by: Peter Rajnoha <prajnoha@redhat.com>
When using a cached LV with cachevols (so not with a cachepool),
the loaded table could have been using more than one mapping line
for sub devices - resulting in data corruption in some cases,
e.g. when taking a snapshot of such a cached LV: instead of a
single line, 2 lines were generated in the DM table, as the code
skipped the protection against repeated addition.
vg-fast_cvol-cdata: 0 16384 linear 253:2 16384
vg-fast_cvol-cdata: 16384 16384 linear 253:2 16384
The new code is also refactored to use _add_new_cvol_subdev_to_dtree
(similar to _add_cvol_subdev..), and the addition of the subdev has
been moved after the check for an already processed node.
Also, the cachevol sub devices are now added with the insertion
of the cachevol with the cached LV.
Document the usage of chained external origins, as it is now possible to
create a thinA LV in poolA which uses a thinB LV in poolB as its external
origin, and the user could create a chain of such LVs.
When using swap_lv_identifiers() we were basically exchanging 'names',
and only some additional data were 'transferred', depending on the caller.
However, in most cases we should swap properties like 'hostname', as
the creation information should be preserved.
So let's make the function more universal.
Improve support for building the DM tree when there is a chain
of external origins used for an LV.
For this we cannot use track_external_lv_deps, as this works
only for an LV with just one external origin in its device tree.
Instead, add the 'dev' directly to the tree instead of adding the whole LV.
This avoids a possibly recursive endless loop, however we may eventually
have some problems with undiscovered/missing devices in the DM tree.
When we have some existing LV and this LV is being converted to an
external origin, during the DM table manipulation there is a short
moment when the LV is being 'resumed' as a 'read-only' volume
while still being live as an 'rw' volume, i.e. we could have had
a single thin LV active twice.
To avoid such weird scenarios of dual access to the same volume, we
just postpone the resume until the moment where the existing volume
is already suspended, thus no I/O can be in flight to such a device.
Note: there is however a slight catch - we now basically have
a different 'risk' case where a resume of such a new external
origin LV might fail while we are already in the suspend tree state -
resolving the error path in this situation is non-trivial as well...
Fix/support creation and usage of an external origin
across thin-pools - so a thin LV can use a thin LV from
some other thin-pool as an external origin (read-only).
When creating an external origin via 'lvcreate --type thin',
add validation for the LV being usable as an external origin,
since certain LVs cannot really be used this way.
Also call this function early during lvcreate cmdline arg
validation, so we do not need to do unnecessary operations.
Over time, the code for preloading detached LVs got unnecessarily
complicated. But actually we only need to preload LVs that
were previously non-toplevel (invisible) LVs and became visible
toplevel LVs in the precommitted metadata.
If some other rule were needed, it would likely be a bug in
conversion code forgetting to set the visibility flag on a detached LV.
This reduces the number of unnecessary repeated DM tree preloads.
When building in another dir, ensure srcdir is used to find the helper
script (via vpath) through $< usage.
Also add missing [INSTALL] prints for some installed files.
External origins for thin volumes can also be used at the same time
as old (thick) snapshot origins. However, in this case it's possible
the LV is only active as an 'external' origin, while the old snapshot LVs
are not active. For this case, before handling these
LVs for un/monitoring, check the active state of the origin LV.
This should prevent warnings about monitoring failures.
Make recursive directory path creation reusable via
dir_create_recursive.
While we already have dm_create_dir(), it doesn't take a mode arg,
so let's make an lvm-internal file helper function.
Instead of parsing the whole /proc/kallsyms, use a faster variant
based on the modprobe tool's logic.
lvm2 here wants to know whether a particular DM cache policy is
present in the kernel - however, since the cache policy does not have
any kernel module parameters and it can be built into the kernel,
there is no /sys/module directory in such a case and we would need to call
modprobe every time we want to detect this.
The old solution tried to look for a particular kernel symbol
(and likely not the right way, as smq_exit might actually be omitted).
The new version checks MODULES_PATH/`uname -r`/modules.builtin for
the presence of the cache policy module instead of CPU-expensive parsing
of kallsyms.
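A rough shell equivalent of the new check (the module name dm-cache-smq is an
assumption here, not taken from the commit itself):
grep -q dm-cache-smq "/lib/modules/$(uname -r)/modules.builtin" \
    && echo "smq cache policy is built into the kernel"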
If lvm.conf has use_devicesfile=0 and /etc/lvm/devices/system.devices
exists, then rename it to system.devices-unused.YYYYMMDD.HHMMSS.
This prevents an old, incorrect system.devices from being used in
the future if lvm.conf is changed to use_devicesfile=1.
Create backup copies of system.devices in /etc/lvm/devices/backup
named system.devices-YYYYMMDD.HHMMSS.NNNN. NNNN is the version
counter from the file.
Each time that an lvm command writes a new system.devices file,
it also writes the same file in the backup directory.
A new comment line is added to system.devices with HASH=<num>
where <num> is a crc calculated from the uncommented lines in
system.devices. This lets lvm detect if the file has been
modified outside of lvm itself.
If system.devices is edited directly, the next time a command
reads the file, the crc will not match the HASH value. The
command will then rewrite system.devices with the correct HASH
value, and create a backup reflecting the edits.
A default limit of 50 backup files is kept, configurable by
lvm.conf devicesfile_backup_limit (set to 0 to disable backups.)
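For example, backups end up as files like
/etc/lvm/devices/backup/system.devices-20240115.093012.0042
(the date/version values here are only illustrative), and the limit can be
tuned in lvm.conf, assuming the setting lives in the devices section:
devices {
    devicesfile_backup_limit = 0
}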
If LVM LVs happen to contain PVs, they are passed to the lvm udev
rule for processing, where they should be ignored. PVs on LVs
most likely belong to VM images, and don't belong to the host
which sees the LV. It's unsafe for the host to use these PVs.
Without this change, the LV would be processed by pvscan which
would generally ignore it, either because of the devices file,
or because of the default lvm policy to not consider LVs as
potential PVs. This change makes the udev rule consistent
with that policy and avoids the unnecessary system messages
produced when pvscan ignores the LV.
Since the reason for choosing the appmachineid over the
direct use of machineid is not easily found, I extended the help text to
clarify this a bit.
The function to recalculate chunk_size according to dev hints needs to be
used after chunk_size has been set in the thin pool segment - correct
this ordering mistake introduced in a previous refactoring commit.
The previous patch that introduced support for a thinpool with vdo
did not correctly handle the header size - as this part is not fully usable
yet. We are going to try to use 0, but the current state of the code is not
yet compliant with this logic, so keep vdo_header_size during conversion
and also correctly pass virtual_extents through to vdo formatting.
Add code to handle creation of a thin-pool with a VDO data backend,
which can be seen as a compressed, deduplicated thin-pool.
To avoid the need to change too many internal APIs, pass the conversion
parameters for creating the thin-pool data volume via cmd_context.
Introduce struct vdo_convert_params {} to pass in all the parameters
needed for the conversion of an LV to a vdopool + vdo LV.
The function convert_vdo_lv() is also able to create a new LV and swap
segments, so the passed-in LV can later be used for further
conversion - this refactoring makes it ready for more enhanced
usage.
Introduce vdo_convert_params and use vdo_params from this structure
also with lvcreate_params.
Later we will use this for the conversion of a thin-pool data volume to VDO.
If a RaidLV mapping needs to be refreshed as a result of temporarily failed
and recovered RAID leg devices (pairs), caused by writes to the LV during the
failure, the requirement is reported by the volume health character 'r' in
position 9 of the LV's attribute field (see 'man lvs' about additional
volume health characters).
As this character can be overlooked, this patch adds messages to the top
of the lvs command output informing the user explicitly about the fact.
Fix typos in previous commit 3589e515d.
Both commands default [raid_](min|max)recoveryrate to 0 but ensure
min_recovery_rate is not larger than max_recoveryrate. This results
in command failure without requesting the user to also define
max_recovery_rate >= min_recovery_rate.
Fix both commands by defining max_recovery_rate = min_recoveryrate
in case "lvcreate/lvchange --minrecoveryrate Size ..." requests a
larger value than current maxrecoveryrate without also giving option
"--maxrecoveryrate Size ..." with a size greater or equal than min.
The first lv_attr flag is 'i' or 'I' for a raid image.
(i: raid image, I: out of sync raid image)
For integrity raid images (_iorig), the flag was not being set.
pvs -A|--allpvs
Show PVs that would otherwise be excluded by the devices file.
pvscan -A|--allpvs
Show PVs that would otherwise be excluded by the devices file.
For those devices that are included by the devices file,
their device ID is displayed in place of the usual "lvm2"
format and size.
(pvs -a|--all is unchanged, and shows devices not formatted as PVs.)
A pvid string read from system.devices could be less
than ID_LEN, since system.devices fields can be edited.
Ensure the pvid buffer is ID_LEN+1 even if the string
read from the file is shorter.
Include info in the temp file to confirm that it should be used.
The temp file is meant to suppress repeated, identical searches
for the same PVIDs on the same set of devices. Write to the file
a count and hash of the missing PVIDs and a count and hash of the
devices to search. A subsequent command will ignore and remove
the temp file if any of these values differ. We don't want to
suppress a search if a change has occurred, and a missing PV could
be found by scanning devices.
Problematic scenario:
. the device for a PV has no wwid, so it's identified in system.devices
with IDTYPE=devname IDNAME=/dev/foo
. user adds/enables a wwid for the device
. on reboot, the device name changes, e.g. now /dev/bar
. the code that searches for the new device name includes an
optimization to skip looking on devs that have a wwid, on
the basis that a device with a wwid won't have IDTYPE=devname
. this optimization causes lvm to not look for the PV on /dev/bar
since that device now has a wwid, so the PV is not found
. the optimization is enabled by search_for_devnames="auto"
. change the default to search_for_devnames="all" which does not
use the problematic optimization
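The corresponding lvm.conf change is simply (assuming the setting sits in the
devices section, as other device-related options do):
devices {
    search_for_devnames = "all"
}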
- add new comparison between old and new entries, and use this
as the basis for new dedicated output for check and update
- add new --refresh option to search for missing PVIDs on all
devices, and possibly update the device ID
- internally, only use the term "refresh" for cases where a
new device ID may be found and assigned for a missing PVID
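A sketch of how this might be driven from the command line (the exact option
combination follows the description above and may differ in detail):
lvmdevices --check
lvmdevices --update --refresh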
During vdo testing with smaller block devices the test needs to determine
which PVs in a VG are unused. This was leveraging LvCommon.Devices, but
that isn't populated when an LV resides on an LV pool instead of a PV.
This is a known limitation of the code at this time. For now we will walk
all the PVs in the VG looking for ones that don't have any associated LVs
and use them instead.
Signed-off-by: Tony Asleson <tasleson@redhat.com>
When getting the raid status from some older kernels, we may get an 'a'
instead of 'A' for a leg during initial synchronization.
This may prohibit removal of a newly synchronized leg until synchronization
is finished.
So in this case change the status to look like it is being reported
from a newer kernel version.
Incorrectly matching a dev to a devname id (due to changing devnames)
before matching the dev to a proper device id, can result in the
dev not being matched to the real id.
When reporting in JSON format, we need to be able to find the 'last
displayed row', not just 'last row' as we did before. This is used
to decide whether to put the JSON_SEPARATOR (the ',' character)
between the lines when reporting in JSON format.
This is mainly important in case we use a combination of JSON format
and a report marked with DM_REPORT_OUTPUT_MULTIPLE_TIMES flag.
Such report may be reused several times with different selection
criteria each time. In that case, the report always contains all lines
in memory, even though some of them do not need to pass the selection
criteria that are currently used.
Without the DM_REPORT_OUTPUT_MULTIPLE_TIMES flag, the report only contains
the lines that have passed the selection criteria, so this wasn't
an issue in that case.
Fix suggested by Lars Ellenberg and reported here: https://github.com/lvmteam/lvm2/issues/130
It looks like there is some kernel bug/limitation
that may cause invalid table load processing:
dmsetup load LVMTEST-LV1
device-mapper: reload ioctl on LVMTEST-LV1 failed: Invalid argument
md/raid:mdX: reshape_position too early for auto-recovery - aborting.
md: pers->run() failed ...
device-mapper: table: 253:38: raid: Failed to run raid array (-EINVAL)
device-mapper: ioctl: error adding target to table
However, ATM there is not much we can do other than make the delays bigger.
TODO: fixing md core...
With commit d7e922480e,
lvconvert -m may fail if we try to remove the 1st leg that
is out-of-sync while the other leg is in-sync.
This hot fix allows proceeding with such a down conversion.
Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>
Fixes commit 63b469c160
"device_id: fix hints with device ids"
It's not correct for internal filtering to be a factor
in validating the set of devs that are the basis for hints.
Instead, look for inconsistencies between a hint entry and
the corresponding devices file entry.
Extend the test a bit further so we can keep the resize logic
working similarly well for older and newer systems.
The test uses the new 'aux have_fsinfo' function to recognize the compiled
version of lvm.
Apply the same logic for 'lvreduce' which exists for newer
systems (compiled with HAVE_BLKID_SUBLKS_FSINFO)
also to older systems, for one very common practical case where
the active LV does not have any blkid-known signature/filesystem.
The new variant recognized this situation and allowed proceeding
without requesting a prompt, while the older variant always
requested a confirmation prompt.
With this patch the command now works equally for both variants
for an 'active LV' without a signature and allows reducing the LV
without prompting.
Simplify configuration when enabling options:
- --enable-dmeventd automatically enables --enable-cmdlib.
- --enable-dbus-service automatically enables --enable-notify-dbus.
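For example (a sketch):
./configure --enable-dmeventd
./configure --enable-dbus-service
now pull in --enable-cmdlib and --enable-notify-dbus respectively, without
listing them explicitly.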
Fix the linux/fiemap.h check for dmfilemapd.
Before checking the seg_type of the first area, check that
some area exists.
Since we now support error and zero segtypes, these do not have
any PV area present.
When a new app links with the current dm_task_run() that propagates the
ioctl() errno, ensure it will not be usable with an older version without
this supported feature.
Introduce a couple of configure options:
--without-blkid
--without-systemd
--without-udev
These can be useful for builds with --disable-shared,
where those libraries are often distributed without
their static libraries.
However, such compiled static binaries then have limited usage.
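A static build might then be configured as (a sketch):
./configure --disable-shared --without-blkid --without-systemd --without-udev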
Since LVM 2.02 it appears that compiling LVM on a platform which
lacks a shared library linker/loader will fail to produce any
binaries (dmsetup, lvm2, etc).
Add support for the standard --disable-shared flag, and
use it to disable attempts at building shared libraries.
Modified-by: zkabelac
Fix a bug in _stats_set_aux() that causes bogus data to appear
in the 'userdata' field of stats reports when previously grouped
regions are ungrouped:
/var/tmp/File With Spaces: Created new group with 1 region(s) as group ID 0.
Removed group ID 0 on fedora-root
Name GrpID RgID ObjType RgStart RgSize #Areas ArSize ProgID UserData
fedora-root - 0 region 6.39g 100.00m 1 100.00m dmstats #-
^^
This is the aux_data separator character followed by empty user data.
The _stats_set_aux() function should only emit the separator if
there is a valid group descriptor for the region.
Creating a group with a name that contains whitespace causes an error in
the @stats_set_aux message:
device-mapper: message ioctl on (253:0) failed: Invalid argument
Could not create regions from file /var/tmp/File With Spaces.
Fix this by quoting the group alias and backslash escaping any
embedded space characters.
Refactor the function that parses regions from @stats_list data to
separate the parsing of variable string data from the fixed parts
of the @stats_list response.
Fix a double free in the error path from _stats_create_group() by
clearing the group struct embedded in the dm_stats handle before
returning:
device-mapper: message ioctl on (253:0) failed: Invalid argument
Could not create regions from file /var/tmp/File With Spaces.
free(): double free detected in tcache 2
Aborted (core dumped)
When setting up dm-verity devices with signed root hashes it is very
useful to have a recognizable error code when a key is not present in
the kernel keyring. Turns out the kernel actually returns ENOKEY in that
case, but this gets lost in libdevmapper.
Fix this in _create_and_load_v4() by copying the error code from
the sub-tasks' ioctl back to the main task field on failure.
This is not enough to make libcryptsetup actually propagate the ENOKEY
correctly, that also needs a patch to libcryptsetup, but this is part of
the puzzle.
Fix some interactions between device IDs and hints. Hints
may limit the scanned devices which should not always trigger
a search for the PVs that were intentionally not scanned.
Hints should also be invalidated if they contain a device
that's become excluded by an internal filter such as the
device_id filter.
Search for a PV on other devices if it's a devname entry
and the name doesn't exist on the system. This restores
code that should not have been removed in commit 1901a47df
"device_id: fix conditions for device_ids_refresh"
Throttling the mirror device is becoming useless with faster CPUs,
as far too much data can be transferred before throttling steps in.
So prefer using dm-delay for the test and keep throttling as a fallback.
Fix commit 847f1dd99c
"device_id: rewrite validation of devname entries"
which began calling device_ids_refresh() in cases where it
was unnecessary, leading to extra PV searches and warnings.
Specifically, a command like "lvs <vg>" would use the hints
file to scan only devices for the named VG. This means that
scanning other PVs would be skipped, and device IDs of those
PVs could not be validated because there are no PVID values
to verify. This missing info would trigger warning messages,
and would cause device_ids_refresh to
search for the PVs that had been intentionally skipped.
How the command is supposed to behave depends on compile-time
configuration, namely whether the binary is compiled with
HAVE_BLKID_SUBLKS_FSINFO.
This makes it somewhat complicated to recognize which
behavior is expected.
Currently we can recognize it by checking the error output
of some 'random' lvresize command and seeing whether --fs checksize is
actually recognized or rejected. If this changes - the test needs
to be updated.
After umount we may race with a system udevd rule - so
just retry once more after a 1s sleep - that should be
enough - otherwise we would need some loop here...
Multi-line echo commands are problematic across a variety of bash versions
and may produce shorter results.
Convert to a stable heredoc string with tab-skipping <<- for better
formatting.
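A minimal sketch of the <<- idea (not the exact test code); the body lines
must be indented with literal tab characters, which <<- strips on output:
outfile=out.txt                 # hypothetical target file
cat <<-EOF > "$outfile"
	expected line one
	expected line two
EOF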
Since we now push more data into the journal, the parser reading this file
for --continue mode needs to be adapted.
Also properly pad batch mode with '.' up to the maximum test name length.
This is mainly useful in internal testing - but keep sysfs dir also
passed to filter.
Also drop the use of a static variable within the sysfs filter and base
the whole configuration on values captured at creation time.
Restore the fsync() call for more accurate tracking by buildbot.
Try a different, rather tricky way using static_cast to reuse the
already opened FD instead of a separate open(), fsync(), close().
It's pretty strange there is no way to enforce fsync() for
C++ iostreams; flush() is actually not equivalent.
Add a Timespec class to increase time resolution to milliseconds
(can switch to microseconds if ever needed).
Use more const and const_iterators and pass by reference.
Output rusage also to the list result file.
Reduce inlining of C++ constructors.
If the system changes, locate PVs that appear on different devices,
and update the device IDs in the devices file. A system change is
detected by saving the DMI product_uuid or hostname in the devices
file, and comparing it to the current system value. If a root PV
is restored or copied to a new system with different devices, then
the product_uuid or hostname should change, and trigger lvm to
locate PVIDs from system.devices on new devices.
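For reference, the identity values being compared come from standard system
locations; a quick manual check (paths shown only as an illustration):
cat /sys/class/dmi/id/product_uuid   # DMI product_uuid (usually needs root)
hostname                             # the alternative identity mentioned above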
When the exit-on file is present in the system and a term/break signal is
caught, dmeventd no longer refuses to exit.
For a correct shutdown there should ideally be an unmonitoring call,
however in some cases it's very hard to implement this correct procedure.
With this 'exit on' file dmeventd at least avoids 'blocking' the shutdown
before systemd kills it with -9 anyway, possibly even in some unwanted
state of internal dmeventd processing (i.e. in the middle of some lvm
command processing).
To quickly get info about some internal dmeventd status,
implement 'dmeventd -i' support.
The reported messages are 'raw' internal information mainly
useful to developers.
Instead of just exiting in the middle of monitoring,
unregister all monitored devices first and then exit.
To speed up this path, also send an internal SIGINT when a thread
unregisters itself, to wake up the main sleeping loop.
Make a local copy of the 'idx' string to avoid
overlapping during the rebuild of name.
This fixes cases where users specified raid
component LVs for moving.
Reported-by: kotarou3@github.com
No write outside of $LVM_TEST_DIR (removed /test access).
Use 'aux prepare_scsi_debug_dev' for automated scsi_debug handling
Properly use "" around shell vars.
Smarter read of PVS values.
Relax requirement to only work with real /dev dir.
Handle the case of device teardown where the first pass
finds only a single, but still open, device for removal.
In such a case we want to go through
udev_wait at least once and retry the removal.
TODO: maybe a sleep .1 might be usable as well with udev_wait
Instead of relying on 'pvs' output - check the system
configuration directly and use lvmdevices according to the use_devicesfile setting.
Also drop the use of '--fs ignore' for filesystem extension for better
backward compatibility with older lvm versions.
Shuffle code a bit so the '--no-snapshot' path does not execute
a few unnecessary commands.
Work also with devices that may have ':' inside their generated
/dev/disk/by-id
Ensure there is no race with systems' auto activation while using
the snapshot for conversion.
Update system's vdoconf.yml after the use of snapshot for conversion.
Skip the unnecessary prompt for 'convert' while using a snapshot and query only
for the final snapshot merge.
Prohibit conversion for a device with the PV header.
Enhance 'trap' protection for more signals.
Improve clean() recovery path.
Replace bash 'test' command with [].
Correct some output messages to print $TOOL.
Support also options without '-' in the middle i.e. --nosnapshot.
For shellcheck predefine all variables extracted from vdoconf.yml.
Since lvmdbusd testing tends to do its own logging,
try for now to disable the very generic logging mechanism
of the test suite and see the result.
Some lvmdbusd tests seem to rely on log/file logic
which is modified with the use of these shell vars.
Require VDO version 6.2.3.
Skip the part of the test that needs vdo wrapper and 2 different
versions of vdoprepareforlvm to prepare shifted VDO header
at the 2MiB offset.
Hunt also for devices with LVMTEST prefix in UUID.
Call teardown_devs_prefixed - so if they hold RAM or SCSI
they are closed before trying to remove kernel modules.
When creating a thin pool or cache pool, an LV is allocated
for metadata, and for such an allocation the user should be able to
specify a list of PVs on the cmdline.
Also fix the unused passed list of PVs for thick to thin conversion,
where the code was using the whole PV set from a VG (but since it
was not enabled on the cmdline, users could not hit this issue).
Also remove unneeded initialization of use_pvh.
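As an illustrative example of the now-honored PV list (device and VG names
are placeholders), allocation is restricted to the listed PVs:
lvcreate --type thin-pool -L 10G -n pool vg /dev/sdb /dev/sdc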
When the import is used on a system that uses a devices file,
the final activation was impossible when the converted
volume was not present in the devices file.
Now the volume is added automatically in such a case.
Also add some more debugging output to the script.
TODO: Consider enhancing lvconvert with extending the devices file.
VDO may use a specific path form for some devices,
i.e. for /dev/sda it could be /dev/disk/by-id/scsi-xxxxx.
This used to not be a problem before lvm2 started to use a snapshot,
but now the matching device path needs to be replaced.
So switch to the path naming used in the vdoconf.yml file.
Add '--headings none|abbrev|full|0|1|2' command line option to select
the heading type.
none|0 - no headings
abbrev|1 - column name abbreviations
full|2 - full column names (column names are equal to exact names
that -o|--options also accepts to set report output)
Reuse existing report/headings config setting to make it possible to
change the type of headings to display:
0 - no headings
1 - column name abbreviations (default and original functionality)
2 - full column names (column names are equal to exact names that
-o|--options also accepts to set report output)
Also, add '--headings none|abbrev|full|0|1|2' command line option
so we are able to select the heading type for each LVM reporting
command directly.
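For example (option values as documented above):
lvs --headings full vg     # full column names in the report header
pvs --headings none        # suppress the heading line entirely
lvs --headings abbrev vg   # default abbreviated column names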
Add new DM_REPORT_OUTPUT_FIELD_IDS_IN_HEADINGS report output flag.
If enabled, column IDs are reported instead of column names in report
headings.
The 'column IDs' are IDs as found in 'const struct dm_report_field_type *fields'
array that is passed during report initialization (that is, a call to
dm_report_init/dm_report_init_with_selection). In this case, the 'id'
dm_report_field_type member is used instead of the 'heading' member.
While creating a new LV for a pool volume, use a name from the 'pool_metadata%d'
naming sequence. This LV is later renamed to the pool's _tmeta/_cmeta, but if
there is any error in the middle, we may eventually leave some 'volume' behind.
With this name it can be slightly more obvious how it got there,
and also when we handle the _pmspare name we get a slightly more predictable
name used there for it.
However for standard usage this commit shall have no visible impact as the
name is used temporarily just for the cleaning LV.
Set the lock_args string in addition to doing initialization.
lvconvert calls lockd_init_lv_args() directly, skipping
the normal lockd_init_lv() which usually sets lock_args.
lvmlockd locking for swapmetadata adjusted for commit
ac36153e99 lvconvert: preserve UUID for swapped metadata
Now that the LV uuid is swapped between LVs, the lvmlockd lock can
simply be moved between them, and the same lock can continue to be
used for the LV outside of the pool.
Once lvm2 repairs a pool's metadata LV and preserves the original metadata LV
with unmodified metadata, use a newly created UUID for the new _pmspare for
such an LV in the VG, and actually preserve the UUID of this hidden _pmspare
(if it exists).
The lockd lock needs to be freed for the LV that is becoming
the new metadata LV, and a new lockd lock needs to be created
for the old metadata LV that is becoming an independent LV.
Fixes b3e45219c2
We incorrectly marked pv_major and pv_minor fields as being of string
type, even though the values were already correctly handled as integers
internally. This confused -S|--select that tried to compare string
values instead of integers.
Reported here: https://github.com/lvmteam/lvm2/issues/122
The previous fix was invalid (after some in-place shuffling):
the 'dd' copy output goes to 'stderr' so we need to catch all output.
Grep needs to check the output of the tee tool.
Ensure 'C' locales are in use with 'dd'.
Use the cachepool name when creating the name for the metadata backup LV
(so we do not generate 2 'sequences' of metadata filenames).
Move path preparation before handling _pmspare.
Also drop the extra call to sync_local_dev_names() as it is
already in sync via the call to exec_cmd().
When repairing a thinpool or cachepool, lvm2 leaves a backup of the original
metadata volume. To avoid potential damage to that data, mark such a
volume as 'read-only' and also allow the user to use the --setactivationskip
option for this volume.
TODO: likely a better default would be to skip automatically, but
that might need some more thinking about recovery reporting docs.
Replace the use of internal /dev/mapper names with the use of
public LV names /dev/vg/lv for use with repair tools.
For this, make the activation of the _pmspare LV be handled as
a component activation with a public name.
Metadata is already automatically activated this way (as read-only).
So if any 'error' happens, we leave public LVs in the
system.
Pass more args with some 'aux' commands:
wipefs_a, enable_dev, disable_dev
(so it's a bit more efficient using single udev_wait call).
Use prepare_vg instead of prepare_pvs.
Keep using test directory for created files.
Trap errors and remove brd in this case.
Use some shell builtins to reduce fork count.
Use "$VAR".
Run 'pvs' with a devlist (so it is not accessing other system devices).
New dmpd tools return the version string in a different format,
so update the code to understand both variants.
Also hide some shell var settings in local functions.
lvresize --usepolicy requires resized LVs to be active.
(So it's not only required for a shared VG.)
The test for an active pool needs to use lv_info to query the 'layer',
otherwise the pool is considered inactive if it was not activated
explicitly - thus 'implicit' activation with VDO or ThinLV was
not managed by the --usepolicy option.
Some linkers do need libdevmapper-event-lvm2.so.2.03,
so add also this symlink to the tests /lib dir.
This removes the need for the previously complex LD_LIBRARY_PATH
setting and now works with a much shorter list.
The last commit c38b668fc3 was pushed
with a typo: 'scanf' instead of the 'sscanf' needed for buffer reading.
Interestingly this caused scanning from the 'stdin' descriptor and
thus failures in lvm shell usage.
The code in init_log_file relies on the process name (COMM) to not
contain whitespaces. This change fixes it by looking up the right-most
parenthesis to safely jump past COMM.
For more context see:
https://www.openwall.com/lists/oss-security/2022/12/21/6
Code is only used with testing, so it should have no impact on regular
users.
Reported-by: Hugues Evrard <hevrard@google.com>
With 3596558861 a more fine-grained description was introduced.
However 'disabled' might actually be more confusing than an empty field,
so keep only the info about 'not enabled', i.e. dmeventd is not allowed
to monitor an LV which otherwise could be monitored.
Update the pool conversion function to also handle conversion of a
thick LV to a thin LV by moving the thick LV into the thin pool data LV
and creating a fully provisioned thin LV on top of this volume.
Rework the existing conversion to use insert_layer_for_lv so
the uuid is now kept with the thin-pool - this should however not
really matter as we are doing a full deactivation & activation cycle.
With conversion to a thin LV the user can use the same set of arguments
to set the chunk size.
TODO: add some smart code to decide the best values for chunk sizes.
For proper functionality of insert_layer_for_lv we need to
move more bits to the layered LV.
Add some missing new types and correct the usage in the caller,
so the new LV type is set after the movement.
Validate the cache origin before the prompt.
Also add some rules to the command description file.
TODO:
more validation is needed also for lvcreate,
more complex rules with "OR" seem to be needed.
Enable support for caching thin/error/zero virtual targets.
Users can now select whether they want to cache the whole thin-pool,
or an individual thin volume out of the whole thin-pool.
Support for using zero and error LVs as the data volume for a
cachepool is possibly useful for benchmarking; not much
can be expected from such a setup.
Avoid activation when going to skip zeroing of 'error' segtype
(so it's not erroring out).
Also skip zeroing for 'zero' segtype LV (already being zero).
When lvm2 calculates the maximal usable COW size and crops the user
requested size to this value, don't return the error result from
the 'lvextend' operation.
We already apply the same logic when resizing thin-pool beyond
the supported maximal size.
FIXME: The return code error logic here is somewhat fuzzy.
Output from the vdo manager may actually be indented with spaces,
so trim leading and trailing spaces.
Also add support for a verbosity flag for the vdo conversion tool.
This vdo parameter existed in the early stage of integrating vdo into lvm2,
but later it was removed from the vdoformat tool - so any
non-zero value would actually cause an error on lvcreate.
The option was not stored on disk in lvm2 metadata.
Remove this vdo parameter from lvm2 sources.
(Although this vdo parameter will still be accepted on the cmdline through
the --vdosettings option, it will be ignored.)
Fix the testing logic.
With a raid5 device, the layout of files on a filesystem does not define
which leg will actually contain the block we try to damage.
So the test will now figure out which device has the damaged block.
Use the 'check' functionality and also drop the unneeded random write as
we can now identify it easily another way.
Compilation may configure its own /etc path, so ensure the test
has a defined location for access to this dir during testing.
Also prepare a machine_id file (with the use of the uuidgen tool)
for the test.
Ensure the volume after conversion has the properly aligned size to the
volume group extent size. This would be visible when using virtual size,
that cannot be divided by extent size.
Before the user had to manually adjust the size after conversion to get
access to all data stored on VDO volume.
The command "lvcreate --type thin --snapshot ..." to create a thin
snapshot would fail.
commit d651b340e6 removed the optional
"--type thin" from the command definition "lvcreate --snapshot LV_thin",
and added --type thin as AUTOTYPE. This was correct and should not have
changed anything if all the command defs were correct, but it broke
the "lvcreate --type thin --snapshot" case. It reveals a problem in a
different command definition: "lvcreate --type thin LV_thin" that was
missing --snapshot in its OO list.
If we want to have a const structure - use a const pointer instead;
making individual members const only makes them assignable
at construction time, and we already violate this logic since
we memcpy into the structure - so drop these unnecessary consts...
Add support for usage of 'dm-snapshot' for doing whole device conversion
in a snapshot which could be merged once the whole conversion has been
made.
This helps with cases where there is an unexpected failure in the
middle of the conversion process and the user can continue using the
original device until the problem in the conversion is fixed.
The import tool now uses the 'truncate' tool to create a small backend file (20M) for a loop device
to hold the COW exception store.
The option --vdo-config has been added to allow specifying the location of the vdo
configuration file.
The option --no-snapshot allows using the import tool without creating a
snapshot; however recovery after a finished VDO conversion and unfinished
lvm2 conversion is then very difficult.
The option --uuid-prefix allows specifying the DM UUID prefix for the snapshot
device.
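Assuming the import script is invoked as lvm_import_vdo (the entry above
only calls it 'the import tool', so take the name as an assumption), a run
with the new options might look like:
lvm_import_vdo --vdo-config /etc/vdoconf.yml --uuid-prefix myprefix /dev/sdb
lvm_import_vdo --no-snapshot /dev/sdb   # riskier: no snapshot to fall back on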
Use read with -r.
Improve the metadata parser to handle the volume_geometry bio_offset,
which needs to be subtracted from the 'region' start_block when present.
This bio_offset block is non-zero i.e. with converted VDO volumes.
Also fix some converted structure values (but they are not in use).
Fix in the code that matches devices to system.devices entries when
the devices have the same serial number. A non-PV device in
system.devices has no pvid value, and the code was segfaulting
when checking the null pvid value.
If we are executing 'report_for_selection' to do an internal report
just for the selection itself (not for display), we can call it more
than once. In that case, we are reusing the same selection handle
(e.g. inside 'process_each_lv_in_vg').
The selection handle has 'report_type' field which is a union of all
report types needed for the report based on selection fields we actually
use.
The 'report_type' is further clarified based on checks and rules inside
'_get_final_report_type' which 'report_for_selection' calls. Then, this
final report type unambiguously identifies proper branch to take in
'report_all_in_{pv,vg,lv}' that is called next.
If the 'report_for_selection' is called more than once with the same
selection handle, we need to make sure that we always restore the report
type to its initial value, so all the rules inside 'report_for_selection'
are applied correctly next time.
This patch fixes the missing restoration of the 'report_type' value in
'selection_handle' that is reused for recurring 'report_for_selection'
calls.
An example scenario where this failed was with selecting an LV for
removal with "lvremove --select" while using a field in the selection
that required extra DM info or DM status call for the LV (that is,
"Logical Volume Device Info Fields" and "Logical Volume Device Status Fields"
as visible in 'lvs -S help').
In previous lvm versions, trailing spaces at the end of a t10 wwid would
be replaced with underscores, so the IDNAME string in system.devices
would look something like "t10.123_". Current versions of lvm ignore
trailing spaces in a t10 wwid, so the IDNAME string used would be
"t10.123". The different values would cause lvm to not recognize a
device in system.devices with the trailing _. Fix this by ignoring
trailing underscores in the IDNAME string from system.devices.
The recent fix 05c2b10c5d ensures that raid LV images are not
using the same devices. This was happening in the lvextend commands
used by this test, so fix the test to use more devices to ensure
redundancy.
vgrename does not support -S|--select, so do not provide a hint about
using it. Instead, provide a hint about using VG uuid directly.
❯ vgs
WARNING: VG name vg1 is used by VGs DXjcSK-gWfu-5gLh-9Kbg-sG49-dtRr-GqXzGL and MVMfyM-sjOa-M2xV-AT4Y-JddR-h4SP-UO5Ttk.
Fix duplicate VG names with vgrename uuid, a device filter, or system IDs.
VG #PV #LV #SN Attr VSize VFree
vg1 1 0 0 wz--n- 124.00m 124.00m
vg1 1 0 0 wz--n- 124.00m 124.00m
(vgrename does not support -S|--select)
❯ vgrename vg1 vg2
WARNING: VG name vg1 is used by VGs DXjcSK-gWfu-5gLh-9Kbg-sG49-dtRr-GqXzGL and MVMfyM-sjOa-M2xV-AT4Y-JddR-h4SP-UO5Ttk.
Fix duplicate VG names with vgrename uuid, a device filter, or system IDs.
Multiple VGs found with the same name: skipping vg1
Use VG uuid in place of the VG name.
(vgchange does support -S|--select)
❯ vgchange --addtag a vg1
WARNING: VG name vg1 is used by VGs DXjcSK-gWfu-5gLh-9Kbg-sG49-dtRr-GqXzGL and MVMfyM-sjOa-M2xV-AT4Y-JddR-h4SP-UO5Ttk.
Fix duplicate VG names with vgrename uuid, a device filter, or system IDs.
Multiple VGs found with the same name: skipping vg1
Use --select vg_uuid=<uuid> in place of the VG name.
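Following the hint, the rename itself can be done with the VG UUID taken
from the warning (UUID reused from the example output above):
vgrename DXjcSK-gWfu-5gLh-9Kbg-sG49-dtRr-GqXzGL vg2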
In case of e.g. 3 PVs, creating or extending a RaidLV causes SubLV
collocation, thus putting segments of different rimage (and potentially
larger rmeta) SubLVs onto the same PV. For redundant RaidLVs this'll
compromise redundancy. Fix by detecting such bogus allocation on
lvcreate/lvextend and rejecting the request.
Check for a running (possibly leftover) lvmdbusd in the
system - as this daemon may interfere with this test, since in this
case both would be operating on the same 'live' data in /run/lvm.
Make the test faster by aggregating sets of operations to work on a single
created filesystem yet checking all the variants of extension and reduction.
Split the 'xfs' part into a separate test and convert it to use the
minimal size of 300M nowadays required by mkfs.xfs.
lvreduce uses _lvseg_get_stripes() which was unable to get raid stripe
info with an integrity layer present. This caused lvreduce on a
raid+integrity LV to fail prematurely when checking stripe parameters.
An unhelpful error message about stripe size would be printed.
When lvmcache info is dropped because it's an md component,
then the lvmcache vginfo can also be dropped, but the list
iterator was still using the list head in vginfo, so break
from the loop earlier to avoid it.
The previous commit meant pvmoves could actually be started in an unexpected
order - so make sure we are not starting a new pvmove in the same VG until
the previous one has started.
Keep backups within test_dir instead of dev_dir (so it doesn't
leak large files there if the tests are run over real /dev dir).
Move restoring of dm_mirror throttling before test_dir is removed.
Not quite sure if this helps anything, some of testing
machines can't reliably remove scsi_debug, reporting
they are in use - but it's not easily reproducible...
aux wait_pvmove_lv_ready() now handles multiple pvmove LVs
in one go - which allows a bit faster checking - although
at some point we may need to switch to using delayed devs
since mirror throttling seems to no longer be working well,
as CPUs are getting so fast that most of the data is already
pvmoved before throttling has any chance to do something...
When a 'brd' device can be removed (it is unused, AKA not opened),
remove such a device and use it again for testing.
Let's assume the user has no unused brd device left in the system.
When the 'tests' sometimes fail to clean up devices, with this
change further cleanup from some next test may eventually release
a brd device and make it available for testing.
There is no easy way to detect whether a device supports zeroing,
and the kernel also zeroes the device when it's not directly supported,
but with an extra message:
operation not supported error, dev X, sector Y op 0x9:(WRITE_ZEROES)...
So to avoid generating such a message with every 'lvcreate', use a
standard write of a zeroed page for zeroing of up to 8K.
(Maybe we can go with even larger sizes.)
Convert the test to use only ext4 instead of the 300M-demanding XFS.
Shorten 'B' files to 4K and use a 4K stripe size with >raid1 arrays
so we do not risk spreading the file across a stripe.
Also use the easier 'aux corrupt_dev()' method to introduce a bit of
corruption into a block device with integrity.
TODO: shorten _wait_recalc (shouldn't be needed).
Add a function to corrupt some bytes in a given file path representing
a device. The 1st pattern is replaced just once with the 2nd pattern.
Usable to simulate some bit corruption for integrity devices.
Convert the tests to use a single skeleton and keep only the differing
pieces in separate tests.
Lower raid disk usage to a smaller size and switch to ext4
as a far less demanding filesystem.
Instead of using the size of the 'empty header' in vdopool, use a fixed
size of 4K for the 'wrapping' vdo-pool device.
This fixes the issue when a user tried to activate a vdo-pool after
a conversion from a vdo managed device with 'vgchange -ay' - where
this command activated all LVs including the 'vdo-pool' wrapping device,
but this converted pool uses a 0-length header.
This 4k size should usually prevent other tools like 'blkid' from
recognizing such a device as anything - so it shouldn't cause any problems
with duplicate identification of devices.
Correct the output from the configure script when
printing the locking dir.
Shuffle dmfilemapd close to the dmeventd option and tracing.
A couple of indent changes.
syscall 186 is specific to x86 64bit. As this differs from arch
to arch and, within the same arch, between different arch sizes, we will
only grab the thread ID using built-in python support if it is available.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2166931
When the daemon is starting we do an initial fetch of lvm state. If we
happened to get some type of failure with lvm during this time we would
exit. During error injection testing this happened enough that
the unit tests were unable to finish. Add retries to ensure we can get
started during error injection testing.
Previously we were injecting a missing key in the lv, vg, and pv.
Given the order of processing in lvmdbusd, this prevented us from
exercising all the error paths. Change to returning just 1 instead.
When we sort the LVs, we can stumble on a missing key, protect against
this as well.
Seen in error injection testing:
Traceback (most recent call last):
File "/home/tasleson/projects/lvm2/daemons/lvmdbusd/fetch.py", line 198, in update_thread
num_changes = load(*_load_args(queued_requests))
File "/home/tasleson/projects/lvm2/daemons/lvmdbusd/fetch.py", line 83, in load
rc = MThreadRunner(_main_thread_load, refresh, emit_signal).done()
File "/home/tasleson/projects/lvm2/daemons/lvmdbusd/utils.py", line 726, in done
raise self.exception
File "/home/tasleson/projects/lvm2/daemons/lvmdbusd/utils.py", line 732, in _run
self.rc = self.f(*self.args)
File "/home/tasleson/projects/lvm2/daemons/lvmdbusd/fetch.py", line 40, in _main_thread_load
(lv_changes, remove) = load_lvs(
File "/home/tasleson/projects/lvm2/daemons/lvmdbusd/lv.py", line 148, in load_lvs
return common(
File "/home/tasleson/projects/lvm2/daemons/lvmdbusd/loader.py", line 37, in common
objects = retrieve(search_keys, cache_refresh=False)
File "/home/tasleson/projects/lvm2/daemons/lvmdbusd/lv.py", line 72, in lvs_state_retrieve
lvs = sorted(cfg.db.fetch_lvs(selection), key=get_key)
File "/home/tasleson/projects/lvm2/daemons/lvmdbusd/lv.py", line 35, in get_key
pool = i['pool_lv']
KeyError: 'pool_lv'
Remove old code that became incorrect at some point.
It's probably a fragment of an old condition that was left
behind because it wasn't understood. We don't want to drop
the MISSING_PV flag just because the PV has no mda in use.
The device that was missing may have stale data, so the user
needs to decide if the device should be removed or restored.
Different sequences of steps that could be used to handle raid LVs
after VG takeover (what would happen in cluster failover) combined
with the loss of a disk.
When we are using -S|--select for non-reporting tools while using command log
reporting (log/report_command_log=1 setting), we need to create an internal
processing handle to handle the selection itself. In this case, the internal
processing handle to execute the selection (to process the -S|--select) has
a parent handle (that is processing the actual non-reporting command).
When this parent handle exists, we can't destroy the command log report
in destroy_processing_handle as there's still the parent processing to
finish. The parent processing may still generate logs which need to be
reported in the command log report. If the command log report was
destroyed prematurely together with destroying the internal processing
handle for -S|--select, then any subsequent log request from processing
the actual command (and hence an attempt to access the command log report)
ended up with a segfault.
See also: https://bugzilla.redhat.com/show_bug.cgi?id=2175220
There is a window of time where the following can occur.
1. An API request is in process to the lvm shell, we have written some
command to the lvm shell and we are blocked on that thread waiting
2. A signal arrives to the daemon which causes us to exit. The signal
handling code path goes directly to the lvm shell and writes
"exit\n". This causes the lvm shell to simply exit.
3. The thread that was waiting for a response gets an EIO as the child
process has exited. This bubbles up a failure.
This is addressed by placing a lock in the lvm shell to prevent
concurrent access to the shell. We also gather additional debug data
when we get an error in the lvm shell read path. This should help if
the lvm shell exits/crashes on its own.
Replace spaces with \040 in directory paths from getmntent (mtab).
The recent commit 5374a44c57 compares mount point directory paths
from /etc/mtab and /proc/mounts, in order to detect when a mounted
LV has been renamed. The directory path comparison does not work
correctly when the path contains spaces because getmntent uses
ascii space chars and proc replaces spaces with \040.
Add some Gentoo-based patches for better support of static linking.
These are not tested nor supported by upstream developers.
Usage requires the presence of several libraries in their static form,
which is however not commonly available.
The selinux part was modified by zkabelac to still work on older software
which did not provide libselinux.pc at the time - i.e. keep the old check
present and use pkg-config only when possible.
When the user enables the lockd libraries, configuration needs to fail when
they are missing. Also when notify-dbus support is enabled and libsystemd is
missing, abort configuration.
Check 'pkg-config --libs libdlm_lt' and test whether the returned value
contains the word 'pthread' - if so, it's likely a buggy result from an
incorrect config file; use -ldlm_lt directly in this case.
There may be some unconventional methods of sharing disks with vgimport
where hints could cause confusion. Hints should be disabled when
sharing disks, or pvscan --cache used to regenerate hints as needed,
but invalidating hints from vgimport might help reduce confusion.
Convert lvmlockd to use configure _LIBS and _CFLAGS for
discovered libraries.
TODO: ATM we ignore the discovered libdlm and use libdlm_lt instead.
Also libseagate_ilm is a hard-to-find unicorn for testing.
Convert naming SYSTEMD_CFLAGS/LIB -> LIBSYSTEMD_CFLAGS/LIBS
to better fit library check for libsystemd.
Build lvmlockd with SD_NOTIFY when we have defined LIBSYSTEMD_LIBS.
Coverity is complaining about an unchecked strcpy here, which is
irrelevant as we preallocate a buffer to fit the copied string;
however we could actually reuse the known size and use just memcpy().
So let's make some simple conversions.
This was trying to warn about a misconfig where the root PV
was excluded by the devices file, but it wasn't covering all
cases. Also, it's not very useful because the root LV is
usually activated directly in the initrd.
With the recent use of DEVLINKS, there is no longer any real
point in checking the filter for symlink names. Removing
this check should not change behavior with or without symlinks
in the filter.
Commit 7a8b7b4add introducing IDM locking
support missed using AC_SUBST() for a variable and provided it only
in the prebuilt configure, so with the next autoreconf this variable
was lost and IDM support was no longer compiled.
Fixes: https://github.com/lvmteam/lvm2/issues/98
Reported-by: Tom Prohofsky
Correct the setting of migration_threshold and add clarification of
how the user can restore default cache settings for cache policies.
Also document mq aliasing to smq with newer kernels.
When two parallel commands tried to create our
/dev/mapper/control node at the same time, one of them could
actually fail even if the 2nd command had already mknod()ed
this control node correctly.
So for the EEXIST case add detection of whether the control node is ok.
This may possibly help with some race cases in early boot.
We still need to support build without any blkid present,
so use PKG_CHECK_EXISTS() instead of direct failure
from PKG_CHECK_MODULES for too old version.
Allow users to specify their path to the systemd-run binary:
configure --with-systemd-run=/my/path/system-run
By default it is autodetected in $PATH and falls back to
/usr/bin/systemd-run.
The previous commit e67ebc5c40 used an incorrect check
for the blkid version.
Fix it by always checking for at least version 2.24,
as we can't use an older version anyway.
The added BLKID_SUBLKS_FSINFO is present in newer versions.
Reduce shellcheck warnings about missing {} for possible array
dereference.
Make sure we are not losing error codes when assigning local vars
and explicitly ignore 'errors' from standalone lines when needed.
Add some missing quotes.
Use $() instead of ancient ``.
Avoid writing temporary data into /tmp - the test needs to store
files within its own 'testdir' - so it can be properly discarded.
Addressing problem for user of system without default bash shell.
As reported "set -o pipefail" is a bashism that causes the build to
fail when /bin/sh does not point to bash.
For example, this has been observed causing build failures
on Gentoo when /bin/sh is symlinked to /bin/dash.
The rule has been reworked and we started to use .DELETE_ON_ERROR to
ensure that with any error during file generation, such an invalid
file is automatically removed.
Rules were reworked to avoid using a pipe and instead use temporary
files when necessary.
Printing lines with echo was replaced with the POSIX-recommended 'printf'
as proposed by the reporter, since handling of escape characters and
the "-n" flag for "echo" aren't portable across shells.
Also some build deps have been added.
Bug: https://bugzilla.redhat.com/show_bug.cgi?id=1822280
Gentoo-bug: https://bugs.gentoo.org/682404
Gentoo-bug: https://bugs.gentoo.org/716496
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1822280
Reported-by: Michael Orlitzky <michael@orlitzky.com>
TODO: investigate whether the temporary files could be handled via some
intermediate target solution - ATM I couldn't make it work as well
as the current solution using a shell 'trap' to remove the temp file.
The commit dropping lvmetad support 117160b27e
also removed cleaning of its generated files. However we need to keep
this functionality, otherwise we can leak them during various bisects.
It is easier to keep such rules forever.
Also commit c1ab9fb37f moved cmds.h to
include, so again keep it removed so it's not left there and in any
case could not mislead anyone.
Use configure detection instead of trying to directly include
blkid.h inside lvresize.c - where we do not pass in BLKID_CFLAGS
and since we actually do not need to use blkid there, introduce the
test variable HAVE_BLKID_SUBLKS_FSINFO and avoid trying to
use blkid.h in this place - this also fixes a build problem for
systems without blkid.h.
`AS_IF([...])` is more portable, as it respects macro expansions of
`AC_REQUIRE()`.
This is recommended Autoconf best practice, since in nested
conditionals, it is generally unknowable whether some macro invokes
`AC_REQUIRE()` deep down:
https://www.gnu.org/software/autoconf/manual/autoconf-2.71/html_node/Common-Shell-Constructs.html#index-AS_005fIF-1
As a result, the hacky `pkg_config_init` function is not needed
anymore, since any `PKG_*` invocation will ensure that
`PKG_PROG_PKG_CONFIG` will have been called, due to the fact that
`AC_REQUIRE()` will trickle up.
`[[ ... ]]` only works in bash, and not in POSIX sh, specifically
dash fails on this, which is a popular alternative to bash for running
configure scripts.
test's `-a` and `-o` options are considered obsolescent by POSIX,
because they interact badly with expressions and can become ambiguous
depending on string arguments:
https://pubs.opengroup.org/onlinepubs/9699919799/utilities/test.html
`==` only works in bash.
Fix hang of vgimportclone command when:
the PV(s) being imported are not actually clones/duplicates, and
the -n vgname arg is the same as the current vgname.
(Not the intended use of the command, but it should still work.)
In this case, the old and new vgnames ended up being the same, when
the code expected they would be different. A file lock on both the
old and new vgnames is used, so when both flocks are on the same
file, the second blocks indefinitely.
Fix the new vgname to be the old name plus a numeric suffix, which
is the expected result.
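For example (device name is a placeholder), importing a PV that is not
actually a clone while requesting the existing VG name no longer hangs; the
command now picks the old name plus a numeric suffix:
vgimportclone -n vg1 /dev/sdb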
Follow-up for e10f67e917.
The commit e10f67e917 tries to keep device
node symlinks even if the device is in the suspended state. However,
necessary properties that may previously have been obtained by the blkid
command were not imported, at least in the .rules file. So, unless ID_FS_xyz
properties are imported by another earlier .rules file, the device node
symlinks are still lost when the event is processed in the suspended state.
Let's explicitly import the necessary properties.
RHBZ: https://bugzilla.redhat.com/show_bug.cgi?id=2158628
GHPR: https://github.com/lvmteam/lvm2/pull/105
We used KERNEL=="device-mapper", NAME="/dev/mapper/control" udev rule to
create the /dev/mapper/control file. The "NAME" rule should be only used
to rename network devices, otherwise udev issues a warning message. The
device-mapper driver has proper DEVNAME=/dev/mapper/control propagated
in the uevent environment when it is loaded so we don't need further
instruction on where to create the node - udev knows already.
Also, these days, it is created directly by kernel inside devtmpfs.
This makes the NAME="/dev/mapper/control" rule completely obsolete.
Since 67722b3123, we have a new mechanism
to run the autoactivation from udev. With this change, we also replaced
the way the LVM autoactivation service is instantiated - instead of
setting the SYSTEMD_WANTS udev variable (which systemd read and then
instantiated the service), we're now directly instantiating the
transient 'lvm-activate-<vgname>' service by calling systemd-run.
As such, we don't need to bother with setting the SYSTEMD_READY variable
for foreign devices anymore (in this case, MD and loop devices on top of
which there's a PV).
Before, we set the SYSTEMD_READY variable to make sure that the SYSTEMD_WANTS
is applied correctly - the service instantiation was edge-triggered by
flipping the SYSTEMD_READY from 0 to 1 and at the same time having the
SYSTEMD_WANTS variable set to the service name to instantiate. We're
using systemd-run now so this condition does not apply anymore.
Also, it was not completely correct to set SYSTEMD_READY for foreign
devices because there might be cases where this could cause issues,
see also https://github.com/lvmteam/lvm2/issues/94.
When refreshing all LVs in a VG, skip processing of thick snapshots,
since they will be refreshed through their origin LV.
This should reduce some unnecessary ioctl() calls.
Keep the conversion 64bit, as on the x32 arch time_t is a 64bit value
and we may lose precision (y2038).
TODO: likely use a universal string for time printing as in log/log.c
_set_time_prefix()
When activating an origin and its thick snapshots, ensure the
origin LV's udev processing is finished first and after this
reactivate its snapshots so udev can scan them afterwards.
This should fix the problems for users who use the UUID of such a device
in their fstab and occasionally got the snapshot mounted instead of the
origin LV.
"vgchange -aay --autoactivation event" is called by our udev rule.
When the udev rule runs, symlinks for devices may not all be created
yet. If the regex filter contains symlinks, it won't work correctly.
This command uses devices that already passed through pvscan. Since
pvscan applies the regex filter correctly, this command inherits the
filtering from pvscan and can skip the regex filter itself.
See the previous commit
"pvscan: use alternate device names from DEVLINKS to check filter"
pvscan --cache <dev> is called by our udev rule at a time when all
the symlinks for <dev> may not be created yet (by other udev rules.)
The regex filter in lvm.conf may refer to <dev> using a symlink name
that hasn't yet been created, which would cause <dev> to not match
the filter regex. The DEVLINKS env var, set by udev, contains all
the symlink names for <dev> that have been or will be created.
So, we add all these symlink names to dev->aliases, as if we had
found them in /dev. This allows <dev> to be recognized by a regex
filter containing a symlink for <dev>.
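As an illustration of the configuration being addressed (an lvm.conf regex
filter referencing only a symlink name; the by-id pattern is a placeholder):
devices {
    filter = [ "a|^/dev/disk/by-id/wwn-.*|", "r|.*|" ]
}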
It looks like force was not being used to enable crypt resize,
but rather to force an inconsistency between LV and crypt
sizes, so this is either not needed or force in this case
should have some other meaning.
This reverts commit ed808a9b54.
Update previous commit
"lvresize: only resize crypt when fs resize is enabled"
to enable crypt resizing when --force is set and --resizefs
is not set. This is because it's been allowed in the past
and people have used it, but it's not a good idea.
There were a couple of cases where lvresize, without --fs resize,
was resizing the crypt layer above the LV. Resizing the crypt
layer should only be done when fs resizing is enabled (even if the
fs is already small enough due to being independently reduced.)
Also, check the size of the crypt device to see if it's already
been reduced independently, and skip the cryptsetup resize if
it's not needed.
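A hedged example of the distinction (LV name is a placeholder): with fs
resizing enabled the crypt layer above the LV is resized as well; without
it, only the LV itself changes:
lvextend -L+1G --resizefs vg/lv   # LV, crypt layer and fs are all resized
lvextend -L+1G vg/lv              # LV only; crypt/fs layers left untouched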
While the output of the build looks more polished, text editors fail to
find the source file from compile errors - so until we start to print
all files with full paths - comment out this make build parameter.
Enhance checking of vdo constraints so it also handles changes of active
VDO LVs, where only the added difference is considered now.
For this the reported informational message about used memory
was also improved to only list RAM-consuming blocks.
Introduce struct vdo_pool_size_config usable to calculate the necessary
memory size for an active VDO volume.
The function lv_vdo_pool_size_config() is able to read this
configuration out of the runtime DM table line.
Cover a case missed by the recent commit e0ea0706d
"report: query lvmlockd for lv_active_exclusively"
Fix the lv_active_exclusively value reported for thin LVs.
It's the thin pool that is locked in lvmlockd, and the thin
LV state was mistakenly being queried and not found.
Certain LV types like thin can only be activated exclusively, so
always report lv_active_exclusively true for these when active.
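The field can be inspected directly, e.g. (VG name is a placeholder):
lvs -o lv_name,lv_active,lv_active_exclusively vg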
If one of the PVs in the VG does not hold metadata, then the
command would fail, thinking that PV was from a different VG.
Also add missing free on that error path.
typo used "cryptresize" as command name
this affects cases where the file system is resized
independently, and then the lvresize command is used
which only needs to resize the crypt device and the LV.
18722dfdf4 lvresize: restructure code
mistakenly changed the overprovisioning check from applying
to all lv_is_thin_type lvs to only lv_is_thin_pool lvs, so
it no longer applied when extending thin lvs. The result
was missing warning messages when extending thin lvs.
The recent change that verifies sys_serial system.devices entries
using the PVID did not exclude non-PV devices from being checked.
The verification code would attempt to use du->pvid which was null
for the non-PVs causing a segfault.
With newer kernels (>5.13) DM_CREATE no longer generates a
uevent for DM devices without a table.
There are even no sysfs block device entries in such a case,
although the device has an assigned major:minor and is listed
by 'dmsetup info'.
So this patch counts the 'table' lines, and in case
no table line comes from the cmdline or stdin - waiting on the cookie
is avoided generically instead of disabling just the case with the
option --notable - which then also skipped handling of the
option --addnodeoncreate (which is however historical and
should be avoided).
As a result there should be no leaking udev cookies and endlessly
waiting commands like this:
dmsetup create mytestdev </dev/null
We can't assume that strerror_r returns char* just because _GNU_SOURCE is
defined. We already call the appropriate autoconf test, so let's use its
result (STRERROR_R_CHAR_P).
Note that in configure, _GNU_SOURCE is always set, but we add a defined
guard just in case for futureproofing.
Bug: https://bugs.gentoo.org/869404
Query the LV lock state in lvmlockd to report lv_active_exclusively
for active LVs in shared VGs. As with all lvmlockd state,
it is from the perspective of the local node.
Signed-off-by: corubba <corubba@gmx.de>
Add a note to the manpage that lvmlockd is unable to determine
accurately and without side-effects whether a LV is remotely active.
Also change the value of the lv_active_remotely option from false to
undefined for shared VGs to distinctly communicate that inability to
users. Only for local VGs it can be definitely stated that they are not
remotely active.
Signed-off-by: corubba <corubba@gmx.de>
Since we now change deduplication with a V4 table line change,
the modification tends to be faster and for a few ms we can also capture
a 'status' about the deduplication index being opened or closed.
Use 'grep -E' to handle both words.
Improve detection of the VDO virtual size - so it's not reading VDO
metadata when the VDO device is already active; instead we reuse the
existing table line to learn the existing metadata size.
NOTE: if support for reduction of the VDO virtual size is ever added,
this method will need to be reworked to allow a size difference only
within 'extent_size' alignment.
As we actually use reading of VDO metadata only as an extra 'information'
source, and not to error the command - switch to 'log_debug()' severity
for messages out of the parser code.
Handle multiple devices using the same serial number as
their device id. After matching devices to devices file
entries, if there is a discrepancy between the on-disk PVID
and the devices file PVID, then rematch devices to
devices file entries using the PVID, looking at all disks
on the system with the same serial number.
Only /sys/dev/block/major:minor/device/serial was read to find
a disk serial number, but a serial number seems to be reported
more often in other locations, so check these also:
/sys/dev/block/major:minor/device/vpd_pg80
/sys/class/block/vda/serial (for virtio disks only)
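For a quick manual check of the same locations (major:minor and device names
are placeholders; vpd_pg80 is binary, so dump it through a hex viewer):
cat /sys/dev/block/8:16/device/serial
cat /sys/class/block/vda/serial                     # virtio disks only
hexdump -C /sys/dev/block/8:16/device/vpd_pg80      # VPD page 0x80: unit serial number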
Add a very simplistic parser of vdo metadata to be able to obtain the
logical_blocks stored within vdo metadata - as lvm2 may
submit a smaller value due to internal alignment rules.
To avoid creating a mismatching table line - use this number
instead of the one provided by lvm2.
Previously we utilized udev until we got a dbus notification from lvm
command line tools. This however misses the case where something outside
of lvm clears the signatures on a block device and we fail to refresh the
state of the daemon. Change the behavior so we always monitor udev events,
but ignore those udev events that pertain to lvm members.
Note: --udev command line option no longer does anything and simply
outputs a message that it's no longer used.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1967171
The lvm dbus daemon will auto activate on dbus API calls. To
prevent the dbus daemon starting when lvm command line tools are
being used we will check to see if the daemon is running first.
If the daemon is not running, we will not notify the daemon.
For this check to work it requires the changes done previously
with commit: 3fdf449348
Reviewed-by: David Teigland <teigland@redhat.com>
The number of extents for the sanlock lvmlock lv is calculated using
integer division, which rounds towards zero. With a physical extent size
of 129M, instead of the requested 256M the lv is only 129M (1 extent).
With any physical extent size greater than 256M the lv creation fails
because the number of extents is zero.
This is fixed by replacing the integer division with a division macro
that rounds up and thus guarantees that the size of the lv will always
be equal or greater than the requested size. Using the examples above, a
pes of 129M will result in a 258M lv (2 extents), pes of 300M in a 300M
lv (1 extent).
The re-calculation of the lv size in bytes and megabytes is only so the
debug output shows the correct values. The size in mb there is still
not byte-perfect-accurate, but good enough for a human-readable estimate;
and the exact size in bytes and extents is right next to it.
Signed-off-by: corubba <corubba@gmx.de>
When executing process_each_lv_in_vg, we process live LVs first and
after that, we process any historical LVs. In case we have just removed
an LV, which also means we have just made it "historical" and so it
appears as fresh item in vg->historical_lvs list, we have to skip it
when we get to processing historical LVs inside the same process_each_lv_in_vg
call.
The simplest approach here, without introducing another LV list, is to
simply mark such historical LVs as "fresh" directly in struct
historical_logical_volume when we have just removed the original LV
and created the historical LV for it. Then, we just need to check the
flag when processing historical LVs and skip it if it is "fresh".
When we read historical LVs out of metadata, they are marked as
"not fresh" and so they can be processed as usual.
This was mainly an issue in conjunction with -S|--select use:
# lvmconfig --type diff
metadata {
record_lvs_history=1
}
(In this example, a thin pool with lvol1 thin LV and lvol2 and lvol3 snapshots.)
# lvs -H vg -o name,pool_lv,full_ancestors,full_descendants
LV Pool FAncestors FDescendants
lvol1 pool lvol2,lvol3
lvol2 pool lvol1 lvol3
lvol3 pool lvol2,lvol1
pool
# lvremove -S 'name=lvol2'
Logical volume "lvol2" successfully removed.
Historical logical volume "lvol2" successfully removed.
...here, the historical LV lvol2 should not have been removed because
we have just removed its original non-historical lvol2 and the fresh
historical lvol2 must not be included in the same processing spree.
The new device_id types are: wwid_naa, wwid_eui, wwid_t10.
The new types use the specific wwid type in their name.
lvm currently gets the values for these types by reading
the device's vpd_pg83 sysfs file (this could change in the
future if better methods become available for reading the
values.)
If a device is added to the devices file using one of these
types, prior versions of lvm will not recognize the types
and will be unable to use the devices.
When adding a new device, lvm continues to first use sys_wwid
from the sysfs wwid file. If the device has no sysfs wwid file,
lvm now attempts to use one of the new types from vpd_pg83.
If a devices file entry with type sys_wwid does not match a
given device's sysfs wwid file, the sys_wwid value will also
be compared to that device's other wwids from its vpd_pg83 file.
If the kernel changes the wwid type reported from the sysfs
wwid file, e.g. from a device's t10 id to its naa id, then lvm
should still be able to match it correctly using the vpd_pg83
data which will include both ids.
t10 wwids are now edited in the same way that multipath does,
which is replacing a series of spaces with one _. Previously
lvm replaced every space with one _. Devices file entries
with the old form will be converted to the new shorter form.
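A rough sketch of the difference in editing (the wwid value is made up):
wwid='t10.ATA     SOMEDISK   SERIAL123'
echo "$wwid" | sed -e 's/ /_/g'     # old behavior: every space becomes one _
echo "$wwid" | sed -e 's/  */_/g'   # new, multipath-like: a run of spaces becomes one _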
Move the functions handling dev wwids.
Add dev flags indicating that wwids have been read from
sysfs wwid file or sysfs vpd_pg83 file. This can be
used to avoid rereading these.
Improve filter-mpath search for a device's wwid in
/etc/multipath/wwids, to avoid unnecessary rereading
of wwids from sysfs files.
Type 8 wwids from vpd_pg83 with naa or eui names should be
saved as those types.
If lvmlockd in a cluster is killed accidentally or for any other reason, the
lock resources become orphaned in the VG lockspace. When the
cluster manager tries to restart this daemon, the LVs will probably
become inactive because of the resource schedule policy and thus the lock
resource will be omitted during the adoption process. This patch will
try to purge the lock resources left in the previous lockspace, so the
following actions can work again.
Exclude the new fs resizing capabilities at build time
(rather than run time) if the necessary libblkid features
are not available. When excluded, all fs resizing options
are translated to resize_fsadm. Accessing the new
features now requires rebuilding lvm if libblkid is
upgraded.
Place the addCleanup at the end as we don't want to go through clean up
if we don't make it through setUp. If we don't do this we can remove VGs
that we didn't create in the unit test.
Previously when the __del__ method ran on LVMShellProxy we would blindly
call terminate(). This was a race condition as the underlying process
may or may not be present. When the process is still present the SIGTERM will
end up being seen by lvmdbusd too. Re-work the code so that we
first try to wait for the child process to exit and only then if it hasn't
exited will we send it a SIGTERM. We also ensure that when this is
executed we will briefly ignore a SIGTERM that arrives for the daemon.
If we plan to use dm throttling for mirror targets - we actually
have to check whether the kernel runs with CONFIG_HZ_1000 - if it does
not, the whole idea of throttling is actually not working in the
testsuite, as within a single 'tick' with HZ 100 way too much data
is being moved on any modern hardware - and since there is no plan
to change this in the kernel - we simply avoid using throttling on such
kernels and the test needs to work differently - either ignore results
or use much larger mirror sizes...
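One way to check the kernel tick rate (the config file location varies by
distro and is an assumption here):
grep -E '^CONFIG_HZ(_1000)?=' "/boot/config-$(uname -r)"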
When checking to see if the PV is missing we incorrectly checked that the
path_create was equal to PV creation. However, there are cases where we
are doing a lookup where the path_create == None. In this case, we would
fail to set lvm_id == None which caused a problem as we had more than 1
PV that was missing. When this occurred, the second lookup matched the
first missing PV that was added to the object manager. This resulted in
the following:
Traceback (most recent call last):
File "/usr/lib/python3.9/site-packages/lvmdbusd/utils.py", line 667, in _run
self.rc = self.f(*self.args)
File "/usr/lib/python3.9/site-packages/lvmdbusd/fetch.py", line 25, in _main_thread_load
(changes, remove) = load_pvs(
File "/usr/lib/python3.9/site-packages/lvmdbusd/pv.py", line 46, in load_pvs
return common(
File "/usr/lib/python3.9/site-packages/lvmdbusd/loader.py", line 55, in common
del existing_paths[dbus_object.dbus_object_path()]
Because we expect to find the object in existing_paths if we found it in
the lookup.
resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2085078
Latest upstream build of lvm results in the following error when
trying to use lvmshell.
"Argument --reportformat cannot be used in interactive mode.,
Error during parsing of command line."
When lvm is compiled with editline, if the file descriptors don't look like
a tty, then no "lvm> " prompt is done. Having lvm output the shell prompt
when consuming JSON on a report file descriptor is very useful in
determining if lvm command is complete.
Historically we have seen a few different errors when we call
fullreport: a failing exit code, or JSON which is missing one or more keys.
Instruct lvm to dump its debug output to a file during fullreport calls when we
fork & exec lvm. If we encounter an error, output the debug data.
The reason this isn't done when lvmshell is used is that we
don't have an easy way to test the error paths.
This change is complicated by the following:
1. We don't know if fullreport was good until we evaluate all the JSON.
This is done a bit after we have called into lvm and returned.
2. We don't want to orphan the debug file used by lvm if the daemon is
killed. Thus we try to minimize the window where the debug file hasn't
already been unlinked. A RFE to pass an open FD to lvm for this
purpose is outstanding.
The temp. file is:
-rw------. 1 root root /tmp/lvmdbusd.lvm.debug.XXXXXXXX.log
Introduce an exception which is used for known existing issues with lvm.
This is used to distinguish errors in lvm itself from errors in lvmdbusd.
In the case of lvm bugs, when we simply retry the operation we will log
very little. Otherwise, we will dump a full traceback for investigation
when we do the retry.
Instead of lumping all the exceptions, break them out to handle the dbus
exceptions separately, to reduce the amount of debug information that ends
up in the journal that has questionable value.
Lvm occasionally fails to return all the requested JSON keys in the output of
"fullreport". This happens very rarely. When it does, the daemon was reporting
the resulting informational exception:
MThreadRunner: exception
Traceback (most recent call last):
File "/usr/lib/python3.9/site-packages/lvmdbusd/utils.py", line 667, in _run
self.rc = self.f(*self.args)
File "/usr/lib/python3.9/site-packages/lvmdbusd/fetch.py", line 40, in _main_thread_load
(lv_changes, remove) = load_lvs(
File "/usr/lib/python3.9/site-packages/lvmdbusd/lv.py", line 143, in load_lvs
return common(
File "/usr/lib/python3.9/site-packages/lvmdbusd/loader.py", line 37, in common
objects = retrieve(search_keys, cache_refresh=False)
File "/usr/lib/python3.9/site-packages/lvmdbusd/lv.py", line 95, in lvs_state_retrieve
l['vdo_operating_mode'],
KeyError: 'vdo_operating_mode'
The daemon retries the operation, which usually works and the daemon continues.
However, simply reporting this informational stack trace is causing CI and other
automated tests to fail as they expect no tracebacks in the log output.
Remove the reporting of this code path unless it persists and causes the daemon
to give up and exit.
Ref: https://bugzilla.redhat.com/show_bug.cgi?id=2120267
When the daemon isn't started with --debug we will keep a circular
buffer of the past N number of debug messages which we will output
when we encounter an issue.
Rather than trying to bubble up return codes that get us to exit cleanly
it's better to just raise an exception to bail. In some cases functions
don't have return codes, so they cannot be checked.
We were incorrectly only setting this if --udev wasn't present on the
command line. In all cases when we see a manager.ExternalEvent we want
to set this.
Depending on when an error occurs, it may not have any information available
from lastlog. In this case, try to grab an error message from the original
response.
Introduce a new lock for the flight recorder, so that we can dump it when
a command is blocked waiting for lvm to complete. Also, in all paths we now
add the metadata to the flight recorder before the command is done, so we have
it when a command hangs and we dump the flight recorder. Add the missing
bits after the command has finished.
Cleaned up the output too.
We put this in to test one of the paths in the daemon, but unfortunately
if we hit the race condition where the job actually is done, we will try
to call j.Wait(1) after the remove. This fails with:
dbus.exceptions.DBusException: org.freedesktop.DBus.Error.UnknownMethod:
Method "Wait" with signature "i" on interface "com.redhat.lvmdbus1.Job"
doesn't exist
which is caused by the dbus object no longer existing. We could handle
this, but the issue is that we no longer have the ability to get the results to
return; they have been lost.
A better solution would be to write a specific unit test to force this code
path and handle all the possible outcomes.
These files are automatically cleared on reboot given
that /run is tmpfs, and that remains the primary way
of clearing online files.
But, if there's extreme use of vgcreate+pvscan+vgremove
between reboots, then removing online files in vgremove
will limit the number of unused online files using space
in /run.
The new option "--fs String" for lvresize/lvreduce/lvextend
controls the handling of file systems before/after resizing
the LV. --resizefs is the same as --fs resize.
The new option "--fsmode String" can be used to control
mounting and unmounting of the fs during resizing.
Possible --fs values:
checksize
Only applies to reducing size; does nothing for extend.
Check the fs size and reduce the LV if the fs is not using
the affected space, i.e. the fs does not need to be shrunk.
Fail the command without reducing the fs or LV if the fs is
using the affected space.
resize
Resize the fs using the fs-specific resize command.
This may include mounting, unmounting, or running fsck.
See --fsmode to control mounting behavior, and --nofsck to
disable fsck.
resize_fsadm
Use the old method of calling fsadm to handle the fs
(deprecated.) Warning: this option does not prevent lvreduce
from destroying file systems that are unmounted (or mounted
if prompts are skipped.)
ignore
Resize the LV without checking for or handling a file system.
Warning: using ignore when reducing the LV size may destroy the
file system.
Possible --fsmode values:
manage
Mount or unmount the fs as needed to resize the fs,
and attempt to restore the original mount state at the end.
nochange
Do not mount or unmount the fs. If mounting or unmounting
is required to resize the fs, then do not resize the fs or
the LV and fail the command.
offline
Unmount the fs if it is mounted, and resize the fs while it
is unmounted. If mounting is required to resize the fs,
then do not resize the fs or the LV and fail the command.
Notes on lvreduce:
When no --fs or --resizefs option is specified:
. lvextend default behavior is fs ignore.
. lvreduce default behavior is fs checksize
(includes activating the LV.)
With the exception of --fs resize_fsadm|ignore, lvreduce requires
the recent libblkid fields FSLASTBLOCK and FSBLOCKSIZE.
FSLASTBLOCK*FSBLOCKSIZE is the last byte used by the fs on the LV,
which determines if reducing the fs is necessary.
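For illustration, a hedged sketch of how these options combine (VG/LV
names and sizes are hypothetical):
  lvextend -L+10G --fs resize --fsmode manage vg/lv    # grow the LV, then grow the fs, mounting/unmounting as needed
  lvreduce -L-5G --fs checksize vg/lv                  # shrink the LV only if the fs does not use the affected space
  lvresize -L20G --fs resize_fsadm vg/lv               # old fsadm-based handling (deprecated)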
This test can only be run when the user passes LVM_TEST_DEVDIR=/dev,
as it requires and expects actions to happen in this dir; skip
otherwise.
Also 'extend_filter' now manages multiple args in one lvm.conf update.
mdraid has some breakage - so 5.19 is crashing
(possibly some older versions as well - those can be added too).
Test seems to pass with a 6.0-rc kernel.
Add new define 'newline' for use in 'foreach()'
Add new $(SHOW) for makefile printing output
Add 'make print-VAR' for easier debugging of Makefiles' variables.
Fix lv_active to be of BIN type instead of STR. This allows lv_active to
follow the report/binary_values_as_numeric setting as well as --binary
cmd line switch. Also, it makes it possible to use -S|--select with
either textual or numeric representation of the value, like 'lvs -S
active=active' but also 'lvs -S active=1'.
Take the devices file lock before creating a new devices file.
(Was missed by the change to preemptively create the devices
file prior to setup_devices(), which was done to improve the
error path.)
When using --all, if one VG is skipped, continue adding
other VGs, and do not return an error from the command
if some VGs are added. (A VG is skipped if it's missing PVs.)
If the command fails during devices file setup or device
scanning, then remove the devices file if it has been
newly created by the command, and exit with an error.
If devices from a named VG are not imported (e.g. the
VG is missing devices), then remove the devices file if
it has been newly created by the command, and exit with
an error.
If --all VGs are being imported, and no devices are found
to include in the devices file, then remove the devices
file if it has been newly created by the command, and
exit with an error.
Names matching internal code layout.
Functions in thin_manip.c use thin_pool in their names.
Keep 'pool' only for functions working for both cache and thin pools.
No change of functionality.
Certain args can't be used in lvm shell ("interactive mode") because
they are not supported there. Add ARG_NONINTERACTIVE flag to mark
such args and error out if we're in interactive mode and at the same
time we detect use of such argument.
Currently, this is the case for --reportformat arg - we don't support
changing the format per command in lvm shell. The whole shell is running
under a reportformat chosen at shell's start.
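For illustration (a hedged sketch): inside the shell the per-command
switch is rejected, so the format has to be chosen before the shell
starts, e.g. via the report/output_format config setting:
  lvm> pvs --reportformat json
  Argument --reportformat cannot be used in interactive mode.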
Commit 73ec3c954b added a way to print
only a part of the report string (repstr) to support decoding individual
string list items out of repstr.
The repstr is normally printed through _safe_repstr_output so that any
JSON_QUOTE character ('"') found within the repstr is escaped to not
interfere with value quoting in JSON format.
However, the commit 73ec3c954b missed
checking the 'len' argument passed to _safe_repstr_output function when
adding the rest of the repstr after all previous JSON_QUOTE characters
were escaped (when calling the last dm_pool_grow_object). When 'len'
is 0, we need to calculate the 'len' ourselves in the function by
simply calling strlen. This is because 'len' is passed to the function
only if we're taking a part of repstr, not as a whole.
If we failed or logged anything before we actually execute given command
in lvm shell, we couldn't report the log using lastlog command after.
This patch adds specific 'pre-cmd' log report object type to identify
such log messages and enables lastlog to report even this log.
pvcreate with --uuid would segfault if a devices file entry matched
the specified pvid, but the devices file entry had no device_id, which
could happen if the entry has a devname idtype.
When the volume size is extended, there is no need to flush
IO operations (nothing can be targeting the new space yet).
The VDO target is supported as a target that can safely work
under this condition.
Such support is also needed when extending the VDOPOOL size
while the pool is reaching its capacity - since this allows it
to continue working without hitting the 'out-of-space' condition
caused by flushing all in-flight IO.
When we wanted to insert '#' before a config line (to comment it out),
we used dm_pool_strndup to temporarily copy the space prefix first so
we could assemble the final line:
"<space_prefix># <key>=<value>"
out of the original:
"<space_prefix><key>=<value>".
The space_prefix copy is not necessary; we can just use fprintf's
precision modifier "%.*s" to print the exact part if we already
know the space_prefix length.
The new --valuesonly option causes the lvmconfig output to contain only
values without keys for each config node. This is practical mainly in
case where we use lvmconfig in scripts and we want to assign the value
to a different custom key or simply output the value itself without the
key.
For example:
# lvmconfig --type full activation/raid_fault_policy
raid_fault_policy="warn"
# lvmconfig --type full activation/raid_fault_policy --valuesonly
"warn"
# my_var=$(lvmconfig --type full activation/raid_fault_policy --valuesonly)
# echo $my_var
"warn"
Internally, NUM and BIN fields are marked as DM_REPORT_FIELD_TYPE_NUM_NUMBER
through libdevmapper API. The new 'json_std' format mandates that the report
string representing such a value must be a number, not an arbitrary string.
This is because numeric values in 'json_std' format do not have double quotes
around them. This practically means, we can't use string synonyms
("named reserved values") for such values and the report string must always
represent a proper number.
With 'json' and 'basic' formats, this is not an issue because 'basic' format
doesn't have any structure or typing at all and 'json' format puts all values
in quotes, including numeric ones.
When we tested lvm2, the kernel injected various random faults.
(gdb) bt
...
(gdb) p vg
$1 = (struct volume_group *) 0x0
(gdb) p use_previous_vg
$2 = (unsigned int *) 0x0
Signed-off-by: Wu Guanghao <wuguanghao3@huawei.com>
Update to more recent version of configure script to support more
new architecture types like RISCV64. Tools in use ATM:
autoconf-2.71-3.fc37.noarch
autoconf-archive-2022.02.11-3.fc37.noarch
automake-1.16.5-9.fc37.noarch
Resolves https://bugzilla.redhat.com/show_bug.cgi?id=2118243
When lvcreate is making a VDO pool and the user has not specified -V size,
ATM we actually run 'vdoformat' twice to get a properly 'extent' aligned
size matching lvm2 properties - so the 2nd run of vdoformat
can stay with 'log_verbose()', so the standard printed result
does not show confusing info (which is now also correctly using
print_unless_silent).
The 'pe_start' column was incorrectly marked as being of type NUM.
This was not correct as pe_start is actually of type SIZ, which means
it can have a size suffix and hence it's not a pure numeric value.
Proper column type is important for selection to work correctly, so we
can also do comparisons while using suffixes.
This is also important for new "json_std" output format which does not
put double quotes around pure numeric values. With pe_start incorrectly
marked as NUM instead of SIZ, this produced invalid JSON output
like '"pe_start" = 1.00m' because it contained the 'm' (or other)
size suffix. If properly marked as SIZ, this is then put in double
quotes like '"pe_start" = "1.00m"'.
In JSON format, we print string list this way:
"key" = "item1,item2,...,itemN"
while in JSON_STD format, we print string list this way:
"key" = ["item1","item2",...,"itemN"]
Before, we stored only the report string itself for a string list
in field->report_string. The field->report_string has either
sorted items or not, depending on what we need for a field -
some report fields have sorted output, some don't...
The field->sort_value.value then contains pointer to the exact
field->report_string. The field->sort_value.items ALWAYS keeps
sorted array of individual items, represented as '[position,length]'
pairs pointing to the field->sort_value.value string.
This approach was fine as long as we didn't need to apply further
formatting to field->report_string. However, if we need to apply
further formatting to field->report_string content, taking into
account individual items, we also need to know where each item
starts and what its length is. Before, we only knew this when
items in the report string were sorted, but not in the unsorted version.
We can't rely on the delimiter (default ",") only to separate items
back out of report string, because that delimiter can be contained
in the item value itself.
So this patch enhances the field->report_string for a string list so
it also contains '[position,length]' pairs for each individual item
inside field->report_string. We store this array right beyond the
string itself and we encode it in the same manner we already did for
field->sort_value.items before.
If field->report_string has sorted items, the field->sort_value.items
just points to the array of items we store beyond the report string.
If field->report_string has unsorted items, we store separate array
of items for both field->report_string and field->sort_value.
This patch also cleans up the _report_field_string_list function a bit
so it's easier and more straightforward to follow than the original
version.
Example. If we have "abc", "xy", "defgh" as list on input with ","
as delimiter, then:
- field->report_string will have:
- if we need field->report_string unsorted:
abc,xy,defgh\0{[3,12],[0,3],[4,2],[7,5]}
|____________||________________________|
string array of [pos,len] pairs
|____||________________|
#items items
- if we need field->report_string sorted:
repstr_extra
|
V
abc,defgh,xy\0{[3,12],[0,3],[4,5],[10,2]}
|____________||________________________|
string array of [pos,len] pairs
|____||________________|
#items items
- field->sort_value will have:
- if field->report_string is unsorted:
field->sort_value.value = field->report_string
field->sort_value.items = {[0,3],[0,3],[7,5],[4,2]}
(that is 'abc,defgh,xy')
- if field->report_string is sorted already:
field->sort_value.value = field->report_string
field->sort_value.items = repstr_extra
(that is also 'abc,defgh,xy')
For JSON_STD format, use 'null' if a field has no value at all.
In JSON format, we print undefined numeric values this way:
"key" = ""
while in JSON_STD format, we print undefined numeric values this way:
"key" = null
(Keep in mind that 'null' is different from 0 (zero value) which is
a defined value.)
In JSON format, we print numeric values this way:
"key" = "N"
while in JSON_STD format, we print numeric value this way:
"key" = N
(Where N is a numeric value.)
The original JSON formatting will be still available using the original
DM_REPORT_GROUP_JSON identifier. Subsequent patches will add enhancements
to JSON formatting code so that it adheres more to JSON standard - this
will be identified by new DM_REPORT_GROUP_JSON_STD identifier.
If using JSON format for lvm shell's output, the error message about
exceeding the maximum number of arguments was not reported on output if
this condition was ever hit.
This is because the JSON format (as well as any other future format)
requires extra formatting compared to "basic" format and so it also
requires extra calls when it comes to reporting. The report needs to
be added to a report group and then popped and put on output with
specialized "dm_report_group_output_and_pop_all".
This "output and pop" is normally executed after we execute the command
in the lvm shell. When we didn't get to the command exection at all because
some precondition was not met (like hitting the limit for the number of
arguments for the command here), we skipped this important call and
so there was no log report output.
Right now, it's only this exact error message for which we need to call
"output and pop" directly, all the other error messages are about
initializing and setting the log report itself which we can't report
obviously.
Before this patch:
lvm> pvs 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65
lvm>
With this patch applied:
lvm> pvs 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65
{
"log": [
{"log_seq_num":"1", "log_type":"error", "log_context":"shell", "log_object_type":"cmd", "log_object_name":"", "log_object_id":"", "log_object_group":"", "log_object_group_id":"", "log_message":"Too many arguments, sorry.", "log_errno":"-1", "log_ret_code":"0"}
]
}
If there's any other error message in the future before we execute the
command itself, we also need to call the "output and pop" directly.
multipath_component_detection=0 has always applied to the filter-based
component detection. Also apply this setting to the duplicate-PV
handling which also eliminates multipath components (based on duplicate
PVs having the same wwid.)
When creating a VDO pool based on % values, lvm2 is now more clever
and avoids creating 'unsupportable' sizes of physical backend
volumes, as 16TiB is the maximum size supported by the VDO target
(and it is also limited by the maximum number of supportable slabs (8192)
based on slab size).
If the requested virtual size is approaching the max supported size of 4PiB,
switch the header size to 0.
Newer VDO kernel targets require a matching virtual size - this
however causes incompatibility when lvcreate is allowed to format the VDO
data device and read the usable size from vdoformat.
Although this is a kernel regression and will likely get fixed,
lvm2 can actually reformat the VDO device to use a properly aligned VDO LV
size to make this problem disappear.
Add a function to check for available memory for a particular VDO
configuration - to avoid unnecessary machine swapping for configs
that will not fit into memory (possibly in a locked section).
The formula tries to estimate the RAM size the machine can use for the
kernel target, including swap, while still leaving some amount of
usable RAM.
The estimation is based on the documented RAM usage of the VDO target.
If /proc/meminfo were theoretically unavailable, try to use the
'sysinfo()' function, however this gives only free RAM without
knowledge of how much RAM could eventually be swapped.
TODO: move _get_memory_info() into a generic lvm2 API function used
by other targets with non-trivial memory requirements.
Keep a single source for most of the values printed in lvm.conf
(still needs some conversion).
Correct the max for logical threads to 60
(we may refuse some older configurations which might use higher
numbers - but so far let's assume no user has ever set this,
as it's been non-trivial and it would complicate the code unnecessarily.)
Accept a maximum of 4PiB for the virtual size of a VDO LV
(lvm2 will drop the header borders to 0 for this case).
Patch 1b070f366b should have
already fixed this issue, but due to incorrect patch rebasing
the change to vdo_slabSize got lost.
So here it is again as an explicit one-line patch.
Fixes commit b8f4ec846 "display: ignore --reportformat"
by restoring the --reportformat option to pvdisplay.
Adding -C to pvdisplay turns the command into a reporting
command (like pvs, vgs, lvs) in which --reportformat can
be useful.
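For example (hedged sketch; output omitted), the reporting form accepts the
option again:
  pvdisplay -C --reportformat json
while plain 'pvdisplay' without -C keeps its traditional display output.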
Compare device wwids with the wwids in /etc/multipath/wwids when
excluding multipath components. The wwid printed
from the sysfs wwid file may not be the wwid used
in multipath wwids. Save the wwids found for each
device on dev->wwids to avoid repeatedly reading
and parsing the sysfs files.
Update calling vdo manager since our vdo wrapper has a simple shell
arg parser so it needs args without '='
Also correct using DM_DEV_DIR for 'pvcreate'
Introduce a replacement vdo manager wrapper for testing.
When using test suite on a system without vdo manager (which has got
deprecated) - we still need its functionality to prepare 'vdo volume'
for testing lvm_import_vdo.
The wrapper currently needs 2 binaries from the older 'vdo 6.2' package,
to be named:
oldvdoformat - format VDO metadata with older format
oldvdoprepareforlvm - shift vdo metadata by 1MiB
When converting a VDO volume, the parameter vdo_slabSize was
incorrectly copied as vdo_blockMapCacheSize; however this parameter
is then no longer used for any table line creation, so the wrong
value was only stored in metadata.
Also use just a single get_kb_size_with_unit_ and remove its duplicated
functionality in get_mb_size_with_unit_.
Use $VERB for vdo remove call.
Fixes commit 494372b4ee
"filter-mpath: use multipath blacklist"
to handle wwids with initial type digits 1 and 2, used
for t10 and eui ids. Originally only type 3 (naa) was recognized.
A typo of the filename after --devicesfile should result in a
command error rather than the command falling back to using no
devices file at all. Exception is vgcreate|pvcreate which
create a new devices file if the file name doesn't exist.
When processing historical LVs inside process_each_lv_in_vg for
selection, we need to use dummy "_historical_lv" for select_match_lv.
This is because a historical LV is not an actual LV, but only a tiny
representation with subset of original properties that we recorded
(name, uuid...).
To use the same processing functions we use for full-fledged non-historical
LVs, we need to use the prefilled "_historical_lv" structure which has all
the other missing properties hard-coded.
Allow the use of --vdosettings with lvcreate, lvconvert and lvchange.
Support settings currently only configurable via lvm.conf.
With lvchange we require an inactive LV for changes to be applied.
The setting block_map_era_length has the supported alias block_map_period.
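A hedged usage sketch, assuming the key=value syntax matches the other
*settings options (LV/VG names and sizes are hypothetical):
  lvcreate --type vdo --vdosettings 'block_map_era_length=8192' -L 10G -V 100G -n vdolv vg
  lvchange --vdosettings 'block_map_period=8192' vg/vdolv    # LV must be inactive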
Explicit wwid's from these sections control whether the
same wwid in /etc/multipath/wwids is recognized as a
multipath component. Other non-wwid keywords are not
used, and may require disabling the use of the multipath
wwids file in lvm.conf.
In vgcreate for a shared sanlock VG, if sanlock_write_resource
returns an unexpected error, then make init_vg_sanlock fail,
which will cause the vgcreate to fail.
When a VG has PVs with different device id types,
it would try to use the idtype of the previous PV
in the loop. This would produce an unnecessary warning,
or could lead to using the devname idtype when a better
idtype is available.
In most installations, /dev/sda* or /dev/vda* should be included
in system.devices because the root, home, etc LVs are usually on
sda or vda.
Add a special case warning when a pvscan autoactivation command
sees that /dev/sda* or /dev/vda* are excluded by system.devices,
either not listed or having a different device id.
Change messages that refer to devices being "excluded by filters"
to say just "excluded". This will avoid mistaking the word
"filters" with the lvm.conf filter setting.
Warn if a scsi device is listed in the devices file that
is used by a multipath device that is not listed. This
will happen if a scsi device is listed in the devices
file and then an mpath device is set up to use it.
The way to correct this would be to remove the devices
file entry for the component device and add a new entry
for the multipath device.
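For example, a hedged sketch of that correction using lvmdevices
(device names are hypothetical):
  lvmdevices --deldev /dev/sdb              # drop the multipath component entry
  lvmdevices --adddev /dev/mapper/mpatha    # add the multipath device instead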
When a thin-pool had a delete message queued during an extension operation,
the message was 'lost' and the thin-pool kernel metadata was
left with a thin volume that no longer existed in the lvm2 metadata.
It's more logical to warn about --nolocking in the man page
before it's used rather than after it's used and too late.
Also, warnings are usually for things the user may not know.
dev_name(dev) returns "[unknown]" if there are no names
on dev->aliases. It's meant mainly for log messages.
Many places assume a valid path name is returned, and
use it directly. A caller that wants to use the path
from dev_name() must first check if the dev has any
paths with dm_list_empty(&dev->aliases).
Use dev_cache_get_existing() in a few common, high level
locations where it's obvious that only existing dev-cache
entries are wanted. This can be expanded and used in more
locations (or dev_cache_get can stop creating new entries),
along with some basic checks for cases when a device
has no aliases.
lvm itself creates many situations where a struct device
has no valid paths, when it activates and opens an LV,
does something with it, e.g. zeroing, and then closes
and deactivates it. (dev-cache is intended for PVs, and
the use of LVs should be moved out of dev-cache in a
future patch.)
A different target type for an LV is not an internal error.
I.e. when the target type is replaced with the 'error' type, it should be
reported as a regular warning and not cause interruption of the command with
an internal error.
With very old versions of the DM target driver we have to avoid
trying to use the newuuid setting - otherwise we get an error
during the ioctl preparation phase.
Patch is fixing regression from commit:
988ea0e94c
In a certain disconnected state, a block device is present on
the system, can be opened, reports a valid size, reports the
correct device id (wwid), and matches a devices file entry.
But, reading the device can still fail. In this case,
device_ids_validate() was misinterpreting the read error as
the device having no data/label on it (and no PVID).
The validate function would then clear the PVID from the
devices file entry for the device, thinking that it was
fixing the devices file (making it consistent with the on disk
state.) Fix this by not attempting to check and correct a
devices file entry that cannot be read. Also make this case
explicit in the hints validation code (which was doing the
right thing but indirectly.)
When compiled and used with:
CFLAGS="-fsanitize=address -g -O0"
ASAN_OPTIONS=strict_string_checks=1:detect_stack_use_after_return=1:check_initialization_order=1:strict_init_order=1
we have a few reported issues - they were not normally spotted, since
we were still accessing our own memory - just out of buffer range.
TODO: there is still something to enhance with handling of #orphan vgids
If a dm device is suspended, we can't run blkid on it. But earlier
rules (e.g. 11-dm-parts.rules) might have imported previously scanned
properties from the udev db, in particular if the device had been correctly
set up beforehand (DM_UDEV_PRIMARY_SOURCE_FLAG==1). Symlinks for existing
ID_FS_xyz properties must be preserved in this case. Otherwise lower-priority
devices (such as multipath components) might take over the symlink
temporarily.
Likewise, we shouldn't stop watching a temporarily suspended, but previously
correctly configured dm device.
Signed-off-by: Martin Wilck <mwilck@suse.com>
event based autoactivation is now the only method that lvm
provides for autoactivation.
Setting lvm.conf event_activation=0 can still be used to disable
event based autoactivation commands, but doing so will no longer
enable static autoactivation.
Removes some incorrect and unnecessary checks for other entries
when adding a new device. The removed checks and corrections were
mostly redundant with what is already done by device id matching.
Other checking is reworked so the warnings are a bit different.
When a device has a wwid (from sysfs), but lvm ignores the wwid,
e.g. because it contains an unreliable "QEMU" value, then lvm
falls back to using IDTYPE=devname (the device name) as the
device id. If the device name changes after reboot, then lvm
automatically searches for the PV on other devices to find the
new device name and correct system.devices. When searching for
the PV, lvm skips looking at devices that would use other id types,
e.g. if a device would use a wwid and not a devname, then it
skips checking it. However, it failed to account for the fact
that a device may have wwid that was ignored, in which case it
should be checked.
. error exit means that lvmdevices --update would make a change.
. remove check of PART field from --check because it isn't used.
. unlink searched_devnames file to ensure check|update will search
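A brief usage sketch of the check/update pair described above:
  lvmdevices --check     # exits with an error if --update would make a change
  lvmdevices --update    # applies the corrections to system.devices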
The approach to duplicate VGIDs has been that it is not possible
or not allowed, so the behavior has been undefined. The actual
result was unpredictable and/or broken, and generally unhelpful.
Improve this by recognizing the problem, displaying the VGs,
and printing a warning to fix the problem. Beyond this,
using VGs with duplicate VGIDs remains undefined, but should
work well enough to correct the problem with vgchange -u.
It's possible to create this condition without too much difficulty
by cloning PVs, followed by an incomplete attempt at making the two
VGs unique (vgrename and pvchange -u, but missing vgchange -u.)
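A hedged sketch of the complete de-duplication sequence mentioned above
(VG and device names are hypothetical):
  vgrename oldvg newvg       # give the cloned VG a unique name
  pvchange -u /dev/sdX       # new PV UUIDs for the cloned PVs
  vgchange -u newvg          # the step that was missing: generate a new VGID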
After a vg_write, this function was used to attempt to
make the lvmcache data match the new state written to disk.
It was not updated correctly in many or most cases,
and the resulting lvmcache is not actually used after
vg_write, making the update unnecessary.
The destination vg is first written with the EXPORTED flag,
then the source vg is written, then the destination vg is
written again without the EXPORTED flag. Remove an unnecessary
vg_read of the destination vg just before the second write.
This reverts commit bd2baeaaa6.
This commit broke vgrename because vgrename relies on old bugs
in lvmcache_update_vg_from_write and lvmcache_update_vgname
which need to be fixed first.
The approach to duplicate VGIDs has been that it is not possible
or not allowed, so the behavior has been undefined. The actual
result was unpredictable and/or broken, and generally unhelpful.
Improve this by recognizing the problem, displaying the VGs,
and printing a warning to fix the problem. Beyond this,
using VGs with duplicate VGIDs remains undefined, but should
work well enough to correct the problem with vgchange -u.
It's possible to create this condition without too much difficulty
by cloning PVs, followed by an incomplete attempt at making the two
VGs unique (vgrename and pvchange -u, but missing vgchange -u.)
When storage is lost under a sanlock VG, and kill_vg/drop_vg
are used, sanlock_rem_lockspace() may return an error, but
the cleanup steps should still be performed. Without the
cleanup, gl_lsname_sanlock was not cleared. This caused
future lock requests to fail with ENOLS, but the NO_GL_LS
flag was not set due to gl_lsname_sanlock being set.
This caused lockd_global(sh) in lvm commands to fail when
they could succeed.
Fix from guozhonghua216
Since we check for present DM devices - cache the result for
further use when checking the presence of such a device.
lvm2 uses the cached result for label scan, but also when
it tries to activate or deactivate an LV - however only the simple
target 'striped' is reasonably supported.
Use disable_dm_devs to be able to control when lv_info()
gets cached or uncached results.
TODO: support more types, however this is getting very complicated.
New API extension (internal ATM) for getting a list
of active DM devices with extra features like e.g. uuid.
To easily look up an existing UUID in the device list,
there is: dm_device_list_find_by_uuid()
Once the returned structure is no longer usable, call:
dm_device_list_destroy()
Struct dm_active_device {} holds all the info,
but is always allocated and destroyed within the library.
TODO: once it's stable, copy to libdm
We used to reset 'settings' to their defaults after a command finished.
This however has a drawback: we lose all the logging after this point.
So this patch disables this 'reset' to observe for side-effects.
The lvm shell should get reset when the next command is run -
so this might or might not have some 'hidden' effects.
ATM it looks like nothing really bad should happen - we just should be
able to get more logs - at least from normal commands.
If the transient service remains after it's done, then
it prevents the same transient service from being run
again later if the PVs are detached and reattached
(although the behavior of a second autoactivation is not
well defined and may only work in limited cases.)
Description stolen from linux d/b/rbd.c L3:
rbd.c -- Export ceph rados objects as a Linux block device
16 partitions seem to make sense according to L90:
#define RBD_SINGLE_MAJOR_PART_SHIFT 4
Running *scan -vvvvvvdddddd yields
#filters/filter-type.c:28 /dev/rbd1p5: Skipping: Unrecognised LVM device type 252
#filters/filter-persistent.c:131 filter caching bad /dev/rbd1p5
right now, and adding
types = ["rbd", 252]
to /e/l/lvm.conf (with the matching "252 rbd" in /p/devices) works as a
per-machine fix:
rbd1 252:16 0 1T 1 disk
|-rbd1p1 252:17 0 243M 1 part
|-rbd1p2 252:18 0 1K 1 part
`-rbd1p5 252:21 0 1023.8G 1 part
`-dev01--vg-root 253:0 0 1023.8G 0 lvm
but rbd is supported by upstream so it'd be nice to have it work OOB
Fixes commit "pvscan: match device arg to filter symlink"
which failed to account for the fact that filter entries
are not just path names but include "a" or "r", etc.
The device_hint name in the metadata was meant to prevent
autoactivation from md components, but the name checks were
more general and would catch unnecessary cases.
This fixes an issue related to the optimization in
"pvscan: only add device args to dev cache"
If the devices file is not used, and the lvm.conf filter
accepts devices via symlink names, then those devices won't
be accepted by pvscan for autoactivation. To resolve this,
recognize when the filter contains symlinks and disable the
optimization. When the optimization is disabled, a full
dev_cache_scan is performed, and symlinks are associated
with the device names passed to pvscan. filter-regex
will accept a device if symlinks to that device are accepted.
Improve handling of md components that get through the
filter, like the previous improvement for multipath.
If md components get through the filter and trigger
duplicate PV code, then eliminate any devs entirely
that are not an md device.
If multipath component devices get through the filter and
cause lvm to see duplicate PVs, then check the wwid of the
devs and drop the component devices as if they had been
filtered. If a dm mpath device was found among the duplicates
then use that as the PV, otherwise do not use any of the
components as the PV.
"duplicate PVs" associated with multipath configs will no
longer stop commands from working.
Remove the searched_devnames file in a couple more places:
. When hints need refreshing it's possible that a missing
devices file entry could be found by searching devices
again.
. When a devices file entry devname is first found to be
incorrect, a new search for missing entries may be
useful.
When devnames are used as device ids and devnames change,
then new devices need to be located for the PVs. If the old
devname is now used by a filtered device, this was preventing
the code from searching for the new device, so the PV was
reported as missing.
If the optimized label scan fails (using online files),
then clear the device state prior to falling back to the
standard label_scan. This avoids printing output about
unexpected state.
Consistently create the pvs_lookup file for VGs with
more than one PV. Previously the file creation would be
skipped if all the PVs happened to already be online.
That led to unpredictable results in an uncommon case
(when the last PV to come online is the only PV with
metadata.)
Copy another optimization from pvscan -aay to vgchange -aay.
When using the optimized label scan for only one VG, acquire the
VG lock prior to the scan. This allows vg_read to then skip the
repeated label scan that normally happens after locking the vg.
Include the device name in the /run/lvm/pvs_online/pvid files.
Commands using the pvid file can use the devname to more quickly
find the correct device, vs finding the device using the
major:minor number. If the devname in the pvid file is missing
or incorrect, fall back to using the devno.
For completeness and consistency, adjust the behavior
for some variations of:
vgchange -aay --autoactivation event [vgname]
The current standard use is with a VG name arg, and the
command is only called when all pvs_online files exist.
This is the optimal case, in which only pvs_online devs
are read. This remains the same.
Clean up behaviors for some other unexpected uses of the
command:
. With no VG name arg, the command activates any VGs
that are complete according to pvs_online. If no
pvs_online files exist, it does nothing.
. If a VG name is used but no PVs online files exist for
the VG, or the PVs online files are incomplete, then
consider there could be a problem with the pvs_online
files, and fall back to a full label scan prior to
attempting the activation.
Part of the optimization to avoid a full dev_cache_scan requires
translating major:minor numbers to a device name. If this devno
translation fails, then fall back to doing a full dev_cache_scan
which is slower but certain to provide the info. This preserves
the most important part of the label scanning optimization in the
vgchange aay (avoiding dev_cache_scan is a relatively small part
of the optimized activation compared to label scanning.)
Port another optimization from pvscan -aay to vgchange -aay:
"pvscan: only add device args to dev cache"
This optimization avoids doing a full dev_cache_scan, and
instead populates dev-cache with only the devices in the
VG being activated.
This involves shifting the use of pvs_online files from
the hints interface up to the higher level label_scan
interface. This specialized label_scan is structured
around creating a list of devices from the pvs_online
files. Previously, a list of all devices was created
first, and then reduced based on the pvs_online files.
The initial step of listing all devices was slow when
thousands of devices are present on the system.
This optimization extends the previous optimization that
used pvs_online files to limit the devices that were
actually scanned (i.e. reading to identify the device):
"vgchange -aay: optimize device scan using pvs_online files"
Port the old pvscan -aay scanning optimization to vgchange -aay.
The optimization uses pvs_online files created by pvscan --cache
to derive a list of devices to use when activating a VG. This
allows autoactivation of a VG to avoid scanning all devices, and
only scan the devices used by the VG itself. The optimization is
applied internally using the device hints interface.
The new option "--autoactivation event" is given to pvscan and
vgchange commands that are called by event activation. This
informs the command that it is being used for event activation,
so that it can apply checks and optimizations that are specific
to event activation. Those include:
- skipping the command if lvm.conf event_activation=0
- checking that a VG is complete before activating it
- using pvs_online files to limit device scanning
The information in /run/lvm/pvs_online/<pvid> files can
be used to build a list of devices for a given VG.
The pvscan -aay command has long used this information to
activate a VG while scanning only devices in that VG, which
is an important optimization for autoactivation.
This patch implements the same thing through the existing
device hints interface, so that the optimization can be
applied elsewhere. A future patch will take advantage of
this optimization in vgchange -aay, which is now used in
place of pvscan -aay for event activation.
When a device id is set for a device, using an idtype other
than devname, it means that sysfs has been used with the device
to match the device id. So, checking for a sysfs entry for the
device in filter-sysfs is redundant. For any other cases not
covered by this (e.g. devname ids), have filter-sysfs simply
stat /sys/dev/block/major:minor to test if the device exists
in sysfs.
The extensive processing done by filter-sysfs init is removed.
It was taking an immense amount of time with many devices, e.g.
. 1024 PVs in 520 VGs
. 520 concurrent vgchange -ay <vgname> commands
. vgchange scans only PVs in the named VG (based on pvs_online
files from a pending patch)
A large number of the vgchange commands were taking over 1 min,
and nearly half of that time was used by filter-sysfs init.
With this patch, the vgchange commands take about half the time.
Help bootstrapping existing shared vgs into the devices file.
Reading the vg in vgimportdevices would require locking to be
started, but vgchange lockstart won't see the vg if it's not
in the devices file. The lvmlockd locks are not protecting
vg modifications so skipping them here won't be a problem.
As we are not using 'enable-compat' for anything, remove this section.
Also remove duplicated check for blkid.
Move setting of some AC_ARG_ENABLE defaults into the macro so it's always
defined.
When scanning the configured /dev dir, avoid entering
directories on a different filesystem.
This minimizes the risk that we will block on e.g. entering
a directory with a mount point.
Enhance logic for checking supported systemd version,
while doing only a single check for systemd package.
For version checking use PKG_CHECK_EXISTS() macro.
Also use one pkg check for blkid.
Avoid checking version for thin/cache_check when tools are not present
on system.
Resolve event_activation configure option just once.
Do not print debug_devs about 'bad' filtering when
the filter has already printed the reason for skipping.
Do not trace more than once about backup being disabled.
No debug output when an unlinked file does not exist in pvscan.
Automatically handle the new settings
--disable-systemd-journal
--disable-appmachineid
Both settings check for the presence of the appropriate header files.
In case they are present, the build will automatically try to build with
them (adding a systemd dependency).
The user can disable them at any time and drop the systemd dependency.
Also add a --with-default-use-devices-file configure option to
select the default value for this option automatically.
For the moment keep the upstream default as 0.
Reporting non-PVs / "all devices" is only done by
pvs -a or pvdisplay -a, so avoid the work managing
a list of all devices in process_each_pv.
In the case when it's needed, use the results of
label_scan which already determines which devs
are not PVs.
Just setting lvm.conf level=N should not send messages to
syslog (now the journal by default.)
Sending messages to syslog should require setting lvm.conf
log { syslog=1 level=N }.
'.ID_FS_TYPE_NEW' is a custom property added by an LVM UDev rule
which is now being removed and 'ID_FS_TYPE' has the same value.
Signed-off-by: Vojtech Trefny <vtrefny@redhat.com>
new udev rule 69-dm-lvm.rules replaces
69-dm-lvm-meta.rules and lvm2-pvscan.service
udev rule calls pvscan directly on the added device
pvscan output indicates if a complete VG can be activated
udev env var LVM_VG_NAME_COMPLETE is used to pass complete
VG name from pvscan to the udev rule
udev rule uses systemd-run to run vgchange -aay <vgname>
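Roughly, the sequence the new rule drives looks like this (a hedged sketch
with a hypothetical device and VG name):
  pvscan --cache --listvg --checkcomplete --udevoutput /dev/sdb
  # prints LVM_VG_NAME_COMPLETE='vg' once the last PV of the VG arrives
  systemd-run vgchange -aay vg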
Configure via lvm.conf log/journal or command line --journal.
Possible values:
"command" records command information.
"output" records default command output.
"debug" records full command debugging.
Multiple values can be set in lvm.conf as an array.
One value can be set in --journal which is added to
values set in lvm.conf
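A hedged configuration sketch using the values listed above (the array
syntax follows the usual lvm.conf conventions):
  lvm.conf:      log { journal = [ "command", "output" ] }
  command line:  vgs --journal debug
The --journal value is added to the values configured in lvm.conf.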
pvscan --cache <dev>
. read only dev
. create online file for dev
pvscan --listvg <dev>
. read only dev
. list VG using dev
pvscan --listlvs <dev>
. read only dev
. list VG using dev
. list LVs using dev
pvscan --cache --listvg [--checkcomplete] <dev>
. read only dev
. create online file for dev
. list VG using dev
. [check online files and report if VG is complete]
pvscan --cache --listlvs [--checkcomplete] <dev>
. read only dev
. create online file for dev
. list VG using dev
. list LVs using dev
. [check online files and report if VG is complete]
. [check online files and report if LVs are complete]
[--vgonline]
can be used with --checkcomplete, to enable use of a vg online
file. This results in only the first pvscan command to see
the complete VG to report 'VG complete', and others will report
'VG finished'. This allows the caller to easily run a single
activation of the VG.
[--udevoutput]
can be used with --cache --listvg --checkcomplete, to enable
an output mode that prints LVM_VG_NAME_COMPLETE='vgname' that
a udev rule can import, and prevents other output from the
command (other output causes udev to ignore the command.)
The list of complete LVs is meant to be passed to lvchange -aay,
or the complete VG used with vgchange -aay.
When --checkcomplete is used, lvm assumes that the output
will be used to trigger event-based autoactivation, so the pvscan
does nothing if event_activation=0 and --checkcomplete is used.
Example of listlvs
------------------
$ lvs -a vg -olvname,devices
LV Devices
lv_a /dev/loop0(0)
lv_ab /dev/loop0(1),/dev/loop1(1)
lv_abc /dev/loop0(3),/dev/loop1(3),/dev/loop2(1)
lv_b /dev/loop1(0)
lv_c /dev/loop2(0)
$ pvscan --cache --listlvs --checkcomplete /dev/loop0
pvscan[35680] PV /dev/loop0 online, VG vg incomplete (need 2).
VG vg incomplete
LV vg/lv_a complete
LV vg/lv_ab incomplete
LV vg/lv_abc incomplete
$ pvscan --cache --listlvs --checkcomplete /dev/loop1
pvscan[35681] PV /dev/loop1 online, VG vg incomplete (need 1).
VG vg incomplete
LV vg/lv_b complete
LV vg/lv_ab complete
LV vg/lv_abc incomplete
$ pvscan --cache --listlvs --checkcomplete /dev/loop2
pvscan[35682] PV /dev/loop2 online, VG vg is complete.
VG vg complete
LV vg/lv_c complete
LV vg/lv_abc complete
Example of listvg
-----------------
$ pvscan --cache --listvg --checkcomplete /dev/loop0
pvscan[35684] PV /dev/loop0 online, VG vg incomplete (need 2).
VG vg incomplete
$ pvscan --cache --listvg --checkcomplete /dev/loop1
pvscan[35685] PV /dev/loop1 online, VG vg incomplete (need 1).
VG vg incomplete
$ pvscan --cache --listvg --checkcomplete /dev/loop2
pvscan[35686] PV /dev/loop2 online, VG vg is complete.
VG vg complete
The new system_id_source="appmachineid" will cause
lvm to use an lvm-specific derivation of the machine-id,
instead of the machine-id directly. This is now
recommended in place of using machineid.
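A minimal lvm.conf sketch, assuming the setting lives in the global
section like the other system_id options:
  global {
      system_id_source = "appmachineid"
  }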
Do not store the full path with each archived name; this reduces memory
usage if the directory has thousands of entries. Just add the 'dir' path
when needed.
Also emit an info message to the user if the total size of archived
files for a VG is more than 128MiB or 8192 files.
TODO: Consider whether adding a new 'lvm.conf archive{option}' to support
trimming these wild archive sizes would make the situation better.
We already support retain_min && retain_days - but if a user is
generating too many and too large archives within minutes - maybe archiving
should be disabled by the user - as it's not producing anything largely usable
and just slows down the command ??
If we add 'retain_max & retain_max_size' the conditions will go against
each other and we need to choose priorities.
Consider a missing config tree from vg read to be an internal error,
since we do not want to 'regenerate' this one in an expensive parsing way.
Also, if there is any failure on recreating the committed VG, make it
a 'vg_write' error as well.
Corrupt metadata text (with good mda header) was being handled
in the label_scan phase, but not in the vg_read phase. This
was sufficient because metadata areas would always be read and
checksummed during label_scan (metadata parsing was skipped
previously as an optimization.)
This changed with the optimization in
commit 61a6f9905e
"metadata: optimize reading metadata copies in scan"
Now, some metadata areas will not be read and checksummed
at all during the label_scan phase, only during the vg_read
phase. This means that bad metadata text may first be detected
in the vg_read phase. So, add equivalent bad metadata handling
to the vg_read path to match the label_scan path.
With larger metadata, decoding 'localtime()' for the creation time hint
of every LV may cause excessive checking of the /etc/localtime file.
Set TZ to ":/etc/localtime" so glibc reads this file just once
instead of checking every time whether anything has changed.
While being in lockless scanning phase, we can avoid reading and checking
matching metadata copies if we already know them from other PV
and just rely on matching metadata header information.
These copies will be examined later during locked metadata read/write
access.
This patch may postpone discovering some read failures to locked phase.
When creating lvm2 metadata for a VG, lvm2 allocates a buffer,
and if the buffer is not big enough, it is 'reallocated' bigger,
and the whole metadata creation is repeated until the metadata fits.
We can try to use the 'previous' metadata size as a hint to reduce the
looping here.
Preserve the computed crc32 check from the first written PV, just like we
preserve the generated metadata.
Also there is no need to call crc32 twice on a wrapping buffer with 2
calculations; the result must always be the same as with a single crc32 check.
Kernel patch 8b638081bd4520f63db1defc660666ec5f65bc15
introduced support to return UUID in DM_LIST_DEVICES_CMD.
Useful when asking for UUID of each device where the list
could be now returned directly with NAME && UUID for each device.
Returning UUID is done in backward-compatible way. There's one unused
32-bit word value after the event number. This patch sets the bit
DM_NAME_LIST_FLAG_HAS_UUID if UUID is present and
DM_NAME_LIST_FLAG_DOESNT_HAVE_UUID if it isn't (if none of these bits is
set, then we have an old kernel that doesn't support returning UUIDs). The
UUID is stored after this word. The 'next' value is updated to point after
the UUID, so that old version of libdevmapper will skip the UUID without
attempting to interpret it.
It's basically irrelevant which value we assign to optarg,
since it's set by the getopt() function, but the Coverity tool
incorrectly reports a possible NULL dereference.
sysconf() may also return -1, although rather theoretically.
Default to 4K should such a case happen.
Also call it just once in the function and keep the result as a static variable.
The analyzer here was rather confused about the possibility of losing previously
assigned device pointers - fixed by passing zero-initialized memory
before the first assignment.
Ensure only a non-NULL 'du' pointer is dereferenced, although the comment
at the last 'du' assignment already suggests the NULL case should not happen.
So just being explicit.
The error path in _lockd_retrive_vg_pv_list() did not zero the released path,
causing a possible double-free later in the code.
Fix it by using one single function for freeing the lock_pvs structure.
The cmd memory space is allocated by zalloc; if the registration
fails, it is not released.
Although this code would only ever be triggered in the case
of some internal (likely compilation) bug.
Signed-off-by: Wu Guanghao <wuguanghao3@huawei.com>
When building lvm2 in Gentoo/ChromeOS with the ASAN memory
sanitizer enabled, man-generator fails with the following
error. Initializing makes the error go away.
* SUMMARY: MemorySanitizer: use-of-uninitialized-value /build/amd64-generic/tmp/portage/sys-fs/lvm2-2.02.187-r3/work/LVM2.2.02.187/tools/man-generator.c:3316:6 in _include_description_file
* Exiting
* ASAN error detected:
* ==2548047==WARNING: MemorySanitizer: use-of-uninitialized-value
* #0 0x558b00ab4730 in _include_description_file /build/amd64-generic/tmp/portage/sys-fs/lvm2-2.02.187-r3/work/LVM2.2.02.187/tools/man-generator.c:3316:6
* #1 0x558b00ab4730 in _print_man /build/amd64-generic/tmp/portage/sys-fs/lvm2-2.02.187-r3/work/LVM2.2.02.187/tools/man-generator.c:3426:21
* #2 0x558b00ab4730 in main /build/amd64-generic/tmp/portage/sys-fs/lvm2-2.02.187-r3/work/LVM2.2.02.187/tools/man-generator.c:3570:7
* #0 0x7fa9b2cbb807 in find_derivation /var/tmp/portage/cross-x86_64-cros-linux-gnu/glibc-2.33-r8/work/glibc-2.33/iconv/gconv_db.c:583:15
* #1 0x558b00a29559 in ?? ??:0
*
* Uninitialized value was created by an allocation of 'statbuf.i.i' in the stack frame of function 'main'
* #0 0x558b00ab1d4d in main /build/amd64-generic/tmp/portage/sys-fs/lvm2-2.02.187-r3/work/LVM2.2.02.187/tools/man-generator.c:3505
It's not a good idea to change passed 'argv[]' and replace it with
pointers to local stack - although in this case we are not using
this argv[] after return from this function.
Masking the Coverity warning for strncpy() would
actually need to copy the buffer from 'tmp_name' instead of 'str'.
So replace it directly with a single 'strncpy()' again for better readability,
and just mask out the warning reported for this strncpy instance
(so we do not need to put a comment on every call of strcpy_name_len).
Looks like newer versions of mkfs.ext4 consume more 'real' disk space
when formatting virtual volumes - so double the size of the thin-pool to
have enough backend chunks for provisioning.
The existing mechanism was not able to trace a root volume issue.
Simplify the functionality by simply using the activated flag
and tracing the dtree in reverse order.
When cache creation fails on the table reload path, implement a more
advanced revert solution that tries to restore the state of the LVM
metadata to how it looked before the caching actually started.
Loading a table line with invalid MQ/SMQ policy settings causes immediate
rejection - to prevent such failures, automatically filter out invalid
settings before the line is uploaded by lvm2.
For invalid settings, issue a warning informing the user how to remove
them.
This solution is used to keep running cached LVs that might have
been created in the past with invalid settings which were actually
unused due to another code bug.
When generating the table line for the cache target,
the estimation of added arguments was calculated incorrectly,
as '?' is evaluated after '+'.
However the result was 'masked' by the
Reported-by: Jian Cai jcai19
New versions of the kvdo module expose statistics at a new location:
/sys/block/dm-XXX/vdo/statistics/...
Enhance lvm2 to access this location first.
Also if the statistics info is missing, make it 'debug' level info,
so it does not fail the 'lvs' command.
Do not call 'dmsetup info' for non-dm devices.
Better handling for VGNAME and LVNAME - when converting an LV as the
backend device, automatically recognize it and reuse the LV name for the VDO LV.
Add a prompt for final confirmation before the actual conversion is started
(once confirmed, lvm2 commands no longer prompt, to avoid leaving the
conversion in the middle of its process.)
Compilation needs to generate a 'C' locale sorted command file
definition. To always enforce 'C' sorting rules, use LC_ALL
instead of LANG, as LANG settings can be overruled by
other LC settings like LC_COLLATE and may result in a miscompiled
lvm2 binary if the locale's ordering differs from 'C'.
Reported-by: jmp-lvm2@ookaze.fr
Since VDO always returns 'zero' on unprovisioned reads
and every provisioned block is always 'zeroed' on partial writes,
we can avoid 'zeroing' of such LVs.
Some device id types can only be used with specific device major
numbers, so use this restriction to avoid some comparisons.
This is more efficient, and can avoid some incorrect matches.
pvid and vgid are sometimes a null-terminated string, and
other times a 'struct id', and the two types were often
cast between each other. When a struct id was cast to a char
pointer, the resulting string would not necessarily be null
terminated. Casting a null-terminated string id to a
struct id is fine, but is still avoided when possible.
A struct id is: int8_t uuid[ID_LEN]
A string id is: char pvid[ID_LEN + 1]
A convention is introduced to help distinguish them:
- variables and struct fields named "pvid" or "vgid"
should be null-terminated strings.
- variables and struct fields named "pv_id" or "vg_id"
should be struct id's.
- examples:
char pvid[ID_LEN + 1];
char vgid[ID_LEN + 1];
struct id pv_id;
struct id vg_id;
Function names also attempt to follow this convention.
Avoid casting between the two types as much as possible,
with limited exceptions when known to be safe and clearly
commented.
Avoid using variations of strcpy and strcmp, and instead
use memcpy/memcmp with ID_LEN (with similar limited
exceptions possible.)
- Use a new function for all instances of copying
a null-terminated string into a fixed size struct
field that is not null-terminated.
- use memcpy when copying between struct fields of
the same size
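A minimal sketch of this convention (ID_LEN is 32 in lvm2; the helper names below are illustrative, not the new lvm2 functions):
```c
#include <stdint.h>
#include <string.h>

#define ID_LEN 32

struct id {
	int8_t uuid[ID_LEN];            /* not null-terminated */
};

/* Copy a null-terminated pvid/vgid string into a struct id field. */
static void id_from_str(struct id *out, const char pvid[ID_LEN + 1])
{
	memcpy(out->uuid, pvid, ID_LEN);
}

/* Compare two ids without assuming null termination. */
static int id_equal(const struct id *a, const struct id *b)
{
	return !memcmp(a->uuid, b->uuid, ID_LEN);
}
```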
When multiple lvchange refresh processes execute at the same time and issue
suspend/resume ioctls on the same dm device, some of these commands fail
because the dm device has already changed status, and the ioctl returns EINVAL in the _do_dm_ioctl function.
To avoid this problem, add the READ_FOR_ACTIVATE flag to the lvchange refresh process;
it will hold the LCK_WRITE lock and avoid suspend/resume on the same dm at the same time.
Signed-off-by: Long YunJian <long.yunjian@zte.com.cn>
Signed-off-by: Yi Wang <wang.yi59@zte.com.cn>
Fix memory leaks on error paths for the allocated
path and backup_file name by converting the allocation to
dm_pool_alloc, and also change the devicefile structure to contain
an embedded path as the last struct member - so we can allocate
only the needed string size from the pool instead of PATH_MAX.
TODO: the 'mf' struct still needs to be fixed.
Recent commit 84bd394cf9
"writecache: use block size 4096 when no fs is found"
failed to account for the case where writecache is attached
to thin pool data. Checking fs block size on the thin pool
data LV is wrong, and checking the fs block on each thin LV
would be impractical, so default to 512 which cannot break
any existing file systems, and require the user to specify
4k when appropriate.
When splitting a VG with a thin/cache pool volume, handle pmspare during
such a split: allocate a new pmspare in the new VG or extend an existing pmspare
there, and eventually drop the pmspare in the original VG if it is no longer needed
there.
As pmspare is an invisible LV it is not removed automatically,
since vgremove removes only visible LVs and their dependent LVs.
If there was no other thin/cache pool volume, such a pmspare stayed
undeleted and caused command failure.
So handle such a forgotten pmspare explicitly and remove it.
When checking the active component of a thin LV with an external origin,
we need to check whether the external origin isn't already active.
For this, however, we need to use a layered check for the -real device.
The lvm2-pvscan service runs pvscan --cache -aay <dev> for
device addition, and pvscan --cache <dev> on device removal.
For event_activation=0, the addition does nothing. Fix
device removal to also do nothing for event_activation=0.
Device removal was previously doing some work to process
the removal which slowed down stopping lvm2-pvscan services.
sysfs-based multipath component detection quit if a
device had multiple holders, and in this case would
fail to detect a device was an mpath component.
For the pv_min_size check, always use dev_get_size()
which is commonly used elsewhere, and don't bother
asking libudev for the device size when
external_device_info_source=udev.
related to config settings:
obtain_device_list_from_udev (controls if lvm gets
a list of devices from readdir /dev or from libudev)
external_device_info_source (controls if lvm asks
libudev for device information)
. Make the obtain_device_list_from_udev setting
affect only the choice of readdir /dev vs libudev.
The setting no longer controls if udev is used for
device type checks.
. Change obtain_device_list_from_udev default to 0.
This helps avoid boot timeouts due to slow libudev
queries, avoids reported failures from
udev_enumerate_scan_devices, and avoids delays from
"device not initialized in udev database" errors.
Even without errors, for a system booting with 1024 PVs,
lvm2-pvscan times improve from about 100 sec to 15 sec,
and the pvscan command from about 64 sec to about 4 sec.
. For external_device_info_source="none", remove all
libudev device info queries, and use only lvm
native device info.
. For external_device_info_source="udev", first check
lvm native device info, then check libudev info.
. Remove sleep/retry loop when attempting libudev
queries for device info. udev info will simply
be skipped if it's not immediately available.
. Only set up a libudev connection if it will be used by
obtain_device_list_from_udev/external_device_info_source.
. For native multipath component detection, use
/etc/multipath/wwids. If a device has a wwid
matching an entry in the wwids file, then it's
considered a multipath component. This is
necessary to natively detect multipath
components when the mpath device is not set up.
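A rough sketch of the wwids check (assuming the usual file format with '#' comments and entries wrapped in slashes; not the exact lvm2 code):
```c
#include <stdio.h>
#include <string.h>

/* Return 1 if the given wwid appears in /etc/multipath/wwids. */
static int wwid_in_wwids_file(const char *wwid)
{
	FILE *fp = fopen("/etc/multipath/wwids", "r");
	char line[1024];
	int found = 0;

	if (!fp)
		return 0;       /* no wwids file: cannot call it a component */

	while (fgets(line, sizeof(line), fp)) {
		if (line[0] == '#')
			continue;               /* skip comments */
		if (strstr(line, wwid)) {       /* entries look like /<wwid>/ */
			found = 1;
			break;
		}
	}

	fclose(fp);
	return found;
}
```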
How to trigger:
```
~ # export LVM_SYSTEM_DIR=_
~ # pvscan
No matching physical volumes found
double free or corruption (!prev)
Aborted (core dumped)
```
When LVM_SYSTEM_DIR is empty, _load_config_file() won't be called.
When LVM_SYSTEM_DIR is not empty, cfl->cft is linked into cmd->config_files
by _load_config_file()@lib/commands/toolcontext.c.
The code that dumps core is in _destroy_config()@lib/commands/toolcontext.c:
```
/* CONFIG_FILE/CONFIG_MERGED_FILES */
if ((cft = remove_config_tree_by_source(cmd, CONFIG_MERGED_FILES)))
config_destroy(cft);
else if ((cft = remove_config_tree_by_source(cmd, CONFIG_FILE)))
config_destroy(cft); <=== first free the cft
dm_list_iterate_items(cfl, &cmd->config_files)
config_destroy(cfl->cft); <=== double free the cft
```
Fixes: c43f2f8ae0
Signed-off-by: Heming Zhao <heming.zhao@suse.com>
expands commit d5a06f9a7d
"pvscan: skip indexing devices used by LVs"
The dev cache index is expensive and slow, so limit it
to commands that are used to observe the state of lvm.
The index is only used to print warnings about incorrect
device use by active LVs, e.g. if an LV is using a
multipath component device instead of the multipath
device. Commands that continue to use the index and
print the warnings:
fullreport, lvmdiskscan, vgs, lvs, pvs,
vgdisplay, lvdisplay, pvdisplay,
vgscan, lvscan, pvscan (excluding --cache)
A couple other commands were borrowing the DEV_USED_FOR_LV
flag to just check if a device was actively in use by LVs.
These are converted to the new dev_is_used_by_active_lv().
Add tool 'vdoimport' to support easy conversion of existing VDO-manager-managed
VDO volumes into lvm2-managed VDO LVs.
When the converted physical volume is already a logical volume, the conversion
happens within the VG itself, just with validation for extent_size, so
the virtually sized logical VDO volume size can be expressed in extents.
Example of basic simple usage:
vdoimport --name vg/vdolv /dev/mapper/vdophysicalvolume
dev_cache_index_devs() is taking a large amount of time
when there are many PVs. The index keeps track of
devices that are currently in use by active LVs. This
info is used to print warnings for users in some limited
cases.
The checks/warnings that are enabled by the index are not
needed by pvscan --cache, so disable it in this case.
This may be expanded to other cases in future commits.
dev_cache_index_devs should also be improved in another
commit to avoid the extreme delays with many devices.
There have been two separate checks for metadata
validity: first that the metadata text begins with
a valid VG name, and second the checksum of the
metadata text. These happen in different places,
which means there have been two separate error paths
for invalid metadata. This also causes large metadata
to be read in multiple parts, the first part is read
just to check the vgname, and then remaining parts are
read later when the full metadata is needed.
This patch moves the vg name verification so it's
done just before the checksum verification, which
results in a single error path for invalid metadata,
and causes the entire metadata to be read together
rather than in parts from different parts of the code.
If label_scan encounters bad vg metadata, invalidate
bcache data for the device and reread the mda_header
and metadata text back to back. With concurrent commands
modifying large metadata, it's possible that the entire
metadata area can be rewritten in the time between a
command reading the mda_header and reading the metadata
text that the header points to. Since the label_scan
is just assembling an initial overview of devices, it
doesn't use locking to serialize with other commands
that may be modifying the vg metadata at the same time.
Recent commit 84bd394cf9
"writecache: use block size 4096 when no fs is found"
changed the default writecache block size from 512 to 4096
when no file system is detected. The fs block size detection
requires the libblkid BLOCK_SIZE feature, so skip tests on
systems without this. Otherwise, 4096 writecache added to
512 xfs leads to fs io or mount failures.
Add a profilable, configurable setting for the vdo pool header size, which is
used as 'extra' empty space at the front and end of the vdo-pool device
to avoid having a disk in the system that may have the same data as the real
vdo LV.
For some conversion cases however we may need to allow using '0' header size.
TODO: in this case we may eventually avoid adding 'linear' mapping layer
in future - but this requires further modification over lvm code base.
In testing where we inject large amounts of additional output in stderr
we can occasionally get truncated stdout from lvm. Catching and dumping
the json for debug before we re-raise the exception. As this doesn't
happen without the error injecting wrapper around lvm, the error seems to
be with the wrapper.
Signed-off-by: Tony Asleson <tasleson@redhat.com>
When exec'ing lvm, it's possible to get large amounts of both stdout
and stderr depending on the state of lvm and the size of the lvm
configuration. If we allow any of the buffers to fill we can end
up deadlocking the process. Ensure we are handling stdout & stderr
during lvm execution.
Ref. https://bugzilla.redhat.com/show_bug.cgi?id=1966636
Signed-off-by: Tony Asleson <tasleson@redhat.com>
When we are walking the new lvm state comparing it to the old state we can
run into an issue where we remove a VG that is no longer present from the
object manager, but is still needed by LVs that are left to be processed.
When we try to process existing LVs to see if their state needs to be
updated, or if they need to be removed, we need to be able to reference the
VG that was associated with it. However, if it's been removed from the
object manager we fail to find it which results in:
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/lvmdbusd/utils.py", line 666, in _run
self.rc = self.f(*self.args)
File "/usr/lib/python3.6/site-packages/lvmdbusd/fetch.py", line 36, in _main_thread_load
cache_refresh=False)[1]
File "/usr/lib/python3.6/site-packages/lvmdbusd/lv.py", line 146, in load_lvs
lv_name, object_path, refresh, emit_signal, cache_refresh)
File "/usr/lib/python3.6/site-packages/lvmdbusd/loader.py", line 68, in common
num_changes += dbus_object.refresh(object_state=o)
File "/usr/lib/python3.6/site-packages/lvmdbusd/automatedproperties.py", line 160, in refresh
search = self.lvm_id
File "/usr/lib/python3.6/site-packages/lvmdbusd/lv.py", line 483, in lvm_id
return self.state.lvm_id
File "/usr/lib/python3.6/site-packages/lvmdbusd/lv.py", line 173, in lvm_id
return "%s/%s" % (self.vg_name_lookup(), self.Name)
File "/usr/lib/python3.6/site-packages/lvmdbusd/lv.py", line 169, in vg_name_lookup
return cfg.om.get_object_by_path(self.Vg).Name
Instead of removing objects from the object manager immediately, we will
keep them in a list and remove them once we have processed all of the state.
Ref:
https://bugzilla.redhat.com/show_bug.cgi?id=1968752
When executing IDM testing, the command reports the error:
/usr/bin/install: cannot stat ‘lib/idm_inject_failure’: No such file
or directory
Since there was a stale program in my local environment, the Makefile
always used the stale program and didn't report any issue. A
brand new repository doesn't contain an idm_inject_failure program,
and the Makefile doesn't build it without the dependency being specified, so
the test command complains that the file 'idm_inject_failure' is not found.
This patch adds the dependency 'lib/idm_inject_failure' for IDM testing,
so the injection program is built first and the error is dismissed.
Signed-off-by: Leo Yan <leo.yan@linaro.org>
When adding a device to the devices file with --adddev, lvm
by default chooses the best device ID type for the new device.
The new --deviceidtype option allows the user to override the
built in preference. This is useful if there's a problem with
the default type, or if a secondary type is preferable.
If the specified deviceidtype does not produce a device ID,
then lvm falls back to the preference it would otherwise use.
Previously an explicit call of backup was necessary (and often
either forgotten or over-used). With this patch the need to
store a backup is remembered at vg_commit, and once the VG is unlocked,
the committed metadata is automatically stored in the backup file.
This may possibly alter some messages printed by a command, since the
backup is now taken later.
Instead of calling explicit archive within the command processing logic,
move this step to the first vg_write() call, which will automatically
store an archive of the committed metadata.
This slightly changes some error paths: the error in archiving
used to be detected earlier in the command, while now some ongoing command
'actions' might have happened already, but they will simply be scratched in case
of error (since the new metadata would not have been written anyway).
So the general effect should only be some change in command message ordering.
As SUSE build tool reports the warning:
lvmlockd-core.c: In function 'client_thread_main':
lvmlockd-core.c:4959:37: warning: '%d' directive output may be truncated writing between 1 and 10 bytes into a region of size 6 [-Wformat-truncation=]
snprintf(buf, sizeof(buf), "path[%d]", i);
^~
lvmlockd-core.c:4959:31: note: directive argument in the range [0, 2147483647]
snprintf(buf, sizeof(buf), "path[%d]", i);
^~~~~~~~~~
To dismiss the compilation warning, enlarge the array "buf" to 17
bytes to support the max signed integer: string format 6 bytes + signed
integer 10 bytes + terminal char "\0".
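A minimal sketch of the sizing (17 = 6 format characters + up to 10 digits for INT_MAX + terminating '\0'):
```c
#include <stdio.h>
#include <limits.h>

int main(void)
{
	char buf[17];   /* "path[" + "]" (6 chars) + 10 digits + '\0' */

	snprintf(buf, sizeof(buf), "path[%d]", INT_MAX);
	printf("%s\n", buf);    /* prints path[2147483647] without truncation */
	return 0;
}
```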
Reported-by: Heming Zhao <heming.zhao@suse.com>
Signed-off-by: Leo Yan <leo.yan@linaro.org>
This patch tests timeout handling after activating an LV in shareable
mode. It has the same logic as the test for LV exclusive mode,
except it verifies locking with shareable mode.
On the host A:
make check_lvmlockd_idm \
LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3,/dev/sdl3 \
LVM_TEST_MULTI_HOST=1 T=multi_hosts_lv_sh_timeout_hosta.sh
On the host B:
make check_lvmlockd_idm \
LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3,/dev/sdl3 \
LVM_TEST_MULTI_HOST=1 T=multi_hosts_lv_sh_timeout_hostb.sh
Signed-off-by: Leo Yan <leo.yan@linaro.org>
This patch tests timeout handling after activating an LV in exclusive
mode. It contains two scripts, for host A and host B separately.
The script on host A first creates VGs and LVs based on the passed
backing devices; every backing device is used for a dedicated VG and an LV is
created in each VG. Afterwards, all LVs are activated by host A,
so host A acquires the lease for these LVs. Then the test is designed
to fail on host A.
After host A fails, host B starts to run the paired testing script;
it first fails to activate the LVs since the locks are leased by
host A; after lease expiration (after 70s), host B can acquire the lease
for the LVs and can operate the LVs and VGs.
On the host A:
make check_lvmlockd_idm \
LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3,/dev/sdl3 \
LVM_TEST_MULTI_HOST=1 T=multi_hosts_lv_ex_timeout_hosta.sh
On the host B:
make check_lvmlockd_idm \
LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3,/dev/sdl3 \
LVM_TEST_MULTI_HOST=1 T=multi_hosts_lv_ex_timeout_hostb.sh
Signed-off-by: Leo Yan <leo.yan@linaro.org>
This patch adds LV testing on multiple hosts. There are two scripts:
the script multi_hosts_lv_hosta.sh is used to create LVs on one host,
and the second script multi_hosts_lv_hostb.sh will acquire the
global lock and VG lock, and remove the VGs. The testing flow verifies the
locking operations between two hosts with lvmlockd and the backend
locking manager.
On the host A:
make check_lvmlockd_idm \
LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3,/dev/sdl3 \
LVM_TEST_MULTI_HOST=1 T=multi_hosts_lv_hosta.sh
On the host B:
make check_lvmlockd_idm \
LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3,/dev/sdl3 \
LVM_TEST_MULTI_HOST=1 T=multi_hosts_lv_hostb.sh
Signed-off-by: Leo Yan <leo.yan@linaro.org>
This patch adds VG testing on multiple hosts. There are two scripts:
the script multi_hosts_vg_hosta.sh is used to create VGs on one host,
and the second script multi_hosts_vg_hostb.sh afterwards will acquire the
global lock and VG lock, and remove the VGs. The testing flow verifies the
locking operations between two hosts with lvmlockd and the backend
locking manager.
On the host A:
make check_lvmlockd_idm \
LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3,/dev/sdl3 \
LVM_TEST_MULTI_HOST=1 T=multi_hosts_vg_hosta.sh
On the host B:
make check_lvmlockd_idm \
LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3,/dev/sdl3 \
LVM_TEST_MULTI_HOST=1 T=multi_hosts_vg_hostb.sh
Signed-off-by: Leo Yan <leo.yan@linaro.org>
If the IDM lock manager fails to access drives - either partially (e.g.
it fails to access one of three drives) or totally - it should handle
these cases properly. When drives fail partially, if the lock manager
can still renew the lease for the locking, then it doesn't need to take any
action for the drive failure; otherwise, if it detects it cannot renew
the locking majority, it needs to immediately kill the VG from
lvmlockd.
This patch adds a test to verify the IDM lock manager failure handling;
the command can be used as below:
# make check_lvmlockd_idm \
LVM_TEST_BACKING_DEVICE=/dev/sdp3,/dev/sdl3,/dev/sdq3 \
LVM_TEST_FAILURE=1 T=idm_ilm_failure.sh
Signed-off-by: Leo Yan <leo.yan@linaro.org>
If the fabric is broken instantly, some of the drives connected on
the fabric disappear from the system. In this case, according to the
locking algorithm in IDM, the lease is not lost, since half of the drives
are still alive and can renew the lease. On the
other hand, since the VG lock requires acquiring a majority of the drives,
and a failure of half the drives means the majority cannot be reached,
the VG lock cannot be acquired and thus the VG metadata cannot be changed.
This patch adds a half-brain failure test for IDM; the test command is as
below:
# make check_lvmlockd_idm \
LVM_TEST_BACKING_DEVICE=/dev/sdp3,/dev/sdo3 LVM_TEST_FAILURE=1 \
T=idm_fabric_failure_half_brain.sh
Signed-off-by: Leo Yan <leo.yan@linaro.org>
If the fabric is broken instantly, the drives connected on the fabric
will disappear from the system. In the worst case, the lease times out
and the drives cannot recover. A new test is added to emulate
this scenario: it uses a drive for LVM operations and this drive is also
used for the locking scheme; if the drive and all its associated paths (if
the drive supports multiple paths) are disconnected, the lock manager
should stop the lockspace for the VG/LVs.
Afterwards, if the drive recovers, the VG/LV resident on the
drive should be operable properly. The test command is as below:
# make check_lvmlockd_idm \
LVM_TEST_BACKING_DEVICE=/dev/sdp3 LVM_TEST_FAILURE=1 \
T=idm_fabric_failure_timeout.sh
Signed-off-by: Leo Yan <leo.yan@linaro.org>
When a fabric failure occurs, the connection with hosts is lost
instantly, and after a while it can recover so that the hosts can
continue to access the drives.
The lock manager should be reliable in this case, handle it
dynamically, and allow the user to continue to use the
VG/LV with the associated locking scheme.
This patch adds a test to emulate the fabric failure and verify LVM
commands for this case. The test usage is:
# make check_lvmlockd_idm \
LVM_TEST_BACKING_DEVICE=/dev/sdo3,/dev/sdp3,/dev/sdp4 \
LVM_TEST_FAILURE=1 T=idm_fabric_failure.sh
Signed-off-by: Leo Yan <leo.yan@linaro.org>
After lvmlockd abnormally exits and the daemon is relaunched, if LVM
commands continue to run, lvmlockd and the backend lock manager (e.g. the
sanlock or IDM lock manager) should continue to serve
the requests from LVM commands.
This patch adds a test to emulate lvmlockd failure and verify LVM
commands after lvmlockd recovers. Below is an example of testing
the case:
# make check_lvmlockd_idm \
LVM_TEST_BACKING_DEVICE=/dev/sdo3,/dev/sdp3,/dev/sdp4 \
LVM_TEST_FAILURE=1 T=lvmlockd_failure.sh
Signed-off-by: Leo Yan <leo.yan@linaro.org>
When a drive failure occurs, the IDM lock manager and lvmlockd should
handle the case properly. E.g. when the IDM lock manager detects a
lease renewal failure caused by I/O errors, it should invoke the kill
path predefined by lvmlockd, so that the kill path program
(like lvmlockctl) can send requests to lvmlockd to stop and drop the lock
for the relevant VG/LVs.
To verify the failure handling flow, this patch introduces an IDM
failure injection program; it takes a "percentage" of drive
failures as input so that different failure cases can be emulated.
Signed-off-by: Leo Yan <leo.yan@linaro.org>
This patch adds a stress test which launches three threads:
one thread creates/removes a PV, one thread creates/removes a VG,
and the last thread performs LV operations.
Signed-off-by: Leo Yan <leo.yan@linaro.org>
This patch adds a stress test which launches two threads;
each thread creates an LV and activates and deactivates it in a loop, so this
tests multi-threading in lvmlockd and its backend lock manager.
Signed-off-by: Leo Yan <leo.yan@linaro.org>
This patch adds a stress test which loops creating an LV and
activating and deactivating it in a single thread.
Signed-off-by: Leo Yan <leo.yan@linaro.org>
Add checking of the lvmlockd log; this can be used by test cases which
are interested in the interaction with lvmlockd.
Signed-off-by: Leo Yan <leo.yan@linaro.org>
For testing the IDM locking scheme, it's good to clean up the IDM context
before running the test cases. This gives a clean environment for the
testing.
Signed-off-by: Leo Yan <leo.yan@linaro.org>
In the current implementation, the option "LVM_TEST_BACKING_DEVICE" only
supports specifying one backing device; this patch extends the
option to support multiple backing devices by using a comma as separator,
e.g. the command below specifies two backing devices:
make check_lvmlockd_idm LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3
This allows the testing to work on multiple drives and to verify that the
locking scheme works as expected in the multiple drives case. For
example, for the Seagate IDM locking scheme, if a VG uses two PVs and every PV
resides on its own drive, the locking operations will be sent to the two
drives respectively; so the extension of "LVM_TEST_BACKING_DEVICE" can
help to verify different drive configurations for locking.
Signed-off-by: Leo Yan <leo.yan@linaro.org>
This patch introduces the testing option LVM_TEST_LOCK_TYPE_IDM; when
this option is specified, the Seagate IDM lock manager is launched as the
backend for testing. Also add the prepare and remove shell scripts for
IDM.
Signed-off-by: Leo Yan <leo.yan@linaro.org>
Calling clear_hint_file() to invalidate hints would acquire
the hints flock before the global flock which could cause deadlock.
The lock order requires the global lock to be taken first.
pvchange was always invalidating hints, which was unnecessary;
only invalidate hints when changing a PV uuid. Because of the
lock ordering, take the global lock before clear_hint_file which
locks the hints file.
The macro LOCKDIDM_SUPPORT is missing from the configure.h.in file, so when
the "configure" command is executed it has no chance to add this macro to the
automatically generated header include/configure.h.
This patch adds the macro LOCKDIDM_SUPPORT to configure.h.in.
Signed-off-by: Leo Yan <leo.yan@linaro.org>
For shared VG or LV locking, the IDM locking scheme needs to use the PV
list associated with the VG or LV for sending SCSI commands, thus it needs
a place to generate the PV list.
Reviewing the flow of LVM commands, the best place to generate the PV
list is in the locking lib. This is why this patch parses the PV list as
shown. It iterates over all the PV nodes one by one and compares them with
the VG name or LV prefix string. If any PV matches, the PV is
added to the PV list. Finally the PV list is sent to the lvmlockd daemon.
As mentioned, it compares the LV prefix string in the format
"lv_name_"; the reason is that it needs to find all relevant PVs, e.g.
a thin pool has LVs for metadata, pool, error, and the raw LV, so
we can use the prefix string to find all PVs belonging to the thin
pool.
The global lock is not covered in this patch. To avoid the chicken-and-egg
issue, we need to prepare the global lock ahead of any
locking being used. So the global lock's PV list is established in the
lvmlockd daemon by iterating over all drives with a partition labeled
"propeller".
Signed-off-by: Leo Yan <leo.yan@linaro.org>
We can consider the drive firmware a server handling locking
requests from nodes; this essentially is a client-server model.
DLM uses the kernel as a central place to manage locks, so it also
complies with the client-server model for locking operations. This is
why the IDM and DLM wrappers are similar to each other.
This patch largely works by generalizing the DLM code paths and then
providing degeneralized functions as wrappers for both IDM and DLM.
Signed-off-by: Leo Yan <leo.yan@linaro.org>
To allow the IDM locking scheme to be used by users, this patch hooks up the
IDM wrapper; it also introduces a new locking type "idm", which can be used
for the global lock with the option '-g idm'.
To support the IDM locking type, the main change in the data structure is to
add a pvs path array. The pvs list is transferred from the lvm commands;
when the lvmlockd core layer receives the message, it extracts the paths using
the keyword "path[idx]". Finally, the pv list is passed to the IDM lock
manager as the target drives for sending IDM SCSI commands.
Signed-off-by: Leo Yan <leo.yan@linaro.org>
Alongside the existing locking schemes of DLM and sanlock, this patch
introduces a new locking scheme: In-Drive-Mutex (IDM).
With IDM support in the drive, the locks reside in the drive,
thus the locking lease is maintained in a central place: the drive
firmware. We can consider this a typical client-server model:
every host (or node) in the server cluster sends a request for
leasing a mutex to the drive firmware, and the drive firmware works as an
arbitrator that grants the mutex to a requester and can reject other
applicants if the mutex has already been acquired. To satisfy LVM
activation for different modes, IDM supports two locking modes:
exclusive and shareable.
Every IDM is identified by two IDs: one is the host ID and the other is
the resource ID. The resource ID is a unique identifier for the
resource being protected; in the integration with lvmlockd, the resource
ID is built from the VG's UUID and the LV's UUID; for the global locking,
the bytes in the resource ID are all zeros, and for the VG locking, the
LV's UUID is set to zero. Every host can generate a random UUID and
use it as the host ID for the SCSI command; this ID is used to clarify
the ownership of the mutex.
For easily invoking the IDM commands on the drive, like other locking
schemes (e.g. sanlock), a daemon program named the IDM lock manager is
created, so the detailed IDM SCSI commands are encapsulated in the
daemon, and lvmlockd uses the wrapper APIs to communicate with the
daemon program.
This patch introduces the IDM locking wrapper layer; it forwards the
locking requests from lvmlockd to the IDM lock manager, and returns the
results from the drives' responses.
One thing worth mentioning is the IDM's LVB. IDM supports an LVB of at most
7 bytes when stored in the drive; the most significant byte of the 8 bytes
is reserved for control bits. For this reason, the patch maps a
timestamp in microsecond units to the cached LVB; essentially, if the
timestamp was updated by other nodes, that means the local LVB is
invalid. When the timestamp is stored into the drive's LVB, it's
possible to hit a time-going-backwards issue, introduced by the
time precision or missing synchronization across multiple nodes.
So the IDM wrapper fixes up the timestamp by incrementing the latest
value by 1 and writing it back to the drive.
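A small sketch of that fix-up (assumed shape, not the exact wrapper code):
```c
#include <stdint.h>

/* Keep the timestamp stored in the LVB strictly monotonic: if the value
 * about to be written is not newer than what the drive already holds,
 * bump the drive's value by one so time never appears to go backwards. */
static uint64_t fixup_lvb_timestamp(uint64_t new_ts, uint64_t drive_ts)
{
	return (new_ts > drive_ts) ? new_ts : drive_ts + 1;
}
```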
Currently the LVB is used to track VG changes and its purpose is to notify
about lvmetad cache invalidation when any metadata has been altered;
but since lvmetad is no longer used for caching metadata, the LVB doesn't
really do anything. It's possible that the LVB functionality could be useful
again in the future, so let's enable it for IDM in the first place.
Signed-off-by: Leo Yan <leo.yan@linaro.org>
When the device is not a PV print
"No PV found on device ..."
instead of
"Failed to read lvm info for ... PVID ."
an earlier check had been added with a different
message for the same condition.
error reading dev and no pvid on dev were both
returning 0. make it easier for callers to
know which, if they care.
return 1 if the device could be read, regardless
of whether a pvid was found or not.
set has_pvid=1 if a pvid is found and 0 if no
pvid is found.
While we try hard to spot arrays that are not yet in sync,
some kernels tend to block our lvm2 command in the kernel
while we resume these smaller raid arrays, even for 5 seconds.
But since the result is not really wrong - report these
check failures only as a TEST WARNING.
The -l option was missing from the man page, although users should prefer
lvresize -r when they also want to do volume management,
as there they can specify e.g. extents for allocation.
Also mention dm-crypt support in the command description.
If the 'act' has already been processed by add_client_result()
it could possibly have been released - so avoid accessing 'act->'
afterwards and go directly to the next item.
This case happens when we convert an LV to another type, i.e. when
we change an existing LV into a different type - so change the message
about the device path not matching to debug level and avoid confusing users.
We may eventually enhance the caching code to drop cached info
after taking the lock and reading the VG.
Commit 0b18c25d93 introduced
'zalloc()' for the allocation of outdated PVs,
but no matching 'free()' is present.
Switch to using the cmd mempool instead of adding free() code in
several places.
If a cmd def implies an LV type without --type
in the required options, then include the implied
type in the cmd def as AUTOTYPE: <type>
instead of including the redundant --type foo
in the OO list of options.
Including an implied --type in the OO list would
often cause multiple cmd defs to potentially be
identical when options were used, and a user
command could match more than one cmd def.
The AUTOTYPE values are listed in man page and
help output as
[ --type foo (implied) ]
If a user command includes --type, it will usually
match a cmd def with --type in the required options.
But, if the user command matches a cmd def with
AUTOTYPE, then the specifed --type and AUTOTYPE must
match.
The man-generator program has a new --check
option that compares cmd defs to find any cmd defs
that are equivalent with the use of options,
and should have their options adjusted.
Compares cmd defs based on two principles for avoiding repeated
commands (where a given command could match more than one cmd def):
. a cmd def should be a unique combination of required
option args and position args
. avoid adding optional options to a cmd def that if
used would make the command match a different cmd def
FIXME: record when repeated cmd defs are found so we can
avoid reporting them twice, e.g. once for A vs B and
second time for B vs A.
Correcting some usage of Bold and Italics (files).
Adding some missing SEE ALSO.
Fixing missed replaceable paths that are configurable.
Be careful about .P in .TP sections - need to use .sp for space line.
Use .UR/.UE for URL references.
Use #DEFAULT_SYS_DIR# replaceable string for devicesfile
so the man pages installation respects configured settings.
Update some missing lvm.conf(5) references.
Add missing VG into description of thin pool creation command.
Remove one duplicated thin-pool creation command.
Remove options --discards and --errorwhenfull from the list when the command describes
only creation of a thin volume - as these options do apply for thin-pool.
Also use here more correct name OO_LVCONVERT_THINPOOL instead of OO_LVCONVERT_THIN.
Reorder extra options for cache & thin-pool before common pool options.
Order consistently --stripes and --stripesize after the --extents option
so the options related to pools are better together.
Remove invalid snapshot creation description - since this case is
handled through our configurable spare volume creation.
Add some missing optional --type parameters for a few command instances.
Emit .ad l / .ad b less frequently around larger blocks
we want to keep left aligned.
Avoid emitting empty lines.
Reduce .HP usage and replace it with .TP.
However keep .HP for all option listings, as e.g. html rendering
can't handle a combination of .TP and .HP together well, and .TP alone
does not indent the 2nd line of a long option line.
(For .TP line we don't need to emit .br)
Surround .SH with dots for better look.
For some .TP use plain more readable .I for a line.
Support rendering of optional [Number] (for --units).
Use better markup for units and instead of long markup string,
show individual units with markup.
Enhance the man typography decoration of optional option
prefixes like --[raid]writebehind and use a regular font to render []
as these are not part of the option name itself.
The sed replacement script missed properly replacing several '-' with '\-'.
Replace it with a simpler set of regexes.
Also add a new target 'make checksed' for testing with examples,
where the replacement should or should not occur, for easier testing.
Previously, accepted LV types were presented as a series of suffixes
after the "LV" on the command line. The addition of many new types
resulted in this becoming too long, e.g.
lvconvert --type cache --cachepool LV LV_linear_striped_thinpool_vdo_vdopool_vdopooldata_raid
For man pages, move these types from the command line to a new line
dedicated to listing accepted LV types:
lvconvert --type cache --cachepool LV LV1
...
LV1 types: linear striped thinpool vdo vdopool vdopooldata raid
The special "LV1" is used as a reference to avoid confusion
with other LVs that may appear on the command line. There
are currently no commands with more than one typed LV, but
if there are cases with more, then "LV2" could also be used.
For command line usage/-h output, drop the LV types from the
command line specification. The more detailed list is not needed
in the help output and can be found in the man page.
Switch to using .TP where it's easy and doesn't change the layout
(since .HP is marked as deprecated) - but .TP is not always a perfect match.
Avoid submitting empty lines to troff and replace them mostly with .P
and use '.' at line start to preserve 'visual' presence of empty line
while editing man page manually when there is no extra space needed.
Fix some markup.
Add some missing SEE ALSO section.
Drop some white-space at end-of-lines.
Improve hyphenation logic so we do not split options.
Use '.IP numbers' only with the first one in the row (others in the row
automatically derive this value)
Use automatic enumeration for .SH titles.
Guidelines in-use:
https://man7.org/linux/man-pages/man7/groff.7.html
https://www.gnu.org/software/groff/manual/html_node/Man-usage.html
https://www.gnu.org/software/groff/manual/html_node/Lists-in-ms.html
This reverts commit 8e7690b798.
Actually this was a bad idea - to make it on par.
-Zn for thin-pools is already used - so here the user must
create a new pool and swap the existing thin-pool metadata into it.
So revert this commit to avoid any possible regression.
It would be complicated to handle ',' alignment after the hyphenation
changes ATM, but these commas seem to be rather unneeded there,
so remove them and make the man output clearer.
Disable hyphenation around longer option lists (>42 chars)
and use \: to mark up places for line splits.
The code ATM is somewhat mixed so it's not easy to encapsulate
a section in .nh ... .hy.
ATM a global _was_hyphen is used to properly finish sections after
disabled hyphenation.
When a --with-... option is used as --without-..., it gets
assigned the value 'no' - so support this better where we can.
Also remove 'shared' from the help as it's not supported.
The autoactivation property can be specified in lvcreate
or vgcreate for new LVs/VGs, and the property can be changed
by lvchange or vgchange for existing LVs/VGs.
--setautoactivation y|n
enables|disables autoactivation of a VG or LV.
Autoactivation is enabled by default, which is consistent with
past behavior. The disabled state is stored as a new flag
in the VG metadata, and the absence of the flag allows
autoactivation.
If autoactivation is disabled for the VG, then no LVs in the VG
will be autoactivated (the LV autoactivation property will have
no effect.) When autoactivation is enabled for the VG, then
autoactivation can be controlled on individual LVs.
The state of this property can be reported for LVs/VGs using
the "-o autoactivation" option in lvs/vgs commands, which will
report "enabled", or "" for the disabled state.
Previous versions of lvm do not recognize this property. Since
autoactivation is enabled by default, the disabled setting will
have no effect in older lvm versions. If the VG is modified by
older lvm versions, the disabled state will also be dropped from
the metadata.
The autoactivation property is an alternative to using the lvm.conf
auto_activation_volume_list, which is still applied to VGs/LVs
in addition to the new property.
If VG or LV autoactivation is disabled either in metadata or in
auto_activation_volume_list, it will not be autoactivated.
An autoactivation command will silently skip activating an LV
when the autoactivation property is disabled.
To determine the effective autoactivation behavior for a specific
LV, multiple settings would need to be checked:
the VG autoactivation property, the LV autoactivation property,
the auto_activation_volume_list. The "activation skip" property
would also be relevant, since it applies to both normal and auto
activation.
Switch to plain 'kill'; we should no longer need SIGKILL
as polling can be interrupted.
Resolve a problem in aux wait_pvmove_lv_ready() that was using an
lvm command to check for the UUID - but this was interfering with the
VG lock and delaying confirmation.
So reduce the slow-down of the test - so it can run faster.
Enhance handling of interruptions of the polling process and the lvmpolld daemon.
The daemon should now react much faster to interrupts (i.e. a shutdown
sequence) and avoid taking a lengthy sleep waiting on pvmove signaling.
If we are signaled with SIGTERM it should be at least as good
as with SIGINT - as the command should stop ASAP.
So when lvm2 command allows signal handling we also
enable SIGTERM handling. If there are some other signals
we should handle equally - we could just extend array.
Avoid emitting the Local symbol, sort symbols from the
start, and add a dependency on the previous version.
Should not change anything, just better follows
linkage guidelines.
Add missing description for profile usage with cache pool.
List cache-pools as first option for dm-cache as it provides
better performance and more functionality over cachevols.
Not all libc libraries (like musl, uclibc, dietlibc) support full symbol
version resolution at runtime like glibc.
Add support for not generating symbol versions when compiling against them.
Additionally libdevmapper.so was broken when compiled against
uclibc. Runtime linker loader caused calling dm_task_get_info_base()
function recursively, leading to segmentation fault.
Introduce the --with-symvers=STYLE option, which allows choosing
between gnu and disabled symbol versioning. By default gnu symbol
versioning is used.
__GNUC__ check is replaced now with GNU_SYMVER.
Additionally ld version script is included only in
case of gnu option, which slightly reduces output size.
Providing --without-symvers to configure script when building against
uclibc library fixes segmentation fault error described above, due to
lack of several versions of the same symbol in libdevmapper.so
library.
Based on:
https://patchwork.kernel.org/project/dm-devel/patch/20180831144817.31207-1-m.niestroj@grinn-global.com/
Suggested-by: Marcin Niestroj <m.niestroj@grinn-global.com>
Looks like there was some missed versioning increase during development.
So with kernel >= 4.18, version 1.19 is enough to look like 1.20.
However backported 1.19 targets seem to not provide all
the capabilities.
A test has to use PVs with suffix pv[0..9], otherwise
they are ignored by the test suite filter.
Better fix for VG names to use the prefix LVMTEST.
Skip the test for runs without LVM_TEST_DEVDIR != /dev
Always use PREFIX for the vg header - all tests must use this prefix,
VGs without it are not allowed.
Modify the pv_symlink test - as the test was checking an unsupportable
combination - since lvm2 commands within the testsuite are only
allowed to manipulate /dev/mapper/LVMTESTXXXX paths -
nothing else is allowed and it fails on being filtered.
Added a comment that 'lvs' already initiates dmeventd.
Note: we don't have any query mechanism to check if dmeventd
is already running except accessing the socket, which basically
starts dmeventd if it's not running.
For deterministic test results the lvm2/dm services shall not be present
and running in the system, as they may randomize test results.
In case they are found present, this test ends with a warning (not a failure).
Some older instances of 'mdadm' opened legs in RW, then
closed and opened them again and expected exclusive access.
But here a udev rule can be fired - so on these versions
slow down the whole mdadm runtime by using strace, to
give the system a bit more time to finish the udev rule.
Make check_lvmpolld_init_rq_count() more compatible with older gawk,
where some functionality was not working properly.
Also change 'not not' condition.
Fsadm wants to print its own error message when it can't detect the
type of the filesystem on a block device.
Otherwise fsadm exits with no message on an unused block device.
When blockdev --getsize64 gives an empty result we want to fall back
to the ancient --getsize * --getss calculation (RHBZ #1942486).
Reported by: ajschorr@alumni.princeton.edu
Just like the lvm command, ignore a 0/xxxx report when judging the status.
Avoid using an infinite loop and limit report checking to 100 checks.
If it would need more - something is not right.
Our tests may result in the production of a huge set of
invalid links in the /dev/disk directory, depending on the version
of udev and various kinds of failures.
Also we happen to invoke some on-system pvscans generating
local /etc/lvm/archive & backups - remove them when the
test is finished.
Try to synchronize with colliding udev.
Also retry once if there is some failure, with some
sleep before the next retry.
Use oflag=direct for wiping without wipefs.
A combination of throttling and a slowed device is a bit faster.
Also add a FIXME about the multiple spawned polling processes
when activating an individual LV for a pvmove.
Use for testing the new mdadm_create aux wrapper.
Place the functionality into a 2-pass loop - one pass for 'auto', the other for 'start'.
Share the same tests between the raid level 0 and level 1 versions of raid.
Add a generic wrapper for mdadm --create which takes
normal 'mdadm' args - but allows us to handle differences of
mdadm usage across various versions of the mdadm tool.
The resulting MD device is available in $(< MD_DEV).
Automatic cleaning is made through cleanup_md_dev
Calling of mdadm_create cleans previous MD dev if it exists.
When testing installed binaries on a system, use more 'built-in'
predefined settings to use them with their compiled-in values.
Also it's better to use the same locking dir so the system's pvscan
is not unexpectedly interfering with test commands.
We have here some kind of catch-22 - since older kernels
use 'resync' while newer ones use 'recover' for the initial raid synchronization.
So now - how do we recognize which state the raid is in?
ATM it seems simplest to keep the dropping of the primary raid
leg disabled unless we are in sync.
FIXME: we should add a target version check and enable removal.
The function does not have the best name since it checks
not just raid LVs for being in sync.
Restore the mirror percentage checking - although without retries,
since only raid target is currently known to need it - for other
types it would be ATM a bug to get inconsistent result.
Our new faster deps generation missed support for
builddir != srcdir - as it can be useful to have
several builds from an unchanged directory with sources.
While waiting for the raid to return a consistent status,
use an interruptible sleep - so the command can break quickly.
Use lv_raid_status() to get percentage easily from status.
Always shift created virtual PVs on backing device by 1MiB
and leave 1MiB free space at the end of device.
This way the system doesn't see the same PV headers on multiple devices.
Since lvm does support external users of a thin-pool, when thin devices
are managed outside lvm it can be useful to support conversion to a
thin pool from data and metadata LVs without zeroing.
The TransactionID will be 0 in the lvm2 metadata.
lvconvert -Zn --thinpool vg/data --poolmetadata vg/meta
Re-enables usage of --type zero and --type error LVs to serve as a
backend for the _tdata device. Clearly not very useful in practice,
as it can't store any real data, but usable for some testing
and some sort of performance checking.
lvcreate --type zero -L1T -n pool vg
lvconvert --thinpool vg/pool
Will create a thin-pool with zero device backend.
Enable extension/mixing of striped/linear, error and zero
segtype LVs with striped/linear, error and zero segtypes.
It is not very useful in practice, as the user cannot store any real
data on error or zero segtypes, but it may have some uses in
scenarios where e.g. some portion of the device should not be
readable. Mixing of types happens at the 'extent_size' level:
lvcreate -L1 -n lv vg
lvextend --type error -L+1 vg/lv
lvextend --type zero -L+1 vg/lv
lvextend --type linear -L+1 vg/lv
lvextend --type striped -L+1 vg/lv
lvs -o+segtype,seg_size vg
Note: when the type is not specified, the last segment type is
automatically selected.
It's also a small 'can of worms' since we can't tell for such LVs
whether the LV is linear/error/zero or a mixture of them. So the meaning behind
them may need some updates.
We already have these types of LVs created e.g. by:
vgreduce --removemissing --force
where missing LV segments have been replaced by either
the error or zero segtype (lvm.conf).
TODO: it might be worth adding a message while such device is activated.
Add some extra code to handle a thin-pool sized differently
from its thin-pool data volume.
ATM this can't really happen, but once we start to use multiple
commits while resizing a stacked LV, we may actually get into
the position where the data LV has already been resized,
but the thin-pool stayed with the old size.
But for now - report the difference as an internal error.
Devices made only from the 'error' target cannot be used,
and if the device is also combined with the 'zero' target
the same rule applies - such a device cannot be used.
Just like with other segtypes, use this function to get the whole
raid status info available in a single ioctl call.
It also nicely simplifies reading the percentage info about the
in_sync portion of the raid volume.
TODO: drop use of other calls than the lv_raid_status call,
since all such calls could already use the status - it just
adds unnecessary duplication.
Reduce ioctl count and avoid separate info check,
when we can get the same info from status ioctl.
When devmanager calls return 0, the exists value 0
means the reason for the failure is a missing device in the table.
In such a case we avoid a stack trace.
Swap the flush parameter for the vdo status function
to match thin pool status.
Adding full filesystem sync, trying to fight with strange error from losetup:
losetup: loopa: failed to set up loop device: Resource temporarily unavailable
loop0: detected capacity change from 0 to 4096
loop_set_block_size: loop0 () has still dirty pages (nrpages=13)
Also reuse internal aux wipefs_a
When multiple polling tasks are watching the same LV, clearly
when one of them wins the game the other polling tasks will fail.
Improve the logic and report success if the merged LV is
actually not a merging origin anymore (since likely someone
else has already finished the merge).
Although we support a '0' interval - it's highly inefficient to
do so many scans in a busy-loop.
So ATM raise the minimal rescan time to 100ms.
TODO: revisit the whole timing logic here as it has some hidden
side-effect impact and can considerably eat CPU in some cases.
Reuse a similar 'acceleration' as used for dependent volumes also
for snapshots - so when an origin is being removed with all its thick
snapshots, don't bother with individual 'COW' detachments
and write&commits, and when possible handle this all within
a single commit.
Move the code for prompting about a removed LV to a single function
and use it also to prompt for removal of an origin and all its thick
snapshots and also when removing a merging origin.
The function handles postponed write_and_commit so there is
no 'in-flight' operation while waiting for the [y|n] answer.
Compat code to handle an unusual case, where a
thin snapshot is also a 'thick snapshot origin' and such a
snapshot gets merged into a thin origin.
However since lv_is_visible() (which is a complex function) now
replaced the &VISIBLE_LV check, this whole check seems to be
no longer useful as the sum of all 3 will always match??
Since a cached LV is going to be removed together with its cache,
there is not much to gain if we try to flush the cache first.
The user may use 'vgcfgrestore' to get back the origin + cache,
assuming the user is not using issue_discards -
when data are discarded after the remove there is nothing to restore!
This change allows us to further reduce the number of commits
during lvremove/vgremove.
This modification supports dynamically obtaining the value of PAGE_SIZE,
making it compatible with systems whose PAGE_SIZE is 4K or 64K.
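A minimal sketch of the idea, assuming the usual sysconf() interface (not the exact patched code):
```c
#include <unistd.h>

/* Query the page size at runtime instead of hard-coding 4096,
 * so the same binary works on 4K and 64K page systems. */
static long get_page_size(void)
{
	long sz = sysconf(_SC_PAGESIZE);

	return (sz > 0) ? sz : 4096;    /* conservative fallback */
}
```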
Signed-off-by: wuguanghao <wuguanghao3@huawei.com>
There is really no practical reason to continue running
when we fail on allocation.
It seems we may need further fine-grained errors, as for
some error types we simply need to exit ASAP, while
others may still produce usable results.
When lvremove/vgremove removes thin volumes together with their thin-pool,
try to skip any updates of such a thin-pool, so when everything
deactivates properly, no message is sent to this thin-pool and the whole
thin-pool is removed with a single commit.
When generating the list of processed LVs, add the thin-pool to the head of the
list, while other LVs are added to the tail.
This makes it easier, when removing many thin volumes, to recognize
when their thin-pool is also supposed to be removed.
Returning NULL for lv_committed is basically an instant crash,
so instead try with the passed LV.
It shouldn't matter as this is an internal error path anyway,
but coverity should be happier.
Another step towards better automatic handling of backup:
automatically set needs_backup after commit.
In some next step we should reduce the number of backups and take
them only at the command finish with the vg_committed content.
The correct test needs to actually check that 'lv->snapshot' is not NULL,
so that 'find_snapshot()' can work.
Testing lv_is_snapshot was actually irrelevant for this case.
Also initialize device_id.
With commit b44db5d1a7
we need to check the allocated pointer for a failed malloc().
The existing check was actually not checking anything, so a failing
malloc here would result in a segfault (although with a very
low chance of ever happening).
Match the VG uuid just once per list of all LVs in the VG.
TODO: maybe some more efficient tree or hash could be better here,
but since it's not used so often, the total benefit is not so great,
so ATM just reduce the amount of checked bytes.
Add the Bob Jenkins hash function to get a better working hash function,
which generates far fewer collisions (especially with similar
strings).
For comparison, a kernel function used in the DM kernel code is also included.
While it's better than our existing one, it's still far worse
than the Bob Jenkins hash.
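For reference, the well-known Bob Jenkins "one-at-a-time" hash has this shape (a sketch; the variant actually imported may differ, e.g. lookup3):
```c
#include <stdint.h>
#include <stddef.h>

static uint32_t jenkins_one_at_a_time(const void *key, size_t len)
{
	const unsigned char *p = key;
	uint32_t h = 0;
	size_t i;

	for (i = 0; i < len; i++) {
		h += p[i];
		h += h << 10;
		h ^= h >> 6;
	}

	h += h << 3;
	h ^= h >> 11;
	h += h << 15;

	return h;
}
```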
Enhance hash performance by remembering the computed
hash value for the hashed node - this speeds up
lookup of nodes with a hash collision when
different keys map to the same masked hash value.
For the easier use 'num_slots' become 'mask_slots',
so we only add '1' in rare case of need of the original
num_slots value.
Also add statistic counters for hashes and print stats in
debug build (-DDEBUG) on hash wiping.
(so badly performing hash can be optimized.)
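A sketch of the idea (illustrative names, not the libdm internals): keep the full hash in each node so colliding buckets can reject mismatches without comparing keys.
```c
#include <stdint.h>
#include <string.h>

struct hash_node {
	struct hash_node *next;
	uint32_t hash;          /* full hash remembered at insert time */
	unsigned keylen;
	char key[];             /* flexible array member holding the key */
};

/* Fast rejection first, full key compare only when the hashes match. */
static int node_matches(const struct hash_node *n, uint32_t hash,
			const void *key, unsigned keylen)
{
	return n->hash == hash && n->keylen == keylen &&
	       !memcmp(n->key, key, keylen);
}
```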
Use different 'hint' size for dm_hash_create() call - so
when debug info about hash is printed we can recognize which
hash was in use.
This patch doesn't change actual used size since that is always
rounded to be power of 2 and >=16 - so as such is only a
help to developer.
We could eventually use 'name' arg, but since this would have changed
API and this patchset will be routed to libdm & stable - we will
just use this small trick.
Just like with deactivation, the call of 'lv_is_not_in_use()'
now has an embedded report for an inactive LV.
Note: this patch cannot be backported to stable-2.02 - as
there lv_is_active() has a 'cluster' meaning and differs from lvinfo().
When an LV is deactivated, we check for its presence, and later
for some LV types also whether it is in use.
We can however do this check in 1 step for them and remove an extra ioctl.
Add return value '2' to lv_check_not_in_use() to recognize LV is not
present.
Existing users were just testing for 0, so no change for them.
When parsing VG metadata we can create from a single config tree
also 'vg_committed', which is always created for a writable VG.
This avoids the extra unnecessary step of serializing and deserializing
the just-parsed VG.
Every vg_write stores new 'metadata' into the precommitted slot.
For this step we use a 'serialized buffer' of ascii metadata.
Instead of recreating this buffer after the whole 'vg_write()', we
use this buffer instantly for creating the precommitted VG.
This also has the advantage of catching any problems with
reparsing the ascii metadata back into a VG early, before any write.
This patch postpones the update of lvm metadata for each removed
LV to a later moment depending on the LV type.
It also queues messages to be printed after such a write & commit.
As such there is some change in the behavior - although before a
prompt we do make write&commit happen automatically, in some
other error cases we rather keep the 'existing' state - so there
could be a difference in the amount of removed & committed LVs.
IMHO the introduced logic is slightly better and safer.
But some cases still need the early commit - i.e. thin removal -
and fixing this needs some more thinking.
TODO: improve removal at least for the case of the whole thin-pool,
i.e. we can simply recognize removal of 'all LVs/whole VG'.
with fork and exec to avoid use of shell.
largely copied from lib/misc/lvm-exec.c
require lvmlockctl_kill_command to be full path
use lvm config instead of lvmconfig to avoid need for LVM_DIR
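A minimal sketch of running a command with fork and exec instead of a shell (not the actual lvmlockd code; argv[0] is assumed to be a full path, matching the requirement above):
```c
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

static int run_command(char *const argv[])
{
	pid_t pid;
	int status;

	if ((pid = fork()) < 0)
		return -1;

	if (!pid) {
		execv(argv[0], argv);   /* no shell involved */
		_exit(127);             /* exec failed */
	}

	if (waitpid(pid, &status, 0) < 0)
		return -1;

	return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```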
Make the generic "device is not usable" message from filter-usable
more specific in case the device is not usable because it's an LV.
(i.e. when scan_lvs=0)
When 'lv_info()' is called with an &info structure,
the presence of the node has to be checked from this structure.
Without this we were needlessly trying to look up the 0:0 device.
When lvm2 calls archive() or backup() it can be useful to allow handling
the break signal so the command can be interrupted at some consistent point.
The signal is accepted while processing these calls - and can be evaluated
later even during lengthy processing loops.
So now the user can interrupt a lengthy lvremove().
Taking a backup with each removed LV slows down the process
considerably and is largely unneeded. We are supposed to take
backups only at significant points, making sure the backup
is correct when the command is finished.
TODO: check how many other commands can be improved.
Use the 'C' locale for alphasort - there is no need to use localized and
slower sorting for internal directory scanning.
Ensure that allocated dirent entries are released on all code paths.
Optimize full path construction.
Since commit 29abba3785 we have hopefully
fixed most of the troubles with deps tracking we had in the past - so retry
again.
Drop the explicit configure.h from DEPS, as it's automatically gathered
by gcc dependency tracking anyway.
Drop the comment "This setting is no longer used." which
was printed just before the standard deprecation comment:
"This configuration option is deprecated."
When 'lvmconfig --typeconfig full' printed a deprecated
entry, it would attempt to print a non-existent
deprecation comment, resulting in output like:
# (null) # This setting is no longer used.
This reverts commit 99b6173f10.
These tests are disabled with lvmlockd because they use
snapshots without an origin which is not permitted in a
shared vg.
The user creates a file listing the real devices they want
lvm tests to use, and sets LVM_TEST_DEVICE_LIST to point at it.
lvm tests can use these devices with prepare_real_devs
and get_real_devs.
Other aux functions do not work with these devs.
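A minimal sketch of this workflow, assuming hypothetical device names and a hypothetical list file path (not taken from the test suite):
# list the real devices the tests are allowed to use
cat > /tmp/test-devs <<EOF
/dev/sdb
/dev/sdc
EOF
# point the test suite at that list before running the tests
export LVM_TEST_DEVICE_LIST=/tmp/test-devs
# inside a test, prepare_real_devs and get_real_devs then draw devices from this list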
The LVM devices file lists devices that lvm can use. The default
file is /etc/lvm/devices/system.devices, and the lvmdevices(8)
command is used to add or remove device entries. If the file
does not exist, or if lvm.conf includes use_devicesfile=0, then
lvm will not use a devices file. When the devices file is in use,
the regex filter is not used, and the filter settings in lvm.conf
or on the command line are ignored.
LVM records devices in the devices file using hardware-specific
IDs, such as the WWID, and attempts to use subsystem-specific
IDs for virtual device types. These device IDs are also written
in the VG metadata. When no hardware or virtual ID is available,
lvm falls back to using the unstable device name as the device ID.
When devnames are used, lvm performs extra scanning to find
devices if their devname changes, e.g. after reboot.
When proper device IDs are used, an lvm command will not look
at devices outside the devices file, but when devnames are used
as a fallback, lvm will scan devices outside the devices file
to locate PVs on renamed devices. A config setting
search_for_devnames can be used to control the scanning for
renamed devname entries.
Related to the devices file, the new command option
--devices <devnames> allows a list of devices to be specified for
the command to use, overriding the devices file. The listed
devices act as a sort of devices file in terms of limiting which
devices lvm will see and use. Devices that are not listed will
appear to be missing to the lvm command.
Multiple devices files can be kept in /etc/lvm/devices, which
allows lvm to be used with different sets of devices, e.g.
system devices do not need to be exposed to a specific application,
and the application can use lvm on its own set of devices that are
not exposed to the system. The option --devicesfile <filename> is
used to select the devices file to use with the command. Without
the option set, the default system devices file is used.
Setting --devicesfile "" causes lvm to not use a devices file.
An existing, empty devices file means lvm will see no devices.
The new command vgimportdevices adds PVs from a VG to the devices
file and updates the VG metadata to include the device IDs.
vgimportdevices -a will import all VGs into the system devices file.
LVM commands run by dmeventd do not use a devices file by default,
and will look at all devices on the system. A devices file can
be created for dmeventd (/etc/lvm/devices/dmeventd.devices). If
this file exists, lvm commands run by dmeventd will use it.
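As a rough illustration of the commands described above (device and file names are placeholders; see lvmdevices(8) and lvm(8) for the exact option syntax):
# add a PV to the default system devices file
lvmdevices --adddev /dev/sdb
# import all PVs of existing VGs into the system devices file
vgimportdevices -a
# restrict a single command to an explicit list of devices
pvs --devices /dev/sdb,/dev/sdc
# use an alternative devices file kept in /etc/lvm/devices/
vgs --devicesfile app.devices
# run one command without any devices file
pvs --devicesfile ""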
Internal implementation:
- device_ids_read - read the devices file
. add struct dev_use (du) to cmd->use_devices for each devices file entry
- dev_cache_scan - get /dev entries
. add struct device (dev) to dev_cache for each device on the system
- device_ids_match - match devices file entries to /dev entries
. match each du on cmd->use_devices to a dev in dev_cache, using device ID
. on match, set du->dev, dev->id, dev->flags MATCHED_USE_ID
- label_scan - read lvm headers and metadata from devices
. filters are applied, those that do not need data from the device
. filter-deviceid skips devs without MATCHED_USE_ID, i.e.
skips /dev entries that are not listed in the devices file
. read lvm label from dev
. filters are applied, those that use data from the device
. read lvm metadata from dev
. add info/vginfo structs for PVs/VGs (info is "lvmcache")
- device_ids_find_renamed_devs - handle devices with unstable devname ID
where devname changed
. this step is only needed when devs do not have proper device IDs,
and their dev names change, e.g. after reboot sdb becomes sdc.
. detect incorrect match because PVID in the devices file entry
does not match the PVID found when the device was read above
. undo incorrect match between du and dev above
. search system devices for new location of PVID
. update devices file with new devnames for PVIDs on renamed devices
. label_scan the renamed devs
- continue with command processing
To better test the actual fsadm in the test suite, avoid setting
LVM_BINARY locally, since the test setup already modifies
PATH to find the test's lvm binary first.
The user can use 'lvconvert -Zn --type vdo-pool' to convert an existing
VDO-formatted volume and skip lvm2's internal formatting.
This however requires the user to pass proper matching parameters.
For these, the user can use the --profile|--metadataprofile option, whose
support has also been enhanced.
TODO: add support to read the values directly from the formatted volume.
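A hedged example of such a conversion, roughly following lvmvdo(7) (VG/LV names, profile name and sizes are placeholders and must match the existing VDO format):
# take over an already VDO-formatted LV without re-formatting it (-Zn),
# passing the matching parameters via a metadata profile
lvconvert --type vdo-pool -Zn -n vdo0 -V 100G --metadataprofile vdo_settings vg/existing_vdo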
In some cases we use 'creation' also during conversion.
Here it can have an unwanted side effect: we may remove
not just newly created layers, but also the original converted LV.
So until it is clear how to properly revert from some errors
in the middle of a conversion, disable removal for any 'lvconvert' commands.
When lvdisplay was executed after a thin snapshot had been merged into its
thin origin and the operation had been postponed until the devices
are closed, the command crashed.
Check that the LV is a COW before trying to check the snapshot percentage.
When converting an existing LV to a thin-pool,
the user may now also pass the '--errorwhenfull' option,
like with 'lvcreate'.
Also recalculate the chunk size when a performance profile is
used with the conversion (again matching lvcreate).
Adds missing flagging for uncropped metadata sizes.
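For illustration (VG/LV names are placeholders), the conversion can now carry the same option as creation:
# convert an existing LV to a thin-pool and configure the behaviour when the
# pool runs full, as lvcreate already allowed
lvconvert --type thin-pool --errorwhenfull y vg/pool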
Fix clearing persistent filter state when clearing all
the state from a label_scan.
label_scan reads devs and saves info in bcache, lvmcache,
and in the persistent filter. In some uncommon cases, an
lvm command wants to clear all info from a prior label_scan,
and repeat label_scan from scratch. In these cases, info
in lvmcache, bcache and the persistent filter all need to
be cleared before repeating label_scan.
Because the persistent filter wiping was missing, outdated persistent
filter info from a prior label_scan could cause lvm to
incorrectly filter devices that change between polling intervals.
(i.e. if the device changes in such a way that the filtering
results change.)
A case where lvm wants to do multiple label_scans is a
polling command (like lvconvert --merge), when lvmpolld
has been disabled, so that the command itself needs to
do repeated polling checks.
Automatically figure out the resizable layer in the LV stack and
resize it online.
Split the check for reshaped raids and postpone removal of
unused space after finished reshaping until after metadata archiving.
Drop the warning about unsupported automatic resize of a monitored thin-pool.
Currently there is no support yet for resizing writecache.
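A minimal example of the intended usage (hypothetical names; the top-level LV is given and lvm picks the layer that can actually be resized):
# extend the visible LV; the resizable layer in the stack is found automatically
lvextend -L+1G vg/lv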
In the past we had this control with the use_lvmetad check for
pvscan --cache -aay
However this got lost with the lvmetad removal commit
117160b27e
When the user sets lvm.conf global/event_activation=0,
the pvscan service will no longer auto-activate any LVs on newly appeared PVs.
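A small sketch of the restored control (the device path is a placeholder):
# lvm.conf:
#   global {
#       event_activation = 0
#   }
# with the setting above, an event-triggered scan such as
pvscan --cache -aay /dev/sdb
# no longer auto-activates LVs for the newly appeared PV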
Move extra md component detection into the label scan phase.
It had been in set_pv_devices which was deep within the vg_read
phase, which wasn't a good place (better to detect that earlier.)
Now that pv metadata info is available in the scan phase, the pv
details (size and device_hint) can be used for extra md checking.
Use the device_hint from the pv metadata to trigger a full md
component check if the device_hint begins with /dev/md.
Stop triggering full md component checks based on missing
udev info for a dev.
Changes to tests reflect that the code now detects
md components in some test cases where it didn't before.
A cachevol can be forcibly detached when it's missing devices.
Also allow this if it's damaged/invalid and unrepairable.
This would be needed to recover data from the origin LV after
a cachevol is lost or damaged beyond repair.
In cases where lvconvert does not detect a fs block size on the
device, it falls back to choosing a writecache block size based
on the device's LBS and PBS (tries to match those.)
If the user specifies a writecache block size on the command
line (--cachesettings block_size=4096|512), lvconvert currently
fails and reports an error if the user-specified value does not
match the value lvconvert would have chosen based on LBS and PBS.
The purpose of allowing a user-specified value on the command line
is to override what lvconvert would otherwise do, so change this
to just print a warning that the user value does not match the
value that would be chosen based on the LBS/PBS, and then take
the user-specified value as the writecache block size.
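For example (VG/LV and cachevol names are placeholders):
# force a 4096-byte writecache block size even if it differs from the value
# that would be chosen from the device's LBS/PBS (a warning is printed
# instead of an error)
lvconvert --type writecache --cachevol fast --cachesettings block_size=4096 vg/main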
The current allocation limitation requires a metadata/log LV to fit on
a single PV. This is usually not a big problem, but since
thin-pool and cache-pool use this for allocating extents
for their metadata LVs, it may eventually cause errors
when the remaining free space for a large metadata size is spread
over several PVs.
When passing 'vgsplit --name arg', try to automatically move
all dependencies associated with the given LV,
i.e. 'vgsplit --name thinpool vg vgnew'
moves all thins and the data and metadata LVs into the new VG vgnew.
Use update_pool_metadata_min_max(), which is shared with
thin-pool metadata min/max updating.
Gives improved messages when converting volumes to metadata.
There is not much point in allowing allocation of more than this size,
even when e.g. the converted LV is bigger than 16GiB (%extent_size);
ATM neither thin-pool nor cache-pool supports bigger metadata.
Initial support for thin-pool used a slightly smaller max size of 15.81GiB
for thin-pool metadata. However the real limit later settled at 15.88GiB
(the difference is ~64MiB - 16448 4K blocks).
lvm2 could not simply increase the size, as it had been using hard cropping
of the loaded metadata device to avoid the kernel printing warnings
when the size was bigger (e.g. due to a bigger extent_size).
This patch adds a new configurable lvm.conf setting:
allocation/thin_pool_crop_metadata
which defaults to 0 -> no cropping of metadata beyond 15.81GiB.
Only users with these sizes of metadata will be affected.
Without cropping, lvm2 now limits the metadata allocation size to 15.88GiB.
Any space beyond that is currently not used by the thin-pool target,
even if e.g. a bigger LV is used for metadata via lvconvert,
or it is allocated bigger because of a too large extent size.
With cropping enabled (=1), lvm2 preserves the old 15.81GiB
limitation and should allow working in an environment with
older lvm2 tools (i.e. an older distribution).
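For illustration, a sketch of the two modes (VG/LV names are placeholders):
# lvm.conf excerpt selecting the old, cropping behaviour:
#   allocation {
#       thin_pool_crop_metadata = 1
#   }
# with the default (0), extending tmeta past 15.81GiB switches an existing
# pool to the uncropped variant, e.g. the command mentioned below:
lvextend -l+1 vg/pool_tmeta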
Thin-pool metadata with a size bigger than 15.81GiB now uses the CROP_METADATA
flag within lvm2 metadata, so older lvm2 recognizes an
incompatible thin-pool and cannot activate such a pool!
Users should use the uncropped version, as it does not suffer
from the various mismatches between thin_repair results and the allocated
metadata LV, since the thin_repair limit is 15.88GiB.
Users should use cropping only when really needed!
The patch also better handles resizing of thin-pool metadata and prevents
resizing beyond the usable size of 15.88GiB. Resizing beyond 15.81GiB
automatically switches the pool to the no-crop version. Even with existing
bigger thin-pool metadata, the command 'lvextend -l+1 vg/pool_tmeta' makes the change.
The patch gives better control over the 'converted' metadata LV and
reports less confusing messages during conversion.
The patch set also moves the code for updating min/max into pool_manip.c
for better sharing with the cache_pool code.
When detaching writecache, make the first stage send a message
to dm-writecache to set the cleaner option. This is instead of
reloading the dm table with the cleaner option set. Reloading
the table causes udev to process/probe the dm dev, which gets
stalled because of the writeback activity, and the stalled udev
in turn stalls the lvconvert command when it tries to sync with
udev events.
When getting writecache status we do not need to get
open_count or read_ahead info, which can cause extra steps.
In case legs of a raid0 LV are removed, the lvdisplay command still
reports 'available', though raid0 does not provide any resilience
compared to the other raid levels.
Also, lvdisplay does not display '(partial)' in case of missing raid0
legs, as opposed to the lvs command.
Enhance lvdisplay to report "NOT available" for any RaidLV type in case
too many legs are inaccessible hence causing data loss. I.e. any leg
for raid0, all for raid1, more than 1 for raid4/5, more than 2 for raid6
and in case of completely lost mirror groups for raid10.
Add test/shell/lvdisplay-raid.sh.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1872678
Read buffersize - 1 bytes so the last byte is always 0.
Simplify init of zeroed buffers.
Check the snprintf result for error and report an internal error, as it could
happen only via bad compile parameters.
The new VDO target v6.2.3 corrects support for online rename of a VDO device.
If needed, it can be disabled via the new lvm.conf setting:
vdo_disabled_features = [ "online_rename" ]
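For illustration (VG/LV names are placeholders):
# rename a VDO LV; with VDO target v6.2.3 this can happen while the LV is
# active, unless online rename is disabled via the lvm.conf setting above
lvrename vg old_vdo new_vdo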
When removing a pool LV from a stacked LV setup, it has been possible
to leak _pmspare, and such a hidden LV then required manual
removal by the user.
Fix it by moving the automatic removal into _lv_reduce().
When adding replacement raid+integrity images (lvconvert --repair
after a raid image is lost), various errors can cause the function
to exit with an error. On this exit path, the function attempts
to revert new images that had been created but not yet used. The
cleanup failed to account for the fact that not all images needed
to be reverted.
Since commit 77fdc17d70 we always include
the log_len size in the needed extents - however now we may sometimes need
more extents than necessary, mainly when multiple PVs are involved
in the allocation.
Add logs_still_needed into the calculation of sufficient_pes_free().
When a writecache sublv or an integrity metadata sublv
is partial (missing a dev), set the partial flag on
the upper-level LV as well, as is done for other sublvs.
Fix the two-step writecache detach in commit c32d7fed4f.
In the case of uncache, the cachevol is removed after
detaching the writecache. Since the detach is only finished
in the second step, the removal must wait until then.
When using cache with a cachevol, the cache_check tool was
not being run on the cache metadata during activation.
cache_check clears the needs_check flag in the cache
metadata, so if the flag was set due to an unclean
shutdown, the activation would fail.
When 'fsadm resize vg/lv' is used without a size, it should just
resize the filesystem to match the device - but since we now check
for unbound variables in bash, the previous usage no longer
worked and needs an explicit check.
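For example (hypothetical device path), both forms have to keep working:
# resize the filesystem to match the current device size (no size given)
fsadm resize /dev/vg/lv
# resize the filesystem to an explicit size
fsadm resize /dev/vg/lv 10G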
Verify that corruption is corrected for raid levels other
than raid1. For other raid levels, attempt to corrupt the
given file pattern on each underlying device, since we don't
know which device contains the file being corrupted.
This ensures that corruption is actually introduced
when testing the other raid levels.
Verify that corruption is being corrected by checking
the integritymismatches count is non-zero for the raid LV,
which includes the total from all images (since we don't
know which image will have the corruption.)
Each integrity image in a raid LV reports its own number
of integrity mismatches, e.g.
lvs -o integritymismatches vg/lv_rimage_0
lvs -o integritymismatches vg/lv_rimage_1
In addition to this, allow the total number of integrity
mismatches from all images to be displayed for the raid LV.
lvs -o integritymismatches vg/lv
shows the number of mismatches from both lv_rimage_0 and
lv_rimage_1.
Enhance the VDO man page with a chapter describing memory usage
and space requirements.
Remove some unneeded blank lines in the man page.
Use more precise terminology.
Correct examples since cpool and vpool are protected names.