The recent addition to check for PVs that were
missed during the first iteration of processing
was unintentionally catching duplicate PVs because
duplicates were not removed from the all_devices
list when the primary dev was processed.
Also change a message from warn back to verbose.
If a VG is removed between the time that 'vgs'
or 'lvs' (with no args) creates the list of VGs
and the time that it reads the VG to process it,
then ignore the removed VG; don't report an error
that it could not be found, since it wasn't named
by the command.
PVs could be missing from the 'pvs' output if
their VG was removed at the same time that the
'pvs' command was run. To fix this:
1. If a VG is not found when processed, don't
silently skip the PVs in it, as is done when
the "skip" variable is set.
2. Repeat the VG search if some PVs are not
found on the first search through all VGs.
The second search uses a specific list of
PVs that were missed the first time.
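The two-pass idea above, as a small standalone C sketch (toy structures,
not the actual toollib code): the first pass walks the VGs and marks the
PV arguments it finds, and only the arguments still missing are searched
for again before an error is reported.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Toy stand-ins for the real structures. */
struct vg { const char *name; const char *pvs[4]; };

static bool vg_has_pv(const struct vg *vg, const char *pv_name)
{
        for (int i = 0; vg->pvs[i]; i++)
                if (!strcmp(vg->pvs[i], pv_name))
                        return true;
        return false;
}

/* One search pass: mark each named PV found in some VG. */
static void search_vgs(const struct vg *vgs, int nvgs,
                       const char **names, bool *found, int n)
{
        for (int v = 0; v < nvgs; v++)
                for (int i = 0; i < n; i++)
                        if (!found[i] && vg_has_pv(&vgs[v], names[i]))
                                found[i] = true;
}

int main(void)
{
        struct vg vgs[] = { { "test", { "/dev/sdb", "/dev/sdd", NULL } } };
        const char *args[] = { "/dev/sdb", "/dev/sdd", "/dev/sdg" };
        bool found[3] = { false, false, false };
        const char *missed[3];
        bool missed_found[3];
        int nmissed = 0;

        /* First pass over all VGs. */
        search_vgs(vgs, 1, args, found, 3);

        /* Collect only the PVs missed the first time... */
        for (int i = 0; i < 3; i++)
                if (!found[i]) {
                        missed_found[nmissed] = false;
                        missed[nmissed++] = args[i];
                }

        /* ...and repeat the VG search just for those (in lvm the VGs are
           re-read here, since they may have changed meanwhile). */
        search_vgs(vgs, 1, missed, missed_found, nmissed);

        for (int i = 0; i < nmissed; i++)
                if (!missed_found[i])
                        printf("Failed to find physical volume \"%s\".\n",
                               missed[i]);
        return 0;
}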
testing:
/dev/sdb is a PV
/dev/sdd is a PV
/dev/sdg is not a PV
each test begins with:
vgcreate test /dev/sdb /dev/sdd
variations to test:
vgremove -f test & pvs
vgremove -f test & pvs -a
vgremove -f test & pvs /dev/sdb /dev/sdd
vgremove -f test & pvs /dev/sdg
vgremove -f test & pvs /dev/sdb /dev/sdg
The pvs command should always display /dev/sdb
and /dev/sdd, either as part of the VG "test" or not.
The pvs command should always print an error
indicating that /dev/sdg could not be found.
Commit 1a74171ca5 added
a check to ignore a VG that was FAILED_INCONSISTENT
if the command doesn't care if the VG is not found.
Remove that check because that case is never reached
by the current code.
The ONE_VGNAME_ARG flag was being passed and tested as a
vg_read() flag, but it's a cmd struct flag.
(It affects command arg processing in toollib,
not vg_read behavior. Flags related to command
processing are generally cmd struct flags, while
vg_read arg flags are generally related to vg_read
behavior.)
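A minimal illustration of that distinction (all names and values below are
hypothetical, not the real lvm definitions): command-processing flags live
on the cmd struct and are tested by arg handling, while vg_read() receives
only flags that change how the VG is read.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical flag spaces; in lvm these live in separate headers. */
#define CMD_ONE_VGNAME_ARG  0x01u   /* command takes exactly one VG name arg */
#define READ_ALLOW_EXPORTED 0x01u   /* example of a vg_read behavior flag */

struct cmd_context { uint32_t cmd_flags; };

/* Command arg processing tests the cmd struct flag... */
static int check_args(const struct cmd_context *cmd, int vgname_count)
{
        if ((cmd->cmd_flags & CMD_ONE_VGNAME_ARG) && vgname_count != 1) {
                fprintf(stderr, "Exactly one VG name argument is required.\n");
                return 0;
        }
        return 1;
}

/* ...while vg_read() only sees flags that affect how the VG is read. */
static int vg_read_sketch(const char *vg_name, uint32_t read_flags)
{
        printf("reading %s (exported allowed: %u)\n",
               vg_name, (unsigned) (read_flags & READ_ALLOW_EXPORTED));
        return 1;
}

int main(void)
{
        struct cmd_context cmd = { .cmd_flags = CMD_ONE_VGNAME_ARG };

        if (!check_args(&cmd, 1))
                return 1;
        return !vg_read_sketch("vg0", 0);
}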
Running "vgremove -f VG & pvs" results in the pvs
command reporting that the VG is not found or is
inconsistent. If the VG is gone or being removed,
the pvs command should just skip it and not print
errors about it.
"Not found" is because the pvs command created the
list of VGs to process, including VG, then vgremove
removed the VG, then the pvs command came to read
the VG to process it and did not find it.
An "inconsistent" error could be reported if vgremove
had only partially completed removing VG when pvs did
vg_read on the VG to process it, causing pvs to find
the VG in a partially-removed state.
This fix adds a flag that pvs uses to ignore a VG
that can't be read or is inconsistent.
If 'vgcreate --shared' finds both sanlock and dlm are running,
print a more accurate error message:
"Found multiple lock managers, select one with --lock-type."
When neither is running, we still print:
"Failed to detect a running lock manager to select lock type."
Using --lock-type sanlock|dlm implies --shared.
Using --shared selects lock type sanlock|dlm
(by choosing the one that's running.)
Using both --shared and --lock-type sanlock|dlm should
also be allowed (--shared is just redundant information.)
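The selection rules above as a standalone C sketch (the two detection
helpers are placeholders; the real code queries the lock managers
themselves):

#include <stdbool.h>
#include <stdio.h>

/* Placeholder detection helpers. */
static bool sanlock_running(void) { return true; }
static bool dlm_running(void)     { return false; }

/* Returns the lock type to use, or NULL after printing an error. */
static const char *pick_lock_type(bool arg_shared, const char *arg_lock_type)
{
        bool san, dlm;

        /* --lock-type sanlock|dlm implies --shared, so combining the two
           options is allowed and --shared is simply redundant there. */
        if (arg_lock_type)
                return arg_lock_type;

        if (!arg_shared)
                return "none";          /* plain local VG */

        san = sanlock_running();
        dlm = dlm_running();

        if (san && dlm) {
                fprintf(stderr, "Found multiple lock managers, "
                                "select one with --lock-type.\n");
                return NULL;
        }
        if (!san && !dlm) {
                fprintf(stderr, "Failed to detect a running lock manager "
                                "to select lock type.\n");
                return NULL;
        }
        return san ? "sanlock" : "dlm";
}

int main(void)
{
        const char *lock_type = pick_lock_type(true, NULL);

        if (!lock_type)
                return 1;
        printf("using lock type %s\n", lock_type);
        return 0;
}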
The unlock call will fail in expected and normal cases,
and should not cause the command to fail. (An actual
unlock in the lock manager should never fail.)
Add new profilable configurables:
allocation/cache_policy
allocation/cache_settings
and mark allocation/cache_pool_chunk_size as profilable as well.
Obsolete allocation/cache_pool_cachemode and
introduce new allocation/cache_mode instead.
Rename DEFAULT_CACHE_POOL_POLICY to DEFAULT_CACHE_POLICY.
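An illustrative lvm.conf fragment using these names (the values shown are
only examples, assuming the usual lvm.conf section syntax):

allocation {
        cache_mode = "writethrough"
        cache_policy = "mq"
        cache_settings {
                # policy-specific settings go here
        }
        cache_pool_chunk_size = 64      # KiB
}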
When lvm is built without lvmlockd support, vgcreate using a
shared lock type would succeed and create a local VG (the
--shared option was effectively ignored). Make it fail.
Fix the same issue when using vgchange to change a VG to a
shared lock type.
Make the error messages consistent.
Keep the policy name separate from policy settings and avoid
mangling and demangling this string from the same config tree.
Ensure policy_name is always defined.
There are two different failure conditions detected in
access_vg_lock_type() that should have different error
messages. This adds another failure flag so the two
cases can be distinguished to avoid printing a misleading
error message.
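A generic sketch of the approach (the flag names and messages below are
made up for illustration, not necessarily the flags added by this commit):
returning distinct failure bits from the access check lets the caller
print the message that matches the actual cause.

#include <stdint.h>
#include <stdio.h>

/* Illustrative failure bits; lvm has its own FAILED_* flag space. */
#define FAILED_LOCK_TYPE 0x1u
#define FAILED_LOCK_MODE 0x2u

static uint32_t access_check_sketch(int lock_type_ok, int lock_mode_ok)
{
        uint32_t failure = 0;

        if (!lock_type_ok)
                failure |= FAILED_LOCK_TYPE;
        if (!lock_mode_ok)
                failure |= FAILED_LOCK_MODE;
        return failure;
}

int main(void)
{
        uint32_t failure = access_check_sketch(1, 0);

        /* Two distinct bits -> two distinct, non-misleading messages. */
        if (failure & FAILED_LOCK_TYPE)
                fprintf(stderr, "VG lock type is not usable here.\n");
        if (failure & FAILED_LOCK_MODE)
                fprintf(stderr, "VG lock mode does not permit this operation.\n");
        return failure ? 1 : 0;
}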
This prevents 'lvremove vgname' from attempting to remove the
hidden sanlock LV. Only vgremove should remove the hidden
sanlock LV holding the sanlock locks.
... Using uninitialized value "lockd_state" when calling "lockd_vg"
(even though lockd_vg assigns 0 to lockd_state, it looks at the
previous state of lockd_state just before that, so we need to have
it properly initialized!)
libdm/libdm-report.c:2934: uninit_use_in_call: Using uninitialized value "tm". Field "tm.tm_gmtoff" is uninitialized when calling "_get_final_time".
daemons/lvmlockd/lvmlockctl.c:273: uninit_use_in_call: Using uninitialized element of array "r_name" when calling "format_info_r_action". (just added FIXME as this looks unfinished?)
In process_each_{vg,lv,pv} when no vgname args are given,
the first step is to get a list of all vgid/vgname on the
system. This is exactly what lvmetad returns from a
vg_list request. The current code is doing a vg_lookup
on each VG after the vg_list and populating lvmcache with
the info for each VG. These preliminary vg_lookup's are
unnecessary, because they will be done again when the
processing functions call vg_read. This patch eliminates
the initial round of vg_lookup's, which can roughly cut in
half the number of lvmetad requests and save a lot of extra work.
Routines responsible for polling of in-progress pvmove, snapshot merge
or mirror conversion each used custom lookup functions to find the VG
and LV involved in the polling.
In particular, pvmove used the PV name to look up the in-progress pvmove.
The future lvmpolld will poll each operation by vg/lv name (internally
by lvid). There are also plans to make pvmove able to move non-overlapping
ranges of extents instead of whole PVs as it does now, which would
likewise require identifying the operation in a different manner.
The poll_operation_id structure, together with the daemon_parms structure,
unambiguously identifies the polling task.
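A minimal sketch of what such an identifier might carry (hypothetical
field names; the real structure lives in the polling code): VG name, LV
name and the lvid, so an in-progress operation can be found without
relying on a PV name.

#include <stdio.h>

/* Hypothetical sketch of a polling-operation identifier. */
struct poll_operation_id_sketch {
        const char *vg_name;
        const char *lv_name;
        const char *lv_uuid;        /* the lvid, identifying the LV uniquely */
        const char *display_name;   /* e.g. "vg/lv" for log messages */
};

int main(void)
{
        struct poll_operation_id_sketch id = {
                .vg_name = "vg0",
                .lv_name = "pvmove0",
                .lv_uuid = "hypothetical-lvid-string",
                .display_name = "vg0/pvmove0",
        };

        /* The poller looks the operation up by vg/lv name (internally by
           lvid), not by the PV name as the old pvmove lookup did. */
        printf("polling %s (lvid %s)\n", id.display_name, id.lv_uuid);
        return 0;
}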
With use_lvmetad=0 and duplicate PVs /dev/loop0 and /dev/loop1,
where in this example /dev/loop1 is the cached device
referenced by pv->dev, the command 'pvs /dev/loop0' reports:
Failed to find physical volume "/dev/loop0".
This is because the duplicate PV detection by pvid is
not working because _get_all_devices() is not setting
any dev->pvid for any entries. This is because the
pvid information has not yet been saved in lvmcache.
This is fixed by calling _get_vgnameids_on_system()
before _get_all_devices(), which has the effect of
caching the necessary pvid information.
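A toy standalone illustration of why the ordering matters (simplified
structures; seed_pvids() stands in for the pvid caching done via
_get_vgnameids_on_system): duplicate detection can only compare pvids
once they have been filled in before the device list is built.

#include <stdio.h>
#include <string.h>

/* Toy model: two devices carrying the same PV (duplicates). */
struct dev { const char *name; char pvid[40]; };

/* Stand-in for seeding lvmcache so dev->pvid gets populated. */
static void seed_pvids(struct dev *devs, int n)
{
        for (int i = 0; i < n; i++)
                strcpy(devs[i].pvid, "PVID-0001");
}

int main(void)
{
        struct dev devs[2] = { { "/dev/loop0", "" }, { "/dev/loop1", "" } };

        /* The fix: populate pvid info *before* building the device list.
           Without this step both pvids stay empty and the duplicate is
           never noticed, so 'pvs /dev/loop0' fails to find the PV. */
        seed_pvids(devs, 2);

        /* Building the all-devices list can now detect the duplicate. */
        if (devs[0].pvid[0] && !strcmp(devs[0].pvid, devs[1].pvid))
                printf("Found duplicate PV %s: using %s not %s\n",
                       devs[0].pvid, devs[1].name, devs[0].name);
        return 0;
}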
With this fix, running pvs /dev/loop0, or pvs /dev/loop1,
produces no error and one line of output for the PV (the
device printed is the one cached in pv->dev, in this
example /dev/loop1.)
Running 'pvs /dev/loop0 /dev/loop1' produces no error
and two lines of output, with each device displayed
on one of the lines.
Running 'pvs -a' shows two PVs, one with loop0 and one
with loop1, and both shown as a member of the same VG.
Running 'pvs' shows only one of the duplicate PVs,
and that shows the device cached in pv->dev (loop1).
The above output is what the duplicate handling code
was previously designed to output in commits:
b64da4d8b5 toollib: search for duplicate PVs only when needed
3a7c47af0e toollib: pvs -a should display VG name for each duplicate PV
57d74a45a0 toollib: override the PV device with duplicates
c1f246fedf toollib: handle duplicate pvs in process_in_pv
As a further step after this, we may choose to change
some of those.
For all of these commands, a warning is printed about
the existence of the duplicate PVs:
Found duplicate PV ...: using /dev/loop1 not /dev/loop0
Sharing a connection between the parent command and background
processes spawned from the parent could lead to occasional failures
due to unexpected corruption in daemon responses sent to either the
child or the parent.
lvmetad issued a warning about duplicate config values in a request.
LVM commands occasionally failed with an internal error after receiving
a corrupted response.
The lvmetad connection is now renewed when needed after an explicit
disconnect in the child.
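A standalone sketch of the pattern with plain UNIX sockets standing in
for the lvmetad client library (the socket path and helper are
placeholders): the child drops the inherited connection and opens its
own, so parent and background process never share one daemon connection.

#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/un.h>
#include <unistd.h>

/* Open a private connection to a (placeholder) daemon socket path. */
static int daemon_connect(const char *path)
{
        struct sockaddr_un sa = { .sun_family = AF_UNIX };
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);

        if (fd < 0)
                return -1;
        strncpy(sa.sun_path, path, sizeof(sa.sun_path) - 1);
        if (connect(fd, (struct sockaddr *) &sa, sizeof(sa)) < 0) {
                close(fd);
                return -1;
        }
        return fd;
}

int main(void)
{
        const char *path = "/run/example/daemon.socket";   /* placeholder */
        int fd = daemon_connect(path);
        pid_t pid = fork();

        if (pid < 0)
                return 1;

        if (pid == 0) {
                /* Child (background polling): explicitly disconnect the
                   inherited connection and renew it when needed, so the
                   parent's responses cannot get corrupted. */
                if (fd >= 0)
                        close(fd);
                fd = daemon_connect(path);
                /* ... child does its own requests on its own fd ... */
                _exit(fd >= 0 ? 0 : 1);
        }

        /* Parent keeps using its original connection only. */
        if (fd >= 0)
                close(fd);
        return 0;
}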
Spawning background polling from within the lv_change_activate
fn ran into two problems:
1) vgchange should not spawn any background polling until after
the whole activation process for a VG is finished. Otherwise
it could lead to a duplicate request for spawning background
polling. This was already true with one exception, mirror
up-conversion polling (fixed by this commit).
2) due to the current conditions in lv_change_activate, the lvchange cmd
couldn't start background polling for pvmove LVs if such an LV was
about to be activated by the command at the same time.
This commit, however, doesn't alter the lvchange cmd to work the same as
vgchange with regard to not spawning duplicate background polling per
unique LV.
Do not keep dangling LVs if they're removed from the vg->lvs list;
move them to vg->removed_lvs instead (this is similar to the already
existing vg->removed_pvs list, just for LVs now).
Once this vg->removed_lvs list is indexed so that LV lookups are
fast, we can remove the LV_REMOVED flag, as it won't be needed
anymore - instead of checking the flag, we can directly check
whether the LV is present in the vg->removed_lvs list to decide
whether it is removed. For now, we don't have this index, but it
may be implemented in the future.
This avoids a problem when using selection on the LV list - we
need to do the selection on the initial state, not on any intermediate
state as we process LVs one by one, because some of the relations among
LVs can be gone during this processing.
For example, processing one LV can cause other LVs to lose their
relation to it, and hence they're not selectable anymore with
the original selection criteria as they would be if we did the selection
on the initial state. A perfect example is with thin snapshots:
$ lvs -o lv_name,origin,layout,role vg
LV Origin Layout Role
lvol1 thin,sparse public,origin,thinorigin,multithinorigin
lvol2 lvol1 thin,sparse public,snapshot,thinsnapshot
lvol3 lvol1 thin,sparse public,snapshot,thinsnapshot
pool thin,pool private
$ lvremove -ff -S 'lv_name=lvol1 || origin=lvol1'
Logical volume "lvol1" successfully removed
The lvremove command above was supposed to remove lvol1 as well as
all its snapshots which have origin=lvol1. It failed to do so, because
once we removed the origin lvol1, lvol2 and lvol3, which were
snapshots before, are not snapshots anymore - the relations change
as we're processing these LVs one by one.
If we do the selection first and then execute any concrete actions on
these LVs (which is what this patch does), the behaviour is correct
then - the selection is done on the *initial state*:
$ lvremove -ff -S 'lv_name=lvol1 || origin=lvol1'
Logical volume "lvol1" successfully removed
Logical volume "lvol2" successfully removed
Logical volume "lvol3" successfully removed
Similarly for all the other situations in which relations among
LVs are being changed by processing the LVs one by one.
This patch also introduces LV_REMOVED internal LV status flag
to mark removed LVs so they're not processed further when we
iterate over collected list of LVs to be processed.
Previously, when we iterated directly over vg->lvs list to
process the LVs, we relied on the fact that once the LV is removed,
it is also removed from the vg->lvs list we're iterating over.
But that was incorrect, as we shouldn't remove LVs from the list
we're iterating over during that same iteration
(dm_list_iterate_items_safe can handle only one removal per
iteration anyway, so it can't be used here).
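The select-then-process idea as a standalone sketch with a plain array
standing in for vg->lvs (toy types only): selection is evaluated against
the initial state before anything is removed, and a removed flag, playing
the role of LV_REMOVED, keeps later processing away from dead entries.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

struct lv {
        const char *name;
        const char *origin;   /* "" when the LV is not a snapshot */
        bool selected;
        bool removed;         /* stands in for the LV_REMOVED status flag */
};

/* Selection criterion from the example: lv_name=lvol1 || origin=lvol1 */
static bool matches(const struct lv *lv)
{
        return !strcmp(lv->name, "lvol1") || !strcmp(lv->origin, "lvol1");
}

int main(void)
{
        struct lv lvs[] = {
                { "lvol1", "",      false, false },
                { "lvol2", "lvol1", false, false },
                { "lvol3", "lvol1", false, false },
                { "pool",  "",      false, false },
        };
        int n = 4;

        /* 1) Select against the initial state, before any removal
              changes the origin relations. */
        for (int i = 0; i < n; i++)
                lvs[i].selected = matches(&lvs[i]);

        /* 2) Process the collected selection; removing lvol1 here can no
              longer un-select its snapshots. */
        for (int i = 0; i < n; i++) {
                if (!lvs[i].selected || lvs[i].removed)
                        continue;
                lvs[i].removed = true;   /* "remove" it; relations may change now */
                printf("Logical volume \"%s\" successfully removed\n",
                       lvs[i].name);
        }
        return 0;
}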
In log messages refer to it as system ID (not System ID).
Do not put quotes around the system_id string when printing.
On the command line use systemid.
In code, metadata, and config files use system_id.
In lvmsystemid refer to the concept/entity as system_id.
Commands that can never use foreign VGs begin with
cmd->error_foreign_vgs = 1. This tells the vg_read
lib layer to print an error as soon as a foreign VG
is read.
The toollib process_each layer also prints an error if a
foreign VG is read, but is more selective about it. It
won't print an error if the command did not explicitly
name the foreign VG. We want to silently ignore foreign VGs
unless a command attempts to use one explicitly.
So, foreign VG errors are printed from two different layers:
vg_read (lower layer) and process_each (upper layer).
Commands that use toollib process_each only want errors from
the process_each layer, not from both layers. So, process_each
disables the lower layer vg_read error message by setting
error_foreign_vgs = 0.
Commands that do not use toollib process_each, want errors
from the vg_read layer, otherwise they would get no error
message. The original cmd->error_foreign_vgs setting
enables this error.
(Commands that are allowed to operate on foreign VGs always
begin with cmd->error_foreign_vgs = 0, and all the commands
in this group use toollib process_each with the selective
error reporting.)
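A standalone sketch of the two layers (toy types; in lvm the flag sits in
the cmd context): the read layer errors on a foreign VG only when
error_foreign_vgs is set, and process_each clears it and reports only for
VGs the user explicitly named.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

struct cmd { bool error_foreign_vgs; };
struct vg  { const char *name; bool foreign; };

/* Lower layer: reports a foreign VG only if the command asked it to. */
static bool vg_read_sketch(struct cmd *cmd, struct vg *vg)
{
        if (vg->foreign && cmd->error_foreign_vgs) {
                fprintf(stderr, "Cannot access VG %s from foreign system.\n",
                        vg->name);
                return false;
        }
        return true;
}

/* Upper layer: silences the lower layer and errors only for VGs the
   user named explicitly on the command line. */
static void process_each_vg_sketch(struct cmd *cmd, struct vg *vgs, int n,
                                   const char *named_vg)
{
        cmd->error_foreign_vgs = false;   /* selective reporting happens here */

        for (int i = 0; i < n; i++) {
                if (!vg_read_sketch(cmd, &vgs[i]))
                        continue;
                if (vgs[i].foreign) {
                        if (named_vg && !strcmp(named_vg, vgs[i].name))
                                fprintf(stderr,
                                        "Cannot access VG %s from foreign system.\n",
                                        vgs[i].name);
                        continue;   /* otherwise silently ignore it */
                }
                printf("processing %s\n", vgs[i].name);
        }
}

int main(void)
{
        struct cmd cmd = { .error_foreign_vgs = true };  /* command default */
        struct vg vgs[] = { { "local0", false }, { "foreign0", true } };

        process_each_vg_sketch(&cmd, vgs, 2, NULL);
        return 0;
}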
When checking whether the system ID permits access to a VG, check for
each permitted situation first, and only then issue the appropriate
error message. Always issue a message for now. (We'll try to
suppress some of those later when the VG concerned wasn't explicitly
requested.)
Add more messages to try to ensure every return code is checked and
every error path (and only an error path) contains a log_error().
Add self-correction to vgchange -c to deal with situations where
the cluster state and system ID state are out-of-sync (e.g. if
old tools were used).
(This reverts patch #d95c6154)
Filter the complete device list through full_filter unconditionally when
we're getting the list of *all* devices, even if we're interested
in only a fraction of those devices - the PVs, not the other devices
which are not PVs yet (e.g. pvs vs. pvs -a).
We need to do this full filtering whenever we're handling the *complete*
list of devices; we need to be safe here, mainly in case of any future
changes where we'd forget to switch to the proper filtering.
This also properly prevents duplicates if any block subsystem
components are used (mpath, MD ...).
The thing here is that (under use_lvmetad=1) cmd->filter can be used
only if we're sure that the list of devices we're filtering contains
only PVs. Otherwise we have to use cmd->full_filter (as in the
_get_all_devices fn, which acquires the complete list of devices,
whether they are PVs or not).
Of course, cmd->full_filter is more extensive than cmd->filter
which is only a subset of full_filter.
We could optimize this so that if we're interested in PVs only
during process_each_pv processing (e.g. pvs in contrast to pvs -a),
we'd get the list of PV devices directly from lvmetad via the
lvmcache_seed_infos_from_lvmetad fn call, which currently updates
lvmcache only. We'd add an additional output arg to this fn so it
also returns the list of PV devices directly, without needing to
iterate over all devices (which include non-PVs we're not interested
in anyway); hence we could use only cmd->filter, not cmd->full_filter.
So the code would look something like this:
static int _get_all_devices(....)
{
        struct device_id_list *dil;

        if (interested_in_pvs_only)
                /* new "dil" arg; the "dil" list would be filtered through
                   cmd->filter inside lvmcache_seed_infos_from_lvmetad */
                lvmcache_seed_infos_from_lvmetad(cmd, &dil);
        else {
                lvmcache_seed_infos_from_lvmetad(cmd, NULL);
                dev_iter_create(cmd->full_filter);
                while ((dev = dev_iter_get(...))) {
                        dm_list_add(all_devices, &dil->list);
                }
        }
}
It's cleaner this way - do not mix static and dynamic
(init_processing_handle) initializers. Use the dynamic one everywhere.
This makes the code easier to manage - there are no "exceptions"
then, and we don't need to keep track of two ways of initializing the
same thing - just use one common initializer throughout.
Also, add more comments, mainly in the report_for_selection fn explaining
what is being done and why with respect to the processing_handle and
selection_handle.
We still need to get the list as the calls underneath process_each_pv
rely on this list. But still keep the change related to the filters -
if we're processing all devices, we need to use cmd->full_filter.
If we're processing only PVs, we can use cmd->filter only to save
some time which would be spent in filtering code.
When lvmetad is used and at the same time we're getting a list of all
PV-capable devices, we can't use cmd->filter (which is used to filter
lvmetad responses, where we're sure the devices are PVs already).
To get the list of PV-capable devices, we're bypassing lvmetad (since
lvmetad only caches PVs, not all the other devices which are not PVs).
For this reason, we have to use the "full_filter" filter chain (just
like we do when we're running without lvmetad).
Example scenario:
- sdo and sdp components of MD device md0
- sdq, sdr and sds components of mpatha multipath device
- mpatha multipath device partitioned
- vda device partitioned
=> sdo, sdp, sdq, sdr, sds, mpatha and vda should be filtered!
$ lsblk -o NAME,TYPE
NAME TYPE
sdn disk
sdo disk
`-md0 raid0
sdp disk
`-md0 raid0
sdq disk
`-mpatha mpath
`-mpatha1 part
sdr disk
`-mpatha mpath
`-mpatha1 part
sds disk
`-mpatha mpath
`-mpatha1 part
vda disk
|-vda1 part
`-vda2 part
|-fedora-swap lvm
`-fedora-root lvm
Before this patch:
==================
use_lvmetad=0 (correct behaviour!)
$ pvs -a
PV VG Fmt Attr PSize PFree
/dev/fedora/root --- 0 0
/dev/fedora/swap --- 0 0
/dev/mapper/mpatha1 --- 0 0
/dev/md0 --- 0 0
/dev/sdn --- 0 0
/dev/vda1 --- 0 0
/dev/vda2 fedora lvm2 a-- 9.51g 0
use_lvmetad=1 (incorrect behaviour - sdo,sdp,sdq,sdr,sds and mpatha not filtered!)
$ pvs -a
PV VG Fmt Attr PSize PFree
/dev/fedora/root --- 0 0
/dev/fedora/swap --- 0 0
/dev/mapper/mpatha --- 0 0
/dev/mapper/mpatha1 --- 0 0
/dev/md0 --- 0 0
/dev/sdn --- 0 0
/dev/sdo --- 0 0
/dev/sdp --- 0 0
/dev/sdq --- 0 0
/dev/sdr --- 0 0
/dev/sds --- 0 0
/dev/vda --- 0 0
/dev/vda1 --- 0 0
/dev/vda2 fedora lvm2 a-- 9.51g 0
With this patch applied:
========================
use_lvmetad=1
$ pvs -a
PV VG Fmt Attr PSize PFree
/dev/fedora/root --- 0 0
/dev/fedora/swap --- 0 0
/dev/mapper/mpatha1 --- 0 0
/dev/md0 --- 0 0
/dev/sdn --- 0 0
/dev/vda1 --- 0 0
/dev/vda2 fedora lvm2 a-- 9.51g 0
The list of all devices is only needed if we want to process devices
which are not PVs (e.g. pvs -a). If this is not the case, it's
useless to get the list of all devices and then discard it without
any use, which is exactly what happened in process_each_pv: the
code using the list was never reached and the list went unused when
we were processing just PVs, not all PV-capable devices:
int process_each_pv(...)
{
        ...
        process_all_devices = process_all_pvs &&
                              (cmd->command->flags & ENABLE_ALL_DEVS) &&
                              arg_count(cmd, all_ARG);
        ...
        /*
         * If the caller wants to process all devices (not just PVs), then all PVs
         * from all VGs are processed first, removing them from all_devices. Then
         * any devs remaining in all_devices are processed.
         */
        _get_all_devices(cmd, &all_devices);
        ...
        ret = _process_pvs_in_vgs(...);
        ...
        if (!process_all_devices)
                goto out;

        ret = _process_device_list(cmd, &all_devices, handle, process_single_pv);
        ...
}
This patch adds missing check for "process_all_devices" and it gets the
list of all (including non-PV) devices only if needed:
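Roughly, the added check has this shape (a sketch in the style of the
excerpt above, with details elided):

        if (process_all_devices) {
                /* only get all devices when non-PV devices will be processed */
                _get_all_devices(cmd, &all_devices);
        }
        ...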