(This reverts patch #d95c6154)
Filter the complete device list through full_filter unconditionally when
we're getting the list of *all* devices, even in case we're interested
only in a fraction of those devices - the PVs, not the other devices
which are not PVs yet (e.g. pvs vs. pvs -a).
We need to do this full filtering whenever we're handling a *complete*
list of devices - we need to be safe here, mainly in case of any future
changes where we'd forget to switch to the proper filtering.
Also, properly prevent duplicates if any block subsystem components
are used (mpath, MD, ...).
The thing here is that (under use_lvmetad=1) cmd->filter can be used
only if we're sure that the list of devices we're filtering contains
only PVs. Otherwise we have to use cmd->full_filter (as in the case
of the _get_all_devices fn, which acquires the complete list of devices,
PVs or not).
Of course, cmd->full_filter is more extensive than cmd->filter,
which is only a subset of full_filter.
We could optimize this so that if we're interested in PVs only
during process_each_pv processing (e.g. using pvs in contrast to pvs -a),
we'd get the list of PV devices directly from lvmetad via the
lvmcache_seed_infos_from_lvmetad fn call, which currently updates
lvmcache only. We'd add an additional output arg to this fn to also
return the list of PV devices directly, without a need to iterate
over all devices (which include the non-PVs we're not interested in
anyway); hence we could use only cmd->filter, not cmd->full_filter.
So the code would look something like this:
static int _get_all_devices(....)
{
        struct device_id_list *dil;

        if (interested_in_pvs_only)
                /* new "dil" arg; the "dil" list would be filtered through
                 * cmd->filter inside lvmcache_seed_infos_from_lvmetad */
                lvmcache_seed_infos_from_lvmetad(cmd, &dil);
        else {
                lvmcache_seed_infos_from_lvmetad(cmd, NULL);
                dev_iter_create(cmd->full_filter);
                while ((dev = dev_iter_get(...))) {
                        dm_list_add(all_devices, &dil->list);
                }
        }
}
It's cleaner this way - do not mix static and dynamic
(init_processing_handle) initializers; use the dynamic one everywhere.
This makes the code easier to manage - there are no "exceptions"
then and we don't need to take care of two ways of initializing the
same thing - just use one common initializer throughout and it's clear.
Also, add more comments, mainly in the report_for_selection fn explaining
what is being done and why with respect to the processing_handle and
selection_handle.
Invalid devices are no longer included in the counters printed at the end.
You may now need to use --ignoreskippedcluster if relying upon the exit status.
If more than one change is requested per-PV, attempt to perform them
all. Note that different arguments still handle exit status
differently.
We still need to get the list, as the calls underneath process_each_pv
rely on it. But keep the change related to the filters:
if we're processing all devices, we need to use cmd->full_filter;
if we're processing only PVs, we can use just cmd->filter to save
the time that would otherwise be spent in the filtering code.
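A minimal, self-contained sketch of that filter choice (names simplified; "struct dev_filter" here is a stand-in, not lvm2's real definition):

/* Hedged sketch: pick the cheaper filter chain when only PVs are processed. */
struct dev_filter;  /* opaque stand-in for lvm2's filter chain type */

static struct dev_filter *_choose_filter(struct dev_filter *filter,
                                         struct dev_filter *full_filter,
                                         int processing_pvs_only)
{
        /* lvmetad responses are known PVs already, so the subset filter
         * (cmd->filter) suffices; any complete device list must go
         * through the full chain (cmd->full_filter) */
        return processing_pvs_only ? filter : full_filter;
}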
When lvmetad is used and we're getting the list of all PV-capable
devices at the same time, we can't use cmd->filter (it is used to filter
lvmetad responses, so with it we're sure the devices are PVs already).
To get the list of PV-capable devices, we're bypassing lvmetad (since
lvmetad caches only PVs, not all the other devices which are not PVs).
For this reason, we have to use the "full_filter" filter chain (just
like we do when running without lvmetad).
Example scenario:
- sdo and sdp are components of MD device md0
- sdq, sdr and sds are components of multipath device mpatha
- multipath device mpatha is partitioned
- device vda is partitioned
=> sdo, sdp, sdq, sdr, sds, mpatha and vda should be filtered!
$ lsblk -o NAME,TYPE
NAME            TYPE
sdn             disk
sdo             disk
`-md0           raid0
sdp             disk
`-md0           raid0
sdq             disk
`-mpatha        mpath
  `-mpatha1     part
sdr             disk
`-mpatha        mpath
  `-mpatha1     part
sds             disk
`-mpatha        mpath
  `-mpatha1     part
vda             disk
|-vda1          part
`-vda2          part
  |-fedora-swap lvm
  `-fedora-root lvm
Before this patch:
==================
use_lvmetad=0 (correct behaviour!)
$ pvs -a
  PV                  VG     Fmt  Attr PSize PFree
  /dev/fedora/root               ---      0     0
  /dev/fedora/swap               ---      0     0
  /dev/mapper/mpatha1            ---      0     0
  /dev/md0                       ---      0     0
  /dev/sdn                       ---      0     0
  /dev/vda1                      ---      0     0
  /dev/vda2           fedora lvm2 a--  9.51g    0
use_lvmetad=1 (incorrect behaviour - sdo, sdp, sdq, sdr, sds, mpatha and vda not filtered!)
$ pvs -a
  PV                  VG     Fmt  Attr PSize PFree
  /dev/fedora/root               ---      0     0
  /dev/fedora/swap               ---      0     0
  /dev/mapper/mpatha             ---      0     0
  /dev/mapper/mpatha1            ---      0     0
  /dev/md0                       ---      0     0
  /dev/sdn                       ---      0     0
  /dev/sdo                       ---      0     0
  /dev/sdp                       ---      0     0
  /dev/sdq                       ---      0     0
  /dev/sdr                       ---      0     0
  /dev/sds                       ---      0     0
  /dev/vda                       ---      0     0
  /dev/vda1                      ---      0     0
  /dev/vda2           fedora lvm2 a--  9.51g    0
With this patch applied:
========================
use_lvmetad=1
$ pvs -a
  PV                  VG     Fmt  Attr PSize PFree
  /dev/fedora/root               ---      0     0
  /dev/fedora/swap               ---      0     0
  /dev/mapper/mpatha1            ---      0     0
  /dev/md0                       ---      0     0
  /dev/sdn                       ---      0     0
  /dev/vda1                      ---      0     0
  /dev/vda2           fedora lvm2 a--  9.51g    0
The list of all devices is only needed if we want to process devices
which are not PVs (e.g. pvs -a). If this is not the case, it's
useless to get the list of all devices and then discard it without
any use, which is exactly what happened in process_each_pv, where
the code was never reached and the list went unused if we were
processing just PVs, not all PV-capable devices:
int process_each_pv(...)
{
        ...
        process_all_devices = process_all_pvs &&
                              (cmd->command->flags & ENABLE_ALL_DEVS) &&
                              arg_count(cmd, all_ARG);
        ...
        /*
         * If the caller wants to process all devices (not just PVs), then all PVs
         * from all VGs are processed first, removing them from all_devices. Then
         * any devs remaining in all_devices are processed.
         */
        _get_all_devices(cmd, &all_devices);
        ...
        ret = _process_pvs_in_vgs(...);
        ...
        if (!process_all_devices)
                goto out;

        ret = _process_device_list(cmd, &all_devices, handle, process_single_pv);
        ...
}
This patch adds the missing check for "process_all_devices", getting the
list of all (including non-PV) devices only if needed.
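A hedged sketch of the added check, in the style of the excerpt above (simplified; the actual diff may differ):

if (process_all_devices) {
        /* the complete list is needed only for e.g. "pvs -a" */
        if (!_get_all_devices(cmd, &all_devices))
                return ECMD_FAILED;
}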
This is a followup patch to the previous patchset that enables selection in
process_each_* fns; it fixes an issue where field prefixes were not
automatically applied to fields in selection criteria.
Use an initial report type that matches the intention of each process_each_* function:
- _process_pvs_in_vg - PVS
- process_each_vg - VGS
- process_each_lv and process_each_lv_in_vg - LVS
This is not normally needed for the selection handle init, BUT we would
miss the field prefix matching otherwise, e.g.
lvchange -ay -S 'name=lvol0'
The "name" above would not work if we didn't initialize reporting with
the LVS type at its start. If we pass the proper init type, the reporting
code can deduce the prefix automatically ("lv_name" in this case).
This report type is then changed further based on the selection criteria we
have. When doing pure selection, not report output, the final report type
is based purely on the combination of this initial report type and the
report types of the fields used in the selection criteria.
We already allowed -S|--select with {vg,lv,pv}display -C (which
was then equal to the {vg,lv,pv}s command). Since we support selection
in toollib now, we can also support -S without using -C in *display
commands.
pvchange is an exception that does not use toollib yet for iterating
over the list of PVs (process_each_pv), so initialize the
processing_handle and use it just like it's used in toollib.
We have 3 input report types:
- LVS (representing "_select_match_lv")
- VGS (representing "_select_match_vg")
- PVS (representing "_select_match_pv")
The input report type is saved in struct selection_handle's "orig_report_type"
variable.
However, users can use any combination of fields of different report types in
selection criteria - the resulting report type can thus differ. The struct
selection_handle's "report_type" variable stores this resulting report type.
The resulting report_type can end up as one of:
- LVS
- VGS
- PVS
- SEGS
- PVSEGS
This patch adds logic to report_for_selection based on a (sensible) combination
of orig_report_type and report_type; it calls the appropriate reporting functions,
or iterates over multiple items that need reporting, to determine the selection
result.
report_for_selection does the actual "reporting for selection only".
The selection status is saved in struct selection_handle's "selected"
variable.
The code to determine the final report type based on the combination of input
report types (determined from the fields used for reporting to output and for
selection) can be reused for pure reporting for selection - factor this code
out into the _get_final_report_type function.
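As a rough, self-contained illustration of the idea (the names, values and the simple OR below are assumptions for the sketch, not lvm2's actual report_type_t handling):

/* hedged sketch: report types as bits that selection fields can widen */
typedef enum {
        SK_LVS    = 0x01,
        SK_VGS    = 0x02,
        SK_PVS    = 0x04,
        SK_SEGS   = 0x08,
        SK_PVSEGS = 0x10
} sk_report_type_t;

static sk_report_type_t _sk_final_report_type(sk_report_type_t initial,
                                              sk_report_type_t criteria_fields)
{
        /* e.g. an initial SK_VGS combined with an LV field used in the
         * selection criteria yields a type that also needs LV iteration;
         * the real code additionally rejects nonsensical combinations */
        return (sk_report_type_t) (initial | criteria_fields);
}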
This applies to:
- process_each_lv_in_vg - the VG is selected only if at least one of its LVs is selected
- process_each_segment_in_lv - the LV is selected only if at least one of its LV segments is selected
- process_each_pv_in_vg - the VG is selected only if at least one of its PVs is selected
- process_each_segment_in_pv - the PV is selected only if at least one of its PV segments is selected
So this patch causes the selection result to be properly propagated up to callers.
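A hedged, self-contained sketch of this propagation rule (generic names; the real loops carry lvm2-specific state):

#include <stddef.h>

struct sk_selection_handle { int selected; };   /* stand-in, not lvm2's struct */

/* returns 1 if the parent (VG, LV, PV) is selected, i.e. if at least
 * one of its children (LVs, segments, PVs) matched the criteria */
static int sk_propagate_selection(struct sk_selection_handle *sh,
                                  int (*process_child)(void *child,
                                                       struct sk_selection_handle *sh),
                                  void **children, size_t n)
{
        int whole_selected = 0;
        size_t i;

        for (i = 0; i < n; i++) {
                process_child(children[i], sh);
                /* each child's processing sets sh->selected for that child */
                whole_selected |= sh->selected;
        }
        /* propagate the accumulated result up to the caller */
        sh->selected = whole_selected;
        return whole_selected;
}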
Call _init_processing_handle, _init_selection_handle and
_destroy_processing_handle in process_each_* and related functions to
set up and destroy handles used while processing items.
init_processing_handle, init_selection_handle and
destroy_processing_handle are helper functions that allocate and
initialize the handles used when processing items in process_each_*
and related functions.
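A hedged usage sketch of the helper trio (the exact signatures are assumptions inferred from the description above):

struct processing_handle *handle;

if (!(handle = init_processing_handle(cmd)))            /* allocate + init */
        return ECMD_FAILED;
if (!init_selection_handle(cmd, handle, initial_report_type)) {
        destroy_processing_handle(cmd, handle);
        return ECMD_FAILED;
}
/* ... run process_each_* with "handle" here ... */
destroy_processing_handle(cmd, handle);                 /* tear down both handles */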
The "struct processing_handle" contains handles to drive the selection/matching
so pass it to the _select_match_* functions which are entry points to the
selection mechanism used in process_each_* and related functions.
This is a revised and edited version of a former patch by Dave Teigland,
which provided the starting point for all the select support in
process_each_* fns.
The new "report_init_for_selection" is just a wrapper over
dm_report_init_with_selection that initializes reporting for selection
only. This means we're not going to do the actual reporting to output
for display and as such we intialize reporting as if no fields are reported
or sorted. The only fields "reported" are taken from the selection criteria
string and all such fields are marked as hidden automatically (FLD_HIDDEN flag).
These fields are used solely for selection criteria matching.
Also, modify the existing report_object function that was used for reporting
to output for display. Now it can either report to output or report for
selection only. The selection result is stored in struct selection_handle's
"selected" variable, which can be handled further by any report_object caller.
This patch replaces "void *handle" with "struct processing_handle *handle"
in process_each_*, process_single_* and related functions.
The struct processing_handle now contains two handles:
- the "struct selection_handle *selection_handle" used for
applying selection criteria while processing process_each_*,
process_single_* and related functions (patches using this
logic will follow)
- the "void* custom_handle" (this is actually the original handle
used before this patch - a pointer to custom data passed into
process_each_*, process_single_* and related functions).
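A minimal sketch of the layout as described (the real struct may carry additional members):

struct selection_handle;   /* carries orig_report_type, report_type and "selected" */

struct processing_handle {
        struct selection_handle *selection_handle; /* drives -S|--select matching */
        void *custom_handle; /* the caller's private data (the former "void *handle") */
};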
Once the LVM_COMMAND_PROFILE environment variable is specified, the referenced
profile is used just as if it were specified using "<lvm command> --commandprofile".
If both the --commandprofile cmd line option and the LVM_COMMAND_PROFILE env
var are used, the --commandprofile cmd line option gets preference.
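For example (a hedged illustration; "myprofile" and "otherprofile" are made-up profile names):
$ LVM_COMMAND_PROFILE=myprofile lvs
  # the "myprofile" profile is applied
$ LVM_COMMAND_PROFILE=myprofile lvs --commandprofile otherprofile
  # the --commandprofile option takes preference, so "otherprofile" is applied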
After commit 158e998876, where we may start to read lv_attr with a
'shared' ioctl call for a single lvs line, we were obtaining a single
status for thin pools.
However, this does not properly reflect lvm2 reality.
Correct this by reading lv status from the layered thin pool, but lv info
from the non-layered (linear) mapped device, which is maintained for proper
cluster locking.
When repairing a thin pool or swapping thin pool metadata,
preserve the chunk_size property and avoid having it automatically changed
later in the code to better match the thin pool metadata size.
Add a separate LVSINFOSTATUS field type for fields which display both
dm info-like and dm status-like information.
The internal interface has been there since the introduction of the
LVSSTATUS field type, which can cope with a combination of LVSSTATUS
and LVSINFO field types (across several fields).
However, till now we considered that a *single* field could display
either LVSINFO or LVSSTATUS, but not both at the same time.
Till now we haven't had a single field which needs both - hence
add the LVSINFOSTATUS field type for such fields, as we currently
need this for the lv_attr field, which requires a combination of
info and status.
This patch just adds the interface for the ability to register such fields
(the code that copes with this is already in).
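Illustratively (the flag values below are made up; lvm2's actual field type constants differ):

/* hedged sketch: field types distinguished by the data needed per line */
#define SK_LVSINFO        0x01  /* field needs the DM_DEVICE_INFO result */
#define SK_LVSSTATUS      0x02  /* field needs the DM_DEVICE_STATUS result */
#define SK_LVSINFOSTATUS  (SK_LVSINFO | SK_LVSSTATUS)  /* e.g. lv_attr needs both */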
A full search for duplicate PVs in the case of pvs -a
is only necessary when duplicates have previously been
detected in lvmcache. Use a global variable from lvmcache
to indicate that duplicate PVs exist, so we can skip the
search for duplicates when none exist.
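A hedged, self-contained sketch of that flag (function names are illustrative; the real lvm2 code keeps this inside lvmcache):

/* hedged sketch: a global flag set once duplicates are seen while scanning */
static int _found_duplicate_pvs = 0;

void lvmcache_set_duplicate_pvs_found(void)
{
        _found_duplicate_pvs = 1;
}

int lvmcache_found_duplicate_pvs(void)
{
        /* lets "pvs -a" skip the full duplicate search when none exist */
        return _found_duplicate_pvs;
}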
Previously, 'pvs -a' displayed the VG name for only the device
associated with the cached PV (pv->dev), and other duplicate
devices would have a blank VG name. This commit displays the
VG name for each of the duplicate devices. The cost of doing
this is not small: for each PV processed, the list of all
devices must be searched for duplicates.
When multiple duplicate devices are specified on the
command line, the PV is processed once for each of them,
but pv->dev is the device used each time.
This overrides the PV device to reflect the duplicate
device that was specified on the command line. This is
done by hacking the lvmcache to replace pv->dev with the
device of the duplicate being processed. (It would be
preferable to override pv->dev without munging the content
of the cache, and without sprinkling special cases throughout
the code.)
This override only applies when multiple duplicate devices are
specified on the command line. When only a single duplicate
device of pv->dev is specified, the priority is to display the
cached pv->dev, so pv->dev is not overridden by the named
duplicate device.
In the examples below, loop3 is the cached device referenced
by pv->dev, and is given priority for processing. Only after
loop3 is processed/displayed, will other duplicate devices
loop0/loop1 appear (when requested on the command line.)
With two duplicate devices, loop0 and loop3:
# pvs
  Found duplicate PV XhLbpVo0hmuwrMQLjfxuAvPFUFZqD4vr: using /dev/loop3 not /dev/loop0
  PV         VG    Fmt  Attr PSize  PFree
  /dev/loop3 loopa lvm2 a--  12.00m 12.00m
# pvs /dev/loop3
  Found duplicate PV XhLbpVo0hmuwrMQLjfxuAvPFUFZqD4vr: using /dev/loop3 not /dev/loop0
  PV         VG    Fmt  Attr PSize  PFree
  /dev/loop3 loopa lvm2 a--  12.00m 12.00m
# pvs /dev/loop0
  Found duplicate PV XhLbpVo0hmuwrMQLjfxuAvPFUFZqD4vr: using /dev/loop3 not /dev/loop0
  PV         VG    Fmt  Attr PSize  PFree
  /dev/loop3 loopa lvm2 a--  12.00m 12.00m
# pvs -o+dev_size /dev/loop0 /dev/loop3
  Found duplicate PV XhLbpVo0hmuwrMQLjfxuAvPFUFZqD4vr: using /dev/loop3 not /dev/loop0
  PV         VG    Fmt  Attr PSize  PFree  DevSize
  /dev/loop0 loopa lvm2 a--  12.00m 12.00m  16.00m
  /dev/loop3 loopa lvm2 a--  12.00m 12.00m  32.00m
With three duplicate devices, loop0, loop1, loop3:
# pvs -o+dev_size
  Found duplicate PV XhLbpVo0hmuwrMQLjfxuAvPFUFZqD4vr: using /dev/loop1 not /dev/loop0
  Found duplicate PV XhLbpVo0hmuwrMQLjfxuAvPFUFZqD4vr: using /dev/loop3 not /dev/loop1
  PV         VG    Fmt  Attr PSize  PFree  DevSize
  /dev/loop3 loopa lvm2 a--  12.00m 12.00m  32.00m
# pvs -o+dev_size /dev/loop3
  Found duplicate PV XhLbpVo0hmuwrMQLjfxuAvPFUFZqD4vr: using /dev/loop1 not /dev/loop0
  Found duplicate PV XhLbpVo0hmuwrMQLjfxuAvPFUFZqD4vr: using /dev/loop3 not /dev/loop1
  PV         VG    Fmt  Attr PSize  PFree  DevSize
  /dev/loop3 loopa lvm2 a--  12.00m 12.00m  32.00m
# pvs -o+dev_size /dev/loop0
  Found duplicate PV XhLbpVo0hmuwrMQLjfxuAvPFUFZqD4vr: using /dev/loop1 not /dev/loop0
  Found duplicate PV XhLbpVo0hmuwrMQLjfxuAvPFUFZqD4vr: using /dev/loop3 not /dev/loop1
  PV         VG    Fmt  Attr PSize  PFree  DevSize
  /dev/loop3 loopa lvm2 a--  12.00m 12.00m  32.00m
# pvs -o+dev_size /dev/loop1
  Found duplicate PV XhLbpVo0hmuwrMQLjfxuAvPFUFZqD4vr: using /dev/loop1 not /dev/loop0
  Found duplicate PV XhLbpVo0hmuwrMQLjfxuAvPFUFZqD4vr: using /dev/loop3 not /dev/loop1
  PV         VG    Fmt  Attr PSize  PFree  DevSize
  /dev/loop3 loopa lvm2 a--  12.00m 12.00m  32.00m
# pvs -o+dev_size /dev/loop3 /dev/loop0
  Found duplicate PV XhLbpVo0hmuwrMQLjfxuAvPFUFZqD4vr: using /dev/loop1 not /dev/loop0
  Found duplicate PV XhLbpVo0hmuwrMQLjfxuAvPFUFZqD4vr: using /dev/loop3 not /dev/loop1
  PV         VG    Fmt  Attr PSize  PFree  DevSize
  /dev/loop0 loopa lvm2 a--  12.00m 12.00m  16.00m
  /dev/loop3 loopa lvm2 a--  12.00m 12.00m  32.00m
# pvs -o+dev_size /dev/loop3 /dev/loop1
  Found duplicate PV XhLbpVo0hmuwrMQLjfxuAvPFUFZqD4vr: using /dev/loop1 not /dev/loop0
  Found duplicate PV XhLbpVo0hmuwrMQLjfxuAvPFUFZqD4vr: using /dev/loop3 not /dev/loop1
  PV         VG    Fmt  Attr PSize  PFree  DevSize
  /dev/loop1 loopa lvm2 a--  12.00m 12.00m  32.00m
  /dev/loop3 loopa lvm2 a--  12.00m 12.00m  32.00m
# pvs -o+dev_size /dev/loop0 /dev/loop1
  Found duplicate PV XhLbpVo0hmuwrMQLjfxuAvPFUFZqD4vr: using /dev/loop1 not /dev/loop0
  Found duplicate PV XhLbpVo0hmuwrMQLjfxuAvPFUFZqD4vr: using /dev/loop3 not /dev/loop1
  PV         VG    Fmt  Attr PSize  PFree  DevSize
  /dev/loop1 loopa lvm2 a--  12.00m 12.00m  32.00m
  /dev/loop3 loopa lvm2 a--  12.00m 12.00m  32.00m
# pvs -o+dev_size /dev/loop0 /dev/loop1 /dev/loop3
  Found duplicate PV XhLbpVo0hmuwrMQLjfxuAvPFUFZqD4vr: using /dev/loop1 not /dev/loop0
  Found duplicate PV XhLbpVo0hmuwrMQLjfxuAvPFUFZqD4vr: using /dev/loop3 not /dev/loop1
  PV         VG    Fmt  Attr PSize  PFree  DevSize
  /dev/loop0 loopa lvm2 a--  12.00m 12.00m  16.00m
  /dev/loop1 loopa lvm2 a--  12.00m 12.00m  32.00m
  /dev/loop3 loopa lvm2 a--  12.00m 12.00m  32.00m
Processes a PV once for each time a device with its PV ID
exists on the command line.
This fixes a regression in the case where:
. devices /dev/sdA and /dev/sdB were clones (same PV ID)
. the cached VG references /dev/sdA
. before the regression, the command: pvs /dev/sdB
would display the cached device clone /dev/sdA
. after the regression, pvs /dev/sdB would display nothing,
causing vgimportclone /dev/sdB to fail.
. with this fix, pvs /dev/sdB displays /dev/sdA
Also, pvs /dev/sdA /dev/sdB will report two lines, one for each
device on the command line, but /dev/sdA is displayed for each.
This only works without lvmetad.
Support error_if_no_space feature for thin pools.
Report more info about thin pool status:
(out_of_data (D), metadata_read_only (M), failed (F), also as a health
attribute.)
The API for seg reporting breaks internal lvm coding - it cannot
use the vgmem mem pool for allocation of the reported value.
So use a separate pool instead of 'vgmem' for non-vg-related allocations.
Add consts for many function params - but many others are still left
non-const for now, as they need a deeper level of change, even on the libdm side.
If pvscan is run with a device path instead of a major:minor pair, and this
device still exists in the system but is not visible anymore
(due to a filter that is applied), notify lvmetad properly about this.
This makes it more consistent with the existing pvscan with
major:minor, which already notifies lvmetad about a device that is gone
due to filters.
However, if the device is not in the system anymore, we're not able
to translate the original device path into the major:minor pair which
lvmetad needs for its action (the lvmetad_pv_gone fn). So in this case,
we still need to use the major:minor pair only, not the device path. But at
least make "pvscan --cache DevicePath" as near as possible to the "pvscan
--cache <major>:<minor>" functionality.
Also add a note to the pvscan man page about this difference between using
pvscan --cache with DevicePath and with a major:minor pair.
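The two forms in question (a hedged illustration; /dev/sdb and 8:16 are made-up examples):
$ pvscan --cache /dev/sdb   # device path form; the path must still resolve to major:minor
$ pvscan --cache 8:16       # major:minor form; works even after the device is gone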
When processing PVs specified on the command line, the arg
name was being matched against pv_dev_name, which will not
always work:
- The PV specified on the command line could be an alias,
e.g. /dev/disk/by-id/...
- The PV specified on the command line could be any random
path to the device, e.g. /dev/../dev/sdb
To fix this, first resolve the named PV args to struct device pointers,
then iterate through the devices for processing.
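A hedged sketch of the two-step approach (the list and variable names here are assumptions; dev_cache_get and log_error are existing lvm2 calls):

/* hedged sketch: resolve each PV arg (possibly an alias such as
 * /dev/disk/by-id/... or a non-canonical path) to a struct device first */
dm_list_iterate_items(sl, &pv_names) {
        if (!(dev = dev_cache_get(sl->str, cmd->filter))) {
                log_error("Failed to find device for %s.", sl->str);
                continue;
        }
        /* ... then match/process by struct device, not by name string ... */
}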
The {pv,vg,lv}display commands *do* use reporting in case "-C|--columns" is used.
The man page was correct; the recognition of --binary was simply missing
in the code!
The call to dm_config_destroy can dereference result->mem
while result is still NULL:
struct dm_config_tree *get_cachepolicy_params(struct cmd_context *cmd)
{
        ...
        int ok = 0;
        ...
        if (!(result = dm_config_flatten(current)))
                goto_out;
        ...
        ok = 1;
out:
        if (!ok) {
                dm_config_destroy(result);
                ...
        }
        ...
}
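One way to fix it, sketched in the same style (the actual commit may differ):

out:
        if (!ok) {
                if (result)   /* dm_config_flatten may have failed, leaving result NULL */
                        dm_config_destroy(result);
                ...
        }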
ignore_vg now returns 0 for the FAILED_CLUSTERED case,
so all the ignore_vg 1 cases will return VGs with an
empty vg->pvs; we therefore do not need to iterate through
vg->pvs to remove the entries from the devices list.
Also clean up whitespace problems in that area from the
previous commit.
- Fix problems with recent changes related to skipping in:
. _process_vgnameid_list
. _process_pvs_in_vgs
- Undo unnecessary changes to the code structure and readability.
- Preserve valid but minor changes:
. testing FAILED bit values in ignore_vg
. using "skip" value from ignore_vg instead of "ret" value
. applying the sigint check to the start of all loops
. setting stack backtrace when ECMD_PROCESSED is not returned,
i.e. apply the following pattern:
  ret = process_foo();
  if (ret != ECMD_PROCESSED)
          stack;
  if (ret > ret_max)
          ret_max = ret;
Extend/fix commit d8923457b8.
A 'skip'-ed VG is not holding any lock - so don't unlock such a VG.
At the same time, simplify the surrounding code: release the VG at a single
place, and unlock only VGs that are neither skipped nor ignored.
Rework the ignore_vg() API so it properly handles
multiple kinds of vg_read_error() states.
Skip processing only an otherwise valid VG.
Always return ECMD_FAILED when a break is detected.
Check sigint_caught() in front of the dm iterator loop.
Add stack for _process failing ret codes.
Move common code into a shared internal fn so the logic for getting the
LV info as well as the LV segment status is not scattered around - call the
common _do_info_and_status to gather the required parts in reporting handlers.
- Add a separate lv_status fn (for when we're interested only in seg status,
  but not lv info at the same time, as is the case with the existing
  lv_info_with_seg_status fn). So we have 3 fns:
- lv_info (existing one, runs only info ioctl, fills in struct lvinfo only)
- lv_status (new one, runs status ioctl, fills in struct lv_seg_status only)
- lv_info_with_seg_status (existing one, runs status ioctl, fills
in struct lvinfo as well as lv_seg_status)
- Add more comments in the code explaining the difference between lv_info,
lv_status and lv_info_with_seg_status and their return values.
- Move the decision whether lv_info_with_seg_status needs to call only
the status ioctl (when the segment for which we require status belongs
to the LV for which we require info) or separate status and info ioctls
(when the segment for which we require status belongs to a different
LV than the one for which we require info) into the
lv_info_with_seg_status fn, so the caller doesn't need to bother about
this at all.
- Cleanup internal interface for this seg status so it's more readable.
LVM2.2.02.112/tools/toollib.c:1991: leaked_storage: Variable "iter" going out of scope leaks the storage it points to.
LVM2.2.02.112/lib/filters/filter-usable.c:89: leaked_storage: Variable "f" going out of scope leaks the storage it points to.
LVM2.2.02.112/lib/activate/dev_manager.c:1874: leaked_handle: Handle variable "fd" going out of scope leaks the handle.
Similar to the LVSINFO type, which gathers an LV + its DM_DEVICE_INFO, the
new LVSSTATUS/SEGSSTATUS report types will gather an LV/segment + its
DM_DEVICE_STATUS.
Since we can report status only for a certain segment, in the case
of LVSSTATUS we need to choose which segment related to the LV
should be processed to represent the "LV status". In the case of the
SEGSSTATUS type it's clear - the status is reported for the
segment just processed.
The former struct lv_with_info is renamed to lv_with_info_and_seg_status, as it
can hold more than just "info"; there's the LV's segment status now in addition:

struct lv_with_info_and_seg_status {
        struct logical_volume *lv;
        struct lvinfo *info;
        struct lv_seg_status *seg_status;
};
Where struct lv_seg_status is:

struct lv_seg_status {
        struct dm_pool *mem;
        struct lv_segment lv_seg;
        lv_seg_status_type_t type;
        void *status; /* struct dm_status_* */
};

Where lv_seg points to the LV's segment that is being reported or
processed in general.
The new struct lv_seg_status keeps the information about segment status -
the status retrieved via the DM_DEVICE_STATUS ioctl. This information will
be used for reporting the dm device target status for the LV segment
specified.
So this patch introduces a third level of LV information that is
kept for reuse while reporting fields within one reporting line,
causing only one DM_DEVICE_STATUS ioctl call per LV segment line
reported (otherwise we'd need to call DM_DEVICE_STATUS for each
segment status field in one LV segment/reporting line, which is not
efficient).
This follows exactly the same principle as already introduced
by commit ecb2be5d16.
So currently we have three levels of information that can be used
to report an LV/LV segment:
- LV metadata itself (struct logical_volume *lv)
- LV's DM_DEVICE_INFO ioctl result (struct lvinfo *info)
- LV's segment DM_DEVICE_STATUS ioctl result (this status must be
bound to a segment, not the whole LV as the whole LV may be
composed of several segments of course)
(this is the new struct lv_seg_status *seg_status)
Let's use this function for more activations in the code.
'needs_exclusive' will enforce the exclusive type for any given LV.
We may want to activate an LV in exclusive mode even when we know
the LV (as is) supports non-exclusive activation as well.
lvcreate -ay -> exclusive & local
lvcreate -aay -> exclusive & local
lvcreate -aly -> exclusive & local
lvcreate -aey -> exclusive (might be on any node).
LVSINFO is just a subtype of the LVS report type, with an extra "info" ioctl
called for each LV reported (per output line), so include its processing
within the "case LVS" switch, not as a completely different kind of reporting,
which may be misleading when reading the code.
There's already the "lv_info_needed" flag set in the _report fn, so
call the appropriate reporting function based on this flag within the
"case LVS" switch line.
The same is actually already done when an LV is reported per segment
within the "case SEGS" switch line. So this patch makes the code more
consistent - it's processed the same way in all cases.
Also, this is a preparation for other new subtypes that will be
introduced later - the "LVSSTATUS" and "SEGSSTATUS" report types.
The tool will use internal activation of an unused cache pool to
clear the metadata area before the next use of the cache pool.
So allow deactivation of an unused pool in case some error
happened and we were not able to deactivate the pool
right after the metadata wipe.