Recently the single 'status' code has been used for a number of cache
features.
Extend the API a little so it can also be used for lv_attr_dup.
As the function itself is used in lvm2api, add a new function
lv_attr_dup_with_info_and_seg_status() that is able to use
already grabbed info & status information.
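For orientation, the new entry point might look like this - a minimal
sketch assuming the combined struct described further below (the exact
signature in the tree may differ):

/* Dup the attr string using info & status that were already grabbed,
 * instead of issuing fresh ioctls as plain lv_attr_dup() would. */
char *lv_attr_dup_with_info_and_seg_status(struct dm_pool *mem,
                const struct lv_with_info_and_seg_status *lvdm);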
report_init() now directly uses the passed lvdm struct pointer,
which holds the information on whether lv_info() was obtained correctly
or whether there was an error when trying to read it.
Move the 'health' attribute to status.
TODO: convert the raid function to use the already known status.
Support the error_if_no_space feature for thin pools.
Report more info about thin pool status:
out_of_data (D), metadata_read_only (M) and failed (F), also exposed
through the health attribute.
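A hedged sketch of the health mapping, assuming libdm's
struct dm_status_thin_pool exposes these bits (the field names are
illustrative - check libdevmapper.h for the exact ones):

static char _thin_pool_health_char(const struct dm_status_thin_pool *s)
{
        if (s->fail)
                return 'F';     /* failed */
        if (s->out_of_data_space)
                return 'D';     /* out_of_data */
        if (s->read_only)
                return 'M';     /* metadata_read_only */

        return '-';             /* healthy */
}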
The API for seg reporting breaks internal lvm coding rules - it cannot
use the vgmem mem pool for allocation of the reported value.
So use a separate pool instead of 'vgmem' for non-VG-related allocations.
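For illustration only, a reported value can be allocated from a private
short-lived pool (the pool name and chunk size below are arbitrary):

static int _report_with_private_pool(void)
{
        struct dm_pool *mem;

        if (!(mem = dm_pool_create("report_value", 1024)))
                return 0;

        /* ... allocate the reported value from 'mem', not from vgmem ... */

        dm_pool_destroy(mem);
        return 1;
}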
Add const to many function params - but many others are left non-const
for now; they need a deeper level of change, even on the libdm side.
Since GET_FIELD_RESERVED_VALUE always returns a pointer, don't take
its address with "&" when used - we already have that pointer value (this
is an addendum to recent commit 028ff30947).
Only GET_TYPE_RESERVED_VALUE needs the "&" when referenced, as it
returns the value of that type directly.
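To illustrate the difference (the num_undef id below is hypothetical):

/* field-level: the macro already yields a pointer - use it as is */
const void *fv = GET_FIELD_RESERVED_VALUE(cache_policy_undef);

/* type-level: the macro yields the value itself - take its address */
const void *tv = &GET_TYPE_RESERVED_VALUE(num_undef);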
We have to use an empty list, not NULL, if we want to denote that the
list has no items. Otherwise, code further on can segfault as it expects
there's always a sane value (= some list, including the empty list),
but never NULL.
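A minimal sketch of the rule using libdm's list primitives (the helper
name is made up):

/* Never return NULL - always a valid, possibly empty, list. */
static struct dm_list *_get_items(void)
{
        static struct dm_list _empty;

        dm_list_init(&_empty);  /* head points to itself = no items */

        return &_empty;         /* callers may iterate it safely */
}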
Use helper macros to handle reserved values and also define "undefined"
reserved value as:
FIELD_RESERVED_VALUE(cache_policy, cache_policy_undef, "", "", "undefined")
Which means:
- print "" if the cache_policy value is undefined (the first name for this reserved value is "")
- recognize the "undefined" reserved name as a synonym for ""
(so statements like "lvs -S cache_policy=undefined" are still recognized)
Avoid making a copy of the keyword which is already registered in
values.h for the "unmanaged" (vg_mda_copies field) and "auto"
(lv_read_ahead field) reserved values. Also use helper macros to handle
these reserved values - this is the correct approach - just do not copy
the same thing again and do not mix it! The GET_FIELD_RESERVED_VALUE and
GET_FIRST_RESERVED_NAME macros guarantee this - use them!
In addition to that, rename reserved values:
vg_mda_copies --> vg_mda_copies_unmanaged
lv_read_ahead --> lv_read_ahead_auto
So the field reserved values follow this scheme:
"<field_name>_<reserved_value_name>"
The same applies for type reserved values with this scheme:
"<report type name in lowercase>_<reserved_value_name>"
Add a comment about this scheme for others to follow as well
when adding new fields and their reserved values. This makes
the code a bit easier to read.
RESERVED(id) --> GET_TYPE_RESERVED_VALUE(id)
FIRST_NAME(id) --> GET_FIRST_RESERVED_NAME(id)
Also add GET_FIELD_RESERVED_VALUE(id) macro to get per-field reserved value.
This makes it much more readable and hopefully it'll make it
easier to use these helper macros when adding new reporting
fields with reserved values if needed.
The cache policy name taken as an LV segment property must be duped
for the report, as the VG/LV/seg structure is destroyed after processing
while reporting happens later (a sketch of the fix follows the trace):
$ valgrind lvs -o+cache_policy
...
==16589== Invalid read of size 1
==16589== at 0x54ABCC3: dm_report_compact_fields (libdm-report.c:1739)
==16589== by 0x153FC7: _report (reporter.c:619)
==16589== by 0x1540A6: lvs (reporter.c:641)
==16589== by 0x148021: lvm_run_command (lvmcmdline.c:1452)
==16589== by 0x1495CB: lvm2_main (lvmcmdline.c:1907)
==16589== by 0x164712: main (lvm.c:21)
==16589== Address 0x7d465f2 is 8,338 bytes inside a block of size 16,384 free'd
==16589== at 0x4C2ACE9: free (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==16589== by 0x54B8C85: _free_chunk (pool-fast.c:318)
==16589== by 0x54B84FB: dm_pool_destroy (pool-fast.c:78)
==16589== by 0x1E59C7: _free_vg (vg.c:78)
==16589== by 0x1E5A6D: release_vg (vg.c:95)
==16589== by 0x159B6E: _process_lv_vgnameid_list (toollib.c:1967)
==16589== by 0x159DD7: process_each_lv (toollib.c:2030)
==16589== by 0x153ED8: _report (reporter.c:598)
==16589== by 0x1540A6: lvs (reporter.c:641)
==16589== by 0x148021: lvm_run_command (lvmcmdline.c:1452)
==16589== by 0x1495CB: lvm2_main (lvmcmdline.c:1907)
==16589== by 0x164712: main (lvm.c:21)
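A minimal sketch of the fix under this constraint - dup the name into a
memory pool that outlives release_vg() (the helper name is illustrative):

/* 'mem' must live until the report is produced, unlike the VG/LV/seg. */
static const char *_dup_cache_policy_name(struct dm_pool *mem,
                                          const char *name)
{
        char *copy;

        if (!(copy = dm_pool_strdup(mem, name)))
                return NULL;

        return copy;
}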
Similar to the LVSINFO type which gathers an LV + its DM_DEVICE_INFO,
the new LVSSTATUS/SEGSSTATUS report types will gather an LV/segment +
its DM_DEVICE_STATUS.
Since we can report status only for a certain segment, in the case
of LVSSTATUS we need to choose which segment related to the LV
should be processed to represent the "LV status". In the case of the
SEGSSTATUS type it's clear - the status is reported for the
segment just processed.
The former struct lv_with_info is renamed to lv_with_info_and_seg_status
as it can hold more than just "info"; there's the LV's segment status now
in addition:

struct lv_with_info_and_seg_status {
        struct logical_volume *lv;
        struct lvinfo *info;
        struct lv_seg_status *seg_status;
};

Where struct lv_seg_status is:

struct lv_seg_status {
        struct dm_pool *mem;
        struct lv_segment *lv_seg;
        lv_seg_status_type_t type;
        void *status; /* struct dm_status_* */
};

Where lv_seg points to the LV's segment that is being reported or
processed in general.
New struct lv_seg_status keeps the information about segment status -
the status retrieved via the DM_DEVICE_STATUS ioctl. This information
will be used for reporting the dm device target status for the LV
segment specified.
So this patch introduces a third level of LV information that is
kept for reuse while reporting fields within one reporting line,
causing only one DM_DEVICE_STATUS ioctl call per LV segment line
reported (otherwise we'd need to call DM_DEVICE_STATUS for each
segment status field in one LV segment/reporting line, which is not
efficient).
This is following exactly the same principle as already introduced
by commit ecb2be5d16.
So currently we have three levels of information that can be used
to report an LV/LV segment:
- LV metadata itself (struct logical_volume *lv)
- LV's DM_DEVICE_INFO ioctl result (struct lvinfo *info)
- LV segment's DM_DEVICE_STATUS ioctl result (this status must be
  bound to a segment, not the whole LV, as the whole LV may of course
  be composed of several segments; this is the new
  struct lv_seg_status *seg_status)
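A hedged sketch of how a status field function can consume the three
levels (names are illustrative, not the exact functions in the tree):

/* Hypothetical: pick the value for one status field of one report line. */
static const char *_seg_status_string(const struct lv_with_info_and_seg_status *lvdm)
{
        /* level 1: lvdm->lv, level 2: lvdm->info, level 3: lvdm->seg_status */
        if (!lvdm->seg_status || !lvdm->seg_status->status)
                return "unknown";  /* DM_DEVICE_STATUS failed or was skipped */

        return "available";        /* decode lvdm->seg_status->status here */
}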
Show some stats with 'lvs'.
Display the same info for an active cache volume and cache-pool.
data% - #used cache blocks / #total cache blocks
meta% - #used metadata blocks / #total metadata blocks
copy% - #dirty / #used cache blocks
TODO: maybe there is a better mapping - this should be seen as
first-try-and-see.
Use the new libdm macro DM_LIST_HEAD_INIT().
Embedded the 'free' segment type (so it's not needed in the list).
Drop assignments of 0/NULL since they are the defaults.
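For reference, a hypothetical statically initialized list using the new
macro - no runtime dm_list_init() call is needed:

static struct dm_list _segtypes = DM_LIST_HEAD_INIT(_segtypes);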
Instead of segtype->ops->name(), introduce lvseg_name().
This also allows us to leave the name() function 'empty' for the default
return of segtype->name.
TODO: add functions for the rest of ops->.
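A minimal sketch of the wrapper as described above:

const char *lvseg_name(const struct lv_segment *seg)
{
        /* Segment types may leave ops->name() unset ('empty'); fall back
         * to the default segtype->name in that case. */
        if (seg->segtype->ops->name)
                return seg->segtype->ops->name(seg);

        return seg->segtype->name;
}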
There was a bug in the value and synonym definitions for these two
fields, causing selections on them to not work correctly - nothing
matched against the vg/lv_permissions fields even if the selection
criteria should have matched.
Scenario:
$ lvs -o name,lv_permissions vg
  LV    LPerms
  lvol0 read-only
  lvol1 writeable
Before this patch:
$ lvs -o name,lv_permissions vg -S 'permissions=read-only'
(blank)
$ lvs -o name,lv_permissions vg -S 'permissions=writeable'
(blank)
With this patch applied:
$ lvs -o name,lv_permissions vg -S 'permissions=read-only'
  LV    LPerms
  lvol0 read-only
$ lvs -o name,lv_permissions vg -S 'permissions=writeable'
  LV    LPerms
  lvol1 writeable
Also synonyms match correctly now:
$ lvs -o name,lv_permissions vg -S 'permissions=rw'
  LV    LPerms
  lvol1 writeable
Process PVs by iterating through VGs, then iterating through
devices if the command needs to process non-PV devices.
The process_single function can always use the VG and PV args.
[Committed by agk with cosmetic changes and tweaks.]
The cache mode of a new cache pool is always explicitly
included in the vg metadata. If a cache mode is not
specified on the command line, the cache mode is taken
from lvm.conf allocation/cache_pool_cachemode, which
defaults to "writethrough".
The cache mode can be displayed with lvs -o+cachemode.
Try to enforce consistent macro usage along these lines:
lv_is_mirror - mirror that uses the original dm-raid1 implementation
(segment type "mirror")
lv_is_mirror_type - also includes internal mirror image and log LVs
lv_is_raid - raid volume that uses the new dm-raid implementation
(segment type "raid")
lv_is_raid_type - also includes internal raid image / log / metadata LVs
lv_is_mirrored - LV is mirrored using either kernel implementation
(excludes non-mirror modes like raid5 etc.)
lv_is_pvmove - internal pvmove volume
Use lv_is_* macros throughout the code base, introducing
lv_is_pvmove, lv_is_locked, lv_is_converting and lv_is_merging.
lv_is_mirror_type no longer includes pvmove.
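For illustration, such predicates are thin wrappers over the LV status
bits, e.g. (assuming a PVMOVE status flag as used elsewhere in the tree):

#define lv_is_pvmove(lv)        (((lv)->status & PVMOVE) ? 1 : 0)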
This makes it a bit more readable since we can report more general
layouts/roles first and keywords describing the LV more precisely
afterwards in the list.
The 'lv_type' field name was a bit misleading. A better one is 'lv_role',
since this field describes what the LV's actual use currently is -
its 'role'.
The lv_layout and lv_type fields together help with LV identification.
We can do basic identification using the lv_attr field, which provides
a very condensed view. In contrast to that, the new lv_layout and lv_type
fields provide more detailed information on the exact layout and type
used for LVs.
For top-level LVs which are pure types not combined with any
other LV types, the lv_layout value is equal to the lv_type value.
For non-top-level LVs which may be combined with other types,
lv_layout describes the underlying layout used, while
lv_type describes the use/type/usage of the LV.
These two new fields are both string lists so selection (-S/--select)
criteria can be defined using the list operators easily:
[] for strict matching
{} for subset matching.
For example, let's consider this:
$ lvs -a -o name,vg_name,lv_attr,layout,type
  LV                    VG  Attr       Layout       Type
  [lvol1_pmspare]       vg  ewi------- linear       metadata,pool,spare
  pool                  vg  twi-a-tz-- pool,thin    pool,thin
  [pool_tdata]          vg  rwi-aor--- level10,raid data,pool,thin
  [pool_tdata_rimage_0] vg  iwi-aor--- linear       image,raid
  [pool_tdata_rimage_1] vg  iwi-aor--- linear       image,raid
  [pool_tdata_rimage_2] vg  iwi-aor--- linear       image,raid
  [pool_tdata_rimage_3] vg  iwi-aor--- linear       image,raid
  [pool_tdata_rmeta_0]  vg  ewi-aor--- linear       metadata,raid
  [pool_tdata_rmeta_1]  vg  ewi-aor--- linear       metadata,raid
  [pool_tdata_rmeta_2]  vg  ewi-aor--- linear       metadata,raid
  [pool_tdata_rmeta_3]  vg  ewi-aor--- linear       metadata,raid
  [pool_tmeta]          vg  ewi-aor--- level1,raid  metadata,pool,thin
  [pool_tmeta_rimage_0] vg  iwi-aor--- linear       image,raid
  [pool_tmeta_rimage_1] vg  iwi-aor--- linear       image,raid
  [pool_tmeta_rmeta_0]  vg  ewi-aor--- linear       metadata,raid
  [pool_tmeta_rmeta_1]  vg  ewi-aor--- linear       metadata,raid
  thin_snap1            vg  Vwi---tz-k thin         snapshot,thin
  thin_snap2            vg  Vwi---tz-k thin         snapshot,thin
  thin_vol1             vg  Vwi-a-tz-- thin         multiple,origin,thin
  thin_vol2             vg  Vwi-a-tz-- thin         thin
This is a situation with a thin pool, thin volumes and thin snapshots.
We can see that the internal 'pool_tdata' volume that makes up the thin
pool actually has a level10 raid layout and the internal 'pool_tmeta'
has a level1 raid layout. Also, we can see that 'thin_snap1' and
'thin_snap2' are both thin snapshots while 'thin_vol1' is a thin origin
(having multiple snapshots).
Such a reporting scheme provides a much better base for selection
criteria in addition to providing more detailed information, for example:
$ lvs -a -o name,vg_name,lv_attr,layout,type -S 'type=metadata'
  LV                    VG  Attr       Layout       Type
  [lvol1_pmspare]       vg  ewi------- linear       metadata,pool,spare
  [pool_tdata_rmeta_0]  vg  ewi-aor--- linear       metadata,raid
  [pool_tdata_rmeta_1]  vg  ewi-aor--- linear       metadata,raid
  [pool_tdata_rmeta_2]  vg  ewi-aor--- linear       metadata,raid
  [pool_tdata_rmeta_3]  vg  ewi-aor--- linear       metadata,raid
  [pool_tmeta]          vg  ewi-aor--- level1,raid  metadata,pool,thin
  [pool_tmeta_rmeta_0]  vg  ewi-aor--- linear       metadata,raid
  [pool_tmeta_rmeta_1]  vg  ewi-aor--- linear       metadata,raid
(selected all LVs which are related to metadata of any type)
$ lvs -a -o name,vg_name,lv_attr,layout,type -S 'type={metadata,thin}'
  LV           VG  Attr       Layout      Type
  [pool_tmeta] vg  ewi-aor--- level1,raid metadata,pool,thin
(selected all LVs which hold metadata related to thin)
$ lvs -a -o name,vg_name,lv_attr,layout,type -S 'type={thin,snapshot}'
  LV         VG  Attr       Layout Type
  thin_snap1 vg  Vwi---tz-k thin   snapshot,thin
  thin_snap2 vg  Vwi---tz-k thin   snapshot,thin
(selected all LVs which are thin snapshots)
$ lvs -a -o name,vg_name,lv_attr,layout,type -S 'layout=raid'
  LV           VG  Attr       Layout       Type
  [pool_tdata] vg  rwi-aor--- level10,raid data,pool,thin
  [pool_tmeta] vg  ewi-aor--- level1,raid  metadata,pool,thin
(selected all LVs with raid layout, any raid layout)
$ lvs -a -o name,vg_name,lv_attr,layout,type -S 'layout={raid,level1}'
  LV           VG  Attr       Layout      Type
  [pool_tmeta] vg  ewi-aor--- level1,raid metadata,pool,thin
(selected all LVs with raid level1 layout exactly)
And so on...
Before the patch:
$ lvs -o name,active vg/lvol1 --driverloaded n
WARNING: Activation disabled. No device-mapper interaction will be attempted.
  LV    Active
  lvol1 active
With this patch applied:
$ lvs -o name,active vg/lvol1 --driverloaded n
WARNING: Activation disabled. No device-mapper interaction will be attempted.
  LV    Active
  lvol1 unknown
The same applies for the active_{locally,remotely,exclusively} fields.
Also, rename the headings for these fields (ActLocal/ActRemote/ActExcl).
If the lv_info call fails for whatever reason (the INFO dm ioctl fails
or dm driver communication is disabled with --driverloaded n), make
sure we always display "unknown" for LVSINFO fields, as that's exactly
what happens - we don't know the state.
Before the patch:
$ lvs -o name,device_open --driverloaded n
WARNING: Activation disabled. No device-mapper interaction will be attempted.
Command failed with status code 5.
With this patch applied:
$ lvs -o name,device_open --driverloaded n
WARNING: Activation disabled. No device-mapper interaction will be attempted.
  LV    DevOpen
  lvol1 unknown
Like other binary fields we already have:
$ lvs -o name,zero vg/lvx vg/pool vg/pool1
  LV    Zero
  lvx   unknown
  pool
  pool1 zero
$ lvs -o name,zero vg/lvx vg/pool vg/pool1 --binary
  LV    Zero
  lvx   -1
  pool   0
  pool1  1
For each binary field, 1, the descriptive word, or "yes" match the true
value, and 0 or "no" match the false value.
For example (the newly recognized values are "yes" and "no"):
$ lvs -o name,device_open fedora vg/lvol1 vg/lvol2
  LV    DevOpen
  root  open
  swap  open
  lvol1 open
  lvol2
$ lvs -o name,device_open fedora vg/lvol1 vg/lvol2 -S 'device_open=open'
  LV    DevOpen
  root  open
  swap  open
  lvol1 open
$ lvs -o name,device_open fedora vg/lvol1 vg/lvol2 -S 'device_open=1'
  LV    DevOpen
  root  open
  swap  open
  lvol1 open
$ lvs -o name,device_open fedora vg/lvol1 vg/lvol2 -S 'device_open=yes'
  LV    DevOpen
  root  open
  swap  open
  lvol1 open
$ lvs -o name,device_open fedora vg/lvol1 vg/lvol2 -S 'device_open=0'
  LV    DevOpen
  lvol2
$ lvs -o name,device_open fedora vg/lvol1 vg/lvol2 -S 'device_open=no'
  LV    DevOpen
  lvol2
So all attribute reporting functions are in one section of code
for quick orientation (all these functions are defined in the order
of their attribute character as displayed in the pv/vg/lv_attr field).
lv_active_{locally,remotely,exclusively} split the original
"lv_active" field into separate fields so that we can create
selection criteria in a binary-based form (yes/no).