/*
 * Copyright (C) 2002-2004 Sistina Software, Inc. All rights reserved.
 * Copyright (C) 2004-2019 Red Hat, Inc. All rights reserved.
 *
 * This file is part of LVM2.
 *
 * This copyrighted material is made available to anyone wishing to use,
 * modify, copy, or redistribute it subject to the terms and conditions
 * of the GNU Lesser General Public License v.2.1.
 *
 * You should have received a copy of the GNU Lesser General Public License
 * along with this program; if not, write to the Free Software Foundation,
 * Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
 */

#include "lib/misc/lib.h"
#include "lib/metadata/metadata.h"
#include "lib/report/report.h"
#include "lib/commands/toolcontext.h"
#include "lib/misc/lvm-string.h"
#include "lib/display/display.h"
#include "lib/activate/activate.h"
#include "lib/metadata/segtype.h"
#include "lib/cache/lvmcache.h"
#include "lib/device/device-types.h"
#include "lib/datastruct/str_list.h"

#include <stddef.h> /* offsetof() */
#include <float.h> /* DBL_MAX */
#include <time.h>
struct lvm_report_object {
	struct volume_group *vg;
	struct lv_with_info_and_seg_status *lvdm;
	struct physical_volume *pv;
	struct lv_segment *seg;
	struct pv_segment *pvseg;
	struct label *label;
};

static uint32_t _log_seqnum = 1;
/*
 * Enum for field_num index to use in per-field reserved value definition.
 * Each field is represented by enum value with name "field_<id>" where <id>
 * is the field_id of the field as registered in columns.h.
 */
#define FIELD(type, strct, sorttype, head, field_name, width, func, id, desc, writeable) field_ ## id,
enum {
#include "columns.h"
};
#undef FIELD

static const uint64_t _zero64 = UINT64_C(0);
static const uint64_t _one64 = UINT64_C(1);
static const uint64_t _two64 = UINT64_C(2);

static const char _str_zero[] = "0";
static const char _str_one[] = "1";
static const char _str_no[] = "no";
static const char _str_yes[] = "yes";
static const char _str_unknown[] = "unknown";
static const double _siz_max = DBL_MAX;
/*
 * 32 bit signed is cast to 64 bit unsigned in dm_report_field internally!
 * So when stored in the struct, the _reserved_num_undef_32 is actually
 * equal to _reserved_num_undef_64.
 */
static const int32_t _reserved_num_undef_32 = INT32_C(-1);
typedef enum {
	/* top-level identification */
	TIME_NULL,
	TIME_NUM,
	TIME_STR,

	/* direct numeric value */
	TIME_NUM__START,
	TIME_NUM_MULTIPLIER,
	TIME_NUM_MULTIPLIER_NEGATIVE,
	TIME_NUM_DAY,
	TIME_NUM_YEAR,
	TIME_NUM__END,

	/* direct string value */
	TIME_STR_TIMEZONE,

	/* time frame strings */
	TIME_FRAME__START,
	TIME_FRAME_AGO,
	TIME_FRAME__END,

	/* labels for dates */
	TIME_LABEL_DATE__START,
	TIME_LABEL_DATE_TODAY,
	TIME_LABEL_DATE_YESTERDAY,

	/* weekday name strings */
	TIME_WEEKDAY__START,
	TIME_WEEKDAY_SUNDAY,
	TIME_WEEKDAY_MONDAY,
	TIME_WEEKDAY_TUESDAY,
	TIME_WEEKDAY_WEDNESDAY,
	TIME_WEEKDAY_THURSDAY,
	TIME_WEEKDAY_FRIDAY,
	TIME_WEEKDAY_SATURDAY,
	TIME_WEEKDAY__END,
	TIME_LABEL_DATE__END,

	/* labels for times */
	TIME_LABEL_TIME__START,
	TIME_LABEL_TIME_NOON,
	TIME_LABEL_TIME_MIDNIGHT,
	TIME_LABEL_TIME__END,

	/* time unit strings */
	TIME_UNIT__START,
	TIME_UNIT_SECOND,
	TIME_UNIT_SECOND_REL,
	TIME_UNIT_MINUTE,
	TIME_UNIT_MINUTE_REL,
	TIME_UNIT_HOUR,
	TIME_UNIT_HOUR_REL,
	TIME_UNIT_AM,
	TIME_UNIT_PM,
	TIME_UNIT_DAY,
	TIME_UNIT_WEEK,
	TIME_UNIT_MONTH,
	TIME_UNIT_YEAR,
	TIME_UNIT_TZ_MINUTE,
	TIME_UNIT_TZ_HOUR,
	TIME_UNIT__END,

	/* month name strings */
	TIME_MONTH__START,
	TIME_MONTH_JANUARY,
	TIME_MONTH_FEBRUARY,
	TIME_MONTH_MARCH,
	TIME_MONTH_APRIL,
	TIME_MONTH_MAY,
	TIME_MONTH_JUNE,
	TIME_MONTH_JULY,
	TIME_MONTH_AUGUST,
	TIME_MONTH_SEPTEMBER,
	TIME_MONTH_OCTOBER,
	TIME_MONTH_NOVEMBER,
	TIME_MONTH_DECEMBER,
	TIME_MONTH__END,
} time_id_t;

#define TIME_PROP_DATE	0x00000001	/* date-related */
#define TIME_PROP_TIME	0x00000002	/* time-related */
#define TIME_PROP_ABS	0x00000004	/* absolute value */
#define TIME_PROP_REL	0x00000008	/* relative value */

struct time_prop {
	time_id_t id;
	uint32_t prop_flags;
	time_id_t granularity;
};
#define ADD_TIME_PROP(id, flags, granularity) [(id)] = {(id), (flags), (granularity)},
static const struct time_prop _time_props[] = {
	ADD_TIME_PROP(TIME_NULL, 0, TIME_NULL)
	ADD_TIME_PROP(TIME_NUM, 0, TIME_NULL)
	ADD_TIME_PROP(TIME_STR, 0, TIME_NULL)
	ADD_TIME_PROP(TIME_NUM_MULTIPLIER, 0, TIME_NULL)
	ADD_TIME_PROP(TIME_NUM_MULTIPLIER_NEGATIVE, 0, TIME_NULL)
	ADD_TIME_PROP(TIME_NUM_DAY, TIME_PROP_DATE | TIME_PROP_ABS, TIME_UNIT_DAY)
	ADD_TIME_PROP(TIME_NUM_YEAR, TIME_PROP_DATE | TIME_PROP_ABS, TIME_UNIT_YEAR)
	ADD_TIME_PROP(TIME_STR_TIMEZONE, TIME_PROP_TIME | TIME_PROP_ABS, TIME_NULL)
	ADD_TIME_PROP(TIME_FRAME_AGO, TIME_PROP_DATE | TIME_PROP_TIME | TIME_PROP_REL, TIME_NULL)
	ADD_TIME_PROP(TIME_LABEL_DATE_TODAY, TIME_PROP_DATE | TIME_PROP_ABS, TIME_UNIT_DAY)
	ADD_TIME_PROP(TIME_LABEL_DATE_YESTERDAY, TIME_PROP_DATE | TIME_PROP_ABS, TIME_UNIT_DAY)
	ADD_TIME_PROP(TIME_WEEKDAY_SUNDAY, TIME_PROP_DATE | TIME_PROP_ABS, TIME_UNIT_DAY)
	ADD_TIME_PROP(TIME_WEEKDAY_MONDAY, TIME_PROP_DATE | TIME_PROP_ABS, TIME_UNIT_DAY)
	ADD_TIME_PROP(TIME_WEEKDAY_TUESDAY, TIME_PROP_DATE | TIME_PROP_ABS, TIME_UNIT_DAY)
	ADD_TIME_PROP(TIME_WEEKDAY_WEDNESDAY, TIME_PROP_DATE | TIME_PROP_ABS, TIME_UNIT_DAY)
	ADD_TIME_PROP(TIME_WEEKDAY_THURSDAY, TIME_PROP_DATE | TIME_PROP_ABS, TIME_UNIT_DAY)
	ADD_TIME_PROP(TIME_WEEKDAY_FRIDAY, TIME_PROP_DATE | TIME_PROP_ABS, TIME_UNIT_DAY)
	ADD_TIME_PROP(TIME_WEEKDAY_SATURDAY, TIME_PROP_DATE | TIME_PROP_ABS, TIME_UNIT_DAY)
	ADD_TIME_PROP(TIME_LABEL_TIME_NOON, TIME_PROP_TIME | TIME_PROP_ABS, TIME_UNIT_SECOND)
	ADD_TIME_PROP(TIME_LABEL_TIME_MIDNIGHT, TIME_PROP_TIME | TIME_PROP_ABS, TIME_UNIT_SECOND)
	ADD_TIME_PROP(TIME_UNIT_SECOND, TIME_PROP_TIME | TIME_PROP_ABS, TIME_UNIT_SECOND)
	ADD_TIME_PROP(TIME_UNIT_SECOND_REL, TIME_PROP_TIME | TIME_PROP_REL, TIME_UNIT_SECOND)
	ADD_TIME_PROP(TIME_UNIT_MINUTE, TIME_PROP_TIME | TIME_PROP_ABS, TIME_UNIT_MINUTE)
	ADD_TIME_PROP(TIME_UNIT_MINUTE_REL, TIME_PROP_TIME | TIME_PROP_REL, TIME_UNIT_MINUTE)
	ADD_TIME_PROP(TIME_UNIT_HOUR, TIME_PROP_TIME | TIME_PROP_ABS, TIME_UNIT_HOUR)
	ADD_TIME_PROP(TIME_UNIT_HOUR_REL, TIME_PROP_TIME | TIME_PROP_REL, TIME_UNIT_HOUR)
	ADD_TIME_PROP(TIME_UNIT_AM, TIME_PROP_TIME | TIME_PROP_ABS, TIME_UNIT_HOUR)
	ADD_TIME_PROP(TIME_UNIT_PM, TIME_PROP_TIME | TIME_PROP_ABS, TIME_UNIT_HOUR)
	ADD_TIME_PROP(TIME_UNIT_DAY, TIME_PROP_DATE | TIME_PROP_REL, TIME_UNIT_DAY)
	ADD_TIME_PROP(TIME_UNIT_WEEK, TIME_PROP_DATE | TIME_PROP_REL, TIME_UNIT_WEEK)
	ADD_TIME_PROP(TIME_UNIT_MONTH, TIME_PROP_DATE | TIME_PROP_REL, TIME_UNIT_MONTH)
	ADD_TIME_PROP(TIME_UNIT_YEAR, TIME_PROP_DATE | TIME_PROP_REL, TIME_UNIT_YEAR)
	ADD_TIME_PROP(TIME_UNIT_TZ_MINUTE, TIME_PROP_TIME | TIME_PROP_ABS, TIME_NULL)
	ADD_TIME_PROP(TIME_UNIT_TZ_HOUR, TIME_PROP_TIME | TIME_PROP_ABS, TIME_NULL)
	ADD_TIME_PROP(TIME_MONTH_JANUARY, TIME_PROP_DATE | TIME_PROP_ABS, TIME_UNIT_MONTH)
	ADD_TIME_PROP(TIME_MONTH_FEBRUARY, TIME_PROP_DATE | TIME_PROP_ABS, TIME_UNIT_MONTH)
	ADD_TIME_PROP(TIME_MONTH_MARCH, TIME_PROP_DATE | TIME_PROP_ABS, TIME_UNIT_MONTH)
	ADD_TIME_PROP(TIME_MONTH_APRIL, TIME_PROP_DATE | TIME_PROP_ABS, TIME_UNIT_MONTH)
	ADD_TIME_PROP(TIME_MONTH_MAY, TIME_PROP_DATE | TIME_PROP_ABS, TIME_UNIT_MONTH)
	ADD_TIME_PROP(TIME_MONTH_JUNE, TIME_PROP_DATE | TIME_PROP_ABS, TIME_UNIT_MONTH)
	ADD_TIME_PROP(TIME_MONTH_JULY, TIME_PROP_DATE | TIME_PROP_ABS, TIME_UNIT_MONTH)
	ADD_TIME_PROP(TIME_MONTH_AUGUST, TIME_PROP_DATE | TIME_PROP_ABS, TIME_UNIT_MONTH)
	ADD_TIME_PROP(TIME_MONTH_SEPTEMBER, TIME_PROP_DATE | TIME_PROP_ABS, TIME_UNIT_MONTH)
	ADD_TIME_PROP(TIME_MONTH_OCTOBER, TIME_PROP_DATE | TIME_PROP_ABS, TIME_UNIT_MONTH)
	ADD_TIME_PROP(TIME_MONTH_NOVEMBER, TIME_PROP_DATE | TIME_PROP_ABS, TIME_UNIT_MONTH)
	ADD_TIME_PROP(TIME_MONTH_DECEMBER, TIME_PROP_DATE | TIME_PROP_ABS, TIME_UNIT_MONTH)
};
#define TIME_REG_PLURAL_S 0x00000001	/* also recognize plural form with "s" suffix */

struct time_reg {
	const char *name;
	const struct time_prop *prop;
	uint32_t reg_flags;
};

#define TIME_PROP(id) (_time_props + (id))
static const struct time_reg _time_reg[] = {
	/*
	 * Group of tokens representing time frame and used
	 * with relative date/time to specify different flavours
	 * of relativity.
	 */
	{"ago", TIME_PROP(TIME_FRAME_AGO), 0},

	/*
	 * Group of tokens labeling some date and used
	 * instead of direct absolute specification.
	 */
	{"today", TIME_PROP(TIME_LABEL_DATE_TODAY), 0},			/* 0:00 - 23:59:59 for current date */
	{"yesterday", TIME_PROP(TIME_LABEL_DATE_YESTERDAY), 0},		/* 0:00 - 23:59:59 for current date minus 1 day */

	/*
	 * Group of tokens labeling some date - weekday
	 * names used to build up date.
	 */
	{"Sunday", TIME_PROP(TIME_WEEKDAY_SUNDAY), TIME_REG_PLURAL_S},
	{"Sun", TIME_PROP(TIME_WEEKDAY_SUNDAY), 0},
	{"Monday", TIME_PROP(TIME_WEEKDAY_MONDAY), TIME_REG_PLURAL_S},
	{"Mon", TIME_PROP(TIME_WEEKDAY_MONDAY), 0},
	{"Tuesday", TIME_PROP(TIME_WEEKDAY_TUESDAY), TIME_REG_PLURAL_S},
	{"Tue", TIME_PROP(TIME_WEEKDAY_TUESDAY), 0},
	{"Wednesday", TIME_PROP(TIME_WEEKDAY_WEDNESDAY), TIME_REG_PLURAL_S},
	{"Wed", TIME_PROP(TIME_WEEKDAY_WEDNESDAY), 0},
	{"Thursday", TIME_PROP(TIME_WEEKDAY_THURSDAY), TIME_REG_PLURAL_S},
	{"Thu", TIME_PROP(TIME_WEEKDAY_THURSDAY), 0},
	{"Friday", TIME_PROP(TIME_WEEKDAY_FRIDAY), TIME_REG_PLURAL_S},
	{"Fri", TIME_PROP(TIME_WEEKDAY_FRIDAY), 0},
	{"Saturday", TIME_PROP(TIME_WEEKDAY_SATURDAY), TIME_REG_PLURAL_S},
	{"Sat", TIME_PROP(TIME_WEEKDAY_SATURDAY), 0},

	/*
	 * Group of tokens labeling some time and used
	 * instead of direct absolute specification.
	 */
	{"noon", TIME_PROP(TIME_LABEL_TIME_NOON), TIME_REG_PLURAL_S},		/* 12:00:00 */
	{"midnight", TIME_PROP(TIME_LABEL_TIME_MIDNIGHT), TIME_REG_PLURAL_S},	/* 00:00:00 */

	/*
	 * Group of tokens used to build up time. Most of these
	 * are used either as relative or absolute time units.
	 * The absolute ones are always used with a TIME_FRAME_*
	 * token, otherwise the unit is relative.
	 */
	{"second", TIME_PROP(TIME_UNIT_SECOND), TIME_REG_PLURAL_S},
	{"sec", TIME_PROP(TIME_UNIT_SECOND), TIME_REG_PLURAL_S},
	{"s", TIME_PROP(TIME_UNIT_SECOND), 0},
	{"minute", TIME_PROP(TIME_UNIT_MINUTE), TIME_REG_PLURAL_S},
	{"min", TIME_PROP(TIME_UNIT_MINUTE), TIME_REG_PLURAL_S},
	{"m", TIME_PROP(TIME_UNIT_MINUTE), 0},
	{"hour", TIME_PROP(TIME_UNIT_HOUR), TIME_REG_PLURAL_S},
	{"hr", TIME_PROP(TIME_UNIT_HOUR), TIME_REG_PLURAL_S},
	{"h", TIME_PROP(TIME_UNIT_HOUR), 0},
	{"AM", TIME_PROP(TIME_UNIT_AM), 0},
	{"PM", TIME_PROP(TIME_UNIT_PM), 0},

	/*
	 * Group of tokens used to build up date.
	 * These are all relative ones.
	 */
	{"day", TIME_PROP(TIME_UNIT_DAY), TIME_REG_PLURAL_S},
	{"week", TIME_PROP(TIME_UNIT_WEEK), TIME_REG_PLURAL_S},
	{"month", TIME_PROP(TIME_UNIT_MONTH), TIME_REG_PLURAL_S},
	{"year", TIME_PROP(TIME_UNIT_YEAR), TIME_REG_PLURAL_S},
	{"yr", TIME_PROP(TIME_UNIT_YEAR), TIME_REG_PLURAL_S},

	/*
	 * Group of tokens used to build up date.
	 * These are all absolute.
	 */
	{"January", TIME_PROP(TIME_MONTH_JANUARY), 0},
	{"Jan", TIME_PROP(TIME_MONTH_JANUARY), 0},
	{"February", TIME_PROP(TIME_MONTH_FEBRUARY), 0},
	{"Feb", TIME_PROP(TIME_MONTH_FEBRUARY), 0},
	{"March", TIME_PROP(TIME_MONTH_MARCH), 0},
	{"Mar", TIME_PROP(TIME_MONTH_MARCH), 0},
	{"April", TIME_PROP(TIME_MONTH_APRIL), 0},
	{"Apr", TIME_PROP(TIME_MONTH_APRIL), 0},
	{"May", TIME_PROP(TIME_MONTH_MAY), 0},
	{"June", TIME_PROP(TIME_MONTH_JUNE), 0},
	{"Jun", TIME_PROP(TIME_MONTH_JUNE), 0},
	{"July", TIME_PROP(TIME_MONTH_JULY), 0},
	{"Jul", TIME_PROP(TIME_MONTH_JULY), 0},
	{"August", TIME_PROP(TIME_MONTH_AUGUST), 0},
	{"Aug", TIME_PROP(TIME_MONTH_AUGUST), 0},
	{"September", TIME_PROP(TIME_MONTH_SEPTEMBER), 0},
	{"Sep", TIME_PROP(TIME_MONTH_SEPTEMBER), 0},
	{"October", TIME_PROP(TIME_MONTH_OCTOBER), 0},
	{"Oct", TIME_PROP(TIME_MONTH_OCTOBER), 0},
	{"November", TIME_PROP(TIME_MONTH_NOVEMBER), 0},
	{"Nov", TIME_PROP(TIME_MONTH_NOVEMBER), 0},
	{"December", TIME_PROP(TIME_MONTH_DECEMBER), 0},
	{"Dec", TIME_PROP(TIME_MONTH_DECEMBER), 0},
	{NULL, TIME_PROP(TIME_NULL), 0},
};
struct time_item {
	struct dm_list list;
	const struct time_prop *prop;
	const char *s;
	size_t len;
};

struct time_info {
	struct dm_pool *mem;
	struct dm_list *ti_list;
	time_t *now;
	time_id_t min_abs_date_granularity;
	time_id_t max_abs_date_granularity;
	time_id_t min_abs_time_granularity;
	time_id_t min_rel_time_granularity;
};

static int _is_time_num(time_id_t id)
{
	return ((id > TIME_NUM__START) && (id < TIME_NUM__END));
}
/*
static int _is_time_frame(time_id_t id)
{
	return ((id > TIME_FRAME__START) && (id < TIME_FRAME__END));
}
*/

static int _is_time_label_date(time_id_t id)
{
	return ((id > TIME_LABEL_DATE__START) && (id < TIME_LABEL_DATE__END));
}

static int _is_time_label_time(time_id_t id)
{
	return ((id > TIME_LABEL_TIME__START) && (id < TIME_LABEL_TIME__END));
}

static int _is_time_unit(time_id_t id)
{
	return ((id > TIME_UNIT__START) && (id < TIME_UNIT__END));
}

static int _is_time_weekday(time_id_t id)
{
	return ((id > TIME_WEEKDAY__START) && (id < TIME_WEEKDAY__END));
}

static int _is_time_month(time_id_t id)
{
	return ((id > TIME_MONTH__START) && (id < TIME_MONTH__END));
}
static const char *_skip_space(const char *s)
{
	while (*s && isspace(*s))
		s++;
	return s;
}

/* Move till delim or space */
static const char *_move_till_item_end(const char *s)
{
	char c = *s;
	int is_num = isdigit(c);

	/*
	 * Allow numbers to be attached to next token, for example
	 * it's correct to write "12 hours" as well as "12hours".
	 */
	while (c && !isspace(c) && (is_num ? (is_num = isdigit(c)) : 1))
		c = *++s;

	return s;
}
static struct time_item *_alloc_time_item(struct dm_pool *mem, time_id_t id,
					  const char *s, size_t len)
{
	struct time_item *ti;

	if (!(ti = dm_pool_zalloc(mem, sizeof(struct time_item)))) {
		log_error("alloc_time_item: dm_pool_zalloc failed");
		return NULL;
	}

	ti->prop = &_time_props[id];
	ti->s = s;
	ti->len = len;

	return ti;
}

static int _add_time_part_to_list(struct dm_pool *mem, struct dm_list *list,
				  time_id_t id, int minus, const char *s, size_t len)
{
	struct time_item *ti1, *ti2;

	if (!(ti1 = _alloc_time_item(mem, minus ? TIME_NUM_MULTIPLIER_NEGATIVE
						: TIME_NUM_MULTIPLIER, s, len)) ||
	    !(ti2 = _alloc_time_item(mem, id, s + len, 0)))
		return 0;

	dm_list_add(list, &ti1->list);
	dm_list_add(list, &ti2->list);

	return 1;
}
static int _get_time(struct dm_pool *mem, const char **str,
		     struct dm_list *list, int tz)
{
	const char *end, *s = *str;
	int r = 0;

	/* hour */
	end = _move_till_item_end(s);
	if (!_add_time_part_to_list(mem, list, tz ? TIME_UNIT_TZ_HOUR : TIME_UNIT_HOUR,
				    tz == -1, s, end - s))
		goto out;

	/* minute */
	if (*end != ':')
		/* minute required */
		goto out;
	s = end + 1;
	end = _move_till_item_end(s);
	if (!_add_time_part_to_list(mem, list, tz ? TIME_UNIT_TZ_MINUTE : TIME_UNIT_MINUTE,
				    tz == -1, s, end - s))
		goto out;

	/* second */
	if (*end != ':') {
		/* second not required */
		s = end + 1;
		r = 1;
		goto out;
	} else if (tz)
		/* timezone does not have seconds */
		goto out;

	s = end + 1;
	end = _move_till_item_end(s);
	if (!_add_time_part_to_list(mem, list, TIME_UNIT_SECOND, 0, s, end - s))
		goto out;

	s = end + 1;
	r = 1;
out:
	*str = s;
	return r;
}
static int _preparse_fuzzy_time(const char *s, struct time_info *info)
{
	struct dm_list *list;
	struct time_item *ti;
	const char *end;
	int fuzzy = 0;
	time_id_t id;
	size_t len;
	int r = 0;
	char c;

	if (!(list = dm_pool_alloc(info->mem, sizeof(struct dm_list)))) {
		log_error("_preparse_fuzzy_time: dm_pool_alloc failed");
		goto out;
	}
	dm_list_init(list);

	s = _skip_space(s);
	while ((c = *s)) {
		/*
		 * If the string consists of -:+, digits or spaces,
		 * it's not worth looking for fuzzy names here -
		 * it's standard YYYY-MM-DD HH:MM:SS +-HH:MM format
		 * and that is parseable by libdm directly.
		 */
		if (!(isdigit(c) || (c == '-') || (c == ':') || (c == '+')))
			fuzzy = 1;

		end = _move_till_item_end(s);

		if (isalpha(c))
			id = TIME_STR;
		else if (isdigit(c)) {
			if (*end == ':') {
				/* we have time */
				if (!_get_time(info->mem, &s, list, 0))
					goto out;
				continue;
			}
			/* we have some other number */
			id = TIME_NUM;
		} else if ((c == '-') || (c == '+')) {
			s++;
			/* we have timezone */
			if (!_get_time(info->mem, &s, list, (c == '-') ? -1 : 1))
				goto out;
			continue;
		} else
			goto out;

		len = end - s;
		if (!(ti = _alloc_time_item(info->mem, id, s, len)))
			goto out;
		dm_list_add(list, &ti->list);
		s += len;
		s = _skip_space(s);
	}

	info->ti_list = list;
	r = 1;
out:
	if (!(r && fuzzy)) {
		dm_pool_free(info->mem, list);
		return 0;
	}

	return 1;
}
static int _match_time_str(struct dm_list *ti_list, struct time_item *ti)
{
	struct time_item *ti_context_p = (struct time_item *) dm_list_prev(ti_list, &ti->list);
	size_t reg_len;
	int i;

	ti->prop = TIME_PROP(TIME_NULL);

	for (i = 0; _time_reg[i].name; i++) {
		reg_len = strlen(_time_reg[i].name);
		if ((ti->len != reg_len) &&
		    !((_time_reg[i].reg_flags & TIME_REG_PLURAL_S) &&
		      (ti->len == reg_len + 1) && (ti->s[reg_len] == 's')))
			continue;
		if (!strncasecmp(ti->s, _time_reg[i].name, reg_len)) {
			ti->prop = _time_reg[i].prop;
			if ((ti->prop->id > TIME_UNIT__START) && (ti->prop->id < TIME_UNIT__END) &&
			    ti_context_p && (ti_context_p->prop->id == TIME_NUM))
				ti_context_p->prop = TIME_PROP(TIME_NUM_MULTIPLIER);
			break;
		}
	}

	return ti->prop->id;
}
static int _match_time_num(struct dm_list *ti_list, struct time_item *ti)
{
	struct time_item *ti_context_p = (struct time_item *) dm_list_prev(ti_list, &ti->list);
	struct time_item *ti_context_n = (struct time_item *) dm_list_next(ti_list, &ti->list);
	struct time_item *ti_context_nn = ti_context_n ? (struct time_item *) dm_list_next(ti_list, &ti_context_n->list) : NULL;

	if (ti_context_n &&
	    (ti_context_n->prop->id > TIME_MONTH__START) &&
	    (ti_context_n->prop->id < TIME_MONTH__END)) {
		if (ti_context_nn && ti_context_nn->prop->id == TIME_NUM) {
			if (ti->len < ti_context_nn->len) {
				/* 24 Feb 2015 */
				ti->prop = TIME_PROP(TIME_NUM_DAY);
				ti_context_nn->prop = TIME_PROP(TIME_NUM_YEAR);
			} else {
				/* 2015 Feb 24 */
				ti->prop = TIME_PROP(TIME_NUM_YEAR);
				ti_context_nn->prop = TIME_PROP(TIME_NUM_DAY);
			}
		} else {
			if (ti->len <= 2)
				/* 24 Feb */
				ti->prop = TIME_PROP(TIME_NUM_DAY);
			else
				/* 2015 Feb */
				ti->prop = TIME_PROP(TIME_NUM_YEAR);
		}
	} else if (ti_context_p &&
		   (ti_context_p->prop->id > TIME_MONTH__START) &&
		   (ti_context_p->prop->id < TIME_MONTH__END)) {
		if (ti->len <= 2)
			/* Feb 24 */
			ti->prop = TIME_PROP(TIME_NUM_DAY);
		else
			/* Feb 2015 */
			ti->prop = TIME_PROP(TIME_NUM_YEAR);
	} else
		ti->prop = TIME_PROP(TIME_NUM_YEAR);

	return ti->prop->id;
}
static void _detect_time_granularity(struct time_info *info, struct time_item *ti)
{
	time_id_t gran = ti->prop->granularity;
	int is_date, is_abs, is_rel;

	if (gran == TIME_NULL)
		return;

	is_date = ti->prop->prop_flags & TIME_PROP_DATE;
	is_abs = ti->prop->prop_flags & TIME_PROP_ABS;
	is_rel = ti->prop->prop_flags & TIME_PROP_REL;

	if (is_date && is_abs) {
		if (gran > info->max_abs_date_granularity)
			info->max_abs_date_granularity = gran;
		if (gran < info->min_abs_date_granularity)
			info->min_abs_date_granularity = gran;
	} else {
		if (is_abs && (gran < info->min_abs_time_granularity))
			info->min_abs_time_granularity = gran;
		else if (is_rel && (gran < info->min_rel_time_granularity))
			info->min_rel_time_granularity = gran;
	}
}
static void _change_to_relative(struct time_info *info, struct time_item *ti)
{
	struct time_item *ti2;

	ti2 = ti;
	while ((ti2 = (struct time_item *) dm_list_prev(info->ti_list, &ti2->list))) {
		if (ti2->prop->id == TIME_FRAME_AGO)
			break;

		switch (ti2->prop->id) {
			case TIME_UNIT_SECOND:
				ti2->prop = TIME_PROP(TIME_UNIT_SECOND_REL);
				break;
			case TIME_UNIT_MINUTE:
				ti2->prop = TIME_PROP(TIME_UNIT_MINUTE_REL);
				break;
			case TIME_UNIT_HOUR:
				ti2->prop = TIME_PROP(TIME_UNIT_HOUR_REL);
				break;
			default:
				break;
		}
	}
}
static int _recognize_time_items(struct time_info *info)
{
	struct time_item *ti;

	/*
	 * At first, try to recognize strings.
	 * Also, if there are any items which may be absolute or
	 * relative and we have "TIME_FRAME_AGO", change them to relative.
	 */
	dm_list_iterate_items(ti, info->ti_list) {
		if ((ti->prop->id == TIME_STR) && !_match_time_str(info->ti_list, ti)) {
			log_error("Unrecognized string in date/time "
				  "specification at \"%s\".", ti->s);
			return 0;
		}
		if (ti->prop->id == TIME_FRAME_AGO)
			_change_to_relative(info, ti);
	}

	/*
	 * Now, recognize any numbers and be sensitive to the context
	 * given by strings we recognized before. Also, detect time
	 * granularity used (both for absolute and/or relative parts).
	 */
	dm_list_iterate_items(ti, info->ti_list) {
		if ((ti->prop->id == TIME_NUM) && !_match_time_num(info->ti_list, ti)) {
			log_error("Unrecognized number in date/time "
				  "specification at \"%s\".", ti->s);
			return 0;
		}
		_detect_time_granularity(info, ti);
	}

	return 1;
}
static int _check_time_items(struct time_info *info)
{
	struct time_item *ti;
	uint32_t flags;
	int rel;
	int date_is_relative = -1, time_is_relative = -1;
	int label_time = 0, label_date = 0;

	dm_list_iterate_items(ti, info->ti_list) {
		flags = ti->prop->prop_flags;
		rel = flags & TIME_PROP_REL;

		if (flags & TIME_PROP_DATE) {
			if (date_is_relative < 0)
				date_is_relative = rel;
			else if ((date_is_relative ^ rel) &&
				 (info->max_abs_date_granularity >= info->min_rel_time_granularity)) {
				log_error("Mixed absolute and relative date "
					  "specification found at \"%s\".", ti->s);
				return 0;
			}

			/* Date label can be used only once and not mixed with other date spec. */
			if (label_date) {
				log_error("Ambiguous date specification found at \"%s\".", ti->s);
				return 0;
			}

			if (_is_time_label_date(ti->prop->id))
				label_date = 1;
		} else if (flags & TIME_PROP_TIME) {
			if (time_is_relative < 0)
				time_is_relative = rel;
			else if ((time_is_relative ^ rel)) {
				log_error("Mixed absolute and relative time "
					  "specification found at \"%s\".", ti->s);
				return 0;
			}

			/* Time label can be used only once and not mixed with other time spec. */
			if (label_time) {
				log_error("Ambiguous time specification found at \"%s\".", ti->s);
				return 0;
			}

			if (_is_time_label_time(ti->prop->id))
				label_time = 1;
		}
	}

	return 1;
}
#define CACHE_ID_TIME_NOW "time_now"

static time_t *_get_now(struct dm_report *rh, struct dm_pool *mem)
{
	const void *cached_obj;
	time_t *now;

	if (!(cached_obj = dm_report_value_cache_get(rh, CACHE_ID_TIME_NOW))) {
		if (!(now = dm_pool_zalloc(mem, sizeof(time_t)))) {
			log_error("_get_now: dm_pool_zalloc failed");
			return NULL;
		}
		time(now);
		if (!dm_report_value_cache_set(rh, CACHE_ID_TIME_NOW, now)) {
			log_error("_get_now: failed to cache current time");
			return NULL;
		}
	} else
		now = (time_t *) cached_obj;

	return now;
}
static void _adjust_time_for_granularity(struct time_info *info, struct tm *tm, time_t *t)
{
	switch (info->min_abs_date_granularity) {
		case TIME_UNIT_YEAR:
			tm->tm_mon = 0;
			/* fall through */
		case TIME_UNIT_MONTH:
			tm->tm_mday = 1;
			break;
		default:
			break;
	}

	switch (info->min_abs_time_granularity) {
		case TIME_UNIT_HOUR:
			tm->tm_min = 0;
			/* fall through */
		case TIME_UNIT_MINUTE:
			tm->tm_sec = 0;
			break;
		case TIME_UNIT__END:
			if (info->min_rel_time_granularity == TIME_UNIT__END)
				tm->tm_hour = tm->tm_min = tm->tm_sec = 0;
			break;
		default:
			break;
	}

	if ((info->min_abs_time_granularity == TIME_UNIT__END) &&
	    (info->min_rel_time_granularity >= TIME_UNIT_DAY) &&
	    (info->min_rel_time_granularity <= TIME_UNIT_YEAR))
		tm->tm_hour = tm->tm_min = tm->tm_sec = 0;
}
#define SECS_PER_MINUTE 60
#define SECS_PER_HOUR   3600
#define SECS_PER_DAY    86400

static int _days_in_month[12] = {31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31};

static int _is_leap_year(long year)
{
	return (((year % 4 == 0) && (year % 100 != 0)) || (year % 400 == 0));
}

static int _get_days_in_month(long month, long year)
{
	return (month == 2 && _is_leap_year(year)) ? _days_in_month[month - 1] + 1
						   : _days_in_month[month - 1];
}
static void _get_resulting_time_span(struct time_info *info,
				     struct tm *tm, time_t t,
				     time_t *t_result1, time_t *t_result2)
{
	time_t t1 = mktime(tm) - t;
	time_t t2 = t1;
	struct tm tmp;

	if (info->min_abs_time_granularity != TIME_UNIT__END) {
		if (info->min_abs_time_granularity == TIME_UNIT_MINUTE)
			t2 += (SECS_PER_MINUTE - 1);
		else if (info->min_abs_time_granularity == TIME_UNIT_HOUR)
			t2 += (SECS_PER_HOUR - 1);
	} else if (info->min_rel_time_granularity != TIME_UNIT__END) {
		if (info->min_rel_time_granularity == TIME_UNIT_MINUTE)
			t1 -= (SECS_PER_MINUTE + 1);
		else if (info->min_rel_time_granularity == TIME_UNIT_HOUR)
			t1 -= (SECS_PER_HOUR + 1);
		else if ((info->min_rel_time_granularity >= TIME_UNIT_DAY) &&
			 (info->min_rel_time_granularity <= TIME_UNIT_YEAR))
			t2 += (SECS_PER_DAY - 1);
	} else {
		if (info->min_abs_date_granularity == TIME_UNIT_MONTH)
			t2 += (SECS_PER_DAY * _get_days_in_month(tm->tm_mon + 1, tm->tm_year) - 1);
		else if (info->min_abs_date_granularity != TIME_UNIT__END)
			t2 += (SECS_PER_DAY - 1);
	}

	/* Adjust for DST if needed. */
	localtime_r(&t1, &tmp);
	if (tmp.tm_isdst)
		t1 -= SECS_PER_HOUR;
	localtime_r(&t2, &tmp);
	if (tmp.tm_isdst)
		t2 -= SECS_PER_HOUR;

	*t_result1 = t1;
	*t_result2 = t2;
}
static int _translate_time_items(struct dm_report *rh, struct time_info *info,
				 const char **data_out)
{
	struct time_item *ti, *ti_p = NULL;
	long multiplier = 1;
	struct tm tm_now;
	time_id_t id;
	char *end;
	long num;
	struct tm tm; /* absolute time */
	time_t t = 0; /* offset into past before absolute time */
	time_t t1, t2;
	char buf[32];

	localtime_r(info->now, &tm_now);
	tm = tm_now;
	tm.tm_isdst = 0; /* we'll adjust for dst later */
	tm.tm_wday = tm.tm_yday = -1;

	dm_list_iterate_items(ti, info->ti_list) {
		id = ti->prop->id;

		if (_is_time_num(id)) {
			errno = 0;
			num = strtol(ti->s, &end, 10);
			if (errno) {
				log_error("_translate_time_items: invalid time.");
				return 0;
			}
			switch (id) {
				case TIME_NUM_MULTIPLIER_NEGATIVE:
					multiplier = -num;
					break;
				case TIME_NUM_MULTIPLIER:
					multiplier = num;
					break;
				case TIME_NUM_DAY:
					tm.tm_mday = num;
					break;
				case TIME_NUM_YEAR:
					tm.tm_year = num - 1900;
					break;
				default:
					break;
			}
		} else if (_is_time_month(id)) {
			tm.tm_mon = id - TIME_MONTH__START - 1;
		} else if (_is_time_label_date(id)) {
			if (_is_time_weekday(id)) {
				num = id - TIME_WEEKDAY__START - 1;
				if (tm_now.tm_wday < num)
					num = 7 - num + tm_now.tm_wday;
				else
					num = tm_now.tm_wday - num;
				t += num * SECS_PER_DAY;
			} else switch (id) {
				case TIME_LABEL_DATE_YESTERDAY:
					t += SECS_PER_DAY;
					break;
				case TIME_LABEL_DATE_TODAY:
					/* Nothing to do here - we started with today. */
					break;
				default:
					break;
			}
		} else if (_is_time_label_time(id)) {
			switch (id) {
				case TIME_LABEL_TIME_NOON:
					tm.tm_hour = 12;
					tm.tm_min = tm.tm_sec = 0;
					break;
				case TIME_LABEL_TIME_MIDNIGHT:
					tm.tm_hour = tm.tm_min = tm.tm_sec = 0;
					break;
				default:
					break;
			}
		} else if (_is_time_unit(id)) {
			switch (id) {
				case TIME_UNIT_SECOND:
					tm.tm_sec = multiplier;
					break;
				case TIME_UNIT_SECOND_REL:
					t += multiplier;
					break;
				case TIME_UNIT_MINUTE:
					tm.tm_min = multiplier;
					break;
				case TIME_UNIT_MINUTE_REL:
					t += (multiplier * SECS_PER_MINUTE);
					break;
				case TIME_UNIT_HOUR:
					tm.tm_hour = multiplier;
					break;
				case TIME_UNIT_HOUR_REL:
					t += (multiplier * SECS_PER_HOUR);
					break;
				case TIME_UNIT_AM:
					if (ti_p && ti_p->prop->id == TIME_NUM_MULTIPLIER)
						tm.tm_hour = multiplier;
					break;
				case TIME_UNIT_PM:
					if (ti_p && _is_time_unit(ti_p->prop->id))
						t -= 12 * SECS_PER_HOUR;
					else if (ti_p && ti_p->prop->id == TIME_NUM_MULTIPLIER)
						tm.tm_hour = multiplier + 12;
					break;
				case TIME_UNIT_DAY:
					t += multiplier * SECS_PER_DAY;
					break;
				case TIME_UNIT_WEEK:
					t += multiplier * 7 * SECS_PER_DAY;
					break;
				case TIME_UNIT_MONTH:
					/* if months > 12, convert to years first */
					num = multiplier / 12;
					tm.tm_year -= num;
					num = multiplier % 12;
					if (num > (tm.tm_mon + 1)) {
						tm.tm_year--;
						tm.tm_mon = 12 - num + tm.tm_mon;
					} else
						tm.tm_mon -= num;
					break;
				case TIME_UNIT_YEAR:
					tm.tm_year -= multiplier;
					break;
				default:
					break;
			}
		}
		ti_p = ti;
	}

	_adjust_time_for_granularity(info, &tm, &t);
	_get_resulting_time_span(info, &tm, t, &t1, &t2);

	dm_pool_free(info->mem, info->ti_list);
	info->ti_list = NULL;

	if (dm_snprintf(buf, sizeof(buf), "@" FMTd64 ":@" FMTd64, (int64_t) t1, (int64_t) t2) == -1) {
		log_error("_translate_time_items: dm_snprintf failed");
		return 0;
	}

	if (!(*data_out = dm_pool_strdup(info->mem, buf))) {
		log_error("_translate_time_items: dm_pool_strdup failed");
		return 0;
	}

	return 1;
}
static const char *_lv_time_handler_parse_fuzzy_name(struct dm_report *rh,
						     struct dm_pool *mem,
						     const char *data_in)
{
	const char *s = data_in;
	const char *data_out = NULL;
	struct time_info info = {.mem = mem,
				 .ti_list = NULL,
				 .now = _get_now(rh, mem),
				 .min_abs_date_granularity = TIME_UNIT__END,
				 .max_abs_date_granularity = TIME_UNIT__START,
				 .min_abs_time_granularity = TIME_UNIT__END,
				 .min_rel_time_granularity = TIME_UNIT__END};

	if (!info.now)
		goto_out;

	/* recognize top-level parts - string/number/time/timezone? */
	if (!_preparse_fuzzy_time(s, &info))
		goto out;

	/* recognize each part in more detail, also look at the context around if needed */
	if (!_recognize_time_items(&info))
		goto out;

	/* check if the combination of items is allowed or whether it makes sense at all */
	if (!_check_time_items(&info))
		goto out;

	/* translate items into final time range */
	if (!_translate_time_items(rh, &info, &data_out))
		goto out;
out:
	if (info.ti_list)
		dm_pool_free(info.mem, info.ti_list);
	return data_out;
}
static void *_lv_time_handler_get_dynamic_value(struct dm_report *rh,
						struct dm_pool *mem,
						const char *data_in)
{
	int64_t t1, t2;
	time_t *result;

	if (sscanf(data_in, "@" FMTd64 ":@" FMTd64, &t1, &t2) != 2) {
		log_error("Failed to get value for parsed time specification.");
		return NULL;
	}

	if (!(result = dm_pool_alloc(mem, 2 * sizeof(time_t)))) {
		log_error("Failed to allocate space to store time range.");
		return NULL;
	}

	result[0] = (time_t) t1; /* Validate range for 32b arch ? */
	result[1] = (time_t) t2;

	return result;
}
static int _lv_time_handler(struct dm_report *rh, struct dm_pool *mem,
			    uint32_t field_num,
			    dm_report_reserved_action_t action,
			    const void *data_in, const void **data_out)
{
	*data_out = NULL;
	if (!data_in)
		return 1;

	switch (action) {
		case DM_REPORT_RESERVED_PARSE_FUZZY_NAME:
			*data_out = _lv_time_handler_parse_fuzzy_name(rh, mem, data_in);
			break;
		case DM_REPORT_RESERVED_GET_DYNAMIC_VALUE:
			if (!(*data_out = _lv_time_handler_get_dynamic_value(rh, mem, data_in)))
				return 0;
			break;
		default:
			return -1;
	}

	return 1;
}
/*
 * Get type reserved value - the value returned is the direct value of that type.
 */
#define GET_TYPE_RESERVED_VALUE(id) _reserved_ ## id

/*
 * Get field reserved value - the value returned is always a pointer (const void *).
 */
#define GET_FIELD_RESERVED_VALUE(id) _reserved_ ## id.value

/*
 * Get first name assigned to the reserved value - this is the one that
 * should be reported/displayed. All the other names assigned for the reserved
 * value are synonyms recognized in selection criteria.
 */
#define GET_FIRST_RESERVED_NAME(id) _reserved_ ## id ## _names[0]
/*
 * Reserved values and their assigned names.
 * The first name is the one that is also used for reporting.
 * All names listed are synonyms recognized in selection criteria.
 * For binary-based values we map all reserved names listed onto value 1, blank onto value 0.
 *
 * TYPE_RESERVED_VALUE(type, reserved_value_id, description, value, reserved name, ...)
 * FIELD_RESERVED_VALUE(field_id, reserved_value_id, description, value, reserved name, ...)
 * FIELD_RESERVED_BINARY_VALUE(field_id, reserved_value_id, description, reserved name for 1, ...)
 *
 * Note: FIELD_RESERVED_BINARY_VALUE creates:
 *         - 'reserved_value_id_y' (for 1)
 *         - 'reserved_value_id_n' (for 0)
 */
#define NUM uint64_t
#define NUM_HND dm_report_reserved_handler
#define HND (dm_report_reserved_handler)
#define NOFLAG 0
#define NAMED DM_REPORT_FIELD_RESERVED_VALUE_NAMED
#define RANGE DM_REPORT_FIELD_RESERVED_VALUE_RANGE
#define FUZZY DM_REPORT_FIELD_RESERVED_VALUE_FUZZY_NAMES
#define DYNAMIC DM_REPORT_FIELD_RESERVED_VALUE_DYNAMIC_VALUE

#define TYPE_RESERVED_VALUE(type, flags, id, desc, value, ...) \
	static const char *_reserved_ ## id ## _names[] = { __VA_ARGS__, NULL}; \
	static const type _reserved_ ## id = value;

#define FIELD_RESERVED_VALUE(flags, field_id, id, desc, value, ...) \
	static const char *_reserved_ ## id ## _names[] = { __VA_ARGS__, NULL}; \
	static const struct dm_report_field_reserved_value _reserved_ ## id = {field_ ## field_id, value};

#define FIELD_RESERVED_BINARY_VALUE(field_id, id, desc, ...) \
	FIELD_RESERVED_VALUE(NAMED, field_id, id ## _y, desc, &_one64, __VA_ARGS__, _str_yes) \
	FIELD_RESERVED_VALUE(NAMED, field_id, id ## _n, desc, &_zero64, __VA_ARGS__, _str_no)

#include "values.h"

#undef NUM
#undef NUM_HND
#undef HND
#undef NOFLAG
#undef NAMED
#undef RANGE
#undef TYPE_RESERVED_VALUE
#undef FIELD_RESERVED_VALUE
#undef FIELD_RESERVED_BINARY_VALUE
#undef FUZZY
#undef DYNAMIC
/*
 * Create array of reserved values to be registered with reporting code via
 * dm_report_init_with_selection function that initializes report with
 * selection criteria. Selection code then recognizes these reserved values
 * when parsing selection criteria.
 */
#define NUM DM_REPORT_FIELD_TYPE_NUMBER
#define NUM_HND DM_REPORT_FIELD_TYPE_NUMBER
#define HND 0
#define NOFLAG 0
#define NAMED DM_REPORT_FIELD_RESERVED_VALUE_NAMED
#define RANGE DM_REPORT_FIELD_RESERVED_VALUE_RANGE
#define FUZZY DM_REPORT_FIELD_RESERVED_VALUE_FUZZY_NAMES
#define DYNAMIC DM_REPORT_FIELD_RESERVED_VALUE_DYNAMIC_VALUE

#define TYPE_RESERVED_VALUE(type, flags, id, desc, value, ...) {type | flags, &_reserved_ ## id, _reserved_ ## id ## _names, desc},

#define FIELD_RESERVED_VALUE(flags, field_id, id, desc, value, ...) {DM_REPORT_FIELD_TYPE_NONE | flags, &_reserved_ ## id, _reserved_ ## id ## _names, desc},

#define FIELD_RESERVED_BINARY_VALUE(field_id, id, desc, ...) \
	FIELD_RESERVED_VALUE(NAMED, field_id, id ## _y, desc, &_one64, __VA_ARGS__) \
	FIELD_RESERVED_VALUE(NAMED, field_id, id ## _n, desc, &_zero64, __VA_ARGS__)
static const struct dm_report_reserved_value _report_reserved_values[] = {
#include "values.h"
	{0, NULL, NULL, NULL}
};
#undef NUM
#undef NUM_HND
#undef HND
#undef NOFLAG
#undef NAMED
#undef RANGE
#undef FUZZY
#undef DYNAMIC
#undef TYPE_RESERVED_VALUE
#undef FIELD_RESERVED_VALUE
#undef FIELD_RESERVED_BINARY_VALUE
static int _field_string(struct dm_report *rh, struct dm_report_field *field, const char *data)
{
	return dm_report_field_string(rh, field, &data);
}

static int _field_set_value(struct dm_report_field *field, const void *data, const void *sort)
{
	dm_report_field_set_value(field, data, sort);

	return 1;
}

static int _field_set_string_list(struct dm_report *rh, struct dm_report_field *field,
				  const struct dm_list *list, void *private, int sorted,
				  const char *delimiter)
{
	struct cmd_context *cmd = (struct cmd_context *) private;

	return sorted ? dm_report_field_string_list(rh, field, list, delimiter ? : cmd->report_list_item_separator)
		      : dm_report_field_string_list_unsorted(rh, field, list, delimiter ? : cmd->report_list_item_separator);
}
/*
 * Data-munging functions to prepare each data type for display and sorting
 */

/*
 * Display either "0"/"1" or ""/"word" based on bin_value,
 * cmd->report_binary_values_as_numeric selects the mode to use.
 */
static int _binary_disp(struct dm_report *rh, struct dm_pool *mem __attribute__((unused)),
			struct dm_report_field *field, int bin_value, const char *word,
			void *private)
{
	const struct cmd_context *cmd = (const struct cmd_context *) private;

	if (cmd->report_binary_values_as_numeric)
		/* "0"/"1" */
		return _field_set_value(field, bin_value ? _str_one : _str_zero, bin_value ? &_one64 : &_zero64);

	/* blank/"word" */
	return _field_set_value(field, bin_value ? word : "", bin_value ? &_one64 : &_zero64);
}

static int _binary_undef_disp(struct dm_report *rh, struct dm_pool *mem __attribute__((unused)),
			      struct dm_report_field *field, void *private)
{
	const struct cmd_context *cmd = (const struct cmd_context *) private;

	if (cmd->report_binary_values_as_numeric)
		return _field_set_value(field, GET_FIRST_RESERVED_NAME(num_undef_64), &GET_TYPE_RESERVED_VALUE(num_undef_64));

	return _field_set_value(field, _str_unknown, &GET_TYPE_RESERVED_VALUE(num_undef_64));
}
static int _string_disp ( struct dm_report * rh , struct dm_pool * mem __attribute__ ( ( unused ) ) ,
2007-01-16 21:06:12 +03:00
struct dm_report_field * field ,
2010-07-09 19:34:40 +04:00
const void * data , void * private __attribute__ ( ( unused ) ) )
2002-12-12 23:55:49 +03:00
{
2011-02-18 17:47:28 +03:00
return dm_report_field_string ( rh , field , ( const char * const * ) data ) ;
2002-12-12 23:55:49 +03:00
}
2013-09-18 04:09:15 +04:00
static int _chars_disp ( struct dm_report * rh , struct dm_pool * mem __attribute__ ( ( unused ) ) ,
struct dm_report_field * field ,
const void * data , void * private __attribute__ ( ( unused ) ) )
{
2016-03-02 13:50:12 +03:00
return _field_string ( rh , field , data ) ;
2013-09-18 04:09:15 +04:00
}
2016-01-12 12:44:59 +03:00
static int _uuid_disp ( struct dm_report * rh , struct dm_pool * mem ,
2015-09-21 12:34:03 +03:00
struct dm_report_field * field ,
2016-01-12 12:44:59 +03:00
const void * data , void * private )
2015-09-21 12:34:03 +03:00
{
char * repstr ;
if ( ! ( repstr = id_format_and_copy ( mem , data ) ) )
return_0 ;
return _field_set_value ( field , repstr , NULL ) ;
}

static int _devminor_disp(struct dm_report *rh, struct dm_pool *mem,
			  struct dm_report_field *field,
			  const void *data, void *private)
{
	int devminor = (int) MINOR((*(const struct device * const *) data)->dev);

	return dm_report_field_int(rh, field, &devminor);
}

static int _devmajor_disp(struct dm_report *rh, struct dm_pool *mem,
			  struct dm_report_field *field,
			  const void *data, void *private)
{
	int devmajor = (int) MAJOR((*(const struct device * const *) data)->dev);

	return dm_report_field_int(rh, field, &devmajor);
}

static int _dev_name_disp(struct dm_report *rh, struct dm_pool *mem,
			  struct dm_report_field *field,
			  const void *data, void *private)
{
	return _field_string(rh, field, dev_name(*(const struct device * const *) data));
}

static int _devices_disp(struct dm_report *rh, struct dm_pool *mem,
			 struct dm_report_field *field,
			 const void *data, void *private)
{
	const struct lv_segment *seg = (const struct lv_segment *) data;
	struct dm_list *list;

	if (!(list = lvseg_devices(mem, seg)))
		return_0;

	return _field_set_string_list(rh, field, list, private, 0, ",");
}

static int _metadatadevices_disp(struct dm_report *rh, struct dm_pool *mem,
				 struct dm_report_field *field,
				 const void *data, void *private)
{
	const struct lv_segment *seg = (const struct lv_segment *) data;
	struct dm_list *list;

	if (!(list = lvseg_metadata_devices(mem, seg)))
		return_0;

	return _field_set_string_list(rh, field, list, private, 0, ",");
}

static int _peranges_disp(struct dm_report *rh, struct dm_pool *mem,
			  struct dm_report_field *field,
			  const void *data, void *private)
{
	const struct lv_segment *seg = (const struct lv_segment *) data;
	struct dm_list *list;

	if (!(list = lvseg_seg_pe_ranges(mem, seg)))
		return_0;

	return _field_set_string_list(rh, field, list, private, 0, " ");
}

static int _leranges_disp(struct dm_report *rh, struct dm_pool *mem,
			  struct dm_report_field *field,
			  const void *data, void *private)
{
	const struct lv_segment *seg = (const struct lv_segment *) data;
	struct dm_list *list;

	if (!(list = lvseg_seg_le_ranges(mem, seg)))
		return_0;

	return _field_set_string_list(rh, field, list, private, 0, NULL);
}

static int _metadataleranges_disp(struct dm_report *rh, struct dm_pool *mem,
				  struct dm_report_field *field,
				  const void *data, void *private)
{
	const struct lv_segment *seg = (const struct lv_segment *) data;
	struct dm_list *list;

	if (!(list = lvseg_seg_metadata_le_ranges(mem, seg)))
		return_0;
	return _field_set_string_list(rh, field, list, private, 0, NULL);
}

static int _tags_disp(struct dm_report *rh, struct dm_pool *mem,
		      struct dm_report_field *field,
		      const void *data, void *private)
{
	const struct dm_list *tagsl = (const struct dm_list *) data;

	return _field_set_string_list(rh, field, tagsl, private, 1, NULL);
}

struct _str_list_append_baton {
	struct dm_pool *mem;
	struct dm_list *result;
};

static int _str_list_append(const char *line, void *baton)
{
	struct _str_list_append_baton *b = baton;
	const char *line2 = dm_pool_strdup(b->mem, line);

	if (!line2)
		return_0;

	if (!str_list_add(b->mem, b->result, line2))
		return_0;

	return 1;
}

static int _cache_settings_disp(struct dm_report *rh, struct dm_pool *mem,
				struct dm_report_field *field,
				const void *data, void *private)
{
	const struct lv_segment *seg = (const struct lv_segment *) data;
	const struct lv_segment *setting_seg = NULL;
	const struct dm_config_node *settings;
	struct dm_list *result;
	struct _str_list_append_baton baton;
	struct dm_list dummy_list; /* dummy list to display "nothing" */

	if (seg_is_cache(seg) && lv_is_cache_vol(seg->pool_lv))
		setting_seg = seg;
	else if (seg_is_cache_pool(seg))
		setting_seg = seg;
	else if (seg_is_cache(seg))
		setting_seg = first_seg(seg->pool_lv);

	if (!setting_seg || !setting_seg->policy_settings) {
		dm_list_init(&dummy_list);
		return _field_set_string_list(rh, field, &dummy_list, private, 0, NULL);
		/* TODO: once we have support for STR_LIST reserved values, replace with:
		 * return _field_set_value(field, GET_FIRST_RESERVED_NAME(cache_settings_undef), GET_FIELD_RESERVED_VALUE(cache_settings_undef));
		 */
	}

	settings = setting_seg->policy_settings->child;

	if (!(result = str_list_create(mem)))
		return_0;

	baton.mem = mem;
	baton.result = result;

	while (settings) {
		dm_config_write_one_node(settings, _str_list_append, &baton);
		settings = settings->sib;
	}

	return _field_set_string_list(rh, field, result, private, 0, NULL);
}
static int _do_get_kernel_cache_settings_list(struct dm_pool *mem,
					      int cache_argc, char **cache_argv,
					      struct dm_list *result)
{
	const char *key, *value;
	char *buf;
	size_t buf_len;
	int i;

	for (i = 0; i + 1 < cache_argc; i += 2) {
		key = cache_argv[i];
		value = cache_argv[i + 1];
		/* +1 for "=" char and +1 for trailing zero */
		buf_len = strlen(key) + strlen(value) + 2;
		if (!(buf = dm_pool_alloc(mem, buf_len)))
			return_0;
		if (dm_snprintf(buf, buf_len, "%s=%s", key, value) < 0)
			return_0;
		if (!str_list_add_no_dup_check(mem, result, buf))
			return_0;
	}

	return 1;
}

static int _get_kernel_cache_settings_list(struct dm_pool *mem,
					   struct dm_status_cache *cache_status,
					   struct dm_list **result)
{
	if (!(*result = str_list_create(mem)))
		return_0;

	if (!_do_get_kernel_cache_settings_list(mem, cache_status->core_argc,
						cache_status->core_argv, *result))
		return_0;

	if (!_do_get_kernel_cache_settings_list(mem, cache_status->policy_argc,
						cache_status->policy_argv, *result))
		return_0;

	return 1;
}

static int _kernel_cache_settings_disp(struct dm_report *rh, struct dm_pool *mem,
				       struct dm_report_field *field,
				       const void *data, void *private)
{
	const struct lv_with_info_and_seg_status *lvdm = (const struct lv_with_info_and_seg_status *) data;
	struct dm_list dummy_list; /* dummy list to display "nothing" */
	struct dm_list *result;
	int r = 0;

	if (lvdm->seg_status.type != SEG_STATUS_CACHE) {
		dm_list_init(&dummy_list);
		return _field_set_string_list(rh, field, &dummy_list, private, 0, NULL);
	}

	if (!(mem = dm_pool_create("reporter_pool", 1024)))
		return_0;

	if (!_get_kernel_cache_settings_list(mem, lvdm->seg_status.cache, &result))
		goto_out;

	r = _field_set_string_list(rh, field, result, private, 0, NULL);
out:
	dm_pool_destroy(mem);

	return r;
}

static int _kernel_cache_policy_disp(struct dm_report *rh, struct dm_pool *mem,
				     struct dm_report_field *field,
				     const void *data, void *private)
{
	const struct lv_with_info_and_seg_status *lvdm = (const struct lv_with_info_and_seg_status *) data;

	if ((lvdm->seg_status.type == SEG_STATUS_CACHE) &&
	    lvdm->seg_status.cache->policy_name)
		return _field_string(rh, field, lvdm->seg_status.cache->policy_name);

	return _field_set_value(field, GET_FIRST_RESERVED_NAME(cache_policy_undef),
				GET_FIELD_RESERVED_VALUE(cache_policy_undef));
}

static int _kernelmetadataformat_disp(struct dm_report *rh, struct dm_pool *mem,
				      struct dm_report_field *field,
				      const void *data, void *private)
{
	const struct lv_with_info_and_seg_status *lvdm = (const struct lv_with_info_and_seg_status *) data;
	unsigned format;

	if (lvdm->seg_status.type == SEG_STATUS_CACHE) {
		format = (lvdm->seg_status.cache->feature_flags & DM_CACHE_FEATURE_METADATA2);
		return dm_report_field_uint64(rh, field, format ? &_two64 : &_one64);
	}

	return _field_set_value(field, "", &GET_TYPE_RESERVED_VALUE(num_undef_64));
}

static int _cache_policy_disp(struct dm_report *rh, struct dm_pool *mem,
			      struct dm_report_field *field,
			      const void *data, void *private)
{
	const struct lv_segment *seg = (const struct lv_segment *) data;
	const struct lv_segment *setting_seg = NULL;

	if (seg_is_cache(seg) && lv_is_cache_vol(seg->pool_lv))
		setting_seg = seg;
	else if (seg_is_cache_pool(seg))
		setting_seg = seg;
	else if (seg_is_cache(seg))
		setting_seg = first_seg(seg->pool_lv);

	if (!setting_seg || !setting_seg->policy_name)
		return _field_set_value(field, GET_FIRST_RESERVED_NAME(cache_policy_undef),
					GET_FIELD_RESERVED_VALUE(cache_policy_undef));

	return _field_string(rh, field, setting_seg->policy_name);
}

static int _modules_disp(struct dm_report *rh, struct dm_pool *mem,
			 struct dm_report_field *field,
			 const void *data, void *private)
{
	const struct logical_volume *lv = (const struct logical_volume *) data;
	struct dm_list *modules;

	if (!(modules = str_list_create(mem))) {
		log_error("modules str_list allocation failed");
		return 0;
	}

	if (!(list_lv_modules(mem, lv, modules)))
		return_0;

	return _field_set_string_list(rh, field, modules, private, 1, NULL);
}

static int _lvprofile_disp(struct dm_report *rh, struct dm_pool *mem,
			   struct dm_report_field *field,
			   const void *data, void *private)
{
	const struct logical_volume *lv = (const struct logical_volume *) data;

	if (lv->profile)
		return _field_string(rh, field, lv->profile->name);

	return _field_set_value(field, "", NULL);
}

static int _lvlockargs_disp(struct dm_report *rh, struct dm_pool *mem,
			    struct dm_report_field *field,
			    const void *data, void *private)
{
	const struct logical_volume *lv = (const struct logical_volume *) data;

	return _field_string(rh, field, lv->lock_args ? : "");
}

static int _vgfmt_disp(struct dm_report *rh, struct dm_pool *mem,
		       struct dm_report_field *field,
		       const void *data, void *private)
{
	const struct volume_group *vg = (const struct volume_group *) data;

	if (vg->fid && vg->fid->fmt)
		return _field_string(rh, field, vg->fid->fmt->name);

	return _field_set_value(field, "", NULL);
}

static int _pvfmt_disp(struct dm_report *rh, struct dm_pool *mem,
		       struct dm_report_field *field,
		       const void *data, void *private)
{
	const struct label *l = (const struct label *) data;

	if (l->labeller && l->labeller->fmt)
		return _field_string(rh, field, l->labeller->fmt->name);

	return _field_set_value(field, "", NULL);
}

static int _lvkmaj_disp(struct dm_report *rh, struct dm_pool *mem __attribute__((unused)),
			struct dm_report_field *field,
			const void *data, void *private __attribute__((unused)))
{
	const struct lv_with_info_and_seg_status *lvdm = (const struct lv_with_info_and_seg_status *) data;

	if (lvdm->info.exists && lvdm->info.major >= 0)
		return dm_report_field_int(rh, field, &lvdm->info.major);

	return dm_report_field_int32(rh, field, &GET_TYPE_RESERVED_VALUE(num_undef_32));
}

static int _lvkmin_disp(struct dm_report *rh, struct dm_pool *mem __attribute__((unused)),
			struct dm_report_field *field,
			const void *data, void *private __attribute__((unused)))
{
	const struct lv_with_info_and_seg_status *lvdm = (const struct lv_with_info_and_seg_status *) data;

	if (lvdm->info.exists && lvdm->info.minor >= 0)
		return dm_report_field_int(rh, field, &lvdm->info.minor);

	return dm_report_field_int32(rh, field, &GET_TYPE_RESERVED_VALUE(num_undef_32));
}

static int _lvstatus_disp(struct dm_report *rh __attribute__((unused)), struct dm_pool *mem,
			  struct dm_report_field *field,
			  const void *data, void *private __attribute__((unused)))
{
	const struct lv_with_info_and_seg_status *lvdm = (const struct lv_with_info_and_seg_status *) data;
	char *repstr;

	if (!(repstr = lv_attr_dup_with_info_and_seg_status(mem, lvdm)))
		return_0;

	return _field_set_value(field, repstr, NULL);
}

static int _pvstatus_disp(struct dm_report *rh __attribute__((unused)), struct dm_pool *mem,
			  struct dm_report_field *field,
			  const void *data, void *private __attribute__((unused)))
{
	const struct physical_volume *pv =
	    (const struct physical_volume *) data;
	char *repstr;

	if (!(repstr = pv_attr_dup(mem, pv)))
		return_0;

	return _field_set_value(field, repstr, NULL);
}

static int _vgstatus_disp(struct dm_report *rh __attribute__((unused)), struct dm_pool *mem,
			  struct dm_report_field *field,
			  const void *data, void *private __attribute__((unused)))
{
	const struct volume_group *vg = (const struct volume_group *) data;
	char *repstr;

	if (!(repstr = vg_attr_dup(mem, vg)))
		return_0;

	return _field_set_value(field, repstr, NULL);
}

static int _segtype_disp(struct dm_report *rh __attribute__((unused)),
			 struct dm_pool *mem __attribute__((unused)),
			 struct dm_report_field *field,
			 const void *data, void *private __attribute__((unused)))
{
	const struct lv_segment *seg = (const struct lv_segment *) data;
	char *name;

	if (!(name = lvseg_segtype_dup(mem, seg))) {
		log_error("Failed to get segtype name.");
		return 0;
	}

	return _field_set_value(field, name, NULL);
}

static int _lvname_disp(struct dm_report *rh, struct dm_pool *mem,
			struct dm_report_field *field,
			const void *data, void *private)
{
	struct cmd_context *cmd = (struct cmd_context *) private;
	const struct logical_volume *lv = (const struct logical_volume *) data;
	int is_historical = lv_is_historical(lv);
	const char *tmp_lvname;
	char *repstr, *lvname;
	size_t len;

	if (!is_historical && (lv_is_visible(lv) || !cmd->report_mark_hidden_devices))
		return _field_string(rh, field, lv->name);

	if (is_historical) {
		tmp_lvname = lv->this_glv->historical->name;
		len = strlen(tmp_lvname) + strlen(HISTORICAL_LV_PREFIX) + 1;
	} else {
		tmp_lvname = lv->name;
		len = strlen(tmp_lvname) + 3;
	}

	if (!(repstr = dm_pool_zalloc(mem, len))) {
		log_error("dm_pool_alloc failed");
		return 0;
	}

	if (dm_snprintf(repstr, len, "%s%s%s",
			is_historical ? HISTORICAL_LV_PREFIX : "[",
			tmp_lvname,
			is_historical ? "" : "]") < 0) {
		log_error("lvname snprintf failed");
		return 0;
	}

	if (!(lvname = dm_pool_strdup(mem, tmp_lvname))) {
		log_error("dm_pool_strdup failed");
		return 0;
	}

	return _field_set_value(field, repstr, lvname);
}

static int _do_loglv_disp(struct dm_report *rh, struct dm_pool *mem,
			  struct dm_report_field *field,
			  const void *data, void *private,
			  int uuid)
{
	const struct logical_volume *lv = (const struct logical_volume *) data;
	struct logical_volume *mirror_log_lv = lv_mirror_log_lv(lv);

	if (!mirror_log_lv)
		return _field_set_value(field, "", NULL);

	if (uuid)
		return _uuid_disp(rh, mem, field, &mirror_log_lv->lvid.id[1], private);

	return _lvname_disp(rh, mem, field, mirror_log_lv, private);
}

static int _loglv_disp(struct dm_report *rh, struct dm_pool *mem __attribute__((unused)),
		       struct dm_report_field *field,
		       const void *data, void *private __attribute__((unused)))
{
	return _do_loglv_disp(rh, mem, field, data, private, 0);
}

static int _loglvuuid_disp(struct dm_report *rh, struct dm_pool *mem __attribute__((unused)),
			   struct dm_report_field *field,
			   const void *data, void *private __attribute__((unused)))
{
	return _do_loglv_disp(rh, mem, field, data, private, 1);
}

static int _lvfullname_disp(struct dm_report *rh, struct dm_pool *mem,
			    struct dm_report_field *field,
			    const void *data, void *private __attribute__((unused)))
{
	const struct logical_volume *lv = (const struct logical_volume *) data;
	char *repstr;

	if (!(repstr = lv_fullname_dup(mem, lv)))
		return_0;

	return _field_set_value(field, repstr, NULL);
}

static int _lvparent_disp(struct dm_report *rh, struct dm_pool *mem,
			  struct dm_report_field *field,
			  const void *data, void *private)
{
	const struct logical_volume *lv = (const struct logical_volume *) data;
	struct logical_volume *parent_lv = lv_parent(lv);

	if (!parent_lv)
		return _field_set_value(field, "", NULL);

	return _lvname_disp(rh, mem, field, parent_lv, private);
}

static int _do_datalv_disp(struct dm_report *rh, struct dm_pool *mem __attribute__((unused)),
			   struct dm_report_field *field,
			   const void *data, void *private __attribute__((unused)),
			   int uuid)
{
	const struct logical_volume *lv = (const struct logical_volume *) data;
	struct logical_volume *data_lv = lv_data_lv(lv);

	if (!data_lv)
		return _field_set_value(field, "", NULL);

	if (uuid)
		return _uuid_disp(rh, mem, field, &data_lv->lvid.id[1], private);

	return _lvname_disp(rh, mem, field, data_lv, private);
}

static int _datalv_disp(struct dm_report *rh, struct dm_pool *mem __attribute__((unused)),
			struct dm_report_field *field,
			const void *data, void *private __attribute__((unused)))
{
	return _do_datalv_disp(rh, mem, field, data, private, 0);
}

static int _datalvuuid_disp(struct dm_report *rh, struct dm_pool *mem __attribute__((unused)),
			    struct dm_report_field *field,
			    const void *data, void *private __attribute__((unused)))
{
	return _do_datalv_disp(rh, mem, field, data, private, 1);
}

static int _do_metadatalv_disp(struct dm_report *rh, struct dm_pool *mem __attribute__((unused)),
			       struct dm_report_field *field,
			       const void *data, void *private __attribute__((unused)),
			       int uuid)
{
	const struct logical_volume *lv = (const struct logical_volume *) data;
	struct logical_volume *metadata_lv = lv_metadata_lv(lv);

	if (!metadata_lv)
		return _field_set_value(field, "", NULL);

	if (uuid)
		return _uuid_disp(rh, mem, field, &metadata_lv->lvid.id[1], private);

	return _lvname_disp(rh, mem, field, metadata_lv, private);
}

static int _metadatalv_disp(struct dm_report *rh, struct dm_pool *mem __attribute__((unused)),
			    struct dm_report_field *field,
			    const void *data, void *private __attribute__((unused)))
{
	return _do_metadatalv_disp(rh, mem, field, data, private, 0);
}

static int _metadatalvuuid_disp(struct dm_report *rh, struct dm_pool *mem __attribute__((unused)),
				struct dm_report_field *field,
				const void *data, void *private __attribute__((unused)))
{
	return _do_metadatalv_disp(rh, mem, field, data, private, 1);
}

static int _do_poollv_disp(struct dm_report *rh, struct dm_pool *mem,
			   struct dm_report_field *field,
			   const void *data, void *private,
			   int uuid)
{
	const struct logical_volume *lv = (const struct logical_volume *) data;
	struct logical_volume *pool_lv = lv_pool_lv(lv);

	if (!pool_lv)
		return _field_set_value(field, "", NULL);

	if (uuid)
		return _uuid_disp(rh, mem, field, &pool_lv->lvid.id[1], private);

	return _lvname_disp(rh, mem, field, pool_lv, private);
}

static int _poollv_disp(struct dm_report *rh, struct dm_pool *mem __attribute__((unused)),
			struct dm_report_field *field,
			const void *data, void *private __attribute__((unused)))
{
	return _do_poollv_disp(rh, mem, field, data, private, 0);
}

static int _poollvuuid_disp(struct dm_report *rh, struct dm_pool *mem,
			    struct dm_report_field *field,
			    const void *data, void *private __attribute__((unused)))
{
	return _do_poollv_disp(rh, mem, field, data, private, 1);
}
static int _lvpath_disp(struct dm_report *rh, struct dm_pool *mem,
			struct dm_report_field *field,
			const void *data, void *private __attribute__((unused)))
{
	const struct logical_volume *lv = (const struct logical_volume *) data;
	char *repstr;

	if (!(repstr = lv_path_dup(mem, lv)))
		return_0;

	return _field_set_value(field, repstr, NULL);
}

static int _lvdmpath_disp(struct dm_report *rh, struct dm_pool *mem,
			  struct dm_report_field *field,
			  const void *data, void *private __attribute__((unused)))
{
	const struct logical_volume *lv = (const struct logical_volume *) data;
	char *repstr;

	if (!(repstr = lv_dmpath_dup(mem, lv)))
		return_0;

	return _field_set_value(field, repstr, NULL);
}
static int _do_origin_disp(struct dm_report *rh, struct dm_pool *mem,
			   struct dm_report_field *field,
			   const void *data, void *private,
			   int uuid)
{
	const struct logical_volume *lv = (const struct logical_volume *) data;
	struct logical_volume *origin_lv = lv_origin_lv(lv);

	if (!origin_lv)
		return _field_set_value(field, "", NULL);

	if (uuid)
		return _uuid_disp(rh, mem, field, &origin_lv->lvid.id[1], private);

	return _lvname_disp(rh, mem, field, origin_lv, private);
}

static int _origin_disp(struct dm_report *rh, struct dm_pool *mem,
			struct dm_report_field *field,
			const void *data, void *private)
{
	return _do_origin_disp(rh, mem, field, data, private, 0);
}

static int _originuuid_disp(struct dm_report *rh, struct dm_pool *mem,
			    struct dm_report_field *field,
			    const void *data, void *private)
{
	return _do_origin_disp(rh, mem, field, data, private, 1);
}
static const char *_get_glv_str(char *buf, size_t buf_len,
				struct generic_logical_volume *glv)
{
	if (!glv->is_historical)
		return glv->live->name;

	if (dm_snprintf(buf, buf_len, "%s%s", HISTORICAL_LV_PREFIX, glv->historical->name) < 0) {
		log_error("_get_glv_str: dm_snprintf failed");
		return NULL;
	}

	return buf;
}
/*
 * The lv_ancestors and lv_descendants reporting fields show the full chain
 * of ancestors and descendants for snapshots, both thick and thin (for
 * thick snapshots the "ancestors" field is equal to the "origin" field,
 * since thick snapshots cannot be chained).
 *
 * These fields display the current state only, not history: if the
 * snapshot chain is broken in the middle, the historical origin is not
 * reported (historical ancestors and descendants are covered by the
 * "full" variants of these fields).
 *
 * Example (origin --> snapshot):
 *
 *	lvol1 --> lvol2 --> lvol3 --> lvol4
 *		      \
 *		       --> lvol5 --> lvol6 --> lvol7 --> lvol8
 *
 *	$ lvs -o name,pool_lv,origin,ancestors,descendants vg
 *	  LV    Pool Origin Ancestors               Descendants
 *	  lvol1 pool                                lvol2,lvol3,lvol4,lvol5,lvol6,lvol7,lvol8
 *	  lvol2 pool lvol1  lvol1                   lvol3,lvol4,lvol5,lvol6,lvol7,lvol8
 *	  lvol3 pool lvol2  lvol2,lvol1             lvol4
 *	  lvol4 pool lvol3  lvol3,lvol2,lvol1
 *	  lvol5 pool lvol2  lvol2,lvol1             lvol6,lvol7,lvol8
 *	  lvol6 pool lvol5  lvol5,lvol2,lvol1       lvol7,lvol8
 *	  lvol7 pool lvol6  lvol6,lvol5,lvol2,lvol1 lvol8
 *	  lvol8 pool lvol7  lvol7,lvol6,lvol5,lvol2,lvol1
 */
static int _find_ancestors(struct _str_list_append_baton *ancestors,
			   struct generic_logical_volume glv,
			   int full, int include_historical_lvs)
{
	struct lv_segment *seg;
	void *orig_p = glv.live;
	const char *ancestor_str;
	char buf[NAME_LEN + strlen(HISTORICAL_LV_PREFIX) + 1];

	if (glv.is_historical) {
		if (full && glv.historical->indirect_origin)
			glv = *glv.historical->indirect_origin;
	} else if (lv_is_cow(glv.live)) {
		glv.live = origin_from_cow(glv.live);
	} else if (lv_is_thin_volume(glv.live)) {
		seg = first_seg(glv.live);
		if (seg->origin)
			glv.live = seg->origin;
		else if (seg->external_lv)
			glv.live = seg->external_lv;
		else if (full && seg->indirect_origin)
			glv = *seg->indirect_origin;
	}

	if (orig_p != glv.live) {
		if (!(ancestor_str = _get_glv_str(buf, sizeof(buf), &glv)))
			return_0;
		if (!glv.is_historical || include_historical_lvs) {
			if (!_str_list_append(ancestor_str, ancestors))
				return_0;
		}
		if (!_find_ancestors(ancestors, glv, full, include_historical_lvs))
			return_0;
	}

	return 1;
}
static int _lvancestors_disp(struct dm_report *rh, struct dm_pool *mem,
			     struct dm_report_field *field,
			     const void *data, void *private)
{
	struct cmd_context *cmd = (struct cmd_context *) private;
	struct logical_volume *lv = (struct logical_volume *) data;
	struct _str_list_append_baton ancestors;
	struct generic_logical_volume glv;

	ancestors.mem = mem;
	if (!(ancestors.result = str_list_create(mem)))
		return_0;

	if ((glv.is_historical = lv_is_historical(lv)))
		glv.historical = lv->this_glv->historical;
	else
		glv.live = lv;

	if (!_find_ancestors(&ancestors, glv, 0, cmd->include_historical_lvs)) {
		dm_pool_free(ancestors.mem, ancestors.result);
		return_0;
	}

	return _field_set_string_list(rh, field, ancestors.result, private, 0, NULL);
}
static int _lvfullancestors_disp(struct dm_report *rh, struct dm_pool *mem,
				 struct dm_report_field *field,
				 const void *data, void *private)
{
	struct cmd_context *cmd = (struct cmd_context *) private;
	struct logical_volume *lv = (struct logical_volume *) data;
	struct _str_list_append_baton full_ancestors;
	struct generic_logical_volume glv;

	full_ancestors.mem = mem;
	if (!(full_ancestors.result = str_list_create(mem)))
		return_0;

	if ((glv.is_historical = lv_is_historical(lv)))
		glv.historical = lv->this_glv->historical;
	else
		glv.live = lv;

	if (!_find_ancestors(&full_ancestors, glv, 1, cmd->include_historical_lvs)) {
		dm_pool_free(full_ancestors.mem, full_ancestors.result);
		return_0;
	}

	return _field_set_string_list(rh, field, full_ancestors.result, private, 0, NULL);
}
static int _find_descendants ( struct _str_list_append_baton * descendants ,
2016-03-01 17:24:48 +03:00
struct generic_logical_volume glv ,
int full , int include_historical_lvs )
report: add lv_ancestors and lv_descendants reporting fields
Show full chain of ancestors and descendants for snapshots
(both thick and thin - in case of thick, the "ancestor" field
is actually equal to "origin" field as snapshots can't be
chained for thick snapshots).
These fields display current state as it is, they do not
display any history! If the snapshot chain is broken in
the middle, we don't report the historical origin (this
is going to be a part of another patch and a different
set of fields or just a switch for existing fields to
show ancestors and descendants with history included).
For example:
(origin --> snapshot)
lvol1 --> lvol2 --> lvol3 --> lvol4
\
--> lvol5 --> lvol6 --> lvol7 --> lvol8
$ lvs -o name,pool_lv,origin,ancestors,descendants vg
LV Pool Origin Ancestors Descendants
lvol1 pool lvol2,lvol3,lvol4,lvol5,lvol6,lvol7,lvol8
lvol2 pool lvol1 lvol1 lvol3,lvol4,lvol5,lvol6,lvol7,lvol8
lvol3 pool lvol2 lvol2,lvol1 lvol4
lvol4 pool lvol3 lvol3,lvol2,lvol1
lvol5 pool lvol2 lvol2,lvol1 lvol6,lvol7,lvol8
lvol6 pool lvol5 lvol5,lvol2,lvol1 lvol7,lvol8
lvol7 pool lvol6 lvol6,lvol5,lvol2,lvol1 lvol8
lvol8 pool lvol7 lvol7,lvol6,lvol5,lvol2,lvol1
2015-04-24 12:51:52 +03:00
{
2016-03-01 17:24:48 +03:00
struct generic_logical_volume glv_next = { 0 } ;
report: add lv_ancestors and lv_descendants reporting fields
Show full chain of ancestors and descendants for snapshots
(both thick and thin - in case of thick, the "ancestor" field
is actually equal to "origin" field as snapshots can't be
chained for thick snapshots).
These fields display current state as it is, they do not
display any history! If the snapshot chain is broken in
the middle, we don't report the historical origin (this
is going to be a part of another patch and a different
set of fields or just a switch for existing fields to
show ancestors and descendants with history included).
For example:
(origin --> snapshot)
lvol1 --> lvol2 --> lvol3 --> lvol4
\
--> lvol5 --> lvol6 --> lvol7 --> lvol8
$ lvs -o name,pool_lv,origin,ancestors,descendants vg
LV Pool Origin Ancestors Descendants
lvol1 pool lvol2,lvol3,lvol4,lvol5,lvol6,lvol7,lvol8
lvol2 pool lvol1 lvol1 lvol3,lvol4,lvol5,lvol6,lvol7,lvol8
lvol3 pool lvol2 lvol2,lvol1 lvol4
lvol4 pool lvol3 lvol3,lvol2,lvol1
lvol5 pool lvol2 lvol2,lvol1 lvol6,lvol7,lvol8
lvol6 pool lvol5 lvol5,lvol2,lvol1 lvol7,lvol8
lvol7 pool lvol6 lvol6,lvol5,lvol2,lvol1 lvol8
lvol8 pool lvol7 lvol7,lvol6,lvol5,lvol2,lvol1
2015-04-24 12:51:52 +03:00
const struct seg_list * sl ;
struct lv_segment * seg ;
2016-03-01 17:24:48 +03:00
struct glv_list * glvl ;
struct dm_list * list ;
const char * descendant_str ;
char buf [ 64 ] ;
report: add lv_ancestors and lv_descendants reporting fields
Show full chain of ancestors and descendants for snapshots
(both thick and thin - in case of thick, the "ancestor" field
is actually equal to "origin" field as snapshots can't be
chained for thick snapshots).
These fields display current state as it is, they do not
display any history! If the snapshot chain is broken in
the middle, we don't report the historical origin (this
is going to be a part of another patch and a different
set of fields or just a switch for existing fields to
show ancestors and descendants with history included).
For example:
(origin --> snapshot)
lvol1 --> lvol2 --> lvol3 --> lvol4
\
--> lvol5 --> lvol6 --> lvol7 --> lvol8
$ lvs -o name,pool_lv,origin,ancestors,descendants vg
LV Pool Origin Ancestors Descendants
lvol1 pool lvol2,lvol3,lvol4,lvol5,lvol6,lvol7,lvol8
lvol2 pool lvol1 lvol1 lvol3,lvol4,lvol5,lvol6,lvol7,lvol8
lvol3 pool lvol2 lvol2,lvol1 lvol4
lvol4 pool lvol3 lvol3,lvol2,lvol1
lvol5 pool lvol2 lvol2,lvol1 lvol6,lvol7,lvol8
lvol6 pool lvol5 lvol5,lvol2,lvol1 lvol7,lvol8
lvol7 pool lvol6 lvol6,lvol5,lvol2,lvol1 lvol8
lvol8 pool lvol7 lvol7,lvol6,lvol5,lvol2,lvol1
2015-04-24 12:51:52 +03:00
2016-03-01 17:24:48 +03:00
if ( glv . is_historical ) {
if ( full ) {
list = & glv . historical - > indirect_glvs ;
dm_list_iterate_items ( glvl , list ) {
if ( ! glvl - > glv - > is_historical | | include_historical_lvs ) {
if ( ! ( descendant_str = _get_glv_str ( buf , sizeof ( buf ) , glvl - > glv ) ) )
return_0 ;
if ( ! _str_list_append ( descendant_str , descendants ) )
return_0 ;
}
if ( ! _find_descendants ( descendants , * glvl - > glv , full , include_historical_lvs ) )
return_0 ;
}
}
} else if ( lv_is_origin ( glv . live ) ) {
list = & glv . live - > snapshot_segs ;
dm_list_iterate_items_gen ( seg , list , origin_list ) {
if ( ( glv . live = seg - > cow ) ) {
if ( ! ( descendant_str = _get_glv_str ( buf , sizeof ( buf ) , & glv ) ) )
report: add lv_ancestors and lv_descendants reporting fields
Show full chain of ancestors and descendants for snapshots
(both thick and thin - in case of thick, the "ancestor" field
is actually equal to "origin" field as snapshots can't be
chained for thick snapshots).
These fields display current state as it is, they do not
display any history! If the snapshot chain is broken in
the middle, we don't report the historical origin (this
is going to be a part of another patch and a different
set of fields or just a switch for existing fields to
show ancestors and descendants with history included).
For example:
(origin --> snapshot)
lvol1 --> lvol2 --> lvol3 --> lvol4
\
--> lvol5 --> lvol6 --> lvol7 --> lvol8
$ lvs -o name,pool_lv,origin,ancestors,descendants vg
LV Pool Origin Ancestors Descendants
lvol1 pool lvol2,lvol3,lvol4,lvol5,lvol6,lvol7,lvol8
lvol2 pool lvol1 lvol1 lvol3,lvol4,lvol5,lvol6,lvol7,lvol8
lvol3 pool lvol2 lvol2,lvol1 lvol4
lvol4 pool lvol3 lvol3,lvol2,lvol1
lvol5 pool lvol2 lvol2,lvol1 lvol6,lvol7,lvol8
lvol6 pool lvol5 lvol5,lvol2,lvol1 lvol7,lvol8
lvol7 pool lvol6 lvol6,lvol5,lvol2,lvol1 lvol8
lvol8 pool lvol7 lvol7,lvol6,lvol5,lvol2,lvol1
2015-04-24 12:51:52 +03:00
return_0 ;
2016-03-01 17:24:48 +03:00
if ( ! _str_list_append ( descendant_str , descendants ) )
return_0 ;
if ( ! _find_descendants ( descendants , glv , full , include_historical_lvs ) )
report: add lv_ancestors and lv_descendants reporting fields
Show full chain of ancestors and descendants for snapshots
(both thick and thin - in case of thick, the "ancestor" field
is actually equal to "origin" field as snapshots can't be
chained for thick snapshots).
These fields display current state as it is, they do not
display any history! If the snapshot chain is broken in
the middle, we don't report the historical origin (this
is going to be a part of another patch and a different
set of fields or just a switch for existing fields to
show ancestors and descendants with history included).
For example:
(origin --> snapshot)
lvol1 --> lvol2 --> lvol3 --> lvol4
\
--> lvol5 --> lvol6 --> lvol7 --> lvol8
$ lvs -o name,pool_lv,origin,ancestors,descendants vg
LV Pool Origin Ancestors Descendants
lvol1 pool lvol2,lvol3,lvol4,lvol5,lvol6,lvol7,lvol8
lvol2 pool lvol1 lvol1 lvol3,lvol4,lvol5,lvol6,lvol7,lvol8
lvol3 pool lvol2 lvol2,lvol1 lvol4
lvol4 pool lvol3 lvol3,lvol2,lvol1
lvol5 pool lvol2 lvol2,lvol1 lvol6,lvol7,lvol8
lvol6 pool lvol5 lvol5,lvol2,lvol1 lvol7,lvol8
lvol7 pool lvol6 lvol6,lvol5,lvol2,lvol1 lvol8
lvol8 pool lvol7 lvol7,lvol6,lvol5,lvol2,lvol1
2015-04-24 12:51:52 +03:00
return_0 ;
}
}
} else {
2016-03-01 17:24:48 +03:00
list = & glv . live - > segs_using_this_lv ;
dm_list_iterate_items ( sl , list ) {
report: add lv_ancestors and lv_descendants reporting fields
Show full chain of ancestors and descendants for snapshots
(both thick and thin - in case of thick, the "ancestor" field
is actually equal to "origin" field as snapshots can't be
chained for thick snapshots).
These fields display current state as it is, they do not
display any history! If the snapshot chain is broken in
the middle, we don't report the historical origin (this
is going to be a part of another patch and a different
set of fields or just a switch for existing fields to
show ancestors and descendants with history included).
For example:
(origin --> snapshot)
lvol1 --> lvol2 --> lvol3 --> lvol4
\
--> lvol5 --> lvol6 --> lvol7 --> lvol8
$ lvs -o name,pool_lv,origin,ancestors,descendants vg
LV Pool Origin Ancestors Descendants
lvol1 pool lvol2,lvol3,lvol4,lvol5,lvol6,lvol7,lvol8
lvol2 pool lvol1 lvol1 lvol3,lvol4,lvol5,lvol6,lvol7,lvol8
lvol3 pool lvol2 lvol2,lvol1 lvol4
lvol4 pool lvol3 lvol3,lvol2,lvol1
lvol5 pool lvol2 lvol2,lvol1 lvol6,lvol7,lvol8
lvol6 pool lvol5 lvol5,lvol2,lvol1 lvol7,lvol8
lvol7 pool lvol6 lvol6,lvol5,lvol2,lvol1 lvol8
lvol8 pool lvol7 lvol7,lvol6,lvol5,lvol2,lvol1
2015-04-24 12:51:52 +03:00
if ( lv_is_thin_volume ( sl - > seg - > lv ) ) {
seg = first_seg ( sl - > seg - > lv ) ;
2016-03-01 17:24:48 +03:00
if ( ( seg - > origin = = glv . live ) | | ( seg - > external_lv = = glv . live ) ) {
glv_next . live = sl - > seg - > lv ;
if ( ! ( descendant_str = _get_glv_str ( buf , sizeof ( buf ) , & glv_next ) ) )
return_0 ;
if ( ! _str_list_append ( descendant_str , descendants ) )
return_0 ;
if ( ! _find_descendants ( descendants , glv_next , full , include_historical_lvs ) )
return_0 ;
}
			}
		}
		if (full) {
			list = &glv.live->indirect_glvs;

			dm_list_iterate_items(glvl, list) {
				if (!glvl->glv->is_historical || include_historical_lvs) {
					if (!(descendant_str = _get_glv_str(buf, sizeof(buf), glvl->glv)))
						return_0;

					if (!_str_list_append(descendant_str, descendants))
						return_0;
				}

				if (!_find_descendants(descendants, *glvl->glv, full, include_historical_lvs))
					return_0;
			}
		}
	}

	return 1;
}
static int _lvdescendants_disp(struct dm_report *rh, struct dm_pool *mem,
			       struct dm_report_field *field,
			       const void *data, void *private)
{
	struct cmd_context *cmd = (struct cmd_context *) private;
	struct logical_volume *lv = (struct logical_volume *) data;
	struct _str_list_append_baton descendants;
	struct generic_logical_volume glv;
	descendants.mem = mem;
	if (!(descendants.result = str_list_create(mem)))
		return_0;

	if ((glv.is_historical = lv_is_historical(lv)))
		glv.historical = lv->this_glv->historical;
	else
		glv.live = lv;

	if (!_find_descendants(&descendants, glv, 0, cmd->include_historical_lvs)) {
		dm_pool_free(descendants.mem, descendants.result);
		return_0;
	}

	return _field_set_string_list(rh, field, descendants.result, private, 0, NULL);
}

static int _lvfulldescendants_disp(struct dm_report *rh, struct dm_pool *mem,
				   struct dm_report_field *field,
				   const void *data, void *private)
{
	struct cmd_context *cmd = (struct cmd_context *) private;
	struct logical_volume *lv = (struct logical_volume *) data;
	struct _str_list_append_baton descendants;
	struct generic_logical_volume glv;

	descendants.mem = mem;
	if (!(descendants.result = str_list_create(mem)))
		return_0;

	if ((glv.is_historical = lv_is_historical(lv)))
		glv.historical = lv->this_glv->historical;
	else
		glv.live = lv;

	if (!_find_descendants(&descendants, glv, 1, cmd->include_historical_lvs)) {
		dm_pool_free(descendants.mem, descendants.result);
		return_0;
	}

	return _field_set_string_list(rh, field, descendants.result, private, 0, NULL);
}
static int _do_movepv_disp(struct dm_report *rh, struct dm_pool *mem,
			   struct dm_report_field *field,
			   const void *data, void *private,
			   int uuid)
{
	const struct logical_volume *lv = (const struct logical_volume *) data;
	const char *repstr;

	if (uuid)
		repstr = lv_move_pv_uuid_dup(mem, lv);
	else
		repstr = lv_move_pv_dup(mem, lv);

	if (repstr)
		return _field_string(rh, field, repstr);

	return _field_set_value(field, "", NULL);
}

static int _movepv_disp(struct dm_report *rh, struct dm_pool *mem __attribute__((unused)),
			struct dm_report_field *field,
			const void *data, void *private __attribute__((unused)))
{
	return _do_movepv_disp(rh, mem, field, data, private, 0);
}

static int _movepvuuid_disp(struct dm_report *rh, struct dm_pool *mem __attribute__((unused)),
			    struct dm_report_field *field,
			    const void *data, void *private __attribute__((unused)))
{
	return _do_movepv_disp(rh, mem, field, data, private, 1);
}

static int _do_convertlv_disp(struct dm_report *rh, struct dm_pool *mem,
			      struct dm_report_field *field,
			      const void *data, void *private,
			      int uuid)
{
	const struct logical_volume *lv = (const struct logical_volume *) data;
	const struct logical_volume *convert_lv = lv_convert_lv(lv);

	if (!convert_lv)
		return _field_set_value(field, "", NULL);

	if (uuid)
		return _uuid_disp(rh, mem, field, &convert_lv->lvid.id[1], private);

	return _lvname_disp(rh, mem, field, convert_lv, private);
}

static int _convertlv_disp(struct dm_report *rh, struct dm_pool *mem __attribute__((unused)),
			   struct dm_report_field *field,
			   const void *data, void *private __attribute__((unused)))
{
	return _do_convertlv_disp(rh, mem, field, data, private, 0);
}

static int _convertlvuuid_disp(struct dm_report *rh, struct dm_pool *mem __attribute__((unused)),
			       struct dm_report_field *field,
			       const void *data, void *private __attribute__((unused)))
{
	return _do_convertlv_disp(rh, mem, field, data, private, 1);
}

static int _size32_disp(struct dm_report *rh __attribute__((unused)), struct dm_pool *mem,
			struct dm_report_field *field,
			const void *data, void *private)
{
	const uint32_t size = *(const uint32_t *) data;
	const char *disp, *repstr;
	double *sortval;

	if (!*(disp = display_size_units(private, (uint64_t) size)))
		return_0;

	if (!(repstr = dm_pool_strdup(mem, disp))) {
		log_error("dm_pool_strdup failed");
		return 0;
	}

	if (!(sortval = dm_pool_alloc(mem, sizeof(double)))) {
		log_error("dm_pool_alloc failed");
		return 0;
	}
	*sortval = (double) size;

	return _field_set_value(field, repstr, sortval);
}

static int _size64_disp(struct dm_report *rh __attribute__((unused)),
			struct dm_pool *mem,
			struct dm_report_field *field,
			const void *data, void *private)
{
	const uint64_t size = *(const uint64_t *) data;
	const char *disp, *repstr;
	double *sortval;

	if (!*(disp = display_size_units(private, size)))
		return_0;

	if (!(repstr = dm_pool_strdup(mem, disp))) {
		log_error("dm_pool_strdup failed");
		return 0;
	}
	if (!(sortval = dm_pool_alloc(mem, sizeof(double)))) {
		log_error("dm_pool_alloc failed");
		return 0;
	}
	*sortval = (double) size;

	return _field_set_value(field, repstr, sortval);
}

static int _lv_size_disp(struct dm_report *rh, struct dm_pool *mem,
			 struct dm_report_field *field,
			 const void *data, void *private)
{
	const struct logical_volume *lv = (const struct logical_volume *) data;
	const struct lv_segment *seg = first_seg(lv);
	uint64_t size = lv->le_count;

	if (seg && !lv_is_raid_image(lv))
		size -= seg->reshape_len * (seg->area_count > 2 ? (seg->area_count - seg->segtype->parity_devs) : 1);

	size *= lv->vg->extent_size;

	return _size64_disp(rh, mem, field, &size, private);
}

static int _uint32_disp(struct dm_report *rh, struct dm_pool *mem __attribute__((unused)),
			struct dm_report_field *field,
			const void *data, void *private __attribute__((unused)))
{
	return dm_report_field_uint32(rh, field, data);
}

static int _int8_disp(struct dm_report *rh, struct dm_pool *mem __attribute__((unused)),
		      struct dm_report_field *field,
		      const void *data, void *private __attribute__((unused)))
{
	const int32_t val = *(const int8_t *) data;

	return dm_report_field_int32(rh, field, &val);
}

static int _int32_disp(struct dm_report *rh, struct dm_pool *mem __attribute__((unused)),
		       struct dm_report_field *field,
		       const void *data, void *private __attribute__((unused)))
{
	return dm_report_field_int32(rh, field, data);
}

static int _lvwhenfull_disp(struct dm_report *rh, struct dm_pool *mem,
			    struct dm_report_field *field,
			    const void *data, void *private __attribute__((unused)))
{
	const struct logical_volume *lv = (const struct logical_volume *) data;

	if (lv_is_thin_pool(lv)) {
		if (lv->status & LV_ERROR_WHEN_FULL)
			return _field_set_value(field, GET_FIRST_RESERVED_NAME(lv_when_full_error),
						GET_FIELD_RESERVED_VALUE(lv_when_full_error));

		return _field_set_value(field, GET_FIRST_RESERVED_NAME(lv_when_full_queue),
					GET_FIELD_RESERVED_VALUE(lv_when_full_queue));
	}

	return _field_set_value(field, GET_FIRST_RESERVED_NAME(lv_when_full_undef),
				GET_FIELD_RESERVED_VALUE(lv_when_full_undef));
}
static int _lvreadahead_disp(struct dm_report *rh, struct dm_pool *mem,
			     struct dm_report_field *field,
			     const void *data, void *private __attribute__((unused)))
{
	const struct logical_volume *lv = (const struct logical_volume *) data;

	if (lv->read_ahead == DM_READ_AHEAD_AUTO)
		return _field_set_value(field, GET_FIRST_RESERVED_NAME(lv_read_ahead_auto),
					GET_FIELD_RESERVED_VALUE(lv_read_ahead_auto));

	return _size32_disp(rh, mem, field, &lv->read_ahead, private);
}

static int _lvkreadahead_disp(struct dm_report *rh, struct dm_pool *mem,
			      struct dm_report_field *field,
			      const void *data,
			      void *private)
{
	const struct lv_with_info_and_seg_status *lvdm = (const struct lv_with_info_and_seg_status *) data;

	if (!lvdm->info.exists)
		return dm_report_field_int32(rh, field, &GET_TYPE_RESERVED_VALUE(num_undef_32));

	return _size32_disp(rh, mem, field, &lvdm->info.read_ahead, private);
}

static int _vdo_operating_mode_disp(struct dm_report *rh, struct dm_pool *mem,
				    struct dm_report_field *field,
				    const void *data, void *private)
{
	const struct lv_with_info_and_seg_status *lvdm = (const struct lv_with_info_and_seg_status *) data;

	if ((lv_is_vdo_pool(lvdm->lv) || lv_is_vdo(lvdm->lv)) &&
	    (lvdm->seg_status.type == SEG_STATUS_VDO_POOL))
		return _field_string(rh, field, get_vdo_operating_mode_name(lvdm->seg_status.vdo_pool.vdo->operating_mode));

	return _field_set_value(field, "", &GET_TYPE_RESERVED_VALUE(num_undef_64));
}

static int _vdo_compression_state_disp(struct dm_report *rh, struct dm_pool *mem,
				       struct dm_report_field *field,
				       const void *data, void *private)
{
	const struct lv_with_info_and_seg_status *lvdm = (const struct lv_with_info_and_seg_status *) data;

	if ((lv_is_vdo_pool(lvdm->lv) || lv_is_vdo(lvdm->lv)) &&
	    (lvdm->seg_status.type == SEG_STATUS_VDO_POOL))
		return _field_string(rh, field, get_vdo_compression_state_name(lvdm->seg_status.vdo_pool.vdo->compression_state));

	return _field_set_value(field, "", &GET_TYPE_RESERVED_VALUE(num_undef_64));
}

static int _vdo_index_state_disp(struct dm_report *rh, struct dm_pool *mem,
				 struct dm_report_field *field,
				 const void *data, void *private)
{
	const struct lv_with_info_and_seg_status *lvdm = (const struct lv_with_info_and_seg_status *) data;

	if ((lv_is_vdo_pool(lvdm->lv) || lv_is_vdo(lvdm->lv)) &&
	    (lvdm->seg_status.type == SEG_STATUS_VDO_POOL))
		return _field_string(rh, field, get_vdo_index_state_name(lvdm->seg_status.vdo_pool.vdo->index_state));

	return _field_set_value(field, "", &GET_TYPE_RESERVED_VALUE(num_undef_64));
}

static int _vdo_used_size_disp(struct dm_report *rh, struct dm_pool *mem,
			       struct dm_report_field *field,
			       const void *data, void *private)
{
	uint64_t size;
	const struct lv_with_info_and_seg_status *lvdm = (const struct lv_with_info_and_seg_status *) data;

	if ((lv_is_vdo_pool(lvdm->lv) || lv_is_vdo(lvdm->lv)) &&
	    (lvdm->seg_status.type == SEG_STATUS_VDO_POOL)) {
		size = lvdm->seg_status.vdo_pool.vdo->used_blocks * DM_VDO_BLOCK_SIZE;
		return _size64_disp(rh, mem, field, &size, private);
	}

	return _field_set_value(field, "", &GET_TYPE_RESERVED_VALUE(num_undef_64));
}

static int _vdo_saving_percent_disp(struct dm_report *rh, struct dm_pool *mem,
				    struct dm_report_field *field,
				    const void *data, void *private)
{
	const struct lv_with_info_and_seg_status *lvdm = (const struct lv_with_info_and_seg_status *) data;

	if ((lv_is_vdo_pool(lvdm->lv) || lv_is_vdo(lvdm->lv)) &&
	    (lvdm->seg_status.type == SEG_STATUS_VDO_POOL))
		return dm_report_field_percent(rh, field, &lvdm->seg_status.vdo_pool.saving);

	return _field_set_value(field, "", &GET_TYPE_RESERVED_VALUE(num_undef_64));
}

static int _vgsize_disp(struct dm_report *rh, struct dm_pool *mem,
			struct dm_report_field *field,
			const void *data, void *private)
{
	const struct volume_group *vg = (const struct volume_group *) data;
	uint64_t size = vg_size(vg);

	return _size64_disp(rh, mem, field, &size, private);
}

static int _segmonitor_disp(struct dm_report *rh, struct dm_pool *mem,
			    struct dm_report_field *field,
			    const void *data, void *private)
{
	const struct lv_segment *seg = (const struct lv_segment *) data;
	char *str;

	if (!(str = lvseg_monitor_dup(mem, seg)))
		return_0;

	if (*str)
		return _field_set_value(field, str, NULL);

	return _field_set_value(field, GET_FIRST_RESERVED_NAME(seg_monitor_undef),
				GET_FIELD_RESERVED_VALUE(seg_monitor_undef));
}

static int _segstart_disp(struct dm_report *rh, struct dm_pool *mem,
			  struct dm_report_field *field,
			  const void *data, void *private)
{
	const struct lv_segment *seg = (const struct lv_segment *) data;
	uint64_t start = lvseg_start(seg);

	return _size64_disp(rh, mem, field, &start, private);
}

static int _segstartpe_disp(struct dm_report *rh,
			    struct dm_pool *mem __attribute__((unused)),
			    struct dm_report_field *field,
			    const void *data,
			    void *private __attribute__((unused)))
{
	const struct lv_segment *seg = (const struct lv_segment *) data;

	return dm_report_field_uint32(rh, field, &seg->le);
}

/* Helper: get used stripes = total stripes minus any to remove after reshape */
static int _get_seg_used_stripes(const struct lv_segment *seg)
{
	uint32_t s;
	uint32_t stripes = seg->area_count;

	for (s = seg->area_count - 1; stripes && s; s--) {
		if (seg_type(seg, s) == AREA_LV &&
		    seg_lv(seg, s)->status & LV_REMOVE_AFTER_RESHAPE)
			stripes--;
		else
			break;
	}

	return stripes;
}

static int _seg_stripes_disp(struct dm_report *rh, struct dm_pool *mem,
			     struct dm_report_field *field,
			     const void *data, void *private)
{
	const struct lv_segment *seg = (const struct lv_segment *) data;

	return dm_report_field_uint32(rh, field, &seg->area_count);
}

/* Report the number of data stripes, which is less than total stripes (e.g. 2 less for raid6) */
static int _seg_data_stripes_disp(struct dm_report *rh, struct dm_pool *mem,
				  struct dm_report_field *field,
				  const void *data, void *private)
{
	const struct lv_segment *seg = (const struct lv_segment *) data;
	uint32_t stripes = _get_seg_used_stripes(seg) - seg->segtype->parity_devs;

	/* FIXME: in case of odd numbers of raid10 stripes */
	if (seg_is_raid10(seg))
		stripes /= seg->data_copies;

	return dm_report_field_uint32(rh, field, &stripes);
}

/* Helper: return the top-level, reshapable raid LV in case @seg belongs to a raid rimage LV */
static struct logical_volume *_lv_for_raid_image_seg(const struct lv_segment *seg, struct dm_pool *mem)
{
	char *lv_name;

	if (seg_is_reshapable_raid(seg))
		return seg->lv;

	if (seg->lv &&
	    lv_is_raid_image(seg->lv) && !seg->le &&
	    (lv_name = dm_pool_strdup(mem, seg->lv->name))) {
		char *p = strchr(lv_name, '_');

		if (p) {
			/* Handle duplicated sub LVs */
			if (strstr(p, "_dup_"))
				p = strchr(p + 5, '_');

			if (p) {
				struct lv_list *lvl;

				*p = '\0';

				if ((lvl = find_lv_in_vg(seg->lv->vg, lv_name)) &&
				    seg_is_reshapable_raid(first_seg(lvl->lv)))
					return lvl->lv;
			}
		}
	}

	return NULL;
}

/* Helper: return the top-level raid LV in case it is reshapable for @seg, or @seg if it is */
static const struct lv_segment *_get_reshapable_seg(const struct lv_segment *seg, struct dm_pool *mem)
{
	return _lv_for_raid_image_seg(seg, mem) ? seg : NULL;
}

/* Display segment reshape length in current units */
static int _seg_reshape_len_disp(struct dm_report *rh, struct dm_pool *mem,
				 struct dm_report_field *field,
				 const void *data, void *private)
{
	const struct lv_segment *seg = _get_reshapable_seg((const struct lv_segment *) data, mem);

	if (seg) {
		uint32_t reshape_len = seg->reshape_len * seg->area_count * seg->lv->vg->extent_size;

		return _size32_disp(rh, mem, field, &reshape_len, private);
	}

	return _field_set_value(field, "", &GET_TYPE_RESERVED_VALUE(num_undef_64));
}

/* Display segment reshape length in logical extents */
static int _seg_reshape_len_le_disp(struct dm_report *rh, struct dm_pool *mem,
				    struct dm_report_field *field,
				    const void *data, void *private)
{
	const struct lv_segment *seg = _get_reshapable_seg((const struct lv_segment *) data, mem);

	if (seg) {
		uint32_t reshape_len = seg->reshape_len * seg->area_count;

		return dm_report_field_uint32(rh, field, &reshape_len);
	}

	return _field_set_value(field, "", &GET_TYPE_RESERVED_VALUE(num_undef_64));
}

/* Display segment data copies (e.g. 3 for raid6) */
static int _seg_data_copies_disp(struct dm_report *rh, struct dm_pool *mem,
				 struct dm_report_field *field,
				 const void *data, void *private)
{
	const struct lv_segment *seg = (const struct lv_segment *) data;

	if (seg->data_copies)
		return dm_report_field_uint32(rh, field, &seg->data_copies);

	return _field_set_value(field, "", &GET_TYPE_RESERVED_VALUE(num_undef_64));
}

/* Helper: display segment data offset/new data offset in sectors */
static int _segdata_offset(struct dm_report *rh, struct dm_pool *mem,
			   struct dm_report_field *field,
			   const void *data, void *private, int new_data_offset)
{
	const struct lv_segment *seg = (const struct lv_segment *) data;
	struct logical_volume *lv;

	if ((lv = _lv_for_raid_image_seg(seg, mem))) {
		uint64_t data_offset = 0;

		if (lv_raid_data_offset(lv, &data_offset)) {
			if (new_data_offset && lv_is_raid_image(lv) && !lv_raid_image_in_sync(lv))
				data_offset = data_offset ? 0 : (uint64_t) seg->reshape_len * lv->vg->extent_size;

			return dm_report_field_uint64(rh, field, &data_offset);
		}
	}

	return _field_set_value(field, "", &GET_TYPE_RESERVED_VALUE(num_undef_64));
}

static int _seg_data_offset_disp(struct dm_report *rh, struct dm_pool *mem,
				 struct dm_report_field *field,
				 const void *data, void *private)
{
	return _segdata_offset(rh, mem, field, data, private, 0);
}

static int _seg_new_data_offset_disp(struct dm_report *rh, struct dm_pool *mem,
				     struct dm_report_field *field,
				     const void *data, void *private)
{
	return _segdata_offset(rh, mem, field, data, private, 1);
}

static int _seg_parity_chunks_disp(struct dm_report *rh, struct dm_pool *mem,
				   struct dm_report_field *field,
				   const void *data, void *private)
{
	const struct lv_segment *seg = (const struct lv_segment *) data;
	uint32_t parity_chunks = seg->segtype->parity_devs ? : seg->data_copies - 1;

	if (parity_chunks) {
		uint32_t s, resilient_sub_lvs = 0;

		for (s = 0; s < seg->area_count; s++) {
			if (seg_type(seg, s) == AREA_LV) {
				struct lv_segment *seg1 = first_seg(seg_lv(seg, s));

				if (seg1->segtype->parity_devs ||
				    seg1->data_copies > 1)
					resilient_sub_lvs++;
			}
		}

		if (resilient_sub_lvs && resilient_sub_lvs == seg->area_count)
			parity_chunks++;

		return dm_report_field_uint32(rh, field, &parity_chunks);
	}

	return _field_set_value(field, "", &GET_TYPE_RESERVED_VALUE(num_undef_64));
}

static int _segsize_disp(struct dm_report *rh, struct dm_pool *mem,
			 struct dm_report_field *field,
			 const void *data, void *private)
{
	const struct lv_segment *seg = (const struct lv_segment *) data;
	uint64_t size = lvseg_size(seg);

	return _size64_disp(rh, mem, field, &size, private);
}

static int _segsizepe_disp(struct dm_report *rh,
			   struct dm_pool *mem __attribute__((unused)),
			   struct dm_report_field *field,
			   const void *data,
			   void *private __attribute__((unused)))
{
	const struct lv_segment *seg = (const struct lv_segment *) data;

	return dm_report_field_uint32(rh, field, &seg->len);
}

static int _chunksize_disp(struct dm_report *rh, struct dm_pool *mem,
			   struct dm_report_field *field,
			   const void *data, void *private)
{
	const struct lv_segment *seg = (const struct lv_segment *) data;
	uint64_t size = lvseg_chunksize(seg);

	return _size64_disp(rh, mem, field, &size, private);
}

static int _transactionid_disp(struct dm_report *rh, struct dm_pool *mem,
			       struct dm_report_field *field,
			       const void *data, void *private)
{
	const struct lv_segment *seg = (const struct lv_segment *) data;

	if (seg_is_thin_pool(seg) || seg_is_thin_volume(seg))
		return dm_report_field_uint64(rh, field, &seg->transaction_id);

	return _field_set_value(field, "", &GET_TYPE_RESERVED_VALUE(num_undef_64));
}

static int _thinid_disp(struct dm_report *rh, struct dm_pool *mem,
			struct dm_report_field *field,
			const void *data, void *private)
{
	const struct lv_segment *seg = (const struct lv_segment *) data;

	if (seg_is_thin_volume(seg))
		return dm_report_field_uint32(rh, field, &seg->device_id);

	return _field_set_value(field, "", &GET_TYPE_RESERVED_VALUE(num_undef_64));
}

static int _discards_disp(struct dm_report *rh, struct dm_pool *mem,
			  struct dm_report_field *field,
			  const void *data, void *private)
{
	const struct lv_segment *seg = (const struct lv_segment *) data;
	const char *discards_str;

	if (seg_is_thin_volume(seg))
		seg = first_seg(seg->pool_lv);

	if (seg_is_thin_pool(seg)) {
		discards_str = get_pool_discards_name(seg->discards);
		return _field_string(rh, field, discards_str);
	}

	return _field_set_value(field, "", NULL);
}

static int _kdiscards_disp(struct dm_report *rh, struct dm_pool *mem,
			   struct dm_report_field *field,
			   const void *data, void *private)
{
	const struct lv_with_info_and_seg_status *lvdm = (const struct lv_with_info_and_seg_status *) data;
	const char *discards_str;

	if (!(discards_str = lvseg_kernel_discards_dup_with_info_and_seg_status(mem, lvdm)))
		return_0;

	if (*discards_str)
		return _field_set_value(field, discards_str, NULL);

	return _field_set_value(field, GET_FIRST_RESERVED_NAME(seg_kernel_discards_undef),
				GET_FIELD_RESERVED_VALUE(seg_kernel_discards_undef));
}

static int _cachemode_disp(struct dm_report *rh, struct dm_pool *mem,
			   struct dm_report_field *field,
			   const void *data, void *private)
{
	const struct lv_segment *seg = (const struct lv_segment *) data;

	return _field_string(rh, field, display_cache_mode(seg));
}

static int _cachemetadataformat_disp(struct dm_report *rh, struct dm_pool *mem,
				     struct dm_report_field *field,
				     const void *data, void *private)
{
	const struct lv_segment *seg = (const struct lv_segment *) data;
	const struct lv_segment *setting_seg = NULL;
	const uint64_t *fmt;

	if (seg_is_cache(seg) && lv_is_cache_vol(seg->pool_lv))
		setting_seg = seg;
	else if (seg_is_cache_pool(seg))
		setting_seg = seg;
	else if (seg_is_cache(seg))
		setting_seg = first_seg(seg->pool_lv);
	else
		goto undef;

	switch (setting_seg->cache_metadata_format) {
	case CACHE_METADATA_FORMAT_1:
	case CACHE_METADATA_FORMAT_2:
		fmt = (setting_seg->cache_metadata_format == CACHE_METADATA_FORMAT_2) ? &_two64 : &_one64;
		return dm_report_field_uint64(rh, field, fmt);
	default: /* unselected/undefined for all other cases */;
	}

undef:
	return _field_set_value(field, "", &GET_TYPE_RESERVED_VALUE(num_undef_64));
}

static int _originsize_disp(struct dm_report *rh, struct dm_pool *mem,
			    struct dm_report_field *field,
			    const void *data, void *private)
{
	const struct logical_volume *lv = (const struct logical_volume *) data;
	uint64_t size = lv_origin_size(lv);

	if (size)
		return _size64_disp(rh, mem, field, &size, private);

	return _field_set_value(field, "", &_zero64);
}

static int _pvused_disp(struct dm_report *rh, struct dm_pool *mem,
			struct dm_report_field *field,
			const void *data, void *private)
{
	const struct physical_volume *pv =
	    (const struct physical_volume *) data;
	uint64_t used = pv_used(pv);

	return _size64_disp(rh, mem, field, &used, private);
}

static int _pvfree_disp(struct dm_report *rh, struct dm_pool *mem,
			struct dm_report_field *field,
			const void *data, void *private)
{
	const struct physical_volume *pv =
	    (const struct physical_volume *) data;
	uint64_t freespace;

	if (is_orphan(pv) && is_used_pv(pv))
		freespace = 0;
	else
		freespace = pv_free(pv);

	return _size64_disp(rh, mem, field, &freespace, private);
}

static int _pvsize_disp(struct dm_report *rh, struct dm_pool *mem,
			struct dm_report_field *field,
			const void *data, void *private)
{
	const struct physical_volume *pv =
	    (const struct physical_volume *) data;
	uint64_t size = pv_size_field(pv);

	return _size64_disp(rh, mem, field, &size, private);
}

static int _devsize_disp(struct dm_report *rh, struct dm_pool *mem,
			 struct dm_report_field *field,
			 const void *data, void *private)
{
	struct device *dev = *(struct device * const *) data;
	uint64_t size;

	if (!dev || !dev->dev || !dev_get_size(dev, &size))
		size = _zero64;

	return _size64_disp(rh, mem, field, &size, private);
}

static int _vgfree_disp(struct dm_report *rh, struct dm_pool *mem,
			struct dm_report_field *field,
			const void *data, void *private)
{
	const struct volume_group *vg = (const struct volume_group *) data;
	uint64_t freespace = vg_free(vg);

	return _size64_disp(rh, mem, field, &freespace, private);
}

static int _vgsystemid_disp(struct dm_report *rh, struct dm_pool *mem,
			    struct dm_report_field *field,
			    const void *data, void *private)
{
	const struct volume_group *vg = (const struct volume_group *) data;
	const char *repstr = (vg->system_id && *vg->system_id) ? vg->system_id : "";

	return _field_string(rh, field, repstr);
}

static int _vglocktype_disp(struct dm_report *rh, struct dm_pool *mem,
			    struct dm_report_field *field,
			    const void *data, void *private)
{
	const struct volume_group *vg = (const struct volume_group *) data;
	const char *locktype;

	if (!vg->lock_type || !strcmp(vg->lock_type, "none"))
		locktype = "";
	else
		locktype = vg->lock_type;

	return _field_string(rh, field, locktype);
}

static int _vglockargs_disp(struct dm_report *rh, struct dm_pool *mem,
			    struct dm_report_field *field,
			    const void *data, void *private)
{
	const struct volume_group *vg = (const struct volume_group *) data;

	return _field_string(rh, field, vg->lock_args ? : "");
}

static int _lvuuid_disp(struct dm_report *rh __attribute__((unused)), struct dm_pool *mem,
			struct dm_report_field *field,
			const void *data, void *private __attribute__((unused)))
{
	const struct logical_volume *lv = (const struct logical_volume *) data;
	const union lvid *lvid;
	char *repstr;

	if (lv_is_historical(lv))
		lvid = &lv->this_glv->historical->lvid;
	else
		lvid = &lv->lvid;

	if (!(repstr = id_format_and_copy(mem, &lvid->id[1])))
		return_0;

	return _field_set_value(field, repstr, NULL);
}

static int _pvuuid_disp(struct dm_report *rh __attribute__((unused)), struct dm_pool *mem,
			struct dm_report_field *field,
			const void *data, void *private __attribute__((unused)))
{
	const struct label *label = (const struct label *) data;

	if (!label->dev)
		return _field_set_value(field, "", NULL);

	return _uuid_disp(rh, mem, field, label->dev->pvid, private);
}

static int _pvmdas_disp(struct dm_report *rh, struct dm_pool *mem,
			struct dm_report_field *field,
			const void *data, void *private)
{
	const struct physical_volume *pv =
	    (const struct physical_volume *) data;
	uint32_t count = pv_mda_count(pv);

	return _uint32_disp(rh, mem, field, &count, private);
}

static int _pvmdasused_disp(struct dm_report *rh, struct dm_pool *mem,
			    struct dm_report_field *field,
			    const void *data, void *private)
{
	const struct physical_volume *pv =
	    (const struct physical_volume *) data;
	uint32_t count = pv_mda_used_count(pv);

	return _uint32_disp(rh, mem, field, &count, private);
}

static int _vgmdas_disp(struct dm_report *rh, struct dm_pool *mem,
			struct dm_report_field *field,
			const void *data, void *private)
{
	const struct volume_group *vg = (const struct volume_group *) data;
	uint32_t count = vg_mda_count(vg);

	return _uint32_disp(rh, mem, field, &count, private);
}

2010-06-29 00:33:44 +04:00
static int _vgmdasused_disp(struct dm_report *rh, struct dm_pool *mem,
			    struct dm_report_field *field,
			    const void *data, void *private)
{
	const struct volume_group *vg = (const struct volume_group *) data;
	uint32_t count = vg_mda_used_count(vg);

	return _uint32_disp(rh, mem, field, &count, private);
}

static int _vgmdacopies_disp(struct dm_report *rh, struct dm_pool *mem,
			     struct dm_report_field *field,
			     const void *data, void *private)
{
	const struct volume_group *vg = (const struct volume_group *) data;
	uint32_t count = vg_mda_copies(vg);

	if (count == VGMETADATACOPIES_UNMANAGED)
		return _field_set_value(field, GET_FIRST_RESERVED_NAME(vg_mda_copies_unmanaged),
					GET_FIELD_RESERVED_VALUE(vg_mda_copies_unmanaged));

	return _uint32_disp(rh, mem, field, &count, private);
}
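/* _vgmdacopies_disp shows the reserved-value pattern: one value of the
 * numeric range (VGMETADATACOPIES_UNMANAGED) is a sentinel and is rendered
 * as a name rather than a number. A self-contained sketch of the same
 * pattern with a made-up sentinel (this is not the LVM2 constant): */

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical sentinel in the spirit of VGMETADATACOPIES_UNMANAGED:
 * the largest uint32_t is reserved to mean "not managed". */
#define MDA_COPIES_UNMANAGED UINT32_MAX

/* Render a copies value: the sentinel becomes its reserved name,
 * everything else is printed numerically into buf. */
static const char *mda_copies_repr(uint32_t count, char *buf, size_t len)
{
	if (count == MDA_COPIES_UNMANAGED)
		return "unmanaged";

	snprintf(buf, len, "%u", count);
	return buf;
}
```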

static int _vgprofile_disp(struct dm_report *rh, struct dm_pool *mem,
			   struct dm_report_field *field,
			   const void *data, void *private)
{
	const struct volume_group *vg = (const struct volume_group *) data;

	if (vg->profile)
		return _field_string(rh, field, vg->profile->name);

	return _field_set_value(field, "", NULL);
}

static int _vgmissingpvcount_disp(struct dm_report *rh, struct dm_pool *mem,
				  struct dm_report_field *field,
				  const void *data, void *private)
{
	const struct volume_group *vg = (const struct volume_group *) data;
	uint32_t count = vg_missing_pv_count(vg);

	return _uint32_disp(rh, mem, field, &count, private);
}

static int _pvmdafree_disp(struct dm_report *rh, struct dm_pool *mem,
			   struct dm_report_field *field,
			   const void *data, void *private)
{
	const struct label *label = (const struct label *) data;
	uint64_t freespace = lvmcache_info_mda_free(label->info);

	return _size64_disp(rh, mem, field, &freespace, private);
}

static int _pvmdasize_disp(struct dm_report *rh, struct dm_pool *mem,
			   struct dm_report_field *field,
			   const void *data, void *private)
{
	const struct label *label = (const struct label *) data;
	uint64_t min_mda_size = lvmcache_smallest_mda_size(label->info);

	return _size64_disp(rh, mem, field, &min_mda_size, private);
}

static int _pvextvsn_disp(struct dm_report *rh, struct dm_pool *mem,
			  struct dm_report_field *field,
			  const void *data, void *private)
{
	const struct label *label = (const struct label *) data;
	struct lvmcache_info *info = label->info;
	uint32_t ext_version;

	if (info) {
		ext_version = lvmcache_ext_version(info);
		return _uint32_disp(rh, mem, field, &ext_version, private);
	}

	return _field_set_value(field, "", &GET_TYPE_RESERVED_VALUE(num_undef_64));
}

static int _vgmdasize_disp(struct dm_report *rh, struct dm_pool *mem,
			   struct dm_report_field *field,
			   const void *data, void *private)
{
	const struct volume_group *vg = (const struct volume_group *) data;
	uint64_t min_mda_size = vg_mda_size(vg);

	return _size64_disp(rh, mem, field, &min_mda_size, private);
}

static int _vgmdafree_disp(struct dm_report *rh, struct dm_pool *mem,
			   struct dm_report_field *field,
			   const void *data, void *private)
{
	const struct volume_group *vg = (const struct volume_group *) data;
	uint64_t freespace = vg_mda_free(vg);

	return _size64_disp(rh, mem, field, &freespace, private);
}

static int _lvcount_disp(struct dm_report *rh, struct dm_pool *mem,
			 struct dm_report_field *field,
			 const void *data, void *private)
{
	const struct volume_group *vg = (const struct volume_group *) data;
	uint32_t count = vg_visible_lvs(vg);

	return _uint32_disp(rh, mem, field, &count, private);
}

static int _lvsegcount_disp(struct dm_report *rh, struct dm_pool *mem,
			    struct dm_report_field *field,
			    const void *data, void *private)
{
	const struct logical_volume *lv = (const struct logical_volume *) data;
	uint32_t count = dm_list_size(&lv->segments);

	return _uint32_disp(rh, mem, field, &count, private);
}

static int _snapcount_disp(struct dm_report *rh, struct dm_pool *mem,
			   struct dm_report_field *field,
			   const void *data, void *private)
{
	const struct volume_group *vg = (const struct volume_group *) data;
	uint32_t count = snapshot_count(vg);

	return _uint32_disp(rh, mem, field, &count, private);
}

static int _snpercent_disp(struct dm_report *rh, struct dm_pool *mem __attribute__((unused)),
			   struct dm_report_field *field,
			   const void *data, void *private __attribute__((unused)))
{
	const struct lv_with_info_and_seg_status *lvdm = (const struct lv_with_info_and_seg_status *) data;
	dm_percent_t percent = lvseg_percent_with_info_and_seg_status(lvdm, PERCENT_GET_DATA);

	return dm_report_field_percent(rh, field, &percent);
}

static int _copypercent_disp(struct dm_report *rh,
			     struct dm_pool *mem __attribute__((unused)),
			     struct dm_report_field *field,
			     const void *data, void *private __attribute__((unused)))
{
	const struct lv_with_info_and_seg_status *lvdm = (const struct lv_with_info_and_seg_status *) data;
	const struct logical_volume *lv = lvdm->lv;
	dm_percent_t percent = DM_PERCENT_INVALID;

	/* TODO: just cache passes through lvseg_percent... */
	if (lv_is_cache(lv) || lv_is_used_cache_pool(lv) ||
	    (!lv_is_merging_origin(lv) && lv_is_raid(lv) && !seg_is_any_raid0(first_seg(lv))))
		percent = lvseg_percent_with_info_and_seg_status(lvdm, PERCENT_GET_DIRTY);
	else if (lv_is_raid(lv) && !seg_is_any_raid0(first_seg(lv)))
		/* old way for percentage when merging snapshot into raid origin */
		(void) lv_raid_percent(lv, &percent);
	else if (((lv_is_mirror(lv) &&
		   lv_mirror_percent(lv->vg->cmd, lv, 0, &percent, NULL))) &&
		 (percent != DM_PERCENT_INVALID))
		percent = copy_percent(lv);

	return dm_report_field_percent(rh, field, &percent);
}

static int _raidsyncaction_disp(struct dm_report *rh __attribute__((unused)),
				struct dm_pool *mem,
				struct dm_report_field *field,
				const void *data,
				void *private __attribute__((unused)))
{
	const struct logical_volume *lv = (const struct logical_volume *) data;
	char *sync_action;

	if (lv_is_raid(lv) && lv_raid_sync_action(lv, &sync_action))
		return _field_string(rh, field, sync_action);

	return _field_set_value(field, "", NULL);
}

static int _raidmismatchcount_disp(struct dm_report *rh __attribute__((unused)),
				   struct dm_pool *mem,
				   struct dm_report_field *field,
				   const void *data,
				   void *private __attribute__((unused)))
{
	const struct logical_volume *lv = (const struct logical_volume *) data;
	uint64_t mismatch_count;

	if (lv_is_raid(lv) && lv_raid_mismatch_count(lv, &mismatch_count))
		return dm_report_field_uint64(rh, field, &mismatch_count);

	return _field_set_value(field, "", &GET_TYPE_RESERVED_VALUE(num_undef_64));
}

static int _raidwritebehind_disp(struct dm_report *rh __attribute__((unused)),
				 struct dm_pool *mem,
				 struct dm_report_field *field,
				 const void *data,
				 void *private __attribute__((unused)))
{
	const struct logical_volume *lv = (const struct logical_volume *) data;

	if (lv_is_raid_type(lv) && first_seg(lv)->writebehind)
		return dm_report_field_uint32(rh, field, &first_seg(lv)->writebehind);

	return _field_set_value(field, "", &GET_TYPE_RESERVED_VALUE(num_undef_64));
}

static int _raidminrecoveryrate_disp(struct dm_report *rh __attribute__((unused)),
				     struct dm_pool *mem,
				     struct dm_report_field *field,
				     const void *data,
				     void *private __attribute__((unused)))
{
	const struct logical_volume *lv = (const struct logical_volume *) data;

	if (lv_is_raid_type(lv) && first_seg(lv)->min_recovery_rate)
		return dm_report_field_uint32(rh, field,
					      &first_seg(lv)->min_recovery_rate);

	return _field_set_value(field, "", &GET_TYPE_RESERVED_VALUE(num_undef_64));
}

static int _raidmaxrecoveryrate_disp(struct dm_report *rh __attribute__((unused)),
				     struct dm_pool *mem,
				     struct dm_report_field *field,
				     const void *data,
				     void *private __attribute__((unused)))
{
	const struct logical_volume *lv = (const struct logical_volume *) data;

	if (lv_is_raid_type(lv) && first_seg(lv)->max_recovery_rate)
		return dm_report_field_uint32(rh, field,
					      &first_seg(lv)->max_recovery_rate);

	return _field_set_value(field, "", &GET_TYPE_RESERVED_VALUE(num_undef_64));
}

static int _datapercent_disp(struct dm_report *rh, struct dm_pool *mem,
			     struct dm_report_field *field,
			     const void *data, void *private)
{
	const struct lv_with_info_and_seg_status *lvdm = (const struct lv_with_info_and_seg_status *) data;
	dm_percent_t percent = lvseg_percent_with_info_and_seg_status(lvdm, PERCENT_GET_DATA);

	return dm_report_field_percent(rh, field, &percent);
}

static int _metadatapercent_disp(struct dm_report *rh,
				 struct dm_pool *mem __attribute__((unused)),
				 struct dm_report_field *field,
				 const void *data, void *private)
{
	const struct lv_with_info_and_seg_status *lvdm = (const struct lv_with_info_and_seg_status *) data;
	dm_percent_t percent;

	switch (lvdm->seg_status.type) {
	case SEG_STATUS_CACHE:
	case SEG_STATUS_THIN_POOL:
		percent = lvseg_percent_with_info_and_seg_status(lvdm, PERCENT_GET_METADATA);
		break;
	default:
		percent = DM_PERCENT_INVALID;
	}

	return dm_report_field_percent(rh, field, &percent);
}

static int _lvmetadatasize_disp(struct dm_report *rh, struct dm_pool *mem,
				struct dm_report_field *field,
				const void *data, void *private)
{
	const struct logical_volume *lv = (const struct logical_volume *) data;
	uint64_t size;

	if (lv_is_cache(lv) && lv_is_cache_vol(first_seg(lv)->pool_lv)) {
		size = lv_metadata_size(lv);
		return _size64_disp(rh, mem, field, &size, private);
	}

	if (lv_is_thin_pool(lv) || lv_is_cache_pool(lv)) {
		size = lv_metadata_size(lv);
		return _size64_disp(rh, mem, field, &size, private);
	}

	return _field_set_value(field, "", &GET_TYPE_RESERVED_VALUE(num_undef_64));
}

static int _thincount_disp(struct dm_report *rh, struct dm_pool *mem,
			   struct dm_report_field *field,
			   const void *data, void *private)
{
	const struct lv_segment *seg = (const struct lv_segment *) data;
	uint32_t count;

	if (seg_is_thin_pool(seg)) {
		count = dm_list_size(&seg->lv->segs_using_this_lv);
		return _uint32_disp(rh, mem, field, &count, private);
	}

	return _field_set_value(field, "", &GET_TYPE_RESERVED_VALUE(num_undef_64));
}

static int _lvtime_disp(struct dm_report *rh, struct dm_pool *mem,
			struct dm_report_field *field,
			const void *data, void *private)
{
	const struct logical_volume *lv = (const struct logical_volume *) data;
	char *repstr;
	uint64_t *sortval;

	if (!(repstr = lv_creation_time_dup(mem, lv, 0)) ||
	    !(sortval = dm_pool_alloc(mem, sizeof(uint64_t)))) {
		log_error("Failed to allocate buffer for time.");
		return 0;
	}

	*sortval = lv_is_historical(lv) ? lv->this_glv->historical->timestamp : lv->timestamp;

	return _field_set_value(field, repstr, sortval);
}

static int _lvtimeremoved_disp(struct dm_report *rh, struct dm_pool *mem,
			       struct dm_report_field *field,
			       const void *data, void *private)
{
	const struct logical_volume *lv = (const struct logical_volume *) data;
	char *repstr;
	uint64_t *sortval;

	if (!(repstr = lv_removal_time_dup(mem, lv, 0)) ||
	    !(sortval = dm_pool_alloc(mem, sizeof(uint64_t)))) {
		log_error("Failed to allocate buffer for time.");
		return 0;
	}

	*sortval = lv_is_historical(lv) ? lv->this_glv->historical->timestamp_removed : 0;

	return _field_set_value(field, repstr, sortval);
}

static int _lvhost_disp(struct dm_report *rh, struct dm_pool *mem,
			struct dm_report_field *field,
			const void *data, void *private)
{
	const struct logical_volume *lv = (const struct logical_volume *) data;
	char *repstr;

	if (!(repstr = lv_host_dup(mem, lv))) {
		log_error("Failed to allocate buffer for host.");
		return 0;
	}

	return _field_set_value(field, repstr, NULL);
}

/* PV/VG/LV Attributes */

static int _pvallocatable_disp(struct dm_report *rh, struct dm_pool *mem,
			       struct dm_report_field *field,
			       const void *data, void *private)
{
	int allocatable = (((const struct physical_volume *) data)->status & ALLOCATABLE_PV) != 0;

	return _binary_disp(rh, mem, field, allocatable, GET_FIRST_RESERVED_NAME(pv_allocatable_y), private);
}

static int _pvexported_disp(struct dm_report *rh, struct dm_pool *mem,
			    struct dm_report_field *field,
			    const void *data, void *private)
{
	int exported = (((const struct physical_volume *) data)->status & EXPORTED_VG) != 0;

	return _binary_disp(rh, mem, field, exported, GET_FIRST_RESERVED_NAME(pv_exported_y), private);
}

static int _pvmissing_disp(struct dm_report *rh, struct dm_pool *mem,
			   struct dm_report_field *field,
			   const void *data, void *private)
{
	int missing = (((const struct physical_volume *) data)->status & MISSING_PV) != 0;

	return _binary_disp(rh, mem, field, missing, GET_FIRST_RESERVED_NAME(pv_missing_y), private);
}

static int _pvinuse_disp(struct dm_report *rh, struct dm_pool *mem,
			 struct dm_report_field *field,
			 const void *data, void *private)
{
	const struct physical_volume *pv = (const struct physical_volume *) data;
	int used = is_used_pv(pv);

	if (used < 0)
		return _binary_undef_disp(rh, mem, field, private);

	return _binary_disp(rh, mem, field, used, GET_FIRST_RESERVED_NAME(pv_in_use_y), private);
}

static int _pvduplicate_disp(struct dm_report *rh, struct dm_pool *mem,
			     struct dm_report_field *field,
			     const void *data, void *private)
{
	const struct physical_volume *pv = (const struct physical_volume *) data;
	int duplicate = lvmcache_dev_is_unused_duplicate(pv->dev);

	return _binary_disp(rh, mem, field, duplicate, GET_FIRST_RESERVED_NAME(pv_duplicate_y), private);
}
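/* Each PV attribute field above reduces one bit of a status word to 0/1
 * before handing it to a binary display helper. A tiny self-contained
 * sketch of that reduction; the flag values here are invented for the
 * example and are not LVM2's actual status bits: */

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative flag bits in the spirit of ALLOCATABLE_PV / MISSING_PV. */
#define F_ALLOCATABLE (1u << 0)
#define F_MISSING     (1u << 2)

/* Reduce a status word to 0/1 for one flag, as the *_disp helpers do. */
static int flag_set(uint32_t status, uint32_t flag)
{
	return (status & flag) != 0;
}
```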

static int _vgpermissions_disp(struct dm_report *rh, struct dm_pool *mem,
			       struct dm_report_field *field,
			       const void *data, void *private)
{
	const char *perms = ((const struct volume_group *) data)->status & LVM_WRITE ? GET_FIRST_RESERVED_NAME(vg_permissions_rw)
										     : GET_FIRST_RESERVED_NAME(vg_permissions_r);

	return _field_string(rh, field, perms);
}

static int _vgextendable_disp(struct dm_report *rh, struct dm_pool *mem,
			      struct dm_report_field *field,
			      const void *data, void *private)
{
	int extendable = (vg_is_resizeable((const struct volume_group *) data)) != 0;

	return _binary_disp(rh, mem, field, extendable, GET_FIRST_RESERVED_NAME(vg_extendable_y), private);
}

static int _vgexported_disp(struct dm_report *rh, struct dm_pool *mem,
			    struct dm_report_field *field,
			    const void *data, void *private)
{
	int exported = (vg_is_exported((const struct volume_group *) data)) != 0;

	return _binary_disp(rh, mem, field, exported, GET_FIRST_RESERVED_NAME(vg_exported_y), private);
}

static int _vgpartial_disp(struct dm_report *rh, struct dm_pool *mem,
			   struct dm_report_field *field,
			   const void *data, void *private)
{
	int partial = (vg_missing_pv_count((const struct volume_group *) data)) != 0;

	return _binary_disp(rh, mem, field, partial, GET_FIRST_RESERVED_NAME(vg_partial_y), private);
}

static int _vgallocationpolicy_disp(struct dm_report *rh, struct dm_pool *mem,
				    struct dm_report_field *field,
				    const void *data, void *private)
{
	const char *alloc_policy = get_alloc_string(((const struct volume_group *) data)->alloc) ? : _str_unknown;

	return _field_string(rh, field, alloc_policy);
}

static int _vgclustered_disp(struct dm_report *rh, struct dm_pool *mem,
			     struct dm_report_field *field,
			     const void *data, void *private)
{
	int clustered = (vg_is_clustered((const struct volume_group *) data)) != 0;

	return _binary_disp(rh, mem, field, clustered, GET_FIRST_RESERVED_NAME(vg_clustered_y), private);
}

static int _vgshared_disp(struct dm_report *rh, struct dm_pool *mem,
			  struct dm_report_field *field,
			  const void *data, void *private)
{
	int shared = (vg_is_shared((const struct volume_group *) data)) != 0;

	return _binary_disp(rh, mem, field, shared, GET_FIRST_RESERVED_NAME(vg_shared_y), private);
}
/*
 * Add lv_layout_and_type fn, lv_layout and lv_type reporting fields.
 *
 * The lv_layout and lv_type fields together help with LV identification.
 * We can do basic identification using the lv_attr field, which provides
 * a very condensed view. In contrast to that, the new lv_layout and
 * lv_type fields provide more detailed information on the exact layout
 * and type used for LVs.
 *
 * For top-level LVs which are pure types not combined with any other LV
 * types, the lv_layout value is equal to the lv_type value.
 *
 * For non-top-level LVs which may be combined with other types, the
 * lv_layout describes the underlying layout used, while the lv_type
 * describes the use/type/usage of the LV.
 *
 * These two new fields are both string lists, so selection (-S/--select)
 * criteria can be defined easily using the list operators:
 *   [] for strict matching
 *   {} for subset matching.
 *
 * For example, let's consider this:
 *
 *   $ lvs -a -o name,vg_name,lv_attr,layout,type
 *     LV                    VG Attr       Layout       Type
 *     [lvol1_pmspare]       vg ewi------- linear       metadata,pool,spare
 *     pool                  vg twi-a-tz-- pool,thin    pool,thin
 *     [pool_tdata]          vg rwi-aor--- level10,raid data,pool,thin
 *     [pool_tdata_rimage_0] vg iwi-aor--- linear       image,raid
 *     [pool_tdata_rimage_1] vg iwi-aor--- linear       image,raid
 *     [pool_tdata_rimage_2] vg iwi-aor--- linear       image,raid
 *     [pool_tdata_rimage_3] vg iwi-aor--- linear       image,raid
 *     [pool_tdata_rmeta_0]  vg ewi-aor--- linear       metadata,raid
 *     [pool_tdata_rmeta_1]  vg ewi-aor--- linear       metadata,raid
 *     [pool_tdata_rmeta_2]  vg ewi-aor--- linear       metadata,raid
 *     [pool_tdata_rmeta_3]  vg ewi-aor--- linear       metadata,raid
 *     [pool_tmeta]          vg ewi-aor--- level1,raid  metadata,pool,thin
 *     [pool_tmeta_rimage_0] vg iwi-aor--- linear       image,raid
 *     [pool_tmeta_rimage_1] vg iwi-aor--- linear       image,raid
 *     [pool_tmeta_rmeta_0]  vg ewi-aor--- linear       metadata,raid
 *     [pool_tmeta_rmeta_1]  vg ewi-aor--- linear       metadata,raid
 *     thin_snap1            vg Vwi---tz-k thin         snapshot,thin
 *     thin_snap2            vg Vwi---tz-k thin         snapshot,thin
 *     thin_vol1             vg Vwi-a-tz-- thin         thin
 *     thin_vol2             vg Vwi-a-tz-- thin         multiple,origin,thin
 *
 * This is a situation with a thin pool, thin volumes and thin snapshots.
 * We can see that the internal 'pool_tdata' volume that makes up the thin
 * pool actually has a level10 raid layout, and the internal 'pool_tmeta'
 * has a level1 raid layout. Also, we can see that 'thin_snap1' and
 * 'thin_snap2' are both thin snapshots, while 'thin_vol2' is a thin
 * origin (having multiple snapshots).
 *
 * Such a reporting scheme provides a much better base for selection
 * criteria, in addition to providing more detailed information.
 * For example:
 *
 *   $ lvs -a -o name,vg_name,lv_attr,layout,type -S 'type=metadata'
 *     LV                   VG Attr       Layout      Type
 *     [lvol1_pmspare]      vg ewi------- linear      metadata,pool,spare
 *     [pool_tdata_rmeta_0] vg ewi-aor--- linear      metadata,raid
 *     [pool_tdata_rmeta_1] vg ewi-aor--- linear      metadata,raid
 *     [pool_tdata_rmeta_2] vg ewi-aor--- linear      metadata,raid
 *     [pool_tdata_rmeta_3] vg ewi-aor--- linear      metadata,raid
 *     [pool_tmeta]         vg ewi-aor--- level1,raid metadata,pool,thin
 *     [pool_tmeta_rmeta_0] vg ewi-aor--- linear      metadata,raid
 *     [pool_tmeta_rmeta_1] vg ewi-aor--- linear      metadata,raid
 *   (selected all LVs which are related to metadata of any type)
 *
 *   $ lvs -a -o name,vg_name,lv_attr,layout,type -S 'type={metadata,thin}'
 *     LV           VG Attr       Layout      Type
 *     [pool_tmeta] vg ewi-aor--- level1,raid metadata,pool,thin
 *   (selected all LVs which hold metadata related to thin)
 *
 *   $ lvs -a -o name,vg_name,lv_attr,layout,type -S 'type={thin,snapshot}'
 *     LV         VG Attr       Layout Type
 *     thin_snap1 vg Vwi---tz-k thin   snapshot,thin
 *     thin_snap2 vg Vwi---tz-k thin   snapshot,thin
 *   (selected all LVs which are thin snapshots)
 *
 *   $ lvs -a -o name,vg_name,lv_attr,layout,type -S 'layout=raid'
 *     LV           VG Attr       Layout       Type
 *     [pool_tdata] vg rwi-aor--- level10,raid data,pool,thin
 *     [pool_tmeta] vg ewi-aor--- level1,raid  metadata,pool,thin
 *   (selected all LVs with raid layout, any raid layout)
 *
 *   $ lvs -a -o name,vg_name,lv_attr,layout,type -S 'layout={raid,level1}'
 *     LV           VG Attr       Layout      Type
 *     [pool_tmeta] vg ewi-aor--- level1,raid metadata,pool,thin
 *   (selected all LVs with raid level1 layout exactly)
 *
 * And so on...
 */
static int _lvlayout_disp(struct dm_report *rh, struct dm_pool *mem,
			  struct dm_report_field *field,
			  const void *data, void *private)
{
	const struct logical_volume *lv = (const struct logical_volume *) data;
	struct dm_list *lv_layout;
	struct dm_list *lv_role;
	if (!lv_layout_and_role(mem, lv, &lv_layout, &lv_role)) {
		log_error("Failed to display layout for LV %s/%s.", lv->vg->name, lv->name);
		return 0;
	}

	return _field_set_string_list(rh, field, lv_layout, private, 0, NULL);
}

static int _lvrole_disp(struct dm_report *rh, struct dm_pool *mem,
			struct dm_report_field *field,
			const void *data, void *private)
{
	const struct logical_volume *lv = (const struct logical_volume *) data;
	struct dm_list *lv_layout;
	struct dm_list *lv_role;

	if (!lv_layout_and_role(mem, lv, &lv_layout, &lv_role)) {
		log_error("Failed to display role for LV %s/%s.", lv->vg->name, lv->name);
		return 0;
	}

	return _field_set_string_list(rh, field, lv_role, private, 0, NULL);
}

static int _lvinitialimagesync_disp(struct dm_report *rh, struct dm_pool *mem,
				    struct dm_report_field *field,
				    const void *data, void *private)
{
	const struct logical_volume *lv = (const struct logical_volume *) data;
	int initial_image_sync;

	if (lv_is_raid(lv) || lv_is_mirrored(lv))
		initial_image_sync = !lv_is_not_synced(lv);
	else
		initial_image_sync = 0;

	return _binary_disp(rh, mem, field, initial_image_sync, GET_FIRST_RESERVED_NAME(lv_initial_image_sync_y), private);
}
static int _lvimagesynced_disp(struct dm_report *rh, struct dm_pool *mem,
			       struct dm_report_field *field,
			       const void *data, void *private)
{
	const struct logical_volume *lv = (const struct logical_volume *) data;
	int image_synced;

	if (lv_is_raid_image(lv))
		image_synced = !lv_is_visible(lv) && lv_raid_image_in_sync(lv);
	else if (lv_is_mirror_image(lv))
		image_synced = lv_mirror_image_in_sync(lv);
	else
		image_synced = 0;

	return _binary_disp(rh, mem, field, image_synced, GET_FIRST_RESERVED_NAME(lv_image_synced_y), private);
}
static int _lvmerging_disp(struct dm_report *rh, struct dm_pool *mem,
			   struct dm_report_field *field,
			   const void *data, void *private)
{
	const struct logical_volume *lv = (const struct logical_volume *) data;
	int merging;

	if (lv_is_origin(lv) || lv_is_external_origin(lv))
		merging = lv_is_merging_origin(lv);
	else if (lv_is_cow(lv))
		merging = lv_is_merging_cow(lv);
	else if (lv_is_thin_volume(lv))
		merging = lv_is_merging_thin_snapshot(lv);
	else
		merging = 0;

	return _binary_disp(rh, mem, field, merging, GET_FIRST_RESERVED_NAME(lv_merging_y), private);
}
static int _lvconverting_disp(struct dm_report *rh, struct dm_pool *mem,
			      struct dm_report_field *field,
			      const void *data, void *private)
{
	int converting = lv_is_converting((const struct logical_volume *) data);

	return _binary_disp(rh, mem, field, converting, "converting", private);
}
static int _lvpermissions_disp(struct dm_report *rh, struct dm_pool *mem,
			       struct dm_report_field *field,
			       const void *data, void *private)
{
	const struct lv_with_info_and_seg_status *lvdm = (const struct lv_with_info_and_seg_status *) data;
	const char *perms = "";

	if (!lv_is_pvmove(lvdm->lv)) {
		if (lvdm->lv->status & LVM_WRITE) {
			if (!lvdm->info.exists)
				perms = _str_unknown;
			else if (lvdm->info.read_only)
				perms = GET_FIRST_RESERVED_NAME(lv_permissions_r_override);
			else
				perms = GET_FIRST_RESERVED_NAME(lv_permissions_rw);
		} else if (lvdm->lv->status & LVM_READ)
			perms = GET_FIRST_RESERVED_NAME(lv_permissions_r);
		else
			perms = _str_unknown;
	}

	return _field_string(rh, field, perms);
}
static int _lvallocationpolicy_disp(struct dm_report *rh, struct dm_pool *mem,
				    struct dm_report_field *field,
				    const void *data, void *private)
{
	const char *alloc_policy = get_alloc_string(((const struct logical_volume *) data)->alloc) ? : _str_unknown;

	return _field_string(rh, field, alloc_policy);
}
static int _lvallocationlocked_disp(struct dm_report *rh, struct dm_pool *mem,
				    struct dm_report_field *field,
				    const void *data, void *private)
{
	int alloc_locked = (((const struct logical_volume *) data)->status & LOCKED) != 0;

	return _binary_disp(rh, mem, field, alloc_locked, GET_FIRST_RESERVED_NAME(lv_allocation_locked_y), private);
}
static int _lvfixedminor_disp(struct dm_report *rh, struct dm_pool *mem,
			      struct dm_report_field *field,
			      const void *data, void *private)
{
	int fixed_minor = (((const struct logical_volume *) data)->status & FIXED_MINOR) != 0;

	return _binary_disp(rh, mem, field, fixed_minor, GET_FIRST_RESERVED_NAME(lv_fixed_minor_y), private);
}
static int _lvactive_disp(struct dm_report *rh, struct dm_pool *mem,
			  struct dm_report_field *field,
			  const void *data, void *private)
{
	char *repstr;

	if (!(repstr = lv_active_dup(mem, (const struct logical_volume *) data))) {
		log_error("Failed to allocate buffer for active.");
		return 0;
	}

	return _field_set_value(field, repstr, NULL);
}
static int _lvactivelocally_disp(struct dm_report *rh, struct dm_pool *mem,
				 struct dm_report_field *field,
				 const void *data, void *private)
{
	const struct logical_volume *lv = (const struct logical_volume *) data;
	int active_locally;

	if (!activation())
		return _binary_undef_disp(rh, mem, field, private);

	active_locally = lv_is_active(lv);

	return _binary_disp(rh, mem, field, active_locally, GET_FIRST_RESERVED_NAME(lv_active_locally_y), private);
}
static int _lvactiveremotely_disp(struct dm_report *rh, struct dm_pool *mem,
				  struct dm_report_field *field,
				  const void *data, void *private)
{
	int active_remotely;

	if (!activation())
		return _binary_undef_disp(rh, mem, field, private);

	active_remotely = 0;

	return _binary_disp(rh, mem, field, active_remotely, GET_FIRST_RESERVED_NAME(lv_active_remotely_y), private);
}
static int _lvactiveexclusively_disp(struct dm_report *rh, struct dm_pool *mem,
				     struct dm_report_field *field,
				     const void *data, void *private)
{
	const struct logical_volume *lv = (const struct logical_volume *) data;
	int active_exclusively;

	if (!activation())
		return _binary_undef_disp(rh, mem, field, private);

	active_exclusively = lv_is_active(lv);

	return _binary_disp(rh, mem, field, active_exclusively, GET_FIRST_RESERVED_NAME(lv_active_exclusively_y), private);
}
static int _lvmergefailed_disp(struct dm_report *rh, struct dm_pool *mem,
			       struct dm_report_field *field,
			       const void *data, void *private)
{
	const struct lv_with_info_and_seg_status *lvdm = (const struct lv_with_info_and_seg_status *) data;

	if (lvdm->seg_status.type != SEG_STATUS_SNAPSHOT)
		return _binary_undef_disp(rh, mem, field, private);

	return _binary_disp(rh, mem, field, lvdm->seg_status.snapshot->merge_failed,
			    GET_FIRST_RESERVED_NAME(lv_merge_failed_y), private);
}
static int _lvsnapshotinvalid_disp(struct dm_report *rh, struct dm_pool *mem,
				   struct dm_report_field *field,
				   const void *data, void *private)
{
	const struct lv_with_info_and_seg_status *lvdm = (const struct lv_with_info_and_seg_status *) data;

	if (lvdm->seg_status.type != SEG_STATUS_SNAPSHOT)
		return _binary_undef_disp(rh, mem, field, private);

	return _binary_disp(rh, mem, field, lvdm->seg_status.snapshot->invalid,
			    GET_FIRST_RESERVED_NAME(lv_snapshot_invalid_y), private);
}
static int _lvsuspended_disp(struct dm_report *rh, struct dm_pool *mem,
			     struct dm_report_field *field,
			     const void *data, void *private)
{
	const struct lv_with_info_and_seg_status *lvdm = (const struct lv_with_info_and_seg_status *) data;

	if (lvdm->info.exists)
		return _binary_disp(rh, mem, field, lvdm->info.suspended, GET_FIRST_RESERVED_NAME(lv_suspended_y), private);

	return _binary_undef_disp(rh, mem, field, private);
}
static int _lvlivetable_disp(struct dm_report *rh, struct dm_pool *mem,
			     struct dm_report_field *field,
			     const void *data, void *private)
{
	const struct lv_with_info_and_seg_status *lvdm = (const struct lv_with_info_and_seg_status *) data;

	if (lvdm->info.exists)
		return _binary_disp(rh, mem, field, lvdm->info.live_table, GET_FIRST_RESERVED_NAME(lv_live_table_y), private);

	return _binary_undef_disp(rh, mem, field, private);
}
static int _lvinactivetable_disp(struct dm_report *rh, struct dm_pool *mem,
				 struct dm_report_field *field,
				 const void *data, void *private)
{
	const struct lv_with_info_and_seg_status *lvdm = (const struct lv_with_info_and_seg_status *) data;

	if (lvdm->info.exists)
		return _binary_disp(rh, mem, field, lvdm->info.inactive_table, GET_FIRST_RESERVED_NAME(lv_inactive_table_y), private);

	return _binary_undef_disp(rh, mem, field, private);
}
static int _lvdeviceopen_disp(struct dm_report *rh, struct dm_pool *mem,
			      struct dm_report_field *field,
			      const void *data, void *private)
{
	const struct lv_with_info_and_seg_status *lvdm = (const struct lv_with_info_and_seg_status *) data;

	if (lvdm->info.exists)
		return _binary_disp(rh, mem, field, lvdm->info.open_count, GET_FIRST_RESERVED_NAME(lv_device_open_y), private);

	return _binary_undef_disp(rh, mem, field, private);
}
static int _thinzero_disp(struct dm_report *rh, struct dm_pool *mem,
			  struct dm_report_field *field,
			  const void *data, void *private)
{
	const struct lv_segment *seg = (const struct lv_segment *) data;

	if (seg_is_thin_volume(seg))
		seg = first_seg(seg->pool_lv);

	if (seg_is_thin_pool(seg))
		return _binary_disp(rh, mem, field, (seg->zero_new_blocks == THIN_ZERO_YES), GET_FIRST_RESERVED_NAME(zero_y), private);

	return _binary_undef_disp(rh, mem, field, private);
}
static int _lvhealthstatus_disp(struct dm_report *rh, struct dm_pool *mem,
				struct dm_report_field *field,
				const void *data, void *private)
{
	const struct lv_with_info_and_seg_status *lvdm = (const struct lv_with_info_and_seg_status *) data;
	const struct logical_volume *lv = lvdm->lv;
	const char *health = "";
	uint64_t n;

	if (lv_is_partial(lv))
		health = "partial";
	else if (lv_is_raid_type(lv)) {
		if (!activation())
			health = "unknown";
		else if (!lv_raid_healthy(lv))
			health = "refresh needed";
		else if (lv_is_raid(lv)) {
			if (lv_raid_mismatch_count(lv, &n) && n)
				health = "mismatches exist";
		} else if (lv->status & LV_WRITEMOSTLY)
			health = "writemostly";
	} else if (lv_is_cache(lv) && (lvdm->seg_status.type != SEG_STATUS_NONE)) {
		if (lvdm->seg_status.type != SEG_STATUS_CACHE)
			return _field_set_value(field, GET_FIRST_RESERVED_NAME(health_undef),
						GET_FIELD_RESERVED_VALUE(health_undef));
		if (lvdm->seg_status.cache->fail)
			health = "failed";
		else if (lvdm->seg_status.cache->read_only)
			health = "metadata_read_only";
	} else if (lv_is_thin_pool(lv) && (lvdm->seg_status.type != SEG_STATUS_NONE)) {
		if (lvdm->seg_status.type != SEG_STATUS_THIN_POOL)
			return _field_set_value(field, GET_FIRST_RESERVED_NAME(health_undef),
						GET_FIELD_RESERVED_VALUE(health_undef));
		if (lvdm->seg_status.thin_pool->fail)
			health = "failed";
		else if (lvdm->seg_status.thin_pool->out_of_data_space)
			health = "out_of_data";
		else if (lvdm->seg_status.thin_pool->read_only)
			health = "metadata_read_only";
	}

	return _field_string(rh, field, health);
}
static int _lvcheckneeded_disp(struct dm_report *rh, struct dm_pool *mem,
			       struct dm_report_field *field,
			       const void *data, void *private)
{
	const struct lv_with_info_and_seg_status *lvdm = (const struct lv_with_info_and_seg_status *) data;

	if (lv_is_thin_pool(lvdm->lv) && lvdm->seg_status.type == SEG_STATUS_THIN_POOL)
		return _binary_disp(rh, mem, field, lvdm->seg_status.thin_pool->needs_check,
				    GET_FIRST_RESERVED_NAME(lv_check_needed_y), private);

	if (lv_is_cache(lvdm->lv) && lvdm->seg_status.type == SEG_STATUS_CACHE)
		return _binary_disp(rh, mem, field, lvdm->seg_status.cache->needs_check,
				    GET_FIRST_RESERVED_NAME(lv_check_needed_y), private);

	return _binary_undef_disp(rh, mem, field, private);
}
static int _lvskipactivation_disp(struct dm_report *rh, struct dm_pool *mem,
				  struct dm_report_field *field,
				  const void *data, void *private)
{
	int skip_activation = (((const struct logical_volume *) data)->status & LV_ACTIVATION_SKIP) != 0;

	return _binary_disp(rh, mem, field, skip_activation, "skip activation", private);
}
static int _lvhistorical_disp(struct dm_report *rh, struct dm_pool *mem,
			      struct dm_report_field *field,
			      const void *data, void *private)
{
	const struct logical_volume *lv = (const struct logical_volume *) data;

	return _binary_disp(rh, mem, field, lv_is_historical(lv), "historical", private);
}
/*
 * Macro to generate a '_cache_<cache_status_field_name>_disp' reporting function.
 * The 'cache_status_field_name' is the field name from struct dm_cache_status.
 */
#define GENERATE_CACHE_STATUS_DISP_FN(cache_status_field_name) \
static int _cache_ ## cache_status_field_name ## _disp(struct dm_report *rh, \
						       struct dm_pool *mem, \
						       struct dm_report_field *field, \
						       const void *data, \
						       void *private) \
{ \
	const struct lv_with_info_and_seg_status *lvdm = (const struct lv_with_info_and_seg_status *) data; \
\
	if (lvdm->seg_status.type != SEG_STATUS_CACHE) \
		return _field_set_value(field, "", &GET_TYPE_RESERVED_VALUE(num_undef_64)); \
\
	return dm_report_field_uint64(rh, field, &lvdm->seg_status.cache->cache_status_field_name); \
}

GENERATE_CACHE_STATUS_DISP_FN(total_blocks)
GENERATE_CACHE_STATUS_DISP_FN(used_blocks)
GENERATE_CACHE_STATUS_DISP_FN(dirty_blocks)
GENERATE_CACHE_STATUS_DISP_FN(read_hits)
GENERATE_CACHE_STATUS_DISP_FN(read_misses)
GENERATE_CACHE_STATUS_DISP_FN(write_hits)
GENERATE_CACHE_STATUS_DISP_FN(write_misses)
/*
 * Macro to generate a '_vdo_<vdo_field_name>_disp' reporting function.
 * The 'vdo_field_name' is the field name from struct lv_vdo_status.
 */
#define GENERATE_VDO_FIELD_DISP_FN(vdo_field_name) \
static int _vdo_ ## vdo_field_name ## _disp(struct dm_report *rh, struct dm_pool *mem, \
					    struct dm_report_field *field, \
					    const void *data, void *private) \
{ \
	const struct lv_segment *seg = (const struct lv_segment *) data; \
\
	if (seg_is_vdo(seg)) \
		seg = first_seg(seg_lv(seg, 0)); \
\
	if (!seg_is_vdo_pool(seg)) \
		return _field_set_value(field, "", &GET_TYPE_RESERVED_VALUE(num_undef_64)); \
\
	return dm_report_field_uint32(rh, field, &seg->vdo_params.vdo_field_name); \
}

GENERATE_VDO_FIELD_DISP_FN(block_map_era_length)
GENERATE_VDO_FIELD_DISP_FN(ack_threads)
GENERATE_VDO_FIELD_DISP_FN(bio_threads)
GENERATE_VDO_FIELD_DISP_FN(bio_rotation)
GENERATE_VDO_FIELD_DISP_FN(cpu_threads)
GENERATE_VDO_FIELD_DISP_FN(hash_zone_threads)
GENERATE_VDO_FIELD_DISP_FN(logical_threads)
GENERATE_VDO_FIELD_DISP_FN(physical_threads)
GENERATE_VDO_FIELD_DISP_FN(max_discard)
/*
 * Macro to generate a '_vdo_<vdo_field_name>_disp' reporting function
 * for fields stored as a size in megabytes in struct lv_vdo_status.
 */
#define GENERATE_VDO_FIELDSZMB_DISP_FN(vdo_field_name) \
static int _vdo_ ## vdo_field_name ## _disp(struct dm_report *rh, struct dm_pool *mem, \
					    struct dm_report_field *field, \
					    const void *data, void *private) \
{ \
	uint64_t size; \
	const struct lv_segment *seg = (const struct lv_segment *) data; \
\
	if (seg_is_vdo(seg)) \
		seg = first_seg(seg_lv(seg, 0)); \
\
	if (!seg_is_vdo_pool(seg)) \
		return _field_set_value(field, "", &GET_TYPE_RESERVED_VALUE(num_undef_64)); \
\
	size = seg->vdo_params.vdo_field_name ## _mb * (1024 * 1024 >> SECTOR_SHIFT); \
\
	return _size64_disp(rh, mem, field, &size, private); \
}

GENERATE_VDO_FIELDSZMB_DISP_FN(block_map_cache_size)
GENERATE_VDO_FIELDSZMB_DISP_FN(index_memory_size)
GENERATE_VDO_FIELDSZMB_DISP_FN(slab_size)
static int _vdo_compression_disp(struct dm_report *rh, struct dm_pool *mem,
				 struct dm_report_field *field,
				 const void *data, void *private)
{
	const struct lv_segment *seg = (const struct lv_segment *) data;

	if (seg_is_vdo(seg))
		seg = first_seg(seg_lv(seg, 0));

	if (seg_is_vdo_pool(seg))
		return _binary_disp(rh, mem, field, seg->vdo_params.use_compression,
				    GET_FIRST_RESERVED_NAME(vdo_compression_y), private);

	return _field_set_value(field, "", &GET_TYPE_RESERVED_VALUE(num_undef_64));
}
static int _vdo_deduplication_disp(struct dm_report *rh, struct dm_pool *mem,
				   struct dm_report_field *field,
				   const void *data, void *private)
{
	const struct lv_segment *seg = (const struct lv_segment *) data;

	if (seg_is_vdo(seg))
		seg = first_seg(seg_lv(seg, 0));

	if (seg_is_vdo_pool(seg))
		return _binary_disp(rh, mem, field, seg->vdo_params.use_deduplication,
				    GET_FIRST_RESERVED_NAME(vdo_deduplication_y), private);

	return _field_set_value(field, "", &GET_TYPE_RESERVED_VALUE(num_undef_64));
}
static int _vdo_use_metadata_hints_disp(struct dm_report *rh, struct dm_pool *mem,
					struct dm_report_field *field,
					const void *data, void *private)
{
	const struct lv_segment *seg = (const struct lv_segment *) data;

	if (seg_is_vdo(seg))
		seg = first_seg(seg_lv(seg, 0));

	if (seg_is_vdo_pool(seg))
		return _binary_disp(rh, mem, field, seg->vdo_params.use_metadata_hints,
				    GET_FIRST_RESERVED_NAME(vdo_use_metadata_hints_y),
				    private);

	return _field_set_value(field, "", &GET_TYPE_RESERVED_VALUE(num_undef_64));
}
static int _vdo_use_sparse_index_disp(struct dm_report *rh, struct dm_pool *mem,
				      struct dm_report_field *field,
				      const void *data, void *private)
{
	const struct lv_segment *seg = (const struct lv_segment *) data;

	if (seg_is_vdo(seg))
		seg = first_seg(seg_lv(seg, 0));

	if (seg_is_vdo_pool(seg))
		return _binary_disp(rh, mem, field, seg->vdo_params.use_sparse_index,
				    GET_FIRST_RESERVED_NAME(vdo_use_sparse_index_y),
				    private);

	return _field_set_value(field, "", &GET_TYPE_RESERVED_VALUE(num_undef_64));
}
static int _vdo_minimum_io_size_disp(struct dm_report *rh, struct dm_pool *mem,
				     struct dm_report_field *field,
				     const void *data, void *private)
{
	const struct lv_segment *seg = (const struct lv_segment *) data;

	if (seg_is_vdo(seg))
		seg = first_seg(seg_lv(seg, 0));

	if (seg_is_vdo_pool(seg))
		return _size32_disp(rh, mem, field, &seg->vdo_params.minimum_io_size, private);

	return _field_set_value(field, "", &GET_TYPE_RESERVED_VALUE(num_undef_64));
}
static int _vdo_header_size_disp(struct dm_report *rh,
				 struct dm_pool *mem,
				 struct dm_report_field *field,
				 const void *data,
				 void *private)
{
	const struct lv_segment *seg = (const struct lv_segment *) data;

	if (seg_is_vdo(seg))
		seg = first_seg(seg_lv(seg, 0));

	if (seg_is_vdo_pool(seg))
		return _size32_disp(rh, mem, field, &seg->vdo_pool_header_size, private);

	return _field_set_value(field, "", &GET_TYPE_RESERVED_VALUE(num_undef_64));
}
static int _vdo_write_policy_disp(struct dm_report *rh,
				  struct dm_pool *mem,
				  struct dm_report_field *field,
				  const void *data,
				  void *private)
{
	const struct lv_segment *seg = (const struct lv_segment *) data;

	if (seg_is_vdo(seg))
		seg = first_seg(seg_lv(seg, 0));

	if (seg_is_vdo_pool(seg))
		return _field_string(rh, field, get_vdo_write_policy_name(seg->vdo_params.write_policy));

	return _field_set_value(field, GET_FIRST_RESERVED_NAME(vdo_write_policy_undef),
				GET_FIELD_RESERVED_VALUE(vdo_write_policy_undef));
}
/* Report object types */

/* necessary for displaying something for PVs not belonging to a VG */
static struct format_instance _dummy_fid = {
	.metadata_areas_in_use = DM_LIST_HEAD_INIT(_dummy_fid.metadata_areas_in_use),
	.metadata_areas_ignored = DM_LIST_HEAD_INIT(_dummy_fid.metadata_areas_ignored),
};

static struct volume_group _dummy_vg = {
	.fid = &_dummy_fid,
	.name = "",
	.system_id = (char *) "",
	.pvs = DM_LIST_HEAD_INIT(_dummy_vg.pvs),
	.lvs = DM_LIST_HEAD_INIT(_dummy_vg.lvs),
	.historical_lvs = DM_LIST_HEAD_INIT(_dummy_vg.historical_lvs),
	.tags = DM_LIST_HEAD_INIT(_dummy_vg.tags),
};

static struct volume_group _unknown_vg = {
	.fid = &_dummy_fid,
	.name = "[unknown]",
	.system_id = (char *) "",
	.pvs = DM_LIST_HEAD_INIT(_unknown_vg.pvs),
	.lvs = DM_LIST_HEAD_INIT(_unknown_vg.lvs),
	.historical_lvs = DM_LIST_HEAD_INIT(_unknown_vg.historical_lvs),
	.tags = DM_LIST_HEAD_INIT(_unknown_vg.tags),
};
static void *_obj_get_vg(void *obj)
{
	struct volume_group *vg = ((struct lvm_report_object *) obj)->vg;

	return vg ? vg : &_dummy_vg;
}

static void *_obj_get_lv(void *obj)
{
	return (struct logical_volume *) ((struct lvm_report_object *) obj)->lvdm->lv;
}

static void *_obj_get_lv_with_info_and_seg_status(void *obj)
{
	return ((struct lvm_report_object *) obj)->lvdm;
}

static void *_obj_get_pv(void *obj)
{
	return ((struct lvm_report_object *) obj)->pv;
}

static void *_obj_get_label(void *obj)
{
	return ((struct lvm_report_object *) obj)->label;
}

static void *_obj_get_seg(void *obj)
{
	return ((struct lvm_report_object *) obj)->seg;
}

static void *_obj_get_pvseg(void *obj)
{
	return ((struct lvm_report_object *) obj)->pvseg;
}

static void *_obj_get_devtypes(void *obj)
{
	return obj;
}

static void *_obj_get_cmdlog(void *obj)
{
	return obj;
}
static const struct dm_report_object_type _log_report_types[] = {
	{ CMDLOG, "Command Log", "log_", _obj_get_cmdlog },
	{ 0, "", "", NULL },
};

static const struct dm_report_object_type _report_types[] = {
	{ VGS, "Volume Group", "vg_", _obj_get_vg },
	{ LVS, "Logical Volume", "lv_", _obj_get_lv },
	{ LVSINFO, "Logical Volume Device Info", "lv_", _obj_get_lv_with_info_and_seg_status },
	{ LVSSTATUS, "Logical Volume Device Status", "lv_", _obj_get_lv_with_info_and_seg_status },
	{ LVSINFOSTATUS, "Logical Volume Device Info and Status Combined", "lv_", _obj_get_lv_with_info_and_seg_status },
	{ PVS, "Physical Volume", "pv_", _obj_get_pv },
	{ LABEL, "Physical Volume Label", "pv_", _obj_get_label },
	{ SEGS, "Logical Volume Segment", "seg_", _obj_get_seg },
	{ PVSEGS, "Physical Volume Segment", "pvseg_", _obj_get_pvseg },
	{ 0, "", "", NULL },
};

static const struct dm_report_object_type _devtypes_report_types[] = {
	{ DEVTYPES, "Device Types", "devtype_", _obj_get_devtypes },
	{ 0, "", "", NULL },
};
/*
 * Import column definitions
 */

#define STR DM_REPORT_FIELD_TYPE_STRING
#define NUM DM_REPORT_FIELD_TYPE_NUMBER
#define BIN DM_REPORT_FIELD_TYPE_NUMBER
#define SIZ DM_REPORT_FIELD_TYPE_SIZE
#define PCT DM_REPORT_FIELD_TYPE_PERCENT
#define TIM DM_REPORT_FIELD_TYPE_TIME
#define STR_LIST DM_REPORT_FIELD_TYPE_STRING_LIST
#define SNUM DM_REPORT_FIELD_TYPE_NUMBER

#define FIELD(type, strct, sorttype, head, field, width, func, id, desc, writeable) \
	{ type, sorttype, offsetof(type_ ## strct, field), (width) ? : sizeof(head) - 1, \
	  #id, head, &_ ## func ## _disp, desc },
typedef struct cmd_log_item type_cmd_log_item;
typedef struct physical_volume type_pv;
typedef struct logical_volume type_lv;
typedef struct volume_group type_vg;
typedef struct lv_segment type_seg;
typedef struct pv_segment type_pvseg;
typedef struct label type_label;

typedef dev_known_type_t type_devtype;

static const struct dm_report_field_type _fields[] = {
#include "columns.h"
	{ 0, 0, 0, 0, "", "", NULL, NULL },
};
2002-12-12 23:55:49 +03:00
2013-09-18 04:09:15 +04:00
static const struct dm_report_field_type _devtypes_fields [ ] = {
# include "columns-devtypes.h"
{ 0 , 0 , 0 , 0 , " " , " " , NULL , NULL } ,
} ;
2016-05-10 16:15:48 +03:00
static const struct dm_report_field_type _log_fields [ ] = {
# include "columns-cmdlog.h"
{ 0 , 0 , 0 , 0 , " " , " " , NULL , NULL } ,
} ;
#undef STR
#undef NUM
#undef BIN
#undef SIZ
#undef STR_LIST
#undef SNUM
#undef FIELD
void *report_init(struct cmd_context *cmd, const char *format, const char *keys,
		  report_type_t *report_type, const char *separator,
		  int aligned, int buffered, int headings, int field_prefixes,
		  int quoted, int columns_as_rows, const char *selection,
		  int multiple_output)
{
	uint32_t report_flags = 0;
	const struct dm_report_object_type *types;
	const struct dm_report_field_type *fields;
	const struct dm_report_reserved_value *reserved_values;
	void *rh;

	if (aligned)
		report_flags |= DM_REPORT_OUTPUT_ALIGNED;

	if (buffered)
		report_flags |= DM_REPORT_OUTPUT_BUFFERED;

	if (headings)
		report_flags |= DM_REPORT_OUTPUT_HEADINGS;

	if (field_prefixes)
		report_flags |= DM_REPORT_OUTPUT_FIELD_NAME_PREFIX;

	if (!quoted)
		report_flags |= DM_REPORT_OUTPUT_FIELD_UNQUOTED;

	if (columns_as_rows)
		report_flags |= DM_REPORT_OUTPUT_COLUMNS_AS_ROWS;

	if (multiple_output)
		report_flags |= DM_REPORT_OUTPUT_MULTIPLE_TIMES;

	if (*report_type & CMDLOG) {
		types = _log_report_types;
		fields = _log_fields;
		reserved_values = NULL;
	} else if (*report_type & DEVTYPES) {
		types = _devtypes_report_types;
		fields = _devtypes_fields;
		reserved_values = NULL;
	} else {
		types = _report_types;
		fields = _fields;
		reserved_values = _report_reserved_values;
	}

	rh = dm_report_init_with_selection(report_type, types, fields,
					   format, separator, report_flags, keys,
					   selection, reserved_values, cmd);

	if (rh && field_prefixes)
		dm_report_set_output_field_name_prefix(rh, "lvm2_");

	return rh;
}
void *report_init_for_selection(struct cmd_context *cmd,
				report_type_t *report_type,
				const char *selection_criteria)
{
	return dm_report_init_with_selection(report_type, _report_types, _fields,
					     "", DEFAULT_REP_SEPARATOR,
					     DM_REPORT_OUTPUT_FIELD_UNQUOTED,
					     "", selection_criteria,
					     _report_reserved_values,
					     cmd);
}
int report_get_prefix_and_desc(report_type_t report_type_id,
			       const char **report_prefix,
			       const char **report_desc)
{
	const struct dm_report_object_type *report_types, *report_type;

	if (report_type_id & CMDLOG)
		report_types = _log_report_types;
	else if (report_type_id & DEVTYPES)
		report_types = _devtypes_report_types;
	else
		report_types = _report_types;

	for (report_type = report_types; report_type->id; report_type++) {
		if (report_type_id & report_type->id) {
			*report_prefix = report_type->prefix;
			*report_desc = report_type->desc;
			return 1;
		}
	}

	*report_prefix = *report_desc = "";
	return 0;
}

/*
 * Create a row of data for an object
 */
int report_object(void *handle, int selection_only, const struct volume_group *vg,
		  const struct logical_volume *lv, const struct physical_volume *pv,
		  const struct lv_segment *seg, const struct pv_segment *pvseg,
		  const struct lv_with_info_and_seg_status *lvdm,
		  const struct label *label)
{
	struct selection_handle *sh = selection_only ? (struct selection_handle *) handle : NULL;
	struct device dummy_device = { .dev = 0 };
	struct label dummy_label = { .dev = &dummy_device };
	struct lvm_report_object obj = {
		.vg = (struct volume_group *) vg,
		.lvdm = (struct lv_with_info_and_seg_status *) lvdm,
		.pv = (struct physical_volume *) pv,
		.seg = (struct lv_segment *) seg,
		.pvseg = (struct pv_segment *) pvseg,
		.label = (struct label *) (label ? : (pv ? pv_label(pv) : NULL))
	};

	/* FIXME workaround for pv_label going through cache; remove once struct
	 * physical_volume gains a proper "label" pointer */
	if (!obj.label) {
		if (pv) {
			if (pv->fmt)
				dummy_label.labeller = pv->fmt->labeller;
			if (pv->dev)
				dummy_label.dev = pv->dev;
			else
				memcpy(dummy_device.pvid, &pv->id, ID_LEN);
		}
		obj.label = &dummy_label;
	}

	/* Never report orphan VGs. */
	if (vg && is_orphan_vg(vg->name)) {
		obj.vg = &_dummy_vg;
		if (pv)
			_dummy_fid.fmt = pv->fmt;
	}

	if (vg && is_orphan_vg(vg->name) && pv && is_used_pv(pv)) {
		obj.vg = &_unknown_vg;
		_dummy_fid.fmt = pv->fmt;
	}

	return sh ? dm_report_object_is_selected(sh->selection_rh, &obj, 0, &sh->selected)
		  : dm_report_object(handle, &obj);
}

static int _report_devtype_single(void *handle, const dev_known_type_t *devtype)
{
	return dm_report_object(handle, (void *) devtype);
}

int report_devtypes(void *handle)
{
	int devtypeind = 0;

	while (_dev_known_types[devtypeind].name[0])
		if (!_report_devtype_single(handle, &_dev_known_types[devtypeind++]))
			return 0;

	return 1;
}

int report_cmdlog(void *handle, const char *type, const char *context,
		  const char *object_type_name, const char *object_name,
		  const char *object_id, const char *object_group,
		  const char *object_group_id, const char *msg,
		  int current_errno, int ret_code)
{
	struct cmd_log_item log_item = { _log_seqnum++, type, context, object_type_name,
					 object_name ? : "", object_id ? : "",
					 object_group ? : "", object_group_id ? : "",
					 msg ? : "", current_errno, ret_code };

	if (handle)
		return dm_report_object(handle, &log_item);

	return 1;
}

void report_reset_cmdlog_seqnum(void)
{
	_log_seqnum = 1;
}

int report_current_object_cmdlog(const char *type, const char *msg, int32_t ret_code)
{
	log_report_t log_state = log_get_report_state();

	return report_cmdlog(log_state.report, type, log_get_report_context_name(log_state.context),
			     log_get_report_object_type_name(log_state.object_type),
			     log_state.object_name, log_state.object_id,
			     log_state.object_group, log_state.object_group_id,
			     msg, stored_errno(), ret_code);
}