The 'pe_start' column was incorrectly marked as being of type NUM.
In fact, pe_start is of type SIZ, which means it can carry a size
suffix and hence is not a pure numeric value.
The correct column type is important for selection to work properly,
so that comparisons can also be done using suffixes.
It also matters for the new "json_std" output format, which does not
put double quotes around pure numeric values. With pe_start incorrectly
marked as NUM instead of SIZ, this produced invalid JSON output
like '"pe_start" = 1.00m' because the value contained the 'm' (or
other) size suffix. If properly marked as SIZ, the value is put in
double quotes like '"pe_start" = "1.00m"'.
In JSON format, we print a string list this way:
"key" = "item1,item2,...,itemN"
while in JSON_STD format, we print a string list this way:
"key" = ["item1","item2",...,"itemN"]
Before, for a string list we stored only the report string itself in
field->report_string. The field->report_string may contain either
sorted or unsorted items, depending on what the field needs -
some report fields have sorted output, some don't.
The field->sort_value.value then contains a pointer to that exact
field->report_string. The field->sort_value.items ALWAYS keeps a
sorted array of the individual items, represented as '[position,length]'
pairs pointing into the field->sort_value.value string.
This approach was fine as long as we didn't need to apply further
formatting to field->report_string. However, if we need to apply
further formatting to the field->report_string content, taking the
individual items into account, we also need to know where each item
starts and what its length is. Before, we only knew this when the
items in the report string were sorted, but not in the unsorted version.
We can't rely on the delimiter (default ",") alone to separate items
back out of the report string, because the delimiter can also appear
inside an item value itself.
So this patch enhances field->report_string for a string list so that
it also contains the '[position,length]' pairs for each individual item
inside field->report_string. We store this array right beyond the
string itself and encode it in the same manner as we already did for
field->sort_value.items before.
If field->report_string has sorted items, field->sort_value.items
simply points to the array of items stored beyond the report string.
If field->report_string has unsorted items, we store a separate array
of items for each of field->report_string and field->sort_value.
This patch also cleans up the _report_field_string_list function a bit
so it's easier and more straightforward to follow than the original
version.
Example. If we have "abc", "xy", "defgh" as the list on input with ","
as the delimiter, then:

  - field->report_string will have:

    - if we need field->report_string unsorted:

      abc,xy,defgh\0{[3,12],[0,3],[4,2],[7,5]}
      |____________||________________________|
          string     array of [pos,len] pairs
                     |____||________________|
                     #items      items

    - if we need field->report_string sorted:

               repstr_extra
                    |
                    V
      abc,defgh,xy\0{[3,12],[0,3],[4,5],[10,2]}
      |____________||_________________________|
          string      array of [pos,len] pairs
                     |____||__________________|
                     #items       items

  - field->sort_value will have:

    - if field->report_string is unsorted:

        field->sort_value.value = field->report_string
        field->sort_value.items = {[0,3],[0,3],[7,5],[4,2]}
        (that is 'abc,defgh,xy')

    - if field->report_string is sorted already:

        field->sort_value.value = field->report_string
        field->sort_value.items = repstr_extra
        (that is also 'abc,defgh,xy')
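As a complement to the example above, here is a minimal standalone sketch
(not the actual libdm code; the struct and function names are made up for
illustration) of how the '[position,length]' pairs stored beyond the report
string can be walked to recover the individual items, with the leading pair
holding the item count and total string length as in the example:

  #include <stdio.h>

  /* Hypothetical layout of one [position,length] pair. */
  struct pos_len_pair {
          unsigned pos;
          unsigned len;
  };

  /*
   * Walk the pairs stored right beyond the report string.  The first pair
   * is a header: .pos holds the number of items and .len the total string
   * length, matching the "{[3,12],...}" header in the example above.
   */
  static void print_list_items(const char *report_string,
                               const struct pos_len_pair *pairs)
  {
          unsigned i, count = pairs[0].pos;

          for (i = 1; i <= count; i++)
                  printf("item %u: %.*s\n", i,
                         (int) pairs[i].len, report_string + pairs[i].pos);
  }

  int main(void)
  {
          /* "abc,xy,defgh" with items at [0,3], [4,2] and [7,5]. */
          const char str[] = "abc,xy,defgh";
          const struct pos_len_pair pairs[] = {{3, 12}, {0, 3}, {4, 2}, {7, 5}};

          print_list_items(str, pairs);
          return 0;
  }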
For JSON_STD format, use 'null' if a field has no value at all.
In JSON format, we print undefined numeric values this way:
"key" = ""
while in JSON_STD format, we print undefined numeric values this way:
"key" = null
(Keep in mind that 'null' is different from 0 (zero value), which is
a defined value.)
In JSON format, we print numeric values this way:
"key" = "N"
while in JSON_STD format, we print numeric values this way:
"key" = N
(Where N is a numeric value.)
The original JSON formatting will still be available through the original
DM_REPORT_GROUP_JSON identifier. Subsequent patches will enhance the JSON
formatting code so that it adheres more closely to the JSON standard - this
will be identified by the new DM_REPORT_GROUP_JSON_STD identifier.
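A minimal sketch of how a caller might pick between the two (the helper
name and flag are assumptions for illustration; only the group identifiers
and dm_report_group_create() come from libdevmapper):

  #include <libdevmapper.h>

  /*
   * Create a report group using either the original JSON formatting
   * (DM_REPORT_GROUP_JSON) or the standard-conforming variant
   * (DM_REPORT_GROUP_JSON_STD) added by these patches.
   */
  static struct dm_report_group *_create_json_group(int use_std)
  {
          return dm_report_group_create(use_std ? DM_REPORT_GROUP_JSON_STD
                                                : DM_REPORT_GROUP_JSON, NULL);
  }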
When using JSON format for the lvm shell's output, the error message about
exceeding the maximum number of arguments was not reported on output if
this condition was ever hit.
This is because the JSON format (as well as any other future format)
requires extra formatting compared to the "basic" format and so it also
requires extra calls when it comes to reporting. The report needs to
be added to a report group and then popped and put on output with the
specialized "dm_report_group_output_and_pop_all".
This "output and pop" is normally executed after we execute the command
in the lvm shell. When we didn't get to command execution at all because
some precondition was not met (like hitting the limit on the number of
arguments for the command here), we skipped this important call and
so there was no log report output.
Right now, this is the only error message for which we need to call
"output and pop" directly; all the other error messages are about
initializing and setting up the log report itself, which obviously
can't be reported through it.
Before this patch:
lvm> pvs 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65
lvm>
With this patch applied:
lvm> pvs 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65
{
"log": [
{"log_seq_num":"1", "log_type":"error", "log_context":"shell", "log_object_type":"cmd", "log_object_name":"", "log_object_id":"", "log_object_group":"", "log_object_group_id":"", "log_message":"Too many arguments, sorry.", "log_errno":"-1", "log_ret_code":"0"}
]
}
If any other error message is added in the future before we execute the
command itself, we will also need to call "output and pop" directly for it.
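The idea in a simplified sketch (the argument-limit constant, the function
and its arguments are assumptions for illustration, not lvm2's actual shell
code); the key point is that dm_report_group_output_and_pop_all() must be
called even when the command never runs:

  #include <libdevmapper.h>

  #define MAX_ARGS 64     /* assumed limit, for illustration only */

  /*
   * If a precondition fails before the command is executed, the log
   * report still has to be flushed, otherwise the error message never
   * reaches the JSON output.
   */
  static int _check_args(struct dm_report_group *log_group, int argc)
  {
          if (argc > MAX_ARGS) {
                  /* the "Too many arguments, sorry." error is logged
                   * into the log report at this point */
                  dm_report_group_output_and_pop_all(log_group);
                  return 0;
          }

          return 1; /* proceed; "output and pop" happens after the command */
  }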
multipath_component_detection=0 has always applied to the filter-based
component detection. Also apply this setting to the duplicate-PV
handling, which also eliminates multipath components (based on duplicate
PVs having the same wwid).
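For reference, with this change a configuration like the following disables
both mechanisms (the filter-based detection and the wwid-based duplicate-PV
handling):

  devices {
          multipath_component_detection = 0
  }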
When creating a VDO pool based on % values, lvm2 is now more clever
and avoids creating 'unsupportable' sizes of physical backend
volumes, as 16TiB is the maximum size supported by the VDO target
(and it is also limited by the maximum number of supportable slabs
(8192) based on the slab size).
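For instance, with a command line like this (illustrative sizes and names
only), the physical backend size derived from the % value is now kept
within the 16TiB / 8192-slab limits:

  lvcreate --type vdo -l 100%FREE -V 10T -n vdo_lv vg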
If the requested virtual size is approaching the maximum supported
size of 4PiB, switch the header size to 0.
Newer VDO kernel targets require a matching virtual size - this
however causes an incompatibility when lvcreate is left to format the
VDO data device and read the usable size from vdoformat.
Although this is a kernel regression and will likely get fixed,
lvm2 can actually reformat the VDO device to use a properly aligned
VDO LV size to make this problem disappear.
Add a function to check for available memory for a particular VDO
configuration - to avoid unnecessary machine swapping for configs
that will not fit into memory (possibly in a locked section).
The formula tries to estimate the RAM size the machine can use for
the kernel target, even with swapping, while still leaving some
amount of usable RAM.
The estimation is based on the documented RAM usage of the VDO target.
If /proc/meminfo is unavailable, try to use the 'sysinfo()' function,
although this gives only free RAM without any knowledge of how much
RAM could eventually be swapped.
TODO: move _get_memory_info() into a generic lvm2 API function used
by other targets with non-trivial memory requirements.
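A simplified sketch of the fallback logic only (the helper name and the
choice of /proc/meminfo fields are assumptions; this is not the actual
_get_memory_info() implementation): parse /proc/meminfo and, if that fails,
fall back to sysinfo(), which can only report free RAM:

  #include <stdio.h>
  #include <sys/sysinfo.h>

  static int _get_usable_mem_kb(unsigned long long *mem_kb)
  {
          char line[128];
          unsigned long long val, total = 0;
          struct sysinfo si;
          FILE *f;

          if ((f = fopen("/proc/meminfo", "r"))) {
                  /* count RAM that is available plus RAM we could swap out */
                  while (fgets(line, sizeof(line), f))
                          if (sscanf(line, "MemAvailable: %llu kB", &val) == 1 ||
                              sscanf(line, "SwapFree: %llu kB", &val) == 1)
                                  total += val;
                  fclose(f);
                  *mem_kb = total;
                  return 1;
          }

          /* fallback: sysinfo() knows free RAM only, nothing about swapping */
          if (sysinfo(&si) == 0) {
                  *mem_kb = (unsigned long long) si.freeram * si.mem_unit / 1024;
                  return 1;
          }

          return 0;
  }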
Keep a single source for most of the values printed in lvm.conf
(still needs some conversion).
Correct the maximum for logical threads to 60
(we may refuse some older configurations which might eventually
use higher numbers - but so far let's assume no user has ever set this,
as it's been non-trivial, and it would complicate the code unnecessarily).
Accept a maximum of 4PiB for the virtual size of a VDO LV
(lvm2 will drop the 'header borders' to 0 for this case).
Patch 1b070f366b should have
already fixed this issue, but due to incorrect patch rebasing the
change to vdo_slabSize got lost.
So here it is again as an explicit one-line patch.
Fixes commit b8f4ec846 "display: ignore --reportformat"
by restoring the --reportformat option to pvdisplay.
Adding -C to pvdisplay turns the command into a reporting
command (like pvs, vgs, lvs), for which --reportformat can
be useful.
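For example, a call such as the following produces report-style output
again (the exact fields shown depend on configuration):

  pvdisplay -C --reportformat json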