Add a very simplistic parser of VDO metadata to be able to obtain
the logical_blocks value stored within the VDO metadata - as lvm2 may
submit a smaller value due to internal alignment rules.
To avoid creating a mismatching table line, use this number
instead of the one provided by lvm2.
Commit 73ec3c954b added a way to print
only a part of the report string (repstr) to support decoding individual
string list items out of repstr.
The repstr is normally printed through _safe_repstr_output so that any
JSON_QUOTE character ('"') found within the repstr is escaped to not
interfere with value quoting in JSON format.
However, commit 73ec3c954b missed
checking the 'len' argument passed to the _safe_repstr_output function when
adding the rest of the repstr after all previous JSON_QUOTE characters
were escaped (when calling the last dm_pool_grow_object). When 'len'
is 0, we need to calculate the 'len' ourselves in the function by
simply calling strlen. This is because 'len' is passed to the function
only when we're taking a part of repstr, not when taking it as a whole.
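A minimal sketch of the fix, assuming a rough shape of the function - the
real _safe_repstr_output in libdm takes different arguments and performs the
JSON_QUOTE escaping in between:

  static int _safe_repstr_output(struct dm_report *rh, const char *repstr, size_t len)
  {
          if (!len)
                  len = strlen(repstr);   /* 0 means "output the whole repstr" */

          /* ... escape any JSON_QUOTE characters found within the first 'len'
           *     bytes of repstr, then grow the output object by the remaining
           *     bytes (the last dm_pool_grow_object call) ... */
          return 1;
  }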
In JSON format, we print a string list this way:
"key" = "item1,item2,...,itemN"
while in JSON_STD format, we print a string list this way:
"key" = ["item1","item2",...,"itemN"]
Before, we stored only the report string itself for a string list
in field->report_string. The field->report_string has either
sorted items or not, depending on what we need for a field -
some report fields have sorted output, some don't...
The field->sort_value.value then contains a pointer to the exact
field->report_string. The field->sort_value.items ALWAYS keeps a
sorted array of individual items, represented as '[position,length]'
pairs pointing into the field->sort_value.value string.
This approach was fine as long as we didn't need to apply further
formatting to field->report_string. However, if we need to apply
further formatting to field->report_string content, taking into
account individual items, we also need to know where each item
starts and what its length is. Before, we only knew this when the
items in the report string were sorted, but not in the unsorted version.
We can't rely on the delimiter (default ",") alone to separate items
back out of the report string, because the delimiter can also be
contained in an item's value itself.
So this patch enhances the field->report_string for a string list so
it also contains '[position,length]' pairs for each individual item
inside field->report_string. We store this array right after the
string itself and encode it in the same manner we already did for
field->sort_value.items before.
If field->report_string has sorted items, then field->sort_value.items
just points to the array of items we store after the report string.
If field->report_string has unsorted items, we store separate arrays
of items for field->report_string and field->sort_value.
This patch also cleans up the _report_field_string_list function a bit
so it's easier and more straightforward to follow than the original
version.
Example. If we have "abc", "xy", "defgh" as the input list with ","
as the delimiter, then:
- field->report_string will have:
- if we need field->report_string unsorted:
abc,xy,defgh\0{[3,12],[0,3],[4,2],[7,5]}
|____________||________________________|
string array of [pos,len] pairs
|____||________________|
#items items
- if we need field->report_string sorted:
repstr_extra
|
V
abc,defgh,xy\0{[3,12],[0,3],[4,5],[10,2]}
|____________||________________________|
string array of [pos,len] pairs
|____||________________|
#items items
- field->sort_value will have:
- if field->report_string is unsorted:
field->sort_value.value = field->report_string
field->sort_value.items = {[0,3],[0,3],[7,5],[4,2]}
(that is 'abc,defgh,xy')
- if field->report_string is sorted already:
field->sort_value.value = field->report_string
field->sort_value.items = repstr_extra
(that is also 'abc,defgh,xy')
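An illustrative decode of the unsorted layout above - the pair type, its
exact in-memory encoding and the header pair being [#items,total_len] are
assumptions made for the example, not the actual libdm definitions:

  #include <stdio.h>
  #include <stdint.h>
  #include <string.h>

  struct pos_len { uint32_t pos, len; };               /* hypothetical pair type */

  static void print_items(const char *repstr)
  {
          const char *a = repstr + strlen(repstr) + 1;  /* array stored right after '\0' */
          struct pos_len hdr, it;
          uint32_t i;

          memcpy(&hdr, a, sizeof(hdr));                 /* header pair: [#items, string length] */
          for (i = 0; i < hdr.pos; i++) {
                  memcpy(&it, a + (i + 1) * sizeof(it), sizeof(it));
                  printf("item %u: %.*s\n", i, (int) it.len, repstr + it.pos);
          }
  }

For the unsorted example this would print "abc", "xy" and "defgh" in input order.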
For JSON_STD format, use 'null' if a field has no value at all.
In JSON format, we print undefined numeric values this way:
"key" = ""
while in JSON_STD format, we print undefined numeric values this way:
"key" = null
(Keep in mind that 'null' is different from 0 (zero value) which is
a defined value.)
In JSON format, we print numeric values this way:
"key" = "N"
while in JSON_STD format, we print numeric values this way:
"key" = N
(Where N is a numeric value.)
The original JSON formatting will still be available using the original
DM_REPORT_GROUP_JSON identifier. Subsequent patches will add enhancements
to the JSON formatting code so that it adheres more closely to the JSON
standard - this will be identified by the new DM_REPORT_GROUP_JSON_STD identifier.
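A sketch of how a caller would select the new formatting, assuming the
existing dm_report_group_create() interface (error handling omitted):

  struct dm_report_group *group;

  /* original, more relaxed JSON output */
  group = dm_report_group_create(DM_REPORT_GROUP_JSON, NULL);

  /* new, stricter output added by these patches */
  group = dm_report_group_create(DM_REPORT_GROUP_JSON_STD, NULL);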
Keep a single source for most of the values printed in lvm.conf
(still needs some conversion).
Correct the max for logical threads to 60
(we may refuse some older configuration which might eventually have used
higher numbers - but so far let's assume no user has ever set this,
as it's been non-trivial and it would complicate the code unnecessarily.)
Accept a maximum of 4PiB for the virtual size of a VDO LV
(lvm2 will drop 'header borders' to 0 for this case).
Allow using --vdosettings with lvcreate, lvconvert, lvchange.
Support settings currently only configurable via lvm.conf.
With lvchange we require an inactive LV for changes to be applied.
The setting block_map_era_length has the supported alias block_map_period.
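For example (illustrative only - the VG/LV names and the value are made up,
and the LV must be inactive for lvchange to apply the change):

$ lvchange --vdosettings 'block_map_period=16380' vg/vdopool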
New API extension (internal ATM) for getting a list
of active DM devices with extra features, e.g. uuid.
To easily look up an existing UUID in the device list,
there is: dm_device_list_find_by_uuid()
Once the returned structure is no longer usable, call:
dm_device_list_destroy()
Struct dm_active_device {} holds all the info,
but is always allocated and destroyed within the library.
TODO: once it's stable, copy to libdm
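A hypothetical usage sketch - only the two helpers and the struct named
above come from this extension; the getter name, the flag and all signatures
are assumptions:

  struct dm_list *devs = NULL;
  const struct dm_active_device *dev_info;

  if (!get_dm_active_devices(&devs, DM_DEVICE_LIST_HAS_UUID))    /* hypothetical getter */
          return 0;

  if (dm_device_list_find_by_uuid(devs, lv_uuid, &dev_info))     /* look up existing UUID */
          log_debug("Found active DM device %u:%u.",
                    dev_info->major, dev_info->minor);           /* fields are assumptions */

  dm_device_list_destroy(&devs);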
The existing mechanism was not able to trace a root volume issue.
Simplify the functionality by simply using the activated flag
and tracing the dtree in reverse order.
When generating the table line for the cache target,
the estimated number of added arguments was calculated
incorrectly, because the "?:" operator is evaluated
after "+".
However the result was 'masked' by the
Reported-by: Jian Cai jcai19
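Illustration of the operator-precedence mixup described above ("?:" binds
more loosely than "+"), with made-up names:

  /* intended: add 2 more arguments when the feature is enabled */
  count = 4 + feature_enabled ? 2 : 0;     /* parses as (4 + feature_enabled) ? 2 : 0 */
  count = 4 + (feature_enabled ? 2 : 0);   /* correct estimation */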
Use a different 'hint' size for each dm_hash_create() call - so
when debug info about a hash is printed, we can recognize which
hash was in use.
This patch doesn't change the actually used size, since that is always
rounded to a power of 2 and >=16 - so as such it is only a
help to the developer.
We could eventually use a 'name' arg, but since this would have changed
the API and this patchset will be routed to libdm & stable - we will
just use this small trick.
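Illustration of the trick - the hint values and table names here are made up,
only dm_hash_create(size_hint) itself is the existing libdm call:

  struct dm_hash_table *pv_hash = dm_hash_create(117);   /* distinct hints make the */
  struct dm_hash_table *lv_hash = dm_hash_create(133);   /* hashes recognizable in debug output */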
Initial support for thin-pool used a slightly smaller max size of 15.81GiB
for thin-pool metadata. However the real limit later settled at 15.88GiB
(the difference is ~64MiB - 16448 4K blocks).
lvm2 could not simply increase the size, as it has been using hard cropping
of the loaded metadata device to avoid the kernel printing warnings
when the size was bigger (i.e. due to a bigger extent_size).
This patch adds the new lvm.conf configurable setting:
allocation/thin_pool_crop_metadata
which defaults to 0 -> no crop of metadata beyond 15.81GiB.
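For example, in lvm.conf:

  allocation {
          # 0 (default): do not crop metadata beyond 15.81GiB
          # 1: preserve the old 15.81GiB cropping behaviour
          thin_pool_crop_metadata = 0
  }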
Only users with these sizes of metadata will be affected.
Without cropping, lvm2 now limits the metadata allocation size to 15.88GiB.
Any space beyond that is currently not used by the thin-pool target,
even if e.g. a bigger LV is used for metadata via lvconvert,
or a bigger one is allocated because of a too-large extent size.
With cropping enabled (=1), lvm2 preserves the old 15.81GiB limitation
and should allow working in an environment with
older lvm2 tools (i.e. an older distribution).
Thin-pool metadata with a size bigger than 15.81GiB now uses the CROP_METADATA
flag within lvm2 metadata, so older lvm2 recognizes an
incompatible thin-pool and cannot activate such a pool!
Users should use the uncropped version, as it does not suffer
from various mismatches between thin_repair results and the allocated
metadata LV, since the thin_repair limit is 15.88GiB.
Users should use cropping only when really needed!
The patch also better handles resize of thin-pool metadata and prevents resize
beyond the usable size of 15.88GiB. Resize beyond 15.81GiB automatically
switches the pool to the no-crop version. Even with existing bigger thin-pool
metadata, the command 'lvextend -l+1 vg/pool_tmeta' does the change.
The patch gives better control over the 'converted' metadata LV and
reports a less confusing message during conversion.
The patch set also moves the code for updating min/max into pool_manip.c
for better sharing with the cache_pool code.
Check that type is always defined; if not, make it an explicit internal
error (although logged as debug - so caught only with the proper lvm.conf
setting).
This ensures a later NULL type can't be dereferenced with a coredump.
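A minimal sketch of such a guard (the variable name is illustrative; lvm2's
INTERNAL_ERROR macro just prefixes the message string):

  if (!seg->segtype) {
          log_debug(INTERNAL_ERROR "Segment type is not defined.");
          return 0;
  }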
The DM tree keeps track of created devices while preloading a device tree.
When a failure occurs during such a preload, it will now try to remove
all created and preloaded devices. This makes it easier to maintain
stacking of devices, since we do not need to check in depth for the
existence of all possibly created devices during the failure.
Switch remaining zero-sized structs to flexible arrays to be C99
compliant.
These simple rules should apply:
- The incomplete array type must be the last element within the structure.
- There cannot be an array of structures that contain a flexible array member.
- Structures that contain a flexible array member cannot be used as a member of another structure.
- The structure must contain at least one named member in addition to the flexible array member.
Although some of the code pieces should still be improved.
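For illustration (struct names are made up):

  /* old: zero-sized array, a GNU extension */
  struct old_buffer {
          unsigned size;
          char data[0];
  };

  /* new: C99 flexible array member - must be the last named member */
  struct new_buffer {
          unsigned size;
          char data[];
  };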
dm-integrity stores checksums of the data written to an
LV, and returns an error if data read from the LV does
not match the previously saved checksum. When used on
raid images, dm-raid will correct the error by reading
the block from another image, and the device user sees
no error. The integrity metadata (checksums) are stored
on an internal LV allocated by lvm for each linear image.
The internal LV is allocated on the same PV as the image.
Create a raid LV with an integrity layer over each
raid image (for raid levels 1,4,5,6,10):
lvcreate --type raidN --raidintegrity y [options]
Add an integrity layer to images of an existing raid LV:
lvconvert --raidintegrity y LV
Remove the integrity layer from images of a raid LV:
lvconvert --raidintegrity n LV
Settings
Use --raidintegritymode journal|bitmap (journal is default)
to configure the method used by dm-integrity to ensure
crash consistency.
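For example (illustrative names and sizes):

$ lvcreate --type raid1 -m1 --raidintegrity y --raidintegritymode bitmap -L1G -n rr foo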
Initialization
When integrity is added to an LV, the kernel needs to
initialize the integrity metadata/checksums for all blocks
in the LV. The data corruption checking performed by
dm-integrity will only operate on areas of the LV that
are already initialized. The progress of integrity
initialization is reported by the "syncpercent" LV
reporting field (and under the Cpy%Sync lvs column.)
Example: create a raid1 LV with integrity:
$ lvcreate --type raid1 -m1 --raidintegrity y -n rr -L1G foo
Creating integrity metadata LV rr_rimage_0_imeta with size 12.00 MiB.
Logical volume "rr_rimage_0_imeta" created.
Creating integrity metadata LV rr_rimage_1_imeta with size 12.00 MiB.
Logical volume "rr_rimage_1_imeta" created.
Logical volume "rr" created.
$ lvs -a foo
LV VG Attr LSize Origin Cpy%Sync
rr foo rwi-a-r--- 1.00g 4.93
[rr_rimage_0] foo gwi-aor--- 1.00g [rr_rimage_0_iorig] 41.02
[rr_rimage_0_imeta] foo ewi-ao---- 12.00m
[rr_rimage_0_iorig] foo -wi-ao---- 1.00g
[rr_rimage_1] foo gwi-aor--- 1.00g [rr_rimage_1_iorig] 39.45
[rr_rimage_1_imeta] foo ewi-ao---- 12.00m
[rr_rimage_1_iorig] foo -wi-ao---- 1.00g
[rr_rmeta_0] foo ewi-aor--- 4.00m
[rr_rmeta_1] foo ewi-aor--- 4.00m
New reporting fields (-o), read directly from the kernel:
writecache_total_blocks
writecache_free_blocks
writecache_writeback_blocks
writecache_error
The data_percent field shows used cache blocks / total cache blocks.
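For example (the VG name is illustrative):

$ lvs -o+writecache_total_blocks,writecache_free_blocks,writecache_writeback_blocks,writecache_error vg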
Adds support for the DM_GET_TARGET_VERSION ioctl to dmsetup.
It introduces a new command "target-version" that accepts a list
of targets and prints their versions.
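For example (target names are illustrative):

$ dmsetup target-version thin-pool cache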
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Kernels <2.6.27 don't have the /sys/dev dir - add code for looking
up the device name via a longer search in /sys/block.
This makes commands like 'dmsetup deps -o blkdevname' work.
A recent kernel version, from kernel commit
de7180ff908b2bc0342e832dbdaa9a5f1ecaa33a,
started to report a new flag in the cache status line:
no_discard_passdown
Whenever lvm spots an unknown status it reports:
Unknown feature in status:
So add recognition of this feature flag and also report it with
'lvs -o+kernel_discards'.
When no_discard_passdown is found in the status, 'nopassdown' gets reported
for this field (roughly matching what we report for thin-pools).
lvm2 up to version 2.02.169 (commit 78d004efa8)
was printing an invalid creation_time argument into metadata on 32-bit arch.
However, with commit ba9820b142 we started
to properly validate all input numbers and thus refused to accept
invalid metadata with a 'garbage' string - but this results in the
situation where metadata produced by older lvm2 on a 32-bit architecture
becomes unreadable after an upgrade.
To fix this case, extend the libdm parser in such a way that whenever we
find an erroneous integer value, we also check whether the parsed value is
for the creation_time node, and in this case we let the metadata pass through
with the made-up date 2018-05-24 (release date of 2.02.169).
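A rough sketch of the idea (names and exact placement are assumptions;
1527120000 is just 2018-05-24 00:00 UTC expressed as a time_t):

  if (integer_parse_failed) {
          if (!strncmp(key_start, "creation_time", 13)) {
                  value = 1527120000;          /* substitute 2018-05-24 */
                  /* continue parsing instead of rejecting the metadata */
          } else
                  return 0;                    /* any other bad integer stays fatal */
  }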
When using caches with a BIG pool size (>TB), it is necessary
to use a relatively huge chunk size. Once the chunk size
gets over 1MiB, the kernel cache target stops writing such chunks
back if the migration_threshold remains at the default 1MiB
(2048 sectors) size.
This patch ensures the DM layer will not let pass a table line whose
migration threshold is not big enough to let at least 8 chunks
pass (independently of lvm2 metadata).
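A minimal sketch of the added constraint, with sizes in sectors (names are
illustrative, not the actual lvm2/libdm code):

  /* the cache target needs to be able to migrate at least 8 whole chunks */
  if (migration_threshold < 8 * chunk_size)
          migration_threshold = 8 * chunk_size;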
Since configure.h is a generated header and it's missing the traditional
ifdef preamble, it can be included & parsed multiple times.
Normally the compiler is fine when defines have the same value and there is
no warning - yet we don't need to parse this several times,
and by adding the -include directive we can ensure every file
in the package is correctly compiled with configure.h as the
first header file.
Drop the very old original format of the VDO target and focus on the V2 version.
So some variables were renamed or replaced.
There is no compatibility preserved (with the assumption that so far this is
an experimental feature and there is no real user).
Note - VDO currently calls this version 6.2.
A bit of a chicken & egg problem - dmeventd needs to use the old libdm library,
while VDO is only part of the new device_mapper internal library.
So include the source file for parsing status directly - this fixes the usability
problem of the VDO plugin introduced with the previous Makefile reshaping
patchset.
NOTE: the source file then needs to be kept compilable in both environments.
Also add the missing copyright header.
Ensure configure.h is always the first included header.
Maybe we could eventually introduce the gcc -include option, but for now
this better uses dependency tracking.
Also move _REENTRANT and _GNU_SOURCE into configure.h so they
don't need to be present in various source files.
This ensures consistent compilation of headers like stdio.h, since
they may otherwise produce different declarations.
Patch 668c9d0762 introduced a regression,
since the code here would actually always return a failing result.
Replace it with a simpler call to strndup().
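The replacement boils down to something like the following (variable names
are illustrative; strndup() is declared in <string.h>):

  if (!(copy = strndup(str, len)))   /* duplicates at most 'len' bytes and NUL-terminates */
          return NULL;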
For some targets we do not want to generate dependencies.
Also add a note about the usage of such a Makefile - it might
possibly be better to rename it to a different filename to avoid
any confusion.
Just in case, ensure the compiler is not able to optimize
away the memset() for resources that are being released.
This idea of using a memory barrier is taken from openssl.
Another option would be to check for the 'explicit_bzero' function.
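A sketch of the idea (the openssl-style compiler barrier; not the exact
lvm2 code):

  #include <string.h>

  static void _wipe(void *p, size_t len)
  {
          memset(p, 0, len);
          /* the asm marks the memory as used, so the compiler
           * cannot prove the memset() is dead and drop it */
          __asm__ __volatile__ ("" : : "r" (p) : "memory");
  }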
When preparing the ioctl buffer and flattening all parameters,
add table parameters only to ioctls that actually process them.
Note: the list of ioctls should be kept in sync with the kernel code.
DM_DEVICE_CREATE with a table does several ioctl operations,
however only some of them take parameters.
Since _create_and_load_v4() reused the already existing dm task from
DM_DEVICE_RELOAD, it also kept passing its table parameters
to the DM_DEVICE_RESUME ioctl - but this ioctl is supposed to not take
any argument and thus there is no wiping of passed data - and
since the kernel returns the buffer and shortens dmi->data_size accordingly,
anything past the returned data size remained uncleared by the zfree()
function.
This is a problem if the user used dm_task_secure_data (i.e. cryptsetup),
as in this case the binary expects secured data to be erased from main memory
after use, but they may have been left in place.
This patch also closes a possible hole in the error path,
which also reuses the same dm task structure for DM_DEVICE_REMOVE.
This could be seen as a continuation of 6cee8f1b06.
Some test machines with an old udev system show a problem
where udev 'jumps on' a leg device after the mirror target
releases its legs - since udev (in this old case) does not skip
such a device from scanning - it opens the device - and this prevents
the leg device from being deactivated - effectively such a device stays
'leaked' in the DM table, invisible to lvm2 commands.
So to 'combat' this issue - if the device has '_mimage' in its name,
the retry of deactivation is automatically assumed.
NOTE: wider impact is unexpected - as it's touching only the old mirror
target, which is nowadays replaced with 'raid'.
In case some problem is identified, probably both patches
should be reverted.
With older systems and udevs we don't have control over the scanning of lvm2
internal devices - so far we set retry-removal only for top-level LVs,
but in occasional cases udev can be 'fast enough' to open a device for
scanning and prevent the removal of such a device from the DM table.
So to combat this case - try to pass the 'retry' flag also for the removal of
internal devices, to see how many races go away with this simple
patch.
Note: the patch is applied only to the internal version of libdm, so the
external API keeps working in the old way for now.
If a single, standard LV is specified as the cache, use
it directly instead of converting it into a cache-pool
object with two separate LVs (for data and metadata).
With a single LV as the cache, lvm will use blocks at the
beginning for metadata, and the rest for data. Separate
dm linear devices are set up to point at the metadata and
data areas of the LV. These dm devs are given to the
dm-cache target to use.
The single LV cache cannot be resized without recreating it.
If the --poolmetadata option is used to specify an LV for
metadata, then a cache pool will be created (with separate
LVs for data and metadata.)
Usage:
$ lvcreate -n main -L 128M vg /dev/loop0
$ lvcreate -n fast -L 64M vg /dev/loop1
$ lvs -a vg
LV VG Attr LSize Type Devices
main vg -wi-a----- 128.00m linear /dev/loop0(0)
fast vg -wi-a----- 64.00m linear /dev/loop1(0)
$ lvconvert --type cache --cachepool fast vg/main
$ lvs -a vg
LV VG Attr LSize Origin Pool Type Devices
[fast] vg Cwi---C--- 64.00m linear /dev/loop1(0)
main vg Cwi---C--- 128.00m [main_corig] [fast] cache main_corig(0)
[main_corig] vg owi---C--- 128.00m linear /dev/loop0(0)
$ lvchange -ay vg/main
$ dmsetup ls
vg-fast_cdata (253:4)
vg-fast_cmeta (253:5)
vg-main_corig (253:6)
vg-main (253:24)
vg-fast (253:3)
$ dmsetup table
vg-fast_cdata: 0 98304 linear 253:3 32768
vg-fast_cmeta: 0 32768 linear 253:3 0
vg-main_corig: 0 262144 linear 7:0 2048
vg-main: 0 262144 cache 253:5 253:4 253:6 128 2 metadata2 writethrough mq 0
vg-fast: 0 131072 linear 7:1 2048
$ lvchange -an vg/main
$ lvconvert --splitcache vg/main
$ lvs -a vg
LV VG Attr LSize Type Devices
fast vg -wi------- 64.00m linear /dev/loop1(0)
main vg -wi------- 128.00m linear /dev/loop0(0)
Checks whether VDO support is enabled.
Detects the presence of the 'vdoformat' tool, which is required to format a VDO pool.
ATM the build of VDO support is NOT automatically enabled (None is the default).
To enable the build of LVM with VDO support use:
configure --with-vdo=internal
TODO: Maybe a future version may switch to linking some small VDO library for formatting
(which would require linking and a package dependency).