Using lvm1 metadata with lvmetad is not generally allowed,
but nothing has prevented creating new lvm1 metadata while
lvmetad is in use (error checking is missing in pvcreate/vgcreate).
Various tests use lvm1 with lvmetad and happen to
work because of the missing error checks.
This commit fixes the tests so they won't fail when the
lvm1/lvmetad error checking is fixed. A new variable
LVM_TEST_LVM1 is defined and is used in the scripts to
decide if lvm1 metadata should be tested. LVM_TEST_LVM1
is not defined when lvmetad is being tested, and the
combination of LVM_TEST_LVM1 and LVM_TEST_LVMETAD can
be used to verify the desired lvmetad+lvm1 behavior.
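As a rough sketch of the intended gating (the 'skip' helper and the exact
pvcreate options are assumptions of this note; only the LVM_TEST_LVM1
variable comes from this change), a test script could do:
  test -n "$LVM_TEST_LVM1" || skip
  pvcreate --metadatatype 1 "$dev1"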
Showing 'u' in the pv_attr reporting field is mostly unnecessary because
most PVs are allocatable, and being allocatable implies the PV is (u)sed,
and this is already obvious from other fields in the default 'pvs'
output, like the VG name.
So move the new (u)sed pv_attr from character position 4 to 1, and only
show it in those rare cases when the PV is not (a)llocatable or the
relevant metadata is missing.
(Scripts should not be using pv_attr, but rather pv_allocatable,
pv_exported, pv_missing, pv_in_use etc.)
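For example, a script wanting the 'used' state can query the dedicated
field directly (illustrative invocation):
  $ pvs --noheadings -o pv_in_use /dev/sda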
Add tests for the "dmstats report" command:
* report
* report --count
* report --histogram
So far the tests just check the command runs as expected when a
correctly configured stats region exists: validation of output
can be added later.
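Roughly, the invocations exercised look like this (device name and
arguments are illustrative, not the test's exact ones):
  dmstats report vg-lv
  dmstats report --count 1 vg-lv
  dmstats report --histogram vg-lv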
Add tests for the "dmstats create" command:
* simple whole-device region
* region using --start/--len options
* region using --segments option
* region with precise timestamps (--precise)
* region with histogram bounds (--bounds)
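For illustration, the corresponding commands look roughly like this
(the device name and values are placeholders, and the long option
spellings are assumptions):
  dmstats create vg-lv
  dmstats create --start 0 --length 1024 vg-lv
  dmstats create --segments vg-lv
  dmstats create --precise vg-lv
  dmstats create --bounds 10ms,20ms,30ms vg-lv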
Add initial dmstats tests to 000-basic.sh. These tests ensure that
the dmsetup binary is built and linked correctly when called as
'dmstats' and that the version of the binary matches the expected
library version used for the build.
Ask for confirmation when using pvcreate/pvremove on a PV which is
marked as belonging to a VG, just like we do in the case of a PV which
belongs to a known VG:
$ pvcreate -ff /dev/sda
Really INITIALIZE physical volume "/dev/sda" that is marked as belonging to a VG [y/n]? n
/dev/sda: physical volume not initialized
$ pvremove -ff /dev/sda
Really WIPE LABELS from physical volume "/dev/sda" that is marked as belonging to a VG [y/n]? n
/dev/sda: physical volume label not removed
There are two basic groups of fields for LV segment device reporting:
- related to LV segment's devices: devices and seg_pe_ranges
- related to LV segment's metadata devices: metadata_devices and seg_metadata_le_ranges
The devices and metadata_devices report devices in this format:
"device_name(extent_start)"
The seg_pe_ranges and seg_metadata_le_ranges report devices in
this format:
"device_name:extent_start-extent_end"
This patch partly reverts what commit 7f74a99502
(v 2.02.140) introduced in this area - it added [] around
hidden devices to mark them in all four fields mentioned above.
We won't be marking hidden devices in devices and metadata_devices
fields.
The seg_metadata_le_ranges field will have hidden devices marked -
it's new enough that we don't need to care about compatibility much
yet.
The seg_pe_ranges is old enough that we shouldn't be changing this
one - so we're reverting to not marking hidden devices here.
Instead, there's going to be a new field "seg_le_ranges" which
is going to replace the seg_pe_ranges and it will mark hidden devices -
this is going to be introduced in a patch later.
So in the end we'll end up with:
(LV segment's devices)
devices field with "device_name(extent_start)" format, not marking hidden devices
seg_pe_ranges field with "device_name:extent_start-extent_end" format, not marking hidden devices (deprecated, new seg_le_ranges should be used instead for standardized format)
seg_le_ranges field with "device_name:extent_start-extent_end" format, marking hidden devices
(LV segment's metadata devices)
metadata_devices field with "device_name(extent_start)" format, not marking hidden devices
seg_metadata_le_ranges field with "device_name:extent_start-extent_end" format, marking hidden devices
Also, both seg_le_ranges and seg_metadata_le_ranges will honour the
report/list_item_separator setting which can be used to configure
the delimiter used for list items.
So, to sum it up, we will recommend using the new seg_le_ranges and
seg_metadata_le_ranges fields because they display devices with
standard extent range format, they can mark hidden devices and they
honour the report/list_item_separator setting.
We'll be keeping the devices, seg_pe_ranges and metadata_devices fields
for compatibility.
Thin pool discard mode set in metadata can be different from the one
actually used if any device underneath does not support that mode. Add
kernel_discard report field to make it possible to see this difference.
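For example, the two can be compared side by side (kernel_discard is the
field added here; the rest are existing fields):
  $ lvs -o name,discards,kernel_discard vg/pool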
Handle multiple VGs with the same name.
Simply using the VG name is ambiguous, and
lvmetad requires that the VG uuid be used to
specify which one is meant.
When reading older lvm2 metadata for a cache-pool, we now handle more
extended syntax - basically we want to enter most settings when
actually creating the cached LV.
For this, new validation code has been added. However, older metadata
without the new settings set is now found to be invalid.
Fix it by adding default settings for the mq cache policy
and the writethrough cache mode.
Use delay_dev to slow down mirror sync so we can more
easily check for races and the proper rejection of parallel mirror
leg addition/reduction.
Also expose a failure in mirror allocation of a parallel leg.
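A hedged sketch of the idea (the exact delay_dev argument order is an
assumption of this note, not taken from the test):
  aux delay_dev "$dev2" 0 200    # add ~200ms write delay so sync stays in progress
  lvconvert -m 2 $vg/$lv &       # parallel leg addition racing against the slow sync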
If two PVs have different VGs with the same name
(different uuids), one of the VGs is ignored by
lvmetad. A FIXME exists in lvmetad to find a
better response.
Drop the already tested 'threshold & create' case, which is in
lvextend-thin-full.sh.
Count on the now much faster 'dmeventd' wakeup on the watermark,
as it's now nearly instant after crossing the threshold value.
If the underlying device actually has a bigger read-ahead setting,
let it pass.
But anyway, switch to a 512 stripe size to get a really high read-ahead sector count.
$ lvs -o name,tags vg
LV LV Tags
lvol0
lvol1 mytag
Before this patch:
$ lvs -o name,tags vg -S 'tags=""'
Failed to parse string list value for selection field lv_tags.
Selection syntax error at 'tags=""'.
Use 'help' for selection to get more help.
(and the same for -S 'tags={}' and -S 'tags=[]')
With this patch applied:
$ lvs -o name,tags vg -S 'tags=""'
LV LV Tags
lvol0
(and the same for -S 'tags={}' and -S 'tags=[]')
Somehow raid tests landed in the plain cache test - separate them out
so they properly check for have_raid.
Check that we do not support the stripe option with cache-pool creation.
Certain stacks of cached LVs may have unexpected consequences.
So add a warning function called when an LV is cached to detect
such cases and WARN the user about them - the best we can do at the moment.
Don't abort test when clvmd processes two VGs concurrently.
CLVMD: ioctl/libdm-iface.c:1940 Internal error: Performing unsafe table load while 3 device(s) are known to be suspended: (253:19)
Since our test environment also runs in a non-real-udev world,
it uses the /etc/.cache file with scanned files.
So in this case it is mandatory that the user runs 'vgscan'
after a device reappears in the system.
This 'first' lvm2 command then fixes the metadata (just like vgs did).
With thin-pool kernel target module 1.13, usage of an external origin
with sizes which are not aligned with the thin-pool chunk size
is now supported.
Enable lvm2 support for this and also fix reporting of data_percent
usage for the case when sizes are not aligned.
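For illustration (a hedged example; names and sizes are placeholders,
and the external origin is made read-only, as thin external origins are
expected to be):
  lvcreate -L 9M -n extorig $vg
  lvchange -p r $vg/extorig
  lvcreate -s $vg/extorig --thinpool $vg/pool -n thin_snap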
Set LVM_TEST_THIN_REPAIR_CMD to /bin/false for tests which
don't need it.
This way, even if no such tool is present on the system,
the test will not end with a warning about the missing tool.
Also remove from the Makefile the settings of TEST vars which are set
through /lib/paths - this also allows them to be overridden in a test.
Use 64-bit arithmetic for the PV size calculation (Coverity).
Also remove the sector shift for the compared PV size, since all
values are already held in sectors.
This fixes validation of the PV size when restoring a PV
from a VG metadata backup file.
Introduce LVM_TEST_LVMETAD_DEBUG_OPTS to allow overriding the
default debug opts for lvmetad.
It can still be overloaded on the command line:
make check_lvmetad LVM_TEST_LVMETAD_DEBUG_OPTS="-l all"...
Better name for the aux function.
First use a normal -TERM, and only after a while use -KILL
(leaving some time to finish correctly).
Print INFO about killed processes.
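A minimal sketch of the pattern (timings and variable names are
illustrative, not the test suite's exact code):
  kill -TERM "$pid" 2>/dev/null
  sleep 2                                           # leave some time to finish correctly
  kill -0 "$pid" 2>/dev/null && kill -KILL "$pid"
  echo "INFO: killed process $pid"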
Some tests left dangling background processes originating in
lvm2 commands that are able to spawn background polling processes
(lvchange, vgchange, pvmove, lvconvert...).
The initial fn 'add_to_kill_list' was supposed to collect processes with
specific parameters (the process's command line and parent process ID).
After testing finishes, the fn kill_listed_processes should remove those
listed by 'add_to_kill_list'.
Unfortunately it proved to be error-prone, especially in scenarios
where the command line of the initiating command contained characters
that had to be escaped before being passed to the shell script to make it
work correctly (or if the command spawned more than one background process
with the same command line, e.g. vgchange or lvchange).
The new implementation is much simpler. It uses an environment variable
(LVM_TEST_TAG) to mark a process that should be killed later or during
test environment teardown
(e.g. LVM_TEST_TAG=kill_me_$PREFIX to kill only processes related to
the current test environment).
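A rough illustration of the mechanism (the test suite's actual teardown
code differs; this only shows the idea):
  LVM_TEST_TAG="kill_me_$PREFIX" pvmove -b "$dev1" &
  # teardown: kill every process whose environment carries the tag
  for p in /proc/[0-9]*; do
      grep -qas "LVM_TEST_TAG=kill_me_$PREFIX" "$p/environ" && kill "${p##*/}"
  done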
Put the pvmove background process into the list quickly.
Update the API for aux add_to_kill_list()/kill_listed_processes().
Run in the 'background' (&) only non-background pvmoves.
Since we now have the metadata parts running at normal speed,
we can avoid reinitialising the delayed dev for every test
(saving seconds on cookie waits...).
We cannot print TEST WARNING from within the test shell script
(since it runs in debug mode and would thus always print it).
Use the 'should false' trick to let the string be printed in this case.
The 'non-cluster cases' should now end properly.
If the kernel has the 'new lcm()' (3.19), it provides a wrong
optimal_io_size value for the dm device, so the lvm2 command cannot
create properly aligned devices.
Use 'should' for this case - so the test ends with 'TEST WARNING'.
Improve the test for the condition that pvmove0 is already running in the
table (so we do not kill pvmove while it has a loaded target but
is not yet Live).
Also delay_dev for 200ms.
There is no reason to support persistent major/minor numbers
for pool volumes - it's only meant to be supported for filesystems
(since e.g. nfs may need to keep a volume on a persistent device node).
Support for pools is now explicitly disabled and documented.
Metadata areas which are marked as ignored should not be scanned
and read during pvscan --cache. Otherwise, this can cause lvmetad
to cache out-of-date metadata in case other PVs with fresh metadata
are missing by chance.
Make this work like the non-lvmetad case, where the behaviour is
the same as if the PV were an orphan (in case we have no other PVs
with valid non-ignored metadata areas).
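For reference, an ignored metadata area can be produced for example with
(illustrative invocation):
  $ pvcreate --metadataignore y /dev/sda
pvscan --cache must then skip reading metadata from such a PV.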
Simplify the function usage and clean up parameter parsing.
There were 2 significant changes made in the test itself
(they passed before because of incorrect shell string handling)
-pvs_sel 'tags="pv_tag1"' "$dev1 $dev2"
+sel pv 'tags="pv_tag1"' "$dev1" "$dev6"
-lvs_sel '(lv_name=vol1 || lv_name=vol2) || vg_tags=vg_tag1' "vol1 vol2 abc orig snap"
+sel lv '(lv_name=vol1 || lv_name=vol2) || vg_tags=vg_tag1' vol1 vol2 orig snap xyz
The string reported by uname -n may include characters
that lvm omits from the system id (like parens, as seen
on a test machine.) Check against the final system id
string that lvm uses.
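For example, rather than comparing against raw 'uname -n' output, a test
can compare against what lvm itself reports (the output format shown here
is approximate):
  $ lvm systemid
    system ID: <normalized string>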
Though vgremove operates per VG by definition, internally it
actually means iterating over each LV the VG contains to do the
removal.
So we need to direct selection a bit in this case so that the
selection is done per-VG, not per-LV.
That means, use processing handle with void_handle.internal_report_for_select=0
for the process_each_lv_in_vg that is called later in vgremove_single fn.
We need to disable internal selection for process_each_lv_in_vg
here as selection is already done by process_each_vg which calls
vgremove_single. Otherwise selection would be done per-LV and not
per-VG as we intend!
An intra-release fix for commit 00744b053f.
pvchange now uses process_each_pv so uncomment parts of the test
which check proper functionality of intersection between selection
result and PVs or PV tags directly provided on command line. This
didn't work properly before when pvchange was not using process_each_pv.
For example:
pvchange -u -S 'pv_name=/dev/sda' /dev/sdb
..changes nothing since clearly the intersection of /dev/sda and
/dev/sdb is the empty set. The same applies for tags:
pvchange -u -S 'pv_name=/dev/sda' @some_tag
..changes nothing if /dev/sda is not tagged with some_tag.
When repairing a thin pool or swapping thin pool metadata,
preserve the chunk_size property and avoid having it automatically changed
later in the code to better match the thin pool metadata size.
When a raid leg is extracted, the preload code now handles this state
correctly and puts a proper new table entry into the dm tree,
so the activation of the extracted leg and removed metadata works
after commit.
It's not an error if the device is filtered out and hence cleared from
the lvmetad cache - "pvscan --cache DevPath" now has the same behaviour in
this case as "pvscan --cache major:minor" (which is more consistent).
Before, the tests expected a failure return code for "pvscan --cache DevicePath"
if the device was filtered (which is a different situation from the device
being missing from the system completely!).
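Illustrative invocations of the two forms that now behave the same for a
filtered device:
  $ pvscan --cache /dev/sda
  $ pvscan --cache 8:0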
Normally, if there are partitions defined on top of a device-mapper
device, there should be a device-mapper device created for each
partition on top of the old one, and once the underlying DM device
is used by other devices (partition mappings in this case),
it can't be used as a PV anymore.
However, sometimes it may happen that the partition mappings are
missing - either the partitioning tool does not create them because
it lacks full support for device-mapper devices, or
the mappings were removed.
Better safe than sorry - check for a partition header on DM devs
and filter them out as unsuitable for PVs if the check is
positive. Whatever the user is doing, let's do our best to prevent
unwanted corruption (...by running pvcreate on top of such a device,
which would corrupt the partition header).
We have to use an empty list, not NULL, if we want to denote that the list
has no items. Otherwise, the code further on can segfault as it expects
there is always a sane value (= some list), possibly an empty list,
but never NULL.
When we split a leg from a raid LV, we take a proper new lock for the new LV.
However, for now activation checks only the 'existence' of the device UUID,
but it does not validate that the device has the proper name.
As a quick fix, call suspend()/resume() to rename the device after splitting the mirror.
When the chunk size needs to be estimated, the code failed to round
to proper 64KiB boundaries (or a power of 2 for the older thin pool driver).
So for some data and metadata sizes (e.g. 10GB and 4MB) it resulted
in an incorrect chunk size (not a multiple of 64KiB).
Fix it by adding proper rounding and also use one routine for the two places
where the same calculation is made.
Also fix an incorrect printed warning that used 'ffs()'
(which returns the first 'least significant' bit in a word)
and was not really giving any useful size info; replace it
with the properly estimated chunk size.
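A minimal sketch of the rounding (shell arithmetic, sizes in KiB,
illustrative only):
  estimated_kb=1000
  chunk_kb=$(( (estimated_kb + 63) / 64 * 64 ))   # rounds up to the next 64KiB multiple -> 1024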