Since the metadata parts now run at normal speed,
we can avoid reinitialising the delayed dev for every test.
(This saves seconds on cookie waits...)
Use common code for error_dev & delay_dev.
Both functions now take a list of sectors.
From now on we can delay just the 'extent' section while
keeping lvm commands running fast (the metadata area stays native).
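For illustration, the new calling convention looks roughly like this (the exact
helper signature and the sector range are assumptions, not taken from aux.sh):
  # delay I/O by 200ms only for sectors from 2048 onward, i.e. the extent area (layout assumed)
  aux delay_dev "$dev1" 0 200 2048:
  # map the same sector range to an error target instead (signature assumed)
  aux error_dev "$dev1" 2048: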
We cannot print TEST WARNING within the test shell script
(since it runs in debug mode and would thus always print the string).
Use the 'should false' trick to get the string printed only in this case.
'Non-cluster' cases should now end properly.
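A minimal sketch of the trick, assuming 'should' downgrades a failing command
to a TEST WARNING result (the guard command here is purely illustrative):
  # instead of echoing the literal string (which 'set -x' tracing would always reveal),
  # force a 'should'-guarded failure so the harness itself reports TEST WARNING
  some_optional_check || should false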
If the kernel has the 'new lcm()' (3.19), it provides a wrong
optimal_io_size value for the dm device, so the lvm2 command cannot
create properly aligned devices.
Use 'should' for this case - so the test ends with 'TEST WARNING'.
Improve the test for the condition that pvmove0 is already running in the
table (so we do not kill pvmove while it has its target loaded
but is not yet Live).
Also delay_dev by 200ms.
When a /dev/loopX device is used, shift the first PV1 sector by 1M
so /dev/loop0 and the dm device do not appear as the same device.
Also notify lvmetad once the 'devs' are created, so if this
command is called in the middle of a test, lvmetad properly
drops its metadata for these devices.
Drop the used test.img file between reuses so 'prepare_vg'
always starts with zeroed disks.
When LVM_TEST_AUX_TRACE is set, allow shell tracing of aux commands.
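For example (the make target and test name below are just the usual invocation,
given here as an assumption):
  # trace the aux helper commands of a single test run
  LVM_TEST_AUX_TRACE=1 make check_local T=pvmove-basic.sh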
There is no reason to support persistent major/minor numbers
for pool volumes - this is only meant to be supported for filesystem
volumes (since e.g. NFS may need to keep the volume on a persistent device node).
Support for pools is now explicitly disabled and documented.
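For example, a request like the following on a pool LV is now rejected
(the device numbers are arbitrary):
  # persistent device numbers are explicitly refused for pool volumes
  lvchange --persistent y --major 253 --minor 70 vg/pool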
Metadata areas which are marked as ignored should not be scanned
and read during pvscan --cache. Otherwise, this can cause lvmetad
to cache out-of-date metadata if other PVs with fresh metadata
happen to be missing.
Make this work like the non-lvmetad case, where the behaviour is the
same as if the PV were an orphan (when we have no other PVs
with valid, non-ignored metadata areas).
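As an illustration of the intent (a sketch, not the actual test commands):
  pvcreate --metadataignore y "$dev1"   # the MDA on $dev1 is marked ignored
  pvscan --cache "$dev1"                # must not push stale metadata from $dev1 into lvmetad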
Simplify the function usage and clean up parameter parsing.
There were 2 significant changes made to the test itself
(these passed before because of incorrect shell string handling):
-pvs_sel 'tags="pv_tag1"' "$dev1 $dev2"
+sel pv 'tags="pv_tag1"' "$dev1" "$dev6"
-lvs_sel '(lv_name=vol1 || lv_name=vol2) || vg_tags=vg_tag1' "vol1 vol2 abc orig snap"
+sel lv '(lv_name=vol1 || lv_name=vol2) || vg_tags=vg_tag1' vol1 vol2 orig snap xyz
Avoid busy-looping on the CPU while reading the socket pipe;
only call read when select reports there is
something to read.
Change the batch output back to the old, nicer output.
The string reported by uname -n may include characters
that lvm omits from the system id (like parentheses, as seen
on a test machine). Check against the final system id
string that lvm uses.
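In other words, compare against lvm's own notion of the id (a sketch; assumes
system_id_source is set to "uname"):
  lvm systemid   # reports the system id with disallowed characters already stripped
  uname -n       # may still contain characters that lvm omits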
Though vgremove operates per VG by definition, internally it
actually iterates over each LV the VG contains to do the
removal.
So we need to steer selection a bit in this case so that the
selection is done per-VG, not per-LV.
That means using a processing handle with void_handle.internal_report_for_select=0
for the process_each_lv_in_vg call made later in the vgremove_single fn.
We need to disable internal selection for process_each_lv_in_vg
here as selection is already done by process_each_vg, which calls
vgremove_single. Otherwise selection would be done per-LV and not
per-VG as we intend!
An intra-release fix for commit 00744b053f.
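The user-visible effect: selection given to vgremove applies to whole VGs,
e.g. (a sketch):
  # -S selects VGs; every LV inside a selected VG is then removed without
  # being re-filtered by per-LV selection
  vgremove --yes -S 'vg_name=vg1'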
pvchange now uses process_each_pv, so uncomment the parts of the test
which check the proper functioning of the intersection between the selection
result and the PVs or PV tags provided directly on the command line. This
didn't work properly before, when pvchange was not using process_each_pv.
For example:
pvchange -u -S 'pv_name=/dev/sda' /dev/sdb
..changes nothing since the intersection of /dev/sda and
/dev/sdb is clearly an empty set. The same applies to tags:
pvchange -u -S 'pv_name=/dev/sda' @some_tag
..changes nothing if /dev/sda is not tagged with some_tag.
When repairing a thin pool or swapping thin pool metadata,
preserve the chunk_size property and avoid having it automatically changed
later in the code to better match the thin pool metadata size.
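For example, both affected paths should now leave chunk_size untouched
(a sketch of the two operations):
  lvconvert --repair vg/pool                               # repair thin pool metadata
  lvconvert --thinpool vg/pool --poolmetadata vg/new_meta  # swap in a replacement metadata LV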
When a raid leg is extracted, the preload code now handles this state
correctly and puts a proper new table entry into the dm tree,
so activation of the extracted leg and removed metadata works
after commit.
It's not an error if the device is filtered out and hence cleared from
the lvmetad cache - "pvscan --cache DevicePath" now has the same behaviour in
this case as "pvscan --cache major:minor" (which is more consistent).
Before, the tests expected a failure return code from "pvscan --cache DevicePath"
if the device was filtered (which is a different situation from the device
being missing from the system completely!).
Normally, if there are partitions defined on top of a device-mapper
device, there should be a device-mapper device created for each
partition on top of the old one, and once the underlying DM device
is used by other devices (partition mappings in this case),
it can't be used as a PV anymore.
However, sometimes the partition mappings may be
missing - either the partitioning tool did not create them because
it does not fully support device-mapper devices, or
the mappings were removed.
Better safe than sorry - check for a partition header on DM devs
and filter them out as unsuitable for PVs if the check is
positive. Whatever the user is doing, let's do our best to prevent
unwanted corruption (...from running pvcreate on top of such a device,
which would corrupt the partition header).
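For example (a sketch; the device name is illustrative):
  # a DM device carrying a partition table but with no partition mappings on top
  parted -s /dev/mapper/mpatha mklabel msdos mkpart primary 1MiB 100MiB
  pvcreate /dev/mapper/mpatha   # now filtered out instead of clobbering the partition header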
We have to use an empty list, not NULL, if we want to denote that the list
has no items. Otherwise, code further on can segfault as it expects
there is always a sane value (= some list, possibly empty),
but never NULL.
When we split a leg from a raid, we take a proper new lock for the new LV.
However, for now activation only checks for the 'existence' of the device UUID;
it does not validate that the device has the proper name.
As a quick fix, call suspend()/resume() to rename after the mirror split.
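The affected operation, for reference (a sketch; names are illustrative):
  # split one image off a raid1 LV into a new LV; the quick fix makes sure the
  # split-off device also gets its proper name, not just a matching UUID
  lvconvert --splitmirrors 1 --name split_leg vg/raid1_lv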
When the chunk size needs to be estimated, the code failed to round
to proper 64KB boundaries (or a power of 2 for the older thin pool driver).
So for some data and metadata sizes (e.g. 10GB and 4MB) it resulted
in an incorrect chunk size (not a multiple of 64KB).
Fix it by adding proper rounding, and use one routine for the two places
where the same calculation is made.
Also fix the incorrectly printed warning that used 'ffs()'
(which returns the first 'least significant' set bit in a word)
and so was not really giving any useful size info; replace it
with the properly estimated chunk size.
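The case above can be observed like this (a sketch):
  lvcreate -L10G --poolmetadatasize 4M --thinpool vg/pool
  lvs --noheadings -o chunk_size vg/pool   # must now be a multiple of 64KB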
Make sure there is a 'control' node before clvmd is started.
Somehow 'clvmd' is not allowed by selinux to create one.
TODO: Check whether the selinux policy is right here...
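A sketch of the workaround (assuming any libdevmapper-based command creates the
node when it is missing):
  dmsetup version >/dev/null   # creates /dev/mapper/control if it does not exist yet
  clvmd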
Since we support a device stack of pools over a pool
(a thin-pool with a cache data volume), the existing code
is no longer able to detect an orphan _pmspare.
So instead do the _pmspare check after volume removal,
and remove the spare afterwards.
Fixed syntax parsing means that some commands that used to work are now
failing. In particular these cases:
$ invalid lvcreate -l1 --type thin vg/pool
> Needs to fail because a thin type LV needs --virtualsize
$ invalid lvcreate --type snapshot vg/lv1
> Needs to fail because the old-snapshot segment type needs --size
Some reported error messages have also been updated.
If we want to support conversion of a VG to clustered type,
we currently need to relock active LVs to get proper DLM locks.
So add an extra loop after the change of the VG clustered attribute
to exclusively activate all active top-level LVs.
When doing the change -cy -> -cn we should validate that LVs are not
active on other cluster nodes - we can only be sure about this
with local exclusive activation - for other activation types
we require the user to deactivate the volumes first.
As a workaround for this limitation there is always
locking_type = 0, which among other things skips the detection
of active LVs.
FIXME:
clvmd should handle locks for the cluster locking type all the time.
While we could probably reacquire some type of lock when
going from a non-clustered to a clustered vg, we don't have any
simple road back to drop the lock and keep the LV active.
For now keep it safe and prohibit the conversion when an LV
is active in the VG.
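In practice the conversions then look like this (a sketch of the behaviour
described above):
  vgchange -cy vg   # to clustered: active top-level LVs are re-locked exclusively
  vgchange -an vg   # before -cn: deactivate first unless LVs hold local exclusive locks
  vgchange -cn vg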
We used to print an error message whenever we tried to deal with devices that
lvmetad knew about but were rejected by a client-side filter. Instead, we now
check whether the device is actually absent or only filtered out and only print
a warning in the latter case.
Commit 5ebff6cc9f seemed to introduce a
new 'for' loop, but the mode is not yet used.
Access to the /dev dir needs to go through $DM_DEV_DIR,
and the whole path needs to be in double quotes.
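I.e. test scripts should refer to the device like this (variable names are the
usual ones from the test suite, used here as an illustration):
  # go through $DM_DEV_DIR and keep the whole path quoted
  test -e "$DM_DEV_DIR/$vg/$lv1"
  # not: test -e /dev/$vg/$lv1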
Since the type of the passed LV is changed and its data content destroyed,
prompt the user to confirm this operation.
Also add proper wiping of the header.
Using '--yes' will skip this prompt:
lvconvert -s --yes vg/lv vg/lvcow
Commit 33d69162e4 reduced the number of
PVs to a level where the test could not function. (It is impossible
to replace 3 PVs of a 4-way RAID1 LV if there are only 5 PVs.)
When repairing RAID LVs that have multiple PVs per image, allow
replacement images to be reallocated from the non-failed PVs
of the image if there is sufficient space.
This allows for scenarios where a 2-way RAID1 is spread across 4 PVs,
where each image lives on two PVs but doesn't use the entire space
on any of them. If one PV fails and there is sufficient space on the
remaining PV in the image, the image can be reallocated on just the
remaining PV.
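The scenario above, sketched as test commands (sizes and the failed device are
illustrative):
  # 2-way raid1 where each image spans two PVs
  lvcreate --type raid1 -m 1 -l 6 -n $lv1 $vg "$dev1" "$dev2" "$dev3" "$dev4"
  aux disable_dev "$dev2"
  # the replacement image may now be allocated from the space left on "$dev1"
  lvconvert --yes --repair "$vg/$lv1"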
I've changed build_parallel_areas_from_lv to take a new parameter
that allows the caller to build parallel areas by LV vs by segment.
Previously, the function created a list of parallel areas for each
segment in the given LV. When it came time for allocation, the
parallel areas were honored on a segment basis. This was problematic
for RAID because any new RAID image must avoid being placed on any
PVs used by other images in the RAID. For example, if we have a
linear LV that has half its space on one PV and half on another, we
do not want an up-convert to use either of those PVs. It should
especially not wind up with the following, where the first portion
of one LV is paired up with the second portion of the other:
------PV1------- ------PV2-------
[ 2of2 image_1 ] [ 1of2 image_1 ]
[ 1of2 image_0 ] [ 2of2 image_0 ]
---------------- ----------------
Previously, it was possible for this to happen. The change makes
it so that the returned parallel areas list contains one "super"
segment (seg_pvs) with a list of all the PVs from every actual
segment in the given LV and covering the entire logical extent range.
This change allows RAID conversions to function properly when there
are existing images that contain multiple segments that span more
than one PV.
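The user-visible rule from the linear example above, sketched (sizes are
illustrative):
  # linear LV with segments on two PVs
  lvcreate -n $lv1 -l 4 $vg "$dev1" "$dev2"
  # the new raid1 image must avoid both "$dev1" and "$dev2" entirely,
  # not just the PVs of whichever segment is being processed
  lvconvert --yes --type raid1 -m 1 "$vg/$lv1"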