aux updates:
prepare_vg now creates a clustered VG for cluster tests.
Since dm-raid doesn't work in a cluster, skip the cluster
test whenever the dm-raid target is checked for, until this is fixed.
Reinstate the previous sort order for origin_size, so that LVs with
an empty origin_size continue to appear at the start of the list,
not at the end.
Ref. 9d445f371c
Support vgsplit for VGs with thin pools and thin volumes.
In case the thin data and thin metadata volumes are moved to a new VG,
also move all related thin volumes there and check that external origins
are present in this new VG as well.
We use mpath filtering (enabled by devices/multipath_component_detection=1
lvm.conf setting) to avoid a situation in which we could end up with
duplicate PVs found. We need to filter out the mpath components and
use only the top-level multipath mapping instead for PV scans.
However, if there are partitions on the multipath components, we need
to filter out these partitions too. This patch fixes it so that
partitions found on multipath components are filtered as well.
For example, let's consider the following configuration:
The sda and sdb are mpath components, sda1 and sdb1 the partitions
on these components, mpath-test the mpath mapping and mpath-test1
the partition mapping - created automatically by kpartx right
after mpath-test creation. The PV resides on top.
          (LVM PV)
              |
         mpath-test1
              |
          mpath-test
           /      \
  sda1    |        |    sdb1
      \   |        |   /
       sda          sdb
For example, for sda1 and sdb1 the code detects that they are partitions
on multipath components and skips them:
<snippet from the log>
#filters/filter-mpath.c:156 /dev/sda1: Device is a partition, using primary device /dev/sda for mpath component detection
130 #ioctl/libdm-iface.c:1724 dm status (253:2) OF[16384](*1)
131 #filters/filter-mpath.c:196 /dev/sda1: Skipping mpath component device
</snippet from the log>
Otherwise, we'd see the same PV label on sda1/sdb1 and mpath-test1
at the same time, ending up with "Duplicate PV found...".
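As a rough illustration of the decision the filter now makes, here is a
minimal C sketch; the structures and helper functions are hypothetical
stand-ins for the real filter-mpath.c code, with toy bodies just so the
fragment is self-contained:

    #include <sys/types.h>

    /* Hypothetical stand-ins for the real lvm2 structures and helpers. */
    struct device { const char *name; dev_t devt; };

    /* Stub: the real code asks dev_types/sysfs whether dev is a partition
     * and, if so, which whole device (e.g. sda1 -> sda) it belongs to. */
    static int is_partition_sketch(const struct device *dev, dev_t *primary)
    {
            *primary = dev->devt;           /* toy value only */
            return 1;
    }

    /* Stub: the real code inspects the multipath device-mapper mapping. */
    static int is_mpath_component_sketch(dev_t devt)
    {
            (void) devt;
            return 1;                       /* toy value only */
    }

    /* Return 0 to filter the device out, 1 to keep it for PV scanning. */
    static int passes_mpath_filter_sketch(const struct device *dev)
    {
            dev_t check = dev->devt;
            dev_t primary;

            /* Partitions are judged by their primary device... */
            if (is_partition_sketch(dev, &primary))
                    check = primary;

            /* ...so a partition on an mpath component is rejected too. */
            return !is_mpath_component_sketch(check);
    }

With this kind of check, sda1 and sdb1 from the example above are rejected
for the same reason as sda and sdb, and only mpath-test1 is left to carry
the PV.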
The dev_get_primary_dev fn now returns:
0 if the dev is already a primary dev
1 if the dev is a partition, primary dev is returned in "result" (output arg)
-1 on error
This way, the caller can better differentiate between the error state
and the state in which the supplied dev is not a partition
(previously both shared the same return value).
Also, if we already have information about the device type,
we can check its major number against the list of known device
types (cmd->dev_types) directly, so we don't need to go through
sysfs - we only check the major:minor pair, which is a bit
more straightforward and faster. If dev_types does not have
any info about this device type, the code simply falls back to
the original sysfs interface to get the partition info.
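A condensed sketch of that lookup order and return convention follows;
helper names and bodies are assumptions made for illustration (the real
implementation lives in dev-type.c and is driven by the per-type
partition limits):

    #include <sys/types.h>
    #include <sys/sysmacros.h>              /* major(), minor(), makedev() */

    struct dev_types;                       /* registered majors, see dev-type.h */
    struct device { dev_t devt; };          /* simplified stand-in */

    /* Stub: does dev_types have a record for this major number? */
    static int major_is_registered_sketch(struct dev_types *dt, unsigned maj)
    {
            (void) dt; (void) maj;
            return 1;                       /* toy value only */
    }

    /* Stub: the fallback path reads the partition info from sysfs. */
    static int primary_from_sysfs_sketch(struct device *dev, dev_t *result)
    {
            (void) dev; (void) result;
            return 0;                       /* toy value only */
    }

    /*
     * Returns:  0  dev is already a primary device
     *           1  dev is a partition, primary device returned in *result
     *          -1  on error
     */
    static int dev_get_primary_dev_sketch(struct dev_types *dt,
                                          struct device *dev, dev_t *result)
    {
            /* Fast path: major number known to dev_types, so the primary
             * can be derived from the major:minor pair directly.  (The
             * "minor 0 is the whole disk" rule is a toy simplification.) */
            if (major_is_registered_sketch(dt, major(dev->devt))) {
                    if (!minor(dev->devt))
                            return 0;
                    *result = makedev(major(dev->devt), 0);
                    return 1;
            }

            /* Fallback: no info in dev_types, ask sysfs instead. */
            return primary_from_sysfs_sketch(dev, result);
    }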
Changes:
- move device type registration out of "type filter" (filter.c)
to a separate and new dev-type.[ch] for common use throughout the code
- the major numbers detected for available device types, together with
their partitioning support, are now kept in the "dev_types" structure
- move common partitioning detection code to dev-type.[ch] as well
together with other device-related functions bound to dev_types
(see dev-type.h for the interface)
The dev-type interface contains all common functions used to detect
subsystems/device types, signature/superblock recognition code,
type-specific device properties and other common device properties
(bound to dev_types), including partitioning support.
- add dev_types instance to cmd context as cmd->dev_types for common use
- use cmd->dev_types throughout as a central point for providing
information about device types
Fix the use case when only a PV list is specified.
With --poolmetadatasize, the PV list is used for metadata extents.
Without --poolmetadatasize, the PV list is used for 100% extension of the LV.
Handle the case when nothing could be resized (e.g. in dmeventd).
Add a limit for the buffer so that if a test is running in some loop
and generating lots of output, the log does not get too large
for a browser to display (at least Firefox gets quite confused
by 300MB logs).
For now, limit the maximum output of one test to 32MB.
If the full log is needed, set LVM_TEST_UNLIMITED to
disable this interruption of the test.
harness now also prints the max memory used during a test,
its user and system time, and the number of read/write operations.
Support the 'classic' way of resizing the metadata LV.
Normally we do not allow working with internal 'invisible' devices.
But in this case we can make an exception: if the user has some
special need for how to extend the thin pool metadata LV, support it.
After the resize of the metadata LV, the pool will be suspended and resumed,
so that it is notified of this change.
Add support for lvresize of thin pool metadata device.
lvresize --poolmetadatasize +20 vgname/thinpool_lv
or
lvresize -L +20 vgname/thinpool_lv_tmeta
Where the second one allows all the args for resize (striping, ...)
and the first one resizes according to the last metadata LV segment.
Reporting that a volume is of the 'metadata' type has higher priority
than e.g. the 'mirror' or 'thin' flag - for those types we have
the 'target attr' (7th field).
The special suspend/resume code in lv_remove for LVM1 snapshots was interspersed
with a vg_commit call. However, while with LVM1 metadata, vg_commit is
technically a no-op, the activation code relied on the ondisk and incore
metadata being the same, since on LVM1, a "commit" happens in vg_write
already. Since the "ondisk" metadata was previously not available with format1
(and incore was silently used instead, via lvmcache), the problem was masked.
This ties the two preceding changes together, actually using the "ondisk"
version of VG metadata instead of calling into lvmcache when activating
volumes. The cache hooks are still used as a fallback, because we don't have an
uncached scanning API yet.
Previously, we relied on UUIDs alone, and on lvmcache to make getting a
"new copy" of VG metadata fast. If the code which triggers the activation has
the correct VG metadata at hand (the version which is currently on disk), it can
now hand it to the activation code directly.
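The shape of that change can be sketched roughly as follows; the type and
function names below are hypothetical and only illustrate the calling
convention, not the actual lvm2 interface:

    #include <stddef.h>

    /* Hypothetical stand-ins, not the real lvm2 types. */
    struct volume_group { const char *name; };

    /* Stand-in for the lvmcache lookup that used to be the only path. */
    static struct volume_group *lvmcache_lookup_sketch(const char *vgid)
    {
            (void) vgid;
            return NULL;                    /* toy: nothing cached */
    }

    /* New convention: a caller that already holds the correct on-disk
     * metadata hands it over directly; the cached lookup stays only as
     * a fallback until an uncached scanning API exists. */
    static struct volume_group *vg_for_activation_sketch(struct volume_group *vg_ondisk,
                                                         const char *vgid)
    {
            return vg_ondisk ? vg_ondisk : lvmcache_lookup_sketch(vgid);
    }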