Add support for DM_GET_TARGET_VERSION to dmsetup.
It introduces a new command "target-version" that accepts a list
of targets and prints their versions.
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
(cherry picked from commit 667b33dd3b)
Conflicts:
device_mapper/all.h
device_mapper/ioctl/libdm-iface.c
device_mapper/misc/dm-ioctl.h
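As a usage sketch of the new subcommand described above (the target names here are just examples; the subcommand name comes from the commit text):
# print the version of each named target using DM_GET_TARGET_VERSION
dmsetup target-version thin-pool raid cache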
Due to a dm-raid target flaw fixed in target version 1.15.0,
extents of raid sets don't get resynchronized when new MD bitmap
pages have to be allocated because of the extension.
Introduce lvextend-raid.sh to test this flaw.
Related: rhbz1671964
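A rough sketch of the kind of sequence such a test exercises (VG/LV names and sizes are illustrative and not taken from lvextend-raid.sh itself):
# create a small raid1 LV and let the initial synchronization finish
lvcreate --type raid1 -m 1 -L 256M -n r1 vgtest
lvs -a -o name,sync_percent vgtest
# extend by enough to force allocation of new MD bitmap pages
lvextend -L +2G vgtest/r1
# with dm-raid >= 1.15.0 the extended area must end up resynchronized
lvchange --syncaction check vgtest/r1
lvs -o name,sync_percent,raid_mismatch_count vgtest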
Analogous to the case of a device with no regions, it is not an
error to attempt to list the stats groups on a device that has no
configured groups: just return success and continue.
When lvmetad is being shut down, there were 2 issues:
1st: if there was an endless stream of commands contacting lvmetad, the
process would never finish. So now, once SIGTERM is received, we are no
longer accepting new connections and we just wait till the existing ones
are finished.
2nd: while waiting for all client threads to finish, we basically want
to usleep() and check whether all threads have already finished. The
proper solution would be to signal from a thread that there is work for
the master thread, but that would be a bigger change, and since lvmetad
is no longer developed, keep the change minimal and just use pselect()
as our '1 sec.' sleeping call once we are in 'shutdown' mode.
Reported-by: wangjufeng@huawei.com
The error can be reproduced by following these steps:
#!/bin/bash
vgcreate vgtest /dev/sdb
lvcreate -L 100M -n lv1 vgtest
while :
do
    service lvm2-lvmetad restart
    vgs &
    pvscan &
    lvcreate -L 100M -n lv2 vgtest &
    lvchange /dev/vgtest/lv1 --addtag xxxxx &
    wait
    if ! lvs | grep lv2; then
        echo "err create"
        break
    fi
    sleep 1
    lvremove -y /dev/vgtest/lv2
    lvchange /dev/vgtest/lv1 --deltag xxxxx
done
and then the creation of vgtest/lv2 appears to fail: lv2 was actually
created, but the metadata written on disk was overwritten by lvchange.
lv2 can still be looked up by calling dmsetup table, while lvs cannot see it.
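For example, after a failed iteration of the loop above, the mismatch shows up like this (names follow the script):
dmsetup table | grep vgtest-lv2    # device-mapper still has the lv2 mapping
lvs vgtest                         # but lv2 is missing from the VG metadata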
This is because, when lvmetad is restarted, several lvm commands update
the token concurrently. When lvcreate receives "token_mismatch", it stops
communicating with lvmetad, which means the lvmetad cache is not in sync
with the metadata on disk, so lv2 is not committed to the lvmetad cache.
The metadata of vgtest that lvchange queries from lvmetad is therefore
out of date. After lvchange, the old metadata overwrites the new one.
This patch makes lvm processes update the token synchronously, so only
one command updates the lvmetad token at a time.
lvmetad_pvscan_single sends the metadata of a PV by sending "pv_found"
to lvmetad, but that metadata may be out of date after waiting for the
chance to update the lvmetad token, so call label_read to read the
metadata again.
Since a token mismatch may lead to problems, increase the log level of
that message.
Signed-off-by: wangjufeng <wangjufeng@huawei.com>
Avoid having PVs with different logical block sizes in the same VG.
This prevents LVs from having mixed block sizes, which can produce
file system errors.
The new config setting devices/allow_mixed_block_sizes (default 0)
can be changed to 1 to return to the unrestricted mode.
(cherry picked from commit 0404539edb)
Conflicts:
tools/lvmcmdline.c
tools/toollib.c
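A sketch of how the new setting described above can be enabled (VG and PV names are illustrative):
# permanently, in lvm.conf:
#   devices {
#       allow_mixed_block_sizes = 1
#   }
# or just for one command:
vgextend --config 'devices { allow_mixed_block_sizes = 1 }' vg00 /dev/sdc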
This reverts commit 9e438b4bc6.
The reverted patch also removed the warning which we realized we need
to keep as valuable process information (see related bugzilla below).
In a followup patch, we'll keep the message but avoid bailing out, thus
always allowing lvconvert to try repairing if the 'allocate' fault policy is set.
Related: https://bugzilla.redhat.com/show_bug.cgi?id=1751887
When running pvmove, we need to decide whether pvmove has to do
exclusive or non-exclusive activation of LVs.
Whenever an LV that requires exclusive activation is present, it makes
pvmove exclusive. But when there is an already active LV, e.g. a linear
LV activated in exclusive mode, that also needs to switch pvmove into an
'exclusively activating' pvmove.
When lvm2 activates the layered pool LV (basically to keep the pool
open; the other function used to be 'locking' it to stay in sync with
the DM table), use this LV in read-only mode - this prevents 'write'
access into the data volume content of the thin-pool.
Note: an EMPTY/unused thin-pool is created as a 'public LV' for generic
use by any user who e.g. wishes to maintain the thin-pool and thin LVs himself.
At that moment, the thin-pool appears as a writable LV. As soon as the 1st
thin LV is created, the layer volume appears as a 'read-only' LV from that moment on.
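A small illustration of the behaviour described above (VG/pool names are examples; vg-pool-tpool is the usual name of the pool's layer device):
lvcreate -L 1G -T vg/pool                  # empty pool: a public, writable LV
lvcreate -V 100M -T vg/pool -n thin1       # 1st thin LV: pool becomes a layered LV
dmsetup info vg-pool-tpool | grep State    # layer expected to be ACTIVE (READ-ONLY)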
For a long time there has been a bug in the activation
done by the initial pvscan (which scans all devs to
initialize the lvmetad cache). It was attempting to
activate all VGs, even those that were not complete.
lvmetad tells pvscan when a VG is complete, and pvscan
needs to use this information to decide which VGs to
activate.
When there are problems that prevent lvmetad from being
used (e.g. lvmetad is disabled or not running), pvscan
activation cannot use lvmetad to determine when a VG
is complete, so it now checks if devices are present
for all PVs in the VG before activating.
(The recent commit "pvscan: avoid redundant activation"
could make this bug more apparent because redundant
activations can cover up the effect of activating an
incomplete VG and missing some LV activations.)
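For context, the autoactivation path in question is the one invoked from the udev rule, roughly of this form (the device argument is illustrative):
# scan the newly appeared PV and autoactivate any VG that becomes complete
pvscan --cache -aay /dev/sdb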
Since we need to preserve allocated strings across 2 separate
activation calls of '_tree_action()', we need to use a mem pool
other than dm->mem - but since cmd->mem is released between
individual lvm2 locking calls, we rather introduce a new separate
mem pool just for pending deletes, with an easy-to-see life-span.
(Not using 'libmem' as it would basically keep allocations over
the whole lifetime of clvmd.)
This patch fixes the previous commit where the memory was
improperly used after pool release.
- git describe returns just v2_02_182 instead of e.g.
v2_02_181-44-adf123, so cut needs the -s option.
- there is no dash in VERSION_DM, so we need to split on the space first.
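A shell sketch of the parsing described above (variable names and the exact VERSION_DM layout are illustrative):
# exact tag: 'git describe' prints just v2_02_182; with -s, cut prints nothing
# instead of echoing the whole tag when no '-' delimiter is present
GIT_SUFFIX=$(git describe | cut -s -d- -f2-)
# VERSION_DM contains no dash, e.g. '1.02.154 (2018-12-22)', so split on the space first
DM_BASE=$(echo "$VERSION_DM" | cut -d' ' -f1)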
Use temp files in /run/lvm/vgs_online/ to keep track of when
a VG has been autoactivated by pvscan. When pvscan autoactivates
a VG, it creates a temp file with the VG's name. Before a
subsequent pvscan tries to autoactivate the same VG, it checks
if a temp file exists for the VG name, and if so it skips it.
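A shell sketch of that gating logic (the real check is done in C inside pvscan; the VG name is illustrative):
vg=vgtest
online=/run/lvm/vgs_online/$vg
# create the online file only if it does not exist yet; skip when it already does
if (set -o noclobber; : > "$online") 2>/dev/null; then
    vgchange -aay "$vg"
else
    echo "pvscan: VG $vg already autoactivated, skipping"
fi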
This can commonly happen when many devices appear on the system
at once, which generates several concurrent pvscans. In this case
the first pvscan does initialization by scanning all devices and
activating any complete VGs. The other pvscans would attempt to
activate the same complete VGs again. This extra work could
create a bottleneck of pvscan commands.
If a VG is deactivated by vgchange, the vg online file is removed.
If PVs are then disconnected/reconnected, pvscan will again
autoactivate the VG.
Also, this patch disables the VG refresh that could be called from
pvscan --cache -aay if lvmetad detects metadata inconsistencies.
The role of pvscan should be limited to basic autoactivation, and
any refresh scenarios are special cases that are not appropriate
for automation.
The warning printed by commands retrying an lvmetad connection
has been reduced to once every 10 seconds. New output messages
have been added to pvscan to record when pvscan is falling back
to direct activation of all VGs.
New udev in rawhide seems to be 'dropping' udev rule operations for devices
that no longer exist - while this is 'probably' a bug - it reveals
moments in lvm2 that likely should not run in a single
transaction, where we should wait for a cookie before submitting more work.
TODO: it seems more 'error' paths should always include synchronization
before starting to deactivate 'just activated' devices.
We should probably figure out some 'automatic' solution for this instead
of placing sync_local_dev_name() all over the place...
Support internal removal of a 'cache origin' volume - which we
do not normally expose to a user - however internal processing
loops may hit this condition (depending on the order of listed LVs).
So when this operation is internally requested, we automatically
try to remove its 'holding' LV (the cache LV), which will also
remove the origin.
Drop the 'cluster-only' optimization so we do resume ALL devices
before we try to wait on the cookie before the 'removal' operation.
It's a more correct order of operations - although possibly slightly
less efficient - but until we have a correct list of operations
'in progress' we can't do anything better.
With the previous patch 30a98e4d67 we
started to put devices on a pending_delete list instead
of directly scheduling their removal.
However we have operations like 'snapshot merge' where we are
resuming the device tree in 2 subsequent activation calls - so the
1st such call will still have suspended devices and no chance
to push the 'remove' ioctl.
Since we currently cannot easily solve this by doing just a single
activation call (which would be the preferred solution), we introduce
preservation of pending_delete via the command structure and
then restore it on the next activation call.
This way we still remove devices later - although it might not be
the best moment - this may need further tuning.
Also we don't keep the list of operations in 1 transaction
(unless we do verify udev symlinks) - this could probably
also be made more correct in terms of which 'remove' can
be combined with an already running 'resume'.
Udev debugging is a bit tricky, so to more easily pair the cookie ID,
which is the lowest 16 bits, print the cookie as a hexadecimal number.
This simplifies pairing of processed cookies while the 'higher bit flags'
are changed for the same cookie.
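For illustration, with the cookie now logged in hex, pairing on the 16-bit cookie ID is a simple mask (the value below is made up):
cookie=0x0d4d8405                                  # as it would appear in the debug log
printf 'cookie ID: 0x%x\n' $((cookie & 0xFFFF))    # -> 0x8405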
Resuming an 'error' table entry followed by its direct removal
is now troublesome with the latest udev, as it may skip processing of
udev rules for already 'dropped' device nodes.
As we cannot 'synchronize' with udev while we know we have devices
in a suspended state, rework 'cleanup' so it collects nodes
for removal into a pending_delete list and processes the list with
synchronization once we no longer have any suspended nodes.
Between 'resume' and 'remove' we need to wait for udev to synchronize,
otherwise udev may 'skip' resume event processing if the udev node
is already gone.
When pvmove is finished, we do a tricky operation since we try to
resume multiple different devices that were all joined into 1 big tree.
Currently we use the information from the existing live DM table,
where we can get the list of all holders of the pvmove device.
We look for these nodes (by uuid) in the new metadata, and we now do a full
regular device add into the dm tree structure. All devices should
already be PRELOADed with the correct table before entering the suspend state,
however for correctly working readahead we also need to put the correct info
into the RESUME tree. Since the tables are preloaded, the same table
is skipped on resume, but the correct read ahead is now set.