Fix bug in table reload for clustered VG.
The function used by 'lv/vgchange --monitor y|n' takes a 'somewhat' hackish
shortcut and calls the activation function monitor_dev_for_events()
directly, instead of going through a full device refresh, to avoid a table
reload when the user only wants to change the monitoring state of a device.
However, with old 'mirrors' there is a table change that drops
handle_error, so in some cases a table update was needed.
This was put into monitor_dev_for_events() on the assumption that
vg_write_lock_held() could be used to decide whether the actual monitoring
change comes from the read-only lock-holding commands lvchange/vgchange.
However, the clustered locking layer (clvmd) does not differentiate
between having the VG opened in read-only or write mode,
so this caused unwanted reloads of mirror tables even in commands like lvconvert.
Thus, for clustered locking a full table reload is now performed, and the
shortcut applies only to non-clustered locking.
The patch also avoids a repeated LV refresh when lv/vgchange is used
with both --refresh and --monitor.
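A minimal sketch of the resulting decision (illustrative shape only, not
the exact lvm2 code):

    if (locking_is_clustered()) {
        /* clvmd cannot tell read-only from write VG locks,
         * so always go through the full refresh (table reload) */
        if (!lv_refresh_suspend_resume(cmd, lv))
            return 0;
    } else if (!monitor_dev_for_events(cmd, lv, NULL, monitor))
        return 0;  /* non-clustered: keep the direct shortcut */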
When using 'lvcreate -l100%VG' and there is a big disproportion between
the really available space and the requested setting, automatically fall
back to 100%FREE.
The difference shows when the VG is big and most of its space is already
allocated: requesting 100%VG can then end up (correctly, per the spec for
the % modifier) as an LV with a size of 1%VG. Usually this is not a big
problem, but in some cases, like cache-pool allocation, it can result in
a big difference for chunk size selection.
With this patch the behavior more closely matches common-sense logic,
without reiterating too-big changes in the lvm2 core ATM.
TODO: in the future there should be an allocator solving all allocations
in a single call.
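The fallback can be sketched like this, assuming the %VG request is
simply capped at the VG's free extents (variable names illustrative):

    /* If 100%VG asks for more extents than are really free,
     * behave as if 100%FREE had been requested instead. */
    if (percent == PERCENT_VG && extents > vg->free_count)
        extents = vg->free_count;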
Whenever the thin-pool chunk size is unspecified and left for lvm to
calculate, try to select the size as the nearest higher power of 2
instead of just a multiple of 64KiB. When the value is bigger than 1MiB,
keep using a 1MiB multiple.
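The selection rule can be sketched as a self-contained helper (an
approximation, not the exact lvm2 routine; assumes a non-zero input):

    #include <stdint.h>

    static uint64_t select_chunk_size(uint64_t bytes)
    {
        const uint64_t mib = 1024 * 1024;

        /* above 1MiB: keep rounding up to a 1MiB multiple */
        if (bytes > mib)
            return (bytes + mib - 1) & ~(mib - 1);

        /* otherwise: round up to the nearest higher power of 2 */
        bytes--;
        bytes |= bytes >> 1;
        bytes |= bytes >> 2;
        bytes |= bytes >> 4;
        bytes |= bytes >> 8;
        bytes |= bytes >> 16;
        bytes |= bytes >> 32;
        return bytes + 1;
    }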
Udev runs udev-rule actions upon 'resume'.
However, in a special case lvm2 replaces a 'soon-to-be-removed' device
with an 'error' target for the resume, and the actual removal follows.
The sequence is usually quick, so when udev starts its action it can
result in a 'strange' error message in the kernel log like:
Process '/usr/sbin/dmsetup info -j 253 -m 17 -c --nameprefixes --noheadings --rows -o name,uuid,suspended' failed with exit code 1.
To avoid this, we need to ensure there is a synchronization wait for udev
between the 'resume' and 'remove' parts of this process.
The existing code, however, strictly required avoiding udev
synchronization inside a critical section; this originally came from the
requirement not to do anything special while there could be devices in a
suspended state. Now we are able to tell the difference between a critical
section with and without suspended devices. For udev synchronization only
suspended devices are prohibited, so slightly relax the condition and
allow calling 'fs_sync()' even inside a critical section, as long as
there is no suspended device.
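The relaxed condition boils down to something like this (treating the
exact check as an assumption; dm_get_suspended_counter() counts devices
currently held suspended by libdm):

    /* udev sync is now allowed inside a critical section,
     * as long as no device is held in a suspended state */
    if (!dm_get_suspended_counter())
        fs_sync();  /* wait for udev between 'resume' and 'remove' */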
When the data is growing, also adapt the size of the metadata.
We get way too many reports from users doing huge growths of the data
portion while keeping metadata small and avoiding monitoring.
So, to improve the user experience when a user requests growth of a
thin-pool (without passing a PV list for the growth), lvm2 will
automatically grow the metadata part of the thin-pool as well (if possible).
Add a function that estimates the thin-pool metadata size for a given
size of data. The function uses already existing internal API, so it can
be reused for resizing thin-pool data.
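As a rough rule of thumb (the real internal helper applies its own
minimum and maximum clamps), thin-pool metadata needs on the order of
64 bytes per data chunk:

    #include <stdint.h>

    /* Approximate estimate: ~64 bytes of metadata per data chunk. */
    static uint64_t estimate_thin_metadata_bytes(uint64_t data_bytes,
                                                 uint64_t chunk_bytes)
    {
        return data_bytes / chunk_bytes * 64;
    }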
Since we are checking _shutdown_requested, we expect all threads to be
finished; that is currently verified only by checking the ->next pointer
being NULL, and it can be NULL only once the _reap() function has cleared
out all already-finished threads.
I find this design quite problematic at its core, but as a
'trivial hotfix', let's _reap() the linked list before checking for the
signal. There is likely still potential for a few races, but the window is
very, very small; since lvmetad has already been purged from upstream,
let's go with this hotfix.
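The reordering is essentially (names as above, structure illustrative):

    _reap();  /* clear out already-finished threads first */
    if (_shutdown_requested && !_thread_registry.next)
        break;  /* only now can the list truly be empty */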
The master branch uses activate_lv only, but on the stable branch
activate_lv_excl_local was used. This patch restores the constraints.
Fix regression from bad cherry pick: 9b04851fc5
Trying to convert an existing RAID1 LV to a cache metadata SubLV
fails because of missing exclusive activation during wiping.
Solve this by activating the RAID1 metadata SubLV exclusively.
Resolves: rhbz1698866
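The restored wiping path then looks roughly like this, using the
function names mentioned above (error handling elided):

    /* Wiping requires the rmeta LV active exclusively and locally. */
    if (!activate_lv_excl_local(cmd, meta_lv))
        return 0;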
To avoid a tiny race between checking for the arrival of a signal and
entering select() (which can then remain stuck because the signal was
already delivered), switch to pselect().
If needed, we can eventually add extra code for older systems without
pselect(), but there are probably no such ancient systems in use.
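The classic shape of the fix: keep the signal blocked while testing the
flag, and let pselect() atomically apply the original mask only while it
sleeps. A minimal self-contained sketch (the daemon's real loop differs):

    #include <errno.h>
    #include <signal.h>
    #include <sys/select.h>

    static volatile sig_atomic_t _shutdown_requested;

    static void _handler(int sig) { _shutdown_requested = 1; }

    static void wait_loop(int fd)
    {
        sigset_t blocked, orig;

        sigemptyset(&blocked);
        sigaddset(&blocked, SIGTERM);
        signal(SIGTERM, _handler);
        /* keep SIGTERM blocked outside the wait itself */
        sigprocmask(SIG_BLOCK, &blocked, &orig);

        while (!_shutdown_requested) {  /* no race: signal blocked here */
            fd_set rfds;
            FD_ZERO(&rfds);
            FD_SET(fd, &rfds);
            /* SIGTERM can only be delivered while pselect() sleeps */
            if (pselect(fd + 1, &rfds, NULL, NULL, NULL, &orig) < 0) {
                if (errno == EINTR)
                    continue;  /* interrupted: recheck the flag */
                break;
            }
            /* ... handle fd ... */
        }
    }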
We already used Conflicts=shutdown.target to stop LVM2 services on shutdown.
But we still missed the ordering: shutdown.target should be reached
only after all the services are really stopped.
Reported here: https://github.com/lvmteam/lvm2/issues/17
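This is the standard systemd pattern; a sketch of the relevant [Unit]
directives for such a service:

    [Unit]
    # stop the service when shutdown.target is queued ...
    Conflicts=shutdown.target
    # ... and only reach shutdown.target after the stop completes
    Before=shutdown.target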
There's a small window during creation of a new RaidLV when
rmeta SubLVs are made visible to wipe them in order to prevent
erroneous discovery of stale RAID metadata. In case a crash
prevents the SubLVs from being committed hidden after such
wiping, the RaidLV can still be activated with the SubLVs visible.
During deactivation though, a deadlock occurs because the visible
SubLVs are deactivated before the RaidLV.
The patch adds _check_raid_sublvs to the raid validation in merge.c and
an activation check to activate.c (paranoid, because the merge.c check
will prevent activation in case of visible SubLVs), and it shares the
existing wiping function _clear_lvs in raid_manip.c, moved to lv_manip.c
and renamed to activate_and_wipe_lvlist, to remove code duplication.
While at it, introduce activate_and_wipe_lv to share with
(lvconvert|lvchange).c.
Resolves: rhbz1633167
(cherry picked from commit dd5716ddf2)
Conflicts:
WHATS_NEW
lib/activate/activate.c
lib/metadata/lv_manip.c
lib/metadata/raid_manip.c
tools/lvchange.c
tools/lvconvert.c
When pvscan needs to initialize lvmetad (e.g. lvmetad has just
started and is empty), it should set the lvmetad state to "updating"
before it scans any devices. Otherwise, many parallel pvscans
will try to initialize lvmetad, and in some cases earlier pvscans
with less device information may replace newer pvscans that carry
more recent information.
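The intended ordering, with hypothetical function names for
illustration:

    /* Announce the update before scanning, so parallel initializers
     * cannot overwrite newer device info with an older, smaller scan. */
    set_lvmetad_state("updating");
    scan_all_devices();
    send_scan_results();
    set_lvmetad_state("done");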
Use the correct loop variable within the loop, instead of reusing the
initial value; otherwise, table lines after the first don't get
terminated in the right place.
Signed-off-by: Kurt Garloff <kurt@garloff.de>
(cherry picked from commit ccfbd505fe)
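A self-contained reduction of the bug pattern (not the actual lvm2
code): the initial value is used inside the loop where the loop variable
was meant.

    #include <stddef.h>

    struct line {
        char *buf;
        size_t len;
        struct line *next;
    };

    static void terminate_lines(struct line *first)
    {
        for (struct line *l = first; l; l = l->next)
            l->buf[l->len] = '\0';  /* bug was: first->buf[first->len] */
    }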
Just like with the preceding lvm2 device_mapper patch, ensure
that old users of libdm also get the fixed migration threshold
for caches.
(cherry picked from commit 74ae1c5bc1)
Conflicts:
WHATS_NEW_DM
Just in case, ensure the compiler is not able to optimize the
memset() away for resources that are being released.
This idea of using a memory barrier is taken from openssl.
Another option would be to check for the 'explicit_bzero' function.
(cherry picked from commit 55a8d6c86b)
Conflicts:
device_mapper/ioctl/libdm-iface.c
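The openssl-inspired trick routes the call through a volatile function
pointer, so the compiler cannot prove the store is dead (a sketch, not
the exact lvm2 code):

    #include <string.h>

    /* The compiler must assume the pointer may change at any time,
     * so the zeroing call cannot be optimized away. */
    static void *(*volatile _wipe_fn)(void *, int, size_t) = memset;

    static void wipe_memory(void *buf, size_t len)
    {
        _wipe_fn(buf, 0, len);
    }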
When preparing the ioctl buffer and flattening all parameters,
add table parameters only to the ioctls that actually process them.
Note: the list of ioctls should be kept in sync with the kernel code.
(cherry picked from commit 43f8da7699)
Conflicts:
WHATS_NEW_DM
device_mapper/ioctl/libdm-iface.c
Expose DM_DEVICE_ARM_POLL via standard API enum.
(cherry picked from commit 1ae5bf2b83)
Conflicts:
WHATS_NEW_DM
device_mapper/all.h
device_mapper/ioctl/libdm-iface.c
Check that the type is always defined; if not, make it an explicit
internal error (although logged as debug, so caught only with the proper
lvm.conf setting).
This ensures a type that is NULL can't later be dereferenced, causing a
core dump.
(cherry picked from commit 79879bd201)
Since the previous patch reverted the coverity patch, as this case is
intentional, provide an override for this coverity warning.
(cherry picked from commit 05b5774827)
Make sure tmp_begin and tmp_end are always set, even for blind
coverity.
(cherry picked from commit 2513661467)
Conflicts:
device_mapper/libdm-report.c
This is the default bcache size that is created at the
start of the command. It needs to be large enough to
hold a single copy of the metadata for a given VG, or the
VG cannot be read or written (since the entire VG would
not fit into available memory).
Increasing the default reduces the chances of anyone
needing to increase it to use their VG.
The size can be set in lvm.conf global/io_memory_size;
the lower limit is 4 MiB and the upper limit is 128 MiB.
When a single copy of metadata gets within 1MB of the
current io_memory_size value, begin printing a warning
that the io_memory_size should be increased.
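For example, in lvm.conf (the value is given in KiB here; treat the
exact unit as an assumption and check lvm.conf(5) for your version):

    global {
        # default bcache size; must hold one copy of the VG metadata
        io_memory_size = 16384
    }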