Apply the same logic to pvscan/lvmetad that was added to
the non-lvmetad label_scan in commit 3fd75d1b:
scan: use full md filter when md 1.0 devices are present
Before scanning, check if any of the devs on the system are
md 0.90/1.0, and if so make the scan read both the start and
the end of the device so that the components of those md
versions can be ignored.
commit de28637
scan: use full md filter when md 1.0 devices are present
missed the fact that md superblock version 0.90 also puts
metadata at the end of the device, so the full md filter
needs to be used when either 0.90 or 1.0 is present.
fix lseek error check
fix read/write error checks
handle zero return from read and write
don't return an error for short io
fix partial read/write loop
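Taken together, the fixed io path behaves roughly like this minimal
sketch over a plain file descriptor (illustrative, not lvm's internal
io layer):

  #include <errno.h>
  #include <unistd.h>

  /* Read exactly 'len' bytes unless EOF hits first; returns bytes read,
   * or -1 on error.  Retries on EINTR, treats a zero return as EOF
   * rather than an error, and loops on short reads instead of failing. */
  static ssize_t read_full(int fd, char *buf, size_t len)
  {
          size_t total = 0;

          while (total < len) {
                  ssize_t rv = read(fd, buf + total, len - total);

                  if (rv < 0) {
                          if (errno == EINTR)
                                  continue;   /* interrupted: retry */
                          return -1;          /* real io error */
                  }
                  if (rv == 0)
                          break;              /* EOF: short io, not an error */
                  total += (size_t) rv;
          }
          return (ssize_t) total;
  }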
io_setup() for aio may fail if a system has reached the
aio request limit. In this case, fall back to using
sync io. Also, lvm's use of aio can be disabled entirely
with the config setting global/use_aio=0.
The system limit for aio requests can be seen from
/proc/sys/fs/aio-max-nr
The current usage of aio requests can be seen from
/proc/sys/fs/aio-nr
The system limit for aio requests can be increased by
setting fs.aio-max-nr using sysctl.
Also add last-byte limit to the sync io code.
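A sketch of the fallback using the libaio wrapper (the use_sync_io
flag is illustrative; the real code also honors global/use_aio):

  #include <libaio.h>
  #include <stdio.h>

  static io_context_t aio_ctx;  /* zero-initialized, as io_setup() requires */
  static int use_sync_io;       /* illustrative fallback flag */

  static void setup_scan_io(int nr_events)
  {
          int rv = io_setup(nr_events, &aio_ctx);  /* 0 or negative errno */

          if (rv < 0) {
                  /* -EAGAIN here usually means fs.aio-nr has reached
                   * fs.aio-max-nr: raise the limit via sysctl, or let
                   * the command continue with sync io. */
                  fprintf(stderr, "io_setup failed (%d), using sync io\n", rv);
                  use_sync_io = 1;
          }
  }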
Since the stats handle is neither bound nor listed before the
attempt to call dm_stats_get_nr_regions(), it will always return
zero: this prevents reporting of any dmstats regions on any
device.
Remove the dm_stats_get_nr_regions() check and instead rely on
the correct return status from dm_stats_populate() which only
returns 0 in the case that there are regions to inspect (and
which logs a specific error for all other cases).
Reported-by: Bryan Gurney <bgurney@redhat.com>
It doesn't make sense to test or warn about the region count until
the stats handle has been listed: at this point it may or may not
contain valid information (but is guaranteed to be correct after
the list).
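The ordering matters; sketched against the public libdevmapper dmstats
calls, a handle must be bound and listed before the region count means
anything (program id string and error handling abbreviated):

  #include <stdint.h>
  #include <libdevmapper.h>

  static uint64_t count_regions(const char *dm_name)
  {
          struct dm_stats *dms;
          uint64_t nr = 0;

          if (!(dms = dm_stats_create("dmstats")))   /* example program id */
                  return 0;

          /* Only after dm_stats_list() does the handle hold region info;
           * calling dm_stats_get_nr_regions() before it always yields 0. */
          if (dm_stats_bind_name(dms, dm_name) && dm_stats_list(dms, NULL))
                  nr = dm_stats_get_nr_regions(dms);

          dm_stats_destroy(dms);
          return nr;
  }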
lvm uses a bcache block size of 128K. A bcache block
at the end of the metadata area will overlap the PEs
from which LVs are allocated. How much they overlap
depends on alignment. When lvm reads and writes one of these
bcache blocks to update VG metadata, it can also be
reading and writing PEs that belong to an LV.
If these overlapping PEs are being written to by the
LV user (e.g. filesystem) at the same time that lvm
is modifying VG metadata in the overlapping bcache
block, then the user's updates to the PEs can be lost.
This patch is a quick hack to prevent lvm from writing
past the end of the metadata area.
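The shape of the hack as a hypothetical sketch (the 128K constant
matches the description above; the helper is not lvm's actual code):

  #include <stdint.h>

  #define BCACHE_BLOCK_SIZE (128 * 1024)

  /* When flushing a bcache block that covers the tail of the metadata
   * area, clamp the write so nothing beyond 'last_byte' (the end of
   * the metadata area) is rewritten, leaving overlapping PE data that
   * may be changing under an active LV untouched. */
  static uint64_t clamp_write_len(uint64_t block_start, uint64_t last_byte)
  {
          uint64_t len = BCACHE_BLOCK_SIZE;

          if (last_byte && block_start + len > last_byte)
                  len = last_byte - block_start;   /* stop at metadata end */

          return len;
  }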
The previous commit de2863739f
scan: use full md filter when md 1.0 devices are present
needs the use_full_md_check flag in the md filter, but
the cmd struct is not available when the filter is run,
so that commit wasn't working. Fix this by setting the
flag in a global variable.
(This was fixed in the master branch with commit 8eab37593
in which the cmd struct was passed to the filters, but it
was an intrusive change, so this commit is using the less
intrusive global variable.)
The md filter can operate in two native modes:
- normal: reads only the start of each device
- full: reads both the start and end of each device
md 1.0 devices place the superblock at the end of the device,
so components of this version will only be identified and
excluded when lvm uses the full md filter.
Previously, the full md filter was only used in commands
that could write to the device. Now, the full md filter
is also applied when there is an md 1.0 device present
on the system. This means the 'pvs' command can avoid
displaying md 1.0 components (at the cost of doubling
the i/o to every device on the system.)
(The md filter can operate in a third mode, using udev,
but this is disabled by default because there have been
problems with reliability of the info returned from udev.)
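Why the end of the device must be read follows from the md on-disk
offsets; a sketch with the relevant constants (helper names are
illustrative; lvm checks for the md magic 0xa92b4efc at these spots):

  #include <stdint.h>

  #define MD_RESERVED_SECTORS 128   /* 64KiB in 512-byte sectors */

  /* md 0.90: superblock sits in the last 64KiB-aligned 64KiB block. */
  static uint64_t md_090_sb_offset(uint64_t size_sectors)
  {
          return ((size_sectors & ~(uint64_t)(MD_RESERVED_SECTORS - 1))
                  - MD_RESERVED_SECTORS) << 9;
  }

  /* md 1.0: superblock sits at least 8KiB from the end, 4KiB-aligned.
   * (Versions 1.1/1.2 put it at the start, which the normal filter
   * already reads.) */
  static uint64_t md_10_sb_offset(uint64_t size_sectors)
  {
          return ((size_sectors - 8 * 2) & ~(uint64_t)(4 * 2 - 1)) << 9;
  }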
When converting from striped/raid0/raid0_meta
to raid6 with > 2 stripes, allow possible
direct conversion (to raid6_n_6).
In case of 2 stripes, first convert to raid5_n to restripe
to at least 3 data stripes (the raid6 minimum in lvm2) in
a second conversion before finally converting to raid6_n_6.
As before, raid6_n_6 then can be converted
to any other raid6 layout.
Enhance lvconvert-raid-takeover.sh to test the
2-stripe conversions to raid6.
Resolves: rhbz1624038
(cherry picked from commit e2e30a64ab)
Conflicts:
WHATS_NEW
We need Ceph RBD devices mapped before they are used in a
stack where LVM is on top, so make sure rbdmap.service is
ordered before the generated lvm2-activation-net.service.
On shutdown, we need to stop blk-availability before we
stop rbdmap.service.
Resolves: rhbz1623479
(cherry picked from commit cb17ef221b)
Conflicts:
WHATS_NEW
The thin plugin started to use a configurable setting to allow
the use of external scripts - however, to read this value it
needed to execute an internal command, since dmeventd itself has
no access to lvm.conf and the API for dmeventd plugins has been
kept stable.
The command call itself was normally not 'a big issue', until
users started to monitor a higher number of LVs and execution of
the command got stuck, because some other monitored resource had
already started to execute another lvm2 command and became
blocked waiting on a VG lock.
This scenario revealed the necessity to somehow avoid calling
lvm2 commands during resource registration - but this requires
bigger changes - so meanwhile this patch tries to minimize the
chance of hitting this race by obtaining any configurable setting
just once. Such a patch is small and covers the majority of the
problem - yet a better solution needs to be introduced, likely
with a bigger rework of dmeventd.
TODO: Avoid blocking registration of a resource on execution of
lvm2 commands, since those can get stuck waiting on mutexes.
When using lvmetad, 'pvs' still evaluates full filters
on all devices (lvmetad only provides info about PVs,
but pvs needs to report info about all devices, at
least sometimes.)
Because some filters read the devices, pvs still reads
every device, even with lvmetad (i.e. lvmetad is no help
for the pvs command.) Because the device reads are not
being managed by the standard label scan layer, but only
happen incidentally through the filters, there is nothing
to control and limit the bcache content and the open file
descriptors for the devices. When there are a lot of devs
on the system, the number of open fds exceeds the limit
and all opens begin failing.
The proper solution for this would be for pvs to really
use lvmetad and not scan devs, or for pvs to do a proper
label scan even when lvmetad is enabled. To avoid any
major changes to the way this has worked, just work around
this problem by dropping bcache and closing the fd after
pvs evaluates the filter on each device.
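In outline, the workaround does the following; every name below is a
hypothetical stand-in for lvm internals, not the actual API:

  #include <unistd.h>

  struct device { int fd; };                      /* minimal stand-in */

  int filter_passes(struct device *dev);          /* may open + read dev */
  void bcache_invalidate_dev_blocks(int fd);      /* drop cached blocks */

  /* After the filters have been evaluated for one device, drop its
   * bcache blocks and close its fd so open files cannot accumulate
   * across thousands of devices. */
  static void filter_one_device(struct device *dev)
  {
          int pass = filter_passes(dev);

          bcache_invalidate_dev_blocks(dev->fd);
          close(dev->fd);
          dev->fd = -1;

          if (pass) {
                  /* ... report the device ... */
          }
  }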
For 'pvscan --cache' avoid using dev_iter in the loop
after the label_scan by passing the necessary devs back
from the label_scan for the continued pvscan.
The dev_iter functions reapply the filters which will
trigger more io when we don't need or want it. With
many devs, incidental opens from the filters (not controlled
by the label scan) can lead to too many open files.
This is the number of concurrent async io requests that
the scan layer will submit to the bcache layer. There
will be an open fd for each of these, so it is best to
keep this well below the default limit for max open files
(1024), otherwise lvm may get EMFILE from open(2) when
there are around 1024 devices to scan on the system.
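Since each in-flight request pins an open fd, the window can be
sanity-checked against the process fd limit; a sketch using
getrlimit(2) (the helper and the divide-by-two margin are illustrative):

  #include <sys/resource.h>

  /* Cap the async io window so in-flight scan requests (one open fd
   * each) stay well below RLIMIT_NOFILE, avoiding EMFILE from open(2). */
  static unsigned cap_io_window(unsigned requested)
  {
          struct rlimit rl;

          if (getrlimit(RLIMIT_NOFILE, &rl))
                  return requested;

          if ((rlim_t) requested > rl.rlim_cur / 2)
                  requested = (unsigned) (rl.rlim_cur / 2);

          return requested;
  }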
"lvconvert --type linear RaidLV" on striped and raid4/5/6/10
have to provide the convenient interim layouts. Fix involves
a cleanup to the convenience type function.
As a result of testing, add missing sync waits to
lvconvert-raid-reshape-linear_to_raid6-single-type.sh.
Resolves: rhbz1447809
(cherry picked from commit e83c4f07ca)
Conflicts:
WHATS_NEW
Conversion to striped from raid0/raid0_meta is directly possible.
Fix a regression that set a superfluous interim raid5_n
conversion type, introduced by commit bd7cdd0b09.
Add new test script lvconvert-raid0-striped.sh.
Resolves: rhbz1608067
(cherry picked from commit 4578411633)
Conflicts:
WHATS_NEW
With the improved mirror activation code, a --splitmirrors
issue popped up, since proper preload code and deactivation
for the split mirror leg were missing.
If a mirror LV is listed in read_only_volume_list, it would
still be activated rw. The activation would initially be
readonly, but the monitoring function would immediately
change it to rw. This was a regression from commit
fade45b1d1 ("mirror: improve table update").
The monitoring function needs to copy the read_only setting
into the new set of mirror activation options it uses.
When vgcreate does an automatic pvcreate, it opens the
dev with O_EXCL to ensure no other subsystem is using
the device. This exclusive fd remained in bcache and
prevented activation parts of lvm from using the dev.
This appeared with vgcreate of a sanlock VG because of
the unique combination where the dev is not yet a PV,
so pvcreate is needed, and the vgcreate also creates
and activates an internal LV for sanlock.
Fix this by closing the exclusive fd after it's used
by pvcreate so that it won't interfere with other
bits of lvm that may try to use the device.
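The essence of the fix, sketched with plain syscalls (lvm's device
layer does the real work; this only shows the fd lifecycle):

  #include <fcntl.h>
  #include <unistd.h>

  /* Open the device O_EXCL so pvcreate can be sure no other subsystem
   * (filesystem, md, multipath, ...) holds it: the open fails with
   * EBUSY if another exclusive holder exists.  Close the fd as soon as
   * the check and the writes are done, because while it stays open,
   * device-mapper cannot claim the device and activating the internal
   * sanlock LV fails. */
  static int pvcreate_excl_check(const char *path)
  {
          int fd = open(path, O_RDWR | O_EXCL);

          if (fd < 0)
                  return -1;              /* device in use elsewhere */

          /* ... pvcreate work ... */

          close(fd);                      /* don't keep the exclusive fd */
          return 0;
  }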
A minimal set of changes to make the vdo plugin compilable in the stable branch:
Use older headers.
Implement a simple vdo status parser that only resolves the use-percentage.
Introduce a VDO plugin for monitoring VDO devices.
This plugin can also be used by other users, since the plugin
checks for the UUID prefix 'LVM-' and runs lvm actions only on
those devices.
Non-LVM devices are only monitored, and warnings are logged
when the usage threshold reaches 80%.
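A minimal sketch of such a parser, assuming the documented dm-vdo
status layout "<dev> <mode> <in-recovery> <index-state> <compression>
<used> <total>" with the physical block counts in the last two fields
(real code needs more validation):

  #include <stdio.h>

  /* Turn a dm-vdo status params line into a use-percentage. */
  static int vdo_use_percent(const char *params, int *percent)
  {
          unsigned long long used, total;

          if (sscanf(params, "%*s %*s %*s %*s %*s %llu %llu",
                     &used, &total) != 2 || !total)
                  return 0;       /* unrecognized status line */

          *percent = (int) (used * 100 / total);
          return 1;
  }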
Prohibit, because the tracking can't continue and
further conversions may fail with bogus error messages.
Resolves: rhbz1579072
(cherry picked from commit a004bb07f1)
Conflicts:
WHATS_NEW
Prohibit conversions of raid1 split trackchanges SubLVs
because they will fail to get merged back into the RaidLV.
Resolves: rhbz1579438
(cherry picked from commit 8b0729af0f)
Conflicts:
WHATS_NEW