mirror of git://sourceware.org/git/lvm2.git synced 2024-12-21 13:34:40 +03:00
Commit Graph

6705 Commits

Author SHA1 Message Date
David Teigland
a7f195b7e8 add label_scan_devs_cached
Like label_scan_devs, but without invalidating data first,
for cases where the caller wants to use any
bcache data they have already read.
2020-10-21 16:24:16 -05:00
David Teigland
677f829e54 add label_read_pvid
To read the lvm headers and set dev->pvid if the
device is a PV.  The difference from the label_scan_ functions
is that this does not read any vg metadata or add any info
to lvmcache.
2020-10-21 16:24:16 -05:00
David Teigland
c7311d4722 lvmcache: rename label_read label_scan_dev
for consistent naming with other similar functions
2020-10-21 16:24:16 -05:00
David Teigland
b3cdf0d881 lvmcache: add lvmcache_get_dev_mda
for future patch
2020-10-21 16:24:16 -05:00
David Teigland
2c9bb67604 scanning: improve filtering control
Filtering in label_scan was controlled indirectly by
the fact that bcache was not yet set up when label_scan
first ran.  The result is that filters that needed data
would not run and would return -EAGAIN, which would
result in the dev flag FILTER_AFTER_SCAN being set.
After the dev header was read for checking the label,
filters would be rechecked because of FILTER_AFTER_SCAN.
All filters would be checked this time because bcache
was now set up, and the filters needing data would
largely use data already scanned for reading the label.
This design worked but is hard to adjust for future
cases where bcache is already set up.

Replace this method (based on whether bcache has been set up)
with a new cmd flag filter_nodata_only.  When this flag
is set, filters that need data will not run.  This allows
the same label_scan behavior when bcache has already been set up.
There are no expected changes in behavior.
2020-10-21 16:24:16 -05:00
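A self-contained sketch of the control flow described in the commit above: a filter that needs device data simply does not run when the new flag is set. Only the filter_nodata_only flag name comes from the commit message; the struct and function names below are simplified stand-ins, not the actual lvm2 types.

    #include <stdbool.h>

    /* Simplified stand-in for the command context; only the
     * filter_nodata_only flag is taken from the commit above. */
    struct cmd { bool filter_nodata_only; };

    /* Return 1 = pass, 0 = reject.  A filter that must read data
     * from the device is skipped entirely when the flag is set. */
    static int data_filter_passes(struct cmd *cmd, int dev_fd)
    {
        if (cmd->filter_nodata_only)
            return 1;               /* don't run data-based checks */

        /* ... otherwise read from dev_fd and decide ... */
        (void) dev_fd;
        return 1;
    }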
David Teigland
c74ccd5201 filters: nodata option
When filter_nodata_only is set, a filter that uses
data is skipped.
2020-10-21 16:24:16 -05:00
David Teigland
c601ec0d6e filters: allow filter wipe for one device
as passes_filter already does
2020-10-21 16:24:16 -05:00
Zdenek Kabelac
fdec4cd3e6 memlock: allocate at most half of rlimit stack
Stack touching validated the given size against the rlimit,
and if reserved_stack was above the rlimit, it was completely
ignored - now we always touch the stack up to rlimit/2.
2020-10-20 22:26:44 +02:00
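As an illustration of the rlimit/2 cap described above, here is a minimal, hypothetical stack-touching routine (not the lvm2 memlock code): it reads RLIMIT_STACK and never touches more than half of the soft limit.

    #include <stddef.h>
    #include <sys/resource.h>

    /* Illustrative sketch only: pre-touch the stack one page at a
     * time, but never more than half of RLIMIT_STACK. */
    static void touch_stack(size_t reserved_bytes)
    {
        struct rlimit rlim;
        size_t size = reserved_bytes;
        size_t i;

        if (!getrlimit(RLIMIT_STACK, &rlim) &&
            rlim.rlim_cur != RLIM_INFINITY &&
            size > rlim.rlim_cur / 2)
            size = rlim.rlim_cur / 2;   /* cap at rlimit/2 */

        if (!size)
            return;

        {
            volatile char stack[size];  /* VLA lives on the stack */
            for (i = 0; i < size; i += 4096)
                stack[i] = 'v';         /* touch one byte per page */
        }
    }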
Zdenek Kabelac
b75c2dfe1b debug: shorten error message
Just check for sigint during log_error().
2020-10-19 16:53:18 +02:00
Zdenek Kabelac
58976ccc34 properties: fix data_usage typo
Patch 4de6f58085 introduced a typo;
we need to use data_usage.

Note: this code was used by the lvmapp library and is currently unused.
2020-10-19 16:53:18 +02:00
Zdenek Kabelac
73a3a0d347 debug: drop vgid from debug
From the code it can be seen that the VGID will always be NULL here,
as vgid != NULL is already handled before.
Thus drop it from being displayed.
2020-10-02 22:27:00 +02:00
Zdenek Kabelac
117fc64e6e debug: no backtrace
As the path already printed a verbose message, drop the backtrace.
2020-10-02 21:04:16 +02:00
Zdenek Kabelac
1b8c6f09bc debug: show actual reason for taking this code path
Instead of a not-so-useful backtrace, report what the reason was.
2020-10-02 21:04:16 +02:00
Zdenek Kabelac
e1af80c81c debug: drop FD from error message
Since the error path now already closes the device and sets the FD to -1,
there is not much point in printing this info - actually it shouldn't be
there at all.
2020-10-02 21:04:16 +02:00
Zdenek Kabelac
dd8212365d debug: update messages 2020-10-02 21:04:16 +02:00
Zdenek Kabelac
e7fff97b8d wipe_lv: use BLKZEROOUT when possible
Since the BLKZEROOUT ioctl is supposedly the fastest
way to clear a block device, start using this ioctl
for zeroing a device.  Commonly we zero only a
small portion of a device (8KiB) - however, since we now
also started to zero metadata devices, in the case
of e.g. thin-pool metadata this can go up to ~16GiB,
and here the performance starts to be noticeable.
2020-10-02 21:04:16 +02:00
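A minimal sketch of the ioctl usage mentioned above (not the lvm2 wipe_lv code): zero a byte range on an open block device with BLKZEROOUT and report failure to the caller, who may fall back to writing zeros. Offset and length are assumed to be aligned to the device's logical block size.

    #include <stdint.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/fs.h>           /* BLKZEROOUT */

    /* Zero 'len' bytes starting at 'off' on block device fd. */
    static int zero_range(int fd, uint64_t off, uint64_t len)
    {
        uint64_t range[2] = { off, len };

        if (!ioctl(fd, BLKZEROOUT, range))
            return 0;

        perror("BLKZEROOUT failed");
        return -1;                  /* caller may write zeros instead */
    }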
Zdenek Kabelac
c65d3a6b8a wipe_lv: interruptible wiping
Since we now block signals and wiping may take an unexpectedly long
time, support interrupting the command while the wipe is in progress.
2020-10-02 21:03:19 +02:00
Zdenek Kabelac
7396f1cfee wipe_lv: drop label_scan_invalidate on error path
Since dev_set_bytes() now closes the dev on the error path itself,
remove this now unneeded call (it was introduced a few commits back
in history, hence also removing the comment from WHATS_NEW).
2020-10-02 21:02:04 +02:00
Zdenek Kabelac
b44db5d1a7 bcache: use flexible arrays
Cleanup: allocate the whole struct with a single malloc call.
2020-10-02 21:00:26 +02:00
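For reference, the flexible array member idiom the commit refers to looks like this in generic C99 (this is not the actual bcache structure): the header and its payload come from a single malloc().

    #include <stdlib.h>

    struct buffer {
        size_t len;
        char data[];            /* flexible array member, must be last */
    };

    static struct buffer *buffer_new(size_t len)
    {
        /* single allocation covers header and payload */
        struct buffer *b = malloc(sizeof(*b) + len);

        if (b)
            b->len = len;
        return b;
    }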
Zdenek Kabelac
b3c7a2b3f0 bcache: support interrupts when waiting on IO
lvm2 normally blocks signals during the protected
phase where it does not want to be interrupted.
Support interruptible processing when allowed,
in the section between sigint_allow() ... sigint_restore(),
and let io_getevents() finish with EINTR.
2020-10-02 20:57:50 +02:00
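A simplified sketch of the pattern described above, assuming libaio (link with -laio): retry io_getevents() on EINTR unless the caller has allowed interruption, in which case -EINTR is passed back. Function and parameter names are illustrative, not the lvm2 bcache code.

    #include <errno.h>
    #include <libaio.h>

    /* Wait for at least one completion; io_getevents() returns the
     * number of events or a negative errno value. */
    static int wait_for_io(io_context_t ctx, struct io_event *events,
                           long nr, int interruptible)
    {
        int r;

        do {
            r = io_getevents(ctx, 1, nr, events, NULL);
        } while (r == -EINTR && !interruptible);

        return r;
    }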
Zdenek Kabelac
0fe58fc54f bcache: fix busy loop with too many errors
When bcache tries to write data to a faulty device,
it may run out of caching blocks and then just busy-loop
on a CPU - so protect against this by checking
whether there are already max_io (~64) errored blocks.
2020-10-02 20:56:55 +02:00
Zdenek Kabelac
41f9e372c0 bcache: fix waiting problem for completed IO
Call _wait_all(), which checks whether there is still
some pending IO before sleeping.  Otherwise it may happen that
our submitted IO operations have already been dispatched
and this call then endlessly waits for IO which is all done.
This can be reproduced when a device quickly returns errors
on write requests.
2020-10-02 20:53:41 +02:00
David Teigland
450f272b31 devices: support printing the filter that rejects a device
Use of this new message function needs to be added
to various commands to improve the output.
2020-10-01 12:00:09 -05:00
David Teigland
c32d7fed4f writecache: use two step detach
When detaching a writecache, use the cleaner setting
by default to writeback data prior to suspending the
lv to detach the writecache.  This avoids potentially
blocking for a long period with the device suspended.

Detaching a writecache first sets the cleaner option, waits
for a short period of time (less than a second), and checks
if the writecache has quickly become clean.  If so, the
writecache is detached immediately.  This optimizes the case
where little writeback is needed.

If the writecache does not quickly become clean, then the
detach command leaves the writecache attached with the
cleaner option set.  This leaves the LV in the same state
as if the user had set the cleaner option directly with
lvchange --cachesettings cleaner=1 LV.

After leaving the LV with the cleaner option set, the
detach command will wait and watch the writeback progress,
and will finally detach the writecache when the writeback
is finished.  The detach command does not need to wait
during the writeback phase, and can be canceled, in which
case the LV will remain with the writecache attached and
the cleaner option set.  When the user runs the detach
command again it will complete the detach.

To detach a writecache directly, without the cleaner
step (which was the previous behavior), add the
option --cachesettings cleaner=0 to the detach command.
2020-10-01 11:33:02 -05:00
David Teigland
2272a32e6f lvmlockd vdo: add support
lvmlockd handling for a vdo lv and vdo pool is like that for a
thin lv and thin pool.
2020-09-29 14:43:27 -05:00
David Teigland
82e270c18a lvmlockd vdo: disallow use of shared lock on LV
vdo cannot be active on multiple hosts concurrently
2020-09-29 14:43:26 -05:00
Zdenek Kabelac
6728788bf5 debug: remove stacktrace on regular path
Here _insert is also expected to fail, so just do a regular 'return 0'.
2020-09-29 10:43:56 +02:00
Zdenek Kabelac
0c89c5a40f debug: update debug message 2020-09-29 10:43:56 +02:00
Zdenek Kabelac
bd0d4de4e2 active: fix compilation without devmapper
Better support for compilation without device-mapper.
2020-09-29 10:43:56 +02:00
Zdenek Kabelac
4cd356b26b thin: remove unneeded code test
Since we already detect the transaction_id before starting
to build the dm tree, this extra check is a duplicate
that would only capture a very tiny 'race', and we later
validate the transaction_id against the suspended snapshot origin.
2020-09-29 10:43:56 +02:00
Zdenek Kabelac
4de6f58085 thin: use lv_status_thin and lv_status_thin_pool
Introduce structures lv_status_thin_pool and
lv_status_thin  (to pair with lv_status_cache, lv_status_vdo).

Convert lv_thin_percent() -> lv_thin_status()
and  lv_thin_pool_percent() + lv_thin_pool_transaction_id() ->
lv_thin_pool_status().

This way a caller can see not only percentages, but also
other important status info about the thin-pool.

TODO:
This patch tries not to change too many other things,
but pool_below_threshold() now uses the new thin-pool info to return
failure if the thin-pool cannot actually be modified.
This should be handled separately in a better way.
2020-09-29 10:43:56 +02:00
Zdenek Kabelac
92c0e8c17f writecache: archive before modification of metadata
Archive before we start to modify metadata.
2020-09-29 10:43:56 +02:00
Zdenek Kabelac
08e838f488 cleanup: avoid unneeded check
Since creation of a thin snapshot already makes sure
the message list is empty, there is no need to check
this again.
2020-09-29 10:43:56 +02:00
Zdenek Kabelac
af5f29c7e2 activation: move locking of critical section
Move the beginning of the 'suspending' critical section closer to _lv_suspend_lv()
for better correctness of error paths.
2020-09-29 10:43:56 +02:00
Bastian Germann
168e2ffbcd lvm: add readline alternative editline
LVM2 is distributed under GPLv2 only. The readline library changed its
license long ago to GPLv3. Given that those licenses are incompatible,
and assuming you follow the FSF's interpretation that dynamic linking
creates a derivative work, distributing LVM2 linked against a current
readline version might be legally problematic.

Add support for the BSD licensed editline library as an alternative for
readline.

Link: https://thrysoee.dk/editline
2020-09-29 10:13:24 +02:00
David Teigland
fb96e9ab21 tests: add case for metadata checksum differences
Cover the case where two copies of metadata have the
same seqno but different checksums.  Also elaborate
on an existing fixme in the code, since
we should be doing something better for this case.

This uncovered an issue with reopening
fds in read-write mode.
2020-09-28 13:25:57 -05:00
David Teigland
da14cf68cb scanning: keep open an lvm device with scanning problem
The command may want to update it.
2020-09-28 13:25:57 -05:00
David Teigland
890c7ef451 devices: fix reopen for unopened device
If there's a request to reopen rw a device that's not
open, then just call the normal open function.
2020-09-28 13:25:57 -05:00
Heinz Mauelshagen
8952dcbff0 Revert "lvconvert: display warning if raid1 LV image count does not change"
This reverts superfluous commit 3c9177fdc0 as
_lv_raid_change_image_count() already checks for non-changed image count.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1872130
2020-09-28 17:14:03 +02:00
Zdenek Kabelac
e414ebef6e thin: pass through whole code
Instead of an early 'return 0', let the whole code path finish
in case of an error with syncing.
2020-09-25 22:59:35 +02:00
Zdenek Kabelac
8b22e38087 thin: improve error message
Add more info, explaining why the suspend of the thin snapshot origin was omitted.
2020-09-25 22:59:35 +02:00
Zdenek Kabelac
ef59c83f2d thin: enhance lvcreate error paths
Improve error response and reporting when creating thin snapshots.
If the thin-pool kernel metadata already has a device with the ID lvm2
tries to create, give a more meaningful error message and also properly
restore the transaction id to the value known to the thin-pool in this case.

Before, it was possible to diverge by one from the kernel TID value,
and lvm2 stacked a delete message for such a thin device.
2020-09-25 22:56:40 +02:00
Zdenek Kabelac
e2eb1dc501 thin: no delete message for device_id 0
Since we always use device_id > 0, we can use
device_id == 0 to mark a thin LV as an
LV we want to remove without a delete message.
2020-09-25 22:54:07 +02:00
Zdenek Kabelac
50a37948b5 vdo: allow passing renamed vdopool name to kernel
Although the kernel does not allow loading a new dm table
with a renamed vdopool, at least make the lvm2 code ready
in case it ever gets supported.
2020-09-23 13:20:28 +02:00
Zdenek Kabelac
7c19186271 vdo: disable support for online rename of vdopool LV
Since the kernel does not currently support this operation,
disable 'lvrename' of an active vdopool.

As a workaround, the user may simply deactivate, rename and activate.
2020-09-23 13:18:23 +02:00
Zdenek Kabelac
3a3307c0d8 vdo: enhance vdo pool extension
When a user tries to extend a vdo pool, the extension always needs
to be at least 1 full VDO slab (defined by vdo_slab_size_mb).

To avoid all the trouble of finding a 'workable' size, lvm2 automatically
increases the passed (or --use-policies calculated) extension size
(and informs the user about the sometimes large increase, as the slab
size can go up to 32GiB).

With VDO, users need to 'think big' anyway and expect such an
operation to be in the GiB range.
2020-09-22 23:28:43 +02:00
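The rounding described above amounts to bumping the requested extension up to a whole number of slabs; a hypothetical helper (names are illustrative, not the lvm2 implementation) could look like this, with both sizes in MiB.

    #include <stdint.h>

    /* Round a requested extension up to a whole multiple of the
     * VDO slab size. */
    static uint64_t extend_rounded_to_slab_mb(uint64_t requested_mb,
                                              uint64_t slab_size_mb)
    {
        if (!slab_size_mb)
            return requested_mb;

        return ((requested_mb + slab_size_mb - 1) / slab_size_mb)
               * slab_size_mb;
    }

    /* e.g. extend_rounded_to_slab_mb(100, 2048) == 2048 -
     * a 100MiB request grows to one full 2GiB slab. */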
Zdenek Kabelac
f38b7afd62 vdo: extend vdo segment validation
Try to catch all suspicious VDO segments in metadata early.
2020-09-22 23:25:16 +02:00
Zdenek Kabelac
642ef54399 vdo: correct message about policy extend support
Policy extend is already supported for vdo pools as well,
so correct the error message.
2020-09-22 23:25:16 +02:00
Zdenek Kabelac
e08a0421a3 vdo: drop unnecessary tabulator from metadata output 2020-09-22 23:25:16 +02:00
Zdenek Kabelac
5bc66532c7 activation: use revert_lv on tree suspend failure
When the table reload fails during suspend(), we were only calling
plain resume(), and this reloads only those devices
which were left suspended, but does not try to restore
the metadata state according to the lvm2 reverted metadata.
So if we were reloading a device tree, we restored
only the top-level LV, and the rest of the reverted device manipulation
was left alone and possibly mismatched what is in the committed
metadata.

FIXME: There are several cases where such a revert will likely not work
properly anyway, as some operations are currently handled in a single commit
while they need multiple commits, but it's a step towards better correctness.
At least we now catch these errors earlier.
2020-09-22 21:02:14 +02:00