mirror of git://sourceware.org/git/lvm2.git synced 2024-12-21 13:34:40 +03:00
Commit Graph

17090 Commits

Author SHA1 Message Date
David Teigland
c7311d4722 lvmcache: rename label_read label_scan_dev
for consistent naming with other similar functions
2020-10-21 16:24:16 -05:00
David Teigland
b3cdf0d881 lvmcache: add lvmcache_get_dev_mda
for future patch
2020-10-21 16:24:16 -05:00
David Teigland
2c9bb67604 scanning: improve filtering control
Filtering in label_scan was controlled indirectly by
the fact that bcache was not yet set up when label_scan
first ran.  The result is that filters that needed data
would not run and would return -EAGAIN, which would
result in the dev flag FILTER_AFTER_SCAN being set.
After the dev header was read for checking the label,
filters would be rechecked because of FILTER_AFTER_SCAN.
All filters would be checked this time because bcache
was now set up, and the filters needing data would
largely use data already scanned for reading the label.
This design worked but is hard to adjust for future
cases where bcache is already set up.

Replace this method (based on setting up bcache, or not)
with a new cmd flag filter_nodata_only.  When this flag
is set filters that need data will not run.  This allows
the same label_scan behavior when bcache has been set up.
There are no expected changes in behavior.
2020-10-21 16:24:16 -05:00
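For illustration only, a minimal C sketch of how such a command flag could short-circuit data-using filters; struct device, struct cmd_context, the needs_data field, and passes_filters() are hypothetical stand-ins, not lvm2's actual filter code.

```c
/* Sketch: when cmd->filter_nodata_only is set, any filter that would
 * need to read device data simply does not run (it neither passes nor
 * rejects the device), matching the commit's description. */
#include <stddef.h>

struct device;                               /* opaque in this sketch */

struct cmd_context {
	int filter_nodata_only;              /* the new cmd flag */
};

struct filter {
	int needs_data;                      /* hypothetical: filter must read the device */
	int (*passes)(struct device *dev);   /* 1 = pass, 0 = reject */
};

static int passes_filters(const struct cmd_context *cmd, struct device *dev,
			  struct filter *const *filters, size_t count)
{
	for (size_t i = 0; i < count; i++) {
		if (cmd->filter_nodata_only && filters[i]->needs_data)
			continue;            /* data-using filters are skipped */
		if (!filters[i]->passes(dev))
			return 0;
	}
	return 1;
}
```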
David Teigland
c74ccd5201 filters: nodata option
When filter_nodata_only is set, a filter that uses
data is skipped.
2020-10-21 16:24:16 -05:00
David Teigland
c601ec0d6e filters: allow filter wipe for one device
as passes_filter already does
2020-10-21 16:24:16 -05:00
David Teigland
83d0818523 tests: writecache-misc disable with lvmlockd
pvmove in a shared vg requires a named lv
2020-10-21 12:47:28 -05:00
Zdenek Kabelac
6be29e1179 tests: check dmeventd with bigger reserved_stack
Check dmeventd remains working when reserved_stack
is above 300KiB.
2020-10-20 22:28:58 +02:00
Zdenek Kabelac
fdec4cd3e6 memlock: allocate at most half of rlimit stack
Stack touching validated the given size against the rlimit,
and if the reserved_stack was above the rlimit it was completely
ignored - now we always touch the stack up to rlimit/2.
2020-10-20 22:26:44 +02:00
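As a rough sketch of the capping described above, assuming the standard getrlimit(RLIMIT_STACK) interface; the helper names, page step, and alloca-based touching are illustrative, not lvm2's memlock code.

```c
/* Sketch: never pre-touch more stack than half of the soft RLIMIT_STACK. */
#include <alloca.h>
#include <stddef.h>
#include <sys/resource.h>

static void _touch_stack(size_t bytes)      /* illustrative helper */
{
	volatile char *s = alloca(bytes);

	for (size_t i = 0; i < bytes; i += 4096)
		s[i] = 1;                   /* fault the stack pages in */
}

void reserve_stack(size_t reserved_stack)   /* illustrative entry point */
{
	struct rlimit rlim;

	if (getrlimit(RLIMIT_STACK, &rlim) == 0 &&
	    rlim.rlim_cur != RLIM_INFINITY &&
	    reserved_stack > rlim.rlim_cur / 2)
		reserved_stack = rlim.rlim_cur / 2;   /* cap at rlimit/2 */

	if (reserved_stack)
		_touch_stack(reserved_stack);
}
```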
Zdenek Kabelac
bd272e3bce lvmcmdlib: lvm2_init_threaded
The cmd context has a 'threaded' value that used to be set
by clvmd - and allowed proper memory locking management.
Reuse the same bit for dmeventd.

Since dmeventd is using a 300KiB stack per thread,
we will ignore any user settings for allocation/reserved_stack
until some better solution is found.
This avoids crashing dmeventd when the user changes this value,
and because in most cases lvm2 should work ok with a 64K stack
size, this change should not cause any problems.
2020-10-20 22:22:52 +02:00
Zdenek Kabelac
756066a2e8 libdm: relocate code for sending messages
To be able to send messages for recently resumed devices,
move code into inner loop.
Matching commit c1a6b10d09.
2020-10-19 16:53:19 +02:00
Zdenek Kabelac
3e06061d82 cov: split check for type assignment
Check that type is always defined; if not, make it an explicit internal
error (although logged as debug - so caught only with the proper lvm.conf
setting).
This ensures a later NULL type can't be dereferenced with a coredump.
2020-10-19 16:53:19 +02:00
Zdenek Kabelac
a17ec7e0ba dm: remove created devices on error path
DM tree keeps track of devices created while preloading a device tree.
When a failure occurs during such a preload, it will now try to remove
all created and preloaded devices. This makes it easier to maintain
device stacking, since we do not need to check in depth for the
existence of all possibly created devices during the failure.
2020-10-19 16:53:19 +02:00
Zdenek Kabelac
b75c2dfe1b debug: shorten error message
Just check for sigint during log_error().
2020-10-19 16:53:18 +02:00
Zdenek Kabelac
b2a326b511 libdm: validate thin-pool before sending messages
Although lvm2 does validation on its side, ensure the DM code
is not sending messages to a failed thin pool.
2020-10-19 16:53:18 +02:00
Zdenek Kabelac
4b0565b82f libdm: enhance error message 2020-10-19 16:53:18 +02:00
Zdenek Kabelac
4c1caa7e26 libdm: split code for sending message
Move message sending from _thin_pool_node_message to
a new _node_message for better code sharing.
2020-10-19 16:53:18 +02:00
Zdenek Kabelac
58976ccc34 properties: fix data_usage typo
Patch 4de6f58085 introduced a typo;
we need to use data_usage.

Note: this code was used by the lvmapp library and is currently unused.
2020-10-19 16:53:18 +02:00
Zdenek Kabelac
d2bdad28d1 tests: extend area covered by error target
Since 'BLKZEROOUT' streams out more blocks at once, it can easily
zero out a larger set of blocks after the 1st failing one.

So the test is adapted to fully 'hide' the swap header under the error target.
2020-10-19 16:53:18 +02:00
Marian Csontos
b50134dc14 make: generate 2020-10-15 11:16:54 +02:00
Marian Csontos
616e5b854c gitignore: ignore gcov files 2020-10-15 11:13:13 +02:00
Marian Csontos
53db14171c Revert "tests: Adapt RAID test to changes"
The conversion of a degraded RAID should still report a failure.

This reverts commit e12bdd591a.
2020-10-13 13:15:16 +02:00
Zdenek Kabelac
ee43ec5782 rpm: bare words are no longer supported
Update for the new rpm requirement and use quoted "..." words.
2020-10-02 22:27:00 +02:00
Zdenek Kabelac
99b6173f10 tests: enable tests for lvmlockd 2020-10-02 22:27:00 +02:00
Zdenek Kabelac
5e26a2b74d tests: aux hides zero and error device
When ERR_DEV and ZERO_DEV are used, they are automatically
taken down when the last user no longer needs them,
so hide them from the 'forgotten' device check.
2020-10-02 22:27:00 +02:00
Zdenek Kabelac
8d9b4c624f tests: rename shown debug trace
As there can be a few invocations of stacktrace, avoid
repeatedly displaying logs from commands.
So after the first display rename debug.log* -> debug_log,
so the file can still remain for reading in the test dir.
2020-10-02 22:27:00 +02:00
Zdenek Kabelac
73a3a0d347 debug: drop vgid from debug
From the code it can be seen that the VGID will always be NULL here,
as vgid != NULL is already handled before.
Thus drop it from being displayed.
2020-10-02 22:27:00 +02:00
Zdenek Kabelac
117fc64e6e debug: no backtrace
As the path already printed a verbose message, drop the backtrace.
2020-10-02 21:04:16 +02:00
Zdenek Kabelac
1b8c6f09bc debug: show actual reason for taking this code path
Instead of a not-so-useful backtrace, report the reason.
2020-10-02 21:04:16 +02:00
Zdenek Kabelac
e1af80c81c debug: drop FD from error message
Since the error path now already closes the device and sets the FD to -1,
there is not much point in printing this info - actually it shouldn't be
there at all.
2020-10-02 21:04:16 +02:00
Zdenek Kabelac
dd8212365d debug: update messages 2020-10-02 21:04:16 +02:00
Zdenek Kabelac
e7fff97b8d wipe_lv: use BLKZEROOUT when possible
Since the BLKZEROOUT ioctl should supposedly be the fastest
way to clear a block device, start using this ioctl
for zeroing a device. Commonly we zero only a typically
small portion of a device (8KiB) - however since we now
also started to zero metadata devices, in the case
of e.g. thin-pool metadata this can go up to ~16GiB,
and here the performance starts to be noticeable.
2020-10-02 21:04:16 +02:00
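Zeroing a range with the BLKZEROOUT ioctl can be illustrated by the sketch below; it is not lvm2's wipe_lv code, omits the write-zeros fallback, and assumes offset and length are aligned to the device's logical block size.

```c
/* Sketch: zero [offset, offset+len) on a block device in one ioctl. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/fs.h>                        /* BLKZEROOUT */

int zero_range(const char *dev_path, uint64_t offset, uint64_t len)
{
	uint64_t range[2] = { offset, len }; /* { start, length } in bytes */
	int fd = open(dev_path, O_WRONLY);

	if (fd < 0)
		return -1;

	if (ioctl(fd, BLKZEROOUT, &range) < 0) {
		perror("BLKZEROOUT");        /* a caller could fall back to writing zeros */
		close(fd);
		return -1;
	}

	return close(fd);
}
```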
Zdenek Kabelac
c65d3a6b8a wipe_lv: interruptible wiping
Since we now block signals and wiping may take an unexpectedly long
time, support breaking the command while a wipe is in progress.
2020-10-02 21:03:19 +02:00
Zdenek Kabelac
7396f1cfee wipe_lv: drop label_scan_invalidate on error path
Since dev_set_bytes() now closes the dev on its error path itself,
remove this unneeded call (introduced a few commits back
in history, thus also removing the comment from WHATS_NEW).
2020-10-02 21:02:04 +02:00
Zdenek Kabelac
b44db5d1a7 bcache: use flexible arrays
Cleanup, allocate whole struct with a single malloc call.
2020-10-02 21:00:26 +02:00
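The flexible-array pattern referred to here, in a generic form (the struct layout and names are illustrative, not bcache's actual block structure):

```c
/* Sketch: header and data buffer come from one malloc() instead of two. */
#include <stdlib.h>

struct block {
	unsigned index;
	size_t size;
	char data[];                         /* C99 flexible array member */
};

static struct block *block_alloc(unsigned index, size_t size)
{
	struct block *b = malloc(sizeof(*b) + size);   /* single allocation */

	if (!b)
		return NULL;

	b->index = index;
	b->size = size;
	return b;
}
```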
Zdenek Kabelac
b3c7a2b3f0 bcache: support interrupts when waiting on IO
lvm2 normally blocks signals during the protected
phase where it does not want to be interrupted.
Support interruptible processing when allowed,
in the section between sigint_allow() ... sigint_restore(),
and let 'io_getevents()' finish with EINTR.
2020-10-02 20:57:50 +02:00
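A sketch of the idea, assuming libaio's io_getevents(); sigint_allow() and sigint_restore() are the lvm2 helpers named above, while sigint_caught() and the loop itself are assumptions made for illustration.

```c
/* Sketch: only inside the sigint_allow()/sigint_restore() window may
 * io_getevents() be interrupted; on EINTR without a SIGINT we retry. */
#include <errno.h>
#include <libaio.h>

extern void sigint_allow(void);              /* lvm2 helpers named in the commit */
extern void sigint_restore(void);
extern int sigint_caught(void);              /* assumed helper for this sketch */

static int wait_for_events(io_context_t ctx, struct io_event *events, long nr)
{
	int r;

	sigint_allow();
	do {
		r = io_getevents(ctx, 1, nr, events, NULL);
	} while (r == -EINTR && !sigint_caught());
	sigint_restore();

	return r;                            /* completed events, or negative errno */
}
```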
Zdenek Kabelac
0fe58fc54f bcache: fix busy loop with too many errors
When bcache tries to write data to a faulty device,
it may run out of caching blocks and then just busy-loop
on a CPU - so protect against this by checking
whether there are already max_io (~64) errored blocks.
2020-10-02 20:56:55 +02:00
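Schematically the guard amounts to something like the check below; the names and the constant are illustrative, not bcache's actual code.

```c
/* Sketch: once roughly max_io blocks have already errored out, stop
 * waiting for them to come back instead of busy-looping on the CPU. */
#include <stdbool.h>

#define MAX_IO 64                            /* roughly the limit from the commit message */

struct cache_state {
	unsigned nr_errored;                 /* blocks whose IO already failed */
};

static bool _too_many_errors(const struct cache_state *c)
{
	/* With this many failed blocks the device is clearly faulty;
	 * spinning for a free block would only burn CPU. */
	return c->nr_errored >= MAX_IO;
}
```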
Zdenek Kabelac
41f9e372c0 bcache: fix waiting problem for completed IO
Call _wait_all(), which checks whether there is still
some pending IO before sleeping. Otherwise it may happen that
our submitted IO operations have already been dispatched
and this call then endlessly waits for IO that is all done.
This can be reproduced when a device quickly returns errors
on write requests.
2020-10-02 20:53:41 +02:00
Zdenek Kabelac
9885c9b43a configure: use our ordered list of python names
Since it seems it's now preferred to use python3 in the path name,
put this name first in the list.
2020-10-02 20:52:38 +02:00
Zdenek Kabelac
2df7ef58a5 configure: update with latest AM_PATH_PYTHON
The world has moved towards python3.9,
although we still don't like the path ordering.
2020-10-02 20:48:41 +02:00
Zdenek Kabelac
ae96a43f05 configure: check for BLKZEROOUT support 2020-10-02 20:48:41 +02:00
David Teigland
91f869e43c lvconvert: move log message to fix segfault
the log message was printing an lv name from a released vg
2020-10-02 09:23:25 -05:00
David Teigland
0143c7aebe improve message for invalid device arg in process_each_pv
Multiple commands process pvs by name using process_each_pv()
and will now have an improved error message for a device
that's excluded by filters.
2020-10-01 12:34:36 -05:00
David Teigland
74ed6e8a99 improve message for invalid device arg
for pvcreate, pvremove, vgcreate, vgextend.
2020-10-01 12:20:16 -05:00
David Teigland
450f272b31 devices: support printing the filter that rejects a device
Use of this new message function needs to be added
to various commands to improve the output.
2020-10-01 12:00:09 -05:00
David Teigland
ff3945777b tests: enable writecache test that uses cleaner 2020-10-01 11:33:02 -05:00
David Teigland
c32d7fed4f writecache: use two step detach
When detaching a writecache, use the cleaner setting
by default to writeback data prior to suspending the
lv to detach the writecache.  This avoids potentially
blocking for a long period with the device suspended.

Detaching a writecache first sets the cleaner option, waits
for a short period of time (less than a second), and checks
if the writecache has quickly become clean.  If so, the
writecache is detached immediately.  This optimizes the case
where little writeback is needed.

If the writecache does not quickly become clean, then the
detach command leaves the writecache attached with the
cleaner option set.  This leaves the LV in the same state
as if the user had set the cleaner option directly with
lvchange --cachesettings cleaner=1 LV.

After leaving the LV with the cleaner option set, the
detach command will wait and watch the writeback progress,
and will finally detach the writecache when the writeback
is finished.  The detach command does not need to wait
during the writeback phase, and can be canceled, in which
case the LV will remain with the writecache attached and
the cleaner option set.  When the user runs the detach
command again it will complete the detach.

To detach a writecache directly, without using the cleaner
step (which has been the approach previously), add the
option --cachesettings cleaner=0 to the detach command.
2020-10-01 11:33:02 -05:00
David Teigland
d1b7438c9f pvcreate/pvremove: reimplement device checks
Reorganize checking the device args for pvcreate/pvremove
to prepare for future changes.  There should be no change
in behavior.  Stop the inverted use of process_each_pv,
which pulled in a lot of unnecessary processing, and call
the check functions on each device directly.
2020-10-01 10:09:09 -05:00
Marian Csontos
46e5908759 test: grep -q may fail and it does
The script runs with pipefail; grep -q exits immediately, sending SIGPIPE
to lvm segtype, which fails the whole pipe.
2020-10-01 11:33:57 +02:00
David Teigland
2272a32e6f lvmlockd vdo: add support
lvmlockd handling for vdo lv and vdo pool is like
thin lv and thin pool.
2020-09-29 14:43:27 -05:00
David Teigland
82e270c18a lvmlockd vdo: disallow use of shared lock on LV
vdo cannot be active on multiple hosts concurrently
2020-09-29 14:43:26 -05:00