. error exit means that lvmdevices --update would make a change.
. remove check of PART field from --check because it isn't used.
. unlink searched_devnames file to ensure check|update will search
The approach to duplicate VGIDs has been that it is not possible
or not allowed, so the behavior has been undefined. The actual
result was unpredictable and/or broken, and generally unhelpful.
Improve this by recognizing the problem, displaying the VGs,
and printing a warning to fix the problem. Beyond this,
using VGs with duplicate VGIDs remains undefined, but should
work well enough to correct the problem with vgchange -u.
It's possible to create this condition without too much difficulty
by cloning PVs, followed by an incomplete attempt at making the two
VGs unique (vgrename and pvchange -u, but missing vgchange -u.)
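As an illustration (the VG name below is hypothetical), once the
warning identifies the affected VGs, the fix amounts to regenerating
the VGID on one of them:
  vgs -o vg_name,vg_uuid      # shows the VGs sharing a VGID
  vgchange -u vg_clone        # assigns a fresh VGID to one of them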
This reverts commit bd2baeaaa6.
This commit broke vgrename because vgrename relies on old bugs
in lvmcache_update_vg_from_write and lvmcache_update_vgname
which need to be fixed first.
The approach to duplicate VGIDs has been that it is not possible
or not allowed, so the behavior has been undefined. The actual
result was unpredictable and/or broken, and generally unhelpful.
Improve this by recognizing the problem, displaying the VGs,
and printing a warning to fix the problem. Beyond this,
using VGs with duplicate VGIDs remains undefined, but should
work well enough to correct the problem with vgchange -u.
It's possible to create this condition without too much difficulty
by cloning PVs, followed by an incomplete attempt at making the two
VGs unique (vgrename and pvchange -u, but missing vgchange -u.)
This fixes an issue related to the optimization in
"pvscan: only add device args to dev cache"
If the devices file is not used, and the lvm.conf filter
accepts devices via symlink names, then those devices won't
be accepted by pvscan for autoactivation. To resolve this,
recognize when the filter contains symlinks and disable the
optimization. When the optimization is disabled, a full
dev_cache_scan is performed, and symlinks are associated
with the device names passed to pvscan. filter-regex
will accept a device if symlinks to that device are accepted.
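For illustration, a filter of this shape is the kind that now disables
the optimization, since it only accepts devices through symlink names
(the by-id pattern is just an example):
  devices {
      # accept only /dev/disk/by-id symlinks, reject everything else
      filter = [ "a|^/dev/disk/by-id/|", "r|.*|" ]
  }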
If multipath component devices get through the filter and
cause lvm to see duplicate PVs, then check the wwid of the
devs and drop the component devices as if they had been
filtered. If a dm mpath device was found among the duplicates
then use that as the PV, otherwise do not use any of the
components as the PV.
"duplicate PVs" associated with multipath configs will no
longer stop commands from working.
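For reference, one way to confirm by hand what lvm compares (device
names are hypothetical; sdb and sdc are assumed to be two paths of one
multipath device):
  cat /sys/block/sdb/device/wwid
  cat /sys/block/sdc/device/wwid
  multipath -l                 # shows the dm mpath device owning those paths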
When devnames are used as device ids and devnames change,
then new devices need to be located for the PVs. If the old
devname is now used by a filtered device, this was preventing
the code from searching for the new device, so the PV was
reported as missing.
Copy another optimization from pvscan -aay to vgchange -aay.
When using the optimized label scan for only one VG, acquire the
VG lock prior to the scan. This allows vg_read to then skip the
repeated label scan that normally happens after locking the vg.
For completeness and consistency, adjust the behavior
for some variations of:
vgchange -aay --autoactivation event [vgname]
The current standard use is with a VG name arg, and the
command is only called when all pvs_online files exist.
This is the optimal case, in which only pvs_online devs
are read. This remains the same.
Clean up behaviors for some other unexpected uses of the
command:
. With no VG name arg, the command activates any VGs
that are complete according to pvs_online. If no
pvs_online files exist, it does nothing.
. If a VG name is used but no pvs_online files exist for
the VG, or the pvs_online files are incomplete, then
consider there could be a problem with the pvs_online
files, and fall back to a full label scan prior to
attempting the activation.
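For illustration, the cases above correspond to invocations like
(VG name hypothetical):
  vgchange -aay --autoactivation event         # no VG name: activate whatever pvs_online shows as complete
  vgchange -aay --autoactivation event vg0     # standard event case: only pvs_online devs are read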
Port another optimization from pvscan -aay to vgchange -aay:
"pvscan: only add device args to dev cache"
This optimization avoids doing a full dev_cache_scan, and
instead populates dev-cache with only the devices in the
VG being activated.
This involves shifting the use of pvs_online files from
the hints interface up to the higher level label_scan
interface. This specialized label_scan is structured
around creating a list of devices from the pvs_online
files. Previously, a list of all devices was created
first, and then reduced based on the pvs_online files.
The initial step of listing all devices was slow when
thousands of devices were present on the system.
This optimization extends the previous optimization that
used pvs_online files to limit the devices that were
actually scanned (i.e. reading to identify the device):
"vgchange -aay: optimize device scan using pvs_online files"
Port the old pvscan -aay scanning optimization to vgchange -aay.
The optimization uses pvs_online files created by pvscan --cache
to derive a list of devices to use when activating a VG. This
allows autoactivation of a VG to avoid scanning all devices, and
only scan the devices used by the VG itself. The optimization is
applied internally using the device hints interface.
The new option "--autoactivation event" is given to pvscan and
vgchange commands that are called by event activation. This
informs the command that it is being used for event activation,
so that it can apply checks and optimizations that are specific
to event activation. Those include:
- skipping the command if lvm.conf event_activation=0
- checking that a VG is complete before activating it
- using pvs_online files to limit device scanning
- new udev rule 69-dm-lvm.rules replaces 69-dm-lvm-meta.rules
  and lvm2-pvscan.service
- udev rule calls pvscan directly on the added device
- pvscan output indicates if a complete VG can be activated
- udev env var LVM_VG_NAME_COMPLETE is used to pass the complete
  VG name from pvscan to the udev rule
- udev rule uses systemd-run to run vgchange -aay <vgname>
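A rough sketch of the activation step run from the rule (the unit
options shown are an assumption, not the literal rule contents):
  # run asynchronously from udev once pvscan reports the VG as complete
  systemd-run --no-block lvm vgchange -aay --autoactivation event "$LVM_VG_NAME_COMPLETE"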
The new system_id_source="appmachineid" will cause
lvm to use an lvm-specific derivation of the machine-id,
instead of the machine-id directly. This is now
recommended in place of using machineid.
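A minimal lvm.conf sketch; the setting is assumed to live in the
global section like the other system_id_source values:
  global {
      # derive the system ID from an lvm-specific hash of the machine-id
      system_id_source = "appmachineid"
  }
The resulting system ID can then be inspected with 'lvm systemid'.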
Corrupt metadata text (with good mda header) was being handled
in the label_scan phase, but not in the vg_read phase. This
was sufficient because metadata areas would always be read and
checksummed during label_scan (metadata parsing was skipped
previously as an optimization.)
This changed with the optimization in
commit 61a6f9905e
"metadata: optimize reading metadata copies in scan"
Now, some metadata areas will not be read and checksummed
at all during the label_scan phase, only during the vg_read
phase. This means that bad metadata text may first be detected
in the vg_read phase. So, add equivalent bad metadata handling
to the vg_read path to match the label_scan path.
Looks like newer versions of mkfs.ext4 consume more 'real' disk space
when formatting virtual volumes - so double the size of the thin-pool
to have enough backend chunks for provisioning.
Do not call 'dmsetup info' for non-dm devices.
Better handling for VGNAME and LVNAME - when converting an LV as the
backend device, automatically recognize it and reuse the LV name for
the VDO LV.
Add a prompt for final confirmation before the actual conversion is
started (once confirmed, lvm2 commands no longer prompt, to avoid
leaving the conversion in the middle of its process.)
Some device id types can only be used with specific device major
numbers, so use this restriction to avoid some comparisons.
This is more efficient, and can avoid some incorrect matches.
When checking the active component of a thin LV with an external origin,
we need to check whether the external origin isn't already active.
For this, however, we need to use a layered check for the -real device.
Add tool 'vdoimport' to support easy conversion of existing
VDO-manager-managed VDO volumes into lvm2-managed VDO LVs.
When the converted physical volume is already a logical volume, the
conversion happens within the VG itself, just with validation of the
extent_size, so the virtually sized logical VDO volume can be expressed
in extents.
Example of basic simple usage:
vdoimport --name vg/vdolv /dev/mapper/vdophysicalvolume
Recent commit 84bd394cf9
"writecache: use block size 4096 when no fs is found"
changed the default writecache block size from 512 to 4096
when no file system is detected. The fs block size detection
requires the libblkid BLOCK_SIZE feature, so skip tests on
systems without this. Otherwise, a 4096 writecache added to
512 xfs leads to fs io or mount failures.
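One way to check whether the installed libblkid provides this
(assuming a recent util-linux that exports the BLOCK_SIZE tag; the
device path is hypothetical):
  blkid -p -o export /dev/vg/lv | grep BLOCK_SIZE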
This patch tests timeout handling after activating an LV in shareable
mode. It has the same logic as the test for LV exclusive mode,
except that it verifies the locking with shareable mode.
On the host A:
make check_lvmlockd_idm \
LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3,/dev/sdl3 \
LVM_TEST_MULTI_HOST=1 T=multi_hosts_lv_sh_timeout_hosta.sh
On the host B:
make check_lvmlockd_idm \
LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3,/dev/sdl3 \
LVM_TEST_MULTI_HOST=1 T=multi_hosts_lv_sh_timeout_hostb.sh
Signed-off-by: Leo Yan <leo.yan@linaro.org>
This patch tests timeout handling after activating an LV in exclusive
mode. It contains two scripts, for host A and host B separately.
The script on host A first creates VGs and LVs based on the passed
backing devices; every backing device gets a dedicated VG, and an LV
is created as well in that VG. Afterwards, all LVs are activated by
host A, so host A acquires the leases for these LVs. Then the test is
designed to fail on host A.
After host A fails, host B starts to run the paired testing script;
it first fails to activate the LVs since the locks are still leased to
host A; after lease expiration (after 70s), host B can acquire the
leases for the LVs and it can operate the LVs and VGs.
On the host A:
make check_lvmlockd_idm \
LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3,/dev/sdl3 \
LVM_TEST_MULTI_HOST=1 T=multi_hosts_lv_ex_timeout_hosta.sh
On the host B:
make check_lvmlockd_idm \
LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3,/dev/sdl3 \
LVM_TEST_MULTI_HOST=1 T=multi_hosts_lv_ex_timeout_hostb.sh
Signed-off-by: Leo Yan <leo.yan@linaro.org>
This patch adds LV testing on multiple hosts. There are two scripts:
the script multi_hosts_lv_hosta.sh is used to create LVs on one host,
and the second script multi_hosts_lv_hostb.sh will acquire the
global lock and VG lock, and remove the VGs. The testing flow verifies
the locking operations between two hosts with lvmlockd and the backend
locking manager.
On the host A:
make check_lvmlockd_idm \
LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3,/dev/sdl3 \
LVM_TEST_MULTI_HOST=1 T=multi_hosts_lv_hosta.sh
On the host B:
make check_lvmlockd_idm \
LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3,/dev/sdl3 \
LVM_TEST_MULTI_HOST=1 T=multi_hosts_lv_hostb.sh
Signed-off-by: Leo Yan <leo.yan@linaro.org>
This patch adds VG testing on multiple hosts. There are two scripts:
the script multi_hosts_vg_hosta.sh is used to create VGs on one host,
and the second script multi_hosts_vg_hostb.sh afterwards acquires the
global lock and VG lock, and removes the VGs. The testing flow verifies
the locking operations between two hosts with lvmlockd and the backend
locking manager.
On the host A:
make check_lvmlockd_idm \
LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3,/dev/sdl3 \
LVM_TEST_MULTI_HOST=1 T=multi_hosts_vg_hosta.sh
On the host B:
make check_lvmlockd_idm \
LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3,/dev/sdl3 \
LVM_TEST_MULTI_HOST=1 T=multi_hosts_vg_hostb.sh
Signed-off-by: Leo Yan <leo.yan@linaro.org>
If the IDM lock manager fails to access drives, whether partially
(e.g. it fails to access one of three drives) or totally, it should
handle these cases properly. When drives fail partially, if the lock
manager can still renew the lease for the locking, then it doesn't
need to take any action for the drive failure; otherwise, if it
detects that it cannot renew the locking majority, it needs to
immediately kill the VG from lvmlockd.
This patch adds a test to verify IDM lock manager failure;
the command can be used as below:
# make check_lvmlockd_idm \
LVM_TEST_BACKING_DEVICE=/dev/sdp3,/dev/sdl3,/dev/sdq3 \
LVM_TEST_FAILURE=1 T=idm_ilm_failure.sh
Signed-off-by: Leo Yan <leo.yan@linaro.org>
If the fabric breaks instantly, some of the drives connected on the
fabric disappear from the system. In this case, according to the
locking algorithm in idm, the lease is not lost, since half of the
drives are still alive and the lease can be renewed on them. On the
other hand, since the VG lock requires acquiring a majority of the
drives, and with half of the drives failed the majority cannot be
reached, the VG lock cannot be acquired and thus the VG metadata
cannot be changed.
This patch adds a half-brain failure test for idm; the test command
is as below:
# make check_lvmlockd_idm \
LVM_TEST_BACKING_DEVICE=/dev/sdp3,/dev/sdo3 LVM_TEST_FAILURE=1 \
T=idm_fabric_failure_half_brain.sh
Signed-off-by: Leo Yan <leo.yan@linaro.org>
If the fabric breaks instantly, the drives connected on the fabric
disappear from the system. In the worst case, the lease times out
and the drives cannot recover. So a new test is added to emulate
this scenario: it uses a drive for LVM operations and this drive is
also used for the locking scheme; if the drive and all its associated
paths (if the drive supports multiple paths) are disconnected, the
lock manager should stop the lockspace for the VG/LVs.
Afterwards, if the drive comes back, the VG/LVs residing on the
drive should be operable again. The test command is as below:
# make check_lvmlockd_idm \
LVM_TEST_BACKING_DEVICE=/dev/sdp3 LVM_TEST_FAILURE=1 \
T=idm_fabric_failure_timeout.sh
Signed-off-by: Leo Yan <leo.yan@linaro.org>
When a fabric failure occurs, the connection with the hosts is lost
instantly, and after a while it can recover so that the hosts can
continue to access the drives.
In this case, the lock manager should be reliable, handle the
situation dynamically, and allow the user to continue to use the
VG/LVs with the associated locking scheme.
This patch adds a test to emulate the fabric failure and verify LVM
commands in this case. The test usage is:
# make check_lvmlockd_idm \
LVM_TEST_BACKING_DEVICE=/dev/sdo3,/dev/sdp3,/dev/sdp4 \
LVM_TEST_FAILURE=1 T=idm_fabric_failure.sh
Signed-off-by: Leo Yan <leo.yan@linaro.org>
After lvmlockd exits abnormally and the daemon is relaunched, if LVM
commands continue to run, lvmlockd and the backend lock manager (e.g.
the sanlock lock manager or IDM lock manager) should be able to
continue serving the requests from LVM commands.
This patch adds a test to emulate lvmlockd failure, and to verify the
LVM commands after lvmlockd recovers. Below is an example of running
the test:
# make check_lvmlockd_idm \
LVM_TEST_BACKING_DEVICE=/dev/sdo3,/dev/sdp3,/dev/sdp4 \
LVM_TEST_FAILURE=1 T=lvmlockd_failure.sh
Signed-off-by: Leo Yan <leo.yan@linaro.org>
This patch adds a stress test which launches three threads:
one thread for creating/removing PVs, one thread for
creating/removing VGs, and the last one for LV operations.
Signed-off-by: Leo Yan <leo.yan@linaro.org>
This patch adds a stress test which launches two threads;
each thread creates an LV, then activates and deactivates it in a
loop, so this can test multi-threading in lvmlockd and its backend
lock manager.
Signed-off-by: Leo Yan <leo.yan@linaro.org>
This patch adds a stress test which loops creating,
activating and deactivating an LV in a single thread.
Signed-off-by: Leo Yan <leo.yan@linaro.org>
This patch introduces the test option LVM_TEST_LOCK_TYPE_IDM; when
this option is specified, the Seagate IDM lock manager is launched as
the backend for testing. Also add the prepare and remove shell scripts
for IDM.
Signed-off-by: Leo Yan <leo.yan@linaro.org>
While we try hard to spot arrays that are not yet in sync,
some kernels tend to block our lvm2 command in the kernel,
even for 5 seconds, while we resume these smaller raid arrays.
But since the result is not really wrong - report these
check failures only as TEST WARNING.
The autoactivation property can be specified in lvcreate
or vgcreate for new LVs/VGs, and the property can be changed
by lvchange or vgchange for existing LVs/VGs.
--setautoactivation y|n
enables|disables autoactivation of a VG or LV.
Autoactivation is enabled by default, which is consistent with
past behavior. The disabled state is stored as a new flag
in the VG metadata, and the absence of the flag allows
autoactivation.
If autoactivation is disabled for the VG, then no LVs in the VG
will be autoactivated (the LV autoactivation property will have
no effect.) When autoactivation is enabled for the VG, then
autoactivation can be controlled on individual LVs.
The state of this property can be reported for LVs/VGs using
the "-o autoactivation" option in lvs/vgs commands, which will
report "enabled", or "" for the disabled state.
Previous versions of lvm do not recognize this property. Since
autoactivation is enabled by default, the disabled setting will
have no effect in older lvm versions. If the VG is modified by
older lvm versions, the disabled state will also be dropped from
the metadata.
The autoactivation property is an alternative to using the lvm.conf
auto_activation_volume_list, which is still applied to VGs/LVs
in addition to the new property.
If VG or LV autoactivation is disabled either in metadata or in
auto_activation_volume_list, it will not be autoactivated.
An autoactivation command will silently skip activating an LV
when the autoactivation property is disabled.
To determine the effective autoactivation behavior for a specific
LV, multiple settings would need to be checked:
the VG autoactivation property, the LV autoactivation property,
the auto_activation_volume_list. The "activation skip" property
would also be relevant, since it applies to both normal and auto
activation.
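For illustration, typical usage looks like this (VG/LV/device names
are hypothetical):
  vgcreate --setautoactivation n vg0 /dev/sdb      # new VG with autoactivation disabled
  lvcreate --setautoactivation n -L1G -n lv0 vg0   # new LV with autoactivation disabled
  vgchange --setautoactivation y vg0               # re-enable it on an existing VG
  vgs -o name,autoactivation vg0                   # reports "enabled" or ""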
Switch to plain 'kill'; we should no longer need SIGKILL,
as polling can be interrupted.
Resolve a problem in aux wait_pvmove_lv_ready() that was using an
lvm command to check for the UUID - but this was interfering with
the VG lock and delaying confirmation.
This reduces the slow-down of the test - so it can run faster.
Looks like there was some missed version increase during development.
So with kernel >= 4.18, version 1.19 is enough to look like 1.20.
However, backported 1.19 targets seem to not provide all
the capabilities.
Added a comment that 'lvs' already initiates dmeventd.
Note: we don't have any query mechanism to check if dmeventd
is already running, except accessing the socket, which basically
starts dmeventd if it's not running.
For deterministic test results, the lvm2/dm services shall not be
present and running on the system, as they may randomize test results.
In case they are found present, this test ends with a warning (not a failure).
Some older instances of 'mdadm' opened legs in RW, then
closed and opened them again and expected exclusive access.
But here a udev rule can be fired - so on these versions
slow down the whole mdadm run by using strace, to
give the system a bit more time to finish the udev rule.
Just like the lvm command, ignore the 0/xxxx report when judging the status.
Avoid using an infinite loop and limit report checking to 100 checks.
If it would need more - something is not right.
The combination of throttling and a slowed device is a bit faster.
Also add a FIXME about the multiple spawned polling processes
when activating an individual LV for a pvmove.
Use the new mdadm_create aux wrapper for testing.
Place the functionality into a 2-pass loop - one pass for 'auto',
the other for 'start'.
Share the same tests between the raid level 0 and level 1 versions of raid.
We have some kind of catch-22 here - older kernels use 'resync'
while newer ones use 'recover' for the initial raid synchronization.
So now - how do we recognize which raid state we are in?
ATM it seems simplest to keep dropping of the primary raid
leg disabled unless we are in sync.
FIXME: we should add a target version check and enable removal.
Add a full filesystem sync, trying to fight a strange error from losetup:
losetup: loopa: failed to set up loop device: Resource temporarily unavailable
loop0: detected capacity change from 0 to 4096
loop_set_block_size: loop0 () has still dirty pages (nrpages=13)
Also reuse the internal aux wipefs_a.
This reverts commit 99b6173f10.
These tests are disabled with lvmlockd because they use
snapshots without an origin, which is not permitted in a
shared VG.
The user creates a file listing the real devices they want
lvm tests to use, and sets LVM_TEST_DEVICE_LIST.
lvm tests can use these with prepare_real_devs
and get_real_devs.
Other aux functions do not work with these devs.
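A minimal sketch of the setup (the file path and device names are
hypothetical):
  printf '%s\n' /dev/sdj3 /dev/sdk3 > /tmp/test-devs   # one real device per line
  export LVM_TEST_DEVICE_LIST=/tmp/test-devs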
To better test the actual fsadm in the test suite, avoid setting
LVM_BINARY locally, since the test setup already modifies
PATH to find the test's lvm binary as the 1st in the path.
Move extra md component detection into the label scan phase.
It had been in set_pv_devices which was deep within the vg_read
phase, which wasn't a good place (better to detect that earlier.)
Now that pv metadata info is available in the scan phase, the pv
details (size and device_hint) can be used for extra md checking.
Use the device_hint from the pv metadata to trigger a full md
component check if the device_hint begins with /dev/md.
Stop triggering full md component checks based on missing
udev info for a dev.
Changes to tests reflect that the code is now detecting
md components in some test cases where it wasn't before.
A cachevol can be forcibly detached when it's missing devices.
Also allow this if it's damaged/invalid and unrepairable.
This would be needed to recover data from the origin LV after
a cachevol is lost or damaged beyond repair.
In cases where lvconvert does not detect a fs block size on the
device, it falls back to choosing a writecache block size based
on the device's LBS and PBS (tries to match those.)
If the user specifies a writecache block size on the command
line (--cachesettings block_size=4096|512), lvconvert currently
fails and reports an error if the user-specified value does not
match the value lvconvert would have chosen based on LBS and PBS.
The purpose of allowing a user-specified value on the command line
is to override what lvconvert would otherwise do, so change this
to just print a warning that the user value does not match the
value that would be chosen based on the LBS/PBS, and then take
the user-specified value as the writecache block size.
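For illustration, the override now behaves as expected (VG/LV names
are hypothetical):
  # the user value wins; a warning is printed if it differs from the LBS/PBS-based choice
  lvconvert --type writecache --cachevol fast --cachesettings block_size=4096 vg/main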
In case legs of a raid0 LV are removed, the lvdisplay command still
reports 'available', though raid0 does not provide any resilience
compared to the other raid levels.
Also, lvdisplay does not display '(partial)' in case of missing raid0
legs, as opposed to the lvs command.
Enhance lvdisplay to report "NOT available" for any RaidLV type in case
too many legs are inaccessible, hence causing data loss. I.e. any leg
for raid0, all for raid1, more than 1 for raid4/5, more than 2 for
raid6, and completely lost mirror groups for raid10.
Add test/shell/lvdisplay-raid.sh.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1872678
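A quick way to observe the new reporting once a leg is lost (LV name
hypothetical):
  lvdisplay vg/raid0lv | grep "LV Status"    # now shows "NOT available" instead of "available"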
When a writecache sublv or an integrity metadata sublv
are partial (missing a dev), set the partial flag on
the upper level LV also, as is done for other sublvs.
Fix the two-step writecache detach in commit c32d7fed4f.
In the case of uncache, the cachevol is removed after
detaching the writecache. Since the detach only finishes
in the second step, the remove must wait until then.
Verify that corruption is corrected for raid levels other
than raid1. For other raid levels, attempt to corrupt the
given file pattern on each underlying device, since we don't
know which device contains the file being corrupted.
This ensures that corruption is actually introduced
when testing the other raid levels.
Verify that corruption is being corrected by checking that
the integritymismatches count is non-zero for the raid LV,
which includes the total from all images (since we don't
know which image will have the corruption.)
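The check boils down to reading the reporting field on the raid LV
(name hypothetical):
  lvs -o name,integritymismatches vg/rlv    # non-zero once the injected corruption has been detected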
Test the case where the filesystem has been corrected via fsck.
In such a case fsck returns '1' as success, and this should be
handled the same way as '0' since the fs is correct.
Restructure the pvscan code, and add new temporary files
that list pvids in a VG, used for processing PVs that
have no metadata.
The new temp files, in /run/lvm/pvs_lookup/<vgname>, allow a
proper pvscan --cache to be done on PVs that have no metadata.
pvscan --cache <dev> is only supposed to read <dev>, but when
<dev> has no metadata, this had not been possible. The
command had to fall back to scanning all devices to read all
VG metadata to get the list of all PVIDs needed to check for
a complete VG. Now, the temp file can be used in place of
reading metadata from all PVs on the system.
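For illustration (device and VG names are hypothetical):
  pvscan --cache /dev/sdb         # reads only /dev/sdb, even if it carries no metadata
  cat /run/lvm/pvs_lookup/vg0     # pvids recorded for the VG, used to check completeness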
Since 'BLKZEROOUT' streams out more blocks at once, it can easily
zero out a larger set of blocks after the 1st failing one.
So the test is adapted to fully 'hide' the swap header under the error target.
Cover the case where two copies of metadata have the
same seqno but different checksums. Also elaborate
on an existing fixme in the code for this case, since
we should be doing something better here.
This had uncovered an issue with reopening
fds in read-write mode.
This test seems to be hitting some corner case in the handling of
the out-of-metadata-space condition in thin-pool.
Add a few more aiding notes and bits of functionality.
Also add a missing '|| true' to the now direct-IO dd command.
Shorten the running time of the test.
Fix some issues in the invoked resizing script so it returns
the correct return code, and dmeventd can be a little bit quicker
in this test.
When loop can't handle the sector-size option, the failure caused a
double fail due to access of an unbound variable.
Also fix the expression for 'rm' and remove loops after loop release.
If the test runs on a loop device backend with 512-byte sectors,
xfs selects this smaller sector size and then the data does not fit
(we would need -l9 with most of the 'raids').
With 4K sectors the data always fits.
Shorten the test and make it easily readable by moving the same code
into a function, and remove one duplicated test for the 512,4096
combination.
Always use scsi_debug - since the default ramdisk or loop device
backend is unpredictable.
Use the new SKIP_WITH_LOW_SPACE and set a higher requirement for free space.
But this test still can't run on the system's tmpfs directories -
as they typically provide less than 2G of space, and when the test
runs there it also provisions all READ pages.
A BRD (ramdisk) device should work.
Extend the _wait_recalc() loop for slower hw.
When creating large raids which do not need to be fully synchronized,
use them on delayed devices - so even less data needs to be read/written.
Remove an unneeded lvchange, as lvcreate is already leaving the LV inactive.
Replace printf with awk as the generator.
While the previous commit c9b40083fc
decreased the version to 1.19 for using bigger datasets, it's not
been quite right - from our bb machine it looks like the
bigger metadata consumption started with 1.19 and kernel 4.18
(fc27).
Use a bigger volume and slow down writing to the cache device.
This makes it simpler to reach the 'dirty' state.
Also document that exactly 1 SIGINT has to fire the aborting of flushing.
For proper checking of the extension progress, require version 1.15.
It looks like with older versions the extension happens during a very
slow resume within the lvm command - although the speed is still
somewhat slow with the latest version.
Speed up the first synchronization a bit with just a 50ms write delay,
but later also set a delay on reads to slow down lvextend.
FIXME: there are still things to look at:
0 229376 raid raid1 2 AA 229376/229376 idle 0 0
0 229376 raid raid1 2 AA 0/229376 frozen 0 0 -
0 262144 raid raid1 2 AA 229376/262144 repair 0 0 -
0 262144 raid raid1 2 AA 229376/262144 repair 0 0 -
0 262144 raid raid1 2 AA 245888/262144 repair 0 0 -
On a test system with the 'default' filter (aka accept all), the test
can suffer from automatic system activation after enabling a device -
so for created LVs, set up skipping of this automatic
activation. This should prevent the pvscan service from getting
the LVs into the dm table.
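A sketch of how a test can opt its LVs out, assuming the
--setautoactivation flag described above is the mechanism used (the
test may do it differently):
  lvcreate -an --setautoactivation n -L8M -n $lv1 $vg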