Thin kernel target 1.9 still does not support online resize of
thin pool metadata properly - so disable it, expecting a much
higher version, and re-enable after the kernel is fixed.
If we are stuck in userspace for too long without output,
grab kernel stack traces.
If we just produce too many lines of output, it's probably
not a kernel-related bug.
When the test is interrupted because debug.log has grown too big,
and the test doesn't react to SIGINT - and can only be
killed with SIGKILL - it's still valuable to print at least
a portion of this debug.log (currently 4MB).
LVM_TEST_UNLIMITED can be set to avoid this limitation
(i.e. when a busy-looping lvm command needs to keep running
for gdb attachment).
Make it easier to run a live lvmetad in debugging mode and
to avoid conflicts if multiple test instances need to be run
alongside a live one.
No longer require -s when -f is used: use built-in default.
Add -p to lvmetad to specify the pid file.
No longer disable pidfile if -f used to run in foreground.
If specified socket file appears to be genuine but stale, remove it
before use.
On error, only remove the lvmetad socket file if it was created by the
same process. (The previous code removed the socket even while a running
instance was using it!)
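For example, a live debug instance might be started like this
(paths are illustrative; options per the changes above):
  lvmetad -f -p /tmp/test-lvmetad.pid -s /tmp/test-lvmetad.socket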
Do not use signature wiping for newly created LVs in tests - we're
reusing the devs in tests and such detection could just interfere
inappropriately. We'd need to modify all tests to answer the prompt
about whether any signature found should be removed, or we'd need
to use the "-y" option for all lvcreates in tests. It's better to
disable this feature here and add a separate test for the signature
wiping functionality.
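A sketch of how the tests might switch this off, assuming the
allocation/wipe_signatures_when_zeroing_new_lvs setting is the
control point (hypothetical lvm.conf fragment):
  allocation {
      wipe_signatures_when_zeroing_new_lvs = 0
  }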
Changed naming of methods from camel case to all lower case with
underscores per guidelines. Changed any methods that can be
static methods to static.
Signed-off-by: Tony Asleson <tasleson@redhat.com>
The 3.12.0 kernel prevents the raid test from being usable,
leaving unremovable devices in the table.
This needs to be fixed ASAP; meanwhile disable the test to keep
the test machines at least usable.
Add 'can_use_16T' to detect systems where we can
safely use 16T devices without causing system deadlocks.
On affected systems a 16T size leads to endless loops in udevd
- it calls blkid, which tries a cached read from such a device,
and this ends in an endless loop.
Related problems:
https://bugzilla.redhat.com/show_bug.cgi?id=1015028
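Hypothetical use in a test (the skip helper is assumed):
  aux can_use_16T || skip
  lvcreate -an -L 16T -n $lv1 $vg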
Reverts the previously added udevsettle call.
It seems to be unrelated: while udev on an old system may take over 10
minutes to finish its very slow and CPU-intensive work, it doesn't
interact directly with the created device - it only accesses the
/dev/mapper/control node via dmsetup - so the device is only occasionally
blocked by something else.
The patch helps a bit when lvm2 is built with udev_sync support disabled
but udevd runs on the system - since it then randomly influences even
unrelated tests - so before every test wait at least until udevd has
settled.
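A sketch of such a pre-test wait, assuming udevadm is available:
  udevadm settle --timeout=15 || true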
Initial testing of thin pool metadata with the thin repair tools.
Try to use the tools from configuration settings, but allow them
to be overridden by setting these variables:
LVM_TEST_THIN_CHECK_CMD,
LVM_TEST_THIN_DUMP_CMD,
LVM_TEST_THIN_REPAIR_CMD
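For example (paths and test name are illustrative):
  LVM_TEST_THIN_CHECK_CMD=/usr/sbin/thin_check \
  LVM_TEST_THIN_DUMP_CMD=/usr/sbin/thin_dump \
  LVM_TEST_THIN_REPAIR_CMD=/usr/sbin/thin_repair \
  make check T=shell/thin-repair.sh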
FIXME: the test reveals some more important bugs:
pvremove -ff also needs --yes
vgremove -ff does not remove metadata when there are no real LVs.
vgreduce is not able to reduce a VG with a pool without the pool's PVs
Reshape the code a bit to make socketpair 'swappable' with a plain old
pipe call.
Display status for FAILED error.
Increase the buffer to always hold at least one page.
Print error results in capitals.
1) When converting from an x-way mirror/raid1 to a y-way mirror/raid1,
the default behaviour should be to keep the same segment type.
2) When converting from linear to mirror or raid1, the default behaviour
should honor the mirror_segtype_default.
3) When converting and the '--type' argument is specified, the '--type'
argument should be honored.
The tests should have caught such conditions, but errors in the tests
caused the issue to go unnoticed. The code has been fixed to perform #2
properly, the tests have been corrected to properly test for #2, and a
few other tests were changed to specify '--type mirror' explicitly where
necessary.
A known issue with kmem_cache is causing failures while testing
RAID 4/5/6 device replacement. Blacklist the offending kernel
so that these tests are not performed there.
Since our current vgcfgbackup/restore doesn't deal
with differences in active volumes between the current and
restored sets of volumes - run the test with inactive LVs.
Rewrite check lv_on and add a new lv_tree_on.
Move more code unrelated to the pvmove test out to the check & get
sections (so it does not obfuscate the trace output unnecessarily).
Use the new lv_tree_on().
NOTE: unsure how the snapshot origin should be accounted here.
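Hypothetical usage in a test (LV and device names are illustrative):
  check lv_on $vg $lv1 "$dev1"        # every segment of lv1 is on dev1
  check lv_tree_on $vg $lv1 "$dev1"   # lv1 and all its sub-LVs are on dev1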
Split pmove-all-segments into separate tests for raid and thins
(so the test output properly shows what has been skipped in test)
Update usage of "" around shell vars.
trim needs to trim both sides now.
trim also removes debug.log, since it's only called when an lvm command
has finished properly (so if something fails afterwards, there
is no misleading debug trace in the log).
'die' evaluates the given string - so \n can be used for a
multiline error report.
Also remove debug.log, since the command finished properly when we
call 'die'.
Note: we should not call 'die' after an lvm command failure.
lvchange-raid.sh checks to ensure that the 'p'artial flag takes
precedence over the 'w'ritemostly flag by disabling and re-enabling
a device in the array. Most of the time this works fine, but
sometimes the kernel notices the device failure before it is
re-enabled. In that case, the attr flag will not return to 'w' but
to 'r'efresh, because 'r'efresh also takes precedence over
the 'w'ritemostly flag. So we also do a quick check for 'r' and
not just 'w'.
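A sketch of the relaxed assertion (helper names per the test
library; exact field handling is illustrative):
  a=$(get lv_field $vg/${lv1}_rimage_1 lv_attr -a)
  case $a in
    *w|*r) ;;   # 'w'ritemostly restored, or 'r'efresh took precedence
    *) die "unexpected lv_attr: $a" ;;
  esac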
Add a very simple hack for embedding /var/log/messages into
the test output - it's not ideal since it sometimes breaks lines,
but it still gives valuable info.
The same corner cases that exist for snapshots on mirrors exist for
any logical volume layered on top of mirror. (One example is when
a mirror image fails and a non-repair LVM command is the first to
detect it via label reading. In this case, the LVM command will hang
and prevent the necessary LVM repair command from running.) When
a better alternative exists, it makes no sense to allow a new target
to stack on mirrors as a new feature. Since RAID is now capable of
running EX in a cluster and thin is not active-active aware, it makes
sense to pair these two rather than mirror+thinpool.
As further background, here are some additional comments that I made
when addressing a bug related to mirror+thinpool:
(https://bugzilla.redhat.com/show_bug.cgi?id=919604#c9)
I am going to disallow thin* on top of mirror logical volumes.
Users will have to use the "raid1" segment type if they want this.
This bug has come down to a choice between:
1) Disallowing thin-LVs from being used as PVs.
2) Disallowing thinpools on top of mirrors.
The problem is that the code in dev_manager.c:device_is_usable() is unable
to tell whether there is a mirror device lower in the stack from the device
being checked. Pretty much anything layered on top of a mirror will suffer
from this problem. (Snapshots are a good example of this; and option #1
above has been chosen to deal with them. This can also be seen in
dev_manager.c:device_is_usable().) When a mirror failure occurs, the
kernel blocks all I/O to it. If there is an LVM command that comes along
to do the repair (or a different operation that requires label reading), it
would normally avoid the mirror when it sees that it is blocked. However,
if there is a snapshot or a thin-LV that is on a mirror, the above code
will not detect the mirror underneath and will issue label reading I/O.
This causes the command to hang.
Choosing #1 would mean that thin-LVs could never be used as PVs - even if
they are stacked on something other than mirrors.
Choosing #2 means that thinpools can never be placed on mirrors. This is
probably better than we think, since it is preferred that people use the
"raid1" segment type in the first place. However, RAID* cannot currently
be used in a cluster volume group - even in EX-only mode. Thus, a complete
solution for option #2 must include the ability to activate RAID logical
volumes (and perform RAID operations) in a cluster volume group. I've
already begun working on this.
Creation, deletion, [de]activation, repair, conversion, scrubbing
and changing operations are all now available for RAID LVs in a
cluster - provided that they are activated exclusively.
The code has been changed to ensure that no LV or sub-LV activation
is attempted cluster-wide. This includes the often overlooked
operations of activating metadata areas for the brief time it takes
to clear them. Additionally, some 'resume_lv' operations were
replaced with 'activate_lv_excl_local' when sub-LVs were promoted
to top-level LVs for removal, clearing or extraction. This was
necessary because it forces the appropriate renaming actions that
occur via resume in the single-machine case, but which won't happen in
a cluster due to the necessity of acquiring a lock first.
The *raid* tests have been updated to allow testing in a cluster.
For the most part, this meant creating devices with '-aey' if they
were to be converted to RAID. (RAID requires the converting LV to
be EX because it is a condition of activation for the RAID LV in
a cluster.)
Simulate a system crash and restart pvmove after the next VG
activation.
Test is catching regression introduced in 2.02.99 for partial tree
creation changes.
Add a function to create a device with slower responses.
Useful for testing things that need to happen during an ongoing
operation - with a 'delayed' device, much smaller device sizes are
needed and the behaviour is much more deterministic (though still not
optimal).
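Hypothetical usage (the read-ms/write-ms argument order is assumed):
  aux delay_dev "$dev2" 0 100    # ~100ms write delay on dev2
  lvcreate -an -L4 -n $lv1 $vg "$dev2"
  aux enable_dev "$dev2"         # restore normal behaviour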
After enable_dev, the following commands were not
consistently seeing the pv on it.
Alasdair explained, "whenever enabling/disabling devs
outside the tools (and you aren't trying to test how
the tools cope with suddenly appearing/disappearing
devices) use "vgscan""
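E.g.:
  aux enable_dev "$dev1"
  vgscan    # rescan so subsequent commands see the PV consistently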
The original "check" target stays confined to a local device directory, while
check_full does 6 flavours, 3 with a local device directory and 3 with the
global /dev directory (the latter are prefixed with "s" for
"system"). I.e.: normal, cluster, lvmetad, snormal, scluster, slvmetad.
Patch includes RAID1,4,5,6,10 tests for:
- setting writemostly/writebehind
* syncaction changes (i.e. scrubbing operations)
- refresh (i.e. reviving devices after transient failures)
- setting recovery rate (sync I/O throttling)
while the RAID LVs are under a thin-pool (both data and metadata)
* not fully tested because I haven't found a way to force bad
blocks to be noticed in the testsuite yet. Works just fine
when dealing with "real" devices.
Test moving linear, mirror, snapshot, RAID1,5,10, thinpool, thin
and thin on RAID. Perform the moves along with a dummy LV and
also without the dummy LV by specifying a logical volume name as
an argument to pvmove.
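Illustrative forms of the two cases:
  pvmove "$dev1" "$dev2"           # move everything off dev1
  pvmove -n $lv1 "$dev1" "$dev2"   # move only extents belonging to lv1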
These test the toollib functions that select
vgs/lvs to process based on command line args:
empty, vg name(s), lv name(s), vg tag(s),
lv tag(s), and combinations of all.
1) Since the min|maxrecoveryrate args are size_kb_ARGs and they
are recorded (and sent to the kernel) in terms of kB/sec/disk,
we must back out the factor multiple done by size_kb_arg. This
is already performed by 'lvcreate' for these arguments.
2) Allow all RAID types, not just RAID1, to change these values.
3) Add min|maxrecoveryrate_ARG to the list of 'update_partial_unsafe'
commands so that lvchange will not complain about needing at
least one of a certain set of arguments and failing.
4) Add tests that check that these values can be set via lvchange
and lvcreate and that 'lvs' reports back the proper results.
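For example (rates are in kB/sec/disk; the report field names are
assumed):
  lvcreate --type raid1 -m 1 -L 10M --maxrecoveryrate 100 -n $lv1 $vg
  lvchange --minrecoveryrate 10 --maxrecoveryrate 50 $vg/$lv1
  lvs -o +raid_min_recovery_rate,raid_max_recovery_rate $vg/$lv1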
In those places where mirrors were being created while assuming
a default segment type of "mirror", we include the '--type mirror'
argument to explicitly set the segment type. This will preserve
the mirror testing that is performed even though the default
mirroring segment type is now "raid1".
Support tests with abort when libdm encounters an internal
error - i.e. for the dmsetup tool.
Code execution will be aborted when the
env var DM_ABORT_ON_INTERNAL_ERRORS is set to 1.
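E.g.:
  DM_ABORT_ON_INTERNAL_ERRORS=1 dmsetup table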
When running in the context of the test framework
we need to limit our PVs to use to those created
in the framework.
Signed-off-by: Tony Asleson <tasleson@redhat.com>
Suggest using the _tdata and _tmeta devices for that.
This fixes a regression from the too-relaxed change in
f1d5f6ae81
Without this patch some empty LVs are created before the
mirror code recognizes it cannot continue.
(in release fix)
When the merging of a snapshot is finished, we need to clean the dm
table entries for the snapshot and the -cow device. So for a merging
snapshot we have to activate_lv the plain 'cow' LV and let the table
resolver do its work - shortly a deactivate_lv() request
will follow - in a cluster this needs the LV lock to be held by clvmd.
Also update a test - add a small wait - in case lvremove is not 'fast
enough' and the merging process has not yet been stopped and $lv1 removed
in the background. Otherwise the following lvcreate occasionally finds
the name $lv1 still in use.
(in release fix)
We check the version number of dm-raid before testing certain
features to make sure they are present. However, this has
become somewhat complicated by the fact that the version #'s
in the upstream kernel and the RHEL6 kernel have been diverging.
This has been a necessity because the upstream kernel has
undergone ABI changes that have necessitated a bump in the
'Y' component of the version #, while the RHEL6 kernel has not.
Thus, we need to know that the ABI has not changed but the
features have been added. So, the current version #'ing stands
as follows:
RHEL6 Upstream Comment
======|==========|========
** Same until version 1.3.1 **
------|----------|--------
N/A | 1.4.0 | Non-functional change.
| | Removes arg from mapping function.
------|----------|--------
1.3.2 | 1.4.1 | RAID10 fix redundancy validation checks.
------|----------|--------
1.3.5 | 1.4.2 | Add RAID10 "far" and "offset" algorithm support.
| | Note this feature came later in RHEL6 as part of
| | a separate update/feature.
------|----------|--------
1.3.3 | 1.5.0 | Add message interface to allow manipulation of
| | the sync_action.
| | New status (STATUSTYPE_INFO) fields: sync_action
| | and mismatch_cnt.
------|----------|--------
1.3.4 | 1.5.1 | Add ability to restore transiently failed devices
| | on resume.
------|----------|--------
1.3.5 | 1.5.2 | 'mismatch_cnt' is zero unless [last_]sync_action
| | is "check".
------|----------|--------
To simplify, writemostly/writebehind, scrubbing, and transient device
failure restoration are all tested based on the same version
requirements: (1.3.5 < V < 1.4.0) || (V > 1.5.2). Since kernel
support for writemostly/writebehind has been around for some time,
this could mean a reduction in the scope of kernels tested for this
feature. I don't view this as much of a problem, since support for
this feature was only recently added to LVM. Thus, the user would
have to be using a very recent LVM version with an older kernel.
The mismatch count reported by a dm-raid kernel target used
to be effectively random, unless it was checked after a
"check" scrubbing action had been performed. Updates to the
kernel now mean that the mismatch count will be 0 unless a
check has been performed and discrepancies had been found.
This has been the intended behaviour all along.
This patch updates the test suite to handle the change.
- lvs -o lv_attr now has 10 indicator bits
- use '--ignoremonitoring' instead of the previously used shortcut '--ig'
(since it would be ambiguous with the new '--ignoreactivationskip')
Update code to match lvm coding standards
Disable/skip the test - since it accesses VGs available in the system.
Before re-enabling, validate that it's not touching any PV outside those
created during the test.
Create/remove PV
Create/remove snapshots (old type & thin)
PV lookups based on name and UUID
PV resize
Signed-off-by: Tony Asleson <tasleson@redhat.com>
After the last rebase, the existing unit test case was
run, which uncovered a number of errors that required
attention.
Uninitialized variables and changes to the type of numeric
return values were addressed.
Signed-off-by: Tony Asleson <tasleson@redhat.com>
Test the different RAID lvchange scenarios under snapshot as well.
This patch also updates calculations for where to write to an
underlying PV when testing various syncactions.
If the user upconverted a linear LV to a mirror without specifying
the segment type ("--type mirror" vs "--type raid1"), the "mirror"
segment type would be chosen without consulting the
'mirror_segtype_default' setting in lvm.conf. This setting is now used
as the basis for determining which type to use if left unspecified.
When the vgname did not exist in the metadata, the code crashed on a
double free in format_instance destroy() - since the VG was created,
used the FID and was released - which also released the FID, so further
use accessed bad memory.
Fix this code path before release_vg() so the FID still exists
when _vg_read_file_name() returns NULL.
Assumed size of 4M was too large and the test was failing because
'dd' was failing to perform its write.
Calculate the size we need to write with 'dd' instead, so we
don't overrun the device.
aux updates:
prepare_vg now creates a clustered VG for cluster tests.
Since dm-raid doesn't work in a cluster, skip the cluster
test when someone checks for the dm-raid target, until fixed.
Support vgsplit for VGs with thin pools and thin volumes.
If the thin data and thin metadata volumes are moved to a new VG,
move all related thin volumes there as well and check that external
origins are also present in the new VG.
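A sketch, assuming the pool's _tdata and _tmeta sit on $dev2:
  vgsplit $vg $vg2 "$dev2"   # the pool, its thin LVs and external
                             # origins all land in $vg2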
Add a limit for the buffer so that if the test is running in some loop
and generating lots of output, the output log doesn't get too large
for a browser to display (at least Firefox is quite confused
by 300MB logs).
For now limit the max output of one test to 32MB.
If there is a need to see the full log, set LVM_TEST_UNLIMITED to
disable interruption of the test.
harness now also prints the max memory used during a test,
its user and system time, and the number of read/write operations.
For the non-udev path use DM_DEFAULT_NAME_MANGLING_MODE.
Skip this test when using the real /dev dir, since udev is not able
to create such a device name unless mangled...
This means that a test failure in one flavour no longer prevents all the
subsequent flavours from running. We also get an aggregate summary at
the end of the entire batch, instead of summaries interspersed with
progress output. Do not fail when flavour overrides are empty
This fixes a long standing regression since LVM2 2.02.74 (commit 4efb1d9c,
"Update heuristic used for default and detected data alignment.")
The default PE alignment could be used (via MAX()) even if it was
determined that the device's MD stripe width, or minimal_io_size or
optimal_io_size were not factors of the default PE alignment (either 64K
or the newer default of 1MB, etc). This bug would manifest if the
default PE alignment was larger than the overriding hint that the
device provided (e.g. default of 1MB vs optimal_io_size of 768K).
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Support for exclusive activation of snapshots revealed some problems.
When a snapshot is created, the COW LV is activated first (for clearing)
and then transformed into the snapshot's COW LV, but this left the lock
for that LV active in the cluster, and the lock could not be removed
from dlm unless the snapshot was removed within the same dlm session.
If the user tried to remove the snapshot after rebooting a node, the
lock was missing and the COW LV could not be detached.
The patch modifies the approach in this way:
Always deactivate the COW LV for a clustered VG after clearing (so it's
activated again via the implicit snapshot activation rule when the
snapshot is activated).
When the snapshot is removed, activate the COW LV as an independent LV,
so the lock will exist for that LV, but only while the snapshot is
active.
Also add a test case for snapshot removal after a cluster reboot.
'lvchange' is used to alter a RAID 1 logical volume's write-mostly and
write-behind characteristics. The '--writemostly' parameter takes a
PV as an argument with an optional trailing character to specify whether
to set ('y'), unset ('n'), or toggle ('t') the value. If no trailing
character is given, it will set the flag.
Synopsis:
lvchange [--writemostly <PV>:{t|y|n}] [--writebehind <count>] vg/lv
Example:
lvchange --writemostly /dev/sdb1:y --writebehind 512 vg/raid1_lv
The last character in the 'lv_attr' field is used to show whether a device
has the WriteMostly flag set. It is signified with a 'w'. If the device
has failed, the 'p'artial flag has priority.
Example ("nosync" raid1 with mismatch_cnt and writemostly):
[~]# lvs -a --segment vg
LV VG Attr #Str Type SSize
raid1 vg Rwi---r-m 2 raid1 500.00m
[raid1_rimage_0] vg Iwi---r-- 1 linear 500.00m
[raid1_rimage_1] vg Iwi---r-w 1 linear 500.00m
[raid1_rmeta_0] vg ewi---r-- 1 linear 4.00m
[raid1_rmeta_1] vg ewi---r-- 1 linear 4.00m
Example (raid1 with mismatch_cnt, writemostly - but failed drive):
[~]# lvs -a --segment vg
LV VG Attr #Str Type SSize
raid1 vg rwi---r-p 2 raid1 500.00m
[raid1_rimage_0] vg Iwi---r-- 1 linear 500.00m
[raid1_rimage_1] vg Iwi---r-p 1 linear 500.00m
[raid1_rmeta_0] vg ewi---r-- 1 linear 4.00m
[raid1_rmeta_1] vg ewi---r-p 1 linear 4.00m
A new reportable field has been added for writebehind as well. If
write-behind has not been set or the LV is not RAID1, the field will
be blank.
Example (writebehind is set):
[~]# lvs -a -o name,attr,writebehind vg
LV Attr WBehind
lv rwi-a-r-- 512
[lv_rimage_0] iwi-aor-w
[lv_rimage_1] iwi-aor--
[lv_rmeta_0] ewi-aor--
[lv_rmeta_1] ewi-aor--
Example (writebehind is not set):
[~]# lvs -a -o name,attr,writebehind vg
LV Attr WBehind
lv rwi-a-r--
[lv_rimage_0] iwi-aor-w
[lv_rimage_1] iwi-aor--
[lv_rmeta_0] ewi-aor--
[lv_rmeta_1] ewi-aor--
New options to 'lvchange' allow users to scrub their RAID LVs.
Synopsis:
lvchange --syncaction {check|repair} vg/raid_lv
RAID scrubbing is the process of reading all the data and parity blocks in
an array and checking to see whether they are coherent. 'lvchange' can
now initiate the two scrubbing operations: "check" and "repair". "check"
will go over the array and record the number of discrepancies but not
repair them. "repair" will correct the discrepancies as it finds them.
'lvchange --syncaction repair vg/raid_lv' is not to be confused with
'lvconvert --repair vg/raid_lv'. The former initiates a background
synchronization operation on the array, while the latter is designed to
repair/replace failed devices in a mirror or RAID logical volume.
Additional reporting has been added for 'lvs' to support the new
operations. Two new printable fields (which are not printed by
default) have been added: "syncaction" and "mismatches". These
can be accessed using the '-o' option to 'lvs', like:
lvs -o +syncaction,mismatches vg/lv
"syncaction" will print the current synchronization operation that the
RAID volume is performing. It can be one of the following:
- idle: All sync operations complete (doing nothing)
- resync: Initializing an array or recovering after a machine failure
- recover: Replacing a device in the array
- check: Looking for array inconsistencies
- repair: Looking for and repairing inconsistencies
The "mismatches" field with print the number of descrepancies found during
a check or repair operation.
The 'Cpy%Sync' field already available to 'lvs' will print the progress
of any of the above syncactions, including check and repair.
Finally, the lv_attr field has changed to accommodate the scrubbing
operations as well. The role of the 'p'artial character in the lv_attr
report field has expanded. "Partial" is really an indicator of the
health of a logical volume, and it makes sense to extend this to include
other health indicators as well, specifically:
'm'ismatches: Indicates that there are discrepancies in a RAID
LV. This character is shown after a scrubbing
operation has detected that portions of the RAID
are not coherent.
'r'efresh : Indicates that a device in a RAID array has suffered
a failure and the kernel regards it as failed -
even though LVM can read the device label and
considers the device to be ok. The LV should be
'r'efreshed to notify the kernel that the device is
now available, or the device should be 'r'eplaced
if it is suspected of failing.
'in_sync' was using the last field in the RAID status output as
the location of the sync ratio field. The sync ratio may not always
be the last field, but it will always be the 7th field. So we switch
to using the absolute position rather than computing the index of the
last field.
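Sketch of the fixed extraction (raid status line:
start len raid <type> <#devs> <health> <cur>/<total> ...):
  dmsetup status "$vg-$lv1" | awk '{ print $7 }'   # e.g. 490880/490880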
Instead of checking lv_is_active() for the thin pool LV,
query the whole pool via the new pool_is_active().
This fixes a problem where we could not change discards settings
for an active pool device when the actual layer for the pool
device was inactive, but thin volumes using the thin pool
were active.
Calling pvscan --cache with -aay on a PV without an MDA would spuriously fail
with an internal error, because of an incorrect assumption that a parsed VG
structure was always available. This is not true and the autoactivation handler
needs to call vg_read to obtain metadata in cases where the PV had no MDAs to
parse. Therefore, we pass vgid into the handler instead of the (possibly NULL)
VG coming from the PV's MDA.
Add an aux function to replace a PV with a specifically damaged device.
Usage:
aux error_dev "$dev1" 8:32 96:8
Starting at sector 8, 32 (512b) sectors will error, and starting at
sector 96, the next 8 sectors will fail on rw.
The rest of the device is preserved.
For testing:
dd if="$dev1" of=x bs=512 count=104 conv=sync,noerror iflag=direct
Commit bf2741376d started to use
lv_is_active() instead of a call to lv_info & info.exists, so
we also cover cluster-activated devices.
For snapshots the conversion was not correct and introduced a
regression by blocking creation of a snapshot of an inactive LV.
Fix it by assigning lv_is_active() directly.
Note: we still have a minor issue to fix - making the
lv_is_???? functions able to return error states, since
lv_info() may fail.