Restore fsync() call for more accurate tracking by buildbot.
Try a different, rather tricky way using static_cast to reuse the
already opened FD instead of a separate open(), fsync(), close().
It's pretty strange there is no way to enforce fsync() for
C++ iostreams; flush() is actually not equivalent.
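A minimal sketch of such a trick, assuming libstdc++ (the fdbuf helper
and its reliance on the protected _M_file member illustrate the
approach, not the exact code used):

    #include <fstream>
    #include <unistd.h>

    // libstdc++ keeps the open file in the protected member _M_file
    // of std::filebuf; a thin subclass can expose its descriptor.
    struct fdbuf : std::filebuf {
        int fd() { return _M_file.fd(); }
    };

    static void fsync_ofstream(std::ofstream &os)
    {
        os.flush();  // push user-space buffers to the kernel first
        // Downcast the stream's filebuf to reach the protected member;
        // formally shaky, but works with libstdc++ in practice.
        int fd = static_cast<fdbuf *>(os.rdbuf())->fd();
        if (fd >= 0)
            ::fsync(fd);  // force data to stable storage
    }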
Add Timespec class to increase time resolution to milliseconds
(can switch to microseconds if ever needed).
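A hedged sketch of what such a class could look like (only the Timespec
name comes from the commit; the interface shown here is illustrative):

    #include <ctime>
    #include <cstdint>

    // Monotonic timestamp with millisecond resolution; switching to
    // microseconds would only change the divisor below.
    class Timespec {
        struct timespec _ts;
    public:
        Timespec() { clock_gettime(CLOCK_MONOTONIC, &_ts); }
        int64_t milliseconds() const {
            return int64_t(_ts.tv_sec) * 1000 + _ts.tv_nsec / 1000000;
        }
        bool operator<(const Timespec &o) const {
            return _ts.tv_sec < o._ts.tv_sec ||
                   (_ts.tv_sec == o._ts.tv_sec && _ts.tv_nsec < o._ts.tv_nsec);
        }
    };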
Use more const and const_iterators and pass by reference.
Output rusage also to the list result file.
Reduce inlining of C++ constructors.
If the system changes, locate PVs that appear on different devices,
and update the device IDs in the devices file. A system change is
detected by saving the DMI product_uuid or hostname in the devices
file, and comparing it to the current system value. If a root PV
is restored or copied to a new system with different devices, then
the product_uuid or hostname should change, and trigger lvm to
locate PVIDs from system.devices on new devices.
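In outline, the detection boils down to comparing the identifier saved
in system.devices with the live system value (an illustrative sketch,
not the actual lvm code):

    #include <string>

    // Returns true when the saved identifiers indicate the devices file
    // was written on a different system, so PVIDs must be re-located.
    bool system_changed(const std::string &saved_product_uuid,
                        const std::string &saved_hostname,
                        const std::string &current_product_uuid,
                        const std::string &current_hostname)
    {
        if (!saved_product_uuid.empty())
            return saved_product_uuid != current_product_uuid;
        // Fall back to the hostname when no DMI product_uuid was saved.
        return !saved_hostname.empty() && saved_hostname != current_hostname;
    }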
No writes outside of $LVM_TEST_DIR (removed /test access).
Use 'aux prepare_scsi_debug_dev' for automated scsi_debug handling
Properly use "" around shell vars.
Smarter read of PVS values.
Relax requirement to only work with real /dev dir.
Handle the case of device teardown where the first pass
could have only a single, but opened, device for removal.
In such a case we want to go through udev_wait at least once
and retry the removal again.
TODO: maybe a 'sleep .1' might be usable as well with udev_wait.
Since lvmdbusd testing tends to do its own logging,
try for now to disable the very generic logging mechanism
of the test suite and see the result.
Some lvmdbusd tests seem to rely on some log/file logic
which is modified by the use of these shell vars.
Require VDO version 6.2.3.
Skip the part of the test that needs the vdo wrapper and 2 different
versions of vdoprepareforlvm to prepare a shifted VDO header
at the 2MiB offset.
Hunt also for devices with LVMTEST prefix in UUID.
Call teardown_devs_prefixed - so if the devices hold RAM or SCSI,
they are closed before trying to remove kernel modules.
The previous fix was invalid (after some in-place shuffling):
the 'dd' copy report goes to 'stderr', so we need to catch all output.
Grep needs to check the output of the tee tool.
Ensure the 'C' locale is in use with 'dd'.
Pass more args with some 'aux' commands:
wipefs_a, enable_dev, disable_dev
(so it's a bit more efficient, using a single udev_wait call).
Use prepare_vg instead of prepare_pvs.
Keep using test directory for created files.
Trap errors and remove brd in this case.
Use some shell builtins to reduce fork count.
Use "$VAR".
Run 'pvs' with a devlist (so as to not access other system devices).
New dmpd tools return the version string in a different format,
so update the code to understand both variants.
Also hide some shell var settings inside local functions.
Some linkers do need libdevmapper-event-lvm2.so.2.03,
so also add this symlink to the tests' /lib dir.
This removes the need for the previous complex LD_LIBRARY_PATH
setting and now works with a much shorter list.
Commit 3596558861 introduced
a more fine-grained description.
However, 'disabled' might actually be more confusing than an empty field,
so keep only the info about 'not enabled', aka dmeventd is not allowed
to monitor an LV which otherwise could be monitored.
Fix the testing logic.
With a raid5 device, the layout of files on a filesystem does not define
which leg will actually contain the block we try to damage.
So the test will now figure out which device has the damaged block.
Use the 'check' functionality and also drop the unneeded random write, as we
can now identify it easily another way.
Compilation may configure its own /etc path, so ensure the test
has a defined location for access to this dir during testing.
Also prepare a machine_id file (with the use of the uuidgen tool)
for the test.
The command "lvcreate --type thin --snapshot ..." to create a thin
snapshot would fail.
Commit d651b340e6 removed the optional
"--type thin" from the command definition "lvcreate --snapshot LV_thin",
and added --type thin as AUTOTYPE. This was correct and should not have
changed anything if all the command defs were correct, but it broke
the "lvcreate --type thin --snapshot" case. It reveals a problem in a
different command definition: "lvcreate --type thin LV_thin" that was
missing --snapshot in its OO list.
Fix the code that matches devices to system.devices entries when
the devices have the same serial number. A non-PV device in
system.devices has no pvid value, and the code was segfaulting
when checking the null pvid value.
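The class of fix is a null guard before the string comparison (an
illustrative sketch, not the actual lvm code):

    #include <cstring>

    // A non-PV entry in system.devices carries no pvid, so the pointer
    // must be checked before any comparison is attempted.
    static bool same_pvid(const char *dev_pvid, const char *du_pvid)
    {
        if (!dev_pvid || !du_pvid)  // the previously missing check
            return false;
        return std::strcmp(dev_pvid, du_pvid) == 0;
    }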
In previous lvm versions, trailing spaces at the end of a t10 wwid would
be replaced with underscores, so the IDNAME string in system.devices
would look something like "t10.123_". Current versions of lvm ignore
trailing spaces in a t10 wwid, so the IDNAME string used would be
"t10.123". The different values would cause lvm to not recognize a
device in system.devices with the trailing _. Fix this by ignoring
trailing underscores in the IDNAME string from system.devices.
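A sketch of such a comparison (the function name is hypothetical, not
the actual lvm code):

    #include <cstring>

    // Match t10 IDNAME strings while ignoring trailing underscores that
    // older lvm versions wrote in place of trailing spaces, so that
    // "t10.123_" from system.devices still matches "t10.123".
    static bool t10_idname_match(const char *old_id, const char *cur_id)
    {
        size_t len_old = std::strlen(old_id);
        size_t len_cur = std::strlen(cur_id);

        while (len_old && old_id[len_old - 1] == '_')
            len_old--;
        while (len_cur && cur_id[len_cur - 1] == '_')
            len_cur--;

        return len_old == len_cur &&
               std::strncmp(old_id, cur_id, len_old) == 0;
    }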
The recent fix 05c2b10c5d ensures that raid LV images are not
using the same devices. This was happening in the lvextend commands
used by this test, so fix the test to use more devices to ensure
redundancy.
Check for a running (possibly leftover) lvmdbusd in the
system - as this daemon may interfere with this test, since in that
case both would be operating on the same 'live' data in /run/lvm.
Make the test faster by aggregating sets of operations to work on a single
created filesystem, yet checking all the variants of extension and reduction.
Split the 'xfs' part into a separate test and convert it to use the
minimal size of 300M nowadays required by mkfs.xfs.
The previous commit meant pvmove could actually be started in an unexpected
order - so make sure we are not starting a new pvmove in the same VG until
the previous one has started.
Keep backups within test_dir instead of dev_dir (so it doesn't
leak large files there if the tests are run over a real /dev dir).
Move restoring of dm_mirror throttling before test_dir is removed.
Not quite sure if this helps anything; some of the testing
machines can't reliably remove scsi_debug, reporting
it is in use - but it's not easily reproducible...
aux wait_pvmove_lv_ready() now handles multiple pvmove LVs
in one go - which allows a bit faster checking - although
at some point we may need to switch to using delayed devs,
since mirror throttling seems to be no longer working well:
CPUs are getting so fast that most of the data is already
pvmoved before throttling has any chance to do anything...
When a 'brd' device can be removed (is unused, aka not opened),
remove such a device and use it again for testing.
Let's assume the user has no unused brd device left in the system.
When tests sometimes fail to clean up devices, with this
change further cleanup from some later test may eventually release
the brd device and make it available for testing.
Convert the test to use only ext4 instead of XFS, which demands 300M.
Shorten 'B' files to 4K and use a 4K stripe size with >raid1 arrays
so we do not risk spreading the file across stripes.
Also use the easier 'aux corrupt_dev()' method to introduce some bit
corruption into a block device with integrity.
TODO: shorten _wait_recalc (shouldn't be needed).
Add a function to corrupt some bytes in a given file path representing
a device. The 1st pattern is replaced just once with the 2nd pattern.
Usable to simulate some bit corruption for integrity devices.
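The replace-once logic can be sketched as follows (the real helper is a
test-suite shell function; this C++ version is only an illustration):

    #include <fstream>
    #include <iterator>
    #include <string>

    // Find the first occurrence of 'from' in the file backing a device
    // and overwrite it once with the equally long pattern 'to'.
    static bool corrupt_dev(const std::string &path,
                            const std::string &from, const std::string &to)
    {
        if (from.size() != to.size())
            return false;

        std::fstream f(path, std::ios::in | std::ios::out | std::ios::binary);
        if (!f)
            return false;

        std::string data((std::istreambuf_iterator<char>(f)),
                         std::istreambuf_iterator<char>());

        size_t pos = data.find(from);
        if (pos == std::string::npos)
            return false;

        f.clear();                       // reading to EOF set eofbit
        f.seekp(static_cast<std::streamoff>(pos));
        f.write(to.data(), to.size());   // flip the bytes in place
        return f.good();
    }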
Convert the test to use a single skeleton and keep only the
differing pieces in separate tests.
Lower raid disk usage to a smaller size and switch to ext4
as a way less demanding filesystem.
Previously we were injecting a missing key in the lv, vg, and pv.
Given the order of processing in lvmdbusd, this prevented us from
exercising all the error paths. Change to returning just 1 instead.
Different sequences of steps that could be used to handle raid LVs
after VG takeover (what would happen in cluster failover) combined
with the loss of a disk.