Using virKModConfig would not simplify any existing code.
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
All callers except for the test suite pass the same value
for the second arg, so it can be removed, simplifying the
code.
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
When a block copy job fails prior to reaching the synchronized phase
while we are waiting for the job to finish, virsh would print the
following:
$ virsh blockcopy backup-test vda /tmp/dst.qcow2 --wait --reuse-external --transient-job
error:
Copy failed
The above message makes it look like we forgot to print the error
message itself, as the line ends right after 'error:'. Unfortunately,
with the current API design clients have no way of actually getting the
error message, as the VIR_DOMAIN_EVENT_ID_BLOCK_JOB(_2) event only
reports the status but not the error, and the job then vanishes.
Fix the expectations by using vshPrintExtra instead of vshError:
$ virsh blockcopy backup-test vda /tmp/dst.qcow2 --wait --reuse-external --transient-job
Copy failed
Note that the newline is required to avoid printing the 'Copy failed'
message on the same line when printing the job progress percentage.
Inspired by https://bugzilla.redhat.com/show_bug.cgi?id=1847867
Fix the same issue also for the block pull and block commit jobs.
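A minimal sketch of the change described above, assuming the message is
emitted from virsh's block-job waiting code; the switch, the 'status'
variable and the failure string are illustrative stand-ins, while
vshError()/vshPrintExtra() are the real virsh helpers:

    switch (status) {
    case VIR_DOMAIN_BLOCK_JOB_FAILED:
        /* Not a libvirt API error, so vshError() would print only a
         * dangling "error:" prefix with no message attached.
         * vshPrintExtra() prints the plain message; the leading '\n'
         * keeps it off the line showing the job progress percentage. */
        vshPrintExtra(ctl, "\n%s", _("Copy failed"));
        break;
    default:
        break;
    }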
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Outline the basics and how to integrate with externally created
overlays. Other topics will follow later.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Test both 'basic' and 'snapshots' cases on shallow and deep copy modes.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reuse qemuBlockGetBitmapMergeActions, which allows the removal of the
ad-hoc implementation of bitmap merging for block copy.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
The new semantics of bitmap handling don't need this. Remove the field
and all uses of it, including the status XML.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Simulate a commit between all combinations of layers in the 'snapshots'
case to see whether the code merges the correct bitmaps with the correct
depth of temporary bitmaps.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
In the 'basic' case we have a few bitmaps in the top layer only.
Simulate a commit into the backing of the top layer and also 2 levels deep.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reuse qemuBlockGetBitmapMergeActions, which allows removing the ad-hoc
implementation of bitmap merging for block commit. The new approach is
way simpler and more robust, and also allows us to get rid of the
disabling of bitmaps done prior to the start, as we actually do want to
update the bitmaps in the base.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
The 'snapshots' case has multiple layers so we need to make sure that
the bitmaps are merged with the appropriate temporary bitmaps formatted
from the allocation bitmap for any backing chain layer above.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
The 'basic' case is just a single backing store layer containing the
bitmaps, so we simply copy the bitmaps over to the backup bitmap.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reuse qemuBlockGetBitmapMergeActions, which allows removal of the ad-hoc
implementation of bitmap merging for backup. The new approach is simpler
and also more robust in case some of the bitmaps break, as it removes
the dependency on the whole chain of bitmaps working.
The new approach also allows backups if a snapshot is created outside of
libvirt.
Additionally the code is greatly simplified.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Add a function which merges bitmaps according to the new semantics. It
will allow replacing all the specific ad-hoc functions currently in use
for 'backup', 'block commit' and 'block copy', and will also be usable
in the future for 'block pull' and non-shared storage migration.
The semantics are a bit quirky for the 'backup' case, but these quirks
are documented and will prevent us from having two slightly different
algorithms.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Exercise the now arguably simpler checkpoint deletion code on the
'basic', 'snapshots', and 'synthetic' test data sets.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Now that we've switched to the simple handling, the first thing that can
be massively simplified is checkpoint deletion. We now only need to go
through the backing chain, find the appropriately named bitmaps and
delete them; no complex lookups or merging.
Note that, unlike the other functions where we expect exactly one bitmap
of a given name in the backing chain, this one deletes the bitmap in all
layers, to prevent potential problems.
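A rough sketch of the simplified walk (assumptions: 'top' and
'bitmapname' are illustrative variables, layer_has_bitmap() is a
hypothetical lookup into the named node data, and
qemuMonitorTransactionBitmapRemove() is assumed to queue a
'block-dirty-bitmap-remove' transaction action):

    virJSONValuePtr actions = virJSONValueNewArray();
    virStorageSourcePtr n;

    for (n = top; virStorageSourceIsBacking(n); n = n->backingStore) {
        /* skip layers which don't contain a bitmap named after the checkpoint */
        if (!layer_has_bitmap(blockNamedNodeData, n, bitmapname))
            continue;

        /* delete the checkpoint's bitmap wherever it appears in the chain */
        if (qemuMonitorTransactionBitmapRemove(actions, n->nodeformat,
                                               bitmapname) < 0)
            return -1;
    }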
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Based on the 'snapshots' example with manual tweaks to introduce
inactive, transient, inconsistent and duplicate bitmaps in various parts
of the chain to exercise detection and new validation code.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Now that we've updated both the test data and the validator to the new
semantics, we can start testing again.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reject duplicates and other problematic bitmaps according to the new
semantics of bitmap use in libvirt.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Use test data which conforms to the new semantics introduced in the
previous patch.
The test data was created by the same set of commands as originally in
commit 0b27b655b1.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Use test data which conforms to the new semantics introduced in the
previous patch.
The test data was created by the same set of commands as originally in
commit 9aac9d5bda.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Chaining bitmaps for checkpoints (disabling the active one and creating
a new one) severely overcomplicated all operations with regard to
bitmaps. Specifically, it requires us to re-match the on-disk state to
the internal metadata, and in the case of merging during block jobs it
makes it almost impossible to cover all corner cases.
Since checkpoints and incremental backups were not yet enabled, let's
change the design to keep one bitmap per checkpoint. In the case of
layered snapshots this will be filled in by using dirty-bitmap-populate.
Finally, the main reason for this unnecessary complexity was the fear
that qemu's performance could degrade. In the end I think that
addressing the performance issue will be better done in qemu (e.g. by
keeping an internal bitmap updated with changes and merging it
periodically back to the real bitmaps; QEMU writes out changes to disk
at shutdown, so consistency is not a problem).
Removing the relationships between bitmaps frees us from complex
handling and also makes all the surrounding code more robust as one
broken bitmap doesn't necessarily invalidate whole chains of backups.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
There will be multiple places where we'll need to print nodenames from a
GSList of virStorageSource for testing purposes. Extract the code into a
function.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
They will be replaced by a different set which will test scenarios
relevant for the new semantics.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Upcoming patches are going to rewrite and semantically modify how
bitmaps are handled during blockjobs. This is possible as incremental
backup is not yet fully enabled.
As the changes are going to be incompatible with any current test data,
remove all test cases for bitmap handling during checkpoint deletion,
incremental backups, block commit, block copy, and bitmap validation
operations.
The tests will be gradually added back later, after the code and
test data are refactored.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Use the new test data for checkpoint deletion testing. This test also
requires modification of the internals to allow checking for test
failure.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Use the new test data when calculating incremental backup operations. As
an incremental backup fails when there is no bitmap, the test code is
modified to allow testing this case too.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Fetch the checkpoint list for every disk specifically based on the new
per-disk 'incremental' field.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
In preparation for allowing heterogeneous backups, store the 'incremental'
field per disk and fill it by default from the per-backup field.
Having this will be important once we want to allow an incremental
backup to work while hotplugging a new disk.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
If a disk is not captured by one of the intermediate checkpoints, the
code would fail; however, we can easily calculate the bitmaps to merge
correctly by skipping over checkpoints which don't describe the disk.
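A sketch of the adjusted loop; the checkpoint-walking helpers, the
structure field access and the merge list below are illustrative names,
not the driver's actual identifiers:

    for (i = 0; i < nchkpts; i++) {
        /* hypothetical lookup of this disk's entry in the checkpoint definition */
        disk_in_chk = checkpoint_find_disk(chkpts[i], diskname);

        if (!disk_in_chk)
            continue; /* disk not captured by this checkpoint; nothing to merge */

        /* merge the bitmap recorded for this disk by the checkpoint */
        append_merge_source(merges, disk_in_chk->bitmap);
    }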
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
The algorithm is getting quite complex. Split out the lookup of the
range of backing chain storage sources, and of the bitmaps contained in
them, which correspond to one checkpoint.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
The CPUID data in *-{disabled,enabled}.xml are the feature names from
the corresponding *.json file converted into raw CPUID and MSR data, and
thus some of them may need to be updated when new features are added to
the CPU map.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Rather than using the AM_LDFLAGS_MOD_NOUNDEF flags with the noinstall
library that libtool produces from libvirt_driver_nodedev_impl_la, use
them with the installed version, libvirt_driver_nodedev_la.
Broken-by-commit: c44bffb9
Fixes: https://ci.centos.org/job/libvirt-rpm/systems=libvirt-fedora-32/1155/
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
As noted by Erik Skultety, we use the same XML schema to report
existing devices and to define new devices. However, some schema
elements are "read-only". In other words, they are used to report
information from the node device driver and cannot be used to define a
new device. Note these in the documentation.
Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Test that we run 'mdevctl' with the proper arguments when we destroy
mediated devices with virNodeDeviceDestroy().
Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Add the ability to destroy mdev node devices via the mdevctl utility.
Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Test that we run 'mdevctl' with the proper arguments when creating new
mediated devices with virNodeDeviceCreateXML().
Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
In order to test the nodedev driver, we need to link against a
non-loadable module. Similar to other loadable modules already in the
repository, create an _impl library that the unit tests can link
against, and then create a loadable module from that.
Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
With recent additions to the node device XML schema, an XML document can
now describe an mdev device sufficiently for libvirt to create and start
the device using the mdevctl utility.
Note that some of the configuration for a mediated device must be
passed to mdevctl as a JSON-formatted file. In order to avoid creating
and cleaning up temporary files, the JSON is instead fed to stdin and we
pass the filename /dev/stdin to mdevctl. While this may not be portable,
neither are mediated devices, so I don't believe it should cause any
problems.
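A sketch of how the JSON can be fed to mdevctl without a temporary file,
using libvirt's virCommand helpers (virCommandSetInputBuffer() supplies
the child's stdin); the exact mdevctl subcommand and flags used by the
driver are assumptions here:

    virCommandPtr cmd = virCommandNew("mdevctl");
    int ret = -1;

    virCommandAddArgList(cmd, "start", "--parent", parent,
                         "--jsonfile", "/dev/stdin", NULL);
    /* 'json' holds the device config generated from the node device XML;
     * it is written to the child's stdin, which mdevctl reads via /dev/stdin */
    virCommandSetInputBuffer(cmd, json);

    if (virCommandRun(cmd, NULL) < 0)
        goto cleanup;

    ret = 0;
 cleanup:
    virCommandFree(cmd);
    return ret;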
Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
In order to allow libvirt to create and start new mediated devices, we
need to be able to verify that the device has been started. To do this,
we'll need to save the UUID of newly-discovered devices within the
virNodeDevCapMdev structure. This allows us to search the device list by
UUID and verify whether the expected device has been started.
Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
In preparation for creating mediated devices in libvirt, we will need to
wait for new mediated devices to be created as well. Refactor
nodeDeviceFindNewDevice() so that we can re-use the main logic from this
function to wait for different device types by passing a different
'find' function.
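A sketch of the resulting shape (the typedef, function name and retry
policy below are illustrative, not the driver's exact code): the generic
wait loop stays in one place and each device type supplies its own
lookup callback:

    typedef virNodeDeviceObjPtr (*nodeDeviceFindFunc)(virNodeDeviceObjListPtr devs,
                                                      const void *opaque);

    static virNodeDeviceObjPtr
    nodeDeviceFindNewDeviceSketch(virNodeDeviceObjListPtr devs,
                                  nodeDeviceFindFunc find,
                                  const void *opaque)
    {
        size_t i;

        /* give udev a moment to report the newly created device */
        for (i = 0; i < 10; i++) {
            virNodeDeviceObjPtr obj = find(devs, opaque);
            if (obj)
                return obj;
            g_usleep(100 * 1000);
        }
        return NULL;
    }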
Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>