This code is needed to check whether the virtio-scsi driver was installed.
This reverts commit f0afc439524853508938b2bfc758896f053462e3.
Message-Id: <20230310175433.781335-2-andrey.drobyshev@virtuozzo.com>
Reviewed-by: Richard W.M. Jones <rjones@redhat.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
OCaml 4.08 introduces a stdlib Option module which looks a bit like
ours but has a number of differences. In particular our functions
Option.may and Option.default have no corresponding functions in
stdlib, although there are close enough equivalents.
This change was automated using this command:
$ perl -pi.bak \
      -e 's/Option.may/Option.iter/g; s/Option.default /Option.value ~default:/g' \
      `git ls-files`
Update common module to include:
commit cffa077323fafcdfcf78e230c022afa891a6b3ff
Author: Richard W.M. Jones <rjones@redhat.com>
Date: Mon Feb 20 12:11:51 2023 +0000
mlstdutils: Rework the Option module to be compatible with stdlib
commit 007d0506c538db0a43fec7e9986a95ecdcd48b56
Author: Richard W.M. Jones <rjones@redhat.com>
Date: Mon Feb 20 12:18:29 2023 +0000
mltools: Replace Option.may with Option.iter
As with the prior commit, prefer -cpu host for all guests (except when
we have more information from the source hypervisor). Although there
is the disadvantage that -cpu host is non-migratable, in practice it
would be very difficult to live migrate a host launched using direct
qemu commands.
Note that after this change, gcaps_arch_min_version is basically an
informational field. No output uses it, but it will appear in debug
output and there's the possibility we might use it for a future output
mode.
Thanks: Laszlo Ersek
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
In the case where the source hypervisor doesn't specify a CPU model,
previously we chose qemu64 (qemu's most basic model), except for a few
guests that we know won't work on qemu64, eg. RHEL 9 requires
x86_64-v2, for which we use <cpu mode='host-model'/>.
However we recently encountered an obscure KVM bug related to this
(https://bugzilla.redhat.com/show_bug.cgi?id=2168082). Windows 11,
when booted on real AMD Milan hardware, thinks the qemu64 CPU model is
an AMD Phenom and tries to apply an ancient erratum to it. Since KVM
doesn't emulate the correct MSRs for this, the guest fails to boot.
After discussion upstream we can't see any reason not to give all
guests host-model. This should fix the bug above and also generally
improve performance by allowing the guest to exploit all host
features.
Related: https://bugzilla.redhat.com/show_bug.cgi?id=2168082#c19
Related: https://listman.redhat.com/archives/libguestfs/2023-February/030624.html
Thanks: Laszlo Ersek, Dr. David Alan Gilbert, Daniel Berrangé
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Some guests require not just a specific architecture, but cannot run
on qemu's default CPU model, eg. requiring x86_64-v2. Since we
anticipate future guests requiring higher versions, let's encode the
minimum architecture version instead of a simple boolean.
This patch essentially just remaps:
  gcaps_default_cpu = true  => gcaps_arch_min_version = 0
  gcaps_default_cpu = false => gcaps_arch_min_version = 2
I removed a long comment about how this capability is used because we
intend to completely remove use of the capability in a coming commit.
Updates: commit a50b975024ac5e46e107882e27fea498bf425685
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
For RHEL >= 9 / x86-64 guests we cannot use the default qemu CPU
(eg. "qemu64"), and so we have a mechanism for conversion to indicate
to the output modes that a more capable CPU is required. We
previously picked cpu='host-passthrough' (ie. the equivalent of qemu's
-cpu host). However this is not live migratable. cpu='host-model' is
a better choice as it is more likely to be migratable.
See also discussion here:
https://listman.redhat.com/archives/libguestfs/2023-February/030625.html
and previous discussion here:
https://listman.redhat.com/archives/libguestfs/2022-April/thread.html#28711
Acked-by: Laszlo Ersek <lersek@redhat.com>
Because this regexp was not anchored at both ends it would still
report a match for incorrect names.
Update common submodule to get this commit:
mlpcre: Remove ~anchored trap
PCRE2_ANCHORED only anchors the regexp at the start (ie. equivalent
to adding ^ at the beginning). It *does not* anchor it at the end as
well (for which there is a separate PCRE2_ENDANCHORED which has
various traps of its own).
Everywhere I've tried to use ~anchored I've fallen into the trap of
believing it anchors both ends, causing actual bugs. So remove it
completely from the bindings.
Replace ~anchored:true with ^...$ around the regular expression.
Fixes: commit 8a9c914544a49bed13eb5baf42290f835bdee7b5
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=2162332
Reported-by: Ming Xie
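The same trap exists in Python's re module, where re.match anchors only
at the start; a sketch (the pattern here is made up for illustration,
not the actual virt-v2v regexp):

```python
import re

# Hypothetical, unanchored name pattern.
NAME = re.compile(r"[a-z]+-disk")

# re.match anchors only at the START (like PCRE2_ANCHORED), so a name
# with a bad suffix still appears to match:
assert NAME.match("guest-disk.tmp") is not None

# Anchoring both ends -- re.fullmatch, or wrapping the pattern in
# ^...$ -- rejects it:
assert NAME.fullmatch("guest-disk.tmp") is None
assert NAME.fullmatch("guest-disk") is not None
```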
For -o rhv-upload, the -os parameter specifies the storage domain.
Because the RHV API allows globs when searching for a domain, if you
used a parameter like -os 'data*' then this would confuse the Python
code, since it can glob to the name of a storage domain, but then
later fail when we try to exact match the storage domain we found.
The result of this was a confusing error in the precheck script:
IndexError: list index out of range
This fix validates the output storage parameter before trying to use
it. Since valid storage domain names cannot contain glob characters
or spaces, it avoids the problems above and improves the error message
that the user sees:
$ virt-v2v [...] -o rhv-upload -os ''
...
RuntimeError: The storage domain (-os) parameter ‘’ is not valid
virt-v2v: error: failed server prechecks, see earlier errors
$ virt-v2v [...] -o rhv-upload -os 'data*'
...
RuntimeError: The storage domain (-os) parameter ‘data*’ is not valid
virt-v2v: error: failed server prechecks, see earlier errors
Although the IndexError should no longer happen (except in extremely
rare cases like the storage domain being renamed), I also added a
try...except around that code to improve the error.
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1986386
Reported-by: Junqin Zhou
Reviewed-by: Nir Soffer <nsoffer@redhat.com>
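A minimal sketch of this kind of precheck (the function name and the
exact rejected character set are illustrative, not the actual
rhv-upload code):

```python
import re

def check_storage_domain_name(name):
    """Reject -os values the RHV API could treat as search globs.

    Valid storage domain names contain no glob characters and no
    spaces, so anything else can be refused up front with a clear
    error instead of a later IndexError."""
    if not re.fullmatch(r"[^*?\[\] ]+", name):
        raise RuntimeError(
            "The storage domain (-os) parameter %r is not valid" % name)
    return name

check_storage_domain_name("data")     # accepted
# check_storage_domain_name("data*")  # raises RuntimeError
# check_storage_domain_name("")       # raises RuntimeError
```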
This commit splits up any long lines found in errors, warnings or
other messages. OCaml ignores whitespace following "\<CR>" within a
string, eg:
"long string \
more stuff"
is parsed as:
"long string more stuff"
Thanks: Laszlo Ersek, for working out the OCaml syntax for this
Kubevirt supports something like RFC 1123 names (without the length
restriction). Helpfully it prints the regexp that it uses to validate
the names, so just use the same regexp.
Note that virt-v2v never renames guests (since that would add
unpredictability for automation). You must use the -on option to
rename the guest if the name is wrong. Hence this is an error, not a
warning or an attempt to rename the guest.
Reported-by: Ming Xie
Fixes: commit bfa62b4683d312fc2fa9bb3c08963fc4846831b9
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=2162332
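A sketch of the same validation in Python. The regexp below is the
usual DNS-1123 subdomain pattern that Kubernetes-derived APIs print in
their error messages; treat it as an approximation of the one Kubevirt
prints, not a verbatim copy:

```python
import re

# DNS-1123 subdomain shape: lowercase alphanumeric labels, optionally
# containing '-', joined by '.' (no length limit enforced here).
RFC1123ISH = re.compile(
    r"[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*")

def valid_guest_name(name):
    # fullmatch anchors BOTH ends, avoiding the ~anchored trap.
    return RFC1123ISH.fullmatch(name) is not None

assert valid_guest_name("fedora-35")
assert not valid_guest_name("Fedora_35")   # uppercase and '_' rejected
```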
Apparently this element doesn't go in the obvious place (under
"resources", next to memory), but in a whole new section under "cpu",
which makes no logical sense but here we are. Also verified this
against Kubevirt examples/vm-template-fedora.yaml
Reported-by: Ming Xie
Fixes: commit bfa62b4683d312fc2fa9bb3c08963fc4846831b9
This was the only guest type which did not support ACPI. It also only
supported direct to libvirt output. It hasn't been tested for a long
time, and hasn't been supported by Red Hat for very much longer.
Removing this means we no longer have to think about non-ACPI guests.
This commit adds a new -o kubevirt mode. You can specify where to
place the disks and metadata (guest.yaml) file using -os. Also
allocation mode (-oa) and disk format (-of) may be set. For example:
$ virt-v2v -i disk guest.img -o kubevirt -os /var/tmp
will generate /var/tmp/guest.yaml and /var/tmp/guest-sda
This is not finalized since undoubtedly the way the disks are handled
will have to be changed in future.
When the source domain doesn't specify a VCPU model ("s_cpu_model" is
None), and the guest OS is assumed to work with the default VCPU model
("gcaps_default_cpu" is true), we don't output any <cpu> element. In that
case, libvirtd augments the domain config with:
[1] <cpu mode='custom' match='exact' check='none'>
      <model fallback='forbid'>qemu64</model>
    </cpu>
where the @check='none' attribute ensures that the converted domain will
be launched, for example, on an Intel host, despite the "qemu64" VCPU
model containing AMD-only feature flags such as "svm".
However, if the source domain explicitly specifies the "qemu64" model
(mostly seen with "-i libvirt -ic qemu://..."), we presently output
[2] <cpu match='minimum'>
      <model fallback='allow'>qemu64</model>
    </cpu>
which libvirtd completes as
[3] <cpu mode='custom' match='minimum' check='partial'>
      <model fallback='allow'>qemu64</model>
    </cpu>
In [3], cpu/@match='minimum' and cpu/model/@fallback='allow' are both
laxer than @match='exact' and @fallback='forbid', respectively, in [1].
However, cpu/@check='partial' in [3] is stricter than @check='none' in
[1]; it causes libvirtd to catch the "svm" feature flag on an Intel host,
and prevents the converted domain from starting.
The "qemu64" VCPU model is supposed to run on every possible host
<https://gitlab.com/qemu-project/qemu/-/blob/master/docs/system/cpu-models-x86.rst.inc>,
therefore make an exception for the explicitly specified "qemu64" VCPU
model, and generate the @check='none' attribute.
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2107503
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Message-Id: <20220722073627.6511-1-lersek@redhat.com>
Acked-by: Richard W.M. Jones <rjones@redhat.com>
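The effective rule from [1]-[3] above can be summarized in a short
sketch (Python used for illustration only; the real implementation is
OCaml XML generation):

```python
def effective_check_attr(s_cpu_model):
    """cpu/@check the converted domain ends up with, per [1]-[3] above.

    For an explicit "qemu64" we now emit check='none' ourselves, since
    qemu64 is supposed to run on every host; for other explicit models
    we emit no @check, and libvirtd completes match='minimum' to
    check='partial'."""
    if s_cpu_model == "qemu64":
        return "none"
    return "partial"

assert effective_check_attr("qemu64") == "none"
```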
Output.output_to_local_file is used by several output modes that write
to local files or devices. It launches an instance of qemu-nbd or
nbdkit connected to the local file.
Previously we unconditionally added an On_exit handler to kill the NBD
server. This is usually safe because nbdcopy --flush has guaranteed
that the data was written through to permanent storage, and so killing
the NBD server is just there to prevent orphaned processes.
However for output to RHV (-o rhv) we actually need the NBD server to
be cleaned up before we exit. See the analysis here:
https://bugzilla.redhat.com/show_bug.cgi?id=1953286#c26
Allow an alternate strategy of waiting for the NBD server to exit
during virt-v2v shutdown.
We only need this in virt-v2v so implement it here instead of pushing
it all the way into the On_exit module.
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
To partially avoid a potential race against nbdkit or qemu-nbd
releasing files on the mountpoint before they exit, unmount as late as
we can.
See also https://bugzilla.redhat.com/show_bug.cgi?id=1953286#c26
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
This function was renamed to make it clearer what it does (and that
it's potentially dangerous). The functionality is unchanged.
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
For various output modes, implement -oo compressed which can be used
to generate compressed qcow2 files. This option was dropped when
modularizing virt-v2v, and required changes to nbdcopy which are
finally upstream in libnbd >= 1.13.5.
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=2047660
Fixes: commit 255722cbf39afc0b012e2ac00d16fa6ba2f8c21f
Reported-by: Xiaodai Wang
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
We currently support virtio-blk (commonly) or IDE (unusually) for exposing
disks to the converted guest; refer to "guestcaps.gcaps_block_bus" in
"lib/create_ovf.ml". When using virtio-blk (i.e., in the common case), RHV
can deal with at most 23 disks, as it plugs each virtio-blk device in a
separate slot on the PCI(e) root bus; and the other slots are reserved for
various purposes. When a domain has too many disks, the problem only
becomes apparent once the copying finishes and an import is attempted.
Modify the RHV outputs to fail relatively early when a domain has more
than 23 disks that need to be copied.
Notes:
- With IDE, the theoretical limit may even be as low as 4. However, in the
"Output_module.setup" function, we don't have access to
"guestcaps.gcaps_block_bus", and in practice the IDE limitation has not
caused surprises. So for now stick with 23, assuming virtio-blk.
Modifying the "Output_module.setup" parameter list just for this seems
overkill.
- We could move the new check to an even earlier step, namely
"Output_module.parse_options", due to the v2v directory deliberately
existing (and having been populated with input sockets) at that time.
However, even discounting the fact that "parse_options" is not a good
name for including this kind of step, "parse_options" does not have
access to the v2v directory name, and modifying the signature just for
this is (again) overkill.
- By adding the check to "Output_module.setup", we waste *some* effort
(namely, the conversion occurs between "parse_options" and "setup"),
but: (a) the "rhv-disk-uuid" count check (against the disk count) is
already being done in the rhv-upload module's "setup" function, (b) in
practice the slowest step ought to be the copying, and placing the new
check in "setup" is early enough to prevent that.
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2051564
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Message-Id: <20220617095337.9122-1-lersek@redhat.com>
Reviewed-by: Richard W.M. Jones <rjones@redhat.com>
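A minimal sketch of the new check (illustrative Python; the real check
lives in the OCaml "Output_module.setup" functions, and the constant
comes from RHV's PCI slot layout described above):

```python
MAX_VIRTIO_BLK_DISKS = 23  # free PCI slots for virtio-blk on RHV

def check_disk_count(disks):
    """Fail early, before the slow copying step, if the guest has more
    disks than RHV can plug as virtio-blk devices."""
    if len(disks) > MAX_VIRTIO_BLK_DISKS:
        raise RuntimeError(
            "this guest has too many disks (%d > %d) for RHV import"
            % (len(disks), MAX_VIRTIO_BLK_DISKS))

check_disk_count(["sda"])    # fine
```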
The intent (even before RHBZ#2028764) has been to install the QEMU guest
agent in the converted domain unconditionally. Therefore, in order for the
GA to be actually accessible from the host side, augment the libvirt
output module with a "guest agent connection" also unconditionally.
For starters, the domain needs a virtio-serial device. Then there must be
a port on the device that (in the guest) the GA identifies by name, and
that (on the host) is exposed as a listening socket (usually in the unix
address family). The address of that port (usually a pathname, i.e., for a
unix domain socket) is then passed to whatever host-side application wants
to talk to the GA.
The minimal domain XML fragment for that ("minimal" for our purposes) is
  <controller type='virtio-serial' model='virtio'/>
  <channel type='unix'>
    <target type='virtio' name='org.qemu.guest_agent.0'/>
  </channel>
The "controller" element is needed because "controller/@model" is where we
regulate "virtio" vs. "virtio-transitional".
Everything else is filled in by libvirt. Notably, libvirt (a) creates and
binds the unix domain socket itself (usually
"/var/lib/libvirt/qemu/channel/target/DOMAIN/org.qemu.guest_agent.0"), (b)
passes the file descriptor to QEMU, and (c) figures out the socket
pathname for commands such as
  virsh domfsinfo DOMAIN
  virsh domhostname DOMAIN --source agent
  virsh domifaddr DOMAIN --source agent
  virsh guestinfo DOMAIN
For QEMU, the corresponding options would be
-chardev socket,id=agent,server=on,wait=off,path=/tmp/DOMAIN-agent \
-device virtio-serial-pci,id=vioserial \
-device virtserialport,bus=vioserial.0,nr=1,chardev=agent,name=org.qemu.guest_agent.0 \
Note the "path=/tmp/DOMAIN-agent" property of "-chardev"; virt-v2v would
have to generate that (in place of the "fd=nnnn" property that libvirt
passes to QEMU).
Omit extending the QEMU output module for now, as the QGA protocol is
based on JSON, and one needs "virsh" or "virt-manager" (or another
management application interface) anyway, for efficiently exchanging
messages with QGA. I don't know of end-user tools that directly connect to
"/tmp/DOMAIN-agent".
Don't modify the RHV and OpenStack outputs either; both of these
management products likely configure the virtio-serial device
automatically, for the agent access.
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2028764
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Message-Id: <20220613170135.12557-2-lersek@redhat.com>
Reviewed-by: Richard W.M. Jones <rjones@redhat.com>
Tested-by: Richard W.M. Jones <rjones@redhat.com>
The ocaml-libvirt project (https://gitlab.com/libvirt/libvirt-ocaml)
provides bindings for libvirt. For historical reasons we bundled this
as it was thought ocaml-libvirt wasn't widespread on distros. In
fact Fedora and Debian derivatives all have this. Arch does not
(yet), but we can fix that.
The README file in that directory said, "before virt-v2v 1.42 is
released we hope to have unbundled this". That didn't happen, but
let's remove it now.
If "s_cpu_model" is None and "gcaps_default_cpu" is "false", generate the
"-cpu host" option for QEMU.
"-cpu host" produces an (almost) exact copy of the host (i.e., physical)
CPU for the guest, which is the best choice for guest OSes that cannot run
on QEMU's default VCPU type -- considering that domains converted by
virt-v2v are not expected to be migrateable without further tweaks by an
administrator.
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2076013
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Message-Id: <20220420162333.24069-9-lersek@redhat.com>
Reviewed-by: Richard W.M. Jones <rjones@redhat.com>
At the moment, QEMU is always started with the default "qemu64" VCPU
model, even if the source hypervisor sets a particular VCPU model.
"qemu64" is not suitable for some guest OSes. Honor "source.s_cpu_model"
via the "-cpu" option, if the source specifies the VCPU model. The logic
is basically copied from "lib/create_ovf.ml".
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2076013
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Message-Id: <20220420162333.24069-8-lersek@redhat.com>
Reviewed-by: Richard W.M. Jones <rjones@redhat.com>
If "s_cpu_model" is None and "gcaps_default_cpu" is "false", generate the
<cpu mode="host-passthrough"/> element for libvirt.
This element produces an (almost) exact copy of the host (i.e., physical)
CPU for the guest, which is the best choice for guest OSes that cannot run
on QEMU's default VCPU type -- considering that domains converted by
virt-v2v are not expected to be migrateable without further tweaks by an
administrator.
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2076013
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Message-Id: <20220420162333.24069-7-lersek@redhat.com>
Reviewed-by: Richard W.M. Jones <rjones@redhat.com>
The 'match="minimum"' attribute of the <cpu> element only makes sense if
the CPU model is specified. If the CPU model is not specified by
"s_cpu_model", i.e., we produce the <cpu> element only due to
"s_cpu_topology", then we need no attributes for <cpu> at all. Restrict
'match="minimum"' to when "s_cpu_model" is specified.
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2076013
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Message-Id: <20220420162333.24069-6-lersek@redhat.com>
Reviewed-by: Richard W.M. Jones <rjones@redhat.com>
Commit 2a576b7cc5c3 ("v2v: -o libvirt: Don't write only <vendor> without
<model> (RHBZ#1591789).", 2018-06-21) introduced a path in the code where
we create a childless
<cpu match="minimum"/>
element. Namely, after said commit, in case
source.s_cpu_vendor <> None &&
source.s_cpu_model = None &&
source.s_cpu_topology = None
we no longer trigger a libvirt error; however, we do create the element
<cpu match="minimum"/>
without any children. Surprisingly, libvirt doesn't complain; it silently
ignores and eliminates this element from the domain XML.
Remove this code path by restricting the outer condition, for creating the
<cpu> element, to:
source.s_cpu_model <> None ||
source.s_cpu_topology <> None
This reflects that "s_cpu_vendor" only plays a role if "s_cpu_model" is
specified -- and the latter guarantees in itself that the <cpu> element
will be generated with at least one child element.
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2076013
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Message-Id: <20220420162333.24069-5-lersek@redhat.com>
Reviewed-by: Richard W.M. Jones <rjones@redhat.com>
In the match on (s_cpu_vendor, s_cpu_model), the following expression is
duplicated:
e "model" ["fallback", "allow"] [PCData model]
To eliminate this, normalize the match: match each component
separately, in the proper order, creating a tree-like match rather than a
table-like one.
This is done in preparation for a subsequent patch, which would otherwise
duplicate even more code.
We preserve the behavior added by commit 2a576b7cc5c3 ("v2v: -o libvirt:
Don't write only <vendor> without <model> (RHBZ#1591789).", 2018-06-21);
the change only simplifies the code.
With the elimination of the table-like pattern matching, we need not
specifically mention the "CPU vendor specified without CPU model" libvirt
error.
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2076013
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Message-Id: <20220420162333.24069-4-lersek@redhat.com>
Reviewed-by: Richard W.M. Jones <rjones@redhat.com>
Squash the patterns
| None, None -> ()
| Some _, None -> ()
into the identical
| _, None -> ()
We preserve the behavior added by commit 2a576b7cc5c3 ("v2v: -o libvirt:
Don't write only <vendor> without <model> (RHBZ#1591789).", 2018-06-21);
the change only simplifies the code.
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2076013
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Message-Id: <20220420162333.24069-3-lersek@redhat.com>
Reviewed-by: Richard W.M. Jones <rjones@redhat.com>
The oVirt API call for VM creation finishes before the VM is actually
created. Entities may still be locked after virt-v2v terminates, and if
the user tries to perform (scripted) actions after virt-v2v, those
operations may fail. To prevent this it is useful to monitor the task and
wait for its completion. This will also help to prevent some corner-case
scenarios (that would be difficult to debug) where the VM creation job
fails after virt-v2v has already terminated with success.
Thanks: Nir Soffer
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1985827
Signed-off-by: Tomáš Golembiovský <tgolembi@redhat.com>
Reviewed-by: Arik Hadas <ahadas@redhat.com>
Reviewed-by: Nir Soffer <nsoffer@redhat.com>
The current code handles some nonexistent cases (such as SCSI floppies,
virtio-block CD-ROMs), and does not create proper drives (that is,
back-ends with no media inserted) for removable devices (floppies,
CD-ROMs).
Rewrite the whole logic:
- handle only those scenarios that QEMU can support,
- separate the back-end (-drive) and front-end (-device) options,
- wherever / whenever a host-bus adapter front-end is needed
(virtio-scsi-pci, isa-fdc), create it,
- assign front-ends to buses (= parent front-ends) and back-ends
explicitly.
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2074805
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Message-Id: <20220415084728.14366-1-lersek@redhat.com>
Reviewed-by: Richard W.M. Jones <rjones@redhat.com>
[lersek@redhat.com: break "in" keywords to new lines after function
definitions (Rich]
[lersek@redhat.com: replace list catenation with cons]
In Python >= 3.3 we can use a monotonic clock instead of the system clock,
which ensures the clock will never go backwards during these loops.
Thanks: Nir Soffer
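A sketch of the kind of loop this affects, assuming a simple poll
helper (the helper name and parameters are illustrative):

```python
import time

def wait_for(condition, timeout=10.0, interval=0.1):
    """Poll until condition() is true or the deadline passes.

    time.monotonic() (Python >= 3.3) cannot jump backwards if the
    system clock is stepped during the loop, unlike time.time()."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

assert wait_for(lambda: True)
```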
The openstack output module currently passes the "--non-bootable" and
"--read-write" options to the "openstack volume create" command. There is
a bug in the "openstack" utility however (that is, in the
python-openstackclient project @ dabaec5a7b1b) where it assumes that the
image creation API blocks, and as soon as it completes, the readonly and
bootable flags can be tweaked with the APIs that exist for those purposes.
The image creation API does not block however, and when the "openstack"
command line utility tries to set the readonly & bootable flags, those
APIs fail because image creation is still in progress. This results in an
obscure error message on the virt-v2v standard error:
> [ 322.8] Initializing the target -o openstack
> openstack [...] volume create -f json --size 20 --description virt-v2v
> temporary volume for esx6.7-win2016-x86_64 --non-bootable --read-write
> esx6.7-win2016-x86_64-sda
> Failed to set volume read-only access mode flag: Invalid volume: Volume
> 009dc6bd-2f80-4ac3-b5e7-771863aca237 status must be available to update
> readonly flag, but current status is: creating. (HTTP 400) (Request-ID:
> req-6f56ce4c-249b-4112-9c52-dd91b7f5aae9)
Given that "--non-bootable" and "--read-write" are both defaults for VM
image creation, according to
<https://docs.openstack.org/python-openstackclient/yoga/cli/command-objects/volume.html>,
work the symptom around by simply not passing these options.
(Tested only with "make check"; I don't have an Openstack setup.)
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2074801
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Message-Id: <20220413070534.21285-1-lersek@redhat.com>
Reviewed-by: Richard W.M. Jones <rjones@redhat.com>
If a virtio-vsock device was installed or present in the guest kernel
then -o qemu mode would add the parameter -device vhost-vsock-pci to
the qemu command line. This fails with:
qemu-system-x86_64: -device vhost-vsock-pci: guest-cid property must be greater than 2
Thanks: Stefano Garzarella
Fixes: commit 4f6b143c1cb32a29cbd3ff04635ba220e5db82ed
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
QEMU removed the -balloon option (qemu commit 1d9cb42c56547 "Remove
the deprecated -balloon option"). Actually quite a long time ago, in
2018. Replace it in the qemu script code.
Reported-by: Ming Xie
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=2074805
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
This was removed when I removed -o json support, but I did not delete
the file from git. “make maintainer-check-extra-dist” complained
about this.
Fixes: commit 4e6b389b4e27c8d13e57fcaf777d96ad7e08650b
At this point, virt-v2v never relies on the Unix domain sockets created
inside the "run_unix" implementations. Simplify the code by removing this
option.
Consequently, the internally created temporary directory only holds the
NBD server's PID file, and never its UNIX domain socket. Therefore:
(1) we no longer need the libguestfs socket dir to be our temp dir,
(2) we need not change the file mode bits on the temp dir,
(3) we can rename "tmpdir" to the more specific "piddir".
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2066773
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Message-Id: <20220323104330.9667-1-lersek@redhat.com>
Acked-by: Richard W.M. Jones <rjones@redhat.com>
This simply replaces the existing -oo rhv-direct option with a new -oo
rhv-proxy option. Note that using this option "bare" (ie. just “-oo
rhv-proxy”) does nothing in the current commit because the default is
still to use the proxy.
Related: https://bugzilla.redhat.com/show_bug.cgi?id=2033096
Thanks: Nir Soffer
Output modules can now specify request_size to override the default
request size in nbdcopy.
The rhv-upload plugin is translating every NBD command to HTTP request,
translated back to NBD command on imageio server. The HTTP client and
server, and the NBD client on the imageio server side are synchronous
and implemented in python, so they have high overhead per request. To
get good performance we need to use larger request size.
Testing shows that a request size of 4 MiB speeds up the copy disk phase
from 14.7 seconds to 7.9 seconds (1.8x faster). A request size of 8 MiB
is a little bit (3%) faster still, so we may want to tune this in the
future.
Here are stats extracted from imageio log when importing Fedora 35 image
with 3 GiB of random data. For each copy, we have 4 connection stats.
Before:
connection 1 ops, 14.767843 s
    dispatch 4023 ops, 11.427662 s
    zero 38 ops, 0.053840 s, 327.91 MiB, 5.95 GiB/s
    write 3981 ops, 8.975877 s, 988.61 MiB, 110.14 MiB/s
    flush 4 ops, 0.001023 s
connection 1 ops, 14.770026 s
    dispatch 4006 ops, 11.408732 s
    zero 37 ops, 0.057205 s, 633.21 MiB, 10.81 GiB/s
    write 3965 ops, 8.907420 s, 986.65 MiB, 110.77 MiB/s
    flush 4 ops, 0.000280 s
connection 1 ops, 14.768180 s
    dispatch 4057 ops, 11.430712 s
    zero 42 ops, 0.030011 s, 470.47 MiB, 15.31 GiB/s
    write 4011 ops, 9.002055 s, 996.98 MiB, 110.75 MiB/s
    flush 4 ops, 0.000261 s
connection 1 ops, 14.770744 s
    dispatch 4037 ops, 11.462050 s
    zero 45 ops, 0.026668 s, 750.82 MiB, 27.49 GiB/s
    write 3988 ops, 9.002721 s, 989.36 MiB, 109.90 MiB/s
    flush 4 ops, 0.000282 s
After:
connection 1 ops, 7.940377 s
    dispatch 323 ops, 6.695582 s
    zero 37 ops, 0.079958 s, 641.12 MiB, 7.83 GiB/s
    write 282 ops, 6.300242 s, 1.01 GiB, 164.54 MiB/s
    flush 4 ops, 0.000537 s
connection 1 ops, 7.908156 s
    dispatch 305 ops, 6.643475 s
    zero 36 ops, 0.144166 s, 509.43 MiB, 3.45 GiB/s
    write 265 ops, 6.179187 s, 941.23 MiB, 152.32 MiB/s
    flush 4 ops, 0.000324 s
connection 1 ops, 7.942349 s
    dispatch 325 ops, 6.744800 s
    zero 45 ops, 0.185335 s, 622.19 MiB, 3.28 GiB/s
    write 276 ops, 6.236819 s, 995.45 MiB, 159.61 MiB/s
    flush 4 ops, 0.000369 s
connection 1 ops, 7.955572 s
    dispatch 317 ops, 6.721212 s
    zero 43 ops, 0.135771 s, 409.68 MiB, 2.95 GiB/s
    write 270 ops, 6.326366 s, 988.26 MiB, 156.21 MiB/s
    flush 4 ops, 0.001439 s