The information about sockets having a different label than the one on the
file, and the way it needs to be set, is very difficult to find for those
who have not come across it before. Let's describe what needs to happen in
order for the migration to go through rather than relying on the general
knowledge of others.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
Even though it is technically possible, when running the migration QEMU's
nbd-server-start errors out with:
"TLS is only supported with IPv4/IPv6"
We can always enable it when QEMU adds this feature, but for now it is safer to
show our error message rather than rely on QEMU to error out properly.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
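As an illustration of the check described above, here is a minimal standalone
sketch (the names below are made up, not libvirt's actual code): reject TLS
whenever the NBD server used for storage migration would listen on a UNIX
socket, instead of waiting for QEMU's nbd-server-start to fail.

  #include <stdbool.h>
  #include <stdio.h>

  enum nbd_listen_type { NBD_LISTEN_INET, NBD_LISTEN_UNIX };

  /* Reject the combination up front instead of relying on QEMU's
   * nbd-server-start to produce its own error. */
  static bool
  nbd_tls_migration_supported(enum nbd_listen_type listen_type, bool tls)
  {
      if (tls && listen_type == NBD_LISTEN_UNIX) {
          fprintf(stderr,
                  "NBD migration with TLS is not supported over UNIX sockets\n");
          return false;
      }
      return true;
  }

  int main(void)
  {
      /* TLS over a UNIX-socket NBD server is rejected. */
      if (!nbd_tls_migration_supported(NBD_LISTEN_UNIX, true))
          return 0;   /* rejected as expected */
      return 1;
  }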
When executing the hypervisor-cpu-baseline command with only a single CPU
definition present in the XML file, the baseline handler will exit early
and libvirt will print an unhelpful message:
"error: An error occurred, but the cause is unknown"
This is due to no CPU definition ever being "baselined", since the
handler expects at least two CPU models.
Let's fix this by performing a CPU model expansion on the single CPU
definition and returning the result to the caller. This will also
ensure the CPU model's feature set is sane if any features were provided
in the file.
Signed-off-by: Collin Walling <walling@linux.ibm.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
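A simplified, standalone sketch of the fix above (the types and helpers stand
in for the QMP query-cpu-model-expansion/query-cpu-model-baseline plumbing,
not libvirt's real API):

  #include <stddef.h>

  typedef struct { const char *model; } cpu_def;

  /* Stand-in for expansion via QMP query-cpu-model-expansion. */
  static cpu_def *expand_cpu_model(cpu_def *cpu) { return cpu; }

  /* Stand-in for baselining via QMP query-cpu-model-baseline. */
  static cpu_def *baseline_cpu_models(cpu_def **cpus, size_t ncpus)
  {
      return ncpus ? cpus[0] : NULL;
  }

  static cpu_def *
  hypervisor_cpu_baseline(cpu_def **cpus, size_t ncpus)
  {
      if (ncpus == 0)
          return NULL;

      /* The handler used to insist on at least two models and bailed out
       * here with an unhelpful error; a single model is now expanded and
       * returned instead. */
      if (ncpus == 1)
          return expand_cpu_model(cpus[0]);

      return baseline_cpu_models(cpus, ncpus);
  }

  int main(void)
  {
      cpu_def single = { "z14" };
      cpu_def *cpus[] = { &single };
      return hypervisor_cpu_baseline(cpus, 1) ? 0 : 1;
  }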
Check the provided CPU models against the CPU models
known by the hypervisor before baselining and print
an error if an unrecognized model is found.
Signed-off-by: Collin Walling <walling@linux.ibm.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
When executing the hypervisor-cpu-baseline command with an XML file
that contains a CPU definition without a model name, or an invalid CPU
definition, the command will fail and return an error message from the
QMP response.
Let's clean this up by checking for a valid definition and the presence
of a model name.
This code is copied from virCPUBaseline.
Signed-off-by: Collin Walling <walling@linux.ibm.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
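The added validation boils down to something like this standalone sketch
(simplified types; as noted, the real code mirrors the checks in
virCPUBaseline):

  #include <stdbool.h>
  #include <stddef.h>
  #include <stdio.h>

  typedef struct { const char *model; } cpu_def;

  /* Fail with a clear message before anything is sent over QMP. */
  static bool
  validate_cpus(cpu_def **cpus, size_t ncpus)
  {
      for (size_t i = 0; i < ncpus; i++) {
          if (!cpus[i]) {
              fprintf(stderr, "no valid CPU definition at index %zu\n", i);
              return false;
          }
          if (!cpus[i]->model) {
              fprintf(stderr, "CPU definition at index %zu is missing a model name\n", i);
              return false;
          }
      }
      return true;
  }

  int main(void)
  {
      cpu_def nameless = { NULL };
      cpu_def *cpus[] = { &nameless };
      return validate_cpus(cpus, 1) ? 1 : 0;
  }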
Hypervisor-cpu-baseline requires the cpu-model-expansion
capability when expanding CPU model features if the
--features flag is provided.
Signed-off-by: Collin Walling <walling@linux.ibm.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
When libvirt added support for firewalld, we were unable to use
firewalld's higher level rules, because they weren't detailed enough
and could not be applied to the iptables FORWARD or OUTPUT chains
(only to the INPUT chain). Instead we changed our code so that rather
than running the iptables/ip6tables/ebtables binaries ourselves, we
would send these commands to firewalld as "passthrough commands", and
firewalld would run the appropriate program on our behalf.
This was done under the assumption that firewalld was somehow tracking
all these rules, and that this tracking was benefitting proper
operation of firewalld and the system in general.
Several years later this came up in a discussion on IRC, and we
learned from the firewalld developers that, in fact, adding iptables
and ebtables rules with firewalld's passthrough commands actually has
*no* advantage; firewalld doesn't keep track of these rules in any
way, and doesn't use them to tailor the construction of its own rules.
Meanwhile, users have been complaining for some time that whenever
firewalld is restarted on a system with libvirt virtual networks
and/or nwfilter rules active, the system logs would be flooded with
warning messages whining that [lots of different rules] could not be
deleted because they didn't exist. For example:
firewalld[3536040]: WARNING: COMMAND_FAILED:
'/usr/sbin/iptables -w10 -w --table filter --delete LIBVIRT_OUT
--out-interface virbr4 --protocol udp --destination-port 68
--jump ACCEPT' failed: iptables: Bad rule
(does a matching rule exist in that chain?).
(See https://bugzilla.redhat.com/1790837 for many more examples and a
discussion)
Note that these messages are created by iptables, but are logged by
firewalld - when an iptables/ebtables command fails, firewalld grabs
whatever is in stderr of the program, and spits it out to the system
log as a warning. We've requested that firewalld not do this (and
instead leave it up to the calling application to do the appropriate
logging), but this request has been respectfully denied.
Combining the two problems above (1) firewalld doesn't do anything
useful when used as a proxy to add/remove iptables rules, and 2)
firewalld often insists on logging lots of annoying/misleading/useless
"error" messages when used as a proxy to remove iptables rules that
don't exist) leads to a simple solution: stop using firewalld to add
and remove iptables rules. Instead, exec iptables/ip6tables/ebtables
directly, in the same way we do when firewalld isn't active.
We still need to keep track of whether or not firewalld is active, as
there are some things that must be done, e.g. we need to add some
actual firewalld rules in the firewalld "libvirt" zone, and we need to
take notice when firewalld restarts, so that we can reload all our
rules.
This patch doesn't remove the infrastructure that allows having
different firewall backends that perform their functions in different
ways, as that will very possibly come in handy in the future when we
want to have an nftables direct backend, and possibly a "pure"
firewalld backend (now that firewalld supports more complex rules, and
can add those rules to the FORWARD and OUTPUT chains). Instead, it
just changes the action when the selected backend is "firewalld" so
that it adds rules directly rather than through firewalld, while
leaving as much of the existing code intact as possible.
In order for tests to still pass, virfirewalltest also had to be
modified to behave in a different way (i.e. by capturing the generated
commandline as it does for the DIRECT backend, rather than capturing
dbus messages using a mocked dbus API).
Signed-off-by: Laine Stump <laine@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
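The essence of the change above, as a standalone sketch (libvirt's real code
goes through its virCommand/virFirewall layers rather than raw fork/exec):
run the iptables binary directly instead of handing the argument list to
firewalld as a passthrough command.

  #include <stdio.h>
  #include <sys/types.h>
  #include <sys/wait.h>
  #include <unistd.h>

  /* Fork and exec iptables directly; previously the same argument list
   * would have been sent to firewalld over D-Bus as a passthrough command. */
  static int
  run_iptables(char *const argv[])
  {
      pid_t pid = fork();

      if (pid < 0)
          return -1;
      if (pid == 0) {
          execvp(argv[0], argv);
          _exit(127);
      }

      int status;
      if (waitpid(pid, &status, 0) < 0)
          return -1;
      return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
  }

  int main(void)
  {
      /* Example invocation; "-w" grabs the xtables lock (see later patches). */
      char *const argv[] = { "iptables", "-w", "--table", "filter",
                             "--list", "LIBVIRT_OUT", NULL };
      return run_iptables(argv) == 0 ? 0 : 1;
  }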
When it is starting up, firewalld will delete all existing iptables
rules and chains before adding its own rules. If libvirtd were to try
to directly add iptables rules during the time before firewalld has
finished initializing, firewalld would end up deleting the rules that
libvirtd has just added.
Currently this isn't a problem, since libvirtd only adds iptables
rules via the firewalld "passthrough command" API, and so firewalld is
able to properly serialize everything. However, we will soon be
changing libvirtd to add its iptables and ebtables rules by directly
calling iptables/ebtables rather than via firewalld, thus removing the
serialization of libvirtd adding rules vs. firewalld deleting rules.
This will be especially apparent (if we don't fix it in advance, as this
patch does) when libvirtd is responding to the dbus NameOwnerChanged
event, which is used to learn when firewalld has been restarted. In
that case, dbus sends the event before firewalld has been able to
complete its initialization, so when libvirt responds to the event by
adding back its iptables rules (with direct calls to
/usr/bin/iptables), some of those rules are added before firewalld has
a chance to do its "remove everything" startup protocol. The usual
result of this is that libvirt will successfully add its private
chains (e.g. LIBVIRT_INP, etc), but then fail when it tries to add a
rule jumping to one of those chains (because in the interim, firewalld
has deleted the new chains).
The solution is for libvirt to preface its direct calls to iptables
with an iptables command sent via firewalld's passthrough command
API. Since commands sent to firewalld are completed synchronously, and
since firewalld won't service them until it has completed its own
initialization, this will assure that by the time libvirt starts
calling iptables to add rules, firewalld will not be following up
by deleting any of those rules.
To minimize the amount of extra overhead, we request the simplest
iptables command possible: "iptables -V" (and aside from logging a
debug message, we ignore the result, for good measure).
(This patch is being done *before* the patch that switches to calling
iptables directly, so that everything will function properly with any
fractional part of the series applied).
Signed-off-by: Laine Stump <laine@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
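Roughly, the synchronization looks like the following standalone sketch,
using GDBus directly and assuming firewalld's
org.fedoraproject.FirewallD1.direct passthrough method (libvirt's own D-Bus
helpers differ). The call only returns once firewalld can service requests,
so direct iptables calls made afterwards can no longer be wiped out by
firewalld's startup:

  #include <gio/gio.h>

  int main(void)
  {
      g_autoptr(GError) err = NULL;
      g_autoptr(GDBusConnection) bus = g_bus_get_sync(G_BUS_TYPE_SYSTEM, NULL, &err);

      if (!bus) {
          g_printerr("cannot connect to the system bus: %s\n", err->message);
          return 1;
      }

      /* The simplest possible command: "iptables -V". The result is ignored;
       * what matters is that firewalld answered at all. */
      const char *args[] = { "-V", NULL };
      g_autoptr(GVariant) reply = g_dbus_connection_call_sync(
          bus,
          "org.fedoraproject.FirewallD1",
          "/org/fedoraproject/FirewallD1",
          "org.fedoraproject.FirewallD1.direct",
          "passthrough",
          g_variant_new("(s^as)", "ipv4", args),
          NULL,                      /* don't enforce a reply type */
          G_DBUS_CALL_FLAGS_NONE, -1, NULL, &err);

      if (!reply) {
          g_printerr("passthrough failed: %s\n", err->message);
          return 1;
      }

      /* Only now is it safe to start calling iptables directly. */
      return 0;
  }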
Even though *we* don't call ebtables/iptables/ip6tables (yet) when the
firewalld backend is selected, firewalld does, so these binaries need
to be there; let's check for them. (Also, the patch after this one is
going to start execing those binaries directly rather than via
firewalld).
Signed-off-by: Laine Stump <laine@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
This test was created with all the commandlines erroneously having
"--source-host", which is not a valid iptables option. The correct
name for the option is "--source". However, since the test is just
checking that the generated commandline matches what we told it to
generate (and never actually runs iptables, as that would be a "Really
Bad Idea"(tm)), the test has always succeeded. I only found it because
I made a change to the code that caused the test to incorrectly try to
run iptables during the test, and the error message I received was
"odd" (it complained about the bad option, rather than complaining
that I had insufficient privilege to run the command).
Signed-off-by: Laine Stump <laine@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
iptables and ip6tables have had a "-w" commandline option to grab a
systemwide lock that prevents two iptables invocations from modifying
the iptables chains since 2013 (upstream commit 93587a04 in
iptables-1.4.20). Similarly, ebtables has had a "--concurrent"
commandline option for the same purpose since 2011 (in the upstream
ebtables commit f9b4bcb93, which was present in ebtables-2.0.10.4).
Libvirt added code to conditionally use the commandline option for
iptables/ip6tables in upstream commit ba95426d6f (libvirt-1.2.0,
November 2013), and for ebtables in upstream commit dc33e6e4a5
(libvirt-1.2.11, November 2014) (the latter actually *re*-added the
locking for iptables/ip6tables, as it had accidentally been removed
during a refactor of firewall code in the interim).
I say "conditionally" because a check was made during firewall module
initialization that tried executing a test command with the
-w/--concurrent option, and only continued using it for actual
commands if that test command completed successfully. At the time the
code was added this was a reasonable thing to do, as it had been less
than a year since the introduction of -w to iptables, so many distros
supported by libvirt were still using iptables (and possibly even
ebtables) versions too old to have the new commandline options.
It is now 2020, and as far as I can discern from repology.org (and
manually examining a RHEL7.9 system), every version of every distro
that is supported by libvirt now uses new enough versions of both
iptables and ebtables that they all have support for -w/--concurrent.
That means we can finally remove the conditional code and simply
always use them.
Signed-off-by: Laine Stump <laine@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
All the unit tests that use iptables/ip6tables/ebtables have been
written to omit the locking/exclusive use primitive on the generated
commandlines. Even though none of the tests actually execute those
commands (and so it doesn't matter for purposes of the test whether or
not the commands support these options), it still made sense when some
systems had these locking options and some didn't.
We are now at a point where every supported Linux distro has supported
the locking options on these commands for quite a long time, so we are
going to make their use non-optional. As a first step, this patch uses
the virFirewallSetLockOverride() function, which is called at the
beginning of all firewall-related tests, to set all the bools
controlling whether or not the locking options are used to true. This
means that all the test cases must be updated to include the proper
locking option in their commandlines.
The change to make actual execs of the commands unconditionally use
the locking option will be in an upcoming patch - this one affects
only the unit tests.
Signed-off-by: Laine Stump <laine@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
When virfirewalltest.c was first written in commit 3a0ca7de51 (March
2013), a conditional accidentally tested for "ipv4" instead of
"ipv6". Since the file ended up only testing ipv4 rules, this has
never made any difference in practice, but I'm making some other
changes in this file and just couldn't let it stand :-)
Signed-off-by: Laine Stump <laine@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
The feature is never enabled by default on KVM and QEMU dropped it from
the models long ago.
https://bugzilla.redhat.com/show_bug.cgi?id=1798004
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Tim Wiederhake <twiederh@redhat.com>
For backward compatibility with older versions of libvirt, CPU models in
our CPU map are mostly immutable. We only changed them in a few specific
cases after showing it was safe. Sometimes QEMU developers realize a
specific feature should not be part of a particular (or any) CPU model
because it can never be enabled automatically without further
configuration. But we couldn't follow them because doing so would break
migration to older libvirt.
If QEMU drops feature F from CPU model M because F could not be enabled
automatically anyway, asking for M would never enable F, even with older
QEMU versions. Naively removing F from libvirt's definition of M would
seem to work nicely on a single host: libvirt would consider M to be
compatible with host CPUs that do not support F. However, trying to
migrate domains using M without explicitly enabling or disabling F could
fail, because older libvirt would think F was enabled (it is part of M
there), but QEMU reports it as disabled once started.
Thus we can remove such a feature from a libvirt CPU model, but we have
to make sure any CPU definition using the affected model always
explicitly mentions the state of the removed feature.
https://bugzilla.redhat.com/show_bug.cgi?id=1798004
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Tim Wiederhake <twiederh@redhat.com>
The patch adds a new attribute for the 'feature' element in the CPU model
specification to indicate that a given feature was removed from a CPU
model. In other words, older versions of libvirt would consider such a
feature to be included in the CPU model.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Tim Wiederhake <twiederh@redhat.com>
This is just a preparation for adding new functionality to
virCPUx86Update.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Tim Wiederhake <twiederh@redhat.com>
Until now, the function returned immediately when the guest CPU
definition did not use optional features or minimum match. Clearly,
there's nothing to be updated according to the host CPU in this case,
but the arch specific code may still want to do some compatibility
updates based on the model and features used in the guest CPU
definition.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Tim Wiederhake <twiederh@redhat.com>
This new function adds a feature to a CPU definition only if it is not
present there yet.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Tim Wiederhake <twiederh@redhat.com>
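In spirit, the new helper behaves like this standalone sketch (simplified
structures; libvirt's virCPUDef also tracks a policy for each feature):

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  struct cpu_feature { char *name; };

  struct cpu_def {
      size_t nfeatures;
      struct cpu_feature *features;
  };

  /* Add the named feature unless the definition already contains it. */
  static int
  cpu_def_add_feature_if_missing(struct cpu_def *cpu, const char *name)
  {
      for (size_t i = 0; i < cpu->nfeatures; i++) {
          if (strcmp(cpu->features[i].name, name) == 0)
              return 0;   /* already present, nothing to do */
      }

      struct cpu_feature *tmp =
          realloc(cpu->features, (cpu->nfeatures + 1) * sizeof(*tmp));
      if (!tmp)
          return -1;
      cpu->features = tmp;

      if (!(cpu->features[cpu->nfeatures].name = strdup(name)))
          return -1;
      cpu->nfeatures++;
      return 0;
  }

  int main(void)
  {
      struct cpu_def cpu = { 0, NULL };

      cpu_def_add_feature_if_missing(&cpu, "f1");
      cpu_def_add_feature_if_missing(&cpu, "f1");   /* second call is a no-op */
      printf("nfeatures = %zu\n", cpu.nfeatures);   /* prints 1 */
      return 0;
  }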
Replace the 'update' bool parameter with an enum so that we can have
more than two possible values.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Tim Wiederhake <twiederh@redhat.com>
The function is supposed to add a feature to a CPU definition, so let's
name it virCPUDefAddFeatureInternal. The behavior in case the feature is
already present in the CPU def is configurable and we will soon add a
new option to not do anything in that case, which wouldn't really work
well with the current *Update* name.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Tim Wiederhake <twiederh@redhat.com>
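Put together, the refactor described in the last two commits amounts to
something like this declaration-only sketch (every identifier below is made
up; only the shape matters):

  #include <stdbool.h>

  struct cpu_def;   /* opaque stand-in for libvirt's virCPUDef */

  /* Old shape: a bool can only express two behaviours when the feature
   * is already present (update it, or fail). */
  int cpu_def_update_feature(struct cpu_def *cpu, const char *name,
                             bool update);

  /* New shape: an enum leaves room for the third behaviour mentioned
   * above (do nothing if the feature is already there). */
  typedef enum {
      CPU_ADD_FEATURE_UPDATE,   /* former "true": overwrite the existing policy */
      CPU_ADD_FEATURE_FAIL,     /* former "false": report an error */
      CPU_ADD_FEATURE_IGNORE,   /* new: keep the existing feature untouched */
  } cpu_add_feature_mode;

  int cpu_def_add_feature_internal(struct cpu_def *cpu, const char *name,
                                   cpu_add_feature_mode mode);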
dumpxml can now serialize:
* floppy drives
* file-backed and device-backed disk drives
* images mounted to virtual CD/DVD drives
* IDE and SCSI controllers
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Co-authored-by: Sri Ramanujam <sramanujam@datto.com>
Signed-off-by: Matt Coleman <matt@datto.com>
Mention the flag to enable TLS and also the knob to enforce it in the
qemu hypervisor driver.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Forgetting to use the VIR_MIGRATE_TLS flag with migration can lead to
a leak of sensitive information. Add an administrative knob to force use
of the flag.
Note that without VIR_MIGRATE_PEER2PEER the migration is driven by an
instance of the client library, which doesn't necessarily run on either
of the hosts, so the knob can't be used to silently assume VIR_MIGRATE_TLS
when the user didn't provide it; instead, the migration is rejected if the
flag is missing.
Resolves: https://gitlab.com/libvirt/libvirt/-/issues/67
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
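The enforcement itself is a one-line check. A hedged standalone sketch
(VIR_MIGRATE_TLS is the real public flag; the boolean standing in for the new
configuration knob and the error wording are assumptions):

  #include <stdbool.h>
  #include <stdio.h>
  #include <libvirt/libvirt.h>

  /* Reject the migration outright when the knob is enabled and the caller
   * did not pass VIR_MIGRATE_TLS; the flag is never added on their behalf. */
  static int
  check_tls_enforced(bool tls_required, unsigned int flags)
  {
      if (tls_required && !(flags & VIR_MIGRATE_TLS)) {
          fprintf(stderr, "this host only allows migrations with VIR_MIGRATE_TLS\n");
          return -1;
      }
      return 0;
  }

  int main(void)
  {
      /* With enforcement on and no VIR_MIGRATE_TLS, the migration is rejected. */
      return check_tls_enforced(true, VIR_MIGRATE_LIVE) == -1 ? 0 : 1;
  }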
Enumerate some features which are incompatible with tunnelled migration.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
qemu's internals were not prepared for switching to -blockdev for the
legacy storage migration. Add a proper error message since qemu is
unlikely to attempt fixing the old protocol.
Resolves: https://gitlab.com/libvirt/libvirt/-/issues/65
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Move and aggregate, ahead of the other checks, all the logic which is
switched based on whether the migration is tunnelled or not. Further
checks will be added later.
While the code is being moved, the error message is put on a single line
per the new coding style.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Our streams are not the best transport for migration data and we support
TLS for security now. It's unlikely that there will be enough motivation
to add a new migration protocol to tunnel NBD too.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Fix the following issues:
1) the very long line is overflowing the code box
2) '--migrateuri' was missing for the qemu data stream
3) '--desturi' was not used making it non-obvious what the argument
corresponds to
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Similarly to the previous commit dealing with snapshots, we must rewrite
the metadata of the previously-'current' checkpoint when changing which
checkpoint is considered 'current'.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Whether a snapshot definition is considered 'current' or active is
stored in the metadata XML libvirt writes when creating the snapshot
metadata.
This means that if we are changing the 'current' snapshot, we must
re-write the metadata of the previously 'current' snapshot to update the
field and prevent having multiple active snapshots.
Unfortunately the snapshot creation code didn't do this properly, which
resulted in the following error:
error : qemuDomainSnapshotLoad:430 : internal error: Too many snapshots claiming to be current for domain snapshot-test
being printed if libvirtd was terminated and restarted.
Introduce qemuSnapshotSetCurrent which writes out the old snapshot's
metadata when updating the current snapshot.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
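A simplified standalone model of the fix (illustrative types; the real code
deals with libvirt's snapshot objects and their on-disk XML): before marking
the new snapshot as current, the old current snapshot's metadata is rewritten
with the flag cleared.

  #include <stdbool.h>
  #include <stdio.h>

  struct snapshot {
      const char *name;
      bool current;
  };

  /* Stand-in for writing the per-snapshot metadata XML back to disk. */
  static void write_metadata(const struct snapshot *snap)
  {
      printf("writing %s (current=%d)\n", snap->name, snap->current);
  }

  static void
  snapshot_set_current(struct snapshot **currentp, struct snapshot *snap)
  {
      struct snapshot *old = *currentp;

      if (old && old != snap) {
          old->current = false;
          write_metadata(old);   /* the rewrite that was previously missing */
      }

      snap->current = true;
      write_metadata(snap);
      *currentp = snap;
  }

  int main(void)
  {
      struct snapshot s1 = { "s1", false }, s2 = { "s2", false };
      struct snapshot *current = NULL;

      snapshot_set_current(&current, &s1);
      snapshot_set_current(&current, &s2);   /* s1 is rewritten with current=false */
      return 0;
  }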
In some cases, such as when creating an internal inactive snapshot, we
know that the domain definition in the snapshot is equivalent to the
current definition. Additionally, we set up the current definition for
the snapshotting but not the one contained in the snapshot. Thus, in some
cases the caller knows better which def to use.
Make qemuDomainSnapshotForEachQcow2 take the definition from the caller
and copy the logic for selecting the definition to callers where we
don't know for sure that the above claim applies.
This fixes internal inactive snapshots when <disk type='volume'> is used
as we translate the pool/vol combo only in the current def.
Resolves: https://gitlab.com/libvirt/libvirt/-/issues/97
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Don't try to manipulate snapshots on network or unresolved volume backed
storage.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
'continue' the loop if the device is not a disk. Saving a level of
indentation makes one of the error messages fit on a single line.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Commit 912c6b22fc added abort() when the
'val' parameter is NULL along with setting the error variable for the
command. We don't want to abort in this case, just set the error.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
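In effect (generic standalone sketch with made-up names), the NULL argument
now records an error on the command object and returns instead of calling
abort():

  #include <stdbool.h>
  #include <stddef.h>

  struct command {
      bool has_error;
      const char *error;
  };

  static void
  command_add_arg(struct command *cmd, const char *val)
  {
      if (val == NULL) {
          /* previously: abort(); */
          cmd->has_error = true;
          cmd->error = "tried to add a NULL argument";
          return;
      }
      /* ... append the argument to the command here ... */
  }

  int main(void)
  {
      struct command cmd = { false, NULL };

      command_add_arg(&cmd, NULL);
      return cmd.has_error ? 0 : 1;   /* the error is set, no abort happens */
  }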
When the host is shutting down, we get the PrepareForShutdown signal
on DBus, to which we react by creating a thread which runs
virStateStop() and thus qemuStateStop(). But if scheduling the thread
is delayed just a bit, it may happen that we receive SIGTERM (sent by
systemd), to which we respond by quitting our event loop and cleaning
up everything (including drivers). Only after that does the thread get
to run, only to find qemu_driver being NULL.
What we can do is delay exiting the event loop and join the thread
that's executing virStateStop(). If the join doesn't happen within the
given timeout (currently 30 seconds), then libvirtd shuts down
forcefully anyway (see virNetDaemonRun()).
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1895359
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1739564
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
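The sequencing can be modelled with this standalone pthreads sketch (libvirt
uses its own thread APIs and a 30-second join timeout in virNetDaemonRun; all
names here are illustrative): the SIGTERM path now joins the stop thread
before the drivers are torn down.

  #include <pthread.h>
  #include <stdio.h>

  static int dummy_driver;
  static void *qemu_driver = &dummy_driver;   /* stand-in for the real driver */

  static pthread_t stop_thread;
  static int stop_thread_started;

  /* Does the virStateStop()/qemuStateStop() work: suspend guests before shutdown. */
  static void *state_stop_worker(void *arg)
  {
      (void)arg;
      if (qemu_driver)
          puts("suspending domains before host shutdown");
      return NULL;
  }

  /* Reaction to the DBus PrepareForShutdown signal. */
  static void on_prepare_for_shutdown(void)
  {
      if (pthread_create(&stop_thread, NULL, state_stop_worker, NULL) == 0)
          stop_thread_started = 1;
  }

  /* Reaction to SIGTERM: quit the event loop and clean up. */
  static void on_terminate(void)
  {
      if (stop_thread_started)
          pthread_join(stop_thread, NULL);   /* let the stop thread finish first */

      qemu_driver = NULL;   /* only now is it safe to free the drivers */
  }

  int main(void)
  {
      on_prepare_for_shutdown();
      on_terminate();
      return 0;
  }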