In the functions I'm fixing here, we call
qemuMonitorJSONCheckError() followed by another check whether the
qemu reply contains a 'return' object. If it didn't, the former
CheckError() function would have errored out already and the flow
would never reach the latter check.
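A minimal sketch of the redundant pattern being removed (only the CheckError()
call is named above; the parsing helper shown here is an assumption):

    if (qemuMonitorJSONCheckError(cmd, reply) < 0)
        goto cleanup;

    /* redundant: CheckError() has already failed if 'return' is missing */
    if (!(data = virJSONValueObjectGetObject(reply, "return")))
        goto cleanup;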
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Usually, the flow in this area of the code is as follows:
qemuMonitorJSONMakeCommand()
qemuMonitorJSONCommand()
qemuMonitorJSONCheckError()
parseReply()
But in this function, for some reason, the last two steps were
swapped. This makes no sense.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
In qemuDomainDefAddDefaultDevices we check for a non-NULL
def->os.machine for x86 archs, but not for the others.
Moreover, the only caller - qemuDomainDefPostParse -
already checks for it, and even then it can happen only
if /etc/libvirt contains an XML without a machine type.
We do not need to propagate the exact return values
and the only possible ones are 0 and -1 anyway.
Remove the temporary variable and use the usual pattern:
if (f() < 0)
return -1;
https://bugzilla.redhat.com/show_bug.cgi?id=1139766
The thing is, for some reason you can have your domain's RTC set
to something other than UTC. More weirdly, it is not only a time
zone you can shift it by, but an arbitrary value. So, if a domain
is configured that way, libvirt will correctly put it onto the
qemu command line and, moreover, track this offset as it changes
during the domain's lifetime (e.g. because the guest OS decides
the best thing to do is to set a new time into the RTC). Anyway,
the way this tracking is implemented is via events. But we have a
problem if a change in the guest's RTC occurs while the daemon is
not running: the event is lost and we end up reporting an invalid
value in the domain XML. Therefore, when the daemon is starting
up again and reconnecting to all running domains, re-fetch their
RTC so the correct offset value can be computed.
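A rough sketch of the re-fetch idea; the monitor helper name and the exact
adjustment bookkeeping are assumptions based on the description above:

    struct tm thenbits;
    time_t then, now;

    /* ask qemu for the guest's current RTC value (assumed helper) */
    if (qemuMonitorGetRTCTime(priv->mon, &thenbits) < 0)
        goto cleanup;

    then = timegm(&thenbits);   /* guest RTC as seconds since the epoch */
    now = time(NULL);           /* host wall-clock time */

    /* recompute the offset the guest has applied to its RTC */
    vm->def->clock.data.variable.adjustment = then - now;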
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Wire up the public client listing API. Along with this change, a simple private
time conversion method to interpret a client's timestamp obtained from the
server has been added as well. The format used for time output is as follows:
YYYY-mm-DD HH:MM:SS+ZZZZ.
Although libvirt exposes time-related methods internally through virtime.h,
those use millisecond precision, which we don't need in this case, especially
when connection timestamps only have second precision.
This is just a convenience int to string conversion method.
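A plain-libc sketch of producing that output format from a seconds-precision
value (illustrative only, not the helper added here):

    char buf[64];
    struct tm tmbuf;
    time_t t = client_timestamp;   /* seconds obtained from the server */

    localtime_r(&t, &tmbuf);
    strftime(buf, sizeof(buf), "%Y-%m-%d %H:%M:%S%z", &tmbuf);
    /* buf now holds e.g. "2015-06-15 14:05:09+0200" */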
To reflect the new API, the man page has been adjusted accordingly.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Although we document 6 types of transport that we support, internally we can
only differentiate between TCP, TLS, and UNIX transports, since both the SSH
and libssh2 transports, due to using netcat, behave in exactly the same way
as a UNIX socket.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
For now, the list copy is done simply by locking the whole server, walking the
original list and increasing the refcount on each object. We may want to change
the list to a lockable object (like the list of domains) later if we discover
performance issues related to locking the whole server in order to walk the
whole list of clients, possibly issuing some 'ForEach' callback.
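A minimal sketch of that copy; the server's client array and the result list
are named illustratively here, not taken from the code:

    size_t i;

    virObjectLock(srv);
    for (i = 0; i < srv->nclients; i++) {
        /* grab a reference so the client object outlives the server lock */
        virObjectRef(srv->clients[i]);
        list[i] = srv->clients[i];
    }
    virObjectUnlock(srv);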
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Now that libvirt-admin supports another client-side object, and provided that
we want to generate as many of both the client-side and server-side RPC
dispatchers as possible, support for this needs to be added to gendispatch.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Besides the ID, the object also stores static data like the connection
transport and the connection timestamp, since once a list of all clients
connected to a server has been obtained, from the user's perspective it is
nice to know whether a given client is remote or local-only and when it
connected to the daemon.
Along with the object introduction, all client-side methods necessary
to work with the object are added as well.
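An illustrative shape of the client-side object described above; the member
names are assumptions, not the actual definition:

    struct _virAdmClient {
        unsigned long long id;   /* client ID */
        int transport;           /* TCP, TLS, or UNIX */
        long long timestamp;     /* connection time, seconds precision */
    };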
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Besides the ID, libvirt should provide several parameters to help the user
distinguish two clients from each other. One of them is the connection
timestamp. This patch also adds a test case for proper JSON formatting of the
new attribute (proper formatting of older clients that did not support
this attribute yet is covered by the existing tests) - in order for
testGenerateJSON to work, a mock of time_t time(time_t *timer) needed to be
created.
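A sketch of such a time() override as used from a preloaded test mock (the
fixed value is arbitrary):

    time_t time(time_t *arg)
    {
        const time_t fixed = 1234567890;   /* deterministic timestamp for the test */

        if (arg)
            *arg = fixed;
        return fixed;
    }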
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Admin API needs a way of addressing specific clients. Unlike servers, which we
are happy to address by name, both because a server's name reflects its purpose
(to some extent) and because we only have two of them (so far), naming clients
doesn't make any sense, since a) each client is anonymous, i.e. not recognized
after a disconnect followed by a reconnect, b) we can't predict what kind of
requests it's going to send to the daemon, and c) there are loads of them
coming and going, so the only viable option is to use an ID of a reasonably
wide data type.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Most distributions, including RHEL, have switched to systemd,
so we should detect it and act accordingly. This also means
that 'systemd+redhat' should be preferred to legacy 'redhat'.
Our witness for the check is the availability of the systemctl
command on the host.
If we didn't find a match, either because we're cross compiling
or because we're not building on RHEL, we won't install any
init script.
Make sure this is reported correctly in the configure summary.
If a panic device is defined without a model in a domain,
the default value is always overwritten with the ISA model. An ISA
bus does not exist on S390, and therefore specifying a panic device
results in an unsupported configuration.
Since the S390 architecture inherently provides a crash detection
capability, the panic device should be defined in the domain XML.
This patch adds an s390 panic device model and prevents setting a
device address on it.
Signed-off-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
The iohelper dies on SIGPIPE if the stream is closed before all data
is processed. IMO this should be an error condition for virStreamFinish
according to docs like:
* This method is a synchronization point for all asynchronous
* errors, so if this returns a success code the application can
* be sure that all data has been successfully processed.
However for virStreamAbort, not so much:
* Request that the in progress data transfer be cancelled
* abnormally before the end of the stream has been reached.
* For output streams this can be used to inform the driver
* that the stream is being terminated early. For input
* streams this can be used to inform the driver that it
* should stop sending data.
Without this, virStreamAbort will realistically always error for
active streams like domain console. So, treat the SIGPIPE case
as non-fatal if abort is requested.
Note, this will only affect an explicit user requested abort. An
abnormal abort, like from a server error, always raises an error
in the daemon.
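From the client's point of view the two termination paths look like this
(public API usage sketch):

    if (transfer_complete) {
        /* synchronization point: succeeds only if all data was processed */
        if (virStreamFinish(st) < 0)
            goto error;
    } else {
        /* early, user-requested cancellation; should not be treated as fatal */
        if (virStreamAbort(st) < 0)
            goto error;
    }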
Every time a client aborts a stream via the virStreamAbort API,
the daemon always logs an error like:
error : daemonStreamHandleAbort:617 : stream aborted at client request
and that same error is returned to the client. Meaning virStreamAbort
always returns -1, which seems strange.
This reworks the error handling to only raise an error on virStreamAbort
if the actual server side abort call raises an error. This is similar
to how virStreamFinish works.
If the abort code path is triggered by an unexpected message type
then we continue to raise an unconditional error. Also drop a redundant
VIR_WARN call there, since virReportError will raise a VIR_ERROR anyway.
These are the only places where we don't set stream->closed when
aborting the stream. This leads to spurious errors when the client
hangs up unexpectedly:
error : virFDStreamUpdateCallback:127 : internal error: stream is not open
Calling virStreamFinish prematurely seems to trigger this code path
even after the stream is closed, which ends up hitting this error
message later:
error : virFDStreamUpdateCallback:127 : internal error: stream is not open
Skip this function if stream->closed; this check is already used in many
other places, like the read/write handlers.
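The guard amounts to an early return (sketch only; the surrounding function
follows the description above, not the code verbatim):

    if (stream->closed)
        return;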
This is the only place in daemon/stream.c that sets
'stream->closed = true' but neglects to actually abort the stream
and remove the callback, which seems wrong.
libvirt-daemon-config-nwfilter will put a bunch of XML configs
into /etc/libvirt/nwfilter. These configs don't hardcode a UUID
and depend on libvirt to generate one. However, the generated UUID
is never saved to disk unless the user manually calls Define.
This makes daemon reload quite noisy, with many errors like:
error : virNWFilterObjAssignDef:3101 : operation failed: filter 'allow-incoming-ipv4' already exists with uuid 50def3b5-48d6-46a3-b005-cc22df4e5c5c
A new UUID is generated every time the config is read from
disk, so libvirt constantly thinks it's finding a new nwfilter.
Detect whether we generated a UUID when the config file is loaded; if so,
resave the new contents to disk to ensure the UUID is persistent.
This is similar to what was done in commit a47ae7c0 with virtual
networks and generated MAC addresses.
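Roughly, the loading path gains something like the following (all identifiers
below are hypothetical stand-ins for the mechanism described above):

    def = virNWFilterDefParseFile(configFile);       /* parse the on-disk XML */
    if (def && def->uuid_generated &&                /* hypothetical "we generated this UUID" flag */
        virNWFilterSaveConfig(configDir, def) < 0)   /* hypothetical resave helper */
        goto error;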
In virNWFilterObjLoad we can still fail after virNWFilterObjAssignDef,
but we don't unlock and free the created virNWFilterObjPtr in the
cleanup path.
The bit we are trying to do after AssignDef is just a STRDUP of the
configFile path. However, caching the configFile in the NWFilterObj
is largely redundant and doesn't follow the pattern we use
for domain and network objects.
So just remove all the configFile caching, which fixes the latent
bug as a side effect.
We will segfault if a daemon reload picks up a new network config
that needs to be autostarted. We shouldn't be passing NULL for
network_driver here. This seems to have been missed in the larger
rework in commit 1009a61e
The default USB controller is not sent to the destination, as older versions
of libvirt (0.9.4 or earlier, as I see in the commit log of 409b5f54) didn't
support it. Archs where the support started much later can safely send the
USB controller without this worry. So, send the controller to the destination
for all archs except x86. Moreover, this is not very applicable to x86, as
the USB controller has model ich9_ehci1 on q35 and, for pc-i440fx, there
can't be any slots before USB as it is fixed on slot 1.
The patch fixes a bug where, if the USB controller happens to occupy
a slot after disks/interfaces and one of them is hot-unplugged,
the default USB controller added on the destination takes the smallest slot
number, which leads to a savestate mismatch and migration
failure. Seen and verified on PPC64.
Signed-off-by: Shivaprasad G Bhat <sbhat@linux.vnet.ibm.com>
We historically format runtime seclabel selinux/apparmor values,
however we skip formatting runtime DAC values. This was added in
commit 990e46c454
Author: Marcelo Cerri <mhcerri@linux.vnet.ibm.com>
Date: Fri Aug 31 13:40:41 2012 +0200
conf: Avoid formatting auto-generated DAC labels
to maintain migration compatibility with libvirt < 0.10.0.
However the formatting was skipped unconditionally. Instead only
skip formatting in the VIR_DOMAIN_DEF_FORMAT_MIGRATABLE case.
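A sketch of the intended condition; the VIR_DOMAIN_DEF_FORMAT_MIGRATABLE flag
comes from the message, while 'implicit' is a stand-in for however the code
marks an auto-generated DAC label:

    if (seclabel->implicit &&
        (flags & VIR_DOMAIN_DEF_FORMAT_MIGRATABLE))
        return 0;   /* keep migration XML compatible with libvirt < 0.10.0 */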
https://bugzilla.redhat.com/show_bug.cgi?id=1215833
Trying to define a pool name containing an embedded '/'
will immediately fail when trying to write the XML to disk.
This patch explicitly rejects names containing a '/'
Besides our stateful driver, there are two other storage impls:
esx and phyp. esx doesn't support pool creation, so this shouldn't
apply.
phyp does support pool creation, and the name is passed to the
'mksp' tool; google doesn't reveal whether it accepts '/'
or not. IMO the likelihood of this impacting any users is near zero.
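The check being added boils down to something like this (sketch; the exact
error code and message differ per driver):

    if (strchr(def->name, '/')) {
        virReportError(VIR_ERR_XML_ERROR,
                       _("name '%s' must not contain '/'"), def->name);
        return -1;
    }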
Trying to define a network name containing an embedded '/'
will immediately fail when trying to write the XML to disk.
This patch explicitly rejects names containing a '/'
Besides the network bridge driver, the only other network
implementation is a very thin one for virtualbox, which seems to
use the network name as a host interface name, which won't
accept '/' anyway, so I think this is fine to do unconditionally.
https://bugzilla.redhat.com/show_bug.cgi?id=787604
Trying to define a domain name containing an embedded '/'
will immediately fail when trying to write the XML to disk for
our stateful drivers. This patch explicitly rejects names
containing a '/', and provides an xmlopt feature for drivers
to avoid this validation check, which is enabled in every
non-stateful driver that already has xmlopt handling wired up.
(Technically this could reject a previously accepted vmname like
'/foo', however at least for the qemu driver that falls over
later when starting qemu)
https://bugzilla.redhat.com/show_bug.cgi?id=639923
We were lacking tests that check the completeness of our
nodedev XMLs and also whether we output properly formatted ones. This
patch adds parsing for the capability elements inside the <capability
type='pci'> element. Also, a bunch of tests are added to show that everything
works properly.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>