While trying out clang, it pointed out that we were comparing sizeof with 0
even though we wanted to check the return value of memcmp. That showed
us that the test was wrong and needs a fix as well.
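A minimal standalone illustration of the kind of mistake clang flags here
(the buffer names are made up, not the actual test code):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char expect[16] = "abc";
        char actual[16] = "abc";

        /* Broken: '== 0' binds to sizeof(expect), so memcmp compares zero
         * bytes and always returns 0; the condition can never be true. */
        if (memcmp(expect, actual, sizeof(expect) == 0))
            printf("never reached\n");

        /* Fixed: compare sizeof(expect) bytes, then test memcmp's result. */
        if (memcmp(expect, actual, sizeof(expect)) == 0)
            printf("buffers match\n");

        return 0;
    }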
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Reboot requires more sophistication and is left as a future work item --
but at least part of the plumbing is in place.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
If someone removes the blockcopy storage file while the job is still in the
mirroring phase and then requests a blockjob abort using pivot, the virsh
command freezes. This is not an issue with older qemu versions, which did
not support asynchronous jobs (which we prefer by default).
Since we have reached the mirroring phase successfully, polling the monitor
for blockjob info always returns 1 and the loop never ends.
This fix introduces a check of the qemuDomainBlockPivot return code,
skipping the asynchronous waiting completely if an error occurred even
though asynchronous waiting was the preferred method.
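The shape of the fix is roughly the following (a sketch only, not the exact
libvirt code; variable names are illustrative):

    /* Sketch: bail out of the abort path if the pivot itself failed,
     * instead of entering the polling loop below. */
    if (pivot) {
        ret = qemuDomainBlockPivot(driver, vm, device, disk);
        if (ret < 0 && async)
            goto endjob;
    }

    if (async) {
        /* poll the monitor for blockjob info; once we are in the
         * mirroring phase this keeps returning 1, so without the check
         * above the loop would never end */
    }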
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1139567
This changes the display from:
libvirt-storage: APIs for management of storages
to
libvirt-storage: APIs for management of storage pools and volumes
In making that change I expected my build tree html output to be
regenerated; however, it wasn't, because the dependency for the separated
libvirt-storage.h wasn't there. It was only present for libvirt.h.in.
So I added each header in the order displayed on the docs/html/index.html page.
Commit 86a15a25 introduced a new cpu driver API 'getModels'. The public API
allows you to pass NULL for models to get only the number of existing models.
However, the new code crashes with a segfault in that case, so we have to
account for the possibility that the user wants only the count.
There is also a difference in the order of the models gathered by this new
API: the old approach inserted the elements at the end of the array,
so we should use 'VIR_APPEND_ELEMENT'.
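In rough terms (a sketch, not the exact patch), the arch-specific
implementations now need to do something like:

    /* Honour a NULL models pointer: the caller only wants the count. */
    if (models == NULL)
        return nmodels;

    /* Preserve the original ordering by appending each name, e.g.: */
    if (VIR_STRDUP(name, candidates[i]) < 0 ||
        VIR_APPEND_ELEMENT(*models, count, name) < 0)
        goto error;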
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Reconnecting to the VM is a possibly long-running job spawned in a separate
thread. We should reload the snapshot definitions and managedsave state prior
to spawning that thread, to avoid blocking daemon startup, which would
otherwise serialize on the VM lock.
Also, the reloading code would violate the domain job held while
reconnecting, as the loader functions don't create jobs.
Coverity pointed out that in other places we always check the return
value of virJSONValueObjectGetNumberLong(), but not in the new addition
in leaseshelper. To solve the issue, and also to be more robust in case
somebody corrupts the file, skip outputting the lease entry if the expiry
time is missing.
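The resulting check is essentially of this shape (a sketch; the exact JSON
key name is an assumption here):

    long long expirytime = 0;

    /* Skip the entry instead of printing garbage if the expiry time is
     * missing or unreadable ("expiry-time" is assumed as the key name). */
    if (virJSONValueObjectGetNumberLong(lease, "expiry-time", &expirytime) < 0)
        continue;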
https://bugzilla.redhat.com/show_bug.cgi?id=1087104#c5
When trying to use an invalid offset with virStorageVolUpload(), libvirt
fails in virFDStreamOpenFileInternal(); however, libvirt does not check
that return value in storageVolUpload() and calls
virFDStreamSetInternalCloseCb() right after. But the stream does not have
privateData yet (it is NULL), so the daemon crashes:
#0 0x00007f09429a9c10 in pthread_mutex_lock () from /lib64/libpthread.so.0
#1 0x00007f094514dbf5 in virMutexLock (m=<optimized out>) at util/virthread.c:88
#2 0x00007f09451cb211 in virFDStreamSetInternalCloseCb at fdstream.c:795
#3 0x00007f092ff2c9eb in storageVolUpload at storage/storage_driver.c:2098
#4 0x00007f09451f46e0 in virStorageVolUpload at libvirt.c:14000
#5 0x00007f0945c78fa1 in remoteDispatchStorageVolUpload at remote_dispatch.h:14339
#6 remoteDispatchStorageVolUploadHelper at remote_dispatch.h:14309
#7 0x00007f094524a192 in virNetServerProgramDispatchCall at rpc/virnetserverprogram.c:437
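The fix therefore needs to check the result of opening the stream before
registering the close callback, roughly (a sketch with illustrative names
and arguments, not the exact code):

    /* Sketch: only register the close callback once the backend has
     * successfully opened the FD stream (e.g. the offset was valid). */
    if (backend->uploadVol(conn, pool, vol, stream, offset, length, flags) < 0)
        goto cleanup;

    /* safe now: the stream has its privateData set */
    virFDStreamSetInternalCloseCb(stream, closeCb, cbdata, NULL);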
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Commit 570d0f63 describes disabling negative offset usage for
vol-upload/download (i.e. cmdVolDownload and cmdVolUpload); however,
the change was only made to cmdVolDownload. There was no change to
cmdVolUpload. This patch adds the same checks for vol-upload.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1087104
Signed-off-by: Shanzhi Yu <shyu@redhat.com>
This patch enables the helper program to detect events triggered when
there is a change in lease length or expiry and client-id. This
transfers complete control of the leases database to libvirt and obsoletes
the use of the lease database file (<network-name>.leases). That file will
not be created, read, or written. This is achieved by adding the option
--leasefile-ro to dnsmasq and passing a custom env var to leaseshelper,
which helps us map events related to leases to their corresponding
network bridges, no matter what the event is.
Also, this requires the addition of a new non-lease entry in our custom
lease database: "server-duid". It is required to identify a DHCPv6
server.
Now that dnsmasq doesn't maintain its own leases database, it relies on
our helper program to tell it about previous leases and server duid.
Thus, this patch makes our leases program honor an extra action: "init",
in which it sends the known info in a particular format to dnsmasq
by printing it to stdout.
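For the "init" action the helper essentially replays what it knows onto
stdout, in a dnsmasq-style leasefile layout, roughly (an illustrative
fragment; the exact field order is simplified here):

    /* DHCPv6 needs the stored server DUID first. */
    if (server_duid)
        printf("duid %s\n", server_duid);

    /* One line per known lease: expiry, MAC, IP, hostname, client-id,
     * with '*' standing in for unknown fields. */
    printf("%lld %s %s %s %s\n", expirytime, mac, ipaddr,
           hostname ? hostname : "*", clientid ? clientid : "*");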
The drawback of this change is that upgrading to this new approach does
not transfer the existing leases for a network if the leaseshelper
was not already in use.
For Intel and PowerPC the implementation calls a cpu driver
function across driver layers (i.e. from the qemu driver directly to the cpu
driver).
The correct behavior is to use libvirt API functionality to perform such
an inter-driver call.
This patch introduces a new cpu driver API function getModels() to
retrieve the cpu models. The current implementation that processes the
cpu_map XML content is moved into the Intel and PowerPC cpu driver
specific API functions.
Additionally, processing the cpu_map XML file is not safe, because the cpu
map does not exist for all architectures. It is therefore better to
encapsulate the processing in the architecture-specific cpu drivers.
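Roughly, the driver interface grows a callback along these lines (an
approximation of the shape, not the exact declaration):

    /* Each arch-specific cpu driver fills *models (if non-NULL) and
     * returns the number of models it knows about. */
    typedef int (*cpuArchGetModels)(char ***models);

    struct cpuArchDriver {
        const char *name;
        /* ... existing callbacks ... */
        cpuArchGetModels getModels;
    };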
Signed-off-by: Daniel Hansel <daniel.hansel@linux.vnet.ibm.com>
Reviewed-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
Reviewed-by: Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com>
Based on the previous commit, we can now precreate missing volumes. While
extracting the functionality from the storage driver would be nicer, anyone
who has seen that code knows it's nearly impossible. So I'm going from the
other end:
1) For given disk target, disk path is looked up.
2) For the disk path, storage pool is looked up, a volume XML is
constructed and then passed to virStorageVolCreateXML() which has all
the knowledge how to create raw images, (encrypted) qcow(2) images,
etc.
One of the advantages of this approach is that we don't have to care about
image conversion; qemu does that for us. So, for instance, users can
transform qcow2 into raw on migration (if the correct XML is passed to
the migration API).
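For reference, the public API doing the heavy lifting on the destination is
virStorageVolCreateXML(); a minimal, self-contained usage sketch (pool name,
volume name and size are made up):

    #include <libvirt/libvirt.h>

    int precreate_volume(virConnectPtr conn)
    {
        const char *volxml =
            "<volume>"
            "  <name>migrated-disk.qcow2</name>"
            "  <capacity unit='G'>10</capacity>"
            "  <target><format type='qcow2'/></target>"
            "</volume>";
        virStoragePoolPtr pool = virStoragePoolLookupByName(conn, "default");
        virStorageVolPtr vol = NULL;
        int ret = -1;

        if (!pool)
            return -1;

        /* knows how to create raw, qcow2, encrypted images, ... */
        if ((vol = virStorageVolCreateXML(pool, volxml, 0)))
            ret = 0;

        if (vol)
            virStorageVolFree(vol);
        virStoragePoolFree(pool);
        return ret;
    }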
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Up until now, users have had to precreate non-shared storage for migration
themselves. This is not a very user-friendly requirement and we should do
something about it. In this patch, the migration cookie is extended
so that the <nbd/> section does not contain only the NBD port, but also
info on the disks being migrated. This patch sends a list of pairs of:
<disk target; disk size>
to the destination. The actual storage allocation is left for the next
commit.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
The function queries the block devices visible to qemu
('query-block') and parses qemu's output. The info is
returned in a hash table which is expected to be pre-filled by
qemuMonitorJSONGetAllBlockStatsInfo(). However, in the next patch
we are not going to call the latter function at all, so we should
make the former function add devices to the hash table if they are
not found there.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
While this could be exposed as a public API, that hasn't been done yet,
as there's no demand for it. Anyway, this is just preparing the
environment for easier volume creation on the destination.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Since virDomainSnapshotFree will call virObjectUnref anyway, let's just use
that directly so as to avoid the possibility that we inadvertently clear out
a pending error message when using the public API.
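The pattern for this conversion (and the following ones) is essentially
this (an illustrative fragment of libvirt-internal code):

    cleanup:
        /* was: virDomainSnapshotFree(snapshot);
         * The public Free entry point calls virResetLastError() first and
         * could wipe an error we still want to report. */
        virObjectUnref(snapshot);
        return ret;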
Since virInterfaceFree will call virObjectUnref anyway, let's just use that
directly so as to avoid the possibility that we inadvertently clear out
a pending error message when using the public API.
Since virNWFilterFree will call virObjectUnref anyway, let's just use that
directly so as to avoid the possibility that we inadvertently clear out
a pending error message when using the public API.
Since virSecretFree will call virObjectUnref anyway, let's just use that
directly so as to avoid the possibility that we inadvertently clear out
a pending error message when using the public API.
Since virStreamFree will call virObjectUnref anyway, let's just use that
directly so as to avoid the possibility that we inadvertently clear out
a pending error message when using the public API.
Since virStoragePoolFree will call virObjectUnref anyway, let's just use that
directly so as to avoid the possibility that we inadvertently clear out
a pending error message when using the public API.
Since virStorageVolFree will call virObjectUnref anyway, let's just use that
directly so as to avoid the possibility that we inadvertently clear out
a pending error message when using the public API.
Since virNodeDeviceFree will call virObjectUnref anyway, let's just use that
directly so as to avoid the possibility that we inadvertently clear out
a pending error message when using the public API.
Since virNetworkFree will call virObjectUnref anyway, let's just use that
directly so as to avoid the possibility that we inadvertently clear out
a pending error message when using the public API.
Since virDomainFree will call virObjectUnref anyway, let's just use that
directly so as to avoid the possibility that we inadvertently clear out
a pending error message when using the public API.
virConnect.privateData is void *, so we can't access
fields of parallelsConn, a pointer to which is stored in
virConnect.privateData. So replace all occurrences of
conn->privateData->storageState with privconn->storageState.
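In other words (an illustrative fragment):

    /* privateData is void *, so go through a typed local first. */
    parallelsConnPtr privconn = conn->privateData;

    /* was:  conn->privateData->storageState   (does not compile)
     * now:  privconn->storageState */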
Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
Partially reverts commit 5754dbd.
The code in the specfile adds a MAC address to every <bridge>,
even for <forward mode='bridge'> for which we don't support
changing MAC addresses.
Remove it completely. For new networks, we have been adding
MAC addresses on definition/creation since the commit mentioned above.
For existing networks (pre-0.9.0), the MAC is added by this commit.
https://bugzilla.redhat.com/show_bug.cgi?id=1156367
Since our big split of libvirt.c there are only a few functions
living there. The majority were moved to the corresponding subfiles,
e.g. domain functions were moved to libvirt-domain.c. However,
the patches introducing virDomainGetFSInfo() and virDomainFSInfoFree()
were posted prior to the big split and merged after it.
This resulted in two domain functions landing in the wrong file.
Move them to the correct one.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Adding a non-existent nwfilter to a network interface device that had no
nwfilter specified will crash the libvirt daemon with a segfault. The reason
is that the nwfilter is not found and libvirt then tries to restore the old
nwfilter configuration, but there is no old nwfilter specified.
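The crash can be avoided with a guard of roughly this shape (a sketch; the
helper name below is hypothetical, not libvirt's actual API):

    if (instantiate_filter(vm, newdev) < 0) {
        /* Only try to roll back to the old filter if there was one;
         * olddev->filter is NULL when no nwfilter was specified before. */
        if (olddev && olddev->filter)
            instantiate_filter(vm, olddev);
        goto cleanup;
    }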
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
The function virNetworkObjListExport() in network_conf.c had a call to
the public API virNetworkFree() which was causing a link error:
CCLD libvirt_driver_vbox_network_impl.la
./.libs/libvirt_conf.a(libvirt_conf_la-network_conf.o): In function `virNetworkObjListExport':
/home/laine/devel/libvirt/src/conf/network_conf.c:4496: undefined reference to `virNetworkFree'
This would happen when I added
#include "network_conf.h"
into domain_conf.h, then attempted to call a new function from that
file (an enum converter, similar to virNetworkForwardTypeToString()).
In the end, virNetworkFree() ends up just calling virObjectUnref(obj)
anyway (after clearing all pending errors, which we probably *don't*
want to do in the cleanup of a utility function), so this is likely
more correct than the original code as well.
Make was not able to realize the dependencies for html/*.html files when
running 'make -j9 dist'. All the files are generated together with
html/index.html, so simply separating them into another variable and
adding one block into the dependency chain solves the issue.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Since libvirt.h was split into multiple files (and similarly
docs/libvirt-libvirt.html), docs/hvsupport.html has bad hyperlinks. The
same happens for all the html.in files that used the <code class='docref'>
tag, because page.xsl has no idea where to point the link it finds.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
There is a race condition between the fopen and fscanf calls
in qemuGetProcessInfo. Even if fopen succeeds, there is a small
possibility that the file no longer exists by the time we read from it.
Now, if either the fopen or the fscanf call fails, the function behaves
just as if only fopen had failed.
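A standalone illustration of the pattern (the real code reads
/proc/<pid>/stat in qemuGetProcessInfo; the field parsed here is simplified):

    #include <stdio.h>

    int get_process_info(const char *path, unsigned long long *utime)
    {
        FILE *f = fopen(path, "r");
        /* The file may disappear between fopen and fscanf, so treat a
         * short read exactly like a failed fopen. */
        int ok = f && fscanf(f, "%llu", utime) == 1;

        if (f)
            fclose(f);
        if (!ok)
            *utime = 0;   /* same fallback as when fopen itself fails */
        return 0;
    }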
Fixes https://bugzilla.redhat.com/show_bug.cgi?id=1169055
Signed-off-by: Eric Blake <eblake@redhat.com>
Commit id 'cb88d433' refactored the calling sequence to use a thread;
however, in doing so it "lost" the check for whether virNetSocketAccept
returns failure. Since other code makes that check, Coverity complains.
Although this is a false positive, adding back the failure check pacifies
Coverity.
Commit id '0d36a5d05' modified the code slightly, but removed the
return value check, thus causing Coverity to complain that this call
was the only one where the return value wasn't checked. Since nothing
was done previously if there was a failure, just use ignore_value here
to pacify Coverity.
Coverity complains that many other callers that get err from
virGetLastError() check that err is not NULL before dereferencing
it. Just do the same here for safety.
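The defensive pattern in question, using the public error API (the logging
macro is shown only for illustration):

    virErrorPtr err = virGetLastError();

    /* err may be NULL if no error has been raised yet. */
    if (err && err->message)
        VIR_WARN("operation failed: %s", err->message);
    else
        VIR_WARN("operation failed");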
Coverity complained that because the cfg->macFilter call checked
net->ifname != NULL before calling ebtablesRemoveForwardAllowIn, then
the virNetDevOpenvswitchRemovePort call should have the same check.
However, if I move the ebtables call prior to the check for TYPE_DIRECT
(where there is a VIR_FREE(net->ifname)), then it seems Coverity is
happy. Since firewall info is tacked on last during setup, removing
it in the opposite order of initialization seems natural anyway.
https://bugzilla.redhat.com/show_bug.cgi?id=1159180
The virStoragePoolSourceFindDuplicate function only checks the incoming
definition against pools of the same type as the def; however, for
"scsi_host" and "fc_host" adapter pools, it's possible that some "scsi_host"
pool adapter definition is already using the scsi_hostN that the "fc_host"
adapter definition wants to use, or that some "fc_host" pool adapter
definition is using a vHBA scsi_hostN or parent scsi_hostN that an incoming
"scsi_host" definition is trying to use.
This patch adds the mismatched-type checks and adds extra comments
to describe what each check is determining.
This patch also modifies the documentation to describe which scsi_hostN
devices a "scsi_host" source adapter should use and which to avoid. It also
updates the parent definition to specifically call out that for mixed
environments it's better to define which parent to use, so that the duplicate
pool checks can be done properly.
https://bugzilla.redhat.com/show_bug.cgi?id=1159180
Move the API from the backend to storage_conf and rename it to
virStoragePoolGetVhbaSCSIHostParent. A future patch will need to
use this functionality from storage_conf.
There are some small issues in qemuProcessAttach:
1. Fix virSecurityManagerGetProcessLabel always getting pid = 0: move
'vm->pid = pid' before the call to virSecurityManagerGetProcessLabel.
2. Use virSecurityManagerGenLabel to get the image label.
3. Fix always setting the SELinux label even when another security driver
is in use.
Signed-off-by: Luyao Huang <lhuang@redhat.com>
When a block{commit,copy} job was aborted on a domain, the block job handler
did not process it correctly, leaving a phantom job in the background.
Any further call to any blockjob caused a "block <jobtype> still active"
error. This patch fixes the blockjob handler so that it checks not only
for the VIR_DOMAIN_BLOCK_JOB_FAILED status, but for the
VIR_DOMAIN_BLOCK_JOB_CANCELED status as well, followed by our existing
cleanup routine.
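In essence the handler changes to something of this shape (a sketch, not
the exact code):

    if (status == VIR_DOMAIN_BLOCK_JOB_FAILED ||
        status == VIR_DOMAIN_BLOCK_JOB_CANCELED) {
        /* run the existing cleanup so no phantom job is left behind */
    }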
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1135169
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>