Splitting the comparison into a separate function makes the code cleaner
and easier to update in the future.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Rather than returning a "char *" pointing to some NUL-terminated set of
characters, alter the function to return 0 or -1 for success/failure and
add two parameters to return the buffer and its size.
The function no longer encodes the returned secret; rather, it returns
the unencoded secret, forcing callers to make the necessary adjustments.
Alter the callers to handle the adjusted model.
Signed-off-by: John Ferlan <jferlan@redhat.com>
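A hedged sketch of the shape of this change; the function and parameter
names below are illustrative, not the actual libvirt symbols:

    /* Before: the secret comes back as a "string", so its real length
     * is lost and embedded NUL bytes are impossible. */
    char *getSecretString(virSecretPtr secret);

    /* After: 0/-1 status plus explicit buffer and size out-parameters,
     * which also lets callers tell a failed lookup from an empty secret. */
    int getSecretString(virSecretPtr secret,
                        uint8_t **secret_ret, size_t *secret_len);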
Call the internal driver callbacks rather than the public APIs to avoid
unnecessarily invoking the error dispatching code, and don't overwrite
the error messages provided by the APIs. They are good enough to
describe which secret is missing, either by UUID or by usage (basically
the name).
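A minimal sketch of the difference, assuming the usual libvirt
driver-table layout (exact field names may differ):

    /* Public API: goes through the error dispatching code, and the
     * caller used to overwrite its message afterwards. */
    secret = virSecretLookupByUUID(conn, uuid);

    /* Internal callback: no extra dispatching; the driver's own
     * "secret not found" message (by UUID or usage) stays intact. */
    secret = conn->secretDriver->secretLookupByUUID(conn, uuid);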
For a few cases where we handle secret information, it's good to clear
the buffers containing sensitive data before freeing them.
Introduce VIR_DISPOSE, VIR_DISPOSE_N and VIR_DISPOSE_STRING, which allow
simple clearing of the buffers holding sensitive information on cleanup
paths.
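A minimal sketch of the clear-then-free idea behind these macros, not
the actual implementation:

    #include <stdlib.h>
    #include <string.h>

    /* Zero 'count' bytes of sensitive data, then free and NULL the
     * pointer. A production version must keep the compiler from
     * optimizing the memset away (e.g. via a volatile function pointer). */
    #define DISPOSE_N(ptr, count)          \
        do {                               \
            if (ptr)                       \
                memset(ptr, 0, count);     \
            free(ptr);                     \
            (ptr) = NULL;                  \
        } while (0)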
When -cpu host is supported by a QEMU binary, a user can use
<cpu mode='host-passthrough'/> in domain XML even when libvirtd failed
to find a matching model for the host CPU. Let's make it obvious by
advertising the <cpuselection/> guest capability whenever -cpu host is
supported.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
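With this change, capabilities output (e.g. from 'virsh capabilities')
would be expected to carry the flag among the guest features, roughly
like this:

    <guest>
      ...
      <features>
        <cpuselection/>
      </features>
    </guest>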
When probing the <emulator> with '-help' to determine if
it is the old qemu, errors are reported if the emulator
doesn't exist:
libvirt: error : internal error: Child process
(/usr/lib/xen/bin/qemu-dm -help) unexpected exit status 127:
libvirt: error : cannot execute binary /usr/lib/xen/bin/qemu-dm:
No such file or directory
Avoid the probe if the specified emulator doesn't exist,
squelching the error. There is no behavior change since
libxlDomainGetEmulatorType() would return
LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN if the probe failed
via virCommandRun().
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
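A hedged sketch of the guard, assuming libvirt's virFileExists() helper
and a simplified surrounding function:

    /* Don't probe a nonexistent binary; fall back to the same default
     * that a failed virCommandRun() probe produced. */
    if (!virFileExists(def->emulator))
        return LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN;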
Move some parts of virStorageFileRemoveLastPathComponent
into a separate function so they can be reused.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Further followup discussions on the list about commit 192a53e concluded
that we should leave out the USB controller only for i440fx machines,
as the default USB controller can be used by someone on q35 at
arbitrary slots.
Signed-off-by: Shivaprasad G Bhat <sbhat@linux.vnet.ibm.com>
Move filling out the default video (v)ram to DeviceDefPostParse.
This means it can be removed from virDomainVideoDefParseXML
and qemuParseCommandLine. Also, we no longer need to special case
VIR_DOMAIN_VIRT_XEN, since the per-driver callback gets called
before the generic one.
Commit 6879be48 moved adding of an implicit video device after XML
parsing. As a result, libxlDomainDeviceDefPostParse() is no longer
called to set the default vram when adding an implicit device.
Commit 6879be48 assumes virDomainVideoDefaultRAM() will set the
default vram, but it returns 0 if the domain virtType is
VIR_DOMAIN_VIRT_XEN. Attempting to start an HVM domain with vram=0
results in:
error: unsupported configuration: videoram must be at least 4MB for CIRRUS
The default vram setting for Xen HVM domains depends on the device
model used (qemu-xen vs qemu-traditional), hence setting the
default is deferred to libxlDomainDeviceDefPostParse().
Call the device post-parse callback for the implicit video device as
well, so the default vram gets filled out even for VIR_DOMAIN_VIRT_XEN.
https://bugzilla.redhat.com/show_bug.cgi?id=1334557
Most-of-commit-message-by: Jim Fehlig <jfehlig@suse.com>
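A hedged sketch of the deferred default in
libxlDomainDeviceDefPostParse; usingQemuXen is a hypothetical flag for
the qemu-xen vs qemu-traditional choice, and the memory amounts are
illustrative, not libxl's real defaults:

    if (dev->type == VIR_DOMAIN_DEVICE_VIDEO &&
        dev->data.video->vram == 0) {
        /* Only the libxl driver knows which device model will run, so
         * only it can pick a sensible vram default (values in KiB). */
        dev->data.video->vram = usingQemuXen ? 16 * 1024 : 8 * 1024;
    }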
Both virGetLastError and virGetLastErrorMessage call the
virLastErrorObject method, which returns a thread-local error object.
However, if a direct call to malloc or pthread_setspecific (probably
also due to malloc, since it sets ENOMEM) fails, virLastErrorObject
returns NULL, which virGetLastError incorrectly interprets as no error,
still requiring the caller to check for a NULL pointer.
virGetLastErrorMessage also treated this incorrectly as no error, but
returned the literal "no error" string.
This patch tweaks the checks in virGetLastErrorMessage so that if
virLastErrorObject fails and returns NULL, "unknown error" is returned
instead.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
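A minimal sketch of the tweaked function, simplified from the real code:

    const char *
    virGetLastErrorMessage(void)
    {
        virErrorPtr err = virLastErrorObject();

        if (!err)                    /* allocation failed: not "no error" */
            return "unknown error";
        if (err->code == VIR_ERR_OK)
            return "no error";
        return err->message ? err->message : "unknown error";
    }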
Commit id 'df1011ca8' modified virStorageBackendDiskDeleteVol to use
"dmsetup remove --force" to remove the volume, but left things in an
inconsistent state since the partition still existed on the disk and
only the device mapper device (/dev/dm-#) was removed.
Prior to commit '1895b421' (or '1ffd82bb' and '471e1c4e'), this could
go unnoticed since virStorageBackendDiskRefreshPool wasn't called.
However, the pool would be unusable since the /dev/dm-# device would
be removed even though the partition was not removed unless a multipathd
restart reset the link. That would of course make the volume appear again
in the pool after a refresh or pool start after libvirt reload.
This patch removes the 'dmsetup' logic and re-implements the partition
deletion logic for device mapper devices. The removal of the partition
via 'parted rm --script #' will cause udev device change logic to allow
multipathd to handle removing the dm-* device associated with the partition.
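A hedged sketch of the replacement invocation, assuming libvirt's
virCommand helpers; PARTED, srcPath and partNumStr are illustrative
names:

    /* "parted <src> rm --script <partnum>": udev then sees the partition
     * change and multipathd removes the dm-* device on its own. */
    cmd = virCommandNewArgList(PARTED, srcPath, "rm", "--script",
                               partNumStr, NULL);
    ret = virCommandRun(cmd, NULL);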
https://bugzilla.redhat.com/show_bug.cgi?id=1265694
Commit id '020135dc' didn't quite get the algorithm correct when a
device mapper source ended with a non-numeric value (e.g. ends with
an alphabetic character).
This patch modifies the 'part_separator' logic to add the "p" separator
to the attempted target path name only when specified as part_separator='yes'.
For a source name that already ends with a number, the logic doesn't change
as the part separator would need to be there.
For a source name that ends with something other than a number, this allows
the possibility that a "p" separator can be added. The default for one of
these source devices is to not add the separator.
The key for device mapper and the need for a partition separator "p" is
whether the last character of the device name link in /dev/mapper is
a number. A name such as "/dev/mapper/mpatha1" would generate
a "/dev/mapper/mpatha1p1" partition, while "/dev/mapper/mpatha" would
generate partition "/dev/mapper/mpatha1". Similarly for a device
mapper entry not using friendly names or an alias, a device such as
"/dev/mapper/3600a0b80005b10ca00005ad656fd8d93" would generate a
partition "/dev/mapper/3600a0b80005b10ca00005ad656fd8d93p1", while
a device such as "/dev/mapper/3600a0b80005b10ca00005e115729093f" would
generate a partition "/dev/mapper/3600a0b80005b10ca00005e115729093f1".
The long number is the WWID of the device. It's also possible to assign
an alias for a device mapper entry; that alias follows the same rules
with respect to ending with a number or not when adding a "p" to create
the target device path.
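A self-contained sketch of the resulting decision; the helper name is
illustrative:

    #include <ctype.h>
    #include <stdbool.h>
    #include <string.h>

    /* Decide whether "p" must sit between the device mapper name and
     * the partition number when building the target path. */
    static bool
    needsPartSeparator(const char *devname, bool partSeparatorYes)
    {
        size_t len = strlen(devname);

        /* A trailing digit always requires "p" (mpatha1 -> mpatha1p1); */
        if (len > 0 && isdigit((unsigned char)devname[len - 1]))
            return true;

        /* otherwise add it only when part_separator='yes' was given. */
        return partSeparatorYes;
    }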
Prior to calling the 'refreshPool' during CreatePool or UploadPool
operations, we need to clear the pool; otherwise, the pool will
have duplicated entries.
https://bugzilla.redhat.com/show_bug.cgi?id=1318993
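A hedged sketch of the ordering, assuming the virStoragePoolObjClearVols
helper and that era's refreshPool signature:

    /* Drop the stale in-memory volume list first so refreshPool()
     * repopulates it from scratch instead of appending duplicates. */
    virStoragePoolObjClearVols(pool);
    if (backend->refreshPool(conn, pool) < 0)
        goto cleanup;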
Commit id 'dd519a294' caused a regression cloning a volume into a
logical pool by removing just the 'allocation' adjustment during
storageVolCreateXMLFrom. Combined with the change to not require the
new volume input XML to have a capacity listed (commit id 'e3f1d2a8'),
this left the possibility that a zero allocation value (e.g., not
provided) would create a thin/sparse logical volume. When a thin LV
becomes fully populated, LVM sets the partition 'inactive' and the
subsequent fdatasync() fails.
Add a new 'has_allocation' flag to be set at XML parse time to indicate
that allocation was provided. This is done so that, if allocation is
not provided, the create-from code uses the capacity value, since we
document that an omitted allocation means the volume will be fully
allocated at creation time.
For a logical backend, that creation time is 'createVol', while for a
file backend, creation doesn't set the size, but the 'createRaw' called
during buildVolFrom will decide whether the file is sparse or not based
on the provided capacity and allocation value.
For volume clones that provide different allocation and capacity values
to allow for sparse files, there is no change.
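A hedged sketch of the create-from fallback; the field names follow the
description above, with exact placement simplified:

    /* The input XML omitted <allocation>: honor the documented default
     * of a fully allocated volume, i.e. allocation == capacity. */
    if (!newvol->target.has_allocation)
        newvol->target.allocation = newvol->target.capacity;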
Usage of this keyword in front of a function declaration that is
exported via a header file is unnecessary, since internally, this has
been the default for most compilers for quite some time.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
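The keyword isn't named in this message, but for functions declared in
headers the redundant specifier is presumably extern, since external
linkage is already the default; the two declarations below are
equivalent (function name hypothetical):

    extern int virFooBar(int x);   /* 'extern' is already implied here... */
    int virFooBar(int x);          /* ...so this declares the same thing. */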
libvirt may automatically add a pci-root or pcie-root controller to a
domain, depending on the arch/machinetype, and it hopefully always
makes the right decision about which to add (since in all cases these
controllers are an implicit part of the virtual machine).
But it's always possible that someone will create a config that
explicitly supplies the wrong type of PCI controller for the selected
machinetype. In the past that would lead to an error later when
libvirt was trying to assign addresses to other devices, for example:
XML error: PCI bus is not compatible with the device at
0000:00:02.0. Device requires a PCI Express slot, which is not
provided by bus 0000:00
(that's the error message that appears if you replace the pcie-root
controller in a Q35 domain with a pci-root controller).
This patch adds a check at the same place that the implicit
controllers are added (to ensure that the same logic is used to check
which type of pci root is correct). If a pci controller with index='0'
is already present, we verify that it is of the model that we would
have otherwise added automatically; if not, an error is logged:
The PCI controller with index='0' must be model='pcie-root' for
this machine type, but model='pci-root' was found instead.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1004602
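For illustration, the only index='0' PCI controller the new check
accepts on a Q35 machine is:

    <controller type='pci' index='0' model='pcie-root'/>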
Similar to "support Xen migration stream V2 in save/restore",
add support for indicating the migration stream version in
the migration code. To accomplish this, add a minimal migration
cookie in the libxl driver that is passed between source and
destination hosts. Initially, the cookie is only used in
the Begin and Prepare phases of migration to communicate the
version of the migration stream produced by the source.
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
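A hedged sketch of what such a minimal cookie might carry; the struct
and field names are illustrative, not the driver's actual definition:

    /* Marshalled and exchanged in the Begin/Prepare handshake so the
     * destination learns the source's stream format up front. */
    typedef struct _libxlMigrationCookie {
        uint32_t xenMigStreamVer;   /* 1 = legacy stream, 2 = migration V2 */
    } libxlMigrationCookie;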
Xen 4.6 introduced a new migration stream commonly referred to as
"migration V2". Xen 4.6 and newer always produce this new stream,
whereas Xen 4.5 and older always produce the legacy stream.
Support for migration stream V2 can be detected at build time with
LIBXL_HAVE_SRM_V2 from libxl.h. The legacy and V2 streams are not
compatible, but a V2 host can accept and convert a legacy stream.
Commit e7440656 changed the libxl driver to use the lowest libxl
API version possible (version 0x040200) to ensure the driver
builds against older Xen releases. The old 4.2 restore API does
not support specifying a stream version and assumes a legacy
stream, even if the incoming stream is migration V2. Thinking it
has been given a legacy stream, libxl will fail to convert an
incoming stream that is already V2, which causes the entire
restore operation to fail. Xen's libvirt-related OSSTest has been
failing since commit e7440656 landed in libvirt.git master. One
of the more recent failures can be seen here:
http://lists.xenproject.org/archives/html/xen-devel/2016-05/msg00071.html
This patch changes the call to libxl_domain_create_restore() to
include the stream version if LIBXL_HAVE_SRM_V2 is defined. The
version field of the libxlSavefileHeader struct is also updated
to '2' when LIBXL_HAVE_SRM_V2 is defined, ensuring the stream
version in the header matches the actual stream version produced
by Xen. Along with bumping the libxl API requirement to 0x040400,
this patch fixes save/restore on a migration V2 Xen host.
Oddly, migration has never used the libxlSavefileHeader. It
handles passing configuration in the Begin and Prepare phases,
and then calls libxl directly to transfer domain state/memory
in the Perform phase. A subsequent patch will add stream
version handling in the Begin and Prepare phase handshaking,
which will fix the migration related OSSTest failures.
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
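A hedged sketch of the guarded call, following the 0x040400 API shape;
variable names are illustrative:

    libxl_domain_restore_params params;
    libxl_domain_restore_params_init(&params);

    #ifdef LIBXL_HAVE_SRM_V2
        /* Tell libxl which format the incoming fd carries so a legacy
         * (v1) stream is converted rather than misparsed as v2. */
        params.stream_version = (hdr.version == 2) ? 2 : 1;
    #endif

    ret = libxl_domain_create_restore(ctx, &d_config, &domid, restore_fd,
                                      &params, NULL, &aop_console_how);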
In LIBXL_API_VERSION 0x040400, the libxl_domain_create_restore API
gained a parameter for specifying restore parameters. Switch to
using version 0x040400, which will be useful in a subsequent commit
to specify the Xen migration stream version when restoring.
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Remove the possibility that a NULL hostdev->privateData or
disk->privateData could crash libvirtd by checking for NULL
before dereferencing it to reach the secinfo structure in the
qemuDomainSecret{Disk|Hostdev}Destroy functions. The hostdevPriv
could be NULL if qemuProcessNetworkPrepareDevices adds a new
hostdev during virDomainNetGetActualHostdev that then gets
inserted via virDomainHostdevInsert. The hostdevPriv was added
by commit id '27726d8' and is currently only used by scsi hostdev.
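A hedged sketch of the disk variant of the guard, simplified; the
hostdev variant is analogous:

    static void
    qemuDomainSecretDiskDestroy(virDomainDiskDefPtr disk)
    {
        qemuDomainDiskPrivatePtr diskPriv = QEMU_DOMAIN_DISK_PRIVATE(disk);

        /* privateData can legitimately be NULL for a device inserted at
         * process startup; don't dereference blindly to reach secinfo. */
        if (!diskPriv || !diskPriv->secinfo)
            return;

        /* ... clear and free the secret info as before ... */
    }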
SRIOV VFs used in macvtap passthrough mode can take advantage of the
SRIOV card's transparent vlan tagging. All the code was there to set
the vlan tag, and it has been used for SRIOV VFs assigned to hostdev
interfaces for several years, but for some reason the vlan tag for
macvtap passthrough devices was stubbed out with a -1.
This patch moves a bit of common validation down to a lower level
(virNetDevReplaceNetConfig()) so it is shared by hostdev and macvtap
modes, and updates the macvtap caller to actually send the vlan config
instead of -1.
Once we're able to list and identify all clients connected to a specific
server, we can then support force-closing a connection. This patch introduces
a simple API calling virNetServerClientClose on a specific client, which
can later be extended easily, e.g. by sending an event once the client
is disconnected successfully.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
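A hedged sketch of such an entry point; the wrapper name is
illustrative, and the real API plumbs this through the admin protocol:

    /* Force-disconnect a single client; a follow-up could emit an event
     * once the client is fully disconnected. */
    int
    adminClientForceClose(virNetServerClientPtr client)
    {
        virNetServerClientClose(client);
        return 0;
    }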