Add test cases for address conflicts between disks and hostdevs that
use drive addresses.
Signed-off-by: Marc Hartmayer <mhartmay@linux.vnet.ibm.com>
Reviewed-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
Don't use duplicate disk addresses in test cases unless the duplication
is the point of the test; at the very least, such test cases will break
once we add a check for uniqueness of addresses at domain definition
time.
Signed-off-by: Marc Hartmayer <mhartmay@linux.vnet.ibm.com>
If libvirtd is running unprivileged, it can open a device's PCI config
data in sysfs, but can only read the first 64 bytes. But as part of
determining whether a device is Express or legacy PCI,
qemuDomainDeviceCalculatePCIConnectFlags() will be updated in a future
patch to call virPCIDeviceIsPCIExpress(), which tries to read beyond
the first 64 bytes of the PCI config data and logs an error if the
read fails.
In order to avoid creating a parallel "quiet" version of
virPCIDeviceIsPCIExpress(), this patch passes a virQEMUDriverPtr down
through all the call chains that initialize the
qemuDomainFillDevicePCIConnectFlagsIterData, and saves the driver
pointer with the rest of the iterdata so that it can be used by
qemuDomainDeviceCalculatePCIConnectFlags(). This pointer isn't used
yet, but will be used in an upcoming patch (that detects Express vs
legacy PCI for VFIO assigned devices) to examine driver->privileged.
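A minimal sketch of the resulting iterdata (only the driver field is
the point here; the other members are illustrative, not the exact
struct layout):

    typedef struct {
        virQEMUDriverPtr driver;   /* saved so the flag calculation can
                                      later check driver->privileged */
        virQEMUCapsPtr qemuCaps;              /* illustrative */
        virDomainPCIConnectFlags pcieFlags;   /* illustrative */
    } qemuDomainFillDevicePCIConnectFlagsIterData;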
QEMU 2.8.0 adds support for unavailable-features in the
query-cpu-definitions reply. The unavailable-features array lists CPU
features which prevent a corresponding CPU model from being usable on
the current host. The model can only be used when all of the listed
features are disabled; an empty array means the CPU model can be used
without modifications.
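For illustration, a reply could look like this (heavily shortened; the
specific feature names are illustrative):

    { "return": [
        { "name": "Skylake-Client", "static": false,
          "unavailable-features": [] },
        { "name": "phenom", "static": false,
          "unavailable-features": [ "3dnow", "sse4a" ] }
    ] }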
We can use unavailable-features for providing CPU model usability info
in domain capabilities XML:
<domainCapabilities>
  ...
  <cpu>
    <mode name='host-passthrough' supported='yes'/>
    <mode name='host-model' supported='yes'>
      <model fallback='allow'>Skylake-Client</model>
      ...
    </mode>
    <mode name='custom' supported='yes'>
      <model usable='yes'>qemu64</model>
      <model usable='yes'>qemu32</model>
      <model usable='no'>phenom</model>
      <model usable='yes'>pentium3</model>
      <model usable='yes'>pentium2</model>
      <model usable='yes'>pentium</model>
      <model usable='yes'>n270</model>
      <model usable='yes'>kvm64</model>
      <model usable='yes'>kvm32</model>
      <model usable='yes'>coreduo</model>
      <model usable='yes'>core2duo</model>
      <model usable='no'>athlon</model>
      <model usable='yes'>Westmere</model>
      <model usable='yes'>Skylake-Client</model>
      ...
    </mode>
  </cpu>
  ...
</domainCapabilities>
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
"host" CPU model is supported by a special host-passthrough CPU mode and
users is not allowed to specify this model directly with custom mode.
Thus we should not advertise "host" CPU model in domain capabilities.
This worked well on architectures for which libvirt provides a list of
supported CPU models in cpu_map.xml (since "host" is not in the list).
But we need to explicitly filter "host" model out for all other
architectures.
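The filtering itself can be as simple as this sketch (the surrounding
variable names are illustrative):

    for (i = 0; i < nmodels; i++) {
        /* "host" is only usable via host-passthrough, never as a
         * custom model, so don't advertise it */
        if (STREQ(models[i], "host"))
            continue;
        /* ...copy models[i] into the advertised list... */
    }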
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
CPU models (and especially some additional details which we will start
probing for later) differ depending on the accelerator. Thus we need to
call query-cpu-definitions in both KVM and TCG mode to get all data we
want.
Tests in tests/domaincapstest.c are temporarily switched to TCG to avoid
having to squash even more stuff into this single patch. They will all
be switched back later in separate commits.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
CPU-related capabilities may differ depending on the accelerator used
when probing. Let's use KVM if available when probing QEMU and fall
back to TCG. The created capabilities already contain all we need to
distinguish whether KVM or TCG was used (see the sketch after the
list):
- KVM was used when probing capabilities:
      QEMU_CAPS_KVM is set
      QEMU_CAPS_ENABLE_KVM is not set
- TCG was used and QEMU supports KVM, but it failed (e.g., missing
  kernel module or wrong /dev/kvm permissions):
      QEMU_CAPS_KVM is not set
      QEMU_CAPS_ENABLE_KVM is set
- KVM was not used and QEMU does not support it:
      QEMU_CAPS_KVM is not set
      QEMU_CAPS_ENABLE_KVM is not set
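A consumer can distinguish the three cases along these lines (a
minimal sketch using the existing virQEMUCapsGet() accessor):

    if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM)) {
        /* capabilities were probed with KVM */
    } else if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_ENABLE_KVM)) {
        /* QEMU supports KVM, but enabling it failed; TCG was used */
    } else {
        /* QEMU does not support KVM at all */
    }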
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
When starting QEMU more than once during a single probing process, the
qemucapsprobe utility would save the QMP greeting several times, which
doesn't play well with our test monitor.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Let's set QEMU_CAPS_KVM and QEMU_CAPS_ENABLE_KVM early so that the rest
of the probing code can use these capabilities to handle KVM/TCG replies
differently.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
We have a couple of functions that operate over NULL-terminated lists
of strings. However, our naming sucks:
virStringJoin
virStringFreeList
virStringFreeListCount
virStringArrayHasString
virStringGetFirstWithPrefix
We can do better:
virStringListJoin
virStringListFree
virStringListFreeCount
virStringListHasString
virStringListGetFirstWithPrefix
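With the new names, a caller looks roughly like this (a minimal
sketch; exact signatures elided):

    char **tokens = virStringSplit("a,b,c", ",", 0);
    if (tokens && virStringListHasString((const char **)tokens, "b")) {
        /* "b" is present in the list */
    }
    virStringListFree(tokens);  /* frees the strings and the array */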
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
With the QEMU components in place, provide the XML parsing to
invoke that code when given the following XML snippet:
<hostdev mode='subsystem' type='scsi_host'>
  <source protocol='vhost' wwpn='naa.501234567890abcd'/>
</hostdev>
An optional address element can be specified within the hostdev
(pick CCW or PCI as necessary):
<address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0625'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Add basic vhost-scsi tests, cloned from hostdev-scsi-virtio-scsi, in
both xml2argv and xml2xml. Tests are added for both vhost-scsi-ccw and
vhost-scsi-pci since the syntaxes differ slightly between them.
Also adjust the docs to describe the changes.
Signed-off-by: Eric Farman <farman@linux.vnet.ibm.com>
Reviewed-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
For a new hostdev type='scsi_host' we have a number of
required functions for managing, adding, and removing the
host device to/from guests. Provide the basic infrastructure
for these tasks.
The name "SCSIVHost" (and its variants) is chosen to avoid
conflicts with existing code named "SCSIHost" to refer to
a hostdev type='scsi' protcol='none'.
Signed-off-by: Eric Farman <farman@linux.vnet.ibm.com>
We already have a "scsi" hostdev subsys type, which refers to a single
LUN that is passed through to a guest. But what of things where
multiple LUNs are passed through via a single SCSI HBA, such as with
the vhost-scsi target? Create a new hostdev subsys type that will
carry this.
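A minimal sketch of the new subsystem type in the hostdev enum (its
placement among the existing values is illustrative):

    typedef enum {
        VIR_DOMAIN_HOSTDEV_SUBSYS_TYPE_USB,
        VIR_DOMAIN_HOSTDEV_SUBSYS_TYPE_PCI,
        VIR_DOMAIN_HOSTDEV_SUBSYS_TYPE_SCSI,
        VIR_DOMAIN_HOSTDEV_SUBSYS_TYPE_SCSI_HOST, /* new: a whole SCSI
                                                     HBA, e.g. a
                                                     vhost-scsi target */
        VIR_DOMAIN_HOSTDEV_SUBSYS_TYPE_LAST
    } virDomainHostdevSubsysType;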
Signed-off-by: Eric Farman <farman@linux.vnet.ibm.com>
Add detection of the vhost-scsi capability in QEMU,
so it's in place for our checks later.
Signed-off-by: Eric Farman <farman@linux.vnet.ibm.com>
Reviewed-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
macOS doesn't support clock_gettime(2), at least in versions prior to
10.12 (I didn't actually check 10.12 though). So, use Mach's own clock
routines in eventtest.
* configure.ac: check for the required symbols and define
HAVE_MACH_CLOCK_ROUTINES if found
* tests/eventtest.c: add a clock_get_time() based implementation
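The Mach-based fallback can look roughly like this (a sketch of the
approach, not the exact patch; the function name is illustrative):

    #include <time.h>
    #include <mach/clock.h>
    #include <mach/mach.h>

    static int
    getCurrentTimeMach(struct timespec *ts)
    {
        clock_serv_t clk;
        mach_timespec_t mts;

        /* obtain a clock service and read the current time from it */
        if (host_get_clock_service(mach_host_self(),
                                   CALENDAR_CLOCK, &clk) != KERN_SUCCESS)
            return -1;
        clock_get_time(clk, &mts);
        mach_port_deallocate(mach_task_self(), clk);

        ts->tv_sec = mts.tv_sec;
        ts->tv_nsec = mts.tv_nsec;
        return 0;
    }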
Commit 3f71c79768 added a 'qemu_id' field to track the id of the cpu
as reported by query-cpus. The patch did not include changes necessary
to propagate the id through the functions matching the data to the
libvirt cpu structures, and thus all vcpus had id 0.
The field is named 'enable_id' in other structures, and a recent patch
added 'qemu_id', which has different semantics. To avoid confusion in
the tests, rename the field.
* virNetDevTapCreateInBridgePort() mock: free '*ifname' before
strdupping a hardcoded value to it
* testCompareXMLToArgvFiles(): unref 'conn' object in cleanup
* testCompareXMLToArgvHelper(): free 'ldargs' and 'dmargs' in
cleanup
Guest CPU definitions with mode='custom' and missing <vendor> are
expected to run on a host CPU from any vendor as long as the required
CPU model can be used as a guest CPU on the host. But even though no CPU
vendor was explicitly requested, we would sometimes force one due to a
bug in virCPUUpdate and virCPUTranslate.
The bug would effectively forbid cross vendor migrations even if they
were previously working just fine.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The PPC driver needs to convert POWERx_v* legacy CPU model names into
POWERx (e.g., POWER8_v1.0 into POWER8) to maintain backward
compatibility with existing domains. This patch adds a new step into
the guest CPU configuration work flow which CPU drivers can use to
convert legacy CPU definitions.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The API is no longer used anywhere else since it was replaced by a much
saner work flow utilizing new APIs that work on virCPUDefPtr directly:
virCPUCompare, virCPUUpdate, and virCPUTranslate.
Not testing the new work flow caused some bugs to be hidden. This
patch reveals them, but doesn't attempt to fix them. To make sure all
tests still pass after this patch, all affected test results are
modified to pretend the tests succeeded. All of the bugs will be fixed
in the following commits and the artificial modifications will be
reverted.
The following is the list of bugs in the new CPU model work flow:
- a guest CPU with mode='custom' and missing <vendor/> gets the vendor
  copied from host's CPU (the vendor should only be copied to
  host-model CPUs):
    DO_TEST_UPDATE("x86", "host", "min", VIR_CPU_COMPARE_IDENTICAL)
    DO_TEST_UPDATE("x86", "host", "pentium3", VIR_CPU_COMPARE_IDENTICAL)
    DO_TEST_GUESTCPU("x86", "host-better", "pentium3", NULL, 0)
- when a guest CPU with mode='custom' needs to be translated into
  another model because the original model is not supported by a
  hypervisor, the result will have its vendor set to the vendor of the
  original CPU model as specified in cpu_map.xml even if the original
  guest CPU XML didn't contain <vendor/>:
    DO_TEST_GUESTCPU("x86", "host", "guest", model486, 0)
    DO_TEST_GUESTCPU("x86", "host", "guest", models, 0)
    DO_TEST_GUESTCPU("x86", "host-Haswell-noTSX", "Haswell-noTSX",
                     haswell, 0)
- legacy POWERx_v* model names are not recognized:
    DO_TEST_GUESTCPU("ppc64", "host", "guest-legacy", ppc_models, 0)
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
In some cases the preferred model doesn't really do anything since the
result remains the same even if it is removed.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Using a preferred model for guest CPUs with forbidden fallback masks a
bug in the code. It would just happily use another CPU model supported
by a hypervisor even though it is explicitly forbidden in the CPU XML.
This patch temporarily changes the expected result to -2, which is
used when the result XML file cannot be found (in these cases it is
expected not to be found since the tested API should have failed). The
result will be switched back to -1 a few commits later, when the
original bug gets fixed.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Using a preferred CPU model which is not in the list of CPU models
supported by a hypervisor does not make sense.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Guest CPUs with match='minimum' should always be updated to match host
CPU model. Trying to get different results by supplying preferred models
does not make sense.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
For machinetypes with a pci-root bus (i.e., all legacy PCI
machinetypes), libvirt will make a "fake" reservation for one extra
slot prior to assigning addresses to unaddressed PCI endpoint devices
in the domain. This triggers auto-adding of a pci-bridge if the final
device to be assigned an address would otherwise have been the last
device on the last available pci-bridge; it thus assures that there
will always be at least one slot left open in the domain's bus
topology for expansion. This is important both for hotplug (since a
new pci-bridge can't be added while the guest is running) and for
offline additions to the config (since adding a new device might
otherwise in some cases require re-addressing existing devices, which
we want to avoid).
It's important to note that in the above case (legacy PCI) we must
check for the special case of all slots on all buses being occupied
*prior to assigning any addresses*, and avoid attempting to reserve
the extra address in that case, because there is no free address in
the existing topology and thus no place to auto-add a pci-bridge for
expansion (i.e., the reservation would always fail anyway). Since that
condition can only be reached by manual intervention, this is
acceptable.
For machinetypes with pcie-root (Q35, aarch64 virt), libvirt's
methodology for automatically expanding the bus topology is different
- pcie-root-ports are plugged into slots (soon to be functions) of
pcie-root as needed, and the new endpoint devices are assigned to the
single slot in each pcie-root-port. This is done so that the devices
are, by default, hotpluggable (the slots of pcie-root don't support
hotplug, but the single slot of the pcie-root-port does). Since
pcie-root-ports can only be plugged into pcie-root, and we don't
auto-assign endpoint devices to the pcie-root slots, this means
topology expansion doesn't compete with endpoint devices for slots, so
we don't need to worry about checking for all "useful" slots being
free *prior* to assigning addresses to new endpoint devices - as a
matter of fact, if we attempt to reserve the open slots before the
used slots, it can lead to errors.
Instead this patch just reserves one slot for a "future potential"
PCIe device after doing the assignment for actual devices, but only
if the only PCI controller defined prior to starting address
assignment was pcie-root, and only if we auto-added at least one PCI
controller during address assignment. This assures two things:
1) that reserving the open slots will only be done when the domain is
   initially defined, never at any time after, and
2) that if the user understands enough about PCI controllers to be
   adding them manually, we don't mess up their plan by adding extras -
   if they know enough to add one pcie-root-port, or to manually assign
   addresses such that no pcie-root-ports are needed, they know enough
   to add extra pcie-root-ports if they want them (this could be called
   the "libguestfs clause", since libguestfs needs to be able to create
   domains with as few devices/controllers as possible).
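In pseudocode, the condition is roughly this (all names here are
illustrative, not libvirt's actual ones):

    /* only pcie-root existed before address assignment, and we
     * auto-added at least one controller while assigning addresses,
     * so this is an initial definition relying on auto-assignment */
    if (hadOnlyPCIeRoot && nControllersAutoAdded > 0)
        reserveNextAvailablePCIeSlot(addrs);  /* hypothetical helper */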
This is set to reserve a single free port for now, but the count could
be increased in the future if public sentiment goes in that direction
(it's easy to increase later, but essentially impossible to decrease).