by adding them to their own list, saving the nodes where they are not
allowed, and returning those in list ('wantarray') context so we don't
break existing callers that don't expect it.
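The 'wantarray' pattern described above, as a minimal self-contained
sketch (names are illustrative, not the actual code):

    use strict;
    use warnings;

    my $mapping = {
        nodes => {
            node1 => { allowed => 1 },
            node2 => { allowed => 0 },
        },
    };

    # list-context callers additionally get the disallowed nodes;
    # scalar-context callers keep the old single return value
    sub check_node_mapping {
        my ($map) = @_;
        my @not_allowed_nodes;
        for my $node (sort keys %{$map->{nodes} // {}}) {
            push @not_allowed_nodes, $node
                if !$map->{nodes}->{$node}->{allowed};
        }
        return wantarray ? ($map, \@not_allowed_nodes) : $map;
    }

    my $map = check_node_mapping($mapping);              # old callers
    my ($map2, $blocked) = check_node_mapping($mapping); # new callers
    print "not allowed on: @$blocked\n";                 # node2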
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-By: Markus Frank <m.frank@proxmox.com>
this patch allows configuring pci devices that are mapped via cluster
resource mapping when the user has 'Resource.Use' on the ACL path
'/mapping/pci/{ID}' (in addition to the usual required vm config
privileges)
When given multiple mappings in the config, we use them as alternatives
for the passthrough and select the first free one on startup (see the
sketch after the list below).
We use our regular PCI reservation mechanism for regular devices and
introduce a selection mechanism for mediated devices.
A few changes to the inner workings were required to make this work well:
* parse_hostpci now returns a different structure where we have a list
of lists (first level is for the different alternatives and second
level is for the different devices that should be passed through
together)
* factor out the 'parse_hostpci_devices' which parses each device from
the config and does some precondition checks
* reserve_pci_usage now behaves slightly differently when trying to
reserve a device that is already reserved for the same VMID, since to
check which alternative we can use, we already must reserve one (this
means that qm showcmd can actually reserve devices, albeit only for up
to 10 seconds)
* configuring a mediated device on a multifunction device is not
supported anymore and results in a failure to start (previously, it
just chose the first device to do it). This is a breaking change
* configuring a single pci device twice on different hostpci slots now
fails during commandline generation instead of on QEMU start, so we had
to adapt one test where this occurred (it could never have worked
anyway)
* we now check permissions during clone/restore, meaning raw/real
devices can only be cloned/restored by root@pam from now on.
This is a breaking change.
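As a rough, self-contained sketch of the structure and the startup
selection described above (shapes and the reservation stub are
assumptions, not the actual implementation):

    use strict;
    use warnings;

    # assumed shape: outer list = alternatives, inner list = devices
    # that get passed through together
    my $hostpci = {
        ids => [
            [ { id => '0000:01:00.0' }, { id => '0000:01:00.1' } ],
            [ { id => '0000:02:00.0' } ],
        ],
    };

    # stand-in for the real reservation call, which dies when a
    # device is already taken by another VM
    sub reserve_pci_usage { my ($devs, $vmid, $timeout) = @_; return 1 }

    my $vmid = 100;
    my $chosen;
    for my $devices (@{$hostpci->{ids}}) {
        if (eval { reserve_pci_usage($devices, $vmid, 10); 1 }) {
            $chosen = $devices; # first free alternative wins
            last;
        }
    }
    die "no free PCI device alternative for VM $vmid\n" if !$chosen;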
Fixes #3574: Improve SR-IOV usability
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-By: Markus Frank <m.frank@proxmox.com>
this patch allows configuring usb devices that are mapped via
cluster resource mapping when the user has 'Mapping.Use' on the ACL
path '/mapping/usb/{ID}' (in addition to the usual required vm config
privileges)
for now, this is only valid if there is exactly one mapping for the
host, since we don't track passed through usb devices yet
This now also checks permissions on clone/restore, meaning a
'non-mapped' device can only be cloned/restored as root@pam user.
That is a breaking change.
Refactor the checks for restoring into a sub, so we have a central
place where we can add such checks
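A hedged sketch of what such a central restore check could look like
(the sub name and the mapping detection are illustrative):

    use strict;
    use warnings;

    my sub check_usb_restore_perm {
        my ($authuser, $value) = @_;
        # mapped devices carry a mapping reference; everything else
        # (raw host devices) stays restricted to root@pam
        return if $value =~ m/mapping=/;
        die "only root\@pam may restore non-mapped USB devices\n"
            if $authuser ne 'root@pam';
    }

    check_usb_restore_perm('root@pam', 'host=046d:c52b'); # passes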
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-By: Markus Frank <m.frank@proxmox.com>
similar to how we handle the PCI module and format. This makes the
'verify_usb_device' method and format unnecessary since
we simply check the format with a regex.
while doing this, I noticed that we don't correctly check for the
case-insensitive variant of 'spice' during hotplug, so fix that too
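For illustration, the case-insensitive 'spice' comparison together with
a simplified regex-based format check (the real format pattern is more
involved):

    use strict;
    use warnings;

    # simplified sketch, not the real USB format definition
    my $usb_host_re = qr/^(?:0x)?[0-9a-f]{4}:(?:0x)?[0-9a-f]{4}$/i;

    for my $host ('SPICE', '046d:c52b') {
        if ($host =~ m/^spice$/i) { # case-insensitive, as fixed above
            print "$host: spice usb redirection\n";
        } elsif ($host =~ $usb_host_re) {
            print "$host: vendor:product passthrough\n";
        }
    }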
With this we can also remove some parameters from the get_usb_devices
and get_usb_controllers functions
while we're at it, refactor the permission checks for the usb config
too and use the new 'my sub' style for the functions
also make print_usbdevice_full parse the device itself, so we don't
have to do it in multiple places (especially in places where it's not
obvious that this is needed)
No functional change intended
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-By: Markus Frank <m.frank@proxmox.com>
Using the word 'agent' is highly confusing here, as there is no QMP
agent, and it wrongly suggests that the value is related to the guest
agent[0].
[0]: https://forum.proxmox.com/threads/123590/post-537716
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
This was not only rather inefficient (getting the config from the
archive twice) but also wrong: since we can override options on
restore, we can do the check only once the backed-up config and the
override config have been merged.
If this is too late from the POV of volume deletion or the like, then
the issue is that those things happen too early, as we can only know
what to do with the actual target config; destructive actions that
happen before that are wrong by design.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
for convenience. These options do not influence the QEMU instance
directly, but are only used for migration, so no need to keep them in
pending.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
instead use a recent example that served as a workaround in #4625.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
like the deprecation message printed by QEMU suggests.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
like the deprecation message printed by QEMU suggests.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Commit 7246e8f9 ("Set zero $size and continue if volume_resize()
returns false") mentions that this is needed for "some storages with
backing block devices to do online resize" and since this patch came
together [0] with pve-storage commit a4aee43 ("Fix RBD resize with
krbd option enabled."), it's safe to assume that RBD with krbd is
meant. But it should be the same situation for any external plugin
relying on the same behavior.
Other storages backed by block devices like LVM(-thin) and ZFS return
1 and the new size respectively, and the code is older than the above
mentioned commits. So really, the RBD plugin should just have returned
a positive value to be in line with those, and there should be no need
to pass 0 to the block_resize QMP command either.
Actually, it's a hack, because the block_resize QMP command does not
actually do special handling for the value 0. It's just that in the
case of a block device, QEMU won't try to resize it (and not fail for
shrinkage). But the size in the raw driver's BlockDriverState is
temporarily set to 0 (which is not nice) until the sector count is
refreshed, at which point raw_co_getlength is called; it queries the
new size and sets it in the raw driver's BlockDriverState again as a
side effect. It's not known to cause any issues, but bdrv_getlength is
a coroutine wrapper starting from QEMU 8.0.0, and it's just better to
avoid setting a completely wrong value even temporarily. Just pass the
actually requested size, as is done for LVM(-thin) and ZFS.
[0]: https://lists.proxmox.com/pipermail/pve-devel/2017-January/025060.html
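The change boils down to something like this (mon_cmd as qemu-server's
usual QMP helper; variables illustrative):

    # pass the actually requested size in bytes instead of 0
    PVE::QemuServer::Monitor::mon_cmd(
        $vmid, 'block_resize', device => $deviceid, size => int($size));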
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Handle and warn about network interfaces which are not attached to
any bridge because the user actively removed them from the VM config.
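A minimal sketch of such a check (parse_net as in PVE::QemuServer; the
warning text is illustrative):

    my $net = PVE::QemuServer::parse_net($conf->{$opt});
    warn "interface '$opt' is not attached to any bridge\n"
        if $net && !defined($net->{bridge});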
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
With the recent pve-storage commit d70d814 ("api: fix get content call response
type for RBD/ZFS/iSCSI volumes"), the volume_size_info call for RBD in
list context is much slower than before (from a quick test, about twice as long
without snapshots, even longer with snapshots; untested, but when using an
external cluster with images not having the fast-diff feature, it should be
worse still) and thus increases the likelihood of running into timeouts here.
None of the callers here actually need the more expensive call, so just
avoid calling it in list context.
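For illustration, the cheap scalar-context call versus the more
expensive list-context variant:

    # scalar context: just the size, avoiding the slow path
    my $size = PVE::Storage::volume_size_info($storecfg, $volid, 5);

    # list context: triggers the more expensive code path
    # my ($size, $format, $used, $parent) =
    #     PVE::Storage::volume_size_info($storecfg, $volid, 5);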
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
in some nvidia grid drivers (e.g. 14.4 and 15.x), their kernel module
tries to clean up the mdev device when the vm is shut down, and if it
cannot do that (e.g. because we already cleaned it up), their removal
process aborts with an error such that the vgpu still exists inside
their book-keeping, but can't be used/recreated/freed until a reboot.
since there seems to be no obvious way to detect if that's the case
besides either parsing dmesg (which is racy) or the nvidia kernel
module version (which I'd rather not do), we simply test the pci device
vendor for nvidia and add a 10s sleep. that should give the driver
enough time to clean up, and we will then not find the path anymore and
skip the cleanup.
This way, it works with both the newer and older versions of the driver
(some of the older drivers are LTS releases, so they're still
supported).
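A self-contained sketch of the vendor test and sleep (sysfs paths and
the example address are illustrative):

    use strict;
    use warnings;

    my $pciid = '0000:01:00.0'; # example address
    my $vendor = '';
    if (open(my $fh, '<', "/sys/bus/pci/devices/$pciid/vendor")) {
        $vendor = <$fh> // '';
        chomp $vendor;
        close($fh);
    }
    if ($vendor eq '0x10de') { # NVIDIA's PCI vendor ID
        sleep 10; # give the grid driver time to clean up the vgpu
    }
    # afterwards, only remove the mdev if its sysfs path still exists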
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
instead of using the mdev uuid. The nvidia driver does not actually care
that it's the same as the mdev, and in qemu the uuid parameter
overwrites the smbios1 uuid internally, so we should have been reusing
that in the first place.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Respecting bandwidth limit for offline clone was implemented by commit
56d16f16 ("fix #4249: make image clone or conversion respect bandwidth
limit"). It's still not respected for EFI disks, but those are small,
so just ignore it.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Previously, cloning a stopped VM didn't respect bwlimit. Passing the -r
(ratelimit) parameter to qemu-img convert fixes this issue.
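Sketched, assuming $bwlimit is in KiB/s as elsewhere in PVE (paths and
formats illustrative; run_command from PVE::Tools):

    use PVE::Tools qw(run_command);

    my ($src_path, $dst_path) = ('src.raw', 'dst.raw'); # placeholders
    my ($src_fmt, $dst_fmt) = ('raw', 'raw');
    my $bwlimit = 51200; # KiB/s

    my @cmd = ('/usr/bin/qemu-img', 'convert', '-p',
        '-f', $src_fmt, '-O', $dst_fmt);
    push @cmd, '-r', "${bwlimit}K" if defined($bwlimit); # the fix
    push @cmd, $src_path, $dst_path;
    run_command(\@cmd);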
Signed-off-by: Leo Nunner <l.nunner@proxmox.com>
[ T: reword subject line slightly ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Avoid pretending that an MTU change on an existing network device gets
applied live to a running VM.
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
[ T: reworded and expanded commit message slightly ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Required because there's a single EFI image for ARM, and the 2m/4m
difference doesn't seem to apply.
Signed-off-by: Matthias Heiserer <m.heiserer@proxmox.com>
[ T: move description to format and reword subject ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
AFAICT, previously, errors from swtpm would not show up in any logs,
because they were just printed to the stderr of the daemonized
invocation here.
The 'truncate' option is not used, so that the log is not immediately
lost when a new instance is started. This increases the chance that
the relevant errors are still present when requesting the log from a
user.
Log level 1 contains the most relevant errors and seems to be quiet
for working-as-expected invocations. Log level 2 already includes
logging full TPM commands, some of which are 1024 bytes long. Thus,
log level 1 was chosen.
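The resulting invocation looks roughly like this (log path
illustrative):

    # level=1: errors only; omitting 'truncate' preserves the log of
    # the previous instance
    my $log = "/run/qemu-server/$vmid-swtpm.log";
    push @$swtpm_cmd, '--log', "file=$log,level=1";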
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The guest will be running, so it's misleading to fail the start task
here. Also ensures that we clean up the hibernation state upon resume
even if there is an error here, which did not happen previously[0].
[0]: https://forum.proxmox.com/threads/123159/
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The target of the drive-mirror operation is opened with (essentially)
the same flags as the source in QEMU, in particular whether io_uring
should be used is inherited.
But io_uring currently causes problems in combination with certain
storage types, sometimes even leading to crashes (LVM with Linux 6.1).
Just disallow live cloning of drives when the source uses io_uring and
the target storage is not ready for it. There is one exception, namely
when source and target storage are the same. In that case, just assume
it will keep working for the target.
Migration does not seem to be affected, because there, the target VM
opens the images with the checked aio setting and then NBD exports of
those are used as the targets for mirroring.
It can be that the default determined for the source is not what's
actually used, because after a drive-mirror to a storage with a
different default, it will still use the default from the old storage.
Unfortunately, aio doesn't seem to be part of the 'query-block' QMP
command's result, so just tolerate this edge case.
The check can be removed if either
1. drive-mirror learns to open the target with a different aio setting
or more ideally
2. there are no more bad storages for io_uring.
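Until then, the check amounts to something like the following (helper
and variable names are hypothetical):

    if ($src_aio eq 'io_uring'
        && $src_storeid ne $dst_storeid
        && !$dst_storage_supports_io_uring) {
        die "drive-mirror to this target storage is not supported with"
            ." io_uring - stop the VM or change the drive's aio setting\n";
    }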
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Previously, changing aio would be applied to the configuration, but
the drive would still be using the old setting.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
In the web UI, this was fixed years ago by pve-manager commit c11c4a40
("fix #1631: change units to binary prefix").
Quickly checked with the 'query-memory-size-summary' QMP command that
this is actually the case.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The error messages for missing OVMF_CODE and OVMF_VARS files were
inconsistent, and the error for the missing base var file did not tell
you the expected path.
Signed-off-by: Noel Ullreich <n.ullreich@proxmox.com>
The 'nbd-server-add' QMP command has been deprecated since QEMU 5.2 in
favor of a more general 'block-export-add'.
When using 'nbd-server-add', QEMU internally converts the parameters
and calls blk_exp_add() which is also used by 'block-export-add'. It
does one more thing, namely calling nbd_export_set_on_eject_blk() to
auto-remove the export from the server when the backing drive goes
away. But that behavior is not needed in our case; stopping the NBD
server removes the exports anyway.
It was checked with a debugger that the parameters to blk_exp_add()
are still the same after this change. Well, the block node names are
autogenerated and not consistent across invocations.
The alternative to using 'query-block' would be specifying a
predictable 'node-name' for our '-drive' commandline. It's not that
difficult for this use case, but in general one needs to be careful
(e.g. it can't be specified for an empty CD drive, but would need to
be set when inserting a CD later). Querying the actual 'node-name'
seemed a bit more future-proof.
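A sketch of the new call, with fields named per QEMU's QMP schema and
$node_name obtained via 'query-block' beforehand:

    use JSON; # for JSON::true

    mon_cmd($vmid, 'block-export-add',
        id => "drive-$opt",
        type => 'nbd',
        'node-name' => $node_name,
        name => "drive-$opt", # NBD export name
        writable => JSON::true,
    );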
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
when a vm is configured to use a physical CD-ROM drive but there is no
such drive, a cryptic "uninitialized value" error is thrown. this is
due to `$path` being undefined in `sub print_drive_commandline_full`.
warn that no CD-ROM drive is available instead.
note that the error was cosmetic, as the vm would start just fine.
forum thread: https://forum.proxmox.com/threads/119592/
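The fix amounts to something like this inside
print_drive_commandline_full (wording illustrative):

    if (drive_is_cdrom($drive) && !defined($path)) {
        warn "no physical CD/DVD drive available\n";
    }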
Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>
Since we can now differentiate between 'suspended' and 'suspending',
it is possible to ignore the 'suspended' lock when destroying a VM.
It shouldn't matter whether the VM is locked because of hibernation
when you want to remove it. Therefore we can safely ignore the lock.
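Conceptually, the destroy path now skips the lock check for exactly
this lock value (a sketch, not the literal code):

    my $conf = PVE::QemuConfig->load_config($vmid);
    # a hibernated ('suspended') VM may be destroyed; any other lock
    # still blocks the operation
    PVE::QemuConfig->check_lock($conf)
        if ($conf->{lock} // '') ne 'suspended';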
When $d->{'pci_bridge'}->{devices} is undef, @-dereferencing it will
die with:
> Can't use an undefined value as an ARRAY reference
This can happen (at least) when the VM is in 'prelaunch' state. The
QAPI definition for '@PciBridgeInfo' also declares the 'devices'
member as optional.
Before commit 721624b ("collect device list for nested pci-bridges"),
there was no issue, because $d->{'pci_bridge'}->{devices} was used in a
foreach, so it was auto-vivified if undef.
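The fix is a defined-or default before dereferencing, roughly:

    my $bridge_devices = $d->{'pci_bridge'}->{devices} // [];
    for my $dev (@$bridge_devices) {
        # collect nested devices as before
    }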
Fixes: f721624b ("collect device list for nested pci-bridges")
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
We can now do a few things that would not really be possible, or would
at least mess with readability, when this was still mostly inline in
config2command; this shaves off quite a few lines of code.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
in preparation for reworking the new separate method for OVMF cmd
assembly, do this in a separate, very targeted commit to make it
clearer that the next reworking commit doesn't mess with our tests at
all.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
the fix for the recently introduced requirement of loading the VM config while
migrating was incomplete, since the vmlist node value could already be out of
date by the time load_config is called.
extend the fallback behaviour even further, by doing the following sequence:
- try regular load_config (likely case, rename already fully processed)
- if it fails, get the node from the vmlist and retry load_config with
that
- if that fails too, invalidate the PVE::Cluster cache and retry the
regular load_config
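Sketched with the usual PVE helpers (error handling simplified):

    my $conf = eval { PVE::QemuConfig->load_config($vmid) };
    if (!$conf) {
        # rename may not be fully processed yet - try the vmlist node
        my $node = PVE::Cluster::get_vmlist()->{ids}->{$vmid}->{node};
        $conf = eval { PVE::QemuConfig->load_config($vmid, $node) }
            if $node;
    }
    if (!$conf) {
        PVE::Cluster::cfs_update(); # invalidate the cluster cache
        $conf = PVE::QemuConfig->load_config($vmid);
    }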
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>