Commit 7246e8f9 ("Set zero $size and continue if volume_resize()
returns false") mentions that this is needed for "some storages with
backing block devices to do online resize" and since this patch came
together [0] with pve-storage commit a4aee43 ("Fix RBD resize with
krbd option enabled."), it's safe to assume that RBD with krbd is
meant. But it should be the same situation for any external plugin
relying on the same behavior.
Other storages backed by block devices like LVM(-thin) and ZFS return
1 and the new size respectively, and that code is older than the
above-mentioned commits. So really, the RBD plugin should just have
returned a positive value to be in line with those, and there should be
no need to pass 0 to the block_resize QMP command either.
Actually, it's a hack, because the block_resize QMP command does not
do any special handling for the value 0. It's just that in the case of
a block device, QEMU won't try to resize it (and won't fail for
shrinkage). But the size in the raw driver's BlockDriverState is
temporarily set to 0 (which is not nice) until the sector count is
refreshed, at which point raw_co_getlength is called, which queries the
new size and sets the size in the raw driver's BlockDriverState again
as a side effect. But bdrv_getlength is a coroutine wrapper starting
from QEMU 8.0.0, and it's just better to avoid setting a completely
wrong value even temporarily. Just pass the actually requested size, as
is done for LVM(-thin) and ZFS.
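For illustration, a minimal Python sketch (the actual implementation is
Perl in qemu-server) of issuing block_resize over the VM's QMP socket
with the real requested size instead of 0; the socket path and drive id
are just examples:

    import json, socket

    def qmp_command(sock_path, cmd, **arguments):
        # Minimal QMP client: read greeting, negotiate capabilities, send one
        # command and return its JSON-decoded reply. No event handling.
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(sock_path)
        f = s.makefile('rw')
        f.readline()                                       # greeting banner
        f.write(json.dumps({'execute': 'qmp_capabilities'}) + '\n')
        f.flush()
        f.readline()                                       # capabilities reply
        f.write(json.dumps({'execute': cmd, 'arguments': arguments}) + '\n')
        f.flush()
        return json.loads(f.readline())

    # Grow drive-scsi0 to the actually requested size in bytes, not to 0.
    new_size = 32 * 1024**3                                # e.g. 32 GiB
    print(qmp_command('/var/run/qemu-server/100.qmp', 'block_resize',
                      device='drive-scsi0', size=new_size))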
Since this patch was originally written, Friedrich managed to find
that this actually can cause real issues:
1. Start VM with an RBD image without krbd
2. Change storage config to use krbd
3. Resize disk
Likely, this is because the disk is resized via the storage layer and
the QMP command resizing it to "0" happens simultaneously. The exact reason
was not yet determined, but the issue is gone in Proxmox VE 8 and
re-appears after reverting this patch.
Long-term, it makes sense to not rely on the storage flag, but to look
at how the disk is actually attached in QEMU to decide how to do the resize.
[0]: https://lists.proxmox.com/pipermail/pve-devel/2017-January/025060.html
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
(cherry picked from commit 2e4357c537287edd47d6031fec8bffc7b0ce2425)
[FE: mention actual issue in commit message]
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
these are backed up directly with proxmox-backup-client, and the invocation was
lacking the key parameters.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
(cherry picked from commit fbd3dde73543e7715ca323bebea539db1a95d480)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
PVE::Storage::path() neither activates the storage of the passed-in volume, nor
does it ensure that the returned value is actually a file or block device, so
this actually fixes two issues. PVE::Storage::abs_filesystem_path() actually
takes care of both, while still calling path() under the hood (since $volid
here is always a proper volid, unless we change the cicustom schema at some
point in the future).
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
(cherry picked from commit 9946d6fa576cc33ab979005c79d692a0724a60e1)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This can seemingly take a bit longer than expected, and it is better to
wait a bit longer than to error out during migration.
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
(cherry picked from commit 6cb2338f5338b47b960b71e0bcd1dd08ca5b8054)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Since commit 2dc0eb61 ("qm: assume correct VNC setup in 'vncproxy',
disallow passwordless"), 'qm vncproxy' will just fail when the
LC_PVE_TICKET environment variable is not set. Since it is not only
required in combination with websocket, drop that conditional.
For the non-serial case, this was the last remaining effect of the
'websocket' parameter, so update the parameter description.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
(cherry picked from commit 62c190492154d932c27ace030c0e84eda5f81a3f)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Since commit 3e7567e0 ("do not use novnc wsproxy"), the websocket
upgrade is done via the HTTP server.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
(cherry picked from commit 876d993886c1d674fc004c8bf1895316dc5d4a94)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The default VM startup timeout is `max(30, VM memory in GiB)` seconds.
Multiple reports in the forum [0] [1] and the bug tracker [2] suggest
this is too short when using PCI passthrough with a large amount of VM
memory, since QEMU needs to map the whole memory during startup (see
comment #2 in [2]). As a result, VM startup fails with "got timeout".
To work around this, set a larger default timeout if at least one PCI
device is passed through. The question remains how to choose an
appropriate timeout. Users reported the following startup times:
ref | RAM | time | ratio (s/GiB)
---------------------------------
[1] | 60G | 135s | 2.25
[1] | 70G | 157s | 2.24
[1] | 80G | 277s | 3.46
[2] | 65G | 213s | 3.28
[2] | 96G | >290s | >3.02
The data does not really indicate any simple (e.g. linear)
relationship between RAM and startup time (even data from the same
source). However, to keep the heuristic simple, assume linear growth
and multiply the default timeout by 4 if at least one `hostpci[n]`
option is present, obtaining `4 * max(30, VM memory in GiB)`. This
covers all cases above, and should still leave some headroom.
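A minimal sketch of the heuristic, in Python for illustration (the
actual implementation is Perl in qemu-server, and the function name
here is made up):

    def vm_start_timeout(memory_mib, has_hostpci):
        # Default: max(30, VM memory in GiB) seconds; quadruple it when at
        # least one hostpci[n] option is present in the VM config.
        timeout = max(30, memory_mib // 1024)
        if has_hostpci:
            timeout *= 4
        return timeout

    assert vm_start_timeout(4 * 1024, False) == 30    # small VM, no passthrough
    assert vm_start_timeout(64 * 1024, True) == 256   # 64 GiB with passthrough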
[0]: https://forum.proxmox.com/threads/83765/post-552071
[1]: https://forum.proxmox.com/threads/126398/post-592826
[2]: https://bugzilla.proxmox.com/show_bug.cgi?id=3502
Suggested-by: Fiona Ebner <f.ebner@proxmox.com>
Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
(cherry picked from commit 95f1de689e3c898382f8fcc721b024718a0c910a)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
10 minutes is not long enough when disks are large and/or network
storages are used and preallocation is not disabled. The default is
metadata preallocation for qcow2, so there are still reports of the
issue [0][1]. If allocation really does not finish, as the comment
describing the timeout feared, just let the user cancel it.
Also note that when restoring a PBS backup, there is no timeout for
disk allocation, and there don't seem to be any user complaints yet.
The 5 second timeout for receiving the config from vma is kept,
because certain corruptions in the VMA header can lead to the
operation hanging there.
There is no need for the $tmp variable before setting back the old
timeout, because that is at least one second, so we'll always be able
to set the $oldtimeout variable to undef in time in practice.
Currently, there shouldn't even be an outer timeout in the first
place, because the only call path leading to here is via the create
API (also used by qmrestore), both of which don't set a timeout.
[0]: https://forum.proxmox.com/threads/126825/
[1]: https://forum.proxmox.com/threads/128093/
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
(cherry picked from commit 853757ccec20d5e84d6a1cc656a66beaf3d3e94c)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
When a template with disks on LVM is cloned to another node, the
volumes are first activated, then cloned and deactivated again after
cloning.
However, if clones of this template are now created in parallel to
other nodes, it can happen that one of the tasks can no longer
deactivate the logical volume because it is still in use. The reason
for this is that we use a shared lock.
Since the failed deactivation does not necessarily have consequences,
we downgrade the error to a warning, which means that the clone tasks
will continue to be completed successfully.
Signed-off-by: Hannes Duerr <h.duerr@proxmox.com>
Tested-by: Friedrich Weber <f.weber@proxmox.com>
(cherry picked from commit 9d6126e8db7234c3e357d165ae0fd7c0d9648229)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
During migration, the volume names may change if the name is already in
use at the target location. We therefore want to save the original names
so that we can deactivate the original volumes afterwards.
Signed-off-by: Hannes Duerr <h.duerr@proxmox.com>
(cherry picked from commit 629923d025ded237459ec4a29daabedb445edc28)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The vCPUs are passed as devices with a specific id only when CPU
hot-plug is enabled at cold start.
So, we can't enable/disable allow-hotplug online, as the vCPU hotplug
API would then throw errors about not finding the core id.
Not enforcing this could also lead to migration failure, as the QEMU
command line for the target VM could be made different from the one it
was actually running with, causing a crash of the target, as Fiona
observed [0].
[0]: https://lists.proxmox.com/pipermail/pve-devel/2023-October/059434.html
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
[ TL: Reflowed & expanded commit message ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
(cherry picked from commit d797bb62c3f77d5795d5eefec3b01ffb39af09ec)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Skip the software TPM startup when starting a template VM for performing
a backup. This fixes an error that occurs when the TPM state disk is
write-protected.
Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
(cherry picked from commit a55d0f71b2bc5484611312167384539a387e9c69)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Trying to regenerate a cloudinit drive as a non-root user via the API
currently throws a Perl error, as reported in the forum [1]. This is
due to a type mismatch in the permission check, where a string is
passed but an array is expected.
[1] https://forum.proxmox.com/threads/regenerate-cloudinit-by-put-api-return-500.124099/
Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
(cherry picked from commit dafabbd01fa2612b7d28a7bc5c6bf876259309f8)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The QMP command needs to be issued for the device where the disk is
currently attached, not for the device where the disk was attached at
the time the snapshot was taken.
Fixes the following scenario with a disk image for which
do_snapshots_with_qemu() is true (i.e. qcow2 or RBD+krbd=0):
1. Take snapshot while disk image is attached to a given bus+ID.
2. Detach disk image.
3. Attach disk image to a different bus+ID.
4. Remove snapshot.
Previously, this would result in an error like:
> blockdev-snapshot-delete-internal-sync' failed - Cannot find device=drive-scsi1 nor node_name=drive-scsi1
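A minimal sketch of the fixed behavior, assuming the disk now sits on
virtio0 while the snapshot was taken on scsi1 (drive ids and snapshot
name are examples; command and parameter names are from the QMP schema):

    import json

    # Issue the delete for the drive the disk is attached to *now*, not for
    # the drive recorded at the time the snapshot was taken.
    current_drive = 'drive-virtio0'
    cmd = {
        'execute': 'blockdev-snapshot-delete-internal-sync',
        'arguments': {'device': current_drive, 'name': 'snap1'},
    }
    print(json.dumps(cmd))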
While the $running parameter for volume_snapshot_delete() is planned
to be removed on the next storage plugin APIAGE reset, it currently
causes an immediate return in Storage/Plugin.pm. So passing a truthy
value would prevent removing a snapshot from an unused qcow2 disk that
was still used at the time the snapshot was taken. Thus, and because
some exotic third party plugin might be using it for whatever reason,
it's necessary to keep passing the same value as before.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
(cherry picked from commit c60586838a4bfbd4f879c509195cbfb1291db443)
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
[ TL: add cherry-pick reference ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
(cherry picked from commit 9c6eabf028d8fd79460de4123956ea9960ee21ae)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
which is the case for passed-through disks. The qemu_img_format()
function cannot correctly handle those, and it's not safe to assume
they are raw (it's most likely, but not guaranteed), so just use the
storage method for the format, as was done before commit
efa3aa24 ("avoid list context for volume_size_info calls"). This will
use 'qemu-img info' to get the actual format.
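A minimal sketch of determining the format via 'qemu-img info' (the
path is just an example for a passed-through block device):

    import json, subprocess

    def detect_format(path):
        # Ask qemu-img for the actual image format instead of guessing it
        # from the volume name, which has no format suffix for block devices.
        out = subprocess.run(['qemu-img', 'info', '--output=json', path],
                             check=True, capture_output=True, text=True).stdout
        return json.loads(out)['format']

    print(detect_format('/dev/disk/by-id/ata-EXAMPLE-DISK'))   # e.g. 'raw'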
Reported in the community forum:
https://forum.proxmox.com/threads/124794/
https://forum.proxmox.com/threads/124823/
https://forum.proxmox.com/threads/124818/
Fixes: efa3aa24 ("avoid list context for volume_size_info calls")
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Tested-by: Friedrich Weber <f.weber@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
With the recent pve-storage commit d70d814 ("api: fix get content call response
type for RBD/ZFS/iSCSI volumes"), the volume_size_info call for RBD in
list context is much slower than before (from a quick test, about twice as long
without snapshots and even longer with snapshots; untested, but when using an
external cluster with an image not having the fast-diff feature, it should be
worse still), and thus increases the likelihood of running into timeouts here.
None of the callers here actually need the more expensive call, so just
avoid calling in list context.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
While, usually, the slot should match the ID, it's not explicitly
guaranteed and relies on QEMU internals. Using the numerical ID is
more future-proof and more consistent with plugging, where no slot
information (except the maximum limit) is relied upon.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Currently, qemu_dimm_list() can return any kind of memory device.
Make it more generic, with an optional device type parameter.
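A minimal sketch of such a generic listing, using the shape of QEMU's
'query-memory-devices' QMP reply (the example entries and sizes are
made up):

    import json

    # 'query-memory-devices' returns entries like {'type': ..., 'data': {...}}.
    devices = [
        {'type': 'dimm', 'data': {'id': 'dimm0', 'size': 1073741824}},
        {'type': 'virtio-mem-pci', 'data': {'id': 'vmem0', 'size': 2147483648}},
    ]

    def list_memory_devices(devices, dev_type=None):
        # With no type given, return everything; otherwise filter by type.
        return [d for d in devices if dev_type is None or d['type'] == dev_type]

    print(json.dumps(list_memory_devices(devices, 'dimm'), indent=2))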
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
In some NVIDIA GRID drivers (e.g. 14.4 and 15.x), the kernel module
tries to clean up the mdev device when the VM is shut down, and if it
cannot do that (e.g. because we already cleaned it up), the removal
process aborts with an error such that the vGPU still exists inside
their book-keeping, but can't be used/recreated/freed until a reboot.
Since there seems to be no obvious way to detect whether that's the case
besides either parsing dmesg (which is racy) or the NVIDIA kernel module
version (which I'd rather not do), we simply test the PCI device vendor
for NVIDIA and add a 10s sleep. That should give the driver enough time
to clean up; we will then no longer find the path and will skip the
cleanup ourselves.
This way, it works with both the newer and older versions of the driver
(some of the older drivers are LTS releases, so they're still
supported).
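A minimal Python sketch of the idea (the actual implementation is Perl
in qemu-server; the paths and helper are illustrative, 0x10de is the
NVIDIA PCI vendor id):

    import time
    from pathlib import Path

    NVIDIA_VENDOR_ID = '0x10de'

    def cleanup_mdev(pci_addr, mdev_uuid):
        # Give the NVIDIA driver time to clean up the vGPU itself; only remove
        # the mdev ourselves if its sysfs path still exists afterwards.
        vendor = Path(f'/sys/bus/pci/devices/{pci_addr}/vendor').read_text().strip()
        if vendor == NVIDIA_VENDOR_ID:
            time.sleep(10)
        mdev_path = Path(f'/sys/bus/mdev/devices/{mdev_uuid}')
        if mdev_path.exists():
            (mdev_path / 'remove').write_text('1')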
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
instead of using the mdev uuid. The NVIDIA driver does not actually care
that it's the same as the mdev's, and in QEMU the uuid parameter
overwrites the smbios1 uuid internally, so we should have been reusing
that in the first place.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Respecting bandwidth limit for offline clone was implemented by commit
56d16f16 ("fix #4249: make image clone or conversion respect bandwidth
limit"). It's still not respected for EFI disks, but those are small,
so just ignore it.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Previously, cloning a stopped VM didn't respect bwlimit. Passing the -r
(ratelimit) parameter to qemu-img convert fixes this issue.
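A minimal sketch of such an invocation (paths are examples; the -r
value is assumed here to be a rate limit in bytes per second):

    import subprocess

    def convert_with_bwlimit(src, dst, out_fmt, bwlimit_bytes_per_s):
        # Offline copy that respects a bandwidth limit via qemu-img's -r option.
        cmd = ['qemu-img', 'convert', '-p',
               '-r', str(bwlimit_bytes_per_s),
               '-O', out_fmt, src, dst]
        subprocess.run(cmd, check=True)

    # e.g. limit the offline clone to roughly 100 MiB/s
    convert_with_bwlimit('/path/to/source.qcow2', '/path/to/target.raw',
                         'raw', 100 * 1024 * 1024)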
Signed-off-by: Leo Nunner <l.nunner@proxmox.com>
[ T: reword subject line slightly ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Avoid pretending that an MTU change on an existing network device gets
applied live to a running VM.
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
[ T: reworded and expanded commit message slightly ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Required because there's one single EFI image for ARM, and the 2m/4m
difference doesn't seem to apply.
Signed-off-by: Matthias Heiserer <m.heiserer@proxmox.com>
[ T: move description to format and reword subject ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
AFAICT, previously, errors from swtpm would not show up in any logs,
because they were just printed to the stderr of the daemonized
invocation here.
The 'truncate' option is not used, so that the log is not immediately
lost when a new instance is started. This increases the chance that
the relevant errors are still present when requesting the log from a
user.
Log level 1 contains the most relevant errors and seems to be quiet
for working-as-expected invocations. Log level 2 already includes
logging full TPM commands, some of which are 1024 bytes long. Thus,
log level 1 was chosen.
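A sketch of the relevant part of the swtpm invocation (the log path is
just an example; note there is no 'truncate' in the --log option):

    # Log to a per-VM file at level 1, keeping earlier content across restarts.
    vmid = 100
    swtpm_cmd = [
        'swtpm', 'socket',
        '--log', f'file=/run/qemu-server/{vmid}-swtpm.log,level=1',
        # ... remaining --tpmstate/--ctrl/--daemon options omitted ...
    ]
    print(' '.join(swtpm_cmd))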
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The guest will be running, so it's misleading to fail the start task
here. This also ensures that we clean up the hibernation state upon
resume even if there is an error here, which did not happen
previously [0].
[0]: https://forum.proxmox.com/threads/123159/
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The target of the drive-mirror operation is opened with (essentially)
the same flags as the source in QEMU; in particular, whether io_uring
should be used is inherited.
But io_uring currently causes problems in combination with certain
storage types, sometimes even leading to crashes (LVM with Linux 6.1).
Just disallow live cloning of drives when the source uses io_uring and
the target storage is not ready for it. There is one exception, namely
when source and target storage are the same. In that case, just assume
it will keep working for the target.
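A minimal sketch of that decision, with storage readiness passed in as
a flag (names are illustrative; the actual check is done in
qemu-server's Perl code):

    def live_clone_allowed(source_aio, source_storage, target_storage,
                           target_storage_handles_io_uring):
        # Only restrict when the source drive actually uses io_uring.
        if source_aio != 'io_uring':
            return True
        # Same storage: assume it keeps working for the target.
        if source_storage == target_storage:
            return True
        return target_storage_handles_io_uring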
Migration does not seem to be affected, because there, the target VM
opens the images with the checked aio setting and then NBD exports of
those are used as the targets for mirroring.
It can be that the default determined for the source is not what's
actually used, because after a drive-mirror to a storage with a
different default, it will still use the default from the old storage.
Unfortunately, aio doesn't seem to be part of the 'query-block' QMP
command's result, so just tolerate this edge case.
The check can be removed if either
1. drive-mirror learns to open the target with a different aio setting
or more ideally
2. there are no more bad storages for io_uring.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Previously, changing aio would be applied to the configuration, but
the drive would still be using the old setting.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Since commit 9b6efe43 ("migrate: add live-migration of replicated
disks") live-migration with replicated volumes is possible. When
handling the replication, it is checked that all local volumes
previously detected as replicatable are actually replicated. So the
check if migration with snapshots is possible can just allow volumes
that are detected as replicatable.
Note that VM state files are also replicated.
If there is an invalid configuration with a non-replicatable volume or
state file and replication is enabled, then replication will fail, and
thus migration will fail early.
Trying to live-migrate to a non-replication target (needs --force)
will still fail if there are snapshots, because they are (correctly)
detected as non-replicated.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
In the web UI, this was fixed years ago by pve-manager commit c11c4a40
("fix #1631: change units to binary prefix").
Quickly checked with the 'query-memory-size-summary' QMP command that
this is actually the case.
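For illustration, a sketch of that check with made-up numbers,
displaying the result with a binary prefix (GiB) like the web UI does:

    import json

    # 'query-memory-size-summary' reports sizes in bytes.
    print(json.dumps({'execute': 'query-memory-size-summary'}))
    reply = {'base-memory': 4 * 1024**3, 'plugged-memory': 1 * 1024**3}  # example
    total = reply['base-memory'] + reply['plugged-memory']
    print(f'{total / 1024**3:.2f} GiB')   # -> 5.00 GiB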
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The error messages for the missing OVMF_CODE and OVMF_VARS files were
inconsistent, and the error for the missing base var file did not tell
you the expected path.
Signed-off-by: Noel Ullreich <n.ullreich@proxmox.com>
The 'nbd-server-add' QMP command has been deprecated since QEMU 5.2 in
favor of a more general 'block-export-add'.
When using 'nbd-server-add', QEMU internally converts the parameters
and calls blk_exp_add() which is also used by 'block-export-add'. It
does one more thing, namely calling nbd_export_set_on_eject_blk() to
auto-remove the export from the server when the backing drive goes
away. But that behavior is not needed in our case; stopping the NBD
server removes the exports anyway.
It was checked with a debugger that the parameters to blk_exp_add()
are still the same after this change. Well, the block node names are
autogenerated and not consistent across invocations.
The alternative to using 'query-block' would be specifying a
predictable 'node-name' for our '-drive' commandline. It's not that
difficult for this use case, but in general one needs to be careful
(e.g. it can't be specified for an empty CD drive, but would need to
be set when inserting a CD later). Querying the actual 'node-name'
seemed a bit more future-proof.
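A minimal sketch of the replacement, looking up the autogenerated node
name from a 'query-block' style result and building the
'block-export-add' command (drive id, node name and export name are
examples):

    import json

    # Reduced shape of one 'query-block' entry; the node name is autogenerated.
    block_info = [{'device': 'drive-scsi0',
                   'inserted': {'node-name': '#block123'}}]

    node_name = next(b['inserted']['node-name'] for b in block_info
                     if b['device'] == 'drive-scsi0')

    cmd = {
        'execute': 'block-export-add',
        'arguments': {
            'type': 'nbd',
            'id': 'drive-scsi0',
            'node-name': node_name,
            'writable': True,
            'name': 'drive-scsi0',   # NBD export name, as with nbd-server-add
        },
    }
    print(json.dumps(cmd, indent=2))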
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>