In Proxmox VE 8, the oldest supported QEMU version is 8.0, so a check
for version 4.0.1 is not required anymore.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
In Proxmox VE 8, the oldest supported QEMU version is 8.0, so a
check for version 4.2 is not required anymore. The check was also
wrong, because it checked the installed version and not the currently
running one.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
While archives with an unknown or undetermined subtype could be
shown, this is only for autocompletion, so users can still specify
those manually if required.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Some callers, like the move disk API endpoint, do not pass an explicit
completion argument. This is not an issue in general, because
qemu_drive_mirror_monitor() defaults to 'complete'. However, there was
a string comparison for the cloudinit case that could trigger a
warning about the value being uninitialized.
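A minimal sketch of the kind of fix, assuming the monitor helper looks
roughly like this (signature abbreviated, internals omitted):

    sub qemu_drive_mirror_monitor {
        my ($vmid, $vmiddst, $jobs, $completion) = @_; # further args omitted
        # Default early, so the string comparison for the cloudinit case
        # further down never operates on an uninitialized value.
        $completion //= 'complete';
        # ...
        # e.g. later: if ($completion eq 'complete') { ... }
    }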
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
this was a stray search and replace for job -> job_id that should
have only changed variable names.
Fixes: 0ea24bf ("mirror monitor: refactoring/code cleanup")
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
upon failure. Otherwise, the job would disappear too quickly from the
job list and could no longer be queried for the actual error.
Relevant part of the error in an actual example:
Before:
> VM 106 qmp command 'blockdev-del' failed - Node 'drive-scsi0-restore' is busy: node is used as backing hd of '#block655'
After:
> block job (stream) error: restore-scsi0: No space left on device (io-status: ok)
Note that previously, it was not even detected that the stream job
failed, and the error message stems from the subsequent cleanup
failing.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
upon failure. Otherwise, the job would disappear too quickly from the
job list and could no longer be queried for the actual error.
Relevant part of the error in an actual example:
Before:
> VM 112 qmp command 'blockdev-del' failed - Node 'drive-scsi0-pbs' is busy: node is used as backing hd of '#block046'
After:
> block job (stream) error: restore-drive-scsi0: No space left on device (io-status: ok)
Note that previously, it was not even detected that the stream job
failed, and the error message stems from the subsequent cleanup
failing.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
upon failure. Otherwise, the job would disappear too quickly from the
job list and could no longer be queried for the actual error.
Relevant part of the error in actual examples (note that the fact that
it's a mirror job is already mentioned earlier in the full error, with
"block job (mirror) error:"):
Before:
> 'mirror' has been cancelled
> 'mirror' has been cancelled
After:
> Source and target image have different sizes (io-status: ok)
> No space left on device (io-status: ok)
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
When auto-dismiss=true (the default), a failed job can disappear very
quickly from the job list and there might not be any chance to see the
error in the result of 'query-block-jobs'. For jobs with $completion
being 'auto', like 'block-stream', it couldn't even be detected that
the job failed.
Jobs with auto-dismiss=false, on the other hand, will wait in
'concluded' state until manually dismissed. For those, it will be
possible to query the error if the job failed.
There doesn't seem to be a way to have only failed jobs stay around,
e.g. something like auto-dismiss=on-success.
Planned to be used for the 'drive-mirror' and 'block-stream' jobs
initially.
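A minimal sketch of the intended usage from the Perl side, with
mon_cmd() as the existing QMP helper (the device and job names here
are made up for illustration; JSON::false comes from the JSON module):

    # Start the job with auto-dismiss disabled, so a failed job stays
    # around in 'concluded' state instead of vanishing from the job list.
    mon_cmd($vmid, 'block-stream',
        'job-id' => 'stream-drive-scsi0',
        device => 'drive-scsi0',
        'auto-dismiss' => JSON::false,
    );
    # The error of a failed job can now still be seen here ...
    my $jobs = mon_cmd($vmid, 'query-block-jobs');
    # ... and the job has to be dismissed manually afterwards.
    mon_cmd($vmid, 'job-dismiss', id => 'stream-drive-scsi0');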
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
templates can only be started in the context of a PBS backup, and
there we don't need or want to use most of the config, since they
cannot be started normally anyway.
We minimize the config by copying some specific relevant options (see
the comments for why each option was chosen) and all disk
configurations.
Since we change the QEMU command line for templates, we now have to
adapt the tests involving templates.
Without this, users can get into a situation where the template cannot
be backed up when some resources are not available (such as CPU cores,
KVM, PCI devices, etc.), even if the backup process does not need
them.
This change has some nice side effects: for example, we no longer need
to allocate the full amount of memory for templates that have a
hostpci device configured, the configured bridges don't have to exist,
etc.
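A sketch of the minimization, with an illustrative (not the actual)
option list; foreach_volume() is the existing config helper:

    my $minimized = {};
    # copy only specific relevant options (example list; see the code
    # comments for the real selection and the reasoning per option)
    for my $opt (qw(name ostype bios machine scsihw)) {
        $minimized->{$opt} = $conf->{$opt} if defined($conf->{$opt});
    }
    # copy all disk configurations
    PVE::QemuConfig->foreach_volume($conf, sub {
        my ($key, $drive) = @_;
        $minimized->{$key} = $conf->{$key};
    });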
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
during PBS backups, we need to start templates, so add a test for
that. We already have some tests for templates, but none with hostpci,
tpm, etc.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
After the TPM state has been created (to be precise, initialized by
swtpm), it is not possible to change the version anymore. Doing so
will lead to a failure when starting the associated VM. While this is
documented in the description, it's better to also enforce it via the
API.
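A sketch of such an API-level check (variable and key names assumed;
the implicit v1.2 default matches the next commit):

    # Reject changing the TPM version once the state has been created.
    my $old_version = $old_tpmstate->{version} // 'v1.2';
    my $new_version = $new_tpmstate->{version} // 'v1.2';
    die "cannot change the TPM state version after creation\n"
        if $old_version ne $new_version;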
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Since the check in start_swtpm() only treats an explicitly configured
v2.0 as opting in to version 2.0, the actual default is v1.2 and not
v2.0 like the schema stated.
Of course, it would be nicer to have the default be v2.0, but changing
the check to use that default would break any TPM state without an
explicitly configured version.
There doesn't seem to be any code besides start_swtpm() accessing the
version.
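The check in start_swtpm() is essentially the following (sketch;
--tpm2 is swtpm's real flag, the surrounding command building is
omitted):

    # Only an explicitly configured v2.0 opts in to TPM 2.0, so the
    # effective default is v1.2.
    my $tpm_version = $tpmstate->{version} // '';
    push @$swtpm_cmd, '--tpm2' if $tpm_version eq 'v2.0';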
Fixes: f9dde219 ("fix #3075: add TPM v1.2 and v2.0 support via swtpm")
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
With high IO pressure, 5 seconds might not be enough, even if the
request is small.
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The generic "got timeout" message cannot be associated to a certain
code path and also isn't very user-friendly. Use dedicated messages
for each stage and also suggest why the timeout for reading the header
might have happened, i.e. because it was corrupted.
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The default timeout for HMP commands is 5 seconds.
While it should be rather fast to attach a new drive to QEMU, a busy
system might take longer, so future-proof and increase to 60 seconds.
On the other hand, detaching a drive needs to complete any pending IO
on it, so use the same 10 minutes timeout that's used for
drive-related QMP commands.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The default timeout is 5 seconds, but some HMP commands (e.g.
disk-related ones) might take longer than that. It's still an
interactive session, so use 30 seconds for now. Should there be any
user complaints about frequent timeouts, it could still be increased
further.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The default timeout for HMP commands is 5 seconds and while it should
be rather fast to attach a new drive to QEMU, a system can be very
busy during backup, so future-proof and increase to 60 seconds.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The default timeout is 5 seconds, but some HMP commands (e.g.
disk-related ones) might take longer than that. The API call is
synchronous, so has to complete within 30 seconds, and since there is
no other costly operation, use 25 seconds.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Passing the timeout key with an explicit value of undef is fine,
because both the absence of the timeout key and an explicit value of
undef will lead to $timeout being undef in the qmp_cmd() function.
In preparation to increase the timeout for certain (e.g. disk-related)
HMP commands.
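A sketch of why the explicit undef is harmless (command hash shape
assumed from the description above):

    my $timeout; # stays undef unless the caller provides one
    my $cmd = {
        execute => 'human-monitor-command',
        arguments => { 'command-line' => $cmdline },
        timeout => $timeout, # equivalent to omitting the key entirely
    };
    # in qmp_cmd(), both variants yield an undefined timeout:
    my $t = delete $cmd->{timeout};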
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The savevm-end command also fails when no snapshot operation was
started before. In particular, this is the case when savevm-start
failed early, because of unmigratable devices.
Avoid potentially leaving an orphaned volume and snapshot-related
configuration keys around by continuing with cleanup instead.
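A sketch of the resulting control flow (mon_cmd() is the QMP helper;
the cleanup details are omitted):

    # savevm-end fails if no snapshot operation was started, e.g. when
    # savevm-start already failed early - warn instead of aborting, so
    # the cleanup below still runs
    eval { mon_cmd($vmid, 'savevm-end') };
    warn $@ if $@;
    # ... remove the state volume and snapshot-related config keys ...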
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
by wrapping the properties from the command definition to get an
actual schema definition.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Since commit 865ef132 ("implement dynamic migration_downtime") the
migration downtime will be automatically increased when migration
cannot converge at the very end. Update the description to reflect
reality.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
This fixes the broken prevention of starting a VM with a 32-bit CPU
using a 64-bit OVMF (UEFI) BIOS.
Fixes: 89d5b1c9 ("prevent starting a 32-bit VM using a 64-bit OVMF BIOS")
Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
[FE: add Fixes trailer, add prefix to title]
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Otherwise, a VM in those states would be terminated after a backup
in handle_qmp_return() with QMP 'quit', which is pretty bad in case
of the 'suspended' state.
Does not change the fact that a VM started in prelaunch mode for
backup is terminated later (that is handled by the Perl code).
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The machine handling was transformed into a full-fledged property
string with a (sub) format, but the single call site for print_machine
was seemingly not tested, as it could never have worked due to a
missing import of the print_property_string helper.
Fixes: 8082eb8 ("config: define machine schema as property-string")
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
When the nftables firewall is enabled, we do not need to create
firewall bridges.
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
[ TL: use a more meaningful variable name and add a comment ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Management for fleecing images is implemented here. If the fleecing
option is set, a new fleecing image is allocated for each disk (except
the EFI disk and TPM state) on the configured fleecing storage (the
same storage as the original disk by default). The disk is attached to
QEMU with the 'size' parameter, because the block node in QEMU has to
be the exact same size and the newly allocated image might be bigger
if the storage has a coarser allocation granularity or rounds the size
up. After the backup, the fleecing images are detached and removed
from the storage.
If the storage supports qcow2, use that as the fleecing image format.
This allows saving some space even on storages that do not properly
support discard, like, for example, older versions of NFS.
Since there can be multiple volumes with the same volume name on
different storages, the fleecing image's name cannot be based on the
original volume's name alone. Instead, the naming scheme
vm-ID-fleece-N(.FORMAT), with N incrementing for each disk, is used.
Partially inspired by the existing handling of the TPM state image
during backup.
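A sketch of the allocation loop, using the existing
PVE::Storage::vdisk_alloc() helper (the qcow2 support check, size
handling and loop variables are simplified/hypothetical):

    my $n = 0; # counter keeps names unique, independent of volume names
    for my $di (@disks) { # all drives except EFI disk and TPM state
        my $format = $storage_supports_qcow2 ? 'qcow2' : 'raw';
        my $name = "vm-$vmid-fleece-$n";
        $name .= ".$format" if $format eq 'qcow2'; # suffix only where needed
        my $volid = PVE::Storage::vdisk_alloc(
            $storecfg, $fleecing_storeid, $vmid, $format, $name, $size_kib);
        $n++;
        # attach to QEMU with an explicit 'size', since the allocated
        # image might be bigger than the original disk
    }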
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
since commit 1f743141 ("fix #1905: Allow moving unused disks") we want
to check the source drive name for 'unused', but when importing a
volume from the 'import' content type (e.g. from ESXi), there is no
source drive name. So we have to check first whether it is defined.
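A sketch of the fix (variable name and pattern assumed):

    # When importing from the 'import' content type there is no source
    # drive name, so check definedness before matching for 'unused'.
    if (defined($src_drivename) && $src_drivename =~ m/^unused\d+$/) {
        # ... handling for unused source disks ...
    }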
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
The previous wording made it sound like all "visible" tasks were
aborted, which is not the case: A user with Sys.Audit but without
Sys.Modify may see a task that was started by a different user, but
overrule-shutdown would not abort the task.
Change wording to better reflect that not all visible tasks may be
aborted.
Also, add a full-stop that was previously missing.
Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
The new `overrule-shutdown` parameter is boolean and defaults to 0. If
it is 1, all active `qmshutdown` tasks for the same VM (which are
visible to the user/token) are aborted before attempting to stop the
VM.
Passing `overrule-shutdown=1` is forbidden for HA resources.
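A sketch of the overrule logic (the task listing and permission filter
are omitted; abort_task() is a hypothetical helper, while
vm_is_ha_managed() and upid_decode() are existing helpers):

    if ($param->{'overrule-shutdown'}) {
        die "cannot overrule shutdown for HA resources\n"
            if PVE::HA::Config::vm_is_ha_managed($vmid);
        for my $upid (@$visible_tasks) { # active tasks visible to the user/token
            my $task = PVE::Tools::upid_decode($upid);
            next if $task->{type} ne 'qmshutdown' || $task->{id} ne $vmid;
            abort_task($upid); # hypothetical helper
        }
    }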
Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
In the past, moving unused disks to another storage was prohibited due
to oversights in the handling of unused disks. This commit rectifies
this limitation by allowing the movement of unused disks.
Historical context:
* 16 Sep 2010 r5164 qemu-server/pve2: The disknames sub was removed.
* 17 Sep 2010 r5170 qemu-server/pve2: Unused disks were introduced.
* 28 Jan 2011 r5461 qemu-server/pve2: The same disknames sub that was
removed in r5164 was brought back. Since unused disks were not around
yet in r5164, the disknames sub did not consider unused disks.
* 6-8 Aug 2012 c1175c92..f91b2e45 qemu-server.git: Disk resize was
introduced. In commit c1175c92, sub qemu_block_resize did not take
unused disks into account, and in commit 2f48a4f5 (8 Aug 2012) the
resize API call was changed to only allow disks matching the ones in
the disknames sub. Since sub disknames did not contain any unused
disks, those were not allowed at all in the resize API call.
* 27 May 2013 586bfa78 qemu-server.git: Disk move was introduced. The
API call implementation borrowed heavily from disk resize, including
the behaviour of not taking unused disks into account. Thus, unused
disks could not be moved, which persists to this day.
In summary, this behaviour was introduced because the handling of unused
disks was overlooked and it was never changed.
There is no inherent reason why unused disks should be restricted from
being moved to another storage. These disks cannot use
qemu_drive_mirror, but they can still be moved with qemu_img_convert,
the same way as any other disk of a stopped VM.
Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
vIOMMU enables passing PCI devices through to L2 VMs inside L1 VMs
via nested virtualization and adds extra isolation.
Uses the new property string from the "config: define machine schema
as property-string" commit to add the viommu option to the machine
parameter.
Currently there are two vIOMMU implementations in QEMU to choose from:
intel or virtio.
Virtio-iommu is more recent but less used in production than
intel-iommu. The assert_valid_machine_property function prevents using
intel-iommu with i440fx.
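For example, a VM config line using the new option could look like
this (property-string syntax assumed from the commit referenced
above):

    machine: q35,viommu=intel

With viommu=intel, the machine type has to be q35, as enforced by
assert_valid_machine_property.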
Signed-off-by: Markus Frank <m.frank@proxmox.com>
[ TL: tiny coding style fix to extract variable inside if expr ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>