In Proxmox VE 8, the oldest supported QEMU version is 8.0, so a check
for version 4.0.1 is not required anymore.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The default timeout for HMP commands is 5 seconds. While attaching a new
drive to QEMU should be rather fast, a system can be very busy during
backup, so future-proof and increase the timeout to 60 seconds.
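A minimal sketch of the change, assuming the HMP helper accepts a
timeout as its last argument (the command string is illustrative):

    # pass an explicit 60s timeout instead of relying on the 5s default
    PVE::QemuServer::Monitor::hmp_cmd($vmid, "drive_add auto \"$drive\"", 60);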
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Management for fleecing images is implemented here. If the fleecing
option is set, for each disk (except EFI disk and TPM state) a new
fleecing image is allocated on the configured fleecing storage (the same
storage as the original disk by default). The disk is attached to QEMU
with the 'size' parameter, because the block node in QEMU has to be
exactly the same size, and the newly allocated image might be bigger if
the storage has a coarser allocation granularity or rounds the size up.
After backup, the disks are detached and removed from the storage.
If the storage supports qcow2, use that as the fleecing image format.
This allows saving some space even on storages that do not properly
support discard, like, for example, older versions of NFS.
Since there can be multiple volumes with the same volume name on
different storages, the fleecing image's name cannot be based on the
original volume's name alone. Instead, the scheme vm-ID-fleece-N(.FORMAT)
is used, with N incrementing for each disk.
Partially inspired by the existing handling of the TPM state image
during backup.
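A minimal sketch of the naming scheme described above (the helper name
is hypothetical):

    # build vm-ID-fleece-N(.FORMAT); raw images carry no format suffix
    sub fleecing_image_name {
        my ($vmid, $n, $format) = @_;
        my $name = "vm-$vmid-fleece-$n";
        $name .= ".$format" if defined($format) && $format ne 'raw';
        return $name;
    }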
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
by including the errno. This might make it clearer what the issue is in
cases like: https://forum.proxmox.com/threads/135261/
Also add the missing newlines and the missing "to" in the second message,
switch to the more common "or die", and avoid line bloat while at it.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Fix races with ACPI-suspended VMs which could wake up during migration
or during a suspend-mode backup.
Revert the prevention of ACPI-suspended VMs automatically resuming after
migration, which was introduced by 7ba974a6828d. That commit introduced a
potential problem: a suspended VM that wakes up during migration could
remain paused after the migration finishes.
This can be fixed once QEMU preserves the 'suspended' runstate during
migration (current patch on the qemu-devel list [0]) by checking for
the 'suspended' runstate on the target after migration.
Furthermore, the commit increased the race window during the preparation
of a suspend-mode backup, when a suspended VM wakes up between the
vm_is_paused check in PVE::VZDump::QemuServer::prepare and
PVE::VZDump::QemuServer::qga_fs_freeze. This causes the code to skip
fs-freeze even though the VM has woken up, potentially leaving the file
system in an inconsistent state.
To prevent this, do not treat the suspended runstate as paused when
migrating or archiving a VM.
[0]: https://lists.nongnu.org/archive/html/qemu-devel/2023-08/msg05260.html
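A minimal sketch of the call-site idea, assuming vm_is_paused() gained a
flag controlling whether 'suspended' counts as paused (the flag is
hypothetical here):

    # while migrating or archiving, do not count 'suspended' as paused,
    # so a VM waking up in the race window is still handled correctly
    my $include_suspended = 0;
    my $paused = PVE::QemuServer::vm_is_paused($vmid, $include_suspended);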
Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
[ TL: massage in Fiona's extra info into commit message ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
factor the common checks for disk-less and "normal" backups out into a
helper of their own, avoiding code duplication and ensuring that the
messages and checks stay in sync.
The use sites for key and master key are a bit clearer, as everything
just depends on whether they are defined.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
... and make it use a warn level, which can then also mark the whole
task as potentially problematic: with a new enough pve-guest-common, the
REST environment's worker warn counters are then increased.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
our backup logs are still quite noisy at the task start, so avoid
logging twice that the task is running with encryption enabled for the
master-key feature.
The definedness check on master_keyfile isn't required anymore; it never
was for the no-disk case, and for the standard case it hasn't been since
781fb80 ("vzdump: error out for master-key backup but no QEMU support").
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Our QEMU gained master-key support for Proxmox VE 6.4 with the initial
QEMU 5.2.0 packaging in 0b8da68 ("add PBS master key support").
As we're now two major releases later and any VM needs to run with a
newer QEMU version, we can just make this a hard error, as there really
should be no use case left. After all, we only support upgrading directly
to the next major release, so one needs to do at least a migration (or
shutdown) of the VM to reboot the node when upgrading to Proxmox VE 8;
thus, the lowest QEMU version baseline for Proxmox VE 8 is 6.0 (i.e., the
version from PVE 7.0).
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
these are backed up directly with proxmox-backup-client, and the invocation was
lacking the key parameters.
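A sketch of the fix, assuming the direct proxmox-backup-client
invocation takes the same key flags as the image-backup path (variable
names illustrative):

    # also hand the encryption key and master public key to the client
    push @$cmd, '--keyfile', $keyfile if defined($keyfile);
    push @$cmd, '--master-pubkey-file', $master_keyfile
        if defined($master_keyfile);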
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
which is the case for passed-through disks. The qemu_img_format()
function cannot correctly handle those, and it's not safe to assume
they are raw (most likely they are, but it's not guaranteed), so just
use the storage method for the format, as was done before commit
efa3aa24 ("avoid list context for volume_size_info calls"). This will
use 'qemu-img info' to get the actual format.
Reported in the community forum:
https://forum.proxmox.com/threads/124794/
https://forum.proxmox.com/threads/124823/
https://forum.proxmox.com/threads/124818/
Fixes: efa3aa24 ("avoid list context for volume_size_info calls")
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Tested-by: Friedrich Weber <f.weber@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
With the recent pve-storage commit d70d814 ("api: fix get content call
response type for RBD/ZFS/iSCSI volumes"), the volume_size_info call for
RBD in list context is much slower than before (from a quick test, about
twice as long without snapshots, and even longer with snapshots; untested,
but with an external cluster whose images lack the fast-diff feature, it
should be worse still), and thus increases the likelihood of running into
timeouts here.
None of the callers here actually need the more expensive call, so just
avoid calling it in list context.
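For illustration, the cheap variant is simply calling
PVE::Storage::volume_size_info() in scalar context:

    # scalar context returns just the size, avoiding the expensive
    # per-snapshot accounting needed for the list-context result
    my $size = PVE::Storage::volume_size_info($storecfg, $volid, 5);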
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
the volume path can contain escaped ':' or ',' characters, which means
their '\' needs to be escaped a second time when the path is passed to
HMP. The same approach is used for hotplugging regular drives in
PVE::QemuServer, and is needed (at least) for RBD storages with IPv6
monhosts or an explicit monhost port.
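A minimal sketch of the escaping, mirroring the drive-hotplug code path
(variable names illustrative):

    # storage-level escaping uses '\'; double it so HMP's own parsing
    # does not strip it from escaped ':' or ',' in the path
    $drive_spec =~ s/\\/\\\\/g;
    PVE::QemuServer::Monitor::hmp_cmd($vmid, "drive_add auto \"$drive_spec\"", 60);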
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
the created backups are encrypted, but cannot be restored via the master
key in case the original PVE system is lost.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
if the key file doesn't exist (anymore), but the storage.cfg references
one, die when starting a backup that should use encryption instead of
falling back to plain-text operations.
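A minimal sketch of the new behavior (variable names hypothetical):

    # storage.cfg references an encryption key, so a missing key file
    # is now fatal instead of silently producing an unencrypted backup
    die "encryption key file '$keyfile' is missing\n"
        if $encryption_enabled && !-f $keyfile;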
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Starts an instance of swtpm per VM in its own systemd scope; it will
terminate by itself if the VM exits, or be terminated manually if
startup fails.
Before first use, a TPM state is created via swtpm_setup. The state is
stored in a 'tpmstate0' volume, treated much the same way as an efidisk.
It is migrated 'offline': the important part here is the creation of the
target volume, while the actual data transfer happens via the QEMU device
state migration process.
Move-disk can only work offline, as the disk is not registered with
QEMU, so 'drive-mirror' wouldn't work. swtpm itself has no method of
moving a backing storage at runtime.
For backups, a bit of a workaround is necessary (this may later be
replaced by NBD support in swtpm): During the backup, we attach the
backing file of the TPM as a read-only drive to QEMU, so our backup
code can detect it as a block device and back it up as such, while
ensuring consistency with the rest of disk state ("snapshot" semantic).
The name for the ephemeral drive is specifically chosen as
'drive-tpmstate0-backup', diverging from our usual naming scheme with
the '-backup' suffix, to avoid it ever being treated as a regular drive
from the rest of the stack in case it gets left over after a backup for
some reason (shouldn't happen).
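A sketch of the ephemeral attach, assuming an HMP helper as used for
regular drive hotplug (parameters illustrative):

    # expose the TPM state read-only under the special backup id, so
    # the backup code sees a block device that never looks like a
    # regular drive to the rest of the stack
    my $spec = "file=$path,if=none,read-only=on,id=drive-tpmstate0-backup";
    PVE::QemuServer::Monitor::hmp_cmd($vmid, "drive_add auto \"$spec\"");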
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
running outdated VMs without master-key support will generate a warning,
but the backup proceeds without the encrypted key upload.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Users need to reboot at least once for the upgrade to 7.0, so any VM
running is then using a new enough QEMU...
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Fixes an issue in which a VM/CT fails to automatically restart after a
failed stop-mode backup.
Also fixes a minor typo in a comment.
Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
Reviewed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Only show "not supported by QEMU version" message if we determine that
to be the actual cause, just print the error otherwise.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Commit "a941bbd0 client: raise HTTP_TIMEOUT to 120s" in proxmox-backup
did the same, however, we would now still fail after 60 seconds since
the QMP call would time out.
Increase the timeout here to the same +5 seconds to give some time to
receive a response, so if the HTTP call in proxmox-backup times out, we
can still get a useful error message instead of timing out the QMP call
too.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
...taking care not to lose the custom precision for byte conversion.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
by checking if the VM is paused at the beginning and skipping the
resume, we now also skip the QGA freeze/thaw (which cannot work if the
VM is paused).
Moved the 'vm_is_paused' sub from the API to PVE/QemuServer.pm so it is
available everywhere we need it.
Since a suspend-mode backup would pause the VM anyway, we can skip that
step as well.
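A minimal sketch of the moved helper (error handling trimmed):

    # now in PVE/QemuServer.pm, so both the API and vzdump can use it
    sub vm_is_paused {
        my ($vmid) = @_;
        my $qmpstatus = eval { mon_cmd($vmid, 'query-status') };
        warn $@ if $@;
        return $qmpstatus && $qmpstatus->{status} eq 'paused';
    }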
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Fabian Ebner <f.ebner@proxmox.com>
this fixes the issue that we did not generate the correct repository
URL for PBS storages that contain an IPv6 address or a port.
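A minimal sketch of the repository string assembly (config field names
illustrative):

    # a raw IPv6 address contains ':' and must be bracketed before a
    # non-default port is appended to user@host:datastore
    my $server = $scfg->{server};
    $server = "[$server]" if $server =~ /:/;
    $server .= ":$scfg->{port}" if $scfg->{port};
    my $repo = "$scfg->{username}\@$server:$scfg->{datastore}";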
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Now that VMs can be started during a backup, it makes sense to create a
dirty bitmap in these cases too, since the VM might be resumed and thus
continue running normally even after the backup is done.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Connect and send the vmid of the VM being backed up. This prevents
qmeventd from SIGTERMing the underlying QEMU instance, even if the guest
shuts itself down, until we close the socket connection (in cleanup,
which happens on success and abort; or, if we crash, the file handle
will be closed as well).
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
if the 'backup' QMP call itself times out or fails, we still want to
try to cancel the backup, else it can happen that a backup is still
running even though vzdump thinks it was canceled.
The QAPI docs say that backup-cancel always returns success, even if no
backup is running.
Since we hold a global and a per-VM lock for the backup, this should be
fine, as we should not reach this code without those locks.
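A minimal sketch of the error path (using the mon_cmd helper):

    # even if the 'backup' call failed or timed out, a backup job may
    # still be running inside QEMU, so always try to cancel it;
    # 'backup-cancel' returns success even when no backup is active
    eval { mon_cmd($vmid, 'backup', %$qmp_opts) };
    if (my $err = $@) {
        eval { mon_cmd($vmid, 'backup-cancel') };
        die $err;
    }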
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
We use verification for something more in-depth on the PBS server, so
avoid that term to avoid misunderstandings.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
until we maybe have a 'pbs-backup' that links QEMU and PBS like
'pbs-restore' does, we need to do a regular backup for the template case
to support all storage types and image formats.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
This still works even if all drives were clean. It then shows the very
magical line:
INFO: backup was done incrementally, reused 34.00 GiB (100%)
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
QEMU handles it just as well as with VMA, so this was probably just
forgotten to implement for PBS.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>