mirror of git://git.proxmox.com/git/qemu-server.git synced 2025-08-04 08:21:54 +03:00

3755 Commits

Author SHA1 Message Date
1d87945007 tests: add cache && aio tests for different storage types
Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
Link: https://lore.proxmox.com/mailman.286.1749207110.395.pve-devel@lists.proxmox.com
[FE: rename volume group to avoid having duplicate with lvm-thin]
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-06-11 10:00:35 +02:00
fa46162722 qmeventd: auto-format code base using clang-format
Using the LLVM style with some minor adaptations to avoid cramming too
much into single lines.

For now no make target, but with the .clang-format file one can simply
run:

  clang-format -i qmeventd.[ch]

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-06-04 20:33:40 +02:00
c7cd00516d virtiofs: prevent issue with Windows OS and too many files
As reported in the community forum [0] and the virtio-win project [1],
virtiofsd will run into its open file limit when used with a Windows
guest that reads too many files. It's also reported that the issue
does not occur with Linux guests and that a workaround is using
'--inode-file-handles=mandatory' on the virtiofsd command line.

The option is described as follows in the virtiofsd help:

> When to use file handles to reference inodes instead of O_PATH file
> descriptors (never, prefer, mandatory)

and the default is 'never'.

Fix the above issue by using 'prefer' rather than 'mandatory', because
that should not break other edge cases:

> prefer: Attempt to generate file handles, but fall back to O_PATH
> file descriptors where the underlying filesystem does not support
> file handles. Useful when there are various different filesystems
> under the shared directory and some of them do not support file
> handles.
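
For reference, this corresponds to passing the option on the virtiofsd
command line roughly as follows (binary name and other arguments assumed):

```
virtiofsd --shared-dir /path/to/share --inode-file-handles=prefer ...
```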

[0]: https://forum.proxmox.com/threads/165565/
[1]: https://github.com/virtio-win/kvm-guest-drivers-windows/issues/1136

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Tested-by: Markus Frank <m.frank@proxmox.com>
Link: https://lore.proxmox.com/20250502142133.59401-1-f.ebner@proxmox.com
2025-06-02 08:19:16 +02:00
1fd1ca60f9 tests: qemu_img_convert: add rbd and btrfs snapshots convert test
Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
[FE: avoid re-using already existing path/pool for newly added storages
     minor style fixes]
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-05-28 12:24:40 +02:00
c760952771 migrate: code cleanup: factor out variables for transferred memory and vfio
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
Link: https://lore.proxmox.com/20250520131431.487048-1-f.ebner@proxmox.com
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-05-21 16:33:48 +02:00
7d1ec25bc1 migrate: silence 'Use of uninitialized value' warning
If no vfio device is present during migration, and the transferred
(main) memory did not change between loop cycles, we get a warning:

 Use of uninitialized value $last_vfio_transferred in string ne

To silence that, check whether the transferred vfio value is defined
before comparing, and always write a defined value to
$last_vfio_transferred.
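
A minimal sketch of the fix; variable names other than
$last_vfio_transferred are assumptions:

```
my $vfio_transferred = $stat->{vfio}->{transferred};
if (defined($vfio_transferred) && $vfio_transferred ne ($last_vfio_transferred // '')) {
    # ... log VFIO transfer progress ...
}
$last_vfio_transferred = $vfio_transferred // 0; # always store a defined value
```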

This was noticed by a forum user:
https://forum.proxmox.com/threads/166161/

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Link: https://lore.proxmox.com/20250519144357.3515197-1-d.csapak@proxmox.com
2025-05-21 16:32:38 +02:00
f97b3da1e6 drive: code cleanup: remove unused $vmid argument from get_iso_path() helper
According to git history, the $vmid argument was never used.

Reported-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-05-06 14:08:04 +02:00
f671a0dcdf tests: add cfg2cmd tests for disk passthrough, rbd, krbd && zfs-over-scsi.
Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
[FE: mock helper for getting host CD-ROM path
     minor style fixes]
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-05-06 11:35:22 +02:00
4d78861740 update disk config: consider recorded fleecing images
Otherwise, a rescan operation would add fleecing images as unused
disks, even if they are already recorded in the special 'fleecing'
section.

Usually, fleecing images are cleaned up directly after backup, so this
is less likely to be an issue after commit 8009da73 ("fix #6317:
backup: use correct cleanup_fleecing_images helper"), but still makes
sense for future-proofing and for other edge cases where cleanup might
have failed.

Reported-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250422080951.10072-1-f.ebner@proxmox.com
2025-04-22 10:14:32 +02:00
8009da73f1 fix #6317: backup: use correct cleanup_fleecing_images helper
The local one is specific to `allocate_fleecing_images` and has a
comment stating to use the one from `PVE::QemuConfig` in all other
cases.

The `cleanup` sub already called this, but only if the VM was running.
We do allocate fleecing images for previously-stopped VMs as well,
though, so we also need to do the cleanup.

As for the `detach_fleecing_images()` call: while it could have stayed in
the `vm_running_locally()` branch, it also performs this check itself, and
this way the entire fleecing cleanup stays together in one place.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2025-04-16 12:39:34 +02:00
a411016f58 bump version to 8.3.12
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-04-08 17:30:54 +02:00
d2d8a15dea virtiofs: drop writeback option
VirtIO-fs using writeback cache seems very broken at the moment. If a
guest accesses a file (even just using 'touch') that the host is
currently writing, the guest can permanently end up with a truncated
version of that file. Even subsequent operations like moving the file
will not make the correct file visible, but just rename the
truncated one.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-04-08 17:30:03 +02:00
dfdd5c3689 d/control: bump (build-)dependency for libpve-guest-common-perl
Version 5.2.0 of libpve-guest-common-perl is required for the
PVE/Mapping/Dir.pm module, but a transitive dependency on
libpve-cluster-perl for tracking the corresponding file on the cluster
file system was missing, so the build would still fail with:

> unknown file 'mapping/directory.cfg' at /usr/share/perl5/PVE/Cluster.pm

Version 5.2.2 of libpve-guest-common-perl depends on a recent enough
libpve-cluster-perl to fix this.
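
The corresponding entry in d/control would then carry a versioned
dependency along these lines (surrounding fields omitted):

```
Depends: ..., libpve-guest-common-perl (>= 5.2.2), ...
```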

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-04-08 12:47:00 +02:00
01b761e7f0 bump version to 8.3.11
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-04-07 23:35:54 +02:00
64dad62fd8 disable snapshot (with RAM) and hibernate with virtio-fs devices
Signed-off-by: Markus Frank <m.frank@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Laurențiu Leahu-Vlăducu <l.leahu-vladucu@proxmox.com>
Reviewed-by: Daniel Kral <d.kral@proxmox.com>
Tested-by: Laurențiu Leahu-Vlăducu <l.leahu-vladucu@proxmox.com>
Tested-by: Daniel Kral <d.kral@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
Link: https://lore.proxmox.com/20250407134950.265270-6-m.frank@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-04-07 22:31:04 +02:00
7bfffaee5f migration: check_local_resources for virtiofs
add dir mapping checks to check_local_resources

Since the VM needs to be powered off for migration, migration should
work with a directory on shared storage with all caching settings.

Signed-off-by: Markus Frank <m.frank@proxmox.com>
Link: https://lore.proxmox.com/20250407134950.265270-5-m.frank@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-04-07 22:31:04 +02:00
87b22e3839 fix #1027: virtio-fs support
Add support for sharing directories with a guest VM.

virtio-fs needs virtiofsd to be started. In order to start virtiofsd
as a process (despite being a daemon it does not run in the
background), a double-fork is used.

virtiofsd should close itself together with QEMU.

There is the parameter dirid and the optional parameters direct-io,
cache and writeback. Additionally, the expose-xattr & expose-acl
parameters can be set to expose xattr & acl settings from the shared
filesystem to the guest system.

The dirid gets mapped to the path on the current node and is also used
as a mount tag (name used to mount the device on the guest).

example config:
```
virtiofs0: foo,direct-io=1,cache=always,expose-acl=1
virtiofs1: dirid=bar,cache=never,expose-xattr=1,writeback=1
```
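
Inside the guest, such a share is then mounted via its mount tag, for
example (mount point chosen arbitrarily):

```
mount -t virtiofs foo /mnt/foo
```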

For information on the optional parameters see the corresponding doc
patch and the official GitLab README:
https://gitlab.com/virtio-fs/virtiofsd/-/blob/main/README.md

Also add a permission check for virtiofs directory access.

Add virtiofsd to the Recommends list for the qemu-server Debian
package, this allows users to opt-out of installing this package, e.g.
for certification reasons.

Signed-off-by: Markus Frank <m.frank@proxmox.com>
Link: https://lore.proxmox.com/20250407134950.265270-3-m.frank@proxmox.com
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
 [TL: squash d/control change and re-add Lukas' T-b, as nothing
  essentially changed from the v16 where his tag applied]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-04-07 22:30:11 +02:00
556cc662ab qmeventd: go back to extracting vmid from commandline instead of cgroup file
In spirit, this is a revert of 502870a0 ("qmeventd: extract vmid from
cgroup file instead of cmdline"), but instead of relying on the custom
'id' commandline option that's added by a Proxmox VE patch to QEMU,
rely on the standard 'pidfile' option to extract the VM ID.

As reported in the community forum [0], at least during stop mode
backup, it seems to be possible to end up with the VM process having
> 0::/system.slice/pvescheduler.service
as its single cgroup entry. It's not clear what exactly happens, and
attempts to reproduce the issue were not successful. It might be a rare
bug in systemd or in pve-common's enter_systemd_scope() code.

This was not the first time relying on the cgroup entry caused issues,
see d0b58753 ("qmeventd: improve getting VMID from PID in presence of
legacy cgroup entries").

To avoid such edge cases and issues in the future, go back to
extracting the VM ID from the process's commandline.

It's enough to care about the first occurrence of the 'pidfile'
option, because that's the one added by Proxmox VE, so the 'continue's
in the loop turn into 'break's. Even though a later option would
override the first for QEMU itself to use, that's not supported
anyway, and the important part is the VM ID, which is present in the
first occurrence.
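
For illustration, the relevant part of the QEMU commandline on a
Proxmox VE system looks roughly like the following, with the VM ID
taken from the pidfile name (path assumed):

```
-pidfile /var/run/qemu-server/123.pid
```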

[0]: https://forum.proxmox.com/threads/147409/

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20240614092134.18729-1-f.ebner@proxmox.com
2025-04-07 22:08:13 +02:00
28d8e248c3 vmstatus_return_properties: add missing serial property
Fixes: 8107b37 ("add serial:1 to vmstatus when config has a serial device configured")
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Link: https://lore.proxmox.com/20250407162718.495812-2-a.lauterer@proxmox.com/
2025-04-07 21:55:22 +02:00
cbe9de99c5 machine: improve code style in get_pve_version
This makes it a bit more obvious what happens and adds an actual
error for bogus $PVE_MACHINE_VERSION entries.

Note that there was no auto-vivification before, as we never directly
accessed $PVE_MACHINE_VERSION->{$verstr}->{highest} but used
get_machine_pve_revisions to query a specific QEMU machine version's
PVE revisions and then operated on the return value, and that method
returns undef if there is no entry at all.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-04-07 17:30:16 +02:00
a9a5d37bc6 tests: fix config to command expected outputs
This should have been in the patch doing the change :(

Fixes: 65b2041 ("vm-network-scripts: move scripts to /usr/libexec")
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-04-07 17:30:16 +02:00
65b20410ff vm-network-scripts: move scripts to /usr/libexec
Moves the network scripts from /var/lib/qemu-server into
/usr/libexec/qemu-server.

/usr/libexec is described in [FHS 4.7] as holding binaries run by other
programs and not intended to be executed directly by the user. On the
other hand, /var/lib corresponds to variable state information, which
does not fit the use case here, see [FHS 5.8].

For the sake of preventing race conditions during upgrade we ship both
versions until version 9. This is required as package files are first
unpacked, including the removal of files not shipped by the new
version anymore, and only then configured, which triggers the restart
of the services.

[FHS 4.7]: https://refspecs.linuxfoundation.org/FHS_3.0/fhs/ch04s07.html
[FHS 5.8]: https://refspecs.linuxfoundation.org/FHS_3.0/fhs/ch05s08.html

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Link: https://lore.proxmox.com/20250218133206.318155-1-m.sandoval@proxmox.com
2025-04-07 16:01:21 +02:00
5f8a64ae59 Merge patch series "more robust handling of fleecing images"
Fiona Ebner <f.ebner@proxmox.com> says:

Record the created fleecing images in the VM configuration to be able
to remove left-overs after hard failures.

Adds a new special configuration section 'fleecing', making special
section handling more generic as preparation, as well as fixing some
corner cases in configuration parsing and adding tests.

Fiona Ebner (16):
  migration: remove unused variable
  test: avoid duplicate mock module in restore config test
  test: add parse config tests
  parse config: be precise about section type checks
  test: add test case exposing issue with unknown sections
  parse config: skip unknown sections and warn about their presence
  vzdump: anchor matches for pending and special sections
  vzdump: skip all special sections
  config: make special section handling generic
  test: parse config: test config with duplicate sections
  parse config: warn about duplicate sections
  check type: require schema as an argument
  config: add fleecing section
  fix #5440: vzdump: better cleanup fleecing images after hard errors
  migration: attempt to clean up potential left-over fleecing images
  destroy vm: clean up potential left-over fleecing images

Link: https://lore.proxmox.com/20250127112923.31703-1-f.ebner@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-04-07 14:13:56 +02:00
a5185ba114 destroy vm: clean up potential left-over fleecing images
Avoids any left-over fleecing images becoming orphaned.

Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250127112923.31703-17-f.ebner@proxmox.com
2025-04-07 14:13:01 +02:00
4e659fcac6 migration: attempt to clean up potential left-over fleecing images
Clean up left-over fleecing images before the guest is migrated to a
different node and they'd really become orphaned.

Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250127112923.31703-16-f.ebner@proxmox.com
2025-04-07 14:13:01 +02:00
a39866732f fix #5440: vzdump: better cleanup fleecing images after hard errors
By recording the allocated fleecing images in the VM config, they
are not immediately orphaned, should a hard error occur during
backup that prevents cleanup.

They are attempted to be cleaned up during the next backup run.

In the cleanup helper, check if fleecing images are still attached in
QEMU and detach them. This allows recovering from more failure
scenarios. However, to avoid a deadlock, a left-over backup job needs
to be canceled first. While canceling a left-over backup already
happens when cleanup is done for a subsequent backup, it is required
for other cases like cleanup before migration (to be added in a
following commit).

Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250127112923.31703-15-f.ebner@proxmox.com
2025-04-07 14:13:01 +02:00
a82c4555e3 config: add fleecing section
Will be used for improved cleanup of fleecing images.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250127112923.31703-14-f.ebner@proxmox.com
2025-04-07 14:13:01 +02:00
6eb92d31ba check type: require schema as an argument
In preparation to re-use the helper with a dedicated schema for a
'fleecing' special section.

No functional change intended.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250127112923.31703-13-f.ebner@proxmox.com
2025-04-07 14:13:01 +02:00
68eabc42c1 parse config: warn about duplicate sections
Currently, a duplicate section will quietly override the previous
instance of the section with the same identifier. Keep the current
behavior of preferring later entries, but issue a warning or die when
parsing strictly.

The entry for 'pending' in the result needs to start out as undefined
for the check to also work in the presence of empty sections. Avoid
changing the returned value itself by making sure to initialize the
entry before returning.
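
A hypothetical configuration excerpt that would now trigger the warning
(section names and values are illustrative):

```
memory: 2048

[snap1]
memory: 1024

[snap1]
memory: 512
```

The later [snap1] entry still wins, but the parser now warns about the
duplicate (or dies when parsing strictly).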

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250127112923.31703-12-f.ebner@proxmox.com
2025-04-07 14:13:01 +02:00
90ae915305 test: parse config: test config with duplicate sections
Add a test case to witness how duplicate sections are handled.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250127112923.31703-11-f.ebner@proxmox.com
2025-04-07 14:13:01 +02:00
63567c0c42 config: make special section handling generic
Collect special sections below a common 'special-sections' key in
preparation to introduce a new special section.

The special 'cloudinit' section was added at the top level of the
configuration structure, but it's cleaner to group special sections
more similarly to snapshots.

The 'cloudinit' key was already initialized, so having the new
'special-sections' key always be initialized should not cause issues
after checking and adapting all usages of 'cloudinit', which this patch
attempts to do.

Add compat handling for remote migration which might receive the
configuration hash from a node that does not yet have the changes.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250127112923.31703-10-f.ebner@proxmox.com
2025-04-07 14:13:01 +02:00
d295ea9a28 vzdump: skip all special sections
Also log an informational message just like for pending and snapshot
sections.

Add a comment about it to parse_vm_config() in the hope that the
behavior is noted when introducing a future special section.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250127112923.31703-9-f.ebner@proxmox.com
2025-04-07 14:13:01 +02:00
d4208c7cc6 vzdump: anchor matches for pending and special sections
Otherwise, a snapshot with a name that includes "pending" will be
misinterpreted as the pending section.

Only affects the warning messages, but still confusing.
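
Illustration only; the actual patterns in the vzdump code differ:

```
if ($line =~ /^\[pending/i) {
    # unanchored prefix match: would also hit a snapshot header like "[pending-upgrade]"
}
if ($line =~ /^\[pending\]\s*$/i) {
    # anchored match: only hits the real pending section header
}
```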

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250127112923.31703-8-f.ebner@proxmox.com
2025-04-07 14:13:01 +02:00
248349f25e parse config: skip unknown sections and warn about their presence
Currently, keys in an unknown section will be interpreted as still
belonging to the last section and might erroneously overwrite values
in that way. Explicitly ignore unknown sections to avoid this while
warning the user.
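
For illustration, a config containing a section header the parser does
not know about (the section name is made up):

```
memory: 2048

[special:newfeature]
memory: 4096
```

Before this change, the second 'memory' line would be attributed to the
preceding (main) section and overwrite the earlier value; now the whole
unknown section is skipped with a warning.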

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250127112923.31703-7-f.ebner@proxmox.com
2025-04-07 14:13:01 +02:00
c80f968cc1 test: add test case exposing issue with unknown sections
While unknown sections do lead to an error in strict mode, in
non-strict mode the line is just skipped, meaning that key-value
pairs from the unknown section will override the key-value pairs from
the previous section.

Fixed by the next commit.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250127112923.31703-6-f.ebner@proxmox.com
2025-04-07 14:13:01 +02:00
df6d255b0f parse config: be precise about section type checks
There are checks for custom parsing behavior inside certain sections
relying only on the section name. While the name 'pending' cannot be
used by snapshots, the name 'cloudinit' can. Introduce an associated
section type to make the checks precise.

The test was not added in a separate commit because it would fail
when writing the config before the fix, and failure in writing is
never expected by the test script. So there is no easy way to
highlight just the difference in behavior separately from the fix
while keeping the git history bisectable.

Compare with the verify-snapshot.conf testcase without the actual fix
applied to see the difference.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250127112923.31703-5-f.ebner@proxmox.com
2025-04-07 14:13:01 +02:00
60b9ff0fd9 test: add parse config tests
Tests for parsing and writing VM configuration files. The parsing part
is already covered by the config2command test too, but that only
focuses on the main section, not other section types, and does not
test parsing in strict mode.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250127112923.31703-4-f.ebner@proxmox.com
2025-04-07 14:13:01 +02:00
27f38b00f3 test: avoid duplicate mock module in restore config test
The duplication is there because two independent fixes for a test
failure were applied, namely commits:
75c430ce ("test: unbreak restore_config_test")
cc1cdadb ("test: fix restore config test as unprivileged user")

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250127112923.31703-3-f.ebner@proxmox.com
2025-04-07 14:13:01 +02:00
3948f8d265 migration: remove unused variable
The cloudinit config variable has been unused since commit 898e9296
("migrate: drop outdated PVE 7.2 check guarding cloudinit config
section").

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250127112923.31703-2-f.ebner@proxmox.com
2025-04-07 14:13:01 +02:00
09dcd202d5 migration: remove wrong attempt of disabling 'rdma-pin-all' capability
This was added by a89fded1 ("migration : enable auto-converge
capability v2"), but migration capabilities are already disabled by
default and there is nothing special about 'rdma-pin-all' compared to
other ones that are not used by Proxmox VE. Moreover, the code
currently doesn't even do anything, because the capability would need
to be set as 'rdma-pin-all' without the experimental 'x-' marker
prefix (at least since QEMU 8.0, maybe longer).
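
For illustration, disabling the capability via QMP would need the
unprefixed name, e.g.:

```
{ "execute": "migrate-set-capabilities",
  "arguments": { "capabilities": [
    { "capability": "rdma-pin-all", "state": false } ] } }
```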

The function to set migration capabilities already queries which ones
are supported by the currently running QEMU and ignores others, so
there was no error about the invalid name.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250407073256.8889-5-f.ebner@proxmox.com
2025-04-07 09:45:56 +02:00
960cf315da migration: drop deprecated 'zero-blocks' capability
The 'zero-blocks' capability was deprecated in QEMU commit 73581a041e
("migration: Deprecate zero-blocks capability")

> The zero-blocks capability was meant to be used along with the block
> migration, which has been removed already in commit eef0bae3a7
> ("migration: Remove block migration").

> Setting zero-blocks is currently a noop, but the outright removal of
> the capability would cause an error in case some users are still
> setting it. Put the capability through the deprecation process.

The capability was already disabled by default (checked against the
QEMU 8.0 source).

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250407073256.8889-4-f.ebner@proxmox.com
2025-04-07 09:45:56 +02:00
9844ff1e8d migration: drop removed 'compress' capability
This was added by commit b62532e4 ("migration: disable compress")
stating:

> it's already disable by default,
> but we want to be sure if it's change in later release

QEMU never did change the default (verified with QEMU 8.0, and that
would have been a breaking change from QEMU's side).

The 'compress' capability was removed in QEMU 9.1, with QEMU commit
0222111a22 ("migration: Remove non-multifd compression").

The function to set migration capabilities already queries which ones
are supported by the currently running QEMU and ignores others, so
there was no error about 'compress'.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250407073256.8889-3-f.ebner@proxmox.com
2025-04-07 09:45:56 +02:00
16d840e8d1 cfg2cmd: replace deprecated 'reconnect' option with 'reconnect-ms'
The 'reconnect' option was replaced by 'reconnect-ms' in QEMU commit
c8e2b6b4d7 ("chardev: introduce 'reconnect-ms' and deprecate
'reconnect'").

Makes qemu-server build-depend on QEMU 9.2 for the tests.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250407073256.8889-2-f.ebner@proxmox.com
2025-04-07 09:45:56 +02:00
e654c584d8 bump version to 8.3.10
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-04-06 21:39:23 +02:00
c6a291a28c d/control: update versioned dependency for libpve-storage-perl
To ensure new external backup provider infrastructure is available.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-04-06 21:22:47 +02:00
d106717895 backup: bitmap action to human: lie about TPM state
The TPM state drive is newly attached each time, so it is fully
expected that a bitmap from last time would be missing.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Tested-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Reviewed-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Link: https://lore.proxmox.com/20250404133204.239783-21-f.ebner@proxmox.com
2025-04-06 20:18:52 +02:00
e74284d8c6 backup: support 'missing-recreated' bitmap action
A new 'missing-recreated' action was added on the QEMU side.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Tested-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Reviewed-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Link: https://lore.proxmox.com/20250404133204.239783-20-f.ebner@proxmox.com
2025-04-06 20:18:52 +02:00
3ce3c029e0 backup: future-proof checks for QEMU feature support
The features returned by the 'query-proxmox-support' QMP command are
booleans, so just checking for definedness is not enough in principle.
In practice, a feature is currently always true if defined. Still, fix
the checks in case the need to disable support for a feature ever
arises in the future, and to avoid propagating the pattern further.
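
A minimal sketch; the helper name and feature key are assumptions:

```
my $support = mon_cmd($vmid, 'query-proxmox-support') // {};
if ($support->{'backup-fleecing'}) {
    # feature is present *and* enabled; checking only defined() would
    # miss a future "defined but false" case
}
```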

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Tested-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Reviewed-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Link: https://lore.proxmox.com/20250404133204.239783-19-f.ebner@proxmox.com
2025-04-06 20:18:52 +02:00
cc4a8b81ce backup: implement restore for external providers
First, the provider is asked about what restore mechanism to use.
Currently, only 'qemu-img' is possible. Then the configuration files
are restored, the provider gives information about volumes contained
in the backup and finally the volumes are restored via
'qemu-img convert'.

The code for the restore_external_archive() function was copied and
adapted from the restore_proxmox_backup_archive() function. Together
with restore_vma_archive() it seems sensible to extract the common
parts and use a dedicated module for restore code.

The parse_restore_archive() helper was renamed because it does more
than just parsing.

While the format for the source can currently only be raw, do an
untrusted check on the source for future-proofing. It still serves as
a basic sanity check for now.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
[WB: fix 'bwlimit' typo]
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Tested-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Reviewed-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Link: https://lore.proxmox.com/20250404133204.239783-18-f.ebner@proxmox.com
2025-04-06 20:18:52 +02:00
ebaf90d61c image convert: allow caller to specify the format of the source path
In preparation for the restore API for backup providers, which doesn't
want detection based on the file extension but always requires raw.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Tested-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Reviewed-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Link: https://lore.proxmox.com/20250404133204.239783-17-f.ebner@proxmox.com
2025-04-06 20:18:52 +02:00