We take care of killing QEMU processes when a guest shuts down manually.
QEMU will not exit by itself if started with -no-shutdown, but it will
still emit a "SHUTDOWN" event, which we await and then send SIGTERM.
This additionally allows us to handle backups in such situations. A
vzdump instance will connect to our socket and identify itself as such
in the handshake, sending along a VMID which will be marked as backing
up until the file handle is closed.
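A minimal Perl sketch of the vzdump side of this handshake; the socket
path and message layout below are assumptions for illustration, not the
exact wire format:

    use strict;
    use warnings;
    use IO::Socket::UNIX;
    use Socket qw(SOCK_STREAM);
    use JSON qw(encode_json);

    sub open_backup_handle {
        my ($vmid) = @_;
        my $sock = IO::Socket::UNIX->new(
            Type => SOCK_STREAM,
            Peer => '/var/run/qmeventd.sock', # assumed socket path
        ) or die "unable to connect to qmeventd: $!\n";
        # identify as a vzdump instance; qmeventd marks $vmid as
        # backing up until this handle is closed
        print {$sock} encode_json({ type => 'vzdump', vmid => $vmid }), "\n";
        return $sock;
    }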
When a SHUTDOWN event is received while the VM is backing up, we do not
kill the VM. When the vzdump handle is closed, we check whether the
guest has started up since, and only if it is determined to still be
turned off do we finally kill QEMU.
We cannot wait directly for QEMU to finish the backup (i.e. with
query-backup), as that would kill the VM too fast for vzdump to send the
last 'query-backup' marking the backup as successful and done.
For handling 'query-status' messages sent to QEMU, a state-machine-esque
protocol is implemented into the Client struct (ClientState). This is
necessary, since QMP works asynchronously, and results arrive on the
same channel as events and even the handshake.
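The core difficulty can be illustrated by how a single QMP line has to
be demultiplexed (a simplified Perl sketch; qmeventd implements this in
C inside ClientState):

    use JSON qw(decode_json);

    sub handle_qmp_line {
        my ($line) = @_;
        my $msg = decode_json($line);
        if (exists $msg->{QMP}) {
            # greeting: reply with 'qmp_capabilities' to finish the handshake
        } elsif (exists $msg->{event}) {
            # asynchronous event, e.g. SHUTDOWN
        } elsif (exists $msg->{return} || exists $msg->{error}) {
            # result (or error) for the last command, e.g. 'query-status'
        }
    }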
For referencing QEMU Clients from vzdump messages, they are kept in a
hash table. This requires linking against glib.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
by adding the missing argument (otherwise all the other ones are shifted
one slot to the left, which is of course bogus).
this has been broken since 2018 (d559309), but only became visible and
caused a failure with the recent changes adding
use strict;
use warnings;
to PVE::QemuServer::PCI
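In Perl, a missing leading argument silently shifts every following
parameter; a generic illustration of the failure mode (not the actual
call site):

    sub print_pci_addr {
        my ($id, $bridges, $arch, $machine) = @_;
        # ...
    }

    # buggy call with the first argument missing: $bridges lands in
    # $id, $arch in $bridges, and so on
    print_pci_addr($bridges, $arch, $machine);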
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
if the 'backup' QMP call itself times out or fails, we still want to
try to cancel the backup, else it can happen that a backup is still
running even though vzdump thinks it was canceled.
The QAPI docs say that backup-cancel always returns success, even
if no backup is running.
Since we hold a global and a per-VM lock for the backup, this should be
fine, as we should not reach this code without those locks.
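A sketch of the resulting pattern, using qemu-server's mon_cmd helper
(simplified from the actual vzdump code path):

    eval {
        mon_cmd($vmid, 'backup', %$backup_opts); # may fail or time out
    };
    if (my $err = $@) {
        # always attempt a cancel; per the QAPI docs this returns
        # success even without a running backup
        eval { mon_cmd($vmid, 'backup-cancel') };
        die $err;
    }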
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
We query QEMU whether it's safe before enabling it, as on versions
without the necessary patches it would not only be useless, but could
actually lead to hangs.
PBS state is always migrated, as it's a small amount of data anyway, so
we don't need to set a specific flag for it.
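A rough sketch of such a guard; the capability-flag name here is an
assumption for illustration:

    use JSON;

    my $support = eval { mon_cmd($vmid, 'query-proxmox-support') } // {};
    if ($support->{'pbs-dirty-bitmap-migration'}) { # assumed flag name
        mon_cmd($vmid, 'migrate-set-capabilities',
            capabilities => [
                { capability => 'dirty-bitmaps', state => JSON::true },
            ]);
    }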
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Specifying 'boot: order=' was intended to be used for an empty bootorder
(i.e. no boot devices), but as it turns out our format parser doesn't
like empty '-list' properties if they are nested in a subformat.
Fixing this in JSONSchema sounds like a risky move, so instead just
write 'boot: ' (without 'order=') to indicate an empty bootorder. The
rest of the code handles it just fine, as this was valid before too.
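In other words, the config writer does something like this
(illustrative):

    my $order = join(';', @boot_devices);
    # an empty device list must be written as 'boot: ' (no 'order='),
    # since the parser rejects an empty nested '-list' property
    $conf->{boot} = $order ne '' ? "order=$order" : '';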
Incidentally also fixes a bug where you couldn't create a new VM without
any disks if no explicit 'boot' property was specified (i.e. a simple
'qm create 100' without any parameters would fail).
Reported-by: Dominic Jäger <d.jaeger@proxmox.com>
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
fixes commit 74c17b7a23, which moved
this code here but forgot to pass the $vga ref; as the module was using
neither strict nor warnings mode, this was not caught.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
After migration or a rollback the cloudinit disk might not be allocated, so
volume_size_info() fails. As we override the value anyway for cloudinit
and efi disks, simply move the volume_size_info() call into the 'else'
branch.
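Schematically, the change looks like this (simplified):

    my ($size, $format);
    if (drive_is_cloudinit($drive)) {
        # value is overridden later anyway; the volume may not be
        # allocated after a migration or rollback, so calling
        # volume_size_info() here would die
    } else {
        ($size, $format) = PVE::Storage::volume_size_info($storecfg, $volid);
    }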
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
All volumes contained in $vollist are activated, in this case a snapshot
of each volume. For cloudinit disks no snapshots are created, so don't
add them to the list of volumes to activate, as activation would
otherwise fail with 'no logical volume found'.
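Sketch of the fix (simplified; helper names from qemu-server):

    PVE::QemuConfig->foreach_volume($conf, sub {
        my ($ds, $drive) = @_;
        # cloudinit disks have no snapshot, so activating one would
        # fail with 'no logical volume found'
        return if PVE::QemuServer::Drive::drive_is_cloudinit($drive);
        push @$vollist, $drive->{file};
    });
    PVE::Storage::activate_volumes($storecfg, $vollist, $snapname);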
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
The API is updated to handle the deprecation correctly, i.e. when
updating the 'order' attribute, the old 'legacy' (default_key) values
are removed (would now be ignored anyway).
When removing a device that is in the bootorder list, it will be removed
from that list as well. Note that non-existing devices in the list will
not cause an error - they will simply be ignored - but it's still nice
to not have them in there.
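A condensed sketch of both behaviours ('legacy' being the default_key
of the new boot format; simplified from the API code):

    my $boot = PVE::JSONSchema::parse_property_string('pve-qm-boot', $param->{boot});
    # setting 'order' drops the now-ignored legacy (default_key) value
    delete $boot->{legacy} if defined($boot->{order});

    # device removal also drops the device from the boot order;
    # unknown entries are simply filtered out without an error
    my @order = grep { $_ ne $dev } split(';', $boot->{order} // '');
    $boot->{order} = join(';', @order);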
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
(also fixes #3011)
Deprecates the old-style 'boot' and 'bootdisk' options by adding a new
'order=' subproperty to 'boot'.
This allows a user to specify more than one disk in the boot order,
helping with newer versions of SeaBIOS/OVMF where disks without a
bootindex won't be initialized at all (breaks soft-raid and some LVM
setups).
This also allows specifying a bootindex for USB and hostpci devices,
which was not possible before. Floppy boot is not supported in the new
model, but I doubt that will be a problem (AFAICT we can't even attach
floppy disks to a VM?).
Default behaviour is intended to stay the same, i.e. while new VMs will
receive the new 'order' property, it will be set so the VM starts the
same as before (using get_default_bootorder).
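For example, the following illustrative config line makes the VM try a
disk first, then network boot, then a passed-through device:

    boot: order=scsi0;net0;hostpci0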
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
The format is unused in this commit, but will replace the current
string-based format of the 'boot' property. It is included since the
parameter of bootorder_from_legacy follows it.
Two helper methods are introduced:
* bootorder_from_legacy: Parses the legacy format into a hash closer to
what the new format represents
* get_default_bootdevices: Encapsulates the legacy default behaviour if
nothing is specified in the boot order
resolve_first_disk is simplified and gets a new $cdrom parameter to
control the behaviour of excluding CD-ROMs or instead searching for only
them.
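Roughly, the legacy parsing works like this (simplified; the real
helper also collects all CD-ROM and network devices in config order):

    sub bootorder_from_legacy {
        my ($conf, $boot) = @_;
        # legacy value is a string like 'cdn': c=disk, d=cdrom, n=network
        my $devs = $boot->{legacy} || 'cdn';
        my $bootorder = {};
        my $prio = 1;
        for my $c (split(//, $devs)) {
            if ($c eq 'c') {
                my $disk = $conf->{bootdisk};
                $bootorder->{$disk} = $prio++ if $disk && $conf->{$disk};
            } elsif ($c eq 'd') {
                # all cdrom-type drives, in config order (omitted here)
            } elsif ($c eq 'n') {
                # all configured net devices (omitted here)
            }
        }
        return $bootorder; # maps device name => boot priority
    }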
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
We use verification for something more in-depth on the PBS server, so
avoid that term to avoid misunderstandings.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
We already keep hugepages if they are created with the kernel
command line (hugepagesz=x hugepages=y), but some setups (specifically
hugepages across multiple NUMA nodes) cannot be configured that way.
Since we always clear these hugepages at VM shutdown, rebooting a VM
that uses them might not work, because the requested count might no
longer be available by the time we want to use them (also, we would
then no longer allocate them correctly on the NUMA nodes).
Add a 'keephugepages' parameter to skip cleanup and simply leave them
untouched.
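Illustrative usage:

    qm set <vmid> --keephugepages 1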
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Until we maybe have a 'pbs-backup' that links QEMU and PBS like
'pbs-restore' does, we need to do a regular backup for the template case
to support all storage types and image formats.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Otherwise a warning is printed if the bios is not set in the config.
reported via community forum:
https://forum.proxmox.com/threads/warning-in-qemuserver.74683/
reproduced and tested that the patch fixes the issue.
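The usual shape of such a fix is defaulting the value before use
(illustrative, not the exact patch):

    my $bios = $conf->{bios} // 'seabios'; # avoid testing an undef value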
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
This still works even if all drives were clean. It then shows the very
magical line:
INFO: backup was done incrementally, reused 34.00 GiB (100%)
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
QEMU handles it just as well as with VMA, so this was probably just
forgotten when implementing PBS support.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>