It causes problems when done as part of the clean target: building
the dsc then fails with the following error, due to the additional files:
dpkg-source: error: aborting due to unexpected upstream changes
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Copied from Debian's QEMU package's d/rules. Otherwise, ninja will end
up using only a single job (in Debian Bookworm/Proxmox VE 8).
Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
[ T: remove all tarballs for a package and any .deb ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
When turning off the "KVM hardware virtualization" checkbox in Proxmox
VE, the TCG accelerator is used, so these fixes are relevant then.
The first patch is included to allow cherry-picking the others without
changes.
Reported-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Required for the debian/edk2-vars-generator.py script in the
pve-edk2-firmware repository when building the edk2-stable202302
release. Without this patch, the QEMU process spawned by the script
would hang indefinitely.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The patch 0008-memory-prevent-dma-reentracy-issues.patch introduced a
regression for the LSI SCSI controller leading to boot failures [0],
because the LSI SCSI controller relies on reentrancy for a particular
ram_io region, which the patch, in its current form, forbids.
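As a rough, self-contained model of the guard's mechanism (hypothetical
names and simplified logic; the actual patch modifies QEMU's memory
dispatch code): a per-device flag rejects nested MMIO dispatch, which
breaks a device that legitimately re-enters its own region:

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct Device {
        const char *name;
        bool engaged_in_io;   /* the reentrancy guard */
    } Device;

    static int mmio_write(Device *dev, int depth);

    /* Model of a DMA operation that re-enters the device's own MMIO
     * region once, like the LSI controller's ram_io accesses do. */
    static int dma_reentering_own_region(Device *dev, int depth)
    {
        return depth == 0 ? mmio_write(dev, depth + 1) : 0;
    }

    static int mmio_write(Device *dev, int depth)
    {
        if (dev->engaged_in_io) {
            fprintf(stderr, "%s: reentrant IO blocked\n", dev->name);
            return -1;   /* the real patch fails the memory access */
        }
        dev->engaged_in_io = true;
        int ret = dma_reentering_own_region(dev, depth);
        dev->engaged_in_io = false;
        return ret;
    }

    int main(void)
    {
        Device lsi = { "lsi53c895a", false };
        /* The nested access is rejected, so the device model cannot
         * finish its operation -- hence the boot failures. */
        return mmio_write(&lsi, 0) == -1 ? 0 : 1;
    }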
[0]: https://forum.proxmox.com/threads/123843
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The patches were selected from the recent "Patch Round-up for stable
7.2.1" [0]. Those that should be relevant for our supported use-cases
(and the upcoming nvme use-case) were picked. Most of the patches
added now have not been submitted to qemu-stable before.
The follow-up for the virtio-rng-pci migration fix will break
migration between versions with the fix and without the fix when a
virtio-rng-pci(-non)-transitional device is used. Luckily, Proxmox VE
only uses the virtio-rng-pci device, and this was fixed by
0006-virtio-rng-pci-fix-migration-compat-for-vectors.patch which was
applied before any public version of Proxmox VE's QEMU 7.2 package was
released.
[0]: https://lists.nongnu.org/archive/html/qemu-stable/2023-03/msg00010.html
[1]: https://bugzilla.redhat.com/show_bug.cgi?id=2162569
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The patch was incomplete and (re-)introduced an issue with a
potentially failing assertion upon cancellation of the DMA request.
There is a patch on qemu-devel now [0], and it's the same as this one
code-wise (except for comments). But the discussion is still ongoing.
While there shouldn't be a real issue with the patch, there might be
better approaches. The plan is to use this as a stop-gap for now and
pick up the proper solution once it's ready.
[0]: https://lists.nongnu.org/archive/html/qemu-devel/2023-03/msg03325.html
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
In particular, the deadlock can occur, given unlucky timing between
the QEMU threads, when the guest issues trim requests during the
start of a backup operation.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
[ T: resolve trivial merge conflict in series file ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
In qemu-server, we already allocate 2 * $mem_size + 500 MiB for driver
state (which was 32 MiB long ago according to git history). It seems
likely that the 30 MiB cutoff in the savevm-async implementation was
chosen based on that.
In bug #4476 [0], another issue caused the iteration to not make any
progress and the state file filled up all the way to the 30 MiB +
pending_size cutoff. Since the guest is not stopped immediately after
the check, it can still dirty some RAM, and the current cutoff is not
enough for a reproducer VM (tested while bug #4476 was not yet fixed)
dirtying memory with
> stress-ng -B 2 --bigheap-growth 64.0M
After entering the final stage, savevm actually filled up the state
file completely, leading to an I/O error. It's probably the same
scenario as reported in the bug report; the error message was fixed in
commit a020815 ("savevm-async: fix function name in error message")
after the bug report.
If not for the bug, the cutoff would only be reached by a VM that's
dirtying RAM faster than it can be written to the storage, so increase
the cutoff to 100 MiB to have a bigger chance of finishing
successfully, while still trying not to increase downtime too much for
non-hibernation snapshots.
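To make the check concrete, here is a self-contained sketch of the
cutoff decision as described (hypothetical names; the real check lives
in the savevm-async patch):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define MiB (1024 * 1024)

    /* Keep iterating while enough pending state remains to be worth it
     * and the state file still has headroom; otherwise, enter the
     * final stage and stop the guest. */
    static bool keep_iterating(uint64_t pending_size, int64_t bs_pos,
                               int64_t state_file_len)
    {
        int64_t maxlen = state_file_len - 100 * MiB;   /* was 30 MiB */
        return pending_size > 400000 && bs_pos + pending_size < maxlen;
    }

    int main(void)
    {
        /* With only 30 MiB of headroom, RAM dirtied between this check
         * and actually stopping the guest could overflow the state
         * file; 100 MiB leaves more slack. */
        printf("%d\n", keep_iterating(8 * MiB, 900 * MiB, 1024 * MiB));
        return 0;
    }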
[0]: https://bugzilla.proxmox.com/show_bug.cgi?id=4476
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The state file could fill up with essentially empty iterations when
pend_postcopy is large. By definition, pend_postcopy won't decrease
while iterating, so a value larger than the cutoff of 400000 would
keep triggering iterations, filling up the state file until only
30 MiB + pending_size remained and the second half of the check
triggered.
Avoid this by not considering pend_postcopy for the cutoff to enter
the final phase.
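Sketched in C (hypothetical variable names, mirroring the description
above):

    #include <stdbool.h>
    #include <stdint.h>

    /* pend_postcopy cannot shrink while iterating, so exclude it from
     * the "still worth iterating" half of the check; it still counts
     * toward the state-file headroom half. */
    static bool keep_iterating(uint64_t pend_precopy,
                               uint64_t pend_compatible,
                               uint64_t pend_postcopy,
                               int64_t bs_pos, int64_t maxlen)
    {
        uint64_t pending_size =
            pend_precopy + pend_compatible + pend_postcopy;
        return (pending_size - pend_postcopy) > 400000 &&
               bs_pos + (int64_t)pending_size < maxlen;
    }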
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
It ships files also shipped by our qemu package; switching from
Debian's qemu to ours doesn't work without manual intervention
otherwise.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Migration is broken between QEMU versions below 7.2 and QEMU 7.2
without the fix (both directions are affected).
As mentioned in the patch message, this fix itself will break
migration between QEMU 7.2 and QEMU 7.2 with the fix (in both
directions, if a virtio-rng device is attached), but this is fine,
because no pve-qemu-kvm package with QEMU 7.2 has been publicly
released yet.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Two are for virtio-mem and one for vIOMMU. Neither feature is exposed
in PVE's qemu-server yet, but both are planned to be added.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Avoids a patch and is required to compile when not all patches are
applied. No functional change is intended.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Commit d03e1b3 ("update submodule and patches to 7.2.0") argued that
slirp is not explicitly supported in PVE, but that is not true. In
qemu-server, user networking is supported (via CLI/API) when no bridge
is set on a virtual NIC. So slirp needs to stay to keep such NICs
working.
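For illustration, such a bridge-less NIC maps to QEMU's user-mode
networking backend roughly like this (illustrative invocation, not the
exact qemu-server output):
> qemu-system-x86_64 ... -netdev user,id=net0 -device virtio-net-pci,netdev=net0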
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Otherwise, it depends on whether libslirp-devel is installed or not.
See the previous commit message for more context.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
User-facing breaking change:
The slirp submodule for user networking got removed. Continuing to
build it would require adding the --enable-slirp option to the build
and/or installing the appropriate library. Since PVE does not
explicitly support it, that would mean additionally installing the
libslirp0 package on all installations, and there is *very* little
mention on the community forum when searching for "slirp" or
"netdev user". The plan is to only enable it again if there is real
demand for it.
Notable changes:
* The big change for this release is the rework of job locking, using
a job mutex and introducing _locked() variants of job API functions,
moving away from call-side AioContext locking. See (in the qemu
submodule) commit 6f592e5aca ("job.c: enable job lock/unlock and
remove Aiocontext locks") and previous commits for context.
Changes required for the backup patches (see the sketch after this
list):
* Use WITH_JOB_LOCK_GUARD() and call the _locked() variant of job
API functions where appropriate (many are only available as
a _locked() variant).
* Remove acquiring/releasing AioContext around functions taking the
job mutex lock internally.
The patch introducing sequential transaction support for jobs needs
to temporarily unlock the job mutex to call job_start() when
starting the next job in the transaction.
* The zeroinit block driver now marks its child as primary.
The documentation in include/block/block-common.h states:
> Filter node has exactly one FILTERED|PRIMARY child, and may have
> other children which must not have these bits
Without this, an assert will trigger when copying to a zeroinit target
with qemu-img convert, because bdrv_child_cb_attach() expects any
non-PRIMARY child to be not FILTERED:
> qemu-img convert -n -p -f raw -O raw input.raw zeroinit:output.raw
> qemu-img: ../block.c:1476: bdrv_child_cb_attach: Assertion
> `!(child->role & BDRV_CHILD_FILTERED)' failed.
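As a sketch of the locking pattern (written against QEMU 7.2's job API
in include/qemu/job.h; illustrative only, not the actual backup-patch
code):

    /* Before the rework (call-side AioContext locking):
     *     aio_context_acquire(job->aio_context);
     *     job_cancel(job, false);
     *     aio_context_release(job->aio_context);
     */
    static void cancel_job_after_rework(Job *job)
    {
        WITH_JOB_LOCK_GUARD() {
            job_ref_locked(job);
            job_cancel_locked(job, false);
            job_unref_locked(job);
        }
    }

    /* For sequential transactions: job_start() takes the job mutex
     * internally, so a caller holding the mutex has to drop it
     * temporarily. */
    static void start_next_job(Job *next)
    {
        job_unlock();
        job_start(next);
        job_lock();
    }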
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
I was close to disarming that GLOBAL_STATE_CODE assert completely
though, as it's just bogus to assert that at runtime for a lot of call
sites; rather, it should be verified at compile time (function
coloring with attributes and maybe a compiler plugin).
But as this is already solved upstream, let's take in that patch.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Fixes file restore, where we actively unlink the PID file of the
transient VM ourselves after opening it. While we use it only for
tracking when the QEMU process itself has finished starting up, it's
easier and cleaner to fix this regression now than to rework that to
something that doesn't depend on the PID file at all.
Applying Fiona's patch as a patch-of-a-patch tracked under extra, as I
expect that something similar to this gets accepted upstream.
Link: https://lists.proxmox.com/pipermail/pve-devel/2022-October/054448.html
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The documentation in include/io/channel.h states that -1 or
QIO_CHANNEL_ERR_BLOCK should be returned upon error. Simply passing
along the return value from the blk functions has the potential to
confuse callers. Non-blocking mode is currently not implemented, so
-1 it is.
The "return ret" was mistakenly left over from the previous
QEMUFileOps-based implementation. Also, use error_setg_errno(), since
the blk(_co)_p{readv,writev} functions return errno codes.
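A self-contained model of the fix (hypothetical names; the real
handlers are QEMU-internal):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>

    /* Stand-in for a blk_pwritev()-style call failing with -ENOSPC;
     * the blk functions return negative errno codes. */
    static ssize_t fake_blk_pwritev(void) { return -ENOSPC; }

    static ssize_t channel_writev(char *errbuf, size_t errlen)
    {
        ssize_t ret = fake_blk_pwritev();
        if (ret < 0) {
            /* like error_setg_errno(errp, -ret, ...) */
            snprintf(errbuf, errlen, "block write failed: %s",
                     strerror((int)-ret));
            return -1;   /* previously "return ret", confusing callers */
        }
        return ret;
    }

    int main(void)
    {
        char err[128];
        if (channel_writev(err, sizeof(err)) == -1) {
            fprintf(stderr, "%s\n", err);
        }
        return 0;
    }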
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>