From 62e5c96b58746edca6681641e070d4696c1d5c33 Mon Sep 17 00:00:00 2001
From: Fiona Ebner
Date: Fri, 28 Apr 2023 14:32:07 +0200
Subject: [PATCH] block resize: avoid passing zero size to QMP command

Commit 7246e8f9 ("Set zero $size and continue if volume_resize()
returns false") mentions that this is needed for "some storages with
backing block devices to do online resize" and since this patch came
together [0] with pve-storage commit a4aee43 ("Fix RBD resize with
krbd option enabled."), it's safe to assume that RBD with krbd is
meant. But it should be the same situation for any external plugin
relying on the same behavior.

Other storages backed by block devices, like LVM(-thin) and ZFS,
return 1 and the new size respectively, and that code predates the
above-mentioned commits. So really, the RBD plugin should simply have
returned a positive value to be in line with those, and there should
be no need to pass 0 to the block_resize QMP command either.

Passing 0 is really a hack, because the block_resize QMP command does
no special handling for the value 0. It's just that in the case of a
block device, QEMU won't try to resize it (and thus won't fail for
shrinkage). But the size in the raw driver's BlockDriverState is
temporarily set to 0 (which is not nice), until the sector count is
refreshed, at which point raw_co_getlength is called; it queries the
new size and sets it in the raw driver's BlockDriverState again as a
side effect. But bdrv_getlength is a coroutine wrapper starting from
QEMU 8.0.0, and it's just better to avoid setting a completely wrong
value even temporarily. Just pass the actually requested size, as is
done for LVM(-thin) and ZFS.

Since this patch was originally written, Friedrich managed to find
that this can actually cause real issues:

1. Start a VM with an RBD image without krbd.
2. Change the storage config to use krbd.
3. Resize the disk.

Likely, this is because the disk is resized via the storage layer
while the QMP command resizing it to "0" runs simultaneously. The
exact reason was not yet determined, but the issue is gone in Proxmox
VE 8 and re-appears after reverting this patch.

Long-term, it makes sense to not rely on the storage flag, but to
look at how the disk is actually attached in QEMU to decide how to do
the resize (a rough sketch of that idea follows the diff below).

[0]: https://lists.proxmox.com/pipermail/pve-devel/2017-January/025060.html

Signed-off-by: Fiona Ebner
(cherry picked from commit 2e4357c537287edd47d6031fec8bffc7b0ce2425)
[FE: mention actual issue in commit message]
Signed-off-by: Fiona Ebner
---
 PVE/QemuServer.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 891caa83..0b9d7763 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -4770,7 +4770,7 @@ sub qemu_block_resize {
 
     my $running = check_running($vmid);
 
-    $size = 0 if !PVE::Storage::volume_resize($storecfg, $volid, $size, $running);
+    PVE::Storage::volume_resize($storecfg, $volid, $size, $running);
 
     return if !$running;
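
For illustration, a minimal sketch of the flow after this change: the
requested byte size (never 0) is what ends up being passed to QEMU's
block_resize QMP command. This is not the verbatim code following the
hunk above; the sub name and the exact mon_cmd arguments are
illustrative.

    # Sketch only: resize the volume on the storage layer, then tell
    # QEMU the actually requested size.
    sub qemu_block_resize_sketch {
        my ($vmid, $deviceid, $storecfg, $volid, $size) = @_;

        my $running = check_running($vmid);

        # Grow the underlying volume. Its return value is no longer
        # used to overwrite $size with 0.
        PVE::Storage::volume_resize($storecfg, $volid, $size, $running);

        return if !$running;

        # Hand the requested size to QEMU. Passing 0 here would leave
        # the raw driver's BlockDriverState with length 0 until
        # raw_co_getlength refreshes it.
        mon_cmd($vmid, "block_resize", device => $deviceid, size => int($size));
    }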
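
The long-term idea mentioned in the commit message could look roughly
like the following. This is a hypothetical sketch, not existing code:
the helper name disk_attached_via_krbd, the device-id matching and
the /dev/rbd path check are assumptions; only query-block is a real
QMP command.

    # Hypothetical: decide how to resize by asking QEMU how the disk
    # is actually attached, instead of trusting the storage krbd flag.
    sub disk_attached_via_krbd {
        my ($vmid, $deviceid) = @_;

        # query-block reports, per drive, the file backing the
        # inserted medium.
        my $blockinfo = mon_cmd($vmid, 'query-block');
        for my $block (@$blockinfo) {
            next if ($block->{device} // '') ne $deviceid;
            my $file = $block->{inserted}->{file} // '';
            # With krbd the disk is a mapped host block device
            # (/dev/rbd*); with librbd it shows up as an rbd: URL.
            return $file =~ m!^/dev/rbd! ? 1 : 0;
        }
        return; # device not found
    }

A caller could then pick the storage-layer resize path for mapped
host block devices and let QEMU grow the image otherwise, regardless
of what the storage configuration currently claims.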