This patch allows hotplugging memory dimm modules through a new option: dimm_memory
The dimm modules are generated from a map
dimmid size(MB) dimm_memory(MB) size/dimm_memory(%) numa-node
dimm0 512 512 100.00 0
dimm1 512 1024 50.00 1
dimm2 512 1536 33.33 2
dimm3 512 2048 25.00 3
dimm4 512 2560 20.00 0
dimm5 512 3072 16.67 1
dimm6 512 3584 14.29 2
dimm7 512 4096 12.50 3
dimm8 512 4608 11.11 0
dimm9 512 5120 10.00 1
dimm10 512 5632 9.09 2
dimm11 512 6144 8.33 3
dimm12 512 6656 7.69 0
dimm13 512 7168 7.14 1
dimm14 512 7680 6.67 2
dimm15 512 8192 6.25 3
dimm16 512 8704 5.88 0
dimm17 512 9216 5.56 1
dimm18 512 9728 5.26 2
dimm19 512 10240 5.00 3
dimm20 512 10752 4.76 0
...
dimm241 65536 3260416 2.01 1
dimm242 65536 3325952 1.97 2
dimm243 65536 3391488 1.93 3
dimm244 65536 3457024 1.90 0
dimm245 65536 3522560 1.86 1
dimm246 65536 3588096 1.83 2
dimm247 65536 3653632 1.79 3
dimm248 65536 3719168 1.76 0
dimm249 65536 3784704 1.73 1
dimm250 65536 3850240 1.70 2
dimm251 65536 3915776 1.67 3
dimm252 65536 3981312 1.65 0
dimm253 65536 4046848 1.62 1
dimm254 65536 4112384 1.59 2
dimm255 65536 4177920 1.57 3
The maximum dimm_memory size is 4TB, which is the current qemu limit.
If the dimm_memory value is not aligned to a module boundary, we align it up to the next module (e.g. 2000 MB becomes 2048 MB).
vmid.conf
---------
memory: 1024
numa: 1
hotplug: memory
When the memory hotplug option is enabled, the minimum memory value must be 1GB, and numa needs to be enabled as well.
We assign the first 1GB as static memory, split across the numa nodes.
The remaining memory is assigned to hotpluggable dimm devices.
The static memory also needs to be 128MB aligned, so that the dimm devices are aligned too.
This 128MB alignment is a linux limitation; windows can align on 2MB boundaries.
Numa needs to be aligned, as linux guests don't boot on some multi-socket setups otherwise,
and windows needs numa to be able to hotplug memory. A worked example follows below.
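As a rough illustration (values chosen for this example, node assignment assumed round-robin as in the map above), a config like
memory: 4096
sockets: 2
numa: 1
hotplug: memory
would give 1024 MB of static memory (512 MB per numa node) plus 3072 MB of hotpluggable
memory, exposed as dimm0..dimm5 (6 x 512 MB dimm devices).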
hotplug
----
qm set <vmid> -memory X (where X is bigger than current value)
unplug (not yet implemented in qemu)
------
qm set <vmid> -memory X (where X is lower than current value)
linux guest
-----------
- the acpi hotplug module should be loaded in the guest
- a recent kernel is needed (tested with 3.10)
Onlining of hotplugged cpu/memory can be enabled automatically by adding:
/lib/udev/rules.d/80-hotplug-cpu-mem.rules
SUBSYSTEM=="cpu", ACTION=="add", TEST=="online", ATTR{online}=="0", \
ATTR{online}="1"
SUBSYSTEM=="memory", ACTION=="add", TEST=="state", ATTR{state}=="offline", \
ATTR{state}="online"
windows guest
-------------
tested with:
- windows 2012 standard
- windows 2008 enterprise/datacenter
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
vcpus = the number of vcpus currently allocated to the virtual machine
maxcpus is now computed from $sockets*$cores
vcpus = maxcpus if not defined
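A minimal sketch of the resulting -smp argument, assuming sockets: 2, cores: 4 and vcpus: 4
(values are illustrative; the exact line generated may differ):
-smp 4,sockets=2,cores=4,maxcpus=8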
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
Original patch by Wolfgang, adapted for the new hotplug implementation.
I do not verify the link status, because that patch was rejected upstream.
Signed-off-by: Wolfgang Link <wolfgang@linksystems.org>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
commit 1c0c1c17b0
Author: Wolfgang Link <wolfgang@linksystems.org>
Date: Wed Nov 26 11:11:40 2014 +0100
shutdown by Qemu Guest Agent if the agent flag in the config is set
Important: "guest-shutdown" only returns a message on error.
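A sketch of the guest agent call involved ("guest-shutdown" is the qemu guest agent command; the invocation shown is illustrative):
{ "execute": "guest-shutdown" }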
Signed-off-by: Wolfgang Link <wolfgang@linksystems.org>
This breaks live migration, as it always tries to load the vm config - even in the $nocheck case. It also loads the config twice ($conf && $config).
Signed-off-by: Stefan Priebe <s.priebe@profihost.ag>
This enables numa support inside the guest, and shares the memory and cores across the sockets' numa nodes.
numa: 0|1
example:
-------
sockets:2
cores:2
memory:4096
numa: 1
qemu command line
-----------------
-object memory-backend-ram,size=2048,id=ram-node0
-numa node,nodeid=0,cpus=0-1,memdev=ram-node0
-object memory-backend-ram,size=2048,id=ram-node1
-numa node,nodeid=1,cpus=2-3,memdev=ram-node1
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
Remove the freezefs flag.
If the Qemu Guest Agent flag is set in the config, the vm filesystem will always be frozen,
unless we save RAM.
Also remove the freezefs param in the PVE::API2 snapshot code,
because there is no use for it.
Signed-off-by: Wolfgang Link <wolfgang@linksystems.org>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
Even if we check the busy flag, we can sometimes hit a race condition if new writes
come in between the query-block-jobs and the block-job-complete.
block-job-complete then throws the error "The active block job for device '%(name)' cannot be completed";
we just need to retry in this case, as sketched below.
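A rough QMP-level sketch of the retry (the device name is illustrative, not taken from the patch):
{ "execute": "block-job-complete", "arguments": { "device": "drive-virtio0" } }
# if it fails with "... cannot be completed", wait a moment and issue block-job-complete again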
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
block-job-cancel is async; we need to check that the job has really finished
before trying to free the volume, as sketched below.
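Roughly, at the QMP level (device name illustrative):
{ "execute": "block-job-cancel", "arguments": { "device": "drive-virtio0" } }
{ "execute": "query-block-jobs" }   # repeat until the job no longer shows up
<- { "return": [] }                 # only now is it safe to free the volume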
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
New config option:
iothread: 1|0
This enables iothread/dataplane support, to improve io performance on fast storage.
Block jobs don't work with it yet; that is planned for qemu 2.2.
So it's better not to expose this option in the gui yet.
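Depending on the qemu version, the generated command line looks roughly like this
(ids and drive names are illustrative, not necessarily what qemu-server emits):
-object iothread,id=iothread-virtio0
-device virtio-blk-pci,drive=drive-virtio0,iothread=iothread-virtio0,...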
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
The machine option is written to the snapshot (ok), but also to the running config (bad).
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
We should push to the $devices array instead of the $cmd array,
because pci bridges need to be created before spice devices.
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
A multifunction device should be defined without the .function suffix:
hostpci0: 00:00
example
-------
if 00:00.0
00:00.1
00:00.2
exist,
then we generate the multifunction devices:
-device (pci-assign|vfio-pci),host=00:00.0,id=hostpci0.0,bus=...,addr=0x0.0,multifunction=on
-device (pci-assign|vfio-pci),host=00:00.1,id=hostpci0.1,bus=...,addr=0x0.1
-device (pci-assign|vfio-pci),host=00:00.2,id=hostpci0.2,bus=...,addr=0x0.2
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
hostpci0: .....,x-vga=on,pcie=1
x-vga requires kernel 3.10 with vfio-vga support enabled
if x-vga=on, we force the vfio-pci device
pcie=1 chooses the pci express bus (needs the q35 machine model)
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
q35 uses the pcie.0 root by default, so currently we can't start the q35 machine model.
We need to add 3 pci bridges pci.0, pci.1, pci.2 to handle our devices.
pcie.0 does not support hotplug, so the pci bridges are defined at startup.
I use a pve-q35.cfg (mostly the same as q35-chipset.cfg from the qemu docs).
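The config is pulled in via -readconfig, roughly as follows (the install path is an assumption, not stated in this message):
-readconfig /usr/share/qemu-server/pve-q35.cfg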
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
This adds a new option queues=(\d+) to the net interface.
It allows using more than 1 cpu for the network stream, which can improve network bandwidth
when the vhost-net cpu is the bottleneck.
http://www.linux-kvm.org/page/Multiqueue#Enable_MQ_feature
-netdev tap,vhost=on,queues=N -device virtio-net-pci,mq=on,vectors=2N+2
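For example, with N=4 this becomes (netdev id illustrative):
-netdev tap,id=net0,vhost=on,queues=4 -device virtio-net-pci,netdev=net0,mq=on,vectors=10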
host requirement
----------------
this requires host kernel >= 3.8 (otherwise qemu dies at start)
linux guest requirement
-----------------------
kernel >= 3.8
multiqueue must be enabled manually inside the guest (see the example below)
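Inside the guest this is typically done with ethtool (interface name and queue count are illustrative):
ethtool -L eth0 combined 4   # enable 4 combined queues on eth0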
windows guest requirement
-------------------------
recent virtio-net driver
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
We simply add the iscsi option if we have an initiator name. That way we
never add this option multiple times, and it works with hotplug
in case someone plugs an 'iscsi:' drive later.
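The resulting option looks roughly like this (the initiator name is illustrative):
-iscsi initiator-name=iqn.1993-08.org.debian:01:1234567890ab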
Enable a check whether the host supports all cpu flags configured for the guests.
This avoids some bad setups, like an Opteron vcpu on an intel host for example,
and avoids some bad live migrations.
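The host flags can be inspected via /proc/cpuinfo; a rough sketch of that kind of check (not the patch's actual code, flag name illustrative):
grep -m1 '^flags' /proc/cpuinfo | grep -qw svm || echo "host is missing the 'svm' flag"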
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
This reduces the guest cpu speed if the dirtied bytes are 50% more than the approximate amount of bytes that just got transferred since the last time we were in this routine.
qemu commit :
http://git.qemu.org/?p=qemu.git;a=commit;h=bde1e2ec2176c363c1783bf8887b6b1beb08dfee
tested with "stress -m 2 -c 2" under debian
without autoconvergence : downtime 12s - duration 12min
with autoconvergence : downtime 2s - duration 4min
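Auto-converge is a migration capability; enabling it over QMP looks roughly like this (invocation shown for illustration):
{ "execute": "migrate-set-capabilities",
  "arguments": { "capabilities": [ { "capability": "auto-converge", "state": true } ] } }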
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
Add qxl2 (2 monitors), qxl3 (3 monitors) and qxl4 (4 monitors) vga types.
For linux, we only need 1 qxl card with more memory.
For windows, we need 1 qxl card per monitor.
Original information from the spice mailing list:
"
You need to specify multiple devices for Windows VMs. This is what
libvirt gives me (via 'virsh domxml-to-native qemu argv DOMAIN_XML'):
<...> -vga qxl -global qxl-vga.ram_size=67108864 -global qxl-vga.vram_size=33554432 -device qxl,id=video1,ram_size=67108864,vram_size=33554432 -device qxl,id=video2,ram_size=67108864,vram_size=33554432 -device qxl,id=video3,ram_size=67108864,vram_size=33554432
For Linux VM, just one qxl device is OK but then it's advisable to
increase the available RAM:
<...> -vga qxl -global qxl-vga.ram_size=134217728 -global qxl-vga.vram_size=33554432
If you don't turn off surfaces, then you should increase vram size to
say 64 MB from current default of 32 MB.
"
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
This patch adds support for unsecure migration using a direct tcp connection
KVM <=> KVM instead of an extra SSH tunnel. Without ssh the limit is just the
bandwidth and no longer the CPU / a single core.
You can enable this by adding:
migration_unsecure: 1
to datacenter.cfg
Examples using qemu 1.4, as migration with qemu 1.3 still does not work for me:
current default with SSH Tunnel VM uses 2GB mem:
Dec 27 21:10:32 starting migration of VM 105 to node 'cloud1-1202' (10.255.0.20)
Dec 27 21:10:32 copying disk images
Dec 27 21:10:32 starting VM 105 on remote node 'cloud1-1202'
Dec 27 21:10:35 starting ssh migration tunnel
Dec 27 21:10:36 starting online/live migration on localhost:60000
Dec 27 21:10:36 migrate_set_speed: 8589934592
Dec 27 21:10:36 migrate_set_downtime: 1
Dec 27 21:10:38 migration status: active (transferred 152481002, remaining 1938546688), total 2156396544) , expected downtime 0
Dec 27 21:10:40 migration status: active (transferred 279836995, remaining 1811140608), total 2156396544) , expected downtime 0
Dec 27 21:10:42 migration status: active (transferred 421265271, remaining 1669840896), total 2156396544) , expected downtime 0
Dec 27 21:10:44 migration status: active (transferred 570987974, remaining 1520152576), total 2156396544) , expected downtime 0
Dec 27 21:10:46 migration status: active (transferred 721469404, remaining 1369939968), total 2156396544) , expected downtime 0
Dec 27 21:10:48 migration status: active (transferred 875595258, remaining 1216057344), total 2156396544) , expected downtime 0
Dec 27 21:10:50 migration status: active (transferred 1034654822, remaining 1056931840), total 2156396544) , expected downtime 0
Dec 27 21:10:54 migration status: active (transferred 1176288424, remaining 915369984), total 2156396544) , expected downtime 0
Dec 27 21:10:56 migration status: active (transferred 1339734759, remaining 752050176), total 2156396544) , expected downtime 0
Dec 27 21:10:58 migration status: active (transferred 1503743261, remaining 588206080), total 2156396544) , expected downtime 0
Dec 27 21:11:02 migration status: active (transferred 1645097827, remaining 446906368), total 2156396544) , expected downtime 0
Dec 27 21:11:04 migration status: active (transferred 1810562934, remaining 281751552), total 2156396544) , expected downtime 0
Dec 27 21:11:06 migration status: active (transferred 1964377505, remaining 126033920), total 2156396544) , expected downtime 0
Dec 27 21:11:08 migration status: active (transferred 2077930417, remaining 0), total 2156396544) , expected downtime 0
Dec 27 21:11:09 migration speed: 62.06 MB/s - downtime 37 ms
Dec 27 21:11:09 migration status: completed
Dec 27 21:11:13 migration finished successfuly (duration 00:00:41)
TASK OK
with unsecure migration without SSH Tunnel:
Dec 27 22:43:14 starting migration of VM 105 to node 'cloud1-1203' (10.255.0.22)
Dec 27 22:43:14 copying disk images
Dec 27 22:43:14 starting VM 105 on remote node 'cloud1-1203'
Dec 27 22:43:17 starting online/live migration on 10.255.0.22:60000
Dec 27 22:43:17 migrate_set_speed: 8589934592
Dec 27 22:43:17 migrate_set_downtime: 1
Dec 27 22:43:19 migration speed: 1024.00 MB/s - downtime 1100 ms
Dec 27 22:43:19 migration status: completed
Dec 27 22:43:22 migration finished successfuly (duration 00:00:09)
TASK OK
That way we do not need to run a qmp command to get the port.
Set the spice ticket expire time to 30 seconds (5 seconds seems a bit too short).
Coding style cleanups.
This adds special hyper-v cpu flags for windows guests.
This improves performance and avoids some bsods related to the timer.
(I currently disable the hv_vapic flag because I can't get it working.)
I have tested all these flags with: win2003, win2008R2, winxp, linux debian 64bit, on intel and amd physical processors.
It doesn't break live migration, because new cpu flags are not seen by guests until a vm reset.
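For illustration only (the exact flag set is not spelled out here and the syntax depends on the qemu version), such enlightenments end up on the -cpu line, e.g.:
-cpu kvm64,hv_spinlocks=0xffff,hv_relaxed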
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
Needed for win8 boot.
This flag was missing from the rhel < 6.4 host kernel. It's ok now.
But it's also missing from the kvm64 model. (It exists in other cpu models, amd or intel.)
So it's pretty safe to enable it.
If the host kernel is older, qemu filters the flag out.
This also improves performance of winxp && win7 32 bit guests.
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
This reduces context switches with multicore guests.
Even if the host cpu doesn't have x2apic, it works because qemu has a virtual x2apic implementation for the guest.
We need in-kernel irqchip support for this, which is enabled for kvm guests since qemu 1.3.
(I don't enable it if the nokvm param is set.)
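On the command line this amounts to adding the feature to the cpu model, roughly:
-cpu kvm64,+x2apic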
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
This is experimental code; spice connections are not encrypted and thus insecure.
We use ticket passwords for spice auth, and do direct spice connections to
the nodes instead of using a tunnel.
fix : Use of uninitialized value $bridgeid in numeric lt (<) at /usr/share/perl5/PVE/QemuServer.pm line 2774.
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>