Document CPU and Memory Input panel
This commit is contained in:
parent 9c63b5d9ea
commit 34e541c538

qm.adoc: 86 changed lines
@@ -175,6 +175,92 @@ With this enabled, Qemu uses one thread per disk, instead of one thread for all,
so it should increase performance when using multiple disks.
Note that backups do not currently work with *IO Thread* enabled.

CPU
~~~
A *CPU socket* is a physical slot on a PC motherboard where you can plug a CPU.
This CPU can then contain one or many *cores*, which are independent
processing units. Whether you have a single CPU socket with 4 cores, or two CPU
sockets with two cores each, is mostly irrelevant from a performance point of
view. However, some software is licensed depending on the number of sockets in
your machine; in that case it makes sense to set the number of sockets to what
the license allows, and increase the number of cores. +
Increasing the number of virtual CPUs (cores and sockets) will usually provide
a performance improvement, though this depends heavily on the workload of the
VM. Multithreaded applications will of course benefit from a large number of
virtual CPUs, as for each virtual CPU you add, Qemu will create a new thread of
execution on the host system. If you're not sure about the workload of your VM,
it is usually a safe bet to set the number of *Total cores* to 2.

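These settings correspond to the `sockets` and `cores` options of the VM
configuration and can also be changed with `qm set`. For example, to give a VM
one socket with four cores (using 100 as a placeholder VM ID):

 # 1 socket with 4 cores for VM 100 (placeholder VM ID)
 qm set 100 -sockets 1 -cores 4
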
NOTE: It is perfectly safe to set the _overall_ number of total cores in all
your VMs to be greater than the number of cores on your server (e.g. 4 VMs
with 4 total cores each, running on an 8-core machine, is fine). In that case
the host system will balance the Qemu execution threads between your server
cores, just as if you were running a standard multithreaded application.
However, {pve} will prevent you from allocating more vCPUs to a _single_ VM
than are physically available, as this would only bring the performance down
due to the cost of context switches.

Qemu can emulate a number of different *CPU types*, from 486 to the latest Xeon
processors. Each new processor generation adds new features, like hardware
assisted 3D rendering, random number generation, memory protection, etc.
Usually you should select for your VM a processor type which closely matches
the CPU of the host system, as it means that the host CPU features (also called
_CPU flags_) will be available in your VMs. If you want an exact match, you can
set the CPU type to *host*, in which case the VM will have exactly the same CPU
flags as your host system. +
This has a downside though. If you want to do a live migration of VMs between
different hosts, your VM might end up on a new system with a different CPU
type. If the CPU flags passed to the guest are missing, the qemu process will
stop. To remedy this, Qemu also has its own CPU type *kvm64*, which {pve} uses
by default. kvm64 is a Pentium 4 look-alike CPU type, which has a reduced CPU
flag set, but is guaranteed to work everywhere. +
In short, if you care about live migration and moving VMs between nodes, leave
the kvm64 default. If you don't care about live migration, set the CPU type to
host, as in theory this will give your guests maximum performance.

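The CPU type is stored in the `cpu` option of the VM configuration and can
likewise be changed with `qm set`. For example, to switch a VM to the host CPU
type (again using 100 as a placeholder VM ID):

 # expose the host CPU flags to VM 100 (placeholder VM ID); the default is kvm64
 qm set 100 -cpu host
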
You can also optionally emulate a *NUMA* architecture in your VMs. The basics
of the NUMA architecture mean that instead of having a global memory pool
available to all your cores, the memory is spread into local banks close to
each socket. This can bring speed improvements as the memory bus is not a
bottleneck anymore. If your system has a NUMA architecture footnote:[if the
command `numactl --hardware | grep available` returns more than one node, then
your host system has a NUMA architecture] we recommend activating the option,
as this will allow proper distribution of the VM resources on the host system.
This option is also required in {pve} to allow hotplugging of cores and RAM to
a VM.

If the NUMA option is used, it is recommended to set the number of sockets to
the number of sockets of the host system.

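NUMA emulation corresponds to the boolean `numa` option of the VM configuration
and could, for example, be enabled like this (VM ID 100 is again only a
placeholder):

 # enable NUMA emulation for VM 100 (placeholder VM ID)
 qm set 100 -numa 1
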
Memory
~~~~~~

For each VM you have the option to set a fixed size memory, or to ask
{pve} to dynamically allocate memory based on the current RAM usage of the
host.

When choosing a *fixed size memory*, {pve} will simply allocate what you
specify to your VM.

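This corresponds to the `memory` option of the VM configuration (the value is
given in megabytes). A minimal example, assuming a VM with the placeholder ID
100:

 # assign a fixed 2048 MB of RAM to VM 100 (placeholder VM ID)
 qm set 100 -memory 2048
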
// see autoballoon() in pvestatd.pm
When choosing to *automatically allocate memory*, {pve} will make sure that the
minimum amount you specified is always available to the VM, and if RAM usage on
the host is below 80%, will dynamically add memory to the guest up to the
maximum memory specified. +
When the host runs short on RAM, the VM will then release some memory back to
the host, swapping running processes if needed and starting the OOM killer as
a last resort. The passing around of memory between host and guest is done via
a special `balloon` kernel driver running inside the guest, which will grab or
release memory pages from the host.
footnote:[A good explanation of the inner workings of the balloon driver can be found here https://rwmj.wordpress.com/2010/07/17/virtio-balloon/]

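A minimal sketch of this setup on the command line, assuming the maximum maps
to the `memory` option and the minimum to the `balloon` option (VM ID 100 is a
placeholder):

 # VM 100 (placeholder VM ID): up to 4096 MB, with at least 1024 MB guaranteed
 qm set 100 -memory 4096 -balloon 1024
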
All Linux distributions released after 2010 have the balloon kernel driver
included. For Windows OSes, the balloon driver needs to be added manually and
can incur a slowdown of the guest, so we don't recommend using it on critical
systems.
// see https://forum.proxmox.com/threads/solved-hyper-threading-vs-no-hyper-threading-fixed-vs-variable-memory.20265/

When allocating RAM to your VMs, a good rule of thumb is always to leave 1GB
of RAM available to the host.

Managing Virtual Machines with 'qm'
------------------------------------