Mirror of git://git.proxmox.com/git/pve-docs.git (synced 2025-05-28 13:05:37 +03:00)
qm: rework migration sections, drop mentioning outdated limitations
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
commit 277808347e
parent b5acae243c

qm.adoc | 65
@@ -1133,45 +1133,62 @@ There are generally two mechanisms for this
 Online Migration
 ~~~~~~~~~~~~~~~~
 
-When your VM is running and it has no local resources defined (such as disks
-on local storage, passed through devices, etc.) you can initiate a live
-migration with the -online flag.
+If your VM is running and no locally bound resources are configured (such as
+passed-through devices), you can initiate a live migration with the `--online`
+flag in the `qm migrate` command invocation. The web interface defaults to
+live migration when the VM is running.
 
 How it works
 ^^^^^^^^^^^^
 
-This starts a Qemu Process on the target host with the 'incoming' flag, which
-means that the process starts and waits for the memory data and device states
-from the source Virtual Machine (since all other resources, e.g. disks,
-are shared, the memory content and device state are the only things left
-to transmit).
+Online migration first starts a new QEMU process on the target host with the
+'incoming' flag, which performs only basic initialization with the guest vCPUs
+still paused, and then waits for the guest memory and device state data streams
+of the source Virtual Machine.
+All other resources, such as disks, are either shared or have already been sent
+before the runtime state migration of the VM begins; so only the memory content
+and device state remain to be transferred.
 
-Once this connection is established, the source begins to send the memory
-content asynchronously to the target. If the memory on the source changes,
-those sections are marked dirty and there will be another pass of sending data.
-This happens until the amount of data to send is so small that it can
-pause the VM on the source, send the remaining data to the target and start
-the VM on the target in under a second.
+Once this connection is established, the source begins asynchronously sending
+the memory content to the target. If the guest memory on the source changes,
+those sections are marked dirty and another pass is made to send the guest
+memory data.
+This loop is repeated until the data difference between the running source VM
+and the incoming target VM is small enough to be sent in a few milliseconds,
+because then the source VM can be paused completely, without a user or program
+noticing the pause, so that the remaining data can be sent to the target, and
+then the target VM's vCPUs are unpaused to make it the new running VM, all in
+well under a second.
 
 Requirements
 ^^^^^^^^^^^^
 
 For Live Migration to work, there are some things required:
 
-* The VM has no local resources (e.g. passed through devices, local disks, etc.)
-* The hosts are in the same {pve} cluster.
-* The hosts have a working (and reliable) network connection.
-* The target host must have the same or higher versions of the
-  {pve} packages. (It *might* work the other way, but this is never guaranteed)
-* The hosts have CPUs from the same vendor. (It *might* work otherwise, but this
-  is never guaranteed)
+* The VM has no local resources that cannot be migrated. For example,
+  PCI or USB devices that are passed through currently block live migration.
+  Local disks, on the other hand, can be migrated by sending them to the
+  target just fine.
+* The hosts are located in the same {pve} cluster.
+* The hosts have a working (and reliable) network connection between them.
+* The target host must have the same or higher versions of the
+  {pve} packages. Although it can sometimes work the other way around, this
+  cannot be guaranteed.
+* The hosts have CPUs from the same vendor with similar capabilities. A
+  different vendor *might* work depending on the actual models and the VM's
+  configured CPU type, but it cannot be guaranteed - so please test before
+  deploying such a setup in production.
 
 Offline Migration
 ~~~~~~~~~~~~~~~~~
 
-If you have local resources, you can still offline migrate your VMs,
-as long as all disk are on storages, which are defined on both hosts.
-Then the migration will copy the disk over the network to the target host.
+If you have local resources, you can still migrate your VMs offline, as long
+as all disks are on storages that are defined on both hosts.
+Migration then copies the disks to the target host over the network, as with
+online migration. Note that any hardware pass-through configuration may need
+to be adapted to the device location on the target host.
+
+// TODO: mention hardware map IDs as better way to solve that, once available
 
 [[qm_copy_and_clone]]
 Copies and Clones
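For reference, the migration modes this commit documents map onto `qm migrate`
invocations roughly as follows. This is a sketch, not part of the commit; the
VM ID `100` and target node name `node2` are placeholder values.

```shell
# Live migration of running VM 100 to cluster node "node2".
# Fails if non-migratable local resources are configured
# (e.g. passed-through PCI or USB devices).
qm migrate 100 node2 --online

# Live migration of a VM with local disks: copy them to the
# target over the network while the VM keeps running.
qm migrate 100 node2 --online --with-local-disks

# Offline migration: the stopped VM's disks are copied over the
# network, provided their storages are defined on both hosts.
qm migrate 100 node2
```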