This patch adds support for unsecure migration over a direct TCP connection
(KVM <=> KVM) instead of an extra SSH tunnel. Without SSH, the limit is just
the network bandwidth and no longer the CPU / a single core.
You can enable this by adding:
migration_unsecure: 1
to datacenter.cfg
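As a rough illustration of what the option changes (the real logic lives in the Perl migration code; the function and variable names below are invented for the sketch), the migration target switches from a locally forwarded port behind the SSH tunnel to the remote node's address:

# Sketch only -- the real implementation is Perl in qemu-server;
# build_migration_target() and its arguments are invented for illustration.
def build_migration_target(datacenter_cfg, remote_ip, port=60000):
    if datacenter_cfg.get("migration_unsecure"):
        # direct TCP connection between the source and target KVM processes
        return f"tcp:{remote_ip}:{port}"
    # default: migrate to a local port that an SSH tunnel forwards to the
    # target node (this is the "localhost:60000" seen in the log below)
    return f"tcp:localhost:{port}"

print(build_migration_target({"migration_unsecure": 1}, "10.255.0.22"))
# tcp:10.255.0.22:60000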
Examples use qemu 1.4, as migration with qemu 1.3 still does not work for me:
Current default with SSH tunnel (the VM uses 2 GB of memory):
Dec 27 21:10:32 starting migration of VM 105 to node 'cloud1-1202' (10.255.0.20)
Dec 27 21:10:32 copying disk images
Dec 27 21:10:32 starting VM 105 on remote node 'cloud1-1202'
Dec 27 21:10:35 starting ssh migration tunnel
Dec 27 21:10:36 starting online/live migration on localhost:60000
Dec 27 21:10:36 migrate_set_speed: 8589934592
Dec 27 21:10:36 migrate_set_downtime: 1
Dec 27 21:10:38 migration status: active (transferred 152481002, remaining 1938546688), total 2156396544) , expected downtime 0
Dec 27 21:10:40 migration status: active (transferred 279836995, remaining 1811140608), total 2156396544) , expected downtime 0
Dec 27 21:10:42 migration status: active (transferred 421265271, remaining 1669840896), total 2156396544) , expected downtime 0
Dec 27 21:10:44 migration status: active (transferred 570987974, remaining 1520152576), total 2156396544) , expected downtime 0
Dec 27 21:10:46 migration status: active (transferred 721469404, remaining 1369939968), total 2156396544) , expected downtime 0
Dec 27 21:10:48 migration status: active (transferred 875595258, remaining 1216057344), total 2156396544) , expected downtime 0
Dec 27 21:10:50 migration status: active (transferred 1034654822, remaining 1056931840), total 2156396544) , expected downtime 0
Dec 27 21:10:54 migration status: active (transferred 1176288424, remaining 915369984), total 2156396544) , expected downtime 0
Dec 27 21:10:56 migration status: active (transferred 1339734759, remaining 752050176), total 2156396544) , expected downtime 0
Dec 27 21:10:58 migration status: active (transferred 1503743261, remaining 588206080), total 2156396544) , expected downtime 0
Dec 27 21:11:02 migration status: active (transferred 1645097827, remaining 446906368), total 2156396544) , expected downtime 0
Dec 27 21:11:04 migration status: active (transferred 1810562934, remaining 281751552), total 2156396544) , expected downtime 0
Dec 27 21:11:06 migration status: active (transferred 1964377505, remaining 126033920), total 2156396544) , expected downtime 0
Dec 27 21:11:08 migration status: active (transferred 2077930417, remaining 0), total 2156396544) , expected downtime 0
Dec 27 21:11:09 migration speed: 62.06 MB/s - downtime 37 ms
Dec 27 21:11:09 migration status: completed
Dec 27 21:11:13 migration finished successfuly (duration 00:00:41)
TASK OK
With unsecure migration (no SSH tunnel):
Dec 27 22:43:14 starting migration of VM 105 to node 'cloud1-1203' (10.255.0.22)
Dec 27 22:43:14 copying disk images
Dec 27 22:43:14 starting VM 105 on remote node 'cloud1-1203'
Dec 27 22:43:17 starting online/live migration on 10.255.0.22:60000
Dec 27 22:43:17 migrate_set_speed: 8589934592
Dec 27 22:43:17 migrate_set_downtime: 1
Dec 27 22:43:19 migration speed: 1024.00 MB/s - downtime 1100 ms
Dec 27 22:43:19 migration status: completed
Dec 27 22:43:22 migration finished successfuly (duration 00:00:09)
TASK OK
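For what it's worth, both reported speeds are consistent with the VM's 2 GB of memory divided by the time spent in the live phase (about 33 s with the tunnel, about 2 s without); a quick check, assuming that is how the number is computed:

# quick sanity check (assumes speed = configured memory / live-phase time)
mem_mb = 2048                 # the VM has 2 GB of memory
print(round(mem_mb / 33, 2))  # 62.06 MB/s  (21:10:36 -> 21:11:09, SSH tunnel)
print(round(mem_mb / 2, 2))   # 1024.0 MB/s (22:43:17 -> 22:43:19, direct TCP)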
changelog (the intent is sketched below):
- increment the counter also if the remaining memory equals 0 (qemu 1.4 migration code)
- only increment the counter and set down_time if a memory transfer has occurred (to avoid incrementing the downtime too quickly)
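A simplified sketch of the polling loop these two items touch (illustrative only; the real code is Perl in QemuMigrate.pm, and the names, threshold, and exact "stalled" condition below are invented to show the intent):

import time

def watch_migration(query_status, raise_downtime, threshold=5):
    counter = 0
    last_transferred = 0
    last_remaining = None
    while True:
        st = query_status()                     # e.g. a "query-migrate" result
        if st["status"] != "active":
            return st["status"]
        transferred = st["ram"]["transferred"]
        remaining = st["ram"]["remaining"]
        made_progress = transferred > last_transferred
        stalled = last_remaining is not None and remaining >= last_remaining
        # item 1: remaining == 0 (as reported by the qemu 1.4 migration code
        # near the end) now also counts towards raising the downtime
        # item 2: but only if some memory was actually transferred since the
        # last poll, so the downtime is not increased too quickly
        if made_progress and (stalled or remaining == 0):
            counter += 1
        last_transferred, last_remaining = transferred, remaining
        if counter >= threshold:
            counter = 0
            raise_downtime()                    # e.g. double migrate_downtime
        time.sleep(2)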
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
Can be useful to see what's wrong if the target VM doesn't start (missing storage, missing bridge, ...).
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
This helps migration of VMs with a lot of memory access (like databases).
Live migration tests working:
kvm 1.2 -> kvm 1.2 (xbzrle set on both sides)
kvm 1.1 -> kvm 1.2 (xbzrle on the target)
kvm 1.1 -> kvm 1.1 (xbzrle not set; the QMP command tries to set xbzrle but fails)
Failing migration:
kvm 1.2 -> kvm 1.1 fails, but this is expected.
I tested with a memory benchmark running in a VM with 4 GB of RAM:
without xbzrle: the migration takes 10 min, with many network hangs
with xbzrle: the migration takes 1 min, no hangs
I display the xbzrle counters for debugging purposes; we can remove them later.
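For context, xbzrle is enabled through QMP before the migration starts; a minimal sketch of the kind of commands involved (PVE drives this from the Perl code, and the socket path and 64 MB cache size here are just examples):

import json
import socket

def qmp(reader, sock, execute, arguments=None):
    """Send one QMP command and return the next reply line, parsed."""
    cmd = {"execute": execute}
    if arguments is not None:
        cmd["arguments"] = arguments
    sock.sendall(json.dumps(cmd).encode() + b"\r\n")
    return json.loads(reader.readline())

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
    s.connect("/var/run/qemu-server/105.qmp")       # example path
    reader = s.makefile("r")
    reader.readline()                               # QMP greeting banner
    qmp(reader, s, "qmp_capabilities")              # enter command mode
    qmp(reader, s, "migrate-set-capabilities",
        {"capabilities": [{"capability": "xbzrle", "state": True}]})
    qmp(reader, s, "migrate-set-cache-size", {"value": 64 * 1024 * 1024})
    # the xbzrle counters mentioned above appear in "query-migrate" output
    print(qmp(reader, s, "query-migrate"))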
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
to be sure that the kvm process is killed (but it should kill itself),
and to deactivate volumes.
I slightly modified this patch (orig. from Alexandre) so that it applies cleanly.
Currently we get the volume list from PVE::Storage (for unused volumes) from all storages.
If something goes wrong with the network on the host and we can't communicate with a network-shared storage (sheepdog, rbd, ...),
vdisk_list dies (timeout) and we cannot migrate the VM to another KVM host (online or offline).
We don't need to scan shared storages, as there are no disks to sync.
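The idea, as a rough sketch (the real code is Perl using PVE::Storage; the storage layout below is invented for illustration): skip shared storages when collecting volumes, so an unreachable shared storage can no longer make vdisk_list hang and block the migration.

# Invented example configuration -- in the real (Perl) code this comes
# from the PVE::Storage configuration.
storage_cfg = {
    "local":    {"type": "dir",      "shared": False},
    "ceph-rbd": {"type": "rbd",      "shared": True},
    "sheep":    {"type": "sheepdog", "shared": True},
}

# Shared storages hold no disks that need syncing to the target node, so
# only the non-shared ones are scanned for unused volumes; an unreachable
# rbd/sheepdog storage then cannot time out the whole migration.
local_storages = [name for name, cfg in storage_cfg.items() if not cfg["shared"]]
print(local_storages)   # ['local']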
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>