true if the value is included in a result array. An array of attributes
is obtained for multiple values xpath, e.g.
<CLUSTER>
<ID>100</ID>
<ID>101</ID>
</CLUSTER>
an expression of the form CLUSTER/ID in a requirement returns [100, 101].
To test if a given element is in a datastore:
CLUSTER/ID @> 101 ----> [100, 101] @> 101 ----> true
Note that CLUSTER/ID = 101 only evaluates the first element found:
CLUSTER/ID = 101 ----> 100 = 101 ----> false
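The two evaluation modes can be illustrated with a small bash sketch. The function names here are hypothetical; they only mirror the semantics described above:

```shell
#!/bin/bash
# Hypothetical illustration of the two evaluation modes described above.

# '=' semantics: only the FIRST element found by the xpath is compared.
first_equals() {            # usage: first_equals VALUE elem...
    local value="$1"; shift
    [ "$1" = "$value" ]     # only $1 (the first match) is examined
}

# '@>' semantics: true if VALUE is contained anywhere in the result array.
contains() {                # usage: contains VALUE elem...
    local value="$1" e; shift
    for e in "$@"; do
        [ "$e" = "$value" ] && return 0
    done
    return 1
}

ids=(100 101)               # CLUSTER/ID evaluated as a multi-value xpath

first_equals 101 "${ids[@]}" && echo "=  true" || echo "=  false"  # false: 100 = 101
contains     101 "${ids[@]}" && echo "@> true" || echo "@> false"  # true: 101 is in [100, 101]
```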
(cherry picked from commit eed6b268adcb70660b0a5251c2d434b591c7bd8b)
The change was inspired by the live-migrate scripts for SYSTEM_DS with the `shared` TM_MAD.
After some rework I figured out that it could be turned into a generic solution.
The proposed changes:
* tm_common.sh - a new function migrate_other() receiving all arguments from the caller script.
The function parses the TEMPLATE blob and exposes the following bash arrays:
DISK_ID_ARRAY - array of all VM DISK_IDs
TM_MAD_ARRAY - array of TM_MADs, one per DISK_ID
CLONE_ARRAY - array of CLONE values, one per DISK_ID
The function also exposes the DISK_ID of the CONTEXT disk as CONTEXT_DISK_ID.
The function checks for an extra argument to prevent recursion loops.
For each TM_MAD that is not the current one, the corresponding script is called
with the same arguments plus one extra argument marking that it is not called
from the SYSTEM_DS context.
Usage:
migrate_other "$@"
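Based on the description above, the dispatch logic might look roughly like the sketch below. This is a hypothetical simplification, not the actual tm_common.sh code: the real function decodes the base64 TEMPLATE blob with OpenNebula's xpath helpers, while here "DISK_ID TM_MAD" pairs are simply read from stdin, and the per-driver scripts are echoed rather than executed.

```shell
#!/bin/bash
# Hypothetical sketch of a migrate_other-style dispatcher.

TM_PATH="${TM_PATH:-/var/lib/one/remotes/tm}"   # assumed driver location

migrate_other() {
    local action="$1" current_mad="$2" from_mad="$3"

    # Recursion guard: the extra trailing argument marks a call that
    # already came from another TM_MAD, so dispatch nothing further.
    if [ -n "$from_mad" ]; then
        return 0
    fi

    # Toy stand-in for the TEMPLATE parsing: read "DISK_ID TM_MAD"
    # pairs from stdin into the arrays named in the description.
    local -a DISK_ID_ARRAY=() TM_MAD_ARRAY=()
    local disk_id tm_mad
    while read -r disk_id tm_mad; do
        DISK_ID_ARRAY+=("$disk_id")
        TM_MAD_ARRAY+=("$tm_mad")
    done

    # Call each *other* TM_MAD's script exactly once, appending the
    # current TM_MAD so the callee's own guard stops the recursion.
    local -a seen=()
    local mad
    for mad in "${TM_MAD_ARRAY[@]}"; do
        [ "$mad" = "$current_mad" ] && continue
        case " ${seen[*]} " in *" $mad "*) continue ;; esac
        seen+=("$mad")
        echo "$TM_PATH/$mad/$action <args> $current_mad"
    done
}
```

For instance, `printf '0 ceph\n2 fs_lvm\n3 qcow2\n' | migrate_other premigrate ceph` would print one premigrate line each for fs_lvm and qcow2, skipping ceph itself.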
The function is appended to the following TM_MADs:
ceph
fs_lvm
qcow2
shared
ssh
The general idea is to allow live migration of VMs with a mix of disks using any of the above TM_MADs (i.e. disks from different datastores).
For example, if we have a VM with the following disks:
disk.0 - ceph
disk.1 - ceph (context)
disk.2 - ceph
disk.3 - fs_lvm
disk.4 - qcow2
In the above scenario, when a live migration is issued, the following scripts will be called:
ceph/premigrate <args>
fs_lvm/premigrate <args> "ceph"
qcow2/premigrate <args> "ceph"
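The pruning in this scenario (three ceph disks collapse into a single call, and the current TM_MAD is skipped entirely) can be sketched in a few lines of shell; the variable names here are illustrative only:

```shell
#!/bin/bash
# Hypothetical: derive the list of *other* TM_MADs that need a call,
# given the current TM_MAD and the per-disk TM_MADs from the example.
current=ceph
disks=(ceph ceph ceph fs_lvm qcow2)   # disk.0 .. disk.4 from the scenario

# Drop the current TM_MAD and deduplicate the rest.
others=$(printf '%s\n' "${disks[@]}" | grep -vx "$current" | sort -u)
echo "$others"   # fs_lvm and qcow2, each listed once
```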
As I am most probably missing something, I am open to discussion :)