Without this patch we use the network where the cluster traffic runs
for sending migration traffic. This is not ideal as it may hinder
cluster traffic. Further, some users have a powerful network which
would be perfect for migrations; with this patch they can run the
migration traffic over such a network without having the corosync
traffic on the same network.
The network is configurable through /etc/pve/datacenter.cfg, which
got a new property, namely migration. migration has two
subproperties: type (replaces the old migration_unsecure property)
and network.
For the case of a network failure, or when a VM has to be moved over
another network for arbitrary other reasons, I added the
migration_type and migration_network parameters to qm migrate (and
respectively to vm_start, as this gets used on migration).
They allow overriding the datacenter.cfg settings.
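For illustration, the datacenter.cfg entry could look roughly like
this (the network value is just an example):

    migration: type=secure,network=10.1.2.0/24

and can be overridden for a single migration, e.g.:

    qm migrate <vmid> <target-node> --migration_type insecure --migration_network 10.3.4.0/24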
Fixes bug #1177
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Before patch:
INFO: exclude disk 'scsi1' (backup=no)
INFO: skip unused drive 'local:401/vm-401-disk-3.raw' (not included into backup)
INFO: skip unused drive 'local:401/vm-401-disk-1.raw' (not included into backup)
After patch applied:
INFO: include disk 'scsi0' local:401/vm-401-disk-4.qcow2
INFO: exclude disk 'scsi1' local:401/vm-401-disk-2.raw (backup=no)
INFO: include disk 'scsi2' pve4tank:vm-401-disk-1
INFO: skip unused drive 'local:401/vm-401-disk-3.raw' (not included into backup)
INFO: skip unused drive 'local:401/vm-401-disk-1.raw' (not included into backup)
Let 'cdrom' use the pve-qm-ide format, as it's supposed to
be an alias to ide2.
We're not using the 'alias' schema property since the qemu
configs still use a custom parser (due to the
pending-changes system and the filename-to-volume-id
conversion for legacy support) which does not deal with
schema aliases.
before copying the efidisk image to a storage,
we first have to activate the volume.

also, add the -n flag to qemu-img convert (this prevents
creation of the target volume)
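A rough sketch of the intended order of operations (the variables and
the raw source format are placeholders, not the exact code):

    # activate the target volume first, then copy without letting
    # qemu-img create it ('-n' skips target volume creation)
    PVE::Storage::activate_volumes($storecfg, [$volid]);
    my $path = PVE::Storage::path($storecfg, $volid);
    PVE::Tools::run_command(['qemu-img', 'convert', '-n', '-f', 'raw',
        '-O', $fmt, $efidisk_image, $path]);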
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
when restoring into an existing VM, we don't want to die
half-way through because we can't delete one of the existing
volumes. instead, warn about the deletion failure, but
continue anyway. the disk that could not be deleted is then
added as unused automatically.
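The warn-and-continue pattern boils down to something like this
(a simplified sketch; the loop and variable names are placeholders):

    # try to remove each existing volume, but only warn on failure;
    # a volume that could not be removed shows up as unusedX later
    foreach my $volid (@volids_to_remove) {
        eval { PVE::Storage::vdisk_free($storecfg, $volid) };
        warn "unable to delete volume '$volid' - $@" if $@;
    }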
this adds a bootsplash image in /usr/share/qemu-server
and, if this file exists, uses it for seabios
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
if efidisk0 is defined, use it as an efivars disk
to permanently store efivars (such as boot options).

we check if the files exist, and act accordingly
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
when we create the efidisk0 over the API, we discard the requested
size, create a 128 kbyte vdisk instead,
and copy the image there with qemu-img convert
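From the user side this means the disk can be requested with the
usual allocation syntax, e.g. something along the lines of (the size
after the colon is discarded):

    qm set <vmid> -efidisk0 <storage>:1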
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
just a simple disk (only size, format and volid) for the
efivars disk.

also, do not add it to the command line in foreach_drive
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
drive-mirror is not working with qemu 2.6 when iothread is enabled.
with virtio-blk: mirror works, but block-job-completed crashes the vm.
with virtio-scsi: mirror hangs at start.
This should be fixed in qemu 2.7
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
foreach_dimm() provides a guest numa node index; when used
in conjunction with the guest-to-host numa node topology
mapping, one has to make sure that the correct host-side
indices are used.
This covers situations where the user defines a numaX with
hostnodes=Y with Y != X.
For instance:
cores: 2
hotplug: disk,network,cpu,memory
hugepages: 2
memory: 2048
numa: 1
numa1: memory=512,hostnodes=0,cpus=0-1,policy=bind
numa2: memory=512,hostnodes=0,cpus=2-3,policy=bind
Both numa IDs 1 and 2 passed by foreach_dimm() have to be
mapped to host node 0.
Note that this also reverses the numa node numbering in
foreach_reverse_dimm(): the current code, while walking the sizes
backwards, walked the numa IDs inside each size forward; reversing
those as well makes more sense. (Memory hot-unplug still works
with this.)
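For the example above, the fix boils down to looking up the hostnodes
value of the matching numaX entry instead of reusing the guest index
directly; a simplified illustration (the real code works on the
parsed numa config, not on a regex):

    my $guest_numa = 1;                                 # as handed out by foreach_dimm()
    my $numa_cfg = $conf->{"numa$guest_numa"};          # e.g. "memory=512,hostnodes=0,..."
    my ($hostnodes) = $numa_cfg =~ /hostnodes=([^,]+)/; # -> "0" for the config above
    # $hostnodes, not $guest_numa, has to be used for the host-side binding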
we did not check if some values were hash refs in
the verbose output.
this patch adds a recursive hash print sub and uses it
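A minimal sketch of such a recursive print helper (simplified, not
the exact implementation):

    sub print_recursive_hash {
        my ($prefix, $hash) = @_;
        foreach my $key (sort keys %$hash) {
            my $value = $hash->{$key};
            if (ref($value) eq 'HASH') {
                print "$prefix$key:\n";
                print_recursive_hash("$prefix    ", $value);
            } else {
                print "$prefix$key: $value\n";
            }
        }
    }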
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
this API call changes the config quite drastically, and as
such should not be possible while an operation that holds a
lock is ongoing (e.g., migration, backup, snapshot).
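Conceptually the guard amounts to rejecting the call while the config
carries a lock (a minimal sketch of the idea, not the exact helper
used):

    # bail out early if another operation holds a lock on this VM
    die "VM is locked ($conf->{lock})\n" if $conf->{lock};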