We pass a list of storages to scan for stale volumes to prepare_local_job(),
so that we only activate and scan the storages that are actually involved.
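A minimal sketch of how such a storage list could be derived from the replicated
volume IDs (the "<storeid>:<volname>" format and variable names here are only
illustrative, not the actual implementation):

    use strict;
    use warnings;

    # collect the storages backing the replicated volumes, so that
    # prepare_local_job() only has to activate/scan those
    my @volumes = ('local-zfs:vm-100-disk-1', 'local-zfs:vm-100-disk-2', 'tank:vm-100-disk-3');

    my %storages;
    for my $volid (@volumes) {
        my ($storeid) = split(/:/, $volid, 2);
        $storages{$storeid} = 1;
    }
    my $storage_list = [ sort keys %storages ];

    print "storages to scan for stale volumes: @$storage_list\n";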
Snapshot rollback may remove local replication snapshots. In that case
we still have the $conf->{parent} snapshot on both sides, so we
can use that as the base snapshot.
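A minimal sketch of that fallback (variable names like $last_sync_snap and the
helper are assumptions, not the real code):

    use strict;
    use warnings;

    # prefer the last replication snapshot; if a rollback removed it,
    # fall back to the $conf->{parent} snapshot that exists on both sides
    sub choose_base_snapshot {
        my ($conf, $local_snapshots, $last_sync_snap) = @_;

        return $last_sync_snap if $local_snapshots->{$last_sync_snap};

        my $parent = $conf->{parent};
        return $parent if defined($parent) && $local_snapshots->{$parent};

        return undef; # no usable base, a full sync is needed
    }

    my $conf = { parent => 'before-upgrade' };
    my $snaps = { 'before-upgrade' => 1 }; # rollback removed the replication snapshot
    my $base = choose_base_snapshot($conf, $snaps, '__replicate_100-0_1700000000__');
    print "base snapshot: ", ($base // 'none (full sync)'), "\n";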
Only call get_ssh_info() when needed, and do not pass $migration_network
for a simple remove_job - that would just produce overhead and is not required.
If we always called get_ssh_info(), we could never delete jobs whose targets
no longer exist.
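Sketched out, the idea looks roughly like this (the helper names are stand-ins,
not the actual functions):

    use strict;
    use warnings;

    # stubs standing in for the real helpers
    sub remove_local_job { my ($job) = @_; print "removed job $job->{id}\n"; }
    sub get_ssh_info     { my ($node, $network) = @_; return { node => $node }; }
    sub replicate        { my ($job, $ssh) = @_; print "replicating $job->{id} to $ssh->{node}\n"; }

    sub run_job {
        my ($job, $remove_job, $migration_network) = @_;

        if ($remove_job) {
            # no get_ssh_info() here - deleting a job must also work
            # when the target node no longer exists
            return remove_local_job($job);
        }

        # only now do we need the target connection details
        my $ssh_info = get_ssh_info($job->{target}, $migration_network);
        return replicate($job, $ssh_info);
    }

    run_job({ id => '100-0', target => 'nodeB' }, 0, undef);
    run_job({ id => '100-0', target => 'gone-node' }, 1, undef);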
The actual volume replication is done in replicate_volume(), which is just
a stub for now.
I also added a regression test, replication_test5.pl, to verify basic
functionality.
Prepare for starting a replication job. This is called on the target
node before replication starts. The call is for internal use and
returns a JSON object on stdout. The method first tests whether VM <vmid>
resides on the local node; if so, it stops immediately. After that it
scans all volume IDs for snapshots and removes all replication
snapshots whose timestamps differ from <last_sync>. It also removes
any unused volumes.
Returns a hash with boolean markers for all volumes that have an existing
replication snapshot.
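A minimal sketch of that cleanup/reporting step (the snapshot naming scheme and
data layout here are assumptions for illustration only):

    use strict;
    use warnings;
    use JSON::PP;

    sub prepare_cleanup {
        my ($volume_snapshots, $jobid, $last_sync) = @_;

        my $result = {};
        for my $volid (sort keys %$volume_snapshots) {
            for my $snap (@{ $volume_snapshots->{$volid} }) {
                next if $snap !~ /^__replicate_\Q$jobid\E_(\d+)__$/;
                if ($1 == $last_sync) {
                    # matching replication snapshot -> usable as base
                    $result->{$volid} = 1;
                } else {
                    # stale replication snapshot -> would be removed here
                    print STDERR "delete stale snapshot $snap on $volid\n";
                }
            }
        }
        return $result;
    }

    my $snaps = {
        'local-zfs:vm-100-disk-1' => [ '__replicate_100-0_1700000000__' ],
        'local-zfs:vm-100-disk-2' => [ '__replicate_100-0_1600000000__' ],
    };
    # the caller prints the result as JSON on stdout
    print encode_json(prepare_cleanup($snaps, '100-0', 1700000000)), "\n";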
We're already limiting CPUs to lxc/cpuset.effective_cpus,
so let's use the highest cpuid from that set as the upper bound
when initializing the container count array.
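For illustration, a minimal sketch of sizing the array from such a cpuset
string (the parsing is simplified; the real handling may differ):

    use strict;
    use warnings;

    # derive the highest cpuid from a cpuset string like "0-3,8,10-11"
    # as found in lxc/cpuset.effective_cpus
    sub max_cpuid {
        my ($cpuset) = @_;
        my $max = 0;
        for my $part (split /,/, $cpuset) {
            my ($lo, $hi) = split /-/, $part;
            $hi //= $lo;
            $max = $hi if $hi > $max;
        }
        return $max;
    }

    my $max = max_cpuid('0-3,8,10-11');
    my @cpu_ctcount = (0) x ($max + 1); # one slot per possible cpuid
    print "sized container count array for cpuids 0..$max\n";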
If one of our VMs is locked by a backup at boot time, we can safely
assume that this backup job is no longer running, so the lock serves
no purpose anymore and only hinders the uptime of services.
At this point we (the node) have quorum, so we may safely assume
that we have a consistent view of the cluster and that all our VMs really
belong to us. We just need to ensure that we do not interfere with an
automatic backup job, so we execute our code under VZDump's lock, or
time out.
Log to the task log and syslog that we removed the lock, so that an
admin can easily see that leftovers from an interrupted backup may
need cleaning up.
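Roughly, the boot-time cleanup could look like the following sketch (the lock
file path, config layout and helper names are assumptions; the real code uses
the existing config and locking infrastructure):

    use strict;
    use warnings;
    use Fcntl qw(:flock);

    my $vzdump_lockfile = '/var/run/vzdump.lock';

    # run $code while holding the vzdump lock, or fail after $timeout seconds
    sub with_vzdump_lock {
        my ($timeout, $code) = @_;
        open(my $fh, '>>', $vzdump_lockfile) or die "cannot open lock file: $!\n";
        my $got_lock = 0;
        for (1 .. $timeout) {
            $got_lock = flock($fh, LOCK_EX | LOCK_NB) and last;
            sleep 1;
        }
        die "timeout waiting for vzdump lock\n" if !$got_lock;
        eval { $code->() };
        my $err = $@;
        flock($fh, LOCK_UN);
        close($fh);
        die $err if $err;
    }

    # drop a stale 'backup' lock from a guest config
    sub clear_backup_lock {
        my ($vmid, $conf) = @_;
        return if !defined($conf->{lock}) || $conf->{lock} ne 'backup';
        delete $conf->{lock};
        # the real code would rewrite the config and log to task log/syslog
        print "cleared stale backup lock of VM $vmid\n";
    }

    my %local_vms = ( 100 => { lock => 'backup' }, 101 => {} );
    with_vzdump_lock(10, sub {
        clear_backup_lock($_, $local_vms{$_}) for sort keys %local_vms;
    });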
Addresses bug #1024
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
small refactoring in get_filtered_vmlist: save a VM's config in its
own subhash to avoid collisions with other data we want to keep in
the vmid list. For now this is only `type`, but in the next patch
I want to also save the class.
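For illustration, the per-vmid entries could then look like this (the 'conf'
key name is an assumption):

    use strict;
    use warnings;
    use Data::Dumper;

    # each vmid maps to bookkeeping fields plus the config in its own subhash,
    # so config keys cannot collide with fields like 'type'
    my $vmlist = {
        100 => {
            type => 'qemu',
            conf => { name => 'vm100', memory => 2048 },
        },
        200 => {
            type => 'lxc',
            conf => { hostname => 'ct200', memory => 512 },
        },
    };

    print Dumper($vmlist->{100}->{conf});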
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
extjs cannot "convert" an id from other fields, so the ids in the
diffstore and the realstore differ and we re-add every element on
every update.
to mitigate this, we generate the id (which is "uid:hostname") in the
backend, and simply use it in the frontend
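A minimal sketch of the backend side (field names are illustrative):

    use strict;
    use warnings;

    # build the row id as "uid:hostname" on the server, so the extjs
    # diffstore and realstore end up with identical ids
    sub add_row_id {
        my ($entry) = @_;
        $entry->{id} = "$entry->{uid}:$entry->{hostname}";
        return $entry;
    }

    my $row = add_row_id({ uid => 'UPID:4711', hostname => 'node1' });
    print "$row->{id}\n";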
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>