In the vm_pending API, this method is used to represent the configuration as
a table with current, pending and delete columns.
By adding it as a guest helper, we can also use it for container pending
changes.
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
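A minimal sketch of how such a current/pending/delete table could be
assembled from a guest configuration; the function name and exact row
fields are illustrative assumptions, not the actual GuestHelpers code:

    # flatten a config into rows with current, pending and delete columns
    sub config_to_pending_array {
        my ($conf, $pending_delete_hash) = @_;
        my $pending = $conf->{pending} // {};
        $pending_delete_hash //= {};

        my $res = [];
        # every currently set option becomes a row with its current value
        for my $opt (sort keys %$conf) {
            next if $opt eq 'pending';
            my $item = { key => $opt, value => $conf->{$opt} };
            $item->{pending} = $pending->{$opt} if defined($pending->{$opt});
            $item->{delete} = 1 if $pending_delete_hash->{$opt};
            push @$res, $item;
        }
        # options that only exist as pending additions
        for my $opt (sort keys %$pending) {
            next if $opt eq 'delete' || defined($conf->{$opt});
            push @$res, { key => $opt, pending => $pending->{$opt} };
        }
        return $res;
    }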
This code is already used by qemu-server's GET config API call. It is,
however, better to split this into two methods and decide what to run in
the API call.
This general implementation uses some $class helpers which allow abstracting
away the differences in the child classes. This will be used for
containers once they can do pending changes.
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
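The $class-helper pattern looks roughly like this; the package and method
names below are purely illustrative, not the real classes:

    package GuestConfigBase;    # illustrative, not a real package

    # the generic implementation lives in the base class ...
    sub describe_pending {
        my ($class, $conf) = @_;
        # ... and defers guest-specific details to an overridable helper
        my $kind = $class->guest_kind();
        my $count = scalar(keys %{ $conf->{pending} // {} });
        return "$count pending change(s) for this $kind";
    }

    package QemuConfigExample;
    use parent -norequire, 'GuestConfigBase';
    sub guest_kind { 'VM' }

    package LxcConfigExample;
    use parent -norequire, 'GuestConfigBase';
    sub guest_kind { 'container' }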
Use a better naming scheme for the methods from qemu-server:
split_flagged_list -> parse_pending_delete
join_flagged_list -> print_pending_delete
vmconfig_delete_pending_option -> add_to_pending_delete
vmconfig_undelete_pending_option -> remove_from_pending_delete
vmconfig_cleanup_pending -> cleanup_pending
parse_pending_delete now has a better representation of the force value,
which is encoded in a Perl hash, i.e. $conf->{$opt}->{force} = 1 or 0,
depending on whether the string in the config has a '!' or not.
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
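A sketch of the described parsing, assuming a comma/semicolon separated
delete list; this mirrors the idea, not the verbatim implementation:

    # parse a pending delete string like "net0,!memory" into a hash where
    # a leading '!' sets the per-option force flag
    sub parse_pending_delete {
        my ($data) = @_;
        $data //= '';
        $data =~ s/[,;]/ /g;

        my $pending_deletions = {};
        for my $entry (split(/\s+/, $data)) {
            next if $entry eq '';
            my $force = ($entry =~ s/^!//) ? 1 : 0;
            $pending_deletions->{$entry} = { force => $force };
        }
        return $pending_deletions;
    }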
The registration of the vzdump.cron file was handled in pve-manager.
By moving the relevant code to pve-guest-common, cyclic dependencies
for cfs registration are avoided.
This makes this guest-common patch a build dependency for the other
packages touched in this patch series.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
This was taken from a CLI helper, where $res is a common parameter
name denoting the result of the API call the CLI command is based on.
But here that makes no sense and is not really telling about what the
value(s) of $res could be. Further explain that with a comment.
As this also prints unconditionally to STDOUT, let's say so through the
method name.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
qm/pct listsnapshot lack feature parity w.r.t. showing snapshots in a
tree ordered by date. By moving this code into GuestHelpers, it can
be shared.
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
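The shared helper essentially has to turn the flat snapshot hash into a
parent/child tree sorted by snaptime. A minimal sketch of that idea,
assuming the usual {parent, snaptime} snapshot fields (names illustrative):

    sub snapshot_tree_lines {
        my ($snapshots) = @_;    # { $name => { parent => ..., snaptime => ... } }

        # group snapshots by their parent
        my $children = {};
        for my $name (keys %$snapshots) {
            my $parent = $snapshots->{$name}->{parent} // '';
            push @{ $children->{$parent} }, $name;
        }

        # walk the tree depth-first, ordering siblings by snaptime
        my @lines;
        my $walk;
        $walk = sub {
            my ($parent, $depth) = @_;
            my @kids = sort {
                ($snapshots->{$a}->{snaptime} // 0)
                    <=> ($snapshots->{$b}->{snaptime} // 0)
            } @{ $children->{$parent} // [] };
            for my $kid (@kids) {
                push @lines, ('    ' x $depth) . $kid;
                $walk->($kid, $depth + 1);
            }
        };
        $walk->('', 0);
        return \@lines;
    }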
While we already dynamically resolved the version from the changelog
using dpkg-parsechangelog, and those dpkg-dev helpers also use that
tool, let's switch to them nonetheless to have a bit more streamlined
dev environment.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
In this file, the error of a failed job will also be stored.
The default of 32768 bytes is not very much.
This file is on the local filesystem, so there are no filesystem
size restrictions like in /etc/pve.
check_hookscript will be used by the container/VM API to check if the
hookscript volume ID is correct.
exec_hookscript can be called to execute a hookscript.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
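An illustrative usage sketch of the two helpers; the exact parameter
lists here are assumptions, check PVE::GuestHelpers for the real
signatures, and the volume ID is an example value:

    use strict;
    use warnings;
    use PVE::GuestHelpers;
    use PVE::Storage;

    my $storecfg = PVE::Storage::config();
    my $volid = 'local:snippets/hookscript.pl';    # example volume ID

    # on API input: check that the hookscript volume ID is correct
    PVE::GuestHelpers::check_hookscript($volid, $storecfg);

    # later, at a guest lifecycle phase, execute it for e.g. VMID 100
    my $conf = { hookscript => $volid };
    PVE::GuestHelpers::exec_hookscript($conf, 100, 'pre-start');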
This is the bash completion helper function for completing the snapshot
name. It is used in both qemu-server and pve-container.
This patch is the base for the patches in qemu-server and pve-container.
Signed-off-by: Rhonda D'Vine <rhonda@proxmox.com>
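A purely illustrative sketch of such a completion helper: PVE CLI
completion functions just return the list of candidate strings. The
config loader below is hypothetical:

    sub complete_snapshot_name {
        my ($vmid) = @_;
        my $conf = load_guest_config($vmid);    # hypothetical loader
        # offer all snapshot names found in the guest config
        return [ sort keys %{ $conf->{snapshots} // {} } ];
    }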
Instead, move the QEMU machine logic into the qemu-server package with
the help of the new snapshot rollback hook.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
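A sketch of the hook idea, with illustrative names: the generic rollback
lives in the shared code and calls back into the guest-specific class at
fixed points, so only qemu-server needs to know about QEMU machine types:

    package GuestConfigExample;    # illustrative package

    # generic rollback skeleton that calls an overridable hook
    sub snapshot_rollback {
        my ($class, $vmid, $conf, $snapname) = @_;
        $class->snapshot_rollback_hook($vmid, $conf, $snapname, 'prepare');
        # ... the storage-level volume rollback would happen here ...
        $class->snapshot_rollback_hook($vmid, $conf, $snapname, 'commit');
    }

    # default hook does nothing; qemu-server can override it to restore
    # the QEMU machine version recorded in the snapshot
    sub snapshot_rollback_hook { return; }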
Since it doesn't write but returns the text to be written,
let's be specific about the fact that we're returning a
value.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
With this patch we can restore the state of a stateless job.
It may happen that multiple replication snapshots exist,
because without a known job state we cannot delete any snapshot.
Multiple replication snapshots are left over
when a node fails in the middle of a replication
and the VM is then moved to another node.
That's why we have to test whether we have a common base on both nodes.
If we do, we take it as the replication state.
Once we have a state again, the rest of the snapshots can be deleted on the next run.
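A hedged sketch of the common-base test described above: intersect the
local and remote replication snapshots and take the newest shared one as
the new state (names and data layout are illustrative):

    sub find_common_base {
        my ($local_snaps, $remote_snaps) = @_;    # { $name => $timestamp }

        my @common = grep { exists $remote_snaps->{$_} } keys %$local_snaps;
        return undef if !@common;

        # the newest common snapshot wins; older leftovers are removed
        # on the next run
        my ($base) = sort {
            $local_snaps->{$b} <=> $local_snaps->{$a}
        } @common;
        return $base;
    }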
If last_sync is 0, the VM configuration has been stolen
(either manually or by HA restoration).
Under this condition, the replication snapshot should not be deleted,
as this snapshot is used to restore the replication state.
If last_sync is greater than 0 and does not match the snapshot name,
it must be a remnant of an earlier sync and should be deleted.
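A sketch of the described decision; $expected_name stands for the
snapshot the current job state refers to (all names are illustrative):

    sub should_remove_snapshot {
        my ($last_sync, $snap_name, $expected_name) = @_;
        # last_sync == 0: the config was stolen, keep the snapshot so the
        # replication state can be restored from it
        return 0 if $last_sync == 0;
        # a non-matching name is a remnant of an earlier sync
        return $snap_name ne $expected_name ? 1 : 0;
    }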
If a VM configuration has been manually moved or recovered by HA,
there is no job state on the new node.
In this case, the replication snapshots still exist on the remote side.
It must be possible to remove a job without state;
otherwise, a new replication job to the same remote node will fail
and the disks will have to be removed manually.
By searching through the sorted_volumes generated from the VMID.conf,
we can be sure that every disk will be removed in the event
of a complete job removal on the remote side.
In the end, remote_prepare_local_job calls a prepare on the remote side.
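A hedged sketch of a complete job removal on the remote side: walk the
volumes from the guest config in a stable order and drop the job's
replication snapshot from each one. The snapshot-delete helper is
hypothetical:

    sub remove_job_remote {
        my ($sorted_volumes, $snapname) = @_;
        for my $volid (@$sorted_volumes) {
            # delete the job's replication snapshot on every replicated disk
            volume_snapshot_delete($volid, $snapname);    # hypothetical helper
        }
    }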