This makes the description consistent with the other places that
have bwlimit as a parameter as well.
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
while it will work as is, autovivification can be a real PITA, so this
should make it more robust and might even avoid the occasional warning
about accessing undef values in the logs.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
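For context, a minimal Perl sketch of the autovivification pitfall the
message refers to; the hash keys are made up for illustration and not
taken from the actual patch:

    use strict;
    use warnings;

    my $conf = {};

    # merely *reading* a nested path autovivifies the intermediate levels:
    # after this line, $conf->{snapshots} and $conf->{snapshots}->{foo}
    # exist as empty hash refs, even though nothing was ever stored there
    my $unsafe = $conf->{snapshots}->{foo}->{vmstate};

    # more robust: check each level before descending, so nothing is
    # created as a side effect and undef is never dereferenced
    my $vmstate;
    if ($conf->{snapshots} && $conf->{snapshots}->{foo}) {
        $vmstate = $conf->{snapshots}->{foo}->{vmstate};
    }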
The method is intended to be used in cases where the volumes actually
got renamed (e.g. migration). Thus, updating the volume IDs should of
course also be done for pending changes to avoid changes referring to
now non-existent volumes or even the wrong existing volume.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
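A hedged sketch of the idea, not the actual method: apply an
old-to-new volume ID map to both the current config and its pending
section (the helper name and config layout here are hypothetical):

    # hypothetical helper, illustrating the idea only
    sub update_volume_ids_sketch {
        my ($conf, $volid_map) = @_;    # $volid_map: old volid => new volid

        my $replace_in = sub {
            my ($section) = @_;
            for my $key (keys %$section) {
                my $value = $section->{$key};
                next if !defined($value) || ref($value);
                for my $old (keys %$volid_map) {
                    $value =~ s/\Q$old\E/$volid_map->{$old}/g;
                }
                $section->{$key} = $value;
            }
        };

        $replace_in->($conf);
        $replace_in->($conf->{pending}) if $conf->{pending};    # also cover pending changes
    }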
tags must be unique, allow the user some control over how unique (case
sensitive) and honor the ordering settings (even if I doubt any
production setup wants to spend time and $$$ on cautiously reordering
all tags of their dozens to hundreds of virtual guests...).
Have some duplicate code to avoid checking too much in the loop
itself, as frequent branches can be more expensive.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
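Roughly, the idea in Perl; this is a simplified sketch, not the actual
implementation (the real code hoists the case-sensitivity branch out of
the loop instead of deciding per tag):

    sub unique_tags_sketch {
        my ($tags, $case_sensitive, $ordered) = @_;

        my (%seen, @unique);
        for my $tag (@$tags) {
            my $key = $case_sensitive ? $tag : lc($tag);
            push @unique, $tag if !$seen{$key}++;
        }

        # only re-order if the datacenter ordering setting asks for it
        return $ordered ? [sort @unique] : \@unique;
    }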
'get_allowed_tags':
returns the allowed tags for the given user
'assert_tag_permissions':
helper to check permissions for tag setting/updating/deleting
for both container and qemu-server
gets the list of allowed tags from the DataCenterConfig and the current
user permissions, and checks for each tag that is added/removed whether
the user has permission to modify it
'normal' tags require 'VM.Config.Options' on '/vms/<vmid>', but tags
that are not generally allowed (i.e. limited via 'user-tag-access' or
'privileged-tags' in the datacenter.cfg) require 'Sys.Modify' on '/'
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
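A hedged sketch of the described check; the variable names and call
site are assumptions, only the privilege/path pairs follow the text
above:

    sub assert_tag_permissions_sketch {
        my ($rpcenv, $authuser, $vmid, $changed_tags, $allowed_tags) = @_;

        for my $tag (@$changed_tags) {
            if ($allowed_tags->{$tag}) {
                # 'normal' tag: guest option permission is enough
                $rpcenv->check($authuser, "/vms/$vmid", ['VM.Config.Options']);
            } else {
                # restricted tag (user-tag-access / privileged-tags)
                $rpcenv->check($authuser, "/", ['Sys.Modify']);
            }
        }
    }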
we use the relatively new SectionConfig functionality for parsing and
writing unknown config types, that way we can directly use the
available base job plugin for vzdump jobs and update only those,
keeping the other jobs untouched.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
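For illustration, a sketch of how that can look; the module names, the
jobs.cfg path and especially the trailing 'allow unknown' flag on
parse_config/write_config are assumptions here, not taken from the
patch:

    use PVE::Tools;
    use PVE::Jobs::Plugin;    # assumed name of the base job plugin

    my $raw = PVE::Tools::file_get_contents('/etc/pve/jobs.cfg');
    # last argument: tolerate (and preserve) section types we have no plugin for
    my $cfg = PVE::Jobs::Plugin->parse_config('jobs.cfg', $raw, 1);

    for my $id (sort keys %{$cfg->{ids}}) {
        my $job = $cfg->{ids}->{$id};
        next if $job->{type} ne 'vzdump';
        # ... update only the vzdump jobs here ...
    }

    PVE::Tools::file_set_contents(
        '/etc/pve/jobs.cfg',
        PVE::Jobs::Plugin->write_config('jobs.cfg', $cfg, 1),
    );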
We need access to vzdump type jobs at this level, else we cannot do
things like removing VMIDs on purge of their guest.
So split out the independent part (all but the actual run method)
from pve-manager's PVE::Jobs::VZDump module.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
that way we'll be able to re-use it for adding support for cleaning
out vzdump jobs from the newish job.cfg
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Initially, to be used for tuning backup performance with QEMU.
A few users reported IO-related issues during backup after upgrading
to PVE 7.x; using a modified QEMU build with max-workers reduced from
16 to 8 helped them [0].
Also generalizes the way vzdump property strings are handled, for
easier extension in the future.
[0]: https://forum.proxmox.com/threads/113790/
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
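For reference, on a version that ships this option the property string
can then be set like the following (illustrative example, not part of
the patch itself):

    # /etc/vzdump.conf (or on a specific backup job)
    performance: max-workers=8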
Only print it when there is a snapshot that would've been removed
without the safeguard. Mostly relevant when a new volume is added to
an already replicated guest.
Fixes replication tests in pve-manager.
Fixes: c0b2948 ("replication: prepare: safeguard against removal if expected snapshot is missing")
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Such a check would also have prevented the issue in 1aa4d84
("ReplicationState: purge state from non local vms") and other
scenarios where state and disk state are inconsistent with regard to
the last_sync snapshot.
AFAICT, all existing callers intending to remove all snapshots use
last_sync=1, so changing the behavior for other (non-zero) values
should be fine.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
This prevents left-over volume(s) in the following situation:
1. replication with volumes on different storages A and B
2. remove all volumes on storage B from the guest configuration
3. schedule full removal before the next normal replication runs
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
because prepare() was changed in 8d1cd44 ("partially fix #3111:
replication: be less picky when selecting incremental base") to return
all local snapshots.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Commit 8d1cd44 ("partially fix #3111: replication: be less picky when
selecting incremental base") changed prepare() to return all local
snapshots.
Special behavior regarding last_sync is also better mentioned
explicitly.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
if we have multiple jobs for the same vmid with the same schedule,
the last_sync, next_sync and vmid will always be the same, so the order
depends on the order of the $jobs hash (which is random; thanks perl).
To get a fixed order, also take the jobid into consideration.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
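A minimal sketch of the resulting ordering; the data is made up, in the
real code $jobs comes from the replication config and state:

    use strict;
    use warnings;

    my $jobs = {
        '100-0' => { next_sync => 1000, vmid => 100 },
        '100-1' => { next_sync => 1000, vmid => 100 },    # same schedule => same next_sync
    };

    # sort by next_sync first; on a tie fall back to the job id, so the
    # order no longer depends on perl's randomized hash ordering
    my @ordered = sort {
        $jobs->{$a}->{next_sync} <=> $jobs->{$b}->{next_sync}
            || $a cmp $b
    } keys %$jobs;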
when running replication, we don't want to keep replication states for
non-local vms. Normally this would not be a problem, since on migration,
we transfer the states anyway, but when the ha-manager steals a vm, it
cannot do that. In that case, having an old state lying around is
harmful, since the code does not expect the state to be out-of-sync
with the actual snapshots on disk.
One such problem is the following:
Replicate vm 100 from node A to nodes B and C, and activate HA. When
node A dies, the vm will be relocated to e.g. node B and replication
starts from there. If node B now had an old state lying around for its
sync to node C, it might delete the common base snapshots of B and C
and could not sync again.
Deleting the state for all non-local guests fixes that issue, since
replication then always starts fresh, and the potentially existing old
state cannot be valid anyway, as we just relocated the vm here (from a
dead node).
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reviewed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
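The gist of it as a hedged sketch; names are illustrative, not the
actual PVE::ReplicationState code:

    sub purge_non_local_states_sketch {
        my ($states, $local_vmids) = @_;    # $states: vmid => replication state

        my %is_local = map { $_ => 1 } @$local_vmids;
        for my $vmid (keys %$states) {
            # a state for a guest that is no longer local cannot be trusted
            delete $states->{$vmid} if !$is_local{$vmid};
        }
    }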
In command_line(), notes are printed quoted, but otherwise as-is,
which is a bit ugly for multi-line notes. But they are part of the
command line, so print them.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
So the repeat frequency for a stuck job is now:
t0 -> fails
t1 = t0 + 5m -> repeat
t2 = t1 + 10m = t0 + 15m -> repeat
t3 = t2 + 15m = t0 + 30m -> repeat
t4 = t3 + 30m = t0 + 60m -> repeat
then
tx = t(x-1) + 30m -> repeat
So, we converge more naturally/stably to the 30m intervals than
before, where t3 would have been t0 + 45m.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
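The schedule above boils down to something like this (a sketch, not
the literal patch):

    # delay in minutes until the next retry of a stuck job
    sub next_retry_delay_sketch {
        my ($fail_count) = @_;    # 1 for the first retry, 2 for the second, ...

        my @delays = (5, 10, 15);    # afterwards we stay at 30m
        return $fail_count <= scalar(@delays) ? $delays[$fail_count - 1] : 30;
    }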
previous:
> `-> foo                  2021-05-28 12:59:36    no-description
>      `-> bar             2021-06-18 12:44:48    no-description
>           `-> current                           You are here!
now:
> `-> foo                  2021-05-28 12:59:36    no-description
>  `-> bar                 2021-06-18 12:44:48    no-description
>   `-> current                                   You are here!
So it requires less space, allowing deeper snapshot trees to still be
displayed nicely, and looks even better while doing that - the latter
may be subjective though.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
If a user has many snapshots, the length heuristic can go negative
and produce wrong indentation, so clamp it at 0.
Reported in the forum: https://forum.proxmox.com/threads/105740/
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
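The fix amounts to clamping the computed length, roughly like this
(names are made up, not from the real code):

    sub clamped_indent_len_sketch {
        my ($available, $used) = @_;
        my $len = $available - $used;
        return $len < 0 ? 0 : $len;    # never go negative
    }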