'get_allowed_tags':
returns the allowed tags for the given user

'assert_tag_permissions':
helper to check permissions for tag setting/updating/deleting,
for both container and qemu-server.
gets the list of allowed tags from the DataCenterConfig and the current
user permissions, and checks for each added/removed tag whether the
user has permission to modify it.
'normal' tags require 'VM.Config.Options' on '/vms/<vmid>', but tags
that are not allowed (either restricted via 'user-tag-access' or listed
in 'privileged-tags' in the datacenter.cfg) require 'Sys.Modify' on '/'
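a rough sketch of the intended check (only $rpcenv->check() mirrors the
existing permission API; the surrounding helper and variable names are
illustrative, not the actual implementation):

    my $assert_tag_permissions = sub {
        my ($rpcenv, $authuser, $vmid, $changed_tags, $restricted_tags) = @_;

        for my $tag (@$changed_tags) {
            if ($restricted_tags->{$tag}) {
                # tags outside the allowed set need Sys.Modify on /
                $rpcenv->check($authuser, "/", ['Sys.Modify']);
            } else {
                # normal tags only need VM.Config.Options on the guest path
                $rpcenv->check($authuser, "/vms/$vmid", ['VM.Config.Options']);
            }
        }
    };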
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
we use the relatively new SectionConfig functionality that allows
parsing/writing unknown config types; that way we can directly use the
available base job plugin for vzdump jobs and update only those,
keeping the other jobs untouched.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
We need access to vzdump type jobs at this level, else we cannot do
things like removing VMIDs on purge of their guest.
So split out the independent part (all but the actual run method)
from pve-manager's PVE::Jobs::VZDump module.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
that way we'll be able to re-use it for adding support for cleaning
out vzdump jobs from the newish job.cfg
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Initially, to be used for tuning backup performance with QEMU.
A few users reported IO-related issues during backup after upgrading
to PVE 7.x; using a modified QEMU build with max-workers reduced from
16 to 8 helped them [0].
Also generalizes the way vzdump property strings are handled, for
easier extension in the future.
[0]: https://forum.proxmox.com/threads/113790/
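for illustration, the resulting property string could be set roughly
like this (a hedged usage example, not taken from this series'
documentation):

    # e.g. in /etc/vzdump.conf
    performance: max-workers=8

    # or for a single run
    vzdump 100 --performance max-workers=8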
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Only print it when there is a snapshot that would've been removed
without the safeguard. Mostly relevant when a new volume is added to
an already replicated guest.
Fixes replication tests in pve-manager.
Fixes: c0b2948 ("replication: prepare: safeguard against removal if expected snapshot is missing")
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Such a check would also have prevented the issue in 1aa4d84
("ReplicationState: purge state from non local vms") and other
scenarios where state and disk state are inconsistent with regard to
the last_sync snapshot.
AFAICT, all existing callers intending to remove all snapshots use
last_sync=1, so changing the behavior for other (non-zero) values
should be fine.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
This prevents left-over volume(s) in the following situation:
1. replication with volumes on different storages A and B
2. remove all volumes on storage B from the guest configuration
3. schedule full removal before the next normal replication runs
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
because prepare() was changed in 8d1cd44 ("partially fix #3111:
replication: be less picky when selecting incremental base") to return
all local snapshots.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Commit 8d1cd44 ("partially fix #3111: replication: be less picky when
selecting incremental base") changed prepare() to return all local
snapshots.
Special behavior regarding last_sync is also better mentioned
explicitly.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
if we have multiple jobs for the same vmid with the same schedule,
the last_sync, next_sync and vmid will always be the same, so the order
depends on the order of the $jobs hash (which is random; thanks, perl).
to have a fixed order, also take the jobid into consideration
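a minimal sketch of the resulting tie-break (field names are
illustrative, not the actual code):

    my @sorted_ids = sort {
        $jobs->{$a}->{next_sync} <=> $jobs->{$b}->{next_sync}
            || $jobs->{$a}->{vmid} <=> $jobs->{$b}->{vmid}
            || $a cmp $b   # job ID as final tie-break for a stable order
    } keys %$jobs;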
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
when running replication, we don't want to keep replication states for
non-local vms. Normally this would not be a problem, since on migration,
we transfer the states anyway, but when the ha-manager steals a vm, it
cannot do that. In that case, having an old state lying around is
harmful, since the code does not expect the state to be out-of-sync
with the actual snapshots on disk.
One such problem is the following:
Replicate vm 100 from node A to node B and C, and activate HA. When node
A dies, the vm will be relocated to e.g. node B and start replicating
from there. If node B now had an old state lying around for its sync to
node C, it might delete the common base snapshots of B and C and then
cannot sync again.
Deleting the state for all non-local guests fixes that issue, since it
always starts fresh, and the potentially existing old state cannot be
valid anyway since we just relocated the vm here (from a dead node).
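a rough sketch of the purge (variable names are assumptions, not the
actual code):

    # drop replication state entries for guests that are not local anymore
    for my $vmid (keys %$stateobj) {
        delete $stateobj->{$vmid} if !$local_vms->{$vmid};
    }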
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reviewed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
In command_line(), notes are printed quoted, but otherwise as-is,
which is a bit ugly for multi-line notes. But the option is part of
the command line, so print it.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
So the repeat frequency for a stuck job is now:
t0 -> fails
t1 = t0 + 5m -> repeat
t2 = t1 + 10m = t0 + 15m -> repeat
t3 = t2 + 15m = t0 + 30m -> repeat
t4 = t3 + 30m = t0 + 60m -> repeat
then
tx = tx-1 + 30m -> repeat
So we converge more naturally/stably to the 30m intervals than
before, when t3 would have been t0 + 45m.
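expressed as code, the delay between attempts roughly follows this
(sketch only, function name illustrative; numbers taken from the
schedule above):

    sub next_sync_delay {
        my ($fail_count) = @_;
        # 5, 10, 15 minutes for the first three failures, then 30 minutes
        return 60 * ($fail_count <= 3 ? 5 * $fail_count : 30);
    }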
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
previous:
> `-> foo 2021-05-28 12:59:36 no-description
> `-> bar 2021-06-18 12:44:48 no-description
> `-> current You are here!
now:
> `-> foo 2021-05-28 12:59:36 no-description
> `-> bar 2021-06-18 12:44:48 no-description
> `-> current You are here!
So it requires less space, allowing deeper snapshot trees to still be
displayed nicely, and it looks even better while doing that - the
latter may be subjective though.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
If a user has many snapshots, the length heuristic can go negative
and produce wrong indentation, so clamp it at 0.
Reported in the forum: https://forum.proxmox.com/threads/105740/
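the fix boils down to a simple clamp (variable name illustrative):

    $indent_length = 0 if $indent_length < 0;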
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
given a source and a target storage, query either locally or both
locally and remotely, and merge the results.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
remote migration always has an explicit endpoint from the start which
gets used for everything.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
For snapshot creation, the storage for the vmstate file is activated
via vdisk_alloc when the state file is created.
Do not activate the volumes themselves, as that has unnecessary side
effects (e.g. waiting for zvol device link for ZFS, mapping the volume
for RBD). If a storage can only do snapshot operations on a volume
that has been activated, it needs to activate the volume itself.
The actual implementation will be in the plugins, to be able to skip
CD-ROM drives, bind-mounts, etc.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
for new clones this should not happen anyway, as the API calls now
clean such parent references up, but for old ones it can still happen,
so log a message to inform the user.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
If pvesr was terminated after finishing with the new sync and after
removing old replication snapshots, but before it could write the new
state, the next replication would fail. It would wrongly interpret the
actual last replication snapshot as stale, remove it, and (if no other
snapshots are present) attempt a full sync, which would fail.
Reported in the community forum [0], this was brought to light by the
new pvescheduler before it learned graceful reload.
It's not possible to simply preserve a last remaining snapshot in
prepare(), because prepare() is also used for valid removals. Instead,
update last_sync early enough. Stale snapshots will still be removed
on the next run if there are any.
[0]: https://forum.proxmox.com/threads/100154
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
which is now available from the storage back-end.
The benefits are:
1. Ability to detect different snapshots even if they have the same
name. Rather hard to reach, but for example with:
Snapshots A -> B -> C -> __replicationXYZ
Remove B, rollback to C (causes __replicationXYZ to be removed),
create a new snapshot B. Previously, B was selected as replication
base, but it didn't match on source and target. Now, C is correctly
selected.
2. Smaller delta in some cases, by not preferring replication snapshots
over config snapshots, but using the most recent possible one from
both types of snapshots.
3. Less code complexity for snapshot selection.
If the remote side is old, it's not possible to detect mismatch of
distinct snapshots with the same name, but the timestamps from the
local side can still be used.
Still limit to our snapshots (config and replication), because we
don't have guarantees for other snapshots (could be deleted in the
meantime, name might not fit import/export regex, etc.).
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
This is backwards compatible, because existing users of prepare() only
rely on the elements to evaluate to true or be defined.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
By using a single loop instead. Should make the code more readable,
but also more efficient.
Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
and abort if it does and --force is not specified.
After rollback, the rollback snapshot might still be needed as the
base for incremental replication, because rollback removes (blocking)
replication snapshots.
It's not enough to limit the check to the most recent snapshot,
because new snapshots might've been created between rollback and
remove.
It's not enough to limit the check to snapshots without a parent (i.e.
in case of ZFS, the oldest), because some volumes might've been added
only after that, meaning the oldest snapshot is not an incremental
replication base for them.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
After rollback, it might be necessary to start the replication from an
earlier, possibly non-replication, snapshot, because the replication
snapshot might have been removed from the source node. Previously,
replication could only recover in case the current parent snapshot was
already replicated.
To get into the bad situation (with no replication happening between
the steps):
1. have existing replication
2. take new snapshot
3. rollback to that snapshot
In case the partial fix to only remove blocking replication snapshots
for rollback was already applied, an additional step is necessary to
get into the bad situation:
4. take a second new snapshot
Since non-replication snapshots, for which no timestamp is readily
available, are now also included, it is necessary to filter them out
when probing for replication snapshots.
If no common replication snapshot is present, iterate backwards
through the config snapshots.
The changes are backwards compatible:
If one side is running the old code, and the other the new code,
the fact that one of the two prepare() calls does not return the
new additional snapshot candidates, means that no new match is
possible. Since the new prepare() returns a superset, no previously
possible match is now impossible.
The branch with @desc_sorted_snap is now taken more often, but
it can still only be taken when the volume exists on the remote side
(and has snapshots). In such cases, it is safe to die if no
incremental base snapshot can be found, because a full sync would not
be possible.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
by using the new $blockers parameter. No longer remove all replication
snapshots from affected volumes unconditionally, but check first if
all blocking snapshots are replication snapshots. If they are, remove
them and proceed with rollback. If they are not, die without removing
any.
For backwards compatibility, it's still necessary to remove all
replication snapshots if $blockers is not available.
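a rough sketch of the new check (names, including the snapshot name
prefix, are assumptions rather than the actual code):

    sub check_rollback_blockers {
        my ($blockers, $snapname) = @_;

        # old storage code does not pass $blockers: fall back to the previous
        # behavior and remove all replication snapshots unconditionally
        return if !defined($blockers);

        for my $snap (@$blockers) {
            die "snapshot '$snap' blocks rollback to '$snapname' and is not a replication snapshot\n"
                if $snap !~ m/^__replicate_/;
        }
        # all blocking snapshots are replication snapshots -> safe to remove
        # them and proceed with the rollback
    }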
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>