mirror of git://git.proxmox.com/git/pve-guest-common.git synced 2024-12-23 17:34:10 +03:00
Commit Graph

271 Commits

Fiona Ebner
4c1bd50289 replication: rename last_snapshots to local_snapshots
because prepare() was changed in 8d1cd44 ("partially fix #3111:
replication: be less picky when selecting incremental base") to return
all local snapshots.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2022-08-02 11:05:45 +02:00
Fiona Ebner
efe85efbb7 replication: prepare: adapt/expand function comment
Commit 8d1cd44 ("partially fix #3111: replication: be less picky when
selecting incremental base") changed prepare() to return all local
snapshots.

Special behavior regarding last_sync is also better mentioned
explicitly.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2022-08-02 11:05:38 +02:00
Dominik Csapak
f1fc7d6c61 ReplicationState: deterministically order replication jobs
if we have multiple jobs for the same vmid with the same schedule,
the last_sync, next_sync and vmid will always be the same, so the order
depends on the order of the $jobs hash (which is random; thanks, Perl).

To get a fixed order, also take the jobid into consideration.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
2022-06-08 08:48:04 +02:00
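A minimal sketch of the tie-breaking order described above, with an illustrative $jobs hash (field names are assumptions, not the actual PVE::ReplicationState internals):

    use strict;
    use warnings;

    # Two jobs for the same vmid with identical sync times used to come out
    # in Perl's randomized hash order; the jobid tie-breaker fixes that.
    my $jobs = {
        '100-1' => { vmid => 100, last_sync => 10, next_sync => 20 },
        '100-0' => { vmid => 100, last_sync => 10, next_sync => 20 },
    };

    my @order = sort {
        $jobs->{$a}->{next_sync} <=> $jobs->{$b}->{next_sync}
            || $jobs->{$a}->{last_sync} <=> $jobs->{$b}->{last_sync}
            || $jobs->{$a}->{vmid} <=> $jobs->{$b}->{vmid}
            || $a cmp $b    # jobid as deterministic tie-breaker
    } keys %$jobs;

    print "@order\n";    # always: 100-0 100-1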
Dominik Csapak
1aa4d844a1 ReplicationState: purge state from non-local vms
when running replication, we don't want to keep replication states for
non-local vms. Normally this would not be a problem, since on migration,
we transfer the states anyway, but when the ha-manager steals a vm, it
cannot do that. In that case, having an old state lying around is
harmful, since the code does not expect the state to be out-of-sync
with the actual snapshots on disk.

One such problem is the following:

Replicate vm 100 from node A to nodes B and C, and activate HA. When node
A dies, the vm will be relocated to e.g. node B and start replicating from
there. If node B now had an old state lying around for its sync to node
C, it might delete the common base snapshots of B and C and then cannot
sync again.

Deleting the state for all non-local guests fixes that issue, since
replication then always starts fresh, and any old state lying around
cannot be valid anyway, since we just relocated the vm here (from a dead
node).

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reviewed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
2022-06-08 08:48:04 +02:00
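A hedged sketch of the purge described above; names and data layout are hypothetical, not the actual PVE::ReplicationState API:

    use strict;
    use warnings;

    # Drop replication state entries for guests that are not local to this
    # node, so a stolen/relocated vm always starts with a fresh state.
    sub purge_non_local_states {
        my ($states, $local_vmids) = @_;
        for my $vmid (keys %$states) {
            delete $states->{$vmid} if !$local_vmids->{$vmid};
        }
    }

    my $states = { 100 => { last_sync => 1 }, 101 => { last_sync => 2 } };
    my $local  = { 101 => 1 };    # only vm 101 lives here now
    purge_non_local_states($states, $local);
    print join(',', keys %$states), "\n";    # 101 - stale state for 100 is gone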
Thomas Lamprecht
3802f3ddd0 vzdump config: limit notes template to a maximum of 1024 characters
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-06-08 08:47:59 +02:00
Fabian Ebner
8ec108b341 vzdump: update notes-template description
as the actual handling in pve-manager changed a bit.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2022-04-28 15:06:43 +02:00
Thomas Lamprecht
54fba1b4a9 bump version to 4.1-2
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-04-27 18:51:21 +02:00
Fabian Ebner
51cbaa3f93 vzdump: schema: add 'notes-template' and 'protected' properties
In command_line(), notes are printed quoted, but otherwise as-is,
which is a bit ugly for multi-line notes. But they are part of the
command line, so print them.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2022-04-27 10:45:59 +02:00
Thomas Lamprecht
73a3e4cb23 replication config: retry the first three failures quicker before going to 30m
So the repeat frequency for a stuck job is now:
t0 -> fails
t1 = t0 +  5m -> repeat
t2 = t1 + 10m = t0 + 15m -> repeat
t3 = t2 + 15m = t0 + 30m -> repeat
t4 = t3 + 30m = t0 + 60m -> repeat
then
tx = tx-1 + 30m -> repeat

So we converge more naturally/stably to the 30m intervals than
before, when t3 would have been t0 + 45m.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-04-27 09:59:26 +02:00
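The schedule above boils down to a small formula; a sketch of the assumed shape, not the actual config code:

    use strict;
    use warnings;

    # Delay before the next retry after N consecutive failures:
    # 5m, 10m, 15m, then a constant 30m.
    sub next_try_delay {
        my ($fail_count) = @_;
        my $mins = $fail_count <= 3 ? $fail_count * 5 : 30;
        return $mins * 60;    # seconds
    }

    printf "%d failure(s) -> %d min\n", $_, next_try_delay($_) / 60 for 1 .. 5;
    # 1 -> 5, 2 -> 10, 3 -> 15, 4 -> 30, 5 -> 30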
Thomas Lamprecht
3bf8e49a94 replication config: code cleanup
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-04-27 09:58:41 +02:00
Thomas Lamprecht
e6e1550049 print snapshot tree: reduce indentation delta per level
previous:

> `-> foo                         2021-05-28 12:59:36 no-description
>   `-> bar                       2021-06-18 12:44:48 no-description
>     `-> current                                     You are here!

now:

> `-> foo                         2021-05-28 12:59:36 no-description
>  `-> bar                        2021-06-18 12:44:48 no-description
>   `-> current                                       You are here!

So it requires less space, allowing deeper snapshot trees to still be
displayed nicely, and looks even better while doing that - though the
latter may be subjective.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-03-01 13:06:35 +01:00
Dominik Csapak
9fca8f9d5e print snapshot tree: clamp indentation length to positive
If a user has many snapshots, the length heuristic can go negative
and produce wrong indentation, so clamp it at 0.

Reported in the forum: https://forum.proxmox.com/threads/105740/

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-03-01 12:59:38 +01:00
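The fix amounts to clamping the computed length; a minimal sketch of the idea:

    use strict;
    use warnings;

    # With many snapshot levels the width heuristic can go negative;
    # clamping at 0 keeps the tree aligned instead of corrupting it.
    my $computed_len = 20 - 3 * 8;    # -4 in this constructed example
    my $indent_len = $computed_len > 0 ? $computed_len : 0;
    print ' ' x $indent_len, "`-> current\n";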
Thomas Lamprecht
6a5f25ee19 bump version to 4.1-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-02-09 18:28:20 +01:00
Fabian Grünbichler
1fa3dc1994 add storage tunnel module
encapsulating storage-related tunnel methods, currently
- source-side storage-migrate helper
- target-side disk-import handler
- target-side query-disk-import handler
- target-side bwlimit handler

to be extended further with replication-related handlers and helpers.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-02-09 18:22:29 +01:00
Fabian Grünbichler
d88d2066a5 add tunnel helper module
lifted from PVE::QemuMigrate, abstracting away use-case specific data.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-02-09 18:20:55 +01:00
Fabian Grünbichler
74c26370c0 migrate: add get_bwlimit helper
given a source and a target storage, query the bandwidth limit either
locally or both locally and remotely, and merge the result.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-02-09 18:20:42 +01:00
Fabian Grünbichler
42a84dc9e1 migrate: handle migration_network with remote migration
remote migration always has an explicit endpoint from the start which
gets used for everything.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-02-04 17:36:44 +01:00
Thomas Lamprecht
1a400a9ea8 abstract config: fix implement-me comment
this is internal and doesn't need to wait until the next major release

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-02-04 17:36:06 +01:00
Fabian Ebner
a68bfdb1ee config: activate affected storages for snapshot operations
For snapshot creation, the storage for the vmstate file is activated
via vdisk_alloc when the state file is created.

Do not activate the volumes themselves, as that has unnecessary side
effects (e.g. waiting for zvol device link for ZFS, mapping the volume
for RBD). If a storage can only do snapshot operations on a volume
that has been activated, it needs to activate the volume itself.

The actual implementation will be in the plugins, to be able to skip
CD-ROM drives, bind-mounts, etc.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2022-01-28 14:28:35 +01:00
Fabian Ebner
9643bddd3a config: remove unused variable
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2022-01-28 14:28:35 +01:00
Thomas Lamprecht
2fb36c40c8 snapshot prepare: log on parent-cycle deletion
for new clones this should not happen anyway, as the API calls clean
such parent references up now, but for old ones it can still happen, so
log to inform the user.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-30 08:17:04 +01:00
Oguz Bektas
685a524ea3 snapshots: delete parent property if new snapshot name is already a parent to existing one
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
Tested-by: Hannes Laimer <h.laimer@proxmox.com>
2021-11-30 07:59:55 +01:00
Fabian Ebner
244583a40b replication: prepare: simplify code
No functional change is intended.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-29 10:50:36 +01:00
Fabian Ebner
ff574bf8d2 replication: update last_sync before removing old replication snapshots
If pvesr was terminated after finishing with the new sync and after
removing old replication snapshots, but before it could write the new
state, the next replication would fail. It would wrongly interpret the
actual last replication snapshot as stale, remove it, and (if no other
snapshots are present) attempt a full sync, which would fail.

Reported in the community forum [0], this was brought to light by the
new pvescheduler before it learned graceful reload.

It's not possible to simply preserve a last remaining snapshot in
prepare(), because prepare() is also used for valid removals. Instead,
update last_sync early enough. Stale snapshots will still be removed
on the next run if there are any.

[0]: https://forum.proxmox.com/threads/100154

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-29 10:50:36 +01:00
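A stubbed sketch of the reordering this commit describes (helper names are hypothetical):

    use strict;
    use warnings;

    sub sync_volumes        { print "sync done\n" }
    sub write_state         { my ($s) = @_; print "state written: last_sync=$s->{last_sync}\n" }
    sub prune_old_snapshots { print "old replication snapshots removed\n" }

    my $state = {};
    sync_volumes();
    $state->{last_sync} = time();
    write_state($state);       # now happens *before* the pruning step, so a
    prune_old_snapshots();     # kill in between can't desync state and disk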
Fabian Grünbichler
7d604b5bbd bump version to 4.0-3
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-11-09 13:17:37 +01:00
Fabian Ebner
2511f525f5 config: snapshot delete check: use volume_snapshot_info
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-08 10:35:38 +01:00
Fabian Ebner
b20bf9bf7d replication: find common snapshot: use additional information
which is now available from the storage back-end.

The benefits are:

1. Ability to detect different snapshots even if they have the same
name. Rather hard to reach, but for example with:
Snapshots A -> B -> C -> __replicationXYZ
Remove B, rollback to C (causes __replicationXYZ to be removed),
create a new snapshot B. Previously, B was selected as replication
base, but it didn't match on source and target. Now, C is correctly
selected.
2. Smaller delta in some cases by not preferring replication snapshots
over config snapshots, but using the most recent possible one from
both types of snapshots.
3. Less code complexity for snapshot selection.

If the remote side is old, it's not possible to detect mismatch of
distinct snapshots with the same name, but the timestamps from the
local side can still be used.

Still limit to our snapshots (config and replication), because we
don't have guarantees for other snapshots (could be deleted in the
meantime, name might not fit import/export regex, etc.).

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-08 10:35:34 +01:00
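A hedged sketch of the matching logic: pick the most recent snapshot that exists on both sides with the same timestamp (the hash layout is an assumption; the real data comes from volume_snapshot_info):

    use strict;
    use warnings;

    sub find_common_base {
        my ($local, $remote) = @_;    # name => { timestamp => ... }
        my @common = grep {
            exists $remote->{$_}
                && (!defined $remote->{$_}{timestamp}    # old remote side: no info
                    || $remote->{$_}{timestamp} == $local->{$_}{timestamp})
        } keys %$local;
        my ($best) = sort { $local->{$b}{timestamp} <=> $local->{$a}{timestamp} } @common;
        return $best;
    }

    # B was re-created after a rollback, so its timestamps differ -> C wins.
    my $base = find_common_base(
        { B => { timestamp => 9 }, C => { timestamp => 3 } },
        { B => { timestamp => 2 }, C => { timestamp => 3 } },
    );
    print "$base\n";    # C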
Fabian Ebner
3200c404a9 replication: prepare: return additional information about snapshots
This is backwards compatible, because existing users of prepare() only
rely on the elements to evaluate to true or be defined.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-08 10:35:34 +01:00
Fabian Ebner
84fc20aa37 replication: refactor finding most recent common replication snapshot
By using a single loop instead. Should make the code more readable,
but also more efficient.

Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-08 10:35:34 +01:00
Fabian Ebner
602ca77cdb fix #3111: config: snapshot delete: check if replication still needs it
and abort if it does and --force is not specified.

After rollback, the rollback snapshot might still be needed as the
base for incremental replication, because rollback removes (blocking)
replication snapshots.

It's not enough to limit the check to the most recent snapshot,
because new snapshots might've been created between rollback and
remove.

It's not enough to limit the check to snapshots without a parent (i.e.
in case of ZFS, the oldest), because some volumes might've been added
only after that, meaning the oldest snapshot is not an incremental
replication base for them.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-08 10:34:14 +01:00
Fabian Ebner
8d1cd44345 partially fix #3111: replication: be less picky when selecting incremental base
After rollback, it might be necessary to start the replication from an
earlier, possibly non-replication, snapshot, because the replication
snapshot might have been removed from the source node. Previously,
replication could only recover in case the current parent snapshot was
already replicated.

To get into the bad situation (with no replication happening between
the steps):
1. have existing replication
2. take new snapshot
3. rollback to that snapshot
In case the partial fix to only remove blocking replication snapshots
for rollback was already applied, an additional step is necessary to
get into the bad situation:
4. take a second new snapshot

Since non-replication snapshots are now also included, where no
timestamp is readily available, it is necessary to filter them out
when probing for replication snapshots.

If no common replication snapshot is present, iterate backwards
through the config snapshots.

The changes are backwards compatible:

If one side is running the old code and the other the new code, the
fact that one of the two prepare() calls does not return the new
additional snapshot candidates means that no new match is possible.
Since the new prepare() returns a superset, no previously possible
match is now impossible.

The branch with @desc_sorted_snap is now taken more often, but
it can still only be taken when the volume exists on the remote side
(and has snapshots). In such cases, it is safe to die if no
incremental base snapshot can be found, because a full sync would not
be possible.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-08 10:34:00 +01:00
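A hedged sketch of the fallback: with no common replication snapshot, walk the config snapshots from newest to oldest and take the first one the remote side also has ('snaptime' as the ordering field is an assumption):

    use strict;
    use warnings;

    sub find_fallback_base {
        my ($conf_snaps, $remote_snaps) = @_;
        for my $snap (
            sort { $conf_snaps->{$b}{snaptime} <=> $conf_snaps->{$a}{snaptime} }
            keys %$conf_snaps
        ) {
            return $snap if $remote_snaps->{$snap};    # newest common config snapshot
        }
        return undef;
    }

    my $base = find_fallback_base(
        { old => { snaptime => 1 }, new => { snaptime => 5 } },
        { old => 1, new => 1 },
    );
    print "$base\n";    # new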
Fabian Ebner
c05dc937d4 replication: pass guest config to find_common_replication_snapshot
in preparation to iterate over all config snapshots when necessary.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-08 10:34:00 +01:00
Fabian Ebner
fbbeb87225 replication: remove unused variable and style fixes
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-08 10:34:00 +01:00
Fabian Ebner
45c0b7554c partially fix #3111: further improve removing replication snapshots
by using the new $blocker parameter. No longer remove all replication
snapshots from affected volumes unconditionally, but check first if
all blocking snapshots are replication snapshots. If they are, remove
them and proceed with rollback. If they are not, die without removing
any.

For backwards compatibility, it's still necessary to remove all
replication snapshots if $blockers is not available.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-08 10:34:00 +01:00
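A minimal sketch of that check, assuming the __replicate_ naming prefix used for replication snapshots:

    use strict;
    use warnings;

    # Only auto-remove when *every* blocking snapshot is a replication
    # snapshot; otherwise die without touching anything.
    my $blockers = ['__replicate_100-0_1636357000__'];
    my @non_repl = grep { $_ !~ /^__replicate_/ } @$blockers;
    die "can't rollback, snapshots in the way: @non_repl\n" if @non_repl;
    print "removing blocking replication snapshots, proceeding with rollback\n";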
Fabian Ebner
a9bc9b3c89 config: rollback: factor out helper for removing replication snapshots
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-08 10:34:00 +01:00
Fabian Ebner
2dfe62927b partially fix #3111: snapshot rollback: improve removing replication snapshots
Get the replicatable volumes from the snapshot config rather than the
current config. And filter those volumes further to those that will
actually be rolled back.

Previously, a volume that only had replication snapshots (e.g. because
it was added after the snapshot was taken, or the vmstate volume)
would lose them. Then, on the next replication run, such a volume
would lead to an error, because replication would attempt a full sync,
but the target volume still exists.

This is not a complete fix. It is still possible to run into problems:
- by removing the last (non-replication) snapshots after a rollback
  before replication can run once.
- by creating a snapshot and making a rollback before replication can
  run once.

The list of volumes is not required to be sorted for prepare(), but it
now follows the order in which foreach_volume() iterates, so it is no
longer random.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-08 10:34:00 +01:00
Fabian Grünbichler
239fe671c3 build: switch upload target to bullseye
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-06-09 11:39:15 +02:00
Fabian Grünbichler
523e947366 bump version to 4.0-2
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-06-09 10:07:42 +02:00
Fabian Ebner
60796d5fbb vzdump: defaults: keep all backups by default for 7.0
and switch to using prune-backups instead of maxfiles.

Storages created via the web UI already defaulted to keeping all backups;
switch to this safer default here as well.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-06-08 14:57:31 +02:00
Fabian Ebner
a58366f460 vzdump: remove deprecated size parameter
It had already been deprecated for a long time (since before it got moved
to guest-common), and there was also a deprecation warning when it was
passed as a CLI option.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-06-08 14:57:31 +02:00
Thomas Lamprecht
71066e627e bump version to 4.0-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-12 13:08:53 +02:00
Thomas Lamprecht
dfcc0de52d d/control: increase compat level to 12
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-12 13:03:34 +02:00
Thomas Lamprecht
873b9de294 d/control: update meta information
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-12 13:03:09 +02:00
Thomas Lamprecht
960c85be38 buildsys: split packaging and source build-systems
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-09 20:10:14 +02:00
Fabian Ebner
1c527dfe62 mention prune behavior for the remove parameter
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-03-05 21:24:39 +01:00
Thomas Lamprecht
de1ae1652c bump version to 3.1-5
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-02-19 16:32:26 +01:00
Fabian Ebner
17b5185b77 vzdump: mailto: use email-or-username-list format
because it is a more complete pattern. Also, 'mailto' was a '-list' format in
PVE 6.2 and earlier, so this also fixes whitespace-related backwards
compatibility. In particular, this fixes creating a backup job in the GUI
without setting an address, which passes along ''.

For example,
> vzdump 153 --mailto " ,,,admin@proxmox.com;;; developer@proxmox.com , ; "
was valid and worked in PVE 6.2.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-02-19 16:29:58 +01:00
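The looser handling can be illustrated with a small normalization sketch (not the actual format implementation in PVE::JSONSchema):

    use strict;
    use warnings;

    # Accept comma-, semicolon- and whitespace-separated entries and drop
    # empty ones, so inputs like '' or the example above still parse.
    sub parse_mailto_sketch {
        my ($raw) = @_;
        return grep { length } split /[,;\s]+/, ($raw // '');
    }

    my @addrs = parse_mailto_sketch(" ,,,admin\@proxmox.com;;; developer\@proxmox.com , ; ");
    print "$_\n" for @addrs;    # admin@proxmox.com / developer@proxmox.com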
Fabian Ebner
7a9b527f54 vzdump: command line: make sure mailto is comma-separated
In addition to relying on shellquote(), it's still nice to avoid printing
out unnecessary whitespace, especially newlines.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-02-19 16:29:58 +01:00
Fabian Ebner
9e542a4f90 vzdump: command line: refactor handling prune-backups
to re-use a line here and in the next patch.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-02-19 16:29:58 +01:00
Fabian Ebner
533d6e503a vzdump: correctly handle prune-backups option in commandline and cron config
Previously only the hash reference was printed instead of the property string.
It's also necessary to parse the property string when reading the cron config.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-01-26 18:48:27 +01:00
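A generic sketch of the shape of the fix: serialize the parsed hash back into a k=v property string instead of printing the hash reference (in PVE itself this is presumably handled via PVE::JSONSchema's print_property_string/parse_property_string):

    use strict;
    use warnings;

    # Before: the cron line ended up containing 'HASH(0x...)'.
    # After: a proper property string that can be parsed back in.
    my $prune = { 'keep-last' => 3, 'keep-daily' => 7 };
    my $str = join ',', map { "$_=$prune->{$_}" } sort keys %$prune;
    print "$str\n";    # keep-daily=7,keep-last=3

    my %parsed = map { split /=/, $_, 2 } split /,/, $str;
    print "$parsed{'keep-last'}\n";    # 3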