Use the snapshots' timestamps, which are now available from the
storage back-end, when selecting the common replication base.
The benefits are:
1. Ability to detect distinct snapshots even if they have the same
name. Rather hard to trigger, but possible, for example:
Snapshots A -> B -> C -> __replicationXYZ
Remove B, roll back to C (which causes __replicationXYZ to be
removed), then create a new snapshot B. Previously, B was selected as
the replication base, but it didn't match on source and target. Now,
C is correctly selected.
2. Smaller delta in some cases, by not preferring replication
snapshots over config snapshots, but instead using the most recent
possible one from both types of snapshots.
3. Less code complexity for snapshot selection.
If the remote side is old, it's not possible to detect a mismatch of
distinct snapshots with the same name, but the timestamps from the
local side can still be used.
The selection is still limited to our own snapshots (config and
replication), because there are no guarantees for other snapshots
(they could be deleted in the meantime, their names might not fit the
import/export regex, etc.).
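A hedged sketch of the selection logic (all names and data structures
here are made up for illustration, not the actual Proxmox code):
consider candidates of both snapshot types sorted by timestamp, and
require name and timestamp to match on both sides:

    sub find_common_base {
        my ($local_snaps, $remote_snaps) = @_;

        # newest first
        my @candidates =
            sort { $b->{timestamp} <=> $a->{timestamp} } @$local_snaps;

        for my $snap (@candidates) {
            my $remote = $remote_snaps->{$snap->{name}};
            next if !defined($remote);
            # An old remote side provides no timestamp; otherwise the
            # timestamps must match, to rule out distinct snapshots
            # that merely share a name.
            next if defined($remote->{timestamp})
                && $remote->{timestamp} != $snap->{timestamp};
            return $snap->{name};
        }
        return; # no common base found
    }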
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
This is backwards compatible, because existing users of prepare() only
rely on the returned elements evaluating to true or being defined.
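A hypothetical illustration of why this stays compatible: the hash
values change from a plain true value to a hash with extra
information, and both shapes still pass a truthiness or definedness
test (the snapshot name reuses the example above):

    my $old_style = { __replicationXYZ => 1 };
    my $new_style = { __replicationXYZ => { timestamp => 1234 } };

    for my $prepared ($old_style, $new_style) {
        print "snapshot known\n" if $prepared->{__replicationXYZ};
    }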
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
By using a single loop instead. This should make the code more
readable as well as more efficient.
Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Check whether the snapshot to be removed is still needed as the base
for incremental replication, and abort if it is and --force is not
specified.
After rollback, the rollback snapshot might still be needed as the
base for incremental replication, because rollback removes (blocking)
replication snapshots.
It's not enough to limit the check to the most recent snapshot,
because new snapshots might have been created between the rollback
and the removal.
It's not enough to limit the check to snapshots without a parent
(i.e., for ZFS, the oldest), because some volumes might have been
added only after that snapshot was taken, meaning the oldest snapshot
is not an incremental replication base for them.
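A rough sketch of the resulting per-volume check (the helper names
are hypothetical, and it assumes each volume's snapshot list contains
the snapshot being removed):

    sub snapshot_is_still_needed {
        my ($snapname, $volumes) = @_;

        for my $vol (@$volumes) {
            my $newer_usable_exists = 0;
            for my $snap (snapshots_newest_first($vol)) { # hypothetical
                last if $snap eq $snapname;
                if (is_replication_usable($snap)) {       # hypothetical
                    $newer_usable_exists = 1;
                    last;
                }
            }
            # Without a newer usable snapshot on this volume, removing
            # $snapname could break incremental replication for it.
            return 1 if !$newer_usable_exists;
        }
        return 0; # safe to remove without --force
    }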
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
After rollback, it might be necessary to start the replication from an
earlier, possibly non-replication, snapshot, because the replication
snapshot might have been removed from the source node. Previously,
replication could only recover if the current parent snapshot had
already been replicated.
To get into the bad situation (with no replication happening between
the steps):
1. have existing replication
2. take new snapshot
3. rollback to that snapshot
In case the partial fix to only remove blocking replication snapshots
for rollback was already applied, an additional step is necessary to
get into the bad situation:
4. take a second new snapshot
Since non-replication snapshots, for which no timestamp is readily
available, are now also included, they need to be filtered out when
probing for replication snapshots.
If no common replication snapshot is present, iterate backwards
through the config snapshots.
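A hedged sketch of the fallback order (all names are made up): prefer
a common replication snapshot, then walk the config snapshots from
newest to oldest:

    sub find_base_after_rollback {
        my ($common_replication_snaps, $config_snaps_desc) = @_;

        # common replication snapshots are matched first, as before
        return $common_replication_snaps->[0]
            if @$common_replication_snaps;

        # otherwise, iterate backwards through the config snapshots
        for my $snap (@$config_snaps_desc) { # newest first
            return $snap->{name} if $snap->{exists_on_both_sides};
        }
        return; # no common base
    }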
The changes are backwards compatible:
If one side is running the old code and the other the new code, the
fact that one of the two prepare() calls does not return the new
additional snapshot candidates means that no new match is possible.
Since the new prepare() returns a superset, no previously possible
match becomes impossible.
The branch with @desc_sorted_snap is now taken more often, but
it can still only be taken when the volume exists on the remote side
(and has snapshots). In such cases, it is safe to die if no
incremental base snapshot can be found, because a full sync would not
be possible.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
by using the new $blockers parameter. No longer remove all
replication snapshots from the affected volumes unconditionally, but
first check whether all blocking snapshots are replication snapshots.
If they are, remove them and proceed with the rollback. If they are
not, die without removing any.
For backwards compatibility, it's still necessary to remove all
replication snapshots if $blockers is not available.
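A simplified sketch of the new flow (the helper names are assumed):
check all blockers first, and remove them only if every single one is
a replication snapshot:

    my $blockers = [];
    my $err = check_rollback($snapname, $blockers); # hypothetical check
    if ($err) {
        for my $blocker (@$blockers) {
            die "snapshot '$blocker' blocks rollback\n"
                if !is_replication_snapshot($blocker); # hypothetical
        }
        # all blockers are replication snapshots: remove and proceed
        remove_snapshot($_) for @$blockers;
    }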
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Get the replicatable volumes from the snapshot config rather than the
current config, and further filter them to those that will actually
be rolled back.
Previously, a volume that only had replication snapshots (e.g.
because it was added after the snapshot was taken, or the vmstate
volume) would lose them. Then, on the next replication run, such a
volume would lead to an error, because replication would try to do a
full sync even though the target volume still exists.
This is not a complete fix. It is still possible to run into
problems:
- by removing the last (non-replication) snapshots after a rollback,
before replication can run once.
- by creating a snapshot and performing a rollback before replication
can run once.
The list of volumes is not required to be sorted for prepare(), but
it now follows the iteration order of foreach_volume(), so it is no
longer random.
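A hedged sketch of gathering the volumes (helper names are invented
and the foreach_volume() call style is simplified):

    my $snap_conf = $conf->{snapshots}->{$snapname};
    my $volumes = [];
    foreach_volume($snap_conf, sub {
        my ($volid, $volume) = @_;
        push @$volumes, $volid
            if is_replicatable($volume)      # hypothetical
            && will_be_rolled_back($volume); # hypothetical
    });
    # $volumes follows foreach_volume()'s deterministic iteration order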
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Switch to using prune-backups instead of maxfiles. Storages created
via the web UI already defaulted to keeping all backups; switch to
this safer default here as well.
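For illustration (a hedged mapping; the exact option spelling follows
the vzdump/storage documentation of that era), the old
> maxfiles: 3
roughly corresponds to
> prune-backups: keep-last=3
and keeping all backups is expressed as
> prune-backups: keep-all=1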
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
It had already been deprecated for a long time (since before it was
moved to guest-common), and there was also a deprecation warning when
it was passed as a CLI option.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
because it is a more complete pattern. Also, 'mailto' was a '-list'
format in PVE 6.2 and earlier, so this also restores backwards
compatibility regarding whitespace. In particular, this fixes
creating a backup job in the GUI without setting an address, which
passes along an empty string.
For example,
> vzdump 153 --mailto " ,,,admin@proxmox.com;;; developer@proxmox.com , ; "
was valid and worked in PVE 6.2.
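For illustration, PVE::Tools::split_list (which splits on commas,
semicolons, and whitespace, and is assumed here to match the '-list'
handling) reduces the example above to the two addresses:

    use PVE::Tools;
    my @mailto = PVE::Tools::split_list(
        ' ,,,admin@proxmox.com;;; developer@proxmox.com , ; ');
    # ('admin@proxmox.com', 'developer@proxmox.com')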
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
In addition to relying on shellquote(), it's still nice to avoid
printing unnecessary whitespace, especially newlines.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Previously, only the hash reference was printed instead of the
property string. It's also necessary to parse the property string
when reading the cron config.
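A hedged sketch using PVE::JSONSchema's property-string helpers (the
format definition here is made up for the example):

    use PVE::JSONSchema;

    my $format = {
        storage => { type => 'string' },
        mode    => { type => 'string', optional => 1 },
    };
    my $opts = { storage => 'local', mode => 'snapshot' };

    # print the property string instead of the hash reference ...
    my $str = PVE::JSONSchema::print_property_string($opts, $format);

    # ... and parse it back when reading the cron config
    my $parsed = PVE::JSONSchema::parse_property_string($format, $str);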
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
by using switch_replication_job_target_nolock.
If a job is scheduled for removal and the guest was stolen, it still
makes sense to correct the job entry, which previously didn't happen.
AFAICT, this was the only user of swap_source_target_nolock.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
to get the current replication config and have the VM list
and state object as recent as possible.
Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
add a new string format that allows local usernames like 'root', but
also limits which characters can be used in the 'mailto' address.
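A hypothetical sketch of such a verifier (the actual patterns in the
real format may well differ):

    sub verify_email_or_username {
        my ($value) = @_;
        return 1 if $value =~ /^[^\s@]+@[^\s@]+$/;             # simplistic email
        return 1 if $value =~ /^[A-Za-z0-9_][A-Za-z0-9_.-]*$/; # local username
        return 0;
    }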
Co-Authored-by: Fabian Gruenbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
which led to current and pending/delete values being returned
separately, and being misinterpreted by the web interface (and
probably by other clients as well).
Fixes: daf8fca57a34417365c873ed91f3a52bf0002a4f
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
With the refactoring of config_with_pending_array in
daf8fca57a34417365c873ed91f3a52bf0002a4f, a few sanity checks on
parsed configs were dropped.
One case where a config value should be skipped, instead of parsed
and added, is when the value is not a scalar. This is the case for
the raw lxc keys (e.g. lxc.init.cmd, lxc.apparmor.profile), which get
added as an array under the 'lxc' key.
This patch reintroduces the skipping of non-scalar values when
parsing the config, but not for the pending values.
From a short look through the commit history, the sanity checks had
been in place since 2014 (introduced in qemu-server for handling
pending configuration changes), and their removal did not cause any
other regressions.
To my knowledge, only the raw lxc config keys are parsed into a
non-scalar value.
Tested by adding a 'lxc.init.cmd' key to a container config.
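The gist of the reintroduced check, as a hedged sketch (variable
names assumed):

    foreach my $opt (keys %$current_conf) {
        # raw lxc keys are collected as an array under the 'lxc' key,
        # so skip anything that is not a plain scalar value
        next if ref($current_conf->{$opt});
        push @$res, { key => $opt, value => $current_conf->{$opt} };
    }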
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
As `data` was a bit too generic, we now use `volume_config` in the
actual implementations, so the description should be adapted as well.
The tab spacing for the other keys has been adapted for easier
readability.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
one could have a config with:
> acpi: 0
and a pending deletion for it, to restore the default value of 1.
The config_with_pending_array method then pushed the key twice: once,
correctly, in the loop iterating over the config itself, and once via
the pending delete hash, which is normally only meant for options not
yet referenced in the config at all. There, the check was on
"truthiness" rather than definedness; fix that.
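The gist of the fix, as a simplified sketch (variable names assumed):
'acpi: 0' is false but defined, so the loop over the pending delete
hash must test definedness rather than truthiness:

    foreach my $opt (keys %$pending_delete_hash) {
        next if defined($conf->{$opt}); # was: next if $conf->{$opt};
        push @$res, { key => $opt, delete => 1 };
    }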
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>