else, error messages using this path_info refer to the parent directory instead
of the actual file entry causing the problem. since this is just for
informational purposes, lossy conversion is acceptable.
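
A minimal sketch of the idea (helper name hypothetical): derive the informational path from the entry itself and accept a lossy UTF-8 conversion, since the string is only used for log output.

    use std::ffi::OsStr;
    use std::path::Path;

    // Hypothetical helper: build the path shown in error messages from the
    // entry itself rather than its parent; to_string_lossy() replaces any
    // invalid UTF-8 with U+FFFD, which is acceptable for log output.
    fn entry_path_info(parent: &Path, file_name: &OsStr) -> String {
        parent.join(file_name).to_string_lossy().into_owned()
    }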
Acked-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Known chunks are expected to be present on the datastore a-priori,
allowing clients to only re-index these chunks without uploading the
raw chunk data. The list of reusable known chunks is sent to the
client by the server, deduced from the indexed chunks of the previous
backup snapshot of the group.
If, however, such a known chunk disappeared (the previous backup
snapshot having been verified before that or not verified just yet),
the backup will finish just fine, leading to a seemingly successful
backup. Only a subsequent verification job will detect the backup
snapshot as being corrupt.
In order to reduce the impact, stat the list of previously known
chunks when finishing the backup. If a missing chunk is detected, the
backup run itself will fail and the previous backup snapshot's verify
state is set to failed.
This prevents the same snapshot from being reused by another,
subsequent backup job.
Note:
The current backup run might have been just fine, if the now missing
known chunk is not indexed. But since there is no straightforward
way to detect which known chunks have not been reused in the fast
incremental mode for fixed index backups, the backup run is
considered failed.
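
A minimal sketch of the check described above (function and parameter names hypothetical): stat every previously known chunk when finishing the backup and fail the run if one is missing.

    use std::path::PathBuf;

    // Hypothetical helper: a plain stat per chunk file is enough; a missing
    // chunk surfaces as an error here instead of only in a later verify job.
    fn check_known_chunks(known_chunk_paths: &[PathBuf]) -> Result<(), std::io::Error> {
        for path in known_chunk_paths {
            std::fs::metadata(path)?;
        }
        Ok(())
    }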
Link to issue in bugtracker:
https://bugzilla.proxmox.com/show_bug.cgi?id=5710
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
Various smaller adaptations, such as capitalization at the start of
sentences, expansion of abbreviations, and shortening of overly long
error messages.
To improve consistency with the rest of the error messages for the
sync job in push direction.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
FG: use "skipping" for non-owner-groups - we haven't started uploading at that
point, there is nothing to "abort"
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
so that they are logged in the success case, since the error case already has
its own log messages.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Mixing of terms only makes the errors harder to understand.
In order to make error messages more intuitive, always refer to the
sync push target as remote, mention the remote explicitly and/or
improve messages where needed.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
When updating the datastore config using `proxmox-backup-manager`, we
need to make an api-call, because the api-route starts a tokio task to
update the proxy-cache, and the client would kill that task if we don't
wait for it. With an api-call, the tokio task is executed on the api
process and keeps running in the background after the endpoint handler
has already returned.
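
A minimal sketch of the difference, assuming a tokio runtime and anyhow for errors (handler name hypothetical): a task spawned inside the API process keeps running after the handler returns, while a short-lived CLI process would tear down its runtime, and the task, on exit.

    // Hypothetical handler: the spawned task runs on the long-lived API
    // process's runtime and outlives the returned response.
    async fn update_datastore_config() -> Result<(), anyhow::Error> {
        tokio::spawn(async {
            // e.g. refresh the proxy's datastore cache in the background
        });
        Ok(()) // response is sent, the background task continues
    }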
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
When creating a datastore without the "reuse-datastore" option and the
datastore contains a `lost+found` directory (which is quite common), the
creation fails. Add `lost+found` to the ignore list.
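
A minimal sketch of such an emptiness check (names hypothetical): entries on the ignore list, like `lost+found`, do not count as datastore content.

    use std::path::Path;

    // Hypothetical helper: the directory counts as empty if it only contains
    // ignored entries such as `lost+found`.
    fn dir_is_empty_ignoring(path: &Path, ignored: &[&str]) -> std::io::Result<bool> {
        for entry in std::fs::read_dir(path)? {
            let name = entry?.file_name();
            if !ignored.iter().any(|i| name == *i) {
                return Ok(false);
            }
        }
        Ok(true)
    }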
Reported here: https://forum.proxmox.com/threads/bug-when-adding-new-storage-task-error-datastore-path-is-not-empty.157629/#post-721733
Fixes: 6e101ff757 ("fix #5439: allow to reuse existing datastore")
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
FG: slight code style change
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Permissions are stored in the lower 9 bits (rwxrwxrwx),
so we have to mask `st_mode` with 0o777.
The datastore root dir is created with 755, the `.chunks` dir and its
contents with 750, and the `.lock` file with 644; this changes the
expected permissions accordingly.
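
A small sketch of the masking, assuming a Unix target (helper name hypothetical):

    use std::os::unix::fs::MetadataExt;

    // st_mode also carries the file type bits, so mask with 0o777 before
    // comparing against the expected permission bits, e.g. 0o750 for `.chunks`.
    fn has_permissions(metadata: &std::fs::Metadata, expected: u32) -> bool {
        metadata.mode() & 0o777 == expected
    }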
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Fixes: 6e101ff757 ("fix #5439: allow to reuse existing datastore")
Reviewed-By: Gabriel Goller <g.goller@proxmox.com>
With single ticks, the contained modes and archive formats are
displayed in italics; to be consistent with other sections of the
documentation, use inline code blocks instead.
Adapted line wrapping to account for the additional line length.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Shannon Sterz <s.sterz@proxmox.com>
Currently, the change detection modes are described in the client
usage section, which is not intended as an in-depth explanation of how
these client options work, but rather focuses on how to use them.
Therefore, add a reference to the more detailed technical section
regarding the change detection modes and reduce duplicate
explanations.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Shannon Sterz <s.sterz@proxmox.com>
Describe in more detail how the different change detection modes
operate and give insights into the inner workings, especially for the
more complex `metadata` mode, which involves lookahead caching and
padding calculation for reused payload chunks.
Suggested-by: Dietmar Maurer <dietmar@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Shannon Sterz <s.sterz@proxmox.com>
this got reported via e-mail - seems this one occurrence was
forgotten. grepped through the docs (and the whole repo) for 'Mail'
and 'Gateway', and it seems this was the only one.
Fixes: cbd7db1d ("docs: certificates")
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Skip and warn the user for files which returned a stale file handle
error while reading the metadata associated with that file, or the
target in case of a symbolic link.
Instead of returning with a hard error, report the stale file handle
and skip over encoding this file entry in the pxar archive.
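
A minimal sketch of the skip logic, using plain std plus the libc crate for the errno constant (the actual code works on directory file descriptors; names here are hypothetical):

    use std::path::Path;

    // Treat ESTALE from the metadata lookup as a warning and skip the entry
    // instead of aborting the whole archive.
    fn stat_or_skip(path: &Path) -> std::io::Result<Option<std::fs::Metadata>> {
        match std::fs::symlink_metadata(path) {
            Ok(meta) => Ok(Some(meta)),
            Err(err) if err.raw_os_error() == Some(libc::ESTALE) => {
                eprintln!("warning: {path:?}: stale file handle, skipping entry");
                Ok(None)
            }
            Err(err) => Err(err),
        }
    }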
Link to issue in bugtracker:
https://bugzilla.proxmox.com/show_bug.cgi?id=5853
Link to thread in community forum:
https://forum.proxmox.com/threads/156822/
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Do not fail hard if a file open fails because of a stale file handle.
Warn the user and ignore the file, just like the client already does
in case of missing privileges to access the file.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Skip over the entries when a stale file handle is encountered during
generation of the entry list of a directory entry.
This will lead to the directory not being backed up if the directory
itself was invalidated, as reading all child entries will then fail as
well, or to the directory being backed up without the entries which
have been invalidated.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Skip over the whole directory in case the file handle was invalidated
and therefore the filesystem type check returns with ESTALE.
Encode the directory start entry in the archive and the catalog only
after the filesystem type check, so the directory can be fully skipped.
At this point it is still possible to ignore the invalidated
directory. If the directory is invalidated afterwards, it will be
backed up only partially.
Introduce a helper method to report entries for which a stale file
handle was encountered, providing an optional path for cases where
the `Archiver`'s state does not store the correct path.
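
A sketch of the kind of helper described here (names hypothetical, logging via the log crate):

    use std::path::Path;

    // Prefer an explicitly provided path when the archiver's own state does
    // not hold the correct one.
    fn report_stale_file_handle(state_path: &Path, override_path: Option<&Path>) {
        let path = override_path.unwrap_or(state_path);
        log::warn!("{path:?}: stale file handle, skipping");
    }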
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Switch from mutable reference to shared reference on `self` and drop
unused return value.
These helpers only write log messages, there is currently no need for
a mutable reference to `self`, nor to return a `Result`.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
else, combined with remove_vanished, everything on the target side would be
removed.
Suggested-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
the group filters need adaptations both for pushing and local pulling, so
those were left out for now.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
to avoid attempting to create them multiple times in case a whole hierarchy is
missing, and misleadingly logging that they were created multiple times as
well.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
the error message for failure to sync the whole namespace was too long, so
split it into two lines and make it a warning.
the namespace creation one lacked context (that the error was caused by the
remote side or the connection) while at the same time containing too much (the
datastore, which is already logged very often).
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
add a bit more detail for the pull side, and reword some comments on the push
side to make them easier to read.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
`try_exists` will return Ok(false) if the path is or contains a dangling
symlink; treat that as a hard error, just as if `try_exists` had returned an
Err(..).
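
A minimal sketch of the stricter handling (function name hypothetical, anyhow assumed for errors):

    use std::path::Path;

    // Ok(false) can also mean a dangling symlink, so treat it like an error.
    fn ensure_exists(path: &Path) -> Result<(), anyhow::Error> {
        match path.try_exists() {
            Ok(true) => Ok(()),
            Ok(false) => anyhow::bail!("{path:?} does not exist or is a dangling symlink"),
            Err(err) => Err(err.into()),
        }
    }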
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
one million chunks is a bit much: considering that chunks represent 1-2MB
(dynamic) to 4MB (fixed) of input data, that would mean 1-4TB of re-used
input data in a single snapshot.
64k chunks still represent 64-256GB of input data, which should be plenty
(and for such big snapshots with lots of re-used chunks, growing the
allocation of the HashSet should not be the bottleneck), and is also the
default capacity used for pulling.
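
As a rough sketch of the sizing (function name hypothetical):

    use std::collections::HashSet;

    // 64k pre-allocated digests at 1-4 MiB of input per chunk still cover
    // 64-256 GiB of re-used data; the set simply grows beyond that on demand.
    fn new_known_chunk_set() -> HashSet<[u8; 32]> {
        HashSet::with_capacity(64 * 1024)
    }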
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
instead of calling this three times, call it once:
retrieving the highest backup timestamp doesn't need its own request, it can
re-use the "main" result, so the corresponding helper can be dropped.
remove_vanished can re-use the earlier result - if anybody prunes the backup
group or adds new snapshots while the sync is running, the whole group sync is
racy and might cause spurious errors anyway.
since re-syncing the last already existing snapshot is not possible at the
moment, the code can also be simplified by treating such a snapshot as already
fully synced.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
a vanished namespace is one that
- exists on the target side, below the target prefix
- but within the specified max_depth
- and was not part of the synced namespaces
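
Expressed as a predicate, roughly (types and names hypothetical):

    use std::collections::HashSet;

    // A namespace on the target is "vanished" if it lies below the target
    // prefix, within max_depth, and was not part of the namespaces just synced.
    fn is_vanished(
        ns: &str,
        prefix: &str,
        depth: usize,
        max_depth: usize,
        synced: &HashSet<String>,
    ) -> bool {
        ns.starts_with(prefix) && depth <= max_depth && !synced.contains(ns)
    }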
Co-developed-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
two parameters that only differ by a letter are not very nice for quickly
understanding semantics.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
BackupGroup is serializable as its API parameter components, like BackupDir.
move the (always present) namespace closer to the group to improve readability.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
to make it easier to distinguish from missing "Prune" privs when removing
vanished groups.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
to make the complex logic code shorter and easier to parse. no semantic changes
intended.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Documents the caveats of sync jobs in push direction, explicitly
recommending setting up dedicated remotes for these sync jobs.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Expose 'prune-delete-stats' as a supported feature, in order for
the sync job in pull direction to pass the optional
`error-on-protected=false` flag to the api calls when pruning backup
snapshots, groups or namespaces.
Add and optionally expose the backup group delete statistics by adding the
return type to the corresponding REST API endpoints.
Clients can opt into the new behaviour by setting the new `error-on-protected`
flag to `false` when calling the api endpoints, which results in removal not
erroring out when encountering protected snapshots.
The default value for the flag remains `true` for now, to remain backwards
compatible with older clients expecting this behaviour.
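
A sketch of what a client request body could look like (only the `error-on-protected` parameter is taken from the description above, the rest is hypothetical):

    // Opt into delete statistics by not erroring out on protected snapshots;
    // passed e.g. to a DELETE call on a datastore's group/namespace endpoint.
    fn delete_params() -> serde_json::Value {
        serde_json::json!({
            "group": "vm/100",
            "error-on-protected": false,
        })
    }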
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
FG: reworded commit message slightly
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
In order to load data using the same model from different sources,
set the proxy on the store instead of the model.
This allows using the view to display sync jobs in either pull or
push direction by setting the `sync-direction` on the view.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Switch the subject and labels to be shown based on the direction of
the sync job, and set the `sync-direction` parameter from the
submit values in case of push direction.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Show sync jobs in pull and in push direction in two separate grids,
visually separating them to limit possible misconfiguration.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Switch to the local datastore, used as sync source for jobs in push
direction, to get the available group filter options.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
The namespace has to be set in order to get the correct groups to be
used as group filter options with a local datastore as source,
required for sync jobs in push direction.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
`list_sync_jobs` exists as api method in `api2::admin::sync` and
`api2::config::sync`.
Rename the admin api endpoint method to `list_config_sync_jobs` in
order to reduce possible confusion when searching/reviewing.
No functional change intended.
Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Expose and switch the config type for sync job operations based
on the `sync-direction` parameter, exposed on the required api endpoints.
If not set, the default config type is `sync` and the default sync
direction is `pull` for full backwards compatibility. Whenever
possible, determine the sync direction and config type from the sync
job config directly rather than requiring them as an optional api
parameter.
Further, extend read and modify access checks by sync direction to
conditionally check for the required permissions in pull and push
direction.
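
A rough sketch of the mapping (the push section type name is hypothetical; only `sync` and the pull default are stated above):

    #[derive(Clone, Copy, Default)]
    enum SyncDirection {
        #[default]
        Pull,
        Push,
    }

    // Default to pull/"sync" when no direction is given, for backwards
    // compatibility with existing configs and older clients.
    fn config_type(direction: Option<SyncDirection>) -> &'static str {
        match direction.unwrap_or_default() {
            SyncDirection::Pull => "sync",
            SyncDirection::Push => "sync-push", // hypothetical section type
        }
    }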
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Move the sync job owner check to its own helper function, for it to
be reused for the owner check for sync jobs in push direction.
No functional change intended.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Read access to sync jobs is not granted to users who do not have at least
PRIV_DATASTORE_AUDIT permissions on the datastore. However, a user is
able to create or modify such jobs without having the audit
permission.
Therefore, further restrict the modify check by also including the
audit permissions.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Moves and refactors the sync_job_do function into the common server
sync module so that it can be reused for both sync directions, pull
and push.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>