Without this check, if a mount unit is present, but the file system is
not mounted, it will just get overwritten. The unit might belong to an
existing datastore.
There already is a check against duplicate datastores, but it only
happens after the mount unit has been overwritten, and having the
add-datastore flag present is not a precondition for triggering the issue.
The check is done even if the newly created directory datastore is
removable. While the mount unit is not overwritten in that case, the
conflict for the mount point is still present, so it is good to fail
early.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
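A minimal sketch of the early check, assuming a simplified unit-name
derivation (the actual proxmox-backup helper escapes mount points
differently):
```rust
use std::path::Path;

use anyhow::{bail, Result};

/// Refuse to (over)write a systemd mount unit that already exists,
/// since it might belong to an existing datastore.
fn check_mount_unit_conflict(mount_point: &str) -> Result<()> {
    // systemd derives unit names from the mount point, e.g.
    // /mnt/datastore/a -> mnt-datastore-a.mount (escaping simplified here)
    let unit_name = mount_point.trim_matches('/').replace('/', "-") + ".mount";
    let unit_path = Path::new("/etc/systemd/system").join(&unit_name);

    if unit_path.exists() {
        bail!(
            "mount unit '{}' already exists - it might belong to an existing datastore",
            unit_path.display()
        );
    }
    Ok(())
}
```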
In preparation to check for a pre-existing mount unit.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Shannon Sterz <s.sterz@proxmox.com>
and improve the variable naming while we are at it. This allows the check to be
re-used in other code paths, like when starting a garbage collection.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
There are two kinds of overlap we need to check here:
- two removable datastores backed by the same device must not have nested
relative paths on the device
- any two datastores must not have nested absolute paths
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
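The nesting rule itself boils down to a component-wise prefix check; a
sketch:
```rust
use std::path::Path;

/// Two paths conflict if one is a component-wise prefix of the other,
/// i.e. the datastores would nest.
fn paths_nest(a: &Path, b: &Path) -> bool {
    a.starts_with(b) || b.starts_with(a)
}

// e.g. "/mnt/store" and "/mnt/store/sub" nest and must be rejected,
// while "/mnt/store1" and "/mnt/store2" do not conflict.
```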
Sync jobs in push and pull direction require a different set of
privileges for the various api methods provided. Update the
descriptions to include the push direction and list them
accordingly.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Users require `Datastore.Audit` on the source datastore to read sync
jobs. Further, also restrict the permissions to modify sync jobs in
push direction to include the `Datastore.Audit` permission on the
source, as otherwise a user would be able to create or edit sync jobs
in push direction, but not able to see them.
Reported-by: Friedrich Weber <f.weber@proxmox.com>
Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
pbs-datastore::datastore::is_datastore_mounted_at() verifies that the
mounted file system has the expected UUID. Therefore we don't have to
error out if we try to mount an already mounted removable datastore.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
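A hedged sketch of the underlying idea, resolving the device behind the
UUID and checking /proc/mounts (the real pbs-datastore helper may differ
in detail):
```rust
use std::fs;
use std::path::Path;

/// Check whether the filesystem with the given UUID is what is actually
/// mounted at the expected mount point.
fn is_mounted_with_uuid(mount_point: &str, uuid: &str) -> std::io::Result<bool> {
    // /dev/disk/by-uuid/<uuid> is a symlink to the backing device
    let dev = fs::canonicalize(Path::new("/dev/disk/by-uuid").join(uuid))?;
    let mounts = fs::read_to_string("/proc/mounts")?;
    Ok(mounts.lines().any(|line| {
        let mut cols = line.split_whitespace();
        match (cols.next(), cols.next()) {
            (Some(source), Some(target)) => {
                target == mount_point && Path::new(source) == dev
            }
            _ => false,
        }
    }))
}
```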
Show the full error context when fetching the remote target
namespaces fails. As logging of the error is handled by the calling
sync job, reformat the error to include the error context before
returning.
Instead of the error
```
TASK ERROR: Fetching remote namespaces failed, remote returned error
```
the user is now presented with an error like
```
TASK ERROR: Fetching remote namespaces failed, remote returned error: datastore 'removable1' is not mounted
```
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
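The mechanism behind this is anyhow's alternate formatting, which
flattens the whole context chain; a self-contained illustration:
```rust
use anyhow::{anyhow, Context};

fn main() {
    // Build an error with context, as the remote client code might:
    let inner = anyhow!("datastore 'removable1' is not mounted");
    let err = inner.context("remote returned error");

    // Plain Display shows only the outermost message:
    println!("{err}"); // remote returned error
    // Alternate formatting includes the full context chain:
    println!("{err:#}"); // remote returned error: datastore 'removable1' is not mounted
}
```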
In an earlier version of this series the datastore path was absolute
for removable datastores. This is simply a leftover that was missed
when changing that to relative paths.
Reported-by: Markus Frank <m.frank@proxmox.com>
Fixes: 94a068e31 ("api: node: allow creation of removable datastore through directory endpoint")
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
When loading the verification state for a local snapshot, it must
first be ensured that the snapshot actually exists; otherwise, the
lack of a manifest will be interpreted as a corrupt snapshot,
triggering a "resync" that is actually a sync of all missing
snapshots, not just the newer ones, which is what is actually wanted
here.
The diff is best seen by telling git to ignore the whitespace changes.
Fixes: 0974ddfa ("fix #3786: api: add resync-corrupt option to sync jobs")
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
[ TL: reword subject and add a bit to commit message ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
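A sketch of the guard, with illustrative names (not the actual sync
code):
```rust
use std::path::Path;

/// Only consult the manifest's verify state if the snapshot actually
/// exists locally; a missing snapshot is a plain sync candidate, not a
/// "corrupt" one.
fn is_corrupt(snapshot_path: &Path) -> bool {
    if !snapshot_path.exists() {
        return false; // missing locally: sync it, but don't call it corrupt
    }
    // Only now is a missing or failed manifest meaningful.
    load_verify_state(snapshot_path).map_or(true, |state| state == "failed")
}

/// Placeholder for reading the verify state from the snapshot manifest.
fn load_verify_state(_path: &Path) -> Option<String> {
    None
}
```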
A lot of removable datastore actions can alter the system state
(mounting, unmounting), so require Sys.Modify for lack of a better
alternative.
Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
[ TL: improve commit subject and add access-description for create,
and delete, where we do a dynamic access check ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Since the SyncJobConfig struct now contains a 'sync-direction' property, we can
omit the 'direction' property of the SyncJobStatus struct. This makes a
few adaptations in the ui necessary:
* use the correct field
* handle 'pull' as default (since we don't necessarily get a
'sync-direction' in that case)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
In case we add another direction or another call site, doing it
without a wildcard match arm seems cleaner and more future-proof.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
[ TL: adapt subject/message slightly ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Add the sync direction for the sync job as optional config parameter
and refrain from using the config section type for conditional
direction check, as they are now the same (see previous commit).
Use the configured sync job parameter instead of passing it to the
various methods as function parameter, and only filter based on sync
direction if an optional api parameter to distinguish/filter based on
direction is given.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
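A sketch of what such an optional, defaulted config property and the
direction filter could look like (field and type names follow the
commit message, the rest is illustrative):
```rust
use serde::{Deserialize, Serialize};

#[derive(Clone, Copy, Default, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
enum SyncDirection {
    #[default]
    Pull,
    Push,
}

#[derive(Serialize, Deserialize)]
struct SyncJobConfig {
    id: String,
    /// Optional on the wire; absent means the pre-existing default, pull.
    #[serde(default, rename = "sync-direction")]
    sync_direction: SyncDirection,
}

/// Filtering only happens if the caller asked for a specific direction.
fn filter_jobs(jobs: Vec<SyncJobConfig>, direction: Option<SyncDirection>) -> Vec<SyncJobConfig> {
    match direction {
        Some(d) => jobs.into_iter().filter(|j| j.sync_direction == d).collect(),
        None => jobs,
    }
}
```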
so that one can list all sync jobs, both pull and push, at the same
time. To not confuse existing clients that only know of pull syncs, show
only them by default and make the 'all' parameter opt-in. (But add a
todo for 4.x to change that)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
If a datastore with a prune job is removed, the prune job is preserved,
as it is stored in /etc/proxmox-backup/prune.cfg. We also create a
default prune job for every datastore; this means that when reusing a
datastore that previously existed, you end up with duplicate prune jobs.
To avoid this, we check whether a prune job already exists, and if it
does, we refrain from creating the default one. (We also check whether
specific keep-options have been added; if so, we create the job
nevertheless.)
Reported-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
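The decision logic reduces to a small predicate; a sketch with
illustrative names:
```rust
/// Only create the default prune job if no prune job for this datastore
/// exists yet - unless the user explicitly passed keep-options, in which
/// case the job is created nevertheless.
fn should_create_default_prune_job(
    existing_jobs_for_store: usize,
    explicit_keep_options: bool,
) -> bool {
    explicit_keep_options || existing_jobs_for_store == 0
}
```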
Some filesystems like FAT don't include a concept of UUIDs.
Instead, tools like blkid derive these identifiers based on
certain filesystem metadata, such as volume serial numbers or
other unique information. However, this does not follow the
format specified in RFC 9562 [1].
[1] https://datatracker.ietf.org/doc/html/rfc9562
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
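A hedged sketch of a relaxed identifier check that accepts both forms
(the actual validation in the codebase may differ):
```rust
/// FAT "UUIDs" as reported by blkid look like "1234-ABCD" rather than an
/// RFC 9562 UUID, so a strict uuid parser would reject them. A relaxed
/// check only requires hex digits and dashes.
fn is_acceptable_fs_uuid(s: &str) -> bool {
    !s.is_empty() && s.chars().all(|c| c.is_ascii_hexdigit() || c == '-')
}

// is_acceptable_fs_uuid("1234-ABCD") == true  (FAT volume id)
// is_acceptable_fs_uuid("550e8400-e29b-41d4-a716-446655440000") == true
```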
Data deletion is only possible if the datastore is mounted; we won't
attempt to mount it just for the purpose of deleting data.
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
If a device houses multiple datastores, none of them will be mounted
automatically. If a device only contains a single datastore, it will be
mounted automatically. The reason for not mounting multiple datastores
automatically is that we don't know which one is actually wanted, and
since mounting all of them means all of them also have to be unmounted
manually, it made sense to have the user choose which one to mount.
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
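A sketch of the auto-mount policy, with illustrative names:
```rust
/// Mount automatically only when the device holds exactly one datastore;
/// otherwise leave the choice to the user.
fn auto_mount_candidate<'a>(datastores_on_device: &[&'a str]) -> Option<&'a str> {
    match datastores_on_device {
        [single] => Some(*single),
        _ => None, // zero or multiple datastores: don't auto-mount anything
    }
}
```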
We can't just directly delegate these commands to the API endpoints,
since both mounting and unmounting are done in a worker, and that
worker would be killed when the parent ends. In this case, the parent
would be the CLI process, which basically ends right after spawning
the worker.
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Devices can contain multiple datastores.
If the specified path already contains a datastore, `reuse datastore` has
to be set so it'll be added without creating a chunkstore.
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Removable datastores can be mounted unless
- they are already mounted
- their device is not present
For unmounting, the maintenance mode is set to `unmount`,
which prohibits the starting of any new tasks involving any
IO. This mode is unset either
- on completion of the unmount
- on abort of the unmount task
If the unmounting itself should fail, the maintenance mode stays in
place and requires manual intervention by unsetting it directly in the
config file. This is intentional, as unmounting should not fail, and
if it does, the situation should be looked at.
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
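A sketch of the described unmount flow; the maintenance-mode helpers
are illustrative stand-ins for the actual config handling:
```rust
/// Set maintenance mode before unmounting; only clear it again on success.
fn unmount_datastore(store: &str) -> anyhow::Result<()> {
    set_maintenance_mode(store, Some("unmount"))?; // blocks new IO tasks

    match do_unmount(store) {
        Ok(()) => {
            // success (or an aborted task) clears the mode again
            set_maintenance_mode(store, None)
        }
        Err(err) => {
            // intentionally leave "unmount" in place: an operator has to
            // look at the situation and unset it in the config manually
            Err(err)
        }
    }
}

fn set_maintenance_mode(_store: &str, _mode: Option<&str>) -> anyhow::Result<()> {
    Ok(())
}
fn do_unmount(_store: &str) -> anyhow::Result<()> {
    Ok(())
}
```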
... at a specific location. Also add two additional functions to
get the mount status and to ensure a removable datastore is mounted.
Co-authored-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Add the consent_text option to the node.cfg config. Embed the value
into the index.html file using handlebars.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
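A minimal sketch of rendering a value into a page with the handlebars
crate; the template and variable name are illustrative:
```rust
use handlebars::Handlebars;
use serde_json::json;

/// Render the configured consent text into an (illustrative) index.html
/// template.
fn render_index(consent_text: &str) -> Result<String, handlebars::RenderError> {
    let hb = Handlebars::new();
    let template = "<html><body><div id=\"consent\">{{consent_text}}</div></body></html>";
    hb.render_template(template, &json!({ "consent_text": consent_text }))
}
```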
we already have two different password schemas, `PBS_PASSWORD_SCHEMA`
being the stricter one, which ensures a minimum length of new
passwords. however, this wasn't used on the change password endpoint
before, so add it there too. this is also in line with NIST's latest
recommendations [1].
[1]: https://pages.nist.gov/800-63-4/sp800-63b.html#passwordver
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
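A hedged sketch of such a schema using the proxmox-schema builder
pattern; the exact bounds and description in pbs-api-types may differ:
```rust
use proxmox_schema::{Schema, StringSchema};

/// Stricter password schema enforcing a minimum length at the api
/// boundary (bounds here are illustrative).
pub const PASSWORD_SCHEMA: Schema = StringSchema::new("User password.")
    .min_length(8)
    .max_length(64)
    .schema();
```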
currently, if a password is provided, we check whether the user that is
going to be updated can authenticate with it. later on, the password
is then set as the same password. this means that the password here
can only be changed if it is the exact same one that is already used.
so, in essence, the password cannot be changed through this endpoint
at all. remove all of this logic here in favor of the
`PUT /access/password` endpoint.
to keep the api stable for now, just ignore the parameter and add a
description that explains what to use instead.
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
Add the `resync-corrupt` option to the ui and the
`proxmox-backup-manager` cli. It is listed in the `Advanced` section,
because it slows the sync job down and is useless if no verification
job was run beforehand.
Originally-by: Shannon Sterz <s.sterz@proxmox.com>
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
This option allows us to "fix" corrupt snapshots (and/or their chunks)
by pulling them from another remote. When traversing the remote
snapshots, we check if each one exists locally, and if it does, we
check whether its last verification failed. If the local snapshot is
broken and the `resync-corrupt` option is turned on, we pull in the
remote snapshot, overwriting the local one.
This is very useful and has been requested a lot, as there is currently
no way to "fix" corrupt chunks/snapshots even if the user has a healthy
version of it on their offsite instance.
Originally-by: Shannon Sterz <s.sterz@proxmox.com>
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
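The pull decision reduces to a small predicate; a sketch with
illustrative names:
```rust
/// A remote snapshot is (re-)pulled if it is missing locally, or if it
/// exists but its last verification failed and resync-corrupt is enabled.
fn needs_pull(exists_locally: bool, last_verify_failed: bool, resync_corrupt: bool) -> bool {
    !exists_locally || (resync_corrupt && last_verify_failed)
}
```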
Add helper functions to retrieve the verify_state from the manifest of a
snapshot. Replace all the manual "verify_state" parsing with the helper
function.
Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
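A hedged sketch of such a helper; the real one returns a typed
SnapshotVerifyState, and the key names here are illustrative:
```rust
use serde_json::Value;

/// Centralize reading the verify state from the manifest's unprotected
/// section, instead of open-coding the JSON access at every call site.
fn verify_state(unprotected: &Value) -> Option<&str> {
    unprotected.get("verify_state")?.get("state")?.as_str()
}
```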
Also log empty backup groups with no snapshots encountered during the
sync, so the log output contains this additional information as well,
reducing possible confusion.
Nevertheless, continue with the regular logic, so that pruning of
vanished snapshots is honored.
Example output in the sync job's task log:
```
2024-11-22T18:32:40+01:00: Syncing datastore 'datastore', root namespace into datastore 'push-target-store', namespace 'test'
2024-11-22T18:32:40+01:00: Found 2 groups to sync (out of 2 total)
2024-11-22T18:32:40+01:00: skipped: 1 snapshot(s) (2024-11-22T13:40:18Z) - older than the newest snapshot present on sync target
2024-11-22T18:32:40+01:00: Group 'vm/200' contains no snapshots to sync to remote
2024-11-22T18:32:40+01:00: Finished syncing root namespace, current progress: 1 groups, 0 snapshots
```
Reported-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Add an anyhow context to errors and display the full error context
in the log output. Further, make it clear which errors stem from api
calls by explicitly mentioning this in the context message.
This also fixes incorrect error handling by placing the error context
on the api result instead of on the serde deserialization error, for
the cases where this was handled incorrectly.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
FG: add missing format!
FG: run cargo fmt
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
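A self-contained sketch of the fix with stand-in types (the real code
uses the proxmox HTTP client):
```rust
use anyhow::{Context, Result};
use serde_json::Value;

// Illustrative stand-in; the real code uses the proxmox HTTP client.
struct Client;
impl Client {
    fn get(&self, _path: &str) -> Result<Value> {
        Ok(serde_json::json!([]))
    }
}

/// Attach context to the failing api call itself, not to a later serde
/// step, so both failure modes read correctly in the task log.
fn list_remote_namespaces(client: &Client) -> Result<Vec<String>> {
    let response = client
        .get("/api2/json/admin/datastore")
        .context("api call to list namespaces failed")?;
    serde_json::from_value(response).context("failed to deserialize api response")
}
```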
Instead of using the plain String or slices of it for archive names,
use the dedicated api type and its methods to parse and check for
archive type based on archive filename extension.
Thereby, the checks and mappings are kept in the api type, and
function parameters are restricted by the narrower wrapper type to
reduce potential misuse.
Further, instead of declaring and using the archive name constants
throughout the codebase, use the `BackupArchiveName` helpers to
generate the archive names for manifest, client logs and encryption
keys.
This allows for easy archive name comparisons using the same
`BackupArchiveName` type, at the cost of some extra allocations and
avoids the currently present double constant declaration of
`CATALOG_NAME`.
A positive ergonomic side effect of this is that commands now also
accept the archive type extension optionally, when passing the archive
name.
E.g.
```
proxmox-backup-client restore <snapshot> <name>.pxar.didx <target>
```
is equal to
```
proxmox-backup-client restore <snapshot> <name>.pxar <target>
```
The previously default mapping of any archive name extension to a blob
has been dropped in favor of consistent mapping by the api type
helpers.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
FG: use LazyLock for constant archive names
FG: add missing import
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
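A hedged sketch of the idea behind such an api type; the real
BackupArchiveName in pbs-api-types covers more extensions and edge
cases:
```rust
use std::str::FromStr;

use anyhow::{bail, Error};

#[derive(Debug, PartialEq)]
enum ArchiveType {
    DynamicIndex,
    FixedIndex,
    Blob,
}

/// Wrapper type that maps a client-provided name (with or without the
/// server-side index suffix) to the canonical archive name and type.
struct BackupArchiveName {
    name: String,
    ty: ArchiveType,
}

impl FromStr for BackupArchiveName {
    type Err = Error;

    fn from_str(s: &str) -> Result<Self, Error> {
        let (name, ty) = if s.ends_with(".didx") || s.ends_with(".pxar") {
            (format!("{}.didx", s.trim_end_matches(".didx")), ArchiveType::DynamicIndex)
        } else if s.ends_with(".fidx") || s.ends_with(".img") {
            (format!("{}.fidx", s.trim_end_matches(".fidx")), ArchiveType::FixedIndex)
        } else if s.ends_with(".blob") {
            (s.to_string(), ArchiveType::Blob)
        } else {
            bail!("unknown archive name extension: {s}");
        };
        Ok(Self { name, ty })
    }
}

// "file.pxar" and "file.pxar.didx" both parse to the same archive name,
// which is what makes the optional extension in CLI commands work.
```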
Move the `ArchiveType` to avoid crate dependencies on
`pbs-datastore`.
This is in preparation for introducing a dedicated `BackupArchiveName`
api type, allowing to set the corresponding archive type variant when
parsing the archive name based on its filename.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Known chunks are expected to be present on the datastore a priori,
allowing clients to only re-index these chunks without uploading the
raw chunk data. The list of reusable known chunks is sent to the
client by the server, deduced from the indexed chunks of the previous
backup snapshot of the group.
If however such a known chunk disappeared (the previous backup
snapshot having been verified before that or not verified just yet),
the backup will finish just fine, leading to a seemingly successful
backup. Only a subsequent verification job will detect the backup
snapshot as being corrupt.
In order to reduce the impact, stat the list of previously known
chunks when finishing the backup. If a missing chunk is detected, the
backup run itself will fail and the previous backup snapshot's verify
state is set to failed.
This prevents the same snapshot from being reused by another,
subsequent backup job.
Note:
The current backup run might have been just fine, if the now missing
known chunk is not indexed. But since there is no straightforward
way to detect which known chunks have not been reused in the fast
incremental mode for fixed index backups, the backup run is
considered failed.
link to issue in bugtracker:
https://bugzilla.proxmox.com/show_bug.cgi?id=5710
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
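A sketch of the final consistency check, with an illustrative
chunk-store layout:
```rust
use std::path::Path;

/// Stat every previously-known chunk when the client finishes the backup;
/// one missing chunk fails the run (and the caller then marks the previous
/// snapshot's verify state as failed).
fn verify_known_chunks(chunk_dir: &Path, known_digests: &[String]) -> Result<(), String> {
    for digest in known_digests {
        // chunks live in a sharded directory layout, e.g. <dir>/1234/<digest>
        let path = chunk_dir.join(&digest[..4]).join(digest);
        if !path.exists() {
            return Err(format!("previously known chunk {digest} is missing"));
        }
    }
    Ok(())
}
```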