mirror of git://git.proxmox.com/git/proxmox-backup.git synced 2025-01-06 13:18:00 +03:00
Commit Graph

5030 Commits

Author SHA1 Message Date
Fiona Ebner
d3910e1334 api: disks: directory: fail if mount unit already exists
Without this check, if a mount unit is present, but the file system is
not mounted, it will just get overwritten. The unit might belong to an
existing datastore.

There already is a check against a duplicate datastore, but it only
runs after the mount unit has been overwritten, and having the
add-datastore flag present is not a precondition to trigger the issue.

The check is done even if the newly created directory datastore is
removable. While in that case, the mount unit is not overwritten, the
conflict for the mount point is still present, so it is nice to fail
early.
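
A minimal sketch of such a guard, assuming a helper like
`datastore_mount_unit_path` (hypothetical name, and the real systemd unit-name
escaping is more involved):

```rust
use std::path::PathBuf;

use anyhow::{bail, Error};

/// Hypothetical stand-in for the factored-out mount unit path helper.
fn datastore_mount_unit_path(mount_point: &str) -> PathBuf {
    let name = mount_point.trim_matches('/').replace('/', "-");
    PathBuf::from(format!("/etc/systemd/system/{name}.mount"))
}

/// Fail early instead of silently overwriting a unit that may belong to an
/// existing datastore.
fn ensure_mount_unit_absent(mount_point: &str) -> Result<(), Error> {
    let unit = datastore_mount_unit_path(mount_point);
    if unit.exists() {
        bail!("mount unit '{}' already exists", unit.display());
    }
    Ok(())
}
```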

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2024-11-27 19:59:37 +01:00
Fiona Ebner
a7792e16c5 api: disks: directory: factor out helper for mount unit path
In preparation to check for a pre-existing mount unit.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Shannon Sterz <s.sterz@proxmox.com>
2024-11-27 19:59:37 +01:00
Fabian Grünbichler
4d5e14c07e datastore: extract nesting check into helper
and improve the variable naming while we are at it. this allows the check to be
re-used in other code paths, like when starting a garbage collection.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-11-27 15:25:37 +01:00
Fabian Grünbichler
4948864a07 api: create_datastore: fix nesting checks
there are two kinds of overlap we need to check here (sketched below):
- two removable datastores backed by the same device must not have nested
  relative paths on the device
- any two datastores must not have nested absolute paths
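
As a rough illustration of what a path-nesting check amounts to (helper name
hypothetical, not the actual implementation):

```rust
use std::path::Path;

/// Two datastore paths conflict if either is a component-wise prefix of the
/// other (this also covers identical paths).
fn paths_nested(a: &Path, b: &Path) -> bool {
    a.starts_with(b) || b.starts_with(a)
}

fn main() {
    assert!(paths_nested(Path::new("/mnt/ds1/inner"), Path::new("/mnt/ds1")));
    assert!(!paths_nested(Path::new("/mnt/ds1"), Path::new("/mnt/ds2")));
}
```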

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-11-27 14:17:29 +01:00
Christian Ebner
4eadbcc49f api: sync: include required permissions for push direction
Sync jobs in push and pull direction require a different set of
privileges for the various api methods provided. Update the
descriptions to include the push direction and list them
accordingly.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-27 13:20:22 +01:00
Christian Ebner
5eb51c75ca api: sync: restrict edit permissions for push sync jobs
Users require `Datastore.Audit` on the source datastore to read sync
jobs. Also restrict the permissions to modify sync jobs in push
direction to include the `Datastore.Audit` permission on the source,
as otherwise a user would be able to create or edit sync jobs in push
direction, but not be able to see them.

Reported-by: Friedrich Weber <f.weber@proxmox.com>
Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-27 13:20:22 +01:00
Aaron Lauterer
0fe9fd8dd0 api: removable datastore: downgrade device already mounted error to info
pbs-datastore::datastore::is_datastore_mounted_at() verifies that the
mounted file system has the expected UUID. Therefore we don't have to
error out if we try to mount an already mounted removable datastore.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2024-11-27 12:44:06 +01:00
Christian Ebner
17b7ab8021 sync: push: pass full error context when returning error to job
Show the full error context when fetching the remote target
namespaces fails. As logging of the error is handled by the calling
sync job, reformat the error to include the error context before
returning.

Instead of the error
```
TASK ERROR: Fetching remote namespaces failed, remote returned error
```

the user is now presented with an error like
```
TASK ERROR: Fetching remote namespaces failed, remote returned error: datastore 'removable1' is not mounted
```
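
A minimal sketch of how such an error can be built and formatted with anyhow so
the full context chain ends up in the task log (function names illustrative):

```rust
use anyhow::{bail, Context, Error};

// stand-in for the actual remote API call
fn call_remote_api() -> Result<Vec<String>, Error> {
    bail!("datastore 'removable1' is not mounted")
}

fn fetch_remote_namespaces() -> Result<Vec<String>, Error> {
    call_remote_api().context("Fetching remote namespaces failed, remote returned error")
}

fn main() {
    if let Err(err) = fetch_remote_namespaces() {
        // "{:#}" prints the whole chain: context message plus underlying error
        eprintln!("TASK ERROR: {err:#}");
    }
}
```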

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-27 12:37:03 +01:00
Hannes Laimer
41b8bf2aff api: admin: add Datastore.Modify permission for mount
So the mount and unmount endpoints have matching permissions.

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2024-11-27 11:49:26 +01:00
Hannes Laimer
363e32a805 api: directory: use relative path when creating removable datastore
In an earlier version of this series the datastore path was absolute
for removable datastores. This is simply a leftover that was missed
when changing that to relative paths.

Reported-by: Markus Frank <m.frank@proxmox.com>
Fixes: 94a068e31 ("api: node: allow creation of removable datastore through directory endpoint")
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2024-11-27 10:05:40 +01:00
Fabian Grünbichler
c5c7fd3482 pull-sync: do not interpret older missing snapshots as needs-resync
when loading the verification state for a local snapshot, it must
first be ensured that it actually exists, else the lack of a manifest
will be interpreted as a corrupt snapshot, triggering a "resync" that
is actually a sync of all missing snapshots, not just the newer ones,
which is what is actually wanted here.

The diff is best seen by telling git to ignore the whitespace changes.
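
Condensed, the corrected decision looks roughly like this sketch, with the
existence check applied before consulting the verification state (the boolean
flags are stand-ins for the real datastore/manifest lookups):

```rust
/// Only an existing local snapshot whose last verification failed counts as
/// "corrupt"; a snapshot that is simply missing locally is handled by the
/// normal sync path instead.
fn needs_corrupt_resync(
    resync_corrupt: bool,
    local_snapshot_exists: bool,
    last_verify_failed: bool,
) -> bool {
    resync_corrupt && local_snapshot_exists && last_verify_failed
}
```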

Fixes: 0974ddfa ("fix #3786: api: add resync-corrupt option to sync jobs")
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
 [ TL: reword subject and add a bit to commit message ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-11-27 10:03:09 +01:00
Thomas Lamprecht
bb367c4d2e manager: run cargo fmt
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-11-26 16:24:01 +01:00
Hannes Laimer
a5f3d4a21c api: removable datastores: require Sys.Modify permission on /system/disks
A lot of removable datastore actions can alter the system state
(mounting, unmounting), so require Sys.Modify for lack of a better
alternative.

Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
 [ TL: improve commit subject and add access-description for create,
   and delete, where we do a dynamic access check ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-11-26 16:22:41 +01:00
Dominik Csapak
af4d5607f1 sync jobs: remove superfluous direction property
since the SyncJobConfig struct now contains a 'sync-direction' property, we can
omit the 'direction' property of the SyncJobStatus struct. This makes a
few adaptations in the ui necessary:

* use the correct field
* handle 'pull' as default (since we don't necessarily get a
  'sync-direction' in that case)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2024-11-26 16:02:22 +01:00
Fabian Grünbichler
b3f16f6227 api: sync direction: extract match check into impl fn
In case we add another direction or another call site, doing it
without a wildcard match arm seems cleaner and more future-proof.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
 [ TL: adapt subject/message slightly ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-11-26 16:00:54 +01:00
Christian Ebner
e066bd7207 bin: show direction in sync job list output
As the WebUI also lists the sync direction, display the direction in
the cli output as well.

Example output:
```
┌─────────────────┬────────────────┬───────────┬────────┬───────────────────┬──────────┬──────────────┬─────────┬─────────┐
│ id              │ sync-direction │ store     │ remote │ remote-store      │ schedule │ group-filter │ rate-in │ comment │
╞═════════════════╪════════════════╪═══════════╪════════╪═══════════════════╪══════════╪══════════════╪═════════╪═════════╡
│ s-6c16fab2-9e85 │                │ datastore │        │ datastore         │ hourly   │ all          │         │         │
├─────────────────┼────────────────┼───────────┼────────┼───────────────────┼──────────┼──────────────┼─────────┼─────────┤
│ s-8764c440-3a6c │ push           │ datastore │ local  │ push-target-store │ hourly   │ all          │         │         │
└─────────────────┴────────────────┴───────────┴────────┴───────────────────┴──────────┴──────────────┴─────────┴─────────┘
```

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
2024-11-26 16:00:45 +01:00
Christian Ebner
4b76b731cb api: admin/config: introduce sync direction as job config parameter
Add the sync direction for the sync job as an optional config
parameter and refrain from using the config section type for the
conditional direction check, as they are now the same (see previous
commit).

Use the configured sync job parameter instead of passing it to the
various methods as a function parameter, and only filter based on sync
direction if an optional api parameter to distinguish/filter based on
direction is given.
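
A small sketch of the filtering idea with simplified stand-in types; jobs
without an explicit direction count as pull:

```rust
#[derive(Clone, Copy, PartialEq)]
enum SyncDirection {
    Pull,
    Push,
}

struct SyncJobConfig {
    id: String,
    sync_direction: Option<SyncDirection>, // defaults to pull if unset
}

/// List all jobs, or only those matching an optional direction filter.
fn list_jobs(jobs: Vec<SyncJobConfig>, filter: Option<SyncDirection>) -> Vec<SyncJobConfig> {
    jobs.into_iter()
        .filter(|job| match filter {
            Some(direction) => job.sync_direction.unwrap_or(SyncDirection::Pull) == direction,
            None => true,
        })
        .collect()
}
```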

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
2024-11-26 16:00:40 +01:00
Dominik Csapak
28d6afc2d7 cli: manager: sync: add 'sync-direction' parameter to list
so one can list pull and push jobs

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2024-11-26 14:54:33 +01:00
Dominik Csapak
403ad1f6d1 api: admin: sync: add optional 'all' sync type for listing
so that one can list all sync jobs, both pull and push, at the same
time. To not confuse existing clients that only know of pull syncs, show
only them by default and make the 'all' parameter opt-in. (But add a
todo for 4.x to change that)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2024-11-26 14:54:33 +01:00
Dominik Csapak
23185135bb api: admin: sync: add direction to sync job status
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2024-11-26 14:54:33 +01:00
Lukas Wagner
6a9aa3b9f4 management cli: add CLI for webhook targets
The code was copied and adapted from the gotify target CLI.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
2024-11-26 11:58:46 +01:00
Lukas Wagner
aa8f7f6208 api: notification: add API routes for webhook targets
Copied and adapted from the Gotify ones.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-By: Stefan Hanreich <s.hanreich@proxmox.com>
2024-11-26 11:58:46 +01:00
Gabriel Goller
5a52d1f06c reuse-datastore: avoid creating another prune job
If a datastore with a prune job is removed, the prune job is preserved
as it is stored in /etc/proxmox-backup/prune.cfg. We also create a
default prune job for every datastore – this means that when reusing a
datastore that previously existed, you end up with duplicate prune jobs.
To avoid this we check if a prune job already exists, and when it does,
we refrain from creating the default one. (We also check if specific
keep-options have been added, if yes, then we create the job
nevertheless.)
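
As a rough sketch of that decision (config type simplified, not the actual
prune.cfg structures):

```rust
struct PruneJobConfig {
    store: String,
}

/// Create the default prune job only if explicit keep-options were requested
/// or no prune job is configured for this datastore yet.
fn should_create_default_prune_job(
    existing_jobs: &[PruneJobConfig],
    store: &str,
    keep_options_given: bool,
) -> bool {
    let has_job_for_store = existing_jobs.iter().any(|job| job.store == store);
    keep_options_given || !has_job_for_store
}
```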

Reported-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
2024-11-26 11:24:46 +01:00
Hannes Laimer
09155de386 api: disks: only return UUID of partitions if it actually is one
Some filesystems like FAT don't include a concept of UUIDs.
Instead, tools like blkid derive these identifiers based on
certain filesystem metadata, such as volume serial numbers or
other unique information. This does, however, not follow the
format specified in RFC 9562[1].

[1] https://datatracker.ietf.org/doc/html/rfc9562
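
A hedged sketch of the idea, assuming the `uuid` crate for parsing; FAT-style
volume ids like `A1B2-C3D4` are rejected:

```rust
use uuid::Uuid;

/// Only report a partition UUID if the value blkid returned actually parses
/// as an RFC 9562 UUID.
fn partition_uuid(blkid_value: &str) -> Option<String> {
    Uuid::parse_str(blkid_value).ok().map(|uuid| uuid.to_string())
}

fn main() {
    assert!(partition_uuid("550e8400-e29b-41d4-a716-446655440000").is_some());
    assert!(partition_uuid("A1B2-C3D4").is_none()); // typical FAT volume serial
}
```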

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2024-11-25 21:34:22 +01:00
Hannes Laimer
18d4c4fc35 bin: debug: add inspect device command
... to get information about (removable) datastores a device contains

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2024-11-25 21:34:22 +01:00
Hannes Laimer
c8835f5882 ui: support create removable datastore through directory creation
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2024-11-25 21:34:22 +01:00
Hannes Laimer
42e3b2f12a node: disks: replace BASE_MOUNT_DIR with DATASTORE_MOUNT_DIR
... since they do have the same value.

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2024-11-25 21:34:22 +01:00
Hannes Laimer
43466bf538 api: node: include removable datastores in directory list
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2024-11-25 21:34:22 +01:00
Hannes Laimer
94a068e316 api: node: allow creation of removable datastore through directory endpoint
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2024-11-25 21:34:22 +01:00
Hannes Laimer
b17ebd5c2c datastore: handle deletion of removable datastore properly
Data deletion is only possible if the datastore is mounted; we won't
attempt to mount it just for the purpose of deleting data.

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2024-11-25 21:34:22 +01:00
Hannes Laimer
2f874935b5 add auto-mounting for removable datastores
If a device houses multiple datastores, none of them will be mounted
automatically. If a device only contains a single datastore, it will be
mounted automatically. The reason for not mounting multiple datastores
automatically is that we don't know which one is actually wanted, and
since mounting all of them means they all have to be unmounted manually,
it made sense to have the user choose which one to mount.
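
The decision boils down to something like this sketch (helper name and types
illustrative):

```rust
/// Auto-mount only if the device holds exactly one datastore; with several,
/// the user has to pick explicitly, since every mounted store would also have
/// to be unmounted manually again.
fn datastore_to_auto_mount(datastores_on_device: &[String]) -> Option<&String> {
    match datastores_on_device {
        [single] => Some(single),
        _ => None,
    }
}
```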

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2024-11-25 21:34:22 +01:00
Hannes Laimer
919925519a bin: manager: add (un)mount command
We can't just directly delegate these commands to the API endpoints
since both mounting and unmounting are done in a worker, and that one
would be killed when the parent ends. In this case that would be the CLI
process, which basically ends right after spawning the worker.

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2024-11-25 21:34:22 +01:00
Hannes Laimer
76609915d6 pbs-api-types: add mount_status field to DataStoreListItem
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2024-11-25 21:34:22 +01:00
Hannes Laimer
40a2b110bf api: add check for nested datastores on creation
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2024-11-25 21:34:22 +01:00
Hannes Laimer
91c67298f4 api: removable datastore creation
Devices can contain multiple datastores.
If the specified path already contains a datastore, `reuse datastore` has
to be set so it'll be added without creating a chunkstore.

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2024-11-25 21:34:22 +01:00
Hannes Laimer
2b31406a37 api: admin: add (un)mount endpoint for removable datastores
Removable datastores can be mounted unless
 - they are already mounted
 - their device is not present
For unmounting, the maintenance mode is set to `unmount`, which
prohibits starting any new tasks involving any IO. This mode is
unset either
 - on completion of the unmount
 - on abort of the unmount task
If the unmounting itself should fail, the maintenance mode stays in
place and requires manual intervention by unsetting it in the config
file directly. This is intentional, as unmounting should not fail,
and if it does, the situation should be looked at.
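
A hedged sketch of that flow (the enum and config struct are illustrative, not
the real pbs-api-types definitions):

```rust
#[derive(Clone, Copy, PartialEq)]
enum MaintenanceType {
    Unmount,
}

struct DatastoreConfig {
    maintenance: Option<MaintenanceType>,
}

/// Set the `unmount` maintenance mode before detaching; clear it again when
/// the unmount succeeds, but keep it if the unmount itself fails.
fn unmount_datastore<F>(config: &mut DatastoreConfig, do_unmount: F)
where
    F: FnOnce() -> Result<(), String>,
{
    config.maintenance = Some(MaintenanceType::Unmount); // blocks new IO tasks
    match do_unmount() {
        Ok(()) => config.maintenance = None,
        Err(_) => { /* intentionally left set; requires manual intervention */ }
    }
}
```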

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2024-11-25 21:34:22 +01:00
Hannes Laimer
46d7e573a9 datastore: add helper for checking if a datastore is mounted
... at a specific location. Also adds two additional functions to
get the mount status, and to ensure a removable datastore is mounted.
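
A rough sketch of a plain mount-point scan; the actual helper additionally
compares the mounted file system's UUID against the expected one:

```rust
use std::fs;
use std::path::Path;

/// Check /proc/self/mounts for an entry whose mount point matches exactly.
/// (Whitespace escaping in mount points is ignored in this sketch.)
fn is_mounted_at(mount_point: &Path) -> std::io::Result<bool> {
    let mounts = fs::read_to_string("/proc/self/mounts")?;
    Ok(mounts
        .lines()
        .filter_map(|line| line.split_whitespace().nth(1))
        .any(|mp| Path::new(mp) == mount_point))
}
```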

Co-authored-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2024-11-25 21:34:22 +01:00
Gabriel Goller
28028b15b7 api: add consent api handler and config option
Add consent_text option to the node.cfg config. Embed the value into
index.html file using handlebars.
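
A minimal sketch using the `handlebars` crate; the template snippet and field
name are illustrative:

```rust
use handlebars::Handlebars;
use serde_json::json;

/// Render the configured consent text into the index.html template.
fn render_index(template: &str, consent_text: &str) -> Result<String, handlebars::RenderError> {
    let handlebars = Handlebars::new();
    handlebars.render_template(template, &json!({ "ConsentText": consent_text }))
}

fn main() {
    let html = render_index("<body>{{ConsentText}}</body>", "Authorized use only").unwrap();
    assert_eq!(html, "<body>Authorized use only</body>");
}
```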

Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
2024-11-25 17:19:32 +01:00
Shannon Sterz
fb5b6f3eab api: enforce minimum character limit of 8 on new passwords
we already have two different password schemas, `PBS_PASSWORD_SCHEMA`
being the stricter one, which ensures a minimum length of new
passwords. however, this wasn't used on the change password endpoint
before, so add it there too. this is also in-line with NIST's latest
recommendations [1].

[1]: https://pages.nist.gov/800-63-4/sp800-63b.html#passwordver
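
Conceptually, the schema enforces something like this sketch (the real check is
part of the shared PBS_PASSWORD_SCHEMA, not a standalone function):

```rust
use anyhow::{bail, Error};

/// Reject new passwords shorter than 8 characters.
fn check_new_password(password: &str) -> Result<(), Error> {
    if password.chars().count() < 8 {
        bail!("password is too short, must be at least 8 characters");
    }
    Ok(())
}
```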

Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
2024-11-25 15:51:47 +01:00
Shannon Sterz
9f3733c5ed api: ignore password parameter in the update_user endpoint
currently if a password is provided, we check whether the user that is
going to be updated can authenticate with it. later on, the password
is then set as the same password. this means that the password here
can only be changed if it is the exact same one that is already used.

so in essence, the password cannot be changed through this endpoint
already. remove all of this logic here in favor of the
`PUT /access/password` endpoint.

to keep the api stable for now, just ignore the parameter and add a
description that explains what to use instead.

Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
2024-11-25 15:51:47 +01:00
Fabian Grünbichler
391822f9ce run cargo fmt
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-11-25 13:18:11 +01:00
Fabian Grünbichler
8e057c3874 sync config: forbid setting resync_corrupt for push jobs
they don't support it (yet), so don't allow setting it in the backend either.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-11-25 11:22:46 +01:00
Gabriel Goller
590187ff53 fix #3786: ui/cli: add resync-corrupt option on sync-jobs
Add the `resync-corrupt` option to the ui and the
`proxmox-backup-manager` cli. It is listed in the `Advanced` section,
because it slows the sync-job down and is useless if no verification
job was run beforehand.

Originally-by: Shannon Sterz <s.sterz@proxmox.com>
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-11-25 10:53:26 +01:00
Gabriel Goller
0974ddfa17 fix #3786: api: add resync-corrupt option to sync jobs
This option allows us to "fix" corrupt snapshots (and/or their chunks)
by pulling them from another remote. When traversing the remote
snapshots, we check if each one exists locally, and if it does, we check
whether its last verification failed. If the local snapshot is broken and the
`resync-corrupt` option is turned on, we pull in the remote snapshot,
overwriting the local one.

This is very useful and has been requested a lot, as there is currently
no way to "fix" corrupt chunks/snapshots even if the user has a healthy
version of it on their offsite instance.

Originally-by: Shannon Sterz <s.sterz@proxmox.com>
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-11-25 10:53:26 +01:00
Gabriel Goller
b5be65cf8a snapshot: add helper function to retrieve verify_state
Add helper functions to retrieve the verify_state from the manifest of a
snapshot. Replace all the manual "verify_state" parsing with the helper
function.
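
A hedged sketch of what such a lookup amounts to, assuming the verification
result is stored in the manifest's "unprotected" section (the exact field
layout may differ):

```rust
use serde_json::Value;

/// Read the last verification state ("ok"/"failed") out of a snapshot
/// manifest, if one is present.
fn verify_state(manifest: &Value) -> Option<String> {
    manifest
        .get("unprotected")?
        .get("verify_state")?
        .get("state")?
        .as_str()
        .map(str::to_owned)
}
```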

Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
2024-11-25 10:52:40 +01:00
Christian Ebner
3dc9d2de69 server: push: log encountered empty backup groups during sync
Log also empty backup groups with no snapshots encountered during the
sync so the log output contains this additional information as well,
reducing possible confusion.

Nevertheless, continue with the regular logic, so that pruning of
vanished snapshots is honored.

Example output in the sync job's task log:
```
2024-11-22T18:32:40+01:00: Syncing datastore 'datastore', root namespace into datastore 'push-target-store', namespace 'test'
2024-11-22T18:32:40+01:00: Found 2 groups to sync (out of 2 total)
2024-11-22T18:32:40+01:00: skipped: 1 snapshot(s) (2024-11-22T13:40:18Z) - older than the newest snapshot present on sync target
2024-11-22T18:32:40+01:00: Group 'vm/200' contains no snapshots to sync to remote
2024-11-22T18:32:40+01:00: Finished syncing root namespace, current progress: 1 groups, 0 snapshots
```

Reported-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-25 10:10:37 +01:00
Christian Ebner
62228d39f2 server: push: add error context to api calls and priv checks
Add an anyhow context to errors and display the full error context
in the log output. Further, make it clear which errors stem from api
calls by explicitly mentioning this in the context message.

This also fixes incorrect error handling by placing the error context
on the api result instead of the serde deserialization error in cases
where this was handled incorrectly.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>

FG: add missing format!
FG: run cargo fmt
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-11-22 14:08:15 +01:00
Christian Ebner
6771869cc1 client/server: use dedicated api type for all archive names
Instead of using the plain String or slices of it for archive names,
use the dedicated api type and its methods to parse and check for
archive type based on archive filename extension.

Thereby, the checks and mappings are kept in the api type, and
function parameters are restricted to the narrower wrapper type to
reduce potential misuse.

Further, instead of declaring and using the archive name constants
throughout the codebase, use the `BackupArchiveName` helpers to
generate the archive names for manifest, client logs and encryption
keys.

This allows for easy archive name comparisons using the same
`BackupArchiveName` type, at the cost of some extra allocations and
avoids the currently present double constant declaration of
`CATALOG_NAME`.

A positive ergonomic side effect of this is that commands now also
accept the archive type extension optionally, when passing the archive
name.

E.g.
```
proxmox-backup-client restore <snapshot> <name>.pxar.didx <target>
```
is equal to
```
proxmox-backup-client restore <snapshot> <name>.pxar <target>
```

The previously default mapping of any archive name extension to a blob
has been dropped in favor of consistent mapping by the api type
helpers.
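
A condensed sketch of the idea behind the wrapper type; the real
`BackupArchiveName` parsing and extension mapping differ in detail:

```rust
use anyhow::{bail, Error};

#[derive(Debug, PartialEq)]
enum ArchiveType {
    DynamicIndex,
    FixedIndex,
    Blob,
}

/// Archive name plus the type derived from its extension, so callers compare
/// wrapper values instead of raw strings.
#[derive(Debug, PartialEq)]
struct BackupArchiveName {
    name: String,
    ty: ArchiveType,
}

impl std::str::FromStr for BackupArchiveName {
    type Err = Error;

    fn from_str(name: &str) -> Result<Self, Error> {
        // illustrative mapping only; unknown extensions are rejected instead
        // of falling back to a blob
        let ty = if name.ends_with(".pxar.didx") || name.ends_with(".pxar") {
            ArchiveType::DynamicIndex
        } else if name.ends_with(".img.fidx") || name.ends_with(".img") {
            ArchiveType::FixedIndex
        } else if name.ends_with(".blob") || name.ends_with(".json") {
            ArchiveType::Blob
        } else {
            bail!("invalid archive name '{name}'");
        };
        Ok(Self { name: name.to_string(), ty })
    }
}
```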

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>

FG: use LazyLock for constant archive names
FG: add missing import

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-11-22 13:47:05 +01:00
Christian Ebner
e932ec101e datastore: move ArchiveType to api types
Moving the `ArchiveType` to avoid crate dependencies on
`pbs-datastore`.

In preparation for introducing a dedicated `BackupArchiveName` api
type, which allows setting the corresponding archive type variant when
parsing the archive name based on its filename.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-22 11:45:43 +01:00
Christian Ebner
da11d22610 fix #5710: api: backup: stat known chunks on backup finish
Known chunks are expected to be present on the datastore a-priori,
allowing clients to only re-index these chunks without uploading the
raw chunk data. The list of reusable known chunks is sent to the
client by the server, deduced from the indexed chunks of the previous
backup snapshot of the group.

If however such a known chunk disappeared (the previous backup
snapshot having been verified before that or not verified just yet),
the backup will finish just fine, leading to a seemingly successful
backup. Only a subsequent verification job will detect the backup
snapshot as being corrupt.

In order to reduce the impact, stat the list of previously known
chunks when finishing the backup. If a missing chunk is detected, the
backup run itself will fail and the previous backup snapshot's verify
state is set to failed.
This prevents the same snapshot from being reused by another
subsequent backup job.

Note:
The current backup run might have been just fine, if the now missing
known chunk is not indexed. But since there is no straightforward
way to detect which known chunks have not been reused in the fast
incremental mode for fixed index backups, the backup run is
considered failed.

link to issue in bugtracker:
https://bugzilla.proxmox.com/show_bug.cgi?id=5710
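
A hedged sketch of the check, with the chunk path layout shown as the usual
four-hex-digit sharding and error handling simplified:

```rust
use std::path::{Path, PathBuf};

use anyhow::{bail, Error};

/// Chunks are stored sharded below `.chunks/` by the first four hex digits of
/// their digest.
fn chunk_path(store_base: &Path, digest_hex: &str) -> PathBuf {
    store_base
        .join(".chunks")
        .join(&digest_hex[..4])
        .join(digest_hex)
}

/// Stat all previously known chunks on backup finish; a missing chunk fails
/// the run so the previous snapshot can be flagged as corrupt.
fn stat_known_chunks(store_base: &Path, known_chunks: &[String]) -> Result<(), Error> {
    for digest in known_chunks {
        if !chunk_path(store_base, digest).exists() {
            bail!("previously known chunk {digest} is missing");
        }
    }
    Ok(())
}
```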

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
2024-11-22 10:26:25 +01:00