besides harmonizing versions, the only global change is that the tokio-io
feature of pxar is now implied, since it is the default anyway, instead of
being spelled out.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
pbs-buildcfg is the only one that needs to inherit the version as well, since
it stores it in the compiled crate.
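A minimal sketch of why that is: the crate bakes the Cargo package version
into a constant at compile time via the `env!` macro, so it must inherit the
real workspace version rather than a placeholder (the constant name here is
assumed for illustration):

```rust
// Sketch: CARGO_PKG_VERSION is expanded at build time, so the value that
// ends up in the compiled crate is whatever version this crate declares.
pub const PROXMOX_PKG_VERSION: &str = env!("CARGO_PKG_VERSION");
```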
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
instead of hardcoding the default deep inside the code. This makes it
much easier to see what the actual default is.
the first instance of ChunkOrder::None was only for the test case, where
the ordering does not matter.
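A minimal sketch of the idea, with the variant set and the chosen default
assumed for illustration: the default lives visibly on the type definition
instead of at some call site deep in the code.

```rust
use serde::{Deserialize, Serialize};

// Sketch: deriving Default with an explicit #[default] variant makes the
// actual default obvious right at the type (variants assumed here).
#[derive(Clone, Copy, Debug, Default, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
pub enum ChunkOrder {
    /// Read chunks in the order they appear in the index
    None,
    /// Read chunks sorted by inode
    #[default]
    Inode,
}
```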
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
rationale is that it makes the backup much safer than 'none', but does not
incur as big of a performance hit as 'file'.
here are some benchmark results:
data to be backed up:
~14GiB of semi-random test images between 12kiB and 4GiB,
resulting in ~11GiB of chunks (more than the RAM available on the target)
PBS setup:
virtualized (on an otherwise idle machine), PBS itself was also idle;
8 cores (kvm64 on an Intel 12700k) and 8 GiB memory;
all virtual disks are on LVM with discard and iothread on.
The HDD is a 4TB Seagate ST4000DM000 drive, and the NVMe is a 2TB
Crucial CT2000P5PSSD8.
I tested each disk with ext4/xfs/zfs (created with the GUI defaults),
with 5 runs each; in between runs the caches were flushed and the
filesystem synced. I removed the biggest and smallest result and
averaged the remaining 3 (percentages are relative to the 'none' result).
result:

test          none       filesystem            file
hdd - ext4    125.67s    140.39s  (+11.71%)    358.10s  (+184.95%)
hdd - xfs      92.18s    102.64s  (+11.35%)    351.58s  (+281.41%)
hdd - zfs      94.82s    104.00s   (+9.68%)    309.13s  (+226.02%)
nvme - ext4    60.44s     60.26s   (-0.30%)     60.47s    (+0.05%)
nvme - xfs     60.11s     60.47s   (+0.60%)     60.49s    (+0.63%)
nvme - zfs     60.83s     60.85s   (+0.03%)     60.80s    (-0.05%)
So all in all, it does not seem to make a difference for NVMe drives;
for HDDs, 'filesystem' increases backup time by ~10%, while for 'file'
the impact largely depends on the filesystem, but is always in the range
of a factor of ~3 to ~4.
Note that this does not take into account parallel actions, such as gc,
verify or other backups.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
fixups for DatastoreFSyncLevel:
* use derive for Default
* add some more derives (Clone, Copy)
chunk store:
* drop to_owned for chunk_dir_path
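A minimal sketch of what the fixups amount to, using the variant names from
the commit below (the exact derive list is assumed):

```rust
use serde::{Deserialize, Serialize};

// Sketch: Default via derive (with an explicit #[default] variant) plus
// Clone/Copy, instead of a hand-written Default impl.
#[derive(Clone, Copy, Debug, Default, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
pub enum DatastoreFSyncLevel {
    #[default]
    None,
    Filesystem,
    File,
}
```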
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
the dropped .into() is guarded by the bumped build-dependency on
proxmox-sys 0.4.1; the missing Eq is a new clippy lint.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
currently, we don't (f)sync on chunk insertion (or at any point after
that), which can lead to broken chunks in case of, e.g., an unexpected
power loss. To fix that, offer a tuning option for datastores that
controls the level of syncs it does:
* None (default): same as the current state, no (f)syncs done at any point
* Filesystem: at the end of a backup, the datastore issues
a syncfs(2) to the filesystem of the datastore
* File: issues an fsync on each chunk as it gets inserted
(using our 'replace_file' helper) and an fsync on the directory handle
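A minimal sketch of the per-chunk part of this, assuming the level names
above (the function shape and parameter names are illustrative, not the
actual implementation):

```rust
use std::fs::File;

// Sketch: what happens per inserted chunk, depending on the level.
// 'Filesystem' does nothing here, since its syncfs(2) runs once per
// backup at the end, not per chunk.
fn sync_chunk(
    level: DatastoreFSyncLevel,
    chunk_file: &File,
    dir_handle: &File,
) -> std::io::Result<()> {
    match level {
        DatastoreFSyncLevel::None | DatastoreFSyncLevel::Filesystem => Ok(()),
        DatastoreFSyncLevel::File => {
            chunk_file.sync_all()?; // fsync(2) the chunk data and metadata
            dir_handle.sync_all() // and fsync the containing directory
        }
    }
}
```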
a small benchmark showed the following (times in mm:ss):
setup: virtual PBS, 4 cores, 8GiB memory, ext4 on a spinner

size                 none     filesystem    file
2GiB (fits in RAM)   00:13    00:41         01:00
33GiB                05:21    05:31         13:45
so if the backup fits in memory, there is a large difference between all
of the modes (as expected), but as soon as it exceeds the memory size,
the difference between not syncing and syncing the fs at the end becomes
much smaller.
I also tested on an NVMe, but there the syncs made basically no difference.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
we converted the prune settings of datastores to prune jobs, but did
not actually implement the notifications for them, even though we had
the notification options in the GUI (they simply did not work).
implement the basic ok/error notification for prune jobs.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
intended for passing the format to the file-restore client/daemon
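Presumably this is an api-type enum; a minimal sketch, with the variant set
assumed for illustration:

```rust
use serde::{Deserialize, Serialize};

// Sketch (variants assumed): the format the file-restore client asks the
// daemon to extract data in.
#[derive(Clone, Copy, Debug, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
pub enum FileRestoreFormat {
    Plain,
    Pxar,
    Zip,
    Tar,
}
```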
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
the remaining ones are:
- type complexity
- fns with many arguments
- new() without default()
- false positives for redundant closures (where the closure returns a
  static value)
- expected vs actual length check without match/cmp
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Otherwise we have to use BackupType::iter().iter() whenever
we're not using a `for _ in iter()` construct.
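A minimal sketch of the fix, with the variant list assumed: have `iter()`
return something that already is an Iterator, so no second `.iter()` call
is needed.

```rust
#[derive(Clone, Copy, Debug)]
pub enum BackupType {
    Vm,
    Ct,
    Host,
}

impl BackupType {
    // Sketch: yield the variants directly as an Iterator, so callers can
    // write `BackupType::iter().map(...)` instead of `iter().iter()`.
    pub fn iter() -> impl Iterator<Item = BackupType> {
        [BackupType::Vm, BackupType::Ct, BackupType::Host].into_iter()
    }
}
```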
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
when creating the documentation (e.g. `cargo doc --open`), it would
warn that `Display` is not in scope.
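A minimal sketch of the kind of fix this implies (the example doc comment is
invented here): qualify the trait in the intra-doc link so rustdoc can
resolve it without a `use` being in scope.

```rust
// Before: rustdoc warns because `Display` is not in scope here.
//     /// Also implements [`Display`].
// After: the fully qualified path always resolves.
/// Also implements [`std::fmt::Display`].
pub struct Example;
```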
Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
InfluxDbUdp and InfluxDbHttp for now.
introduces schemas for host:port combinations and HTTPS URLs.
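A rough sketch of what such schemas could look like with proxmox-schema (the
patterns here are simplified stand-ins, not the ones from the commit):

```rust
use proxmox_schema::{const_regex, ApiStringFormat, Schema, StringSchema};

const_regex! {
    // Simplified illustrative patterns only.
    pub HOST_PORT_REGEX = r"^[a-zA-Z0-9\-.\[\]:]+:\d{1,5}$";
    pub HTTPS_URL_REGEX = r"^https://[^\s]+$";
}

pub const HOST_PORT_SCHEMA: Schema =
    StringSchema::new("host:port combination, e.g. for an InfluxDB UDP endpoint.")
        .format(&ApiStringFormat::Pattern(&HOST_PORT_REGEX))
        .schema();

pub const HTTPS_URL_SCHEMA: Schema =
    StringSchema::new("HTTPS URL, e.g. for an InfluxDB HTTP endpoint.")
        .format(&ApiStringFormat::Pattern(&HTTPS_URL_REGEX))
        .schema();
```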
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
we can now use it for the error case, and will further use it for the
'can access namespace, but not datastore' case in a future patch.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
instead move the acl_path helper to BackupNamespace, and introduce a new
helper for printing a store+ns when logging/generating error messages.
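A minimal sketch of the moved helper, assuming the shape of BackupNamespace
(the `inner` field and exact ACL path layout are invented for illustration):

```rust
pub struct BackupNamespace {
    inner: Vec<String>, // namespace path components (assumed layout)
}

impl BackupNamespace {
    // Sketch: the ACL path of a namespace inside a datastore, e.g.
    // ["datastore", "tank", "foo", "bar"] (exact layout assumed).
    pub fn acl_path<'a>(&'a self, store: &'a str) -> Vec<&'a str> {
        let mut path = vec!["datastore", store];
        path.extend(self.inner.iter().map(|comp| comp.as_str()));
        path
    }
}
```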
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
for usage in permission check error messages, to allow easily indicating
which privs are missing.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
syncing to a namespace only requires privileges on the namespace (and
potentially its children during execution).
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
the namespace is optional, but should be captured to allow ACL checks
for unprivileged non-job-owners.
also add FIXME for other job types and workers that (might) need
updating.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
We can have those in existing verify job configs, and that would break
stuff. So, even though the "bad" commit was only released recently with
`2.1.6-1` (14 April 2022), we still need to cope with configs that used
it, and using some serde parser magic to transform on read only is hard
here due to section config (the json-value conversion and the verify step
currently happen before we can do anything about it).
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This reverts commit 7a1a5d206d7f526baaa39b86ecb462a870b641c1.
We could already cause this behavior by simply setting ignore-verified
to false, as that flag is basically an on/off switch for even
considering outdated-after or not.
So avoid the extra logic and just make the GUI use the previously
existing way.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
by adding the 'default' serde hint and renaming 'recursion_depth' to
'max_depth' (to be in line with the sync job config).
also add the logic to actually add/update the tape backup job config.
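A minimal sketch of the field after the rename (the struct context and exact
attributes are assumed):

```rust
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
pub struct TapeBackupJobSetup {
    // renamed from `recursion_depth`; the serde default lets it be omitted
    #[serde(default, skip_serializing_if = "Option::is_none", rename = "max-depth")]
    pub max_depth: Option<usize>,
    // ... remaining fields elided
}
```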
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
by adding a new parameter 'namespaces', which contains a mapping
for a namespace like this:
store=datastore,source=foo,target=bar,max-depth=2
if source or target is omitted, the root namespace is used for its value.
this mapping can be given several times (on the CLI) or as an array (via
the API) to have mappings for multiple datastores.
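A minimal sketch of one such mapping entry as a struct (names and types
assumed for illustration):

```rust
use serde::{Deserialize, Serialize};

// Sketch: one entry of the 'namespaces' parameter described above.
#[derive(Serialize, Deserialize)]
pub struct NamespaceMap {
    /// target datastore
    store: String,
    /// source namespace; omitted means the root namespace
    #[serde(default, skip_serializing_if = "Option::is_none")]
    source: Option<String>,
    /// target namespace; omitted means the root namespace
    #[serde(default, skip_serializing_if = "Option::is_none")]
    target: Option<String>,
    /// how deep below the source namespace to map
    #[serde(default, skip_serializing_if = "Option::is_none", rename = "max-depth")]
    max_depth: Option<usize>,
}
```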
if a specific snapshot list is given at the same time, the given
snapshots will be restored according to this mapping, or to the source
namespace if no mapping was found.
to do this, we reuse the restore_list_worker, but change it so that it
does not hold a lock for the duration of the restore; instead it fails
at the end if the snapshot already exists. also, the snapshot will now
be temporarily restored into the '.tmp/<media-set-uuid>' folder of the
target datastore.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
these are helpers for the few cases where we want to print and parse
a format that has the namespace and snapshot combined, like for the
on-tape catalog and the snapshot archive.
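A minimal sketch of the shape of such helpers (the separator and the exact
signatures are assumptions, not the actual on-tape format):

```rust
// Sketch: combine and split "<namespace-path>:<snapshot>" strings.
fn print_ns_and_snapshot(ns: &str, snapshot: &str) -> String {
    format!("{ns}:{snapshot}")
}

fn parse_ns_and_snapshot(s: &str) -> Option<(&str, &str)> {
    s.split_once(':')
}
```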
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
into the regular one (with default == MAX) and the one used for
pull/sync, where the default is 'None' which actually means the remote
end reduces the scope of sync automatically (or, if needed,
backwards-compat mode without any remote namespaces at all).
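A minimal sketch of what such a split can look like with proxmox-schema
(names, bounds, and descriptions assumed):

```rust
use proxmox_schema::{IntegerSchema, Schema};

// Sketch: same value range, different default semantics.
pub const NS_MAX_DEPTH_SCHEMA: Schema =
    IntegerSchema::new("How many levels of namespaces should be operated on (0 == no recursion)")
        .minimum(0)
        .maximum(7)
        .default(7) // regular variant: default == MAX
        .schema();

// The pull/sync variant has no .default(): an unset value means the remote
// end reduces the scope of the sync automatically.
pub const NS_MAX_DEPTH_REDUCED_SCHEMA: Schema =
    IntegerSchema::new("How many levels of namespaces should be pulled (unset == automatic)")
        .minimum(0)
        .maximum(7)
        .schema();
```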
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
and use it when creating a sync job, and simplify the check on updating
(only check the final, resulting config instead of each intermediate
version).
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
We sometimes need to do some in-memory only stuff, e.g., to check if
GC is already running for a datastore, which is a try_lock on a mutex
that is in-memory.
Actually the whole thing would be nicer if we could guarantee to hold
the correct contract statically, e.g., like
https://docs.rust-embedded.org/book/static-guarantees/design-contracts.html
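A minimal sketch of the in-memory check mentioned above (the struct and
field names are invented for illustration):

```rust
use std::sync::Mutex;

struct DataStoreState {
    // held for the whole duration of a garbage collection run
    gc_mutex: Mutex<()>,
}

impl DataStoreState {
    // Sketch: GC counts as "running" exactly while the mutex is held, so a
    // failing try_lock answers the question without touching the disk.
    fn gc_running(&self) -> bool {
        self.gc_mutex.try_lock().is_err()
    }
}
```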
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>