datastore: make 'filesystem' the default sync-level

The rationale is that it makes backups much safer than 'none', while
not incurring as big a performance hit as 'file'.

Here are some benchmarks:

data to be backed up:
~14 GiB of semi-random test images between 12 KiB and 4 GiB in size,
resulting in ~11 GiB of chunks (more than the RAM available on the target)

PBS setup:
virtualized (on an otherwise idle host); PBS itself was also idle
8 cores (kvm64 on an Intel 12700k) and 8 GiB of memory

all virtual disks are on LVM with discard and iothread enabled.
The HDD is a 4 TB Seagate ST4000DM000 drive, and the NVMe is a 2 TB
Crucial CT2000P5PSSD8.

I tested each disk with ext4/xfs/zfs (created with the GUI defaults),
doing 5 runs each; in between runs, the caches were flushed and the
filesystem was synced. For each series I dropped the biggest and smallest
result and averaged the remaining 3 (percentages are relative to the
'none' result).
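
The per-run procedure described above can be sketched as a small shell
harness. This is a hypothetical reconstruction, not the script used for the
benchmark: the backup source path, archive name, and output files are
placeholders.

```shell
#!/bin/sh
# Hypothetical benchmark harness sketch (needs root for drop_caches).
runs=5
for i in $(seq 1 "$runs"); do
    sync                               # flush all dirty pages to disk
    echo 3 > /proc/sys/vm/drop_caches  # drop page/dentry/inode caches
    /usr/bin/time -f '%e' -o "run-$i.time" \
        proxmox-backup-client backup images.pxar:/path/to/test-images
done
# discard the fastest and slowest run, then average the remaining three
sort -n run-*.time | sed '1d;$d' | awk '{ s += $1 } END { printf "%.2f\n", s / NR }'
```

The final pipeline implements the trimmed average used for the table below:
`sed '1d;$d'` strips the smallest and largest of the sorted results.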

result:

test         none     filesystem         file
hdd - ext4   125.67s  140.39s (+11.71%)  358.10s (+184.95%)
hdd - xfs    92.18s   102.64s (+11.35%)  351.58s (+281.41%)
hdd - zfs    94.82s   104.00s (+9.68%)   309.13s (+226.02%)
nvme - ext4  60.44s   60.26s (-0.30%)    60.47s (+0.05%)
nvme - xfs   60.11s   60.47s (+0.60%)    60.49s (+0.63%)
nvme - zfs   60.83s   60.85s (+0.03%)    60.80s (-0.05%)

All in all, it does not seem to make a difference for NVMe drives.
For HDDs, 'filesystem' increases backup time by ~10%, while the impact
of 'file' depends largely on the filesystem, but is always in the
range of a factor of ~3 to ~4.
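
For users who want a different level than the new default, it can still be
set explicitly per datastore via the tuning options. A sketch of the CLI
invocation (the datastore name 'store1' is hypothetical):

```shell
# Hypothetical datastore name; after this commit, omitting the option
# behaves like 'sync-level=filesystem'.
proxmox-backup-manager datastore update store1 --tuning 'sync-level=filesystem'
```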

Note that this does not take into account parallel actions, such as gc,
verify or other backups.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Author: Dominik Csapak, 2022-11-04 10:49:34 +01:00
Committed-by: Thomas Lamprecht
parent f41233d219
commit 4694dede0e
2 changed files with 3 additions and 3 deletions


@@ -344,13 +344,13 @@ and only available on the CLI:
 the crash resistance of backups in case of a powerloss or hard shutoff.
 There are currently three levels:
-- `none` (default): Does not do any syncing when writing chunks. This is fast
+- `none` : Does not do any syncing when writing chunks. This is fast
   and normally OK, since the kernel eventually flushes writes onto the disk.
   The kernel sysctls `dirty_expire_centisecs` and `dirty_writeback_centisecs`
   are used to tune that behaviour, while the default is to flush old data
   after ~30s.
-- `filesystem` : This triggers a ``syncfs(2)`` after a backup, but before
+- `filesystem` (default): This triggers a ``syncfs(2)`` after a backup, but before
   the task returns `OK`. This way it is ensured that the written backups
   are on disk. This is a good balance between speed and consistency.
 Note that the underlying storage device still needs to protect itself against
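
What the 'filesystem' level does once per backup can be reproduced from the
shell: coreutils `sync` with `--file-system` issues `syncfs(2)` for the
filesystem containing the given path (here `/tmp` stands in for the
datastore mount point).

```shell
# Flush exactly one filesystem: unlike a bare 'sync', which flushes all
# filesystems, --file-system only syncs the one containing the given path.
sync --file-system /tmp
```

This is also why 'filesystem' is so much cheaper than 'file' on HDDs: one
syncfs(2) per backup instead of one fsync(2) per written chunk.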


@@ -181,7 +181,6 @@ pub enum DatastoreFSyncLevel {
     /// which reduces IO pressure.
     /// But it may cause losing data on powerloss or system crash without any uninterruptible power
     /// supply.
-    #[default]
     None,
     /// Triggers a fsync after writing any chunk on the datastore. While this can slow down
     /// backups significantly, depending on the underlying file system and storage used, it
@@ -196,6 +195,7 @@ pub enum DatastoreFSyncLevel {
     /// Depending on the setup, it might have a negative impact on unrelated write operations
     /// of the underlying filesystem, but it is generally a good compromise between performance
     /// and consitency.
+    #[default]
     Filesystem,
 }