tape: improve throughput by not unnecessarily syncing/committing

When writing data to tape, the idea was to sync to tape and commit the
catalog to disk every 128GiB of data. For that, the counter
'bytes_written' was introduced and checked after every chunk/snapshot
archive.
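
As a minimal sketch of that intended logic (hypothetical names, not the
actual proxmox-backup code):

// Hypothetical sketch; 'COMMIT_INTERVAL', 'Writer' and 'write_archive'
// are illustrative names, not the real proxmox-backup identifiers.
const COMMIT_INTERVAL: usize = 128 * 1024 * 1024 * 1024; // 128GiB

struct Writer {
    bytes_written: usize,
}

impl Writer {
    fn write_archive(&mut self, data: &[u8]) {
        // ... write the archive data to tape ...
        self.bytes_written += data.len();
        // checked after every chunk/snapshot archive:
        if self.bytes_written >= COMMIT_INTERVAL {
            self.commit();
        }
    }

    fn commit(&mut self) {
        // ... sync the drive to tape, commit the catalog to disk ...
        self.bytes_written = 0; // without this reset, the check above fires after every archive
    }
}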

Sadly, we forgot to reset the counter after doing so, which meant that
once 128GiB had been written onto the tape, we synced/committed after
every single archive for the remaining length of the tape.

Since syncing to tape and writing to disk take a bit of time, the drive
had to slow down on every sync, reducing the available throughput (in
our tests here, from ~300MB/s to ~255MB/s).

By resetting the counter to zero after syncing, we avoid that and
improve throughput when backups on tape are bigger than 128GiB.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
commit c343c3f7f6
parent de2cd9a688
Author: Dominik Csapak <d.csapak@proxmox.com>
Date:   2024-05-07 15:45:52 +02:00
Committed-by: Dietmar Maurer


@@ -43,7 +43,7 @@ struct PoolWriterState {
     media_uuid: Uuid,
     // tell if we already moved to EOM
     at_eom: bool,
-    // bytes written after the last tape fush/sync
+    // bytes written after the last tape flush/sync and catalog commit
     bytes_written: usize,
 }
@@ -200,8 +200,9 @@ impl PoolWriter {
     /// This is done automatically during a backupsession, but needs to
     /// be called explicitly before dropping the PoolWriter
     pub fn commit(&mut self) -> Result<(), Error> {
-        if let Some(PoolWriterState { ref mut drive, .. }) = self.status {
-            drive.sync()?; // sync all data to the tape
+        if let Some(ref mut status) = self.status {
+            status.drive.sync()?; // sync all data to the tape
+            status.bytes_written = 0; // reset bytes written
         }
         self.catalog_set.lock().unwrap().commit()?; // then commit the catalog
         Ok(())
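
Note: the if-let now binds the whole PoolWriterState as 'status' instead
of destructuring only the drive field, since the body needs mutable
access to both drive and bytes_written.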