is left in the datastore. Before, the GUI would report "Never" for the
estimated time full, because the value provided by the backend was in
the past. To get around this, the GUI now reports "Full" once the value
for available reaches 0.
Signed-off-by: Daniel Tschlatscher <d.tschlatscher@proxmox.com>
The API now exposes the field 'available' as well, with which the
unprivileged total is calculated in all corresponding views in the
frontend.
The rrd charts now also display the total as the unprivileged total
if available, otherwise the absolute total is used.
Signed-off-by: Daniel Tschlatscher <d.tschlatscher@proxmox.com>
The rrd data now includes tracking the available field in disk usage.
The calculation for the estimated_time_full was adapted to use the
total for the unprivileged user, which is the sum of used + available.
The total for unprivileged users is preferable, because datastores are
always written to by the backup user, which means that any storage
space reserved for root is unusable for our purposes.
To avoid resetting the estimate when switching to this new version,
the backend will try to use the available value to calculate the
unprivileged total. When that is not an option, it will fall back to
using the absolute total.
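For illustration, the fallback could look like this (a minimal Rust
sketch with made-up field names, not the actual backend types):

```
/// Sketch: compute the total as seen by the unprivileged backup user,
/// falling back to the absolute total for old data without `available`.
/// Field names here are illustrative, not the real backend structs.
struct DiskUsage {
    used: u64,
    avail: Option<u64>, // 'available': only present with the new backend
    total: u64,         // absolute total, including space reserved for root
}

fn unprivileged_total(d: &DiskUsage) -> u64 {
    match d.avail {
        // unprivileged total = used + available
        Some(avail) => d.used + avail,
        // fall back to the absolute total, avoiding an estimate reset
        None => d.total,
    }
}
```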
Signed-off-by: Daniel Tschlatscher <d.tschlatscher@proxmox.com>
The read_tasklog API call now streams the whole log file if the query
parameter 'download' is set to true. If the limit parameter is set to
0, all lines in the tasklog are returned in JSON format.
To make a file stream and a JSON response work in the same API call, I
had to use one of the lower-level ApiMethod types from the
proxmox-router. Therefore, the routing declarations and parameter
schemas have been changed accordingly.
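Schematically, the branching works like this (a toy sketch with
stand-in types; the real code uses the lower-level proxmox-router
machinery):

```
use std::fs;
use std::path::Path;

/// Toy stand-in for the two possible response kinds.
enum TaskLogResponse {
    FileStream(Vec<u8>), // raw file contents, streamed on download=true
    Json(Vec<String>),   // lines wrapped in a JSON-style structure
}

fn read_tasklog(path: &Path, download: bool, limit: usize) -> std::io::Result<TaskLogResponse> {
    if download {
        // stream the whole log file as-is
        return Ok(TaskLogResponse::FileStream(fs::read(path)?));
    }
    let lines: Vec<String> = fs::read_to_string(path)?
        .lines()
        .map(str::to_owned)
        .collect();
    Ok(if limit == 0 {
        TaskLogResponse::Json(lines) // limit=0: return all lines
    } else {
        TaskLogResponse::Json(lines.into_iter().take(limit).collect())
    })
}
```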
Signed-off-by: Daniel Tschlatscher <d.tschlatscher@proxmox.com>
This new subcommand compares a pxar archive in two different
snapshots and prints a list of added/modified/deleted file
entries.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
The backend doesn't have an 'enable' option, only 'disable'. Convert
it to avoid checking a negated value as "enabled".
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Encapsulate it in a small QMPSock struct impl to make the usage nicer,
as the caller should not have to care about or keep track of the
initial socket state and details.
A send_raw method and a send method taking a Value should cover most
needs.
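A rough sketch of what such a wrapper could look like (hypothetical;
serde_json's Value stands in for the payload type):

```
use std::io::{BufRead, BufReader, Write};
use std::os::unix::net::UnixStream;
use serde_json::Value;

struct QMPSock {
    stream: UnixStream,
    reader: BufReader<UnixStream>,
    negotiated: bool, // QMP requires a capabilities handshake first
}

impl QMPSock {
    fn new(path: &str) -> std::io::Result<Self> {
        let stream = UnixStream::connect(path)?;
        let reader = BufReader::new(stream.try_clone()?);
        Ok(Self { stream, reader, negotiated: false })
    }

    /// Send a raw JSON line, performing the handshake lazily so the
    /// caller never has to track the initial socket state.
    fn send_raw(&mut self, data: &str) -> std::io::Result<String> {
        if !self.negotiated {
            let mut greeting = String::new();
            self.reader.read_line(&mut greeting)?; // server banner
            self.stream.write_all(b"{\"execute\":\"qmp_capabilities\"}\n")?;
            let mut ack = String::new();
            self.reader.read_line(&mut ack)?;
            self.negotiated = true;
        }
        self.stream.write_all(data.as_bytes())?;
        self.stream.write_all(b"\n")?;
        let mut reply = String::new();
        self.reader.read_line(&mut reply)?;
        Ok(reply)
    }

    /// Convenience wrapper taking a serde_json Value.
    fn send(&mut self, command: Value) -> std::io::Result<String> {
        self.send_raw(&command.to_string())
    }
}
```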
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
this is on top of the normal memory, and requiring over 1.3 GB extra is
just huge. Sadly, the commit adding this has zero details about which
setups failed and which work again with the change, so it is hard to
tell, but any setup that needs that much sounds like a bug in ZFS or in
the remaining code here.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
avoid the need to thread a parameter through a dozen functions which
all don't care about it at all; if anything, this should be a global
OnceCell or a lock-guarded parameter.
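For illustration, such a global could look like this (using std's
OnceLock as a stand-in for a OnceCell; the parameter name is made up):

```
use std::sync::OnceLock;

// hypothetical parameter; stands in for whatever was threaded through
static VERBOSE_MODE: OnceLock<bool> = OnceLock::new();

fn init_verbose(v: bool) {
    // set once at startup; later calls are ignored
    let _ = VERBOSE_MODE.set(v);
}

fn verbose() -> bool {
    *VERBOSE_MODE.get().unwrap_or(&false)
}
```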
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
by adding a 'dynamic-memory' parameter that controls whether we
automatically increase the memory of the guest VM or not
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
which registers a binary in /root/.forward and handles mail forwarding
to the mail address configured for root@pam in PBS, similar to how it
is done in PVE currently.
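The registered entry would presumably be a sendmail-style pipe line in
/root/.forward, e.g. (the binary path is an assumption):

```
|/usr/bin/proxmox-mail-forward
```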
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
when a backup contains a drive with ZFS on it, the default memory
size (up to 384 MiB) is often not enough to hold the ZFS metadata.
To improve that situation, add memory dynamically (1 GiB) when a path
is requested that is on ZFS. Note that the image must be started with
a kernel capable of memory hotplug.
To achieve that, we also have to add a QMP socket to the VM, so that
we can later connect and add the memory backend and DIMM.
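Using the QMPSock sketch from the earlier commit, the hotplug could
look roughly like this (ids and the exact QMP arguments are
illustrative):

```
use serde_json::json;

fn add_dimm(qmp: &mut QMPSock) -> std::io::Result<()> {
    // create a 1 GiB RAM backend object ...
    qmp.send(json!({
        "execute": "object-add",
        "arguments": {
            "qom-type": "memory-backend-ram",
            "id": "mem0",
            "size": 1024u64 * 1024 * 1024,
        }
    }))?;
    // ... and plug it into the guest as a DIMM
    qmp.send(json!({
        "execute": "device_add",
        "arguments": { "driver": "pc-dimm", "id": "dimm0", "memdev": "mem0" }
    }))?;
    Ok(())
}
```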
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
fixups for DatastoreFSyncLevel:
* use derive for Default
* add some more derives (Clone, Copy)
chunk store:
* drop to_owned for chunk_dir_path
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
the dropped .into() is guarded by the bumped build-dependency on
proxmox-sys 0.4.1; the missing Eq is a new clippy lint.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
With the old code, the rate-limit parameters got passed in their own
dictionary under the 'limit' key, but the API expects the rate-limit
settings as top-level keys. This commit sets the rate-limit
parameters correctly so the API actually uses them.
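Schematically, with made-up values (the parameter names
'rate-in'/'rate-out' are assumptions here):

```
use serde_json::{json, Value};

fn main() {
    // before: rate-limit settings nested under their own "limit" key,
    // which the API does not look at
    let old: Value = json!({ "limit": { "rate-in": "10M", "rate-out": "10M" } });

    // after: the settings as top-level keys, as the API expects
    let new: Value = json!({ "rate-in": "10M", "rate-out": "10M" });

    println!("{old}\n{new}");
}
```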
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
currently, we don't (f)sync on chunk insertion (or at any point after
that), which can lead to broken chunks in case of e.g. an unexpected
power loss. To fix that, offer a tuning option for datastores that
controls the level of syncs it does:
* None (default): same as the current state, no (f)syncs done at any point
* Filesystem: at the end of a backup, the datastore issues
  a syncfs(2) to the filesystem of the datastore
* File: issues an fsync on each chunk as they get inserted
  (using our 'replace_file' helper) and an fsync on the directory handle
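A minimal sketch of how the per-chunk case could be dispatched (names
mirror the commit; assumes the older nix fsync(RawFd) API, the real
plumbing differs):

```
use std::os::fd::RawFd;

#[derive(Clone, Copy, Default)]
enum DatastoreFSyncLevel {
    #[default]
    None,       // no (f)syncs at any point (current behaviour)
    Filesystem, // one syncfs(2) at the end of the backup instead
    File,       // fsync each chunk and the directory handle
}

fn sync_after_insert(level: DatastoreFSyncLevel, chunk_fd: RawFd, dir_fd: RawFd) -> nix::Result<()> {
    match level {
        // None never syncs; Filesystem syncs once at the end of the backup
        DatastoreFSyncLevel::None | DatastoreFSyncLevel::Filesystem => Ok(()),
        DatastoreFSyncLevel::File => {
            nix::unistd::fsync(chunk_fd)?; // persist the chunk itself
            nix::unistd::fsync(dir_fd)     // and its directory entry
        }
    }
}
```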
a small benchmark showed the following (times in mm:ss):
setup: virtual PBS, 4 cores, 8 GiB memory, ext4 on a spinner

  size                  none    filesystem  file
  2 GiB (fits in RAM)   00:13   00:41       01:00
  33 GiB                05:21   05:31       13:45

so if the backup fits in memory, there is a large difference between
all of the modes (expected), but as soon as it exceeds the memory size,
the difference between not syncing and syncing the fs at the end
becomes much smaller.
I also tested on an NVMe, but there the syncs basically made no
difference.
namely 'catalog' and 'read-all-labels', by always opening a window
(with a drive now auto-selected) showing the two checkboxes
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
in a disaster recovery case, it is useful to not only re-inventorize
the labels + media-sets, but also to try to recover the catalogs
from the tape (to know what's on there). This adds an option to
the inventory API call that tries to do a fast catalog restore
from each tape to be inventorized.
also sets the correct default for 'read-all-labels' in the API and
converts it to a bool
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
this way we can omit the pattern
```
let status_path = Path::new(TAPE_STATUS_DIR);
some_function(status_path);
```
and pass TAPE_STATUS_DIR directly. In some instances we now have to
pass TAPE_STATUS_DIR more often, but most often we save a few
intermediary Paths.
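With the change, the same call site shrinks to:
```
some_function(TAPE_STATUS_DIR);
```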
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Previously, autocompletion of archive names, for instance
in the case of
$ proxmox-backup-client restore <snapshot> <TAB>
did not work if no namespace was provided via the --ns option.
The fix is to fall back to the root namespace if the option is
not provided by the user.
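As a sketch (BackupNamespace::root() and the FromStr impl are assumed
from the API types, not verified against the actual completion helper):

```
use pbs_api_types::BackupNamespace;
use std::collections::HashMap;

fn namespace_from_param(param: &HashMap<String, String>) -> BackupNamespace {
    match param.get("ns") {
        // user provided --ns: parse it, ignoring bad values for completion
        Some(raw) => raw.parse().unwrap_or_else(|_| BackupNamespace::root()),
        // no --ns given: fall back to the root namespace
        None => BackupNamespace::root(),
    }
}
```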
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>