file-restore has a 'timeout' parameter; if that is exceeded, it returns
an error with the HTTP code 503 Service Unavailable.
When the web UI encounters such an error, it retries the listing a few
times before giving up.
To make use of these, add the 'timeout' parameter to the new
'extraParams' of 'file_restore_list'.
25 seconds were chosen because that is under pveproxy's 30s limit, with
a bit of overhead to spare for the rest of the API call, like JSON
decoding, forking, access control checks, etc.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
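A hedged sketch of the caller side (the argument shape of
'file_restore_list' and the extra-parameters hash key are assumptions
based on the description above):

    # pass the timeout via the new extra-parameters hash; 25 seconds
    # stays safely under pveproxy's 30s limit
    my $files = $pbs_client->file_restore_list(
        $snapshot, $filepath, $base64,
        { timeout => 25 }, # seconds
    );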
since commit
ba690c40 ("file-restore: remove 'json-error' parameter from list_files")
in proxmox-backup, the file-restore binary will return the error as JSON
when called with '--output-format json' (which we do in PVE::PBSClient).
Here, we assume that 'file-restore' will fail in that case and try to
use the return value as an array ref, which fails, so the user never
sees the real error message.
To fix that, check the ref type of the return value and act accordingly.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
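A minimal sketch of the described ref-type check (the error field name
'message' is an assumption):

    # act on the decoded '--output-format json' result by its ref type
    if (ref($res) eq 'ARRAY') {
        return $res; # the normal file listing
    } elsif (ref($res) eq 'HASH' && $res->{message}) {
        die "file-restore failed: $res->{message}\n"; # surface the real error
    } else {
        die "unexpected result from file-restore\n";
    }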
so that there is better code locality, and we also avoid forgetting to
adapt the check for each specific draid-config parameter if a new one
gets added or an existing one is changed.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
It is possible to set the number of spares and the size of
data stripes via the draidspares & draiddata parameters.
Signed-off-by: Stefan Hrdlicka <s.hrdlicka@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
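For illustration, a hypothetical sketch of how such parameters could
map onto a draid vdev specification (syntax per zpoolconcepts(7):
draid[<parity>][:<data>d][:<children>c][:<spares>s]; variable names
assumed):

    my $vdev = "draid$parity";
    $vdev .= ":${draiddata}d"   if defined($draiddata);   # data stripes
    $vdev .= ":${draidspares}s" if defined($draidspares); # distributed spares
    # e.g. parity=2, draiddata=4, draidspares=1 -> 'draid2:4d:1s'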
If both type and vmid are defined, we don't need to list the current
snapshots; we can simply derive the single backup group from them and
let the PBS client handle the rest.
This should be a not-so-small speedup for most setups using PBS backup
with pruning configured on the PVE side, as vzdump calls this
separately for every vmid on backup jobs with multiple guests included.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
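A hedged sketch of the shortcut (the 'qemu'/'lxc' to 'vm'/'ct' mapping
and variable names are assumptions; PBS groups look like 'vm/100'):

    if (defined($type) && defined($vmid)) {
        my $backup_type = $type eq 'qemu' ? 'vm' : 'ct';
        my $group = "$backup_type/$vmid";
        # hand the single group to the PBS client instead of listing
        # all snapshots in the store first
    }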
this format comes from the remote cluster, so it might not be supported
on the source side - checking whether it's known (as an additional
safeguard) and untainting it (to avoid an open3 failure) are required.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
[ T: squashed in canonical perl array ref access ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
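A hedged sketch of the check-and-untaint step (the list of known
formats is illustrative, not exhaustive):

    # the capture both validates the remote format and untaints it,
    # so it can safely be passed to open3 later
    my ($format) = $remote_format =~ /^(raw|qcow2|vmdk|subvol)$/
        or die "unsupported format '$remote_format' from remote cluster\n";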
fixing some issues reported by perlcritic along the way and cutting
down 70 lines, often even improving readability.
I tried to recheck and be conservative, so there shouldn't be any
regression, but it's still Perl after all...
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This makes it consistent with the naming scheme in PBS/GUI.
Keep the value for API stability reasons and remove it in the next
major version.
Signed-off-by: Matthias Heiserer <m.heiserer@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
One of the smaller annoyances, especially for less experienced users, is
the fact that when creating a local storage (ZFS, LVM (thin), dir) in a
cluster, one can only leave the "Add Storage" option enabled the first
time.
On any following node, this option needs to be disabled and the new
node manually added to the list of nodes for that storage.
This patch changes the behavior. If a storage of the same name already
exists, it will verify that necessary parameters match the already
existing one.
Then, if the 'nodes' parameter is set, it adds the current node and
updates the storage config.
In case there is no nodes list, nothing else needs to be done, and the
GUI will stop showing the question mark for the configured, but until
then nonexistent, local storage.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
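A hedged sketch of the described flow (the consistency-check helper is
hypothetical; the parsed 'nodes' property is a hash of node names):

    my $scfg = $cfg->{ids}->{$storeid};
    if ($scfg) {
        assert_params_match($scfg, $new_params); # hypothetical helper
        if ($scfg->{nodes}) {
            $scfg->{nodes}->{$nodename} = 1; # add the current node
            # ... then write out the updated storage config
        }
        # without a nodes list, the storage already applies everywhere
    }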
If a storage of that type and name already exists (LVM, zpool, ...) but
we do not have a Proxmox VE Storage config for it, it is possible that
the creation will fail midway due to checks done by the underlying
storage layer itself. This in turn can lead to disks that are already
partitioned. Users would need to clean this up themselves.
By adding checks early on, not only checking against the PVE storage
config, but against the actual storage type itself, we can die early
enough, before we touch any disk.
For ZFS, the logic to gather pool data is moved into its own function,
to be called from the index API endpoint and from the check in the
create endpoint.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
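A hedged sketch of such an early check for ZFS (the helper name is
taken from the description above, its return shape is assumed):

    my $pools = get_pool_data();
    die "pool '$name' already exists on this node\n"
        if grep { $_->{name} eq $name } @$pools;
    # only now start partitioning disks and creating the pool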
returns a list of mounted paths with their backing devices
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
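A rough illustration of the returned structure (field names assumed):

    # one entry per mounted filesystem with its backing device
    my $mounts = [
        { path => '/mnt/datastore', device => '/dev/sdb1' },
        { path => '/',              device => '/dev/mapper/pve-root' },
    ];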
The default timeout in PVE/RADOS.pm is 5 seconds, but this is not
always enough for external clusters under load. Workers can and should
take their time to not fail here too quickly.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
In preparation for increasing the timeout for workers. Neither of the
existing callers of librados_connect() currently uses the parameter.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
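A hedged sketch of a later caller making use of it (the option name
'timeout' matches PVE::RADOS; the call shape and value are assumed):

    # workers may wait longer instead of failing after the 5s default
    my $rados = librados_connect($scfg, $storeid, { timeout => 60 });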
The return value of get_rbd_dev_path() is only used when $scfg->{krbd}
evaluates to true, and the function shouldn't have any side effects
that are needed later, so the call can be avoided otherwise.
This also saves a RADOS connection and command for configurations with
external clusters and krbd disabled.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
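A minimal sketch of the guard (the argument shape of get_rbd_dev_path
is assumed):

    if ($scfg->{krbd}) {
        # only here is the device path, and thus the RADOS
        # connection behind it, actually needed
        return get_rbd_dev_path($scfg, $storeid, $volname);
    }
    # otherwise build the librbd path string without contacting the cluster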
When switching this from calling the external binary to
using the Perl API client, the timeout got reduced to 7
seconds, which is definitely insufficient for larger stores.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
while the resulting backups are encrypted, they would not be restorable
using only the master key if the original PVE system is lost.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
if the key file doesn't exist (anymore), but the storage.cfg references
one, die on commands that should use encryption instead of falling back
to plain-text operations.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Tested-by: Stoiko Ivanov <s.ivanov@proxmox.com>
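A hedged sketch of the fail-closed behavior (the key file path scheme
is an assumption):

    my $keyfile = "/etc/pve/priv/storage/$storeid.enc";
    if ($scfg->{'encryption-key'} && !-f $keyfile) {
        # die instead of silently creating unencrypted backups
        die "encryption key for storage '$storeid' is configured but missing\n";
    }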
Before af07f67 ("pbs: use vmid parameter in list_snapshots") the
namespace was set via do_raw_client_command, but now it needs to be
set explicitly here.
Fixes: af07f67 ("pbs: use vmid parameter in list_snapshots")
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
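A minimal sketch of the explicit namespace handling (parameter name
'ns' as used by the PBS API):

    # only set the namespace if one is configured for the storage
    $param->{ns} = $scfg->{namespace} if defined($scfg->{namespace});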
Particularly for operations such as pruning backups after a
scheduled backup, we do not want to list the entire
store.
(pbs_api_connect is moved up unmodified)
Note that the 'snapshots' CLI command only takes a full
group, but the API does allow specifying a backup-id without
a backup-type!
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
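A hedged sketch of the API call (endpoint path per the PBS API; the
client variable is assumed):

    # 'backup-type' is optional here, unlike with the CLI command
    my $param = { 'backup-id' => $vmid };
    $param->{'backup-type'} = $type if defined($type);
    my $snapshots = $api_client->get(
        "/api2/json/admin/datastore/$store/snapshots", $param);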
this allows setting notes+protected for backups on btrfs
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Acked-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
by refactoring it into a helper and using that.
With this, we can omit 'update_volume_notes' in the subclasses.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
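A hedged sketch of such a helper-based dispatch (method and attribute
names assumed):

    sub update_volume_attribute {
        my ($class, $scfg, $storeid, $volname, $attribute, $value) = @_;
        if ($attribute eq 'notes') {
            # shared file-based implementation, usable by all subclasses
            return $class->update_volume_notes($scfg, $storeid, $volname, $value);
        }
        die "attribute '$attribute' is not supported\n";
    }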
and refactored usages for .log and .notes with them.
In some parts of the test case code I had to introduce new variables to
shorten the line length so as not to exceed the 100-column line limit.
Signed-off-by: Daniel Tschlatscher <d.tschlatscher@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
This improves handling when two archive remove calls race with each
other; formerly one of them would encounter an error. Now both
finish successfully.
Signed-off-by: Daniel Tschlatscher <d.tschlatscher@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
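A minimal sketch of making the removal tolerant of the race (assuming a
plain unlink is used underneath):

    use Errno qw(ENOENT);

    # if the other call already removed the archive, treat it as success
    if (!unlink($archive) && !$!{ENOENT}) {
        die "removing archive '$archive' failed: $!\n";
    }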
When a VM or container backup was deleted, the .notes file was not
removed, so over time the dump folder would get polluted with
notes for backups that no longer existed. As backup names contain a
timestamp and the notes therefore cannot be reused, I think
it is safe to just delete them, like we do with the .log file.
Furthermore, I moved the deletion of the log and notes files into a
new function called "archive_auxiliaries_remove". Additionally, the
archive_info object now returns one more field containing the name of
the notes file. The test cases have to be adapted to expect this new
value, as the package will not compile otherwise.
Signed-off-by: Daniel Tschlatscher <d.tschlatscher@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
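A hedged sketch of the new helper (exact suffix handling and error
reporting assumed):

    sub archive_auxiliaries_remove {
        my ($archive) = @_;
        # remove the .log and .notes files that belong to the archive
        for my $ext (qw(log notes)) {
            my $file = "$archive.$ext";
            warn "removing $file failed: $!\n" if -e $file && !unlink($file);
        }
    }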
The changes in cfe46e2d4a did not catch
all situations.
In the case of a guest having two disk images with the same name on
pools with the same name, but in two different Ceph clusters, we still
had issues when starting it. The first disk got mapped as expected. The
second disk did not get mapped, because we returned the old $path
"/dev/rbd/<pool>/<image>", as it already existed from the first
disk.
In the case that only the "old" /dev/rbd path exists and we do not have
the /dev/rbd-pve/<cluster>/... path available, we now check if the
cluster fsid used by that rbd device matches the one we expect. If it
does, then we are in the situation that the image has been mapped before
the new rbd-pve udev rule was introduced. If it does not, then we have
the situation of an ambiguous mapping in /dev/rbd and return the
$pve_path.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
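A heavily hedged sketch of the described decision (sysfs layout and
variable names assumed):

    # resolve /dev/rbd/<pool>/<image> to its rbdX device number
    my ($dev_id) = readlink($old_path) =~ m|rbd(\d+)$|;
    my $mapped_fsid = PVE::Tools::file_read_firstline(
        "/sys/bus/rbd/devices/$dev_id/cluster_fsid");
    # mapped before the rbd-pve udev rule existed -> keep the old path
    return $old_path if $mapped_fsid eq $expected_fsid;
    return $pve_path; # ambiguous mapping in /dev/rbd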
When a data-pool is configured, use it for status infos. The 'data-pool'
config option is used to mark the erasure-coded pool, while 'pool'
will be the replicated pool holding metadata such as the omap.
This means the 'pool' will only use a small amount of space, and people
are interested in how much they can store in the erasure-coded pool
anyway.
Therefore, this patch reorders the assignment of the used pool name by
availability of the scfg parameters: data-pool -> pool -> fallback 'rbd'
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
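The resulting lookup order can be expressed as a simple defined-or
chain (a sketch, variable usage as described above):

    my $pool = $scfg->{'data-pool'} // $scfg->{pool} // 'rbd';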
happens in case of a mistyped poolname, and the new message should be
more helpful than:
`Use of uninitialized value $free in addition (+) at \
/usr/share/perl5/PVE/Storage/RBDPlugin.pm line 64`
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
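A hedged sketch of the guard (message wording illustrative):

    # fail with an actionable message instead of warning about an
    # uninitialized $free deep in the size calculation
    die "could not get usage stats for rbd pool '$pool' - wrong pool name?\n"
        if !defined($free);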