this allows setting notes+protected for backups on btrfs
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Acked-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
by refactoring it into a helper and using that.
With this, we can omit 'update_volume_notes' in the subclasses
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
and refactored usages for .log and .notes with them.
In some parts of the test case code I had to introduce new variables to
shorten lines so they do not exceed the 100-column limit.
Signed-off-by: Daniel Tschlatscher <d.tschlatscher@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
This improves handling when two archive remove calls race with each
other; previously one of them would encounter an error, now both
finish successfully.
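A minimal sketch of this kind of race-tolerant removal (names and structure are illustrative, not the actual plugin code):

    use strict;
    use warnings;

    sub archive_remove_tolerant {
        my ($archive) = @_;
        # If a concurrent call already removed the file, unlink() fails but the
        # file is gone afterwards - only treat it as an error if it still exists.
        if (!unlink($archive) && -e $archive) {
            die "removing archive $archive failed - $!\n";
        }
    }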
Signed-off-by: Daniel Tschlatscher <d.tschlatscher@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
When a VM or container backup was deleted, the .notes file was not
removed, so over time the dump folder would get polluted with notes
for backups that no longer existed. As backup names contain a
timestamp and the notes therefore cannot be reused, it is safe to
delete them just like we do with the .log file.
Furthermore, I moved the deletion of the log and notes files into a
new function called "archive_auxiliaries_remove". Additionally, the
archive_info object now returns one more field containing the name of
the notes file. The test cases have to be adapted to expect this new
value, as the package will not compile otherwise.
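A rough sketch of what the new helper does conceptually; the field names taken from archive_info and the error handling are assumptions here, not the real plugin code:

    use strict;
    use warnings;

    sub archive_auxiliaries_remove {
        my ($dirname, $info) = @_;
        # $info is assumed to be the archive_info result, now also carrying the
        # notes file name next to the log file name.
        for my $file ($info->{logfilename}, $info->{notesfilename}) {
            next if !defined($file);
            my $path = "$dirname/$file";
            next if !-e $path;
            unlink($path) or warn "removing $path failed - $!\n";
        }
    }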
Signed-off-by: Daniel Tschlatscher <d.tschlatscher@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
The changes in cfe46e2d4a did not catch all situations.
In the case of a guest having two disk images with the same name, on
pools with the same name but in two different Ceph clusters, we still
had issues when starting it. The first disk got mapped as expected.
The second disk did not get mapped because the old $path
"/dev/rbd/<pool>/<image>" already existed from the first disk, so we
returned that.
In the case that only the "old" /dev/rbd path exists and we do not have
the /dev/rbd-pve/<cluster>/... path available, we now check if the
cluster fsid used by that rbd device matches the one we expect. If it
does, then we are in the situation that the image has been mapped before
the new rbd-pve udev rule was introduced. If it does not, then we have
the situation of an ambiguous mapping in /dev/rbd and return the
$pve_path.
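A hedged sketch of that fallback check; the sysfs attribute path used here is an assumption:

    use strict;
    use warnings;
    use Cwd qw(abs_path);

    # Returns true if the already-mapped /dev/rbd/<pool>/<image> device belongs
    # to the cluster we expect (i.e. it was mapped before the rbd-pve rule
    # existed), false if the mapping is ambiguous and $pve_path should be used.
    sub mapped_dev_matches_cluster {
        my ($dev_path, $expected_fsid) = @_;

        my ($dev_id) = (abs_path($dev_path) // '') =~ m!^/dev/rbd(\d+)$!;
        return 0 if !defined($dev_id);

        open(my $fh, '<', "/sys/bus/rbd/devices/$dev_id/cluster_fsid") or return 0;
        chomp(my $fsid = <$fh> // '');
        close($fh);

        return lc($fsid) eq lc($expected_fsid);
    }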
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
When a data-pool is configured, use it for the status info. The
'data-pool' config option marks the erasure coded pool, while 'pool'
is the replicated pool holding metadata such as the omap.
This means the 'pool' will only use a small amount of space and people
are interested in how much they can store in the erasure coded pool
anyway.
Therefore this patch reorders the assignment of the used pool name by
availability of the scfg parameters: data-pool -> pool -> fallback 'rbd'
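In code, that precedence boils down to something like this (a sketch; $scfg stands in for the parsed storage configuration):

    use strict;
    use warnings;

    my $scfg = { pool => 'rbd-metadata', 'data-pool' => 'rbd-data-ec' };

    # data-pool (erasure coded) -> pool (replicated) -> fallback 'rbd'
    my $pool = $scfg->{'data-pool'} // $scfg->{pool} // 'rbd';
    print "reporting usage for pool '$pool'\n";    # rbd-data-ec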
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
this happens in the case of a mistyped pool name, and the new message
should be more helpful than:
`Use of uninitialized value $free in addition (+) at \
/usr/share/perl5/PVE/Storage/RBDPlugin.pm line 64`
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
the fallback to a default pool name of 'rbd' was introduced in:
1440604a4b
and worked for the status command, because it used the `rados_cmd`
sub.
This fallback was lost with the changes in:
41aacc6cde
leading to confusing errors:
`Use of uninitialized value in string eq at \
/usr/share/perl5/PVE/Storage/RBDPlugin.pm line 633`
(e.g. in the journal from pvestatd)
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
to avoid calls into RADOS connect that trigger 'RPCEnv not
initialized' breakage in regression tests, but wouldn't really work
otherwise either.
In the future the RBD $scfg could actually support this (or a similarly
named) property, to save it on storage addition and then avoid frequent
mon commands.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
When krbd is used, subsequent removal after an operation
involving a rename could fail with
> librbd::image::PreRemoveRequest: 0x559b7506a470 \
> check_image_watchers: image has watchers - not removing
because the old mapping was still present.
For both operations with a rename, the owning guest should be offline,
but even if it weren't, unmap simply fails when the volume is in-use.
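A best-effort sketch of the fix described here, assuming the old /dev/rbd/<pool>/<image> path layout; not the plugin's actual code:

    use strict;
    use warnings;

    sub unmap_old_mapping {
        my ($pool, $old_image) = @_;
        my $dev = "/dev/rbd/$pool/$old_image";
        return if !-e $dev;    # nothing mapped under the old name

        # 'rbd unmap' simply fails if the volume is still in use, which is fine here.
        system('rbd', 'unmap', $dev) == 0
            or warn "rbd unmap $dev failed (volume may still be in use)\n";
    }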
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
it only redirected to get_rbd_dev_path with the same signature, and
both are private subs.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
the new udev rule is expected to be in place and active, so switching
the checks around means 1 instead of 2 stat()s in this rather hot code
path.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
By adding our own customized rbd udev rules and ceph-rbdnamer we can
create device paths that include the cluster fsid and avoid any
ambiguity if the same pool and namespace combination is used in
different clusters we connect to.
In addition to the '/dev/rbd/<pool>/...' paths, we now have
'/dev/rbd-pve/<cluster fsid>/<pool>/...' paths.
The other half of the patch makes use of the new device paths in the RBD
plugin.
The new 'get_rbd_dev_path' method returns the full device path. In
case the image has been mapped before the rbd-pve udev rule was
installed, it returns the old path.
The cluster fsid is read from the 'ceph.conf' file in the case of a
hyperconverged setup. In the case of an external Ceph cluster we need to
fetch it via a rados api call.
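A sketch of the resulting path selection (names are assumptions; the additional cluster-fsid check discussed in the commit further above is omitted):

    use strict;
    use warnings;

    sub get_rbd_dev_path {
        my ($cluster_fsid, $pool, $image) = @_;

        my $pve_path = "/dev/rbd-pve/$cluster_fsid/$pool/$image";
        my $old_path = "/dev/rbd/$pool/$image";

        # image mapped before the rbd-pve udev rule was installed -> only the
        # old path exists, so keep returning that
        return $old_path if !-e $pve_path && -e $old_path;
        return $pve_path;
    }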
Co-authored-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
When writing into the file, explicitly utf8 encode it, and then try
to utf8 decode it on read.
If the notes are not valid utf8, we assume they were iso-8859 encoded
and return as is.
Technically this is a breaking change, since there are iso-8859
comments that would successfully decode as utf8, for example: the
byte sequence "C2 A9" would be "Â©" in iso-8859, but decodes to "©" in
utf8.
From what I can tell though, this is rather unlikely to happen for
"real world" notes, because the first byte would be in the range of
C0-F7 (which are mostly language-dependent characters like "Â") and
the following bytes would have to be in the range of 80-BF, which are
only special characters like "£" (or undefined).
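A sketch of the described round-trip using the Encode module (assumed shape, not the exact plugin code):

    use strict;
    use warnings;
    use Encode qw(decode encode);

    sub encode_notes {
        my ($notes) = @_;
        return encode('UTF-8', $notes);    # written to the .notes file
    }

    sub decode_notes {
        my ($raw) = @_;
        # strict decode; on failure assume legacy iso-8859 content and return as is
        my $decoded = eval { decode('UTF-8', $raw, Encode::FB_CROAK | Encode::LEAVE_SRC) };
        return defined($decoded) ? $decoded : $raw;
    }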
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
With the 30s limit we have for sync api calls, 10s still leaves
enough room for answering and other work.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Since most zfs operations can take a while (under certain conditions),
increase the minimum timeout for zfs_request in workers to 5 minutes.
We cannot increase the timeouts in synchronous api calls, since they are
hard-limited to 30 seconds, but in a worker we do not have such limits.
The existing default timeout does not change (60 minutes in a worker,
5 seconds otherwise), but all zfs_requests with a set timeout (< 5 minutes)
will use the increased 5 minutes when run in a worker.
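The resulting timeout selection can be summarized like this (an illustrative sketch, not the actual zfs_request code):

    use strict;
    use warnings;

    sub effective_zfs_timeout {
        my ($requested, $in_worker) = @_;

        my $default = $in_worker ? 60 * 60 : 5;    # 60 minutes in a worker, 5s otherwise
        my $timeout = $requested // $default;

        # inside a worker, raise any explicitly set timeout below 5 minutes to 5 minutes
        $timeout = 60 * 5 if $in_worker && $timeout < 60 * 5;

        return $timeout;
    }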
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
The ability to mark backups as protected broke the implicit assumption
in vzdump that, with remove=1 and the current number of backups at the
limit (i.e. the sum of all keep options), a backup will be removed.
Introduce a new storage property 'max-protected-backups' to limit the
number of protected backups per guest. Use 5 as a default value, as it
should cover most use cases, while still not having too big of a
potential overhead in many scenarios.
For external plugins that do not return the backup subtype in
list_volumes, all protected backups with the same ID will count
towards the limit.
An alternative would be to count the protected backups when pruning.
While that would avoid the need for a new property, it would break the
current semantics of protected backups being ignored for pruning. It
also would be less flexible, e.g. for PBS, it can make sense to have
both keep-all=1 and a limit for the number of protected snapshots on
the PVE side.
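A hedged sketch of how such a property could be declared in the storage plugin schema; the exact description text and bounds are assumptions:

    use strict;
    use warnings;

    sub properties {
        return {
            'max-protected-backups' => {
                description => "Maximal number of protected backups per guest.",
                type => 'integer',
                minimum => 0,
                default => 5,
            },
        };
    }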
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Listing guest images should not require Datastore.Allocate in this
case. In preparation for adding disk import to the GUI.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Such users are supposed to be administrators of the storage, but
previously, access to backups was not allowed unless they also had
VM.Backup.
Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
In preparation for having check_volume_access() always allow access for
users with the Datastore.Allocate privilege, so as not to automatically
give all such users permission to extract the config too.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Otherwise, there is no storage-agnostic way to filter by backup group.
Call it subtype, to not confuse it with content type, and to be able
to re-use it for other content types than backup, if the need ever
arises.
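For illustration, a backup entry returned by list_volumes() would then carry an additional 'subtype' key (the surrounding fields are only the commonly seen ones; the exact set is an assumption):

    use strict;
    use warnings;

    my $entry = {
        volid => 'local:backup/vzdump-qemu-100-2022_03_01-12_00_00.vma.zst',
        content => 'backup',
        subtype => 'qemu',    # new: the backup's guest type / group
        vmid => 100,
    };
    print "$entry->{volid} ($entry->{subtype})\n";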
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
in the same manner as NT_STATUS_ACCESS_DENIED. It can be assumed to be
a configuration error, so avoid showing the generic "storage <storeid>
is not online". Reported in the community forum:
https://forum.proxmox.com/threads/storage-is-not-online-cifs.99201/post-428858
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
by making sure the storage ID is part of the error. This can happen
for (at least) CIFS, and GlusterFS with local server.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Previously, the transport format (which currently is always 'zfs') was
passed in, resulting in subvol-disks not being renamed correctly.
Fixes: a97d3ee ("Introduce allow_rename parameter for pvesm import and storage_migrate")
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
`qemu-img info --output=json` returns the size and used values as integers in
the JSON format, but the regex match converts them to strings.
As we know they only contain digits, we can simply cast them back to integers
after the regex.
The API requires them to be integers.
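A small sketch of the issue and the fix (illustrative regex, not the plugin's actual one):

    use strict;
    use warnings;

    my $output = '{"virtual-size": 10737418240, "actual-size": 1056768}';
    my ($size, $used) = $output =~ m/"virtual-size":\s*(\d+).*"actual-size":\s*(\d+)/s;

    # regex captures are strings to Perl's JSON encoder; cast them back so the
    # API returns proper integers
    ($size, $used) = (int($size), int($used));
    print "size=$size used=$used\n";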
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
Perl's automatic conversion can lead to integers being converted to
strings, for example by matching them in a regex.
To make sure we always return an integer in the API call, add an
explicit cast to integer.
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>