mirror of git://git.proxmox.com/git/pve-storage.git synced 2025-01-03 01:18:01 +03:00
Commit Graph

1749 Commits

Wolfgang Bumiller
78eac5baf2 pbs: namespace support
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-05-12 11:56:48 +02:00
Thomas Lamprecht
ba41f78f32 bump version to 7.2-2
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-04-28 18:20:02 +02:00
Thomas Lamprecht
78638b3dea rbd: get path: allow fake override of fsid in scfg for some regression tests
to avoid calls into RADOS connect that trigger "RPCEnv not
initialized" breakage in regression tests, but wouldn't really work
otherwise either

in the future the RBD $scfg could actually support this (or a similarly
named) property, to save it on storage addition and then avoid frequent
mon commands

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-04-28 18:17:58 +02:00
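A minimal Perl sketch of such an override, assuming a 'fsid' property in $scfg and a hypothetical RADOS helper; the real plugin code may differ:

    # prefer a (fake) fsid from the storage config so regression tests
    # never have to open a RADOS connection
    sub get_cluster_fsid {
        my ($scfg) = @_;

        return $scfg->{fsid} if defined($scfg->{fsid});    # assumed test override

        return query_fsid_via_rados($scfg);    # hypothetical helper, needs a mon connection
    }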
Thomas Lamprecht
cd58a69125 bump version to 7.2-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-04-28 17:38:22 +02:00
Fabian Ebner
9594717848 rbd: unmap volume after rename
When krbd is used, subsequent removal after an operation
involving a rename could fail with
> librbd::image::PreRemoveRequest: 0x559b7506a470 \
> check_image_watchers: image has watchers - not removing
because the old mapping was still present.

For both operations with a rename, the owning guest should be offline,
but even if it weren't, unmap simply fails when the volume is in-use.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2022-04-28 13:44:21 +02:00
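A rough sketch of the idea, with placeholder helper names rather than the plugin's actual subs:

    # after renaming an RBD image, drop any stale krbd mapping of the old
    # name; if the volume is still in use, unmap simply fails and we warn
    sub rename_and_unmap {
        my ($scfg, $old_name, $new_name) = @_;

        rbd_rename($scfg, $old_name, $new_name);    # placeholder helper

        eval { rbd_unmap($scfg, $old_name) };       # placeholder helper
        warn "unmapping old name failed: $@" if $@;
    }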
Fabian Grünbichler
cc682faafc rbd: drop get_kernel_device_path
it only redirected to get_rbd_dev_path with the same signature, and both
are private subs.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-04-27 13:03:16 +02:00
Fabian Grünbichler
647a667e10 rbd: reduce number of stats in likely path
the new udev rule is expected to be in place and active; switching the
checks around means 1 instead of 2 stat()s in this rather hot code path.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-04-27 13:01:42 +02:00
Aaron Lauterer
cfe46e2d4a rbd: fix #3969: add rbd dev paths with cluster info
By adding our own customized rbd udev rules and ceph-rbdnamer we can
create device paths that include the cluster fsid and avoid any
ambiguity if the same pool and namespace combination is used in
different clusters we connect to.

In addition to the '/dev/rbd/<pool>/...' paths we now have
'/dev/rbd-pve/<cluster fsid>/<pool>/...' paths.

The other half of the patch makes use of the new device paths in the RBD
plugin.

The new 'get_rbd_dev_path' method returns the full device path. In case
the image was mapped before the rbd-pve udev rule was installed, it
returns the old path.

The cluster fsid is read from the 'ceph.conf' file in the case of a
hyperconverged setup. In the case of an external Ceph cluster we need to
fetch it via a rados api call.

Co-authored-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2022-04-27 12:57:22 +02:00
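A sketch of what such a lookup could look like; parameter names and the fallback logic are assumptions based on the commit message (checking the new path first also matches the stat() reduction in the commit above):

    sub get_rbd_dev_path {
        my ($pool, $namespace, $image, $fsid) = @_;    # assumed parameters

        my $name = $namespace ? "$namespace/$image" : $image;
        my $new_path = "/dev/rbd-pve/$fsid/$pool/$name";
        my $old_path = "/dev/rbd/$pool/$name";

        # expected case first: only one stat() on the hot path
        return $new_path if -e $new_path;
        # image was mapped before the rbd-pve udev rule was installed
        return $old_path;
    }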
Dominik Csapak
43f8112f0b storage plugins: en/decode volume notes as UTF-8
When writing into the file, explicitly utf8 encode it, and then try
to utf8 decode it on read.

If the notes are not valid utf8, we assume they were iso-8859 encoded
and return as is.

Technically this is a breaking change, since there are iso-8859
comments that would successfully decode as utf8, for example: the
byte sequence "C2 A9" would be "Â©" in iso, but would decode to "©".

From what I can tell though, this is rather unlikely to happen for
"real world" notes, because the first byte would be in the range of
C0-F7 (which are mostly language dependent characters like "Â") and
the following bytes would have to be in the range of 80-BF, which are
only special characters like "©" (or undefined).

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2022-04-26 15:33:13 +02:00
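A self-contained Perl sketch of the described encode/decode behavior (not the plugins' exact code):

    use Encode qw(decode encode);

    sub encode_notes {
        my ($notes) = @_;
        return encode('UTF-8', $notes);    # always write UTF-8
    }

    sub decode_notes {
        my ($raw) = @_;
        # try strict UTF-8 first, otherwise assume legacy iso-8859 and return as is
        my $decoded = eval { decode('UTF-8', $raw, Encode::FB_CROAK) };
        return defined($decoded) ? $decoded : $raw;
    }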
Thomas Lamprecht
7a8751a2cd zfs pool: bump non-worker timeout default to 10s
With the 30s we got for sync API calls, 10s still leaves enough room
for answering and other stuff.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-04-26 15:25:40 +02:00
Dominik Csapak
4cd9b85d15 fix #3803: ZFSPoolPlugin: zfs_request: increase minimum timeout in worker
Since most zfs operations can take a while (under certain conditions),
increase the minimum timeout for zfs_request in workers to 5 minutes.

We cannot increase the timeouts in synchronous API calls, since they are
hard limited to 30 seconds, but in a worker we do not have such limits.

The existing default timeout does not change (60 minutes in a worker,
5 seconds otherwise), but all zfs_requests with a set timeout (< 5 minutes)
will use the increased 5 minutes in a worker.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2022-04-26 15:19:50 +02:00
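The timeout rule described above, as an illustrative helper (names are made up):

    sub zfs_request_timeout {
        my ($timeout, $in_worker) = @_;

        my $default = $in_worker ? 60 * 60 : 5;    # 60 minutes in a worker, 5 seconds otherwise
        return $default if !defined($timeout);

        # inside a worker, never go below 5 minutes; sync API calls stay untouched
        $timeout = 5 * 60 if $in_worker && $timeout < 5 * 60;
        return $timeout;
    }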
Thomas Lamprecht
428872eb71 ceph config: minor code cleanup & comment
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-04-26 12:47:58 +02:00
Thomas Lamprecht
34fd49a69b some code style refactoring/cleanup
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-04-22 14:30:01 +02:00
Fabian Grünbichler
8c1a476be9 bump version to 7.1-2
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-04-06 13:30:21 +02:00
Thomas Lamprecht
107208bdbf disks: zfs: code indentation/style improvements
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-04-06 12:56:47 +02:00
Fabian Ebner
8009417d0d plugins: allow limiting the number of protected backups per guest
The ability to mark backups as protected broke the implicit assumption
in vzdump that remove=1, with the current number of backups at the limit
(i.e. the sum of all keep options), will result in a backup being removed.

Introduce a new storage property 'max-protected-backups' to limit the
number of protected backups per guest. Use 5 as a default value, as it
should cover most use cases, while still not having too big of a
potential overhead in many scenarios.

For external plugins that do not return the backup subtype in
list_volumes, all protected backups with the same ID will count
towards the limit.

An alternative would be to count the protected backups when pruning.
While that would avoid the need for a new property, it would break the
current semantics of protected backups being ignored for pruning. It
also would be less flexible, e.g. for PBS, it can make sense to have
both keep-all=1 and a limit for the number of protected snapshots on
the PVE side.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2022-04-06 09:47:12 +02:00
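A condensed sketch of the limit check; the 'max-protected-backups' property name comes from the commit, while the helper and field names are assumptions:

    sub check_protected_backup_limit {
        my ($scfg, $backups) = @_;    # $backups: backup list of a single guest

        my $limit = $scfg->{'max-protected-backups'} // 5;    # assumed default of 5

        my $protected = grep { $_->{protected} } @$backups;
        die "reached maximum of $limit protected backups for this guest\n"
            if $protected >= $limit;
    }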
Fabian Ebner
89a7507c7a api: file restore: use check_volume_access to restrict content type
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2022-04-01 09:24:16 +02:00
Fabian Ebner
1f37ad56f9 pvesm: extract config: add content type check
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2022-04-01 09:24:16 +02:00
Fabian Ebner
21c7b54622 check volume access: add content type parameter
Adding such a check here avoids the need to parse at the call sites in
some cases.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2022-04-01 09:24:16 +02:00
Fabian Ebner
42352a4988 check volume access: allow for images/rootdir if user has VM.Config.Disk
Listing guest images should not require Datastore.Allocate in this
case. In preparation for adding disk import to the GUI.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2022-04-01 09:24:16 +02:00
Fabian Ebner
3e1a618e34 check volume access: always allow with Datastore.Allocate privilege
Such users are supposed to be administrators of the storage, but
previously, access to backups was not allowed without also having
VM.Backup.

Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2022-04-01 09:24:16 +02:00
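A condensed sketch of the access rules touched by the surrounding commits; the real check_volume_access() in PVE/Storage.pm is more involved, so treat the paths and branches below as illustrative:

    sub check_volume_access_sketch {
        my ($rpcenv, $user, $storeid, $vtype, $ownervm) = @_;

        # storage administrators may access everything on the storage
        return if $rpcenv->check($user, "/storage/$storeid", ['Datastore.Allocate'], 1);

        if ($vtype eq 'backup') {
            $rpcenv->check($user, "/storage/$storeid", ['Datastore.AllocateSpace']);
            $rpcenv->check($user, "/vms/$ownervm", ['VM.Backup']);
        } elsif ($vtype eq 'images' || $vtype eq 'rootdir') {
            # listing guest images only needs VM.Config.Disk on the owning guest
            $rpcenv->check($user, "/vms/$ownervm", ['VM.Config.Disk']);
        } else {
            die "access to volume type '$vtype' denied\n";
        }
    }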
Fabian Ebner
f303dec6e4 pvesm: extract config: check for VM.Backup privilege
In preparation to have check_volume_access() always allow access for
users with the Datastore.Allocate privilege, so as not to automatically
give all such users permission to extract the config too.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2022-04-01 09:24:16 +02:00
Fabian Ebner
c66e0b8a0a list volumes: also return backup type for backups
Otherwise, there is no storage-agnostic way to filter by backup group.

Call it subtype, to not confuse it with content type, and to be able
to re-use it for other content types than backup, if the need ever
arises.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2022-03-16 17:42:39 +01:00
Fabian Ebner
5ef50a8262 cifs: check connection: bubble up NT_STATUS_LOGON_FAILURE
in the same manner as NT_STATUS_ACCESS_DENIED. It can be assumed to be
a configuration error, so avoid showing the generic "storage <storeid>
is not online". Reported in the community forum:
https://forum.proxmox.com/threads/storage-is-not-online-cifs.99201/post-428858

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2022-03-16 17:36:38 +01:00
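Illustrative only; the real check parses smbclient output in the CIFS plugin, but the gist is to surface both statuses instead of the generic "not online" message:

    # $out holds the smbclient error output collected by check_connection()
    if ($out =~ /(NT_STATUS_ACCESS_DENIED|NT_STATUS_LOGON_FAILURE)/) {
        die "$storeid: $1\n";    # bubble up the concrete NT status
    }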
Fabian Ebner
38431808b4 activate storage: improve error when check_connection dies
by making sure the storage ID is part of the error. This can happen
for (at least) CIFS, and GlusterFS with local server.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2022-03-16 17:36:38 +01:00
Lorenz Stechauner
18bf2e59da storage/plugin: factoring out regex for backup extension re
Signed-off-by: Lorenz Stechauner <l.stechauner@proxmox.com>
2022-03-16 17:13:59 +01:00
Lorenz Stechauner
cd461a5012 storage: rename REs for iso and vztmpl extensions
these changes make it clearer how many capture groups each
RE includes.

Signed-off-by: Lorenz Stechauner <l.stechauner@proxmox.com>
2022-03-16 17:13:59 +01:00
Fabian Ebner
726c432964 zfs: volume import: use correct format for renaming
Previously, the transport format (which currently is always 'zfs') was
passed in, resulting in subvol-disks not being renamed correctly.

Fixes: a97d3ee ("Introduce allow_rename parameter for pvesm import and storage_migrate")
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-03-03 14:33:33 +01:00
Mira Limbeck
eba7935f83 file_size_info: cast 'size' and 'used' to integer
`qemu-img info --output=json` returns the size and used values as integers in
the JSON format, but the regex match converts them to strings.
As we know they only contain digits, we can simply cast them back to integers
after the regex.

The API requires them to be integers.

Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
2022-02-21 16:07:30 +01:00
Mira Limbeck
7f30857519 fix #3894: cast 'size' and 'used' to integer
Perl's automatic conversion can lead to integers being converted to
strings, for example by matching it in a regex.

To make sure we always return an integer in the API call, add an
explicit cast to integer.

Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
2022-02-21 16:07:27 +01:00
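The pattern behind both fixes, sketched in Perl (the actual parsing of 'qemu-img info --output=json' in the plugin may differ):

    # regex captures are strings in Perl, so cast the digit-only values
    # back to integers before handing them to the API
    my ($size, $used) =
        $json =~ m/"virtual-size"\s*:\s*(\d+).*?"actual-size"\s*:\s*(\d+)/s;
    $size = int($size);
    $used = int($used);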
Fabian Grünbichler
3363fcf65e add volume_import/export_start helpers
exposing the two halves of a storage migration for usage across
cluster boundaries.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-02-09 18:52:59 +01:00
Fabian Grünbichler
7a158d0bc9 storage_migrate: pull out import/export_prepare
for re-use with remote migration, where import and export happen on
different clusters connected via a websocket instead of SSH tunnel.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-02-09 18:52:59 +01:00
Fabian Grünbichler
686b07376f storage_migrate_snapshot: skip for btrfs without snapshots
this allows migrating from btrfs to other raw+size accepting storages,
provided no snapshots exist.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-02-09 18:52:59 +01:00
Thomas Lamprecht
db3d1ef163 bump version to 7.1-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-02-04 18:08:09 +01:00
Thomas Lamprecht
c915afca7e rbd: followup code style cleanups
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-02-04 18:04:31 +01:00
Aaron Lauterer
ef2afce74a fix #1816: rbd: add support for erasure coded (EC) pools
The first step is to allocate rbd images correctly.

The metadata objects still need to be stored in a replicated pool, but
by providing the --data-pool parameter on image creation, we can place
the data objects on the erasure coded (EC) pool.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2022-02-04 17:51:48 +01:00
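A sketch of the allocation call; '--data-pool' is a real rbd option, while the 'data-pool' storage property and the command plumbing here are assumptions:

    my $cmd = ['rbd', 'create', '--size', "${size_mb}M", "$pool/$image"];
    # metadata stays in the replicated pool, data objects go to the EC pool
    push @$cmd, '--data-pool', $scfg->{'data-pool'} if $scfg->{'data-pool'};
    run_command($cmd);    # PVE::Tools::run_command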
Fabian Grünbichler
05b07a67f5 storage_migrate: pull out snapshot decision
into new top-level helper for re-use with remote migration.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-02-04 17:30:42 +01:00
Fabian Grünbichler
f78f26ae60 volname_for_storage: parse volname before calling
to allow reusing this with remote migration, where parsing of the source
volid has to happen on the source node, but this call has to happen on
the target node.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-02-04 17:30:42 +01:00
Aaron Lauterer
e7bc1f03b7 CephConfig: ensure newline in $secret and $cephfs_secret parameter
Ensure that the user-provided $secret ends in a newline. Otherwise we
will get Input/output errors from rados_connect.

For consistency and possible future proofing, also add a newline to
CephFS secrets.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
Tested-by: Fabian Ebner <f.ebner@proxmox.com>
2022-01-25 10:57:58 +01:00
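The core of the fix is a one-liner along these lines:

    # rados_connect fails with Input/output errors if the keyring/secret
    # does not end in a newline, so make sure it does
    $secret .= "\n" if $secret !~ /\n$/;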
Fabian Ebner
73dfe360dd zfs: use -r parameter when listing snapshots
Some versions of ZFS do not automatically display the child snapshots
when '-t snapshot' is used, but require '-r' to be present
additionally[1]. And in general, it's cleaner to specify the flag
explicitly.

Because of that, commit ac5c1af led to a regression[0] in the context
of ZFS over iSCSI with zfs_get_sorted_snapshot_list. Fix it by adding
the -r flag again.

The volume_snapshot_info function is currently only used in the
context of replication and that requires a local ZFS pool, but it
would be affected by the same issue if it is ever used in the context
of ZFS over iSCSI, so also add -r there.

[0]: https://forum.proxmox.com/threads/102683/
[1]: https://forum.proxmox.com/threads/102683/post-442577

Fixes: 8c20d8a ("plugin: add volume_snapshot_info function")
Fixes: ac5c1af ("zfspool: add zfs_get_sorted_snapshot_list helper")
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2022-01-10 13:33:31 +01:00
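For illustration, a command along these lines; the exact output fields and sort options used by the plugin may differ:

    # '-r' makes sure child snapshots are listed even on ZFS versions
    # that do not include them automatically with '-t snapshot'
    my $cmd = ['zfs', 'list', '-H', '-p', '-r', '-t', 'snapshot',
               '-o', 'name,guid,creation', '-s', 'creation', $dataset];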
Fabian Ebner
b4616e5c4a lvm thin: add missing newline to error message
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-18 11:18:03 +01:00
Fabian Ebner
904f06a82e pbs: update attribute: cleaner error message if not supported
Reported-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-12 16:53:27 +01:00
Fabian Grünbichler
c7aa1e28dd bump version to 7.0-15
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-11-10 14:25:26 +01:00
Fabian Ebner
6fbffac0a3 lvm thin: don't assume that a thin pool and its volumes are active
There are cases where autoactivation can fail, as reported in the
community forum [0]. And it could also be that a volume was
deactivated by something outside of our control.

It doesn't seem strictly necessary to activate the thin pool itself
(creating/removing/activating LVs within the pool still works if it's
not active), but usage information is not reported as long as neither
the pool nor any of its LVs are active. So activate the pool to get
that information and to be able to use the flag in status(); failing
to activate it should also serve as a good indicator that there's a
problem with the pool.

Before activating, check the (cached) lv_state from lvm_list_volumes.
It's necessary to update the cache in activate_storage, because the
flag is re-used in status(). Also update it for other (de)activations
to be more future-proof.

[0]: https://forum.proxmox.com/threads/local-lvm-not-available-after-kernel-update-on-pve-7.97406

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-10 14:23:54 +01:00
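A sketch of the activation logic; the 'lv_state' field name loosely follows lvm_list_volumes and should be treated as illustrative:

    my $lv = $lvs->{$vg}->{$thinpool};
    if ($lv && $lv->{lv_state} ne 'a') {
        # activate the thin pool so usage information shows up in status()
        run_command(['lvchange', '-ay', "$vg/$thinpool"]);
        $lv->{lv_state} = 'a';    # keep the cached state consistent
    }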
Fabian Ebner
2668a86719 lvm thin: status: code cleanup
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-10 14:18:28 +01:00
Fabian Ebner
cde43c4880 api: disks: delete: add flag for cleaning up storage config
Update node restrictions to reflect that the storage is not available
anymore on the particular node. If the storage was only configured for
that node, remove it altogether.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>

slight style fixup

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-11-10 12:35:25 +01:00
Fabian Ebner
f81908eb58 api: disks: delete: add flag for wiping disks
For ZFS and directory storages, clean up the whole disk when the
layout is the usual one, to avoid leftovers.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-10 12:35:25 +01:00
Fabian Ebner
26082b7daf diskmanage: add helper for udev workaround
to avoid duplication. Current callers pass along at least one device,
but anticipate future callers that might call with the empty list. Do
nothing in that case, rather than triggering everything.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-10 12:35:25 +01:00
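A sketch of such a helper, assuming udevadm is used for the workaround:

    sub trigger_udev_workaround {
        my (@devs) = @_;
        return if !scalar(@devs);    # empty list: do nothing at all

        eval {
            run_command(['udevadm', 'trigger', @devs]);
            run_command(['udevadm', 'settle']);
        };
        warn "udev workaround failed: $@" if $@;
    }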
Fabian Ebner
a83d8eb178 api: disks: add DELETE endpoint for directory, lvm, lvmthin, zfs
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-10 12:35:25 +01:00
Fabian Ebner
a510449e2b api: list thin pools: add volume group to properties
So that DELETE can be called using only information from GET.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-10 12:35:25 +01:00