mirror of git://git.proxmox.com/git/pve-storage.git synced 2025-01-11 05:18:01 +03:00
Commit Graph

1564 Commits

Author SHA1 Message Date
Fabian Ebner
0153334270 api: content: correctly handle warnings status for delayed task
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-23 22:21:55 +02:00
Thomas Lamprecht
347e677b78 btrfs: drop qcow2 and vmdk for now
The web interface always prefers qcow2 once it is in the list, which is
itself a bug, as the format preferred by the backend should take
precedence. Still, vmdk support should not be extended, since we can
only cope with it in a limited way anyway, and both formats can easily
be enabled later if there is actual user demand for them. Disabling is
never that easy, at least if one cares about backward compatibility.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-23 20:22:52 +02:00
Wolfgang Bumiller
d3c5cf2487 btrfs: make NOCOW optional
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-06-23 20:20:31 +02:00
Wolfgang Bumiller
a0e3e224ea btrfs: add 'btrfs' import/export format
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-06-23 20:20:31 +02:00
Wolfgang Bumiller
3cc29a0487 bump storage API: update import/export methods
Bumps APIVER to 9 and resets APIAGE to zero.

The import methods (volume_import, volume_import_formats):

These additionally get the '$snapshot' parameter which is
already present on the export side as an informational piece
to know which of the snapshots is the *current* one.
This parameter is inserted *in the middle* of the current
parameters, so the import & export format methods now have
the same signatures.
The current "disk" state will be set to this snapshot.
This, too, is required for our btrfs implementation.
  `volume_import_formats` can obviously not make much
*use* of this parameter, but it'll still be useful to know
that the information is actually available in the import
call, so its presence will be checked in the btrfs
implementation.

Currently this is intended to be used for btrfs send/recv
support, which in theory could also get additional metadata
similar to how we do the "tar+size" format, however, we
currently only really use this within this repository in
storage_migrate() which has this information readily
available anyway.
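
A rough sketch of how the import-side signatures might line up after this
change; the exact parameter order below is an assumption based on the
description above, not copied from the plugin API:

    # hypothetical stubs illustrating the added $snapshot parameter
    sub volume_import_formats {
        my ($class, $scfg, $storeid, $volname, $snapshot, $base_snapshot, $with_snapshots) = @_;
        # same signature as volume_export_formats now
        return ('raw+size');
    }

    sub volume_import {
        my ($class, $scfg, $storeid, $fh, $volname, $format, $snapshot,
            $base_snapshot, $with_snapshots, $allow_rename) = @_;
        # $snapshot names the snapshot that becomes the *current* disk state
        die "not implemented in this sketch\n";
    }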

On the export side (volume_export, volume_export_formats):

The `$with_snapshots` option is now "defined" to be an
ordered array of snapshots to include, as a hint for
storages which need this. (As of the next commit this is
only btrfs, and only when also specifying a base snapshot,
which is a case we can currently not run into except on the
command line interface.)
  The current providers of the `with_snapshot` option will
still treat it as a boolean (since eg. for ZFS you cannot
really "skip" snapshots AFAIK).
  This is mainly intended for storages which do not have a
strong association between snapshots and the originals, or
an ordering (e.g. btrfs and lvm-thin allow creating
arbitrary snapshot trees, and with btrfs you can even
create a "circular" connection between subvolumes; we could
also consider reflink-based copies on xfs to be snapshots
in the future).
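
A small sketch of how an exporting storage might interpret the redefined
option; accepting both the old boolean form and the new array form is an
assumption about how old callers and new storages could coexist:

    # hypothetical helper: normalize the $with_snapshots option
    sub snapshot_list_from_option {
        my ($with_snapshots) = @_;
        return [] if !$with_snapshots;                               # falsy: no snapshots
        return $with_snapshots if ref($with_snapshots) eq 'ARRAY';   # new: ordered list
        return undef;   # plain truthy value: old boolean meaning "include all"
    }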

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-06-23 20:20:31 +02:00
Wolfgang Bumiller
af50c2e671 add BTRFS storage plugin
This is mostly the same as a directory storage, with three major
differences:

* 'subvol' volumes are actual btrfs subvolumes and therefore
  allow snapshots
* 'raw' files are placed *into* a subvolume and therefore
  also allow snapshots; the raw file for volume
  `btrstore:100/vm-100-disk-1.raw` can be found under
  `$path/images/100/vm-100-disk-1/disk.raw`
* in both cases, snapshots add an '@name' suffix to the
  subvolume's directory name, so snapshot 'foo' of the above
  would be found under
  `$path/images/100/vm-100-disk-1@foo/disk.raw`
  or for format "subvol":
  `$path/images/100/subvol-100-disk-1.subvol@foo`
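
A minimal sketch of the path mapping described in the list above; the helper
name and its string handling are illustrative only, not the plugin's actual
code:

    # hypothetical: map (vmid, volume name, optional snapshot) to a path
    sub example_btrfs_path {
        my ($storage_path, $vmid, $name, $snapname) = @_;
        my $suffix = defined($snapname) ? "\@$snapname" : '';
        if ($name =~ /^(.*)\.raw$/) {
            # a raw image lives inside its own subvolume
            return "$storage_path/images/$vmid/$1$suffix/disk.raw";
        }
        # 'subvol' format: the subvolume itself is the volume
        return "$storage_path/images/$vmid/$name$suffix";
    }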

Note that qgroups aren't included in btrfs-send streams;
therefore, for now, we will only be using *unsized* subvolumes
for containers and place a regular raw+ext4 file for sized
containers.
We could extend the import/export stream format to include
the information at the front (similar to how we do the
"tar+size" format), but we would need to include the size of
all the contained snapshots as well, since they can
technically change. (But before enabling quotas we should do
some performance testing on bigger file systems with multiple
snapshots, as there are quite a few reports of the fs slowing
down considerably in such scenarios.)

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-06-23 20:20:31 +02:00
Lorenz Stechauner
bba10cf4af factoring out regex for vztmpl
stores the regex definition in PVE::Storage.

One test had to be adapted because it tested obsolete behavior.
Namely: it expected vztmpl files to only end with .tar.gz, but the new
regex also includes .tar.xz; there is nothing against allowing .tar.xz
files as vztmpl files.
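
A sketch of what such a shared pattern could look like; the variable name and
exact pattern are assumptions for illustration, not necessarily the definition
that landed in PVE::Storage:

    # hypothetical consolidated pattern for container template archives
    our $VZTMPL_RE = qr/\.tar\.(gz|xz)$/;

    sub looks_like_vztmpl {
        my ($filename) = @_;
        return $filename =~ $VZTMPL_RE ? 1 : 0;
    }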

Signed-off-by: Lorenz Stechauner <l.stechauner@proxmox.com>
2021-06-23 20:19:09 +02:00
Thomas Lamprecht
339a4eb3c0 file size info: return early if we cannot parse json
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-23 08:28:48 +02:00
Thomas Lamprecht
d4e00f2bd5 file/volume size info: add actual errors to untaint messages
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-23 08:28:48 +02:00
Stoiko Ivanov
ac598d851e plugins: untaint volume_size_info returns
The size returned by volume_size_info is used for creating the new
destination image in PVE::QemuServer::clone_disk (and probably
elsewhere). In certain cases the return values are tainted - they are
obtained by a run_command call, and depending on the format and length
of the parsed output they can still carry the taint attribute.

One example of a tainted return has been reported in our
community-forum:
https://forum.proxmox.com/threads/cannot-clone-vm-or-move-disk-with-more-than-13-snapshots.89628/

A qcow2 image with 13 snapshots generates an output > 4k in length from
`qemu-img info --output=json`, which in turn causes the output to be
considered tainted.

This patch untaints the returns where applicable. The other
storage-plugins are not affected:
* LVMPlugin returns a single number and a newline (thus gets untainted
  by run_command)
* RBDPlugin untaints the complete json before decoding
* ZFSPoolplugin and ISCSIDirectPlugin explicitly untaint their
  returns.
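
For reference, the usual Perl untainting idiom extracts the wanted value
through a regex capture; a minimal sketch (the field handling is illustrative,
not the plugin's exact code):

    # hypothetical: untaint a numeric field parsed from
    # `qemu-img info --output=json` via a regex capture
    sub untaint_size {
        my ($value) = @_;
        die "unexpected size value\n" if !defined($value) || $value !~ /^(\d+)$/;
        return $1;   # captures from tainted data are untainted
    }

    # my $info = decode_json($json_output);
    # my $size = untaint_size($info->{'virtual-size'});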

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2021-06-23 08:28:48 +02:00
Thomas Lamprecht
ffc31266da tree-wide: fix typos with codespell
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-23 08:28:48 +02:00
Fabian Grünbichler
5b955999b9 pbs: fix typo
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-06-22 13:44:06 +02:00
Thomas Lamprecht
046b64d40b bump version to 7.0-3
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-21 11:30:10 +02:00
Fabian Ebner
03c487e553 config: prevent empty content list when content type 'none' is not supported
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-06-21 11:21:45 +02:00
Fabian Ebner
d96b789aed vdisk_list: only scan storages with the correct content type(s)
The enabled check in the lower loop is now redundant and can be removed.

If storeid is provided, initialize the result hash accordingly, mainly for
backwards compatibility (needed by a caller in pve-manager's Ceph/Pools.pm and
the migration code in pve-container and qemu-server), but it also is less
surprising in general.

Remaining vdisk_list users that do not specify a content type are:
    1. pve-manager's Pool/Ceph.pm, but the content type for RBD can only be
       rootdir and images, so the storage is scanned (if enabled, same as
       before).
    2. pve-container migration
    3. qemu-server migration
For the latter two, it's planned to enforce content type, so the change is fine
too.

This also means that iscsi(direct) plugins with content type 'none', i.e.
"use LUNs directly", no longer return the list of images, but that was
rather a bug anyway, as those are not virtual disks then:
    0.0.0.scsi-36001405b8f2772e13a04b8e9390db13d
All of the remaining callers not using content types (see above) are fine with
that change too.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-06-21 11:21:45 +02:00
Fabian Ebner
6a4545601b lvm: volume import: handle worker returned by free_image
Only affects LVM storages with 'saferemove 1' where the import fails at a
rather advanced stage. Previously, in such cases, the volume renamed by
free_image (del-vm-XYZ-disk-N) would be left over.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-06-21 09:38:03 +02:00
Fabian Ebner
7ae13a34d2 pbs: free image: explicitly return undef
Storage.pm's vdisk_free interprets truthy return values as worker subs, so be
explicit about returning undef here. Not an issue at the moment, because
run_client_command already returns undef, but better be safe than sorry.
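
A sketch of the calling convention described here; apart from the convention
itself (a truthy return value is treated as a worker sub), the surrounding
code is illustrative:

    # hypothetical caller-side handling, roughly what vdisk_free might do;
    # $plugin, $storeid, $scfg, $volname, $isBase, $format set up elsewhere
    my $cleanup_worker = $plugin->free_image($storeid, $scfg, $volname, $isBase, $format);

    if ($cleanup_worker) {
        # truthy return value is taken to be a worker sub, e.g. LVM with
        # 'saferemove 1' zeroing out the renamed del-vm-XYZ-disk-N volume
        my $rpcenv = PVE::RPCEnvironment::get();
        my $authuser = $rpcenv->get_user();
        $rpcenv->fork_worker('imgdel', undef, $authuser, $cleanup_worker);
    }
    # plugins that have nothing left to do must return undef explicitly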

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-06-21 09:38:03 +02:00
Thomas Lamprecht
ead6be934d api: status: sort index and add missing "file-restore"
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-21 09:32:55 +02:00
Thomas Lamprecht
823e8afe72 plugin loader: text-width cleanup
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-18 18:33:20 +02:00
Fabian Ebner
d3c3c114c3 postinst: remove old file if new one is identical
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-06-17 11:12:15 +02:00
Fabian Ebner
cd48e1632c postinst: avoid spawning subshell
which makes the continue not behave as intended.

Reported by shellcheck: SC2106: This [i.e. continue] only exits the subshell
caused by the (..) group

Also factor out long message for readability.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-06-17 11:12:15 +02:00
Thomas Lamprecht
f985f33afd api: content/delete: die with newline to avoid adding file-context
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-16 19:24:38 +02:00
Fabian Ebner
cda32b2361 cephfs: update reminder for systemd_netmount removal
Commit d9ece228fb introduced the workaround of
using systemd units and 25e222ca0d re-used the
functionality for fuse-mounts too.

The latter commit suggests switching to mount.fuse.ceph for the '_netdev'
option, but that doesn't seem to work:

 root@pve701 / # mount -t fuse.ceph 10.10.10.11,10.10.10.12,10.10.10.13:/ /mnttest/fuse -o 'ceph.id=admin,ceph.keyfile=/etc/pve/priv/ceph/cephfs.secret,ceph.conf=/etc/pve/ceph.conf,_netdev'
 ceph-fuse[20729]: starting ceph client
 2021-06-15T14:22:00.631+0200 7f995f878080 -1 init, newargv = 0x55e09fc11a40 newargc=11
 ceph-fuse[20729]: starting fuse
 root@pve701 / # mount -t ceph 10.10.10.11,10.10.10.12,10.10.10.13:/ /mnttest/normal -o 'name=admin,secretfile=/etc/pve/priv/ceph/cephfs.secret,conf=/etc/pve/ceph.conf,_netdev'
 root@pve701 / # mount | grep mnttest
 ceph-fuse on /mnttest/fuse type fuse.ceph-fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
 10.10.10.11,10.10.10.12,10.10.10.13:/ on /mnttest/normal type ceph (rw,relatime,name=admin,secret=<hidden>,acl,_netdev)

Also, the return value is not propagated by mount.fuse.ceph, meaning the output
would need to be parsed...

 root@pve701 ~ # mount -t fuse.ceph 10.10.10.11,10.10.10.12,10.10.10.13:/ /mnttest/fuse -o 'ceph.id=admin,ceph.keyfile=/etc/pve/priv/ceph/cephfs.secret,ceph.conf=/etc/pve/ceph.conf,_netdev'
 2021-06-15T14:42:56.326+0200 7f634edae080 -1 init, newargv = 0x560cdb5e0a40 newargc=11
 ceph-fuse[34480]: starting ceph client
 fuse: mountpoint is not empty
 fuse: if you are sure this is safe, use the 'nonempty' mount option
 ceph-fuse[34480]: fuse failed to start
 2021-06-15T14:42:56.338+0200 7f634edae080 -1
 fuse_mount(mountpoint=/mnttest/fuse) failed.
 Mount failed with status code: 5
 root@pve701 ~ # echo $?
 0

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-06-16 13:20:35 +02:00
Fabian Ebner
9531988d5e cephfs: revert safe-guard check for Luminous
It's necessary to be on Nautilus before upgrading to 7.x, so the check is no
longer needed. See commit e54c3e3347. It didn't
cleanly revert, because there were cleanups made afterwards.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-06-16 13:20:35 +02:00
Fabian Ebner
3a3ff9d52b config: add backup content type to default local storage
which is used if there is no ('dir'-type) 'local' entry. Storage configurations
made by the installer also support backups for the 'local' storage, and the
'prune-backups' parameter is not really useful otherwise.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-06-16 13:20:35 +02:00
Fabian Ebner
bbadd1659d config: mention that maxfiles is deprecated
Don't add an explicit deprecation warning on parsing (yet); this is already
done in the pve6to7 script. Also, automatic conversion to 'prune-backups'
happens when the section config is read, so over time fewer users should be
affected. Postpone an explicit warning/dropping the parameter to a future
major release.

Also switch the setting for the default 'local' storage to 'prune-backups'.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-06-16 13:20:35 +02:00
Fabian Ebner
1a4ab884e8 postinst: move cifs credential files into subdirectory upon update
and drop the compat code.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-06-16 13:20:35 +02:00
Wolfgang Bumiller
d7f6f85ea0 fix find_free_disk_name invocations
The interface takes the storeid now, not the image dir.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-06-15 14:36:12 +02:00
Fabian Ebner
883c811f7f prune backups: activate storage
which also checks whether the storage is even enabled. VZDump jobs already
activate the storage, but more direct calls via API/CLI didn't do so yet.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-06-15 10:11:17 +02:00
Fabian Ebner
f7a95153d6 diskmanage: fix determining array length
$#* is the last index, not the length.
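
For reference, a tiny example of the Perl pitfall being fixed (variable names
made up):

    my @partitions = ('sda1', 'sda2', 'sda3');
    my $last_index = $#partitions;         # 2 -- last index, NOT the count
    my $count      = scalar @partitions;   # 3 -- the actual length
    # with an array reference, the postfix form $aref->$#* also gives the last index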

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-06-15 10:10:33 +02:00
Fabian Ebner
0e30b3121d api: get rid of moved 'usb' call
pve-manager commit bd328734deb1dcea296858bb38d085e392adb99e changed the frontend
to use the new call.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-06-08 15:19:36 +02:00
Thomas Lamprecht
8bceafc65c bump version to 7.0-2
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-02 16:32:23 +02:00
Thomas Lamprecht
d938178298 disks: fixup join usage
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-02 14:19:53 +02:00
Thomas Lamprecht
839afff896 disks: wipe blockdev: pass all child partitions to wipefs
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-02 13:13:26 +02:00
Thomas Lamprecht
fa6d05ab24 disks: wipe blockdev: improve variable locality/readability
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-02 13:12:57 +02:00
Thomas Lamprecht
70dc70984a disks: factor out stripping of /dev and cleanup vicinity
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-02 13:10:10 +02:00
Fabian Ebner
2829e6a853 api: add wipedisk call
Try to detect active mounts and holders early, because it's cheap. The wipefs
command in the worker will detect even more situations where wiping alone is
not enough for the device to show up as unused, or could otherwise be
problematic.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-06-02 11:56:51 +02:00
Fabian Ebner
cb057e21c5 diskmanage: add has_holder method
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-06-02 11:56:51 +02:00
Fabian Ebner
3bf7f8891b diskmanage: add is_mounted method
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-06-02 11:56:51 +02:00
Fabian Ebner
7e14102a4b diskmanage: factor out mounted_blockdevs helper
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-06-02 11:56:51 +02:00
Fabian Ebner
262ad7a92e diskmanage: add wipe_blockdev method
based on the wipe_disks method from pve-manager's Ceph/Tools.pm with the
following main differences:
    * use wipefs to wipe labels first (to avoid sgdisk complaining about the
      backed up GPT structure on a subsequent GPT initialization)
    * only take one device as an argument
    * do not use an absolute path for 'dd'
    * die if one of the commands fails

The wipefs command performs checks and complains about e.g. mounted or active
devices. One could supply --force to wipefs, but in many such situations it
does not work as expected, because the device would still be detected as in-use
afterwards, and further manual steps would be needed.
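
A rough sketch of the approach described above (wipefs first, then overwrite
the start of the device); the function name and the exact dd parameters are
assumptions, not necessarily what the new method does:

    use PVE::Tools qw(run_command);

    # hypothetical: wipe a single block device, dying if any step fails
    sub example_wipe_blockdev {
        my ($devpath) = @_;
        # remove filesystem/partition-table signatures first, so a later
        # GPT initialization does not trip over a backed-up GPT structure
        run_command(['wipefs', '--all', $devpath], errmsg => "wipefs $devpath failed");
        # then clear the beginning of the device
        run_command(['dd', 'if=/dev/zero', "of=$devpath", 'bs=1M', 'count=200', 'conv=fdatasync'],
            errmsg => "clearing $devpath failed");
    }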

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-06-02 11:56:51 +02:00
Thomas Lamprecht
6f966017d0 bump version to 7.0-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-12 13:14:32 +02:00
Thomas Lamprecht
522cd32738 remove some more DRBD references
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-12 13:14:17 +02:00
Thomas Lamprecht
58a09dcad5 bump version to 7.0-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-12 13:01:45 +02:00
Thomas Lamprecht
015984bbbe d/control: bump standards version
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-10 09:11:14 +02:00
Thomas Lamprecht
f47d61f119 d/control: increase compat level to 12
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-10 09:10:57 +02:00
Thomas Lamprecht
7a4d2928d4 d/control: update dependencies
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-10 09:10:38 +02:00
Thomas Lamprecht
dbf11c2f05 remove internal, unmaintained, DRBD plugin
This was never marked stable and the recommended one is the external
version, which is maintained by linbit themselves.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-10 09:08:22 +02:00
Thomas Lamprecht
a1e09e496e iscsi: code cleanup
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-04 12:02:47 +02:00
Fabian Ebner
9177cc2eda clone image: specify base format option with qemu-img
and avoid a warning. Auto-detecting the format of the base volume is
deprecated. See commit d9f059aa6cfccefaffa3532556e966df4a99ece2 in qemu for
more information.
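
A sketch of what passing the base format explicitly could look like when
building the qemu-img command; the surrounding variables are illustrative,
only -F itself is qemu-img's documented backing-format option:

    use PVE::Tools qw(run_command);

    # hypothetical: create a linked clone, passing the backing format explicitly
    my ($base_path, $base_format, $new_path) = ('/path/base.qcow2', 'qcow2', '/path/clone.qcow2');
    my $cmd = ['/usr/bin/qemu-img', 'create', '-b', $base_path,
               '-F', $base_format,   # explicit base format avoids the deprecation warning
               '-f', 'qcow2', $new_path];
    run_command($cmd, errmsg => "unable to create linked clone");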

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-05-03 13:07:02 +02:00