the web-interface always prefers qcow2 once that is in the list, which
is a bug on its own, as the format preferred by the backend should be
preferred too. Still, vmdk support should not be extended; we can only
cope with it in a limited way anyway, and both formats can easily be
enabled later if there is actual user request for it.
Disabling is never that easy, at least if one cares about backward
compat.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Bumps APIVER to 9 and resets APIAGE to zero.
The import methods (volume_import, volume_import_formats):
These additionally get the '$snapshot' parameter which is
already present on the export side as an informational piece
to know which of the snapshots is the *current* one.
This parameter is inserted *in the middle* of the current
parameters, so the import & export format methods now have
the same signatures.
The current "disk" state will be set to this snapshot.
This, too, is required for our btrfs implementation.
`volume_import_formats` can obviously not make much
*use* of this parameter, but it'll still be useful to know
that the information is actually available in the import
call, so its presence will be checked in the btrfs
implementation.
Currently this is intended to be used for btrfs send/recv support,
which in theory could also get additional metadata similar to how we
do the "tar+size" format. However, we currently only really use this
within this repository, in storage_migrate(), which has this
information readily available anyway.
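For illustration, the resulting import signature would look roughly
like the following, with '$snapshot' slotted in right after the format
(parameter names here are meant as a sketch, not a verbatim quote of
the code):

    sub volume_import {
        my ($class, $scfg, $storeid, $fh, $volname, $format, $snapshot,
            $base_snapshot, $with_snapshots, $allow_rename) = @_;
        ...
    }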
On the export side (volume_export, volume_export_formats):
The `$with_snapshots` option is now "defined" to be an
ordered array of snapshots to include, as a hint for
storages which need this. (As of the next commit this is
only btrfs, and only when also specifying a base snapshot,
which is a case we can currently not run into except on the
command line interface.)
The current providers of the `$with_snapshots` option will still
treat it as a boolean (since e.g. for ZFS you cannot really "skip"
snapshots AFAIK).
This is mainly intended for storages which do not have a strong
association between snapshots and their originals, or an ordering
(e.g. btrfs and lvm-thin allow creating arbitrary snapshot trees, and
with btrfs you can even create a "circular" connection between
subvolumes; we could perhaps also consider treating reflink-based
copies on XFS as snapshots in the future).
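Illustrative only (not the actual plugin code): an export
implementation that cares about ordering can accept both the old
boolean form and the new array form of $with_snapshots:

    sub volume_export {
        my ($class, $scfg, $storeid, $fh, $volname, $format, $snapshot,
            $base_snapshot, $with_snapshots) = @_;
        # ordered list if the caller passed one, otherwise keep
        # treating the value as a plain boolean as before
        my @snapshots = ref($with_snapshots) eq 'ARRAY' ? @$with_snapshots : ();
        ...
    }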
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
This is mostly the same as a directory storage, with a few major
differences:
* 'subvol' volumes are actual btrfs subvolumes and therefore
allow snapshots
* 'raw' files are placed *into* a subvolume and therefore
also allow snapshots; the raw file for volume
`btrstore:100/vm-100-disk-1.raw` can be found under
`$path/images/100/vm-100-disk-1/disk.raw`
* in both cases, snapshots add an '@name' suffix to the
subvolume's directory name, so snapshot 'foo' of the above
would be found under
`$path/images/100/vm-100-disk-1@foo/disk.raw`
or for format "subvol":
`$path/images/100/subvol-100-disk-1.subvol@foo`
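A hypothetical helper (names made up, not taken from the plugin)
showing how such a path for a raw volume could be composed:

    sub raw_snapshot_path {
        my ($storage_path, $vmid, $diskname, $snap) = @_;
        # $diskname is e.g. 'vm-100-disk-1' (without the .raw extension)
        my $subvol = "$storage_path/images/$vmid/$diskname";
        $subvol .= "\@$snap" if defined($snap); # snapshots get the '@<name>' suffix
        return "$subvol/disk.raw"; # the raw file lives inside the subvolume
    }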
Note that qgroups aren't included in btrfs-send streams, therefore,
for now, we will only be using *unsized* subvolumes for containers and
place a regular raw+ext4 file for sized containers.
We could extend the import/export stream format to include the
information at the front (similar to how we do the "tar+size" format),
but we would need to include the size of all the contained snapshots
as well, since they can technically change. (Before enabling quotas we
should also do some performance testing on bigger file systems with
multiple snapshots, as there are quite a few reports of the fs slowing
down considerably in such scenarios.)
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
stores the regex definition in PVE::Storage.
One test had to be adapted because it tested obsolete behavior: it
expected vztmpl files to only end with .tar.gz, but the new regex also
allows .tar.xz, and there is nothing against allowing .tar.xz files as
vztmpl files.
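For illustration, such a pattern could look roughly like the following
(the variable name is made up, the actual definition in PVE::Storage
may differ):

    our $vztmpl_re = qr/\.tar\.(?:gz|xz)$/;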
Signed-off-by: Lorenz Stechauner <l.stechauner@proxmox.com>
the size returned by volume_size_info is used for creating the new
destination image in PVE::QemuServer::clone_disk (and probably
elsewhere). In certain cases the return values are tainted - they are
obtained by a run_command call and depending on the format and length
of the parsed output can still have their tainted attribute.
One example of a tainted return has been reported in our
community-forum:
https://forum.proxmox.com/threads/cannot-clone-vm-or-move-disk-with-more-than-13-snapshots.89628/
A qcow2 image with 13 snapshots generates output > 4k in length from
`qemu-img info --output=json`, which in turn causes the output to be
considered tainted.
This patch untaints the returns where applicable. The other
storage-plugins are not affected:
* LVMPlugin returns a single number and a newline (thus gets untainted
by run_command)
* RBDPlugin untaints the complete json before decoding
* ZFSPoolPlugin and ISCSIDirectPlugin explicitly untaint their
returns.
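For reference, the usual Perl idiom for untainting such a value is to
pass it through a regex capture, roughly like this (illustrative only,
$value stands in for the parsed qemu-img output):

    my ($untainted_size) = $value =~ /^(\d+)$/
        or die "unexpected size value from qemu-img info\n";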
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
The enabled check in the lower loop is now redundant and can be removed.
If storeid is provided, initialize the result hash accordingly, mainly
for backwards compatibility (needed by a caller in pve-manager's
Ceph/Pools.pm and the migration code in pve-container and qemu-server),
but it is also less surprising in general.
Remaining vdisk_list users that do not specify a content type are:
1. pve-manager's Pool/Ceph.pm, but the content type for RBD can only be
rootdir and images, so the storage is scanned (if enabled, same as
before).
2. pve-container migration
3. qemu-server migration
For the latter two, it's planned to enforce content type, so the change is fine
too.
This also means that for the iscsi(direct) plugins with content type
'none', i.e. "use LUNs directly", the list of images is not returned
anymore. That was rather a bug anyway, as entries like the following
are not virtual disks:
0.0.0.scsi-36001405b8f2772e13a04b8e9390db13d
All of the remaining callers not using content types (see above) are fine with
that change too.
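A hypothetical call site could then look like the following (the exact
name and position of the content type parameter are an assumption
here):

    my $images = PVE::Storage::vdisk_list($cfg, $storeid, $vmid, undef, 'images');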
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
only affects LVM storages with 'saferemove 1' where the import fails at a rather
advanced stage. Previously in such cases, the renamed (by free_image) volume
del-vm-XYZ-disk-N would be left over.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Storage.pm's vdisk_free interprets truthy return values as worker subs, so be
explicit about returning undef here. Not an issue at the moment, because
run_client_command already returns undef, but better be safe than sorry.
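In other words, roughly (sketch only, not the literal diff):

    sub free_image {
        my ($class, $storeid, $scfg, $volname, @rest) = @_;
        # ... actual cleanup via run_client_command ...
        return undef; # a truthy return value would be treated as a worker sub
    }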
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
which makes the continue not behave as intended.
Reported by shellcheck: SC2106: This [i.e. continue] only exits the subshell
caused by the (..) group
Also factor out the long message for readability.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Commit d9ece228fb introduced the workaround with
using systemd units and 25e222ca0d re-used the
functionality for fuse-mounts too.
The latter commit suggests switching to mount.fuse.ceph for the
'_netdev' option, but that doesn't seem to work:
root@pve701 / # mount -t fuse.ceph 10.10.10.11,10.10.10.12,10.10.10.13:/ /mnttest/fuse -o 'ceph.id=admin,ceph.keyfile=/etc/pve/priv/ceph/cephfs.secret,ceph.conf=/etc/pve/ceph.conf,_netdev'
ceph-fuse[20729]: starting ceph client
2021-06-15T14:22:00.631+0200 7f995f878080 -1 init, newargv = 0x55e09fc11a40 newargc=11
ceph-fuse[20729]: starting fuse
root@pve701 / # mount -t ceph 10.10.10.11,10.10.10.12,10.10.10.13:/ /mnttest/normal -o 'name=admin,secretfile=/etc/pve/priv/ceph/cephfs.secret,conf=/etc/pve/ceph.conf,_netdev'
root@pve701 / # mount | grep mnttest
ceph-fuse on /mnttest/fuse type fuse.ceph-fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
10.10.10.11,10.10.10.12,10.10.10.13:/ on /mnttest/normal type ceph (rw,relatime,name=admin,secret=<hidden>,acl,_netdev)
Also, the return value is not propagated by mount.fuse.ceph, meaning the output
would need to be parsed...
root@pve701 ~ # mount -t fuse.ceph 10.10.10.11,10.10.10.12,10.10.10.13:/ /mnttest/fuse -o 'ceph.id=admin,ceph.keyfile=/etc/pve/priv/ceph/cephfs.secret,ceph.conf=/etc/pve/ceph.conf,_netdev'
2021-06-15T14:42:56.326+0200 7f634edae080 -1 init, newargv = 0x560cdb5e0a40 newargc=11
ceph-fuse[34480]: starting ceph client
fuse: mountpoint is not empty
fuse: if you are sure this is safe, use the 'nonempty' mount option
ceph-fuse[34480]: fuse failed to start
2021-06-15T14:42:56.338+0200 7f634edae080 -1
fuse_mount(mountpoint=/mnttest/fuse) failed.
Mount failed with status code: 5
root@pve701 ~ # echo $?
0
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
It's necessary to be on Nautilus before upgrading to 7.x, so the check
is no longer needed. See commit e54c3e3347. It didn't revert cleanly,
because cleanups were made afterwards.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
which is used if there is no ('dir'-type) 'local' entry. Storage configurations
made by the installer also support backups for the 'local' storage, and the
'prune-backups' parameter is not really useful otherwise.
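For illustration, the built-in default section could then look roughly
like this (the concrete prune options shown here are an assumption,
not a quote from the patch):

    dir: local
        path /var/lib/vz
        content iso,vztmpl,backup
        prune-backups keep-all=1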
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Don't add an explicit deprecation warning on parsing (yet); this is
already done in the pve6to7 script. Also, automatic conversion to
'prune-backups' happens when the section config is read, so over time
fewer users should be affected. Postpone the explicit warning and
dropping of the parameter to a future major release.
Also switch the setting for the default 'local' storage to 'prune-backups'.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
which also checks whether the storage is even enabled. VZDump jobs already
activate the storage, but more direct calls via API/CLI didn't do so yet.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Try to detect active mounts and holders early, because it's cheap. The wipefs
command in the worker will detect even more situations where wiping alone is
not enough for the device to show up as unused, or could otherwise be
problematic.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
based on the wipe_disks method from pve-manager's Ceph/Tools.pm with the
following main differences:
* use wipefs to wipe labels first (to avoid sgdisk complaining about the
backed up GPT structure on a subsequent GPT initialization)
* only take one device as an argument
* do not use an absolute path for 'dd'
* die if one of the commands fails
The wipefs command makes checks and complains about e.g. mounted or
active devices. One could supply --force to wipefs, but in many such
situations it does not work as expected, because the device would
still be detected as in-use afterwards, and further manual steps would
be needed.
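A sketch of the resulting sequence, assuming run_command is used and
the helper takes a single device path (details may differ from the
actual code):

    sub wipe_blockdev {
        my ($devpath) = @_;
        run_command(['wipefs', '--all', $devpath],
            errmsg => "wipefs of '$devpath' failed");
        # zero the start of the device to clear any remaining metadata;
        # the 200 MiB count mirrors the old pve-manager helper
        run_command(['dd', 'if=/dev/zero', "of=$devpath", 'bs=1M',
            'conv=fdatasync', 'count=200'],
            errmsg => "dd to '$devpath' failed");
    }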
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
This was never marked stable and the recommended one is the external
version, which is maintained by linbit themselves.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
and avoid a warning. Auto-detecting the format of the base volume is
deprecated. See commit d9f059aa6cfccefaffa3532556e966df4a99ece2 in
qemu for more information.
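For illustration, the change boils down to passing '-F' along with
'-b' when creating the new image (paths and formats here are
placeholders):

    my $cmd = ['/usr/bin/qemu-img', 'create', '-b', $base_path,
        '-F', $base_format, '-f', 'qcow2', $new_path];
    run_command($cmd);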
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>