try to comment why, not what - the what is already described well enough by
the code here.
Also, we want to go up to a 100-column text-width if it improves
readability, which for post-ifs it most often does.
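For example, a why-comment on a post-if could look like this (hypothetical
snippet, variable names made up, only meant to illustrate the style):

    # only root may pass arbitrary host paths here, anything else would be a
    # trivial way to read files outside the storage - so fail early
    die "permission denied - superuser only\n" if $authuser ne 'root@pam';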
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
this is the first step towards having the worker itself remove the
temporary file, instead of the http server doing it.
Signed-off-by: Lorenz Stechauner <l.stechauner@proxmox.com>
similar to commit 279d9de510
This calling style is pretty dangerous in general for such plugin
systems...
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
By adding the keyring for RBD storage or the secret for CephFS ones, it
is possible to add an external Ceph cluster with only one API call.
Previously the keyring / secret file needed to be placed in
/etc/pve/priv/ceph/$storeID.{keyring,secret} manually.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
This allows us to manually pass the used RBD keyring or CephFS secret.
Useful mostly when adding external Ceph clusters where we have no other
means to fetch them.
I renamed the previous $secret to $cephfs_secret to be able to use
$secret as a parameter.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
by not dying when the dataset is already unmounted. Can be triggered
for a container by doing two rollbacks in a row.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
The method is only inherited by the DirPlugin module from the base
Plugin, so we do not have it available there through a static module
function call using ::, but only through a method call on the class.
Other fix options would have been:
PVE::Storage::Plugin::free_image(@_);
or:
$class->SUPER::free_image($storeid, ...);
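To illustrate the difference (simplified sketch, the argument list is only
illustrative and not the exact call site):

    # broken: DirPlugin only *inherits* free_image, there is no such symbol in
    # its own package, and a fully qualified :: call does not follow @ISA
    PVE::Storage::DirPlugin::free_image($class, $storeid, $scfg, $volname);

    # works: a method call on the class resolves through @ISA and ends up in
    # the base Plugin's implementation
    $class->free_image($storeid, $scfg, $volname);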
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
[ Thomas: add some background to the commit message ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
after an error while copying the file to its destination, the local
path of the destination was unlinked in every case, even when the
destination had been copied to via scp.
Signed-off-by: Lorenz Stechauner <l.stechauner@proxmox.com>
the addition of this enum does not change API behaviour, because
it is checked for 'iso' or 'vztmpl' afterwards anyway.
Signed-off-by: Lorenz Stechauner <l.stechauner@proxmox.com>
Extracting the config from zstd-compressed vma files was broken:
Failed to extract config from VMA archive: zstd: error 70 : Write
error : cannot write decoded block : Broken pipe (500)
since the error message changed and wouldn't match anymore.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
With PVE 7.0 we use upstream's lvm2 packages, which seem to detect
'more' signatures (and refuse to create LVs when they are present).
This prevents creating new disks on LVM (thick) storages as reported
on pve-user [0].
Adding -Wy to wipe signatures, and --yes (to actually wipe them
instead of prompting), fixes the aborted lvcreate.
This is added only to LVMPlugin and not to the lvcreate calls in
LvmThinPlugin, since I assume (and my quick tests confirm) that thin
pools are not affected by this issue.
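Roughly, the resulting lvcreate invocation in LVMPlugin then looks like
this (sketch only, flag placement and surrounding arguments simplified):

    # -Wy wipes any detected signatures, --yes answers lvm2's confirmation
    # prompt, otherwise the newer upstream packages abort the lvcreate
    my $cmd = ['/sbin/lvcreate', '-aly', '-Wy', '--yes',
        '--size', "${size}k", '--name', $name, $vg];
    run_command($cmd, errmsg => "lvcreate '$vg/$name' error");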
Tested on a virtual test-setup with a LVM storage on a (virtual) iscsi
target and a local lvmthin storage.
[0] https://lists.proxmox.com/pipermail/pve-user/2021-July/172660.html
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
as then the btrfs assertion would happen after we already created
subdirectories on some path, leaving those left over.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
uses the common function PVE::Tools::download_file_from_url to download
iso files.
Only users with the permissions `Sys.Audit` and `Sys.Modify` on `/` are
permitted to perform this action. This restriction is due to the fact
that the download function is able to download files from internal
networks (which are not visible/accessible from outside).
Users with these permissions have the means to alter the node (network)
config anyway, so this does not create any further security risk.
Signed-off-by: Lorenz Stechauner <l.stechauner@proxmox.com>
the web-interface always prefers qcow2 once that is in the list, which
is a bug of its own, as the format preferred by the backend should be
preferred too. Still, vmdk support should not be extended, as we can
only cope with that format in a limited way anyway, and both formats
can always be enabled again later easily if there's actual user request
for it. Disabling is never that easy, at least if one cares about
backward compat.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Bumps APIVER to 9 and resets APIAGE to zero.
The import methods (volume_import, volume_import_formats):
These additionally get the '$snapshot' parameter which is
already present on the export side as an informational piece
to know which of the snapshots is the *current* one.
This parameter is inserted *in the middle* of the current
parameters, so the import & export format methods now have
the same signatures.
The current "disk" state will be set to this snapshot.
This, too, is required for our btrfs implementation.
`volume_import_formats` can obviously not make much
*use* of this parameter, but it'll still be useful to know
that the information is actually available in the import
call, so its presence will be checked in the btrfs
implementation.
Currently this is intended to be used for btrfs send/recv
support, which in theory could also get additional metadata
similar to how we do the "tar+size" format. However, we
currently only really use this within this repository in
storage_migrate(), which has this information readily
available anyway.
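The import signature then looks roughly like this (sketch based on the
description above, not a verbatim copy of Plugin.pm):

    sub volume_import {
        # $snapshot is inserted in the middle, so the parameter order now
        # mirrors volume_export; it names the *current* snapshot of the volume
        my ($class, $scfg, $storeid, $fh, $volname, $format,
            $snapshot, $base_snapshot, $with_snapshots, $allow_rename) = @_;
        ...
    }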
On the export side (volume_export, volume_export_formats):
The `$with_snapshots` option is now "defined" to be an
ordered array of snapshots to include, as a hint for
storages which need this. (As of the next commit this is
only btrfs, and only when also specifying a base snapshot,
which is a case we can currently not run into except on the
command line interface.)
The current providers of the `$with_snapshots` option will
still treat it as a boolean (since e.g. for ZFS you cannot
really "skip" snapshots AFAIK).
This is mainly intended for storages which do not have a
strong association between snapshots and the originals, or
an ordering (e.g. btrfs and lvm-thin allow creating
arbitrary snapshot trees, and with btrfs you can even
create a "circular" connection between subvolumes; also, we
could maybe consider reflink-based copies on xfs to be
snapshots in the future?)
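For a caller this boils down to the following (illustrative values, the
snapshot names are made up):

    # existing providers keep passing a plain boolean...
    my $with_snapshots = 1;

    # ...while storages needing the ordering hint (btrfs with a base snapshot)
    # can now be handed an ordered array of snapshot names instead
    $with_snapshots = ['base-setup', 'pre-upgrade', 'daily-backup'];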
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
This is mostly the same as a directory storage, with 2 major
differences:
* 'subvol' volumes are actual btrfs subvolumes and therefore
allow snapshots
* 'raw' files are placed *into* a subvolume and therefore
also allow snapshots, the raw file for volume
`btrstore:100/vm-100-disk-1.raw` can be found under
`$path/images/100/vm-100-disk-1/disk.raw`
* in both cases, snapshots add an '@name' suffix to the
subvolume's directory name, so snapshot 'foo' of the above
would be found under
`$path/images/100/vm-100-disk-1@foo/disk.raw`
or for format "subvol":
`$path/images/100/subvol-100-disk-1.subvol@foo`
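In other words, resolving the path of a raw volume (and its snapshots)
boils down to something like this (simplified sketch, $name being the
volume name without the .raw suffix):

    my $dir = "$path/images/$vmid/$name";   # e.g. .../images/100/vm-100-disk-1
    $dir .= "\@$snap" if defined($snap);    # snapshots live in '...@<name>'
    my $raw_path = "$dir/disk.raw";         # the raw image inside the subvolume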
Note that qgroups aren't included in btrfs-send streams,
therefore for now we will only be using *unsized* subvolumes
for containers and place a regular raw+ext4 file for sized
containers.
We could extend the import/export stream format to include
the information at the front (similar to how we do the
"tar+size" format, but we need to include the size of all
the contained snapshots as well, since they can technically
change). (But before enabling quotas we should do some
performance testing on bigger file systems with multiple
snapshots as there are quite a few reports of the fs slowing
down considerably in such scenarios).
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
stores the regex definition in PVE::Storage.
One test had to be adapted because it tested obsolete code. Namely,
it expected vztmpl to only end with .tar.gz, but the new regex also
includes .tar.xz; there is nothing against allowing .tar.xz files as
vztmpl files.
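The shared definition amounts to a pattern along these lines (sketch, the
actual variable name in PVE::Storage may differ):

    # container templates may be gzip or xz compressed tarballs
    our $vztmpl_extension_re = qr/\.tar\.(gz|xz)$/;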
Signed-off-by: Lorenz Stechauner <l.stechauner@proxmox.com>
the size returned by volume_size_info is used for creating the new
destination image in PVE::QemuServer::clone_disk (and probably
elsewhere). In certain cases the return values are tainted - they are
obtained by a run_command call and depending on the format and length
of the parsed output can still have their tainted attribute.
One example of a tainted return has been reported in our
community-forum:
https://forum.proxmox.com/threads/cannot-clone-vm-or-move-disk-with-more-than-13-snapshots.89628/
A qcow2 image with 13 snapshots generates output > 4k in length from
`qemu-img info --output=json`, which in turn causes the output to be
considered tainted.
This patch untaints the returns where applicable. The other
storage-plugins are not affected:
* LVMPlugin returns a single number and a newline (thus gets untainted
by run_command)
* RBDPlugin untaints the complete json before decoding
* ZFSPoolplugin and ISCSIDirectPlugin explicitly untaint their
returns.
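The untainting itself is the usual capture-group round-trip, roughly
(illustrative sketch):

    # data captured via a regex group from a tainted string is itself clean,
    # so pass the parsed size through a strict pattern before returning it
    my ($untainted_size) = $size =~ /^(\d+)$/
        or die "size '$size' is not a plain number\n";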
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
The enabled check in the lower loop is now redundant and can be removed.
If storeid is provided, initialize the result hash accordingly, mainly for
backwards compatibility (needed by a caller in pve-manager's Ceph/Pools.pm and
the migration code in pve-container and qemu-server), but it also is less
surprising in general.
Remaining vdisk_list users that do not specify a content type are:
1. pve-manager's Pool/Ceph.pm, but the content type for RBD can only be
rootdir and images, so the storage is scanned (if enabled, same as
before).
2. pve-container migration
3. qemu-server migration
For the latter two, it's planned to enforce content type, so the change is fine
too.
This also means that iscsi(direct) plugins with content type 'none', i.e.
"use LUNs directly", do not return the list of images anymore, but that was
rather a bug anyway, as they're not virtual disks then:
0.0.0.scsi-36001405b8f2772e13a04b8e9390db13d
All of the remaining callers not using content types (see above) are fine with
that change too.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>