This avoids auto-detection by qemu-img, so that the information is
correct with respect to the actual image format on the storage layer.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Make it consistent with importing, which already relies on
parse_volname() for the format.
The previous inconsistency could cause migration failures, where the
format returned by file_size_info() would not match the one from
parse_volname().
Pass the format that will be used for export to file_size_info() to
ensure that the correct size is determined.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Allow callers to opt out of 'qemu-img' auto-detecting the format.
This is currently not supported in combination with 'untrusted',
because it can lead to fewer checks being done. It could be further
refined (e.g. only disallowing 'untrusted' together with the format
being 'raw') should the need arise.
For the 'subvol' format, the checking is of course handled outside of
'qemu-img', based on whether the volume is a directory or not.
Currently, there is a fallback to 'raw' should the format not be among
the ones allowed for the 'pve-qm-image-format' standard option. This
is to reduce potential for fallout, in particular for the plan to
change the base plugin's volume_size_info() to pass in the expected
format when calling file_size_info() too.
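For illustration, the opt-out essentially boils down to passing an
explicit '-f' to qemu-img instead of letting it probe; a rough sketch
(variable names assumed, the actual option handling differs):

    my $cmd = ['/usr/bin/qemu-img', 'info', '--output=json'];
    # an explicitly given format disables qemu-img's auto-detection
    push @$cmd, '-f', $format if $format && $format ne 'subvol';
    push @$cmd, $filename;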
While not explicitly part of the storage plugin API, the 'untrusted'
parameter is now in a different place, so a compat check is added for
external plugins that might've still used it.
A versioned 'Breaks' on qemu-server is needed (if we don't want to
just rely on the compat check).
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Neither vmdk images with multiple children, nor ones with multiple
extents (that might in turn be backed by multiple files), are allowed
when an image is untrusted.
Reported-by: Friedrich Weber <f.weber@proxmox.com>
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Some OVAs have a UID/GID set for their inner files, for example the
one from GNS3:
> tar tvf 'GNS3 VM.ova' --numeric-owner
> -rw-r----- 6/1 9047 2024-11-07 10:22 GNS3 VM.ovf
> -rw-rw---- 6/1 904088064 2024-11-07 10:22 GNS3 VM-disk001.vmdk
> -rw-rw---- 6/1 2879488 2024-11-07 10:22 GNS3 VM-disk002.vmdk
As we run as root, tar defaults to the `--same-owner` option, where it
tries to extract files with the same ownership as exists in the
archive.
This might not be ideal and results in an error for GNS3:
> tar: GNS3 VM-disk001.vmdk: Cannot change ownership to uid 6, gid 1: Operation not permitted
So, explicitly set the `--no-same-owner` option to make tar always use
the UID/GID of the running process, which is what we want here.
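A minimal sketch of the resulting extraction call, using run_command
from PVE::Tools (variable names assumed):

    use PVE::Tools qw(run_command);

    # extract with the running process' ownership, not the archive's
    run_command(['tar', '-x', '--no-same-owner', '-f', $ova_path, '-C', $destdir]);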
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
It seems that some OVFs do not have an ovf:Name element, but do have
an ovf:id attribute inside the ovf:VirtualSystem node that spells out
what the archive contains. So fall back to this attribute's value if we
could not find any explicit name; we can only win here, and the user
can still override it anyway.
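A rough sketch of the fallback, assuming an XML::LibXML::XPathContext
($xpc) as used for the rest of the OVF parsing (the exact element path
and the $name variable are assumptions):

    # fall back to the ovf:id attribute if no explicit ovf:Name was found
    if (!defined($name) || !length($name)) {
        $name = $xpc->findvalue('/ovf:Envelope/ovf:VirtualSystem/@ovf:id');
    }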
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This is nicer from a readability POV, but replace an arbitrary amount
of whitespace with a single minus character to avoid making the name
look odd.
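In other words, roughly:

    $name =~ s/\s+/-/g; # each whitespace run becomes a single '-'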
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Add a SAFE_CHAR_WITH_WHITESPACE_CLASS_RE sister variant of the shared
SAFE_CHAR_CLASS_RE regex to the base storage module, as this use case
is a generic one after all, and swap the untaint method that parses
the file a disk references over to it.
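For illustration, the relationship between the two regexes, with the
exact character class abbreviated/assumed here:

    our $SAFE_CHAR_CLASS_RE = qr/[a-zA-Z0-9\-\.\+\=]/;
    # sister variant that additionally allows blanks, e.g. for file
    # names like 'GNS3 VM-disk001.vmdk'
    our $SAFE_CHAR_WITH_WHITESPACE_CLASS_RE = qr/[a-zA-Z0-9\-\.\+\= ]/;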
Note that this is only the disk file name from inside the archive, and
thus only used during the extraction to a staging/working directory;
from there the disk will be imported as a volume allocated by the
common storage system, and will thus follow our ordinary volume name
scheme.
Improves disk detection when importing, e.g., for the GNS3 OVA
provided by upstream.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Some OVFs, like for example the one from the GNS3 OVA, don't have that
namespace/prefix, and it doesn't really hurt us to make it optional as
long as the rest is correct.
Brings us nearer to having working disks with GNS3.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This is used when finding the firmware type, so register it here, or
libxml/xpath will complain about an "Undefined namespace prefix".
The schema URL was taken from some OVFs found in the wild.
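A sketch of the registration on the XPath context (the namespace URL
shown is an assumption based on commonly seen VMware OVFs):

    # register the VMware namespace, or xpath queries using the 'vmw'
    # prefix fail with "Undefined namespace prefix"
    $xpc->registerNs('vmw', 'http://www.vmware.com/schema/ovf');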
Reported-by: Filip Schauer <f.schauer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
If the base image (parent) of an image contains e.g. whitespace in its
path, the current untainting would not match and it would seem there
was no parent.
Since untrusted files are not allowed to have a backing file anyway,
just warn when encountering this case, to keep backwards compatibility.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Introduce a separate regex that only contains ova, since
uploading/downloading OVFs does not make sense (the disks would then be
missing).
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Iterate over the relevant parts and try to parse out the
'ResourceSubType'. Its content is not standardized, but I only ever
found examples that are compatible with VMware, meaning it's either
'e1000', 'e1000e' or 'vmxnet3' (in various capitalizations; thus the
`lc()`).
As a fallback I used e1000, since that is our default too and should
work for most guest operating systems.
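Condensed, the logic is roughly (variable names assumed):

    my $type = lc($resource_sub_type // '');
    # only vmware-compatible values were found in the wild
    my $model = ($type eq 'e1000' || $type eq 'e1000e' || $type eq 'vmxnet3')
        ? $type
        : 'e1000'; # our default, works for most guest OSes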
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
Simply add all parsed disks to the boot order in the order we
encounter them (similar to the ESXi plugin).
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
It seems there is no part of the OVF standard that specifies which
type of BIOS there is (at least I could not find it). Every OVF/OVA I
tested either has no info about it, or has it in a VMware-specific
property, which we parse here.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
Use the standard's info about the ostypes to map to our own (see the
comment for a link to the relevant part of the DMTF schema).
Every type that is not listed is mapped to 'other', so there is no need
to have it in a list.
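With that, the lookup boils down to something like the following, with
$ostype_map being the id => ostype hash derived from the schema (names
assumed):

    # every id not present in the map simply falls back to 'other'
    my $ostype = $ostype_map->{$os_id} // 'other';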
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
Several small clean-ups:
* move the filepath code a bit closer to where it's actually used
* check the contained path before trying to find its absolute path
* properly add error handling to realpath
* instead of checking the combined ovf_path + filepath, just make sure
  filepath can't point to anything besides a file in this directory, by
  checking for '.' and '..' (slashes are not allowed by
  SAFE_CHAR_CLASS_RE)
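A sketch of the combined checks (variable names assumed):

    use Cwd qw(realpath);

    # '.' and '..' are the only remaining escape hatches, as slashes
    # are not allowed by SAFE_CHAR_CLASS_RE
    die "invalid file name '$filepath' in OVF\n"
        if $filepath eq '.' || $filepath eq '..';
    my $path = realpath("$ovf_dir/$filepath");
    die "failed to resolve '$filepath' - $!\n" if !defined($path);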
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
Since we want to handle OVA files (which are only an OVF plus images,
bundled in a tar file) for import, add code that handles that.
We introduce a valid volname for files contained in OVAs, like this:
storage:import/archive.ova/disk-1.vmdk
by basically treating the last part of the path as the name of the
contained disk we want.
In that case we return 'import' as the type, with 'vmdk/qcow2/raw' as
the format. (We cannot use something like 'ova+vmdk' without extending
the 'format' parsing for all storages/formats, because it runs through
a verify-format check at least once.)
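A sketch of taking such a volname apart (regex simplified, the real
parse_volname() is stricter about allowed characters):

    if ($volname =~ m!^import/([^/]+\.ova)/([^/]+\.(?:vmdk|qcow2|raw))$!) {
        my ($archive, $inner_file) = ($1, $2);
        # type is 'import'; the format comes from $inner_file's extension
    }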
We then provide a function to use for that:
* extract_disk_from_import_file: this actually extracts the file from
  the archive. Currently only OVA is supported, so the extraction with
  'tar' is hardcoded, but again, we can easily extend/modify that
  should we need to.
We currently extract into either the import storage or a given target
storage's images directory, so if the cleanup does not happen, the user
can still see and interact with the image via API/CLI/GUI.
We have to modify `parse_ovf` a bit to handle the missing disk images,
and we parse the size out of the OVF part (since this is informational
only, it should be no problem if we cannot parse it sometimes).
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
This lives in DirPlugin and not in Plugin (because of the cyclic
dependency Plugin -> OVF -> Storage -> Plugin otherwise).
Only OVF is currently supported (though OVA will be shown in the
import listing), and the files are expected to not be in a subdir, but
adjacent to the OVF file.
Listed will be all ovf/qcow2/raw/vmdk files: ovf because it can be
imported, and the rest because they can be used in the 'import-from'
part of qemu-server.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
This ensures the new verification callback for downloaded files is
available. While it's not a hard failure if that is missing, it's good
to ensure that it's active and can work as intended.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Do so by letting it run through 'file_size_info' as 'untrusted', since
that does the necessary checks. We do this so we don't accidentally
up-/download a file that is not a valid ISO.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
Tested-by: Fiona Ebner <f.ebner@proxmox.com>
Copy over the PVE::QemuServer::OVF module and relevant OVF tests from
the qemu-server package/repo.
We need it here for implementing import content type support in the
generic directory-based plugins.
It will also use PVE::Storage modules itself, so placing it anywhere
higher in the stack, or in a different package, would only make things
harder for now.
Put the OVF module under the new PVE::GuestImport module namespace.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
[ TL: rework commit message to avoid file endings and clarify
intentions ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This allows checking some extra attributes for images which come from
a potentially malicious source.
Since file_size_info is not part of the plugin API, no API bump is
needed. If desired, a similar check could also be implemented in
volume_size_info, which would entail bumping both APIVER and APIAGE
(since the additional parameter would make checking untrusted volumes
opt-in for external plugins).
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
Tested-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
We build the disk path by appending the last part of the volname to
/dev/disk/by-id. This could in theory be any other disk found under
there, instead of a LUN provided by the configured target.
This patch adds a way to verify the disk used is actually provided by
the target. To do so `udevadm` is used to get the devpath
(/devices/...). This can then be checked under `/sys` for a session.
With the session the targetname can be looked up under /sys/class and
compared with the configured target of the storage.
In case of multipath, all disks backing the multipath device are checked
recursively (in case of nested device mapper devices), and verification
succeeds if at least one backing disk is part of the iSCSI target.
Mixing disks from different iSCSI targets is allowed as long as one
corresponds to the right target.
udevadm input is limited to `/dev/` paths since we only pass those either
explicitly, or via Cwd::realpath on a /dev/disk/by-id path returned by
filesystem_path.
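A condensed sketch of the lookup chain for the simple (non-multipath)
case, with $device and $target assumed to be the /dev path and the
configured target (PVE::Tools helpers as used elsewhere in the code
base):

    use PVE::Tools;

    # resolve the kernel devpath (/devices/...) for the /dev node
    my $devpath;
    PVE::Tools::run_command(
        ['udevadm', 'info', '--query=path', $device],
        outfunc => sub { $devpath = shift },
    );
    # a path component like .../sessionN/... ties the disk to a session
    my ($session) = $devpath =~ m!/(session\d+)/!;
    die "no iSCSI session found for '$device'\n" if !defined($session);
    my $targetname = PVE::Tools::file_read_firstline(
        "/sys/class/iscsi_session/$session/targetname");
    die "'$device' is not provided by target '$target'\n"
        if !defined($targetname) || $targetname ne $target;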
According to [0] /sys/subsystems should be preferred over /sys/class if
available, but neither kernel 6.8 nor kernel 6.11 provided it. It is
mentioned that in the future this will be moved to /sys/subsystems. So
this has to be kept in mind for future kernels.
[0] https://www.kernel.org/doc/html/v6.11/admin-guide/sysfs-rules.html
Reported-by: Friedrich Weber <f.weber@proxmox.com>
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
Tested-by: Friedrich Weber <f.weber@proxmox.com>
Reviewed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
The last part of an iSCSI volname is assumed to be a stable name found
in /dev/disk/by-id. These are not allowed to have `/` in their names.
Reported-by: Friedrich Weber <f.weber@proxmox.com>
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
Tested-by: Friedrich Weber <f.weber@proxmox.com>
Reviewed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Assume a cluster that already has an iSCSI storage A configured. After
adding a new iSCSI storage B with a different target on node 1, B will
only become active on node 1, not on the other nodes. On other nodes,
pvestatd logs 'storage B is not online'. The storage does not become
available even after a reboot. A workaround is to manually perform
iSCSI discovery against B's targets on the other nodes once.
This happens because the connectivity check of the iSCSI plugin on the
other nodes does not correctly handle the case where iscsiadm already
knows some portals (i.e., A's portals) but not B's portals.
The connectivity check calls `iscsi_portals` to determine the portals
to ping, which calls `iscsiadm -m node` to query all known portals,
and extracts all portals to the storage's target. If the iscsiadm
command fails, `iscsi_portals` returns the portal given in the storage
config. This works as expected if the storage is the first iSCSI
storage, because then iscsiadm does not know any portals and thus
exits with code 21.
However, since there already is an iSCSI storage A, iscsiadm exits
cleanly, but its output does not contain any portals for B's target.
Hence, `iscsi_portals` returns an empty array of portals, so the
connectivity check fails and the other nodes never perform discovery
for B.
To fix this, let `iscsi_portals` also return the portal from B's
storage config if iscsiadm exited cleanly but its output contained no
matching portal.
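The fix then roughly amounts to the following at the tail end of
`iscsi_portals` (variable names assumed):

    # iscsiadm exited cleanly but knew no portal for this target yet,
    # so fall back to the configured portal to allow initial discovery
    return [ $scfg->{portal} ] if !scalar(@$portals);
    return $portals;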
Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
The decompressor_info method calls binaries provided by these
packages, so they are (alphabetically) added explicitly as
dependencies.
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
FG: adapted commit message
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
A popular ISO compressed exclusively with bz2 is OPNsense [2].
Since this requires adding `bz2` to the list of known compression
formats, we add decompression methods for vma and tar.
[2] https://opnsense.org/download/
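The decompression command itself is just bzip2 in stream mode; roughly
(variable name made up, the module keeps such commands in its
decompressor table):

    my $bz2_decompressor = ['bzip2', '-d', '-c']; # decompress to stdout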
Suggested-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Tested-By: Aaron Lauterer <a.lauterer@proxmox.com>
The $storeid param is missing and $snapname is used as the third
param. This actually seems to work because $snapname is always empty
in LVM.
Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
Since 90c1b10 ("fix #254: iscsi: add support for multipath targets"),
iSCSI storage activation checks whether a session exists for each
discovered portal. If there is a discovered portal without a session,
it performs a discovery and login in the hope of establishing a
session to the portal. If the portal is unreachable when trying to log
in, Open-iSCSI's default behavior is to retry for up to 2 minutes, as
explained in /etc/iscsi/iscsid.conf:
> # The default node.session.initial_login_retry_max is 8 and
> # node.conn[0].timeo.login_timeout is 15 so we have:
> #
> # node.conn[0].timeo.login_timeout * \
> #   node.session.initial_login_retry_max = 120s
If pvestatd is activating the storage, it will be blocked during that
time, which is undesirable. This is particularly unfortunate if the
target announces portals that the host permanently cannot reach. In
that case, every pvestatd iteration will take 2 minutes. While it can
be argued that such setups are misconfigured, it is still desirable to
keep the fallout of that misconfiguration as low as possible.
In order to reduce the time Open-iSCSI tries to log in, instruct
Open-iSCSI to not perform login retries for that target. For this, set
node.session.initial_login_retry_max for the target to 0. This setting
is stored in Open-iSCSI's records under /etc/iscsi/nodes. As these
records are overwritten with the defaults from /etc/iscsi/iscsid.conf
on discovery, the setting needs to be applied after discovery.
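A sketch of applying the setting right after discovery, via iscsiadm's
update operation:

    # disable login retries for this target; one attempt then takes at
    # most node.conn[0].timeo.login_timeout (15s by default)
    PVE::Tools::run_command([
        'iscsiadm', '--mode', 'node', '--targetname', $target,
        '--op', 'update',
        '--name', 'node.session.initial_login_retry_max', '--value', '0',
    ]);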
With this setting, one login attempt should take at most 15 seconds.
This is still higher than pvestatd's iteration time of 10 seconds, but
more tolerable. Logins will still be continuously retried by pvestatd
in every iteration until there is a session to each discovered portal.
Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
Tested-by: Mira Limbeck <m.limbeck@proxmox.com>
Reviewed-by: Mira Limbeck <m.limbeck@proxmox.com>
This is needed to satisfy the new checks in the API handler that only
allow downloading via marked endpoints.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
It's not enough to check whether $! is set. From "perldoc perlvar":
> Many system or library calls set "errno" if they fail, to
> indicate the cause of failure. They usually do not set "errno"
> to zero if they succeed and may set "errno" to a non-zero value
> on success. This means "errno", hence $!, is meaningful only
> *immediately* after a failure:
To protect against potential issues, check the return value of unlink
and only check $! if it failed.
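I.e., roughly:

    # unlink's return value tells us whether it failed; only then is
    # $! meaningful
    if (!unlink($path)) {
        die "removing '$path' failed - $!\n";
    }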
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
If the JSON was empty, for example if the qemu-img command timed out,
a message
> warn "could not parse qemu-img info command output for '$filename' - $err\n";
would have been printed.
This message could lead one to think the issue lies in the contents of
the JSON, even though the previous warning already said that there was
a timeout.
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Add the ability to change the owner of a guest image.
Btrfs does not need special commands to rename a subvolume, so this
can be achieved the same way as in Storage/Plugin.pm's rename_volume,
taking special care of the directory structure used by Btrfs.
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Tested-By: Aaron Lauterer <a.lauterer@proxmox.com>
Reviewed-By: Aaron Lauterer <a.lauterer@proxmox.com>
Allow the ESXi storage disk entry property "fileName" to be flatcased
("filename") in addition to being camelcased ("fileName"). This adds
compatibility with older ESXi .vmx configuration files.
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
The storage API version has been bumped to at least 9 since
libpve-storage = 7.0-4. If the source node is on Proxmox VE 8, where
this change will come in, then the target node can be assumed to be
running either Proxmox VE 8 or, during upgrade, the latest version of
Proxmox VE 7.4, so it's safe to assume a storage API version of at
least 9 in all cases.
As reported by Maximiliano, the fact that the 'apiinfo' call was
guarded with a quiet eval could lead to strange errors for replication
on a customer system where an SSH connection could not always be
established, because the target's API version would fall back to 1.
Because of that, the '-base' argument would be missing for the import
call on the target which would in turn lead to an error about the
target ZFS volume already existing (rather than doing an incremental
sync).
Reported-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Max Carrara <m.carrara@proxmox.com>