The only caller where $running can even be truthy is QemuServer.pm's
qemu_volume_snapshot_delete(). But there, a check whether snapshots
should be done with QEMU is already made, and the storage function is
only called if snapshots should not be done with QEMU (as is the case
for TPM drives, which are not attached directly). So rely on the caller
and drop the early return, to fix removing snapshots in such cases.
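For illustration, the pattern being removed looks roughly like this
sketch (not the exact code in RBDPlugin.pm):

    sub volume_snapshot_delete {
        my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;

        # an early return like this is what gets dropped; the caller in
        # qemu-server already decides whether QEMU or the storage layer
        # handles the snapshot:
        # return 1 if $running;

        # ... actual removal via 'rbd snap rm' follows ...
    }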
Even if a stray call ends up here (which can already happen by changing
the krbd setting while a VM is running, flipping both the
above-mentioned check in qemu-server and the early-return check removed
by this patch), it might not even be problematic. At least a quick test
worked fine:
1. take snapshot via a monitor command in QEMU
2. remove snapshot via the storage layer
3. create a new file in the VM
4. take a snapshot with the same name via monitor command in QEMU
5. roll back to the snapshot
6. check that the file in the VM is as expected
The same test also worked when using the storage layer to take the
snapshot and QEMU to remove it. Even if it were problematic, it would
rather be the check in qemu-server that needs improving.
(Trying to issue a snapshot monitor command for a krbd-mapped image, on
the other hand, fails with an error, but that is also not too bad and
not relevant to the storage code. Again, it would rather be the check
in qemu-server that needs improving.)
The fact that the pve-container code didn't even pass $running is the
reason removing snapshots worked for containers on a storage with krbd
disabled (the pve-container code calls map_volume() explicitly, so
containers can work regardless of the krbd setting in the storage
configuration; see commit 841fba6 ("call map_volume before using
volumes.") in pve-container).
For volume_snapshot(), the early return with $running was already
removed (or rather the relevant logic moved to QemuServer.pm) in 2015
by commit f5640e7 ("remove running from Storage and check it in
QemuServer"), even before krbd support was added in RBDPlugin.pm.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
To be correct in all cases, it's still necessary to filter by "pool"
in zfs_parse_zvol_list(), because $scfg->{pool} could be e.g.
'foo/vm-123-disk-0', which looks like an image name and would pass the
other "should skip" checks in zfs_parse_zvol_list().
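As a rough sketch of the idea (illustrative only; @datasets and the
parsing are simplified compared to zfs_parse_zvol_list()):

    for my $dataset (@datasets) {
        # skip anything not strictly below the configured pool/dataset,
        # so a pool named like an image (e.g. 'foo/vm-123-disk-0')
        # cannot itself end up in the image list
        next if $dataset !~ m!^\Q$scfg->{pool}\E/!;
        # ... parse the rest of the name as an image ...
    }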
No change in the result of zfs_list_zvol() is intended.
Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The 'pool' property in the result of zfs_parse_zvol_list() was not
used for anything else.
No functional change is intended.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Avoids the need for the fallback for the (only existing) caller.
Note that the old my $list = (); is a rather opaque way of assigning
undef.
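A minimal example of why that spelling is opaque:

    my $list = ();        # empty list in scalar context: $list is undef
    my $explicit = undef; # equivalent, but says what it means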
Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Rename the "dirs" parameter to "content-dirs". Switch from a "vtype:/dir"
format to "vtype=/dir", and remove the misleading error message talking
about "absolute" paths. One might expect these to be absolute over the
whole system, while in reality they are relative to the mountpoint of
the storage.
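As an illustration (storage name and paths are made up), an override in
storage.cfg now looks roughly like this, with each directory interpreted
relative to the storage's mountpoint:

    dir: example-dir
        path /mnt/example
        content iso,backup
        content-dirs iso=/custom-iso,backup=/custom-backup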
Signed-off-by: Leo Nunner <l.nunner@proxmox.com>
Allowing overrides for the default directory locations seems to
integrate rather well into the existing system. Custom locations
are specified using the "dirs" parameter as a comma-separated list
of "vtype:/location" values.
For now, the option has been enabled for the Directory, CIFS and NFS
backends.
Signed-off-by: Leo Nunner <l.nunner@proxmox.com>
There is no caller using $cache and the same $storeid multiple times,
so there is no need to keep the cache.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The plugin for remote ZFS storages currently also uses the same
list_images() as the plugin for local ZFS storages. There is only
one cache which does not remember the target host where the
information originated.
This is problematic for rescan in qemu-server:
1. Via list_images() and zfs_list_zvol(), ZFSPlugin.pm's zfs_request()
is executed for a remote ZFS.
2. $cache->{zfs} is set to the result of scanning the images there.
3. Another list_images() for a local ZFS storage happens and uses the
cache with the wrong information.
The only two operations using the cache currently are
1. Disk rescan in qemu-server, which is affected by the issue. It is
done as part of (or as) a long-running operation.
2. prepare_local_job for pvesr, but the non-local ZFS plugin doesn't
support replication, so it cannot trigger there. The cache only
helps if there is a replicated guest with volumes on different
ZFS storages, but otherwise it will be less efficient than no
cache or querying every storage by itself.
Fix the issue by making the cache $storeid-specific, which effectively
makes the cache superfluous, because there is no caller using the same
$storeid multiple times. As argued above, this should not really hurt
those callers much and actually improves performance for all other
callers.
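A sketch of the resulting structure ($cache, $storeid, $class and $scfg
are taken from the surrounding list_images() code; illustrative, not
the exact patch):

    # cache per storage ID instead of a single shared slot, so a scan of
    # a remote ZFS storage cannot be re-used for a local one
    $cache->{zfs}->{$storeid} //= $class->zfs_list_zvol($scfg);
    my $zfs_list = $cache->{zfs}->{$storeid};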
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
This essentially reverts commit c9bd3d2 ("fix #1123: modify NVME
device path for SMART support").
The man page for smartctl states
> Use the forms "/dev/nvme[0-9]" (broadcast namespace) or
> "/dev/nvme[0-9]n[1-9]" (specific namespace 1-9) for NVMe devices.
so it should be fine to pass the path with the specific namespace to
smartctl.
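For context, the transformation being reverted looked roughly like this
(a sketch, not the exact code):

    # strip the namespace suffix before calling smartctl,
    # e.g. /dev/nvme0n1 -> /dev/nvme0
    my $device = '/dev/nvme0n1';
    $device =~ s|^(/dev/nvme\d+)n\d+$|$1|;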
But that text was already present in the man page of version 6.5,
which is the version the commit c9bd3d2 talks about. It might be that
it was necessary to drop the specific namespace for the version
backported from Stretch to Jessie (the bug report mentions that that
version was used[0]), but it's not quite clear.
With current versions, passing in the path with the specific namespace
did work as expected[1], even on a locally tested device with multiple
namespaces set up. In PBS, the path queried via
udev::Device::from_syspath("/sys/block/{name}") is passed to smartctl,
and on the systems I tested with a short script, that path also
included the specific namespace.
So pass the full path to make things a little bit simpler and to avoid
potential future issues like bug #2020[2].
[0]: https://bugzilla.proxmox.com/show_bug.cgi?id=1123#c3
[1]: https://forum.proxmox.com/threads/113962/post-493185
[2]: https://bugzilla.proxmox.com/show_bug.cgi?id=2020
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
This reverts commit c3442aa554.
Nowadays, relying on 'readlink /sys/block/nvmeXnY/device' won't always
lead to the correct device, as reported in the community forum[0],
where it results in '../../nvme-subsys0' and there's no matching entry
under '/dev/'.
Since Linux kernel 5.4, in particular commit 733e4b69d508 ("nvme:
Assign subsys instance from first ctrl"), the problematic situation
from bug #2020 shouldn't happen anymore.
Stated more clearly by the commit's author here[1]:
> Indeed, that commit will make the naming a bit more sane and will
> definitely prevent mistaken identity. It is still possible to
> observe controllers with instances that don't match their
> namespaces, but it is impossible to get a namespace instance that
> matches a non-owning controller.
The only other user of get_sysdir_info() doesn't use the 'device'
entry, so reverting that part is fine too.
[0] https://forum.proxmox.com/threads/113962/
[1] https://github.com/linux-nvme/nvme-cli/issues/510#issuecomment-552508647
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Tested-by: Stefan Hrdlicka <s.hrdlicka@proxmox.com>
Previously, calling with e.g. $storage_list = [undef] would lead to an
early return of $override and not consider the limit from
datacenter.cfg.
Refactoring the bandwidth limit handling for migration introduced
calls like the one described above, which broke applying the limit
from datacenter.cfg for VM RAM/state migration.
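For illustration, assuming the usual ($operation, $storages, $override)
argument order, such a call must still honor the datacenter.cfg limit:

    use PVE::Storage;

    # storage unknown (e.g. VM RAM/state migration): per-storage limits
    # cannot apply, but the datacenter.cfg default still has to
    my $limit = PVE::Storage::get_bandwidth_limit('migration', [undef], undef);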
Reported in the community forum:
https://forum.proxmox.com/threads/37920/post-513005
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
file-restore has a 'timeout' parameter and, if that is exceeded,
returns an error with the HTTP code 503 Service Unavailable.
When the web UI encounters such an error, it retries the listing a few
times before giving up.
To make use of these, add the 'timeout' parameter to the new
'extraParams' of 'file_restore_list'.
25 seconds is chosen because it's under pveproxy's 30s limit, with a
bit of overhead to spare for the rest of the API call, like JSON
decoding, forking, access control checks, etc.
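A sketch of the resulting call ($pbs_client and the other arguments are
assumed from the surrounding code, argument order as described above):

    # pass the timeout through the new extraParams argument
    my $files = $pbs_client->file_restore_list(
        $snapshot, $filepath, $base64, { timeout => 25 });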
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Since commit
ba690c40 ("file-restore: remove 'json-error' parameter from list_files")
in proxmox-backup, the file-restore binary will return the error as
JSON when called with '--output-format json' (which we do in
PVE::PBSClient).
Here, we assume that 'file-restore' fails in that case and try to use
the return value as an array ref, which fails, so the user never sees
the real error message.
To fix that, check the ref type of the return value and act
accordingly.
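A sketch of the check (names are illustrative, not the exact code):

    sub handle_file_restore_result {
        my ($res) = @_;
        # on error, the decoded JSON is a hash carrying the message
        # instead of the expected array of file entries
        if (ref($res) eq 'HASH') {
            die(($res->{message} // "unknown file-restore error") . "\n");
        }
        die "unexpected file-restore output\n" if ref($res) ne 'ARRAY';
        return $res;
    }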
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
so that there is better code locality and we also avoid forgetting to
adapt the check for each specific draid-config parameter if a new one
gets added or an existing one is changed.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
It is possible to set the number of spares and the size of data
stripes via the draidspares & draiddata parameters.
Signed-off-by: Stefan Hrdlicka <s.hrdlicka@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
If both type and vmid are defined, we don't need to list the current
snapshots; we can simply derive the single backup group from that and
let the PBS client handle the rest.
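A sketch of the derivation (the helper name and the type mapping are
illustrative):

    sub derive_backup_group {
        my ($type, $vmid) = @_;
        # map the guest type to the PBS backup type and build the group
        my $backup_type = $type eq 'qemu' ? 'vm' : 'ct';
        return "$backup_type/$vmid";
    }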
This should be a not-so-small speedup for most setups using PBS backups
with pruning configured on the PVE side, as vzdump calls this
separately for every vmid on backup jobs with multiple guests included.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
this format comes from the remote cluster, so it might not be supported
on the source side - checking whether it's known (as an additional
safeguard) and untainting it (to avoid an open3 failure) is required.
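A rough sketch of the safeguard (the helper name and the list of known
formats are illustrative only):

    sub check_and_untaint_format {
        my ($format) = @_;
        my @known = ('raw+size', 'qcow2+size', 'vmdk+size', 'tar+size', 'zfs');
        die "unknown migration format '$format'\n"
            if !grep { $_ eq $format } @known;
        # untaint, so the value can safely be passed on to open3
        ($format) = ($format =~ /^([a-z0-9+]+)$/)
            or die "invalid characters in format '$format'\n";
        return $format;
    }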
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
[ T: squashed in canonical perl array ref access ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
fixing some issues reported by perlcritic along the way and cutting
down 70 lines, often even improving readability.
Tried to recheck and be conservative, so there shouldn't be any
regression, but it's still Perl after all...
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This makes it consistent with the naming scheme in PBS/GUI.
Keep the old value for API stability reasons and remove it in the next
major version.
Signed-off-by: Matthias Heiserer <m.heiserer@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
One of the smaller annoyances, especially for less experienced users,
is the fact that, when creating a local storage (ZFS, LVM (thin), dir)
in a cluster, one can only leave the "Add Storage" option enabled the
first time.
On any following node, this option needed to be disabled and the new
node manually added to the list of nodes for that storage.
This patch changes the behavior. If a storage of the same name already
exists, it will verify that necessary parameters match the already
existing one.
Then, if the 'nodes' parameter is set, it adds the current node and
updates the storage config.
In case there is no nodes list, nothing else needs to be done, and the
GUI will stop showing the question mark for the local storage that was
configured but, until then, did not exist on that node.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
If a storage of that type and name already exists (LVM, zpool, ...) but
we do not have a Proxmox VE Storage config for it, it is possible that
the creation will fail midway due to checks done by the underlying
storage layer itself. This in turn can lead to disks that are already
partitioned. Users would need to clean this up themselves.
By adding checks early on, not only against the PVE storage config but
also against the actual storage type itself, we can die early enough,
before we touch any disk.
For ZFS, the logic to gather pool data is moved into its own function to
be called from the index API endpoint and the check in the create
endpoint.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
returns a list of mounted paths with the backing devices
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
The default timeout in PVE/RADOS.pm is 5 seconds, but this is not
always enough for external clusters under load. Workers can and should
take their time to not fail here too quickly.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
In preparation to increase the timeout for workers. Neither of the two
existing callers of librados_connect() currently uses the parameter.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The return value of get_rbd_dev_path() is only used when $scfg->{krbd}
evaluates to true, and the function shouldn't have any side effects
that are needed later, so the call can be avoided otherwise.
For configurations with external clusters and krbd disabled, this also
saves a RADOS connection and command.
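A sketch of the resulting logic (the helper name, the argument list of
get_rbd_dev_path() and the librbd pseudo-path are simplified):

    sub rbd_volume_path {
        my ($scfg, $storeid, $pool, $name) = @_;
        # only resolve the /dev/rbd device path when krbd is enabled;
        # otherwise QEMU uses librbd and the lookup (including its RADOS
        # connection) can be skipped entirely
        return get_rbd_dev_path($scfg, $storeid, $name) if $scfg->{krbd};
        return "rbd:$pool/$name";
    }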
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
When switching this from calling the external binary to using the Perl
API client, the timeout got reduced to 7 seconds, which is definitely
insufficient for larger stores.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
while the resulting backups are encrypted, they would not be restorable
using only the master key if the original PVE system is lost.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
if the key file doesn't exist (anymore), but the storage.cfg references
one, die on commands that should use encryption instead of falling back
to plain-text operations.
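A sketch of the behavior (helper name and key file handling are
illustrative):

    sub assert_encryption_key_usable {
        my ($scfg, $keyfile) = @_;
        # storage.cfg says encryption should be used, but the key file is
        # gone: fail instead of silently writing unencrypted backups
        die "encryption key file '$keyfile' does not exist\n"
            if $scfg->{'encryption-key'} && !-f $keyfile;
    }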
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Tested-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Before af07f67 ("pbs: use vmid parameter in list_snapshots") the
namespace was set via do_raw_client_command, but now it needs to be
set explicitly here.
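A sketch of the kind of change ($scfg is the PBS storage section
config; the '--ns' option name is assumed from the client CLI):

    my $param = [];
    # pass the configured namespace along explicitly again
    push @$param, '--ns', $scfg->{namespace}
        if defined($scfg->{namespace});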
Fixes: af07f67 ("pbs: use vmid parameter in list_snapshots")
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>