This way, the property gets added when parsing the storage configuration,
and PBS storages will correctly show up as shared storages in API results.
AFAICT the only affected PBS operation is free_image via vdisk_free, which will
now be protected by a cluster-wide lock, and that shouldn't hurt.
Another issue this fixes, which is the reason this patch exists, was reported
in the forum[0]. The free space from PBS storages was counted once for each node
that had access to the storage.
[0]: https://forum.proxmox.com/threads/pve-6-3-the-storage-size-was-displayed-incorrectly.83136/
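A minimal sketch of how such a fixed property could be injected while
parsing; the check_config placement and the surrounding code are
assumptions, not the verbatim patch:

    sub check_config {
        my ($class, $sectionId, $config, $create, $skipSchemaCheck) = @_;
        # PBS is reachable from every node, so always mark it shared
        $config->{shared} = 1;
        return $class->SUPER::check_config($sectionId, $config, $create, $skipSchemaCheck);
    }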
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
LVM RAID logical volumes (including mirrors) can be valid disk images, so they
should show up in storage content listings (for example pvesm list).
Explicitly including LV types is safer than excluding them, especially
because additional types might be introduced in the future.
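A rough sketch of the include-list approach when filtering lvs output
(attribute letters per lvs(8); loop and field names are assumptions):

    my $valid_types = { map { $_ => 1 } ('-', 'm', 'M', 'r', 'R') };

    for my $lv (@lvs) { # assumed: parsed entries from `lvs` output
        # first character of lv_attr is the volume type; include plain
        # (-), mirrored (m/M) and raid (r/R) LVs, skip everything else,
        # e.g. snapshots, thin pools, pvmove volumes
        my $type = substr($lv->{lv_attr} // '', 0, 1);
        next if !$valid_types->{$type};
        ...
    }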
Co-developed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Dominic Jäger <d.jaeger@proxmox.com>
check_connection is done by querying the exports of the NFS server in
question. With NFS v4 those exports aren't listed anymore, since NFS
v4 employs a pseudo-filesystem starting from root (/).
rpcinfo allows querying for the existence of an NFS v4 service.
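A simplified sketch of the rpcinfo-based probe (the NFS v3 fallback
path and error reporting are omitted here):

    use PVE::Tools qw(run_command);

    sub check_connection {
        my ($class, $storeid, $scfg) = @_;
        my $server = $scfg->{server};
        # null RPC call to the NFS v4 service over TCP; a non-zero
        # exit code means no such service is registered on the server
        my $cmd = ['/sbin/rpcinfo', '-T', 'tcp', $server, 'nfs', '4'];
        my $rc = run_command($cmd, noerr => 1, quiet => 1);
        return $rc == 0;
    }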
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
as described in the zfs bug https://github.com/openzfs/zfs/issues/10931
the kernel keeps cached data from mmaps around after a rollback, thus
leaving invalid data in files that were allegedly rolled back.
To work around this (until a real fix comes along), we unmount the
subvol, invalidating the kernel cache anyway.
The dataset gets mounted again on the next 'activate_volume'.
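A sketch of the workaround; the volume name handling and the exact
unmount invocation are assumptions:

    # roll back, then unmount so the kernel drops cached mmap pages
    run_command(['zfs', 'rollback', "$scfg->{pool}/$volname\@$snap"]);
    run_command(['umount', $mountpoint], noerr => 1, quiet => 1);
    # activate_volume() will simply mount the dataset again when needed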
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
the compat symlink from bin to sbin has been dropped with bullseye, and
we rely on PATH being set properly in our daemons/CLI tools anyway.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
for some controllers/disks the line is
Percentage used endurance indicator: x%
so extend the regex to cover that possibility.
We even had a test case for SAS, but did not notice that we could
extract that info from there...
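A sketch of the extended matching, assuming a line-wise parse of
smartctl output (variable names are illustrative):

    # NVMe reports 'Percentage Used', SAS reports the longer
    # 'Percentage used endurance indicator' - accept both
    if ($line =~ m/Percentage used(?: endurance indicator)?:\s*(\d+)%/i) {
        $wearout = 100 - $1; # report remaining endurance, not consumed
    }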
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
copied from test 'sas' with rotational set to 0; the result then has
the type 'ssd', rpm: 0, and health: 'OK'.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
In a very early version I wanted to parse the date from the backup
name, and when switching to using the ctime and localtime() instead,
I forgot to update the usage of strftime.
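In essence, the fix is to hand strftime() the broken-down time list
from localtime() (the format string here is only illustrative):

    use POSIX qw(strftime);

    # strftime() expects the (sec, min, hour, ...) list as returned by
    # localtime() in list context, not a string parsed from a name
    my $text = strftime("%F %H:%M", localtime($ctime));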
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This needs to happen in a separate loop, because some time intervals are not
subsets of others, i.e. weeks and months. Previously, with a daily backup
schedule, having:
* a backup on Sun, 06 Dec 2020 kept by keep-daily
* a backup on Sun, 29 Nov 2020 kept by keep-weekly
would lead to the backup on Mon, 30 Nov 2020 being selected for keep-monthly,
because the iteration did not yet reach the backup on Sun, 29 Nov 2020 that
would mark November as being covered.
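A heavily simplified sketch of the corrected flow; helper names and
the data layout are assumptions:

    for my $interval (qw(daily weekly monthly yearly)) {
        my $covered = {};
        # separate loop: periods of backups already kept for earlier
        # intervals count as covered, even if those backups lie
        # further back in time than candidates for this interval
        $covered->{ interval_id($interval, $_->{ctime}) } = 1
            for grep { $_->{keep} } @backups;
        # only now select the remaining backups for this interval
        for my $backup (sort { $b->{ctime} <=> $a->{ctime} } @backups) {
            last if $opts->{"keep-$interval"} <= 0;
            my $id = interval_id($interval, $backup->{ctime});
            next if $backup->{keep} || $covered->{$id};
            $backup->{keep} = 1;
            $covered->{$id} = 1;
            $opts->{"keep-$interval"}--;
        }
    }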
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
reuse the implementation from DirPlugin by directing the call to it,
but with the actual $class. This should stay stable, as we provide an
ABI and always try to call helpers through $class.
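The delegation pattern, roughly (method name used for illustration):

    sub get_volume_notes {
        my $class = shift;
        # call DirPlugin's implementation, but keep the actual plugin
        # class so any helpers inside are resolved via $class
        return PVE::Storage::DirPlugin::get_volume_notes($class, @_);
    }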
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Previous to this, we did not call the plugin's update_volume_notes at
all in the case where a user deleted the textarea, which results in
passing a falsy value ('').
Also adapt the currently sole implementation to delete the notes field
in the undef or '' value case. This can be done safely, as we default
to returning an empty string if no notes file exists.
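A sketch of the adapted implementation (the notes path helper is
hypothetical):

    sub update_volume_notes {
        my ($class, $scfg, $storeid, $volname, $notes, $timeout) = @_;
        my $path = notes_path($scfg, $volname); # hypothetical helper
        if (defined($notes) && $notes ne '') {
            PVE::Tools::file_set_contents($path, $notes);
        } else {
            # undef or '' means: drop the notes; a missing file simply
            # reads back as an empty string later on
            unlink $path;
        }
        return;
    }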
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
mostly re-ordering to improve statement grouping and to avoid the
need for an intermediate variable
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Add the missing pieces allowing pve-manager to just point the
/nodes/<node>/scan API directory at this module, dropping its
duplicated copy.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
we have a 1:1 copy of that code in pve-manager's PVE::API2::Scan,
which we can avoid by using a common module for the pvesm CLI and the
API.
This is the first basic step of dropping the code duplication in
pve-manager.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
improves UX of on_update and on_add hooks *a lot*.
This is a bit more expensive than the TCP ping, or even just an
unauthenticated ping, but not as bad as a full datastore status - as
this only reads the datastore config file (which is normally in page
cache anyway).
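Roughly, the check boils down to an authenticated API request that
only touches the datastore configuration; the client setup and the
exact endpoint below are assumptions:

    # hypothetical connection helper; dies if the credentials are bad
    # or the server is unreachable
    my $conn = pbs_api_connect($scfg, $password);
    my $datastores = $conn->get('/api2/json/config/datastore', {});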
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
it is flexible enough to easily do so, and should do well until we
actually have cheap native bindings (e.g., through Wolfgang's Rust
perlmod magic).
Make it a private helper, we do *not* want to expose it directly for
now.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
to avoid returning something unexpected. Finish what
afeda18256 already started for all the other
plugins. At least for ZFS's on_add_hook this is necessary (adding a ZFS storage
currently fails as reported here [0]), but it cannot hurt
in the other places either as the only hooks we expect to return something
currently are PBS's on_add_hook and on_update_hook.
[0]: https://forum.proxmox.com/threads/gui-add-zfs-storage-verification-failed-400-config-type-check-object-failed.79734/
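The pattern, in short: end each hook with a bare return, so callers
never see the value of the last statement by accident.

    sub on_add_hook {
        my ($class, $storeid, $scfg, %param) = @_;
        # ... plugin-specific setup ...
        return; # explicit: return nothing instead of the last expression
    }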
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
it could be argued that this has some security implications and that
deletion would be safer, but key deletion is a pretty hairy thing.
This should be documented, and people should just use delete instead
of autogen if they want to "destroy" a key.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
and add the appropriate API calls to set and get the comment.
We need to bump APIVER for this and can bump APIAGE, since we only
use it for this new call, which can work with the default
implementation.
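Illustrative sketch of the version bump (the actual numbers depend on
the release):

    our $APIVER = 8; # bumped: the new get/set calls exist now
    our $APIAGE = 7; # bumped too: plugins written against an older
                     # APIVER keep working, since the new call has a
                     # default implementation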
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
'file_size_info' only works for directory-based storages, while
'volume_size_info' should work for all of them.
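Sketch of the call-site change, using the usual top-level helper:

    # dispatches through the storage plugin instead of stat()ing a
    # file path, so it also works for e.g. LVM or ZFS volumes
    my $size = PVE::Storage::volume_size_info($cfg, $volid, $timeout);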
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
If there are already prune options configured, simply delete the maxfiles
setting. Having both set is invalid from vzdump's perspective anyway, and
any backup job on such a storage failed, meaning a user would've noticed.
If there are no prune options, translate the maxfiles value to keep-last,
except for maxfiles being zero (=unlimited), in which case we use keep-all.
If both are not set, don't set anything, so:
1. Storages don't suddenly have retention options set.
2. People relying on vzdump defaults can still use those.
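A condensed sketch of that decision logic (config access is
simplified):

    if (defined(my $maxfiles = delete $scfg->{maxfiles})) {
        if (!defined($scfg->{'prune-backups'})) {
            # 0 meant unlimited, any other value becomes keep-last
            $scfg->{'prune-backups'} = $maxfiles > 0
                ? { 'keep-last' => $maxfiles }
                : { 'keep-all' => 1 };
        }
        # otherwise: the prune options win, maxfiles is simply dropped
    }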
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
useful to have an alternative to the old maxfiles = 0. There has to
be a way for vzdump to distinguish between:
1. use the /etc/vzdump.conf default (when no options are configured for the storage)
2. use no limit (when keep-all=1)
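For example, a directory storage with no limit could then look like
this in storage.cfg (values illustrative):

    dir: backup
        path /mnt/backup
        content backup
        prune-backups keep-all=1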
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>