This commit adds a new module, PVE::PullMetric. This module allows
us to store the status data of various subsystems, including status
data for the most recent pvestatd update loops. Right now, we
store 6 old generations including the most recent values; that gives
70 seconds of stat history (based on a 10-second pvestatd update loop
interval).
This cache allows us to add support for pull-style metric collection
systems, be it Prometheus/OpenMetrics or some custom, JSON-based
metric format.
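As a rough illustration of the idea (a hypothetical sketch, not the
actual PVE::PullMetric API, which is backed by proxmox-shared-cache),
such a cache boils down to a bounded list of timestamped stat snapshots:

    package StatCacheSketch;
    use strict;
    use warnings;

    my $MAX_GENERATIONS = 6;

    sub new {
        my ($class) = @_;
        return bless { generations => [] }, $class;
    }

    # called once per pvestatd update loop with the freshly collected stats
    sub push_generation {
        my ($self, $stats) = @_;
        unshift @{ $self->{generations} }, { time => time(), data => $stats };
        # drop entries beyond the retention window
        splice(@{ $self->{generations} }, $MAX_GENERATIONS)
            if scalar(@{ $self->{generations} }) > $MAX_GENERATIONS;
    }

    # a pull-style collector (e.g. a Prometheus exporter) can then read
    # all retained generations in one request
    sub get_generations {
        my ($self) = @_;
        return @{ $self->{generations} };
    }

    1;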
This patch raises the required lib{proxmox,pve}-perl-rs versions,
since we need the new bindings for proxmox-shared-cache.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
[WB: actually bump *runtime* deps in d/control]
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
We need
"utils: add mechanism to add and override translatable notification
event descriptions in the product specific UIs",
otherwise there is an error in the browser console.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Max Carrara <m.carrara@proxmox.com>
This allows us to access the backup job id in the send_notification
function, where we can set it as metadata for the notification.
The 'job-id' parameter can only be used by 'root@pam' to prevent
abuse. This has the side effect that manually triggered backup jobs
cannot have the 'job-id' parameter at the moment. To mitigate that,
manually triggered backup jobs could be changed so that they
are not performed via a direct API call from the UI, but by requesting
pvescheduler to execute the job in the near future (similar to how
manually triggered replication jobs work).
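For illustration, a hypothetical invocation passing the new parameter
could look like this (the vmid, storage and job id values are examples
only; the id would normally come from the corresponding jobs.cfg entry):

    # allowed for root@pam only
    pvesh create /nodes/$(hostname)/vzdump --vmid 100 --storage local \
        --job-id backup-example-id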
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Max Carrara <m.carrara@proxmox.com>
[ TL: fleece in d/control bump for guest-common now that the version
is known ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
To ensure that the lifting of the bridge name == vmbr\d+ restriction
works correctly and that the new notes view double-click editing
setting can work.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
to make the backup fleecing feature available. The bump for
qemu-server is also required for moving unused disks of VMs.
The bump for libpve-common-perl is required because of pve-common
commit c302a28 ("json schema: add format description for
pve-storage-id standard option"), which is required for API
verification.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
else this can break an upgrade for unrelated reasons (regular debhelper
also constructs the restart invocations like this; it even redirects
output to /dev/null).
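For reference, a debhelper-generated restart snippet looks roughly like
the following (a sketch based on dh_installsystemd's autoscripts; the
unit name is only an example):

    if [ -d /run/systemd/system ]; then
        systemctl --system daemon-reload >/dev/null || true
        deb-systemd-invoke try-restart 'pvedaemon.service' >/dev/null || true
    fi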
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
and guard it to only run on Ceph-using systems (the regular 'inited'
check doesn't work as a guard for this, because it checks for new-style
inits, including the dir existing).
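A guard along these lines (a hedged sketch, not the verbatim postinst
code) only runs the setup when Ceph is actually in use:

    # run the setup only when Ceph is installed and initialized
    if command -v ceph-crash >/dev/null 2>&1 &&
        test -f /etc/pve/ceph.conf && test -d /etc/pve/ceph
    then
        /usr/share/pve-manager/helpers/pve-init-ceph-crash || true
    fi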
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Due to Ceph dropping privileges when running the 'ceph-crash' daemon
[0], it is necessary to allow the daemon to authenticate with its
cluster in a safe manner.
In order to avoid exposing sensitive keyrings or somehow escalating
its privileges again, 'ceph-crash' is therefore provided with its own
keyring in the '/etc/pve/ceph' directory. This directory, due to being
on 'pmxcfs', may be read by members of the 'www-data' group, which
'ceph-crash' is made part of [1].
Expected Configuration
----------------------
1. A keyring file named '/etc/pve/ceph/ceph.client.crash.keyring'
exists
2. A section named 'client.crash' exists in '/etc/pve/ceph.conf'
3. The 'client.crash' section has a key named 'keyring' which
references the keyring file as '/etc/pve/ceph/$cluster.$name.keyring'
4. The 'client.crash' section has *no* key named 'key'
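Put together, the expected '/etc/pve/ceph.conf' excerpt would look like
this (illustrative only; section and paths as described in the points
above):

    [client.crash]
        # note: deliberately no 'key' entry, only the keyring reference
        keyring = /etc/pve/ceph/$cluster.$name.keyring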
New Clusters
------------
The keyring file is created and the conf file is updated after the first
monitor has been created (when calling `pveceph mon create`).
Existing Clusters
-----------------
A new helper script creates and configures the 'client.crash' keyring in
`postinst`, if:
* Ceph is installed
* Ceph is initialized ('/etc/pve/ceph.conf' and '/etc/pve/ceph' exist)
* Connection to RADOS is successful
If the above conditions are met, the helper script ensures that the
existing configuration matches the expected configuration mentioned
above.
The configuration is not changed if it is already as expected.
The helper script may be called again manually if the `postinst` hook
fails. It is installed to '/usr/share/pve-manager/helpers/pve-init-ceph-crash'.
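If the hook did fail, the setup can simply be retried by hand:

    /usr/share/pve-manager/helpers/pve-init-ceph-crash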
Existing `client.crash` Key
---------------------------
If a key named 'client.crash' already exists within the cluster, it is
reused and not regenerated.
[0]: https://github.com/ceph/ceph/pull/48713
[1]: https://git.proxmox.com/?p=ceph.git;a=commitdiff;h=f72c698a55905d93e9a0b7b95674616547deba8a
Signed-off-by: Max Carrara <m.carrara@proxmox.com>
Tested-by: Friedrich Weber <f.weber@proxmox.com>
This commit adds the '/etc/pve/ceph' directory to our overall expected
Ceph configuration.
This directory is meant to store cluster-wide, non-private
configuration files used by Ceph applications and services that are
executed with lower privileges, such as 'ceph-crash.service'.
The existence of the directory is now also checked when verifying
whether Ceph is configured correctly. This makes it easier for our
other tooling to rely on the directory's existence and reduces the
need for otherwise frequent ad-hoc checks.
* For new clusters: `pveceph init` now creates '/etc/pve/ceph' when
called.
* For existing clusters: The 'postinst' hook this commit adds ensures
that '/etc/pve/ceph' is created when updating.
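Conceptually, the added check is as simple as the following
(hypothetical sketch; the actual check lives in pve-manager's Ceph
tooling):

    # fail the 'is Ceph configured?' check if the directory is missing
    die "ceph directory '/etc/pve/ceph' does not exist\n"
        if !-d '/etc/pve/ceph';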
Signed-off-by: Max Carrara <m.carrara@proxmox.com>
Tested-by: Friedrich Weber <f.weber@proxmox.com>
Since LVM 2.03.15, RBD devices are also scanned by default [1]. This
can lead to guest volumes being recognized and displayed on the host
when using KRBD for RBD-backed disks. In order to prevent this, we add
an additional filter to the LVM config to avoid scanning RBD devices.
This also prevents a bug where LVM created a very large number of
archive entries when logical volumes with the same path were visible.
This could happen when two guests with RBD disks had the same LVM
layout, or when a guest and the host had the same layout.
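With the additional filter in place, the relevant part of
'/etc/lvm/lvm.conf' would look roughly like this (a sketch; the exact
value and formatting are managed by the postinst script):

    devices {
        # reject ZFS zvols and RBD devices, so guest-internal volumes
        # are not scanned (and potentially activated) on the host
        global_filter = [ "r|/dev/zd.*|", "r|/dev/rbd.*|" ]
    }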
Previous behavior:
If there is no marker in the LVM conf and global_filter does not
contain '/dev/zd.*', replace the global_filter with our version.
New behavior:
Replace the global_filter iff:
- there is no marker and the global_filter is empty, or
- the global_filter is exactly the old default.
If we don't replace the filter and it has a non-default value, we
print a warning. Additionally, we force this function to run once when
upgrading from older versions.
Previous versions could replace custom global_filters from which the
marker comment and the zvol directive had been removed. The new
behavior is slightly more conservative, but works the same in the
other cases.
[1] 6a431eb242
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>