when installing non-quincy versions, 'ceph-volume' is not contained in
the respective repositories and, thus, the install process would fail.
Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>
[ T: reworded commit subject ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
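A minimal sketch of the resulting package selection (variable name and
package list are assumptions, not the actual code):

    my @packages = qw(ceph ceph-common ceph-fuse ceph-mds);
    # 'ceph-volume' is only shipped as a separate package in the
    # quincy repositories, so only request it there
    push @packages, 'ceph-volume' if $cephver eq 'quincy';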
by updating the timestamp in the job state when 'enabled' changes
from 0 -> 1. We do it this way in PBS too, for example, and it is the
more sensible behaviour.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
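A minimal sketch of the idea (config and state field names assumed):

    # treat enabling a job like a fresh start: bump the last-run
    # timestamp when 'enabled' flips from 0 to 1, so runs missed while
    # the job was disabled don't fire immediately
    if (!$old_cfg->{enabled} && $new_cfg->{enabled}) {
        $state->{last_run} = time();
    }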
like systemd-timers' 'Persistent' setting, so that the user can
configure it to not run after powering up when it was previously
missed.
This reverses the default behaviour: missed jobs are no longer run
after pvescheduler starts, since most of the time that's not the
desired behaviour.
Since we don't use it for updated schedules anymore, rename
'updated_job_schedule' to 'update_last_runtime'.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
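Inside the scheduler's job loop, the check could look roughly like
this (option and variable names assumed):

    # a run missed e.g. while powered off is skipped unless the job
    # opted in via 'repeat-missed'; otherwise wait for the next
    # regular occurrence
    if ($last_scheduled < $boot_time && !$cfg->{'repeat-missed'}) {
        next;
    }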
which can happen when failing to obtain the guest's migration lock.
This led to a lot of mails being sent during migration (the timeout
for obtaining the lock is only 2 seconds and we run it in a loop).
One could argue that failing to obtain the lock should increase the
fail count, but without the lock, the job state should not be touched,
and even the first three mails upon migration could be considered
spam.
Fixes: fa4bb659 ("replication: sent always mail for first three tries and move helper")
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
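The intended flow, roughly (helper name and semantics assumed):

    # without the lock, return before touching the job state at all,
    # so neither the fail count nor the mail logic is triggered
    my $locked = eval { guest_migration_lock($vmid, 2); 1 };
    return if !$locked;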
storage_check_enabled() already dies with an appropriate error
message, so we don't have to handle it here.
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
Because $mon->{addr} might come with a port attached (affects monitors
created with PVE 5.4 as reported in the community forum [0]), or even
be a hostname (according to the code in Ceph/Services.pm). Although
the latter shouldn't happen for configurations created by PVE.
[0]: https://forum.proxmox.com/threads/105904/
Fixes: 9e989449 ("api: ceph: mon: fix handling of IPv6 addresses in assert_mon_prerequisites")
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
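One way to normalize the address before the IP checks (the exact
patterns are an assumption):

    my $addr = $mon->{addr};
    if ($addr =~ /^\[(.+)\](?::\d+)?$/) {
        $addr = $1; # bracketed IPv6, possibly with a port
    } elsif ($addr =~ /^([^:]+)(?::\d+)?$/) {
        $addr = $1; # IPv4 or hostname, possibly with a port
    } # else: plain IPv6 without a port, use as-is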
to avoid warnings like:
    parse error in '/etc/vzdump.conf' - 'storage': missing property -
    'notes-template' requires this property
when there is no default configured for the required property.
In new(), the defaults are mixed in with the regular CLI/API
parameters, so re-check if the required property is set. If it's not,
the defaults do not apply to the current run, and can be dropped.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
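A minimal sketch of the re-check (variable names assumed):

    # defaults were already merged into the options by new(); if the
    # required 'storage' option is still unset, the 'notes-template'
    # default cannot apply to this run
    delete $defaults->{'notes-template'} if !defined($opts->{storage});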
For VMs, $task->{hostname} might be undef and when running on a
stand-alone node, there is no cluster name.
Reported-by: Marco Gabriel <mgabriel@inett.de>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
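The fallbacks might look like this (field names and subject format
are assumptions):

    use PVE::Cluster;

    my $hostname = $task->{hostname} // "VM $task->{vmid}";
    my $clinfo = eval { PVE::Cluster::get_clinfo() };
    my $clustername = $clinfo ? $clinfo->{cluster}->{name} : undef;
    my $subject = defined($clustername)
        ? "$clustername: $hostname" : $hostname;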
instead of just checking for a newline, do the full validation right
away. Also do the check at the beginning of generate_notes() for
consistency, and remove the check after expansion to avoid failing
late on things like '{{cl{{node}}er}}' (which can even expand to a
valid variable, making the error even more confusing).
Co-developed-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
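The full up-front check can be sketched like this (variable set as in
the vzdump documentation, message wording assumed):

    # strip all known variables; any leftover braces mean a malformed
    # template like '{{cl{{node}}er}}', so fail before any expansion
    my $tmp = $notes_template;
    $tmp =~ s/\{\{(?:cluster|guestname|node|vmid)\}\}//g;
    die "invalid notes-template: unexpected '{{' or '}}'\n"
        if $tmp =~ /\{\{|\}\}/;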
so that '{{foo}}{{bar}}' is not detected as an unknown variable named
'foo}}{{bar', but as 'foo' (and 'bar').
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
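For comparison, plain Perl pattern semantics:

    '{{foo}}{{bar}}' =~ /\{\{(.+)\}\}/;  # greedy: $1 is 'foo}}{{bar'
    '{{foo}}{{bar}}' =~ /\{\{(.+?)\}\}/; # non-greedy: $1 is 'foo'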
which is caused by the quoting operators \Q...\E. The actual intention
was to avoid such surprises.
Fixes: e01438a7 ("partially close#438: vzdump: support setting notes-template")
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Mention which optional parameters will be used for the replicated
metadata pool but won't have an effect on the erasure coded data pool.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
The crush rule is an optional parameter which can be used for the
metadata pool, but the erasure coded data pool will always get its own
crush rule. Therefore this parameter cannot be adapted.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
When a schedule only has a limited number of runs (e.g.
'2022-10-01 8:00/30'), $next will be undef after the last run. Exit
early in that case.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
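A sketch of the early exit (usage of pve-common's CalendarEvent
assumed):

    use PVE::CalendarEvent;

    my $next = PVE::CalendarEvent::compute_next_event($calspec, $last);
    return if !defined($next); # bounded schedule, no further runs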
The osd dump already contains the pool type in numerical format.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
Erasure coded pools cannot change certain settings after creation.
Trying to set them will cause errors on Ceph's side.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
The behavior of always adding the storage config was lost in commit
23c407e. But it is more sensible to make it a default that can be
changed if needed.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
moved to a format string 'erasure-coded', which also allows dropping
most of the parameter existence checks, as we can set the correct
optionality there. It also avoids bloating the API too much for just
this.
Reuse the $rados connection more often to avoid too much
overhead/lingering sockets (the rados connection stays around in the
background to allow efficient reuse).
really should be three separate commits, but too intertwined and too
late for me to care tbh.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
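Such a format string is registered roughly like this (the property
details here are an assumption, not the exact definition):

    use PVE::JSONSchema;

    PVE::JSONSchema::register_format('erasure-coded', {
        k => { type => 'integer', minimum => 2 },
        m => { type => 'integer', minimum => 1 },
        'failure-domain' => { type => 'string', optional => 1 },
        'crush-rule' => { type => 'string', optional => 1 },
    });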
To use erasure coded (EC) pools for RBD storages, we need two pools. One
regular replicated pool that will hold the RBD omap and other metadata
and the EC pool which will hold the image data.
The coupling happens when an RBD image is created by adding the
--data-pool parameter. This is why we have the 'data-pool' parameter in
the storage configuration.
To follow the already established semantics, we create an 'X-metadata'
and an 'X-data' pool. The storage configuration is always added, as it
is the only thing that links the two together (besides the naming
scheme).
Different pg_num defaults are chosen for the replicated metadata pool as
it will not hold a lot of data.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
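The coupling at image-creation time, sketched (image name and size are
just examples):

    use PVE::Tools;

    # the replicated metadata pool holds omap and other metadata, the
    # EC pool (via --data-pool) holds the actual image data
    PVE::Tools::run_command(['rbd', 'create', "$id-metadata/vm-100-disk-0",
        '--size', '4G', '--data-pool', "$id-data"]);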
When removing a pool, we check against any storage that might have
that pool configured.
We need to check whether that pool is used as a data-pool too.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
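The extended check, sketched (config layout assumed):

    # a pool counts as in use if any storage references it either as
    # 'pool' or as 'data-pool'
    my $in_use = grep {
        ($_->{pool} // '') eq $pool || ($_->{'data-pool'} // '') eq $pool
    } values %{$cfg->{ids}};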
Previously, if the '--script' argument was passed with a non-existent
file, it would state that a non-executable script was the reason for
failure. This adds a check to see if the hook script exists, in order
to provide a more accurate error message.
Also adds an 'Error:' prefix to the 'script not executable' error.
Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
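The two checks, sketched (the exact message wording is an assumption):

    die "Error: hook script '$script' does not exist\n" if ! -f $script;
    die "Error: hook script '$script' is not executable\n" if ! -x $script;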
While vzdump itself wouldn't mind unescaped newlines, the parameter
isn't supposed to contain any, and when used as part of the job
config, it has to be a single line too, so make it consistent.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Commit 89d146f207225bb8ca2e01d7e79000bb37a227d1 introduced permission
checks here that caused all regular bridges to be removed from the
returned list as soon as the SDN package is installed, unless the user
is root@pam or a VNET with the same ID exists.
This is arguably a breaking change, so limit the priv check to
actually defined VNETs for the time being, and add all regular bridges
unconditionally like before.
get_local_vnets already filters by the same privs, so we need to get
the full config to find out which IDs are VNETs and which are not.
once/iff we introduce ACL paths for *all* bridges in the future, we can
limit accordingly here.
CC: Alexandre Derumier <aderumier@odiso.com>
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
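The resulting filtering, roughly (function names and the ACL path are
assumptions):

    use PVE::Network::SDN::Vnets;

    my $vnet_cfg = PVE::Network::SDN::Vnets::config();
    for my $id (sort keys %$bridges) {
        next if !exists $vnet_cfg->{ids}->{$id}; # plain bridge: keep
        delete $bridges->{$id} # defined VNET: keep only with privileges
            if !$rpcenv->check_any($authuser, "/sdn/vnets/$id", $privs, 1);
    }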
Check the number of protected backups early if the protected flag
is set.
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
but rather multiple times, becoming exponentially less frequent.
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
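The resulting condition could look like this (a power-of-two check as
one possible sketch, field name assumed):

    # mail on failure counts 1, 2 and 3, afterwards only on powers of
    # two (4, 8, 16, ...), i.e. exponentially less frequent
    my $notify = $fail_count <= 3 || ($fail_count & ($fail_count - 1)) == 0;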