while seldom needed, we had a few cases recently where this would
have saved the user and us a roundtrip.
adding it to the storage section, to be close to the `findmnt` output
(which usually tells us whether the system is booted in EFI or legacy
mode)
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Removes the option to select the node on which to create the first
monitor in the configuration / initialization step and always sets it
to the current node.
This prevents a user from selecting another node on which the Ceph
packages have not yet been installed. If a user did that, they would
get an error, but the Ceph config file would already have been written.
If the user then does not select a valid node to create the first mon
but aborts the wizard, they are greeted with a rados_connect error,
because the config file exists but does not contain any mon info needed
to connect to the Ceph cluster.
Creating a mon manually will remedy such a situation, but especially
for new users this behavior is not ideal and confusing.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
by
* showing the (optional) name in front of the type
* making the 'available' column a bit narrower
* enabling 'cellWrap' for the description
* making the dropdown a bit wider (so all the information can fit)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
since the jobs are configured cluster-wide in pmxcfs, a user can use
any node to update their config. for some properties (schedule/enabled)
we need to update the last runtime in the state file, but that file is
sadly only node-local.
to also update the state file on the other nodes, we introduce a new
'detect_changed_runtime_props' function that, each round of the
scheduler, checks the relevant properties from the config and saves
them to the statefile if they changed.
this way, we can detect changes in those properties and update the last
runtime too. the only situation where we don't detect a config change
is when the user changes back to the previous configuration between
iterations. this can be ignored though, since the job would not be
scheduled then anyway.
in 'synchronize_job_states_with_config' we switch from reading the
jobstate unconditionally to checking for the existence of the statefile
(which is the only condition that can return undef anyway), so that we
don't read the file multiple times each round.
Fixes: 0c8d7468 ("fix #4053: don't run vzdump jobs when they change from
disabled->enabled")
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
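To illustrate the scheduler change above, a minimal Perl sketch; the
property list, state-file layout and helper names are assumptions for
illustration, not the actual PVE::Jobs code:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use JSON qw(decode_json encode_json);

    # properties whose change should also be reflected in the node-local state
    my @relevant_props = qw(schedule enabled);

    sub read_state {
        my ($path) = @_;
        return {} if !-e $path;    # no state file yet
        open(my $fh, '<', $path) or die "cannot read $path: $!\n";
        my $raw = do { local $/; <$fh> };
        return length($raw // '') ? decode_json($raw) : {};
    }

    sub write_state {
        my ($path, $state) = @_;
        open(my $fh, '>', $path) or die "cannot write $path: $!\n";
        print $fh encode_json($state);
        close($fh) or die "cannot close $path: $!\n";
    }

    # called each scheduler round for every job
    sub detect_changed_runtime_props {
        my ($jobcfg, $statefile) = @_;

        my $state = read_state($statefile);
        my $changed = 0;

        for my $prop (@relevant_props) {
            my $new = $jobcfg->{$prop} // '';
            my $old = $state->{$prop} // '';
            next if $new eq $old;
            $state->{$prop} = $jobcfg->{$prop};
            $changed = 1;
        }

        if ($changed) {
            # a changed schedule/enabled flag was applied on some node, so
            # also refresh the last runtime in the local state file
            $state->{last_run} = time();
            write_state($statefile, $state);
        }

        return $changed;
    }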
by extracting the JSON-encoded-string schema and dumping it into the
verbose description, it at least shows up in the API viewer.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
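A rough illustration of the approach above, assuming a hypothetical
string parameter whose content is a JSON encoding of an inner schema;
the key names are made up, only the idea of dumping the inner schema
into the verbose description matches the commit:

    use strict;
    use warnings;
    use JSON qw(to_json);

    # inner schema that is normally hidden inside a JSON-encoded string parameter
    my $inner_schema = {
        type       => 'object',
        properties => {
            key   => { type => 'string',  description => 'The option name.' },
            value => { type => 'integer', description => 'The option value.' },
        },
    };

    # the outer parameter stays a plain string, but we make the inner structure
    # visible by dumping it into the verbose description
    my $param = {
        type        => 'string',
        description => 'JSON-encoded options.',
    };

    $param->{verbose_description} = $param->{description}
        . "\nThe string is a JSON encoding of the following schema:\n"
        . to_json($inner_schema, { pretty => 1, canonical => 1 });

    print $param->{verbose_description}, "\n";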
since this was missing a proper return type definition, the api viewer
couldn't display the endpoint (`retinfs.items` was undefined). also,
the `pvesh` command would complain that it cannot properly format the
return type, because the variable `$item_type` in `CLIFormatter.pm`
was not defined.
Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>
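For context, a hedged sketch of what an explicit return type definition
looks like; the field names here are invented, only the presence of an
'items' schema matters:

    use strict;
    use warnings;

    # hypothetical 'returns' definition for an endpoint that lists exports;
    # with 'items' defined, the API viewer can render the result columns and
    # a CLI formatter can derive the item type from it
    my $returns = {
        type  => 'array',
        items => {
            type       => 'object',
            properties => {
                path    => { type => 'string', description => 'The exported path.' },
                options => { type => 'string', description => 'Export options.' },
            },
        },
    };

    # without the 'items' key above, a viewer or formatter looking it up
    # would only see undef and fail to render the endpoint
    print "item type: ", $returns->{items}->{type}, "\n";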
Commit d8cd8e8cf9795dc9c2462a67e9ef89ad31759796 introduced a
regression where only stale replicated volumes with an older
timestamp would be cleaned up. This meant that after removing a volume
from the guest config, it would only be cleaned up the second time the
replication ran afterwards, and the volume could even become
completely orphaned if the relevant storage was no longer used by the
job.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
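A hypothetical sketch of the cleanup selection this is about - volume
names, parameters and the helper are illustrative, not the actual
replication code:

    use strict;
    use warnings;

    # $local_volumes - replicated volumes found locally, mapped to their timestamp
    # $wanted        - volumes still referenced by the guest config/job
    # $last_sync     - timestamp the current replication run is based on
    sub volumes_to_clean_up {
        my ($local_volumes, $wanted, $last_sync) = @_;

        my @cleanup;
        for my $volid (sort keys %$local_volumes) {
            # a volume no longer part of the job must be removed even if its
            # timestamp is not older than $last_sync - filtering by
            # "timestamp < $last_sync" alone keeps it around for one extra run
            if (!$wanted->{$volid} || $local_volumes->{$volid} < $last_sync) {
                push @cleanup, $volid;
            }
        }
        return \@cleanup;
    }

    my $to_remove = volumes_to_clean_up(
        { 'local-zfs:vm-100-disk-0' => 100, 'local-zfs:vm-100-disk-1' => 100 },
        { 'local-zfs:vm-100-disk-0' => 1 },
        100,
    );
    print "cleanup: @$to_remove\n";    # disk-1 was removed from the config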
while dropping the instance where the local variable was unused.
prepare() was changed a while ago to return all local snapshots.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
as, contrary to the property name, this is actually only the rather
irrelevant part (auth-wise) of the whole relying party; the ID is the
other part, and that one actually matters.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Make origin optional, as it actually is, and it's not covered by the
challenge anyway.
Also, it is not the thing we name rp but the ID that is important for
the validity of existing webauthn records; this error comes from the
confusing use of the same-named thing in different ways by browsers,
us and the webauthn rust crate...
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
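To make the rp/ID/origin distinction more concrete, an illustrative
Perl sketch of such a settings schema; the property semantics follow
the commit, the exact schema shape is an assumption:

    use strict;
    use warnings;

    # 'rp' is just the display name of the relying party, 'id' is the part
    # that existing credential records are bound to, and 'origin' is not
    # covered by the challenge, so it can be optional (e.g. derived from
    # the host the UI is served from)
    my $webauthn_schema = {
        rp     => { type => 'string', description => 'Relying party display name.' },
        id     => { type => 'string', description => 'Relying party ID (domain); must stay stable.' },
        origin => {
            type        => 'string',
            optional    => 1,
            description => 'Site origin; defaults to the host serving the UI.',
        },
    };

    for my $key (sort keys %$webauthn_schema) {
        my $opt = $webauthn_schema->{$key}{optional} ? 'optional' : 'required';
        printf "%-6s (%s): %s\n", $key, $opt, $webauthn_schema->{$key}{description};
    }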
The ceph-disk service seems to have been removed with Octopus (v15)
and we did not yet have the equivalent for ceph-volume, which could
lead to some startup issues in cases where the pve-cluster service did
not start fast enough.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
It's not clear to users that the "VM data" includes mount point
volumes, even those that are not marked for backup. This is
different from VM restore, where volumes attached at drives not
present in the backup will be kept around as unused volumes.
Several (presumably newer) users got tripped up by this over the
years, the latest report being [0]. The long-term plan is to make the
restore dialog more flexible, so that actions can be selected for
disks individually, but that will take a bit of time. In the meantime,
make the warning more explicit.
[0]: https://forum.proxmox.com/threads/111760/#post-482045
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
when installing non-quincy versions, 'ceph-volume' is not contained in
the respective repositories and, thus, the install process would fail.
Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>
[ T: reworded commit subject ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
It's escaped in the API result and will be re-escaped upon submit,
leading to confusion as reported in the community forum:
https://forum.proxmox.com/threads/110747/post-480507
Fixes: c4dca88b ("ui: manual backup: also set notes-template default")
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
so that users can configure how to handle missed job runs.
move the vmgrid inside the ipanel in 'columnB', so that the
advanced items show below the vmgrid (not above)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
by updating the timestamp in the job state when enabled changes from
0 -> 1. We do it this way in PBS too, for example, and it is the more
sensible behaviour.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
like systemd-timers' 'Persistent' option, so that the user can
configure it not to be run after powering up when it was previously
missed.
this reverses the default behaviour so that missed jobs are not run
after pvescheduler was started, since most of the time that's not the
desired behaviour.
since we don't use it for updated schedules anymore, rename
'updated_job_schedule' to 'update_last_runtime'
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
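A minimal sketch of the intended behaviour above, assuming a
'repeat-missed' style flag on the job config; the helper and its
parameters are hypothetical:

    use strict;
    use warnings;

    # decide whether a run whose scheduled time lies before the scheduler was
    # (re)started should still be executed - similar to systemd-timers'
    # Persistent= setting, with "skip missed runs" as the default
    sub should_run_missed_job {
        my ($jobcfg, $scheduled_time, $scheduler_start) = @_;
        return 1 if $scheduled_time >= $scheduler_start;    # not missed, run normally
        return $jobcfg->{'repeat-missed'} ? 1 : 0;          # catch up only if opted in
    }

    my $job = { id => 'backup-daily', 'repeat-missed' => 0 };
    my $missed_at = time() - 3600;    # was due an hour ago, before the daemon started
    print should_run_missed_job($job, $missed_at, time())
        ? "running missed job\n"
        : "skipping missed run, waiting for the next schedule\n";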
which can happen when failing to obtain the guest's migration lock.
This led to a lot of mails being sent during migration (the timeout for
obtaining the lock is only 2 seconds and we run it in a loop).
One could argue that failing to obtain the lock should increase the
fail count, but without the lock, the job state should not be touched,
and even the first three mails upon migration could be considered spam.
Fixes: fa4bb659 ("replication: sent always mail for first three tries and move helper")
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
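A hypothetical sketch of the error handling described above; the error
pattern and state layout are assumptions, only the "do not touch the
job state on a lock failure" logic mirrors the commit:

    use strict;
    use warnings;

    # a failure to get the guest's migration lock aborts the run without
    # touching the job state, so the fail counter does not go up and no
    # "first three failures" mail is sent
    sub handle_replication_error {
        my ($state, $err, $send_mail) = @_;

        if ($err =~ /can't lock file/i) {
            warn "skipping replication run: $err\n";
            return;    # job state stays untouched
        }

        $state->{fail_count}++;
        # only the first few real failures should trigger a notification
        $send_mail->($err) if $state->{fail_count} <= 3;
    }

    my $state = { fail_count => 0 };
    handle_replication_error($state, "can't lock file - got timeout",
        sub { print "mail: $_[0]\n" });
    print "fail count: $state->{fail_count}\n";    # still 0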
storage_check_enabled() already dies with an appropriate error
message, so we don't have to handle it here.
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
By not auto-filling the Ceph public network, we can avoid accidental
clicks on 'Next' that would cause the first Mon to be created with a
potentially wrong network. While that is fixable, it is tedious and
can easily be avoided by making the user always select the network to
use.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Tested-by: Stefan Hrdlicka <s.hrdlicka@proxmox.com>
[ T: adapted commit subject to be more specific and match our common
style ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Because $mon->{addr} might come with a port attached (affects monitors
created with PVE 5.4, as reported in the community forum [0]), or might
even be a hostname (according to the code in Ceph/Services.pm),
although the latter shouldn't happen for configurations created by
PVE.
[0]: https://forum.proxmox.com/threads/105904/
Fixes: 9e989449 ("api: ceph: mon: fix handling of IPv6 addresses in assert_mon_prerequisites")
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
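An illustrative helper for the kind of normalization this needs -
stripping an attached port while leaving plain IPv6 addresses and
hostnames intact (not the actual PVE code):

    use strict;
    use warnings;

    # normalize a monitor address that may carry a port, may be a bracketed
    # IPv6 address, or may even be a hostname
    sub strip_port_from_addr {
        my ($addr) = @_;

        # [2001:db8::1]:6789 -> 2001:db8::1
        return $1 if $addr =~ m/^\[([^\]]+)\](?::\d+)?$/;

        # 192.0.2.1:6789 or ceph-node1:6789 -> strip the single trailing :port;
        # a plain IPv6 address has more than one ':' and is left untouched
        return $1 if $addr =~ m/^([^:]+):\d+$/;

        return $addr;
    }

    for my $addr ('192.0.2.1:6789', '[2001:db8::1]:6789', '2001:db8::1', 'ceph-node1') {
        printf "%-20s -> %s\n", $addr, strip_port_from_addr($addr);
    }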