this encodes the current bad behavior of the maintenance mode getting
lost on an active CRM switch, due to the request node state not being
transferred. This will be fixed in the next commit.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
the change as of now is a no-op, as we only ever switched to
maintenance mode on a shutdown request, and there we exited immediately
if no active service and worker were around anyway.
So this is mostly preparing for a manual maintenance mode without any
pending shutdown.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
useful for re-balancing on start, where we do not want to exclude the
current node as setting the $try_next param does, but also do not want
to favor it as leaving $try_next unset does.
We might want to transform both `try_next` and `best_scored` into a
single `mode` parameter to reduce complexity and make it more explicit
what we want here.
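To illustrate, a minimal sketch of how the two flags map to selection
behavior (the flag names are from this change, the helper itself is
made up):

    # purely illustrative -- not the actual select_service_node() internals
    sub selection_mode {
        my ($try_next, $best_scored) = @_;
        return 'try-next'   if $try_next;    # exclude the current node (e.g., after an error)
        return 'best-score' if $best_scored; # neither exclude nor favor the current node
        return 'keep-current';               # default: favor staying on the current node
    }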
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
We always check for re-starting a service if it's in the started
state, but for those that go from a (request_)stop to the stopped
state it can be useful to explicitly have a separate transition.
The newly introduced `request_start` state can also be used by the CRS
to opt into starting a service on a load-wise better suited node in
the future.
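For illustration, a rough sketch of where the new state sits in the
service state flow (a simplification, not the actual manager FSM code):

    # simplified transition sketch -- illustrative only
    my %next_state = (
        request_stop  => 'stopped',        # as before
        stopped       => 'request_start',  # new: explicit stopped -> start transition
        request_start => 'started',        # a CRS may pick a better suited node here
    );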
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
traverse the usual error counting mechanisms, as then the
select_service_node helper either picks up the right node and the
service starts there, or it can trigger fencing of that node.
Note, in practice this can normally only happen if the admin manually
tampered with the node cluster state, but as we only select the safe
nodes from the configured groups, we should be safe in any case.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
so that we don't need to specify all usage stats explicitly for
bigger tests.
Note, we explicitly use two digits for memory, as with just one a lot
of services are exactly the same, which gives us flaky tests due to
rounding, or some flakiness in the Rust code - so this is a bit of a
stopgap for that too and should be reduced to a single digit once that
is fixed.
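The gist of the default handling, as a hedged sketch (the variable
names are made up for illustration):

    # merge explicit per-service stats over shared defaults; illustrative only
    my %default_stats  = ( maxcpu => 4, maxmem => 20 );   # two digits for memory, see above
    my $explicit_stats = { maxmem => 31 };                # per-service override, example
    my $stats = { %default_stats, %$explicit_stats };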
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
E.g., pve-ha-manager is our current HA manager, so talking about the
"current HA stack" being EOL without mentioning that the `rgmanager`
one was actually meant got taken up the wrong way by some potential
users.
Correct that and a few other things, but as there is definitely still
stuff that is out-of-date, or will be in a few months, mention that
this is an older readme and refer to the HA reference docs at the
top.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Pretty safe to do as we recompute everything per round anyway (and
much more often on top of that, but that's another topic).
Actually I'd argue that it's safer, as this way a user doesn't need to
actively restart the manager, which causes much more churn and
watchdog interaction than checking periodically and updating it
internally. Plus, a lot of admins won't expect that they need to
restart the currently active master, and thus they'll complain that
their recently made change to the CRS config had no effect / that the
CRS doesn't work at all.
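A minimal sketch of such a periodic check (method and field names are
assumptions, not the actual manager code):

    # names assumed; re-check the CRS settings each manager round so a
    # datacenter.cfg change takes effect without restarting the master
    sub update_crs_mode {
        my ($self, $haenv) = @_;
        my $new_mode = $haenv->get_datacenter_settings()->{crs}->{ha} // 'basic';
        if ($new_mode ne $self->{crs_mode}) {
            $haenv->log('info', "switching CRS mode from '$self->{crs_mode}' to '$new_mode'");
            $self->{crs_mode} = $new_mode;
        }
    }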
We should codify such a config change in a test though.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
to hint to a potential "code optimizer" that it cannot simply be moved
up above the scheduling selection.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
if something goes wrong with the TOPSIS scoring. Not expected to
happen, but it's rather cheap to be on the safe side.
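In spirit, the guard looks like this (simplified, with the argument
list trimmed; only the eval-and-log pattern is the point):

    # wrap the scoring call; on failure, log and let the caller fall
    # back to the basic, non-scored selection -- illustrative only
    sub try_score_nodes {
        my ($haenv, $usage, $sid, $node) = @_;
        my $scores = eval { $usage->score_nodes_to_start_service($sid, $node) };
        $haenv->log('warning', "scoring nodes failed - $@") if $@;
        return $scores;    # undef on failure
    }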
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
With the Usage::Static plugin, scoring is not as cheap anymore and
select_service_node() is called for each running service.
This should cover most calls of select_service_node().
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
With the Usage::Static plugin, scoring is not as cheap anymore and
select_service_node() is called for each running service.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Note that recompute_online_node_usage() becomes much slower when the
'static' resource scheduler mode is used. Tested it with ~300 HA
services (minimal containers) running on my virtual test cluster.
Timings with 'basic' mode were between 0.0004 - 0.001 seconds
Timings with 'static' mode were between 0.007 - 0.012 seconds
Combined with the fact that recompute_online_node_usage() is currently
called very often, this can lead to a lot of delay during recovery
situations with hundreds of services, and with low thousands of
services overall and generous estimates, even run into the watchdog
timer.
Ideas to remedy this are using PVE::Cluster's
get_guest_config_properties() instead of load_config() and/or
optimizing how often recompute_online_node_usage() is called.
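The first idea would look roughly like this (the exact call shape and
property list here are assumptions):

    use PVE::Cluster;

    # one cluster-wide lookup of only the needed properties, instead of
    # a full load_config() per service
    my $props = PVE::Cluster::get_guest_config_properties(['cores', 'cpulimit', 'memory']);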
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The method will be extended to include other HA-relevant settings from
datacenter.cfg.
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
for calculating node usage of services based upon static CPU and
memory configuration as well as scoring the nodes with that
information to decide where to start a new or recovered service.
For getting the service stats, it's necessary to also consider the
migration target (if present), because the configuration file might
have already moved.
It's necessary to update the cluster filesystem upon stealing the
service to be able to always read the moved config right away when
adding the usage.
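Schematically, the stats lookup then behaves like this (the helper
name is hypothetical, simplified from the actual plugin code):

    # hypothetical helper names; the config may already have moved, so
    # fall back to the migration target when the service node has none
    sub get_service_conf {
        my ($sid, $service_node, $migration_target) = @_;
        my $conf = eval { load_service_config($sid, $service_node) };
        $conf = load_service_config($sid, $migration_target)
            if !$conf && defined($migration_target);
        return $conf;
    }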
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
no functional change is intended.
One test needs adaptation too, because it created its own version of
$online_node_usage.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
In preparation for scheduling based on static information, where the
scoring of nodes depends on information from the service's
VM/CT configuration file (and the $sid is required to query that).
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
in preparation to also support static resource scheduling via another
such Usage plugin.
The interface is designed in anticipation of the Usage::Static plugin;
the Usage::Basic plugin doesn't require all parameters.
In Usage::Static, the $haenv will be necessary for logging and getting
the static node stats. add_service_usage_to_node() and
score_nodes_to_start_service() take the sid and service node, and the
former also takes the optional migration target (during a migration
it's not clear whether the config file has already been moved or not),
to be able to get the static service stats.
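The resulting call shapes, as described above (the argument order is
an assumption based on this message):

    # $nodename: node to account the usage on; service node and the
    # optional migration target let Usage::Static find the service config
    $usage->add_service_usage_to_node($nodename, $sid, $service_node, $migration_target);
    my $scores = $usage->score_nodes_to_start_service($sid, $service_node);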
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
to be used for static resource scheduling.
In the containers' vmstatus(), the 'cores' option takes precedence
over the 'cpulimit' one, but it felt more accurate to prefer
'cpulimit' here.
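That preference boils down to something like the following (the option
names are the real CT config keys, the surrounding code is
illustrative):

    my $conf = { cores => 2, cpulimit => '1.5' };          # example CT config values
    my $cpu  = $conf->{cpulimit} || $conf->{cores} || 1;   # prefer 'cpulimit', as above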
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
to be used for static resource scheduling. In the simulation
environment, the information can be added in hardware_status.
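A hypothetical hardware_status entry with such stats could look like
this, expressed as a Perl structure for readability (the key names for
the static stats are assumptions, not the documented simulator
format):

    my $hardware_status = {
        node1 => { power => 'on', network => 'on', cpus => 8, memory => 16 * 1024**3 },
    };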
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
only count up target selection if that node is already in the online
node usage list, to avoid considering an offline node online just
because it's the target of some command.
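Sketched out, the guard is roughly (method names assumed, not the
actual code; the variables come from the surrounding manager context):

    # only account the usage on the target if that node is known online
    $online_node_usage->add_service_usage_to_node($target, $sid, $sd->{node})
        if $online_node_usage->contains_node($target);    # $sd: service data, assumed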
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>