While it would be correct to have them tracked here, we cannot do this
at the moment, as those two also depend on pve-ha-manager, and the dpkg
packaged in stretch has an issue with such cyclic dependencies; its
trigger cycle detection was only fixed for buster[0].
Currently, the issue only shows up under the following conditions:
* an update of pve-ha-manager plus either pve-container or qemu-server
* but _no_ update of pve-manager in the same upgrade cycle
[0]: 7f43bf5f93
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Add the zsh command completion generation for the ha-manager CLI tools.
This adds the automatic generation of the autocompletion scripts for zsh.
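A minimal sketch of how such a script could be generated at build time,
assuming PVE::CLIHandler offers a generate_zsh_completions() method next
to the existing bash one (the build integration shown is illustrative):

    #!/usr/bin/perl
    # Hedged sketch: print the zsh completion script for the ha-manager CLI.
    # Assumes generate_zsh_completions() exists analogous to
    # generate_bash_completions(); the build would redirect this output
    # into the packaged completion file.
    use strict;
    use warnings;
    use PVE::CLI::ha_manager;

    PVE::CLI::ha_manager->generate_zsh_completions();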
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
service_is_ha_managed returns false if a service is in the resource
configuration but marked as 'ignore', as for the internal stack it is
treated as if it wasn't HA managed at all.
But a user should be able to remove it from the configuration easily
even in this state, without having to set the request state to anything
else first.
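A minimal sketch of the distinction this relies on (config layout and
helper names are illustrative, not the actual PVE::HA::Config code):

    # Hedged sketch: a service with request state 'ignore' is configured,
    # but the internal stack treats it as unmanaged.
    sub service_is_ha_managed {
        my ($conf, $sid) = @_;

        my $scfg = $conf->{ids}->{$sid};
        return 0 if !defined($scfg);                      # not configured at all
        return 0 if ($scfg->{state} // '') eq 'ignore';   # configured, but ignored
        return 1;
    }

    # Removal only needs to know whether the service is configured at all,
    # so it must not be gated on service_is_ha_managed().
    sub service_is_configured {
        my ($conf, $sid) = @_;
        return defined($conf->{ids}->{$sid}) ? 1 : 0;
    }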
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The vm_shutdown parameter forceStop differs in behaviour between VMs
and CTs. For VMs it ensures that a VM gets stopped only after the
timeout passed, if it could not shut down gracefully, while the
container stack always ignores any timeout if forceStop is set and
hard stops the CT immediately.
To achieve the VM behaviour for CTs too, passing the timeout is
enough, as lxc-stop then does the hard stop after the timeout itself.
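A hedged sketch of the resulting call pattern (the plugin object and its
shutdown() signature are assumptions, not the exact resource API):

    # Hedged sketch; method names and signatures are illustrative.
    sub shutdown_service {
        my ($haenv, $plugin, $service_type, $id, $timeout) = @_;

        if ($service_type eq 'ct') {
            # lxc-stop escalates to a hard stop on its own once $timeout
            # expires, so no forceStop flag is passed - it would skip the
            # graceful phase entirely.
            $plugin->shutdown($haenv, $id, $timeout);
        } else {
            # VMs: try a graceful shutdown, hard stop only after $timeout.
            $plugin->shutdown($haenv, $id, $timeout, 1);    # 1 == forceStop
        }
    }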
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
We do not support all of the dlm.conf possibilities, but we also do
not want to die on such "unknown" keys/commands, as an admin should be
able to share this config if it is already used for other purposes,
e.g. lockd, gfs, or such.
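A minimal sketch of the tolerant parsing (the handled commands and the
data layout are illustrative):

    # Hedged sketch; only warn on entries we do not handle ourselves.
    sub parse_fence_config {
        my ($raw) = @_;

        my $config = { devices => {}, connections => [] };

        for my $line (split /\n/, $raw) {
            next if $line =~ /^\s*(#.*)?$/;        # skip blanks and comments
            my ($command, @args) = split ' ', $line;

            if ($command eq 'device') {
                my ($name, $agent, @opts) = @args;
                $config->{devices}->{$name} = { agent => $agent, options => \@opts };
            } elsif ($command eq 'connect') {
                push @{$config->{connections}}, \@args;
            } else {
                # dlm.conf knows more keys (lockd, gfs, ...) than we support;
                # warn instead of dying so a shared config keeps working.
                warn "skipping unsupported dlm.conf entry '$command'\n";
            }
        }
        return $config;
    }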
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
In this package we provide API functions, thus we want to activate
the pve-api-update trigger so that packages like pve-manager get
notified about it. But we also use API functions directly, so we set
up an interest in the pve-api-update trigger. This results in a
lintian error (with the lintian version from buster or newer) which
we can override:
> [...]
> This tag is also triggered if the package has an activate trigger
> for something on which it also declares an interest. The only (but
> rather unlikely) reason to do this is if another package also
> declares an interest and this package needs to activate that other
> package. If the package is using it for this exact purpose, then
> please use a Lintian override to state this.
-- https://lintian.debian.org/tags/repeated-trigger-name.html
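A hedged sketch of how this looks in the packaging (file contents are
illustrative; the trigger name follows the description above):

    # debian/triggers (sketch)
    activate pve-api-update
    interest pve-api-update

    # debian/pve-ha-manager.lintian-overrides (sketch)
    pve-ha-manager: repeated-trigger-name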
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This call was missed in the commit moving it from
PVE::HA::Tools to PVE::HA::Config.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Fixes: 0087839aa530 ("Tools: remove dependency on PVE::Cluster")
Allow an admin to set a datacenter-wide HA policy which can change
the way we handle services on a node shutdown.
There's:
* freeze: always freeze services, independent of the shutdown type
  (reboot, poweroff)
* failover: never freeze services, this means that a service will get
  recovered to another node if possible and if the current node does
  not come back up within the grace period of 1 minute.
* default: this is the current behavior, freeze on reboot but do not
  freeze on poweroff
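For example, the failover behaviour could then be selected datacenter-wide
with something like the following (a sketch; the exact option name and
syntax are assumptions, not confirmed here):

    # /etc/pve/datacenter.cfg (sketch)
    ha: shutdown_policy=failover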
Add two tests: shutdown-policy1, which is based on the reboot1 test
but enforces no freeze with a failover policy, and shutdown-policy2,
which is based on the shutdown1 test but with an explicit freeze
policy. You can compare (diff) each test's log result to the test it
is based on to see what changes.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
use dpkg-buildpackage and debhelper properly, add missing dependencies and
embed used perl modules from libpve-common-perl to make pve-ha-simulator
standalone.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
by moving parse_sid to PVE::HA::Env, with the default implementation in
PVE::HA::Config.
the bash completion methods use PVE::HA::Config (and PVE::Cluster), but
the corresponding use statements are only in PVE::CLI::ha_manager, where the
bash completion is actually used.
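A hedged sketch of the resulting indirection (the delegation through the
environment's backend object is an assumption about the plumbing; module
and function names follow the description above):

    # Hedged sketch; the exact wiring may differ.
    package PVE::HA::Env;

    sub parse_sid {
        my ($self, $sid) = @_;
        # delegate to the active backend (PVE2 in production, Sim in tests)
        return $self->{plug}->parse_sid($sid);
    }

    package PVE::HA::Env::PVE2;

    use PVE::HA::Config;

    sub parse_sid {
        my ($self, $sid) = @_;
        # the default implementation lives in PVE::HA::Config
        return PVE::HA::Config::parse_sid($sid);
    }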
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
to avoid unnecessary dependency on PVE::Cluster in PVE::HA::Tools.
reading the LRM status file was the only instance of reading from the
CFS via this method.
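A hedged sketch of where the read ends up instead (the CFS path and the
helper name are illustrative assumptions):

    # Hedged sketch; path and helper name are assumptions.
    package PVE::HA::Config;

    use PVE::Cluster;

    sub read_lrm_status {
        my ($node) = @_;
        # the CFS access now lives here, next to the other PVE::Cluster
        # users, so PVE::HA::Tools can drop that dependency
        return PVE::Cluster::cfs_read_file("ha/$node/lrm_status");
    }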
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
and use PVE::HA::Groups to parse the config when testing/simulating.
this allows us to drop the dependency on PVE::HA::Config, which would
otherwise pull in a lot of additional dependencies that we don't want
in the simulator.
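A hedged sketch of the direct parsing in the simulator (the file location
is an assumption; PVE::HA::Groups is a SectionConfig, so parse_config()
takes a file name plus the raw content):

    # Hedged sketch; the groups file location is illustrative.
    use PVE::HA::Groups;

    my $groups_fn = 'groups';   # e.g. inside the simulator status directory
    my $raw = '';
    if (-f $groups_fn) {
        open(my $fh, '<', $groups_fn) or die "unable to open '$groups_fn' - $!\n";
        local $/;
        $raw = <$fh>;
        close($fh);
    }

    # parse directly, without pulling in PVE::HA::Config and its dependencies
    my $groups = PVE::HA::Groups->parse_config($groups_fn, $raw);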
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
since we want to test the version from the current working tree, and not
the installed one.
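A minimal sketch of the usual way to do that in a test script (the
relative path depends on where the script lives in the repository):

    # Hedged sketch; adjust the path to the repository layout.
    use lib '..';           # pick up the modules from the working tree first
    use PVE::HA::Config;    # now resolved from the checkout, not the system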
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
We ignored it if the cluster state update failed and happily worked
with an empty state, resulting in strange actions, e.g., the removal
of all (not so) "stale" services or changing the node state of all
nodes but the master to unknown.
Check the update result and, if it failed, either do not become
active, or, if already active, skip the current round with the
knowledge that we only got here because the update failed while our
lock renew worked => the cfs is already back in a working and quorate
state (probably it was just restarted).
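A hedged sketch of the resulting control flow in the manager work loop
(the surrounding structure is illustrative; the environment method is
the one introduced by the follow-up patch below):

    # Hedged sketch; names other than the state update are illustrative.
    sub manage_round {
        my ($self, $haenv) = @_;

        my $startup_ok = $haenv->cluster_update_state();

        if (!$startup_ok) {
            # never become active on top of an empty cluster state
            return if !$self->{active};

            # We are active and our lock renew worked, so the cfs is quorate
            # and writable again (probably it just restarted); skip this
            # round and retry with a fresh state on the next iteration.
            return;
        }

        # ... regular manager work with a trustworthy cluster state ...
    }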
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
We updated the CRM and LRM view of the cluster state only in the PVE2
environment, outside of all regression testing and simulation scope.
Further, we ignored it if this update failed and happily worked with
an empty state, resulting in strange actions, e.g., the removal of
all (not so) "stale" services or changing the node state of all nodes
but the master to unknown.
This patch tries to improve this by moving the update out into its
own environment method, cluster_update_state, calling this in the LRM
and CRM and saving its result.
As with our introduced functionality to simulate cfs rw or update
errors, we can also simulate failures of this state update with the
RT system.
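A hedged sketch of the indirection that brings the update under test
cover (the delegation via a backend object is an assumption):

    # Hedged sketch; the real method body may differ.
    package PVE::HA::Env;

    sub cluster_update_state {
        my ($self) = @_;
        # forwarded to the active backend: PVE2 in production, the simulated
        # hardware in the regression tests, which can inject update failures
        return $self->{plug}->cluster_update_state();
    }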
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
We called them at similar times anyway, and with this change we have
them under regression test cover.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>