qemu-server ignores the flag if the VM is running, so just hardcode it
to true.
People have identical hosts with the same HW and want to be able to
relocate VMs in such cases, so allow it here - qemu knows to complain
if it cannot work. As nothing bad happens then (the VM just stays
where it is) we can only win, so do it.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
we only come to the if (!$vmd) check if the previous
if (my $vmd = $vmlist->{ids}->{$name}) is taken, which means $vmd is
always true at that point.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
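The redundant check removed above can be illustrated with a minimal Python analog (hypothetical data and names; the actual code is Perl):

```python
vmlist = {"ids": {"vm100": {"node": "node1"}}}

def lookup(name):
    # Analog of: if (my $vmd = $vmlist->{ids}->{$name}) { ... }
    vmd = vmlist["ids"].get(name)
    if vmd:
        # Inside this branch vmd is always truthy, so a nested
        # "if not vmd" check could never fire -- it is dead code.
        return vmd
    return None
```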
This makes it easier to update the resource configuration from within the CRM/LRM stack,
which is needed for the new 'stop' command.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
This will allow for new parameters beside 'target' to be used.
This is in preparation to allow for a 'timeout' parameter for a new 'stop' command.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
These two missing dependencies make it possible to install the package
on a stock Debian system (without PVE).
Signed-off-by: Rhonda D'Vine <rhonda@proxmox.com>
If an admin removes a node, they may also remove /etc/pve/nodes/NODE
quite soon after that. If the "node really deleted" logic of our
NodeStatus module has not triggered by then (it waits an hour), the
current manager still tries to read the gone node's LRM status, which
results in an exception. Turn this exception into a warning and return
a node state of 'unknown' in such a case.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Currently we always set this, and thus each service gets a
"failed_nodes": null,
entry in the written-out JSON ha/manager_status.
So only set it if needed, which can reduce manager_status quite a bit
with a lot of services.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
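The effect of only setting the key when needed can be sketched in Python (illustrative only; the manager itself is written in Perl and the function name is hypothetical):

```python
import json

def service_status(failed_nodes=None):
    status = {"state": "started"}
    # Only emit the key when there is actual data, instead of
    # always writing "failed_nodes": null for every service.
    if failed_nodes:
        status["failed_nodes"] = failed_nodes
    return status

print(json.dumps(service_status()))                     # {"state": "started"}
print(json.dumps(service_status(failed_nodes=["n1"])))  # key present only now
```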
We do not need to depend explicitly on dh-systemd as we have a
versioned debhelper dependency with >= 10~, and lintian on buster for
this .dsc even warns:
> build-depends-on-obsolete-package build-depends: dh-systemd => use debhelper (>= 9.20160709)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This is a method called in our shutdown path, so if we die here we
may swallow a shutdown request and effectively just ignore it.
In combination with the fact that our service unit is configured
with: 'TimeoutStopSec=infinity' this means that a systemctl stop may
wait infinitely for this to happen, and any other systemctl command
will be queued for that long.
So if pmxcfs is stopped and we then get a shutdown request, we cannot
start pmxcfs again, at least not through systemd.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
generated by: find . -name '*.pm' -exec sed -i 's/\s*$//g' {} \;
As I touched almost every file here anyway I'm not scared to appear in
git blame ;-) also, git blame can suppress whitespace changes (-w).
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
While it would be correct to have them tracked here, we cannot do this
at the moment, as those two also depend on pve-ha-manager, and the
dpkg packaged in stretch has an issue with such cyclic dependencies;
its trigger cycle detection was only fixed for buster[0].
Currently, the issue occurs under the following conditions:
* update of pve-ha-manager plus either pve-container or qemu-server
* but _no_ update of pve-manager in the same upgrade cycle
[0]: 7f43bf5f93
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Add automatic generation of the zsh command completion scripts for
the ha-manager CLI tools.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
service_is_ha_managed returns false if a service is in the resource
configuration but marked as 'ignore', as for the internal stack it is
as if it wasn't HA managed at all.
But users should be able to remove it from the configuration easily
even in this state, without having to set the request state to
anything else first.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The vm_shutdown parameter forceStop differs in behaviour between VMs
and CTs. While for VMs it ensures that a VM gets stopped only after
the timeout has passed if it could not shut down gracefully, the
container stack always ignores any timeout if forceStop is set and
hard stops the CT immediately.
To achieve the VM behaviour for CTs too, passing the timeout is
enough, as lxc-stop then does the hard stop after the timeout itself.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
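The intended semantics, graceful shutdown first, hard stop only once the timeout expires, can be sketched as follows (hypothetical helper names, not the actual Perl stack):

```python
def stop_service(shutdown_gracefully, hard_stop, timeout):
    # Try the graceful path first; only if it does not finish within
    # the timeout do we fall back to a hard stop, mirroring what
    # lxc-stop with a timeout does for containers.
    if shutdown_gracefully(timeout):
        return "clean shutdown"
    hard_stop()
    return "hard stopped after timeout"
```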
We do not support all of the dlm.conf possibilities, but we also do
not want to die on such "unknown" keys/commands, as an admin should be
able to share this config if it is already used for other purposes,
e.g. lockd, gfs, or such.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
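A tolerant parser of this kind can be sketched in Python (illustrative only; the key set is a hypothetical subset and the real parser is Perl):

```python
KNOWN_KEYS = {"protocol", "log_debug"}  # small subset, for illustration

def parse_dlm_conf(text):
    config, warnings = {}, []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        key = key.strip()
        if key in KNOWN_KEYS:
            config[key] = value.strip()
        else:
            # Warn instead of dying, so a config shared with other
            # consumers (lockd, gfs, ...) still parses cleanly.
            warnings.append(f"ignoring unknown key '{key}'")
    return config, warnings
```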