the name 'pve-manager' collides with our pve-manager package name,
which - from the user's point of view - mainly provides the API and
WebUI.
A user could thus think that restarting 'pve-manager' restarts the
WebUI's server, which is an understandable assumption.
But the pve-manager.service does not control the WebUI or its
server; it is responsible for starting all guests with 'onboot=1' in
their config on system boot and for stopping all remaining running
guests on system shutdown.
Thus, rename it to pve-guests and adapt its description. This may not
seem like an ideal name at first glance, but it is better than the
current one. Further, it leads to log messages like:
> Starting PVE guests (Service providing start-on-boot and stop-all-on-shutdown)
> [...]
> Started PVE guests (Service providing start-on-boot and stop-all-on-shutdown)
> [...]
> Stopping PVE guests (Service providing start-on-boot and stop-all-on-shutdown)
which make it clearer what happens and what this service is for.
Alias the new service to the old pve-manager.service for legacy
reasons. While our own services do not depend on it, a user could have
created their own service using pve-manager.service as a
synchronisation point.
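A minimal sketch of the relevant unit file fragment, assuming the alias
is declared in the unit's [Install] section (the description text and
the rest of the unit are assumed/omitted here):

    # pve-guests.service, fragment (sketch - not the actual shipped unit)
    [Unit]
    Description=PVE guests

    [Install]
    WantedBy=multi-user.target
    # keep the old unit name resolvable for third-party units that used
    # pve-manager.service as a synchronisation point
    Alias=pve-manager.service

Once the unit is enabled, systemd creates the pve-manager.service alias
symlink, so directives like After=pve-manager.service in a user's own
unit should keep working.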
Lintian then complains about init.d/pve-manager not having a related
systemd service file. Instead of renaming it, just drop it.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The API calls are not too chatty, but they may give helpful hints about
what is being attempted.
This may help an admin figure out which guest delays the host's
shutdown.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This service is responsible for bringing up guests marked with 'onboot'
on host power-on and for stopping _all_ guests gracefully on host
shutdown (be it reboot, shutdown, halt, ..)
Its type is set to 'oneshot', which implies a TimeoutStartSec of
'infinity' by default. With Jessie's version of systemd,
TimeoutStopSec defaulted to TimeoutStartSec if not set – so also
'infinity'.
But Debian Stretch's version of systemd makes TimeoutStopSec default
to 'DefaultTimeoutStopSec' if it is not set, which is 90 seconds by
default – much less than infinity.
This may cause non-graceful shutdowns of guests, as after those 90
seconds systemd sends a SIGKILL to the pvesh 'stopall' process.
That can leave guests in a bad state. Besides that, it can also lead
to a hanging shutdown in some circumstances: if some guests still
operate on storages, systemd-shutdown (the binary which gets exec'ed
by systemd to become the new PID 1) cannot finish its
sync/umount/shutdown procedure. It has a watchdog armed on sync; if
that triggers, you may still end up with a fully shut down system.
Otherwise it can hang forever, at least until the power plug gets
pulled or similar action is taken.
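A sketch of the relevant [Service] settings, assuming the fix is to set
TimeoutStopSec explicitly (whether to 'infinity' or to some suitably
large value):

    # [Service] fragment (sketch)
    [Service]
    Type=oneshot
    RemainAfterExit=yes
    # Stretch's systemd no longer inherits the start timeout here, so set
    # the stop timeout explicitly instead of falling back to the 90 second
    # DefaultTimeoutStopSec and risking a SIGKILL of the stopall process
    TimeoutStopSec=infinity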
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
pvemailforward is a tiny one-liner, but for the sake of best
practices, let's use the build tools and flags from the environment.
For example, with dpkg-buildpackage this will make us use
-D_FORTIFY_SOURCE=2 etc.
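A hedged sketch of what that can look like in the makefile; the rule
below is illustrative, not a copy of the actual one:

    # sketch: rely on the compiler and flags provided by the build
    # environment instead of hard-coding them; dpkg-buildpackage exports
    # CFLAGS/CPPFLAGS/LDFLAGS via dpkg-buildflags, which is where
    # -D_FORTIFY_SOURCE=2 and the other hardening flags come from
    pvemailforward: pvemailforward.c
    	$(CC) $(CPPFLAGS) $(CFLAGS) $(LDFLAGS) -o $@ $<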
Commit 3385399339c94 ("replication: keep retrying every 30 minutes in
error state") changed the retry behavior to no longer stop after the
3rd error, but to keep retrying at half-hour intervals instead. This
needs to be reflected in the tests. The numbers here match
(1900 + 30*60 = 3700).
Commit fd844180a7efa ("replication: don't sync to offline targets on
error states") changed the retry behavior to check whether the target
node is online. If it is not, we fail right away. This introduced a
dependency on PVE::Cluster::get_members, which we now need to mock.
The tests currently use the node names "node{1,2,3}", so I just mock
those three.
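A sketch of such a mock, assuming Test::MockModule is used and that
get_members() returns a hash keyed by node name with an 'online' flag
(the exact data shape in the real test environment may differ):

    # sketch: pretend node1..node3 are online cluster members
    use Test::MockModule;

    my $pve_cluster_module = Test::MockModule->new('PVE::Cluster');
    $pve_cluster_module->mock(get_members => sub {
        return {
            node1 => { online => 1 },
            node2 => { online => 1 },
            node3 => { online => 1 },
        };
    });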
Otherwise the user running the tests may either create (and
end up being the owner of) the system-wide
/var/lock/pve-manager/* files, or the tests will fail (or
loop endlessly) if the user doesn't have access to them.
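One plausible way to do that in the test setup, shown here as a sketch;
the real code may hook in at a different layer than
PVE::Tools::lock_file:

    # sketch: redirect lock files into a throwaway directory so the tests
    # never create or own the system-wide /var/lock/pve-manager/* files
    use File::Temp qw(tempdir);
    use Test::MockModule;

    my $test_lockdir = tempdir(CLEANUP => 1);

    my $tools_module = Test::MockModule->new('PVE::Tools');
    $tools_module->mock(lock_file => sub {
        my ($filename, @rest) = @_;
        $filename =~ s!^/var/lock/pve-manager!$test_lockdir!;
        return $tools_module->original('lock_file')->($filename, @rest);
    });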
We pass prepare_local_job() a list of storages to scan for stale
volumes. This way we make sure that we only activate/scan the related
storages.
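A sketch of how such a list could be derived on the caller side;
variable names are assumed for illustration:

    # sketch: collect only the storages backing the replicated volumes,
    # so nothing else gets activated or scanned for stale volumes
    # ($volumes is assumed to hold volume IDs like 'local-zfs:vm-100-disk-1')
    my %storage_ids;
    for my $volid (@$volumes) {
        my ($storeid) = split(/:/, $volid, 2);
        $storage_ids{$storeid} = 1;
    }
    my $storage_list = [ sort keys %storage_ids ];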
Snapshot rollback may remove local replication snapshots. In that case
we still have the $conf->{parent} snapshot on both sides, so we
can use that as the base snapshot.
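A sketch of that fallback, with variable names assumed for
illustration:

    # sketch: prefer the newest common replication snapshot as base;
    # if a rollback removed those, fall back to the guest config's
    # current parent snapshot, which still exists on both sides
    my $base_snapname = $last_replication_snapshot // $conf->{parent};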
The actual volume replication is done in replicate_volume(), which is just
a stub for now.
I also added a regression test, replication_test5.pl, to verify basic
functionality.
having our ceph.service pulled in by ceph.target does not
work anymore, because "systemctl start ceph.target" hangs
forever on ceph-common upgrades. multi-user.target seems to
work as well, and we are ordered after pve-cluster anyway.
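A sketch of the resulting unit wiring (fragment only, exact unit
content assumed):

    # ceph.service fragment (sketch)
    [Unit]
    After=pve-cluster.service

    [Install]
    # pulled in by the default boot target instead of ceph.target, whose
    # "systemctl start" hangs during ceph-common upgrades
    WantedBy=multi-user.target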
only replace the old ceph.service if it is an exact match.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Remove the POD content and the overriding makefile rule for
'pveperf.1.pod', so that the rule from pve-doc-generator.mk matches
instead.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>