The whole thing is already prepared for this; the systemd timer was
just a fixed periodic timer firing once a minute. And we only
introduced it because the assumption was made that this approach
would use less memory, AFAIK.
But logging 4+ lines just to note that the timer was started, even if
it does nothing, and that 24/7, is not too cheap and a bit annoying.
So, as a first step, add a simple daemon which forks off a child for
running jobs once a minute.
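A minimal sketch of that loop in shell pseudocode (the actual daemon
is not shell; run_jobs is just a hypothetical stand-in for the job
execution in the forked child):

    # rough sketch: fork off a child once a minute, so a long-running
    # job cannot block the main loop
    while true; do
        ( run_jobs ) &
        sleep 60
    done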
This could still be made a bit more intelligent, i.e., check whether
we have jobs to run before forking - as forking is not the cheapest
syscall. Further, we could adapt the sleep interval to the next time
we actually need to run a job (and send a SIGUSR to the daemon if a
job interval changes such that the interval gets narrower).
We try to sync running to minute-change boundaries at start; this
emulates the systemd.timer behaviour we had until now. Also, users
can configure jobs with minute precision, so they probably expect
those to start really close to a minute-change event.
This could be adapted to resync while running, to factor in time
drift. But as long as enough CPU cycles are available we run at
correct monotonic intervals, so this isn't a must, IMO.
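For illustration, the initial alignment to the next minute change
boils down to something like this sketch:

    # sleep until the next full minute before entering the periodic loop
    now=$(date +%s)
    sleep $(( 60 - now % 60 ))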
Another improvement could be locking at a finer granularity, i.e.,
not on a per-all-local-job-runs basis, but on a per-job (per-guest?)
basis, which would reduce temporary starvation of small,
high-frequency jobs by big, less periodic jobs.
We argued that it's the user's fault if such situations arise, but
they can evolve over time without anyone noticing, especially in more
complex setups.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
The only difference is that reload-or-try-restart does not do
anything if the service isn't already running, while
reload-or-restart also starts a stopped service.
We explicitly check on upgrade whether the service is enabled before
doing any start/reload-or-restart action anyway. So this would now
start daemons that were stopped but not disabled, which is not a
really valid state and would have happened on the next reboot anyway.
This starts new daemons (like the pvescheduler) automatically on a
package upgrade.
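The practical difference, on a stopped unit (pvedaemon.service just
as an example):

    systemctl reload-or-try-restart pvedaemon.service  # no-op when stopped
    systemctl reload-or-restart pvedaemon.service      # also starts it when stopped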
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
in theory we'd need to be more cautious, but this was added only
during the beta, which is when we do not really provide any stability
guarantee; further, it's rather unlikely that one added very
important repos that, when removed, break something (again, *during*
the beta).
The new APT repo management also makes it easy to see when one does
not get any PVE updates, and one can easily add the pvetest repo
there again too.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
now that we no longer ship our own LVM packages, set the relevant
filtering options here if they are missing.
for an upgrade from PVE 6.x, the following two scenarios are likely:
A: the user edited the config provided by our old lvm2 package. It
likely contains our (or a modified) global_filter, but the old
scan_lvs default. In this case we leave global_filter untouched as
long as it contains our 'don't scan zvols' entry, and set scan_lvs
to false.
B: the config provided by our old lvm2 package was replaced by the
default config from the stock lvm2 package. scan_lvs already defaults
to false, but global_filter is unset (scan everything), so we need to
set our own global_filter excluding zvols.
other combinations should be handled fine as well.
for new installs (installer, install on top of Debian Bullseye) we
are always in scenario B.
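The settings the postinst ensures correspond to this lvm.conf excerpt
(a sketch; the exact filter string is the one shipped by the package):

    devices {
        # reject ZFS zvols, so guest disks are not scanned for PVs
        global_filter = [ "r|/dev/zd.*|" ]
        # don't scan LVM logical volumes for nested PVs
        scan_lvs = 0
    }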
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
any system upgrading to 7.x was either installed with >= 6.4 in the
first place, or upgraded to >= 6.4 and thus fixed those issues already.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
We could also just check the mtime of the machine-id as a heuristic,
but extracting the machine-ids from our ISO archive was pretty
straightforward and avoids special handling for systems installed on
top of Debian, so use that.
The full map:
pve 4.0-62414ad6-11 -> "2ec24eda629a4c8d8c1f8dac50a9ee5f"
pve 4.1-a64d2990-21 -> "bd94244c0da6419a82a383e62dc03b51"
pve 4.2-95d93422-28 -> "45d4e7046c3d4c26af8acd589f358ac6"
pve 4.3-29d03d47-2 -> "8c445f96b3064ff79f825ea78a3eefde"
pve 4.4-f4006904-1 -> "6f9fae0f0a794fd4b89b3abecfd7f182"
pve 4.4-f4006904-2 -> "6f9fae0f0a794fd4b89b3abecfd7f182"
pve 5.0-786da0da-1 -> "285de85759894b3f9ad9844a89045af6"
pve 5.0-786da0da-2 -> "89971dede7b04c98b2b0bc8845f53320"
pve 5.0-20170505-test -> "4e3b6e9550f24d638bc26211a7b37df5"
pve 5.0-ad98a36-5 -> "bc2f684e31ee4daf95e45c62410a95b1"
pve 5.0-d136f4ad-3 -> "8cc7bc883fd048b78a4af7433c48e341"
pve 5.0-9795f744-4 -> "9b46d99712854566bb02a656a3ff9191"
pve 5.0-22d7548f-1 -> "e7fc055af47048ee884dcb88a7474336"
pve 5.0-273a9671-1 -> "13d879f75e6447a69ed85179bd93759a"
pve 5.1-2 -> "5b59e448c3e74029af2ac91f572d68a7"
pve 5.1-3 -> "5a2bd0d11a6c41f9a33fd527751224ea"
pve 5.1-cfaf62cd-1 -> "516afc72013c4b9da85b309aad987df2"
pve 5.1-test-20171019-1 -> "b0ce8d24684845e8ac337c588a7715cb"
pve 5.1-test-20171218 -> "e0af064c16e9463e9fa980eac66427c1"
pve 5.2-1 -> "6e925d11b497446e8e7f2ff38e7cf891"
pve 5.3-1 -> "eec280213051474d8bfe7e089a86744a"
pve 5.3-2 -> "708ded6ee82a46c08b77fecda2284c6c"
pve 5.3-preview-20181123-1 -> "615cb2b78b2240289fef74da610c146f"
pve 5.4-1 -> "b965b329a7e246d5be66a8d367f5760d"
pve 6.0-1 -> "5472a49c6436426fbebd7881f7b7f13b"
The 6.0 one should never trigger, as there we already had the fix
out, but some internal installation may have missed it, and it
doesn't hurt to check, so include it.
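A sketch of the resulting check (the variable stands for the map
above; the actual details live in the postinst):

    # regenerate a machine-id that is known to be shipped by an ISO
    if echo "$known_iso_machine_ids" | grep -q "$(cat /etc/machine-id)"; then
        rm -f /etc/machine-id
        systemd-machine-id-setup
    fi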
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
if pve-manager gets triggered we will normally always do a reload,
which means that the updatecerts call won't get triggered, as systemd
does not execute the ExecStartPre directives in the reload case.
Do it ourselves.
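So the triggered path now roughly does this (a sketch; pveproxy's
ExecStartPre line is where updatecerts normally runs):

    # run what ExecStartPre would have done, then reload
    pvecm updatecerts --silent || true
    systemctl reload-or-try-restart pveproxy.service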
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
When installing from the ISO, '/etc/aliases' gets written correctly,
however postfix needs '/etc/aliases.db' (generated by running
newaliases) in order to work.
Added to the postinst script to fix the issue for users who installed
from the ISO before this fix.
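The added check is essentially (sketch):

    # postfix needs the hashed aliases database, which old ISO installs lack
    if [ -e /etc/aliases ] && [ ! -e /etc/aliases.db ]; then
        newaliases
    fi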
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
check if a unit is masked before starting/restarting/reloading it,
as otherwise we get pretty ugly error messages during upgrade.
as "deb-systemd-helper --quiet was-enabled" differs from the
"systemctl is-enabled" behaviour - the former returns true for masked
units while the latter does not - we have to call systemctl manually,
circumventing the deb helper.
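Roughly, with pveproxy.service as an example:

    # "systemctl is-enabled" fails for masked units, so it works as a guard
    if systemctl --quiet is-enabled pveproxy.service 2>/dev/null; then
        deb-systemd-invoke reload-or-try-restart pveproxy.service
    fi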
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Acked-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
we don't use anything bash specific in our postinst, and this way
lintian should warn us about any bashisms we introduce.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
with an added "reload-or" for pvedaemon/pveproxy/spiceproxy, which
dh_start unfortunately does not yet support
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
reduce code duplication, and reload-or-restart timers just like service
units instead of just starting them.
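For example, a single loop instead of per-unit blocks (the unit list
here is just illustrative):

    # one code path for services and timers alike
    for unit in pvedaemon.service pveproxy.service spiceproxy.service pvesr.timer; do
        deb-systemd-invoke reload-or-try-restart "$unit"
    done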
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
We have the pvebanner.service in place, which ensures this gets
called on boot before the getty target.
Thus this only had an effect if we changed the nodename-to-IP mapping
_and_ upgraded/reinstalled pve-manager; then switching to another TTY
would show the updated IP. But as this a) is surely not a commonly
triggered path and b) an IP change suggests a reboot either way, and
if users can handle it on their own without a reboot, they should
also be able to handle an outdated /etc/issue until the next reboot.
Also, for PVE on top of plain Debian a reboot is needed anyway, so
that the PVE kernel gets booted, so this shouldn't be an issue there
either.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
In commit 2bde88fb3f6ed61ddb67c01190cbffdbfc210ea9 we needed to
change the ceph.service install target to multi-user.target, as
ceph.target could hang indefinitely if ceph-common gets upgraded.
This change is included in pve-manager 4.4-13 and newer; as users
wanting to upgrade to 5.0 must first upgrade to the latest 4.4 to be
able to do so (without headache), this can be removed.
The first case won't happen anymore on a recent PVE.
The 'version is empty or <unknown>' check may drop the '<unknown>'
part; it gets handled just fine by the 'dpkg --compare-versions'
bits, if it happens at all for the 'configure' case.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
"Note that not all init systems print messages to the system console,
so that the logfile may remain empty; this is the case with systemd
(the default init system). Try "journalctl -b" instead."
-- https://packages.debian.org/stretch/bootlogd
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This was added by c91649753 on 2012-02-21 11:42:32; as we have had
two major upgrades since then, every system was either updated or
newly installed, so just remove this.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
the name 'pve-manager' collides with our pve-manager package name,
which - from the user's point of view - mainly provides the API and
WebUI.
A user could thus think that restarting 'pve-manager' would restart
the WebUI's server, which is relatable.
But the pve-manager.service does not control the WebUI or its
server; it is responsible for starting all guests with 'onboot=1' in
their config on system boot, and for stopping all remaining running
guests on system shutdown.
Thus rename it to pve-guests and adapt its description. This may not
seem like an ideal name at first glance, but it's better than the
current option. Further, it leads to log messages like:
> Starting PVE guests (Service providing start-on-boot and stop-all-on-shutdown)
> [...]
> Started PVE guests (Service providing start-on-boot and stop-all-on-shutdown)
> [...]
> Stopping PVE guests (Service providing start-on-boot and stop-all-on-shutdown)
which makes it clearer what happens, or what this service is for.
Alias the new service to the old pve-manager.service for legacy
reasons. While our services do not depend on it, a user could have
created their own service which used pve-manager.service as a
synchronisation point.
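The alias itself is a one-liner in the unit's [Install] section
(sketch of the relevant excerpt):

    [Install]
    WantedBy=multi-user.target
    Alias=pve-manager.service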
Lintian then complains about init.d/pve-manager not having a related
systemd service file. Instead of renaming it, just drop it.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This reverts commit 12fe9183cb82c3ea148ac31990c67b518c50aabf.
Revert "add missing file"
This reverts commit c11885e0a0da0ad0c944a48e645d829553c6705d.
We've switched to Let's Encrypt.
postinst configure: run update-ca-certificates if the
previous version is <= 5.0-23.
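In 'postinst configure', $2 is the previously configured version, so
the check boils down to (sketch):

    # $2 is the old version on upgrade, and empty on a fresh install
    if test -n "$2" && dpkg --compare-versions "$2" 'le' '5.0-23'; then
        update-ca-certificates
    fi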
having our ceph.service pulled in by ceph.target does not
work anymore, because "systemctl start ceph.target" hangs
forever on ceph-common upgrades. multi-user.target seems to
work as well, and we are ordered after pve-cluster anyway.
only replace the old ceph.service if it is an exact match.
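i.e., compare against the checksum of the previously shipped unit and
leave user-modified files alone (sketch; the checksum value and the
source path of the new unit are placeholders):

    old_md5="<md5 of the previously shipped ceph.service>"
    if [ "$(md5sum /etc/systemd/system/ceph.service | cut -d' ' -f1)" = "$old_md5" ]; then
        cp /usr/share/pve-manager/ceph.service /etc/systemd/system/ceph.service
    fi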
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>