otherwise this can break an upgrade for unrelated reasons (regular debhelper
also constructs the restart invocations like this; it even redirects output
to /dev/null)
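For illustration, a debhelper-generated restart invocation looks roughly
like this (a sketch; the unit name is exemplary):

    deb-systemd-invoke restart pveproxy.service >/dev/null || true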
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
and guard it so it only runs on ceph-using systems (the regular 'inited'
check doesn't work as a guard for this, because it checks for new-style
inits, including the dir existing).
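A minimal sketch of such a guard, assuming the presence of
'/etc/pve/ceph.conf' is what marks a ceph-using system and that the
guarded step is the crash-keyring helper described further below (both
are assumptions):

    if [ -e /etc/pve/ceph.conf ]; then
        # only ceph-using systems need this step
        /usr/share/pve-manager/helpers/pve-init-ceph-crash || true
    fi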
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Due to Ceph dropping privileges when running the 'ceph-crash' daemon
[0], it is necessary to allow the daemon to authenticate with its
cluster in a safe manner.
In order to avoid exposing sensitive keyrings or somehow escalating
its privileges again, 'ceph-crash' is therefore provided with its own
keyring in the '/etc/pve/ceph' directory. This directory, due to being
on 'pmxcfs', may be read by members of the 'www-data' group, which
'ceph-crash' is made part of [1].
Expected Configuration
----------------------
1. A keyring file named '/etc/pve/ceph/ceph.client.crash.keyring'
exists
2. A section named 'client.crash' exists in '/etc/pve/ceph.conf'
3. The 'client.crash' section has a key named 'keyring' which
references the keyring file as '/etc/pve/ceph/$cluster.$name.keyring'
4. The 'client.crash' section has *no* key named 'key'
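Concretely, the expected section in '/etc/pve/ceph.conf' described in
points 2-4 would look like this (note: no 'key' entry):

    [client.crash]
        keyring = /etc/pve/ceph/$cluster.$name.keyring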
New Clusters
------------
The keyring file is created and the conf file is updated after the first
monitor has been created (when calling `pveceph mon create`).
Existing Clusters
-----------------
A new helper script creates and configures the 'client.crash' keyring in
`postinst`, if:
* Ceph is installed
* Ceph is initialized ('/etc/pve/ceph.conf' and '/etc/pve/ceph' exist)
* Connection to RADOS is successful
If the above conditions are met, the helper script ensures that the
existing configuration matches the expected configuration mentioned
above.
The configuration is not changed if it is already as expected.
The helper script may be called again manually if the `postinst` hook
fails. It is installed to '/usr/share/pve-manager/helpers/pve-init-ceph-crash'.
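For example, to re-run it by hand after a failed upgrade:

    # idempotent: leaves an already-correct configuration untouched
    /usr/share/pve-manager/helpers/pve-init-ceph-crash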
Existing `client.crash` Key
---------------------------
If a key named 'client.crash' already exists within the cluster, it is
reused and not regenerated.
[0]: https://github.com/ceph/ceph/pull/48713
[1]: https://git.proxmox.com/?p=ceph.git;a=commitdiff;h=f72c698a55905d93e9a0b7b95674616547deba8a
Signed-off-by: Max Carrara <m.carrara@proxmox.com>
Tested-by: Friedrich Weber <f.weber@proxmox.com>
This commit adds the '/etc/pve/ceph' directory to our overall expected
Ceph configuration.
This directory is meant to store cluster-wide, non-private
configuration files used by Ceph applications and services that are
executed with lower privileges, such as 'ceph-crash.service'.
The directory's existence is now also verified when checking whether
Ceph is configured correctly. This makes it easier for our other
tooling to rely on the directory being present and avoids otherwise
needless, repeated checks.
* For new clusters: `pveceph init` now creates '/etc/pve/ceph' when
called.
* For existing clusters: The 'postinst' hook this commit adds ensures
that '/etc/pve/ceph' is created when updating.
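A minimal sketch of the postinst part, assuming it only needs to happen
on systems where Ceph is already set up (an assumption) and that a
plain existence check suffices:

    # create the shared directory for non-private ceph config files
    if [ -e /etc/pve/ceph.conf ] && [ ! -d /etc/pve/ceph ]; then
        mkdir /etc/pve/ceph || true
    fi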
Signed-off-by: Max Carrara <m.carrara@proxmox.com>
Tested-by: Friedrich Weber <f.weber@proxmox.com>
Since LVM 2.03.15, RBD devices are also scanned by default [1]. This
can lead to guest volumes being recognized and displayed on the host
when using KRBD for RBD-backed disks. In order to prevent this, we add
an additional filter to the LVM config to avoid scanning RBD devices.
This also prevents a bug where LVM created a very large number of
archive entries when logical volumes with the same path were
available. This could happen when two guests with RBD disks had the
same LVM layout, or when a guest and the host had the same layout.
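The added entry in the LVM config could look roughly like this (a
sketch; the exact filter expression shipped may differ):

    devices {
        # reject RBD devices in addition to the existing zvol entry
        global_filter = [ "r|/dev/zd.*|", "r|/dev/rbd.*|" ]
    }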
previous behavior:
If there is no marker in the LVM conf and global_filter does not
contain '/dev/zd.*': replace the global_filter with our version
new behavior:
Replace the global_filter iff:
- There is no marker and global_filter is empty
- The global_filter is exactly the old default
If we don't replace the filter and it has a non-default value, we print
a warning. Additionally, we force this function to run once when
upgrading from older versions.
The previous versions could replace custom global_filters where the
comment had been removed and the zvol directive dropped. The new
behavior is slightly more conservative, but works the same in other
cases.
[1] 6a431eb242
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
in theory we'd need to be more cautious, but this was added only during
the beta, which is when we do not really provide any stability
guarantee; further, it's rather unlikely that one added very important
repos that, when removed, break something (again *during* the beta).
The new APT repo management also makes it easy to see when one does
not get any PVE updates, and one can easily add the pvetest repo there
again too.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The mistake wasn't that bad, as we mostly checked for the migration
too often, i.e., for any update to the 7.2-X releases, not just until
7.2-11 was crossed (but with 7.3-X the check worked again as
intended).
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
proxmox-mail-forward is a new helper binary in Rust intended to behave
essentially the same on PVE installations. It can also handle mixed
PBS+PVE installations.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
missed when switching over to Proxmox::RS::Subscription, which stores
the same info in the product-specific /etc/apt/auth.conf.d/pve.conf.
The top-level file might contain non-PVE-managed entries, so only
remove entries matching "our" machine.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
The whole thing is already prepared for this; the systemd timer was
just a fixed periodic timer with a frequency of one minute. And we
only introduced it on the assumption that this approach would use
less memory, AFAIK.
But logging 4+ lines just about the fact that the timer was started,
even if it does nothing, and that 24/7, is not too cheap and a bit
annoying.
So, as a first step, add a simple daemon which forks off a child for
running jobs once a minute.
This could still be made a bit more intelligent, i.e., look if we
have jobs to run before forking - as forking is not the cheapest
syscall. Further, we could adapt the sleep interval to the next time
we actually need to run a job (and send a SIGUSR to the daemon if a
job interval changes such that the interval gets narrower).
We try to sync job runs to minute-change boundaries at start; this
emulates the systemd.timer behaviour we had until now. Also, users can
configure jobs with minute precision, so they probably expect that
those also start really close to a minute-change event.
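For illustration, syncing to the next minute-change boundary
essentially boils down to sleeping for the remainder of the current
minute (conceptual sketch, not the actual implementation):

    now=$(date +%s)
    sleep $(( 60 - now % 60 ))  # wake up roughly at the next minute change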
This could be adapted to resync while running, to factor in time
drift. But, as long as enough CPU cycles are available, we run in
correct monotonic intervals, so this isn't a must, IMO.
Another improvement could be locking in a more fine-grained way, i.e.,
not on a per-all-local-job-runs basis, but on a per-job (per-guest?)
basis, which would reduce temporary starvation of small high-frequency
jobs by big, less periodic jobs.
We argued that it's the user's fault if such situations arise, but
they can evolve over time without anyone noticing, especially in more
complex setups.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
The only difference is that reload-or-try-restart does not do
anything if the service isn't already running, while
reload-or-restart also starts a stopped service.
We explicitly check if the service is enabled on upgrade before doing
any start/reload-or-restart action anyway. So, it would now start
daemons that were stopped but not disabled, which is not really a
valid state and would have happened on the next reboot anyway.
This starts new daemons (like the pvescheduler) automatically on a
package upgrade.
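A sketch of the resulting per-unit handling in the postinst (unit name
exemplary):

    if systemctl -q is-enabled pvescheduler.service 2>/dev/null; then
        # also starts the unit if it was stopped but not disabled
        deb-systemd-invoke reload-or-restart pvescheduler.service >/dev/null || true
    fi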
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
in theory we'd need to be more cautious, but this was added only during
the beta, which is when we do not really provide any stability
guarantee; further, it's rather unlikely that one added very important
repos that, when removed, break something (again *during* the beta).
The new APT repo management also makes it easy to see when one does
not get any PVE updates, and one can easily add the pvetest repo there
again too.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
now that we no longer ship our own LVM packages, set the relevant
filtering options here if they are missing.
for an upgrade from PVE 6.x, the following two scenarios are likely:
A: the user edited the config provided by our old lvm2 package. It
likely contains our (or a modified) global_filter, but the old
scan_lvs default. In this case we ignore global_filter as long as it
contains our 'don't scan zvols' entry, and set scan_lvs to false.
B: the config provided by our old lvm2 package was replaced by the
default config from the stock lvm2 package. scan_lvs already defaults
to false, but global_filter is unset (scan everything), so we need to
set our own global_filter excluding zvols.
other combinations should be handled fine as well.
for new installs (installer, install on top of Debian Bullseye) we are
always in scenario B.
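In both scenarios the relevant part of lvm.conf should end up roughly
like this (a sketch; the actual marker comment and filter shipped by
the package may differ):

    devices {
        # added by pve-manager to avoid scanning ZFS zvols (marker comment)
        global_filter = [ "r|/dev/zd.*|" ]
        scan_lvs = 0
    }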
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
any system upgrading to 7.x was either installed with >= 6.4 in the
first place, or upgraded to >= 6.4 and thus fixed those issues already.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
We could also just check the mtime of the machine-id as a heuristic,
but extracting the machine-ids from our ISO archive was pretty
straightforward and avoids special handling for systems installed on
top of Debian, so use that.
The full map:
pve 4.0-62414ad6-11 -> "2ec24eda629a4c8d8c1f8dac50a9ee5f"
pve 4.1-a64d2990-21 -> "bd94244c0da6419a82a383e62dc03b51"
pve 4.2-95d93422-28 -> "45d4e7046c3d4c26af8acd589f358ac6"
pve 4.3-29d03d47-2 -> "8c445f96b3064ff79f825ea78a3eefde"
pve 4.4-f4006904-1 -> "6f9fae0f0a794fd4b89b3abecfd7f182"
pve 4.4-f4006904-2 -> "6f9fae0f0a794fd4b89b3abecfd7f182"
pve 5.0-786da0da-1 -> "285de85759894b3f9ad9844a89045af6"
pve 5.0-786da0da-2 -> "89971dede7b04c98b2b0bc8845f53320"
pve 5.0-20170505-test -> "4e3b6e9550f24d638bc26211a7b37df5"
pve 5.0-ad98a36-5 -> "bc2f684e31ee4daf95e45c62410a95b1"
pve 5.0-d136f4ad-3 -> "8cc7bc883fd048b78a4af7433c48e341"
pve 5.0-9795f744-4 -> "9b46d99712854566bb02a656a3ff9191"
pve 5.0-22d7548f-1 -> "e7fc055af47048ee884dcb88a7474336"
pve 5.0-273a9671-1 -> "13d879f75e6447a69ed85179bd93759a"
pve 5.1-2 -> "5b59e448c3e74029af2ac91f572d68a7"
pve 5.1-3 -> "5a2bd0d11a6c41f9a33fd527751224ea"
pve 5.1-cfaf62cd-1 -> "516afc72013c4b9da85b309aad987df2"
pve 5.1-test-20171019-1 -> "b0ce8d24684845e8ac337c588a7715cb"
pve 5.1-test-20171218 -> "e0af064c16e9463e9fa980eac66427c1"
pve 5.2-1 -> "6e925d11b497446e8e7f2ff38e7cf891"
pve 5.3-1 -> "eec280213051474d8bfe7e089a86744a"
pve 5.3-2 -> "708ded6ee82a46c08b77fecda2284c6c"
pve 5.3-preview-20181123-1 -> "615cb2b78b2240289fef74da610c146f"
pve 5.4-1 -> "b965b329a7e246d5be66a8d367f5760d"
pve 6.0-1 -> "5472a49c6436426fbebd7881f7b7f13b"
The 6.0 one should never trigger, as the fix was already out by then,
but it may be that some internal installation missed it, and it
doesn't hurt to check, so include it.
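A sketch of how such a check could look (illustrative only; the list
is abridged and the actual regeneration mechanism may differ):

    mid="$(cat /etc/machine-id 2>/dev/null)"
    case "$mid" in
        2ec24eda629a4c8d8c1f8dac50a9ee5f|bd94244c0da6419a82a383e62dc03b51)
            # (abridged: the remaining ISO machine-ids from the map above)
            # statically shipped machine-id detected, regenerate it so
            # that it is unique again
            rm -f /etc/machine-id
            systemd-machine-id-setup
            ;;
    esac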
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
if pve-manager gets triggered we will normally always do a reload,
which means the updatecerts call won't get triggered, as systemd
doesn't execute the ExecStartPre directives in the reload case.
Do it ourselves.
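Assuming the ExecStartPre in question runs the certificate update via
'pvecm updatecerts' (an assumption, not spelled out here), the postinst
then has to call it explicitly in the reload case, roughly like:

    # systemd skips ExecStartPre on reload, so refresh the certificates ourselves
    pvecm updatecerts || true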
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
When installing from the ISO, '/etc/aliases' gets written correctly;
however, postfix needs '/etc/aliases.db' (generated by running
newaliases) in order to work.
The newaliases call was added to the postinst script to fix the issue
for users who installed from the ISO before this fix.
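A sketch of the added postinst bit, assuming a plain existence check
suffices:

    # /etc/aliases exists, but was never compiled into /etc/aliases.db
    if [ -e /etc/aliases ] && [ ! -e /etc/aliases.db ]; then
        newaliases || true
    fi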
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
check if a unit is masked before starting/restarting/reloading it, as
otherwise we get pretty ugly error messages during upgrade.
As "deb-systemd-helper --quiet was-enabled" differs from the
"systemctl is-enabled" behaviour - the former returns true for masked
units while the latter does not - we have to call systemctl manually,
circumventing the deb helper.
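A sketch of the resulting check, for an exemplary unit:

    # deb-systemd-helper would report masked units as enabled, so ask systemctl
    state="$(systemctl is-enabled pvedaemon.service 2>/dev/null || true)"
    if [ "$state" != "masked" ]; then
        deb-systemd-invoke restart pvedaemon.service >/dev/null || true
    fi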
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Acked-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
we don't use anything bash-specific in our postinst, and this way
lintian should warn us about any bashisms we introduce.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
with an added "reload-or" for pvedaemon/pveproxy/spiceproxy, which
dh_start unfortunately does not yet support
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
reduce code duplication, and reload-or-restart timers just like service
units instead of just starting them.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
We have the pvebanner.service in place, which ensures this gets
called on boot before the getty target.
Thus this only had an effect if we changed the nodename-to-IP mapping
_and_ upgraded/reinstalled pve-manager; then switching to another TTY
would show the updated IP. But as this a) is for sure not a commonly
triggered path and b) an IP change suggests a reboot either way - and
if the user can handle it on their own without a reboot, they should
also be able to handle an outdated /etc/issue until the next reboot.
Also, for PVE on top of plain Debian a reboot is needed anyway, so
that the PVE kernel gets booted, so this shouldn't be an issue there
either.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
In commit 2bde88fb3f6ed61ddb67c01190cbffdbfc210ea9 we needed to
change the ceph.service install target to multi-user.target, as
ceph.target could hang indefinitely if ceph-common gets upgraded.
This change is included in pve-manager 4.4-13 and newer; as users
wanting to upgrade to 5.0 must upgrade to the latest 4.4 to be able to
do so (without headache), this can be removed.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The first case won't happen anymore on a recent PVE.
The 'version is empty or <unknown>' check may drop the '<unknown>'
part; it gets handled just fine by the 'dpkg --compare-versions' bits,
if it happens at all for the 'configure' case.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
"Note that not all init systems print messages to the system console,
so that the logfile may remain empty; this is the case with systemd
(the default init system). Try "journalctl -b" instead."
-- https://packages.debian.org/stretch/bootlogd
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This was added by c91649753 on 2012-02-21 11:42:32. As we had 2 major
upgrades since then, every system was either updated or newly
installed, so just remove this.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>