If we fail to start polkit, we get a message like
"org.freedesktop.DBus.Error.NameHasNoOwner: Could not activate remote peer.",
which has no meaning for the caller of our StartUnit method. Let's just
return -EACCES.
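A minimal sketch of the mapping described above (the helper name is illustrative, not the actual systemd code):

```c
/* Minimal sketch, not the actual systemd code: translate the opaque D-Bus
 * activation error into a plain -EACCES for the caller of StartUnit. */
#include <errno.h>
#include <systemd/sd-bus.h>

int map_polkit_activation_error(int r, const sd_bus_error *error) {
        if (r >= 0)
                return r;

        /* "Could not activate remote peer." surfaces under this error name. */
        if (sd_bus_error_has_name(error, "org.freedesktop.DBus.Error.NameHasNoOwner"))
                return -EACCES;

        return r;
}
```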
$ systemctl start apache
Failed to start apache.service: Could not activate remote peer. (before)
Failed to start apache.service: Access denied (after)
Fixes #13865.
We currently use the host's kernel and initramfs in our QEMU tests.
If the host is running on an encrypted LUKS partition, then the initramfs
will have a crypttab set up to look for the particular root disk it needs to
decrypt before booting into the system.
However, this disk obviously doesn't exist in our QEMU VM, so it turns out
our tests end up waiting for this device to become available, which will
never actually happen, and boot hangs for 90s until that service times out.
[*** ] A start job is running for /dev/disk/by-uuid/01234567-abcd-1234-abcd-0123456789ab (20s / 1min 30s)
To prevent this, let's pass "rd.luks=0", which disables LUKS handling in the
initramfs only, as part of our default kernel command line for our QEMU
tests.
This is enough to disable this behavior and prevent the timeout, while at
the same time not conflicting with our tests that actually check LUKS
behavior in the systemd running under test (such as TEST-02-CRYPTSETUP).
Tested: `sudo make -C TEST-02-CRYPTSETUP/ clean setup run`
DNSOverTLS in strict mode (value "yes") does check the server, as stated in
the first few lines of the option's documentation. The check is not performed
in "opportunistic" mode, however, which is allowed by RFC 7858, section "4.1.
Opportunistic Privacy Profile":
> With such a discovered DNS server, the client might or might not validate the
> resolver. These choices maximize availability and performance, but they leave
> the client vulnerable to on-path attacks that remove privacy.
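To make the distinction concrete, here is a sketch (hypothetical names, not resolved's actual code) of how the configured mode maps to validation:

```c
/* Sketch only: strict mode ("yes") authenticates the server, while
 * opportunistic mode encrypts when possible but skips validation,
 * as RFC 7858 section 4.1 allows. */
#include <stdbool.h>

typedef enum DnsOverTlsMode {
        DNS_OVER_TLS_NO,
        DNS_OVER_TLS_OPPORTUNISTIC,
        DNS_OVER_TLS_YES,            /* strict */
} DnsOverTlsMode;

bool dns_over_tls_requires_validation(DnsOverTlsMode mode) {
        return mode == DNS_OVER_TLS_YES;
}
```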
This partially reverts db11487d10 (the logic to
calculate the correct value is removed; we always use the same setting as for
the system manager). Distributions have an easy mechanism to override this if
they wish.
I think making this configurable is better, because different distros clearly
want different defaults here, and a configuration option is a nice, clean way
to express that. If we don't make it configurable, distros either have to
carry patches or, what would be worse, rely on some other configuration
mechanism, like /etc/profile. Those other solutions do not apply everywhere
(they usually require the shell to be used at some point), so it is better if
we provide a nice way to override the default.
Fixes #13469.
The systemd-analyze verify command currently results in a segmentation fault
if two consecutive non-existent unit file names are given:
# ./build/systemd-analyze a.service b.service
...<snip irrelevant part>...
Unit a.service not found.
Unit b.service not found.
Segmentation fault (core dumped)
The cause is incorrect handling of the return value of
manager_load_startable_unit_or_warn() in verify_units() in the failure case.
It looks like the current logic wants to store the first error status
encountered in verify_units() in the variable r, and to increment the variable
count only when a given unit file exists.
However, due to the incorrect handling of that return value, count is
unexpectedly incremented even when there is no such unit file: on the 2nd
failure the variable r already contains the non-zero value set by the 1st
failure, so the condition k < 0 && r == 0 evaluates to false.
This commit fixes the handling of the return value of
manager_load_startable_unit_or_warn() in verify_units().
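A self-contained sketch of the corrected pattern (stand-in names, not the actual verify_units() code):

```c
/* Stand-in illustration of the fix: remember the first error in r, but only
 * increment count when a unit actually loaded, so units[count] is never left
 * unset. The buggy form "if (k < 0 && r == 0) r = k; else count++;" counted
 * the 2nd failure because r was already non-zero by then. */
#include <stdio.h>

static int load_unit(const char *name, const char **slot) {
        (void) name;
        *slot = NULL;
        return -2;               /* pretend the unit file does not exist */
}

int main(void) {
        const char *units[16];
        const char *names[] = { "a.service", "b.service" };
        size_t count = 0;
        int r = 0;

        for (size_t i = 0; i < 2; i++) {
                int k = load_unit(names[i], &units[count]);
                if (k < 0) {
                        if (r == 0)
                                r = k;   /* keep the first error status */
                        continue;        /* never count a unit that failed to load */
                }
                count++;
        }

        printf("loaded %zu units, status %d\n", count, r);
        return r < 0;
}
```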
I added a fairly vague entry to docs/ENVIRONMENT because I think it is worth
mentioning there (in case someone is looking for environment variables that
might be relevant).
Initially I thought this was a good idea, but when reviewing a different PR
(https://github.com/systemd/systemd/pull/13862#discussion_r340604313) I changed
my mind about this. At some point we probably should start warning about the
old option name, and later still remove it. But it'll make it easier for people
to transition to the new option name if there's a period of support for both
names without any fuss. There's nothing particularly wrong about the old name,
and there is no support cost.
Fixes #13919 (by avoiding the issue completely).
If udevd receives an exit signal, it releases its reference on the udev
monitor in manager_exit(). If at this time a worker is hanging, and if
the event timeout for this worker expires before udevd exits, udevd
crashes in on_sigchld()->udev_monitor_send_device(), because the monitor
has already been freed.
Fix this by releasing the main process's monitor ref later, in
manager_free().
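A sketch of the lifetime change under discussion (types and helper names are stand-ins, not the actual udevd code):

```c
/* Stand-ins only: keep the monitor reference until the Manager itself is
 * destroyed, so callbacks that still fire during shutdown, e.g. on_sigchld()
 * handling a timed-out worker, can still send events through it. */
#include <stdlib.h>

typedef struct Monitor Monitor;
Monitor *monitor_unref(Monitor *m);   /* assumed refcounting API, returns NULL */

typedef struct Manager {
        Monitor *monitor;
        /* ... event sources, worker list, ... */
} Manager;

void manager_exit(Manager *manager) {
        /* Stop listening and ask workers to exit here, but do NOT release
         * manager->monitor yet: a hanging worker may still hit its event
         * timeout and be killed while we are shutting down. */
        (void) manager;
}

void manager_free(Manager *manager) {
        if (!manager)
                return;

        manager->monitor = monitor_unref(manager->monitor);  /* released last */
        free(manager);
}
```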
This patch is a new attempt to fix the race originally described in issue #9754.
The initial fix (commit ad96887a12) consisted of
spawning a subprocess that became the controlling process of the VT, kicking
the old controlling process off so that the VT couldn't enter the HUP state
while logind restored it.
But it introduced a regression (see issue #11269) and was therefore reverted.
However, contrary to what the revert commit message says, commit
adb8688b3f alone doesn't fix the initial race.
This patch fixes the race in a simpler way by trying to restore the VT a second
time after making sure to re-open it if the first attempt fails.
Indeed, if the old controlling process dies before or during the first attempt,
logind will fail to restore the VT. At this point the VT is in the HUP state,
but we know it won't enter the HUP state a second time. Therefore we retry by
re-opening the VT to clear the HUP state and restoring the VT a second time,
which should now be safe.
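A hedged sketch of that retry, with vt_restore() standing in for the actual restore logic (the helper names here are hypothetical):

```c
/* Hypothetical helpers; the point is the shape of the retry, not the
 * details of the VT_SETMODE/KDSETMODE handling. */
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

int vt_restore(int fd);   /* assumed: restores VT mode, returns < 0 on error */

int restore_vt_with_retry(const char *vt_path, int *fd) {
        int r;

        r = vt_restore(*fd);
        if (r >= 0)
                return r;

        /* The old controlling process may have died before or during the
         * first attempt, leaving the VT in HUP state. Re-opening clears the
         * HUP state, and it cannot recur, so a second attempt is safe. */
        close(*fd);
        *fd = open(vt_path, O_RDWR | O_CLOEXEC | O_NOCTTY | O_NONBLOCK);
        if (*fd < 0)
                return -errno;

        return vt_restore(*fd);
}
```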
Fixes: #9754
Fixes: #13241
On some systems with lots of devices, device probing for certain drivers can
take a very long time. If systemd-udevd detects a timeout and kills the worker
running modprobe with SIGKILL, some devices will not be probed or will end up
in an unusable state. The --event-timeout option can be used to modify the
maximum time spent in an uevent handler. But when systemd-udevd is about to
exit, it uses a different timeout, hard-coded to 30s, and exits when that
timeout expires, causing all remaining workers to be KILLed by systemd
afterwards. In practice, this may
lead to workers being killed after significantly less time than specified with
the event-timeout. This is particularly significant during initrd processing:
systemd-udevd will be stopped by systemd when initrd-switch-root.target is
about to be isolated, which usually happens quickly after finding and mounting
the root FS.
If systemd-udevd is started by PID 1 (i.e. basically always), systemd will
kill both udevd and the workers after expiry of TimeoutStopSec. This is
actually better than the built-in udevd timeout, because it's more transparent
and configurable for users. This way users can avoid the boot problem
mentioned above by simply increasing TimeoutStopSec= in systemd-udevd.service.
If udevd is not started by systemd (standalone), this is still an
improvement. udevd will kill hanging workers when the event timeout is
reached, which is configurable via the udev.event_timeout= kernel
command line parameter. Before this patch, udevd would simply exit with
workers still running, which would then become zombie processes.
With the timeout removed, the sd_event_now() assertion in manager_exit() can be
dropped.
This makes it easier to see what unit_name_is_valid() returns at a glance.
The output is not whitespace clean, but I think it's good enough for a test.
We compile some C++ code for tests, and we simply used the default options for
it. When the previous commit raised the default warning level, we started
getting warnings from the C++ code. Let's add the most important options to the
C++ command line, so that we again get a warning-free compilation.
I don't think it makes sense to add *all* the options that we add for C to the
C++ flags, because testing them takes quite a while, and the C++ compilations
cover only small amounts of code, mostly to check that the headers have
compatible syntax.
Let's bump up the warning level instead of adding -Wextra by hand. This is the
approach recommended by meson. The idea is that all projects should be as
similar as possible to make it easier for users to switch between projects.
With meson-0.52.0-1.module_f31+6771+f5d842eb.noarch I get:
src/test/meson.build:19: WARNING: Overriding previous value of environment variable 'PATH' with a new one
When we're using *prepend*, the whole point is to modify an existing variable,
so meson shouldn't warn. But let's avoid the warning and shorten things by
setting the final value immediately.