Otherwise we miss quite a lot of coverage (mainly from logind,
hostnamed, networkd, and possibly others), since they can't write their
reports with `ProtectSystem=strict`.
With `ProtectSystem=strict` gcov is unable to write the *.gcda files
with the collected coverage. Let's add yet another switch to relax that
restriction and make gcov happy.
This addresses the following errors:
```
...
systemd-networkd[272469]: profiling:/systemd-meson-build/src/shared/libsystemd-shared-249.a.p/binfmt-util.c.gcda:Cannot open
systemd-networkd[272469]: profiling:/systemd-meson-build/src/shared/libsystemd-shared-249.a.p/base-filesystem.c.gcda:Cannot open
systemd-networkd[272469]: profiling:/systemd-meson-build/src/shared/libsystemd-shared-249.a.p/barrier.c.gcda:Cannot open
systemd-networkd[272469]: profiling:/systemd-meson-build/src/shared/libsystemd-shared-249.a.p/ask-password-api.c.gcda:Cannot open
systemd-networkd[272469]: profiling:/systemd-meson-build/src/shared/libsystemd-shared-249.a.p/apparmor-util.c.gcda:Cannot open
systemd-networkd[272469]: profiling:/systemd-meson-build/src/shared/libsystemd-shared-249.a.p/acpi-fpdt.c.gcda:Cannot open
...
```
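The relaxation might conceptually look like the following drop-in (only a sketch,
assuming the coverage data lives under /systemd-meson-build as in the log above;
it is not necessarily how the new switch is implemented):
```
mkdir -p /etc/systemd/system/systemd-networkd.service.d
cat >/etc/systemd/system/systemd-networkd.service.d/coverage.conf <<'EOF'
[Service]
# ReadWritePaths= punches a writable hole through ProtectSystem=strict,
# so gcov can update the *.gcda files in the build tree.
ReadWritePaths=/systemd-meson-build
EOF
systemctl daemon-reload
```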
When playing around with the coverage-enabled build I kept hitting
an issue where dnsmasq failed to start because the previous instance was
still shutting down. This should, hopefully, help to mitigate that.
I want to mark some files to be ignored for licensing purposes,
e.g. output from fuzzers and other samples. By using the gitattribute
machinery for this we don't need to design a custom protocol:
$ git check-attr generated test/test-sysusers/unhappy-*
test/test-sysusers/unhappy-1.expected-err: generated: set
test/test-sysusers/unhappy-1.input: generated: unspecified
test/test-sysusers/unhappy-2.expected-err: generated: set
test/test-sysusers/unhappy-2.input: generated: unspecified
test/test-sysusers/unhappy-3.expected-err: generated: set
test/test-sysusers/unhappy-3.input: generated: unspecified
Those are all consumed by our parser, so they all support comments.
I was considering whether they should have a license header at all,
but in the end I decided to add it because those files are often created
by copying parts of real unit files. And if the real ones have a license,
then those might as well. It's easier to add it than to make an exception.
We also have a bunch of files that contain some binary bytes but mostly
text, like the journal export format. For those, it is still quite
useful when the tools try to diff them, so let's not mark those.
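For reference, a .gitattributes entry along these lines would produce the
check-attr output above (a sketch; the actual patterns are defined in the
repository's .gitattributes):
```
cat >>.gitattributes <<'EOF'
# expected test output is generated, so skip it for licensing purposes
test/test-sysusers/*.expected-* generated
EOF
```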
When they work, they finish quickly, in under two minutes on slow machines; when
soft lockups happen in the nested virt machine, each test can run for something
like 5 hours, clogging up the CI infrastructure. It's better to fail faster than
that when qemu or the kernel is broken.
If the packages are built without libssl, simply skip the signature
checks.
Oct 06 21:21:32 H systemd[1]: systemd 249.1249.gcc4df1f787.0 running in system mode (+PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS -OPENSSL
...
Oct 06 21:22:21 H systemd[459]: Activation of signed Verity volume worked neither via the kernel nor in userspace, can't activate.
Follow-up for #20691
This verifies that the argv part of any exec_command parameters that
are sent through dbus is not empty at deserialization time.
There is an additional check in service.c service_verify() that again
checks if all exec_commands are correctly populated, after the service
has been loaded, whether through dbus or otherwise.
Fixes #20933.
If -Db_coverage=true is used at build time, then ARTIFACT_DIRECTORY/TEST-XX-FOO.coverage-info
files are created with code coverage data, and run-integration-test.sh also
merges them into ARTIFACT_DIRECTORY/merged.coverage-info since the coveralls.io
helpers accept only a single file.
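The merge step could be done roughly like this (a sketch assuming lcov produced
the per-test *.coverage-info files; the real logic lives in run-integration-test.sh):
```
args=()
for f in "$ARTIFACT_DIRECTORY"/TEST-*.coverage-info; do
    args+=(--add-tracefile "$f")
done
# combine all per-test tracefiles into one, as the coveralls.io helpers expect
lcov "${args[@]}" --output-file "$ARTIFACT_DIRECTORY/merged.coverage-info"
```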
megasearch.net was meant to be a non-existing bogus domain, and had been
for a long time. But it seems some domain grabber recently registered
it, and it's an actual thing now:
$ host megasearch.net
megasearch.net has address 207.148.248.143
This causes the test to fail randomly.
Use search.example.com instead, which yields:
$ host search.example.com
Host search.example.com not found: 3(NXDOMAIN)
Fixes: #18357
Since f833df3 we now actually use the seccomp rules defined in portable
profiles. However, the default one is too restrictive for sanitizers, as
it blocks certain syscalls required by LSan. Mitigate this by using the
'trusted' profile when running TEST-29-PORTABLE under sanitizers.
This adds a high level test verifying that syscall filtering in
combination with a simple architecture filter for the "native"
architecture works fine.
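The combination being exercised can be reproduced with something along these
lines (an illustration, not the actual test case):
```
# allow only the native architecture plus a baseline syscall set;
# a non-native binary or a forbidden syscall should then be rejected
systemd-run --wait --pipe \
    -p SystemCallArchitectures=native \
    -p SystemCallFilter=@system-service \
    /bin/true
```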
Currently there is no way to specify a path relative to which the
binaries executed by Exec*= should be looked up; the only option is to
specify an absolute path.
This change implements the functionality to specify a path relative to
which binaries executed by Exec*= can be found.
Closes #6308
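Usage could look roughly like this (a sketch; the directive is ExecSearchPath=
in the eventual implementation, and the unit name and paths below are made up):
```
cat >/etc/systemd/system/myapp.service <<'EOF'
[Service]
# non-absolute ExecStart= binaries are looked up under this path
ExecSearchPath=/opt/myapp/bin
ExecStart=myapp --serve
EOF
```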
With PID1, systemd-oomd has to be the client, because PID1 is the more
privileged process. Compared to a user manager, however, systemd-oomd is
the more privileged process, so user managers act as the clients while
systemd-oomd is the server.
The same varlink protocol is used between user managers and systemd-oomd
to deliver ManagedOOM property updates: systemd-oomd now sets up a varlink
server that user managers connect to in order to send ManagedOOM property
updates. We also add extra validation to make sure that non-root senders
don't send updates for cgroups they don't own.
The integration test was extended to repeat the chill/bloat test using
a user manager instead of PID1.
Unfortunately, when checking the return/exit code using &&, ||, if,
while, etc., `set -e` is disabled for all nested functions as well,
which leads to incorrectly ignored errors, *sigh*.
Example:
```
set -eu
set -o pipefail
task() {
    echo "task init"
    echo "this should fail"
    false
    nonexistentcommand
    echo "task end (we shouldn't be here)"
}
if ! task; then
    echo >&2 "The task failed"
    exit 1
else
    echo "The task passed"
fi
```
```
$ bash test.sh
task init
this should fail
test.sh: line 10: nonexistentcommand: command not found
task end (we shouldn't be here)
The task passed
$ echo $?
0
```
But without the `if`, everything works "as expected":
```
set -eu
set -o pipefail
task() {
    echo "task init"
    echo "this should fail"
    false
    nonexistentcommand
    echo "task end (we shouldn't be here)"
}
task
```
```
$ bash test.sh
task init
this should fail
$ echo $?
1
```
Wonderful.
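One way to keep `set -e` effective while still reacting to the failure is to run
the task in a separate shell process; this is only a sketch of the general
workaround, not necessarily what the test suite ended up doing:
```
set -eu
set -o pipefail
task() {
    echo "task init"
    false                 # aborts the child shell thanks to its own set -e
    echo "task end (we shouldn't be here)"
}
export -f task            # make the function visible to the child shell
# The outer `if` only tests the child's exit status, so `set -e` stays
# enabled inside the child.
if ! bash -euo pipefail -c task; then
    echo >&2 "The task failed"
    exit 1
fi
```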
Pressure remains > 1% after a kill for some time and could cause
testchill to get killed. Bumping the limit from 1% to 20% should help
with this.
Fixes #20118
The name `dracut_install` is a misnomer: the systemd integration test
suite is based on dracut's original test suite, and not all the
references to dracut have been edited out. Let's fix that.
For most fields, the text shown by `.id` is the value that should be set
in the unit file; however, for RestrictNamespaces, it is not. Changing
this to show the actual text makes it clearer to the user what change
actually needs to be made to the unit file.
Fixes#17433. Currently, if any of the validations we do before we
check start rate limiting fail, we can still enter a busy loop as
no rate limiting gets applied. A common occurrence of this scenario
is path units triggering a service that fails a condition check.
To fix the issue, we simply move up start rate limiting checks to
be the first thing we do when starting a unit. To achieve this,
we add a new method to the unit vtable and implement it for the
relevant unit types so that we can do the start rate limit checks
earlier on.
Otherwise we might mark tests where something crashes during shutdown as
successful, as happened in one of the recent TEST-01-BASIC runs:
```
testsuite-01.service: About to execute rm -f /failed /testok
testsuite-01.service: Forked rm as 606
testsuite-01.service: Executing: rm -f /failed /testok
testsuite-01.service: Changed dead -> start-pre
Starting TEST-01-BASIC...
...
Child 606 (rm) died (code=exited, status=0/SUCCESS)
testsuite-01.service: Child 606 belongs to testsuite-01.service.
testsuite-01.service: Control process exited, code=exited, status=0/SUCCESS (success)
testsuite-01.service: Got final SIGCHLD for state start-pre.
testsuite-01.service: Passing 0 fds to service
testsuite-01.service: About to execute sh -e -x -c "systemctl --state=failed --no-legend --no-pager >/failed ; systemctl daemon-reload ; echo OK >/testok"
testsuite-01.service: Forked sh as 607
testsuite-01.service: Changed start-pre -> start
testsuite-01.service: Executing: sh -e -x -c "systemctl --state=failed --no-legend --no-pager >/failed ; systemctl daemon-reload ; echo OK >/testok"
systemd-journald.service: Got notification message from PID 560 (FDSTORE=1)
...
testsuite-01.service: Child 607 belongs to testsuite-01.service.
testsuite-01.service: Main process exited, code=exited, status=0/SUCCESS (success)
testsuite-01.service: Deactivated successfully.
testsuite-01.service: Service will not restart (restart setting)
testsuite-01.service: Changed start -> dead
testsuite-01.service: Job 207 testsuite-01.service/start finished, result=done
[ OK ] Finished TEST-01-BASIC.
...
end.service: About to execute /bin/sh -x -c "systemctl poweroff --no-block"
end.service: Forked /bin/sh as 623
end.service: Executing: /bin/sh -x -c "systemctl poweroff --no-block"
...
end.service: Job 213 end.service/start finished, result=canceled
Caught <SEGV>, dumped core as pid 624.
Freezing execution.
CentOS Linux 8
Kernel 4.18.0-305.12.1.el8_4.x86_64 on an x86_64 (ttyS0)
H login: qemu-kvm: terminating on signal 15 from pid 80134 (timeout)
E: Test timed out after 600s
Spawning getter /root/systemd/build/journalctl -o export -D /var/tmp/systemd-test.0UYjAS/root/var/log/journal/ca6031c2491543fe8286c748258df8d1...
Finishing after writing 15125 entries
Spawning getter /root/systemd/build/journalctl -o export -D /var/tmp/systemd-test.0UYjAS/root/var/log/journal/remote...
Finishing after writing 0 entries
-rw-r-----. 1 root root 25165824 Aug 20 12:26 /var/tmp/systemd-test.0UYjAS/system.journal
TEST-01-BASIC RUN: Basic systemd setup [OK]
...
```
This reverts commit 491b736a49.
If the _statically_ linked version of busybox is installed, openSUSE doesn't need
any specific code.
A follow-up commit will make sure that the statically linked version of busybox is
installed in the busybox container.
NO_BUILD=1 indicates that we want to test systemd from the local system and not
the one from the local build. Hence there should be no need to call
find-build-dir.sh when NO_BUILD=1, especially since it's likely that the script
will fail to find a local build in this case.
This prevents find-build-dir.sh from emitting the 'Specify build directory with
$BUILD_DIR' message when NO_BUILD=1 and no local build can be found.
This introduces a behavior change though: systemd from the local system will
always be preferred when NO_BUILD=1 even if a local build can be found.
Previously, when Priority= was unspecified, networkd configured the rule with
the highest (=0) priority. This commit makes networkd distinguish between the
case where the setting is unspecified and the case where it is explicitly set
to Priority=0.
Notes:
1) If the priority is unspecified when configuring, the kernel dynamically picks
a priority for the rule.
2) The new behavior is consistent with the 'ip rule' command.
Replaces #15606.
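For illustration, the distinction looks roughly like this in a .network fragment
(file and interface names are made up):
```
cat >/etc/systemd/network/10-rule.network <<'EOF'
[Match]
Name=eth0

[RoutingPolicyRule]
From=192.168.1.0/24
# Explicitly request the highest priority; omitting Priority= now lets
# the kernel pick one dynamically instead of forcing 0.
Priority=0
EOF
```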
In some cases image names are unpredictable - some orchestrators/deployment
tools like to mangle names to suit their internal formats. In these cases,
the requirement that the extension-release file exactly matches the name of
the image that contains it cannot work.
Allow falling back to loading the first regular file in
/usr/lib/extension-release.d/ whose name starts with 'extension-release' and
that is tagged with a user.extension-release.strict extended attribute with a
true value, if the one with the expected name cannot be found.
Depending on the timing, socat will either get ECONNREFUSED or EPIPE
from systemd. The latter will cause it to exit(1) and subsequently the
test to fail.
We are not actually interested in the return code of socat, though. The
test is supposed to check whether rate limiting of a socket unit works
properly.
So ignore any failures from the socat invocation and instead check whether
test10.socket is in state "failed" with result "trigger-limit-hit" after
it has been triggered.
TriggerLimitIntervalSec= is set to 2s by default. A "sleep 10" should
give systemd enough time, even on slower machines, to reach the trigger
limit.
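A rough sketch of that check (socket name, endpoint, and timings are illustrative):
```
# hammer the socket; we deliberately ignore socat's exit status
for _ in $(seq 1 20); do
    socat - UNIX-CONNECT:/run/test10.sock </dev/null || :
done
sleep 10
# only the unit state matters
test "$(systemctl show --property=ActiveState --value test10.socket)" = failed
test "$(systemctl show --property=Result --value test10.socket)" = trigger-limit-hit
```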
For better readability, break the test into separate ExecStart lines.
Fixes #19154.
Skip a harmless error when running the tests on a system with a significantly
older systemd version (ldd tries to resolve the unprefixed RPATH for libsystemd.so.0
against the host library, which in this case is older than the libsystemd.so.0
already installed in $initdir).
The issue is triggered by installing test dependencies in install_missing_libraries().
Spotted on CentOS 8.
```
$ ldd /var/tmp/systemd-test.nZO11F/root/lib/systemd/tests/test-sd-device-thread
/var/tmp/systemd-test.nZO11F/root/lib/systemd/tests/test-sd-device-thread: /lib64/libsystemd.so.0: version `LIBSYSTEMD_240' not found (required by /var/tmp/systemd-test.nZO11F/root/lib/systemd/tests/test-sd-device-thread)
linux-vdso64.so.1 (0x00007fffb79d0000)
libclang_rt.asan-powerpc64le.so => /usr/lib64/clang/11.0.0/lib/linux/libclang_rt.asan-powerpc64le.so (0x00007fffb6ef0000)
libsystemd.so.0 => /lib64/libsystemd.so.0 (0x00007fffb6d20000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007fffb6cd0000)
libc.so.6 => /lib64/libc.so.6 (0x00007fffb6ab0000)
$ LD_LIBRARY_PATH=/var/tmp/systemd-test.nZO11F/root/lib64/ ldd /var/tmp/systemd-test.nZO11F/root/lib/systemd/tests/test-sd-device-thread
linux-vdso64.so.1 (0x00007fffaba80000)
libclang_rt.asan-powerpc64le.so => /usr/lib64/clang/11.0.0/lib/linux/libclang_rt.asan-powerpc64le.so (0x00007fffaafa0000)
libsystemd.so.0 => /var/tmp/systemd-test.nZO11F/root/lib64/libsystemd.so.0 (0x00007fffaa5f0000)
libpthread.so.0 => /var/tmp/systemd-test.nZO11F/root/lib64/libpthread.so.0 (0x00007fffaa5a0000)
libc.so.6 => /var/tmp/systemd-test.nZO11F/root/lib64/libc.so.6 (0x00007fffaa380000)
```
When `linux-headers` is installed on Arch Linux, it stores the module
source tree in the kernel module directory, which is then picked up by
`find` and we get a lot of harmless but annoying errors:
```
...
modprobe: FATAL: Module Kconfig.iosched not found in directory /lib/modules/5.13.7-arch1-1
modprobe: FATAL: Module Kconfig not found in directory /lib/modules/5.13.7-arch1-1
modprobe: FATAL: Module Kconfig not found in directory /lib/modules/5.13.7-arch1-1
modprobe: FATAL: Module dm-mpath.h not found in directory /lib/modules/5.13.7-arch1-1
modprobe: FATAL: Module dm-bio-prison-v2.h not found in directory /lib/modules/5.13.7-arch1-1
modprobe: FATAL: Module raid0.h not found in directory /lib/modules/5.13.7-arch1-1
...
```
Let's fix this by trying to install only kernel modules (*.ko files with
an optional compression).
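The narrowed-down selection could look something like this (a sketch; the
variable name is illustrative and the real logic lives in the test suite's
install helpers):
```
# pick up only kernel modules (*.ko, optionally compressed), not the
# Kconfig files and headers shipped by the linux-headers package
find "/lib/modules/$KERNEL_VER" -type f \
    \( -name '*.ko' -o -name '*.ko.gz' -o -name '*.ko.xz' -o -name '*.ko.zst' \)
```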
We still sometimes try to grep an empty strace log because strace is not
yet properly initialized. Let's make the check a bit smarter and wait
until strace is attached to PID 1 by checking the `TracerPid` field in
`/proc/1/status`.
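The wait could look roughly like this (a sketch of the idea):
```
# /proc/1/status reports TracerPid: 0 until a tracer (strace) attaches
until [ "$(awk '/^TracerPid:/ {print $2}' /proc/1/status)" -ne 0 ]; do
    sleep 0.1
done
```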
Sometimes the ldconfig.service might take a bit longer to finish,
causing spurious test timeouts:
```
[ 1025.858923] systemd[24]: ldconfig.service: Executing: /sbin/ldconfig -X
...
[ 1043.883620] systemd[1]: ldconfig.service: Main process exited, code=exited, status=0/SUCCESS (success)
...
Trying to halt container. Send SIGTERM again to trigger immediate
termination.
Container TEST-52-HONORFIRSTSHUTDOWN terminated by signal KILL.
E: Test timed out after 20s
```
Let's unify handling of boolean values throughout the test-functions
code, since we use 0/1, true/false, and yes/no almost randomly in many
places, which makes picking the right values during CI configuration a
real pain.
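One possible normalization helper, sketched here for illustration (not
necessarily the exact shape used in test-functions):
```
get_bool() {
    # treat 1/yes/true/y/on (case-insensitively) as true, everything else as false
    case "${1,,}" in
        1|yes|true|y|on) return 0 ;;
        *)               return 1 ;;
    esac
}

if get_bool "${NO_BUILD:-}"; then
    echo "testing the system-installed systemd"
fi
```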
Saving the journal for passing tests creates a huge amount of unneeded
data stored for each full test run. Add an env var to allow saving the
journal only for failed tests.
This reverts commit cb0e818f7c.
After this was merged, some design and implementation issues were discovered,
see the discussion in #18782 and #19385. They certainly can be fixed, but so
far nobody has stepped up, and we're nearing a release. Hopefully, this feature
can be merged again after a rework.
Fixes #19345.
Due to a little misunderstanding the last patch doesn't work as
expected, since test_create_image() is called only for the first image
(usually TEST-01-BASIC), and all subsequent images are then (possibly)
modified with test_append_files().
Follow-up to 179ca4d2b1.
It turns out the "supporting services" were run in _all_ tests if
TEST-01-BASIC was run as the first test (which is usually the case),
since with the original condition in test_create_image() we would skip
the masking and then propagate the change to the default image used by
other tests. This has been causing multiple bogus test timeouts
(especially when the hwdb was being rebuilt in tests with short
timeouts, like TEST-52-HONORFIRSTSHUTDOWN).
Let's "fix" this by making the call to mask_supporting_services()
uncoditional and override the test_create_image() function in
TEST-01-BASIC to avoid the masking in this single case.
When checking the unit state after `systemctl freeze|thaw` we can be
"too fast" and get the intermediate state (freezing/thawing) which we're
not interested in. Let's wait a bit and try to get the state again in
such cases to avoid unnecessary flakiness.
```
[ 29.390203] testsuite-38.sh[218]: + state=thawing
[ 29.390203] testsuite-38.sh[218]: + '[' thawing = running ']'
[ 29.390203] testsuite-38.sh[218]: + echo 'error: unexpected freezer state, expected: running, actual: thawing'
[ 29.390203] testsuite-38.sh[218]: error: unexpected freezer state, expected: running, actual: thawing
[ 29.390203] testsuite-38.sh[218]: + exit 1
```
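The retry could look roughly like this (a sketch; the unit name and timings are
illustrative):
```
state=
for _ in {1..10}; do
    state=$(systemctl show --property=FreezerState --value some.service)
    # skip over the transient freezing/thawing states
    [ "$state" = freezing ] || [ "$state" = thawing ] || break
    sleep .5
done
[ "$state" = running ]
```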
test-loop-block needs to run in qemu, so we are currently not
testing it in the CI. Run it by itself in a separate job from
TEST-02-UNITTESTS to avoid slowing that suite down.
Fixes https://github.com/systemd/systemd/issues/19966
Disable it in the bionic-* CI for now, as it's affected by
the same uevent ordering issue as TEST-50-DISSECT which makes
it flaky.
In many CI runs I noticed a race where we check the "active" state a bit
too early where the unit is still in the "inactive" state, causing the
`is-failed` check to fail. Mitigate this by waiting even if the unit is
in the inactive state and introduce a "safety net" which checks that
the unit is not restarting indefinitely or more often than it should (as
described in the original issue #3166).
Example:
```
[ 5.757784] testsuite-11.sh[216]: + systemctl --no-block start fail-on-restart.service
[ 5.853657] testsuite-11.sh[222]: ++ systemctl show --value --property ActiveState fail-on-restart.service
[ 5.946044] testsuite-11.sh[216]: + active_state=inactive
[ 5.946044] testsuite-11.sh[216]: + [[ inactive == \a\c\t\i\v\a\t\i\n\g ]]
[ 5.946044] testsuite-11.sh[216]: + [[ inactive == \a\c\t\i\v\e ]]
[ 5.946044] testsuite-11.sh[216]: + systemctl is-failed fail-on-restart.service
[ 5.946816] systemd[1]: fail-on-restart.service: Passing 0 fds to service
[ 5.946913] systemd[1]: fail-on-restart.service: About to execute false
[ 5.947011] systemd[1]: fail-on-restart.service: Forked false as 228
[ 5.947093] systemd[1]: fail-on-restart.service: Changed dead -> start
[ 5.947172] systemd[1]: Starting Fail on restart...
[ 5.947272] systemd[228]: fail-on-restart.service: Executing: false
[ 5.960553] testsuite-11.sh[227]: activating
[ 5.965188] testsuite-11.sh[216]: + exit 1
[ 6.011838] systemd[1]: Received SIGCHLD from PID 228 (4).
[ 6.012510] systemd[1]: fail-on-restart.service: Main process exited, code=exited, status=1/FAILURE
[ 6.012638] systemd[1]: fail-on-restart.service: Failed with result 'exit-code'.
[ 6.012834] systemd[1]: fail-on-restart.service: Service will restart (restart setting)
[ 6.012963] systemd[1]: fail-on-restart.service: Changed running -> failed
[ 6.013081] systemd[1]: fail-on-restart.service: Unit entered failed state.
```
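A rough sketch of the wait-and-check approach described above (unit name and
limits are illustrative):
```
for _ in {1..30}; do
    state=$(systemctl show --property=ActiveState --value fail-on-restart.service)
    # keep waiting through both "activating" and the racy "inactive" window
    [ "$state" = activating ] || [ "$state" = inactive ] || break
    sleep .5
done
systemctl is-failed fail-on-restart.service
# safety net: the unit must not have restarted more often than configured
test "$(systemctl show --property=NRestarts --value fail-on-restart.service)" -le 3
```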
The three-argument match() is a GNU AWK extension, thus breaking
compatibility with mawk (used on Ubuntu/Debian, for example). Let's
replace it with a (hopefully) more portable sed expression to drop the
inadvertently introduced gawk dependency.
Fixes: #19957
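Purely as an illustration of the kind of replacement (the pattern here is made up):
```
# gawk-only: three-argument match() with a capture array
gawk 'match($0, /foo=([0-9]+)/, m) { print m[1] }' input
# portable alternative: a sed capture group
sed -n 's/.*foo=\([0-9][0-9]*\).*/\1/p' input
```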
The currently hardcoded value works with the default configuration, but
breaks when QEMU_MEM != 512M (in sanitizer runs, for example).
```
# QEMU_MEM=1G make -C test/TEST-36-NUMAPOLICY/ run
make: Entering directory '/home/fsumsal/repos/@systemd/systemd/test/TEST-36-NUMAPOLICY'
TEST-36-NUMAPOLICY RUN: test NUMAPolicy= and NUMAMask= options
+ /bin/qemu-kvm -smp 8 -net none -m 1G -nographic -kernel /boot/vmlinuz-5.12.5-300.fc34.x86_64 -drive format=raw'
qemu-kvm: total memory for NUMA nodes (0x20000000) should equal RAM size (0x40000000)
E: QEMU failed with exit code 1
```
It's hard to trigger the failure to exit the rate limit state in
isolation, as it needs multiple event sources in order to show that it
gets stuck in the queue. Hence this is an extended test.
Previously, if IPv6LinkLocalAddressGenerationMode= was not set, we
determined the address generation mode based on the result of reading the
stable_secret sysctl value. With this change, the mode is determined by
whether a secret address is specified in the new setting.
Closes #19622.
For "systemd-tmpfiles --cleanup", when the "Age" parameter
is specified, the criteria for deletion is determined from
the path's last modification timestamp ("mtime"), its last
access timestamp ("atime") and its last status change
timestamp ("ctime").
For instance, if one of those paths to be cleaned up are
opened, it results in the modification of "atime", which
results file system entry to not be removed because the
default aging algorithm would skip the entry.
Add an optional "age-by" argument by extending the "Age"
parameter to restrict the clean-up for a particular type
of file timestamp, which can be specified in "tmpfiles.d"
as follows:
[age-by:]cleanup-age, where age-by is "[abcmACBM]+"
For example:
d /foo/bar - - - abM:1m -
Would clean up any files in "/foo/bar" that have not been accessed
or created within the last minute, and any directories that have not
been modified within the last minute.
Fixes: #17002
Add the '=' action modifier that instructs tmpfiles.d to check the file
type of a path and remove objects that do not match before trying to
open or create the path.
BUG=chromium:1186405
TEST=./test/test-systemd-tmpfiles.py "$(which systemd-tmpfiles)"
Change-Id: If807dc0db427393e9e0047aba640d0d114897c26
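The '=' modifier described above might be used like this (an illustrative
sketch; the path is made up):
```
cat >/etc/tmpfiles.d/foo.conf <<'EOF'
# if /var/cache/foo exists but is not a directory, remove it first,
# then (re)create it as a directory
d= /var/cache/foo 0755 root root -
EOF
```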
When fuzzing, the following happens:
- we parse 'data' and produce an argv array,
- one of the items in argv is assigned to arg_host,
- the argv array is subsequently freed by strv_freep(), and arg_host becomes a dangling pointer.
In normal use, argv is static, so arg_host can never become a dangling pointer.
In fuzz-systemctl-parse-argv, if we repeatedly parse the same array, we
have some dangling pointers while we're in the middle of parsing. If we parse
the same array a second time, at the end all the dangling pointers will have been
replaced again. But for a short time, if parsing one of the arguments uses another
argument, we would use a dangling pointer.
Such a case occurs when we have --host=… --boot-loader-entry=help. The latter calls
acquire_bus() which uses arg_host.
I'm not particularly happy with making the code more complicated just for
fuzzing, but I think it's better to resolve this, even if the issue cannot
occur in normal invocations, than to deal with fuzzer reports.
Should fix https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=31714.
Without waiting for the network to be online, there is a race condition between
systemd-networkd actually setting the new values and the test checking those
values.
This also sets the link down before restarting systemd-networkd, to avoid
the wait for online being a no-op.
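The ordering could look roughly like this (a sketch; interface name, binary
path, and timeout are illustrative):
```
# take the link down first so wait-online actually has something to wait for
ip link set dev test1 down
systemctl restart systemd-networkd
/usr/lib/systemd/systemd-networkd-wait-online --interface=test1 --timeout=30
# only now check the values applied by networkd
```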
This is like a really strong version of Wants=, that keeps starting the
specified unit if it is ever found inactive.
This is an alternative to Restart= inside a unit, acknowledging the fact
that whether to keep restarting the unit is sometimes not a property of
the unit itself but the state of the system.
This implements a part of what #4263 requests, i.e. there's no
distinction between "always" and "opportunistic". We just dumbly
implement "always" and become active whenever we see no job queued for
an inactive unit that is supposed to be upheld.
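Usage could look like the following sketch (the directive is Upholds= in the
eventual implementation; unit names here are made up):
```
cat >/etc/systemd/system/uphold-example.target <<'EOF'
[Unit]
Description=Keep important.service running
# restart important.service whenever it is found inactive,
# as long as this unit itself is active
Upholds=important.service
EOF
```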
This is similar to OnFailure= but is activated whenever a unit returns
into inactive state successfully.
I was always afraid of adding this, since it effectively allows building
loops and makes our engine Turing complete, but it pretty much already
was, just in a hidden way.
Given that we have per-unit ratelimits as well as an event-loop-global
ratelimit, I feel safe adding this finally, given it actually is useful.
Fixes: #13386
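For illustration (the directive this describes is OnSuccess= in current
systemd; the unit names below are made up):
```
cat >/etc/systemd/system/backup.service.d/on-success.conf <<'EOF'
[Unit]
# activated whenever backup.service deactivates successfully
OnSuccess=backup-notify.service
EOF
```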
This takes inspiration from PropagatesReloadTo=, but propagates
stop jobs instead of reload jobs.
This is defined based on exactly two atoms: UNIT_ATOM_PROPAGATE_STOP +
UNIT_ATOM_RETROACTIVE_STOP_ON_STOP. The former ensures that when the
unit the dependency is originating from is stopped based on user
request, we'll propagate the stop job to the target unit, too. In
addition, when the originating unit suddenly stops from external causes,
the stopping is propagated too. Note that this does *not* include the
UNIT_ATOM_CANNOT_BE_ACTIVE_WITHOUT atom (which is used by BoundBy=),
i.e. this dependency is purely about propagating "edges" and not
"levels", i.e. it's about propagating specific events, instead of
continuous states.
This is supposed to be useful for dependencies between .mount units and
their backing .device units. So far we either placed a BindsTo= or
Requires= dependency between them. The former gave a very clear binding
of the two units together, but was problematic if users establish
mounts manually with different block device sources than our
configuration defines, as we might then come to the conclusion that the
backing device was absent and thus we need to unmount again what the user
mounted. By combining Requires= with the new StopPropagatedFrom= (i.e.
the inverse of PropagatesStopTo=) we can get behaviour that matches BindsTo=
in every single atom but one: UNIT_ATOM_CANNOT_BE_ACTIVE_WITHOUT is
absent, and hence the level-triggered logic doesn't apply.
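A sketch of the mount/device pairing described above (unit names are hypothetical):
```
cat >/etc/systemd/system/data.mount.d/device-deps.conf <<'EOF'
[Unit]
Requires=dev-sdb1.device
# propagate a stop of the device to the mount, without the level-triggered
# "cannot be active without" semantics that BindsTo= would add
StopPropagatedFrom=dev-sdb1.device
EOF
```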
Replaces: #11340
Add quotes around use of $env{MODALIAS} in rules.d/80-drivers.rules. The
modalias can contain whitespace, for example when it is dynamically generated
using device or vendor IDs.
There is nothing we can configure in udevd for loopback interfaces:
no ethtool configs can be applied, and the MAC address and interface name
should not be touched.
m4 is required to build the test SELinux module:
```
[ 31.321789] sh[483]: /bin/sh: line 1: m4: command not found
[ 31.882668] sh[488]: Compiling targeted systemd_test module
[ 32.120862] sh[492]: /bin/sh: line 1: m4: command not found
[ 32.159897] sh[458]: make: *** [/usr/share/selinux/devel/include/Makefile:156: tmp/systemd_test.mod] Error 127
```
I wanted to use jinja2 templating here too, but it's hard to get right:
custom_target() strips the executable bit by default (unlike configure_file
apparently). custom_target() has install_mode setting, but it was only added
in meson-0.47, so it can't be used while we support 0.46. And without the
executable bit the test is not invoked properly. For example, "root-unittests"
in the debian package calls test-* after installation, so the executable bit
there is necessary. It would be possible to adjust the file mode after the
fact, but it would make things more complicated.
So let's use the native meson substitutions here. We don't need anything more
fancy.
m4 was hugely popular in the past, because autotools, automake, flex, bison and
many other things used it. But nowadays it is much less popular, and might not
even be installed in the buildroot. (m4 is small, so it doesn't make a big difference.)
(FWIW, Fedora dropped make from the buildroot now,
https://fedoraproject.org/wiki/Changes/Remove_make_from_BuildRoot. I think it's
reasonable to assume that m4 will be dropped at some point too.)
The main reason to drop m4 is that the syntax is not very nice, and we should
minimize the number of different syntaxes that we use. We still have two
(configure_file() with @FOO@ and jinja2 templates with {{foo}} and the
pythonesque conditional expressions), but at least we don't need m4 (with
m4_dnl and `quotes').
Printing stdout and stderr from a failed test makes it harder to
interpret what the specific problem was; instead let's print out
the lines in order as we got them when the test was run.
Also save failed test output to a file if ARTIFACT_DIRECTORY is defined.
Meson 0.58 has gotten quite bad with emitting a message every time
a quoted command is used:
```
Program /home/zbyszek/src/systemd-work/tools/meson-make-symlink.sh found: YES (/home/zbyszek/src/systemd-work/tools/meson-make-symlink.sh)
Program sh found: YES (/usr/bin/sh)
Program sh found: YES (/usr/bin/sh)
Program sh found: YES (/usr/bin/sh)
Program sh found: YES (/usr/bin/sh)
Program sh found: YES (/usr/bin/sh)
Program sh found: YES (/usr/bin/sh)
Program xsltproc found: YES (/usr/bin/xsltproc)
Configuring custom-entities.ent using configuration
Message: Skipping bootctl.1 because ENABLE_EFI is false
Program ln found: YES (/usr/bin/ln)
Program ln found: YES (/usr/bin/ln)
Program ln found: YES (/usr/bin/ln)
Program ln found: YES (/usr/bin/ln)
Program ln found: YES (/usr/bin/ln)
Program ln found: YES (/usr/bin/ln)
Message: Skipping journal-remote.conf.5 because HAVE_MICROHTTPD is false
Message: Skipping journal-upload.conf.5 because HAVE_MICROHTTPD is false
Program ln found: YES (/usr/bin/ln)
Program ln found: YES (/usr/bin/ln)
Message: Skipping loader.conf.5 because ENABLE_EFI is false
Program ln found: YES (/usr/bin/ln)
Program ln found: YES (/usr/bin/ln)
Program ln found: YES (/usr/bin/ln)
Program ln found: YES (/usr/bin/ln)
Program ln found: YES (/usr/bin/ln)
Program ln found: YES (/usr/bin/ln)
Program ln found: YES (/usr/bin/ln)
Program ln found: YES (/usr/bin/ln)
Program ln found: YES (/usr/bin/ln)
Program ln found: YES (/usr/bin/ln)
Program ln found: YES (/usr/bin/ln)
Program ln found: YES (/usr/bin/ln)
Program ln found: YES (/usr/bin/ln)
Program ln found: YES (/usr/bin/ln)
Program ln found: YES (/usr/bin/ln)
Program ln found: YES (/usr/bin/ln)
Program ln found: YES (/usr/bin/ln)
Program ln found: YES (/usr/bin/ln)
Program ln found: YES (/usr/bin/ln)
Program ln found: YES (/usr/bin/ln)
Program ln found: YES (/usr/bin/ln)
Program ln found: YES (/usr/bin/ln)
...
```
Let's suffer only one message for each command. Hopefully we can silence
even this when https://github.com/mesonbuild/meson/issues/8642 is
resolved.