It's hard to trigger the failure to exit the rate-limit state in
isolation, as it needs multiple event sources in order to show that it
gets stuck in the queue. Hence this is an extended test.
Previously, if IPv6LinkLocalAddressGenerationMode= was not set, we
defined the address generation mode based on the result of reading the
stable_secret sysctl value. Now the mode is determined by whether
a secret address is specified in the new setting.
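For illustration only (the option name below is an assumption, since this
message does not spell out the new setting; it matches systemd.network's
IPv6StableSecretAddress=):
```
[Network]
# Sketch: with a secret specified, stable-privacy generation is used for
# the link-local address; without one, the default mode (EUI-64) applies.
IPv6StableSecretAddress=0123:4567:89ab:cdef:0123:4567:89ab:cdef
```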
Closes #19622.
For "systemd-tmpfiles --cleanup", when the "Age" parameter
is specified, the criteria for deletion is determined from
the path's last modification timestamp ("mtime"), its last
access timestamp ("atime") and its last status change
timestamp ("ctime").
For instance, if one of those paths to be cleaned up are
opened, it results in the modification of "atime", which
results file system entry to not be removed because the
default aging algorithm would skip the entry.
Add an optional "age-by" argument by extending the "Age"
parameter to restrict the clean-up to a particular type
of file timestamp, which can be specified in "tmpfiles.d"
as follows:
[age-by:]cleanup-age, where age-by is "[abcmACBM]+"
For example:
d /foo/bar - - - abM:1m -
Would clean up any files in "/foo/bar" that were neither
accessed nor created within the last minute, and any
directories that were not modified within the last minute.
Fixes: #17002
Add the '=' action modifier that instructs tmpfiles.d to check the file
type of a path and remove objects that do not match before trying to
open or create the path.
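A hypothetical tmpfiles.d line using the modifier (path and mode invented
for illustration):
```
# If /var/cache/foo exists but is not a directory, remove it first,
# then create the directory as usual.
d= /var/cache/foo 0755 root root -
```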
BUG=chromium:1186405
TEST=./test/test-systemd-tmpfiles.py "$(which systemd-tmpfiles)"
Change-Id: If807dc0db427393e9e0047aba640d0d114897c26
When fuzzing, the following happens:
- we parse 'data' and produce an argv array,
- one of the items in argv is assigned to arg_host,
- the argv array is subsequently freed by strv_freep(), and arg_host becomes a dangling pointer.
In normal use, argv is static, so arg_host can never become a dangling pointer.
In fuzz-systemctl-parse-argv, if we repeatedly parse the same array, we
have some dangling pointers while we're in the middle of parsing. If we parse
the same array a second time, at the end all the dangling pointers will have been
replaced again. But for a short time, if parsing one of the arguments uses another
argument, we would use a dangling pointer.
Such a case occurs when we have --host=… --boot-loader-entry=help. The latter calls
acquire_bus() which uses arg_host.
I'm not particularly happy with making the code more complicated just for
fuzzing, but I think it's better to resolve this, even if the issue cannot
occur in normal invocations, than to deal with fuzzer reports.
Should fix https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=31714.
Without waiting for online, there is a race condition between systemd-networkd
actually setting the new values and the test checking those values.
This also sets the link down before restarting systemd-networkd, to avoid
the wait for online being a no-op.
This is like a really strong version of Wants=, that keeps starting the
specified unit if it is ever found inactive.
This is an alternative to Restart= inside a unit, acknowledging the fact
that whether to keep restarting the unit is sometimes not a property of
the unit itself but the state of the system.
This implements a part of what #4263 requests, i.e. there's no
distinction between "always" and "opportunistic". We just dumbly
implement "always" and become active whenever we see no job queued for
an inactive unit that is supposed to be upheld.
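Assuming the new dependency is spelled Upholds= (the directive name is not
given in this message), usage could look like:
```
# a.service (sketch): keep b.service running for as long as a.service
# is active, starting it again whenever it is found inactive.
[Unit]
Upholds=b.service
```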
This is similar to OnFailure= but is activated whenever a unit returns
into inactive state successfully.
I was always afraid of adding this, since it effectively allows building
loops and makes our engine Turing complete, but it pretty much already
was; it was just hidden.
Given that we have per-unit rate limits as well as an event-loop-global
rate limit, I feel safe adding this finally, given it actually is useful.
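Assuming this is the OnSuccess= setting (again, the name is not spelled out
here), a minimal sketch:
```
# main.service (sketch): when this unit deactivates successfully,
# cleanup.service is started.
[Unit]
OnSuccess=cleanup.service
```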
Fixes: #13386
This takes inspiration from PropagatesReloadTo=, but propagates
stop jobs instead of restart jobs.
This is defined based on exactly two atoms: UNIT_ATOM_PROPAGATE_STOP +
UNIT_ATOM_RETROACTIVE_STOP_ON_STOP. The former ensures that when the
unit the dependency is originating from is stopped based on user
request, we'll propagate the stop job to the target unit, too. In
addition, when the originating unit suddenly stops from external causes,
the stopping is propagated too. Note that this does *not* include the
UNIT_ATOM_CANNOT_BE_ACTIVE_WITHOUT atom (which is used by BoundBy=),
i.e. this dependency is purely about propagating "edges" and not
"levels", i.e. it's about propagating specific events, instead of
continuous states.
This is supposed to be useful for dependencies between .mount units and
their backing .device units. So far we either placed a BindsTo= or
Requires= dependency between them. The former gave a very clear binding
of the two units together, but was problematic if users establish
mounts manually with different block device sources than our
configuration defines, as we might then come to the conclusion that the
backing device was absent and thus need to unmount again what the user
mounted. By combining Requires= with the new StopPropagatedFrom= (i.e.
the inverse PropagateStopTo=) we can get behaviour that matches BindsTo=
in every single atom but one: UNIT_ATOM_CANNOT_BE_ACTIVE_WITHOUT is
absent, and hence the level-triggered logic doesn't apply.
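A sketch of the mount/device pairing described above (unit and device names
are made up; real mount units are usually generated from /etc/fstab):
```
# data.mount
[Unit]
Requires=dev-sdb1.device
StopPropagatedFrom=dev-sdb1.device

[Mount]
What=/dev/sdb1
Where=/data
```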
Replaces: #11340
Add quotes around use of $env{MODALIAS} in rules.d/80-drivers.rules. The
modalias can contain whitespace, for example when it is dynamically generated
using device or vendor IDs.
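Paraphrased (not the literal rule text), the change amounts to quoting the
expansion in the kmod invocation:
```
# before
RUN{builtin}+="kmod load $env{MODALIAS}"
# after
RUN{builtin}+="kmod load '$env{MODALIAS}'"
```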
There is nothing we can configure in udevd for loopback interfaces:
no ethtool configs can be applied, and the MAC address and interface name
should not be touched.
m4 is required to build the test SELinux module:
```
[ 31.321789] sh[483]: /bin/sh: line 1: m4: command not found
[ 31.882668] sh[488]: Compiling targeted systemd_test module
[ 32.120862] sh[492]: /bin/sh: line 1: m4: command not found
[ 32.159897] sh[458]: make: *** [/usr/share/selinux/devel/include/Makefile:156: tmp/systemd_test.mod] Error 127
```
I wanted to use jinja2 templating here too, but it's hard to get right:
custom_target() strips the executable bit by default (unlike configure_file
apparently). custom_target() has an install_mode setting, but it was only added
in meson-0.47, so it can't be used while we support 0.46. And without the
executable bit the test is not invoked properly. For example, "root-unittests"
in the debian package calls test-* after installation, so the executable bit
there is necessary. It would be possible to adjust the file mode after the
fact, but it would make things more complicated.
So let's use the native meson substitutions here. We don't need anything more
fancy.
m4 was hugely popular in the past, because autotools, automake, flex, bison and
many other things used it. But nowadays it is much less popular, and might not even
be installed in the buildroot. (m4 is small, so it doesn't make a big difference.)
(FWIW, Fedora dropped make from the buildroot now,
https://fedoraproject.org/wiki/Changes/Remove_make_from_BuildRoot. I think it's
reasonable to assume that m4 will be dropped at some point too.)
The main reason to drop m4 is that the syntax is not very nice, and we should
minimize the number of different syntaxes that we use. We still have two
(configure_file() with @FOO@ and jinja2 templates with {{foo}} and the
pythonesque conditional expressions), but at least we don't need m4 (with
m4_dnl and `quotes').
Printing stdout and stderr separately for a failed test makes it harder to
interpret what the specific problem was; instead, let's print out
the lines in the order we got them when the test was run.
Also save failed test output to a file if ARTIFACT_DIRECTORY is defined.
Meson 0.58 has gotten quite bad with emitting a message every time
a quoted command is used:
```
Program /home/zbyszek/src/systemd-work/tools/meson-make-symlink.sh found: YES (/home/zbyszek/src/systemd-work/tools/meson-make-symlink.sh)
Program sh found: YES (/usr/bin/sh)
Program sh found: YES (/usr/bin/sh)
Program sh found: YES (/usr/bin/sh)
Program sh found: YES (/usr/bin/sh)
Program sh found: YES (/usr/bin/sh)
Program sh found: YES (/usr/bin/sh)
Program xsltproc found: YES (/usr/bin/xsltproc)
Configuring custom-entities.ent using configuration
Message: Skipping bootctl.1 because ENABLE_EFI is false
Program ln found: YES (/usr/bin/ln)
Program ln found: YES (/usr/bin/ln)
Program ln found: YES (/usr/bin/ln)
Program ln found: YES (/usr/bin/ln)
Program ln found: YES (/usr/bin/ln)
Program ln found: YES (/usr/bin/ln)
Message: Skipping journal-remote.conf.5 because HAVE_MICROHTTPD is false
Message: Skipping journal-upload.conf.5 because HAVE_MICROHTTPD is false
Program ln found: YES (/usr/bin/ln)
Program ln found: YES (/usr/bin/ln)
Message: Skipping loader.conf.5 because ENABLE_EFI is false
Program ln found: YES (/usr/bin/ln)
Program ln found: YES (/usr/bin/ln)
Program ln found: YES (/usr/bin/ln)
Program ln found: YES (/usr/bin/ln)
Program ln found: YES (/usr/bin/ln)
Program ln found: YES (/usr/bin/ln)
Program ln found: YES (/usr/bin/ln)
Program ln found: YES (/usr/bin/ln)
Program ln found: YES (/usr/bin/ln)
Program ln found: YES (/usr/bin/ln)
Program ln found: YES (/usr/bin/ln)
Program ln found: YES (/usr/bin/ln)
Program ln found: YES (/usr/bin/ln)
Program ln found: YES (/usr/bin/ln)
Program ln found: YES (/usr/bin/ln)
Program ln found: YES (/usr/bin/ln)
Program ln found: YES (/usr/bin/ln)
Program ln found: YES (/usr/bin/ln)
Program ln found: YES (/usr/bin/ln)
Program ln found: YES (/usr/bin/ln)
Program ln found: YES (/usr/bin/ln)
Program ln found: YES (/usr/bin/ln)
...
```
Let's suffer only one message for each command. Hopefully we can silence
even this when https://github.com/mesonbuild/meson/issues/8642 is
resolved.
This fixes an issue introduced by commit 954c77c251.
For some reason, setting a default ACL on $TESTDIR makes TEST-29-PORTABLE
fail. Let's drop the default ACL, and set the ACL on saved results instead.
Fixes #19519.
Fixes https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=33881.
Not only would we duplicate unknown input on the stack, we would do it
over and over. So let's first check that the input has a reasonable length,
and also allocate just one fixed-size buffer.
"systemd-testsuite" gets in the way when grepping for "testsuite-*.sh".
Also, the name doesn't matter for anything, so let's just use something
very short to save space.
When editing this function in 7bf20e48bd, I couldn't
decide whether to initialize ret at the top and only reset it on success, or
whether to assign a value in each branch. In the end I did neither ;( So if the
test finished without creating any of the result files, we would echo a
message, but return "success".
But there was bigger confusion with /failed: some tests create it empty, some
don't. I think we may want to do away with pre-creation of /failed completely, and
assume the test failed unless /testok is found. But I'm leaving that for later
rework. For now let's just make sure we return success only if /testok
or /skipped is found.
This commit applies the filtering imposed by LogLevelMax on a unit's
processes to messages logged by PID1 about the unit as well.
The target use case for this feature is a service that runs on a timer
many times an hour, where the system administrator decides that writing
a generic success message to the journal every few minutes or seconds
adds no diagnostic value and isn't worth the clutter or disk I/O.
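For example, such a timer-activated service could set LogLevelMax= to
suppress the routine messages (a sketch; unit contents are illustrative):
```
# report.service
[Service]
Type=oneshot
ExecStart=/usr/local/bin/report
# With this change, PID1's info-level messages about the unit
# (e.g. routine start/stop notices) are filtered out as well.
LogLevelMax=warning
```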
Basically the same scenario as in
a33e2692e1, where `awk` exits as soon
as it finds a match, thus sending SIGPIPE to `ldd` if it's not fast
enough. That, in combination with `set -o pipefail`, causes random &
unexpected failures, like:
```
No journal files were found.
-rw-r----- 1 root root 16777216 Apr 30 10:31
/var/tmp/TEST-01-BASIC_sanitizers-nspawn/system.journal
TEST-01-BASIC RUN: Basic systemd setup [OK]
systemd is not linked against the ASan DSO
gcc does this by default, for clang compile with -shared-libasan
make: *** [Makefile:2: clean-again] Error 1
make: Leaving directory '/build/test/TEST-01-BASIC'
```
Now, RoutesToDNS= and RoutesToNTP= are enabled by default for the DHCPv4
client. So, if the DHCP server picks up DNS or NTP servers from the uplink,
then the routes may break the CI environment.
Hopefully fixes #19463.
https://github.com/systemd/systemd/pull/19316 failed with:
```
[1065/1670] Linking target systemd-hwdb
--- command ---
14:28:29 /root/src/test/hwdb-test.sh
--- stdout ---
./systemd-hwdb does not exist, please build first
```
I'm not sure what is going on here… In principle meson says that tests may be
called from any directory, but in practice it was always the build directory.
So far we were relying on systemd-hwdb being present in '.', and this worked.
Either way, it's nicer to pass the exact path, so let's do that.
This allows limiting units to machines that run on a certain firmware
type. For machines defined by device tree, checking against the machine's
compatible string is also possible.
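Assuming the new setting is ConditionFirmware= (the name is not given in
this message), usage might look like:
```
[Unit]
# Run only on UEFI systems ...
ConditionFirmware=uefi
# ... or, on device-tree machines, match the compatible string:
#ConditionFirmware=device-tree-compatible(raspberrypi,4-model-b)
```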
Specifying the test number manually is tedious and prone to errors (as
recently proven). Since we have all the necessary data to work out the
test number, let's do it automagically.
We want to use the result in a shell pipeline, hence use -P mode (pipe
mode) instead of -t mode (interactive tty mode) for systemd-run.
This shouldn't change much about the test, but is slightly more correct
(and quicker).
We have to invoke the tests as superuser, and not being able to read
the journal as the invoking user is annoying. I don't think there are
any security considerations here, since the invoking user can already
put arbitrary code in the Makefile and test scripts which get executed
with root privileges.
The logic to query test state was rather complex. I don't quite grok the point
of ret=$((ret+1))… But afaics, the precise result was always ignored by the
caller anyway.
We would remove stuff only if successful, so repeated invocations would
trivially fail.
Also drop "-f", so that if we expect to remove something, it must be there.
oomd works way better with swap, so let's make the test less flaky by
configuring a swap device for it. This also allows us to drop the ugly
`cat`s from the load-generating script.
Cover the case where a service is recovered out of the reloading state via
a restart configured with Restart=.
Signed-off-by: Peter Morrow <pemorrow@linux.microsoft.com>