The script currently parses either 'clean' or 'clean-again' as wanting
to clean both before and after running tests. This fixes that to split
the action up; clean runs before tests, clean-again after; and also
verifies the parameter(s) before passing them to make.
Add a NO_BUILD variable to allow testing without a local build, by installing
the locally installed systemd files into the image.
This currently only works for Debian-like distros that use the
tools 'apt' and 'dpkg' for package management.
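A hedged usage sketch (the exact make targets here are an assumption, based on how
the integration tests are usually invoked per test directory):
```
# Run an integration test against the distro-installed systemd instead of a local build
NO_BUILD=1 make -C test/TEST-01-BASIC clean setup run
```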
The $BUILD_DIR is only used in test-functions and doesn't need to
be specified in any other scripts. Additionally, moving the BUILD_DIR logic
completely into test-functions keeps later patches simpler, which is needed
to allow the integration test suite to be run against locally installed
binaries instead of built ones.
As LLDP is not involved in the link status, `networkctl lldp`
may not provide the expected information even if the link is in
the 'configured' state.
Fixes #17360.
Building custom images for each test takes a lot of time.
Build the default one, and if the test needs incompatible changes
just copy it and extend it instead.
This parameter allows configuring the activation policy for an interface,
meaning how it manages the interface's administrative state (IFF_UP flag).
The policy can be configured to bring the interface either up or down when
the interface is (re)configured, to always force the interface either up or
down, or to never change the interface administrative state.
If the interface is bound with BindCarrier=, its administrative state is
controlled by the interface(s) it's bound to, and this parameter is forced
to 'bound'.
This changes the default behavior of how systemd-networkd sets the IFF_UP
flag; previously, it was set up (if not already up) every time the
link_joined() function was called. Now, with the default ActivationPolicy=
setting of 'up', it will only set the IFF_UP flag once, the first time
link_joined() is called, during an interface's configuration; and on
the first link_joined() call each time the interface is reconfigured.
Fixes: #3031
Fixes: #17437
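As an illustration, a minimal .network fragment using the new setting might look like
this (the value name follows the policies described above and is an assumption; see
systemd.network(5) for the authoritative list):
```
[Match]
Name=eth1

[Link]
# Never bring the interface up automatically, and force it back down if something else does
ActivationPolicy=always-down
```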
This reverts commit 73484ecff9.
3976f372ae moved libudev.so to be built in the
main directory, so this addition to $LD_LIBRARY_PATH is now obsolete.
After that commit, we build the following shared libraries:
build/libnss_myhostname.so.2
build/libnss_mymachines.so.2
build/libnss_resolve.so.2
build/libnss_systemd.so.2
build/libsystemd.so.0.30.0
build/libudev.so.1.7.0
build/pam_systemd.so
build/pam_systemd_home.so
build/src/boot/efi/stub.so
build/src/boot/efi/systemd_boot.so
build/src/shared/libsystemd-shared-247.so
EFI stubs don't matter, and libsystemd-shared-nnn.so is loaded through rpath
and doesn't need to (and shouldn't) be in $LD_LIBRARY_PATH. In effect, we only
ever need to add the main build directory to the search path.
Not all optional libraries may be available on developers' machines,
so log and skip them.
Also, some pkg-config files are broken (e.g., tss2 on Debian stable), so
skip them if the required variables are missing, and improve the log messages.
Allow setting up new bind mounts for a service at runtime (via either
D-Bus or a new 'systemctl bind' verb) with a new helper that forks into
the unit's mount namespace.
Add a new integration test to cover this.
This is useful for zero-downtime additions to services that are running inside
mount namespaces, especially when using RootImage=/RootDirectory=.
If a service runs with a read-only root, a tmpfs is added on /run
to ensure we can create the airlock directory for incoming mounts
under /run/host/incoming.
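A rough usage sketch of the new verb (the argument order, source path followed by the
in-unit destination, is an assumption):
```
# Bind-mount a host directory into a running service's mount namespace
systemctl bind foo.service /srv/new-data /var/lib/foo/data
```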
machinectl fails since 21935150a0 as it's now
mounting onto a file descriptor in a target namespace, without joining the
target's PID namespace.
Note that it's not enough to setns CLONE_NEWPID, but a double-fork is required
as well, as implemented by namespace_fork().
Add a test case to TEST-13-NSPAWN to cover this use case.
The cmp in ExecStartPost= was actually failing – ExecStartPost= has the
same StandardOutput as the rest of the service, so the output file is
truncated before cmp can compare it with the expected output – but the
test still passed because test_exec_standardoutput_truncate() calls
test(), which only checks the main result, rather than test_service(),
which checks the result of the whole service. Fix the test by merging
the ExecStartPost= into the ExecStart= – the cmp has to be part of the
same command line as the cat so that the file is not truncated between
the two processes.
This adds the ability to specify truncate:PATH for StandardOutput= and
StandardError=, similar to the existing append:PATH. The code is mostly
copied from the related append: code. Fixes #8983.
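An illustrative service fragment using the new prefix (paths are arbitrary):
```
[Service]
ExecStart=/usr/bin/mydaemon
# Like append:..., but the output file is truncated first
StandardOutput=truncate:/var/log/mydaemon.log
StandardError=inherit
```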
This adds support for a new kernel root verity command line option
"verity_root_options=" which controls the behaviour of dm-verity by
forwarding options directly to systemd-veritysetup.
See `veritysetup(8)` for more details.
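An illustrative kernel command line (the specific option value, panic-on-corruption, is
an assumption; per the description, any option understood by systemd-veritysetup should work):
```
roothash=<root-hash> verity_root_options=panic-on-corruption
```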
Enable udev to set the transmit queue length of a device via a new directive to
be used in .link files. The kernel stores this parameter as an unsigned 32-bit
integer. As typical values currently range on the order of 10 to a few 10,000
packets, reduce the domain of valid values for this directive to 0..4294967294
and take the excluded 4294967295 == UINT32_MAX to indicate that the directive
is unset.
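A sketch of how this could look in a .link file (the directive name TransmitQueueLength=
is an assumption based on the description; the value is an arbitrary one within 0..4294967294):
```
[Match]
OriginalName=eth0

[Link]
TransmitQueueLength=10000
```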
The next libblkid v2.37 is going to support session offsets for
multi-session CD/DVDs. This feature is implemented by "hint offsets".
These offsets are optional and prober specific (e.g., iso, udf, ...).
For this purpose, the library provides a new function
blkid_probe_set_hint(), and blkid(8) provides a new command-line
option --hint <name>=<offset>. For CD/DVD, the offset name is
"session_offset".
The difference between classic --offset and the new --hint is that
--offset is very restrictive and defines the probing area and the rest
of the device is invisible to the library. The new --hint works
like a suggestion, it provides a hint where the user assumes the
filesystem, but the rest of the device is still readable for the
library (for example, to get some additional superblock information
etc.).
If the --hint is without a value then it defaults to zero.
The --hint implementation in udev-builtin-blkid.c is backward
compatible. If compiled against an old libblkid, the option is used in
the same way as --offset.
Addresses: https://github.com/karelzak/util-linux/issues/1161
Addresses: https://github.com/systemd/systemd/pull/17424
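A usage sketch of the new option as described above (device and offset are arbitrary):
```
# Suggest where the last session of a multi-session CD/DVD starts
blkid --hint session_offset=2949120 /dev/sr0
```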
Normally ls-files prints the full path to files from the repo root. But when
$GIT_WORK_TREE is set, ls-files prints paths relative to the current
directory. When rebasing, $GIT_WORK_TREE is set in the commands executed from
'rebase -x'. This causes problems if meson config is touched and the meson
reconfigures itself. ($GIT_WORK_TREE shouldn't be relevant, since the paths that
ls-files reports don't depend on the work tree, but whatever.) Let's unset
GIT_WORK_TREE to avoid the issue.
$ (cd test; git --git-dir=$PWD/../.git ls-files ':/test/dmidecode-dumps/*.bin')
test/dmidecode-dumps/HP-Z600.bin
test/dmidecode-dumps/Lenovo-ThinkPad-X280.bin
test/dmidecode-dumps/Lenovo-Thinkcentre-m720s.bin
$ (cd test; GIT_WORK_TREE=$PWD/.. git --git-dir=$PWD/../.git ls-files ':/test/dmidecode-dumps/*.bin')
dmidecode-dumps/HP-Z600.bin
dmidecode-dumps/Lenovo-ThinkPad-X280.bin
dmidecode-dumps/Lenovo-Thinkcentre-m720s.bin
Fixes #18148.
Enhance systemd-networkd to be able to control a CAN device's berr-reporting
flag via the new boolean directive BusErrorReporting= to be used in network
files.
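An illustrative .network fragment (placing the directive in the [CAN] section is an assumption):
```
[Match]
Name=can0

[CAN]
BitRate=125000
BusErrorReporting=yes
```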
By default the test suite prefers qemu, and uses nspawn only if
a test specifically says it doesn't support qemu.
Add a variable to allow flipping the default, and run as many
tests under nspawn as possible.
This allows splitting the test run into two parts. Most tests can run under
nspawn, which is much faster, and they can be run in one chunk with
TEST_NO_QEMU=1. The qemu-only tests, which are just a handful, can
be run in another chunk with TEST_QEMU_ONLY=1.
This allows autopkgtest to be split into two parts.
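A hedged sketch of such a split run (the runner script name is an assumption):
```
# First chunk: everything that can run under nspawn
TEST_NO_QEMU=1 test/run-integration-tests.sh
# Second chunk: the handful of qemu-only tests
TEST_QEMU_ONLY=1 test/run-integration-tests.sh
```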
The image build function greps for ExecStart= lines in unit files, but some
of them (e.g., systemd-firstboot) do not use a full path.
It then falls back to 'type -P' but that only works if you have the binary
installed. For optional binaries like systemd-firstboot, the installation
can then fail.
Manually check if the binary already exists in /[usr/][s]bin.
Usually on Debian, ROOTLIBDIR is /lib/<arch triplet>, which is not the right place.
Use pkg-config, since we define it, and then fall back to /usr/lib/systemd/user, which is
the canonical location.
On both Debian & friends and Fedora, dbus/dbus-broker install the user socket/service
under /usr/lib/systemd/user, not /lib/systemd/systemd/user.
Previously, dns_answer_add() was O(n^2).
With this change dns_packet_extract() becomes ~15 times faster for some
extreme cases.
Before:
```
$ time ./fuzz-dns-packet ~/downloads/clusterfuzz-testcase-minimized-fuzz-dns-packet-5631106733047808
/home/watanabe/downloads/clusterfuzz-testcase-minimized-fuzz-dns-packet-5631106733047808... ok
real 0m15.453s
user 0m15.430s
sys 0m0.007s
```
After:
```
$ time ./fuzz-dns-packet ~/downloads/clusterfuzz-testcase-minimized-fuzz-dns-packet-5631106733047808
/home/watanabe/downloads/clusterfuzz-testcase-minimized-fuzz-dns-packet-5631106733047808... ok
real 0m0.831s
user 0m0.824s
sys 0m0.006s
```
Hopefully fixes oss-fuzz#19227.
https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=19227
Allow configuring IPv6 discovered routes to be ignored instead of
being added. This can be used to block unwanted routes; for
example, you may wish not to accept some set of routes on an interface
if they are causing issues.
This allows them to be executed in parallel and also gives us
better reporting.
The dump files are renamed to avoid repeating "dmidecode-dump", since that
string is already present in the subdirectory name.
Add memory_id program to set properties about the physical memory
devices in the system. This is useful on machines with removable memory
modules to show how the machine can be upgraded, and on all devices to
detect the actual RAM size, without relying on the OS accessible amount.
Closes: #16651
I suspect the original version of the regex was written on a system
which prints both the QEMU version and the QEMU package version in the
--version output, like Fedora:
$ /bin/qemu-system-x86_64 --version
QEMU emulator version 4.2.1 (qemu-4.2.1-1.fc32)
Copyright (c) 2003-2019 Fabrice Bellard and the QEMU Project developers
However, Arch Linux prints only the QEMU version:
$ /bin/qemu-system-x86_64 --version
QEMU emulator version 5.2.0
Copyright (c) 2003-2020 Fabrice Bellard and the QEMU Project developers
This causes the awk regex to not match the version string, since there's
no whitespace after it, causing the version check (as well as
TEST-36-NUMAPOLICY) to fail.
Follow-up for 43b49470d1.
Upgrading to qemu 5.2 breaks TEST-36-NUMAPOLICY like:
qemu-system-x86_64: total memory for NUMA nodes (0x0) should
equal RAM size (0x20000000)
Use the new (as in >=2014) form of memdev in test 36:
-object memory-backend-ram,id=mem0,size=512M -numa node,memdev=mem0,nodeid=0
Since some target systems are as old as qemu 1.5.3 (CentOS 7), but the new
way of specifying memory was added in qemu 2.1, this needs to add version parsing
and add the argument only when qemu is >= 5.2.
Fixes #17986.
Signed-off-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
Previous commits changed the dhcpv4 retransmission algorithm to be
slightly slower, changing the amount of time it takes to notify
systemd-networkd that the dhcpv4 configuration has (transiently)
failed from around 14 seconds up to 28 seconds.
Since the test_dhcp_client_with_ipv4ll_without_dhcp_server test
configures an interface to use dhcpv4 without any operating dhcpv4
server running, it must increase the amount of time it waits for
the test interface to reach degraded state.
The commit 6f3ac0d517 drops the prefix and
suffix in the TAGS= property. But there exist several rules that match patterns
like `TAGS=="*:tag:*"`, so the property must always be prefixed and suffixed
with ":".
Fixes #17930.
Interfaces are reconfigured asynchronously after `networkctl reload`,
so right after `networkctl reload` finishes, interfaces may still be
in the 'configured' state with the old .network files.
Fixes #17533.
The memory pressure values of the units in TEST-56-OOMD seemed to be a
lot lower after updating to linux 5.9. This is likely due to a fix from
e22c6ed90a.
To account for this, I lowered memory.high on testbloat.service to
throttle it even more. This was enough to generate the 50%+ value to trigger
oomd for the test, but as an extra precaution I also lowered the oomd
threshold to 1% so it's certain to try and kill testbloat.service.
They are not really boolean, because we have both ipv4 and ipv6, but
for each protocol the value can be unset, no, or yes.
From https://github.com/systemd/systemd/issues/13316#issuecomment-582906817:
LinkLocalAddressing must be a boolean option, at least for ipv4:
- LinkLocalAddressing=no => no LL at all.
- LinkLocalAddressing=yes + Static Address => invalid configuration, warn and
interpret as LinkLocalAddressing=no, no LL at all.
(we check that during parsing and reject)
- LinkLocalAddressing=yes + DHCP => the LL process should be subordinated to the
DHCP one; an LL address must be acquired at start or after a short N
unsuccessful DHCP attempts, and must not stop DHCP from continuing to try. When a
DHCP address is acquired, drop the LL address. If the DHCP address is lost,
re-acquire a new LL address.
(next patch will move in this direction)
- LinkLocalAddressing=fallback has no reason to exist, because LL address must
always be allocated as a fallback option when using DHCP. Having both DHCP
and LL address at the same time is an RFC violation, so
LinkLocalAddressing=yes correctly implemented is already the "fallback"
behavior. The fallback option must be deprecated and if present in older
configs must be interpreted as LinkLocalAddressing=yes.
(removed)
- And for IPv6, does the LinkLocalAddressing option make any sense at all? Aren't IPv6-LL
addresses required to always be set for every IPv6-enabled interface (in
this case, coexisting with a static or dynamic address, if any)? Shouldn't it
always be =yes?
(good question)
This effectively reverts 29e81083bd. There is no
special "fallback" mode now, so the check doesn't make sense anymore.
The systemd-user file has been moved from /etc/pam.d into /usr/lib/pam.d,
so test-functions needs to copy it from /usr/lib/pam.d instead.
This will copy it from either location.
In Fedora rawhide various perl modules are now available as separate
packages that are not pulled in by dependencies. If we don't have some
package, skip the tests.
This ugly code is apparently the way to do conditional imports:
https://www.cs.ait.ac.th/~on/O/oreilly/perl/cookbook/ch12_03.htm.
FixedRandomDelay=yes will use
`siphash24(sd_id128_get_machine() || MANAGER_IS_SYSTEM(m) || getuid() || u->id)`,
where || is concatenation, instead of a random number to choose a value between
0 and RandomizedDelaySec= as the timer delay.
This essentially sets up a fixed, but seemingly random, offset for each timer
iteration rather than having a random offset recalculated each time it fires.
Closes #10355
Co-author: Anita Zhang <the.anitazha@gmail.com>
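An illustrative timer unit using the new setting:
```
[Timer]
OnCalendar=daily
RandomizedDelaySec=4h
# Use a fixed, per-machine/per-unit pseudo-random offset instead of picking a new one each iteration
FixedRandomDelay=yes
```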
umount emits an error message "no mount point specified" if the
tmpfs isn't mounted yet, which is the normal case.
Suppress that by redirecting stderr.
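A minimal sketch of the suppression (the mount point variable is hypothetical):
```
# "no mount point specified" is expected when the tmpfs isn't mounted yet
umount "$TMPFS_DIR" 2>/dev/null || true
```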
Script for generating LOTS of SCSI disk and partition devices in
the fake sysfs we use for udev testing.
This script is supposed to be run after sys-script.py. It uses
code from sys-script.py as a template to generate additional SCSI disk data
structures. Together with the "generator" code in udev-test.pl
added in the previous patch, it allows running udev tests with almost
arbitrarily many devices, and thus doing performance scaling tests.
Manually listing all devices in the test definition becomes cumbersome with
lots of devices. Add a function that scans all block devices in
the test sysfs and generates a list of devices to test.
Add 4 new tests using multiple devices. Numbers 2-4 use many
devices claiming the same symlink, where only one device has
a higher priority than the others. They fail sporadically with
the current code if a race condition causes the symlink to point
to the wrong device. Test 4 is like test 2 but with sleeps in between,
so it's much less likely to fail.
the "last_rule" option hasn't been supported for some time.
Therefore this test fails if a "not_exp_links" attribute is added,
as it should be. Mark it appropriately.
Instead of testing the existence or non-existence of just a single
symlink, allow testing several links per device.
Change the test definitions accordingly.
Test if symlinks are created correctly by comparing the symlink
targets to the devnode path. This implies (for the symlink) that
major/minor numbers and permissions are correct, as we have tested
that on the devnode already.
More often than not, the created devnode is the basename of the
sysfs entry. The "devnode" device may be used to override the
auto-detected node name.
Permissions and major/minor number are now verified on the devnode
itself, not on symlinks.
For those tests where exp_name is set to the computed devnode name,
the explicit "exp_name" can be removed. "exp_name" is only required for
symlinks.
This allows separate testing of devnodes and symlinks in a follow-up
patch.
Allow testing cases where multiple devices are added and removed
simultaneously. Tests are started as synchronously as possible using a
semaphore, in order to test possible race conditions. If this isn't desired,
the test parameter "sleep_us" can be set to the number of microseconds to wait
between udev invocations.
Allow testing cases where multiple devices are added and removed.
This implies a change of the data structure: every test allows
for multiple devices to be added, and "exp_name" etc. are now properties
of the device, not of the test.
The link state file is always removed when networkd is stopping, so
the deserialization logic does not work. Moreover, the ADDRESSES=
entry is not used by sd-network, so serialization is also not necessary.
The test sometimes fails, e.g. in bionic-s390x ci. I think it might be because
the service binary exits before we get a chance to notice that it is running:
13:59:31 --- Listing only the last 100 lines from a long log. ---
13:59:31 line 293: path-mycustomunit.service: state = exited; result = success
13:59:31 line 293: path-unit.path: state = running; result = success (left: 4639845)
13:59:31 line 293: path-mycustomunit.service: state = exited; result = success
13:59:31 line 293: path-unit.path: state = running; result = success (left: 4539608)
13:59:31 line 293: path-mycustomunit.service: state = exited; result = success
13:59:31 line 293: path-unit.path: state = running; result = success (left: 4439376)
13:59:31 line 293: path-mycustomunit.service: state = exited; result = success
13:59:31 line 293: path-unit.path: state = running; result = success (left: 4338946)
13:59:31 line 293: path-mycustomunit.service: state = exited; result = success
13:59:31 line 293: path-unit.path: state = running; result = success (left: 4238702)
13:59:31 line 293: path-mycustomunit.service: state = exited; result = success
13:59:31 line 293: path-unit.path: state = running; result = success (left: 4138424)
13:59:31 line 293: path-mycustomunit.service: state = exited; result = success
13:59:31 line 293: path-unit.path: state = running; result = success (left: 4038116)
13:59:31 line 293: path-mycustomunit.service: state = exited; result = success
13:59:31 line 293: path-unit.path: state = running; result = success (left: 3937835)
13:59:31 line 293: path-mycustomunit.service: state = exited; result = success
13:59:31 line 293: path-unit.path: state = running; result = success (left: 3837553)
13:59:31 line 293: path-mycustomunit.service: state = exited; result = success
13:59:31 line 293: path-unit.path: state = running; result = success (left: 3737250)
13:59:31 line 293: path-mycustomunit.service: state = exited; result = success
13:59:31 line 293: path-unit.path: state = running; result = success (left: 3636934)
13:59:31 line 293: path-mycustomunit.service: state = exited; result = success
13:59:31 line 293: path-unit.path: state = running; result = success (left: 3536622)
13:59:31 line 293: path-mycustomunit.service: state = exited; result = success
13:59:31 line 293: path-unit.path: state = running; result = success (left: 3436318)
13:59:31 line 293: path-mycustomunit.service: state = exited; result = success
13:59:31 line 293: path-unit.path: state = running; result = success (left: 3336021)
13:59:31 line 293: path-mycustomunit.service: state = exited; result = success
13:59:31 line 293: path-unit.path: state = running; result = success (left: 3235730)
13:59:31 line 293: path-mycustomunit.service: state = exited; result = success
13:59:31 line 293: path-unit.path: state = running; result = success (left: 3135468)
13:59:31 line 293: path-mycustomunit.service: state = exited; result = success
13:59:31 line 293: path-unit.path: state = running; result = success (left: 3035158)
13:59:31 line 293: path-mycustomunit.service: state = exited; result = success
13:59:31 line 293: path-unit.path: state = running; result = success (left: 2934855)
13:59:31 line 293: path-mycustomunit.service: state = exited; result = success
13:59:31 line 293: path-unit.path: state = running; result = success (left: 2834541)
13:59:31 line 293: path-mycustomunit.service: state = exited; result = success
13:59:31 line 293: path-unit.path: state = running; result = success (left: 2732511)
13:59:31 line 293: path-mycustomunit.service: state = exited; result = success
13:59:31 line 293: path-unit.path: state = running; result = success (left: 2632255)
13:59:31 line 293: path-mycustomunit.service: state = exited; result = success
13:59:31 line 293: path-unit.path: state = running; result = success (left: 2532014)
13:59:31 line 293: path-mycustomunit.service: state = exited; result = success
13:59:31 line 293: path-unit.path: state = running; result = success (left: 2431746)
13:59:31 line 293: path-mycustomunit.service: state = exited; result = success
13:59:31 line 293: path-unit.path: state = running; result = success (left: 2331438)
13:59:31 line 293: path-mycustomunit.service: state = exited; result = success
13:59:31 line 293: path-unit.path: state = running; result = success (left: 2231213)
13:59:31 line 293: path-mycustomunit.service: state = exited; result = success
13:59:31 line 293: path-unit.path: state = running; result = success (left: 2130952)
13:59:31 line 293: path-mycustomunit.service: state = exited; result = success
13:59:31 line 293: path-unit.path: state = running; result = success (left: 2030663)
13:59:31 line 293: path-mycustomunit.service: state = exited; result = success
13:59:31 line 293: path-unit.path: state = running; result = success (left: 1930428)
13:59:31 line 293: path-mycustomunit.service: state = exited; result = success
13:59:31 line 293: path-unit.path: state = running; result = success (left: 1830172)
13:59:31 line 293: path-mycustomunit.service: state = exited; result = success
13:59:31 line 293: path-unit.path: state = running; result = success (left: 1729906)
13:59:31 line 293: path-mycustomunit.service: state = exited; result = success
13:59:31 line 293: path-unit.path: state = running; result = success (left: 1629652)
13:59:31 line 293: path-mycustomunit.service: state = exited; result = success
13:59:31 line 293: path-unit.path: state = running; result = success (left: 1529368)
13:59:31 line 293: path-mycustomunit.service: state = exited; result = success
13:59:31 line 293: path-unit.path: state = running; result = success (left: 1429110)
13:59:31 line 293: path-mycustomunit.service: state = exited; result = success
13:59:31 line 293: path-unit.path: state = running; result = success (left: 1328852)
13:59:31 line 293: path-mycustomunit.service: state = exited; result = success
13:59:31 line 293: path-unit.path: state = running; result = success (left: 1228593)
13:59:31 line 293: path-mycustomunit.service: state = exited; result = success
13:59:31 line 293: path-unit.path: state = running; result = success (left: 1128320)
13:59:31 line 293: path-mycustomunit.service: state = exited; result = success
13:59:31 line 293: path-unit.path: state = running; result = success (left: 1028083)
13:59:31 line 293: path-mycustomunit.service: state = exited; result = success
13:59:31 line 293: path-unit.path: state = running; result = success (left: 927824)
13:59:31 line 293: path-mycustomunit.service: state = exited; result = success
13:59:31 line 293: path-unit.path: state = running; result = success (left: 827564)
13:59:31 line 293: path-mycustomunit.service: state = exited; result = success
13:59:31 line 293: path-unit.path: state = running; result = success (left: 724935)
13:59:31 line 293: path-mycustomunit.service: state = exited; result = success
13:59:31 line 293: path-unit.path: state = running; result = success (left: 624664)
13:59:31 line 293: path-mycustomunit.service: state = exited; result = success
13:59:31 line 293: path-unit.path: state = running; result = success (left: 524411)
13:59:31 line 293: path-mycustomunit.service: state = exited; result = success
13:59:31 line 293: path-unit.path: state = running; result = success (left: 424124)
13:59:31 line 293: path-mycustomunit.service: state = exited; result = success
13:59:31 line 293: path-unit.path: state = running; result = success (left: 323853)
13:59:31 line 293: path-mycustomunit.service: state = exited; result = success
13:59:31 line 293: path-unit.path: state = running; result = success (left: 223585)
13:59:31 line 293: path-mycustomunit.service: state = exited; result = success
13:59:31 line 293: path-unit.path: state = running; result = success (left: 120356)
13:59:31 line 293: path-mycustomunit.service: state = exited; result = success
13:59:31 line 293: path-unit.path: state = running; result = success (left: 18053)
13:59:31 line 293: path-mycustomunit.service: state = exited; result = success
13:59:31 line 293: path-unit.path: state = running; result = success (left: -82385)
13:59:31 line 293: path-mycustomunit.service: state = exited; result = success
13:59:31 Test timeout when testing path-unit.path
It seems test/test-path/path-service.service wasn't actually used for anything.
Ubuntu CI just got the dependencies required to run this test added,
and it seems the build is different enough from other platforms
that it fails to create the required directories:
cp: cannot create regular file '/var/tmp/systemd-test.JJMOBY/minimal/usr/lib/os-release': No such file or directory
It looks like we need to do some whack-a-mole before it will fully pass,
so disable for now. It was skipped until today anyway due to missing
dependencies.
Before this commit, even when Gateway=_dhcp4 or _ra was set, the
route was configured with 'protocol static', and other properties
specified by RouteTable=, RouteMTU=, etc. were ignored.
This commit sets the route protocol based on the protocol through which
the gateway address was obtained, and applies the other settings if they are not
explicitly specified in the [Route] section.
We usually disable IPv6AcceptRA= if the test does not require any
dynamic address configuration, as it slightly slows down the test.
C.f. 491b79aeac.
Also, even if login.defs are not present, don't start allocating at 1, but at
SYSTEM_UID_MIN.
Fixes #9769.
The test is adjusted. Actually, it was busted before, because sysusers would
never use SYSTEM_GID_MIN, so if SYSTEM_GID_MIN was different than
SYSTEM_UID_MIN, the tests would fail. On all "normal" systems the two are
equal, so we didn't notice. Since sysusers now always uses the minimum of the
two, we only need to substitute one value.
We were looking at ${f%.*}, i.e. the $f with any suffix starting with a dot removed.
This worked fine for paths like /some/path/test-11.input. It also worked
for paths like /some/path/inline (there were no dots, so we got $f back unscathed).
But in the ubuntu CI the package is built in a temporary directory like
/tmp/autopkgtest-lxc.nnnfqb26/downtmp/build.UfW/ (yes, it has a dot, even two.).
That still worked for the first case, but in the second case we truncated things
after the first dot, and we would try to get
/tmp/autopkgtest-lxc.nnnfqb26/downtmp/build and try to load
/tmp/autopkgtest-lxc.nnnfqb26/downtmp/build.expected-password, which obviously
didn't work as expected. To avoid this issue, do the suffix removal only when
we know that there really is a suffix.
A second minor issue was that we would try to copy $1.expected-*, and sometimes
$1 would be given, and sometimes not. Effectively we were relying on there
not being any files matching .expected-*. There weren't any such files, but let's
avoid this ugliness and always pass $1.
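A sketch of the safer suffix handling under the assumptions above (not necessarily the exact patch):
```
# Only strip a suffix when the file name itself contains a dot, so that dots in
# parent directories (e.g. autopkgtest-lxc.nnnfqb26) are left alone
base=$f
case "${f##*/}" in
    *.*) base=${f%.*} ;;
esac
```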
All this test does is manipulate text files in a subdir specified with --testroot.
It can be a normal unittest without the overhead of creating a machine image.
As a bonus, also test the .standalone version.
When tests are executed serially (the default), it seems better to launch
the fairly generic test that runs the unittests early in the sequence.
Right now the tests are ordered based on when they were written, but
this doesn't make much sense.
We need to include ninja-build in the packages list because meson doesn't
depend on it (because it supports other backends too).
Also drop xz-devel, it's not crucial for the test.
Define explicit action "kill" for SystemCallErrorNumber=.
In addition to an errno code, allow specifying "kill" as the action for
SystemCallFilter=.
---
v7: seccomp_parse_errno_or_action() returns -EINVAL if !HAVE_SECCOMP
v6: use streq_ptr(), let errno_to_name() handle bad values, kill processes,
init syscall_errno
v5: actually use seccomp_errno_or_action_to_string(), don't fail bus unit
parsing without seccomp
v4: fix build without seccomp
v3: drop log action
v2: action -> number
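An illustrative service fragment combining the two (the filter list is arbitrary):
```
[Service]
SystemCallFilter=@system-service
# Kill the offending process instead of returning an errno
SystemCallErrorNumber=kill
```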
All backslashes that should be single in shell syntax need to be written as "\\" because
our parser will remove one level of quoting. Also, single quotes were doubly nested, which
cannot work.
Should fix the following message:
test-execute/exec-dynamicuser-statedir.service:16: Ignoring unknown escape sequences: "test $$(find / \( -path /var/tmp -o -path /tmp -o -path /proc -o -path /dev/mqueue -o -path /dev/shm -o -path /sys/fs/bpf -o -path /dev/.lxc \) -prune -o -type d -writable -print 2>/dev/null | sort -u | tr -d \\n) = /var/lib/private/quux/pief/var/lib/private/waldo"
When invoking "ldd" to find dependency libraries we already set
$LD_LIBRARY_PATH to point to our own build tree, so that our libraries
are checked, not the host libraries. This is not sufficient, however, as
libudev is built in a subdirectory. Add that, too.
With the changes from 2c0dffe82d, starting
systemd-networkd.service will also activate systemd-networkd.socket.
When tearing down a test, we need to stop the socket as well, to make
sure networkd can't be activated accidentally with the wrong
configuration.
We have four legal cases:
1. /usr/lib/os-release exists and /etc/os-release is a symlink to it
2. both exist but /etc/os-release is not a symlink to /usr/lib/os-release
3. only /usr/lib/os-release exists
4. only /etc/os-release exists
The generic setup code in test-functions and create-busybox-image didn't handle
case 3.
The test-specific code in TEST-50 didn't handle 2 (because the general setup
code would only install /etc/os-release in the image and
grep -f /usr/lib/os-release would not work) and 4 (same reason) and would fail
in case 3 in generic setup.
Kernel 5.8 gained a hidepid= implementation that is truly per-procfs,
which allows us to mount a distinct instance into every unit, with
individual hidepid= settings. Let's expose this via two new settings:
ProtectProc= (wrapping hidepid=) and ProcSubset= (wrapping subset=).
Replaces: #11670
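An illustrative service fragment using the two new settings (values mirror the hidepid=/subset= semantics):
```
[Service]
# hidepid=invisible: processes owned by other users are hidden from this unit's /proc
ProtectProc=invisible
# subset=pid: only PID directories are visible in this unit's /proc
ProcSubset=pid
```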
Follow the same model established for RootImage= and RootImageOptions=,
and allow either appending a single list of options or tuples of
partition_number:options.
The concept is flawed, and mostly useless. Let's finally remove it.
It has been deprecated since 90a2ec10f2 (6
years ago) and we started to warn since
55dadc5c57 (1.5 years ago).
Let's get rid of it altogether.
The sd_notify() socket that nspawn binds that the payload can use to
talk to it was previously stored in /run/systemd/nspawn/notify, which is
weird (as in the previous commit) since this makes /run/systemd
something that is cooperatively maintained by systemd inside the
container and nspawn outside of it.
We now have a better place where container managers can put the stuff
they want to pass to the payload: /run/host/, hence let's make use of
that.
This is not a compat breakage, since the sd_notify() protocol is based
on the $NOTIFY_SOCKET env var, where we place the new socket path.
glibc 2.26 lifted the restrictions on search domain count and length.
This has also been backported to 2.17 in some distributions (RHEL 7
and derivatives). Other software may have its own limits for search domains,
but we should not restrict what is written out any more.
https://sourceware.org/legacy-ml/libc-announce/2017/msg00001.html
Follows the same pattern and features as RootImage, but allows an
arbitrary mount point under / to be specified by the user, and
multiple values - like BindPaths.
Original implementation by @topimiettinen at:
https://github.com/systemd/systemd/pull/14451
Reworked to use dissect's logic instead of bare libmount() calls,
and to address other review comments.
Thanks Topi for the initial work to come up with and implement
this useful feature.
Allows specifying mount options for RootImage=.
In the case of multi-partition images, the options can be prefixed with the
partition number followed by a colon. E.g.:
RootImageOptions=1:ro,dev 2:nosuid nodev
In the absence of a partition number, 0 is assumed.
Let's find the right os-release file on the host side, and only mount
the one that matters, i.e. /etc/os-release if it exists and
/usr/lib/os-release otherwise. Use the fixed path /run/host/os-release
for that.
Let's also mount /run/host as a bind mount on itself before we set up
/run/host, and let's mount it MS_RDONLY after we are done, so that it
remains immutable as a whole.
Opening a verity device is an expensive operation. The kernelspace operations
are mostly sequential with a global lock held regardless of which device
is being opened. In userspace jumps in and out of multiple libraries are
required. When signatures are used, there's the additional cryptographic
checks.
We know when two devices are identical: they have the same root hash.
If libcryptsetup returns EEXIST, double check that the hashes are really
the same, and that either both or none have a signature, and if everything
matches simply remount the already open device. The kernel will do
reference counting for us.
In order to quickly and reliably discover if a device is already open,
change the node naming scheme from '/dev/mapper/major:minor-verity' to
'/dev/mapper/$roothash-verity'.
Unfortunately libdevmapper is not 100% reliable, so in some cases it
will say that the device already exists and is active, but in
reality it is not usable. Fall back to an individually-activated
unique device name in those cases for robustness.
The kernel interface requires setting up read-only bind-mounts in
two steps, the bind first and then a read-only remount.
Fix nspawn-mount, and cover this case in the integration test.
Fixes #16484
When the RTC time at boot is off in the future by a few days, OnCalendar=
timers will be scheduled based on the time at boot. But if the time has been
adjusted since boot, the timers will end up scheduled way in the future, which
may cause them not to fire as shortly or often as expected.
Update the logic so that the time will be adjusted based on monotonic time.
We do that by calculating the adjusted manager startup realtime from the
monotonic time stored at that time, by comparing that time with the realtime
and monotonic time of the current time.
Added a test case to validate this works as expected. The test case creates a
QEMU virtual machine with the clock 3 days in the future. Then we adjust the
clock back 3 days, and test creating a timer with an OnCalendar= for every 15
minutes. We also check the manager startup timestamp from both `systemd-analyze
dump` and from D-Bus.
Test output without the corresponding code changes that fix the issue:
Timer elapse outside of the expected 20 minute window.
next_elapsed=1594686119
now=1594426921
time_delta=259198
With the code changes in, the test passes as expected.
For some reason the wait-online is failing intermittently; it's unclear
exactly why, but this hopefully avoids the failure for unrelated PRs.
This is a workaround (not a fix) for #16105
Several recent failed runs show that the test is still racy in two ways:
1) Sometimes it takes a while before the PID file is created, leading
to:
```
[ 10.950540] testsuite-47.sh[308]: ++ cat /leakedtestpid
[ 10.959712] testsuite-47.sh[308]: cat: /leakedtestpid: No such file or directory
[ 10.959824] testsuite-47.sh[298]: + leaked_pid=
```
2) Again, sometimes we check the leaked PID before the unit is actually
stopped, leading to a false negative:
```
[ 18.099599] testsuite-47.sh[346]: ++ cat /leakedtestpid
[ 18.116462] testsuite-47.sh[333]: + leaked_pid=342
[ 18.117101] testsuite-47.sh[333]: + systemctl stop testsuite-47-repro
...
[ 20.033907] testsuite-47.sh[333]: + ps -p 342
[ 20.080050] testsuite-47.sh[351]: PID TTY TIME CMD
[ 20.080050] testsuite-47.sh[351]: 342 ? 00:00:00 sleep
[ 20.082040] testsuite-47.sh[333]: + exit 42
```
Add support for creating a MACVLAN interface in "source" mode by
specifying Mode=source in the [MACVLAN] section of a .netdev file.
A list of allowed MAC addresses for the corresponding MACVLAN can also
be specified with the SourceMACAddress= option of the [MACVLAN] section.
An example .netdev file:
[NetDev]
Name=macvlan0
Kind=macvlan
MACAddress=02:DE:AD:BE:EF:00
[MACVLAN]
Mode=source
SourceMACAddress=02:AB:AB:AB:AB:01 02:CD:CD:CD:CD:01
SourceMACAddress=02:EF:EF:EF:EF:01
The same keys can also be specified in [MACVTAP] for MACVTAP kinds of
interfaces, with the same semantics.
The fuzzer test case has a giant line with ";;;;;;;;;;;..." which is turned into
a strv of empty strings. Unfortunately, when pushing each string, strv_push() needs
to walk the whole array, which leads to quadratic behaviour. So let's use
greedy allocation here and also keep the position in the string to avoid re-iterating.
build/fuzz-xdg-desktop test/fuzz/fuzz-xdg-desktop/oss-fuzz-22812 51.10s user 0.01s system 99% cpu 51.295 total
↓
build/fuzz-xdg-desktop test/fuzz/fuzz-xdg-desktop/oss-fuzz-22812 0.07s user 0.01s system 96% cpu 0.083 total
Fixes https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=22812.
Other minor changes:
- say "was already defined" instead of "defined multiple times" to make it
clear that we're ignoring this second definition, and not all definitions
of the key
- unescaping needs to be done also for the last entry
When a command asks to load a unit directly and it is in state
UNIT_NOT_FOUND, and the cache is outdated, we refresh it and
attempt to load the unit again.
Use the same logic when building up a transaction and a dependency in
UNIT_NOT_FOUND state is encountered.
Update the unit test to exercise this code path.
SIG-prefixed signal names for `kill` are not POSIX compliant, so on Ubuntu CI
(which defaults to dash instead of bash) TEST-52 contains the following
error:
[ 9693.549638] sh[51]: + systemctl poweroff --no-block
[ 9693.553130] systemd-logind[26]: System is powering down.
[ 9693.608911] sh[54]: /bin/sh: 1: kill: Illegal option -S
This can be reproduced manually as well, either by running dash, or bash
in POSIX mode:
$ dash -c 'kill -SIGKILL 123'
dash: 1: kill: Illegal option -S
$ bash --posix -c 'kill -SIGKILL 123'
bash: line 0: kill: SIGKILL: invalid signal specification
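The portable spelling simply drops the SIG prefix, e.g.:
```
# Accepted by dash's builtin kill and by bash in POSIX mode
kill -KILL "$pid"
# instead of the non-portable:
# kill -SIGKILL "$pid"
```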
SR-IOV provides the ability to partition a single physical PCI
resource into virtual PCI functions which can then be injected into
a VM. In the case of network VFs, SR-IOV improves north-south
network performance (that is, traffic with endpoints outside the
host machine) by allowing traffic to bypass the host machine's network stack.
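For context, a virtual function configured through networkd might look roughly like this
(the [SR-IOV] section and directive names are assumptions; see systemd.network(5)):
```
[Match]
Name=enp1s0f0

[SR-IOV]
VirtualFunction=0
VLANId=100
MACAddress=02:11:22:33:44:55
```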
When the system is under heavy load, it can happen that the unit cache
is refreshed for an unrelated reason (in the test I simulate this by
attempting to start a non-existing unit). The new unit is found and
accounted for in the cache, but it's ignored since we are loading
something else.
When we actually look for it, by attempting to start it, the cache is
up to date so no refresh happens, and starting fails although we have
it loaded in the cache.
When the unit state is set to UNIT_NOT_FOUND, mark the timestamp in
u->fragment_loadtime. Then when attempting to load again we can check
both if the cache itself needs a refresh, OR if it was refreshed AFTER
the last failed attempt that resulted in the state being
UNIT_NOT_FOUND.
Update the test so that this issue reproduces more often.
Since the hwdb update from a79be2f807
the systemd-hwdb-update service started timing out under ASan when
compiled with gcc, as we started tripping over the 3 minutes timeout.
This affects only gcc runs, since the current gcc on Arch still suffers
from the detect_stack_use_after_return performance penalty[0]. Until
the fixed gcc is present in the respective repositories, let's bump
the timeout to 4 minutes, as we might not be able to upgrade right
away, due to systemd/systemd#16199.
Before the hwdb update:
[ 7958.292540] systemd[63]: systemd-hwdb-update.service: Executing: /usr/bin/time systemd-hwdb update
[ 7958.304005] systemd[1]: systemd-journald.service: Got notification message from PID 44 (FDSTORE=1)
[ 7958.314434] systemd[1]: systemd-journald.service: Added fd 3 (n/a) to fd store.
[ 8008.520082] systemd[1]: systemd-journald.service: Got notification message from PID 44 (WATCHDOG=1)
[ 8068.520151] systemd[1]: systemd-journald.service: Got notification message from PID 44 (WATCHDOG=1)
[ 8125.682843] time[63]: 84.47user 82.92system 2:47.50elapsed 99%CPU (0avgtext+0avgdata 811512maxresident)k
[ 8125.682843] time[63]: 0inputs+19680outputs (0major+25000853minor)pagefaults 0swaps
After the hwdb update:
[ 6215.491958] systemd[63]: systemd-hwdb-update.service: Executing: /usr/bin/time systemd-hwdb update
[ 6215.503380] systemd[1]: systemd-journald.service: Got notification message from PID 44 (FDSTORE=1)
[ 6215.514172] systemd[1]: systemd-journald.service: Added fd 3 (n/a) to fd store.
[ 6329.392918] systemd[1]: systemd-journald.service: Got notification message from PID 44 (WATCHDOG=1)
[ 6394.920205] time[63]: 89.48user 89.98system 2:59.55elapsed 99%CPU (0avgtext+0avgdata 812764maxresident)k
[ 6394.920205] time[63]: 0inputs+20568outputs (0major+27318354minor)pagefaults 0swaps
[0] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=94910
https://tools.ietf.org/html/draft-knodel-terminology-02
https://lwn.net/Articles/823224/
This gets rid of most but not all occasions of these loaded terms:
1. scsi_id and friends are something that is supposed to be removed from
our tree (see #7594)
2. The test suite defines an API used by the ubuntu CI. We can remove
this too later, but this needs to be done in sync with the ubuntu CI.
3. In some cases the terms are part of APIs we call or where we expose
concepts the kernel names the way it names them. (In particular all
remaining uses of the word "slave" in our codebase are like this,
it's used by the POSIX PTY layer, by the network subsystem, the mount
API and the block device subsystem). Getting rid of the term in these
contexts would mean doing some major fixes of the kernel ABI first.
Regarding the replacements: when whitelist/blacklist is used as a noun, we
replace it with allow list/deny list, and when used as a verb, with
allow-list/deny-list.