With `ProtectSystem=strict` gcov is unable to write the *.gcda files
with the collected coverage. Let's add yet another switch to make this
restriction less strict, so that gcov is happy.
This addresses the following errors:
```
...
systemd-networkd[272469]: profiling:/systemd-meson-build/src/shared/libsystemd-shared-249.a.p/binfmt-util.c.gcda:Cannot open
systemd-networkd[272469]: profiling:/systemd-meson-build/src/shared/libsystemd-shared-249.a.p/base-filesystem.c.gcda:Cannot open
systemd-networkd[272469]: profiling:/systemd-meson-build/src/shared/libsystemd-shared-249.a.p/barrier.c.gcda:Cannot open
systemd-networkd[272469]: profiling:/systemd-meson-build/src/shared/libsystemd-shared-249.a.p/ask-password-api.c.gcda:Cannot open
systemd-networkd[272469]: profiling:/systemd-meson-build/src/shared/libsystemd-shared-249.a.p/apparmor-util.c.gcda:Cannot open
systemd-networkd[272469]: profiling:/systemd-meson-build/src/shared/libsystemd-shared-249.a.p/acpi-fpdt.c.gcda:Cannot open
...
```
When playing around with the coverage-enabled build I kept hitting
an issue where dnsmasq failed to start because the previous instance was
still shutting down. This should, hopefully, help to mitigate that.
It is now run on the nightly CentOS build, so that it can cover
integration tests too, and not just unit tests. It's nightly as
it considerably increases the integration test runtime, so it's
not appropriate for all PRs.
This is supposed to be used by package/image builders such as mkosi to
speed up building, since it allows us to suppress sync() inside a
container.
This does what Debian's eatmydata tool does, but for a container, and
via seccomp (instead of LD_PRELOAD).
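For illustration, here is a minimal, hypothetical sketch of the underlying idea
using libseccomp: the sync family of system calls is rewritten to return
immediately with "success", so nothing inside the container can force writeback.
This is not the actual nspawn implementation, just the general technique.
```
#include <errno.h>
#include <seccomp.h>
#include <stddef.h>

/* Install a filter that turns the sync() family into no-ops. */
int install_sync_suppressing_filter(void) {
        int syscalls[] = {
                SCMP_SYS(sync),
                SCMP_SYS(syncfs),
                SCMP_SYS(fsync),
                SCMP_SYS(fdatasync),
                SCMP_SYS(sync_file_range),
        };
        scmp_filter_ctx ctx;
        int r;

        ctx = seccomp_init(SCMP_ACT_ALLOW);      /* allow everything else */
        if (!ctx)
                return -ENOMEM;

        for (size_t i = 0; i < sizeof(syscalls) / sizeof(syscalls[0]); i++) {
                /* ERRNO(0) makes the syscall return 0 right away, i.e. "success" */
                r = seccomp_rule_add(ctx, SCMP_ACT_ERRNO(0), syscalls[i], 0);
                if (r < 0)
                        goto finish;
        }

        r = seccomp_load(ctx);

finish:
        seccomp_release(ctx);
        return r;
}
```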
The whole reason loop_device_make_internal() exists (as opposed to just
loop_device_make()) is to avoid mangling the loop flags value / calling
getenv() twice. Hence, let's actually call it where we have already
mangled the flags value.
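As a rough, hypothetical illustration of the pattern (simplified bodies, made-up
environment variable name, not the actual code): the public wrapper mangles the
flags exactly once and then defers to the internal variant, so callers that have
already mangled the flags should call the internal function directly.
```
#include <stdint.h>
#include <stdlib.h>

/* Does the actual work; takes the loop flags as-is. */
static int loop_device_make_internal(const char *path, int open_flags, uint32_t loop_flags) {
        /* ... allocate and configure the loop device here ... */
        return 0;
}

/* Consults the environment and adjusts the flags exactly once.
 * The variable name below is invented for this sketch. */
static uint32_t loop_flags_mangle(uint32_t loop_flags) {
        if (getenv("EXAMPLE_FORCE_DIRECT_IO"))
                loop_flags |= 1U << 0;
        return loop_flags;
}

/* Public entry point: mangle once, then hand off. */
int loop_device_make(const char *path, int open_flags, uint32_t loop_flags) {
        return loop_device_make_internal(path, open_flags, loop_flags_mangle(loop_flags));
}
```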
When Assign= in [DHCPv6PrefixDelegation] is enabled, the kernel will
create the prefix route for the assigned address with metric 256.
When Assign= is disabled, the kernel will create the route with
metric 1024.
For the default value, we should choose a value smaller than 1024
(i.e. a higher priority), as the unreachable routes for the delegated
prefixes will be configured with metric 1024.
For static IPv6 routes where no metric is specified, we use 1024.
But no such adjustment is performed for dynamic routes. So, let's
specify the metric explicitly.
Otherwise, configured routes will be handled as foreign.
Note that the kernel (at least 5.14.11) seems not to support lifetimes
for IPv6 unreachable routes. The lifetime for routes of that type will
be handled by sd-event's timer event source.
So, we cannot confirm the lifetime with the 'ip route' command.
Even in newer kernel versions, it seems that some route types do not
support expiration, e.g. IPv4 routes or unreachable routes. Let's use
a timer event source for such routes.
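As a sketch of what "handled by sd-event's timer event source" means in
practice (the Route type and callback below are illustrative, not the actual
networkd code): an absolute CLOCK_BOOTTIME timer is armed for the route's
lifetime, and the callback removes the route once it fires.
```
#include <inttypes.h>
#include <systemd/sd-event.h>
#include <time.h>

typedef struct Route {
        sd_event_source *expire;
        /* ... other route fields ... */
} Route;

/* Fired when the route's lifetime has elapsed. */
static int route_expire_handler(sd_event_source *s, uint64_t usec, void *userdata) {
        Route *route = userdata;
        /* remove the route from the kernel here, since the kernel will
         * not expire it on its own for this route type */
        return 0;
}

int route_arm_expire(sd_event *event, Route *route, uint64_t expire_usec) {
        /* expire_usec is an absolute CLOCK_BOOTTIME timestamp */
        return sd_event_add_time(event, &route->expire,
                                 CLOCK_BOOTTIME, expire_usec,
                                 0 /* default accuracy */,
                                 route_expire_handler, route);
}
```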
utmp(5) says `ut_line` is the device name minus the leading "/dev/". Therefore,
remove it. Without that, when using UtmpMode=user, we get `/dev/tty` in the
output of `last`/`w`.
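Roughly, the change amounts to stripping the prefix before the name is copied
into the utmp record; a simplified, hypothetical sketch (the helper name is
made up for this illustration):
```
#include <string.h>
#include <utmpx.h>

/* Return the device name without a leading "/dev/", as utmp(5) expects. */
static const char *skip_dev_prefix(const char *tty) {
        return strncmp(tty, "/dev/", 5) == 0 ? tty + 5 : tty;
}

static void store_utmp_line(struct utmpx *u, const char *tty) {
        /* utmp fields are fixed-size and need not be NUL-terminated
         * when the name fills the whole array */
        strncpy(u->ut_line, skip_dev_prefix(tty), sizeof(u->ut_line));
}
```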
Getting the variable directly from pkg-config (without
adding the sysroot prefix) is prone to host contamination
when building in sysroots as the compiler starts looking for the
headers on the host in addition to the sysroot.
Signed-off-by: Alexander Kanavin <alex@linutronix.de>
Previously in most cases we'd allocate the HomeSetup context object
in generic code in homework.c. But in some cases we allocated it
instead inside the specific code in homework-{cifs,directory,luks}.c.
Let's clean that up, and systematically allocate it in the outer
"entrypoint" calls in homework.c instead of the inner ones.
This doesn't change much in behaviour (i.e. it just means when something
fails we'll now clean it up one stack frame further up). But it will
allow us to more easily work with the context objects, since we'll have
them around in all stack frames.
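Schematically (with simplified, illustrative types and names rather than the
actual homed code), the outer entrypoint now owns the context and the
backend-specific helpers only borrow it:
```
typedef struct HomeSetup {
        int root_fd;
        /* ... resources acquired while setting up the home ... */
} HomeSetup;

static void home_setup_done(HomeSetup *setup) {
        /* release whatever was acquired during setup */
}

/* Backend-specific helper: uses the caller's context instead of
 * allocating one of its own. */
static int home_activate_luks(HomeSetup *setup) {
        return 0;
}

/* Outer "entrypoint": allocates the context, passes it down, and
 * cleans it up one stack frame further up than before. */
int home_activate(void) {
        HomeSetup setup = { .root_fd = -1 };
        int r;

        r = home_activate_luks(&setup);
        home_setup_done(&setup);
        return r;
}
```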