This isn't a failure we care about, and it's somewhat alarming to see a
red error message flash up on the display when booting, so this simply
returns EFI_SUCCESS and skips printing the "error" altogether.
If the namespaced systemd-journald instance was shut down due to
inactivity, we can consider it synchronized, so avoid throwing an error
in such a case.
This should help with the random TEST-44-LOG-NAMESPACE failures where we
might try to sync the namespace just after it was shut down:
[ 7.682941] H testsuite-44.sh[381]: + systemd-run --wait -p LogNamespace=foobaz echo 'hello world'
[ 7.693916] H systemd-journald[389]: Failed to open /dev/kmsg, ignoring: Operation not permitted
[ 7.693983] H systemd-journald[389]: Collecting audit messages is disabled.
[ 7.725511] H systemd[1]: Started systemd-journald@foobar.service.
[ 7.726496] H systemd[1]: Listening on systemd-journald-varlink@foobaz.socket.
[ 7.726808] H systemd[1]: Listening on systemd-journald@foobaz.socket.
[ 7.750774] H systemd[1]: Started run-u3.service.
[ 7.795122] H systemd[1]: run-u3.service: Deactivated successfully.
[ 7.842042] H testsuite-44.sh[390]: Running as unit: run-u3.service; invocation ID: 56380adeb36940a8a170d9ffd2e1e433
[ 7.842561] H systemd[1]: systemd-journald-varlink@foobaz.socket: Deactivated successfully.
[ 7.842762] H systemd[1]: Closed systemd-journald-varlink@foobaz.socket.
[ 7.846394] H systemd[1]: systemd-journald@foobaz.socket: Deactivated successfully.
[ 7.846566] H systemd[1]: Closed systemd-journald@foobaz.socket.
[ 7.852983] H testsuite-44.sh[390]: Finished with result: success
[ 7.852983] H testsuite-44.sh[390]: Main processes terminated with: code=exited/status=0
[ 7.852983] H testsuite-44.sh[390]: Service runtime: 44ms
[ 7.852983] H testsuite-44.sh[390]: CPU time consumed: 8ms
[ 7.852983] H testsuite-44.sh[390]: Memory peak: 880.0K
[ 7.852983] H testsuite-44.sh[390]: Memory swap peak: 0B
[ 7.853785] H testsuite-44.sh[381]: + journalctl --namespace=foobar --sync
[ 7.860095] H systemd-journald[389]: Received client request to sync journal.
[ 7.862119] H testsuite-44.sh[381]: + journalctl --namespace=foobaz --sync
[ 7.868381] H journalctl[396]: Failed to connect to /run/systemd/journal.foobaz/io.systemd.journal: Connection refused
[ 7.871498] H systemd[1]: testsuite-44.service: Main process exited, code=exited, status=1/FAILURE
[ 7.871642] H systemd[1]: testsuite-44.service: Failed with result 'exit-code'.
[ 7.930772] H systemd[1]: Failed to start testsuite-44.service.
Previously,
1. use the passed Route object as is when a route is requested,
2. when the route becomes ready to configure, convert the Route object
if necessary to resolve the outgoing interface name and split multipath
routes, and save the results to the associated interfaces,
3. configure the route with the passed Route object.
However, there are several inconsistencies with what the kernel does:
- The kernel does not merge or split IPv4 multipath routes, but we
unconditionally split multipath routes for management.
- The kernel does not set the gateway and similar attributes on a route
that has a nexthop ID.
Fortunately, I have not found any issues caused by these inconsistencies.
But for safety, let's manage routes in a way consistent with the kernel.
Now,
1. when a route is requested, split IPv6 multipath routes, but keep IPv4
multipath routes as is, and queue (possibly multiple) requests for
the route,
2. when the route becomes ready to configure, resolve the nexthop and
interface name, and requeue the request if necessary,
3. configure the (possibly split) route.
With this logic,
- Now we manage routes in a way mostly consistent with the kernel.
- We can drop the ConvertedRoutes object.
- Hopefully the code becomes much simpler.
If the requested new name for a network interface is already assigned as an
alternative name, then renaming the interface is neither necessary nor
possible.
This also switches to using sd_device to read some attributes.
So, on moving interfaces back to the parent, we need to populate the sysfs
associated with the client netns.
That may look redundant and complicated, but it makes later change
easier, and hopefully faster.
This reverts commit 35fc10756bc5302d2dff1c235f864fa23a6d8771.
Although DocBook 4.5 states that `cmdsynopsis` can be used within `term` [1],
and `term` within `varlistentry`, `man` does not display the list of commands
after this change. FWIW, `cmdsynopsis` is used tree-wide within `refsynopsisdiv`
only.
[1] https://tdg.docbook.org/tdg/4.5/term
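For illustration, this is the kind of construct that renders fine (`cmdsynopsis` inside `refsynopsisdiv`) versus the one `man` fails to display (`cmdsynopsis` inside `term`); the element names follow DocBook 4.5, the command name is made up:

```xml
<!-- works: cmdsynopsis directly within refsynopsisdiv -->
<refsynopsisdiv>
  <cmdsynopsis>
    <command>frobnicate</command>
    <arg choice="opt">OPTIONS</arg>
  </cmdsynopsis>
</refsynopsisdiv>

<!-- valid per DocBook 4.5, but man does not render the list of commands -->
<variablelist>
  <varlistentry>
    <term><cmdsynopsis><command>frobnicate</command></cmdsynopsis></term>
    <listitem><para>Frobnicates things.</para></listitem>
  </varlistentry>
</variablelist>
```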
This is a supplement to #24419. On macOS Intel machines, detection needs to
be done through cpuid, since on macOS the `dmi_vendors` detection is only
applicable to Apple Silicon (M-series) machines.
Signed-off-by: Black-Hole1 <bh@bugs.cc>
This also introduces an extra argument for route_dup(); it is currently
unused, but will be used later.
No functional change, just preparation for later commits.
The OVMF UEFI firmware measures PK and KEK even when Secure Boot is
disabled and those variables are absent. This can be verified via the
event log, which shows extensions of PCR 7 associated with PK and KEK
events of type EV_EFI_VARIABLE_DRIVER_CONFIG.
When running the "lock-secureboot-policy" verb, pcrlock complains that
those variables are not found and refuses to generate the
240-secureboot-policy.pcrlock.d/generated.pcrlock file.
The "TCG PC Client Platform Firmware Profile Specification Version 1.05
Revision 23"[1] from May 7, 2021, states in section "3.3.4.8 PCR[7] -
Secure Boot Policy Measurements", point 10.b:
If reading a UEFI variable returns UEFI_NOT_FOUND, platform firmware
SHALL measure the absence of the variable. The
UEFI_VARIABLE_DATA.VariableDataLength field MUST be set to zero and
UEFI_VARIABLE_DATA.VariableData field will have a size of zero.
This patch marks those variables as "synthesize empty", generating the
correct hash for those variables.
Signed-off-by: Alberto Planas <aplanas@suse.com>
So far, if some component mounted a DDI in some local mount namespace, we
created a temporary mountpoint in /tmp/ for that. Let's use the same
directory inode in /run/ instead. This is safe, since if everything runs
in a local mount namespace (with propagation on /run/ off) then they
shouldn't fight over the inode. And it relieves us from having to clean
up the directory after use. Moreover, it allows us to run without /tmp/
mounted.
This only moves dissect-image.c and the dissect tool over. More stuff
will be moved over later.
Our dumbed-down example PAM stacks do not contain cracklib/pwq modules,
hence using use_authtok on the pam_unix.so password change stack won't
work: it has the effect that pam_unix.so never asks for a password on
its own, expecting the cracklib/pwq modules to have queried/validated
it beforehand.
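As a hedged illustration (generic pam_unix lines, not our actual fragments): with a stack like the first one below and no pwquality/cracklib module preceding it, nobody ever prompts for the new password.

```
# broken with a minimal stack: use_authtok expects an earlier module
# (pam_pwquality/pam_cracklib) to have prompted for and stashed the token
password  required  pam_unix.so  use_authtok

# works standalone: pam_unix.so prompts for the new password itself
password  required  pam_unix.so
```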
I noticed this issue because of #30969: Debian's PAM setup suffers from
the same issue – even though they don't actually use our suggested PAM
fragments at all.
See: #30969