This extends our current polkit logic, so that we can authenticate varlink
connection peers in a fashion very similar to how we already authenticate
dbus peers.
polkit natively speaks dbus and can authenticate dbus peers. To get the
same level of support for varlink we'll use authentication by pidfd+uid.
This requires polkit v124; if that's not available we fall back to
authorizing root only, as before.
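For illustration, here's a minimal stand-alone sketch (not the actual
systemd code) of how a server can collect the pidfd and uid of an AF_UNIX
varlink connection peer, i.e. the information that is then handed to
polkit. SO_PEERPIDFD needs Linux >= 6.5; where it's unavailable only the
uid is known, which matches the root-only fallback mentioned above:

    #define _GNU_SOURCE            /* for struct ucred with glibc */
    #include <errno.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    #ifndef SO_PEERPIDFD
    #define SO_PEERPIDFD 77        /* asm-generic value; older headers lack it */
    #endif

    /* On success *ret_pidfd is a pidfd for the peer, or -1 if the kernel
     * cannot provide one. The caller owns the returned fd. */
    static int peer_pidfd_and_uid(int connection_fd, int *ret_pidfd, uid_t *ret_uid) {
            struct ucred ucred;
            socklen_t n = sizeof(ucred), m = sizeof(int);
            int pidfd = -1;

            if (getsockopt(connection_fd, SOL_SOCKET, SO_PEERCRED, &ucred, &n) < 0)
                    return -errno;

            if (getsockopt(connection_fd, SOL_SOCKET, SO_PEERPIDFD, &pidfd, &m) < 0) {
                    if (errno != ENOPROTOOPT && errno != EINVAL)
                            return -errno;
                    pidfd = -1;    /* kernel too old to hand out a peer pidfd */
            }

            *ret_pidfd = pidfd;
            *ret_uid = ucred.uid;
            return 0;
    }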
Co-authored-by: Luca Boccassi <bluca@debian.org>
This does for configuration extensions what we already do for system
extensions.
This is complicated by the fact that we previously looked for
<uki-binary>.d/*.raw for system extensions. We want to measure sysexts
and confexts into different PCRs (13 vs. 12), hence we must distinguish
them, but *.raw would match both kinds.
This commit solves this via the following mechanism: we'll load confexts
from *.confext.raw and sysexts from *.raw, but will then exclude
*.confext.raw from the latter. This preserves compatibility but allows us
to distinguish the two kinds of images somewhat reasonably.
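The matching rule boils down to something like this sketch (illustrative
only, not the actual sd-stub code): *.confext.raw is treated as a confext,
and any other *.raw as a sysext:

    #include <stdbool.h>
    #include <string.h>

    static bool endswith(const char *s, const char *suffix) {
            size_t sl = strlen(s), xl = strlen(suffix);
            return sl >= xl && strcmp(s + sl - xl, suffix) == 0;
    }

    static bool is_confext(const char *fname) {
            return endswith(fname, ".confext.raw");
    }

    static bool is_sysext(const char *fname) {
            /* *.sysext.raw is the documented suffix, but any *.raw that is
             * not a confext is still accepted, as described above. */
            return endswith(fname, ".raw") && !is_confext(fname);
    }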
The documentation is updated without going into this detail though, and
instead now claims that sysexts shall be *.sysext.raw and confexts
*.confext.raw, even though we are actually more lenient than this. This
is simply to push people towards using the longer, more descriptive
suffixes.
I added an XML comment (<!-- … -->) about this to the docs, so that
whoever notices the difference between code and docs understands why it is
that way and leaves it as is.
Currently, link_queue_request_safe(), which is a wrapper of
request_new(), is called with a free function at
- link_request_stacked_netdev() at netdev/netdev.c,
- link_request_address() at networkd-address.c,
- link_request_nexthop() at networkd-nexthop.c,
- link_request_neighbor() at networkd-neighbor.c.
For the netdev case, the reference counter of the passed object is increased
only when the function returns 1. So, on failure (with -ENOMEM), we
previously dropped a reference of the NetDev object unexpectedly.
Similarly, for Address and friends, ownership of the object is moved to the
Request object only when the function returns 1, and on failure the object
was previously freed twice.
Also, netdev_queue_request(), which is another wrapper of request_new(),
potentially leaks memory when the same NetDev object is queued twice.
Fortunately, that should not happen, as the function is called only once
per object.
This fixes the above issues: now the ownership or the reference counter of
the object is changed only when the function succeeds and returns 1.
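To illustrate the convention (with the networkd types reduced to a
stand-in, this is not the real API): the queueing function returns 1 when
it newly takes the object, 0 when an equivalent request is already queued,
and a negative errno on failure, and only the "1" case may move ownership
or add a reference:

    #include <errno.h>

    typedef struct Object {
            unsigned n_ref;
    } Object;

    static Object* object_ref(Object *o) {
            o->n_ref++;
            return o;
    }

    /* Stand-in for link_queue_request_safe()/netdev_queue_request(). */
    static int queue_request(Object *o, int outcome) {
            if (outcome < 0)
                    return -ENOMEM;  /* failure: the caller's reference stays untouched */
            if (outcome == 0)
                    return 0;        /* an equivalent request already exists: nothing taken either */

            object_ref(o);           /* only here does the Request acquire its own reference */
            return 1;
    }

    /* Caller side: nothing to balance unless 1 was returned. */
    static int request_object(Object *o) {
            int r;

            r = queue_request(o, 1);
            if (r <= 0)
                    return r;

            /* r == 1: the queued Request now also holds a reference. */
            return 1;
    }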
Rewrite the test in bash and make it part of our integration test suite,
so it's actually executed in all our upstream CI environments.
The original test is flaky in environments where daemon-reload might
occur during the test runtime (e.g. when running the test in parallel
with the systemd-networkd test suite). Also, it was run only in CentOS
CI, and in a limited way (i.e. without sanitizers), since it tests the
host's systemd instead of the just-built one.
Resolves: #29943
Then, replace address_remove_and_drop() with it.
If an address is requested and the request has already been called, we may
not have received its reply and the notification from the kernel yet, so
the corresponding Address object may not be remembered. Even in such a
case we need to remove the address, otherwise the address may appear
later, after the function has been called.
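In other words (with illustrative stand-in names, not the real networkd
API), the new helper must not make the removal conditional on an Address
object being remembered:

    typedef struct Link Link;
    typedef struct Address Address;

    extern void address_cancel_request(Link *link, const Address *spec);
    extern int address_remove(Link *link, const Address *spec);

    static int address_remove_and_cancel_sketch(Link *link, const Address *spec) {
            /* Drop any queued or in-flight request for this address first. */
            address_cancel_request(link, spec);

            /* Then ask the kernel to remove it even if we do not remember an
             * Address object for it: the reply/notification for the earlier
             * request may simply not have arrived yet, and the address would
             * otherwise appear later, after this function has returned. */
            return address_remove(link, spec);
    }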
Otherwise, udev workers cannot detect slow programs invoked by
IMPORT{program}=, PROGRAM=, or RUN=, and the whole worker process may be
killed.
Fixes #30436.
Co-authored-by: sushmbha <sushmita.bhattacharya@oracle.com>
Also, calculate the timeout for the warning based on the time remaining
until the event times out, rather than on the timeout itself.
Currently, the udev manager kills the worker if the timeout is exceeded,
so this does not change anything except the timing of the warning.
Just refactoring and preparation for later commits.
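The "remaining time" calculation is essentially the following (stand-alone
sketch with a systemd-style usec_t; the concrete fraction is illustrative):

    #include <stdint.h>

    typedef uint64_t usec_t;

    /* Warn after a share of the time that is still left until the event
     * deadline, instead of a share of the full event timeout. */
    static usec_t warning_timeout(usec_t now_usec, usec_t event_deadline_usec) {
            usec_t remaining;

            if (event_deadline_usec <= now_usec)
                    return 0;          /* deadline already reached, warn right away */

            remaining = event_deadline_usec - now_usec;
            return remaining / 3;      /* fraction chosen here for illustration */
    }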
Turns out Coccinelle can handle compound literals just fine; the parsing
errors were caused by incorrectly parsed macros in the code before the
literals, so let's just provide simplified versions of such macros.
The parsing error in `Type *foo[ELEMENTSOF(bar)] = {};` is actually
harmless; it occurs only when creating an array of pointers to a type
that's declared in an external header, and only on the parser's first
pass; subsequent passes resolve the type correctly.
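As an illustration of the "simplified versions" trick (the concrete macros
and file names differ in the tree), a macro built on a GNU statement
expression gets a trivially parseable stand-in with the same name and
arity, and only that stand-in is fed to Coccinelle's parser:

    /* How such a macro might look in the tree (the statement expression is
     * what Coccinelle's C parser chokes on): */
    #define TAKE_EXAMPLE(p)                 \
            ({                              \
                    typeof(p) _p = (p);     \
                    (p) = NULL;             \
                    _p;                     \
            })

    /* The simplified stand-in handed only to Coccinelle (in reality the two
     * definitions live in different files, of course): */
    #undef TAKE_EXAMPLE
    #define TAKE_EXAMPLE(p) (p)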
Also, unset ENABLE_DEBUG_HASHMAP, so Coccinelle doesn't expand the
hashmap debug macros.
As for the remaining FIXMEs, I opened a couple of issues in the
Coccinelle upstream to see if they can be fixed there (or at least
properly analyzed).
When we want to do Polkit authentication, we want to temporarily pause
handling of a method call until we have the Polkit reply, and then start
again. Let's add some glue to make that easy. This adds two helpers:
varlink_dispatch_again() allows asking for redispatching of the currently
queued incoming message. The use case is this: if we don't process a
method right away, we can come back later and ask for it to be processed
again with this function, in which case our handlers will be called a
second time, exactly like the first time.
varlink_get_current_message() provides access to the currently processed
method call.
With this the polkit logic can look into the current message, do its
thing, and then restart the method handling.
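Roughly, a method handler built on top of the two helpers could look like
this sketch (simplified; polkit_reply_received() and start_polkit_query()
are hypothetical stand-ins for the actual polkit glue):

    #include <stdbool.h>

    #include "varlink.h"        /* systemd-internal API, shown for illustration */

    /* Hypothetical helpers standing in for the real polkit glue: */
    bool polkit_reply_received(Varlink *link);
    int start_polkit_query(Varlink *link, JsonVariant *parameters);

    static int method_do_something(Varlink *link, JsonVariant *parameters,
                                   VarlinkMethodFlags flags, void *userdata) {
            int r;

            if (!polkit_reply_received(link)) {
                    /* First invocation: kick off the asynchronous polkit query.
                     * Its completion callback is expected to call
                     * varlink_dispatch_again() on this connection, so that this
                     * handler runs a second time once the reply is in. */
                    r = start_polkit_query(link, parameters);
                    if (r < 0)
                            return r;

                    return 0;   /* handling of this message is paused for now */
            }

            /* Second invocation: the polkit reply is available; do the actual
             * work and answer the method call (still accessible via
             * varlink_get_current_message()). */
            return varlink_reply(link, NULL);
    }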
This combines safe_fork() with pidref_set_pid().
Eventually we really should switch this to use CLONE_PIDFD, but as that
is not wrapped by glibc yet, it's hard. This is not crucial anyway, as a
child we just forked off can always safely be referenced by its PID too,
given that the reaping is under our own control.
A simple test case is added in a follow-up commit.
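A minimal sketch of the combination, assuming the existing safe_fork() and
pidref_set_pid() helpers (the actual name and signature of the new helper
in the tree may differ):

    #include "pidref.h"          /* systemd-internal headers, for illustration */
    #include "process-util.h"

    static int pidref_safe_fork_sketch(const char *name, ForkFlags flags, PidRef *ret) {
            pid_t pid;
            int r;

            r = safe_fork(name, flags, &pid);
            if (r < 0)
                    return r;
            if (r == 0)
                    return 0;    /* in the child: nothing to pin */

            /* In the parent: pin the child, via a pidfd where the kernel can
             * hand one out. As explained above, referring to it by PID alone
             * is also safe here, since we control when the child is reaped. */
            r = pidref_set_pid(ret, pid);
            if (r < 0)
                    return r;

            return 1;            /* in the parent */
    }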
We simply don't carry any userspace support for TPM 1.2 in our tree, and
we shouldn't, given it's too weak by today's standards. Hence, when we
check whether we are booted in UKI measured-boot mode, don't just check
whether we are booted via EFI, but also check that we have a TPM2 chip (as
opposed to no TPM at all or only a TPM 1.2 chip).
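The shape of the additional check, as a stand-alone sketch (the real
efi_measured_uki() goes through systemd's EFI and TPM2 helpers instead of
poking sysfs directly):

    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>

    static bool booted_via_efi(void) {
            return access("/sys/firmware/efi", F_OK) >= 0;
    }

    static bool have_tpm2(void) {
            FILE *f;
            int major = 0;

            /* Exposed by reasonably recent kernels; a TPM 1.2 device reports 1 here. */
            f = fopen("/sys/class/tpm/tpm0/tpm_version_major", "re");
            if (!f)
                    return false;    /* no TPM exposed at all */

            if (fscanf(f, "%d", &major) != 1)
                    major = 0;
            fclose(f);

            return major == 2;
    }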
This is an alternative to #30652 but more comprehensive (and simpler),
since it covers all invocations of efi_measured_uki().
Fixes: #30650
Replaces: #30652