If wait_operstate() is called very quickly after an ip command, the
up/down state may not have been changed and propagated to networkd yet,
and wait_operstate() mistakenly passes with the previous state.
To avoid this situation, wait for a while so that networkd actually
detects the interface being brought up/down.
This fixes the following race:
1. when a dummy interface is created, it is initially in the down state,
2. hence, wait_operstate() may pass before the link is activated,
3. and the ip command brings up the interface before the activation,
4. and networkd activates, that is, brings down the interface,
5. thus, the next wait_operstate() times out, as it waits for the interface to come up.
To fix the race, let's wait until the link is activated before entering
the loop of wait_operstate().
Fixes #22267.
Tests DnsStream event handling, both for plain TCP DNS and DNS over TLS.
The DoT test requires the "openssl s_server" command line tool to mock a simple
TLS server. Thus, the TLS part of the test is skipped if openssl is not available.
The test works for both DNS_OVER_TLS_USE_GNUTLS and DNS_OVER_TLS_USE_OPENSSL.
The DoT case fails due to a bug, which is fixed in the next commit.
When sending multiple DNS questions to a DNS-over-TLS server (e.g. a question
for A and AAAA records, as is typical) on the same session, the server may
answer to each question in a separate TLS record, but it may also aggregate
multiple answers in a single TLS record.
Some servers do this very often (e.g. Cloudflare 1.0.0.1), some do it
sometimes (e.g. Google 8.8.8.8), and some seem to never do it (e.g. Quad9 9.9.9.10).
Both cases should be handled equivalently, as the byte stream is the same.
However, when multiple answers came in a single TLS record, usually only the
first answer was processed, while the second was entirely ignored, causing a
10s delay until the resolution timed out and the missing question was retried.
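Concretely, DNS over TCP (and thus DoT) frames each message with a 16-bit
big-endian length prefix (RFC 7766), so a parser only has to keep consuming
length-prefixed messages regardless of how the TLS records were cut. A rough
sketch (the function name is invented for illustration):

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Consume as many complete length-prefixed DNS messages as the
 * buffer contains; one TLS record may carry one or several. */
static void process_plaintext(const uint8_t *buf, size_t len) {
        size_t off = 0;

        while (len - off >= 2) {
                uint16_t sz = (uint16_t) ((buf[off] << 8) | buf[off + 1]);

                if (len - off - 2 < sz)
                        break; /* partial message, wait for more data */

                printf("complete DNS message of %u bytes\n", (unsigned) sz);
                off += 2 + sz;
        }
}
```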
This can be reproduced by configuring one of the offending servers and running
`resolvectl query google.com --cache=no` a few times.
To be notified of incoming data, systemd-resolved listens to `EPOLLIN` events
on the underlying socket. However, when DNS-over-TLS is used, the TLS library
(OpenSSL or GnuTLS) may read and buffer the entire TLS record when reading the
first answer, so usually no further `EPOLLIN` events will be generated, and the
second answer will never be processed.
To avoid this, if there's buffered TLS data, generate a "fake" EPOLLIN event.
This is hacky, but it makes this case transparent to the rest of the IO code.
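As a sketch of the idea (the helper name is invented; the real code differs
between the GnuTLS and OpenSSL backends), both libraries can report whether
decrypted data is still buffered, which is the condition for queueing the
fake event:

```c
#include <stdbool.h>

#include <openssl/ssl.h>

/* If this returns true after a read, the kernel socket may stay
 * silent even though another DNS answer is already buffered, so the
 * IO handler must be re-invoked as if EPOLLIN had fired again.
 * (GnuTLS equivalent: gnutls_record_check_pending() > 0.) */
static bool tls_has_buffered_data(SSL *ssl) {
        return SSL_pending(ssl) > 0;
}
```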
These checks don't achieve anything of value. Assuming they were added to
check for corruption, they don't actually achieve that goal, since other parts
of the data object can still get corrupted and we wouldn't notice unless we
recalculated the hash every time.
In theory, we could use the entry item hash to avoid a random access lookup
for the data object hash in the journal file in the future to speed up searching,
but for finding all entry objects containing a specific data object, we already
have entry arrays per data object to get fast access to this information.
This means that duplicating the hashes in the entry item doesn't result in any
added value. In this commit, we remove the checks so that in future commits we
can remove the hashes from the journal file format in the new compact mode.
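For context, this is roughly the shape of an entry item in the on-disk format
(simplified from journal-def.h, which uses little-endian typed fields):

```c
#include <stdint.h>

/* Each entry item stores a copy of the referenced data object's
 * hash; with the checks above gone, the compact mode can later drop
 * this duplicated field from the format entirely. */
typedef struct EntryItem {
        uint64_t object_offset;  /* offset of the referenced data object */
        uint64_t hash;           /* duplicate of that object's hash */
} EntryItem;
```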
Previously, for each entry in a data object's entry array, we'd check
if one of that entry's entry items referred to the data object.
Instead, when verifying the main entry array, let's check that, for each
entry item found while iterating the main entry array, the corresponding
data object's entry array refers back to that entry.
This enables us to re-use more code from journal-file and turns out to
be roughly 10s faster when verifying my 4G laptop journal.
When verifying data objects, we still check if every entry in the data
object's entry array also exists in the main entry array so that we ensure
we're not missing any entries when iterating the main entry array.
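In pseudocode terms, the inverted check looks roughly like this (all types
and helpers below are invented for illustration; the real code walks mmap'ed
journal objects):

```c
#include <errno.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy in-memory model, purely for illustration. */
typedef struct Entry Entry;

typedef struct Data {
        const Entry **entries;   /* this data object's entry array */
        uint64_t n_entries;
} Data;

struct Entry {
        const Data **items;      /* data objects referenced by this entry */
        uint64_t n_items;
};

static bool data_refers_to(const Data *d, const Entry *e) {
        for (uint64_t i = 0; i < d->n_entries; i++)
                if (d->entries[i] == e)
                        return true;
        return false;
}

/* Walk the main entry array once; every entry item must be matched
 * by a backreference in the corresponding data object's entry array. */
static int verify_main_entry_array(const Entry **main_array, uint64_t n) {
        for (uint64_t i = 0; i < n; i++)
                for (uint64_t j = 0; j < main_array[i]->n_items; j++)
                        if (!data_refers_to(main_array[i]->items[j], main_array[i]))
                                return -EBADMSG;
        return 0;
}
```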
Let's always try to link all entry items even if linking one fails
due to not being able to allocate a new entry array. Other entry
items might still be successfully linked if the entry array of the
corresponding data object isn't full yet.
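The pattern is the usual "remember the first error but keep going" loop; a
sketch with a hypothetical per-item helper:

```c
#include <stdint.h>

/* Hypothetical stand-in: linking one item may fail, e.g. with
 * -ENOMEM when a new entry array cannot be allocated. */
static int link_entry_item(uint64_t i);

/* Try every item; later items may still succeed if their data
 * object's entry array has room. Report the first failure at the end. */
static int link_all_entry_items(uint64_t n_items) {
        int r = 0;

        for (uint64_t i = 0; i < n_items; i++) {
                int k = link_entry_item(i);
                if (k < 0 && r >= 0)
                        r = k;
        }

        return r;
}
```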
Let's make sure we only move to objects when it's required. If "ret"
is NULL, the caller isn't interested in the actual object and the
function being called shouldn't move to it unless it has to
inspect/modify the object itself.
This reduces the number of calls to journal_file_move_to_object(), which are
expensive. All call sites have easy access to the data object, so this change
doesn't end up complicating things.
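A hedged sketch of the pattern (the lookup helper and function names are
invented; only journal_file_move_to_object() is the real call being avoided):

```c
#include <errno.h>
#include <stdint.h>

typedef struct JournalFile JournalFile;
typedef union Object Object;

/* Hypothetical declarations standing in for the real journal code. */
int journal_file_move_to_object(JournalFile *f, int type, uint64_t offset, Object **ret);
uint64_t lookup_hash_table(JournalFile *f, uint64_t hash);

static int journal_file_find_data(JournalFile *f, uint64_t hash,
                                  uint64_t *ret_offset, Object **ret) {
        uint64_t offset = lookup_hash_table(f, hash);
        if (offset == 0)
                return -ENOENT;

        /* Only map the object when the caller actually asked for it. */
        if (ret) {
                int r = journal_file_move_to_object(f, /* OBJECT_DATA */ 1, offset, ret);
                if (r < 0)
                        return r;
        }

        if (ret_offset)
                *ret_offset = offset;

        return 0;
}
```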
We currently use both offsetof(Object, ...) and offsetof(DataObject, ...).
This makes it harder to grep for usages, as we have to remember to search
for both forms. Let's unify them all to use offsetof(Object, ...) to make
grepping easier.
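For illustration, a simplified model of why the two spellings are
interchangeable: Object is a union of all object types, so the nested member
designator resolves to the same offset.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified sketch of the journal object layout; the real union in
 * journal-def.h has more members. */
typedef struct DataObject {
        uint64_t hash;
        uint64_t next_hash_offset;
} DataObject;

typedef union Object {
        DataObject data;
        /* ... other object types ... */
} Object;

int main(void) {
        assert(offsetof(Object, data.hash) == offsetof(DataObject, hash));
        return 0;
}
```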
When a Condition*= check fails and a service has Restart=always,
the service is not restarted.
Follow the same behaviour for ExecCondition= to avoid inconsistencies.
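For example, with a hypothetical unit like the following, a failing
ExecCondition= now prevents the restart loop just like a failing
Condition*= would:

```ini
[Service]
# Skip (and do not restart) the service while the flag file is missing.
ExecCondition=/usr/bin/test -e /run/myservice/ready
ExecStart=/usr/bin/mydaemon
Restart=always
```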
Fixes#22257
Previously, files with a hole at the end would get silently truncated,
which breaks reading journal files. This commit makes sure that holes are
punched within existing space and, if no more space is available, that we
grow the file and the hole by using ftruncate().
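A minimal sketch of the idea (Linux-specific; the helper below is invented
for illustration):

```c
#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <sys/types.h>
#include <unistd.h>

/* Punch the part of the hole that lies within the current file size,
 * and grow the file with ftruncate() for the part beyond it; space
 * added by ftruncate() is sparse already, so the trailing hole is
 * preserved instead of being silently dropped. */
static int punch_hole(int fd, off_t offset, off_t size, off_t file_size) {
        off_t end = offset + size;

        if (offset < file_size) {
                off_t n = (end < file_size ? end : file_size) - offset;

                if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, offset, n) < 0)
                        return -errno;
        }

        if (end > file_size && ftruncate(fd, end) < 0)
                return -errno;

        return 0;
}
```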
The corresponding test is extended to put a hole at the end of the file,
and we make sure that the hole is copied correctly.
Previously, even if a link was in the unmanaged state, the function could
return a positive value. So, even if all managed links were in the configured
state but did not satisfy the online criteria, e.g. the IPv4 address state,
wait-online finished with a positive value.
This makes the function always return 0 for the unmanaged state, so at
least one managed link must satisfy the online criteria.
This also adds more comments and debugging logs.
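A hedged sketch of the new return convention (types and names invented; the
real code lives in the wait-online sources and checks more criteria):

```c
#include <stdbool.h>
#include <string.h>

typedef struct Link {
        bool managed;              /* claimed and configured by networkd? */
        const char *operstate;     /* e.g. "routable" */
} Link;

/* Returns 1 if the link satisfies the online criteria, 0 if it cannot
 * count towards them. Unmanaged links now always yield 0, so the
 * caller must find at least one managed link that returns 1. */
static int link_is_online(const Link *l) {
        if (!l->managed)
                return 0;

        return strcmp(l->operstate, "routable") == 0;
}
```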
Fixes#22246.
The script has probably not been used in a very long time. It is currently
passed systemd_boot.so as the boot loader, which cannot work. The test
entries it creates all point at non-existent efi/linux binaries,
which means they would not even show up in the menu if the created image
were actually booted. There is also nothing that actually tries to run
the image in the first place.
If we end up creating a proper systemd-boot test suite, it would be
better to start from scratch. In the meantime, mkosi already covers
the bare minimum with a simple bootup test.