Let's restore compatibility with pyparsing from Fedora 35, i.e.:
python3-pyparsing-2.4.7-9.fc35.noarch
(cherry picked from commit 133a0003691daafaefa378f770ae01d01931787d)
The handling of whitespace in pyparsing is a bother. There's some
global state, and per-element state, and it's hard to get a handle on
things. With python3-pyparsing-2.4.7-10.fc36.noarch the grammar would
not match. After the handling of tabs was fixed to no longer accept duplicate tabs,
the grammar passes.
It seems that the entry for usb:v8087p8087*
was generated incorrectly because we treated the interface line
(with two TABs) as a device line (with one TAB).
(cherry picked from commit f73d6895872cb9caffc523e1eddc53c9b98cfdec)
RestrictNamespaces should block clone3() like flatpak:
a10f52a756
clone3() passes arguments in a structure referenced by a pointer, so we can't
filter on the flags as with clone(). Let's disallow the whole function call.
(cherry picked from commit 30193fe817d262bd64b9a271534792046f19d7f5)
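
For illustration, a hedged libseccomp sketch of the asymmetry (the helper name and the CLONE_NEWNS example flag are mine, not the actual systemd code):

    #define _GNU_SOURCE
    #include <errno.h>
    #include <sched.h>
    #include <seccomp.h>

    /* Hedged sketch: clone()'s flags live in a register and can be matched,
     * but clone3() passes everything via a struct clone_args pointer, which
     * seccomp cannot dereference, so the whole syscall is rejected. */
    int block_namespace_creation(scmp_filter_ctx ctx) {
            int r;

            /* clone(): reject only calls asking for a new namespace
             * (CLONE_NEWNS shown as one example flag). */
            r = seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EPERM), SCMP_SYS(clone), 1,
                                 SCMP_A0(SCMP_CMP_MASKED_EQ, CLONE_NEWNS, CLONE_NEWNS));
            if (r < 0)
                    return r;

            /* clone3(): flags cannot be inspected, so reject it wholesale;
             * ENOSYS lets libc fall back to plain clone(). */
            return seccomp_rule_add(ctx, SCMP_ACT_ERRNO(ENOSYS), SCMP_SYS(clone3), 0);
    }
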
Follow-up for 2362fdde1b
When --machine is specified with --ephemeral, no random suffix is added, so
the recently added assert would fail.
Add a top-level variable with the expected file name for nspawn files, and
compute it when the rest of the names are computed.
(cherry picked from commit 3603f15171bbc2d650a8942714f6a6a900fb7c60)
When --ephemeral is used, a random 16-character suffix is added to the image
name, so matching on .nspawn files based on the image name no longer works.
Fixes https://github.com/systemd/systemd/issues/13297
(cherry picked from commit 2362fdde1bd4bf54772383ef29431f683729ba76)
Otherwise, the child event source will not work after the process is forked
and the event source is unref()ed in the child process.
(cherry picked from commit 01e6af737494c9790edcc5521ea8c668565b797f)
The implementation of MountImageUnit()/systemctl mount-image was
changed to use a /proc/self/fd path as the source, but that causes
the dm-verity files autodiscovery to fail, as it looks for files
in the same directory as the image.
Use the original file path when setting up dm-verity.
(cherry picked from commit cedf5b1aef4da2443f00eef2c242c8b005071aca)
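
A hedged sketch of why the original path matters: the verity companions are discovered as siblings of the image (following the foo.raw / foo.verity / foo.roothash naming convention), which a /proc/self/fd/<n> path cannot provide. The helper below is illustrative only:

    #include <stdio.h>
    #include <string.h>

    /* Hedged sketch: derive the conventional dm-verity companion paths from
     * the original image path; /proc/self/fd/<n> has no such siblings. */
    static void verity_companions(const char *image,
                                  char *verity, char *roothash, size_t len) {
            const char *dot = strrchr(image, '.');
            int stem = dot ? (int) (dot - image) : (int) strlen(image);

            snprintf(verity,   len, "%.*s.verity",   stem, image);
            snprintf(roothash, len, "%.*s.roothash", stem, image);
    }
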
Previously, systemd-analyze verify would return 0 even if warnings
were raised during analysis of the specified units or their
dependencies. With 3cc3dc7, verify was changed to return 1 when
warnings were raised.
This commit changes the default mode to _RECURSIVE_ERRORS_INVALID
so that verify returns zero again by default when warnings are
raised.
(cherry picked from commit cae7c282721ce13fc1405fc834382d3177a9b83d)
The deny/allow list check was inverted: if we are deny-listing and the
hashmap contains the syscall, then that's good.
Fixes https://github.com/systemd/systemd/issues/22914
(cherry picked from commit dd51e725df9aec2847482131ef601e0215b371a0)
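
The direction of the list flips what a hashmap hit means; a hedged sketch of the relation (names are illustrative, not the actual source):

    #include <stdbool.h>

    /* Hedged sketch of the direction-dependent check that was inverted:
     * with an allow list, a syscall is reachable only if it is listed;
     * with a deny list, it is reachable only if it is NOT listed. */
    static bool syscall_reachable(bool allow_list, bool listed) {
            return allow_list ? listed : !listed;
    }
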
MOVE_MOUNT_T_EMPTY_PATH was added to systemd 250 by [1],
but it has been defined in the kernel headers since version 5.2.
[1] c7bf079bbc19e3b409acc0c7acc3e14749211fe2
(cherry picked from commit 608c3b0293cac3cbb037b2d15c0a0f1e247eb71e)
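
On systems whose userspace headers predate kernel 5.2, the flag can be supplied manually; a hedged sketch (the value is what linux/mount.h uses, to the best of my knowledge):

    #include <linux/mount.h>   /* may or may not provide the flag */

    #ifndef MOVE_MOUNT_T_EMPTY_PATH
    #define MOVE_MOUNT_T_EMPTY_PATH 0x00000040   /* in kernel headers since 5.2 */
    #endif
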
If the time unit changes after adding the repetition value, the
timer may skip the next elapse. This patch resets sub time units
to their minimum value when the upper unit changes.
Fixes #22665.
(cherry picked from commit 1e582ede3b04d12aae11fc5378a446a392054f1c)
When restoring the COW flag for journals on BTRFS, the full journal contents
are copied into new files. But during these operations, the ACLs of the
previous files were lost, and users were no longer able to access their old
journal contents.
(cherry picked from commit 11ee11dbb34587edcde5020c5baf1402dcc4ffdf)
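
A hedged sketch of the missing step, using the POSIX-draft ACL API from libacl (the real fix goes through systemd's own copy helpers):

    #include <sys/acl.h>

    /* Hedged sketch: carry the access ACL over from the old journal file
     * to the freshly copied one, so users keep access to their old logs. */
    static int copy_access_acl(int from_fd, int to_fd) {
            acl_t acl;
            int r;

            acl = acl_get_fd(from_fd);
            if (!acl)
                    return -1;

            r = acl_set_fd(to_fd, acl);
            acl_free(acl);
            return r;
    }
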
With negative numbers we wouldn't account for the minus sign, thus
returning a string one character too short, triggering buffer
overflows in certain situations.
(cherry picked from commit e3dd9ea8ea4510221f73071ad30ee657ca77565d)
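
The gist of the off-by-one, as a hedged sketch (illustrative, not systemd's actual macro):

    #include <stddef.h>

    /* Hedged sketch: characters needed to print x in base 10; the bug was
     * equivalent to forgetting the extra character for the '-' sign. */
    static size_t decimal_width(long long x) {
            unsigned long long u = x < 0 ? -(unsigned long long) x : (unsigned long long) x;
            size_t n = x < 0 ? 2 : 1;   /* first digit, plus the minus sign if negative */

            while (u >= 10) {
                    u /= 10;
                    n++;
            }
            return n;
    }
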
Also, set KeepConfiguration=dhcp-on-stop by default when running in
initrd.
Fixes #21967.
(cherry picked from commit ea853de57dd84a2173cd60e2ecec1b8c978e04f3)
The Stream Deck products from Elgato are simple key pads
intended to be used as macro pads. They're popular within
the streaming community.
This commit adds all 5 Stream Deck variants available to
the AV production file.
See https://www.elgato.com/en/stream-deck
(cherry picked from commit e982320b44486b26c4d39f7c81012f6a0e2aaf77)
This adds support for AV production controller devices, such
as DJ tables, music-oriented key pads, and others.
The USB vendor and product IDs come from Mixxx, Ctlra, and
Ardour.
Fixes #20533
Co-developed-by: Georges Basile Stavracas Neto <georges.stavracas@gmail.com>
(cherry picked from commit f2c36c0e2445fa95ba109017d4b768b2fd825c43)
Add the srpm_build_deps key to the Packit config to specify the dependencies
needed for the SRPM build, and indicate that the SRPM should be built in Copr.
(cherry picked from commit d15e1a29e3aab04ee79d5e3ec8e1e65fca78e165)
The event loop is already shutting down, hence there is no point in using it
anymore; it's not going to run any further iterations.
(cherry picked from commit 47f04c2a69d5a604411f17a2e660021165d09c89)
If we're consuming an on-disk seed, we usually write out a new one after
consuming it. In that case, we might be at early boot and the randomness
could be rather poor, and the kernel doesn't guarantee that it'll use
the new randomness right away for us. In order to prevent the new
entropy from getting any worse, hash together the old seed and the new
seed, and replace the final bytes of the new seed with the hash output.
This way, entropy strictly increases and never regresses.
(cherry picked from commit da2862ef06f22fc8d31dafced6d2d6dc14f2ee0b)
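
A hedged sketch of the mixing step (using OpenSSL's SHA-256 purely for illustration; the actual commit uses systemd's internal hashing):

    #include <openssl/sha.h>
    #include <stdint.h>
    #include <string.h>

    /* Hedged sketch: fold the old seed into the tail of the new one, so the
     * entropy of the written-out seed can only grow, never regress. */
    static void mix_seeds(uint8_t *new_seed, size_t new_len,
                          const uint8_t *old_seed, size_t old_len) {
            uint8_t h[SHA256_DIGEST_LENGTH];
            SHA256_CTX c;
            size_t n;

            SHA256_Init(&c);
            SHA256_Update(&c, old_seed, old_len);
            SHA256_Update(&c, new_seed, new_len);
            SHA256_Final(h, &c);

            n = new_len < sizeof(h) ? new_len : sizeof(h);
            memcpy(new_seed + new_len - n, h, n);
    }
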
Since test-resolved-stream brings up a simple DNS server on 127.0.0.1:12345,
only one instance could run at a time, so it would fail when run like
`meson test -C build test-resolved-stream --repeat=1000`.
Similarly, if by chance something is up on port 12345, the test would fail.
To make the test more reliable, run it in an isolated user + network namespace.
If this fails (some distributions disable user namespaces), just run as before.
(cherry picked from commit c76120f1b82f7e1c6a53b1569087db462c21b7d1)
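
The isolation boils down to something like this hedged sketch (error handling trimmed; the function name is mine): if either namespace cannot be created, the test simply keeps running in the host namespaces.

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    /* Hedged sketch: give the test its own user + network namespace so its DNS
     * server on 127.0.0.1:12345 cannot clash with other instances or services. */
    static void isolate_or_continue(void) {
            if (unshare(CLONE_NEWUSER | CLONE_NEWNET) < 0) {
                    /* e.g. user namespaces disabled by the distribution */
                    fprintf(stderr, "unshare() failed, running in the host namespaces\n");
                    return;
            }
            /* In the fresh netns, "lo" must be brought up before binding 127.0.0.1. */
    }
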
In commit 2aaf6bb6e99b0f2bd73e0c49bef9e11a2844bf1a, an issue was fixed where
systemd-resolved could get stuck for multiple seconds waiting for incoming data,
since GnuTLS/OpenSSL can buffer a TLS record, so data could be available, but
no EPOLLIN event would be generated.
To fix this, a somewhat elaborate logic was implemented, consisting of asking the
TLS library whether it had buffered data, then "faking" an EPOLLIN event.
However, there is a much simpler solution: Always read as much data as available
(i.e. until we get an error like EAGAIN when trying to read) from the stream
when we get an EPOLLIN event, instead of at most a single packet per event.
This approach does not require asking the TLS library whether it has buffered
data, and the logic is exactly the same for both the TCP and TLS case.
test-resolved-stream is fixed to avoid a latent double free bug.
(cherry picked from commit 839a70c3534ce10ed7a66b5925f4570d88b2b64a)
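
Conceptually, the new behavior is just a drain loop like this hedged sketch (for a plain non-blocking TCP fd; with DoT the reads go through the TLS layer instead):

    #include <errno.h>
    #include <unistd.h>

    /* Hedged sketch: on EPOLLIN, keep reading until nothing is left buffered in
     * the kernel (or the TLS library), instead of reading one packet per event. */
    static int drain_stream(int fd) {
            char buf[4096];

            for (;;) {
                    ssize_t n = read(fd, buf, sizeof(buf));
                    if (n < 0) {
                            if (errno == EINTR)
                                    continue;
                            if (errno == EAGAIN)
                                    return 0;          /* fully drained */
                            return -errno;
                    }
                    if (n == 0)
                            return 0;                  /* EOF */
                    /* feed_packet_parser(buf, n);  -- hypothetical */
            }
    }
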
When handling a DNS-over-TLS stream, the TLS library can override the
requested events through dnstls_events for handshake/shutdown purposes, so
obtaining the event flags through sd_event_source_get_io_events() and checking
for EPOLLIN or EPOLLOUT does not really tell us whether we want to read/write
a packet. Instead, it could just be OpenSSL/GnuTLS doing something else.
To make the logic more robust (and simpler), save the flags that tell us
whether we want to read/write a packet, and check them instead of the IO flags.
(Also, use uint32_t for the flags, matching the sd_event_source_set_io_events() prototype.)
(cherry picked from commit eff107736e17bfe43680c42ae39baa3d41fb4715)
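
Conceptually (a hedged sketch; field and function names are made up): keep our own intent next to whatever the TLS layer asked epoll for, and consult the intent rather than the epoll mask.

    #include <stdint.h>
    #include <sys/epoll.h>

    /* Hedged sketch: sd-event's io events may be owned by the TLS handshake,
     * so track separately whether *we* currently want to read/write a packet. */
    struct stream {
            uint32_t requested_events;   /* the EPOLLIN / EPOLLOUT we care about */
    };

    static void update_intent(struct stream *s, int want_read, int want_write) {
            s->requested_events =
                    (want_read  ? EPOLLIN  : 0) |
                    (want_write ? EPOLLOUT : 0);
    }

    /* In the io handler: act on (revents & s->requested_events), not on the
     * raw flags returned by sd_event_source_get_io_events(). */
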
Previously, the condition in on_stream_io_impl() was never hit, as the
read packet is always taken from the stream a few lines above.
Instead of the dns_stream_complete() under the condition, the stream
is unref()ed in the on_packet callback for LLMNR stream, unlike the
other on_packet callbacks.
That's quite tricky. Also, potentially, the stream may still have
queued packets to write.
This fixes the condition, and drops the unref() in the on_packet callback.
Cf. https://github.com/systemd/systemd/pull/22274#issuecomment-1023708449.
Closes #22266.
(cherry picked from commit a5e2a488e83fabf6d8ade7621c2fc3574a8faaa7)
dns_stream_take_read_packet() is called only in on_packet callbacks, and
all on_packet callbacks call it.
(cherry picked from commit 624f907ea9a42930bffb343dd44fbb0e34746cb0)
Tests DnsStream event handling, both for plain TCP DNS and DNS over TLS.
The DoT test requires the "openssl s_server" command line tool to mock a simple
TLS server. Thus the test's TLS part is skipped if openssl is not available.
The test works for both DNS_OVER_TLS_USE_GNUTLS and DNS_OVER_TLS_USE_OPENSSL.
The DoT case fails due to a bug, which is fixed in the next commit.
(cherry picked from commit 726bcd81b965afa3c9cc71f6c7a81b1eefb4bcf5)
When sending multiple DNS questions to a DNS-over-TLS server (e.g. a question
for A and AAAA records, as is typical) on the same session, the server may
answer to each question in a separate TLS record, but it may also aggregate
multiple answers in a single TLS record.
(Some servers do this very often (e.g. Cloudflare 1.0.0.1), some do it sometimes
(e.g. Google 8.8.8.8) and some seem to never do it (e.g. Quad9 9.9.9.10)).
Both cases should be handled equivalently, as the byte stream is the same, but
when multiple answers came in a single TLS record, usually the first answer was
processed, but the second answer was entirely ignored, which caused a 10s delay
until the resolution timed out and the missing question was retried.
This can be reproduced by configuring one of the offending servers and running
`resolvectl query google.com --cache=no` a few times.
To be notified of incoming data, systemd-resolved listens to `EPOLLIN` events
on the underlying socket. However, when DNS-over-TLS is used, the TLS library
(OpenSSL or GnuTLS) may read and buffer the entire TLS record when reading the
first answer, so usually no further `EPOLLIN` events will be generated, and the
second answer will never be processed.
To avoid this, if there's buffered TLS data, generate a "fake" EPOLLIN event.
This is hacky, but it makes this case transparent to the rest of the IO code.
(cherry picked from commit 2aaf6bb6e99b0f2bd73e0c49bef9e11a2844bf1a)
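
The workaround amounts to asking the TLS library about leftover plaintext after each read and re-arming the handler; a hedged sketch for the OpenSSL flavor (GnuTLS offers gnutls_record_check_pending() for the same purpose; the re-dispatch helper is hypothetical):

    #include <stdbool.h>
    #include <openssl/ssl.h>

    /* Hedged sketch: a TLS record may carry more than one DNS answer, and once
     * the library has buffered it the socket will not signal EPOLLIN again. */
    static bool tls_has_buffered_data(SSL *ssl) {
            return SSL_pending(ssl) > 0;
    }

    /* After handling one packet:
     *     if (tls_has_buffered_data(stream->ssl))
     *             schedule_fake_epollin(stream);   -- hypothetical re-dispatch */
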
RANDOM_BLOCK has existed for a long time, but RANDOM_ALLOW_INSECURE was
added more recently, leading to an awkward relationship between the two.
It turns out that only one, RANDOM_BLOCK, is needed.
RANDOM_BLOCK means return cryptographically secure numbers no matter
what. If it's not set, it means try to do that, but if it fails, fall
back to using unseeded randomness.
This part of falling back to unseeded randomness is the intent of
GRND_INSECURE, which is what RANDOM_ALLOW_INSECURE previously aliased.
Rather than having an additional flag for that, it makes more sense to
just use it whenever RANDOM_BLOCK is not set. This saves us the overhead
of having to open up /dev/urandom.
Additionally, when getrandom returns too little data, but not zero data,
we currently fall back to using /dev/urandom if RANDOM_BLOCK is not set.
This doesn't quite make sense, because if getrandom returned seeded data
once, then it will forever after return the same thing as whatever
/dev/urandom does. So in that case, we should just loop again.
Since there's never really a time where /dev/urandom is able to return
some bytes easily but more only with difficulty, we can also get rid of
RANDOM_EXTEND_WITH_PSEUDO. Once the RNG is initialized, bytes
should just flow normally.
This also makes RANDOM_MAY_FAIL obsolete, because the only case where it applied
was when we'd fall back to /dev/urandom on old kernels and return
GRND_INSECURE bytes on new kernels. So also get rid of that flag.
Finally, since we're always able to use GRND_INSECURE on newer kernels,
and we only fall back to /dev/urandom on older kernels, also only fall
back to using RDRAND on those older kernels. There, the only reason to
have RDRAND is to avoid a kmsg entry about unseeded randomness.
The result of this commit is that we now cascade like this:
- Use getrandom(0) if RANDOM_BLOCK.
- Use getrandom(GRND_INSECURE) if !RANDOM_BLOCK.
- Use /dev/urandom if !RANDOM_BLOCK and no GRND_INSECURE support.
- Use /dev/urandom if no getrandom() support.
- Use RDRAND if we would use /dev/urandom for any of the above reasons
and RANDOM_ALLOW_RDRAND is set.
(cherry picked from commit 31234fbeec1c4a8e500106dff4779ccaa5baef83)
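
A hedged sketch of that cascade using the raw getrandom(2) interface (error handling condensed; read_urandom() stands in for the /dev/urandom fallback and is not a real API):

    #include <errno.h>
    #include <sys/random.h>

    /* Hypothetical fallback reading from /dev/urandom, defined elsewhere. */
    ssize_t read_urandom(void *buf, size_t len);

    /* Hedged sketch of the cascade described above. */
    static ssize_t get_bytes(void *buf, size_t len, int block) {
            ssize_t n;

            if (block)
                    n = getrandom(buf, len, 0);              /* wait for a seeded pool */
            else
                    n = getrandom(buf, len, GRND_INSECURE);  /* never block, maybe unseeded */

            if (n < 0 && (errno == ENOSYS ||                 /* kernel lacks getrandom() */
                          (!block && errno == EINVAL)))      /* kernel lacks GRND_INSECURE */
                    n = read_urandom(buf, len);

            /* RDRAND would only be considered on the /dev/urandom paths above,
             * and only if RANDOM_ALLOW_RDRAND is set. */
            return n;
    }
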
Previously, even if a link was in the unmanaged state, the function could
return a positive value. So, even if all managed links were in the configured
state but did not satisfy the online criteria, e.g. the IPv4 address state,
wait-online finished with a positive value.
This makes the function always return 0 for the unmanaged state, so at
least one managed link must satisfy the online criteria.
This also adds more comments and debugging logs.
Fixes #22246.
(cherry picked from commit cd7fcda54333dc95116a434cffc591f21edddbb2)