This adds a new script tools/check-version-history.py and a corresponding
test when building in developer mode. It checks manpages (except the dbus
documentation, which is handled by update-dbus-docs) for missing version
history information.
It also adds ignore lists based on version 183 (the version that our version
annotations go back to). These can be augmented if we want to ignore other
elements for which it doesn't make sense to have version annotations.
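A rough sketch of the kind of check such a script might perform is below. It
assumes the manpages are DocBook XML and that version annotations appear as
literal "Added in version" text; the element names and the heuristic are
illustrative assumptions, not the actual tools/check-version-history.py
(which also has to cope with things like XML entities in the real sources).

    # Illustrative sketch only -- not the real check-version-history.py.
    import sys
    import xml.etree.ElementTree as ET

    def missing_version_info(path):
        # Flag <varlistentry> items whose text never mentions a version.
        missing = []
        for entry in ET.parse(path).iter("varlistentry"):
            if "Added in version" not in "".join(entry.itertext()):
                term = entry.find("term")
                name = "".join(term.itertext()).strip() if term is not None else "<unnamed>"
                missing.append(name)
        return missing

    if __name__ == "__main__":
        for page in sys.argv[1:]:
            for name in missing_version_info(page):
                print(f"{page}: missing version history for {name}")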
Because journal_file_next_entry_for_data() provides the first entry, while
journal_file_next_entry() actually provides the next entry of the input,
this also renames the former to journal_file_move_to_entry_for_data().
Also, previously, on DIRECTION_UP the function did not fall back to the
'extra' entry when all entries linked in the chained array are broken.
This also fixes that issue, and now it falls back to the extra entry.
When we reach an empty array, there are at least two possibilities:
- the journal file is corrupted,
- an invalid index is requested.
We cannot distinguish them here, so let's simply return earlier.
If there's corruption and we are going upwards, then the 'total'
must be decreased when we go to the previous array. However,
previously, we wrongly kept or increased the number. This fixes
the behavior.
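As a toy illustration (a simplified model, not the journal code itself):
think of the chained entry arrays as a list of arrays and of 'total' as the
number of entries stored in the arrays that precede the current one. Walking
upwards then has to shrink that count, along these lines:

    # Toy model: 'total' = number of entries in the arrays preceding
    # chain[current]. Stepping back to the previous array must decrease
    # it by that array's length, otherwise later index math is off.
    def walk_up(chain, current, total):
        while current > 0:
            current -= 1
            total -= len(chain[current])
            yield current, total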
This effectively reverts d9b61db922.
In the do-while loop, we do not read any other entry array object, hence
the current object is always in the mmap cache and it is not necessary to
re-read it.
Doing that in VMs without acceleration is prohibitively expensive (e.g.
20+ seconds in the C8S job). Thankfully, the recent --lines=+n syntax [0]
makes this all quite easy to fix.
[0] 8d6791d2aa
Since the soft-reboot drops the enqueued end.service, we won't shut down
the test VM if the test fails and will have to wait for the watchdog to kill
us (which may take quite a long time). Let's just forcibly kill the
machine instead to save CI resources.
Let's make sure we check confexts against the confext API level, and
sysexts against the sysext API level.
Previously the test would simply be skipped for confexts...
This adds an explicit service for initializing the TPM2 SRK. This is
implicitly also done by systemd-cryptsetup, hence strictly speaking
redundant, but doing this early has the benefit that we can parallelize
this in a nicer way. This also writes a copy of the SRK public key in PEM
format to /run/ + /var/lib/, thus pinning the disk image to the TPM.
Making the SRK public key available is also useful for allowing easy offline
encryption for a specific TPM.
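For example (purely illustrative: the exact file name below is an assumption,
the commit only says a PEM copy lands under /run/ and /var/lib/), the public
key can be inspected offline with the Python 'cryptography' package:

    # Hypothetical path -- the commit does not spell out the file name.
    from cryptography.hazmat.primitives.serialization import load_pem_public_key

    with open("/run/systemd/tpm2-srk-public-key.pem", "rb") as f:
        srk_public = load_pem_public_key(f.read())
    print(type(srk_public).__name__)  # an RSA or ECC public key object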
Sooner or later we should probably grow what this service does; the
above is just the first step. For example, the service should probably
offer the ability to reset the TPM (clear the owner hierarchy?) on a
factory reset, if such a policy is needed. And we might want to install
some default AK (?).
Fixes: #27986
Also see: #22637
Given that gold is pretty much unmaintained and does not support
`-static-pie` for bootloader components, it should be safe to drop.
Also switch to clang-17 while we're at it.
The biggest reason for forcing bfd was the use of linker scripts. Since
we don't rely on those anymore, we can lift the requirement.
The biggest issue is gold, as it does not understand -static-pie. Given
that it's pretty much on life support, it's safe to just declare it not
supported anymore.
Don't link addons with libefi as clang/lld is sometimes very eager to
include memset etc., causing needless binary bloat and link errors with
LTO.
Fixes: #29165
This makes the special PE sections available again in our output EFI
images.
Since the compiler provides no way to mark a section as not allocated,
we use GNU assembler syntax to emit the sections instead. This ensures
the section data isn't emitted twice, as load segments will only contain
allocated input sections.
The main reason the section conversion needs so much logic is that PE
sections have to be aligned to the page size (although currently not even
EDK2 enforces this). Achieving this with a linker script is fraught with
errors: linker scripts are a pain to set up correctly and suck in general.
They are also not supported by mold, which forces us to use bfd, which in
turn means that linker feature detection is easily at odds with reality, as
meson has a different idea of which linker is in use.
Instead of forcing a manual ELF segment layout with a linker script we
just let the linker do its thing. We then simply copy/concatenate the
sections while observing proper page boundaries.
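The bookkeeping is essentially just rounding each output offset up to the
next page boundary before copying, along these lines (a simplified sketch,
not the actual elf2efi code):

    PAGE_SIZE = 4096

    def align_up(n, align=PAGE_SIZE):
        return (n + align - 1) & ~(align - 1)

    def concatenate_sections(sections):
        # Pad the output up to the next page boundary, then copy the
        # section contents, so every PE section starts page-aligned.
        out = bytearray()
        for data in sections:
            out += b"\0" * (align_up(len(out)) - len(out))
            out += data
        return bytes(out)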
Note that we could just copy the ELF load *segments* directly and
achieve the same result. Doing this manually allows us to strip sections
we don't need at runtime like the dynamic linking information (the
elf2efi conversion is effectively the dynamic loader).
Important sections like .sbat that we emit directly from code will
currently *not* be exposed as individual PE sections as they are
contained within the ELF segments. A future commit will fix this.
Adjust the tpm2_esys_handle_from_tpm_handle() function into the better-named
tpm2_index_to_handle(), which operates like tpm2_get_srk() but allows using any
handle index. Also add a matching tpm2_index_from_handle().
Also change the references to 'location' in tpm2_persist_handle() to the more
appropriate 'handle index'.
So the coverage-related drop-in [0] can kick in to avoid errors with
DynamicUser=true. Also, to keep the test from becoming confusing with this
change, replace "nft-test" with "test-nft" everywhere.
[0] See test/README.testsuite, section "Code coverage"
I have no idea what went on in my mind when I used a path in /var/ for
the tpm2 event log we now keep for userspace measurements. The
measurements are only valid for the current boot, hence should not be
persisted (in particular as they cannot be rotated, hence should not
grow without bounds).
Fix that by simply moving from /var/log/ to /run/log/.
Before this fix, when recursive-errors was set to 'no' during a systemd-analyze
verification, the parent slice was checked regardless. The 'no' setting means
that only the specified unit should be looked at and verified, and that errors
in the slices should be ignored. This commit fixes that issue.
Example:
Say we have a sample.service file:
[Unit]
Description=Sample Service
[Service]
ExecStart=/bin/echo "a"
Slice=support.slice
Before Change:
systemd-analyze verify --recursive-errors=no maanya/sample.service
Assertion 'u' failed at src/core/unit.c:153, function unit_has_name(). Aborting.
Aborted (core dumped)
After Change:
systemd-analyze verify --recursive-errors=no maanya/sample.service
{No errors}
This makes sure unit_watch_pid() and unit_unwatch_pid() will track
processes by pidfd if supported. Also ports over some related code.
Should not really change behaviour.
Note that this does *not* add support for waiting for POLLIN on the pidfds
as an additional exit notification. This is left for a later commit (this
commit is already large enough), in particular as that would add new
logic and not just convert existing logic.
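Conceptually, the tracking side looks like the sketch below (Python for
brevity, with a hypothetical helper name; the real code is C inside PID1):
prefer a pidfd where the kernel offers one, and fall back to the numeric PID
otherwise.

    import os

    def pin_process(pid):
        # Hypothetical helper: a pidfd cannot suffer from PID reuse, so
        # prefer it; fall back to the plain pid_t where unsupported.
        try:
            return ("pidfd", os.pidfd_open(pid))
        except (AttributeError, OSError):
            return ("pid", pid)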
This matters once we track processes with pidfds rather than just pid_t,
because made-up PIDs likely won't exist.
The essence of the test remains unmodified, we just use a real, existing
PID instead of 4711.
This new helper can be used after reading process info from procfs, to
verify that the data that was just read actually matches the pidfd, and
does not belong to some new process that just reused the numeric PID of
the process we originally pinned.
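A sketch of the idea (Python for brevity, with hypothetical names; the key
point is that a pidfd keeps referring to the original process even if its
numeric PID is later recycled):

    import signal

    def read_comm_verified(pid, pidfd):
        # Hypothetical helper: read from procfs first ...
        try:
            with open(f"/proc/{pid}/comm") as f:
                comm = f.read().strip()
        except FileNotFoundError:
            return None
        # ... then confirm the pinned process is still around. Signal 0
        # performs only an existence check; if the original process is
        # gone, the comm we read may belong to a PID-recycling impostor.
        try:
            signal.pidfd_send_signal(pidfd, 0)
        except ProcessLookupError:
            return None
        return comm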