Since the files with directive lists are now generated automatically
during the build, they end up under the respective build directory,
which the current oss-fuzz CI script didn't account for.
Follow-up to: #24958
Resolves: #25859
We generate a "version string" that is reported by various tools. This patch
changes this version string to use the characters specified for the version
string in the Boot Loader Specification. We start using the special characters
we have in the spec for this exact purpose and thus fix version comparisons.
We also stop using '+', which is not part of the allowed charset; it is used
for boot attempt counting and should not be part of the version string.
The version string is (among other places) used in sd-boot and the comparison
result is used by 'bootctl update' to decide whether to install a new binary.
Before, because 'nn-rc1' compares higher than 'nn', we would refuse to upgrade
from a pre-release version to the final release.
The boot loader is the primary motivation. I'm not aware of programmatic version
comparisons in other places, but it makes sense to use the same version string
everywhere.
(This patch effectively only matters for non-distro builds, because distro
builds presumably use -Dversion-tag to set something meaningful. Ideally, those
version strings are compatible with our version strings, but this is outside of
our control.)
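To illustrate the ordering that '~' gives us (a minimal sketch in Python; not
the actual comparison code used by sd-boot, and the version numbers are made up):

# A '~' suffix marks a pre-release and sorts before the same version
# without it; numeric components compare numerically.
def version_key(v):
    base, tilde, pre = v.partition("~")
    return ([int(x) for x in base.split(".") if x.isdigit()],
            0 if tilde else 1, pre)

assert version_key("252~rc1") < version_key("252")   # pre-release is older
assert version_key("252") < version_key("252.1")     # point release is newer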
The lists of directives for fuzzer tests are maintained manually in the
repo. There is a tools/check-directives.sh script that runs during the test
phase and reports stale directive lists.
Let's rework the script into a generator so that these directive files
are created on the fly and needn't be updated whenever unit file
directives change. The script is rewritten in Python to get rid of the gawk
dependency, and each generated file is a separate meson target so that
incremental builds refresh only what is necessary (and parallelize, though
the gain is negligible).
Note: test/fuzz/fuzz-unit-file/directives-all.slice is kept since there
is no automated way to generate it (it is not covered by the check
script either).
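A rough, hypothetical sketch of the generator idea in Python (the gperf-style
input format, the regex, and the output format are all assumptions, not the
actual script):

import re
import sys

# Hypothetical: pull "Section.Directive" names out of a gperf-style lookup
# table and print one "Directive=" entry per line for the fuzzer.
pattern = re.compile(r"^([A-Za-z0-9]+)\.([A-Za-z0-9]+)\s*,")
with open(sys.argv[1]) as f:
    directives = sorted({m.group(2) + "=" for line in f
                         if (m := pattern.match(line))})
print("\n".join(directives))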
We're already using C.UTF-8 as the default locale for nspawn. Let's
make the same change for the default-locale option instead of deciding
what to use based on the locale used by the host system. Users can
still override the locale using the default-locale option if needed.
No need to involve a trivial shell script for this.
We could call the compiler directly, but test() expects arguments
to be passed separately and cc.cmd_array() can contain arguments
itself. Using env is easier than manually slicing the array because
meson has no builtins for that.
Bash will generate a very nice message for us:
/tmp/ff.sh: line 1: SOMEVAR: parameter null or not set
Let's save some keystrokes by not replacing this with our own inferior
messages.
GIT_VERSION is not available as a config.h variable, because it's rendered
into version.h during builds. Let's rework jinja2 rendering to also
parse version.h. No functional change, the new variable is so far unused.
I guess this will make partial rebuilds a bit slower, but it's useful
to be able to use the full version string.
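A minimal sketch of the kind of header parsing meant here (the regex and the
function name are illustrative assumptions, not the actual render script):

import re

# Collect '#define NAME "value"' / '#define NAME value' pairs from a header
# (e.g. version.h) so they can be added to the jinja2 substitution context.
def parse_defines(path):
    defines = {}
    with open(path) as f:
        for line in f:
            m = re.match(r'#define\s+(\w+)\s+"?([^"\n]*)"?', line)
            if m:
                defines[m.group(1)] = m.group(2)
    return defines

# parse_defines("version.h").get("GIT_VERSION")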
This is very similar to (and directly based on) the test for --help. I think
it's nice to do this: the test is very quick, but it'll catch cases where we
forgot to hook up the option, or forgot to exit after printing --version, and
it'll also increase our test coverage a bit.
It seems that --invert-grep used to affect --author, but now it doesn't (with
git-2.35.1-1.fc36.x86_64), so effectively we would only show the one entry that
was supposed to be filtered out.
When we do mkdir, we should just use 0o777 and let the umask take care of the
rest. Specifying an explicit mode is inappropriate. And when touching the code,
let's replace black madness with normal python style.
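For illustration (a sketch; the directory path is a placeholder), with a umask
of 022 this ends up as 0o755:

import os

# 0o777 is the default; the process umask (e.g. 022) trims it to 0o755,
# as opposed to hard-coding a restrictive mode that ignores the umask.
os.makedirs("some/output/dir", mode=0o777, exist_ok=True)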
It's like CIFuzz, but unlike CIFuzz it's compatible with forks, and it
should make it possible to run the fuzzers to make sure that patches
backported to those forks are backported correctly, without introducing
new bugs and regressions.
It was copy-pasted directly from OSS-Fuzz, where it makes sense to more or
less strip binaries to get nice backtraces, but when the fuzzers are built
and run locally with gdb it would be nice to have a little bit more than that.
It was initially discovered in elfutils where I put the same flags
and was surprised when I couldn't run the fuzzer comfortably step
by step, which led to the same change there: https://github.com/google/oss-fuzz/pull/7092
:-)
The scheme is very similar to libsystemd-shared.so: instead of building a
static library, we build a shared library from the same objects and link the
two users to it. Both systemd and systemd-analyze consist mostly of the fairly
big code in libcore, so we save a bit on the installation:
(-Og, no strip)
-rwxr-xr-x 5238864 Dec 14 12:52 /var/tmp/inst1/usr/lib/systemd/systemd
-rwxr-xr-x 5399600 Dec 14 12:52 /var/tmp/inst1/usr/bin/systemd-analyze
-rwxr-xr-x 244912 Dec 14 13:17 /var/tmp/inst2/usr/lib/systemd/systemd
-rwxr-xr-x 461224 Dec 14 13:17 /var/tmp/inst2/usr/bin/systemd-analyze
-rwxr-xr-x 5271568 Dec 14 13:17 /var/tmp/inst2/usr/lib/systemd/libsystemd-core-250.so
(-Og, strip)
-rwxr-xr-x 2522080 Dec 14 13:19 /var/tmp/inst1/usr/lib/systemd/systemd
-rwxr-xr-x 2604160 Dec 14 13:19 /var/tmp/inst1/usr/bin/systemd-analyze
-rwxr-xr-x 113304 Dec 14 13:19 /var/tmp/inst2/usr/lib/systemd/systemd
-rwxr-xr-x 207656 Dec 14 13:19 /var/tmp/inst2/usr/bin/systemd-analyze
-rwxr-xr-x 2648520 Dec 14 13:19 /var/tmp/inst2/usr/lib/systemd/libsystemd-core-250.so
So for systemd itself we grow a bit (2522080 → 2648520+113304=2761824), but
overall we save. Most is saved on all the test files that link to libcore,
if they are installed, because there are 15 of them:
$ du -s /var/tmp/inst?
220096 /var/tmp/inst1
122960 /var/tmp/inst2
I also considered making systemd-analyze a symlink to /usr/lib/systemd/systemd
and turning systemd into a multicall binary. We did something like this with
udevd and udevadm. But that solution doesn't fit well in this case.
systemd-analyze has a bunch of functionality that is not used in systemd,
so the systemd binary would need to grow quite a bit. And we're likely to
add new types of verification or introspection features in analyze, and this
baggage would only grow. In addition, there are the test binaries which also
benefit from this.
00db9a114e ("docs: generate table from header using a script") got the
descriptions for the partition types mixed up. After that change, the
spec claimed, for example, that the /usr partition should contain
"dm-verity integrity hash data for the matching root partition", and
that the /usr verity partition should be of type "Any native, optionally
in LUKS". This made the spec an extremely confusing read before I
figured out what must have happened!
I've gone through the table as it existed prior to 00db9a114e, and moved
the descriptions around in the script that generates the table until
they matched up with what they used to be. Then I regenerated the
table from the fixed script.
This adds a helper script:
$ python3 tools/list-discoverable-partitions.py <src/shared/gpt.h
<!-- generated with tools/list-discoverable-partitions.py -->
| Name | Partition Type UUID | Allowed File Systems | Explanation |
|------|---------------------|----------------------|-------------|
| _Root Partition (Alpha)_ | `6523f8ae-3eb1-4e2a-a05a-18b695ae656f` | [Root Partition] | [Root Partition more] |
| _Root Partition (ARC)_ | `d27f46ed-2919-4cb8-bd25-9531f3c16534` | ditto | ditto |
...
The output can be pasted into the markdown file. I think this works better than
trying to match the two lists by hand.
When using "capture : true" in custom_target()s the mode of the source
file is not preserved when the generated file is not installed and so
needs to be tweaked manually. Switch from output capture to creating the
target file and copy the permissions from the input file.
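A sketch of what the generator script can do instead of relying on capture
(hypothetical, not the exact script changed here):

import shutil
import sys

src, dst = sys.argv[1], sys.argv[2]
with open(src) as fin, open(dst, "w") as fout:
    fout.write(fin.read())   # stand-in for the real text generation step
shutil.copymode(src, dst)    # carry over the input file's permissions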
Signed-off-by: Christian Brauner <christian.brauner@ubuntu.com>
Imports are sorted in the usual fashion: stdlib first.
literal_eval() parses strings/numbers/lists/sets/dicts and nothing else, while
eval() will execute arbitrary Python code. Using literal_eval() is generally more
correct, because it avoids the risk of side effects from the parsed expression.
In this case, we generate the parsed strings ourselves, so it's very unlikely
to have anything unexpected in the expressions. But let's do the correct thing
anyway.
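For example (the expressions are made up):

from ast import literal_eval

literal_eval("{'a': 1, 'b': [2, 3]}")            # -> {'a': 1, 'b': [2, 3]}
literal_eval("__import__('os').system('true')")  # raises ValueError
# eval() would happily execute the second expression as code.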