Right now it mostly duplicates a test that already exists in
TEST-50-DISSECT.mountfsd.sh, but it serves as a template for more unprivileged
nspawn tests.
The .cred suffix is stripped from a credential as it is imported from
the ESP, hence it should not be included in the credential name embedded
in the credential.
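For instance (a sketch; the name and file names are illustrative), a credential stored
on the ESP as `mycred.cred` is imported under the name `mycred`, so the embedded
name must omit the suffix:
```
$ systemd-creds encrypt --name=mycred secret.txt mycred.cred
```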
Fixes: #33497
So far you had to pick:
1. Use a signed PCR TPM2 policy to lock your disk (i.e. the UKI vendor
blesses your setup via signature)
or
2. Use a pcrlock policy (i.e. the local system blesses your setup via a
dynamic local policy stored in an NV index)
It was not possible to combine these two, because TPM2 access policies do
not allow combining PolicyAuthorize (used to implement #1 above) and
PolicyAuthorizeNV (used to implement #2) in a single policy, unless one
is "further upstream" (and can simply remove the other from the policy
freely).
This is quite limiting of course, since we actually do want to enforce
on each TPM object that both the OS vendor policy and the local policy
must be fulfilled, without the chance for the vendor or the local system
to disable the other.
This patch addresses this: instead of trying to find some adventurous
scheme to combine both policies into one TPM2 policy, we simply shard the
symmetric LUKS decryption key: one half we protect via the signed PCR
policy, and the other we protect via the pcrlock policy. Only if both
halves can be acquired can the disk be decrypted.
This means:
1. We simply double the unlock key in length in case both policies shall
be used.
2. We store the two resulting TPM policy hashes in the LUKS token JSON,
one for each policy.
3. We store two sealed TPM policy key blobs in the LUKS token JSON, one
for each half of the LUKS unlock key.
This patch keeps the "sharding" logic relatively generic (i.e. the
low-level logic is actually fine with more than 2 shards), because I
figure sooner or later we might have to encode more shards, for example
if we add further TPM2-based access policies, such as combining FIDO2
with TPM2, or implementing TOTP for this.
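With this in place, both policies can be enrolled on the same volume, along
these lines (a sketch; the device path and key/policy file locations are
illustrative):
```
$ systemd-cryptenroll /dev/sda2 \
      --tpm2-device=auto \
      --tpm2-public-key=tpm2-pcr-public-key.pem \
      --tpm2-pcrlock=/var/lib/systemd/pcrlock.json
```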
Now that mkfs.btrfs is adding support for compressing the generated
filesystem (https://github.com/kdave/btrfs-progs/pull/882), let's
add general support for specifying the compression algorithm and
compression level to use.
We opt to not parse the specified compression algorithm and instead
pass it on as is to the mkfs tool. This has a few benefits:
- We support every compression algorithm supported by every tool
automatically.
- Users don't need to modify systemd-repart if a mkfs tool learns a
  new compression algorithm in the future.
- We don't need to maintain a bunch of per-filesystem tables to map
  from our generic compression algorithm enum to the filesystem-specific
  names.
We don't add support for btrfs just yet until the corresponding PR
in btrfs-progs is merged.
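As a sketch of how this is meant to be used from repart.d (assuming the new
settings are exposed as Compression= and CompressionLevel=; the algorithm
name shown is just an example, passed through verbatim to the mkfs tool):
```
$ cat /etc/repart.d/50-root.conf
[Partition]
Type=root
Format=erofs
Compression=lz4hc
CompressionLevel=12
```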
The original regex didn't cover the `run-unit-tests.py` script that made
the old framework pull Python into the test image, which in turn allowed
the new TEST-69-SHUTDOWN Python script to get executed in the old
framework's image, causing unexpected failures with the latest Python on
Rawhide.
Force means force: we skip checks with PID1 for existing units, but then
bail out with EEXIST if the files are actually there. Overwrite
everything instead.
Currently, if for example a traffic control object already exists, networkd
will silently do nothing, even if the settings in the network file for the
traffic control object have changed. Let's instead replace the object if it
already exists, so that new settings from the network file are applied as
expected.
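For example (a sketch; the interface name and values are illustrative),
bumping a netem delay in the .network file and reloading should now update
the existing qdisc instead of being silently ignored:
```
$ cat /etc/systemd/network/10-test.network
[Match]
Name=eth0

[NetworkEmulator]
DelaySec=100ms
$ networkctl reload
```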
Fixes #31226
These operations might require slow I/O, and thus might block PID1's main
loop for an indeterminate amount of time. Instead of performing them
inline, fork a worker process and stash away the D-Bus message, and reply
once we get a SIGCHLD indicating they have completed. That way we don't
break compatibility, and callers can continue to rely on the fact that when
they get the method reply the operation has either succeeded or failed.
To keep backward compatibility, unlike reload control processes, these
are run inside init.scope and not the target cgroup. Unlike ExecReload,
this is under our control and is not defined by the unit. This is necessary
because previously the operation also wasn't run from the target cgroup,
so suddenly forking a copy-on-write copy of pid1 into the target cgroup
would make memory usage spike, and if MemoryMax= or MemoryHigh= is set
and the cgroup is already close to the limit, that would cause an OOM
kill where previously it would have worked fine.
One of the major pain points of managing fleets of headless nodes is
that when something fails at startup, unless debug level was already
enabled (which it usually isn't, as it's a firehose), one needs to
manually enable it and pray the issue can be reproduced, which is often
really hard and time-consuming, just to get extra info. Usually the extra
log messages are enough to triage an issue.
This new option makes it so that when a service fails and is restarted
due to Restart=, log level for that unit is set to debug, so that all
setup code in pid1 and sd-executor logs at debug level, and also a new
DEBUG_INVOCATION=1 env var is passed to the service itself, so that it
knows it should start with a higher log level. Once the unit succeeds
or reaches the rate limit, the original level is restored.
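A minimal sketch of enabling this on a unit, assuming the option is exposed
as the RestartMode=debug setting (the service itself can then check for
DEBUG_INVOCATION=1 in its environment and raise its own log level):
```
$ cat /etc/systemd/system/myservice.service.d/10-debug-on-restart.conf
[Service]
Restart=on-failure
RestartMode=debug
```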
I don't actually need this anymore, since we're going with a unit-based
approach for the containers stuff internally, so let's just revert it.
Fixes #34085
This reverts commit ce2291730d.
We usually configure a test rule with a unique priority. Hence, finding
rules by priority reduces the lines of output, and we can debug easily.
Also, print short comments on each check; that's helpful when the check is
called several times.
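For example (a sketch; the priority value is illustrative, and this assumes
an iproute2 version whose 'ip rule list' accepts a priority selector):
```
$ ip rule list priority 10111
```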
The flag indicates that the interface name in 'iif' or 'oif' cannot be
resolved when the 'ip rule' command is invoked. That's natural when
networkd fails to remove a rule but the corresponding interface has already
been removed. To keep the residual rules from interfering with subsequent
test cases, let's ignore the flag and actually remove the unwanted rules.
Note, `systemd-analyze foo@.service --instance=hoge` is equivalent to
`systemd-analyze foo@hoge.service`. But the option may be useful when
e.g. passing multiple template units that have restrictions on their
instance names:
```
$ ls
template_aaa@.service template_bbb@.service template_ccc@.service
$ systemd-analyze ./template_* --instance=hoge
```
Without the option, we would need to embed an instance name into each unit
name, and hence could not use globs.
Prompted by #33681.