Since the general generator logic was established in the rewrite in
07719a21b6, generators would always write to /tmp
by default. I think this is not a good default at all, because generators write a
bunch of files and would create a mess in /tmp. And for debugging, one
generally needs to remove all the files in the output directory, because
generators will complain if the output paths are already present. Thus the
approach of disabling console logging and writing many files to /tmp when
invoked with no arguments is not nice, so let's disallow operation with no
args.
But when debugging, one generally does not care about the separate output dirs
(most generators use only one). Thus the general pattern I use is something
like:
rm -rf /tmp/x && mkdir /tmp/x && build/some-generator /tmp/{x,x,x}
This commit allows only one directory to be specified and simplifies this to:
rm -rf /tmp/x && mkdir /tmp/x && build/some-generator /tmp/x
This makes it possible to use "f" or "w" to write arbitrary binary files to
disk, or files with newlines and the like (for example to provision SSH
host keys).
This also imports credentials via SMBIOS' "OEM vendor string" section,
similar to the existing import logic from fw_cfg.
Functionality-wise this is very similar to the existing fw_cfg logic,
both of which are easily settable on the qemu command line (see the
sketch after the list below).
Pros and cons of each:
SMBIOS OEM vendor strings:
- pro: fast, because memory mapped
- pro: somewhat VMM independent, at least in theory
- pro: qemu upstream sees this as the future
- pro: no additional kernel module needed
- con: strings only, thus binary data is base64 encoded
fw_cfg:
- pro: has been supported for longer in qemu
- pro: supports binary data
- con: slow, because IO port based
- con: only qemu
- con: requires qemu_fw_cfg.ko kernel module
- con: qemu upstream sees this as legacy
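For illustration, a minimal sketch of how both channels can be fed on the qemu
command line. The `io.systemd.credential:` string prefix and the
`opt/io.systemd.credentials/` fw_cfg path are assumptions based on systemd's
documented credential import conventions, not taken from this commit:
```
# SMBIOS type 11 ("OEM vendor strings"), assumed io.systemd.credential: prefix
qemu-system-x86_64 ... \
  -smbios type=11,value=io.systemd.credential:mycred=supersecret

# fw_cfg, assumed opt/io.systemd.credentials/ path (needs qemu_fw_cfg.ko in the guest)
qemu-system-x86_64 ... \
  -fw_cfg name=opt/io.systemd.credentials/mycred,string=supersecret
```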
DefaultSmackProcessLabel tells systemd what label to assign to its child
processes in case SmackProcessLabel is not set in the service file. By
default, when DefaultSmackProcessLabel is not set, child processes inherit
the label from systemd.
If DefaultSmackProcessLabel is set to "/" (which is an invalid character
for a SMACK label), the DEFAULT_SMACK_PROCESS_LABEL set during compilation
is ignored and systemd acts as if the option were unset.
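As a hedged illustration (the label value is made up for the example; the
[Manager] section of /etc/systemd/system.conf is assumed as the place where
this directive lives, alongside the other Default*= options):
```
# /etc/systemd/system.conf
[Manager]
# Assign this SMACK label to child processes that don't set SmackProcessLabel=
DefaultSmackProcessLabel=System::Services

# Or ignore the compile-time DEFAULT_SMACK_PROCESS_LABEL and behave as if unset:
#DefaultSmackProcessLabel=/
```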
In the welcome line, use NAME= as the fallback for PRETTY_NAME=.
PRETTY_NAME= doesn't have to be set, but NAME= should.
Example output:
---
Welcome to Fedora Linux 37 (Rawhide Prerelease)!
[ !! ] This OS version (Fedora Linux 37 (Rawhide Prerelease)) is past its end-of-support date (1999-01-01)
Queued start job for default target graphical.target.
[ OK ] Created slice system-getty.slice.
---
We had a description in README, and an outdated list in the man page.
I think we should keep a reference-style list in the man page. The description
in README is more free-form.
I thought it would be nice to specify the last day of support, because I
thought it'd seem more natural. But in practice this doesn't work well, because
such a truncated timestamp is usually taken to mean midnight that starts the
given date. I.e. 2011-12-13 is a shorthand for 2011-12-13 00:00:00 and not
2011-12-13 23:59:59.999999999999. Let's instead specify that the given date is
the first unsupported day, which is meaningful for humans, and let the computer
treat it as midnight, which gives a consistent interpretation.
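To make the semantics concrete, a small sketch of an os-release fragment (the
date is just an example):
```
# /etc/os-release (fragment)
SUPPORT_END=2011-12-13
# The OS is considered supported through 2011-12-12 and unsupported starting
# at 2011-12-13 00:00:00, i.e. the given date is the first unsupported day.
```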
When using --root=/--image= the binaries to install/update will be
picked from the directory/image. Add an option to let the caller
choose.
By default (auto) the image is tried first, and if nothing is found
then the host. The other options allow strictly trying only the image
or only the host, ignoring the other.
Fixes #21764.
I think this is very simple, but flexible. The date may be set early, for distros
that have a fixed schedule, but it doesn't have to be. So for example Debian could
push out an update that sets the date a few months before the release goes EOL. And
various tools, in particular graphical desktops, can start nagging people to
upgrade a few weeks before the date.
As discussed in the bug, we don't need granularity higher than a day. And this
means that we can use a simple human- and machine-readable format.
I was considering other names, e.g. something with "EOL", but I think that
"SUPPORT_END" is better because it doesn't imply that the machine will somehow
stop working. This is supposed to be an advisory, nothing more.
It's pretty hard to write tests without this. I started out by adding separate
variables for each of the files we read, but there's a bunch, and in practice
it's good enough to just override the directory.
Variables read by kernel-install and those exported by it were described
without any clear separation. So in particular it was pretty hard to answer
a question like "what variables can be set in install.conf". The in- and
out-variables are now split into two separate subsections.
Same story as before: disabling a non-existent event source shouldn't
need to be guarded by an if. I retained the wrapper so that we don't
have to say SD_EVENT_OFF in the many places where this is called.
This is a natural use case, and instead of defining a wrapper to do this
for us, let's just make this part of the API. Calling with NULL was not
allowed, so this is not a breaking change to the interface.
(After sd_event_source_is_enabled was originally added, we introduced
sd_event_source_disable_unref() and other similar functions which accept
NULL. So not accepting NULL here is likely to confuse people. Let's just
make the API usable with minimal fuss.)
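A minimal sketch of the resulting calling pattern (assuming the NULL-tolerant
behaviour described above; error handling elided):
```
#include <systemd/sd-event.h>

static void stop_source(sd_event_source *s) {
        /* Previously this call had to be guarded with "if (s)";
         * now a NULL event source is simply accepted. */
        (void) sd_event_source_set_enabled(s, SD_EVENT_OFF);
}
```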
In places the text was overly formal, e.g. "an 128-bit ID" was repeated, even
though it is clear from the context that we're talking about this type of ID.
OTOH, in other places the text was informal, e.g. "You can use …".
Also, "you may use f() to frob" → "f() frobs". The text without all the
flourishes is easier to read.
sd_id128_in_set_sentinel() was described only in passing when talking about
sd_id128_in_set(); now it gets its own brief paragraph.
The synopsis was missing.
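For illustration, a hedged sketch of how the sentinel variant is called; the
two IDs are invented for this example, and the argument list is terminated
with the SD_ID128_NULL sentinel:
```
#include <stdbool.h>
#include <systemd/sd-id128.h>

/* Example IDs, made up for this sketch. */
#define ID_A SD_ID128_MAKE(0a,1b,2c,3d,4e,5f,60,71,82,93,a4,b5,c6,d7,e8,f9)
#define ID_B SD_ID128_MAKE(10,21,32,43,54,65,76,87,98,a9,ba,cb,dc,ed,fe,0f)

static bool id_is_known(sd_id128_t id) {
        /* The list of candidate IDs ends with the SD_ID128_NULL sentinel. */
        return sd_id128_in_set_sentinel(id, ID_A, ID_B, SD_ID128_NULL);
}
```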
Currently, the only way to set the display name of a graphical session is to
pass it to CreateSession(). But modern display managers like gdm start
the display server as part of the user session, which means that the
display name isn't known yet when the session is being created. Hence,
let's make it possible to set it later.
This reverts PR #23269 and its follow-up commit. Specifically,
2299b1cae3 (partially), and
3cf63830ac.
The PR was merged without final approval, and has several issues:
- The NetLabel for static addresses is not assigned, as labels are
stored in the Address objects managed by Network, instead of Link.
- If NetLabel is specified for a static address, then the address
section will be invalid and the address will not be configured.
- It should be implemented with a Request object.
- There is no test for the feature.
This reverts PR #22587 and its follow-up commit. More specifically,
2299b1cae3 (partially),
e176f85527,
ceb46a31a0, and
51bb9076ab.
The PR was merged without final approval, and has several issues:
- OSS-Fuzz reported issues in the conf parser.
- It makes a synchronous netlink call, which should be avoided, especially in PID1.
- The importance of NFTSet for CGroup and DynamicUser may be
questionable; at least, there was no justification why PID1 should support
it.
- For networkd, it should be implemented with a Request object.
- There is no test for the feature.
Fixes #23711.
Fixes #23717.
Fixes #23719.
Fixes #23720.
Fixes #23721.
Fixes #23759.
A future commit will add support for the Unicode collation protocol, which
allows case folding and comparing strings with locale awareness. But it
only operates on whole strings, so fnmatch cannot use it without a
heavy cost. Instead we just case fold the patterns (the IDs we
try to match are already lower case).
New directive `DynamicUserNFTSet=` provides a method for integrating
configuration of dynamic users into firewall rules with NFT sets.
Example:
```
table inet filter {
  set u {
    typeof meta skuid
  }
  chain service_output {
    meta skuid != @u drop
    accept
  }
}
```
```
/etc/systemd/system/dunft.service
[Service]
DynamicUser=yes
DynamicUserNFTSet=inet:filter:u
ExecStart=/bin/sleep 1000
[Install]
WantedBy=multi-user.target
```
```
$ sudo nft list set inet filter u
table inet filter {
  set u {
    typeof meta skuid
    elements = { 64864 }
  }
}
$ ps -n --format user,group,pid,command -p `pgrep sleep`
USER GROUP PID COMMAND
64864 64864 55158 /bin/sleep 1000
```
New directives `NFTSet=`, `IPv4NFTSet=` and `IPv6NFTSet=` provide a method for
integrating configuration of dynamic networks into firewall rules with NFT
sets.
/etc/systemd/network/eth.network
```
[DHCPv4]
...
NFTSet=netdev:filter:eth_ipv4_address
```
```
table netdev filter {
  set eth_ipv4_address {
    type ipv4_addr
    flags interval
  }
  chain eth_ingress {
    type filter hook ingress device "eth0" priority filter; policy drop;
    ip saddr != @eth_ipv4_address drop
    accept
  }
}
```
```
sudo nft list set netdev filter eth_ipv4_address
table netdev filter {
  set eth_ipv4_address {
    type ipv4_addr
    flags interval
    elements = { 10.0.0.0/24 }
  }
}
```
New directive `NetLabel=` provides a method for integrating dynamic network
configuration into Linux NetLabel subsystem rules, used by Linux security
modules (LSMs) for network access control. The option expects a whitespace
separated list of NetLabel labels. The labels must conform to lexical
restrictions of LSM labels. When an interface is configured with IP addresses,
the addresses and subnetwork masks will be appended to the NetLabel Fallback
Peer Labeling rules. They will be removed when the interface is
deconfigured. Failures to manage the labels will be ignored.
Example:
```
[DHCP]
NetLabel=system_u:object_r:localnet_peer_t:s0
```
With the above rules for interface `eth0`, when the interface is configured with
an IPv4 address of 10.0.0.0/8, `systemd-networkd` performs the equivalent of
`netlabelctl` operation
```
$ sudo netlabelctl unlbl add interface eth0 address:10.0.0.0/8 label:system_u:object_r:localnet_peer_t:s0
```
Result:
```
$ sudo netlabelctl -p unlbl list
...
interface: eth0
address: 10.0.0.0/8
label: "system_u:object_r:localnet_peer_t:s0"
...
```
/etc/os-release existence is only enforced in --boot mode,
therefore the term "starting" (which also applies to chroot-like mode)
is substituted with "booting" in this context.
The recommendation to use machinectl login/shell instead of
trying to combine two distinct container instances seemed a
little bit out of context and is now combined via "rather".
The existing sd_hwdb_new function always initializes the hwdb from the
first successful hwdb.bin it finds from hwdb_bin_paths. This means there
is currently no way to initialize a hwdb from an explicit path, which
would be useful for systemd-hwdb query.
Add sd_hwdb_new_from_path to allow an sd_hwdb to be initialized from a
custom path outside of hwdb_bin_paths.
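A hedged sketch of the intended usage; the prototype is assumed here to mirror
sd_hwdb_new(), i.e. int sd_hwdb_new_from_path(const char *path, sd_hwdb **ret):
```
#include <stdio.h>
#include <string.h>
#include <systemd/sd-hwdb.h>

static int query_custom_hwdb(const char *path) {
        sd_hwdb *hwdb = NULL;
        int r;

        /* Open a hwdb.bin from an explicit path, bypassing hwdb_bin_paths. */
        r = sd_hwdb_new_from_path(path, &hwdb);
        if (r < 0) {
                fprintf(stderr, "Failed to open %s: %s\n", path, strerror(-r));
                return r;
        }

        /* … look up properties with sd_hwdb_get() or SD_HWDB_FOREACH_PROPERTY() … */

        sd_hwdb_unref(hwdb);
        return 0;
}
```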
All wiki pages that contain a deprecation banner
pointing to systemd.io or manpages are updated to
point to their replacements directly.
Helpful command for identification of available links:
git grep freedesktop.org/wiki | \
sed "s#.*\(https://www.freedesktop.org/wiki[^ $<'\\\")]*\)\(.*\)#\\1#" | \
sort | uniq
* Avoid trailing slash as most links are defined without it.
* Always use the https:// protocol and www. subdomain.
Allows for easier tree-wide link validation
for our migration to systemd.io.
The interface, output, and exit status convention are all taken directly from
rpmdev-vercmp and dpkg --compare-versions. The implementation is different
though. See test-string-util for a list of known cases where we compare
strings incompatibly.
The idea is that this string comparison function will be declared as "the"
method to use for boot entry ordering in the specification and similar
uses. Thus it's nice to allow users to compare strings.
The methods published by the example have a reply in the signature, but
the code was not sending any, so the client gets stuck waiting for a
response that doesn't arrive. Echo back the input string.
Update the object path to follow what would be the canonical format.
Request a service name on the bus, so that the code can be dropped in a
service and it can be dbus-activatable. It also makes it easier to see
on busctl list.
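A hedged sketch of what "echo back the input string" looks like in an sd-bus
method handler; the handler name here is illustrative, not the exact one from
the shipped example:
```
#include <systemd/sd-bus.h>

static int method_echo(sd_bus_message *m, void *userdata, sd_bus_error *ret_error) {
        const char *s;
        int r;

        /* Read the single string argument from the call. */
        r = sd_bus_message_read(m, "s", &s);
        if (r < 0)
                return r;

        /* Reply with the same string, so the caller does not block
         * waiting for a response that never arrives. */
        return sd_bus_reply_method_return(m, "s", s);
}
```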
In the documentation, using the term "managed" for both the RA flag and
the DHCPv6 mode is confusing because the mode is referred to as
"solicit" both in the official DHCPv6 documentation (see RFC 8415) and
in the WithoutRA option.
Furthermore, calling the other RA flag "other information" or "other
address configuration" is confusing because its official name is simply
"other configuration" (see RFC 4861 and RFC 5175) and it isn't used to
assign IP addresses.
Rewrite the documentation for DHCPv6Client and WithoutRA to make it
clear that getting the "managed" RA flag triggers the same kind of DHCP
request as WithoutRA=solicit, whereas getting the "other configuration"
RA flag triggers the same kind of DHCP request as
WithoutRA=information-request.
We have vendor presets, and local admin presets, and runtime presets
(under /usr/lib, /usr/local/lib and /etc, /run, respectively). When we
display preset state, it can be configured in any of those places, so
we shouldn't say anything about the origin.
(Another nice advantage is that it improves alignment:
[root@f36 ~]# systemctl list-unit-files multipathd.service
UNIT FILE STATE VENDOR PRESET
multipathd.service enabled enabled
^ this looks like we have a "PRESET" column that is empty.)
It doesn't really care about the hash value passed (which is processed
by systemd-veritysetup-generator), but it does care about the fact that
it is set (and mounts the DM nodes /dev/mapper/usr + /dev/mapper/root in
that case).
I don't know why this didn't occur to me earlier, but of course, it
*has* to be this data.
(This replaces some German prose about Berlin, which I guess only very
few people will get. With the new blob I think we have a much broader
chance of delivering smiles.)
Let's merge the footnote with the overall explanation of where systemd
parses its options from and reword the section a bit to hopefully make
things a bit more clear.
The gist of the description is moved from systemd.resource-control
to systemd-oomd man page. Cross-references to OOMPolicy, memory.oom.group,
oomctl, ManagedOOMSwap and ManagedOOMMemoryPressure are added in all
places.
The descriptions are also more down-to-earth: instead of talking
about "taking action" let's just say "kill". We *might* add configuration
for different actions in the future, but we're not there yet, so let's
just describe what we do now.
We use authenticated encryption, and that deserves mention. This is in
particular relevant as the fact that they are authenticated makes the
credentials useful as initrd parameterization items.
This is supposed to be useful when generating credentials for immutable
initrd environments, where it is relevant to support credentials even
on systems lacking a TPM2 chip.
With this, if `systemd-creds encrypt --with-key=auto-initrd` is used, a
credential will be encrypted/signed with the TPM2 if it is available and
recognized by the firmware. Otherwise it will be encrypted/signed with
the fixed empty key, thus providing no confidentiality or authenticity.
The idea is that distributions use this mode to generically create
credentials that are as locked down as possible on the specific
platform.
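For example (a hedged sketch; the credential name and file paths are made up):
```
# Encrypt/sign with the TPM2 if present and recognized by the firmware,
# otherwise fall back to the fixed empty key:
systemd-creds encrypt --with-key=auto-initrd --name=mycred plaintext.txt mycred.cred
```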
We got documentation for sd-device for the first time with
b51f4eaf7b, so let's celebrate by adding a
landing page that also explains the relationship with libudev.
%R is already used in service manager specifier expansion (cgroup root),
hence use a different char, one that was so far not used.
Follow-up for: 6ceb0a4094
Previously, systemd-analyze verify would return 0 even if warnings
were raised during analysis of the specified units or their
dependencies. With 3cc3dc7, verify was changed to return 1 when
warnings were raised.
This commit changes the default mode to _RECURSIVE_ERRORS_INVALID
so that verify returns zero again by default when warnings are
raised.
If we have two or more devices that share the same slot but are
also multifunction, then it is OK to use the slot information even if it
is the same for all of them. Name conflicts will be avoided because we
will append the function number and form names like ens1f1, ens1f2...