It is really unclear if we want to / have the resources to support this fully, so drop it
for now. It can easily be brought back if a killer use case emerges.
Note that this code was never hooked up, so this does not remove any features.
memory_erase() so far just called memset(), which the compiler might
optimize away under certain conditions if it sees a benefit in doing so.
C11 defines a new memset_s() call that is like memset(), but may not
be optimized away. Ideally, we'd just use that call, but glibc currently
does not support it. Hence, implement our own simplistic version of it.
We use a GCC pragma to turn off optimization for this call, and also use
the "volatile" keyword on the pointers to ensure that gcc will use the
pointers as-is. According to a variety of internet sources, either one
does the trick. However, there are also reports that at least the
volatile thing isn't fully correct, hence let's add some snake oil and
employ both techniques.
https://news.ycombinator.com/item?id=4711346
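For reference, a minimal sketch of the approach (names and structure are
illustrative, not necessarily the exact code added): write through a
volatile pointer and additionally compile the function with optimization
turned off.

#include <stddef.h>
#include <stdint.h>

/* Sketch only: zero a buffer in a way the compiler may not elide. */
#pragma GCC push_options
#pragma GCC optimize("O0")
static void *memory_erase(void *p, size_t l) {
        volatile uint8_t *x = (volatile uint8_t*) p;
        size_t i;

        /* The volatile stores must happen as written; the O0 pragma is
         * the additional snake oil mentioned above. */
        for (i = 0; i < l; i++)
                x[i] = 0;

        return p;
}
#pragma GCC pop_options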
Move the tests for the functions defined in src/basic/parse-util.c out of
test-util.c. Reorder them to match the order in which the functions are
defined in the source file, and adjust the list of include files to remove
the ones no longer needed in test-util.c.
Tested that `make check` still passes as expected. Also checked that the
number of lines removed from test-util.c matches the expected count, as an
additional verification that no tests were dropped or duplicated in the
move.
So far we had two pretty much identical calls in user-util.[ch]:
lookup_uid() and uid_to_name(). Get rid of the former, in favour of the
latter, and while we are at it, rewrite it, to use getpwuid_r()
correctly, inside an allocation loop, as POSIX intended.
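Roughly, such an allocation loop looks like this (a simplified sketch
with illustrative error handling, not the exact committed code): retry
with a doubled buffer whenever getpwuid_r() reports ERANGE.

#include <errno.h>
#include <pwd.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

/* Returns a newly allocated user name for the given UID, or NULL. */
static char *uid_to_name(uid_t uid) {
        size_t bufsize = 4096;
        long l;

        /* Start with the buffer size the system suggests, if any. */
        l = sysconf(_SC_GETPW_R_SIZE_MAX);
        if (l > 0)
                bufsize = (size_t) l;

        for (;;) {
                struct passwd pwbuf, *pw = NULL;
                char *buf, *name = NULL;
                int r;

                buf = malloc(bufsize);
                if (!buf)
                        return NULL;

                r = getpwuid_r(uid, &pwbuf, buf, bufsize, &pw);
                if (r == 0 && pw)
                        name = strdup(pw->pw_name);
                free(buf);

                if (r == ERANGE) {
                        /* Buffer too small: double it and try again. */
                        bufsize *= 2;
                        continue;
                }

                return name; /* NULL if the UID is unknown or on OOM */
        }
}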
The actual code rename will follow. The reason for the change of name is to make it
simpler and more uniform with how we name other libraries (we don't include the
underlying protocol). The new name also matches the naming in the kernel (which
is particularly relevant here as we expect to let the kernel do some parts of
the protocol while we do others).
And remove machine-id-commit as a separate binary.
There's really no point in keeping this separate, as the sources are
pretty much identical and have pretty much identical interfaces. Let's unify
this in one binary.
Given that machine-id-commit was a private binary of systemd (shipped in
/usr/lib/) removing the tool is not an API break.
While we are at it, improve the documentation of the command substantially.
Tests are modified to check behaviour with and without relax.
New tests are added for hostname_cleanup().
Tests are moved to a new file (test-hostname-util) because there's
now a bunch of them.
The new parameter is not used anywhere except in tests, so there should
be no observable change.
This drops the libsystemd-terminal and systemd-consoled code for various
reasons:
* It's been sitting there unfinished for over a year now and won't get
finished any time soon.
* Since its initial creation, several parts need significant rework: The
input handling should be replaced with the now commonly used libinput,
the drm accessors should coordinate the handling of mode-object
hotplugging (including split connectors) with other DRM users, and the
internal library users should be converted to sd-device and friends.
* There is still significant kernel work required before sd-console is
really useful. This includes, but is not limited to, simpledrm and
drmlog.
* The authority daemon is needed before all this code can be used for
real. And this will definitely take a lot more time to get done, as
no one else is currently working on it but me.
* kdbus maintenance has taken up way more time than I thought and it has
much higher priority. I don't see myself spending much time on the
terminal code in the near future.
If anyone intends to hack on this, please feel free to contact me. I'll
gladly help you out with any issues. Once kdbus and authorityd are
finished (whenever that will be..) I'll definitely pick this up again. But
until then, let's reduce compile times and maintenance efforts on this code
and drop it for now.
This adds test-bus-proxy, which should be used to test correct behavior of
systemd-bus-proxyd. The first test added verifies that we actually
receive NameAcquired signals for ourselves on bus connect.
For a longer discussion see this:
http://lists.freedesktop.org/archives/systemd-devel/2015-April/030175.html
This introduces /run/systemd/fsck.progress as a simple
AF_UNIX/SOCK_STREAM socket. If it exists and is connectable we'll
connect fsck's -C switch with it. External programs that want progress
data should hence listen on this socket; they will get everything they
need from it. To get information about the connecting fsck client they
should use SO_PEERCRED.
Unless /run/systemd/fsck.progress is around and connectable, this change
falls back to the v219 behaviour where we'd forward fsck output to
/dev/console on our own.
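To illustrate the consumer side (a hypothetical external program, not
part of this commit; error handling omitted), binding the socket and
identifying the sender via SO_PEERCRED could look roughly like this:

#define _GNU_SOURCE 1
#include <stdio.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void) {
        union {
                struct sockaddr sa;
                struct sockaddr_un un;
        } sa = {
                .un.sun_family = AF_UNIX,
                .un.sun_path = "/run/systemd/fsck.progress",
        };
        struct ucred ucred;
        socklen_t l = sizeof(ucred);
        char buf[4096];
        ssize_t n;
        int server, client;

        server = socket(AF_UNIX, SOCK_STREAM | SOCK_CLOEXEC, 0);
        (void) unlink(sa.un.sun_path);
        bind(server, &sa.sa, sizeof(sa.un));
        listen(server, SOMAXCONN);

        client = accept4(server, NULL, NULL, SOCK_CLOEXEC);

        /* Identify the connecting fsck instance. */
        getsockopt(client, SOL_SOCKET, SO_PEERCRED, &ucred, &l);
        fprintf(stderr, "fsck connected, pid=%lu\n", (unsigned long) ucred.pid);

        /* Read the machine-readable progress lines fsck writes to the
         * file descriptor passed via -C, and do something useful here. */
        while ((n = read(client, buf, sizeof(buf) - 1)) > 0) {
                buf[n] = 0;
                fputs(buf, stdout);
        }

        return 0;
}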
Now that all functionality has been ported over to logind, the old
implementation can be removed. There goes one of the oldest parts of
the systemd code base.
<audit-1400> is replaced by AVC, etc.
A fallback mechanism is provided for unlisted event types.
Occasionally new types are added to the kernel, but not too often.
Add a simple "test" that just prints the mapping.
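The idea, sketched with a hand-written table and an assumed fallback
format (the real mapping is generated from the kernel headers, and the
exact fallback spelling here is a guess):

#include <linux/audit.h>
#include <stdio.h>

/* Map an audit message type to a name; fall back to a generic
 * "AUDITnnnn"-style name for types we do not know about. */
static const char *audit_type_name(int type, char buf[static 16]) {
        switch (type) {
        case AUDIT_SYSCALL:     /* 1300 */
                return "SYSCALL";
        case AUDIT_AVC:         /* 1400 */
                return "AVC";
        /* ...more entries... */
        default:
                snprintf(buf, 16, "AUDIT%04d", type);
                return buf;
        }
}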
Add a systemd-fsckd multiplexer which lets multiple systemd-fsck
instances connect to it and send progress reports. systemd-fsckd then
computes and writes to /dev/console the number of devices currently being
checked and the minimum fsck progress. This will be used for interactive
progress reporting and cancellation in plymouth.
systemd-fsckd stops on idle when no systemd-fsck is connected.
Make the necessary changes to systemd-fsck to connect to the systemd-fsckd
socket.
What used to be gummiboot, was renamed sd-boot when it was merged into
systemd. Let's try to be a bit more consistent with the rest of systemd
and rename it again as follows:
The EFI bootloader is now called 'systemd-bootx64.efi', and its sources are in
'src/boot/efi/'. The drop-in directory where bootctl will find EFI loaders
is now /usr/lib/systemd/boot/efi/.
The old "systemd-import" binary is now an internal tool. We still use it
as asynchronous backend for systemd-importd. Since the import tool might
require some IO and CPU resources (due to qcow2 explosion, and
decompression), and because we might want to run it with more minimal
priviliges we still keep it around as the worker binary to execute as
child process of importd.
machinectl now has verbs for pulling down images, cancelling them and
listing them.
With this change the import tool will now unpack qcow2 images into
normal raw disk images, suitable for use with nspawn.
This has the added benefit of allowing Ubuntu Cloud images to be imported
for use with nspawn.
Even though we use fallocate() it appears that file systems like btrfs
will trigger SIGBUS in certain low-disk-space situations. We should
handle that, hence catch the signal, add the affected page to a list of
invalidated pages, and replace it with an empty memory area. After each
write, check whether SIGBUS was triggered, and consider the write invalid
if it was.
This should make journald a lot more robust with file systems where
fallocate() is not reliable, for example all CoW file systems
(btrfs, ...), where changing written data can fail with disk full errors.
https://bugzilla.redhat.com/show_bug.cgi?id=1045810
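In rough outline (illustrative names; the real code also records which
journal file the page belongs to), the handler overlays the faulting page
with an anonymous zero page so the interrupted writer can continue, and
raises a flag that is checked after each write:

#define _GNU_SOURCE 1
#include <signal.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

static volatile sig_atomic_t sigbus_hit = 0;
static size_t page_size;

static void sigbus_handler(int sig, siginfo_t *si, void *ucontext) {
        void *page = (void*) ((uintptr_t) si->si_addr & ~((uintptr_t) page_size - 1));

        /* Replace the unreadable page with writable zeroes so that the
         * memcpy() into the mapped journal file can proceed; remembering
         * the address for later invalidation is omitted here. */
        (void) mmap(page, page_size, PROT_READ|PROT_WRITE,
                    MAP_PRIVATE|MAP_ANONYMOUS|MAP_FIXED, -1, 0);
        sigbus_hit = 1;
}

static void install_sigbus_handler(void) {
        struct sigaction sa = {
                .sa_sigaction = sigbus_handler,
                .sa_flags = SA_SIGINFO,
        };

        page_size = (size_t) sysconf(_SC_PAGESIZE);
        sigaction(SIGBUS, &sa, NULL);
}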
This adds a simple but powerful tool for downloading container images
from the most popular container solution used today. Use it like
this:
# systemd-import pull-dck mattdm/fedora
# systemd-nspawn -M fedora
This will download the layers for "mattdm/fedora", and make them
available locally as /var/lib/container/fedora.
The tool is pretty complete, as long as it's only about pulling down
images, or updating them. Pushing or searching is not supported yet.
This pulls out the hwdb management from udevadm into an independent tool.
The old code is left in place for backwards compatibility and ease of
testing, but all documentation is dropped to encourage use of the new
tool instead.
This is useful inside of containers or local networks to introduce a
stable name for the default gateway host (in the case of containers usually
the host, in the case of LANs usually the local router).
I often want to use the awesome "./autogen.sh [cmd]" arguments, but have
to append some custom ./configure options. So far, I always had to edit
autogen.sh manually, or copy the full commands out of it and run them
myself.
As I find this super annoying, this commit adds support for a
".config.args" file in $topdir. If it exists, its content is simply
appended to $args, and thus to any ./configure invocation by autogen.sh.
Maybe autotools provide something similar out-of-the-box. In that case,
feel free to revert this and let me know!
Add tests for the following directives:
- WorkingDirectory
- Personality
- IgnoreSIGPIPE
- PrivateTmp
- SystemCallFilter: It makes test/TEST-04-SECCOMP obsolete, so it has
been removed.
- SystemCallErrorNumber
- User
- Group
- Environment
It tests all available directives of Path units:
- PathChanged
- PathModified
- PathExists
- PathExistsGlob
- DirectoryNotEmpty
- MakeDirectory
- DirectoryMode
- Unit
This library negotiates a PPPoE channel. It handles the discovery stage and
leaves the session stage to the kernel. A further PPP library is needed to
actually set up a PPP unit (negotiate LCP, IPCP and do authentication), so in
isolation this is not yet very useful.
The test program has two modes:
# ./test-pppoe
will create a veth tunnel in a new network namespace, start pppoe-server on one
end and this client library on the other. The pppd server will time out as no
LCP is performed, and the client will then shut down gracefully.
# ./test-pppoe eth0
will run the client on eth0 (or any other netdev), and requires a PPPoE server
to be reachable on the local link.
This adds a first draft of systemd-consoled. This is still missing a lot
of features and does some rather primitive rendering. However, it shows
the direction this code is going and serves as basis for further testing.
The systemd-consoled binary should be run as a `systemd --user' unit. It
automatically picks up any session marked as Desktop=SYSTEMD-CONSOLE.
Therefore, you can use any login manager you want (ranging from /bin/login
to gdm) to create sessions for systemd-consoled. However, the session
managers must be prepared to set the Desktop= variable properly.
The user-session is called `systemd-console', only the daemon providing
the terminal environment is called `systemd-consoled' (mind the 'd').
So far, only a single terminal session is provided on each opened
user-session. However, we support multiple user-sessions (even across
multiple seats) just fine. In the future, the workspace logic will get
extended so you can have multiple terminal sessions in a single
user-session for easier access.
Note that this is still experimental! Instructions on how to run it will
follow shortly.
The systemd-modeset tool is meant to debug grdev issues. It simply
displays morphing colors on any found display. This is pretty handy to
look for tearing in the backends and debug hotplug issues.
Note that this tool requires systemd-logind to be compiled from git
(there are important fixes that haven't been released yet).
Like systemd-subterm, this new systemd-evcat tool should only be used to
debug libsystemd-terminal. systemd-evcat attaches to the running session
and pushes all evdev devices attached to the current session into an
idev-session. All events of the created idev-devices are then printed to
stdout for input-event debugging.
hibernate-resume-generator understands the resume= kernel command line
parameter and instantiates systemd-resume@.service accordingly if it is passed.
This enables resume from hibernation using a device specified on the kernel
command line, which may be given either as "/dev/disk/by-foo/bar"
or "FOO=bar", not only as "/dev/sdXY", which is all the in-kernel
implementation understands.
So now resume= is brought on par with root= in terms of the possible ways to
specify a device.
This can be used to initiate a resume from hibernation by path to a swap
device containing the hibernation image.
The respective templated unit is also added. It is instantiated using the
path to the desired resume device.
In the long run this should become a full-fledged client to networkd
(but not before networkd learns bus support). For now, just pull
interesting data out of networkd, udev, and rtnl and present it to the
user in a simple but useful output format.
This tool will warn about misspelt directives, unknown sections, and
non-executable commands. It will also catch the common mistake of
using Accept=yes with a non-template unit and vice versa.
https://bugs.freedesktop.org/show_bug.cgi?id=56607
The unifont layer of libsystemd-terminal provides a fallback font for
situations where no system fonts are available, or where you don't want to
deal with traditional font formats for some reason.
The unifont API mmaps a pre-compiled bitmap font that was generated out of
GNU-Unifont font data. This guarantees that all users of the font will
share the pages in memory. Furthermore, the layout of the binary file
allows accessing glyph data in O(1) without pre-rendering glyphs etc. That
is, the OS can skip loading pages for glyphs that we never access.
Note that this is currently a test run and we want to include the binary
file in the GNU-Unifont package eventually. However, until it is considered
stable and accepted by the maintainers, we will ship it as part of systemd. So
far it's only enabled with the experimental --enable-terminal, anyway.
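A sketch of the lookup idea with invented structure names (the actual
on-disk format differs): fixed-size glyph records indexed directly by
codepoint, in a read-only shared mapping.

#include <fcntl.h>
#include <stddef.h>
#include <stdint.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Hypothetical file layout: a small header followed by glyph_count
 * fixed-size records, one per codepoint. */
struct unifont_header {
        uint32_t glyph_count;
        uint32_t glyph_size;    /* bytes per glyph record */
};

static const void *unifont_map(const char *path, size_t *ret_size) {
        struct stat st;
        void *p;
        int fd;

        fd = open(path, O_RDONLY | O_CLOEXEC);
        if (fd < 0)
                return NULL;
        if (fstat(fd, &st) < 0) {
                close(fd);
                return NULL;
        }

        /* Read-only shared mapping: all users of the font share these
         * pages, and pages of glyphs never accessed stay unread. */
        p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        close(fd);
        if (p == MAP_FAILED)
                return NULL;

        *ret_size = st.st_size;
        return p;
}

/* O(1) lookup: plain pointer arithmetic into the mapping. */
static const void *unifont_lookup(const void *map, uint32_t codepoint) {
        const struct unifont_header *h = map;

        if (codepoint >= h->glyph_count)
                return NULL;

        return (const uint8_t*) map + sizeof(*h) +
                (size_t) codepoint * h->glyph_size;
}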
The systemd-subterm example is a stacked terminal that shows how to
use sd-term. Instead of rendering images and displaying them via X11/etc.,
it uses its parent terminal to display the page (a terminal emulator inside
a terminal emulator, like GNU screen and friends do).
This is only for testing and not installed system-wide!
The term-parser is used to parse any input from TTY-clients. It reads CSI,
DCS, OSC and ST control sequences and normal escape sequences. It doesn't
do anything with the parsed data besides detecting the sequence and
returning it. The caller has to react to them.
The parser also comes with its own UTF-8 helpers. The reason for that is
that we don't want to assert() or hard-fail on parsing errors. Instead,
we treat any invalid UTF-8 sequences as ISO-8859-1. This allows pasting
invalid data into a terminal (which cannot be controlled through the TTY,
anyway) and we still deal with it in a proper manner.
This is _required_ for 8-bit and 7-bit DEC modes (including the g0-g3
mappings), so it's not just an ugly fallback because we can (it's still
horribly ugly but at least we have an excuse).
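A condensed sketch of that fallback rule (not the actual parser code;
only the 2-byte case is shown, overlong checks omitted): decode valid
UTF-8, and treat any invalid byte as an ISO-8859-1 character, whose value
maps 1:1 to the corresponding Unicode codepoint.

#include <stddef.h>
#include <stdint.h>

/* Returns the decoded codepoint and advances *p by the number of bytes
 * consumed; invalid sequences consume exactly one byte. */
static uint32_t term_decode(const uint8_t **p, const uint8_t *end) {
        const uint8_t *s = *p;
        uint8_t c = s[0];

        if (c < 0x80) {                         /* plain ASCII */
                *p += 1;
                return c;
        }

        if ((c & 0xe0) == 0xc0 && end - s >= 2 &&
            (s[1] & 0xc0) == 0x80) {            /* valid 2-byte sequence */
                *p += 2;
                return ((uint32_t) (c & 0x1f) << 6) | (s[1] & 0x3f);
        }

        /* (3- and 4-byte sequences elided for brevity.) */

        /* Invalid UTF-8: fall back to ISO-8859-1, i.e. the byte value
         * itself is the codepoint. No assert(), no data loss. */
        *p += 1;
        return c;
}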
This commit introduces libsystemd-ui, a systemd-internal helper library
that will contain all the UI related functionality. It is going to be used
by systemd-welcomed, systemd-consoled, systemd-greeter and systemd-er.
Further use-cases may follow.
For now, this commit only adds terminal-page handling based on lines.
Follow-up commits will add more functionality.
This Pty API wraps the ugliness that is POSIX PTY. It takes care of:
- edge-triggered HUP handling (avoid heavy CPU-usage on vhangup)
- HUP vs. input-queue draining (handle HUP _after_ draining the whole
input queue; see the sketch below)
- SIGCHLD vs. HUP (HUP is not a reliable way to catch PTY death; always
use SIGCHLD, otherwise vhangup() and friends will break.)
- Output queue buffering (async EPOLLOUT handling)
- synchronous setup (via Barrier API)
At the same time, the PTY API does not execve(). It simply fork()s and
leaves everything else to the caller. Usually, the caller will execve(), but
we support other setups, too.
This will be needed by multiple UI binaries (systemd-console, systemd-er,
...) so it's placed in src/shared/. It's not strictly related to
libsystemd-terminal, so it's not included there.
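To illustrate the HUP-vs-draining rule from the list above (illustrative
names, not the committed code): with an edge-triggered epoll watch, an
EPOLLHUP event only counts once read() has drained everything the child
wrote before hanging up, and even then SIGCHLD stays the authoritative
signal for child death.

#include <errno.h>
#include <stdbool.h>
#include <stdint.h>
#include <sys/epoll.h>
#include <unistd.h>

/* Dispatch one edge-triggered event on the PTY master. Returns true once
 * the PTY can be considered dead (input fully drained, peer gone). */
static bool pty_dispatch(int master_fd, uint32_t events) {
        char buf[4096];

        for (;;) {
                ssize_t n = read(master_fd, buf, sizeof(buf));

                if (n > 0) {
                        /* ...hand the data to the terminal layer... */
                        continue;
                }

                if (n < 0 && errno == EAGAIN)
                        /* Queue drained; only now may HUP be acted upon. */
                        return !!(events & EPOLLHUP);

                /* n == 0 or a hard error (e.g. EIO): the peer is gone. */
                return true;
        }
}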
The "Barrier" object is a simple inter-process barrier implementation. It
allows placing synchronization points and waiting for the other side to
reach it. Additionally, it has an abortion-mechanism as second-layer
synchronization to send abortion-events asynchronously to the other side.
The API is usually used to synchronize processes during fork(). However,
it can be extended to pass state through execve() so you could synchronize
beyond execve().
Usually, it's used like this (error-handling replaced by assert() for
simplicity):
Barrier b;

r = barrier_init(&b);
assert_se(r >= 0);

pid = fork();
assert_se(pid >= 0);

if (pid == 0) {
        barrier_set_role(&b, BARRIER_CHILD);

        ...do child post-setup...
        if (CHILD_SETUP_FAILED)
                exit(1);
        ...child setup done...

        barrier_place(&b);
        if (!barrier_sync(&b)) {
                /* parent setup failed */
                exit(1);
        }

        barrier_destroy(&b); /* redundant as execve() and exit() imply this */

        /* parent & child setup successful */
        execve(...);
}

barrier_set_role(&b, BARRIER_PARENT);

...do parent post-setup...
if (PARENT_SETUP_FAILED) {
        barrier_abort(&b);         /* send abortion event */
        barrier_wait_abortion(&b); /* wait for child to abort (exit() implies abortion) */
        barrier_destroy(&b);
        ...bail out...
}
...parent setup done...

barrier_place(&b);
if (!barrier_sync(&b)) {
        ...child setup failed...
        barrier_destroy(&b);
        ...bail out...
}

barrier_destroy(&b);
...child setup successful...
This is the most basic API: barrier_place() places barriers and
barrier_sync() performs a full synchronization between both processes.
barrier_abort() places an abortion barrier which supersedes any other
barriers; exit() (or barrier_destroy()) places an abortion barrier that
queues behind existing barriers (thus *not* replacing existing barriers,
unlike barrier_abort()).
This example uses hard-synchronization with wait_abortion(), sync() and
friends. These are all optional. Barriers are highly dynamic and can be
used for one-way synchronization or even no synchronization at all
(postponing it for later). The sync() call performs a full two-way
synchronization.
The API is documented and should be fairly self-explanatory. A test-suite
shows some special semantics regarding abortion, wait_next() and exit().
Internally, barriers use two eventfds and a pipe. The pipe is used to
detect exit()s of the remote side as eventfds do not allow that. The
eventfds are used to place barriers, one for each side. Barriers themselves
are numbered, but the numbers are reused once both sides have reached the
same barrier, so you cannot address barriers by index. Moreover, the
numbering is implicit and we only store a counter. This keeps the
implementation itself very lightweight, which is probably negligible
considering that we need 3 FDs per barrier.
Last but not least: This barrier implementation is quite heavy. It's
definitely not meant for fast IPC synchronization. However, it's very easy
to use. And given the *HUGE* overhead of fork(), the barrier-overhead
should be negligible.
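Sketched in code (field names are illustrative), the building blocks are
just that: one eventfd per side, plus a pipe that is never written to so
that a hangup on it signals that the remote side exited.

#define _GNU_SOURCE 1
#include <fcntl.h>
#include <stdint.h>
#include <sys/eventfd.h>
#include <unistd.h>

typedef struct Barrier {
        int me;             /* eventfd this side places barriers on */
        int them;           /* eventfd the other side places barriers on */
        int pipe[2];        /* unused for data; HUP means the peer exited */
        uint64_t barriers;  /* implicit counter of barriers placed so far */
} Barrier;

static int barrier_create(Barrier *b) {
        b->me = eventfd(0, EFD_CLOEXEC | EFD_NONBLOCK);
        b->them = eventfd(0, EFD_CLOEXEC | EFD_NONBLOCK);
        if (b->me < 0 || b->them < 0)
                return -1;
        if (pipe2(b->pipe, O_CLOEXEC | O_NONBLOCK) < 0)
                return -1;

        /* After fork(), a set_role() step would swap me/them on the child
         * side and close the pipe end the respective side does not need,
         * leaving the 3 FDs per side mentioned above. */
        b->barriers = 0;
        return 0;
}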
Let's turn resolved into something truly useful: a fully asynchronous
DNS stub resolver that subscribes to network changes.
(More to come: caching, LLMNR, mDNS/DNS-SD, DNSSEC, IDN, NSS module)
As Zbigniew pointed out, a new ConditionFirstBoot= appears to be the nicer
way to hook in systemd-firstboot.service on first boots (those with /etc
unpopulated), so let's do this, and get rid of the generator again.
A new tool "systemd-firstboot" can be used either interactively on boot,
where it will query basic locale, timezone, hostname, root password
information and set it. Or it can be used non-interactively from the
command line when prepareing disk images for booting. When used
non-inertactively the tool can either copy settings from the host, or
take settings on the command line.
$ systemd-firstboot --root=/path/to/my/new/root --copy-locale --copy-root-password --hostname=waldi
The tool will now be invoked automatically (interactively) on first boot
if /etc is found unpopulated.
This also creates the infrastructure for generators to be notified via
an environment variable whether they are running on the first boot or
not.
This is useful to test the behaviour of the compressor for various buffer
sizes.
Time is limited to a minute per compression: when LZ4 is allowed to run
for more than a second (which is necessary to reduce the noise), XZ would
otherwise take more than 10 minutes.
% build/test-compress-benchmark (without time limit)
XZ: compressed & decompressed 2535300963 bytes in 794.57s (3.04MiB/s), mean compresion 99.95%, skipped 3570 bytes
LZ4: compressed & decompressed 2535303543 bytes in 1.56s (1550.07MiB/s), mean compresion 99.60%, skipped 990 bytes
% build/test-compress-benchmark (with time limit)
XZ: compressed & decompressed 174321481 bytes in 60.02s (2.77MiB/s), mean compresion 99.76%, skipped 3570 bytes
LZ4: compressed & decompressed 2535303543 bytes in 1.63s (1480.83MiB/s), mean compresion 99.60%, skipped 990 bytes
It appears that there's a bug in lzma_end where it leaks 32 bytes.
This new tool is based on "sd-path", a new (so far unexported) API for
libsystemd, that can hopefully grow into a workable API covering /opt
and more one day.
When disk space taken up by coredumps grows beyond a configured limit
start removing the oldest coredump of the user with the most coredumps,
until we get below the limit again.
debug-generator can mask specific units if they are specified on the
kernel command line with systemd.mask=.
debug-generator can pull in debug-shell.service if systemd.debug-shell
is passed on the kernel command line.
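A compressed sketch of the masking half (hypothetical standalone code,
not the actual generator): a generator is invoked with three output
directories, and masking a unit is just a symlink to /dev/null placed in
the early, highest-priority one.

#define _GNU_SOURCE 1
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* argv[1..3] are the normal, early and late generator directories. */
int main(int argc, char *argv[]) {
        char *line = NULL, path[4096];
        size_t sz = 0;
        FILE *f;

        if (argc < 4)
                return 1;

        f = fopen("/proc/cmdline", "re");
        if (!f || getline(&line, &sz, f) < 0)
                return 1;

        /* Very naive word splitting, for illustration only; real code
         * unquotes properly and validates/normalizes the unit name. */
        for (char *w = strtok(line, " \n"); w; w = strtok(NULL, " \n"))
                if (strncmp(w, "systemd.mask=", 13) == 0) {
                        snprintf(path, sizeof(path), "%s/%s", argv[2], w + 13);
                        (void) symlink("/dev/null", path); /* mask == /dev/null link */
                }

        fclose(f);
        free(line);
        return 0;
}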