The source file name and the binary name were mismatched.
Rename binary to match.
Make the test exit with TEST_SKIP if the data is missing or we
have no permissions. Otherwise, the data will be printed, which
should be safe to enable by default.
Add path argument to clock_is_localtime() and default to "/etc/adjtime" if it's
NULL. This makes the function testable.
Add test-clock: initial test cases for some scenarios, using a temporary file.
This also checks the behaviour with a NULL (i.e. the system's /etc/adjtime)
file.
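A minimal sketch of what such a testable interface could look like (the function name and parsing below are illustrative, not the actual systemd code; the third line of an adjtime file is either "UTC" or "LOCAL"):

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    /* Illustrative only: accept a path for testing, default to /etc/adjtime
     * when NULL, and report whether the RTC is kept in local time. */
    static bool clock_is_localtime_sketch(const char *adjtime_path) {
        char line[128] = "";
        FILE *f;

        if (!adjtime_path)
            adjtime_path = "/etc/adjtime";   /* default as described above */

        f = fopen(adjtime_path, "re");
        if (!f)
            return false;                    /* no file means UTC */

        /* The third line of adjtime says "UTC" or "LOCAL". */
        for (int i = 0; i < 3; i++)
            if (!fgets(line, sizeof(line), f))
                break;
        fclose(f);

        return strncmp(line, "LOCAL", 5) == 0;
    }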
This commit rips out systemd-bootchart. It will be given a new home, outside
of the systemd repository. The code itself isn't actually specific to
systemd and can be used without systemd even, so let's put it somewhere
else.
It has fairly wide functionality now and the interface has been
stable for a while. It is a useful testing tool.
The name is changed to better indicate what it does.
This was used by the dkr logic, which is gone now, hence remove this too.
Should we need it again one day, the git history never forgets...
Note that this only covers the JSON parser. The JSON generator used by
"journalctl -o json" remains, as it's much simpler and requires no
infrastructure except printf() and the most basic escaping.
Packets are stored in a simple format:
<size> <packet-wire-format> <size> <packet-wire-format> ...
Packets for some example domains are dumped, to test the RR code for various
record types. Currently:
A
AAAA
CAA
DNSKEY
LOC
MX
NS
NSEC
OPENPGPKEY
SOA
SPF
TXT
The hashing code is executed, but results are not checked.
Also build other tests in src/resolve only with --enable-resolve.
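As an illustration of consuming the dump format described above, a hedged sketch of a reader (the width and endianness of the <size> prefix are assumptions here, a native-endian 64-bit length; the real test code may differ):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Read one <size> <packet-wire-format> pair from the dump. Returns 1 on
     * success, 0 on EOF, -1 on error. */
    static int read_next_packet(FILE *f, uint8_t **ret, uint64_t *ret_size) {
        uint64_t size;
        uint8_t *p;

        if (fread(&size, sizeof(size), 1, f) != 1)
            return 0;                        /* EOF (or short read) */

        p = malloc(size);
        if (!p)
            return -1;

        if (fread(p, 1, size, f) != size) {
            free(p);
            return -1;                       /* truncated dump */
        }

        *ret = p;
        *ret_size = size;
        return 1;
    }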
As kdbus won't land in the anticipated way, the bus-proxy is not needed in
its current form. It can be resurrected at any time thanks to the history,
but for now, let's remove it from the sources. If we'll have a similar tool
in the future, it will look quite different anyway.
Note that stdio-bridge is still available. It was restored from a version
prior to f252ff17, and refactored to make use of the current APIs.
Let's make sure our poll() calls don't get interrupted where they shouldn't (SIGALRM, ...), but allow them to be
interrupted where they should (SIGINT, ...).
Fixes #1965
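The underlying technique, roughly: run the poll call with a signal mask that blocks the signals that should not interrupt it, which ppoll() can apply atomically for the duration of the call. A sketch only, with SIGALRM/SIGINT as example signals, not the actual systemd helper:

    #define _GNU_SOURCE
    #include <poll.h>
    #include <signal.h>
    #include <time.h>

    /* Wait on fds with SIGALRM blocked for the duration of the call, while
     * SIGINT stays deliverable and can still interrupt the wait. */
    static int poll_mostly_uninterrupted(struct pollfd *fds, nfds_t n,
                                         const struct timespec *timeout) {
        sigset_t mask;

        sigprocmask(SIG_SETMASK, NULL, &mask);   /* start from the current mask */
        sigaddset(&mask, SIGALRM);

        return ppoll(fds, n, timeout, &mask);
    }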
The tool resolves way more than just hosts, hence give it a more generic name. This should be safe, as the tool is
currently undocumented. Before we add documentation for it, let's get the name right.
This also moves the C source into src/resolve/ (from src/resolve-host/), since the old name is a misnomer now. Also,
since it links directly to many of the C files of resolved it really belongs into resolved's directory anyway.
This adds a self-standing RB-Tree implementation to src/basic/. This
will be needed for NSEC RR lookups, since we need "close lookups", which
hashmaps (not even ordered-hashmaps) cannot give us in reasonable time.
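To illustrate what "close lookup" means here, a sketch on a plain binary search tree (not the actual RB-tree API in src/basic/): find the greatest entry whose key is not larger than the requested key, something a hashmap cannot answer without a full scan.

    struct node {
        int key;
        struct node *left, *right;
    };

    /* Return the node with the greatest key <= the requested key, or NULL. */
    static struct node *lookup_le(struct node *root, int key) {
        struct node *best = NULL;

        while (root) {
            if (root->key == key)
                return root;
            if (root->key < key) {
                best = root;             /* candidate, try something larger */
                root = root->right;
            } else
                root = root->left;
        }

        return best;
    }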
For now, only add_acls_for_user is tested. When run under root, it
actually sets the acls. When run under non-root, it sets the acls for
the user, which does nothing, but at least calls the functions.
It is really unclear if we want to / have the resources to support this fully, so drop it
for now. It can easily be brought back if a killer usecase emerges.
Note that this code was never hooked up, so this does not remove any features.
memory_erase() so far just called memset(), which the compiler might
optimize away under certain conditions if it feels there's benefit in
it. C11 knows a new memset_s() call that is like memset(), but may not
be optimized away. Ideally, we'd just use that call, but glibc currently
does not support it. Hence, implement our own simplistic version of it.
We use a GCC pragma to turn off optimization for this call, and also use
the "volatile" keyword on the pointers to ensure that gcc will use the
pointers as-is. According to a variety of internet sources, either one
does the trick. However, there are also reports that at least the
volatile thing isn't fully correct, hence let's add some snake oil and
employ both techniques.
https://news.ycombinator.com/item?id=4711346
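A minimal sketch of the two techniques just described, combined (illustrative, not necessarily identical to the code in the tree):

    #include <stddef.h>
    #include <stdint.h>

    /* Disable optimization for this function and write through a volatile
     * pointer, so the stores cannot be elided the way a plain memset() might be. */
    #pragma GCC push_options
    #pragma GCC optimize("O0")
    static void *memory_erase_sketch(void *p, size_t l) {
        volatile uint8_t *x = (volatile uint8_t *) p;

        while (l-- > 0)
            *(x++) = 0;

        return p;
    }
    #pragma GCC pop_options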
Tests for the functions defined in src/basic/parse-util.c. Reorder them
to match the order in which the functions are defined in the source
file. Adjust the list of include files to remove the ones no longer
needed in test-util.c.
Tested that `make check` still passes as expected. Also checked the
number of lines removed from test-util.c matches the expected count, as an
additional verification that no tests were dropped or duplicated in the
move.
So far we had two pretty much identical calls in user-util.[ch]:
lookup_uid() and uid_to_name(). Get rid of the former, in favour of the
latter, and while we are at it, rewrite it, to use getpwuid_r()
correctly, inside an allocation loop, as POSIX intended.
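For reference, the allocation-loop pattern POSIX intends for getpwuid_r() looks roughly like this (a sketch, not the actual uid_to_name() in user-util.c):

    #include <errno.h>
    #include <pwd.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Illustrative only: call getpwuid_r() with a growing buffer until it
     * no longer reports ERANGE, then duplicate the resolved name. */
    static char *uid_to_name_sketch(uid_t uid) {
        long sz = sysconf(_SC_GETPW_R_SIZE_MAX);
        size_t bufsize = sz > 0 ? (size_t) sz : 4096;

        for (;;) {
            struct passwd pwbuf, *pw = NULL;
            char *buf, *name;
            int r;

            buf = malloc(bufsize);
            if (!buf)
                return NULL;

            r = getpwuid_r(uid, &pwbuf, buf, bufsize, &pw);
            if (r == 0 && pw) {
                name = strdup(pw->pw_name);
                free(buf);
                return name;
            }

            free(buf);
            if (r != ERANGE)
                return NULL;              /* not found, or a real error */

            bufsize *= 2;                 /* buffer too small, retry */
        }
    }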
The actual code rename will follow. The reason for the change of name is to make it
simpler and more uniform with how we name other libraries (we don't include the
underlying protocol). The new name also matches the naming in the kernel (which
is particularly relevant here as we expect to let the kernel do some parts of
the protocol and we do others).
And remove machine-id-commit as separate binary.
There's really no point in keeping this separate, as the sources are
pretty much identical, and have pretty identical interfaces. Let's unify
this in one binary.
Given that machine-id-commit was a private binary of systemd (shipped in
/usr/lib/) removing the tool is not an API break.
While we are at it, improve the documentation of the command substantially.
Tests are modified to check behaviour with and without relax.
New tests are added for hostname_cleanup().
Tests are moved to a new file (test-hostname-util) because there's
now a bunch of them.
New parameter is not used anywhere, except in tests, so there should
be no observable change.
This drops the libsystemd-terminal and systemd-consoled code for various
reasons:
* It's been sitting there unfinished for over a year now and won't get
finished any time soon.
* Since its initial creation, several parts need significant rework: The
input handling should be replaced with the now commonly used libinput,
the drm accessors should coordinate the handling of mode-object
hotplugging (including split connectors) with other DRM users, and the
internal library users should be converted to sd-device and friends.
* There is still significant kernel work required before sd-console is
really useful. This includes, but is not limited to, simpledrm and
drmlog.
* The authority daemon is needed before all this code can be used for
real. And this will definitely take a lot more time to get done, as
no one else is currently working on this but me.
* kdbus maintenance has taken up way more time than I thought and it has
much higher priority. I don't see me spending much time on the
terminal code in the near future.
If anyone intends to hack on this, please feel free to contact me. I'll
gladly help you out with any issues. Once kdbus and authorityd are
finished (whenever that will be...) I'll definitely pick this up again. But
until then, let's reduce compile times and maintenance efforts on this code
and drop it for now.
This adds test-bus-proxy which should be used to test correct behavior of
systemd-bus-proxyd. The first test that was added is to verify we actually
receive NameAcquired signals for ourselves on bus-connect.
For a longer discussion see this:
http://lists.freedesktop.org/archives/systemd-devel/2015-April/030175.html
This introduces /run/systemd/fsck.progress as a simple
AF_UNIX/SOCK_STREAM socket. If it exists and is connectable we'll
connect fsck's -C switch with it. If external programs want to get
progress data they should hence listen on this socket and will get
all they need via that socket. To get information about the connecting
fsck client they should use SO_PEERCRED.
Unless /run/systemd/fsck.progress is around and connectable this change
reverts back to v219 behaviour where we'd forward fsck output to
/dev/console on our own.
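On the fsck side, connecting to the socket and handing the fd to fsck's -C switch could look roughly like this (a sketch under the assumptions above, not the actual systemd code):

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    /* Try to connect to the progress socket; on success the returned fd can
     * be passed to fsck as "-C <fd>", on failure fall back to the old
     * /dev/console forwarding. */
    static int connect_fsck_progress(void) {
        struct sockaddr_un addr;
        int fd;

        fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;

        memset(&addr, 0, sizeof(addr));
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, "/run/systemd/fsck.progress",
                sizeof(addr.sun_path) - 1);

        if (connect(fd, (struct sockaddr *) &addr, sizeof(addr)) < 0) {
            close(fd);
            return -1;                    /* socket missing or not connectable */
        }

        return fd;
    }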
Now that all functionality has been ported over to logind, the old
implementation can be removed. There goes one of the oldest parts of
the systemd code base.
<audit-1400> is replaced by AVC, etc.
A fallback mechanism is provided for unlisted event types.
Occasionally new types are added to the kernel, but not too often.
Add a simple "test", which simply prints the mapping.
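The mapping plus fallback can be pictured like this (the table entries and the fallback format below are merely illustrative; the numeric values 1300 and 1400 come from linux/audit.h):

    #include <stdio.h>

    /* Map a few well-known audit message type numbers to their names; for
     * anything unlisted, fall back to a generic formatted name. */
    static const char *audit_type_to_string(int type, char *buf, size_t len) {
        switch (type) {
        case 1300:
            return "SYSCALL";
        case 1400:
            return "AVC";
        default:
            snprintf(buf, len, "AUDIT%04i", type);
            return buf;
        }
    }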
Add systemd-fsckd, a multiplexer which accepts connections from multiple
systemd-fsck instances sending progress reports. systemd-fsckd then
computes and writes to /dev/console the number of devices currently being
checked and the minimum fsck progress. This will be used for interactive
progress report and cancelling in plymouth.
systemd-fsckd stops on idle when no systemd-fsck is connected.
Make the necessary changes to systemd-fsck to connect to the systemd-fsckd
socket.
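The aggregation itself is simple; roughly (the structure and field names below are illustrative, not the actual systemd-fsckd code):

    #include <stddef.h>

    struct fsck_client {
        double percent;    /* latest progress reported by one systemd-fsck */
    };

    /* Report how many devices are currently being checked and the minimum
     * progress among them, which is what gets written to /dev/console. */
    static void fsckd_summarize(const struct fsck_client *clients, size_t n,
                                size_t *ret_count, double *ret_min_percent) {
        double min = 100.0;

        for (size_t i = 0; i < n; i++)
            if (clients[i].percent < min)
                min = clients[i].percent;

        *ret_count = n;
        *ret_min_percent = n > 0 ? min : 0.0;
    }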
What used to be gummiboot was renamed sd-boot when it was merged into
systemd. Let's try to be a bit more consistent with the rest of systemd
and rename it again as follows:
The EFI bootloader is now called 'systemd-bootx64.efi', and its sources are in
'src/boot/efi/'. The drop-in directory where bootctl will find EFI loaders
is now /usr/lib/systemd/boot/efi/.
The old "systemd-import" binary is now an internal tool. We still use it
as an asynchronous backend for systemd-importd. Since the import tool might
require some IO and CPU resources (due to qcow2 explosion, and
decompression), and because we might want to run it with more minimal
privileges we still keep it around as the worker binary to execute as
child process of importd.
machinectl now has verbs for pulling down images, cancelling them and
listing them.
With this change the import tool will now unpack qcow2 images into
normal raw disk images, suitable for usage with nspawn.
This also has the benefit of allowing Ubuntu Cloud images to be imported
for use with nspawn.
Even though we use fallocate() it appears that file systems like btrfs
will trigger SIGBUS in certain low-disk-space situations. We should
handle that, hence catch the signal, add it to a list of invalidated
pages, and replace the page with an empty memory area. After each write
check if SIGBUS was triggered, and consider the write invalid if it was.
This should make journald a lot more robust with file systems where
fallocate() is not reliable, for example all CoW file systems
(btrfs...), where changing written data can fail with disk full errors.
https://bugzilla.redhat.com/show_bug.cgi?id=1045810
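Schematically, the mechanism described above looks like this (a sketch only, not the journald implementation; error handling and the list of invalidated pages are omitted):

    #define _GNU_SOURCE
    #include <signal.h>
    #include <stdint.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static volatile sig_atomic_t sigbus_seen = 0;

    /* On SIGBUS, overlay the faulting page with an empty anonymous page so
     * execution can continue; the flag tells the caller to treat the write
     * as invalid afterwards. */
    static void sigbus_handler(int sig, siginfo_t *si, void *ctx) {
        unsigned long ps = (unsigned long) sysconf(_SC_PAGESIZE);
        void *page = (void *) ((uintptr_t) si->si_addr & ~(uintptr_t)(ps - 1));

        (void) mmap(page, ps, PROT_READ|PROT_WRITE,
                    MAP_PRIVATE|MAP_ANONYMOUS|MAP_FIXED, -1, 0);
        sigbus_seen = 1;

        (void) sig;
        (void) ctx;
    }

    /* Installed with SA_SIGINFO:
     *   struct sigaction sa = { .sa_sigaction = sigbus_handler, .sa_flags = SA_SIGINFO };
     *   sigaction(SIGBUS, &sa, NULL);
     * After each write, check sigbus_seen and consider the write invalid if set. */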
This adds a simple but powerful tool for downloading container images
from the most popular container solution used today. Use it like
this:
# systemd-import pull-dck mattdm/fedora
# systemd-nspawn -M fedora
This will download the layers for "mattdm/fedora", and make them
available locally as /var/lib/container/fedora.
The tool is pretty complete, as long as it's only about pulling down
images, or updating them. Pushing or searching is not supported yet.
This pulls out the hwdb management from udevadm into an independent tool.
The old code is left in place for backwards compatibility, and ease of
testing, but all documentation is dropped to encourage use of the new
tool instead.
This is useful inside of containers or local networks to introduce a
stable name of the default gateway host (in case of containers usually
the host, in the case of LANs usually the local router).
I often want to use the awesome "./autogen.sh [cmd]" arguments, but have
to append some custom ./configure options. For now, I always had to edit
autogen.sh manually, or copy the full commands out of it and run it
myself.
As I think this is super annoying, this commit adds support for a
".config.args" file in $topdir. If it exists, its content is just
appended to $args, and thus to any ./configure invocation of autogen.sh.
Maybe autotools provide something similar out-of-the-box. In that case,
feel free to revert this and let me know!
add tests for the following directives:
- WorkingDirectory
- Personality
- IgnoreSIGPIPE
- PrivateTmp
- SystemCallFilter: It makes test/TEST-04-SECCOMP obsolete, so it has
been removed.
- SystemCallErrorNumber
- User
- Group
- Environment
It tests all available directives of Path units:
- PathChanged
- PathModified
- PathExists
- PathExistsGlob
- DirectoryNotEmpty
- MakeDirectory
- DirectoryMode
- Unit
This library negotiates a PPPoE channel. It handles the discovery stage and
leaves the session stage to the kernel. A further PPP library is needed to
actually set up a PPP unit (negotiate LCP, IPCP and do authentication), so in
isolation this is not yet very useful.
The test program has two modes:
# ./test-pppoe
will create a veth tunnel in a new network namespace, start pppoe-server on one
end and this client library on the other. The pppd server will time out as no
LCP is performed, and the client will then shut down gracefully.
# ./test-pppoe eth0
will run the client on eth0 (or any other netdev), and requires a PPPoE server
to be reachable on the local link.
This adds a first draft of systemd-consoled. This is still missing a lot
of features and does some rather primitive rendering. However, it shows
the direction this code is going and serves as basis for further testing.
The systemd-consoled binary should be run as a `systemd --user' unit. It
automatically picks up any session marked as Desktop=SYSTEMD-CONSOLE.
Therefore, you can use any login-manager you want (ranging from /bin/login
to gdm) to create sessions for systemd-consoled. However, the session
managers must be prepared to set the Desktop= variable properly.
The user-session is called `systemd-console', only the daemon providing
the terminal environment is called `systemd-consoled' (mind the 'd').
So far, only a single terminal session is provided on each opened
user-session. However, we support multiple user-sessions (even across
multiple seats) just fine. In the future, the workspace logic will get
extended so you can have multiple terminal sessions in a single
user-session for easier access.
Note that this is still experimental! Instructions on how to run it will
follow shortly.