Previously, when we used a bisection table for seeking through a corrupted
file and the end of the bisection table was corrupted, we'd most likely fail
the entire seek operation. Improve the situation: if we encounter invalid
entries in a bisection table, go backwards linearly until we find a working
entry again.
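To illustrate the idea with a self-contained toy model (this is not the actual
journal-file code; Entry and seek_le() are made up for the example):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <sys/types.h>

/* entries[] is sorted by .seqnum, but some entries are corrupted
 * (valid == false). After the bisection picks an index, walk backwards
 * linearly over invalid entries instead of failing the whole seek. */
typedef struct Entry {
        uint64_t seqnum;
        bool valid;
} Entry;

/* Returns the index of the last valid entry with seqnum <= needle, or -1. */
static ssize_t seek_le(const Entry *entries, size_t n, uint64_t needle) {
        size_t lo = 0, hi = n;

        while (lo < hi) {                       /* plain bisection */
                size_t mid = lo + (hi - lo) / 2;
                if (entries[mid].seqnum <= needle)
                        lo = mid + 1;
                else
                        hi = mid;
        }

        /* lo is the number of entries with seqnum <= needle; go backwards
         * over corrupted ones rather than giving up. */
        for (ssize_t i = (ssize_t) lo - 1; i >= 0; i--)
                if (entries[i].valid)
                        return i;

        return -1;                              /* nothing readable left */
}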
When we iterate linearly through a corrupted journal file and encounter a
read error, don't consider this fatal, but merely treat it as an EOF condition
(and log about it).
If journal files are not cleanly closed it might happen that intermediate
journal entries cannot be read. Handle this nicely: skip over the unreadable
entries and log a debug message about it; after all, we generally follow the
logic of making the best of corrupted files.
This way the user service will have a loginuid, and it will be inherited by
child services. This shouldn't change anything as far as systemd itself is
concerned, but is nice for various services spawned by systemd --user that
expect a loginuid.
pam_loginuid(8) says that it should be enabled for "..., crond and atd".
user@.service should behave similarly to those two as far as audit is
concerned.
https://bugzilla.redhat.com/show_bug.cgi?id=1328947#c28
Container images from Debian or suchlike contain device nodes in /dev. Let's
make sure we can clone them properly, hence pass CAP_MKNOD to machined.
Fixes: #2867
Fixes: #465
Fixes: #2060
(Of course, in the long run, we should probably add a copy-based fall-back. But
given how slow that is, this probably requires some asynchronous forking logic
like the CopyFrom() and CopyTo() method calls already implement.)
The "resources" error is really just the generic error we return when
we hit some kind of error and we have no more appropriate error for the case to
return, for example because of some OS error.
Hence, reword the explanation and don't claim any relation to resource limits.
Admittedly, the "resources" service error is a bit of a misnomer, but I figure
it's kind of API now.
Fixes: #2716
Early in journal_file_set_offline(), f->header->state is tested to see if
it's != STATE_ONLINE, and since there's no need to do anything if the
journal isn't online, the function simply returned at that point.
Since part of the offlining process was moved to a separate thread, there
are two problems here:
1. We can't simply check f->header->state, because if there is an
offline thread active it may modify f->header->state.
2. Even if the journal is deemed offline, the thread responsible may
still need joining, so a bare return may leak the thread's resources
like its stack.
To address #1, the helper journal_file_is_offlining() is called prior to
accessing f->header->state.
If journal_file_is_offlining() returns true, f->header->state isn't even
checked, because an offlining journal is obviously online, and we'll
just continue with the normal set offline code path.
If journal_file_is_offlining() returns false, then it's safe to check
f->header->state, because the offline_state is beyond the point of
modifying f->header->state, and there's a memory barrier in the helper.
If we find f->header->state is != STATE_ONLINE, then we call the
idempotent journal_file_set_offline_thread_join() on the way out of the
function, to join a potential lingering offline thread.
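For clarity, here is a toy model of that flow (hypothetical names and a much
simplified state machine, not the actual journal-file code):

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

typedef enum { STATE_OFFLINE, STATE_ONLINE } State;
typedef enum { OFFLINE_IDLE, OFFLINE_RUNNING, OFFLINE_DONE } OfflineState;

typedef struct File {
        State state;                            /* stands in for f->header->state */
        _Atomic OfflineState offline_state;
        pthread_t offline_thread;
        bool have_thread;
} File;

/* True while an offline thread may still modify f->state; the atomic load
 * plays the role of the memory barrier mentioned above. */
static bool file_is_offlining(File *f) {
        return atomic_load(&f->offline_state) == OFFLINE_RUNNING;
}

/* Idempotent: joins a finished offline thread, if any, so its stack and
 * other resources aren't leaked. */
static int file_offline_thread_join(File *f) {
        if (!f->have_thread)
                return 0;
        pthread_join(f->offline_thread, NULL);
        f->have_thread = false;
        return 0;
}

static int file_set_offline(File *f) {
        /* Only look at f->state once no offline thread can still change it. */
        if (!file_is_offlining(f) && f->state != STATE_ONLINE)
                /* Already offline, but a lingering thread may need joining. */
                return file_offline_thread_join(f);

        /* ... continue with the normal set-offline path ... */
        return 0;
}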
I am pretty sure we shouldn't carry history sections in man pages, since it's
very hard to keep them correctly updated, the current ones are very
out-of-date, and they tend to make APIs appear unnecessarily complex.
This way, the switch becomes compatible with nspawn containers using --image=,
and with those which only store journal data in /run (i.e. have persistent
logs turned off).
Fixes: #49
When appending to a journal file, journald will:
a) first, append the actual entry to the end of the journal file
b) second, add an offset reference to it to the global entry array stored at
the beginning of the file
c) third, add offset references to it to the per-field entry array stored at
various places of the file
The global entry array, maintained by b) is used when iterating through the
journal without matches applied.
The per-field entry array maintained by c) is used when iterating through the
journal with a match for that specific field applied.
In the wild, there are journal files where a) and b) were completed, but c)
was not before the files were abandoned. This means that, in some cases, there
are log entries at the end of these files that appear in the global entry
array, but not in the per-field entry array of the _BOOT_ID= field. Now, the
"journalctl --list-boots" command alternatingly uses the global entry array
and the per-field entry array of the _BOOT_ID= field. It seeks to the last
entry of a specific _BOOT_ID= value by having the right match installed, and
then jumps to the next following entry with no match installed anymore, under
the assumption this would bring it to the next boot ID. However, if the
per-field entry array wasn't fully written, it might actually turn out that
the global entry array knows one more entry with the same _BOOT_ID=, thus
resulting in an indefinite loop around the same _BOOT_ID=.
This patch fixes that, by updating the boot search logic to always continue
reading entries until the boot ID actually changes from the previous one.
Thus, the per-field entry array is used as a quick jump index (i.e. as an
optimization), but is not trusted otherwise. Only the global entry array is
trusted.
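A hedged illustration of that logic using the public sd-journal API (this is
not the journalctl implementation itself, just the same idea expressed with
public calls): after jumping past the last entry matched for the previous boot
ID, keep iterating until the boot ID read from the entries actually differs.

#include <systemd/sd-id128.h>
#include <systemd/sd-journal.h>

/* Advances the journal until an entry with a boot ID different from
 * "previous" is reached. Returns 1 and stores the new boot ID on success,
 * 0 if no further boot exists, negative errno on error. */
static int next_boot_id(sd_journal *j, sd_id128_t previous, sd_id128_t *ret) {
        for (;;) {
                sd_id128_t boot_id;
                uint64_t t;
                int r;

                r = sd_journal_next(j);
                if (r <= 0)
                        return r;               /* error, or no more entries */

                r = sd_journal_get_monotonic_usec(j, &t, &boot_id);
                if (r < 0)
                        return r;

                if (!sd_id128_equal(boot_id, previous)) {
                        *ret = boot_id;
                        return 1;               /* found the next boot */
                }
                /* Same boot ID as before: the per-field entry array was
                 * incomplete, so don't trust the jump and keep reading. */
        }
}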
This replaces PR #1904, which is actually very similar to this one. However,
this one reads the boot ID directly from the entry header, and doesn't try to
read it at all until the read pointer is actually located on the first item to
read.
Fixes: #617
Replaces: #1904
Show the various timestamps in hexadecimal too. This is useful for matching
the timestamps included in cursor strings (which are encoded in hex, too) with
the references in the journal header.
* sd-netlink: permit RTM_DELLINK messages with no ifindex
This is useful for removing network interfaces by name.
* nspawn: explicitly remove veth links we created after use
Sometimes the kernel keeps veth links pinned after the namespace they have
been joined to has died. Hence, let's explicitly remove veth links after use.
Fixes: #2173
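To show what the sd-netlink change permits, here's a rough equivalent of the
delete-by-name request using raw rtnetlink directly (sd-netlink is a
systemd-internal API, so this sketch deliberately uses plain kernel interfaces
instead): an RTM_DELLINK request with ifindex 0 that identifies the link to
delete purely by name via an IFLA_IFNAME attribute.

#include <errno.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/if_link.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>

static int del_link_by_name(const char *name) {
        struct {
                struct nlmsghdr nh;
                struct ifinfomsg ifi;
                char attrbuf[64];
        } req;
        struct rtattr *rta;
        size_t l = strlen(name) + 1;
        int fd, r = 0;

        if (RTA_LENGTH(l) > sizeof(req.attrbuf))
                return -EINVAL;

        memset(&req, 0, sizeof(req));
        req.nh.nlmsg_len = NLMSG_LENGTH(sizeof(struct ifinfomsg));
        req.nh.nlmsg_type = RTM_DELLINK;
        req.nh.nlmsg_flags = NLM_F_REQUEST | NLM_F_ACK;
        req.ifi.ifi_family = AF_UNSPEC;
        req.ifi.ifi_index = 0;                  /* no ifindex: match by name */

        rta = (struct rtattr *) ((char *) &req + NLMSG_ALIGN(req.nh.nlmsg_len));
        rta->rta_type = IFLA_IFNAME;
        rta->rta_len = RTA_LENGTH(l);
        memcpy(RTA_DATA(rta), name, l);
        req.nh.nlmsg_len = NLMSG_ALIGN(req.nh.nlmsg_len) + RTA_ALIGN(rta->rta_len);

        fd = socket(AF_NETLINK, SOCK_RAW | SOCK_CLOEXEC, NETLINK_ROUTE);
        if (fd < 0)
                return -errno;
        if (send(fd, &req, req.nh.nlmsg_len, 0) < 0)
                r = -errno;
        /* (a full implementation would also read back the netlink ACK) */
        close(fd);
        return r;
}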
Drop the "read_realtime" parameter. Getting the realtime timestamp from an
entry is cheap, as it is a normal header field, hence let's just get this
unconditionally, and simplify our code a bit.
With this change a new flag SD_JOURNAL_OS_ROOT is introduced. If specified
while opening the journal with the per-directory calls (specifically:
sd_journal_open_directory() and sd_journal_open_directory_fd()), the passed
directory is assumed to be the root directory of an OS tree, and the journal
files are searched for in /var/log/journal and /run/log/journal relative to it.
This is useful to allow usage of sd-journal on file descriptors returned by the
OpenRootDirectory() call of machined.
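For instance, a minimal sketch of opening a container's journal through the
new flag (the path is purely illustrative; fd lifetime handling is reduced to
the error path):

#include <errno.h>
#include <fcntl.h>
#include <unistd.h>
#include <systemd/sd-journal.h>

static int open_container_journal(sd_journal **ret) {
        int fd, r;

        fd = open("/srv/container-root", O_DIRECTORY | O_CLOEXEC);
        if (fd < 0)
                return -errno;

        /* With SD_JOURNAL_OS_ROOT the journal files are searched for in
         * /var/log/journal and /run/log/journal below this directory. */
        r = sd_journal_open_directory_fd(ret, fd, SD_JOURNAL_OS_ROOT);
        if (r < 0)
                close(fd);
        return r;
}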
This new call returns a file descriptor for the root directory of a container.
This file descriptor may then be used to access the rest of the container's
file system, via openat() and similar calls. Since the returned file
descriptor refers to the file system namespace inside the container, it may be
used to access all files of the container exactly the way the container itself
would see them. This is particularly useful for containers run directly from
loopback media, for example via systemd-nspawn's --image= switch. It also
provides access to directories such as a container's /run that are normally
not accessible from outside the container.
This replaces PR #2870.
Fixes: #2870
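A hedged sd-bus sketch of how a client might combine this with the journal
flag introduced above, assuming the call is exposed on the Machine object as
OpenRootDirectory() with an fd ("h") reply (error handling kept minimal):

#include <errno.h>
#include <fcntl.h>
#include <systemd/sd-bus.h>

static int open_machine_root(const char *machine, int *ret_fd) {
        sd_bus *bus = NULL;
        sd_bus_message *m = NULL, *n = NULL;
        const char *path;
        int fd = -1, r;

        r = sd_bus_open_system(&bus);
        if (r < 0)
                return r;

        /* Resolve the machine name to its object path. */
        r = sd_bus_call_method(bus, "org.freedesktop.machine1",
                               "/org/freedesktop/machine1",
                               "org.freedesktop.machine1.Manager",
                               "GetMachine", NULL, &m, "s", machine);
        if (r >= 0)
                r = sd_bus_message_read(m, "o", &path);

        /* Ask the machine object for its root directory fd. */
        if (r >= 0)
                r = sd_bus_call_method(bus, "org.freedesktop.machine1", path,
                                       "org.freedesktop.machine1.Machine",
                                       "OpenRootDirectory", NULL, &n, NULL);
        if (r >= 0)
                r = sd_bus_message_read(n, "h", &fd);

        /* The fd is owned by the reply message, so duplicate it before the
         * message goes away. */
        if (r >= 0) {
                fd = fcntl(fd, F_DUPFD_CLOEXEC, 3);
                if (fd < 0)
                        r = -errno;
                else
                        *ret_fd = fd;
        }

        sd_bus_message_unref(n);
        sd_bus_message_unref(m);
        sd_bus_unref(bus);
        return r;
}

The resulting fd could then be handed to sd_journal_open_directory_fd() with
SD_JOURNAL_OS_ROOT, as described above.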
Also, expose this via the "journalctl --file=-" syntax for STDIN. This feature
remains undocumented though, as it is probably not too useful in real life: it
still requires fds that support mmapping and seeking, i.e. it does not work
for pipes, which is what reading from STDIN is most commonly used for.
Let's output the actual error code encountered, and let's not claim this was
purely triggered by files, because it can also be triggered by directories.
This is slightly nicer, since we actually watch the directories we open and
enumerate. However, primarily this is preparation for support for opening
journal files by fd without specifying any path, to be added in a later
commit.
We are not able to add multiple properties.
wlp3s0.network:

[Match]
Name=wlp3s0

[Route]
Gateway=10.68.5.26
Metric=10

sudo ./systemd-networkd
Failed to parse file '/usr/lib/systemd/network/wlp3s0.network': File exists
Could not load configuration files: File exists
This patch fixes it.
While the function journal-remote-parse.c:get_line() enforces an assertion
that source->filled <= source->size, in journal-remote-parse.c:process_source()
there is a chance that source->size is decreased below source->filled when
source->buf is reallocated. Therefore, a check is added that ensures that
source->buf is only reallocated when source->filled is smaller than target / 2.
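A toy model of the added guard (hypothetical code, not the actual
journal-remote implementation) to make the invariant explicit:

#include <stdlib.h>

struct source {
        char *buf;
        size_t size;            /* allocated bytes */
        size_t filled;          /* bytes currently buffered; filled <= size */
};

/* Shrink the buffer to "target" bytes only if the data still buffered is
 * well below it, so "size" can never end up smaller than "filled". */
static void maybe_shrink(struct source *s, size_t target) {
        char *p;

        if (s->size <= target)
                return;                 /* nothing to shrink */
        if (s->filled >= target / 2)
                return;                 /* too much buffered; keep the larger buffer */

        p = realloc(s->buf, target);
        if (!p)
                return;                 /* shrinking failed; keep the old buffer */

        s->buf = p;
        s->size = target;
}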
Let's be more careful when setting up the Slice= property of transient units:
let's use manager_load_unit_prepare() instead of manager_load_unit(), so that
the load queue isn't dispatched right away, because our own transient unit is
in it, and we don't want to have it loaded until we have finished initializing
it.
That way we can be sure that local users are logged out before the network is
shut down when the system goes down, so that SSH sessions can end cleanly.
Fixes: #2390