* sd-journal: detect earlier if we try to read an object from an invalid offset
Specifically, detect early if we try to read from offset 0, i.e. are using
uninitialized offset data.
* journal: when dumping journal contents, react more gracefully to lines we can't read
If journal files are not cleanly closed, it might happen that intermediary
journal entries cannot be read. Handle this gracefully: skip over the unreadable
entries and log a debug message about them; after all, we generally follow the
logic that we try to make the best of corrupted files.
* journal-file: always generate the same error when encountering corrupted files
Let's make sure EBADMSG is the one error we throw when we encounter corrupted
data, so that we can neatly test for it.
* journal-file: when iterating through a partly corrupted journal file, treat errors like EOF
When we linearly iterate through a corrupted journal file and we encounter a
read error, don't consider this fatal, but merely treat it as an EOF condition
(and log about it).
* journal-file: make seeking in corrupted files work
Previously, when we used a bisection table for seeking through a corrupted
file and the end of the bisection table was corrupted, we'd most likely fail
the entire seek operation. Improve the situation: if we encounter invalid
entries in a bisection table, linearly go backwards until we find a working
entry again.
* man: elaborate on the automatic systemd-journald.socket service dependencies
Fixes: #1603
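To illustrate what this more tolerant behavior means for consumers of the
library, here is a rough sketch against the public sd-journal API; it assumes,
as the entries above describe, that a read error hit during linear iteration
now surfaces as end-of-journal rather than as a hard failure:

    #include <stdio.h>
    #include <systemd/sd-journal.h>

    int main(void) {
            sd_journal *j;
            int r;

            if (sd_journal_open(&j, SD_JOURNAL_LOCAL_ONLY) < 0)
                    return 1;

            /* With the changes above, hitting a corrupted region while
             * iterating behaves like reaching the end of the journal,
             * instead of aborting the whole dump. */
            while ((r = sd_journal_next(j)) > 0) {
                    const void *d;
                    size_t l;

                    if (sd_journal_get_data(j, "MESSAGE", &d, &l) >= 0)
                            printf("%.*s\n", (int) l, (const char *) d);
            }

            sd_journal_close(j);
            return r < 0;
    }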
This way the user service will have a loginuid, and it will be inherited by
child services. This shouldn't change anything as far as systemd itself is
concerned, but is nice for various services spawned by systemd --user
that expect a loginuid.
pam_loginuid(8) says that it should be enabled for "..., crond and atd".
user@.service should behave similarly to those two as far as audit is
concerned.
https://bugzilla.redhat.com/show_bug.cgi?id=1328947#c28
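For illustration only, the corresponding line in the PAM stack used by
user@.service (the systemd-user PAM service file) would look roughly like
this; the exact contents shipped by distributions vary:

    # /etc/pam.d/systemd-user (illustrative excerpt)
    session  required  pam_loginuid.so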
Container images from Debian or suchlike contain device nodes in /dev. Let's
make sure we can clone them properly, hence pass CAP_MKNOD to machined.
Fixes: #2867, #465
Fixes: #2060
(Of course, in the long run, we should probably add a copy-based fall-back. But
given how slow that is, this probably requires some asynchronous forking logic
like the CopyFrom() and CopyTo() method calls already implement.)
The "resources" error is really just the generic error we return when
we hit some kind of error and we have no more appropriate error for the case to
return, for example because of some OS error.
Hence, reword the explanation and don't claim any relation to resource limits.
Admittedly, the "resources" service error is a bit of a misnomer, but I figure
it's kind of API now.
Fixes: #2716
Early in journal_file_set_offline(), f->header->state is tested to see if
it's != STATE_ONLINE, and since there's no need to do anything if the
journal isn't online, the function simply returned at that point.
Since moving part of the offlining process to a separate thread, there
are two problems here:
1. We can't simply check f->header->state, because if there is an
offline thread active it may modify f->header->state.
2. Even if the journal is deemed offline, the thread responsible may
still need joining, so a bare return may leak the thread's resources
like its stack.
To address #1, the helper journal_file_is_offlining() is called prior to
accessing f->header->state.
If journal_file_is_offlining() returns true, f->header->state isn't even
checked, because an offlining journal is obviously online, and we'll
just continue with the normal set offline code path.
If journal_file_is_offlining() returns false, then it's safe to check
f->header->state, because the offline_state is beyond the point of
modifying f->header->state, and there's a memory barrier in the helper.
If we find f->header->state is != STATE_ONLINE, then we call the
idempotent journal_file_set_offline_thread_join() on the way out of the
function, to join a potential lingering offline thread.
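The resulting early-return logic can be summarized with the following minimal,
self-contained sketch. The types are simplified stand-ins and
journal_file_set_offline_slow() is a hypothetical placeholder for the rest of
the offlining path; only the helpers named above correspond to the description:

    #include <stdbool.h>

    typedef enum State { STATE_OFFLINE, STATE_ONLINE, STATE_ARCHIVED } State;

    typedef struct JournalFile {
            struct { State state; } *header;
            /* ... offline_state, offline thread handle, ... */
    } JournalFile;

    /* Helpers as described above (declarations only in this sketch). */
    bool journal_file_is_offlining(JournalFile *f);
    int journal_file_set_offline_thread_join(JournalFile *f);
    int journal_file_set_offline_slow(JournalFile *f); /* hypothetical: the actual offlining work */

    int journal_file_set_offline(JournalFile *f) {
            /* Only look at f->header->state if no offline thread may be
             * modifying it concurrently. */
            if (!journal_file_is_offlining(f) &&
                f->header->state != STATE_ONLINE)
                    /* Nothing to offline, but a finished offline thread may
                     * still need joining so its stack is not leaked. */
                    return journal_file_set_offline_thread_join(f);

            /* Otherwise continue with the normal set-offline code path. */
            return journal_file_set_offline_slow(f);
    }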
I am pretty sure we shouldn't carry history sections in man pages, since it's
very hard to keep them correctly updated, the current ones are very
out-of-date, and they tend to make APIs appear unnecessarily complex.
This way, the switch becomes compatible with nspawn containers using --image=,
and with those which only store journal data in /run (i.e. have persistent logs
turned off).
Fixes: #49
When appending to a journal file, journald will:
a) first, append the actual entry to the end of the journal file
b) second, add an offset reference to it to the global entry array stored at
the beginning of the file
c) third, add offset references to it to the per-field entry array stored at
various places of the file
The global entry array, maintained by b) is used when iterating through the
journal without matches applied.
The per-field entry array maintained by c) is used when iterating through the
journal with a match for that specific field applied.
In the wild, there are journal files where a) and b) were completed, but c)
was not before the files were abandoned. This means that in some cases log
entries at the end of these files appear in the global entry array,
but not in the per-field entry array of the _BOOT_ID= field. Now, the
"journalctl --list-boots" command alternates between the global entry array
and the per-field entry array of the _BOOT_ID= field. It seeks to the last
entry of a specific _BOOT_ID= by having the right match installed, and
then jumps to the next following entry with no match installed anymore, under
the assumption that this would bring it to the next boot ID. However, if the
per-field entry array wasn't written fully, it might turn out that the
global entry array knows one more entry with the same _BOOT_ID=, thus
resulting in an endless loop around the same _BOOT_ID=.
This patch fixes that by updating the boot search logic to always continue
reading entries until the boot ID actually changes from the previous one. Thus,
the per-field entry array is used as a quick jump index (i.e. as an
optimization), but is not trusted otherwise. Only the global entry array is
trusted.
This replaces PR #1904, which is very similar to this one. However, this one
reads the boot ID directly from the entry header, and doesn't try to read it
at all until the read pointer is actually located on the first item to read.
Fixes: #617
Replaces: #1904
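As a rough illustration of the "only trust the boot ID itself" idea, here is a
simplified scan using the public sd-journal API. It omits the match-based jump
optimization and simply reports a boot whenever the boot ID of the current
entry differs from the previous one:

    #include <stdio.h>
    #include <systemd/sd-id128.h>
    #include <systemd/sd-journal.h>

    int main(void) {
            sd_journal *j;
            sd_id128_t current = SD_ID128_NULL;
            int r;

            if (sd_journal_open(&j, SD_JOURNAL_LOCAL_ONLY) < 0)
                    return 1;

            while ((r = sd_journal_next(j)) > 0) {
                    sd_id128_t boot_id;
                    uint64_t t;
                    char s[SD_ID128_STRING_MAX];

                    if (sd_journal_get_monotonic_usec(j, &t, &boot_id) < 0)
                            continue; /* skip entries we cannot read */

                    /* Only report a new boot when the ID really changed,
                     * instead of trusting the per-field entry array. */
                    if (!sd_id128_equal(boot_id, current)) {
                            current = boot_id;
                            printf("boot %s\n", sd_id128_to_string(boot_id, s));
                    }
            }

            sd_journal_close(j);
            return r < 0;
    }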
Show the various timestamps in hexadecimal too. This is useful for matching the
timestamps included in cursor strings (which are encoded in hex, too) with the
references in the journal header.
* sd-netlink: permit RTM_DELLINK messages with no ifindex
This is useful for removing network interfaces by name.
* nspawn: explicitly remove veth links we created after use
Sometimes the kernel keeps veth links pinned after the namespace they have been
joined to has died. Let's hence explicitly remove veth links after use.
Fixes: #2173
Drop the "read_realtime" parameter. Getting the realtime timestamp from an
entry is cheap, as it is a normal header field, hence let's just get this
unconditionally, and simplify our code a bit.
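For reference, this is what fetching the realtime timestamp per entry looks
like with the public API; since the value sits in the entry header, doing it
unconditionally is cheap:

    #include <stdio.h>
    #include <systemd/sd-journal.h>

    int main(void) {
            sd_journal *j;
            uint64_t usec;

            if (sd_journal_open(&j, SD_JOURNAL_LOCAL_ONLY) < 0)
                    return 1;

            SD_JOURNAL_FOREACH(j)
                    /* Reading the realtime timestamp needs no payload access. */
                    if (sd_journal_get_realtime_usec(j, &usec) >= 0)
                            printf("%llu\n", (unsigned long long) usec);

            sd_journal_close(j);
            return 0;
    }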
With this change a new flag SD_JOURNAL_OS_ROOT is introduced. If it is specified
while opening the journal with the per-directory calls (specifically:
sd_journal_open_directory() and sd_journal_open_directory_fd()), the passed
directory is assumed to be the root directory of an OS tree, and the journal
files are searched for in /var/log/journal and /run/log/journal relative to it.
This is useful to allow usage of sd-journal on file descriptors returned by the
OpenRootDirectory() call of machined.
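A minimal usage sketch, with the path below standing in for whatever OS tree or
mounted image you want to inspect:

    #include <systemd/sd-journal.h>

    int main(void) {
            sd_journal *j;
            int r;

            /* "/var/lib/machines/foo" is a placeholder for an OS tree; with
             * SD_JOURNAL_OS_ROOT the journal files are looked up in
             * var/log/journal and run/log/journal below it. */
            r = sd_journal_open_directory(&j, "/var/lib/machines/foo",
                                          SD_JOURNAL_OS_ROOT);
            if (r < 0)
                    return 1;

            /* ... iterate with sd_journal_next() as usual ... */

            sd_journal_close(j);
            return 0;
    }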
This new call returns a file descriptor for the root directory of a container.
This file descriptor may then be used to access the rest of the container's
file system, via openat() and similar calls. Since the file descriptor returned
is for the file system namespace inside of the container it may be used to
access all files of the container exactly the way the container itself would
see them. This is particularly useful for containers run directly from
loopback media, for example via systemd-nspawn's --image= switch. It also
provides access to directories such as /run of a container that are normally
not accessible from outside the container.
This replaces PR #2870.
Fixes: #2870
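A hedged sketch of how a client might obtain and use such a file descriptor
over the bus; it assumes the Manager-level convenience method
OpenMachineRootDirectory() (the per-machine object exposes the
OpenRootDirectory() call mentioned above), and "mycontainer" is a placeholder
machine name:

    #include <fcntl.h>
    #include <unistd.h>
    #include <systemd/sd-bus.h>

    int main(void) {
            sd_bus_error error = SD_BUS_ERROR_NULL;
            sd_bus_message *reply = NULL;
            sd_bus *bus = NULL;
            int r, fd, copy = -1;

            r = sd_bus_open_system(&bus);
            if (r < 0)
                    return 1;

            r = sd_bus_call_method(bus,
                                   "org.freedesktop.machine1",
                                   "/org/freedesktop/machine1",
                                   "org.freedesktop.machine1.Manager",
                                   "OpenMachineRootDirectory",
                                   &error, &reply, "s", "mycontainer");
            if (r >= 0)
                    r = sd_bus_message_read(reply, "h", &fd);

            if (r >= 0) {
                    /* The fd is owned by the reply; duplicate it to keep it. */
                    copy = fcntl(fd, F_DUPFD_CLOEXEC, 3);

                    /* Access the container's file system exactly as the
                     * container itself would see it. */
                    if (copy >= 0) {
                            int etc = openat(copy, "etc/os-release",
                                             O_RDONLY | O_CLOEXEC);
                            if (etc >= 0)
                                    close(etc);
                            close(copy);
                    }
            }

            sd_bus_error_free(&error);
            sd_bus_message_unref(reply);
            sd_bus_unref(bus);
            return r < 0;
    }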
Also, expose this via the "journalctl --file=-" syntax for STDIN. This feature
remains undocumented though, as it is probably not too useful in real life:
it still requires fds that support mmap() and seeking, i.e. it does not work
for pipes, which is what reading from STDIN is most commonly used for.
In case a file is on a networked filesystem, we may tag the fstab record with
the _netdev option; correct dependencies will nevertheless be created for this
mount. Dependencies were not created for _netdev mountpoints; the reasoning for
this is in commit fc676b00, i.e. to avoid adding dependencies for network
mountpoints where What= merely looks like a path. Thus we propose this
semantically more correct condition, under which dependencies are added for
_actual_ bind mounts irrespective of the network flag.
Consequently, it allows adding the _netdev option to bind mounts, which
includes them in remote-fs.target and simplifies configuration.
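A hypothetical fstab excerpt illustrating the result; the hosts and paths are
made up:

    # The source of the bind mount lives on NFS, so both records carry
    # _netdev; the bind mount is now ordered into remote-fs.target as well.
    nfs.example.com:/export  /srv/export  nfs   defaults,_netdev  0 0
    /srv/export/data         /var/data    none  bind,_netdev      0 0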
This should allow tools like rkt to pre-mount read-only subtrees in the OS
tree, without breaking the patching code.
Note that the code will still fail if the top-level directory is already
read-only.
In order to implement this we change the bool arg_userns into an enum
UserNamespaceMode, which can take one of NO, PICK or FIXED, and replace the
arg_uid_range_pick bool with it.
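A minimal sketch of what such an enum might look like; the identifiers are
inferred from the description above and may not match the actual nspawn
sources exactly:

    typedef enum UserNamespaceMode {
            USER_NAMESPACE_NO,     /* user namespacing disabled               */
            USER_NAMESPACE_FIXED,  /* --private-users=UID with a fixed range  */
            USER_NAMESPACE_PICK,   /* --private-users=pick, range is chosen   */
    } UserNamespaceMode;

    UserNamespaceMode arg_userns_mode = USER_NAMESPACE_NO;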
This adds the new value "pick" to --private-users=. When it is specified, a new
UID/GID range of 65536 users is automatically and randomly allocated from the
host range 0x00080000-0xDFFF0000 and used for the container. The setting
implies --private-users-chown, so that the container directory is recursively
chown()ed to the newly allocated UID/GID range, if that's necessary. As an
optimization, before picking a randomized UID/GID, the UID of the container's
root directory is tried first and used if it is not otherwise in use already.
To protect against using the same UID/GID range multiple times, a few
mechanisms are in place (a simplified sketch of the picking logic follows this
list):
- The first and the last UID and GID of the range are checked with getpwuid()
and getgrgid(). If an entry already exists, a different range is picked. Note
that as the "last" UID the one at offset 65534 is used, as 65535 is the 16-bit
(uid_t) -1.
- A lock file for the range is taken in /run/systemd/nspawn-uid/. Since the
ranges are taken in a non-overlapping fashion and always start on 64K
boundaries, this allows us to maintain a single lock file for each range that
can be randomly picked. This protects nspawn from picking the same range in
two parallel instances.
- If possible, the /etc/passwd lock file is held from the time a new range is
selected until the container is up. This means adduser/addgroup should safely
avoid the range as long as nss-mymachines is used, since the allocated range
will then show up in the user database.
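The following self-contained sketch illustrates the picking idea only (random
64K-aligned base, getpwuid()/getgrgid() collision check); it is not the nspawn
implementation and it leaves out the lock files and the /etc/passwd lock
described above:

    #include <grp.h>
    #include <pwd.h>
    #include <stdlib.h>
    #include <sys/types.h>

    #define UID_PICK_MIN ((uid_t) 0x00080000U)
    #define UID_PICK_MAX ((uid_t) 0xDFFF0000U)

    static int range_is_free(uid_t base) {
            /* Check the first and the "last" UID/GID of the 64K range;
             * offset 65534 is used, as 65535 is the 16-bit (uid_t) -1. */
            return !getpwuid(base) && !getgrgid(base) &&
                   !getpwuid(base + 65534) && !getgrgid(base + 65534);
    }

    uid_t pick_uid_range(void) {
            unsigned n_slots = (UID_PICK_MAX - UID_PICK_MIN) / 0x10000U;

            for (unsigned i = 0; i < 100; i++) {
                    uid_t base = UID_PICK_MIN +
                                 (uid_t) (random() % n_slots) * 0x10000U;

                    /* The real code would additionally take a lock file in
                     * /run/systemd/nspawn-uid/ before settling on "base". */
                    if (range_is_free(base))
                            return base;
            }

            return (uid_t) -1; /* too many collisions, give up */
    }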
The UID/GID range nspawn picks from is compiled in and not configurable at the
moment. That should probably stay that way, since we already provide ways for
users to pick their own ranges manually if they don't like the automatic
logic.
The new --private-users=pick logic makes user namespacing pretty useful now, as
it relieves the user from managing UID/GID ranges.
This adds a new --private-users-chown switch that may be used in combination
with --private-users. If it is passed, a recursive chown() operation is run on
the OS tree, fixing all file owner UID/GIDs to the right ranges. This should
make user namespacing pretty workable, as the OS trees don't need to be
prepared manually anymore.
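A hedged sketch of what such a recursive ownership fix-up can look like in
principle (using nftw(); not the actual nspawn code, which integrates with the
UID range allocation above):

    #define _XOPEN_SOURCE 500
    #include <ftw.h>
    #include <unistd.h>

    static uid_t shift_base;

    static int shift_one(const char *path, const struct stat *st,
                         int type, struct FTW *ftw) {
            (void) type;
            (void) ftw;

            /* Map the original owner into the container's 64K range,
             * keeping the low 16 bits of UID and GID. */
            return lchown(path,
                          shift_base | (st->st_uid & 0xFFFF),
                          shift_base | (st->st_gid & 0xFFFF)) < 0 ? -1 : 0;
    }

    int patch_tree_owners(const char *root, uid_t base) {
            shift_base = base;
            /* FTW_PHYS: do not follow symlinks while walking the tree. */
            return nftw(root, shift_one, 64, FTW_PHYS);
    }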
In nspawn we invoke copy_bytes() on a TTY fd. copy_file_range() returns EBADF
on a TTY, and this error has so far been considered fatal by copy_bytes().
Correct that, so that nspawn's copy_bytes() operation works again.
This is a follow-up for a44202e98b.
Let's output the actual error code encountered, and let's not claim this was
purely triggered by files, because it can also be triggered by directories.