That way we can pin a specific inode and analyze it and manipulate it
without it being swapped out beneath our hands.
Fixes a vulnerability originally found by Jann Horn from Google.
CVE-2018-15687
LP: #1796692
https://bugzilla.redhat.com/show_bug.cgi?id=1639076
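For illustration, here is a minimal C sketch of the O_PATH pinning technique described above; the helper name chown_pinned() and the exact checks are hypothetical, not the actual patch:

    #define _GNU_SOURCE
    #include <errno.h>
    #include <fcntl.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Hypothetical helper: pin the inode via an O_PATH fd, inspect it with
     * fstat(), and only then chown it through the fd, so that a racing
     * rename or symlink swap cannot redirect the operation. */
    static int chown_pinned(int dir_fd, const char *name, uid_t uid, gid_t gid) {
            struct stat st;
            int fd, r = 0;

            fd = openat(dir_fd, name, O_PATH|O_NOFOLLOW|O_CLOEXEC);
            if (fd < 0)
                    return -errno;

            if (fstat(fd, &st) < 0)
                    r = -errno;
            else if (S_ISLNK(st.st_mode))
                    r = -ELOOP;   /* we pinned a symlink inode, refuse */
            else if (fchownat(fd, "", uid, gid, AT_EMPTY_PATH) < 0)
                    r = -errno;   /* AT_EMPTY_PATH operates on the pinned fd itself */

            close(fd);
            return r;
    }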
When we start a service process we pass the selected watchdog timeout to
it with the $WATCHDOG_USEC environment variable. If the unit file is
reconfigured later, we need to make sure to continue to honour the
original timeout, i.e. what $WATCHDOG_USEC was set to; otherwise we'll
expect the ping at a different time than the service process sends it.
Hence, whenever we start a unit, save the watchdog timeout, and stick to
that for everything we do.
Fixes: #9467
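For context, the service side of this contract can be sketched with the public sd_watchdog_enabled() and sd_notify() APIs from sd-daemon.h; the loop structure is illustrative only:

    #include <systemd/sd-daemon.h>
    #include <stdint.h>
    #include <time.h>

    int main(void) {
            uint64_t usec = 0;

            /* Reads $WATCHDOG_USEC and $WATCHDOG_PID; returns > 0 if a
             * watchdog is armed for this process. */
            if (sd_watchdog_enabled(0, &usec) <= 0)
                    return 0;

            for (;;) {
                    /* ... do one unit of real work here ... */

                    sd_notify(0, "WATCHDOG=1");   /* ping the manager */

                    /* Conventionally ping at half the timeout. */
                    struct timespec ts = {
                            .tv_sec  = (time_t) (usec / 2 / 1000000),
                            .tv_nsec = (long) (usec / 2 % 1000000) * 1000,
                    };
                    nanosleep(&ts, NULL);
            }
    }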
Let's make sure we always use the right watchdog timeout: when a service
has overridden it, then stick to that, also for follow-up processes of
the same service.
This was mostly prompted by seeing the expression "in_initrd() && flags
& PROC_CMDLINE_RD_STRICT", which uses & and && without any brackets.
Let's make that a bit more readable and remove any doubt about operator
precedence.
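As a sketch, this is the bracketed form of the quoted expression; the flag value and the in_initrd() stub are made up for the example:

    #include <stdbool.h>

    #define PROC_CMDLINE_RD_STRICT (1u << 0)      /* bit value made up here */

    static bool in_initrd(void) { return true; }  /* stub for the sketch */

    static bool use_rd_option(unsigned flags) {
            /* & binds tighter than &&, so this is what the unbracketed
             * expression already meant; the parentheses just say so out loud. */
            return in_initrd() && (flags & PROC_CMDLINE_RD_STRICT);
    }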
Let's be more careful with what we serialize: let's ensure we never
serialize strings that are longer than LONG_LINE_MAX, so that we know we
can read them back with read_line(…, LONG_LINE_MAX, …) safely.
In order to implement this, all serialization functions are moved to
serialize.[ch], and internally will do line size checks. We'd rather
skip a serialization line (with a loud warning) than write an overly
long line out. Of course, this is just a second-level protection; after
all, the data we serialize shouldn't be this long in the first place.
While we are at it, also clean up logging: while serializing, make sure
to always log about errors immediately. Also, (void)ify all calls we
don't expect errors in (or catch errors as part of the general
fflush_and_check() at the end).
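A simplified sketch of the size check described above; the function name follows serialize.[ch], but the body, the limit value and the return convention here are illustrative:

    #include <stdio.h>
    #include <string.h>

    #define LONG_LINE_MAX (1024u*1024u)

    static int serialize_item(FILE *f, const char *key, const char *value) {
            if (!value)
                    return 0;

            /* Refuse to emit a line we could not read back with
             * read_line(…, LONG_LINE_MAX, …): skipping with a loud warning
             * beats corrupting the deserialization stream. */
            if (strlen(key) + 1 + strlen(value) + 1 > LONG_LINE_MAX) {
                    fprintf(stderr, "Attempted to serialize overly long item '%s', skipping.\n", key);
                    return 0;
            }

            fprintf(f, "%s=%s\n", key, value);
            return 1;
    }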
The predicate function manager_timestamp_shall_serialize() simply says
whether a given timestamp shall be serialized or not, and should make
things a bit easier to read.
This should be much better than fgets(), as we can read substantially
longer lines and overly long lines result in proper errors.
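The intended usage shape, assuming systemd's internal read_line() helper from src/basic/fileio.h; the deserialize_one() wrapper is hypothetical:

    #include <stdio.h>
    #include <stdlib.h>

    #define LONG_LINE_MAX (1024u*1024u)

    /* Prototype of systemd's internal helper (src/basic/fileio.h): returns
     * bytes read, 0 on EOF, or a negative errno, including -ENOBUFS when
     * the line exceeds the limit, rather than silently truncating like
     * fgets() would. */
    int read_line(FILE *f, size_t limit, char **ret);

    static int deserialize_one(FILE *f) {
            char *line = NULL;
            int r;

            r = read_line(f, LONG_LINE_MAX, &line);
            if (r < 0)
                    return r;   /* hard failure, or overlong line (-ENOBUFS) */
            if (r == 0)
                    return 0;   /* clean EOF */

            /* ... parse "KEY=VALUE" from line here ... */

            free(line);
            return 1;
    }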
Fixes a vulnerability discovered by Jann Horn at Google.
CVE-2018-15686
LP: #1796402
https://bugzilla.redhat.com/show_bug.cgi?id=1639071
In the default journalctl output, unprintable entries are abbreviated as
“[<amount> blob data]”; using the same term in the documentation helps
users to quickly discover the option they need to add in order to see
those entries.
"killing" is very UNIX terminology, and not really what this is about.
Let's be more correct and say "send a UNIX signal" for the operation.
Otherwise things are really weird if users call "journalctl --rotate"
from the command line, which internally asks systemd to send SIGUSR2 to
journald: when the German locale is selected this asks the user — roughly
transliterated — whether they want to "eliminate" journald, which is
definitely not the intended meaning.
Before this when asked for rotation we'd only rotate files we have open
anyway. However there might be a number of other files on disk that are
active (i.e. not archived yet) but not open. Let's take care of those
too, so that rotation is always comprehensive, and the user gets the
guarantee that after the rotation all stored data is in archived files.
Fixes: #1017
journalctl --vacuum-*= only vacuums archived files. To archive all
active files the rotate operation is used. Let's add a new switch that
combines both, so that the user has a single command to first move all
running journal files into archival and then vacuum them.
See: #1017
Let's split out the part that actually renames the file in case we can't
open it into a new function journal_file_dispose().
This way we can reuse the function in other cases where we want to open
a file but can't.
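A hedged sketch of the dispose idea: rename the file we cannot open out of the way, marking it with a ".journal~" suffix. The exact name format below is made up:

    #define _GNU_SOURCE
    #include <errno.h>
    #include <inttypes.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    static int journal_file_dispose(int dir_fd, const char *fname) {
            const char *e;
            char *p = NULL;
            int r;

            /* Only act on things that look like journal files. */
            e = strrchr(fname, '.');
            if (!e || strcmp(e, ".journal") != 0)
                    return -EINVAL;

            /* Tack a timestamp and a random cookie onto the name so that
             * repeated failures don't collide; the trailing "~" marks the
             * file as disposed. */
            if (asprintf(&p, "%.*s@%016" PRIx64 "-%016" PRIx64 ".journal~",
                         (int) (e - fname), fname,
                         (uint64_t) time(NULL), (uint64_t) random()) < 0)
                    return -ENOMEM;

            r = renameat(dir_fd, fname, dir_fd, p) < 0 ? -errno : 0;
            free(p);
            return r;
    }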
Let's split the function in three: the part where we archive the old
file into journal_file_archive(), and the part where we initiate the
deferred closing into journal_file_initiate_close().
journal_file_rotate() then simply becomes a wrapper around these two
calls, and the opening of the new journal file.
This is useful so that we can archive journal files without having to open
new ones, i.e. to do only the archival part of the rotation, without the
rotation part.
Let's store array sizes and indexes in size_t. And let's count numbers
of files in uint64_t (simply because that is the type of the
corresponding parameter of the function).
journald calls fd_get_path() a lot (it probably shouldn't, there's some
room for improvement there, but I'll leave that for another time), hence
it's worth optimizing the call a bit, in particular as it's easy.
Previously we'd open the dir /proc/self/fd/ first, before reading the
symlink inside it. This means the whole function requires three system
calls: open(), readlinkat(), close(). The reason for doing it this way
is to distinguish the case when we see ENOENT because /proc is not
mounted and the case when the fd doesn't exist.
With this change we'll directly go for the readlink(), and only if that
fails do an access() to see if /proc is mounted at all.
This optimizes the common case (where the fd is valid and /proc is
mounted) at the expense of the uncommon case (where the fd doesn't exist
or /proc is not mounted).
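A simplified sketch of the optimized lookup; the real code reads the link into a dynamically sized buffer, while this version uses a fixed one:

    #include <errno.h>
    #include <limits.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int fd_get_path(int fd, char **ret) {
            char procfs_path[64], buf[PATH_MAX];
            ssize_t n;

            snprintf(procfs_path, sizeof(procfs_path), "/proc/self/fd/%i", fd);

            /* Common case: one readlink() and we are done. */
            n = readlink(procfs_path, buf, sizeof(buf) - 1);
            if (n >= 0) {
                    buf[n] = 0;
                    *ret = strdup(buf);
                    return *ret ? 0 : -ENOMEM;
            }
            if (errno != ENOENT)
                    return -errno;

            /* ENOENT is ambiguous: the fd may not exist, or /proc may not
             * be mounted. Only now pay for the extra system call. */
            if (access("/proc/self/fd", F_OK) < 0)
                    return -ENOSYS;   /* /proc is not available */

            return -EBADF;            /* the fd really does not exist */
    }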