We introduce two new types of benchmarks in chart-mode:
- 'legacy' connects using the session bus
- 'direct' connects using a peer-to-peer socket
We should probably also introduce a mode for testing the dbus1-kdbus proxy.
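As an illustration only (this is not the benchmark code itself), a minimal
sd-bus sketch of the two connection styles; the socket path used for the
'direct' case is made up for the example:

#include <stdio.h>
#include <string.h>
#include <systemd/sd-bus.h>

int main(void) {
        sd_bus *legacy = NULL, *direct = NULL;
        int r;

        /* 'legacy': talk through the session (user) bus daemon */
        r = sd_bus_open_user(&legacy);
        if (r < 0) {
                fprintf(stderr, "Failed to connect to session bus: %s\n", strerror(-r));
                return 1;
        }

        /* 'direct': peer-to-peer connection to a socket, no bus daemon involved */
        r = sd_bus_new(&direct);
        if (r >= 0)
                r = sd_bus_set_address(direct, "unix:path=/run/example/peer.socket");
        if (r >= 0)
                r = sd_bus_start(direct);
        if (r < 0)
                fprintf(stderr, "Failed to set up direct connection: %s\n", strerror(-r));

        sd_bus_unref(direct);
        sd_bus_unref(legacy);
        return r < 0 ? 1 : 0;
}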
If a unit is stopped for a moment, we need to invalidate our knowledge
of it; otherwise we might be confused by automatic restarts.
This makes reboots work correctly for nspawn containers run as a service.
https://bugs.freedesktop.org/show_bug.cgi?id=87428
For a longer discussion see this:
http://lists.freedesktop.org/archives/systemd-devel/2015-April/030175.html
This introduces /run/systemd/fsck.progress as a simple
AF_UNIX/SOCK_STREAM socket. If it exists and is connectable we'll
connect fsck's -C switch with it. External programs that want
progress data should hence listen on this socket; they will receive
everything they need via it. To get information about the connecting
fsck client they should use SO_PEERCRED.
If /run/systemd/fsck.progress is not around or not connectable, this
change falls back to the v219 behaviour where we forward fsck output
to /dev/console on our own.
Even trivial services occasionally get stuck, for example when
there's a problem with the journal. There's nothing more annoying
than looking at the cylon eye for a job with an infinite timeout.
Use the standard 90s for jobs that do some work, and 30s for those
which should be almost instantaneous.
Like the T440s, these need the sensitivity to be set significantly
higher than the default of 128 for the trackpoint to be usable. As
with the T440s, 200 seems to be a good value to get a reasonable but
not too high sensitivity.
Otherwise it might happen that, by the time PID 1 adds our process to
the scope unit, the process has already died, if the process is
short-running (such as an invocation of /bin/true).
https://bugs.freedesktop.org/show_bug.cgi?id=86520
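The actual change lives in the systemd-run/PID 1 code path; purely as a sketch
of the ordering described above (scope name and properties are illustrative),
the idea is to subscribe to JobRemoved before calling StartTransientUnit and to
let the short-lived payload run only once the start job has finished:

#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <systemd/sd-bus.h>

static int job_removed(sd_bus_message *m, void *userdata, sd_bus_error *ret_error) {
        /* A real implementation would compare the job path returned by
         * StartTransientUnit; for brevity we accept any JobRemoved. */
        int *done = userdata;
        *done = 1;
        return 0;
}

int main(void) {
        sd_bus *bus = NULL;
        sd_bus_error error = SD_BUS_ERROR_NULL;
        int done = 0, r;

        r = sd_bus_open_system(&bus);
        if (r < 0)
                return 1;

        /* PID 1 broadcasts Job* signals only to clients that subscribed */
        sd_bus_call_method(bus, "org.freedesktop.systemd1", "/org/freedesktop/systemd1",
                           "org.freedesktop.systemd1.Manager", "Subscribe", NULL, NULL, "");

        /* Watch for the scope's start job finishing *before* creating it */
        sd_bus_add_match(bus, NULL,
                         "type='signal',sender='org.freedesktop.systemd1',"
                         "interface='org.freedesktop.systemd1.Manager',member='JobRemoved'",
                         job_removed, &done);

        /* Ask PID 1 for a transient scope containing our PID */
        r = sd_bus_call_method(bus, "org.freedesktop.systemd1", "/org/freedesktop/systemd1",
                               "org.freedesktop.systemd1.Manager", "StartTransientUnit",
                               &error, NULL, "ssa(sv)a(sa(sv))",
                               "example.scope", "fail",
                               1, "PIDs", "au", 1, (uint32_t) getpid(),
                               0);
        if (r < 0) {
                fprintf(stderr, "StartTransientUnit failed: %s\n", error.message);
                return 1;
        }

        /* Only run the short-lived work once the start job is gone, i.e. once
         * PID 1 has definitely added our process to the scope. */
        while (!done) {
                r = sd_bus_process(bus, NULL);
                if (r > 0)
                        continue;
                sd_bus_wait(bus, UINT64_MAX);
        }

        sd_bus_unref(bus);
        return 0;
}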
This is yet another attempt to fix coldplugging order (more
specifically, the problem which happens when one creates a job during
coldplugging and it references a not-yet-coldplugged unit).
Now we forcibly coldplug all units which participate in jobs. This
is a superset of previously implemented handling of the UNIT_TRIGGERS
dependencies, so that handling is removed.
http://lists.freedesktop.org/archives/systemd-devel/2015-April/031212.html
https://bugs.freedesktop.org/show_bug.cgi?id=88401 (once again)
If the main daemon is not notified about a worker finishing an event,
the refcounting of the worker struct will be wrong, and we will lose
track of the number of children we have to wait for.
This should not happen, but if it does, we had better complain loudly
about it. In the worst case udev will wait 30 seconds at shutdown for
nonexistent workers.
We shouldn't fail the sysctl service if an option is missing.
Previously the warning about this was already downgraded to LOG_DEBUG,
but we really shouldn't propagate such errors either.
Remove some redundant logging, and reduce the log level in most cases. The only
case that is really critical is a worker failing while handling an event, so
keep that at error level.