systemd/units/user@.service.in


# SPDX-License-Identifier: LGPL-2.1-or-later
#
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.

[Unit]
Description=User Manager for UID %i
Documentation=man:user@.service(5)
After=user-runtime-dir@%i.service dbus.service systemd-oomd.service
Requires=user-runtime-dir@%i.service
IgnoreOnIsolate=yes

[Service]
User=%i
PAMName=systemd-user
Type=notify-reload
ExecStart={{ROOTLIBEXECDIR}}/systemd --user
Slice=user-%i.slice
KillMode=mixed

user: delegate cpu controller, assign weights to user slices

So far we didn't enable the cpu controller because of the overhead of the accounting. If I'm reading things correctly, delegation was enabled for a while for the units with user and pam context set, i.e. for user@.service too. a931ad47a8623163a29d898224d8a8c1177ffdaf added the explicit Delegate=yes|no switch, but it was initially set to 'yes'. acc8059129b38d60c1b923670863137f8ec8f91a disabled delegation for user@.service with the justification that CPU accounting is expensive, but half a year later a88c5b8ac4df713d9831d0073a07fac82e884fb3 changed DefaultCPUAccounting=yes for kernels >= 4.15 with the justification that CPU accounting is inexpensive there.

In my (very noncomprehensive) testing, I don't see a measurable overhead if the cpu controller is enabled for user slices. I tried some repeated compilations, and there was no statistical difference, but the noise level was fairly high. Maybe better benchmarking would reveal a difference.

The goal of this change is very simple: currently all of the user session, including services like the display server and pipewire, is under user@.service. This means that when e.g. a compilation job is started in the session's app.slice, the processes in session.slice compete for CPU and can be starved. In particular, audio starts to stutter, etc. With the CPU controller enabled, I can start 'ninja -C build -j40' in a tab and this doesn't have any noticeable effect on audio.

I don't think the particular values matter too much: the CPU controller is work-conserving, and presumably the session slice would never need more than e.g. one full CPU, i.e. half or a quarter of the available CPU resources on even the smallest of today's machines. app.slice and session.slice are assigned equal weights, and background.slice is assigned a smaller fraction. CPUWeight=100 is the default, but I wrote it out explicitly to make it easier for users to see how the split is done. So effectively this should result in session.slice getting as much power as it needs. (An illustrative sketch of such per-slice weight settings follows the unit file below.)

If it turns out that this does have a noticeable overhead, we could make it opt-in. But I think that the benefit to usability is important enough to enable it by default. Without something like this the session is not really usable with background tasks.

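# The pids, memory and cpu controllers are delegated to the user manager,
# whose processes are placed in the init.scope subgroup of the delegated subtree.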
Delegate=pids memory cpu
DelegateSubgroup=init.scope
TasksMax=infinity
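# Evaluated at build time to 4/3 of the default user timeout
# (for example, 120 s when the default is 90 s).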
TimeoutStopSec={{ DEFAULT_USER_TIMEOUT_SEC*4//3 }}s
KeyringMode=inherit
OOMScoreAdjust=100
MemoryPressureWatch=skip
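
For illustration, the weight split described in the commit message above could be expressed with per-slice drop-ins for the user manager roughly as follows. The [Slice] section and CPUWeight= are real systemd settings and the slice names come from the commit message, but the drop-in paths and the lower weight chosen for background.slice are assumptions for this sketch, not values taken from this repository:

# ~/.config/systemd/user/session.slice.d/50-cpu.conf (illustrative path)
# session.slice holds the display server, pipewire and similar per-session services.
[Slice]
CPUWeight=100

# ~/.config/systemd/user/app.slice.d/50-cpu.conf (illustrative path)
# app.slice holds regular applications, e.g. the terminal running 'ninja -C build -j40'.
[Slice]
CPUWeight=100

# ~/.config/systemd/user/background.slice.d/50-cpu.conf (illustrative path)
# background.slice gets a smaller share for low-priority background work;
# the value 20 is an illustrative assumption.
[Slice]
CPUWeight=20

Because the cpu controller is work-conserving, these weights only matter under contention: with equal weights, session.slice keeps receiving CPU time when a large build saturates app.slice, which is exactly the audio-stutter scenario the commit message describes.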