3fe2f7446f
- Cleanups for SCHED_DEADLINE

- Tracing updates/fixes

- CPU Accounting fixes

- First wave of changes to optimize the overhead of the scheduler build,
  from the fast-headers tree - including placeholder *_api.h headers for
  later header split-ups.

- Preempt-dynamic using static_branch() for ARM64

- Isolation housekeeping mask rework; preparatory for further changes

- NUMA-balancing: deal with CPU-less nodes

- NUMA-balancing: tune systems that have multiple LLC cache domains per
  node (e.g. AMD)

- Updates to RSEQ UAPI in preparation for glibc usage

- Lots of RSEQ/selftests, for same

- Add Suren as PSI co-maintainer

Signed-off-by: Ingo Molnar <mingo@kernel.org>

-----BEGIN PGP SIGNATURE-----

iQJFBAABCgAvFiEEBpT5eoXrXCwVQwEKEnMQ0APhK1gFAmI5rg8RHG1pbmdvQGtl
cm5lbC5vcmcACgkQEnMQ0APhK1hGrw/+M3QOk6fH7G48wjlNnBvcOife6ls+Ni4k
ixOAcF4JKoixO8HieU5vv0A7yf/83tAa6fpeXeMf1hkCGc0NSlmLtuIux+WOmoAL
LzCyDEYfiP8KnVh0A1Tui/lK0+AkGo21O6ADhQE2gh8o2LpslOHQMzvtyekSzeeb
mVxMYQN+QH0m518xdO2D8IQv9ctOYK0eGjmkqdNfntOlytypPZHeNel/tCzwklP/
dElJUjNiSKDlUgTBPtL3DfpoLOI/0mHF2p6NEXvNyULxSOqJTu8pv9Z2ADb2kKo1
0D56iXBDngMi9MHIJLgvzsA8gKzHLFSuPbpODDqkTZCa28vaMB9NYGhJ643NtEie
IXTJEvF1rmNkcLcZlZxo0yjL0fjvPkczjw4Vj27gbrUQeEBfb4mfuI4BRmij63Ep
qEkgQTJhduCqqrQP1rVyhwWZRk1JNcVug+F6N42qWW3fg1xhj0YSrLai2c9nPez6
3Zt98H8YGS1Z/JQomSw48iGXVqfTp/ETI7uU7jqHK8QcjzQ4lFK5H4GZpwuqGBZi
NJJ1l97XMEas+rPHiwMEN7Z1DVhzJLCp8omEj12QU+tGLofxxwAuuOVat3CQWLRk
f80Oya3TLEgd22hGIKDRmHa22vdWnNQyS0S15wJotawBzQf+n3auS9Q3/rh979+t
ES/qvlGxTIs=
=Z8uT
-----END PGP SIGNATURE-----

Merge tag 'sched-core-2022-03-22' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull scheduler updates from Ingo Molnar:

- Cleanups for SCHED_DEADLINE

- Tracing updates/fixes

- CPU Accounting fixes

- First wave of changes to optimize the overhead of the scheduler build,
  from the fast-headers tree - including placeholder *_api.h headers for
  later header split-ups.

- Preempt-dynamic using static_branch() for ARM64

- Isolation housekeeping mask rework; preparatory for further changes

- NUMA-balancing: deal with CPU-less nodes

- NUMA-balancing: tune systems that have multiple LLC cache domains per
  node (e.g. AMD)

- Updates to RSEQ UAPI in preparation for glibc usage

- Lots of RSEQ/selftests, for same

- Add Suren as PSI co-maintainer

* tag 'sched-core-2022-03-22' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (81 commits)
  sched/headers: ARM needs asm/paravirt_api_clock.h too
  sched/numa: Fix boot crash on arm64 systems
  headers/prep: Fix header to build standalone: <linux/psi.h>
  sched/headers: Only include <linux/entry-common.h> when CONFIG_GENERIC_ENTRY=y
  cgroup: Fix suspicious rcu_dereference_check() usage warning
  sched/preempt: Tell about PREEMPT_DYNAMIC on kernel headers
  sched/topology: Remove redundant variable and fix incorrect type in build_sched_domains
  sched/deadline,rt: Remove unused parameter from pick_next_[rt|dl]_entity()
  sched/deadline,rt: Remove unused functions for !CONFIG_SMP
  sched/deadline: Use __node_2_[pdl|dle]() and rb_first_cached() consistently
  sched/deadline: Merge dl_task_can_attach() and dl_cpu_busy()
  sched/deadline: Move bandwidth mgmt and reclaim functions into sched class source file
  sched/deadline: Remove unused def_dl_bandwidth
  sched/tracing: Report TASK_RTLOCK_WAIT tasks as TASK_UNINTERRUPTIBLE
  sched/tracing: Don't re-read p->state when emitting sched_switch event
  sched/rt: Plug rt_mutex_setprio() vs push_rt_task() race
  sched/cpuacct: Remove redundant RCU read lock
  sched/cpuacct: Optimize away RCU read lock
  sched/cpuacct: Fix charge percpu cpuusage
  sched/headers: Reorganize, clean up and optimize kernel/sched/sched.h dependencies
  ...
# SPDX-License-Identifier: GPL-2.0-only

config PREEMPT_NONE_BUILD
        bool

config PREEMPT_VOLUNTARY_BUILD
        bool

config PREEMPT_BUILD
        bool
        select PREEMPTION
        select UNINLINE_SPIN_UNLOCK if !ARCH_INLINE_SPIN_UNLOCK

choice
        prompt "Preemption Model"
        default PREEMPT_NONE

config PREEMPT_NONE
        bool "No Forced Preemption (Server)"
        select PREEMPT_NONE_BUILD if !PREEMPT_DYNAMIC
        help
          This is the traditional Linux preemption model, geared towards
          throughput. It will still provide good latencies most of the
          time, but there are no guarantees and occasional longer delays
          are possible.

          Select this option if you are building a kernel for a server or
          scientific/computation system, or if you want to maximize the
          raw processing power of the kernel, irrespective of scheduling
          latencies.

config PREEMPT_VOLUNTARY
        bool "Voluntary Kernel Preemption (Desktop)"
        depends on !ARCH_NO_PREEMPT
        select PREEMPT_VOLUNTARY_BUILD if !PREEMPT_DYNAMIC
        help
          This option reduces the latency of the kernel by adding more
          "explicit preemption points" to the kernel code. These new
          preemption points have been selected to reduce the maximum
          latency of rescheduling, providing faster application reactions,
          at the cost of slightly lower throughput.

          This allows reaction to interactive events by allowing a
          low priority process to voluntarily preempt itself even if it
          is in kernel mode executing a system call. This allows
          applications to run more 'smoothly' even when the system is
          under load.

          Select this if you are building a kernel for a desktop system.

config PREEMPT
        bool "Preemptible Kernel (Low-Latency Desktop)"
        depends on !ARCH_NO_PREEMPT
        select PREEMPT_BUILD
        help
          This option reduces the latency of the kernel by making
          all kernel code (that is not executing in a critical section)
          preemptible. This allows reaction to interactive events by
          permitting a low priority process to be preempted involuntarily
          even if it is in kernel mode executing a system call and would
          otherwise not be about to reach a natural preemption point.
          This allows applications to run more 'smoothly' even when the
          system is under load, at the cost of slightly lower throughput
          and a slight runtime overhead to kernel code.

          Select this if you are building a kernel for a desktop or
          embedded system with latency requirements in the milliseconds
          range.

config PREEMPT_RT
        bool "Fully Preemptible Kernel (Real-Time)"
        depends on EXPERT && ARCH_SUPPORTS_RT
        select PREEMPTION
        help
          This option turns the kernel into a real-time kernel by replacing
          various locking primitives (spinlocks, rwlocks, etc.) with
          preemptible priority-inheritance aware variants, enforcing
          interrupt threading and introducing mechanisms to break up long
          non-preemptible sections. This makes the kernel, except for very
          low level and critical code paths (entry code, scheduler, low
          level interrupt handling) fully preemptible and brings most
          execution contexts under scheduler control.

          Select this if you are building a kernel for systems which
          require real-time guarantees.

endchoice

config PREEMPT_COUNT
        bool

config PREEMPTION
        bool
        select PREEMPT_COUNT

config PREEMPT_DYNAMIC
        bool "Preemption behaviour defined on boot"
        depends on HAVE_PREEMPT_DYNAMIC && !PREEMPT_RT
        select JUMP_LABEL if HAVE_PREEMPT_DYNAMIC_KEY
        select PREEMPT_BUILD
        default y if HAVE_PREEMPT_DYNAMIC_CALL
        help
          This option allows the preemption model to be selected on the
          kernel command line, overriding the default preemption model
          chosen at compile time.

          The feature is primarily useful for Linux distributions that ship
          a pre-built kernel binary and want to reduce the number of kernel
          flavors they offer while still covering different use cases.

          The runtime overhead is negligible with HAVE_STATIC_CALL_INLINE
          enabled, but if runtime patching is not available for the specific
          architecture the potential overhead should be considered.

          Select this if the same pre-built kernel should be usable for
          both server and desktop workloads.
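
With PREEMPT_DYNAMIC=y the model is chosen at boot with the preempt= command
line parameter (preempt=none, preempt=voluntary or preempt=full). As a rough
illustration -- assuming debugfs is mounted at /sys/kernel/debug and the
kernel exposes the dynamic-preemption knob as /sys/kernel/debug/sched/preempt,
which is not guaranteed on every configuration -- the active model can be read
back from userspace:

/* Sketch: print the currently active dynamic preemption model.
 * Assumes debugfs is mounted at /sys/kernel/debug and that the kernel
 * exposes the PREEMPT_DYNAMIC knob as /sys/kernel/debug/sched/preempt
 * (the selected model is shown in parentheses, e.g. "none voluntary (full)").
 */
#include <stdio.h>

int main(void)
{
        char buf[128];
        FILE *f = fopen("/sys/kernel/debug/sched/preempt", "r");

        if (!f) {
                perror("fopen");
                return 1;
        }
        if (fgets(buf, sizeof(buf), f))
                printf("dynamic preemption model: %s", buf);
        fclose(f);
        return 0;
}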

config SCHED_CORE
        bool "Core Scheduling for SMT"
        depends on SCHED_SMT
        help
          This option permits Core Scheduling, a means of coordinated task
          selection across SMT siblings. When enabled -- see
          prctl(PR_SCHED_CORE) -- task selection ensures that all SMT siblings
          will execute a task from the same 'core group', forcing idle when no
          matching task is found.

          Use of this feature includes:
           - mitigation of some (not all) SMT side channels;
           - limiting SMT interference to improve determinism and/or
             performance.

          SCHED_CORE is default disabled. When it is enabled and unused,
          which is the likely usage by Linux distributions, there should
          be no measurable impact on performance.
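
The prctl(PR_SCHED_CORE) interface mentioned above is how userspace creates
and shares core-scheduling cookies. A minimal sketch, assuming a kernel built
with SCHED_CORE=y and headers that provide the PR_SCHED_CORE constants
(Linux 5.14 or later):

/* Sketch: place the calling process into its own core-scheduling group,
 * so its threads only share SMT siblings with each other.
 * Assumes <linux/prctl.h> defines PR_SCHED_CORE and the related constants
 * and that the running kernel was built with SCHED_CORE=y.
 */
#include <stdio.h>
#include <sys/prctl.h>
#include <linux/prctl.h>

int main(void)
{
        /* Create a new core-scheduling cookie for this whole thread group
         * (pid 0 means the calling task). */
        if (prctl(PR_SCHED_CORE, PR_SCHED_CORE_CREATE, 0,
                  PR_SCHED_CORE_SCOPE_THREAD_GROUP, 0)) {
                perror("prctl(PR_SCHED_CORE)");
                return 1;
        }
        printf("core-scheduling cookie created for this process\n");
        return 0;
}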

config ARCH_WANTS_RT_DELAYED_SIGNALS
        bool
        help
          This option is selected by architectures where raising signals
          can happen in atomic contexts on PREEMPT_RT enabled kernels. This
          option delays raising the signal until the return to user space
          loop where it is also delivered. X86 requires this to deliver
          signals from trap handlers which run on IST stacks.

config RT_DELAYED_SIGNALS
        def_bool PREEMPT_RT && ARCH_WANTS_RT_DELAYED_SIGNALS