The find_busiest_queue() naming has two small quirks:
- Scheduler functions that deal with runqueues usually have a rq_ prefix
or _rq postfix, but this function has neither.
- The 'busiest' qualifier in its name was historically correct,
  but has become somewhat of a misnomer: in quite a few
  cases we will not pick the busiest runqueue - but the best
  (possible) runqueue we can balance tasks from. So name it a
  bit more neutrally, similar to the 'src/dst' nomenclature
  we are already using when moving tasks between runqueues.
To fix both quirks, and to standardize scheduler load-balancing
function names on the sched_balance_() prefix, rename the
function to sched_balance_find_src_rq().
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Shrikanth Hegde <sshegde@linux.ibm.com>
Link: https://lore.kernel.org/r/20240308111819.1101550-7-mingo@kernel.org
Standardize scheduler load-balancing function names on the
sched_balance_() prefix.
Also load_balance() has become somewhat of a misnomer: historically
it was the first and primary load-balancing function that was called,
but with the introduction of sched domains, it's become a lower
layer function that balances runqueues.
Rename it to sched_balance_rq() accordingly.
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Shrikanth Hegde <sshegde@linux.ibm.com>
Link: https://lore.kernel.org/r/20240308111819.1101550-6-mingo@kernel.org
- Standardize on the sched_*() prefix for scheduler-internal
  functions defined in <linux/sched.h>. scheduler_tick() was
  the only function using the scheduler_ prefix; harmonize it.
- The other reason to rename it is that the NOHZ scheduler tick
  handling functions are already named sched_tick_*().
  This makes 'git grep sched_tick' more meaningful.
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Valentin Schneider <vschneid@redhat.com>
Reviewed-by: Shrikanth Hegde <sshegde@linux.ibm.com>
Link: https://lore.kernel.org/r/20240308111819.1101550-3-mingo@kernel.org
run_rebalance_domains() is a misnomer, as it doesn't only
run rebalance_domains(), but since the introduction of the
NOHZ code it also runs nohz_idle_balance().
Rename it to sched_balance_softirq(), reflecting its more
generic purpose and that it's a softirq handler.
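For context, a rough sketch of what the renamed softirq handler does
(simplified from the fair-scheduler code as of this series; treat the
exact signature and call names as approximate):

  static void sched_balance_softirq(struct softirq_action *h)
  {
          struct rq *this_rq = this_rq();
          enum cpu_idle_type idle = this_rq->idle_balance;

          /*
           * First balance on behalf of idle CPUs whose ticks are
           * stopped (the NOHZ side); if that handled everything,
           * we are done:
           */
          if (nohz_idle_balance(this_rq, idle))
                  return;

          /* Then do the regular rebalancing of this CPU's domains: */
          update_blocked_averages(this_rq->cpu);
          rebalance_domains(this_rq, idle);
  }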
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Valentin Schneider <vschneid@redhat.com>
Reviewed-by: Shrikanth Hegde <sshegde@linux.ibm.com>
Link: https://lore.kernel.org/r/20240308111819.1101550-2-mingo@kernel.org
The first sentence of the comment explaining run_rebalance_domains()
is historic and not true anymore:
* run_rebalance_domains is triggered when needed from the scheduler tick.
... contradicted/modified by the second sentence:
* Also triggered for NOHZ idle balancing (with NOHZ_BALANCE_KICK set).
Avoid that kind of confusion straight away and explain from what
places sched_balance_softirq() is triggered.
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Acked-by: Valentin Schneider <vschneid@redhat.com>
Link: https://lore.kernel.org/r/20240308105901.1096078-9-mingo@kernel.org
Fix two typos:
- There's no such thing as 'nohz_balancing_kick'; the
  flag is named 'BALANCE' and is capitalized: NOHZ_BALANCE_KICK.
- Likewise there's no such thing as a 'pending nohz_balance_kick'
  either; the NOHZ_BALANCE_KICK flag is all upper-case.
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Valentin Schneider <vschneid@redhat.com>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Link: https://lore.kernel.org/r/20240308105901.1096078-8-mingo@kernel.org
The scheduler has two comment blocks with '=' used as a double underline:
/*
 * VRUNTIME
 * ========
 *
'========' also happens to be a Git conflict marker, throwing off a simple
search in an editor for this pattern.
Change them to '-------' type of underline instead - it looks just as good.
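With the change applied, the same block reads (reconstructed from the
description above; the comment text itself is unchanged):

  /*
   * VRUNTIME
   * --------
   *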
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Valentin Schneider <vschneid@redhat.com>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Link: https://lore.kernel.org/r/20240308105901.1096078-7-mingo@kernel.org
We changed the order of definitions within 'enum cpu_idle_type',
which changed the order of [CPU_MAX_IDLE_TYPES] columns in
show_schedstat().
Suggested-by: Shrikanth Hegde <sshegde@linux.ibm.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: "Gautham R. Shenoy" <gautham.shenoy@amd.com>
Link: https://lore.kernel.org/r/20240308105901.1096078-5-mingo@kernel.org
The cpu_idle_type enum has the confusingly inverted property
that 'not idle' is 1 and 'idle' is 0.
This resulted in a number of unnecessary complications in the code.
Reverse the order, remove the CPU_NOT_IDLE type, and convert
all code to a natural boolean form.
It's much more readable:
-	enum cpu_idle_type idle = this_rq->idle_balance ?
-				  CPU_IDLE : CPU_NOT_IDLE;
-
+	enum cpu_idle_type idle = this_rq->idle_balance;
--------------------------------
-	if (env->idle == CPU_NOT_IDLE || !busiest->sum_nr_running)
+	if (!env->idle || !busiest->sum_nr_running)
--------------------------------
And gets rid of the double negation in these usages:
-	if (env->idle != CPU_NOT_IDLE && env->src_rq->nr_running <= 1)
+	if (env->idle && env->src_rq->nr_running <= 1)
Furthermore, this makes the code much more obvious in the places
where there's differentiation between CPU_IDLE and CPU_NEWLY_IDLE.
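For reference, a sketch of the resulting definition (reconstructed
from the description above; the exact upstream spelling of the 'not
idle' placeholder is an assumption):

  enum cpu_idle_type {
          __CPU_NOT_IDLE = 0,     /* 'not idle' is plain boolean 0 now */
          CPU_IDLE,
          CPU_NEWLY_IDLE,
          CPU_MAX_IDLE_TYPES
  };

With this, env->idle can be tested as a boolean: zero means 'not
idle', while the non-zero values still distinguish CPU_IDLE from
CPU_NEWLY_IDLE where needed.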
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Valentin Schneider <vschneid@redhat.com>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Cc: "Gautham R. Shenoy" <gautham.shenoy@amd.com>
Link: https://lore.kernel.org/r/20240308105901.1096078-4-mingo@kernel.org
show_schedstat() output breaks and doesn't print all entries
if the ordering of the definitions in 'enum cpu_idle_type' is changed,
because show_schedstat() assumes that 'CPU_IDLE' is 0.
Fix it before we change the definition order & values.
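A minimal sketch of the kind of fix this implies (assuming the stats
loop previously started at CPU_IDLE; the actual patch may differ in
detail):

  -	for (itype = CPU_IDLE; itype < CPU_MAX_IDLE_TYPES; itype++)
  +	for (itype = 0; itype < CPU_MAX_IDLE_TYPES; itype++)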
[ mingo: Added changelog. ]
Signed-off-by: Shrikanth Hegde <sshegde@linux.ibm.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Link: https://lore.kernel.org/r/20240308105901.1096078-3-mingo@kernel.org
The 'balancing' spinlock added in:
08c183f31bdb ("[PATCH] sched: add option to serialize load balancing")
... is taken when the SD_SERIALIZE flag is set in a domain, but in reality it
is a glorified global atomic flag serializing the load-balancing of
those domains.
It doesn't have any explicit locking semantics per se: we just
spin_trylock() it.
Turn it into a ... global atomic flag. This makes it more
clear what is going on here, and reduces overhead and code
size a bit:
# kernel/sched/fair.o: [x86-64 defconfig]
text data bss dec hex filename
60730 2721 104 63555 f843 fair.o.before
60718 2721 104 63543 f837 fair.o.after
Also document the flag a bit.
No change in functionality intended.
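A rough sketch of the resulting pattern (the flag and helper names
here are illustrative, not taken from this changelog):

  static atomic_t sched_balance_running = ATOMIC_INIT(0);

  /* ... in the balancing path, for SD_SERIALIZE domains: */
  need_serialize = sd->flags & SD_SERIALIZE;
  if (need_serialize) {
          /* trylock-style: skip if someone else is balancing */
          if (atomic_cmpxchg_acquire(&sched_balance_running, 0, 1))
                  goto out;
  }

  /* ... serialized load-balancing work ... */

  if (need_serialize)
          atomic_set_release(&sched_balance_running, 0);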
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Valentin Schneider <vschneid@redhat.com>
Cc: Shrikanth Hegde <sshegde@linux.ibm.com>
Link: https://lore.kernel.org/r/20240308105901.1096078-2-mingo@kernel.org
Merge tag 'x86_tdx_for_6.9' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 tdx update from Dave Hansen:
- Fix sparse warning from TDX use of movdir64b()
* tag 'x86_tdx_for_6.9' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86/asm: Remove the __iomem annotation of movdir64b()'s dst argument
Merge tag 'x86_mm_for_6.9' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 mm updates from Dave Hansen:
- Add a warning when memory encryption conversions fail. These
operations require VMM cooperation, even in CoCo environments where
the VMM is untrusted. While it's _possible_ that memory pressure
could trigger the new warning, the odds are that a guest would only
see this from an attacking VMM.
- Simplify page fault code by re-enabling interrupts unconditionally
- Avoid truncation issues when pfns are passed in to pfn_to_kaddr()
with small (<64-bit) types.
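A sketch of the truncation hazard behind the last item (the macro
body here is reconstructed for illustration): if 'pfn' has a 32-bit
type, the shift overflows before any implicit widening, so the input
has to be cast up front:

  -#define pfn_to_kaddr(pfn)      __va((pfn) << PAGE_SHIFT)
  +#define pfn_to_kaddr(pfn)      __va((unsigned long)(pfn) << PAGE_SHIFT)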
* tag 'x86_mm_for_6.9' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86/mm/cpa: Warn for set_memory_XXcrypted() VMM fails
x86/mm: Get rid of conditional IF flag handling in page fault path
x86/mm: Ensure input to pfn_to_kaddr() is treated as a 64-bit type
[ Please note that there's a higher number of merge commits in
this branch (three) than is usual in x86 topic trees. This happened
due to the long testing lifecycle of the percpu changes that
involved 3 merge windows, which generated a longer history
and various interactions with other core x86 changes that we
felt better about to carry in a single branch. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Merge tag 'x86-core-2024-03-11' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull core x86 updates from Ingo Molnar:
- The biggest change is the rework of the percpu code, to support the
'Named Address Spaces' GCC feature, by Uros Bizjak:
- This allows C code to access GS and FS segment relative memory
via variables declared with such attributes, which allows the
compiler to better optimize those accesses than the previous
inline assembly code (see the sketch after this list).
- The series also includes a number of micro-optimizations for
various percpu access methods, plus a number of cleanups of %gs
accesses in assembly code.
- These changes have been exposed to linux-next testing for the
last ~5 months, with no known regressions in this area.
- Fix/clean up __switch_to()'s broken but accidentally working handling
of FPU switching - which also generates better code
- Propagate more RIP-relative addressing in assembly code, to generate
slightly better code
- Rework the CPU mitigations Kconfig space to be less idiosyncratic, to
make it easier for distros to follow & maintain these options
- Rework the x86 idle code to cure RCU violations and to clean up the
logic
- Clean up the vDSO Makefile logic
- Misc cleanups and fixes
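A minimal illustration of the 'Named Address Spaces' feature
referenced above (not kernel code; the function is made up, and
__seg_gs needs a reasonably recent GCC targeting x86):

  /* The pointed-to object lives in the %gs-relative address space,
   * so a plain C dereference compiles to a %gs-prefixed memory
   * operand instead of hand-written inline assembly: */
  int read_gs_slot(int __seg_gs *p)
  {
          return *p;      /* emits something like: movl %gs:(%rdi), %eax */
  }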
* tag 'x86-core-2024-03-11' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (52 commits)
x86/idle: Select idle routine only once
x86/idle: Let prefer_mwait_c1_over_halt() return bool
x86/idle: Cleanup idle_setup()
x86/idle: Clean up idle selection
x86/idle: Sanitize X86_BUG_AMD_E400 handling
sched/idle: Conditionally handle tick broadcast in default_idle_call()
x86: Increase brk randomness entropy for 64-bit systems
x86/vdso: Move vDSO to mmap region
x86/vdso/kbuild: Group non-standard build attributes and primary object file rules together
x86/vdso: Fix rethunk patching for vdso-image-{32,64}.o
x86/retpoline: Ensure default return thunk isn't used at runtime
x86/vdso: Use CONFIG_COMPAT_32 to specify vdso32
x86/vdso: Use $(addprefix ) instead of $(foreach )
x86/vdso: Simplify obj-y addition
x86/vdso: Consolidate targets and clean-files
x86/bugs: Rename CONFIG_RETHUNK => CONFIG_MITIGATION_RETHUNK
x86/bugs: Rename CONFIG_CPU_SRSO => CONFIG_MITIGATION_SRSO
x86/bugs: Rename CONFIG_CPU_IBRS_ENTRY => CONFIG_MITIGATION_IBRS_ENTRY
x86/bugs: Rename CONFIG_CPU_UNRET_ENTRY => CONFIG_MITIGATION_UNRET_ENTRY
x86/bugs: Rename CONFIG_SLS => CONFIG_MITIGATION_SLS
...
Merge tag 'x86-cleanups-2024-03-11' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 cleanups from Ingo Molnar:
"Misc cleanups, including a large series from Thomas Gleixner to cure
sparse warnings"
* tag 'x86-cleanups-2024-03-11' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86/nmi: Drop unused declaration of proc_nmi_enabled()
x86/callthunks: Use EXPORT_PER_CPU_SYMBOL_GPL() for per CPU variables
x86/cpu: Provide a declaration for itlb_multihit_kvm_mitigation
x86/cpu: Use EXPORT_PER_CPU_SYMBOL_GPL() for x86_spec_ctrl_current
x86/uaccess: Add missing __force to casts in __access_ok() and valid_user_address()
x86/percpu: Cure per CPU madness on UP
smp: Consolidate smp_prepare_boot_cpu()
x86/msr: Add missing __percpu annotations
x86/msr: Prepare for including <linux/percpu.h> into <asm/msr.h>
perf/x86/amd/uncore: Fix __percpu annotation
x86/nmi: Remove an unnecessary IS_ENABLED(CONFIG_SMP)
x86/apm_32: Remove dead function apm_get_battery_status()
x86/insn-eval: Fix function param name in get_eff_addr_sib()
Merge tag 'sched-core-2024-03-11' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull scheduler updates from Ingo Molnar:
- Fix inconsistency in misfit task load-balancing
- Fix CPU isolation bugs in the task-wakeup logic
- Rework and unify the sched_use_asym_prio() and sched_asym_prefer()
logic
- Clean up and simplify ->avg_* accesses
- Misc cleanups and fixes
* tag 'sched-core-2024-03-11' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
sched/topology: Rename SD_SHARE_PKG_RESOURCES to SD_SHARE_LLC
sched/fair: Check the SD_ASYM_PACKING flag in sched_use_asym_prio()
sched/fair: Rework sched_use_asym_prio() and sched_asym_prefer()
sched/fair: Remove unused parameter from sched_asym()
sched/topology: Remove duplicate descriptions from TOPOLOGY_SD_FLAGS
sched/fair: Simplify the update_sd_pick_busiest() logic
sched/fair: Do strict inequality check for busiest misfit task group
sched/fair: Remove unnecessary goto in update_sd_lb_stats()
sched/fair: Take the scheduling domain into account in select_idle_core()
sched/fair: Take the scheduling domain into account in select_idle_smt()
sched/fair: Add READ_ONCE() and use existing helper function to access ->avg_irq
sched/fair: Use existing helper functions to access ->avg_rt and ->avg_dl
sched/core: Simplify code by removing duplicate #ifdefs
Merge tag 'locking-core-2024-03-11' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull locking updates from Ingo Molnar:
- Micro-optimize local_xchg() and the rtmutex code on x86
- Fix percpu-rwsem contention tracepoints
- Simplify debugging Kconfig dependencies
- Update/clarify the documentation of atomic primitives
- Misc cleanups
* tag 'locking-core-2024-03-11' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
locking/rtmutex: Use try_cmpxchg_relaxed() in mark_rt_mutex_waiters()
locking/x86: Implement local_xchg() using CMPXCHG without the LOCK prefix
locking/percpu-rwsem: Trigger contention tracepoints only if contended
locking/rwsem: Make DEBUG_RWSEMS and PREEMPT_RT mutually exclusive
locking/rwsem: Clarify that RWSEM_READER_OWNED is just a hint
locking/mutex: Simplify <linux/mutex.h>
locking/qspinlock: Fix 'wait_early' set but not used warning
locking/atomic: scripts: Clarify ordering of conditional atomics
Merge tag 'edac_updates_for_v6.9' of git://git.kernel.org/pub/scm/linux/kernel/git/ras/ras
Pull EDAC updates from Borislav Petkov:
- Add a FRU (Field Replaceable Unit) memory poison manager which
collects and manages previously encountered hw errors in order to
save them to persistent storage across reboots. Previously recorded
errors are "replayed" upon reboot in order to poison memory which has
caused said errors in the past.
The main use case is stacked, on-chip memory which cannot simply be
replaced so poisoning faulty areas of it and thus making them
inaccessible is the only strategy to prolong its lifetime.
- Add an AMD address translation library glue which converts the
reported addresses of hw errors into system physical addresses in
order to be used by other subsystems like memory failure, for
example. Add support for MI300 accelerators to that library.
- igen6: Add support for Alder Lake-N SoC
- i10nm: Add Grand Ridge support
- The usual fixlets and cleanups
* tag 'edac_updates_for_v6.9' of git://git.kernel.org/pub/scm/linux/kernel/git/ras/ras:
EDAC/versal: Convert to platform remove callback returning void
RAS/AMD/FMPM: Fix off by one when unwinding on error
RAS/AMD/FMPM: Add debugfs interface to print record entries
RAS/AMD/FMPM: Save SPA values
RAS: Export helper to get ras_debugfs_dir
RAS/AMD/ATL: Fix bit overflow in denorm_addr_df4_np2()
RAS: Introduce a FRU memory poison manager
RAS/AMD/ATL: Add MI300 row retirement support
Documentation: Move RAS section to admin-guide
EDAC/versal: Make the bit position of injected errors configurable
EDAC/i10nm: Add Intel Grand Ridge micro-server support
EDAC/igen6: Add one more Intel Alder Lake-N SoC support
RAS/AMD/ATL: Add MI300 DRAM to normalized address translation support
RAS/AMD/ATL: Fix array overflow in get_logical_coh_st_fabric_id_mi300()
RAS/AMD/ATL: Add MI300 support
Documentation: RAS: Add index and address translation section
EDAC/amd64: Use new AMD Address Translation Library
RAS: Introduce AMD Address Translation Library
EDAC/synopsys: Convert to devm_platform_ioremap_resource()
Merge tag 'x86_misc_for_v6.9_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull misc x86 fixes from Borislav Petkov:
- Fix a wrong check in the function reporting whether a CPU executes
(or not) an NMI handler
- Ratelimit unknown-NMI messages in order to not potentially slow down
the machine
- Other fixlets
* tag 'x86_misc_for_v6.9_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86/nmi: Fix the inverse "in NMI handler" check
Documentation/maintainer-tip: Add C++ tail comments exception
Documentation/maintainer-tip: Add Closes tag
x86/nmi: Rate limit unknown NMI messages
Documentation/kernel-parameters: Add spec_rstack_overflow to mitigations=off
Merge tag 'x86_sev_for_v6.9_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 SEV updates from Borislav Petkov:
- Add the x86 part of the SEV-SNP host support.
This will allow the kernel to be used as a KVM hypervisor capable of
running SNP (Secure Nested Paging) guests. Roughly speaking, SEV-SNP
is the ultimate goal of the AMD confidential computing side,
providing the most comprehensive confidential computing environment
up to date.
This is the x86 part; there is also a KVM part which did not get
ready in time for the merge window, so the latter will be
forthcoming in the next cycle.
- Rework the early code's position-dependent SEV variable references in
order to allow building the kernel with clang and -fPIE/-fPIC and
-mcmodel=kernel
- The usual set of fixes, cleanups and improvements all over the place
* tag 'x86_sev_for_v6.9_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (36 commits)
x86/sev: Disable KMSAN for memory encryption TUs
x86/sev: Dump SEV_STATUS
crypto: ccp - Have it depend on AMD_IOMMU
iommu/amd: Fix failure return from snp_lookup_rmpentry()
x86/sev: Fix position dependent variable references in startup code
crypto: ccp: Make snp_range_list static
x86/Kconfig: Remove CONFIG_AMD_MEM_ENCRYPT_ACTIVE_BY_DEFAULT
Documentation: virt: Fix up pre-formatted text block for SEV ioctls
crypto: ccp: Add the SNP_SET_CONFIG command
crypto: ccp: Add the SNP_COMMIT command
crypto: ccp: Add the SNP_PLATFORM_STATUS command
x86/cpufeatures: Enable/unmask SEV-SNP CPU feature
KVM: SEV: Make AVIC backing, VMSA and VMCB memory allocation SNP safe
crypto: ccp: Add panic notifier for SEV/SNP firmware shutdown on kdump
iommu/amd: Clean up RMP entries for IOMMU pages during SNP shutdown
crypto: ccp: Handle legacy SEV commands when SNP is enabled
crypto: ccp: Handle non-volatile INIT_EX data when SNP is enabled
crypto: ccp: Handle the legacy TMR allocation when SNP is enabled
x86/sev: Introduce an SNP leaked pages list
crypto: ccp: Provide an API to issue SEV and SNP commands
...
Merge tag 'x86_cache_for_v6.9_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull resource control updates from Borislav Petkov:
- Rework different aspects of the resctrl code, like adding
arch-specific accessors and splitting the locking, in order to
accommodate ARM's MPAM implementation of hw resource control and
be able to use the same filesystem control interface as on x86.
Work by James Morse
- Improve the memory bandwidth throttling heuristic to handle
workloads with irregular load levels which end up penalized
unnecessarily
- Use CPUID to detect the memory bandwidth enforcement limit on AMD
- The usual set of fixes
* tag 'x86_cache_for_v6.9_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (30 commits)
x86/resctrl: Remove lockdep annotation that triggers false positive
x86/resctrl: Separate arch and fs resctrl locks
x86/resctrl: Move domain helper migration into resctrl_offline_cpu()
x86/resctrl: Add CPU offline callback for resctrl work
x86/resctrl: Allow overflow/limbo handlers to be scheduled on any-but CPU
x86/resctrl: Add CPU online callback for resctrl work
x86/resctrl: Add helpers for system wide mon/alloc capable
x86/resctrl: Make rdt_enable_key the arch's decision to switch
x86/resctrl: Move alloc/mon static keys into helpers
x86/resctrl: Make resctrl_mounted checks explicit
x86/resctrl: Allow arch to allocate memory needed in resctrl_arch_rmid_read()
x86/resctrl: Allow resctrl_arch_rmid_read() to sleep
x86/resctrl: Queue mon_event_read() instead of sending an IPI
x86/resctrl: Add cpumask_any_housekeeping() for limbo/overflow
x86/resctrl: Move CLOSID/RMID matching and setting to use helpers
x86/resctrl: Allocate the cleanest CLOSID by searching closid_num_dirty_rmid
x86/resctrl: Use __set_bit()/__clear_bit() instead of open coding
x86/resctrl: Track the number of dirty RMID a CLOSID has
x86/resctrl: Allow RMID allocation to be scoped by CLOSID
x86/resctrl: Access per-rmid structures by index
...
Merge tag 'x86_mtrr_for_v6.9_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 MTRR update from Borislav Petkov:
- Relax the PAT MSR programming, which was unnecessarily using the
MTRR programming protocol of disabling the cache around the changes.
The reason behind this is that the current algorithm triggers a #VE
exception for TDX guests and unnecessarily complicates things
* tag 'x86_mtrr_for_v6.9_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86/pat: Simplify the PAT programming protocol
Merge tag 'x86_cpu_for_v6.9_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 cpu update from Borislav Petkov:
- Have AMD Zen common init code run on all families from Zen1 onwards
in order to save some future enablement effort
* tag 'x86_cpu_for_v6.9_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86/CPU/AMD: Do the common init on future Zens too
Merge tag 'ras_core_for_v6.9_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull RAS fixlet from Borislav Petkov:
- Constify yet another static struct bus_type instance now that the
driver core can handle that
* tag 'ras_core_for_v6.9_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86/mce: Make mce_subsys const
This reverts commit 8e0ef412869430d114158fc3b9b1fb111e247bd3.
It's broken, and causes the boot to fail on encrypted volumes.
Reported-and-bisected-by: Johannes Weiner <hannes@cmpxchg.org>
Link: https://lore.kernel.org/all/20240311235023.GA1205@cmpxchg.org/
Acked-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Merge tag 'x86-entry-2024-03-11' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 entry update from Thomas Gleixner:
"A single update for the x86 entry code:
The current CR3 handling for kernel page table isolation in the
paranoid return paths which are relevant for #NMI, #MCE, #VC, #DB and
#DF is unconditionally writing CR3 with the value retrieved on
exception entry.
In the vast majority of cases when returning to the kernel this is a
pointless exercise because CR3 was not modified on exception entry.
The only situation where this is necessary is when the exception
interrupts an entry from user space before switching to kernel CR3,
or interrupts an exit to user space after switching back to user CR3.
As CR3 writes can be expensive on some systems this becomes measurable
overhead with high frequency #NMIs such as perf.
Avoid this overhead by checking the CR3 value, which was saved on
entry, and write it back to CR3 only when it is a user CR3"
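A C-level sketch of that check (the real code lives in the asm entry
macros; the mask name is an assumption based on how PTI tags user
CR3 values):

  /* saved_cr3 holds the CR3 value captured on exception entry */
  if (saved_cr3 & PTI_USER_PGTABLE_MASK)  /* user CR3? (assumed marker bit) */
          native_write_cr3(saved_cr3);    /* return path needs user CR3 */
  /* else: we stayed on kernel CR3 the whole time - skip the write */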
* tag 'x86-entry-2024-03-11' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86/entry: Avoid redundant CR3 write on paranoid returns
Merge tag 'x86-fred-2024-03-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 FRED support from Thomas Gleixner:
"Support for x86 Fast Return and Event Delivery (FRED).
FRED is a replacement for IDT event delivery on x86 and addresses most
of the technical nightmares which IDT exposes:
1) Exception cause registers like CR2 need to be manually preserved
in nested exception scenarios.
2) Hardware interrupt stack switching is suboptimal for nested
exceptions as the interrupt stack mechanism rewinds the stack on
each entry which requires a massive effort in the low level entry
of #NMI code to handle this.
3) No hardware distinction between entry from kernel or from user
which makes establishing kernel context more complex than it needs
to be especially for unconditionally nestable exceptions like NMI.
4) NMI nesting caused by IRET unconditionally reenabling NMIs, which
is a problem when the perf NMI takes a fault when collecting a
stack trace.
5) Partial restore of ESP when returning to a 16-bit segment
6) Limitation of the vector space which can cause vector exhaustion
on large systems.
7) Inability to differentiate NMI sources
FRED addresses these shortcomings by:
1) An extended exception stack frame which the CPU uses to save
exception cause registers. This ensures that the meta information
for each exception is preserved on stack and avoids the extra
complexity of preserving it in software.
2) Hardware interrupt stack switching is non-rewinding if a nested
exception uses the current interrupt stack.
3) The entry points for kernel and user context are separate and GS
BASE handling which is required to establish kernel context for
per CPU variable access is done in hardware.
4) NMIs are now nesting protected. They are only reenabled on the
return from NMI.
5) FRED guarantees full restore of ESP
6) FRED does not put a limitation on the vector space by design
because it uses central entry points for kernel and user space
and the CPU stores the entry type (exception, trap, interrupt,
syscall) on the entry stack along with the vector number. The
entry code has to demultiplex this information (see the sketch
after this quote), but this removes the vector space restriction.
The first hardware implementations will still have the current
restricted vector space because lifting this limitation requires
further changes to the local APIC.
7) FRED stores the vector number and meta information on stack which
allows having more than one NMI vector in future hardware when the
required local APIC changes are in place.
The series implements the initial FRED support by:
- Reworking the existing entry and IDT handling infrastructure to
accommodate the alternative entry mechanism.
- Expanding the stack frame to accommodate the extra 16 bytes FRED
requires to store context and meta information
- Providing FRED specific C entry points for events which have
information pushed to the extended stack frame, e.g. #PF and #DB.
- Providing FRED specific C entry points for #NMI and #MCE
- Implementing the FRED specific ASM entry points and the C code to
demultiplex the events
- Providing detection and initialization mechanisms and the necessary
tweaks in context switching, GS BASE handling etc.
The FRED integration aims for maximum code reuse vs the existing IDT
implementation to the extent possible, and the deviations in hot
paths like context switching are handled with alternatives to
minimize the impact. The low level entry and exit paths are separate
due to the extended stack frame and the hardware based GS BASE
switching and therefore have no impact on IDT based systems.
It has been extensively tested on existing systems and on the FRED
simulation and as of now there are no outstanding problems"
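A rough C-level sketch of the demultiplexing idea from item 6 (the
types, field names and handlers are illustrative, not the actual
kernel definitions):

  /* The FRED stack frame carries the event type and vector, so one
   * entry point can dispatch without a per-vector IDT gate: */
  switch (frame->event_type) {
  case EVENT_TYPE_EXTINT:  handle_irq(frame->vector);       break;
  case EVENT_TYPE_NMI:     handle_nmi();                    break;
  case EVENT_TYPE_FAULT:   handle_exception(frame->vector); break;
  case EVENT_TYPE_SYSCALL: handle_syscall();                break;
  default:                 handle_unexpected_event();       break;
  }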
* tag 'x86-fred-2024-03-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (38 commits)
x86/fred: Fix init_task thread stack pointer initialization
MAINTAINERS: Add a maintainer entry for FRED
x86/fred: Fix a build warning with allmodconfig due to 'inline' failing to inline properly
x86/fred: Invoke FRED initialization code to enable FRED
x86/fred: Add FRED initialization functions
x86/syscall: Split IDT syscall setup code into idt_syscall_init()
KVM: VMX: Call fred_entry_from_kvm() for IRQ/NMI handling
x86/entry: Add fred_entry_from_kvm() for VMX to handle IRQ/NMI
x86/entry/calling: Allow PUSH_AND_CLEAR_REGS being used beyond actual entry code
x86/fred: Fixup fault on ERETU by jumping to fred_entrypoint_user
x86/fred: Let ret_from_fork_asm() jmp to asm_fred_exit_user when FRED is enabled
x86/traps: Add sysvec_install() to install a system interrupt handler
x86/fred: FRED entry/exit and dispatch code
x86/fred: Add a machine check entry stub for FRED
x86/fred: Add a NMI entry stub for FRED
x86/fred: Add a debug fault entry stub for FRED
x86/idtentry: Incorporate definitions/declarations of the FRED entries
x86/fred: Make exc_page_fault() work for FRED
x86/fred: Allow single-step trap and NMI when starting a new task
x86/fred: No ESPFIX needed when FRED is enabled
...
Merge tag 'x86-apic-2024-03-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 APIC updates from Thomas Gleixner:
"Rework of APIC enumeration and topology evaluation.
The current implementation has a couple of shortcomings:
- It fails to handle hybrid systems correctly.
- The APIC registration code which handles CPU number assignments
is in the middle of the APIC code and detached from the topology
evaluation.
- The various mechanisms which enumerate APICs (ACPI, MPPARSE and
guest specific ones) tweak global variables as they see fit, or in
the case of XENPV just hack around the generic mechanisms completely.
- The CPUID topology evaluation code is sprinkled all over the vendor
code and reevaluates global variables on every hotplug operation.
- There is no way to analyze topology on the boot CPU before bringing
up the APs. This causes problems for infrastructure like PERF which
needs to size certain aspects upfront or could be simplified if
that would be possible.
- The APIC admission and CPU number association logic is
incomprehensible and overly complex and needs to be kept around
after boot instead of completing this right after the APIC
enumeration.
This update addresses these shortcomings with the following changes:
- Rework the CPUID evaluation code so it is common for all vendors
and provides information about the APIC ID segments in a uniform
way independent of the number of segments (Thread, Core, Module,
..., Die, Package) so that this information can be computed instead
of rewriting global variables of dubious value over and over (see
the sketch after this quote).
- A few cleanups and simplifications of the APIC, IO/APIC and
related interfaces to prepare for the topology evaluation changes.
- Separation of the parser stages so the early evaluation which tries
to find the APIC address can be separately overridden from the late
evaluation which enumerates and registers the local APIC as further
preparation for sanitizing the topology evaluation.
- A new registration and admission logic which
- encapsulates the inner workings so that parsers and guest logic
can no longer fiddle with it
- uses the APIC ID segments to build topology bitmaps at
registration time
- provides a sane admission logic
- automatically detects the crash kernel case, where CPU0 does not
run on the real BSP. This is required to prevent
sending INIT/SIPI sequences to the real BSP which would reset
the whole machine. This was so far handled by a tedious command
line parameter, which does not even work in nested crash
scenarios.
- associates CPU numbers after the enumeration has completed and
prevents late registration of APICs, which was somehow
tolerated before.
- Converting all parsers and guest enumeration mechanisms over to the
new interfaces.
This allows getting rid of all global variable tweaking from the
parsers and enumeration mechanisms and sanitizes the XEN[PV]
handling so it can use CPUID evaluation for the first time.
- Mopping up existing sins by taking the information from the APIC ID
segment bitmaps.
This evaluates hybrid systems correctly on the boot CPU and allows
for cleanups and fixes in the related drivers, e.g. PERF.
The series has been extensively tested and the minimal late fallout
due to a broken ACPI/MADT table has been addressed by tightening the
admission logic further"
* tag 'x86-apic-2024-03-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (76 commits)
x86/topology: Ignore non-present APIC IDs in a present package
x86/apic: Build the x86 topology enumeration functions on UP APIC builds too
smp: Provide 'setup_max_cpus' definition on UP too
smp: Avoid 'setup_max_cpus' namespace collision/shadowing
x86/bugs: Use fixed addressing for VERW operand
x86/cpu/topology: Get rid of cpuinfo::x86_max_cores
x86/cpu/topology: Provide __num_[cores|threads]_per_package
x86/cpu/topology: Rename topology_max_die_per_package()
x86/cpu/topology: Rename smp_num_siblings
x86/cpu/topology: Retrieve cores per package from topology bitmaps
x86/cpu/topology: Use topology logical mapping mechanism
x86/cpu/topology: Provide logical pkg/die mapping
x86/cpu/topology: Simplify cpu_mark_primary_thread()
x86/cpu/topology: Mop up primary thread mask handling
x86/cpu/topology: Use topology bitmaps for sizing
x86/cpu/topology: Let XEN/PV use topology from CPUID/MADT
x86/xen/smp_pv: Count number of vCPUs early
x86/cpu/topology: Assign hotpluggable CPUIDs during init
x86/cpu/topology: Reject unknown APIC IDs on ACPI hotplug
x86/topology: Add a mechanism to track topology via APIC IDs
...
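To illustrate the APIC ID segment idea described in the pull message
above, here is a minimal, hedged C sketch (not the kernel's actual
code; the segment layout, the apic_segment enum and the helper are
hypothetical) of how a flat APIC ID decomposes into per-level indices
once the per-level bit offsets are known:

  #include <stdio.h>

  /* Hypothetical topology segments, lowest to highest. */
  enum apic_segment { SEG_THREAD, SEG_CORE, SEG_DIE, SEG_PKG, SEG_MAX };

  /*
   * Cumulative bit offset of each segment inside the APIC ID, as it
   * would be derived once from CPUID on the boot CPU. Example values:
   * 1 thread bit, 4 core bits, 2 die bits, the rest is the package.
   */
  static const unsigned int seg_shift[SEG_MAX + 1] = { 0, 1, 5, 7, 32 };

  /* Extract the index of @apicid within segment @seg. */
  static unsigned int apic_segment_index(unsigned int apicid,
                                         enum apic_segment seg)
  {
          unsigned int width = seg_shift[seg + 1] - seg_shift[seg];

          return (apicid >> seg_shift[seg]) & ((1ULL << width) - 1);
  }

  int main(void)
  {
          unsigned int apicid = 0x53;     /* example APIC ID */

          printf("thread %u core %u die %u pkg %u\n",
                 apic_segment_index(apicid, SEG_THREAD),
                 apic_segment_index(apicid, SEG_CORE),
                 apic_segment_index(apicid, SEG_DIE),
                 apic_segment_index(apicid, SEG_PKG));
          return 0;
  }

Decoding every registered APIC ID this way is what allows the topology
bitmaps to be filled in at registration time, before any AP is brought
up.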
Merge tag 'timers-core-2024-03-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull timer updates from Thomas Gleixner:
"A large set of updates and features for timers and timekeeping:
- The hierarchical timer pull model
When timer wheel timers are armed they are placed into the timer
wheel of a CPU which is likely to be busy at the time of expiry.
This is done to avoid wakeups on potentially idle CPUs.
This is wrong in several aspects:
1) The heuristics to select the target CPU are wrong by
definition as the chance to get the prediction right is
close to zero.
2) Due to #1 it is possible that timers are accumulated on
a single target CPU.
3) The required computation in the enqueue path is just overhead
of dubious value, especially considering that the vast
majority of timer wheel timers are either canceled or
rearmed before they expire.
The timer pull model avoids the above by removing the target
computation on enqueue and queueing timers always on the CPU on
which they get armed.
This is achieved by having separate wheels for CPU pinned timers
and global timers which do not care about where they expire.
As long as a CPU is busy it handles both the pinned and the global
timers which are queued on the CPU local timer wheels.
When a CPU goes idle it evaluates its own timer wheels:
- If the first expiring timer is a pinned timer, then the global
timers can be ignored as the CPU will wake up before they
expire.
- If the first expiring timer is a global timer, then the expiry
time is propagated into the timer pull hierarchy and the CPU
makes sure to wake up for the first pinned timer.
The timer pull hierarchy organizes CPUs in groups of eight at the
lowest level and at the next levels groups of eight groups up to
the point where no further aggregation of groups is required, i.e.
the number of levels is log8(NR_CPUS); see the sketch after the
commit list below. The magic number of eight has been established
by experimentation, but can be adjusted if needed.
In each group one busy CPU acts as the migrator. Only one CPU takes
this role to avoid lock contention on remote timer wheels.
The migrator CPU checks in its own timer wheel handling whether
there are other CPUs in the group which have gone idle and have
global timers to expire. If there are global timers to expire, the
migrator locks the remote CPU timer wheel and handles the expiry.
Depending on the group level in the hierarchy, this handling can
require walking the hierarchy downwards to the CPU level.
Special care is taken when the last CPU goes idle. At this point
the CPU is the systemwide migrator at the top of the hierarchy and
it therefore cannot delegate to the hierarchy. It needs to arm its
own timer device to expire either at the first expiring timer in
the hierarchy or at the first CPU local timer, whichever expires
first.
This completely removes the overhead from the enqueue path, which
is a true hotpath e.g. for networking, and trades it for a slightly
more complex idle path.
This has been in development for a couple of years and the final
series has been extensively tested by various teams from silicon
vendors and has run through extensive CI.
There have been slight performance improvements observed on network
centric workloads and an Intel team confirmed that this allows them
to power down a die completely on a multi-die socket for the first
time in a mostly idle scenario.
There is only one outstanding ~1.5% regression on a specific
overloaded netperf test which is currently being investigated, but
the rest is either positive or neutral performance-wise and
positive on the power management side.
- Fixes for the timekeeping interpolation code for cross-timestamps:
cross-timestamps are used for PTP to get snapshots from hardware
timers and interpolate them back to clock MONOTONIC. The changes
address a few corner cases in the interpolation code which got the
math and logic wrong.
- Simplification of the clocksource watchdog retry logic to
automatically adjust to handle larger systems correctly instead of
having more incomprehensible command line parameters.
- Treewide consolidation of the VDSO data structures.
- The usual small improvements and cleanups all over the place"
* tag 'timers-core-2024-03-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (62 commits)
timer/migration: Fix quick check reporting late expiry
tick/sched: Fix build failure for CONFIG_NO_HZ_COMMON=n
vdso/datapage: Quick fix - use asm/page-def.h for ARM64
timers: Assert no next dyntick timer look-up while CPU is offline
tick: Assume timekeeping is correctly handed over upon last offline idle call
tick: Shut down low-res tick from dying CPU
tick: Split nohz and highres features from nohz_mode
tick: Move individual bit features to debuggable mask accesses
tick: Move got_idle_tick away from common flags
tick: Assume the tick can't be stopped in NOHZ_MODE_INACTIVE mode
tick: Move broadcast cancellation up to CPUHP_AP_TICK_DYING
tick: Move tick cancellation up to CPUHP_AP_TICK_DYING
tick: Start centralizing tick related CPU hotplug operations
tick/sched: Don't clear ts::next_tick again in can_stop_idle_tick()
tick/sched: Rename tick_nohz_stop_sched_tick() to tick_nohz_full_stop_tick()
tick: Use IS_ENABLED() whenever possible
tick/sched: Remove useless oneshot ifdeffery
tick/nohz: Remove duplicate between lowres and highres handlers
tick/nohz: Remove duplicate between tick_nohz_switch_to_nohz() and tick_setup_sched_timer()
hrtimer: Select housekeeping CPU during migration
...
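The shape of the hierarchy is easy to pin down in code. Below is a
minimal, hedged C sketch (illustrative only, names invented; the
in-tree timer migration code differs) computing the number of levels
for a given CPU count and the group a CPU falls into at each level,
given the groups-of-eight fanout described above:

  #include <stdio.h>

  #define FANOUT 8        /* groups of eight, per the description above */

  /* Levels needed to cover @nr_cpus, i.e. ceil(log8(nr_cpus)). */
  static unsigned int tmigr_levels(unsigned int nr_cpus)
  {
          unsigned int levels = 0, span = 1;

          while (span < nr_cpus) {
                  span *= FANOUT;
                  levels++;
          }
          return levels ? levels : 1;
  }

  int main(void)
  {
          unsigned int nr_cpus = 256, cpu = 42;
          unsigned int levels = tmigr_levels(nr_cpus);

          printf("%u CPUs -> %u levels\n", nr_cpus, levels);

          /* The group containing @cpu at each level up the hierarchy. */
          for (unsigned int lvl = 0, group = cpu / FANOUT; lvl < levels;
               lvl++, group /= FANOUT)
                  printf("level %u: group %u\n", lvl, group);

          return 0;
  }

For 256 CPUs this yields three levels; within each group, one busy CPU
acts as the migrator for its idle peers as described above.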
Merge tag 'timers-ptp-2024-03-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull clocksource updates from Thomas Gleixner:
"Updates for timekeeping and PTP core.
The cross-timestamp mechanism which allows correlating hardware
clocks uses clocksource pointers for describing the correlation.
That's suboptimal as drivers need to obtain the pointer, which
requires needless exports and exposing internals. This can all be
completely avoided by assigning clocksource IDs and using them for
describing the correlated clock source.
So this adds clocksource IDs to all clocksources in the tree which can
be exposed to this mechanism and removes the pointer and now needless
exports. This is separate from the timer core changes as it was
provided to the PTP folks to build further changes on top (a
structural sketch follows the commit list below).
A related improvement for the core and the correlation handling has
not made it this time, but is expected to get ready for the next
round"
* tag 'timers-ptp-2024-03-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
kvmclock: Unexport kvmclock clocksource
treewide: Remove system_counterval_t.cs, which is never read
timekeeping: Evaluate system_counterval_t.cs_id instead of .cs
ptp/kvm, arm_arch_timer: Set system_counterval_t.cs_id to constant
x86/kvm, ptp/kvm: Add clocksource ID, set system_counterval_t.cs_id
x86/tsc: Add clocksource ID, set system_counterval_t.cs_id
timekeeping: Add clocksource ID to struct system_counterval_t
x86/tsc: Correct kernel-doc notation
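The shape of the change can be sketched as follows; this is a hedged
simplification (the enum values and struct names here are illustrative
stand-ins modeled on the series, not copied from the tree):

  /* Before: the correlated clock is described by a clocksource
   * pointer, forcing drivers to obtain and export that pointer. */
  struct clocksource;

  struct system_counterval_before {
          unsigned long long cycles;
          struct clocksource *cs;         /* exposes internals */
  };

  /* After: a plain ID identifies the correlated clock. */
  enum clocksource_id_sketch {
          CSID_SKETCH_GENERIC,
          CSID_SKETCH_X86_TSC,
          CSID_SKETCH_ARM_ARCH_COUNTER,
  };

  struct system_counterval_after {
          unsigned long long cycles;
          enum clocksource_id_sketch cs_id; /* no pointer, no export */
  };

The timekeeping core can then match cs_id against the ID of the
currently active clocksource instead of comparing pointers.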
Merge tag 'smp-core-2024-03-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull cpu core updates from Thomas Gleixner:
"A small boring set of cleanups for the SMP and CPU hotplug code"
* tag 'smp-core-2024-03-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
cpu: Remove stray semicolon
smp: Make __smp_processor_id() 0-argument macro
cpu: Mark cpu_possible_mask as __ro_after_init
kernel/cpu: Convert snprintf() to sysfs_emit()
cpu/hotplug: Delete an extraneous kernel-doc description
Merge tag 'irq-msi-2024-03-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull MSI updates from Thomas Gleixner:
"Updates for the MSI interrupt subsystem and initial RISC-V MSI
support.
The core changes have been adopted from previous work which converted
ARM[64] to the new per device MSI domain model, which was merged to
support multiple MSI domains per device. The ARM[64] changes are
being worked on too, but are not ready yet. The core and platform-MSI
changes have been split out so as not to hold up RISC-V and to avoid
RISC-V building on interfaces that are scheduled for removal.
The core support provides new interfaces to handle wire to MSI bridges
in a straightforward way and introduces new platform-MSI interfaces
which are built on top of the per device MSI domain model (a
conceptual sketch of such a bridge follows the commit list below).
Once ARM[64] is converted over, the old platform-MSI interfaces and
the related ugliness in the MSI core code will be removed.
The actual MSI parts for RISC-V were finalized late and have been
postponed for the next merge window.
Drivers:
- Add a new driver for the Andes hart-level interrupt controller
- Rework the SiFive PLIC driver to prepare for MSI support
- Expand the RISC-V INTC driver to support the new RISC-V AIA
controller which provides the basis for MSI on RISC-V
- A few fixups for the fallout of the core changes"
* tag 'irq-msi-2024-03-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (29 commits)
irqchip/riscv-intc: Fix low-level interrupt handler setup for AIA
x86/apic/msi: Use DOMAIN_BUS_GENERIC_MSI for HPET/IO-APIC domain search
genirq/matrix: Dynamic bitmap allocation
irqchip/riscv-intc: Add support for RISC-V AIA
irqchip/sifive-plic: Improve locking safety by using irqsave/irqrestore
irqchip/sifive-plic: Parse number of interrupts and contexts early in plic_probe()
irqchip/sifive-plic: Cleanup PLIC contexts upon irqdomain creation failure
irqchip/sifive-plic: Use riscv_get_intc_hwnode() to get parent fwnode
irqchip/sifive-plic: Use devm_xyz() for managed allocation
irqchip/sifive-plic: Use dev_xyz() in-place of pr_xyz()
irqchip/sifive-plic: Convert PLIC driver into a platform driver
irqchip/riscv-intc: Introduce Andes hart-level interrupt controller
irqchip/riscv-intc: Allow large non-standard interrupt number
genirq/irqdomain: Don't call ops->select for DOMAIN_BUS_ANY tokens
irqchip/imx-intmux: Handle pure domain searches correctly
genirq/msi: Provide MSI_FLAG_PARENT_PM_DEV
genirq/irqdomain: Reroute device MSI create_mapping
genirq/msi: Provide allocation/free functions for "wired" MSI interrupts
genirq/msi: Optionally use dev->fwnode for device domain
genirq/msi: Provide DOMAIN_BUS_WIRED_TO_MSI
...
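Conceptually, a wire-to-MSI bridge turns a wired interrupt line into a
memory write. The following hedged C sketch (all names hypothetical;
this illustrates the concept, not the genirq API) shows the
translation step such a bridge performs when one of its input lines
fires:

  #include <stdint.h>
  #include <stdio.h>

  /* Hypothetical MSI message: the address/data pair the bridge writes. */
  struct msi_msg_sketch {
          uint64_t address;
          uint32_t data;
  };

  /*
   * A bridge owning a contiguous block of MSI slots maps wired input
   * line @hwirq 1:1 onto slot @hwirq (purely illustrative).
   */
  static void bridge_compose_msg(unsigned int hwirq,
                                 struct msi_msg_sketch *msg)
  {
          msg->address = 0xfee00000ULL;   /* example doorbell address */
          msg->data = 0x100 + hwirq;      /* example per-line payload */
  }

  int main(void)
  {
          struct msi_msg_sketch msg;

          bridge_compose_msg(3, &msg);    /* wired line 3 fires */
          printf("MSI write: data 0x%x -> addr 0x%llx\n",
                 msg.data, (unsigned long long)msg.address);
          return 0;
  }

The new core interfaces let such bridges allocate and free these
"wired" MSI interrupts through the per device MSI domain model instead
of ad hoc plumbing.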
Merge tag 'irq-core-2024-03-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull irq updates from Thomas Gleixner:
"Core:
- Make affinity changes take effect immediately for interrupt
threads. This reduces the impact on isolated CPUs as it pulls over
the thread right away instead of doing it after the next hardware
interrupt arrives; see the sketch after the commit list below.
- Cleanup and improvements for the interrupt chip simulator
- Deduplication of the interrupt descriptor initialization code so
the sparse and non-sparse mode share more code.
Drivers:
- A set of conversions to platform_drivers::remove_new() which gets
rid of the pointless return value.
- A new driver for the Starfive JH8100 SoC
- Support for Amlogic-T7 SoCs
- Improvements for the interrupt handling and EOI management of
the Loongson interrupt controller.
- The usual fixes and improvements all over the place"
* tag 'irq-core-2024-03-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (33 commits)
irqchip/ts4800: Convert to platform_driver::remove_new() callback
irqchip/stm32-exti: Convert to platform_driver::remove_new() callback
irqchip/renesas-rza1: Convert to platform_driver::remove_new() callback
irqchip/renesas-irqc: Convert to platform_driver::remove_new() callback
irqchip/renesas-intc-irqpin: Convert to platform_driver::remove_new() callback
irqchip/pruss-intc: Convert to platform_driver::remove_new() callback
irqchip/mvebu-pic: Convert to platform_driver::remove_new() callback
irqchip/madera: Convert to platform_driver::remove_new() callback
irqchip/ls-scfg-msi: Convert to platform_driver::remove_new() callback
irqchip/keystone: Convert to platform_driver::remove_new() callback
irqchip/imx-irqsteer: Convert to platform_driver::remove_new() callback
irqchip/imx-intmux: Convert to platform_driver::remove_new() callback
irqchip/imgpdc: Convert to platform_driver::remove_new() callback
irqchip: Add StarFive external interrupt controller
dt-bindings: interrupt-controller: Add starfive,jh8100-intc
arm64: dts: Add gpio_intc node for Amlogic-T7 SoCs
irqchip/meson-gpio: Add support for Amlogic-T7 SoCs
dt-bindings: interrupt-controller: Add support for Amlogic-T7 SoCs
irqchip/vic: Fix a kernel-doc warning
genirq: Wake interrupt threads immediately when changing affinity
...
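The affinity change above is about when the new mask is applied, not
what it is. As a loose userspace analogy (hedged; this uses POSIX
thread affinity and is not the genirq code), the new behavior
corresponds to pushing the thread to its new CPUs at the moment of the
request rather than at its next wakeup:

  #define _GNU_SOURCE
  #include <pthread.h>
  #include <sched.h>
  #include <stdio.h>
  #include <unistd.h>

  static void *irq_thread_fn(void *arg)
  {
          (void)arg;
          for (;;)
                  sleep(1);       /* stand-in for hardirq wakeups */
          return NULL;
  }

  int main(void)
  {
          pthread_t thread;
          cpu_set_t mask;

          pthread_create(&thread, NULL, irq_thread_fn, NULL);

          /* Apply the new mask immediately, instead of waiting for
           * the next wakeup to migrate the thread. */
          CPU_ZERO(&mask);
          CPU_SET(0, &mask);
          pthread_setaffinity_np(thread, sizeof(mask), &mask);

          printf("affinity applied immediately\n");
          pthread_cancel(thread);
          pthread_join(thread, NULL);
          return 0;
  }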
Merge tag 'cgroup-for-6.9' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup
Pull cgroup updates from Tejun Heo:
"A quiet cycle. One trivial doc update patch. Two patches to drop the
now defunct memory_spread_slab feature from cgroup1 cpuset"
* tag 'cgroup-for-6.9' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup:
cgroup/cpuset: Mark memory_spread_slab as obsolete
cgroup/cpuset: Remove cpuset_do_slab_mem_spread()
docs: cgroup-v1: add missing code-block tags
Merge tag 'wq-for-6.9-bh-conversions' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq
Pull workqueue BH conversions from Tejun Heo:
"This contains two patches that convert tasklet users to BH workqueues:
backtracetest and usb hcd.
DM conversions are being routed through the respective subsystem
tree. Hopefully, the next cycle will see a lot more conversions (the
conversion pattern is sketched below)"
* tag 'wq-for-6.9-bh-conversions' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq:
usb: core: hcd: Convert from tasklet to BH workqueue
backtracetest: Convert from tasklet to BH workqueue
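The conversion pattern both patches follow can be sketched as a
before/after in kernel-style C (hedged; the driver structure and names
are invented, and it assumes the BH workqueue API added this cycle,
i.e. system_bh_wq plus the usual work_struct helpers):

  #include <linux/workqueue.h>

  struct my_dev {
          struct work_struct done_work;   /* was: struct tasklet_struct */
  };

  static void my_dev_done(struct work_struct *work)
  {
          struct my_dev *dev = container_of(work, struct my_dev, done_work);

          /* completion handling; still runs in softirq context */
          (void)dev;
  }

  static void my_dev_init(struct my_dev *dev)
  {
          /* was: tasklet_setup(&dev->done_tasklet, my_dev_done); */
          INIT_WORK(&dev->done_work, my_dev_done);
  }

  static void my_dev_irq(struct my_dev *dev)
  {
          /* was: tasklet_schedule(&dev->done_tasklet); */
          queue_work(system_bh_wq, &dev->done_work);
  }

The work item keeps executing in softirq context, so latency
characteristics stay close to the tasklet version while gaining the
regular workqueue API.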
Merge tag 'wq-for-6.9' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq
Pull workqueue updates from Tejun Heo:
"This cycle, a lot of workqueue changes including some that are
significant and invasive.
- During v6.6 cycle, unbound workqueues were updated so that they are
more topology aware and flexible, which among other things improved
workqueue behavior on modern multi-L3 CPUs. In the process, commit
636b927eba5b ("workqueue: Make unbound workqueues to use per-cpu
pool_workqueues") switched unbound workqueues to use per-CPU
frontend pool_workqueues as a part of increasing front-back mapping
flexibility.
An unwelcome side effect of this change was that this made max
concurrency enforcement per-CPU, blowing up the maximum number of
allowed concurrent executions. I incorrectly assumed that this
wouldn't cause practical problems as most unbound workqueue users
self-regulate max concurrency; however, there definitely are some
which don't (e.g. on IO paths) and the drastic increase in the
allowed max concurrency led to noticeable perf regressions in some
use cases.
This is now addressed by separating out max concurrency enforcement
into a separate struct - wq_node_nr_active - which makes @max_active
consistently mean system-wide max concurrency regardless of the
number of CPUs or (finally) NUMA nodes; see the sketch after the
commit list below. This is a rather invasive change and, in places,
a bit clunky; however, the clunkiness arises from the inherent
requirement to handle the disagreement between the execution
locality domain and max concurrency enforcement domain on some
modern machines.
See commit 5797b1c18919 ("workqueue: Implement system-wide
nr_active enforcement for unbound workqueues") for more details.
- BH workqueue support is added.
They are similar to per-CPU workqueues but execute work items in
the softirq context. This is expected to replace tasklets. However,
currently, it's missing the ability to disable and enable work
items which is needed to convert many tasklet users. To avoid
crowding this merge window too much, this will be included in the
next merge window. A separate pull request will be sent for the
couple conversion patches that are currently pending.
- Waiman plugged a long-standing hole in workqueue CPU isolation
where ordered workqueues didn't follow wq_unbound_cpumask updates.
Ordered workqueues now follow the same rules as other unbound
workqueues.
- More CPU isolation improvements: Juri fixed another deficit in
workqueue isolation where unbound rescuers don't respect
wq_unbound_cpumask. Leonardo fixed delayed_work timers firing on
isolated CPUs.
- Other misc changes"
* tag 'wq-for-6.9' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq: (54 commits)
workqueue: Drain BH work items on hot-unplugged CPUs
workqueue: Introduce from_work() helper for cleaner callback declarations
workqueue: Control intensive warning threshold through cmdline
workqueue: Make @flags handling consistent across set_work_data() and friends
workqueue: Remove clear_work_data()
workqueue: Factor out work_grab_pending() from __cancel_work_sync()
workqueue: Clean up enum work_bits and related constants
workqueue: Introduce work_cancel_flags
workqueue: Use variable name irq_flags for saving local irq flags
workqueue: Reorganize flush and cancel[_sync] functions
workqueue: Rename __cancel_work_timer() to __cancel_timer_sync()
workqueue: Use rcu_read_lock_any_held() instead of rcu_read_lock_held()
workqueue: Cosmetic changes
workqueue, irq_work: Build fix for !CONFIG_IRQ_WORK
workqueue: Fix queue_work_on() with BH workqueues
async: Use a dedicated unbound workqueue with raised min_active
workqueue: Implement workqueue_set_min_active()
workqueue: Fix kernel-doc comment of unplug_oldest_pwq()
workqueue: Bind unbound workqueue rescuer to wq_unbound_cpumask
kernel/workqueue: Let rescuers follow unbound wq cpumask changes
...
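What the @max_active rework means for users can be shown with a short
hedged kernel-style sketch (the workqueue name and values are made
up): the third argument of alloc_workqueue() on an unbound workqueue
now bounds concurrency system-wide rather than per CPU:

  #include <linux/errno.h>
  #include <linux/workqueue.h>

  static struct workqueue_struct *my_io_wq;

  static int my_io_init(void)
  {
          /*
           * max_active = 16: after the rework this consistently means
           * at most 16 work items executing concurrently across the
           * whole system, regardless of CPU or NUMA node count.
           */
          my_io_wq = alloc_workqueue("my_io_wq", WQ_UNBOUND, 16);
          if (!my_io_wq)
                  return -ENOMEM;
          return 0;
  }

Previously the same value could multiply with the number of CPUs (or
nodes), which is exactly the concurrency blow-up described above.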
Merge tag 'rust-6.9' of https://github.com/Rust-for-Linux/linux
Pull Rust updates from Miguel Ojeda:
"Another routine one in terms of features. We got two version upgrades
this time, but in terms of lines, 'alloc' changes are not very large.
Toolchain and infrastructure:
- Upgrade to Rust 1.76.0
This time around, due to how the kernel and Rust schedules have
aligned, there are two upgrades in fact. These allow us to remove
two more unstable features ('const_maybe_uninit_zeroed' and
'ptr_metadata') from the list, among other improvements
- Mark 'rustc' (and others) invocations as recursive, which fixes a
new warning and prepares us for the future in case we eventually
take advantage of the Make jobserver
'kernel' crate:
- Add the 'container_of!' macro
- Stop using the unstable 'ptr_metadata' feature by employing the now
stable 'byte_sub' method to implement 'Arc::from_raw()'
- Add the 'time' module with a 'msecs_to_jiffies()' conversion
function to begin with, to be used by Rust Binder
- Add 'notify_sync()' and 'wait_interruptible_timeout()' methods to
'CondVar', to be used by Rust Binder
- Update integer types for 'CondVar'
- Rename 'wait_list' field to 'wait_queue_head' in 'CondVar'
- Implement 'Display' and 'Debug' for 'BStr'
- Add the 'try_from_foreign()' method to the 'ForeignOwnable' trait
- Add reexports for macros so that they can be used from the right
module (in addition to the root)
- A series of code documentation improvements, including adding
intra-doc links, consistency improvements, typo fixes...
'macros' crate:
- Place generated 'init_module()' function in '.init.text'
Documentation:
- Add documentation on Rust doctests and how they work"
* tag 'rust-6.9' of https://github.com/Rust-for-Linux/linux: (29 commits)
rust: upgrade to Rust 1.76.0
kbuild: mark `rustc` (and others) invocations as recursive
rust: add `container_of!` macro
rust: str: implement `Display` and `Debug` for `BStr`
rust: module: place generated init_module() function in .init.text
rust: types: add `try_from_foreign()` method
docs: rust: Add description of Rust documentation test as KUnit ones
docs: rust: Move testing to a separate page
rust: kernel: stop using ptr_metadata feature
rust: kernel: add reexports for macros
rust: locked_by: shorten doclink preview
rust: kernel: remove unneeded doclink targets
rust: kernel: add doclinks
rust: kernel: add blank lines in front of code blocks
rust: kernel: mark code fragments in docs with backticks
rust: kernel: unify spelling of refcount in docs
rust: str: move SAFETY comment in front of unsafe block
rust: str: use `NUL` instead of 0 in doc comments
rust: kernel: add srctree-relative doclinks
rust: ioctl: end top-level module docs with full stop
...
This pull request contains the following branches:
rcu-doc.2024.02.14a: Documentation updates.
rcu-nocb.2024.02.14a: RCU NOCB updates, code cleanups, unnecessary
barrier removals and minor bug fixes.
rcu-exp.2024.02.14a: RCU exp, fixing a circular dependency between
workqueue and RCU expedited callback handling.
rcu-tasks.2024.02.26a: RCU tasks, avoiding deadlocks in do_exit() when
calling synchronize_rcu_task() while holding a mutex, maintaining
real-time response in rcu_tasks_postscan() and a minor
fix for tasks trace quiescence check.
rcu-misc.2024.02.14a: Misc updates, comments and readability
improvements, boot time parameter for lazy RCU and rcutorture
improvement.
Merge tag 'rcu.next.v6.9' of git://git.kernel.org/pub/scm/linux/kernel/git/boqun/linux
Pull RCU updates from Boqun Feng:
- Eliminate deadlocks involving do_exit() and RCU tasks, by Paul:
Instead of SRCU read-side critical sections, a percpu list is now
used in do_exit() for scanning yet-to-exit tasks (a conceptual
sketch follows the commit list below)
- Fix a deadlock due to the dependency between workqueue and RCU
expedited grace period, reported by Anna-Maria Behnsen and Thomas
Gleixner and fixed by Frederic: Now RCU expedited always uses its own
kthread worker instead of a workqueue
- RCU NOCB updates, code cleanups, unnecessary barrier removals and
minor bug fixes
- Maintain real-time response in rcu_tasks_postscan() and a minor fix
for tasks trace quiescence check
- Misc updates, comments and readability improvements, boot time
parameter for lazy RCU and rcutorture improvement
- Documentation updates
* tag 'rcu.next.v6.9' of git://git.kernel.org/pub/scm/linux/kernel/git/boqun/linux: (34 commits)
rcu-tasks: Maintain real-time response in rcu_tasks_postscan()
rcu-tasks: Eliminate deadlocks involving do_exit() and RCU tasks
rcu-tasks: Maintain lists to eliminate RCU-tasks/do_exit() deadlocks
rcu-tasks: Initialize data to eliminate RCU-tasks/do_exit() deadlocks
rcu-tasks: Initialize callback lists at rcu_init() time
rcu-tasks: Add data to eliminate RCU-tasks/do_exit() deadlocks
rcu-tasks: Repair RCU Tasks Trace quiescence check
rcu/sync: remove un-used rcu_sync_enter_start function
rcutorture: Suppress rtort_pipe_count warnings until after stalls
srcu: Improve comments about acceleration leak
rcu: Provide a boot time parameter to control lazy RCU
rcu: Rename jiffies_till_flush to jiffies_lazy_flush
doc: Update checklist.rst discussion of callback execution
doc: Clarify use of slab constructors and SLAB_TYPESAFE_BY_RCU
context_tracking: Fix kerneldoc headers for __ct_user_{enter,exit}()
doc: Add EARLY flag to early-parsed kernel boot parameters
doc: Add CONFIG_RCU_STRICT_GRACE_PERIOD to checklist.rst
doc: Make checklist.rst note that spinlocks are implied RCU readers
doc: Make whatisRCU.rst note that spinlocks are RCU readers
doc: Spinlocks are implied RCU readers
...
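The do_exit() fix can be sketched conceptually (hedged and heavily
simplified; all names here are hypothetical and the in-tree
implementation differs in detail): exiting tasks put themselves on a
percpu list that the RCU-tasks machinery scans, instead of entering an
SRCU read-side critical section:

  #include <linux/list.h>
  #include <linux/percpu.h>
  #include <linux/spinlock.h>

  /* One list of yet-to-exit tasks per CPU (illustrative only). */
  static DEFINE_PER_CPU(struct list_head, exit_tasks_list);
  static DEFINE_PER_CPU(raw_spinlock_t, exit_tasks_lock);

  struct exit_entry {
          struct list_head node;
  };

  /* Early in a do_exit()-like path: register on this CPU's list. */
  static void exit_tasks_start(struct exit_entry *e)
  {
          raw_spinlock_t *lock = raw_cpu_ptr(&exit_tasks_lock);

          raw_spin_lock(lock);
          list_add(&e->node, raw_cpu_ptr(&exit_tasks_list));
          raw_spin_unlock(lock);
  }

The idea is that scanning these short percpu lists replaces the SRCU
read-side critical section that participated in the deadlock.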
Merge tag 'for-6.9/io_uring-20240310' of git://git.kernel.dk/linux
Pull io_uring updates from Jens Axboe:
- Make running of task_work internal loops more fair, and unify how the
different methods deal with them (me)
- Support for per-ring NAPI. The two minor networking patches are in a
shared branch with netdev (Stefan)
- Add support for truncate (Tony)
- Export SQPOLL utilization stats (Xiaobing)
- Multishot fixes (Pavel)
- Fix for a race in manipulating the request flags via poll (Pavel)
- Cleanup the multishot checking by making it generic, moving it out of
opcode handlers (Pavel)
- Various tweaks and cleanups (me, Kunwu, Alexander)
* tag 'for-6.9/io_uring-20240310' of git://git.kernel.dk/linux: (53 commits)
io_uring: Fix sqpoll utilization check racing with dying sqpoll
io_uring/net: dedup io_recv_finish req completion
io_uring: refactor DEFER_TASKRUN multishot checks
io_uring: fix mshot io-wq checks
io_uring/net: add io_req_msg_cleanup() helper
io_uring/net: simplify msghd->msg_inq checking
io_uring/kbuf: rename REQ_F_PARTIAL_IO to REQ_F_BL_NO_RECYCLE
io_uring/net: remove dependency on REQ_F_PARTIAL_IO for sr->done_io
io_uring/net: correctly handle multishot recvmsg retry setup
io_uring/net: clear REQ_F_BL_EMPTY in the multishot retry handler
io_uring: fix io_queue_proc modifying req->flags
io_uring: fix mshot read defer taskrun cqe posting
io_uring/net: fix overflow check in io_recvmsg_mshot_prep()
io_uring/net: correct the type of variable
io_uring/sqpoll: statistics of the true utilization of sq threads
io_uring/net: move recv/recvmsg flags out of retry loop
io_uring/kbuf: flag request if buffer pool is empty after buffer pick
io_uring/net: improve the usercopy for sendmsg/recvmsg
io_uring/net: move receive multishot out of the generic msghdr path
io_uring/net: unify how recvmsg and sendmsg copy in the msghdr
...
Merge tag 'vfs-6.9.uuid' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs
Pull vfs uuid updates from Christian Brauner:
"This adds two new ioctl()s for getting the filesystem uuid and
retrieving the sysfs path based on the path of a mounted filesystem.
Getting the filesystem uuid has been implemented in
filesystem-specific code for a while; it is now lifted into a
generic ioctl (a usage sketch follows the commit list below)"
* tag 'vfs-6.9.uuid' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs:
xfs: add support for FS_IOC_GETFSSYSFSPATH
fs: add FS_IOC_GETFSSYSFSPATH
fat: Hook up sb->s_uuid
fs: FS_IOC_GETUUID
ovl: convert to super_set_uuid()
fs: super_set_uuid()
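A minimal userspace usage sketch for the new ioctl (hedged: it assumes
headers new enough to carry FS_IOC_GETUUID and the struct fsuuid2 uapi
type added by this series):

  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/ioctl.h>
  #include <linux/fs.h>

  int main(int argc, char **argv)
  {
          struct fsuuid2 uuid = { 0 };
          int fd = open(argc > 1 ? argv[1] : ".", O_RDONLY);

          if (fd < 0 || ioctl(fd, FS_IOC_GETUUID, &uuid) < 0) {
                  perror("FS_IOC_GETUUID");
                  return 1;
          }
          for (int i = 0; i < uuid.len; i++)
                  printf("%02x", uuid.uuid[i]);
          printf("\n");
          return 0;
  }

FS_IOC_GETFSSYSFSPATH works the same way, returning the filesystem's
path below /sys/fs instead of its uuid.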
Merge tag 'vfs-6.9.super' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs
Pull block handle updates from Christian Brauner:
"Last cycle we changed opening of block devices, and opening a block
device would return a bdev_handle. This allowed us to implement
support for restricting and forbidding writes to mounted block
devices. It was accompanied by converting and adding helpers to
operate on bdev_handles instead of plain block devices.
That was already a good step forward but ultimately it isn't necessary
to have special purpose helpers for opening block devices internally
that return a bdev_handle.
Fundamentally, opening a block device internally should just be
equivalent to opening files. So now all internal opens of block
devices return files just as a userspace open would. Instead of
introducing a separate indirection into bdev_open_by_*() via struct
bdev_handle, bdev_file_open_by_*() is made to just return a struct
file. Opening and closing a block device just becomes equivalent to
opening and closing a file.
This all works well because internally we already have a pseudo fs for
block devices and so opening block devices is simple. There are a
few places where we needed to be careful, such as during boot when the
kernel is supposed to mount the rootfs directly without init doing it.
Here we need to take care to ensure that we flush out any asynchronous
file close. That's what we already do for opening, unpacking, and
closing the initramfs. So nothing new here.
The equivalence of opening and closing block devices to regular files
is a win in and of itself. But it also has various other advantages.
We can remove struct bdev_handle completely. Various low-level helpers
are now private to the block layer. Other helpers could simply be
removed entirely.
A follow-up series that is already reviewed builds on this and makes it
possible to remove bdev->bd_inode and allows various clean ups of the
buffer head code as well. All places where we stashed a bdev_handle
now just stash a file and use simple accessors to get to the actual
block device, which was already the case for bdev_handle (see the
sketch after the commit list below)"
* tag 'vfs-6.9.super' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs: (35 commits)
block: remove bdev_handle completely
block: don't rely on BLK_OPEN_RESTRICT_WRITES when yielding write access
bdev: remove bdev pointer from struct bdev_handle
bdev: make struct bdev_handle private to the block layer
bdev: make bdev_{release, open_by_dev}() private to block layer
bdev: remove bdev_open_by_path()
reiserfs: port block device access to file
ocfs2: port block device access to file
nfs: port block device access to files
jfs: port block device access to file
f2fs: port block device access to files
ext4: port block device access to file
erofs: port device access to file
btrfs: port device access to file
bcachefs: port block device access to file
target: port block device access to file
s390: port block device access to file
nvme: port block device access to file
block2mtd: port device access to files
bcache: port block device access to files
...
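The conversion pattern the per-filesystem patches above follow looks
roughly like this (a hedged kernel-style sketch; error handling is
trimmed and the surrounding names are invented, though
bdev_file_open_by_path() and file_bdev() are the interfaces named in
the series):

  #include <linux/blkdev.h>
  #include <linux/file.h>

  static struct file *bdev_file;  /* was: struct bdev_handle *handle */

  static int my_open_bdev(const char *path, void *holder)
  {
          /* was: handle = bdev_open_by_path(path, mode, holder, NULL); */
          bdev_file = bdev_file_open_by_path(path,
                                             BLK_OPEN_READ | BLK_OPEN_WRITE,
                                             holder, NULL);
          if (IS_ERR(bdev_file))
                  return PTR_ERR(bdev_file);

          /* was: handle->bdev; now reached through an accessor */
          pr_info("opened %pg\n", file_bdev(bdev_file));
          return 0;
  }

  static void my_close_bdev(void)
  {
          /* was: bdev_release(handle); now just close the file */
          fput(bdev_file);
  }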