The lightweight spinlock checks verify that a spinlock either has the
value 0 (spinlock locked), or that no bits other than those in
__ARCH_SPIN_LOCK_UNLOCKED_VAL are set.
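For illustration, here is a plain C sketch of the invariant being
checked (a sketch only: the kernel enforces it with inline assembler
and a break instruction, and 0x1a46 is the parisc
__ARCH_SPIN_LOCK_UNLOCKED_VAL from spinlock_types.h):

#include <stdio.h>
#include <stdlib.h>

#define __ARCH_SPIN_LOCK_UNLOCKED_VAL 0x1a46

static void lightweight_spin_check(unsigned int lockval)
{
	/* Valid states: 0 (locked), or only bits of the magic value set. */
	if (lockval != 0 && (lockval & ~__ARCH_SPIN_LOCK_UNLOCKED_VAL)) {
		fprintf(stderr, "spinlock corruption: 0x%x\n", lockval);
		abort();	/* the kernel traps via a break instruction */
	}
}

int main(void)
{
	lightweight_spin_check(0);				/* locked: ok */
	lightweight_spin_check(__ARCH_SPIN_LOCK_UNLOCKED_VAL);	/* unlocked: ok */
	lightweight_spin_check(0x40a01b58);			/* overwrite: aborts */
	return 0;
}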
This breaks the current LWS code, which unlocks a lock by writing the
address of the lock into the lock word, an optimization that saved one
assembler instruction.
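To see the conflict: almost any real lock address has bits set outside
the unlocked-value mask, so a lock word holding its own address fails
the check above. A hypothetical user-space illustration:

#include <stdio.h>

#define __ARCH_SPIN_LOCK_UNLOCKED_VAL 0x1a46

int main(void)
{
	static unsigned int lockword;

	/* Old LWS unlock: reuse the lock address, already held in a
	 * register, as the "unlocked" marker (saves loading a constant). */
	lockword = (unsigned int)(unsigned long)&lockword;

	/* The address has bits outside 0x1a46, so the lightweight
	 * check (mis)reports the lock as corrupted. */
	printf("bits outside the unlocked mask: 0x%x\n",
	       lockword & ~__ARCH_SPIN_LOCK_UNLOCKED_VAL);
	return 0;
}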
Fix it by making spinlock_types.h accessible to asm code, changing the
LWS spinlock-unlocking code to write __ARCH_SPIN_LOCK_UNLOCKED_VAL into
the lock word, and adding the missing lightweight spinlock checks to
the LWS path. Finally, make the spinlock checks dependent on
DEBUG_KERNEL.
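Rendered in C (the actual change is in the assembler LWS path in
arch/parisc/kernel/syscall.S, so this is only a sketch), the unlock now
stores the canonical unlocked value:

#define __ARCH_SPIN_LOCK_UNLOCKED_VAL 0x1a46	/* from spinlock_types.h */

/* One extra instruction versus the old address trick: the constant
 * must be loaded before the store. */
static inline void lws_unlock(volatile unsigned int *lock)
{
	*lock = __ARCH_SPIN_LOCK_UNLOCKED_VAL;	/* was: *lock = (lock address) */
}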
Noticed-by: John David Anglin <dave.anglin@bell.net>
Signed-off-by: Helge Deller <deller@gmx.de>
Tested-by: John David Anglin <dave.anglin@bell.net>
Cc: stable@vger.kernel.org # v6.4+
Fixes: 15e64ef652 ("parisc: Add lightweight spinlock checks")

# SPDX-License-Identifier: GPL-2.0
#
config LIGHTWEIGHT_SPINLOCK_CHECK
	bool "Enable lightweight spinlock checks"
	depends on DEBUG_KERNEL && SMP && !DEBUG_SPINLOCK
	default y
	help
	  Add checks with low performance impact to the spinlock functions
	  to catch memory overwrites at runtime. For more advanced
	  spinlock debugging you should choose the DEBUG_SPINLOCK option
	  which will detect uninitialized spinlocks too.
	  If unsure say Y here.

config TLB_PTLOCK
	bool "Use page table locks in TLB fault handler"
	depends on SMP
	default n
	help
	  Select this option to enable page table locking in the TLB
	  fault handler. This ensures that page table entries are
	  updated consistently on SMP machines at the expense of some
	  loss in performance.