linux/arch/riscv/include/asm/jump_label.h
Andy Chiu 9ddfc3cd80
riscv: jump_label: Fixup unaligned arch_static_branch function
Runtime code patching must be done at a naturally aligned address, or we
may execute on a partial instruction.

We encountered problems traced back to static jump functions during
testing. We switched the tracer randomly every 1~5 seconds on a
dual-core QEMU setup and found the kernel stuck at a static branch
where it jumps to itself.

The reason is that the static branch was 2-byte aligned but not 4-byte
aligned. The kernel would then patch the instruction, either J or NOP,
with two half-word stores if the machine does not have efficient
unaligned accesses. Thus, moments exist during the transition where
half of the NOP is mixed with half of the J. In our particular case, on
a little-endian machine, the upper half of the NOP was combined with
the lower half of the J when enabling the branch, resulting in a jump
that jumped to itself. Conversely, disabling the branch would produce a
HINT instruction, but that might not be observable.
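A minimal user-space sketch of that torn write (standard RV32I/RV64I
encodings; assumes a little-endian host; the names and the +8 jump
target are invented for illustration, this is not kernel code):

	#include <stdint.h>
	#include <inttypes.h>
	#include <stdio.h>

	#define NOP_INSN  0x00000013u	/* addi x0, x0, 0 */
	#define JAL_INSN  0x0080006fu	/* jal x0, +8 -- an example branch */

	union insn {
		uint32_t word;
		uint16_t half[2];	/* half[0] is the low halfword on LE */
	};

	int main(void)
	{
		union insn slot = { .word = NOP_INSN };	/* branch disabled */

		/*
		 * Patch NOP -> J with two halfword stores, as a 2-byte-aligned
		 * site would be patched on a machine without efficient
		 * unaligned accesses. Between the stores, the low half of the
		 * J sits under the high half of the NOP: 0x0000006f, i.e.
		 * "jal x0, 0" -- a jump to itself.
		 */
		slot.half[0] = (uint16_t)JAL_INSN;
		printf("transient insn: 0x%08" PRIx32 "\n", slot.word); /* 0x0000006f */

		slot.half[1] = (uint16_t)(JAL_INSN >> 16);
		printf("final insn:     0x%08" PRIx32 "\n", slot.word); /* 0x0080006f */

		return 0;
	}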

ARM64 does not have this problem since all instructions must be 4-byte
aligned.

Fixes: ebc00dde8a ("riscv: Add jump-label implementation")
Link: https://lore.kernel.org/linux-riscv/20220913094252.3555240-6-andy.chiu@sifive.com/
Reviewed-by: Greentime Hu <greentime.hu@sifive.com>
Signed-off-by: Andy Chiu <andy.chiu@sifive.com>
Signed-off-by: Guo Ren <guoren@kernel.org>
Link: https://lore.kernel.org/r/20230206090440.1255001-1-guoren@kernel.org
Cc: stable@vger.kernel.org
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-02-21 17:21:16 -08:00

/* SPDX-License-Identifier: GPL-2.0-only */
/*
 * Copyright (C) 2020 Emil Renner Berthing
 *
 * Based on arch/arm64/include/asm/jump_label.h
 */
#ifndef __ASM_JUMP_LABEL_H
#define __ASM_JUMP_LABEL_H

#ifndef __ASSEMBLY__

#include <linux/types.h>
#include <asm/asm.h>

#define JUMP_LABEL_NOP_SIZE 4
static __always_inline bool arch_static_branch(struct static_key * const key,
					       const bool branch)
{
	/*
	 * ".align 2" pads to a 4-byte boundary (the argument is a power of
	 * two), so the patched NOP/J site is always naturally aligned.
	 */
	asm_volatile_goto(
		"	.align		2			\n\t"
		"	.option push				\n\t"
		"	.option norelax				\n\t"
		"	.option norvc				\n\t"
		"1:	nop					\n\t"
		"	.option pop				\n\t"
		"	.pushsection	__jump_table, \"aw\"	\n\t"
		"	.align		" RISCV_LGPTR "		\n\t"
		"	.long		1b - ., %l[label] - .	\n\t"
		"	" RISCV_PTR "	%0 - .			\n\t"
		"	.popsection				\n\t"
		:  :  "i"(&((char *)key)[branch]) :  : label);

	return false;
label:
	return true;
}

static __always_inline bool arch_static_branch_jump(struct static_key * const key,
						    const bool branch)
{
	/* Same patch site as above, but the default instruction is the jump. */
	asm_volatile_goto(
		"	.align		2			\n\t"
		"	.option push				\n\t"
		"	.option norelax				\n\t"
		"	.option norvc				\n\t"
		"1:	jal		zero, %l[label]		\n\t"
		"	.option pop				\n\t"
		"	.pushsection	__jump_table, \"aw\"	\n\t"
		"	.align		" RISCV_LGPTR "		\n\t"
		"	.long		1b - ., %l[label] - .	\n\t"
		"	" RISCV_PTR "	%0 - .			\n\t"
		"	.popsection				\n\t"
		:  :  "i"(&((char *)key)[branch]) :  : label);

	return false;
label:
	return true;
}

#endif /* __ASSEMBLY__ */
#endif /* __ASM_JUMP_LABEL_H */
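
For reference, a hedged sketch of how these arch hooks are reached through
the generic static-key API in <linux/jump_label.h>; the key and function
names below are invented for illustration:

	#include <linux/jump_label.h>

	static DEFINE_STATIC_KEY_FALSE(my_feature_key);	/* hypothetical key */

	extern void my_slow_extra_work(void);		/* hypothetical helper */

	void my_hot_path(void)
	{
		/*
		 * With the key false this compiles down to the 4-byte NOP
		 * emitted by arch_static_branch(); enabling the key patches
		 * that NOP into a jump at runtime -- the patch site the fix
		 * above aligns.
		 */
		if (static_branch_unlikely(&my_feature_key))
			my_slow_extra_work();
	}

	void my_feature_enable(void)
	{
		static_branch_enable(&my_feature_key);	/* patches NOP -> J */
	}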