linux/arch/riscv/kernel/copy-unaligned.S
Evan Green 584ea6564b
RISC-V: Probe for unaligned access speed
Rather than deferring unaligned access speed determinations to a vendor
function, let's probe unaligned accesses directly and find out how fast
they are. If we
determine that an unaligned word access is faster than N byte accesses,
mark the hardware's unaligned access as "fast". Otherwise, we mark
accesses as slow.
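
For illustration, the decision boils down to something like the sketch
below (a sketch only, with an assumed helper name; the real probe code
may differ in its details):

  #include <linux/types.h>
  #include <asm/hwprobe.h>        /* RISCV_HWPROBE_MISALIGNED_* */

  /*
   * word_cycles/byte_cycles: best observed time to copy the same
   * misaligned buffer with word accesses vs. byte accesses.
   */
  static int classify_misaligned_speed(u64 word_cycles, u64 byte_cycles)
  {
          /* An unusable (zero) timing means we can't decide. */
          if (word_cycles == 0 || byte_cycles == 0)
                  return RISCV_HWPROBE_MISALIGNED_UNKNOWN;

          /* Word copies beating byte copies of the same data is "fast". */
          if (word_cycles < byte_cycles)
                  return RISCV_HWPROBE_MISALIGNED_FAST;

          return RISCV_HWPROBE_MISALIGNED_SLOW;
  }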

The algorithm itself runs for a fixed amount of jiffies. Within each
iteration it attempts to time a single loop, and then keeps only the best
(fastest) loop it saw. This algorithm was found to have lower variance from
run to run than my first attempt, which counted the total number of
iterations that could be done in that fixed amount of jiffies. By taking
only the best iteration in the loop, assuming at least one loop wasn't
perturbed by an interrupt, we eliminate the effects of interrupts and
other "warm up" factors like branch prediction. The only downside is it
depends on having an rdtime granular and accurate enough to measure a
single copy. If we ever manage to complete a loop in 0 rdtime ticks, we
leave the unaligned setting at UNKNOWN.
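
As a sketch of that measurement loop (the helper name and the one-jiffy
window here are illustrative assumptions, not the exact kernel code):

  #include <linux/jiffies.h>
  #include <linux/kernel.h>       /* U64_MAX */
  #include <linux/timex.h>        /* get_cycles() */
  #include <linux/types.h>

  /* Time @copy repeatedly for one jiffy and keep the fastest single run. */
  static u64 best_copy_cycles(void (*copy)(void *, const void *, size_t),
                              void *dst, const void *src, size_t size)
  {
          unsigned long start_jiffies = jiffies;
          u64 best = U64_MAX, start, elapsed;

          while (time_before(jiffies, start_jiffies + 1)) {
                  start = get_cycles();
                  copy(dst, src, size);
                  elapsed = get_cycles() - start;
                  if (elapsed < best)
                          best = elapsed;
          }

          /* A never-updated result means rdtime was too coarse to trust. */
          return (best == U64_MAX) ? 0 : best;
  }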

There is a slight change in user-visible behavior here. Previously, all
boards except the THead C906 reported misaligned access speed of
UNKNOWN. C906 reported FAST. With this change, since we're now measuring
misaligned access speed on each hart, all RISC-V systems will have this
key set as either FAST or SLOW.
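
From userspace, the key can be read back via the hwprobe syscall; the
snippet below is a sketch assuming uAPI headers new enough to carry
__NR_riscv_hwprobe and the RISCV_HWPROBE_* definitions:

  #include <stdio.h>
  #include <unistd.h>
  #include <asm/hwprobe.h>        /* struct riscv_hwprobe, RISCV_HWPROBE_* */
  #include <asm/unistd.h>         /* __NR_riscv_hwprobe */

  int main(void)
  {
          struct riscv_hwprobe pair = { .key = RISCV_HWPROBE_KEY_CPUPERF_0 };

          /* cpu_count = 0 and cpus = NULL asks about all online CPUs. */
          if (syscall(__NR_riscv_hwprobe, &pair, 1, 0, NULL, 0) != 0) {
                  perror("riscv_hwprobe");
                  return 1;
          }

          switch (pair.value & RISCV_HWPROBE_MISALIGNED_MASK) {
          case RISCV_HWPROBE_MISALIGNED_FAST:
                  puts("misaligned accesses: FAST");
                  break;
          case RISCV_HWPROBE_MISALIGNED_SLOW:
                  puts("misaligned accesses: SLOW");
                  break;
          default:
                  puts("misaligned accesses: other/unknown");
                  break;
          }
          return 0;
  }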

Currently, we don't have a way to confidently measure the difference between
SLOW and EMULATED, so we label anything not fast as SLOW. This will
mislabel some systems that are actually EMULATED as SLOW. When we get
support for delegating misaligned access traps to the kernel (as opposed
to the firmware quietly handling it), we can explicitly test in Linux to
see if unaligned accesses trap. Those systems will start to report
EMULATED, though older (today's) systems without that new SBI mechanism
will continue to report SLOW.

I've updated the documentation for those hwprobe values to reflect
this, specifically: SLOW may or may not be emulated by software, and FAST
means faster than equivalent byte accesses. The change
in documentation is accurate with respect to both the former and current
behavior.

Signed-off-by: Evan Green <evan@rivosinc.com>
Acked-by: Conor Dooley <conor.dooley@microchip.com>
Link: https://lore.kernel.org/r/20230818194136.4084400-2-evan@rivosinc.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-09-01 09:06:25 -07:00

/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright (C) 2023 Rivos Inc. */

#include <linux/linkage.h>
#include <asm/asm.h>

	.text

/* void __riscv_copy_words_unaligned(void *, const void *, size_t) */
/* Performs a memcpy without aligning buffers, using word loads and stores. */
/* Note: The size is truncated to a multiple of 8 * SZREG */
ENTRY(__riscv_copy_words_unaligned)
	andi	a4, a2, ~((8*SZREG)-1)
	beqz	a4, 2f
	add	a3, a1, a4
1:
	REG_L	a4, 0(a1)
	REG_L	a5, SZREG(a1)
	REG_L	a6, 2*SZREG(a1)
	REG_L	a7, 3*SZREG(a1)
	REG_L	t0, 4*SZREG(a1)
	REG_L	t1, 5*SZREG(a1)
	REG_L	t2, 6*SZREG(a1)
	REG_L	t3, 7*SZREG(a1)
	REG_S	a4, 0(a0)
	REG_S	a5, SZREG(a0)
	REG_S	a6, 2*SZREG(a0)
	REG_S	a7, 3*SZREG(a0)
	REG_S	t0, 4*SZREG(a0)
	REG_S	t1, 5*SZREG(a0)
	REG_S	t2, 6*SZREG(a0)
	REG_S	t3, 7*SZREG(a0)
	addi	a0, a0, 8*SZREG
	addi	a1, a1, 8*SZREG
	bltu	a1, a3, 1b

2:
	ret
END(__riscv_copy_words_unaligned)

/* void __riscv_copy_bytes_unaligned(void *, const void *, size_t) */
/* Performs a memcpy without aligning buffers, using only byte accesses. */
/* Note: The size is truncated to a multiple of 8 */
ENTRY(__riscv_copy_bytes_unaligned)
	andi	a4, a2, ~(8-1)
	beqz	a4, 2f
	add	a3, a1, a4
1:
	lb	a4, 0(a1)
	lb	a5, 1(a1)
	lb	a6, 2(a1)
	lb	a7, 3(a1)
	lb	t0, 4(a1)
	lb	t1, 5(a1)
	lb	t2, 6(a1)
	lb	t3, 7(a1)
	sb	a4, 0(a0)
	sb	a5, 1(a0)
	sb	a6, 2(a0)
	sb	a7, 3(a0)
	sb	t0, 4(a0)
	sb	t1, 5(a0)
	sb	t2, 6(a0)
	sb	t3, 7(a0)
	addi	a0, a0, 8
	addi	a1, a1, 8
	bltu	a1, a3, 1b

2:
	ret
END(__riscv_copy_bytes_unaligned)
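
As a usage note, the probe path is expected to feed these routines
deliberately misaligned pointers; a hypothetical caller could look like
this (the buffer size, names, and the one-byte offset are illustrative,
not the actual probe code):

  #include <linux/types.h>

  void __riscv_copy_words_unaligned(void *dst, const void *src, size_t size);
  void __riscv_copy_bytes_unaligned(void *dst, const void *src, size_t size);

  #define PROBE_COPY_SIZE 0x4000  /* illustrative; a multiple of 8*SZREG */

  /* dst_buf/src_buf must each be at least PROBE_COPY_SIZE + 1 bytes. */
  static void probe_one_copy(u8 *dst_buf, u8 *src_buf)
  {
          /* Offset by one byte so the word accesses are misaligned. */
          void *dst = dst_buf + 1;
          const void *src = src_buf + 1;

          __riscv_copy_words_unaligned(dst, src, PROBE_COPY_SIZE);
          __riscv_copy_bytes_unaligned(dst, src, PROBE_COPY_SIZE);
  }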