linux/arch/arm64/lib
Mark Rutland 139f9ab73d arm64: lib: __arch_copy_to_user(): fold fixups into body
Like other functions, __arch_copy_to_user() places its exception fixups
in the `.fixup` section without any clear association with
__arch_copy_to_user() itself. If we backtrace the fixup code, it will be
symbolized as an offset from the nearest prior symbol, which happens to
be `__entry_tramp_text_end`. Further, since the PC adjustment for the
fixup is akin to a direct branch rather than a function call,
__arch_copy_to_user() itself will be missing from the backtrace.

This is confusing and hinders debugging. In general this pattern will
also be problematic for CONFIG_LIVEPATCH, since fixups often return to
their associated function, but this isn't accurately captured in the
stacktrace.

To solve these issues for assembly functions, we must move fixups into
the body of the functions themselves, after the usual fast-path returns.
This patch does so for __arch_copy_to_user().
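For illustration only, here is a simplified sketch of the structural change (this is not the actual copy_to_user.S source: the USER() extable helper and SYM_FUNC_* annotations are the existing arm64 macros, while the `dst`/`end` register aliases, the `sttr` access, and the label numbering are placeholders):

    Before: the fixup lives in the out-of-line .fixup section

	SYM_FUNC_START(__arch_copy_to_user)
		/* ... copy loop; USER() records an extable entry
		 * sending a faulting access to label 9998 ... */
	USER(9998f, sttr x3, [dst])
		mov	x0, #0			// fast path: fully copied
		ret
	SYM_FUNC_END(__arch_copy_to_user)

		.section .fixup, "ax"
	9998:	sub	x0, end, dst		// bytes not copied
		ret
		.previous

    After: the fixup is folded into the body, past the fast-path ret

	SYM_FUNC_START(__arch_copy_to_user)
		/* ... copy loop, unchanged ... */
	USER(9998f, sttr x3, [dst])
		mov	x0, #0			// fast path: fully copied
		ret
	9998:	sub	x0, end, dst		// bytes not copied
		ret
	SYM_FUNC_END(__arch_copy_to_user)

With the fixup inside the function's symbol bounds, a PC in the fixup code symbolizes as an offset into __arch_copy_to_user() rather than from whichever unrelated symbol happens to precede the .fixup section, so the function appears in the backtrace as described above.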

Inline assembly will be dealt with in subsequent patches.

Other than the improved backtracing, there should be no functional
change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20211019160219.5202-4-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2021-10-21 10:45:21 +01:00
clear_page.S arm64: lib: Annotate {clear, copy}_page() as position-independent 2021-03-19 12:01:19 +00:00
clear_user.S arm64: lib: __arch_clear_user(): fold fixups into body 2021-10-21 10:45:21 +01:00
copy_from_user.S arm64: lib: __arch_copy_from_user(): fold fixups into body 2021-10-21 10:45:21 +01:00
copy_page.S arm64: lib: Annotate {clear, copy}_page() as position-independent 2021-03-19 12:01:19 +00:00
copy_template.S treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 234 2019-06-19 17:09:07 +02:00
copy_to_user.S arm64: lib: __arch_copy_to_user(): fold fixups into body 2021-10-21 10:45:21 +01:00
crc32.S arm64: lib: Consistently enable crc32 extension 2020-04-28 14:36:32 +01:00
csum.c arm64: csum: Disable KASAN for do_csum() 2020-04-15 21:36:41 +01:00
delay.c treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 234 2019-06-19 17:09:07 +02:00
error-inject.c arm64: Add support for function error injection 2019-08-07 13:53:09 +01:00
insn.c arm64: use __func__ to get function name in pr_err 2021-07-30 16:26:16 +01:00
kasan_sw_tags.S kasan: arm64: support specialized outlined tag mismatch checks 2021-05-26 23:31:26 +01:00
Makefile arch: remove compat_alloc_user_space 2021-09-08 15:32:35 -07:00
memchr.S arm64: Better optimised memchr() 2021-06-01 18:34:38 +01:00
memcmp.S arm64: update string routine copyrights and URLs 2021-06-02 17:58:26 +01:00
memcpy.S arm64: update string routine copyrights and URLs 2021-06-02 17:58:26 +01:00
memset.S arm64: Change .weak to SYM_FUNC_START_WEAK_PI for arch/arm64/lib/mem*.S 2020-10-30 08:32:31 +00:00
mte.S arm64: mte: handle tags zeroing at page allocation time 2021-06-04 19:32:21 +01:00
strchr.S arm64: lib: Use modern annotations for assembly functions 2020-01-08 12:23:02 +00:00
strcmp.S arm64: Mitigate MTE issues with str{n}cmp() 2021-09-21 14:50:19 +01:00
strlen.S arm64: fix strlen() with CONFIG_KASAN_HW_TAGS 2021-07-12 13:36:22 +01:00
strncmp.S arm64: Mitigate MTE issues with str{n}cmp() 2021-09-21 14:50:19 +01:00
strnlen.S arm64: lib: Use modern annotations for assembly functions 2020-01-08 12:23:02 +00:00
strrchr.S arm64: lib: Use modern annotations for assembly functions 2020-01-08 12:23:02 +00:00
tishift.S arm64: lib: Use modern annotations for assembly functions 2020-01-08 12:23:02 +00:00
uaccess_flushcache.c arm64: Rename arm64-internal cache maintenance functions 2021-05-25 19:27:49 +01:00
xor-neon.c treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 500 2019-06-19 17:09:55 +02:00