#
# arch/arm64/Makefile
#
# This file is included by the global makefile so that you can add your own
# architecture-specific flags and dependencies.
#
# This file is subject to the terms and conditions of the GNU General Public
# License. See the file "COPYING" in the main directory of this archive
# for more details.
#
# Copyright (C) 1995-2001 by Russell King
LDFLAGS_vmlinux := --no-undefined -X

ifeq ($(CONFIG_RELOCATABLE),y)
# Pass --no-apply-dynamic-relocs to restore pre-binutils-2.27 behaviour
# for relative relocs, since this leads to better Image compression
# with the relocation offsets always being zero.
LDFLAGS_vmlinux += -shared -Bsymbolic -z notext \
			$(call ld-option, --no-apply-dynamic-relocs)
endif

ifeq ($(CONFIG_ARM64_ERRATUM_843419),y)
  ifeq ($(CONFIG_ARM64_LD_HAS_FIX_ERRATUM_843419),y)
LDFLAGS_vmlinux += --fix-cortex-a53-843419
  endif
endif
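
# Check whether the compiler accepts the "K" constraint for the logical
# immediate used below; if the test program compiles, cc_has_k_constraint
# expands to -DCONFIG_CC_HAS_K_CONSTRAINT=1, which is added to KBUILD_CFLAGS
# further down.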
cc_has_k_constraint := $(call try-run,echo \
	'int main(void) { \
		asm volatile("and w0, w0, %w0" :: "K" (4294967295)); \
		return 0; \
	}' | $(CC) -S -x c -o "$$TMP" -,,-DCONFIG_CC_HAS_K_CONSTRAINT=1)

ifeq ($(CONFIG_BROKEN_GAS_INST),y)
$(warning Detected assembler with broken .inst; disassembly will be unreliable)
endif

KBUILD_CFLAGS += -mgeneral-regs-only \
		   $(compat_vdso) $(cc_has_k_constraint)
KBUILD_CFLAGS += $(call cc-disable-warning, psabi)
KBUILD_AFLAGS += $(compat_vdso)

KBUILD_CFLAGS += $(call cc-option,-mabi=lp64)
KBUILD_AFLAGS += $(call cc-option,-mabi=lp64)

# Avoid generating .eh_frame* sections.
ifneq ($(CONFIG_UNWIND_TABLES),y)
KBUILD_CFLAGS += -fno-asynchronous-unwind-tables -fno-unwind-tables
KBUILD_AFLAGS += -fno-asynchronous-unwind-tables -fno-unwind-tables
else
KBUILD_CFLAGS += -fasynchronous-unwind-tables
KBUILD_AFLAGS += -fasynchronous-unwind-tables
endif
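
# With CONFIG_STACKPROTECTOR_PER_TASK the stack canary is addressed relative
# to sp_el0; the TSK_STACK_CANARY offset is extracted from the generated
# asm-offsets.h, which is why this step is ordered after prepare0.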
ifeq ($(CONFIG_STACKPROTECTOR_PER_TASK),y)
prepare: stack_protector_prepare
stack_protector_prepare: prepare0
	$(eval KBUILD_CFLAGS += -mstack-protector-guard=sysreg \
				-mstack-protector-guard-reg=sp_el0 \
				-mstack-protector-guard-offset=$(shell \
				awk '{if ($$2 == "TSK_STACK_CANARY") print $$3;}' \
				include/generated/asm-offsets.h))
endif

ifeq ($(CONFIG_AS_HAS_ARMV8_2),y)
# make sure to pass the newest target architecture to -march.
asm-arch := armv8.2-a
endif

# Ensure that if the compiler supports branch protection we default it
# off; this will be overridden if we are using branch protection.
branch-prot-flags-y += $(call cc-option,-mbranch-protection=none)

ifeq ($(CONFIG_ARM64_PTR_AUTH_KERNEL),y)
branch-prot-flags-$(CONFIG_CC_HAS_SIGN_RETURN_ADDRESS) := -msign-return-address=all
# We enable additional protection for leaf functions as there is some
# narrow potential for ROP protection benefits and no substantial
# performance impact has been observed.
PACRET-y := pac-ret+leaf
# Using a shadow call stack in leaf functions is too costly, so avoid PAC there
# as well when we may be patching PAC into SCS
PACRET-$(CONFIG_UNWIND_PATCH_PAC_INTO_SCS) := pac-ret

ifeq ($(CONFIG_ARM64_BTI_KERNEL),y)
branch-prot-flags-$(CONFIG_CC_HAS_BRANCH_PROT_PAC_RET_BTI) := -mbranch-protection=$(PACRET-y)+bti
else
branch-prot-flags-$(CONFIG_CC_HAS_BRANCH_PROT_PAC_RET) := -mbranch-protection=$(PACRET-y)
endif
# -march=armv8.3-a enables the non-NOP PAC instructions. To keep the compiler
# from generating them, and consequently from breaking the single image
# contract, we pass it only to the assembler. This option is only needed for
# non-integrated assemblers.
ifeq ($(CONFIG_AS_HAS_PAC),y)
asm-arch := armv8.3-a
endif
endif

KBUILD_CFLAGS += $(branch-prot-flags-y)

ifeq ($(CONFIG_AS_HAS_ARMV8_4),y)
# make sure to pass the newest target architecture to -march.
asm-arch := armv8.4-a
endif

ifeq ($(CONFIG_AS_HAS_ARMV8_5),y)
# make sure to pass the newest target architecture to -march.
asm-arch := armv8.5-a
endif
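
# If any of the checks above selected a newer architecture revision, pass it
# to the assembler via -Wa,-march and expose the string to C code as
# ARM64_ASM_ARCH.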
ifdef asm-arch
KBUILD_CFLAGS += -Wa,-march=$(asm-arch) \
		 -DARM64_ASM_ARCH='"$(asm-arch)"'
endif

ifeq ($(CONFIG_SHADOW_CALL_STACK),y)
KBUILD_CFLAGS += -ffixed-x18
endif

ifeq ($(CONFIG_CPU_BIG_ENDIAN),y)
KBUILD_CPPFLAGS += -mbig-endian
CHECKFLAGS += -D__AARCH64EB__
# Prefer the baremetal ELF build target, but not all toolchains include
# it so fall back to the standard linux version if needed.
KBUILD_LDFLAGS += -EB $(call ld-option, -maarch64elfb, -maarch64linuxb -z norelro)
UTS_MACHINE := aarch64_be
else
KBUILD_CPPFLAGS += -mlittle-endian
CHECKFLAGS += -D__AARCH64EL__
# Same as above, prefer ELF but fall back to linux target if needed.
KBUILD_LDFLAGS += -EL $(call ld-option, -maarch64elf, -maarch64linux -z norelro)
UTS_MACHINE := aarch64
endif

ifeq ($(CONFIG_LD_IS_LLD),y)
KBUILD_LDFLAGS += -z norelro
endif

CHECKFLAGS += -D__aarch64__
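
# -fpatchable-function-entry=2 makes the compiler prefix every function with
# two NOPs, which ftrace patches at runtime.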
ifeq ($(CONFIG_DYNAMIC_FTRACE_WITH_ARGS),y)
KBUILD_CPPFLAGS += -DCC_USING_PATCHABLE_FUNCTION_ENTRY
CC_FLAGS_FTRACE := -fpatchable-function-entry=2
endif
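
# KASAN shadow granule size: 16 bytes per shadow byte (shift 4) for software
# tag-based KASAN, 8 bytes per shadow byte (shift 3) for generic KASAN. The
# shift is passed to the C, preprocessor and assembler flags below.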
ifeq ($(CONFIG_KASAN_SW_TAGS),y)
KASAN_SHADOW_SCALE_SHIFT := 4
else ifeq ($(CONFIG_KASAN_GENERIC),y)
KASAN_SHADOW_SCALE_SHIFT := 3
endif
KBUILD_CFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
KBUILD_CPPFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
KBUILD_AFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)

libs-y := arch/arm64/lib/ $(libs-y)
libs-$(CONFIG_EFI_STUB) += $(objtree)/drivers/firmware/efi/libstub/lib.a

# Default target when executing plain make
boot := arch/arm64/boot

ifeq ($(CONFIG_EFI_ZBOOT),)
KBUILD_IMAGE := $(boot)/Image.gz
else
KBUILD_IMAGE := $(boot)/vmlinuz.efi
endif

all:	$(notdir $(KBUILD_IMAGE))

Image vmlinuz.efi: vmlinux
	$(Q)$(MAKE) $(build)=$(boot) $(boot)/$@

Image.%: Image
	$(Q)$(MAKE) $(build)=$(boot) $(boot)/$@
2012-04-20 17:45:54 +04:00
2022-05-03 05:47:16 +03:00
install : KBUILD_IMAGE := $( boot ) /Image
2021-07-29 17:05:27 +03:00
install zinstall :
2022-05-03 05:47:16 +03:00
$( call cmd,install)
2012-04-20 17:45:54 +04:00
2013-06-16 23:32:44 +04:00
PHONY += vdso_install
vdso_install :
$( Q) $( MAKE) $( build) = arch/arm64/kernel/vdso $@
2020-08-28 02:40:12 +03:00
$( if $( CONFIG_COMPAT_VDSO) , \
$( Q) $( MAKE) $( build) = arch/arm64/kernel/vdso32 $@ )

archprepare:
	$(Q)$(MAKE) $(build)=arch/arm64/tools kapi
ifeq ($(CONFIG_ARM64_ERRATUM_843419),y)
  ifneq ($(CONFIG_ARM64_LD_HAS_FIX_ERRATUM_843419),y)
	@echo "warning: ld does not support --fix-cortex-a53-843419; kernel may be susceptible to erratum" >&2
  endif
endif
ifeq ($(CONFIG_ARM64_USE_LSE_ATOMICS),y)
  ifneq ($(CONFIG_ARM64_LSE_ATOMICS),y)
	@echo "warning: LSE atomics not supported by binutils" >&2
  endif
endif

ifeq ($(KBUILD_EXTMOD),)
# We need to generate vdso-offsets.h before compiling certain files in kernel/.
# In order to do that, we should use the archprepare target, but we can't since
# asm-offsets.h is included in some files used to generate vdso-offsets.h, and
# asm-offsets.h is built in prepare0, for which archprepare is a dependency.
# Therefore we need to generate the header after prepare0 has been made, hence
# this hack.
prepare: vdso_prepare
vdso_prepare: prepare0
	$(Q)$(MAKE) $(build)=arch/arm64/kernel/vdso \
	include/generated/vdso-offsets.h arch/arm64/kernel/vdso/vdso.so
ifdef CONFIG_COMPAT_VDSO
	$(Q)$(MAKE) $(build)=arch/arm64/kernel/vdso32 \
	include/generated/vdso32-offsets.h arch/arm64/kernel/vdso32/vdso.so
endif
endif

define archhelp
  echo  '* Image.gz - Compressed kernel image (arch/$(ARCH)/boot/Image.gz)'
  echo  '  Image    - Uncompressed kernel image (arch/$(ARCH)/boot/Image)'
  echo  '  install  - Install uncompressed kernel'
  echo  '  zinstall - Install compressed kernel'
  echo  '             Install using (your) ~/bin/installkernel or'
  echo  '             (distribution) /sbin/installkernel or'
  echo  '             install to $$(INSTALL_PATH) and run lilo'
endef