#
# arch/arm64/Makefile
#
# This file is included by the global makefile so that you can add your own
# architecture-specific flags and dependencies.
#
# This file is subject to the terms and conditions of the GNU General Public
# License. See the file "COPYING" in the main directory of this archive
# for more details.
#
# Copyright (C) 1995-2001 by Russell King
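# --no-undefined makes the link fail if any symbol is left unresolved;
# -X (--discard-locals) drops compiler-generated temporary local symbols.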
LDFLAGS_vmlinux := --no-undefined -X
ifeq ($(CONFIG_RELOCATABLE), y)
# Pass --no-apply-dynamic-relocs to restore pre-binutils-2.27 behaviour
# for relative relocs, since this leads to better Image compression
# with the relocation offsets always being zero.
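#
# Without this flag, the bfd linker from binutils >= 2.27 stores the addend
# values at the relocation offsets, which compresses noticeably worse. The
# relocations are applied again at boot when the kernel relocates itself,
# so keeping the offsets zeroed costs nothing at runtime.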
LDFLAGS_vmlinux += -shared -Bsymbolic -z notext \
			$(call ld-option, --no-apply-dynamic-relocs)
endif
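# Work around Cortex-A53 erratum 843419 by letting the linker rewrite the
# affected ADRP instructions, but only when the linker supports the option.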
ifeq ($(CONFIG_ARM64_ERRATUM_843419), y)
  ifeq ($(CONFIG_ARM64_LD_HAS_FIX_ERRATUM_843419),y)
LDFLAGS_vmlinux += --fix-cortex-a53-843419
  endif
endif
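# Probe whether the compiler handles the "K" constraint correctly for a
# 32-bit logical immediate; if so, CONFIG_CC_HAS_K_CONSTRAINT is defined
# for use by the arm64 inline assembly.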
cc_has_k_constraint := $(call try-run,echo				\
	'int main(void) {						\
		asm volatile("and w0, w0, %w0" :: "K" (4294967295));	\
		return 0;						\
	}' | $(CC) -S -x c -o "$$TMP" -,,-DCONFIG_CC_HAS_K_CONSTRAINT=1)
ifeq ($(CONFIG_BROKEN_GAS_INST), y)
$(warning Detected assembler with broken .inst; disassembly will be unreliable)
endif
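# -mgeneral-regs-only prevents the compiler from using FP/SIMD registers
# in kernel code.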
KBUILD_CFLAGS += -mgeneral-regs-only \
		   $(compat_vdso) $(cc_has_k_constraint)
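# -Wno-psabi is a GCC-only option that silences warnings about GCC ABI
# changes; use cc-disable-warning rather than passing it unconditionally,
# so compilers that do not know the option (e.g. clang) are unaffected.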
KBUILD_CFLAGS += $(call cc-disable-warning, psabi)
KBUILD_AFLAGS += $(compat_vdso)
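# Build for the LP64 ABI; cc-option keeps this harmless for toolchains
# that do not accept an explicit -mabi= flag.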
KBUILD_CFLAGS += $(call cc-option,-mabi=lp64)
KBUILD_AFLAGS += $(call cc-option,-mabi=lp64)
# Avoid generating .eh_frame* sections.
ifneq ($(CONFIG_UNWIND_TABLES), y)
KBUILD_CFLAGS += -fno-asynchronous-unwind-tables -fno-unwind-tables
KBUILD_AFLAGS += -fno-asynchronous-unwind-tables -fno-unwind-tables
else
KBUILD_CFLAGS += -fasynchronous-unwind-tables
KBUILD_AFLAGS += -fasynchronous-unwind-tables
endif
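# Per-task stack protector: read the canary through sp_el0 (the current
# task pointer) at the offset of the task's stack canary field, extracted
# from the generated asm-offsets.h (hence the ordering after prepare0).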
ifeq ($(CONFIG_STACKPROTECTOR_PER_TASK), y)
prepare: stack_protector_prepare
stack_protector_prepare: prepare0
	$(eval KBUILD_CFLAGS += -mstack-protector-guard=sysreg		  \
				-mstack-protector-guard-reg=sp_el0	  \
				-mstack-protector-guard-offset=$(shell	  \
			awk '{if ($$2 == "TSK_STACK_CANARY") print $$3;}' \
					include/generated/asm-offsets.h))
endif
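# Branch protection: with in-kernel BTI, sign the return addresses of
# non-leaf functions and emit BTI landing pads; with pointer authentication
# alone, fall back to return address signing only (-msign-return-address
# for compilers without -mbranch-protection support).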
ifeq ($(CONFIG_ARM64_BTI_KERNEL), y)
  KBUILD_CFLAGS += -mbranch-protection=pac-ret+bti
else ifeq ($(CONFIG_ARM64_PTR_AUTH_KERNEL), y)
  ifeq ($(CONFIG_CC_HAS_BRANCH_PROT_PAC_RET),y)
    KBUILD_CFLAGS += -mbranch-protection=pac-ret
  else
    KBUILD_CFLAGS += -msign-return-address=non-leaf
  endif
else
  KBUILD_CFLAGS += $(call cc-option,-mbranch-protection=none)
endif
# Tell the assembler to support instructions from the latest target
# architecture.
#
# For non-integrated assemblers we'll pass this on the command line, and for
# integrated assemblers we'll define ARM64_ASM_ARCH and ARM64_ASM_PREAMBLE for
# inline usage.
#
# We cannot pass the same arch flag to the compiler as this would allow it to
# freely generate instructions which are not supported by earlier architecture
# versions, which would prevent a single kernel image from working on earlier
# hardware.
ifeq ($(CONFIG_AS_HAS_ARMV8_5), y)
  asm-arch := armv8.5-a
else ifeq ($(CONFIG_AS_HAS_ARMV8_4), y)
  asm-arch := armv8.4-a
else ifeq ($(CONFIG_AS_HAS_ARMV8_3), y)
  asm-arch := armv8.3-a
else ifeq ($(CONFIG_AS_HAS_ARMV8_2), y)
  asm-arch := armv8.2-a
endif
ifdef asm-arch
KBUILD_CFLAGS	+= -Wa,-march=$(asm-arch) \
		   -DARM64_ASM_ARCH='"$(asm-arch)"'
endif
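# The shadow call stack pointer lives in x18, so tell the compiler not to
# allocate x18 as a general-purpose register.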
ifeq ($(CONFIG_SHADOW_CALL_STACK), y)
KBUILD_CFLAGS += -ffixed-x18
endif
ifeq ($(CONFIG_CPU_BIG_ENDIAN), y)
KBUILD_CPPFLAGS += -mbig-endian
CHECKFLAGS += -D__AARCH64EB__
# Prefer the baremetal ELF build target, but not all toolchains include
# it so fall back to the standard linux version if needed.
KBUILD_LDFLAGS += -EB $(call ld-option, -maarch64elfb, -maarch64linuxb -z norelro)
UTS_MACHINE := aarch64_be
else
KBUILD_CPPFLAGS += -mlittle-endian
CHECKFLAGS += -D__AARCH64EL__
# Same as above, prefer ELF but fall back to linux target if needed.
KBUILD_LDFLAGS += -EL $(call ld-option, -maarch64elf, -maarch64linux -z norelro)
UTS_MACHINE := aarch64
endif
ifeq ($(CONFIG_LD_IS_LLD), y)
KBUILD_LDFLAGS += -z norelro
endif
CHECKFLAGS += -D__aarch64__
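# With CALL_OPS, -fpatchable-function-entry=4,2 emits two NOPs before the
# function entry point (space for a literal pointer to the callsite's
# ftrace_ops) plus two NOPs at the entry point for the patched call;
# DYNAMIC_FTRACE_WITH_ARGS only needs the two entry-point NOPs, which
# ftrace patches into something like:
#	mov	x9, x30
#	bl	<ftrace-entry>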
ifeq ($(CONFIG_DYNAMIC_FTRACE_WITH_CALL_OPS), y)
  KBUILD_CPPFLAGS += -DCC_USING_PATCHABLE_FUNCTION_ENTRY
  CC_FLAGS_FTRACE := -fpatchable-function-entry=4,2
else ifeq ($(CONFIG_DYNAMIC_FTRACE_WITH_ARGS), y)
  KBUILD_CPPFLAGS += -DCC_USING_PATCHABLE_FUNCTION_ENTRY
  CC_FLAGS_FTRACE := -fpatchable-function-entry=2
endif
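# KASAN shadow scaling: software tag-based KASAN uses one shadow byte per
# 16 bytes of memory (shift 4), generic KASAN one byte per 8 bytes (shift 3).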
ifeq ($(CONFIG_KASAN_SW_TAGS), y)
KASAN_SHADOW_SCALE_SHIFT := 4
else ifeq ($(CONFIG_KASAN_GENERIC), y)
KASAN_SHADOW_SCALE_SHIFT := 3
endif

KBUILD_CFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
KBUILD_CPPFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
KBUILD_AFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
libs-y		:= arch/arm64/lib/ $(libs-y)
libs-$(CONFIG_EFI_STUB) += $(objtree)/drivers/firmware/efi/libstub/lib.a
# Default target when executing plain make
boot := arch/arm64/boot
ifeq ($(CONFIG_EFI_ZBOOT),)
KBUILD_IMAGE	:= $(boot)/Image.gz
else
KBUILD_IMAGE	:= $(boot)/vmlinuz.efi
endif
all:	$(notdir $(KBUILD_IMAGE))
vmlinuz.efi: Image
Image vmlinuz.efi: vmlinux
	$(Q)$(MAKE) $(build)=$(boot) $(boot)/$@
Image.%: Image
	$(Q)$(MAKE) $(build)=$(boot) $(boot)/$@
install: KBUILD_IMAGE := $(boot)/Image
install zinstall:
	$(call cmd,install)
archprepare:
	$(Q)$(MAKE) $(build)=arch/arm64/tools kapi
ifeq ($(CONFIG_ARM64_ERRATUM_843419), y)
  ifneq ($(CONFIG_ARM64_LD_HAS_FIX_ERRATUM_843419),y)
	@echo "warning: ld does not support --fix-cortex-a53-843419; kernel may be susceptible to erratum" >&2
  endif
endif
ifeq ($(CONFIG_ARM64_USE_LSE_ATOMICS), y)
  ifneq ($(CONFIG_ARM64_LSE_ATOMICS),y)
	@echo "warning: LSE atomics not supported by binutils" >&2
  endif
endif
ifeq ($(KBUILD_EXTMOD),)
# We need to generate vdso-offsets.h before compiling certain files in kernel/.
# In order to do that, we should use the archprepare target, but we can't since
# asm-offsets.h is included in some files used to generate vdso-offsets.h, and
# asm-offsets.h is built in prepare0, for which archprepare is a dependency.
# Therefore we need to generate the header after prepare0 has been made, hence
# this hack.
prepare: vdso_prepare
vdso_prepare: prepare0
	$(Q)$(MAKE) $(build)=arch/arm64/kernel/vdso \
	include/generated/vdso-offsets.h arch/arm64/kernel/vdso/vdso.so
ifdef CONFIG_COMPAT_VDSO
	$(Q)$(MAKE) $(build)=arch/arm64/kernel/vdso32 \
	include/generated/vdso32-offsets.h arch/arm64/kernel/vdso32/vdso.so
endif
endif
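# vdso binaries installed by 'make vdso_install' (to $(MODLIB)/vdso/, with
# any .dbg suffix stripped); the optional name after the ':' installs the
# compat vdso.so.dbg as vdso32.so.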
vdso-install-y += arch/arm64/kernel/vdso/vdso.so.dbg
vdso-install-$(CONFIG_COMPAT_VDSO) += arch/arm64/kernel/vdso32/vdso.so.dbg:vdso32.so
include $(srctree)/scripts/Makefile.defconf
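# virtconfig: defconfig with the 'virt' fragment merged on top, using the
# helper provided by scripts/Makefile.defconf included above.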
PHONY += virtconfig
virtconfig:
	$(call merge_into_defconfig_override,defconfig,virt)
define archhelp
  echo  '* Image.gz      - Compressed kernel image (arch/$(ARCH)/boot/Image.gz)'
  echo  '  Image         - Uncompressed kernel image (arch/$(ARCH)/boot/Image)'
  echo  '  install       - Install uncompressed kernel'
  echo  '  zinstall      - Install compressed kernel'
  echo  '                  Install using (your) ~/bin/installkernel or'
  echo  '                  (distribution) /sbin/installkernel or'
  echo  '                  install to $$(INSTALL_PATH) and run lilo'
endef