Merge branches 'for-next/misc', 'for-next/cache-ops-dzp', 'for-next/stacktrace', 'for-next/xor-neon', 'for-next/kasan', 'for-next/armv8_7-fp', 'for-next/atomics', 'for-next/bti', 'for-next/sve', 'for-next/kselftest' and 'for-next/kcsan', remote-tracking branch 'arm64/for-next/perf' into for-next/core
* arm64/for-next/perf: (32 commits)
  arm64: perf: Don't register user access sysctl handler multiple times
  drivers: perf: marvell_cn10k: fix an IS_ERR() vs NULL check
  perf/smmuv3: Fix unused variable warning when CONFIG_OF=n
  arm64: perf: Support new DT compatibles
  arm64: perf: Simplify registration boilerplate
  arm64: perf: Support Denver and Carmel PMUs
  drivers/perf: hisi: Add driver for HiSilicon PCIe PMU
  docs: perf: Add description for HiSilicon PCIe PMU driver
  dt-bindings: perf: Add YAML schemas for Marvell CN10K LLC-TAD pmu bindings
  drivers: perf: Add LLC-TAD perf counter support
  perf/smmuv3: Synthesize IIDR from CoreSight ID registers
  perf/smmuv3: Add devicetree support
  dt-bindings: Add Arm SMMUv3 PMCG binding
  perf/arm-cmn: Add debugfs topology info
  perf/arm-cmn: Add CI-700 Support
  dt-bindings: perf: arm-cmn: Add CI-700
  perf/arm-cmn: Support new IP features
  perf/arm-cmn: Demarcate CMN-600 specifics
  perf/arm-cmn: Move group validation data off-stack
  perf/arm-cmn: Optimise DTC counter accesses
  ...

* for-next/misc:
  : Miscellaneous patches
  arm64: Use correct method to calculate nomap region boundaries
  arm64: Drop outdated links in comments
  arm64: errata: Fix exec handling in erratum 1418040 workaround
  arm64: Unhash early pointer print plus improve comment
  asm-generic: introduce io_stop_wc() and add implementation for ARM64
  arm64: remove __dma_*_area() aliases
  docs/arm64: delete a space from tagged-address-abi
  arm64/fp: Add comments documenting the usage of state restore functions
  arm64: mm: Use asid feature macro for cheanup
  arm64: mm: Rename asid2idx() to ctxid2asid()
  arm64: kexec: reduce calls to page_address()
  arm64: extable: remove unused ex_handler_t definition
  arm64: entry: Use SDEI event constants
  arm64: Simplify checking for populated DT
  arm64/kvm: Fix bitrotted comment for SVE handling in handle_exit.c

* for-next/cache-ops-dzp:
  : Avoid DC instructions when DCZID_EL0.DZP == 1
  arm64: mte: DC {GVA,GZVA} shouldn't be used when DCZID_EL0.DZP == 1
  arm64: clear_page() shouldn't use DC ZVA when DCZID_EL0.DZP == 1

* for-next/stacktrace:
  : Unify the arm64 unwind code
  arm64: Make some stacktrace functions private
  arm64: Make dump_backtrace() use arch_stack_walk()
  arm64: Make profile_pc() use arch_stack_walk()
  arm64: Make return_address() use arch_stack_walk()
  arm64: Make __get_wchan() use arch_stack_walk()
  arm64: Make perf_callchain_kernel() use arch_stack_walk()
  arm64: Mark __switch_to() as __sched
  arm64: Add comment for stack_info::kr_cur
  arch: Make ARCH_STACKWALK independent of STACKTRACE

* for-next/xor-neon:
  : Use SHA3 instructions to speed up XOR
  arm64/xor: use EOR3 instructions when available

* for-next/kasan:
  : Log potential KASAN shadow aliases
  arm64: mm: log potential KASAN shadow alias
  arm64: mm: use die_kernel_fault() in do_mem_abort()

* for-next/armv8_7-fp:
  : Add HWCAPS for ARMv8.7 FEAT_AFP and FEAT_RPRES
  arm64: cpufeature: add HWCAP for FEAT_RPRES
  arm64: add ID_AA64ISAR2_EL1 sys register
  arm64: cpufeature: add HWCAP for FEAT_AFP

* for-next/atomics:
  : arm64 atomics clean-ups and codegen improvements
  arm64: atomics: lse: define RETURN ops in terms of FETCH ops
  arm64: atomics: lse: improve constraints for simple ops
  arm64: atomics: lse: define ANDs in terms of ANDNOTs
  arm64: atomics: lse: define SUBs in terms of ADDs
  arm64: atomics: format whitespace consistently

* for-next/bti:
  : BTI clean-ups
  arm64: Ensure that the 'bti' macro is defined where linkage.h is included
  arm64: Use BTI C directly and unconditionally
  arm64: Unconditionally override SYM_FUNC macros
  arm64: Add macro version of the BTI instruction
  arm64: ftrace: add missing BTIs
  arm64: kexec: use __pa_symbol(empty_zero_page)
  arm64: update PAC description for kernel

* for-next/sve:
  : SVE code clean-ups and refactoring in preparation of Scalable Matrix Extensions
  arm64/sve: Minor clarification of ABI documentation
  arm64/sve: Generalise vector length configuration prctl() for SME
  arm64/sve: Make sysctl interface for SVE reusable by SME

* for-next/kselftest:
  : arm64 kselftest additions
  kselftest/arm64: Add pidbench for floating point syscall cases
  kselftest/arm64: Add a test program to exercise the syscall ABI
  kselftest/arm64: Allow signal tests to trigger from a function
  kselftest/arm64: Parameterise ptrace vector length information

* for-next/kcsan:
  : Enable KCSAN for arm64
  arm64: Enable KCSAN
This commit is contained in:
parents: 3da4390bcd daa149dd8c 685e2564da d2d1d2645c 2c54b423cf 07b742a4d9 1175011a7d 053f58bab3 dd73d18e7f aed34d9e52 2c94ebedc8 dd03762ab6
commit: 945409a6ef
@@ -275,6 +275,23 @@ infrastructure:
     | SVEVer                       | [3-0]   |    y    |
     +------------------------------+---------+---------+

+  8) ID_AA64MMFR1_EL1 - Memory model feature register 1
+
+     +------------------------------+---------+---------+
+     | Name                         |  bits   | visible |
+     +------------------------------+---------+---------+
+     | AFP                          | [47-44] |    y    |
+     +------------------------------+---------+---------+
+
+  9) ID_AA64ISAR2_EL1 - Instruction set attribute register 2
+
+     +------------------------------+---------+---------+
+     | Name                         |  bits   | visible |
+     +------------------------------+---------+---------+
+     | RPRES                        | [7-4]   |    y    |
+     +------------------------------+---------+---------+
+
 Appendix I: Example
 -------------------
@@ -251,6 +251,14 @@ HWCAP2_ECV

     Functionality implied by ID_AA64MMFR0_EL1.ECV == 0b0001.

+HWCAP2_AFP
+
+    Functionality implied by ID_AA64MMFR1_EL1.AFP == 0b0001.
+
+HWCAP2_RPRES
+
+    Functionality implied by ID_AA64ISAR2_EL1.RPRES == 0b0001.
+
 4. Unused AT_HWCAP bits
 -----------------------
@@ -53,11 +53,10 @@ The number of bits that the PAC occupies in a pointer is 55 minus the
 virtual address size configured by the kernel. For example, with a
 virtual address size of 48, the PAC is 7 bits wide.

-Recent versions of GCC can compile code with APIAKey-based return
-address protection when passed the -msign-return-address option. This
-uses instructions in the HINT space (unless -march=armv8.3-a or higher
-is also passed), and such code can run on systems without the pointer
-authentication extension.
+When ARM64_PTR_AUTH_KERNEL is selected, the kernel will be compiled
+with HINT space pointer authentication instructions protecting
+function returns. Kernels built with this option will work on hardware
+with or without pointer authentication support.

 In addition to exec(), keys can also be reinitialized to random values
 using the PR_PAC_RESET_KEYS prctl. A bitmask of PR_PAC_APIAKEY,
@@ -255,7 +255,7 @@ prctl(PR_SVE_GET_VL)
 vector length change (which would only normally be the case between a
 fork() or vfork() and the corresponding execve() in typical use).

-To extract the vector length from the result, and it with
+To extract the vector length from the result, bitwise and it with
 PR_SVE_VL_LEN_MASK.

 Return value: a nonnegative value on success, or a negative value on error:
@@ -49,7 +49,7 @@ how the user addresses are used by the kernel:

 - ``brk()``, ``mmap()`` and the ``new_address`` argument to
   ``mremap()`` as these have the potential to alias with existing
-  user addresses.
+  user addresses.

 NOTE: This behaviour changed in v5.6 and so some earlier kernels may
 incorrectly accept valid tagged pointers for the ``brk()``,
@@ -1950,6 +1950,14 @@ There are some more advanced barrier functions:
 For load from persistent memory, existing read memory barriers are sufficient
 to ensure read ordering.

+ (*) io_stop_wc();
+
+     For memory accesses with write-combining attributes (e.g. those returned
+     by ioremap_wc()), the CPU may wait for prior accesses to be merged with
+     subsequent ones. io_stop_wc() can be used to prevent the merging of
+     write-combining memory accesses before this macro with those after it when
+     such wait has performance implications.
+
 ===============================
 IMPLICIT KERNEL MEMORY BARRIERS
 ===============================
@@ -150,6 +150,8 @@ config ARM64
 	select HAVE_ARCH_KASAN_VMALLOC if HAVE_ARCH_KASAN
 	select HAVE_ARCH_KASAN_SW_TAGS if HAVE_ARCH_KASAN
 	select HAVE_ARCH_KASAN_HW_TAGS if (HAVE_ARCH_KASAN && ARM64_MTE)
+	# Some instrumentation may be unsound, hence EXPERT
+	select HAVE_ARCH_KCSAN if EXPERT
 	select HAVE_ARCH_KFENCE
 	select HAVE_ARCH_KGDB
 	select HAVE_ARCH_MMAP_RND_BITS
@@ -1545,6 +1547,12 @@ endmenu

 menu "ARMv8.2 architectural features"

+config AS_HAS_ARMV8_2
+	def_bool $(cc-option,-Wa$(comma)-march=armv8.2-a)
+
+config AS_HAS_SHA3
+	def_bool $(as-instr,.arch armv8.2-a+sha3)
+
 config ARM64_PMEM
 	bool "Enable support for persistent memory"
 	select ARCH_HAS_PMEM_API
@@ -58,6 +58,11 @@ stack_protector_prepare: prepare0
 					include/generated/asm-offsets.h))
 endif

+ifeq ($(CONFIG_AS_HAS_ARMV8_2), y)
+# make sure to pass the newest target architecture to -march.
+asm-arch := armv8.2-a
+endif
+
 # Ensure that if the compiler supports branch protection we default it
 # off, this will be overridden if we are using branch protection.
 branch-prot-flags-y += $(call cc-option,-mbranch-protection=none)
@@ -363,15 +363,15 @@ ST5( mov v4.16b, vctr.16b )
 	adr	x16, 1f
 	sub	x16, x16, x12, lsl #3
 	br	x16
-	hint	34			// bti c
+	bti	c
 	mov	v0.d[0], vctr.d[0]
-	hint	34			// bti c
+	bti	c
 	mov	v1.d[0], vctr.d[0]
-	hint	34			// bti c
+	bti	c
 	mov	v2.d[0], vctr.d[0]
-	hint	34			// bti c
+	bti	c
 	mov	v3.d[0], vctr.d[0]
-ST5(	hint	34				)
+ST5(	bti	c				)
 ST5(	mov	v4.d[0], vctr.d[0]	)
 1:	b	2f
 	.previous
@@ -790,6 +790,16 @@ alternative_endif
 .Lnoyield_\@:
 .endm

+/*
+ * Branch Target Identifier (BTI)
+ */
+	.macro	bti, targets
+	.equ	.L__bti_targets_c, 34
+	.equ	.L__bti_targets_j, 36
+	.equ	.L__bti_targets_jc, 38
+	hint	#.L__bti_targets_\targets
+	.endm
+
 /*
  * This macro emits a program property note section identifying
  * architecture features which require special handling, mainly for
@@ -44,11 +44,11 @@ __ll_sc_atomic_##op(int i, atomic_t *v) \
 \
 asm volatile("// atomic_" #op "\n" \
 __LL_SC_FALLBACK( \
 " prfm pstl1strm, %2\n" \
 "1: ldxr %w0, %2\n" \
 " " #asm_op " %w0, %w0, %w3\n" \
 " stxr %w1, %w0, %2\n" \
 " cbnz %w1, 1b\n") \
 : "=&r" (result), "=&r" (tmp), "+Q" (v->counter) \
 : __stringify(constraint) "r" (i)); \
 }
@@ -62,12 +62,12 @@ __ll_sc_atomic_##op##_return##name(int i, atomic_t *v) \
 \
 asm volatile("// atomic_" #op "_return" #name "\n" \
 __LL_SC_FALLBACK( \
 " prfm pstl1strm, %2\n" \
 "1: ld" #acq "xr %w0, %2\n" \
 " " #asm_op " %w0, %w0, %w3\n" \
 " st" #rel "xr %w1, %w0, %2\n" \
 " cbnz %w1, 1b\n" \
 " " #mb ) \
 : "=&r" (result), "=&r" (tmp), "+Q" (v->counter) \
 : __stringify(constraint) "r" (i) \
 : cl); \
@@ -84,12 +84,12 @@ __ll_sc_atomic_fetch_##op##name(int i, atomic_t *v) \
 \
 asm volatile("// atomic_fetch_" #op #name "\n" \
 __LL_SC_FALLBACK( \
 " prfm pstl1strm, %3\n" \
 "1: ld" #acq "xr %w0, %3\n" \
 " " #asm_op " %w1, %w0, %w4\n" \
 " st" #rel "xr %w2, %w1, %3\n" \
 " cbnz %w2, 1b\n" \
 " " #mb ) \
 : "=&r" (result), "=&r" (val), "=&r" (tmp), "+Q" (v->counter) \
 : __stringify(constraint) "r" (i) \
 : cl); \
@@ -143,11 +143,11 @@ __ll_sc_atomic64_##op(s64 i, atomic64_t *v) \
 \
 asm volatile("// atomic64_" #op "\n" \
 __LL_SC_FALLBACK( \
 " prfm pstl1strm, %2\n" \
 "1: ldxr %0, %2\n" \
 " " #asm_op " %0, %0, %3\n" \
 " stxr %w1, %0, %2\n" \
 " cbnz %w1, 1b") \
 : "=&r" (result), "=&r" (tmp), "+Q" (v->counter) \
 : __stringify(constraint) "r" (i)); \
 }
@@ -161,12 +161,12 @@ __ll_sc_atomic64_##op##_return##name(s64 i, atomic64_t *v) \
 \
 asm volatile("// atomic64_" #op "_return" #name "\n" \
 __LL_SC_FALLBACK( \
 " prfm pstl1strm, %2\n" \
 "1: ld" #acq "xr %0, %2\n" \
 " " #asm_op " %0, %0, %3\n" \
 " st" #rel "xr %w1, %0, %2\n" \
 " cbnz %w1, 1b\n" \
 " " #mb ) \
 : "=&r" (result), "=&r" (tmp), "+Q" (v->counter) \
 : __stringify(constraint) "r" (i) \
 : cl); \
@@ -176,19 +176,19 @@ __ll_sc_atomic64_##op##_return##name(s64 i, atomic64_t *v) \

 #define ATOMIC64_FETCH_OP(name, mb, acq, rel, cl, op, asm_op, constraint)\
 static inline long \
 __ll_sc_atomic64_fetch_##op##name(s64 i, atomic64_t *v) \
 { \
 s64 result, val; \
 unsigned long tmp; \
 \
 asm volatile("// atomic64_fetch_" #op #name "\n" \
 __LL_SC_FALLBACK( \
 " prfm pstl1strm, %3\n" \
 "1: ld" #acq "xr %0, %3\n" \
 " " #asm_op " %1, %0, %4\n" \
 " st" #rel "xr %w2, %1, %3\n" \
 " cbnz %w2, 1b\n" \
 " " #mb ) \
 : "=&r" (result), "=&r" (val), "=&r" (tmp), "+Q" (v->counter) \
 : __stringify(constraint) "r" (i) \
 : cl); \
@@ -241,14 +241,14 @@ __ll_sc_atomic64_dec_if_positive(atomic64_t *v)

 asm volatile("// atomic64_dec_if_positive\n"
 __LL_SC_FALLBACK(
 " prfm pstl1strm, %2\n"
 "1: ldxr %0, %2\n"
 " subs %0, %0, #1\n"
 " b.lt 2f\n"
 " stlxr %w1, %0, %2\n"
 " cbnz %w1, 1b\n"
 " dmb ish\n"
 "2:")
 : "=&r" (result), "=&r" (tmp), "+Q" (v->counter)
 :
 : "cc", "memory");
@@ -11,13 +11,13 @@
#define __ASM_ATOMIC_LSE_H

#define ATOMIC_OP(op, asm_op) \
static inline void __lse_atomic_##op(int i, atomic_t *v) \
{ \
	asm volatile( \
	__LSE_PREAMBLE \
	" " #asm_op " %w[i], %[v]\n" \
-	: [i] "+r" (i), [v] "+Q" (v->counter) \
-	: "r" (v)); \
+	: [v] "+Q" (v->counter) \
+	: [i] "r" (i)); \
}

ATOMIC_OP(andnot, stclr)
@@ -25,19 +25,27 @@ ATOMIC_OP(or, stset)
ATOMIC_OP(xor, steor)
ATOMIC_OP(add, stadd)

+static inline void __lse_atomic_sub(int i, atomic_t *v)
+{
+	__lse_atomic_add(-i, v);
+}
+
#undef ATOMIC_OP

#define ATOMIC_FETCH_OP(name, mb, op, asm_op, cl...) \
static inline int __lse_atomic_fetch_##op##name(int i, atomic_t *v) \
{ \
+	int old; \
+	\
	asm volatile( \
	__LSE_PREAMBLE \
-	" " #asm_op #mb " %w[i], %w[i], %[v]" \
-	: [i] "+r" (i), [v] "+Q" (v->counter) \
-	: "r" (v) \
+	" " #asm_op #mb " %w[i], %w[old], %[v]" \
+	: [v] "+Q" (v->counter), \
+	  [old] "=r" (old) \
+	: [i] "r" (i) \
	: cl); \
	\
-	return i; \
+	return old; \
}

#define ATOMIC_FETCH_OPS(op, asm_op) \
@@ -54,51 +62,46 @@ ATOMIC_FETCH_OPS(add, ldadd)
#undef ATOMIC_FETCH_OP
#undef ATOMIC_FETCH_OPS

-#define ATOMIC_OP_ADD_RETURN(name, mb, cl...) \
-static inline int __lse_atomic_add_return##name(int i, atomic_t *v) \
+#define ATOMIC_FETCH_OP_SUB(name) \
+static inline int __lse_atomic_fetch_sub##name(int i, atomic_t *v) \
{ \
-	u32 tmp; \
-	\
-	asm volatile( \
-	__LSE_PREAMBLE \
-	" ldadd" #mb " %w[i], %w[tmp], %[v]\n" \
-	" add %w[i], %w[i], %w[tmp]" \
-	: [i] "+r" (i), [v] "+Q" (v->counter), [tmp] "=&r" (tmp) \
-	: "r" (v) \
-	: cl); \
-	\
-	return i; \
+	return __lse_atomic_fetch_add##name(-i, v); \
}

-ATOMIC_OP_ADD_RETURN(_relaxed, )
-ATOMIC_OP_ADD_RETURN(_acquire, a, "memory")
-ATOMIC_OP_ADD_RETURN(_release, l, "memory")
-ATOMIC_OP_ADD_RETURN( , al, "memory")
+ATOMIC_FETCH_OP_SUB(_relaxed)
+ATOMIC_FETCH_OP_SUB(_acquire)
+ATOMIC_FETCH_OP_SUB(_release)
+ATOMIC_FETCH_OP_SUB( )

-#undef ATOMIC_OP_ADD_RETURN
+#undef ATOMIC_FETCH_OP_SUB

+#define ATOMIC_OP_ADD_SUB_RETURN(name) \
+static inline int __lse_atomic_add_return##name(int i, atomic_t *v) \
+{ \
+	return __lse_atomic_fetch_add##name(i, v) + i; \
+} \
+\
+static inline int __lse_atomic_sub_return##name(int i, atomic_t *v) \
+{ \
+	return __lse_atomic_fetch_sub(i, v) - i; \
+}
+
+ATOMIC_OP_ADD_SUB_RETURN(_relaxed)
+ATOMIC_OP_ADD_SUB_RETURN(_acquire)
+ATOMIC_OP_ADD_SUB_RETURN(_release)
+ATOMIC_OP_ADD_SUB_RETURN( )
+
+#undef ATOMIC_OP_ADD_SUB_RETURN

static inline void __lse_atomic_and(int i, atomic_t *v)
{
-	asm volatile(
-	__LSE_PREAMBLE
-	" mvn %w[i], %w[i]\n"
-	" stclr %w[i], %[v]"
-	: [i] "+&r" (i), [v] "+Q" (v->counter)
-	: "r" (v));
+	return __lse_atomic_andnot(~i, v);
}

#define ATOMIC_FETCH_OP_AND(name, mb, cl...) \
static inline int __lse_atomic_fetch_and##name(int i, atomic_t *v) \
{ \
-	asm volatile( \
-	__LSE_PREAMBLE \
-	" mvn %w[i], %w[i]\n" \
-	" ldclr" #mb " %w[i], %w[i], %[v]" \
-	: [i] "+&r" (i), [v] "+Q" (v->counter) \
-	: "r" (v) \
-	: cl); \
-	\
-	return i; \
+	return __lse_atomic_fetch_andnot##name(~i, v); \
}

ATOMIC_FETCH_OP_AND(_relaxed, )
@@ -108,69 +111,14 @@ ATOMIC_FETCH_OP_AND( , al, "memory")

#undef ATOMIC_FETCH_OP_AND

-static inline void __lse_atomic_sub(int i, atomic_t *v)
-{
-	asm volatile(
-	__LSE_PREAMBLE
-	" neg %w[i], %w[i]\n"
-	" stadd %w[i], %[v]"
-	: [i] "+&r" (i), [v] "+Q" (v->counter)
-	: "r" (v));
-}
-
-#define ATOMIC_OP_SUB_RETURN(name, mb, cl...) \
-static inline int __lse_atomic_sub_return##name(int i, atomic_t *v) \
-{ \
-	u32 tmp; \
-	\
-	asm volatile( \
-	__LSE_PREAMBLE \
-	" neg %w[i], %w[i]\n" \
-	" ldadd" #mb " %w[i], %w[tmp], %[v]\n" \
-	" add %w[i], %w[i], %w[tmp]" \
-	: [i] "+&r" (i), [v] "+Q" (v->counter), [tmp] "=&r" (tmp) \
-	: "r" (v) \
-	: cl); \
-	\
-	return i; \
-}
-
-ATOMIC_OP_SUB_RETURN(_relaxed, )
-ATOMIC_OP_SUB_RETURN(_acquire, a, "memory")
-ATOMIC_OP_SUB_RETURN(_release, l, "memory")
-ATOMIC_OP_SUB_RETURN( , al, "memory")
-
-#undef ATOMIC_OP_SUB_RETURN
-
-#define ATOMIC_FETCH_OP_SUB(name, mb, cl...) \
-static inline int __lse_atomic_fetch_sub##name(int i, atomic_t *v) \
-{ \
-	asm volatile( \
-	__LSE_PREAMBLE \
-	" neg %w[i], %w[i]\n" \
-	" ldadd" #mb " %w[i], %w[i], %[v]" \
-	: [i] "+&r" (i), [v] "+Q" (v->counter) \
-	: "r" (v) \
-	: cl); \
-	\
-	return i; \
-}
-
-ATOMIC_FETCH_OP_SUB(_relaxed, )
-ATOMIC_FETCH_OP_SUB(_acquire, a, "memory")
-ATOMIC_FETCH_OP_SUB(_release, l, "memory")
-ATOMIC_FETCH_OP_SUB( , al, "memory")
-
-#undef ATOMIC_FETCH_OP_SUB

#define ATOMIC64_OP(op, asm_op) \
static inline void __lse_atomic64_##op(s64 i, atomic64_t *v) \
{ \
	asm volatile( \
	__LSE_PREAMBLE \
	" " #asm_op " %[i], %[v]\n" \
-	: [i] "+r" (i), [v] "+Q" (v->counter) \
-	: "r" (v)); \
+	: [v] "+Q" (v->counter) \
+	: [i] "r" (i)); \
}

ATOMIC64_OP(andnot, stclr)
@@ -178,19 +126,27 @@ ATOMIC64_OP(or, stset)
ATOMIC64_OP(xor, steor)
ATOMIC64_OP(add, stadd)

+static inline void __lse_atomic64_sub(s64 i, atomic64_t *v)
+{
+	__lse_atomic64_add(-i, v);
+}
+
#undef ATOMIC64_OP

#define ATOMIC64_FETCH_OP(name, mb, op, asm_op, cl...) \
static inline long __lse_atomic64_fetch_##op##name(s64 i, atomic64_t *v)\
{ \
+	s64 old; \
+	\
	asm volatile( \
	__LSE_PREAMBLE \
-	" " #asm_op #mb " %[i], %[i], %[v]" \
-	: [i] "+r" (i), [v] "+Q" (v->counter) \
-	: "r" (v) \
+	" " #asm_op #mb " %[i], %[old], %[v]" \
+	: [v] "+Q" (v->counter), \
+	  [old] "=r" (old) \
+	: [i] "r" (i) \
	: cl); \
	\
-	return i; \
+	return old; \
}

#define ATOMIC64_FETCH_OPS(op, asm_op) \
@@ -207,51 +163,46 @@ ATOMIC64_FETCH_OPS(add, ldadd)
#undef ATOMIC64_FETCH_OP
#undef ATOMIC64_FETCH_OPS

-#define ATOMIC64_OP_ADD_RETURN(name, mb, cl...) \
-static inline long __lse_atomic64_add_return##name(s64 i, atomic64_t *v)\
+#define ATOMIC64_FETCH_OP_SUB(name) \
+static inline long __lse_atomic64_fetch_sub##name(s64 i, atomic64_t *v) \
{ \
-	unsigned long tmp; \
-	\
-	asm volatile( \
-	__LSE_PREAMBLE \
-	" ldadd" #mb " %[i], %x[tmp], %[v]\n" \
-	" add %[i], %[i], %x[tmp]" \
-	: [i] "+r" (i), [v] "+Q" (v->counter), [tmp] "=&r" (tmp) \
-	: "r" (v) \
-	: cl); \
-	\
-	return i; \
+	return __lse_atomic64_fetch_add##name(-i, v); \
}

-ATOMIC64_OP_ADD_RETURN(_relaxed, )
-ATOMIC64_OP_ADD_RETURN(_acquire, a, "memory")
-ATOMIC64_OP_ADD_RETURN(_release, l, "memory")
-ATOMIC64_OP_ADD_RETURN( , al, "memory")
+ATOMIC64_FETCH_OP_SUB(_relaxed)
+ATOMIC64_FETCH_OP_SUB(_acquire)
+ATOMIC64_FETCH_OP_SUB(_release)
+ATOMIC64_FETCH_OP_SUB( )

-#undef ATOMIC64_OP_ADD_RETURN
+#undef ATOMIC64_FETCH_OP_SUB

+#define ATOMIC64_OP_ADD_SUB_RETURN(name) \
+static inline long __lse_atomic64_add_return##name(s64 i, atomic64_t *v)\
+{ \
+	return __lse_atomic64_fetch_add##name(i, v) + i; \
+} \
+\
+static inline long __lse_atomic64_sub_return##name(s64 i, atomic64_t *v)\
+{ \
+	return __lse_atomic64_fetch_sub##name(i, v) - i; \
+}
+
+ATOMIC64_OP_ADD_SUB_RETURN(_relaxed)
+ATOMIC64_OP_ADD_SUB_RETURN(_acquire)
+ATOMIC64_OP_ADD_SUB_RETURN(_release)
+ATOMIC64_OP_ADD_SUB_RETURN( )
+
+#undef ATOMIC64_OP_ADD_SUB_RETURN

static inline void __lse_atomic64_and(s64 i, atomic64_t *v)
{
-	asm volatile(
-	__LSE_PREAMBLE
-	" mvn %[i], %[i]\n"
-	" stclr %[i], %[v]"
-	: [i] "+&r" (i), [v] "+Q" (v->counter)
-	: "r" (v));
+	return __lse_atomic64_andnot(~i, v);
}

#define ATOMIC64_FETCH_OP_AND(name, mb, cl...) \
static inline long __lse_atomic64_fetch_and##name(s64 i, atomic64_t *v) \
{ \
-	asm volatile( \
-	__LSE_PREAMBLE \
-	" mvn %[i], %[i]\n" \
-	" ldclr" #mb " %[i], %[i], %[v]" \
-	: [i] "+&r" (i), [v] "+Q" (v->counter) \
-	: "r" (v) \
-	: cl); \
-	\
-	return i; \
+	return __lse_atomic64_fetch_andnot##name(~i, v); \
}

ATOMIC64_FETCH_OP_AND(_relaxed, )
@@ -261,61 +212,6 @@ ATOMIC64_FETCH_OP_AND( , al, "memory")

#undef ATOMIC64_FETCH_OP_AND

-static inline void __lse_atomic64_sub(s64 i, atomic64_t *v)
-{
-	asm volatile(
-	__LSE_PREAMBLE
-	" neg %[i], %[i]\n"
-	" stadd %[i], %[v]"
-	: [i] "+&r" (i), [v] "+Q" (v->counter)
-	: "r" (v));
-}
-
-#define ATOMIC64_OP_SUB_RETURN(name, mb, cl...) \
-static inline long __lse_atomic64_sub_return##name(s64 i, atomic64_t *v) \
-{ \
-	unsigned long tmp; \
-	\
-	asm volatile( \
-	__LSE_PREAMBLE \
-	" neg %[i], %[i]\n" \
-	" ldadd" #mb " %[i], %x[tmp], %[v]\n" \
-	" add %[i], %[i], %x[tmp]" \
-	: [i] "+&r" (i), [v] "+Q" (v->counter), [tmp] "=&r" (tmp) \
-	: "r" (v) \
-	: cl); \
-	\
-	return i; \
-}
-
-ATOMIC64_OP_SUB_RETURN(_relaxed, )
-ATOMIC64_OP_SUB_RETURN(_acquire, a, "memory")
-ATOMIC64_OP_SUB_RETURN(_release, l, "memory")
-ATOMIC64_OP_SUB_RETURN( , al, "memory")
-
-#undef ATOMIC64_OP_SUB_RETURN
-
-#define ATOMIC64_FETCH_OP_SUB(name, mb, cl...) \
-static inline long __lse_atomic64_fetch_sub##name(s64 i, atomic64_t *v) \
-{ \
-	asm volatile( \
-	__LSE_PREAMBLE \
-	" neg %[i], %[i]\n" \
-	" ldadd" #mb " %[i], %[i], %[v]" \
-	: [i] "+&r" (i), [v] "+Q" (v->counter) \
-	: "r" (v) \
-	: cl); \
-	\
-	return i; \
-}
-
-ATOMIC64_FETCH_OP_SUB(_relaxed, )
-ATOMIC64_FETCH_OP_SUB(_acquire, a, "memory")
-ATOMIC64_FETCH_OP_SUB(_release, l, "memory")
-ATOMIC64_FETCH_OP_SUB( , al, "memory")
-
-#undef ATOMIC64_FETCH_OP_SUB

static inline s64 __lse_atomic64_dec_if_positive(atomic64_t *v)
{
	unsigned long tmp;
@@ -26,6 +26,14 @@
 #define __tsb_csync()	asm volatile("hint #18" : : : "memory")
 #define csdb()		asm volatile("hint #20" : : : "memory")

+/*
+ * Data Gathering Hint:
+ * This instruction prevents merging memory accesses with Normal-NC or
+ * Device-GRE attributes before the hint instruction with any memory accesses
+ * appearing after the hint instruction.
+ */
+#define dgh()		asm volatile("hint #6" : : : "memory")
+
 #ifdef CONFIG_ARM64_PSEUDO_NMI
 #define pmr_sync() \
 	do { \
@@ -46,6 +54,7 @@
 #define dma_rmb()	dmb(oshld)
 #define dma_wmb()	dmb(oshst)

+#define io_stop_wc()	dgh()

 #define tsb_csync() \
 	do { \
@@ -51,6 +51,7 @@ struct cpuinfo_arm64 {
 	u64		reg_id_aa64dfr1;
 	u64		reg_id_aa64isar0;
 	u64		reg_id_aa64isar1;
+	u64		reg_id_aa64isar2;
 	u64		reg_id_aa64mmfr0;
 	u64		reg_id_aa64mmfr1;
 	u64		reg_id_aa64mmfr2;
@@ -51,8 +51,8 @@ extern void fpsimd_bind_state_to_cpu(struct user_fpsimd_state *state,
 extern void fpsimd_flush_task_state(struct task_struct *target);
 extern void fpsimd_save_and_flush_cpu_state(void);

-/* Maximum VL that SVE VL-agnostic software can transparently support */
-#define SVE_VL_ARCH_MAX 0x100
+/* Maximum VL that SVE/SME VL-agnostic software can transparently support */
+#define VL_ARCH_MAX 0x100

 /* Offset of FFR in the SVE register dump */
 static inline size_t sve_ffr_offset(int vl)
@@ -122,7 +122,7 @@ extern void fpsimd_sync_to_sve(struct task_struct *task);
 extern void sve_sync_to_fpsimd(struct task_struct *task);
 extern void sve_sync_from_fpsimd_zeropad(struct task_struct *task);

-extern int sve_set_vector_length(struct task_struct *task,
+extern int vec_set_vector_length(struct task_struct *task, enum vec_type type,
 				 unsigned long vl, unsigned long flags);

 extern int sve_set_current_vl(unsigned long arg);
@@ -106,6 +106,8 @@
 #define KERNEL_HWCAP_BTI		__khwcap2_feature(BTI)
 #define KERNEL_HWCAP_MTE		__khwcap2_feature(MTE)
 #define KERNEL_HWCAP_ECV		__khwcap2_feature(ECV)
+#define KERNEL_HWCAP_AFP		__khwcap2_feature(AFP)
+#define KERNEL_HWCAP_RPRES		__khwcap2_feature(RPRES)

 /*
  * This yields a mask that user programs can use to figure out what
@@ -1,48 +1,43 @@
#ifndef __ASM_LINKAGE_H
#define __ASM_LINKAGE_H

+#ifdef __ASSEMBLY__
+#include <asm/assembler.h>
+#endif
+
#define __ALIGN		.align 2
#define __ALIGN_STR	".align 2"

-#if defined(CONFIG_ARM64_BTI_KERNEL) && defined(__aarch64__)
-
-/*
- * Since current versions of gas reject the BTI instruction unless we
- * set the architecture version to v8.5 we use the hint instruction
- * instead.
- */
-#define BTI_C hint 34 ;
-
 /*
- * When using in-kernel BTI we need to ensure that PCS-conformant assembly
- * functions have suitable annotations. Override SYM_FUNC_START to insert
- * a BTI landing pad at the start of everything.
+ * When using in-kernel BTI we need to ensure that PCS-conformant
+ * assembly functions have suitable annotations. Override
+ * SYM_FUNC_START to insert a BTI landing pad at the start of
+ * everything, the override is done unconditionally so we're more
+ * likely to notice any drift from the overridden definitions.
  */
 #define SYM_FUNC_START(name) \
	SYM_START(name, SYM_L_GLOBAL, SYM_A_ALIGN) \
-	BTI_C
+	bti c ;

 #define SYM_FUNC_START_NOALIGN(name) \
	SYM_START(name, SYM_L_GLOBAL, SYM_A_NONE) \
-	BTI_C
+	bti c ;

 #define SYM_FUNC_START_LOCAL(name) \
	SYM_START(name, SYM_L_LOCAL, SYM_A_ALIGN) \
-	BTI_C
+	bti c ;

 #define SYM_FUNC_START_LOCAL_NOALIGN(name) \
	SYM_START(name, SYM_L_LOCAL, SYM_A_NONE) \
-	BTI_C
+	bti c ;

 #define SYM_FUNC_START_WEAK(name) \
	SYM_START(name, SYM_L_WEAK, SYM_A_ALIGN) \
-	BTI_C
+	bti c ;

 #define SYM_FUNC_START_WEAK_NOALIGN(name) \
	SYM_START(name, SYM_L_WEAK, SYM_A_NONE) \
-	BTI_C
-
-#endif
+	bti c ;

 /*
  * Annotate a function as position independent, i.e., safe to be called before
@@ -84,10 +84,12 @@ static inline void __dc_gzva(u64 p)
 static inline void mte_set_mem_tag_range(void *addr, size_t size, u8 tag,
					 bool init)
 {
-	u64 curr, mask, dczid_bs, end1, end2, end3;
+	u64 curr, mask, dczid, dczid_bs, dczid_dzp, end1, end2, end3;

	/* Read DC G(Z)VA block size from the system register. */
-	dczid_bs = 4ul << (read_cpuid(DCZID_EL0) & 0xf);
+	dczid = read_cpuid(DCZID_EL0);
+	dczid_bs = 4ul << (dczid & 0xf);
+	dczid_dzp = (dczid >> 4) & 1;

	curr = (u64)__tag_set(addr, tag);
	mask = dczid_bs - 1;
@@ -106,7 +108,7 @@ static inline void mte_set_mem_tag_range(void *addr, size_t size, u8 tag,
	 */
 #define SET_MEMTAG_RANGE(stg_post, dc_gva) \
	do { \
-		if (size >= 2 * dczid_bs) { \
+		if (!dczid_dzp && size >= 2 * dczid_bs) { \
			do { \
				curr = stg_post(curr); \
			} while (curr < end1); \
@@ -47,6 +47,10 @@ struct stack_info {
  * @prev_type:   The type of stack this frame record was on, or a synthetic
  *               value of STACK_TYPE_UNKNOWN. This is used to detect a
  *               transition from one stack to another.
+ *
+ * @kr_cur:      When KRETPROBES is selected, holds the kretprobe instance
+ *               associated with the most recently encountered replacement lr
+ *               value.
  */
 struct stackframe {
 	unsigned long fp;
@@ -59,9 +63,6 @@ struct stackframe {
 #endif
 };

-extern int unwind_frame(struct task_struct *tsk, struct stackframe *frame);
-extern void walk_stackframe(struct task_struct *tsk, struct stackframe *frame,
-			    bool (*fn)(void *, unsigned long), void *data);
 extern void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk,
 			   const char *loglvl);

@@ -146,7 +147,4 @@ static inline bool on_accessible_stack(const struct task_struct *tsk,
 	return false;
 }

-void start_backtrace(struct stackframe *frame, unsigned long fp,
-		     unsigned long pc);
-
 #endif /* __ASM_STACKTRACE_H */
@@ -182,6 +182,7 @@

 #define SYS_ID_AA64ISAR0_EL1		sys_reg(3, 0, 0, 6, 0)
 #define SYS_ID_AA64ISAR1_EL1		sys_reg(3, 0, 0, 6, 1)
+#define SYS_ID_AA64ISAR2_EL1		sys_reg(3, 0, 0, 6, 2)

 #define SYS_ID_AA64MMFR0_EL1		sys_reg(3, 0, 0, 7, 0)
 #define SYS_ID_AA64MMFR1_EL1		sys_reg(3, 0, 0, 7, 1)
@@ -771,6 +772,20 @@
 #define ID_AA64ISAR1_GPI_NI			0x0
 #define ID_AA64ISAR1_GPI_IMP_DEF		0x1

+/* id_aa64isar2 */
+#define ID_AA64ISAR2_RPRES_SHIFT	4
+#define ID_AA64ISAR2_WFXT_SHIFT		0
+
+#define ID_AA64ISAR2_RPRES_8BIT		0x0
+#define ID_AA64ISAR2_RPRES_12BIT	0x1
+/*
+ * Value 0x1 has been removed from the architecture, and is
+ * reserved, but has not yet been removed from the ARM ARM
+ * as of ARM DDI 0487G.b.
+ */
+#define ID_AA64ISAR2_WFXT_NI		0x0
+#define ID_AA64ISAR2_WFXT_SUPPORTED	0x2
+
 /* id_aa64pfr0 */
 #define ID_AA64PFR0_CSV3_SHIFT		60
 #define ID_AA64PFR0_CSV2_SHIFT		56
@@ -889,6 +904,7 @@
 #endif

 /* id_aa64mmfr1 */
+#define ID_AA64MMFR1_AFP_SHIFT		44
 #define ID_AA64MMFR1_ETS_SHIFT		36
 #define ID_AA64MMFR1_TWED_SHIFT		32
 #define ID_AA64MMFR1_XNX_SHIFT		28
@@ -76,5 +76,7 @@
 #define HWCAP2_BTI		(1 << 17)
 #define HWCAP2_MTE		(1 << 18)
 #define HWCAP2_ECV		(1 << 19)
+#define HWCAP2_AFP		(1 << 20)
+#define HWCAP2_RPRES		(1 << 21)

 #endif /* _UAPI__ASM_HWCAP_H */
@@ -22,6 +22,7 @@
 #include <linux/irq_work.h>
 #include <linux/memblock.h>
 #include <linux/of_fdt.h>
+#include <linux/libfdt.h>
 #include <linux/smp.h>
 #include <linux/serial_core.h>
 #include <linux/pgtable.h>
@@ -62,29 +63,22 @@ static int __init parse_acpi(char *arg)
 }
 early_param("acpi", parse_acpi);

-static int __init dt_scan_depth1_nodes(unsigned long node,
-				       const char *uname, int depth,
-				       void *data)
+static bool __init dt_is_stub(void)
 {
-	/*
-	 * Ignore anything not directly under the root node; we'll
-	 * catch its parent instead.
-	 */
-	if (depth != 1)
-		return 0;
+	int node;

-	if (strcmp(uname, "chosen") == 0)
-		return 0;
+	fdt_for_each_subnode(node, initial_boot_params, 0) {
+		const char *name = fdt_get_name(initial_boot_params, node, NULL);
+		if (strcmp(name, "chosen") == 0)
+			continue;
+		if (strcmp(name, "hypervisor") == 0 &&
+		    of_flat_dt_is_compatible(node, "xen,xen"))
+			continue;

-	if (strcmp(uname, "hypervisor") == 0 &&
-	    of_flat_dt_is_compatible(node, "xen,xen"))
-		return 0;
+		return false;
+	}

-	/*
-	 * This node at depth 1 is neither a chosen node nor a xen node,
-	 * which we do not expect.
-	 */
-	return 1;
+	return true;
 }

 /*
@@ -205,8 +199,7 @@ void __init acpi_boot_table_init(void)
	 * and ACPI has not been [force] enabled (acpi=on|force)
	 */
	if (param_acpi_off ||
-	    (!param_acpi_on && !param_acpi_force &&
-	     of_scan_flat_dt(dt_scan_depth1_nodes, NULL)))
+	    (!param_acpi_on && !param_acpi_force && !dt_is_stub()))
		goto done;

	/*
@@ -225,6 +225,11 @@ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
 	ARM64_FTR_END,
 };

+static const struct arm64_ftr_bits ftr_id_aa64isar2[] = {
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR2_RPRES_SHIFT, 4, 0),
+	ARM64_FTR_END,
+};
+
 static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV3_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV2_SHIFT, 4, 0),
@@ -325,6 +330,7 @@ static const struct arm64_ftr_bits ftr_id_aa64mmfr0[] = {
 };

 static const struct arm64_ftr_bits ftr_id_aa64mmfr1[] = {
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_AFP_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_ETS_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_TWED_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_XNX_SHIFT, 4, 0),
@@ -637,6 +643,7 @@ static const struct __ftr_reg_entry {
 	ARM64_FTR_REG(SYS_ID_AA64ISAR0_EL1, ftr_id_aa64isar0),
 	ARM64_FTR_REG_OVERRIDE(SYS_ID_AA64ISAR1_EL1, ftr_id_aa64isar1,
 			       &id_aa64isar1_override),
+	ARM64_FTR_REG(SYS_ID_AA64ISAR2_EL1, ftr_id_aa64isar2),

 	/* Op1 = 0, CRn = 0, CRm = 7 */
 	ARM64_FTR_REG(SYS_ID_AA64MMFR0_EL1, ftr_id_aa64mmfr0),
@@ -933,6 +940,7 @@ void __init init_cpu_features(struct cpuinfo_arm64 *info)
 	init_cpu_ftr_reg(SYS_ID_AA64DFR1_EL1, info->reg_id_aa64dfr1);
 	init_cpu_ftr_reg(SYS_ID_AA64ISAR0_EL1, info->reg_id_aa64isar0);
 	init_cpu_ftr_reg(SYS_ID_AA64ISAR1_EL1, info->reg_id_aa64isar1);
+	init_cpu_ftr_reg(SYS_ID_AA64ISAR2_EL1, info->reg_id_aa64isar2);
 	init_cpu_ftr_reg(SYS_ID_AA64MMFR0_EL1, info->reg_id_aa64mmfr0);
 	init_cpu_ftr_reg(SYS_ID_AA64MMFR1_EL1, info->reg_id_aa64mmfr1);
 	init_cpu_ftr_reg(SYS_ID_AA64MMFR2_EL1, info->reg_id_aa64mmfr2);
@@ -1151,6 +1159,8 @@ void update_cpu_features(int cpu,
 				      info->reg_id_aa64isar0, boot->reg_id_aa64isar0);
 	taint |= check_update_ftr_reg(SYS_ID_AA64ISAR1_EL1, cpu,
 				      info->reg_id_aa64isar1, boot->reg_id_aa64isar1);
+	taint |= check_update_ftr_reg(SYS_ID_AA64ISAR2_EL1, cpu,
+				      info->reg_id_aa64isar2, boot->reg_id_aa64isar2);

 	/*
 	 * Differing PARange support is fine as long as all peripherals and
@@ -1272,6 +1282,7 @@ u64 __read_sysreg_by_encoding(u32 sys_id)
 	read_sysreg_case(SYS_ID_AA64MMFR2_EL1);
 	read_sysreg_case(SYS_ID_AA64ISAR0_EL1);
 	read_sysreg_case(SYS_ID_AA64ISAR1_EL1);
+	read_sysreg_case(SYS_ID_AA64ISAR2_EL1);

 	read_sysreg_case(SYS_CNTFRQ_EL0);
 	read_sysreg_case(SYS_CTR_EL0);
@@ -2476,6 +2487,8 @@ static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = {
 	HWCAP_CAP(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_MTE_SHIFT, FTR_UNSIGNED, ID_AA64PFR1_MTE, CAP_HWCAP, KERNEL_HWCAP_MTE),
 #endif /* CONFIG_ARM64_MTE */
 	HWCAP_CAP(SYS_ID_AA64MMFR0_EL1, ID_AA64MMFR0_ECV_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_ECV),
+	HWCAP_CAP(SYS_ID_AA64MMFR1_EL1, ID_AA64MMFR1_AFP_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_AFP),
+	HWCAP_CAP(SYS_ID_AA64ISAR2_EL1, ID_AA64ISAR2_RPRES_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_RPRES),
 	{},
 };
@@ -95,6 +95,8 @@ static const char *const hwcap_str[] = {
 	[KERNEL_HWCAP_BTI]		= "bti",
 	[KERNEL_HWCAP_MTE]		= "mte",
 	[KERNEL_HWCAP_ECV]		= "ecv",
+	[KERNEL_HWCAP_AFP]		= "afp",
+	[KERNEL_HWCAP_RPRES]		= "rpres",
 };

 #ifdef CONFIG_COMPAT
@@ -391,6 +393,7 @@ static void __cpuinfo_store_cpu(struct cpuinfo_arm64 *info)
 	info->reg_id_aa64dfr1 = read_cpuid(ID_AA64DFR1_EL1);
 	info->reg_id_aa64isar0 = read_cpuid(ID_AA64ISAR0_EL1);
 	info->reg_id_aa64isar1 = read_cpuid(ID_AA64ISAR1_EL1);
+	info->reg_id_aa64isar2 = read_cpuid(ID_AA64ISAR2_EL1);
 	info->reg_id_aa64mmfr0 = read_cpuid(ID_AA64MMFR0_EL1);
 	info->reg_id_aa64mmfr1 = read_cpuid(ID_AA64MMFR1_EL1);
 	info->reg_id_aa64mmfr2 = read_cpuid(ID_AA64MMFR2_EL1);
@@ -77,11 +77,13 @@
 	.endm

 SYM_CODE_START(ftrace_regs_caller)
+	bti	c
 	ftrace_regs_entry	1
 	b	ftrace_common
 SYM_CODE_END(ftrace_regs_caller)

 SYM_CODE_START(ftrace_caller)
+	bti	c
 	ftrace_regs_entry	0
 	b	ftrace_common
 SYM_CODE_END(ftrace_caller)
@@ -966,8 +966,10 @@ SYM_CODE_START(__sdei_asm_handler)
 	mov	sp, x1

 	mov	x1, x0			// address to complete_and_resume
-	/* x0 = (x0 <= 1) ? EVENT_COMPLETE:EVENT_COMPLETE_AND_RESUME */
-	cmp	x0, #1
+	/* x0 = (x0 <= SDEI_EV_FAILED) ?
+	 * EVENT_COMPLETE:EVENT_COMPLETE_AND_RESUME
+	 */
+	cmp	x0, #SDEI_EV_FAILED
 	mov_q	x2, SDEI_1_0_FN_SDEI_EVENT_COMPLETE
 	mov_q	x3, SDEI_1_0_FN_SDEI_EVENT_COMPLETE_AND_RESUME
 	csel	x0, x2, x3, ls
@@ -15,6 +15,7 @@
 #include <linux/compiler.h>
 #include <linux/cpu.h>
 #include <linux/cpu_pm.h>
+#include <linux/ctype.h>
 #include <linux/kernel.h>
 #include <linux/linkage.h>
 #include <linux/irqflags.h>
@@ -406,12 +407,13 @@ static unsigned int find_supported_vector_length(enum vec_type type,

 #if defined(CONFIG_ARM64_SVE) && defined(CONFIG_SYSCTL)

-static int sve_proc_do_default_vl(struct ctl_table *table, int write,
+static int vec_proc_do_default_vl(struct ctl_table *table, int write,
 				  void *buffer, size_t *lenp, loff_t *ppos)
 {
-	struct vl_info *info = &vl_info[ARM64_VEC_SVE];
+	struct vl_info *info = table->extra1;
+	enum vec_type type = info->type;
 	int ret;
-	int vl = get_sve_default_vl();
+	int vl = get_default_vl(type);
 	struct ctl_table tmp_table = {
 		.data = &vl,
 		.maxlen = sizeof(vl),
@@ -428,7 +430,7 @@ static int vec_proc_do_default_vl(struct ctl_table *table, int write,
 	if (!sve_vl_valid(vl))
 		return -EINVAL;

-	set_sve_default_vl(find_supported_vector_length(ARM64_VEC_SVE, vl));
+	set_default_vl(type, find_supported_vector_length(type, vl));
 	return 0;
 }

@@ -436,7 +438,8 @@ static struct ctl_table sve_default_vl_table[] = {
 	{
 		.procname	= "sve_default_vector_length",
 		.mode		= 0644,
-		.proc_handler	= sve_proc_do_default_vl,
+		.proc_handler	= vec_proc_do_default_vl,
+		.extra1		= &vl_info[ARM64_VEC_SVE],
 	},
 	{ }
 };
@@ -629,7 +632,7 @@ void sve_sync_from_fpsimd_zeropad(struct task_struct *task)
 	__fpsimd_to_sve(sst, fst, vq);
 }

-int sve_set_vector_length(struct task_struct *task,
+int vec_set_vector_length(struct task_struct *task, enum vec_type type,
 			  unsigned long vl, unsigned long flags)
 {
 	if (flags & ~(unsigned long)(PR_SVE_VL_INHERIT |
@@ -640,33 +643,35 @@ int vec_set_vector_length(struct task_struct *task, enum vec_type type,
 		return -EINVAL;

 	/*
-	 * Clamp to the maximum vector length that VL-agnostic SVE code can
-	 * work with.  A flag may be assigned in the future to allow setting
-	 * of larger vector lengths without confusing older software.
+	 * Clamp to the maximum vector length that VL-agnostic code
+	 * can work with.  A flag may be assigned in the future to
+	 * allow setting of larger vector lengths without confusing
+	 * older software.
 	 */
-	if (vl > SVE_VL_ARCH_MAX)
-		vl = SVE_VL_ARCH_MAX;
+	if (vl > VL_ARCH_MAX)
+		vl = VL_ARCH_MAX;

-	vl = find_supported_vector_length(ARM64_VEC_SVE, vl);
+	vl = find_supported_vector_length(type, vl);

 	if (flags & (PR_SVE_VL_INHERIT |
 		     PR_SVE_SET_VL_ONEXEC))
-		task_set_sve_vl_onexec(task, vl);
+		task_set_vl_onexec(task, type, vl);
 	else
 		/* Reset VL to system default on next exec: */
-		task_set_sve_vl_onexec(task, 0);
+		task_set_vl_onexec(task, type, 0);

 	/* Only actually set the VL if not deferred: */
 	if (flags & PR_SVE_SET_VL_ONEXEC)
 		goto out;

-	if (vl == task_get_sve_vl(task))
+	if (vl == task_get_vl(task, type))
 		goto out;

 	/*
 	 * To ensure the FPSIMD bits of the SVE vector registers are preserved,
 	 * write any live register state back to task_struct, and convert to a
-	 * non-SVE thread.
+	 * regular FPSIMD thread.  Since the vector length can only be changed
+	 * with a syscall we can't be in streaming mode while reconfiguring.
 	 */
 	if (task == current) {
 		get_cpu_fpsimd_context();
@@ -687,10 +692,10 @@ int vec_set_vector_length(struct task_struct *task, enum vec_type type,
 	 */
 	sve_free(task);

-	task_set_sve_vl(task, vl);
+	task_set_vl(task, type, vl);

 out:
-	update_tsk_thread_flag(task, TIF_SVE_VL_INHERIT,
+	update_tsk_thread_flag(task, vec_vl_inherit_flag(type),
 			       flags & PR_SVE_VL_INHERIT);

 	return 0;
@@ -698,20 +703,21 @@ out:

 /*
  * Encode the current vector length and flags for return.
- * This is only required for prctl(): ptrace has separate fields
+ * This is only required for prctl(): ptrace has separate fields.
+ * SVE and SME use the same bits for _ONEXEC and _INHERIT.
  *
- * flags are as for sve_set_vector_length().
+ * flags are as for vec_set_vector_length().
  */
-static int sve_prctl_status(unsigned long flags)
+static int vec_prctl_status(enum vec_type type, unsigned long flags)
 {
 	int ret;

 	if (flags & PR_SVE_SET_VL_ONEXEC)
-		ret = task_get_sve_vl_onexec(current);
+		ret = task_get_vl_onexec(current, type);
 	else
-		ret = task_get_sve_vl(current);
+		ret = task_get_vl(current, type);

-	if (test_thread_flag(TIF_SVE_VL_INHERIT))
+	if (test_thread_flag(vec_vl_inherit_flag(type)))
 		ret |= PR_SVE_VL_INHERIT;

 	return ret;
@@ -729,11 +735,11 @@ int sve_set_current_vl(unsigned long arg)
 	if (!system_supports_sve() || is_compat_task())
 		return -EINVAL;

-	ret = sve_set_vector_length(current, vl, flags);
+	ret = vec_set_vector_length(current, ARM64_VEC_SVE, vl, flags);
 	if (ret)
 		return ret;

-	return sve_prctl_status(flags);
+	return vec_prctl_status(ARM64_VEC_SVE, flags);
 }

 /* PR_SVE_GET_VL */
@@ -742,7 +748,7 @@ int sve_get_current_vl(void)
 	if (!system_supports_sve() || is_compat_task())
 		return -EINVAL;

-	return sve_prctl_status(0);
+	return vec_prctl_status(ARM64_VEC_SVE, 0);
 }

 static void vec_probe_vqs(struct vl_info *info,
@@ -1107,7 +1113,7 @@ static void fpsimd_flush_thread_vl(enum vec_type type)
 	vl = get_default_vl(type);

 	if (WARN_ON(!sve_vl_valid(vl)))
-		vl = SVE_VL_MIN;
+		vl = vl_info[type].min_vl;

 	supported_vl = find_supported_vector_length(type, vl);
 	if (WARN_ON(supported_vl != vl))
@@ -1213,7 +1219,8 @@ void fpsimd_bind_state_to_cpu(struct user_fpsimd_state *st, void *sve_state,
 /*
  * Load the userland FPSIMD state of 'current' from memory, but only if the
  * FPSIMD state already held in the registers is /not/ the most recent FPSIMD
- * state of 'current'
+ * state of 'current'.  This is called when we are preparing to return to
+ * userspace to ensure that userspace sees a good register state.
  */
 void fpsimd_restore_current_state(void)
 {
@@ -1244,7 +1251,9 @@ void fpsimd_restore_current_state(void)
 /*
  * Load an updated userland FPSIMD state for 'current' from memory and set the
  * flag that indicates that the FPSIMD register contents are the most recent
- * FPSIMD state of 'current'
+ * FPSIMD state of 'current'.  This is used by the signal code to restore the
+ * register state when returning from a signal handler in FPSIMD only cases,
+ * any SVE context will be discarded.
  */
 void fpsimd_update_current_state(struct user_fpsimd_state const *state)
 {
@@ -7,10 +7,6 @@
  * Ubuntu project, hibernation support for mach-dove
  * Copyright (C) 2010 Nokia Corporation (Hiroshi Doyu)
  * Copyright (C) 2010 Texas Instruments, Inc. (Teerth Reddy et al.)
- *	https://lkml.org/lkml/2010/6/18/4
- *	https://lists.linux-foundation.org/pipermail/linux-pm/2010-June/027422.html
- *	https://patchwork.kernel.org/patch/96442/
- *
  * Copyright (C) 2006 Rafael J. Wysocki <rjw@sisk.pl>
  */
 #define pr_fmt(x) "hibernate: " x
@@ -104,13 +104,15 @@ static void *kexec_page_alloc(void *arg)
 {
 	struct kimage *kimage = (struct kimage *)arg;
 	struct page *page = kimage_alloc_control_pages(kimage, 0);
+	void *vaddr = NULL;

 	if (!page)
 		return NULL;

-	memset(page_address(page), 0, PAGE_SIZE);
+	vaddr = page_address(page);
+	memset(vaddr, 0, PAGE_SIZE);

-	return page_address(page);
+	return vaddr;
 }

 int machine_kexec_post_load(struct kimage *kimage)
@@ -147,7 +149,7 @@ int machine_kexec_post_load(struct kimage *kimage)
 	if (rc)
 		return rc;
 	kimage->arch.ttbr1 = __pa(trans_pgd);
-	kimage->arch.zero_page = __pa(empty_zero_page);
+	kimage->arch.zero_page = __pa_symbol(empty_zero_page);

 	reloc_size = __relocate_new_kernel_end - __relocate_new_kernel_start;
 	memcpy(reloc_code, __relocate_new_kernel_start, reloc_size);
@@ -5,10 +5,10 @@
  * Copyright (C) 2015 ARM Limited
  */
 #include <linux/perf_event.h>
+#include <linux/stacktrace.h>
 #include <linux/uaccess.h>

-#include <asm/pointer_auth.h>
 #include <asm/stacktrace.h>

 struct frame_tail {
 	struct frame_tail __user *fp;
@@ -132,30 +132,21 @@ void perf_callchain_user(struct perf_callchain_entry_ctx *entry,
 	}
 }

-/*
- * Gets called by walk_stackframe() for every stackframe. This will be called
- * whist unwinding the stackframe and is like a subroutine return so we use
- * the PC.
- */
 static bool callchain_trace(void *data, unsigned long pc)
 {
 	struct perf_callchain_entry_ctx *entry = data;
-	perf_callchain_store(entry, pc);
-	return true;
+	return perf_callchain_store(entry, pc) == 0;
 }

 void perf_callchain_kernel(struct perf_callchain_entry_ctx *entry,
 			   struct pt_regs *regs)
 {
-	struct stackframe frame;
-
 	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
 		/* We don't support guest os callchain now */
 		return;
 	}

-	start_backtrace(&frame, regs->regs[29], regs->pc);
-	walk_stackframe(current, &frame, callchain_trace, entry);
+	arch_stack_walk(callchain_trace, entry, current, regs);
 }

 unsigned long perf_instruction_pointer(struct pt_regs *regs)
@@ -40,6 +40,7 @@
 #include <linux/percpu.h>
 #include <linux/thread_info.h>
 #include <linux/prctl.h>
+#include <linux/stacktrace.h>

 #include <asm/alternative.h>
 #include <asm/compat.h>
@@ -439,34 +440,26 @@ static void entry_task_switch(struct task_struct *next)

 /*
  * ARM erratum 1418040 handling, affecting the 32bit view of CNTVCT.
- * Assuming the virtual counter is enabled at the beginning of times:
- *
- * - disable access when switching from a 64bit task to a 32bit task
- * - enable access when switching from a 32bit task to a 64bit task
+ * Ensure access is disabled when switching to a 32bit task, ensure
+ * access is enabled when switching to a 64bit task.
  */
-static void erratum_1418040_thread_switch(struct task_struct *prev,
-					  struct task_struct *next)
+static void erratum_1418040_thread_switch(struct task_struct *next)
 {
-	bool prev32, next32;
-	u64 val;
-
-	if (!IS_ENABLED(CONFIG_ARM64_ERRATUM_1418040))
+	if (!IS_ENABLED(CONFIG_ARM64_ERRATUM_1418040) ||
+	    !this_cpu_has_cap(ARM64_WORKAROUND_1418040))
 		return;

-	prev32 = is_compat_thread(task_thread_info(prev));
-	next32 = is_compat_thread(task_thread_info(next));
-
-	if (prev32 == next32 || !this_cpu_has_cap(ARM64_WORKAROUND_1418040))
-		return;
-
-	val = read_sysreg(cntkctl_el1);
-
-	if (!next32)
-		val |= ARCH_TIMER_USR_VCT_ACCESS_EN;
+	if (is_compat_thread(task_thread_info(next)))
+		sysreg_clear_set(cntkctl_el1, ARCH_TIMER_USR_VCT_ACCESS_EN, 0);
 	else
-		val &= ~ARCH_TIMER_USR_VCT_ACCESS_EN;
+		sysreg_clear_set(cntkctl_el1, 0, ARCH_TIMER_USR_VCT_ACCESS_EN);
+}

-	write_sysreg(val, cntkctl_el1);
+static void erratum_1418040_new_exec(void)
+{
+	preempt_disable();
+	erratum_1418040_thread_switch(current);
+	preempt_enable();
 }

 /*
@@ -490,7 +483,8 @@ void update_sctlr_el1(u64 sctlr)
 /*
  * Thread switching.
  */
-__notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev,
+__notrace_funcgraph __sched
+struct task_struct *__switch_to(struct task_struct *prev,
 				struct task_struct *next)
 {
 	struct task_struct *last;
@@ -501,7 +495,7 @@ struct task_struct *__switch_to(struct task_struct *prev,
 	contextidr_thread_switch(next);
 	entry_task_switch(next);
 	ssbs_thread_switch(next);
-	erratum_1418040_thread_switch(prev, next);
+	erratum_1418040_thread_switch(next);
 	ptrauth_thread_switch_user(next);

 	/*
@@ -528,30 +522,37 @@ struct task_struct *__switch_to(struct task_struct *prev,
 	return last;
 }

+struct wchan_info {
+	unsigned long pc;
+	int count;
+};
+
+static bool get_wchan_cb(void *arg, unsigned long pc)
+{
+	struct wchan_info *wchan_info = arg;
+
+	if (!in_sched_functions(pc)) {
+		wchan_info->pc = pc;
+		return false;
+	}
+	return wchan_info->count++ < 16;
+}
+
 unsigned long __get_wchan(struct task_struct *p)
 {
-	struct stackframe frame;
-	unsigned long stack_page, ret = 0;
-	int count = 0;
+	struct wchan_info wchan_info = {
+		.pc = 0,
+		.count = 0,
+	};

-	stack_page = (unsigned long)try_get_task_stack(p);
-	if (!stack_page)
+	if (!try_get_task_stack(p))
 		return 0;

-	start_backtrace(&frame, thread_saved_fp(p), thread_saved_pc(p));
-
-	do {
-		if (unwind_frame(p, &frame))
-			goto out;
-		if (!in_sched_functions(frame.pc)) {
-			ret = frame.pc;
-			goto out;
-		}
-	} while (count++ < 16);
+	arch_stack_walk(get_wchan_cb, &wchan_info, p, NULL);

-out:
 	put_task_stack(p);
-	return ret;
+
+	return wchan_info.pc;
 }

 unsigned long arch_align_stack(unsigned long sp)
@@ -611,6 +612,7 @@ void arch_setup_new_exec(void)
 	current->mm->context.flags = mmflags;
 	ptrauth_thread_init_user();
 	mte_thread_init_user();
+	erratum_1418040_new_exec();

 	if (task_spec_ssb_noexec(current)) {
 		arch_prctl_spec_ctrl_set(current, PR_SPEC_STORE_BYPASS,
@@ -812,9 +812,9 @@ static int sve_set(struct task_struct *target,

 	/*
 	 * Apart from SVE_PT_REGS_MASK, all SVE_PT_* flags are consumed by
-	 * sve_set_vector_length(), which will also validate them for us:
+	 * vec_set_vector_length(), which will also validate them for us:
 	 */
-	ret = sve_set_vector_length(target, header.vl,
+	ret = vec_set_vector_length(target, ARM64_VEC_SVE, header.vl,
 		((unsigned long)header.flags & ~SVE_PT_REGS_MASK) << 16);
 	if (ret)
 		goto out;
@@ -9,9 +9,9 @@
 #include <linux/export.h>
 #include <linux/ftrace.h>
 #include <linux/kprobes.h>
+#include <linux/stacktrace.h>

-#include <asm/stack_pointer.h>
 #include <asm/stacktrace.h>

 struct return_address_data {
 	unsigned int level;
@@ -35,15 +35,11 @@ NOKPROBE_SYMBOL(save_return_addr);
 void *return_address(unsigned int level)
 {
 	struct return_address_data data;
-	struct stackframe frame;

 	data.level = level + 2;
 	data.addr = NULL;

-	start_backtrace(&frame,
-			(unsigned long)__builtin_frame_address(0),
-			(unsigned long)return_address);
-	walk_stackframe(current, &frame, save_return_addr, &data);
+	arch_stack_walk(save_return_addr, &data, current, NULL);

 	if (!data.level)
 		return data.addr;
@@ -189,11 +189,16 @@ static void __init setup_machine_fdt(phys_addr_t dt_phys)

 	if (!dt_virt || !early_init_dt_scan(dt_virt)) {
 		pr_crit("\n"
-			"Error: invalid device tree blob at physical address %pa (virtual address 0x%p)\n"
+			"Error: invalid device tree blob at physical address %pa (virtual address 0x%px)\n"
 			"The dtb must be 8-byte aligned and must not exceed 2 MB in size\n"
 			"\nPlease check your bootloader.",
 			&dt_phys, dt_virt);

+		/*
+		 * Note that in this _really_ early stage we cannot even BUG()
+		 * or oops, so the least terrible thing to do is cpu_relax(),
+		 * or else we could end-up printing non-initialized data, etc.
+		 */
 		while (true)
 			cpu_relax();
 	}
@@ -232,12 +237,14 @@ static void __init request_standard_resources(void)
 		if (memblock_is_nomap(region)) {
 			res->name = "reserved";
 			res->flags = IORESOURCE_MEM;
+			res->start = __pfn_to_phys(memblock_region_reserved_base_pfn(region));
+			res->end = __pfn_to_phys(memblock_region_reserved_end_pfn(region)) - 1;
 		} else {
 			res->name = "System RAM";
 			res->flags = IORESOURCE_SYSTEM_RAM | IORESOURCE_BUSY;
+			res->start = __pfn_to_phys(memblock_region_memory_base_pfn(region));
+			res->end = __pfn_to_phys(memblock_region_memory_end_pfn(region)) - 1;
 		}
-		res->start = __pfn_to_phys(memblock_region_memory_base_pfn(region));
-		res->end = __pfn_to_phys(memblock_region_memory_end_pfn(region)) - 1;

 		request_resource(&iomem_resource, res);
@@ -33,8 +33,8 @@
  */


-void start_backtrace(struct stackframe *frame, unsigned long fp,
-		     unsigned long pc)
+static void start_backtrace(struct stackframe *frame, unsigned long fp,
+			    unsigned long pc)
 {
 	frame->fp = fp;
 	frame->pc = pc;
@@ -63,7 +63,8 @@ static void start_backtrace(struct stackframe *frame, unsigned long fp,
  * records (e.g. a cycle), determined based on the location and fp value of A
  * and the location (but not the fp value) of B.
  */
-int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
+static int notrace unwind_frame(struct task_struct *tsk,
+				struct stackframe *frame)
 {
 	unsigned long fp = frame->fp;
 	struct stack_info info;
@@ -141,8 +142,9 @@ static int notrace unwind_frame(struct task_struct *tsk,
 }
 NOKPROBE_SYMBOL(unwind_frame);

-void notrace walk_stackframe(struct task_struct *tsk, struct stackframe *frame,
-			     bool (*fn)(void *, unsigned long), void *data)
+static void notrace walk_stackframe(struct task_struct *tsk,
+				    struct stackframe *frame,
+				    bool (*fn)(void *, unsigned long), void *data)
 {
 	while (1) {
 		int ret;
@@ -156,24 +158,20 @@ static void notrace walk_stackframe(struct task_struct *tsk,
 }
 NOKPROBE_SYMBOL(walk_stackframe);

-static void dump_backtrace_entry(unsigned long where, const char *loglvl)
+static bool dump_backtrace_entry(void *arg, unsigned long where)
 {
+	char *loglvl = arg;
 	printk("%s %pSb\n", loglvl, (void *)where);
+	return true;
 }

 void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk,
 		    const char *loglvl)
 {
-	struct stackframe frame;
-	int skip = 0;
-
 	pr_debug("%s(regs = %p tsk = %p)\n", __func__, regs, tsk);

-	if (regs) {
-		if (user_mode(regs))
-			return;
-		skip = 1;
-	}
+	if (regs && user_mode(regs))
+		return;

 	if (!tsk)
 		tsk = current;
@@ -181,36 +179,8 @@ void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk,
 	if (!try_get_task_stack(tsk))
 		return;

-	if (tsk == current) {
-		start_backtrace(&frame,
-				(unsigned long)__builtin_frame_address(0),
-				(unsigned long)dump_backtrace);
-	} else {
-		/*
-		 * task blocked in __switch_to
-		 */
-		start_backtrace(&frame,
-				thread_saved_fp(tsk),
-				thread_saved_pc(tsk));
-	}
-
 	printk("%sCall trace:\n", loglvl);
-	do {
-		/* skip until specified stack frame */
-		if (!skip) {
-			dump_backtrace_entry(frame.pc, loglvl);
-		} else if (frame.fp == regs->regs[29]) {
-			skip = 0;
-			/*
-			 * Mostly, this is the case where this function is
-			 * called in panic/abort. As exception handler's
-			 * stack frame does not contain the corresponding pc
-			 * at which an exception has taken place, use regs->pc
-			 * instead.
-			 */
-			dump_backtrace_entry(regs->pc, loglvl);
-		}
-	} while (!unwind_frame(tsk, &frame));
+	arch_stack_walk(dump_backtrace_entry, (void *)loglvl, tsk, regs);

 	put_task_stack(tsk);
 }
@@ -221,8 +191,6 @@ void show_stack(struct task_struct *tsk, unsigned long *sp, const char *loglvl)
 	barrier();
 }

-#ifdef CONFIG_STACKTRACE
-
 noinline notrace void arch_stack_walk(stack_trace_consume_fn consume_entry,
 			      void *cookie, struct task_struct *task,
 			      struct pt_regs *regs)
@@ -241,5 +209,3 @@ noinline notrace void arch_stack_walk(stack_trace_consume_fn consume_entry,

 	walk_stackframe(task, &frame, consume_entry, cookie);
 }
-
-#endif
@@ -18,6 +18,7 @@
 #include <linux/timex.h>
 #include <linux/errno.h>
 #include <linux/profile.h>
+#include <linux/stacktrace.h>
 #include <linux/syscore_ops.h>
 #include <linux/timer.h>
 #include <linux/irq.h>
@@ -29,25 +30,25 @@
 #include <clocksource/arm_arch_timer.h>
 
 #include <asm/thread_info.h>
-#include <asm/stacktrace.h>
 #include <asm/paravirt.h>
 
+static bool profile_pc_cb(void *arg, unsigned long pc)
+{
+	unsigned long *prof_pc = arg;
+
+	if (in_lock_functions(pc))
+		return true;
+	*prof_pc = pc;
+	return false;
+}
+
 unsigned long profile_pc(struct pt_regs *regs)
 {
-	struct stackframe frame;
+	unsigned long prof_pc = 0;
 
-	if (!in_lock_functions(regs->pc))
-		return regs->pc;
+	arch_stack_walk(profile_pc_cb, &prof_pc, current, regs);
 
-	start_backtrace(&frame, regs->regs[29], regs->pc);
-
-	do {
-		int ret = unwind_frame(NULL, &frame);
-		if (ret < 0)
-			return 0;
-	} while (in_lock_functions(frame.pc));
-
-	return frame.pc;
+	return prof_pc;
 }
 EXPORT_SYMBOL(profile_pc);
@@ -32,6 +32,7 @@ ccflags-y += -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO
 CFLAGS_REMOVE_vgettimeofday.o = $(CC_FLAGS_FTRACE) -Os $(CC_FLAGS_SCS) $(GCC_PLUGINS_CFLAGS) \
				$(CC_FLAGS_LTO)
 KASAN_SANITIZE			:= n
+KCSAN_SANITIZE			:= n
 UBSAN_SANITIZE			:= n
 OBJECT_FILES_NON_STANDARD	:= y
 KCOV_INSTRUMENT			:= n
@@ -140,9 +140,12 @@ static int kvm_handle_unknown_ec(struct kvm_vcpu *vcpu)
 	return 1;
 }
 
+/*
+ * Guest access to SVE registers should be routed to this handler only
+ * when the system doesn't support SVE.
+ */
 static int handle_sve(struct kvm_vcpu *vcpu)
 {
-	/* Until SVE is supported for guests: */
 	kvm_inject_undefined(vcpu);
 	return 1;
 }
@@ -89,6 +89,7 @@ KBUILD_CFLAGS := $(filter-out $(CC_FLAGS_FTRACE) $(CC_FLAGS_SCS) $(CC_FLAGS_CFI)
 # cause crashes. Just disable it.
 GCOV_PROFILE	:= n
 KASAN_SANITIZE	:= n
+KCSAN_SANITIZE	:= n
 UBSAN_SANITIZE	:= n
 KCOV_INSTRUMENT	:= n
@@ -52,10 +52,10 @@ int kvm_arm_init_sve(void)
		 * The get_sve_reg()/set_sve_reg() ioctl interface will need
		 * to be extended with multiple register slice support in
		 * order to support vector lengths greater than
-		 * SVE_VL_ARCH_MAX:
+		 * VL_ARCH_MAX:
		 */
-		if (WARN_ON(kvm_sve_max_vl > SVE_VL_ARCH_MAX))
-			kvm_sve_max_vl = SVE_VL_ARCH_MAX;
+		if (WARN_ON(kvm_sve_max_vl > VL_ARCH_MAX))
+			kvm_sve_max_vl = VL_ARCH_MAX;
 
		/*
		 * Don't even try to make use of vector lengths that
@@ -103,7 +103,7 @@ static int kvm_vcpu_finalize_sve(struct kvm_vcpu *vcpu)
	 * set_sve_vls(). Double-check here just to be sure:
	 */
	if (WARN_ON(!sve_vl_valid(vl) || vl > sve_max_virtualisable_vl() ||
-		    vl > SVE_VL_ARCH_MAX))
+		    vl > VL_ARCH_MAX))
		return -EIO;
 
	buf = kzalloc(SVE_SIG_REGS_SIZE(sve_vq_from_vl(vl)), GFP_KERNEL_ACCOUNT);
@@ -1525,7 +1525,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
	/* CRm=6 */
	ID_SANITISED(ID_AA64ISAR0_EL1),
	ID_SANITISED(ID_AA64ISAR1_EL1),
-	ID_UNALLOCATED(6,2),
+	ID_SANITISED(ID_AA64ISAR2_EL1),
	ID_UNALLOCATED(6,3),
	ID_UNALLOCATED(6,4),
	ID_UNALLOCATED(6,5),
@@ -16,6 +16,7 @@
  */
 SYM_FUNC_START_PI(clear_page)
	mrs	x1, dczid_el0
+	tbnz	x1, #4, 2f	/* Branch if DC ZVA is prohibited */
	and	w1, w1, #0xf
	mov	x2, #4
	lsl	x1, x2, x1
@@ -25,5 +26,14 @@ SYM_FUNC_START_PI(clear_page)
	tst	x0, #(PAGE_SIZE - 1)
	b.ne	1b
	ret
+
+2:	stnp	xzr, xzr, [x0]
+	stnp	xzr, xzr, [x0, #16]
+	stnp	xzr, xzr, [x0, #32]
+	stnp	xzr, xzr, [x0, #48]
+	add	x0, x0, #64
+	tst	x0, #(PAGE_SIZE - 1)
+	b.ne	2b
+	ret
 SYM_FUNC_END_PI(clear_page)
 EXPORT_SYMBOL(clear_page)
@@ -38,9 +38,7 @@
  * incremented by 256 prior to return).
  */
 SYM_CODE_START(__hwasan_tag_mismatch)
-#ifdef BTI_C
-	BTI_C
-#endif
+	bti	c
	add	x29, sp, #232
	stp	x2, x3, [sp, #8 * 2]
	stp	x4, x5, [sp, #8 * 4]
@@ -43,17 +43,23 @@ SYM_FUNC_END(mte_clear_page_tags)
  *	x0 - address to the beginning of the page
  */
 SYM_FUNC_START(mte_zero_clear_page_tags)
+	and	x0, x0, #(1 << MTE_TAG_SHIFT) - 1 // clear the tag
	mrs	x1, dczid_el0
+	tbnz	x1, #4, 2f	// Branch if DC GZVA is prohibited
	and	w1, w1, #0xf
	mov	x2, #4
	lsl	x1, x2, x1
-	and	x0, x0, #(1 << MTE_TAG_SHIFT) - 1 // clear the tag
 
 1:	dc	gzva, x0
	add	x0, x0, x1
	tst	x0, #(PAGE_SIZE - 1)
	b.ne	1b
	ret
+
+2:	stz2g	x0, [x0], #(MTE_GRANULE_SIZE * 2)
+	tst	x0, #(PAGE_SIZE - 1)
+	b.ne	2b
+	ret
 SYM_FUNC_END(mte_zero_clear_page_tags)
 
 /*
@@ -167,7 +167,7 @@ void xor_arm64_neon_5(unsigned long bytes, unsigned long *p1,
	} while (--lines > 0);
 }
 
-struct xor_block_template const xor_block_inner_neon = {
+struct xor_block_template xor_block_inner_neon __ro_after_init = {
	.name	= "__inner_neon__",
	.do_2	= xor_arm64_neon_2,
	.do_3	= xor_arm64_neon_3,
@@ -176,6 +176,151 @@ struct xor_block_template const xor_block_inner_neon = {
 };
 EXPORT_SYMBOL(xor_block_inner_neon);
 
+static inline uint64x2_t eor3(uint64x2_t p, uint64x2_t q, uint64x2_t r)
+{
+	uint64x2_t res;
+
+	asm(ARM64_ASM_PREAMBLE ".arch_extension sha3\n"
+	    "eor3 %0.16b, %1.16b, %2.16b, %3.16b"
+	    : "=w"(res) : "w"(p), "w"(q), "w"(r));
+	return res;
+}
+
+static void xor_arm64_eor3_3(unsigned long bytes, unsigned long *p1,
+			     unsigned long *p2, unsigned long *p3)
+{
+	uint64_t *dp1 = (uint64_t *)p1;
+	uint64_t *dp2 = (uint64_t *)p2;
+	uint64_t *dp3 = (uint64_t *)p3;
+
+	register uint64x2_t v0, v1, v2, v3;
+	long lines = bytes / (sizeof(uint64x2_t) * 4);
+
+	do {
+		/* p1 ^= p2 ^ p3 */
+		v0 = eor3(vld1q_u64(dp1 + 0), vld1q_u64(dp2 + 0),
+			  vld1q_u64(dp3 + 0));
+		v1 = eor3(vld1q_u64(dp1 + 2), vld1q_u64(dp2 + 2),
+			  vld1q_u64(dp3 + 2));
+		v2 = eor3(vld1q_u64(dp1 + 4), vld1q_u64(dp2 + 4),
+			  vld1q_u64(dp3 + 4));
+		v3 = eor3(vld1q_u64(dp1 + 6), vld1q_u64(dp2 + 6),
+			  vld1q_u64(dp3 + 6));
+
+		/* store */
+		vst1q_u64(dp1 + 0, v0);
+		vst1q_u64(dp1 + 2, v1);
+		vst1q_u64(dp1 + 4, v2);
+		vst1q_u64(dp1 + 6, v3);
+
+		dp1 += 8;
+		dp2 += 8;
+		dp3 += 8;
+	} while (--lines > 0);
+}
+
+static void xor_arm64_eor3_4(unsigned long bytes, unsigned long *p1,
+			     unsigned long *p2, unsigned long *p3,
+			     unsigned long *p4)
+{
+	uint64_t *dp1 = (uint64_t *)p1;
+	uint64_t *dp2 = (uint64_t *)p2;
+	uint64_t *dp3 = (uint64_t *)p3;
+	uint64_t *dp4 = (uint64_t *)p4;
+
+	register uint64x2_t v0, v1, v2, v3;
+	long lines = bytes / (sizeof(uint64x2_t) * 4);
+
+	do {
+		/* p1 ^= p2 ^ p3 */
+		v0 = eor3(vld1q_u64(dp1 + 0), vld1q_u64(dp2 + 0),
+			  vld1q_u64(dp3 + 0));
+		v1 = eor3(vld1q_u64(dp1 + 2), vld1q_u64(dp2 + 2),
+			  vld1q_u64(dp3 + 2));
+		v2 = eor3(vld1q_u64(dp1 + 4), vld1q_u64(dp2 + 4),
+			  vld1q_u64(dp3 + 4));
+		v3 = eor3(vld1q_u64(dp1 + 6), vld1q_u64(dp2 + 6),
+			  vld1q_u64(dp3 + 6));
+
+		/* p1 ^= p4 */
+		v0 = veorq_u64(v0, vld1q_u64(dp4 + 0));
+		v1 = veorq_u64(v1, vld1q_u64(dp4 + 2));
+		v2 = veorq_u64(v2, vld1q_u64(dp4 + 4));
+		v3 = veorq_u64(v3, vld1q_u64(dp4 + 6));
+
+		/* store */
+		vst1q_u64(dp1 + 0, v0);
+		vst1q_u64(dp1 + 2, v1);
+		vst1q_u64(dp1 + 4, v2);
+		vst1q_u64(dp1 + 6, v3);
+
+		dp1 += 8;
+		dp2 += 8;
+		dp3 += 8;
+		dp4 += 8;
+	} while (--lines > 0);
+}
+
+static void xor_arm64_eor3_5(unsigned long bytes, unsigned long *p1,
+			     unsigned long *p2, unsigned long *p3,
+			     unsigned long *p4, unsigned long *p5)
+{
+	uint64_t *dp1 = (uint64_t *)p1;
+	uint64_t *dp2 = (uint64_t *)p2;
+	uint64_t *dp3 = (uint64_t *)p3;
+	uint64_t *dp4 = (uint64_t *)p4;
+	uint64_t *dp5 = (uint64_t *)p5;
+
+	register uint64x2_t v0, v1, v2, v3;
+	long lines = bytes / (sizeof(uint64x2_t) * 4);
+
+	do {
+		/* p1 ^= p2 ^ p3 */
+		v0 = eor3(vld1q_u64(dp1 + 0), vld1q_u64(dp2 + 0),
+			  vld1q_u64(dp3 + 0));
+		v1 = eor3(vld1q_u64(dp1 + 2), vld1q_u64(dp2 + 2),
+			  vld1q_u64(dp3 + 2));
+		v2 = eor3(vld1q_u64(dp1 + 4), vld1q_u64(dp2 + 4),
+			  vld1q_u64(dp3 + 4));
+		v3 = eor3(vld1q_u64(dp1 + 6), vld1q_u64(dp2 + 6),
+			  vld1q_u64(dp3 + 6));
+
+		/* p1 ^= p4 ^ p5 */
+		v0 = eor3(v0, vld1q_u64(dp4 + 0), vld1q_u64(dp5 + 0));
+		v1 = eor3(v1, vld1q_u64(dp4 + 2), vld1q_u64(dp5 + 2));
+		v2 = eor3(v2, vld1q_u64(dp4 + 4), vld1q_u64(dp5 + 4));
+		v3 = eor3(v3, vld1q_u64(dp4 + 6), vld1q_u64(dp5 + 6));
+
+		/* store */
+		vst1q_u64(dp1 + 0, v0);
+		vst1q_u64(dp1 + 2, v1);
+		vst1q_u64(dp1 + 4, v2);
+		vst1q_u64(dp1 + 6, v3);
+
+		dp1 += 8;
+		dp2 += 8;
+		dp3 += 8;
+		dp4 += 8;
+		dp5 += 8;
+	} while (--lines > 0);
+}
+
+static int __init xor_neon_init(void)
+{
+	if (IS_ENABLED(CONFIG_AS_HAS_SHA3) && cpu_have_named_feature(SHA3)) {
+		xor_block_inner_neon.do_3 = xor_arm64_eor3_3;
+		xor_block_inner_neon.do_4 = xor_arm64_eor3_4;
+		xor_block_inner_neon.do_5 = xor_arm64_eor3_5;
+	}
+	return 0;
+}
+module_init(xor_neon_init);
+
+static void __exit xor_neon_exit(void)
+{
+}
+module_exit(xor_neon_exit);
+
 MODULE_AUTHOR("Jackie Liu <liuyun01@kylinos.cn>");
 MODULE_DESCRIPTION("ARMv8 XOR Extensions");
 MODULE_LICENSE("GPL");
@@ -140,15 +140,7 @@ SYM_FUNC_END(dcache_clean_pou)
  *	- start   - kernel start address of region
  *	- end     - kernel end address of region
  */
-SYM_FUNC_START_LOCAL(__dma_inv_area)
 SYM_FUNC_START_PI(dcache_inval_poc)
-	/* FALLTHROUGH */
-
-/*
- *	__dma_inv_area(start, end)
- *	- start   - virtual start address of region
- *	- end     - virtual end address of region
- */
	dcache_line_size x2, x3
	sub	x3, x2, #1
	tst	x1, x3				// end cache line aligned?
@@ -167,7 +159,6 @@ SYM_FUNC_START_PI(dcache_inval_poc)
	dsb	sy
	ret
 SYM_FUNC_END_PI(dcache_inval_poc)
-SYM_FUNC_END(__dma_inv_area)
 
 /*
  *	dcache_clean_poc(start, end)
@@ -178,19 +169,10 @@ SYM_FUNC_END(__dma_inv_area)
  *	- start   - virtual start address of region
  *	- end     - virtual end address of region
  */
-SYM_FUNC_START_LOCAL(__dma_clean_area)
 SYM_FUNC_START_PI(dcache_clean_poc)
-	/* FALLTHROUGH */
-
-/*
- *	__dma_clean_area(start, end)
- *	- start   - virtual start address of region
- *	- end     - virtual end address of region
- */
	dcache_by_line_op cvac, sy, x0, x1, x2, x3
	ret
 SYM_FUNC_END_PI(dcache_clean_poc)
-SYM_FUNC_END(__dma_clean_area)
 
 /*
  *	dcache_clean_pop(start, end)
@@ -232,8 +214,8 @@ SYM_FUNC_END_PI(__dma_flush_area)
 SYM_FUNC_START_PI(__dma_map_area)
	add	x1, x0, x1
	cmp	w2, #DMA_FROM_DEVICE
-	b.eq	__dma_inv_area
-	b	__dma_clean_area
+	b.eq	__pi_dcache_inval_poc
+	b	__pi_dcache_clean_poc
 SYM_FUNC_END_PI(__dma_map_area)
 
 /*
@@ -245,6 +227,6 @@ SYM_FUNC_END_PI(__dma_map_area)
 SYM_FUNC_START_PI(__dma_unmap_area)
	add	x1, x0, x1
	cmp	w2, #DMA_TO_DEVICE
-	b.ne	__dma_inv_area
+	b.ne	__pi_dcache_inval_poc
	ret
 SYM_FUNC_END_PI(__dma_unmap_area)
@@ -35,8 +35,8 @@ static unsigned long *pinned_asid_map;
 #define ASID_FIRST_VERSION	(1UL << asid_bits)
 
 #define NUM_USER_ASIDS		ASID_FIRST_VERSION
-#define asid2idx(asid)		((asid) & ~ASID_MASK)
-#define idx2asid(idx)		asid2idx(idx)
+#define ctxid2asid(asid)	((asid) & ~ASID_MASK)
+#define asid2ctxid(asid, genid)	((asid) | (genid))
 
 /* Get the ASIDBits supported by the current CPU */
 static u32 get_cpu_asid_bits(void)
@@ -50,10 +50,10 @@ static u32 get_cpu_asid_bits(void)
		pr_warn("CPU%d: Unknown ASID size (%d); assuming 8-bit\n",
			smp_processor_id(), fld);
		fallthrough;
-	case 0:
+	case ID_AA64MMFR0_ASID_8:
		asid = 8;
		break;
-	case 2:
+	case ID_AA64MMFR0_ASID_16:
		asid = 16;
	}
 
@@ -120,7 +120,7 @@ static void flush_context(void)
		 */
		if (asid == 0)
			asid = per_cpu(reserved_asids, i);
-		__set_bit(asid2idx(asid), asid_map);
+		__set_bit(ctxid2asid(asid), asid_map);
		per_cpu(reserved_asids, i) = asid;
	}
 
@@ -162,7 +162,7 @@ static u64 new_context(struct mm_struct *mm)
	u64 generation = atomic64_read(&asid_generation);
 
	if (asid != 0) {
-		u64 newasid = generation | (asid & ~ASID_MASK);
+		u64 newasid = asid2ctxid(ctxid2asid(asid), generation);
 
		/*
		 * If our current ASID was active during a rollover, we
@@ -183,7 +183,7 @@ static u64 new_context(struct mm_struct *mm)
		 * We had a valid ASID in a previous life, so try to re-use
		 * it if possible.
		 */
-		if (!__test_and_set_bit(asid2idx(asid), asid_map))
+		if (!__test_and_set_bit(ctxid2asid(asid), asid_map))
			return newasid;
	}
 
@@ -209,7 +209,7 @@ static u64 new_context(struct mm_struct *mm)
 set_asid:
	__set_bit(asid, asid_map);
	cur_idx = asid;
-	return idx2asid(asid) | generation;
+	return asid2ctxid(asid, generation);
 }
 
 void check_and_switch_context(struct mm_struct *mm)
@@ -300,13 +300,13 @@ unsigned long arm64_mm_context_get(struct mm_struct *mm)
	}
 
	nr_pinned_asids++;
-	__set_bit(asid2idx(asid), pinned_asid_map);
+	__set_bit(ctxid2asid(asid), pinned_asid_map);
	refcount_set(&mm->context.pinned, 1);
 
 out_unlock:
	raw_spin_unlock_irqrestore(&cpu_asid_lock, flags);
 
-	asid &= ~ASID_MASK;
+	asid = ctxid2asid(asid);
 
	/* Set the equivalent of USER_ASID_BIT */
	if (asid && arm64_kernel_unmapped_at_el0())
@@ -327,7 +327,7 @@ void arm64_mm_context_put(struct mm_struct *mm)
	raw_spin_lock_irqsave(&cpu_asid_lock, flags);
 
	if (refcount_dec_and_test(&mm->context.pinned)) {
-		__clear_bit(asid2idx(asid), pinned_asid_map);
+		__clear_bit(ctxid2asid(asid), pinned_asid_map);
		nr_pinned_asids--;
	}
@@ -10,9 +10,6 @@
 #include <asm/asm-extable.h>
 #include <asm/ptrace.h>
 
-typedef bool (*ex_handler_t)(const struct exception_table_entry *,
-			     struct pt_regs *);
-
 static inline unsigned long
 get_ex_fixup(const struct exception_table_entry *ex)
 {
@@ -297,6 +297,8 @@ static void die_kernel_fault(const char *msg, unsigned long addr,
	pr_alert("Unable to handle kernel %s at virtual address %016lx\n", msg,
		 addr);
 
+	kasan_non_canonical_hook(addr);
+
	mem_abort_decode(esr);
 
	show_pte(addr);
@@ -813,11 +815,8 @@ void do_mem_abort(unsigned long far, unsigned int esr, struct pt_regs *regs)
	if (!inf->fn(far, esr, regs))
		return;
 
-	if (!user_mode(regs)) {
-		pr_alert("Unhandled fault at 0x%016lx\n", addr);
-		mem_abort_decode(esr);
-		show_pte(addr);
-	}
+	if (!user_mode(regs))
+		die_kernel_fault(inf->name, addr, esr, regs);
 
	/*
	 * At this point we have an unrecognized fault type whose tag bits may
@@ -47,7 +47,7 @@ obj-y				:= cputable.o syscalls.o \
				   udbg.o misc.o io.o misc_$(BITS).o \
				   of_platform.o prom_parse.o firmware.o \
				   hw_breakpoint_constraints.o interrupt.o \
-				   kdebugfs.o
+				   kdebugfs.o stacktrace.o
 obj-y				+= ptrace/
 obj-$(CONFIG_PPC64)		+= setup_64.o \
				   paca.o nvram_64.o note.o
@@ -116,7 +116,6 @@ obj-$(CONFIG_OPTPROBES) += optprobes.o optprobes_head.o
 obj-$(CONFIG_KPROBES_ON_FTRACE)	+= kprobes-ftrace.o
 obj-$(CONFIG_UPROBES)		+= uprobes.o
 obj-$(CONFIG_PPC_UDBG_16550)	+= legacy_serial.o udbg_16550.o
-obj-$(CONFIG_STACKTRACE)	+= stacktrace.o
 obj-$(CONFIG_SWIOTLB)		+= dma-swiotlb.o
 obj-$(CONFIG_ARCH_HAS_DMA_SET_MASK) += dma-mask.o
@@ -139,12 +139,8 @@ unsigned long __get_wchan(struct task_struct *task)
	return pc;
 }
 
-#ifdef CONFIG_STACKTRACE
-
 noinline void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
		     struct task_struct *task, struct pt_regs *regs)
 {
	walk_stackframe(task, regs, consume_entry, cookie);
 }
-
-#endif /* CONFIG_STACKTRACE */
@@ -40,7 +40,7 @@ obj-y += sysinfo.o lgr.o os_info.o machine_kexec.o
 obj-y	+= runtime_instr.o cache.o fpu.o dumpstack.o guarded_storage.o sthyi.o
 obj-y	+= entry.o reipl.o relocate_kernel.o kdebugfs.o alternative.o
 obj-y	+= nospec-branch.o ipl_vmparm.o machine_kexec_reloc.o unwind_bc.o
-obj-y	+= smp.o text_amode31.o
+obj-y	+= smp.o text_amode31.o stacktrace.o
 
 extra-y				+= head64.o vmlinux.lds
 
@@ -55,7 +55,6 @@ compat-obj-$(CONFIG_AUDIT) += compat_audit.o
 obj-$(CONFIG_COMPAT)		+= compat_linux.o compat_signal.o
 obj-$(CONFIG_COMPAT)		+= $(compat-obj-y)
 obj-$(CONFIG_EARLY_PRINTK)	+= early_printk.o
-obj-$(CONFIG_STACKTRACE)	+= stacktrace.o
 obj-$(CONFIG_KPROBES)		+= kprobes.o
 obj-$(CONFIG_KPROBES)		+= kprobes_insn_page.o
 obj-$(CONFIG_FUNCTION_TRACER)	+= mcount.o ftrace.o
@@ -84,7 +84,7 @@ obj-$(CONFIG_IA32_EMULATION)	+= tls.o
 obj-y				+= step.o
 obj-$(CONFIG_INTEL_TXT)		+= tboot.o
 obj-$(CONFIG_ISA_DMA_API)	+= i8237.o
-obj-$(CONFIG_STACKTRACE)	+= stacktrace.o
+obj-y				+= stacktrace.o
 obj-y				+= cpu/
 obj-y				+= acpi/
 obj-y				+= reboot.o
@@ -251,5 +251,16 @@ do {									\
 #define pmem_wmb()	wmb()
 #endif
 
+/*
+ * ioremap_wc() maps I/O memory as memory with write-combining attributes. For
+ * this kind of memory accesses, the CPU may wait for prior accesses to be
+ * merged with subsequent ones. In some situations, such a wait is bad for the
+ * performance. io_stop_wc() can be used to prevent the merging of
+ * write-combining memory accesses before this macro with those after it.
+ */
+#ifndef io_stop_wc
+#define io_stop_wc() do { } while (0)
+#endif
+
 #endif /* !__ASSEMBLY__ */
 #endif /* __ASM_GENERIC_BARRIER_H */
@@ -8,22 +8,6 @@
 struct task_struct;
 struct pt_regs;
 
-#ifdef CONFIG_STACKTRACE
-void stack_trace_print(const unsigned long *trace, unsigned int nr_entries,
-		       int spaces);
-int stack_trace_snprint(char *buf, size_t size, const unsigned long *entries,
-			unsigned int nr_entries, int spaces);
-unsigned int stack_trace_save(unsigned long *store, unsigned int size,
-			      unsigned int skipnr);
-unsigned int stack_trace_save_tsk(struct task_struct *task,
-				  unsigned long *store, unsigned int size,
-				  unsigned int skipnr);
-unsigned int stack_trace_save_regs(struct pt_regs *regs, unsigned long *store,
-				   unsigned int size, unsigned int skipnr);
-unsigned int stack_trace_save_user(unsigned long *store, unsigned int size);
-unsigned int filter_irq_stacks(unsigned long *entries, unsigned int nr_entries);
-
-/* Internal interfaces. Do not use in generic code */
 #ifdef CONFIG_ARCH_STACKWALK
 
 /**
@@ -76,8 +60,25 @@ int arch_stack_walk_reliable(stack_trace_consume_fn consume_entry, void *cookie,
 
 void arch_stack_walk_user(stack_trace_consume_fn consume_entry, void *cookie,
			  const struct pt_regs *regs);
+#endif /* CONFIG_ARCH_STACKWALK */
 
-#else /* CONFIG_ARCH_STACKWALK */
+#ifdef CONFIG_STACKTRACE
+void stack_trace_print(const unsigned long *trace, unsigned int nr_entries,
+		       int spaces);
+int stack_trace_snprint(char *buf, size_t size, const unsigned long *entries,
+			unsigned int nr_entries, int spaces);
+unsigned int stack_trace_save(unsigned long *store, unsigned int size,
+			      unsigned int skipnr);
+unsigned int stack_trace_save_tsk(struct task_struct *task,
+				  unsigned long *store, unsigned int size,
+				  unsigned int skipnr);
+unsigned int stack_trace_save_regs(struct pt_regs *regs, unsigned long *store,
+				   unsigned int size, unsigned int skipnr);
+unsigned int stack_trace_save_user(unsigned long *store, unsigned int size);
+unsigned int filter_irq_stacks(unsigned long *entries, unsigned int nr_entries);
 
+#ifndef CONFIG_ARCH_STACKWALK
 /* Internal interfaces. Do not use in generic code */
 struct stack_trace {
	unsigned int nr_entries, max_entries;
	unsigned long *entries;
@@ -8,6 +8,7 @@ CFLAGS_REMOVE_debugfs.o = $(CC_FLAGS_FTRACE)
 CFLAGS_REMOVE_report.o = $(CC_FLAGS_FTRACE)
 
 CFLAGS_core.o := $(call cc-option,-fno-conserve-stack) \
+	$(call cc-option,-mno-outline-atomics) \
	-fno-stack-protector -DDISABLE_BRANCH_PROFILING
 
 obj-y := core.o debugfs.o report.o
@@ -4,7 +4,7 @@
 ARCH ?= $(shell uname -m 2>/dev/null || echo not)
 
 ifneq (,$(filter $(ARCH),aarch64 arm64))
-ARM64_SUBTARGETS ?= tags signal pauth fp mte bti
+ARM64_SUBTARGETS ?= tags signal pauth fp mte bti abi
 else
 ARM64_SUBTARGETS :=
 endif
tools/testing/selftests/arm64/abi/.gitignore (new file)
@@ -0,0 +1 @@
syscall-abi
tools/testing/selftests/arm64/abi/Makefile (new file)
@@ -0,0 +1,8 @@
# SPDX-License-Identifier: GPL-2.0
# Copyright (C) 2021 ARM Limited

TEST_GEN_PROGS := syscall-abi

include ../../lib.mk

$(OUTPUT)/syscall-abi: syscall-abi.c syscall-abi-asm.S
tools/testing/selftests/arm64/abi/syscall-abi-asm.S (new file)
@@ -0,0 +1,240 @@
// SPDX-License-Identifier: GPL-2.0-only
// Copyright (C) 2021 ARM Limited.
//
// Assembly portion of the syscall ABI test

//
// Load values from memory into registers, invoke a syscall and save the
// register values back to memory for later checking. The syscall to be
// invoked is configured in x8 of the input GPR data.
//
// x0:	SVE VL, 0 for FP only
//
//	GPRs:	gpr_in, gpr_out
//	FPRs:	fpr_in, fpr_out
//	Zn:	z_in, z_out
//	Pn:	p_in, p_out
//	FFR:	ffr_in, ffr_out

.arch_extension sve

.globl do_syscall
do_syscall:
	// Store callee saved registers x19-x29 (80 bytes) plus x0 and x1
	stp	x29, x30, [sp, #-112]!
	mov	x29, sp
	stp	x0, x1, [sp, #16]
	stp	x19, x20, [sp, #32]
	stp	x21, x22, [sp, #48]
	stp	x23, x24, [sp, #64]
	stp	x25, x26, [sp, #80]
	stp	x27, x28, [sp, #96]

	// Load GPRs x8-x28, and save our SP/FP for later comparison
	ldr	x2, =gpr_in
	add	x2, x2, #64
	ldp	x8, x9, [x2], #16
	ldp	x10, x11, [x2], #16
	ldp	x12, x13, [x2], #16
	ldp	x14, x15, [x2], #16
	ldp	x16, x17, [x2], #16
	ldp	x18, x19, [x2], #16
	ldp	x20, x21, [x2], #16
	ldp	x22, x23, [x2], #16
	ldp	x24, x25, [x2], #16
	ldp	x26, x27, [x2], #16
	ldr	x28, [x2], #8
	str	x29, [x2], #8		// FP
	str	x30, [x2], #8		// LR

	// Load FPRs if we're not doing SVE
	cbnz	x0, 1f
	ldr	x2, =fpr_in
	ldp	q0, q1, [x2]
	ldp	q2, q3, [x2, #16 * 2]
	ldp	q4, q5, [x2, #16 * 4]
	ldp	q6, q7, [x2, #16 * 6]
	ldp	q8, q9, [x2, #16 * 8]
	ldp	q10, q11, [x2, #16 * 10]
	ldp	q12, q13, [x2, #16 * 12]
	ldp	q14, q15, [x2, #16 * 14]
	ldp	q16, q17, [x2, #16 * 16]
	ldp	q18, q19, [x2, #16 * 18]
	ldp	q20, q21, [x2, #16 * 20]
	ldp	q22, q23, [x2, #16 * 22]
	ldp	q24, q25, [x2, #16 * 24]
	ldp	q26, q27, [x2, #16 * 26]
	ldp	q28, q29, [x2, #16 * 28]
	ldp	q30, q31, [x2, #16 * 30]
1:

	// Load the SVE registers if we're doing SVE
	cbz	x0, 1f

	ldr	x2, =z_in
	ldr	z0, [x2, #0, MUL VL]
	ldr	z1, [x2, #1, MUL VL]
	ldr	z2, [x2, #2, MUL VL]
	ldr	z3, [x2, #3, MUL VL]
	ldr	z4, [x2, #4, MUL VL]
	ldr	z5, [x2, #5, MUL VL]
	ldr	z6, [x2, #6, MUL VL]
	ldr	z7, [x2, #7, MUL VL]
	ldr	z8, [x2, #8, MUL VL]
	ldr	z9, [x2, #9, MUL VL]
	ldr	z10, [x2, #10, MUL VL]
	ldr	z11, [x2, #11, MUL VL]
	ldr	z12, [x2, #12, MUL VL]
	ldr	z13, [x2, #13, MUL VL]
	ldr	z14, [x2, #14, MUL VL]
	ldr	z15, [x2, #15, MUL VL]
	ldr	z16, [x2, #16, MUL VL]
	ldr	z17, [x2, #17, MUL VL]
	ldr	z18, [x2, #18, MUL VL]
	ldr	z19, [x2, #19, MUL VL]
	ldr	z20, [x2, #20, MUL VL]
	ldr	z21, [x2, #21, MUL VL]
	ldr	z22, [x2, #22, MUL VL]
	ldr	z23, [x2, #23, MUL VL]
	ldr	z24, [x2, #24, MUL VL]
	ldr	z25, [x2, #25, MUL VL]
	ldr	z26, [x2, #26, MUL VL]
	ldr	z27, [x2, #27, MUL VL]
	ldr	z28, [x2, #28, MUL VL]
	ldr	z29, [x2, #29, MUL VL]
	ldr	z30, [x2, #30, MUL VL]
	ldr	z31, [x2, #31, MUL VL]

	ldr	x2, =ffr_in
	ldr	p0, [x2, #0]
	wrffr	p0.b

	ldr	x2, =p_in
	ldr	p0, [x2, #0, MUL VL]
	ldr	p1, [x2, #1, MUL VL]
	ldr	p2, [x2, #2, MUL VL]
	ldr	p3, [x2, #3, MUL VL]
	ldr	p4, [x2, #4, MUL VL]
	ldr	p5, [x2, #5, MUL VL]
	ldr	p6, [x2, #6, MUL VL]
	ldr	p7, [x2, #7, MUL VL]
	ldr	p8, [x2, #8, MUL VL]
	ldr	p9, [x2, #9, MUL VL]
	ldr	p10, [x2, #10, MUL VL]
	ldr	p11, [x2, #11, MUL VL]
	ldr	p12, [x2, #12, MUL VL]
	ldr	p13, [x2, #13, MUL VL]
	ldr	p14, [x2, #14, MUL VL]
	ldr	p15, [x2, #15, MUL VL]
1:

	// Do the syscall
	svc	#0

	// Save GPRs x8-x30
	ldr	x2, =gpr_out
	add	x2, x2, #64
	stp	x8, x9, [x2], #16
	stp	x10, x11, [x2], #16
	stp	x12, x13, [x2], #16
	stp	x14, x15, [x2], #16
	stp	x16, x17, [x2], #16
	stp	x18, x19, [x2], #16
	stp	x20, x21, [x2], #16
	stp	x22, x23, [x2], #16
	stp	x24, x25, [x2], #16
	stp	x26, x27, [x2], #16
	stp	x28, x29, [x2], #16
	str	x30, [x2]

	// Restore x0 and x1 for feature checks
	ldp	x0, x1, [sp, #16]

	// Save FPSIMD state
	ldr	x2, =fpr_out
	stp	q0, q1, [x2]
	stp	q2, q3, [x2, #16 * 2]
	stp	q4, q5, [x2, #16 * 4]
	stp	q6, q7, [x2, #16 * 6]
	stp	q8, q9, [x2, #16 * 8]
	stp	q10, q11, [x2, #16 * 10]
	stp	q12, q13, [x2, #16 * 12]
	stp	q14, q15, [x2, #16 * 14]
	stp	q16, q17, [x2, #16 * 16]
	stp	q18, q19, [x2, #16 * 18]
	stp	q20, q21, [x2, #16 * 20]
	stp	q22, q23, [x2, #16 * 22]
	stp	q24, q25, [x2, #16 * 24]
	stp	q26, q27, [x2, #16 * 26]
	stp	q28, q29, [x2, #16 * 28]
	stp	q30, q31, [x2, #16 * 30]

	// Save the SVE state if we have some
	cbz	x0, 1f

	ldr	x2, =z_out
	str	z0, [x2, #0, MUL VL]
	str	z1, [x2, #1, MUL VL]
	str	z2, [x2, #2, MUL VL]
	str	z3, [x2, #3, MUL VL]
	str	z4, [x2, #4, MUL VL]
	str	z5, [x2, #5, MUL VL]
	str	z6, [x2, #6, MUL VL]
	str	z7, [x2, #7, MUL VL]
	str	z8, [x2, #8, MUL VL]
	str	z9, [x2, #9, MUL VL]
	str	z10, [x2, #10, MUL VL]
	str	z11, [x2, #11, MUL VL]
	str	z12, [x2, #12, MUL VL]
	str	z13, [x2, #13, MUL VL]
	str	z14, [x2, #14, MUL VL]
	str	z15, [x2, #15, MUL VL]
	str	z16, [x2, #16, MUL VL]
	str	z17, [x2, #17, MUL VL]
	str	z18, [x2, #18, MUL VL]
	str	z19, [x2, #19, MUL VL]
	str	z20, [x2, #20, MUL VL]
	str	z21, [x2, #21, MUL VL]
	str	z22, [x2, #22, MUL VL]
	str	z23, [x2, #23, MUL VL]
	str	z24, [x2, #24, MUL VL]
	str	z25, [x2, #25, MUL VL]
	str	z26, [x2, #26, MUL VL]
	str	z27, [x2, #27, MUL VL]
	str	z28, [x2, #28, MUL VL]
	str	z29, [x2, #29, MUL VL]
	str	z30, [x2, #30, MUL VL]
	str	z31, [x2, #31, MUL VL]

	ldr	x2, =p_out
	str	p0, [x2, #0, MUL VL]
	str	p1, [x2, #1, MUL VL]
	str	p2, [x2, #2, MUL VL]
	str	p3, [x2, #3, MUL VL]
	str	p4, [x2, #4, MUL VL]
	str	p5, [x2, #5, MUL VL]
	str	p6, [x2, #6, MUL VL]
	str	p7, [x2, #7, MUL VL]
	str	p8, [x2, #8, MUL VL]
	str	p9, [x2, #9, MUL VL]
	str	p10, [x2, #10, MUL VL]
	str	p11, [x2, #11, MUL VL]
	str	p12, [x2, #12, MUL VL]
	str	p13, [x2, #13, MUL VL]
	str	p14, [x2, #14, MUL VL]
	str	p15, [x2, #15, MUL VL]

	ldr	x2, =ffr_out
	rdffr	p0.b
	str	p0, [x2, #0]
1:

	// Restore callee saved registers x19-x30
	ldp	x19, x20, [sp, #32]
	ldp	x21, x22, [sp, #48]
	ldp	x23, x24, [sp, #64]
	ldp	x25, x26, [sp, #80]
	ldp	x27, x28, [sp, #96]
	ldp	x29, x30, [sp], #112

	ret
tools/testing/selftests/arm64/abi/syscall-abi.c (new file, truncated)
@@ -0,0 +1,318 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
 * Copyright (C) 2021 ARM Limited.
 */

#include <errno.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/auxv.h>
#include <sys/prctl.h>
#include <asm/hwcap.h>
#include <asm/sigcontext.h>
#include <asm/unistd.h>

#include "../../kselftest.h"

#define ARRAY_SIZE(a) (sizeof(a) / sizeof(a[0]))
#define NUM_VL ((SVE_VQ_MAX - SVE_VQ_MIN) + 1)

extern void do_syscall(int sve_vl);

static void fill_random(void *buf, size_t size)
{
	int i;
	uint32_t *lbuf = buf;

	/* random() returns a 32 bit number regardless of the size of long */
	for (i = 0; i < size / sizeof(uint32_t); i++)
		lbuf[i] = random();
}

/*
 * We also repeat the test for several syscalls to try to expose different
 * behaviour.
 */
static struct syscall_cfg {
	int syscall_nr;
	const char *name;
} syscalls[] = {
	{ __NR_getpid,		"getpid()" },
	{ __NR_sched_yield,	"sched_yield()" },
};

#define NUM_GPR 31
uint64_t gpr_in[NUM_GPR];
uint64_t gpr_out[NUM_GPR];

static void setup_gpr(struct syscall_cfg *cfg, int sve_vl)
{
	fill_random(gpr_in, sizeof(gpr_in));
	gpr_in[8] = cfg->syscall_nr;
	memset(gpr_out, 0, sizeof(gpr_out));
}

static int check_gpr(struct syscall_cfg *cfg, int sve_vl)
{
	int errors = 0;
	int i;

	/*
	 * GPR x0-x8 may be clobbered, and all others should be preserved.
	 */
	for (i = 9; i < ARRAY_SIZE(gpr_in); i++) {
		if (gpr_in[i] != gpr_out[i]) {
			ksft_print_msg("%s SVE VL %d mismatch in GPR %d: %llx != %llx\n",
				       cfg->name, sve_vl, i,
				       gpr_in[i], gpr_out[i]);
			errors++;
		}
	}

	return errors;
}

#define NUM_FPR 32
uint64_t fpr_in[NUM_FPR * 2];
uint64_t fpr_out[NUM_FPR * 2];

static void setup_fpr(struct syscall_cfg *cfg, int sve_vl)
{
	fill_random(fpr_in, sizeof(fpr_in));
	memset(fpr_out, 0, sizeof(fpr_out));
}

static int check_fpr(struct syscall_cfg *cfg, int sve_vl)
{
	int errors = 0;
	int i;

	if (!sve_vl) {
		for (i = 0; i < ARRAY_SIZE(fpr_in); i++) {
			if (fpr_in[i] != fpr_out[i]) {
				ksft_print_msg("%s Q%d/%d mismatch %llx != %llx\n",
					       cfg->name,
					       i / 2, i % 2,
					       fpr_in[i], fpr_out[i]);
				errors++;
			}
		}
	}

	return errors;
}

static uint8_t z_zero[__SVE_ZREG_SIZE(SVE_VQ_MAX)];
uint8_t z_in[SVE_NUM_ZREGS * __SVE_ZREG_SIZE(SVE_VQ_MAX)];
uint8_t z_out[SVE_NUM_ZREGS * __SVE_ZREG_SIZE(SVE_VQ_MAX)];

static void setup_z(struct syscall_cfg *cfg, int sve_vl)
{
	fill_random(z_in, sizeof(z_in));
	fill_random(z_out, sizeof(z_out));
}

static int check_z(struct syscall_cfg *cfg, int sve_vl)
{
	size_t reg_size = sve_vl;
	int errors = 0;
	int i;

	if (!sve_vl)
		return 0;

	/*
|
||||
* After a syscall the low 128 bits of the Z registers should
|
||||
* be preserved and the rest be zeroed or preserved.
|
||||
*/
|
||||
for (i = 0; i < SVE_NUM_ZREGS; i++) {
|
||||
void *in = &z_in[reg_size * i];
|
||||
void *out = &z_out[reg_size * i];
|
||||
|
||||
if (memcmp(in, out, SVE_VQ_BYTES) != 0) {
|
||||
ksft_print_msg("%s SVE VL %d Z%d low 128 bits changed\n",
|
||||
cfg->name, sve_vl, i);
|
||||
errors++;
|
||||
}
|
||||
}
|
||||
|
||||
return errors;
|
||||
}
|
||||
|
||||
uint8_t p_in[SVE_NUM_PREGS * __SVE_PREG_SIZE(SVE_VQ_MAX)];
|
||||
uint8_t p_out[SVE_NUM_PREGS * __SVE_PREG_SIZE(SVE_VQ_MAX)];
|
||||
|
||||
static void setup_p(struct syscall_cfg *cfg, int sve_vl)
|
||||
{
|
||||
fill_random(p_in, sizeof(p_in));
|
||||
fill_random(p_out, sizeof(p_out));
|
||||
}
|
||||
|
||||
static int check_p(struct syscall_cfg *cfg, int sve_vl)
|
||||
{
|
||||
size_t reg_size = sve_vq_from_vl(sve_vl) * 2; /* 1 bit per VL byte */
|
||||
|
||||
int errors = 0;
|
||||
int i;
|
||||
|
||||
if (!sve_vl)
|
||||
return 0;
|
||||
|
||||
/* After a syscall the P registers should be preserved or zeroed */
|
||||
for (i = 0; i < SVE_NUM_PREGS * reg_size; i++)
|
||||
if (p_out[i] && (p_in[i] != p_out[i]))
|
||||
errors++;
|
||||
if (errors)
|
||||
ksft_print_msg("%s SVE VL %d predicate registers non-zero\n",
|
||||
cfg->name, sve_vl);
|
||||
|
||||
return errors;
|
||||
}
|
||||
|
||||
uint8_t ffr_in[__SVE_PREG_SIZE(SVE_VQ_MAX)];
|
||||
uint8_t ffr_out[__SVE_PREG_SIZE(SVE_VQ_MAX)];
|
||||
|
||||
static void setup_ffr(struct syscall_cfg *cfg, int sve_vl)
|
||||
{
|
||||
/*
|
||||
* It is only valid to set a contiguous set of bits starting
|
||||
* at 0. For now since we're expecting this to be cleared by
|
||||
* a syscall just set all bits.
|
||||
*/
|
||||
memset(ffr_in, 0xff, sizeof(ffr_in));
|
||||
fill_random(ffr_out, sizeof(ffr_out));
|
||||
}
|
||||
|
||||
static int check_ffr(struct syscall_cfg *cfg, int sve_vl)
|
||||
{
|
||||
size_t reg_size = sve_vq_from_vl(sve_vl) * 2; /* 1 bit per VL byte */
|
||||
int errors = 0;
|
||||
int i;
|
||||
|
||||
if (!sve_vl)
|
||||
return 0;
|
||||
|
||||
/* After a syscall the P registers should be preserved or zeroed */
|
||||
for (i = 0; i < reg_size; i++)
|
||||
if (ffr_out[i] && (ffr_in[i] != ffr_out[i]))
|
||||
errors++;
|
||||
if (errors)
|
||||
ksft_print_msg("%s SVE VL %d FFR non-zero\n",
|
||||
cfg->name, sve_vl);
|
||||
|
||||
return errors;
|
||||
}
|
||||
|
||||
typedef void (*setup_fn)(struct syscall_cfg *cfg, int sve_vl);
|
||||
typedef int (*check_fn)(struct syscall_cfg *cfg, int sve_vl);
|
||||
|
||||
/*
|
||||
* Each set of registers has a setup function which is called before
|
||||
* the syscall to fill values in a global variable for loading by the
|
||||
* test code and a check function which validates that the results are
|
||||
* as expected. Vector lengths are passed everywhere, a vector length
|
||||
* of 0 should be treated as do not test.
|
||||
*/
|
||||
static struct {
|
||||
setup_fn setup;
|
||||
check_fn check;
|
||||
} regset[] = {
|
||||
{ setup_gpr, check_gpr },
|
||||
{ setup_fpr, check_fpr },
|
||||
{ setup_z, check_z },
|
||||
{ setup_p, check_p },
|
||||
{ setup_ffr, check_ffr },
|
||||
};
|
||||
|
||||
static bool do_test(struct syscall_cfg *cfg, int sve_vl)
|
||||
{
|
||||
int errors = 0;
|
||||
int i;
|
||||
|
||||
for (i = 0; i < ARRAY_SIZE(regset); i++)
|
||||
regset[i].setup(cfg, sve_vl);
|
||||
|
||||
do_syscall(sve_vl);
|
||||
|
||||
for (i = 0; i < ARRAY_SIZE(regset); i++)
|
||||
errors += regset[i].check(cfg, sve_vl);
|
||||
|
||||
return errors == 0;
|
||||
}
|
||||
|
||||
static void test_one_syscall(struct syscall_cfg *cfg)
|
||||
{
|
||||
int sve_vq, sve_vl;
|
||||
|
||||
/* FPSIMD only case */
|
||||
ksft_test_result(do_test(cfg, 0),
|
||||
"%s FPSIMD\n", cfg->name);
|
||||
|
||||
if (!(getauxval(AT_HWCAP) & HWCAP_SVE))
|
||||
return;
|
||||
|
||||
for (sve_vq = SVE_VQ_MAX; sve_vq > 0; --sve_vq) {
|
||||
sve_vl = prctl(PR_SVE_SET_VL, sve_vq * 16);
|
||||
if (sve_vl == -1)
|
||||
ksft_exit_fail_msg("PR_SVE_SET_VL failed: %s (%d)\n",
|
||||
strerror(errno), errno);
|
||||
|
||||
sve_vl &= PR_SVE_VL_LEN_MASK;
|
||||
|
||||
if (sve_vq != sve_vq_from_vl(sve_vl))
|
||||
sve_vq = sve_vq_from_vl(sve_vl);
|
||||
|
||||
ksft_test_result(do_test(cfg, sve_vl),
|
||||
"%s SVE VL %d\n", cfg->name, sve_vl);
|
||||
}
|
||||
}
|
||||
|
||||
int sve_count_vls(void)
|
||||
{
|
||||
unsigned int vq;
|
||||
int vl_count = 0;
|
||||
int vl;
|
||||
|
||||
if (!(getauxval(AT_HWCAP) & HWCAP_SVE))
|
||||
return 0;
|
||||
|
||||
/*
|
||||
* Enumerate up to SVE_VQ_MAX vector lengths
|
||||
*/
|
||||
for (vq = SVE_VQ_MAX; vq > 0; --vq) {
|
||||
vl = prctl(PR_SVE_SET_VL, vq * 16);
|
||||
if (vl == -1)
|
||||
ksft_exit_fail_msg("PR_SVE_SET_VL failed: %s (%d)\n",
|
||||
strerror(errno), errno);
|
||||
|
||||
vl &= PR_SVE_VL_LEN_MASK;
|
||||
|
||||
if (vq != sve_vq_from_vl(vl))
|
||||
vq = sve_vq_from_vl(vl);
|
||||
|
||||
vl_count++;
|
||||
}
|
||||
|
||||
return vl_count;
|
||||
}
|
||||
|
||||
int main(void)
|
||||
{
|
||||
int i;
|
||||
|
||||
srandom(getpid());
|
||||
|
||||
ksft_print_header();
|
||||
ksft_set_plan(ARRAY_SIZE(syscalls) * (sve_count_vls() + 1));
|
||||
|
||||
for (i = 0; i < ARRAY_SIZE(syscalls); i++)
|
||||
test_one_syscall(&syscalls[i]);
|
||||
|
||||
ksft_print_cnts();
|
||||
|
||||
return 0;
|
||||
}
|
tools/testing/selftests/arm64/fp/.gitignore (vendored, +1 line)
@@ -1,3 +1,4 @@
fp-pidbench
fpsimd-test
rdvl-sve
sve-probe-vls
@@ -2,13 +2,15 @@

CFLAGS += -I../../../../../usr/include/
TEST_GEN_PROGS := sve-ptrace sve-probe-vls vec-syscfg
TEST_PROGS_EXTENDED := fpsimd-test fpsimd-stress \
TEST_PROGS_EXTENDED := fp-pidbench fpsimd-test fpsimd-stress \
	rdvl-sve \
	sve-test sve-stress \
	vlset

all: $(TEST_GEN_PROGS) $(TEST_PROGS_EXTENDED)

fp-pidbench: fp-pidbench.S asm-utils.o
	$(CC) -nostdlib $^ -o $@
fpsimd-test: fpsimd-test.o asm-utils.o
	$(CC) -nostdlib $^ -o $@
rdvl-sve: rdvl-sve.o rdvl.o
tools/testing/selftests/arm64/fp/fp-pidbench.S (new file, 71 lines)
@@ -0,0 +1,71 @@
// SPDX-License-Identifier: GPL-2.0-only
// Copyright (C) 2021 ARM Limited.
// Original author: Mark Brown <broonie@kernel.org>
//
// Trivial syscall overhead benchmark.
//
// This is implemented in asm to ensure that we don't have any issues with
// system libraries using instructions that disrupt the test.

#include <asm/unistd.h>
#include "assembler.h"

.arch_extension sve

.macro test_loop per_loop
	mov	x10, x20
	mov	x8, #__NR_getpid
	mrs	x11, CNTVCT_EL0
1:
	\per_loop
	svc	#0
	sub	x10, x10, #1
	cbnz	x10, 1b

	mrs	x12, CNTVCT_EL0
	sub	x0, x12, x11
	bl	putdec
	puts	"\n"
.endm

// Main program entry point
.globl _start
function _start
_start:
	puts	"Iterations per test: "
	mov	x20, #10000
	lsl	x20, x20, #8
	mov	x0, x20
	bl	putdec
	puts	"\n"

	// Test having never used SVE
	puts	"No SVE: "
	test_loop

	// Check for SVE support - should use hwcap but that's hard in asm
	mrs	x0, ID_AA64PFR0_EL1
	ubfx	x0, x0, #32, #4
	cbnz	x0, 1f
	puts	"System does not support SVE\n"
	b	out
1:

	// Execute a SVE instruction
	puts	"SVE VL: "
	rdvl	x0, #8
	bl	putdec
	puts	"\n"

	puts	"SVE used once: "
	test_loop

	// Use SVE per syscall
	puts	"SVE used per syscall: "
	test_loop "rdvl x0, #8"

	// And we're done
out:
	mov	x0, #0
	mov	x8, #__NR_exit
	svc	#0
@@ -21,16 +21,37 @@

#include "../../kselftest.h"

#define VL_TESTS (((SVE_VQ_MAX - SVE_VQ_MIN) + 1) * 3)
#define FPSIMD_TESTS 5

#define EXPECTED_TESTS (VL_TESTS + FPSIMD_TESTS)
#define ARRAY_SIZE(a) (sizeof(a) / sizeof(a[0]))

/* <linux/elf.h> and <sys/auxv.h> don't like each other, so: */
#ifndef NT_ARM_SVE
#define NT_ARM_SVE 0x405
#endif

struct vec_type {
	const char *name;
	unsigned long hwcap_type;
	unsigned long hwcap;
	int regset;
	int prctl_set;
};

static const struct vec_type vec_types[] = {
	{
		.name = "SVE",
		.hwcap_type = AT_HWCAP,
		.hwcap = HWCAP_SVE,
		.regset = NT_ARM_SVE,
		.prctl_set = PR_SVE_SET_VL,
	},
};

#define VL_TESTS (((SVE_VQ_MAX - SVE_VQ_MIN) + 1) * 3)
#define FLAG_TESTS 2
#define FPSIMD_TESTS 3

#define EXPECTED_TESTS ((VL_TESTS + FLAG_TESTS + FPSIMD_TESTS) * ARRAY_SIZE(vec_types))

static void fill_buf(char *buf, size_t size)
{
	int i;
@@ -59,7 +80,8 @@ static int get_fpsimd(pid_t pid, struct user_fpsimd_state *fpsimd)
	return ptrace(PTRACE_GETREGSET, pid, NT_PRFPREG, &iov);
}

static struct user_sve_header *get_sve(pid_t pid, void **buf, size_t *size)
static struct user_sve_header *get_sve(pid_t pid, const struct vec_type *type,
				       void **buf, size_t *size)
{
	struct user_sve_header *sve;
	void *p;
@@ -80,7 +102,7 @@ static struct user_sve_header *get_sve(pid_t pid, void **buf, size_t *size)

	iov.iov_base = *buf;
	iov.iov_len = sz;
	if (ptrace(PTRACE_GETREGSET, pid, NT_ARM_SVE, &iov))
	if (ptrace(PTRACE_GETREGSET, pid, type->regset, &iov))
		goto error;

	sve = *buf;
@@ -96,17 +118,18 @@ error:
	return NULL;
}

static int set_sve(pid_t pid, const struct user_sve_header *sve)
static int set_sve(pid_t pid, const struct vec_type *type,
		   const struct user_sve_header *sve)
{
	struct iovec iov;

	iov.iov_base = (void *)sve;
	iov.iov_len = sve->size;
	return ptrace(PTRACE_SETREGSET, pid, NT_ARM_SVE, &iov);
	return ptrace(PTRACE_SETREGSET, pid, type->regset, &iov);
}

/* Validate setting and getting the inherit flag */
static void ptrace_set_get_inherit(pid_t child)
static void ptrace_set_get_inherit(pid_t child, const struct vec_type *type)
{
	struct user_sve_header sve;
	struct user_sve_header *new_sve = NULL;
@@ -118,9 +141,10 @@ static void ptrace_set_get_inherit(pid_t child)
	sve.size = sizeof(sve);
	sve.vl = sve_vl_from_vq(SVE_VQ_MIN);
	sve.flags = SVE_PT_VL_INHERIT;
	ret = set_sve(child, &sve);
	ret = set_sve(child, type, &sve);
	if (ret != 0) {
		ksft_test_result_fail("Failed to set SVE_PT_VL_INHERIT\n");
		ksft_test_result_fail("Failed to set %s SVE_PT_VL_INHERIT\n",
				      type->name);
		return;
	}

@@ -128,35 +152,39 @@ static void ptrace_set_get_inherit(pid_t child)
	 * Read back the new register state and verify that we have
	 * set the flags we expected.
	 */
	if (!get_sve(child, (void **)&new_sve, &new_sve_size)) {
		ksft_test_result_fail("Failed to read SVE flags\n");
	if (!get_sve(child, type, (void **)&new_sve, &new_sve_size)) {
		ksft_test_result_fail("Failed to read %s SVE flags\n",
				      type->name);
		return;
	}

	ksft_test_result(new_sve->flags & SVE_PT_VL_INHERIT,
			 "SVE_PT_VL_INHERIT set\n");
			 "%s SVE_PT_VL_INHERIT set\n", type->name);

	/* Now clear */
	sve.flags &= ~SVE_PT_VL_INHERIT;
	ret = set_sve(child, &sve);
	ret = set_sve(child, type, &sve);
	if (ret != 0) {
		ksft_test_result_fail("Failed to clear SVE_PT_VL_INHERIT\n");
		ksft_test_result_fail("Failed to clear %s SVE_PT_VL_INHERIT\n",
				      type->name);
		return;
	}

	if (!get_sve(child, (void **)&new_sve, &new_sve_size)) {
		ksft_test_result_fail("Failed to read SVE flags\n");
	if (!get_sve(child, type, (void **)&new_sve, &new_sve_size)) {
		ksft_test_result_fail("Failed to read %s SVE flags\n",
				      type->name);
		return;
	}

	ksft_test_result(!(new_sve->flags & SVE_PT_VL_INHERIT),
			 "SVE_PT_VL_INHERIT cleared\n");
			 "%s SVE_PT_VL_INHERIT cleared\n", type->name);

	free(new_sve);
}

/* Validate attempting to set the specfied VL via ptrace */
static void ptrace_set_get_vl(pid_t child, unsigned int vl, bool *supported)
static void ptrace_set_get_vl(pid_t child, const struct vec_type *type,
			      unsigned int vl, bool *supported)
{
	struct user_sve_header sve;
	struct user_sve_header *new_sve = NULL;
@@ -166,10 +194,10 @@ static void ptrace_set_get_vl(pid_t child, unsigned int vl, bool *supported)
	*supported = false;

	/* Check if the VL is supported in this process */
	prctl_vl = prctl(PR_SVE_SET_VL, vl);
	prctl_vl = prctl(type->prctl_set, vl);
	if (prctl_vl == -1)
		ksft_exit_fail_msg("prctl(PR_SVE_SET_VL) failed: %s (%d)\n",
				   strerror(errno), errno);
		ksft_exit_fail_msg("prctl(PR_%s_SET_VL) failed: %s (%d)\n",
				   type->name, strerror(errno), errno);

	/* If the VL is not supported then a supported VL will be returned */
	*supported = (prctl_vl == vl);
@@ -178,9 +206,10 @@ static void ptrace_set_get_vl(pid_t child, unsigned int vl, bool *supported)
	memset(&sve, 0, sizeof(sve));
	sve.size = sizeof(sve);
	sve.vl = vl;
	ret = set_sve(child, &sve);
	ret = set_sve(child, type, &sve);
	if (ret != 0) {
		ksft_test_result_fail("Failed to set VL %u\n", vl);
		ksft_test_result_fail("Failed to set %s VL %u\n",
				      type->name, vl);
		return;
	}

@@ -188,12 +217,14 @@ static void ptrace_set_get_vl(pid_t child, unsigned int vl, bool *supported)
	 * Read back the new register state and verify that we have the
	 * same VL that we got from prctl() on ourselves.
	 */
	if (!get_sve(child, (void **)&new_sve, &new_sve_size)) {
		ksft_test_result_fail("Failed to read VL %u\n", vl);
	if (!get_sve(child, type, (void **)&new_sve, &new_sve_size)) {
		ksft_test_result_fail("Failed to read %s VL %u\n",
				      type->name, vl);
		return;
	}

	ksft_test_result(new_sve->vl = prctl_vl, "Set VL %u\n", vl);
	ksft_test_result(new_sve->vl = prctl_vl, "Set %s VL %u\n",
			 type->name, vl);

	free(new_sve);
}
@@ -209,7 +240,7 @@ static void check_u32(unsigned int vl, const char *reg,
}

/* Access the FPSIMD registers via the SVE regset */
static void ptrace_sve_fpsimd(pid_t child)
static void ptrace_sve_fpsimd(pid_t child, const struct vec_type *type)
{
	void *svebuf = NULL;
	size_t svebufsz = 0;
@@ -219,17 +250,18 @@ static void ptrace_sve_fpsimd(pid_t child)
	unsigned char *p;

	/* New process should start with FPSIMD registers only */
	sve = get_sve(child, &svebuf, &svebufsz);
	sve = get_sve(child, type, &svebuf, &svebufsz);
	if (!sve) {
		ksft_test_result_fail("get_sve: %s\n", strerror(errno));
		ksft_test_result_fail("get_sve(%s): %s\n",
				      type->name, strerror(errno));

		return;
	} else {
		ksft_test_result_pass("get_sve(FPSIMD)\n");
		ksft_test_result_pass("get_sve(%s FPSIMD)\n", type->name);
	}

	ksft_test_result((sve->flags & SVE_PT_REGS_MASK) == SVE_PT_REGS_FPSIMD,
			 "Set FPSIMD registers\n");
			 "Set FPSIMD registers via %s\n", type->name);
	if ((sve->flags & SVE_PT_REGS_MASK) != SVE_PT_REGS_FPSIMD)
		goto out;

@@ -243,9 +275,9 @@ static void ptrace_sve_fpsimd(pid_t child)
			p[j] = j;
	}

	if (set_sve(child, sve)) {
		ksft_test_result_fail("set_sve(FPSIMD): %s\n",
				      strerror(errno));
	if (set_sve(child, type, sve)) {
		ksft_test_result_fail("set_sve(%s FPSIMD): %s\n",
				      type->name, strerror(errno));

		goto out;
	}
@@ -257,16 +289,20 @@ static void ptrace_sve_fpsimd(pid_t child)
		goto out;
	}
	if (memcmp(fpsimd, &new_fpsimd, sizeof(*fpsimd)) == 0)
		ksft_test_result_pass("get_fpsimd() gave same state\n");
		ksft_test_result_pass("%s get_fpsimd() gave same state\n",
				      type->name);
	else
		ksft_test_result_fail("get_fpsimd() gave different state\n");
		ksft_test_result_fail("%s get_fpsimd() gave different state\n",
				      type->name);

out:
	free(svebuf);
}

/* Validate attempting to set SVE data and read SVE data */
static void ptrace_set_sve_get_sve_data(pid_t child, unsigned int vl)
static void ptrace_set_sve_get_sve_data(pid_t child,
					const struct vec_type *type,
					unsigned int vl)
{
	void *write_buf;
	void *read_buf = NULL;
@@ -281,8 +317,8 @@ static void ptrace_set_sve_get_sve_data(pid_t child, unsigned int vl)
	data_size = SVE_PT_SVE_OFFSET + SVE_PT_SVE_SIZE(vq, SVE_PT_REGS_SVE);
	write_buf = malloc(data_size);
	if (!write_buf) {
		ksft_test_result_fail("Error allocating %d byte buffer for VL %u\n",
				      data_size, vl);
		ksft_test_result_fail("Error allocating %d byte buffer for %s VL %u\n",
				      data_size, type->name, vl);
		return;
	}
	write_sve = write_buf;
@@ -306,23 +342,26 @@ static void ptrace_set_sve_get_sve_data(pid_t child, unsigned int vl)

	/* TODO: Generate a valid FFR pattern */

	ret = set_sve(child, write_sve);
	ret = set_sve(child, type, write_sve);
	if (ret != 0) {
		ksft_test_result_fail("Failed to set VL %u data\n", vl);
		ksft_test_result_fail("Failed to set %s VL %u data\n",
				      type->name, vl);
		goto out;
	}

	/* Read the data back */
	if (!get_sve(child, (void **)&read_buf, &read_sve_size)) {
		ksft_test_result_fail("Failed to read VL %u data\n", vl);
	if (!get_sve(child, type, (void **)&read_buf, &read_sve_size)) {
		ksft_test_result_fail("Failed to read %s VL %u data\n",
				      type->name, vl);
		goto out;
	}
	read_sve = read_buf;

	/* We might read more data if there's extensions we don't know */
	if (read_sve->size < write_sve->size) {
		ksft_test_result_fail("Wrote %d bytes, only read %d\n",
				      write_sve->size, read_sve->size);
		ksft_test_result_fail("%s wrote %d bytes, only read %d\n",
				      type->name, write_sve->size,
				      read_sve->size);
		goto out_read;
	}

@@ -349,7 +388,8 @@ static void ptrace_set_sve_get_sve_data(pid_t child, unsigned int vl)
	check_u32(vl, "FPCR", write_buf + SVE_PT_SVE_FPCR_OFFSET(vq),
		  read_buf + SVE_PT_SVE_FPCR_OFFSET(vq), &errors);

	ksft_test_result(errors == 0, "Set and get SVE data for VL %u\n", vl);
	ksft_test_result(errors == 0, "Set and get %s data for VL %u\n",
			 type->name, vl);

out_read:
	free(read_buf);
@@ -358,7 +398,9 @@ out:
}

/* Validate attempting to set SVE data and read SVE data */
static void ptrace_set_sve_get_fpsimd_data(pid_t child, unsigned int vl)
static void ptrace_set_sve_get_fpsimd_data(pid_t child,
					   const struct vec_type *type,
					   unsigned int vl)
{
	void *write_buf;
	struct user_sve_header *write_sve;
@@ -376,8 +418,8 @@ static void ptrace_set_sve_get_fpsimd_data(pid_t child, unsigned int vl)
	data_size = SVE_PT_SVE_OFFSET + SVE_PT_SVE_SIZE(vq, SVE_PT_REGS_SVE);
	write_buf = malloc(data_size);
	if (!write_buf) {
		ksft_test_result_fail("Error allocating %d byte buffer for VL %u\n",
				      data_size, vl);
		ksft_test_result_fail("Error allocating %d byte buffer for %s VL %u\n",
				      data_size, type->name, vl);
		return;
	}
	write_sve = write_buf;
@@ -395,16 +437,17 @@ static void ptrace_set_sve_get_fpsimd_data(pid_t child, unsigned int vl)
	fill_buf(write_buf + SVE_PT_SVE_FPSR_OFFSET(vq), SVE_PT_SVE_FPSR_SIZE);
	fill_buf(write_buf + SVE_PT_SVE_FPCR_OFFSET(vq), SVE_PT_SVE_FPCR_SIZE);

	ret = set_sve(child, write_sve);
	ret = set_sve(child, type, write_sve);
	if (ret != 0) {
		ksft_test_result_fail("Failed to set VL %u data\n", vl);
		ksft_test_result_fail("Failed to set %s VL %u data\n",
				      type->name, vl);
		goto out;
	}

	/* Read the data back */
	if (get_fpsimd(child, &fpsimd_state)) {
		ksft_test_result_fail("Failed to read VL %u FPSIMD data\n",
				      vl);
		ksft_test_result_fail("Failed to read %s VL %u FPSIMD data\n",
				      type->name, vl);
		goto out;
	}

@@ -419,7 +462,8 @@ static void ptrace_set_sve_get_fpsimd_data(pid_t child, unsigned int vl)
			       sizeof(tmp));

		if (tmp != fpsimd_state.vregs[i]) {
			printf("# Mismatch in FPSIMD for VL %u Z%d\n", vl, i);
			printf("# Mismatch in FPSIMD for %s VL %u Z%d\n",
			       type->name, vl, i);
			errors++;
		}
	}
@@ -429,8 +473,8 @@ static void ptrace_set_sve_get_fpsimd_data(pid_t child, unsigned int vl)
	check_u32(vl, "FPCR", write_buf + SVE_PT_SVE_FPCR_OFFSET(vq),
		  &fpsimd_state.fpcr, &errors);

	ksft_test_result(errors == 0, "Set and get FPSIMD data for VL %u\n",
			 vl);
	ksft_test_result(errors == 0, "Set and get FPSIMD data for %s VL %u\n",
			 type->name, vl);

out:
	free(write_buf);
@@ -440,7 +484,7 @@ static int do_parent(pid_t child)
{
	int ret = EXIT_FAILURE;
	pid_t pid;
	int status;
	int status, i;
	siginfo_t si;
	unsigned int vq, vl;
	bool vl_supported;
@@ -499,26 +543,47 @@ static int do_parent(pid_t child)
		}
	}

	/* FPSIMD via SVE regset */
	ptrace_sve_fpsimd(child);

	/* prctl() flags */
	ptrace_set_get_inherit(child);

	/* Step through every possible VQ */
	for (vq = SVE_VQ_MIN; vq <= SVE_VQ_MAX; vq++) {
		vl = sve_vl_from_vq(vq);

		/* First, try to set this vector length */
		ptrace_set_get_vl(child, vl, &vl_supported);

		/* If the VL is supported validate data set/get */
		if (vl_supported) {
			ptrace_set_sve_get_sve_data(child, vl);
			ptrace_set_sve_get_fpsimd_data(child, vl);
	for (i = 0; i < ARRAY_SIZE(vec_types); i++) {
		/* FPSIMD via SVE regset */
		if (getauxval(vec_types[i].hwcap_type) & vec_types[i].hwcap) {
			ptrace_sve_fpsimd(child, &vec_types[i]);
		} else {
			ksft_test_result_skip("set SVE get SVE for VL %d\n", vl);
			ksft_test_result_skip("set SVE get FPSIMD for VL %d\n", vl);
			ksft_test_result_skip("%s FPSIMD get via SVE\n",
					      vec_types[i].name);
			ksft_test_result_skip("%s FPSIMD set via SVE\n",
					      vec_types[i].name);
			ksft_test_result_skip("%s set read via FPSIMD\n",
					      vec_types[i].name);
		}

		/* prctl() flags */
		ptrace_set_get_inherit(child, &vec_types[i]);

		/* Step through every possible VQ */
		for (vq = SVE_VQ_MIN; vq <= SVE_VQ_MAX; vq++) {
			vl = sve_vl_from_vq(vq);

			/* First, try to set this vector length */
			if (getauxval(vec_types[i].hwcap_type) &
			    vec_types[i].hwcap) {
				ptrace_set_get_vl(child, &vec_types[i], vl,
						  &vl_supported);
			} else {
				ksft_test_result_skip("%s get/set VL %d\n",
						      vec_types[i].name, vl);
				vl_supported = false;
			}

			/* If the VL is supported validate data set/get */
			if (vl_supported) {
				ptrace_set_sve_get_sve_data(child, &vec_types[i], vl);
				ptrace_set_sve_get_fpsimd_data(child, &vec_types[i], vl);
			} else {
				ksft_test_result_skip("%s set SVE get SVE for VL %d\n",
						      vec_types[i].name, vl);
				ksft_test_result_skip("%s set SVE get FPSIMD for VL %d\n",
						      vec_types[i].name, vl);
			}
		}
	}
@@ -310,14 +310,12 @@ int test_setup(struct tdescr *td)

int test_run(struct tdescr *td)
{
	if (td->sig_trig) {
		if (td->trigger)
			return td->trigger(td);
		else
			return default_trigger(td);
	} else {
	if (td->trigger)
		return td->trigger(td);
	else if (td->sig_trig)
		return default_trigger(td);
	else
		return td->run(td, NULL, NULL);
	}
}

void test_result(struct tdescr *td)