/* SPDX-License-Identifier: GPL-2.0-only */
/*
 * Low-level exception handling code
 *
 * Copyright (C) 2012 ARM Ltd.
 * Authors:	Catalin Marinas <catalin.marinas@arm.com>
 *		Will Deacon <will.deacon@arm.com>
 */
#include <linux/arm-smccc.h>
#include <linux/init.h>
#include <linux/linkage.h>

#include <asm/alternative.h>
#include <asm/assembler.h>
#include <asm/asm-offsets.h>
#include <asm/asm_pointer_auth.h>
#include <asm/bug.h>
#include <asm/cpufeature.h>
#include <asm/errno.h>
#include <asm/esr.h>
#include <asm/irq.h>
#include <asm/memory.h>
#include <asm/mmu.h>
#include <asm/processor.h>
#include <asm/ptrace.h>
#include <asm/scs.h>
#include <asm/thread_info.h>
#include <asm/asm-uaccess.h>
#include <asm/unistd.h>
/*
 * Context tracking subsystem. Used to instrument transitions
 * between user and kernel mode.
 */
	.macro ct_user_exit_irqoff
#ifdef CONFIG_CONTEXT_TRACKING
	bl	enter_from_user_mode
#endif
	.endm

	.macro ct_user_enter
#ifdef CONFIG_CONTEXT_TRACKING
	bl	context_tracking_user_enter
#endif
	.endm

	.macro	clear_gp_regs
	.irp	n,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29
	mov	x\n, xzr
	.endr
	.endm
2012-03-05 11:49:27 +00:00
/ *
* Bad A b o r t n u m b e r s
* - - - - - - - - - - - - - - - - -
* /
# define B A D _ S Y N C 0
# define B A D _ I R Q 1
# define B A D _ F I Q 2
# define B A D _ E R R O R 3
	.macro kernel_ventry, el, label, regsize = 64
	.align 7
#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
	.if	\el == 0
alternative_if ARM64_UNMAP_KERNEL_AT_EL0
	.if	\regsize == 64
	mrs	x30, tpidrro_el0
	msr	tpidrro_el0, xzr
	.else
	mov	x30, xzr
	.endif
alternative_else_nop_endif
	.endif
#endif

	sub	sp, sp, #S_FRAME_SIZE
#ifdef CONFIG_VMAP_STACK
	/*
	 * Test whether the SP has overflowed, without corrupting a GPR.
	 * Task and IRQ stacks are aligned so that SP & (1 << THREAD_SHIFT)
	 * should always be zero.
	 */
	add	sp, sp, x0			// sp' = sp + x0
	sub	x0, sp, x0			// x0' = sp' - x0 = (sp + x0) - x0 = sp
	tbnz	x0, #THREAD_SHIFT, 0f
	sub	x0, sp, x0			// x0'' = sp' - x0' = (sp + x0) - sp = x0
	sub	sp, sp, x0			// sp'' = sp' - x0 = (sp + x0) - x0 = sp
	b	el\()\el\()_\label
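	/*
	 * Worked example (illustrative): with a 16K stack aligned to 32K,
	 * THREAD_SHIFT is 14 and bit 14 of a valid SP is always zero. An
	 * overflow (or underflow) of less than the stack size flips that
	 * bit, so the tbnz above catches it before anything is written to
	 * the stack. The add/sub sequence is needed because logical and
	 * bit-test operations cannot be applied to SP directly, so SP is
	 * temporarily recovered into x0 using arithmetic alone.
	 */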
0:
	/*
	 * Either we've just detected an overflow, or we've taken an exception
	 * while on the overflow stack. Either way, we won't return to
	 * userspace, and can clobber EL0 registers to free up GPRs.
	 */

	/* Stash the original SP (minus S_FRAME_SIZE) in tpidr_el0. */
	msr	tpidr_el0, x0

	/* Recover the original x0 value and stash it in tpidrro_el0 */
	sub	x0, sp, x0
	msr	tpidrro_el0, x0

	/* Switch to the overflow stack */
	adr_this_cpu sp, overflow_stack + OVERFLOW_STACK_SIZE, x0

	/*
	 * Check whether we were already on the overflow stack. This may happen
	 * after panic() re-enables interrupts.
	 */
	mrs	x0, tpidr_el0			// sp of interrupted context
	sub	x0, sp, x0			// delta with top of overflow stack
	tst	x0, #~(OVERFLOW_STACK_SIZE - 1)	// within range?
	b.ne	__bad_stack			// no? -> bad stack pointer
	/* We were already on the overflow stack. Restore sp/x0 and carry on. */
	sub	sp, sp, x0
	mrs	x0, tpidrro_el0
#endif
	b	el\()\el\()_\label
	.endm
	.macro tramp_alias, dst, sym
	mov_q	\dst, TRAMP_VALIAS
	add	\dst, \dst, #(\sym - .entry.tramp.text)
	.endm
	/*
	 * This macro corrupts x0-x3. It is the caller's duty to save/restore
	 * them if required.
	 */
	.macro	apply_ssbd, state, tmp1, tmp2
alternative_cb	spectre_v4_patch_fw_mitigation_enable
	b	.L__asm_ssbd_skip\@		// Patched to NOP
alternative_cb_end
	ldr_this_cpu	\tmp2, arm64_ssbd_callback_required, \tmp1
	cbz	\tmp2,	.L__asm_ssbd_skip\@
	ldr	\tmp2, [tsk, #TSK_TI_FLAGS]
	tbnz	\tmp2, #TIF_SSBD, .L__asm_ssbd_skip\@
	mov	w0, #ARM_SMCCC_ARCH_WORKAROUND_2
	mov	w1, #\state
alternative_cb	spectre_v4_patch_fw_mitigation_conduit
	nop					// Patched to SMC/HVC #0
alternative_cb_end
.L__asm_ssbd_skip\@:
	.endm
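	/*
	 * Usage sketch (illustrative, not part of the original source): the
	 * entry path below invokes this as "apply_ssbd 1, x22, x23", where
	 * \state ends up in w1 as the argument to ARM_SMCCC_ARCH_WORKAROUND_2
	 * (1 requests the Spectre-v4 firmware mitigation be enabled; the exit
	 * path passes 0 to disable it again). \tmp1/\tmp2 are scratch
	 * registers the caller can spare, in addition to the x0-x3 that the
	 * SMC/HVC conduit may clobber.
	 */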
	/* Check for MTE asynchronous tag check faults */
	.macro check_mte_async_tcf, flgs, tmp
#ifdef CONFIG_ARM64_MTE
alternative_if_not ARM64_MTE
	b	1f
alternative_else_nop_endif
	mrs_s	\tmp, SYS_TFSRE0_EL1
	tbz	\tmp, #SYS_TFSR_EL1_TF0_SHIFT, 1f
	/* Asynchronous TCF occurred for TTBR0 access, set the TI flag */
	orr	\flgs, \flgs, #_TIF_MTE_ASYNC_FAULT
	str	\flgs, [tsk, #TSK_TI_FLAGS]
	msr_s	SYS_TFSRE0_EL1, xzr
1:
#endif
	.endm

	/* Clear the MTE asynchronous tag check faults */
	.macro clear_mte_async_tcf
#ifdef CONFIG_ARM64_MTE
alternative_if ARM64_MTE
	dsb	ish
	msr_s	SYS_TFSRE0_EL1, xzr
alternative_else_nop_endif
#endif
	.endm
	.macro	kernel_entry, el, regsize = 64
	.if	\regsize == 32
	mov	w0, w0				// zero upper 32 bits of x0
	.endif
	stp	x0, x1, [sp, #16 * 0]
	stp	x2, x3, [sp, #16 * 1]
	stp	x4, x5, [sp, #16 * 2]
	stp	x6, x7, [sp, #16 * 3]
	stp	x8, x9, [sp, #16 * 4]
	stp	x10, x11, [sp, #16 * 5]
	stp	x12, x13, [sp, #16 * 6]
	stp	x14, x15, [sp, #16 * 7]
	stp	x16, x17, [sp, #16 * 8]
	stp	x18, x19, [sp, #16 * 9]
	stp	x20, x21, [sp, #16 * 10]
	stp	x22, x23, [sp, #16 * 11]
	stp	x24, x25, [sp, #16 * 12]
	stp	x26, x27, [sp, #16 * 13]
	stp	x28, x29, [sp, #16 * 14]
	.if	\el == 0
	clear_gp_regs
	mrs	x21, sp_el0
	ldr_this_cpu	tsk, __entry_task, x20
	msr	sp_el0, tsk

	/*
	 * Ensure MDSCR_EL1.SS is clear, since we can unmask debug exceptions
	 * when scheduling.
	 */
	ldr	x19, [tsk, #TSK_TI_FLAGS]
	disable_step_tsk x19, x20

	/* Check for asynchronous tag check faults in user space */
	check_mte_async_tcf x19, x22
	apply_ssbd 1, x22, x23

	ptrauth_keys_install_kernel tsk, x20, x22, x23

	scs_load tsk, x20
	.else
	add	x21, sp, #S_FRAME_SIZE
	get_current_task tsk
/* Save the task's original addr_limit and set USER_DS */
	ldr	x20, [tsk, #TSK_TI_ADDR_LIMIT]
	str	x20, [sp, #S_ORIG_ADDR_LIMIT]
	mov	x20, #USER_DS
	str	x20, [tsk, #TSK_TI_ADDR_LIMIT]

	/* No need to reset PSTATE.UAO, hardware's already set it to 0 for us */
	.endif /* \el == 0 */

	mrs	x22, elr_el1
	mrs	x23, spsr_el1
	stp	lr, x21, [sp, #S_LR]
	/*
	 * In order to be able to dump the contents of struct pt_regs at the
	 * time the exception was taken (in case we attempt to walk the call
	 * stack later), chain it together with the stack frames.
	 */
	.if \el == 0
	stp	xzr, xzr, [sp, #S_STACKFRAME]
	.else
	stp	x29, x22, [sp, #S_STACKFRAME]
	.endif
	add	x29, sp, #S_STACKFRAME

#ifdef CONFIG_ARM64_SW_TTBR0_PAN
alternative_if_not ARM64_HAS_PAN
	bl	__swpan_entry_el\el
alternative_else_nop_endif
#endif

	stp	x22, x23, [sp, #S_PC]

	/* Not in a syscall by default (el0_svc overwrites for real syscall) */
	.if	\el == 0
	mov	w21, #NO_SYSCALL
	str	w21, [sp, #S_SYSCALLNO]
	.endif

	/* Save pmr */
alternative_if ARM64_HAS_IRQ_PRIO_MASKING
	mrs_s	x20, SYS_ICC_PMR_EL1
	str	x20, [sp, #S_PMR_SAVE]
alternative_else_nop_endif

	/* Re-enable tag checking (TCO set on exception entry) */
#ifdef CONFIG_ARM64_MTE
alternative_if ARM64_MTE
	SET_PSTATE_TCO(0)
alternative_else_nop_endif
#endif

	/*
	 * Registers that may be useful after this macro is invoked:
	 *
	 * x20 - ICC_PMR_EL1
	 * x21 - aborted SP
	 * x22 - aborted PC
	 * x23 - aborted PSTATE
	 */
	.endm

	.macro	kernel_exit, el
	.if	\el != 0
	disable_daif

	/* Restore the task's original addr_limit. */
	ldr	x20, [sp, #S_ORIG_ADDR_LIMIT]
	str	x20, [tsk, #TSK_TI_ADDR_LIMIT]

	/* No need to restore UAO, it will be restored from SPSR_EL1 */
	.endif

	/* Restore pmr */
alternative_if ARM64_HAS_IRQ_PRIO_MASKING
	ldr	x20, [sp, #S_PMR_SAVE]
	msr_s	SYS_ICC_PMR_EL1, x20
	mrs_s	x21, SYS_ICC_CTLR_EL1
	tbz	x21, #6, .L__skip_pmr_sync\@	// Check for ICC_CTLR_EL1.PMHE
	dsb	sy				// Ensure priority change is seen by redistributor
.L__skip_pmr_sync\@:
alternative_else_nop_endif
	ldp	x21, x22, [sp, #S_PC]		// load ELR, SPSR
	.if	\el == 0
	ct_user_enter
	.endif

#ifdef CONFIG_ARM64_SW_TTBR0_PAN
alternative_if_not ARM64_HAS_PAN
	bl	__swpan_exit_el\el
alternative_else_nop_endif
#endif

	.if	\el == 0
	ldr	x23, [sp, #S_SP]		// load return stack pointer
	msr	sp_el0, x23
	tst	x22, #PSR_MODE32_BIT		// native task?
	b.eq	3f

#ifdef CONFIG_ARM64_ERRATUM_845719
alternative_if ARM64_WORKAROUND_845719
#ifdef CONFIG_PID_IN_CONTEXTIDR
	mrs	x29, contextidr_el1
	msr	contextidr_el1, x29
#else
	msr	contextidr_el1, xzr
#endif
alternative_else_nop_endif
#endif
3:
	scs_save tsk, x0

	/* No kernel C function calls after this as user keys are set. */
	ptrauth_keys_install_user tsk, x0, x1, x2

	apply_ssbd 0, x0, x1
	.endif

	msr	elr_el1, x21			// set up the return data
	msr	spsr_el1, x22
	ldp	x0, x1, [sp, #16 * 0]
	ldp	x2, x3, [sp, #16 * 1]
	ldp	x4, x5, [sp, #16 * 2]
	ldp	x6, x7, [sp, #16 * 3]
	ldp	x8, x9, [sp, #16 * 4]
	ldp	x10, x11, [sp, #16 * 5]
	ldp	x12, x13, [sp, #16 * 6]
	ldp	x14, x15, [sp, #16 * 7]
	ldp	x16, x17, [sp, #16 * 8]
	ldp	x18, x19, [sp, #16 * 9]
	ldp	x20, x21, [sp, #16 * 10]
	ldp	x22, x23, [sp, #16 * 11]
	ldp	x24, x25, [sp, #16 * 12]
	ldp	x26, x27, [sp, #16 * 13]
	ldp	x28, x29, [sp, #16 * 14]
	ldr	lr, [sp, #S_LR]
	add	sp, sp, #S_FRAME_SIZE		// restore sp
	.if	\el == 0
alternative_insn eret, nop, ARM64_UNMAP_KERNEL_AT_EL0
#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
	bne	4f
	msr	far_el1, x30
	tramp_alias	x30, tramp_exit_native
	br	x30
4:
	tramp_alias	x30, tramp_exit_compat
	br	x30
#endif
	.else
	/* Ensure any device/NC reads complete */
	alternative_insn nop, "dmb sy", ARM64_WORKAROUND_1508412

	eret
	.endif

	sb
	.endm

#ifdef CONFIG_ARM64_SW_TTBR0_PAN
	/*
	 * Set the TTBR0 PAN bit in SPSR. When the exception is taken from
	 * EL0, there is no need to check the state of TTBR0_EL1 since
	 * accesses are always enabled.
	 * Note that the meaning of this bit differs from the ARMv8.1 PAN
	 * feature as all TTBR0_EL1 accesses are disabled, not just those to
	 * user mappings.
	 */
SYM_CODE_START_LOCAL(__swpan_entry_el1)
	mrs	x21, ttbr0_el1
	tst	x21, #TTBR_ASID_MASK		// Check for the reserved ASID
	orr	x23, x23, #PSR_PAN_BIT		// Set the emulated PAN in the saved SPSR
	b.eq	1f				// TTBR0 access already disabled
	and	x23, x23, #~PSR_PAN_BIT		// Clear the emulated PAN in the saved SPSR
SYM_INNER_LABEL(__swpan_entry_el0, SYM_L_LOCAL)
	__uaccess_ttbr0_disable x21
1:	ret
SYM_CODE_END(__swpan_entry_el1)

/*
 * Restore access to TTBR0_EL1. If returning to EL0, no need for SPSR
 * PAN bit checking.
 */
SYM_CODE_START_LOCAL(__swpan_exit_el1)
	tbnz	x22, #22, 1f			// Skip re-enabling TTBR0 access if the PSR_PAN_BIT is set
	__uaccess_ttbr0_enable x0, x1
1:	and	x22, x22, #~PSR_PAN_BIT		// ARMv8.0 CPUs do not understand this bit
	ret
SYM_CODE_END(__swpan_exit_el1)

SYM_CODE_START_LOCAL(__swpan_exit_el0)
	__uaccess_ttbr0_enable x0, x1
	/*
	 * Enable errata workarounds only if returning to user. The only
	 * workaround currently required for TTBR0_EL1 changes are for the
	 * Cavium erratum 27456 (broadcast TLBI instructions may cause I-cache
	 * corruption).
	 */
	b	post_ttbr_update_workaround
SYM_CODE_END(__swpan_exit_el0)
#endif

	.macro	irq_stack_entry
	mov	x19, sp			// preserve the original sp
#ifdef CONFIG_SHADOW_CALL_STACK
	mov	x24, scs_sp		// preserve the original shadow stack
#endif

	/*
	 * Compare sp with the base of the task stack.
	 * If the top ~(THREAD_SIZE - 1) bits match, we are on a task stack,
	 * and should switch to the irq stack.
	 */
	ldr	x25, [tsk, TSK_STACK]
	eor	x25, x25, x19
	and	x25, x25, #~(THREAD_SIZE - 1)
	cbnz	x25, 9998f

	ldr_this_cpu x25, irq_stack_ptr, x26
	mov	x26, #IRQ_STACK_SIZE
	add	x26, x25, x26
	/* switch to the irq stack */
	mov	sp, x26
#ifdef CONFIG_SHADOW_CALL_STACK
	/* also switch to the irq shadow stack */
	adr_this_cpu scs_sp, irq_shadow_call_stack, x26
#endif

9998:
	.endm
/*
 * The callee-saved regs (x19-x29) should be preserved between
 * irq_stack_entry and irq_stack_exit, but note that kernel_entry
 * uses x20-x23 to store data for later use.
 */
	.macro	irq_stack_exit
	mov	sp, x19
#ifdef CONFIG_SHADOW_CALL_STACK
	mov	scs_sp, x24
#endif
	.endm
/* GPRs used by entry code */
tsk	.req	x28		// current thread_info

/*
 * Interrupt handling.
 */
	.macro	irq_handler
	ldr_l	x1, handle_arch_irq
	mov	x0, sp
	irq_stack_entry
	blr	x1
	irq_stack_exit
	.endm

#ifdef CONFIG_ARM64_PSEUDO_NMI
	/*
	 * Set res to 0 if irqs were unmasked in interrupted context.
	 * Otherwise set res to non-0 value.
	 */
	.macro	test_irqs_unmasked res:req, pmr:req
alternative_if ARM64_HAS_IRQ_PRIO_MASKING
	sub	\res, \pmr, #GIC_PRIO_IRQON
alternative_else
	mov	\res, xzr
alternative_endif
	.endm
#endif

	.macro	gic_prio_kentry_setup, tmp:req
#ifdef CONFIG_ARM64_PSEUDO_NMI
	alternative_if ARM64_HAS_IRQ_PRIO_MASKING
	mov	\tmp, #(GIC_PRIO_PSR_I_SET | GIC_PRIO_IRQON)
	msr_s	SYS_ICC_PMR_EL1, \tmp
	alternative_else_nop_endif
#endif
	.endm

	.macro	gic_prio_irq_setup, pmr:req, tmp:req
#ifdef CONFIG_ARM64_PSEUDO_NMI
	alternative_if ARM64_HAS_IRQ_PRIO_MASKING
	orr	\tmp, \pmr, #GIC_PRIO_PSR_I_SET
	msr_s	SYS_ICC_PMR_EL1, \tmp
	alternative_else_nop_endif
#endif
	.endm

.text
/*
 * Exception vectors.
 */
	.pushsection ".entry.text", "ax"

	.align	11
SYM_CODE_START(vectors)
	kernel_ventry	1, sync_invalid			// Synchronous EL1t
	kernel_ventry	1, irq_invalid			// IRQ EL1t
	kernel_ventry	1, fiq_invalid			// FIQ EL1t
	kernel_ventry	1, error_invalid		// Error EL1t

	kernel_ventry	1, sync				// Synchronous EL1h
	kernel_ventry	1, irq				// IRQ EL1h
	kernel_ventry	1, fiq_invalid			// FIQ EL1h
	kernel_ventry	1, error			// Error EL1h

	kernel_ventry	0, sync				// Synchronous 64-bit EL0
	kernel_ventry	0, irq				// IRQ 64-bit EL0
	kernel_ventry	0, fiq_invalid			// FIQ 64-bit EL0
	kernel_ventry	0, error			// Error 64-bit EL0

#ifdef CONFIG_COMPAT
	kernel_ventry	0, sync_compat, 32		// Synchronous 32-bit EL0
	kernel_ventry	0, irq_compat, 32		// IRQ 32-bit EL0
	kernel_ventry	0, fiq_invalid_compat, 32	// FIQ 32-bit EL0
	kernel_ventry	0, error_compat, 32		// Error 32-bit EL0
#else
	kernel_ventry	0, sync_invalid, 32		// Synchronous 32-bit EL0
	kernel_ventry	0, irq_invalid, 32		// IRQ 32-bit EL0
	kernel_ventry	0, fiq_invalid, 32		// FIQ 32-bit EL0
	kernel_ventry	0, error_invalid, 32		// Error 32-bit EL0
#endif
SYM_CODE_END(vectors)
/*
 * VMAP_STACK overflow detection:
 *
 * Overflow is detected in a small preamble executed on each exception
 * entry, which checks whether there is enough space on the current stack
 * for the general purpose registers to be saved. If there is not enough
 * space, the overflow handler is invoked on a per-cpu overflow stack.
 * This approach preserves the original exception information in ESR_EL1
 * (and, where appropriate, FAR_EL1).
 *
 * Task and IRQ stacks are aligned to double their size, enabling overflow
 * to be detected with a single bit test. For example, a 16K stack is
 * aligned to 32K, ensuring that bit 14 of the SP must be zero. On an
 * overflow (or underflow), this bit is flipped. Thus, overflow (of less
 * than the size of the stack) can be detected by testing whether this bit
 * is set.
 *
 * The overflow check is performed before any attempt is made to access
 * the stack, avoiding recursive faults (and the loss of exception
 * information these would entail). As logical operations cannot be
 * performed on the SP directly, the SP is temporarily swapped with a
 * general purpose register using arithmetic operations to enable the test
 * to be performed.
 */
#ifdef CONFIG_VMAP_STACK
/*
 * We detected an overflow in kernel_ventry, which switched to the
 * overflow stack. Stash the exception regs, and head to our overflow
 * handler.
 */
__bad_stack:
	/* Restore the original x0 value */
	mrs	x0, tpidrro_el0

	/*
	 * Store the original GPRs to the new stack. The original SP (minus
	 * S_FRAME_SIZE) was stashed in tpidr_el0 by kernel_ventry.
	 */
	sub	sp, sp, #S_FRAME_SIZE
	kernel_entry 1
	mrs	x0, tpidr_el0
	add	x0, x0, #S_FRAME_SIZE
	str	x0, [sp, #S_SP]

	/* Stash the regs for handle_bad_stack */
	mov	x0, sp

	/* Time to die */
	bl	handle_bad_stack
	ASM_BUG()
#endif /* CONFIG_VMAP_STACK */
/*
 * Invalid mode handlers
 */
	.macro	inv_entry, el, reason, regsize = 64
	kernel_entry \el, \regsize
	mov	x0, sp
	mov	x1, #\reason
	mrs	x2, esr_el1
	/*
	 * Use BL rather than B so the LR is set for reliable backtraces;
	 * ASM_BUG() catches any unexpected return.
	 */
	bl	bad_mode
	ASM_BUG()
	.endm
SYM_CODE_START_LOCAL(el0_sync_invalid)
	inv_entry 0, BAD_SYNC
SYM_CODE_END(el0_sync_invalid)

SYM_CODE_START_LOCAL(el0_irq_invalid)
	inv_entry 0, BAD_IRQ
SYM_CODE_END(el0_irq_invalid)

SYM_CODE_START_LOCAL(el0_fiq_invalid)
	inv_entry 0, BAD_FIQ
SYM_CODE_END(el0_fiq_invalid)

SYM_CODE_START_LOCAL(el0_error_invalid)
	inv_entry 0, BAD_ERROR
SYM_CODE_END(el0_error_invalid)

#ifdef CONFIG_COMPAT
SYM_CODE_START_LOCAL(el0_fiq_invalid_compat)
	inv_entry 0, BAD_FIQ, 32
SYM_CODE_END(el0_fiq_invalid_compat)
#endif

SYM_CODE_START_LOCAL(el1_sync_invalid)
	inv_entry 1, BAD_SYNC
SYM_CODE_END(el1_sync_invalid)

SYM_CODE_START_LOCAL(el1_irq_invalid)
	inv_entry 1, BAD_IRQ
SYM_CODE_END(el1_irq_invalid)

SYM_CODE_START_LOCAL(el1_fiq_invalid)
	inv_entry 1, BAD_FIQ
SYM_CODE_END(el1_fiq_invalid)

SYM_CODE_START_LOCAL(el1_error_invalid)
	inv_entry 1, BAD_ERROR
SYM_CODE_END(el1_error_invalid)
/*
 * EL1 mode handlers.
 */
	.align	6
SYM_CODE_START_LOCAL_NOALIGN(el1_sync)
	kernel_entry 1
	mov	x0, sp
	bl	el1_sync_handler
	kernel_exit 1
SYM_CODE_END(el1_sync)

	.align	6
SYM_CODE_START_LOCAL_NOALIGN(el1_irq)
	kernel_entry 1
	gic_prio_irq_setup pmr=x20, tmp=x1
	enable_da_f

#ifdef CONFIG_ARM64_PSEUDO_NMI
	test_irqs_unmasked	res=x0, pmr=x20
	cbz	x0, 1f
	bl	asm_nmi_enter
1:
#endif

#ifdef CONFIG_TRACE_IRQFLAGS
	bl	trace_hardirqs_off
#endif

	irq_handler

#ifdef CONFIG_PREEMPTION
	ldr	x24, [tsk, #TSK_TI_PREEMPT]	// get preempt count
alternative_if ARM64_HAS_IRQ_PRIO_MASKING
	/*
	 * DA_F were cleared at start of handling. If anything is set in DAIF,
	 * we come back from an NMI, so skip preemption
	 */
	mrs	x0, daif
	orr	x24, x24, x0
alternative_else_nop_endif
	cbnz	x24, 1f				// preempt count != 0 || NMI return path
	bl	arm64_preempt_schedule_irq	// irq en/disable is done inside
1:
#endif

#ifdef CONFIG_ARM64_PSEUDO_NMI
	/*
	 * When using IRQ priority masking, we can get spurious interrupts while
	 * PMR is set to GIC_PRIO_IRQOFF. An NMI might also have occurred in a
	 * section with interrupts disabled. Skip tracing in those cases.
	 */
	test_irqs_unmasked	res=x0, pmr=x20
	cbz	x0, 1f
	bl	asm_nmi_exit
1:
#endif

#ifdef CONFIG_TRACE_IRQFLAGS
#ifdef CONFIG_ARM64_PSEUDO_NMI
	test_irqs_unmasked	res=x0, pmr=x20
	cbnz	x0, 1f
#endif
	bl	trace_hardirqs_on
1:
#endif

	kernel_exit 1
SYM_CODE_END(el1_irq)
/*
 * EL0 mode handlers.
 */
	.align	6
SYM_CODE_START_LOCAL_NOALIGN(el0_sync)
	kernel_entry 0
	mov	x0, sp
	bl	el0_sync_handler
	b	ret_to_user
SYM_CODE_END(el0_sync)

#ifdef CONFIG_COMPAT
	.align	6
SYM_CODE_START_LOCAL_NOALIGN(el0_sync_compat)
	kernel_entry 0, 32
	mov	x0, sp
	bl	el0_sync_compat_handler
	b	ret_to_user
SYM_CODE_END(el0_sync_compat)

	.align	6
SYM_CODE_START_LOCAL_NOALIGN(el0_irq_compat)
	kernel_entry 0, 32
	b	el0_irq_naked
SYM_CODE_END(el0_irq_compat)

SYM_CODE_START_LOCAL_NOALIGN(el0_error_compat)
	kernel_entry 0, 32
	b	el0_error_naked
SYM_CODE_END(el0_error_compat)
#endif

	.align	6
SYM_CODE_START_LOCAL_NOALIGN(el0_irq)
	kernel_entry 0
el0_irq_naked:
	gic_prio_irq_setup pmr=x20, tmp=x0
	ct_user_exit_irqoff
	enable_da_f

#ifdef CONFIG_TRACE_IRQFLAGS
	bl	trace_hardirqs_off
#endif

	tbz	x22, #55, 1f
	bl	do_el0_irq_bp_hardening
1:
	irq_handler

#ifdef CONFIG_TRACE_IRQFLAGS
	bl	trace_hardirqs_on
#endif
	b	ret_to_user
SYM_CODE_END(el0_irq)

SYM_CODE_START_LOCAL(el1_error)
	kernel_entry 1
	mrs	x1, esr_el1
	gic_prio_kentry_setup tmp=x2
	enable_dbg
	mov	x0, sp
	bl	do_serror
	kernel_exit 1
SYM_CODE_END(el1_error)

SYM_CODE_START_LOCAL(el0_error)
	kernel_entry 0
el0_error_naked:
	mrs	x25, esr_el1
	gic_prio_kentry_setup tmp=x2
	ct_user_exit_irqoff
	enable_dbg
	mov	x0, sp
	mov	x1, x25
	bl	do_serror
	enable_da_f
	b	ret_to_user
SYM_CODE_END(el0_error)

/*
 * "slow" syscall return path.
 */
SYM_CODE_START_LOCAL(ret_to_user)
	disable_daif
	gic_prio_kentry_setup tmp=x3
	ldr	x1, [tsk, #TSK_TI_FLAGS]
	and	x2, x1, #_TIF_WORK_MASK
	cbnz	x2, work_pending
finish_ret_to_user:
	/* Ignore asynchronous tag check faults in the uaccess routines */
	clear_mte_async_tcf
	enable_step_tsk x1, x2
#ifdef CONFIG_GCC_PLUGIN_STACKLEAK
	bl	stackleak_erase
#endif
	kernel_exit 0

/*
 * Ok, we need to do extra processing, enter the slow path.
 */
work_pending:
	mov	x0, sp				// 'regs'
	bl	do_notify_resume
#ifdef CONFIG_TRACE_IRQFLAGS
	bl	trace_hardirqs_on		// enabled while in userspace
#endif
	ldr	x1, [tsk, #TSK_TI_FLAGS]	// re-check for single-step
	b	finish_ret_to_user
SYM_CODE_END(ret_to_user)

	.popsection				// .entry.text

#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
/*
 * Exception vectors trampoline.
 */
	.pushsection ".entry.tramp.text", "ax"

	.macro tramp_map_kernel, tmp
	mrs	\tmp, ttbr1_el1
	add	\tmp, \tmp, #(PAGE_SIZE + RESERVED_TTBR0_SIZE)
	bic	\tmp, \tmp, #USER_ASID_FLAG
	msr	ttbr1_el1, \tmp
#ifdef CONFIG_QCOM_FALKOR_ERRATUM_1003
alternative_if ARM64_WORKAROUND_QCOM_FALKOR_E1003
	/* ASID already in \tmp[63:48] */
	movk	\tmp, #:abs_g2_nc:(TRAMP_VALIAS >> 12)
	movk	\tmp, #:abs_g1_nc:(TRAMP_VALIAS >> 12)
	/* 2MB boundary containing the vectors, so we nobble the walk cache */
	movk	\tmp, #:abs_g0_nc:((TRAMP_VALIAS & ~(SZ_2M - 1)) >> 12)
	isb
	tlbi	vae1, \tmp
	dsb	nsh
alternative_else_nop_endif
#endif /* CONFIG_QCOM_FALKOR_ERRATUM_1003 */
	.endm

	.macro tramp_unmap_kernel, tmp
	mrs	\tmp, ttbr1_el1
	sub	\tmp, \tmp, #(PAGE_SIZE + RESERVED_TTBR0_SIZE)
	orr	\tmp, \tmp, #USER_ASID_FLAG
	msr	ttbr1_el1, \tmp
	/*
	 * We avoid running the post_ttbr_update_workaround here because
	 * it's only needed by Cavium ThunderX, which requires KPTI to be
	 * disabled.
	 */
	.endm
	.macro tramp_ventry, regsize = 64
	.align	7
1:
	.if	\regsize == 64
	msr	tpidrro_el0, x30	// Restored in kernel_ventry
	.endif
	/*
	 * Defend against branch aliasing attacks by pushing a dummy
	 * entry onto the return stack and using a RET instruction to
	 * enter the full-fat kernel vectors.
	 */
	bl	2f
	b	.
2:
	tramp_map_kernel	x30
#ifdef CONFIG_RANDOMIZE_BASE
	adr	x30, tramp_vectors + PAGE_SIZE
alternative_insn isb, nop, ARM64_WORKAROUND_QCOM_FALKOR_E1003
	ldr	x30, [x30]
#else
	ldr	x30, =vectors
#endif
alternative_if_not ARM64_WORKAROUND_CAVIUM_TX2_219_PRFM
	prfm	plil1strm, [x30, #(1b - tramp_vectors)]
alternative_else_nop_endif
	msr	vbar_el1, x30
	add	x30, x30, #(1b - tramp_vectors)
	isb
	ret
	.endm
	.macro tramp_exit, regsize = 64
	adr	x30, tramp_vectors
	msr	vbar_el1, x30
	tramp_unmap_kernel	x30
	.if	\regsize == 64
	mrs	x30, far_el1
	.endif
	eret
	sb
	.endm

	.align	11
SYM_CODE_START_NOALIGN(tramp_vectors)
	.space	0x400
	tramp_ventry
	tramp_ventry
	tramp_ventry
	tramp_ventry

	tramp_ventry	32
	tramp_ventry	32
	tramp_ventry	32
	tramp_ventry	32
SYM_CODE_END(tramp_vectors)

SYM_CODE_START(tramp_exit_native)
	tramp_exit
SYM_CODE_END(tramp_exit_native)

SYM_CODE_START(tramp_exit_compat)
	tramp_exit	32
SYM_CODE_END(tramp_exit_compat)

	.ltorg
	.popsection				// .entry.tramp.text

#ifdef CONFIG_RANDOMIZE_BASE
	.pushsection ".rodata", "a"
	.align PAGE_SHIFT
SYM_DATA_START(__entry_tramp_data_start)
	.quad	vectors
SYM_DATA_END(__entry_tramp_data_start)
	.popsection				// .rodata
#endif /* CONFIG_RANDOMIZE_BASE */
#endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */

/*
 * Register switch for AArch64. The callee-saved registers need to be saved
 * and restored. On entry:
 *   x0 = previous task_struct (must be preserved across the switch)
 *   x1 = next task_struct
 * Previous and next are guaranteed not to be the same.
 *
 */
SYM_FUNC_START(cpu_switch_to)
	mov	x10, #THREAD_CPU_CONTEXT
	add	x8, x0, x10
	mov	x9, sp
	stp	x19, x20, [x8], #16		// store callee-saved registers
	stp	x21, x22, [x8], #16
	stp	x23, x24, [x8], #16
	stp	x25, x26, [x8], #16
	stp	x27, x28, [x8], #16
	stp	x29, x9, [x8], #16
	str	lr, [x8]
	add	x8, x1, x10
	ldp	x19, x20, [x8], #16		// restore callee-saved registers
	ldp	x21, x22, [x8], #16
	ldp	x23, x24, [x8], #16
	ldp	x25, x26, [x8], #16
	ldp	x27, x28, [x8], #16
	ldp	x29, x9, [x8], #16
	ldr	lr, [x8]
	mov	sp, x9
	msr	sp_el0, x1
	ptrauth_keys_install_kernel x1, x8, x9, x10
	scs_save x0, x8
	scs_load x1, x8
	ret
SYM_FUNC_END(cpu_switch_to)
NOKPROBE(cpu_switch_to)
/*
 * This is how we return from a fork.
 */
SYM_CODE_START(ret_from_fork)
	bl	schedule_tail
	cbz	x19, 1f				// not a kernel thread
	mov	x0, x20
	blr	x19
1:	get_current_task tsk
	b	ret_to_user
SYM_CODE_END(ret_from_fork)
NOKPROBE(ret_from_fork)
arm64: kernel: Add arch-specific SDEI entry code and CPU masking

The Software Delegated Exception Interface (SDEI) is an ARM standard
for registering callbacks from the platform firmware into the OS.
This is typically used to implement RAS notifications.

Such notifications enter the kernel at the registered entry-point
with the register values of the interrupted CPU context. Because this
is not a CPU exception, it cannot reuse the existing entry code
(crucially, we don't implicitly know which exception level we
interrupted).

Add the entry point to entry.S to set us up for calling into C code. If
the event interrupted code that had interrupts masked, we always return
to that location. Otherwise we pretend this was an IRQ, and use SDEI's
complete_and_resume call to return to vbar_el1 + offset. This allows
the kernel to deliver signals to user space processes. For KVM this
triggers the world switch, a quick spin round vcpu_run, then back into
the guest, unless there are pending signals.

Add sdei_mask_local_cpu() calls to the smp_send_stop() code; this covers
the panic() code-path, which doesn't invoke cpuhotplug notifiers.

Because we can interrupt entry-from/exit-to another EL, we can't trust
the value in sp_el0 or x29, even if we interrupted the kernel. In this
case the code in entry.S will save/restore sp_el0 and use the value in
__entry_task.

When we have VMAP stacks we can interrupt the stack-overflow test, which
stirs x0 into sp, meaning we have to have our own VMAP stacks. For now
these are allocated when we probe the interface. Future patches will add
refcounting hooks to allow the arch code to allocate them lazily.

Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
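The return-path choice described above — complete back to the interrupted context, or complete-and-resume to vbar_el1 + offset — can be modelled in plain C. This is an illustrative sketch only, not kernel code: pick_exit_call() is a hypothetical helper mirroring the `cmp x0, #1; csel` sequence at the end of __sdei_asm_handler, and the two ID values are placeholders standing in for the real SDEI_1_0_FN_* function IDs from uapi/linux/arm_sdei.h.

```c
#include <stdint.h>

/* Placeholder IDs for illustration; the real values live in arm_sdei.h. */
#define EV_COMPLETE            0x1ULL
#define EV_COMPLETE_AND_RESUME 0x2ULL

/*
 * __sdei_handler() returns either a small flag (<= 1), meaning "just
 * complete and return to the interrupted location", or a resume address
 * (vbar_el1 + offset), meaning "pretend this was an IRQ".
 * Mirrors: cmp x0, #1 ; csel x0, x2, x3, ls
 */
static uint64_t pick_exit_call(uint64_t handler_ret)
{
	return (handler_ret <= 1) ? EV_COMPLETE : EV_COMPLETE_AND_RESUME;
}
```

Note that the resume address itself is still passed to firmware (the `mov x1, x0` before the comparison); only the function ID of the exit call changes.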
#ifdef CONFIG_ARM_SDE_INTERFACE

#include <asm/sdei.h>
#include <uapi/linux/arm_sdei.h>

.macro sdei_handler_exit exit_mode
	/* On success, this call never returns... */
	cmp	\exit_mode, #SDEI_EXIT_SMC
	b.ne	99f
	smc	#0
	b	.
99:	hvc	#0
	b	.
.endm
#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
/*
 * The regular SDEI entry point may have been unmapped along with the rest of
 * the kernel. This trampoline restores the kernel mapping to make the x1
 * memory argument accessible.
 *
 * This clobbers x4, __sdei_handler() will restore this from firmware's
 * copy.
 */
.ltorg
.pushsection ".entry.tramp.text", "ax"
SYM_CODE_START(__sdei_asm_entry_trampoline)
	mrs	x4, ttbr1_el1
	tbz	x4, #USER_ASID_BIT, 1f

	tramp_map_kernel tmp=x4
	isb
	mov	x4, xzr

	/*
	 * Use reg->interrupted_regs.addr_limit to remember whether to unmap
	 * the kernel on exit.
	 */
1:	str	x4, [x1, #(SDEI_EVENT_INTREGS + S_ORIG_ADDR_LIMIT)]

#ifdef CONFIG_RANDOMIZE_BASE
	adr	x4, tramp_vectors + PAGE_SIZE
	add	x4, x4, #:lo12:__sdei_asm_trampoline_next_handler
	ldr	x4, [x4]
#else
	ldr	x4, =__sdei_asm_handler
#endif
	br	x4
SYM_CODE_END(__sdei_asm_entry_trampoline)
NOKPROBE(__sdei_asm_entry_trampoline)
/*
 * Make the exit call and restore the original ttbr1_el1
 *
 * x0 & x1: setup for the exit API call
 * x2: exit_mode
 * x4: struct sdei_registered_event argument from registration time.
 */
SYM_CODE_START(__sdei_asm_exit_trampoline)
	ldr	x4, [x4, #(SDEI_EVENT_INTREGS + S_ORIG_ADDR_LIMIT)]
	cbnz	x4, 1f

	tramp_unmap_kernel tmp=x4

1:	sdei_handler_exit exit_mode=x2
SYM_CODE_END(__sdei_asm_exit_trampoline)
NOKPROBE(__sdei_asm_exit_trampoline)
.ltorg
.popsection		// .entry.tramp.text
#ifdef CONFIG_RANDOMIZE_BASE
.pushsection ".rodata", "a"
SYM_DATA_START(__sdei_asm_trampoline_next_handler)
	.quad	__sdei_asm_handler
SYM_DATA_END(__sdei_asm_trampoline_next_handler)
.popsection		// .rodata
#endif /* CONFIG_RANDOMIZE_BASE */
#endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
/*
 * Software Delegated Exception entry point.
 *
 * x0: Event number
 * x1: struct sdei_registered_event argument from registration time.
 * x2: interrupted PC
 * x3: interrupted PSTATE
 * x4: maybe clobbered by the trampoline
 *
 * Firmware has preserved x0->x17 for us, we must save/restore the rest to
 * follow SMC-CC. We save (or retrieve) all the registers as the handler may
 * want them.
 */
SYM_CODE_START(__sdei_asm_handler)
	stp     x2, x3, [x1, #SDEI_EVENT_INTREGS + S_PC]
	stp     x4, x5, [x1, #SDEI_EVENT_INTREGS + 16 * 2]
	stp     x6, x7, [x1, #SDEI_EVENT_INTREGS + 16 * 3]
	stp     x8, x9, [x1, #SDEI_EVENT_INTREGS + 16 * 4]
	stp     x10, x11, [x1, #SDEI_EVENT_INTREGS + 16 * 5]
	stp     x12, x13, [x1, #SDEI_EVENT_INTREGS + 16 * 6]
	stp     x14, x15, [x1, #SDEI_EVENT_INTREGS + 16 * 7]
	stp     x16, x17, [x1, #SDEI_EVENT_INTREGS + 16 * 8]
	stp     x18, x19, [x1, #SDEI_EVENT_INTREGS + 16 * 9]
	stp     x20, x21, [x1, #SDEI_EVENT_INTREGS + 16 * 10]
	stp     x22, x23, [x1, #SDEI_EVENT_INTREGS + 16 * 11]
	stp     x24, x25, [x1, #SDEI_EVENT_INTREGS + 16 * 12]
	stp     x26, x27, [x1, #SDEI_EVENT_INTREGS + 16 * 13]
	stp     x28, x29, [x1, #SDEI_EVENT_INTREGS + 16 * 14]
	mov	x4, sp
	stp     lr, x4, [x1, #SDEI_EVENT_INTREGS + S_LR]

	mov	x19, x1

#if defined(CONFIG_VMAP_STACK) || defined(CONFIG_SHADOW_CALL_STACK)
	ldrb	w4, [x19, #SDEI_EVENT_PRIORITY]
#endif
#ifdef CONFIG_VMAP_STACK
	/*
	 * entry.S may have been using sp as a scratch register, find whether
	 * this is a normal or critical event and switch to the appropriate
	 * stack for this CPU.
	 */
	cbnz	w4, 1f
	ldr_this_cpu dst=x5, sym=sdei_stack_normal_ptr, tmp=x6
	b	2f
1:	ldr_this_cpu dst=x5, sym=sdei_stack_critical_ptr, tmp=x6
2:	mov	x6, #SDEI_STACK_SIZE
	add	x5, x5, x6
	mov	sp, x5
#endif

#ifdef CONFIG_SHADOW_CALL_STACK
	/* Use a separate shadow call stack for normal and critical events */
	cbnz	w4, 3f
	adr_this_cpu dst=scs_sp, sym=sdei_shadow_call_stack_normal, tmp=x6
	b	4f
3:	adr_this_cpu dst=scs_sp, sym=sdei_shadow_call_stack_critical, tmp=x6
4:
#endif
	/*
	 * We may have interrupted userspace, or a guest, or exit-from or
	 * return-to either of these. We can't trust sp_el0, restore it.
	 */
	mrs	x28, sp_el0
	ldr_this_cpu	dst=x0, sym=__entry_task, tmp=x1
	msr	sp_el0, x0

	/* If we interrupted the kernel point to the previous stack/frame. */
	and	x0, x3, #0xc
	mrs	x1, CurrentEL
	cmp	x0, x1
	csel	x29, x29, xzr, eq	// fp, or zero
	csel	x4, x2, xzr, eq		// elr, or zero

	stp	x29, x4, [sp, #-16]!
	mov	x29, sp

	add	x0, x19, #SDEI_EVENT_INTREGS
	mov	x1, x19
	bl	__sdei_handler

	msr	sp_el0, x28
	/* restore regs >x17 that we clobbered */
	mov	x4, x19		// keep x4 for __sdei_asm_exit_trampoline
	ldp	x28, x29, [x4, #SDEI_EVENT_INTREGS + 16 * 14]
	ldp	x18, x19, [x4, #SDEI_EVENT_INTREGS + 16 * 9]
	ldp	lr, x1, [x4, #SDEI_EVENT_INTREGS + S_LR]
	mov	sp, x1
	mov	x1, x0			// address to complete_and_resume
	/* x0 = (x0 <= 1) ? EVENT_COMPLETE:EVENT_COMPLETE_AND_RESUME */
	cmp	x0, #1
	mov_q	x2, SDEI_1_0_FN_SDEI_EVENT_COMPLETE
	mov_q	x3, SDEI_1_0_FN_SDEI_EVENT_COMPLETE_AND_RESUME
	csel	x0, x2, x3, ls

	ldr_l	x2, sdei_exit_mode

alternative_if_not ARM64_UNMAP_KERNEL_AT_EL0
	sdei_handler_exit exit_mode=x2
alternative_else_nop_endif

#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
	tramp_alias	dst=x5, sym=__sdei_asm_exit_trampoline
	br	x5
#endif
SYM_CODE_END(__sdei_asm_handler)
NOKPROBE(__sdei_asm_handler)
#endif /* CONFIG_ARM_SDE_INTERFACE */