/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
 *  PowerPC version
 *    Copyright (C) 1995-1996 Gary Thomas (gdt@linuxppc.org)
 *  Rewritten by Cort Dougan (cort@cs.nmt.edu) for PReP
 *    Copyright (C) 1996 Cort Dougan <cort@cs.nmt.edu>
 *  Adapted for Power Macintosh by Paul Mackerras.
 *  Low-level exception handlers and MMU support
 *  rewritten by Paul Mackerras.
 *    Copyright (C) 1996 Paul Mackerras.
 *  MPC8xx modifications Copyright (C) 1997 Dan Malek (dmalek@jlc.net).
 *
 *  This file contains the system call entry code, context switch
 *  code, and exception/interrupt return code for PowerPC.
 */

#include <linux/errno.h>
powerpc/kernel: Switch to using MAX_ERRNO
Currently on powerpc we have our own #define for the highest (negative)
errno value, called _LAST_ERRNO. This is defined to be 516, for reasons
which are not clear.
The generic code, and x86, use MAX_ERRNO, which is defined to be 4095.
In particular seccomp uses MAX_ERRNO to restrict the value that a
seccomp filter can return.
Currently with the mismatch between _LAST_ERRNO and MAX_ERRNO, a seccomp
tracer wanting to return 600, expecting it to be seen as an error, would
instead find on powerpc that userspace sees a successful syscall with a
return value of 600.
To avoid this inconsistency, switch powerpc to use MAX_ERRNO.
We are somewhat confident that generic syscalls that can return a
non-error value above negative MAX_ERRNO have already been updated to
use force_successful_syscall_return().
I have also checked all the powerpc specific syscalls, and believe that
none of them expect to return a non-error value between -MAX_ERRNO and
-516. So this change should be safe ...
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Kees Cook <keescook@chromium.org>
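(Illustration, not part of this file: the boundary check implied above, using the generic MAX_ERRNO and IS_ERR_VALUE() definitions from <linux/err.h>; the helper name here is made up.)

#include <linux/err.h>		/* MAX_ERRNO, IS_ERR_VALUE() */

/*
 * A raw syscall return value is reported to userspace as an error only
 * if it falls in the last MAX_ERRNO (4095) values, i.e. -4095..-1 when
 * viewed as a signed long. Hypothetical helper, for illustration.
 */
static inline bool syscall_return_is_error(unsigned long r3)
{
	return IS_ERR_VALUE(r3);
}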
#include <linux/err.h>
powerpc/64s: Implement interrupt exit logic in C
Implement the bulk of interrupt return logic in C. The asm return code
must handle a few cases: restoring full GPRs, and emulating stack
store.
The stack store emulation is significantly simplified: rather than
creating a new return frame and switching to that before performing
the store, it uses the PACA to keep a scratch register around to
perform the store.
The asm return code is moved into 64e for now. The new logic has made
allowance for 64e, but I don't have a full environment that works well
to test it, and even booting in emulated qemu is not great for stress
testing. 64e shouldn't be too far off working with this, given a bit
more testing and auditing of the logic.
This is slightly faster on a POWER9 (page fault speed increases about
1.1%), probably due to reduced mtmsrd.
mpe: Includes fixes from Nick for _TIF_EMULATE_STACK_STORE
handling (including the fast_interrupt_return path), to remove
trace_hardirqs_on(), and fixes the interrupt-return part of the
MSR_VSX restore bug caught by tm-unavailable selftest.
mpe: Incorporate fix from Nick:
The return-to-kernel path has to replay any soft-pending interrupts if
it is returning to a context that had interrupts soft-enabled. It has
to do this carefully and avoid plain enabling interrupts if this is an
irq context, which can cause multiple nesting of interrupts on the
stack, and other unexpected issues.
The code which avoided this case got the soft-mask state wrong, and
marked interrupts as enabled before going around again to retry. This
seems to be mostly harmless except when PREEMPT=y, this calls
preempt_schedule_irq with irqs apparently enabled and runs into a BUG
in kernel/sched/core.c
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michal Suchanek <msuchanek@suse.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200225173541.1549955-29-npiggin@gmail.com
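(Illustration, not part of this file: a much-simplified C sketch of the exit-preparation logic described above. system_call_exception() and syscall_exit_prepare() are the real C entry points called from the asm below, but the body and the _sketch name here are assumptions for illustration, not the kernel implementation.)

#include <linux/sched.h>
#include <linux/thread_info.h>
#include <linux/irqflags.h>
#include <asm/ptrace.h>

/* Declared in the arch signal code; shown here for self-containment. */
extern void do_notify_resume(struct pt_regs *regs, unsigned long thread_info_flags);

/* Sketch only: loop handling exit work with IRQs enabled, then return a
 * flag telling the asm caller whether it must restore all GPRs. */
notrace unsigned long syscall_exit_prepare_sketch(struct pt_regs *regs)
{
	unsigned long ti_flags = current_thread_info()->flags;

	while (ti_flags & (_TIF_NEED_RESCHED | _TIF_SIGPENDING)) {
		local_irq_enable();
		if (ti_flags & _TIF_NEED_RESCHED)
			schedule();
		else
			do_notify_resume(regs, ti_flags);
		local_irq_disable();
		ti_flags = current_thread_info()->flags;
	}

	return (ti_flags & _TIF_RESTOREALL) ? 1 : 0;	/* non-zero: full GPR restore */
}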
#include <asm/cache.h>
#include <asm/unistd.h>
#include <asm/processor.h>
#include <asm/page.h>
#include <asm/mmu.h>
#include <asm/thread_info.h>
#include <asm/code-patching-asm.h>
#include <asm/ppc_asm.h>
#include <asm/asm-offsets.h>
#include <asm/cputable.h>
#include <asm/firmware.h>
#include <asm/bug.h>
#include <asm/ptrace.h>
#include <asm/irqflags.h>
powerpc: Rework lazy-interrupt handling
The current implementation of lazy interrupt handling has some
issues that this tries to address.
We don't do the various workarounds we need to do when re-enabling
interrupts in some cases such as when returning from an interrupt
and thus we may still lose or get delayed decrementer or doorbell
interrupts.
The current scheme also makes it much harder to handle the external
"edge" interrupts provided by some BookE processors when using the
EPR facility (External Proxy) and the Freescale Hypervisor.
Additionally, we tend to keep interrupts hard disabled in a number
of cases, such as decrementer interrupts, external interrupts, or
when a masked decrementer interrupt is pending. This is sub-optimal.
This is an attempt at fixing it all in one go by reworking the way
we do the lazy interrupt disabling from the ground up.
The base idea is to replace the "hard_enabled" field with a
"irq_happened" field in which we store a bit mask of what interrupt
occurred while soft-disabled.
When re-enabling, either via arch_local_irq_restore() or when returning
from an interrupt, we can now decide what to do by testing bits in that
field.
We then implement replaying of the missed interrupts either by
re-using the existing exception frame (in exception exit case) or via
the creation of a new one from an assembly trampoline (in the
arch_local_irq_enable case).
This removes the need to play with the decrementer to try to create
fake interrupts, among others.
In addition, this adds a few refinements:
- We no longer hard disable decrementer interrupts that occur
while soft-disabled. We now simply bump the decrementer back to max
(on BookS) or leave it stopped (on BookE) and continue with hard interrupts
enabled, which means that we'll potentially get better sample quality from
performance monitor interrupts.
- Timer, decrementer and doorbell interrupts now hard-enable
shortly after removing the source of the interrupt, which means
they no longer run entirely hard disabled. Again, this will improve
perf sample quality.
- On Book3E 64-bit, we now make the performance monitor interrupt
act as an NMI like Book3S (the necessary C code for that to work
appears to already be present in the FSL perf code, notably calling
nmi_enter instead of irq_enter). (This also fixes a bug where BookE
perfmon interrupts could clobber r14 ... oops)
- We could make "masked" decrementer interrupts act as NMIs when doing
timer-based perf sampling to improve the sample quality.
Signed-off-by-yet: Benjamin Herrenschmidt <benh@kernel.crashing.org>
---
v2:
- Add hard-enable to decrementer, timer and doorbells
- Fix CR clobber in masked irq handling on BookE
- Make embedded perf interrupt act as an NMI
- Add a PACA_HAPPENED_EE_EDGE for use by FSL if they want
to retrigger an interrupt without preventing hard-enable
v3:
- Fix or vs. ori bug on Book3E
- Fix enabling of interrupts for some exceptions on Book3E
v4:
- Fix resend of doorbells on return from interrupt on Book3E
v5:
- Rebased on top of my latest series, which involves some significant
rework of some aspects of the patch.
v6:
- 32-bit compile fix
- more compile fixes with various .config combos
- factor out the asm code to soft-disable interrupts
- remove the C wrapper around preempt_schedule_irq
v7:
- Fix a bug with hard irq state tracking on native power7
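(Illustration, not part of this file: a minimal C sketch of the irq_happened bookkeeping described above, using the real local_paca->irq_happened field and the PACA_IRQ_DEC bit from <asm/hw_irq.h>; the helper names are hypothetical.)

#include <linux/types.h>
#include <asm/hw_irq.h>
#include <asm/paca.h>

/* A masked handler records that a decrementer fired while soft-disabled. */
static inline void note_masked_decrementer(void)
{
	local_paca->irq_happened |= PACA_IRQ_DEC;
}

/* On re-enable, the replay code decides what to do by testing those bits. */
static inline bool decrementer_needs_replay(void)
{
	return local_paca->irq_happened & PACA_IRQ_DEC;
}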
#include <asm/hw_irq.h>
#include <asm/context_tracking.h>
#include <asm/tm.h>
#include <asm/ppc-opcode.h>
#include <asm/barrier.h>
#include <asm/export.h>
#include <asm/asm-compat.h>
#ifdef CONFIG_PPC_BOOK3S
#include <asm/exception-64s.h>
#else
#include <asm/exception-64e.h>
#endif
#include <asm/feature-fixups.h>
#include <asm/kup.h>
/*
 * System calls.
 */
	.section	".toc","aw"
SYS_CALL_TABLE:
	.tc sys_call_table[TC],sys_call_table

#ifdef CONFIG_COMPAT
COMPAT_SYS_CALL_TABLE:
	.tc compat_sys_call_table[TC],compat_sys_call_table
#endif

/* This value is used to mark exception frames on the stack. */
exception_marker:
	.tc	ID_EXC_MARKER[TC],STACK_FRAME_REGS_MARKER

	.section	".text"
	.align 7
#ifdef CONFIG_PPC_BOOK3S
.macro system_call_vectored name trapnr
	.globl system_call_vectored_\name
system_call_vectored_\name:
_ASM_NOKPROBE_SYMBOL(system_call_vectored_\name)
#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
BEGIN_FTR_SECTION
	extrdi.	r10, r12, 1, (63-MSR_TS_T_LG) /* transaction active? */
	bne	.Ltabort_syscall
END_FTR_SECTION_IFSET(CPU_FTR_TM)
#endif
	INTERRUPT_TO_KERNEL
	mr	r10,r1
	ld	r1,PACAKSAVE(r13)
	std	r10,0(r1)
	std	r11,_NIP(r1)
	std	r12,_MSR(r1)
	std	r0,GPR0(r1)
	std	r10,GPR1(r1)
	std	r2,GPR2(r1)
	ld	r2,PACATOC(r13)
	mfcr	r12
	li	r11,0
	/* Can we avoid saving r3-r8 in common case? */
	std	r3,GPR3(r1)
	std	r4,GPR4(r1)
	std	r5,GPR5(r1)
	std	r6,GPR6(r1)
	std	r7,GPR7(r1)
	std	r8,GPR8(r1)
	/* Zero r9-r12, this should only be required when restoring all GPRs */
	std	r11,GPR9(r1)
	std	r11,GPR10(r1)
	std	r11,GPR11(r1)
	std	r11,GPR12(r1)
	std	r9,GPR13(r1)
	SAVE_NVGPRS(r1)
	std	r11,_XER(r1)
	std	r11,_LINK(r1)
	std	r11,_CTR(r1)

	li	r11,\trapnr
	std	r11,_TRAP(r1)
	std	r12,_CCR(r1)
	std	r3,ORIG_GPR3(r1)
	addi	r10,r1,STACK_FRAME_OVERHEAD
	ld	r11,exception_marker@toc(r2)
	std	r11,-16(r10)		/* "regshere" marker */

BEGIN_FTR_SECTION
	HMT_MEDIUM
END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)

	/*
	 * RECONCILE_IRQ_STATE without calling trace_hardirqs_off(), which
	 * would clobber syscall parameters. Also we always enter with IRQs
	 * enabled and nothing pending. system_call_exception() will call
	 * trace_hardirqs_off().
	 *
	 * scv enters with MSR[EE]=1, so don't set PACA_IRQ_HARD_DIS. The
	 * entry vector already sets PACAIRQSOFTMASK to IRQS_ALL_DISABLED.
	 */

	/* Calling convention has r9 = orig r0, r10 = regs */
	mr	r9,r0
	bl	system_call_exception

.Lsyscall_vectored_\name\()_exit:
	addi	r4,r1,STACK_FRAME_OVERHEAD
	li	r5,1 /* scv */
	bl	syscall_exit_prepare

	ld	r2,_CCR(r1)
	ld	r4,_NIP(r1)
	ld	r5,_MSR(r1)

BEGIN_FTR_SECTION
	stdcx.	r0,0,r1			/* to clear the reservation */
END_FTR_SECTION_IFCLR(CPU_FTR_STCX_CHECKS_ADDRESS)

BEGIN_FTR_SECTION
	HMT_MEDIUM_LOW
END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)

	cmpdi	r3,0
	bne	.Lsyscall_vectored_\name\()_restore_regs

	/* rfscv returns with LR->NIA and CTR->MSR */
	mtlr	r4
	mtctr	r5

	/* Could zero these as per ABI, but we may consider a stricter ABI
	 * which preserves these if libc implementations can benefit, so
	 * restore them for now until further measurement is done. */
	ld	r0,GPR0(r1)
	ld	r4,GPR4(r1)
	ld	r5,GPR5(r1)
	ld	r6,GPR6(r1)
	ld	r7,GPR7(r1)
	ld	r8,GPR8(r1)
	/* Zero volatile regs that may contain sensitive kernel data */
	li	r9,0
	li	r10,0
	li	r11,0
	li	r12,0
	mtspr	SPRN_XER,r0

	/*
	 * We don't need to restore AMR on the way back to userspace for KUAP.
	 * The value of AMR only matters while we're in the kernel.
	 */
	mtcr	r2
	ld	r2,GPR2(r1)
	ld	r3,GPR3(r1)
	ld	r13,GPR13(r1)
	ld	r1,GPR1(r1)
	RFSCV_TO_USER
	b	.	/* prevent speculative execution */

.Lsyscall_vectored_\name\()_restore_regs:
	li	r3,0
	mtmsrd	r3,1
	mtspr	SPRN_SRR0,r4
	mtspr	SPRN_SRR1,r5

	ld	r3,_CTR(r1)
	ld	r4,_LINK(r1)
	ld	r5,_XER(r1)

	REST_NVGPRS(r1)
	ld	r0,GPR0(r1)
	mtcr	r2
	mtctr	r3
	mtlr	r4
	mtspr	SPRN_XER,r5
	REST_10GPRS(2, r1)
	REST_2GPRS(12, r1)
	ld	r1,GPR1(r1)
	RFI_TO_USER
.endm

system_call_vectored common 0x3000
/*
 * We instantiate another entry copy for the SIGILL variant, with TRAP=0x7ff0
 * which is tested by system_call_exception when r0 is -1 (as set by vector
 * entry code).
 */
system_call_vectored sigill 0x7ff0

/*
 * Entered via kernel return set up by kernel/sstep.c, must match entry regs
 */
	.globl system_call_vectored_emulate
system_call_vectored_emulate:
_ASM_NOKPROBE_SYMBOL(system_call_vectored_emulate)
	li	r10,IRQS_ALL_DISABLED
	stb	r10,PACAIRQSOFTMASK(r13)
	b	system_call_vectored_common
#endif

	.balign IFETCH_ALIGN_BYTES
	.globl system_call_common
system_call_common:
_ASM_NOKPROBE_SYMBOL(system_call_common)
#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
BEGIN_FTR_SECTION
	extrdi.	r10, r12, 1, (63-MSR_TS_T_LG) /* transaction active? */
	bne	.Ltabort_syscall
END_FTR_SECTION_IFSET(CPU_FTR_TM)
#endif
	mr	r10,r1
	ld	r1,PACAKSAVE(r13)
	std	r10,0(r1)
	std	r11,_NIP(r1)
	std	r12,_MSR(r1)
	std	r0,GPR0(r1)
	std	r10,GPR1(r1)
	std	r2,GPR2(r1)
#ifdef CONFIG_PPC_FSL_BOOK3E
START_BTB_FLUSH_SECTION
	BTB_FLUSH(r10)
END_BTB_FLUSH_SECTION
#endif
	ld	r2,PACATOC(r13)
	mfcr	r12
	li	r11,0
	/* Can we avoid saving r3-r8 in common case? */
	std	r3,GPR3(r1)
	std	r4,GPR4(r1)
	std	r5,GPR5(r1)
	std	r6,GPR6(r1)
	std	r7,GPR7(r1)
	std	r8,GPR8(r1)
	/* Zero r9-r12, this should only be required when restoring all GPRs */
	std	r11,GPR9(r1)
	std	r11,GPR10(r1)
	std	r11,GPR11(r1)
	std	r11,GPR12(r1)
	std	r9,GPR13(r1)
powerpc/64/syscall: Remove non-volatile GPR save optimisation
powerpc has an optimisation where interrupts avoid saving the
non-volatile (or callee saved) registers to the interrupt stack frame
if they are not required.
Two problems with this are that an interrupt does not always know
whether it will need non-volatiles; and if it does need them, they can
only be saved from the entry-scoped asm code (because we don't control
what the C compiler does with these registers).
system calls are the most difficult: some system calls always require
all registers (e.g., fork, to copy regs into the child). Sometimes
registers are only required under certain conditions (e.g., tracing,
signal delivery). These cases require ugly logic in the call
chains (e.g., ppc_fork), and require a lot of logic to be implemented
in asm.
So remove the optimisation for system calls, and always save NVGPRs on
entry. Modern high performance CPUs are not so sensitive, because the
stores are dense in cache and can be hidden by other expensive work in
the syscall path -- the null syscall selftests benchmark on POWER9 is
not slowed (124.40ns before and 123.64ns after, i.e., within the
noise).
Other interrupts retain the NVGPR optimisation for now.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200225173541.1549955-24-npiggin@gmail.com
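(Illustration, not part of this file: a hypothetical, simplified shape of copy_thread-style code showing why a fork-type syscall needs every GPR in the frame; this is not the powerpc implementation.)

#include <linux/sched.h>
#include <asm/processor.h>	/* task_pt_regs() */
#include <asm/ptrace.h>

/* The child's kernel stack receives a copy of the parent's full pt_regs,
 * so the NVGPRs must already have been saved at syscall entry. */
static void copy_user_regs_to_child_sketch(struct task_struct *child,
					   struct pt_regs *parent_regs)
{
	struct pt_regs *childregs = task_pt_regs(child);

	*childregs = *parent_regs;
	childregs->gpr[3] = 0;		/* fork() returns 0 in the child */
}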
	SAVE_NVGPRS(r1)
	std	r11,_XER(r1)
	std	r11,_CTR(r1)
	mflr	r10

	/*
	 * This clears CR0.SO (bit 28), which is the error indication on
	 * return from this system call.
	 */
	rldimi	r12,r11,28,(63-28)
	li	r11,0xc00
	std	r10,_LINK(r1)
	std	r11,_TRAP(r1)
	std	r12,_CCR(r1)
	std	r3,ORIG_GPR3(r1)
	addi	r10,r1,STACK_FRAME_OVERHEAD
	ld	r11,exception_marker@toc(r2)
	std	r11,-16(r10)		/* "regshere" marker */

	/*
	 * RECONCILE_IRQ_STATE without calling trace_hardirqs_off(), which
	 * would clobber syscall parameters. Also we always enter with IRQs
	 * enabled and nothing pending. system_call_exception() will call
	 * trace_hardirqs_off().
	 */
	li	r11,IRQS_ALL_DISABLED
	li	r12,PACA_IRQ_HARD_DIS
	stb	r11,PACAIRQSOFTMASK(r13)
	stb	r12,PACAIRQHAPPENED(r13)

	/* Calling convention has r9 = orig r0, r10 = regs */
	mr	r9,r0
	bl	system_call_exception

.Lsyscall_exit:
	addi	r4,r1,STACK_FRAME_OVERHEAD
	li	r5,0 /* !scv */
	bl	syscall_exit_prepare

	ld	r2,_CCR(r1)
	ld	r4,_NIP(r1)
	ld	r5,_MSR(r1)
	ld	r6,_LINK(r1)

BEGIN_FTR_SECTION
	stdcx.	r0,0,r1			/* to clear the reservation */
END_FTR_SECTION_IFCLR(CPU_FTR_STCX_CHECKS_ADDRESS)

	mtspr	SPRN_SRR0,r4
	mtspr	SPRN_SRR1,r5
	mtlr	r6

	cmpdi	r3,0
	bne	.Lsyscall_restore_regs
	/* Zero volatile regs that may contain sensitive kernel data */
	li	r0,0
	li	r4,0
	li	r5,0
	li	r6,0
	li	r7,0
	li	r8,0
	li	r9,0
	li	r10,0
	li	r11,0
	li	r12,0
	mtctr	r0
	mtspr	SPRN_XER,r0
.Lsyscall_restore_regs_cont:

BEGIN_FTR_SECTION
	HMT_MEDIUM_LOW
END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)

	/*
	 * We don't need to restore AMR on the way back to userspace for KUAP.
	 * The value of AMR only matters while we're in the kernel.
	 */
	mtcr	r2
	ld	r2,GPR2(r1)
	ld	r3,GPR3(r1)
	ld	r13,GPR13(r1)
	ld	r1,GPR1(r1)
	RFI_TO_USER
	b	.	/* prevent speculative execution */

.Lsyscall_restore_regs:
	ld	r3,_CTR(r1)
	ld	r4,_XER(r1)
	REST_NVGPRS(r1)
	mtctr	r3
	mtspr	SPRN_XER,r4
	ld	r0,GPR0(r1)
	REST_8GPRS(4, r1)
	ld	r12,GPR12(r1)
	b	.Lsyscall_restore_regs_cont

#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
.Ltabort_syscall:
	/* Firstly we need to enable TM in the kernel */
	mfmsr	r10
	li	r9, 1
	rldimi	r10, r9, MSR_TM_LG, 63-MSR_TM_LG
	mtmsrd	r10, 0

	/* tabort, this dooms the transaction, nothing else */
	li	r9, (TM_CAUSE_SYSCALL|TM_CAUSE_PERSISTENT)
	TABORT(R9)

	/*
	 * Return directly to userspace. We have corrupted user register state,
	 * but userspace will never see that register state. Execution will
	 * resume after the tbegin of the aborted transaction with the
	 * checkpointed register state.
	 */
	li	r9, MSR_RI
	andc	r10, r10, r9
	mtmsrd	r10, 1
	mtspr	SPRN_SRR0, r11
	mtspr	SPRN_SRR1, r12
	RFI_TO_USER
	b	.	/* prevent speculative execution */
#endif

#ifdef CONFIG_PPC_BOOK3S
_GLOBAL(ret_from_fork_scv)
	bl	schedule_tail
	REST_NVGPRS(r1)
	li	r3,0	/* fork() return value */
	b	.Lsyscall_vectored_common_exit
#endif

_GLOBAL(ret_from_fork)
	bl	schedule_tail
	REST_NVGPRS(r1)
	li	r3,0	/* fork() return value */
	b	.Lsyscall_exit

_GLOBAL(ret_from_kernel_thread)
	bl	schedule_tail
	REST_NVGPRS(r1)
	mtctr	r14
	mr	r3,r15
#ifdef PPC64_ELF_ABI_v2
	mr	r12,r14
#endif
	bctrl
	li	r3,0
	b	.Lsyscall_exit
#ifdef CONFIG_PPC_BOOK3E
/* Save non-volatile GPRs, if not already saved. */
_GLOBAL(save_nvgprs)
	ld	r11,_TRAP(r1)
	andi.	r0,r11,1
	beqlr-
	SAVE_NVGPRS(r1)
	clrrdi	r0,r11,1
	std	r0,_TRAP(r1)
	blr
_ASM_NOKPROBE_SYMBOL(save_nvgprs);
#endif
#ifdef CONFIG_PPC_BOOK3S_64

#define FLUSH_COUNT_CACHE	\
1:	nop;			\
	patch_site 1b, patch__call_flush_branch_caches1; \
1:	nop;			\
	patch_site 1b, patch__call_flush_branch_caches2; \
1:	nop;			\
	patch_site 1b, patch__call_flush_branch_caches3

.macro nops number
	.rept \number
	nop
	.endr
.endm

.balign 32
.global flush_branch_caches
flush_branch_caches:
	/* Save LR into r9 */
	mflr	r9

	// Flush the link stack
	.rept 64
	bl	.+4
	.endr
	b	1f
	nops	6

	.balign 32
	/* Restore LR */
1:	mtlr	r9

	// If we're just flushing the link stack, return here
3:	nop
	patch_site 3b patch__flush_link_stack_return

	li	r9,0x7fff
	mtctr	r9

	PPC_BCCTR_FLUSH

2:	nop
	patch_site 2b patch__flush_count_cache_return

	nops	3

	.rept 278
	.balign 32
	PPC_BCCTR_FLUSH
	nops	7
	.endr

	blr
#else
#define FLUSH_COUNT_CACHE
#endif /* CONFIG_PPC_BOOK3S_64 */
/*
 * This routine switches between two different tasks.  The process
 * state of one is saved on its kernel stack.  Then the state
 * of the other is restored from its kernel stack.  The memory
 * management hardware is updated to the second process's state.
 * Finally, we can return to the second process, via interrupt_return.
 * On entry, r3 points to the THREAD for the current task, r4
 * points to the THREAD for the new task.
 *
 * Note: there are two ways to get to the "going out" portion
 * of this code; either by coming in via the entry (_switch)
 * or via "fork" which must set up an environment equivalent
 * to the "_switch" path.  If you change this you'll have to change
 * the fork code also.
 *
 * The code which creates the new task context is in 'copy_thread'
 * in arch/powerpc/kernel/process.c
 */
	.align	7
_GLOBAL(_switch)
	mflr	r0
	std	r0,16(r1)
	stdu	r1,-SWITCH_FRAME_SIZE(r1)
	/* r3-r13 are caller saved -- Cort */
	SAVE_NVGPRS(r1)
	std	r0,_NIP(r1)	/* Return to switch caller */
	mfcr	r23
	std	r23,_CCR(r1)
	std	r1,KSP(r3)	/* Set old stack pointer */

	kuap_check_amr r9, r10

	FLUSH_COUNT_CACHE	/* Clobbers r9, ctr */
	/*
	 * On SMP kernels, care must be taken because a task may be
	 * scheduled off CPUx and onto CPUy. Memory ordering must be
	 * considered.
	 *
	 * Cacheable stores on CPUx will be visible when the task is
	 * scheduled on CPUy by virtue of the core scheduler barriers
	 * (see "Notes on Program-Order guarantees on SMP systems." in
	 * kernel/sched/core.c).
	 *
	 * Uncacheable stores in the case of involuntary preemption must
	 * be taken care of. The smp_mb__after_spinlock() in __schedule()
	 * is implemented as hwsync on powerpc, which orders MMIO too. So
	 * long as there is an hwsync in the context switch path, it will
	 * be executed on the source CPU after the task has performed
	 * all MMIO ops on that CPU, and on the destination CPU before the
	 * task performs any MMIO ops there.
	 */

	/*
	 * The kernel context switch path must contain a spin_lock,
	 * which contains larx/stcx, which will clear any reservation
	 * of the task being switched.
	 */
#ifdef CONFIG_PPC_BOOK3S
/* Cancel all explicit user streams as they will have no use after context
 * switch and will stop the HW from creating streams itself
 */
	DCBT_BOOK3S_STOP_ALL_STREAM_IDS(r6)
#endif

	addi	r6,r4,-THREAD	/* Convert THREAD to 'current' */
	std	r6,PACACURRENT(r13)	/* Set new 'current' */

#if defined(CONFIG_STACKPROTECTOR)
	ld	r6, TASK_CANARY(r6)
	std	r6, PACA_CANARY(r13)
#endif

	ld	r8,KSP(r4)	/* new stack pointer */
#ifdef CONFIG_PPC_BOOK3S_64
BEGIN_MMU_FTR_SECTION
	b	2f
END_MMU_FTR_SECTION_IFSET(MMU_FTR_TYPE_RADIX)
BEGIN_FTR_SECTION
	clrrdi	r6,r8,28	/* get its ESID */
	clrrdi	r9,r1,28	/* get current sp ESID */
FTR_SECTION_ELSE
	clrrdi	r6,r8,40	/* get its 1T ESID */
	clrrdi	r9,r1,40	/* get current sp 1T ESID */
ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_1T_SEGMENT)
	clrldi.	r0,r6,2		/* is new ESID c00000000? */
	cmpd	cr1,r6,r9	/* or is new ESID the same as current ESID? */
	cror	eq,4*cr1+eq,eq
	beq	2f		/* if yes, don't slbie it */

	/* Bolt in the new stack SLB entry */
	ld	r7,KSP_VSID(r4)	/* Get new stack's VSID */
	oris	r0,r6,(SLB_ESID_V)@h
	ori	r0,r0,(SLB_NUM_BOLTED-1)@l
BEGIN_FTR_SECTION
	li	r9,MMU_SEGSIZE_1T	/* insert B field */
	oris	r6,r6,(MMU_SEGSIZE_1T << SLBIE_SSIZE_SHIFT)@h
	rldimi	r7,r9,SLB_VSID_SSIZE_SHIFT,0
END_MMU_FTR_SECTION_IFSET(MMU_FTR_1T_SEGMENT)

	/* Update the last bolted SLB.  No write barriers are needed
	 * here, provided we only update the current CPU's SLB shadow
	 * buffer.
	 */
	ld	r9,PACA_SLBSHADOWPTR(r13)
	li	r12,0
	std	r12,SLBSHADOW_STACKESID(r9)	/* Clear ESID */
	li	r12,SLBSHADOW_STACKVSID
	STDX_BE	r7,r12,r9			/* Save VSID */
	li	r12,SLBSHADOW_STACKESID
	STDX_BE	r0,r12,r9			/* Save ESID */

	/* No need to check for MMU_FTR_NO_SLBIE_B here, since when
	 * we have 1TB segments, the only CPUs known to have the errata
	 * only support less than 1TB of system memory and we'll never
	 * actually hit this code path.
	 */
powerpc/mm/hash: Add missing isync prior to kernel stack SLB switch
Currently we do not have an isync, or any other context synchronizing
instruction prior to the slbie/slbmte in _switch() that updates the
SLB entry for the kernel stack.
However that is not correct as outlined in the ISA.
From Power ISA Version 3.0B, Book III, Chapter 11, page 1133:
"Changing the contents of ... the contents of SLB entries ... can
have the side effect of altering the context in which data
addresses and instruction addresses are interpreted, and in which
instructions are executed and data accesses are performed.
...
These side effects need not occur in program order, and therefore
may require explicit synchronization by software.
...
The synchronizing instruction before the context-altering
instruction ensures that all instructions up to and including that
synchronizing instruction are fetched and executed in the context
that existed before the alteration."
And page 1136:
"For data accesses, the context synchronizing instruction before the
slbie, slbieg, slbia, slbmte, tlbie, or tlbiel instruction ensures
that all preceding instructions that access data storage have
completed to a point at which they have reported all exceptions
they will cause."
We're not aware of any bugs caused by this, but it should be fixed
regardless.
Add the missing isync when updating kernel stack SLB entry.
Cc: stable@vger.kernel.org
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
[mpe: Flesh out change log with more ISA text & explanation]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
	isync
	slbie	r6
BEGIN_FTR_SECTION
	slbie	r6		/* Workaround POWER5 < DD2.1 issue */
END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S)
	slbmte	r7,r0
	isync
2:
#endif /* CONFIG_PPC_BOOK3S_64 */

	clrrdi	r7, r8, THREAD_SHIFT	/* base of new stack */
	/* Note: this uses SWITCH_FRAME_SIZE rather than INT_FRAME_SIZE
	   because we don't need to leave the 288-byte ABI gap at the
	   top of the kernel stack. */
	addi	r7,r7,THREAD_SIZE-SWITCH_FRAME_SIZE

	/*
	 * PMU interrupts in radix may come in here. They will use r1, not
	 * PACAKSAVE, so this stack switch will not cause a problem. They
	 * will store to the process stack, which may then be migrated to
	 * another CPU. However the rq lock release on this CPU paired with
	 * the rq lock acquire on the new CPU before the stack becomes
	 * active on the new CPU, will order those stores.
	 */
	mr	r1,r8		/* start using new stack pointer */
	std	r7,PACAKSAVE(r13)

	ld	r6,_CCR(r1)
	mtcrf	0xFF,r6

	/* r3-r13 are destroyed -- Cort */
	REST_NVGPRS(r1)

	/* convert old thread to its task_struct for return value */
	addi	r3,r3,-THREAD
	ld	r7,_NIP(r1)	/* Return to _switch caller in new task */
	mtlr	r7
	addi	r1,r1,SWITCH_FRAME_SIZE
	blr
#ifdef CONFIG_PPC_BOOK3S

/*
 * If MSR EE/RI was never enabled, IRQs not reconciled, NVGPRs not
 * touched, no exit work created, then this can be used.
 */
	.balign	IFETCH_ALIGN_BYTES
	.globl fast_interrupt_return
fast_interrupt_return:
_ASM_NOKPROBE_SYMBOL(fast_interrupt_return)
	kuap_check_amr r3, r4

	ld	r5,_MSR(r1)
	andi.	r0,r5,MSR_PR
	bne	.Lfast_user_interrupt_return_amr
	kuap_kernel_restore r3, r4
	andi.	r0,r5,MSR_RI
	li	r3,0 /* 0 return value, no EMULATE_STACK_STORE */
	bne+	.Lfast_kernel_interrupt_return
	addi	r3,r1,STACK_FRAME_OVERHEAD
	bl	unrecoverable_exception
	b	.	/* should not get here */
.balign IFETCH_ALIGN_BYTES
.globl interrupt_return
interrupt_return:
_ASM_NOKPROBE_SYMBOL(interrupt_return)
	ld	r4,_MSR(r1)
	andi.	r0,r4,MSR_PR
	beq	.Lkernel_interrupt_return
ppc64: fix missing to check all bits of _TIF_USER_WORK_MASK in preempt
In entry_64.S version of ret_from_except_lite, you'll notice that
in the !preempt case, after we've checked MSR_PR we test for any
TIF flag in _TIF_USER_WORK_MASK to decide whether to go to do_work
or not. However, in the preempt case, we do a convoluted trick to
test SIGPENDING only if PR was set and always test NEED_RESCHED ...
but we forget to test any other bit of _TIF_USER_WORK_MASK !!! So
that means that with preempt, we completely fail to test for things
like single step, syscall tracing, etc...
This should be fixed with the following flow:
- Test PR. If it is not set, go to resume_kernel, else continue.
- In the resume_kernel case, do the original do_work.
- Otherwise, always test _TIF_USER_WORK_MASK to decide whether to do
the original user_work, else restore directly.
Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-06-06 20:56:43 +00:00
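As a quick illustration of the mask point above, here is a small, self-contained C sketch (the flag values are invented for the example and are not the kernel's real bit layout): checking only SIGPENDING and NEED_RESCHED misses other user-work bits such as single step or syscall tracing, while checking the whole mask does not.

#include <stdio.h>

/* Invented flag bits, for illustration only (not the kernel's layout). */
#define TIF_SIGPENDING		(1u << 0)
#define TIF_NEED_RESCHED	(1u << 1)
#define TIF_SINGLESTEP		(1u << 2)
#define TIF_SYSCALL_TRACE	(1u << 3)
#define _TIF_USER_WORK_MASK	(TIF_SIGPENDING | TIF_NEED_RESCHED | \
				 TIF_SINGLESTEP | TIF_SYSCALL_TRACE)

static void check(unsigned int ti_flags)
{
	/* The buggy path described above only looks at two bits. */
	int buggy = !!(ti_flags & (TIF_SIGPENDING | TIF_NEED_RESCHED));
	/* The fixed path looks at every user-work bit. */
	int fixed = !!(ti_flags & _TIF_USER_WORK_MASK);

	printf("flags=%#x buggy-path-does-work=%d fixed-path-does-work=%d\n",
	       ti_flags, buggy, fixed);
}

int main(void)
{
	check(TIF_NEED_RESCHED);	/* both paths notice this */
	check(TIF_SINGLESTEP);		/* only the fixed path notices single step */
	check(TIF_SYSCALL_TRACE);	/* ... or syscall tracing */
	return 0;
}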
	addi	r3,r1,STACK_FRAME_OVERHEAD
	bl	interrupt_exit_user_prepare
	cmpdi	r3,0
	bne-	.Lrestore_nvgprs
2020-11-27 10:14:12 +05:30
.Lfast_user_interrupt_return_amr:
2020-11-27 10:14:24 +05:30
	kuap_user_restore r3, r4
.Lfast_user_interrupt_return:
	ld	r11,_NIP(r1)
	ld	r12,_MSR(r1)
BEGIN_FTR_SECTION
	ld	r10,_PPR(r1)
	mtspr	SPRN_PPR,r10
END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
	mtspr	SPRN_SRR0,r11
	mtspr	SPRN_SRR1,r12
2012-09-16 23:54:30 +00:00
BEGIN_FTR_SECTION
	stdcx.	r0,0,r1		/* to clear the reservation */
FTR_SECTION_ELSE
	ldarx	r0,0,r1
ALT_FTR_SECTION_END_IFCLR(CPU_FTR_STCX_CHECKS_ADDRESS)
2012-09-16 23:54:30 +00:00
	ld	r3,_CCR(r1)
	ld	r4,_LINK(r1)
	ld	r5,_CTR(r1)
	ld	r6,_XER(r1)
	li	r0,0
2012-09-16 23:54:30 +00:00
	REST_4GPRS(7, r1)
	REST_2GPRS(11, r1)
	REST_GPR(13, r1)
	mtcr	r3
	mtlr	r4
	mtctr	r5
	mtspr	SPRN_XER,r6
2013-01-06 00:49:34 +00:00
	REST_4GPRS(2, r1)
	REST_GPR(6, r1)
	REST_GPR(0, r1)
	REST_GPR(1, r1)
	RFI_TO_USER
	b	.	/* prevent speculative execution */
powerpc: Rework lazy-interrupt handling
The current implementation of lazy interrupts handling has some
issues that this tries to address.
We don't do the various workarounds we need to do when re-enabling
interrupts in some cases such as when returning from an interrupt
and thus we may still lose or get delayed decrementer or doorbell
interrupts.
The current scheme also makes it much harder to handle the external
"edge" interrupts provided by some BookE processors when using the
EPR facility (External Proxy) and the Freescale Hypervisor.
Additionally, we tend to keep interrupts hard disabled in a number
of cases, such as decrementer interrupts, external interrupts, or
when a masked decrementer interrupt is pending. This is sub-optimal.
This is an attempt at fixing it all in one go by reworking the way
we do the lazy interrupt disabling from the ground up.
The base idea is to replace the "hard_enabled" field with a
"irq_happened" field in which we store a bit mask of what interrupt
occurred while soft-disabled.
When re-enabling, either via arch_local_irq_restore() or when returning
from an interrupt, we can now decide what to do by testing bits in that
field.
We then implement replaying of the missed interrupts either by
re-using the existing exception frame (in exception exit case) or via
the creation of a new one from an assembly trampoline (in the
arch_local_irq_enable case).
This removes the need to play with the decrementer to try to create
fake interrupts, among others.
In addition, this adds a few refinements:
- We no longer hard disable decrementer interrupts that occur
while soft-disabled. We now simply bump the decrementer back to max
(on BookS) or leave it stopped (on BookE) and continue with hard interrupts
enabled, which means that we'll potentially get better sample quality from
performance monitor interrupts.
- Timer, decrementer and doorbell interrupts now hard-enable
shortly after removing the source of the interrupt, which means
they no longer run entirely hard disabled. Again, this will improve
perf sample quality.
- On Book3E 64-bit, we now make the performance monitor interrupt
act as an NMI like Book3S (the necessary C code for that to work
appear to already be present in the FSL perf code, notably calling
nmi_enter instead of irq_enter). (This also fixes a bug where BookE
perfmon interrupts could clobber r14 ... oops)
- We could make "masked" decrementer interrupts act as NMIs when doing
timer-based perf sampling to improve the sample quality.
Signed-off-by-yet: Benjamin Herrenschmidt <benh@kernel.crashing.org>
---
v2:
- Add hard-enable to decrementer, timer and doorbells
- Fix CR clobber in masked irq handling on BookE
- Make embedded perf interrupt act as an NMI
- Add a PACA_HAPPENED_EE_EDGE for use by FSL if they want
to retrigger an interrupt without preventing hard-enable
v3:
- Fix or vs. ori bug on Book3E
- Fix enabling of interrupts for some exceptions on Book3E
v4:
- Fix resend of doorbells on return from interrupt on Book3E
v5:
- Rebased on top of my latest series, which involves some significant
rework of some aspects of the patch.
v6:
- 32-bit compile fix
- more compile fixes with various .config combos
- factor out the asm code to soft-disable interrupts
- remove the C wrapper around preempt_schedule_irq
v7:
- Fix a bug with hard irq state tracking on native power7
2012-03-06 18:27:59 +11:00
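A self-contained C sketch of the irq_happened bookkeeping described above, using stand-in names and bit values rather than the kernel's: a masked interrupt only records that it happened, and re-enabling inspects the mask and replays what was recorded.

#include <stdio.h>

/* Stand-in bits for interrupts remembered while soft-disabled. */
#define IRQ_HAPPENED_EE		(1u << 0)	/* external interrupt */
#define IRQ_HAPPENED_DEC	(1u << 1)	/* decrementer */
#define IRQ_HAPPENED_DBELL	(1u << 2)	/* doorbell */

static unsigned int irq_happened;

/* A masked interrupt only records that it happened and returns. */
static void masked_interrupt(unsigned int bit)
{
	irq_happened |= bit;
}

static void replay(unsigned int bit, const char *what)
{
	if (irq_happened & bit) {
		irq_happened &= ~bit;
		printf("replaying %s interrupt\n", what);
	}
}

/* Sketch of re-enabling: look at what was recorded and replay it. */
static void irq_restore_sketch(void)
{
	replay(IRQ_HAPPENED_EE, "external");
	replay(IRQ_HAPPENED_DEC, "decrementer");
	replay(IRQ_HAPPENED_DBELL, "doorbell");
}

int main(void)
{
	masked_interrupt(IRQ_HAPPENED_DEC);	/* arrives while soft-disabled */
	masked_interrupt(IRQ_HAPPENED_DBELL);
	irq_restore_sketch();			/* soft-enable: replay both */
	printf("still pending: %#x\n", irq_happened);
	return 0;
}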
.Lrestore_nvgprs:
	REST_NVGPRS(r1)
	b	.Lfast_user_interrupt_return
2005-10-10 22:36:14 +10:00
.balign IFETCH_ALIGN_BYTES
.Lkernel_interrupt_return:
	addi	r3,r1,STACK_FRAME_OVERHEAD
	bl	interrupt_exit_kernel_prepare
2007-02-07 13:13:26 +11:00
.Lfast_kernel_interrupt_return:
	cmpdi	cr1,r3,0
	ld	r11,_NIP(r1)
	ld	r12,_MSR(r1)
	mtspr	SPRN_SRR0,r11
	mtspr	SPRN_SRR1,r12
BEGIN_FTR_SECTION
	stdcx.	r0,0,r1		/* to clear the reservation */
FTR_SECTION_ELSE
	ldarx	r0,0,r1
ALT_FTR_SECTION_END_IFCLR(CPU_FTR_STCX_CHECKS_ADDRESS)
	ld	r3,_LINK(r1)
2007-02-07 13:13:26 +11:00
	ld	r4,_CTR(r1)
	ld	r5,_XER(r1)
	ld	r6,_CCR(r1)
	li	r0,0
2005-10-10 22:36:14 +10:00
	REST_4GPRS(7, r1)
	REST_2GPRS(11, r1)
2018-01-10 03:07:15 +11:00
	mtlr	r3
	mtctr	r4
	mtspr	SPRN_XER,r5
2005-10-10 22:36:14 +10:00
powerpc/64s: Clear on-stack exception marker upon exception return
The ppc64 specific implementation of the reliable stacktracer,
save_stack_trace_tsk_reliable(), bails out and reports an "unreliable
trace" whenever it finds an exception frame on the stack. Stack frames
are classified as exception frames if the STACK_FRAME_REGS_MARKER
magic, as written by exception prologues, is found at a particular
location.
However, as observed by Joe Lawrence, it is possible in practice that
non-exception stack frames can alias with prior exception frames and
thus, that the reliable stacktracer can find a stale
STACK_FRAME_REGS_MARKER on the stack. It in turn falsely reports an
unreliable stacktrace and blocks any live patching transition to
finish. Said condition lasts until the stack frame is
overwritten/initialized by function call or other means.
In principle, we could mitigate this by making the exception frame
classification condition in save_stack_trace_tsk_reliable() stronger:
in addition to testing for STACK_FRAME_REGS_MARKER, we could also take
into account that for all exceptions executing on the kernel stack
- their stack frames's backlink pointers always match what is saved
in their pt_regs instance's ->gpr[1] slot and that
- their exception frame size equals STACK_INT_FRAME_SIZE, a value
uncommonly large for non-exception frames.
However, while these are currently true, relying on them would make
the reliable stacktrace implementation more sensitive towards future
changes in the exception entry code. Note that false negatives, i.e.
not detecting exception frames, would silently break the live patching
consistency model.
Furthermore, certain other places (diagnostic stacktraces, perf, xmon)
rely on STACK_FRAME_REGS_MARKER as well.
Make the exception exit code clear the on-stack
STACK_FRAME_REGS_MARKER for those exceptions running on the "normal"
kernel stack and returning to kernelspace: because the topmost frame
is ignored by the reliable stack tracer anyway, returns to userspace
don't need to take care of clearing the marker.
Furthermore, as I don't have the ability to test this on Book 3E or 32
bits, limit the change to Book 3S and 64 bits.
Fixes: df78d3f61480 ("powerpc/livepatch: Implement reliable stack tracing for the consistency model")
Reported-by: Joe Lawrence <joe.lawrence@redhat.com>
Signed-off-by: Nicolai Stange <nstange@suse.de>
Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2019-01-22 10:57:21 -05:00
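A self-contained C sketch of why the stale marker matters (the slot index here is invented; the 64-bit constant is the ASCII string "regshere" that STACK_FRAME_REGS_MARKER encodes): the reliable unwinder classifies a frame as an exception frame purely by finding that value at a fixed offset, so a leftover copy looks like a real exception frame until it is cleared, which is what the code below now does on the return-to-kernel path.

#include <stdio.h>
#include <stdint.h>

#define STACK_FRAME_REGS_MARKER	0x7265677368657265ULL	/* ASCII "regshere" */
#define FRAME_MARKER_SLOT	2			/* invented slot index */

/* The classification is purely "is the magic word in the marker slot?". */
static int frame_looks_like_exception(const uint64_t *frame)
{
	return frame[FRAME_MARKER_SLOT] == STACK_FRAME_REGS_MARKER;
}

int main(void)
{
	uint64_t frame[4] = { 0 };

	frame[FRAME_MARKER_SLOT] = STACK_FRAME_REGS_MARKER;	/* stale leftover */
	printf("stale marker present: exception frame? %d\n",
	       frame_looks_like_exception(frame));

	frame[FRAME_MARKER_SLOT] = 0;	/* what the exit path below now does */
	printf("marker cleared:       exception frame? %d\n",
	       frame_looks_like_exception(frame));
	return 0;
}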
	/*
	 * Leaving a stale exception_marker on the stack can confuse
	 * the reliable stack unwinder later on. Clear it.
	 */
	std	r0,STACK_FRAME_OVERHEAD-16(r1)
2019-04-18 16:51:24 +10:00
	REST_4GPRS(2, r1)
2019-04-18 16:51:24 +10:00
	bne-	cr1,1f		/* emulate stack store */
	mtcr	r6
	REST_GPR(6, r1)
	REST_GPR(0, r1)
	REST_GPR(1, r1)
2018-01-10 03:07:15 +11:00
	RFI_TO_KERNEL
2005-10-10 22:36:14 +10:00
	b	.	/* prevent speculative execution */
1:	/*
	 * Emulate stack store with update. New r1 value was already calculated
	 * and updated in our interrupt regs by emulate_loadstore, but we can't
	 * store the previous value of r1 to the stack before re-loading our
	 * registers from it, otherwise they could be clobbered.  Use
	 * PACA_EXGEN as temporary storage to hold the store data, as
	 * interrupts are disabled here so it won't be clobbered.
2012-05-10 16:12:38 +00:00
	 */
2020-02-26 03:35:37 +10:00
	mtcr	r6
	std	r9,PACA_EXGEN+0(r13)
	addi	r9,r1,INT_FRAME_SIZE	/* get original r1 */
	REST_GPR(6, r1)
	REST_GPR(0, r1)
	REST_GPR(1, r1)
	std	r9,0(r1)	/* perform store component of stdu */
	ld	r9,PACA_EXGEN+0(r13)
2017-06-29 23:19:19 +05:30
2020-02-26 03:35:37 +10:00
	RFI_TO_KERNEL
	b	.	/* prevent speculative execution */
#endif /* CONFIG_PPC_BOOK3S */
2005-10-10 22:36:14 +10:00
#ifdef CONFIG_PPC_RTAS
/*
 * On CHRP, the Run-Time Abstraction Services (RTAS) have to be
 * called with the MMU off.
 *
 * In addition, we need to be in 32b mode, at least for now.
 *
 * Note: r3 is an input parameter to rtas, so don't trash it...
 */
_GLOBAL(enter_rtas)
	mflr	r0
	std	r0,16(r1)
2018-10-12 13:14:06 +10:30
	stdu	r1,-SWITCH_FRAME_SIZE(r1) /* Save SP and create stack space. */
2005-10-10 22:36:14 +10:00
	/* Because RTAS is running in 32b mode, it clobbers the high order half
	 * of all registers that it saves.  We therefore save those registers
	 * RTAS might touch to the stack.  (r0, r3-r13 are caller saved)
	 */
	SAVE_GPR(2, r1)			/* Save the TOC */
	SAVE_GPR(13, r1)		/* Save paca */
2019-12-11 13:35:52 +11:00
	SAVE_NVGPRS(r1)			/* Save the non-volatiles */
2005-10-10 22:36:14 +10:00
	mfcr	r4
	std	r4,_CCR(r1)
	mfctr	r5
	std	r5,_CTR(r1)
	mfspr	r6,SPRN_XER
	std	r6,_XER(r1)
	mfdar	r7
	std	r7,_DAR(r1)
	mfdsisr	r8
	std	r8,_DSISR(r1)
2006-03-27 15:20:00 -08:00
	/* Temporary workaround to clear CR until RTAS can be modified to
	 * ignore all bits.
	 */
	li	r0,0
	mtcr	r0
powerpc/64: Change soft_enabled from flag to bitmask
"paca->soft_enabled" is used as a flag to mask some of the interrupts.
The currently supported flag values and their details:
soft_enabled    MSR[EE]
0               0       Disabled (PMI and HMI not masked)
1               1       Enabled
"paca->soft_enabled" is initialized to 1 to mark the interrupts as
enabled. arch_local_irq_disable() will toggle the value when interrupts
need to be disabled. At this point the interrupts are not actually
disabled; instead, the interrupt vector has code to check for the flag
and mask the interrupt when it occurs. By "mask it", it means updating
paca->irq_happened and returning. arch_local_irq_restore() is called to
re-enable interrupts, which checks and replays interrupts if any
occurred.
Now, as mentioned, the current logic does not mask "performance
monitoring interrupts", and PMIs are implemented as NMIs. But this
patchset depends on local_irq_* for a successful local_* update,
meaning all possible interrupts must be masked during the local_*
update and replayed after the update.
So the idea here is to reverse the "paca->soft_enabled" logic. New
values and details:
soft_enabled    MSR[EE]
1               0       Disabled (PMI and HMI not masked)
0               1       Enabled
The reason for this change is to create the foundation for a third mask
value "0x2" for "soft_enabled", adding support for masking PMIs. When
->soft_enabled is set to "3", PMIs are masked; when it is set to "1",
PMIs are not masked. This patch also extends soft_enabled to act as an
interrupt-disable mask.
The existing flags are renamed from IRQ_[EN|DIS]ABLED to
IRQS_ENABLED and IRQS_DISABLED.
The patch also fixes the ptrace path to force userspace to always see a
softe value of 1: even though userspace has no business knowing about
softe, it is part of pt_regs. Likewise in the signal context.
Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-12-20 09:25:49 +05:30
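A minimal C sketch of the mask values described in the message above, for reference. The IRQS_* names and values follow the message; soft_mask_set() is a hypothetical stand-in for the real paca accessors, so treat the block as illustrative rather than as the kernel's implementation.

#define IRQS_ENABLED		0x00	/* soft-mask clear: interrupts may run */
#define IRQS_DISABLED		0x01	/* local_irq_disable() class is masked */
#define IRQS_PMI_DISABLED	0x02	/* performance monitoring interrupts masked */
#define IRQS_ALL_DISABLED	(IRQS_DISABLED | IRQS_PMI_DISABLED)	/* == 0x3 */

static unsigned char soft_mask = IRQS_ALL_DISABLED;	/* illustrative boot-time default */

/* Return the previous mask so a caller can restore it afterwards. */
static inline unsigned char soft_mask_set(unsigned char new_mask)
{
	unsigned char old = soft_mask;

	soft_mask = new_mask;		/* the real code updates the field in the paca */
	return old;
}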
#ifdef CONFIG_BUG
2005-10-10 22:36:14 +10:00
	/* There is no way it is acceptable to get here with interrupts enabled,
	 * check it with the asm equivalent of WARN_ON
	 */
2017-12-20 09:25:50 +05:30
	lbz	r0,PACAIRQSOFTMASK(r13)
2017-12-20 09:25:49 +05:30
1:	tdeqi	r0,IRQS_ENABLED
2007-01-01 18:45:34 +00:00
	EMIT_BUG_ENTRY 1b,__FILE__,__LINE__,BUGFLAG_WARNING
#endif
2017-12-20 09:25:49 +05:30
[POWERPC] Lazy interrupt disabling for 64-bit machines
This implements a lazy strategy for disabling interrupts. This means
that local_irq_disable() et al. just clear the 'interrupts are
enabled' flag in the paca. If an interrupt comes along, the interrupt
entry code notices that interrupts are supposed to be disabled, and
clears the EE bit in SRR1, clears the 'interrupts are hard-enabled'
flag in the paca, and returns. This means that interrupts only
actually get disabled in the processor when an interrupt comes along.
When interrupts are enabled by local_irq_enable() et al., the code
sets the interrupts-enabled flag in the paca, and then checks whether
interrupts got hard-disabled. If so, it also sets the EE bit in the
MSR to hard-enable the interrupts.
This has the potential to improve performance, and also makes it
easier to make a kernel that can boot on iSeries and on other 64-bit
machines, since this lazy-disable strategy is very similar to the
soft-disable strategy that iSeries already uses.
This version renames paca->proc_enabled to paca->soft_enabled, and
changes a couple of soft-disables in the kexec code to hard-disables,
which should fix the crash that Michael Ellerman saw. This doesn't
yet use a reserved CR field for the soft_enabled and hard_enabled
flags. This applies on top of Stephen Rothwell's patches to make it
possible to build a combined iSeries/other kernel.
Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-10-04 16:47:49 +10:00
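To make the lazy-disable bookkeeping above concrete, here is a hedged C sketch using placeholder names (soft_enabled, hard_enabled, hard_enable_ee()). The real state lives in the paca and hard enabling/disabling is an mtmsrd of MSR[EE], so this is only an illustration of the flag handling, not the actual implementation.

static int soft_enabled = 1;	/* what the kernel believes the irq state is */
static int hard_enabled = 1;	/* whether MSR[EE] is really set */

static void hard_enable_ee(void)
{
	hard_enabled = 1;	/* stands in for setting MSR[EE] via mtmsrd */
}

static void lazy_irq_disable(void)
{
	soft_enabled = 0;	/* just note the intent; MSR[EE] is left alone */
}

/*
 * If an interrupt arrived while soft-disabled, the entry code cleared
 * MSR[EE] and hard_enabled; re-enabling must then really turn EE back on.
 */
static void lazy_irq_enable(void)
{
	soft_enabled = 1;
	if (!hard_enabled)
		hard_enable_ee();
}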
/* Hard-disable interrupts */
mfmsr r6
	rldicl	r7,r6,48,1
	rotldi	r7,r7,16
	mtmsrd	r7,1
2005-10-10 22:36:14 +10:00
	/* Unfortunately, the stack pointer and the MSR are also clobbered,
	 * so they are saved in the PACA which allows us to restore
	 * our original state after RTAS returns.
	 */
	std	r1,PACAR1(r13)
	std	r6,PACASAVEDMSR(r13)
/* Setup our real return addr */
2014-02-04 16:04:52 +11:00
	LOAD_REG_ADDR(r4,rtas_return_loc)
2006-01-13 14:56:25 +11:00
	clrldi	r4,r4,2			/* convert to realmode address */
2005-10-10 22:36:14 +10:00
	mtlr	r4

	li	r0,0
	ori	r0,r0,MSR_EE|MSR_SE|MSR_BE|MSR_RI
	andc	r0,r6,r0

	li	r9,1
	rldicr	r9,r9,MSR_SF_LG,(63-MSR_SF_LG)
2013-09-23 12:04:45 +10:00
	ori	r9,r9,MSR_IR|MSR_DR|MSR_FE0|MSR_FE1|MSR_FP|MSR_RI|MSR_LE
2005-10-10 22:36:14 +10:00
	andc	r6,r0,r9
2017-06-29 23:19:20 +05:30
__enter_rtas:
2005-10-10 22:36:14 +10:00
	sync				/* disable interrupts so SRR0/1 */
	mtmsrd	r0			/* don't get trashed */
2006-01-13 14:56:25 +11:00
	LOAD_REG_ADDR(r4, rtas)
2005-10-10 22:36:14 +10:00
	ld	r5,RTASENTRY(r4)	/* get the rtas->entry value */
	ld	r4,RTASBASE(r4)		/* get the rtas->base value */
	mtspr	SPRN_SRR0,r5
	mtspr	SPRN_SRR1,r6
2018-01-10 03:07:15 +11:00
	RFI_TO_KERNEL
2005-10-10 22:36:14 +10:00
	b	.	/* prevent speculative execution */
2014-02-04 16:04:52 +11:00
rtas_return_loc:
2013-09-23 12:04:45 +10:00
	FIXUP_ENDIAN
2017-12-22 21:17:10 +10:00
	/*
	 * Clear RI and set SF before anything.
	 */
	mfmsr	r6
	li	r0,MSR_RI
	andc	r6,r6,r0
	sldi	r0,r0,(MSR_SF_LG - MSR_RI_LG)
	or	r6,r6,r0
	sync
	mtmsrd	r6
2005-10-10 22:36:14 +10:00
/* relocation is off at this point */
2011-01-20 17:50:21 +11:00
	GET_PACA(r4)
2006-01-13 14:56:25 +11:00
	clrldi	r4,r4,2			/* convert to realmode address */
2005-10-10 22:36:14 +10:00
2008-08-30 11:41:12 +10:00
	bcl	20,31,$+4
0:	mflr	r3
2014-02-04 16:04:52 +11:00
	ld	r3,(1f-0b)(r3)		/* get &rtas_restore_regs */
2008-08-30 11:41:12 +10:00
2005-10-10 22:36:14 +10:00
	ld	r1,PACAR1(r4)		/* Restore our SP */
	ld	r4,PACASAVEDMSR(r4)	/* Restore our MSR */

	mtspr	SPRN_SRR0,r3
	mtspr	SPRN_SRR1,r4
2018-01-10 03:07:15 +11:00
	RFI_TO_KERNEL
2005-10-10 22:36:14 +10:00
	b	.	/* prevent speculative execution */
2017-06-29 23:19:20 +05:30
_ASM_NOKPROBE_SYMBOL(__enter_rtas)
_ASM_NOKPROBE_SYMBOL(rtas_return_loc)
2005-10-10 22:36:14 +10:00
2008-08-30 11:41:12 +10:00
.align 3
2017-03-09 16:42:12 +11:00
1:	.8byte	rtas_restore_regs
2008-08-30 11:41:12 +10:00
2014-02-04 16:04:52 +11:00
rtas_restore_regs:
2005-10-10 22:36:14 +10:00
/* relocation is on at this point */
	REST_GPR(2, r1)			/* Restore the TOC */
	REST_GPR(13, r1)		/* Restore paca */
2019-12-11 13:35:52 +11:00
	REST_NVGPRS(r1)			/* Restore the non-volatiles */
2005-10-10 22:36:14 +10:00
2011-01-20 17:50:21 +11:00
	GET_PACA(r13)
2005-10-10 22:36:14 +10:00
	ld	r4,_CCR(r1)
	mtcr	r4
	ld	r5,_CTR(r1)
	mtctr	r5
	ld	r6,_XER(r1)
	mtspr	SPRN_XER,r6
	ld	r7,_DAR(r1)
	mtdar	r7
	ld	r8,_DSISR(r1)
	mtdsisr	r8
2018-10-12 13:14:06 +10:30
	addi	r1,r1,SWITCH_FRAME_SIZE	/* Unstack our frame */
2005-10-10 22:36:14 +10:00
	ld	r0,16(r1)		/* get return address */
	mtlr	r0
	blr				/* return to caller */
#endif /* CONFIG_PPC_RTAS */

_GLOBAL(enter_prom)
	mflr	r0
	std	r0,16(r1)
2018-10-12 13:14:06 +10:30
	stdu	r1,-SWITCH_FRAME_SIZE(r1) /* Save SP and create stack space */
2005-10-10 22:36:14 +10:00
	/* Because PROM is running in 32b mode, it clobbers the high order half
	 * of all registers that it saves.  We therefore save those registers
	 * PROM might touch to the stack.  (r0, r3-r13 are caller saved)
	 */
2009-07-23 23:15:07 +00:00
	SAVE_GPR(2, r1)
2005-10-10 22:36:14 +10:00
	SAVE_GPR(13, r1)
2019-12-11 13:35:52 +11:00
	SAVE_NVGPRS(r1)
2009-07-23 23:15:07 +00:00
mfcr r10
2005-10-10 22:36:14 +10:00
mfmsr r11
2009-07-23 23:15:07 +00:00
	std	r10,_CCR(r1)
2005-10-10 22:36:14 +10:00
	std	r11,_MSR(r1)
2013-09-23 12:04:45 +10:00
/* Put PROM address in SRR0 */
mtsrr0 r4
/* Setup our trampoline return addr in LR */
	bcl	20,31,$+4
0:	mflr	r4
	addi	r4,r4,(1f - 0b)
	mtlr	r4
2005-10-10 22:36:14 +10:00
2013-09-23 12:04:45 +10:00
	/* Prepare a 32-bit mode big endian MSR
2005-10-10 22:36:14 +10:00
	 */
2009-07-23 23:15:59 +00:00
#ifdef CONFIG_PPC_BOOK3E
	rlwinm	r11,r11,0,1,31
2013-09-23 12:04:45 +10:00
mtsrr1 r11
rfi
2009-07-23 23:15:59 +00:00
#else /* CONFIG_PPC_BOOK3E */
2013-09-23 12:04:45 +10:00
	LOAD_REG_IMMEDIATE(r12, MSR_SF | MSR_ISF | MSR_LE)
	andc	r11,r11,r12
	mtsrr1	r11
2018-01-10 03:07:15 +11:00
	RFI_TO_KERNEL
2009-07-23 23:15:59 +00:00
#endif /* CONFIG_PPC_BOOK3E */
2005-10-10 22:36:14 +10:00
2013-09-23 12:04:45 +10:00
1:	/* Return from OF */
	FIXUP_ENDIAN
2005-10-10 22:36:14 +10:00
	/* Just make sure that r1 top 32 bits didn't get
	 * corrupt by OF
	 */
	rldicl	r1,r1,0,32
/* Restore the MSR (back to 64 bits) */
	ld	r0,_MSR(r1)
2009-07-23 23:15:07 +00:00
	MTMSRD(r0)
2005-10-10 22:36:14 +10:00
isync
/* Restore other registers */
	REST_GPR(2, r1)
	REST_GPR(13, r1)
2019-12-11 13:35:52 +11:00
	REST_NVGPRS(r1)
2005-10-10 22:36:14 +10:00
	ld	r4,_CCR(r1)
mtcr r4
2018-10-12 13:14:06 +10:30
	addi	r1,r1,SWITCH_FRAME_SIZE
2005-10-10 22:36:14 +10:00
	ld	r0,16(r1)
mtlr r0
blr