/*
 *  PowerPC version
 *    Copyright (C) 1995-1996 Gary Thomas (gdt@linuxppc.org)
 *
 *  Rewritten by Cort Dougan (cort@cs.nmt.edu) for PReP
 *    Copyright (C) 1996 Cort Dougan <cort@cs.nmt.edu>
 *  Adapted for Power Macintosh by Paul Mackerras.
 *  Low-level exception handlers and MMU support
 *  rewritten by Paul Mackerras.
 *    Copyright (C) 1996 Paul Mackerras.
 *
 *  Adapted for 64bit PowerPC by Dave Engebretsen, Peter Bergner, and
 *    Mike Corrigan {engebret|bergner|mikejc}@us.ibm.com
 *
 *  This file contains the entry point for the 64-bit kernel along
 *  with some early initialization code common to all 64-bit powerpc
 *  variants.
 *
 *  This program is free software; you can redistribute it and/or
 *  modify it under the terms of the GNU General Public License
 *  as published by the Free Software Foundation; either version
 *  2 of the License, or (at your option) any later version.
 */
#include <linux/threads.h>
#include <linux/init.h>
#include <asm/reg.h>
#include <asm/page.h>
#include <asm/mmu.h>
#include <asm/ppc_asm.h>
#include <asm/asm-offsets.h>
#include <asm/bug.h>
#include <asm/cputable.h>
#include <asm/setup.h>
#include <asm/hvcall.h>
#include <asm/thread_info.h>
#include <asm/firmware.h>
#include <asm/page_64.h>
#include <asm/irqflags.h>
#include <asm/kvm_book3s_asm.h>
#include <asm/ptrace.h>
#include <asm/hw_irq.h>

/* The physical memory is laid out such that the secondary processor
 * spin code sits at 0x0000...0x00ff.  On server, the vectors follow
 * using the layout described in exceptions-64s.S
 */

/*
 * Entering into this code we make the following assumptions:
 *
 *  For pSeries or server processors:
 *   1. The MMU is off & open firmware is running in real mode.
 *   2. The kernel is entered at __start
 * -or- For OPAL entry:
 *   1. The MMU is off, processor in HV mode, primary CPU enters at 0
 *      with device-tree in gpr3. We also get OPAL base in r8 and
 *      entry in r9 for debugging purposes
 *   2. Secondary processors enter at 0x60 with PIR in gpr3
 *
 *  For Book3E processors:
 *   1. The MMU is on running in AS0 in a state defined in ePAPR
 *   2. The kernel is entered at __start
 */
	.text
	.globl  _stext
_stext:
_GLOBAL(__start)
	/* NOP this out unconditionally */
BEGIN_FTR_SECTION
	FIXUP_ENDIAN
	b	.__start_initialization_multiplatform
END_FTR_SECTION(0, 1)

	/* Catch branch to 0 in real mode */
	trap
	/* Secondary processors spin on this value until it becomes non-zero.
	 * When it does it contains the real address of the descriptor
	 * of the function that the cpu should jump to to continue
	 * initialization.
	 */
	.balign 8
	.globl  __secondary_hold_spinloop
__secondary_hold_spinloop:
	.llong	0x0

	/* Secondary processors write this value with their cpu # */
	/* after they enter the spin loop immediately below.	  */
	.globl	__secondary_hold_acknowledge
__secondary_hold_acknowledge:
	.llong	0x0
#ifdef CONFIG_RELOCATABLE
	/* This flag is set to 1 by a loader if the kernel should run
	 * at the loaded address instead of the linked address.  This
	 * is used by kexec-tools to keep the kdump kernel in the
	 * crash_kernel region.  The loader is responsible for
	 * observing the alignment requirement.
	 */
	/* Do not move this variable as kexec-tools knows about it. */
	. = 0x5c
	.globl	__run_at_load
__run_at_load:
	.long	0x72756e30	/* "run0" -- relocate to 0 by default */
#endif
	. = 0x60
/*
 * The following code is used to hold secondary processors
 * in a spin loop after they have entered the kernel, but
 * before the bulk of the kernel has been relocated.  This code
 * is relocated to physical address 0x60 before prom_init is run.
 * All of it must fit below the first exception vector at 0x100.
 * Use .globl here not _GLOBAL because we want __secondary_hold
 * to be the actual text address, not a descriptor.
 */
	.globl	__secondary_hold
__secondary_hold:
	FIXUP_ENDIAN
#ifndef CONFIG_PPC_BOOK3E
	mfmsr	r24
	ori	r24,r24,MSR_RI
	mtmsrd	r24			/* RI on */
#endif
	/* Grab our physical cpu number */
	mr	r24,r3
	/* stash r4 for book3e */
	mr	r25,r4

	/* Tell the master cpu we're here */
	/* Relocation is off & we are located at an address less */
	/* than 0x100, so only need to grab low order offset.    */
	std	r24,__secondary_hold_acknowledge-_stext(0)
	sync
	li	r26,0
#ifdef CONFIG_PPC_BOOK3E
	tovirt(r26,r26)
#endif
	/* All secondary cpus wait here until told to start. */
100:	ld	r4,__secondary_hold_spinloop-_stext(r26)
	cmpdi	0,r4,0
	beq	100b

#if defined(CONFIG_SMP) || defined(CONFIG_KEXEC)
#ifdef CONFIG_PPC_BOOK3E
	tovirt(r4,r4)
#endif
	ld	r4,0(r4)		/* deref function descriptor */
	mtctr	r4
	mr	r3,r24
	/*
	 * it may be the case that other platforms have r4 right to
	 * begin with, this gives us some safety in case it is not
	 */
#ifdef CONFIG_PPC_BOOK3E
	mr	r4,r25
#else
	li	r4,0
#endif
	/* Make sure that patched code is visible */
	isync
	bctr
#else
	BUG_OPCODE
#endif
	/* This value is used to mark exception frames on the stack. */
	.section ".toc","aw"
exception_marker:
	.tc	ID_72656773_68657265[TC],0x7265677368657265
	.text
/*
 * On server, we include the exception vectors code here as it
 * relies on absolute addressing which is only possible within
 * this compilation unit
 */
#ifdef CONFIG_PPC_BOOK3S
#include "exceptions-64s.S"
#endif
_GLOBAL(generic_secondary_thread_init)
	mr	r24,r3

	/* turn on 64-bit mode */
	bl	.enable_64b_mode

	/* get a valid TOC pointer, wherever we're mapped at */
	bl	.relative_toc
	tovirt(r2,r2)

#ifdef CONFIG_PPC_BOOK3E
	/* Book3E initialization */
	mr	r3,r24
	bl	.book3e_secondary_thread_init
#endif
	b	generic_secondary_common_init
/*
 * On pSeries and most other platforms, secondary processors spin
 * in the following code.
 * At entry, r3 = this processor's number (physical cpu id)
 *
 * On Book3E, r4 = 1 to indicate that the initial TLB entry for
 * this core already exists (setup via some other mechanism such
 * as SCOM before entry).
 */
_GLOBAL(generic_secondary_smp_init)
	FIXUP_ENDIAN
	mr	r24,r3
	mr	r25,r4

	/* turn on 64-bit mode */
	bl	.enable_64b_mode

	/* get a valid TOC pointer, wherever we're mapped at */
	bl	.relative_toc
	tovirt(r2,r2)

#ifdef CONFIG_PPC_BOOK3E
	/* Book3E initialization */
	mr	r3,r24
	mr	r4,r25
	bl	.book3e_secondary_core_init
#endif

generic_secondary_common_init:
	/* Set up a paca value for this processor.  Since we have the
	 * physical cpu id in r24, we need to search the pacas to find
	 * which logical id maps to our physical one.
	 */
	LOAD_REG_ADDR(r13, paca)	/* Load paca pointer		 */
	ld	r13,0(r13)		/* Get base vaddr of paca array	 */
#ifndef CONFIG_SMP
	addi	r13,r13,PACA_SIZE	/* know r13 if used accidentally */
	b	.kexec_wait		/* wait for next kernel if !SMP	 */
#else
	LOAD_REG_ADDR(r7, nr_cpu_ids)	/* Load nr_cpu_ids address	 */
	lwz	r7,0(r7)		/* also the max paca allocated	 */
	li	r5,0			/* logical cpu id		 */
1:	lhz	r6,PACAHWCPUID(r13)	/* Load HW procid from paca	 */
	cmpw	r6,r24			/* Compare to our id		 */
	beq	2f
	addi	r13,r13,PACA_SIZE	/* Loop to next PACA on miss	 */
	addi	r5,r5,1
	cmpw	r5,r7			/* Check if more pacas exist	 */
	blt	1b

	mr	r3,r24			/* not found, copy phys to r3	 */
	b	.kexec_wait		/* next kernel might do better	 */

2:	SET_PACA(r13)
#ifdef CONFIG_PPC_BOOK3E
	addi	r12,r13,PACA_EXTLB	/* and TLB exc frame in another	 */
	mtspr	SPRN_SPRG_TLB_EXFRAME,r12
#endif

	/* From now on, r24 is expected to be logical cpuid */
	mr	r24,r5
	/* See if we need to call a cpu state restore handler */
	LOAD_REG_ADDR(r23, cur_cpu_spec)
	ld	r23,0(r23)
	ld	r23,CPU_SPEC_RESTORE(r23)
	cmpdi	0,r23,0
	beq	3f
	ld	r23,0(r23)
	mtctr	r23
	bctrl
3:	LOAD_REG_ADDR(r3, spinning_secondaries) /* Decrement spinning_secondaries */
	lwarx	r4,0,r3
	subi	r4,r4,1
	stwcx.	r4,0,r3
	bne	3b
	isync

4:	HMT_LOW
	lbz	r23,PACAPROCSTART(r13)	/* Test if this processor should */
					/* start.			 */
	cmpwi	0,r23,0
	beq	4b			/* Loop until told to go	 */

	sync				/* order paca.run and cur_cpu_spec */
	isync				/* In case code patching happened */
	/* Create a temp kernel stack for use before relocation is on.	*/
	ld	r1,PACAEMERGSP(r13)
	subi	r1,r1,STACK_FRAME_OVERHEAD

	b	__secondary_start
#endif /* SMP */

/*
 * Turn the MMU off.
 * Assumes we're mapped EA == RA if the MMU is on.
 */
#ifdef CONFIG_PPC_BOOK3S
_STATIC(__mmu_off)
	mfmsr	r3
	andi.	r0,r3,MSR_IR|MSR_DR
	beqlr
	mflr	r4
	andc	r3,r3,r0
	mtspr	SPRN_SRR0,r4
	mtspr	SPRN_SRR1,r3
	sync
	rfid
	b	.	/* prevent speculative execution */
#endif

/*
 * Here is our main kernel entry point. We currently support 2 kinds of
 * entries depending on the value of r5.
 *
 *   r5 != NULL -> OF entry, we go to prom_init, "legacy" parameter content
 *                 in r3...r7
 *
 *   r5 == NULL -> kexec style entry. r3 is a physical pointer to the
 *                 DT block, r4 is a physical pointer to the kernel itself
 *
 */
_GLOBAL(__start_initialization_multiplatform)
	/* Make sure we are running in 64 bits mode */
	bl	.enable_64b_mode

	/* Get TOC pointer (current runtime address) */
	bl	.relative_toc

	/* find out where we are now */
	bcl	20,31,$+4
0:	mflr	r26			/* r26 = runtime addr here */
	addis	r26,r26,(_stext - 0b)@ha
	addi	r26,r26,(_stext - 0b)@l	/* current runtime base addr */
	/*
	 * Are we booted from a PROM OF-type client interface?
	 */
	cmpldi	cr0,r5,0
	beq	1f
	b	.__boot_from_prom		/* yes -> prom */
1:
	/* Save parameters */
	mr	r31,r3
	mr	r30,r4
#ifdef CONFIG_PPC_EARLY_DEBUG_OPAL
	/* Save OPAL entry */
	mr	r28,r8
	mr	r29,r9
#endif
#ifdef CONFIG_PPC_BOOK3E
	bl	.start_initialization_book3e
	b	.__after_prom_start
#else
	/* Setup some critical 970 SPRs before switching MMU off */
	mfspr	r0,SPRN_PVR
	srwi	r0,r0,16
	cmpwi	r0,0x39		/* 970 */
	beq	1f
	cmpwi	r0,0x3c		/* 970FX */
	beq	1f
	cmpwi	r0,0x44		/* 970MP */
	beq	1f
	cmpwi	r0,0x45		/* 970GX */
	bne	2f
1:	bl	.__cpu_preinit_ppc970
2:

	/* Switch off MMU if not already off */
	bl	.__mmu_off
	b	.__after_prom_start
#endif /* CONFIG_PPC_BOOK3E */

_INIT_STATIC(__boot_from_prom)
#ifdef CONFIG_PPC_OF_BOOT_TRAMPOLINE
	/* Save parameters */
	mr	r31,r3
	mr	r30,r4
	mr	r29,r5
	mr	r28,r6
	mr	r27,r7
	/*
	 * Align the stack to 16-byte boundary
	 * Depending on the size and layout of the ELF sections in the initial
	 * boot binary, the stack pointer may be unaligned on PowerMac
	 */
	rldicr	r1,r1,0,59
#ifdef CONFIG_RELOCATABLE
	/* Relocate code for where we are now */
	mr	r3,r26
	bl	.relocate
#endif

	/* Restore parameters */
	mr	r3,r31
	mr	r4,r30
	mr	r5,r29
	mr	r6,r28
	mr	r7,r27

	/* Do all of the interaction with OF client interface */
	mr	r8,r26
	bl	.prom_init
#endif /* #CONFIG_PPC_OF_BOOT_TRAMPOLINE */

	/* We never return. We also hit that trap if trying to boot
	 * from OF while CONFIG_PPC_OF_BOOT_TRAMPOLINE isn't selected */
	trap

_STATIC(__after_prom_start)
#ifdef CONFIG_RELOCATABLE
	/* process relocations for the final address of the kernel */
	lis	r25,PAGE_OFFSET@highest	/* compute virtual base of kernel */
	sldi	r25,r25,32
	lwz	r7,__run_at_load-_stext(r26)
	cmplwi	cr0,r7,1	/* flagged to stay where we are ? */
	bne	1f
	add	r25,r25,r26
1:	mr	r3,r25
	bl	.relocate
#endif
/*
 * We need to run with _stext at physical address PHYSICAL_START.
 * This will leave some code in the first 256B of
 * real memory, which are reserved for software use.
 *
 * Note: This process overwrites the OF exception vectors.
 */
	li	r3,0			/* target addr */
#ifdef CONFIG_PPC_BOOK3E
	tovirt(r3,r3)			/* on booke, we already run at PAGE_OFFSET */
#endif
	mr.	r4,r26			/* In some cases the loader may  */
	beq	9f			/* have already put us at zero */
	li	r6,0x100		/* Start offset, the first 0x100 */
					/* bytes were copied earlier.	 */
#ifdef CONFIG_PPC_BOOK3E
	tovirt(r6,r6)			/* on booke, we already run at PAGE_OFFSET */
#endif
#ifdef CONFIG_RELOCATABLE
/*
 * Check whether the kernel should run as a relocatable kernel, based on
 * the variable __run_at_load: if it is set, the kernel stays where it was
 * loaded; otherwise it is moved to PHYSICAL_START.
 */
	lwz	r7,__run_at_load-_stext(r26)
	cmplwi	cr0,r7,1
	bne	3f

	/* just copy interrupts */
	LOAD_REG_IMMEDIATE(r5, __end_interrupts - _stext)
	b	5f
3:
#endif
	lis	r5,(copy_to_here - _stext)@ha
	addi	r5,r5,(copy_to_here - _stext)@l	/* # bytes of memory to copy */
	bl	.copy_and_flush		/* copy the first n bytes	 */
					/* this includes the code being	 */
					/* executed here.		 */
	addis	r8,r3,(4f - _stext)@ha	/* Jump to the copy of this code */
	addi	r8,r8,(4f - _stext)@l	/* that we just made		 */
	mtctr	r8
	bctr
.balign 8
p_end:	.llong	_end - _stext

4:	/* Now copy the rest of the kernel up to _end */
	addis	r5,r26,(p_end - _stext)@ha
	ld	r5,(p_end - _stext)@l(r5)	/* get _end */
5:	bl	.copy_and_flush		/* copy the rest */

9:	b	.start_here_multiplatform
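The two-stage copy above (either just the interrupt vectors when __run_at_load is set, or everything up to copy_to_here followed by the remainder to _end) can be modelled as a sketch; the offsets are hypothetical parameters standing in for the real linker symbols:

```python
def early_copy(image, run_at_load, end_interrupts, copy_to_here_off):
    """Toy model of the copy-down logic: 'image' is the kernel as loaded,
    and the return value is what ends up at physical address 0."""
    dest = bytearray(len(image))
    if run_at_load:
        # Relocatable kernel staying at its load address: only the
        # interrupt vectors are copied down to physical address 0.
        dest[:end_interrupts] = image[:end_interrupts]
        return bytes(dest)
    # Otherwise copy up to copy_to_here (which includes this very code),
    # jump into the copy, then copy the remainder up to _end.
    dest[:copy_to_here_off] = image[:copy_to_here_off]
    dest[copy_to_here_off:] = image[copy_to_here_off:]
    return bytes(dest)
```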
/*
 * Copy routine used to copy the kernel to start at physical address 0
 * and flush and invalidate the caches as needed.
 * r3 = dest addr, r4 = source addr, r5 = copy limit, r6 = start offset
 * on exit, r3, r4, r5 are unchanged, r6 is updated to be >= r5.
 *
 * Note: this routine *only* clobbers r0, r6 and lr
 */
_GLOBAL(copy_and_flush)
	addi	r5,r5,-8
	addi	r6,r6,-8
4:	li	r0,8			/* Use the smallest common	*/
					/* denominator cache line	*/
					/* size.  This results in	*/
					/* extra cache line flushes	*/
					/* but operation is correct.	*/
					/* Can't get cache line size	*/
					/* from NACA as it is being	*/
					/* moved too.			*/
	mtctr	r0			/* put # words/line in ctr	*/
3:	addi	r6,r6,8			/* copy a cache line		*/
	ldx	r0,r6,r4
	stdx	r0,r6,r3
	bdnz	3b
	dcbst	r6,r3			/* write it to memory		*/
	sync
	icbi	r6,r3			/* flush the icache line	*/
	cmpld	0,r6,r5
	blt	4b
	sync
	addi	r5,r5,8
	addi	r6,r6,8
	isync
	blr
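A pure-Python model of copy_and_flush's loop structure, assuming the 8-doubleword "smallest common denominator" line the code uses (the dcbst/sync/icbi cache maintenance is not modelled). It shows why the final offset ends up >= the copy limit: a whole line is always copied even when the limit falls mid-line.

```python
LINE_DOUBLEWORDS = 8   # li r0,8: doublewords copied per flushed "line"

def copy_and_flush(dest, src, limit, offset):
    """Copy 8-byte doublewords from src to dest starting at 'offset',
    one whole line at a time, until 'offset' reaches 'limit'.
    Returns the updated offset, which is always >= the original limit."""
    limit -= 8                      # addi r5,r5,-8
    offset -= 8                     # addi r6,r6,-8
    while True:
        for _ in range(LINE_DOUBLEWORDS):   # 3: copy a cache line
            offset += 8
            dest[offset:offset + 8] = src[offset:offset + 8]
        # dcbst/sync/icbi would flush this line to memory on real hardware
        if offset >= limit:          # cmpld 0,r6,r5 ; blt 4b
            break
    return offset + 8                # addi r6,r6,8
```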
.align 8
copy_to_here:

#ifdef CONFIG_SMP
#ifdef CONFIG_PPC_PMAC
/*
 * On PowerMac, secondary processors start from the reset vector, which
 * is temporarily turned into a call to one of the functions below.
 */
	.section ".text";
	.align 2 ;

	.globl	__secondary_start_pmac_0
__secondary_start_pmac_0:
	/* NB the entries for cpus 0, 1, 2 must each occupy 8 bytes. */
	li	r24,0
	b	1f
	li	r24,1
	b	1f
	li	r24,2
	b	1f
	li	r24,3
1:
_GLOBAL(pmac_secondary_start)
	/* turn on 64-bit mode */
	bl	.enable_64b_mode

	li	r0,0
	mfspr	r3,SPRN_HID4
	rldimi	r3,r0,40,23		/* clear bit 23 (rm_ci) */
	sync
	mtspr	SPRN_HID4,r3
	isync
	sync
	slbia

	/* get TOC pointer (real address) */
	bl	.relative_toc
	tovirt(r2,r2)

	/* Copy some CPU settings from CPU 0 */
	bl	.__restore_cpu_ppc970

	/* pSeries do that early though I don't think we really need it */
	mfmsr	r3
	ori	r3,r3,MSR_RI
	mtmsrd	r3			/* RI on */

	/* Set up a paca value for this processor. */
	LOAD_REG_ADDR(r4,paca)		/* Load paca pointer		*/
	ld	r4,0(r4)		/* Get base vaddr of paca array	*/
	mulli	r13,r24,PACA_SIZE	/* Calculate vaddr of right paca */
	add	r13,r13,r4		/* for this processor.		*/
	SET_PACA(r13)			/* Save vaddr of paca in an SPRG */
	/* Mark interrupts soft and hard disabled (they might be enabled
	 * in the PACA when doing hotplug)
	 */
	li	r0,0
	stb	r0,PACASOFTIRQEN(r13)
powerpc: Rework lazy-interrupt handling
The current implementation of lazy interrupts handling has some
issues that this tries to address.
We don't do the various workarounds we need to do when re-enabling
interrupts in some cases such as when returning from an interrupt
and thus we may still lose or get delayed decrementer or doorbell
interrupts.
The current scheme also makes it much harder to handle the external
"edge" interrupts provided by some BookE processors when using the
EPR facility (External Proxy) and the Freescale Hypervisor.
Additionally, we tend to keep interrupts hard disabled in a number
of cases, such as decrementer interrupts, external interrupts, or
when a masked decrementer interrupt is pending. This is sub-optimal.
This is an attempt at fixing it all in one go by reworking the way
we do the lazy interrupt disabling from the ground up.
The base idea is to replace the "hard_enabled" field with a
"irq_happened" field in which we store a bit mask of what interrupt
occurred while soft-disabled.
When re-enabling, either via arch_local_irq_restore() or when returning
from an interrupt, we can now decide what to do by testing bits in that
field.
We then implement replaying of the missed interrupts either by
re-using the existing exception frame (in exception exit case) or via
the creation of a new one from an assembly trampoline (in the
arch_local_irq_enable case).
This removes the need to play with the decrementer to try to create
fake interrupts, among others.
In addition, this adds a few refinements:
- We no longer hard disable decrementer interrupts that occur
while soft-disabled. We now simply bump the decrementer back to max
(on BookS) or leave it stopped (on BookE) and continue with hard interrupts
enabled, which means that we'll potentially get better sample quality from
performance monitor interrupts.
- Timer, decrementer and doorbell interrupts now hard-enable
shortly after removing the source of the interrupt, which means
they no longer run entirely hard disabled. Again, this will improve
perf sample quality.
- On Book3E 64-bit, we now make the performance monitor interrupt
act as an NMI like Book3S (the necessary C code for that to work
appear to already be present in the FSL perf code, notably calling
nmi_enter instead of irq_enter). (This also fixes a bug where BookE
perfmon interrupts could clobber r14 ... oops)
- We could make "masked" decrementer interrupts act as NMIs when doing
timer-based perf sampling to improve the sample quality.
Signed-off-by-yet: Benjamin Herrenschmidt <benh@kernel.crashing.org>
---
v2:
- Add hard-enable to decrementer, timer and doorbells
- Fix CR clobber in masked irq handling on BookE
- Make embedded perf interrupt act as an NMI
- Add a PACA_HAPPENED_EE_EDGE for use by FSL if they want
to retrigger an interrupt without preventing hard-enable
v3:
- Fix or vs. ori bug on Book3E
- Fix enabling of interrupts for some exceptions on Book3E
v4:
- Fix resend of doorbells on return from interrupt on Book3E
v5:
- Rebased on top of my latest series, which involves some significant
rework of some aspects of the patch.
v6:
- 32-bit compile fix
- more compile fixes with various .config combos
- factor out the asm code to soft-disable interrupts
- remove the C wrapper around preempt_schedule_irq
v7:
- Fix a bug with hard irq state tracking on native power7
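The irq_happened bookkeeping described above can be sketched as a toy state machine. The bit names mirror the patch's terminology, but the values and the two helpers are illustrative assumptions, not the kernel's actual C implementation:

```python
# Illustrative bit values for paca->irq_happened (toy model)
PACA_IRQ_HARD_DIS = 0x01   # interrupts are hard-disabled
PACA_IRQ_DEC      = 0x08   # a decrementer interrupt was seen while masked

class Paca:
    def __init__(self):
        self.soft_enabled = 0                     # booted soft-disabled
        self.irq_happened = PACA_IRQ_HARD_DIS     # and hard-disabled

def take_interrupt(paca, bit):
    """Interrupt arrival: if soft-disabled, just record it in the mask
    and stay hard-disabled; otherwise deliver it immediately."""
    if not paca.soft_enabled:
        paca.irq_happened |= bit | PACA_IRQ_HARD_DIS
        return False     # not delivered now, will be replayed later
    return True          # delivered immediately

def irq_restore(paca):
    """arch_local_irq_restore() analogue: re-enable and return the set
    of recorded interrupts that must now be replayed."""
    replayed = paca.irq_happened & ~PACA_IRQ_HARD_DIS
    paca.irq_happened = 0
    paca.soft_enabled = 1
    return replayed
```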
	li	r0,PACA_IRQ_HARD_DIS
	stb	r0,PACAIRQHAPPENED(r13)

	/* Create a temp kernel stack for use before relocation is on. */
	ld	r1,PACAEMERGSP(r13)
	subi	r1,r1,STACK_FRAME_OVERHEAD

	b	__secondary_start
#endif /* CONFIG_PPC_PMAC */

/*
 * This function is called after the master CPU has released the
 * secondary processors.  The execution environment is relocation off.
 * The paca for this processor has the following fields initialized at
 * this point:
 *   1. Processor number
 *   2. Segment table pointer (virtual address)
 * On entry the following are set:
 *   r1        = stack pointer (real addr of temp stack)
 *   r24       = cpu# (in Linux terms)
 *   r13       = paca virtual address
 *   SPRG_PACA = paca virtual address
 */
	.section ".text";
	.align 2 ;
	.globl	__secondary_start
__secondary_start:
	/* Set thread priority to MEDIUM */
	HMT_MEDIUM

	/* Initialize the kernel stack */
	LOAD_REG_ADDR(r3, current_set)
	sldi	r28,r24,3		/* get current_set[cpu#] */
	ldx	r14,r3,r28
	addi	r14,r14,THREAD_SIZE-STACK_FRAME_OVERHEAD
	std	r14,PACAKSAVE(r13)

	/* Do early setup for that CPU (stab, slb, hash table pointer) */
	bl	.early_setup_secondary

	/*
	 * setup the new stack pointer, but *don't* use this until
	 * translation is on.
	 */
	mr	r1,r14

	/* Clear backchain so we get nice backtraces */
	li	r7,0
	mtlr	r7
	/* Mark interrupts soft and hard disabled (they might be enabled
	 * in the PACA when doing hotplug)
	 */
	stb	r7,PACASOFTIRQEN(r13)
	li	r0,PACA_IRQ_HARD_DIS
	stb	r0,PACAIRQHAPPENED(r13)

	/* enable MMU and jump to start_secondary */
	LOAD_REG_ADDR(r3, .start_secondary_prolog)
	LOAD_REG_IMMEDIATE(r4, MSR_KERNEL)
[POWERPC] Lazy interrupt disabling for 64-bit machines
This implements a lazy strategy for disabling interrupts. This means
that local_irq_disable() et al. just clear the 'interrupts are
enabled' flag in the paca. If an interrupt comes along, the interrupt
entry code notices that interrupts are supposed to be disabled, and
clears the EE bit in SRR1, clears the 'interrupts are hard-enabled'
flag in the paca, and returns. This means that interrupts only
actually get disabled in the processor when an interrupt comes along.
When interrupts are enabled by local_irq_enable() et al., the code
sets the interrupts-enabled flag in the paca, and then checks whether
interrupts got hard-disabled. If so, it also sets the EE bit in the
MSR to hard-enable the interrupts.
This has the potential to improve performance, and also makes it
easier to make a kernel that can boot on iSeries and on other 64-bit
machines, since this lazy-disable strategy is very similar to the
soft-disable strategy that iSeries already uses.
This version renames paca->proc_enabled to paca->soft_enabled, and
changes a couple of soft-disables in the kexec code to hard-disables,
which should fix the crash that Michael Ellerman saw. This doesn't
yet use a reserved CR field for the soft_enabled and hard_enabled
flags. This applies on top of Stephen Rothwell's patches to make it
possible to build a combined iSeries/other kernel.
Signed-off-by: Paul Mackerras <paulus@samba.org>
	mtspr	SPRN_SRR0,r3
	mtspr	SPRN_SRR1,r4
	RFI
	b	.			/* prevent speculative execution */
/*
 * Running with relocation on at this point.  All we want to do is
 * zero the stack back-chain pointer and get the TOC virtual address
 * before going into C code.
 */
_GLOBAL(start_secondary_prolog)
	ld	r2,PACATOC(r13)
	li	r3,0
	std	r3,0(r1)		/* Zero the stack frame pointer	*/
	bl	.start_secondary
	b	.
/*
 * Reset stack pointer and call start_secondary
 * to continue with online operation when woken up
 * from cede in cpu offline.
 */
_GLOBAL(start_secondary_resume)
	ld	r1,PACAKSAVE(r13)	/* Reload kernel stack pointer */
	li	r3,0
	std	r3,0(r1)		/* Zero the stack frame pointer	*/
	bl	.start_secondary
	b	.
#endif
/*
 * This subroutine clobbers r11 and r12
 */
_GLOBAL(enable_64b_mode)
	mfmsr	r11			/* grab the current MSR */
#ifdef CONFIG_PPC_BOOK3E
	oris	r11,r11,0x8000		/* CM bit set, we'll set ICM later */
	mtmsr	r11
#else /* CONFIG_PPC_BOOK3E */
	li	r12,(MSR_64BIT | MSR_ISF)@highest
	sldi	r12,r12,48
	or	r11,r11,r12
	mtmsrd	r11
	isync
#endif
	blr
/*
 * This puts the TOC pointer into r2, offset by 0x8000 (as expected
 * by the toolchain).  It computes the correct value for wherever we
 * are running at the moment, using position-independent code.
 *
 * Note: The compiler constructs pointers using offsets from the
 * TOC in -mcmodel=medium mode. After we relocate to 0 but before
 * the MMU is on we need our TOC to be a virtual address otherwise
 * these pointers will be real addresses which may get stored and
 * accessed later with the MMU on. We use tovirt() at the call
 * sites to handle this.
 */
_GLOBAL(relative_toc)
	mflr	r0
	bcl	20,31,$+4
0:	mflr	r11
	ld	r2,(p_toc - 0b)(r11)
	add	r2,r2,r11
	mtlr	r0
	blr

.balign 8
p_toc:	.llong	__toc_start + 0x8000 - 0b
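The position-independent TOC computation reduces to simple arithmetic: p_toc holds a link-time difference (__toc_start + 0x8000 - 0b), and adding the run-time address of label 0b (obtained via bcl/mflr) yields the correct TOC wherever the image actually sits. A sketch with made-up addresses:

```python
def relative_toc(link_label_addr, link_toc_start, runtime_label_addr):
    """Model of relative_toc: the stored constant is computed at link
    time; the run-time address of label 0b rebases it, so the result is
    correct for any load address."""
    p_toc = link_toc_start + 0x8000 - link_label_addr   # assembled into p_toc
    return p_toc + runtime_label_addr                   # 0: mflr r11 ; add r2,r2,r11
```

Note the result differs from the link-time TOC by exactly the relocation offset, which is why the same code works both before and after the kernel copies itself down.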
/*
 * This is where the main kernel code starts.
 */
_INIT_STATIC(start_here_multiplatform)
	/* set up the TOC */
	bl	.relative_toc
	tovirt(r2,r2)
	/* Clear out the BSS. It may have been done in prom_init,
	 * already but that's irrelevant since prom_init will soon
	 * be detached from the kernel completely. Besides, we need
	 * to clear it now for kexec-style entry.
	 */
	LOAD_REG_ADDR(r11,__bss_stop)
	LOAD_REG_ADDR(r8,__bss_start)
	sub	r11,r11,r8		/* bss size			*/
	addi	r11,r11,7		/* round up to an even double word */
	srdi.	r11,r11,3		/* shift right by 3		*/
	beq	4f
	addi	r8,r8,-8
	li	r0,0
	mtctr	r11			/* zero this many doublewords	*/
3:	stdu	r0,8(r8)
	bdnz	3b
4:
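The doubleword-rounded BSS clear can be modelled as follows. Note it deliberately mirrors the code in clearing up to 7 bytes past __bss_stop when the size is not a multiple of 8; in practice the linker script places __bss_stop so that this over-clear is harmless:

```python
def clear_bss(mem, bss_start, bss_stop):
    """Round the BSS size up to whole doublewords and zero them,
    matching: addi r11,r11,7 ; srdi. r11,r11,3 ; stdu loop."""
    size = bss_stop - bss_start
    ndw = (size + 7) >> 3            # number of 8-byte stores
    for i in range(ndw):             # mtctr r11 ; 3: stdu r0,8(r8) ; bdnz 3b
        off = bss_start + 8 * i
        mem[off:off + 8] = b"\x00" * 8
```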
#ifdef CONFIG_PPC_EARLY_DEBUG_OPAL
	/* Setup OPAL entry */
	LOAD_REG_ADDR(r11, opal)
	std	r28,0(r11);
	std	r29,8(r11);
#endif

#ifndef CONFIG_PPC_BOOK3E
	mfmsr	r6
	ori	r6,r6,MSR_RI
	mtmsrd	r6			/* RI on */
#endif
#ifdef CONFIG_RELOCATABLE
	/* Save the physical address we're running at in kernstart_addr */
	LOAD_REG_ADDR(r4, kernstart_addr)
	clrldi	r0,r25,2
	std	r0,0(r4)
#endif
	/* The following gets the stack set up with the regs */
	/* pointing to the real addr of the kernel stack.  This is */
	/* all done to support the C function call below which sets */
	/* up the htab.  This is done because we have relocated the */
	/* kernel but are still running in real mode. */
	LOAD_REG_ADDR(r3,init_thread_union)

	/* set up a stack pointer */
	addi	r1,r3,THREAD_SIZE
	li	r0,0
	stdu	r0,-STACK_FRAME_OVERHEAD(r1)
/ * Do v e r y e a r l y k e r n e l i n i t i a l i z a t i o n s , i n c l u d i n g i n i t i a l h a s h t a b l e ,
* stab a n d s l b s e t u p b e f o r e w e t u r n o n r e l o c a t i o n . * /
/* Restore parameters passed from prom_init/kexec */
mr r3 ,r31
2009-07-15 00:52:54 +04:00
bl . e a r l y _ s e t u p / * a l s o s e t s r13 a n d S P R G _ P A C A * /
2005-09-26 10:04:21 +04:00
2008-08-30 05:41:12 +04:00
	LOAD_REG_ADDR(r3, .start_here_common)
	ld	r4,PACAKMSR(r13)
	mtspr	SPRN_SRR0,r3
	mtspr	SPRN_SRR1,r4
RFI
	b	.	/* prevent speculative execution */
/* This is where all platforms converge execution */
_INIT_GLOBAL(start_here_common)
/* relocation is on at this point */
	std	r1,PACAKSAVE(r13)
/* Load the TOC (virtual address) */
	ld	r2,PACATOC(r13)
/* Do more system initializations in virtual mode */
	bl	.setup_system
	/* Mark interrupts soft and hard disabled (they might be enabled
	 * in the PACA when doing hotplug)
	 */
	li	r0,0
	stb	r0,PACASOFTIRQEN(r13)
	li	r0,PACA_IRQ_HARD_DIS
	stb	r0,PACAIRQHAPPENED(r13)
/* Generic kernel entry */
	bl	.start_kernel
/* Not reached */
	BUG_OPCODE
/*
 * We put a few things here that have to be page-aligned.
 * This stuff goes at the beginning of the bss, which is page-aligned.
 */
	.section	".bss"
	.align	PAGE_SHIFT
.globl empty_zero_page
empty_zero_page :
.space PAGE_SIZE
.globl swapper_pg_dir
swapper_pg_dir :
.space PGD_TABLE_SIZE