/*
 *  PowerPC version
 *    Copyright (C) 1995-1996 Gary Thomas (gdt@linuxppc.org)
 *
 *  Rewritten by Cort Dougan (cort@cs.nmt.edu) for PReP
 *    Copyright (C) 1996 Cort Dougan <cort@cs.nmt.edu>
 *  Adapted for Power Macintosh by Paul Mackerras.
 *  Low-level exception handlers and MMU support
 *  rewritten by Paul Mackerras.
 *    Copyright (C) 1996 Paul Mackerras.
 *
 *  Adapted for 64bit PowerPC by Dave Engebretsen, Peter Bergner, and
 *    Mike Corrigan {engebret|bergner|mikejc}@us.ibm.com
 *
 *  This file contains the low-level support and setup for the
 *  PowerPC-64 platform, including trap and interrupt dispatch.
 *
 *  This program is free software; you can redistribute it and/or
 *  modify it under the terms of the GNU General Public License
 *  as published by the Free Software Foundation; either version
 *  2 of the License, or (at your option) any later version.
 */

#include <linux/threads.h>
#include <asm/reg.h>
#include <asm/page.h>
#include <asm/mmu.h>
#include <asm/ppc_asm.h>
#include <asm/asm-offsets.h>
#include <asm/bug.h>
#include <asm/cputable.h>
#include <asm/setup.h>
#include <asm/hvcall.h>
#include <asm/iseries/lpar_map.h>
#include <asm/thread_info.h>
#include <asm/firmware.h>
#include <asm/page_64.h>
#include <asm/exception.h>
#include <asm/irqflags.h>
/*
 * We layout physical memory as follows:
 * 0x0000 - 0x00ff : Secondary processor spin code
 * 0x0100 - 0x2fff : pSeries Interrupt prologs
 * 0x3000 - 0x5fff : interrupt support, iSeries and common interrupt prologs
 * 0x6000 - 0x6fff : Initial (CPU0) segment table
 * 0x7000 - 0x7fff : FWNMI data area
 * 0x8000 -        : Early init and support code
 */

/*
 *   SPRG Usage
 *
 *   Register	Definition
 *
 *   SPRG0	reserved for hypervisor
 *   SPRG1	temp - used to save gpr
 *   SPRG2	temp - used to save gpr
 *   SPRG3	virt addr of paca
 */

/*
 * Entering into this code we make the following assumptions:
 *  For pSeries:
 *   1. The MMU is off & open firmware is running in real mode.
 *   2. The kernel is entered at __start
 *
 *  For iSeries:
 *   1. The MMU is on (as it always is for iSeries)
 *   2. The kernel is entered at system_reset_iSeries
 */
	.text
	.globl  _stext
_stext:
_GLOBAL(__start)
	/* NOP this out unconditionally */
BEGIN_FTR_SECTION
	b	.__start_initialization_multiplatform
END_FTR_SECTION(0, 1)

	/* Catch branch to 0 in real mode */
	trap

	/* Secondary processors spin on this value until it goes to 1. */
	.globl  __secondary_hold_spinloop
__secondary_hold_spinloop:
	.llong	0x0

	/* Secondary processors write this value with their cpu # */
	/* after they enter the spin loop immediately below.	   */
	.globl	__secondary_hold_acknowledge
__secondary_hold_acknowledge:
	.llong	0x0
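/*
 * Boot protocol: each secondary writes its physical cpu id to
 * __secondary_hold_acknowledge, then spins until the boot cpu
 * writes 1 to __secondary_hold_spinloop (see __secondary_hold
 * below).
 */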
#ifdef CONFIG_PPC_ISERIES
	/*
	 * At offset 0x20, there is a pointer to iSeries LPAR data.
	 * This is required by the hypervisor
	 */
	. = 0x20
	.llong hvReleaseData-KERNELBASE
#endif /* CONFIG_PPC_ISERIES */

	. = 0x60
/*
 * The following code is used to hold secondary processors
 * in a spin loop after they have entered the kernel, but
 * before the bulk of the kernel has been relocated.  This code
 * is relocated to physical address 0x60 before prom_init is run.
 * All of it must fit below the first exception vector at 0x100.
 */
_GLOBAL(__secondary_hold)
	mfmsr	r24
	ori	r24,r24,MSR_RI
	mtmsrd	r24			/* RI on */

	/* Grab our physical cpu number */
	mr	r24,r3

	/* Tell the master cpu we're here */
	/* Relocation is off & we are located at an address less */
	/* than 0x100, so only need to grab low order offset.	  */
	std	r24,__secondary_hold_acknowledge@l(0)
	sync

	/* All secondary cpus wait here until told to start. */
100:	ld	r4,__secondary_hold_spinloop@l(0)
	cmpdi	0,r4,1
	bne	100b
#if defined(CONFIG_SMP) || defined(CONFIG_KEXEC)
	LOAD_REG_IMMEDIATE(r4, .generic_secondary_smp_init)
	mtctr	r4
	mr	r3,r24
	bctr
#else
	BUG_OPCODE
#endif
	/* This value is used to mark exception frames on the stack. */
	.section ".toc","aw"
exception_marker:
	.tc	ID_72656773_68657265[TC],0x7265677368657265
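	/* Note: 0x7265677368657265 is ASCII for "regshere" ("regs here"). */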
.text
/*
 * This is the start of the interrupt handlers for pSeries
 * This code runs with relocation off.
 */
	. = 0x100
	.globl __start_interrupts
__start_interrupts:

	STD_EXCEPTION_PSERIES(0x100, system_reset)

	. = 0x200
_machine_check_pSeries:
	HMT_MEDIUM
	mtspr	SPRN_SPRG1,r13		/* save r13 */
	EXCEPTION_PROLOG_PSERIES(PACA_EXMC, machine_check_common)

	. = 0x300
	.globl data_access_pSeries
data_access_pSeries:
	HMT_MEDIUM
	mtspr	SPRN_SPRG1,r13
BEGIN_FTR_SECTION
	mtspr	SPRN_SPRG2,r12
	mfspr	r13,SPRN_DAR
	mfspr	r12,SPRN_DSISR
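	/*
	 * Work out whether this DSI is a bolted segment-table miss:
	 * the srdi below leaves the region id (DAR >> 60) in r13, and
	 * the rlwimi folds in the DSISR "no segment entry" bit at
	 * 0x20, so 0x2c means a segment-table miss on a 0xc (kernel)
	 * address.
	 */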
	srdi	r13,r13,60
	rlwimi	r13,r12,16,0x20
	mfcr	r12
	cmpwi	r13,0x2c
	beq	do_stab_bolted_pSeries
	mtcrf	0x80,r12
	mfspr	r12,SPRN_SPRG2
END_FTR_SECTION_IFCLR(CPU_FTR_SLB)
	EXCEPTION_PROLOG_PSERIES(PACA_EXGEN, data_access_common)
	. = 0x380
	.globl data_access_slb_pSeries
data_access_slb_pSeries:
	HMT_MEDIUM
	mtspr	SPRN_SPRG1,r13
	mfspr	r13,SPRN_SPRG3		/* get paca address into r13 */
	std	r3,PACA_EXSLB+EX_R3(r13)
	mfspr	r3,SPRN_DAR
	std	r9,PACA_EXSLB+EX_R9(r13)	/* save r9 - r12 */
	mfcr	r9
#ifdef __DISABLED__
	/* Keep that around for when we re-implement dynamic VSIDs */
	cmpdi	r3,0
	bge	slb_miss_user_pseries
#endif /* __DISABLED__ */
	std	r10,PACA_EXSLB+EX_R10(r13)
	std	r11,PACA_EXSLB+EX_R11(r13)
	std	r12,PACA_EXSLB+EX_R12(r13)
	mfspr	r10,SPRN_SPRG1
	std	r10,PACA_EXSLB+EX_R13(r13)
	mfspr	r12,SPRN_SRR1		/* and SRR1 */
	b	.slb_miss_realmode	/* Rel. branch works in real mode */
	STD_EXCEPTION_PSERIES(0x400, instruction_access)

	. = 0x480
	.globl instruction_access_slb_pSeries
instruction_access_slb_pSeries:
	HMT_MEDIUM
	mtspr	SPRN_SPRG1,r13
	mfspr	r13,SPRN_SPRG3		/* get paca address into r13 */
	std	r3,PACA_EXSLB+EX_R3(r13)
	mfspr	r3,SPRN_SRR0		/* SRR0 is faulting address */
	std	r9,PACA_EXSLB+EX_R9(r13)	/* save r9 - r12 */
	mfcr	r9
#ifdef __DISABLED__
	/* Keep that around for when we re-implement dynamic VSIDs */
	cmpdi	r3,0
	bge	slb_miss_user_pseries
#endif /* __DISABLED__ */
	std	r10,PACA_EXSLB+EX_R10(r13)
	std	r11,PACA_EXSLB+EX_R11(r13)
	std	r12,PACA_EXSLB+EX_R12(r13)
	mfspr	r10,SPRN_SPRG1
	std	r10,PACA_EXSLB+EX_R13(r13)
	mfspr	r12,SPRN_SRR1		/* and SRR1 */
	b	.slb_miss_realmode	/* Rel. branch works in real mode */
	MASKABLE_EXCEPTION_PSERIES(0x500, hardware_interrupt)
	STD_EXCEPTION_PSERIES(0x600, alignment)
	STD_EXCEPTION_PSERIES(0x700, program_check)
	STD_EXCEPTION_PSERIES(0x800, fp_unavailable)
	MASKABLE_EXCEPTION_PSERIES(0x900, decrementer)
	STD_EXCEPTION_PSERIES(0xa00, trap_0a)
	STD_EXCEPTION_PSERIES(0xb00, trap_0b)

	. = 0xc00
	.globl	system_call_pSeries
system_call_pSeries:
	HMT_MEDIUM
BEGIN_FTR_SECTION
	cmpdi	r0,0x1ebe
	beq-	1f
END_FTR_SECTION_IFSET(CPU_FTR_REAL_LE)
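	/*
	 * 0x1ebe is the magic syscall number for the fast little/big
	 * endian switch, available only on CPUs that can execute
	 * little-endian (CPU_FTR_REAL_LE); it is handled at label 1
	 * below.
	 */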
	mr	r9,r13
	mfmsr	r10
	mfspr	r13,SPRN_SPRG3
	mfspr	r11,SPRN_SRR0
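	/*
	 * Build the virtual address of system_call_common: keep the
	 * high 32 bits of the paca pointer (the kernel linear-mapping
	 * prefix) and fill in the handler offset in the low 32 bits,
	 * then rfid with MSR_IR/MSR_DR set to get there with
	 * relocation on.
	 */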
	clrrdi	r12,r13,32
	oris	r12,r12,system_call_common@h
	ori	r12,r12,system_call_common@l
	mtspr	SPRN_SRR0,r12
	ori	r10,r10,MSR_IR|MSR_DR|MSR_RI
	mfspr	r12,SPRN_SRR1
	mtspr	SPRN_SRR1,r10
	rfid
	b	.	/* prevent speculative execution */
/* Fast LE/BE switch system call */
1:	mfspr	r12,SPRN_SRR1
	xori	r12,r12,MSR_LE
	mtspr	SPRN_SRR1,r12
	rfid		/* return to userspace */
	b	.
	STD_EXCEPTION_PSERIES(0xd00, single_step)
	STD_EXCEPTION_PSERIES(0xe00, trap_0e)

/* We need to deal with the Altivec unavailable exception
 * here which is at 0xf20, thus in the middle of the
 * prolog code of the Performance Monitor one. A little
 * trickery is thus necessary
 */
	. = 0xf00
	b	performance_monitor_pSeries

	. = 0xf20
	b	altivec_unavailable_pSeries

	. = 0xf40
	b	vsx_unavailable_pSeries
#ifdef CONFIG_CBE_RAS
	HSTD_EXCEPTION_PSERIES(0x1200, cbe_system_error)
#endif /* CONFIG_CBE_RAS */
	STD_EXCEPTION_PSERIES(0x1300, instruction_breakpoint)
#ifdef CONFIG_CBE_RAS
	HSTD_EXCEPTION_PSERIES(0x1600, cbe_maintenance)
#endif /* CONFIG_CBE_RAS */
	STD_EXCEPTION_PSERIES(0x1700, altivec_assist)
#ifdef CONFIG_CBE_RAS
	HSTD_EXCEPTION_PSERIES(0x1800, cbe_thermal)
#endif /* CONFIG_CBE_RAS */
	. = 0x3000

/*** pSeries interrupt support ***/

	/* moved from 0xf00 */
	STD_EXCEPTION_PSERIES(., performance_monitor)
	STD_EXCEPTION_PSERIES(., altivec_unavailable)
	STD_EXCEPTION_PSERIES(., vsx_unavailable)
/*
 * An interrupt came in while soft-disabled; clear EE in SRR1,
 * clear paca->hard_enabled and return.
 */
masked_interrupt:
	stb	r10,PACAHARDIRQEN(r13)
	mtcrf	0x80,r9
	ld	r9,PACA_EXGEN+EX_R9(r13)
	mfspr	r10,SPRN_SRR1
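	/*
	 * Clear MSR_EE without a scratch mask register: rotating left
	 * 48 brings bit 48 (EE) to bit 0, the rldicl mask clears it,
	 * and rotating 16 more (64 in total) restores the original
	 * bit layout.
	 */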
	rldicl	r10,r10,48,1		/* clear MSR_EE */
	rotldi	r10,r10,16
	mtspr	SPRN_SRR1,r10
	ld	r10,PACA_EXGEN+EX_R10(r13)
	mfspr	r13,SPRN_SPRG1
	rfid
	b	.
	.align	7
do_stab_bolted_pSeries:
	mtcrf	0x80,r12
	mfspr	r12,SPRN_SPRG2
	EXCEPTION_PROLOG_PSERIES(PACA_EXSLB, .do_stab_bolted)
#ifdef CONFIG_PPC_PSERIES
/*
 * Vectors for the FWNMI option.  Share common code.
 */
	.globl system_reset_fwnmi
	.align	7
system_reset_fwnmi:
	HMT_MEDIUM
	mtspr	SPRN_SPRG1,r13		/* save r13 */
	EXCEPTION_PROLOG_PSERIES(PACA_EXGEN, system_reset_common)

	.globl machine_check_fwnmi
	.align	7
machine_check_fwnmi:
	HMT_MEDIUM
	mtspr	SPRN_SPRG1,r13		/* save r13 */
	EXCEPTION_PROLOG_PSERIES(PACA_EXMC, machine_check_common)

#endif /* CONFIG_PPC_PSERIES */
#ifdef __DISABLED__
/*
 * This is used for when the SLB miss handler has to go virtual,
 * which doesn't happen for now anymore but will once we re-implement
 * dynamic VSIDs for shared page tables
 */
slb_miss_user_pseries:
	std	r10,PACA_EXGEN+EX_R10(r13)
	std	r11,PACA_EXGEN+EX_R11(r13)
	std	r12,PACA_EXGEN+EX_R12(r13)
	mfspr	r10,SPRG1
	ld	r11,PACA_EXSLB+EX_R9(r13)
	ld	r12,PACA_EXSLB+EX_R3(r13)
	std	r10,PACA_EXGEN+EX_R13(r13)
	std	r11,PACA_EXGEN+EX_R9(r13)
	std	r12,PACA_EXGEN+EX_R3(r13)
	clrrdi	r12,r13,32
	mfmsr	r10
	mfspr	r11,SRR0			/* save SRR0 */
	ori	r12,r12,slb_miss_user_common@l	/* virt addr of handler */
	ori	r10,r10,MSR_IR|MSR_DR|MSR_RI
	mtspr	SRR0,r12
	mfspr	r12,SRR1			/* and SRR1 */
	mtspr	SRR1,r10
	rfid
	b	.				/* prevent spec. execution */
#endif /* __DISABLED__ */
	.align	7
	.globl	__end_interrupts
__end_interrupts:

/*
 * Code from here down to __end_handlers is invoked from the
 * exception prologs above.
 */
/*** Common interrupt handlers ***/
	STD_EXCEPTION_COMMON(0x100, system_reset, .system_reset_exception)

/*
 * Machine check is different because we use a different
 * save area: PACA_EXMC instead of PACA_EXGEN.
 */
	.align	7
	.globl machine_check_common
machine_check_common:
	EXCEPTION_PROLOG_COMMON(0x200, PACA_EXMC)
	FINISH_NAP
	DISABLE_INTS
	bl	.save_nvgprs
	addi	r3,r1,STACK_FRAME_OVERHEAD
	bl	.machine_check_exception
	b	.ret_from_except

	STD_EXCEPTION_COMMON_LITE(0x900, decrementer, .timer_interrupt)
	STD_EXCEPTION_COMMON(0xa00, trap_0a, .unknown_exception)
	STD_EXCEPTION_COMMON(0xb00, trap_0b, .unknown_exception)
	STD_EXCEPTION_COMMON(0xd00, single_step, .single_step_exception)
	STD_EXCEPTION_COMMON(0xe00, trap_0e, .unknown_exception)
	STD_EXCEPTION_COMMON_IDLE(0xf00, performance_monitor, .performance_monitor_exception)
	STD_EXCEPTION_COMMON(0x1300, instruction_breakpoint, .instruction_breakpoint_exception)
#ifdef CONFIG_ALTIVEC
	STD_EXCEPTION_COMMON(0x1700, altivec_assist, .altivec_assist_exception)
#else
	STD_EXCEPTION_COMMON(0x1700, altivec_assist, .unknown_exception)
#endif
#ifdef CONFIG_CBE_RAS
	STD_EXCEPTION_COMMON(0x1200, cbe_system_error, .cbe_system_error_exception)
	STD_EXCEPTION_COMMON(0x1600, cbe_maintenance, .cbe_maintenance_exception)
	STD_EXCEPTION_COMMON(0x1800, cbe_thermal, .cbe_thermal_exception)
#endif /* CONFIG_CBE_RAS */
/*
 * Here we have detected that the kernel stack pointer is bad.
 * R9 contains the saved CR, r13 points to the paca,
 * r10 contains the (bad) kernel stack pointer,
 * r11 and r12 contain the saved SRR0 and SRR1.
 * We switch to using an emergency stack, save the registers there,
 * and call kernel_bad_stack(), which panics.
 */
bad_stack:
	ld	r1,PACAEMERGSP(r13)
	subi	r1,r1,64+INT_FRAME_SIZE
	std	r9,_CCR(r1)
	std	r10,GPR1(r1)
	std	r11,_NIP(r1)
	std	r12,_MSR(r1)
	mfspr	r11,SPRN_DAR
	mfspr	r12,SPRN_DSISR
	std	r11,_DAR(r1)
	std	r12,_DSISR(r1)
	mflr	r10
	mfctr	r11
	mfxer	r12
	std	r10,_LINK(r1)
	std	r11,_CTR(r1)
	std	r12,_XER(r1)
	SAVE_GPR(0,r1)
	SAVE_GPR(2,r1)
	SAVE_4GPRS(3,r1)
	SAVE_2GPRS(7,r1)
	SAVE_10GPRS(12,r1)
	SAVE_10GPRS(22,r1)
	lhz	r12,PACA_TRAP_SAVE(r13)
	std	r12,_TRAP(r1)
	addi	r11,r1,INT_FRAME_SIZE
	std	r11,0(r1)
	li	r12,0
	std	r12,0(r11)
	ld	r2,PACATOC(r13)
1:	addi	r3,r1,STACK_FRAME_OVERHEAD
	bl	.kernel_bad_stack
	b	1b
/*
 * Here r13 points to the paca, r9 contains the saved CR,
 * SRR0 and SRR1 are saved in r11 and r12,
 * r9 - r13 are saved in paca->exgen.
 */
	.align	7
	.globl data_access_common
data_access_common:
	mfspr	r10,SPRN_DAR
	std	r10,PACA_EXGEN+EX_DAR(r13)
	mfspr	r10,SPRN_DSISR
	stw	r10,PACA_EXGEN+EX_DSISR(r13)
	EXCEPTION_PROLOG_COMMON(0x300, PACA_EXGEN)
	ld	r3,PACA_EXGEN+EX_DAR(r13)
	lwz	r4,PACA_EXGEN+EX_DSISR(r13)
	li	r5,0x300
	b	.do_hash_page		/* Try to handle as hpte fault */
	.align	7
	.globl instruction_access_common
instruction_access_common:
	EXCEPTION_PROLOG_COMMON(0x400, PACA_EXGEN)
	ld	r3,_NIP(r1)
	andis.	r4,r12,0x5820
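	/*
	 * For an ISI the fault status is in SRR1 (r12) rather than
	 * DSISR; the andis. above extracts those bits into r4 as the
	 * DSISR-equivalent that do_hash_page expects.
	 */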
	li	r5,0x400
	b	.do_hash_page		/* Try to handle as hpte fault */
/*
 * Here is the common SLB miss user that is used when going to virtual
 * mode for SLB misses, that is currently not used
 */
#ifdef __DISABLED__
	.align	7
	.globl	slb_miss_user_common
slb_miss_user_common:
	mflr	r10
	std	r3,PACA_EXGEN+EX_DAR(r13)
	stw	r9,PACA_EXGEN+EX_CCR(r13)
	std	r10,PACA_EXGEN+EX_LR(r13)
	std	r11,PACA_EXGEN+EX_SRR0(r13)
	bl	.slb_allocate_user

	ld	r10,PACA_EXGEN+EX_LR(r13)
	ld	r3,PACA_EXGEN+EX_R3(r13)
	lwz	r9,PACA_EXGEN+EX_CCR(r13)
	ld	r11,PACA_EXGEN+EX_SRR0(r13)
	mtlr	r10
	beq-	slb_miss_fault

	andi.	r10,r12,MSR_RI		/* check for unrecoverable exception */
	beq-	unrecov_user_slb
	mfmsr	r10

.machine push
.machine "power4"
	mtcrf	0x80,r9
.machine pop

	clrrdi	r10,r10,2		/* clear RI before setting SRR0/1 */
	mtmsrd	r10,1

	mtspr	SRR0,r11
	mtspr	SRR1,r12

	ld	r9,PACA_EXGEN+EX_R9(r13)
	ld	r10,PACA_EXGEN+EX_R10(r13)
	ld	r11,PACA_EXGEN+EX_R11(r13)
	ld	r12,PACA_EXGEN+EX_R12(r13)
	ld	r13,PACA_EXGEN+EX_R13(r13)
	rfid
	b	.

slb_miss_fault:
	EXCEPTION_PROLOG_COMMON(0x380, PACA_EXGEN)
	ld	r4,PACA_EXGEN+EX_DAR(r13)
	li	r5,0
	std	r4,_DAR(r1)
	std	r5,_DSISR(r1)
	b	handle_page_fault

unrecov_user_slb:
	EXCEPTION_PROLOG_COMMON(0x4200, PACA_EXGEN)
	DISABLE_INTS
	bl	.save_nvgprs
1:	addi	r3,r1,STACK_FRAME_OVERHEAD
	bl	.unrecoverable_exception
	b	1b
#endif /* __DISABLED__ */
/*
 * r13 points to the PACA, r9 contains the saved CR,
 * r12 contains the saved SRR1, SRR0 is still ready for return
 * r3 has the faulting address
 * r9 - r13 are saved in paca->exslb.
 * r3 is saved in paca->slb_r3
 * We assume we aren't going to take any exceptions during this procedure.
 */
_GLOBAL(slb_miss_realmode)
	mflr	r10

	stw	r9,PACA_EXSLB+EX_CCR(r13)	/* save CR in exc. frame */
	std	r10,PACA_EXSLB+EX_LR(r13)	/* save LR */

	bl	.slb_allocate_realmode

	/* All done -- return from exception. */

	ld	r10,PACA_EXSLB+EX_LR(r13)
	ld	r3,PACA_EXSLB+EX_R3(r13)
	lwz	r9,PACA_EXSLB+EX_CCR(r13)	/* get saved CR */
#ifdef CONFIG_PPC_ISERIES
BEGIN_FW_FTR_SECTION
	ld	r11,PACALPPACAPTR(r13)
	ld	r11,LPPACASRR0(r11)		/* get SRR0 value */
END_FW_FTR_SECTION_IFSET(FW_FEATURE_ISERIES)
#endif /* CONFIG_PPC_ISERIES */

	mtlr	r10

	andi.	r10,r12,MSR_RI	/* check for unrecoverable exception */
	beq-	2f
.machine	push
.machine	"power4"
	mtcrf	0x80,r9
	mtcrf	0x01,r9		/* slb_allocate uses cr0 and cr7 */
.machine	pop

#ifdef CONFIG_PPC_ISERIES
BEGIN_FW_FTR_SECTION
	mtspr	SPRN_SRR0,r11
	mtspr	SPRN_SRR1,r12
END_FW_FTR_SECTION_IFSET(FW_FEATURE_ISERIES)
#endif /* CONFIG_PPC_ISERIES */
	ld	r9,PACA_EXSLB+EX_R9(r13)
	ld	r10,PACA_EXSLB+EX_R10(r13)
	ld	r11,PACA_EXSLB+EX_R11(r13)
	ld	r12,PACA_EXSLB+EX_R12(r13)
	ld	r13,PACA_EXSLB+EX_R13(r13)
	rfid
	b	.	/* prevent speculative execution */

2:
#ifdef CONFIG_PPC_ISERIES
BEGIN_FW_FTR_SECTION
	b	unrecov_slb
END_FW_FTR_SECTION_IFSET(FW_FEATURE_ISERIES)
#endif /* CONFIG_PPC_ISERIES */
	mfspr	r11,SPRN_SRR0
	clrrdi	r10,r13,32
	LOAD_HANDLER(r10,unrecov_slb)
	mtspr	SPRN_SRR0,r10
	mfmsr	r10
	ori	r10,r10,MSR_IR|MSR_DR|MSR_RI
	mtspr	SPRN_SRR1,r10
	rfid
	b	.

unrecov_slb:
	EXCEPTION_PROLOG_COMMON(0x4100, PACA_EXSLB)
	DISABLE_INTS
	bl	.save_nvgprs
1:	addi	r3,r1,STACK_FRAME_OVERHEAD
	bl	.unrecoverable_exception
	b	1b
	.align	7
	.globl hardware_interrupt_common
	.globl hardware_interrupt_entry
hardware_interrupt_common:
	EXCEPTION_PROLOG_COMMON(0x500, PACA_EXGEN)
	FINISH_NAP
hardware_interrupt_entry:
	DISABLE_INTS
BEGIN_FTR_SECTION
	bl	.ppc64_runlatch_on
END_FTR_SECTION_IFSET(CPU_FTR_CTRL)
	addi	r3,r1,STACK_FRAME_OVERHEAD
	bl	.do_IRQ
	b	.ret_from_except_lite

#ifdef CONFIG_PPC_970_NAP
power4_fixup_nap:
	andc	r9,r9,r10
	std	r9,TI_LOCAL_FLAGS(r11)
	ld	r10,_LINK(r1)		/* make idle task do the */
	std	r10,_NIP(r1)		/* equivalent of a blr */
	blr
#endif
	.align	7
	.globl alignment_common
alignment_common:
	mfspr	r10,SPRN_DAR
	std	r10,PACA_EXGEN+EX_DAR(r13)
	mfspr	r10,SPRN_DSISR
	stw	r10,PACA_EXGEN+EX_DSISR(r13)
	EXCEPTION_PROLOG_COMMON(0x600, PACA_EXGEN)
	ld	r3,PACA_EXGEN+EX_DAR(r13)
	lwz	r4,PACA_EXGEN+EX_DSISR(r13)
	std	r3,_DAR(r1)
	std	r4,_DSISR(r1)
	bl	.save_nvgprs
	addi	r3,r1,STACK_FRAME_OVERHEAD
	ENABLE_INTS
	bl	.alignment_exception
	b	.ret_from_except

	.align	7
	.globl program_check_common
program_check_common:
	EXCEPTION_PROLOG_COMMON(0x700, PACA_EXGEN)
	bl	.save_nvgprs
	addi	r3,r1,STACK_FRAME_OVERHEAD
	ENABLE_INTS
	bl	.program_check_exception
	b	.ret_from_except

	.align	7
	.globl fp_unavailable_common
fp_unavailable_common:
	EXCEPTION_PROLOG_COMMON(0x800, PACA_EXGEN)
	bne	1f			/* if from user, just load it up */
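	/*
	 * cr0 still holds the MSR_PR test from EXCEPTION_PROLOG_COMMON,
	 * so the bne above means the fault came from user mode and the
	 * FP state can simply be loaded up.
	 */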
	bl	.save_nvgprs
	addi	r3,r1,STACK_FRAME_OVERHEAD
	ENABLE_INTS
	bl	.kernel_fp_unavailable_exception
	BUG_OPCODE
1:	bl	.load_up_fpu
	b	fast_exception_return

	.align	7
	.globl altivec_unavailable_common
altivec_unavailable_common:
	EXCEPTION_PROLOG_COMMON(0xf20, PACA_EXGEN)
#ifdef CONFIG_ALTIVEC
BEGIN_FTR_SECTION
	beq	1f
	bl	.load_up_altivec
	b	fast_exception_return
1:
END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
#endif
	bl	.save_nvgprs
	addi	r3,r1,STACK_FRAME_OVERHEAD
	ENABLE_INTS
	bl	.altivec_unavailable_exception
	b	.ret_from_except
	.align	7
	.globl vsx_unavailable_common
vsx_unavailable_common:
	EXCEPTION_PROLOG_COMMON(0xf40, PACA_EXGEN)
#ifdef CONFIG_VSX
BEGIN_FTR_SECTION
	bne	.load_up_vsx
1:
END_FTR_SECTION_IFSET(CPU_FTR_VSX)
#endif
	bl	.save_nvgprs
	addi	r3,r1,STACK_FRAME_OVERHEAD
	ENABLE_INTS
	bl	.vsx_unavailable_exception
	b	.ret_from_except

	.align	7
	.globl	__end_handlers
__end_handlers:
/*
 * Return from an exception with minimal checks.
 * The caller is assumed to have done EXCEPTION_PROLOG_COMMON.
 * If interrupts have been enabled, or anything has been
 * done that might have changed the scheduling status of
 * any task or sent any task a signal, you should use
 * ret_from_except or ret_from_except_lite instead of this.
 */
fast_exc_return_irq:			/* restores irq state too */
	ld	r3,SOFTE(r1)
	TRACE_AND_RESTORE_IRQ(r3);
	ld	r12,_MSR(r1)
	rldicl	r4,r12,49,63		/* get MSR_EE to LSB */
	stb	r4,PACAHARDIRQEN(r13)	/* restore paca->hard_enabled */
	b	1f
	.globl	fast_exception_return
fast_exception_return:
	ld	r12,_MSR(r1)
1:	ld	r11,_NIP(r1)
	andi.	r3,r12,MSR_RI		/* check if RI is set */
	beq-	unrecov_fer

#ifdef CONFIG_VIRT_CPU_ACCOUNTING
	andi.	r3,r12,MSR_PR
	beq	2f
	ACCOUNT_CPU_USER_EXIT(r3, r4)
2:
#endif

	ld	r3,_CCR(r1)
	ld	r4,_LINK(r1)
	ld	r5,_CTR(r1)
	ld	r6,_XER(r1)
	mtcr	r3
	mtlr	r4
	mtctr	r5
	mtxer	r6
	REST_GPR(0, r1)
	REST_8GPRS(2, r1)
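	/*
	 * Drop EE and RI in the live MSR before loading SRR0/SRR1, so
	 * no interrupt can arrive between here and the rfid: the
	 * rotate pair clears EE (as in masked_interrupt) and the
	 * rldicr clears RI (LE is already 0).
	 */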
	mfmsr	r10
	rldicl	r10,r10,48,1		/* clear EE */
	rldicr	r10,r10,16,61		/* clear RI (LE is 0 already) */
	mtmsrd	r10,1

	mtspr	SPRN_SRR1,r12
	mtspr	SPRN_SRR0,r11
	REST_4GPRS(10, r1)
	ld	r1,GPR1(r1)
	rfid
	b	.	/* prevent speculative execution */

unrecov_fer:
	bl	.save_nvgprs
1:	addi	r3,r1,STACK_FRAME_OVERHEAD
	bl	.unrecoverable_exception
	b	1b
#ifdef CONFIG_ALTIVEC
/*
 * load_up_altivec(unused, unused, tsk)
 * Disable VMX for the task which had it previously,
 * and save its vector registers in its thread_struct.
 * Enables the VMX for use in the kernel on return.
 * On SMP we know the VMX is free, since we give it up every
 * switch (ie, no lazy save of the vector registers).
 * On entry: r13 == 'current' && last_task_used_altivec != 'current'
 */
_STATIC(load_up_altivec)
	mfmsr	r5			/* grab the current MSR */
	oris	r5,r5,MSR_VEC@h
	mtmsrd	r5			/* enable use of VMX now */
	isync

/*
 * For SMP, we don't do lazy VMX switching because it just gets too
 * horrendously complex, especially when a task switches from one CPU
 * to another.  Instead we call giveup_altivec in switch_to.
 * VRSAVE isn't dealt with here, that is done in the normal context
 * switch code. Note that we could rely on vrsave value to eventually
 * avoid saving all of the VREGs here...
 */
#ifndef CONFIG_SMP
	ld	r3,last_task_used_altivec@got(r2)
	ld	r4,0(r3)
	cmpdi	0,r4,0
	beq	1f
	/* Save VMX state to last_task_used_altivec's THREAD struct */
	addi	r4,r4,THREAD
	SAVE_32VRS(0,r5,r4)
	mfvscr	vr0
	li	r10,THREAD_VSCR
	stvx	vr0,r10,r4
	/* Disable VMX for last_task_used_altivec */
	ld	r5,PT_REGS(r4)
	ld	r4,_MSR-STACK_FRAME_OVERHEAD(r5)
	lis	r6,MSR_VEC@h
	andc	r4,r4,r6
	std	r4,_MSR-STACK_FRAME_OVERHEAD(r5)
1:
#endif /* CONFIG_SMP */
	/* Hack: if we get an altivec unavailable trap with VRSAVE
	 * set to all zeros, we assume this is a broken application
	 * that fails to set it properly, and thus we switch it to
	 * all 1's
	 */
	mfspr	r4,SPRN_VRSAVE
	cmpdi	0,r4,0
	bne+	1f
	li	r4,-1
	mtspr	SPRN_VRSAVE,r4
1:
	/* enable use of VMX after return */
	ld	r4,PACACURRENT(r13)
	addi	r5,r4,THREAD		/* Get THREAD */
	oris	r12,r12,MSR_VEC@h
	std	r12,_MSR(r1)
	li	r4,1
	li	r10,THREAD_VSCR
	stw	r4,THREAD_USED_VR(r5)
	lvx	vr0,r10,r5
	mtvscr	vr0
	REST_32VRS(0,r4,r5)
#ifndef CONFIG_SMP
	/* Update last_task_used_altivec to 'current' */
	subi	r4,r5,THREAD		/* Back to 'current' */
	std	r4,0(r3)
#endif /* CONFIG_SMP */
	/* restore registers and return */
	blr
#endif /* CONFIG_ALTIVEC */
#ifdef CONFIG_VSX
/*
 * load_up_vsx(unused, unused, tsk)
 * Disable VSX for the task which had it previously,
 * and save its vector registers in its thread_struct.
 * Reuse the fp and vsx saves, but first check to see if they have
 * been saved already.
 * On entry: r13 == 'current' && last_task_used_vsx != 'current'
 */
_STATIC(load_up_vsx)
/* Load FP and VSX registers if they haven't been done yet */
	andi.	r5,r12,MSR_FP
	beql+	load_up_fpu		/* skip if already loaded */
	andis.	r5,r12,MSR_VEC@h
	beql+	load_up_altivec		/* skip if already loaded */

#ifndef CONFIG_SMP
	ld	r3,last_task_used_vsx@got(r2)
	ld	r4,0(r3)
	cmpdi	0,r4,0
	beq	1f
	/* Disable VSX for last_task_used_vsx */
	addi	r4,r4,THREAD
	ld	r5,PT_REGS(r4)
	ld	r4,_MSR-STACK_FRAME_OVERHEAD(r5)
	lis	r6,MSR_VSX@h
	andc	r6,r4,r6
	std	r6,_MSR-STACK_FRAME_OVERHEAD(r5)
1:
#endif /* CONFIG_SMP */
	ld	r4,PACACURRENT(r13)
	addi	r4,r4,THREAD		/* Get THREAD */
	li	r6,1
	stw	r6,THREAD_USED_VSR(r4)	/* ... also set thread used vsr */
	/* enable use of VSX after return */
	oris	r12,r12,MSR_VSX@h
	std	r12,_MSR(r1)
#ifndef CONFIG_SMP
	/* Update last_task_used_vsx to 'current' */
	ld	r4,PACACURRENT(r13)
	std	r4,0(r3)
#endif /* CONFIG_SMP */
	b	fast_exception_return
#endif /* CONFIG_VSX */
/*
 * Hash table stuff
 */
	.align	7
_STATIC(do_hash_page)
	std	r3,_DAR(r1)
	std	r4,_DSISR(r1)

	andis.	r0,r4,0xa450		/* weird error? */
	bne-	handle_page_fault	/* if not, try to insert a HPTE */
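	/*
	 * The 0xa450 mask above covers the DSISR error bits that
	 * cannot be fixed by just inserting an HPTE; those cases go
	 * straight to handle_page_fault.
	 */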
BEGIN_FTR_SECTION
	andis.	r0,r4,0x0020		/* Is it a segment table fault? */
	bne-	do_ste_alloc		/* If so handle it */
END_FTR_SECTION_IFCLR(CPU_FTR_SLB)

	/*
	 * On iSeries, we soft-disable interrupts here, then
	 * hard-enable interrupts so that the hash_page code can spin on
	 * the hash_table_lock without problems on a shared processor.
	 */
	DISABLE_INTS

	/*
	 * Currently, trace_hardirqs_off() will be called by DISABLE_INTS
	 * and will clobber volatile registers when irq tracing is enabled
	 * so we need to reload them. It may be possible to be smarter here
	 * and move the irq tracing elsewhere but let's keep it simple for
	 * now
	 */
#ifdef CONFIG_TRACE_IRQFLAGS
	ld	r3,_DAR(r1)
	ld	r4,_DSISR(r1)
	ld	r5,_TRAP(r1)
	ld	r12,_MSR(r1)
	clrrdi	r5,r5,4
#endif /* CONFIG_TRACE_IRQFLAGS */
	/*
	 * We need to set the _PAGE_USER bit if MSR_PR is set or if we are
	 * accessing a userspace segment (even from the kernel). We assume
	 * kernel addresses always have the high bit set.
	 */
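	/*
	 * The rotldi below moves the high address bit into the MSR_PR
	 * bit position, so the orc yields MSR_PR | ~high_bit: non-zero
	 * (-> _PAGE_USER) for user mode or a user-segment address,
	 * zero for a kernel access to a kernel address.
	 */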
	rlwinm	r4,r4,32-25+9,31-9,31-9	/* DSISR_STORE -> _PAGE_RW */
	rotldi	r0,r3,15		/* Move high bit into MSR_PR posn */
	orc	r0,r12,r0		/* MSR_PR | ~high_bit */
	rlwimi	r4,r0,32-13,30,30	/* becomes _PAGE_USER access bit */
	ori	r4,r4,1			/* add _PAGE_PRESENT */
	rlwimi	r4,r5,22+2,31-2,31-2	/* Set _PAGE_EXEC if trap is 0x400 */

	/*
	 * r3 contains the faulting address
	 * r4 contains the required access permissions
	 * r5 contains the trap number
	 *
	 * at return r3 = 0 for success
	 */
	bl	.hash_page		/* build HPTE if possible */
	cmpdi	r3,0			/* see if hash_page succeeded */
BEGIN_FW_FTR_SECTION
	/*
	 * If we had interrupts soft-enabled at the point where the
	 * DSI/ISI occurred, and an interrupt came in during hash_page,
	 * handle it now.
	 * We jump to ret_from_except_lite rather than fast_exception_return
	 * because ret_from_except_lite will check for and handle pending
	 * interrupts if necessary.
	 */
	beq	13f
END_FW_FTR_SECTION_IFSET(FW_FEATURE_ISERIES)

BEGIN_FW_FTR_SECTION
	/*
	 * Here we have interrupts hard-disabled, so it is sufficient
	 * to restore paca->{soft,hard}_enable and get out.
	 */
	beq	fast_exc_return_irq	/* Return from exception on success */
END_FW_FTR_SECTION_IFCLR(FW_FEATURE_ISERIES)

	/* For a hash failure, we don't bother re-enabling interrupts */
	ble-	12f

	/*
	 * hash_page couldn't handle it, set soft interrupt enable back
	 * to what it was before the trap.  Note that .raw_local_irq_restore
	 * handles any interrupts pending at this point.
	 */
	ld	r3,SOFTE(r1)
	TRACE_AND_RESTORE_IRQ_PARTIAL(r3, 11f)
	bl	.raw_local_irq_restore
	b	11f
/* Here we have a page fault that hash_page can't handle. */
handle_page_fault:
	ENABLE_INTS
11:	ld	r4,_DAR(r1)
	ld	r5,_DSISR(r1)
	addi	r3,r1,STACK_FRAME_OVERHEAD
	bl	.do_page_fault
	cmpdi	r3,0
	beq+	13f
	bl	.save_nvgprs
	mr	r5,r3
	addi	r3,r1,STACK_FRAME_OVERHEAD
	lwz	r4,_DAR(r1)
	bl	.bad_page_fault
	b	.ret_from_except

13:	b	.ret_from_except_lite

/* We have a page fault that hash_page could handle but HV refused
 * the PTE insertion
 */
12:	bl	.save_nvgprs
	mr	r5,r3
	addi	r3,r1,STACK_FRAME_OVERHEAD
	ld	r4,_DAR(r1)
	bl	.low_hash_fault
	b	.ret_from_except

	/* here we have a segment miss */
do_ste_alloc:
	bl	.ste_allocate		/* try to insert stab entry */
	cmpdi	r3,0
	bne-	handle_page_fault
	b	fast_exception_return
/*
 * r13 points to the PACA, r9 contains the saved CR,
 * r11 and r12 contain the saved SRR0 and SRR1.
 * r9 - r13 are saved in paca->exslb.
 * We assume we aren't going to take any exceptions during this procedure.
 * We assume (DAR >> 60) == 0xc.
 */
	.align	7
_GLOBAL(do_stab_bolted)
	stw	r9,PACA_EXSLB+EX_CCR(r13)	/* save CR in exc. frame */
	std	r11,PACA_EXSLB+EX_SRR0(r13)	/* save SRR0 in exc. frame */

	/* Hash to the primary group */
	ld	r10,PACASTABVIRT(r13)
	mfspr	r11,SPRN_DAR
	srdi	r11,r11,28
	rldimi	r10,r11,7,52	/* r10 = first ste of the group */

	/* Calculate VSID */
	/* This is a kernel address, so protovsid = ESID */
	ASM_VSID_SCRAMBLE(r11, r9, 256M)
	rldic	r9,r11,12,16	/* r9 = vsid << 12 */
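	/*
	 * ASM_VSID_SCRAMBLE computes vsid = (protovsid * VSID multiplier)
	 * mod 2^36 - 1 for 256M segments, using r9 as scratch; the rldic
	 * then shifts the vsid into place for the ste's vsid field.
	 */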
	/* Search the primary group for a free entry */
1:	ld	r11,0(r10)	/* Test valid bit of the current ste	*/
	andi.	r11,r11,0x80
	beq	2f
	addi	r10,r10,16
	andi.	r11,r10,0x70
	bne	1b

	/* Stick for only searching the primary group for now.		*/
	/* At least for now, we use a very simple random castout scheme */
	/* Use the TB as a random number ; OR in 1 to avoid entry 0	*/
	mftb	r11
	rldic	r11,r11,4,57	/* r11 = (r11 << 4) & 0x70 */
	ori	r11,r11,0x10

	/* r10 currently points to an ste one past the group of interest */
	/* make it point to the randomly selected entry			*/
	subi	r10,r10,128
	or	r10,r10,r11	/* r10 is the entry to invalidate	*/

	isync			/* mark the entry invalid		*/
	ld	r11,0(r10)
	rldicl	r11,r11,56,1	/* clear the valid bit */
	rotldi	r11,r11,8
	std	r11,0(r10)
	sync

	clrrdi	r11,r11,28	/* Get the esid part of the ste		*/
	slbie	r11

2:	std	r9,8(r10)	/* Store the vsid part of the ste	*/
	eieio

	mfspr	r11,SPRN_DAR	/* Get the new esid			*/
	clrrdi	r11,r11,28	/* Permits a full 32b of ESID		*/
	ori	r11,r11,0x90	/* Turn on valid and kp			*/
	std	r11,0(r10)	/* Put new entry back into the stab	*/

	sync

	/* All done -- return from exception. */
	lwz	r9,PACA_EXSLB+EX_CCR(r13)	/* get saved CR */
	ld	r11,PACA_EXSLB+EX_SRR0(r13)	/* get saved SRR0 */

	andi.	r10,r12,MSR_RI
	beq-	unrecov_slb

	mtcrf	0x80,r9			/* restore CR */

	mfmsr	r10
	clrrdi	r10,r10,2
	mtmsrd	r10,1

	mtspr	SPRN_SRR0,r11
	mtspr	SPRN_SRR1,r12
	ld	r9,PACA_EXSLB+EX_R9(r13)
	ld	r10,PACA_EXSLB+EX_R10(r13)
	ld	r11,PACA_EXSLB+EX_R11(r13)
	ld	r12,PACA_EXSLB+EX_R12(r13)
	ld	r13,PACA_EXSLB+EX_R13(r13)
	rfid
	b	.	/* prevent speculative execution */
/*
 * Space for CPU0's segment table.
 *
 * On iSeries, the hypervisor must fill in at least one entry before
 * we get control (with relocate on).  The address is given to the hv
 * as a page number (see xLparMap below), so this must be at a
 * fixed address (the linker can't compute (u64)&initial_stab >>
 * PAGE_SHIFT).
 */
	. = STAB0_OFFSET	/* 0x6000 */
.globl initial_stab
initial_stab:
.space 4096
#ifdef CONFIG_PPC_PSERIES
/*
 * Data area reserved for FWNMI option.
 * This address (0x7000) is fixed by the RPA.
 */
	. = 0x7000
.globl fwnmi_data_area
fwnmi_data_area:

#endif /* CONFIG_PPC_PSERIES */
/* iSeries does not use the FWNMI stuff, so it is safe to put
 * this here, even if we later allow kernels that will boot on
 * both pSeries and iSeries */
#ifdef CONFIG_PPC_ISERIES
	. = LPARMAP_PHYS
.globl xLparMap
xLparMap:
.quad HvEsidsToMap /* xNumberEsids */
.quad HvRangesToMap /* xNumberRanges */
.quad STAB0_PAGE /* xSegmentTableOffs */
.zero 40 /* xRsvd */
/* xEsids (HvEsidsToMap entries of 2 quads) */
.quad PAGE_OFFSET_ESID /* xKernelEsid */
.quad PAGE_OFFSET_VSID /* xKernelVsid */
.quad VMALLOC_START_ESID /* xKernelEsid */
.quad VMALLOC_START_VSID /* xKernelVsid */
/* xRanges (HvRangesToMap entries of 3 quads) */
.quad HvPagesToMap /* xPages */
.quad 0 /* xOffset */
	.quad	PAGE_OFFSET_VSID << (SID_SHIFT - HW_PAGE_SHIFT)	/* xVPN */

#endif /* CONFIG_PPC_ISERIES */
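/*
 * For reference, the layout the hypervisor reads here, as a C sketch
 * of struct LparMap from asm/iseries/lpar_map.h (field order and
 * types are as I recall them, so treat the details as assumptions):
 *
 *	struct LparMap {
 *		u64	xNumberEsids;		// ESID/VSID pairs below
 *		u64	xNumberRanges;		// memory ranges below
 *		u64	xSegmentTableOffs;	// seg table page number
 *		u64	xRsvd[5];
 *		struct { u64 xKernelEsid, xKernelVsid; } xEsids[];
 *		// then xNumberRanges entries of { u64 xPages, xOffset, xVPN; }
 *	};
 */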
#ifdef CONFIG_PPC_PSERIES
	. = 0x8000
#endif /* CONFIG_PPC_PSERIES */
/*
 * On pSeries and most other platforms, secondary processors spin
 * in the following code.
 * At entry, r3 = this processor's number (physical cpu id)
 */
_GLOBAL(generic_secondary_smp_init)
	mr	r24,r3

	/* turn on 64-bit mode */
	bl	.enable_64b_mode

	/* Set up a paca value for this processor. Since we have the
	 * physical cpu id in r24, we need to search the pacas to find
	 * which logical id maps to our physical one.
	 */
	LOAD_REG_IMMEDIATE(r13, paca)	/* Get base vaddr of paca array	 */
	li	r5,0			/* logical cpu id		 */
1:	lhz	r6,PACAHWCPUID(r13)	/* Load HW procid from paca	 */
	cmpw	r6,r24			/* Compare to our id		 */
	beq	2f
	addi	r13,r13,PACA_SIZE	/* Loop to next PACA on miss	 */
	addi	r5,r5,1
	cmpwi	r5,NR_CPUS
	blt	1b

	mr	r3,r24			/* not found, copy phys to r3	 */
	b	.kexec_wait		/* next kernel might do better	 */

2:	mtspr	SPRN_SPRG3,r13		/* Save vaddr of paca in SPRG3	 */
	/* From now on, r24 is expected to be logical cpuid */
	mr	r24,r5
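/*
 * The search above, as a C sketch (illustrative only; paca[] and
 * hw_cpu_id are real fields in this era, the helper is invented):
 *
 *	static int logical_id(int phys_id)
 *	{
 *		int cpu;
 *
 *		for (cpu = 0; cpu < NR_CPUS; cpu++)
 *			if (paca[cpu].hw_cpu_id == phys_id)
 *				return cpu;	// r13 <- &paca[cpu]
 *		return -1;			// not found: kexec_wait
 *	}
 */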
3:	HMT_LOW
	lbz	r23,PACAPROCSTART(r13)	/* Test if this processor should */
					/* start.			 */
#ifndef CONFIG_SMP
	b	3b			/* Never go on non-SMP		 */
#else
	cmpwi	0,r23,0
	beq	3b			/* Loop until told to go	 */

	sync				/* order paca.run and cur_cpu_spec */

	/* See if we need to call a cpu state restore handler */
	LOAD_REG_IMMEDIATE(r23, cur_cpu_spec)
	ld	r23,0(r23)
	ld	r23,CPU_SPEC_RESTORE(r23)
	cmpdi	0,r23,0
	beq	4f
	ld	r23,0(r23)
	mtctr	r23
	bctrl

4:	/* Create a temp kernel stack for use before relocation is on.	*/
	ld	r1,PACAEMERGSP(r13)
	subi	r1,r1,STACK_FRAME_OVERHEAD

	b	__secondary_start
#endif
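/*
 * The release protocol above, as a C sketch (cpu_start and
 * cpu_restore are the real paca/cpu_spec fields behind the asm
 * offsets; the helper calls are illustrative):
 *
 *	while (!get_paca()->cpu_start)		// master sets this flag
 *		;				// spin at HMT_LOW priority
 *	sync();					// order flag vs. cur_cpu_spec
 *	if (cur_cpu_spec->cpu_restore)		// optional SPR re-init hook
 *		cur_cpu_spec->cpu_restore();
 */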
_STATIC(__mmu_off)
	mfmsr	r3
	andi.	r0,r3,MSR_IR|MSR_DR
	beqlr
	andc	r3,r3,r0
	mtspr	SPRN_SRR0,r4
	mtspr	SPRN_SRR1,r3
	sync
	rfid
	b	.	/* prevent speculative execution */
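/*
 * What __mmu_off does, in pseudo-C (a sketch; SRR0/SRR1/rfid stand
 * for the instructions above, r4 holds the address to continue at):
 *
 *	unsigned long msr = mfmsr();
 *
 *	if (msr & (MSR_IR | MSR_DR)) {		// translation still on?
 *		SRR0 = r4;			// where to resume
 *		SRR1 = msr & ~(MSR_IR | MSR_DR);
 *		rfid();				// jump with MMU off
 *	}
 */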
/*
 * Here is our main kernel entry point. We currently support two kinds
 * of entries, depending on the value of r5.
 *
 *   r5 != NULL -> OF entry, we go to prom_init, "legacy" parameter content
 *                 in r3...r7
 *
 *   r5 == NULL -> kexec style entry. r3 is a physical pointer to the
 *                 DT block, r4 is a physical pointer to the kernel itself
 *
 */
_GLOBAL(__start_initialization_multiplatform)
	/*
	 * Are we booted from a PROM OF-type client interface?
	 */
	cmpldi	cr0,r5,0
	beq	1f
	b	.__boot_from_prom		/* yes -> prom */
1:
	/* Save parameters */
	mr	r31,r3
	mr	r30,r4

	/* Make sure we are running in 64 bits mode */
	bl	.enable_64b_mode

	/* Setup some critical 970 SPRs before switching MMU off */
	mfspr	r0,SPRN_PVR
	srwi	r0,r0,16
	cmpwi	r0,0x39		/* 970 */
	beq	1f
	cmpwi	r0,0x3c		/* 970FX */
	beq	1f
	cmpwi	r0,0x44		/* 970MP */
	beq	1f
	cmpwi	r0,0x45		/* 970GX */
	bne	2f
1:	bl	.__cpu_preinit_ppc970
2:

	/* Switch off MMU if not already */
	LOAD_REG_IMMEDIATE(r4, .__after_prom_start - KERNELBASE)
	add	r4,r4,r30
	bl	.__mmu_off
	b	.__after_prom_start
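/*
 * The PVR test above, in C terms (a sketch; the version numbers are
 * taken straight from the comparisons above, mfspr() is illustrative):
 *
 *	unsigned int ver = mfspr(SPRN_PVR) >> 16;
 *
 *	switch (ver) {
 *	case 0x39:	// 970
 *	case 0x3c:	// 970FX
 *	case 0x44:	// 970MP
 *	case 0x45:	// 970GX
 *		__cpu_preinit_ppc970();
 *	}
 */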
_INIT_STATIC(__boot_from_prom)
	/* Save parameters */
	mr	r31,r3
	mr	r30,r4
	mr	r29,r5
	mr	r28,r6
	mr	r27,r7
	/*
	 * Align the stack to 16-byte boundary.
	 * Depending on the size and layout of the ELF sections in the initial
	 * boot binary, the stack pointer may be unaligned on PowerMac.
	 */
	rldicr	r1,r1,0,59

	/* Make sure we are running in 64 bits mode */
	bl	.enable_64b_mode

	/* put a relocation offset into r3 */
	bl	.reloc_offset

	LOAD_REG_IMMEDIATE(r2,__toc_start)
	addi	r2,r2,0x4000
	addi	r2,r2,0x4000

	/* Relocate the TOC from a virt addr to a real addr */
	add	r2,r2,r3

	/* Restore parameters */
	mr	r3,r31
	mr	r4,r30
	mr	r5,r29
	mr	r6,r28
	mr	r7,r27

	/* Do all of the interaction with OF client interface */
	bl	.prom_init
	/* We never return */
	trap
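/*
 * The stack/TOC fixups above, as C-style arithmetic (a sketch;
 * reloc_offset() names the helper whose result lands in r3):
 *
 *	r1 &= ~0xfUL;			// rldicr r1,r1,0,59: 16-byte align
 *	r2 = __toc_start + 0x8000	// TOC ptr points mid-TOC; done as
 *		+ reloc_offset();	// two addi because 0x8000 does not
 *					// fit a signed 16-bit immediate
 */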
_STATIC(__after_prom_start)

/*
 * We need to run with __start at physical address PHYSICAL_START.
 * This will leave some code in the first 256B of
 * real memory, which are reserved for software use.
 * The remainder of the first page is loaded with the fixed
 * interrupt vectors.  The next two pages are filled with
 * unknown exception placeholders.
 *
 * Note: This process overwrites the OF exception vectors.
 *	r26 == relocation offset
 *	r27 == KERNELBASE
 */
	bl	.reloc_offset
	mr	r26,r3
	LOAD_REG_IMMEDIATE(r27, KERNELBASE)

	LOAD_REG_IMMEDIATE(r3, PHYSICAL_START)	/* target addr */

	// XXX FIXME: Use phys returned by OF (r30)
	add	r4,r27,r26		/* source addr			 */
					/* current address of _start	 */
					/*   i.e. where we are running	 */
					/*	the source addr		 */

	cmpdi	r4,0			/* In some cases the loader may  */
	bne	1f
	b	.start_here_multiplatform /* have already put us at zero */
					/* so we can skip the copy.	 */
1:	LOAD_REG_IMMEDIATE(r5,copy_to_here) /* # bytes of memory to copy */
	sub	r5,r5,r27

	li	r6,0x100		/* Start offset, the first 0x100 */
					/* bytes were copied earlier.	 */

	bl	.copy_and_flush		/* copy the first n bytes	 */
					/* this includes the code being	 */
					/* executed here.		 */

	LOAD_REG_IMMEDIATE(r0, 4f)	/* Jump to the copy of this code */
	mtctr	r0			/* that we just made/relocated	 */
	bctr

4:	LOAD_REG_IMMEDIATE(r5,klimit)
	add	r5,r5,r26
	ld	r5,0(r5)		/* get the value of klimit */
	sub	r5,r5,r27
	bl	.copy_and_flush		/* copy the rest */
	b	.start_here_multiplatform
/*
 * Copy routine used to copy the kernel to start at physical address 0
 * and flush and invalidate the caches as needed.
 * r3 = dest addr, r4 = source addr, r5 = copy limit, r6 = start offset
 * on exit, r3, r4, r5 are unchanged, r6 is updated to be >= r5.
 *
 * Note: this routine *only* clobbers r0, r6 and lr
 */
_GLOBAL(copy_and_flush)
	addi	r5,r5,-8
	addi	r6,r6,-8
4:	li	r0,8			/* Use the smallest common	*/
					/* denominator cache line	*/
					/* size.  This results in	*/
					/* extra cache line flushes	*/
					/* but operation is correct.	*/
					/* Can't get cache line size	*/
					/* from NACA as it is being	*/
					/* moved too.			*/

	mtctr	r0			/* put # words/line in ctr	*/
3:	addi	r6,r6,8			/* copy a cache line		*/
	ldx	r0,r6,r4
	stdx	r0,r6,r3
	bdnz	3b
	dcbst	r6,r3			/* write it to memory		*/
	sync
	icbi	r6,r3			/* flush the icache line	*/
	cmpld	0,r6,r5
	blt	4b
	sync
	addi	r5,r5,8
	addi	r6,r6,8
	blr
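/*
 * copy_and_flush in rough C (a sketch under the same register
 * contract; dcbst/icbi/sync stand in for the cache instructions):
 *
 *	void copy_and_flush(char *dst, char *src, unsigned long limit,
 *			    unsigned long off)
 *	{
 *		int i;
 *
 *		for (; off < limit; off += 64) {  // one "line" = 8 dwords
 *			for (i = 0; i < 64; i += 8)
 *				*(u64 *)(dst+off+i) = *(u64 *)(src+off+i);
 *			dcbst(dst + off);  // push the dcache line out
 *			sync();		   // wait for it to reach memory
 *			icbi(dst + off);   // toss the stale icache line
 *		}
 *	}
 */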
.align 8
copy_to_here:
#ifdef CONFIG_SMP
#ifdef CONFIG_PPC_PMAC
/*
 * On PowerMac, secondary processors start from the reset vector, which
 * is temporarily turned into a call to one of the functions below.
 */
	.section ".text";
	.align 2 ;
.globl __secondary_start_pmac_0
__secondary_start_pmac_0:
	/* NB the entries for cpus 0, 1, 2 must each occupy 8 bytes. */
	li	r24,0
	b	1f
	li	r24,1
	b	1f
	li	r24,2
	b	1f
	li	r24,3
1:
_GLOBAL(pmac_secondary_start)
	/* turn on 64-bit mode */
	bl	.enable_64b_mode

	/* Copy some CPU settings from CPU 0 */
	bl	.__restore_cpu_ppc970

	/* pSeries does this early, though I don't think we really need it */
	mfmsr	r3
	ori	r3,r3,MSR_RI
	mtmsrd	r3			/* RI on */

	/* Set up a paca value for this processor. */
	LOAD_REG_IMMEDIATE(r4, paca)	/* Get base vaddr of paca array	*/
	mulli	r13,r24,PACA_SIZE	/* Calculate vaddr of right paca */
	add	r13,r13,r4		/* for this processor.		*/
	mtspr	SPRN_SPRG3,r13		/* Save vaddr of paca in SPRG3	*/

	/* Create a temp kernel stack for use before relocation is on.	*/
	ld	r1,PACAEMERGSP(r13)
	subi	r1,r1,STACK_FRAME_OVERHEAD

	b	__secondary_start

#endif /* CONFIG_PPC_PMAC */
/*
 * This function is called after the master CPU has released the
 * secondary processors.  The execution environment is relocation off.
 * The paca for this processor has the following fields initialized at
 * this point:
 *   1. Processor number
 *   2. Segment table pointer (virtual address)
 * On entry the following are set:
 *   r1	   = stack pointer.  vaddr for iSeries, raddr (temp stack) for pSeries
 *   r24   = cpu# (in Linux terms)
 *   r13   = paca virtual address
 *   SPRG3 = paca virtual address
 */
.globl __secondary_start
__secondary_start:
	/* Set thread priority to MEDIUM */
	HMT_MEDIUM

	/* Load TOC */
	ld	r2,PACATOC(r13)

	/* Do early setup for that CPU (stab, slb, hash table pointer) */
	bl	.early_setup_secondary

	/* Initialize the kernel stack.  Just a repeat for iSeries.	 */
	LOAD_REG_ADDR(r3, current_set)
	sldi	r28,r24,3		/* get current_set[cpu#]	 */
	ldx	r1,r3,r28
	addi	r1,r1,THREAD_SIZE-STACK_FRAME_OVERHEAD
	std	r1,PACAKSAVE(r13)

	/* Clear backchain so we get nice backtraces */
	li	r7,0
	mtlr	r7

	/* enable MMU and jump to start_secondary */
	LOAD_REG_ADDR(r3, .start_secondary_prolog)
	LOAD_REG_IMMEDIATE(r4, MSR_KERNEL)
#ifdef CONFIG_PPC_ISERIES
BEGIN_FW_FTR_SECTION
	ori	r4,r4,MSR_EE
	li	r8,1
	stb	r8,PACAHARDIRQEN(r13)
END_FW_FTR_SECTION_IFSET(FW_FEATURE_ISERIES)
#endif
BEGIN_FW_FTR_SECTION
	stb	r7,PACAHARDIRQEN(r13)
END_FW_FTR_SECTION_IFCLR(FW_FEATURE_ISERIES)

	stb	r7,PACASOFTIRQEN(r13)
	mtspr	SPRN_SRR0,r3
	mtspr	SPRN_SRR1,r4
	rfid
	b	.	/* prevent speculative execution */
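/*
 * The PACASOFTIRQEN/PACAHARDIRQEN stores above seed the lazy
 * interrupt-disable state for this CPU.  In C terms (a sketch;
 * soft_enabled/hard_enabled are the real paca fields behind those
 * offsets, the helper calls are illustrative):
 *
 *	get_paca()->soft_enabled = 0;	// irqs soft-disabled (r7 == 0)
 *	// iSeries enters with EE on, so it is hard-enabled (r8 == 1);
 *	// everyone else enters start_secondary hard-disabled:
 *	get_paca()->hard_enabled =
 *		firmware_has_feature(FW_FEATURE_ISERIES);
 */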
/*
 * Running with relocation on at this point.  All we want to do is
 * zero the stack back-chain pointer before going into C code.
 */
_GLOBAL(start_secondary_prolog)
	li	r3,0
	std	r3,0(r1)		/* Zero the stack frame pointer	*/
	bl	.start_secondary
	b	.
#endif
/*
 * This subroutine clobbers r11 and r12
 */
_GLOBAL(enable_64b_mode)
	mfmsr	r11			/* grab the current MSR */
	li	r12,1
	rldicr	r12,r12,MSR_SF_LG,(63-MSR_SF_LG)
	or	r11,r11,r12
	li	r12,1
	rldicr	r12,r12,MSR_ISF_LG,(63-MSR_ISF_LG)
	or	r11,r11,r12
	mtmsrd	r11
	isync
	blr
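/*
 * Equivalent C for the MSR update above (a sketch; MSR_SF_LG and
 * MSR_ISF_LG are the real bit numbers from asm/reg.h, the helpers
 * are illustrative):
 *
 *	unsigned long msr = mfmsr();
 *
 *	msr |= 1UL << MSR_SF_LG;	// 64-bit mode
 *	msr |= 1UL << MSR_ISF_LG;	// 64-bit interrupt mode
 *	mtmsrd(msr);			// plus isync to take effect
 */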
/*
 * This is where the main kernel code starts.
 */
_INIT_STATIC(start_here_multiplatform)
	/* get a new offset, now that the kernel has moved. */
	bl	.reloc_offset
	mr	r26,r3
	/* Clear out the BSS. It may have been done in prom_init,
	 * already but that's irrelevant since prom_init will soon
	 * be detached from the kernel completely. Besides, we need
	 * to clear it now for kexec-style entry.
	 */
	LOAD_REG_IMMEDIATE(r11,__bss_stop)
	LOAD_REG_IMMEDIATE(r8,__bss_start)
	sub	r11,r11,r8		/* bss size			 */
	addi	r11,r11,7		/* round up to an even double word */
	rldicl.	r11,r11,61,3		/* shift right by 3		 */
	beq	4f
	addi	r8,r8,-8
	li	r0,0
	mtctr	r11			/* zero this many doublewords	 */
3:	stdu	r0,8(r8)
	bdnz	3b
4:
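/*
 * The loop above is a doubleword memset(0), roughly (a sketch only):
 *
 *	u64 *p = (u64 *)__bss_start;
 *	unsigned long n = ((__bss_stop - __bss_start) + 7) >> 3;
 *
 *	while (n--)		// stdu walks a pre-decremented pointer
 *		*p++ = 0;
 */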
	mfmsr	r6
	ori	r6,r6,MSR_RI
	mtmsrd	r6			/* RI on */
/* The following gets the stack and TOC set up with the regs */
/* pointing to the real addr of the kernel stack. This is */
/* all done to support the C function call below which sets */
/* up the htab. This is done because we have relocated the */
/* kernel but are still running in real mode. */
	LOAD_REG_IMMEDIATE(r3,init_thread_union)
	add	r3,r3,r26

	/* set up a stack pointer (physical address) */
	addi	r1,r3,THREAD_SIZE
	li	r0,0
	stdu	r0,-STACK_FRAME_OVERHEAD(r1)

	/* set up the TOC (physical address) */
	LOAD_REG_IMMEDIATE(r2,__toc_start)
	addi	r2,r2,0x4000
	addi	r2,r2,0x4000
	add	r2,r2,r26
	/* Do very early kernel initializations, including initial hash table,
	 * stab and slb setup before we turn on relocation.	*/
/* Restore parameters passed from prom_init/kexec */
	mr	r3,r31
	bl	.early_setup

	LOAD_REG_IMMEDIATE(r3, .start_here_common)
	LOAD_REG_IMMEDIATE(r4, MSR_KERNEL)
	mtspr	SPRN_SRR0,r3
	mtspr	SPRN_SRR1,r4
	rfid
	b	.	/* prevent speculative execution */
/* This is where all platforms converge execution */
_INIT_GLOBAL(start_here_common)
/* relocation is on at this point */
/* The following code sets up the SP and TOC now that we are */
/* running with translation enabled. */
	LOAD_REG_IMMEDIATE(r3,init_thread_union)

	/* set up the stack */
	addi	r1,r3,THREAD_SIZE
	li	r0,0
	stdu	r0,-STACK_FRAME_OVERHEAD(r1)

	/* Load the TOC */
	ld	r2,PACATOC(r13)
	std	r1,PACAKSAVE(r13)

	bl	.setup_system

	/* Load up the kernel context */
5:
	li	r5,0
	stb	r5,PACASOFTIRQEN(r13)	/* Soft Disabled */
#ifdef CONFIG_PPC_ISERIES
BEGIN_FW_FTR_SECTION
	mfmsr	r5
	ori	r5,r5,MSR_EE		/* Hard Enabled on iSeries */
	mtmsrd	r5
	li	r5,1
END_FW_FTR_SECTION_IFSET(FW_FEATURE_ISERIES)
#endif
	stb	r5,PACAHARDIRQEN(r13)	/* Hard Disabled on others */

	bl	.start_kernel

	/* Not reached */
	BUG_OPCODE
/*
 * We put a few things here that have to be page-aligned.
 * This stuff goes at the beginning of the bss, which is page-aligned.
 */
	.section ".bss"
.align PAGE_SHIFT
.globl empty_zero_page
empty_zero_page:
.space PAGE_SIZE
.globl swapper_pg_dir
swapper_pg_dir:
	.space PGD_TABLE_SIZE