/*
 * This file contains miscellaneous low-level functions.
 *    Copyright (C) 1995-1996 Gary Thomas (gdt@linuxppc.org)
 *
 * Largely rewritten by Cort Dougan (cort@cs.nmt.edu)
 * and Paul Mackerras.
 *
 * kexec bits:
 * Copyright (C) 2002-2003 Eric Biederman <ebiederm@xmission.com>
 * GameCube/ppc32 port Copyright (C) 2004 Albert Herranz
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License
 * as published by the Free Software Foundation; either version
 * 2 of the License, or (at your option) any later version.
 *
 */
#include <linux/sys.h>
#include <asm/unistd.h>
#include <asm/errno.h>
#include <asm/reg.h>
#include <asm/page.h>
#include <asm/cache.h>
#include <asm/cputable.h>
#include <asm/mmu.h>
#include <asm/ppc_asm.h>
#include <asm/thread_info.h>
#include <asm/asm-offsets.h>
#include <asm/processor.h>
#include <asm/kexec.h>

.text
/*
 * This returns the high 64 bits of the product of two 64-bit numbers.
 */
_GLOBAL(mulhdu)
	cmpwi	r6,0
	cmpwi	cr1,r3,0
	mr	r10,r4
	mulhwu	r4,r4,r5
	beq	1f
	mulhwu	r0,r10,r6
	mullw	r7,r10,r5
	addc	r7,r0,r7
	addze	r4,r4
1:	beqlr	cr1		/* all done if high part of A is 0 */
	mr	r10,r3
	mullw	r9,r3,r5
	mulhwu	r3,r3,r5
	beq	2f
	mullw	r0,r10,r6
	mulhwu	r8,r10,r6
	addc	r7,r0,r7
	adde	r4,r4,r8
	addze	r3,r3
2:	addc	r4,r4,r9
	addze	r3,r3
	blr
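
/*
 * Added commentary (not in the original source): a hedged sketch of
 * the arithmetic above.  With A = (ah << 32) | al and
 * B = (bh << 32) | bl, the high 64 bits of A * B are
 *
 *	ah*bh + hi32(ah*bl) + hi32(al*bh)
 *	      + carries out of (lo32(ah*bl) + lo32(al*bh) + hi32(al*bl))
 *
 * which the mulhwu/mullw/addc/adde/addze sequence accumulates, with
 * the 64-bit result returned per the 32-bit ABI in r3 (high word)
 * and r4 (low word).
 */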
/*
 * sub_reloc_offset(x) returns x - reloc_offset().
 */
_GLOBAL(sub_reloc_offset)
	mflr	r0
	bl	1f
1:	mflr	r5
	lis	r4,1b@ha
	addi	r4,r4,1b@l
	subf	r5,r4,r5
	subf	r3,r5,r3
	mtlr	r0
	blr
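
/*
 * Added note (not in the original source): the bl 1f / mflr pair
 * above is the usual PPC32 position-independence idiom — bl places
 * the runtime address of label 1 in LR, and subtracting the
 * link-time address (1b@ha/1b@l) yields reloc_offset(), the amount
 * by which the kernel is currently displaced from its link address.
 */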
/*
 * reloc_got2 runs through the .got2 section adding an offset
 * to each entry.
 */
_GLOBAL(reloc_got2)
	mflr	r11
	lis	r7,__got2_start@ha
	addi	r7,r7,__got2_start@l
	lis	r8,__got2_end@ha
	addi	r8,r8,__got2_end@l
	subf	r8,r7,r8
	srwi.	r8,r8,2
	beqlr
	mtctr	r8
	bl	1f
1:	mflr	r0
	lis	r4,1b@ha
	addi	r4,r4,1b@l
	subf	r0,r4,r0
	add	r7,r0,r7
2:	lwz	r0,0(r7)
	add	r0,r0,r3
	stw	r0,0(r7)
	addi	r7,r7,4
	bdnz	2b
	mtlr	r11
	blr
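
/*
 * Hedged C sketch of reloc_got2 (illustrative only; __got2_start and
 * __got2_end are the linker symbols the code above references):
 *
 *	extern unsigned long __got2_start[], __got2_end[];
 *	void reloc_got2(unsigned long offset)
 *	{
 *		unsigned long *p;
 *		for (p = __got2_start; p < __got2_end; p++)
 *			*p += offset;
 *	}
 *
 * srwi. r8,r8,2 converts the section's byte size into a word count
 * for CTR, and the assembly also relocates the table address itself
 * (add r7,r0,r7) with the same bl/mflr trick, since it may run
 * before the kernel sits at its link address.
 */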
/*
 * call_setup_cpu - call the setup_cpu function for this cpu
 * r3 = data offset, r24 = cpu number
 *
 * Setup function is called with:
 *   r3 = data offset
 *   r4 = ptr to CPU spec (relocated)
 */
_GLOBAL(call_setup_cpu)
	addis	r4,r3,cur_cpu_spec@ha
	addi	r4,r4,cur_cpu_spec@l
	lwz	r4,0(r4)
	add	r4,r4,r3
	lwz	r5,CPU_SPEC_SETUP(r4)
	cmpwi	0,r5,0
	add	r5,r5,r3
	beqlr
	mtctr	r5
	bctr
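
/*
 * Added note (not in the original source): r3 carries the offset
 * between the kernel's link address and the address it is actually
 * running at, so the cur_cpu_spec pointer, the value it holds, and
 * the CPU_SPEC_SETUP function address are each adjusted by that
 * offset by hand (add r4,r4,r3 / add r5,r5,r3) before being used.
 */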
#if defined(CONFIG_CPU_FREQ_PMAC) && defined(CONFIG_6xx)

/* This gets called by via-pmu.c to switch the PLL selection
 * on 750fx CPU. This function should really be moved to some
 * other place (as most of the cpufreq code in via-pmu anyway).
 */
_GLOBAL(low_choose_750fx_pll)
	/* Clear MSR:EE */
	mfmsr	r7
	rlwinm	r0,r7,0,17,15
	mtmsr	r0

	/* If switching to PLL1, disable HID0:BTIC */
	cmplwi	cr0,r3,0
	beq	1f
	mfspr	r5,SPRN_HID0
	rlwinm	r5,r5,0,27,25
	sync
	mtspr	SPRN_HID0,r5
	isync
	sync

1:
	/* Calc new HID1 value */
	mfspr	r4,SPRN_HID1
	rlwinm	r5,r3,16,15,15	/* Build a HID1:PS bit from parameter */
	rlwinm	r4,r4,0,16,14	/* Clear out HID1:PS from value read */
	or	r4,r4,r5	/* (could have used rlwimi here) */
	mtspr	SPRN_HID1,r4

	/* Store new HID1 image */
	rlwinm	r6,r1,0,0,18
	lwz	r6,TI_CPU(r6)
	slwi	r6,r6,2
	addis	r6,r6,nap_save_hid1@ha
	stw	r4,nap_save_hid1@l(r6)

	/* If switching to PLL0, enable HID0:BTIC */
	cmplwi	cr0,r3,0
	bne	1f
	mfspr	r5,SPRN_HID0
	ori	r5,r5,HID0_BTIC
	sync
	mtspr	SPRN_HID0,r5
	isync
	sync

1:
	/* Return */
	mtmsr	r7
	blr
_GLOBAL(low_choose_7447a_dfs)
	/* Clear MSR:EE */
	mfmsr	r7
	rlwinm	r0,r7,0,17,15
	mtmsr	r0

	/* Calc new HID1 value */
	mfspr	r4,SPRN_HID1
	insrwi	r4,r3,1,9	/* insert parameter into bit 9 */
	sync
	mtspr	SPRN_HID1,r4
	sync
	isync

	/* Return */
	mtmsr	r7
	blr

#endif /* CONFIG_CPU_FREQ_PMAC && CONFIG_6xx */
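
/*
 * Added note (not in the original source): both clock-switch
 * routines above follow the same critical-section shape — clear
 * MSR:EE so no interrupt can observe a half-updated HID0/HID1
 * setup, rewrite the SPRs with sync/isync bracketing the mtspr
 * instructions, then restore the saved MSR.
 */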
/*
 * complement mask on the msr then "or" some values on.
 *     _nmask_and_or_msr(nmask, value_to_or)
 */
_GLOBAL(_nmask_and_or_msr)
	mfmsr	r0		/* Get current msr */
	andc	r0,r0,r3	/* And off the bits set in r3 (first parm) */
	or	r0,r0,r4	/* Or on the bits in r4 (second parm) */
	SYNC			/* Some chip revs have problems here... */
	mtmsr	r0		/* Update machine state */
	isync
	blr			/* Done */
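
/*
 * Hedged C equivalent of _nmask_and_or_msr (illustrative only,
 * treating mfmsr/mtmsr as intrinsics):
 *
 *	mtmsr((mfmsr() & ~nmask) | value_to_or);
 */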
/*
 * Flush MMU TLB
 */
_GLOBAL(_tlbia)
#if defined(CONFIG_40x)
	sync			/* Flush to memory before changing mapping */
	tlbia
	isync			/* Flush shadow TLB */
#elif defined(CONFIG_44x)
	li	r3,0
	sync

	/* Load high watermark */
	lis	r4,tlb_44x_hwater@ha
	lwz	r5,tlb_44x_hwater@l(r4)

1:	tlbwe	r3,r3,PPC44x_TLB_PAGEID
	addi	r3,r3,1
	cmpw	0,r3,r5
	ble	1b

	isync
#elif defined(CONFIG_FSL_BOOKE)
	/* Invalidate all entries in TLB0 */
	li	r3, 0x04
	tlbivax	0,3
	/* Invalidate all entries in TLB1 */
	li	r3, 0x0c
	tlbivax	0,3
	/* Invalidate all entries in TLB2 */
	li	r3, 0x14
	tlbivax	0,3
	/* Invalidate all entries in TLB3 */
	li	r3, 0x1c
	tlbivax	0,3
	msync
#ifdef CONFIG_SMP
	tlbsync
#endif /* CONFIG_SMP */
#else /* !(CONFIG_40x || CONFIG_44x || CONFIG_FSL_BOOKE) */
#if defined(CONFIG_SMP)
	rlwinm	r8,r1,0,0,18
	lwz	r8,TI_CPU(r8)
	oris	r8,r8,10
	mfmsr	r10
	SYNC
	rlwinm	r0,r10,0,17,15		/* clear bit 16 (MSR_EE) */
	rlwinm	r0,r0,0,28,26		/* clear DR */
	mtmsr	r0
	SYNC_601
	isync
	lis	r9,mmu_hash_lock@h
	ori	r9,r9,mmu_hash_lock@l
	tophys(r9,r9)
10:	lwarx	r7,0,r9
	cmpwi	0,r7,0
	bne-	10b
	stwcx.	r8,0,r9
	bne-	10b
	sync
	tlbia
	sync
	TLBSYNC
	li	r0,0
	stw	r0,0(r9)		/* clear mmu_hash_lock */
	mtmsr	r10
	SYNC_601
	isync
#else /* CONFIG_SMP */
	sync
	tlbia
	sync
#endif /* CONFIG_SMP */
#endif /* ! defined(CONFIG_40x) */
	blr
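
/*
 * Added commentary (not in the original source): the hash-MMU SMP
 * paths in _tlbia and _tlbie below take mmu_hash_lock by hand with
 * a lwarx/stwcx. retry loop, storing a CPU-derived owner tag,
 * because they run with MSR:EE and MSR:DR cleared, where the
 * ordinary C spinlock cannot be used; tophys() converts the lock's
 * address so it can be accessed with data translation off.
 */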
/*
 * Flush MMU TLB for a particular address
 */
_GLOBAL(_tlbie)
#if defined(CONFIG_40x)
	/* We run the search with interrupts disabled because we have to change
	 * the PID and I don't want to preempt when that happens.
	 */
	mfmsr	r5
	mfspr	r6,SPRN_PID
	wrteei	0
	mtspr	SPRN_PID,r4
	tlbsx.	r3, 0, r3
	mtspr	SPRN_PID,r6
	wrtee	r5
	bne	10f
	sync
	/* There are only 64 TLB entries, so r3 < 64, which means bit 25 is clear.
	 * Since 25 is the V bit in the TLB_TAG, loading this value will invalidate
	 * the TLB entry. */
	tlbwe	r3, r3, TLB_TAG
	isync
10:

#elif defined(CONFIG_44x)
	mfspr	r5,SPRN_MMUCR
	rlwimi	r5,r4,0,24,31			/* Set TID */

	/* We have to run the search with interrupts disabled, even critical
	 * and debug interrupts (in fact the only critical exceptions we have
	 * are debug and machine check).  Otherwise an interrupt which causes
	 * a TLB miss can clobber the MMUCR between the mtspr and the tlbsx. */
	mfmsr	r4
	lis	r6,(MSR_EE|MSR_CE|MSR_ME|MSR_DE)@ha
	addi	r6,r6,(MSR_EE|MSR_CE|MSR_ME|MSR_DE)@l
	andc	r6,r4,r6
	mtmsr	r6
	mtspr	SPRN_MMUCR,r5
	tlbsx.	r3, 0, r3
	mtmsr	r4
	bne	10f
	sync
	/* There are only 64 TLB entries, so r3 < 64,
	 * which means bit 22 is clear.  Since 22 is
	 * the V bit in the TLB_PAGEID, loading this
	 * value will invalidate the TLB entry.
	 */
	tlbwe	r3, r3, PPC44x_TLB_PAGEID
	isync
10:
#elif defined(CONFIG_FSL_BOOKE)
	rlwinm	r4, r3, 0, 0, 19
	ori	r5, r4, 0x08	/* TLBSEL = 1 */
	ori	r6, r4, 0x10	/* TLBSEL = 2 */
	ori	r7, r4, 0x18	/* TLBSEL = 3 */
	tlbivax	0, r4
	tlbivax	0, r5
	tlbivax	0, r6
	tlbivax	0, r7
	msync
#if defined(CONFIG_SMP)
	tlbsync
#endif /* CONFIG_SMP */
#else /* !(CONFIG_40x || CONFIG_44x || CONFIG_FSL_BOOKE) */
#if defined(CONFIG_SMP)
	rlwinm	r8,r1,0,0,18
	lwz	r8,TI_CPU(r8)
	oris	r8,r8,11
	mfmsr	r10
	SYNC
	rlwinm	r0,r10,0,17,15		/* clear bit 16 (MSR_EE) */
	rlwinm	r0,r0,0,28,26		/* clear DR */
	mtmsr	r0
	SYNC_601
	isync
	lis	r9,mmu_hash_lock@h
	ori	r9,r9,mmu_hash_lock@l
	tophys(r9,r9)
10:	lwarx	r7,0,r9
	cmpwi	0,r7,0
	bne-	10b
	stwcx.	r8,0,r9
	bne-	10b
	eieio
	tlbie	r3
	sync
	TLBSYNC
	li	r0,0
	stw	r0,0(r9)		/* clear mmu_hash_lock */
	mtmsr	r10
	SYNC_601
	isync
#else /* CONFIG_SMP */
	tlbie	r3
	sync
#endif /* CONFIG_SMP */
#endif /* ! CONFIG_40x */
	blr
/*
 * Flush instruction cache.
 * This is a no-op on the 601.
 */
_GLOBAL(flush_instruction_cache)
#if defined(CONFIG_8xx)
	isync
	lis	r5, IDC_INVALL@h
	mtspr	SPRN_IC_CST, r5
#elif defined(CONFIG_4xx)
#ifdef CONFIG_403GCX
	li	r3, 512
	mtctr	r3
	lis	r4, KERNELBASE@h
1:	iccci	0, r4
	addi	r4, r4, 16
	bdnz	1b
#else
	lis	r3, KERNELBASE@h
	iccci	0,r3
#endif
#elif defined(CONFIG_FSL_BOOKE)
BEGIN_FTR_SECTION
	mfspr	r3,SPRN_L1CSR0
	ori	r3,r3,L1CSR0_CFI|L1CSR0_CLFC
	/* msync; isync recommended here */
	mtspr	SPRN_L1CSR0,r3
	isync
	blr
END_FTR_SECTION_IFSET(CPU_FTR_UNIFIED_ID_CACHE)
	mfspr	r3,SPRN_L1CSR1
	ori	r3,r3,L1CSR1_ICFI|L1CSR1_ICLFR
	mtspr	SPRN_L1CSR1,r3
#else
	mfspr	r3,SPRN_PVR
	rlwinm	r3,r3,16,16,31
	cmpwi	0,r3,1
	beqlr			/* for 601, do nothing */
	/* 603/604 processor - use invalidate-all bit in HID0 */
	mfspr	r3,SPRN_HID0
	ori	r3,r3,HID0_ICFI
	mtspr	SPRN_HID0,r3
#endif /* CONFIG_8xx/4xx */
	isync
	blr
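
/*
 * Added note (not in the original source): in the generic #else
 * branch above, rlwinm r3,r3,16,16,31 extracts the processor
 * version field from the upper half of the PVR; version 1 is the
 * 601, whose unified cache makes the flush unnecessary.
 */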
/*
 * Write any modified data cache blocks out to memory
 * and invalidate the corresponding instruction cache blocks.
 * This is a no-op on the 601.
 *
 * flush_icache_range(unsigned long start, unsigned long stop)
 */
_GLOBAL(__flush_icache_range)
BEGIN_FTR_SECTION
	blr				/* for 601, do nothing */
END_FTR_SECTION_IFSET(CPU_FTR_COHERENT_ICACHE)
	li	r5,L1_CACHE_BYTES-1
	andc	r3,r3,r5
	subf	r4,r3,r4
	add	r4,r4,r5
	srwi.	r4,r4,L1_CACHE_SHIFT
	beqlr
	mtctr	r4
	mr	r6,r3
1:	dcbst	0,r3
	addi	r3,r3,L1_CACHE_BYTES
	bdnz	1b
	sync				/* wait for dcbst's to get to ram */
	mtctr	r4
2:	icbi	0,r6
	addi	r6,r6,L1_CACHE_BYTES
	bdnz	2b
	sync				/* additional sync needed on g4 */
	isync
	blr
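
/*
 * Added note (not in the original source): __flush_icache_range is
 * the standard PPC32 two-pass sequence — round start down to a
 * cache line, dcbst every line so modified data reaches memory,
 * sync, then icbi the same lines and finish with sync; isync so no
 * stale instructions remain fetched.
 */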
/*
 * Write any modified data cache blocks out to memory.
 * Does not invalidate the corresponding cache lines (especially for
 * any corresponding instruction cache).
 *
 * clean_dcache_range(unsigned long start, unsigned long stop)
 */
_GLOBAL(clean_dcache_range)
	li	r5,L1_CACHE_BYTES-1
	andc	r3,r3,r5
	subf	r4,r3,r4
	add	r4,r4,r5
	srwi.	r4,r4,L1_CACHE_SHIFT
	beqlr
	mtctr	r4
1:	dcbst	0,r3
	addi	r3,r3,L1_CACHE_BYTES
	bdnz	1b
	sync				/* wait for dcbst's to get to ram */
	blr
/*
 * Write any modified data cache blocks out to memory and invalidate them.
 * Does not invalidate the corresponding instruction cache blocks.
 *
 * flush_dcache_range(unsigned long start, unsigned long stop)
 */
_GLOBAL(flush_dcache_range)
	li	r5,L1_CACHE_BYTES-1
	andc	r3,r3,r5
	subf	r4,r3,r4
	add	r4,r4,r5
	srwi.	r4,r4,L1_CACHE_SHIFT
	beqlr
	mtctr	r4
1:	dcbf	0,r3
	addi	r3,r3,L1_CACHE_BYTES
	bdnz	1b
	sync				/* wait for dcbf's to get to ram */
	blr
/*
 * Like above, but invalidate the D-cache.  This is used by the 8xx
 * to invalidate the cache so the PPC core doesn't get stale data
 * from the CPM (no cache snooping here :-).
 *
 * invalidate_dcache_range(unsigned long start, unsigned long stop)
 */
_GLOBAL(invalidate_dcache_range)
	li	r5,L1_CACHE_BYTES-1
	andc	r3,r3,r5
	subf	r4,r3,r4
	add	r4,r4,r5
	srwi.	r4,r4,L1_CACHE_SHIFT
	beqlr
	mtctr	r4
1:	dcbi	0,r3
	addi	r3,r3,L1_CACHE_BYTES
	bdnz	1b
	sync				/* wait for dcbi's to get to ram */
	blr
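
/*
 * Summary note (added, not in the original source): the three range
 * helpers above differ only in the cache op issued per line — dcbst
 * (clean: write back, keep the line valid), dcbf (flush: write back
 * and invalidate) and dcbi (invalidate without writeback, so any
 * dirty data in the range is discarded).
 */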
/*
 * Flush a particular page from the data cache to RAM.
 * Note: this is necessary because the instruction cache does *not*
 * snoop from the data cache.
 * This is a no-op on the 601 which has a unified cache.
 *
 *	void __flush_dcache_icache(void *page)
 */
_GLOBAL(__flush_dcache_icache)
BEGIN_FTR_SECTION
	blr
END_FTR_SECTION_IFSET(CPU_FTR_COHERENT_ICACHE)
	rlwinm	r3,r3,0,0,19			/* Get page base address */
	li	r4,4096/L1_CACHE_BYTES		/* Number of lines in a page */
	mtctr	r4
	mr	r6,r3
0:	dcbst	0,r3				/* Write line to ram */
	addi	r3,r3,L1_CACHE_BYTES
	bdnz	0b
	sync
#ifndef CONFIG_44x
	/* We don't flush the icache on 44x. Those have a virtual icache
	 * and we don't have access to the virtual address here (it's
	 * not the page vaddr but where it's mapped in user space). The
	 * flushing of the icache on these is handled elsewhere, when
	 * a change in the address space occurs, before returning to
	 * user space
	 */
	mtctr	r4
1:	icbi	0,r6
	addi	r6,r6,L1_CACHE_BYTES
	bdnz	1b
	sync
	isync
#endif /* CONFIG_44x */
	blr
/*
 * Flush a particular page from the data cache to RAM, identified
 * by its physical address.  We turn off the MMU so we can just use
 * the physical address (this may be a highmem page without a kernel
 * mapping).
 *
 *	void __flush_dcache_icache_phys(unsigned long physaddr)
 */
_GLOBAL(__flush_dcache_icache_phys)
BEGIN_FTR_SECTION
	blr					/* for 601, do nothing */
END_FTR_SECTION_IFSET(CPU_FTR_COHERENT_ICACHE)
	mfmsr	r10
	rlwinm	r0,r10,0,28,26			/* clear DR */
	mtmsr	r0
	isync
	rlwinm	r3,r3,0,0,19			/* Get page base address */
	li	r4,4096/L1_CACHE_BYTES		/* Number of lines in a page */
	mtctr	r4
	mr	r6,r3
0:	dcbst	0,r3				/* Write line to ram */
	addi	r3,r3,L1_CACHE_BYTES
	bdnz	0b
	sync
	mtctr	r4
1:	icbi	0,r6
	addi	r6,r6,L1_CACHE_BYTES
	bdnz	1b
	sync
	mtmsr	r10				/* restore DR */
	isync
	blr
/*
 * Clear pages using the dcbz instruction, which doesn't cause any
 * memory traffic (except to write out any cache lines which get
 * displaced).  This only works on cacheable memory.
 *
 * void clear_pages(void *page, int order) ;
 */
_GLOBAL(clear_pages)
	li	r0,4096/L1_CACHE_BYTES
	slw	r0,r0,r4
	mtctr	r0
#ifdef CONFIG_8xx
	li	r4, 0
1:	stw	r4, 0(r3)
	stw	r4, 4(r3)
	stw	r4, 8(r3)
	stw	r4, 12(r3)
#else
1:	dcbz	0,r3
#endif
	addi	r3,r3,L1_CACHE_BYTES
	bdnz	1b
	blr
/*
 * Copy a whole page.  We use the dcbz instruction on the destination
 * to reduce memory traffic (it eliminates the unnecessary reads of
 * the destination into cache).  This requires that the destination
 * is cacheable.
 */
#define COPY_16_BYTES		\
	lwz	r6,4(r4);	\
	lwz	r7,8(r4);	\
	lwz	r8,12(r4);	\
	lwzu	r9,16(r4);	\
	stw	r6,4(r3);	\
	stw	r7,8(r3);	\
	stw	r8,12(r3);	\
	stwu	r9,16(r3)
_GLOBAL(copy_page)
	addi	r3,r3,-4
	addi	r4,r4,-4

#ifdef CONFIG_8xx
	/* don't use prefetch on 8xx */
	li	r0,4096/L1_CACHE_BYTES
	mtctr	r0
1:	COPY_16_BYTES
	bdnz	1b
	blr
#else	/* not 8xx, we can prefetch */
	li	r5,4

#if MAX_COPY_PREFETCH > 1
	li	r0,MAX_COPY_PREFETCH
	li	r11,4
	mtctr	r0
11:	dcbt	r11,r4
	addi	r11,r11,L1_CACHE_BYTES
	bdnz	11b
#else /* MAX_COPY_PREFETCH == 1 */
	dcbt	r5,r4
	li	r11,L1_CACHE_BYTES+4
#endif /* MAX_COPY_PREFETCH */
	li	r0,4096/L1_CACHE_BYTES - MAX_COPY_PREFETCH
	crclr	4*cr0+eq
2:
	mtctr	r0
1:
	dcbt	r11,r4
	dcbz	r5,r3
	COPY_16_BYTES
#if L1_CACHE_BYTES >= 32
	COPY_16_BYTES
#if L1_CACHE_BYTES >= 64
	COPY_16_BYTES
	COPY_16_BYTES
#if L1_CACHE_BYTES >= 128
	COPY_16_BYTES
	COPY_16_BYTES
	COPY_16_BYTES
	COPY_16_BYTES
#endif
#endif
#endif
	bdnz	1b
	beqlr
	crnot	4*cr0+eq,4*cr0+eq
	li	r0,MAX_COPY_PREFETCH
	li	r11,4
	b	2b
#endif	/* CONFIG_8xx */
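
/*
 * Added note (not in the original source): the non-8xx path first
 * issues MAX_COPY_PREFETCH dcbt touches ahead of the source, then
 * copies in a loop that keeps prefetching ahead (dcbt r11,r4) while
 * dcbz pre-zeroes each destination line so the stores don't pull
 * the old destination contents into the cache.  cr0.eq
 * distinguishes the main pass from the short final pass that copies
 * the last MAX_COPY_PREFETCH lines.
 */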
/*
 * void atomic_clear_mask(atomic_t mask, atomic_t *addr)
 * void atomic_set_mask(atomic_t mask, atomic_t *addr);
 */
_GLOBAL(atomic_clear_mask)
10:	lwarx	r5,0,r4
	andc	r5,r5,r3
	PPC405_ERR77(0,r4)
	stwcx.	r5,0,r4
	bne-	10b
	blr
_GLOBAL(atomic_set_mask)
10:	lwarx	r5,0,r4
	or	r5,r5,r3
	PPC405_ERR77(0,r4)
	stwcx.	r5,0,r4
	bne-	10b
	blr
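
/*
 * Hedged C-level sketch of the two primitives above (illustrative
 * only; store_conditional is a stand-in for the lwarx/stwcx. pair):
 * each retries until the read-modify-write lands atomically.
 *
 *	do { old = *addr; } while (!store_conditional(addr, old & ~mask));
 *	do { old = *addr; } while (!store_conditional(addr, old |  mask));
 */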
/*
 * Extended precision shifts.
 *
 * Updated to be valid for shift counts from 0 to 63 inclusive.
 * -- Gabriel
 *
 * R3/R4 has 64 bit value
 * R5    has shift count
 * result in R3/R4
 *
 *  ashrdi3: arithmetic right shift (sign propagation)
 *  lshrdi3: logical right shift
 *  ashldi3: left shift
 */
_GLOBAL(__ashrdi3)
	subfic	r6,r5,32
	srw	r4,r4,r5	# LSW = count > 31 ? 0 : LSW >> count
	addi	r7,r5,32	# could be xori, or addi with -32
	slw	r6,r3,r6	# t1 = count > 31 ? 0 : MSW << (32-count)
	rlwinm	r8,r7,0,32	# t3 = (count < 32) ? 32 : 0
	sraw	r7,r3,r7	# t2 = MSW >> (count-32)
	or	r4,r4,r6	# LSW |= t1
	slw	r7,r7,r8	# t2 = (count < 32) ? 0 : t2
	sraw	r3,r3,r5	# MSW = MSW >> count
	or	r4,r4,r7	# LSW |= t2
	blr

_GLOBAL(__ashldi3)
	subfic	r6,r5,32
	slw	r3,r3,r5	# MSW = count > 31 ? 0 : MSW << count
	addi	r7,r5,32	# could be xori, or addi with -32
	srw	r6,r4,r6	# t1 = count > 31 ? 0 : LSW >> (32-count)
	slw	r7,r4,r7	# t2 = count < 32 ? 0 : LSW << (count-32)
	or	r3,r3,r6	# MSW |= t1
	slw	r4,r4,r5	# LSW = LSW << count
	or	r3,r3,r7	# MSW |= t2
	blr

_GLOBAL(__lshrdi3)
	subfic	r6,r5,32
	srw	r4,r4,r5	# LSW = count > 31 ? 0 : LSW >> count
	addi	r7,r5,32	# could be xori, or addi with -32
	slw	r6,r3,r6	# t1 = count > 31 ? 0 : MSW << (32-count)
	srw	r7,r3,r7	# t2 = count < 32 ? 0 : MSW >> (count-32)
	or	r4,r4,r6	# LSW |= t1
	srw	r3,r3,r5	# MSW = MSW >> count
	or	r4,r4,r7	# LSW |= t2
	blr

_GLOBAL(abs)
	srawi	r4,r3,31
	xor	r3,r3,r4
	sub	r3,r3,r4
	blr
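
/*
 * Added note (not in the original source): abs uses the classic
 * branch-free identity — r4 = r3 >> 31 (arithmetic) is 0 for
 * non-negative r3 and -1 for negative r3, so (r3 ^ r4) - r4 leaves
 * non-negative values unchanged and negates negative ones.
 */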
/*
 * Create a kernel thread
 *   kernel_thread(fn, arg, flags)
 */
_GLOBAL(kernel_thread)
	stwu	r1,-16(r1)
	stw	r30,8(r1)
	stw	r31,12(r1)
	mr	r30,r3		/* function */
	mr	r31,r4		/* argument */
	ori	r3,r5,CLONE_VM	/* flags */
	oris	r3,r3,CLONE_UNTRACED>>16
	li	r4,0		/* new sp (unused) */
	li	r0,__NR_clone
	sc
	cmpwi	0,r3,0		/* parent or child? */
	bne	1f		/* return if parent */
	li	r0,0		/* make top-level stack frame */
	stwu	r0,-16(r1)
	mtlr	r30		/* fn addr in lr */
	mr	r3,r31		/* load arg and call fn */
	PPC440EP_ERR42
	blrl
	li	r0,__NR_exit	/* exit if function returns */
	li	r3,0
	sc
1:	lwz	r30,8(r1)
	lwz	r31,12(r1)
	addi	r1,r1,16
	blr
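
/*
 * Hedged C-level sketch of kernel_thread above (illustrative, not
 * the kernel's C API): it is essentially
 *
 *	pid = clone(CLONE_VM | CLONE_UNTRACED | flags, 0);
 *	if (pid == 0) {		// child
 *		fn(arg);
 *		_exit(0);
 *	}
 *	return pid;		// parent
 *
 * issued directly via the sc instruction, with fn/arg preserved in
 * the callee-saved r30/r31 across the syscall.
 */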
/*
 * This routine is just here to keep GCC happy - sigh...
 */
_GLOBAL(__main)
	blr
#ifdef CONFIG_KEXEC
	/*
	 * Must be relocatable PIC code callable as a C function.
	 */
	.globl relocate_new_kernel
relocate_new_kernel:
	/* r3 = page_list   */
	/* r4 = reboot_code_buffer */
	/* r5 = start_address      */
	li	r0, 0

	/*
	 * Set Machine Status Register to a known status,
	 * switch the MMU off and jump to 1: in a single step.
	 */
	mr	r8, r0
	ori	r8, r8, MSR_RI|MSR_ME
	mtspr	SPRN_SRR1, r8
	addi	r8, r4, 1f - relocate_new_kernel
	mtspr	SPRN_SRR0, r8
	sync
	rfi

1:
	/* from this point address translation is turned off */
	/* and interrupts are disabled */

	/* set a new stack at the bottom of our page... */
	/* (not really needed now) */
	addi	r1, r4, KEXEC_CONTROL_CODE_SIZE - 8 /* for LR Save+Back Chain */
	stw	r0, 0(r1)
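
	/*
	 * Added commentary (not in the original source): the loop below
	 * walks the kexec "indirection page" in r3.  Each word carries a
	 * page-aligned address with flag bits in the low-order bits:
	 * IND_DESTINATION (1<<0) sets the copy target, IND_INDIRECTION
	 * (1<<1) chains to the next indirection page, IND_DONE (1<<2)
	 * ends the walk, and IND_SOURCE (1<<3) names a page to copy to
	 * the current destination; a running xor checksum is kept in r6.
	 */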
	/* Do the copies */
	li	r6, 0 /* checksum */
	mr	r0, r3
	b	1f

0:	/* top, read another word for the indirection page */
	lwzu	r0, 4(r3)

1:
	/* is it a destination page? (r8) */
	rlwinm.	r7, r0, 0, 31, 31 /* IND_DESTINATION (1<<0) */
	beq	2f
	rlwinm	r8, r0, 0, 0, 19 /* clear kexec flags, page align */
	b	0b

2:	/* is it an indirection page? (r3) */
	rlwinm.	r7, r0, 0, 30, 30 /* IND_INDIRECTION (1<<1) */
	beq	2f
	rlwinm	r3, r0, 0, 0, 19 /* clear kexec flags, page align */
	subi	r3, r3, 4
	b	0b

2:	/* are we done? */
	rlwinm.	r7, r0, 0, 29, 29 /* IND_DONE (1<<2) */
	beq	2f
	b	3f

2:	/* is it a source page? (r9) */
	rlwinm.	r7, r0, 0, 28, 28 /* IND_SOURCE (1<<3) */
	beq	0b
	rlwinm	r9, r0, 0, 0, 19 /* clear kexec flags, page align */
	li	r7, PAGE_SIZE / 4
	mtctr	r7
	subi	r9, r9, 4
	subi	r8, r8, 4
9:	lwzu	r0, 4(r9)	/* do the copy */
	xor	r6, r6, r0
	stwu	r0, 4(r8)
	dcbst	0, r8
	sync
	icbi	0, r8
	bdnz	9b

	addi	r9, r9, 4
	addi	r8, r8, 4
	b	0b

3:
	/* To be certain of avoiding problems with self-modifying code
	 * execute a serializing instruction here.
	 */
	isync
	sync

	/* jump to the entry point, usually the setup routine */
	mtlr	r5
	blrl

1:	b	1b

relocate_new_kernel_end:

	.globl relocate_new_kernel_size
relocate_new_kernel_size:
	.long relocate_new_kernel_end - relocate_new_kernel

#endif