/*
 * This file contains miscellaneous low-level functions.
 *    Copyright (C) 1995-1996 Gary Thomas (gdt@linuxppc.org)
 *
 * Largely rewritten by Cort Dougan (cort@cs.nmt.edu)
 * and Paul Mackerras.
 *
 * PPC64 updates by Dave Engebretsen (engebret@us.ibm.com)
 *
 * setjmp/longjmp code by Paul Mackerras.
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License
 * as published by the Free Software Foundation; either version
 * 2 of the License, or (at your option) any later version.
 */
#include <asm/ppc_asm.h>
#include <asm/unistd.h>
#include <asm/asm-compat.h>
#include <asm/asm-offsets.h>
#include <asm/export.h>
.text

/*
 * Returns (address we are running at) - (address we were linked at)
 * for use before the text and data are mapped to KERNELBASE.
 */
_GLOBAL(reloc_offset)
	mflr	r0
	bl	1f
1:	mflr	r3
	PPC_LL	r4,(2f-1b)(r3)
	subf	r3,r4,r3
	mtlr	r0
	blr

	.align	3
2:	PPC_LONG 1b

/*
 * add_reloc_offset(x) returns x + reloc_offset().
 */
_GLOBAL(add_reloc_offset)
	mflr	r0
	bl	1f
1:	mflr	r5
	PPC_LL	r4,(2f-1b)(r5)
	subf	r5,r4,r5
	add	r3,r3,r5
	mtlr	r0
	blr

	.align	3
2:	PPC_LONG 1b
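
/*
 * C-level view (a hedged sketch, not part of the original file): callers
 * are assumed to see these routines through prototypes roughly like the
 * ones below; the exact declaring header and the read_early_flag() helper
 * are illustrative assumptions only.
 *
 *	extern unsigned long reloc_offset(void);
 *	extern void *add_reloc_offset(void *);
 *
 *	// Hypothetical early-boot helper: relocate a pointer to a global
 *	// before the kernel runs at the address it was linked at.
 *	static unsigned long read_early_flag(unsigned long *linked_addr)
 *	{
 *		unsigned long *real_addr = add_reloc_offset(linked_addr);
 *
 *		return *real_addr;
 *	}
 */
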
_GLOBAL(setjmp)
	mflr	r0
	PPC_STL	r0,0(r3)
	PPC_STL	r1,SZL(r3)
	PPC_STL	r2,2*SZL(r3)
	mfcr	r0
	PPC_STL	r0,3*SZL(r3)
	PPC_STL	r13,4*SZL(r3)
	PPC_STL	r14,5*SZL(r3)
	PPC_STL	r15,6*SZL(r3)
	PPC_STL	r16,7*SZL(r3)
	PPC_STL	r17,8*SZL(r3)
	PPC_STL	r18,9*SZL(r3)
	PPC_STL	r19,10*SZL(r3)
	PPC_STL	r20,11*SZL(r3)
	PPC_STL	r21,12*SZL(r3)
	PPC_STL	r22,13*SZL(r3)
	PPC_STL	r23,14*SZL(r3)
	PPC_STL	r24,15*SZL(r3)
	PPC_STL	r25,16*SZL(r3)
	PPC_STL	r26,17*SZL(r3)
	PPC_STL	r27,18*SZL(r3)
	PPC_STL	r28,19*SZL(r3)
	PPC_STL	r29,20*SZL(r3)
	PPC_STL	r30,21*SZL(r3)
	PPC_STL	r31,22*SZL(r3)
	li	r3,0
	blr

_GLOBAL(longjmp)
	PPC_LCMPI r4,0
	bne	1f
	li	r4,1
1:	PPC_LL	r13,4*SZL(r3)
	PPC_LL	r14,5*SZL(r3)
	PPC_LL	r15,6*SZL(r3)
	PPC_LL	r16,7*SZL(r3)
	PPC_LL	r17,8*SZL(r3)
	PPC_LL	r18,9*SZL(r3)
	PPC_LL	r19,10*SZL(r3)
	PPC_LL	r20,11*SZL(r3)
	PPC_LL	r21,12*SZL(r3)
	PPC_LL	r22,13*SZL(r3)
	PPC_LL	r23,14*SZL(r3)
	PPC_LL	r24,15*SZL(r3)
	PPC_LL	r25,16*SZL(r3)
	PPC_LL	r26,17*SZL(r3)
	PPC_LL	r27,18*SZL(r3)
	PPC_LL	r28,19*SZL(r3)
	PPC_LL	r29,20*SZL(r3)
	PPC_LL	r30,21*SZL(r3)
	PPC_LL	r31,22*SZL(r3)
	PPC_LL	r0,3*SZL(r3)
	mtcrf	0x38,r0
	PPC_LL	r0,0(r3)
	PPC_LL	r1,SZL(r3)
	PPC_LL	r2,2*SZL(r3)
	mtlr	r0
	mr	r3,r4
	blr
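
/*
 * Usage sketch (an illustration, not code from this file): the jump buffer
 * is assumed to be an array of JMP_BUF_LEN longs from asm/setjmp.h, sized
 * for the LR, r1, r2, CR and r13-r31 slots saved above; the two helper
 * functions below are hypothetical.
 *
 *	#include <asm/setjmp.h>
 *
 *	static long recovery_buf[JMP_BUF_LEN];
 *
 *	static void risky_operation(void)
 *	{
 *		// On an unrecoverable error, return to the setjmp() caller.
 *		// longjmp() above forces a zero value up to 1, so setjmp()
 *		// never appears to return 0 on the longjmp path.
 *		longjmp(recovery_buf, 1);
 *	}
 *
 *	static int run_with_recovery(void)
 *	{
 *		if (setjmp(recovery_buf) != 0)
 *			return -1;	// resumed here via longjmp()
 *		risky_operation();
 *		return 0;
 *	}
 */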

/*
 * powerpc: Reimplement __get_SP() as a function, not a define.
 *
 * Li Zhong points out an issue with our current __get_SP()
 * implementation.  If ftrace function tracing is enabled (i.e. -pg
 * profiling using _mcount) we spill a stack frame on 64-bit all the
 * time.
 *
 * If a function calls __get_SP() and later calls a function that is
 * tail-call optimised, we will pop the stack frame and the value
 * returned by __get_SP() is no longer valid.  An example from Li can
 * be found in save_stack_trace -> save_context_stack:
 *
 * c0000000000432c0 <.save_stack_trace>:
 * c0000000000432c0:	mflr	r0
 * c0000000000432c4:	std	r0,16(r1)
 * c0000000000432c8:	stdu	r1,-128(r1)	<-- stack frame for _mcount
 * c0000000000432cc:	std	r3,112(r1)
 * c0000000000432d0:	bl	<._mcount>
 * c0000000000432d4:	nop
 * c0000000000432d8:	mr	r4,r1		<-- __get_SP()
 * c0000000000432dc:	ld	r5,632(r13)
 * c0000000000432e0:	ld	r3,112(r1)
 * c0000000000432e4:	li	r6,1
 * c0000000000432e8:	addi	r1,r1,128	<-- pop stack frame
 * c0000000000432ec:	ld	r0,16(r1)
 * c0000000000432f0:	mtlr	r0
 * c0000000000432f4:	b	<.save_context_stack>	<-- tail call optimised
 *
 * save_context_stack ends up with a stack pointer below the current
 * one, and it is likely to be scribbled over.
 *
 * Fix this by making __get_SP() a function which returns the caller's
 * stack frame.  Also replace the inline assembly which grabs the stack
 * pointer in save_stack_trace and show_stack with __get_SP().
 *
 * This also fixes an issue with perf_arch_fetch_caller_regs(), which
 * currently unwinds the stack once and therefore skips a valid stack
 * frame on a leaf function.  With the __get_SP() fixes in this patch,
 * we never need to unwind the stack frame to get to the first
 * interesting frame.
 *
 * We have to export __get_SP() because perf_arch_fetch_caller_regs()
 * (which is used in modules) calls it from a header file.
 *
 * Reported-by: Li Zhong <zhong@linux.vnet.ibm.com>
 * Signed-off-by: Anton Blanchard <anton@samba.org>
 * Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
 */
_GLOBAL(current_stack_pointer)
	PPC_LL	r3,0(r1)
blr
EXPORT_SYMBOL(current_stack_pointer)
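
/*
 * C-level sketch (assumptions noted, not code from this file): callers are
 * assumed to reach this routine through a prototype like the one below, so
 * the stack pointer can be read without inline assembly and without the
 * tail-call hazard described above; report_stack_pointer() and the use of
 * pr_info() are illustrative only.
 *
 *	#include <linux/printk.h>
 *
 *	extern unsigned long current_stack_pointer(void);
 *
 *	static void report_stack_pointer(void)
 *	{
 *		// Returns the stack frame of the caller, as described in
 *		// the comment above, even when -pg/_mcount profiling is on.
 *		unsigned long sp = current_stack_pointer();
 *
 *		pr_info("stack pointer: 0x%lx\n", sp);
 *	}
 */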