/* SPDX-License-Identifier: GPL-2.0-only */
/*
 * arch/arm64/kernel/entry-ftrace.S
 *
 * Copyright (C) 2013 Linaro Limited
 * Author: AKASHI Takahiro <takahiro.akashi@linaro.org>
 */

#include <linux/linkage.h>
arm64: implement ftrace with regs
This patch implements FTRACE_WITH_REGS for arm64, which allows a traced
function's arguments (and some other registers) to be captured into a
struct pt_regs, allowing these to be inspected and/or modified. This is
a building block for live-patching, where a function's arguments may be
forwarded to another function. This is also necessary to enable ftrace
and in-kernel pointer authentication at the same time, as it allows the
LR value to be captured and adjusted prior to signing.
Using GCC's -fpatchable-function-entry=N option, we can have the
compiler insert a configurable number of NOPs between the function entry
point and the usual prologue. This also ensures functions are AAPCS
compliant (e.g. disabling inter-procedural register allocation).
For example, with -fpatchable-function-entry=2, GCC 8.1.0 compiles the
following:
| unsigned long bar(void);
|
| unsigned long foo(void)
| {
| return bar() + 1;
| }
... to:
| <foo>:
| nop
| nop
| stp x29, x30, [sp, #-16]!
| mov x29, sp
| bl 0 <bar>
| add x0, x0, #0x1
| ldp x29, x30, [sp], #16
| ret
This patch builds the kernel with -fpatchable-function-entry=2,
prefixing each function with two NOPs. To trace a function, we replace
these NOPs with a sequence that saves the LR into a GPR, then calls an
ftrace entry assembly function which saves this and other relevant
registers:
| mov x9, x30
| bl <ftrace-entry>
Since patchable functions are AAPCS compliant (and the kernel does not
use x18 as a platform register), x9-x18 can be safely clobbered in the
patched sequence and the ftrace entry code.
There are now two ftrace entry functions, ftrace_regs_entry (which saves
all GPRs), and ftrace_entry (which saves the bare minimum). A PLT is
allocated for each within modules.
Signed-off-by: Torsten Duwe <duwe@suse.de>
[Mark: rework asm, comments, PLTs, initialization, commit message]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Torsten Duwe <duwe@suse.de>
Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Tested-by: Torsten Duwe <duwe@suse.de>
Cc: AKASHI Takahiro <takahiro.akashi@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Julien Thierry <jthierry@redhat.com>
Cc: Will Deacon <will@kernel.org>
#include <asm/asm-offsets.h>
#include <asm/assembler.h>
#include <asm/ftrace.h>
#include <asm/insn.h>
#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS

/*
 * Due to -fpatchable-function-entry=2, the compiler has placed two NOPs before
 * the regular function prologue. For an enabled callsite, ftrace_init_nop() and
 * ftrace_make_call() have patched those NOPs to:
 *
 *	MOV	X9, LR
 *	BL	<entry>
 *
 * ... where <entry> is either ftrace_caller or ftrace_regs_caller.
 *
 * Each instrumented function follows the AAPCS, so here x0-x8 and x19-x30 are
 * live, and x9-x18 are safe to clobber.
 *
 * We save the callsite's context into a pt_regs before invoking any ftrace
 * callbacks. So that we can get a sensible backtrace, we create a stack record
 * for the callsite and the ftrace entry assembly. This is not sufficient for
 * reliable stacktrace: until we create the callsite stack record, its caller
 * is missing from the LR and existing chain of frame records.
 */
	.macro	ftrace_regs_entry, allregs=0
	/* Make room for pt_regs, plus a callee frame */
	sub	sp, sp, #(S_FRAME_SIZE + 16)

	/* Save function arguments (and x9 for simplicity) */
	stp	x0, x1, [sp, #S_X0]
	stp	x2, x3, [sp, #S_X2]
	stp	x4, x5, [sp, #S_X4]
	stp	x6, x7, [sp, #S_X6]
	stp	x8, x9, [sp, #S_X8]

	/* Optionally save the callee-saved registers, always save the FP */
	.if \allregs == 1
	stp	x10, x11, [sp, #S_X10]
	stp	x12, x13, [sp, #S_X12]
	stp	x14, x15, [sp, #S_X14]
	stp	x16, x17, [sp, #S_X16]
	stp	x18, x19, [sp, #S_X18]
	stp	x20, x21, [sp, #S_X20]
	stp	x22, x23, [sp, #S_X22]
	stp	x24, x25, [sp, #S_X24]
	stp	x26, x27, [sp, #S_X26]
	stp	x28, x29, [sp, #S_X28]
	.else
	str	x29, [sp, #S_FP]
	.endif

	/* Save the callsite's SP and LR */
	add	x10, sp, #(S_FRAME_SIZE + 16)
	stp	x9, x10, [sp, #S_LR]

	/* Save the PC after the ftrace callsite */
	str	x30, [sp, #S_PC]

	/* Create a frame record for the callsite above pt_regs */
	stp	x29, x9, [sp, #S_FRAME_SIZE]
	add	x29, sp, #S_FRAME_SIZE

	/* Create our frame record within pt_regs. */
	stp	x29, x30, [sp, #S_STACKFRAME]
	add	x29, sp, #S_STACKFRAME
	.endm
SYM_CODE_START(ftrace_regs_caller)
	ftrace_regs_entry	1
	b	ftrace_common
SYM_CODE_END(ftrace_regs_caller)
SYM_CODE_START(ftrace_caller)
	ftrace_regs_entry	0
	b	ftrace_common
SYM_CODE_END(ftrace_caller)
SYM_CODE_START(ftrace_common)
	sub	x0, x30, #AARCH64_INSN_SIZE	// ip (callsite's BL insn)
	mov	x1, x9				// parent_ip (callsite's LR)
	ldr_l	x2, function_trace_op		// op
	mov	x3, sp				// regs

SYM_INNER_LABEL(ftrace_call, SYM_L_GLOBAL)
	bl	ftrace_stub

#ifdef CONFIG_FUNCTION_GRAPH_TRACER
SYM_INNER_LABEL(ftrace_graph_call, SYM_L_GLOBAL) // ftrace_graph_caller();
	nop				// If enabled, this will be replaced
					// "b ftrace_graph_caller"
#endif

/*
 * At the callsite x0-x8 and x19-x30 were live. Any C code will have preserved
 * x19-x29 per the AAPCS, and we created frame records upon entry, so we need
 * to restore x0-x8, x29, and x30.
 */
ftrace_common_return:
	/* Restore function arguments */
	ldp	x0, x1, [sp]
	ldp	x2, x3, [sp, #S_X2]
	ldp	x4, x5, [sp, #S_X4]
	ldp	x6, x7, [sp, #S_X6]
	ldr	x8, [sp, #S_X8]

	/* Restore the callsite's FP, LR, PC */
	ldr	x29, [sp, #S_FP]
	ldr	x30, [sp, #S_LR]
	ldr	x9, [sp, #S_PC]

	/* Restore the callsite's SP */
	add	sp, sp, #S_FRAME_SIZE + 16

	ret	x9
SYM_CODE_END(ftrace_common)
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
SYM_CODE_START(ftrace_graph_caller)
ldr x0 , [ s p , #S _ P C ]
sub x0 , x0 , #A A R C H 64 _ I N S N _ S I Z E / / i p ( c a l l s i t e ' s B L i n s n )
add x1 , s p , #S _ L R / / p a r e n t _ i p ( c a l l s i t e ' s L R )
ldr x2 , [ s p , #S _ F R A M E _ S I Z E ] / / p a r e n t f p ( c a l l s i t e ' s F P )
bl p r e p a r e _ f t r a c e _ r e t u r n
b f t r a c e _ c o m m o n _ r e t u r n
2020-02-18 22:58:31 +03:00
SYM_ C O D E _ E N D ( f t r a c e _ g r a p h _ c a l l e r )
#endif

#else /* CONFIG_DYNAMIC_FTRACE_WITH_REGS */

/*
 * Gcc with -pg will put the following code in the beginning of each function:
 *      mov x0, x30
 *      bl _mcount
 *	[function's body ...]
 * "bl _mcount" may be replaced with "bl ftrace_caller" or a NOP if dynamic
 * ftrace is enabled.
 *
 * Note that the x0 argument is not used here, because we can recover the
 * instrumented function's lr (x30) at any time by unwinding the call stack,
 * as long as the kernel is compiled without -fomit-frame-pointer
 * (i.e. with CONFIG_FRAME_POINTER, which is forced on arm64).
 *
 * stack layout after mcount_enter in _mcount():
 *
 * current sp/fp =>  0:+-----+
 * in _mcount()        | x29 | -> instrumented function's fp
 *                     +-----+
 *                     | x30 | -> _mcount()'s lr (= instrumented function's pc)
 * old sp       => +16:+-----+
 * when instrumented   |     |
 * function calls      | ... |
 * _mcount()           |     |
 *                     |     |
 * instrumented => +xx:+-----+
 * function's fp       | x29 | -> parent's fp
 *                     +-----+
 *                     | x30 | -> instrumented function's lr (= parent's pc)
 *                     +-----+
 *                     | ... |
 */
	.macro mcount_enter
	stp	x29, x30, [sp, #-16]!
	mov	x29, sp
	.endm

	.macro mcount_exit
	ldp	x29, x30, [sp], #16
	ret
	.endm

	.macro mcount_adjust_addr rd, rn
	sub	\rd, \rn, #AARCH64_INSN_SIZE
	.endm

	/* for instrumented function's parent */
	.macro mcount_get_parent_fp reg
	ldr	\reg, [x29]
	ldr	\reg, [\reg]
	.endm

	/* for instrumented function */
	.macro mcount_get_pc0 reg
	mcount_adjust_addr	\reg, x30
	.endm

	.macro mcount_get_pc reg
	ldr	\reg, [x29, #8]
	mcount_adjust_addr	\reg, \reg
	.endm

	.macro mcount_get_lr reg
	ldr	\reg, [x29]
	ldr	\reg, [\reg, #8]
	.endm

	.macro mcount_get_lr_addr reg
	ldr	\reg, [x29]
	add	\reg, \reg, #8
	.endm
#ifndef CONFIG_DYNAMIC_FTRACE
/*
 * void _mcount(unsigned long return_address)
 * @return_address: return address to instrumented function
 *
 * This function makes calls, if enabled, to:
 *     - tracer function to probe instrumented function's entry,
 *     - ftrace_graph_caller to set up an exit hook
 */
SYM_FUNC_START(_mcount)
	mcount_enter

	ldr_l	x2, ftrace_trace_function
	adr	x0, ftrace_stub
	cmp	x0, x2			// if (ftrace_trace_function
	b.eq	skip_ftrace_call	//     != ftrace_stub) {
	mcount_get_pc	x0		//       function's pc
	mcount_get_lr	x1		//       function's lr (= parent's pc)
	blr	x2			//   (*ftrace_trace_function)(pc, lr);
skip_ftrace_call:			// }
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
	ldr_l	x2, ftrace_graph_return
	cmp	x0, x2			//   if ((ftrace_graph_return
	b.ne	ftrace_graph_caller	//        != ftrace_stub)

	ldr_l	x2, ftrace_graph_entry	//     || (ftrace_graph_entry
	adr_l	x0, ftrace_graph_entry_stub //    != ftrace_graph_entry_stub))
	cmp	x0, x2
	b.ne	ftrace_graph_caller	//     ftrace_graph_caller();
#endif /* CONFIG_FUNCTION_GRAPH_TRACER */
	mcount_exit
SYM_FUNC_END(_mcount)
EXPORT_SYMBOL(_mcount)
NOKPROBE(_mcount)

#else /* CONFIG_DYNAMIC_FTRACE */
/*
 * _mcount() is used to build the kernel with the -pg option, but all the
 * branch instructions to _mcount() are replaced with NOPs initially at
 * kernel startup, and later on a NOP is replaced with a branch to
 * ftrace_caller() when tracing is enabled (or back to a NOP when disabled),
 * on a per-function basis.
 */
SYM_FUNC_START(_mcount)
	ret
SYM_FUNC_END(_mcount)
EXPORT_SYMBOL(_mcount)
NOKPROBE(_mcount)

/*
 * void ftrace_caller(unsigned long return_address)
 * @return_address: return address to instrumented function
 *
 * This function is a counterpart of _mcount() in 'static' ftrace, and
 * makes calls to:
 *     - tracer function to probe instrumented function's entry,
 *     - ftrace_graph_caller to set up an exit hook
 */
SYM_FUNC_START(ftrace_caller)
	mcount_enter

	mcount_get_pc0	x0		//     function's pc
	mcount_get_lr	x1		//     function's lr

SYM_INNER_LABEL(ftrace_call, SYM_L_GLOBAL)	// tracer(pc, lr);
	nop				// This will be replaced with "bl xxx"
					// where xxx can be any kind of tracer.
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
SYM_INNER_LABEL(ftrace_graph_call, SYM_L_GLOBAL)	// ftrace_graph_caller();
	nop				// If enabled, this will be replaced
					// with "b ftrace_graph_caller"
#endif

	mcount_exit
SYM_FUNC_END(ftrace_caller)
#endif /* CONFIG_DYNAMIC_FTRACE */

#ifdef CONFIG_FUNCTION_GRAPH_TRACER
/*
 * void ftrace_graph_caller(void)
 *
 * Called from _mcount() or ftrace_caller() when the function_graph tracer is
 * selected.
 * This function, together with prepare_ftrace_return(), fakes the link
 * register's value on the call stack in order to intercept the instrumented
 * function's return path and run return_to_handler() later on its exit.
 */
SYM_FUNC_START(ftrace_graph_caller)
	mcount_get_pc		  x0	//     function's pc
	mcount_get_lr_addr	  x1	//     pointer to function's saved lr
	mcount_get_parent_fp	  x2	//     parent's fp
	bl	prepare_ftrace_return	// prepare_ftrace_return(pc, &lr, fp)

	mcount_exit
SYM_FUNC_END(ftrace_graph_caller)
#endif /* CONFIG_FUNCTION_GRAPH_TRACER */
#endif /* CONFIG_DYNAMIC_FTRACE_WITH_REGS */

SYM_FUNC_START(ftrace_stub)
	ret
SYM_FUNC_END(ftrace_stub)

#ifdef CONFIG_FUNCTION_GRAPH_TRACER
/*
 * void return_to_handler(void)
 *
 * Run ftrace_return_to_handler() before going back to parent.
 * @fp is checked against the value passed by ftrace_graph_caller().
 */
SYM_CODE_START(return_to_handler)
	/* save return value regs */
	sub	sp, sp, #64
	stp	x0, x1, [sp]
	stp	x2, x3, [sp, #16]
	stp	x4, x5, [sp, #32]
	stp	x6, x7, [sp, #48]
	mov	x0, x29			//     parent's fp
	bl	ftrace_return_to_handler // addr = ftrace_return_to_handler(fp);
	mov	x30, x0			// restore the original return address

	/* restore return value regs */
	ldp	x0, x1, [sp]
	ldp	x2, x3, [sp, #16]
	ldp	x4, x5, [sp, #32]
	ldp	x6, x7, [sp, #48]
	add	sp, sp, #64

	ret
SYM_CODE_END(return_to_handler)

#endif /* CONFIG_FUNCTION_GRAPH_TRACER */