/*
 *  Kernel Probes (KProbes)
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software
 * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
 *
 * Copyright (C) IBM Corporation, 2002, 2004
 *
 * 2002-Oct	Created by Vamsi Krishna S <vamsi_krishna@in.ibm.com> Kernel
 *		Probes initial implementation (includes contributions from
 *		Rusty Russell).
 * 2004-July	Suparna Bhattacharya <suparna@in.ibm.com> added jumper probes
 *		interface to access function arguments.
 * 2004-Oct	Jim Keniston <jkenisto@us.ibm.com> and Prasanna S Panchamukhi
 *		<prasanna@in.ibm.com> adapted for x86_64 from i386.
 * 2005-Mar	Roland McGrath <roland@redhat.com>
 *		Fixed to handle %rip-relative addressing mode correctly.
 * 2005-May	Hien Nguyen <hien@us.ibm.com>, Jim Keniston
 *		<jkenisto@us.ibm.com> and Prasanna S Panchamukhi
 *		<prasanna@in.ibm.com> added function-return probes.
 * 2005-May	Rusty Lynch <rusty.lynch@intel.com>
 *		Added function return probes functionality
 * 2006-Feb	Masami Hiramatsu <hiramatu@sdl.hitachi.co.jp> added
 *		kprobe-booster and kretprobe-booster for i386.
 * 2007-Dec	Masami Hiramatsu <mhiramat@redhat.com> added kprobe-booster
 *		and kretprobe-booster for x86-64
 * 2007-Dec	Masami Hiramatsu <mhiramat@redhat.com>, Arjan van de Ven
 *		<arjan@infradead.org> and Jim Keniston <jkenisto@us.ibm.com>
 *		unified x86 kprobes code.
 */
#include <linux/kprobes.h>
#include <linux/ptrace.h>
#include <linux/string.h>
#include <linux/slab.h>
#include <linux/hardirq.h>
#include <linux/preempt.h>
#include <linux/module.h>
#include <linux/kdebug.h>
#include <linux/kallsyms.h>
#include <linux/ftrace.h>

#include <asm/cacheflush.h>
#include <asm/desc.h>
#include <asm/pgtable.h>
#include <asm/uaccess.h>
#include <asm/alternative.h>
#include <asm/insn.h>
#include <asm/debugreg.h>

#include "kprobes-common.h"
void jprobe_return_end(void);

DEFINE_PER_CPU(struct kprobe *, current_kprobe) = NULL;
DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);

#define stack_addr(regs) ((unsigned long *)kernel_stack_pointer(regs))
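/*
 * stack_addr() above evaluates to the stack pointer value at the probe hit;
 * kernel_stack_pointer() hides the 32-bit vs. 64-bit pt_regs layout
 * difference for us.
 */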
#define W(row, b0, b1, b2, b3, b4, b5, b6, b7, b8, b9, ba, bb, bc, bd, be, bf)\
	(((b0##UL << 0x0)|(b1##UL << 0x1)|(b2##UL << 0x2)|(b3##UL << 0x3) |   \
	  (b4##UL << 0x4)|(b5##UL << 0x5)|(b6##UL << 0x6)|(b7##UL << 0x7) |   \
	  (b8##UL << 0x8)|(b9##UL << 0x9)|(ba##UL << 0xa)|(bb##UL << 0xb) |   \
	  (bc##UL << 0xc)|(bd##UL << 0xd)|(be##UL << 0xe)|(bf##UL << 0xf))    \
	 << (row % 32))
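/*
 * Each W() row above packs sixteen 0/1 flags (one per low opcode nibble)
 * into bits (row % 32) .. (row % 32 + 15) of a word, so two adjacent rows
 * OR'ed together fill one u32 entry of the bitmap below.
 */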
/*
 * Undefined/reserved opcodes, conditional jump, Opcode Extension
 * Groups, and some special opcodes cannot be boosted.
 * This is non-const and volatile to keep gcc from statically
 * optimizing it out, as variable_test_bit makes gcc think only
 * *(unsigned long*) is used.
 */
static volatile u32 twobyte_is_boostable[256 / 32] = {
	/*      0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f          */
	/*      ----------------------------------------------          */
	W(0x00, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0) | /* 00 */
	W(0x10, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) , /* 10 */
	W(0x20, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) | /* 20 */
	W(0x30, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) , /* 30 */
	W(0x40, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1) | /* 40 */
	W(0x50, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) , /* 50 */
	W(0x60, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1) | /* 60 */
	W(0x70, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1) , /* 70 */
	W(0x80, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) | /* 80 */
	W(0x90, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1) , /* 90 */
	W(0xa0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1) | /* a0 */
	W(0xb0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1) , /* b0 */
	W(0xc0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1) | /* c0 */
	W(0xd0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1) , /* d0 */
	W(0xe0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1) | /* e0 */
	W(0xf0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0)   /* f0 */
	/*      -----------------------------------------------         */
	/*      0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f          */
};
#undef W
struct kretprobe_blackpoint kretprobe_blacklist[] = {
	{"__switch_to", }, /* This function switches only current task, but
			      doesn't switch kernel stack.*/
	{NULL, NULL}	/* Terminator */
};

const int kretprobe_blacklist_size = ARRAY_SIZE(kretprobe_blacklist);
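/*
 * __synthesize_relative_insn() below overlays a packed "u8 opcode + s32
 * displacement" struct on the 5 bytes at 'from' and computes the
 * displacement relative to the end of that 5-byte instruction, so that
 * control transfers to 'to'.
 */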
static void __kprobes __synthesize_relative_insn(void *from, void *to, u8 op)
{
	struct __arch_relative_insn {
		u8 op;
		s32 raddr;
	} __attribute__((packed)) *insn;

	insn = (struct __arch_relative_insn *)from;
	insn->raddr = (s32)((long)(to) - ((long)(from) + 5));
	insn->op = op;
}
/* Insert a jump instruction at address 'from', which jumps to address 'to'.*/
void __kprobes synthesize_reljump(void *from, void *to)
{
	__synthesize_relative_insn(from, to, RELATIVEJUMP_OPCODE);
}
/* Insert a call instruction at address 'from', which calls address 'to'.*/
void __kprobes synthesize_relcall(void *from, void *to)
{
	__synthesize_relative_insn(from, to, RELATIVECALL_OPCODE);
}
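/*
 * Usage sketch (illustrative only; buf, insn_len and addr are hypothetical
 * locals, not names from this file): planting a jump from a scratch buffer
 * back to the instruction following a probed address would look like
 *
 *	synthesize_reljump(buf + insn_len, (void *)(addr + insn_len));
 */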
/*
 * Skip the prefixes of the instruction.
 */
static kprobe_opcode_t *__kprobes skip_prefixes(kprobe_opcode_t *insn)
{
	insn_attr_t attr;

	attr = inat_get_opcode_attribute((insn_byte_t)*insn);
	while (inat_is_legacy_prefix(attr)) {
		insn++;
		attr = inat_get_opcode_attribute((insn_byte_t)*insn);
	}
#ifdef CONFIG_X86_64
	if (inat_is_rex_prefix(attr))
		insn++;
#endif
	return insn;
}
/*
 * Returns non-zero if opcode is boostable.
 * RIP-relative instructions are adjusted at copying time in 64-bit mode.
 */
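/*
 * "Boosting" (the kprobe-booster) means executing the copied instruction
 * out of line and then jumping straight back with a synthesized relative
 * jump, so the expensive single-step trap can be skipped for instructions
 * that the table above and the switch below deem safe.
 */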
int __kprobes can_boost(kprobe_opcode_t *opcodes)
{
	kprobe_opcode_t opcode;
	kprobe_opcode_t *orig_opcodes = opcodes;

	if (search_exception_tables((unsigned long)opcodes))
		return 0;	/* Page fault may occur on this address. */

retry:
	if (opcodes - orig_opcodes > MAX_INSN_SIZE - 1)
		return 0;
	opcode = *(opcodes++);

	/* 2nd-byte opcode */
	if (opcode == 0x0f) {
		if (opcodes - orig_opcodes > MAX_INSN_SIZE - 1)
			return 0;
		return test_bit(*opcodes,
				(unsigned long *)twobyte_is_boostable);
	}

	switch (opcode & 0xf0) {
#ifdef CONFIG_X86_64
	case 0x40:
		goto retry; /* REX prefix is boostable */
#endif
	case 0x60:
		if (0x63 < opcode && opcode < 0x67)
			goto retry; /* prefixes */
		/* can't boost Address-size override and bound */
		return (opcode != 0x62 && opcode != 0x67);
	case 0x70:
		return 0; /* can't boost conditional jump */
	case 0xc0:
		/* can't boost software-interruptions */
		return (0xc1 < opcode && opcode < 0xcc) || opcode == 0xcf;
	case 0xd0:
		/* can boost AA* and XLAT */
		return (opcode == 0xd4 || opcode == 0xd5 || opcode == 0xd7);
	case 0xe0:
		/* can boost in/out and absolute jmps */
		return ((opcode & 0x04) || opcode == 0xea);
	case 0xf0:
		if ((opcode & 0x0c) == 0 && opcode != 0xf1)
			goto retry; /* lock/rep(ne) prefix */
		/* clear and set flags are boostable */
		return (opcode == 0xf5 || (0xf7 < opcode && opcode < 0xfe));
	default:
		/* segment override prefixes are boostable */
		if (opcode == 0x26 || opcode == 0x36 || opcode == 0x3e)
			goto retry; /* prefixes */
		/* CS override prefix and call are not boostable */
		return (opcode != 0x2e && opcode != 0x9a);
	}
}
static unsigned long
__recover_probed_insn(kprobe_opcode_t *buf, unsigned long addr)
{
	struct kprobe *kp;

	kp = get_kprobe((void *)addr);

	/* There is no probe, return original address */
	if (!kp)
		return addr;

	/*
	 * Basically, kp->ainsn.insn has an original instruction.
	 * However, RIP-relative instructions can not be single-stepped
	 * at a different place, so __copy_instruction() tweaks the
	 * displacement of those instructions. In that case, we can't
	 * recover the instruction from kp->ainsn.insn.
	 *
	 * On the other hand, kp->opcode has a copy of the first byte of
	 * the probed instruction, which is overwritten by int3. And
	 * since the instruction at kp->addr is not modified by kprobes
	 * except for the first byte, we can recover the original
	 * instruction from it and kp->opcode.
	 */
	memcpy(buf, kp->addr, MAX_INSN_SIZE * sizeof(kprobe_opcode_t));
	buf[0] = kp->opcode;
	return (unsigned long)buf;
}

/*
 * Recover the probed instruction at addr for further analysis.
 * Caller must lock kprobes by kprobe_mutex, or disable preemption,
 * to prevent the referenced kprobes from being released.
 */
unsigned long recover_probed_instruction(kprobe_opcode_t *buf, unsigned long addr)
{
	unsigned long __addr;

	__addr = __recover_optprobed_insn(buf, addr);
	if (__addr != addr)
		return __addr;

	return __recover_probed_insn(buf, addr);
}
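/*
 * Typical use, as in can_probe() and __copy_instruction() below: decode an
 * address that may currently hold an int3 or an optimized jump:
 *
 *	kprobe_opcode_t buf[MAX_INSN_SIZE];
 *	struct insn insn;
 *
 *	kernel_insn_init(&insn, (void *)recover_probed_instruction(buf, addr));
 *	insn_get_length(&insn);
 */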
/* Check if paddr is at an instruction boundary */
static int __kprobes can_probe(unsigned long paddr)
{
	unsigned long addr, __addr, offset = 0;
	struct insn insn;
	kprobe_opcode_t buf[MAX_INSN_SIZE];

	if (!kallsyms_lookup_size_offset(paddr, NULL, &offset))
		return 0;

	/* Decode instructions */
	addr = paddr - offset;
	while (addr < paddr) {
		/*
		 * Check if the instruction has been modified by another
		 * kprobe, in which case we replace the breakpoint by the
		 * original instruction in our buffer.
		 * Also, jump optimization will change the breakpoint to
		 * relative-jump. Since the relative-jump itself is
		 * normally used, we just go through if there is no kprobe.
		 */
		__addr = recover_probed_instruction(buf, addr);
		kernel_insn_init(&insn, (void *)__addr);
		insn_get_length(&insn);

		/*
		 * Another debugging subsystem might insert this breakpoint.
		 * In that case, we can't recover it.
		 */
		if (insn.opcode.bytes[0] == BREAKPOINT_INSTRUCTION)
			return 0;
		addr += insn.length;
	}

	return (addr == paddr);
}

/*
 * Returns non-zero if opcode modifies the interrupt flag.
 */
static int __kprobes is_IF_modifier(kprobe_opcode_t *insn)
{
	/* Skip prefixes */
	insn = skip_prefixes(insn);

	switch (*insn) {
	case 0xfa:		/* cli */
	case 0xfb:		/* sti */
	case 0xcf:		/* iret/iretd */
	case 0x9d:		/* popf/popfd */
		return 1;
	}

	return 0;
}
/*
 * Copy an instruction and adjust the displacement if the instruction
 * uses the %rip-relative addressing mode.
 * If it does, return the address of the 32-bit displacement word.
 * If not, return null.
 * Only applicable to 64-bit x86.
 */
int __kprobes __copy_instruction(u8 *dest, u8 *src)
{
	struct insn insn;
	kprobe_opcode_t buf[MAX_INSN_SIZE];

	kernel_insn_init(&insn, (void *)recover_probed_instruction(buf, (unsigned long)src));
	insn_get_length(&insn);
	/* Another subsystem has inserted a breakpoint; we failed to recover */
	if (insn.opcode.bytes[0] == BREAKPOINT_INSTRUCTION)
return 0;
2010-02-25 16:34:46 +03:00
memcpy(dest, insn.kaddr, insn.length);
#ifdef CONFIG_X86_64
2009-08-14 00:34:36 +04:00
if (insn_rip_relative(&insn)) {
s64 newdisp;
u8 *disp;
2010-02-25 16:34:46 +03:00
kernel_insn_init(&insn, dest);
2009-08-14 00:34:36 +04:00
insn_get_displacement(&insn);
/*
* The copied instruction uses the %rip-relative addressing
* mode. Adjust the displacement for the difference between
* the original location of this instruction and the location
* of the copy that will actually be run. The tricky bit here
* is making sure that the sign extension happens correctly in
* this calculation, since we need a signed 32-bit result to
* be sign-extended to 64 bits when it's added to the %rip
* value and yield the same 64-bit result that the sign-
* extension of the original signed 32-bit displacement would
* have given.
*/
2012-03-05 17:32:16 +04:00
newdisp = (u8 *)src + (s64)insn.displacement.value - (u8 *)dest;
2009-08-14 00:34:36 +04:00
BUG_ON((s64)(s32)newdisp != newdisp); /* Sanity check. */
2010-02-25 16:34:46 +03:00
disp = (u8 *)dest + insn_offset_displacement(&insn);
2009-08-14 00:34:36 +04:00
*(s32 *)disp = (s32)newdisp;
2005-04-17 02:20:36 +04:00
}
2008-01-30 15:31:21 +03:00
#endif
2010-02-25 16:34:46 +03:00
return insn.length;
2008-01-30 15:32:16 +03:00
}
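For reference, the displacement math being applied above can be written out as a tiny stand-alone sketch (hypothetical helper name, user-space types; it mirrors the arithmetic but is not the kernel code):
#include <assert.h>
#include <stdint.h>

/*
 * A %rip-relative instruction at 'src' targets src + insn_len + disp.
 * After copying it to 'dest', the same target must be reached from
 * dest + insn_len, so:
 *
 *	newdisp = (src + insn_len + disp) - (dest + insn_len)
 *	        = src + disp - dest
 */
static int32_t adjust_rip_disp(uintptr_t src, uintptr_t dest, int32_t disp)
{
	int64_t newdisp = (int64_t)src + disp - (int64_t)dest;

	assert((int64_t)(int32_t)newdisp == newdisp); /* must still fit in s32 */
	return (int32_t)newdisp;
}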
2005-04-17 02:20:36 +04:00
2006-01-10 07:52:44 +03:00
static void __kprobes arch_copy_kprobe(struct kprobe *p)
2005-04-17 02:20:36 +04:00
{
2012-03-05 17:32:16 +04:00
/* Copy an instruction, recovering it if another optprobe has modified it. */
__copy_instruction(p->ainsn.insn, p->addr);
2010-02-25 16:34:46 +03:00
/*
2012-03-05 17:32:16 +04:00
* __copy_instruction can modify the displacement of the instruction,
* but it doesn't affect the boostable check.
2010-02-25 16:34:46 +03:00
*/
2012-03-05 17:32:16 +04:00
if (can_boost(p->ainsn.insn))
2008-01-30 15:31:21 +03:00
p->ainsn.boostable = 0;
2008-01-30 15:31:21 +03:00
else
2008-01-30 15:31:21 +03:00
p->ainsn.boostable = -1;
2008-01-30 15:31:21 +03:00
2012-03-05 17:32:16 +04:00
/* Also, displacement change doesn't affect the first byte */
p->opcode = p->ainsn.insn[0];
2005-04-17 02:20:36 +04:00
}
2008-01-30 15:31:21 +03:00
int __kprobes arch_prepare_kprobe(struct kprobe *p)
{
2010-02-03 00:49:18 +03:00
if (alternatives_text_reserved(p->addr, p->addr))
return -EINVAL;
2009-08-14 00:34:28 +04:00
if (!can_probe((unsigned long)p->addr))
return -EILSEQ;
2008-01-30 15:31:21 +03:00
/* insn: must be on special executable page on x86. */
p->ainsn.insn = get_insn_slot();
if (!p->ainsn.insn)
return -ENOMEM;
arch_copy_kprobe(p);
return 0;
}
2005-09-07 02:19:28 +04:00
void __kprobes arch_arm_kprobe(struct kprobe *p)
2005-04-17 02:20:36 +04:00
{
2007-07-22 13:12:31 +04:00
text_poke(p->addr, ((unsigned char []){BREAKPOINT_INSTRUCTION}), 1);
2005-04-17 02:20:36 +04:00
}
2005-09-07 02:19:28 +04:00
void __kprobes arch_disarm_kprobe(struct kprobe *p)
2005-04-17 02:20:36 +04:00
{
2007-07-22 13:12:31 +04:00
text_poke(p->addr, &p->opcode, 1);
2005-06-23 11:09:25 +04:00
}
2006-01-10 07:52:46 +03:00
void __kprobes arch_remove_kprobe(struct kprobe *p)
2005-06-23 11:09:25 +04:00
{
2009-01-07 01:41:50 +03:00
if (p->ainsn.insn) {
free_insn_slot(p->ainsn.insn, (p->ainsn.boostable == 1));
p->ainsn.insn = NULL;
}
2005-04-17 02:20:36 +04:00
}
2006-04-19 09:22:00 +04:00
static void __kprobes save_previous_kprobe(struct kprobe_ctlblk *kcb)
2005-06-23 11:09:37 +04:00
{
2005-11-07 12:00:12 +03:00
kcb->prev_kprobe.kp = kprobe_running();
kcb->prev_kprobe.status = kcb->kprobe_status;
2008-01-30 15:31:21 +03:00
kcb->prev_kprobe.old_flags = kcb->kprobe_old_flags;
kcb->prev_kprobe.saved_flags = kcb->kprobe_saved_flags;
2005-06-23 11:09:37 +04:00
}
2006-04-19 09:22:00 +04:00
static void __kprobes restore_previous_kprobe(struct kprobe_ctlblk *kcb)
2005-06-23 11:09:37 +04:00
{
2010-12-06 20:16:25 +03:00
__this_cpu_write(current_kprobe, kcb->prev_kprobe.kp);
2005-11-07 12:00:12 +03:00
kcb->kprobe_status = kcb->prev_kprobe.status;
2008-01-30 15:31:21 +03:00
kcb->kprobe_old_flags = kcb->prev_kprobe.old_flags;
kcb->kprobe_saved_flags = kcb->prev_kprobe.saved_flags;
2005-06-23 11:09:37 +04:00
}
2006-04-19 09:22:00 +04:00
static void __kprobes set_current_kprobe(struct kprobe *p, struct pt_regs *regs,
2005-11-07 12:00:12 +03:00
struct kprobe_ctlblk *kcb)
2005-06-23 11:09:37 +04:00
{
2010-12-06 20:16:25 +03:00
__this_cpu_write(current_kprobe, p);
2008-01-30 15:31:21 +03:00
kcb->kprobe_saved_flags = kcb->kprobe_old_flags
2008-01-30 15:31:27 +03:00
= (regs->flags & (X86_EFLAGS_TF | X86_EFLAGS_IF));
2005-06-23 11:09:37 +04:00
if (is_IF_modifier(p->ainsn.insn))
2008-01-30 15:31:27 +03:00
kcb->kprobe_saved_flags &= ~X86_EFLAGS_IF;
2005-06-23 11:09:37 +04:00
}
2008-01-30 15:31:43 +03:00
static void __kprobes clear_btf(void)
2008-01-30 15:30:54 +03:00
{
2010-03-25 16:51:51 +03:00
if (test_thread_flag(TIF_BLOCKSTEP)) {
unsigned long debugctl = get_debugctlmsr();
debugctl &= ~DEBUGCTLMSR_BTF;
update_debugctlmsr(debugctl);
}
2008-01-30 15:30:54 +03:00
}
2008-01-30 15:31:43 +03:00
static void __kprobes restore_btf(void)
2008-01-30 15:30:54 +03:00
{
2010-03-25 16:51:51 +03:00
if (test_thread_flag(TIF_BLOCKSTEP)) {
unsigned long debugctl = get_debugctlmsr();
debugctl |= DEBUGCTLMSR_BTF;
update_debugctlmsr(debugctl);
}
2008-01-30 15:30:54 +03:00
}
2012-03-05 17:32:22 +04:00
void __kprobes
arch_prepare_kretprobe(struct kretprobe_instance *ri, struct pt_regs *regs)
[PATCH] x86_64 specific function return probes
The following patch adds the x86_64 architecture specific implementation
for function return probes.
Function return probes is a mechanism built on top of kprobes that allows
a caller to register a handler to be called when a given function exits.
For example, to instrument the return path of sys_mkdir:
static int sys_mkdir_exit(struct kretprobe_instance *i, struct pt_regs *regs)
{
printk("sys_mkdir exited\n");
return 0;
}
static struct kretprobe return_probe = {
.handler = sys_mkdir_exit,
};
<inside setup function>
return_probe.kp.addr = (kprobe_opcode_t *) kallsyms_lookup_name("sys_mkdir");
if (register_kretprobe(&return_probe)) {
printk(KERN_DEBUG "Unable to register return probe!\n");
/* do error path */
}
<inside cleanup function>
unregister_kretprobe(&return_probe);
The way this works is that:
* At system initialization time, kernel/kprobes.c installs a kprobe
on a function called kretprobe_trampoline() that is implemented in
the arch/x86_64/kernel/kprobes.c (More on this later)
* When a return probe is registered using register_kretprobe(),
kernel/kprobes.c will install a kprobe on the first instruction of the
targeted function with the pre handler set to arch_prepare_kretprobe()
which is implemented in arch/x86_64/kernel/kprobes.c.
* arch_prepare_kretprobe() will prepare a kretprobe instance that stores:
- nodes for hanging this instance in an empty or free list
- a pointer to the return probe
- the original return address
- a pointer to the stack address
With all this stowed away, arch_prepare_kretprobe() then sets the return
address for the targeted function to a special trampoline function called
kretprobe_trampoline() implemented in arch/x86_64/kernel/kprobes.c
* The kprobe completes as normal, with control passing back to the target
function that executes as normal, and eventually returns to our trampoline
function.
* Since a kprobe was installed on kretprobe_trampoline() during system
initialization, control passes back to kprobes via the architecture
specific function trampoline_probe_handler() which will look up the
instance in an hlist maintained by kernel/kprobes.c, and then call
the handler function.
* When trampoline_probe_handler() is done, the kprobes infrastructure
single-steps the original instruction (in this case just a nop), and
then calls trampoline_post_handler(). trampoline_post_handler() then
looks up the instance again, puts the instance back on the free list,
and then makes a long jump back to the original return instruction.
So to recap, to instrument the exit path of a function this implementation
will cause four interruptions:
- A breakpoint at the very beginning of the function allowing us to
switch out the return address
- A single step interruption to execute the original instruction that
we replaced with the break instruction (normal kprobe flow)
- A breakpoint in the trampoline function where our instrumented function
returned to
- A single step interruption to execute the original instruction that
we replaced with the break instruction (normal kprobe flow)
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-23 11:09:23 +04:00
{
2008-01-30 15:31:21 +03:00
unsigned long *sara = stack_addr(regs);
2005-06-28 02:17:10 +04:00
2007-05-08 11:34:14 +04:00
ri->ret_addr = (kprobe_opcode_t *)*sara;
2008-01-30 15:31:21 +03:00
2007-05-08 11:34:14 +04:00
/* Replace the return addr with trampoline addr */
*sara = (unsigned long)&kretprobe_trampoline;
2005-06-23 11:09:23 +04:00
}
2008-01-30 15:32:50 +03:00
2012-03-05 17:32:22 +04:00
static void __kprobes
setup_singlestep(struct kprobe *p, struct pt_regs *regs, struct kprobe_ctlblk *kcb, int reenter)
2008-01-30 15:32:50 +03:00
{
2010-02-25 16:34:46 +03:00
if (setup_detour_execution(p, regs, reenter))
return;
2010-02-03 00:49:04 +03:00
#if !defined(CONFIG_PREEMPT)
2008-01-30 15:32:50 +03:00
if (p->ainsn.boostable == 1 && !p->post_handler) {
/* Boost up -- we can execute copied instructions directly */
2010-02-25 16:34:23 +03:00
if (!reenter)
reset_current_kprobe();
/*
* Reentering boosted probe doesn't reset current_kprobe,
* nor set current_kprobe, because it doesn't use single
* stepping.
*/
2008-01-30 15:32:50 +03:00
regs->ip = (unsigned long)p->ainsn.insn;
preempt_enable_no_resched();
return;
}
#endif
2010-02-25 16:34:23 +03:00
if (reenter) {
save_previous_kprobe(kcb);
set_current_kprobe(p, regs, kcb);
kcb->kprobe_status = KPROBE_REENTER;
} else
kcb->kprobe_status = KPROBE_HIT_SS;
/* Prepare real single stepping */
clear_btf();
regs->flags |= X86_EFLAGS_TF;
regs->flags &= ~X86_EFLAGS_IF;
/* single step inline if the instruction is an int3 */
if (p->opcode == BREAKPOINT_INSTRUCTION)
regs->ip = (unsigned long)p->addr;
else
regs->ip = (unsigned long)p->ainsn.insn;
2008-01-30 15:32:50 +03:00
}
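As a rough illustration of the non-boosted path above (a hypothetical, user-space-style sketch; the bit positions match x86 EFLAGS, but the helper itself is not kernel code): the trap flag is set so the copied instruction faults after a single step, and the interrupt flag is cleared for the duration of that step.
#include <stdint.h>

#define EFLAGS_TF (1u << 8)	/* trap flag: fault after one instruction */
#define EFLAGS_IF (1u << 9)	/* interrupt flag */

static uint32_t prepare_singlestep_flags(uint32_t flags)
{
	flags |= EFLAGS_TF;	/* arm single-stepping */
	flags &= ~EFLAGS_IF;	/* keep interrupts off while stepping */
	return flags;
}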
2008-01-30 15:32:02 +03:00
/*
* We have reentered the kprobe_handler(), since another probe was hit while
* within the handler. We save the original kprobes variables and just single
* step on the instruction of the new probe without calling any user handlers.
*/
2012-03-05 17:32:22 +04:00
static int __kprobes
reenter_kprobe(struct kprobe *p, struct pt_regs *regs, struct kprobe_ctlblk *kcb)
2008-01-30 15:32:02 +03:00
{
2008-01-30 15:32:50 +03:00
switch (kcb->kprobe_status) {
case KPROBE_HIT_SSDONE:
case KPROBE_HIT_ACTIVE:
2008-01-30 15:33:13 +03:00
kprobes_inc_nmissed_count(p);
2010-02-25 16:34:23 +03:00
setup_singlestep(p, regs, kcb, 1);
2008-01-30 15:32:50 +03:00
break;
case KPROBE_HIT_SS:
2009-08-27 21:22:58 +04:00
/* A probe has been hit in the codepath leading up to, or just
* after, single-stepping of a probed instruction. This entire
* codepath should strictly reside in .kprobes.text section.
* Raise a BUG or we'll continue in an endless reentering loop
* and eventually a stack overflow.
*/
printk(KERN_WARNING "Unrecoverable kprobe detected at %p.\n",
       p->addr);
dump_kprobe(p);
BUG();
2008-01-30 15:32:50 +03:00
default:
/* impossible cases */
WARN_ON(1);
2008-01-30 15:33:13 +03:00
return 0;
2008-01-30 15:32:02 +03:00
}
2008-01-30 15:32:50 +03:00
2008-01-30 15:32:02 +03:00
return 1;
2008-01-30 15:32:02 +03:00
}
2005-06-23 11:09:23 +04:00
2008-01-30 15:31:21 +03:00
/*
* Interrupts are disabled on entry as trap3 is an interrupt gate and they
2009-11-14 18:09:05 +03:00
* remain disabled throughout this function.
2008-01-30 15:31:21 +03:00
*/
static int __kprobes kprobe_handler(struct pt_regs *regs)
2005-04-17 02:20:36 +04:00
{
2008-01-30 15:31:21 +03:00
kprobe_opcode_t *addr;
2008-01-30 15:32:50 +03:00
struct kprobe *p;
2005-11-07 12:00:14 +03:00
struct kprobe_ctlblk *kcb;
2008-01-30 15:31:21 +03:00
addr = (kprobe_opcode_t *)(regs->ip - sizeof(kprobe_opcode_t));
2005-11-07 12:00:14 +03:00
/*
* We don't want to be preempted for the entire
2008-01-30 15:32:50 +03:00
* duration of kprobe processing. We conditionally
* re-enable preemption at the end of this function,
* and also in reenter_kprobe() and setup_singlestep().
2005-11-07 12:00:14 +03:00
*/
preempt_disable();
2005-04-17 02:20:36 +04:00
2008-01-30 15:32:50 +03:00
kcb = get_kprobe_ctlblk();
2008-01-30 15:32:19 +03:00
p = get_kprobe(addr);
2008-01-30 15:32:50 +03:00
2008-01-30 15:32:19 +03:00
if (p) {
if (kprobe_running()) {
2008-01-30 15:32:50 +03:00
if (reenter_kprobe(p, regs, kcb))
return 1;
2005-04-17 02:20:36 +04:00
} else {
2008-01-30 15:32:19 +03:00
set_current_kprobe(p, regs, kcb);
kcb->kprobe_status = KPROBE_HIT_ACTIVE;
2008-01-30 15:32:50 +03:00
2005-04-17 02:20:36 +04:00
/*
2008-01-30 15:32:50 +03:00
* If we have no pre-handler or it returned 0, we
* continue with normal processing. If we have a
* pre-handler and it returned non-zero, it prepped
* for calling the break_handler below on re-entry
* for jprobe processing, so get out doing nothing
* more here.
2005-04-17 02:20:36 +04:00
*/
2008-01-30 15:32:50 +03:00
if (!p->pre_handler || !p->pre_handler(p, regs))
2010-02-25 16:34:23 +03:00
setup_singlestep(p, regs, kcb, 0);
2008-01-30 15:32:50 +03:00
return 1;
2008-01-30 15:32:19 +03:00
}
2010-04-28 02:33:49 +04:00
} else if (*addr != BREAKPOINT_INSTRUCTION) {
/*
* The breakpoint instruction was removed right
* after we hit it. Another cpu has removed
* either a probepoint or a debugger breakpoint
* at this address. In either case, no further
* handling of this interrupt is appropriate.
* Back up over the (now missing) int3 and run
* the original instruction.
*/
regs->ip = (unsigned long)addr;
preempt_enable_no_resched();
return 1;
2008-01-30 15:32:50 +03:00
} else if (kprobe_running()) {
2010-12-06 20:16:25 +03:00
p = __this_cpu_read(current_kprobe);
2008-01-30 15:32:50 +03:00
if (p->break_handler && p->break_handler(p, regs)) {
2010-02-25 16:34:23 +03:00
setup_singlestep(p, regs, kcb, 0);
2008-01-30 15:32:50 +03:00
return 1;
2005-04-17 02:20:36 +04:00
}
2008-01-30 15:32:50 +03:00
} /* else: not a kprobe fault; let the kernel handle it */
2005-04-17 02:20:36 +04:00
2005-11-07 12:00:14 +03:00
preempt_enable_no_resched();
2008-01-30 15:32:50 +03:00
return 0;
2005-04-17 02:20:36 +04:00
}
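One detail worth spelling out from the top of kprobe_handler(): after the int3 traps, the saved ip points just past the one-byte breakpoint, so the probed address is recovered as ip minus sizeof(kprobe_opcode_t) (one byte on x86). A trivial stand-alone sketch of that relationship (hypothetical, user-space types, not kernel code):
#include <assert.h>
#include <stdint.h>

int main(void)
{
	uint8_t text[2] = { 0xcc, 0x90 };          /* int3 planted at text[0] */
	uintptr_t probe_addr = (uintptr_t)&text[0];
	uintptr_t trap_ip = probe_addr + 1;        /* ip reported after int3 */

	/* same adjustment kprobe_handler() makes with sizeof(kprobe_opcode_t) */
	assert(trap_ip - sizeof(uint8_t) == probe_addr);
	return 0;
}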
2005-06-23 11:09:23 +04:00
/*
2008-01-30 15:31:21 +03:00
* When a retprobed function returns, this code saves registers and
* calls trampoline_handler(), which runs the kretprobe's handler.
2005-06-23 11:09:23 +04:00
*/
2008-02-15 02:23:53 +03:00
static void __used __kprobes kretprobe_trampoline_holder(void)
2008-01-30 15:33:01 +03:00
{
2008-01-30 15:31:21 +03:00
asm volatile (
" .global kretprobe_trampoline \n "
2008-01-30 15:31:21 +03:00
" kretprobe_trampoline: \n "
2008-01-30 15:31:21 +03:00
#ifdef CONFIG_X86_64
2008-01-30 15:31:21 +03:00
/* We don't bother saving the ss register */
"	pushq %rsp\n"
"	pushfq\n"
2010-02-25 16:34:30 +03:00
SAVE_REGS_STRING
2008-01-30 15:31:21 +03:00
" movq %rsp, %rdi \n "
" call trampoline_handler \n "
/* Replace saved sp with true return address. */
" movq %rax, 152(%rsp) \n "
2010-02-25 16:34:30 +03:00
RESTORE_REGS_STRING
2008-01-30 15:31:21 +03:00
" popfq \n "
2008-01-30 15:31:21 +03:00
#else
"	pushf\n"
2010-02-25 16:34:30 +03:00
SAVE_REGS_STRING
2008-01-30 15:31:21 +03:00
" movl %esp, %eax \n "
" call trampoline_handler \n "
/* Move flags to cs */
2009-03-23 17:14:52 +03:00
" movl 56(%esp), %edx \n "
" movl %edx, 52(%esp) \n "
2008-01-30 15:31:21 +03:00
/* Replace saved flags with true return address. */
2009-03-23 17:14:52 +03:00
" movl %eax, 56(%esp) \n "
2010-02-25 16:34:30 +03:00
RESTORE_REGS_STRING
2008-01-30 15:31:21 +03:00
" popf \n "
# endif
2008-01-30 15:31:21 +03:00
" ret \n " ) ;
2008-01-30 15:33:01 +03:00
}
2005-06-23 11:09:23 +04:00
/*
2008-01-30 15:31:21 +03:00
* Called from kretprobe_trampoline
2005-06-23 11:09:23 +04:00
*/
2008-02-15 02:23:53 +03:00
static __used __kprobes void *trampoline_handler(struct pt_regs *regs)
2005-06-23 11:09:23 +04:00
{
2006-10-02 13:17:33 +04:00
struct kretprobe_instance *ri = NULL;
2006-10-02 13:17:35 +04:00
struct hlist_head *head, empty_rp;
2006-10-02 13:17:33 +04:00
struct hlist_node *node, *tmp;
2005-11-07 12:00:14 +03:00
unsigned long flags, orig_ret_address = 0;
2008-01-30 15:31:21 +03:00
unsigned long trampoline_address = (unsigned long)&kretprobe_trampoline;
2010-08-15 10:18:04 +04:00
kprobe_opcode_t *correct_ret_addr = NULL;
2005-06-23 11:09:23 +04:00
2006-10-02 13:17:35 +04:00
INIT_HLIST_HEAD(&empty_rp);
2008-07-25 12:46:04 +04:00
kretprobe_hash_lock(current, &head, &flags);
2008-01-30 15:31:21 +03:00
/* fixup registers */
2008-01-30 15:31:21 +03:00
#ifdef CONFIG_X86_64
2008-01-30 15:31:21 +03:00
regs->cs = __KERNEL_CS;
2008-01-30 15:31:21 +03:00
#else
regs->cs = __KERNEL_CS | get_kernel_rpl();
2009-03-23 17:14:52 +03:00
regs->gs = 0;
2008-01-30 15:31:21 +03:00
#endif
2008-01-30 15:31:21 +03:00
regs->ip = trampoline_address;
2008-01-30 15:31:21 +03:00
regs->orig_ax = ~0UL;
2005-06-23 11:09:23 +04:00
2005-06-28 02:17:10 +04:00
/*
* It is possible to have multiple instances associated with a given
2008-01-30 15:31:21 +03:00
* task either because multiple functions in the call path have
2008-10-16 21:02:37 +04:00
* return probes installed on them, and/or more than one
2005-06-28 02:17:10 +04:00
* return probe was registered for a target function.
*
* We can handle this because:
2008-01-30 15:31:21 +03:00
* - instances are always pushed into the head of the list
2005-06-28 02:17:10 +04:00
* - when multiple return probes are registered for the same
2008-01-30 15:31:21 +03:00
* function, the (chronologically) first instance's ret_addr
* will be the real return address, and all the rest will
* point to kretprobe_trampoline.
2005-06-28 02:17:10 +04:00
*/
hlist_for_each_entry_safe(ri, node, tmp, head, hlist) {
2006-10-02 13:17:33 +04:00
if (ri->task != current)
2005-06-28 02:17:10 +04:00
/* another task is sharing our hash bucket */
2006-10-02 13:17:33 +04:00
continue;
2005-06-28 02:17:10 +04:00
2010-08-15 10:18:04 +04:00
orig_ret_address = (unsigned long)ri->ret_addr;
if (orig_ret_address != trampoline_address)
/*
* This is the real return address. Any other
* instances associated with this task are for
* other calls deeper on the call stack
*/
break;
}
kretprobe_assert(ri, orig_ret_address, trampoline_address);
correct_ret_addr = ri->ret_addr;
hlist_for_each_entry_safe(ri, node, tmp, head, hlist) {
if (ri->task != current)
/* another task is sharing our hash bucket */
continue;
orig_ret_address = (unsigned long)ri->ret_addr;
2008-01-30 15:31:21 +03:00
if (ri->rp && ri->rp->handler) {
2010-12-06 20:16:25 +03:00
__this_cpu_write(current_kprobe, &ri->rp->kp);
2008-01-30 15:31:21 +03:00
get_kprobe_ctlblk()->kprobe_status = KPROBE_HIT_ACTIVE;
2010-08-15 10:18:04 +04:00
ri->ret_addr = correct_ret_addr;
2005-06-28 02:17:10 +04:00
ri->rp->handler(ri, regs);
2010-12-06 20:16:25 +03:00
__this_cpu_write(current_kprobe, NULL);
2008-01-30 15:31:21 +03:00
}
2005-06-28 02:17:10 +04:00
2006-10-02 13:17:35 +04:00
recycle_rp_inst(ri, &empty_rp);
2005-06-28 02:17:10 +04:00
if (orig_ret_address != trampoline_address)
/*
* This is the real return address. Any other
* instances associated with this task are for
* other calls deeper on the call stack
*/
break;
2005-06-23 11:09:23 +04:00
}
2005-06-28 02:17:10 +04:00
2008-07-25 12:46:04 +04:00
kretprobe_hash_unlock(current, &flags);
2005-06-28 02:17:10 +04:00
2006-10-02 13:17:35 +04:00
hlist_for_each_entry_safe(ri, node, tmp, &empty_rp, hlist) {
hlist_del(&ri->hlist);
kfree(ri);
}
2008-01-30 15:31:21 +03:00
return (void *)orig_ret_address;
[PATCH] x86_64 specific function return probes
The following patch adds the x86_64 architecture specific implementation
for function return probes.
Function return probes is a mechanism built on top of kprobes that allows
a caller to register a handler to be called when a given function exits.
For example, to instrument the return path of sys_mkdir:
static int sys_mkdir_exit(struct kretprobe_instance *i, struct pt_regs *regs)
{
printk("sys_mkdir exited\n");
return 0;
}
static struct kretprobe return_probe = {
.handler = sys_mkdir_exit,
};
<inside setup function>
return_probe.kp.addr = (kprobe_opcode_t *) kallsyms_lookup_name("sys_mkdir");
if (register_kretprobe(&return_probe)) {
printk(KERN_DEBUG "Unable to register return probe!\n");
/* do error path */
}
<inside cleanup function>
unregister_kretprobe(&return_probe);
The way this works is that:
* At system initialization time, kernel/kprobes.c installs a kprobe
on a function called kretprobe_trampoline() that is implemented in
the arch/x86_64/kernel/kprobes.c (More on this later)
* When a return probe is registered using register_kretprobe(),
kernel/kprobes.c will install a kprobe on the first instruction of the
targeted function with the pre handler set to arch_prepare_kretprobe()
which is implemented in arch/x86_64/kernel/kprobes.c.
* arch_prepare_kretprobe() will prepare a kretprobe instance that stores:
- nodes for hanging this instance in an empty or free list
- a pointer to the return probe
- the original return address
- a pointer to the stack address
With all this stowed away, arch_prepare_kretprobe() then sets the return
address for the targeted function to a special trampoline function called
kretprobe_trampoline() implemented in arch/x86_64/kernel/kprobes.c
* The kprobe completes as normal, with control passing back to the target
function that executes as normal, and eventually returns to our trampoline
function.
* Since a kprobe was installed on kretprobe_trampoline() during system
initialization, control passes back to kprobes via the architecture
specific function trampoline_probe_handler(), which will look up the
instance in an hlist maintained by kernel/kprobes.c, and then call
the handler function.
* When trampoline_probe_handler() is done, the kprobes infrastructure
single-steps the original instruction (in this case just a nop), and
then calls trampoline_post_handler(). trampoline_post_handler() then
looks up the instance again, puts the instance back on the free list,
and then makes a long jump back to the original return instruction.
So to recap: to instrument the exit path of a function, this implementation
will cause four interruptions:
- A breakpoint at the very beginning of the function allowing us to
switch out the return address
- A single step interruption to execute the original instruction that
we replaced with the break instruction (normal kprobe flow)
- A breakpoint in the trampoline function where our instrumented function
returned to
- A single step interruption to execute the original instruction that
we replaced with the break instruction (normal kprobe flow)
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-23 11:09:23 +04:00
}
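The sys_mkdir example in the commit message above only logs that the function exited. Since the return handler runs after the probed function has produced its result, the handler can also inspect that result. A minimal, hedged variation is sketched below; the only arch-specific assumption is that regs->ax holds the return value on x86, and the handler itself is illustrative rather than code taken from this file.

/* Illustrative variation of the commit message's handler, not kernel code. */
static int sys_mkdir_exit(struct kretprobe_instance *ri, struct pt_regs *regs)
{
	/* On x86 the probed function's return value is in regs->ax here. */
	printk(KERN_INFO "sys_mkdir returned %ld to %p\n",
	       (long)regs->ax, ri->ret_addr);
	return 0;
}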
2005-04-17 02:20:36 +04:00
/*
 * Called after single-stepping. p->addr is the address of the
 * instruction whose first byte has been replaced by the "int 3"
 * instruction. To avoid the SMP problems that can occur when we
 * temporarily put back the original opcode to single-step, we
 * single-stepped a copy of the instruction. The address of this
 * copy is p->ainsn.insn.
 *
 * This function prepares to return from the post-single-step
 * interrupt. We have to fix up the stack as follows:
 *
 * 0) Except in the case of absolute or indirect jump or call instructions,
2008-01-30 15:30:56 +03:00
 * the new ip is relative to the copied instruction. We need to make
2005-04-17 02:20:36 +04:00
 * it relative to the original instruction.
 *
 * 1) If the single-stepped instruction was pushfl, then the TF and IF
2008-01-30 15:30:56 +03:00
 * flags are set in the just-pushed flags, and may need to be cleared.
2005-04-17 02:20:36 +04:00
 *
 * 2) If the single-stepped instruction was a call, the return address
 * that is atop the stack is the address following the copied instruction.
 * We need to make it the address following the original instruction.
2008-01-30 15:31:21 +03:00
 *
 * If this is the first time we've single-stepped the instruction at
 * this probepoint, and the instruction is boostable, boost it: add a
 * jump instruction after the copied instruction, that jumps to the next
 * instruction after the probepoint.
2005-04-17 02:20:36 +04:00
 */
2012-03-05 17:32:22 +04:00
static void __kprobes
resume_execution(struct kprobe *p, struct pt_regs *regs, struct kprobe_ctlblk *kcb)
2005-04-17 02:20:36 +04:00
{
2008-01-30 15:31:21 +03:00
	unsigned long *tos = stack_addr(regs);
	unsigned long copy_ip = (unsigned long)p->ainsn.insn;
	unsigned long orig_ip = (unsigned long)p->addr;
2005-04-17 02:20:36 +04:00
	kprobe_opcode_t *insn = p->ainsn.insn;
2010-06-29 09:53:50 +04:00
	/* Skip prefixes */
	insn = skip_prefixes(insn);
2005-04-17 02:20:36 +04:00
2008-01-30 15:31:27 +03:00
	regs->flags &= ~X86_EFLAGS_TF;
2005-04-17 02:20:36 +04:00
	switch (*insn) {
2007-12-18 20:05:58 +03:00
	case 0x9c:	/* pushfl */
2008-01-30 15:31:27 +03:00
		*tos &= ~(X86_EFLAGS_TF | X86_EFLAGS_IF);
2008-01-30 15:31:21 +03:00
		*tos |= kcb->kprobe_old_flags;
2005-04-17 02:20:36 +04:00
		break;
2007-12-18 20:05:58 +03:00
	case 0xc2:	/* iret/ret/lret */
	case 0xc3:
2005-05-06 03:15:40 +04:00
	case 0xca:
2007-12-18 20:05:58 +03:00
	case 0xcb:
	case 0xcf:
	case 0xea:	/* jmp absolute -- ip is correct */
		/* ip is already adjusted, no more changes required */
2008-01-30 15:31:21 +03:00
		p->ainsn.boostable = 1;
2007-12-18 20:05:58 +03:00
		goto no_change;
	case 0xe8:	/* call relative - Fix return addr */
2008-01-30 15:31:21 +03:00
		*tos = orig_ip + (*tos - copy_ip);
2005-04-17 02:20:36 +04:00
		break;
2008-01-30 15:31:43 +03:00
#ifdef CONFIG_X86_32
2008-01-30 15:31:21 +03:00
	case 0x9a:	/* call absolute -- same as call absolute, indirect */
		*tos = orig_ip + (*tos - copy_ip);
		goto no_change;
#endif
2005-04-17 02:20:36 +04:00
	case 0xff:
2006-05-21 02:00:21 +04:00
		if ((insn[1] & 0x30) == 0x10) {
2008-01-30 15:31:21 +03:00
			/*
			 * call absolute, indirect
			 * Fix return addr; ip is correct.
			 * But this is not boostable
			 */
			*tos = orig_ip + (*tos - copy_ip);
2007-12-18 20:05:58 +03:00
			goto no_change;
2008-01-30 15:31:21 +03:00
		} else if (((insn[1] & 0x31) == 0x20) ||
			   ((insn[1] & 0x31) == 0x21)) {
			/*
			 * jmp near and far, absolute indirect
			 * ip is correct. And this is boostable
			 */
2008-01-30 15:31:21 +03:00
			p->ainsn.boostable = 1;
2007-12-18 20:05:58 +03:00
			goto no_change;
2005-04-17 02:20:36 +04:00
		}
	default:
		break;
	}
2008-01-30 15:31:21 +03:00
	if (p->ainsn.boostable == 0) {
2008-01-30 15:31:21 +03:00
		if ((regs->ip > copy_ip) &&
		    (regs->ip - copy_ip) + 5 < MAX_INSN_SIZE) {
2008-01-30 15:31:21 +03:00
			/*
			 * These instructions can be executed directly if it
			 * jumps back to correct address.
			 */
2010-02-25 16:34:46 +03:00
			synthesize_reljump((void *)regs->ip,
					   (void *)orig_ip + (regs->ip - copy_ip));
2008-01-30 15:31:21 +03:00
			p->ainsn.boostable = 1;
		} else {
			p->ainsn.boostable = -1;
		}
	}
2008-01-30 15:31:21 +03:00
	regs->ip += orig_ip - copy_ip;
2008-01-30 15:30:56 +03:00
2007-12-18 20:05:58 +03:00
no_change:
2008-01-30 15:30:54 +03:00
	restore_btf();
2005-04-17 02:20:36 +04:00
}
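As a concrete illustration of the fixups implemented above, here is a short worked example; all addresses are invented for illustration.

/*
 * Worked example, all numbers hypothetical:
 *
 *   orig_ip = 0xffffffff8102f000   p->addr, the probed instruction
 *   copy_ip = 0xffffffffa0040000   p->ainsn.insn, the copied slot
 *
 * For an ordinary 5-byte instruction, single-stepping the copy leaves
 * regs->ip at copy_ip + 5, and the final
 *   regs->ip += orig_ip - copy_ip;
 * moves it to orig_ip + 5, i.e. the instruction after the probepoint.
 *
 * For a relative call (case 0xe8), the step also pushed copy_ip + 5 as the
 * return address, so
 *   *tos = orig_ip + (*tos - copy_ip);
 * rewrites it to orig_ip + 5 before the ip fixup runs.
 */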
2008-01-30 15:31:21 +03:00
/*
* Interrupts are disabled on entry as trap1 is an interrupt gate and they
2009-11-14 18:09:05 +03:00
 * remain disabled throughout this function.
2008-01-30 15:31:21 +03:00
*/
static int __kprobes post_kprobe_handler(struct pt_regs *regs)
2005-04-17 02:20:36 +04:00
{
2005-11-07 12:00:12 +03:00
	struct kprobe *cur = kprobe_running();
	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();

	if (!cur)
2005-04-17 02:20:36 +04:00
		return 0;
2008-03-16 11:21:21 +03:00
	resume_execution(cur, regs, kcb);
	regs->flags |= kcb->kprobe_saved_flags;
2005-11-07 12:00:12 +03:00
	if ((kcb->kprobe_status != KPROBE_REENTER) && cur->post_handler) {
		kcb->kprobe_status = KPROBE_HIT_SSDONE;
		cur->post_handler(cur, regs, 0);
2005-06-23 11:09:37 +04:00
	}
2005-04-17 02:20:36 +04:00
2008-01-30 15:31:21 +03:00
	/* Restore back the original saved kprobes variables and continue. */
2005-11-07 12:00:12 +03:00
	if (kcb->kprobe_status == KPROBE_REENTER) {
		restore_previous_kprobe(kcb);
2005-06-23 11:09:37 +04:00
		goto out;
	}
2005-11-07 12:00:12 +03:00
	reset_current_kprobe();
2005-06-23 11:09:37 +04:00
out:
2005-04-17 02:20:36 +04:00
	preempt_enable_no_resched();
	/*
2008-01-30 15:30:56 +03:00
	 * if somebody else is singlestepping across a probe point, flags
2005-04-17 02:20:36 +04:00
	 * will have TF set, in which case, continue the remaining processing
	 * of do_debug, as if this is not a probe hit.
	 */
2008-01-30 15:31:27 +03:00
	if (regs->flags & X86_EFLAGS_TF)
2005-04-17 02:20:36 +04:00
		return 0;
	return 1;
}
2005-09-07 02:19:28 +04:00
int __kprobes kprobe_fault_handler(struct pt_regs *regs, int trapnr)
2005-04-17 02:20:36 +04:00
{
2005-11-07 12:00:12 +03:00
	struct kprobe *cur = kprobe_running();
	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
2008-01-30 15:31:21 +03:00
	switch (kcb->kprobe_status) {
2006-03-26 13:38:23 +04:00
	case KPROBE_HIT_SS:
	case KPROBE_REENTER:
		/*
		 * We are here because the instruction being single
		 * stepped caused a page fault. We reset the current
2008-01-30 15:30:56 +03:00
		 * kprobe and the ip points back to the probe address
2006-03-26 13:38:23 +04:00
		 * and allow the page fault handler to continue as a
		 * normal page fault.
		 */
2008-01-30 15:30:56 +03:00
		regs->ip = (unsigned long)cur->addr;
2008-01-30 15:31:21 +03:00
		regs->flags |= kcb->kprobe_old_flags;
2006-03-26 13:38:23 +04:00
		if (kcb->kprobe_status == KPROBE_REENTER)
			restore_previous_kprobe(kcb);
		else
			reset_current_kprobe();
2005-04-17 02:20:36 +04:00
		preempt_enable_no_resched();
2006-03-26 13:38:23 +04:00
		break;
	case KPROBE_HIT_ACTIVE:
	case KPROBE_HIT_SSDONE:
		/*
		 * We increment the nmissed count for accounting,
2008-01-30 15:31:21 +03:00
		 * we can also use npre/npostfault count for accounting
2006-03-26 13:38:23 +04:00
		 * these specific fault cases.
		 */
		kprobes_inc_nmissed_count(cur);
		/*
		 * We come here because instructions in the pre/post
		 * handler caused the page_fault, this could happen
		 * if handler tries to access user space by
		 * copy_from_user(), get_user() etc. Let the
		 * user-specified handler try to fix it first.
		 */
		if (cur->fault_handler && cur->fault_handler(cur, regs, trapnr))
			return 1;
		/*
		 * In case the user-specified fault handler returned
		 * zero, try to fix up.
		 */
2008-01-30 15:31:21 +03:00
		if (fixup_exception(regs))
			return 1;
2008-01-30 15:31:41 +03:00
2006-03-26 13:38:23 +04:00
		/*
2008-01-30 15:31:21 +03:00
		 * fixup routine could not handle it,
2006-03-26 13:38:23 +04:00
		 * Let do_page_fault() fix it.
		 */
		break;
	default:
		break;
2005-04-17 02:20:36 +04:00
	}
	return 0;
}
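Tying the fault path above to the user-visible API: a probe can supply its own fault_handler, which gets first crack before fixup_exception(). The following is a minimal sketch against the kprobes API of this era; the target symbol, handler names, and messages are assumptions made for illustration, not code from this file.

#include <linux/kprobes.h>
#include <linux/module.h>

/* Hypothetical handlers, illustrating the callback signatures only. */
static int my_pre(struct kprobe *p, struct pt_regs *regs)
{
	pr_info("pre-handler hit at %p\n", p->addr);
	return 0;	/* 0: continue with the probed instruction as usual */
}

static int my_fault(struct kprobe *p, struct pt_regs *regs, int trapnr)
{
	pr_info("probe handler faulted, trap %d\n", trapnr);
	/*
	 * Returning 1 would tell kprobe_fault_handler() above that the
	 * fault is handled; 0 falls back to fixup_exception()/do_page_fault().
	 */
	return 0;
}

static struct kprobe my_probe = {
	.symbol_name	= "do_fork",	/* assumed example target */
	.pre_handler	= my_pre,
	.fault_handler	= my_fault,
};

static int __init my_probe_init(void)
{
	return register_kprobe(&my_probe);
}

static void __exit my_probe_exit(void)
{
	unregister_kprobe(&my_probe);
}

module_init(my_probe_init);
module_exit(my_probe_exit);
MODULE_LICENSE("GPL");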
/*
 * Wrapper routine for handling exceptions.
 */
2012-03-05 17:32:22 +04:00
int __kprobes
kprobe_exceptions_notify(struct notifier_block *self, unsigned long val, void *data)
2005-04-17 02:20:36 +04:00
{
2008-01-30 15:33:23 +03:00
	struct die_args *args = data;
2005-11-07 12:00:07 +03:00
	int ret = NOTIFY_DONE;
2008-01-30 15:31:21 +03:00
	if (args->regs && user_mode_vm(args->regs))
2006-03-26 13:38:21 +04:00
		return ret;
2005-04-17 02:20:36 +04:00
	switch (val) {
	case DIE_INT3:
		if (kprobe_handler(args->regs))
2005-11-07 12:00:07 +03:00
			ret = NOTIFY_STOP;
2005-04-17 02:20:36 +04:00
		break;
	case DIE_DEBUG:
2009-06-01 22:17:06 +04:00
		if (post_kprobe_handler(args->regs)) {
			/*
			 * Reset the BS bit in dr6 (pointed by args->err) to
			 * denote completion of processing
			 */
			(*(unsigned long *)ERR_PTR(args->err)) &= ~DR_STEP;
2005-11-07 12:00:07 +03:00
			ret = NOTIFY_STOP;
2009-06-01 22:17:06 +04:00
		}
2005-04-17 02:20:36 +04:00
		break;
	case DIE_GPF:
2008-01-30 15:32:32 +03:00
		/*
		 * To be potentially processing a kprobe fault and to
		 * trust the result from kprobe_running(), we have to
		 * be non-preemptible.
		 */
		if (!preemptible() && kprobe_running() &&
2005-04-17 02:20:36 +04:00
		    kprobe_fault_handler(args->regs, args->trapnr))
2005-11-07 12:00:07 +03:00
			ret = NOTIFY_STOP;
2005-04-17 02:20:36 +04:00
		break;
	default:
		break;
	}
2005-11-07 12:00:07 +03:00
	return ret;
2005-04-17 02:20:36 +04:00
}
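This wrapper only takes effect once it is placed on the die-notifier chain; that hookup lives in the generic kernel/kprobes.c rather than in this file. A rough, hedged sketch of the registration in this era follows; the init helper name is hypothetical.

#include <linux/kdebug.h>
#include <linux/kprobes.h>
#include <linux/notifier.h>

/*
 * Sketch: hook the arch wrapper into the die notifier chain with a very
 * high priority so kprobes sees int3/debug traps before other users.
 */
static struct notifier_block kprobe_exceptions_nb = {
	.notifier_call = kprobe_exceptions_notify,
	.priority = 0x7fffffff	/* we need to be notified first */
};

static int __init hook_kprobe_notifier(void)	/* hypothetical helper */
{
	return register_die_notifier(&kprobe_exceptions_nb);
}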
2005-09-07 02:19:28 +04:00
int __kprobes setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs)
2005-04-17 02:20:36 +04:00
{
	struct jprobe *jp = container_of(p, struct jprobe, kp);
	unsigned long addr;
2005-11-07 12:00:12 +03:00
	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
2005-04-17 02:20:36 +04:00
2005-11-07 12:00:12 +03:00
	kcb->jprobe_saved_regs = *regs;
2008-01-30 15:31:21 +03:00
	kcb->jprobe_saved_sp = stack_addr(regs);
	addr = (unsigned long)(kcb->jprobe_saved_sp);
2005-04-17 02:20:36 +04:00
	/*
	 * As Linus pointed out, gcc assumes that the callee
	 * owns the argument space and could overwrite it, e.g.
	 * tailcall optimization. So, to be absolutely safe
	 * we also save and restore enough stack bytes to cover
	 * the argument area.
	 */
2005-11-07 12:00:12 +03:00
	memcpy(kcb->jprobes_stack, (kprobe_opcode_t *)addr,
2008-01-30 15:31:21 +03:00
	       MIN_STACK_SIZE(addr));
2008-01-30 15:31:27 +03:00
	regs->flags &= ~X86_EFLAGS_IF;
2007-10-12 00:25:25 +04:00
	trace_hardirqs_off();
2008-01-30 15:30:56 +03:00
	regs->ip = (unsigned long)(jp->entry);
2005-04-17 02:20:36 +04:00
	return 1;
}
2005-09-07 02:19:28 +04:00
void __kprobes jprobe_return(void)
2005-04-17 02:20:36 +04:00
{
2005-11-07 12:00:12 +03:00
	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
2008-01-30 15:31:21 +03:00
	asm volatile (
#ifdef CONFIG_X86_64
			"	xchg %%rbx,%%rsp	\n"
#else
			"	xchgl %%ebx,%%esp	\n"
#endif
			"	int3			\n"
			"	.globl jprobe_return_end\n"
			"	jprobe_return_end:	\n"
			"	nop			\n" : : "b"
			(kcb->jprobe_saved_sp) : "memory");
2005-04-17 02:20:36 +04:00
}
2005-09-07 02:19:28 +04:00
int __kprobes longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
2005-04-17 02:20:36 +04:00
{
2005-11-07 12:00:12 +03:00
	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
2008-01-30 15:30:56 +03:00
	u8 *addr = (u8 *)(regs->ip - 1);
2005-04-17 02:20:36 +04:00
	struct jprobe *jp = container_of(p, struct jprobe, kp);
2008-01-30 15:31:21 +03:00
	if ((addr > (u8 *)jprobe_return) &&
	    (addr < (u8 *)jprobe_return_end)) {
2008-01-30 15:31:21 +03:00
		if (stack_addr(regs) != kcb->jprobe_saved_sp) {
2007-12-18 20:05:58 +03:00
			struct pt_regs *saved_regs = &kcb->jprobe_saved_regs;
2008-01-30 15:31:21 +03:00
			printk(KERN_ERR
			       "current sp %p does not match saved sp %p\n",
2008-01-30 15:31:21 +03:00
			       stack_addr(regs), kcb->jprobe_saved_sp);
2008-01-30 15:31:21 +03:00
			printk(KERN_ERR "Saved registers for jprobe %p\n", jp);
2012-05-09 11:47:37 +04:00
			show_regs(saved_regs);
2008-01-30 15:31:21 +03:00
			printk(KERN_ERR "Current registers\n");
2012-05-09 11:47:37 +04:00
			show_regs(regs);
2005-04-17 02:20:36 +04:00
			BUG();
		}
2005-11-07 12:00:12 +03:00
		*regs = kcb->jprobe_saved_regs;
2008-01-30 15:31:21 +03:00
		memcpy((kprobe_opcode_t *)(kcb->jprobe_saved_sp),
		       kcb->jprobes_stack,
		       MIN_STACK_SIZE(kcb->jprobe_saved_sp));
2005-11-07 12:00:14 +03:00
		preempt_enable_no_resched();
2005-04-17 02:20:36 +04:00
		return 1;
	}
	return 0;
}
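The three routines above implement the jprobe mechanism end to end: setjmp_pre_handler() diverts execution into the user handler, jprobe_return() traps back into kprobes, and longjmp_break_handler() restores the saved context. A minimal usage sketch against the jprobes API of this era follows; the probed function, its signature, and all names are assumptions for illustration only.

#include <linux/kprobes.h>
#include <linux/module.h>
#include <linux/fs.h>

/*
 * Hypothetical jprobe handler: it mirrors the probed function's signature
 * (assumed here for do_sys_open of this era) and must end in jprobe_return(),
 * which triggers longjmp_break_handler() to restore the saved context.
 */
static long my_do_sys_open(int dfd, const char __user *filename,
			   int flags, umode_t mode)
{
	pr_info("do_sys_open: flags 0x%x\n", flags);
	jprobe_return();
	return 0;	/* never reached */
}

static struct jprobe my_jprobe = {
	.entry = my_do_sys_open,
	.kp = {
		.symbol_name = "do_sys_open",	/* assumed example target */
	},
};

static int __init my_jprobe_init(void)
{
	return register_jprobe(&my_jprobe);
}

static void __exit my_jprobe_exit(void)
{
	unregister_jprobe(&my_jprobe);
}

module_init(my_jprobe_init);
module_exit(my_jprobe_exit);
MODULE_LICENSE("GPL");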
2005-06-28 02:17:10 +04:00
2005-07-06 05:54:50 +04:00
int __init arch_init_kprobes(void)
2005-06-28 02:17:10 +04:00
{
2012-03-05 17:32:22 +04:00
	return arch_init_optprobes();
2005-06-28 02:17:10 +04:00
}
2007-05-08 11:34:16 +04:00
int __kprobes arch_trampoline_kprobe(struct kprobe *p)
{
	return 0;
}