/*
 *  Kernel Probes (KProbes)
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software
 * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
 *
 * Copyright (C) IBM Corporation, 2002, 2004
 *
 * 2002-Oct	Created by Vamsi Krishna S <vamsi_krishna@in.ibm.com> Kernel
 *		Probes initial implementation (includes contributions from
 *		Rusty Russell).
 * 2004-July	Suparna Bhattacharya <suparna@in.ibm.com> added jumper probes
 *		interface to access function arguments.
 * 2004-Oct	Jim Keniston <jkenisto@us.ibm.com> and Prasanna S Panchamukhi
 *		<prasanna@in.ibm.com> adapted for x86_64 from i386.
 * 2005-Mar	Roland McGrath <roland@redhat.com>
 *		Fixed to handle %rip-relative addressing mode correctly.
 * 2005-May	Hien Nguyen <hien@us.ibm.com>, Jim Keniston
 *		<jkenisto@us.ibm.com> and Prasanna S Panchamukhi
 *		<prasanna@in.ibm.com> added function-return probes.
 * 2005-May	Rusty Lynch <rusty.lynch@intel.com>
 *		Added function return probes functionality
 * 2006-Feb	Masami Hiramatsu <hiramatu@sdl.hitachi.co.jp> added
 *		kprobe-booster and kretprobe-booster for i386.
 * 2007-Dec	Masami Hiramatsu <mhiramat@redhat.com> added kprobe-booster
 *		and kretprobe-booster for x86-64
 * 2007-Dec	Masami Hiramatsu <mhiramat@redhat.com>, Arjan van de Ven
 *		<arjan@infradead.org> and Jim Keniston <jkenisto@us.ibm.com>
 *		unified x86 kprobes code.
 */
#include <linux/kprobes.h>
#include <linux/ptrace.h>
#include <linux/string.h>
#include <linux/slab.h>
#include <linux/hardirq.h>
#include <linux/preempt.h>
#include <linux/module.h>
#include <linux/kdebug.h>

#include <asm/cacheflush.h>
#include <asm/desc.h>
#include <asm/pgtable.h>
#include <asm/uaccess.h>
#include <asm/alternative.h>

void jprobe_return_end(void);

DEFINE_PER_CPU(struct kprobe *, current_kprobe) = NULL;
DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);

#ifdef CONFIG_X86_64
#define stack_addr(regs) ((unsigned long *)regs->sp)
#else
/*
 * "&regs->sp" looks wrong, but it's correct for x86_32.  x86_32 CPUs
 * don't save the ss and esp registers if the CPU is already in kernel
 * mode when it traps.  So for kprobes, regs->sp and regs->ss are not
 * the [nonexistent] saved stack pointer and ss register, but rather
 * the top 8 bytes of the pre-int3 stack.  So &regs->sp happens to
 * point to the top of the pre-int3 stack.
 */
#define stack_addr(regs) ((unsigned long *)&regs->sp)
#endif

#define W(row, b0, b1, b2, b3, b4, b5, b6, b7, b8, b9, ba, bb, bc, bd, be, bf)\
	(((b0##UL << 0x0)|(b1##UL << 0x1)|(b2##UL << 0x2)|(b3##UL << 0x3) |   \
	  (b4##UL << 0x4)|(b5##UL << 0x5)|(b6##UL << 0x6)|(b7##UL << 0x7) |   \
	  (b8##UL << 0x8)|(b9##UL << 0x9)|(ba##UL << 0xa)|(bb##UL << 0xb) |   \
	  (bc##UL << 0xc)|(bd##UL << 0xd)|(be##UL << 0xe)|(bf##UL << 0xf))    \
	 << (row % 32))
/*
 * Undefined/reserved opcodes, conditional jump, Opcode Extension
 * Groups, and some special opcodes can not boost.
 */
static const u32 twobyte_is_boostable[256 / 32] = {
	/*      0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f          */
	/*      ----------------------------------------------          */
	W(0x00, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0) | /* 00 */
	W(0x10, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) , /* 10 */
	W(0x20, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) | /* 20 */
	W(0x30, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) , /* 30 */
	W(0x40, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1) | /* 40 */
	W(0x50, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) , /* 50 */
	W(0x60, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1) | /* 60 */
	W(0x70, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1) , /* 70 */
	W(0x80, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) | /* 80 */
	W(0x90, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1) , /* 90 */
	W(0xa0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1) | /* a0 */
	W(0xb0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1) , /* b0 */
	W(0xc0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1) | /* c0 */
	W(0xd0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1) , /* d0 */
	W(0xe0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1) | /* e0 */
	W(0xf0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0)   /* f0 */
	/*      -----------------------------------------------         */
	/*      0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f          */
};

static const u32 onebyte_has_modrm[256 / 32] = {
	/*      0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f          */
	/*      -----------------------------------------------         */
	W(0x00, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0) | /* 00 */
	W(0x10, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0) , /* 10 */
	W(0x20, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0) | /* 20 */
	W(0x30, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0) , /* 30 */
	W(0x40, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) | /* 40 */
	W(0x50, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) , /* 50 */
	W(0x60, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0) | /* 60 */
	W(0x70, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) , /* 70 */
	W(0x80, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1) | /* 80 */
	W(0x90, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) , /* 90 */
	W(0xa0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) | /* a0 */
	W(0xb0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) , /* b0 */
	W(0xc0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0) | /* c0 */
	W(0xd0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1) , /* d0 */
	W(0xe0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) | /* e0 */
	W(0xf0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1)   /* f0 */
	/*      -----------------------------------------------         */
	/*      0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f          */
};

static const u32 twobyte_has_modrm[256 / 32] = {
	/*      0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f          */
	/*      -----------------------------------------------         */
	W(0x00, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1) | /* 0f */
	W(0x10, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0) , /* 1f */
	W(0x20, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1) | /* 2f */
	W(0x30, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) , /* 3f */
	W(0x40, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1) | /* 4f */
	W(0x50, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1) , /* 5f */
	W(0x60, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1) | /* 6f */
	W(0x70, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1) , /* 7f */
	W(0x80, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) | /* 8f */
	W(0x90, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1) , /* 9f */
	W(0xa0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1) | /* af */
	W(0xb0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1) , /* bf */
	W(0xc0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0) | /* cf */
	W(0xd0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1) , /* df */
	W(0xe0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1) | /* ef */
	W(0xf0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0)   /* ff */
	/*      -----------------------------------------------         */
	/*      0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f          */
};
#undef W
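
/*
 * Example (illustrative, not part of the kernel API): the tables above
 * pack one bit per opcode, 32 opcodes per u32 word, so a lookup such as
 *
 *	test_bit(0x93, (unsigned long *)twobyte_is_boostable)
 *
 * reads bit (0x93 % 32) of word (0x93 / 32).  Row 0x90 (the setcc
 * family) is all ones, so the two-byte instruction 0x0f 0x93 (setae)
 * is considered boostable.
 */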

struct kretprobe_blackpoint kretprobe_blacklist[] = {
	{"__switch_to", }, /* This function switches only current task, but
			      doesn't switch kernel stack.*/
	{NULL, NULL}	/* Terminator */
};
const int kretprobe_blacklist_size = ARRAY_SIZE(kretprobe_blacklist);

/* Insert a jump instruction at address 'from', which jumps to address 'to'.*/
static void __kprobes set_jmp_op(void *from, void *to)
{
	struct __arch_jmp_op {
		char op;
		s32 raddr;
	} __attribute__((packed)) * jop;
	jop = (struct __arch_jmp_op *)from;
	jop->raddr = (s32)((long)(to) - ((long)(from) + 5));
	jop->op = RELATIVEJUMP_INSTRUCTION;
}
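
/*
 * Worked example (addresses are illustrative): for a jump placed at
 * from == 0x1000 with target to == 0x2000, the displacement is taken
 * relative to the end of the 5-byte jmp instruction, so
 * raddr = 0x2000 - (0x1000 + 5) = 0xffb.
 */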

/*
 * Check for the REX prefix which can only exist on X86_64
 * X86_32 always returns 0
 */
static int __kprobes is_REX_prefix(kprobe_opcode_t *insn)
{
#ifdef CONFIG_X86_64
	if ((*insn & 0xf0) == 0x40)
		return 1;
#endif
	return 0;
}
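
/*
 * For instance, the common REX.W prefix 0x48 matches here because
 * (0x48 & 0xf0) == 0x40; any byte in 0x40-0x4f is treated as a REX
 * prefix on 64-bit kernels.
 */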

/*
 * Returns non-zero if opcode is boostable.
 * RIP relative instructions are adjusted at copying time in 64 bits mode
 */
static int __kprobes can_boost(kprobe_opcode_t *opcodes)
{
	kprobe_opcode_t opcode;
	kprobe_opcode_t *orig_opcodes = opcodes;

retry:
	if (opcodes - orig_opcodes > MAX_INSN_SIZE - 1)
		return 0;
	opcode = *(opcodes++);

	/* 2nd-byte opcode */
	if (opcode == 0x0f) {
		if (opcodes - orig_opcodes > MAX_INSN_SIZE - 1)
			return 0;
		return test_bit(*opcodes,
				(unsigned long *)twobyte_is_boostable);
	}

	switch (opcode & 0xf0) {
#ifdef CONFIG_X86_64
	case 0x40:
		goto retry; /* REX prefix is boostable */
#endif
	case 0x60:
		if (0x63 < opcode && opcode < 0x67)
			goto retry; /* prefixes */
		/* can't boost Address-size override and bound */
		return (opcode != 0x62 && opcode != 0x67);
	case 0x70:
		return 0; /* can't boost conditional jump */
	case 0xc0:
		/* can't boost software-interruptions */
		return (0xc1 < opcode && opcode < 0xcc) || opcode == 0xcf;
	case 0xd0:
		/* can boost AA* and XLAT */
		return (opcode == 0xd4 || opcode == 0xd5 || opcode == 0xd7);
	case 0xe0:
		/* can boost in/out and absolute jmps */
		return ((opcode & 0x04) || opcode == 0xea);
	case 0xf0:
		if ((opcode & 0x0c) == 0 && opcode != 0xf1)
			goto retry; /* lock/rep(ne) prefix */
		/* clear and set flags are boostable */
		return (opcode == 0xf5 || (0xf7 < opcode && opcode < 0xfe));
	default:
		/* segment override prefixes are boostable */
		if (opcode == 0x26 || opcode == 0x36 || opcode == 0x3e)
			goto retry; /* prefixes */
		/* CS override prefix and call are not boostable */
		return (opcode != 0x2e && opcode != 0x9a);
	}
}
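
/*
 * Worked example: for "push %rbp" (opcode 0x55), the 0x50 row falls
 * through to the default case; 0x55 is neither a segment override nor
 * 0x2e/0x9a, so can_boost() returns 1 and the copied instruction may
 * later be executed directly with a jump back, skipping the
 * single-step trap.
 */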

/*
 * Returns non-zero if opcode modifies the interrupt flag.
 */
static int __kprobes is_IF_modifier(kprobe_opcode_t *insn)
{
	switch (*insn) {
	case 0xfa:		/* cli */
	case 0xfb:		/* sti */
	case 0xcf:		/* iret/iretd */
	case 0x9d:		/* popf/popfd */
		return 1;
	}

	/*
	 * on X86_64, 0x40-0x4f are REX prefixes so we need to look
	 * at the next byte instead..but of course not recurse infinitely
	 */
	if (is_REX_prefix(insn))
		return is_IF_modifier(++insn);

	return 0;
}

/*
 * Adjust the displacement if the instruction uses the %rip-relative
 * addressing mode.
 * If it does, Return the address of the 32-bit displacement word.
 * If not, return null.
 * Only applicable to 64-bit x86.
 */
static void __kprobes fix_riprel(struct kprobe *p)
{
#ifdef CONFIG_X86_64
	u8 *insn = p->ainsn.insn;
	s64 disp;
	int need_modrm;

	/* Skip legacy instruction prefixes.  */
	while (1) {
		switch (*insn) {
		case 0x66:
		case 0x67:
		case 0x2e:
		case 0x3e:
		case 0x26:
		case 0x64:
		case 0x65:
		case 0x36:
		case 0xf0:
		case 0xf3:
		case 0xf2:
			++insn;
			continue;
		}
		break;
	}

	/* Skip REX instruction prefix.  */
	if (is_REX_prefix(insn))
		++insn;

	if (*insn == 0x0f) {
		/* Two-byte opcode.  */
		++insn;
		need_modrm = test_bit(*insn,
				      (unsigned long *)twobyte_has_modrm);
	} else
		/* One-byte opcode.  */
		need_modrm = test_bit(*insn,
				      (unsigned long *)onebyte_has_modrm);

	if (need_modrm) {
		u8 modrm = *++insn;
		if ((modrm & 0xc7) == 0x05) {
			/* %rip+disp32 addressing mode */
			/* Displacement follows ModRM byte.  */
			++insn;
			/*
			 * The copied instruction uses the %rip-relative
			 * addressing mode.  Adjust the displacement for the
			 * difference between the original location of this
			 * instruction and the location of the copy that will
			 * actually be run.  The tricky bit here is making sure
			 * that the sign extension happens correctly in this
			 * calculation, since we need a signed 32-bit result to
			 * be sign-extended to 64 bits when it's added to the
			 * %rip value and yield the same 64-bit result that the
			 * sign-extension of the original signed 32-bit
			 * displacement would have given.
			 */
			disp = (u8 *) p->addr + *((s32 *) insn) -
			       (u8 *) p->ainsn.insn;
			BUG_ON((s64) (s32) disp != disp); /* Sanity check.  */
			*(s32 *) insn = (s32) disp;
		}
	}
#endif
}
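
/*
 * Worked example (addresses are illustrative): if an instruction with
 * a %rip-relative operand originally sat at p->addr and its copy lives
 * at p->ainsn.insn, the copy's displacement must be adjusted by the
 * distance the code moved:
 *
 *	new_disp = old_disp + (p->addr - p->ainsn.insn)
 *
 * so that "copy location + new_disp" still names the original target.
 */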

static void __kprobes arch_copy_kprobe(struct kprobe *p)
{
	memcpy(p->ainsn.insn, p->addr, MAX_INSN_SIZE * sizeof(kprobe_opcode_t));

	fix_riprel(p);

	if (can_boost(p->addr))
		p->ainsn.boostable = 0;
	else
		p->ainsn.boostable = -1;

	p->opcode = *p->addr;
}

int __kprobes arch_prepare_kprobe(struct kprobe *p)
{
	/* insn: must be on special executable page on x86. */
	p->ainsn.insn = get_insn_slot();
	if (!p->ainsn.insn)
		return -ENOMEM;
	arch_copy_kprobe(p);
	return 0;
}
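
/*
 * A minimal registration sketch (handler and probed symbol are
 * illustrative): the generic register_kprobe() path ends up calling
 * arch_prepare_kprobe() above and arch_arm_kprobe() below.
 *
 *	static int my_pre(struct kprobe *p, struct pt_regs *regs)
 *	{
 *		printk(KERN_INFO "kprobe hit at %p\n", p->addr);
 *		return 0;
 *	}
 *
 *	static struct kprobe kp = {
 *		.symbol_name	= "do_fork",
 *		.pre_handler	= my_pre,
 *	};
 *
 *	register_kprobe(&kp);
 */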

void __kprobes arch_arm_kprobe(struct kprobe *p)
{
	text_poke(p->addr, ((unsigned char []){BREAKPOINT_INSTRUCTION}), 1);
}

void __kprobes arch_disarm_kprobe(struct kprobe *p)
{
	text_poke(p->addr, &p->opcode, 1);
}

void __kprobes arch_remove_kprobe(struct kprobe *p)
{
	if (p->ainsn.insn) {
		free_insn_slot(p->ainsn.insn, (p->ainsn.boostable == 1));
		p->ainsn.insn = NULL;
	}
}

static void __kprobes save_previous_kprobe(struct kprobe_ctlblk *kcb)
{
	kcb->prev_kprobe.kp = kprobe_running();
	kcb->prev_kprobe.status = kcb->kprobe_status;
	kcb->prev_kprobe.old_flags = kcb->kprobe_old_flags;
	kcb->prev_kprobe.saved_flags = kcb->kprobe_saved_flags;
}

static void __kprobes restore_previous_kprobe(struct kprobe_ctlblk *kcb)
{
	__get_cpu_var(current_kprobe) = kcb->prev_kprobe.kp;
	kcb->kprobe_status = kcb->prev_kprobe.status;
	kcb->kprobe_old_flags = kcb->prev_kprobe.old_flags;
	kcb->kprobe_saved_flags = kcb->prev_kprobe.saved_flags;
}

static void __kprobes set_current_kprobe(struct kprobe *p, struct pt_regs *regs,
				struct kprobe_ctlblk *kcb)
{
	__get_cpu_var(current_kprobe) = p;
	kcb->kprobe_saved_flags = kcb->kprobe_old_flags
		= (regs->flags & (X86_EFLAGS_TF | X86_EFLAGS_IF));
	if (is_IF_modifier(p->ainsn.insn))
		kcb->kprobe_saved_flags &= ~X86_EFLAGS_IF;
}

static void __kprobes clear_btf(void)
{
	if (test_thread_flag(TIF_DEBUGCTLMSR))
		update_debugctlmsr(0);
}

static void __kprobes restore_btf(void)
{
	if (test_thread_flag(TIF_DEBUGCTLMSR))
		update_debugctlmsr(current->thread.debugctlmsr);
}

static void __kprobes prepare_singlestep(struct kprobe *p, struct pt_regs *regs)
{
	clear_btf();
	regs->flags |= X86_EFLAGS_TF;
	regs->flags &= ~X86_EFLAGS_IF;
	/* single step inline if the instruction is an int3 */
	if (p->opcode == BREAKPOINT_INSTRUCTION)
		regs->ip = (unsigned long)p->addr;
	else
		regs->ip = (unsigned long)p->ainsn.insn;
}

void __kprobes arch_prepare_kretprobe(struct kretprobe_instance *ri,
				      struct pt_regs *regs)
{
	unsigned long *sara = stack_addr(regs);

	ri->ret_addr = (kprobe_opcode_t *) *sara;

	/* Replace the return addr with trampoline addr */
	*sara = (unsigned long) &kretprobe_trampoline;
}
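
/*
 * A minimal kretprobe usage sketch (adapted from the original x86_64
 * return-probes changelog; the handler name is illustrative):
 *
 *	static int sys_mkdir_exit(struct kretprobe_instance *ri,
 *				  struct pt_regs *regs)
 *	{
 *		printk("sys_mkdir exited\n");
 *		return 0;
 *	}
 *
 *	static struct kretprobe return_probe = {
 *		.handler = sys_mkdir_exit,
 *	};
 *
 *	return_probe.kp.addr =
 *		(kprobe_opcode_t *) kallsyms_lookup_name("sys_mkdir");
 *	if (register_kretprobe(&return_probe))
 *		printk(KERN_DEBUG "Unable to register return probe!\n");
 *
 * register_kretprobe() plants a kprobe at the function entry whose
 * pre-handler calls arch_prepare_kretprobe() above to swap the return
 * address for kretprobe_trampoline.
 */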

static void __kprobes setup_singlestep(struct kprobe *p, struct pt_regs *regs,
				       struct kprobe_ctlblk *kcb)
{
#if !defined(CONFIG_PREEMPT) || defined(CONFIG_FREEZER)
	if (p->ainsn.boostable == 1 && !p->post_handler) {
		/* Boost up -- we can execute copied instructions directly */
		reset_current_kprobe();
		regs->ip = (unsigned long)p->ainsn.insn;
		preempt_enable_no_resched();
		return;
	}
#endif
	prepare_singlestep(p, regs);
	kcb->kprobe_status = KPROBE_HIT_SS;
}

/*
 * We have reentered the kprobe_handler(), since another probe was hit while
 * within the handler. We save the original kprobes variables and just single
 * step on the instruction of the new probe without calling any user handlers.
 */
static int __kprobes reenter_kprobe(struct kprobe *p, struct pt_regs *regs,
				    struct kprobe_ctlblk *kcb)
{
	switch (kcb->kprobe_status) {
	case KPROBE_HIT_SSDONE:
#ifdef CONFIG_X86_64
		/* TODO: Provide re-entrancy from post_kprobes_handler() and
		 * avoid exception stack corruption while single-stepping on
		 * the instruction of the new probe.
		 */
		arch_disarm_kprobe(p);
		regs->ip = (unsigned long)p->addr;
		reset_current_kprobe();
		preempt_enable_no_resched();
		break;
#endif
	case KPROBE_HIT_ACTIVE:
		save_previous_kprobe(kcb);
		set_current_kprobe(p, regs, kcb);
		kprobes_inc_nmissed_count(p);
		prepare_singlestep(p, regs);
		kcb->kprobe_status = KPROBE_REENTER;
		break;
	case KPROBE_HIT_SS:
		if (p == kprobe_running()) {
			regs->flags &= ~X86_EFLAGS_TF;
			regs->flags |= kcb->kprobe_saved_flags;
			return 0;
		} else {
			/* A probe has been hit in the codepath leading up
			 * to, or just after, single-stepping of a probed
			 * instruction. This entire codepath should strictly
			 * reside in .kprobes.text section. Raise a warning
			 * to highlight this peculiar case.
			 */
		}
	default:
		/* impossible cases */
		WARN_ON(1);
		return 0;
	}

	return 1;
}

/*
 * Interrupts are disabled on entry as trap3 is an interrupt gate and they
 * remain disabled throughout this function.
 */
static int __kprobes kprobe_handler(struct pt_regs *regs)
{
	kprobe_opcode_t *addr;
	struct kprobe *p;
	struct kprobe_ctlblk *kcb;

	addr = (kprobe_opcode_t *)(regs->ip - sizeof(kprobe_opcode_t));
	if (*addr != BREAKPOINT_INSTRUCTION) {
		/*
		 * The breakpoint instruction was removed right
		 * after we hit it.  Another cpu has removed
		 * either a probepoint or a debugger breakpoint
		 * at this address.  In either case, no further
		 * handling of this interrupt is appropriate.
		 * Back up over the (now missing) int3 and run
		 * the original instruction.
		 */
		regs->ip = (unsigned long)addr;
		return 1;
	}

	/*
	 * We don't want to be preempted for the entire
	 * duration of kprobe processing. We conditionally
	 * re-enable preemption at the end of this function,
	 * and also in reenter_kprobe() and setup_singlestep().
	 */
	preempt_disable();

	kcb = get_kprobe_ctlblk();
	p = get_kprobe(addr);

	if (p) {
		if (kprobe_running()) {
			if (reenter_kprobe(p, regs, kcb))
				return 1;
		} else {
			set_current_kprobe(p, regs, kcb);
			kcb->kprobe_status = KPROBE_HIT_ACTIVE;

			/*
			 * If we have no pre-handler or it returned 0, we
			 * continue with normal processing.  If we have a
			 * pre-handler and it returned non-zero, it prepped
			 * for calling the break_handler below on re-entry
			 * for jprobe processing, so get out doing nothing
			 * more here.
			 */
			if (!p->pre_handler || !p->pre_handler(p, regs))
				setup_singlestep(p, regs, kcb);
			return 1;
		}
	} else if (kprobe_running()) {
		p = __get_cpu_var(current_kprobe);
		if (p->break_handler && p->break_handler(p, regs)) {
			setup_singlestep(p, regs, kcb);
			return 1;
		}
	} /* else: not a kprobe fault; let the kernel handle it */

	preempt_enable_no_resched();
	return 0;
}

/*
 * When a retprobed function returns, this code saves registers and
 * calls trampoline_handler(), which calls the kretprobe's handler.
*/

static void __used __kprobes kretprobe_trampoline_holder(void)
{
	asm volatile (
			".global kretprobe_trampoline\n"
			"kretprobe_trampoline:\n"
#ifdef CONFIG_X86_64
			/* We don't bother saving the ss register */
			"	pushq %rsp\n"
			"	pushfq\n"
			/*
			 * Skip cs, ip, orig_ax.
			 * trampoline_handler() will plug in these values
			 */
			"	subq $24, %rsp\n"
			"	pushq %rdi\n"
			"	pushq %rsi\n"
			"	pushq %rdx\n"
			"	pushq %rcx\n"
			"	pushq %rax\n"
			"	pushq %r8\n"
			"	pushq %r9\n"
			"	pushq %r10\n"
			"	pushq %r11\n"
			"	pushq %rbx\n"
			"	pushq %rbp\n"
			"	pushq %r12\n"
			"	pushq %r13\n"
			"	pushq %r14\n"
			"	pushq %r15\n"
			"	movq %rsp, %rdi\n"
			"	call trampoline_handler\n"
			/* Replace saved sp with true return address. */
			"	movq %rax, 152(%rsp)\n"
			"	popq %r15\n"
			"	popq %r14\n"
			"	popq %r13\n"
			"	popq %r12\n"
			"	popq %rbp\n"
			"	popq %rbx\n"
			"	popq %r11\n"
			"	popq %r10\n"
			"	popq %r9\n"
			"	popq %r8\n"
			"	popq %rax\n"
			"	popq %rcx\n"
			"	popq %rdx\n"
			"	popq %rsi\n"
			"	popq %rdi\n"
			/* Skip orig_ax, ip, cs */
			"	addq $24, %rsp\n"
			"	popfq\n"
#else
			"	pushf\n"
			/*
			 * Skip cs, ip, orig_ax.
			 * trampoline_handler() will plug in these values
			 */
			"	subl $12, %esp\n"
			"	pushl %fs\n"
			"	pushl %ds\n"
			"	pushl %es\n"
			"	pushl %eax\n"
			"	pushl %ebp\n"
			"	pushl %edi\n"
			"	pushl %esi\n"
			"	pushl %edx\n"
			"	pushl %ecx\n"
			"	pushl %ebx\n"
			"	movl %esp, %eax\n"
			"	call trampoline_handler\n"
			/* Move flags to cs */
			"	movl 52(%esp), %edx\n"
			"	movl %edx, 48(%esp)\n"
			/* Replace saved flags with true return address. */
			"	movl %eax, 52(%esp)\n"
			"	popl %ebx\n"
			"	popl %ecx\n"
			"	popl %edx\n"
			"	popl %esi\n"
			"	popl %edi\n"
			"	popl %ebp\n"
			"	popl %eax\n"
			/* Skip ip, orig_ax, es, ds, fs */
			"	addl $20, %esp\n"
			"	popf\n"
#endif
			"	ret\n");
}
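
/*
 * A note on the magic offsets above (derived from the push sequence,
 * not from the pt_regs definition): on x86_64 the frame is 15 saved
 * registers (15 * 8 = 120 bytes) + 24 skipped bytes (orig_ax, ip, cs)
 * + 8 for flags, so the slot written by "movq %rax, 152(%rsp)" is the
 * %rsp value pushed on entry.  On x86_32, 52(%esp) is the pushed flags
 * and 48(%esp) the skipped cs slot, which is why flags are shifted
 * down before the true return address is stored at 52(%esp).
 */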

/*
 * Called from kretprobe_trampoline
*/
static __used __kprobes void *trampoline_handler(struct pt_regs *regs)
{
	struct kretprobe_instance *ri = NULL;
	struct hlist_head *head, empty_rp;
	struct hlist_node *node, *tmp;
	unsigned long flags, orig_ret_address = 0;
	unsigned long trampoline_address = (unsigned long)&kretprobe_trampoline;
2006-10-02 13:17:35 +04:00
INIT_HLIST_HEAD(&empty_rp);
2008-07-25 12:46:04 +04:00
kretprobe_hash_lock(current, &head, &flags);
2008-01-30 15:31:21 +03:00
/* fixup registers */
2008-01-30 15:31:21 +03:00
#ifdef CONFIG_X86_64
2008-01-30 15:31:21 +03:00
regs->cs = __KERNEL_CS;
2008-01-30 15:31:21 +03:00
#else
regs->cs = __KERNEL_CS | get_kernel_rpl();
#endif
2008-01-30 15:31:21 +03:00
regs->ip = trampoline_address;
2008-01-30 15:31:21 +03:00
regs->orig_ax = ~0UL;
2005-06-28 02:17:10 +04:00
/*
 * It is possible to have multiple instances associated with a given
2008-01-30 15:31:21 +03:00
 * task either because multiple functions in the call path have
2008-10-16 21:02:37 +04:00
 * return probes installed on them, and/or more than one
2005-06-28 02:17:10 +04:00
 * return probe was registered for a target function.
 *
 * We can handle this because:
2008-01-30 15:31:21 +03:00
 *	- instances are always pushed into the head of the list
2005-06-28 02:17:10 +04:00
 *	- when multiple return probes are registered for the same
2008-01-30 15:31:21 +03:00
 *	  function, the (chronologically) first instance's ret_addr
 *	  will be the real return address, and all the rest will
 *	  point to kretprobe_trampoline.
2005-06-28 02:17:10 +04:00
 */
hlist_for_each_entry_safe(ri, node, tmp, head, hlist) {
2006-10-02 13:17:33 +04:00
if (ri->task != current)
2005-06-28 02:17:10 +04:00
/* another task is sharing our hash bucket */
2006-10-02 13:17:33 +04:00
continue;
2005-06-28 02:17:10 +04:00
2008-01-30 15:31:21 +03:00
if (ri->rp && ri->rp->handler) {
__get_cpu_var(current_kprobe) = &ri->rp->kp;
get_kprobe_ctlblk()->kprobe_status = KPROBE_HIT_ACTIVE;
2005-06-28 02:17:10 +04:00
ri->rp->handler(ri, regs);
2008-01-30 15:31:21 +03:00
__get_cpu_var(current_kprobe) = NULL;
}
2005-06-28 02:17:10 +04:00
orig_ret_address = (unsigned long)ri->ret_addr;
2006-10-02 13:17:35 +04:00
recycle_rp_inst(ri, &empty_rp);
2005-06-28 02:17:10 +04:00
if (orig_ret_address != trampoline_address)
/*
 * This is the real return address. Any other
 * instances associated with this task are for
 * other calls deeper on the call stack.
 */
break;
}
2005-06-28 02:17:10 +04:00
2007-05-08 11:28:27 +04:00
kretprobe_assert(ri, orig_ret_address, trampoline_address);
2005-06-28 02:17:10 +04:00
2008-07-25 12:46:04 +04:00
kretprobe_hash_unlock(current, &flags);
2005-06-28 02:17:10 +04:00
2006-10-02 13:17:35 +04:00
hlist_for_each_entry_safe(ri, node, tmp, &empty_rp, hlist) {
hlist_del(&ri->hlist);
kfree(ri);
}
2008-01-30 15:31:21 +03:00
return (void *)orig_ret_address;
}
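
For context, here is a minimal sketch of the kind of client this trampoline serves: a module that registers a kretprobe. The target symbol ("do_fork"), the printed message, and the module boilerplate are illustrative assumptions, not taken from this file.

/* Illustrative sketch only -- not part of this file. */
#include <linux/module.h>
#include <linux/kprobes.h>

static int ret_handler(struct kretprobe_instance *ri, struct pt_regs *regs)
{
	/* On x86, the probed function's return value is in regs->ax here. */
	printk(KERN_INFO "probed function returned %ld\n", regs->ax);
	return 0;
}

static struct kretprobe my_kretprobe = {
	.handler	= ret_handler,
	.kp.symbol_name	= "do_fork",	/* assumed example target */
	.maxactive	= 20,		/* instances for concurrent calls */
};

static int __init rp_init(void)
{
	return register_kretprobe(&my_kretprobe);
}

static void __exit rp_exit(void)
{
	unregister_kretprobe(&my_kretprobe);
}

module_init(rp_init);
module_exit(rp_exit);
MODULE_LICENSE("GPL");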
2005-04-17 02:20:36 +04:00
/*
 * Called after single-stepping. p->addr is the address of the
 * instruction whose first byte has been replaced by the "int 3"
 * instruction. To avoid the SMP problems that can occur when we
 * temporarily put back the original opcode to single-step, we
 * single-stepped a copy of the instruction. The address of this
 * copy is p->ainsn.insn.
 *
 * This function prepares to return from the post-single-step
 * interrupt. We have to fix up the stack as follows:
 *
 * 0) Except in the case of absolute or indirect jump or call instructions,
2008-01-30 15:30:56 +03:00
 * the new ip is relative to the copied instruction. We need to make
2005-04-17 02:20:36 +04:00
 * it relative to the original instruction.
 *
 * 1) If the single-stepped instruction was pushfl, then the TF and IF
2008-01-30 15:30:56 +03:00
 * flags are set in the just-pushed flags, and may need to be cleared.
2005-04-17 02:20:36 +04:00
 *
 * 2) If the single-stepped instruction was a call, the return address
 * that is atop the stack is the address following the copied instruction.
 * We need to make it the address following the original instruction.
2008-01-30 15:31:21 +03:00
 *
 * If this is the first time we've single-stepped the instruction at
 * this probepoint, and the instruction is boostable, boost it: add a
 * jump instruction after the copied instruction, that jumps to the next
 * instruction after the probepoint.
2005-04-17 02:20:36 +04:00
 */
2005-11-07 12:00:12 +03:00
static void __kprobes resume_execution(struct kprobe *p,
		struct pt_regs *regs, struct kprobe_ctlblk *kcb)
2005-04-17 02:20:36 +04:00
{
2008-01-30 15:31:21 +03:00
unsigned long *tos = stack_addr(regs);
unsigned long copy_ip = (unsigned long)p->ainsn.insn;
unsigned long orig_ip = (unsigned long)p->addr;
2005-04-17 02:20:36 +04:00
kprobe_opcode_t *insn = p->ainsn.insn;
/* skip the REX prefix */
2008-01-30 15:32:14 +03:00
if (is_REX_prefix(insn))
2005-04-17 02:20:36 +04:00
insn++;
2008-01-30 15:31:27 +03:00
regs->flags &= ~X86_EFLAGS_TF;
2005-04-17 02:20:36 +04:00
switch (*insn) {
2007-12-18 20:05:58 +03:00
case 0x9c:	/* pushfl */
2008-01-30 15:31:27 +03:00
*tos &= ~(X86_EFLAGS_TF | X86_EFLAGS_IF);
2008-01-30 15:31:21 +03:00
*tos |= kcb->kprobe_old_flags;
2005-04-17 02:20:36 +04:00
break;
2007-12-18 20:05:58 +03:00
case 0xc2:	/* iret/ret/lret */
case 0xc3:
2005-05-06 03:15:40 +04:00
case 0xca:
2007-12-18 20:05:58 +03:00
case 0xcb:
case 0xcf:
case 0xea:	/* jmp absolute -- ip is correct */
/* ip is already adjusted, no more changes required */
2008-01-30 15:31:21 +03:00
p->ainsn.boostable = 1;
2007-12-18 20:05:58 +03:00
goto no_change;
case 0xe8:	/* call relative - Fix return addr */
2008-01-30 15:31:21 +03:00
*tos = orig_ip + (*tos - copy_ip);
2005-04-17 02:20:36 +04:00
break;
2008-01-30 15:31:43 +03:00
#ifdef CONFIG_X86_32
2008-01-30 15:31:21 +03:00
case 0x9a:	/* call absolute -- same as call absolute, indirect */
*tos = orig_ip + (*tos - copy_ip);
goto no_change;
#endif
2005-04-17 02:20:36 +04:00
case 0xff:
2006-05-21 02:00:21 +04:00
if ((insn[1] & 0x30) == 0x10) {
2008-01-30 15:31:21 +03:00
/*
 * call absolute, indirect
 * Fix return addr; ip is correct.
 * But this is not boostable
 */
*tos = orig_ip + (*tos - copy_ip);
2007-12-18 20:05:58 +03:00
goto no_change;
2008-01-30 15:31:21 +03:00
} else if (((insn[1] & 0x31) == 0x20) ||
	   ((insn[1] & 0x31) == 0x21)) {
/*
 * jmp near and far, absolute indirect
 * ip is correct. And this is boostable
 */
2008-01-30 15:31:21 +03:00
p->ainsn.boostable = 1;
2007-12-18 20:05:58 +03:00
goto no_change;
2005-04-17 02:20:36 +04:00
}
default:
break;
}
2008-01-30 15:31:21 +03:00
if (p->ainsn.boostable == 0) {
2008-01-30 15:31:21 +03:00
if ((regs->ip > copy_ip) &&
    (regs->ip - copy_ip) + 5 < MAX_INSN_SIZE) {
2008-01-30 15:31:21 +03:00
/*
 * This instruction can be executed directly if it
 * jumps back to the correct address.
 */
set_jmp_op((void *)regs->ip,
2008-01-30 15:31:21 +03:00
	   (void *)orig_ip + (regs->ip - copy_ip));
2008-01-30 15:31:21 +03:00
p->ainsn.boostable = 1;
} else {
p->ainsn.boostable = -1;
}
}
2008-01-30 15:31:21 +03:00
regs->ip += orig_ip - copy_ip;
2008-01-30 15:30:56 +03:00
2007-12-18 20:05:58 +03:00
no_change:
2008-01-30 15:30:54 +03:00
restore_btf();
2005-04-17 02:20:36 +04:00
}
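
The two fixups above are plain pointer arithmetic. A user-space sketch with made-up addresses (0x1000 and 0x9000 are illustrative assumptions) shows the effect:

/* Worked example of the ip and return-address relocation, user space. */
#include <stdio.h>

int main(void)
{
	unsigned long orig_ip = 0x1000;	/* assumed original probe address */
	unsigned long copy_ip = 0x9000;	/* assumed single-step copy address */

	/* Generic case: a 2-byte copy was stepped, ip sits just past it. */
	unsigned long ip = copy_ip + 2;
	ip += orig_ip - copy_ip;		/* -> 0x1002, back in original text */

	/* Call case: the copy pushed "copy_ip + 5" as the return address. */
	unsigned long tos = copy_ip + 5;
	tos = orig_ip + (tos - copy_ip);	/* -> 0x1005, after the original call */

	printf("ip=%#lx tos=%#lx\n", ip, tos);
	return 0;
}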
2008-01-30 15:31:21 +03:00
/*
 * Interrupts are disabled on entry as trap1 is an interrupt gate and they
 * remain disabled throughout this function.
 */
static int __kprobes post_kprobe_handler(struct pt_regs *regs)
2005-04-17 02:20:36 +04:00
{
2005-11-07 12:00:12 +03:00
struct kprobe *cur = kprobe_running();
struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();

if (!cur)
2005-04-17 02:20:36 +04:00
return 0;
2008-03-16 11:21:21 +03:00
resume_execution(cur, regs, kcb);
regs->flags |= kcb->kprobe_saved_flags;
2005-11-07 12:00:12 +03:00
if ((kcb->kprobe_status != KPROBE_REENTER) && cur->post_handler) {
kcb->kprobe_status = KPROBE_HIT_SSDONE;
cur->post_handler(cur, regs, 0);
2005-06-23 11:09:37 +04:00
}
2005-04-17 02:20:36 +04:00
2008-01-30 15:31:21 +03:00
/* Restore the original saved kprobes variables and continue. */
2005-11-07 12:00:12 +03:00
if (kcb->kprobe_status == KPROBE_REENTER) {
restore_previous_kprobe(kcb);
2005-06-23 11:09:37 +04:00
goto out;
}
2005-11-07 12:00:12 +03:00
reset_current_kprobe();
2005-06-23 11:09:37 +04:00
out:
2005-04-17 02:20:36 +04:00
preempt_enable_no_resched();
/*
2008-01-30 15:30:56 +03:00
 * If somebody else is single-stepping across a probe point, flags
2005-04-17 02:20:36 +04:00
 * will have TF set, in which case, continue the remaining processing
 * of do_debug, as if this is not a probe hit.
 */
2008-01-30 15:31:27 +03:00
if (regs->flags & X86_EFLAGS_TF)
2005-04-17 02:20:36 +04:00
return 0;
return 1;
}
2005-09-07 02:19:28 +04:00
int __kprobes kprobe_fault_handler(struct pt_regs *regs, int trapnr)
2005-04-17 02:20:36 +04:00
{
2005-11-07 12:00:12 +03:00
struct kprobe *cur = kprobe_running();
struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
2008-01-30 15:31:21 +03:00
switch (kcb->kprobe_status) {
2006-03-26 13:38:23 +04:00
case KPROBE_HIT_SS:
case KPROBE_REENTER:
/*
 * We are here because the instruction being single
 * stepped caused a page fault. We reset the current
2008-01-30 15:30:56 +03:00
 * kprobe and the ip points back to the probe address
2006-03-26 13:38:23 +04:00
 * and allow the page fault handler to continue as a
 * normal page fault.
 */
2008-01-30 15:30:56 +03:00
regs->ip = (unsigned long)cur->addr;
2008-01-30 15:31:21 +03:00
regs->flags |= kcb->kprobe_old_flags;
2006-03-26 13:38:23 +04:00
if (kcb->kprobe_status == KPROBE_REENTER)
restore_previous_kprobe(kcb);
else
reset_current_kprobe();
2005-04-17 02:20:36 +04:00
preempt_enable_no_resched();
2006-03-26 13:38:23 +04:00
break;
case KPROBE_HIT_ACTIVE:
case KPROBE_HIT_SSDONE:
/*
 * We increment the nmissed count for accounting;
2008-01-30 15:31:21 +03:00
 * we can also use the npre/npostfault counts for accounting
2006-03-26 13:38:23 +04:00
 * these specific fault cases.
 */
kprobes_inc_nmissed_count(cur);
/*
 * We come here because instructions in the pre/post
 * handler caused the page fault; this could happen
 * if the handler tries to access user space via
 * copy_from_user(), get_user() etc. Let the
 * user-specified handler try to fix it first.
 */
if (cur->fault_handler && cur->fault_handler(cur, regs, trapnr))
return 1;
/*
 * In case the user-specified fault handler returned
 * zero, try to fix up.
 */
2008-01-30 15:31:21 +03:00
if (fixup_exception(regs))
return 1;
2008-01-30 15:31:41 +03:00
2006-03-26 13:38:23 +04:00
/*
2008-01-30 15:31:21 +03:00
 * The fixup routine could not handle it;
2006-03-26 13:38:23 +04:00
 * let do_page_fault() fix it.
 */
break;
default:
break;
2005-04-17 02:20:36 +04:00
}
return 0;
}
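
As a hedged illustration of the handler-fault path above, here is a sketch of a kprobe that supplies a fault_handler. The target symbol, the use of regs->di as a user pointer, and all names are assumptions chosen for illustration, not taken from this file.

/* Illustrative sketch only -- not part of this file. */
#include <linux/module.h>
#include <linux/kprobes.h>
#include <linux/uaccess.h>

static int my_pre(struct kprobe *p, struct pt_regs *regs)
{
	char buf[8];

	/* Assume the first argument (regs->di) is a user pointer; may fault. */
	if (copy_from_user(buf, (void __user *)regs->di, sizeof(buf)))
		return 0;	/* the fault was fixed up; carry on */
	return 0;
}

static int my_fault(struct kprobe *p, struct pt_regs *regs, int trapnr)
{
	/* Returning 0 defers to fixup_exception()/do_page_fault() above. */
	return 0;
}

static struct kprobe my_kp = {
	.symbol_name	= "do_fork",	/* assumed example target */
	.pre_handler	= my_pre,
	.fault_handler	= my_fault,
};

Registration then goes through register_kprobe(&my_kp) as in any kprobes module.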
/*
 * Wrapper routine for handling exceptions.
 */
2005-09-07 02:19:28 +04:00
int __kprobes kprobe_exceptions_notify(struct notifier_block *self,
				       unsigned long val, void *data)
2005-04-17 02:20:36 +04:00
{
2008-01-30 15:33:23 +03:00
struct die_args *args = data;
2005-11-07 12:00:07 +03:00
int ret = NOTIFY_DONE;
2008-01-30 15:31:21 +03:00
if (args->regs && user_mode_vm(args->regs))
2006-03-26 13:38:21 +04:00
return ret;
2005-04-17 02:20:36 +04:00
switch (val) {
case DIE_INT3:
if (kprobe_handler(args->regs))
2005-11-07 12:00:07 +03:00
ret = NOTIFY_STOP;
2005-04-17 02:20:36 +04:00
break;
case DIE_DEBUG:
if (post_kprobe_handler(args->regs))
2005-11-07 12:00:07 +03:00
ret = NOTIFY_STOP;
2005-04-17 02:20:36 +04:00
break;
case DIE_GPF:
/*
 * To be potentially processing a kprobe fault and to
 * trust the result from kprobe_running(), we have to
 * be non-preemptible.
 */
if (!preemptible() && kprobe_running() &&
2005-04-17 02:20:36 +04:00
    kprobe_fault_handler(args->regs, args->trapnr))
2005-11-07 12:00:07 +03:00
ret = NOTIFY_STOP;
2005-04-17 02:20:36 +04:00
break;
default:
break;
}
2005-11-07 12:00:07 +03:00
return ret;
2005-04-17 02:20:36 +04:00
}
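
For reference, the generic kprobes layer is what hooks this wrapper into the die notifier chain. A sketch of that hookup follows; it reproduces the shape of kernel/kprobes.c of this vintage from memory and should be treated as an assumption, not a quotation.

/* Sketch of the generic-layer hookup (lives in kernel/kprobes.c, not here). */
static struct notifier_block kprobe_exceptions_nb = {
	.notifier_call	= kprobe_exceptions_notify,
	.priority	= 0x7fffffff	/* we need to be notified first */
};

static int __init init_kprobes(void)
{
	/* ... kretprobe hash-table setup and arch_init_kprobes() elided ... */
	return register_die_notifier(&kprobe_exceptions_nb);
}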
2005-09-07 02:19:28 +04:00
int __kprobes setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs)
2005-04-17 02:20:36 +04:00
{
struct jprobe *jp = container_of(p, struct jprobe, kp);
unsigned long addr;
2005-11-07 12:00:12 +03:00
struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
2005-04-17 02:20:36 +04:00
2005-11-07 12:00:12 +03:00
kcb->jprobe_saved_regs = *regs;
2008-01-30 15:31:21 +03:00
kcb->jprobe_saved_sp = stack_addr(regs);
addr = (unsigned long)(kcb->jprobe_saved_sp);
2005-04-17 02:20:36 +04:00
/*
 * As Linus pointed out, gcc assumes that the callee
 * owns the argument space and could overwrite it, e.g.
 * tailcall optimization. So, to be absolutely safe
 * we also save and restore enough stack bytes to cover
 * the argument area.
 */
2005-11-07 12:00:12 +03:00
memcpy(kcb->jprobes_stack, (kprobe_opcode_t *)addr,
2008-01-30 15:31:21 +03:00
       MIN_STACK_SIZE(addr));
2008-01-30 15:31:27 +03:00
regs->flags &= ~X86_EFLAGS_IF;
2007-10-12 00:25:25 +04:00
trace_hardirqs_off();
2008-01-30 15:30:56 +03:00
regs->ip = (unsigned long)(jp->entry);
2005-04-17 02:20:36 +04:00
return 1;
}
2005-09-07 02:19:28 +04:00
void __kprobes jprobe_return(void)
2005-04-17 02:20:36 +04:00
{
2005-11-07 12:00:12 +03:00
struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
2008-01-30 15:31:21 +03:00
asm volatile (
#ifdef CONFIG_X86_64
		"	xchg	%%rbx,%%rsp	\n"
#else
		"	xchgl	%%ebx,%%esp	\n"
#endif
		"	int3			\n"
		"	.globl jprobe_return_end\n"
		"	jprobe_return_end:	\n"
		"	nop			\n" : : "b"
		(kcb->jprobe_saved_sp) : "memory");
2005-04-17 02:20:36 +04:00
}
2005-09-07 02:19:28 +04:00
int __kprobes longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
2005-04-17 02:20:36 +04:00
{
2005-11-07 12:00:12 +03:00
struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
2008-01-30 15:30:56 +03:00
u8 *addr = (u8 *)(regs->ip - 1);
2005-04-17 02:20:36 +04:00
struct jprobe *jp = container_of(p, struct jprobe, kp);
2008-01-30 15:31:21 +03:00
if ((addr > (u8 *)jprobe_return) &&
    (addr < (u8 *)jprobe_return_end)) {
2008-01-30 15:31:21 +03:00
if (stack_addr(regs) != kcb->jprobe_saved_sp) {
2007-12-18 20:05:58 +03:00
struct pt_regs *saved_regs = &kcb->jprobe_saved_regs;
2008-01-30 15:31:21 +03:00
printk(KERN_ERR
       "current sp %p does not match saved sp %p\n",
2008-01-30 15:31:21 +03:00
       stack_addr(regs), kcb->jprobe_saved_sp);
2008-01-30 15:31:21 +03:00
printk(KERN_ERR "Saved registers for jprobe %p\n", jp);
2005-04-17 02:20:36 +04:00
show_registers(saved_regs);
2008-01-30 15:31:21 +03:00
printk(KERN_ERR "Current registers\n");
2005-04-17 02:20:36 +04:00
show_registers(regs);
BUG();
}
2005-11-07 12:00:12 +03:00
*regs = kcb->jprobe_saved_regs;
2008-01-30 15:31:21 +03:00
memcpy((kprobe_opcode_t *)(kcb->jprobe_saved_sp),
       kcb->jprobes_stack,
       MIN_STACK_SIZE(kcb->jprobe_saved_sp));
2005-11-07 12:00:14 +03:00
preempt_enable_no_resched();
2005-04-17 02:20:36 +04:00
return 1;
}
return 0;
}
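
Putting setjmp_pre_handler(), jprobe_return() and longjmp_break_handler() together, here is a minimal sketch of a jprobe client. The do_sys_open target and its signature are assumptions matching kernels of this vintage; all other names are illustrative.

/* Illustrative sketch only -- not part of this file. */
#include <linux/module.h>
#include <linux/kprobes.h>

/* The entry must mirror the probed function's signature exactly. */
static long jdo_sys_open(int dfd, const char __user *filename,
			 int flags, int mode)
{
	printk(KERN_INFO "jprobe: open() flags=0x%x mode=0x%x\n", flags, mode);
	jprobe_return();	/* mandatory; never return normally */
	return 0;		/* not reached */
}

static struct jprobe my_jprobe = {
	.entry		= (void *)jdo_sys_open,
	.kp.symbol_name	= "do_sys_open",	/* assumed example target */
};

static int __init jp_init(void)
{
	return register_jprobe(&my_jprobe);
}

static void __exit jp_exit(void)
{
	unregister_jprobe(&my_jprobe);
}

module_init(jp_init);
module_exit(jp_exit);
MODULE_LICENSE("GPL");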
2005-06-28 02:17:10 +04:00
2005-07-06 05:54:50 +04:00
int __init arch_init_kprobes(void)
2005-06-28 02:17:10 +04:00
{
2008-01-30 15:31:21 +03:00
return 0;
2005-06-28 02:17:10 +04:00
}
2007-05-08 11:34:16 +04:00
int __kprobes arch_trampoline_kprobe(struct kprobe *p)
{
return 0;
}