// SPDX-License-Identifier: GPL-2.0-or-later
/*
 *  Kernel Probes (KProbes)
 *
 * Copyright (C) IBM Corporation, 2002, 2004
 *
 * 2002-Oct	Created by Vamsi Krishna S <vamsi_krishna@in.ibm.com> Kernel
 *		Probes initial implementation (includes contributions from
 *		Rusty Russell).
 * 2004-July	Suparna Bhattacharya <suparna@in.ibm.com> added jumper probes
 *		interface to access function arguments.
 * 2004-Oct	Jim Keniston <jkenisto@us.ibm.com> and Prasanna S Panchamukhi
 *		<prasanna@in.ibm.com> adapted for x86_64 from i386.
 * 2005-Mar	Roland McGrath <roland@redhat.com>
 *		Fixed to handle %rip-relative addressing mode correctly.
 * 2005-May	Hien Nguyen <hien@us.ibm.com>, Jim Keniston
 *		<jkenisto@us.ibm.com> and Prasanna S Panchamukhi
 *		<prasanna@in.ibm.com> added function-return probes.
 * 2005-May	Rusty Lynch <rusty.lynch@intel.com>
 *		Added function return probes functionality
 * 2006-Feb	Masami Hiramatsu <hiramatu@sdl.hitachi.co.jp> added
 *		kprobe-booster and kretprobe-booster for i386.
 * 2007-Dec	Masami Hiramatsu <mhiramat@redhat.com> added kprobe-booster
 *		and kretprobe-booster for x86-64
 * 2007-Dec	Masami Hiramatsu <mhiramat@redhat.com>, Arjan van de Ven
 *		<arjan@infradead.org> and Jim Keniston <jkenisto@us.ibm.com>
 *		unified x86 kprobes code.
 */
#include <linux/kprobes.h>
#include <linux/ptrace.h>
#include <linux/string.h>
#include <linux/slab.h>
#include <linux/hardirq.h>
#include <linux/preempt.h>
#include <linux/sched/debug.h>
#include <linux/perf_event.h>
#include <linux/extable.h>
#include <linux/kdebug.h>
#include <linux/kallsyms.h>
#include <linux/kgdb.h>
#include <linux/ftrace.h>
#include <linux/kasan.h>
#include <linux/moduleloader.h>
#include <linux/objtool.h>
#include <linux/vmalloc.h>
#include <linux/pgtable.h>
#include <linux/set_memory.h>

#include <asm/text-patching.h>
#include <asm/cacheflush.h>
#include <asm/desc.h>
#include <linux/uaccess.h>
#include <asm/alternative.h>
#include <asm/insn.h>
#include <asm/debugreg.h>
#include <asm/ibt.h>

#include "common.h"

DEFINE_PER_CPU(struct kprobe *, current_kprobe) = NULL;
DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);

#define W(row, b0, b1, b2, b3, b4, b5, b6, b7, b8, b9, ba, bb, bc, bd, be, bf)\
	(((b0##UL << 0x0)|(b1##UL << 0x1)|(b2##UL << 0x2)|(b3##UL << 0x3) |   \
	  (b4##UL << 0x4)|(b5##UL << 0x5)|(b6##UL << 0x6)|(b7##UL << 0x7) |   \
	  (b8##UL << 0x8)|(b9##UL << 0x9)|(ba##UL << 0xa)|(bb##UL << 0xb) |   \
	  (bc##UL << 0xc)|(bd##UL << 0xd)|(be##UL << 0xe)|(bf##UL << 0xf))    \
	 << (row % 32))
/*
 * Undefined/reserved opcodes, conditional jump, Opcode Extension
 * Groups, and some special opcodes can not boost.
 * This is non-const and volatile to keep gcc from statically
 * optimizing it out, as variable_test_bit makes gcc think only
 * *(unsigned long*) is used.
 */
static volatile u32 twobyte_is_boostable[256 / 32] = {
	/*      0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f          */
	/*      ----------------------------------------------          */
	W(0x00, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0) | /* 00 */
	W(0x10, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1) , /* 10 */
	W(0x20, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) | /* 20 */
	W(0x30, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) , /* 30 */
	W(0x40, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1) | /* 40 */
	W(0x50, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) , /* 50 */
	W(0x60, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1) | /* 60 */
	W(0x70, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1) , /* 70 */
	W(0x80, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) | /* 80 */
	W(0x90, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1) , /* 90 */
	W(0xa0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1) | /* a0 */
	W(0xb0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1) , /* b0 */
	W(0xc0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1) | /* c0 */
	W(0xd0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1) , /* d0 */
	W(0xe0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1) | /* e0 */
	W(0xf0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0)   /* f0 */
	/*      -----------------------------------------------         */
	/*      0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f          */
};
#undef W
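
/*
 * Example (illustrative only, not part of the original source): can_boost()
 * below consults this bitmap for two-byte opcodes. For CMOVE (0x0f 0x44),
 * the second opcode byte 0x44 falls in the all-ones 0x40 row above, so
 * test_bit(0x44, (unsigned long *)twobyte_is_boostable) returns non-zero
 * and the instruction is treated as boostable.
 */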

struct kretprobe_blackpoint kretprobe_blacklist[] = {
	{"__switch_to", }, /* This function switches only current task, but
			      doesn't switch kernel stack.*/
	{NULL, NULL}	/* Terminator */
};

const int kretprobe_blacklist_size = ARRAY_SIZE(kretprobe_blacklist);

static nokprobe_inline void
__synthesize_relative_insn(void *dest, void *from, void *to, u8 op)
{
	struct __arch_relative_insn {
		u8 op;
		s32 raddr;
	} __packed *insn;

	insn = (struct __arch_relative_insn *)dest;
	insn->raddr = (s32)((long)(to) - ((long)(from) + 5));
	insn->op = op;
}

/* Insert a jump instruction at address 'from', which jumps to address 'to'. */
void synthesize_reljump(void *dest, void *from, void *to)
{
	__synthesize_relative_insn(dest, from, to, JMP32_INSN_OPCODE);
}
NOKPROBE_SYMBOL(synthesize_reljump);

/* Insert a call instruction at address 'from', which calls address 'to'. */
void synthesize_relcall(void *dest, void *from, void *to)
{
	__synthesize_relative_insn(dest, from, to, CALL_INSN_OPCODE);
}
NOKPROBE_SYMBOL(synthesize_relcall);
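
/*
 * Worked example (illustrative only, assumed addresses): a reljump
 * synthesized with from = 0xffffffff81000100 and to = 0xffffffff81000200
 * stores op = 0xe9 (JMP32_INSN_OPCODE) and
 * raddr = 0x200 - (0x100 + 5) = 0xfb, i.e. the 32-bit displacement is
 * taken relative to the end of the five-byte instruction, matching
 * __synthesize_relative_insn() above.
 */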

/*
 * Returns non-zero if INSN is boostable.
 * RIP relative instructions are adjusted at copying time in 64 bits mode.
 */
int can_boost(struct insn *insn, void *addr)
{
	kprobe_opcode_t opcode;
	insn_byte_t prefix;
	int i;

	if (search_exception_tables((unsigned long)addr))
		return 0;	/* Page fault may occur on this address. */

	/* 2nd-byte opcode */
	if (insn->opcode.nbytes == 2)
		return test_bit(insn->opcode.bytes[1],
				(unsigned long *)twobyte_is_boostable);

	if (insn->opcode.nbytes != 1)
		return 0;

	for_each_insn_prefix(insn, i, prefix) {
		insn_attr_t attr;

		attr = inat_get_opcode_attribute(prefix);
		/* Can't boost Address-size override prefix and CS override prefix */
		if (prefix == 0x2e || inat_is_address_size_prefix(attr))
			return 0;
	}

	opcode = insn->opcode.bytes[0];

	switch (opcode) {
	case 0x62:		/* bound */
	case 0x70 ... 0x7f:	/* Conditional jumps */
	case 0x9a:		/* Call far */
	case 0xc0 ... 0xc1:	/* Grp2 */
	case 0xcc ... 0xce:	/* software exceptions */
	case 0xd0 ... 0xd3:	/* Grp2 */
	case 0xd6:		/* (UD) */
	case 0xd8 ... 0xdf:	/* ESC */
	case 0xe0 ... 0xe3:	/* LOOP*, JCXZ */
	case 0xe8 ... 0xe9:	/* near Call, JMP */
	case 0xeb:		/* Short JMP */
	case 0xf0 ... 0xf4:	/* LOCK/REP, HLT */
	case 0xf6 ... 0xf7:	/* Grp3 */
	case 0xfe:		/* Grp4 */
		/* ... are not boostable */
		return 0;
	case 0xff:		/* Grp5 */
		/* Only indirect jmp is boostable */
		return X86_MODRM_REG(insn->modrm.bytes[0]) == 4;
	default:
		return 1;
	}
}
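
/*
 * Illustrative note (not in the original source): for the Grp5 case above,
 * "jmp *%rax" encodes as 0xff 0xe0; the ModRM reg field is
 * (0xe0 >> 3) & 7 == 4, so X86_MODRM_REG() selects the indirect-jump form
 * and can_boost() returns 1, while the other Grp5 forms (inc/dec/call/push)
 * are rejected.
 */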

static unsigned long
__recover_probed_insn(kprobe_opcode_t *buf, unsigned long addr)
{
	struct kprobe *kp;
	bool faddr;

	kp = get_kprobe((void *)addr);
	faddr = ftrace_location(addr) == addr;

	/*
	 * Use the current code if it is not modified by Kprobe
	 * and it cannot be modified by ftrace.
	 */
	if (!kp && !faddr)
		return addr;

	/*
	 * Basically, kp->ainsn.insn has an original instruction.
	 * However, a RIP-relative instruction can not be single-stepped
	 * at a different place, so __copy_instruction() tweaks the
	 * displacement of that instruction. In that case, we can't recover
	 * the instruction from kp->ainsn.insn.
	 *
	 * On the other hand, in the case of a normal kprobe, kp->opcode has
	 * a copy of the first byte of the probed instruction, which is
	 * overwritten by int3. Since the instruction at kp->addr is not
	 * modified by kprobes except for the first byte, we can recover the
	 * original instruction from it and kp->opcode.
	 *
	 * In the case of kprobes using ftrace, we do not have a copy of
	 * the original instruction. In fact, the ftrace location might
	 * be modified at any time and could even be in an inconsistent
	 * state. Fortunately, we know that the original code is the ideal
	 * 5-byte long NOP.
	 */
	if (copy_from_kernel_nofault(buf, (void *)addr,
		MAX_INSN_SIZE * sizeof(kprobe_opcode_t)))
		return 0UL;

	if (faddr)
		memcpy(buf, x86_nops[5], 5);
	else
		buf[0] = kp->opcode;
	return (unsigned long)buf;
}
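
/*
 * Recovery example (illustrative, assumed values): if a kprobe sits on an
 * instruction that starts with "push %rbp" (0x55), the live text now begins
 * with the int3 byte 0xcc while kp->opcode holds the saved 0x55, so the copy
 * in buf gets its first byte restored from kp->opcode. For a probe on an
 * ftrace location, the whole 5-byte site is rewritten with the ideal NOP
 * from x86_nops[5] instead.
 */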

/*
 * Recover the probed instruction at addr for further analysis.
 * The caller must lock kprobes by kprobe_mutex, or disable preemption,
 * to prevent the referenced kprobes from being released.
 * Returns zero if the instruction can not be recovered (or access failed).
 */
unsigned long recover_probed_instruction(kprobe_opcode_t *buf, unsigned long addr)
{
	unsigned long __addr;

	__addr = __recover_optprobed_insn(buf, addr);
	if (__addr != addr)
		return __addr;

	return __recover_probed_insn(buf, addr);
}
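
/*
 * Usage note (illustrative): callers such as can_probe() below pass a local
 * MAX_INSN_SIZE buffer and then decode whatever address comes back, either
 * the original addr when the text is unmodified, or the address of the
 * buffer holding the recovered copy, with insn_decode_kernel().
 */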

/* Check if paddr is at an instruction boundary */
static int can_probe(unsigned long paddr)
{
	unsigned long addr, __addr, offset = 0;
	struct insn insn;
	kprobe_opcode_t buf[MAX_INSN_SIZE];

	if (!kallsyms_lookup_size_offset(paddr, NULL, &offset))
		return 0;

	/* Decode instructions */
	addr = paddr - offset;
	while (addr < paddr) {
		int ret;

		/*
		 * Check if the instruction has been modified by another
		 * kprobe, in which case we replace the breakpoint by the
		 * original instruction in our buffer.
		 * Also, jump optimization will change the breakpoint to
		 * relative-jump. Since the relative-jump itself is
		 * normally used, we just go through if there is no kprobe.
		 */
		__addr = recover_probed_instruction(buf, addr);
		if (!__addr)
			return 0;

		ret = insn_decode_kernel(&insn, (void *)__addr);
		if (ret < 0)
			return 0;

#ifdef CONFIG_KGDB
		/*
		 * If there is a dynamically installed kgdb sw breakpoint,
		 * this function should not be probed.
*/
2022-12-19 17:35:10 +03:00
if (insn.opcode.bytes[0] == INT3_INSN_OPCODE &&
    kgdb_has_hit_break(addr))
2012-03-05 17:32:09 +04:00
return 0;
2022-12-19 17:35:10 +03:00
#endif
2009-08-14 00:34:28 +04:00
addr += insn.length;
}
return (addr == paddr);
}
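For illustration only: the check above is essentially a forward walk over instruction boundaries, starting at the nearest symbol and stopping once the probe address is reached or passed. The stand-alone sketch below replaces the kernel's instruction decoder (and the recover_probed_instruction() step) with a canned table of instruction lengths matching the __secure_computing example from the changelog above; the helper name and the table are invented for the example.

#include <stdbool.h>
#include <stdio.h>

/* Walk forward one instruction at a time; 'target' is probe-able only if the
 * walk lands exactly on it (illustrative stand-in for can_probe()). */
static bool is_insn_boundary(unsigned long long func, unsigned long long target,
			     const unsigned int *len, unsigned int n)
{
	unsigned long long addr = func;

	for (unsigned int i = 0; i < n && addr < target; i++)
		addr += len[i];
	return addr == target;
}

int main(void)
{
	/* push %rbp; mov %rsp,%rbp; callq; mov %gs:...,%rax; cmpl; je */
	const unsigned int len[] = { 1, 3, 5, 9, 7, 2 };
	unsigned long long func = 0xffffffff810c19d0ULL;

	printf("+0x09 on boundary: %d\n", is_insn_boundary(func, func + 0x09, len, 6));
	printf("+0x12 on boundary: %d\n", is_insn_boundary(func, func + 0x12, len, 6));
	printf("+0x0b on boundary: %d\n", is_insn_boundary(func, func + 0x0b, len, 6));
	return 0;
}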
2022-03-08 18:30:32 +03:00
/* If x86 supports IBT (ENDBR) it must be skipped. */
kprobe_opcode_t *arch_adjust_kprobe_addr(unsigned long addr, unsigned long offset,
bool *on_func_entry)
{
if (is_endbr(*(u32 *)addr)) {
*on_func_entry = !offset || offset == 4;
if (*on_func_entry)
offset = 4;
} else {
*on_func_entry = !offset;
}
return (kprobe_opcode_t *)(addr + offset);
}
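As a rough, user-space illustration of the offset handling above: on a function that starts with ENDBR64, offsets 0 and 4 are both treated as a function-entry probe and land just past the ENDBR, while any other offset is taken literally. Only the ENDBR64 encoding (f3 0f 1e fa) and the 4-byte skip mirror the real code; the helper name and scaffolding below are made up.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define ENDBR64_INSN	0xfa1e0ff3u	/* f3 0f 1e fa read as a little-endian u32 */
#define ENDBR_INSN_SIZE	4

/* Illustrative stand-in for arch_adjust_kprobe_addr()'s offset logic. */
static unsigned long adjust_probe_offset(uint32_t first_word, unsigned long offset,
					 bool *on_func_entry)
{
	if (first_word == ENDBR64_INSN) {
		*on_func_entry = !offset || offset == ENDBR_INSN_SIZE;
		if (*on_func_entry)
			offset = ENDBR_INSN_SIZE;	/* never probe the ENDBR itself */
	} else {
		*on_func_entry = !offset;
	}
	return offset;
}

int main(void)
{
	bool entry;

	printf("offset 0 -> %lu\n", adjust_probe_offset(ENDBR64_INSN, 0, &entry)); /* 4 */
	printf("offset 9 -> %lu\n", adjust_probe_offset(ENDBR64_INSN, 9, &entry)); /* 9 */
	return 0;
}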
2005-04-17 02:20:36 +04:00
/*
2017-03-29 07:58:06 +03:00
* Copy an instruction with recovering modified instruction by kprobes
* and adjust the displacement if the instruction uses the %rip-relative
2017-08-18 11:24:00 +03:00
* addressing mode. Note that since @real will be the final place of the copied
* instruction, the displacement must be adjusted by @real, not @dest.
2017-03-29 07:58:06 +03:00
* This returns the length of the copied instruction, or 0 if it has an error.
2005-04-17 02:20:36 +04:00
*/
2017-08-18 11:24:00 +03:00
int __copy_instruction(u8 *dest, u8 *src, u8 *real, struct insn *insn)
2005-04-17 02:20:36 +04:00
{
2010-02-25 16:34:46 +03:00
kprobe_opcode_t buf[MAX_INSN_SIZE];
2020-11-16 20:10:11 +03:00
unsigned long recovered_insn = recover_probed_instruction(buf, (unsigned long)src);
int ret;
2012-03-05 17:32:09 +04:00
2017-03-29 08:05:06 +03:00
if (!recovered_insn || !insn)
2015-02-20 17:07:30 +03:00
return 0;
2015-03-17 13:09:18 +03:00
2017-03-29 08:05:06 +03:00
/* This can access kernel text if given address is not recovered */
2020-06-17 10:37:53 +03:00
if (copy_from_kernel_nofault(dest, (void *)recovered_insn,
MAX_INSN_SIZE))
2012-03-05 17:32:09 +04:00
return 0;
2017-03-29 08:03:56 +03:00
2021-03-26 18:12:00 +03:00
ret = insn_decode_kernel(insn, dest);
2020-11-16 20:10:11 +03:00
if (ret < 0)
return 0;
2017-03-29 08:05:06 +03:00
2019-09-06 16:14:20 +03:00
/* We can not probe force emulate prefixed instruction */
if (insn_has_emulate_prefix(insn))
return 0;
2017-03-29 08:05:06 +03:00
/* Another subsystem puts a breakpoint, failed to recover */
2019-10-09 14:57:17 +03:00
if (insn->opcode.bytes[0] == INT3_INSN_OPCODE)
2017-03-29 08:03:56 +03:00
return 0;
2010-02-25 16:34:46 +03:00
2018-05-09 15:58:15 +03:00
/* We should not singlestep on the exception masking instructions */
if (insn_masking_exception(insn))
return 0;
2010-02-25 16:34:46 +03:00
#ifdef CONFIG_X86_64
2017-03-29 07:58:06 +03:00
/* Only x86_64 has RIP relative instructions */
2017-03-29 08:05:06 +03:00
if (insn_rip_relative(insn)) {
2009-08-14 00:34:36 +04:00
s64 newdisp;
u8 *disp;
/*
* The copied instruction uses the %rip-relative addressing
* mode. Adjust the displacement for the difference between
* the original location of this instruction and the location
* of the copy that will actually be run. The tricky bit here
* is making sure that the sign extension happens correctly in
* this calculation, since we need a signed 32-bit result to
* be sign-extended to 64 bits when it's added to the %rip
* value and yield the same 64-bit result that the sign-
* extension of the original signed 32-bit displacement would
* have given.
*/
2017-03-29 08:05:06 +03:00
newdisp = (u8 *)src + (s64)insn->displacement.value
2017-08-18 11:24:00 +03:00
- (u8 *)real;
2013-04-04 14:42:30 +04:00
if ((s64)(s32)newdisp != newdisp) {
pr_err("Kprobes error: new displacement does not fit into s32 (%llx)\n", newdisp);
return 0;
}
2017-03-29 08:05:06 +03:00
disp = (u8 *)dest + insn_offset_displacement(insn);
2009-08-14 00:34:36 +04:00
*(s32 *)disp = (s32)newdisp;
2005-04-17 02:20:36 +04:00
}
2008-01-30 15:31:21 +03:00
#endif
2017-03-29 08:05:06 +03:00
return insn->length;
2008-01-30 15:32:16 +03:00
}
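The fix-up above reduces to one line of pointer arithmetic: the original instruction at src and the copy that will run at real must resolve to the same absolute target, and because a %rip-relative target is computed from the end of the instruction, the instruction length cancels out, leaving newdisp = disp + (src - real). A self-contained sketch with invented addresses:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t src  = 0xffffffff810c19d9ULL;	/* original instruction address */
	uint64_t real = 0xffffffffa0002000ULL;	/* final home of the copy (made up) */
	int64_t  disp = 0xb840;			/* original signed 32-bit displacement */
	unsigned int len = 9;			/* instruction length */

	/* src + len + disp == real + len + newdisp, so len cancels: */
	int64_t newdisp = (int64_t)(src - real) + disp;

	printf("original target: %#llx\n", (unsigned long long)(src + len + disp));
	printf("copy's target:   %#llx\n", (unsigned long long)(real + len + newdisp));
	printf("newdisp = %lld\n", (long long)newdisp);

	/* As in the code above, the copy is only usable if newdisp fits in s32. */
	if ((int64_t)(int32_t)newdisp != newdisp)
		printf("does not fit in s32 -> reject the copy\n");
	return 0;
}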
2005-04-17 02:20:36 +04:00
x86/kprobes: Use int3 instead of debug trap for single-step
Use int3 instead of debug trap exception for single-stepping the
probed instructions. Some instructions which change the ip
registers or modify IF flags are emulated because those are not
able to be single-stepped by int3 or may allow the interrupt
while single-stepping.
This actually changes the kprobes behavior.
- kprobes can not probe the following instructions: int3, iret,
far jmp/call which get absolute address as immediate,
indirect far jmp/call, indirect near jmp/call with addressing
by memory (register-based indirect jmp/call are OK), and
vmcall/vmlaunch/vmresume/vmxoff.
- If the kprobe post_handler isn't set before registering,
it may not be called in some cases even if you set it afterwards.
(IOW, the kprobe booster is enabled at registration; the user can not
change it)
But both are rare issues; unsupported instructions will not be
used in the kernel (or rarely used), and post_handlers are
rarely used (I don't see it except for the test code).
Suggested-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/161469874601.49483.11985325887166921076.stgit@devnote2
2021-03-02 18:25:46 +03:00
/* Prepare reljump or int3 right after instruction */
static int prepare_singlestep(kprobe_opcode_t *buf, struct kprobe *p,
struct insn *insn)
2017-03-29 08:00:25 +03:00
{
2017-08-18 11:24:00 +03:00
int len = insn->length;
2021-03-02 18:25:46 +03:00
if (!IS_ENABLED(CONFIG_PREEMPTION) &&
!p->post_handler && can_boost(insn, p->addr) &&
2019-10-09 14:57:17 +03:00
MAX_INSN_SIZE - len >= JMP32_INSN_SIZE) {
2017-03-29 08:00:25 +03:00
/*
* These instructions can be executed directly if it
* jumps back to correct address.
*/
2017-08-18 11:24:00 +03:00
synthesize_reljump(buf + len, p->ainsn.insn + len,
2017-03-29 08:05:06 +03:00
p->addr + insn->length);
2019-10-09 14:57:17 +03:00
len += JMP32_INSN_SIZE;
2020-12-18 17:12:05 +03:00
p->ainsn.boostable = 1;
2017-03-29 08:00:25 +03:00
} else {
2021-03-02 18:25:46 +03:00
/* Otherwise, put an int3 for trapping singlestep */
if (MAX_INSN_SIZE - len < INT3_INSN_SIZE)
return -ENOSPC;
buf[len] = INT3_INSN_OPCODE;
len += INT3_INSN_SIZE;
2017-03-29 08:00:25 +03:00
}
2017-08-18 11:24:00 +03:00
return len;
}
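So the out-of-line slot ends up in one of two shapes: the copied instruction followed by a 5-byte JMP rel32 back to the next original instruction (the boosted case above), or the copied instruction followed by an int3 that traps back into the kprobes code. Below is a self-contained sketch of how such a relative jump could be encoded; the slot contents, addresses and helper name are invented, only the e9 opcode and the rel32 = target - (jump address + 5) rule are the actual encoding.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Encode "jmp rel32" at jmp_addr so that it lands on target (cf. synthesize_reljump()). */
static void emit_reljump(uint8_t *buf, uint64_t jmp_addr, uint64_t target)
{
	int32_t rel32 = (int32_t)(target - (jmp_addr + 5));	/* relative to the next insn */

	buf[0] = 0xe9;
	memcpy(buf + 1, &rel32, sizeof(rel32));
}

int main(void)
{
	uint8_t slot[16] = { 0x55 };			/* copied insn: push %rbp (1 byte) */
	unsigned int len = 1;
	uint64_t slot_addr = 0xffffffffa0003000ULL;	/* hypothetical insn slot address */
	uint64_t resume    = 0xffffffff810c19d1ULL;	/* p->addr + insn->length */

	emit_reljump(slot + len, slot_addr + len, resume);
	printf("slot bytes: %02x %02x %02x %02x %02x %02x\n",
	       slot[0], slot[1], slot[2], slot[3], slot[4], slot[5]);
	return 0;
}

In the non-boosted branch the kernel appends INT3_INSN_OPCODE (0xcc) instead, so execution traps back into the kprobes int3 handler after the copied instruction runs.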
/* Make page to RO mode when allocating it */
void *alloc_insn_page(void)
{
void *page;
page = module_alloc(PAGE_SIZE);
2019-04-26 03:11:30 +03:00
if (!page)
return NULL;
/*
* TODO: Once additional kernel code protection mechanisms are set, ensure
* that the page was not maliciously altered and it is still zeroed.
*/
2022-10-26 13:13:03 +03:00
set_memory_rox((unsigned long)page, 1);
2017-08-18 11:24:00 +03:00
return page;
2017-03-29 08:00:25 +03:00
}
2021-03-02 18:25:46 +03:00
/* Kprobe x86 instruction emulation - only regs->ip or IF flag modifiers */
static void kprobe_emulate_ifmodifiers(struct kprobe *p, struct pt_regs *regs)
{
switch (p->ainsn.opcode) {
case 0xfa: /* cli */
regs->flags &= ~(X86_EFLAGS_IF);
break;
case 0xfb: /* sti */
regs->flags |= X86_EFLAGS_IF;
break;
case 0x9c: /* pushf */
int3_emulate_push(regs, regs->flags);
break;
case 0x9d: /* popf */
regs->flags = int3_emulate_pop(regs);
break;
}
regs->ip = regs->ip - INT3_INSN_SIZE + p->ainsn.size;
}
NOKPROBE_SYMBOL(kprobe_emulate_ifmodifiers);
static void kprobe_emulate_ret(struct kprobe *p, struct pt_regs *regs)
{
int3_emulate_ret(regs);
}
NOKPROBE_SYMBOL(kprobe_emulate_ret);
static void kprobe_emulate_call(struct kprobe *p, struct pt_regs *regs)
{
unsigned long func = regs->ip - INT3_INSN_SIZE + p->ainsn.size;
func += p->ainsn.rel32;
int3_emulate_call(regs, func);
}
NOKPROBE_SYMBOL(kprobe_emulate_call);
static nokprobe_inline
void __kprobe_emulate_jmp(struct kprobe *p, struct pt_regs *regs, bool cond)
{
unsigned long ip = regs->ip - INT3_INSN_SIZE + p->ainsn.size;
if (cond)
ip += p->ainsn.rel32;
int3_emulate_jmp(regs, ip);
}
static void kprobe_emulate_jmp(struct kprobe *p, struct pt_regs *regs)
{
__kprobe_emulate_jmp(p, regs, true);
}
NOKPROBE_SYMBOL(kprobe_emulate_jmp);
static const unsigned long jcc_mask[6] = {
[0] = X86_EFLAGS_OF,
[1] = X86_EFLAGS_CF,
[2] = X86_EFLAGS_ZF,
[3] = X86_EFLAGS_CF | X86_EFLAGS_ZF,
[4] = X86_EFLAGS_SF,
[5] = X86_EFLAGS_PF,
};
static void kprobe_emulate_jcc(struct kprobe *p, struct pt_regs *regs)
{
bool invert = p->ainsn.jcc.type & 1;
bool match;
if (p->ainsn.jcc.type < 0xc) {
match = regs->flags & jcc_mask[p->ainsn.jcc.type >> 1];
} else {
match = ((regs->flags & X86_EFLAGS_SF) >> X86_EFLAGS_SF_BIT) ^
((regs->flags & X86_EFLAGS_OF) >> X86_EFLAGS_OF_BIT);
if (p->ainsn.jcc.type >= 0xe)
2022-08-14 01:59:43 +03:00
match = match || (regs->flags & X86_EFLAGS_ZF);
2021-03-02 18:25:46 +03:00
}
__kprobe_emulate_jmp(p, regs, (match && !invert) || (!match && invert));
}
NOKPROBE_SYMBOL(kprobe_emulate_jcc);
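jcc_mask[] above covers condition codes 0x0-0xb in pairs (OF, CF, ZF, CF|ZF, SF, PF, with each odd code being the negation of the even one), while codes 0xc-0xf (JL/JGE/JLE/JG) compare SF against OF and, for 0xe/0xf, also OR in ZF. A stand-alone sketch of the same decision; the names and flag constants below are local to the example, though the bit positions are the architectural ones:

#include <stdbool.h>
#include <stdio.h>

#define FL_CF (1u << 0)
#define FL_PF (1u << 2)
#define FL_ZF (1u << 6)
#define FL_SF (1u << 7)
#define FL_OF (1u << 11)

static const unsigned long cc_mask[6] = {
	FL_OF, FL_CF, FL_ZF, FL_CF | FL_ZF, FL_SF, FL_PF,
};

/* Would a Jcc with condition code 'type' (low nibble of the opcode) be taken? */
static bool jcc_taken(unsigned int type, unsigned long flags)
{
	bool invert = type & 1;		/* odd codes are the negated conditions */
	bool match;

	if (type < 0xc) {
		match = flags & cc_mask[type >> 1];
	} else {
		/* JL/JGE/JLE/JG: signed compares test SF != OF, JLE/JG also ZF */
		match = !!(flags & FL_SF) ^ !!(flags & FL_OF);
		if (type >= 0xe)
			match = match || (flags & FL_ZF);
	}
	return invert ? !match : match;
}

int main(void)
{
	printf("je  with ZF set:  %d\n", jcc_taken(0x4, FL_ZF));	/* 1 */
	printf("jne with ZF set:  %d\n", jcc_taken(0x5, FL_ZF));	/* 0 */
	printf("jle with SF only: %d\n", jcc_taken(0xe, FL_SF));	/* 1 */
	return 0;
}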
static void kprobe_emulate_loop(struct kprobe *p, struct pt_regs *regs)
{
bool match;
if (p->ainsn.loop.type != 3) { /* LOOP* */
if (p->ainsn.loop.asize == 32)
match = ((*(u32 *)&regs->cx)--) != 0;
#ifdef CONFIG_X86_64
else if (p->ainsn.loop.asize == 64)
match = ((*(u64 *)&regs->cx)--) != 0;
#endif
else
match = ((*(u16 *)&regs->cx)--) != 0;
} else { /* JCXZ */
if (p->ainsn.loop.asize == 32)
match = *(u32 *)(&regs->cx) == 0;
#ifdef CONFIG_X86_64
else if (p->ainsn.loop.asize == 64)
match = *(u64 *)(&regs->cx) == 0;
#endif
else
match = *(u16 *)(&regs->cx) == 0;
}
if (p->ainsn.loop.type == 0) /* LOOPNE */
match = match && !(regs->flags & X86_EFLAGS_ZF);
else if (p->ainsn.loop.type == 1) /* LOOPE */
match = match && (regs->flags & X86_EFLAGS_ZF);
__kprobe_emulate_jmp(p, regs, match);
}
NOKPROBE_SYMBOL(kprobe_emulate_loop);
static const int addrmode_regoffs[] = {
offsetof(struct pt_regs, ax),
offsetof(struct pt_regs, cx),
offsetof(struct pt_regs, dx),
offsetof(struct pt_regs, bx),
offsetof(struct pt_regs, sp),
offsetof(struct pt_regs, bp),
offsetof(struct pt_regs, si),
offsetof(struct pt_regs, di),
#ifdef CONFIG_X86_64
offsetof(struct pt_regs, r8),
offsetof(struct pt_regs, r9),
offsetof(struct pt_regs, r10),
offsetof(struct pt_regs, r11),
offsetof(struct pt_regs, r12),
offsetof(struct pt_regs, r13),
offsetof(struct pt_regs, r14),
offsetof(struct pt_regs, r15),
#endif
};
static void kprobe_emulate_call_indirect(struct kprobe *p, struct pt_regs *regs)
{
unsigned long offs = addrmode_regoffs[p->ainsn.indirect.reg];
int3_emulate_call(regs, regs_get_register(regs, offs));
}
NOKPROBE_SYMBOL(kprobe_emulate_call_indirect);
static void kprobe_emulate_jmp_indirect(struct kprobe *p, struct pt_regs *regs)
{
unsigned long offs = addrmode_regoffs[p->ainsn.indirect.reg];
int3_emulate_jmp(regs, regs_get_register(regs, offs));
}
NOKPROBE_SYMBOL(kprobe_emulate_jmp_indirect);
static int prepare_emulation(struct kprobe *p, struct insn *insn)
2020-12-18 17:12:05 +03:00
{
insn_byte_t opcode = insn->opcode.bytes[0];
switch (opcode) {
case 0xfa: /* cli */
case 0xfb: /* sti */
2021-03-02 18:25:46 +03:00
case 0x9c: /* pushfl */
2020-12-18 17:12:05 +03:00
case 0x9d: /* popf/popfd */
2021-03-02 18:25:46 +03:00
/*
* IF modifiers must be emulated since it will enable interrupt while
* int3 single stepping.
*/
p->ainsn.emulate_op = kprobe_emulate_ifmodifiers;
p->ainsn.opcode = opcode;
2020-12-18 17:12:05 +03:00
break;
case 0xc2: /* ret/lret */
case 0xc3:
case 0xca:
case 0xcb:
2021-03-02 18:25:46 +03:00
p->ainsn.emulate_op = kprobe_emulate_ret;
2020-12-18 17:12:05 +03:00
break;
2021-03-02 18:25:46 +03:00
case 0x9a: /* far call absolute -- segment is not supported */
case 0xea: /* far jmp absolute -- segment is not supported */
case 0xcc: /* int3 */
case 0xcf: /* iret -- in-kernel IRET is not supported */
return -EOPNOTSUPP;
2020-12-18 17:12:05 +03:00
break;
2021-03-02 18:25:46 +03:00
case 0xe8: /* near call relative */
p->ainsn.emulate_op = kprobe_emulate_call;
if (insn->immediate.nbytes == 2)
p->ainsn.rel32 = *(s16 *)&insn->immediate.value;
else
p->ainsn.rel32 = *(s32 *)&insn->immediate.value;
break;
case 0xeb: /* short jump relative */
case 0xe9: /* near jump relative */
p->ainsn.emulate_op = kprobe_emulate_jmp;
if (insn->immediate.nbytes == 1)
p->ainsn.rel32 = *(s8 *)&insn->immediate.value;
else if (insn->immediate.nbytes == 2)
p->ainsn.rel32 = *(s16 *)&insn->immediate.value;
else
p->ainsn.rel32 = *(s32 *)&insn->immediate.value;
break;
case 0x70 ... 0x7f:
/* 1 byte conditional jump */
p->ainsn.emulate_op = kprobe_emulate_jcc;
p->ainsn.jcc.type = opcode & 0xf;
p->ainsn.rel32 = *(char *)insn->immediate.bytes;
break;
case 0x0f:
opcode = insn->opcode.bytes[1];
if ((opcode & 0xf0) == 0x80) {
/* 2 bytes Conditional Jump */
p->ainsn.emulate_op = kprobe_emulate_jcc;
p->ainsn.jcc.type = opcode & 0xf;
if (insn->immediate.nbytes == 2)
p->ainsn.rel32 = *(s16 *)&insn->immediate.value;
else
p->ainsn.rel32 = *(s32 *)&insn->immediate.value;
} else if (opcode == 0x01 &&
X86_MODRM_REG(insn->modrm.bytes[0]) == 0 &&
X86_MODRM_MOD(insn->modrm.bytes[0]) == 3) {
/* VM extensions - not supported */
return -EOPNOTSUPP;
}
break;
case 0xe0: /* Loop NZ */
case 0xe1: /* Loop */
case 0xe2: /* Loop */
case 0xe3: /* J*CXZ */
p->ainsn.emulate_op = kprobe_emulate_loop;
p->ainsn.loop.type = opcode & 0x3;
p->ainsn.loop.asize = insn->addr_bytes * 8;
p->ainsn.rel32 = *(s8 *)&insn->immediate.value;
2020-12-18 17:12:05 +03:00
break;
case 0xff:
2021-03-02 18:25:24 +03:00
/*
* Since the 0xff is an extended group opcode, the instruction
* is determined by the MOD/RM byte.
*/
opcode = insn->modrm.bytes[0];
2020-12-18 17:12:05 +03:00
if ((opcode & 0x30) == 0x10) {
2021-03-02 18:25:46 +03:00
if ((opcode & 0x8) == 0x8)
return -EOPNOTSUPP; /* far call */
/* call absolute, indirect */
p->ainsn.emulate_op = kprobe_emulate_call_indirect;
2021-03-02 18:25:34 +03:00
} else if ((opcode & 0x30) == 0x20) {
2021-03-02 18:25:46 +03:00
if ((opcode & 0x8) == 0x8)
return -EOPNOTSUPP; /* far jmp */
/* jmp near absolute indirect */
p->ainsn.emulate_op = kprobe_emulate_jmp_indirect;
} else
break;
if (insn->addr_bytes != sizeof(unsigned long))
2021-05-12 20:58:31 +03:00
return -EOPNOTSUPP; /* Don't support different size */
2021-03-02 18:25:46 +03:00
if (X86_MODRM_MOD(opcode) != 3)
return -EOPNOTSUPP; /* TODO: support memory addressing */
p->ainsn.indirect.reg = X86_MODRM_RM(opcode);
#ifdef CONFIG_X86_64
if (X86_REX_B(insn->rex_prefix.value))
p->ainsn.indirect.reg += 8;
#endif
break;
default:
2020-12-18 17:12:05 +03:00
break;
}
2021-03-02 18:25:46 +03:00
p->ainsn.size = insn->length;
return 0;
2020-12-18 17:12:05 +03:00
}
2014-04-17 12:17:47 +04:00
static int arch_copy_kprobe(struct kprobe *p)
2005-04-17 02:20:36 +04:00
{
2017-03-29 08:05:06 +03:00
struct insn insn;
2017-08-18 11:24:00 +03:00
kprobe_opcode_t buf[MAX_INSN_SIZE];
2021-03-02 18:25:46 +03:00
int ret, len;
2013-06-05 07:12:16 +04:00
2012-03-05 17:32:16 +04:00
/* Copy an instruction with recovering if other optprobe modifies it.*/
2017-08-18 11:24:00 +03:00
len = __copy_instruction(buf, p->addr, p->ainsn.insn, &insn);
2017-03-29 08:00:25 +03:00
if (!len)
2013-06-05 07:12:16 +04:00
return -EINVAL;
2012-03-05 17:32:16 +04:00
2021-03-02 18:25:46 +03:00
/* Analyze the opcode and setup emulate functions */
ret = prepare_emulation(p, &insn);
if (ret < 0)
return ret;
2017-03-29 08:02:46 +03:00
2021-03-02 18:25:46 +03:00
/* Add int3 for single-step or booster jmp */
len = prepare_singlestep(buf, p, &insn);
if (len < 0)
return len;
2013-03-14 15:52:43 +04:00
2012-03-05 17:32:16 +04:00
/* Also, displacement change doesn't affect the first byte */
2017-08-18 11:24:00 +03:00
p->opcode = buf[0];
2020-05-12 15:19:12 +03:00
p->ainsn.tp_len = len;
perf_event_text_poke(p->ainsn.insn, NULL, 0, buf, len);
2017-08-18 11:24:00 +03:00
/* OK, write back the instruction(s) into ROX insn buffer */
text_poke(p->ainsn.insn, buf, len);
2013-06-05 07:12:16 +04:00
return 0;
2005-04-17 02:20:36 +04:00
}
2014-04-17 12:17:47 +04:00
int arch_prepare_kprobe(struct kprobe *p)
2008-01-30 15:31:21 +03:00
{
2017-07-21 17:45:52 +03:00
int ret;
2010-02-03 00:49:18 +03:00
if (alternatives_text_reserved(p->addr, p->addr))
return -EINVAL;
2009-08-14 00:34:28 +04:00
if (!can_probe((unsigned long)p->addr))
return -EILSEQ;
2020-12-18 17:12:05 +03:00
memset(&p->ainsn, 0, sizeof(p->ainsn));
2008-01-30 15:31:21 +03:00
/* insn: must be on special executable page on x86. */
p->ainsn.insn = get_insn_slot();
if (!p->ainsn.insn)
return -ENOMEM;
2013-06-05 07:12:16 +04:00
2017-07-21 17:45:52 +03:00
ret = arch_copy_kprobe(p);
if (ret) {
free_insn_slot(p->ainsn.insn, 0);
p->ainsn.insn = NULL;
}
return ret;
2008-01-30 15:31:21 +03:00
}
2014-04-17 12:17:47 +04:00
void arch_arm_kprobe(struct kprobe *p)
2005-04-17 02:20:36 +04:00
{
2020-05-12 15:19:12 +03:00
u8 int3 = INT3_INSN_OPCODE;
text_poke(p->addr, &int3, 1);
x86/kprobes: Fix ordering while text-patching
Kprobes does something like:
register:
arch_arm_kprobe()
text_poke(INT3)
/* guarantees nothing, INT3 will become visible at some point, maybe */
kprobe_optimizer()
/* guarantees the bytes after INT3 are unused */
synchronize_rcu_tasks();
text_poke_bp(JMP32);
/* implies IPI-sync, kprobe really is enabled */
unregister:
__disarm_kprobe()
unoptimize_kprobe()
text_poke_bp(INT3 + tail);
/* implies IPI-sync, so tail is guaranteed visible */
arch_disarm_kprobe()
text_poke(old);
/* guarantees nothing, old will maybe become visible */
synchronize_rcu()
free-stuff
Now the problem is that on register, the synchronize_rcu_tasks() is
not sufficient to guarantee that all CPUs have already observed INT3
(although in practice this is exceedingly unlikely not to have
happened) (similar to how MEMBARRIER_CMD_PRIVATE_EXPEDITED does not
imply MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE).
Worse, even if it did, we'd have to do 2 synchronize calls to provide
the guarantee we're looking for, the first to ensure INT3 is visible,
the second to guarantee nobody is then still using the instruction
bytes after INT3.
Similar on unregister; the synchronize_rcu() between
__unregister_kprobe_top() and __unregister_kprobe_bottom() does not
guarantee all CPUs are free of the INT3 (and observe the old text).
Therefore, sprinkle some IPI-sync love around. This guarantees that
all CPUs agree on the text and RCU once again provides the required
guarantee.
Tested-by: Alexei Starovoitov <ast@kernel.org>
Tested-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Paul E. McKenney <paulmck@kernel.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: https://lkml.kernel.org/r/20191111132458.162172862@infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-10-09 22:15:28 +03:00
text_poke_sync();
2020-05-12 15:19:12 +03:00
perf_event_text_poke(p->addr, &p->opcode, 1, &int3, 1);
2005-04-17 02:20:36 +04:00
}
2014-04-17 12:17:47 +04:00
void arch_disarm_kprobe(struct kprobe *p)
2005-04-17 02:20:36 +04:00
{
2020-05-12 15:19:12 +03:00
u8 int3 = INT3_INSN_OPCODE;
perf_event_text_poke(p->addr, &int3, 1, &p->opcode, 1);
2007-07-22 13:12:31 +04:00
text_poke(p->addr, &p->opcode, 1);
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-10-09 22:15:28 +03:00
text_poke_sync();
2005-06-23 11:09:25 +04:00
}
2014-04-17 12:17:47 +04:00
void arch_remove_kprobe(struct kprobe *p)
2005-06-23 11:09:25 +04:00
{
2009-01-07 01:41:50 +03:00
if (p->ainsn.insn) {
2020-05-12 15:19:12 +03:00
/* Record the perf event before freeing the slot */
perf_event_text_poke(p->ainsn.insn, p->ainsn.insn,
p->ainsn.tp_len, NULL, 0);
2017-03-29 08:01:35 +03:00
free_insn_slot(p->ainsn.insn, p->ainsn.boostable);
2009-01-07 01:41:50 +03:00
p->ainsn.insn = NULL;
}
2005-04-17 02:20:36 +04:00
}
2014-04-17 12:18:14 +04:00
static nokprobe_inline void
save_previous_kprobe(struct kprobe_ctlblk *kcb)
2005-06-23 11:09:37 +04:00
{
2005-11-07 12:00:12 +03:00
kcb->prev_kprobe.kp = kprobe_running();
kcb->prev_kprobe.status = kcb->kprobe_status;
2008-01-30 15:31:21 +03:00
kcb->prev_kprobe.old_flags = kcb->kprobe_old_flags;
kcb->prev_kprobe.saved_flags = kcb->kprobe_saved_flags;
2005-06-23 11:09:37 +04:00
}
2014-04-17 12:18:14 +04:00
static nokprobe_inline void
restore_previous_kprobe(struct kprobe_ctlblk *kcb)
2005-06-23 11:09:37 +04:00
{
2010-12-06 20:16:25 +03:00
__this_cpu_write(current_kprobe, kcb->prev_kprobe.kp);
2005-11-07 12:00:12 +03:00
kcb->kprobe_status = kcb->prev_kprobe.status;
2008-01-30 15:31:21 +03:00
kcb->kprobe_old_flags = kcb->prev_kprobe.old_flags;
kcb->kprobe_saved_flags = kcb->prev_kprobe.saved_flags;
2005-06-23 11:09:37 +04:00
}
2014-04-17 12:18:14 +04:00
static nokprobe_inline void
set_current_kprobe(struct kprobe *p, struct pt_regs *regs,
struct kprobe_ctlblk *kcb)
2005-06-23 11:09:37 +04:00
{
2010-12-06 20:16:25 +03:00
__this_cpu_write(current_kprobe, p);
2008-01-30 15:31:21 +03:00
kcb->kprobe_saved_flags = kcb->kprobe_old_flags
2021-03-02 18:25:46 +03:00
= ( regs - > flags & X86_EFLAGS_IF ) ;
2008-01-30 15:30:54 +03:00
}
2021-03-02 18:25:46 +03:00
static void kprobe_post_process ( struct kprobe * cur , struct pt_regs * regs ,
struct kprobe_ctlblk * kcb )
{
/* Restore the original saved kprobes variables and continue. */
2022-08-02 09:04:16 +03:00
if ( kcb - > kprobe_status = = KPROBE_REENTER ) {
/* This will restore both kcb and current_kprobe */
2021-03-02 18:25:46 +03:00
restore_previous_kprobe ( kcb ) ;
2022-08-02 09:04:16 +03:00
} else {
/*
* Always update the kcb status because
* reset_current_kprobe() doesn't update kcb.
*/
kcb - > kprobe_status = KPROBE_HIT_SSDONE ;
if ( cur - > post_handler )
cur - > post_handler ( cur , regs , 0 ) ;
2021-03-02 18:25:46 +03:00
reset_current_kprobe ( ) ;
2022-08-02 09:04:16 +03:00
}
2021-03-02 18:25:46 +03:00
}
NOKPROBE_SYMBOL ( kprobe_post_process ) ;
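kprobe_post_process() is where a registered post_handler finally runs (the third argument is passed as 0 here). As the changelog above notes, the post_handler should be set before registration, because a probe registered without one may take the boosted path and never reach this point. A hedged sketch of a probe that uses one; the post_demo_* names and the symbol are illustrative only.

static int post_demo_pre(struct kprobe *p, struct pt_regs *regs)
{
        return 0;       /* continue, so the post_handler gets a chance to run */
}

static void post_demo_post(struct kprobe *p, struct pt_regs *regs,
                           unsigned long flags)
{
        /* Called from kprobe_post_process() after the probed insn completed. */
        pr_info("post_handler at %px, flags=%lx\n", p->addr, flags);
}

static struct kprobe post_demo_kp = {
        .symbol_name    = "do_sys_openat2",     /* assumed example symbol */
        .pre_handler    = post_demo_pre,
        .post_handler   = post_demo_post,       /* set before register_kprobe() */
};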
2014-04-17 12:18:14 +04:00
static void setup_singlestep ( struct kprobe * p , struct pt_regs * regs ,
struct kprobe_ctlblk * kcb , int reenter )
2008-01-30 15:32:50 +03:00
{
2010-02-25 16:34:46 +03:00
if ( setup_detour_execution ( p , regs , reenter ) )
return ;
2019-07-27 00:19:42 +03:00
# if !defined(CONFIG_PREEMPTION)
2021-03-02 18:25:46 +03:00
if ( p - > ainsn . boostable ) {
2008-01-30 15:32:50 +03:00
/* Boost up -- we can execute copied instructions directly */
2010-02-25 16:34:23 +03:00
if ( ! reenter )
reset_current_kprobe ( ) ;
/*
* Reentering boosted probe doesn't reset current_kprobe,
* nor set current_kprobe, because it doesn't use single
* stepping.
*/
2008-01-30 15:32:50 +03:00
regs - > ip = ( unsigned long ) p - > ainsn . insn ;
return ;
}
# endif
2010-02-25 16:34:23 +03:00
if ( reenter ) {
save_previous_kprobe ( kcb ) ;
set_current_kprobe ( p , regs , kcb ) ;
kcb - > kprobe_status = KPROBE_REENTER ;
} else
kcb - > kprobe_status = KPROBE_HIT_SS ;
2021-03-02 18:25:46 +03:00
if ( p - > ainsn . emulate_op ) {
p - > ainsn . emulate_op ( p , regs ) ;
kprobe_post_process ( p , regs , kcb ) ;
return ;
}
/* Disable interrupt, and set ip register on trampoline */
2010-02-25 16:34:23 +03:00
regs - > flags & = ~ X86_EFLAGS_IF ;
2021-03-02 18:25:46 +03:00
regs - > ip = ( unsigned long ) p - > ainsn . insn ;
2008-01-30 15:32:50 +03:00
}
2014-04-17 12:18:14 +04:00
NOKPROBE_SYMBOL ( setup_singlestep ) ;
2008-01-30 15:32:50 +03:00
2021-03-02 18:25:46 +03:00
/*
 * Called after single-stepping. p->addr is the address of the
 * instruction whose first byte has been replaced by the "int3"
 * instruction. To avoid the SMP problems that can occur when we
 * temporarily put back the original opcode to single-step, we
 * single-stepped a copy of the instruction. The address of this
 * copy is p->ainsn.insn. We also don't use the trap flag, but "int3"
 * again right after the copied instruction.
 * Different from the trap single-step, "int3" single-step can not
 * handle the instructions which change the ip register, e.g. jmp,
 * call, conditional jmp, nor the instructions which change the IF
 * flag, because interrupts must be disabled around the single-stepping.
 * Such instructions are software emulated, while others are single-stepped
 * using "int3".
 *
 * When the 2nd "int3" is handled, regs->ip and regs->flags need to
 * be adjusted so that we can resume execution on the correct code.
 */
static void resume_singlestep ( struct kprobe * p , struct pt_regs * regs ,
struct kprobe_ctlblk * kcb )
{
unsigned long copy_ip = ( unsigned long ) p - > ainsn . insn ;
unsigned long orig_ip = ( unsigned long ) p - > addr ;
/* Restore saved interrupt flag and ip register */
regs - > flags | = kcb - > kprobe_saved_flags ;
/* Note that regs->ip points past the int3 just executed, so step back over it */
regs - > ip + = ( orig_ip - copy_ip ) - INT3_INSN_SIZE ;
}
NOKPROBE_SYMBOL ( resume_singlestep ) ;
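A worked example of the fixup above with made-up addresses (a standalone user-space sketch, not kernel code): if the probed instruction is 3 bytes long, regs->ip after the trailing INT3 in the slot points at copy_ip + 3 + 1, and the adjustment lands execution right after the original instruction.

#include <assert.h>

int main(void)
{
        unsigned long orig_ip  = 0xffffffff81000100UL;  /* probed address (assumed) */
        unsigned long copy_ip  = 0xffffffffc0002000UL;  /* insn slot copy (assumed) */
        unsigned long insn_len = 3;                     /* length of the copied insn */
        unsigned long int3_sz  = 1;                     /* INT3_INSN_SIZE */

        /* ip after the trailing int3 in the slot has executed */
        unsigned long ip = copy_ip + insn_len + int3_sz;

        ip += (orig_ip - copy_ip) - int3_sz;            /* resume_singlestep() fixup */

        assert(ip == orig_ip + insn_len);               /* back on the original text */
        return 0;
}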
2008-01-30 15:32:02 +03:00
/*
* We have reentered the kprobe_handler ( ) , since another probe was hit while
* within the handler . We save the original kprobes variables and just single
* step on the instruction of the new probe without calling any user handlers .
*/
2014-04-17 12:18:14 +04:00
static int reenter_kprobe ( struct kprobe * p , struct pt_regs * regs ,
struct kprobe_ctlblk * kcb )
2008-01-30 15:32:02 +03:00
{
2008-01-30 15:32:50 +03:00
switch ( kcb - > kprobe_status ) {
case KPROBE_HIT_SSDONE :
case KPROBE_HIT_ACTIVE :
2014-04-17 12:16:51 +04:00
case KPROBE_HIT_SS :
2008-01-30 15:33:13 +03:00
kprobes_inc_nmissed_count ( p ) ;
2010-02-25 16:34:23 +03:00
setup_singlestep ( p , regs , kcb , 1 ) ;
2008-01-30 15:32:50 +03:00
break ;
2014-04-17 12:16:51 +04:00
case KPROBE_REENTER :
2009-08-27 21:22:58 +04:00
/* A probe has been hit in the codepath leading up to, or just
* after , single - stepping of a probed instruction . This entire
* codepath should strictly reside in . kprobes . text section .
* Raise a BUG or we ' ll continue in an endless reentering loop
* and eventually a stack overflow .
*/
2018-04-28 15:37:03 +03:00
pr_err ( " Unrecoverable kprobe detected. \n " ) ;
2009-08-27 21:22:58 +04:00
dump_kprobe ( p ) ;
BUG ( ) ;
2008-01-30 15:32:50 +03:00
default :
/* impossible cases */
WARN_ON ( 1 ) ;
2008-01-30 15:33:13 +03:00
return 0 ;
2008-01-30 15:32:02 +03:00
}
2008-01-30 15:32:50 +03:00
2008-01-30 15:32:02 +03:00
return 1 ;
2008-01-30 15:32:02 +03:00
}
2014-04-17 12:18:14 +04:00
NOKPROBE_SYMBOL ( reenter_kprobe ) ;
[PATCH] x86_64 specific function return probes
The following patch adds the x86_64 architecture specific implementation
for function return probes.
Function return probes is a mechanism built on top of kprobes that allows
a caller to register a handler to be called when a given function exits.
For example, to instrument the return path of sys_mkdir:
static int sys_mkdir_exit(struct kretprobe_instance *i, struct pt_regs *regs)
{
        printk("sys_mkdir exited\n");
        return 0;
}

static struct kretprobe return_probe = {
        .handler = sys_mkdir_exit,
};

<inside setup function>
return_probe.kp.addr = (kprobe_opcode_t *) kallsyms_lookup_name("sys_mkdir");
if (register_kretprobe(&return_probe)) {
        printk(KERN_DEBUG "Unable to register return probe!\n");
        /* do error path */
}

<inside cleanup function>
unregister_kretprobe(&return_probe);
The way this works is that:
* At system initialization time, kernel/kprobes.c installs a kprobe
on a function called kretprobe_trampoline() that is implemented in
the arch/x86_64/kernel/kprobes.c (More on this later)
* When a return probe is registered using register_kretprobe(),
kernel/kprobes.c will install a kprobe on the first instruction of the
targeted function with the pre handler set to arch_prepare_kretprobe()
which is implemented in arch/x86_64/kernel/kprobes.c.
* arch_prepare_kretprobe() will prepare a kretprobe instance that stores:
- nodes for hanging this instance in an empty or free list
- a pointer to the return probe
- the original return address
- a pointer to the stack address
With all this stowed away, arch_prepare_kretprobe() then sets the return
address for the targeted function to a special trampoline function called
kretprobe_trampoline() implemented in arch/x86_64/kernel/kprobes.c
* The kprobe completes as normal, with control passing back to the target
function that executes as normal, and eventually returns to our trampoline
function.
* Since a kprobe was installed on kretprobe_trampoline() during system
initialization, control passes back to kprobes via the architecture
specific function trampoline_probe_handler() which will lookup the
instance in an hlist maintained by kernel/kprobes.c, and then call
the handler function.
* When trampoline_probe_handler() is done, the kprobes infrastructure
single steps the original instruction (in this case just a nop), and
then calls trampoline_post_handler(). trampoline_post_handler() then
looks up the instance again, puts the instance back on the free list,
and then makes a long jump back to the original return instruction.
So to recap, to instrument the exit path of a function this implementation
will cause four interruptions:
- A breakpoint at the very beginning of the function allowing us to
switch out the return address
- A single step interruption to execute the original instruction that
we replaced with the break instruction (normal kprobe flow)
- A breakpoint in the trampoline function where our instrumented function
returned to
- A single step interruption to execute the original instruction that
we replaced with the break instruction (normal kprobe flow)
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-23 11:09:23 +04:00
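For comparison with the 2005 example above, the same return probe written against the current kretprobe API could look like this hedged sketch; "do_mkdirat" is an assumed symbol name and the module boilerplate is illustrative.

#include <linux/module.h>
#include <linux/kprobes.h>
#include <linux/ptrace.h>

static int mkdir_ret_handler(struct kretprobe_instance *ri, struct pt_regs *regs)
{
        pr_info("do_mkdirat returned %ld\n", regs_return_value(regs));
        return 0;
}

static struct kretprobe mkdir_rp = {
        .kp.symbol_name = "do_mkdirat",         /* assumed symbol */
        .handler        = mkdir_ret_handler,
        .maxactive      = 16,                   /* concurrent instances to track */
};

static int __init rp_init(void)
{
        return register_kretprobe(&mkdir_rp);
}

static void __exit rp_exit(void)
{
        unregister_kretprobe(&mkdir_rp);
}

module_init(rp_init);
module_exit(rp_exit);
MODULE_LICENSE("GPL");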
2021-03-24 17:45:02 +03:00
static nokprobe_inline int kprobe_is_ss ( struct kprobe_ctlblk * kcb )
2021-03-02 18:25:46 +03:00
{
return ( kcb - > kprobe_status = = KPROBE_HIT_SS | |
kcb - > kprobe_status = = KPROBE_REENTER ) ;
}
2008-01-30 15:31:21 +03:00
/*
* Interrupts are disabled on entry as trap3 is an interrupt gate and they
tree-wide: fix assorted typos all over the place
That is "success", "unknown", "through", "performance", "[re|un]mapping"
, "access", "default", "reasonable", "[con]currently", "temperature"
, "channel", "[un]used", "application", "example","hierarchy", "therefore"
, "[over|under]flow", "contiguous", "threshold", "enough" and others.
Signed-off-by: André Goddard Rosa <andre.goddard@gmail.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
2009-11-14 18:09:05 +03:00
* remain disabled throughout this function .
2008-01-30 15:31:21 +03:00
*/
2014-04-17 12:18:14 +04:00
int kprobe_int3_handler ( struct pt_regs * regs )
2005-04-17 02:20:36 +04:00
{
2008-01-30 15:31:21 +03:00
kprobe_opcode_t * addr ;
2008-01-30 15:32:50 +03:00
struct kprobe * p ;
2005-11-07 12:00:14 +03:00
struct kprobe_ctlblk * kcb ;
2015-03-19 04:33:33 +03:00
if ( user_mode ( regs ) )
2014-07-11 21:27:01 +04:00
return 0 ;
2008-01-30 15:31:21 +03:00
addr = ( kprobe_opcode_t * ) ( regs - > ip - sizeof ( kprobe_opcode_t ) ) ;
2005-11-07 12:00:14 +03:00
/*
2018-06-19 19:16:17 +03:00
* We don't want to be preempted for the entire duration of kprobe
* processing. Since int3 and the debug trap disable irqs and we clear
* IF while single-stepping, it must not be preemptible.
2005-11-07 12:00:14 +03:00
*/
2005-04-17 02:20:36 +04:00
2008-01-30 15:32:50 +03:00
kcb = get_kprobe_ctlblk ( ) ;
2008-01-30 15:32:19 +03:00
p = get_kprobe ( addr ) ;
2008-01-30 15:32:50 +03:00
2008-01-30 15:32:19 +03:00
if ( p ) {
if ( kprobe_running ( ) ) {
2008-01-30 15:32:50 +03:00
if ( reenter_kprobe ( p , regs , kcb ) )
return 1 ;
2005-04-17 02:20:36 +04:00
} else {
2008-01-30 15:32:19 +03:00
set_current_kprobe ( p , regs , kcb ) ;
kcb - > kprobe_status = KPROBE_HIT_ACTIVE ;
2008-01-30 15:32:50 +03:00
2005-04-17 02:20:36 +04:00
/*
2008-01-30 15:32:50 +03:00
* If we have no pre - handler or it returned 0 , we
* continue with normal processing . If we have a
2018-06-19 19:05:35 +03:00
* pre-handler and it returned non-zero, that means the
* user handler set up the registers to exit to another
* instruction; in that case we must skip the single stepping.
2005-04-17 02:20:36 +04:00
*/
2008-01-30 15:32:50 +03:00
if ( ! p - > pre_handler | | ! p - > pre_handler ( p , regs ) )
2010-02-25 16:34:23 +03:00
setup_singlestep ( p , regs , kcb , 0 ) ;
2018-06-19 19:16:17 +03:00
else
2018-06-19 19:15:45 +03:00
reset_current_kprobe ( ) ;
2008-01-30 15:32:50 +03:00
return 1 ;
2008-01-30 15:32:19 +03:00
}
2021-03-02 18:25:46 +03:00
} else if ( kprobe_is_ss ( kcb ) ) {
p = kprobe_running ( ) ;
if ( ( unsigned long ) p - > ainsn . insn < regs - > ip & &
( unsigned long ) p - > ainsn . insn + MAX_INSN_SIZE > regs - > ip ) {
/* Most probably this is the second int3 for the single-step */
resume_singlestep ( p , regs , kcb ) ;
kprobe_post_process ( p , regs , kcb ) ;
return 1 ;
}
}
if ( * addr ! = INT3_INSN_OPCODE ) {
2010-04-28 02:33:49 +04:00
/*
* The breakpoint instruction was removed right
* after we hit it . Another cpu has removed
* either a probepoint or a debugger breakpoint
* at this address . In either case , no further
* handling of this interrupt is appropriate .
* Back up over the ( now missing ) int3 and run
* the original instruction .
*/
regs - > ip = ( unsigned long ) addr ;
return 1 ;
2008-01-30 15:32:50 +03:00
} /* else: not a kprobe fault; let the kernel handle it */
2005-04-17 02:20:36 +04:00
2008-01-30 15:32:50 +03:00
return 0 ;
2005-04-17 02:20:36 +04:00
}
2014-04-17 12:18:14 +04:00
NOKPROBE_SYMBOL ( kprobe_int3_handler ) ;
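To illustrate the address computation at the top of the handler with made-up numbers (a standalone user-space sketch): on x86 sizeof(kprobe_opcode_t) is 1 and the #BP trap pushes an ip that points just past the INT3 byte, so stepping back one byte recovers the key used by get_kprobe().

#include <stdio.h>

int main(void)
{
        unsigned long probe_addr = 0xffffffff81000100UL; /* byte replaced by INT3 (assumed) */
        unsigned long trap_ip    = probe_addr + 1;       /* regs->ip as seen by the handler */

        /* kprobe_int3_handler(): addr = regs->ip - sizeof(kprobe_opcode_t) */
        unsigned long addr = trap_ip - 1;

        printf("get_kprobe() lookup key: %#lx (matches: %d)\n",
               addr, addr == probe_addr);
        return 0;
}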
2005-04-17 02:20:36 +04:00
2014-04-17 12:18:14 +04:00
int kprobe_fault_handler ( struct pt_regs * regs , int trapnr )
2005-04-17 02:20:36 +04:00
{
2005-11-07 12:00:12 +03:00
struct kprobe * cur = kprobe_running ( ) ;
struct kprobe_ctlblk * kcb = get_kprobe_ctlblk ( ) ;
2014-04-17 12:16:44 +04:00
if ( unlikely ( regs - > ip = = ( unsigned long ) cur - > ainsn . insn ) ) {
/* This must happen on single-stepping */
WARN_ON ( kcb - > kprobe_status ! = KPROBE_HIT_SS & &
kcb - > kprobe_status ! = KPROBE_REENTER ) ;
2006-03-26 13:38:23 +04:00
/*
* We are here because the instruction being single
* stepped caused a page fault . We reset the current
2008-01-30 15:30:56 +03:00
* kprobe, point the ip back to the probe address,
2006-03-26 13:38:23 +04:00
* and allow the page fault handler to continue as a
* normal page fault .
*/
2008-01-30 15:30:56 +03:00
regs - > ip = ( unsigned long ) cur - > addr ;
2016-06-11 17:06:53 +03:00
/*
2021-03-02 18:25:46 +03:00
* If the IF flag was set before the kprobe hit ,
2016-06-11 17:06:53 +03:00
* don ' t touch it :
*/
2008-01-30 15:31:21 +03:00
regs - > flags | = kcb - > kprobe_old_flags ;
2016-06-11 17:06:53 +03:00
2006-03-26 13:38:23 +04:00
if ( kcb - > kprobe_status = = KPROBE_REENTER )
restore_previous_kprobe ( kcb ) ;
else
reset_current_kprobe ( ) ;
2005-04-17 02:20:36 +04:00
}
2014-04-17 12:16:44 +04:00
2005-04-17 02:20:36 +04:00
return 0 ;
}
2014-04-17 12:18:14 +04:00
NOKPROBE_SYMBOL ( kprobe_fault_handler ) ;
2005-04-17 02:20:36 +04:00
2018-12-17 11:21:24 +03:00
int __init arch_populate_kprobe_blacklist ( void )
{
return kprobe_add_area_blacklist ( ( unsigned long ) __entry_text_start ,
( unsigned long ) __entry_text_end ) ;
}
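Beyond the entry-text range blacklisted above, individual functions can opt out of probing as well. A hedged sketch of the two usual annotations; the helper names are made up, while NOKPROBE_SYMBOL() and __kprobes are the real mechanisms used throughout this file.

#include <linux/kprobes.h>

/*
 * Code that can run while a kprobe is being handled must not itself be
 * probed, otherwise the INT3 would recurse. Either annotation adds the
 * function to the kprobe blacklist.
 */
static void my_int3_path_helper(void)
{
        /* ... runs in the breakpoint path ... */
}
NOKPROBE_SYMBOL(my_int3_path_helper);

/* Older style: place the function in the .kprobes.text section. */
static void __kprobes my_other_helper(void)
{
}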
2005-07-06 05:54:50 +04:00
int __init arch_init_kprobes ( void )
2005-06-28 02:17:10 +04:00
{
2013-07-18 15:47:50 +04:00
return 0 ;
2005-06-28 02:17:10 +04:00
}
2007-05-08 11:34:16 +04:00
2014-04-17 12:17:47 +04:00
int arch_trampoline_kprobe ( struct kprobe * p )
2007-05-08 11:34:16 +04:00
{
return 0 ;
}