/*
 * Infrastructure for profiling code inserted by 'gcc -pg'.
 *
 * Copyright (C) 2007-2008 Steven Rostedt <srostedt@redhat.com>
 * Copyright (C) 2004-2008 Ingo Molnar <mingo@redhat.com>
 *
 * Originally ported from the -rt patch by:
 *   Copyright (C) 2007 Arnaldo Carvalho de Melo <acme@redhat.com>
 *
 * Based on code in the latency_tracer, that is:
 *
 *  Copyright (C) 2004-2006 Ingo Molnar
 *  Copyright (C) 2004 William Lee Irwin III
 */
#include <linux/stop_machine.h>
#include <linux/clocksource.h>
#include <linux/kallsyms.h>
#include <linux/seq_file.h>
#include <linux/suspend.h>
#include <linux/debugfs.h>
#include <linux/hardirq.h>
#include <linux/kthread.h>
#include <linux/uaccess.h>
#include <linux/bsearch.h>
#include <linux/module.h>
#include <linux/ftrace.h>
#include <linux/sysctl.h>
#include <linux/slab.h>
#include <linux/ctype.h>
#include <linux/sort.h>
#include <linux/list.h>
#include <linux/hash.h>
#include <linux/rcupdate.h>
#include <trace/events/sched.h>

#include <asm/setup.h>

#include "trace_output.h"
#include "trace_stat.h"
#define FTRACE_WARN_ON(cond)			\
	({					\
		int ___r = cond;		\
		if (WARN_ON(___r))		\
			ftrace_kill();		\
		___r;				\
	})

#define FTRACE_WARN_ON_ONCE(cond)		\
	({					\
		int ___r = cond;		\
		if (WARN_ON_ONCE(___r))		\
			ftrace_kill();		\
		___r;				\
	})
/* hash bits for specific function selection */
#define FTRACE_HASH_BITS 7
#define FTRACE_FUNC_HASHSIZE (1 << FTRACE_HASH_BITS)

#define FTRACE_HASH_DEFAULT_BITS 10
#define FTRACE_HASH_MAX_BITS 12

#define FL_GLOBAL_CONTROL_MASK (FTRACE_OPS_FL_GLOBAL | FTRACE_OPS_FL_CONTROL)
/* ftrace_enabled is a method to turn ftrace on or off */
int ftrace_enabled __read_mostly;
static int last_ftrace_enabled;

/* Quick disabling of function tracer. */
int function_trace_stop;

/* List for set_ftrace_pid's pids. */
LIST_HEAD(ftrace_pids);
struct ftrace_pid {
	struct list_head list;
	struct pid *pid;
};

/*
 * ftrace_disabled is set when an anomaly is discovered.
 * ftrace_disabled is much stronger than ftrace_enabled.
 */
static int ftrace_disabled __read_mostly;

static DEFINE_MUTEX(ftrace_lock);

static struct ftrace_ops ftrace_list_end __read_mostly = {
	.func = ftrace_stub,
};

static struct ftrace_ops *ftrace_global_list __read_mostly = &ftrace_list_end;
static struct ftrace_ops *ftrace_control_list __read_mostly = &ftrace_list_end;
static struct ftrace_ops *ftrace_ops_list __read_mostly = &ftrace_list_end;
ftrace_func_t ftrace_trace_function __read_mostly = ftrace_stub;
static ftrace_func_t __ftrace_trace_function_delay __read_mostly = ftrace_stub;
ftrace_func_t __ftrace_trace_function __read_mostly = ftrace_stub;
ftrace_func_t ftrace_pid_function __read_mostly = ftrace_stub;

static struct ftrace_ops global_ops;
static struct ftrace_ops control_ops;

static void
ftrace_ops_list_func(unsigned long ip, unsigned long parent_ip);
/*
 * Traverse the ftrace_global_list, invoking all entries.  The reason that we
 * can use rcu_dereference_raw() is that elements removed from this list
 * are simply leaked, so there is no need to interact with a grace-period
 * mechanism.  The rcu_dereference_raw() calls are needed to handle
 * concurrent insertions into the ftrace_global_list.
 *
 * Silly Alpha and silly pointer-speculation compiler optimizations!
 */
static void ftrace_global_list_func(unsigned long ip,
				    unsigned long parent_ip)
{
	struct ftrace_ops *op;

	if (unlikely(trace_recursion_test(TRACE_GLOBAL_BIT)))
		return;
	trace_recursion_set(TRACE_GLOBAL_BIT);
	op = rcu_dereference_raw(ftrace_global_list); /*see above*/
	while (op != &ftrace_list_end) {
		op->func(ip, parent_ip);
		op = rcu_dereference_raw(op->next); /*see above*/
	};
	trace_recursion_clear(TRACE_GLOBAL_BIT);
}

static void ftrace_pid_func(unsigned long ip, unsigned long parent_ip)
{
	if (!test_tsk_trace_trace(current))
		return;

	ftrace_pid_function(ip, parent_ip);
}

static void set_ftrace_pid_function(ftrace_func_t func)
{
	/* do not set ftrace_pid_function to itself! */
	if (func != ftrace_pid_func)
		ftrace_pid_function = func;
}
/**
ftrace: dynamic enabling/disabling of function calls
This patch adds a feature to dynamically replace the ftrace code
with the jmps to allow a kernel with ftrace configured to run
as fast as it can without it configured.
The way this works, is on bootup (if ftrace is enabled), a ftrace
function is registered to record the instruction pointer of all
places that call the function.
Later, if there's still any code to patch, a kthread is awoken
(rate limited to at most once a second) that performs a stop_machine,
and replaces all the code that was called with a jmp over the call
to ftrace. It only replaces what was found the previous time. Typically
the system reaches equilibrium quickly after bootup and there's no code
patching needed at all.
e.g.
call ftrace /* 5 bytes */
is replaced with
jmp 3f /* jmp is 2 bytes and we jump 3 forward */
3:
When we want to enable ftrace for function tracing, the IP recording
is removed, and stop_machine is called again to replace all the locations
of that were recorded back to the call of ftrace. When it is disabled,
we replace the code back to the jmp.
Allocation is done by the kthread. If the ftrace recording function is
called, and we don't have any record slots available, then we simply
skip that call. Once a second a new page (if needed) is allocated for
recording new ftrace function calls. A large batch is allocated at
boot up to get most of the calls there.
Because we do this via stop_machine, we don't have to worry about another
CPU executing a ftrace call as we modify it. But we do need to worry
about NMI's so all functions that might be called via nmi must be
annotated with notrace_nmi. When this code is configured in, the NMI code
will not call notrace.
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-05-12 23:20:42 +04:00
* clear_ftrace_function - reset the ftrace function
2008-05-12 23:20:42 +04:00
*
2008-05-12 23:20:42 +04:00
 * This NULLs the ftrace function and in essence stops
 * tracing. There may be lag
2008-05-12 23:20:42 +04:00
*/
2008-05-12 23:20:42 +04:00
void clear_ftrace_function(void)
2008-05-12 23:20:42 +04:00
{
2008-05-12 23:20:42 +04:00
	ftrace_trace_function = ftrace_stub;
2008-11-06 00:05:44 +03:00
	__ftrace_trace_function = ftrace_stub;
ftrace: Fix dynamic selftest failure on some archs
Archs that do not implement CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST will
fail the dynamic ftrace selftest.
The function tracer has a quick 'off' variable that will prevent
the callback functions from being called. This variable is called
function_trace_stop. In x86, this is implemented directly in the mcount
assembly, but for other archs, an intermediate function is used called
ftrace_test_stop_func().
In dynamic ftrace, the function pointer variable ftrace_trace_function is
used to update the caller code in the mcount caller. But for archs that
do not have CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST set, it only calls
ftrace_test_stop_func() instead, which in turn calls __ftrace_trace_function.
When more than one ftrace_ops is registered, the function it calls is
ftrace_ops_list_func(), which will iterate over all registered ftrace_ops
and call the callbacks that have their hash matching.
The issue happens when two ftrace_ops are registered for different functions
and one is then unregistered. The __ftrace_trace_function is then pointed
to the remaining ftrace_ops callback function directly. This means it will
be called for all functions that were registered to trace by both ftrace_ops
that were registered.
This is not an issue for archs with CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST,
because the update of ftrace_trace_function doesn't happen until after all
functions have been updated, and then the mcount caller is updated. But
for those archs that do use the ftrace_test_stop_func(), the update is
immediate.
The dynamic selftest fails because it hits this situation, and the
ftrace_ops that it registers fails to trace only what it was supposed to
and instead traces all other functions.
The solution is to delay the setting of __ftrace_trace_function until
after all the functions have been updated according to the registered
ftrace_ops. Also, function_trace_stop is set during the update to prevent
function tracing from calling code that is caused by the function tracer
itself.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2011-07-13 23:11:02 +04:00
	__ftrace_trace_function_delay = ftrace_stub;
2008-11-26 08:16:23 +03:00
	ftrace_pid_function = ftrace_stub;
2008-05-12 23:20:42 +04:00
}
2008-11-06 00:05:44 +03:00
#ifndef CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST
/*
 * For those archs that do not test ftrace_trace_stop in their
 * mcount call site, we need to do it from C.
 */
static void ftrace_test_stop_func(unsigned long ip, unsigned long parent_ip)
{
	if (function_trace_stop)
		return;

	__ftrace_trace_function(ip, parent_ip);
}
#endif
2012-02-15 18:51:48 +04:00
static void control_ops_disable_all(struct ftrace_ops *ops)
{
	int cpu;

	for_each_possible_cpu(cpu)
		*per_cpu_ptr(ops->disabled, cpu) = 1;
}

static int control_ops_alloc(struct ftrace_ops *ops)
{
	int __percpu *disabled;

	disabled = alloc_percpu(int);
	if (!disabled)
		return -ENOMEM;

	ops->disabled = disabled;
	control_ops_disable_all(ops);
	return 0;
}

static void control_ops_free(struct ftrace_ops *ops)
{
	free_percpu(ops->disabled);
}
2011-05-04 06:49:52 +04:00
static void update_global_ops(void)
2011-04-28 05:43:36 +04:00
{
	ftrace_func_t func;

	/*
	 * If there's only one function registered, then call that
	 * function directly. Otherwise, we need to iterate over the
	 * registered callers.
	 */
2011-05-04 17:27:52 +04:00
	if (ftrace_global_list == &ftrace_list_end ||
	    ftrace_global_list->next == &ftrace_list_end)
		func = ftrace_global_list->func;
2011-04-28 05:43:36 +04:00
	else
2011-05-04 17:27:52 +04:00
		func = ftrace_global_list_func;
2011-04-28 05:43:36 +04:00

	/* If we filter on pids, update to use the pid function */
	if (!list_empty(&ftrace_pids)) {
		set_ftrace_pid_function(func);
		func = ftrace_pid_func;
	}
2011-05-04 06:49:52 +04:00
	global_ops.func = func;
}
static void update_ftrace_function(void)
{
	ftrace_func_t func;

	update_global_ops();
2011-05-06 05:14:55 +04:00
	/*
	 * If we are at the end of the list and this ops is
	 * not dynamic, then have the mcount trampoline call
	 * the function directly
	 */
2011-05-04 17:27:52 +04:00
	if (ftrace_ops_list == &ftrace_list_end ||
2011-05-06 05:14:55 +04:00
	    (ftrace_ops_list->next == &ftrace_list_end &&
	     !(ftrace_ops_list->flags & FTRACE_OPS_FL_DYNAMIC)))
2011-05-04 17:27:52 +04:00
		func = ftrace_ops_list->func;
	else
		func = ftrace_ops_list_func;
2011-05-04 06:49:52 +04:00
2011-04-28 05:43:36 +04:00
#ifdef CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST
	ftrace_trace_function = func;
2011-07-13 23:11:02 +04:00
#else
#ifdef CONFIG_DYNAMIC_FTRACE
	/* do not update till all functions have been modified */
	__ftrace_trace_function_delay = func;
2011-04-28 05:43:36 +04:00
#else
	__ftrace_trace_function = func;
2011-07-13 23:11:02 +04:00
#endif
2012-02-17 12:29:15 +04:00
	ftrace_trace_function =
		(func == ftrace_stub) ? func : ftrace_test_stop_func;
2011-04-28 05:43:36 +04:00
#endif
}
2011-05-04 06:49:52 +04:00
static void add_ftrace_ops(struct ftrace_ops **list, struct ftrace_ops *ops)
2008-05-12 23:20:42 +04:00
{
2011-05-04 06:49:52 +04:00
	ops->next = *list;
2008-05-12 23:20:42 +04:00
	/*
2011-05-04 17:27:52 +04:00
	 * We are entering ops into the list but another
2008-05-12 23:20:42 +04:00
	 * CPU might be walking that list. We need to make sure
	 * the ops->next pointer is valid before another CPU sees
2011-05-04 17:27:52 +04:00
	 * the ops pointer included into the list.
2008-05-12 23:20:42 +04:00
	 */
2011-05-04 06:49:52 +04:00
	rcu_assign_pointer(*list, ops);
2008-05-12 23:20:42 +04:00
}
2011-05-04 06:49:52 +04:00
static int remove_ftrace_ops(struct ftrace_ops **list, struct ftrace_ops *ops)
2008-05-12 23:20:42 +04:00
{
	struct ftrace_ops **p;

	/*
	 * If we are removing the last function, then simply point
	 * to the ftrace_stub.
2008-05-12 23:20:42 +04:00
	 */
2011-05-04 06:49:52 +04:00
	if (*list == ops && ops->next == &ftrace_list_end) {
		*list = &ftrace_list_end;
2009-02-14 09:42:44 +03:00
		return 0;
2008-05-12 23:20:42 +04:00
	}
2011-05-04 06:49:52 +04:00
	for (p = list; *p != &ftrace_list_end; p = &(*p)->next)
2008-05-12 23:20:42 +04:00
		if (*p == ops)
			break;
2009-02-14 09:42:44 +03:00
	if (*p != ops)
		return -1;
2008-05-12 23:20:42 +04:00
	*p = (*p)->next;
2011-05-04 06:49:52 +04:00
	return 0;
}
2008-05-12 23:20:42 +04:00
2012-02-15 18:51:48 +04:00
static void add_ftrace_list_ops(struct ftrace_ops **list,
				struct ftrace_ops *main_ops,
				struct ftrace_ops *ops)
{
	int first = *list == &ftrace_list_end;
	add_ftrace_ops(list, ops);
	if (first)
		add_ftrace_ops(&ftrace_ops_list, main_ops);
}

static int remove_ftrace_list_ops(struct ftrace_ops **list,
				  struct ftrace_ops *main_ops,
				  struct ftrace_ops *ops)
{
	int ret = remove_ftrace_ops(list, ops);
	if (!ret && *list == &ftrace_list_end)
		ret = remove_ftrace_ops(&ftrace_ops_list, main_ops);
	return ret;
}
2011-05-04 06:49:52 +04:00
static int __register_ftrace_function(struct ftrace_ops *ops)
{
	if (ftrace_disabled)
		return -ENODEV;

	if (FTRACE_WARN_ON(ops == &global_ops))
		return -EINVAL;
2011-05-04 17:27:52 +04:00
	if (WARN_ON(ops->flags & FTRACE_OPS_FL_ENABLED))
		return -EBUSY;
2012-02-15 18:51:48 +04:00
	/* We don't support both control and global flags set. */
	if ((ops->flags & FL_GLOBAL_CONTROL_MASK) == FL_GLOBAL_CONTROL_MASK)
		return -EINVAL;
2011-05-06 05:14:55 +04:00
	if (!core_kernel_data((unsigned long)ops))
		ops->flags |= FTRACE_OPS_FL_DYNAMIC;
2011-05-04 17:27:52 +04:00
	if (ops->flags & FTRACE_OPS_FL_GLOBAL) {
2012-02-15 18:51:48 +04:00
		add_ftrace_list_ops(&ftrace_global_list, &global_ops, ops);
2011-05-04 17:27:52 +04:00
		ops->flags |= FTRACE_OPS_FL_ENABLED;
2012-02-15 18:51:48 +04:00
	} else if (ops->flags & FTRACE_OPS_FL_CONTROL) {
		if (control_ops_alloc(ops))
			return -ENOMEM;
		add_ftrace_list_ops(&ftrace_control_list, &control_ops, ops);
2011-05-04 17:27:52 +04:00
	} else
		add_ftrace_ops(&ftrace_ops_list, ops);
2011-05-04 06:49:52 +04:00
	if (ftrace_enabled)
		update_ftrace_function();

	return 0;
}
static int __unregister_ftrace_function(struct ftrace_ops *ops)
{
	int ret;

	if (ftrace_disabled)
		return -ENODEV;
2011-05-04 17:27:52 +04:00
	if (WARN_ON(!(ops->flags & FTRACE_OPS_FL_ENABLED)))
		return -EBUSY;
2011-05-04 06:49:52 +04:00
	if (FTRACE_WARN_ON(ops == &global_ops))
		return -EINVAL;
2011-05-04 17:27:52 +04:00
	if (ops->flags & FTRACE_OPS_FL_GLOBAL) {
2012-02-15 18:51:48 +04:00
		ret = remove_ftrace_list_ops(&ftrace_global_list,
					     &global_ops, ops);
2011-05-04 17:27:52 +04:00
		if (!ret)
			ops->flags &= ~FTRACE_OPS_FL_ENABLED;
2012-02-15 18:51:48 +04:00
	} else if (ops->flags & FTRACE_OPS_FL_CONTROL) {
		ret = remove_ftrace_list_ops(&ftrace_control_list,
					     &control_ops, ops);
		if (!ret) {
			/*
			 * The ftrace_ops is now removed from the list,
			 * so there'll be no new users. We must ensure
			 * all current users are done before we free
			 * the control data.
			 */
			synchronize_sched();
			control_ops_free(ops);
		}
2011-05-04 17:27:52 +04:00
	} else
		ret = remove_ftrace_ops(&ftrace_ops_list, ops);
2011-05-04 06:49:52 +04:00
	if (ret < 0)
		return ret;
2011-05-04 17:27:52 +04:00
2011-04-28 05:43:36 +04:00
	if (ftrace_enabled)
		update_ftrace_function();
2008-05-12 23:20:42 +04:00
2011-05-06 05:14:55 +04:00
	/*
	 * Dynamic ops may be freed, we must make sure that all
	 * callers are done before leaving this function.
	 */
	if (ops->flags & FTRACE_OPS_FL_DYNAMIC)
		synchronize_sched();
2009-02-14 09:42:44 +03:00
	return 0;
2008-05-12 23:20:42 +04:00
}
2008-11-26 08:16:23 +03:00
static void ftrace_update_pid_func(void)
{
2011-04-28 05:43:36 +04:00
	/* Only do something if we are tracing something */
2008-11-26 08:16:23 +03:00
	if (ftrace_trace_function == ftrace_stub)
2009-03-06 09:29:04 +03:00
		return;
2008-11-26 08:16:23 +03:00
2011-04-28 05:43:36 +04:00
	update_ftrace_function();
2008-11-26 08:16:23 +03:00
}
2009-03-24 00:12:36 +03:00
#ifdef CONFIG_FUNCTION_PROFILER
struct ftrace_profile {
	struct hlist_node		node;
	unsigned long			ip;
	unsigned long			counter;
2009-03-24 06:12:58 +03:00
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
	unsigned long long		time;
2010-04-26 22:02:05 +04:00
	unsigned long long		time_squared;
2009-03-24 06:12:58 +03:00
#endif
2009-02-16 23:28:00 +03:00
};
2009-03-24 00:12:36 +03:00
struct ftrace_profile_page {
	struct ftrace_profile_page	*next;
	unsigned long			index;
	struct ftrace_profile		records[];
2008-05-12 23:20:43 +04:00
};
2009-03-25 03:50:39 +03:00
struct ftrace_profile_stat {
	atomic_t			disabled;
	struct hlist_head		*hash;
	struct ftrace_profile_page	*pages;
	struct ftrace_profile_page	*start;
	struct tracer_stat		stat;
};
2009-03-24 00:12:36 +03:00
#define PROFILE_RECORDS_SIZE						\
	(PAGE_SIZE - offsetof(struct ftrace_profile_page, records))
2008-05-12 23:20:43 +04:00
2009-03-24 00:12:36 +03:00
#define PROFILES_PER_PAGE					\
	(PROFILE_RECORDS_SIZE / sizeof(struct ftrace_profile))
2008-05-12 23:20:42 +04:00
2009-03-25 20:26:41 +03:00
static int ftrace_profile_bits __read_mostly;
static int ftrace_profile_enabled __read_mostly;

/* ftrace_profile_lock - synchronize the enable and disable of the profiler */
2009-03-20 19:50:56 +03:00
static DEFINE_MUTEX(ftrace_profile_lock);
2009-03-25 03:50:39 +03:00
static DEFINE_PER_CPU(struct ftrace_profile_stat, ftrace_profile_stats);
2009-03-24 00:12:36 +03:00
#define FTRACE_PROFILE_HASH_SIZE 1024 /* must be power of 2 */
2009-03-20 19:50:56 +03:00
static void *
function_stat_next(void *v, int idx)
{
2009-03-24 00:12:36 +03:00
	struct ftrace_profile *rec = v;
	struct ftrace_profile_page *pg;
2009-03-20 19:50:56 +03:00
2009-03-24 00:12:36 +03:00
	pg = (struct ftrace_profile_page *)((unsigned long)rec & PAGE_MASK);
2009-03-20 19:50:56 +03:00
 again:
2009-06-26 07:15:37 +04:00
	if (idx != 0)
		rec++;
2009-03-20 19:50:56 +03:00
	if ((void *)rec >= (void *)&pg->records[pg->index]) {
		pg = pg->next;
		if (!pg)
			return NULL;
		rec = &pg->records[0];
2009-03-24 00:12:36 +03:00
		if (!rec->counter)
			goto again;
2009-03-20 19:50:56 +03:00
	}

	return rec;
}
static void *function_stat_start(struct tracer_stat *trace)
{
2009-03-25 03:50:39 +03:00
	struct ftrace_profile_stat *stat =
		container_of(trace, struct ftrace_profile_stat, stat);

	if (!stat || !stat->start)
		return NULL;

	return function_stat_next(&stat->start->records[0], 0);
2009-03-20 19:50:56 +03:00
}
2009-03-24 06:12:58 +03:00
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
/* function graph compares on total time */
static int function_stat_cmp(void *p1, void *p2)
{
	struct ftrace_profile *a = p1;
	struct ftrace_profile *b = p2;

	if (a->time < b->time)
		return -1;
	if (a->time > b->time)
		return 1;
	else
		return 0;
}
#else
/* not function graph compares against hits */
2009-03-20 19:50:56 +03:00
static int function_stat_cmp(void *p1, void *p2)
{
2009-03-24 00:12:36 +03:00
	struct ftrace_profile *a = p1;
	struct ftrace_profile *b = p2;
2009-03-20 19:50:56 +03:00
	if (a->counter < b->counter)
		return -1;
	if (a->counter > b->counter)
		return 1;
	else
		return 0;
}
2009-03-24 06:12:58 +03:00
#endif
2009-03-20 19:50:56 +03:00
static int function_stat_headers(struct seq_file *m)
{
2009-03-24 06:12:58 +03:00
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
2009-03-26 04:00:47 +03:00
	seq_printf(m, "  Function                               "
2010-04-26 22:02:05 +04:00
		   "Hit    Time            Avg             s^2\n"
2009-03-26 04:00:47 +03:00
		      "  --------                               "
2010-04-26 22:02:05 +04:00
		   "---    ----            ---             ---\n");
2009-03-24 06:12:58 +03:00
#else
2009-03-20 19:50:56 +03:00
	seq_printf(m, "  Function                               Hit\n"
		      "  --------                               ---\n");
2009-03-24 06:12:58 +03:00
#endif
2009-03-20 19:50:56 +03:00
	return 0;
}
static int function_stat_show(struct seq_file *m, void *v)
{
2009-03-24 00:12:36 +03:00
	struct ftrace_profile *rec = v;
2009-03-20 19:50:56 +03:00
	char str[KSYM_SYMBOL_LEN];
2010-08-23 12:50:12 +04:00
	int ret = 0;
2009-03-24 06:12:58 +03:00
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
2009-03-26 04:00:47 +03:00
	static struct trace_seq s;
	unsigned long long avg;
2010-04-26 22:02:05 +04:00
	unsigned long long stddev;
2009-03-24 06:12:58 +03:00
#endif
2010-08-23 12:50:12 +04:00
	mutex_lock(&ftrace_profile_lock);

	/* we raced with function_profile_reset() */
	if (unlikely(rec->counter == 0)) {
		ret = -EBUSY;
		goto out;
	}
2009-03-20 19:50:56 +03:00
	kallsyms_lookup(rec->ip, NULL, NULL, NULL, str);
2009-03-24 06:12:58 +03:00
	seq_printf(m, "  %-30.30s  %10lu", str, rec->counter);

#ifdef CONFIG_FUNCTION_GRAPH_TRACER
	seq_printf(m, "    ");
2009-03-26 04:00:47 +03:00
	avg = rec->time;
	do_div(avg, rec->counter);
2010-04-26 22:02:05 +04:00
	/* Sample standard deviation (s^2) */
	if (rec->counter <= 1)
		stddev = 0;
	else {
		stddev = rec->time_squared - rec->counter * avg * avg;
		/*
		 * Divide only 1000 for ns^2 -> us^2 conversion.
		 * trace_print_graph_duration will divide 1000 again.
		 */
		do_div(stddev, (rec->counter - 1) * 1000);
	}
2009-03-26 04:00:47 +03:00
	trace_seq_init(&s);
	trace_print_graph_duration(rec->time, &s);
	trace_seq_puts(&s, "    ");
	trace_print_graph_duration(avg, &s);
2010-04-26 22:02:05 +04:00
	trace_seq_puts(&s, "    ");
	trace_print_graph_duration(stddev, &s);
2009-03-24 06:12:58 +03:00
	trace_print_seq(m, &s);
#endif
	seq_putc(m, '\n');
2010-08-23 12:50:12 +04:00
out:
	mutex_unlock(&ftrace_profile_lock);
2009-03-20 19:50:56 +03:00
2010-08-23 12:50:12 +04:00
	return ret;
2009-03-20 19:50:56 +03:00
}
static void ftrace_profile_reset(struct ftrace_profile_stat *stat)
{
	struct ftrace_profile_page *pg;

	pg = stat->pages = stat->start;

	while (pg) {
		memset(pg->records, 0, PROFILE_RECORDS_SIZE);
		pg->index = 0;
		pg = pg->next;
	}

	memset(stat->hash, 0,
	       FTRACE_PROFILE_HASH_SIZE * sizeof(struct hlist_head));
}
int ftrace_profile_pages_init(struct ftrace_profile_stat *stat)
{
	struct ftrace_profile_page *pg;
	int functions;
	int pages;
	int i;

	/* If we already allocated, do nothing */
	if (stat->pages)
		return 0;

	stat->pages = (void *)get_zeroed_page(GFP_KERNEL);
	if (!stat->pages)
		return -ENOMEM;

#ifdef CONFIG_DYNAMIC_FTRACE
	functions = ftrace_update_tot_cnt;
#else
	/*
	 * We do not know the number of functions that exist because
	 * dynamic tracing is what counts them. With past experience
	 * we have around 20K functions. That should be more than enough.
	 * It is highly unlikely we will execute every function in
	 * the kernel.
	 */
	functions = 20000;
#endif

	pg = stat->start = stat->pages;

	pages = DIV_ROUND_UP(functions, PROFILES_PER_PAGE);

	for (i = 0; i < pages; i++) {
		pg->next = (void *)get_zeroed_page(GFP_KERNEL);
		if (!pg->next)
			goto out_free;
		pg = pg->next;
	}

	return 0;

 out_free:
	/*
	 * stat->start is the first page (stat->pages), so this loop
	 * frees every page allocated above, including the first one.
	 */
	pg = stat->start;
	while (pg) {
		unsigned long tmp = (unsigned long)pg;

		pg = pg->next;
		free_page(tmp);
	}

	stat->pages = NULL;
	stat->start = NULL;

	return -ENOMEM;
}
static int ftrace_profile_init_cpu(int cpu)
{
	struct ftrace_profile_stat *stat;
	int size;

	stat = &per_cpu(ftrace_profile_stats, cpu);

	if (stat->hash) {
		/* If the profile is already created, simply reset it */
		ftrace_profile_reset(stat);
		return 0;
	}

	/*
	 * We are profiling all functions, but usually only a few thousand
	 * functions are hit. We'll make a hash of 1024 items.
	 */
	size = FTRACE_PROFILE_HASH_SIZE;

	stat->hash = kzalloc(sizeof(struct hlist_head) * size, GFP_KERNEL);

	if (!stat->hash)
		return -ENOMEM;

	if (!ftrace_profile_bits) {
		size--;

		for (; size; size >>= 1)
			ftrace_profile_bits++;
	}

	/* Preallocate the function profiling pages */
	if (ftrace_profile_pages_init(stat) < 0) {
		kfree(stat->hash);
		stat->hash = NULL;
		return -ENOMEM;
	}

	return 0;
}
static int ftrace_profile_init(void)
{
	int cpu;
	int ret = 0;

	for_each_online_cpu(cpu) {
		ret = ftrace_profile_init_cpu(cpu);
		if (ret)
			break;
	}

	return ret;
}
/* interrupts must be disabled */
static struct ftrace_profile *
ftrace_find_profiled_func(struct ftrace_profile_stat *stat, unsigned long ip)
{
	struct ftrace_profile *rec;
	struct hlist_head *hhd;
	struct hlist_node *n;
	unsigned long key;

	key = hash_long(ip, ftrace_profile_bits);
	hhd = &stat->hash[key];

	if (hlist_empty(hhd))
		return NULL;

	hlist_for_each_entry_rcu(rec, n, hhd, node) {
		if (rec->ip == ip)
			return rec;
	}

	return NULL;
}
static void ftrace_add_profile(struct ftrace_profile_stat *stat,
			       struct ftrace_profile *rec)
{
	unsigned long key;

	key = hash_long(rec->ip, ftrace_profile_bits);
	hlist_add_head_rcu(&rec->node, &stat->hash[key]);
}
/*
 * The memory is already allocated, this simply finds a new record to use.
 */
static struct ftrace_profile *
ftrace_profile_alloc(struct ftrace_profile_stat *stat, unsigned long ip)
{
	struct ftrace_profile *rec = NULL;

	/* prevent recursion (from NMIs) */
	if (atomic_inc_return(&stat->disabled) != 1)
		goto out;

	/*
	 * Try to find the function again since an NMI
	 * could have added it
	 */
	rec = ftrace_find_profiled_func(stat, ip);
	if (rec)
		goto out;

	if (stat->pages->index == PROFILES_PER_PAGE) {
		if (!stat->pages->next)
			goto out;
		stat->pages = stat->pages->next;
	}

	rec = &stat->pages->records[stat->pages->index++];
	rec->ip = ip;
	ftrace_add_profile(stat, rec);

 out:
	atomic_dec(&stat->disabled);

	return rec;
}
static void
function_profile_call(unsigned long ip, unsigned long parent_ip)
{
	struct ftrace_profile_stat *stat;
	struct ftrace_profile *rec;
	unsigned long flags;

	if (!ftrace_profile_enabled)
		return;

	local_irq_save(flags);

	stat = &__get_cpu_var(ftrace_profile_stats);
	if (!stat->hash || !ftrace_profile_enabled)
		goto out;

	rec = ftrace_find_profiled_func(stat, ip);
	if (!rec) {
		rec = ftrace_profile_alloc(stat, ip);
		if (!rec)
			goto out;
	}

	rec->counter++;
 out:
	local_irq_restore(flags);
}
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
static int profile_graph_entry(struct ftrace_graph_ent *trace)
{
	function_profile_call(trace->func, 0);
	return 1;
}

static void profile_graph_return(struct ftrace_graph_ret *trace)
{
	struct ftrace_profile_stat *stat;
	unsigned long long calltime;
	struct ftrace_profile *rec;
	unsigned long flags;

	local_irq_save(flags);
	stat = &__get_cpu_var(ftrace_profile_stats);
	if (!stat->hash || !ftrace_profile_enabled)
		goto out;

	/* If the calltime was zero'd ignore it */
	if (!trace->calltime)
		goto out;

	calltime = trace->rettime - trace->calltime;

	if (!(trace_flags & TRACE_ITER_GRAPH_TIME)) {
		int index;

		index = trace->depth;

		/* Append this call time to the parent time to subtract */
		if (index)
			current->ret_stack[index - 1].subtime += calltime;

		if (current->ret_stack[index].subtime < calltime)
			calltime -= current->ret_stack[index].subtime;
		else
			calltime = 0;
	}

	rec = ftrace_find_profiled_func(stat, trace->func);
	if (rec) {
		rec->time += calltime;
		rec->time_squared += calltime * calltime;
	}

 out:
	local_irq_restore(flags);
}

static int register_ftrace_profiler(void)
{
	return register_ftrace_graph(&profile_graph_return,
				     &profile_graph_entry);
}

static void unregister_ftrace_profiler(void)
{
	unregister_ftrace_graph();
}
#else
static struct ftrace_ops ftrace_profile_ops __read_mostly = {
	.func		= function_profile_call,
};

static int register_ftrace_profiler(void)
{
	return register_ftrace_function(&ftrace_profile_ops);
}

static void unregister_ftrace_profiler(void)
{
	unregister_ftrace_function(&ftrace_profile_ops);
}
#endif /* CONFIG_FUNCTION_GRAPH_TRACER */
static ssize_t
ftrace_profile_write(struct file *filp, const char __user *ubuf,
		     size_t cnt, loff_t *ppos)
{
	unsigned long val;
	int ret;

	ret = kstrtoul_from_user(ubuf, cnt, 10, &val);
	if (ret)
		return ret;

	val = !!val;

	mutex_lock(&ftrace_profile_lock);
	if (ftrace_profile_enabled ^ val) {
		if (val) {
			ret = ftrace_profile_init();
			if (ret < 0) {
				cnt = ret;
				goto out;
			}

			ret = register_ftrace_profiler();
			if (ret < 0) {
				cnt = ret;
				goto out;
			}
			ftrace_profile_enabled = 1;
		} else {
			ftrace_profile_enabled = 0;
			/*
			 * unregister_ftrace_profiler calls stop_machine
			 * so this acts like a synchronize_sched.
			 */
			unregister_ftrace_profiler();
		}
	}
 out:
	mutex_unlock(&ftrace_profile_lock);

	*ppos += cnt;

	return cnt;
}
static ssize_t
ftrace_profile_read(struct file *filp, char __user *ubuf,
		    size_t cnt, loff_t *ppos)
{
	char buf[64];		/* big enough to hold a number */
	int r;

	r = sprintf(buf, "%u\n", ftrace_profile_enabled);
	return simple_read_from_buffer(ubuf, cnt, ppos, buf, r);
}
static const struct file_operations ftrace_profile_fops = {
	.open		= tracing_open_generic,
	.read		= ftrace_profile_read,
	.write		= ftrace_profile_write,
	.llseek		= default_llseek,
};
/* used to initialize the real stat files */
static struct tracer_stat function_stats __initdata = {
	.name		= "functions",
	.stat_start	= function_stat_start,
	.stat_next	= function_stat_next,
	.stat_cmp	= function_stat_cmp,
	.stat_headers	= function_stat_headers,
	.stat_show	= function_stat_show
};
static __init void ftrace_profile_debugfs(struct dentry *d_tracer)
{
	struct ftrace_profile_stat *stat;
	struct dentry *entry;
	char *name;
	int ret;
	int cpu;

	for_each_possible_cpu(cpu) {
		stat = &per_cpu(ftrace_profile_stats, cpu);

		/* allocate enough for function name + cpu number */
		name = kmalloc(32, GFP_KERNEL);
		if (!name) {
			/*
			 * The files created are permanent, if something happens
			 * we still do not free memory.
			 */
			WARN(1,
			     "Could not allocate stat file for cpu %d\n",
			     cpu);
			return;
		}
		stat->stat = function_stats;
		snprintf(name, 32, "function%d", cpu);
		stat->stat.name = name;
		ret = register_stat_tracer(&stat->stat);
		if (ret) {
			WARN(1,
			     "Could not register function stat for cpu %d\n",
			     cpu);
			kfree(name);
			return;
		}
	}

	entry = debugfs_create_file("function_profile_enabled", 0644,
				    d_tracer, NULL, &ftrace_profile_fops);
	if (!entry)
		pr_warning("Could not create debugfs "
			   "'function_profile_enabled' entry\n");
}

#else /* CONFIG_FUNCTION_PROFILER */
static __init void ftrace_profile_debugfs(struct dentry *d_tracer)
{
}
#endif /* CONFIG_FUNCTION_PROFILER */
static struct pid * const ftrace_swapper_pid = &init_struct_pid;

#ifdef CONFIG_DYNAMIC_FTRACE

#ifndef CONFIG_FTRACE_MCOUNT_RECORD
# error Dynamic ftrace depends on MCOUNT_RECORD
#endif

static struct hlist_head ftrace_func_hash[FTRACE_FUNC_HASHSIZE] __read_mostly;

struct ftrace_func_probe {
	struct hlist_node	node;
	struct ftrace_probe_ops	*ops;
	unsigned long		flags;
	unsigned long		ip;
	void			*data;
	struct rcu_head		rcu;
};

struct ftrace_func_entry {
	struct hlist_node hlist;
	unsigned long ip;
};

struct ftrace_hash {
	unsigned long		size_bits;
	struct hlist_head	*buckets;
	unsigned long		count;
	struct rcu_head		rcu;
};
/*
 * We make these constant because no one should touch them,
 * but they are used as the default "empty hash", to avoid allocating
 * it all the time. These are in a read only section such that if
 * anyone does try to modify it, it will cause an exception.
 */
static const struct hlist_head empty_buckets[1];
static const struct ftrace_hash empty_hash = {
	.buckets = (struct hlist_head *)empty_buckets,
};
#define EMPTY_HASH	((struct ftrace_hash *)&empty_hash)
static struct ftrace_ops global_ops = {
	.func			= ftrace_stub,
	.notrace_hash		= EMPTY_HASH,
	.filter_hash		= EMPTY_HASH,
};

static DEFINE_MUTEX(ftrace_regex_lock);

struct ftrace_page {
	struct ftrace_page	*next;
	struct dyn_ftrace	*records;
	int			index;
	int			size;
};
static struct ftrace_page	*ftrace_new_pgs;

#define ENTRY_SIZE sizeof(struct dyn_ftrace)
#define ENTRIES_PER_PAGE (PAGE_SIZE / ENTRY_SIZE)

/* estimate from running different kernels */
#define NR_TO_INIT		10000

static struct ftrace_page	*ftrace_pages_start;
static struct ftrace_page	*ftrace_pages;

static bool ftrace_hash_empty(struct ftrace_hash *hash)
{
	return !hash || !hash->count;
}
static struct ftrace_func_entry *
ftrace_lookup_ip(struct ftrace_hash *hash, unsigned long ip)
{
	unsigned long key;
	struct ftrace_func_entry *entry;
	struct hlist_head *hhd;
	struct hlist_node *n;

	if (ftrace_hash_empty(hash))
		return NULL;

	if (hash->size_bits > 0)
		key = hash_long(ip, hash->size_bits);
	else
		key = 0;

	hhd = &hash->buckets[key];

	hlist_for_each_entry_rcu(entry, n, hhd, hlist) {
		if (entry->ip == ip)
			return entry;
	}
	return NULL;
}
static void __add_hash_entry(struct ftrace_hash *hash,
			     struct ftrace_func_entry *entry)
{
	struct hlist_head *hhd;
	unsigned long key;

	if (hash->size_bits)
		key = hash_long(entry->ip, hash->size_bits);
	else
		key = 0;

	hhd = &hash->buckets[key];
	hlist_add_head(&entry->hlist, hhd);
	hash->count++;
}

static int add_hash_entry(struct ftrace_hash *hash, unsigned long ip)
{
	struct ftrace_func_entry *entry;

	entry = kmalloc(sizeof(*entry), GFP_KERNEL);
	if (!entry)
		return -ENOMEM;

	entry->ip = ip;
	__add_hash_entry(hash, entry);

	return 0;
}

static void
free_hash_entry(struct ftrace_hash *hash,
		struct ftrace_func_entry *entry)
{
	hlist_del(&entry->hlist);
	kfree(entry);
	hash->count--;
}

static void
remove_hash_entry(struct ftrace_hash *hash,
		  struct ftrace_func_entry *entry)
{
	hlist_del(&entry->hlist);
	hash->count--;
}
static void ftrace_hash_clear(struct ftrace_hash *hash)
{
	struct hlist_head *hhd;
	struct hlist_node *tp, *tn;
	struct ftrace_func_entry *entry;
	int size = 1 << hash->size_bits;
	int i;

	if (!hash->count)
		return;

	for (i = 0; i < size; i++) {
		hhd = &hash->buckets[i];
		hlist_for_each_entry_safe(entry, tp, tn, hhd, hlist)
			free_hash_entry(hash, entry);
	}
	FTRACE_WARN_ON(hash->count);
}

static void free_ftrace_hash(struct ftrace_hash *hash)
{
	if (!hash || hash == EMPTY_HASH)
		return;
	ftrace_hash_clear(hash);
	kfree(hash->buckets);
	kfree(hash);
}

static void __free_ftrace_hash_rcu(struct rcu_head *rcu)
{
	struct ftrace_hash *hash;

	hash = container_of(rcu, struct ftrace_hash, rcu);
	free_ftrace_hash(hash);
}

static void free_ftrace_hash_rcu(struct ftrace_hash *hash)
{
	if (!hash || hash == EMPTY_HASH)
		return;
	call_rcu_sched(&hash->rcu, __free_ftrace_hash_rcu);
}
void ftrace_free_filter(struct ftrace_ops *ops)
{
	free_ftrace_hash(ops->filter_hash);
	free_ftrace_hash(ops->notrace_hash);
}
static struct ftrace_hash *alloc_ftrace_hash(int size_bits)
{
	struct ftrace_hash *hash;
	int size;

	hash = kzalloc(sizeof(*hash), GFP_KERNEL);
	if (!hash)
		return NULL;

	size = 1 << size_bits;
	hash->buckets = kcalloc(size, sizeof(*hash->buckets), GFP_KERNEL);

	if (!hash->buckets) {
		kfree(hash);
		return NULL;
	}

	hash->size_bits = size_bits;

	return hash;
}

static struct ftrace_hash *
alloc_and_copy_ftrace_hash(int size_bits, struct ftrace_hash *hash)
{
	struct ftrace_func_entry *entry;
	struct ftrace_hash *new_hash;
	struct hlist_node *tp;
	int size;
	int ret;
	int i;

	new_hash = alloc_ftrace_hash(size_bits);
	if (!new_hash)
		return NULL;

	/* Empty hash? */
	if (ftrace_hash_empty(hash))
		return new_hash;

	size = 1 << hash->size_bits;
	for (i = 0; i < size; i++) {
		hlist_for_each_entry(entry, tp, &hash->buckets[i], hlist) {
			ret = add_hash_entry(new_hash, entry->ip);
			if (ret < 0)
				goto free_hash;
		}
	}

	FTRACE_WARN_ON(new_hash->count != hash->count);

	return new_hash;

 free_hash:
	free_ftrace_hash(new_hash);
	return NULL;
}
static void
ftrace_hash_rec_disable(struct ftrace_ops *ops, int filter_hash);
static void
ftrace_hash_rec_enable(struct ftrace_ops *ops, int filter_hash);

static int
ftrace_hash_move(struct ftrace_ops *ops, int enable,
		 struct ftrace_hash **dst, struct ftrace_hash *src)
{
	struct ftrace_func_entry *entry;
	struct hlist_node *tp, *tn;
	struct hlist_head *hhd;
	struct ftrace_hash *old_hash;
	struct ftrace_hash *new_hash;
	unsigned long key;
	int size = src->count;
	int bits = 0;
	int ret;
	int i;

	/*
	 * Remove the current set, update the hash and add
	 * them back.
	 */
	ftrace_hash_rec_disable(ops, enable);

	/*
	 * If the new source is empty, just free dst and assign it
	 * the empty_hash.
	 */
	if (!src->count) {
		free_ftrace_hash_rcu(*dst);
		rcu_assign_pointer(*dst, EMPTY_HASH);
		/* still need to update the function records */
		ret = 0;
		goto out;
	}

	/*
	 * Make the hash size about 1/2 the # found
	 */
	for (size /= 2; size; size >>= 1)
		bits++;

	/* Don't allocate too much */
	if (bits > FTRACE_HASH_MAX_BITS)
		bits = FTRACE_HASH_MAX_BITS;

	ret = -ENOMEM;
	new_hash = alloc_ftrace_hash(bits);
	if (!new_hash)
		goto out;

	size = 1 << src->size_bits;
	for (i = 0; i < size; i++) {
		hhd = &src->buckets[i];
		hlist_for_each_entry_safe(entry, tp, tn, hhd, hlist) {
			if (bits > 0)
				key = hash_long(entry->ip, bits);
			else
				key = 0;
			remove_hash_entry(src, entry);
			__add_hash_entry(new_hash, entry);
		}
	}

	old_hash = *dst;
	rcu_assign_pointer(*dst, new_hash);
	free_ftrace_hash_rcu(old_hash);

	ret = 0;
 out:
	/*
	 * Enable regardless of ret:
	 * On success, we enable the new hash.
	 * On failure, we re-enable the original hash.
	 */
	ftrace_hash_rec_enable(ops, enable);

	return ret;
}
/*
 * Test the hashes for this ops to see if we want to call
 * the ops->func or not.
 *
 * It's a match if the ip is in the ops->filter_hash or
 * the filter_hash does not exist or is empty,
 *  AND
 * the ip is not in the ops->notrace_hash.
 *
 * This needs to be called with preemption disabled as
 * the hashes are freed with call_rcu_sched().
 */
static int
ftrace_ops_test(struct ftrace_ops *ops, unsigned long ip)
{
	struct ftrace_hash *filter_hash;
	struct ftrace_hash *notrace_hash;
	int ret;

	filter_hash = rcu_dereference_raw(ops->filter_hash);
	notrace_hash = rcu_dereference_raw(ops->notrace_hash);

	if ((ftrace_hash_empty(filter_hash) ||
	     ftrace_lookup_ip(filter_hash, ip)) &&
	    (ftrace_hash_empty(notrace_hash) ||
	     !ftrace_lookup_ip(notrace_hash, ip)))
		ret = 1;
	else
		ret = 0;

	return ret;
}
/*
 * This is a double for. Do not use 'break' to break out of the loop,
 * you must use a goto.
 */
#define do_for_each_ftrace_rec(pg, rec)					\
	for (pg = ftrace_pages_start; pg; pg = pg->next) {		\
		int _____i;						\
		for (_____i = 0; _____i < pg->index; _____i++) {	\
			rec = &pg->records[_____i];

#define while_for_each_ftrace_rec()		\
		}				\
	}
static int ftrace_cmp_recs(const void *a, const void *b)
{
	const struct dyn_ftrace *reca = a;
	const struct dyn_ftrace *recb = b;

	if (reca->ip > recb->ip)
		return 1;
	if (reca->ip < recb->ip)
		return -1;
	return 0;
}
/**
 * ftrace_location - return true if the ip given is a traced location
 * @ip: the instruction pointer to check
 *
 * Returns 1 if @ip given is a pointer to an ftrace location.
 * That is, the instruction that is either a NOP or call to
 * the function tracer. It checks the ftrace internal tables to
 * determine if the address belongs or not.
 */
int ftrace_location(unsigned long ip)
{
	struct ftrace_page *pg;
	struct dyn_ftrace *rec;
	struct dyn_ftrace key;

	key.ip = ip;

	for (pg = ftrace_pages_start; pg; pg = pg->next) {
		rec = bsearch(&key, pg->records, pg->index,
			      sizeof(struct dyn_ftrace),
			      ftrace_cmp_recs);
		if (rec)
			return 1;
	}

	return 0;
}
2011-05-03 21:25:24 +04:00
static void __ftrace_hash_rec_update ( struct ftrace_ops * ops ,
int filter_hash ,
bool inc )
{
struct ftrace_hash * hash ;
struct ftrace_hash * other_hash ;
struct ftrace_page * pg ;
struct dyn_ftrace * rec ;
int count = 0 ;
int all = 0 ;
/* Only update if the ops has been registered */
if ( ! ( ops - > flags & FTRACE_OPS_FL_ENABLED ) )
return ;
/*
* In the filter_hash case :
* If the count is zero , we update all records .
* Otherwise we just update the items in the hash .
*
* In the notrace_hash case :
* We enable the update in the hash .
* As disabling notrace means enabling the tracing ,
* and enabling notrace means disabling , the inc variable
* gets inversed .
*/
if ( filter_hash ) {
hash = ops - > filter_hash ;
other_hash = ops - > notrace_hash ;
2011-12-20 04:07:36 +04:00
if ( ftrace_hash_empty ( hash ) )
2011-05-03 21:25:24 +04:00
all = 1 ;
} else {
inc = ! inc ;
hash = ops - > notrace_hash ;
other_hash = ops - > filter_hash ;
/*
* If the notrace hash has no items ,
* then there ' s nothing to do .
*/
2011-12-20 04:07:36 +04:00
if ( ftrace_hash_empty ( hash ) )
2011-05-03 21:25:24 +04:00
return ;
}
do_for_each_ftrace_rec ( pg , rec ) {
int in_other_hash = 0 ;
int in_hash = 0 ;
int match = 0 ;
if ( all ) {
/*
* Only the filter_hash affects all records .
* Update if the record is not in the notrace hash .
*/
2011-05-04 17:27:52 +04:00
if ( ! other_hash | | ! ftrace_lookup_ip ( other_hash , rec - > ip ) )
2011-05-03 21:25:24 +04:00
match = 1 ;
} else {
2011-12-20 04:07:36 +04:00
in_hash = ! ! ftrace_lookup_ip ( hash , rec - > ip ) ;
in_other_hash = ! ! ftrace_lookup_ip ( other_hash , rec - > ip ) ;
2011-05-03 21:25:24 +04:00
/*
*
*/
if ( filter_hash & & in_hash & & ! in_other_hash )
match = 1 ;
else if ( ! filter_hash & & in_hash & &
2011-12-20 04:07:36 +04:00
( in_other_hash | | ftrace_hash_empty ( other_hash ) ) )
2011-05-03 21:25:24 +04:00
match = 1 ;
}
if ( ! match )
continue ;
if ( inc ) {
rec - > flags + + ;
if ( FTRACE_WARN_ON ( ( rec - > flags & ~ FTRACE_FL_MASK ) = = FTRACE_REF_MAX ) )
return ;
} else {
if ( FTRACE_WARN_ON ( ( rec - > flags & ~ FTRACE_FL_MASK ) = = 0 ) )
return ;
rec - > flags - - ;
}
count + + ;
/* Shortcut, if we handled all records, we are done. */
if ( ! all & & count = = hash - > count )
return ;
} while_for_each_ftrace_rec ( ) ;
}
static void ftrace_hash_rec_disable ( struct ftrace_ops * ops ,
int filter_hash )
{
__ftrace_hash_rec_update ( ops , filter_hash , 0 ) ;
}
static void ftrace_hash_rec_enable ( struct ftrace_ops * ops ,
int filter_hash )
{
__ftrace_hash_rec_update ( ops , filter_hash , 1 ) ;
}
static struct dyn_ftrace *ftrace_alloc_dyn_node(unsigned long ip)
{
	if (ftrace_pages->index == ftrace_pages->size) {
		/* We should have allocated enough */
		if (WARN_ON(!ftrace_pages->next))
			return NULL;
		ftrace_pages = ftrace_pages->next;
	}

	return &ftrace_pages->records[ftrace_pages->index++];
}

static struct dyn_ftrace *
ftrace_record_ip(unsigned long ip)
{
	struct dyn_ftrace *rec;

	if (ftrace_disabled)
		return NULL;

	rec = ftrace_alloc_dyn_node(ip);
	if (!rec)
		return NULL;

	rec->ip = ip;

	return rec;
}
static void print_ip_ins(const char *fmt, unsigned char *p)
{
	int i;

	printk(KERN_CONT "%s", fmt);

	for (i = 0; i < MCOUNT_INSN_SIZE; i++)
		printk(KERN_CONT "%s%02x", i ? ":" : "", p[i]);
}
/**
 * ftrace_bug - report and shutdown function tracer
 * @failed: The failed type (EFAULT, EINVAL, EPERM)
 * @ip: The address that failed
 *
 * The arch code that enables or disables the function tracing
 * can call ftrace_bug() when it has detected a problem in
 * modifying the code. @failed should be one of either:
 * EFAULT - if the problem happens on reading the @ip address
 * EINVAL - if what is read at @ip is not what was expected
 * EPERM - if the problem happens on writing to the @ip address
 */
void ftrace_bug(int failed, unsigned long ip)
{
	switch (failed) {
	case -EFAULT:
		FTRACE_WARN_ON_ONCE(1);
		pr_info("ftrace faulted on modifying ");
		print_ip_sym(ip);
		break;
	case -EINVAL:
		FTRACE_WARN_ON_ONCE(1);
		pr_info("ftrace failed to modify ");
		print_ip_sym(ip);
		print_ip_ins(" actual: ", (unsigned char *)ip);
		printk(KERN_CONT "\n");
		break;
	case -EPERM:
		FTRACE_WARN_ON_ONCE(1);
		pr_info("ftrace faulted on writing ");
		print_ip_sym(ip);
		break;
	default:
		FTRACE_WARN_ON_ONCE(1);
		pr_info("ftrace faulted on unknown error ");
		print_ip_sym(ip);
	}
}
/* Return 1 if the address range is reserved for ftrace */
int ftrace_text_reserved(void *start, void *end)
{
	struct dyn_ftrace *rec;
	struct ftrace_page *pg;

	do_for_each_ftrace_rec(pg, rec) {
		if (rec->ip <= (unsigned long)end &&
		    rec->ip + MCOUNT_INSN_SIZE > (unsigned long)start)
			return 1;
	} while_for_each_ftrace_rec();
	return 0;
}
static int ftrace_check_record(struct dyn_ftrace *rec, int enable, int update)
{
	unsigned long flag = 0UL;

	/*
	 * If we are updating calls:
	 *
	 *   If the record has a ref count, then we need to enable it
	 *   because someone is using it.
	 *
	 *   Otherwise we make sure it's disabled.
	 *
	 * If we are disabling calls, then disable all records that
	 * are enabled.
	 */
	if (enable && (rec->flags & ~FTRACE_FL_MASK))
		flag = FTRACE_FL_ENABLED;

	/* If the state of this record hasn't changed, then do nothing */
	if ((rec->flags & FTRACE_FL_ENABLED) == flag)
		return FTRACE_UPDATE_IGNORE;

	if (flag) {
		if (update)
			rec->flags |= FTRACE_FL_ENABLED;
		return FTRACE_UPDATE_MAKE_CALL;
	}

	if (update)
		rec->flags &= ~FTRACE_FL_ENABLED;

	return FTRACE_UPDATE_MAKE_NOP;
}

/**
 * ftrace_update_record, set a record that now is tracing or not
 * @rec: the record to update
 * @enable: set to 1 if the record is tracing, zero to force disable
 *
 * The records that represent all functions that can be traced need
 * to be updated when tracing has been enabled.
 */
int ftrace_update_record(struct dyn_ftrace *rec, int enable)
{
	return ftrace_check_record(rec, enable, 1);
}

/**
 * ftrace_test_record, check if the record has been enabled or not
 * @rec: the record to test
 * @enable: set to 1 to check if enabled, 0 if it is disabled
 *
 * The arch code may need to test if a record is already set to
 * tracing to determine how to modify the function code that it
 * represents.
 */
int ftrace_test_record(struct dyn_ftrace *rec, int enable)
{
	return ftrace_check_record(rec, enable, 0);
}
static int
__ftrace_replace_code(struct dyn_ftrace *rec, int enable)
{
	unsigned long ftrace_addr;
	int ret;

	ftrace_addr = (unsigned long)FTRACE_ADDR;

	ret = ftrace_update_record(rec, enable);

	switch (ret) {
	case FTRACE_UPDATE_IGNORE:
		return 0;

	case FTRACE_UPDATE_MAKE_CALL:
		return ftrace_make_call(rec, ftrace_addr);

	case FTRACE_UPDATE_MAKE_NOP:
		return ftrace_make_nop(NULL, rec, ftrace_addr);
	}

	return -1; /* unknown ftrace bug */
}
static void ftrace_replace_code(int update)
{
	struct dyn_ftrace *rec;
	struct ftrace_page *pg;
	int failed;

	if (unlikely(ftrace_disabled))
		return;

	do_for_each_ftrace_rec(pg, rec) {
		failed = __ftrace_replace_code(rec, update);
		if (failed) {
			ftrace_bug(failed, rec->ip);
			/* Stop processing */
			return;
		}
	} while_for_each_ftrace_rec();
}
struct ftrace_rec_iter {
	struct ftrace_page	*pg;
	int			index;
};

/**
 * ftrace_rec_iter_start, start up iterating over traced functions
 *
 * Returns an iterator handle that is used to iterate over all
 * the records that represent address locations where functions
 * are traced.
 *
 * May return NULL if no records are available.
 */
struct ftrace_rec_iter *ftrace_rec_iter_start(void)
{
	/*
	 * We only use a single iterator.
	 * Protected by the ftrace_lock mutex.
	 */
	static struct ftrace_rec_iter ftrace_rec_iter;
	struct ftrace_rec_iter *iter = &ftrace_rec_iter;

	iter->pg = ftrace_pages_start;
	iter->index = 0;

	/* Could have empty pages */
	while (iter->pg && !iter->pg->index)
		iter->pg = iter->pg->next;

	if (!iter->pg)
		return NULL;

	return iter;
}

/**
 * ftrace_rec_iter_next, get the next record to process.
 * @iter: The handle to the iterator.
 *
 * Returns the next iterator after the given iterator @iter.
 */
struct ftrace_rec_iter *ftrace_rec_iter_next(struct ftrace_rec_iter *iter)
{
	iter->index++;

	if (iter->index >= iter->pg->index) {
		iter->pg = iter->pg->next;
		iter->index = 0;

		/* Could have empty pages */
		while (iter->pg && !iter->pg->index)
			iter->pg = iter->pg->next;
	}

	if (!iter->pg)
		return NULL;

	return iter;
}

/**
 * ftrace_rec_iter_record, get the record at the iterator location
 * @iter: The current iterator location
 *
 * Returns the record that the current @iter is at.
 */
struct dyn_ftrace *ftrace_rec_iter_record(struct ftrace_rec_iter *iter)
{
	return &iter->pg->records[iter->index];
}
static int
ftrace_code_disable(struct module *mod, struct dyn_ftrace *rec)
{
	unsigned long ip;
	int ret;

	ip = rec->ip;

	if (unlikely(ftrace_disabled))
		return 0;

	ret = ftrace_make_nop(mod, rec, MCOUNT_ADDR);
	if (ret) {
		ftrace_bug(ret, ip);
		return 0;
	}
	return 1;
}

/*
 * archs can override this function if they must do something
 * before the modifying code is performed.
 */
int __weak ftrace_arch_code_modify_prepare(void)
{
	return 0;
}

/*
 * archs can override this function if they must do something
 * after the modifying code is performed.
 */
int __weak ftrace_arch_code_modify_post_process(void)
{
	return 0;
}
static int __ftrace_modify_code(void *data)
{
	int *command = data;

	if (*command & FTRACE_UPDATE_CALLS)
		ftrace_replace_code(1);
	else if (*command & FTRACE_DISABLE_CALLS)
		ftrace_replace_code(0);

	if (*command & FTRACE_UPDATE_TRACE_FUNC)
		ftrace_update_ftrace_func(ftrace_trace_function);

	if (*command & FTRACE_START_FUNC_RET)
		ftrace_enable_ftrace_graph_caller();
	else if (*command & FTRACE_STOP_FUNC_RET)
		ftrace_disable_ftrace_graph_caller();

	return 0;
}
/**
 * ftrace_run_stop_machine, go back to the stop machine method
 * @command: The command to tell ftrace what to do
 *
 * If an arch needs to fall back to the stop machine method, it
 * can call this function.
 */
void ftrace_run_stop_machine(int command)
{
	stop_machine(__ftrace_modify_code, &command, NULL);
}

/**
 * arch_ftrace_update_code, modify the code to trace or not trace
 * @command: The command that needs to be done
 *
 * Archs can override this function if they do not need to
 * run stop_machine() to modify code.
 */
void __weak arch_ftrace_update_code(int command)
{
	ftrace_run_stop_machine(command);
}
2008-05-12 23:20:51 +04:00
static void ftrace_run_update_code ( int command )
2008-05-12 23:20:42 +04:00
{
2009-02-17 21:35:06 +03:00
	int ret;

	ret = ftrace_arch_code_modify_prepare();
	FTRACE_WARN_ON(ret);
	if (ret)
		return;
2011-08-16 17:53:39 +04:00
	/*
	 * Do not call function tracer while we update the code.
	 * We are in stop machine.
	 */
	function_trace_stop++;
2009-02-17 21:35:06 +03:00
2011-08-16 17:53:39 +04:00
	/*
	 * By default we use stop_machine() to modify the code.
	 * But archs can do whatever they want as long as it
	 * is safe. The stop_machine() is the safest, but also
	 * produces the most overhead.
	 */
	arch_ftrace_update_code(command);
#ifndef CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST
	/*
	 * For archs that call ftrace_test_stop_func(), we must
	 * wait till after we update all the function callers
	 * before we update the callback. This keeps different
	 * ops that record different functions from corrupting
	 * each other.
	 */
	__ftrace_trace_function = __ftrace_trace_function_delay;
#endif
	function_trace_stop--;
2009-02-17 21:35:06 +03:00
	ret = ftrace_arch_code_modify_post_process();
	FTRACE_WARN_ON(ret);
2008-05-12 23:20:42 +04:00
}
2008-05-12 23:20:43 +04:00
static ftrace_func_t saved_ftrace_func;
2008-11-06 00:05:44 +03:00
static int ftrace_start_up;
2011-05-04 17:27:52 +04:00
static int global_start_up;
2008-11-26 08:16:23 +03:00
static void ftrace_startup_enable(int command)
{
	if (saved_ftrace_func != ftrace_trace_function) {
		saved_ftrace_func = ftrace_trace_function;
		command |= FTRACE_UPDATE_TRACE_FUNC;
	}

	if (!command || !ftrace_enabled)
		return;

	ftrace_run_update_code(command);
}
2008-05-12 23:20:43 +04:00
2011-05-23 23:24:25 +04:00
static int ftrace_startup(struct ftrace_ops *ops, int command)
2008-05-12 23:20:42 +04:00
{
2011-05-04 17:27:52 +04:00
	bool hash_enable = true;
2008-05-12 23:20:48 +04:00
	if (unlikely(ftrace_disabled))
2011-05-23 23:24:25 +04:00
		return -ENODEV;
2008-05-12 23:20:48 +04:00
2008-11-06 00:05:44 +03:00
	ftrace_start_up++;
2011-12-05 21:22:48 +04:00
	command |= FTRACE_UPDATE_CALLS;
2008-05-12 23:20:43 +04:00
2011-05-04 17:27:52 +04:00
	/* ops marked global share the filter hashes */
	if (ops->flags & FTRACE_OPS_FL_GLOBAL) {
		ops = &global_ops;
		/* Don't update hash if global is already set */
		if (global_start_up)
			hash_enable = false;
		global_start_up++;
	}
2011-05-03 21:25:24 +04:00
	ops->flags |= FTRACE_OPS_FL_ENABLED;
2011-05-04 17:27:52 +04:00
	if (hash_enable)
2011-05-03 21:25:24 +04:00
		ftrace_hash_rec_enable(ops, 1);
2008-11-26 08:16:23 +03:00
	ftrace_startup_enable(command);
2011-05-23 23:24:25 +04:00
	return 0;
2008-05-12 23:20:42 +04:00
}
2011-05-04 05:55:54 +04:00
static void ftrace_shutdown(struct ftrace_ops *ops, int command)
2008-05-12 23:20:42 +04:00
{
2011-05-04 17:27:52 +04:00
	bool hash_disable = true;
2008-05-12 23:20:48 +04:00
	if (unlikely(ftrace_disabled))
		return;
2008-11-06 00:05:44 +03:00
	ftrace_start_up--;
2009-06-20 08:52:21 +04:00
	/*
	 * Just warn in case of unbalance; no need to kill ftrace. It's not
	 * critical, but the ftrace_call callers may never be nopped again
	 * after further ftrace uses.
	 */
	WARN_ON_ONCE(ftrace_start_up < 0);
2011-05-04 17:27:52 +04:00
	if (ops->flags & FTRACE_OPS_FL_GLOBAL) {
		ops = &global_ops;
		global_start_up--;
		WARN_ON_ONCE(global_start_up < 0);
		/* Don't update hash if global still has users */
		if (global_start_up) {
			WARN_ON_ONCE(!ftrace_start_up);
			hash_disable = false;
		}
	}

	if (hash_disable)
2011-05-03 21:25:24 +04:00
		ftrace_hash_rec_disable(ops, 1);
2011-05-04 17:27:52 +04:00
	if (ops != &global_ops || !global_start_up)
2011-05-03 21:25:24 +04:00
		ops->flags &= ~FTRACE_OPS_FL_ENABLED;
2011-05-04 17:27:52 +04:00
2011-12-05 21:22:48 +04:00
	command |= FTRACE_UPDATE_CALLS;
2008-05-12 23:20:42 +04:00
2008-05-12 23:20:43 +04:00
	if (saved_ftrace_func != ftrace_trace_function) {
		saved_ftrace_func = ftrace_trace_function;
		command |= FTRACE_UPDATE_TRACE_FUNC;
	}
2008-05-12 23:20:42 +04:00
2008-05-12 23:20:43 +04:00
	if (!command || !ftrace_enabled)
2009-02-14 09:42:44 +03:00
		return;
2008-05-12 23:20:43 +04:00
	ftrace_run_update_code(command);
2008-05-12 23:20:42 +04:00
}
2008-05-12 23:20:51 +04:00
static void ftrace_startup_sysctl(void)
2008-05-12 23:20:43 +04:00
{
2008-05-12 23:20:48 +04:00
	if (unlikely(ftrace_disabled))
		return;
2008-05-12 23:20:43 +04:00
	/* Force update next time */
	saved_ftrace_func = NULL;
2008-11-06 00:05:44 +03:00
	/* ftrace_start_up is true if we want ftrace running */
	if (ftrace_start_up)
2011-12-05 21:22:48 +04:00
		ftrace_run_update_code(FTRACE_UPDATE_CALLS);
2008-05-12 23:20:43 +04:00
}
2008-05-12 23:20:51 +04:00
static void ftrace_shutdown_sysctl(void)
2008-05-12 23:20:43 +04:00
{
2008-05-12 23:20:48 +04:00
	if (unlikely(ftrace_disabled))
		return;
2008-11-06 00:05:44 +03:00
	/* ftrace_start_up is true if ftrace is running */
	if (ftrace_start_up)
2010-09-15 06:19:46 +04:00
		ftrace_run_update_code(FTRACE_DISABLE_CALLS);
2008-05-12 23:20:43 +04:00
}
2008-05-12 23:20:42 +04:00
static cycle_t ftrace_update_time;
static unsigned long ftrace_update_cnt;
unsigned long ftrace_update_tot_cnt;
ftrace: Fix regression where ftrace breaks when modules are loaded
Enabling function tracer to trace all functions, then load a module and
then disable function tracing will cause ftrace to fail.
This can also happen by enabling function tracing on the command line:
ftrace=function
and during boot up, modules are loaded, then you disable function tracing
with 'echo nop > current_tracer' you will trigger a bug in ftrace that
will shut itself down.
The reason is, the new ftrace code keeps ref counts of all ftrace_ops that
are registered for tracing. When one or more ftrace_ops are registered,
all the records that represent the functions that the ftrace_ops will
trace have a ref count incremented. If this ref count is not zero,
when the code modification runs, that function will be enabled for tracing.
If the ref count is zero, that function will be disabled from tracing.
To make sure the accounting was working, FTRACE_WARN_ON()s were added
to updating of the ref counts.
If the ref count hits its max (> 2^30 ftrace_ops added), or if
the ref count goes below zero, a FTRACE_WARN_ON() is triggered which
disables all modification of code.
Since it is common for ftrace_ops to trace all functions in the kernel,
instead of creating > 20,000 hash items for the ftrace_ops, the hash
count is just set to zero, and it represents that the ftrace_ops is
to trace all functions. This is where the issues arise.
If you enable function tracing to trace all functions, and then add
a module, the module's function records do not get the ref count updated.
When the function tracer is disabled, all function records ref counts
are subtracted. Since the modules never had their ref counts incremented,
they go below zero and the FTRACE_WARN_ON() is triggered.
The solution to this is rather simple. When modules are loaded, and
their functions are added to the ftrace pool, look to see if any
ftrace_ops are registered that trace all functions. And for those,
update the ref count for the module function records.
Reported-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2011-07-15 07:02:27 +04:00
static int ops_traces_mod(struct ftrace_ops *ops)
{
	struct ftrace_hash *hash;

	hash = ops->filter_hash;
2011-12-20 04:07:36 +04:00
	return ftrace_hash_empty(hash);
2011-07-15 07:02:27 +04:00
}
2008-11-15 03:21:19 +03:00
static int ftrace_update_code(struct module *mod)
2008-05-12 23:20:42 +04:00
{
2011-12-17 01:30:31 +04:00
	struct ftrace_page *pg;
2009-03-13 12:51:27 +03:00
	struct dyn_ftrace *p;
2008-06-21 22:20:29 +04:00
	cycle_t start, stop;
2011-07-15 07:02:27 +04:00
	unsigned long ref = 0;
2011-12-17 01:30:31 +04:00
	int i;
2011-07-15 07:02:27 +04:00
/*
 * When adding a module, we need to check if tracers are
 * currently enabled and if they are set to trace all functions.
 * If they are, we need to enable the module functions as well
 * as update the reference counts for those function records.
 */
if (mod) {
	struct ftrace_ops *ops;

	for (ops = ftrace_ops_list;
	     ops != &ftrace_list_end; ops = ops->next) {
		if (ops->flags & FTRACE_OPS_FL_ENABLED &&
		    ops_traces_mod(ops))
			ref++;
	}
}
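The accounting described above can be modeled in a few lines of userspace C. All names here are illustrative, not the kernel's: registering an ops that traces everything increments every record's ref count, and a module record added while such an ops is active must start with the current ops count, or the later global decrement underflows.

```c
#include <assert.h>

#define NRECS 8

static int ref[NRECS];     /* per-record ref counts (model of rec->flags) */
static int nrecs = 4;      /* records known at boot */
static int active_ops;     /* registered ops that trace all functions */

static void register_ops(void)
{
	active_ops++;
	for (int i = 0; i < nrecs; i++)
		ref[i]++;
}

static void unregister_ops(void)
{
	active_ops--;
	for (int i = 0; i < nrecs; i++)
		ref[i]--;
}

/* module load: new records inherit the current ops count (the fix) */
static void load_module(int count)
{
	for (int i = 0; i < count; i++)
		ref[nrecs++] = active_ops;
}
```

Without the `= active_ops` initialization the module records would end up negative after `unregister_ops()`, which is exactly the FTRACE_WARN_ON() condition the patch removes.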
ftrace: dynamic enabling/disabling of function calls
This patch adds a feature to dynamically replace the ftrace code
with the jmps to allow a kernel with ftrace configured to run
as fast as it can without it configured.
The way this works, is on bootup (if ftrace is enabled), a ftrace
function is registered to record the instruction pointer of all
places that call the function.
Later, if there's still any code to patch, a kthread is awoken
(rate limited to at most once a second) that performs a stop_machine,
and replaces all the code that was called with a jmp over the call
to ftrace. It only replaces what was found the previous time. Typically
the system reaches equilibrium quickly after bootup and there's no code
patching needed at all.
e.g.
call ftrace /* 5 bytes */
is replaced with
jmp 3f /* jmp is 2 bytes and we jump 3 forward */
3:
When we want to enable ftrace for function tracing, the IP recording
is removed, and stop_machine is called again to replace all the locations
that were recorded back to the call of ftrace. When it is disabled,
we replace the code back to the jmp.
Allocation is done by the kthread. If the ftrace recording function is
called, and we don't have any record slots available, then we simply
skip that call. Once a second a new page (if needed) is allocated for
recording new ftrace function calls. A large batch is allocated at
boot up to get most of the calls there.
Because we do this via stop_machine, we don't have to worry about another
CPU executing a ftrace call as we modify it. But we do need to worry
about NMI's so all functions that might be called via nmi must be
annotated with notrace_nmi. When this code is configured in, the NMI code
will not call notrace.
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-05-12 23:20:42 +04:00
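The record life cycle the message describes (compiler-emitted mcount call → NOP at conversion time, then NOP ↔ ftrace call as tracing toggles) can be sketched as a tiny state machine. The enum and function names below are made up for illustration; only the ordering constraint mirrors the kernel's `ftrace_code_disable()`/`__ftrace_replace_code()` pair:

```c
#include <assert.h>

enum rec_state { CALL_MCOUNT, NOP, CALL_FTRACE };

/* initial conversion: patch the compiler-emitted mcount call to a NOP */
static int code_disable(enum rec_state *rec)
{
	if (*rec != CALL_MCOUNT)
		return 0;       /* unexpected bytes: refuse to patch */
	*rec = NOP;
	return 1;
}

/* enable or disable tracing for one record */
static int replace_code(enum rec_state *rec, int enable)
{
	enum rec_state want   = enable ? CALL_FTRACE : NOP;
	enum rec_state expect = enable ? NOP : CALL_FTRACE;

	if (*rec != expect)
		return -1;      /* previous instruction is wrong: report a bug */
	*rec = want;
	return 0;
}
```

The `expect` check is why the NOP conversion must happen first: enabling a record straight from the mcount state is rejected, matching the check ftrace_make_nop/ftrace_make_call perform on the previous instruction bytes.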
2008-05-12 23:20:46 +04:00
start = ftrace_now(raw_smp_processor_id());
ftrace_update_cnt = 0;
2011-12-17 01:30:31 +04:00
for (pg = ftrace_new_pgs; pg; pg = pg->next) {
2011-12-17 01:30:31 +04:00
	for (i = 0; i < pg->index; i++) {

		/* If something went wrong, bail without enabling anything */
		if (unlikely(ftrace_disabled))
			return -1;
2008-06-21 22:20:29 +04:00
2011-12-17 01:30:31 +04:00
		p = &pg->records[i];
		p->flags = ref;
2008-06-21 22:20:29 +04:00
2011-12-17 01:30:31 +04:00
		/*
		 * Do the initial record conversion from mcount jump
		 * to the NOP instructions.
		 */
		if (!ftrace_code_disable(mod, p))
			break;
2009-10-14 00:33:53 +04:00
2011-12-17 01:30:31 +04:00
		ftrace_update_cnt++;
2009-10-14 00:33:53 +04:00
2011-12-17 01:30:31 +04:00
		/*
		 * If the tracing is enabled, go ahead and enable the record.
		 *
		 * The reason not to enable the record immediately is the
		 * inherent check of ftrace_make_nop/ftrace_make_call for
		 * correct previous instructions. Making the NOP
		 * conversion first puts the module into the correct state,
		 * thus passing the ftrace_make_call check.
		 */
		if (ftrace_start_up && ref) {
			int failed = __ftrace_replace_code(p, 1);
			if (failed)
				ftrace_bug(failed, p->ip);
		}
2009-10-14 00:33:53 +04:00
}
}
2011-12-17 01:30:31 +04:00
ftrace_new_pgs = NULL;
2008-05-12 23:20:46 +04:00
stop = ftrace_now(raw_smp_processor_id());
ftrace_update_time = stop - start;
ftrace_update_tot_cnt += ftrace_update_cnt;
2008-05-12 23:20:42 +04:00
return 0;
}
2011-12-17 01:23:44 +04:00
static int ftrace_allocate_records(struct ftrace_page *pg, int count)
2008-05-12 23:20:43 +04:00
{
2011-12-17 01:23:44 +04:00
	int order;
2008-05-12 23:20:43 +04:00
	int cnt;
2011-12-17 01:23:44 +04:00
	if (WARN_ON(!count))
		return -EINVAL;

	order = get_count_order(DIV_ROUND_UP(count, ENTRIES_PER_PAGE));
2008-05-12 23:20:43 +04:00
	/*
	 * We want to fill as much as possible. No more than a page
	 * may be empty.
	 */
	while ((PAGE_SIZE << order) / ENTRY_SIZE >= count + ENTRIES_PER_PAGE)
		order--;
2008-05-12 23:20:43 +04:00
2011-12-17 01:23:44 +04:00
 again:
	pg->records = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, order);
2008-05-12 23:20:43 +04:00
2011-12-17 01:23:44 +04:00
	if (!pg->records) {
		/* if we can't allocate this size, try something smaller */
		if (!order)
			return -ENOMEM;
		order >>= 1;
		goto again;
	}
2008-05-12 23:20:43 +04:00
2011-12-17 01:23:44 +04:00
	cnt = (PAGE_SIZE << order) / ENTRY_SIZE;
	pg->size = cnt;
2008-05-12 23:20:43 +04:00
2011-12-17 01:23:44 +04:00
	if (cnt > count)
		cnt = count;

	return cnt;
}
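The sizing logic above can be checked in userspace. This sketch reimplements `get_count_order()` (ceil of log2) and the trim loop with illustrative numbers: a 4096-byte page and a hypothetical 64-byte record, so there are 64 records per page. These sizes are assumptions for the demo, not the kernel's actual `ENTRY_SIZE`.

```c
#include <assert.h>

#define PAGE_SZ   4096
#define ENTRY_SZ  64
#define PER_PAGE  (PAGE_SZ / ENTRY_SZ)      /* 64 records per page */

/* smallest order with (1 << order) >= count, as get_count_order() computes */
static int count_order(unsigned int count)
{
	int order = 0;

	while ((1u << order) < count)
		order++;
	return order;
}

static int records_order(unsigned int count)
{
	int order = count_order((count + PER_PAGE - 1) / PER_PAGE);

	/* trim so that no more than one page may be left empty */
	while ((PAGE_SZ << order) / ENTRY_SZ >= count + PER_PAGE)
		order--;
	return order;
}
```

Note that the trim can drop capacity below `count` (e.g. 130 records fit an order-2 block, but order 1 holds only 128); the caller handles this by capping `cnt` at `count` and allocating further pages for the remainder.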
static struct ftrace_page *
ftrace_allocate_pages(unsigned long num_to_init)
{
	struct ftrace_page *start_pg;
	struct ftrace_page *pg;
	int order;
	int cnt;

	if (!num_to_init)
		return 0;

	start_pg = pg = kzalloc(sizeof(*pg), GFP_KERNEL);
	if (!pg)
		return NULL;

	/*
	 * Try to allocate as much as possible in one contiguous
	 * location that fills in all of the space. We want to
	 * waste as little space as possible.
	 */
	for (;;) {
		cnt = ftrace_allocate_records(pg, num_to_init);
		if (cnt < 0)
			goto free_pages;

		num_to_init -= cnt;
		if (!num_to_init)
			break;

		pg->next = kzalloc(sizeof(*pg), GFP_KERNEL);
		if (!pg->next)
			goto free_pages;

		pg = pg->next;
	}
2011-12-17 01:23:44 +04:00
	return start_pg;

 free_pages:
	while (start_pg) {
		order = get_count_order(pg->size / ENTRIES_PER_PAGE);
		free_pages((unsigned long)pg->records, order);
		start_pg = pg->next;
		kfree(pg);
		pg = start_pg;
	}
	pr_info("ftrace: FAILED to allocate memory for functions\n");
	return NULL;
}
static int __init ftrace_dyn_table_alloc(unsigned long num_to_init)
{
	int cnt;

	if (!num_to_init) {
		pr_info("ftrace: No functions to be traced?\n");
		return -1;
	}

	cnt = num_to_init / ENTRIES_PER_PAGE;
	pr_info("ftrace: allocating %ld entries in %d pages\n",
		num_to_init, cnt + 1);

	return 0;
}
2008-05-12 23:20:43 +04:00
#define FTRACE_BUFF_MAX (KSYM_SYMBOL_LEN+4) /* room for wildcards */

struct ftrace_iterator {
	loff_t pos;
	loff_t func_pos;
	struct ftrace_page *pg;
	struct dyn_ftrace *func;
	struct ftrace_func_probe *probe;
	struct trace_parser parser;
	struct ftrace_hash *hash;
	struct ftrace_ops *ops;
	int hidx;
	int idx;
	unsigned flags;
};
2009-02-16 23:28:00 +03:00
static void *
t_hash_next(struct seq_file *m, loff_t *pos)
{
	struct ftrace_iterator *iter = m->private;
	struct hlist_node *hnd = NULL;
	struct hlist_head *hhd;

	(*pos)++;
	iter->pos = *pos;

	if (iter->probe)
		hnd = &iter->probe->node;
 retry:
	if (iter->hidx >= FTRACE_FUNC_HASHSIZE)
		return NULL;

	hhd = &ftrace_func_hash[iter->hidx];

	if (hlist_empty(hhd)) {
		iter->hidx++;
		hnd = NULL;
		goto retry;
	}

	if (!hnd)
		hnd = hhd->first;
	else {
		hnd = hnd->next;
		if (!hnd) {
			iter->hidx++;
			goto retry;
		}
	}

	if (WARN_ON_ONCE(!hnd))
		return NULL;

	iter->probe = hlist_entry(hnd, struct ftrace_func_probe, node);

	return iter;
}
static void *t_hash_start(struct seq_file *m, loff_t *pos)
{
	struct ftrace_iterator *iter = m->private;
	void *p = NULL;
	loff_t l;

	if (!(iter->flags & FTRACE_ITER_DO_HASH))
		return NULL;

	if (iter->func_pos > *pos)
		return NULL;

	iter->hidx = 0;
	for (l = 0; l <= (*pos - iter->func_pos); ) {
		p = t_hash_next(m, &l);
		if (!p)
			break;
	}
	if (!p)
		return NULL;

	/* Only set this if we have an item */
	iter->flags |= FTRACE_ITER_HASH;

	return iter;
}
2010-09-09 18:00:28 +04:00
static int
t_hash_show(struct seq_file *m, struct ftrace_iterator *iter)
{
	struct ftrace_func_probe *rec;

	rec = iter->probe;
	if (WARN_ON_ONCE(!rec))
		return -EIO;

	if (rec->ops->print)
		return rec->ops->print(m, rec->ip, rec->ops, rec->data);

	seq_printf(m, "%ps:%ps", (void *)rec->ip, (void *)rec->ops->func);

	if (rec->data)
		seq_printf(m, ":%p", rec->data);
	seq_putc(m, '\n');

	return 0;
}
2008-05-12 23:20:51 +04:00
static void *
t_next(struct seq_file *m, void *v, loff_t *pos)
{
	struct ftrace_iterator *iter = m->private;
	struct ftrace_ops *ops = iter->ops;
	struct dyn_ftrace *rec = NULL;

	if (unlikely(ftrace_disabled))
		return NULL;

	if (iter->flags & FTRACE_ITER_HASH)
		return t_hash_next(m, pos);

	(*pos)++;
	iter->pos = iter->func_pos = *pos;

	if (iter->flags & FTRACE_ITER_PRINTALL)
		return t_hash_start(m, pos);

 retry:
	if (iter->idx >= iter->pg->index) {
		if (iter->pg->next) {
			iter->pg = iter->pg->next;
			iter->idx = 0;
			goto retry;
		}
	} else {
		rec = &iter->pg->records[iter->idx++];

		if (((iter->flags & FTRACE_ITER_FILTER) &&
		     !(ftrace_lookup_ip(ops->filter_hash, rec->ip))) ||

		    ((iter->flags & FTRACE_ITER_NOTRACE) &&
		     !ftrace_lookup_ip(ops->notrace_hash, rec->ip)) ||

		    ((iter->flags & FTRACE_ITER_ENABLED) &&
		     !(rec->flags & ~FTRACE_FL_MASK))) {

			rec = NULL;
			goto retry;
		}
	}

	if (!rec)
		return t_hash_start(m, pos);

	iter->func = rec;

	return iter;
}
2010-09-10 19:47:43 +04:00
static void reset_iter_read(struct ftrace_iterator *iter)
{
	iter->pos = 0;
	iter->func_pos = 0;
	iter->flags &= ~(FTRACE_ITER_PRINTALL | FTRACE_ITER_HASH);
}
static void *t_start(struct seq_file *m, loff_t *pos)
{
	struct ftrace_iterator *iter = m->private;
	struct ftrace_ops *ops = iter->ops;
	void *p = NULL;
	loff_t l;

	mutex_lock(&ftrace_lock);

	if (unlikely(ftrace_disabled))
		return NULL;
2010-09-10 19:47:43 +04:00
	/*
	 * If an lseek was done, then reset and start from beginning.
	 */
	if (*pos < iter->pos)
		reset_iter_read(iter);

	/*
	 * For set_ftrace_filter reading, if we have the filter
	 * off, we can short cut and just print out that all
	 * functions are enabled.
	 */
	if (iter->flags & FTRACE_ITER_FILTER &&
	    ftrace_hash_empty(ops->filter_hash)) {
		if (*pos > 0)
			return t_hash_start(m, pos);
		iter->flags |= FTRACE_ITER_PRINTALL;
		/* reset in case of seek/pread */
		iter->flags &= ~FTRACE_ITER_HASH;
		return iter;
	}
2009-02-16 23:28:00 +03:00
	if (iter->flags & FTRACE_ITER_HASH)
		return t_hash_start(m, pos);

	/*
	 * Unfortunately, we need to restart at ftrace_pages_start
	 * every time we let go of the ftrace_lock. This is because
	 * those pointers can change without the lock.
	 */
	iter->pg = ftrace_pages_start;
	iter->idx = 0;
	for (l = 0; l <= *pos; ) {
		p = t_next(m, p, &l);
		if (!p)
			break;
	}
function tracing: fix wrong pos computing when read buffer has been fulfilled
Impact: make output of available_filter_functions complete
phenomenon:
The first value of dyn_ftrace_total_info is not equal to
`cat available_filter_functions | wc -l`, but they should be equal.
root cause:
When printing functions with seq_printf in t_show, if the read buffer
is overflowed by the current function record, then this function
won't be printed to user space through the read buffer; it will
just be dropped. So we can't see this function being printed.
Thus, whenever the last function written overflows the read buffer,
it is dropped.
This also applies to set_ftrace_filter if set_ftrace_filter has
more bytes than the read buffer.
fix:
By checking the return value of seq_printf, if it is less than 0, we know
this function wasn't printed. Then we decrease the position to force
this function to be printed next time, into the next read buffer.
Another little fix is to show the correct count of allocated pages.
Signed-off-by: walimis <walimisdev@gmail.com>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-11-15 10:19:06 +03:00
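The fix described above can be modeled with a bounded output buffer: the position only advances when an item fits, which is equivalent to decrementing it after a failed print, so the dropped item is emitted again into the next buffer. Everything here is an illustrative model, not the seq_file API:

```c
#include <assert.h>
#include <string.h>

#define BUF_CAP 8

static char buf[BUF_CAP + 1];
static int used;

/* returns -1 when the item does not fit (like an overflowing seq_printf) */
static int emit(const char *s)
{
	int len = strlen(s);

	if (used + len > BUF_CAP)
		return -1;
	memcpy(buf + used, s, len);
	used += len;
	buf[used] = '\0';
	return 0;
}

/* fill one read buffer; *pos is left pointing at any dropped item */
static void fill(const char **items, int nitems, int *pos)
{
	used = 0;
	buf[0] = '\0';
	while (*pos < nitems) {
		if (emit(items[*pos]) < 0)
			break;
		(*pos)++;
	}
}
```

Repeated calls walk the whole list with nothing dropped, which is the property `available_filter_functions` was losing.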
2011-12-20 00:21:16 +04:00
	if (!p)
		return t_hash_start(m, pos);

	return iter;
}
static void t_stop(struct seq_file *m, void *p)
{
	mutex_unlock(&ftrace_lock);
}
static int t_show(struct seq_file *m, void *v)
{
	struct ftrace_iterator *iter = m->private;
	struct dyn_ftrace *rec;

	if (iter->flags & FTRACE_ITER_HASH)
		return t_hash_show(m, iter);

	if (iter->flags & FTRACE_ITER_PRINTALL) {
		seq_printf(m, "#### all functions enabled ####\n");
		return 0;
	}

	rec = iter->func;

	if (!rec)
		return 0;

	seq_printf(m, "%ps", (void *)rec->ip);
	if (iter->flags & FTRACE_ITER_ENABLED)
		seq_printf(m, " (%ld)",
			   rec->flags & ~FTRACE_FL_MASK);
	seq_printf(m, "\n");

	return 0;
}
2009-09-23 03:43:43 +04:00
static const struct seq_operations show_ftrace_seq_ops = {
	.start = t_start,
	.next = t_next,
	.stop = t_stop,
	.show = t_show,
};
2008-05-12 23:20:51 +04:00
static int
ftrace_avail_open(struct inode *inode, struct file *file)
{
	struct ftrace_iterator *iter;

	if (unlikely(ftrace_disabled))
		return -ENODEV;

	iter = __seq_open_private(file, &show_ftrace_seq_ops, sizeof(*iter));
	if (iter) {
		iter->pg = ftrace_pages_start;
		iter->ops = &global_ops;
	}

	return iter ? 0 : -ENOMEM;
}
2011-05-03 22:39:21 +04:00
static int
ftrace_enabled_open(struct inode *inode, struct file *file)
{
	struct ftrace_iterator *iter;

	if (unlikely(ftrace_disabled))
		return -ENODEV;

	iter = __seq_open_private(file, &show_ftrace_seq_ops, sizeof(*iter));
	if (iter) {
		iter->pg = ftrace_pages_start;
		iter->flags = FTRACE_ITER_ENABLED;
		iter->ops = &global_ops;
	}

	return iter ? 0 : -ENOMEM;
}
2011-04-30 04:59:51 +04:00
static void ftrace_filter_reset(struct ftrace_hash *hash)
{
	mutex_lock(&ftrace_lock);
	ftrace_hash_clear(hash);
	mutex_unlock(&ftrace_lock);
}
2011-12-19 23:41:25 +04:00
/**
 * ftrace_regex_open - initialize function tracer filter files
 * @ops: The ftrace_ops that hold the hash filters
 * @flag: The type of filter to process
 * @inode: The inode, usually passed in to your open routine
 * @file: The file, usually passed in to your open routine
 *
 * ftrace_regex_open() initializes the filter files for the
 * @ops. Depending on @flag it may process the filter hash or
 * the notrace hash of @ops. With this called from the open
 * routine, you can use ftrace_filter_write() for the write
 * routine if @flag has FTRACE_ITER_FILTER set, or
 * ftrace_notrace_write() if @flag has FTRACE_ITER_NOTRACE set.
 * ftrace_regex_lseek() should be used as the lseek routine, and
 * release must call ftrace_regex_release().
 */
int
ftrace_regex_open(struct ftrace_ops *ops, int flag,
		  struct inode *inode, struct file *file)
{
	struct ftrace_iterator *iter;
	struct ftrace_hash *hash;
	int ret = 0;

	if (unlikely(ftrace_disabled))
		return -ENODEV;

	iter = kzalloc(sizeof(*iter), GFP_KERNEL);
	if (!iter)
		return -ENOMEM;
2009-09-11 19:29:29 +04:00
	if (trace_parser_get_init(&iter->parser, FTRACE_BUFF_MAX)) {
		kfree(iter);
		return -ENOMEM;
	}

	if (flag & FTRACE_ITER_NOTRACE)
		hash = ops->notrace_hash;
	else
		hash = ops->filter_hash;

	iter->ops = ops;
	iter->flags = flag;

	if (file->f_mode & FMODE_WRITE) {
		mutex_lock(&ftrace_lock);
		iter->hash = alloc_and_copy_ftrace_hash(FTRACE_HASH_DEFAULT_BITS, hash);
		mutex_unlock(&ftrace_lock);

		if (!iter->hash) {
			trace_parser_put(&iter->parser);
			kfree(iter);
			return -ENOMEM;
		}
	}
2011-04-30 04:59:51 +04:00
2008-05-22 19:46:33 +04:00
	mutex_lock(&ftrace_regex_lock);

	if ((file->f_mode & FMODE_WRITE) &&
	    (file->f_flags & O_TRUNC))
		ftrace_filter_reset(iter->hash);

	if (file->f_mode & FMODE_READ) {
		iter->pg = ftrace_pages_start;

		ret = seq_open(file, &show_ftrace_seq_ops);
		if (!ret) {
			struct seq_file *m = file->private_data;
			m->private = iter;
		} else {
			/* Failed */
			free_ftrace_hash(iter->hash);
			trace_parser_put(&iter->parser);
			kfree(iter);
		}
	} else
		file->private_data = iter;

	mutex_unlock(&ftrace_regex_lock);

	return ret;
}
2008-05-22 19:46:33 +04:00
static int
ftrace_filter_open(struct inode *inode, struct file *file)
{
	return ftrace_regex_open(&global_ops,
			FTRACE_ITER_FILTER | FTRACE_ITER_DO_HASH,
			inode, file);
}
static int
ftrace_notrace_open(struct inode *inode, struct file *file)
{
	return ftrace_regex_open(&global_ops, FTRACE_ITER_NOTRACE,
				 inode, file);
}
2011-12-19 23:41:25 +04:00
loff_t
ftrace_regex_lseek(struct file *file, loff_t offset, int origin)
{
	loff_t ret;

	if (file->f_mode & FMODE_READ)
		ret = seq_lseek(file, offset, origin);
	else
		file->f_pos = ret = 1;

	return ret;
}
2009-02-14 01:08:48 +03:00
static int ftrace_match(char *str, char *regex, int len, int type)
{
	int matched = 0;
	int slen;

	switch (type) {
	case MATCH_FULL:
		if (strcmp(str, regex) == 0)
			matched = 1;
		break;
	case MATCH_FRONT_ONLY:
		if (strncmp(str, regex, len) == 0)
			matched = 1;
		break;
	case MATCH_MIDDLE_ONLY:
		if (strstr(str, regex))
			matched = 1;
		break;
	case MATCH_END_ONLY:
		slen = strlen(str);
		if (slen >= len && memcmp(str + slen - len, regex, len) == 0)
			matched = 1;
		break;
	}

	return matched;
}
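The four glob forms accepted by set_ftrace_filter map onto these match types: `func` is MATCH_FULL, `func*` MATCH_FRONT_ONLY, `*func*` MATCH_MIDDLE_ONLY, and `*func` MATCH_END_ONLY. A standalone userspace copy of the comparison logic shows the semantics (the function names in the test strings are just examples):

```c
#include <assert.h>
#include <string.h>

enum { MATCH_FULL, MATCH_FRONT_ONLY, MATCH_MIDDLE_ONLY, MATCH_END_ONLY };

/* the same comparisons as ftrace_match(), copied into userspace */
static int match(const char *str, const char *regex, int len, int type)
{
	int slen;

	switch (type) {
	case MATCH_FULL:
		return strcmp(str, regex) == 0;
	case MATCH_FRONT_ONLY:
		return strncmp(str, regex, len) == 0;
	case MATCH_MIDDLE_ONLY:
		return strstr(str, regex) != NULL;
	case MATCH_END_ONLY:
		slen = strlen(str);
		return slen >= len && memcmp(str + slen - len, regex, len) == 0;
	}
	return 0;
}
```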
2011-04-29 23:12:32 +04:00
static int
enter_record(struct ftrace_hash *hash, struct dyn_ftrace *rec, int not)
{
	struct ftrace_func_entry *entry;
	int ret = 0;

	entry = ftrace_lookup_ip(hash, rec->ip);
	if (not) {
		/* Do nothing if it doesn't exist */
		if (!entry)
			return 0;

		free_hash_entry(hash, entry);
	} else {
		/* Do nothing if it exists */
		if (entry)
			return 0;

		ret = add_hash_entry(hash, rec->ip);
	}
	return ret;
}
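enter_record() gives the filter its set semantics: a plain pattern inserts matching IPs, a '!' pattern removes them, and both are no-ops when the entry is already in the desired state. A minimal model over a flat array (standing in for the ftrace_hash; all names here are illustrative):

```c
#include <assert.h>

#define MAX 16
static unsigned long set[MAX];
static int nset;

static int lookup(unsigned long ip)
{
	for (int i = 0; i < nset; i++)
		if (set[i] == ip)
			return i;
	return -1;
}

/* mirror of enter_record(): !not adds, not removes, no-ops ignored */
static int enter(unsigned long ip, int not)
{
	int i = lookup(ip);

	if (not) {
		if (i < 0)
			return 0;            /* nothing to remove */
		set[i] = set[--nset];        /* swap-delete */
	} else {
		if (i >= 0)
			return 0;            /* already present */
		set[nset++] = ip;
	}
	return 0;
}
```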
2009-02-14 01:08:48 +03:00
static int
ftrace_match_record(struct dyn_ftrace *rec, char *mod,
		    char *regex, int len, int type)
{
	char str[KSYM_SYMBOL_LEN];
	char *modname;

	kallsyms_lookup(rec->ip, NULL, NULL, &modname, str);

	if (mod) {
		/* module lookup requires matching the module */
		if (!modname || strcmp(modname, mod))
			return 0;

		/* blank search means to match all funcs in the mod */
		if (!len)
			return 1;
	}

	return ftrace_match(str, regex, len, type);
}
2011-04-30 04:59:51 +04:00
static int
match_records(struct ftrace_hash *hash, char *buff,
	      int len, char *mod, int not)
{
	unsigned search_len = 0;
	struct ftrace_page *pg;
	struct dyn_ftrace *rec;
	int type = MATCH_FULL;
	char *search = buff;
	int found = 0;
	int ret;

	if (len) {
		type = filter_parse_regex(buff, len, &search, &not);
		search_len = strlen(search);
	}

	mutex_lock(&ftrace_lock);

	if (unlikely(ftrace_disabled))
		goto out_unlock;

	do_for_each_ftrace_rec(pg, rec) {
		if (ftrace_match_record(rec, mod, search, search_len, type)) {
			ret = enter_record(hash, rec, not);
			if (ret < 0) {
				found = ret;
				goto out_unlock;
			}
			found = 1;
		}
	} while_for_each_ftrace_rec();
 out_unlock:
	mutex_unlock(&ftrace_lock);

	return found;
}

static int
ftrace_match_records(struct ftrace_hash *hash, char *buff, int len)
{
	return match_records(hash, buff, len, NULL, 0);
}

static int
ftrace_match_module_records(struct ftrace_hash *hash, char *buff, char *mod)
{
	int not = 0;

	/* blank or '*' mean the same */
	if (strcmp(buff, "*") == 0)
		buff[0] = 0;

	/* handle the case of 'dont filter this module' */
	if (strcmp(buff, "!") == 0 || strcmp(buff, "!*") == 0) {
		buff[0] = 0;
		not = 1;
	}

	return match_records(hash, buff, strlen(buff), mod, not);
}

/*
 * We register the module command as a template to show others how
 * to register a command as well.
 */
static int
ftrace_mod_callback(struct ftrace_hash *hash,
		    char *func, char *cmd, char *param, int enable)
{
	char *mod;
	int ret = -EINVAL;

	/*
	 * cmd == 'mod' because we only registered this func
	 * for the 'mod' ftrace_func_command.
	 * But if you register one func with multiple commands,
	 * you can tell which command was used by the cmd
	 * parameter.
	 */

	/* we must have a module name */
	if (!param)
		return ret;

	mod = strsep(&param, ":");
	if (!strlen(mod))
		return ret;

	ret = ftrace_match_module_records(hash, func, mod);
	if (!ret)
		ret = -EINVAL;
	if (ret < 0)
		return ret;

	return 0;
}

static struct ftrace_func_command ftrace_mod_cmd = {
	.name			= "mod",
	.func			= ftrace_mod_callback,
};

static int __init ftrace_mod_cmd_init(void)
{
	return register_ftrace_command(&ftrace_mod_cmd);
}
device_initcall(ftrace_mod_cmd_init);
ftrace: trace different functions with a different tracer

Impact: new feature

Currently, the function tracer only gives you the ability to hook
a tracer to all functions being traced. The dynamic function tracer
allows you to pick and choose which of those functions will be
traced, but all functions being traced will call all tracers that
registered with the function tracer.

This patch adds a new feature that allows a tracer to hook to specific
functions, even when all functions are being traced. It allows
different functions to call different tracer hooks.

The way this is accomplished is by a special function that hooks
into the function tracer and sets up a hash table that knows which
tracer hook to call for which function. This is the most general
and easiest method to accomplish this. Later, an arch may choose
to supply its own method of changing the mcount call of a function
to call a different tracer. But that will be an exercise for the
future.

To register a function:

 struct ftrace_hook_ops {
	void (*func)(unsigned long ip,
		     unsigned long parent_ip,
		     void **data);
	int (*callback)(unsigned long ip, void **data);
	void (*free)(void **data);
 };

 int register_ftrace_function_hook(char *glob, struct ftrace_hook_ops *ops,
				   void *data);

glob is a simple glob to search for the functions to hook.
ops is a pointer to the operations (listed below).
data is the default data to be passed to the hook functions when traced.

ops:
 func is the hook function to call when the functions are traced.
 callback is called when setting up the hash; if the tracer needs to
  do something special for each function being traced and wants to
  give each function its own data, the address of the entry data is
  passed to this callback so that it may update the entry however
  it likes.
 free is a callback for when the entry is freed. In case the tracer
  allocated any data, it is given the chance to free it.

To unregister, we have three functions:

 void
 unregister_ftrace_function_hook(char *glob, struct ftrace_hook_ops *ops,
				 void *data)

This will unregister all hooks that match glob, point to ops, and
whose data matches data. (Note: if glob is NULL, blank, or '*',
all functions will be tested.)

 void
 unregister_ftrace_function_hook_func(char *glob,
				      struct ftrace_hook_ops *ops)

This will unregister all functions matching glob that have an entry
pointing to ops.

 void unregister_ftrace_function_hook_all(char *glob)

This simply unregisters all funcs.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>

static void
function_trace_probe_call(unsigned long ip, unsigned long parent_ip)
{
	struct ftrace_func_probe *entry;
	struct hlist_head *hhd;
	struct hlist_node *n;
	unsigned long key;

	key = hash_long(ip, FTRACE_HASH_BITS);

	hhd = &ftrace_func_hash[key];

	if (hlist_empty(hhd))
		return;

	/*
	 * Disable preemption for these calls to prevent an RCU grace
	 * period. This syncs the hash iteration and freeing of items
	 * on the hash. rcu_read_lock is too dangerous here.
	 */
	preempt_disable_notrace();
	hlist_for_each_entry_rcu(entry, n, hhd, node) {
		if (entry->ip == ip)
			entry->ops->func(ip, parent_ip, &entry->data);
	}
	preempt_enable_notrace();
}

static struct ftrace_ops trace_probe_ops __read_mostly =
{
	.func			= function_trace_probe_call,
};

static int ftrace_probe_registered;

static void __enable_ftrace_function_probe(void)
{
	int ret;
	int i;
2009-02-17 20:32:04 +03:00
if ( ftrace_probe_registered )
ftrace: trace different functions with a different tracer
Impact: new feature
Currently, the function tracer only gives you an ability to hook
a tracer to all functions being traced. The dynamic function trace
allows you to pick and choose which of those functions will be
traced, but all functions being traced will call all tracers that
registered with the function tracer.
ftrace: trace different functions with a different tracer

Impact: new feature

Currently, the function tracer only gives you the ability to hook
a tracer to all functions being traced. The dynamic function tracer
allows you to pick and choose which of those functions will be
traced, but all functions being traced will call all tracers that
registered with the function tracer.

This patch adds a new feature that allows a tracer to hook to specific
functions, even when all functions are being traced. It allows
different functions to call different tracer hooks.

This is accomplished by a special function that hooks into the
function tracer and sets up a hash table recording which tracer hook
to call for which function. This is the most general and simplest
method to accomplish this. Later, an arch may choose to supply its
own method of changing the mcount call of a function to call a
different tracer, but that is an exercise for the future.

To register a function:

 struct ftrace_hook_ops {
	void (*func)(unsigned long ip,
		     unsigned long parent_ip,
		     void **data);
	int (*callback)(unsigned long ip, void **data);
	void (*free)(void **data);
 };

 int register_ftrace_function_hook(char *glob, struct ftrace_hook_ops *ops,
				   void *data);

 glob is a simple glob to search for the functions to hook.
 ops is a pointer to the operations (listed below).
 data is the default data to be passed to the hook functions when traced.

ops:

 func is the hook function called when the matched functions are traced.

 callback is called when setting up the hash: if the tracer needs to do
 something special for each function being traced, and wants to give
 each function its own data, the address of the entry's data is passed
 to this callback so it may update the entry as it wishes.

 free is called when an entry is freed, so that a tracer that allocated
 any data is given the chance to free it.

To unregister, we have three functions:

 void
 unregister_ftrace_function_hook(char *glob, struct ftrace_hook_ops *ops,
				 void *data)

 This unregisters all hooks that match glob, point to ops, and have
 matching data. (Note: if glob is NULL, blank, or "*", all functions
 will be tested.)

 void
 unregister_ftrace_function_hook_func(char *glob,
				      struct ftrace_hook_ops *ops)

 This unregisters all functions matching glob that have an entry
 pointing to ops.

 void unregister_ftrace_function_hook_all(char *glob)

 This simply unregisters all funcs.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
		return;

	for (i = 0; i < FTRACE_FUNC_HASHSIZE; i++) {
		struct hlist_head *hhd = &ftrace_func_hash[i];

		if (hhd->first)
			break;
	}
	/* Nothing registered? */
	if (i == FTRACE_FUNC_HASHSIZE)
		return;

	ret = __register_ftrace_function(&trace_probe_ops);
	if (!ret)
		ret = ftrace_startup(&trace_probe_ops, 0);

	ftrace_probe_registered = 1;
}
static void __disable_ftrace_function_probe(void)
{
	int ret;
	int i;

	if (!ftrace_probe_registered)
		return;

	for (i = 0; i < FTRACE_FUNC_HASHSIZE; i++) {
		struct hlist_head *hhd = &ftrace_func_hash[i];

		if (hhd->first)
			return;
	}

	/* no more funcs left */
	ret = __unregister_ftrace_function(&trace_probe_ops);
	if (!ret)
		ftrace_shutdown(&trace_probe_ops, 0);

	ftrace_probe_registered = 0;
}
static void ftrace_free_entry_rcu(struct rcu_head *rhp)
{
	struct ftrace_func_probe *entry =
		container_of(rhp, struct ftrace_func_probe, rcu);
	if (entry->ops->free)
		entry->ops->free(&entry->data);
	kfree(entry);
}

int
register_ftrace_function_probe(char *glob, struct ftrace_probe_ops *ops,
			       void *data)
{
	struct ftrace_func_probe *entry;
	struct ftrace_page *pg;
	struct dyn_ftrace *rec;
	int type, len, not;
	unsigned long key;
	int count = 0;
	char *search;

	type = filter_parse_regex(glob, strlen(glob), &search, &not);
	len = strlen(search);

	/* we do not support '!' for function probes */
	if (WARN_ON(not))
		return -EINVAL;

	mutex_lock(&ftrace_lock);

	if (unlikely(ftrace_disabled))
		goto out_unlock;
	do_for_each_ftrace_rec(pg, rec) {
2011-04-29 04:32:08 +04:00
		if (!ftrace_match_record(rec, NULL, search, len, type))
			continue;

		entry = kmalloc(sizeof(*entry), GFP_KERNEL);
		if (!entry) {
2009-02-17 20:32:04 +03:00
/* If we did not process any, then return error */
			if (!count)
				count = -ENOMEM;
			goto out_unlock;
		}

		count++;

		entry->data = data;

		/*
		 * The caller might want to do something special
		 * for each function we find. We call the callback
		 * to give the caller an opportunity to do so.
		 */
		if (ops->callback) {
			if (ops->callback(rec->ip, &entry->data) < 0) {
				/* caller does not like this func */
				kfree(entry);
				continue;
			}
		}

		entry->ops = ops;
		entry->ip = rec->ip;

		key = hash_long(entry->ip, FTRACE_HASH_BITS);
		hlist_add_head_rcu(&entry->node, &ftrace_func_hash[key]);

	} while_for_each_ftrace_rec();
	__enable_ftrace_function_probe();
 out_unlock:
	mutex_unlock(&ftrace_lock);

	return count;
}
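The `key = hash_long(entry->ip, FTRACE_HASH_BITS)` step above buckets each recorded ip so dispatch is a short chain walk instead of a linear scan. A minimal userspace sketch of that multiplicative hash (the 64-bit golden-ratio constant matches the kernel's `hash.h`; the `FTRACE_HASH_BITS` value here is an assumed example):

```c
#include <assert.h>

/*
 * Sketch of hash_long(): multiply by a golden-ratio constant and keep
 * the top 'bits' bits, spreading nearby addresses across buckets.
 */
#define GOLDEN_RATIO_64	 0x61C8864680B583EBull
#define FTRACE_HASH_BITS 10	/* assumed example value */

static unsigned long long hash_ip(unsigned long long ip, unsigned int bits)
{
	return (ip * GOLDEN_RATIO_64) >> (64 - bits);
}
```

Kernel text addresses differ mostly in their low bits; the multiplication mixes those low bits up into the high bits that are kept, which is why the top bits are taken rather than a simple mask.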
enum {
	PROBE_TEST_FUNC		= 1,
	PROBE_TEST_DATA		= 2
};
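The PROBE_TEST_FUNC and PROBE_TEST_DATA flags above gate which comparisons the unregister path performs: an entry is removed only if every requested test matches. A userspace sketch of that check (the `probe_entry` struct and `probe_matches()` helper are simplifications, not kernel code):

```c
#include <assert.h>
#include <stddef.h>

#define PROBE_TEST_FUNC	1
#define PROBE_TEST_DATA	2

/* Simplified stand-in for the hash entry being tested. */
struct probe_entry {
	const void *ops;
	const void *data;
};

/*
 * Returns 1 if the entry passes every test requested in flags,
 * mirroring the flag checks in __unregister_ftrace_function_probe().
 */
static int probe_matches(const struct probe_entry *e, const void *ops,
			 const void *data, int flags)
{
	if ((flags & PROBE_TEST_FUNC) && e->ops != ops)
		return 0;
	if ((flags & PROBE_TEST_DATA) && e->data != data)
		return 0;
	return 1;
}
```

This is how one removal routine serves all three unregister entry points: the `_all` variant passes no flags (everything matches), `_func` passes PROBE_TEST_FUNC, and the full variant passes both.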
static void
__unregister_ftrace_function_probe(char *glob, struct ftrace_probe_ops *ops,
				   void *data, int flags)
{
	struct ftrace_func_probe *entry;
	struct hlist_node *n, *tmp;
	char str[KSYM_SYMBOL_LEN];
	int type = MATCH_FULL;
	int i, len = 0;
	char *search;
2009-09-15 14:06:30 +04:00
	if (glob && (strcmp(glob, "*") == 0 || !strlen(glob)))
		glob = NULL;
	else if (glob) {
		int not;
2009-09-24 23:31:51 +04:00
		type = filter_parse_regex(glob, strlen(glob), &search, &not);
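filter_parse_regex() classifies the glob into a match type and strips a leading '!' into `not`. A userspace sketch of that parse; the enum values mirror the `MATCH_FULL` used in this function, but the exact in-place rewriting is an assumption based on this excerpt:

```c
#include <assert.h>
#include <string.h>

enum match_type {
	MATCH_FULL,		/* "name"   : exact match */
	MATCH_FRONT_ONLY,	/* "name*"  : prefix match */
	MATCH_END_ONLY,		/* "*name"  : suffix match */
	MATCH_MIDDLE_ONLY	/* "*name*" : substring match */
};

/*
 * Sketch of filter_parse_regex(): strip a leading '!' into *not and
 * classify leading/trailing '*' wildcards. Modifies buf in place and
 * points *search at the bare text to compare against.
 */
static enum match_type parse_glob(char *buf, char **search, int *not)
{
	int len;

	*not = 0;
	if (buf[0] == '!') {
		*not = 1;
		buf++;
	}
	*search = buf;
	len = strlen(buf);

	if (len == 0)
		return MATCH_FULL;

	if (buf[0] == '*') {
		(*search)++;
		len--;
		if (len && (*search)[len - 1] == '*') {
			(*search)[len - 1] = '\0';
			return MATCH_MIDDLE_ONLY;
		}
		return MATCH_END_ONLY;
	}
	if (buf[len - 1] == '*') {
		buf[len - 1] = '\0';
		return MATCH_FRONT_ONLY;
	}
	return MATCH_FULL;
}
```

The comment just below explains why `not` is then rejected: a probe registered by glob has concrete hash entries, so there is no meaningful set to "invert" at unregister time.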
		len = strlen(search);
/* we do not support '!' for function probes */
		if (WARN_ON(not))
			return;
	}

	mutex_lock(&ftrace_lock);
	for (i = 0; i < FTRACE_FUNC_HASHSIZE; i++) {
		struct hlist_head *hhd = &ftrace_func_hash[i];

		hlist_for_each_entry_safe(entry, n, tmp, hhd, node) {

			/* break up if statements for readability */
2009-02-17 20:32:04 +03:00
if ( ( flags & PROBE_TEST_FUNC ) & & entry - > ops ! = ops )
				continue;

			if ((flags & PROBE_TEST_DATA) && entry->data != data)
				continue;

			/* do this last, since it is the most expensive */
			if (glob) {
				kallsyms_lookup(entry->ip, NULL, NULL,
						NULL, str);
				if (!ftrace_match(str, glob, len, type))
					continue;
			}

			hlist_del(&entry->node);
			call_rcu(&entry->rcu, ftrace_free_entry_rcu);
		}
	}

	__disable_ftrace_function_probe();
	mutex_unlock(&ftrace_lock);
}

void
unregister_ftrace_function_probe(char *glob, struct ftrace_probe_ops *ops,
				void *data)
{
	__unregister_ftrace_function_probe(glob, ops, data,
					   PROBE_TEST_FUNC | PROBE_TEST_DATA);
}

void
unregister_ftrace_function_probe_func(char *glob, struct ftrace_probe_ops *ops)
{
	__unregister_ftrace_function_probe(glob, ops, NULL, PROBE_TEST_FUNC);
}

void unregister_ftrace_function_probe_all(char *glob)
{
	__unregister_ftrace_function_probe(glob, NULL, NULL, 0);
}

static LIST_HEAD(ftrace_commands);
static DEFINE_MUTEX(ftrace_cmd_mutex);

int register_ftrace_command(struct ftrace_func_command *cmd)
{
	struct ftrace_func_command *p;
	int ret = 0;

	mutex_lock(&ftrace_cmd_mutex);
	list_for_each_entry(p, &ftrace_commands, list) {
		if (strcmp(cmd->name, p->name) == 0) {
			ret = -EBUSY;
			goto out_unlock;
		}
	}
	list_add(&cmd->list, &ftrace_commands);
 out_unlock:
	mutex_unlock(&ftrace_cmd_mutex);

	return ret;
}

int unregister_ftrace_command(struct ftrace_func_command *cmd)
{
	struct ftrace_func_command *p, *n;
	int ret = -ENODEV;

	mutex_lock(&ftrace_cmd_mutex);
	list_for_each_entry_safe(p, n, &ftrace_commands, list) {
		if (strcmp(cmd->name, p->name) == 0) {
			ret = 0;
			list_del_init(&p->list);
			goto out_unlock;
		}
	}
 out_unlock:
	mutex_unlock(&ftrace_cmd_mutex);

	return ret;
}
static int ftrace_process_regex(struct ftrace_hash *hash,
				char *buff, int len, int enable)
{
	char *func, *command, *next = buff;
	struct ftrace_func_command *p;
	int ret = -EINVAL;

	func = strsep(&next, ":");

	if (!next) {
		ret = ftrace_match_records(hash, func, len);
		if (!ret)
			ret = -EINVAL;
		if (ret < 0)
			return ret;
		return 0;
	}

	/* command found */

	command = strsep(&next, ":");

	mutex_lock(&ftrace_cmd_mutex);
	list_for_each_entry(p, &ftrace_commands, list) {
		if (strcmp(p->name, command) == 0) {
			ret = p->func(hash, func, command, next, enable);
			goto out_unlock;
		}
	}
 out_unlock:
	mutex_unlock(&ftrace_cmd_mutex);

	return ret;
}
static ssize_t
ftrace_regex_write(struct file *file, const char __user *ubuf,
		   size_t cnt, loff_t *ppos, int enable)
{
	struct ftrace_iterator *iter;
	struct trace_parser *parser;
	ssize_t ret, read;

	if (!cnt)
		return 0;

	mutex_lock(&ftrace_regex_lock);

	ret = -ENODEV;
	if (unlikely(ftrace_disabled))
		goto out_unlock;

	if (file->f_mode & FMODE_READ) {
		struct seq_file *m = file->private_data;
		iter = m->private;
	} else
		iter = file->private_data;

	parser = &iter->parser;
	read = trace_get_user(parser, ubuf, cnt, ppos);

	if (read >= 0 && trace_parser_loaded(parser) &&
	    !trace_parser_cont(parser)) {
		ret = ftrace_process_regex(iter->hash, parser->buffer,
					   parser->idx, enable);
		trace_parser_clear(parser);
		if (ret)
			goto out_unlock;
	}

	ret = read;
 out_unlock:
	mutex_unlock(&ftrace_regex_lock);

	return ret;
}
ssize_t
ftrace_filter_write(struct file *file, const char __user *ubuf,
		    size_t cnt, loff_t *ppos)
{
	return ftrace_regex_write(file, ubuf, cnt, ppos, 1);
}

ssize_t
ftrace_notrace_write(struct file *file, const char __user *ubuf,
		     size_t cnt, loff_t *ppos)
{
	return ftrace_regex_write(file, ubuf, cnt, ppos, 0);
}
static int
ftrace_set_regex(struct ftrace_ops *ops, unsigned char *buf, int len,
		 int reset, int enable)
{
	struct ftrace_hash **orig_hash;
	struct ftrace_hash *hash;
	int ret;

	/* All global ops use the global ops filters */
	if (ops->flags & FTRACE_OPS_FL_GLOBAL)
		ops = &global_ops;

	if (unlikely(ftrace_disabled))
		return -ENODEV;

	if (enable)
		orig_hash = &ops->filter_hash;
	else
		orig_hash = &ops->notrace_hash;

	hash = alloc_and_copy_ftrace_hash(FTRACE_HASH_DEFAULT_BITS, *orig_hash);
	if (!hash)
		return -ENOMEM;

	mutex_lock(&ftrace_regex_lock);
	if (reset)
		ftrace_filter_reset(hash);
	if (buf && !ftrace_match_records(hash, buf, len)) {
		ret = -EINVAL;
		goto out_regex_unlock;
	}

	mutex_lock(&ftrace_lock);
	ret = ftrace_hash_move(ops, enable, orig_hash, hash);
	if (!ret && ops->flags & FTRACE_OPS_FL_ENABLED
	    && ftrace_enabled)
		ftrace_run_update_code(FTRACE_UPDATE_CALLS);

	mutex_unlock(&ftrace_lock);

 out_regex_unlock:
	mutex_unlock(&ftrace_regex_lock);

	free_ftrace_hash(hash);
	return ret;
}
/**
 * ftrace_set_filter - set a function to filter on in ftrace
 * @ops - the ops to set the filter with
 * @buf - the string that holds the function filter text.
 * @len - the length of the string.
 * @reset - non zero to reset all filters before applying this filter.
 *
 * Filters denote which functions should be enabled when tracing is enabled.
 * If @buf is NULL and reset is set, all functions will be enabled for tracing.
 */
int ftrace_set_filter(struct ftrace_ops *ops, unsigned char *buf,
		      int len, int reset)
{
	return ftrace_set_regex(ops, buf, len, reset, 1);
}
EXPORT_SYMBOL_GPL(ftrace_set_filter);
/**
 * ftrace_set_notrace - set a function to not trace in ftrace
 * @ops - the ops to set the notrace filter with
 * @buf - the string that holds the function notrace text.
 * @len - the length of the string.
 * @reset - non zero to reset all filters before applying this filter.
 *
 * Notrace Filters denote which functions should not be enabled when tracing
 * is enabled. If @buf is NULL and reset is set, all functions will be enabled
 * for tracing.
 */
int ftrace_set_notrace(struct ftrace_ops *ops, unsigned char *buf,
		       int len, int reset)
{
	return ftrace_set_regex(ops, buf, len, reset, 0);
}
EXPORT_SYMBOL_GPL(ftrace_set_notrace);

/**
 * ftrace_set_global_filter - set a function to filter on with global tracers
 * @buf - the string that holds the function filter text.
 * @len - the length of the string.
 * @reset - non-zero to reset all filters before applying this filter.
 *
 * Filters denote which functions should be enabled when tracing is enabled.
 * If @buf is NULL and reset is set, all functions will be enabled for tracing.
 */
void ftrace_set_global_filter(unsigned char *buf, int len, int reset)
{
	ftrace_set_regex(&global_ops, buf, len, reset, 1);
}
EXPORT_SYMBOL_GPL(ftrace_set_global_filter);

/**
 * ftrace_set_global_notrace - set a function to not trace with global tracers
 * @buf - the string that holds the function notrace text.
 * @len - the length of the string.
 * @reset - non-zero to reset all filters before applying this filter.
 *
 * Notrace filters denote which functions should not be enabled when tracing
 * is enabled. If @buf is NULL and reset is set, all functions will be enabled
 * for tracing.
 */
void ftrace_set_global_notrace(unsigned char *buf, int len, int reset)
{
	ftrace_set_regex(&global_ops, buf, len, reset, 0);
}
EXPORT_SYMBOL_GPL(ftrace_set_global_notrace);

/*
 * command line interface to allow users to set filters on boot up.
 */
#define FTRACE_FILTER_SIZE		COMMAND_LINE_SIZE
static char ftrace_notrace_buf[FTRACE_FILTER_SIZE] __initdata;
static char ftrace_filter_buf[FTRACE_FILTER_SIZE] __initdata;

static int __init set_ftrace_notrace(char *str)
{
	strncpy(ftrace_notrace_buf, str, FTRACE_FILTER_SIZE);
	return 1;
}
__setup("ftrace_notrace=", set_ftrace_notrace);

static int __init set_ftrace_filter(char *str)
{
	strncpy(ftrace_filter_buf, str, FTRACE_FILTER_SIZE);
	return 1;
}
__setup("ftrace_filter=", set_ftrace_filter);

#ifdef CONFIG_FUNCTION_GRAPH_TRACER
static char ftrace_graph_buf[FTRACE_FILTER_SIZE] __initdata;
static int ftrace_set_func(unsigned long *array, int *idx, char *buffer);

static int __init set_graph_function(char *str)
{
	strlcpy(ftrace_graph_buf, str, FTRACE_FILTER_SIZE);
	return 1;
}
__setup("ftrace_graph_filter=", set_graph_function);

static void __init set_ftrace_early_graph(char *buf)
{
	int ret;
	char *func;

	while (buf) {
		func = strsep(&buf, ",");
		/* we allow only one expression at a time */
		ret = ftrace_set_func(ftrace_graph_funcs, &ftrace_graph_count,
				      func);
		if (ret)
			printk(KERN_DEBUG "ftrace: function %s not "
					  "traceable\n", func);
	}
}
#endif /* CONFIG_FUNCTION_GRAPH_TRACER */

void __init
ftrace_set_early_filter(struct ftrace_ops *ops, char *buf, int enable)
{
	char *func;

	while (buf) {
		func = strsep(&buf, ",");
		ftrace_set_regex(ops, func, strlen(func), 0, enable);
	}
}

static void __init set_ftrace_early_filters(void)
{
	if (ftrace_filter_buf[0])
		ftrace_set_early_filter(&global_ops, ftrace_filter_buf, 1);
	if (ftrace_notrace_buf[0])
		ftrace_set_early_filter(&global_ops, ftrace_notrace_buf, 0);
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
	if (ftrace_graph_buf[0])
		set_ftrace_early_graph(ftrace_graph_buf);
#endif /* CONFIG_FUNCTION_GRAPH_TRACER */
}

int ftrace_regex_release(struct inode *inode, struct file *file)
{
	struct seq_file *m = (struct seq_file *)file->private_data;
	struct ftrace_iterator *iter;
	struct ftrace_hash **orig_hash;
	struct trace_parser *parser;
	int filter_hash;
	int ret;

	mutex_lock(&ftrace_regex_lock);
	if (file->f_mode & FMODE_READ) {
		iter = m->private;
		seq_release(inode, file);
	} else
		iter = file->private_data;

	parser = &iter->parser;
	if (trace_parser_loaded(parser)) {
		parser->buffer[parser->idx] = 0;
		ftrace_match_records(iter->hash, parser->buffer, parser->idx);
	}

	trace_parser_put(parser);

	if (file->f_mode & FMODE_WRITE) {
		filter_hash = !!(iter->flags & FTRACE_ITER_FILTER);

		if (filter_hash)
			orig_hash = &iter->ops->filter_hash;
		else
			orig_hash = &iter->ops->notrace_hash;

		mutex_lock(&ftrace_lock);
		ret = ftrace_hash_move(iter->ops, filter_hash,
				       orig_hash, iter->hash);
		if (!ret && (iter->ops->flags & FTRACE_OPS_FL_ENABLED)
		    && ftrace_enabled)
			ftrace_run_update_code(FTRACE_UPDATE_CALLS);

		mutex_unlock(&ftrace_lock);
	}
	free_ftrace_hash(iter->hash);
	kfree(iter);

	mutex_unlock(&ftrace_regex_lock);
	return 0;
}

static const struct file_operations ftrace_avail_fops = {
	.open = ftrace_avail_open,
	.read = seq_read,
	.llseek = seq_lseek,
	.release = seq_release_private,
};

static const struct file_operations ftrace_enabled_fops = {
	.open = ftrace_enabled_open,
	.read = seq_read,
	.llseek = seq_lseek,
	.release = seq_release_private,
};

static const struct file_operations ftrace_filter_fops = {
	.open = ftrace_filter_open,
	.read = seq_read,
	.write = ftrace_filter_write,
	.llseek = ftrace_regex_lseek,
	.release = ftrace_regex_release,
};

static const struct file_operations ftrace_notrace_fops = {
	.open = ftrace_notrace_open,
	.read = seq_read,
	.write = ftrace_notrace_write,
	.llseek = ftrace_regex_lseek,
	.release = ftrace_regex_release,
};

#ifdef CONFIG_FUNCTION_GRAPH_TRACER

static DEFINE_MUTEX(graph_lock);

int ftrace_graph_count;
int ftrace_graph_filter_enabled;
unsigned long ftrace_graph_funcs[FTRACE_GRAPH_MAX_FUNCS] __read_mostly;

static void *
__g_next(struct seq_file *m, loff_t *pos)
{
	if (*pos >= ftrace_graph_count)
		return NULL;
	return &ftrace_graph_funcs[*pos];
}

static void *
g_next(struct seq_file *m, void *v, loff_t *pos)
{
	(*pos)++;
	return __g_next(m, pos);
}

static void *g_start(struct seq_file *m, loff_t *pos)
{
	mutex_lock(&graph_lock);

	/* Nothing, tell g_show to print all functions are enabled */
	if (!ftrace_graph_filter_enabled && !*pos)
		return (void *)1;

	return __g_next(m, pos);
}

static void g_stop(struct seq_file *m, void *p)
{
	mutex_unlock(&graph_lock);
}

static int g_show(struct seq_file *m, void *v)
{
	unsigned long *ptr = v;

	if (!ptr)
		return 0;

	if (ptr == (unsigned long *)1) {
		seq_printf(m, "#### all functions enabled ####\n");
		return 0;
	}

	seq_printf(m, "%ps\n", (void *)*ptr);

	return 0;
}

static const struct seq_operations ftrace_graph_seq_ops = {
	.start = g_start,
	.next = g_next,
	.stop = g_stop,
	.show = g_show,
};

static int
ftrace_graph_open(struct inode *inode, struct file *file)
{
	int ret = 0;

	if (unlikely(ftrace_disabled))
		return -ENODEV;

	mutex_lock(&graph_lock);
	if ((file->f_mode & FMODE_WRITE) &&
	    (file->f_flags & O_TRUNC)) {
		ftrace_graph_filter_enabled = 0;
		ftrace_graph_count = 0;
		memset(ftrace_graph_funcs, 0, sizeof(ftrace_graph_funcs));
	}
	mutex_unlock(&graph_lock);

	if (file->f_mode & FMODE_READ)
		ret = seq_open(file, &ftrace_graph_seq_ops);

	return ret;
}

static int
ftrace_graph_release(struct inode *inode, struct file *file)
{
	if (file->f_mode & FMODE_READ)
		seq_release(inode, file);
	return 0;
}

static int
ftrace_set_func(unsigned long *array, int *idx, char *buffer)
{
	struct dyn_ftrace *rec;
	struct ftrace_page *pg;
	int search_len;
	int fail = 1;
	int type, not;
	char *search;
	bool exists;
	int i;

	/* decode regex */
	type = filter_parse_regex(buffer, strlen(buffer), &search, &not);
	if (!not && *idx >= FTRACE_GRAPH_MAX_FUNCS)
		return -EBUSY;

	search_len = strlen(search);

	mutex_lock(&ftrace_lock);

	if (unlikely(ftrace_disabled)) {
		mutex_unlock(&ftrace_lock);
		return -ENODEV;
	}

	do_for_each_ftrace_rec(pg, rec) {

		if (ftrace_match_record(rec, NULL, search, search_len, type)) {
			/* if it is in the array */
			exists = false;
			for (i = 0; i < *idx; i++) {
				if (array[i] == rec->ip) {
					exists = true;
					break;
				}
			}

			if (!not) {
				fail = 0;
				if (!exists) {
					array[(*idx)++] = rec->ip;
					if (*idx >= FTRACE_GRAPH_MAX_FUNCS)
						goto out;
				}
			} else {
				if (exists) {
					array[i] = array[--(*idx)];
					array[*idx] = 0;
					fail = 0;
				}
			}
		}
	} while_for_each_ftrace_rec();
out:
	mutex_unlock(&ftrace_lock);

	if (fail)
		return -EINVAL;

	ftrace_graph_filter_enabled = 1;
	return 0;
}

static ssize_t
ftrace_graph_write(struct file *file, const char __user *ubuf,
		   size_t cnt, loff_t *ppos)
{
	struct trace_parser parser;
	ssize_t read, ret;

	if (!cnt)
		return 0;

	mutex_lock(&graph_lock);

	if (trace_parser_get_init(&parser, FTRACE_BUFF_MAX)) {
		ret = -ENOMEM;
		goto out_unlock;
	}

	read = trace_get_user(&parser, ubuf, cnt, ppos);

	if (read >= 0 && trace_parser_loaded((&parser))) {
		parser.buffer[parser.idx] = 0;

		/* we allow only one expression at a time */
		ret = ftrace_set_func(ftrace_graph_funcs, &ftrace_graph_count,
				      parser.buffer);
		if (ret)
			goto out_free;
	}

	ret = read;

out_free:
	trace_parser_put(&parser);
out_unlock:
	mutex_unlock(&graph_lock);

	return ret;
}

static const struct file_operations ftrace_graph_fops = {
	.open = ftrace_graph_open,
	.read = seq_read,
	.write = ftrace_graph_write,
	.release = ftrace_graph_release,
	.llseek = seq_lseek,
};
#endif /* CONFIG_FUNCTION_GRAPH_TRACER */

static __init int ftrace_init_dyn_debugfs(struct dentry *d_tracer)
{

	trace_create_file("available_filter_functions", 0444,
			d_tracer, NULL, &ftrace_avail_fops);

	trace_create_file("enabled_functions", 0444,
			d_tracer, NULL, &ftrace_enabled_fops);

	trace_create_file("set_ftrace_filter", 0644, d_tracer,
			NULL, &ftrace_filter_fops);

	trace_create_file("set_ftrace_notrace", 0644, d_tracer,
			NULL, &ftrace_notrace_fops);

#ifdef CONFIG_FUNCTION_GRAPH_TRACER
	trace_create_file("set_graph_function", 0444, d_tracer,
			NULL, &ftrace_graph_fops);
#endif /* CONFIG_FUNCTION_GRAPH_TRACER */

	return 0;
}

static int ftrace_cmp_ips(const void *a, const void *b)
{
	const unsigned long *ipa = a;
	const unsigned long *ipb = b;

	if (*ipa > *ipb)
		return 1;
	if (*ipa < *ipb)
		return -1;
	return 0;
}

static void ftrace_swap_ips(void *a, void *b, int size)
{
	unsigned long *ipa = a;
	unsigned long *ipb = b;
	unsigned long t;

	t = *ipa;
	*ipa = *ipb;
	*ipb = t;
}

static int ftrace_process_locs(struct module *mod,
			       unsigned long *start,
			       unsigned long *end)
{
	struct ftrace_page *pg;
	unsigned long count;
	unsigned long *p;
	unsigned long addr;
	unsigned long flags = 0; /* Shut up gcc */
	int ret = -ENOMEM;

	count = end - start;

	if (!count)
		return 0;

	sort(start, count, sizeof(*start),
	     ftrace_cmp_ips, ftrace_swap_ips);

	pg = ftrace_allocate_pages(count);
	if (!pg)
		return -ENOMEM;

	mutex_lock(&ftrace_lock);

	/*
	 * Core and each module needs their own pages, as
	 * modules will free them when they are removed.
	 * Force a new page to be allocated for modules.
	 */
	if (!mod) {
		WARN_ON(ftrace_pages || ftrace_pages_start);
		/* First initialization */
		ftrace_pages = ftrace_pages_start = pg;
	} else {
		if (!ftrace_pages)
			goto out;

		if (WARN_ON(ftrace_pages->next)) {
			/* Hmm, we have free pages? */
			while (ftrace_pages->next)
				ftrace_pages = ftrace_pages->next;
		}

		ftrace_pages->next = pg;
		ftrace_pages = pg;
	}

	p = start;
	while (p < end) {
		addr = ftrace_call_adjust(*p++);
		/*
		 * Some architecture linkers will pad between
		 * the different mcount_loc sections of different
		 * object files to satisfy alignments.
		 * Skip any NULL pointers.
		 */
		if (!addr)
			continue;
		if (!ftrace_record_ip(addr))
			break;
	}

	/* These new locations need to be initialized */
	ftrace_new_pgs = pg;

	/*
	 * We only need to disable interrupts on start up
	 * because we are modifying code that an interrupt
	 * may execute, and the modification is not atomic.
	 * But for modules, nothing runs the code we modify
	 * until we are finished with it, and there's no
	 * reason to cause large interrupt latencies while we do it.
	 */
	if (!mod)
		local_irq_save(flags);
	ftrace_update_code(mod);
	if (!mod)
		local_irq_restore(flags);
	ret = 0;
 out:
	mutex_unlock(&ftrace_lock);

	return ret;
}

#ifdef CONFIG_MODULES

#define next_to_ftrace_page(p) container_of(p, struct ftrace_page, next)

void ftrace_release_mod(struct module *mod)
{
	struct dyn_ftrace *rec;
	struct ftrace_page **last_pg;
	struct ftrace_page *pg;
	int order;

	mutex_lock(&ftrace_lock);

	if (ftrace_disabled)
		goto out_unlock;

	/*
	 * Each module has its own ftrace_pages, remove
	 * them from the list.
	 */
	last_pg = &ftrace_pages_start;
	for (pg = ftrace_pages_start; pg; pg = *last_pg) {
		rec = &pg->records[0];
		if (within_module_core(rec->ip, mod)) {
			/*
			 * As core pages are first, the first
			 * page should never be a module page.
			 */
			if (WARN_ON(pg == ftrace_pages_start))
				goto out_unlock;

			/* Check if we are deleting the last page */
			if (pg == ftrace_pages)
				ftrace_pages = next_to_ftrace_page(last_pg);

			*last_pg = pg->next;
			order = get_count_order(pg->size / ENTRIES_PER_PAGE);
			free_pages((unsigned long)pg->records, order);
			kfree(pg);
		} else
			last_pg = &pg->next;
	}
 out_unlock:
	mutex_unlock(&ftrace_lock);
}

static void ftrace_init_module(struct module *mod,
			       unsigned long *start, unsigned long *end)
{
	if (ftrace_disabled || start == end)
		return;
	ftrace_process_locs(mod, start, end);
}

static int ftrace_module_notify(struct notifier_block *self,
				unsigned long val, void *data)
{
	struct module *mod = data;

	switch (val) {
	case MODULE_STATE_COMING:
		ftrace_init_module(mod, mod->ftrace_callsites,
				   mod->ftrace_callsites +
				   mod->num_ftrace_callsites);
		break;
	case MODULE_STATE_GOING:
		ftrace_release_mod(mod);
		break;
	}

	return 0;
}
#else
static int ftrace_module_notify(struct notifier_block *self,
				unsigned long val, void *data)
{
	return 0;
}
#endif /* CONFIG_MODULES */

struct notifier_block ftrace_module_nb = {
	.notifier_call = ftrace_module_notify,
	.priority = 0,
};

extern unsigned long __start_mcount_loc[];
extern unsigned long __stop_mcount_loc[];

void __init ftrace_init(void)
{
	unsigned long count, addr, flags;
	int ret;

	/* Keep the ftrace pointer to the stub */
	addr = (unsigned long)ftrace_stub;

	local_irq_save(flags);
	ftrace_dyn_arch_init(&addr);
	local_irq_restore(flags);

	/* ftrace_dyn_arch_init places the return code in addr */
	if (addr)
		goto failed;

	count = __stop_mcount_loc - __start_mcount_loc;

	ret = ftrace_dyn_table_alloc(count);
	if (ret)
		goto failed;

	last_ftrace_enabled = ftrace_enabled = 1;

	ret = ftrace_process_locs(NULL,
				  __start_mcount_loc,
				  __stop_mcount_loc);

	ret = register_module_notifier(&ftrace_module_nb);
	if (ret)
		pr_warning("Failed to register trace ftrace module notifier\n");

	set_ftrace_early_filters();

	return;
 failed:
	ftrace_disabled = 1;
}
#else /* CONFIG_DYNAMIC_FTRACE */

static struct ftrace_ops global_ops = {
	.func			= ftrace_stub,
};

static int __init ftrace_nodyn_init(void)
{
	ftrace_enabled = 1;
	return 0;
}
device_initcall(ftrace_nodyn_init);

static inline int ftrace_init_dyn_debugfs(struct dentry *d_tracer) { return 0; }
static inline void ftrace_startup_enable(int command) { }

/* Keep as macros so we do not need to define the commands */
#define ftrace_startup(ops, command)			\
	({						\
		(ops)->flags |= FTRACE_OPS_FL_ENABLED;	\
		0;					\
	})
#define ftrace_shutdown(ops, command)	do { } while (0)

#define ftrace_startup_sysctl()		do { } while (0)
#define ftrace_shutdown_sysctl()	do { } while (0)

static inline int
ftrace_ops_test(struct ftrace_ops *ops, unsigned long ip)
{
	return 1;
}
#endif /* CONFIG_DYNAMIC_FTRACE */
static void
ftrace_ops_control_func(unsigned long ip, unsigned long parent_ip)
{
	struct ftrace_ops *op;

	if (unlikely(trace_recursion_test(TRACE_CONTROL_BIT)))
		return;

	/*
	 * Some of the ops may be dynamically allocated,
	 * they must be freed after a synchronize_sched().
	 */
	preempt_disable_notrace();
	trace_recursion_set(TRACE_CONTROL_BIT);
	op = rcu_dereference_raw(ftrace_control_list);
	while (op != &ftrace_list_end) {
		if (!ftrace_function_local_disabled(op) &&
		    ftrace_ops_test(op, ip))
			op->func(ip, parent_ip);
		op = rcu_dereference_raw(op->next);
	}
	trace_recursion_clear(TRACE_CONTROL_BIT);
	preempt_enable_notrace();
}

static struct ftrace_ops control_ops = {
	.func = ftrace_ops_control_func,
};
static void
ftrace_ops_list_func(unsigned long ip, unsigned long parent_ip)
{
	struct ftrace_ops *op;

	/*
	 * Recursion protection: with CONFIG_PROVE_RCU enabled,
	 * rcu_dereference_raw() calls debug functions that may themselves
	 * be traced, which would recurse right back into this function.
	 * A bit in the task's trace_recursion word detects the re-entry.
	 */
	if (unlikely(trace_recursion_test(TRACE_INTERNAL_BIT)))
		return;

	trace_recursion_set(TRACE_INTERNAL_BIT);
	/*
	 * Some of the ops may be dynamically allocated,
	 * they must be freed after a synchronize_sched().
	 */
	preempt_disable_notrace();
	op = rcu_dereference_raw(ftrace_ops_list);
	while (op != &ftrace_list_end) {
		if (ftrace_ops_test(op, ip))
			op->func(ip, parent_ip);
		op = rcu_dereference_raw(op->next);
	}
	preempt_enable_notrace();
	trace_recursion_clear(TRACE_INTERNAL_BIT);
}
static void clear_ftrace_swapper(void)
{
	struct task_struct *p;
	int cpu;

	get_online_cpus();
	for_each_online_cpu(cpu) {
		p = idle_task(cpu);
		clear_tsk_trace_trace(p);
	}
	put_online_cpus();
}

static void set_ftrace_swapper(void)
{
	struct task_struct *p;
	int cpu;

	get_online_cpus();
	for_each_online_cpu(cpu) {
		p = idle_task(cpu);
		set_tsk_trace_trace(p);
	}
	put_online_cpus();
}
static void clear_ftrace_pid(struct pid *pid)
{
	struct task_struct *p;

	rcu_read_lock();
	do_each_pid_task(pid, PIDTYPE_PID, p) {
		clear_tsk_trace_trace(p);
	} while_each_pid_task(pid, PIDTYPE_PID, p);
	rcu_read_unlock();

	put_pid(pid);
}

static void set_ftrace_pid(struct pid *pid)
{
	struct task_struct *p;

	rcu_read_lock();
	do_each_pid_task(pid, PIDTYPE_PID, p) {
		set_tsk_trace_trace(p);
	} while_each_pid_task(pid, PIDTYPE_PID, p);
	rcu_read_unlock();
}
static void clear_ftrace_pid_task(struct pid *pid)
{
	if (pid == ftrace_swapper_pid)
		clear_ftrace_swapper();
	else
		clear_ftrace_pid(pid);
}

static void set_ftrace_pid_task(struct pid *pid)
{
	if (pid == ftrace_swapper_pid)
		set_ftrace_swapper();
	else
		set_ftrace_pid(pid);
}
static int ftrace_pid_add(int p)
{
	struct pid *pid;
	struct ftrace_pid *fpid;
	int ret = -EINVAL;

	mutex_lock(&ftrace_lock);

	if (!p)
		pid = ftrace_swapper_pid;
	else
		pid = find_get_pid(p);

	if (!pid)
		goto out;

	ret = 0;

	list_for_each_entry(fpid, &ftrace_pids, list)
		if (fpid->pid == pid)
			goto out_put;

	ret = -ENOMEM;

	fpid = kmalloc(sizeof(*fpid), GFP_KERNEL);
	if (!fpid)
		goto out_put;

	list_add(&fpid->list, &ftrace_pids);
	fpid->pid = pid;

	set_ftrace_pid_task(pid);

	ftrace_update_pid_func();
	ftrace_startup_enable(0);

	mutex_unlock(&ftrace_lock);
	return 0;

out_put:
	if (pid != ftrace_swapper_pid)
		put_pid(pid);

out:
	mutex_unlock(&ftrace_lock);
	return ret;
}
static void ftrace_pid_reset(void)
{
	struct ftrace_pid *fpid, *safe;

	mutex_lock(&ftrace_lock);
	list_for_each_entry_safe(fpid, safe, &ftrace_pids, list) {
		struct pid *pid = fpid->pid;

		clear_ftrace_pid_task(pid);

		list_del(&fpid->list);
		kfree(fpid);
	}

	ftrace_update_pid_func();
	ftrace_startup_enable(0);

	mutex_unlock(&ftrace_lock);
}
static void *fpid_start(struct seq_file *m, loff_t *pos)
{
	mutex_lock(&ftrace_lock);

	if (list_empty(&ftrace_pids) && (!*pos))
		return (void *) 1;

	return seq_list_start(&ftrace_pids, *pos);
}

static void *fpid_next(struct seq_file *m, void *v, loff_t *pos)
{
	if (v == (void *)1)
		return NULL;

	return seq_list_next(v, &ftrace_pids, pos);
}

static void fpid_stop(struct seq_file *m, void *p)
{
	mutex_unlock(&ftrace_lock);
}

static int fpid_show(struct seq_file *m, void *v)
{
	const struct ftrace_pid *fpid = list_entry(v, struct ftrace_pid, list);

	if (v == (void *)1) {
		seq_printf(m, "no pid\n");
		return 0;
	}

	if (fpid->pid == ftrace_swapper_pid)
		seq_printf(m, "swapper tasks\n");
	else
		seq_printf(m, "%u\n", pid_vnr(fpid->pid));

	return 0;
}

static const struct seq_operations ftrace_pid_sops = {
	.start = fpid_start,
	.next = fpid_next,
	.stop = fpid_stop,
	.show = fpid_show,
};

static int
ftrace_pid_open(struct inode *inode, struct file *file)
{
	int ret = 0;

	if ((file->f_mode & FMODE_WRITE) &&
	    (file->f_flags & O_TRUNC))
		ftrace_pid_reset();

	if (file->f_mode & FMODE_READ)
		ret = seq_open(file, &ftrace_pid_sops);

	return ret;
}
static ssize_t
ftrace_pid_write(struct file *filp, const char __user *ubuf,
		   size_t cnt, loff_t *ppos)
{
	char buf[64], *tmp;
	long val;
	int ret;

	if (cnt >= sizeof(buf))
		return -EINVAL;

	if (copy_from_user(&buf, ubuf, cnt))
		return -EFAULT;

	buf[cnt] = 0;

	/*
	 * Allow "echo > set_ftrace_pid" or "echo -n '' > set_ftrace_pid"
	 * to clean the filter quietly.
	 */
	tmp = strstrip(buf);
	if (strlen(tmp) == 0)
		return 1;

	ret = strict_strtol(tmp, 10, &val);
	if (ret < 0)
		return ret;

	ret = ftrace_pid_add(val);

	return ret ? ret : cnt;
}
static int
ftrace_pid_release(struct inode *inode, struct file *file)
{
	if (file->f_mode & FMODE_READ)
		seq_release(inode, file);

	return 0;
}
static const struct file_operations ftrace_pid_fops = {
	.open		= ftrace_pid_open,
	.write		= ftrace_pid_write,
	.read		= seq_read,
	.llseek		= seq_lseek,
	.release	= ftrace_pid_release,
};

static __init int ftrace_init_debugfs(void)
{
	struct dentry *d_tracer;

	d_tracer = tracing_init_dentry();
	if (!d_tracer)
		return 0;

	ftrace_init_dyn_debugfs(d_tracer);

	trace_create_file("set_ftrace_pid", 0644, d_tracer,
			    NULL, &ftrace_pid_fops);

	ftrace_profile_debugfs(d_tracer);

	return 0;
}
fs_initcall(ftrace_init_debugfs);
/**
 * ftrace_kill - kill ftrace
 *
 * This function should be used by panic code. It stops ftrace
 * but in a not so nice way. If you need to simply kill ftrace
 * from a non-atomic section, use ftrace_kill.
 */
void ftrace_kill(void)
{
	ftrace_disabled = 1;
	ftrace_enabled = 0;
	clear_ftrace_function();
}
/**
 * ftrace_is_dead - Test if ftrace is dead or not.
 */
int ftrace_is_dead(void)
{
	return ftrace_disabled;
}
/**
 * register_ftrace_function - register a function for profiling
 * @ops - ops structure that holds the function for profiling.
 *
 * Register a function to be called by all functions in the
 * kernel.
 *
 * Note: @ops->func and all the functions it calls must be labeled
 *       with "notrace", otherwise it will go into a
 *       recursive loop.
 */
int register_ftrace_function(struct ftrace_ops *ops)
{
	int ret = -1;

	mutex_lock(&ftrace_lock);

	if (unlikely(ftrace_disabled))
		goto out_unlock;

	ret = __register_ftrace_function(ops);
	if (!ret)
		ret = ftrace_startup(ops, 0);

 out_unlock:
	mutex_unlock(&ftrace_lock);
	return ret;
}
EXPORT_SYMBOL_GPL(register_ftrace_function);
/**
 * unregister_ftrace_function - unregister a function for profiling.
 * @ops - ops structure that holds the function to unregister
 *
 * Unregister a function that was added to be called by ftrace profiling.
 */
int unregister_ftrace_function ( struct ftrace_ops * ops )
{
int ret ;
2009-02-14 09:42:44 +03:00
mutex_lock ( & ftrace_lock ) ;
	ret = __unregister_ftrace_function(ops);
	if (!ret)
		ftrace_shutdown(ops, 0);
	mutex_unlock(&ftrace_lock);

	return ret;
}
EXPORT_SYMBOL_GPL(unregister_ftrace_function);

int
ftrace_enable_sysctl(struct ctl_table *table, int write,
		     void __user *buffer, size_t *lenp,
		     loff_t *ppos)
{
	int ret = -ENODEV;

	mutex_lock(&ftrace_lock);

	if (unlikely(ftrace_disabled))
		goto out;

	ret = proc_dointvec(table, write, buffer, lenp, ppos);

	if (ret || !write || (last_ftrace_enabled == !!ftrace_enabled))
		goto out;

	last_ftrace_enabled = !!ftrace_enabled;

	if (ftrace_enabled) {

		ftrace_startup_sysctl();

		/* we are starting ftrace again */
		if (ftrace_ops_list != &ftrace_list_end) {
			if (ftrace_ops_list->next == &ftrace_list_end)
				ftrace_trace_function = ftrace_ops_list->func;
			else
				ftrace_trace_function = ftrace_ops_list_func;
		}

	} else {
		/* stopping ftrace calls (just send to ftrace_stub) */
		ftrace_trace_function = ftrace_stub;

		ftrace_shutdown_sysctl();
	}

 out:
	mutex_unlock(&ftrace_lock);
	return ret;
}

#ifdef CONFIG_FUNCTION_GRAPH_TRACER

static int ftrace_graph_active;
static struct notifier_block ftrace_suspend_notifier;
int ftrace_graph_entry_stub(struct ftrace_graph_ent *trace)
{
	return 0;
}

/* The callbacks that hook a function */
trace_func_graph_ret_t ftrace_graph_return =
			(trace_func_graph_ret_t)ftrace_stub;
trace_func_graph_ent_t ftrace_graph_entry = ftrace_graph_entry_stub;

/* Try to assign a return stack array on FTRACE_RETSTACK_ALLOC_SIZE tasks. */
static int alloc_retstack_tasklist(struct ftrace_ret_stack **ret_stack_list)
{
	int i;
	int ret = 0;
	unsigned long flags;
	int start = 0, end = FTRACE_RETSTACK_ALLOC_SIZE;
	struct task_struct *g, *t;

	for (i = 0; i < FTRACE_RETSTACK_ALLOC_SIZE; i++) {
		ret_stack_list[i] = kmalloc(FTRACE_RETFUNC_DEPTH
					* sizeof(struct ftrace_ret_stack),
					GFP_KERNEL);
		if (!ret_stack_list[i]) {
			start = 0;
			end = i;
			ret = -ENOMEM;
			goto free;
		}
	}

	read_lock_irqsave(&tasklist_lock, flags);
	do_each_thread(g, t) {
		if (start == end) {
			ret = -EAGAIN;
			goto unlock;
		}

		if (t->ret_stack == NULL) {
			atomic_set(&t->tracing_graph_pause, 0);
			atomic_set(&t->trace_overrun, 0);
			t->curr_ret_stack = -1;
			/* Make sure the tasks see the -1 first: */
			smp_wmb();
			t->ret_stack = ret_stack_list[start++];
		}
	} while_each_thread(g, t);

unlock:
	read_unlock_irqrestore(&tasklist_lock, flags);
free:
	for (i = start; i < end; i++)
		kfree(ret_stack_list[i]);
	return ret;
}
static void
ftrace_graph_probe_sched_switch(void *ignore,
			struct task_struct *prev, struct task_struct *next)
{
	unsigned long long timestamp;
	int index;

	/*
	 * Does the user want to count the time a function was asleep.
	 * If so, do not update the time stamps.
	 */
	if (trace_flags & TRACE_ITER_SLEEP_TIME)
		return;

	timestamp = trace_clock_local();

	prev->ftrace_timestamp = timestamp;

	/* only process tasks that we timestamped */
	if (!next->ftrace_timestamp)
		return;

	/*
	 * Update all the counters in next to make up for the
	 * time next was sleeping.
	 */
	timestamp -= next->ftrace_timestamp;

	for (index = next->curr_ret_stack; index >= 0; index--)
		next->ret_stack[index].calltime += timestamp;
}

/* Allocate a return stack for each task */
static int start_graph_tracing(void)
{
	struct ftrace_ret_stack **ret_stack_list;
	int ret, cpu;

	ret_stack_list = kmalloc(FTRACE_RETSTACK_ALLOC_SIZE *
				sizeof(struct ftrace_ret_stack *),
				GFP_KERNEL);

	if (!ret_stack_list)
		return -ENOMEM;

	/* The cpu_boot init_task->ret_stack will never be freed */
	for_each_online_cpu(cpu) {
		if (!idle_task(cpu)->ret_stack)
			ftrace_graph_init_idle_task(idle_task(cpu), cpu);
	}

	do {
		ret = alloc_retstack_tasklist(ret_stack_list);
	} while (ret == -EAGAIN);

	if (!ret) {
		ret = register_trace_sched_switch(ftrace_graph_probe_sched_switch, NULL);
		if (ret)
			pr_info("ftrace_graph: Couldn't activate tracepoint "
				"probe to kernel_sched_switch\n");
	}

	kfree(ret_stack_list);
	return ret;
}

/*
 * Hibernation protection.
 * The state of the current task is too unstable during
 * suspend/restore to disk. We want to protect against that.
 */
static int
ftrace_suspend_notifier_call(struct notifier_block *bl, unsigned long state,
							void *unused)
{
	switch (state) {
	case PM_HIBERNATION_PREPARE:
		pause_graph_tracing();
		break;

	case PM_POST_HIBERNATION:
		unpause_graph_tracing();
		break;
	}

	return NOTIFY_DONE;
}

int register_ftrace_graph(trace_func_graph_ret_t retfunc,
			trace_func_graph_ent_t entryfunc)
{
	int ret = 0;

	mutex_lock(&ftrace_lock);

	/* we currently allow only one tracer registered at a time */
	if (ftrace_graph_active) {
		ret = -EBUSY;
		goto out;
	}

	ftrace_suspend_notifier.notifier_call = ftrace_suspend_notifier_call;
	register_pm_notifier(&ftrace_suspend_notifier);

	ftrace_graph_active++;
	ret = start_graph_tracing();
	if (ret) {
		ftrace_graph_active--;
		goto out;
	}

	ftrace_graph_return = retfunc;
	ftrace_graph_entry = entryfunc;

	ret = ftrace_startup(&global_ops, FTRACE_START_FUNC_RET);

out:
	mutex_unlock(&ftrace_lock);
	return ret;
}

void unregister_ftrace_graph(void)
{
	mutex_lock(&ftrace_lock);

	if (unlikely(!ftrace_graph_active))
		goto out;

	ftrace_graph_active--;
	ftrace_graph_return = (trace_func_graph_ret_t)ftrace_stub;
	ftrace_graph_entry = ftrace_graph_entry_stub;
	ftrace_shutdown(&global_ops, FTRACE_STOP_FUNC_RET);
	unregister_pm_notifier(&ftrace_suspend_notifier);
	unregister_trace_sched_switch(ftrace_graph_probe_sched_switch, NULL);

 out:
	mutex_unlock(&ftrace_lock);
}
static DEFINE_PER_CPU(struct ftrace_ret_stack *, idle_ret_stack);

static void
graph_init_task(struct task_struct *t, struct ftrace_ret_stack *ret_stack)
{
	atomic_set(&t->tracing_graph_pause, 0);
	atomic_set(&t->trace_overrun, 0);
	t->ftrace_timestamp = 0;
	/* make curr_ret_stack visible before we add the ret_stack */
	smp_wmb();
	t->ret_stack = ret_stack;
}

/*
 * Allocate a return stack for the idle task. May be the first
 * time through, or it may be done by CPU hotplug online.
 */
void ftrace_graph_init_idle_task(struct task_struct *t, int cpu)
{
	t->curr_ret_stack = -1;
	/*
	 * The idle task has no parent, it either has its own
	 * stack or no stack at all.
	 */
	if (t->ret_stack)
		WARN_ON(t->ret_stack != per_cpu(idle_ret_stack, cpu));

	if (ftrace_graph_active) {
		struct ftrace_ret_stack *ret_stack;

		ret_stack = per_cpu(idle_ret_stack, cpu);
		if (!ret_stack) {
			ret_stack = kmalloc(FTRACE_RETFUNC_DEPTH
					* sizeof(struct ftrace_ret_stack),
					GFP_KERNEL);
			if (!ret_stack)
				return;
			per_cpu(idle_ret_stack, cpu) = ret_stack;
		}
		graph_init_task(t, ret_stack);
	}
}

/* Allocate a return stack for newly created task */
void ftrace_graph_init_task(struct task_struct *t)
{
	/* Make sure we do not use the parent ret_stack */
	t->ret_stack = NULL;
	t->curr_ret_stack = -1;

	if (ftrace_graph_active) {
		struct ftrace_ret_stack *ret_stack;

		ret_stack = kmalloc(FTRACE_RETFUNC_DEPTH
				* sizeof(struct ftrace_ret_stack),
				GFP_KERNEL);
		if (!ret_stack)
			return;
		graph_init_task(t, ret_stack);
	}
}

void ftrace_graph_exit_task(struct task_struct *t)
{
	struct ftrace_ret_stack *ret_stack = t->ret_stack;

	t->ret_stack = NULL;
	/* NULL must become visible to IRQs before we free it: */
	barrier();

	kfree(ret_stack);
}

void ftrace_graph_stop(void)
{
	ftrace_stop();
}

#endif	/* CONFIG_FUNCTION_GRAPH_TRACER */