License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner, Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to licenses
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX license identifier to apply to a
file was done in a spreadsheet of side-by-side results from the output
of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files created by Philippe Ombredanne. Philippe prepared the
base worksheet, and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis, with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to the file. She confirmed any determination that was
not immediately clear with lawyers working with the Linux Foundation.
Criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source.
- File already had some variant of a license header in it (even if <5
lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, file was
considered to have no license information in it, and the top level
COPYING file license applied.
For non */uapi/* files that summary was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note", otherwise it was "GPL-2.0". The results of that were:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note 930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL-family license was found in the file, or if it had no licensing
in it (per the prior point). Results summary:
SPDX license identifier # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note 270
GPL-2.0+ WITH Linux-syscall-note 169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17
LGPL-2.1+ WITH Linux-syscall-note 15
GPL-1.0+ WITH Linux-syscall-note 14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5
LGPL-2.0+ WITH Linux-syscall-note 4
LGPL-2.1 WITH Linux-syscall-note 3
((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, over 70 hours of logged manual review was done on the
spreadsheet by Kate, Philippe and Thomas to determine the SPDX license
identifiers to apply to the source files, with confirmation in some
cases by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights. The
Windriver scanner is based in part on an older version of FOSSology, so
the two are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files, Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and have been fixed to reflect the
correct identifier.
Additionally, Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 patched files from the initial patch
version, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
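For illustration, the tag lands as the first possible line of each file,
in the comment style the file type calls for; a header gets the block
comment form and a .c file gets the line comment form, roughly:

    /* SPDX-License-Identifier: GPL-2.0 */    (headers)
    // SPDX-License-Identifier: GPL-2.0       (.c source files)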
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __PERF_DSO
#define __PERF_DSO

#include <pthread.h>
#include <linux/refcount.h>
#include <linux/types.h>
#include <linux/rbtree.h>
#include <sys/types.h>
#include <stdbool.h>
#include <stdio.h>
#include <linux/bitops.h>
#include "build-id.h"

struct machine;
struct map;
struct perf_env;

#define DSO__NAME_KALLSYMS "[kernel.kallsyms]"
#define DSO__NAME_KCORE    "[kernel.kcore]"

enum dso_binary_type {
        DSO_BINARY_TYPE__KALLSYMS = 0,
        DSO_BINARY_TYPE__GUEST_KALLSYMS,
        DSO_BINARY_TYPE__VMLINUX,
        DSO_BINARY_TYPE__GUEST_VMLINUX,
        DSO_BINARY_TYPE__JAVA_JIT,
        DSO_BINARY_TYPE__DEBUGLINK,
        DSO_BINARY_TYPE__BUILD_ID_CACHE,
        DSO_BINARY_TYPE__BUILD_ID_CACHE_DEBUGINFO,
        DSO_BINARY_TYPE__FEDORA_DEBUGINFO,
        DSO_BINARY_TYPE__UBUNTU_DEBUGINFO,
        DSO_BINARY_TYPE__MIXEDUP_UBUNTU_DEBUGINFO,
        DSO_BINARY_TYPE__BUILDID_DEBUGINFO,
        DSO_BINARY_TYPE__SYSTEM_PATH_DSO,
        DSO_BINARY_TYPE__GUEST_KMODULE,
        DSO_BINARY_TYPE__GUEST_KMODULE_COMP,
        DSO_BINARY_TYPE__SYSTEM_PATH_KMODULE,
        DSO_BINARY_TYPE__SYSTEM_PATH_KMODULE_COMP,
        DSO_BINARY_TYPE__KCORE,
        DSO_BINARY_TYPE__GUEST_KCORE,
        DSO_BINARY_TYPE__OPENEMBEDDED_DEBUGINFO,
        DSO_BINARY_TYPE__BPF_PROG_INFO,
        DSO_BINARY_TYPE__BPF_IMAGE,
        DSO_BINARY_TYPE__OOL,
        DSO_BINARY_TYPE__NOT_FOUND,
};

enum dso_space_type {
        DSO_SPACE__USER = 0,
        DSO_SPACE__KERNEL,
        DSO_SPACE__KERNEL_GUEST
};

enum dso_swap_type {
        DSO_SWAP__UNSET,
        DSO_SWAP__NO,
        DSO_SWAP__YES,
};

enum dso_data_status {
        DSO_DATA_STATUS_ERROR   = -1,
        DSO_DATA_STATUS_UNKNOWN = 0,
        DSO_DATA_STATUS_OK      = 1,
};

enum dso_data_status_seen {
        DSO_DATA_STATUS_SEEN_ITRACE,
};

enum dso_type {
        DSO__TYPE_UNKNOWN,
        DSO__TYPE_64BIT,
        DSO__TYPE_32BIT,
        DSO__TYPE_X32BIT,
};

enum dso_load_errno {
        DSO_LOAD_ERRNO__SUCCESS         = 0,

        /*
         * Choose an arbitrary negative big number not to clash with standard
         * errno since SUS requires the errno has distinct positive values.
         * See 'Issue 6' in the link below.
         *
         * http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/errno.h.html
         */
        __DSO_LOAD_ERRNO__START         = -10000,

        DSO_LOAD_ERRNO__INTERNAL_ERROR  = __DSO_LOAD_ERRNO__START,

        /* for symsrc__init() */
        DSO_LOAD_ERRNO__INVALID_ELF,
        DSO_LOAD_ERRNO__CANNOT_READ_BUILDID,
        DSO_LOAD_ERRNO__MISMATCHING_BUILDID,

        /* for decompress_kmodule */
        DSO_LOAD_ERRNO__DECOMPRESSION_FAILURE,

        __DSO_LOAD_ERRNO__END,
};

#define DSO__SWAP(dso, type, val)                       \
({                                                      \
        type ____r = val;                               \
        BUG_ON(dso->needs_swap == DSO_SWAP__UNSET);     \
        if (dso->needs_swap == DSO_SWAP__YES) {         \
                switch (sizeof(____r)) {                \
                case 2:                                 \
                        ____r = bswap_16(val);          \
                        break;                          \
                case 4:                                 \
                        ____r = bswap_32(val);          \
                        break;                          \
                case 8:                                 \
                        ____r = bswap_64(val);          \
                        break;                          \
                default:                                \
                        BUG_ON(1);                      \
                }                                       \
        }                                               \
        ____r;                                          \
})

#define DSO__DATA_CACHE_SIZE 4096
#define DSO__DATA_CACHE_MASK ~(DSO__DATA_CACHE_SIZE - 1)
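
/*
 * Example (illustrative sketch): DSO__SWAP() is used on values read from a
 * DSO whose endianness may differ from the host, once dso->needs_swap has
 * been determined, e.g.:
 *
 *	u32 raw, nr_entries;
 *
 *	... read 'raw' from the DSO's data file ...
 *	nr_entries = DSO__SWAP(dso, u32, raw);
 */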
/*
 * Data about backing storage DSO, comes from PERF_RECORD_MMAP2 meta events
 */
struct dso_id {
        u32 maj;
        u32 min;
        u64 ino;
        u64 ino_generation;
};
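
/*
 * Example (illustrative sketch): a dso_id is typically filled from the
 * corresponding fields of a PERF_RECORD_MMAP2 event, e.g.:
 *
 *	struct dso_id id = {
 *		.maj            = event->mmap2.maj,
 *		.min            = event->mmap2.min,
 *		.ino            = event->mmap2.ino,
 *		.ino_generation = event->mmap2.ino_generation,
 *	};
 */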

struct dso_cache {
        struct rb_node  rb_node;
        u64 offset;
        u64 size;
        char data[];
};

struct auxtrace_cache;

struct dso {
        pthread_mutex_t  lock;
        struct list_head node;
        struct rb_node   rb_node;       /* rbtree node sorted by long name */
        struct rb_root   *root;         /* root of rbtree that rb_node is in */
        struct rb_root_cached symbols;
        struct rb_root_cached symbol_names;
        struct rb_root_cached inlined_nodes;
        struct rb_root_cached srclines;
        struct {
                u64             addr;
                struct symbol   *symbol;
        } last_find_result;
        void             *a2l;
        char             *symsrc_filename;
        unsigned int     a2l_fails;
        enum dso_space_type     kernel;
        enum dso_swap_type      needs_swap;
        enum dso_binary_type    symtab_type;
        enum dso_binary_type    binary_type;
        enum dso_load_errno     load_errno;
        u8               adjust_symbols:1;
        u8               has_build_id:1;
        u8               has_srcline:1;
        u8               hit:1;
        u8               annotate_warned:1;
        u8               auxtrace_warned:1;
        u8               short_name_allocated:1;
        u8               long_name_allocated:1;
        u8               is_64_bit:1;
        bool             sorted_by_name;
        bool             loaded;
        u8               rel;
        struct build_id  bid;
        u64              text_offset;
        const char       *short_name;
        const char       *long_name;
        u16              long_name_len;
        u16              short_name_len;
        void             *dwfl;         /* DWARF debug info */
        struct auxtrace_cache *auxtrace_cache;
        int              comp;

        /* dso data file */
        struct {
                struct rb_root   cache;
                int              fd;
                int              status;
                u32              status_seen;
                u64              file_size;
                struct list_head open_entry;
                u64              debug_frame_offset;
                u64              eh_frame_hdr_offset;
        } data;
        /* bpf prog information */
        struct {
                u32             id;
                u32             sub_id;
                struct perf_env *env;
        } bpf_prog;

        union { /* Tool specific area */
                void    *priv;
                u64      db_id;
        };
        struct nsinfo   *nsinfo;
        struct dso_id    id;
        refcount_t       refcnt;
        char             name[];
};

/* dso__for_each_symbol - iterate over the symbols of given type
 *
 * @dso: the 'struct dso *' in which symbols are iterated
 * @pos: the 'struct symbol *' to use as a loop cursor
 * @n: the 'struct rb_node *' to use as a temporary storage
 */
#define dso__for_each_symbol(dso, pos, n)       \
        symbols__for_each_entry(&(dso)->symbols, pos, n)
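
/*
 * Example (illustrative sketch): walking every symbol of a dso:
 *
 *	struct symbol *pos;
 *	struct rb_node *nd;
 *
 *	dso__for_each_symbol(dso, pos, nd)
 *		pr_debug("%s\n", pos->name);
 */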

static inline void dso__set_loaded(struct dso *dso)
{
        dso->loaded = true;
}

struct dso *dso__new_id(const char *name, struct dso_id *id);
struct dso *dso__new(const char *name);
void dso__delete(struct dso *dso);

int dso__cmp_id(struct dso *a, struct dso *b);
void dso__set_short_name(struct dso *dso, const char *name, bool name_allocated);
void dso__set_long_name(struct dso *dso, const char *name, bool name_allocated);

int dso__name_len(const struct dso *dso);

struct dso *dso__get(struct dso *dso);
void dso__put(struct dso *dso);

static inline void __dso__zput(struct dso **dso)
{
        dso__put(*dso);
        *dso = NULL;
}

#define dso__zput(dso) __dso__zput(&dso)
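
/*
 * Example (illustrative sketch): dso objects are reference counted; take a
 * reference with dso__get() and drop it with dso__put(), or use dso__zput()
 * to drop it and clear the pointer in one go:
 *
 *	struct dso *dso = dso__get(map->dso);
 *
 *	... use dso ...
 *	dso__zput(dso);		// also sets dso to NULL
 */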

bool dso__loaded(const struct dso *dso);

static inline bool dso__has_symbols(const struct dso *dso)
{
        return !RB_EMPTY_ROOT(&dso->symbols.rb_root);
}

bool dso__sorted_by_name(const struct dso *dso);
void dso__set_sorted_by_name(struct dso *dso);
void dso__sort_by_name(struct dso *dso);

void dso__set_build_id(struct dso *dso, struct build_id *bid);
bool dso__build_id_equal(const struct dso *dso, struct build_id *bid);
void dso__read_running_kernel_build_id(struct dso *dso,
                                       struct machine *machine);
int dso__kernel_module_get_build_id(struct dso *dso, const char *root_dir);

char dso__symtab_origin(const struct dso *dso);
int dso__read_binary_type_filename(const struct dso *dso, enum dso_binary_type type,
                                   char *root_dir, char *filename, size_t size);
bool is_kernel_module(const char *pathname, int cpumode);
bool dso__needs_decompress(struct dso *dso);
int dso__decompress_kmodule_fd(struct dso *dso, const char *name);
int dso__decompress_kmodule_path(struct dso *dso, const char *name,
                                 char *pathname, size_t len);
int filename__decompress(const char *name, char *pathname,
                         size_t len, int comp, int *err);

#define KMOD_DECOMP_NAME  "/tmp/perf-kmod-XXXXXX"
#define KMOD_DECOMP_LEN   sizeof(KMOD_DECOMP_NAME)
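
/*
 * Example (illustrative sketch, assuming a 0-on-success return): decompress
 * a compressed kernel module to a temporary file before reading it:
 *
 *	char decomp[KMOD_DECOMP_LEN];
 *
 *	if (dso__needs_decompress(dso) &&
 *	    dso__decompress_kmodule_path(dso, dso->long_name,
 *					 decomp, sizeof(decomp)) == 0) {
 *		... read from 'decomp' ...
 *		unlink(decomp);
 *	}
 */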

struct kmod_path {
        char *name;
        int   comp;
        bool  kmod;
};

int __kmod_path__parse(struct kmod_path *m, const char *path,
                       bool alloc_name);

#define kmod_path__parse(__m, __p)      __kmod_path__parse(__m, __p, false)
#define kmod_path__parse_name(__m, __p) __kmod_path__parse(__m, __p, true)
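
/*
 * Example (illustrative sketch): parsing a kernel module path:
 *
 *	struct kmod_path m;
 *
 *	if (!kmod_path__parse_name(&m, "/lib/modules/.../fs/xfs/xfs.ko.xz")) {
 *		m.kmod is true, m.comp is non-zero and m.name is expected
 *		to hold the bracketed module name, e.g. "[xfs]";
 *		free(m.name);
 *	}
 */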

void dso__set_module_info(struct dso *dso, struct kmod_path *m,
                          struct machine *machine);

/*
 * The dso__data_* external interface provides following functions:
 *   dso__data_get_fd
 *   dso__data_put_fd
 *   dso__data_close
 *   dso__data_size
 *   dso__data_read_offset
 *   dso__data_read_addr
 *   dso__data_write_cache_offs
 *   dso__data_write_cache_addr
 *
 * Please refer to the dso.c object code for each function and
 * arguments documentation. Following text tries to explain the
 * dso file descriptor caching.
 *
 * The dso__data* interface allows caching of opened file descriptors
 * to speed up the dso data accesses. The idea is to leave the file
 * descriptor opened ideally for the whole life of the dso object.
 *
 * The current usage of the dso__data_* interface is as follows:
 *
 * Get DSO's fd:
 *   int fd = dso__data_get_fd(dso, machine);
 *   if (fd >= 0) {
 *       USE 'fd' SOMEHOW
 *       dso__data_put_fd(dso);
 *   }
 *
 * Read DSO's data:
 *   n = dso__data_read_offset(dso_0, &machine, 0, buf, BUFSIZE);
 *   n = dso__data_read_addr(dso_0, &machine, 0, buf, BUFSIZE);
 *
 * Eventually close DSO's fd:
 *   dso__data_close(dso);
 *
 * It is not necessary to close the DSO object data file. Each time new
 * DSO data file is opened, the limit (RLIMIT_NOFILE/2) is checked. Once
 * it is crossed, the oldest opened DSO object is closed.
 *
 * The dso__delete function calls close_dso function to ensure the
 * data file descriptor gets closed/unmapped before the dso object
 * is freed.
 *
 * TODO
 */
int dso__data_get_fd(struct dso *dso, struct machine *machine);
void dso__data_put_fd(struct dso *dso);
void dso__data_close(struct dso *dso);

int dso__data_file_size(struct dso *dso, struct machine *machine);
off_t dso__data_size(struct dso *dso, struct machine *machine);
ssize_t dso__data_read_offset(struct dso *dso, struct machine *machine,
                              u64 offset, u8 *data, ssize_t size);
ssize_t dso__data_read_addr(struct dso *dso, struct map *map,
                            struct machine *machine, u64 addr,
                            u8 *data, ssize_t size);
bool dso__data_status_seen(struct dso *dso, enum dso_data_status_seen by);
ssize_t dso__data_write_cache_offs(struct dso *dso, struct machine *machine,
                                   u64 offset, const u8 *data, ssize_t size);
ssize_t dso__data_write_cache_addr(struct dso *dso, struct map *map,
                                   struct machine *machine, u64 addr,
                                   const u8 *data, ssize_t size);
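
/*
 * Example (illustrative sketch): read the first bytes mapped at a map's
 * start address through the dso data cache:
 *
 *	u8 buf[64];
 *	ssize_t n = dso__data_read_addr(dso, map, machine, map->start,
 *					buf, sizeof(buf));
 */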

struct map *dso__new_map(const char *name);
struct dso *machine__findnew_kernel(struct machine *machine, const char *name,
                                    const char *short_name, int dso_type);

void dso__reset_find_symbol_cache(struct dso *dso);

size_t dso__fprintf_symbols_by_name(struct dso *dso, FILE *fp);
size_t dso__fprintf(struct dso *dso, FILE *fp);

static inline bool dso__is_vmlinux(struct dso *dso)
{
        return dso->binary_type == DSO_BINARY_TYPE__VMLINUX ||
               dso->binary_type == DSO_BINARY_TYPE__GUEST_VMLINUX;
}

static inline bool dso__is_kcore(struct dso *dso)
{
        return dso->binary_type == DSO_BINARY_TYPE__KCORE ||
               dso->binary_type == DSO_BINARY_TYPE__GUEST_KCORE;
}

static inline bool dso__is_kallsyms(struct dso *dso)
{
        return dso->kernel && dso->long_name[0] != '/';
}

void dso__free_a2l(struct dso *dso);

enum dso_type dso__type(struct dso *dso, struct machine *machine);

int dso__strerror_load(struct dso *dso, char *buf, size_t buflen);

void reset_fd_limit(void);

#endif /* __PERF_DSO */