Ingo writes:
"perf fixes:
Misc perf tooling fixes."
* 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
perf tools: Stop fallbacking to kallsyms for vdso symbols lookup
perf tools: Pass build flags to traceevent build
perf report: Don't crash on invalid inline debug information
perf cpu_map: Align cpu map synthesized events properly.
perf tools: Fix tracing_path_mount proper path
perf tools: Fix use of alternatives to find JDIR
perf evsel: Store ids for events with their own cpus perf_event__synthesize_event_update_cpus
perf vendor events intel: Fix wrong filter_band* values for uncore events
Revert "perf tools: Fix PMU term format max value calculation"
tools headers uapi: Sync kvm.h copy
tools arch uapi: Sync the x86 kvm.h copy
Merge tag 'trace-v4.19-rc8-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace
Steven writes:
"tracing: A few small fixes to synthetic events
Masami found some issues with the creation of synthetic events. The
first two patches fix handling of unsigned type, and handling of a
space before an ending semi-colon.
The third patch adds a selftest to test the processing of synthetic
events."
* tag 'trace-v4.19-rc8-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace:
selftests: ftrace: Add synthetic event syntax testcase
tracing: Fix synthetic event to allow semicolon at end
tracing: Fix synthetic event to accept unsigned modifier
This tests that a bctr (Branch to counter and link), i.e. a function
call, to a wildly out-of-bounds address is handled correctly.
Some old kernel versions didn't handle it correctly, see e.g.:
"powerpc/slb: Force a full SLB flush when we insert for a bad EA"
https://lists.ozlabs.org/pipermail/linuxppc-dev/2017-April/157397.html
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
This adds a test to verify proper functioning of the rfi flush
capability implemented to mitigate Meltdown. The test works by
measuring the number of L1d cache misses encountered while loading
data from memory. Since the L1d cache is flushed across a system call
when rfi_flush is enabled, the number of cache misses is expected to
match the number of cachelines corresponding to the data being
loaded.
The current system setting is reflected via powerpc/rfi_flush under
debugfs (assumed to be /sys/kernel/debug/). This test verifies the
expected result with rfi_flush enabled as well as when it is disabled.
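A minimal sketch of the measurement idea (assuming a 128-byte
cacheline and an illustrative 1 MB buffer; this is not the actual
selftest):
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

#define SIZE (1 << 20)

int main(void)
{
	struct perf_event_attr attr;
	long long misses = 0;
	volatile char sum = 0;
	char *data, *p;
	int fd, i;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_HW_CACHE;
	attr.config = PERF_COUNT_HW_CACHE_L1D |
		      (PERF_COUNT_HW_CACHE_OP_READ << 8) |
		      (PERF_COUNT_HW_CACHE_RESULT_MISS << 16);
	attr.disabled = 1;
	attr.exclude_kernel = 1;

	fd = syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
	if (fd < 0)
		return 1;

	data = malloc(SIZE);
	memset(data, 1, SIZE);

	ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
	for (i = 0; i < 1000; i++) {
		getppid();	/* syscall; with rfi_flush=1 the L1d is flushed on return */
		for (p = data; p < data + SIZE; p += 128)	/* one load per cacheline */
			sum += *p;
	}
	ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
	read(fd, &misses, sizeof(misses));
	printf("L1d load misses: %lld\n", misses);
	return 0;
}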
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
[mpe: Add SPDX tags, clang format, skip if the debugfs is missing, use
__u64 and SANE_USERSPACE_TYPES to avoid printf() build errors.]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
... so that it can be used by others.
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Add a testcase to check the syntax and field types for the
synthetic_events interface.
Link: http://lkml.kernel.org/r/153986838264.18251.16627517536956299922.stgit@devbox
Acked-by: Shuah Khan <shuah@kernel.org>
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Tests are added to make sure CGROUP_SKB cannot access:
tc_classid, data_meta, flow_keys
and can read and write:
mark, priority, and cb[0-4]
and can read other fields.
To make the selftest with skb->sk work, a dummy sk is added in
bpf_prog_test_run_skb().
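For illustration, a minimal CGROUP_SKB program touching the fields the
tests cover (program name and file layout are assumptions, following
the selftests' conventions):
#include <linux/bpf.h>
#include "bpf_helpers.h"

SEC("cgroup/skb")
int cg_skb_access(struct __sk_buff *skb)
{
	__u32 m = skb->mark;		/* mark: read and write allowed */

	skb->priority = m;
	skb->cb[0] = skb->cb[4];	/* cb[0-4]: read and write allowed */
	skb->cb[1] = skb->len;		/* other fields, e.g. len, are readable */

	/* skb->tc_classid, skb->data_meta and skb->flow_keys would all be
	 * rejected by the verifier for this program type. */
	return 1;			/* allow the packet */
}

char _license[] SEC("license") = "GPL";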
Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Given libbpf is a generic library and not restricted to x86-64 only,
the compiler barrier in bpf_perf_event_read_simple() after fetching
the head needs to be replaced with smp_rmb() at minimum. Also, when
writing out the tail, we should use WRITE_ONCE() to avoid store
tearing.
Now that we have the logic in place in the ring_buffer_read_head() and
ring_buffer_write_tail() helpers, also used by the perf tool, which
select the correct and best variant for a given architecture (e.g.
x86-64 can avoid CPU barriers entirely), make use of these in order
to fix bpf_perf_event_read_simple().
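As a sketch of the corrected read pattern with the new helpers
(simplified: error handling and records that wrap at the buffer
boundary are omitted, and the function name is illustrative):
#include <linux/perf_event.h>
#include <linux/ring_buffer.h>	/* the new tools/include helpers */

static void drain(struct perf_event_mmap_page *page, void *data,
		  __u64 data_size, void (*handle)(struct perf_event_header *))
{
	__u64 head = ring_buffer_read_head(page);	/* acquire: pairs with the kernel's store */
	__u64 tail = page->data_tail;			/* only userspace writes this */

	while (tail != head) {
		struct perf_event_header *hdr = data + (tail & (data_size - 1));

		handle(hdr);
		tail += hdr->size;
	}
	ring_buffer_write_tail(page, tail);	/* release: kernel may reuse the space */
}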
Fixes: d0cabbb021be ("tools: bpf: move the event reading loop to libbpf")
Fixes: 39111695b1b8 ("samples: bpf: add bpf_perf_event_output example")
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Currently, on x86-64, perf uses LFENCE and MFENCE (rmb() and mb(),
respectively) when processing events from the perf ring buffer, which
is unnecessarily expensive, as we can get away with more lightweight
barriers, in particular given this is a critical fast path in perf.
According to Peter, rmb()/mb() were added back then via a94d342b9cb0
("tools/perf: Add required memory barriers") at a time when the kernel
still supported chips that needed them, but nowadays support for these
has been ditched completely, therefore we can fix them up as well.
While for x86-64, replacing rmb() and mb() with smp_*() variants would
result in just a compiler barrier for the former and LOCK + ADD for
the latter (__sync_synchronize() uses the slower MFENCE, by the way),
Peter suggested we can use smp_{load_acquire,store_release}() instead
for architectures where their implementation doesn't resolve to a
slower smp_mb(). Thus, e.g. on x86-64 we are able to avoid a CPU
barrier entirely due to TSO. For architectures where smp_load_acquire()
would need to use smp_mb(), e.g. on arm, we stick to the cheaper
smp_rmb() variant for fetching the head.
This work adds the helpers ring_buffer_read_head() and
ring_buffer_write_tail() to the tools infrastructure. To fetch
data_head from the perf control page, they either switch to
smp_load_acquire() on architectures where it is cheaper, or use a
READ_ONCE() + smp_rmb() barrier pair where it is not; to write
data_tail they use smp_store_release(), which is an smp_mb() +
WRITE_ONCE() combination or a cheaper variant if the architecture
allows for it. Architectures that rely on smp_rmb() and smp_mb() can
further improve performance in a follow-up step by implementing the
two under tools/arch/*/include/asm/barrier.h, so that they don't have
to fall back to rmb() and mb() in tools/include/asm/barrier.h.
Switch perf to use ring_buffer_read_head() and ring_buffer_write_tail()
so it can make use of the optimizations. Later, we convert libbpf as
well to use the same helpers.
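Roughly, and as a sketch only (the actual per-architecture #if list in
the helpers is longer than shown here), they boil down to:
static inline u64 ring_buffer_read_head(struct perf_event_mmap_page *base)
{
#if defined(__x86_64__)	/* acquire is free under TSO: compiler barrier only */
	return smp_load_acquire(&base->data_head);
#else			/* e.g. arm, where acquire would resolve to smp_mb() */
	u64 head = READ_ONCE(base->data_head);

	smp_rmb();
	return head;
#endif
}

static inline void ring_buffer_write_tail(struct perf_event_mmap_page *base,
					  u64 tail)
{
	/* smp_mb() + WRITE_ONCE(), or a cheaper variant where the arch allows */
	smp_store_release(&base->data_tail, tail);
}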
Side note [0]: the topic has been raised of whether one could simply use
the C11 gcc builtins [1] for the smp_load_acquire() and smp_store_release()
instead:
__atomic_load_n(ptr, __ATOMIC_ACQUIRE);
__atomic_store_n(ptr, val, __ATOMIC_RELEASE);
The kernel and (presumably) the tooling shipped along with it have a
minimum requirement of being buildable with gcc-4.6, which does not
have the C11 builtins. While in general the C11 memory model doesn't
align with the kernel's, the C11 load-acquire and store-release alone
/could/ suffice, however. The issue is that how the load-acquire and
store-release are done is implementation dependent in the compiler,
and the mapping for supported compilers must be verified/tracked on a
case-by-case basis to be compatible with the kernel's implementation
(unless an architecture also uses them from the kernel side). The
implementations of smp_load_acquire() and smp_store_release() in this
patch have been adapted from the kernel-side ones to have a concrete
and compatible mapping in place.
[0] http://patchwork.ozlabs.org/patch/985422/
[1] https://gcc.gnu.org/onlinedocs/gcc/_005f_005fatomic-Builtins.html
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
test_maps:
Tests that queue/stack maps behave correctly even in corner cases
(see the sketch below).
test_progs:
Tests the new eBPF helpers.
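For reference, a hedged sketch of the queue map API these tests
exercise from user space (a queue map has no key, so key_size is 0 and
the key argument is NULL; the function name is illustrative):
#include <bpf/bpf.h>

int queue_demo(void)
{
	__u32 val = 42, out = 0;
	int fd;

	fd = bpf_create_map(BPF_MAP_TYPE_QUEUE, 0 /* key_size */,
			    sizeof(val), 32 /* max_entries */, 0);
	if (fd < 0)
		return -1;

	bpf_map_update_elem(fd, NULL, &val, 0);		/* push */
	bpf_map_lookup_elem(fd, NULL, &out);		/* peek */
	bpf_map_lookup_and_delete_elem(fd, NULL, &out);	/* pop */
	return out == 42 ? 0 : -1;
}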
Signed-off-by: Mauricio Vasquez B <mauricio.vasquez@polito.it>
Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Sync both files.
Signed-off-by: Mauricio Vasquez B <mauricio.vasquez@polito.it>
Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
This simply adds the field to 'struct perf_evsel' and allows setting
it via the event parser. To test it, let's trace 'trace' itself:
First look at where in a function that receives an evsel we can put a probe
to read how evsel->max_events was setup:
# perf probe -x ~/bin/perf -L trace__event_handler
<trace__event_handler@/home/acme/git/perf/tools/perf/builtin-trace.c:0>
0 static int trace__event_handler(struct trace *trace, struct perf_evsel *evsel,
union perf_event *event __maybe_unused,
struct perf_sample *sample)
3 {
4 struct thread *thread = machine__findnew_thread(trace->host, sample->pid, sample->tid);
5 int callchain_ret = 0;
7 if (sample->callchain) {
8 callchain_ret = trace__resolve_callchain(trace, evsel, sample, &callchain_cursor);
9 if (callchain_ret == 0) {
10 if (callchain_cursor.nr < trace->min_stack)
11 goto out;
12 callchain_ret = 1;
}
}
See what variables we can probe at line 7:
# perf probe -x ~/bin/perf -V trace__event_handler:7
Available variables at trace__event_handler:7
@<trace__event_handler+89>
int callchain_ret
struct perf_evsel* evsel
struct perf_sample* sample
struct thread* thread
struct trace* trace
union perf_event* event
Add a probe at that line asking for evsel->max_events to be collected and named
as "max_events":
# perf probe -x ~/bin/perf trace__event_handler:7 'max_events=evsel->max_events'
Added new event:
probe_perf:trace__event_handler (on trace__event_handler:7 in /home/acme/bin/perf with max_events=evsel->max_events)
You can now use it in all perf tools, such as:
perf record -e probe_perf:trace__event_handler -aR sleep 1
Now use 'perf trace', here aliased to just 'trace', to trace trace,
i.e. the first 'trace' is tracing just that
'probe_perf:trace__event_handler' event, while the traced trace is
tracing all scheduler tracepoints; it will stop at two events
(--max-events 2) and will set evsel->max_events for all the sched
tracepoints to 9. We will see the output of both traces intermixed:
# trace -e *perf:*event_handler trace --max-events 2 -e sched:*/nr=9/
0.000 :0/0 sched:sched_waking:comm=rcu_sched pid=10 prio=120 target_cpu=000
0.009 :0/0 sched:sched_wakeup:comm=rcu_sched pid=10 prio=120 target_cpu=000
0.000 trace/23949 probe_perf:trace__event_handler:(48c34a) max_events=0x9
0.046 trace/23949 probe_perf:trace__event_handler:(48c34a) max_events=0x9
#
Now, if the traced trace sends its output to /dev/null, we'll see just
what the first level trace outputs: that evsel->max_events is indeed
being set to 9:
# trace -e *perf:*event_handler trace -o /dev/null --max-events 2 -e sched:*/nr=9/
0.000 trace/23961 probe_perf:trace__event_handler:(48c34a) max_events=0x9
0.030 trace/23961 probe_perf:trace__event_handler:(48c34a) max_events=0x9
#
Now that we can set evsel->max_events, we can go to the next step, honour that
per-event property in 'perf trace'.
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Milian Wolff <milian.wolff@kdab.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-og00yasj276joem6e14l1eas@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
net/sched/cls_api.c has overlapping changes to a call to
nlmsg_parse(): one (from 'net') added rtm_tca_policy instead of NULL
as the 5th argument, and another (from 'net-next') added cb->extack
instead of NULL as the 6th argument.
net/ipv4/ipmr_base.c is a case of a bug fix in 'net' being done to
code which moved (to mr_table_dump) in 'net-next'. Thanks to David
Ahern for the heads up.
Signed-off-by: David S. Miller <davem@davemloft.net>
Merge tag 'usb-4.19-final' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb
I wrote:
"USB fixes for 4.19-final
Here are a small number of last-minute USB driver fixes
Included here are:
- spectre fix for usb storage gadgets
- xhci fixes
- cdc-acm fixes
- usbip fixes for reported problems
All of these have been in linux-next with no reported issues."
* tag 'usb-4.19-final' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb:
usb: gadget: storage: Fix Spectre v1 vulnerability
USB: fix the usbfs flag sanitization for control transfers
usb: xhci: pci: Enable Intel USB role mux on Apollo Lake platforms
usb: roles: intel_xhci: Fix Unbalanced pm_runtime_enable
cdc-acm: correct counting of UART states in serial state notification
cdc-acm: do not reset notification buffer index upon urb unlinking
cdc-acm: fix race between reset and control messaging
usb: usbip: Fix BUG: KASAN: slab-out-of-bounds in vhci_hub_control()
selftests: usbip: add wait after attach and before checking port status
For completeness, will be used in 'perf trace --max-events'.
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Kim Phillips <kim.phillips@arm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-glaj3pwespxfj2fdjs9a20b6@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
When /tmp is mounted with noexec, mksyscalltbl fails:
[snip]
|perf-1.0/tools/perf/arch/arm64/entry/syscalls//mksyscalltbl:
/tmp/create-table-6VGPSt: Permission denied
[snip]
If the TMPDIR environment variable is set, use it instead of the
default /tmp as the directory for the temporary file.
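mksyscalltbl itself is a shell script, but the pattern the fix follows
looks roughly like this in C (function name illustrative):
#include <stdio.h>
#include <stdlib.h>

static int create_table_tmpfile(char *path, size_t len)
{
	const char *tmpdir = getenv("TMPDIR");

	if (!tmpdir)
		tmpdir = "/tmp";	/* the previously hardcoded default */
	snprintf(path, len, "%s/create-table-XXXXXX", tmpdir);
	return mkstemp(path);		/* works when /tmp is noexec but TMPDIR isn't */
}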
Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Kim Phillips <kim.phillips@arm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>
Cc: Sébastien Boisvert <sboisvert@gydle.com>
Cc: Thomas Richter <tmricht@linux.vnet.ibm.com>
Fixes: 2b5882435606 ("perf arm64: Generate system call table from asm/unistd.h")
LPU-Reference: 1539851173-14959-1-git-send-email-hongxu.jia@windriver.com
Link: https://lkml.kernel.org/n/tip-1qrgq840ci0c5cy4oww957ge@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
The nfp driver is currently always JITing the BPF for 4 context/thread
mode of the NFP flow processors. Tell this to the disassembler,
otherwise some registers may be incorrectly decoded.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Jiong Wang <jiong.wang@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
The FILE pointer variable f is opened but never closed.
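The shape of the fix, as an illustrative sketch (the actual file and
parsing logic live in the patched tool):
#include <stdio.h>

static int parse_file(const char *path)
{
	FILE *f = fopen(path, "r");

	if (!f)
		return -1;
	/* ... read and parse ... */
	fclose(f);	/* the fix: previously the handle leaked here */
	return 0;
}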
Signed-off-by: Peng Hao <peng.hao2@zte.com.cn>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Currently the call to atoi is being passed a single char string
that is not null terminated, so there is a potential read overrun
along the stack when parsing for an integer value. Fix this by
instead using a 2 char string that is initialized to all zeros
to ensure that a 1 char read into the string is always terminated
with a \0.
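As an illustrative sketch, with 'c' standing in for the single
character being parsed:
#include <stdlib.h>

static int digit_to_int(char c)
{
	/* before: a 1-char buffer, so atoi() read past the end */
	char buf[2] = { 0 };	/* buf[1] stays '\0', terminating the string */

	buf[0] = c;
	return atoi(buf);
}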
Detected by cppcheck:
"Invalid atoi() argument nr 1. A nul-terminated string is required."
Fixes: 3391ba0e2792 ("usbip: tools: Extract generic code to be shared with vudc backend")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Using the sh_entsize for both values (the PLT header size and the PLT
entry size) isn't correct. It happens to be correct on x86...
For both 32-bit and 64-bit sparc, there are four PLT entries in the PLT
section.
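A hedged sketch of the kind of per-machine special-casing involved;
the concrete sizes below are illustrative assumptions, not necessarily
the values the patch uses:
#include <gelf.h>

static void plt_sizes(GElf_Ehdr *ehdr, GElf_Shdr *shdr_plt,
		      GElf_Xword *header_size, GElf_Xword *entry_size)
{
	switch (ehdr->e_machine) {
	case EM_SPARC:		/* PLT header spans several 12-byte entries */
		*entry_size = 12;
		*header_size = 4 * *entry_size;
		break;
	case EM_SPARCV9:	/* 64-bit sparc uses 32-byte entries */
		*entry_size = 32;
		*header_size = 4 * *entry_size;
		break;
	default:		/* x86: header == entry size, so sh_entsize "worked" */
		*entry_size = shdr_plt->sh_entsize;
		*header_size = shdr_plt->sh_entsize;
		break;
	}
}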
Signed-off-by: David S. Miller <davem@davemloft.net>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Alexis Berlemont <alexis.berlemont@gmail.com>
Cc: David Tolnay <dtolnay@gmail.com>
Cc: Hanjun Guo <guohanjun@huawei.com>
Cc: Hemant Kumar <hemant@linux.vnet.ibm.com>
Cc: Li Bin <huawei.libin@huawei.com>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Milian Wolff <milian.wolff@kdab.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Wang Nan <wangnan0@huawei.com>
Cc: zhangmengting@huawei.com
Fixes: b2f7605076d6 ("perf symbols: Fix plt entry calculation for ARM and AARCH64")
Link: http://lkml.kernel.org/r/20181017.120859.2268840244308635255.davem@davemloft.net
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
* pm-devfreq:
PM / devfreq: remove redundant null pointer check before kfree
PM / devfreq: stopping the governor before device_unregister()
PM / devfreq: Convert to using %pOFn instead of device_node.name
PM / devfreq: Make update_devfreq() public
PM / devfreq: Don't adjust to user limits in governors
PM / devfreq: Fix handling of min/max_freq == 0
PM / devfreq: Drop custom MIN/MAX macros
PM / devfreq: Fix devfreq_add_device() when drivers are built as modules.
* pm-tools:
PM / tools: sleepgraph and bootgraph: upgrade to v5.2
PM / tools: sleepgraph: first batch of v5.2 changes
cpupower: Fix coredump on VMWare
cpupower: Fix AMD Family 0x17 msr_pstate size
cpupower: remove stringop-truncation warning
Merge tag 'perf-urgent-for-mingo-4.19-20181017' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/urgent
Pull perf/urgent fixes from Arnaldo Carvalho de Melo:
- Stop falling back to kallsyms for vDSO symbols lookup, this wasn't
being really used and is not valid in arches such as Sparc, where
user and kernel space don't share the address space, relying only on
cpumode to figure out what DSOs to lookup (Arnaldo Carvalho de Melo)
- Align CPU map synthesized events properly, fixing SIGBUS in
CPUs like Sparc (David Miller)
- Fix use of alternatives to find JDIR (Jarod Wilson)
- Store IDs for events with their own CPUs when synthesizing user
level event details (scale, unit, etc) events, fixing a crash
when recording a PMU event with a cpumask defined (Jiri Olsa)
- Fix wrong filter_band* values for uncore Intel vendor events (Jiri Olsa)
- Fix detection of tracefs path in systems without tracefs, where
that path should be the debugfs mountpoint plus "/tracing/" (Jiri Olsa)
- Pass build flags to traceevent build, allowing using alternative
flags in distro packages, RPM, for instance (Jiri Olsa)
- Fix 'perf report' crash on invalid inline debug information (Milian Wolff)
- Synch KVM UAPI copies (Arnaldo Carvalho de Melo)
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
David reports that:
<quote>
Perf has this hack where it uses the kernel symbol map as a backup when
a symbol can't be found in the user's symbol table(s).
This causes problems because the tests driving this code path use
machine__kernel_ip(), and that is completely meaningless on Sparc. On
sparc64 the kernel and user live in physically separate virtual address
spaces, rather than a shared one. And the kernel lives at a virtual
address that overlaps common userspace addresses. So this test passes
almost all the time when a user symbol lookup fails.
The consequence of this is that, if the unfound user virtual address in
the sample doesn't match up to a kernel symbol either, we trigger things
like this code in builtin-top.c:
if (al.sym == NULL && al.map != NULL) {
	const char *msg = "Kernel samples will not be resolved.\n";
	/*
	 * As we do lazy loading of symtabs we only will know if the
	 * specified vmlinux file is invalid when we actually have a
	 * hit in kernel space and then try to load it. So if we get
	 * here and there are _no_ symbols in the DSO backing the
	 * kernel map, bail out.
	 *
	 * We may never get here, for instance, if we use -K/
	 * --hide-kernel-symbols, even if the user specifies an
	 * invalid --vmlinux ;-)
	 */
	if (!machine->kptr_restrict_warned && !top->vmlinux_warned &&
	    __map__is_kernel(al.map) && map__has_symbols(al.map)) {
		if (symbol_conf.vmlinux_name) {
			char serr[256];
			dso__strerror_load(al.map->dso, serr, sizeof(serr));
			ui__warning("The %s file can't be used: %s\n%s",
				    symbol_conf.vmlinux_name, serr, msg);
		} else {
			ui__warning("A vmlinux file was not found.\n%s",
				    msg);
		}
		if (use_browser <= 0)
			sleep(5);
		top->vmlinux_warned = true;
	}
}
When I fire up a compilation on sparc, this triggers immediately.
I'm trying to figure out what the "backup to kernel map" code is
accomplishing.
I see some language in the current code and in the changes that have
happened in this area talking about vdso. Does that really happen?
The vdso is mapped into userspace virtual addresses, not kernel ones.
More history. This didn't cause problems on sparc some time ago,
because the kernel IP check used to be "ip < 0" :-) Sparc kernel
addresses are not negative. But now with machine__kernel_ip(), which
works using the symbol table determined kernel address range, it does
trigger.
What it all boils down to is that on architectures like sparc,
machine__kernel_ip() should always return false in this scenario, and
therefore this kind of logic:
if (cpumode == PERF_RECORD_MISC_USER && machine &&
mg != &machine->kmaps &&
machine__kernel_ip(machine, al->addr)) {
is basically invalid. PERF_RECORD_MISC_USER implies no kernel address
can possibly match for the sample/event in question (no matter how
hard you try!) :-)
</quote>
So, I thought something had changed and that in the past we would
somehow find that address in kallsyms, but I couldn't find anything to
back that up. The patch introducing this is over a decade old, lots of
things have changed since, so I was just thinking I was missing
something.
I tried a gtod busy loop to generate vdso activity and added a 'perf
probe' at that branch, on x86_64 to see if it ever gets hit:
Made thread__find_map() noinline, as 'perf probe' on lines of inline
functions seems not to be working, only at function start. (Masami?)
# perf probe -x ~/bin/perf -L thread__find_map:57
<thread__find_map@/home/acme/git/perf/tools/perf/util/event.c:57>
57 if (cpumode == PERF_RECORD_MISC_USER && machine &&
58 mg != &machine->kmaps &&
59 machine__kernel_ip(machine, al->addr)) {
60 mg = &machine->kmaps;
61 load_map = true;
62 goto try_again;
}
} else {
/*
* Kernel maps might be changed when loading
* symbols so loading
* must be done prior to using kernel maps.
*/
69 if (load_map)
70 map__load(al->map);
71 al->addr = al->map->map_ip(al->map, al->addr);
# perf probe -x ~/bin/perf thread__find_map:60
Added new event:
probe_perf:thread__find_map (on thread__find_map:60 in /home/acme/bin/perf)
You can now use it in all perf tools, such as:
perf record -e probe_perf:thread__find_map -aR sleep 1
#
Then used this to see if, system wide, those probe points were being hit:
# perf trace -e *perf:thread*/max-stack=8/
^C[root@jouet ~]#
No hits when running 'perf top' and:
# cat gtod.c
#include <sys/time.h>
int main(void)
{
	struct timeval tv;
	while (1)
		gettimeofday(&tv, 0);
	return 0;
}
[root@jouet c]# ./gtod
^C
Pressed 'P' in 'perf top' and the [vdso] samples are there:
62.84% [vdso] [.] __vdso_gettimeofday
8.13% gtod [.] main
7.51% [vdso] [.] 0x0000000000000914
5.78% [vdso] [.] 0x0000000000000917
5.43% gtod [.] _init
2.71% [vdso] [.] 0x000000000000092d
0.35% [kernel] [k] native_io_delay
0.33% libc-2.26.so [.] __memmove_avx_unaligned_erms
0.20% [vdso] [.] 0x000000000000091d
0.17% [i2c_i801] [k] i801_access
0.06% firefox [.] free
0.06% libglib-2.0.so.0.5400.3 [.] g_source_iter_next
0.05% [vdso] [.] 0x0000000000000919
0.05% libpthread-2.26.so [.] __pthread_mutex_lock
0.05% libpixman-1.so.0.34.0 [.] 0x000000000006d3a7
0.04% [kernel] [k] entry_SYSCALL_64_trampoline
0.04% libxul.so [.] style::dom_apis::query_selector_slow
0.04% [kernel] [k] module_get_kallsym
0.04% firefox [.] malloc
0.04% [vdso] [.] 0x0000000000000910
I added a 'perf probe' to thread__find_map:69, and that surely got tons
of hits, i.e. for every map found, just to make sure the 'perf probe'
command was really working.
In the process I noticed a bug: we only have records for '[vdso]' for
pre-existing commands, i.e. ones that are running when we start 'perf
top', for which we generate PERF_RECORD_MMAP events by looking at
/proc/PID/maps. I.e., tracing system-wide, only pre-existing processes
get a [vdso] map (when they have one):
[root@jouet ~]# perf probe -x ~/bin/perf __machine__addnew_vdso
Added new event:
probe_perf:__machine__addnew_vdso (on __machine__addnew_vdso in /home/acme/bin/perf)
You can now use it in all perf tools, such as:
perf record -e probe_perf:__machine__addnew_vdso -aR sleep 1
[root@jouet ~]# perf trace -e probe_perf:__machine__addnew_vdso/max-stack=8/
0.000 probe_perf:__machine__addnew_vdso:(568eb3)
__machine__addnew_vdso (/home/acme/bin/perf)
map__new (/home/acme/bin/perf)
machine__process_mmap2_event (/home/acme/bin/perf)
machine__process_event (/home/acme/bin/perf)
perf_event__process (/home/acme/bin/perf)
perf_tool__process_synth_event (/home/acme/bin/perf)
perf_event__synthesize_mmap_events (/home/acme/bin/perf)
__event__synthesize_thread (/home/acme/bin/perf)
The kernel is generating a PERF_RECORD_MMAP for vDSOs, but somehow
'perf top' is not getting those records while 'perf record' is:
# perf record ~acme/c/gtod
^C[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.076 MB perf.data (1499 samples) ]
# perf report -D | grep PERF_RECORD_MMAP2
71293612401913 0x11b48 [0x70]: PERF_RECORD_MMAP2 25484/25484: [0x400000(0x1000) @ 0 fd:02 1137 541179306]: r-xp /home/acme/c/gtod
71293612419012 0x11be0 [0x70]: PERF_RECORD_MMAP2 25484/25484: [0x7fa4a2783000(0x227000) @ 0 fd:00 3146370 854107250]: r-xp /usr/lib64/ld-2.26.so
71293612432110 0x11c50 [0x60]: PERF_RECORD_MMAP2 25484/25484: [0x7ffcdb53a000(0x2000) @ 0 00:00 0 0]: r-xp [vdso]
71293612509944 0x11cb0 [0x70]: PERF_RECORD_MMAP2 25484/25484: [0x7fa4a23cd000(0x3b6000) @ 0 fd:00 3149723 262067164]: r-xp /usr/lib64/libc-2.26.so
#
# perf script | grep vdso | head
gtod 25484 71293.612768: 2485554 cycles:ppp: 7ffcdb53a914 [unknown] ([vdso])
gtod 25484 71293.613576: 2149343 cycles:ppp: 7ffcdb53a917 [unknown] ([vdso])
gtod 25484 71293.614274: 1814652 cycles:ppp: 7ffcdb53aca8 __vdso_gettimeofday+0x98 ([vdso])
gtod 25484 71293.614862: 1669070 cycles:ppp: 7ffcdb53acc5 __vdso_gettimeofday+0xb5 ([vdso])
gtod 25484 71293.615404: 1451589 cycles:ppp: 7ffcdb53acc5 __vdso_gettimeofday+0xb5 ([vdso])
gtod 25484 71293.615999: 1269941 cycles:ppp: 7ffcdb53ace6 __vdso_gettimeofday+0xd6 ([vdso])
gtod 25484 71293.616405: 1177946 cycles:ppp: 7ffcdb53a914 [unknown] ([vdso])
gtod 25484 71293.616775: 1121290 cycles:ppp: 7ffcdb53ac47 __vdso_gettimeofday+0x37 ([vdso])
gtod 25484 71293.617150: 1037721 cycles:ppp: 7ffcdb53ace6 __vdso_gettimeofday+0xd6 ([vdso])
gtod 25484 71293.617478: 994526 cycles:ppp: 7ffcdb53ace6 __vdso_gettimeofday+0xd6 ([vdso])
#
The patch is the obvious one and with it we also continue to resolve
vdso symbols for pre-existing processes in 'perf top' and for all
processes in 'perf record' + 'perf report/script'.
Suggested-by: David Miller <davem@davemloft.net>
Acked-by: David Miller <davem@davemloft.net>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-cs7skq9pp0kjypiju6o7trse@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Allow the unit tests to verify the retrieval of the dirty shutdown
count via smart commands, and allow the driver-load-time retrieval of
the smart health payload to be simulated by nfit_test.
Reviewed-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Some NVDIMMs, in addition to providing an indication of whether the
previous shutdown was clean, also provide a running count of lifetime
dirty-shutdown events for the device. In anticipation of this
functionality appearing on more devices, arrange for the nfit driver to
retrieve / cache this data at DIMM discovery time, and export it via
sysfs.
Reviewed-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
The per-VM capability KVM_CAP_EXCEPTION_PAYLOAD (to be introduced in a
later commit) adds the following fields to struct kvm_vcpu_events:
exception_has_payload, exception_payload, and exception.pending.
With this capability set, all of the details of vcpu->arch.exception,
including the payload for a pending exception, are reported to
userspace in response to KVM_GET_VCPU_EVENTS.
With this capability clear, the original ABI is preserved, and the
exception.injected field is set for either pending or injected
exceptions.
When userspace calls KVM_SET_VCPU_EVENTS with
KVM_CAP_EXCEPTION_PAYLOAD clear, exception.injected is no longer
translated to exception.pending. KVM_SET_VCPU_EVENTS can now only
establish a pending exception when KVM_CAP_EXCEPTION_PAYLOAD is set.
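A hedged sketch of the resulting userspace flow (fd handling and any
struct layout details beyond the fields named above are assumptions):
#include <sys/ioctl.h>
#include <linux/kvm.h>

static int exception_payload_demo(int vm_fd, int vcpu_fd, __u64 *payload)
{
	struct kvm_enable_cap cap = { .cap = KVM_CAP_EXCEPTION_PAYLOAD };
	struct kvm_vcpu_events events;

	/* per-VM capability: opt in to the new ABI */
	if (ioctl(vm_fd, KVM_ENABLE_CAP, &cap) < 0)
		return -1;
	if (ioctl(vcpu_fd, KVM_GET_VCPU_EVENTS, &events) < 0)
		return -1;

	/* with the cap set, a pending exception is reported as pending,
	 * not folded into exception.injected as in the original ABI */
	if (events.exception.pending && events.exception_has_payload)
		*payload = events.exception_payload;	/* e.g. CR2 for a #PF */
	return 0;
}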
Reported-by: Jim Mattson <jmattson@google.com>
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Add tests that do a MSG_PEEK recv followed by a regular receive to
test flag support.
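The essence of what the added tests check, as a sketch (function name
illustrative):
#include <sys/socket.h>

static int peek_then_recv(int fd, char *buf, int len)
{
	int peeked = recv(fd, buf, len, MSG_PEEK);	/* data stays queued */
	int copied = recv(fd, buf, len, 0);		/* consumes it */

	/* both calls should observe the same bytes */
	return peeked == copied ? 0 : -1;
}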
Signed-off-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Modify test library and add eVMCS test. This includes nVMX save/restore
testing.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Split prepare_for_vmx_operation() into prepare_for_vmx_operation() and
load_vmcs() so we can inject GUEST_SYNC() in between.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Let's add the 40 PA-bit versions of the VM modes that AArch64
should have been using, so we can extend the dirty log test without
breaking things.
Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
While we're messing with the code for the port and to support guest
page sizes that are less than the host page size, we also make some
code formatting cleanups and apply sync_global_to_guest().
Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Rename VM_MODE_FLAT48PG to be more descriptive of its config and add a
new config that has the same parameters, except with 64K pages.
Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This code adds VM and VCPU setup code for the VM_MODE_FLAT48PG mode.
The VM_MODE_FLAT48PG isn't yet fully supportable, as it defines the
guest physical address limit as 52-bits, and KVM currently only
supports guests with up to 40-bit physical addresses (see
KVM_PHYS_SHIFT). VM_MODE_FLAT48PG will work fine, though, as long as
no >= 40-bit physical addresses are used.
Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Tidy up kvm-util code: code/comment formatting, remove unused code,
and move x86 specific code out. We also move vcpu_dump() out of
common code, because not all arches (AArch64) have KVM_GET_REGS.
Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>