linux/kernel/trace
Linus Torvalds 17894c2a7a tracing fixes for v6.7-rc4:

Merge tag 'trace-v6.7-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace

Pull tracing fixes from Steven Rostedt:

 - Snapshot buffer issues:

   1. When instances started allowing latency tracers, they began using
      a snapshot buffer (another buffer that is not written to, but is
      swapped with the main buffer that is). The snapshot buffer needs
      to be the same size as the main buffer. But when snapshot buffers
      were added to instances, the code that keeps the snapshot buffer
      the same size as the main buffer still only did so for the
      top-level buffer and not for the instances (a sketch after this
      list models this).

   2. The current tracer needs to be stopped when resizing the buffers.
      Otherwise there can be a race if the tracer decides to take a
      snapshot between the resize of the main buffer and the resize of
      the snapshot buffer.

   3. When a tracer is "stopped", it disables both the main buffer and
      the snapshot buffer. This needs to be done for the instances and
      not only for the top-level buffer, now that instances also have a
      snapshot buffer.
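
   The sketch below is a minimal user-space model of the rule behind
   the first two fixes (hypothetical structs and helper names, nothing
   here is the kernel's trace.c code): a resize must keep an instance's
   main and snapshot buffers the same size, must happen with the
   instance's tracer stopped, and must be applied to every instance,
   not only the top-level one.

     /* Minimal user-space model -- hypothetical names, not trace.c. */
     #include <stdbool.h>
     #include <stddef.h>
     #include <stdio.h>

     struct buffer   { size_t size; };

     struct instance {                       /* stands in for a trace_array   */
             const char   *name;
             bool          tracer_running;
             struct buffer main_buf;         /* written to while tracing      */
             struct buffer snapshot_buf;     /* swapped in by latency tracers */
     };

     static void stop_tracer(struct instance *tr)  { tr->tracer_running = false; }
     static void start_tracer(struct instance *tr) { tr->tracer_running = true; }

     /* Resize one instance: tracer stopped, both buffers kept the same size. */
     static void instance_resize(struct instance *tr, size_t new_size)
     {
             stop_tracer(tr);                  /* fix 2: no snapshot mid-resize */
             tr->main_buf.size     = new_size;
             tr->snapshot_buf.size = new_size; /* fix 1: sizes stay equal       */
             start_tracer(tr);
     }

     int main(void)
     {
             struct instance top = { "top-level", true, { 1 << 20 }, { 1 << 20 } };
             struct instance foo = { "instance",  true, { 1 << 20 }, { 1 << 20 } };
             struct instance *all[] = { &top, &foo };

             /* The bug was doing this only for the top-level buffer. */
             for (size_t i = 0; i < sizeof(all) / sizeof(all[0]); i++)
                     instance_resize(all[i], 2 << 20);

             for (size_t i = 0; i < sizeof(all) / sizeof(all[0]); i++)
                     printf("%s: main=%zu snapshot=%zu\n", all[i]->name,
                            all[i]->main_buf.size, all[i]->snapshot_buf.size);
             return 0;
     }

   Fix 3 is the same per-instance rule applied to stopping: when a
   tracer is stopped, both buffers of that instance must be disabled,
   not only the top-level ones.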

 - Buffered event for filtering issues:

   When filtering is enabled, events are often dropped, so it is
   quicker to copy an event into a temporary per-CPU buffer and write
   it into the main buffer only if it passes the filter (or simply drop
   it if it does not) than to write the event into the ring buffer and
   then try to discard it. Allocating and tearing down this temporary
   buffer requires special synchronization. But there were some issues
   with that:

   1. When disabling the filter and freeing the buffer, a call to all
      CPUs is required to stop each per_cpu usage. But the code called
      smp_call_function_many(), which does not include the current CPU.
      If the task is migrated to another CPU while it enables the CPUs
      via smp_call_function_many(), it will not enable the one it is
      currently running on, and this causes issues later on. Use
      on_each_cpu_mask() instead, which does include the current CPU
      (see the teardown sketch after this list).

   2. When the allocation of the buffered event fails, it can give a
      warning. But the buffered event is just an optimization (it's
      still OK to write to the ring buffer and free it). Do not WARN in
      this case.

   3. The freeing of the buffered event requires synchronization. First
      a counter is decremented to zero so that no new uses of it will
      happen. Then the buffered event is set to NULL, and finally it is
      freed. There's a synchronize_rcu() between the counter decrement
      and setting the variable to NULL, but only an smp_wmb() between
      that and the freeing of the buffer. It is theoretically possible
      that a user missed seeing the decrement but will still use the
      buffer after it is freed. Another synchronize_rcu() is needed in
      place of that smp_wmb().
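
   The sketch below shows the teardown ordering these fixes lead to. It
   is deliberately simplified -- the per-CPU variables and the two IPI
   callbacks are stand-ins, not the actual trace.c code -- but the APIs
   it uses (on_each_cpu_mask(), synchronize_rcu(), the per-CPU
   accessors) are the real kernel ones named above. The
   allocation-failure fix (item 2) lives on the enable path and is not
   shown.

     /* Simplified kernel-style sketch of the disable/teardown path. */
     #include <linux/cpumask.h>
     #include <linux/gfp.h>
     #include <linux/percpu.h>
     #include <linux/rcupdate.h>
     #include <linux/ring_buffer.h>
     #include <linux/smp.h>

     static DEFINE_PER_CPU(struct ring_buffer_event *, buffered_event);
     static DEFINE_PER_CPU(int, buffered_event_cnt);

     /* IPI callbacks: mark the per-CPU buffer busy / available again. */
     static void disable_buffered_event(void *data)
     {
             this_cpu_inc(buffered_event_cnt);
     }

     static void enable_buffered_event(void *data)
     {
             this_cpu_dec(buffered_event_cnt);
     }

     static void buffered_event_teardown(void)
     {
             int cpu;

             /*
              * Fix 1: on_each_cpu_mask() also runs the callback on the
              * current CPU; smp_call_function_many() silently skips it.
              */
             on_each_cpu_mask(cpu_online_mask, disable_buffered_event, NULL, true);

             /* Wait for every CPU still using its buffer to finish. */
             synchronize_rcu();

             for_each_online_cpu(cpu) {
                     free_page((unsigned long)per_cpu(buffered_event, cpu));
                     per_cpu(buffered_event, cpu) = NULL;
             }

             /*
              * Fix 3: a second synchronize_rcu() instead of the old smp_wmb(),
              * so a CPU that still holds the old pointer cannot also see the
              * counter dropped below and decide to use the freed page.
              */
             synchronize_rcu();

             on_each_cpu_mask(cpu_online_mask, enable_buffered_event, NULL, true);
     }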

 - Ring buffer timestamps on 32-bit machines:

   The ring buffer timestamp on 32-bit machines has to break the 64-bit
   number into multiple values, as cmpxchg is required on it and a
   64-bit cmpxchg on 32-bit architectures is very slow. The code used
   to use two 32-bit values to make a 60-bit timestamp, with the other
   4 bits used as counters for synchronization. It later became known
   that the timestamp on 32-bit machines still needs all 64 bits in
   some cases, so three words were created to handle the 64 bits. But
   issues arose with this:

   1. The synchronization logic still only compared the counter with
      the first two words, but not with the third, so the
      synchronization could fail without being detected (see the sketch
      after this list).

   2. A check on discard of an event could race if an event happened
      between the discard and the update of one of the counters. The
      counter needs to be updated (forcing an absolute timestamp rather
      than a delta) before the actual discard happens.
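
   The sketch below is a user-space model of the split timestamp and of
   the check item 1 adds (made-up names, not the kernel's rb_time
   code): the 64-bit value is stored in three 32-bit words, each
   carrying a 2-bit sequence counter in its top bits, and a reader only
   trusts the value if that counter matches in all three words. The
   discard ordering of item 2 is not modeled here.

     /* User-space model of a 64-bit timestamp split into three 32-bit words. */
     #include <stdatomic.h>
     #include <stdbool.h>
     #include <stdint.h>
     #include <stdio.h>

     #define CNT_SHIFT 30
     #define CNT_MASK  (UINT32_C(3) << CNT_SHIFT)   /* 2-bit counter per word   */
     #define VAL_MASK  (~CNT_MASK)                  /* 30 payload bits per word */

     struct split_time {
             _Atomic uint32_t cnt;                  /* bumped on every update   */
             _Atomic uint32_t top, bottom, msb;     /* the 64-bit value, split  */
     };

     static uint32_t pack(uint64_t bits, uint32_t cnt)
     {
             return ((uint32_t)bits & VAL_MASK) | ((cnt & 3u) << CNT_SHIFT);
     }

     /* Writer: bump the counter, then stamp every word with that counter. */
     static void split_time_set(struct split_time *t, uint64_t val)
     {
             uint32_t cnt = atomic_fetch_add(&t->cnt, 1) + 1;

             atomic_store(&t->bottom, pack(val, cnt));
             atomic_store(&t->top, pack(val >> 30, cnt));
             atomic_store(&t->msb, pack(val >> 60, cnt));
     }

     /* Reader: only succeed if all three words carry the same counter. */
     static bool split_time_read(struct split_time *t, uint64_t *ret)
     {
             uint32_t c, top, bottom, msb;

             do {
                     c      = atomic_load(&t->cnt);
                     top    = atomic_load(&t->top);
                     bottom = atomic_load(&t->bottom);
                     msb    = atomic_load(&t->msb);
             } while (c != atomic_load(&t->cnt));

             /* The fix: the counter in msb must be compared too. */
             if ((top & CNT_MASK) != (bottom & CNT_MASK) ||
                 (top & CNT_MASK) != (msb & CNT_MASK))
                     return false;

             *ret = (uint64_t)(bottom & VAL_MASK) |
                    ((uint64_t)(top & VAL_MASK) << 30) |
                    ((uint64_t)(msb & VAL_MASK) << 60);
             return true;
     }

     int main(void)
     {
             static struct split_time t;            /* zero-initialized */
             uint64_t v = 0;

             split_time_set(&t, UINT64_C(0xf123456789abcdef));
             if (split_time_read(&t, &v))
                     printf("read back %#llx\n", (unsigned long long)v);
             return 0;
     }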

* tag 'trace-v6.7-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace:
  ring-buffer: Test last update in 32bit version of __rb_time_read()
  ring-buffer: Force absolute timestamp on discard of event
  tracing: Fix a possible race when disabling buffered events
  tracing: Fix a warning when allocating buffered events fails
  tracing: Fix incomplete locking when disabling buffered events
  tracing: Disable snapshot buffer when stopping instance tracers
  tracing: Stop current tracer when resizing buffer
  tracing: Always update snapshot buffer size
2023-12-08 08:44:43 -08:00
rv tracing/tools: Updates for 6.4 2023-04-28 16:11:26 -07:00
blktrace.c block: remove more NULL checks after bdev_get_queue() 2023-02-21 09:23:22 -07:00
bpf_trace.c bpf: Add __bpf_kfunc_{start,end}_defs macros 2023-11-01 22:33:53 -07:00
bpf_trace.h
error_report-traces.c
fgraph.c tracing: arm64: Avoid missing-prototype warnings 2023-07-12 12:06:04 -04:00
fprobe.c Probes updates for v6.7: 2023-11-01 16:15:42 -10:00
ftrace_internal.h tracing: arm64: Avoid missing-prototype warnings 2023-07-12 12:06:04 -04:00
ftrace.c ftrace: Use LIST_HEAD to initialize clear_hash 2023-09-01 21:18:38 -04:00
Kconfig Probes updates for v6.5: 2023-06-30 10:44:53 -07:00
kprobe_event_gen_test.c tracing: Fix wrong return in kprobe_event_gen_test.c 2023-03-19 12:20:48 -04:00
Makefile tracing/probes: Move finding func-proto API and getting func-param API to trace_btf 2023-08-23 09:39:45 +09:00
pid_list.c
pid_list.h
power-traces.c
preemptirq_delay_test.c
rethook.c rethook: Use __rcu pointer for rethook::handler 2023-12-01 14:53:56 +09:00
ring_buffer_benchmark.c ring_buffer: Remove unused "event" parameter 2022-11-23 19:08:30 -05:00
ring_buffer.c ring-buffer: Test last update in 32bit version of __rb_time_read() 2023-12-06 15:01:49 -05:00
rpm-traces.c
synth_event_gen_test.c tracing: Always use canonical ftrace path 2023-02-18 14:34:09 -05:00
trace_benchmark.c
trace_benchmark.h
trace_boot.c tracing/boot: Test strscpy() against less than zero for error 2023-07-05 10:30:49 -04:00
trace_branch.c
trace_btf.c tracing/probes: Add a function to search a member of a struct/union 2023-08-23 09:40:16 +09:00
trace_btf.h tracing/probes: Add a function to search a member of a struct/union 2023-08-23 09:40:16 +09:00
trace_clock.c
trace_dynevent.c tracing: Free buffers when a used dynamic event is removed 2022-11-23 19:07:12 -05:00
trace_dynevent.h
trace_entries.h tracing: Add back FORTIFY_SOURCE logic to kernel_stack event structure 2023-07-30 18:11:44 -04:00
trace_eprobe.c tracing/eprobe: drop unneeded breaks 2023-10-10 01:03:48 +09:00
trace_event_perf.c tracing/perf: Use strndup_user instead of kzalloc/strncpy_from_user 2022-11-23 19:08:31 -05:00
trace_events_filter_test.h
trace_events_filter.c tracing: Have trace_event_file have ref counters 2023-11-01 23:44:44 -04:00
trace_events_hist.c tracing/histograms: Simplify last_cmd_set() 2023-10-23 13:31:14 -04:00
trace_events_inject.c tracing: Have event inject files inc the trace array ref count 2023-09-07 16:38:54 -04:00
trace_events_synth.c tracing: Have the user copy of synthetic event address use correct context 2023-11-01 23:46:05 -04:00
trace_events_trigger.c tracing: Fix kernel-doc warnings in trace_events_trigger.c 2023-07-28 19:59:03 -04:00
trace_events_user.c tracing/user_events: Allow events to persist for perfmon_capable users 2023-10-03 22:29:43 -04:00
trace_events.c tracing: Have trace_event_file have ref counters 2023-11-01 23:44:44 -04:00
trace_export.c tracing: Add back FORTIFY_SOURCE logic to kernel_stack event structure 2023-07-30 18:11:44 -04:00
trace_fprobe.c tracing: fprobe-event: Fix to check tracepoint event and return 2023-11-10 20:06:12 +09:00
trace_functions_graph.c function_graph: Support recording and printing the return value of function 2023-06-20 18:38:37 -04:00
trace_functions.c
trace_hwlat.c tracing: Remove extra space at the end of hwlat_detector/mode 2023-09-01 21:00:00 -04:00
trace_irqsoff.c tracing: Fix memleak due to race between current_tracer and trace 2023-08-17 13:49:37 -04:00
trace_kdb.c
trace_kprobe_selftest.c tracing: arm64: Avoid missing-prototype warnings 2023-07-12 12:06:04 -04:00
trace_kprobe_selftest.h
trace_kprobe.c tracing/kprobes: Fix the order of argument descriptions 2023-11-11 08:00:43 +09:00
trace_mmiotrace.c
trace_nop.c
trace_osnoise.c tracing/timerlat: Add user-space interface 2023-06-22 10:39:56 -04:00
trace_output.c fs: create helper file_user_path() for user displayed mapped file path 2023-10-19 11:03:15 +02:00
trace_output.h tracing: Add "fields" option to show raw trace event fields 2023-03-29 06:52:08 -04:00
trace_preemptirq.c cpuidle: tracing, preempt: Squash _rcuidle tracing 2023-01-31 15:01:46 +01:00
trace_printk.c
trace_probe_kernel.h tracing/probes: Fix to record 0-length data_loc in fetch_store_string*() if fails 2023-07-14 17:04:58 +09:00
trace_probe_tmpl.h tracing/probes: Fix to record 0-length data_loc in fetch_store_string*() if fails 2023-07-14 17:04:58 +09:00
trace_probe.c tracing/probes: Add string type check with BTF 2023-08-23 09:41:13 +09:00
trace_probe.h tracing/kprobes: Return EADDRNOTAVAIL when func matches several symbols 2023-10-20 22:10:41 +09:00
trace_recursion_record.c
trace_sched_switch.c
trace_sched_wakeup.c tracing: Fix memleak due to race between current_tracer and trace 2023-08-17 13:49:37 -04:00
trace_selftest_dynamic.c
trace_selftest.c tracing: Have function_graph selftest call cond_resched() 2023-05-28 21:15:46 -04:00
trace_seq.c tracing: Move readpos from seq_buf to trace_seq 2023-10-20 12:16:10 -04:00
trace_stack.c
trace_stat.c
trace_stat.h
trace_synth.h tracing: Allow synthetic events to pass around stacktraces 2023-01-25 10:31:24 -05:00
trace_syscalls.c bpf: Change syscall_nr type to int in struct syscall_tp_t 2023-10-13 12:39:36 -07:00
trace_uprobe.c Probes updates for v6.6: 2023-09-02 11:10:50 -07:00
trace.c tracing: Fix a possible race when disabling buffered events 2023-12-05 17:17:00 -05:00
trace.h tracing: Have trace_event_file have ref counters 2023-11-01 23:44:44 -04:00
tracing_map.c tracing: Remove unused variable 'dups' 2022-10-03 12:20:31 -04:00
tracing_map.h tracing: Remove unused extern declaration tracing_map_set_field_descr() 2023-07-23 11:08:14 -04:00