clocksource/drivers/arm_arch_timer: Avoid infinite recursion when ftrace is enabled
On platforms with an arch timer erratum workaround, it's possible for
arch_timer_reg_read_stable() to recurse into itself when certain tracing
options are enabled, leading to stack overflows and related problems.

For example, when PREEMPT_TRACER and FUNCTION_GRAPH_TRACER are selected,
it's possible to trigger this with:

$ mount -t debugfs nodev /sys/kernel/debug/
$ echo function_graph > /sys/kernel/debug/tracing/current_tracer

The problem is that in such cases, preempt_disable() instrumentation
attempts to acquire a timestamp via trace_clock(), resulting in a call
back to arch_timer_reg_read_stable(), and hence recursion.

This patch changes arch_timer_reg_read_stable() to use
preempt_{disable,enable}_notrace(), which avoids this.

This problem is similar to that fixed by upstream commit 96b3d28bf4
("sched/clock: Prevent tracing recursion in sched_clock_cpu()").

Fixes: 6acc71ccac71 ("arm64: arch_timer: Allows a CPU-specific erratum to only affect a subset of CPUs")
Signed-off-by: Ding Tianhong <dingtianhong@huawei.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
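The recursion can be modeled outside the kernel. Below is a minimal,
self-contained C sketch (hypothetical names, not kernel code): an
instrumented preempt_disable() hands control to a tracer hook, the hook
takes a timestamp via the same clock-read path, and that path calls the
instrumented preempt_disable() again. The _notrace flavour skips the hook
and breaks the cycle. The recursion is capped here so the demo terminates
instead of overflowing the stack.

/* Standalone model of the recursion described above (not kernel code). */
#include <stdio.h>

static unsigned long long fake_counter = 100;
static int depth;                       /* tracks re-entry of the read path */

static unsigned long long clock_read_traced(void);

/* Stands in for the preempt tracer hook: it wants a timestamp too. */
static void tracer_hook(void)
{
	(void)clock_read_traced();      /* re-enters the clock-read path */
}

/* Instrumented preempt_disable(): calls into the tracer. */
static void preempt_disable_traced(void)
{
	tracer_hook();
}

/* _notrace flavour: same role in this model, but no tracer call. */
static void preempt_disable_notrace_model(void)
{
	/* nothing to trace */
}

static unsigned long long clock_read_traced(void)
{
	if (++depth > 5) {              /* cap what would be unbounded recursion */
		printf("re-entered %d times - would overflow the stack\n", depth);
		depth--;
		return fake_counter;
	}
	preempt_disable_traced();       /* recurses via tracer_hook() */
	depth--;
	return fake_counter++;
}

static unsigned long long clock_read_notrace(void)
{
	preempt_disable_notrace_model(); /* no tracer, no re-entry */
	return fake_counter++;
}

int main(void)
{
	printf("traced read:   %llu\n", clock_read_traced());
	printf("notrace read:  %llu\n", clock_read_notrace());
	return 0;
}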
commit adb4f11e0a
parent 599dc457c7
@@ -65,13 +65,13 @@ DECLARE_PER_CPU(const struct arch_timer_erratum_workaround *,
 	u64 _val;							\
 	if (needs_unstable_timer_counter_workaround()) {		\
 		const struct arch_timer_erratum_workaround *wa;		\
-		preempt_disable();					\
+		preempt_disable_notrace();				\
 		wa = __this_cpu_read(timer_unstable_counter_workaround); \
 		if (wa && wa->read_##reg)				\
 			_val = wa->read_##reg();			\
 		else							\
 			_val = read_sysreg(reg);			\
-		preempt_enable();					\
+		preempt_enable_notrace();				\
 	} else {							\
 		_val = read_sysreg(reg);				\
 	}								\
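The hunk only swaps the preempt guards for their _notrace counterparts,
which adjust the preempt count without calling into the preempt tracer, so
the counter read can no longer recurse through trace_clock(). For context,
here is a rough userspace model of the patched read-stable pattern: all
names are hypothetical, GNU statement expressions are assumed (as the
kernel itself uses), and the static-key check from the real macro is
omitted.

/* Userspace model of the workaround-dispatch pattern (not the kernel macro). */
#include <stdio.h>
#include <stdint.h>

struct timer_erratum_workaround_model {
	uint64_t (*read_cntvct)(void);          /* workaround read path, may be NULL */
};

static uint64_t direct_counter = 1000;

static uint64_t read_cntvct_direct(void)        /* stands in for read_sysreg() */
{
	return direct_counter++;
}

static uint64_t read_cntvct_workaround(void)    /* erratum-safe read path */
{
	return direct_counter++ + 1;            /* pretend it corrects the value */
}

/* Points to a workaround on affected "CPUs", NULL otherwise. */
static const struct timer_erratum_workaround_model *cpu_workaround;

static void preempt_disable_notrace_model(void) { /* no tracer hook here */ }
static void preempt_enable_notrace_model(void)  { /* no tracer hook here */ }

/* Mirrors the shape of the patched macro: workaround if present, else direct. */
#define model_reg_read_stable(reg)					\
({									\
	uint64_t _val;							\
	const struct timer_erratum_workaround_model *_wa;		\
	preempt_disable_notrace_model();				\
	_wa = cpu_workaround;						\
	if (_wa && _wa->read_##reg)					\
		_val = _wa->read_##reg();				\
	else								\
		_val = read_##reg##_direct();				\
	preempt_enable_notrace_model();					\
	_val;								\
})

int main(void)
{
	static const struct timer_erratum_workaround_model wa = {
		.read_cntvct = read_cntvct_workaround,
	};

	printf("no workaround:   %llu\n",
	       (unsigned long long)model_reg_read_stable(cntvct));
	cpu_workaround = &wa;
	printf("with workaround: %llu\n",
	       (unsigned long long)model_reg_read_stable(cntvct));
	return 0;
}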