Merge branches 'fixes.2020.04.27a', 'kfree_rcu.2020.04.27a', 'rcu-tasks.2020.04.27a', 'stall.2020.04.27a' and 'torture.2020.05.07a' into HEAD
fixes.2020.04.27a: Miscellaneous fixes.
kfree_rcu.2020.04.27a: Changes related to kfree_rcu().
rcu-tasks.2020.04.27a: Addition of new RCU-tasks flavors.
stall.2020.04.27a: RCU CPU stall-warning updates.
torture.2020.05.07a: Torture-test updates.
@@ -1943,56 +1943,27 @@ invoked from a CPU-hotplug notifier.
 Scheduler and RCU
 ~~~~~~~~~~~~~~~~~
 
-RCU depends on the scheduler, and the scheduler uses RCU to protect some
-of its data structures.  The preemptible-RCU ``rcu_read_unlock()``
-implementation must therefore be written carefully to avoid deadlocks
-involving the scheduler's runqueue and priority-inheritance locks.  In
-particular, ``rcu_read_unlock()`` must tolerate an interrupt where the
-interrupt handler invokes both ``rcu_read_lock()`` and
-``rcu_read_unlock()``.  This possibility requires ``rcu_read_unlock()``
-to use negative nesting levels to avoid destructive recursion via
-interrupt handler's use of RCU.
-
-This scheduler-RCU requirement came as a `complete
-surprise <https://lwn.net/Articles/453002/>`__.
-
-As noted above, RCU makes use of kthreads, and it is necessary to avoid
-excessive CPU-time accumulation by these kthreads.  This requirement was
-no surprise, but RCU's violation of it when running context-switch-heavy
-workloads when built with ``CONFIG_NO_HZ_FULL=y`` `did come as a
-surprise
+RCU makes use of kthreads, and it is necessary to avoid excessive CPU-time
+accumulation by these kthreads.  This requirement was no surprise, but
+RCU's violation of it when running context-switch-heavy workloads when
+built with ``CONFIG_NO_HZ_FULL=y`` `did come as a surprise
 [PDF] <http://www.rdrop.com/users/paulmck/scalability/paper/BareMetal.2015.01.15b.pdf>`__.
 RCU has made good progress towards meeting this requirement, even for
 context-switch-heavy ``CONFIG_NO_HZ_FULL=y`` workloads, but there is
 room for further improvement.
 
-It is forbidden to hold any of scheduler's runqueue or
-priority-inheritance spinlocks across an ``rcu_read_unlock()`` unless
-interrupts have been disabled across the entire RCU read-side critical
-section, that is, up to and including the matching ``rcu_read_lock()``.
-Violating this restriction can result in deadlocks involving these
-scheduler spinlocks.  There was hope that this restriction might be
-lifted when interrupt-disabled calls to ``rcu_read_unlock()`` started
-deferring the reporting of the resulting RCU-preempt quiescent state
-until the end of the corresponding interrupts-disabled region.
-Unfortunately, timely reporting of the corresponding quiescent state to
-expedited grace periods requires a call to ``raise_softirq()``, which
-can acquire these scheduler spinlocks.  In addition, real-time systems
-using RCU priority boosting need this restriction to remain in effect
-because deferred quiescent-state reporting would also defer deboosting,
-which in turn would degrade real-time latencies.
-
-In theory, if a given RCU read-side critical section could be guaranteed
-to be less than one second in duration, holding a scheduler spinlock
-across that critical section's ``rcu_read_unlock()`` would require only
-that preemption be disabled across the entire RCU read-side critical
-section, not interrupts.  Unfortunately, given the possibility of vCPU
-preemption, long-running interrupts, and so on, it is not possible in
-practice to guarantee that a given RCU read-side critical section will
-complete in less than one second.  Therefore, as noted above, if
-scheduler spinlocks are held across a given call to
-``rcu_read_unlock()``, interrupts must be disabled across the entire RCU
-read-side critical section.
+There is no longer any prohibition against holding any of
+scheduler's runqueue or priority-inheritance spinlocks across an
+``rcu_read_unlock()``, even if interrupts and preemption were enabled
+somewhere within the corresponding RCU read-side critical section.
+Therefore, it is now perfectly legal to execute ``rcu_read_lock()``
+with preemption enabled, acquire one of the scheduler locks, and hold
+that lock across the matching ``rcu_read_unlock()``.
+
+Similarly, the RCU flavor consolidation has removed the need for negative
+nesting.  The fact that interrupt-disabled regions of code act as RCU
+read-side critical sections implicitly avoids earlier issues that used
+to result in destructive recursion via interrupt handler's use of RCU.
 
 Tracing and RCU
 ~~~~~~~~~~~~~~~
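The relaxed rule above can be illustrated with a short kernel-context sketch (illustrative pseudocode, not part of this patch; `gp`, `rq`, and `do_something_with()` are hypothetical stand-ins):

```c
/* Sketch: an RCU reader holding a scheduler-style lock across
 * rcu_read_unlock(), which the consolidated RCU flavors now permit
 * even though preemption was enabled at rcu_read_lock() time. */
rcu_read_lock();                           /* preemption may be enabled */
p = rcu_dereference(gp);
raw_spin_lock_irqsave(&rq->lock, flags);   /* scheduler-style lock */
do_something_with(p);
rcu_read_unlock();                         /* legal while holding rq->lock */
raw_spin_unlock_irqrestore(&rq->lock, flags);
```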
@@ -4210,12 +4210,24 @@
 			Duration of CPU stall (s) to test RCU CPU stall
 			warnings, zero to disable.
 
+	rcutorture.stall_cpu_block= [KNL]
+			Sleep while stalling if set.  This will result
+			in warnings from preemptible RCU in addition
+			to any other stall-related activity.
+
 	rcutorture.stall_cpu_holdoff= [KNL]
 			Time to wait (s) after boot before inducing stall.
 
 	rcutorture.stall_cpu_irqsoff= [KNL]
 			Disable interrupts while stalling if set.
 
+	rcutorture.stall_gp_kthread= [KNL]
+			Duration (s) of forced sleep within RCU
+			grace-period kthread to test RCU CPU stall
+			warnings, zero to disable.  If both stall_cpu
+			and stall_gp_kthread are specified, the
+			kthread is starved first, then the CPU.
+
 	rcutorture.stat_interval= [KNL]
 			Time (s) between statistics printk()s.
 
@@ -4286,6 +4298,13 @@
 			only normal grace-period primitives.  No effect
 			on CONFIG_TINY_RCU kernels.
 
+	rcupdate.rcu_task_ipi_delay= [KNL]
+			Set time in jiffies during which RCU tasks will
+			avoid sending IPIs, starting with the beginning
+			of a given grace period.  Setting a large
+			number avoids disturbing real-time workloads,
+			but lengthens grace periods.
+
 	rcupdate.rcu_task_stall_timeout= [KNL]
 			Set timeout in jiffies for RCU task stall warning
 			messages.  Disable with a value less than or equal
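As a usage sketch, the stall-test parameters above combine on the kernel boot command line; the values here are arbitrary illustrations, not recommendations:

```
rcutorture.stall_cpu=14 rcutorture.stall_cpu_block=1 \
rcutorture.stall_gp_kthread=22 rcutorture.stall_cpu_holdoff=30
```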
@@ -37,6 +37,7 @@
 /* Exported common interfaces */
 void call_rcu(struct rcu_head *head, rcu_callback_t func);
 void rcu_barrier_tasks(void);
+void rcu_barrier_tasks_rude(void);
 void synchronize_rcu(void);
 
 #ifdef CONFIG_PREEMPT_RCU
@@ -129,25 +130,57 @@ static inline void rcu_init_nohz(void) { }
  * Note a quasi-voluntary context switch for RCU-tasks's benefit.
  * This is a macro rather than an inline function to avoid #include hell.
  */
-#ifdef CONFIG_TASKS_RCU
-#define rcu_tasks_qs(t) \
-	do { \
-		if (READ_ONCE((t)->rcu_tasks_holdout)) \
-			WRITE_ONCE((t)->rcu_tasks_holdout, false); \
+#ifdef CONFIG_TASKS_RCU_GENERIC
+
+# ifdef CONFIG_TASKS_RCU
+# define rcu_tasks_classic_qs(t, preempt) \
+	do { \
+		if (!(preempt) && READ_ONCE((t)->rcu_tasks_holdout)) \
+			WRITE_ONCE((t)->rcu_tasks_holdout, false); \
 	} while (0)
-#define rcu_note_voluntary_context_switch(t) rcu_tasks_qs(t)
 void call_rcu_tasks(struct rcu_head *head, rcu_callback_t func);
 void synchronize_rcu_tasks(void);
+# else
+# define rcu_tasks_classic_qs(t, preempt) do { } while (0)
+# define call_rcu_tasks call_rcu
+# define synchronize_rcu_tasks synchronize_rcu
+# endif
+
+# ifdef CONFIG_TASKS_RCU_TRACE
+# define rcu_tasks_trace_qs(t) \
+	do { \
+		if (!likely(READ_ONCE((t)->trc_reader_checked)) && \
+		    !unlikely(READ_ONCE((t)->trc_reader_nesting))) { \
+			smp_store_release(&(t)->trc_reader_checked, true); \
+			smp_mb(); /* Readers partitioned by store. */ \
+		} \
+	} while (0)
+# else
+# define rcu_tasks_trace_qs(t) do { } while (0)
+# endif
+
+#define rcu_tasks_qs(t, preempt) \
+do { \
+	rcu_tasks_classic_qs((t), (preempt)); \
+	rcu_tasks_trace_qs((t)); \
+} while (0)
+
+# ifdef CONFIG_TASKS_RUDE_RCU
+void call_rcu_tasks_rude(struct rcu_head *head, rcu_callback_t func);
+void synchronize_rcu_tasks_rude(void);
+# endif
+
+#define rcu_note_voluntary_context_switch(t) rcu_tasks_qs(t, false)
 void exit_tasks_rcu_start(void);
 void exit_tasks_rcu_finish(void);
-#else /* #ifdef CONFIG_TASKS_RCU */
-#define rcu_tasks_qs(t)	do { } while (0)
+#else /* #ifdef CONFIG_TASKS_RCU_GENERIC */
+#define rcu_tasks_qs(t, preempt) do { } while (0)
 #define rcu_note_voluntary_context_switch(t) do { } while (0)
 #define call_rcu_tasks call_rcu
 #define synchronize_rcu_tasks synchronize_rcu
 static inline void exit_tasks_rcu_start(void) { }
 static inline void exit_tasks_rcu_finish(void) { }
-#endif /* #else #ifdef CONFIG_TASKS_RCU */
+#endif /* #else #ifdef CONFIG_TASKS_RCU_GENERIC */
 
 /**
  * cond_resched_tasks_rcu_qs - Report potential quiescent states to RCU
@@ -158,7 +191,7 @@ static inline void exit_tasks_rcu_finish(void) { }
  */
 #define cond_resched_tasks_rcu_qs() \
 do { \
-	rcu_tasks_qs(current); \
+	rcu_tasks_qs(current, false); \
 	cond_resched(); \
 } while (0)
 
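The `preempt` argument threaded through the quiescent-state macros above can be modeled in plain userspace C. This is a sketch under the assumption that only voluntary context switches report a classic RCU-tasks quiescent state; `task_model` is an invented stand-in for the RCU-tasks fields of `task_struct`:

```c
#include <assert.h>
#include <stdbool.h>

/* Invented stand-in for the RCU-tasks fields of task_struct. */
struct task_model {
	bool rcu_tasks_holdout;
};

/* Models rcu_tasks_classic_qs(t, preempt): clear the holdout flag only
 * when the context switch was voluntary (preempt == false). */
static void rcu_tasks_classic_qs_model(struct task_model *t, bool preempt)
{
	if (!preempt && t->rcu_tasks_holdout)
		t->rcu_tasks_holdout = false;
}
```

A preemption-time call leaves the holdout flag set, so the grace period keeps waiting on the task; a voluntary context switch clears it.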
include/linux/rcupdate_trace.h (new file, 88 lines)
@@ -0,0 +1,88 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+/*
+ * Read-Copy Update mechanism for mutual exclusion, adapted for tracing.
+ *
+ * Copyright (C) 2020 Paul E. McKenney.
+ */
+
+#ifndef __LINUX_RCUPDATE_TRACE_H
+#define __LINUX_RCUPDATE_TRACE_H
+
+#include <linux/sched.h>
+#include <linux/rcupdate.h>
+
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+
+extern struct lockdep_map rcu_trace_lock_map;
+
+static inline int rcu_read_lock_trace_held(void)
+{
+	return lock_is_held(&rcu_trace_lock_map);
+}
+
+#else /* #ifdef CONFIG_DEBUG_LOCK_ALLOC */
+
+static inline int rcu_read_lock_trace_held(void)
+{
+	return 1;
+}
+
+#endif /* #else #ifdef CONFIG_DEBUG_LOCK_ALLOC */
+
+#ifdef CONFIG_TASKS_TRACE_RCU
+
+void rcu_read_unlock_trace_special(struct task_struct *t, int nesting);
+
+/**
+ * rcu_read_lock_trace - mark beginning of RCU-trace read-side critical section
+ *
+ * When synchronize_rcu_trace() is invoked by one task, then that task
+ * is guaranteed to block until all other tasks exit their read-side
+ * critical sections.  Similarly, if call_rcu_trace() is invoked on one
+ * task while other tasks are within RCU read-side critical sections,
+ * invocation of the corresponding RCU callback is deferred until after
+ * all the other tasks exit their critical sections.
+ *
+ * For more details, please see the documentation for rcu_read_lock().
+ */
+static inline void rcu_read_lock_trace(void)
+{
+	struct task_struct *t = current;
+
+	WRITE_ONCE(t->trc_reader_nesting, READ_ONCE(t->trc_reader_nesting) + 1);
+	if (IS_ENABLED(CONFIG_TASKS_TRACE_RCU_READ_MB) &&
+	    t->trc_reader_special.b.need_mb)
+		smp_mb(); // Pairs with update-side barriers
+	rcu_lock_acquire(&rcu_trace_lock_map);
+}
+
+/**
+ * rcu_read_unlock_trace - mark end of RCU-trace read-side critical section
+ *
+ * Pairs with a preceding call to rcu_read_lock_trace(), and nesting is
+ * allowed.  Invoking an rcu_read_unlock_trace() when there is no matching
+ * rcu_read_lock_trace() is verboten, and will result in lockdep complaints.
+ *
+ * For more details, please see the documentation for rcu_read_unlock().
+ */
+static inline void rcu_read_unlock_trace(void)
+{
+	int nesting;
+	struct task_struct *t = current;
+
+	rcu_lock_release(&rcu_trace_lock_map);
+	nesting = READ_ONCE(t->trc_reader_nesting) - 1;
+	if (likely(!READ_ONCE(t->trc_reader_special.s)) || nesting) {
+		WRITE_ONCE(t->trc_reader_nesting, nesting);
+		return;  // We assume shallow reader nesting.
+	}
+	rcu_read_unlock_trace_special(t, nesting);
+}
+
+void call_rcu_tasks_trace(struct rcu_head *rhp, rcu_callback_t func);
+void synchronize_rcu_tasks_trace(void);
+void rcu_barrier_tasks_trace(void);
+
+#endif /* #ifdef CONFIG_TASKS_TRACE_RCU */
+
+#endif /* __LINUX_RCUPDATE_TRACE_H */
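The fast-path arithmetic in rcu_read_unlock_trace() above can be checked with a userspace model. This is a sketch with invented names: `trc_reader` stands in for the reader-state fields of `task_struct`, and the slow path (rcu_read_unlock_trace_special()) is reduced to a flag:

```c
#include <assert.h>
#include <stdbool.h>

/* Invented model of the Tasks Trace RCU reader state. */
struct trc_reader {
	int nesting;            /* models t->trc_reader_nesting */
	unsigned int special_s; /* models t->trc_reader_special.s */
	bool slowpath;          /* models a rcu_read_unlock_trace_special() call */
};

/* Models rcu_read_unlock_trace(): take the slow path only on the
 * outermost unlock (nesting reaches zero) with special.s nonzero. */
static void unlock_trace_model(struct trc_reader *r)
{
	int nesting = r->nesting - 1;

	if (!r->special_s || nesting) {
		r->nesting = nesting;  /* fast path */
		return;
	}
	r->slowpath = true;
	r->nesting = 0;
}
```

Inner unlocks of a nested reader stay on the fast path even when the updater has flagged the task; only the outermost unlock goes slow.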
@@ -31,4 +31,23 @@ do { \
 
 #define wait_rcu_gp(...) _wait_rcu_gp(false, __VA_ARGS__)
 
+/**
+ * synchronize_rcu_mult - Wait concurrently for multiple grace periods
+ * @...: List of call_rcu() functions for different grace periods to wait on
+ *
+ * This macro waits concurrently for multiple types of RCU grace periods.
+ * For example, synchronize_rcu_mult(call_rcu, call_rcu_tasks) would wait
+ * on concurrent RCU and RCU-tasks grace periods.  Waiting on a given SRCU
+ * domain requires you to write a wrapper function for that SRCU domain's
+ * call_srcu() function, with this wrapper supplying the pointer to the
+ * corresponding srcu_struct.
+ *
+ * The first argument tells Tiny RCU's _wait_rcu_gp() not to
+ * bother waiting for RCU.  The reason for this is because anywhere
+ * synchronize_rcu_mult() can be called is automatically already a full
+ * grace period.
+ */
+#define synchronize_rcu_mult(...) \
+	_wait_rcu_gp(IS_ENABLED(CONFIG_TINY_RCU), __VA_ARGS__)
+
 #endif /* _LINUX_SCHED_RCUPDATE_WAIT_H */
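A hedged usage sketch of the macro documented above (kernel context, illustrative only; the trampoline scenario is an assumed example):

```c
/* Wait concurrently for a normal RCU grace period and an RCU-tasks
 * grace period, e.g. before freeing a trampoline that either kind
 * of reader might still reference. */
synchronize_rcu_mult(call_rcu, call_rcu_tasks);
```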
@@ -49,7 +49,7 @@ static inline void rcu_softirq_qs(void)
 #define rcu_note_context_switch(preempt) \
 	do { \
 		rcu_qs(); \
-		rcu_tasks_qs(current); \
+		rcu_tasks_qs(current, (preempt)); \
 	} while (0)
 
 static inline int rcu_needs_cpu(u64 basemono, u64 *nextevt)
@@ -87,6 +87,7 @@ static inline bool rcu_inkernel_boot_has_ended(void) { return true; }
 static inline bool rcu_is_watching(void) { return true; }
 static inline void rcu_momentary_dyntick_idle(void) { }
 static inline void kfree_rcu_scheduler_running(void) { }
+static inline bool rcu_gp_might_be_stalled(void) { return false; }
 
 /* Avoid RCU read-side critical sections leaking across. */
 static inline void rcu_all_qs(void) { barrier(); }
@@ -39,6 +39,7 @@ void rcu_barrier(void);
 bool rcu_eqs_special_set(int cpu);
 void rcu_momentary_dyntick_idle(void);
 void kfree_rcu_scheduler_running(void);
+bool rcu_gp_might_be_stalled(void);
 unsigned long get_state_synchronize_rcu(void);
 void cond_synchronize_rcu(unsigned long oldstate);
@@ -613,7 +613,7 @@ union rcu_special {
 		u8			blocked;
 		u8			need_qs;
 		u8			exp_hint; /* Hint for performance. */
-		u8			deferred_qs;
+		u8			need_mb; /* Readers need smp_mb(). */
 	} b; /* Bits. */
 	u32 s; /* Set of bits. */
 };
@@ -724,6 +724,14 @@ struct task_struct {
 	struct list_head		rcu_tasks_holdout_list;
 #endif /* #ifdef CONFIG_TASKS_RCU */
 
+#ifdef CONFIG_TASKS_TRACE_RCU
+	int				trc_reader_nesting;
+	int				trc_ipi_to_cpu;
+	union rcu_special		trc_reader_special;
+	bool				trc_reader_checked;
+	struct list_head		trc_holdout_list;
+#endif /* #ifdef CONFIG_TASKS_TRACE_RCU */
+
 	struct sched_info		sched_info;
 
 	struct list_head		tasks;
@@ -89,7 +89,7 @@ void _torture_stop_kthread(char *m, struct task_struct **tp);
 #ifdef CONFIG_PREEMPTION
 #define torture_preempt_schedule() preempt_schedule()
 #else
-#define torture_preempt_schedule()
+#define torture_preempt_schedule()	do { } while (0)
 #endif
 
 #endif /* __LINUX_TORTURE_H */
@@ -1149,4 +1149,6 @@ int autoremove_wake_function(struct wait_queue_entry *wq_entry, unsigned mode, i
 	(wait)->flags = 0;						\
 } while (0)
 
+bool try_invoke_on_locked_down_task(struct task_struct *p, bool (*func)(struct task_struct *t, void *arg), void *arg);
+
 #endif /* _LINUX_WAIT_H */
@@ -141,6 +141,11 @@ struct task_struct init_task
 	.rcu_tasks_holdout_list = LIST_HEAD_INIT(init_task.rcu_tasks_holdout_list),
 	.rcu_tasks_idle_cpu = -1,
 #endif
+#ifdef CONFIG_TASKS_TRACE_RCU
+	.trc_reader_nesting = 0,
+	.trc_reader_special.s = 0,
+	.trc_holdout_list = LIST_HEAD_INIT(init_task.trc_holdout_list),
+#endif
 #ifdef CONFIG_CPUSETS
 	.mems_allowed_seq = SEQCNT_ZERO(init_task.mems_allowed_seq),
 #endif
@@ -1683,6 +1683,11 @@ static inline void rcu_copy_process(struct task_struct *p)
 	INIT_LIST_HEAD(&p->rcu_tasks_holdout_list);
 	p->rcu_tasks_idle_cpu = -1;
 #endif /* #ifdef CONFIG_TASKS_RCU */
+#ifdef CONFIG_TASKS_TRACE_RCU
+	p->trc_reader_nesting = 0;
+	p->trc_reader_special.s = 0;
+	INIT_LIST_HEAD(&p->trc_holdout_list);
+#endif /* #ifdef CONFIG_TASKS_TRACE_RCU */
 }
 
 struct pid *pidfd_pid(const struct file *file)
@@ -70,13 +70,37 @@ config TREE_SRCU
 	help
 	  This option selects the full-fledged version of SRCU.
 
+config TASKS_RCU_GENERIC
+	def_bool TASKS_RCU || TASKS_RUDE_RCU || TASKS_TRACE_RCU
+	select SRCU
+	help
+	  This option enables generic infrastructure code supporting
+	  task-based RCU implementations.  Not for manual selection.
+
 config TASKS_RCU
 	def_bool PREEMPTION
-	select SRCU
 	help
 	  This option enables a task-based RCU implementation that uses
 	  only voluntary context switch (not preemption!), idle, and
-	  user-mode execution as quiescent states.
+	  user-mode execution as quiescent states.  Not for manual selection.
+
+config TASKS_RUDE_RCU
+	def_bool 0
+	help
+	  This option enables a task-based RCU implementation that uses
+	  only context switch (including preemption) and user-mode
+	  execution as quiescent states.  It forces IPIs and context
+	  switches on all online CPUs, including idle ones, so use
+	  with caution.
+
+config TASKS_TRACE_RCU
+	def_bool 0
+	help
+	  This option enables a task-based RCU implementation that uses
+	  explicit rcu_read_lock_trace() read-side markers, and allows
+	  these readers to appear in the idle loop as well as on the CPU
+	  hotplug code paths.  It can force IPIs on online CPUs, including
+	  idle ones, so use with caution.
+
 config RCU_STALL_COMMON
 	def_bool TREE_RCU
@@ -210,4 +234,22 @@ config RCU_NOCB_CPU
 	  Say Y here if you want to help to debug reduced OS jitter.
 	  Say N here if you are unsure.
 
+config TASKS_TRACE_RCU_READ_MB
+	bool "Tasks Trace RCU readers use memory barriers in user and idle"
+	depends on RCU_EXPERT
+	default PREEMPT_RT || NR_CPUS < 8
+	help
+	  Use this option to further reduce the number of IPIs sent
+	  to CPUs executing in userspace or idle during tasks trace
+	  RCU grace periods.  Given that a reasonable setting of
+	  the rcupdate.rcu_task_ipi_delay kernel boot parameter
+	  eliminates such IPIs for many workloads, proper setting
+	  of this Kconfig option is important mostly for aggressive
+	  real-time installations and for battery-powered devices,
+	  hence the default chosen above.
+
+	  Say Y here if you hate IPIs.
+	  Say N here if you hate read-side memory barriers.
+	  Take the default if you are unsure.
+
 endmenu # "RCU Subsystem"
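Because TASKS_RUDE_RCU and TASKS_TRACE_RCU above are `def_bool 0` with no prompt, a client subsystem enables them via `select` from its own Kconfig entry; a hypothetical client might read:

```
config MY_TRACE_FEATURE          # hypothetical example, not in the tree
	bool "Feature with sleepable tracing-safe readers"
	select TASKS_TRACE_RCU
```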
@@ -29,6 +29,8 @@ config RCU_PERF_TEST
 	select TORTURE_TEST
 	select SRCU
 	select TASKS_RCU
+	select TASKS_RUDE_RCU
+	select TASKS_TRACE_RCU
 	default n
 	help
 	  This option provides a kernel module that runs performance
@@ -46,6 +48,8 @@ config RCU_TORTURE_TEST
 	select TORTURE_TEST
 	select SRCU
 	select TASKS_RCU
+	select TASKS_RUDE_RCU
+	select TASKS_TRACE_RCU
 	default n
 	help
 	  This option provides a kernel module that runs torture tests
@@ -431,6 +431,7 @@ bool rcu_gp_is_expedited(void);  /* Internal RCU use. */
 void rcu_expedite_gp(void);
 void rcu_unexpedite_gp(void);
 void rcupdate_announce_bootup_oddness(void);
+void show_rcu_tasks_gp_kthreads(void);
 void rcu_request_urgent_qs_task(struct task_struct *t);
 #endif /* #else #ifdef CONFIG_TINY_RCU */
 
@@ -441,6 +442,8 @@ void rcu_request_urgent_qs_task(struct task_struct *t);
 enum rcutorture_type {
 	RCU_FLAVOR,
 	RCU_TASKS_FLAVOR,
+	RCU_TASKS_RUDE_FLAVOR,
+	RCU_TASKS_TRACING_FLAVOR,
 	RCU_TRIVIAL_FLAVOR,
 	SRCU_FLAVOR,
 	INVALID_RCU_FLAVOR
@@ -454,6 +457,7 @@ void do_trace_rcu_torture_read(const char *rcutorturename,
 			       unsigned long secs,
 			       unsigned long c_old,
 			       unsigned long c);
+void rcu_gp_set_torture_wait(int duration);
 #else
 static inline void rcutorture_get_gp_data(enum rcutorture_type test_type,
 					  int *flags, unsigned long *gp_seq)
@@ -471,6 +475,7 @@ void do_trace_rcu_torture_read(const char *rcutorturename,
 #define do_trace_rcu_torture_read(rcutorturename, rhp, secs, c_old, c) \
 	do { } while (0)
 #endif
+static inline void rcu_gp_set_torture_wait(int duration) { }
 #endif
 
 #if IS_ENABLED(CONFIG_RCU_TORTURE_TEST) || IS_MODULE(CONFIG_RCU_TORTURE_TEST)
@@ -498,6 +503,7 @@ void srcutorture_get_gp_data(enum rcutorture_type test_type,
 #endif
 
 #ifdef CONFIG_TINY_RCU
+static inline bool rcu_dynticks_zero_in_eqs(int cpu, int *vp) { return false; }
 static inline unsigned long rcu_get_gp_seq(void) { return 0; }
 static inline unsigned long rcu_exp_batches_completed(void) { return 0; }
 static inline unsigned long
@@ -507,6 +513,7 @@ static inline void show_rcu_gp_kthreads(void) { }
 static inline int rcu_get_gp_kthreads_prio(void) { return 0; }
 static inline void rcu_fwd_progress_check(unsigned long j) { }
 #else /* #ifdef CONFIG_TINY_RCU */
+bool rcu_dynticks_zero_in_eqs(int cpu, int *vp);
 unsigned long rcu_get_gp_seq(void);
 unsigned long rcu_exp_batches_completed(void);
 unsigned long srcu_batches_completed(struct srcu_struct *sp);
@@ -88,6 +88,7 @@ torture_param(bool, shutdown, RCUPERF_SHUTDOWN,
 torture_param(int, verbose, 1, "Enable verbose debugging printk()s");
 torture_param(int, writer_holdoff, 0, "Holdoff (us) between GPs, zero to disable");
 torture_param(int, kfree_rcu_test, 0, "Do we run a kfree_rcu() perf test?");
+torture_param(int, kfree_mult, 1, "Multiple of kfree_obj size to allocate.");
 
 static char *perf_type = "rcu";
 module_param(perf_type, charp, 0444);
@@ -635,7 +636,7 @@ kfree_perf_thread(void *arg)
 	}
 
 	for (i = 0; i < kfree_alloc_num; i++) {
-		alloc_ptr = kmalloc(sizeof(struct kfree_obj), GFP_KERNEL);
+		alloc_ptr = kmalloc(kfree_mult * sizeof(struct kfree_obj), GFP_KERNEL);
 		if (!alloc_ptr)
 			return -ENOMEM;
 
@@ -722,6 +723,8 @@ kfree_perf_init(void)
 		schedule_timeout_uninterruptible(1);
 	}
 
+	pr_alert("kfree object size=%lu\n", kfree_mult * sizeof(struct kfree_obj));
+
 	kfree_reader_tasks = kcalloc(kfree_nrealthreads, sizeof(kfree_reader_tasks[0]),
 			       GFP_KERNEL);
 	if (kfree_reader_tasks == NULL) {
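The new kfree_mult module parameter above scales the per-object allocation size during the kfree_rcu() performance test; an illustrative (hypothetical) invocation:

```
modprobe rcuperf kfree_rcu_test=1 kfree_mult=2
```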
@@ -20,7 +20,7 @@
 #include <linux/err.h>
 #include <linux/spinlock.h>
 #include <linux/smp.h>
-#include <linux/rcupdate.h>
+#include <linux/rcupdate_wait.h>
 #include <linux/interrupt.h>
 #include <linux/sched/signal.h>
 #include <uapi/linux/sched/types.h>
@@ -45,12 +45,25 @@
 #include <linux/sched/sysctl.h>
 #include <linux/oom.h>
 #include <linux/tick.h>
+#include <linux/rcupdate_trace.h>
 
 #include "rcu.h"
 
 MODULE_LICENSE("GPL");
 MODULE_AUTHOR("Paul E. McKenney <paulmck@linux.ibm.com> and Josh Triplett <josh@joshtriplett.org>");
 
+#ifndef data_race
+#define data_race(expr)							\
+	({								\
+		expr;							\
+	})
+#endif
+#ifndef ASSERT_EXCLUSIVE_WRITER
+#define ASSERT_EXCLUSIVE_WRITER(var) do { } while (0)
+#endif
+#ifndef ASSERT_EXCLUSIVE_ACCESS
+#define ASSERT_EXCLUSIVE_ACCESS(var) do { } while (0)
+#endif
+
 /* Bits for ->extendables field, extendables param, and related definitions. */
 #define RCUTORTURE_RDR_SHIFT	 8	/* Put SRCU index in upper bits. */
@@ -102,6 +115,9 @@ torture_param(int, stall_cpu, 0, "Stall duration (s), zero to disable.");
 torture_param(int, stall_cpu_holdoff, 10,
 	      "Time to wait before starting stall (s).");
 torture_param(int, stall_cpu_irqsoff, 0, "Disable interrupts while stalling.");
+torture_param(int, stall_cpu_block, 0, "Sleep while stalling.");
+torture_param(int, stall_gp_kthread, 0,
+	      "Grace-period kthread stall duration (s).");
 torture_param(int, stat_interval, 60,
 	      "Number of seconds between stats printk()s");
 torture_param(int, stutter, 5, "Number of seconds to run/halt test");
@@ -665,6 +681,11 @@ static void rcu_tasks_torture_deferred_free(struct rcu_torture *p)
 	call_rcu_tasks(&p->rtort_rcu, rcu_torture_cb);
 }
 
+static void synchronize_rcu_mult_test(void)
+{
+	synchronize_rcu_mult(call_rcu_tasks, call_rcu);
+}
+
 static struct rcu_torture_ops tasks_ops = {
 	.ttype		= RCU_TASKS_FLAVOR,
 	.init		= rcu_sync_torture_init,
@@ -674,7 +695,7 @@ static struct rcu_torture_ops tasks_ops = {
 	.get_gp_seq	= rcu_no_completed,
 	.deferred_free	= rcu_tasks_torture_deferred_free,
 	.sync		= synchronize_rcu_tasks,
-	.exp_sync	= synchronize_rcu_tasks,
+	.exp_sync	= synchronize_rcu_mult_test,
 	.call		= call_rcu_tasks,
 	.cb_barrier	= rcu_barrier_tasks,
 	.fqs		= NULL,
@@ -725,6 +746,72 @@ static struct rcu_torture_ops trivial_ops = {
 	.name		= "trivial"
 };
 
+/*
+ * Definitions for rude RCU-tasks torture testing.
+ */
+
+static void rcu_tasks_rude_torture_deferred_free(struct rcu_torture *p)
+{
+	call_rcu_tasks_rude(&p->rtort_rcu, rcu_torture_cb);
+}
+
+static struct rcu_torture_ops tasks_rude_ops = {
+	.ttype		= RCU_TASKS_RUDE_FLAVOR,
+	.init		= rcu_sync_torture_init,
+	.readlock	= rcu_torture_read_lock_trivial,
+	.read_delay	= rcu_read_delay,  /* just reuse rcu's version. */
+	.readunlock	= rcu_torture_read_unlock_trivial,
+	.get_gp_seq	= rcu_no_completed,
+	.deferred_free	= rcu_tasks_rude_torture_deferred_free,
+	.sync		= synchronize_rcu_tasks_rude,
+	.exp_sync	= synchronize_rcu_tasks_rude,
+	.call		= call_rcu_tasks_rude,
+	.cb_barrier	= rcu_barrier_tasks_rude,
+	.fqs		= NULL,
+	.stats		= NULL,
+	.irq_capable	= 1,
+	.name		= "tasks-rude"
+};
+
+/*
+ * Definitions for tracing RCU-tasks torture testing.
+ */
+
+static int tasks_tracing_torture_read_lock(void)
+{
+	rcu_read_lock_trace();
+	return 0;
+}
+
+static void tasks_tracing_torture_read_unlock(int idx)
+{
+	rcu_read_unlock_trace();
+}
+
+static void rcu_tasks_tracing_torture_deferred_free(struct rcu_torture *p)
+{
+	call_rcu_tasks_trace(&p->rtort_rcu, rcu_torture_cb);
+}
+
+static struct rcu_torture_ops tasks_tracing_ops = {
+	.ttype		= RCU_TASKS_TRACING_FLAVOR,
+	.init		= rcu_sync_torture_init,
+	.readlock	= tasks_tracing_torture_read_lock,
+	.read_delay	= srcu_read_delay,  /* just reuse srcu's version. */
+	.readunlock	= tasks_tracing_torture_read_unlock,
+	.get_gp_seq	= rcu_no_completed,
+	.deferred_free	= rcu_tasks_tracing_torture_deferred_free,
+	.sync		= synchronize_rcu_tasks_trace,
+	.exp_sync	= synchronize_rcu_tasks_trace,
+	.call		= call_rcu_tasks_trace,
+	.cb_barrier	= rcu_barrier_tasks_trace,
+	.fqs		= NULL,
+	.stats		= NULL,
+	.irq_capable	= 1,
+	.slow_gps	= 1,
+	.name		= "tasks-tracing"
+};
+
 static unsigned long rcutorture_seq_diff(unsigned long new, unsigned long old)
 {
 	if (!cur_ops->gp_diff)
@@ -734,7 +821,7 @@ static unsigned long rcutorture_seq_diff(unsigned long new, unsigned long old)
 
 static bool __maybe_unused torturing_tasks(void)
 {
-	return cur_ops == &tasks_ops;
+	return cur_ops == &tasks_ops || cur_ops == &tasks_rude_ops;
 }
 
 /*
@@ -833,7 +920,7 @@ static int rcu_torture_boost(void *arg)
 
 		/* Wait for the next test interval. */
 		oldstarttime = boost_starttime;
-		while (ULONG_CMP_LT(jiffies, oldstarttime)) {
+		while (time_before(jiffies, oldstarttime)) {
 			schedule_timeout_interruptible(oldstarttime - jiffies);
 			stutter_wait("rcu_torture_boost");
 			if (torture_must_stop())
@@ -843,7 +930,7 @@ static int rcu_torture_boost(void *arg)
 		/* Do one boost-test interval. */
 		endtime = oldstarttime + test_boost_duration * HZ;
 		call_rcu_time = jiffies;
-		while (ULONG_CMP_LT(jiffies, endtime)) {
+		while (time_before(jiffies, endtime)) {
 			/* If we don't have a callback in flight, post one. */
 			if (!smp_load_acquire(&rbi.inflight)) {
 				/* RCU core before ->inflight = 1. */
@@ -914,7 +1001,7 @@ rcu_torture_fqs(void *arg)
 	VERBOSE_TOROUT_STRING("rcu_torture_fqs task started");
 	do {
 		fqs_resume_time = jiffies + fqs_stutter * HZ;
-		while (ULONG_CMP_LT(jiffies, fqs_resume_time) &&
+		while (time_before(jiffies, fqs_resume_time) &&
 		       !kthread_should_stop()) {
 			schedule_timeout_interruptible(1);
 		}
@@ -1147,6 +1234,7 @@ static void rcutorture_one_extend(int *readstate, int newstate,
 				  struct torture_random_state *trsp,
 				  struct rt_read_seg *rtrsp)
 {
+	unsigned long flags;
 	int idxnew = -1;
 	int idxold = *readstate;
 	int statesnew = ~*readstate & newstate;
@@ -1181,8 +1269,15 @@ static void rcutorture_one_extend(int *readstate, int newstate,
 		rcu_read_unlock_bh();
 	if (statesold & RCUTORTURE_RDR_SCHED)
 		rcu_read_unlock_sched();
-	if (statesold & RCUTORTURE_RDR_RCU)
+	if (statesold & RCUTORTURE_RDR_RCU) {
+		bool lockit = !statesnew && !(torture_random(trsp) & 0xffff);
+
+		if (lockit)
+			raw_spin_lock_irqsave(&current->pi_lock, flags);
 		cur_ops->readunlock(idxold >> RCUTORTURE_RDR_SHIFT);
+		if (lockit)
+			raw_spin_unlock_irqrestore(&current->pi_lock, flags);
+	}
 
 	/* Delay if neither beginning nor end and there was a change. */
 	if ((statesnew || statesold) && *readstate && newstate)
@@ -1283,6 +1378,7 @@ static bool rcu_torture_one_read(struct torture_random_state *trsp)
 			       rcu_read_lock_bh_held() ||
 			       rcu_read_lock_sched_held() ||
 			       srcu_read_lock_held(srcu_ctlp) ||
+			       rcu_read_lock_trace_held() ||
 			       torturing_tasks());
 	if (p == NULL) {
 		/* Wait for rcu_torture_writer to get underway */
@@ -1444,9 +1540,9 @@ rcu_torture_stats_print(void)
 			atomic_long_read(&n_rcu_torture_timers));
 	torture_onoff_stats();
 	pr_cont("barrier: %ld/%ld:%ld\n",
-		n_barrier_successes,
-		n_barrier_attempts,
-		n_rcu_torture_barrier_error);
+		data_race(n_barrier_successes),
+		data_race(n_barrier_attempts),
+		data_race(n_rcu_torture_barrier_error));
 
 	pr_alert("%s%s ", torture_type, TORTURE_FLAG);
 	if (atomic_read(&n_rcu_torture_mberror) ||
@@ -1536,6 +1632,7 @@ rcu_torture_print_module_parms(struct rcu_torture_ops *cur_ops, const char *tag)
 		 "test_boost=%d/%d test_boost_interval=%d "
 		 "test_boost_duration=%d shutdown_secs=%d "
 		 "stall_cpu=%d stall_cpu_holdoff=%d stall_cpu_irqsoff=%d "
+		 "stall_cpu_block=%d "
 		 "n_barrier_cbs=%d "
 		 "onoff_interval=%d onoff_holdoff=%d\n",
 		 torture_type, tag, nrealreaders, nfakewriters,
@@ -1544,6 +1641,7 @@ rcu_torture_print_module_parms(struct rcu_torture_ops *cur_ops, const char *tag)
 		 test_boost, cur_ops->can_boost,
 		 test_boost_interval, test_boost_duration, shutdown_secs,
 		 stall_cpu, stall_cpu_holdoff, stall_cpu_irqsoff,
+		 stall_cpu_block,
 		 n_barrier_cbs,
 		 onoff_interval, onoff_holdoff);
 }
@@ -1599,6 +1697,7 @@ static int rcutorture_booster_init(unsigned int cpu)
  */
 static int rcu_torture_stall(void *args)
 {
+	int idx;
 	unsigned long stop_at;
 
 	VERBOSE_TOROUT_STRING("rcu_torture_stall task started");
@@ -1607,26 +1706,37 @@ static int rcu_torture_stall(void *args)
 		schedule_timeout_interruptible(stall_cpu_holdoff * HZ);
 		VERBOSE_TOROUT_STRING("rcu_torture_stall end holdoff");
 	}
-	if (!kthread_should_stop()) {
+	if (!kthread_should_stop() && stall_gp_kthread > 0) {
+		VERBOSE_TOROUT_STRING("rcu_torture_stall begin GP stall");
+		rcu_gp_set_torture_wait(stall_gp_kthread * HZ);
+		for (idx = 0; idx < stall_gp_kthread + 2; idx++) {
+			if (kthread_should_stop())
+				break;
+			schedule_timeout_uninterruptible(HZ);
+		}
+	}
+	if (!kthread_should_stop() && stall_cpu > 0) {
+		VERBOSE_TOROUT_STRING("rcu_torture_stall begin CPU stall");
 		stop_at = ktime_get_seconds() + stall_cpu;
 		/* RCU CPU stall is expected behavior in following code. */
-		rcu_read_lock();
+		idx = cur_ops->readlock();
 		if (stall_cpu_irqsoff)
 			local_irq_disable();
-		else
+		else if (!stall_cpu_block)
 			preempt_disable();
 		pr_alert("rcu_torture_stall start on CPU %d.\n",
-			 smp_processor_id());
+			 raw_smp_processor_id());
 		while (ULONG_CMP_LT((unsigned long)ktime_get_seconds(),
 				    stop_at))
-			continue;  /* Induce RCU CPU stall warning. */
+			if (stall_cpu_block)
+				schedule_timeout_uninterruptible(HZ);
 		if (stall_cpu_irqsoff)
 			local_irq_enable();
-		else
+		else if (!stall_cpu_block)
 			preempt_enable();
-		rcu_read_unlock();
-		pr_alert("rcu_torture_stall end.\n");
+		cur_ops->readunlock(idx);
 	}
+	pr_alert("rcu_torture_stall end.\n");
 	torture_shutdown_absorb("rcu_torture_stall");
 	while (!kthread_should_stop())
 		schedule_timeout_interruptible(10 * HZ);
@@ -1636,7 +1746,7 @@ static int rcu_torture_stall(void *args)
 /* Spawn CPU-stall kthread, if stall_cpu specified. */
 static int __init rcu_torture_stall_init(void)
 {
-	if (stall_cpu <= 0)
+	if (stall_cpu <= 0 && stall_gp_kthread <= 0)
 		return 0;
 	return torture_create_kthread(rcu_torture_stall, NULL, stall_task);
 }
@@ -1692,8 +1802,8 @@ struct rcu_fwd {
 	unsigned long rcu_launder_gp_seq_start;
 };
 
-struct rcu_fwd *rcu_fwds;
-bool rcu_fwd_emergency_stop;
+static struct rcu_fwd *rcu_fwds;
+static bool rcu_fwd_emergency_stop;
 
 static void rcu_torture_fwd_cb_hist(struct rcu_fwd *rfp)
 {
@@ -2400,7 +2510,8 @@ rcu_torture_init(void)
 	int firsterr = 0;
 	static struct rcu_torture_ops *torture_ops[] = {
 		&rcu_ops, &rcu_busted_ops, &srcu_ops, &srcud_ops,
-		&busted_srcud_ops, &tasks_ops, &trivial_ops,
+		&busted_srcud_ops, &tasks_ops, &tasks_rude_ops,
+		&tasks_tracing_ops, &trivial_ops,
 	};
 
 	if (!torture_init_begin(torture_type, verbose))
 kernel/rcu/tasks.h | 1193 lines (new file)
 File diff suppressed because it is too large
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -238,7 +238,9 @@ void rcu_softirq_qs(void)
 
 /*
  * Record entry into an extended quiescent state.  This is only to be
- * called when not already in an extended quiescent state.
+ * called when not already in an extended quiescent state, that is,
+ * RCU is watching prior to the call to this function and is no longer
+ * watching upon return.
  */
 static void rcu_dynticks_eqs_enter(void)
 {
@@ -250,8 +252,9 @@ static void rcu_dynticks_eqs_enter(void)
 	 * critical sections, and we also must force ordering with the
 	 * next idle sojourn.
 	 */
+	rcu_dynticks_task_trace_enter();  // Before ->dynticks update!
 	seq = atomic_add_return(RCU_DYNTICK_CTRL_CTR, &rdp->dynticks);
-	/* Better be in an extended quiescent state! */
+	// RCU is no longer watching.  Better be in extended quiescent state!
 	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) &&
 		     (seq & RCU_DYNTICK_CTRL_CTR));
 	/* Better not have special action (TLB flush) pending! */
@@ -261,7 +264,8 @@ static void rcu_dynticks_eqs_enter(void)
 
 /*
  * Record exit from an extended quiescent state.  This is only to be
- * called from an extended quiescent state.
+ * called from an extended quiescent state, that is, RCU is not watching
+ * prior to the call to this function and is watching upon return.
  */
 static void rcu_dynticks_eqs_exit(void)
 {
@@ -274,6 +278,8 @@ static void rcu_dynticks_eqs_exit(void)
 	 * critical section.
 	 */
 	seq = atomic_add_return(RCU_DYNTICK_CTRL_CTR, &rdp->dynticks);
+	// RCU is now watching.  Better not be in an extended quiescent state!
+	rcu_dynticks_task_trace_exit();  // After ->dynticks update!
 	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) &&
 		     !(seq & RCU_DYNTICK_CTRL_CTR));
 	if (seq & RCU_DYNTICK_CTRL_MASK) {
@@ -345,6 +351,28 @@ static bool rcu_dynticks_in_eqs_since(struct rcu_data *rdp, int snap)
 	return snap != rcu_dynticks_snap(rdp);
 }
 
+/*
+ * Return true if the referenced integer is zero while the specified
+ * CPU remains within a single extended quiescent state.
+ */
+bool rcu_dynticks_zero_in_eqs(int cpu, int *vp)
+{
+	struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
+	int snap;
+
+	// If not quiescent, force back to earlier extended quiescent state.
+	snap = atomic_read(&rdp->dynticks) & ~(RCU_DYNTICK_CTRL_MASK |
+					       RCU_DYNTICK_CTRL_CTR);
+
+	smp_rmb(); // Order ->dynticks and *vp reads.
+	if (READ_ONCE(*vp))
+		return false;  // Non-zero, so report failure;
+	smp_rmb(); // Order *vp read and ->dynticks re-read.
+
+	// If still in the same extended quiescent state, we are good!
+	return snap == (atomic_read(&rdp->dynticks) & ~RCU_DYNTICK_CTRL_MASK);
+}
+
 /*
  * Set the special (bottom) bit of the specified CPU so that it
  * will take special action (such as flushing its TLB) on the
@@ -584,6 +612,7 @@ static void rcu_eqs_enter(bool user)
 	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) &&
 		     rdp->dynticks_nesting == 0);
 	if (rdp->dynticks_nesting != 1) {
+		// RCU will still be watching, so just do accounting and leave.
 		rdp->dynticks_nesting--;
 		return;
 	}
@@ -596,7 +625,9 @@ static void rcu_eqs_enter(bool user)
 	rcu_prepare_for_idle();
 	rcu_preempt_deferred_qs(current);
 	WRITE_ONCE(rdp->dynticks_nesting, 0); /* Avoid irq-access tearing. */
+	// RCU is watching here ...
 	rcu_dynticks_eqs_enter();
+	// ... but is no longer watching here.
 	rcu_dynticks_task_enter();
 }
 
@@ -676,7 +707,9 @@ static __always_inline void rcu_nmi_exit_common(bool irq)
 	if (irq)
 		rcu_prepare_for_idle();
 
+	// RCU is watching here ...
 	rcu_dynticks_eqs_enter();
+	// ... but is no longer watching here.
 
 	if (irq)
 		rcu_dynticks_task_enter();
@@ -751,11 +784,14 @@ static void rcu_eqs_exit(bool user)
 	oldval = rdp->dynticks_nesting;
 	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && oldval < 0);
 	if (oldval) {
+		// RCU was already watching, so just do accounting and leave.
 		rdp->dynticks_nesting++;
 		return;
 	}
 	rcu_dynticks_task_exit();
+	// RCU is not watching here ...
 	rcu_dynticks_eqs_exit();
+	// ... but is watching here.
 	rcu_cleanup_after_idle();
 	trace_rcu_dyntick(TPS("End"), rdp->dynticks_nesting, 1, atomic_read(&rdp->dynticks));
 	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && !user && !is_idle_task(current));
@@ -832,7 +868,9 @@ static __always_inline void rcu_nmi_enter_common(bool irq)
 	if (irq)
 		rcu_dynticks_task_exit();
 
+	// RCU is not watching here ...
 	rcu_dynticks_eqs_exit();
+	// ... but is watching here.
 
 	if (irq)
 		rcu_cleanup_after_idle();
@@ -842,9 +880,16 @@ static __always_inline void rcu_nmi_enter_common(bool irq)
 		   rdp->dynticks_nmi_nesting == DYNTICK_IRQ_NONIDLE &&
 		   READ_ONCE(rdp->rcu_urgent_qs) &&
 		   !READ_ONCE(rdp->rcu_forced_tick)) {
+		// We get here only if we had already exited the extended
+		// quiescent state and this was an interrupt (not an NMI).
+		// Therefore, (1) RCU is already watching and (2) The fact
+		// that we are in an interrupt handler and that the rcu_node
+		// lock is an irq-disabled lock prevents self-deadlock.
+		// So we can safely recheck under the lock.
 		raw_spin_lock_rcu_node(rdp->mynode);
-		// Recheck under lock.
 		if (rdp->rcu_urgent_qs && !rdp->rcu_forced_tick) {
+			// A nohz_full CPU is in the kernel and RCU
+			// needs a quiescent state.  Turn on the tick!
 			WRITE_ONCE(rdp->rcu_forced_tick, true);
 			tick_dep_set_cpu(rdp->cpu, TICK_DEP_BIT_RCU);
 		}
@@ -1486,6 +1531,31 @@ static void rcu_gp_slow(int delay)
 	schedule_timeout_uninterruptible(delay);
 }
 
+static unsigned long sleep_duration;
+
+/* Allow rcutorture to stall the grace-period kthread. */
+void rcu_gp_set_torture_wait(int duration)
+{
+	if (IS_ENABLED(CONFIG_RCU_TORTURE_TEST) && duration > 0)
+		WRITE_ONCE(sleep_duration, duration);
+}
+EXPORT_SYMBOL_GPL(rcu_gp_set_torture_wait);
+
+/* Actually implement the aforementioned wait. */
+static void rcu_gp_torture_wait(void)
+{
+	unsigned long duration;
+
+	if (!IS_ENABLED(CONFIG_RCU_TORTURE_TEST))
+		return;
+	duration = xchg(&sleep_duration, 0UL);
+	if (duration > 0) {
+		pr_alert("%s: Waiting %lu jiffies\n", __func__, duration);
+		schedule_timeout_uninterruptible(duration);
+		pr_alert("%s: Wait complete\n", __func__);
+	}
+}
+
 /*
  * Initialize a new grace period.  Return false if no grace period required.
  */
@@ -1693,6 +1763,7 @@ static void rcu_gp_fqs_loop(void)
 		rcu_state.gp_state = RCU_GP_WAIT_FQS;
 		ret = swait_event_idle_timeout_exclusive(
 				rcu_state.gp_wq, rcu_gp_fqs_check_wake(&gf), j);
+		rcu_gp_torture_wait();
 		rcu_state.gp_state = RCU_GP_DOING_FQS;
 		/* Locking provides needed memory barriers. */
 		/* If grace period done, leave loop. */
@@ -1847,6 +1918,7 @@ static int __noreturn rcu_gp_kthread(void *unused)
 		swait_event_idle_exclusive(rcu_state.gp_wq,
 					   READ_ONCE(rcu_state.gp_flags) &
 					   RCU_GP_FLAG_INIT);
+		rcu_gp_torture_wait();
 		rcu_state.gp_state = RCU_GP_DONE_GPS;
 		/* Locking provides needed memory barrier. */
 		if (rcu_gp_init())
@ -2837,6 +2909,8 @@ struct kfree_rcu_cpu {
|
|||||||
struct delayed_work monitor_work;
|
struct delayed_work monitor_work;
|
||||||
bool monitor_todo;
|
bool monitor_todo;
|
||||||
bool initialized;
|
bool initialized;
|
||||||
|
// Number of objects for which GP not started
|
||||||
|
int count;
|
||||||
};
|
};
|
||||||
|
|
||||||
static DEFINE_PER_CPU(struct kfree_rcu_cpu, krc);
|
static DEFINE_PER_CPU(struct kfree_rcu_cpu, krc);
|
||||||
@ -2950,6 +3024,8 @@ static inline bool queue_kfree_rcu_work(struct kfree_rcu_cpu *krcp)
|
|||||||
krcp->head = NULL;
|
krcp->head = NULL;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
WRITE_ONCE(krcp->count, 0);
|
||||||
|
|
||||||
/*
|
/*
|
||||||
* One work is per one batch, so there are two "free channels",
|
* One work is per one batch, so there are two "free channels",
|
||||||
* "bhead_free" and "head_free" the batch can handle. It can be
|
* "bhead_free" and "head_free" the batch can handle. It can be
|
||||||
@ -3086,6 +3162,8 @@ void kfree_call_rcu(struct rcu_head *head, rcu_callback_t func)
|
|||||||
krcp->head = head;
|
krcp->head = head;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
WRITE_ONCE(krcp->count, krcp->count + 1);
|
 	// Set timer to drain after KFREE_DRAIN_JIFFIES.
 	if (rcu_scheduler_active == RCU_SCHEDULER_RUNNING &&
 	    !krcp->monitor_todo) {
@@ -3100,6 +3178,56 @@ unlock_return:
 }
 EXPORT_SYMBOL_GPL(kfree_call_rcu);
 
+static unsigned long
+kfree_rcu_shrink_count(struct shrinker *shrink, struct shrink_control *sc)
+{
+	int cpu;
+	unsigned long count = 0;
+
+	/* Snapshot count of all CPUs */
+	for_each_online_cpu(cpu) {
+		struct kfree_rcu_cpu *krcp = per_cpu_ptr(&krc, cpu);
+
+		count += READ_ONCE(krcp->count);
+	}
+
+	return count;
+}
+
+static unsigned long
+kfree_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
+{
+	int cpu, freed = 0;
+	unsigned long flags;
+
+	for_each_online_cpu(cpu) {
+		int count;
+		struct kfree_rcu_cpu *krcp = per_cpu_ptr(&krc, cpu);
+
+		count = krcp->count;
+		spin_lock_irqsave(&krcp->lock, flags);
+		if (krcp->monitor_todo)
+			kfree_rcu_drain_unlock(krcp, flags);
+		else
+			spin_unlock_irqrestore(&krcp->lock, flags);
+
+		sc->nr_to_scan -= count;
+		freed += count;
+
+		if (sc->nr_to_scan <= 0)
+			break;
+	}
+
+	return freed;
+}
+
+static struct shrinker kfree_rcu_shrinker = {
+	.count_objects = kfree_rcu_shrink_count,
+	.scan_objects = kfree_rcu_shrink_scan,
+	.batch = 0,
+	.seeks = DEFAULT_SEEKS,
+};
+
 void __init kfree_rcu_scheduler_running(void)
 {
 	int cpu;
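The two new callbacks follow the standard two-phase shrinker contract: `->count_objects` returns a cheap snapshot of how many objects could be reclaimed, and `->scan_objects` drains batches until the `sc->nr_to_scan` budget is exhausted. A minimal userspace sketch of that contract (plain arrays stand in for the per-CPU `kfree_rcu_cpu` structures; all `fake_*` names are hypothetical, not kernel API):

```c
#include <assert.h>

#define NR_FAKE_CPUS 4

/* Stand-in for per-CPU kfree_rcu_cpu batch counts; values are illustrative. */
static int fake_count[NR_FAKE_CPUS] = { 5, 0, 7, 3 };

/* Phase 1: cheap snapshot of how many objects could be reclaimed. */
static unsigned long fake_shrink_count(void)
{
	int cpu;
	unsigned long count = 0;

	for (cpu = 0; cpu < NR_FAKE_CPUS; cpu++)
		count += fake_count[cpu];
	return count;
}

/* Phase 2: drain per-CPU batches until the scan budget is exhausted. */
static unsigned long fake_shrink_scan(long nr_to_scan)
{
	int cpu;
	unsigned long freed = 0;

	for (cpu = 0; cpu < NR_FAKE_CPUS; cpu++) {
		int count = fake_count[cpu];

		fake_count[cpu] = 0;	/* "drain" this CPU's batch */
		nr_to_scan -= count;
		freed += count;
		if (nr_to_scan <= 0)	/* budget spent: stop scanning */
			break;
	}
	return freed;
}
```

With the counts above, a scan budget of 6 drains CPUs 0 through 2 and stops before CPU 3, mirroring how the kernel-side scan returns as soon as `nr_to_scan` drops to zero even though more batches remain.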
@@ -4021,6 +4149,8 @@ static void __init kfree_rcu_batch_init(void)
 		INIT_DELAYED_WORK(&krcp->monitor_work, kfree_rcu_monitor);
 		krcp->initialized = true;
 	}
+	if (register_shrinker(&kfree_rcu_shrinker))
+		pr_err("Failed to register kfree_rcu() shrinker!\n");
 }
 
 void __init rcu_init(void)
@@ -455,6 +455,8 @@ static void rcu_bind_gp_kthread(void);
 static bool rcu_nohz_full_cpu(void);
 static void rcu_dynticks_task_enter(void);
 static void rcu_dynticks_task_exit(void);
+static void rcu_dynticks_task_trace_enter(void);
+static void rcu_dynticks_task_trace_exit(void);
 
 /* Forward declarations for tree_stall.h */
 static void record_gp_stall_check_time(void);
@@ -639,6 +639,7 @@ static void wait_rcu_exp_gp(struct work_struct *wp)
  */
 static void rcu_exp_handler(void *unused)
 {
+	int depth = rcu_preempt_depth();
 	unsigned long flags;
 	struct rcu_data *rdp = this_cpu_ptr(&rcu_data);
 	struct rcu_node *rnp = rdp->mynode;
@@ -649,7 +650,7 @@ static void rcu_exp_handler(void *unused)
 	 * critical section.  If also enabled or idle, immediately
 	 * report the quiescent state, otherwise defer.
 	 */
-	if (!rcu_preempt_depth()) {
+	if (!depth) {
 		if (!(preempt_count() & (PREEMPT_MASK | SOFTIRQ_MASK)) ||
 		    rcu_dynticks_curr_cpu_in_eqs()) {
 			rcu_report_exp_rdp(rdp);
@@ -673,7 +674,7 @@ static void rcu_exp_handler(void *unused)
 	 * can have caused this quiescent state to already have been
 	 * reported, so we really do need to check ->expmask.
 	 */
-	if (rcu_preempt_depth() > 0) {
+	if (depth > 0) {
 		raw_spin_lock_irqsave_rcu_node(rnp, flags);
 		if (rnp->expmask & rdp->grpmask) {
 			rdp->exp_deferred_qs = true;
@@ -683,30 +684,8 @@ static void rcu_exp_handler(void *unused)
 		return;
 	}
 
-	/*
-	 * The final and least likely case is where the interrupted
-	 * code was just about to or just finished exiting the RCU-preempt
-	 * read-side critical section, and no, we can't tell which.
-	 * So either way, set ->deferred_qs to flag later code that
-	 * a quiescent state is required.
-	 *
-	 * If the CPU is fully enabled (or if some buggy RCU-preempt
-	 * read-side critical section is being used from idle), just
-	 * invoke rcu_preempt_deferred_qs() to immediately report the
-	 * quiescent state.  We cannot use rcu_read_unlock_special()
-	 * because we are in an interrupt handler, which will cause that
-	 * function to take an early exit without doing anything.
-	 *
-	 * Otherwise, force a context switch after the CPU enables everything.
-	 */
-	rdp->exp_deferred_qs = true;
-	if (!(preempt_count() & (PREEMPT_MASK | SOFTIRQ_MASK)) ||
-	    WARN_ON_ONCE(rcu_dynticks_curr_cpu_in_eqs())) {
-		rcu_preempt_deferred_qs(t);
-	} else {
-		set_tsk_need_resched(t);
-		set_preempt_need_resched();
-	}
+	// Finally, negative nesting depth should not happen.
+	WARN_ON_ONCE(1);
 }
 
 /* PREEMPTION=y, so no PREEMPTION=n expedited grace period to clean up after. */
@@ -331,6 +331,7 @@ void rcu_note_context_switch(bool preempt)
 	rcu_qs();
 	if (rdp->exp_deferred_qs)
 		rcu_report_exp_rdp(rdp);
+	rcu_tasks_qs(current, preempt);
 	trace_rcu_utilization(TPS("End context switch"));
 }
 EXPORT_SYMBOL_GPL(rcu_note_context_switch);
@@ -345,9 +346,7 @@ static int rcu_preempt_blocked_readers_cgp(struct rcu_node *rnp)
 	return READ_ONCE(rnp->gp_tasks) != NULL;
 }
 
-/* Bias and limit values for ->rcu_read_lock_nesting. */
-#define RCU_NEST_BIAS INT_MAX
-#define RCU_NEST_NMAX (-INT_MAX / 2)
+/* limit value for ->rcu_read_lock_nesting. */
 #define RCU_NEST_PMAX (INT_MAX / 2)
 
 static void rcu_preempt_read_enter(void)
@@ -355,9 +354,9 @@ static void rcu_preempt_read_enter(void)
 	current->rcu_read_lock_nesting++;
 }
 
-static void rcu_preempt_read_exit(void)
+static int rcu_preempt_read_exit(void)
 {
-	current->rcu_read_lock_nesting--;
+	return --current->rcu_read_lock_nesting;
 }
 
 static void rcu_preempt_depth_set(int val)
@@ -390,21 +389,15 @@ void __rcu_read_unlock(void)
 {
 	struct task_struct *t = current;
 
-	if (rcu_preempt_depth() != 1) {
-		rcu_preempt_read_exit();
-	} else {
+	if (rcu_preempt_read_exit() == 0) {
 		barrier(); /* critical section before exit code. */
-		rcu_preempt_depth_set(-RCU_NEST_BIAS);
-		barrier(); /* assign before ->rcu_read_unlock_special load */
 		if (unlikely(READ_ONCE(t->rcu_read_unlock_special.s)))
 			rcu_read_unlock_special(t);
-		barrier(); /* ->rcu_read_unlock_special load before assign */
-		rcu_preempt_depth_set(0);
 	}
 	if (IS_ENABLED(CONFIG_PROVE_LOCKING)) {
 		int rrln = rcu_preempt_depth();
 
-		WARN_ON_ONCE(rrln < 0 && rrln > RCU_NEST_NMAX);
+		WARN_ON_ONCE(rrln < 0 || rrln > RCU_NEST_PMAX);
 	}
 }
 EXPORT_SYMBOL_GPL(__rcu_read_unlock);
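The rewritten `__rcu_read_unlock()` leans on `rcu_preempt_read_exit()` returning the post-decrement nesting level, so "decrement" and "did we just leave the outermost critical section?" become one operation, replacing the old negative-bias dance. A userspace sketch of that idiom (all `fake_*` names are invented for illustration):

```c
#include <assert.h>

/* Stand-in for current->rcu_read_lock_nesting. */
static int fake_nesting;
static int cleanups;	/* counts how often outermost-exit work ran */

static void fake_read_enter(void)
{
	fake_nesting++;
}

/* Return the decremented level, as rcu_preempt_read_exit() now does. */
static int fake_read_exit(void)
{
	return --fake_nesting;
}

static void fake_unlock(void)
{
	/* Only the outermost unlock runs the exit-time work. */
	if (fake_read_exit() == 0)
		cleanups++;
	assert(fake_nesting >= 0);	/* negative nesting would be a bug */
}

/* Nest two read-side sections, then unwind them. */
static int demo_nested_unlock(void)
{
	fake_read_enter();
	fake_read_enter();
	fake_unlock();		/* inner: 2 -> 1, no cleanup */
	fake_unlock();		/* outer: 1 -> 0, cleanup runs */
	return cleanups;
}
```

The `assert(fake_nesting >= 0)` mirrors the diff's new stance that negative nesting "should not happen": with the bias scheme gone, any negative depth is a plain bug rather than a transient encoding.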
@@ -556,7 +549,7 @@ static bool rcu_preempt_need_deferred_qs(struct task_struct *t)
 {
 	return (__this_cpu_read(rcu_data.exp_deferred_qs) ||
 		READ_ONCE(t->rcu_read_unlock_special.s)) &&
-	       rcu_preempt_depth() <= 0;
+	       rcu_preempt_depth() == 0;
 }
 
 /*
@@ -569,16 +562,11 @@ static bool rcu_preempt_need_deferred_qs(struct task_struct *t)
 static void rcu_preempt_deferred_qs(struct task_struct *t)
 {
 	unsigned long flags;
-	bool couldrecurse = rcu_preempt_depth() >= 0;
 
 	if (!rcu_preempt_need_deferred_qs(t))
 		return;
-	if (couldrecurse)
-		rcu_preempt_depth_set(rcu_preempt_depth() - RCU_NEST_BIAS);
 	local_irq_save(flags);
 	rcu_preempt_deferred_qs_irqrestore(t, flags);
-	if (couldrecurse)
-		rcu_preempt_depth_set(rcu_preempt_depth() + RCU_NEST_BIAS);
 }
 
 /*
@@ -615,19 +603,18 @@ static void rcu_read_unlock_special(struct task_struct *t)
 		struct rcu_data *rdp = this_cpu_ptr(&rcu_data);
 		struct rcu_node *rnp = rdp->mynode;
 
-		exp = (t->rcu_blocked_node && t->rcu_blocked_node->exp_tasks) ||
-		      (rdp->grpmask & READ_ONCE(rnp->expmask)) ||
-		      tick_nohz_full_cpu(rdp->cpu);
+		exp = (t->rcu_blocked_node &&
+		       READ_ONCE(t->rcu_blocked_node->exp_tasks)) ||
+		      (rdp->grpmask & READ_ONCE(rnp->expmask));
 		// Need to defer quiescent state until everything is enabled.
-		if (irqs_were_disabled && use_softirq &&
-		    (in_interrupt() ||
-		     (exp && !t->rcu_read_unlock_special.b.deferred_qs))) {
-			// Using softirq, safe to awaken, and we get
-			// no help from enabling irqs, unlike bh/preempt.
+		if (use_softirq && (in_irq() || (exp && !irqs_were_disabled))) {
+			// Using softirq, safe to awaken, and either the
+			// wakeup is free or there is an expedited GP.
 			raise_softirq_irqoff(RCU_SOFTIRQ);
 		} else {
 			// Enabling BH or preempt does reschedule, so...
-			// Also if no expediting or NO_HZ_FULL, slow is OK.
+			// Also if no expediting, slow is OK.
+			// Plus nohz_full CPUs eventually get tick enabled.
 			set_tsk_need_resched(current);
 			set_preempt_need_resched();
 			if (IS_ENABLED(CONFIG_IRQ_WORK) && irqs_were_disabled &&
@@ -640,7 +627,6 @@ static void rcu_read_unlock_special(struct task_struct *t)
 				irq_work_queue_on(&rdp->defer_qs_iw, rdp->cpu);
 			}
 		}
-		t->rcu_read_unlock_special.b.deferred_qs = true;
 		local_irq_restore(flags);
 		return;
 	}
@@ -699,7 +685,7 @@ static void rcu_flavor_sched_clock_irq(int user)
 	} else if (rcu_preempt_need_deferred_qs(t)) {
 		rcu_preempt_deferred_qs(t); /* Report deferred QS. */
 		return;
-	} else if (!rcu_preempt_depth()) {
+	} else if (!WARN_ON_ONCE(rcu_preempt_depth())) {
 		rcu_qs(); /* Report immediate QS. */
 		return;
 	}
@@ -854,8 +840,7 @@ void rcu_note_context_switch(bool preempt)
 	this_cpu_write(rcu_data.rcu_urgent_qs, false);
 	if (unlikely(raw_cpu_read(rcu_data.rcu_need_heavy_qs)))
 		rcu_momentary_dyntick_idle();
-	if (!preempt)
-		rcu_tasks_qs(current);
+	rcu_tasks_qs(current, preempt);
 out:
 	trace_rcu_utilization(TPS("End context switch"));
 }
@@ -2568,3 +2553,21 @@ static void rcu_dynticks_task_exit(void)
 	WRITE_ONCE(current->rcu_tasks_idle_cpu, -1);
 #endif /* #if defined(CONFIG_TASKS_RCU) && defined(CONFIG_NO_HZ_FULL) */
 }
+
+/* Turn on heavyweight RCU tasks trace readers on idle/user entry. */
+static void rcu_dynticks_task_trace_enter(void)
+{
+#ifdef CONFIG_TASKS_RCU_TRACE
+	if (IS_ENABLED(CONFIG_TASKS_TRACE_RCU_READ_MB))
+		current->trc_reader_special.b.need_mb = true;
+#endif /* #ifdef CONFIG_TASKS_RCU_TRACE */
+}
+
+/* Turn off heavyweight RCU tasks trace readers on idle/user exit. */
+static void rcu_dynticks_task_trace_exit(void)
+{
+#ifdef CONFIG_TASKS_RCU_TRACE
+	if (IS_ENABLED(CONFIG_TASKS_TRACE_RCU_READ_MB))
+		current->trc_reader_special.b.need_mb = false;
+#endif /* #ifdef CONFIG_TASKS_RCU_TRACE */
+}
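The new `rcu_dynticks_task_trace_enter()`/`_exit()` pair simply toggles a per-task "need memory barrier" flag when the heavyweight Tasks Trace reader mode is configured. A hedged userspace rendering of that enter/exit shape (the `fake_*` names are invented; `FAKE_READ_MB_ENABLED` stands in for `IS_ENABLED(CONFIG_TASKS_TRACE_RCU_READ_MB)`):

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for current->trc_reader_special.b.need_mb. */
struct fake_task {
	bool need_mb;
};

static struct fake_task fake_current;

/* Compile-time knob mimicking IS_ENABLED(CONFIG_TASKS_TRACE_RCU_READ_MB). */
#define FAKE_READ_MB_ENABLED 1

static void fake_task_trace_enter(struct fake_task *t)
{
	if (FAKE_READ_MB_ENABLED)
		t->need_mb = true;	/* readers must now use barriers */
}

static void fake_task_trace_exit(struct fake_task *t)
{
	if (FAKE_READ_MB_ENABLED)
		t->need_mb = false;	/* back to the cheap reader path */
}

/* Drive one idle/user entry-exit round trip and report the transitions. */
static bool demo_task_trace_round_trip(void)
{
	bool during;

	fake_task_trace_enter(&fake_current);
	during = fake_current.need_mb;
	fake_task_trace_exit(&fake_current);
	return during && !fake_current.need_mb;
}
```

Because `FAKE_READ_MB_ENABLED` is a compile-time constant, the compiler can elide both functions entirely when the mode is off, which is the same property the kernel gets from `IS_ENABLED()`.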
@@ -15,10 +15,12 @@
 int sysctl_panic_on_rcu_stall __read_mostly;
 
 #ifdef CONFIG_PROVE_RCU
 #define RCU_STALL_DELAY_DELTA	(5 * HZ)
 #else
 #define RCU_STALL_DELAY_DELTA	0
 #endif
+#define RCU_STALL_MIGHT_DIV	8
+#define RCU_STALL_MIGHT_MIN	(2 * HZ)
 
 /* Limit-check stall timeouts specified at boottime and runtime. */
 int rcu_jiffies_till_stall_check(void)
@@ -40,6 +42,36 @@ int rcu_jiffies_till_stall_check(void)
 }
 EXPORT_SYMBOL_GPL(rcu_jiffies_till_stall_check);
 
+/**
+ * rcu_gp_might_be_stalled - Is it likely that the grace period is stalled?
+ *
+ * Returns @true if the current grace period is sufficiently old that
+ * it is reasonable to assume that it might be stalled.  This can be
+ * useful when deciding whether to allocate memory to enable RCU-mediated
+ * freeing on the one hand or just invoking synchronize_rcu() on the other.
+ * The latter is preferable when the grace period is stalled.
+ *
+ * Note that sampling of the .gp_start and .gp_seq fields must be done
+ * carefully to avoid false positives at the beginnings and ends of
+ * grace periods.
+ */
+bool rcu_gp_might_be_stalled(void)
+{
+	unsigned long d = rcu_jiffies_till_stall_check() / RCU_STALL_MIGHT_DIV;
+	unsigned long j = jiffies;
+
+	if (d < RCU_STALL_MIGHT_MIN)
+		d = RCU_STALL_MIGHT_MIN;
+	smp_mb(); // jiffies before .gp_seq to avoid false positives.
+	if (!rcu_gp_in_progress())
+		return false;
+	// Long delays at this point avoids false positive, but a delay
+	// of ULONG_MAX/4 jiffies voids your no-false-positive warranty.
+	smp_mb(); // .gp_seq before second .gp_start
+	// And ditto here.
+	return !time_before(j, READ_ONCE(rcu_state.gp_start) + d);
+}
+
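rcu_gp_might_be_stalled() declares a grace period "possibly stalled" once it is older than one-eighth of the stall timeout, clamped below at 2*HZ. A hedged userspace rendering of just that arithmetic, with `HZ` and `time_before()` reimplemented locally (this is a sketch of the heuristic, not the kernel code, which also needs the memory barriers and the in-progress check shown in the diff):

```c
#include <assert.h>
#include <stdbool.h>

#define FAKE_HZ 1000			/* jiffies per second, illustrative */
#define STALL_MIGHT_DIV 8
#define STALL_MIGHT_MIN (2 * FAKE_HZ)

/* Local analogue of the kernel's wrap-safe time_before(a, b). */
static bool fake_time_before(unsigned long a, unsigned long b)
{
	return (long)(a - b) < 0;
}

/*
 * Return true if a grace period that started at gp_start looks old
 * enough, relative to the stall timeout, to be treated as stalled.
 */
static bool fake_gp_might_be_stalled(unsigned long now, unsigned long gp_start,
				     unsigned long stall_timeout)
{
	unsigned long d = stall_timeout / STALL_MIGHT_DIV;

	if (d < STALL_MIGHT_MIN)
		d = STALL_MIGHT_MIN;	/* never report before 2*HZ of age */
	return !fake_time_before(now, gp_start + d);
}
```

With the default 21-second stall timeout the threshold is 21000/8 = 2625 fake jiffies, while an 8-second timeout would compute 1000 and get clamped up to the 2000-jiffy floor.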
 /* Don't do RCU CPU stall warnings during long sysrq printouts. */
 void rcu_sysrq_start(void)
 {
@@ -104,8 +136,8 @@ static void record_gp_stall_check_time(void)
 
 	WRITE_ONCE(rcu_state.gp_start, j);
 	j1 = rcu_jiffies_till_stall_check();
-	/* Record ->gp_start before ->jiffies_stall. */
-	smp_store_release(&rcu_state.jiffies_stall, j + j1); /* ^^^ */
+	smp_mb(); // ->gp_start before ->jiffies_stall and caller's ->gp_seq.
+	WRITE_ONCE(rcu_state.jiffies_stall, j + j1);
 	rcu_state.jiffies_resched = j + j1 / 2;
 	rcu_state.n_force_qs_gpstart = READ_ONCE(rcu_state.n_force_qs);
 }
@@ -192,14 +224,40 @@ static void rcu_print_detail_task_stall_rnp(struct rcu_node *rnp)
 	raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
 }
 
+// Communicate task state back to the RCU CPU stall warning request.
+struct rcu_stall_chk_rdr {
+	int nesting;
+	union rcu_special rs;
+	bool on_blkd_list;
+};
+
+/*
+ * Report out the state of a not-running task that is stalling the
+ * current RCU grace period.
+ */
+static bool check_slow_task(struct task_struct *t, void *arg)
+{
+	struct rcu_node *rnp;
+	struct rcu_stall_chk_rdr *rscrp = arg;
+
+	if (task_curr(t))
+		return false; // It is running, so decline to inspect it.
+	rscrp->nesting = t->rcu_read_lock_nesting;
+	rscrp->rs = t->rcu_read_unlock_special;
+	rnp = t->rcu_blocked_node;
+	rscrp->on_blkd_list = !list_empty(&t->rcu_node_entry);
+	return true;
+}
+
 /*
  * Scan the current list of tasks blocked within RCU read-side critical
  * sections, printing out the tid of each.
  */
 static int rcu_print_task_stall(struct rcu_node *rnp)
 {
-	struct task_struct *t;
 	int ndetected = 0;
+	struct rcu_stall_chk_rdr rscr;
+	struct task_struct *t;
+
 	if (!rcu_preempt_blocked_readers_cgp(rnp))
 		return 0;
@@ -208,7 +266,15 @@ static int rcu_print_task_stall(struct rcu_node *rnp)
 	t = list_entry(rnp->gp_tasks->prev,
 		       struct task_struct, rcu_node_entry);
 	list_for_each_entry_continue(t, &rnp->blkd_tasks, rcu_node_entry) {
-		pr_cont(" P%d", t->pid);
+		if (!try_invoke_on_locked_down_task(t, check_slow_task, &rscr))
+			pr_cont(" P%d", t->pid);
+		else
+			pr_cont(" P%d/%d:%c%c%c%c",
+				t->pid, rscr.nesting,
+				".b"[rscr.rs.b.blocked],
+				".q"[rscr.rs.b.need_qs],
+				".e"[rscr.rs.b.exp_hint],
+				".l"[rscr.on_blkd_list]);
 		ndetected++;
 	}
 	pr_cont("\n");
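The enriched per-task stall report builds its `P<pid>/<nesting>:bqel` summary with the kernel's `".x"[flag]` idiom: indexing a two-character string literal with a 0-or-1 flag yields '.' for false and the letter for true. A standalone illustration of that formatting step (the `fake_*` helper is invented; only the indexing trick comes from the diff):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/*
 * Format the four per-task flags the way rcu_print_task_stall() now
 * does: one character per flag, '.' when clear, a letter when set.
 */
static const char *fake_format_flags(bool blocked, bool need_qs,
				     bool exp_hint, bool on_blkd_list)
{
	static char buf[5];

	buf[0] = ".b"[blocked];		/* "b" = blocked on a rcu_node list */
	buf[1] = ".q"[need_qs];		/* "q" = quiescent state needed */
	buf[2] = ".e"[exp_hint];	/* "e" = expedited-GP hint set */
	buf[3] = ".l"[on_blkd_list];	/* "l" = still on the blkd list */
	buf[4] = '\0';
	return buf;
}
```

Indexing works because a string literal is an array: `".b"[0]` is '.', `".b"[1]` is 'b', so a boolean promoted to 0/1 selects exactly one of the two characters with no branch.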
@@ -299,6 +365,16 @@ static const char *gp_state_getname(short gs)
 	return gp_state_names[gs];
 }
 
+/* Is the RCU grace-period kthread being starved of CPU time? */
+static bool rcu_is_gp_kthread_starving(unsigned long *jp)
+{
+	unsigned long j = jiffies - READ_ONCE(rcu_state.gp_activity);
+
+	if (jp)
+		*jp = j;
+	return j > 2 * HZ;
+}
+
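rcu_is_gp_kthread_starving() folds the previously open-coded "older than 2*HZ" test into one helper that both answers the yes/no question and, when the caller passes a non-NULL pointer, reports the measured age, so rcu_check_gp_kthread_starvation() and print_cpu_stall_info() can share it. A userspace sketch of that optional-out-parameter shape (the `fake_*` names and `FAKE_HZ` are illustrative):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define FAKE_HZ 1000	/* jiffies per second, illustrative */

/*
 * Return true when the kthread has been inactive for more than 2*HZ;
 * optionally report the measured age through *jp.
 */
static bool fake_kthread_starving(unsigned long now, unsigned long last_activity,
				  unsigned long *jp)
{
	unsigned long j = now - last_activity;

	if (jp)
		*jp = j;	/* caller wants the age for its report */
	return j > 2 * FAKE_HZ;
}

/* Caller that needs the age as well as the verdict. */
static unsigned long demo_starvation_age(unsigned long now, unsigned long last)
{
	unsigned long j = 0;

	(void)fake_kthread_starving(now, last, &j);
	return j;
}
```

Callers that only need the boolean, as the new `falsepositive` computation does, just pass NULL and skip the extra bookkeeping.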
 /*
  * Print out diagnostic information for the specified stalled CPU.
  *
@@ -313,6 +389,7 @@ static const char *gp_state_getname(short gs)
 static void print_cpu_stall_info(int cpu)
 {
 	unsigned long delta;
+	bool falsepositive;
 	char fast_no_hz[72];
 	struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
 	char *ticks_title;
@@ -333,7 +410,9 @@ static void print_cpu_stall_info(int cpu)
 	}
 	print_cpu_stall_fast_no_hz(fast_no_hz, cpu);
 	delta = rcu_seq_ctr(rdp->mynode->gp_seq - rdp->rcu_iw_gp_seq);
-	pr_err("\t%d-%c%c%c%c: (%lu %s) idle=%03x/%ld/%#lx softirq=%u/%u fqs=%ld %s\n",
+	falsepositive = rcu_is_gp_kthread_starving(NULL) &&
+			rcu_dynticks_in_eqs(rcu_dynticks_snap(rdp));
+	pr_err("\t%d-%c%c%c%c: (%lu %s) idle=%03x/%ld/%#lx softirq=%u/%u fqs=%ld %s%s\n",
 	       cpu,
 	       "O."[!!cpu_online(cpu)],
 	       "o."[!!(rdp->grpmask & rdp->mynode->qsmaskinit)],
@@ -345,8 +424,9 @@ static void print_cpu_stall_info(int cpu)
 	       rcu_dynticks_snap(rdp) & 0xfff,
 	       rdp->dynticks_nesting, rdp->dynticks_nmi_nesting,
 	       rdp->softirq_snap, kstat_softirqs_cpu(RCU_SOFTIRQ, cpu),
-	       READ_ONCE(rcu_state.n_force_qs) - rcu_state.n_force_qs_gpstart,
-	       fast_no_hz);
+	       data_race(rcu_state.n_force_qs) - rcu_state.n_force_qs_gpstart,
+	       fast_no_hz,
+	       falsepositive ? " (false positive?)" : "");
 }
 
 /* Complain about starvation of grace-period kthread. */
@@ -355,8 +435,7 @@ static void rcu_check_gp_kthread_starvation(void)
 	struct task_struct *gpk = rcu_state.gp_kthread;
 	unsigned long j;
 
-	j = jiffies - READ_ONCE(rcu_state.gp_activity);
-	if (j > 2 * HZ) {
+	if (rcu_is_gp_kthread_starving(&j)) {
 		pr_err("%s kthread starved for %ld jiffies! g%ld f%#x %s(%d) ->state=%#lx ->cpu=%d\n",
 		       rcu_state.name, j,
 		       (long)rcu_seq_current(&rcu_state.gp_seq),
@@ -364,6 +443,7 @@ static void rcu_check_gp_kthread_starvation(void)
 		       gp_state_getname(rcu_state.gp_state), rcu_state.gp_state,
 		       gpk ? gpk->state : ~0, gpk ? task_cpu(gpk) : -1);
 		if (gpk) {
+			pr_err("\tUnless %s kthread gets sufficient CPU time, OOM is now expected behavior.\n", rcu_state.name);
 			pr_err("RCU grace-period kthread stack dump:\n");
 			sched_show_task(gpk);
 			wake_up_process(gpk);
@@ -426,8 +506,6 @@ static void print_other_cpu_stall(unsigned long gp_seq, unsigned long gps)
 			rcu_state.name, j - gpa, j, gpa,
 			data_race(jiffies_till_next_fqs),
 			rcu_get_root()->qsmask);
-		/* In this case, the current CPU might be at fault. */
-		sched_show_task(current);
 	}
 }
 /* Rewrite if needed in case of slow consoles. */
@@ -615,7 +693,7 @@ void show_rcu_gp_kthreads(void)
 		if (rcu_segcblist_is_offloaded(&rdp->cblist))
 			show_rcu_nocb_state(rdp);
 	}
-	/* sched_show_task(rcu_state.gp_kthread); */
+	show_rcu_tasks_gp_kthreads();
 }
 EXPORT_SYMBOL_GPL(show_rcu_gp_kthreads);
 
@@ -41,6 +41,7 @@
 #include <linux/sched/isolation.h>
 #include <linux/kprobes.h>
 #include <linux/slab.h>
+#include <linux/irq_work.h>
 
 #define CREATE_TRACE_POINTS
 
@@ -51,6 +52,19 @@
 #endif
 #define MODULE_PARAM_PREFIX "rcupdate."
 
|
||||||
|
#define data_race(expr) \
|
||||||
|
({ \
|
||||||
|
expr; \
|
||||||
|
})
|
||||||
|
#endif
|
||||||
|
#ifndef ASSERT_EXCLUSIVE_WRITER
|
||||||
|
#define ASSERT_EXCLUSIVE_WRITER(var) do { } while (0)
|
||||||
|
#endif
|
||||||
|
#ifndef ASSERT_EXCLUSIVE_ACCESS
|
||||||
|
#define ASSERT_EXCLUSIVE_ACCESS(var) do { } while (0)
|
||||||
|
#endif
|
||||||
|
|
||||||
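The `#ifndef data_race` block lets this file build against trees where KCSAN's data_race() annotation does not exist yet: the fallback expands to a statement expression that simply yields its argument, and the exclusivity assertions collapse to no-ops. A compile-testable sketch of the same fallback shape (userspace; the GCC/Clang statement-expression extension is assumed, and `shared_counter`/`read_counter` are invented names):

```c
#include <assert.h>

/* Fallback in the same spirit as the one added here: identity wrapper. */
#ifndef data_race
#define data_race(expr)							\
	({								\
		expr;							\
	})
#endif

/* No-op fallback for the KCSAN exclusivity assertion. */
#ifndef ASSERT_EXCLUSIVE_WRITER
#define ASSERT_EXCLUSIVE_WRITER(var) do { } while (0)
#endif

static int shared_counter = 41;

static int read_counter(void)
{
	ASSERT_EXCLUSIVE_WRITER(shared_counter);	/* no-op without KCSAN */
	return data_race(shared_counter + 1);		/* value passes through */
}
```

When KCSAN is present, the real data_race() additionally tells the race detector not to complain about this access; the fallback preserves only the value-yielding behavior, which is all that non-KCSAN builds need.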
#ifndef CONFIG_TINY_RCU
|
#ifndef CONFIG_TINY_RCU
|
||||||
module_param(rcu_expedited, int, 0);
|
module_param(rcu_expedited, int, 0);
|
||||||
module_param(rcu_normal, int, 0);
|
module_param(rcu_normal, int, 0);
|
||||||
@ -501,370 +515,6 @@ int rcu_cpu_stall_suppress_at_boot __read_mostly; // !0 = suppress boot stalls.
|
|||||||
EXPORT_SYMBOL_GPL(rcu_cpu_stall_suppress_at_boot);
|
EXPORT_SYMBOL_GPL(rcu_cpu_stall_suppress_at_boot);
|
||||||
module_param(rcu_cpu_stall_suppress_at_boot, int, 0444);
|
module_param(rcu_cpu_stall_suppress_at_boot, int, 0444);
|
||||||
|
|
||||||
#ifdef CONFIG_TASKS_RCU
|
|
||||||
|
|
||||||
/*
|
|
||||||
* Simple variant of RCU whose quiescent states are voluntary context
|
|
||||||
* switch, cond_resched_rcu_qs(), user-space execution, and idle.
|
|
||||||
* As such, grace periods can take one good long time. There are no
|
|
||||||
* read-side primitives similar to rcu_read_lock() and rcu_read_unlock()
|
|
||||||
* because this implementation is intended to get the system into a safe
|
|
||||||
* state for some of the manipulations involved in tracing and the like.
|
|
||||||
* Finally, this implementation does not support high call_rcu_tasks()
|
|
||||||
* rates from multiple CPUs. If this is required, per-CPU callback lists
|
|
||||||
* will be needed.
|
|
||||||
*/
|
|
||||||
|
|
||||||
/* Global list of callbacks and associated lock. */
|
|
||||||
static struct rcu_head *rcu_tasks_cbs_head;
|
|
||||||
static struct rcu_head **rcu_tasks_cbs_tail = &rcu_tasks_cbs_head;
|
|
||||||
static DECLARE_WAIT_QUEUE_HEAD(rcu_tasks_cbs_wq);
|
|
||||||
static DEFINE_RAW_SPINLOCK(rcu_tasks_cbs_lock);
|
|
||||||
|
|
||||||
/* Track exiting tasks in order to allow them to be waited for. */
|
|
||||||
DEFINE_STATIC_SRCU(tasks_rcu_exit_srcu);
|
|
||||||
|
|
||||||
/* Control stall timeouts. Disable with <= 0, otherwise jiffies till stall. */
|
|
||||||
#define RCU_TASK_STALL_TIMEOUT (HZ * 60 * 10)
|
|
||||||
static int rcu_task_stall_timeout __read_mostly = RCU_TASK_STALL_TIMEOUT;
|
|
||||||
module_param(rcu_task_stall_timeout, int, 0644);
|
|
||||||
|
|
||||||
static struct task_struct *rcu_tasks_kthread_ptr;
|
|
||||||
|
|
||||||
/**
|
|
||||||
* call_rcu_tasks() - Queue an RCU for invocation task-based grace period
|
|
||||||
* @rhp: structure to be used for queueing the RCU updates.
|
|
||||||
* @func: actual callback function to be invoked after the grace period
|
|
||||||
*
|
|
||||||
* The callback function will be invoked some time after a full grace
|
|
||||||
* period elapses, in other words after all currently executing RCU
|
|
||||||
* read-side critical sections have completed. call_rcu_tasks() assumes
|
|
||||||
* that the read-side critical sections end at a voluntary context
|
|
||||||
* switch (not a preemption!), cond_resched_rcu_qs(), entry into idle,
|
|
||||||
* or transition to usermode execution. As such, there are no read-side
|
|
||||||
* primitives analogous to rcu_read_lock() and rcu_read_unlock() because
|
|
||||||
* this primitive is intended to determine that all tasks have passed
|
|
||||||
* through a safe state, not so much for data-strcuture synchronization.
|
|
||||||
*
|
|
||||||
* See the description of call_rcu() for more detailed information on
|
|
||||||
* memory ordering guarantees.
|
|
||||||
*/
|
|
||||||
void call_rcu_tasks(struct rcu_head *rhp, rcu_callback_t func)
|
|
||||||
{
|
|
||||||
unsigned long flags;
|
|
||||||
bool needwake;
|
|
||||||
|
|
||||||
rhp->next = NULL;
|
|
||||||
rhp->func = func;
|
|
||||||
raw_spin_lock_irqsave(&rcu_tasks_cbs_lock, flags);
|
|
||||||
needwake = !rcu_tasks_cbs_head;
|
|
||||||
WRITE_ONCE(*rcu_tasks_cbs_tail, rhp);
|
|
||||||
rcu_tasks_cbs_tail = &rhp->next;
|
|
||||||
raw_spin_unlock_irqrestore(&rcu_tasks_cbs_lock, flags);
|
|
||||||
/* We can't create the thread unless interrupts are enabled. */
|
|
||||||
if (needwake && READ_ONCE(rcu_tasks_kthread_ptr))
|
|
||||||
wake_up(&rcu_tasks_cbs_wq);
|
|
||||||
}
|
|
||||||
EXPORT_SYMBOL_GPL(call_rcu_tasks);
|
|
||||||
|
|
||||||
/**
|
|
||||||
* synchronize_rcu_tasks - wait until an rcu-tasks grace period has elapsed.
|
|
||||||
*
|
|
||||||
* Control will return to the caller some time after a full rcu-tasks
|
|
||||||
* grace period has elapsed, in other words after all currently
|
|
||||||
* executing rcu-tasks read-side critical sections have elapsed. These
|
|
||||||
* read-side critical sections are delimited by calls to schedule(),
|
|
||||||
* cond_resched_tasks_rcu_qs(), idle execution, userspace execution, calls
|
|
||||||
 * to synchronize_rcu_tasks(), and (in theory, anyway) cond_resched().
 *
 * This is a very specialized primitive, intended only for a few uses in
 * tracing and other situations requiring manipulation of function
 * preambles and profiling hooks.  The synchronize_rcu_tasks() function
 * is not (yet) intended for heavy use from multiple CPUs.
 *
 * Note that this guarantee implies further memory-ordering guarantees.
 * On systems with more than one CPU, when synchronize_rcu_tasks() returns,
 * each CPU is guaranteed to have executed a full memory barrier since the
 * end of its last RCU-tasks read-side critical section whose beginning
 * preceded the call to synchronize_rcu_tasks().  In addition, each CPU
 * having an RCU-tasks read-side critical section that extends beyond
 * the return from synchronize_rcu_tasks() is guaranteed to have executed
 * a full memory barrier after the beginning of synchronize_rcu_tasks()
 * and before the beginning of that RCU-tasks read-side critical section.
 * Note that these guarantees include CPUs that are offline, idle, or
 * executing in user mode, as well as CPUs that are executing in the kernel.
 *
 * Furthermore, if CPU A invoked synchronize_rcu_tasks(), which returned
 * to its caller on CPU B, then both CPU A and CPU B are guaranteed
 * to have executed a full memory barrier during the execution of
 * synchronize_rcu_tasks() -- even if CPU A and CPU B are the same CPU
 * (but again only if the system has more than one CPU).
 */
void synchronize_rcu_tasks(void)
{
	/* Complain if the scheduler has not started. */
	RCU_LOCKDEP_WARN(rcu_scheduler_active == RCU_SCHEDULER_INACTIVE,
			 "synchronize_rcu_tasks called too soon");

	/* Wait for the grace period. */
	wait_rcu_gp(call_rcu_tasks);
}
EXPORT_SYMBOL_GPL(synchronize_rcu_tasks);

/**
 * rcu_barrier_tasks - Wait for in-flight call_rcu_tasks() callbacks.
 *
 * Although the current implementation is guaranteed to wait, it is not
 * obligated to, for example, if there are no pending callbacks.
 */
void rcu_barrier_tasks(void)
{
	/* There is only one callback queue, so this is easy.  ;-) */
	synchronize_rcu_tasks();
}
EXPORT_SYMBOL_GPL(rcu_barrier_tasks);

/* See if tasks are still holding out, complain if so. */
static void check_holdout_task(struct task_struct *t,
			       bool needreport, bool *firstreport)
{
	int cpu;

	if (!READ_ONCE(t->rcu_tasks_holdout) ||
	    t->rcu_tasks_nvcsw != READ_ONCE(t->nvcsw) ||
	    !READ_ONCE(t->on_rq) ||
	    (IS_ENABLED(CONFIG_NO_HZ_FULL) &&
	     !is_idle_task(t) && t->rcu_tasks_idle_cpu >= 0)) {
		WRITE_ONCE(t->rcu_tasks_holdout, false);
		list_del_init(&t->rcu_tasks_holdout_list);
		put_task_struct(t);
		return;
	}
	rcu_request_urgent_qs_task(t);
	if (!needreport)
		return;
	if (*firstreport) {
		pr_err("INFO: rcu_tasks detected stalls on tasks:\n");
		*firstreport = false;
	}
	cpu = task_cpu(t);
	pr_alert("%p: %c%c nvcsw: %lu/%lu holdout: %d idle_cpu: %d/%d\n",
		 t, ".I"[is_idle_task(t)],
		 "N."[cpu < 0 || !tick_nohz_full_cpu(cpu)],
		 t->rcu_tasks_nvcsw, t->nvcsw, t->rcu_tasks_holdout,
		 t->rcu_tasks_idle_cpu, cpu);
	sched_show_task(t);
}

/* RCU-tasks kthread that detects grace periods and invokes callbacks. */
static int __noreturn rcu_tasks_kthread(void *arg)
{
	unsigned long flags;
	struct task_struct *g, *t;
	unsigned long lastreport;
	struct rcu_head *list;
	struct rcu_head *next;
	LIST_HEAD(rcu_tasks_holdouts);
	int fract;

	/* Run on housekeeping CPUs by default.  Sysadm can move if desired. */
	housekeeping_affine(current, HK_FLAG_RCU);

	/*
	 * Each pass through the following loop makes one check for
	 * newly arrived callbacks, and, if there are some, waits for
	 * one RCU-tasks grace period and then invokes the callbacks.
	 * This loop is terminated by the system going down.  ;-)
	 */
	for (;;) {

		/* Pick up any new callbacks. */
		raw_spin_lock_irqsave(&rcu_tasks_cbs_lock, flags);
		list = rcu_tasks_cbs_head;
		rcu_tasks_cbs_head = NULL;
		rcu_tasks_cbs_tail = &rcu_tasks_cbs_head;
		raw_spin_unlock_irqrestore(&rcu_tasks_cbs_lock, flags);

		/* If there were none, wait a bit and start over. */
		if (!list) {
			wait_event_interruptible(rcu_tasks_cbs_wq,
						 READ_ONCE(rcu_tasks_cbs_head));
			if (!rcu_tasks_cbs_head) {
				WARN_ON(signal_pending(current));
				schedule_timeout_interruptible(HZ/10);
			}
			continue;
		}

		/*
		 * Wait for all pre-existing t->on_rq and t->nvcsw
		 * transitions to complete.  Invoking synchronize_rcu()
		 * suffices because all these transitions occur with
		 * interrupts disabled.  Without this synchronize_rcu(),
		 * a read-side critical section that started before the
		 * grace period might be incorrectly seen as having started
		 * after the grace period.
		 *
		 * This synchronize_rcu() also dispenses with the
		 * need for a memory barrier on the first store to
		 * ->rcu_tasks_holdout, as it forces the store to happen
		 * after the beginning of the grace period.
		 */
		synchronize_rcu();

		/*
		 * There were callbacks, so we need to wait for an
		 * RCU-tasks grace period.  Start off by scanning
		 * the task list for tasks that are not already
		 * voluntarily blocked.  Mark these tasks and make
		 * a list of them in rcu_tasks_holdouts.
		 */
		rcu_read_lock();
		for_each_process_thread(g, t) {
			if (t != current && READ_ONCE(t->on_rq) &&
			    !is_idle_task(t)) {
				get_task_struct(t);
				t->rcu_tasks_nvcsw = READ_ONCE(t->nvcsw);
				WRITE_ONCE(t->rcu_tasks_holdout, true);
				list_add(&t->rcu_tasks_holdout_list,
					 &rcu_tasks_holdouts);
			}
		}
		rcu_read_unlock();

		/*
		 * Wait for tasks that are in the process of exiting.
		 * This does only part of the job, ensuring that all
		 * tasks that were previously exiting reach the point
		 * where they have disabled preemption, allowing the
		 * later synchronize_rcu() to finish the job.
		 */
		synchronize_srcu(&tasks_rcu_exit_srcu);

		/*
		 * Each pass through the following loop scans the list
		 * of holdout tasks, removing any that are no longer
		 * holdouts.  When the list is empty, we are done.
		 */
		lastreport = jiffies;

		/* Start off with HZ/10 wait and slowly back off to 1 HZ wait */
		fract = 10;

		for (;;) {
			bool firstreport;
			bool needreport;
			int rtst;
			struct task_struct *t1;

			if (list_empty(&rcu_tasks_holdouts))
				break;

			/* Slowly back off waiting for holdouts */
			schedule_timeout_interruptible(HZ/fract);

			if (fract > 1)
				fract--;

			rtst = READ_ONCE(rcu_task_stall_timeout);
			needreport = rtst > 0 &&
				     time_after(jiffies, lastreport + rtst);
			if (needreport)
				lastreport = jiffies;
			firstreport = true;
			WARN_ON(signal_pending(current));
			list_for_each_entry_safe(t, t1, &rcu_tasks_holdouts,
						 rcu_tasks_holdout_list) {
				check_holdout_task(t, needreport, &firstreport);
				cond_resched();
			}
		}

		/*
		 * Because ->on_rq and ->nvcsw are not guaranteed
		 * to have full memory barriers prior to them in the
		 * schedule() path, memory reordering on other CPUs could
		 * cause their RCU-tasks read-side critical sections to
		 * extend past the end of the grace period.  However,
		 * because these ->nvcsw updates are carried out with
		 * interrupts disabled, we can use synchronize_rcu()
		 * to force the needed ordering on all such CPUs.
		 *
		 * This synchronize_rcu() also confines all
		 * ->rcu_tasks_holdout accesses to be within the grace
		 * period, avoiding the need for memory barriers for
		 * ->rcu_tasks_holdout accesses.
		 *
		 * In addition, this synchronize_rcu() waits for exiting
		 * tasks to complete their final preempt_disable() region
		 * of execution, cleaning up after the synchronize_srcu()
		 * above.
		 */
		synchronize_rcu();

		/* Invoke the callbacks. */
		while (list) {
			next = list->next;
			local_bh_disable();
			list->func(list);
			local_bh_enable();
			list = next;
			cond_resched();
		}
		/* Paranoid sleep to keep this from entering a tight loop */
		schedule_timeout_uninterruptible(HZ/10);
	}
}

/* Spawn rcu_tasks_kthread() at core_initcall() time. */
static int __init rcu_spawn_tasks_kthread(void)
{
	struct task_struct *t;

	t = kthread_run(rcu_tasks_kthread, NULL, "rcu_tasks_kthread");
	if (WARN_ONCE(IS_ERR(t), "%s: Could not start Tasks-RCU grace-period kthread, OOM is now expected behavior\n", __func__))
		return 0;
	smp_mb(); /* Ensure others see full kthread. */
	WRITE_ONCE(rcu_tasks_kthread_ptr, t);
	return 0;
}
core_initcall(rcu_spawn_tasks_kthread);

/* Do the srcu_read_lock() for the above synchronize_srcu(). */
void exit_tasks_rcu_start(void) __acquires(&tasks_rcu_exit_srcu)
{
	preempt_disable();
	current->rcu_tasks_idx = __srcu_read_lock(&tasks_rcu_exit_srcu);
	preempt_enable();
}

/* Do the srcu_read_unlock() for the above synchronize_srcu(). */
void exit_tasks_rcu_finish(void) __releases(&tasks_rcu_exit_srcu)
{
	preempt_disable();
	__srcu_read_unlock(&tasks_rcu_exit_srcu, current->rcu_tasks_idx);
	preempt_enable();
}

#endif /* #ifdef CONFIG_TASKS_RCU */

#ifndef CONFIG_TINY_RCU

/*
 * Print any non-default Tasks RCU settings.
 */
static void __init rcu_tasks_bootup_oddness(void)
{
#ifdef CONFIG_TASKS_RCU
	if (rcu_task_stall_timeout != RCU_TASK_STALL_TIMEOUT)
		pr_info("\tTasks-RCU CPU stall warnings timeout set to %d (rcu_task_stall_timeout).\n", rcu_task_stall_timeout);
	else
		pr_info("\tTasks RCU enabled.\n");
#endif /* #ifdef CONFIG_TASKS_RCU */
}

#endif /* #ifndef CONFIG_TINY_RCU */

#ifdef CONFIG_PROVE_RCU

/*
@@ -935,6 +585,8 @@ late_initcall(rcu_verify_early_boot_tests);
 void rcu_early_boot_tests(void) {}
 #endif /* CONFIG_PROVE_RCU */
 
+#include "tasks.h"
 
 #ifndef CONFIG_TINY_RCU
 
 /*
@@ -2566,6 +2566,8 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
	 *
	 * Pairs with the LOCK+smp_mb__after_spinlock() on rq->lock in
	 * __schedule().  See the comment for smp_mb__after_spinlock().
+	 *
+	 * A similar smp_rmb() lives in try_invoke_on_locked_down_task().
	 */
	smp_rmb();
	if (p->on_rq && ttwu_remote(p, wake_flags))
@@ -2639,6 +2641,52 @@ out:
 	return success;
 }
 
+/**
+ * try_invoke_on_locked_down_task - Invoke a function on task in fixed state
+ * @p: Process for which the function is to be invoked.
+ * @func: Function to invoke.
+ * @arg: Argument to function.
+ *
+ * If the specified task can be quickly locked into a definite state
+ * (either sleeping or on a given runqueue), arrange to keep it in that
+ * state while invoking @func(@arg).  This function can use ->on_rq and
+ * task_curr() to work out what the state is, if required.  Given that
+ * @func can be invoked with a runqueue lock held, it had better be quite
+ * lightweight.
+ *
+ * Returns:
+ *	@false if the task slipped out from under the locks.
+ *	@true if the task was locked onto a runqueue or is sleeping.
+ *		However, @func can override this by returning @false.
+ */
+bool try_invoke_on_locked_down_task(struct task_struct *p, bool (*func)(struct task_struct *t, void *arg), void *arg)
+{
+	bool ret = false;
+	struct rq_flags rf;
+	struct rq *rq;
+
+	lockdep_assert_irqs_enabled();
+	raw_spin_lock_irq(&p->pi_lock);
+	if (p->on_rq) {
+		rq = __task_rq_lock(p, &rf);
+		if (task_rq(p) == rq)
+			ret = func(p, arg);
+		rq_unlock(rq, &rf);
+	} else {
+		switch (p->state) {
+		case TASK_RUNNING:
+		case TASK_WAKING:
+			break;
+		default:
+			smp_rmb(); // See smp_rmb() comment in try_to_wake_up().
+			if (!p->on_rq)
+				ret = func(p, arg);
+		}
+	}
+	raw_spin_unlock_irq(&p->pi_lock);
+	return ret;
+}
 
 /**
  * wake_up_process - Wake up a specific process
  * @p: The process to be woken up.

@@ -158,6 +158,7 @@ config FUNCTION_TRACER
 	select CONTEXT_SWITCH_TRACER
 	select GLOB
 	select TASKS_RCU if PREEMPTION
+	select TASKS_RUDE_RCU
 	help
 	  Enable the kernel to trace every kernel function. This is done
 	  by using a compiler feature to insert a small, 5-byte No-Operation
@@ -160,17 +160,6 @@ static void ftrace_pid_func(unsigned long ip, unsigned long parent_ip,
 	op->saved_func(ip, parent_ip, op, regs);
 }
 
-static void ftrace_sync(struct work_struct *work)
-{
-	/*
-	 * This function is just a stub to implement a hard force
-	 * of synchronize_rcu(). This requires synchronizing
-	 * tasks even in userspace and idle.
-	 *
-	 * Yes, function tracing is rude.
-	 */
-}
-
 static void ftrace_sync_ipi(void *data)
 {
 	/* Probably not needed, but do it anyway */
@@ -256,7 +245,7 @@ static void update_ftrace_function(void)
 	 * Make sure all CPUs see this. Yes this is slow, but static
 	 * tracing is slow and nasty to have enabled.
 	 */
-	schedule_on_each_cpu(ftrace_sync);
+	synchronize_rcu_tasks_rude();
 	/* Now all cpus are using the list ops. */
 	function_trace_op = set_function_trace_op;
 	/* Make sure the function_trace_op is visible on all CPUs */
@@ -2932,7 +2921,7 @@ int ftrace_shutdown(struct ftrace_ops *ops, int command)
 		 * infrastructure to do the synchronization, thus we must do it
 		 * ourselves.
 		 */
-		schedule_on_each_cpu(ftrace_sync);
+		synchronize_rcu_tasks_rude();
 
 		/*
 		 * When the kernel is preemptive, tasks can be preempted
@@ -5887,7 +5876,7 @@ ftrace_graph_release(struct inode *inode, struct file *file)
 		 * infrastructure to do the synchronization, thus we must do it
 		 * ourselves.
 		 */
-		schedule_on_each_cpu(ftrace_sync);
+		synchronize_rcu_tasks_rude();
 
 		free_ftrace_hash(old_hash);
 	}

tools/testing/selftests/rcutorture/bin/kcsan-collapse.sh (new executable file, 22 lines)
@@ -0,0 +1,22 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0+
+#
+# If this was a KCSAN run, collapse the reports in the various console.log
+# files onto pairs of functions.
+#
+# Usage: kcsan-collapse.sh resultsdir
+#
+# Copyright (C) 2020 Facebook, Inc.
+#
+# Authors: Paul E. McKenney <paulmck@kernel.org>
+
+if test -z "$TORTURE_KCONFIG_KCSAN_ARG"
+then
+	exit 0
+fi
+cat $1/*/console.log |
+	grep "BUG: KCSAN: " |
+	sed -e 's/^\[[^]]*] //' |
+	sort |
+	uniq -c |
+	sort -k1nr > $1/kcsan.sum
@@ -41,7 +41,21 @@ else
 	title="$title ($ngpsps/s)"
 fi
 echo $title $stopstate $fwdprog
-nclosecalls=`grep --binary-files=text 'torture: Reader Batch' $i/console.log | tail -1 | awk '{for (i=NF-8;i<=NF;i++) sum+=$i; } END {print sum}'`
+nclosecalls=`grep --binary-files=text 'torture: Reader Batch' $i/console.log | tail -1 | \
+	awk -v sum=0 '
+	{
+		for (i = 0; i <= NF; i++) {
+			sum += $i;
+			if ($i ~ /Batch:/) {
+				sum = 0;
+				i = i + 2;
+			}
+		}
+	}
+
+	END {
+		print sum
+	}'`
 if test -z "$nclosecalls"
 then
 	exit 0
@@ -70,6 +70,15 @@ do
 			fi
 		fi
 	done
+	if test -f "$rd/kcsan.sum"
+	then
+		if test -s "$rd/kcsan.sum"
+		then
+			echo KCSAN summary in $rd/kcsan.sum
+		else
+			echo Clean KCSAN run in $rd
+		fi
+	fi
 done
 EDITOR=echo kvm-find-errors.sh "${@: -1}" > $T 2>&1
 ret=$?
@@ -44,30 +44,32 @@ then
 fi
 echo ' ---' `date`: Starting build
 echo ' ---' Kconfig fragment at: $config_template >> $resdir/log
-touch $resdir/ConfigFragment.input $resdir/ConfigFragment
-if test -r "$config_dir/CFcommon"
-then
-	echo " --- $config_dir/CFcommon" >> $resdir/ConfigFragment.input
-	cat < $config_dir/CFcommon >> $resdir/ConfigFragment.input
-	config_override.sh $config_dir/CFcommon $config_template > $T/Kc1
-	grep '#CHECK#' $config_dir/CFcommon >> $resdir/ConfigFragment
-else
-	cp $config_template $T/Kc1
-fi
-echo " --- $config_template" >> $resdir/ConfigFragment.input
-cat $config_template >> $resdir/ConfigFragment.input
-grep '#CHECK#' $config_template >> $resdir/ConfigFragment
-if test -n "$TORTURE_KCONFIG_ARG"
-then
-	echo $TORTURE_KCONFIG_ARG | tr -s " " "\012" > $T/cmdline
-	echo " --- --kconfig argument" >> $resdir/ConfigFragment.input
-	cat $T/cmdline >> $resdir/ConfigFragment.input
-	config_override.sh $T/Kc1 $T/cmdline > $T/Kc2
-	# Note that "#CHECK#" is not permitted on commandline.
-else
-	cp $T/Kc1 $T/Kc2
-fi
-cat $T/Kc2 >> $resdir/ConfigFragment
+touch $resdir/ConfigFragment.input
+
+# Combine additional Kconfig options into an existing set such that
+# newer options win.  The first argument is the Kconfig source ID, the
+# second the to-be-updated file within $T, and the third and final the
+# list of additional Kconfig options.  Note that a $2.tmp file is
+# created when doing the update.
+config_override_param () {
+	if test -n "$3"
+	then
+		echo $3 | sed -e 's/^ *//' -e 's/ *$//' | tr -s " " "\012" > $T/Kconfig_args
+		echo " --- $1" >> $resdir/ConfigFragment.input
+		cat $T/Kconfig_args >> $resdir/ConfigFragment.input
+		config_override.sh $T/$2 $T/Kconfig_args > $T/$2.tmp
+		mv $T/$2.tmp $T/$2
+		# Note that "#CHECK#" is not permitted on commandline.
+	fi
+}
+
+echo > $T/KcList
+config_override_param "$config_dir/CFcommon" KcList "`cat $config_dir/CFcommon 2> /dev/null`"
+config_override_param "$config_template" KcList "`cat $config_template 2> /dev/null`"
+config_override_param "--kasan options" KcList "$TORTURE_KCONFIG_KASAN_ARG"
+config_override_param "--kcsan options" KcList "$TORTURE_KCONFIG_KCSAN_ARG"
+config_override_param "--kconfig argument" KcList "$TORTURE_KCONFIG_ARG"
+cp $T/KcList $resdir/ConfigFragment
 
 base_resdir=`echo $resdir | sed -e 's/\.[0-9]\+$//'`
 if test "$base_resdir" != "$resdir" -a -f $base_resdir/bzImage -a -f $base_resdir/vmlinux
@@ -80,7 +82,7 @@ then
 	ln -s $base_resdir/.config $resdir # for kvm-recheck.sh
 	# Arch-independent indicator
 	touch $resdir/builtkernel
-elif kvm-build.sh $T/Kc2 $resdir
+elif kvm-build.sh $T/KcList $resdir
 then
 	# Had to build a kernel for this test.
 	QEMU="`identify_qemu vmlinux`"
@@ -31,6 +31,8 @@ TORTURE_DEFCONFIG=defconfig
 TORTURE_BOOT_IMAGE=""
 TORTURE_INITRD="$KVM/initrd"; export TORTURE_INITRD
 TORTURE_KCONFIG_ARG=""
+TORTURE_KCONFIG_KASAN_ARG=""
+TORTURE_KCONFIG_KCSAN_ARG=""
 TORTURE_KMAKE_ARG=""
 TORTURE_QEMU_MEM=512
 TORTURE_SHUTDOWN_GRACE=180
@@ -133,6 +135,12 @@ do
 		TORTURE_KCONFIG_ARG="$2"
 		shift
 		;;
+	--kasan)
+		TORTURE_KCONFIG_KASAN_ARG="CONFIG_DEBUG_INFO=y CONFIG_KASAN=y"; export TORTURE_KCONFIG_KASAN_ARG
+		;;
+	--kcsan)
+		TORTURE_KCONFIG_KCSAN_ARG="CONFIG_DEBUG_INFO=y CONFIG_KCSAN=y CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=n CONFIG_KCSAN_REPORT_VALUE_CHANGE_ONLY=n CONFIG_KCSAN_REPORT_ONCE_IN_MS=100000 CONFIG_KCSAN_VERBOSE=y CONFIG_KCSAN_INTERRUPT_WATCHER=y"; export TORTURE_KCONFIG_KCSAN_ARG
+		;;
 	--kmake-arg)
 		checkarg --kmake-arg "(kernel make arguments)" $# "$2" '.*' '^error$'
 		TORTURE_KMAKE_ARG="$2"
@@ -310,6 +318,8 @@ TORTURE_BUILDONLY="$TORTURE_BUILDONLY"; export TORTURE_BUILDONLY
 TORTURE_DEFCONFIG="$TORTURE_DEFCONFIG"; export TORTURE_DEFCONFIG
 TORTURE_INITRD="$TORTURE_INITRD"; export TORTURE_INITRD
 TORTURE_KCONFIG_ARG="$TORTURE_KCONFIG_ARG"; export TORTURE_KCONFIG_ARG
+TORTURE_KCONFIG_KASAN_ARG="$TORTURE_KCONFIG_KASAN_ARG"; export TORTURE_KCONFIG_KASAN_ARG
+TORTURE_KCONFIG_KCSAN_ARG="$TORTURE_KCONFIG_KCSAN_ARG"; export TORTURE_KCONFIG_KCSAN_ARG
 TORTURE_KMAKE_ARG="$TORTURE_KMAKE_ARG"; export TORTURE_KMAKE_ARG
 TORTURE_QEMU_CMD="$TORTURE_QEMU_CMD"; export TORTURE_QEMU_CMD
 TORTURE_QEMU_INTERACTIVE="$TORTURE_QEMU_INTERACTIVE"; export TORTURE_QEMU_INTERACTIVE
@@ -464,6 +474,7 @@ echo
 echo
 echo " --- `date` Test summary:"
 echo Results directory: $resdir/$ds
+kcsan-collapse.sh $resdir/$ds
 kvm-recheck.sh $resdir/$ds
___EOF___
@@ -14,3 +14,6 @@ TINY02
 TASKS01
 TASKS02
 TASKS03
+RUDE01
+TRACE01
+TRACE02

tools/testing/selftests/rcutorture/configs/rcu/RUDE01 (new file, 10 lines)
@@ -0,0 +1,10 @@
+CONFIG_SMP=y
+CONFIG_NR_CPUS=2
+CONFIG_HOTPLUG_CPU=y
+CONFIG_PREEMPT_NONE=n
+CONFIG_PREEMPT_VOLUNTARY=n
+CONFIG_PREEMPT=y
+CONFIG_DEBUG_LOCK_ALLOC=y
+CONFIG_PROVE_LOCKING=y
+#CHECK#CONFIG_PROVE_RCU=y
+CONFIG_RCU_EXPERT=y

@@ -0,0 +1 @@
+rcutorture.torture_type=tasks-rude

tools/testing/selftests/rcutorture/configs/rcu/TRACE01 (new file, 11 lines)
@@ -0,0 +1,11 @@
+CONFIG_SMP=y
+CONFIG_NR_CPUS=4
+CONFIG_HOTPLUG_CPU=y
+CONFIG_PREEMPT_NONE=y
+CONFIG_PREEMPT_VOLUNTARY=n
+CONFIG_PREEMPT=n
+CONFIG_DEBUG_LOCK_ALLOC=y
+CONFIG_PROVE_LOCKING=y
+#CHECK#CONFIG_PROVE_RCU=y
+CONFIG_TASKS_TRACE_RCU_READ_MB=y
+CONFIG_RCU_EXPERT=y

@@ -0,0 +1 @@
+rcutorture.torture_type=tasks-tracing

tools/testing/selftests/rcutorture/configs/rcu/TRACE02 (new file, 11 lines)
@@ -0,0 +1,11 @@
+CONFIG_SMP=y
+CONFIG_NR_CPUS=4
+CONFIG_HOTPLUG_CPU=y
+CONFIG_PREEMPT_NONE=n
+CONFIG_PREEMPT_VOLUNTARY=n
+CONFIG_PREEMPT=y
+CONFIG_DEBUG_LOCK_ALLOC=n
+CONFIG_PROVE_LOCKING=n
+#CHECK#CONFIG_PROVE_RCU=n
+CONFIG_TASKS_TRACE_RCU_READ_MB=n
+CONFIG_RCU_EXPERT=y

@@ -0,0 +1 @@
+rcutorture.torture_type=tasks-tracing

@@ -1,5 +1,5 @@
 CONFIG_SMP=y
-CONFIG_NR_CPUS=100
+CONFIG_NR_CPUS=56
 CONFIG_PREEMPT_NONE=y
 CONFIG_PREEMPT_VOLUNTARY=n
 CONFIG_PREEMPT=n