/*
 * Read-Copy Update mechanism for mutual exclusion (tree-based version)
 * Internal non-public definitions.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software
 * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
 *
 * Copyright IBM Corporation, 2008
 *
 * Author: Ingo Molnar <mingo@elte.hu>
 *	   Paul E. McKenney <paulmck@linux.vnet.ibm.com>
 */
#include <linux/cache.h>
#include <linux/spinlock.h>
#include <linux/threads.h>
#include <linux/cpumask.h>
#include <linux/seqlock.h>
rcu: Don't call wakeup() with rcu_node structure ->lock held
This commit fixes a lockdep-detected deadlock by moving a wake_up()
call out from a rnp->lock critical section. Please see below for
the long version of this story.
On Tue, 2013-05-28 at 16:13 -0400, Dave Jones wrote:
> [12572.705832] ======================================================
> [12572.750317] [ INFO: possible circular locking dependency detected ]
> [12572.796978] 3.10.0-rc3+ #39 Not tainted
> [12572.833381] -------------------------------------------------------
> [12572.862233] trinity-child17/31341 is trying to acquire lock:
> [12572.870390] (rcu_node_0){..-.-.}, at: [<ffffffff811054ff>] rcu_read_unlock_special+0x9f/0x4c0
> [12572.878859]
> but task is already holding lock:
> [12572.894894] (&ctx->lock){-.-...}, at: [<ffffffff811390ed>] perf_lock_task_context+0x7d/0x2d0
> [12572.903381]
> which lock already depends on the new lock.
>
> [12572.927541]
> the existing dependency chain (in reverse order) is:
> [12572.943736]
> -> #4 (&ctx->lock){-.-...}:
> [12572.960032] [<ffffffff810b9851>] lock_acquire+0x91/0x1f0
> [12572.968337] [<ffffffff816ebc90>] _raw_spin_lock+0x40/0x80
> [12572.976633] [<ffffffff8113c987>] __perf_event_task_sched_out+0x2e7/0x5e0
> [12572.984969] [<ffffffff81088953>] perf_event_task_sched_out+0x93/0xa0
> [12572.993326] [<ffffffff816ea0bf>] __schedule+0x2cf/0x9c0
> [12573.001652] [<ffffffff816eacfe>] schedule_user+0x2e/0x70
> [12573.009998] [<ffffffff816ecd64>] retint_careful+0x12/0x2e
> [12573.018321]
> -> #3 (&rq->lock){-.-.-.}:
> [12573.034628] [<ffffffff810b9851>] lock_acquire+0x91/0x1f0
> [12573.042930] [<ffffffff816ebc90>] _raw_spin_lock+0x40/0x80
> [12573.051248] [<ffffffff8108e6a7>] wake_up_new_task+0xb7/0x260
> [12573.059579] [<ffffffff810492f5>] do_fork+0x105/0x470
> [12573.067880] [<ffffffff81049686>] kernel_thread+0x26/0x30
> [12573.076202] [<ffffffff816cee63>] rest_init+0x23/0x140
> [12573.084508] [<ffffffff81ed8e1f>] start_kernel+0x3f1/0x3fe
> [12573.092852] [<ffffffff81ed856f>] x86_64_start_reservations+0x2a/0x2c
> [12573.101233] [<ffffffff81ed863d>] x86_64_start_kernel+0xcc/0xcf
> [12573.109528]
> -> #2 (&p->pi_lock){-.-.-.}:
> [12573.125675] [<ffffffff810b9851>] lock_acquire+0x91/0x1f0
> [12573.133829] [<ffffffff816ebe9b>] _raw_spin_lock_irqsave+0x4b/0x90
> [12573.141964] [<ffffffff8108e881>] try_to_wake_up+0x31/0x320
> [12573.150065] [<ffffffff8108ebe2>] default_wake_function+0x12/0x20
> [12573.158151] [<ffffffff8107bbf8>] autoremove_wake_function+0x18/0x40
> [12573.166195] [<ffffffff81085398>] __wake_up_common+0x58/0x90
> [12573.174215] [<ffffffff81086909>] __wake_up+0x39/0x50
> [12573.182146] [<ffffffff810fc3da>] rcu_start_gp_advanced.isra.11+0x4a/0x50
> [12573.190119] [<ffffffff810fdb09>] rcu_start_future_gp+0x1c9/0x1f0
> [12573.198023] [<ffffffff810fe2c4>] rcu_nocb_kthread+0x114/0x930
> [12573.205860] [<ffffffff8107a91d>] kthread+0xed/0x100
> [12573.213656] [<ffffffff816f4b1c>] ret_from_fork+0x7c/0xb0
> [12573.221379]
> -> #1 (&rsp->gp_wq){..-.-.}:
> [12573.236329] [<ffffffff810b9851>] lock_acquire+0x91/0x1f0
> [12573.243783] [<ffffffff816ebe9b>] _raw_spin_lock_irqsave+0x4b/0x90
> [12573.251178] [<ffffffff810868f3>] __wake_up+0x23/0x50
> [12573.258505] [<ffffffff810fc3da>] rcu_start_gp_advanced.isra.11+0x4a/0x50
> [12573.265891] [<ffffffff810fdb09>] rcu_start_future_gp+0x1c9/0x1f0
> [12573.273248] [<ffffffff810fe2c4>] rcu_nocb_kthread+0x114/0x930
> [12573.280564] [<ffffffff8107a91d>] kthread+0xed/0x100
> [12573.287807] [<ffffffff816f4b1c>] ret_from_fork+0x7c/0xb0
Notice the above call chain.
rcu_start_future_gp() is called with the rnp->lock held. Then it calls
rcu_start_gp_advanced(), which does a wakeup.
You can't do wakeups while holding the rnp->lock, as that would mean
that you could not do a rcu_read_unlock() while holding the rq lock, or
any lock that was taken while holding the rq lock. This is because...
(See below).
> [12573.295067]
> -> #0 (rcu_node_0){..-.-.}:
> [12573.309293] [<ffffffff810b8d36>] __lock_acquire+0x1786/0x1af0
> [12573.316568] [<ffffffff810b9851>] lock_acquire+0x91/0x1f0
> [12573.323825] [<ffffffff816ebc90>] _raw_spin_lock+0x40/0x80
> [12573.331081] [<ffffffff811054ff>] rcu_read_unlock_special+0x9f/0x4c0
> [12573.338377] [<ffffffff810760a6>] __rcu_read_unlock+0x96/0xa0
> [12573.345648] [<ffffffff811391b3>] perf_lock_task_context+0x143/0x2d0
> [12573.352942] [<ffffffff8113938e>] find_get_context+0x4e/0x1f0
> [12573.360211] [<ffffffff811403f4>] SYSC_perf_event_open+0x514/0xbd0
> [12573.367514] [<ffffffff81140e49>] SyS_perf_event_open+0x9/0x10
> [12573.374816] [<ffffffff816f4dd4>] tracesys+0xdd/0xe2
Notice the above trace.
perf took its own ctx->lock, which can be taken while holding the rq
lock. While holding this lock, it did a rcu_read_unlock(). The
perf_lock_task_context() basically looks like:
rcu_read_lock();
raw_spin_lock(ctx->lock);
rcu_read_unlock();
Now, what looks to have happened, is that we scheduled after taking that
first rcu_read_lock() but before taking the spin lock. When we scheduled
back in and took the ctx->lock, the following rcu_read_unlock()
triggered the "special" code.
The rcu_read_unlock_special() takes the rnp->lock, which gives us a
possible deadlock scenario.
	CPU0			CPU1			CPU2
	----			----			----
						rcu_nocb_kthread()
				lock(rq->lock);
	lock(ctx->lock);
						lock(rnp->lock);
						wake_up();
						lock(rq->lock);
	rcu_read_unlock();
	rcu_read_unlock_special();
	lock(rnp->lock);
				lock(ctx->lock);

		**** DEADLOCK ****
> [12573.382068]
> other info that might help us debug this:
>
> [12573.403229] Chain exists of:
> rcu_node_0 --> &rq->lock --> &ctx->lock
>
> [12573.424471] Possible unsafe locking scenario:
>
> [12573.438499] CPU0 CPU1
> [12573.445599] ---- ----
> [12573.452691] lock(&ctx->lock);
> [12573.459799] lock(&rq->lock);
> [12573.467010] lock(&ctx->lock);
> [12573.474192] lock(rcu_node_0);
> [12573.481262]
> *** DEADLOCK ***
>
> [12573.501931] 1 lock held by trinity-child17/31341:
> [12573.508990] #0: (&ctx->lock){-.-...}, at: [<ffffffff811390ed>] perf_lock_task_context+0x7d/0x2d0
> [12573.516475]
> stack backtrace:
> [12573.530395] CPU: 1 PID: 31341 Comm: trinity-child17 Not tainted 3.10.0-rc3+ #39
> [12573.545357] ffffffff825b4f90 ffff880219f1dbc0 ffffffff816e375b ffff880219f1dc00
> [12573.552868] ffffffff816dfa5d ffff880219f1dc50 ffff88023ce4d1f8 ffff88023ce4ca40
> [12573.560353] 0000000000000001 0000000000000001 ffff88023ce4d1f8 ffff880219f1dcc0
> [12573.567856] Call Trace:
> [12573.575011] [<ffffffff816e375b>] dump_stack+0x19/0x1b
> [12573.582284] [<ffffffff816dfa5d>] print_circular_bug+0x200/0x20f
> [12573.589637] [<ffffffff810b8d36>] __lock_acquire+0x1786/0x1af0
> [12573.596982] [<ffffffff810918f5>] ? sched_clock_cpu+0xb5/0x100
> [12573.604344] [<ffffffff810b9851>] lock_acquire+0x91/0x1f0
> [12573.611652] [<ffffffff811054ff>] ? rcu_read_unlock_special+0x9f/0x4c0
> [12573.619030] [<ffffffff816ebc90>] _raw_spin_lock+0x40/0x80
> [12573.626331] [<ffffffff811054ff>] ? rcu_read_unlock_special+0x9f/0x4c0
> [12573.633671] [<ffffffff811054ff>] rcu_read_unlock_special+0x9f/0x4c0
> [12573.640992] [<ffffffff811390ed>] ? perf_lock_task_context+0x7d/0x2d0
> [12573.648330] [<ffffffff810b429e>] ? put_lock_stats.isra.29+0xe/0x40
> [12573.655662] [<ffffffff813095a0>] ? delay_tsc+0x90/0xe0
> [12573.662964] [<ffffffff810760a6>] __rcu_read_unlock+0x96/0xa0
> [12573.670276] [<ffffffff811391b3>] perf_lock_task_context+0x143/0x2d0
> [12573.677622] [<ffffffff81139070>] ? __perf_event_enable+0x370/0x370
> [12573.684981] [<ffffffff8113938e>] find_get_context+0x4e/0x1f0
> [12573.692358] [<ffffffff811403f4>] SYSC_perf_event_open+0x514/0xbd0
> [12573.699753] [<ffffffff8108cd9d>] ? get_parent_ip+0xd/0x50
> [12573.707135] [<ffffffff810b71fd>] ? trace_hardirqs_on_caller+0xfd/0x1c0
> [12573.714599] [<ffffffff81140e49>] SyS_perf_event_open+0x9/0x10
> [12573.721996] [<ffffffff816f4dd4>] tracesys+0xdd/0xe2
This commit delays the wakeup via irq_work(), which is what
perf and ftrace use to perform wakeups in critical sections.
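The shape of the fix is sketched below. This is a hedged reconstruction
based on this description and on the wakeup_work field declared later in
this file, not the verbatim patch; rcu_gp_kthread_wake_deferred() is an
invented name. The point is that irq_work_queue() is safe under
rnp->lock, while the handler runs later with no rcu_node lock held:

/* One-time setup, e.g. where the rcu_state structure is initialized:
 *	init_irq_work(&rsp->wakeup_work, rsp_wakeup);
 */

/* irq_work handler: runs with no rcu_node ->lock held. */
static void rsp_wakeup(struct irq_work *work)
{
	struct rcu_state *rsp = container_of(work, struct rcu_state,
					     wakeup_work);

	wake_up(&rsp->gp_wq);	/* Now safe to wake the GP kthread. */
}

/* Called with rnp->lock held: queue the wakeup rather than doing it. */
static void rcu_gp_kthread_wake_deferred(struct rcu_state *rsp)
{
	irq_work_queue(&rsp->wakeup_work);
}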
Reported-by: Dave Jones <davej@redhat.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
#include <linux/irq_work.h>
/*
 * Define shape of hierarchy based on NR_CPUS, CONFIG_RCU_FANOUT, and
 * CONFIG_RCU_FANOUT_LEAF.
 * In theory, it should be possible to add more levels straightforwardly.
 * In practice, this did work well going from three levels to four.
 * Of course, your mileage may vary.
 */
#define MAX_RCU_LVLS 4
#define RCU_FANOUT_1	      (CONFIG_RCU_FANOUT_LEAF)
#define RCU_FANOUT_2	      (RCU_FANOUT_1 * CONFIG_RCU_FANOUT)
#define RCU_FANOUT_3	      (RCU_FANOUT_2 * CONFIG_RCU_FANOUT)
#define RCU_FANOUT_4	      (RCU_FANOUT_3 * CONFIG_RCU_FANOUT)

#if NR_CPUS <= RCU_FANOUT_1
#  define RCU_NUM_LVLS	      1
#  define NUM_RCU_LVL_0	      1
#  define NUM_RCU_LVL_1	      (NR_CPUS)
#  define NUM_RCU_LVL_2	      0
#  define NUM_RCU_LVL_3	      0
#  define NUM_RCU_LVL_4	      0
#elif NR_CPUS <= RCU_FANOUT_2
#  define RCU_NUM_LVLS	      2
#  define NUM_RCU_LVL_0	      1
#  define NUM_RCU_LVL_1	      DIV_ROUND_UP(NR_CPUS, RCU_FANOUT_1)
#  define NUM_RCU_LVL_2	      (NR_CPUS)
#  define NUM_RCU_LVL_3	      0
#  define NUM_RCU_LVL_4	      0
#elif NR_CPUS <= RCU_FANOUT_3
#  define RCU_NUM_LVLS	      3
#  define NUM_RCU_LVL_0	      1
#  define NUM_RCU_LVL_1	      DIV_ROUND_UP(NR_CPUS, RCU_FANOUT_2)
#  define NUM_RCU_LVL_2	      DIV_ROUND_UP(NR_CPUS, RCU_FANOUT_1)
#  define NUM_RCU_LVL_3	      (NR_CPUS)
#  define NUM_RCU_LVL_4	      0
#elif NR_CPUS <= RCU_FANOUT_4
#  define RCU_NUM_LVLS	      4
#  define NUM_RCU_LVL_0	      1
#  define NUM_RCU_LVL_1	      DIV_ROUND_UP(NR_CPUS, RCU_FANOUT_3)
#  define NUM_RCU_LVL_2	      DIV_ROUND_UP(NR_CPUS, RCU_FANOUT_2)
#  define NUM_RCU_LVL_3	      DIV_ROUND_UP(NR_CPUS, RCU_FANOUT_1)
#  define NUM_RCU_LVL_4	      (NR_CPUS)
#else
# error "CONFIG_RCU_FANOUT insufficient for NR_CPUS"
#endif /* #if (NR_CPUS) <= RCU_FANOUT_1 */

#define RCU_SUM (NUM_RCU_LVL_0 + NUM_RCU_LVL_1 + NUM_RCU_LVL_2 + NUM_RCU_LVL_3 + NUM_RCU_LVL_4)
#define NUM_RCU_NODES (RCU_SUM - NR_CPUS)
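/*
 * Worked example (illustrative configuration, not a default): with
 * NR_CPUS = 4096, CONFIG_RCU_FANOUT = 64, and CONFIG_RCU_FANOUT_LEAF = 16,
 * RCU_FANOUT_1 = 16 and RCU_FANOUT_2 = 1024, so the three-level case
 * applies: NUM_RCU_LVL_0 = 1 (root), NUM_RCU_LVL_1 =
 * DIV_ROUND_UP(4096, 1024) = 4 (interior), NUM_RCU_LVL_2 =
 * DIV_ROUND_UP(4096, 16) = 256 (leaf), and NUM_RCU_LVL_3 = 4096
 * (per-CPU rcu_data).  RCU_SUM is then 1 + 4 + 256 + 4096 = 4357,
 * giving NUM_RCU_NODES = 4357 - 4096 = 261.
 */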
extern int rcu_num_lvls;
extern int rcu_num_nodes;
/*
 * Dynticks per-CPU state.
 */
struct rcu_dynticks {
rcu: Track idleness independent of idle tasks
Earlier versions of RCU used the scheduling-clock tick to detect idleness
by checking for the idle task, but handled idleness differently for
CONFIG_NO_HZ=y. But there are now a number of uses of RCU read-side
critical sections in the idle task, for example, for tracing. A more
fine-grained detection of idleness is therefore required.
This commit presses the old dyntick-idle code into full-time service,
so that rcu_idle_enter(), previously known as rcu_enter_nohz(), is
always invoked at the beginning of an idle loop iteration. Similarly,
rcu_idle_exit(), previously known as rcu_exit_nohz(), is always invoked
at the end of an idle-loop iteration. This allows the idle task to
use RCU everywhere except between consecutive rcu_idle_enter() and
rcu_idle_exit() calls, in turn allowing architecture maintainers to
specify exactly where in the idle loop that RCU may be used.
Because some of the userspace upcall uses can result in what looks
to RCU like half of an interrupt, it is not possible to expect that
the irq_enter() and irq_exit() hooks will give exact counts. This
patch therefore expands the ->dynticks_nesting counter to 64 bits
and uses two separate bitfields to count process/idle transitions
and interrupt entry/exit transitions. It is presumed that userspace
upcalls do not happen in the idle loop or from usermode execution
(though usermode might do a system call that results in an upcall).
The counter is hard-reset on each process/idle transition, which
avoids the interrupt entry/exit error from accumulating. Overflow
is avoided by the 64-bitness of the ->dynticks_nesting counter.
This commit also adds warnings if a non-idle task asks RCU to enter
idle state (and these checks will need some adjustment before applying
Frederic's OS-jitter patches (http://lkml.org/lkml/2011/10/7/246)).
In addition, validation of ->dynticks and ->dynticks_nesting is added.
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
	long long dynticks_nesting;	/* Track irq/process nesting level. */
					/* Process level is worth LLONG_MAX/2. */
	int dynticks_nmi_nesting;	/* Track NMI nesting level. */
	atomic_t dynticks;		/* Even value for idle, else odd. */
nohz_full: Add rcu_dyntick data for scalable detection of all-idle state
This commit adds fields to the rcu_dyntick structure that are used to
detect idle CPUs. These new fields differ from the existing ones in
that the existing ones consider a CPU executing in user mode to be idle,
where the new ones consider CPUs executing in user mode to be busy.
The handling of these new fields is otherwise quite similar to that for
the existing fields.  This commit also adds the initialization required
for these fields.
So, why is usermode execution treated differently, with RCU considering
it a quiescent state equivalent to idle, while in contrast the new
full-system idle state detection considers usermode execution to be
non-idle?
It turns out that although one of RCU's quiescent states is usermode
execution, it is not a full-system idle state. This is because the
purpose of the full-system idle state is not RCU, but rather determining
when accurate timekeeping can safely be disabled. Whenever accurate
timekeeping is required in a CONFIG_NO_HZ_FULL kernel, at least one
CPU must keep the scheduling-clock tick going. If even one CPU is
executing in user mode, accurate timekeeping is required, particularly for
architectures where gettimeofday() and friends do not enter the kernel.
Only when all CPUs are really and truly idle can accurate timekeeping be
disabled, allowing all CPUs to turn off the scheduling clock interrupt,
thus greatly improving energy efficiency.
This naturally raises the question "Why is this code in RCU rather than in
timekeeping?", and the answer is that RCU has the data and infrastructure
to efficiently make this determination.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
#ifdef CONFIG_NO_HZ_FULL_SYSIDLE
	long long dynticks_idle_nesting;
					/* irq/process nesting level from idle. */
	atomic_t dynticks_idle;		/* Even value for idle, else odd. */
					/* "Idle" excludes userspace execution. */
	unsigned long dynticks_idle_jiffies;
					/* End of last non-NMI non-idle period. */
#endif /* #ifdef CONFIG_NO_HZ_FULL_SYSIDLE */
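	/*
	 * Illustration of the convention above (a hedged sketch, not the
	 * in-kernel sysidle logic, which must also snapshot and recheck
	 * the counter): a CPU is a candidate for the full-system-idle
	 * state when
	 *
	 *	!(atomic_read(&rdtp->dynticks_idle) & 0x1)
	 *
	 * i.e., when its ->dynticks_idle counter has an even value.
	 */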
#ifdef CONFIG_RCU_FAST_NO_HZ
	bool all_lazy;			/* Are all CPU's CBs lazy? */
	unsigned long nonlazy_posted;
					/* # times non-lazy CBs posted to CPU. */
	unsigned long nonlazy_posted_snap;
					/* idle-period nonlazy_posted snapshot. */
	unsigned long last_accelerate;
					/* Last jiffy CBs were accelerated. */
	int tick_nohz_enabled_snap;	/* Previously seen value from sysfs. */
#endif /* #ifdef CONFIG_RCU_FAST_NO_HZ */
};
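To make the resulting contract concrete, a hypothetical architecture idle
loop might bracket its low-power wait as follows (arch_cpu_idle_loop()
and arch_do_low_power_wait() are invented names for this sketch; only
rcu_idle_enter() and rcu_idle_exit() are the real API):

static void arch_cpu_idle_loop(void)
{
	while (1) {
		/* RCU read-side critical sections are still legal here. */
		rcu_idle_enter();	/* Beginning of idle: no RCU use... */
		arch_do_low_power_wait();
		rcu_idle_exit();	/* ...end of idle: RCU usable again. */
	}
}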
/* RCU's kthread states for tracing. */
#define RCU_KTHREAD_STOPPED  0
#define RCU_KTHREAD_RUNNING  1
#define RCU_KTHREAD_WAITING  2
#define RCU_KTHREAD_OFFCPU   3
#define RCU_KTHREAD_YIELDING 4
#define RCU_KTHREAD_MAX      4
/*
 * Definition for node within the RCU grace-period-detection hierarchy.
 */
struct rcu_node {
	raw_spinlock_t lock;	/* Root rcu_node's lock protects some */
				/*  rcu_state fields as well as following. */
	unsigned long gpnum;	/* Current grace period for this node. */
				/*  This will either be equal to or one */
				/*  behind the root rcu_node's gpnum. */
	unsigned long completed; /* Last GP completed for this node. */
				/*  This will either be equal to or one */
				/*  behind the root rcu_node's gpnum. */
	unsigned long qsmask;	/* CPUs or groups that need to switch in */
				/*  order for current grace period to proceed.*/
				/*  In leaf rcu_node, each bit corresponds to */
				/*  an rcu_data structure, otherwise, each */
				/*  bit corresponds to a child rcu_node */
				/*  structure. */
	unsigned long expmask;	/* Groups that have ->blkd_tasks */
				/*  elements that need to drain to allow the */
				/*  current expedited grace period to */
				/*  complete (only for TREE_PREEMPT_RCU). */
	unsigned long qsmaskinit;
				/* Per-GP initial value for qsmask & expmask. */
	unsigned long grpmask;	/* Mask to apply to parent qsmask. */
				/*  Only one bit will be set in this mask. */
	int	grplo;		/* lowest-numbered CPU or group here. */
	int	grphi;		/* highest-numbered CPU or group here. */
	u8	grpnum;		/* CPU/group number for next level up. */
	u8	level;		/* root is at level 0. */
	struct rcu_node *parent;
	struct list_head blkd_tasks;
				/* Tasks blocked in RCU read-side critical */
				/*  section.  Tasks are placed at the head */
				/*  of this list and age towards the tail. */
	struct list_head *gp_tasks;
				/* Pointer to the first task blocking the */
				/*  current grace period, or NULL if there */
				/*  is no such task. */
	struct list_head *exp_tasks;
				/* Pointer to the first task blocking the */
				/*  current expedited grace period, or NULL */
				/*  if there is no such task.  If there */
				/*  is no current expedited grace period, */
				/*  then there cannot be any such task. */
#ifdef CONFIG_RCU_BOOST
	struct list_head *boost_tasks;
				/* Pointer to first task that needs to be */
				/*  priority boosted, or NULL if no priority */
				/*  boosting is needed for this rcu_node */
				/*  structure.  If there are no tasks */
				/*  queued on this rcu_node structure that */
				/*  are blocking the current grace period, */
				/*  there can be no such task. */
	unsigned long boost_time;
				/* When to start boosting (jiffies). */
	struct task_struct *boost_kthread_task;
				/* kthread that takes care of priority */
				/*  boosting for this rcu_node structure. */
	unsigned int boost_kthread_status;
				/* State of boost_kthread_task for tracing. */
	unsigned long n_tasks_boosted;
				/* Total number of tasks boosted. */
	unsigned long n_exp_boosts;
				/* Number of tasks boosted for expedited GP. */
	unsigned long n_normal_boosts;
				/* Number of tasks boosted for normal GP. */
	unsigned long n_balk_blkd_tasks;
				/* Refused to boost: no blocked tasks. */
	unsigned long n_balk_exp_gp_tasks;
				/* Refused to boost: nothing blocking GP. */
	unsigned long n_balk_boost_tasks;
				/* Refused to boost: already boosting. */
	unsigned long n_balk_notblocked;
				/* Refused to boost: RCU RS CS still running. */
	unsigned long n_balk_notyet;
				/* Refused to boost: not yet time. */
	unsigned long n_balk_nos;
				/* Refused to boost: not sure why, though. */
				/*  This can happen due to race conditions. */
#endif /* #ifdef CONFIG_RCU_BOOST */
#ifdef CONFIG_RCU_NOCB_CPU
	wait_queue_head_t nocb_gp_wq[2];
				/* Place for rcu_nocb_kthread() to wait GP. */
#endif /* #ifdef CONFIG_RCU_NOCB_CPU */
	int need_future_gp[2];
				/* Counts of upcoming no-CB GP requests. */
	raw_spinlock_t fqslock ____cacheline_internodealigned_in_smp;
} ____cacheline_internodealigned_in_smp;
/*
 * Do a full breadth-first scan of the rcu_node structures for the
 * specified rcu_state structure.
 */
#define rcu_for_each_node_breadth_first(rsp, rnp) \
	for ((rnp) = &(rsp)->node[0]; \
	     (rnp) < &(rsp)->node[rcu_num_nodes]; (rnp)++)

/*
 * Do a breadth-first scan of the non-leaf rcu_node structures for the
 * specified rcu_state structure.  Note that if there is a singleton
 * rcu_node tree with but one rcu_node structure, this loop is a no-op.
 */
#define rcu_for_each_nonleaf_node_breadth_first(rsp, rnp) \
	for ((rnp) = &(rsp)->node[0]; \
	     (rnp) < (rsp)->level[rcu_num_lvls - 1]; (rnp)++)

/*
 * Scan the leaves of the rcu_node hierarchy for the specified rcu_state
 * structure.  Note that if there is a singleton rcu_node tree with but
 * one rcu_node structure, this loop -will- visit the rcu_node structure.
 * It is still a leaf node, even if it is also the root node.
 */
#define rcu_for_each_leaf_node(rsp, rnp) \
	for ((rnp) = (rsp)->level[rcu_num_lvls - 1]; \
	     (rnp) < &(rsp)->node[rcu_num_nodes]; (rnp)++)
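For example, a hypothetical debugging helper (illustrative only, not from
the kernel proper) could use the leaf scan to dump each leaf's CPU span
and outstanding quiescent-state mask:

/* Hypothetical example: print the qsmask of every leaf rcu_node. */
static void rcu_dump_leaf_qsmasks(struct rcu_state *rsp)
{
	struct rcu_node *rnp;

	rcu_for_each_leaf_node(rsp, rnp)
		pr_info("leaf CPUs %d-%d: qsmask %#lx\n",
			rnp->grplo, rnp->grphi, rnp->qsmask);
}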
/* Index values for nxttail array in struct rcu_data. */
#define RCU_DONE_TAIL		0	/* Also RCU_WAIT head. */
#define RCU_WAIT_TAIL		1	/* Also RCU_NEXT_READY head. */
#define RCU_NEXT_READY_TAIL	2	/* Also RCU_NEXT head. */
#define RCU_NEXT_TAIL		3
#define RCU_NEXT_SIZE		4
/* Per-CPU data for read-copy update. */
struct rcu_data {
/* 1) quiescent-state and grace-period handling : */
	unsigned long	completed;	/* Track rsp->completed gp number */
					/*  in order to detect GP end. */
	unsigned long	gpnum;		/* Highest gp number that this CPU */
					/*  is aware of having started. */
rcu: Simplify quiescent-state accounting
There is often a delay between the time that a CPU passes through a
quiescent state and the time that this quiescent state is reported to the
RCU core. It is quite possible that the grace period ended before the
quiescent state could be reported, for example, some other CPU might have
deduced that this CPU passed through dyntick-idle mode. It is critically
important that quiescent state be counted only against the grace period
that was in effect at the time that the quiescent state was detected.
Previously, this was handled by recording the number of the last grace
period to complete when passing through a quiescent state. The RCU
core then checks this number against the current value, and rejects
the quiescent state if there is a mismatch. However, one additional
possibility must be accounted for, namely that the quiescent state was
recorded after the prior grace period completed but before the current
grace period started. In this case, the RCU core must reject the
quiescent state, but the recorded number will match. This is handled
when the CPU becomes aware of a new grace period -- at that point,
it invalidates any prior quiescent state.
This works, but is a bit indirect. The new approach records the current
grace period, and the RCU core checks to see (1) that this is still the
current grace period and (2) that this grace period has not yet ended.
This approach simplifies reasoning about correctness, and this commit
changes over to this new approach.
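A hedged sketch of the check just described (paraphrased from this
description, not copied from the implementation): a recorded quiescent
state is honored only if (1) it was recorded against the still-current
grace period and (2) that grace period is still in progress, roughly:

	rdp->passed_quiesce &&
	rdp->gpnum == rnp->gpnum &&	/* (1) same grace period... */
	rnp->completed != rnp->gpnum	/* (2) ...not yet ended. */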
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
	bool passed_quiesce;		/* User-mode/idle loop etc. */
	bool qs_pending;		/* Core waits for quiesc state. */
	bool beenonline;		/* CPU online at least once. */
	bool preemptible;		/* Preemptible RCU? */
	struct rcu_node *mynode;	/* This CPU's leaf of hierarchy */
	unsigned long grpmask;		/* Mask to apply to leaf qsmask. */
#ifdef CONFIG_RCU_CPU_STALL_INFO
	unsigned long	ticks_this_gp;	/* The number of scheduling-clock */
					/*  ticks this CPU has handled */
					/*  during and after the last grace */
					/*  period it is aware of. */
#endif /* #ifdef CONFIG_RCU_CPU_STALL_INFO */
	/* 2) batch handling */
	/*
	 * If nxtlist is not NULL, it is partitioned as follows.
	 * Any of the partitions might be empty, in which case the
	 * pointer to that partition will be equal to the pointer for
	 * the following partition.  When the list is empty, all of
	 * the nxttail elements point to the ->nxtlist pointer itself,
	 * which in that case is NULL.
	 *
	 * [nxtlist, *nxttail[RCU_DONE_TAIL]):
	 *	Entries that batch # <= ->completed
	 *	The grace period for these entries has completed, and
	 *	the other grace-period-completed entries may be moved
	 *	here temporarily in rcu_process_callbacks().
	 * [*nxttail[RCU_DONE_TAIL], *nxttail[RCU_WAIT_TAIL]):
	 *	Entries that batch # <= ->completed - 1: waiting for current GP
	 * [*nxttail[RCU_WAIT_TAIL], *nxttail[RCU_NEXT_READY_TAIL]):
	 *	Entries known to have arrived before current GP ended
	 * [*nxttail[RCU_NEXT_READY_TAIL], *nxttail[RCU_NEXT_TAIL]):
	 *	Entries that might have arrived after current GP ended
	 *	Note that the value of *nxttail[RCU_NEXT_TAIL] will
	 *	always be NULL, as this is the end of the list.
	 */
	struct rcu_head *nxtlist;
	struct rcu_head **nxttail[RCU_NEXT_SIZE];
	unsigned long	nxtcompleted[RCU_NEXT_SIZE];
					/* grace periods for sublists. */
	long		qlen_lazy;	/* # of lazy queued callbacks */
	long		qlen;		/* # of queued callbacks, incl lazy */
	long		qlen_last_fqs_check;
					/* qlen at last check for QS forcing */
	unsigned long	n_cbs_invoked;	/* count of RCU cbs invoked. */
	unsigned long	n_nocbs_invoked; /* count of no-CBs RCU cbs invoked. */
	unsigned long	n_cbs_orphaned;	/* RCU cbs orphaned by dying CPU */
	unsigned long	n_cbs_adopted;	/* RCU cbs adopted from dying CPU */
	unsigned long	n_force_qs_snap;
					/* did other CPU force QS recently? */
	long		blimit;		/* Upper limit on a processed batch */

	/* 3) dynticks interface. */
	struct rcu_dynticks *dynticks;	/* Shared per-CPU dynticks state. */
	int dynticks_snap;		/* Per-GP tracking for dynticks. */

	/* 4) reasons this CPU needed to be kicked by force_quiescent_state */
	unsigned long dynticks_fqs;	/* Kicked due to dynticks idle. */
	unsigned long offline_fqs;	/* Kicked due to being offline. */
	/* 5) __rcu_pending() statistics. */
	unsigned long n_rcu_pending;	/* rcu_pending() calls since boot. */
	unsigned long n_rp_qs_pending;
	unsigned long n_rp_report_qs;
	unsigned long n_rp_cb_ready;
	unsigned long n_rp_cpu_needs_gp;
	unsigned long n_rp_gp_completed;
	unsigned long n_rp_gp_started;
	unsigned long n_rp_need_nothing;
	/* 6) _rcu_barrier() and OOM callbacks. */
	struct rcu_head barrier_head;
#ifdef CONFIG_RCU_FAST_NO_HZ
	struct rcu_head oom_head;
#endif /* #ifdef CONFIG_RCU_FAST_NO_HZ */

	/* 7) Callback offloading. */
#ifdef CONFIG_RCU_NOCB_CPU
	struct rcu_head *nocb_head;	/* CBs waiting for kthread. */
	struct rcu_head **nocb_tail;
	atomic_long_t nocb_q_count;	/* # CBs waiting for kthread */
	atomic_long_t nocb_q_count_lazy; /*  (approximate). */
	int nocb_p_count;		/* # CBs being invoked by kthread */
	int nocb_p_count_lazy;		/*  (approximate). */
	wait_queue_head_t nocb_wq;	/* For nocb kthreads to sleep on. */
	struct task_struct *nocb_kthread;
#endif /* #ifdef CONFIG_RCU_NOCB_CPU */

	/* 8) RCU CPU stall data. */
#ifdef CONFIG_RCU_CPU_STALL_INFO
	unsigned int softirq_snap;	/* Snapshot of softirq activity. */
#endif /* #ifdef CONFIG_RCU_CPU_STALL_INFO */

	int cpu;
rcu: Add grace-period, quiescent-state, and call_rcu trace events
Add trace events to record grace-period start and end, quiescent states,
CPUs noticing grace-period start and end, grace-period initialization,
call_rcu() invocation, tasks blocking in RCU read-side critical sections,
tasks exiting those same critical sections, force_quiescent_state()
detection of dyntick-idle and offline CPUs, CPUs entering and leaving
dyntick-idle mode (except from NMIs), CPUs coming online and going
offline, and CPUs being kicked for staying in dyntick-idle mode for too
long (as in many weeks, even on 32-bit systems).
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
rcu: Add the rcu flavor to callback trace events
The earlier trace events for registering RCU callbacks and for invoking
them did not include the RCU flavor (rcu_bh, rcu_preempt, or rcu_sched).
This commit adds the RCU flavor to those trace events.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
	struct rcu_state *rsp;
};
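As a hedged illustration of the segmented ->nxtlist/->nxttail convention
documented above (rcu_segment_count() is a hypothetical helper; note that
segment boundaries are pointer-to-pointer values, so the walk advances
through &head->next addresses rather than node pointers):

/* Hypothetical helper: count the callbacks in one nxtlist segment. */
static long rcu_segment_count(struct rcu_data *rdp, int seg)
{
	struct rcu_head **p = (seg == RCU_DONE_TAIL) ?
			      &rdp->nxtlist : rdp->nxttail[seg - 1];
	long n = 0;

	/* Walk &head->next pointers until the segment's tail pointer. */
	while (p != rdp->nxttail[seg]) {
		n++;
		p = &(*p)->next;
	}
	return n;
}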
/* Values for fqs_state field in struct rcu_state. */
rcu: Fix long-grace-period race between forcing and initialization
Very long RCU read-side critical sections (50 milliseconds or
so) can cause a race between force_quiescent_state() and
rcu_start_gp() as follows on kernel builds with multi-level
rcu_node hierarchies:
1. CPU 0 calls force_quiescent_state(), sees that there is a
grace period in progress, and acquires ->fqslock.
2. CPU 1 detects the end of the grace period, and so
cpu_quiet_msk_finish() sets rsp->completed to rsp->gpnum.
This operation is carried out under the root rnp->lock,
but CPU 0 has not yet acquired that lock. Note that
rsp->signaled is still RCU_SAVE_DYNTICK from the last
grace period.
3. CPU 1 calls rcu_start_gp(), but no one wants a new grace
period, so it drops the root rnp->lock and returns.
4. CPU 0 acquires the root rnp->lock and picks up rsp->completed
and rsp->signaled, then drops rnp->lock. It then enters the
RCU_SAVE_DYNTICK leg of the switch statement.
5. CPU 2 invokes call_rcu(), and now needs a new grace period.
It calls rcu_start_gp(), which acquires the root rnp->lock, sets
rsp->signaled to RCU_GP_INIT (too bad that CPU 0 is already in
the RCU_SAVE_DYNTICK leg of the switch statement!) and starts
initializing the rcu_node hierarchy. If there are multiple
levels to the hierarchy, it will drop the root rnp->lock and
initialize the lower levels of the hierarchy.
6. CPU 0 notes that rsp->completed has not changed, which permits
both CPU 2 and CPU 0 to try updating it concurrently. If CPU 0's
update prevails, later calls to force_quiescent_state() can
count old quiescent states against the new grace period, which
can in turn result in premature ending of grace periods.
Not good.
This patch adds an RCU_GP_IDLE state for rsp->signaled that is
set initially at boot time and any time a grace period ends.
This prevents CPU 0 from getting into the workings of
force_quiescent_state() in step 4. Additional locking and
checks prevent the concurrent update of rsp->signaled in step 6.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: laijs@cn.fujitsu.com
Cc: dipankar@in.ibm.com
Cc: mathieu.desnoyers@polymtl.ca
Cc: josh@joshtriplett.org
Cc: dvhltc@us.ibm.com
Cc: niv@us.ibm.com
Cc: peterz@infradead.org
Cc: rostedt@goodmis.org
Cc: Valdis.Kletnieks@vt.edu
Cc: dhowells@redhat.com
LKML-Reference: <1256742889199-git-send-email->
Signed-off-by: Ingo Molnar <mingo@elte.hu>
#define RCU_GP_IDLE		0	/* No grace period in progress. */
#define RCU_GP_INIT		1	/* Grace period being initialized. */
#define RCU_SAVE_DYNTICK	2	/* Need to scan dyntick state. */
#define RCU_FORCE_QS		3	/* Need to force quiescent state. */
#define RCU_SIGNAL_INIT		RCU_SAVE_DYNTICK
#define RCU_JIFFIES_TILL_FORCE_QS (1 + (HZ > 250) + (HZ > 500))
					/* For jiffies_till_first_fqs */
					/*  and jiffies_till_next_fqs. */

#define RCU_JIFFIES_FQS_DIV	256	/* Very large systems need more */
					/*  delay between bouts of */
					/*  quiescent-state forcing. */

#define RCU_STALL_RAT_DELAY	2	/* Allow other CPUs time to take */
					/*  at least one scheduling clock */
					/*  irq before ratting on them. */
#define rcu_wait(cond)							\
do {									\
	for (;;) {							\
		set_current_state(TASK_INTERRUPTIBLE);			\
		if (cond)						\
			break;						\
		schedule();						\
	}								\
	__set_current_state(TASK_RUNNING);				\
} while (0)
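For example, the RCU priority-boost kthread waits for boostable tasks
with roughly:

	rcu_wait(rnp->boost_tasks || rnp->exp_tasks);

The macro sleeps in TASK_INTERRUPTIBLE, rechecking the condition after
every schedule(), and restores TASK_RUNNING on exit; the waker is
expected to wake_up_process() the sleeping kthread after making the
condition true.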
/*
 * RCU global state, including node hierarchy.  This hierarchy is
 * represented in "heap" form in a dense array.  The root (first level)
 * of the hierarchy is in ->node[0] (referenced by ->level[0]), the second
 * level in ->node[1] through ->node[m] (->node[1] referenced by ->level[1]),
 * and the third level in ->node[m+1] and following (->node[m+1] referenced
 * by ->level[2]).  The number of levels is determined by the number of
 * CPUs and by CONFIG_RCU_FANOUT.  Small systems will have a "hierarchy"
 * consisting of a single rcu_node.
 */
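/*
 * Worked example (illustrative numbers, continuing the fanout example
 * earlier in this file): with 1 + 4 + 256 = 261 rcu_node structures,
 * ->level[0] points to ->node[0] (the root), ->level[1] to ->node[1]
 * (the first of 4 interior nodes), and ->level[2] to ->node[5] (the
 * first of 256 leaves).
 */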
struct rcu_state {
	struct rcu_node node[NUM_RCU_NODES];	/* Hierarchy. */
	struct rcu_node *level[RCU_NUM_LVLS];	/* Hierarchy levels. */
	u32 levelcnt[MAX_RCU_LVLS + 1];		/* # nodes in each level. */
	u8 levelspread[RCU_NUM_LVLS];		/* kids/node in each level. */
	struct rcu_data __percpu *rda;		/* pointer to per-CPU rcu_data. */
	void (*call)(struct rcu_head *head,	/* call_rcu() flavor. */
		     void (*func)(struct rcu_head *head));

	/* The following fields are guarded by the root rcu_node's lock. */

	u8	fqs_state ____cacheline_internodealigned_in_smp;
						/* Force QS state. */
	u8	boost;				/* Subject to priority boost. */
	unsigned long gpnum;			/* Current gp number. */
	unsigned long completed;		/* # of last completed gp. */
	struct task_struct *gp_kthread;		/* Task for grace periods. */
	wait_queue_head_t gp_wq;		/* Where GP task waits. */
	int gp_flags;				/* Commands for GP task. */
	/* End of fields guarded by root rcu_node's lock. */

	raw_spinlock_t orphan_lock ____cacheline_internodealigned_in_smp;
						/* Protect following fields. */
	struct rcu_head *orphan_nxtlist;	/* Orphaned callbacks that */
						/*  need a grace period. */
	struct rcu_head **orphan_nxttail;	/* Tail of above. */
	struct rcu_head *orphan_donelist;	/* Orphaned callbacks that */
						/*  are ready to invoke. */
	struct rcu_head **orphan_donetail;	/* Tail of above. */
	long qlen_lazy;				/* Number of lazy callbacks. */
	long qlen;				/* Total number of callbacks. */
	/* End of fields guarded by orphan_lock. */
	struct mutex onoff_mutex;		/* Coordinate hotplug & GPs. */

	struct mutex barrier_mutex;		/* Guards barrier fields. */
	atomic_t barrier_cpu_count;		/* # CPUs waiting on. */
	struct completion barrier_completion;	/* Wake at barrier end. */
	unsigned long n_barrier_done;		/* ++ at start and end of */
						/*  _rcu_barrier(). */
	/* End of fields guarded by barrier_mutex. */

	atomic_long_t expedited_start;		/* Starting ticket. */
	atomic_long_t expedited_done;		/* Done ticket. */
	atomic_long_t expedited_wrap;		/* # near-wrap incidents. */
	atomic_long_t expedited_tryfail;	/* # acquisition failures. */
	atomic_long_t expedited_workdone1;	/* # done by others #1. */
	atomic_long_t expedited_workdone2;	/* # done by others #2. */
	atomic_long_t expedited_normal;		/* # fallbacks to normal. */
	atomic_long_t expedited_stoppedcpus;	/* # successful stop_cpus. */
	atomic_long_t expedited_done_tries;	/* # tries to update _done. */
	atomic_long_t expedited_done_lost;	/* # times beaten to _done. */
	atomic_long_t expedited_done_exit;	/* # times exited _done loop. */
2009-08-23 00:56:45 +04:00
unsigned long jiffies_force_qs ; /* Time at which to invoke */
/* force_quiescent_state(). */
unsigned long n_force_qs ; /* Number of calls to */
/* force_quiescent_state(). */
unsigned long n_force_qs_lh ; /* ~Number of calls leaving */
/* due to lock unavailable. */
unsigned long n_force_qs_ngp ; /* Number of calls leaving */
/* due to no GP active. */
unsigned long gp_start ; /* Time at which GP started, */
/* but in jiffies. */
unsigned long jiffies_stall ; /* Time at which to check */
/* for CPU stalls. */
2011-04-07 03:01:16 +04:00
unsigned long gp_max ; /* Maximum GP duration in */
/* jiffies. */
2013-07-13 00:50:28 +04:00
const char * name ; /* Name of structure. */
rcu: Distinguish "rcuo" kthreads by RCU flavor
Currently, the per-no-CBs-CPU kthreads are named "rcuo" followed by
the CPU number, for example, "rcuo0".  This is problematic given that
there are either two or three RCU flavors, each of which gets a per-CPU
kthread with exactly the same name. This commit therefore introduces
a one-letter abbreviation for each RCU flavor, namely 'b' for RCU-bh,
'p' for RCU-preempt, and 's' for RCU-sched. This abbreviation is used
to distinguish the "rcuo" kthreads, for example, for CPU 0 we would have
"rcuob/0", "rcuop/0", and "rcuos/0".
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Tested-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
	char abbr;				/* Abbreviated name. */
	struct list_head flavors;		/* List of RCU flavors. */
	struct irq_work wakeup_work;		/* Postponed wakeups */
};
/* Values for rcu_state structure's gp_flags field. */
#define RCU_GP_FLAG_INIT 0x1	/* Need grace-period initialization. */
#define RCU_GP_FLAG_FQS  0x2	/* Need grace-period quiescent-state forcing. */
extern struct list_head rcu_struct_flavors;

/* Sequence through rcu_state structures for each RCU flavor. */
#define for_each_rcu_flavor(rsp) \
	list_for_each_entry((rsp), &rcu_struct_flavors, flavors)
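A hedged usage sketch (the printout is illustrative; ->name and ->abbr
are the rcu_state fields declared above):

	struct rcu_state *rsp;

	for_each_rcu_flavor(rsp)
		pr_info("flavor %s (abbr '%c')\n", rsp->name, rsp->abbr);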
/* Return values for rcu_preempt_offline_tasks(). */
#define RCU_OFL_TASKS_NORM_GP	0x1	/* Tasks blocking normal */
					/*  GP were moved to root. */
#define RCU_OFL_TASKS_EXP_GP	0x2	/* Tasks blocking expedited */
					/*  GP were moved to root. */
/*
 * RCU implementation internal declarations:
 */
extern struct rcu_state rcu_sched_state;
DECLARE_PER_CPU(struct rcu_data, rcu_sched_data);

extern struct rcu_state rcu_bh_state;
DECLARE_PER_CPU(struct rcu_data, rcu_bh_data);
rcu: Merge preemptable-RCU functionality into hierarchical RCU
Create a kernel/rcutree_plugin.h file that contains definitions
for preemptable RCU (or, under the #else branch of the #ifdef,
empty definitions for the classic non-preemptable semantics).
These definitions fit into plugins defined in kernel/rcutree.c
for this purpose.
This variant of preemptable RCU uses a new algorithm whose
read-side expense is roughly that of classic hierarchical RCU
under CONFIG_PREEMPT. This new algorithm's update-side expense
is similar to that of classic hierarchical RCU, and, in absence
of read-side preemption or blocking, is exactly that of classic
hierarchical RCU. Perhaps more important, this new algorithm
has a much simpler implementation, saving well over 1,000 lines
of code compared to mainline's implementation of preemptable
RCU, which will hopefully be retired in favor of this new
algorithm.
The simplifications are obtained by maintaining per-task
nesting state for running tasks, and using a simple
lock-protected algorithm to handle accounting when tasks block
within RCU read-side critical sections, making use of lessons
learned while creating numerous user-level RCU implementations
over the past 18 months.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: laijs@cn.fujitsu.com
Cc: dipankar@in.ibm.com
Cc: akpm@linux-foundation.org
Cc: mathieu.desnoyers@polymtl.ca
Cc: josht@linux.vnet.ibm.com
Cc: dvhltc@us.ibm.com
Cc: niv@us.ibm.com
Cc: peterz@infradead.org
Cc: rostedt@goodmis.org
LKML-Reference: <12509746134003-git-send-email->
Signed-off-by: Ingo Molnar <mingo@elte.hu>
#ifdef CONFIG_TREE_PREEMPT_RCU
extern struct rcu_state rcu_preempt_state;
DECLARE_PER_CPU(struct rcu_data, rcu_preempt_data);
#endif /* #ifdef CONFIG_TREE_PREEMPT_RCU */
#ifdef CONFIG_RCU_BOOST
DECLARE_PER_CPU(unsigned int, rcu_cpu_kthread_status);
DECLARE_PER_CPU(int, rcu_cpu_kthread_cpu);
DECLARE_PER_CPU(unsigned int, rcu_cpu_kthread_loops);
DECLARE_PER_CPU(char, rcu_cpu_has_work);
#endif /* #ifdef CONFIG_RCU_BOOST */
#ifndef RCU_TREE_NONCORE

/* Forward declarations for rcutree_plugin.h */
static void rcu_bootup_announce(void);
long rcu_batches_completed(void);
static void rcu_preempt_note_context_switch(int cpu);
static int rcu_preempt_blocked_readers_cgp(struct rcu_node *rnp);
rcu: Fix grace-period-stall bug on large systems with CPU hotplug
When the last CPU of a given leaf rcu_node structure goes
offline, all of the tasks queued on that leaf rcu_node structure
(due to having blocked in their current RCU read-side critical
sections) are requeued onto the root rcu_node structure. This
requeuing is carried out by rcu_preempt_offline_tasks().
However, it is possible that these queued tasks are the only
thing preventing the leaf rcu_node structure from reporting a
quiescent state up the rcu_node hierarchy. Unfortunately, the
old code would fail to do this reporting, resulting in a
grace-period stall given the following sequence of events:
1. Kernel built for more than 32 CPUs on 32-bit systems or for more
than 64 CPUs on 64-bit systems, so that there is more than one
rcu_node structure. (Or CONFIG_RCU_FANOUT is artificially set
to a number smaller than CONFIG_NR_CPUS.)
2. The kernel is built with CONFIG_TREE_PREEMPT_RCU.
3. A task running on a CPU associated with a given leaf rcu_node
structure blocks while in an RCU read-side critical section
-and- that CPU has not yet passed through a quiescent state
for the current RCU grace period. This will cause the task
to be queued on the leaf rcu_node's blocked_tasks[] array, in
particular, on the element of this array corresponding to the
current grace period.
4. Each of the remaining CPUs corresponding to this same leaf rcu_node
structure pass through a quiescent state. However, the task is
still in its RCU read-side critical section, so these quiescent
states cannot be reported further up the rcu_node hierarchy.
Nevertheless, all bits in the leaf rcu_node structure's ->qsmask
field are now zero.
5. Each of the remaining CPUs go offline. (The events in step
#4 and #5 can happen in any order as long as each CPU passes
through a quiescent state before going offline.)
6. When the last CPU goes offline, __rcu_offline_cpu() will invoke
rcu_preempt_offline_tasks(), which will move the task to the
root rcu_node structure, but without reporting a quiescent state
up the rcu_node hierarchy (and this failure to report a quiescent
state is the bug).
But because this leaf rcu_node structure's ->qsmask field is
already zero and its ->blocked_tasks[] entries are all empty,
force_quiescent_state() will skip this rcu_node structure.
Therefore, grace periods are now hung.
This patch abstracts some code out of rcu_read_unlock_special(),
calling the result task_quiet() by analogy with cpu_quiet(), and
invokes task_quiet() from both rcu_read_unlock_special() and
__rcu_offline_cpu(). Invoking task_quiet() from
__rcu_offline_cpu() reports the quiescent state up the rcu_node
hierarchy, fixing the bug. This ends up requiring a separate
lock_class_key per level of the rcu_node hierarchy, which this
patch also provides.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: laijs@cn.fujitsu.com
Cc: dipankar@in.ibm.com
Cc: mathieu.desnoyers@polymtl.ca
Cc: josh@joshtriplett.org
Cc: dvhltc@us.ibm.com
Cc: niv@us.ibm.com
Cc: peterz@infradead.org
Cc: rostedt@goodmis.org
Cc: Valdis.Kletnieks@vt.edu
Cc: dhowells@redhat.com
LKML-Reference: <12589088301770-git-send-email->
Signed-off-by: Ingo Molnar <mingo@elte.hu>
#ifdef CONFIG_HOTPLUG_CPU
static void rcu_report_unblock_qs_rnp(struct rcu_node *rnp,
				      unsigned long flags);
#endif /* #ifdef CONFIG_HOTPLUG_CPU */
2010-02-23 04:05:05 +03:00
static void rcu_print_detail_task_stall(struct rcu_state *rsp);
2011-08-14 00:31:47 +04:00
static int rcu_print_task_stall(struct rcu_node *rnp);
2009-09-23 20:50:43 +04:00
static void rcu_preempt_check_blocked_tasks(struct rcu_node *rnp);
#ifdef CONFIG_HOTPLUG_CPU
rcu: Fix TREE_PREEMPT_RCU CPU_HOTPLUG bad-luck hang
If the following sequence of events occurs, then
TREE_PREEMPT_RCU will hang waiting for a grace period to
complete, eventually OOMing the system:
o A TREE_PREEMPT_RCU build of the kernel is booted on a system
with more than 64 physical CPUs present (32 on a 32-bit system).
Alternatively, a TREE_PREEMPT_RCU build of the kernel is booted
with RCU_FANOUT set to a sufficiently small value that the
physical CPUs populate two or more leaf rcu_node structures.
o A task is preempted in an RCU read-side critical section
while running on a CPU corresponding to a given leaf rcu_node
structure.
o All CPUs corresponding to this same leaf rcu_node structure
record quiescent states for the current grace period.
o All of these same CPUs go offline (hence the need for enough
physical CPUs to populate more than one leaf rcu_node structure).
This causes the preempted task to be moved to the root rcu_node
structure.
At this point, there is nothing left to cause the quiescent
state to be propagated up the rcu_node tree, so the current
grace period never completes.
The simplest fix, especially after considering the deadlock
possibilities, is to detect this situation when the last CPU is
offlined, and to set that CPU's ->qsmask bit in its leaf
rcu_node structure. This will cause the next invocation of
force_quiescent_state() to end the grace period.
Without this fix, this hang can be triggered in an hour or so on
some machines with rcutorture and random CPU onlining/offlining.
With this fix, these same machines pass a full 10 hours of this
sort of abuse.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: laijs@cn.fujitsu.com
Cc: dipankar@in.ibm.com
Cc: mathieu.desnoyers@polymtl.ca
Cc: josh@joshtriplett.org
Cc: dvhltc@us.ibm.com
Cc: niv@us.ibm.com
Cc: peterz@infradead.org
Cc: rostedt@goodmis.org
Cc: Valdis.Kletnieks@vt.edu
Cc: dhowells@redhat.com
LKML-Reference: <20091015162614.GA19131@linux.vnet.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
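In outline, and as a sketch only (assuming this runs in
__rcu_offline_cpu() with the leaf rcu_node's ->lock held; the exact
code is not shown in this header):

	/*
	 * Sketch: if blocked tasks had to be moved to the root rcu_node,
	 * leave this CPU's bit set in ->qsmask so that the next
	 * force_quiescent_state() revisits this leaf and ends the
	 * grace period instead of hanging.
	 */
	if (rcu_preempt_offline_tasks(rsp, rnp, rdp))
		rnp->qsmask |= rdp->grpmask;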
2009-10-15 20:26:14 +04:00
static int rcu_preempt_offline_tasks(struct rcu_state *rsp,
				     struct rcu_node *rnp,
				     struct rcu_data *rdp);
2009-09-23 20:50:43 +04:00
#endif /* #ifdef CONFIG_HOTPLUG_CPU */
static void rcu_preempt_check_callbacks(int cpu);
void call_rcu(struct rcu_head *head, void (*func)(struct rcu_head *rcu));
2009-12-02 23:10:15 +03:00
#if defined(CONFIG_HOTPLUG_CPU) || defined(CONFIG_TREE_PREEMPT_RCU)
2011-10-22 18:12:34 +04:00
static void rcu_report_exp_rnp(struct rcu_state *rsp, struct rcu_node *rnp,
			       bool wake);
2009-12-02 23:10:15 +03:00
#endif /* #if defined(CONFIG_HOTPLUG_CPU) || defined(CONFIG_TREE_PREEMPT_RCU) */
2009-09-23 20:50:43 +04:00
static void __init __rcu_init_preempt(void);
2011-05-05 08:43:49 +04:00
static void rcu_initiate_boost(struct rcu_node *rnp, unsigned long flags);
2011-06-16 02:47:09 +04:00
static void rcu_preempt_boost_start_gp(struct rcu_node *rnp);
static void invoke_rcu_callbacks_kthread(void);
2011-11-30 03:57:13 +04:00
static bool rcu_is_callbacks_kthread(void);
2011-06-16 02:47:09 +04:00
#ifdef CONFIG_RCU_BOOST
static void rcu_preempt_do_callbacks(void);
2013-06-19 22:52:21 +04:00
static int rcu_spawn_one_boost_kthread(struct rcu_state *rsp,
2012-07-16 14:42:35 +04:00
				       struct rcu_node *rnp);
2011-06-16 02:47:09 +04:00
#endif /* #ifdef CONFIG_RCU_BOOST */
2013-06-19 22:52:21 +04:00
static void rcu_prepare_kthreads(int cpu);
rcu: Permit dyntick-idle with callbacks pending
The current implementation of RCU_FAST_NO_HZ prevents CPUs from entering
dyntick-idle state if they have RCU callbacks pending. Unfortunately,
this has the side-effect of often preventing them from entering this
state, especially if at least one other CPU is not in dyntick-idle state.
However, the resulting per-tick wakeup is wasteful in many cases: if the
CPU has already fully responded to the current RCU grace period, there
will be nothing for it to do until this grace period ends, which will
frequently take several jiffies.
This commit therefore permits a CPU that has done everything that the
current grace period has asked of it (rcu_pending() == 0) to enter
dyntick-idle mode even if it still has RCU callbacks pending. However,
such a CPU posts a timer to
wake it up several jiffies later (6 jiffies, based on experience with
grace-period lengths). This wakeup is required to handle situations
that can result in all CPUs being in dyntick-idle mode, thus failing
to ever complete the current grace period. If a CPU wakes up before
the timer goes off, then it cancels that timer, thus avoiding spurious
wakeups.
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
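A hedged sketch of the timer handling described above (the per-CPU
timer name and the use of rcu_pending() as the "nothing more to do"
test follow the commit message; the details are illustrative, not the
verbatim patch):
/* Sketch: per-CPU backstop timer for dyntick-idle with callbacks. */
static DEFINE_PER_CPU(struct timer_list, rcu_idle_gp_timer);

static void rcu_prepare_for_idle(int cpu)
{
	/* Fully caught up with this grace period: nap, with a backstop. */
	if (!rcu_pending(cpu))
		mod_timer(&per_cpu(rcu_idle_gp_timer, cpu), jiffies + 6);
}

static void rcu_cleanup_after_idle(int cpu)
{
	/* Awake before the backstop fired, so cancel it. */
	del_timer(&per_cpu(rcu_idle_gp_timer, cpu));
}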
2011-11-29 00:28:34 +04:00
static void rcu_cleanup_after_idle(int cpu);
2011-11-02 17:54:54 +04:00
static void rcu_prepare_for_idle(int cpu);
2012-02-28 23:02:21 +04:00
static void rcu_idle_count_callbacks_posted(void);
2012-01-17 01:29:10 +04:00
static void print_cpu_stall_info_begin(void);
static void print_cpu_stall_info(struct rcu_state *rsp, int cpu);
static void print_cpu_stall_info_end(void);
static void zero_cpu_stall_ticks(struct rcu_data *rdp);
static void increment_cpu_stall_ticks(void);
2013-02-11 08:48:58 +04:00
static int rcu_nocb_needs_gp(struct rcu_state *rsp);
static void rcu_nocb_gp_set(struct rcu_node *rnp, int nrq);
2012-12-31 03:21:01 +04:00
static void rcu_nocb_gp_cleanup(struct rcu_state *rsp, struct rcu_node *rnp);
2013-02-11 08:48:58 +04:00
static void rcu_init_one_nocb(struct rcu_node *rnp);
2012-08-20 08:35:53 +04:00
static bool __call_rcu_nocb(struct rcu_data *rdp, struct rcu_head *rhp,
			    bool lazy);
static bool rcu_nocb_adopt_orphan_cbs(struct rcu_state *rsp,
				      struct rcu_data *rdp);
static void rcu_boot_init_nocb_percpu_data(struct rcu_data *rdp);
static void rcu_spawn_nocb_kthreads(struct rcu_state *rsp);
2013-04-13 03:19:10 +04:00
static void rcu_kick_nohz_cpu(int cpu);
2013-01-08 01:37:42 +04:00
static bool init_nocb_callback_list(struct rcu_data *rdp);
2013-06-22 00:00:57 +04:00
static void rcu_sysidle_enter(struct rcu_dynticks *rdtp, int irq);
static void rcu_sysidle_exit(struct rcu_dynticks *rdtp, int irq);
nohz_full: Add rcu_dyntick data for scalable detection of all-idle state
This commit adds fields to the rcu_dynticks structure that are used to
detect idle CPUs. These new fields differ from the existing ones in
that the existing ones consider a CPU executing in user mode to be idle,
whereas the new ones consider CPUs executing in user mode to be busy.
The handling of these new fields is otherwise quite similar to that for
the existing fields. This commit also adds the initialization required
for these fields.
So, why is usermode execution treated differently, with RCU considering
it a quiescent state equivalent to idle, while in contrast the new
full-system idle state detection considers usermode execution to be
non-idle?
It turns out that although one of RCU's quiescent states is usermode
execution, it is not a full-system idle state. This is because the
purpose of the full-system idle state is not RCU, but rather determining
when accurate timekeeping can safely be disabled. Whenever accurate
timekeeping is required in a CONFIG_NO_HZ_FULL kernel, at least one
CPU must keep the scheduling-clock tick going. If even one CPU is
executing in user mode, accurate timekeeping is required, particularly for
architectures where gettimeofday() and friends do not enter the kernel.
Only when all CPUs are really and truly idle can accurate timekeeping be
disabled, allowing all CPUs to turn off the scheduling clock interrupt,
thus greatly improving energy efficiency.
This naturally raises the question "Why is this code in RCU rather than in
timekeeping?", and the answer is that RCU has the data and infrastructure
to efficiently make this determination.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
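As a sketch only (the field names here are illustrative of the commit
description, not the exact layout it adds), the new per-CPU state and
its initialization might look like:
struct rcu_dynticks {
	/* ... existing fields, for which usermode counts as idle ... */
	long long dynticks_idle_nesting;	/* irq/process nesting, sysidle view. */
	atomic_t dynticks_idle;			/* Even value == truly idle. */
	unsigned long dynticks_idle_jiffies;	/* Time of last idle transition. */
};

static void rcu_sysidle_init_percpu_data(struct rcu_dynticks *rdtp)
{
	rdtp->dynticks_idle_nesting = DYNTICK_TASK_NEST_VALUE;
	atomic_set(&rdtp->dynticks_idle, 1);	/* Odd: CPU starts non-idle. */
}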
2013-06-21 23:34:33 +04:00
static void rcu_sysidle_init_percpu_data(struct rcu_dynticks *rdtp);
2009-09-23 20:50:43 +04:00
2010-01-15 03:10:58 +03:00
#endif /* #ifndef RCU_TREE_NONCORE */
2012-08-20 08:35:53 +04:00
#ifdef CONFIG_RCU_TRACE
#ifdef CONFIG_RCU_NOCB_CPU
/* Sum up queue lengths for tracing. */
static inline void rcu_nocb_q_lengths(struct rcu_data *rdp, long *ql, long *qll)
{
	*ql = atomic_long_read(&rdp->nocb_q_count) + rdp->nocb_p_count;
	*qll = atomic_long_read(&rdp->nocb_q_count_lazy) + rdp->nocb_p_count_lazy;
}
#else /* #ifdef CONFIG_RCU_NOCB_CPU */
static inline void rcu_nocb_q_lengths(struct rcu_data *rdp, long *ql, long *qll)
{
	*ql = 0;
	*qll = 0;
}
#endif /* #else #ifdef CONFIG_RCU_NOCB_CPU */
#endif /* #ifdef CONFIG_RCU_TRACE */
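For illustration only, a tracing-side caller might consume these sums
as follows (trace_rcu_nocb_queue() is a hypothetical tracepoint, not
one defined in this file):

	long ql, qll;

	rcu_nocb_q_lengths(rdp, &ql, &qll);
	trace_rcu_nocb_queue(rdp->cpu, ql, qll);	/* Hypothetical tracepoint. */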