/*
 * Generic helpers for smp ipi calls
 *
 * (C) Jens Axboe <jens.axboe@oracle.com> 2008
 */
#include <linux/rcupdate.h>
#include <linux/rculist.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/percpu.h>
#include <linux/init.h>
#include <linux/gfp.h>
#include <linux/smp.h>
#include <linux/cpu.h>

static struct {
	struct list_head	queue;
	raw_spinlock_t		lock;
} call_function __cacheline_aligned_in_smp =
	{
		.queue		= LIST_HEAD_INIT(call_function.queue),
		.lock		= __RAW_SPIN_LOCK_UNLOCKED(call_function.lock),
	};

enum {
	CSD_FLAG_LOCK		= 0x01,
};

struct call_function_data {
	struct call_single_data	csd;
	atomic_t		refs;
	cpumask_var_t		cpumask;
};

static DEFINE_PER_CPU_SHARED_ALIGNED(struct call_function_data, cfd_data);

struct call_single_queue {
	struct list_head	list;
	raw_spinlock_t		lock;
};

static DEFINE_PER_CPU_SHARED_ALIGNED(struct call_single_queue, call_single_queue);

static int
hotplug_cfd(struct notifier_block *nfb, unsigned long action, void *hcpu)
{
	long cpu = (long)hcpu;
	struct call_function_data *cfd = &per_cpu(cfd_data, cpu);

	switch (action) {
	case CPU_UP_PREPARE:
	case CPU_UP_PREPARE_FROZEN:
		if (!zalloc_cpumask_var_node(&cfd->cpumask, GFP_KERNEL,
				cpu_to_node(cpu)))
			return NOTIFY_BAD;
		break;

#ifdef CONFIG_HOTPLUG_CPU
	case CPU_UP_CANCELED:
	case CPU_UP_CANCELED_FROZEN:
	case CPU_DEAD:
	case CPU_DEAD_FROZEN:
		free_cpumask_var(cfd->cpumask);
		break;
#endif
	};

	return NOTIFY_OK;
}

static struct notifier_block __cpuinitdata hotplug_cfd_notifier = {
	.notifier_call		= hotplug_cfd,
};

static int __cpuinit init_call_single_data(void)
{
	void *cpu = (void *)(long)smp_processor_id();
	int i;

	for_each_possible_cpu(i) {
		struct call_single_queue *q = &per_cpu(call_single_queue, i);

		raw_spin_lock_init(&q->lock);
		INIT_LIST_HEAD(&q->list);
	}

	hotplug_cfd(&hotplug_cfd_notifier, CPU_UP_PREPARE, cpu);
	register_cpu_notifier(&hotplug_cfd_notifier);

	return 0;
}
early_initcall(init_call_single_data);

/*
 * csd_lock/csd_unlock used to serialize access to per-cpu csd resources
 *
 * For non-synchronous ipi calls the csd can still be in use by the
 * previous function call. For multi-cpu calls its even more interesting
 * as we'll have to ensure no other cpu is observing our csd.
 */
static void csd_lock_wait(struct call_single_data *data)
{
	while (data->flags & CSD_FLAG_LOCK)
		cpu_relax();
}

static void csd_lock(struct call_single_data *data)
{
	csd_lock_wait(data);
	data->flags = CSD_FLAG_LOCK;

	/*
	 * prevent CPU from reordering the above assignment
	 * to ->flags with any subsequent assignments to other
	 * fields of the specified call_single_data structure:
	 */
	smp_mb();
}

static void csd_unlock(struct call_single_data *data)
{
	WARN_ON(!(data->flags & CSD_FLAG_LOCK));

	/*
	 * ensure we're all done before releasing data:
	 */
	smp_mb();

	data->flags &= ~CSD_FLAG_LOCK;
}
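
/*
 * Illustrative sketch (not part of this file's code): how the helpers above
 * are used by the callers further down. A caller locks a csd, fills it in
 * and queues it; the receiving cpu clears CSD_FLAG_LOCK via csd_unlock()
 * once the function has run, and a waiting caller simply spins in
 * csd_lock_wait() until that happens:
 *
 *	csd_lock(data);
 *	data->func = func;
 *	data->info = info;
 *	generic_exec_single(cpu, data, wait);
 */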

/*
 * Insert a previously allocated call_single_data element
 * for execution on the given CPU. data must already have
 * ->func, ->info, and ->flags set.
 */
static
void generic_exec_single(int cpu, struct call_single_data *data, int wait)
{
	struct call_single_queue *dst = &per_cpu(call_single_queue, cpu);
	unsigned long flags;
	int ipi;

	raw_spin_lock_irqsave(&dst->lock, flags);
	ipi = list_empty(&dst->list);
	list_add_tail(&data->list, &dst->list);
	raw_spin_unlock_irqrestore(&dst->lock, flags);

	/*
	 * The list addition should be visible before sending the IPI
	 * handler locks the list to pull the entry off it because of
	 * normal cache coherency rules implied by spinlocks.
	 *
	 * If IPIs can go out of order to the cache coherency protocol
	 * in an architecture, sufficient synchronisation should be added
	 * to arch code to make it appear to obey cache coherency WRT
	 * locking and barrier primitives. Generic code isn't really
	 * equipped to do the right thing...
	 */
	if (ipi)
		arch_send_call_function_single_ipi(cpu);

	if (wait)
		csd_lock_wait(data);
}

/*
 * Invoked by arch to handle an IPI for call function. Must be called with
 * interrupts disabled.
 */
void generic_smp_call_function_interrupt(void)
{
	struct call_function_data *data;
	int cpu = smp_processor_id();

	/*
	 * Shouldn't receive this interrupt on a cpu that is not yet online.
	 */
	WARN_ON_ONCE(!cpu_online(cpu));

	/*
	 * Ensure entry is visible on call_function_queue after we have
	 * entered the IPI. See comment in smp_call_function_many.
	 * If we don't have this, then we may miss an entry on the list
	 * and never get another IPI to process it.
	 */
	smp_mb();

	/*
	 * It's ok to use list_for_each_rcu() here even though we may
	 * delete 'pos', since list_del_rcu() doesn't clear ->next
	 */
	list_for_each_entry_rcu(data, &call_function.queue, csd.list) {
		int refs;

		if (!cpumask_test_and_clear_cpu(cpu, data->cpumask))
			continue;

		data->csd.func(data->csd.info);

		refs = atomic_dec_return(&data->refs);
		WARN_ON(refs < 0);
		if (!refs) {
			raw_spin_lock(&call_function.lock);
			list_del_rcu(&data->csd.list);
			raw_spin_unlock(&call_function.lock);
		}

		if (refs)
			continue;

		csd_unlock(&data->csd);
	}
}
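
/*
 * Illustrative sketch (hypothetical, not lifted from any particular arch):
 * architectures wire their "call function" IPI vector to the handler above,
 * with interrupts disabled on entry, roughly like:
 *
 *	void handle_call_function_ipi(void)
 *	{
 *		irq_enter();
 *		generic_smp_call_function_interrupt();
 *		irq_exit();
 *	}
 */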

/*
 * Invoked by arch to handle an IPI for call function single. Must be
 * called from the arch with interrupts disabled.
 */
void generic_smp_call_function_single_interrupt(void)
{
	struct call_single_queue *q = &__get_cpu_var(call_single_queue);
	unsigned int data_flags;
	LIST_HEAD(list);

	/*
	 * Shouldn't receive this interrupt on a cpu that is not yet online.
	 */
	WARN_ON_ONCE(!cpu_online(smp_processor_id()));

	raw_spin_lock(&q->lock);
	list_replace_init(&q->list, &list);
	raw_spin_unlock(&q->lock);

	while (!list_empty(&list)) {
		struct call_single_data *data;

		data = list_entry(list.next, struct call_single_data, list);
		list_del(&data->list);

		/*
		 * 'data' can be invalid after this call if flags == 0
		 * (when called through generic_exec_single()),
		 * so save them away before making the call:
		 */
		data_flags = data->flags;

		data->func(data->info);

		/*
		 * Unlocked CSDs are valid through generic_exec_single():
		 */
		if (data_flags & CSD_FLAG_LOCK)
			csd_unlock(data);
	}
}

static DEFINE_PER_CPU_SHARED_ALIGNED(struct call_single_data, csd_data);

/*
 * smp_call_function_single - Run a function on a specific CPU
 * @func: The function to run. This must be fast and non-blocking.
 * @info: An arbitrary pointer to pass to the function.
 * @wait: If true, wait until function has completed on other CPUs.
 *
 * Returns 0 on success, else a negative status code.
 */
int smp_call_function_single(int cpu, void (*func) (void *info), void *info,
			     int wait)
{
	struct call_single_data d = {
		.flags = 0,
	};
	unsigned long flags;
	int this_cpu;
	int err = 0;

	/*
	 * prevent preemption and reschedule on another processor,
	 * as well as CPU removal
	 */
	this_cpu = get_cpu();

	/*
	 * Can deadlock when called with interrupts disabled.
	 * We allow cpu's that are not yet online though, as no one else can
	 * send smp call function interrupt to this cpu and as such deadlocks
	 * can't happen.
	 */
	WARN_ON_ONCE(cpu_online(this_cpu) && irqs_disabled()
		     && !oops_in_progress);

	if (cpu == this_cpu) {
		local_irq_save(flags);
		func(info);
		local_irq_restore(flags);
	} else {
		if ((unsigned)cpu < nr_cpu_ids && cpu_online(cpu)) {
			struct call_single_data *data = &d;

			if (!wait)
				data = &__get_cpu_var(csd_data);

			csd_lock(data);

			data->func = func;
			data->info = info;
			generic_exec_single(cpu, data, wait);
		} else {
			err = -ENXIO;	/* CPU not online */
		}
	}

	put_cpu();

	return err;
}
EXPORT_SYMBOL(smp_call_function_single);
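
/*
 * Example usage (illustrative only; remote_tick and count are made-up
 * names): run a fast, non-blocking callback on one specific cpu and wait
 * for it to finish.
 *
 *	static void remote_tick(void *info)
 *	{
 *		unsigned long *count = info;
 *
 *		(*count)++;
 *	}
 *
 *	unsigned long count = 0;
 *	int err = smp_call_function_single(cpu, remote_tick, &count, 1);
 *
 * err is -ENXIO if the requested cpu was not online; the callback runs in
 * interrupt context on the target cpu (or with irqs disabled locally if
 * cpu happens to be the current one).
 */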

/*
 * smp_call_function_any - Run a function on any of the given cpus
 * @mask: The mask of cpus it can run on.
 * @func: The function to run. This must be fast and non-blocking.
 * @info: An arbitrary pointer to pass to the function.
 * @wait: If true, wait until function has completed.
 *
 * Returns 0 on success, else a negative status code (if no cpus were online).
 * Note that @wait will be implicitly turned on in case of allocation failures,
 * since we fall back to on-stack allocation.
 *
 * Selection preference:
 *	1) current cpu if in @mask
 *	2) any cpu of current node if in @mask
 *	3) any other online cpu in @mask
 */
int smp_call_function_any(const struct cpumask *mask,
			  void (*func)(void *info), void *info, int wait)
{
	unsigned int cpu;
	const struct cpumask *nodemask;
	int ret;

	/* Try for same CPU (cheapest) */
	cpu = get_cpu();
	if (cpumask_test_cpu(cpu, mask))
		goto call;

	/* Try for same node. */
	nodemask = cpumask_of_node(cpu_to_node(cpu));
	for (cpu = cpumask_first_and(nodemask, mask); cpu < nr_cpu_ids;
	     cpu = cpumask_next_and(cpu, nodemask, mask)) {
		if (cpu_online(cpu))
			goto call;
	}

	/* Any online will do: smp_call_function_single handles nr_cpu_ids. */
	cpu = cpumask_any_and(mask, cpu_online_mask);
call:
	ret = smp_call_function_single(cpu, func, info, wait);
	put_cpu();
	return ret;
}
EXPORT_SYMBOL_GPL(smp_call_function_any);
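
/*
 * Example usage (illustrative; the callback and device pointer are
 * hypothetical): prefer a cheap cpu, ideally the current one, else one on
 * the device's node, to run the callback:
 *
 *	ret = smp_call_function_any(cpumask_of_node(dev_to_node(dev)),
 *				    drain_device_queue, dev, 1);
 */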

/**
 * __smp_call_function_single(): Run a function on another CPU
 * @cpu: The CPU to run on.
 * @data: Pre-allocated and setup data structure
 *
 * Like smp_call_function_single(), but allow caller to pass in a
 * pre-allocated data structure. Useful for embedding @data inside
 * other structures, for instance.
 */
void __smp_call_function_single(int cpu, struct call_single_data *data,
				int wait)
{
	csd_lock(data);

	/*
	 * Can deadlock when called with interrupts disabled.
	 * We allow cpu's that are not yet online though, as no one else can
	 * send smp call function interrupt to this cpu and as such deadlocks
	 * can't happen.
	 */
	WARN_ON_ONCE(cpu_online(smp_processor_id()) && wait && irqs_disabled()
		     && !oops_in_progress);

	generic_exec_single(cpu, data, wait);
}
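
/*
 * Example usage (illustrative only): because the caller supplies the
 * call_single_data, it can be embedded in a longer-lived object; the
 * struct and callback names below are made up:
 *
 *	struct remote_kick {
 *		struct call_single_data csd;
 *		...
 *	};
 *
 *	kick->csd.func = process_remote_kick;
 *	kick->csd.info = kick;
 *	__smp_call_function_single(cpu, &kick->csd, 0);
 *
 * With wait == 0 the csd must remain valid until the remote cpu has run the
 * callback and released it via csd_unlock().
 */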

/**
 * smp_call_function_many(): Run a function on a set of other CPUs.
 * @mask: The set of cpus to run on (only runs on online subset).
 * @func: The function to run. This must be fast and non-blocking.
 * @info: An arbitrary pointer to pass to the function.
 * @wait: If true, wait (atomically) until function has completed
 *        on other CPUs.
 *
 * If @wait is true, then returns once @func has returned.
 *
 * You must not call this function with disabled interrupts or from a
 * hardware interrupt handler or from a bottom half handler. Preemption
 * must be disabled when calling this function.
 */
void smp_call_function_many(const struct cpumask *mask,
			    void (*func)(void *), void *info, bool wait)
{
	struct call_function_data *data;
	unsigned long flags;
	int cpu, next_cpu, this_cpu = smp_processor_id();

	/*
	 * Can deadlock when called with interrupts disabled.
	 * We allow cpu's that are not yet online though, as no one else can
	 * send smp call function interrupt to this cpu and as such deadlocks
	 * can't happen.
	 */
	WARN_ON_ONCE(cpu_online(this_cpu) && irqs_disabled()
		     && !oops_in_progress);

	/* So, what's a CPU they want? Ignoring this one. */
	cpu = cpumask_first_and(mask, cpu_online_mask);
	if (cpu == this_cpu)
		cpu = cpumask_next_and(cpu, mask, cpu_online_mask);

	/* No online cpus?  We're done. */
	if (cpu >= nr_cpu_ids)
		return;

	/* Do we have another CPU which isn't us? */
	next_cpu = cpumask_next_and(cpu, mask, cpu_online_mask);
	if (next_cpu == this_cpu)
		next_cpu = cpumask_next_and(next_cpu, mask, cpu_online_mask);

	/* Fastpath: do that cpu by itself. */
	if (next_cpu >= nr_cpu_ids) {
		smp_call_function_single(cpu, func, info, wait);
		return;
	}

	data = &__get_cpu_var(cfd_data);
	csd_lock(&data->csd);

	data->csd.func = func;
	data->csd.info = info;
	cpumask_and(data->cpumask, mask, cpu_online_mask);
	cpumask_clear_cpu(this_cpu, data->cpumask);
	atomic_set(&data->refs, cpumask_weight(data->cpumask));

	raw_spin_lock_irqsave(&call_function.lock, flags);
	/*
	 * Place entry at the _HEAD_ of the list, so that any cpu still
	 * observing the entry in generic_smp_call_function_interrupt()
	 * will not miss any other list entries:
	 */
	list_add_rcu(&data->csd.list, &call_function.queue);
	raw_spin_unlock_irqrestore(&call_function.lock, flags);

	/*
	 * Make the list addition visible before sending the ipi.
	 * (IPIs must obey or appear to obey normal Linux cache
	 * coherency rules -- see comment in generic_exec_single).
	 */
	smp_mb();

	/* Send a message to all CPUs in the map */
	arch_send_call_function_ipi_mask(data->cpumask);

	/* Optionally wait for the CPUs to complete */
	if (wait)
		csd_lock_wait(&data->csd);
}
EXPORT_SYMBOL(smp_call_function_many);
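
/*
 * Example usage (illustrative; mask and callback names are made up):
 * invalidate a per-cpu cache on every online cpu in a mask except the
 * calling one, without waiting for completion:
 *
 *	preempt_disable();
 *	smp_call_function_many(target_mask, drop_pcpu_cache, NULL, false);
 *	preempt_enable();
 *
 * The current cpu is always skipped, so the caller must invoke the function
 * locally itself if it needs the effect there too.
 */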

/**
 * smp_call_function(): Run a function on all other CPUs.
 * @func: The function to run. This must be fast and non-blocking.
 * @info: An arbitrary pointer to pass to the function.
 * @wait: If true, wait (atomically) until function has completed
 *        on other CPUs.
 *
 * Returns 0.
 *
 * If @wait is true, then returns once @func has returned; otherwise
 * it returns just before the target cpu calls @func.
 *
 * You must not call this function with disabled interrupts or from a
 * hardware interrupt handler or from a bottom half handler.
 */
int smp_call_function(void (*func)(void *), void *info, int wait)
{
	preempt_disable();
	smp_call_function_many(cpu_online_mask, func, info, wait);
	preempt_enable();

	return 0;
}
EXPORT_SYMBOL(smp_call_function);
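
/*
 * Example usage (illustrative): smp_call_function() only hits the *other*
 * cpus, so the common "run this everywhere" pattern pairs it with a local
 * call (do_flush is a made-up callback); on_each_cpu() wraps this sequence
 * with preemption disabled and irqs off around the local call:
 *
 *	preempt_disable();
 *	smp_call_function(do_flush, NULL, 1);
 *	do_flush(NULL);
 *	preempt_enable();
 */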

void ipi_call_lock(void)
{
	raw_spin_lock(&call_function.lock);
}

void ipi_call_unlock(void)
{
	raw_spin_unlock(&call_function.lock);
}

void ipi_call_lock_irq(void)
{
	raw_spin_lock_irq(&call_function.lock);
}

void ipi_call_unlock_irq(void)
{
	raw_spin_unlock_irq(&call_function.lock);
}