/*
 * device_cgroup.c - device cgroup subsystem
 *
 * Copyright 2007 IBM Corp
 */

#include <linux/device_cgroup.h>
#include <linux/cgroup.h>
#include <linux/ctype.h>
#include <linux/list.h>
#include <linux/uaccess.h>
#include <linux/seq_file.h>
#include <linux/slab.h>
#include <linux/rcupdate.h>
#include <linux/mutex.h>
#define ACC_MKNOD 1
#define ACC_READ 2
#define ACC_WRITE 4
#define ACC_MASK (ACC_MKNOD | ACC_READ | ACC_WRITE)

#define DEV_BLOCK 1
#define DEV_CHAR 2
#define DEV_ALL 4  /* this represents all devices */

static DEFINE_MUTEX(devcgroup_mutex);

enum devcg_behavior {
	DEVCG_DEFAULT_NONE,
	DEVCG_DEFAULT_ALLOW,
	DEVCG_DEFAULT_DENY,
};
2008-04-29 12:00:10 +04:00
/*
2012-10-05 04:15:20 +04:00
* exception list locking rules :
2009-04-03 03:57:32 +04:00
* hold devcgroup_mutex for update / read .
2008-10-19 07:28:07 +04:00
* hold rcu_read_lock ( ) for read .
2008-04-29 12:00:10 +04:00
*/
2012-10-05 04:15:20 +04:00
struct dev_exception_item {
2008-04-29 12:00:10 +04:00
u32 major , minor ;
short type ;
short access ;
struct list_head list ;
devcgroup: relax white-list protection down to RCU
Currently this list is protected with a simple spinlock, even for reading
from one. This is OK, but can be better.
Actually I want it to be better very much, since after replacing the
OpenVZ device permissions engine with the cgroup-based one I noticed, that
we set 12 default device permissions for each newly created container (for
/dev/null, full, terminals, ect devices), and people sometimes have up to
20 perms more, so traversing the ~30-40 elements list under a spinlock
doesn't seem very good.
Here's the RCU protection for white-list - dev_whitelist_item-s are added
and removed under the devcg->lock, but are looked up in permissions
checking under the rcu_read_lock.
Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
Acked-by: Serge Hallyn <serue@us.ibm.com>
Cc: Balbir Singh <balbir@in.ibm.com>
Cc: Paul Menage <menage@google.com>
Cc: "Paul E. McKenney" <paulmck@us.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-07-25 12:47:07 +04:00
struct rcu_head rcu ;
2008-04-29 12:00:10 +04:00
} ;
struct dev_cgroup {
struct cgroup_subsys_state css ;
2012-10-05 04:15:20 +04:00
struct list_head exceptions ;
2013-02-15 20:55:45 +04:00
enum devcg_behavior behavior ;
2008-04-29 12:00:10 +04:00
} ;
static inline struct dev_cgroup *css_to_devcgroup(struct cgroup_subsys_state *s)
{
	return container_of(s, struct dev_cgroup, css);
}

static inline struct dev_cgroup *cgroup_to_devcgroup(struct cgroup *cgroup)
{
	return css_to_devcgroup(cgroup_subsys_state(cgroup, devices_subsys_id));
}

static inline struct dev_cgroup *task_devcgroup(struct task_struct *task)
{
	return css_to_devcgroup(task_subsys_state(task, devices_subsys_id));
}

struct cgroup_subsys devices_subsys;

static int devcgroup_can_attach(struct cgroup *new_cgrp,
				struct cgroup_taskset *set)
{
	struct task_struct *task = cgroup_taskset_first(set);

	if (current != task && !capable(CAP_SYS_ADMIN))
		return -EPERM;
	return 0;
}
/*
 * called under devcgroup_mutex
 */
static int dev_exceptions_copy(struct list_head *dest, struct list_head *orig)
{
	struct dev_exception_item *ex, *tmp, *new;

	lockdep_assert_held(&devcgroup_mutex);

	list_for_each_entry(ex, orig, list) {
		new = kmemdup(ex, sizeof(*ex), GFP_KERNEL);
		if (!new)
			goto free_and_exit;
		list_add_tail(&new->list, dest);
	}

	return 0;

free_and_exit:
	list_for_each_entry_safe(ex, tmp, dest, list) {
		list_del(&ex->list);
		kfree(ex);
	}
	return -ENOMEM;
}
/*
 * called under devcgroup_mutex
 */
static int dev_exception_add(struct dev_cgroup *dev_cgroup,
			     struct dev_exception_item *ex)
{
	struct dev_exception_item *excopy, *walk;

	lockdep_assert_held(&devcgroup_mutex);

	excopy = kmemdup(ex, sizeof(*ex), GFP_KERNEL);
	if (!excopy)
		return -ENOMEM;

	list_for_each_entry(walk, &dev_cgroup->exceptions, list) {
		if (walk->type != ex->type)
			continue;
		if (walk->major != ex->major)
			continue;
		if (walk->minor != ex->minor)
			continue;

		walk->access |= ex->access;
		kfree(excopy);
		excopy = NULL;
	}

	if (excopy != NULL)
		list_add_tail_rcu(&excopy->list, &dev_cgroup->exceptions);
	return 0;
}
/*
 * called under devcgroup_mutex
 */
static void dev_exception_rm(struct dev_cgroup *dev_cgroup,
			     struct dev_exception_item *ex)
{
	struct dev_exception_item *walk, *tmp;

	lockdep_assert_held(&devcgroup_mutex);

	list_for_each_entry_safe(walk, tmp, &dev_cgroup->exceptions, list) {
		if (walk->type != ex->type)
			continue;
		if (walk->major != ex->major)
			continue;
		if (walk->minor != ex->minor)
			continue;

		walk->access &= ~ex->access;
		if (!walk->access) {
			list_del_rcu(&walk->list);
			kfree_rcu(walk, rcu);
		}
	}
}
static void __dev_exception_clean(struct dev_cgroup *dev_cgroup)
{
	struct dev_exception_item *ex, *tmp;

	list_for_each_entry_safe(ex, tmp, &dev_cgroup->exceptions, list) {
		list_del_rcu(&ex->list);
		kfree_rcu(ex, rcu);
	}
}

/**
 * dev_exception_clean - frees all entries of the exception list
 * @dev_cgroup: dev_cgroup with the exception list to be cleaned
 *
 * called under devcgroup_mutex
 */
static void dev_exception_clean(struct dev_cgroup *dev_cgroup)
{
	lockdep_assert_held(&devcgroup_mutex);

	__dev_exception_clean(dev_cgroup);
}

static inline bool is_devcg_online(const struct dev_cgroup *devcg)
{
	return (devcg->behavior != DEVCG_DEFAULT_NONE);
}
/**
 * devcgroup_online - initializes devcgroup's behavior and exceptions based on
 *		      parent's
 * @cgroup: cgroup getting online
 * returns 0 in case of success, error code otherwise
 */
static int devcgroup_online(struct cgroup *cgroup)
{
	struct dev_cgroup *dev_cgroup, *parent_dev_cgroup = NULL;
	int ret = 0;

	mutex_lock(&devcgroup_mutex);
	dev_cgroup = cgroup_to_devcgroup(cgroup);
	if (cgroup->parent)
		parent_dev_cgroup = cgroup_to_devcgroup(cgroup->parent);
	if (parent_dev_cgroup == NULL)
		dev_cgroup->behavior = DEVCG_DEFAULT_ALLOW;
	else {
		ret = dev_exceptions_copy(&dev_cgroup->exceptions,
					  &parent_dev_cgroup->exceptions);
		if (!ret)
			dev_cgroup->behavior = parent_dev_cgroup->behavior;
	}
	mutex_unlock(&devcgroup_mutex);

	return ret;
}

static void devcgroup_offline(struct cgroup *cgroup)
{
	struct dev_cgroup *dev_cgroup = cgroup_to_devcgroup(cgroup);

	mutex_lock(&devcgroup_mutex);
	dev_cgroup->behavior = DEVCG_DEFAULT_NONE;
	mutex_unlock(&devcgroup_mutex);
}
/*
 * called from kernel/cgroup.c with cgroup_lock() held.
 */
static struct cgroup_subsys_state *devcgroup_css_alloc(struct cgroup *cgroup)
{
	struct dev_cgroup *dev_cgroup;

	dev_cgroup = kzalloc(sizeof(*dev_cgroup), GFP_KERNEL);
	if (!dev_cgroup)
		return ERR_PTR(-ENOMEM);
	INIT_LIST_HEAD(&dev_cgroup->exceptions);
	dev_cgroup->behavior = DEVCG_DEFAULT_NONE;

	return &dev_cgroup->css;
}

static void devcgroup_css_free(struct cgroup *cgroup)
{
	struct dev_cgroup *dev_cgroup;

	dev_cgroup = cgroup_to_devcgroup(cgroup);
	__dev_exception_clean(dev_cgroup);
	kfree(dev_cgroup);
}
#define DEVCG_ALLOW 1
#define DEVCG_DENY 2
#define DEVCG_LIST 3

#define MAJMINLEN 13
#define ACCLEN 4

static void set_access(char *acc, short access)
{
	int idx = 0;

	memset(acc, 0, ACCLEN);
	if (access & ACC_READ)
		acc[idx++] = 'r';
	if (access & ACC_WRITE)
		acc[idx++] = 'w';
	if (access & ACC_MKNOD)
		acc[idx++] = 'm';
}

static char type_to_char(short type)
{
	if (type == DEV_ALL)
		return 'a';
	if (type == DEV_CHAR)
		return 'c';
	if (type == DEV_BLOCK)
		return 'b';
	return 'X';
}

static void set_majmin(char *str, unsigned m)
{
	if (m == ~0)
		strcpy(str, "*");
	else
		sprintf(str, "%u", m);
}
static int devcgroup_seq_read(struct cgroup *cgroup, struct cftype *cft,
			      struct seq_file *m)
{
	struct dev_cgroup *devcgroup = cgroup_to_devcgroup(cgroup);
	struct dev_exception_item *ex;
	char maj[MAJMINLEN], min[MAJMINLEN], acc[ACCLEN];

	rcu_read_lock();
	/*
	 * To preserve compatibility:
	 * - Only show the "all devices" entry when the default policy is
	 *   to allow
	 * - List the exceptions in case the default policy is to deny
	 * This way, the file remains as a "whitelist of devices"
	 */
	if (devcgroup->behavior == DEVCG_DEFAULT_ALLOW) {
		set_access(acc, ACC_MASK);
		set_majmin(maj, ~0);
		set_majmin(min, ~0);
		seq_printf(m, "%c %s:%s %s\n", type_to_char(DEV_ALL),
			   maj, min, acc);
	} else {
		list_for_each_entry_rcu(ex, &devcgroup->exceptions, list) {
			set_access(acc, ex->access);
			set_majmin(maj, ex->major);
			set_majmin(min, ex->minor);
			seq_printf(m, "%c %s:%s %s\n", type_to_char(ex->type),
				   maj, min, acc);
		}
	}
	rcu_read_unlock();

	return 0;
}
/**
 * may_access - verifies if a new exception is part of what is allowed
 *		by a dev cgroup based on the default policy +
 *		exceptions. This is used to make sure a child cgroup
 *		won't have more privileges than its parent or to
 *		verify if a certain access is allowed.
 * @dev_cgroup: dev cgroup to be tested against
 * @refex: new exception
 * @behavior: behavior of the exception
 */
static bool may_access(struct dev_cgroup *dev_cgroup,
		       struct dev_exception_item *refex,
		       enum devcg_behavior behavior)
{
	struct dev_exception_item *ex;
	bool match = false;

	rcu_lockdep_assert(rcu_read_lock_held() ||
			   lockdep_is_held(&devcgroup_mutex),
			   "device_cgroup::may_access() called without proper synchronization");

	list_for_each_entry_rcu(ex, &dev_cgroup->exceptions, list) {
		if ((refex->type & DEV_BLOCK) && !(ex->type & DEV_BLOCK))
			continue;
		if ((refex->type & DEV_CHAR) && !(ex->type & DEV_CHAR))
			continue;
		if (ex->major != ~0 && ex->major != refex->major)
			continue;
		if (ex->minor != ~0 && ex->minor != refex->minor)
			continue;
		if (refex->access & (~ex->access))
			continue;
		match = true;
		break;
	}

	if (dev_cgroup->behavior == DEVCG_DEFAULT_ALLOW) {
		if (behavior == DEVCG_DEFAULT_ALLOW) {
			/* the exception will deny access to certain devices */
			return true;
		} else {
			/* the exception will allow access to certain devices */
			if (match)
				/*
				 * a new exception allowing access shouldn't
				 * match a parent's exception
				 */
				return false;
			return true;
		}
	} else {
		/* only behavior == DEVCG_DEFAULT_DENY allowed here */
		if (match)
			/* parent has an exception that matches the proposed */
			return true;
		else
			return false;
	}
	return false;
}
/*
 * parent_has_perm:
 * when adding a new allow rule to a device exception list, the rule
 * must be allowed in the parent device
 */
static int parent_has_perm(struct dev_cgroup *childcg,
			   struct dev_exception_item *ex)
{
	struct cgroup *pcg = childcg->css.cgroup->parent;
	struct dev_cgroup *parent;

	if (!pcg)
		return 1;
	parent = cgroup_to_devcgroup(pcg);
	return may_access(parent, ex, childcg->behavior);
}

/**
 * may_allow_all - checks if it's possible to change the behavior to
 *		   allow based on parent's rules.
 * @parent: device cgroup's parent
 * returns: != 0 in case it's allowed, 0 otherwise
 */
static inline int may_allow_all(struct dev_cgroup *parent)
{
	if (!parent)
		return 1;
	return parent->behavior == DEVCG_DEFAULT_ALLOW;
}
/**
 * revalidate_active_exceptions - walks through the active exception list and
 *				  revalidates the exceptions based on parent's
 *				  behavior and exceptions. The exceptions that
 *				  are no longer valid will be removed.
 *				  Called with devcgroup_mutex held.
 * @devcg: cgroup which exceptions will be checked
 *
 * This is one of the three key functions for hierarchy implementation.
 * This function is responsible for re-evaluating all the cgroup's active
 * exceptions due to a parent's exception change.
 * Refer to Documentation/cgroups/devices.txt for more details.
 */
static void revalidate_active_exceptions(struct dev_cgroup *devcg)
{
	struct dev_exception_item *ex;
	struct list_head *this, *tmp;

	list_for_each_safe(this, tmp, &devcg->exceptions) {
		ex = container_of(this, struct dev_exception_item, list);
		if (!parent_has_perm(devcg, ex))
			dev_exception_rm(devcg, ex);
	}
}

/**
 * propagate_exception - propagates a new exception to the children
 * @devcg_root: device cgroup that added a new exception
 * @ex: new exception to be propagated
 *
 * returns: 0 in case of success, != 0 in case of error
 */
static int propagate_exception(struct dev_cgroup *devcg_root,
			       struct dev_exception_item *ex)
{
	struct cgroup *root = devcg_root->css.cgroup, *pos;
	int rc = 0;

	rcu_read_lock();

	cgroup_for_each_descendant_pre(pos, root) {
		struct dev_cgroup *devcg = cgroup_to_devcgroup(pos);

		/*
		 * Because devcgroup_mutex is held, no devcg will become
		 * online or offline during the tree walk (see on/offline
		 * methods), and online ones are safe to access outside RCU
		 * read lock without bumping refcnt.
		 */
		if (!is_devcg_online(devcg))
			continue;

		rcu_read_unlock();

		/*
		 * in case both root's and devcg's behavior are allow, a new
		 * restriction means adding to the exception list
		 */
		if (devcg_root->behavior == DEVCG_DEFAULT_ALLOW &&
		    devcg->behavior == DEVCG_DEFAULT_ALLOW) {
			rc = dev_exception_add(devcg, ex);
			if (rc)
				break;
		} else {
			/*
			 * in the other possible cases:
			 * root's behavior: allow, devcg's: deny
			 * root's behavior: deny, devcg's: deny
			 * the exception will be removed
			 */
			dev_exception_rm(devcg, ex);
		}
		revalidate_active_exceptions(devcg);

		rcu_read_lock();
	}

	rcu_read_unlock();
	return rc;
}

static inline bool has_children(struct dev_cgroup *devcgroup)
{
	struct cgroup *cgrp = devcgroup->css.cgroup;

	return !list_empty(&cgrp->children);
}
/*
 * Modify the exception list using allow/deny rules.
 * CAP_SYS_ADMIN is needed for this.  It's at least separate from CAP_MKNOD
 * so we can give a container CAP_MKNOD to let it create devices but not
 * modify the exception list.
 * It seems likely we'll want to add a CAP_CONTAINER capability to allow
 * us to also grant CAP_SYS_ADMIN to containers without giving away the
 * device exception list controls, but for now we'll stick with CAP_SYS_ADMIN
 *
 * Taking rules away is always allowed (given CAP_SYS_ADMIN).  Granting
 * new access is only allowed if you're in the top-level cgroup, or your
 * parent cgroup has the access you're asking for.
 */
static int devcgroup_update_access(struct dev_cgroup *devcgroup,
				   int filetype, const char *buffer)
{
	const char *b;
	char temp[12];		/* 11 + 1 characters needed for a u32 */
	int count, rc = 0;
	struct dev_exception_item ex;
	struct cgroup *p = devcgroup->css.cgroup;
	struct dev_cgroup *parent = NULL;

	if (!capable(CAP_SYS_ADMIN))
		return -EPERM;

	if (p->parent)
		parent = cgroup_to_devcgroup(p->parent);

	memset(&ex, 0, sizeof(ex));
	b = buffer;

	switch (*b) {
	case 'a':
		switch (filetype) {
		case DEVCG_ALLOW:
			if (has_children(devcgroup))
				return -EINVAL;

			if (!may_allow_all(parent))
				return -EPERM;
			dev_exception_clean(devcgroup);
			devcgroup->behavior = DEVCG_DEFAULT_ALLOW;
			if (!parent)
				break;

			rc = dev_exceptions_copy(&devcgroup->exceptions,
						 &parent->exceptions);
			if (rc)
				return rc;
			break;
		case DEVCG_DENY:
			if (has_children(devcgroup))
				return -EINVAL;

			dev_exception_clean(devcgroup);
			devcgroup->behavior = DEVCG_DEFAULT_DENY;
			break;
		default:
			return -EINVAL;
		}
		return 0;
	case 'b':
		ex.type = DEV_BLOCK;
		break;
	case 'c':
		ex.type = DEV_CHAR;
		break;
	default:
		return -EINVAL;
	}
	b++;
	if (!isspace(*b))
		return -EINVAL;
	b++;
	if (*b == '*') {
		ex.major = ~0;
		b++;
	} else if (isdigit(*b)) {
		memset(temp, 0, sizeof(temp));
		for (count = 0; count < sizeof(temp) - 1; count++) {
			temp[count] = *b;
			b++;
			if (!isdigit(*b))
				break;
		}
		rc = kstrtou32(temp, 10, &ex.major);
		if (rc)
			return -EINVAL;
	} else {
		return -EINVAL;
	}
	if (*b != ':')
		return -EINVAL;
	b++;

	/* read minor */
	if (*b == '*') {
		ex.minor = ~0;
		b++;
	} else if (isdigit(*b)) {
		memset(temp, 0, sizeof(temp));
		for (count = 0; count < sizeof(temp) - 1; count++) {
			temp[count] = *b;
			b++;
			if (!isdigit(*b))
				break;
		}
		rc = kstrtou32(temp, 10, &ex.minor);
		if (rc)
			return -EINVAL;
	} else {
		return -EINVAL;
	}
	if (!isspace(*b))
		return -EINVAL;
	for (b++, count = 0; count < 3; count++, b++) {
		switch (*b) {
		case 'r':
			ex.access |= ACC_READ;
			break;
		case 'w':
			ex.access |= ACC_WRITE;
			break;
		case 'm':
			ex.access |= ACC_MKNOD;
			break;
		case '\n':
		case '\0':
			count = 3;
			break;
		default:
			return -EINVAL;
		}
	}
	switch (filetype) {
	case DEVCG_ALLOW:
		if (!parent_has_perm(devcgroup, &ex))
			return -EPERM;
		/*
		 * If the default policy is to allow by default, try to remove
		 * a matching exception instead.  And be silent about it: we
		 * don't want to break compatibility
		 */
		if (devcgroup->behavior == DEVCG_DEFAULT_ALLOW) {
			dev_exception_rm(devcgroup, &ex);
			return 0;
		}
		rc = dev_exception_add(devcgroup, &ex);
		break;
	case DEVCG_DENY:
		/*
		 * If the default policy is to deny by default, try to remove
		 * a matching exception instead.  And be silent about it: we
		 * don't want to break compatibility
		 */
		if (devcgroup->behavior == DEVCG_DEFAULT_DENY)
			dev_exception_rm(devcgroup, &ex);
		else
			rc = dev_exception_add(devcgroup, &ex);

		if (rc)
			break;
		/* we only propagate new restrictions */
		rc = propagate_exception(devcgroup, &ex);
		break;
	default:
		rc = -EINVAL;
	}
	return rc;
}

static int devcgroup_access_write(struct cgroup *cgrp, struct cftype *cft,
				  const char *buffer)
{
	int retval;

	mutex_lock(&devcgroup_mutex);
	retval = devcgroup_update_access(cgroup_to_devcgroup(cgrp),
					 cft->private, buffer);
	mutex_unlock(&devcgroup_mutex);
	return retval;
}

static struct cftype dev_cgroup_files[] = {
	{
		.name = "allow",
		.write_string = devcgroup_access_write,
		.private = DEVCG_ALLOW,
	},
	{
		.name = "deny",
		.write_string = devcgroup_access_write,
		.private = DEVCG_DENY,
	},
	{
		.name = "list",
		.read_seq_string = devcgroup_seq_read,
		.private = DEVCG_LIST,
	},
	{ }	/* terminate */
};

struct cgroup_subsys devices_subsys = {
	.name = "devices",
	.can_attach = devcgroup_can_attach,
	.css_alloc = devcgroup_css_alloc,
	.css_free = devcgroup_css_free,
	.css_online = devcgroup_online,
	.css_offline = devcgroup_offline,
	.subsys_id = devices_subsys_id,
	.base_cftypes = dev_cgroup_files,
};

/**
 * __devcgroup_check_permission - checks if an inode operation is permitted
 * @type: device type
 * @major: device major number
 * @minor: device minor number
 * @access: combination of ACC_WRITE, ACC_READ and ACC_MKNOD
 *
 * returns 0 on success, -EPERM in case the operation is not permitted
 */
static int __devcgroup_check_permission(short type, u32 major, u32 minor,
					short access)
{
	struct dev_cgroup *dev_cgroup;
	struct dev_exception_item ex;
	int rc;

	memset(&ex, 0, sizeof(ex));
	ex.type = type;
	ex.major = major;
	ex.minor = minor;
	ex.access = access;

	rcu_read_lock();
	dev_cgroup = task_devcgroup(current);
	rc = may_access(dev_cgroup, &ex, dev_cgroup->behavior);
	rcu_read_unlock();

	if (!rc)
		return -EPERM;

	return 0;
}

int __devcgroup_inode_permission(struct inode *inode, int mask)
{
	short type, access = 0;

	if (S_ISBLK(inode->i_mode))
		type = DEV_BLOCK;
	if (S_ISCHR(inode->i_mode))
		type = DEV_CHAR;
	if (mask & MAY_WRITE)
		access |= ACC_WRITE;
	if (mask & MAY_READ)
		access |= ACC_READ;

	return __devcgroup_check_permission(type, imajor(inode), iminor(inode),
					    access);
}

int devcgroup_inode_mknod(int mode, dev_t dev)
{
	short type;

	if (!S_ISBLK(mode) && !S_ISCHR(mode))
		return 0;

	if (S_ISBLK(mode))
		type = DEV_BLOCK;
	else
		type = DEV_CHAR;

	return __devcgroup_check_permission(type, MAJOR(dev), MINOR(dev),
					    ACC_MKNOD);
}