/*
 *	RAW sockets for IPv6
 *	Linux INET6 implementation
 *
 *	Authors:
 *	Pedro Roque		<roque@di.fc.ul.pt>
 *
 *	Adapted from linux/net/ipv4/raw.c
 *
 *	Fixes:
 *	Hideaki YOSHIFUJI	:	sin6_scope_id support
 *	YOSHIFUJI,H.@USAGI	:	raw checksum (RFC2292(bis) compliance)
 *	Kazunori MIYAZAWA @USAGI:	change process style to use ip6_append_data
 *
 *	This program is free software; you can redistribute it and/or
 *	modify it under the terms of the GNU General Public License
 *	as published by the Free Software Foundation; either version
 *	2 of the License, or (at your option) any later version.
 */
#include <linux/errno.h>
#include <linux/types.h>
#include <linux/socket.h>
#include <linux/slab.h>
#include <linux/sockios.h>
#include <linux/net.h>
#include <linux/in6.h>
#include <linux/netdevice.h>
#include <linux/if_arp.h>
#include <linux/icmpv6.h>
#include <linux/netfilter.h>
#include <linux/netfilter_ipv6.h>
#include <linux/skbuff.h>
#include <asm/uaccess.h>
#include <asm/ioctls.h>

#include <net/net_namespace.h>
#include <net/ip.h>
#include <net/sock.h>
#include <net/snmp.h>

#include <net/ipv6.h>
#include <net/ndisc.h>
#include <net/protocol.h>
#include <net/ip6_route.h>
#include <net/ip6_checksum.h>
#include <net/addrconf.h>
#include <net/transp_v6.h>
#include <net/udp.h>
#include <net/inet_common.h>
#include <net/tcp_states.h>
#if defined(CONFIG_IPV6_MIP6) || defined(CONFIG_IPV6_MIP6_MODULE)
#include <net/mip6.h>
#endif
#include <linux/mroute6.h>

#include <net/raw.h>
#include <net/rawv6.h>
#include <net/xfrm.h>

#include <linux/proc_fs.h>
#include <linux/seq_file.h>
static struct raw_hashinfo raw_v6_hashinfo = {
	.lock = __RW_LOCK_UNLOCKED(raw_v6_hashinfo.lock),
};
static struct sock *__raw_v6_lookup(struct net *net, struct sock *sk,
		unsigned short num, struct in6_addr *loc_addr,
		struct in6_addr *rmt_addr, int dif)
{
	struct hlist_node *node;
	int is_multicast = ipv6_addr_is_multicast(loc_addr);

	sk_for_each_from(sk, node)
		if (inet_sk(sk)->inet_num == num) {
			struct ipv6_pinfo *np = inet6_sk(sk);

			if (!net_eq(sock_net(sk), net))
				continue;

			if (!ipv6_addr_any(&np->daddr) &&
			    !ipv6_addr_equal(&np->daddr, rmt_addr))
				continue;

			if (sk->sk_bound_dev_if && sk->sk_bound_dev_if != dif)
				continue;

			if (!ipv6_addr_any(&np->rcv_saddr)) {
				if (ipv6_addr_equal(&np->rcv_saddr, loc_addr))
					goto found;
				if (is_multicast &&
				    inet6_mc_check(sk, loc_addr, rmt_addr))
					goto found;
				continue;
			}
			goto found;
		}
	sk = NULL;
found:
	return sk;
}
/*
 *	0 - deliver
 *	1 - block
 */
static __inline__ int icmpv6_filter(struct sock *sk, struct sk_buff *skb)
{
	struct icmp6hdr *icmph;
	struct raw6_sock *rp = raw6_sk(sk);

	if (pskb_may_pull(skb, sizeof(struct icmp6hdr))) {
		__u32 *data = &rp->filter.data[0];
		int bit_nr;

		icmph = (struct icmp6hdr *) skb->data;
		bit_nr = icmph->icmp6_type;

		return (data[bit_nr >> 5] & (1 << (bit_nr & 31))) != 0;
	}
	return 0;
}
#if defined(CONFIG_IPV6_MIP6) || defined(CONFIG_IPV6_MIP6_MODULE)
static int (*mh_filter)(struct sock *sock, struct sk_buff *skb);

int rawv6_mh_filter_register(int (*filter)(struct sock *sock,
					   struct sk_buff *skb))
{
	rcu_assign_pointer(mh_filter, filter);
	return 0;
}
EXPORT_SYMBOL(rawv6_mh_filter_register);

int rawv6_mh_filter_unregister(int (*filter)(struct sock *sock,
					     struct sk_buff *skb))
{
	rcu_assign_pointer(mh_filter, NULL);
	synchronize_rcu();
	return 0;
}
EXPORT_SYMBOL(rawv6_mh_filter_unregister);

#endif
/*
 *	demultiplex raw sockets.
 *	(should consider queueing the skb in the sock receive_queue
 *	without calling rawv6.c)
 *
 *	Caller owns SKB so we must make clones.
 */
static int ipv6_raw_deliver(struct sk_buff *skb, int nexthdr)
{
	struct in6_addr *saddr;
	struct in6_addr *daddr;
	struct sock *sk;
	int delivered = 0;
	__u8 hash;
	struct net *net;

	saddr = &ipv6_hdr(skb)->saddr;
	daddr = saddr + 1;

	hash = nexthdr & (MAX_INET_PROTOS - 1);

	read_lock(&raw_v6_hashinfo.lock);
	sk = sk_head(&raw_v6_hashinfo.ht[hash]);

	if (sk == NULL)
		goto out;

	net = dev_net(skb->dev);
	sk = __raw_v6_lookup(net, sk, nexthdr, daddr, saddr, IP6CB(skb)->iif);

	while (sk) {
		int filtered;

		delivered = 1;
		switch (nexthdr) {
		case IPPROTO_ICMPV6:
			filtered = icmpv6_filter(sk, skb);
			break;

#if defined(CONFIG_IPV6_MIP6) || defined(CONFIG_IPV6_MIP6_MODULE)
		case IPPROTO_MH:
		{
			/* XXX: To validate MH only once for each packet,
			 * this is placed here. It should be done after
			 * checking the xfrm policy, however it is not.
			 * The xfrm policy check is placed in rawv6_rcv()
			 * because it is required for each socket.
			 */
			int (*filter)(struct sock *sock, struct sk_buff *skb);

			filter = rcu_dereference(mh_filter);
			filtered = filter ? filter(sk, skb) : 0;
			break;
		}
#endif
		default:
			filtered = 0;
			break;
		}

		if (filtered < 0)
			break;
		if (filtered == 0) {
			struct sk_buff *clone = skb_clone(skb, GFP_ATOMIC);

			/* Not releasing hash table! */
			if (clone) {
				nf_reset(clone);
				rawv6_rcv(sk, clone);
			}
		}
		sk = __raw_v6_lookup(net, sk_next(sk), nexthdr, daddr, saddr,
				     IP6CB(skb)->iif);
	}
out:
	read_unlock(&raw_v6_hashinfo.lock);
	return delivered;
}
int raw6_local_deliver(struct sk_buff *skb, int nexthdr)
{
	struct sock *raw_sk;

	raw_sk = sk_head(&raw_v6_hashinfo.ht[nexthdr & (MAX_INET_PROTOS - 1)]);
	if (raw_sk && !ipv6_raw_deliver(skb, nexthdr))
		raw_sk = NULL;

	return raw_sk != NULL;
}
/* This cleans up af_inet6 a bit. -DaveM */
static int rawv6_bind(struct sock *sk, struct sockaddr *uaddr, int addr_len)
{
	struct inet_sock *inet = inet_sk(sk);
	struct ipv6_pinfo *np = inet6_sk(sk);
	struct sockaddr_in6 *addr = (struct sockaddr_in6 *) uaddr;
	__be32 v4addr = 0;
	int addr_type;
	int err;

	if (addr_len < SIN6_LEN_RFC2133)
		return -EINVAL;
	addr_type = ipv6_addr_type(&addr->sin6_addr);

	/* Raw sockets are IPv6 only */
	if (addr_type == IPV6_ADDR_MAPPED)
		return -EADDRNOTAVAIL;

	lock_sock(sk);

	err = -EINVAL;
	if (sk->sk_state != TCP_CLOSE)
		goto out;

	rcu_read_lock();
	/* Check if the address belongs to the host. */
	if (addr_type != IPV6_ADDR_ANY) {
		struct net_device *dev = NULL;

		if (addr_type & IPV6_ADDR_LINKLOCAL) {
			if (addr_len >= sizeof(struct sockaddr_in6) &&
			    addr->sin6_scope_id) {
				/* Override any existing binding, if another
				 * one is supplied by user.
				 */
				sk->sk_bound_dev_if = addr->sin6_scope_id;
			}

			/* Binding to link-local address requires an interface */
			if (!sk->sk_bound_dev_if)
				goto out_unlock;

			err = -ENODEV;
			dev = dev_get_by_index_rcu(sock_net(sk),
						   sk->sk_bound_dev_if);
			if (!dev)
				goto out_unlock;
		}

		/* ipv4 addr of the socket is invalid.  Only the
		 * unspecified and mapped address have a v4 equivalent.
		 */
		v4addr = LOOPBACK4_IPV6;
		if (!(addr_type & IPV6_ADDR_MULTICAST)) {
			err = -EADDRNOTAVAIL;
			if (!ipv6_chk_addr(sock_net(sk), &addr->sin6_addr,
					   dev, 0)) {
				goto out_unlock;
			}
		}
	}

	inet->inet_rcv_saddr = inet->inet_saddr = v4addr;
	ipv6_addr_copy(&np->rcv_saddr, &addr->sin6_addr);
	if (!(addr_type & IPV6_ADDR_MULTICAST))
		ipv6_addr_copy(&np->saddr, &addr->sin6_addr);
	err = 0;
out_unlock:
	rcu_read_unlock();
out:
	release_sock(sk);
	return err;
}
static void rawv6_err(struct sock *sk, struct sk_buff *skb,
	       struct inet6_skb_parm *opt,
	       u8 type, u8 code, int offset, __be32 info)
{
	struct inet_sock *inet = inet_sk(sk);
	struct ipv6_pinfo *np = inet6_sk(sk);
	int err;
	int harderr;

	/* Report error on raw socket, if:
	   1. User requested recverr.
	   2. Socket is connected (otherwise the error indication
	      is useless without recverr and the error is hard).
	 */
	if (!np->recverr && sk->sk_state != TCP_ESTABLISHED)
		return;

	harderr = icmpv6_err_convert(type, code, &err);
	if (type == ICMPV6_PKT_TOOBIG)
		harderr = (np->pmtudisc == IPV6_PMTUDISC_DO);

	if (np->recverr) {
		u8 *payload = skb->data;
		if (!inet->hdrincl)
			payload += offset;
		ipv6_icmp_error(sk, skb, err, 0, ntohl(info), payload);
	}

	if (np->recverr || harderr) {
		sk->sk_err = err;
		sk->sk_error_report(sk);
	}
}
void raw6_icmp_error(struct sk_buff *skb, int nexthdr,
		u8 type, u8 code, int inner_offset, __be32 info)
{
	struct sock *sk;
	int hash;
	struct in6_addr *saddr, *daddr;
	struct net *net;

	hash = nexthdr & (RAW_HTABLE_SIZE - 1);

	read_lock(&raw_v6_hashinfo.lock);
	sk = sk_head(&raw_v6_hashinfo.ht[hash]);
	if (sk != NULL) {
		/* Note: ipv6_hdr(skb) != skb->data */
		struct ipv6hdr *ip6h = (struct ipv6hdr *)skb->data;
		saddr = &ip6h->saddr;
		daddr = &ip6h->daddr;
		net = dev_net(skb->dev);

		while ((sk = __raw_v6_lookup(net, sk, nexthdr, saddr, daddr,
					     IP6CB(skb)->iif))) {
			rawv6_err(sk, skb, NULL, type, code,
				  inner_offset, info);
			sk = sk_next(sk);
		}
	}
	read_unlock(&raw_v6_hashinfo.lock);
}
static inline int rawv6_rcv_skb(struct sock *sk, struct sk_buff *skb)
{
	if ((raw6_sk(sk)->checksum || sk->sk_filter) &&
	    skb_checksum_complete(skb)) {
		atomic_inc(&sk->sk_drops);
		kfree_skb(skb);
		return NET_RX_DROP;
	}

	/* Charge it to the socket. */
	if (ip_queue_rcv_skb(sk, skb) < 0) {
		kfree_skb(skb);
		return NET_RX_DROP;
	}

	return 0;
}
/*
 *	This is next to useless...
 *	if we demultiplex in network layer we don't need the extra call
 *	just to queue the skb...
 *	maybe we could have the network decide upon a hint if it
 *	should call raw_rcv for demultiplexing
 */
int rawv6_rcv(struct sock *sk, struct sk_buff *skb)
{
	struct inet_sock *inet = inet_sk(sk);
	struct raw6_sock *rp = raw6_sk(sk);

	if (!xfrm6_policy_check(sk, XFRM_POLICY_IN, skb)) {
		atomic_inc(&sk->sk_drops);
		kfree_skb(skb);
		return NET_RX_DROP;
	}

	if (!rp->checksum)
		skb->ip_summed = CHECKSUM_UNNECESSARY;

	if (skb->ip_summed == CHECKSUM_COMPLETE) {
		skb_postpull_rcsum(skb, skb_network_header(skb),
				   skb_network_header_len(skb));
		if (!csum_ipv6_magic(&ipv6_hdr(skb)->saddr,
				     &ipv6_hdr(skb)->daddr,
				     skb->len, inet->inet_num, skb->csum))
			skb->ip_summed = CHECKSUM_UNNECESSARY;
	}
	if (!skb_csum_unnecessary(skb))
		skb->csum = ~csum_unfold(csum_ipv6_magic(&ipv6_hdr(skb)->saddr,
							 &ipv6_hdr(skb)->daddr,
							 skb->len,
							 inet->inet_num, 0));

	if (inet->hdrincl) {
		if (skb_checksum_complete(skb)) {
			atomic_inc(&sk->sk_drops);
			kfree_skb(skb);
			return NET_RX_DROP;
		}
	}

	rawv6_rcv_skb(sk, skb);
	return 0;
}
/*
 *	This should be easy, if there is something there
 *	we return it, otherwise we block.
 */
static int rawv6_recvmsg(struct kiocb *iocb, struct sock *sk,
		  struct msghdr *msg, size_t len,
		  int noblock, int flags, int *addr_len)
{
	struct ipv6_pinfo *np = inet6_sk(sk);
	struct sockaddr_in6 *sin6 = (struct sockaddr_in6 *)msg->msg_name;
	struct sk_buff *skb;
	size_t copied;
	int err;

	if (flags & MSG_OOB)
		return -EOPNOTSUPP;

	if (addr_len)
		*addr_len = sizeof(*sin6);

	if (flags & MSG_ERRQUEUE)
		return ipv6_recv_error(sk, msg, len);

	if (np->rxpmtu && np->rxopt.bits.rxpmtu)
		return ipv6_recv_rxpmtu(sk, msg, len);

	skb = skb_recv_datagram(sk, flags, noblock, &err);
	if (!skb)
		goto out;

	copied = skb->len;
	if (copied > len) {
		copied = len;
		msg->msg_flags |= MSG_TRUNC;
	}

	if (skb_csum_unnecessary(skb)) {
		err = skb_copy_datagram_iovec(skb, 0, msg->msg_iov, copied);
	} else if (msg->msg_flags & MSG_TRUNC) {
		if (__skb_checksum_complete(skb))
			goto csum_copy_err;
		err = skb_copy_datagram_iovec(skb, 0, msg->msg_iov, copied);
	} else {
		err = skb_copy_and_csum_datagram_iovec(skb, 0, msg->msg_iov);
		if (err == -EINVAL)
			goto csum_copy_err;
	}
	if (err)
		goto out_free;

	/* Copy the address. */
	if (sin6) {
		sin6->sin6_family = AF_INET6;
		sin6->sin6_port = 0;
		ipv6_addr_copy(&sin6->sin6_addr, &ipv6_hdr(skb)->saddr);
		sin6->sin6_flowinfo = 0;
		sin6->sin6_scope_id = 0;
		if (ipv6_addr_type(&sin6->sin6_addr) & IPV6_ADDR_LINKLOCAL)
			sin6->sin6_scope_id = IP6CB(skb)->iif;
	}
	sock_recv_ts_and_drops(msg, sk, skb);

	if (np->rxopt.all)
		datagram_recv_ctl(sk, msg, skb);

	err = copied;
	if (flags & MSG_TRUNC)
		err = skb->len;

out_free:
	skb_free_datagram(sk, skb);
out:
	return err;

csum_copy_err:
	skb_kill_datagram(sk, skb, flags);

	/* Error for blocking case is chosen to masquerade
	   as some normal condition.
	 */
	err = (flags & MSG_DONTWAIT) ? -EAGAIN : -EHOSTUNREACH;
	goto out;
}
static int rawv6_push_pending_frames(struct sock *sk, struct flowi *fl,
				     struct raw6_sock *rp)
{
	struct sk_buff *skb;
	int err = 0;
	int offset;
	int len;
	int total_len;
	__wsum tmp_csum;
	__sum16 csum;

	if (!rp->checksum)
		goto send;

	if ((skb = skb_peek(&sk->sk_write_queue)) == NULL)
		goto out;

	offset = rp->offset;
	total_len = inet_sk(sk)->cork.length - (skb_network_header(skb) -
						skb->data);
	if (offset >= total_len - 1) {
		err = -EINVAL;
		ip6_flush_pending_frames(sk);
		goto out;
	}

	/* should be check HW csum miyazawa */
	if (skb_queue_len(&sk->sk_write_queue) == 1) {
		/*
		 * Only one fragment on the socket.
		 */
		tmp_csum = skb->csum;
	} else {
		struct sk_buff *csum_skb = NULL;
		tmp_csum = 0;

		skb_queue_walk(&sk->sk_write_queue, skb) {
			tmp_csum = csum_add(tmp_csum, skb->csum);

			if (csum_skb)
				continue;

			len = skb->len - skb_transport_offset(skb);
			if (offset >= len) {
				offset -= len;
				continue;
			}

			csum_skb = skb;
		}

		skb = csum_skb;
	}

	offset += skb_transport_offset(skb);
	if (skb_copy_bits(skb, offset, &csum, 2))
		BUG();

	/* in case cksum was not initialized */
	if (unlikely(csum))
		tmp_csum = csum_sub(tmp_csum, csum_unfold(csum));

	csum = csum_ipv6_magic(&fl->fl6_src, &fl->fl6_dst,
			       total_len, fl->proto, tmp_csum);

	if (csum == 0 && fl->proto == IPPROTO_UDP)
		csum = CSUM_MANGLED_0;

	if (skb_store_bits(skb, offset, &csum, 2))
		BUG();

send:
	err = ip6_push_pending_frames(sk);
out:
	return err;
}
static int rawv6_send_hdrinc(struct sock *sk, void *from, int length,
			struct flowi *fl, struct rt6_info *rt,
			unsigned int flags)
{
	struct ipv6_pinfo *np = inet6_sk(sk);
	struct ipv6hdr *iph;
	struct sk_buff *skb;
	int err;

	if (length > rt->u.dst.dev->mtu) {
		ipv6_local_error(sk, EMSGSIZE, fl, rt->u.dst.dev->mtu);
		return -EMSGSIZE;
	}
	if (flags & MSG_PROBE)
		goto out;

	skb = sock_alloc_send_skb(sk,
				  length + LL_ALLOCATED_SPACE(rt->u.dst.dev) + 15,
				  flags & MSG_DONTWAIT, &err);
	if (skb == NULL)
		goto error;
	skb_reserve(skb, LL_RESERVED_SPACE(rt->u.dst.dev));

	skb->priority = sk->sk_priority;
	skb->mark = sk->sk_mark;
	skb_dst_set(skb, dst_clone(&rt->u.dst));

	skb_put(skb, length);
	skb_reset_network_header(skb);
	iph = ipv6_hdr(skb);

	skb->ip_summed = CHECKSUM_NONE;

	skb->transport_header = skb->network_header;
	err = memcpy_fromiovecend((void *)iph, from, 0, length);
	if (err)
		goto error_fault;

	IP6_UPD_PO_STATS(sock_net(sk), rt->rt6i_idev, IPSTATS_MIB_OUT, skb->len);
	err = NF_HOOK(NFPROTO_IPV6, NF_INET_LOCAL_OUT, skb, NULL,
		      rt->u.dst.dev, dst_output);
	if (err > 0)
		err = net_xmit_errno(err);
	if (err)
		goto error;
out:
	return 0;

error_fault:
	err = -EFAULT;
	kfree_skb(skb);
error:
	IP6_INC_STATS(sock_net(sk), rt->rt6i_idev, IPSTATS_MIB_OUTDISCARDS);
	if (err == -ENOBUFS && !np->recverr)
		err = 0;

	return err;
}

static int rawv6_probe_proto_opt(struct flowi *fl, struct msghdr *msg)
{
	struct iovec *iov;
	u8 __user *type = NULL;
	u8 __user *code = NULL;
	u8 len = 0;
	int probed = 0;
	int i;

	if (!msg->msg_iov)
		return 0;

	for (i = 0; i < msg->msg_iovlen; i++) {
		iov = &msg->msg_iov[i];
		if (!iov)
			continue;

		switch (fl->proto) {
		case IPPROTO_ICMPV6:
			/* check if one-byte field is readable or not. */
			if (iov->iov_base && iov->iov_len < 1)
				break;

			if (!type) {
				type = iov->iov_base;
				/* check if code field is readable or not. */
				if (iov->iov_len > 1)
					code = type + 1;
			} else if (!code)
				code = iov->iov_base;

			if (type && code) {
				if (get_user(fl->fl_icmp_type, type) ||
				    get_user(fl->fl_icmp_code, code))
					return -EFAULT;
				probed = 1;
			}
			break;
		case IPPROTO_MH:
			if (iov->iov_base && iov->iov_len < 1)
				break;
			/* check if type field is readable or not. */
			if (iov->iov_len > 2 - len) {
				u8 __user *p = iov->iov_base;
				if (get_user(fl->fl_mh_type, &p[2 - len]))
					return -EFAULT;
				probed = 1;
			} else
				len += iov->iov_len;

			break;
		default:
			probed = 1;
			break;
		}
		if (probed)
			break;
	}
	return 0;
}

static int rawv6_sendmsg(struct kiocb *iocb, struct sock *sk,
			 struct msghdr *msg, size_t len)
{
	struct ipv6_txoptions opt_space;
	struct sockaddr_in6 *sin6 = (struct sockaddr_in6 *) msg->msg_name;
	struct in6_addr *daddr, *final_p = NULL, final;
	struct inet_sock *inet = inet_sk(sk);
	struct ipv6_pinfo *np = inet6_sk(sk);
	struct raw6_sock *rp = raw6_sk(sk);
	struct ipv6_txoptions *opt = NULL;
	struct ip6_flowlabel *flowlabel = NULL;
	struct dst_entry *dst = NULL;
	struct flowi fl;
	int addr_len = msg->msg_namelen;
	int hlimit = -1;
	int tclass = -1;
	int dontfrag = -1;
	u16 proto;
	int err;
	/* Rough check on arithmetic overflow,
	   better check is made in ip6_append_data().
	 */
	if (len > INT_MAX)
		return -EMSGSIZE;

	/* Mirror BSD error message compatibility */
	if (msg->msg_flags & MSG_OOB)
		return -EOPNOTSUPP;

	/*
	 *	Get and verify the address.
	 */
	memset(&fl, 0, sizeof(fl));

	fl.mark = sk->sk_mark;
	if (sin6) {
		if (addr_len < SIN6_LEN_RFC2133)
			return -EINVAL;

		if (sin6->sin6_family && sin6->sin6_family != AF_INET6)
			return -EAFNOSUPPORT;

		/* port is the proto value [0..255] carried in nexthdr */
		proto = ntohs(sin6->sin6_port);

		if (!proto)
			proto = inet->inet_num;
		else if (proto != inet->inet_num)
			return -EINVAL;

		if (proto > 255)
			return -EINVAL;

		daddr = &sin6->sin6_addr;
		if (np->sndflow) {
			fl.fl6_flowlabel = sin6->sin6_flowinfo&IPV6_FLOWINFO_MASK;
			if (fl.fl6_flowlabel&IPV6_FLOWLABEL_MASK) {
				flowlabel = fl6_sock_lookup(sk, fl.fl6_flowlabel);
				if (flowlabel == NULL)
					return -EINVAL;
				daddr = &flowlabel->dst;
			}
		}

		/*
		 * Otherwise it will be difficult to maintain
		 * sk->sk_dst_cache.
		 */
		if (sk->sk_state == TCP_ESTABLISHED &&
		    ipv6_addr_equal(daddr, &np->daddr))
			daddr = &np->daddr;

		if (addr_len >= sizeof(struct sockaddr_in6) &&
		    sin6->sin6_scope_id &&
		    ipv6_addr_type(daddr)&IPV6_ADDR_LINKLOCAL)
			fl.oif = sin6->sin6_scope_id;
	} else {
		if (sk->sk_state != TCP_ESTABLISHED)
			return -EDESTADDRREQ;

		proto = inet->inet_num;
		daddr = &np->daddr;
		fl.fl6_flowlabel = np->flow_label;
	}

	if (fl.oif == 0)
		fl.oif = sk->sk_bound_dev_if;

	if (msg->msg_controllen) {
		opt = &opt_space;
		memset(opt, 0, sizeof(struct ipv6_txoptions));
		opt->tot_len = sizeof(struct ipv6_txoptions);

		err = datagram_send_ctl(sock_net(sk), msg, &fl, opt, &hlimit,
					&tclass, &dontfrag);
		if (err < 0) {
			fl6_sock_release(flowlabel);
			return err;
		}
		if ((fl.fl6_flowlabel&IPV6_FLOWLABEL_MASK) && !flowlabel) {
			flowlabel = fl6_sock_lookup(sk, fl.fl6_flowlabel);
			if (flowlabel == NULL)
				return -EINVAL;
		}
		if (!(opt->opt_nflen|opt->opt_flen))
			opt = NULL;
	}
	if (opt == NULL)
		opt = np->opt;
	if (flowlabel)
		opt = fl6_merge_options(&opt_space, flowlabel, opt);
	opt = ipv6_fixup_options(&opt_space, opt);
	fl.proto = proto;
	err = rawv6_probe_proto_opt(&fl, msg);
	if (err)
		goto out;

	if (!ipv6_addr_any(daddr))
		ipv6_addr_copy(&fl.fl6_dst, daddr);
	else
		fl.fl6_dst.s6_addr[15] = 0x1; /* :: means loopback (BSD'ism) */
	if (ipv6_addr_any(&fl.fl6_src) && !ipv6_addr_any(&np->saddr))
		ipv6_addr_copy(&fl.fl6_src, &np->saddr);

	/* merge ip6_build_xmit from ip6_output */
	if (opt && opt->srcrt) {
		struct rt0_hdr *rt0 = (struct rt0_hdr *) opt->srcrt;
		ipv6_addr_copy(&final, &fl.fl6_dst);
		ipv6_addr_copy(&fl.fl6_dst, rt0->addr);
		final_p = &final;
	}

	if (!fl.oif && ipv6_addr_is_multicast(&fl.fl6_dst))
		fl.oif = np->mcast_oif;
	security_sk_classify_flow(sk, &fl);

	err = ip6_dst_lookup(sk, &dst, &fl);
	if (err)
		goto out;
	if (final_p)
		ipv6_addr_copy(&fl.fl6_dst, final_p);

	err = __xfrm_lookup(sock_net(sk), &dst, &fl, sk, XFRM_LOOKUP_WAIT);
	if (err < 0) {
		if (err == -EREMOTE)
			err = ip6_dst_blackhole(sk, &dst, &fl);
		if (err < 0)
			goto out;
	}

	if (hlimit < 0) {
		if (ipv6_addr_is_multicast(&fl.fl6_dst))
			hlimit = np->mcast_hops;
		else
			hlimit = np->hop_limit;
		if (hlimit < 0)
			hlimit = ip6_dst_hoplimit(dst);
	}

	if (tclass < 0)
		tclass = np->tclass;

	if (dontfrag < 0)
		dontfrag = np->dontfrag;

	if (msg->msg_flags&MSG_CONFIRM)
		goto do_confirm;

back_from_confirm:
	if (inet->hdrincl) {
		err = rawv6_send_hdrinc(sk, msg->msg_iov, len, &fl, (struct rt6_info *)dst, msg->msg_flags);
	} else {
		lock_sock(sk);
		err = ip6_append_data(sk, ip_generic_getfrag, msg->msg_iov,
			len, 0, hlimit, tclass, opt, &fl, (struct rt6_info *)dst,
			msg->msg_flags, dontfrag);

		if (err)
			ip6_flush_pending_frames(sk);
		else if (!(msg->msg_flags & MSG_MORE))
			err = rawv6_push_pending_frames(sk, &fl, rp);
		release_sock(sk);
	}
done:
	dst_release(dst);
out:
	fl6_sock_release(flowlabel);
	return err < 0 ? err : len;

do_confirm:
	dst_confirm(dst);
	if (!(msg->msg_flags & MSG_PROBE) || len)
		goto back_from_confirm;
	err = 0;
	goto done;
}

static int rawv6_seticmpfilter(struct sock *sk, int level, int optname,
			       char __user *optval, int optlen)
{
	switch (optname) {
	case ICMPV6_FILTER:
		if (optlen > sizeof(struct icmp6_filter))
			optlen = sizeof(struct icmp6_filter);
		if (copy_from_user(&raw6_sk(sk)->filter, optval, optlen))
			return -EFAULT;
		return 0;
	default:
		return -ENOPROTOOPT;
	}

	return 0;
}

static int rawv6_geticmpfilter(struct sock *sk, int level, int optname,
			       char __user *optval, int __user *optlen)
{
	int len;

	switch (optname) {
	case ICMPV6_FILTER:
		if (get_user(len, optlen))
			return -EFAULT;
		if (len < 0)
			return -EINVAL;
		if (len > sizeof(struct icmp6_filter))
			len = sizeof(struct icmp6_filter);
		if (put_user(len, optlen))
			return -EFAULT;
		if (copy_to_user(optval, &raw6_sk(sk)->filter, len))
			return -EFAULT;
		return 0;
	default:
		return -ENOPROTOOPT;
	}

	return 0;
}

static int do_rawv6_setsockopt(struct sock *sk, int level, int optname,
			       char __user *optval, unsigned int optlen)
{
	struct raw6_sock *rp = raw6_sk(sk);
	int val;

	if (get_user(val, (int __user *)optval))
		return -EFAULT;

	switch (optname) {
	case IPV6_CHECKSUM:
		if (inet_sk(sk)->inet_num == IPPROTO_ICMPV6 &&
		    level == IPPROTO_IPV6) {
			/*
			 * RFC3542 tells that IPV6_CHECKSUM socket
			 * option in the IPPROTO_IPV6 level is not
			 * allowed on ICMPv6 sockets.
			 * If you want to set it, use IPPROTO_RAW
			 * level IPV6_CHECKSUM socket option
			 * (Linux extension).
			 */
			return -EINVAL;
		}

		/* You may get strange result with a positive odd offset;
		   RFC2292bis agrees with me. */
		if (val > 0 && (val&1))
			return -EINVAL;
		if (val < 0) {
			rp->checksum = 0;
		} else {
			rp->checksum = 1;
			rp->offset = val;
		}

		return 0;
		break;

	default:
		return -ENOPROTOOPT;
	}
}

static int rawv6_setsockopt(struct sock *sk, int level, int optname,
			    char __user *optval, unsigned int optlen)
{
	switch (level) {
	case SOL_RAW:
		break;

	case SOL_ICMPV6:
		if (inet_sk(sk)->inet_num != IPPROTO_ICMPV6)
			return -EOPNOTSUPP;
		return rawv6_seticmpfilter(sk, level, optname, optval, optlen);
	case SOL_IPV6:
		if (optname == IPV6_CHECKSUM)
			break;
	default:
		return ipv6_setsockopt(sk, level, optname, optval, optlen);
	}

	return do_rawv6_setsockopt(sk, level, optname, optval, optlen);
}

#ifdef CONFIG_COMPAT
static int compat_rawv6_setsockopt(struct sock *sk, int level, int optname,
				   char __user *optval, unsigned int optlen)
{
	switch (level) {
	case SOL_RAW:
		break;
	case SOL_ICMPV6:
		if (inet_sk(sk)->inet_num != IPPROTO_ICMPV6)
			return -EOPNOTSUPP;
		return rawv6_seticmpfilter(sk, level, optname, optval, optlen);
	case SOL_IPV6:
		if (optname == IPV6_CHECKSUM)
			break;
	default:
		return compat_ipv6_setsockopt(sk, level, optname,
					      optval, optlen);
	}
	return do_rawv6_setsockopt(sk, level, optname, optval, optlen);
}
#endif

static int do_rawv6_getsockopt(struct sock *sk, int level, int optname,
			       char __user *optval, int __user *optlen)
{
	struct raw6_sock *rp = raw6_sk(sk);
	int val, len;

	if (get_user(len, optlen))
		return -EFAULT;

	switch (optname) {
	case IPV6_CHECKSUM:
		/*
		 * We allow getsockopt() for IPPROTO_IPV6-level
		 * IPV6_CHECKSUM socket option on ICMPv6 sockets
		 * since RFC3542 is silent about it.
		 */
		if (rp->checksum == 0)
			val = -1;
		else
			val = rp->offset;
		break;

	default:
		return -ENOPROTOOPT;
	}

	len = min_t(unsigned int, sizeof(int), len);

	if (put_user(len, optlen))
		return -EFAULT;
	if (copy_to_user(optval, &val, len))
		return -EFAULT;
	return 0;
}

static int rawv6_getsockopt(struct sock *sk, int level, int optname,
			    char __user *optval, int __user *optlen)
{
	switch (level) {
	case SOL_RAW:
		break;

	case SOL_ICMPV6:
		if (inet_sk(sk)->inet_num != IPPROTO_ICMPV6)
			return -EOPNOTSUPP;
		return rawv6_geticmpfilter(sk, level, optname, optval, optlen);
	case SOL_IPV6:
		if (optname == IPV6_CHECKSUM)
			break;
	default:
		return ipv6_getsockopt(sk, level, optname, optval, optlen);
	}

	return do_rawv6_getsockopt(sk, level, optname, optval, optlen);
}

#ifdef CONFIG_COMPAT
static int compat_rawv6_getsockopt(struct sock *sk, int level, int optname,
				   char __user *optval, int __user *optlen)
{
	switch (level) {
	case SOL_RAW:
		break;
	case SOL_ICMPV6:
		if (inet_sk(sk)->inet_num != IPPROTO_ICMPV6)
			return -EOPNOTSUPP;
		return rawv6_geticmpfilter(sk, level, optname, optval, optlen);
	case SOL_IPV6:
		if (optname == IPV6_CHECKSUM)
			break;
	default:
		return compat_ipv6_getsockopt(sk, level, optname,
					      optval, optlen);
	}
	return do_rawv6_getsockopt(sk, level, optname, optval, optlen);
}
#endif

static int rawv6_ioctl(struct sock *sk, int cmd, unsigned long arg)
{
	switch (cmd) {
	case SIOCOUTQ:
	{
		int amount = sk_wmem_alloc_get(sk);

		return put_user(amount, (int __user *)arg);
	}
	case SIOCINQ:
	{
		struct sk_buff *skb;
		int amount = 0;

		spin_lock_bh(&sk->sk_receive_queue.lock);
		skb = skb_peek(&sk->sk_receive_queue);
		if (skb != NULL)
			amount = skb->tail - skb->transport_header;
		spin_unlock_bh(&sk->sk_receive_queue.lock);
		return put_user(amount, (int __user *)arg);
	}

	default:
#ifdef CONFIG_IPV6_MROUTE
		return ip6mr_ioctl(sk, cmd, (void __user *)arg);
#else
		return -ENOIOCTLCMD;
#endif
	}
}

static void rawv6_close(struct sock *sk, long timeout)
{
	if (inet_sk(sk)->inet_num == IPPROTO_RAW)
		ip6_ra_control(sk, -1);
	ip6mr_sk_done(sk);
	sk_common_release(sk);
}

static void raw6_destroy(struct sock *sk)
{
	lock_sock(sk);
	ip6_flush_pending_frames(sk);
	release_sock(sk);

	inet6_destroy_sock(sk);
}

static int rawv6_init_sk(struct sock *sk)
{
	struct raw6_sock *rp = raw6_sk(sk);

	switch (inet_sk(sk)->inet_num) {
	case IPPROTO_ICMPV6:
		rp->checksum = 1;
		rp->offset   = 2;
		break;
	case IPPROTO_MH:
		rp->checksum = 1;
		rp->offset   = 4;
		break;
	default:
		break;
	}
	return 0;
}

struct proto rawv6_prot = {
	.name		   = "RAWv6",
	.owner		   = THIS_MODULE,
	.close		   = rawv6_close,
	.destroy	   = raw6_destroy,
	.connect	   = ip6_datagram_connect,
	.disconnect	   = udp_disconnect,
	.ioctl		   = rawv6_ioctl,
	.init		   = rawv6_init_sk,
	.setsockopt	   = rawv6_setsockopt,
	.getsockopt	   = rawv6_getsockopt,
	.sendmsg	   = rawv6_sendmsg,
	.recvmsg	   = rawv6_recvmsg,
	.bind		   = rawv6_bind,
	.backlog_rcv	   = rawv6_rcv_skb,
	.hash		   = raw_hash_sk,
	.unhash		   = raw_unhash_sk,
	.obj_size	   = sizeof(struct raw6_sock),
	.h.raw_hash	   = &raw_v6_hashinfo,
#ifdef CONFIG_COMPAT
	.compat_setsockopt = compat_rawv6_setsockopt,
	.compat_getsockopt = compat_rawv6_getsockopt,
#endif
};

#ifdef CONFIG_PROC_FS
static void raw6_sock_seq_show(struct seq_file *seq, struct sock *sp, int i)
{
	struct ipv6_pinfo *np = inet6_sk(sp);
	struct in6_addr *dest, *src;
	__u16 destp, srcp;

	dest  = &np->daddr;
	src   = &np->rcv_saddr;
	destp = 0;
	srcp  = inet_sk(sp)->inet_num;
	seq_printf(seq,
		   "%4d: %08X%08X%08X%08X:%04X %08X%08X%08X%08X:%04X "
		   "%02X %08X:%08X %02X:%08lX %08X %5d %8d %lu %d %p %d\n",
		   i,
		   src->s6_addr32[0], src->s6_addr32[1],
		   src->s6_addr32[2], src->s6_addr32[3], srcp,
		   dest->s6_addr32[0], dest->s6_addr32[1],
		   dest->s6_addr32[2], dest->s6_addr32[3], destp,
		   sp->sk_state,
		   sk_wmem_alloc_get(sp),
		   sk_rmem_alloc_get(sp),
		   0, 0L, 0,
		   sock_i_uid(sp), 0,
		   sock_i_ino(sp),
		   atomic_read(&sp->sk_refcnt), sp, atomic_read(&sp->sk_drops));
}

static int raw6_seq_show(struct seq_file *seq, void *v)
{
	if (v == SEQ_START_TOKEN)
		seq_printf(seq,
			   "  sl  "
			   "local_address                         "
			   "remote_address                        "
			   "st tx_queue rx_queue tr tm->when retrnsmt"
			   "   uid  timeout inode ref pointer drops\n");
	else
		raw6_sock_seq_show(seq, v, raw_seq_private(seq)->bucket);
	return 0;
}

static const struct seq_operations raw6_seq_ops = {
	.start =	raw_seq_start,
	.next =		raw_seq_next,
	.stop =		raw_seq_stop,
	.show =		raw6_seq_show,
};

static int raw6_seq_open(struct inode *inode, struct file *file)
{
	return raw_seq_open(inode, file, &raw_v6_hashinfo, &raw6_seq_ops);
}

static const struct file_operations raw6_seq_fops = {
	.owner =	THIS_MODULE,
	.open =		raw6_seq_open,
	.read =		seq_read,
	.llseek =	seq_lseek,
	.release =	seq_release_net,
};

static int __net_init raw6_init_net(struct net *net)
{
	if (!proc_net_fops_create(net, "raw6", S_IRUGO, &raw6_seq_fops))
		return -ENOMEM;

	return 0;
}

static void __net_exit raw6_exit_net(struct net *net)
{
	proc_net_remove(net, "raw6");
}

static struct pernet_operations raw6_net_ops = {
	.init = raw6_init_net,
	.exit = raw6_exit_net,
};

int __init raw6_proc_init(void)
{
	return register_pernet_subsys(&raw6_net_ops);
}

void raw6_proc_exit(void)
{
	unregister_pernet_subsys(&raw6_net_ops);
}
#endif	/* CONFIG_PROC_FS */

/* Same as inet6_dgram_ops, sans udp_poll.  */
static const struct proto_ops inet6_sockraw_ops = {
	.family		   = PF_INET6,
	.owner		   = THIS_MODULE,
	.release	   = inet6_release,
	.bind		   = inet6_bind,
	.connect	   = inet_dgram_connect,	/* ok		*/
	.socketpair	   = sock_no_socketpair,	/* a do nothing	*/
	.accept		   = sock_no_accept,		/* a do nothing	*/
	.getname	   = inet6_getname,
	.poll		   = datagram_poll,		/* ok		*/
	.ioctl		   = inet6_ioctl,		/* must change  */
	.listen		   = sock_no_listen,		/* ok		*/
	.shutdown	   = inet_shutdown,		/* ok		*/
	.setsockopt	   = sock_common_setsockopt,	/* ok		*/
	.getsockopt	   = sock_common_getsockopt,	/* ok		*/
	.sendmsg	   = inet_sendmsg,		/* ok		*/
	.recvmsg	   = sock_common_recvmsg,	/* ok		*/
	.mmap		   = sock_no_mmap,
	.sendpage	   = sock_no_sendpage,
#ifdef CONFIG_COMPAT
	.compat_setsockopt = compat_sock_common_setsockopt,
	.compat_getsockopt = compat_sock_common_getsockopt,
#endif
};

static struct inet_protosw rawv6_protosw = {
	.type		= SOCK_RAW,
	.protocol	= IPPROTO_IP,	/* wild card */
	.prot		= &rawv6_prot,
	.ops		= &inet6_sockraw_ops,
	.no_check	= UDP_CSUM_DEFAULT,
	.flags		= INET_PROTOSW_REUSE,
};

int __init rawv6_init(void)
{
	int ret;

	ret = inet6_register_protosw(&rawv6_protosw);
	if (ret)
		goto out;
out:
	return ret;
}

void rawv6_exit(void)
{
	inet6_unregister_protosw(&rawv6_protosw);
}