// SPDX-License-Identifier: GPL-2.0-or-later
/*
 *  net/dccp/ipv4.c
 *
 *  An implementation of the DCCP protocol
 *  Arnaldo Carvalho de Melo <acme@conectiva.com.br>
 */

#include <linux/dccp.h>
#include <linux/icmp.h>
#include <linux/slab.h>
#include <linux/module.h>
#include <linux/skbuff.h>
#include <linux/random.h>

#include <net/icmp.h>
#include <net/inet_common.h>
#include <net/inet_hashtables.h>
#include <net/inet_sock.h>
#include <net/protocol.h>
#include <net/sock.h>
#include <net/timewait_sock.h>
#include <net/tcp_states.h>
#include <net/xfrm.h>
#include <net/secure_seq.h>
#include <net/netns/generic.h>

#include "ackvec.h"
#include "ccid.h"
#include "dccp.h"
#include "feat.h"

struct dccp_v4_pernet {
	struct sock *v4_ctl_sk;
};

static unsigned int dccp_v4_pernet_id __read_mostly;

/*
 * The per-net v4_ctl_sk socket is used for responding to
 * the Out-of-the-blue (OOTB) packets. A control sock will be created
 * for this socket at the initialization time.
 */

int dccp_v4_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len)
{
	const struct sockaddr_in *usin = (struct sockaddr_in *)uaddr;
	struct inet_sock *inet = inet_sk(sk);
	struct dccp_sock *dp = dccp_sk(sk);
	__be16 orig_sport, orig_dport;
	__be32 daddr, nexthop;
	struct flowi4 *fl4;
	struct rtable *rt;
	int err;
	struct ip_options_rcu *inet_opt;

	dp->dccps_role = DCCP_ROLE_CLIENT;

	if (addr_len < sizeof(struct sockaddr_in))
		return -EINVAL;

	if (usin->sin_family != AF_INET)
		return -EAFNOSUPPORT;

	nexthop = daddr = usin->sin_addr.s_addr;

	inet_opt = rcu_dereference_protected(inet->inet_opt,
					     lockdep_sock_is_held(sk));
	if (inet_opt != NULL && inet_opt->opt.srr) {
		if (daddr == 0)
			return -EINVAL;
		nexthop = inet_opt->opt.faddr;
	}

	orig_sport = inet->inet_sport;
	orig_dport = usin->sin_port;
	fl4 = &inet->cork.fl.u.ip4;
	rt = ip_route_connect(fl4, nexthop, inet->inet_saddr,
			      sk->sk_bound_dev_if, IPPROTO_DCCP, orig_sport,
			      orig_dport, sk);
	if (IS_ERR(rt))
		return PTR_ERR(rt);

	if (rt->rt_flags & (RTCF_MULTICAST | RTCF_BROADCAST)) {
		ip_rt_put(rt);
		return -ENETUNREACH;
	}

	if (inet_opt == NULL || !inet_opt->opt.srr)
		daddr = fl4->daddr;

	if (inet->inet_saddr == 0) {
		err = inet_bhash2_update_saddr(sk, &fl4->saddr, AF_INET);
		if (err) {
			ip_rt_put(rt);
			return err;
		}
	} else {
		sk_rcv_saddr_set(sk, inet->inet_saddr);
	}

	inet->inet_dport = usin->sin_port;
	sk_daddr_set(sk, daddr);

	inet_csk(sk)->icsk_ext_hdr_len = 0;
	if (inet_opt)
		inet_csk(sk)->icsk_ext_hdr_len = inet_opt->opt.optlen;
	/*
	 * Socket identity is still unknown (sport may be zero).
	 * However we set state to DCCP_REQUESTING and not releasing socket
	 * lock select source port, enter ourselves into the hash tables and
	 * complete initialization after this.
	 */
	dccp_set_state(sk, DCCP_REQUESTING);
	err = inet_hash_connect(&dccp_death_row, sk);
	if (err != 0)
		goto failure;

	rt = ip_route_newports(fl4, rt, orig_sport, orig_dport,
			       inet->inet_sport, inet->inet_dport, sk);
	if (IS_ERR(rt)) {
		err = PTR_ERR(rt);
		rt = NULL;
		goto failure;
	}
	/* OK, now commit destination to socket.  */
	sk_setup_caps(sk, &rt->dst);

	dp->dccps_iss = secure_dccp_sequence_number(inet->inet_saddr,
						    inet->inet_daddr,
						    inet->inet_sport,
						    inet->inet_dport);
	inet->inet_id = get_random_u16();

	err = dccp_connect(sk);
	rt = NULL;
	if (err != 0)
		goto failure;
out:
	return err;
failure:
	/*
	 * This unhashes the socket and releases the local port, if necessary.
	 */
	dccp_set_state(sk, DCCP_CLOSED);
	inet_bhash2_reset_saddr(sk);
	ip_rt_put(rt);
	sk->sk_route_caps = 0;
	inet->inet_dport = 0;
	goto out;
}
EXPORT_SYMBOL_GPL(dccp_v4_connect);

/*
 * This routine does path mtu discovery as defined in RFC1191.
 */
static inline void dccp_do_pmtu_discovery(struct sock *sk,
					  const struct iphdr *iph,
					  u32 mtu)
{
	struct dst_entry *dst;
	const struct inet_sock *inet = inet_sk(sk);
	const struct dccp_sock *dp = dccp_sk(sk);

	/* We are not interested in DCCP_LISTEN and request_socks (RESPONSEs
	 * send out by Linux are always < 576 bytes so they should go through
	 * unfragmented).
	 */
	if (sk->sk_state == DCCP_LISTEN)
		return;

	dst = inet_csk_update_pmtu(sk, mtu);
	if (!dst)
		return;

	/* Something is about to be wrong... Remember soft error
	 * for the case, if this connection will not able to recover.
	 */
	if (mtu < dst_mtu(dst) && ip_dont_fragment(sk, dst))
		sk->sk_err_soft = EMSGSIZE;

	mtu = dst_mtu(dst);

	if (inet->pmtudisc != IP_PMTUDISC_DONT &&
	    ip_sk_accept_pmtu(sk) &&
	    inet_csk(sk)->icsk_pmtu_cookie > mtu) {
		dccp_sync_mss(sk, mtu);

		/*
		 * From RFC 4340, sec. 14.1:
		 *
		 *	DCCP-Sync packets are the best choice for upward
		 *	probing, since DCCP-Sync probes do not risk application
		 *	data loss.
		 */
		dccp_send_sync(sk, dp->dccps_gsr, DCCP_PKT_SYNC);
	} /* else let the usual retransmit timer handle it */
}

static void dccp_do_redirect(struct sk_buff *skb, struct sock *sk)
{
	struct dst_entry *dst = __sk_dst_check(sk, 0);

	if (dst)
		dst->ops->redirect(dst, sk, skb);
}

void dccp_req_err(struct sock *sk, u64 seq)
{
	struct request_sock *req = inet_reqsk(sk);
	struct net *net = sock_net(sk);

	/*
	 * ICMPs are not backlogged, hence we cannot get an established
	 * socket here.
	 */
	if (!between48(seq, dccp_rsk(req)->dreq_iss, dccp_rsk(req)->dreq_gss)) {
		__NET_INC_STATS(net, LINUX_MIB_OUTOFWINDOWICMPS);
	} else {
		/*
		 * Still in RESPOND, just remove it silently.
		 * There is no good way to pass the error to the newly
		 * created socket, and POSIX does not want network
		 * errors returned from accept().
		 */
		inet_csk_reqsk_queue_drop(req->rsk_listener, req);
	}
	reqsk_put(req);
}
EXPORT_SYMBOL(dccp_req_err);

/*
 * This routine is called by the ICMP module when it gets some sort of error
 * condition. If err < 0 then the socket should be closed and the error
 * returned to the user. If err > 0 it's just the icmp type << 8 | icmp code.
 * After adjustment header points to the first 8 bytes of the tcp header. We
 * need to find the appropriate port.
 *
 * The locking strategy used here is very "optimistic". When someone else
 * accesses the socket the ICMP is just dropped and for some paths there is no
 * check at all. A more general error queue to queue errors for later handling
 * is probably better.
 */
static int dccp_v4_err(struct sk_buff *skb, u32 info)
{
	const struct iphdr *iph = (struct iphdr *)skb->data;
	const u8 offset = iph->ihl << 2;
	const struct dccp_hdr *dh;
	struct dccp_sock *dp;
	struct inet_sock *inet;
	const int type = icmp_hdr(skb)->type;
	const int code = icmp_hdr(skb)->code;
	struct sock *sk;
	__u64 seq;
	int err;
	struct net *net = dev_net(skb->dev);

	/* Only need dccph_dport & dccph_sport which are the first
	 * 4 bytes in dccp header.
	 * Our caller (icmp_socket_deliver()) already pulled 8 bytes for us.
	 */
	BUILD_BUG_ON(offsetofend(struct dccp_hdr, dccph_sport) > 8);
	BUILD_BUG_ON(offsetofend(struct dccp_hdr, dccph_dport) > 8);
	dh = (struct dccp_hdr *)(skb->data + offset);

	sk = __inet_lookup_established(net, &dccp_hashinfo,
				       iph->daddr, dh->dccph_dport,
				       iph->saddr, ntohs(dh->dccph_sport),
				       inet_iif(skb), 0);
	if (!sk) {
		__ICMP_INC_STATS(net, ICMP_MIB_INERRORS);
		return -ENOENT;
	}

	if (sk->sk_state == DCCP_TIME_WAIT) {
		inet_twsk_put(inet_twsk(sk));
		return 0;
	}
	seq = dccp_hdr_seq(dh);
	if (sk->sk_state == DCCP_NEW_SYN_RECV) {
		dccp_req_err(sk, seq);
		return 0;
	}

	bh_lock_sock(sk);
	/* If too many ICMPs get dropped on busy
	 * servers this needs to be solved differently.
	 */
	if (sock_owned_by_user(sk))
		__NET_INC_STATS(net, LINUX_MIB_LOCKDROPPEDICMPS);

	if (sk->sk_state == DCCP_CLOSED)
		goto out;

	dp = dccp_sk(sk);
	if ((1 << sk->sk_state) & ~(DCCPF_REQUESTING | DCCPF_LISTEN) &&
	    !between48(seq, dp->dccps_awl, dp->dccps_awh)) {
		__NET_INC_STATS(net, LINUX_MIB_OUTOFWINDOWICMPS);
		goto out;
	}

	switch (type) {
	case ICMP_REDIRECT:
		if (!sock_owned_by_user(sk))
			dccp_do_redirect(skb, sk);
		goto out;
	case ICMP_SOURCE_QUENCH:
		/* Just silently ignore these. */
		goto out;
	case ICMP_PARAMETERPROB:
		err = EPROTO;
		break;
	case ICMP_DEST_UNREACH:
		if (code > NR_ICMP_UNREACH)
			goto out;

		if (code == ICMP_FRAG_NEEDED) { /* PMTU discovery (RFC1191) */
			if (!sock_owned_by_user(sk))
				dccp_do_pmtu_discovery(sk, iph, info);
			goto out;
		}

		err = icmp_err_convert[code].errno;
		break;
	case ICMP_TIME_EXCEEDED:
		err = EHOSTUNREACH;
		break;
	default:
		goto out;
	}

	switch (sk->sk_state) {
	case DCCP_REQUESTING:
	case DCCP_RESPOND:
		if (!sock_owned_by_user(sk)) {
			__DCCP_INC_STATS(DCCP_MIB_ATTEMPTFAILS);
			sk->sk_err = err;
			sk_error_report(sk);

			dccp_done(sk);
		} else
			sk->sk_err_soft = err;
		goto out;
	}

	/* If we've already connected we will keep trying
	 * until we time out, or the user gives up.
	 *
	 * rfc1122 4.2.3.9 allows to consider as hard errors
	 * only PROTO_UNREACH and PORT_UNREACH (well, FRAG_FAILED too,
	 * but it is obsoleted by pmtu discovery).
	 *
	 * Note, that in modern internet, where routing is unreliable
	 * and in each dark corner broken firewalls sit, sending random
	 * errors ordered by their masters even this two messages finally lose
	 * their original sense (even Linux sends invalid PORT_UNREACHs)
	 *
	 * Now we are in compliance with RFCs.
	 *							--ANK (980905)
	 */
	inet = inet_sk(sk);
	if (!sock_owned_by_user(sk) && inet->recverr) {
		sk->sk_err = err;
		sk_error_report(sk);
	} else /* Only an error on timeout */
		sk->sk_err_soft = err;
out:
	bh_unlock_sock(sk);
	sock_put(sk);
	return 0;
}
2006-11-15 08:28:51 +03:00
static inline __sum16 dccp_v4_csum_finish ( struct sk_buff * skb ,
2006-11-10 22:43:06 +03:00
__be32 src , __be32 dst )
{
return csum_tcpudp_magic ( src , dst , skb - > len , IPPROTO_DCCP , skb - > csum ) ;
}
2010-04-11 06:15:55 +04:00
void dccp_v4_send_check(struct sock *sk, struct sk_buff *skb)
{
	const struct inet_sock *inet = inet_sk(sk);
	struct dccp_hdr *dh = dccp_hdr(skb);

	dccp_csum_outgoing(skb);
	dh->dccph_checksum = dccp_v4_csum_finish(skb,
						 inet->inet_saddr,
						 inet->inet_daddr);
}
EXPORT_SYMBOL_GPL(dccp_v4_send_check);

static inline u64 dccp_v4_init_sequence(const struct sk_buff *skb)
{
	return secure_dccp_sequence_number(ip_hdr(skb)->daddr,
					   ip_hdr(skb)->saddr,
					   dccp_hdr(skb)->dccph_dport,
					   dccp_hdr(skb)->dccph_sport);
}

/*
 * The three way handshake has completed - we got a valid ACK or DATAACK -
 * now create the new socket.
 *
 * This is the equivalent of TCP's tcp_v4_syn_recv_sock
 */
struct sock *dccp_v4_request_recv_sock(const struct sock *sk,
				       struct sk_buff *skb,
				       struct request_sock *req,
				       struct dst_entry *dst,
				       struct request_sock *req_unhash,
				       bool *own_req)
{
	struct inet_request_sock *ireq;
	struct inet_sock *newinet;
	struct sock *newsk;

	if (sk_acceptq_is_full(sk))
		goto exit_overflow;

	newsk = dccp_create_openreq_child(sk, req, skb);
	if (newsk == NULL)
		goto exit_nonewsk;

	newinet = inet_sk(newsk);
	ireq = inet_rsk(req);
	sk_daddr_set(newsk, ireq->ir_rmt_addr);
	sk_rcv_saddr_set(newsk, ireq->ir_loc_addr);
	newinet->inet_saddr = ireq->ir_loc_addr;
	RCU_INIT_POINTER(newinet->inet_opt, rcu_dereference(ireq->ireq_opt));
	newinet->mc_index = inet_iif(skb);
	newinet->mc_ttl = ip_hdr(skb)->ttl;
	newinet->inet_id = get_random_u16();

	if (dst == NULL && (dst = inet_csk_route_child_sock(sk, newsk, req)) == NULL)
		goto put_and_exit;

	sk_setup_caps(newsk, dst);

	dccp_sync_mss(newsk, dst_mtu(dst));

	if (__inet_inherit_port(sk, newsk) < 0)
		goto put_and_exit;
	*own_req = inet_ehash_nolisten(newsk, req_to_sk(req_unhash), NULL);
	if (*own_req)
		ireq->ireq_opt = NULL;
	else
		newinet->inet_opt = NULL;
	return newsk;

exit_overflow:
	__NET_INC_STATS(sock_net(sk), LINUX_MIB_LISTENOVERFLOWS);
exit_nonewsk:
	dst_release(dst);
exit:
	__NET_INC_STATS(sock_net(sk), LINUX_MIB_LISTENDROPS);
	return NULL;
put_and_exit:
	newinet->inet_opt = NULL;
inet: Fix kmemleak in tcp_v4/6_syn_recv_sock and dccp_v4/6_request_recv_sock
If in either of the above functions inet_csk_route_child_sock() or
__inet_inherit_port() fails, the newsk will not be freed:
unreferenced object 0xffff88022e8a92c0 (size 1592):
comm "softirq", pid 0, jiffies 4294946244 (age 726.160s)
hex dump (first 32 bytes):
0a 01 01 01 0a 01 01 02 00 00 00 00 a7 cc 16 00 ................
02 00 03 01 00 00 00 00 00 00 00 00 00 00 00 00 ................
backtrace:
[<ffffffff8153d190>] kmemleak_alloc+0x21/0x3e
[<ffffffff810ab3e7>] kmem_cache_alloc+0xb5/0xc5
[<ffffffff8149b65b>] sk_prot_alloc.isra.53+0x2b/0xcd
[<ffffffff8149b784>] sk_clone_lock+0x16/0x21e
[<ffffffff814d711a>] inet_csk_clone_lock+0x10/0x7b
[<ffffffff814ebbc3>] tcp_create_openreq_child+0x21/0x481
[<ffffffff814e8fa5>] tcp_v4_syn_recv_sock+0x3a/0x23b
[<ffffffff814ec5ba>] tcp_check_req+0x29f/0x416
[<ffffffff814e8e10>] tcp_v4_do_rcv+0x161/0x2bc
[<ffffffff814eb917>] tcp_v4_rcv+0x6c9/0x701
[<ffffffff814cea9f>] ip_local_deliver_finish+0x70/0xc4
[<ffffffff814cec20>] ip_local_deliver+0x4e/0x7f
[<ffffffff814ce9f8>] ip_rcv_finish+0x1fc/0x233
[<ffffffff814cee68>] ip_rcv+0x217/0x267
[<ffffffff814a7bbe>] __netif_receive_skb+0x49e/0x553
[<ffffffff814a7cc3>] netif_receive_skb+0x50/0x82
This happens because sk_clone_lock() initializes sk_refcnt to 2, and thus
a single sock_put() is not enough to free the memory. Additionally, things
like xfrm, memcg, cookie_values, ... may have been initialized.
We have to free them properly.
This is fixed by forcing a call to tcp_done(), ending up in
inet_csk_destroy_sock(), which does the final sock_put(). tcp_done() is
necessary because it performs all the cleanup on xfrm, memcg,
cookie_values, ...
Before calling tcp_done, we have to set the socket to SOCK_DEAD, to
force it entering inet_csk_destroy_sock. To avoid the warning in
inet_csk_destroy_sock, inet_num has to be set to 0.
As inet_csk_destroy_sock does a dec on orphan_count, we first have to
increase it.
Calling tcp_done() allows us to remove the calls to
tcp_clear_xmit_timer() and tcp_cleanup_congestion_control().
A similar approach is taken for dccp by calling dccp_done().
This is in the kernel since 093d282321 (tproxy: fix hash locking issue
when using port redirection in __inet_inherit_port()), thus since
version >= 2.6.37.
Signed-off-by: Christoph Paasch <christoph.paasch@uclouvain.be>
Signed-off-by: David S. Miller <davem@davemloft.net>
2012-12-14 08:07:58 +04:00
	inet_csk_prepare_forced_close(newsk);
	dccp_done(newsk);
	goto exit;
}
EXPORT_SYMBOL_GPL(dccp_v4_request_recv_sock);
static struct dst_entry* dccp_v4_route_skb(struct net *net, struct sock *sk,
					   struct sk_buff *skb)
{
	struct rtable *rt;
	const struct iphdr *iph = ip_hdr(skb);
	struct flowi4 fl4 = {
		.flowi4_oif = inet_iif(skb),
		.daddr = iph->saddr,
		.saddr = iph->daddr,
		.flowi4_tos = RT_CONN_FLAGS(sk),
		.flowi4_proto = sk->sk_protocol,
		.fl4_sport = dccp_hdr(skb)->dccph_dport,
		.fl4_dport = dccp_hdr(skb)->dccph_sport,
	};

	security_skb_classify_flow(skb, flowi4_to_flowi_common(&fl4));
	rt = ip_route_output_flow(net, &fl4, sk);
	if (IS_ERR(rt)) {
		IP_INC_STATS(net, IPSTATS_MIB_OUTNOROUTES);
		return NULL;
	}

	return &rt->dst;
}
static int dccp_v4_send_response(const struct sock *sk, struct request_sock *req)
{
	int err = -1;
	struct sk_buff *skb;
	struct dst_entry *dst;
	struct flowi4 fl4;

	dst = inet_csk_route_req(sk, &fl4, req);
	if (dst == NULL)
		goto out;

	skb = dccp_make_response(sk, dst, req);
	if (skb != NULL) {
		const struct inet_request_sock *ireq = inet_rsk(req);
		struct dccp_hdr *dh = dccp_hdr(skb);

		dh->dccph_checksum = dccp_v4_csum_finish(skb, ireq->ir_loc_addr,
							 ireq->ir_rmt_addr);
		rcu_read_lock();
		err = ip_build_and_send_pkt(skb, sk, ireq->ir_loc_addr,
					    ireq->ir_rmt_addr,
					    rcu_dereference(ireq->ireq_opt),
					    inet_sk(sk)->tos);
		rcu_read_unlock();
		err = net_xmit_eval(err);
	}

out:
	dst_release(dst);
	return err;
}
static void dccp_v4_ctl_send_reset(const struct sock *sk, struct sk_buff *rxskb)
{
	int err;
	const struct iphdr *rxiph;
	struct sk_buff *skb;
	struct dst_entry *dst;
	struct net *net = dev_net(skb_dst(rxskb)->dev);
	struct dccp_v4_pernet *pn;
	struct sock *ctl_sk;

	/* Never send a reset in response to a reset. */
[DCCP]: Factor out common code for generating Resets
This factors code common to dccp_v{4,6}_ctl_send_reset into a separate function,
and adds support for filling in the Data 1 ... Data 3 fields from RFC 4340, 5.6.
It is useful to have this separate, since the following Reset codes will always
be generated from the control socket rather than via dccp_send_reset:
* Code 3, "No Connection", cf. 8.3.1;
* Code 4, "Packet Error" (identification for Data 1 added);
* Code 5, "Option Error" (identification for Data 1..3 added, will be used later);
* Code 6, "Mandatory Error" (same as Option Error);
* Code 7, "Connection Refused" (what on Earth is the difference to "No Connection"?);
* Code 8, "Bad Service Code";
* Code 9, "Too Busy";
* Code 10, "Bad Init Cookie" (not used).
Code 0 is not recommended by the RFC, the following codes would be used in
dccp_send_reset() instead, since they all relate to an established DCCP connection:
* Code 1, "Closed";
* Code 2, "Aborted";
* Code 11, "Aggression Penalty" (12.3).
Signed-off-by: Gerrit Renker <gerrit@erg.abdn.ac.uk>
Signed-off-by: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
2007-09-26 21:35:19 +04:00
	if (dccp_hdr(rxskb)->dccph_type == DCCP_PKT_RESET)
		return;

	if (skb_rtable(rxskb)->rt_type != RTN_LOCAL)
		return;

	pn = net_generic(net, dccp_v4_pernet_id);
	ctl_sk = pn->v4_ctl_sk;
	dst = dccp_v4_route_skb(net, ctl_sk, rxskb);
	if (dst == NULL)
		return;

	skb = dccp_ctl_make_reset(ctl_sk, rxskb);
	if (skb == NULL)
		goto out;

	rxiph = ip_hdr(rxskb);
	dccp_hdr(skb)->dccph_checksum = dccp_v4_csum_finish(skb, rxiph->saddr,
							    rxiph->daddr);
	skb_dst_set(skb, dst_clone(dst));

	local_bh_disable();
	bh_lock_sock(ctl_sk);
	err = ip_build_and_send_pkt(skb, ctl_sk,
				    rxiph->daddr, rxiph->saddr, NULL,
				    inet_sk(ctl_sk)->tos);
	bh_unlock_sock(ctl_sk);

	if (net_xmit_eval(err) == 0) {
		__DCCP_INC_STATS(DCCP_MIB_OUTSEGS);
		__DCCP_INC_STATS(DCCP_MIB_OUTRSTS);
	}
	local_bh_enable();
out:
	dst_release(dst);
}
static void dccp_v4_reqsk_destructor(struct request_sock *req)
{
	dccp_feat_list_purge(&dccp_rsk(req)->dreq_featneg);
	kfree(rcu_dereference_protected(inet_rsk(req)->ireq_opt, 1));
}

void dccp_syn_ack_timeout(const struct request_sock *req)
{
}
EXPORT_SYMBOL(dccp_syn_ack_timeout);

static struct request_sock_ops dccp_request_sock_ops __read_mostly = {
	.family		= PF_INET,
	.obj_size	= sizeof(struct dccp_request_sock),
	.rtx_syn_ack	= dccp_v4_send_response,
	.send_ack	= dccp_reqsk_send_ack,
	.destructor	= dccp_v4_reqsk_destructor,
	.send_reset	= dccp_v4_ctl_send_reset,
	.syn_ack_timeout = dccp_syn_ack_timeout,
};
int dccp_v4_conn_request(struct sock *sk, struct sk_buff *skb)
{
	struct inet_request_sock *ireq;
	struct request_sock *req;
	struct dccp_request_sock *dreq;
	const __be32 service = dccp_hdr_request(skb)->dccph_req_service;
	struct dccp_skb_cb *dcb = DCCP_SKB_CB(skb);

	/* Never answer to DCCP_PKT_REQUESTs sent to broadcast or multicast */
	if (skb_rtable(skb)->rt_flags & (RTCF_BROADCAST | RTCF_MULTICAST))
[DCCP]: Twice the wrong reset code in receiving connection-Requests
This fixes two bugs in processing of connection-Requests in
v{4,6}_conn_request:
1. Due to using the variable `reset_code', the Reset code generated
internally by dccp_parse_options() is overwritten with the
initialised value ("Too Busy") of reset_code, which is not what is
intended.
2. When receiving a connection-Request on a multicast or broadcast
address, no Reset should be generated, to avoid storms of such
packets. Instead of jumping to the `drop' label, the
v{4,6}_conn_request functions now return 0. Below is why in my
understanding this is correct:
When the conn_request function returns < 0, then the caller,
dccp_rcv_state_process(), returns 1. In all instances where
dccp_rcv_state_process is called (dccp_v4_do_rcv, dccp_v6_do_rcv,
and dccp_child_process), a return value of != 0 from
dccp_rcv_state_process() means that a Reset is generated.
If on the other hand the conn_request function returns 0, the
packet is discarded and no Reset is generated.
Note: There may be a related problem when sending the Response, due to
the following.
if (dccp_v6_send_response(sk, req, NULL))
goto drop_and_free;
/* ... */
drop_and_free:
return -1;
In this case, if send_response fails due to transmission errors, the
next thing that is generated is a Reset with a code "Too Busy". I
haven't been able to conjure up such a condition, but it might be good
to change the behaviour here also (not done by this patch).
Signed-off-by: Gerrit Renker <gerrit@erg.abdn.ac.uk>
Signed-off-by: Ian McDonald <ian.mcdonald@jandi.co.nz>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-10-05 01:52:28 +04:00
		return 0;	/* discard, don't send a reset here */

	if (dccp_bad_service_code(sk, service)) {
		dcb->dccpd_reset_code = DCCP_RESET_CODE_BAD_SERVICE_CODE;
		goto drop;
	}

	/*
	 * TW buckets are converted to open requests without
	 * limitations, they conserve resources and peer is
	 * evidently real one.
	 */
	dcb->dccpd_reset_code = DCCP_RESET_CODE_TOO_BUSY;
	if (inet_csk_reqsk_queue_is_full(sk))
		goto drop;

	if (sk_acceptq_is_full(sk))
		goto drop;

	req = inet_reqsk_alloc(&dccp_request_sock_ops, sk, true);
	if (req == NULL)
		goto drop;

	if (dccp_reqsk_init(req, dccp_sk(sk), skb))
		goto drop_and_free;

	dreq = dccp_rsk(req);
	if (dccp_parse_options(sk, dreq, skb))
		goto drop_and_free;

	if (security_inet_conn_request(sk, skb, req))
		goto drop_and_free;

	ireq = inet_rsk(req);
	sk_rcv_saddr_set(req_to_sk(req), ip_hdr(skb)->daddr);
	sk_daddr_set(req_to_sk(req), ip_hdr(skb)->saddr);
	ireq->ir_mark = inet_request_mark(sk, skb);
	ireq->ireq_family = AF_INET;
	ireq->ir_iif = READ_ONCE(sk->sk_bound_dev_if);

	/*
	 * Step 3: Process LISTEN state
	 *
	 * Set S.ISR, S.GSR, S.SWL, S.SWH from packet or Init Cookie
	 *
	 * Setting S.SWL/S.SWH is deferred to dccp_create_openreq_child().
	 */
	dreq->dreq_isr	   = dcb->dccpd_seq;
	dreq->dreq_gsr	   = dreq->dreq_isr;
	dreq->dreq_iss	   = dccp_v4_init_sequence(skb);
	dreq->dreq_gss	   = dreq->dreq_iss;
	dreq->dreq_service = service;

	if (dccp_v4_send_response(sk, req))
		goto drop_and_free;

	inet_csk_reqsk_queue_hash_add(sk, req, DCCP_TIMEOUT_INIT);
	reqsk_put(req);
	return 0;

drop_and_free:
	reqsk_free(req);
drop:
	__DCCP_INC_STATS(DCCP_MIB_ATTEMPTFAILS);
	return -1;
}
EXPORT_SYMBOL_GPL(dccp_v4_conn_request);
int dccp_v4_do_rcv(struct sock *sk, struct sk_buff *skb)
{
	struct dccp_hdr *dh = dccp_hdr(skb);

	if (sk->sk_state == DCCP_OPEN) { /* Fast path */
		if (dccp_rcv_established(sk, skb, dh, skb->len))
			goto reset;
		return 0;
	}

	/*
	 *  Step 3: Process LISTEN state
	 *	 If P.type == Request or P contains a valid Init Cookie option,
	 *	      (* Must scan the packet's options to check for Init
	 *		 Cookies.  Only Init Cookies are processed here,
	 *		 however; other options are processed in Step 8.  This
	 *		 scan need only be performed if the endpoint uses Init
	 *		 Cookies *)
	 *	      (* Generate a new socket and switch to that socket *)
	 *	      Set S := new socket for this port pair
	 *	      S.state = RESPOND
	 *	      Choose S.ISS (initial seqno) or set from Init Cookies
	 *	      Initialize S.GAR := S.ISS
	 *	      Set S.ISR, S.GSR, S.SWL, S.SWH from packet or Init Cookies
	 *	      Continue with S.state == RESPOND
	 *	      (* A Response packet will be generated in Step 11 *)
	 *	 Otherwise,
	 *	      Generate Reset(No Connection) unless P.type == Reset
	 *	      Drop packet and return
	 *
	 * NOTE: the check for the packet types is done in
	 *	 dccp_rcv_state_process
	 */
	if (dccp_rcv_state_process(sk, skb, dh, skb->len))
		goto reset;
	return 0;

reset:
	dccp_v4_ctl_send_reset(sk, skb);
	kfree_skb(skb);
	return 0;
}
EXPORT_SYMBOL_GPL(dccp_v4_do_rcv);
/**
 * dccp_invalid_packet  -  check for malformed packets
 * @skb: Packet to validate
 *
 * Implements RFC 4340, 8.5:  Step 1: Check header basics
 * Packets that fail these checks are ignored and do not receive Resets.
 */
int dccp_invalid_packet(struct sk_buff *skb)
{
	const struct dccp_hdr *dh;
	unsigned int cscov;
	u8 dccph_doff;
	if (skb->pkt_type != PACKET_HOST)
		return 1;

	/* If the packet is shorter than 12 bytes, drop packet and return */
	if (!pskb_may_pull(skb, sizeof(struct dccp_hdr))) {
		DCCP_WARN("pskb_may_pull failed\n");
		return 1;
	}

	dh = dccp_hdr(skb);

	/* If P.type is not understood, drop packet and return */
	if (dh->dccph_type >= DCCP_PKT_INVALID) {
		DCCP_WARN("invalid packet type\n");
		return 1;
	}

	/*
	 * If P.Data Offset is too small for packet type, drop packet and return
	 */
	dccph_doff = dh->dccph_doff;
	if (dccph_doff < dccp_hdr_len(skb) / sizeof(u32)) {
		DCCP_WARN("P.Data Offset(%u) too small\n", dccph_doff);
		return 1;
	}
	/*
	 * If P.Data Offset is too large for packet, drop packet and return
	 */
	if (!pskb_may_pull(skb, dccph_doff * sizeof(u32))) {
		DCCP_WARN("P.Data Offset(%u) too large\n", dccph_doff);
		return 1;
	}
	dh = dccp_hdr(skb);
	/*
	 * If P.type is not Data, Ack, or DataAck and P.X == 0 (the packet
	 * has short sequence numbers), drop packet and return
	 */
	if ((dh->dccph_type < DCCP_PKT_DATA ||
	    dh->dccph_type > DCCP_PKT_DATAACK) && dh->dccph_x == 0) {
		DCCP_WARN("P.type (%s) not Data || [Data]Ack, while P.X == 0\n",
			  dccp_packet_name(dh->dccph_type));
		return 1;
	}

	/*
	 * If P.CsCov is too large for the packet size, drop packet and return.
	 * This must come _before_ checksumming (not as RFC 4340 suggests).
	 */
	cscov = dccp_csum_coverage(skb);
	if (cscov > skb->len) {
		DCCP_WARN("P.CsCov %u exceeds packet length %d\n",
			  dh->dccph_cscov, skb->len);
		return 1;
	}

	/* If header checksum is incorrect, drop packet and return.
	 * (This step is completed in the AF-dependent functions.) */
	skb->csum = skb_checksum(skb, 0, cscov, 0);

	return 0;
}
EXPORT_SYMBOL_GPL(dccp_invalid_packet);
/* this is called when real data arrives */
static int dccp_v4_rcv(struct sk_buff *skb)
{
	const struct dccp_hdr *dh;
	const struct iphdr *iph;
	bool refcounted;
	struct sock *sk;
	int min_cov;

	/* Step 1: Check header basics */

	if (dccp_invalid_packet(skb))
		goto discard_it;

	iph = ip_hdr(skb);
	/* Step 1: If header checksum is incorrect, drop packet and return */
	if (dccp_v4_csum_finish(skb, iph->saddr, iph->daddr)) {
		DCCP_WARN("dropped packet with invalid checksum\n");
		goto discard_it;
	}

	dh = dccp_hdr(skb);

	DCCP_SKB_CB(skb)->dccpd_seq  = dccp_hdr_seq(dh);
	DCCP_SKB_CB(skb)->dccpd_type = dh->dccph_type;

	dccp_pr_debug("%8.8s src=%pI4@%-5d dst=%pI4@%-5d seq=%llu",
		      dccp_packet_name(dh->dccph_type),
		      &iph->saddr, ntohs(dh->dccph_sport),
		      &iph->daddr, ntohs(dh->dccph_dport),
		      (unsigned long long) DCCP_SKB_CB(skb)->dccpd_seq);

	if (dccp_packet_without_ack(skb)) {
		DCCP_SKB_CB(skb)->dccpd_ack_seq = DCCP_PKT_WITHOUT_ACK_SEQ;
		dccp_pr_debug_cat("\n");
	} else {
		DCCP_SKB_CB(skb)->dccpd_ack_seq = dccp_hdr_ack_seq(skb);
		dccp_pr_debug_cat(", ack=%llu\n", (unsigned long long)
				  DCCP_SKB_CB(skb)->dccpd_ack_seq);
	}
lookup:
	sk = __inet_lookup_skb(&dccp_hashinfo, skb, __dccp_hdr_len(dh),
			       dh->dccph_sport, dh->dccph_dport, 0, &refcounted);
	if (!sk) {
		dccp_pr_debug("failed to look up flow ID in table and "
			      "get corresponding socket\n");
		goto no_dccp_socket;
	}

	/*
	 * Step 2:
	 *	... or S.state == TIMEWAIT,
	 *	   Generate Reset(No Connection) unless P.type == Reset
	 *	   Drop packet and return
	 */
	if (sk->sk_state == DCCP_TIME_WAIT) {
		dccp_pr_debug("sk->sk_state == DCCP_TIME_WAIT: do_time_wait\n");
		inet_twsk_put(inet_twsk(sk));
		goto no_dccp_socket;
	}

	if (sk->sk_state == DCCP_NEW_SYN_RECV) {
		struct request_sock *req = inet_reqsk(sk);
		struct sock *nsk;

		sk = req->rsk_listener;
		if (unlikely(sk->sk_state != DCCP_LISTEN)) {
			inet_csk_reqsk_queue_drop_and_put(sk, req);
			goto lookup;
		}
		sock_hold(sk);
		refcounted = true;
		nsk = dccp_check_req(sk, skb, req);
		if (!nsk) {
			reqsk_put(req);
			goto discard_and_relse;
		}
		if (nsk == sk) {
			reqsk_put(req);
		} else if (dccp_child_process(sk, nsk, skb)) {
			dccp_v4_ctl_send_reset(sk, skb);
			goto discard_and_relse;
		} else {
			sock_put(sk);
			return 0;
		}
	}
	/*
	 * RFC 4340, sec. 9.2.1: Minimum Checksum Coverage
	 *	o if MinCsCov = 0, only packets with CsCov = 0 are accepted
	 *	o if MinCsCov > 0, also accept packets with CsCov >= MinCsCov
	 */
	min_cov = dccp_sk(sk)->dccps_pcrlen;
	if (dh->dccph_cscov && (min_cov == 0 || dh->dccph_cscov < min_cov)) {
		dccp_pr_debug("Packet CsCov %d does not satisfy MinCsCov %d\n",
			      dh->dccph_cscov, min_cov);
		/* FIXME: "Such packets SHOULD be reported using Data Dropped
		 *         options (Section 11.7) with Drop Code 0, Protocol
		 *         Constraints." */
		goto discard_and_relse;
	}
	if (!xfrm4_policy_check(sk, XFRM_POLICY_IN, skb))
		goto discard_and_relse;
	nf_reset_ct(skb);

	return __sk_receive_skb(sk, skb, 1, dh->dccph_doff * 4, refcounted);

no_dccp_socket:
	if (!xfrm4_policy_check(NULL, XFRM_POLICY_IN, skb))
		goto discard_it;
	/*
	 * Step 2:
	 *	If no socket ...
	 *		Generate Reset(No Connection) unless P.type == Reset
	 *		Drop packet and return
	 */
	if (dh->dccph_type != DCCP_PKT_RESET) {
		DCCP_SKB_CB(skb)->dccpd_reset_code =
					DCCP_RESET_CODE_NO_CONNECTION;
		dccp_v4_ctl_send_reset(sk, skb);
	}

discard_it:
	kfree_skb(skb);
	return 0;

discard_and_relse:
	if (refcounted)
		sock_put(sk);
	goto discard_it;
}
static const struct inet_connection_sock_af_ops dccp_ipv4_af_ops = {
	.queue_xmit	   = ip_queue_xmit,
	.send_check	   = dccp_v4_send_check,
	.rebuild_header	   = inet_sk_rebuild_header,
	.conn_request	   = dccp_v4_conn_request,
	.syn_recv_sock	   = dccp_v4_request_recv_sock,
	.net_header_len	   = sizeof(struct iphdr),
	.setsockopt	   = ip_setsockopt,
	.getsockopt	   = ip_getsockopt,
	.addr2sockaddr	   = inet_csk_addr2sockaddr,
	.sockaddr_len	   = sizeof(struct sockaddr_in),
};
static int dccp_v4_init_sock(struct sock *sk)
{
	static __u8 dccp_v4_ctl_sock_initialized;
	int err = dccp_init_sock(sk, dccp_v4_ctl_sock_initialized);

	if (err == 0) {
		if (unlikely(!dccp_v4_ctl_sock_initialized))
			dccp_v4_ctl_sock_initialized = 1;
		inet_csk(sk)->icsk_af_ops = &dccp_ipv4_af_ops;
	}

	return err;
}

static struct timewait_sock_ops dccp_timewait_sock_ops = {
	.twsk_obj_size	= sizeof(struct inet_timewait_sock),
};
static struct proto dccp_v4_prot = {
	.name			= "DCCP",
	.owner			= THIS_MODULE,
	.close			= dccp_close,
	.connect		= dccp_v4_connect,
	.disconnect		= dccp_disconnect,
	.ioctl			= dccp_ioctl,
	.init			= dccp_v4_init_sock,
	.setsockopt		= dccp_setsockopt,
	.getsockopt		= dccp_getsockopt,
	.sendmsg		= dccp_sendmsg,
	.recvmsg		= dccp_recvmsg,
	.backlog_rcv		= dccp_v4_do_rcv,
[SOCK] proto: Add hashinfo member to struct proto
This way we can remove TCP and DCCP specific versions of
sk->sk_prot->get_port: both v4 and v6 use inet_csk_get_port
sk->sk_prot->hash: inet_hash is directly used, only v6 need
a specific version to deal with mapped sockets
sk->sk_prot->unhash: both v4 and v6 use inet_hash directly
struct inet_connection_sock_af_ops also gets a new member, bind_conflict, so
that inet_csk_get_port can find the per family routine.
Now only the lookup routines receive as a parameter a struct inet_hashtable.
With this we further reuse code, reducing the difference among INET transport
protocols.
Eventually work has to be done on UDP and SCTP to make them share this
infrastructure and get as a bonus inet_diag interfaces so that iproute can be
used with these protocols.
net-2.6/net/ipv4/inet_hashtables.c:
  struct proto                       |   +8
  struct inet_connection_sock_af_ops |   +8
 2 structs changed
  __inet_hash_nolisten               |  +18
  __inet_hash                        | -210
  inet_put_port                      |   +8
  inet_bind_bucket_create            |   +1
  __inet_hash_connect                |   -8
 5 functions changed, 27 bytes added, 218 bytes removed, diff: -191

net-2.6/net/core/sock.c:
  proto_seq_show                     |   +3
 1 function changed, 3 bytes added, diff: +3

net-2.6/net/ipv4/inet_connection_sock.c:
  inet_csk_get_port                  |  +15
 1 function changed, 15 bytes added, diff: +15

net-2.6/net/ipv4/tcp.c:
  tcp_set_state                      |   -7
 1 function changed, 7 bytes removed, diff: -7

net-2.6/net/ipv4/tcp_ipv4.c:
  tcp_v4_get_port                    |  -31
  tcp_v4_hash                        |  -48
  tcp_v4_destroy_sock                |   -7
  tcp_v4_syn_recv_sock               |   -2
  tcp_unhash                         | -179
 5 functions changed, 267 bytes removed, diff: -267

net-2.6/net/ipv6/inet6_hashtables.c:
  __inet6_hash                       |   +8
 1 function changed, 8 bytes added, diff: +8

net-2.6/net/ipv4/inet_hashtables.c:
  inet_unhash                        | +190
  inet_hash                          | +242
 2 functions changed, 432 bytes added, diff: +432

vmlinux:
 16 functions changed, 485 bytes added, 492 bytes removed, diff: -7

/home/acme/git/net-2.6/net/ipv6/tcp_ipv6.c:
  tcp_v6_get_port                    |  -31
  tcp_v6_hash                        |   -7
  tcp_v6_syn_recv_sock               |   -9
 3 functions changed, 47 bytes removed, diff: -47

/home/acme/git/net-2.6/net/dccp/proto.c:
  dccp_destroy_sock                  |   -7
  dccp_unhash                        | -179
  dccp_hash                          |  -49
  dccp_set_state                     |   -7
  dccp_done                          |   +1
 5 functions changed, 1 bytes added, 242 bytes removed, diff: -241

/home/acme/git/net-2.6/net/dccp/ipv4.c:
  dccp_v4_get_port                   |  -31
  dccp_v4_request_recv_sock          |   -2
 2 functions changed, 33 bytes removed, diff: -33

/home/acme/git/net-2.6/net/dccp/ipv6.c:
  dccp_v6_get_port                   |  -31
  dccp_v6_hash                       |   -7
  dccp_v6_request_recv_sock          |   +5
 3 functions changed, 5 bytes added, 38 bytes removed, diff: -33
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
	.hash		   = inet_hash,
	.unhash		   = inet_unhash,
	.accept		   = inet_csk_accept,
	.get_port	   = inet_csk_get_port,
	.shutdown	   = dccp_shutdown,
	.destroy	   = dccp_destroy_sock,
	.orphan_count	   = &dccp_orphan_count,
	.max_header	   = MAX_DCCP_HEADER,
	.obj_size	   = sizeof(struct dccp_sock),
	.slab_flags	   = SLAB_TYPESAFE_BY_RCU,
	.rsk_prot	   = &dccp_request_sock_ops,
	.twsk_prot	   = &dccp_timewait_sock_ops,
	.h.hashinfo	   = &dccp_hashinfo,
};
static const struct net_protocol dccp_v4_protocol = {
	.handler	= dccp_v4_rcv,
	.err_handler	= dccp_v4_err,
	.no_policy	= 1,
	.icmp_strict_tag_validation = 1,
};
static const struct proto_ops inet_dccp_ops = {
	.family		   = PF_INET,
	.owner		   = THIS_MODULE,
	.release	   = inet_release,
	.bind		   = inet_bind,
	.connect	   = inet_stream_connect,
	.socketpair	   = sock_no_socketpair,
	.accept		   = inet_accept,
	.getname	   = inet_getname,
	/* FIXME: work on tcp_poll to rename it to inet_csk_poll */
	.poll		   = dccp_poll,
	.ioctl		   = inet_ioctl,
	.gettstamp	   = sock_gettstamp,
	/* FIXME: work on inet_listen to rename it to sock_common_listen */
	.listen		   = inet_dccp_listen,
	.shutdown	   = inet_shutdown,
	.setsockopt	   = sock_common_setsockopt,
	.getsockopt	   = sock_common_getsockopt,
	.sendmsg	   = inet_sendmsg,
	.recvmsg	   = sock_common_recvmsg,
	.mmap		   = sock_no_mmap,
	.sendpage	   = sock_no_sendpage,
};
static struct inet_protosw dccp_v4_protosw = {
	.type		= SOCK_DCCP,
	.protocol	= IPPROTO_DCCP,
	.prot		= &dccp_v4_prot,
	.ops		= &inet_dccp_ops,
	.flags		= INET_PROTOSW_ICSK,
};
static int __net_init dccp_v4_init_net(struct net *net)
{
	struct dccp_v4_pernet *pn = net_generic(net, dccp_v4_pernet_id);

	if (dccp_hashinfo.bhash == NULL)
		return -ESOCKTNOSUPPORT;

	return inet_ctl_sock_create(&pn->v4_ctl_sk, PF_INET,
				    SOCK_DCCP, IPPROTO_DCCP, net);
}

static void __net_exit dccp_v4_exit_net(struct net *net)
{
	struct dccp_v4_pernet *pn = net_generic(net, dccp_v4_pernet_id);

	inet_ctl_sock_destroy(pn->v4_ctl_sk);
}
static void __net_exit dccp_v4_exit_batch(struct list_head *net_exit_list)
{
	inet_twsk_purge(&dccp_hashinfo, AF_INET);
}

static struct pernet_operations dccp_v4_ops = {
	.init	= dccp_v4_init_net,
	.exit	= dccp_v4_exit_net,
	.exit_batch = dccp_v4_exit_batch,
	.id	= &dccp_v4_pernet_id,
	.size   = sizeof(struct dccp_v4_pernet),
};
static int __init dccp_v4_init(void)
{
	int err = proto_register(&dccp_v4_prot, 1);

	if (err)
		goto out;

	inet_register_protosw(&dccp_v4_protosw);

	err = register_pernet_subsys(&dccp_v4_ops);
	if (err)
		goto out_destroy_ctl_sock;

	err = inet_add_protocol(&dccp_v4_protocol, IPPROTO_DCCP);
	if (err)
		goto out_proto_unregister;

out:
	return err;
out_proto_unregister:
	unregister_pernet_subsys(&dccp_v4_ops);
out_destroy_ctl_sock:
	inet_unregister_protosw(&dccp_v4_protosw);
	proto_unregister(&dccp_v4_prot);
	goto out;
}

static void __exit dccp_v4_exit(void)
{
	inet_del_protocol(&dccp_v4_protocol, IPPROTO_DCCP);
	unregister_pernet_subsys(&dccp_v4_ops);
	inet_unregister_protosw(&dccp_v4_protosw);
	proto_unregister(&dccp_v4_prot);
}

module_init(dccp_v4_init);
module_exit(dccp_v4_exit);
/*
 * __stringify doesn't like enums, so use SOCK_DCCP (6) and IPPROTO_DCCP (33)
 * values directly. Also cover the case where the protocol is not specified,
 * i.e. net-pf-PF_INET-proto-0-type-SOCK_DCCP
 */
MODULE_ALIAS_NET_PF_PROTO_TYPE(PF_INET, 33, 6);
MODULE_ALIAS_NET_PF_PROTO_TYPE(PF_INET, 0, 6);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Arnaldo Carvalho de Melo <acme@mandriva.com>");
MODULE_DESCRIPTION("DCCP - Datagram Congestion Controlled Protocol");