/*
 * INET		An implementation of the TCP/IP protocol suite for the LINUX
 *		operating system.  INET is implemented using the  BSD Socket
 *		interface as the means of communication with the user level.
 *
 *		ROUTE - implementation of the IP router.
 *
 * Authors:	Ross Biro
 *		Fred N. van Kempen, <waltje@uWalt.NL.Mugnet.ORG>
 *		Alan Cox, <gw4pts@gw4pts.ampr.org>
 *		Linus Torvalds, <Linus.Torvalds@helsinki.fi>
 *		Alexey Kuznetsov, <kuznet@ms2.inr.ac.ru>
 *
 * Fixes:
 *		Alan Cox	:	Verify area fixes.
 *		Alan Cox	:	cli() protects routing changes
 *		Rui Oliveira	:	ICMP routing table updates
 *		(rco@di.uminho.pt)	Routing table insertion and update
 *		Linus Torvalds	:	Rewrote bits to be sensible
 *		Alan Cox	:	Added BSD route gw semantics
 *		Alan Cox	:	Super /proc >4K
 *		Alan Cox	:	MTU in route table
 *		Alan Cox	:	MSS actually. Also added the window
 *					clamper.
 *		Sam Lantinga	:	Fixed route matching in rt_del()
 *		Alan Cox	:	Routing cache support.
 *		Alan Cox	:	Removed compatibility cruft.
 *		Alan Cox	:	RTF_REJECT support.
 *		Alan Cox	:	TCP irtt support.
 *		Jonathan Naylor	:	Added Metric support.
 *	Miquel van Smoorenburg	:	BSD API fixes.
 *	Miquel van Smoorenburg	:	Metrics.
 *		Alan Cox	:	Use __u32 properly
 *		Alan Cox	:	Aligned routing errors more closely with BSD;
 *					our system is still very different.
 *		Alan Cox	:	Faster /proc handling
 *	Alexey Kuznetsov	:	Massive rework to support tree based routing,
 *					routing caches and better behaviour.
 *
 *		Olaf Erb	:	irtt wasn't being copied right.
 *		Bjorn Ekwall	:	Kerneld route support.
 *		Alan Cox	:	Multicast fixed (I hope)
 *		Pavel Krauz	:	Limited broadcast fixed
 *		Mike McLagan	:	Routing by source
 *	Alexey Kuznetsov	:	End of old history. Split to fib.c and
 *					route.c and rewritten from scratch.
 *		Andi Kleen	:	Load-limit warning messages.
 *	Vitaly E. Lavrov	:	Transparent proxy revived after year coma.
 *	Vitaly E. Lavrov	:	Race condition in ip_route_input_slow.
 *	Tobias Ringstrom	:	Uninitialized res.type in ip_route_output_slow.
 *	Vladimir V. Ivanov	:	IP rule info (flowid) is really useful.
 *		Marc Boucher	:	routing by fwmark
 *	Robert Olsson		:	Added rt_cache statistics
 *	Arnaldo C. Melo		:	Convert proc stuff to seq_file
 *	Eric Dumazet		:	hashed spinlocks and rt_check_expire() fixes.
 *	Ilia Sotnikov		:	Ignore TOS on PMTUD and Redirect
 *	Ilia Sotnikov		:	Removed TOS from hash calculations
 *
 *		This program is free software; you can redistribute it and/or
 *		modify it under the terms of the GNU General Public License
 *		as published by the Free Software Foundation; either version
 *		2 of the License, or (at your option) any later version.
 */
#define pr_fmt(fmt) "IPv4: " fmt

#include <linux/module.h>
#include <asm/uaccess.h>
#include <linux/bitops.h>
#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/string.h>
#include <linux/socket.h>
#include <linux/sockios.h>
#include <linux/errno.h>
#include <linux/in.h>
#include <linux/inet.h>
#include <linux/netdevice.h>
#include <linux/proc_fs.h>
#include <linux/init.h>
#include <linux/skbuff.h>
#include <linux/inetdevice.h>
#include <linux/igmp.h>
#include <linux/pkt_sched.h>
#include <linux/mroute.h>
#include <linux/netfilter_ipv4.h>
#include <linux/random.h>
#include <linux/rcupdate.h>
#include <linux/times.h>
#include <linux/slab.h>
#include <linux/jhash.h>
#include <net/dst.h>
#include <net/dst_metadata.h>
#include <net/net_namespace.h>
#include <net/protocol.h>
#include <net/ip.h>
#include <net/route.h>
#include <net/inetpeer.h>
#include <net/sock.h>
#include <net/ip_fib.h>
#include <net/arp.h>
#include <net/tcp.h>
#include <net/icmp.h>
#include <net/xfrm.h>
#include <net/lwtunnel.h>
#include <net/netevent.h>
#include <net/rtnetlink.h>
#ifdef CONFIG_SYSCTL
#include <linux/sysctl.h>
#include <linux/kmemleak.h>
#endif
#include <net/secure_seq.h>
#include <net/ip_tunnels.h>
#include <net/l3mdev.h>

#define RT_FL_TOS(oldflp4) \
	((oldflp4)->flowi4_tos & (IPTOS_RT_MASK | RTO_ONLINK))
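
/* Note (added): RTO_ONLINK rides in the low bit of the tos key, so the
 * mask above preserves both the routing-relevant TOS bits and the
 * on-link flag in one value.
 */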

#define RT_GC_TIMEOUT (300*HZ)

static int ip_rt_max_size;
static int ip_rt_redirect_number __read_mostly	= 9;
static int ip_rt_redirect_load __read_mostly	= HZ / 50;
static int ip_rt_redirect_silence __read_mostly	= ((HZ / 50) << (9 + 1));
static int ip_rt_error_cost __read_mostly	= HZ;
static int ip_rt_error_burst __read_mostly	= 5 * HZ;
static int ip_rt_mtu_expires __read_mostly	= 10 * 60 * HZ;
static int ip_rt_min_pmtu __read_mostly		= 512 + 20 + 20;
static int ip_rt_min_advmss __read_mostly	= 256;
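
/* Note (added): the 512 + 20 + 20 minimum PMTU reads as 512 bytes of
 * payload plus a 20-byte IP header and a 20-byte TCP header; incoming
 * PMTU reports below this floor are clamped up to it.
 */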

/*
 *	Interface to generic destination cache.
 */

static struct dst_entry *ipv4_dst_check(struct dst_entry *dst, u32 cookie);
static unsigned int	 ipv4_default_advmss(const struct dst_entry *dst);
static unsigned int	 ipv4_mtu(const struct dst_entry *dst);
static struct dst_entry *ipv4_negative_advice(struct dst_entry *dst);
static void		 ipv4_link_failure(struct sk_buff *skb);
static void		 ip_rt_update_pmtu(struct dst_entry *dst, struct sock *sk,
					   struct sk_buff *skb, u32 mtu);
static void		 ip_do_redirect(struct dst_entry *dst, struct sock *sk,
					struct sk_buff *skb);
static void		 ipv4_dst_destroy(struct dst_entry *dst);

static u32 *ipv4_cow_metrics(struct dst_entry *dst, unsigned long old)
{
	WARN_ON(1);
	return NULL;
}

static struct neighbour *ipv4_neigh_lookup(const struct dst_entry *dst,
					   struct sk_buff *skb,
					   const void *daddr);

static struct dst_ops ipv4_dst_ops = {
	.family =		AF_INET,
	.check =		ipv4_dst_check,
	.default_advmss =	ipv4_default_advmss,
	.mtu =			ipv4_mtu,
	.cow_metrics =		ipv4_cow_metrics,
	.destroy =		ipv4_dst_destroy,
	.negative_advice =	ipv4_negative_advice,
	.link_failure =		ipv4_link_failure,
	.update_pmtu =		ip_rt_update_pmtu,
	.redirect =		ip_do_redirect,
	.local_out =		__ip_local_out,
	.neigh_lookup =		ipv4_neigh_lookup,
};

#define ECN_OR_COST(class)	TC_PRIO_##class

const __u8 ip_tos2prio[16] = {
	TC_PRIO_BESTEFFORT,
	ECN_OR_COST(BESTEFFORT),
	TC_PRIO_BESTEFFORT,
	ECN_OR_COST(BESTEFFORT),
	TC_PRIO_BULK,
	ECN_OR_COST(BULK),
	TC_PRIO_BULK,
	ECN_OR_COST(BULK),
	TC_PRIO_INTERACTIVE,
	ECN_OR_COST(INTERACTIVE),
	TC_PRIO_INTERACTIVE,
	ECN_OR_COST(INTERACTIVE),
	TC_PRIO_INTERACTIVE_BULK,
	ECN_OR_COST(INTERACTIVE_BULK),
	TC_PRIO_INTERACTIVE_BULK,
	ECN_OR_COST(INTERACTIVE_BULK)
};
EXPORT_SYMBOL(ip_tos2prio);
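
/* A minimal illustrative sketch (not part of the original file): the table
 * above is indexed by the IP header's four TOS bits shifted down by one;
 * rt_tos2priority() in <net/route.h> performs the same lookup.  The helper
 * name below is hypothetical.
 */
static inline char example_tos2prio(u8 tos)
{
	/* IPTOS_TOS() masks the byte down to the TOS bits first. */
	return ip_tos2prio[IPTOS_TOS(tos) >> 1];
}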

static DEFINE_PER_CPU(struct rt_cache_stat, rt_cache_stat);
#define RT_CACHE_STAT_INC(field) raw_cpu_inc(rt_cache_stat.field)
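
/* Illustrative usage (added): callers bump a per-CPU counter with e.g.
 *
 *	RT_CACHE_STAT_INC(in_slow_tot);
 *
 * raw_cpu_inc() skips the preemption checks of this_cpu_inc(); these are
 * statistics, so an increment landing on the "wrong" CPU after a
 * migration is harmless.
 */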

#ifdef CONFIG_PROC_FS
static void *rt_cache_seq_start(struct seq_file *seq, loff_t *pos)
{
	if (*pos)
		return NULL;
	return SEQ_START_TOKEN;
}

static void *rt_cache_seq_next(struct seq_file *seq, void *v, loff_t *pos)
{
	++*pos;
	return NULL;
}

static void rt_cache_seq_stop(struct seq_file *seq, void *v)
{
}

static int rt_cache_seq_show(struct seq_file *seq, void *v)
{
	if (v == SEQ_START_TOKEN)
		seq_printf(seq, "%-127s\n",
			   "Iface\tDestination\tGateway \tFlags\t\tRefCnt\tUse\t"
			   "Metric\tSource\t\tMTU\tWindow\tIRTT\tTOS\tHHRef\t"
			   "HHUptod\tSpecDst");
	return 0;
}

static const struct seq_operations rt_cache_seq_ops = {
	.start  = rt_cache_seq_start,
	.next   = rt_cache_seq_next,
	.stop   = rt_cache_seq_stop,
	.show   = rt_cache_seq_show,
};

static int rt_cache_seq_open(struct inode *inode, struct file *file)
{
	return seq_open(file, &rt_cache_seq_ops);
}

static const struct file_operations rt_cache_seq_fops = {
	.owner	 = THIS_MODULE,
	.open	 = rt_cache_seq_open,
	.read	 = seq_read,
	.llseek	 = seq_lseek,
	.release = seq_release,
};

static void *rt_cpu_seq_start(struct seq_file *seq, loff_t *pos)
{
	int cpu;

	if (*pos == 0)
		return SEQ_START_TOKEN;

	for (cpu = *pos-1; cpu < nr_cpu_ids; ++cpu) {
		if (!cpu_possible(cpu))
			continue;
		*pos = cpu+1;
		return &per_cpu(rt_cache_stat, cpu);
	}
	return NULL;
}

static void *rt_cpu_seq_next(struct seq_file *seq, void *v, loff_t *pos)
{
	int cpu;

	for (cpu = *pos; cpu < nr_cpu_ids; ++cpu) {
		if (!cpu_possible(cpu))
			continue;
		*pos = cpu+1;
		return &per_cpu(rt_cache_stat, cpu);
	}
	return NULL;

}

static void rt_cpu_seq_stop(struct seq_file *seq, void *v)
{
}

static int rt_cpu_seq_show(struct seq_file *seq, void *v)
{
	struct rt_cache_stat *st = v;

	if (v == SEQ_START_TOKEN) {
		seq_printf(seq, "entries  in_hit in_slow_tot in_slow_mc in_no_route in_brd in_martian_dst in_martian_src  out_hit out_slow_tot out_slow_mc  gc_total gc_ignored gc_goal_miss gc_dst_overflow in_hlist_search out_hlist_search\n");
		return 0;
	}

	seq_printf(seq,"%08x  %08x %08x %08x %08x %08x %08x %08x "
		   " %08x %08x %08x %08x %08x %08x %08x %08x %08x \n",
		   dst_entries_get_slow(&ipv4_dst_ops),
		   0, /* st->in_hit */
		   st->in_slow_tot,
		   st->in_slow_mc,
		   st->in_no_route,
		   st->in_brd,
		   st->in_martian_dst,
		   st->in_martian_src,
		   0, /* st->out_hit */
		   st->out_slow_tot,
		   st->out_slow_mc,
		   0, /* st->gc_total */
		   0, /* st->gc_ignored */
		   0, /* st->gc_goal_miss */
		   0, /* st->gc_dst_overflow */
		   0, /* st->in_hlist_search */
		   0  /* st->out_hlist_search */
		);
	return 0;
}

static const struct seq_operations rt_cpu_seq_ops = {
	.start  = rt_cpu_seq_start,
	.next   = rt_cpu_seq_next,
	.stop   = rt_cpu_seq_stop,
	.show   = rt_cpu_seq_show,
};

static int rt_cpu_seq_open(struct inode *inode, struct file *file)
{
	return seq_open(file, &rt_cpu_seq_ops);
}

static const struct file_operations rt_cpu_seq_fops = {
	.owner	 = THIS_MODULE,
	.open	 = rt_cpu_seq_open,
	.read	 = seq_read,
	.llseek	 = seq_lseek,
	.release = seq_release,
};

#ifdef CONFIG_IP_ROUTE_CLASSID
static int rt_acct_proc_show(struct seq_file *m, void *v)
{
	struct ip_rt_acct *dst, *src;
	unsigned int i, j;

	dst = kcalloc(256, sizeof(struct ip_rt_acct), GFP_KERNEL);
	if (!dst)
		return -ENOMEM;

	for_each_possible_cpu(i) {
		src = (struct ip_rt_acct *)per_cpu_ptr(ip_rt_acct, i);
		for (j = 0; j < 256; j++) {
			dst[j].o_bytes   += src[j].o_bytes;
			dst[j].o_packets += src[j].o_packets;
			dst[j].i_bytes   += src[j].i_bytes;
			dst[j].i_packets += src[j].i_packets;
		}
	}

	seq_write(m, dst, 256 * sizeof(struct ip_rt_acct));
	kfree(dst);
	return 0;
}

static int rt_acct_proc_open(struct inode *inode, struct file *file)
{
	return single_open(file, rt_acct_proc_show, NULL);
}

static const struct file_operations rt_acct_proc_fops = {
	.owner		= THIS_MODULE,
	.open		= rt_acct_proc_open,
	.read		= seq_read,
	.llseek		= seq_lseek,
	.release	= single_release,
};
#endif
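
/* Note (added): /proc/net/rt_acct emits one ip_rt_acct record per realm
 * (256 of them), each record the sum of that realm's per-CPU byte and
 * packet counters gathered above.
 */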

static int __net_init ip_rt_do_proc_init(struct net *net)
{
	struct proc_dir_entry *pde;

	pde = proc_create("rt_cache", S_IRUGO, net->proc_net,
			  &rt_cache_seq_fops);
	if (!pde)
		goto err1;

	pde = proc_create("rt_cache", S_IRUGO,
			  net->proc_net_stat, &rt_cpu_seq_fops);
	if (!pde)
		goto err2;

#ifdef CONFIG_IP_ROUTE_CLASSID
	pde = proc_create("rt_acct", 0, net->proc_net, &rt_acct_proc_fops);
	if (!pde)
		goto err3;
#endif
	return 0;

#ifdef CONFIG_IP_ROUTE_CLASSID
err3:
	remove_proc_entry("rt_cache", net->proc_net_stat);
#endif
err2:
	remove_proc_entry("rt_cache", net->proc_net);
err1:
	return -ENOMEM;
}

static void __net_exit ip_rt_do_proc_exit(struct net *net)
{
	remove_proc_entry("rt_cache", net->proc_net_stat);
	remove_proc_entry("rt_cache", net->proc_net);
#ifdef CONFIG_IP_ROUTE_CLASSID
	remove_proc_entry("rt_acct", net->proc_net);
#endif
}

static struct pernet_operations ip_rt_proc_ops __net_initdata = {
	.init = ip_rt_do_proc_init,
	.exit = ip_rt_do_proc_exit,
};

static int __init ip_rt_proc_init(void)
{
	return register_pernet_subsys(&ip_rt_proc_ops);
}

#else
static inline int ip_rt_proc_init(void)
{
	return 0;
}
#endif /* CONFIG_PROC_FS */

static inline bool rt_is_expired(const struct rtable *rth)
{
	return rth->rt_genid != rt_genid_ipv4(dev_net(rth->dst.dev));
}

void rt_cache_flush(struct net *net)
{
	rt_genid_bump_ipv4(net);
}
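
/* Note (added): bumping the per-namespace generation id invalidates every
 * cached route at once; stale entries are detected lazily via
 * rt_is_expired() rather than being scanned and freed at flush time.
 */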

static struct neighbour *ipv4_neigh_lookup(const struct dst_entry *dst,
					   struct sk_buff *skb,
					   const void *daddr)
{
	struct net_device *dev = dst->dev;
	const __be32 *pkey = daddr;
	const struct rtable *rt;
	struct neighbour *n;

	rt = (const struct rtable *) dst;
	if (rt->rt_gateway)
		pkey = (const __be32 *) &rt->rt_gateway;
	else if (skb)
		pkey = &ip_hdr(skb)->daddr;

	n = __ipv4_neigh_lookup(dev, *(__force u32 *)pkey);
	if (n)
		return n;
	return neigh_create(&arp_tbl, pkey, dev);
}

#define IP_IDENTS_SZ 2048u

static atomic_t *ip_idents __read_mostly;
static u32 *ip_tstamps __read_mostly;

/* In order to protect privacy, we add a perturbation to identifiers
 * if one generator is seldom used.  This makes it hard for an attacker
 * to infer how many packets were sent between two points in time.
 */
u32 ip_idents_reserve(u32 hash, int segs)
{
	u32 *p_tstamp = ip_tstamps + hash % IP_IDENTS_SZ;
	atomic_t *p_id = ip_idents + hash % IP_IDENTS_SZ;
	u32 old = ACCESS_ONCE(*p_tstamp);
	u32 now = (u32)jiffies;
	u32 delta = 0;

	if (old != now && cmpxchg(p_tstamp, old, now) == old)
		delta = prandom_u32_max(now - old);

	return atomic_add_return(segs + delta, p_id) - segs;
}
EXPORT_SYMBOL(ip_idents_reserve);
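
/* Worked example (added): if a generator bucket was last used 10 jiffies
 * ago, prandom_u32_max(10) adds a uniform perturbation in [0, 10); a
 * bucket reused within the same jiffy keeps delta == 0, so back-to-back
 * packets to one destination still receive consecutive IDs.
 */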

void __ip_select_ident(struct net *net, struct iphdr *iph, int segs)
{
	static u32 ip_idents_hashrnd __read_mostly;
	u32 hash, id;

	net_get_random_once(&ip_idents_hashrnd, sizeof(ip_idents_hashrnd));

	hash = jhash_3words((__force u32)iph->daddr,
			    (__force u32)iph->saddr,
			    iph->protocol ^ net_hash_mix(net),
			    ip_idents_hashrnd);
	id = ip_idents_reserve(hash, segs);
	iph->id = htons(id);
}
EXPORT_SYMBOL(__ip_select_ident);
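
/* Minimal usage sketch (added, hypothetical caller): a sender emitting a
 * GSO packet reserves one ID per segment up front so every resulting
 * datagram gets a distinct ID:
 *
 *	__ip_select_ident(net, ip_hdr(skb), skb_shinfo(skb)->gso_segs ?: 1);
 */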

static void __build_flow_key(struct flowi4 *fl4, const struct sock *sk,
			     const struct iphdr *iph,
			     int oif, u8 tos,
			     u8 prot, u32 mark, int flow_flags)
{
	if (sk) {
		const struct inet_sock *inet = inet_sk(sk);

		oif = sk->sk_bound_dev_if;
		mark = sk->sk_mark;
		tos = RT_CONN_FLAGS(sk);
		prot = inet->hdrincl ? IPPROTO_RAW : sk->sk_protocol;
	}
	flowi4_init_output(fl4, oif, mark, tos,
			   RT_SCOPE_UNIVERSE, prot,
			   flow_flags,
			   iph->daddr, iph->saddr, 0, 0);
}

static void build_skb_flow_key(struct flowi4 *fl4, const struct sk_buff *skb,
			       const struct sock *sk)
{
	const struct iphdr *iph = ip_hdr(skb);
	int oif = skb->dev->ifindex;
	u8 tos = RT_TOS(iph->tos);
	u8 prot = iph->protocol;
	u32 mark = skb->mark;

	__build_flow_key(fl4, sk, iph, oif, tos, prot, mark, 0);
}

static void build_sk_flow_key(struct flowi4 *fl4, const struct sock *sk)
{
	const struct inet_sock *inet = inet_sk(sk);
	const struct ip_options_rcu *inet_opt;
	__be32 daddr = inet->inet_daddr;

	rcu_read_lock();
	inet_opt = rcu_dereference(inet->inet_opt);
	if (inet_opt && inet_opt->opt.srr)
		daddr = inet_opt->opt.faddr;
	flowi4_init_output(fl4, sk->sk_bound_dev_if, sk->sk_mark,
			   RT_CONN_FLAGS(sk), RT_SCOPE_UNIVERSE,
			   inet->hdrincl ? IPPROTO_RAW : sk->sk_protocol,
			   inet_sk_flowi_flags(sk),
			   daddr, inet->inet_saddr, 0, 0);
	rcu_read_unlock();
}

static void ip_rt_build_flow_key(struct flowi4 *fl4, const struct sock *sk,
				 const struct sk_buff *skb)
{
	if (skb)
		build_skb_flow_key(fl4, skb, sk);
	else
		build_sk_flow_key(fl4, sk);
}
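
/* Illustrative sketch (added): resolving an output route for a connected
 * socket starts from a flow key built by the helpers above, roughly:
 *
 *	struct flowi4 fl4;
 *
 *	ip_rt_build_flow_key(&fl4, sk, NULL);	  (NULL skb: use socket state)
 *	rt = __ip_route_output_key(sock_net(sk), &fl4);
 */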

static inline void rt_free(struct rtable *rt)
{
	call_rcu(&rt->dst.rcu_head, dst_rcu_free);
}

static DEFINE_SPINLOCK(fnhe_lock);
static void fnhe_flush_routes(struct fib_nh_exception *fnhe)
{
	struct rtable *rt;

	rt = rcu_dereference(fnhe->fnhe_rth_input);
	if (rt) {
		RCU_INIT_POINTER(fnhe->fnhe_rth_input, NULL);
		rt_free(rt);
	}
	rt = rcu_dereference(fnhe->fnhe_rth_output);
	if (rt) {
		RCU_INIT_POINTER(fnhe->fnhe_rth_output, NULL);
		rt_free(rt);
	}
}
2012-07-18 14:15:35 +04:00
static struct fib_nh_exception *fnhe_oldest(struct fnhe_hash_bucket *hash)
2012-07-17 15:19:00 +04:00
{
	struct fib_nh_exception *fnhe, *oldest;

	oldest = rcu_dereference(hash->chain);
	for (fnhe = rcu_dereference(oldest->fnhe_next); fnhe;
	     fnhe = rcu_dereference(fnhe->fnhe_next)) {
		if (time_before(fnhe->fnhe_stamp, oldest->fnhe_stamp))
			oldest = fnhe;
	}
2013-06-27 11:27:05 +04:00
	fnhe_flush_routes(oldest);
2012-07-17 15:19:00 +04:00
	return oldest;
}
2012-07-18 00:23:08 +04:00
static inline u32 fnhe_hashfun(__be32 daddr)
{
2014-09-04 19:21:31 +04:00
	static u32 fnhe_hashrnd __read_mostly;
2012-07-18 00:23:08 +04:00
	u32 hval;

2014-09-04 19:21:31 +04:00
	net_get_random_once(&fnhe_hashrnd, sizeof(fnhe_hashrnd));
	hval = jhash_1word((__force u32)daddr, fnhe_hashrnd);
	return hash_32(hval, FNHE_HASH_SHIFT);
2012-07-18 00:23:08 +04:00
}
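The pattern above (a lazily seeded random key, a word hash, then a fold down to FNHE_HASH_SHIFT bits) can be sketched in user space as follows; the mixing function is an illustrative stand-in for the kernel's jhash_1word()/hash_32() helpers, not the real thing.

/* Sketch of hashing one 32-bit key into FNHE_HASH_SIZE buckets.
 * The mixing step stands in for the kernel's jhash_1word()/hash_32().
 */
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define FNHE_HASH_SHIFT 11
#define FNHE_HASH_SIZE  (1 << FNHE_HASH_SHIFT)

static uint32_t hashrnd;

static uint32_t mix(uint32_t v, uint32_t seed)
{
	v ^= seed;
	v *= 0x9e3779b1;	/* golden-ratio multiplier */
	return v;
}

static uint32_t fnhe_hashfun_sketch(uint32_t daddr)
{
	if (!hashrnd)		/* lazy, once-only seeding */
		hashrnd = (uint32_t)time(NULL) | 1;
	/* fold the 32-bit hash down to FNHE_HASH_SHIFT bits */
	return mix(daddr, hashrnd) >> (32 - FNHE_HASH_SHIFT);
}

int main(void)
{
	printf("bucket: %u of %u\n",
	       fnhe_hashfun_sketch(0x0a000001), FNHE_HASH_SIZE);
	return 0;
}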
2013-05-28 00:46:31 +04:00
static void fill_route_from_fnhe(struct rtable *rt, struct fib_nh_exception *fnhe)
{
	rt->rt_pmtu = fnhe->fnhe_pmtu;
	rt->dst.expires = fnhe->fnhe_expires;

	if (fnhe->fnhe_gw) {
		rt->rt_flags |= RTCF_REDIRECTED;
		rt->rt_gateway = fnhe->fnhe_gw;
		rt->rt_uses_gateway = 1;
	}
}
2012-07-18 14:15:35 +04:00
static void update_or_create_fnhe(struct fib_nh *nh, __be32 daddr, __be32 gw,
				  u32 pmtu, unsigned long expires)
2012-07-17 15:19:00 +04:00
{
2012-07-18 14:15:35 +04:00
	struct fnhe_hash_bucket *hash;
2012-07-17 15:19:00 +04:00
	struct fib_nh_exception *fnhe;
2013-05-28 00:46:31 +04:00
	struct rtable *rt;
	unsigned int i;
2012-07-17 15:19:00 +04:00
	int depth;
2012-07-18 14:15:35 +04:00
	u32 hval = fnhe_hashfun(daddr);

2012-08-01 02:02:02 +04:00
	spin_lock_bh(&fnhe_lock);
2012-07-17 15:19:00 +04:00
2014-09-04 09:21:56 +04:00
	hash = rcu_dereference(nh->nh_exceptions);
2012-07-17 15:19:00 +04:00
	if (!hash) {
2012-07-18 14:15:35 +04:00
		hash = kzalloc(FNHE_HASH_SIZE * sizeof(*hash), GFP_ATOMIC);
2012-07-17 15:19:00 +04:00
		if (!hash)
2012-07-18 14:15:35 +04:00
			goto out_unlock;
2014-09-04 09:21:56 +04:00
		rcu_assign_pointer(nh->nh_exceptions, hash);
2012-07-17 15:19:00 +04:00
	}

	hash += hval;

	depth = 0;
	for (fnhe = rcu_dereference(hash->chain); fnhe;
	     fnhe = rcu_dereference(fnhe->fnhe_next)) {
		if (fnhe->fnhe_daddr == daddr)
2012-07-18 14:15:35 +04:00
			break;
2012-07-17 15:19:00 +04:00
		depth++;
	}

2012-07-18 14:15:35 +04:00
	if (fnhe) {
		if (gw)
			fnhe->fnhe_gw = gw;
		if (pmtu) {
			fnhe->fnhe_pmtu = pmtu;
2013-05-28 00:46:31 +04:00
			fnhe->fnhe_expires = max(1UL, expires);
2012-07-18 14:15:35 +04:00
		}
2013-05-28 00:46:31 +04:00
		/* Update all cached dsts too */
2013-06-27 11:27:05 +04:00
		rt = rcu_dereference(fnhe->fnhe_rth_input);
		if (rt)
			fill_route_from_fnhe(rt, fnhe);
		rt = rcu_dereference(fnhe->fnhe_rth_output);
2013-05-28 00:46:31 +04:00
		if (rt)
			fill_route_from_fnhe(rt, fnhe);
2012-07-18 14:15:35 +04:00
	} else {
		if (depth > FNHE_RECLAIM_DEPTH)
			fnhe = fnhe_oldest(hash);
		else {
			fnhe = kzalloc(sizeof(*fnhe), GFP_ATOMIC);
			if (!fnhe)
				goto out_unlock;

			fnhe->fnhe_next = hash->chain;
			rcu_assign_pointer(hash->chain, fnhe);
		}
2013-05-28 00:46:33 +04:00
		fnhe->fnhe_genid = fnhe_genid(dev_net(nh->nh_dev));
2012-07-18 14:15:35 +04:00
		fnhe->fnhe_daddr = daddr;
		fnhe->fnhe_gw = gw;
		fnhe->fnhe_pmtu = pmtu;
		fnhe->fnhe_expires = expires;
2013-05-28 00:46:31 +04:00

		/* Exception created; mark the cached routes for the nexthop
		 * stale, so anyone caching it rechecks if this exception
		 * applies to them.
		 */
2013-06-27 11:27:05 +04:00
		rt = rcu_dereference(nh->nh_rth_input);
		if (rt)
			rt->dst.obsolete = DST_OBSOLETE_KILL;

2013-05-28 00:46:31 +04:00
		for_each_possible_cpu(i) {
			struct rtable __rcu **prt;

			prt = per_cpu_ptr(nh->nh_pcpu_rth_output, i);
			rt = rcu_dereference(*prt);
			if (rt)
				rt->dst.obsolete = DST_OBSOLETE_KILL;
		}
2012-07-17 15:19:00 +04:00
	}

	fnhe->fnhe_stamp = jiffies;

2012-07-18 14:15:35 +04:00
out_unlock:
2012-08-01 02:02:02 +04:00
	spin_unlock_bh(&fnhe_lock);
2012-07-17 15:19:00 +04:00
}
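update_or_create_fnhe() keeps each hash chain bounded: once a chain grows past FNHE_RECLAIM_DEPTH, the stalest entry (by fnhe_stamp) is recycled instead of growing the chain. Below is a minimal user-space sketch of that insert-or-recycle policy; all names are illustrative and the locking/RCU of the real code is deliberately omitted.

/* Sketch: insert-or-recycle on a bounded chain, keyed by daddr.
 * When the chain already holds more than DEPTH_LIMIT entries, the
 * entry with the oldest timestamp is reused instead of allocating.
 */
#include <stdint.h>
#include <stdlib.h>
#include <time.h>

#define DEPTH_LIMIT 5

struct exception {
	struct exception *next;
	uint32_t daddr;
	time_t stamp;
};

static struct exception *find_or_recycle(struct exception **chain,
					 uint32_t daddr)
{
	struct exception *e, *oldest = *chain;
	int depth = 0;

	for (e = *chain; e; e = e->next) {
		if (e->daddr == daddr)
			goto found;	/* update in place */
		if (e->stamp < oldest->stamp)
			oldest = e;
		depth++;
	}
	if (depth > DEPTH_LIMIT && oldest) {
		e = oldest;		/* recycle the stalest entry */
	} else {
		e = calloc(1, sizeof(*e));
		if (!e)
			return NULL;
		e->next = *chain;
		*chain = e;
	}
	e->daddr = daddr;
found:
	e->stamp = time(NULL);
	return e;
}

int main(void)
{
	struct exception *chain = NULL;

	return find_or_recycle(&chain, 0x0a000001) ? 0 : 1;
}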
2012-07-17 22:31:28 +04:00
static void __ip_do_redirect(struct rtable *rt, struct sk_buff *skb, struct flowi4 *fl4,
			     bool kill_route)
2005-04-17 02:20:36 +04:00
{
2012-07-12 07:55:47 +04:00
	__be32 new_gw = icmp_hdr(skb)->un.gateway;
2012-07-12 07:38:08 +04:00
	__be32 old_gw = ip_hdr(skb)->saddr;
2012-07-12 07:55:47 +04:00
	struct net_device *dev = skb->dev;
	struct in_device *in_dev;
2012-07-17 15:19:00 +04:00
	struct fib_result res;
2012-07-12 07:55:47 +04:00
	struct neighbour *n;
2008-02-29 07:50:06 +03:00
	struct net *net;
2005-04-17 02:20:36 +04:00

2012-07-12 07:38:08 +04:00
	switch (icmp_hdr(skb)->code & 7) {
	case ICMP_REDIR_NET:
	case ICMP_REDIR_NETTOS:
	case ICMP_REDIR_HOST:
	case ICMP_REDIR_HOSTTOS:
		break;

	default:
		return;
	}

2012-07-12 07:55:47 +04:00
	if (rt->rt_gateway != old_gw)
		return;

	in_dev = __in_dev_get_rcu(dev);
	if (!in_dev)
		return;

2008-03-25 15:47:49 +03:00
	net = dev_net(dev);
2009-11-23 21:41:23 +03:00
	if (new_gw == old_gw || !IN_DEV_RX_REDIRECTS(in_dev) ||
	    ipv4_is_multicast(new_gw) || ipv4_is_lbcast(new_gw) ||
	    ipv4_is_zeronet(new_gw))
2005-04-17 02:20:36 +04:00
		goto reject_redirect;

	if (!IN_DEV_SHARED_MEDIA(in_dev)) {
		if (!inet_addr_onlink(in_dev, new_gw, old_gw))
			goto reject_redirect;
		if (IN_DEV_SEC_REDIRECTS(in_dev) && ip_fib_check_default(new_gw, dev))
			goto reject_redirect;
	} else {
2008-02-29 07:50:06 +03:00
		if (inet_addr_type(net, new_gw) != RTN_UNICAST)
2005-04-17 02:20:36 +04:00
			goto reject_redirect;
	}

2012-07-17 15:19:00 +04:00
	n = ipv4_neigh_lookup(&rt->dst, NULL, &new_gw);
2014-09-25 04:07:53 +04:00
	if (!IS_ERR(n)) {
2012-07-12 07:55:47 +04:00
		if (!(n->nud_state & NUD_VALID)) {
			neigh_event_send(n, NULL);
		} else {
2015-06-23 20:45:37 +03:00
			if (fib_lookup(net, fl4, &res, 0) == 0) {
2012-07-17 15:19:00 +04:00
				struct fib_nh *nh = &FIB_RES_NH(res);

2012-07-18 14:15:35 +04:00
				update_or_create_fnhe(nh, fl4->daddr, new_gw,
						      0, 0);
2012-07-17 15:19:00 +04:00
			}
2012-07-17 22:31:28 +04:00
			if (kill_route)
				rt->dst.obsolete = DST_OBSOLETE_KILL;
2012-07-12 07:55:47 +04:00
			call_netevent_notifiers(NETEVENT_NEIGH_UPDATE, n);
		}
		neigh_release(n);
	}
	return;

reject_redirect:
#ifdef CONFIG_IP_ROUTE_VERBOSE
2012-07-12 18:40:05 +04:00
	if (IN_DEV_LOG_MARTIANS(in_dev)) {
		const struct iphdr *iph = (const struct iphdr *)skb->data;
		__be32 daddr = iph->daddr;
		__be32 saddr = iph->saddr;

2012-07-12 07:55:47 +04:00
		net_info_ratelimited("Redirect from %pI4 on %s about %pI4 ignored\n"
				     "  Advised path = %pI4 -> %pI4\n",
				     &old_gw, dev->name, &new_gw,
				     &saddr, &daddr);
2012-07-12 18:40:05 +04:00
	}
2012-07-12 07:55:47 +04:00
#endif
	;
}
2012-07-17 15:19:00 +04:00
static void ip_do_redirect(struct dst_entry *dst, struct sock *sk, struct sk_buff *skb)
{
	struct rtable *rt;
	struct flowi4 fl4;
2013-05-28 10:26:49 +04:00
	const struct iphdr *iph = (const struct iphdr *)skb->data;
	int oif = skb->dev->ifindex;
	u8 tos = RT_TOS(iph->tos);
	u8 prot = iph->protocol;
	u32 mark = skb->mark;
2012-07-17 15:19:00 +04:00

	rt = (struct rtable *)dst;

2013-05-28 10:26:49 +04:00
	__build_flow_key(&fl4, sk, iph, oif, tos, prot, mark, 0);
2012-07-17 22:31:28 +04:00
	__ip_do_redirect(rt, skb, &fl4, true);
2012-07-17 15:19:00 +04:00
}
2005-04-17 02:20:36 +04:00
static struct dst_entry *ipv4_negative_advice(struct dst_entry *dst)
{
2008-03-06 05:30:47 +03:00
	struct rtable *rt = (struct rtable *)dst;
2005-04-17 02:20:36 +04:00
	struct dst_entry *ret = dst;

	if (rt) {
2010-03-19 02:20:20 +03:00
		if (dst->obsolete > 0) {
2005-04-17 02:20:36 +04:00
			ip_rt_put(rt);
			ret = NULL;
2012-07-10 17:58:42 +04:00
		} else if ((rt->rt_flags & RTCF_REDIRECTED) ||
			   rt->dst.expires) {
2012-07-17 22:00:09 +04:00
			ip_rt_put(rt);
2005-04-17 02:20:36 +04:00
			ret = NULL;
		}
	}
	return ret;
}
/*
 * Algorithm:
 * 1. The first ip_rt_redirect_number redirects are sent
 *    with exponential backoff, then we stop sending them at all,
 *    assuming that the host ignores our redirects.
 * 2. If we did not see packets requiring redirects
 *    during ip_rt_redirect_silence, we assume that the host
 *    forgot the redirected route and start sending redirects again.
 *
 * This algorithm is much cheaper and more intelligent than dumb load limiting
 * in icmp.c.
 *
 * NOTE. Do not forget to inhibit load limiting for redirects (redundant)
 * and "frag. need" (breaks PMTU discovery) in icmp.c.
 */
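As a worked example of the schedule described in the comment above, the sketch below prints how long each successive redirect must wait; the values chosen for the ip_rt_redirect_load and ip_rt_redirect_number tunables are illustrative, not authoritative defaults.

/* Sketch of the redirect backoff schedule: after the k-th redirect is
 * sent, the next one is allowed only once load << k jiffies have
 * elapsed, and after 'number' redirects we go silent entirely.
 */
#include <stdio.h>

int main(void)
{
	unsigned long load = 100;	/* stand-in for ip_rt_redirect_load */
	int number = 9;			/* stand-in for ip_rt_redirect_number */
	unsigned long wait = 0;
	int k;

	for (k = 0; k < number; k++) {
		printf("redirect %d: allowed %lu jiffies after the previous one\n",
		       k + 1, wait);
		wait = load << (k + 1);	/* rate_tokens == k + 1 after this send */
	}
	printf("after %d redirects: silent until the host behaves\n", number);
	return 0;
}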
void ip_rt_send_redirect(struct sk_buff *skb)
{
2009-06-02 09:14:27 +04:00
	struct rtable *rt = skb_rtable(skb);
2009-08-29 10:52:01 +04:00
	struct in_device *in_dev;
2011-02-05 02:55:25 +03:00
	struct inet_peer *peer;
2012-07-10 14:58:16 +04:00
	struct net *net;
2009-08-29 10:52:01 +04:00
	int log_martians;
2015-08-28 02:07:03 +03:00
	int vif;
2005-04-17 02:20:36 +04:00

2009-08-29 10:52:01 +04:00
	rcu_read_lock();
2010-06-11 10:31:35 +04:00
	in_dev = __in_dev_get_rcu(rt->dst.dev);
2009-08-29 10:52:01 +04:00
	if (!in_dev || !IN_DEV_TX_REDIRECTS(in_dev)) {
		rcu_read_unlock();
2005-04-17 02:20:36 +04:00
		return;
2009-08-29 10:52:01 +04:00
	}
	log_martians = IN_DEV_LOG_MARTIANS(in_dev);
2015-09-30 06:07:13 +03:00
	vif = l3mdev_master_ifindex_rcu(rt->dst.dev);
2009-08-29 10:52:01 +04:00
	rcu_read_unlock();
2005-04-17 02:20:36 +04:00

2012-07-10 14:58:16 +04:00
	net = dev_net(rt->dst.dev);
2015-08-28 02:07:03 +03:00
	peer = inet_getpeer_v4(net->ipv4.peers, ip_hdr(skb)->saddr, vif, 1);
2011-02-05 02:55:25 +03:00
	if (!peer) {
2012-10-08 15:41:15 +04:00
		icmp_send(skb, ICMP_REDIRECT, ICMP_REDIR_HOST,
			  rt_nexthop(rt, ip_hdr(skb)->daddr));
2011-02-05 02:55:25 +03:00
		return;
	}

2005-04-17 02:20:36 +04:00
	/* No redirected packets during ip_rt_redirect_silence;
	 * reset the algorithm.
	 */
2011-02-05 02:55:25 +03:00
	if (time_after(jiffies, peer->rate_last + ip_rt_redirect_silence))
		peer->rate_tokens = 0;
2005-04-17 02:20:36 +04:00

	/* Too many ignored redirects; do not send anything
2010-06-11 10:31:35 +04:00
	 * set dst.rate_last to the last seen redirected packet.
2005-04-17 02:20:36 +04:00
	 */
2011-02-05 02:55:25 +03:00
	if (peer->rate_tokens >= ip_rt_redirect_number) {
		peer->rate_last = jiffies;
2012-07-10 14:58:16 +04:00
		goto out_put_peer;
2005-04-17 02:20:36 +04:00
	}

	/* Check for load limit; set rate_last to the latest sent
	 * redirect.
	 */
2011-02-05 02:55:25 +03:00
	if (peer->rate_tokens == 0 ||
2006-12-18 11:26:35 +03:00
	    time_after(jiffies,
2011-02-05 02:55:25 +03:00
		       (peer->rate_last +
			(ip_rt_redirect_load << peer->rate_tokens)))) {
2012-10-08 15:41:15 +04:00
		__be32 gw = rt_nexthop(rt, ip_hdr(skb)->daddr);

		icmp_send(skb, ICMP_REDIRECT, ICMP_REDIR_HOST, gw);
2011-02-05 02:55:25 +03:00
		peer->rate_last = jiffies;
		++peer->rate_tokens;
2005-04-17 02:20:36 +04:00
#ifdef CONFIG_IP_ROUTE_VERBOSE
2009-08-29 10:52:01 +04:00
		if (log_martians &&
2012-05-14 01:56:26 +04:00
		    peer->rate_tokens == ip_rt_redirect_number)
			net_warn_ratelimited("host %pI4/if%d ignores redirects for %pI4 to %pI4\n",
2012-07-24 03:29:00 +04:00
					     &ip_hdr(skb)->saddr, inet_iif(skb),
2012-10-08 15:41:15 +04:00
					     &ip_hdr(skb)->daddr, &gw);
2005-04-17 02:20:36 +04:00
#endif
	}
2012-07-10 14:58:16 +04:00
out_put_peer:
	inet_putpeer(peer);
2005-04-17 02:20:36 +04:00
}
static int ip_error(struct sk_buff *skb)
{
2012-06-27 03:27:09 +04:00
	struct in_device *in_dev = __in_dev_get_rcu(skb->dev);
2009-06-02 09:14:27 +04:00
	struct rtable *rt = skb_rtable(skb);
2011-02-05 02:55:25 +03:00
	struct inet_peer *peer;
2005-04-17 02:20:36 +04:00
	unsigned long now;
2012-06-27 03:27:09 +04:00
	struct net *net;
2011-02-05 02:55:25 +03:00
	bool send;
2005-04-17 02:20:36 +04:00
	int code;

2015-05-22 12:58:12 +03:00
	/* IP on this device is disabled. */
	if (!in_dev)
		goto out;

2012-06-27 03:27:09 +04:00
	net = dev_net(rt->dst.dev);
	if (!IN_DEV_FORWARD(in_dev)) {
		switch (rt->dst.error) {
		case EHOSTUNREACH:
			IP_INC_STATS_BH(net, IPSTATS_MIB_INADDRERRORS);
			break;

		case ENETUNREACH:
			IP_INC_STATS_BH(net, IPSTATS_MIB_INNOROUTES);
			break;
		}
		goto out;
	}

2010-06-11 10:31:35 +04:00
	switch (rt->dst.error) {
2011-07-01 13:43:07 +04:00
	case EINVAL:
	default:
		goto out;
	case EHOSTUNREACH:
		code = ICMP_HOST_UNREACH;
		break;
	case ENETUNREACH:
		code = ICMP_NET_UNREACH;
2012-06-27 03:27:09 +04:00
		IP_INC_STATS_BH(net, IPSTATS_MIB_INNOROUTES);
2011-07-01 13:43:07 +04:00
		break;
	case EACCES:
		code = ICMP_PKT_FILTERED;
		break;
2005-04-17 02:20:36 +04:00
	}

2015-08-28 02:07:03 +03:00
	peer = inet_getpeer_v4(net->ipv4.peers, ip_hdr(skb)->saddr,
2015-09-30 06:07:13 +03:00
			       l3mdev_master_ifindex(skb->dev), 1);
2011-02-05 02:55:25 +03:00

	send = true;
	if (peer) {
		now = jiffies;
		peer->rate_tokens += now - peer->rate_last;
		if (peer->rate_tokens > ip_rt_error_burst)
			peer->rate_tokens = ip_rt_error_burst;
		peer->rate_last = now;
		if (peer->rate_tokens >= ip_rt_error_cost)
			peer->rate_tokens -= ip_rt_error_cost;
		else
			send = false;
2012-07-10 14:58:16 +04:00
		inet_putpeer(peer);
2005-04-17 02:20:36 +04:00
	}
2011-02-05 02:55:25 +03:00
	if (send)
		icmp_send(skb, ICMP_DEST_UNREACH, code, 0);

2005-04-17 02:20:36 +04:00
out:	kfree_skb(skb);
	return 0;
2007-02-09 17:24:47 +03:00
}
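ip_error() rate-limits the ICMP errors it emits with a classic per-peer token bucket: tokens accrue with elapsed time, are capped at ip_rt_error_burst, and each transmitted error spends ip_rt_error_cost. A self-contained sketch of the same policy follows; the names and the burst/cost values are illustrative stand-ins for the sysctls.

/* Sketch of the per-peer token bucket used above: tokens accrue one
 * per time unit up to 'burst', and each ICMP error spends 'cost'.
 */
#include <stdbool.h>
#include <stdio.h>

struct bucket {
	unsigned long tokens;
	unsigned long last;	/* timestamp of last refill */
};

static bool may_send(struct bucket *b, unsigned long now,
		     unsigned long burst, unsigned long cost)
{
	b->tokens += now - b->last;	/* refill with elapsed time */
	if (b->tokens > burst)
		b->tokens = burst;
	b->last = now;
	if (b->tokens >= cost) {
		b->tokens -= cost;	/* spend: error may be sent */
		return true;
	}
	return false;			/* suppressed */
}

int main(void)
{
	struct bucket b = { .tokens = 0, .last = 0 };
	unsigned long t;

	for (t = 0; t < 5000; t += 1000)
		printf("t=%lu send=%d\n", t, may_send(&b, t, 5000, 1000));
	return 0;
}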
2005-04-17 02:20:36 +04:00
2012-10-08 02:47:25 +04:00
static void __ip_rt_update_pmtu(struct rtable *rt, struct flowi4 *fl4, u32 mtu)
2005-04-17 02:20:36 +04:00
{
2012-10-08 02:47:25 +04:00
	struct dst_entry *dst = &rt->dst;
2012-07-17 15:19:00 +04:00
	struct fib_result res;
2011-02-10 07:42:07 +03:00

2013-01-17 00:58:10 +04:00
	if (dst_metric_locked(dst, RTAX_MTU))
		return;

2015-04-28 06:43:15 +03:00
	if (ipv4_mtu(dst) < mtu)
2015-01-29 11:09:03 +03:00
		return;

2012-07-10 17:58:42 +04:00
	if (mtu < ip_rt_min_pmtu)
		mtu = ip_rt_min_pmtu;
2011-02-10 07:42:07 +03:00

2013-05-28 00:46:32 +04:00
	if (rt->rt_pmtu == mtu &&
	    time_before(jiffies, dst->expires - ip_rt_mtu_expires / 2))
		return;

2012-08-28 16:33:07 +04:00
	rcu_read_lock();
2015-06-23 20:45:37 +03:00
	if (fib_lookup(dev_net(dst->dev), fl4, &res, 0) == 0) {
2012-07-17 15:19:00 +04:00
		struct fib_nh *nh = &FIB_RES_NH(res);

2012-07-18 14:15:35 +04:00
		update_or_create_fnhe(nh, fl4->daddr, 0, mtu,
				      jiffies + ip_rt_mtu_expires);
2012-07-17 15:19:00 +04:00
	}
2012-08-28 16:33:07 +04:00
	rcu_read_unlock();
2005-04-17 02:20:36 +04:00
}
2012-07-17 15:19:00 +04:00
static void ip_rt_update_pmtu(struct dst_entry *dst, struct sock *sk,
			      struct sk_buff *skb, u32 mtu)
{
	struct rtable *rt = (struct rtable *)dst;
	struct flowi4 fl4;

	ip_rt_build_flow_key(&fl4, sk, skb);
2012-10-08 02:47:25 +04:00
	__ip_rt_update_pmtu(rt, &fl4, mtu);
2012-07-17 15:19:00 +04:00
}
2012-06-15 09:21:46 +04:00
void ipv4_update_pmtu(struct sk_buff *skb, struct net *net, u32 mtu,
		      int oif, u32 mark, u8 protocol, int flow_flags)
{
2012-07-17 15:19:00 +04:00
	const struct iphdr *iph = (const struct iphdr *)skb->data;
2012-06-15 09:21:46 +04:00
	struct flowi4 fl4;
	struct rtable *rt;

2014-05-13 21:17:34 +04:00
	if (!mark)
		mark = IP4_REPLY_MARK(net, skb->mark);

2012-07-17 15:19:00 +04:00
	__build_flow_key(&fl4, NULL, iph, oif,
			 RT_TOS(iph->tos), protocol, mark, flow_flags);
2012-06-15 09:21:46 +04:00
	rt = __ip_route_output_key(net, &fl4);
	if (!IS_ERR(rt)) {
2012-07-17 15:19:00 +04:00
		__ip_rt_update_pmtu(rt, &fl4, mtu);
2012-06-15 09:21:46 +04:00
		ip_rt_put(rt);
	}
}
EXPORT_SYMBOL_GPL(ipv4_update_pmtu);
2013-01-21 05:59:11 +04:00
static void __ipv4_sk_update_pmtu ( struct sk_buff * skb , struct sock * sk , u32 mtu )
2012-06-15 09:21:46 +04:00
{
2012-07-17 15:19:00 +04:00
const struct iphdr * iph = ( const struct iphdr * ) skb - > data ;
struct flowi4 fl4 ;
struct rtable * rt ;
2012-06-15 09:21:46 +04:00
2012-07-17 15:19:00 +04:00
__build_flow_key ( & fl4 , sk , iph , 0 , 0 , 0 , 0 , 0 ) ;
2014-05-13 21:17:34 +04:00
if ( ! fl4 . flowi4_mark )
fl4 . flowi4_mark = IP4_REPLY_MARK ( sock_net ( sk ) , skb - > mark ) ;
2012-07-17 15:19:00 +04:00
rt = __ip_route_output_key ( sock_net ( sk ) , & fl4 ) ;
if ( ! IS_ERR ( rt ) ) {
__ip_rt_update_pmtu ( rt , & fl4 , mtu ) ;
ip_rt_put ( rt ) ;
}
2012-06-15 09:21:46 +04:00
}
2013-01-21 05:59:11 +04:00
void ipv4_sk_update_pmtu ( struct sk_buff * skb , struct sock * sk , u32 mtu )
{
const struct iphdr * iph = ( const struct iphdr * ) skb - > data ;
struct flowi4 fl4 ;
struct rtable * rt ;
2014-06-30 12:26:23 +04:00
struct dst_entry * odst = NULL ;
2013-01-22 04:01:28 +04:00
bool new = false ;
2013-01-21 05:59:11 +04:00
bh_lock_sock ( sk ) ;
2013-11-05 05:24:17 +04:00
if ( ! ip_sk_accept_pmtu ( sk ) )
goto out ;
2014-06-30 12:26:23 +04:00
odst = sk_dst_get ( sk ) ;
2013-01-21 05:59:11 +04:00
2014-06-30 12:26:23 +04:00
if ( sock_owned_by_user ( sk ) | | ! odst ) {
2013-01-21 05:59:11 +04:00
__ipv4_sk_update_pmtu ( skb , sk , mtu ) ;
goto out ;
}
__build_flow_key ( & fl4 , sk , iph , 0 , 0 , 0 , 0 , 0 ) ;
2014-06-30 12:26:23 +04:00
rt = ( struct rtable * ) odst ;
2015-04-03 11:17:26 +03:00
if ( odst - > obsolete & & ! odst - > ops - > check ( odst , 0 ) ) {
2013-01-21 05:59:11 +04:00
rt = ip_route_output_flow ( sock_net ( sk ) , & fl4 , sk ) ;
if ( IS_ERR ( rt ) )
goto out ;
2013-01-22 04:01:28 +04:00
new = true ;
2013-01-21 05:59:11 +04:00
}
__ip_rt_update_pmtu ( ( struct rtable * ) rt - > dst . path , & fl4 , mtu ) ;
2014-06-30 12:26:23 +04:00
if ( ! dst_check ( & rt - > dst , 0 ) ) {
2013-01-22 04:01:28 +04:00
if ( new )
dst_release ( & rt - > dst ) ;
2013-01-21 05:59:11 +04:00
rt = ip_route_output_flow ( sock_net ( sk ) , & fl4 , sk ) ;
if ( IS_ERR ( rt ) )
goto out ;
2013-01-22 04:01:28 +04:00
new = true ;
2013-01-21 05:59:11 +04:00
}
2013-01-22 04:01:28 +04:00
if ( new )
2014-06-30 12:26:23 +04:00
sk_dst_set ( sk , & rt - > dst ) ;
2013-01-21 05:59:11 +04:00
out :
bh_unlock_sock ( sk ) ;
2014-06-30 12:26:23 +04:00
dst_release ( odst ) ;
2013-01-21 05:59:11 +04:00
}
2012-06-15 09:21:46 +04:00
EXPORT_SYMBOL_GPL ( ipv4_sk_update_pmtu ) ;
2011-02-10 09:00:16 +03:00
2012-07-12 08:25:45 +04:00
void ipv4_redirect ( struct sk_buff * skb , struct net * net ,
int oif , u32 mark , u8 protocol , int flow_flags )
{
2012-07-17 15:19:00 +04:00
const struct iphdr * iph = ( const struct iphdr * ) skb - > data ;
2012-07-12 08:25:45 +04:00
struct flowi4 fl4 ;
struct rtable * rt ;
2012-07-17 15:19:00 +04:00
__build_flow_key ( & fl4 , NULL , iph , oif ,
RT_TOS ( iph - > tos ) , protocol , mark , flow_flags ) ;
2012-07-12 08:25:45 +04:00
rt = __ip_route_output_key ( net , & fl4 ) ;
if ( ! IS_ERR ( rt ) ) {
2012-07-17 22:31:28 +04:00
__ip_do_redirect ( rt , skb , & fl4 , false ) ;
2012-07-12 08:25:45 +04:00
ip_rt_put ( rt ) ;
}
}
EXPORT_SYMBOL_GPL ( ipv4_redirect ) ;
void ipv4_sk_redirect ( struct sk_buff * skb , struct sock * sk )
{
2012-07-17 15:19:00 +04:00
const struct iphdr * iph = ( const struct iphdr * ) skb - > data ;
struct flowi4 fl4 ;
struct rtable * rt ;
2012-07-12 08:25:45 +04:00
2012-07-17 15:19:00 +04:00
__build_flow_key ( & fl4 , sk , iph , 0 , 0 , 0 , 0 , 0 ) ;
rt = __ip_route_output_key ( sock_net ( sk ) , & fl4 ) ;
if ( ! IS_ERR ( rt ) ) {
2012-07-17 22:31:28 +04:00
__ip_do_redirect ( rt , skb , & fl4 , false ) ;
2012-07-17 15:19:00 +04:00
ip_rt_put ( rt ) ;
}
2012-07-12 08:25:45 +04:00
}
EXPORT_SYMBOL_GPL ( ipv4_sk_redirect ) ;
2011-12-01 22:38:59 +04:00
static struct dst_entry *ipv4_dst_check(struct dst_entry *dst, u32 cookie)
{
	struct rtable *rt = (struct rtable *)dst;

2012-07-17 22:31:28 +04:00
	/* All IPV4 dsts are created with ->obsolete set to the value
	 * DST_OBSOLETE_FORCE_CHK which forces validation calls down
	 * into this function always.
	 *
2013-05-28 00:46:31 +04:00
	 * When a PMTU/redirect information update invalidates a route,
	 * this is indicated by setting obsolete to DST_OBSOLETE_KILL or
	 * DST_OBSOLETE_DEAD by dst_free().
2012-07-17 22:31:28 +04:00
	 */
2013-05-28 00:46:31 +04:00
	if (dst->obsolete != DST_OBSOLETE_FORCE_CHK || rt_is_expired(rt))
2011-12-01 22:38:59 +04:00
		return NULL;
2010-03-19 02:20:20 +03:00
	return dst;
2005-04-17 02:20:36 +04:00
}
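The validation contract above boils down to generation-based invalidation: a cached route created under one generation becomes unusable the moment the global generation moves on (rt_is_expired() performs such a generation comparison). A minimal sketch of that idea, with illustrative names:

/* Sketch of generation-based cache validation: an entry is usable only
 * while its recorded generation matches the global one; bumping the
 * global generation invalidates every cached entry at once.
 */
#include <stdbool.h>
#include <stdio.h>

static unsigned int global_genid;

struct cached_route {
	unsigned int genid;	/* generation this entry was created in */
};

static bool route_check(const struct cached_route *rt)
{
	return rt->genid == global_genid;
}

int main(void)
{
	struct cached_route rt = { .genid = global_genid };

	printf("valid: %d\n", route_check(&rt));	/* 1 */
	global_genid++;		/* e.g. all routes flushed */
	printf("valid: %d\n", route_check(&rt));	/* 0 */
	return 0;
}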
static void ipv4_link_failure ( struct sk_buff * skb )
{
struct rtable * rt ;
icmp_send ( skb , ICMP_DEST_UNREACH , ICMP_HOST_UNREACH , 0 ) ;
2009-06-02 09:14:27 +04:00
rt = skb_rtable ( skb ) ;
2012-07-10 17:58:42 +04:00
if ( rt )
dst_set_expires ( & rt - > dst , 0 ) ;
2005-04-17 02:20:36 +04:00
}
2014-04-15 21:47:15 +04:00
static int ip_rt_bug ( struct sock * sk , struct sk_buff * skb )
2005-04-17 02:20:36 +04:00
{
2012-05-15 18:11:54 +04:00
pr_debug ( " %s: %pI4 -> %pI4, %s \n " ,
__func__ , & ip_hdr ( skb ) - > saddr , & ip_hdr ( skb ) - > daddr ,
skb - > dev ? skb - > dev - > name : " ? " ) ;
2005-04-17 02:20:36 +04:00
kfree_skb ( skb ) ;
2011-05-21 11:16:42 +04:00
WARN_ON ( 1 ) ;
2005-04-17 02:20:36 +04:00
return 0 ;
}
/*
   We do not cache the source address of the outgoing interface,
   because it is used only by IP RR, TS and SRR options,
   so it is out of the fast path.

   BTW remember: "addr" is allowed to be unaligned
   in IP options!
 */
2011-05-14 01:29:41 +04:00
void ip_rt_get_source ( u8 * addr , struct sk_buff * skb , struct rtable * rt )
2005-04-17 02:20:36 +04:00
{
2006-09-27 08:27:54 +04:00
__be32 src ;
2005-04-17 02:20:36 +04:00
2010-11-12 04:07:48 +03:00
if ( rt_is_output_route ( rt ) )
2011-05-14 02:01:21 +04:00
src = ip_hdr ( skb ) - > saddr ;
2010-10-05 14:41:36 +04:00
else {
2011-05-14 01:29:41 +04:00
struct fib_result res ;
struct flowi4 fl4 ;
struct iphdr * iph ;
iph = ip_hdr ( skb ) ;
memset ( & fl4 , 0 , sizeof ( fl4 ) ) ;
fl4 . daddr = iph - > daddr ;
fl4 . saddr = iph - > saddr ;
2011-07-23 06:00:41 +04:00
fl4 . flowi4_tos = RT_TOS ( iph - > tos ) ;
2011-05-14 01:29:41 +04:00
fl4 . flowi4_oif = rt - > dst . dev - > ifindex ;
fl4 . flowi4_iif = skb - > dev - > ifindex ;
fl4 . flowi4_mark = skb - > mark ;
2011-03-05 08:47:09 +03:00
2010-10-05 14:41:36 +04:00
rcu_read_lock ( ) ;
2015-06-23 20:45:37 +03:00
if ( fib_lookup ( dev_net ( rt - > dst . dev ) , & fl4 , & res , 0 ) = = 0 )
2011-03-25 03:42:21 +03:00
src = FIB_RES_PREFSRC ( dev_net ( rt - > dst . dev ) , res ) ;
2010-10-05 14:41:36 +04:00
else
2012-07-13 16:03:45 +04:00
src = inet_select_addr ( rt - > dst . dev ,
rt_nexthop ( rt , iph - > daddr ) ,
RT_SCOPE_UNIVERSE ) ;
2010-10-05 14:41:36 +04:00
rcu_read_unlock ( ) ;
}
2005-04-17 02:20:36 +04:00
memcpy ( addr , & src , 4 ) ;
}
2011-01-14 15:36:42 +03:00
# ifdef CONFIG_IP_ROUTE_CLASSID
2005-04-17 02:20:36 +04:00
static void set_class_tag ( struct rtable * rt , u32 tag )
{
2010-06-11 10:31:35 +04:00
if ( ! ( rt - > dst . tclassid & 0xFFFF ) )
rt - > dst . tclassid | = tag & 0xFFFF ;
if ( ! ( rt - > dst . tclassid & 0xFFFF0000 ) )
rt - > dst . tclassid | = tag & 0xFFFF0000 ;
2005-04-17 02:20:36 +04:00
}
# endif
2010-12-13 23:52:14 +03:00
static unsigned int ipv4_default_advmss(const struct dst_entry *dst)
{
	unsigned int advmss = dst_metric_raw(dst, RTAX_ADVMSS);

	if (advmss == 0) {
		advmss = max_t(unsigned int, dst->dev->mtu - 40,
			       ip_rt_min_advmss);
		if (advmss > 65535 - 40)
			advmss = 65535 - 40;
	}
	return advmss;
}
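Concretely, when the RTAX_ADVMSS metric is unset, the advertised MSS defaults to the device MTU minus 40 bytes of IPv4 and TCP headers, floored at ip_rt_min_advmss and capped at 65495. A small worked example; the minimum value passed in is an illustrative stand-in for the sysctl.

/* Worked example of the default advertised MSS computed above:
 * MTU minus 40 bytes (20 IPv4 + 20 TCP header), clamped.
 */
#include <stdio.h>

static unsigned int default_advmss(unsigned int mtu, unsigned int min_advmss)
{
	unsigned int advmss = mtu - 40 > min_advmss ? mtu - 40 : min_advmss;

	if (advmss > 65535 - 40)
		advmss = 65535 - 40;
	return advmss;
}

int main(void)
{
	printf("mtu 1500 -> advmss %u\n", default_advmss(1500, 256)); /* 1460 */
	printf("mtu 9000 -> advmss %u\n", default_advmss(9000, 256)); /* 8960 */
	return 0;
}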
2011-11-23 06:12:51 +04:00
static unsigned int ipv4_mtu(const struct dst_entry *dst)
2010-12-15 00:01:14 +03:00
{
2011-11-23 06:14:50 +04:00
	const struct rtable *rt = (const struct rtable *)dst;
2012-07-10 17:58:42 +04:00
	unsigned int mtu = rt->rt_pmtu;

2012-08-27 10:30:01 +04:00
	if (!mtu || time_after_eq(jiffies, rt->dst.expires))
2012-07-10 17:58:42 +04:00
		mtu = dst_metric_raw(dst, RTAX_MTU);
2011-11-23 06:13:31 +04:00

2013-01-17 00:55:01 +04:00
	if (mtu)
2011-11-23 06:13:31 +04:00
		return mtu;

	mtu = dst->dev->mtu;
2010-12-15 00:01:14 +03:00

	if (unlikely(dst_metric_locked(dst, RTAX_MTU))) {
2012-10-08 15:41:18 +04:00
		if (rt->rt_uses_gateway && mtu > 576)
2010-12-15 00:01:14 +03:00
			mtu = 576;
	}

2013-08-19 06:08:07 +04:00
	return min_t(unsigned int, mtu, IP_MAX_MTU);
2010-12-15 00:01:14 +03:00
}
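The precedence implemented above is: a live (unexpired) learned PMTU, else the RTAX_MTU metric, else the device MTU, with the historic 576-byte clamp for locked, gatewayed routes. A condensed sketch of that decision (the IP_MAX_MTU cap is omitted for brevity):

/* Sketch of the MTU precedence implemented above. */
#include <stdbool.h>
#include <stdio.h>

static unsigned int pick_mtu(unsigned int pmtu, bool pmtu_expired,
			     unsigned int metric_mtu, unsigned int dev_mtu,
			     bool mtu_locked, bool uses_gateway)
{
	unsigned int mtu = (!pmtu || pmtu_expired) ? metric_mtu : pmtu;

	if (mtu)
		return mtu;
	mtu = dev_mtu;
	if (mtu_locked && uses_gateway && mtu > 576)
		mtu = 576;	/* historic clamp for locked routes */
	return mtu;
}

int main(void)
{
	/* a learned PMTU wins while it has not expired */
	printf("%u\n", pick_mtu(1400, false, 0, 1500, false, true)); /* 1400 */
	/* an expired PMTU falls back to the device MTU */
	printf("%u\n", pick_mtu(1400, true, 0, 1500, false, true));  /* 1500 */
	return 0;
}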
2012-07-17 23:20:47 +04:00
static struct fib_nh_exception *find_exception(struct fib_nh *nh, __be32 daddr)
2012-07-17 15:19:00 +04:00
{
2014-09-04 09:21:56 +04:00
	struct fnhe_hash_bucket *hash = rcu_dereference(nh->nh_exceptions);
2012-07-17 15:19:00 +04:00
	struct fib_nh_exception *fnhe;
	u32 hval;

2012-07-17 23:20:47 +04:00
	if (!hash)
		return NULL;

2012-07-18 00:23:08 +04:00
	hval = fnhe_hashfun(daddr);

2012-07-17 15:19:00 +04:00
	for (fnhe = rcu_dereference(hash[hval].chain); fnhe;
	     fnhe = rcu_dereference(fnhe->fnhe_next)) {
2012-07-17 23:20:47 +04:00
		if (fnhe->fnhe_daddr == daddr)
			return fnhe;
	}
	return NULL;
}
2012-07-18 14:15:35 +04:00
2012-08-01 02:06:50 +04:00
static bool rt_bind_exception ( struct rtable * rt , struct fib_nh_exception * fnhe ,
2012-07-17 23:20:47 +04:00
__be32 daddr )
{
2012-08-01 02:06:50 +04:00
bool ret = false ;
2012-08-01 02:02:02 +04:00
spin_lock_bh ( & fnhe_lock ) ;
2012-07-17 23:20:47 +04:00
2012-08-01 02:02:02 +04:00
if ( daddr = = fnhe - > fnhe_daddr ) {
2013-06-27 11:27:05 +04:00
struct rtable __rcu * * porig ;
struct rtable * orig ;
2013-05-28 00:46:33 +04:00
int genid = fnhe_genid ( dev_net ( rt - > dst . dev ) ) ;
2013-06-27 11:27:05 +04:00
if ( rt_is_input_route ( rt ) )
porig = & fnhe - > fnhe_rth_input ;
else
porig = & fnhe - > fnhe_rth_output ;
orig = rcu_dereference ( * porig ) ;
2013-05-28 00:46:33 +04:00
if ( fnhe - > fnhe_genid ! = genid ) {
fnhe - > fnhe_genid = genid ;
2012-10-18 01:17:44 +04:00
fnhe - > fnhe_gw = 0 ;
fnhe - > fnhe_pmtu = 0 ;
fnhe - > fnhe_expires = 0 ;
2013-06-27 11:27:05 +04:00
fnhe_flush_routes ( fnhe ) ;
orig = NULL ;
2012-10-18 01:17:44 +04:00
}
2013-05-28 00:46:31 +04:00
fill_route_from_fnhe ( rt , fnhe ) ;
if ( ! rt - > rt_gateway )
2012-10-08 15:41:18 +04:00
rt - > rt_gateway = daddr ;
2012-07-17 23:20:47 +04:00
2013-06-27 11:27:05 +04:00
if ( ! ( rt - > dst . flags & DST_NOCACHE ) ) {
rcu_assign_pointer ( * porig , rt ) ;
if ( orig )
rt_free ( orig ) ;
ret = true ;
}
2012-08-01 02:02:02 +04:00
fnhe - > fnhe_stamp = jiffies ;
}
spin_unlock_bh ( & fnhe_lock ) ;
2012-08-01 02:06:50 +04:00
return ret ;
2012-07-31 05:08:23 +04:00
}
2012-08-01 02:06:50 +04:00
static bool rt_cache_route(struct fib_nh *nh, struct rtable *rt)
2012-07-17 23:20:47 +04:00
{
2012-07-31 09:45:30 +04:00
	struct rtable *orig, *prev, **p;
2012-08-01 02:06:50 +04:00
	bool ret = true;
2012-07-17 23:20:47 +04:00

2012-07-31 09:45:30 +04:00
	if (rt_is_input_route(rt)) {
2012-07-31 05:08:23 +04:00
		p = (struct rtable **)&nh->nh_rth_input;
2012-07-31 09:45:30 +04:00
	} else {
2014-08-17 21:30:35 +04:00
		p = (struct rtable **)raw_cpu_ptr(nh->nh_pcpu_rth_output);
2012-07-31 09:45:30 +04:00
	}
2012-07-17 23:20:47 +04:00
	orig = *p;

	prev = cmpxchg(p, orig, rt);
	if (prev == orig) {
		if (orig)
2012-07-31 05:08:23 +04:00
			rt_free(orig);
2012-10-08 15:41:18 +04:00
	} else
2012-08-01 02:06:50 +04:00
		ret = false;

	return ret;
}
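rt_cache_route() is a lock-free single-slot cache: the new route is published with cmpxchg(), the winner frees the entry it displaced, and a loser simply reports failure so the caller can mark its route uncached. A user-space sketch of the same pattern with C11 atomics; the structure and names are illustrative.

/* Sketch of the lock-free single-slot cache used above. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct route { int id; };

static _Atomic(struct route *) slot;

static bool cache_route(struct route *rt, struct route **evicted)
{
	struct route *old = atomic_load(&slot);

	*evicted = NULL;
	if (atomic_compare_exchange_strong(&slot, &old, rt)) {
		*evicted = old;	/* caller frees the displaced entry */
		return true;
	}
	return false;		/* lost the race; rt stays uncached */
}

int main(void)
{
	struct route a = { 1 }, *evicted;

	printf("cached: %d\n", cache_route(&a, &evicted));
	return 0;
}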
2015-01-15 02:17:06 +03:00
struct uncached_list {
spinlock_t lock ;
struct list_head head ;
} ;
static DEFINE_PER_CPU_ALIGNED ( struct uncached_list , rt_uncached_list ) ;
2012-08-01 02:06:50 +04:00
static void rt_add_uncached_list ( struct rtable * rt )
{
2015-01-15 02:17:06 +03:00
struct uncached_list * ul = raw_cpu_ptr ( & rt_uncached_list ) ;
rt - > rt_uncached_list = ul ;
spin_lock_bh ( & ul - > lock ) ;
list_add_tail ( & rt - > rt_uncached , & ul - > head ) ;
spin_unlock_bh ( & ul - > lock ) ;
2012-08-01 02:06:50 +04:00
}
static void ipv4_dst_destroy ( struct dst_entry * dst )
{
struct rtable * rt = ( struct rtable * ) dst ;
2012-08-24 09:40:47 +04:00
if ( ! list_empty ( & rt - > rt_uncached ) ) {
2015-01-15 02:17:06 +03:00
struct uncached_list * ul = rt - > rt_uncached_list ;
spin_lock_bh ( & ul - > lock ) ;
2012-08-01 02:06:50 +04:00
list_del ( & rt - > rt_uncached ) ;
2015-01-15 02:17:06 +03:00
spin_unlock_bh ( & ul - > lock ) ;
2012-08-01 02:06:50 +04:00
}
}
void rt_flush_dev ( struct net_device * dev )
{
2015-01-15 02:17:06 +03:00
struct net * net = dev_net ( dev ) ;
struct rtable * rt ;
int cpu ;
for_each_possible_cpu ( cpu ) {
struct uncached_list * ul = & per_cpu ( rt_uncached_list , cpu ) ;
2012-08-01 02:06:50 +04:00
2015-01-15 02:17:06 +03:00
spin_lock_bh ( & ul - > lock ) ;
list_for_each_entry ( rt , & ul - > head , rt_uncached ) {
2012-08-01 02:06:50 +04:00
if ( rt - > dst . dev ! = dev )
continue ;
rt - > dst . dev = net - > loopback_dev ;
dev_hold ( rt - > dst . dev ) ;
dev_put ( dev ) ;
}
2015-01-15 02:17:06 +03:00
spin_unlock_bh ( & ul - > lock ) ;
2012-07-17 15:19:00 +04:00
}
}
2012-07-25 09:11:23 +04:00
static bool rt_cache_valid(const struct rtable *rt)
2012-07-17 23:58:50 +04:00
{
2012-07-25 09:11:23 +04:00
	return rt &&
	       rt->dst.obsolete == DST_OBSOLETE_FORCE_CHK &&
	       !rt_is_expired(rt);
2012-07-17 23:58:50 +04:00
}
2012-07-17 23:20:47 +04:00
static void rt_set_nexthop ( struct rtable * rt , __be32 daddr ,
2011-03-05 08:47:09 +03:00
const struct fib_result * res ,
2012-07-17 23:20:47 +04:00
struct fib_nh_exception * fnhe ,
2011-02-17 08:44:24 +03:00
struct fib_info * fi , u16 type , u32 itag )
2005-04-17 02:20:36 +04:00
{
2012-08-01 02:06:50 +04:00
bool cached = false ;
2005-04-17 02:20:36 +04:00
if ( fi ) {
2012-07-17 15:19:00 +04:00
struct fib_nh * nh = & FIB_RES_NH ( * res ) ;
2012-10-08 15:41:18 +04:00
if ( nh - > nh_gw & & nh - > nh_scope = = RT_SCOPE_LINK ) {
2012-07-17 15:19:00 +04:00
rt - > rt_gateway = nh - > nh_gw ;
2012-10-08 15:41:18 +04:00
rt - > rt_uses_gateway = 1 ;
}
2012-07-18 01:55:59 +04:00
dst_init_metrics ( & rt - > dst , fi - > fib_metrics , true ) ;
2011-01-14 15:36:42 +03:00
# ifdef CONFIG_IP_ROUTE_CLASSID
2012-07-17 23:20:47 +04:00
rt - > dst . tclassid = nh - > nh_tclassid ;
2005-04-17 02:20:36 +04:00
# endif
2015-08-20 14:56:25 +03:00
rt - > dst . lwtstate = lwtstate_get ( nh - > nh_lwtstate ) ;
2012-08-01 02:02:02 +04:00
if ( unlikely ( fnhe ) )
2012-08-01 02:06:50 +04:00
cached = rt_bind_exception ( rt , fnhe , daddr ) ;
2012-08-01 02:02:02 +04:00
else if ( ! ( rt - > dst . flags & DST_NOCACHE ) )
2012-08-01 02:06:50 +04:00
cached = rt_cache_route ( nh , rt ) ;
2012-10-08 15:41:18 +04:00
if ( unlikely ( ! cached ) ) {
/* Routes we intend to cache in nexthop exception or
* FIB nexthop have the DST_NOCACHE bit clear .
* However , if we are unsuccessful at storing this
* route into the cache we really need to set it .
*/
rt - > dst . flags | = DST_NOCACHE ;
if ( ! rt - > rt_gateway )
rt - > rt_gateway = daddr ;
rt_add_uncached_list ( rt ) ;
}
} else
2012-08-01 02:06:50 +04:00
rt_add_uncached_list ( rt ) ;
2010-12-09 08:16:57 +03:00
2011-01-14 15:36:42 +03:00
# ifdef CONFIG_IP_ROUTE_CLASSID
2005-04-17 02:20:36 +04:00
# ifdef CONFIG_IP_MULTIPLE_TABLES
2012-07-13 19:21:29 +04:00
set_class_tag ( rt , res - > tclassid ) ;
2005-04-17 02:20:36 +04:00
# endif
set_class_tag ( rt , itag ) ;
# endif
}
2011-04-29 01:13:38 +04:00
static struct rtable *rt_dst_alloc(struct net_device *dev,
2015-09-02 23:58:34 +03:00
				   unsigned int flags, u16 type,
2012-07-17 23:20:47 +04:00
				   bool nopolicy, bool noxfrm, bool will_cache)
2011-02-18 02:42:37 +03:00
{
2015-09-02 23:58:34 +03:00
	struct rtable *rt;

	rt = dst_alloc(&ipv4_dst_ops, dev, 1, DST_OBSOLETE_FORCE_CHK,
		       (will_cache ? 0 : (DST_HOST | DST_NOCACHE)) |
		       (nopolicy ? DST_NOPOLICY : 0) |
		       (noxfrm ? DST_NOXFRM : 0));

	if (rt) {
		rt->rt_genid = rt_genid_ipv4(dev_net(dev));
		rt->rt_flags = flags;
		rt->rt_type = type;
		rt->rt_is_input = 0;
		rt->rt_iif = 0;
		rt->rt_pmtu = 0;
		rt->rt_gateway = 0;
		rt->rt_uses_gateway = 0;
2015-09-02 23:58:35 +03:00
		rt->rt_table_id = 0;
2015-09-02 23:58:34 +03:00
		INIT_LIST_HEAD(&rt->rt_uncached);

		rt->dst.output = ip_output;
		if (flags & RTCF_LOCAL)
			rt->dst.input = ip_local_deliver;
	}

	return rt;
2011-02-18 02:42:37 +03:00
}
2010-06-02 23:21:31 +04:00
/* called in rcu_read_lock() section */
2006-09-27 08:25:20 +04:00
static int ip_route_input_mc ( struct sk_buff * skb , __be32 daddr , __be32 saddr ,
2005-04-17 02:20:36 +04:00
u8 tos , struct net_device * dev , int our )
{
struct rtable * rth ;
2010-06-02 23:21:31 +04:00
struct in_device * in_dev = __in_dev_get_rcu ( dev ) ;
2015-09-02 23:58:34 +03:00
unsigned int flags = RTCF_MULTICAST ;
2005-04-17 02:20:36 +04:00
u32 itag = 0 ;
2010-06-02 16:05:27 +04:00
int err ;
2005-04-17 02:20:36 +04:00
/* Primary sanity checks. */
2015-04-03 11:17:26 +03:00
if ( ! in_dev )
2005-04-17 02:20:36 +04:00
return - EINVAL ;
2008-01-21 14:18:08 +03:00
if ( ipv4_is_multicast ( saddr ) | | ipv4_is_lbcast ( saddr ) | |
2012-06-12 04:44:01 +04:00
skb - > protocol ! = htons ( ETH_P_IP ) )
2005-04-17 02:20:36 +04:00
goto e_inval ;
2015-09-28 21:10:38 +03:00
if ( ipv4_is_loopback ( saddr ) & & ! IN_DEV_ROUTE_LOCALNET ( in_dev ) )
goto e_inval ;
2012-06-12 04:44:01 +04:00
2007-12-17 00:45:43 +03:00
if ( ipv4_is_zeronet ( saddr ) ) {
if ( ! ipv4_is_local_multicast ( daddr ) )
2005-04-17 02:20:36 +04:00
goto e_inval ;
2010-06-02 16:05:27 +04:00
} else {
2012-06-29 05:54:02 +04:00
err = fib_validate_source ( skb , saddr , 0 , tos , 0 , dev ,
in_dev , & itag ) ;
2010-06-02 16:05:27 +04:00
if ( err < 0 )
goto e_err ;
}
2015-09-02 23:58:34 +03:00
if ( our )
flags | = RTCF_LOCAL ;
rth = rt_dst_alloc ( dev_net ( dev ) - > loopback_dev , flags , RTN_MULTICAST ,
2012-07-17 23:20:47 +04:00
IN_DEV_CONF_GET ( in_dev , NOPOLICY ) , false , false ) ;
2005-04-17 02:20:36 +04:00
if ( ! rth )
goto e_nobufs ;
2011-04-29 01:31:47 +04:00
# ifdef CONFIG_IP_ROUTE_CLASSID
rth - > dst . tclassid = itag ;
# endif
2010-06-11 10:31:35 +04:00
rth - > dst . output = ip_rt_bug ;
2012-07-18 01:44:26 +04:00
rth - > rt_is_input = 1 ;
2005-04-17 02:20:36 +04:00
# ifdef CONFIG_IP_MROUTE
2007-12-17 00:45:43 +03:00
if ( ! ipv4_is_local_multicast ( daddr ) & & IN_DEV_MFORWARD ( in_dev ) )
2010-06-11 10:31:35 +04:00
rth - > dst . input = ip_mr_input ;
2005-04-17 02:20:36 +04:00
# endif
RT_CACHE_STAT_INC ( in_slow_mc ) ;
2012-07-17 22:00:09 +04:00
skb_dst_set ( skb , & rth - > dst ) ;
return 0 ;
2005-04-17 02:20:36 +04:00
e_nobufs :
return - ENOBUFS ;
e_inval :
2010-06-02 23:21:31 +04:00
return - EINVAL ;
2010-06-02 16:05:27 +04:00
e_err :
return err ;
2005-04-17 02:20:36 +04:00
}
static void ip_handle_martian_source ( struct net_device * dev ,
struct in_device * in_dev ,
struct sk_buff * skb ,
2006-09-27 08:25:20 +04:00
__be32 daddr ,
__be32 saddr )
2005-04-17 02:20:36 +04:00
{
RT_CACHE_STAT_INC ( in_martian_src ) ;
# ifdef CONFIG_IP_ROUTE_VERBOSE
if ( IN_DEV_LOG_MARTIANS ( in_dev ) & & net_ratelimit ( ) ) {
		/*
		 *	RFC1812 recommendation, if source is martian,
		 *	the only hint is MAC header.
		 */
2012-03-11 22:36:11 +04:00
pr_warn ( " martian source %pI4 from %pI4, on dev %s \n " ,
2008-10-31 10:53:57 +03:00
& daddr , & saddr , dev - > name ) ;
2007-03-20 01:33:04 +03:00
if ( dev - > hard_header_len & & skb_mac_header_was_set ( skb ) ) {
2012-03-11 22:36:11 +04:00
print_hex_dump ( KERN_WARNING , " ll header: " ,
DUMP_PREFIX_OFFSET , 16 , 1 ,
skb_mac_header ( skb ) ,
dev - > hard_header_len , true ) ;
2005-04-17 02:20:36 +04:00
}
}
# endif
}
2010-06-03 08:13:21 +04:00
/* called in rcu_read_lock() section */
2008-04-10 12:52:09 +04:00
static int __mkroute_input ( struct sk_buff * skb ,
2011-02-17 08:44:24 +03:00
const struct fib_result * res ,
2008-04-10 12:52:09 +04:00
struct in_device * in_dev ,
2012-07-26 15:14:38 +04:00
__be32 daddr , __be32 saddr , u32 tos )
2005-04-17 02:20:36 +04:00
{
2013-06-27 11:27:05 +04:00
struct fib_nh_exception * fnhe ;
2005-04-17 02:20:36 +04:00
struct rtable * rth ;
int err ;
struct in_device * out_dev ;
2012-07-17 23:58:50 +04:00
bool do_cache ;
2014-05-22 12:36:55 +04:00
u32 itag = 0 ;
2005-04-17 02:20:36 +04:00
/* get a working reference to the output device */
2010-06-03 08:13:21 +04:00
out_dev = __in_dev_get_rcu ( FIB_RES_DEV ( * res ) ) ;
2015-04-03 11:17:26 +03:00
if ( ! out_dev ) {
2012-05-14 01:56:26 +04:00
net_crit_ratelimited ( " Bug in ip_route_input_slow(). Please report. \n " ) ;
2005-04-17 02:20:36 +04:00
return - EINVAL ;
}
2011-04-07 08:51:50 +04:00
err = fib_validate_source ( skb , saddr , daddr , tos , FIB_RES_OIF ( * res ) ,
2012-06-29 05:54:02 +04:00
in_dev - > dev , in_dev , & itag ) ;
2005-04-17 02:20:36 +04:00
if ( err < 0 ) {
2007-02-09 17:24:47 +03:00
ip_handle_martian_source ( in_dev - > dev , in_dev , skb , daddr ,
2005-04-17 02:20:36 +04:00
saddr ) ;
2007-02-09 17:24:47 +03:00
2005-04-17 02:20:36 +04:00
goto cleanup ;
}
2012-10-08 15:41:15 +04:00
do_cache = res - > fi & & ! itag ;
if ( out_dev = = in_dev & & err & & IN_DEV_TX_REDIRECTS ( out_dev ) & &
2015-01-23 14:01:26 +03:00
skb - > protocol = = htons ( ETH_P_IP ) & &
2005-04-17 02:20:36 +04:00
( IN_DEV_SHARED_MEDIA ( out_dev ) | |
2015-01-23 14:01:26 +03:00
inet_addr_onlink ( out_dev , saddr , FIB_RES_GW ( * res ) ) ) )
IPCB ( skb ) - > flags | = IPSKB_DOREDIRECT ;
2005-04-17 02:20:36 +04:00
	if (skb->protocol != htons(ETH_P_IP)) {
		/* Not IP (i.e. ARP). Do not create route, if it is
		 * invalid for proxy arp. DNAT routes are always valid.
2010-01-05 08:50:47 +03:00
		 *
		 * The proxy arp feature has been extended to allow ARP
		 * replies back out the same interface, to support
		 * Private VLAN switch technologies. See arp.c.
2005-04-17 02:20:36 +04:00
		 */
2010-01-05 08:50:47 +03:00
if ( out_dev = = in_dev & &
IN_DEV_PROXY_ARP_PVLAN ( in_dev ) = = 0 ) {
2005-04-17 02:20:36 +04:00
err = - EINVAL ;
goto cleanup ;
}
}
2013-06-27 11:27:05 +04:00
fnhe = find_exception ( & FIB_RES_NH ( * res ) , daddr ) ;
2012-10-08 15:41:15 +04:00
if ( do_cache ) {
2015-04-03 11:17:27 +03:00
if ( fnhe )
2013-06-27 11:27:05 +04:00
rth = rcu_dereference ( fnhe - > fnhe_rth_input ) ;
else
rth = rcu_dereference ( FIB_RES_NH ( * res ) . nh_rth_input ) ;
2012-10-08 15:41:15 +04:00
if ( rt_cache_valid ( rth ) ) {
skb_dst_set_noref ( skb , & rth - > dst ) ;
goto out ;
2012-07-17 23:58:50 +04:00
}
}
2012-07-17 23:20:47 +04:00
2015-09-02 23:58:34 +03:00
rth = rt_dst_alloc ( out_dev - > dev , 0 , res - > type ,
2011-04-29 01:13:38 +04:00
IN_DEV_CONF_GET ( in_dev , NOPOLICY ) ,
2012-07-17 23:58:50 +04:00
IN_DEV_CONF_GET ( out_dev , NOXFRM ) , do_cache ) ;
2005-04-17 02:20:36 +04:00
if ( ! rth ) {
err = - ENOBUFS ;
goto cleanup ;
}
2012-07-18 01:44:26 +04:00
rth - > rt_is_input = 1 ;
2015-09-02 23:58:35 +03:00
if ( res - > table )
rth - > rt_table_id = res - > table - > tb_id ;
2014-02-17 11:23:43 +04:00
RT_CACHE_STAT_INC ( in_slow_tot ) ;
2005-04-17 02:20:36 +04:00
2010-06-11 10:31:35 +04:00
rth - > dst . input = ip_forward ;
2005-04-17 02:20:36 +04:00
2013-06-27 11:27:05 +04:00
rt_set_nexthop ( rth , daddr , res , fnhe , res - > fi , res - > type , itag ) ;
2015-08-20 14:56:25 +03:00
if ( lwtunnel_output_redirect ( rth - > dst . lwtstate ) ) {
rth - > dst . lwtstate - > orig_output = rth - > dst . output ;
2015-07-21 11:43:50 +03:00
rth - > dst . output = lwtunnel_output ;
2015-08-17 23:42:24 +03:00
}
2015-08-20 14:56:25 +03:00
if ( lwtunnel_input_redirect ( rth - > dst . lwtstate ) ) {
rth - > dst . lwtstate - > orig_input = rth - > dst . input ;
2015-08-17 23:42:24 +03:00
rth - > dst . input = lwtunnel_input ;
}
2012-07-26 15:14:38 +04:00
skb_dst_set ( skb , & rth - > dst ) ;
2012-07-17 23:58:50 +04:00
out :
2005-04-17 02:20:36 +04:00
err = 0 ;
cleanup :
return err ;
2007-02-09 17:24:47 +03:00
}
2005-04-17 02:20:36 +04:00
2015-09-30 11:12:22 +03:00
#ifdef CONFIG_IP_ROUTE_MULTIPATH

/* To make ICMP packets follow the right flow, the multipath hash is
 * calculated from the inner IP addresses in reverse order.
 */
static int ip_multipath_icmp_hash(struct sk_buff *skb)
{
	const struct iphdr *outer_iph = ip_hdr(skb);
	struct icmphdr _icmph;
	const struct icmphdr *icmph;
	struct iphdr _inner_iph;
	const struct iphdr *inner_iph;

	if (unlikely((outer_iph->frag_off & htons(IP_OFFSET)) != 0))
		goto standard_hash;

	icmph = skb_header_pointer(skb, outer_iph->ihl * 4, sizeof(_icmph),
				   &_icmph);
	if (!icmph)
		goto standard_hash;

	if (icmph->type != ICMP_DEST_UNREACH &&
	    icmph->type != ICMP_REDIRECT &&
	    icmph->type != ICMP_TIME_EXCEEDED &&
	    icmph->type != ICMP_PARAMETERPROB) {
		goto standard_hash;
	}

	inner_iph = skb_header_pointer(skb,
				       outer_iph->ihl * 4 + sizeof(_icmph),
				       sizeof(_inner_iph), &_inner_iph);
	if (!inner_iph)
		goto standard_hash;

	return fib_multipath_hash(inner_iph->daddr, inner_iph->saddr);

standard_hash:
	return fib_multipath_hash(outer_iph->saddr, outer_iph->daddr);
}

#endif /* CONFIG_IP_ROUTE_MULTIPATH */
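The reverse-order hashing matters because an ICMP error quotes the offending packet: for an A->B flow, the quoted inner header has saddr=A and daddr=B, so hashing (inner daddr, inner saddr) = (B, A) selects the same nexthop as ordinary B->A return traffic toward A, while hashing the ICMP packet's own outer header would not. A small sketch; hash_pair() is an illustrative stand-in for fib_multipath_hash().

/* Sketch of why the inner IP header is hashed in reverse order. */
#include <stdint.h>
#include <stdio.h>

static unsigned int hash_pair(uint32_t a, uint32_t b)
{
	uint64_t v = (uint64_t)a << 32 | b;

	v ^= v >> 29;
	v *= 0xbf58476d1ce4e5b9ULL;
	return (unsigned int)(v >> 33) % 4;	/* pick one of 4 nexthops */
}

int main(void)
{
	uint32_t A = 0x0a000001, B = 0x0a000002;
	uint32_t R = 0x0a0000fe;	/* router sourcing the error */

	printf("return traffic B->A        : nexthop %u\n", hash_pair(B, A));
	printf("ICMP outer header (R->A)   : nexthop %u\n", hash_pair(R, A));
	printf("ICMP inner, reversed (B, A): nexthop %u\n", hash_pair(B, A));
	return 0;
}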
2008-04-10 12:52:09 +04:00
static int ip_mkroute_input(struct sk_buff *skb,
			    struct fib_result *res,
2011-03-12 04:07:33 +03:00
			    const struct flowi4 *fl4,
2008-04-10 12:52:09 +04:00
			    struct in_device *in_dev,
			    __be32 daddr, __be32 saddr, u32 tos)
2005-04-17 02:20:36 +04:00
{
#ifdef CONFIG_IP_ROUTE_MULTIPATH
2015-09-30 11:12:21 +03:00
	if (res->fi && res->fi->fib_nhs > 1) {
		int h;

2015-09-30 11:12:22 +03:00
		if (unlikely(ip_hdr(skb)->protocol == IPPROTO_ICMP))
			h = ip_multipath_icmp_hash(skb);
		else
			h = fib_multipath_hash(saddr, daddr);
2015-09-30 11:12:21 +03:00
		fib_select_multipath(res, h);
	}
2005-04-17 02:20:36 +04:00
#endif

	/* create a routing cache entry */
2012-07-26 15:14:38 +04:00
	return __mkroute_input(skb, res, in_dev, daddr, saddr, tos);
2005-04-17 02:20:36 +04:00
}
/*
 *	NOTE. We drop all the packets that have local source
 *	addresses, because every properly looped-back packet
 *	must have the correct destination already attached by the output routine.
 *
 *	Such an approach solves two big problems:
 *	1. Non-simplex devices are handled properly.
 *	2. IP spoofing attempts are filtered with 100% guarantee.
2010-10-05 14:41:36 +04:00
 *	called with rcu_read_lock()
2005-04-17 02:20:36 +04:00
 */
2006-09-27 08:25:20 +04:00
static int ip_route_input_slow ( struct sk_buff * skb , __be32 daddr , __be32 saddr ,
2012-06-28 04:05:06 +04:00
u8 tos , struct net_device * dev )
2005-04-17 02:20:36 +04:00
{
struct fib_result res ;
2010-06-02 23:21:31 +04:00
struct in_device * in_dev = __in_dev_get_rcu ( dev ) ;
2015-07-21 11:43:59 +03:00
struct ip_tunnel_info * tun_info ;
2011-03-12 04:07:33 +03:00
struct flowi4 fl4 ;
2012-04-15 09:58:06 +04:00
unsigned int flags = 0 ;
2005-04-17 02:20:36 +04:00
u32 itag = 0 ;
2012-04-15 09:58:06 +04:00
struct rtable * rth ;
2005-04-17 02:20:36 +04:00
int err = - EINVAL ;
2012-04-15 05:34:41 +04:00
struct net * net = dev_net ( dev ) ;
2012-07-17 23:58:50 +04:00
bool do_cache ;
2005-04-17 02:20:36 +04:00
/* IP on this device is disabled. */
if ( ! in_dev )
goto out ;
	/* Check for the most weird martians, which cannot be detected
	   by fib_lookup.
	 */
2015-08-20 14:56:25 +03:00
tun_info = skb_tunnel_info ( skb ) ;
2015-08-28 21:48:19 +03:00
if ( tun_info & & ! ( tun_info - > mode & IP_TUNNEL_INFO_TX ) )
2015-07-21 11:43:59 +03:00
fl4 . flowi4_tun_key . tun_id = tun_info - > key . tun_id ;
else
fl4 . flowi4_tun_key . tun_id = 0 ;
2015-07-21 11:43:56 +03:00
skb_dst_drop ( skb ) ;
2012-06-12 04:44:01 +04:00
if ( ipv4_is_multicast ( saddr ) | | ipv4_is_lbcast ( saddr ) )
2005-04-17 02:20:36 +04:00
goto martian_source ;
2012-07-17 23:58:50 +04:00
res . fi = NULL ;
2015-09-16 19:16:39 +03:00
res . table = NULL ;
2010-10-17 19:11:22 +04:00
if ( ipv4_is_lbcast ( daddr ) | | ( saddr = = 0 & & daddr = = 0 ) )
2005-04-17 02:20:36 +04:00
goto brd_input ;
	/* Accept zero addresses only to limited broadcast;
	 * I am not even sure whether to fix it or not. Waiting for complaints :-)
	 */
2007-12-17 00:45:43 +03:00
if ( ipv4_is_zeronet ( saddr ) )
2005-04-17 02:20:36 +04:00
goto martian_source ;
2012-06-12 04:44:01 +04:00
if ( ipv4_is_zeronet ( daddr ) )
2005-04-17 02:20:36 +04:00
goto martian_destination ;
2012-08-04 01:27:25 +04:00
/* Following code try to avoid calling IN_DEV_NET_ROUTE_LOCALNET(),
* and call it once if daddr or / and saddr are loopback addresses
*/
if ( ipv4_is_loopback ( daddr ) ) {
if ( ! IN_DEV_NET_ROUTE_LOCALNET ( in_dev , net ) )
2012-06-12 04:44:01 +04:00
goto martian_destination ;
2012-08-04 01:27:25 +04:00
} else if ( ipv4_is_loopback ( saddr ) ) {
if ( ! IN_DEV_NET_ROUTE_LOCALNET ( in_dev , net ) )
2012-06-12 04:44:01 +04:00
goto martian_source ;
}
2005-04-17 02:20:36 +04:00
	/*
	 *	Now we are ready to route the packet.
	 */
2011-03-12 04:07:33 +03:00
fl4 . flowi4_oif = 0 ;
2015-09-30 06:07:13 +03:00
fl4 . flowi4_iif = l3mdev_fib_oif_rcu ( dev ) ;
2011-03-12 04:07:33 +03:00
fl4 . flowi4_mark = skb - > mark ;
fl4 . flowi4_tos = tos ;
fl4 . flowi4_scope = RT_SCOPE_UNIVERSE ;
2015-09-30 05:07:07 +03:00
fl4 . flowi4_flags = 0 ;
2011-03-12 04:07:33 +03:00
fl4 . daddr = daddr ;
fl4 . saddr = saddr ;
2015-06-23 20:45:37 +03:00
err = fib_lookup ( net , & fl4 , & res , 0 ) ;
2014-02-14 14:26:22 +04:00
if ( err ! = 0 ) {
if ( ! IN_DEV_FORWARD ( in_dev ) )
err = - EHOSTUNREACH ;
2005-04-17 02:20:36 +04:00
goto no_route ;
2014-02-14 14:26:22 +04:00
}
2005-04-17 02:20:36 +04:00
if ( res . type = = RTN_BROADCAST )
goto brd_input ;
if ( res . type = = RTN_LOCAL ) {
2011-04-07 08:51:50 +04:00
err = fib_validate_source ( skb , saddr , daddr , tos ,
2014-04-16 03:25:35 +04:00
0 , dev , in_dev , & itag ) ;
2010-06-02 16:05:27 +04:00
if ( err < 0 )
2015-09-28 21:10:44 +03:00
goto martian_source ;
2005-04-17 02:20:36 +04:00
goto local_input ;
}
2014-02-14 14:26:22 +04:00
if ( ! IN_DEV_FORWARD ( in_dev ) ) {
err = - EHOSTUNREACH ;
2012-06-27 03:27:09 +04:00
goto no_route ;
2014-02-14 14:26:22 +04:00
}
2005-04-17 02:20:36 +04:00
if ( res . type ! = RTN_UNICAST )
goto martian_destination ;
2011-03-12 04:07:33 +03:00
err = ip_mkroute_input ( skb , & res , & fl4 , in_dev , daddr , saddr , tos ) ;
2005-04-17 02:20:36 +04:00
out : return err ;
brd_input :
if ( skb - > protocol ! = htons ( ETH_P_IP ) )
goto e_inval ;
2012-06-28 15:05:27 +04:00
if ( ! ipv4_is_zeronet ( saddr ) ) {
2012-06-29 05:54:02 +04:00
err = fib_validate_source ( skb , saddr , 0 , tos , 0 , dev ,
in_dev , & itag ) ;
2005-04-17 02:20:36 +04:00
if ( err < 0 )
2015-09-28 21:10:44 +03:00
goto martian_source ;
2005-04-17 02:20:36 +04:00
}
flags | = RTCF_BROADCAST ;
res . type = RTN_BROADCAST ;
RT_CACHE_STAT_INC ( in_brd ) ;
local_input :
2012-07-17 23:58:50 +04:00
do_cache = false ;
if ( res . fi ) {
2012-07-24 00:22:20 +04:00
if ( ! itag ) {
2012-07-31 05:08:23 +04:00
rth = rcu_dereference ( FIB_RES_NH ( res ) . nh_rth_input ) ;
2012-07-17 23:58:50 +04:00
if ( rt_cache_valid ( rth ) ) {
2012-07-26 15:14:38 +04:00
skb_dst_set_noref ( skb , & rth - > dst ) ;
err = 0 ;
goto out ;
2012-07-17 23:58:50 +04:00
}
do_cache = true ;
}
}
2015-09-02 23:58:34 +03:00
rth = rt_dst_alloc ( net - > loopback_dev , flags | RTCF_LOCAL , res . type ,
2012-07-17 23:58:50 +04:00
IN_DEV_CONF_GET ( in_dev , NOPOLICY ) , false , do_cache ) ;
2005-04-17 02:20:36 +04:00
if ( ! rth )
goto e_nobufs ;
2010-06-11 10:31:35 +04:00
rth - > dst . output = ip_rt_bug ;
2011-04-29 01:31:47 +04:00
# ifdef CONFIG_IP_ROUTE_CLASSID
rth - > dst . tclassid = itag ;
# endif
2012-07-18 01:44:26 +04:00
rth - > rt_is_input = 1 ;
2015-09-02 23:58:35 +03:00
if ( res . table )
rth - > rt_table_id = res . table - > tb_id ;
2015-07-21 11:43:47 +03:00
2014-02-17 11:23:43 +04:00
RT_CACHE_STAT_INC ( in_slow_tot ) ;
2005-04-17 02:20:36 +04:00
if ( res . type = = RTN_UNREACHABLE ) {
2010-06-11 10:31:35 +04:00
rth - > dst . input = ip_error ;
rth - > dst . error = - err ;
2005-04-17 02:20:36 +04:00
rth - > rt_flags & = ~ RTCF_LOCAL ;
}
2013-11-20 07:12:34 +04:00
if ( do_cache ) {
if ( unlikely ( ! rt_cache_route ( & FIB_RES_NH ( res ) , rth ) ) ) {
rth - > dst . flags | = DST_NOCACHE ;
rt_add_uncached_list ( rth ) ;
}
}
2012-07-17 22:00:09 +04:00
skb_dst_set ( skb , & rth - > dst ) ;
2011-03-03 01:31:35 +03:00
err = 0 ;
2010-10-05 14:41:36 +04:00
goto out ;
2005-04-17 02:20:36 +04:00
no_route :
RT_CACHE_STAT_INC ( in_no_route ) ;
res . type = RTN_UNREACHABLE ;
2014-10-30 12:09:53 +03:00
res . fi = NULL ;
2015-09-16 19:16:39 +03:00
res . table = NULL ;
2005-04-17 02:20:36 +04:00
goto local_input ;
	/*
	 *	Do not cache martian addresses: they should be logged (RFC1812)
	 */
martian_destination :
RT_CACHE_STAT_INC ( in_martian_dst ) ;
# ifdef CONFIG_IP_ROUTE_VERBOSE
2012-05-14 01:56:26 +04:00
if ( IN_DEV_LOG_MARTIANS ( in_dev ) )
net_warn_ratelimited ( " martian destination %pI4 from %pI4, dev %s \n " ,
& daddr , & saddr , dev - > name ) ;
2005-04-17 02:20:36 +04:00
# endif
2005-06-29 00:06:23 +04:00
2005-04-17 02:20:36 +04:00
e_inval :
err = - EINVAL ;
2010-10-05 14:41:36 +04:00
goto out ;
2005-04-17 02:20:36 +04:00
e_nobufs :
err = - ENOBUFS ;
2010-10-05 14:41:36 +04:00
goto out ;
2005-04-17 02:20:36 +04:00
martian_source :
ip_handle_martian_source ( dev , in_dev , skb , daddr , saddr ) ;
2010-10-05 14:41:36 +04:00
goto out ;
2005-04-17 02:20:36 +04:00
}
2012-07-26 15:14:38 +04:00
int ip_route_input_noref ( struct sk_buff * skb , __be32 daddr , __be32 saddr ,
u8 tos , struct net_device * dev )
2005-04-17 02:20:36 +04:00
{
2010-06-02 23:21:31 +04:00
int res ;
2005-04-17 02:20:36 +04:00
2010-06-02 23:21:31 +04:00
rcu_read_lock ( ) ;
2005-04-17 02:20:36 +04:00
	/* Multicast recognition logic is moved from route cache to here.
	   The problem was that too many Ethernet cards have broken/missing
	   hardware multicast filters :-( As a result, a host on a multicast
	   network acquires a lot of useless route cache entries, e.g. for
	   SDR messages from all over the world. Now we try to get rid of them.
	   Really, provided the software IP multicast filter is organized
	   reasonably (at least, hashed), it does not result in a slowdown
	   compared with route cache reject entries.
	   Note that multicast routers are not affected, because the
	   route cache entry is created eventually.
	 */
2007-12-17 00:45:43 +03:00
if ( ipv4_is_multicast ( daddr ) ) {
2010-06-02 23:21:31 +04:00
struct in_device * in_dev = __in_dev_get_rcu ( dev ) ;
2005-04-17 02:20:36 +04:00
2010-06-02 23:21:31 +04:00
if ( in_dev ) {
2011-03-11 03:34:38 +03:00
int our = ip_check_mc_rcu ( in_dev , daddr , saddr ,
ip_hdr ( skb ) - > protocol ) ;
2005-04-17 02:20:36 +04:00
if ( our
# ifdef CONFIG_IP_MROUTE
2009-11-23 21:41:23 +03:00
| |
( ! ipv4_is_local_multicast ( daddr ) & &
IN_DEV_MFORWARD ( in_dev ) )
2005-04-17 02:20:36 +04:00
# endif
2009-11-23 21:41:23 +03:00
) {
2010-06-02 23:21:31 +04:00
int res = ip_route_input_mc ( skb , daddr , saddr ,
tos , dev , our ) ;
2005-04-17 02:20:36 +04:00
rcu_read_unlock ( ) ;
2010-06-02 23:21:31 +04:00
return res ;
2005-04-17 02:20:36 +04:00
}
}
rcu_read_unlock ( ) ;
return - EINVAL ;
}
2012-06-28 04:05:06 +04:00
res = ip_route_input_slow ( skb , daddr , saddr , tos , dev ) ;
2010-06-02 23:21:31 +04:00
rcu_read_unlock ( ) ;
return res ;
2005-04-17 02:20:36 +04:00
}
2012-07-26 15:14:38 +04:00
EXPORT_SYMBOL ( ip_route_input_noref ) ;
2005-04-17 02:20:36 +04:00
2010-10-05 14:41:36 +04:00
/* called with rcu_read_lock() */
2011-02-17 08:44:24 +03:00
static struct rtable * __mkroute_output ( const struct fib_result * res ,
2012-07-01 06:02:56 +04:00
const struct flowi4 * fl4 , int orig_oif ,
2011-12-02 15:39:42 +04:00
struct net_device * dev_out ,
2011-02-18 02:29:00 +03:00
unsigned int flags )
2005-04-17 02:20:36 +04:00
{
2011-02-17 08:44:24 +03:00
struct fib_info * fi = res - > fi ;
2012-07-17 23:20:47 +04:00
struct fib_nh_exception * fnhe ;
2011-02-18 02:29:00 +03:00
struct in_device * in_dev ;
2011-02-17 08:44:24 +03:00
u16 type = res - > type ;
2011-02-18 02:29:00 +03:00
struct rtable * rth ;
2012-10-08 15:41:19 +04:00
bool do_cache ;
2005-04-17 02:20:36 +04:00
2012-06-12 04:44:01 +04:00
in_dev = __in_dev_get_rcu ( dev_out ) ;
if ( ! in_dev )
2011-02-18 02:29:00 +03:00
return ERR_PTR ( - EINVAL ) ;
2005-04-17 02:20:36 +04:00
2012-06-12 04:44:01 +04:00
if ( likely ( ! IN_DEV_ROUTE_LOCALNET ( in_dev ) ) )
if ( ipv4_is_loopback ( fl4 - > saddr ) & & ! ( dev_out - > flags & IFF_LOOPBACK ) )
return ERR_PTR ( - EINVAL ) ;
2011-03-12 04:07:33 +03:00
if ( ipv4_is_lbcast ( fl4 - > daddr ) )
2011-02-17 08:44:24 +03:00
type = RTN_BROADCAST ;
2011-03-12 04:07:33 +03:00
else if ( ipv4_is_multicast ( fl4 - > daddr ) )
2011-02-17 08:44:24 +03:00
type = RTN_MULTICAST ;
2011-03-12 04:07:33 +03:00
else if ( ipv4_is_zeronet ( fl4 - > daddr ) )
2011-02-18 02:29:00 +03:00
return ERR_PTR ( - EINVAL ) ;
2005-04-17 02:20:36 +04:00
if ( dev_out - > flags & IFF_LOOPBACK )
flags | = RTCF_LOCAL ;
2012-11-23 01:04:14 +04:00
do_cache = true ;
2011-02-17 08:44:24 +03:00
if ( type = = RTN_BROADCAST ) {
2005-04-17 02:20:36 +04:00
flags | = RTCF_BROADCAST | RTCF_LOCAL ;
2011-02-17 08:44:24 +03:00
fi = NULL ;
} else if ( type = = RTN_MULTICAST ) {
2010-09-29 15:53:50 +04:00
flags | = RTCF_MULTICAST | RTCF_LOCAL ;
2011-04-29 01:48:42 +04:00
if ( ! ip_check_mc_rcu ( in_dev , fl4 - > daddr , fl4 - > saddr ,
fl4 - > flowi4_proto ) )
2005-04-17 02:20:36 +04:00
flags & = ~ RTCF_LOCAL ;
2012-11-23 01:04:14 +04:00
else
do_cache = false ;
2005-04-17 02:20:36 +04:00
		/* If the multicast route does not exist, use the
2010-09-29 15:53:50 +04:00
		 * default one, but do not gateway in this case.
		 * Yes, it is a hack.
2005-04-17 02:20:36 +04:00
		 */
2011-02-17 08:44:24 +03:00
if ( fi & & res - > prefixlen < 4 )
fi = NULL ;
2005-04-17 02:20:36 +04:00
}
2012-07-17 23:20:47 +04:00
fnhe = NULL ;
2012-11-23 01:04:14 +04:00
do_cache & = fi ! = NULL ;
if ( do_cache ) {
2012-08-01 02:02:02 +04:00
struct rtable __rcu * * prth ;
2012-10-08 15:41:19 +04:00
struct fib_nh * nh = & FIB_RES_NH ( * res ) ;
2012-07-31 09:45:30 +04:00
2012-10-08 15:41:19 +04:00
fnhe = find_exception ( nh , fl4 - > daddr ) ;
2012-08-01 02:02:02 +04:00
if ( fnhe )
2013-06-27 11:27:05 +04:00
prth = & fnhe - > fnhe_rth_output ;
2012-10-08 15:41:19 +04:00
else {
if ( unlikely ( fl4 - > flowi4_flags &
FLOWI_FLAG_KNOWN_NH & &
! ( nh - > nh_gw & &
nh - > nh_scope = = RT_SCOPE_LINK ) ) ) {
do_cache = false ;
goto add ;
}
2014-08-17 21:30:35 +04:00
prth = raw_cpu_ptr ( nh - > nh_pcpu_rth_output ) ;
2012-10-08 15:41:19 +04:00
}
2012-08-01 02:02:02 +04:00
rth = rcu_dereference ( * prth ) ;
if ( rt_cache_valid ( rth ) ) {
dst_hold ( & rth - > dst ) ;
return rth ;
2012-07-17 23:20:47 +04:00
}
}
2012-10-08 15:41:19 +04:00
add :
2015-09-02 23:58:34 +03:00
rth = rt_dst_alloc ( dev_out , flags , type ,
2011-04-29 01:13:38 +04:00
IN_DEV_CONF_GET ( in_dev , NOPOLICY ) ,
2012-07-17 23:20:47 +04:00
IN_DEV_CONF_GET ( in_dev , NOXFRM ) ,
2012-10-08 15:41:19 +04:00
do_cache ) ;
2010-10-07 18:48:38 +04:00
if ( ! rth )
2011-02-18 02:29:00 +03:00
return ERR_PTR ( - ENOBUFS ) ;
2010-10-07 18:48:38 +04:00
2012-07-24 00:57:45 +04:00
rth - > rt_iif = orig_oif ? : 0 ;
2015-09-02 23:58:35 +03:00
if ( res - > table )
rth - > rt_table_id = res - > table - > tb_id ;
2005-04-17 02:20:36 +04:00
RT_CACHE_STAT_INC ( out_slow_tot ) ;
if ( flags & ( RTCF_BROADCAST | RTCF_MULTICAST ) ) {
2007-02-09 17:24:47 +03:00
if ( flags & RTCF_LOCAL & &
2005-04-17 02:20:36 +04:00
! ( dev_out - > flags & IFF_LOOPBACK ) ) {
2010-06-11 10:31:35 +04:00
rth - > dst . output = ip_mc_output ;
2005-04-17 02:20:36 +04:00
RT_CACHE_STAT_INC ( out_slow_mc ) ;
}
# ifdef CONFIG_IP_MROUTE
2011-02-17 08:44:24 +03:00
if ( type = = RTN_MULTICAST ) {
2005-04-17 02:20:36 +04:00
if ( IN_DEV_MFORWARD ( in_dev ) & &
2011-04-29 01:48:42 +04:00
! ipv4_is_local_multicast ( fl4 - > daddr ) ) {
2010-06-11 10:31:35 +04:00
rth - > dst . input = ip_mr_input ;
rth - > dst . output = ip_mc_output ;
2005-04-17 02:20:36 +04:00
}
}
# endif
}
2012-07-17 23:20:47 +04:00
rt_set_nexthop ( rth , fl4 - > daddr , res , fnhe , fi , type , 0 ) ;
2015-08-20 14:56:25 +03:00
if ( lwtunnel_output_redirect ( rth - > dst . lwtstate ) )
2015-08-03 19:39:21 +03:00
rth - > dst . output = lwtunnel_output ;
2005-04-17 02:20:36 +04:00
2011-02-18 02:29:00 +03:00
return rth ;
2005-04-17 02:20:36 +04:00
}
/*
 * Major route resolver routine.
 */
struct rtable *__ip_route_output_key_hash(struct net *net, struct flowi4 *fl4,
					  int mp_hash)
{
	struct net_device *dev_out = NULL;
	__u8 tos = RT_FL_TOS(fl4);
	unsigned int flags = 0;
	struct fib_result res;
	struct rtable *rth;
	int orig_oif;
	int err = -ENETUNREACH;

	res.tclassid = 0;
	res.fi = NULL;
	res.table = NULL;

	orig_oif = fl4->flowi4_oif;

	fl4->flowi4_iif = LOOPBACK_IFINDEX;
	fl4->flowi4_tos = tos & IPTOS_RT_MASK;
	fl4->flowi4_scope = ((tos & RTO_ONLINK) ?
			     RT_SCOPE_LINK : RT_SCOPE_UNIVERSE);
	/* Only the flow members that actually matter for an output route
	 * lookup are initialized above: FIB rules matching uses dst, src,
	 * tos, iif, oif and mark; the FIB trie lookup uses dst; and the
	 * FIB semantic match uses tos, scope and oif.  Eliding the full
	 * memset() of the on-stack flow saved ~300 cycles per lookup on
	 * Niagara2.
	 */
	rcu_read_lock();
	if (fl4->saddr) {
		rth = ERR_PTR(-EINVAL);
		if (ipv4_is_multicast(fl4->saddr) ||
		    ipv4_is_lbcast(fl4->saddr) ||
		    ipv4_is_zeronet(fl4->saddr))
			goto out;

		/* I removed the check for oif == dev_out->oif here.
		   It was wrong for two reasons:
		   1. ip_dev_find(net, saddr) can return the wrong iface,
		      if saddr is assigned to multiple interfaces.
		   2. Moreover, we are allowed to send packets with the
		      saddr of another iface. --ANK
		 */

		if (fl4->flowi4_oif == 0 &&
		    (ipv4_is_multicast(fl4->daddr) ||
		     ipv4_is_lbcast(fl4->daddr))) {
			/* It is equivalent to inet_addr_type(saddr) == RTN_LOCAL */
			dev_out = __ip_dev_find(net, fl4->saddr, false);
			if (!dev_out)
				goto out;

			/* Special hack: user can direct multicasts
			   and limited broadcast via the necessary interface
			   without fiddling with IP_MULTICAST_IF or IP_PKTINFO.
			   This hack is not just for fun, it allows
			   vic, vat and friends to work.
			   They bind the socket to loopback, set ttl to zero
			   and expect that it will work.
			   From the viewpoint of the routing cache they are
			   broken, because we are not allowed to build a
			   multicast path with a loopback source addr (look,
			   the routing cache cannot know that ttl is zero,
			   so that the packet will not leave this host and
			   the route is valid).
			   Luckily, this hack is a good workaround.
			 */

			fl4->flowi4_oif = dev_out->ifindex;
			goto make_route;
		}

		if (!(fl4->flowi4_flags & FLOWI_FLAG_ANYSRC)) {
			/* It is equivalent to inet_addr_type(saddr) == RTN_LOCAL */
			if (!__ip_dev_find(net, fl4->saddr, false))
				goto out;
		}
	}
	if (fl4->flowi4_oif) {
		dev_out = dev_get_by_index_rcu(net, fl4->flowi4_oif);
		rth = ERR_PTR(-ENODEV);
		if (!dev_out)
			goto out;

		/* RACE: Check return value of inet_select_addr instead. */
		if (!(dev_out->flags & IFF_UP) || !__in_dev_get_rcu(dev_out)) {
			rth = ERR_PTR(-ENETUNREACH);
			goto out;
		}
		if (ipv4_is_local_multicast(fl4->daddr) ||
		    ipv4_is_lbcast(fl4->daddr) ||
		    fl4->flowi4_proto == IPPROTO_IGMP) {
			if (!fl4->saddr)
				fl4->saddr = inet_select_addr(dev_out, 0,
							      RT_SCOPE_LINK);
			goto make_route;
		}
		if (!fl4->saddr) {
			if (ipv4_is_multicast(fl4->daddr))
				fl4->saddr = inet_select_addr(dev_out, 0,
							      fl4->flowi4_scope);
			else if (!fl4->daddr)
				fl4->saddr = inet_select_addr(dev_out, 0,
							      RT_SCOPE_HOST);
		}

		rth = l3mdev_get_rtable(dev_out, fl4);
		if (rth)
			goto out;
	}
	if (!fl4->daddr) {
		fl4->daddr = fl4->saddr;
		if (!fl4->daddr)
			fl4->daddr = fl4->saddr = htonl(INADDR_LOOPBACK);
		dev_out = net->loopback_dev;
		fl4->flowi4_oif = LOOPBACK_IFINDEX;
		res.type = RTN_LOCAL;
		flags |= RTCF_LOCAL;
		goto make_route;
	}

	err = fib_lookup(net, fl4, &res, 0);
	if (err) {
		res.fi = NULL;
		res.table = NULL;
		if (fl4->flowi4_oif) {
			/* Apparently, routing tables are wrong. Assume that
			   the destination is on link.

			   WHY? DW.
			   Because we are allowed to send to an iface even
			   if it has NO routes and NO assigned addresses.
			   When oif is specified, routing tables are looked
			   up with only one purpose: to catch if the
			   destination is gatewayed, rather than direct.
			   Moreover, if MSG_DONTROUTE is set, we send the
			   packet, ignoring both routing tables and ifaddr
			   state. --ANK

			   We could make it even if oif is unknown, likely
			   IPv6, but we do not.
			 */

			if (fl4->saddr == 0)
				fl4->saddr = inet_select_addr(dev_out, 0,
							      RT_SCOPE_LINK);
			res.type = RTN_UNICAST;
			goto make_route;
		}
		rth = ERR_PTR(err);
		goto out;
	}
	if (res.type == RTN_LOCAL) {
		if (!fl4->saddr) {
			if (res.fi->fib_prefsrc)
				fl4->saddr = res.fi->fib_prefsrc;
			else
				fl4->saddr = fl4->daddr;
		}
		dev_out = net->loopback_dev;
		fl4->flowi4_oif = dev_out->ifindex;
		flags |= RTCF_LOCAL;
		goto make_route;
	}

	fib_select_path(net, &res, fl4, mp_hash);

	dev_out = FIB_RES_DEV(res);
	fl4->flowi4_oif = dev_out->ifindex;

make_route:
	rth = __mkroute_output(&res, fl4, orig_oif, dev_out, flags);

out:
	rcu_read_unlock();
	return rth;
}
EXPORT_SYMBOL_GPL(__ip_route_output_key_hash);
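#if 0
/* Hedged usage sketch (not compiled): a typical in-kernel caller of
 * the resolver above.  ip_route_output_key() is the real wrapper from
 * <net/route.h> (it feeds into __ip_route_output_key_hash());
 * example_output_lookup() itself is hypothetical.  Errors come back
 * via the ERR_PTR() convention used throughout this file.
 */
static struct rtable *example_output_lookup(struct net *net,
					    __be32 daddr, __be32 saddr)
{
	struct flowi4 fl4;
	struct rtable *rt;

	memset(&fl4, 0, sizeof(fl4));
	fl4.daddr = daddr;
	fl4.saddr = saddr;		/* 0 lets the resolver pick a source */
	fl4.flowi4_proto = IPPROTO_UDP;

	rt = ip_route_output_key(net, &fl4);
	if (IS_ERR(rt))
		return rt;		/* e.g. ERR_PTR(-ENETUNREACH) */

	/* caller transmits via rt->dst, then releases with ip_rt_put(rt) */
	return rt;
}
#endif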
static struct dst_entry *ipv4_blackhole_dst_check(struct dst_entry *dst, u32 cookie)
{
	return NULL;
}

static unsigned int ipv4_blackhole_mtu(const struct dst_entry *dst)
{
	unsigned int mtu = dst_metric_raw(dst, RTAX_MTU);

	return mtu ? : dst->dev->mtu;
}

static void ipv4_rt_blackhole_update_pmtu(struct dst_entry *dst, struct sock *sk,
					  struct sk_buff *skb, u32 mtu)
{
}

static void ipv4_rt_blackhole_redirect(struct dst_entry *dst, struct sock *sk,
				       struct sk_buff *skb)
{
}

static u32 *ipv4_rt_blackhole_cow_metrics(struct dst_entry *dst,
					  unsigned long old)
{
	return NULL;
}

static struct dst_ops ipv4_dst_blackhole_ops = {
	.family			= AF_INET,
	.check			= ipv4_blackhole_dst_check,
	.mtu			= ipv4_blackhole_mtu,
	.default_advmss		= ipv4_default_advmss,
	.update_pmtu		= ipv4_rt_blackhole_update_pmtu,
	.redirect		= ipv4_rt_blackhole_redirect,
	.cow_metrics		= ipv4_rt_blackhole_cow_metrics,
	.neigh_lookup		= ipv4_neigh_lookup,
};
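
/* A blackhole route looks like a normal dst but never transmits:
 * input and output both discard packets, and every state-mutating
 * op above (PMTU update, redirect, metrics COW) is a no-op.  xfrm
 * uses it to swallow traffic while a bundle is still being resolved
 * (e.g. during IKE negotiation).
 */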
struct dst_entry *ipv4_blackhole_route(struct net *net, struct dst_entry *dst_orig)
{
	struct rtable *ort = (struct rtable *)dst_orig;
	struct rtable *rt;

	rt = dst_alloc(&ipv4_dst_blackhole_ops, NULL, 1, DST_OBSOLETE_NONE, 0);
	if (rt) {
		struct dst_entry *new = &rt->dst;

		new->__use = 1;
		new->input = dst_discard;
		new->output = dst_discard_sk;

		new->dev = ort->dst.dev;
		if (new->dev)
			dev_hold(new->dev);

		rt->rt_is_input = ort->rt_is_input;
		rt->rt_iif = ort->rt_iif;
		rt->rt_pmtu = ort->rt_pmtu;

		rt->rt_genid = rt_genid_ipv4(net);
		rt->rt_flags = ort->rt_flags;
		rt->rt_type = ort->rt_type;
		rt->rt_gateway = ort->rt_gateway;
		rt->rt_uses_gateway = ort->rt_uses_gateway;

		INIT_LIST_HEAD(&rt->rt_uncached);

		dst_free(new);
	}

	dst_release(dst_orig);

	return rt ? &rt->dst : ERR_PTR(-ENOMEM);
}
struct rtable *ip_route_output_flow(struct net *net, struct flowi4 *flp4,
				    const struct sock *sk)
{
	struct rtable *rt = __ip_route_output_key(net, flp4);

	if (IS_ERR(rt))
		return rt;

	if (flp4->flowi4_proto)
		rt = (struct rtable *)xfrm_lookup_route(net, &rt->dst,
							flowi4_to_flowi(flp4),
							sk, 0);

	return rt;
}
EXPORT_SYMBOL_GPL(ip_route_output_flow);
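#if 0
/* Hedged sketch (not compiled): how a connected-socket path might use
 * ip_route_output_flow().  flowi4_init_output() and RT_CONN_FLAGS()
 * are the real helpers from <net/flow.h> and <net/route.h> in this
 * kernel generation; the surrounding function is illustrative only.
 */
static struct rtable *example_sk_route(struct net *net, struct sock *sk,
				       __be32 daddr, __be32 saddr,
				       __be16 dport, __be16 sport)
{
	struct flowi4 fl4;

	flowi4_init_output(&fl4, sk->sk_bound_dev_if, sk->sk_mark,
			   RT_CONN_FLAGS(sk), RT_SCOPE_UNIVERSE,
			   sk->sk_protocol, 0, daddr, saddr, dport, sport);

	/* resolves the route, then runs it through xfrm_lookup_route() */
	return ip_route_output_flow(net, &fl4, sk);
}
#endif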
static int rt_fill_info(struct net *net, __be32 dst, __be32 src, u32 table_id,
			struct flowi4 *fl4, struct sk_buff *skb, u32 portid,
			u32 seq, int event, int nowait, unsigned int flags)
{
	struct rtable *rt = skb_rtable(skb);
	struct rtmsg *r;
	struct nlmsghdr *nlh;
	unsigned long expires = 0;
	u32 error;
	u32 metrics[RTAX_MAX];

	nlh = nlmsg_put(skb, portid, seq, event, sizeof(*r), flags);
	if (!nlh)
		return -EMSGSIZE;

	r = nlmsg_data(nlh);
	r->rtm_family	 = AF_INET;
	r->rtm_dst_len	= 32;
	r->rtm_src_len	= 0;
	r->rtm_tos	= fl4->flowi4_tos;
	r->rtm_table	= table_id;
	if (nla_put_u32(skb, RTA_TABLE, table_id))
		goto nla_put_failure;
	r->rtm_type	= rt->rt_type;
	r->rtm_scope	= RT_SCOPE_UNIVERSE;
	r->rtm_protocol = RTPROT_UNSPEC;
	r->rtm_flags	= (rt->rt_flags & ~0xFFFF) | RTM_F_CLONED;
	if (rt->rt_flags & RTCF_NOTIFY)
		r->rtm_flags |= RTM_F_NOTIFY;
	if (IPCB(skb)->flags & IPSKB_DOREDIRECT)
		r->rtm_flags |= RTCF_DOREDIRECT;

	if (nla_put_in_addr(skb, RTA_DST, dst))
		goto nla_put_failure;
	if (src) {
		r->rtm_src_len = 32;
		if (nla_put_in_addr(skb, RTA_SRC, src))
			goto nla_put_failure;
	}
	if (rt->dst.dev &&
	    nla_put_u32(skb, RTA_OIF, rt->dst.dev->ifindex))
		goto nla_put_failure;
#ifdef CONFIG_IP_ROUTE_CLASSID
	if (rt->dst.tclassid &&
	    nla_put_u32(skb, RTA_FLOW, rt->dst.tclassid))
		goto nla_put_failure;
#endif
	if (!rt_is_input_route(rt) &&
	    fl4->saddr != src) {
		if (nla_put_in_addr(skb, RTA_PREFSRC, fl4->saddr))
			goto nla_put_failure;
	}
	if (rt->rt_uses_gateway &&
	    nla_put_in_addr(skb, RTA_GATEWAY, rt->rt_gateway))
		goto nla_put_failure;

	expires = rt->dst.expires;
	if (expires) {
		unsigned long now = jiffies;

		if (time_before(now, expires))
			expires -= now;
		else
			expires = 0;
	}

	memcpy(metrics, dst_metrics_ptr(&rt->dst), sizeof(metrics));
	if (rt->rt_pmtu && expires)
		metrics[RTAX_MTU - 1] = rt->rt_pmtu;
	if (rtnetlink_put_metrics(skb, metrics) < 0)
		goto nla_put_failure;

	if (fl4->flowi4_mark &&
	    nla_put_u32(skb, RTA_MARK, fl4->flowi4_mark))
		goto nla_put_failure;

	error = rt->dst.error;

	if (rt_is_input_route(rt)) {
#ifdef CONFIG_IP_MROUTE
		if (ipv4_is_multicast(dst) && !ipv4_is_local_multicast(dst) &&
		    IPV4_DEVCONF_ALL(net, MC_FORWARDING)) {
			int err = ipmr_get_route(net, skb,
						 fl4->saddr, fl4->daddr,
						 r, nowait);
			if (err <= 0) {
				if (!nowait) {
					if (err == 0)
						return 0;
					goto nla_put_failure;
				} else {
					if (err == -EMSGSIZE)
						goto nla_put_failure;
					error = err;
				}
			}
		} else
#endif
			if (nla_put_u32(skb, RTA_IIF, skb->dev->ifindex))
				goto nla_put_failure;
	}

	if (rtnl_put_cacheinfo(skb, &rt->dst, 0, expires, error) < 0)
		goto nla_put_failure;

	nlmsg_end(skb, nlh);
	return 0;

nla_put_failure:
	nlmsg_cancel(skb, nlh);
	return -EMSGSIZE;
}
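#if 0
/* Hedged userspace sketch (not compiled): walking the attributes of
 * the RTM_NEWROUTE message that rt_fill_info() above constructs,
 * using only the standard <linux/rtnetlink.h> macros.  The function
 * name is hypothetical.
 */
#include <linux/rtnetlink.h>
#include <stdio.h>

static void example_parse_route_reply(struct nlmsghdr *nlh)
{
	struct rtmsg *r = NLMSG_DATA(nlh);
	struct rtattr *rta = RTM_RTA(r);
	int len = RTM_PAYLOAD(nlh);

	for (; RTA_OK(rta, len); rta = RTA_NEXT(rta, len)) {
		switch (rta->rta_type) {
		case RTA_DST:		/* always emitted above */
		case RTA_PREFSRC:	/* chosen source, output routes only */
		case RTA_GATEWAY:	/* only if rt_uses_gateway was set */
		case RTA_OIF:		/* output interface index */
			printf("attr %d, %d bytes\n",
			       rta->rta_type, (int)RTA_PAYLOAD(rta));
			break;
		default:
			break;
		}
	}
}
#endif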
static int inet_rtm_getroute(struct sk_buff *in_skb, struct nlmsghdr *nlh)
{
	struct net *net = sock_net(in_skb->sk);
	struct rtmsg *rtm;
	struct nlattr *tb[RTA_MAX+1];
	struct rtable *rt = NULL;
	struct flowi4 fl4;
	__be32 dst = 0;
	__be32 src = 0;
	u32 iif;
	int err;
	int mark;
	struct sk_buff *skb;
	u32 table_id = RT_TABLE_MAIN;

	err = nlmsg_parse(nlh, sizeof(*rtm), tb, RTA_MAX, rtm_ipv4_policy);
	if (err < 0)
		goto errout;

	rtm = nlmsg_data(nlh);

	skb = alloc_skb(NLMSG_GOODSIZE, GFP_KERNEL);
	if (!skb) {
		err = -ENOBUFS;
		goto errout;
	}

	/* Reserve room for dummy headers; this skb can pass
	   through a good chunk of the routing engine.
	 */
	skb_reset_mac_header(skb);
	skb_reset_network_header(skb);

	/* Bugfix: need to give ip_route_input enough of an IP header to not gag. */
	ip_hdr(skb)->protocol = IPPROTO_ICMP;
	skb_reserve(skb, MAX_HEADER + sizeof(struct iphdr));

	src = tb[RTA_SRC] ? nla_get_in_addr(tb[RTA_SRC]) : 0;
	dst = tb[RTA_DST] ? nla_get_in_addr(tb[RTA_DST]) : 0;
	iif = tb[RTA_IIF] ? nla_get_u32(tb[RTA_IIF]) : 0;
	mark = tb[RTA_MARK] ? nla_get_u32(tb[RTA_MARK]) : 0;

	memset(&fl4, 0, sizeof(fl4));
	fl4.daddr = dst;
	fl4.saddr = src;
	fl4.flowi4_tos = rtm->rtm_tos;
	fl4.flowi4_oif = tb[RTA_OIF] ? nla_get_u32(tb[RTA_OIF]) : 0;
	fl4.flowi4_mark = mark;

	if (iif) {
		struct net_device *dev;

		dev = __dev_get_by_index(net, iif);
		if (!dev) {
			err = -ENODEV;
			goto errout_free;
		}

		skb->protocol	= htons(ETH_P_IP);
		skb->dev	= dev;
		skb->mark	= mark;
		local_bh_disable();
		err = ip_route_input(skb, dst, src, rtm->rtm_tos, dev);
		local_bh_enable();

		rt = skb_rtable(skb);
		if (err == 0 && rt->dst.error)
			err = -rt->dst.error;
	} else {
		rt = ip_route_output_key(net, &fl4);

		err = 0;
		if (IS_ERR(rt))
			err = PTR_ERR(rt);
	}

	if (err)
		goto errout_free;

	skb_dst_set(skb, &rt->dst);
	if (rtm->rtm_flags & RTM_F_NOTIFY)
		rt->rt_flags |= RTCF_NOTIFY;

	if (rtm->rtm_flags & RTM_F_LOOKUP_TABLE)
		table_id = rt->rt_table_id;

	err = rt_fill_info(net, dst, src, table_id, &fl4, skb,
			   NETLINK_CB(in_skb).portid, nlh->nlmsg_seq,
			   RTM_NEWROUTE, 0, 0);
	if (err < 0)
		goto errout_free;

	err = rtnl_unicast(skb, net, NETLINK_CB(in_skb).portid);
errout:
	return err;

errout_free:
	kfree_skb(skb);
	goto errout;
}
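#if 0
/* Hedged userspace sketch (not compiled): issuing the RTM_GETROUTE
 * request that inet_rtm_getroute() above services.  This is the raw
 * rtnetlink equivalent of `ip route get <addr>`; the function name
 * is hypothetical and error handling is minimal.
 */
#include <linux/netlink.h>
#include <linux/rtnetlink.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int example_route_get(__be32 daddr)
{
	struct {
		struct nlmsghdr	nlh;
		struct rtmsg	rtm;
		char		buf[64];	/* room for RTA_DST */
	} req;
	struct rtattr *rta;
	char reply[4096];
	int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);

	if (fd < 0)
		return -1;

	memset(&req, 0, sizeof(req));
	req.nlh.nlmsg_len	= NLMSG_LENGTH(sizeof(struct rtmsg));
	req.nlh.nlmsg_type	= RTM_GETROUTE;
	req.nlh.nlmsg_flags	= NLM_F_REQUEST;
	req.rtm.rtm_family	= AF_INET;

	/* append RTA_DST; the handler reads it via nla_get_in_addr() */
	rta = (struct rtattr *)((char *)&req + NLMSG_ALIGN(req.nlh.nlmsg_len));
	rta->rta_type = RTA_DST;
	rta->rta_len  = RTA_LENGTH(sizeof(daddr));
	memcpy(RTA_DATA(rta), &daddr, sizeof(daddr));
	req.nlh.nlmsg_len = NLMSG_ALIGN(req.nlh.nlmsg_len) + rta->rta_len;

	if (send(fd, &req, req.nlh.nlmsg_len, 0) < 0 ||
	    recv(fd, reply, sizeof(reply), 0) < 0) {
		close(fd);
		return -1;
	}
	/* reply now holds the RTM_NEWROUTE built by rt_fill_info() */
	close(fd);
	return 0;
}
#endif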
void ip_rt_multicast_event(struct in_device *in_dev)
{
	rt_cache_flush(dev_net(in_dev->dev));
}

#ifdef CONFIG_SYSCTL
static int ip_rt_gc_timeout __read_mostly	= RT_GC_TIMEOUT;
static int ip_rt_gc_interval __read_mostly	= 60 * HZ;
static int ip_rt_gc_min_interval __read_mostly	= HZ / 2;
static int ip_rt_gc_elasticity __read_mostly	= 8;
static int ipv4_sysctl_rtcache_flush(struct ctl_table *__ctl, int write,
				     void __user *buffer,
				     size_t *lenp, loff_t *ppos)
{
	struct net *net = (struct net *)__ctl->extra1;

	if (write) {
		rt_cache_flush(net);
		fnhe_genid_bump(net);
		return 0;
	}

	return -EINVAL;
}
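#if 0
/* Hedged userspace sketch (not compiled): any write to the flush
 * file triggers the handler above, equivalent to
 * `echo 1 > /proc/sys/net/ipv4/route/flush`.  The function name is
 * hypothetical.
 */
#include <fcntl.h>
#include <unistd.h>

static int example_flush_route_cache(void)
{
	int fd = open("/proc/sys/net/ipv4/route/flush", O_WRONLY);

	if (fd < 0)
		return -1;
	if (write(fd, "1\n", 2) != 2) {
		close(fd);
		return -1;
	}
	close(fd);
	return 0;
}
#endif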
static struct ctl_table ipv4_route_table[] = {
	{
		.procname	= "gc_thresh",
		.data		= &ipv4_dst_ops.gc_thresh,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
	{
		.procname	= "max_size",
		.data		= &ip_rt_max_size,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
	{
		/* Deprecated. Use gc_min_interval_ms */
		.procname	= "gc_min_interval",
		.data		= &ip_rt_gc_min_interval,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_jiffies,
	},
	{
		.procname	= "gc_min_interval_ms",
		.data		= &ip_rt_gc_min_interval,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_ms_jiffies,
	},
	{
		.procname	= "gc_timeout",
		.data		= &ip_rt_gc_timeout,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_jiffies,
	},
	{
		.procname	= "gc_interval",
		.data		= &ip_rt_gc_interval,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_jiffies,
	},
	{
		.procname	= "redirect_load",
		.data		= &ip_rt_redirect_load,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
	{
		.procname	= "redirect_number",
		.data		= &ip_rt_redirect_number,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
	{
		.procname	= "redirect_silence",
		.data		= &ip_rt_redirect_silence,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
	{
		.procname	= "error_cost",
		.data		= &ip_rt_error_cost,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
	{
		.procname	= "error_burst",
		.data		= &ip_rt_error_burst,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
	{
		.procname	= "gc_elasticity",
		.data		= &ip_rt_gc_elasticity,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
	{
		.procname	= "mtu_expires",
		.data		= &ip_rt_mtu_expires,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_jiffies,
	},
	{
		.procname	= "min_pmtu",
		.data		= &ip_rt_min_pmtu,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
	{
		.procname	= "min_adv_mss",
		.data		= &ip_rt_min_advmss,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
	{ }
};

static struct ctl_table ipv4_route_flush_table[] = {
	{
		.procname	= "flush",
		.maxlen		= sizeof(int),
		.mode		= 0200,
		.proc_handler	= ipv4_sysctl_rtcache_flush,
	},
	{ },
};
static __net_init int sysctl_route_net_init(struct net *net)
{
	struct ctl_table *tbl;

	tbl = ipv4_route_flush_table;
	if (!net_eq(net, &init_net)) {
		tbl = kmemdup(tbl, sizeof(ipv4_route_flush_table), GFP_KERNEL);
		if (!tbl)
			goto err_dup;

		/* Don't export sysctls to unprivileged users */
		if (net->user_ns != &init_user_ns)
			tbl[0].procname = NULL;
	}
	tbl[0].extra1 = net;

	net->ipv4.route_hdr = register_net_sysctl(net, "net/ipv4/route", tbl);
	if (!net->ipv4.route_hdr)
		goto err_reg;
	return 0;

err_reg:
	if (tbl != ipv4_route_flush_table)
		kfree(tbl);
err_dup:
	return -ENOMEM;
}

static __net_exit void sysctl_route_net_exit(struct net *net)
{
	struct ctl_table *tbl;

	tbl = net->ipv4.route_hdr->ctl_table_arg;
	unregister_net_sysctl_table(net->ipv4.route_hdr);
	BUG_ON(tbl == ipv4_route_flush_table);
	kfree(tbl);
}

static __net_initdata struct pernet_operations sysctl_route_ops = {
	.init = sysctl_route_net_init,
	.exit = sysctl_route_net_exit,
};
#endif
static __net_init int rt_genid_init(struct net *net)
{
	atomic_set(&net->ipv4.rt_genid, 0);
	atomic_set(&net->fnhe_genid, 0);
	get_random_bytes(&net->ipv4.dev_addr_genid,
			 sizeof(net->ipv4.dev_addr_genid));
	return 0;
}

static __net_initdata struct pernet_operations rt_genid_ops = {
	.init = rt_genid_init,
};

static int __net_init ipv4_inetpeer_init(struct net *net)
{
	struct inet_peer_base *bp = kmalloc(sizeof(*bp), GFP_KERNEL);

	if (!bp)
		return -ENOMEM;
	inet_peer_base_init(bp);
	net->ipv4.peers = bp;
	return 0;
}

static void __net_exit ipv4_inetpeer_exit(struct net *net)
{
	struct inet_peer_base *bp = net->ipv4.peers;

	net->ipv4.peers = NULL;
	inetpeer_invalidate_tree(bp);
	kfree(bp);
}

static __net_initdata struct pernet_operations ipv4_inetpeer_ops = {
	.init	= ipv4_inetpeer_init,
	.exit	= ipv4_inetpeer_exit,
};

#ifdef CONFIG_IP_ROUTE_CLASSID
struct ip_rt_acct __percpu *ip_rt_acct __read_mostly;
#endif /* CONFIG_IP_ROUTE_CLASSID */
int __init ip_rt_init(void)
{
	int rc = 0;
	int cpu;

	/* Ideally IP IDs would come from a per-destination generator, but
	 * the old inet_peer-based scheme was too expensive on servers with
	 * MTU discovery disabled: large structs, a deep lookup tree that
	 * touches many cache lines, and aggressive garbage collection.
	 * ID generation does not have to be perfect; it only needs to
	 * avoid duplicates over the short window in which fragments of
	 * one message are being reassembled.  So we simply use an array
	 * of generators indexed by a hash of the destination address.
	 */
	ip_idents = kmalloc(IP_IDENTS_SZ * sizeof(*ip_idents), GFP_KERNEL);
	if (!ip_idents)
		panic("IP: failed to allocate ip_idents\n");

	prandom_bytes(ip_idents, IP_IDENTS_SZ * sizeof(*ip_idents));

	ip_tstamps = kcalloc(IP_IDENTS_SZ, sizeof(*ip_tstamps), GFP_KERNEL);
	if (!ip_tstamps)
		panic("IP: failed to allocate ip_tstamps\n");

	for_each_possible_cpu(cpu) {
		struct uncached_list *ul = &per_cpu(rt_uncached_list, cpu);

		INIT_LIST_HEAD(&ul->head);
		spin_lock_init(&ul->lock);
	}
#ifdef CONFIG_IP_ROUTE_CLASSID
	ip_rt_acct = __alloc_percpu(256 * sizeof(struct ip_rt_acct), __alignof__(struct ip_rt_acct));
	if (!ip_rt_acct)
		panic("IP: failed to allocate ip_rt_acct\n");
#endif

	ipv4_dst_ops.kmem_cachep =
		kmem_cache_create("ip_dst_cache", sizeof(struct rtable), 0,
				  SLAB_HWCACHE_ALIGN|SLAB_PANIC, NULL);

	ipv4_dst_blackhole_ops.kmem_cachep = ipv4_dst_ops.kmem_cachep;

	if (dst_entries_init(&ipv4_dst_ops) < 0)
		panic("IP: failed to allocate ipv4_dst_ops counter\n");

	if (dst_entries_init(&ipv4_dst_blackhole_ops) < 0)
		panic("IP: failed to allocate ipv4_dst_blackhole_ops counter\n");

	ipv4_dst_ops.gc_thresh = ~0;
	ip_rt_max_size = INT_MAX;

	devinet_init();
	ip_fib_init();

	if (ip_rt_proc_init())
		pr_err("Unable to create route proc files\n");
#ifdef CONFIG_XFRM
	xfrm_init();
	xfrm4_init();
#endif
	rtnl_register(PF_INET, RTM_GETROUTE, inet_rtm_getroute, NULL, NULL);

#ifdef CONFIG_SYSCTL
	register_pernet_subsys(&sysctl_route_ops);
#endif
	register_pernet_subsys(&rt_genid_ops);
	register_pernet_subsys(&ipv4_inetpeer_ops);
	return rc;
}
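#if 0
/* Hedged sketch (not compiled) of the ID-generator scheme described
 * in the comment inside ip_rt_init() above.  The indexing mirrors
 * what ip_idents_reserve() does earlier in this file, but
 * example_hashrnd is a stand-in for the real per-boot random seed
 * and the timestamp-based ageing via ip_tstamps is omitted.
 */
static u32 example_hashrnd;	/* seeded once, e.g. via get_random_bytes() */

static u32 example_select_ident(__be32 daddr, int segs)
{
	u32 hash = jhash_1word((__force u32)daddr, example_hashrnd);
	atomic_t *id = ip_idents + hash % IP_IDENTS_SZ;

	/* grab `segs` consecutive IDs; return the first of the batch */
	return atomic_add_return(segs, id) - segs;
}
#endif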
#ifdef CONFIG_SYSCTL
/*
 * We really need to sanitize the damn ipv4 init order, then all
 * this nonsense will go away.
 */
void __init ip_static_sysctl_init(void)
{
	register_net_sysctl(&init_net, "net/ipv4/route", ipv4_route_table);
}
#endif