/*
 * Bond several ethernet interfaces into a Cisco, running 'Etherchannel'.
 *
 * Portions are (c) Copyright 1995 Simon "Guru Aleph-Null" Janes
 * NCM: Network and Communications Management, Inc.
 *
 * BUT, I'm the one who modified it for ethernet, so:
 * (c) Copyright 1999, Thomas Davis, tadavis@lbl.gov
 *
 * This software may be used and distributed according to the terms
 * of the GNU Public License, incorporated herein by reference.
 *
 */
#ifndef _LINUX_BONDING_H
#define _LINUX_BONDING_H

#include <linux/timer.h>
#include <linux/proc_fs.h>
#include <linux/if_bonding.h>
#include <linux/cpumask.h>
#include <linux/in6.h>
#include <linux/netpoll.h>
#include <linux/inetdevice.h>
#include <linux/etherdevice.h>

#include "bond_3ad.h"
#include "bond_alb.h"
#define DRV_VERSION	"3.7.1"
#define DRV_RELDATE	"April 27, 2011"
#define DRV_NAME	"bonding"
#define DRV_DESCRIPTION	"Ethernet Channel Bonding Driver"

#define bond_version DRV_DESCRIPTION ": v" DRV_VERSION " (" DRV_RELDATE ")\n"

#define BOND_MAX_ARP_TARGETS	16
#define BOND_DEFAULT_MIIMON	100

#define IS_UP(dev)					   \
	((((dev)->flags & IFF_UP) == IFF_UP)	&&	   \
	 netif_running(dev)			&&	   \
	 netif_carrier_ok(dev))

/*
 * Checks whether slave is ready for transmit.
 */
#define SLAVE_IS_OK(slave)					\
	(((slave)->dev->flags & IFF_UP)		&&		\
	 netif_running((slave)->dev)		&&		\
	 ((slave)->link == BOND_LINK_UP)	&&		\
	 bond_is_active_slave(slave))

#define USES_PRIMARY(mode)				\
		(((mode) == BOND_MODE_ACTIVEBACKUP) ||	\
		 ((mode) == BOND_MODE_TLB)	    ||	\
		 ((mode) == BOND_MODE_ALB))
#define BOND_NO_USES_ARP(mode)				\
		(((mode) == BOND_MODE_8023AD)	||	\
		 ((mode) == BOND_MODE_TLB)	||	\
		 ((mode) == BOND_MODE_ALB))

#define TX_QUEUE_OVERRIDE(mode)				\
		(((mode) == BOND_MODE_ACTIVEBACKUP) ||	\
		 ((mode) == BOND_MODE_ROUNDROBIN))

#define BOND_MODE_IS_LB(mode)			\
		(((mode) == BOND_MODE_TLB) ||	\
		 ((mode) == BOND_MODE_ALB))

#define IS_IP_TARGET_UNUSABLE_ADDRESS(a)	\
	((htonl(INADDR_BROADCAST) == a) ||	\
	 ipv4_is_zeronet(a))
/*
 * Less bad way to call ioctl from within the kernel; this needs to be
 * done some other way to get the call out of interrupt context.
 * Needs "ioctl" variable to be supplied by calling context.
 */
#define IOCTL(dev, arg, cmd) ({		\
	int res = 0;			\
	mm_segment_t fs = get_fs();	\
	set_fs(get_ds());		\
	res = ioctl(dev, arg, cmd);	\
	set_fs(fs);			\
	res; })
/* slave list primitives */
#define bond_slave_list(bond) (&(bond)->dev->adj_list.lower)

#define bond_has_slaves(bond) !list_empty(bond_slave_list(bond))

/* IMPORTANT: bond_first/last_slave can return NULL in case of an empty list */
#define bond_first_slave(bond) \
	(bond_has_slaves(bond) ? \
		netdev_adjacent_get_private(bond_slave_list(bond)->next) : \
		NULL)
#define bond_last_slave(bond) \
	(bond_has_slaves(bond) ? \
		netdev_adjacent_get_private(bond_slave_list(bond)->prev) : \
		NULL)

/* Caller must have rcu_read_lock */
#define bond_first_slave_rcu(bond) \
	netdev_lower_get_first_private_rcu(bond->dev)

#define bond_is_first_slave(bond, pos) (pos == bond_first_slave(bond))
#define bond_is_last_slave(bond, pos) (pos == bond_last_slave(bond))
/**
 * bond_for_each_slave - iterate over all slaves
 * @bond:	the bond holding this list
 * @pos:	current slave
 * @iter:	list_head * iterator
 *
 * Caller must hold bond->lock
 */
#define bond_for_each_slave(bond, pos, iter) \
	netdev_for_each_lower_private((bond)->dev, pos, iter)
/* Caller must have rcu_read_lock */
#define bond_for_each_slave_rcu(bond, pos, iter) \
	netdev_for_each_lower_private_rcu((bond)->dev, pos, iter)
#ifdef CONFIG_NET_POLL_CONTROLLER
extern atomic_t netpoll_block_tx;
static inline void block_netpoll_tx(void)
{
	atomic_inc(&netpoll_block_tx);
}
static inline void unblock_netpoll_tx(void)
{
	atomic_dec(&netpoll_block_tx);
}
static inline int is_netpoll_tx_blocked(struct net_device *dev)
{
	if (unlikely(netpoll_tx_running(dev)))
		return atomic_read(&netpoll_block_tx);
	return 0;
}
#else
#define block_netpoll_tx()
#define unblock_netpoll_tx()
#define is_netpoll_tx_blocked(dev) (0)
#endif
struct bond_params {
	int mode;
	int xmit_policy;
	int miimon;
	u8 num_peer_notif;
	int arp_interval;
	int arp_validate;
	int arp_all_targets;
	int use_carrier;
	int fail_over_mac;
	int updelay;
	int downdelay;
	int lacp_fast;
	unsigned int min_links;
	int ad_select;
	char primary[IFNAMSIZ];
	int primary_reselect;
	__be32 arp_targets[BOND_MAX_ARP_TARGETS];
	int tx_queues;
	int all_slaves_active;
	int resend_igmp;
	int lp_interval;
	int packets_per_slave;
};
struct bond_parm_tbl {
	char *modename;
	int mode;
};

#define BOND_MAX_MODENAME_LEN 20
struct slave {
	struct net_device *dev; /* first - useful for panic debug */
	struct bonding *bond; /* our master */
	int    delay;
	unsigned long jiffies;
	unsigned long last_arp_rx;
	unsigned long target_last_arp_rx[BOND_MAX_ARP_TARGETS];
	s8     link;    /* one of BOND_LINK_XXXX */
	s8     new_link;
	u8     backup:1,   /* indicates backup slave. Value corresponds with
			      BOND_STATE_ACTIVE and BOND_STATE_BACKUP */
	       inactive:1; /* indicates inactive slave */
	u8     duplex;
	u32    original_mtu;
	u32    link_failure_count;
	u32    speed;
	u16    queue_id;
	u8     perm_hwaddr[ETH_ALEN];
	struct ad_slave_info ad_info; /* HUGE - better to dynamically alloc */
	struct tlb_slave_info tlb_info;
#ifdef CONFIG_NET_POLL_CONTROLLER
	struct netpoll *np;
#endif
};
/*
 * Link pseudo-state only used internally by monitors
 */
#define BOND_LINK_NOCHANGE -1
/*
 * Here are the locking policies for the two bonding locks:
 *
 * 1) Get bond->lock when reading/writing slave list.
 * 2) Get bond->curr_slave_lock when reading/writing bond->curr_active_slave.
 *    (It is unnecessary when the write-lock is put with bond->lock.)
 * 3) When we lock with bond->curr_slave_lock, we must lock with bond->lock
 *    beforehand.
 */
struct bonding {
	struct net_device *dev; /* first - useful for panic debug */
	struct slave *curr_active_slave;
	struct slave *current_arp_slave;
	struct slave *primary_slave;
	bool   force_primary;
	s32    slave_cnt; /* never change this value outside the attach/detach wrappers */
	int    (*recv_probe)(const struct sk_buff *, struct bonding *,
			     struct slave *);
	rwlock_t lock;
	rwlock_t curr_slave_lock;
	u8     send_peer_notif;
	u8     igmp_retrans;
#ifdef CONFIG_PROC_FS
	struct proc_dir_entry *proc_entry;
	char   proc_file_name[IFNAMSIZ];
#endif /* CONFIG_PROC_FS */
	struct list_head bond_list;
	u32    rr_tx_counter;
	struct ad_bond_info ad_info;
	struct alb_bond_info alb_info;
	struct bond_params params;
	struct workqueue_struct *wq;
	struct delayed_work mii_work;
	struct delayed_work arp_work;
	struct delayed_work alb_work;
	struct delayed_work ad_work;
	struct delayed_work mcast_work;
#ifdef CONFIG_DEBUG_FS
	/* debugging support via debugfs */
	struct dentry *debug_dir;
#endif /* CONFIG_DEBUG_FS */
};
#define bond_slave_get_rcu(dev) \
	((struct slave *) rcu_dereference(dev->rx_handler_data))

#define bond_slave_get_rtnl(dev) \
	((struct slave *) rtnl_dereference(dev->rx_handler_data))
/**
 * Returns NULL if the net_device does not belong to any of the bond's slaves
 *
 * Caller must hold bond lock for read
 */
static inline struct slave *bond_get_slave_by_dev(struct bonding *bond,
						  struct net_device *slave_dev)
{
	return netdev_lower_dev_get_private(bond->dev, slave_dev);
}
static inline struct bonding *bond_get_bond_by_slave(struct slave *slave)
{
	if (!slave || !slave->bond)
		return NULL;

	return slave->bond;
}
static inline bool bond_is_lb(const struct bonding *bond)
{
	return BOND_MODE_IS_LB(bond->params.mode);
}

static inline void bond_set_active_slave(struct slave *slave)
{
	slave->backup = 0;
}

static inline void bond_set_backup_slave(struct slave *slave)
{
	slave->backup = 1;
}

static inline int bond_slave_state(struct slave *slave)
{
	return slave->backup;
}

static inline bool bond_is_active_slave(struct slave *slave)
{
	return !bond_slave_state(slave);
}

#define BOND_PRI_RESELECT_ALWAYS	0
#define BOND_PRI_RESELECT_BETTER	1
#define BOND_PRI_RESELECT_FAILURE	2

#define BOND_FOM_NONE			0
#define BOND_FOM_ACTIVE			1
#define BOND_FOM_FOLLOW			2

#define BOND_ARP_TARGETS_ANY		0
#define BOND_ARP_TARGETS_ALL		1

#define BOND_ARP_VALIDATE_NONE		0
#define BOND_ARP_VALIDATE_ACTIVE	(1 << BOND_STATE_ACTIVE)
#define BOND_ARP_VALIDATE_BACKUP	(1 << BOND_STATE_BACKUP)
#define BOND_ARP_VALIDATE_ALL		(BOND_ARP_VALIDATE_ACTIVE | \
					 BOND_ARP_VALIDATE_BACKUP)

static inline int slave_do_arp_validate(struct bonding *bond,
					struct slave *slave)
{
	return bond->params.arp_validate & (1 << bond_slave_state(slave));
}
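
The check above treats `arp_validate` as a bitmask indexed by slave state, so validation can be enabled independently for active and backup slaves. A minimal userspace sketch of that semantics (constants and names here are illustrative, not the kernel's):

```c
#include <assert.h>

/* Slave states mirror BOND_STATE_ACTIVE/BOND_STATE_BACKUP (assumed 0/1). */
#define STATE_ACTIVE	0
#define STATE_BACKUP	1
#define VALIDATE_ACTIVE	(1 << STATE_ACTIVE)
#define VALIDATE_BACKUP	(1 << STATE_BACKUP)
#define VALIDATE_ALL	(VALIDATE_ACTIVE | VALIDATE_BACKUP)

/* Returns nonzero when validation is enabled for this slave's state. */
static int do_arp_validate(int arp_validate, int slave_state)
{
	return arp_validate & (1 << slave_state);
}
```

With `VALIDATE_ACTIVE` set, only the active slave's ARP replies are validated; backup slaves fall back to the plain `last_rx` path.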

/* Get the oldest arp which we've received on this slave for bond's
 * arp_targets.
 */
static inline unsigned long slave_oldest_target_arp_rx(struct bonding *bond,
						       struct slave *slave)
{
	int i = 1;
	unsigned long ret = slave->target_last_arp_rx[0];

	for (; (i < BOND_MAX_ARP_TARGETS) && bond->params.arp_targets[i]; i++)
		if (time_before(slave->target_last_arp_rx[i], ret))
			ret = slave->target_last_arp_rx[i];

	return ret;
}
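
The scan above walks the zero-terminated target list and keeps the oldest per-target timestamp, using the kernel's wrap-safe `time_before()` comparison. A standalone sketch under those assumptions (`time_before` modeled as the kernel's signed-difference idiom; array sizes and names are illustrative):

```c
#include <assert.h>

#define MAX_TARGETS	16
/* Wrap-safe "a is earlier than b" comparison for jiffies-like counters. */
#define time_before(a, b)	((long)((a) - (b)) < 0)

/* Return the oldest timestamp among configured (nonzero) targets.
 * Index 0 is always considered; the list is zero-terminated. */
static unsigned long oldest_rx(const unsigned long *last_rx,
			       const unsigned int *targets)
{
	unsigned long ret = last_rx[0];
	int i;

	for (i = 1; i < MAX_TARGETS && targets[i]; i++)
		if (time_before(last_rx[i], ret))
			ret = last_rx[i];

	return ret;
}
```

The signed-difference trick is what keeps the comparison correct across jiffies wraparound, which a plain `<` on unsigned values would get wrong.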

static inline unsigned long slave_last_rx(struct bonding *bond,
					  struct slave *slave)
{
	if (slave_do_arp_validate(bond, slave)) {
		if (bond->params.arp_all_targets == BOND_ARP_TARGETS_ALL)
			return slave_oldest_target_arp_rx(bond, slave);
		else
			return slave->last_arp_rx;
	}

	return slave->dev->last_rx;
}

#ifdef CONFIG_NET_POLL_CONTROLLER
static inline void bond_netpoll_send_skb(const struct slave *slave,
					 struct sk_buff *skb)
{
	struct netpoll *np = slave->np;

	if (np)
		netpoll_send_skb(np, skb);
}
#else
static inline void bond_netpoll_send_skb(const struct slave *slave,
					 struct sk_buff *skb)
{
}
#endif

static inline void bond_set_slave_inactive_flags(struct slave *slave)
{
	if (!bond_is_lb(slave->bond))
		bond_set_backup_slave(slave);
	if (!slave->bond->params.all_slaves_active)
		slave->inactive = 1;
}

static inline void bond_set_slave_active_flags(struct slave *slave)
{
	bond_set_active_slave(slave);
	slave->inactive = 0;
}

static inline bool bond_is_slave_inactive(struct slave *slave)
{
	return slave->inactive;
}

static inline __be32 bond_confirm_addr(struct net_device *dev, __be32 dst, __be32 local)
{
	struct in_device *in_dev;
	__be32 addr = 0;

	rcu_read_lock();
	in_dev = __in_dev_get_rcu(dev);

	if (in_dev)
		addr = inet_confirm_addr(dev_net(dev), in_dev, dst, local,
					 RT_SCOPE_HOST);
	rcu_read_unlock();
	return addr;
}

static inline bool slave_can_tx(struct slave *slave)
{
	return IS_UP(slave->dev) && slave->link == BOND_LINK_UP &&
	       bond_is_active_slave(slave);
}

struct bond_net;

int bond_arp_rcv(const struct sk_buff *skb, struct bonding *bond, struct slave *slave);
int bond_dev_queue_xmit(struct bonding *bond, struct sk_buff *skb, struct net_device *slave_dev);
int bond_create(struct net *net, const char *name);
int bond_create_sysfs(struct bond_net *net);
void bond_destroy_sysfs(struct bond_net *net);
void bond_prepare_sysfs_group(struct bonding *bond);
int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev);
int bond_release(struct net_device *bond_dev, struct net_device *slave_dev);
void bond_mii_monitor(struct work_struct *);
void bond_loadbalance_arp_mon(struct work_struct *);
void bond_activebackup_arp_mon(struct work_struct *);
int bond_xmit_hash(struct bonding *bond, struct sk_buff *skb, int count);
int bond_parse_parm(const char *mode_arg, const struct bond_parm_tbl *tbl);
void bond_select_active_slave(struct bonding *bond);
void bond_change_active_slave(struct bonding *bond, struct slave *new_active);
void bond_create_debugfs(void);
void bond_destroy_debugfs(void);
void bond_debug_register(struct bonding *bond);
void bond_debug_unregister(struct bonding *bond);
void bond_debug_reregister(struct bonding *bond);
const char *bond_mode_name(int mode);
void bond_setup(struct net_device *bond_dev);
unsigned int bond_get_num_tx_queues(void);
int bond_netlink_init(void);
void bond_netlink_fini(void);
int bond_option_mode_set(struct bonding *bond, int mode);
int bond_option_active_slave_set(struct bonding *bond, struct net_device *slave_dev);
int bond_option_miimon_set(struct bonding *bond, int miimon);
int bond_option_updelay_set(struct bonding *bond, int updelay);
int bond_option_downdelay_set(struct bonding *bond, int downdelay);
int bond_option_use_carrier_set(struct bonding *bond, int use_carrier);
int bond_option_arp_interval_set(struct bonding *bond, int arp_interval);
int bond_option_arp_ip_targets_set(struct bonding *bond, __be32 *targets,
				   int count);
int bond_option_arp_ip_target_add(struct bonding *bond, __be32 target);
int bond_option_arp_ip_target_rem(struct bonding *bond, __be32 target);
int bond_option_arp_validate_set(struct bonding *bond, int arp_validate);
int bond_option_arp_all_targets_set(struct bonding *bond, int arp_all_targets);
int bond_option_primary_set(struct bonding *bond, const char *primary);
int bond_option_primary_reselect_set(struct bonding *bond,
				     int primary_reselect);
int bond_option_fail_over_mac_set(struct bonding *bond, int fail_over_mac);
int bond_option_xmit_hash_policy_set(struct bonding *bond,
				     int xmit_hash_policy);
int bond_option_resend_igmp_set(struct bonding *bond, int resend_igmp);
int bond_option_num_peer_notif_set(struct bonding *bond, int num_peer_notif);
int bond_option_all_slaves_active_set(struct bonding *bond,
				      int all_slaves_active);
int bond_option_min_links_set(struct bonding *bond, int min_links);
int bond_option_lp_interval_set(struct bonding *bond, int lp_interval);
int bond_option_packets_per_slave_set(struct bonding *bond,
				      int packets_per_slave);
struct net_device *bond_option_active_slave_get_rcu(struct bonding *bond);
struct net_device *bond_option_active_slave_get(struct bonding *bond);
struct bond_net {
	struct net		*net;	/* Associated network namespace */
	struct list_head	dev_list;
#ifdef CONFIG_PROC_FS
	struct proc_dir_entry	*proc_dir;
#endif
	struct class_attribute	class_attr_bonding_masters;
};

#ifdef CONFIG_PROC_FS
void bond_create_proc_entry(struct bonding *bond);
void bond_remove_proc_entry(struct bonding *bond);
void bond_create_proc_dir(struct bond_net *bn);
void bond_destroy_proc_dir(struct bond_net *bn);
#else
static inline void bond_create_proc_entry(struct bonding *bond)
{
}

static inline void bond_remove_proc_entry(struct bonding *bond)
{
}

static inline void bond_create_proc_dir(struct bond_net *bn)
{
}

static inline void bond_destroy_proc_dir(struct bond_net *bn)
{
}
#endif

static inline struct slave *bond_slave_has_mac(struct bonding *bond,
					       const u8 *mac)
{
	struct list_head *iter;
	struct slave *tmp;

	bond_for_each_slave(bond, tmp, iter)
		if (ether_addr_equal_64bits(mac, tmp->dev->dev_addr))
			return tmp;

	return NULL;
}
/* Caller must hold rcu_read_lock() for read */
static inline struct slave *bond_slave_has_mac_rcu(struct bonding *bond,
						   const u8 *mac)
{
	struct list_head *iter;
	struct slave *tmp;

	bond_for_each_slave_rcu(bond, tmp, iter)
		if (ether_addr_equal_64bits(mac, tmp->dev->dev_addr))
			return tmp;

	return NULL;
}

/* Check if the ip is present in arp ip list, or first free slot if ip == 0.
 * Returns -1 if not found, index if found.
 */
static inline int bond_get_targets_ip(__be32 *targets, __be32 ip)
{
	int i;

	for (i = 0; i < BOND_MAX_ARP_TARGETS; i++)
		if (targets[i] == ip)
			return i;
		else if (targets[i] == 0)
			break;

	return -1;
}
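
Because the target list is zero-terminated, the same loop doubles as a free-slot finder: passing `ip == 0` matches the terminator itself and returns the index of the first unused entry. A userspace sketch of that dual behavior (names and sizes here are illustrative, not the kernel's):

```c
#include <assert.h>

#define MAX_ARP_TARGETS	16

/* Return the index of ip in targets[], the index of the first free
 * slot when ip == 0, or -1 when ip is absent / the list is full. */
static int get_targets_ip(const unsigned int *targets, unsigned int ip)
{
	int i;

	for (i = 0; i < MAX_ARP_TARGETS; i++)
		if (targets[i] == ip)
			return i;	/* found (or first free slot if ip == 0) */
		else if (targets[i] == 0)
			break;		/* hit the terminator: ip not present */

	return -1;
}
```

This is why bonding's sysfs code can use one helper both to reject duplicate `arp_ip_target` entries and to find where to store a new one.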

/* exported from bond_main.c */
extern int bond_net_id;
extern const struct bond_parm_tbl bond_lacp_tbl[];
extern const struct bond_parm_tbl bond_mode_tbl[];
extern const struct bond_parm_tbl xmit_hashtype_tbl[];
extern const struct bond_parm_tbl arp_validate_tbl[];
extern const struct bond_parm_tbl arp_all_targets_tbl[];
extern const struct bond_parm_tbl fail_over_mac_tbl[];
extern const struct bond_parm_tbl pri_reselect_tbl[];
extern struct bond_parm_tbl ad_select_tbl[];

/* exported from bond_netlink.c */
extern struct rtnl_link_ops bond_link_ops;

#endif /* _LINUX_BONDING_H */