o By default, SRIOV PF will have promiscuous mode on.
Signed-off-by: Sucheta Chakraborty <sucheta.chakraborty@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
o On enabling VF flood bit, PF driver will be able to receive traffic
from all its VFs.
Signed-off-by: Sucheta Chakraborty <sucheta.chakraborty@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
o Adapter should allow VLAN traffic only for VLANs configured on a VF.
On configuring any VLAN mode from the VF, the adapter will allow any VLAN
traffic to pass for that VF. Do not allow the VF to configure this mode.
Signed-off-by: Sucheta Chakraborty <sucheta.chakraborty@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Currently, after changing the MTU for a device, dev_set_mtu() emits the
NETDEV_CHANGEMTU notification but doesn't verify its return code, which
can be NOTIFY_BAD (i.e. some of the net notifier blocks refused this
change), and continues nevertheless.
To fix this, verify the return code and, if it's an error, revert the
MTU to the original one, notify again and pass the error code.
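A minimal sketch of the described check (illustrative flow and function
name, not the actual dev_set_mtu() hunk; call_netdevice_notifiers() and
notifier_to_errno() are the real helpers):

#include <linux/netdevice.h>
#include <linux/notifier.h>

/* Sketch only: change the MTU, verify the notifier result and roll back
 * on error. Driver callbacks are omitted for brevity.
 */
static int change_mtu_checked(struct net_device *dev, int new_mtu)
{
	int orig_mtu = dev->mtu;
	int err;

	dev->mtu = new_mtu;
	err = notifier_to_errno(call_netdevice_notifiers(NETDEV_CHANGEMTU, dev));
	if (err) {
		/* a notifier block refused the change: restore and re-notify */
		dev->mtu = orig_mtu;
		call_netdevice_notifiers(NETDEV_CHANGEMTU, dev);
	}
	return err;
}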
CC: Jiri Pirko <jiri@resnulli.us>
CC: "David S. Miller" <davem@davemloft.net>
CC: Eric Dumazet <edumazet@google.com>
CC: Alexander Duyck <alexander.h.duyck@intel.com>
CC: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: Veaceslav Falico <vfalico@redhat.com>
Reviewed-by: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: David S. Miller <davem@davemloft.net>
Document how to use one AF_PACKET mmap socket for RX and TX.
Signed-off-by: Norbert van Bolhuis <nvbolhuis@aimvalley.nl>
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
10G PHYs don't currently support running the state machine, which
is implicitly set up via of_phy_connect(). Therefore, it is necessary
to implement an OF version of phy_attach(), which does everything
except start the state machine.
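A hedged sketch of what such an OF attach helper could look like (it is
not the actual of_phy_attach() implementation; it only illustrates the
"attach without starting the state machine" idea):

#include <linux/of_mdio.h>
#include <linux/phy.h>

/* Illustrative only: resolve the PHY from its device tree node and
 * attach it with phy_attach_direct(), which - unlike of_phy_connect() -
 * does not start the PHY state machine.
 */
static struct phy_device *of_attach_sketch(struct net_device *dev,
					   struct device_node *phy_np,
					   u32 flags, phy_interface_t iface)
{
	struct phy_device *phydev = of_phy_find_device(phy_np);

	if (!phydev)
		return NULL;

	if (phy_attach_direct(dev, phydev, flags, iface))
		return NULL;

	return phydev;
}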
Signed-off-by: Andy Fleming <afleming@gmail.com>
Signed-off-by: Shaohui Xie <Shaohui.Xie@freescale.com>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
phy_attach_direct() may now attach to a generic 10G driver. It can
also be used exactly as phy_connect_direct(), which will be useful
when using of_mdio, as phy_connect() (and therefore of_phy_connect())
starts the PHY state machine, which is currently irrelevant for 10G
PHYs.
Signed-off-by: Andy Fleming <afleming@gmail.com>
Signed-off-by: Shaohui Xie <Shaohui.Xie@freescale.com>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Very incomplete, but will allow for binding an ethernet controller
to it.
Signed-off-by: Andy Fleming <afleming@gmail.com>
Signed-off-by: Shaohui Xie <Shaohui.Xie@freescale.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Then other generic PHY drivers, such as the generic 10G PHY driver, can join it.
Signed-off-by: Shaohui Xie <Shaohui.Xie@freescale.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Andy Fleming <afleming@gmail.com>
Signed-off-by: Shaohui Xie <Shaohui.Xie@freescale.com>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Reading or writing Clause 45 PHYs needs an extra parameter, so a
different API with the extra parameter is needed.
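A hedged sketch of the idea (illustrative names and encoding, not the
API this patch actually adds): a Clause 45 access addresses a register
within an MMD (device address) on the PHY, so the accessor carries one
more parameter than the Clause 22 helpers.

/* Illustrative only: one common approach folds the device address and a
 * "this is Clause 45" flag into the regnum handed to the bus driver; the
 * constant below is made up for this sketch.
 */
#define SKETCH_ADDR_C45		(1 << 30)

static int c45_read_sketch(int (*bus_read)(int phy_addr, int regnum),
			   int phy_addr, int devad, int regnum)
{
	return bus_read(phy_addr,
			SKETCH_ADDR_C45 | (devad << 16) | (regnum & 0xffff));
}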
Signed-off-by: Andy Fleming <afleming@gmail.com>
Signed-off-by: Shaohui Xie <Shaohui.Xie@freescale.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Avoid needless export of local functions
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: James Chapman <jchapman@katalix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fix a whole lot of namespace issues with the Broadcom bnx2x driver
found by running 'make namespacecheck':
* global variables must be prefixed with bnx2x_
  (naming a variable int_mode or num_queue is an invitation to disaster)
* make local functions static
* move some inlines used in only one file out of the header
  (this driver has a bad case of inline-itis)
* remove the resulting dead-code fallout:
  bnx2x_pfc_statistic,
  bnx2x_emac_get_pfc_stat,
  bnx2x_init_vlan_mac_obj
Looks like VLAN MAC support in this driver was a botch from day one:
it either never worked, was not implemented, or is missing support
functions.
Compile tested only.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
When building with a 64-bit compiler, GCC warns:
drivers/net/ethernet/smsc/smc91x.c:1897:7:
warning: cast from pointer to integer of different
size [-Wpointer-to-int-cast]
This patch fixes this by changing the typecast from "unsigned int" to
"unsigned long".
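A minimal stand-alone illustration of the warning and the fix (not the
actual smc91x.c hunk):

#include <stdio.h>

int main(void)
{
	int value = 42;
	void *ptr = &value;

	/* unsigned int bad = (unsigned int)ptr;  <- warns on 64-bit builds,
	 * because the pointer is wider than the 32-bit integer it is cast to */
	unsigned long ok = (unsigned long)ptr;	/* same width as the pointer */

	printf("pointer as integer: %lx\n", ok);
	return 0;
}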
CC: "David S. Miller" <davem@davemloft.net>
CC: Jingoo Han <jg1.han@samsung.com>
CC: netdev@vger.kernel.org
Signed-off-by: Pankaj Dubey <pankaj.dubey@samsung.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Simplify the GRE header length calculation in gre_gso_segment().
Switch to an approach that is simpler, faster, and more general. The
new approach will continue to be correct even if we add support for
the optional variable-length routing info that may be present in a GRE
header.
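A hedged sketch of the general calculation (flag constants defined here
for illustration; this is not the kernel's gre_gso_segment() code): the
GRE header is 4 bytes plus 4 bytes for each optional field whose flag
bit is set.

#include <stdint.h>

#define SKETCH_GRE_CSUM	0x8000	/* C bit: checksum + reserved word */
#define SKETCH_GRE_KEY	0x2000	/* K bit: key */
#define SKETCH_GRE_SEQ	0x1000	/* S bit: sequence number */

static unsigned int gre_hdr_len_sketch(uint16_t flags)
{
	unsigned int len = 4;		/* mandatory flags + protocol */

	if (flags & SKETCH_GRE_CSUM)
		len += 4;
	if (flags & SKETCH_GRE_KEY)
		len += 4;
	if (flags & SKETCH_GRE_SEQ)
		len += 4;
	return len;
}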
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Cc: Eric Dumazet <edumazet@google.com>
Cc: H.K. Jerry Chu <hkchu@google.com>
Cc: Pravin B Shelar <pshelar@nicira.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
It is not necessary at all.
Cc: Jamal Hadi Salim <jhs@mojatatu.com>
Cc: David S. Miller <davem@davemloft.net>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: Jamal Hadi Salim <jhs@mojatatu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
tp->root is a void* pointer, no need to cast it.
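A tiny self-contained illustration (hypothetical struct, not the actual
classifier code): in C, a void * converts implicitly to any object
pointer type.

struct head_sketch { int refcnt; };

static struct head_sketch *get_head_sketch(void *root)
{
	return root;	/* void * converts implicitly: no cast needed */
}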
Cc: Jamal Hadi Salim <jhs@mojatatu.com>
Cc: David S. Miller <davem@davemloft.net>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Acked-by: Jamal Hadi Salim <jhs@mojatatu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
tcf_match_indev() is called in the fast path, so it is not wise to
search for a netdev by ifindex and then compare by its name; just
compare the ifindex.
Also, dev->name can be changed by user-space, in which case the match
would always fail, whereas dev->ifindex stays consistent.
BTW, this also saves some bytes in the core struct of u32.
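A hedged sketch of the fast-path comparison described above
(illustrative names, not the actual tcf_match_indev()): store the
ifindex at configuration time and compare integers instead of resolving
a device and comparing names.

static int match_indev_sketch(int skb_iif, int configured_ifindex)
{
	if (!configured_ifindex)
		return 1;			/* no indev filter: match */
	return skb_iif == configured_ifindex;	/* ifindex stays stable */
}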
Cc: Jamal Hadi Salim <jhs@mojatatu.com>
Cc: David S. Miller <davem@davemloft.net>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: Jamal Hadi Salim <jhs@mojatatu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
It will be needed by the next patch.
Cc: Jamal Hadi Salim <jhs@mojatatu.com>
Cc: David S. Miller <davem@davemloft.net>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: Jamal Hadi Salim <jhs@mojatatu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Refactor tcf_add_notify() and factor out tcf_del_notify().
Cc: Jamal Hadi Salim <jhs@mojatatu.com>
Cc: David S. Miller <davem@davemloft.net>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: Jamal Hadi Salim <jhs@mojatatu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
There is no need to store the index separately,
since tcf_hashinfo is allocated statically too.
Cc: Jamal Hadi Salim <jhs@mojatatu.com>
Cc: David S. Miller <davem@davemloft.net>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: Jamal Hadi Salim <jhs@mojatatu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The GRO layer has a limit of 8 flows being held in the GRO list,
for performance reasons.
When a packet comes in for a flow not yet in the list and the list is
full, we immediately give it to the upper stacks, lowering aggregation
performance.
With TSO auto sizing and the FQ packet scheduler, this situation
happens more often.
This patch changes the strategy to simply evict the oldest flow in the
list. This works better because of the nature of the packet trains for
which GRO is efficient. This also has the effect of lowering the GRO
latency if many flows are competing.
Tested:
Used a 40Gbps NIC with 4 RX queues and 200 concurrent TCP_STREAM
netperf instances.
Before the patch, the aggregate rate is 11Gbps (while a single flow can
reach 30Gbps).
After the patch, line rate is reached.
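A hedged sketch of the eviction strategy (illustrative structures, not
the actual GRO list code): when the list is already full, flush the
oldest flow to the stack instead of bypassing aggregation for the new
one.

#define SKETCH_MAX_GRO_FLOWS 8

struct flow_entry {
	struct flow_entry *next;
	/* per-flow aggregation state would live here */
};

struct gro_flow_list {
	struct flow_entry *oldest;	/* head: least recently added */
	struct flow_entry *newest;	/* tail: most recently added */
	unsigned int count;
};

static void gro_insert_flow_sketch(struct gro_flow_list *list,
				   struct flow_entry *flow,
				   void (*flush)(struct flow_entry *))
{
	if (list->count == SKETCH_MAX_GRO_FLOWS) {
		struct flow_entry *victim = list->oldest;

		list->oldest = victim->next;
		if (!list->oldest)
			list->newest = NULL;
		list->count--;
		flush(victim);		/* hand the oldest flow up the stack */
	}

	flow->next = NULL;
	if (list->newest)
		list->newest->next = flow;
	else
		list->oldest = flow;
	list->newest = flow;
	list->count++;
}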
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Jerry Chu <hkchu@google.com>
Cc: Neal Cardwell <ncardwell@google.com>
Acked-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
In order to use the native GRO handling of encapsulated protocols on
mlx4, we need to call napi_gro_receive() instead of netif_receive_skb(),
unless busy polling is in action.
While we are at it, rename mlx4_en_cq_ll_polling() to
mlx4_en_cq_busy_polling().
Tested with a GRE tunnel: GRO aggregation is now performed on the
ethernet device instead of being done later on the gre device.
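A hedged sketch of the RX delivery decision (illustrative wrapper, not
the mlx4 driver's actual code): use napi_gro_receive() so encapsulated
traffic gets native GRO handling, and fall back to netif_receive_skb()
only while busy polling owns the queue.

#include <linux/netdevice.h>

static void rx_deliver_sketch(struct napi_struct *napi, struct sk_buff *skb,
			      bool busy_polling)
{
	if (busy_polling)
		netif_receive_skb(skb);		/* busy-poll context: no GRO */
	else
		napi_gro_receive(napi, skb);	/* GRO on the physical device */
}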
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Amir Vadai <amirv@mellanox.com>
Cc: Jerry Chu <hkchu@google.com>
Cc: Or Gerlitz <ogerlitz@mellanox.com>
Acked-By: Amir Vadai <amirv@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fixes the following sparse warning:
net/ipv4/gre_offload.c:253:5: warning:
symbol 'gre_gro_complete' was not declared. Should it be static?
Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Signed-off-by: David S. Miller <davem@davemloft.net>
Hannes Frederic Sowa says:
====================
path mtu hardening patches
After a lot of back and forth I want to propose these changes regarding
path mtu hardening and give an outline of why I think this is the best
way to proceed:
This set contains the following patches:
* ipv4: introduce ip_dst_mtu_maybe_forward and protect forwarding path against pmtu spoofing
* ipv6: introduce ip6_dst_mtu_forward and protect forwarding path with it
* ipv4: introduce hardened ip_no_pmtu_disc mode
The first one switches the forwarding path of IPv4 to use the interface
mtu by default and to ignore a possibly discovered path mtu. It provides
a sysctl to switch back to the original behavior (see discussion below).
The second patch does the same thing unconditionally for IPv6. I don't
provide a knob for IPv6 to switch to the original behavior (please see
below).
The third patch introduces a hardened pmtu mode, where pmtu information
is only accepted if the protocol is able to do more stringent checks
on the ICMP piggyback payload (please see the patch commit message for
further details).
Why is this change necessary?
First of all, RFC 1191, section 4 (Router specification), says:
"When a router is unable to forward a datagram because it exceeds the
MTU of the next-hop network and its Don't Fragment bit is set, the
router is required to return an ICMP Destination Unreachable message
to the source of the datagram, with the Code indicating
"fragmentation needed and DF set". ..."
For some time now fragmentation has been considered problematic, e.g.:
* http://www.hpl.hp.com/techreports/Compaq-DEC/WRL-87-3.pdf
* http://tools.ietf.org/search/rfc4963
Most of them seem to agree that fragmentation should be avoided because
of efficiency, data corruption or security concerns.
Recently it was shown possible that correctly guessing IP ids could lead
to data injection on DNS packets:
<https://sites.google.com/site/hayashulman/files/fragmentation-poisoning.pdf>
While we can try to completely stop fragmentation on the end host
(this is e.g. implemented via IP_PMTUDISC_INTERFACE), we cannot stop
fragmentation completely on the forwarding path. On the end host the
application has to deal with MTUs and has to choose fallback methods
if fragmentation could be an attack vector. This is already the case for
most DNS software, where a maximum UDP packet size can be configured. But
until recently they had no control over local fragmentation and could
thus emit fragmented packets.
On the forwarding path we can just try to delay the fragmentation to
the last hop where it is really necessary. The current kernel already does
that, but only because routers don't receive feedback on path mtus; these
are only sent back to the end host system. But it is possible to maliciously
insert path mtu information via ICMP packets which have an icmp echo_reply
payload, because we cannot validate those notifications against local
sockets. DHCP clients which establish an any-bound RAW socket could also
start processing unwanted fragmentation-needed packets.
Why does IPv4 have a knob to revert to the old behavior while IPv6 doesn't?
IPv4 does fragmentation on the path while IPv6 always responds with
packet-too-big errors. The interface MTU will always be greater than
the path MTU information, so we would discard packets we could actually
forward because of malicious information. After this change we let
the hop which really could not forward the packet notify the host
of this problem.
IPv4 allows fragmentation mid-path. Someone might use software which
tries to discover such paths and assumes that the kernel handles the
discovered pmtu information automatically. This should be an extremely
rare case, but because I could not exclude the possibility, this knob is
provided. Such software could also insert non-locked mtu information
into the kernel, which we currently cannot distinguish from path mtu
information. Premature fragmentation could solve some problems in wrongly
configured networks, thus this switch is provided.
One frag-needed packet could reduce the path mtu down to 522 bytes
(route/min_pmtu).
Misc:
IPv6 neighbor discovery can advertise mtu information for an
interface. This information updates the ipv6-specific interface mtu and
thus gets used by the forwarding path.
Tunnel and xfrm output paths will still honour the path mtu and also respond
with Packet-too-Big or fragmentation-needed errors if needed.
Changelog for all patches:
v2)
* enabled ip_forward_use_pmtu by default
* reworded
v3)
* disabled ip_forward_use_pmtu by default
* reworded
v4)
* renamed ip_dst_mtu_secure to ip_dst_mtu_maybe_forward
* updated changelog accordingly
* removed unneeded !!(... & ...) double negations
v2)
* by default we honour pmtu information
v3)
* only honor interface mtu
* rewritten and simplified
* no knob to fall back to old mode any more
v2)
* reworded Documentation
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
This new ip_no_pmtu_disc mode only allows fragmentation-needed errors
to be honored by protocols which do more stringent validation on the
ICMP packet's payload. This knob is useful for people who e.g. want to
run an unmodified DNS server in a namespace where they need to use pmtu
for TCP connections (as they are used for zone transfers or as a fallback
for requests) but don't want to use possibly spoofed UDP pmtu information.
Currently the whitelisted protocols are TCP, SCTP and DCCP as they check
if the returned packet is in the window or if the association is valid.
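A hedged sketch of the decision this mode implements (boolean flags
stand in for the real sysctl values; this is not the kernel's actual
logic): in the hardened mode a fragmentation-needed error is only
honoured if the protocol can validate the embedded payload against its
own state.

/* Illustrative only: "hardened" represents the new ip_no_pmtu_disc
 * setting, and "proto_validates_payload" is true for protocols such as
 * TCP, SCTP and DCCP that check the returned packet against a window or
 * a valid association.
 */
static bool accept_frag_needed_sketch(bool hardened,
				      bool proto_validates_payload)
{
	if (hardened)
		return proto_validates_payload;
	return true;	/* default mode: honour path mtu information */
}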
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: David Miller <davem@davemloft.net>
Cc: John Heffner <johnwheffner@gmail.com>
Suggested-by: Florian Weimer <fweimer@redhat.com>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
In the IPv6 forwarding path we are only concerned about the outgoing
interface MTU, but we also respect locked MTUs on routes. Tunnel providers
or IPsec already have to recheck and, if needed, send PtB notifications
to the sending host in case the data does not fit into the packet with
the added headers (we only know the final header sizes there, while also
using path MTU information).
The reason for this change is that path MTU information can be injected
into the kernel via e.g. the icmp_err protocol handler without verification
against local sockets. As such, this could cause the IPv6 forwarding path to
wrongfully emit Packet-too-Big errors and drop IPv6 packets.
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: David Miller <davem@davemloft.net>
Cc: John Heffner <johnwheffner@gmail.com>
Cc: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
While forwarding we should not use the protocol path mtu to calculate
the mtu for a forwarded packet but instead use the interface mtu.
We mark forwarded skbs in ip_forward with IPSKB_FORWARDED, which was
introduced for multicast forwarding. But as it does not conflict with
our usage in the unicast code path, it is perfect for reuse.
I moved the functions ip_sk_accept_pmtu, ip_sk_use_pmtu and ip_skb_dst_mtu
along with the new ip_dst_mtu_maybe_forward to net/ip.h to fix circular
dependencies because of IPSKB_FORWARDED.
Because someone might have written software which probes destinations
manually and expects the kernel to honour those path mtus, I introduced
a new per-namespace "ip_forward_use_pmtu" knob so this new behaviour can
be disabled. We also still use mtus which are locked on a route for
forwarding.
The reason for this change is that path mtu information can be injected
into the kernel via e.g. the icmp_err protocol handler without verification
against local sockets. As such, this could cause the IPv4 forwarding path to
wrongfully emit fragmentation-needed notifications or start to fragment
packets along a path.
Tunnel and ipsec output paths clear IPCB again, thus IPSKB_FORWARDED
won't be set and further fragmentation logic will use the path mtu to
determine the fragmentation size. They also recheck the packet size with
the help of path mtu discovery and report appropriate errors.
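A hedged sketch of the MTU selection described above (illustrative
parameters, not the actual ip_dst_mtu_maybe_forward()): when forwarding,
use the outgoing interface MTU unless the new knob asks for path MTU or
the route's MTU is locked.

static unsigned int mtu_maybe_forward_sketch(unsigned int path_mtu,
					     unsigned int dev_mtu,
					     bool mtu_locked,
					     bool forwarding,
					     bool forward_use_pmtu)
{
	if (!forwarding || forward_use_pmtu || mtu_locked)
		return path_mtu;	/* honour (possibly learned) path mtu */
	return dev_mtu;			/* forwarding: outgoing interface mtu */
}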
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: David Miller <davem@davemloft.net>
Cc: John Heffner <johnwheffner@gmail.com>
Cc: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
This is to be compatible with the use of "get_time" (i.e. the default
time unit is us) in the iproute2 patch for HHF, as requested by Stephen.
Signed-off-by: Terry Lam <vtlam@google.com>
Acked-by: Nandita Dukkipati <nanditad@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
vmalloc is a limited resource. Don't use it unnecessarily.
It seems this allocation should work with kcalloc.
Remove the now-unnecessary memset(,0,) of buf, as it's completely
overwritten: the previously only unset field in
struct qlcnic_pci_func_cfg is now set to 0.
Use kfree instead of vfree.
Use ETH_ALEN instead of 6.
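A hedged illustration of the allocation pattern after these changes
(hypothetical structure and helper names, not the actual qlcnic hunk):
kcalloc() returns zeroed memory, so no memset() is needed, and the
matching free is kfree().

#include <linux/types.h>
#include <linux/slab.h>
#include <linux/if_ether.h>

struct func_cfg_sketch {
	u8 mac[ETH_ALEN];	/* ETH_ALEN instead of a bare 6 */
	u16 func_state;
};

static struct func_cfg_sketch *alloc_cfg_sketch(unsigned int nfuncs)
{
	/* was: vmalloc(nfuncs * sizeof(...)) followed by memset(buf, 0, ...) */
	return kcalloc(nfuncs, sizeof(struct func_cfg_sketch), GFP_KERNEL);
}

static void free_cfg_sketch(struct func_cfg_sketch *buf)
{
	kfree(buf);		/* was: vfree(buf) */
}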
Signed-off-by: Joe Perches <joe@perches.com>
Acked-by: Jitendra Kalsaria <jitendra.kalsaria@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
I don't know how large "tp->vlan_shift" is but static checkers worry
about shift wrapping bugs here.
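An illustrative stand-alone example of this class of fix (not the actual
cxgb4 change): widen the left operand before shifting so that a shift
count near or above 32 cannot wrap a 32-bit intermediate value.

#include <stdint.h>

static inline uint64_t shift_into_u64(uint32_t value, unsigned int shift)
{
	return (uint64_t)value << shift;	/* widen first, then shift */
}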
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Acked-by: Dimitris Michailidis <dm@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
That code has been around for ages without being used.
CC: Jay Vosburgh <fubar@us.ibm.com>
CC: Andy Gospodarek <andy@greyhouse.net>
Signed-off-by: Veaceslav Falico <vfalico@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
CC: Jay Vosburgh <fubar@us.ibm.com>
CC: Andy Gospodarek <andy@greyhouse.net>
Signed-off-by: Veaceslav Falico <vfalico@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
It's currently a huge mess that is really hard to read. This cleanup
doesn't touch the logic at all; it only breaks easy-to-fix long lines and
updates comment styles.
CC: Jay Vosburgh <fubar@us.ibm.com>
CC: Andy Gospodarek <andy@greyhouse.net>
Signed-off-by: Veaceslav Falico <vfalico@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The crc16 functionality is not used anymore, therefore
we can safely remove the dependency in the Kbuild file.
Signed-off-by: Antonio Quartulli <antonio@meshcoding.com>
Signed-off-by: Simon Wunderlich <sw@simonwunderlich.de>
Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch>
Signed-off-by: Antonio Quartulli <antonio@meshcoding.com>
With the unrolling of batadv_header into the respective structures, the
offsetof checks are now useless. Instead, add build checks for all
packet types which go over the wire, to avoid problems with wrong sizes
or compatibility issues on architectures which we don't use every day.
Signed-off-by: Simon Wunderlich <sw@simonwunderlich.de>
Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch>
Signed-off-by: Antonio Quartulli <antonio@meshcoding.com>
Add the missing sysfs attributes to the proper section of the README.
Signed-off-by: Antonio Quartulli <antonio@meshcoding.com>
Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch>
A return at the end of a void function is not needed.
Since most of the void functions in the code do not have one,
make all the others consistent by removing the useless
returns. Actually, all the functions to be "fixed" are in
network-coding.h only.
Signed-off-by: Antonio Quartulli <antonio@meshcoding.com>
Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch>
Show tables for the multi interface operation. Originator tables
are added per hard interface.
This patch also changes the API by adding the interface to the
bat_orig_print() parameters.
Signed-off-by: Simon Wunderlich <simon@open-mesh.com>
Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch>
Signed-off-by: Antonio Quartulli <antonio@meshcoding.com>
To show information per interface, add a debugfs hardif structure
similar to the system in sysfs. Hard interface folders will be created
in "$debugfs/batman-adv/". Files are not yet added.
Signed-off-by: Simon Wunderlich <simon@open-mesh.com>
Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch>
Signed-off-by: Antonio Quartulli <antonio@meshcoding.com>
With the new interface alternating, the first hop may send packets
in a round-robin fashion to its neighbors because it has multiple
valid routes built by the multi interface optimization. This patch
enables the feature if bonding is selected. Note that unlike the
bonding implemented before, this version is much simpler and may
even enable multi-path routing to a certain degree.
Signed-off-by: Simon Wunderlich <simon@open-mesh.com>
Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch>
Signed-off-by: Antonio Quartulli <antonio@meshcoding.com>
The current OGM sending and aggregation functionality decides on
which interfaces a packet should be sent when it parses the forward
packet struct. However, with the network-wide multi interface
optimization the outgoing interface is decided by the OGM processing
function.
Reflect this by moving the decision into the OGM processing function
and adding the outgoing interface to the forwarding packet struct. This
practically implies that an OGM may be added multiple times (once per
outgoing interface), and it also affects aggregation, which needs to
consider the outgoing interface as well.
Signed-off-by: Simon Wunderlich <simon@open-mesh.com>
Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch>
Signed-off-by: Antonio Quartulli <antonio@meshcoding.com>
If the same interface is used for sending and receiving, there might be
throughput degradation on half-duplex interfaces such as WiFi. Add a
penalty if the same interface is used, to reflect this problem in the
metric. At the same time, change the hop penalty from 30 to 15 so there
will be no change for single-WiFi mesh networks: the effective hop
penalty will stay at 30 due to the new WiFi penalty for these networks.
Signed-off-by: Simon Wunderlich <simon@open-mesh.com>
Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch>
Signed-off-by: Antonio Quartulli <antonio@meshcoding.com>
For the network wide multi interface optimization there are different
routers for each outgoing interface (outgoing from the OGM perspective,
incoming for payload traffic). To reflect this, change the router and
associated data to a list of routers.
While at it, rename batadv_orig_node_get_router() to
batadv_orig_router_get() to follow the new naming scheme.
Signed-off-by: Simon Wunderlich <simon@open-mesh.com>
Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch>
Signed-off-by: Antonio Quartulli <antonio@meshcoding.com>
For the network wide multi interface optimization it is required to save
metrics per outgoing interface in one neighbor. Therefore a new type is
introduced to keep interface-specific information. This also requires
some changes in access and list management.
The compare and equiv_or_better API calls are changed to take the
outgoing interface into consideration.
Signed-off-by: Simon Wunderlich <simon@open-mesh.com>
Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch>
Signed-off-by: Antonio Quartulli <antonio@meshcoding.com>
Remove the bonding and interface alternating code - it will be replaced
by a new, network-wide multi interface optimization which enables
both bonding and interface alternating in a better way.
Keep the sysfs and the find-router function though; these will be needed
later.
Signed-off-by: Simon Wunderlich <simon@open-mesh.com>
Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch>
Signed-off-by: Antonio Quartulli <antonio@meshcoding.com>
Sabrina Dubroca says:
====================
alx: add statistics
Currently, the alx driver doesn't support statistics [1,2]. The
original alx driver [3] that Johannes Berg modified provided
statistics. This patch is an adaptation of the statistics code from
the original driver to the alx driver included in the kernel.
v4:
- modified the assignments of hw stats to netstats (Ben Hutchings)
- added comments to describe the stats fields (copied from atlx)
v3:
- renamed __alx_update_hw_stats to alx_update_hw_stats (Stephen Hemminger)
v2:
- use u64 instead of unsigned long (Ben Hutchings)
- implement ndo_get_stats64 instead of ndo_get_stats (Ben Hutchings)
- use EINVAL instead of ENOTSUPP (Ben Hutchings)
- add BUILD_BUG_ON to check the size of the stats (Johannes Berg, Ben
Hutchings)
- add a comment regarding persistence of the stats (Stephen Hemminger)
- align assignments in __alx_update_hw_stats
[1] https://bugzilla.kernel.org/show_bug.cgi?id=63401
[2] http://www.spinics.net/lists/netdev/msg245544.html
[3] https://github.com/mcgrof/alx
====================
Signed-off-by: David S. Miller <davem@davemloft.net>