Don't keep separate functions to enable and disable queues for the VFs.
Just call the existing function that everybody else uses. Remove the
unused functions.
Change-Id: I15db9aad64a59e502bfe1e0fdab9b347ab85c12c
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Sibai Li <sibai.li@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Fix the VF reset flow so that it works on real hardware. After
discussions with the HW team, the reset flow has been changed
somewhat.
- Change the i40e_reset_vf function to a void type, and fix
up the callers to reflect this.
- Move the MSI-X disable code to i40e_free_vf_res since it must
be done every time the VF is freed, regardless of whether or
not it is reset.
- Ensure that the PCIe bus is quiet before polling the reset bit.
- Don't clear the VFGEN_RSTAT1 register at the beginning as it is
cleared by the reset.
- Poll longer for the reset to be done.
- Disable the queues using an existing function rather than
rolling our own.
- Free and reallocate the VSI after reset to avoid rx hang.
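A rough sketch of the resulting flow (helper and field names loosely follow the i40e driver and are illustrative placeholders, not the exact upstream code):

static void i40e_reset_vf(struct i40e_vf *vf, bool flr)
{
	struct i40e_pf *pf = vf->pf;

	/* trigger the reset, then make sure the PCIe bus is quiet before
	 * polling the reset-done bit; VFGEN_RSTAT1 is cleared by the reset
	 * itself, so it is not cleared up front
	 */
	i40e_trigger_vf_reset(vf, flr);

	/* poll (longer than before) for reset completion */
	if (!i40e_poll_vf_reset_done(vf))
		dev_err(&pf->pdev->dev, "VF %d reset never finished\n",
			vf->vf_id);

	/* disable the VF's queues with the existing common helper rather
	 * than an open-coded variant
	 */
	i40e_vsi_control_rings(pf->vsi[vf->lan_vsi_index], false);

	/* free and reallocate the VSI to avoid an rx hang; the MSI-X
	 * disable lives in i40e_free_vf_res() so it runs on every free
	 * path, not only on reset
	 */
	i40e_free_vf_res(vf);
	i40e_alloc_vf_res(vf);
}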
Change-Id: I11e2590431cb73e8663714d1cc5b23d59b809033
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Kavindya Deegala <kavindya.s.deegala@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
The VF reset code will be refactored in future patches. Part of that
refactor required it to call i40e_alloc_vf_res and i40e_free_vf_res, so
the function must be moved. In order to make the future patches more
readable, we perform the function move here, with no other changes.
Change-Id: If6567c9c0bada6caafb2ee0227e0d9d50d05f27f
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Kavindya Deegala <kavindya.s.deegala@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This implements receive offload for VXLAN for i40e. The hardware
supports checksum offload/verification of the inner/outer header.
Change-Id: I450db300af6713f2044fef1191a0d1d294c13369
Signed-off-by: Joseph Gasparakis <joseph.gasparakis@intel.com>
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This adds the implementation for the VXLAN ndo's. This allows the
hardware to do RX checksum offload for inner packets on the UDP ports
that VXLAN notifies us about.
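A minimal sketch of the wiring, assuming the standard ndo_{add,del}_vxlan_port callbacks of that era; the handler bodies are placeholders:

static void i40e_add_vxlan_port(struct net_device *netdev,
				sa_family_t sa_family, __be16 port)
{
	/* remember the UDP port and program it into the HW's UDP tunnel
	 * table so inner RX checksums can be offloaded for it
	 */
}

static void i40e_del_vxlan_port(struct net_device *netdev,
				sa_family_t sa_family, __be16 port)
{
	/* remove the UDP port from the HW's UDP tunnel table */
}

static const struct net_device_ops i40e_netdev_ops = {
	/* ... existing callbacks ... */
	.ndo_add_vxlan_port	= i40e_add_vxlan_port,
	.ndo_del_vxlan_port	= i40e_del_vxlan_port,
};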
Signed-off-by: Joseph Gasparakis <joseph.gasparakis@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Add curly braces on a multi-line function. While we're here, we
also change i40e_vsi_clear_rings() to return void since no
caller cares.
Change-Id: I261fcef20e2a39e18d83ec08fdd14456131dee91
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Kavindya Deegala <kavindya.s.deegala@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Wake on LAN is disabled by default and will remain that way for most
platforms, but there is an NVM setting that allows vendors to enable it
for a port if they think they've provided the right power environment
for the device. This patch adds code to check the NVM setting and enable
Magic Packet use if WoL is enabled for the port.
Since only Magic Packet is supported, there's not a lot of HW configuration
needed.
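A hedged sketch of how this might surface through ethtool; pf->wol_supported and pf->wol_en are assumed fields filled from the NVM word at probe time, and only Magic Packet is offered:

static void i40e_get_wol(struct net_device *netdev,
			 struct ethtool_wolinfo *wol)
{
	struct i40e_netdev_priv *np = netdev_priv(netdev);
	struct i40e_pf *pf = np->vsi->back;

	/* pf->wol_supported is assumed to mirror the NVM "WoL allowed for
	 * this port" setting; only Magic Packet exists as a capability
	 */
	wol->supported = pf->wol_supported ? WAKE_MAGIC : 0;
	wol->wolopts = pf->wol_en ? WAKE_MAGIC : 0;
}

static int i40e_set_wol(struct net_device *netdev,
			struct ethtool_wolinfo *wol)
{
	struct i40e_netdev_priv *np = netdev_priv(netdev);
	struct i40e_pf *pf = np->vsi->back;

	/* only Magic Packet is supported, and only if the NVM allows it */
	if (!pf->wol_supported || (wol->wolopts & ~WAKE_MAGIC))
		return -EOPNOTSUPP;

	pf->wol_en = !!(wol->wolopts & WAKE_MAGIC);
	device_set_wakeup_enable(&pf->pdev->dev, pf->wol_en);
	return 0;
}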
Change-Id: I44e904a7b15695e34683009f487064cd86ea59b0
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Kavindya Deegala <kavindya.s.deegala@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Call i40e_set_pci_config_data from probe, then check that
we are in an 8GT/s x8 PCIe slot and send a warning if we are not.
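A minimal sketch of the check, assuming hw->bus.speed and hw->bus.width were filled in by i40e_set_pci_config_data(); the enum names are illustrative, not verified upstream code:

static void i40e_check_pcie_link(struct i40e_pf *pf)
{
	struct i40e_hw *hw = &pf->hw;

	/* warn when the device sits in anything less than a Gen3 x8 slot */
	if (hw->bus.width < i40e_bus_width_pcie_x8 ||
	    hw->bus.speed < i40e_bus_speed_8000)
		dev_warn(&pf->pdev->dev,
			 "PCIe slot is below 8GT/s x8; performance may be limited\n");
}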
Change-Id: I62815c574cee50d2787c50bbe956dde7a7a75a11
Signed-off-by: Catherine Sullivan <catherine.sullivan@intel.com>
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Kavindya Deegala <kavindya.s.deegala@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
The HMC error interrupt would generate an unnecessary
"unhandled interrupt" message, causing extra log spam, in addition to
causing a reset that was not necessary. Prevent this issue by handling
the HMC error case explicitly, and only reset if the interrupt came
from one of the other causes.
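A hedged sketch of the idea in the "other causes" interrupt handler; the mask name follows the i40e register headers, while the surrounding logic and the unhandled-cause helper are illustrative:

	/* handle the HMC error cause explicitly so it no longer shows up
	 * as an "unhandled interrupt" and does not force a reset
	 */
	if (icr0 & I40E_PFINT_ICR0_HMC_ERR_MASK) {
		icr0 &= ~I40E_PFINT_ICR0_HMC_ERR_MASK;
		dev_info(&pf->pdev->dev, "HMC error interrupt\n");
	}

	/* only the causes still set at this point are truly unexpected
	 * and may warrant the corrective reset
	 */
	if (icr0)
		i40e_handle_unknown_cause(pf, icr0);	/* hypothetical helper */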
Change-Id: Iabd203ba1dfc26a136b638597f3e9991acfa29f3
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com>
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Kavindya Deegala <kavindya.s.deegala@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Use for_each_set_bit() to simplify the code.
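For illustration, the kind of transformation involved (i, bitmap, size and handle() are stand-ins for the driver's own loop variables and per-bit work):

	/* before: open-coded scan over the bitmap */
	for (i = 0; i < size; i++) {
		if (!test_bit(i, bitmap))
			continue;
		handle(i);
	}

	/* after: same loop using the helper from <linux/bitops.h> */
	for_each_set_bit(i, bitmap, size)
		handle(i);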
Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Add missing PCI bus link speed 8.0 GT/s and bus link widths of
x1, x2, x4 and x8.
CC: <linux-kernel@vger.kernel.org>
CC: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Scott Feldman says:
====================
bonding: final set of netlink patches
v2:
- per Jiri's comment, fix ad_select checking against the parm table by
splitting bond_parse_parm() into several funcs. Go ahead and apply the
same technique to all parameters using a parm table.
- fix netlink msg size to include the missing nest attr
- drop the last patch for active_slaves. This patch needs to be
reworked per Jiri's comments and shouldn't hold up finalizing
the conversion of the existing parameter to netlink attributes.
Ding, assuming this patch set goes in, you should have all you
need to start converting module parameter setting/checking over to
funcs in *_options.c.
I'll send iproute2 patch for bonding netlink support once this patch
set is accepted.
v1:
The following series implements the last set of bonding netlink attributes
for 802.3ad mode:
lacp_rate
ad_select
ad_info, nest of:
ad_aggregator
ad_num_ports
ad_actor_key
ad_partner_key
ad_partner_mac
The last patch adds an additional netlink attribute, active_slaves, which
is a nested list of ifindices for current active slaves. We're using this
list to enable/disable hashing of ports in a hardware LAG implementation.
In the same way the bonding driver includes/excludes ports for 802.3ad
egress hashing, hardware ports are included/excluded from egress hashing
by the hardware based on port active status. Yes, the data path is
offloaded to hardware while the control path remains in the kernel via
the bonding driver.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Add bounds checking for params defined with parm tbl.
Signed-off-by: Scott Feldman <sfeldma@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add missing space for IFLA_BOND_ARP_IP_TARGET nest header.
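For illustration, the kind of accounting involved in the size calculation (a hedged sketch; the surrounding bond_get_size() arithmetic is abbreviated):

	/* a nest attribute costs its own header in addition to the
	 * nested per-target attributes
	 */
	size += nla_total_size(sizeof(struct nlattr));	/* IFLA_BOND_ARP_IP_TARGET */
	size += nla_total_size(sizeof(u32)) * BOND_MAX_ARP_TARGETS;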
Signed-off-by: Scott Feldman <sfeldma@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add nested IFLA_BOND_AD_INFO for bonding 802.3ad info.
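A hedged sketch of emitting the nest from the bond link's fill handler; the attribute names match the new UAPI, while the helper shape and error handling are illustrative:

static int bond_fill_ad_info(struct sk_buff *skb, struct bonding *bond)
{
	struct ad_info info;
	struct nlattr *nest;

	if (bond->params.mode != BOND_MODE_8023AD ||
	    bond_3ad_get_active_agg_info(bond, &info))
		return 0;

	nest = nla_nest_start(skb, IFLA_BOND_AD_INFO);
	if (!nest)
		return -EMSGSIZE;

	if (nla_put_u16(skb, IFLA_BOND_AD_INFO_AGGREGATOR, info.aggregator_id) ||
	    nla_put_u16(skb, IFLA_BOND_AD_INFO_NUM_PORTS, info.ports) ||
	    nla_put_u16(skb, IFLA_BOND_AD_INFO_ACTOR_KEY, info.actor_key) ||
	    nla_put_u16(skb, IFLA_BOND_AD_INFO_PARTNER_KEY, info.partner_key) ||
	    nla_put(skb, IFLA_BOND_AD_INFO_PARTNER_MAC,
		    sizeof(info.partner_system), &info.partner_system))
		return -EMSGSIZE;

	nla_nest_end(skb, nest);
	return 0;
}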
Signed-off-by: Scott Feldman <sfeldma@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add IFLA_BOND_AD_SELECT to allow get/set of bonding parameter
ad_select via netlink.
Signed-off-by: Scott Feldman <sfeldma@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add IFLA_BOND_AD_LACP_RATE to allow get/set of bonding parameter
lacp_rate via netlink.
Signed-off-by: Scott Feldman <sfeldma@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nithin Nayak Sujir says:
====================
tg3: Unicast filter support and misc fixes
Michael Chan (2):
tg3: Refactor __tg3_set_mac_addr()
tg3: Add unicast filtering support.
Nithin Nayak Sujir (3):
tg3: Set the MAC clock to the fastest speed during boot code load
tg3: Poll cpmu link state on APE + ASF enabled devices
tg3: Update version to 3.136
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: Nithin Nayak Sujir <nsujir@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
On ASF enabled devices where the mgmt firmware runs on the application
processing engine, there is a race between the tg3 driver processing a
link change event and the ASF firmware clearing the link changed bit in
the EMAC status register. This leads to link notifications to the driver
sometimes getting lost.
Poll the CPMU link state as a backup for the normal interrupt path
update if ASF is enabled.
Signed-off-by: Nithin Nayak Sujir <nsujir@broadcom.com>
Signed-off-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
On the 5717, 5718 and 5719 devices, the bootcode runs slower when any
port doesn't have a link due to clock speed slowing down as part of the
link-aware feature. This leads to the driver timing out waiting for the
bootcode signature.
This patch overrides the clock policy to the highest frequency just before
reset and restores it after the bootcode is up.
Signed-off-by: Nithin Nayak Sujir <nsujir@broadcom.com>
Signed-off-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Up to 3 additional unicast addresses can be added to the perfect match
filter table.
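A hedged sketch of programming the extra perfect-match entries from the device's unicast list; TG3_UCAST_FILTERS, tg3_set_one_mac_addr() and tg3_enable_rx_promisc() are hypothetical names standing in for the driver's real register writes:

#define TG3_UCAST_FILTERS	3	/* slots beyond the primary MAC */

static void tg3_set_uc_list(struct tg3 *tp, struct net_device *dev)
{
	struct netdev_hw_addr *ha;
	int i = 1;	/* entry 0 holds the primary MAC address */

	if (netdev_uc_count(dev) > TG3_UCAST_FILTERS) {
		/* too many addresses for perfect filtering, fall back */
		tg3_enable_rx_promisc(tp);
		return;
	}

	netdev_for_each_uc_addr(ha, dev)
		tg3_set_one_mac_addr(tp, ha->addr, i++);
}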
Signed-off-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: Nithin Nayak Sujir <nsujir@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Refactor __tg3_set_mac_addr() so that individual MAC address filter entries can be set.
Signed-off-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: Nithin Nayak Sujir <nsujir@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The llc_sap_list_lock does not need to be global, only acquired
in core.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Namespace-related cleanup:
* make cred_to_ucred static
* remove unused sock_rmalloc function
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Running 'scripts/checkpatch.pl' on the driver files gives numerous warnings:
- block comments using empty /* line;
- unneeded \ at end of lines;
- message string split across lines;
- use of __attribute__((aligned(n))) instead of __aligned(n) macro;
- use of __attribute__((packed)) instead of __packed macro.
Additionally, running 'scripts/checkpatch.pl --strict' gives more complaints:
- including the paragraph about writing to FSF into the heading comment;
- alignment not matching open paren;
- multiple assignments on one line;
- use of CamelCase names;
- missing {} on one of the *if* arms where another has them;
- spinlock definition without a comment.
While fixing these, also do some more style cleanups:
- remove useless () around expressions;
- add {} around multi-line *if* operator's arm;
- remove space before comma;
- add spaces after /* and before */;
- properly align continuation lines of broken up expressions;
- realign comments to the structure fields.
Signed-off-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
o Driver is using a common tx_clean_lock for all Tx queues. This patch
adds a per-queue tx_clean_lock.
o Driver is not updating sw_consumer while processing Tx completion
when the interface is going down. This is fixed in this patch.
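A rough sketch of the locking change (field placement and surrounding context are illustrative, not the exact qlcnic code):

	struct qlcnic_host_tx_ring {
		/* ... existing ring state ... */
		spinlock_t tx_clean_lock;	/* per-queue, replaces the shared lock */
	};

	/* Tx completion processing takes the ring's own lock and keeps
	 * sw_consumer up to date even while the interface is going down
	 */
	spin_lock(&tx_ring->tx_clean_lock);
	/* ... reclaim completed descriptors, advancing tx_ring->sw_consumer ... */
	spin_unlock(&tx_ring->tx_clean_lock);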
Signed-off-by: Shahed Shaikh <shahed.shaikh@qlogic.com>
Signed-off-by: Manish Chopra <manish.chopra@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When we create a new InfiniBand link with a non-InfiniBand device, e.g. `ip link
add link em1 type ipoib pkey 0x8001`, we will get a NULL pointer dereference
because other devices like Ethernet don't have a struct ib_device.
The code path is:
rtnl_newlink
  |-- ipoib_new_child_link
        |-- __ipoib_vlan_add
              |-- ipoib_set_dev_features
                    |-- ib_query_device
Fix this bug by making sure the source net device is InfiniBand when creating a new link.
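A minimal sketch of the guard, assuming the check is done in the rtnl newlink path before any ib_device use; the rest of the handler is abbreviated:

static int ipoib_new_child_link(struct net *src_net, struct net_device *dev,
				struct nlattr *tb[], struct nlattr *data[])
{
	struct net_device *pdev;

	if (!tb[IFLA_LINK])
		return -EINVAL;

	pdev = __dev_get_by_index(src_net, nla_get_u32(tb[IFLA_LINK]));
	/* refuse anything that is not an InfiniBand device up front,
	 * before we ever touch its (non-existent) struct ib_device
	 */
	if (!pdev || pdev->type != ARPHRD_INFINIBAND)
		return -ENODEV;

	/* ... continue with the normal child link setup ... */
	return 0;
}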
Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The return value should be the boolean value, not the error code.
Signed-off-by: Hayes Wang <hayeswang@realtek.com>
Spotted-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The semantic patch that makes this change is as follows:
(http://coccinelle.lip6.fr/)
// <smpl>
@@
expression e1,e2;
@@
pci_enable_wake(e1,
- 0
+ PCI_D0
,e2)
// </smpl>
Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
Tom Herbert says:
====================
ipv4: Cache dst in tunnels
Version 3 of caching routes in tunnels.
Addressed some comments from Eric in this series.
There are two patches (variants) in the series:
1) One dst cached for each tunnel.
2) Percpu dst cache per tunnel to avoid false sharing
Testing with GRE tunnels on a 32 CPU host with bnx2x (RSS support
for GRE) shows a modest improvement in CPU utilization with these
patches running 200 TCP_RR netperf clients.
Without patches
71.22% CPU utilization
138/180/244 90/95/99% latencies
1.30465e+06 CPU/tps
18318 CPU/tps
With patches
69.84%
142/186/249 90/95/99% latencies
1.30827e+06
18732 CPU/tps
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
A per-cpu route cache eliminates sharing of the dst refcnt between CPUs.
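A minimal sketch of the per-cpu variant, under the assumption that the cache is simply an array of per-cpu dst pointers hanging off the tunnel:

	struct ip_tunnel_dst {
		struct dst_entry __rcu	*dst;
	};

	/* one cache slot per cpu so the dst refcnt is not bounced between cpus */
	tunnel->dst_cache = alloc_percpu(struct ip_tunnel_dst);
	if (!tunnel->dst_cache)
		return -ENOMEM;

	/* readers and writers then only touch this_cpu_ptr(tunnel->dst_cache) */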
Signed-off-by: Tom Herbert <therbert@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Avoid doing a route lookup on every packet being tunneled.
In ip_tunnel.c, cache the route returned from ip_route_output if
the tunnel is "connected", i.e. all the routing parameters for a
packet are taken from the tunnel parms. Specifically, the tunnel is
not NBMA and the tos comes from the tunnel parms (not the inner packet).
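A hedged sketch of the fast path in the xmit routine; tunnel_rtable_get()/tunnel_dst_set() stand in for the cache accessors and the surrounding code is abbreviated:

	rt = connected ? tunnel_rtable_get(tunnel, &fl4) : NULL;

	if (!rt) {
		rt = ip_route_output_key(tunnel->net, &fl4);
		if (IS_ERR(rt)) {
			dev->stats.tx_carrier_errors++;
			goto tx_error;
		}
		/* only cache when the route depends solely on tunnel parms */
		if (connected)
			tunnel_dst_set(tunnel, &rt->dst);
	}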
Signed-off-by: Tom Herbert <therbert@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Recently I updated the sctp socket option deprecation warnings to be both a bit
clearer and ratelimited, to prevent user processes from spamming the log file.
Ben Hutchings suggested that I add the process name and pid to these warnings so
that users can tell who is responsible for using the deprecated apis. This
patch accomplishes that.
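For illustration, the resulting warning might look roughly like this (the option named in the message is just an example):

	pr_warn_ratelimited("%s (pid %d): use of this SCTP socket option is deprecated\n",
			    current->comm, task_pid_nr(current));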
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
CC: Vlad Yasevich <vyasevich@gmail.com>
CC: Ben Hutchings <bhutchings@solarflare.com>
CC: "David S. Miller" <davem@davemloft.net>
CC: netdev@vger.kernel.org
Signed-off-by: David S. Miller <davem@davemloft.net>
Netdev_priv performs an addition, not a pointer dereference, so it seems
quite unlikely that its result would ever be NULL.
A semantic patch that makes this change is as follows:
(http://coccinelle.lip6.fr/)
// <smpl>
@@
statement S;
@@
- if (!netdev_priv(...)) S
// </smpl>
Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Acked-by: Grant Grundler <grundler@parisc-linux.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Since commit 52367a763d
("cxgb4/cxgb4vf: Code cleanup to enable T4 Configuration File support"),
we have failures like this during cxgb4 probe:
cxgb4 0000:01:00.4: bad SGE FL page buffer sizes [65536, 65536]
cxgb4: probe of 0000:01:00.4 failed with error -22
This happens whenever software parameters are used, without a
configuration file. That happens when the hardware was already
initialized (after kexec, or after csiostor is loaded).
It happens that these values are acceptable, rendering fl_pg_order equal
to 0, which is the case for a hard init when the page size is equal to or
larger than 65536.
Accepting fl_large_pg equal to fl_small_pg solves the issue, and
shouldn't cause any trouble besides a possible performance reduction
when smaller pages are used. And that can be fixed by a configuration
file.
Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch allows you to dump all sets available in all of
the registered families. This allows you to use NFPROTO_UNSPEC
to dump all existing sets, similarly to other existing table,
chain and rule operations.
This patch is based on an original patch from Arturo Borrero
González.
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
It would be useful, e.g. in a server or desktop environment, to have
a facility for fine-grained "per application" or "per application
group" firewall policies. Users in the mobile/embedded area (e.g.
Android based) with different security policy requirements for
application groups could probably benefit greatly from this as well.
For example, with a little bit of configuration effort, an admin
could whitelist well-known applications, and thus block otherwise
unwanted "hard-to-track" applications like [1] from a user's machine.
Blocking is just one example, but it is not limited to that, meaning
netfilter allows us many different scenarios/policies beyond just
blocking, e.g. fine-grained settings for where applications are
allowed to connect/send traffic to, application traffic
marking/conntracking, application-specific packet mangling, and so on.
Implementing PID-based matching would not be appropriate as PIDs
frequently change, and child tracking would make that even more
complex and ugly. Cgroups would be a perfect candidate
for accomplishing that as they associate a set of tasks with a
set of parameters for one or more subsystems, in our case the
netfilter subsystem, which, of course, can be combined with other
cgroup subsystems into something more complex if needed.
As mentioned, to overcome this constraint, such processes could
be placed into one or multiple cgroups where different fine-grained
rules can be defined depending on the application scenario, while
e.g. everything else that is not part of that could be dropped (or
vice versa), thus making life harder for unwanted processes to
communicate to the outside world. So, we make use of cgroups here
to track jobs and limit their resources in terms of iptables
policies; in other words, limiting, tracking, etc what they are
allowed to communicate.
In our case we're working on outgoing traffic, matched based on the
local socket it originated from. Also, one doesn't even need a priori
knowledge of the application internals regarding their particular use
of ports or protocols. Matching is *extremely* lightweight as we just
test for the sk_classid marker of sockets, originating from net_cls.
net_cls and netfilter do not contradict
each other; in fact, each construct can live as standalone or they
can be used in combination with each other, which is perfectly fine,
plus it serves Tejun's requirement to not introduce a new cgroups
subsystem. Through this, we result in a very minimal and efficient
module, and don't add anything except netfilter code.
One possible, minimal usage example (many other iptables options
can be applied obviously):
1) Configuring cgroups if not already done, e.g.:
mkdir /sys/fs/cgroup/net_cls
mount -t cgroup -o net_cls net_cls /sys/fs/cgroup/net_cls
mkdir /sys/fs/cgroup/net_cls/0
echo 1 > /sys/fs/cgroup/net_cls/0/net_cls.classid
(resp. a real flow handle id for tc)
2) Configuring netfilter (iptables-nftables), e.g.:
iptables -A OUTPUT -m cgroup ! --cgroup 1 -j DROP
3) Running applications, e.g.:
ping 208.67.222.222 <pid:1799>
echo 1799 > /sys/fs/cgroup/net_cls/0/tasks
64 bytes from 208.67.222.222: icmp_seq=44 ttl=49 time=11.9 ms
[...]
ping 208.67.220.220 <pid:1804>
ping: sendmsg: Operation not permitted
[...]
echo 1804 > /sys/fs/cgroup/net_cls/0/tasks
64 bytes from 208.67.220.220: icmp_seq=89 ttl=56 time=19.0 ms
[...]
Of course, real-world deployments would make use of the cgroups
user-space tool suite, or their own custom policy daemons that
dynamically move applications from/to various cgroups.
[1] http://www.blackhat.com/presentations/bh-europe-06/bh-eu-06-biondi/bh-eu-06-biondi-up.pdf
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: cgroups@vger.kernel.org
Acked-by: Li Zefan <lizefan@huawei.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
While we're at it, having introduced CGROUP_NET_CLASSID, let's also make
NETPRIO_CGROUP more consistent with the rest of cgroups and rename it
into CONFIG_CGROUP_NET_PRIO so that for networking, we now have
CONFIG_CGROUP_NET_{PRIO,CLASSID}. This not only makes the CONFIG
option consistent among networking cgroups, but also among cgroups
CONFIG conventions in general as the vast majority has a prefix of
CONFIG_CGROUP_<SUBSYS>.
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Cc: Zefan Li <lizefan@huawei.com>
Cc: cgroups@vger.kernel.org
Acked-by: Li Zefan <lizefan@huawei.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Zefan Li requested [1] to perform the following cleanup/refactoring:
- Split cgroupfs classid handling into net core to better express a
possible more generic use.
- Disable module support for cgroupfs bits as the majority of other
cgroupfs subsystems do not have that, and it seems not to be wanted
from the cgroup side. Zefan might want to follow up for netprio
later on.
- By this, the code which previously took care of functionality when
built as a module can be further reduced.
cgroupfs bits are being placed under net/core/netclassid_cgroup.c, so
that we are consistent with {netclassid,netprio}_cgroup naming that is
under net/core/ as suggested by Zefan.
No change in functionality; only code refactoring is being
done here.
[1] http://patchwork.ozlabs.org/patch/304825/
Suggested-by: Li Zefan <lizefan@huawei.com>
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Cc: Zefan Li <lizefan@huawei.com>
Cc: Thomas Graf <tgraf@suug.ch>
Cc: cgroups@vger.kernel.org
Acked-by: Li Zefan <lizefan@huawei.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
If setting the event mask failed, we were returning 0 (success).
This patch updates the return code to -EINVAL in case of a problem.
Signed-off-by: Eric Leblond <eric@regit.org>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
The following code is not used in current upstream code.
Some of this seems to be old hooks, others might be used by some
out-of-tree module (which I don't care about breaking), and
the need_ipv4_conntrack was used by old NAT code but no longer
called.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Function never used in current upstream code.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
We currently use prandom_u32() for allocation of ports in tcp bind(0)
and udp code. In case of plain SNAT we try to keep the ports as is
or increment on collision.
SNAT --random mode does use per-destination incrementing port
allocation. As a recent paper [1] pointed out, this mode of
port allocation makes it possible for an attacker to find the randomly
allocated ports through a timing side-channel in a socket overloading
attack conducted by an off-path attacker.
So, NF_NAT_RANGE_PROTO_RANDOM actually weakens the port randomization
in regard to the attack described in this paper. As we need to keep
compatibility, add another flag called NF_NAT_RANGE_PROTO_RANDOM_FULLY
that would replace the NF_NAT_RANGE_PROTO_RANDOM hash-based port
selection algorithm with a simple prandom_u32() in order to mitigate
this attack vector. Note that the lfsr113's internal state is
periodically reseeded by the kernel through a local secure entropy
source.
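A hedged sketch of the offset selection in the common unique-tuple helper; the existing RANDOM branch is shown for contrast and the exact surrounding code is abbreviated:

	if (range->flags & NF_NAT_RANGE_PROTO_RANDOM_FULLY) {
		/* fully random: no per-destination pattern to derandomise */
		off = prandom_u32();
	} else if (range->flags & NF_NAT_RANGE_PROTO_RANDOM) {
		/* legacy: hash-based, per-destination incrementing offset */
		off = l3proto->secure_port(tuple, maniptype == NF_NAT_MANIP_SRC
						  ? tuple->dst.u.all
						  : tuple->src.u.all);
	} else {
		off = *rover;
	}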
More details can be found in [1], the basic idea is to send bursts
of packets to a socket to overflow its receive queue and measure
the latency to detect a possible retransmit when the port is found.
Because ports increase per given destination and port, further
allocations can be predicted. This information could then be used by
an attacker, e.g. for cache-poisoning, NS pinning, and degradation-of-service
attacks against DNS servers [1]:
The best defense against the poisoning attacks is to properly
deploy and validate DNSSEC; DNSSEC provides security not only
against off-path attacker but even against MitM attacker. We hope
that our results will help motivate administrators to adopt DNSSEC.
However, full DNSSEC deployment may take significant time, and
until that happens, we recommend short-term, non-cryptographic
defenses. We recommend to support full port randomisation,
according to practices recommended in [2], and to avoid
per-destination sequential port allocation, which we show may be
vulnerable to derandomisation attacks.
Joint work between Hannes Frederic Sowa and Daniel Borkmann.
[1] https://sites.google.com/site/hayashulman/files/NIC-derandomisation.pdf
[2] http://arxiv.org/pdf/1205.5190v1.pdf
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
The nfmsg variable is not used (except in sizeof operator which does
not care about its value) between the first and second time it is
assigned the value. Furthermore, nlmsg_data has no side effects, so
the assignment can be safely removed.
Signed-off-by: Michal Nazarewicz <mina86@mina86.com>
Cc: Patrick McHardy <kaber@trash.net>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
In nf_tables_set_alloc_name(), we are trying to find a new, unused
name for our new set and iterate through the list of present sets.
As far as I can see, we're using format string %d to parse already
present names in order to mark their presence in a bitmap, so that
we can later on find the first 0 in that map to assign the new set
name to. We should rather use a temporary variable of type int to
store the result of sscanf() to, and for making sanity checks on.
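A minimal sketch of the fix, keeping the existing loop shape (abbreviated):

	list_for_each_entry(i, &ctx->table->sets, list) {
		int tmp;

		/* parse into a plain int and sanity-check it before using
		 * it as a bit index into the in-use map
		 */
		if (!sscanf(i->name, name, &tmp))
			continue;
		if (tmp < 0 || tmp >= BITS_PER_BYTE * PAGE_SIZE)
			continue;

		set_bit(tmp, inuse);
	}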
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
The VLAN tag handling code in netpoll_send_skb_on_dev() has two problems.
1) It exits without unlocking the TXQ.
2) It then tries to queue a NULL skb to npinfo->txq.
Reported-by: Ahmed Tamrawi <atamrawi@iastate.edu>
Signed-off-by: David S. Miller <davem@davemloft.net>
When reading/writing the 64-bit data, the correct lock should be held.
We can also use the generic vti6_get_stats to return stats rather than
defining a new one in ip6_vti.c.
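For reference, the read side of the per-cpu 64-bit stats follows the usual u64_stats seqcount pattern; a sketch using the _bh fetch helpers and the generic tunnel stats struct of the time:

	for_each_possible_cpu(cpu) {
		const struct pcpu_tstats *tstats = per_cpu_ptr(dev->tstats, cpu);
		u64 rx_packets, rx_bytes, tx_packets, tx_bytes;
		unsigned int start;

		do {
			start = u64_stats_fetch_begin_bh(&tstats->syncp);
			rx_packets = tstats->rx_packets;
			rx_bytes   = tstats->rx_bytes;
			tx_packets = tstats->tx_packets;
			tx_bytes   = tstats->tx_bytes;
		} while (u64_stats_fetch_retry_bh(&tstats->syncp, start));

		stats->rx_packets += rx_packets;
		stats->rx_bytes   += rx_bytes;
		stats->tx_packets += tx_packets;
		stats->tx_bytes   += tx_bytes;
	}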
Fixes: 87b6d218f3 ("tunnel: implement 64 bits statistics")
Cc: Stephen Hemminger <stephen@networkplumber.org>
Cc: Eric Dumazet <edumazet@google.com>
Signed-off-by: Li RongQing <roy.qing.li@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When reading/writing the 64-bit data, the correct lock should be held.
Fixes: 87b6d218f3 ("tunnel: implement 64 bits statistics")
Cc: Stephen Hemminger <stephen@networkplumber.org>
Cc: Eric Dumazet <edumazet@google.com>
Signed-off-by: Li RongQing <roy.qing.li@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>