Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
Pull networking fixes from David Miller:
 "I'm sending a pull request of these lingering bug fixes for networking
  before the normal merge window material because some of this stuff I'd
  like to get to -stable ASAP.

   1) cxgb3 stopped working on 32-bit machines, fix from Ben Hutchings.

   2) Structures passed via netlink for netfilter logging are not fully
      initialized. From Mathias Krause.

   3) Properly unlink upper openvswitch device during notifications, from
      Alexei Starovoitov.

   4) Fix race conditions involving access to the IP compression scratch
      buffer, from Michal Kubecek.

   5) We don't handle the expiration of MTU information contained in ipv6
      routes sometimes, fix from Hannes Frederic Sowa.

   6) With Fast Open we can miscompute the TCP SYN/ACK RTT, from Yuchung
      Cheng.

   7) Don't take TCP RTT sample when an ACK doesn't acknowledge new data,
      also from Yuchung Cheng.

   8) The decreased IPSEC garbage collection threshold causes problems for
      some people, bump it back up. From Steffen Klassert.

   9) Fix skb->truesize calculated by tcp_tso_segment(), from Eric Dumazet.

  10) flow_dissector doesn't validate packet lengths sufficiently, from
      Jason Wang"

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (41 commits)
  net/mlx4_core: Fix call to __mlx4_unregister_mac
  net: sctp: do not trigger BUG_ON in sctp_cmd_delete_tcb
  net: flow_dissector: fail on evil iph->ihl
  xfrm: Fix null pointer dereference when decoding sessions
  can: kvaser_usb: fix usb endpoints detection
  can: c_can: Fix RX message handling, handle lost message before EOB
  doc:net: Fix typo in Documentation/networking
  bgmac: don't update slot on skb alloc/dma mapping error
  ibm emac: Fix locking for enable/disable eob irq
  ibm emac: Don't call napi_complete if napi_reschedule failed
  virtio-net: correctly handle cpu hotplug notifier during resuming
  bridge: pass correct vlan id to multicast code
  net: x25: Fix dead URLs in Kconfig
  netfilter: xt_NFQUEUE: fix --queue-bypass regression
  xen-netback: use jiffies_64 value to calculate credit timeout
  cxgb3: Fix length calculation in write_ofld_wr() on 32-bit architectures
  bnx2x: Disable VF access on PF removal
  bnx2x: prevent FW assert on low mem during unload
  tcp: gso: fix truesize tracking
  xfrm: Increase the garbage collector threshold
  ...
This commit is contained in:
commit
be408cd3e1
@@ -18,8 +18,8 @@ Introduction
 Datagram Congestion Control Protocol (DCCP) is an unreliable, connection
 oriented protocol designed to solve issues present in UDP and TCP, particularly
 for real-time and multimedia (streaming) traffic.
-It divides into a base protocol (RFC 4340) and plugable congestion control
-modules called CCIDs. Like plugable TCP congestion control, at least one CCID
+It divides into a base protocol (RFC 4340) and pluggable congestion control
+modules called CCIDs. Like pluggable TCP congestion control, at least one CCID
 needs to be enabled in order for the protocol to function properly. In the Linux
 implementation, this is the TCP-like CCID2 (RFC 4341). Additional CCIDs, such as
 the TCP-friendly CCID3 (RFC 4342), are optional.

@@ -103,7 +103,7 @@ Additional Configurations
 PRO/100 Family of Adapters is e100.

 As an example, if you install the e100 driver for two PRO/100 adapters
-(eth0 and eth1), add the following to a configuraton file in /etc/modprobe.d/
+(eth0 and eth1), add the following to a configuration file in /etc/modprobe.d/

 alias eth0 e100
 alias eth1 e100

@@ -4,7 +4,7 @@

 Introduction
 ============
-The IEEE 802.15.4 working group focuses on standartization of bottom
+The IEEE 802.15.4 working group focuses on standardization of bottom
 two layers: Medium Access Control (MAC) and Physical (PHY). And there
 are mainly two options available for upper layers:
  - ZigBee - proprietary protocol from ZigBee Alliance
@@ -66,7 +66,7 @@ net_device, with .type = ARPHRD_IEEE802154. Data is exchanged with socket family
 code via plain sk_buffs. On skb reception skb->cb must contain additional
 info as described in the struct ieee802154_mac_cb. During packet transmission
 the skb->cb is used to provide additional data to device's header_ops->create
-function. Be aware, that this data can be overriden later (when socket code
+function. Be aware that this data can be overridden later (when socket code
 submits skb to qdisc), so if you need something from that cb later, you should
 store info in the skb->data on your own.

@@ -197,7 +197,7 @@ state information because the file format is subject to change. It is
 implemented to provide extra debug information to help diagnose
 problems.) Users should use the netlink API.

-/proc/net/pppol2tp is also provided for backwards compaibility with
+/proc/net/pppol2tp is also provided for backwards compatibility with
 the original pppol2tp driver. It lists information about L2TPv2
 tunnels and sessions only. Its use is discouraged.

@@ -4,23 +4,23 @@ Information you need to know about netdev

 Q: What is netdev?

-A: It is a mailing list for all network related linux stuff. This includes
+A: It is a mailing list for all network-related Linux stuff. This includes
    anything found under net/ (i.e. core code like IPv6) and drivers/net
-   (i.e. hardware specific drivers) in the linux source tree.
+   (i.e. hardware specific drivers) in the Linux source tree.

    Note that some subsystems (e.g. wireless drivers) which have a high volume
    of traffic have their own specific mailing lists.

-   The netdev list is managed (like many other linux mailing lists) through
+   The netdev list is managed (like many other Linux mailing lists) through
    VGER ( http://vger.kernel.org/ ) and archives can be found below:

	http://marc.info/?l=linux-netdev
	http://www.spinics.net/lists/netdev/

-   Aside from subsystems like that mentioned above, all network related linux
-   development (i.e. RFC, review, comments, etc) takes place on netdev.
+   Aside from subsystems like that mentioned above, all network-related Linux
+   development (i.e. RFC, review, comments, etc.) takes place on netdev.

-Q: How do the changes posted to netdev make their way into linux?
+Q: How do the changes posted to netdev make their way into Linux?

 A: There are always two trees (git repositories) in play. Both are driven
    by David Miller, the main network maintainer. There is the "net" tree,
@@ -35,7 +35,7 @@ A: There are always two trees (git repositories) in play. Both are driven
 Q: How often do changes from these trees make it to the mainline Linus tree?

 A: To understand this, you need to know a bit of background information
-   on the cadence of linux development. Each new release starts off with
+   on the cadence of Linux development. Each new release starts off with
    a two week "merge window" where the main maintainers feed their new
    stuff to Linus for merging into the mainline tree. After the two weeks,
    the merge window is closed, and it is called/tagged "-rc1". No new
@ -46,7 +46,7 @@ A: To understand this, you need to know a bit of background information
|
||||
things are in a state of churn), and a week after the last vX.Y-rcN
|
||||
was done, the official "vX.Y" is released.
|
||||
|
||||
Relating that to netdev: At the beginning of the 2 week merge window,
|
||||
Relating that to netdev: At the beginning of the 2-week merge window,
|
||||
the net-next tree will be closed - no new changes/features. The
|
||||
accumulated new content of the past ~10 weeks will be passed onto
|
||||
mainline/Linus via a pull request for vX.Y -- at the same time,
|
||||
@@ -59,16 +59,16 @@ A: To understand this, you need to know a bit of background information
    IMPORTANT: Do not send new net-next content to netdev during the
    period during which net-next tree is closed.

-   Shortly after the two weeks have passed, (and vX.Y-rc1 is released) the
+   Shortly after the two weeks have passed (and vX.Y-rc1 is released), the
    tree for net-next reopens to collect content for the next (vX.Y+1) release.

    If you aren't subscribed to netdev and/or are simply unsure if net-next
    has re-opened yet, simply check the net-next git repository link above for
-   any new networking related commits.
+   any new networking-related commits.

    The "net" tree continues to collect fixes for the vX.Y content, and
    is fed back to Linus at regular (~weekly) intervals. Meaning that the
-   focus for "net" is on stablilization and bugfixes.
+   focus for "net" is on stabilization and bugfixes.

    Finally, the vX.Y gets released, and the whole cycle starts over.

@@ -217,7 +217,7 @@ A: Attention to detail. Re-read your own work as if you were the
    to why it happens, and then if necessary, explain why the fix proposed
    is the best way to get things done. Don't mangle whitespace, and as
    is common, don't mis-indent function arguments that span multiple lines.
-   If it is your 1st patch, mail it to yourself so you can test apply
+   If it is your first patch, mail it to yourself so you can test apply
    it to an unpatched tree to confirm infrastructure didn't mangle it.

    Finally, go back and read Documentation/SubmittingPatches to be
@@ -45,7 +45,7 @@ processing.

 Conversion of the reception path involves calling poll() on the file
 descriptor, once the socket is readable the frames from the ring are
-processsed in order until no more messages are available, as indicated by
+processed in order until no more messages are available, as indicated by
 a status word in the frame header.

 On kernel side, in order to make use of memory mapped I/O on receive, the
@@ -56,7 +56,7 @@ Dumps of kernel databases automatically support memory mapped I/O.

 Conversion of the transmit path involves changing message construction to
 use memory from the TX ring instead of (usually) a buffer declared on the
-stack and setting up the frame header approriately. Optionally poll() can
+stack and setting up the frame header appropriately. Optionally poll() can
 be used to wait for free frames in the TX ring.

 Structured and definitions for using memory mapped I/O are contained in
@@ -231,7 +231,7 @@ Ring setup:
	if (setsockopt(fd, NETLINK_TX_RING, &req, sizeof(req)) < 0)
		exit(1)

-	/* Calculate size of each invididual ring */
+	/* Calculate size of each individual ring */
	ring_size = req.nm_block_nr * req.nm_block_size;

	/* Map RX/TX rings. The TX ring is located after the RX ring */
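The netlink-mmap documentation patched above describes the receive discipline that follows this setup: poll() the socket, then consume frames in ring order until a frame's status word says no more messages are available, handing each frame back to the kernel as it is processed. A rough userspace sketch of that frame walk follows; the status values and the two-field header here are simplified stand-ins, not the kernel's real `struct nl_mmap_hdr` / `NL_MMAP_STATUS_*` definitions:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical stand-ins for the real NL_MMAP_STATUS_* values. */
enum frame_status { FRAME_UNUSED = 0, FRAME_VALID = 1 };

/* Hypothetical stand-in for the real struct nl_mmap_hdr. */
struct frame_hdr {
    uint32_t status;   /* ownership/status word in the frame header */
    uint32_t len;      /* payload length */
};

/* Walk the ring from 'start', consuming frames in order until a frame's
 * status word indicates the ring is drained.  Each consumed frame is
 * returned to the kernel by resetting its status word.  Returns the
 * number of frames consumed. */
static size_t process_ring(struct frame_hdr *ring, size_t nframes, size_t start)
{
    size_t consumed = 0;

    for (size_t i = start; consumed < nframes; i = (i + 1) % nframes) {
        if (ring[i].status != FRAME_VALID)
            break;                      /* no more messages available */
        /* ... handle ring[i]'s payload here ... */
        ring[i].status = FRAME_UNUSED;  /* hand the frame back */
        consumed++;
    }
    return consumed;
}
```

In a real consumer a poll() call precedes each walk, and the status-word reset is what makes the frame reusable by the kernel for the next message.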
@@ -89,8 +89,8 @@ packets. The name 'carrier' and the inversion are historical, think of
 it as lower layer.

 Note that for certain kind of soft-devices, which are not managing any
-real hardware, there is possible to set this bit from userpsace.
-One should use TVL IFLA_CARRIER to do so.
+real hardware, it is possible to set this bit from userspace. One
+should use TVL IFLA_CARRIER to do so.

 netif_carrier_ok() can be used to query that bit.

@@ -144,7 +144,7 @@ An overview of the RxRPC protocol:
 (*) Calls use ACK packets to handle reliability. Data packets are also
     explicitly sequenced per call.

- (*) There are two types of positive acknowledgement: hard-ACKs and soft-ACKs.
+ (*) There are two types of positive acknowledgment: hard-ACKs and soft-ACKs.
     A hard-ACK indicates to the far side that all the data received to a point
     has been received and processed; a soft-ACK indicates that the data has
     been received but may yet be discarded and re-requested. The sender may
@@ -160,7 +160,7 @@ Where:
 o pmt: core has the embedded power module (optional).
 o force_sf_dma_mode: force DMA to use the Store and Forward mode
		     instead of the Threshold.
-o force_thresh_dma_mode: force DMA to use the Shreshold mode other than
+o force_thresh_dma_mode: force DMA to use the Threshold mode other than
		     the Store and Forward mode.
 o riwt_off: force to disable the RX watchdog feature and switch to NAPI mode.
 o fix_mac_speed: this callback is used for modifying some syscfg registers
@@ -175,7 +175,7 @@ Where:
		 registers.
 o custom_cfg/custom_data: this is a custom configuration that can be passed
		 while initializing the resources.
-o bsp_priv: another private poiter.
+o bsp_priv: another private pointer.

 For MDIO bus The we have:

@@ -271,7 +271,7 @@ reset procedure etc).
 o dwmac1000_dma.c: dma functions for the GMAC chip;
 o dwmac1000.h: specific header file for the GMAC;
 o dwmac100_core: MAC 100 core and dma code;
-o dwmac100_dma.c: dma funtions for the MAC chip;
+o dwmac100_dma.c: dma functions for the MAC chip;
 o dwmac1000.h: specific header file for the MAC;
 o dwmac_lib.c: generic DMA functions shared among chips;
 o enh_desc.c: functions for handling enhanced descriptors;
@@ -364,4 +364,4 @@ Auto-negotiated Link Parter Ability.
 10) TODO:
 o XGMAC is not supported.
 o Complete the TBI & RTBI support.
- o extened VLAN support for 3.70a SYNP GMAC.
+ o extend VLAN support for 3.70a SYNP GMAC.

@@ -68,7 +68,7 @@ Module parameters

 There are several parameters which may be provided to the driver when
 its module is loaded. These are usually placed in /etc/modprobe.d/*.conf
-configuretion files. Example:
+configuration files. Example:

 options 3c59x debug=3 rx_copybreak=300

@@ -178,7 +178,7 @@ max_interrupt_work=N

 The driver's interrupt service routine can handle many receive and
 transmit packets in a single invocation. It does this in a loop.
-The value of max_interrupt_work governs how mnay times the interrupt
+The value of max_interrupt_work governs how many times the interrupt
 service routine will loop. The default value is 32 loops. If this
 is exceeded the interrupt service routine gives up and generates a
 warning message "eth0: Too much work in interrupt".
@@ -105,7 +105,7 @@ reduced by the following measures or a combination thereof:
     later.
     The lapb module interface was modified to support this. Its
     data_indication() method should now transparently pass the
-    netif_rx() return value to the (lapb mopdule) caller.
+    netif_rx() return value to the (lapb module) caller.
 (2) Drivers for kernel versions 2.2.x should always check the global
     variable netdev_dropping when a new frame is received. The driver
     should only call netif_rx() if netdev_dropping is zero. Otherwise
@@ -814,9 +814,6 @@ static int c_can_do_rx_poll(struct net_device *dev, int quota)
			msg_ctrl_save = priv->read_reg(priv,
					C_CAN_IFACE(MSGCTRL_REG, 0));

-			if (msg_ctrl_save & IF_MCONT_EOB)
-				return num_rx_pkts;
-
			if (msg_ctrl_save & IF_MCONT_MSGLST) {
				c_can_handle_lost_msg_obj(dev, 0, msg_obj);
				num_rx_pkts++;
@@ -824,6 +821,9 @@ static int c_can_do_rx_poll(struct net_device *dev, int quota)
				continue;
			}

+			if (msg_ctrl_save & IF_MCONT_EOB)
+				return num_rx_pkts;
+
			if (!(msg_ctrl_save & IF_MCONT_NEWDAT))
				continue;

@@ -1544,9 +1544,9 @@ static int kvaser_usb_init_one(struct usb_interface *intf,
	return 0;
 }

-static void kvaser_usb_get_endpoints(const struct usb_interface *intf,
-				     struct usb_endpoint_descriptor **in,
-				     struct usb_endpoint_descriptor **out)
+static int kvaser_usb_get_endpoints(const struct usb_interface *intf,
+				    struct usb_endpoint_descriptor **in,
+				    struct usb_endpoint_descriptor **out)
 {
	const struct usb_host_interface *iface_desc;
	struct usb_endpoint_descriptor *endpoint;
@@ -1557,12 +1557,18 @@ static void kvaser_usb_get_endpoints(const struct usb_interface *intf,
	for (i = 0; i < iface_desc->desc.bNumEndpoints; ++i) {
		endpoint = &iface_desc->endpoint[i].desc;

-		if (usb_endpoint_is_bulk_in(endpoint))
+		if (!*in && usb_endpoint_is_bulk_in(endpoint))
			*in = endpoint;

-		if (usb_endpoint_is_bulk_out(endpoint))
+		if (!*out && usb_endpoint_is_bulk_out(endpoint))
			*out = endpoint;
+
+		/* use first bulk endpoint for in and out */
+		if (*in && *out)
+			return 0;
	}
+
+	return -ENODEV;
 }

 static int kvaser_usb_probe(struct usb_interface *intf,
@@ -1576,8 +1582,8 @@ static int kvaser_usb_probe(struct usb_interface *intf,
	if (!dev)
		return -ENOMEM;

-	kvaser_usb_get_endpoints(intf, &dev->bulk_in, &dev->bulk_out);
-	if (!dev->bulk_in || !dev->bulk_out) {
+	err = kvaser_usb_get_endpoints(intf, &dev->bulk_in, &dev->bulk_out);
+	if (err) {
		dev_err(&intf->dev, "Cannot get usb endpoint(s)");
		return err;
	}

@@ -244,25 +244,33 @@ static int bgmac_dma_rx_skb_for_slot(struct bgmac *bgmac,
				     struct bgmac_slot_info *slot)
 {
	struct device *dma_dev = bgmac->core->dma_dev;
+	struct sk_buff *skb;
+	dma_addr_t dma_addr;
	struct bgmac_rx_header *rx;

	/* Alloc skb */
-	slot->skb = netdev_alloc_skb(bgmac->net_dev, BGMAC_RX_BUF_SIZE);
-	if (!slot->skb)
+	skb = netdev_alloc_skb(bgmac->net_dev, BGMAC_RX_BUF_SIZE);
+	if (!skb)
		return -ENOMEM;

	/* Poison - if everything goes fine, hardware will overwrite it */
-	rx = (struct bgmac_rx_header *)slot->skb->data;
+	rx = (struct bgmac_rx_header *)skb->data;
	rx->len = cpu_to_le16(0xdead);
	rx->flags = cpu_to_le16(0xbeef);

	/* Map skb for the DMA */
-	slot->dma_addr = dma_map_single(dma_dev, slot->skb->data,
-					BGMAC_RX_BUF_SIZE, DMA_FROM_DEVICE);
-	if (dma_mapping_error(dma_dev, slot->dma_addr)) {
+	dma_addr = dma_map_single(dma_dev, skb->data,
+				  BGMAC_RX_BUF_SIZE, DMA_FROM_DEVICE);
+	if (dma_mapping_error(dma_dev, dma_addr)) {
		bgmac_err(bgmac, "DMA mapping error\n");
+		dev_kfree_skb(skb);
		return -ENOMEM;
	}

+	/* Update the slot */
+	slot->skb = skb;
+	slot->dma_addr = dma_addr;
+
	if (slot->dma_addr & 0xC0000000)
		bgmac_warn(bgmac, "DMA address using 0xC0000000 bit(s), it may need translation trick\n");

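The bgmac fix above is an instance of a common error-handling pattern: stage the new buffer in local variables and write it into the ring slot only after allocation and DMA mapping have both succeeded, so a failure leaves the slot's previous, still-valid skb untouched. A minimal stand-alone sketch of the pattern, with hypothetical names rather than the driver's API:

```c
#include <stdlib.h>

struct slot {
    void *buf;          /* currently committed buffer */
    unsigned long addr; /* currently committed "DMA" cookie */
};

/* Pretend DMA mapping: fails on request, otherwise returns a nonzero cookie. */
static unsigned long map_buf(void *buf, int fail_map)
{
    return fail_map ? 0 : (unsigned long)buf;
}

/* Refill one slot.  Everything is staged in locals; the slot is updated
 * only after both steps succeed, so on error it keeps its old buffer. */
static int refill_slot(struct slot *s, int fail_map)
{
    void *buf;
    unsigned long addr;

    buf = malloc(64);
    if (!buf)
        return -1;

    addr = map_buf(buf, fail_map);
    if (!addr) {
        free(buf);      /* clean up the staged buffer */
        return -1;      /* slot left untouched */
    }

    s->buf = buf;       /* commit only now */
    s->addr = addr;
    return 0;
}
```

The earlier driver code wrote `slot->skb` and `slot->dma_addr` before the mapping could fail, which is exactly what the commit removes.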
@@ -2545,10 +2545,6 @@ int bnx2x_nic_load(struct bnx2x *bp, int load_mode)
		}
	}

-	/* Allocated memory for FW statistics */
-	if (bnx2x_alloc_fw_stats_mem(bp))
-		LOAD_ERROR_EXIT(bp, load_error0);
-
	/* need to be done after alloc mem, since it's self adjusting to amount
	 * of memory available for RSS queues
	 */
@@ -2558,6 +2554,10 @@ int bnx2x_nic_load(struct bnx2x *bp, int load_mode)
		LOAD_ERROR_EXIT(bp, load_error0);
	}

+	/* Allocated memory for FW statistics */
+	if (bnx2x_alloc_fw_stats_mem(bp))
+		LOAD_ERROR_EXIT(bp, load_error0);
+
	/* request pf to initialize status blocks */
	if (IS_VF(bp)) {
		rc = bnx2x_vfpf_init(bp);
@@ -2812,8 +2812,8 @@ load_error1:
	if (IS_PF(bp))
		bnx2x_clear_pf_load(bp);
 load_error0:
-	bnx2x_free_fp_mem(bp);
	bnx2x_free_fw_stats_mem(bp);
+	bnx2x_free_fp_mem(bp);
	bnx2x_free_mem(bp);

	return rc;
@@ -2018,6 +2018,8 @@ failed:

 void bnx2x_iov_remove_one(struct bnx2x *bp)
 {
+	int vf_idx;
+
	/* if SRIOV is not enabled there's nothing to do */
	if (!IS_SRIOV(bp))
		return;
@@ -2026,6 +2028,18 @@ void bnx2x_iov_remove_one(struct bnx2x *bp)
	pci_disable_sriov(bp->pdev);
	DP(BNX2X_MSG_IOV, "sriov disabled\n");

+	/* disable access to all VFs */
+	for (vf_idx = 0; vf_idx < bp->vfdb->sriov.total; vf_idx++) {
+		bnx2x_pretend_func(bp,
+				   HW_VF_HANDLE(bp,
+						bp->vfdb->sriov.first_vf_in_pf +
+						vf_idx));
+		DP(BNX2X_MSG_IOV, "disabling internal access for vf %d\n",
+		   bp->vfdb->sriov.first_vf_in_pf + vf_idx);
+		bnx2x_vf_enable_internal(bp, 0);
+		bnx2x_pretend_func(bp, BP_ABS_FUNC(bp));
+	}
+
	/* free vf database */
	__bnx2x_iov_free_vfdb(bp);
 }
@@ -3197,7 +3211,7 @@ int bnx2x_enable_sriov(struct bnx2x *bp)
	 * the "acquire" messages to appear on the VF PF channel.
	 */
	DP(BNX2X_MSG_IOV, "about to call enable sriov\n");
-	pci_disable_sriov(bp->pdev);
+	bnx2x_disable_sriov(bp);
	rc = pci_enable_sriov(bp->pdev, req_vfs);
	if (rc) {
		BNX2X_ERR("pci_enable_sriov failed with %d\n", rc);
@@ -1599,7 +1599,8 @@ static void write_ofld_wr(struct adapter *adap, struct sk_buff *skb,
	flits = skb_transport_offset(skb) / 8;
	sgp = ndesc == 1 ? (struct sg_ent *)&d->flit[flits] : sgl;
	sgl_flits = make_sgl(skb, sgp, skb_transport_header(skb),
-			     skb->tail - skb->transport_header,
+			     skb_tail_pointer(skb) -
+			     skb_transport_header(skb),
			     adap->pdev);
	if (need_skb_unmap()) {
		setup_deferred_unmapping(skb, adap->pdev, sgp, sgl_flits);
@@ -696,6 +696,15 @@ static inline int qnq_async_evt_rcvd(struct be_adapter *adapter)
	return adapter->flags & BE_FLAGS_QNQ_ASYNC_EVT_RCVD;
 }

+static inline int fw_major_num(const char *fw_ver)
+{
+	int fw_major = 0;
+
+	sscanf(fw_ver, "%d.", &fw_major);
+
+	return fw_major;
+}
+
 extern void be_cq_notify(struct be_adapter *adapter, u16 qid, bool arm,
			 u16 num_popped);
 extern void be_link_status_update(struct be_adapter *adapter, u8 link_status);
@@ -3247,6 +3247,12 @@ static int be_setup(struct be_adapter *adapter)

	be_cmd_get_fw_ver(adapter, adapter->fw_ver, adapter->fw_on_flash);

+	if (BE2_chip(adapter) && fw_major_num(adapter->fw_ver) < 4) {
+		dev_err(dev, "Firmware on card is old(%s), IRQs may not work.",
+			adapter->fw_ver);
+		dev_err(dev, "Please upgrade firmware to version >= 4.0\n");
+	}
+
	if (adapter->vlans_added)
		be_vid_config(adapter);

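The `fw_major_num()` helper added above parses the leading integer of a dotted firmware version string such as "4.2.220.0" with sscanf(); BE2 chips reporting a major version below 4 then get the IRQ warning. The helper has no driver dependencies, so it can be lifted out and exercised directly:

```c
#include <stdio.h>

/* Extract the major number from a dotted version string such as
 * "4.2.220.0".  If the string does not start with an integer, sscanf()
 * matches nothing and fw_major keeps its initial value of 0. */
static int fw_major_num(const char *fw_ver)
{
    int fw_major = 0;

    sscanf(fw_ver, "%d.", &fw_major);

    return fw_major;
}
```

The `< 4` comparison in be_setup() relies on exactly this behavior: an unparsable version string yields 0 and is treated as old firmware.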
@@ -263,7 +263,9 @@ static inline void mal_schedule_poll(struct mal_instance *mal)
 {
	if (likely(napi_schedule_prep(&mal->napi))) {
		MAL_DBG2(mal, "schedule_poll" NL);
+		spin_lock(&mal->lock);
		mal_disable_eob_irq(mal);
+		spin_unlock(&mal->lock);
		__napi_schedule(&mal->napi);
	} else
		MAL_DBG2(mal, "already in poll" NL);
@@ -442,15 +444,13 @@ static int mal_poll(struct napi_struct *napi, int budget)
		if (unlikely(mc->ops->peek_rx(mc->dev) ||
			     test_bit(MAL_COMMAC_RX_STOPPED, &mc->flags))) {
			MAL_DBG2(mal, "rotting packet" NL);
-			if (napi_reschedule(napi))
-				mal_disable_eob_irq(mal);
-			else
-				MAL_DBG2(mal, "already in poll list" NL);
-
-			if (budget > 0)
-				goto again;
-			else
+			if (!napi_reschedule(napi))
				goto more_work;
+
+			spin_lock_irqsave(&mal->lock, flags);
+			mal_disable_eob_irq(mal);
+			spin_unlock_irqrestore(&mal->lock, flags);
+			goto again;
		}
		mc->ops->poll_tx(mc->dev);
	}
@@ -1691,7 +1691,7 @@ static void mlx4_master_deactivate_admin_state(struct mlx4_priv *priv, int slave
		vp_oper->vlan_idx = NO_INDX;
	}
	if (NO_INDX != vp_oper->mac_idx) {
-		__mlx4_unregister_mac(&priv->dev, port, vp_oper->mac_idx);
+		__mlx4_unregister_mac(&priv->dev, port, vp_oper->state.mac);
		vp_oper->mac_idx = NO_INDX;
	}
 }

@@ -2276,9 +2276,9 @@ int qlcnic_83xx_get_nic_info(struct qlcnic_adapter *adapter,
		temp = (cmd.rsp.arg[8] & 0x7FFE0000) >> 17;
		npar_info->max_linkspeed_reg_offset = temp;
	}
-	if (npar_info->capabilities & QLCNIC_FW_CAPABILITY_MORE_CAPS)
-		memcpy(ahw->extra_capability, &cmd.rsp.arg[16],
-		       sizeof(ahw->extra_capability));
+
+	memcpy(ahw->extra_capability, &cmd.rsp.arg[16],
+	       sizeof(ahw->extra_capability));

 out:
	qlcnic_free_mbx_args(&cmd);
@@ -785,8 +785,6 @@ void qlcnic_82xx_config_intr_coalesce(struct qlcnic_adapter *adapter)

 #define QLCNIC_ENABLE_IPV4_LRO		1
 #define QLCNIC_ENABLE_IPV6_LRO		2
-#define QLCNIC_NO_DEST_IPV4_CHECK	(1 << 8)
-#define QLCNIC_NO_DEST_IPV6_CHECK	(2 << 8)

 int qlcnic_82xx_config_hw_lro(struct qlcnic_adapter *adapter, int enable)
 {
@@ -806,11 +804,10 @@ int qlcnic_82xx_config_hw_lro(struct qlcnic_adapter *adapter, int enable)

	word = 0;
	if (enable) {
-		word = QLCNIC_ENABLE_IPV4_LRO | QLCNIC_NO_DEST_IPV4_CHECK;
+		word = QLCNIC_ENABLE_IPV4_LRO;
		if (adapter->ahw->extra_capability[0] &
		    QLCNIC_FW_CAP2_HW_LRO_IPV6)
-			word |= QLCNIC_ENABLE_IPV6_LRO |
-				QLCNIC_NO_DEST_IPV6_CHECK;
+			word |= QLCNIC_ENABLE_IPV6_LRO;
	}

	req.words[0] = cpu_to_le64(word);
@@ -1131,7 +1131,10 @@ qlcnic_initialize_nic(struct qlcnic_adapter *adapter)
		if (err == -EIO)
			return err;
		adapter->ahw->extra_capability[0] = temp;
-	}
+	} else {
+		adapter->ahw->extra_capability[0] = 0;
+	}

	adapter->ahw->max_mac_filters = nic_info.max_mac_filters;
	adapter->ahw->max_mtu = nic_info.max_mtu;

@@ -2159,8 +2162,7 @@ void qlcnic_set_drv_version(struct qlcnic_adapter *adapter)
	else if (qlcnic_83xx_check(adapter))
		fw_cmd = QLCNIC_CMD_83XX_SET_DRV_VER;

-	if ((ahw->capabilities & QLCNIC_FW_CAPABILITY_MORE_CAPS) &&
-	    (ahw->extra_capability[0] & QLCNIC_FW_CAPABILITY_SET_DRV_VER))
+	if (ahw->extra_capability[0] & QLCNIC_FW_CAPABILITY_SET_DRV_VER)
		qlcnic_fw_cmd_set_drv_version(adapter, fw_cmd);
 }

@@ -310,6 +310,7 @@ static ssize_t store_enabled(struct netconsole_target *nt,
			     const char *buf,
			     size_t count)
 {
+	unsigned long flags;
	int enabled;
	int err;

@@ -324,9 +325,7 @@ static ssize_t store_enabled(struct netconsole_target *nt,
		return -EINVAL;
	}

-	mutex_lock(&nt->mutex);
	if (enabled) {	/* 1 */
-
		/*
		 * Skip netpoll_parse_options() -- all the attributes are
		 * already configured via configfs. Just print them out.
@@ -334,19 +333,22 @@ static ssize_t store_enabled(struct netconsole_target *nt,
		netpoll_print_options(&nt->np);

		err = netpoll_setup(&nt->np);
-		if (err) {
-			mutex_unlock(&nt->mutex);
+		if (err)
			return err;
-		}

		printk(KERN_INFO "netconsole: network logging started\n");

	} else {	/* 0 */
+		/* We need to disable the netconsole before cleaning it up
+		 * otherwise we might end up in write_msg() with
+		 * nt->np.dev == NULL and nt->enabled == 1
+		 */
+		spin_lock_irqsave(&target_list_lock, flags);
+		nt->enabled = 0;
+		spin_unlock_irqrestore(&target_list_lock, flags);
		netpoll_cleanup(&nt->np);
	}

	nt->enabled = enabled;
-	mutex_unlock(&nt->mutex);

	return strnlen(buf, count);
 }
@@ -563,8 +565,10 @@ static ssize_t netconsole_target_attr_store(struct config_item *item,
	struct netconsole_target_attr *na =
		container_of(attr, struct netconsole_target_attr, attr);

+	mutex_lock(&nt->mutex);
	if (na->store)
		ret = na->store(nt, buf, count);
+	mutex_unlock(&nt->mutex);

	return ret;
 }

@@ -78,7 +78,6 @@
 #define AX_MEDIUM_STATUS_MODE		0x22
	#define AX_MEDIUM_GIGAMODE	0x01
	#define AX_MEDIUM_FULL_DUPLEX	0x02
-	#define AX_MEDIUM_ALWAYS_ONE	0x04
	#define AX_MEDIUM_EN_125MHZ	0x08
	#define AX_MEDIUM_RXFLOW_CTRLEN	0x10
	#define AX_MEDIUM_TXFLOW_CTRLEN	0x20
@@ -1065,8 +1064,8 @@ static int ax88179_bind(struct usbnet *dev, struct usb_interface *intf)

	/* Configure default medium type => giga */
	*tmp16 = AX_MEDIUM_RECEIVE_EN | AX_MEDIUM_TXFLOW_CTRLEN |
-		 AX_MEDIUM_RXFLOW_CTRLEN | AX_MEDIUM_ALWAYS_ONE |
-		 AX_MEDIUM_FULL_DUPLEX | AX_MEDIUM_GIGAMODE;
+		 AX_MEDIUM_RXFLOW_CTRLEN | AX_MEDIUM_FULL_DUPLEX |
+		 AX_MEDIUM_GIGAMODE;
	ax88179_write_cmd(dev, AX_ACCESS_MAC, AX_MEDIUM_STATUS_MODE,
			  2, 2, tmp16);

@@ -1225,7 +1224,7 @@ static int ax88179_link_reset(struct usbnet *dev)
	}

	mode = AX_MEDIUM_RECEIVE_EN | AX_MEDIUM_TXFLOW_CTRLEN |
-	       AX_MEDIUM_RXFLOW_CTRLEN | AX_MEDIUM_ALWAYS_ONE;
+	       AX_MEDIUM_RXFLOW_CTRLEN;

	ax88179_read_cmd(dev, AX_ACCESS_MAC, PHYSICAL_LINK_STATUS,
			 1, 1, &link_sts);
@@ -1339,8 +1338,8 @@ static int ax88179_reset(struct usbnet *dev)

	/* Configure default medium type => giga */
	*tmp16 = AX_MEDIUM_RECEIVE_EN | AX_MEDIUM_TXFLOW_CTRLEN |
-		 AX_MEDIUM_RXFLOW_CTRLEN | AX_MEDIUM_ALWAYS_ONE |
-		 AX_MEDIUM_FULL_DUPLEX | AX_MEDIUM_GIGAMODE;
+		 AX_MEDIUM_RXFLOW_CTRLEN | AX_MEDIUM_FULL_DUPLEX |
+		 AX_MEDIUM_GIGAMODE;
	ax88179_write_cmd(dev, AX_ACCESS_MAC, AX_MEDIUM_STATUS_MODE,
			  2, 2, tmp16);

@@ -1118,11 +1118,6 @@ static int virtnet_cpu_callback(struct notifier_block *nfb,
 {
	struct virtnet_info *vi = container_of(nfb, struct virtnet_info, nb);

-	mutex_lock(&vi->config_lock);
-
-	if (!vi->config_enable)
-		goto done;
-
	switch(action & ~CPU_TASKS_FROZEN) {
	case CPU_ONLINE:
	case CPU_DOWN_FAILED:
@@ -1136,8 +1131,6 @@ static int virtnet_cpu_callback(struct notifier_block *nfb,
		break;
	}

-done:
-	mutex_unlock(&vi->config_lock);
	return NOTIFY_OK;
 }

@@ -1699,6 +1692,8 @@ static int virtnet_freeze(struct virtio_device *vdev)
	struct virtnet_info *vi = vdev->priv;
	int i;

+	unregister_hotcpu_notifier(&vi->nb);
+
	/* Prevent config work handler from accessing the device */
	mutex_lock(&vi->config_lock);
	vi->config_enable = false;
@@ -1747,6 +1742,10 @@ static int virtnet_restore(struct virtio_device *vdev)
	virtnet_set_queues(vi, vi->curr_queue_pairs);
	rtnl_unlock();

+	err = register_hotcpu_notifier(&vi->nb);
+	if (err)
+		return err;
+
	return 0;
 }
 #endif

@@ -148,10 +148,6 @@ static int enslave( struct net_device *, struct net_device * );
 static int emancipate( struct net_device * );
 #endif
 
-#ifdef __i386__
-#define ASM_CRC 1
-#endif
-
 static const char version[] =
 	"Granch SBNI12 driver ver 5.0.1 Jun 22 2001 Denis I.Timofeev.\n";
 
@@ -1551,88 +1547,6 @@ __setup( "sbni=", sbni_setup );
 
 /* -------------------------------------------------------------------------- */
 
-#ifdef ASM_CRC
-
-static u32
-calc_crc32( u32 crc, u8 *p, u32 len )
-{
-	register u32 _crc;
-	_crc = crc;
-
-	__asm__ __volatile__ (
-		"xorl %%ebx, %%ebx\n"
-		"movl %2, %%esi\n"
-		"movl %3, %%ecx\n"
-		"movl $crc32tab, %%edi\n"
-		"shrl $2, %%ecx\n"
-		"jz 1f\n"
-
-		".align 4\n"
-	"0:\n"
-		"movb %%al, %%bl\n"
-		"movl (%%esi), %%edx\n"
-		"shrl $8, %%eax\n"
-		"xorb %%dl, %%bl\n"
-		"shrl $8, %%edx\n"
-		"xorl (%%edi,%%ebx,4), %%eax\n"
-
-		"movb %%al, %%bl\n"
-		"shrl $8, %%eax\n"
-		"xorb %%dl, %%bl\n"
-		"shrl $8, %%edx\n"
-		"xorl (%%edi,%%ebx,4), %%eax\n"
-
-		"movb %%al, %%bl\n"
-		"shrl $8, %%eax\n"
-		"xorb %%dl, %%bl\n"
-		"movb %%dh, %%dl\n"
-		"xorl (%%edi,%%ebx,4), %%eax\n"
-
-		"movb %%al, %%bl\n"
-		"shrl $8, %%eax\n"
-		"xorb %%dl, %%bl\n"
-		"addl $4, %%esi\n"
-		"xorl (%%edi,%%ebx,4), %%eax\n"
-
-		"decl %%ecx\n"
-		"jnz 0b\n"
-
-	"1:\n"
-		"movl %3, %%ecx\n"
-		"andl $3, %%ecx\n"
-		"jz 2f\n"
-
-		"movb %%al, %%bl\n"
-		"shrl $8, %%eax\n"
-		"xorb (%%esi), %%bl\n"
-		"xorl (%%edi,%%ebx,4), %%eax\n"
-
-		"decl %%ecx\n"
-		"jz 2f\n"
-
-		"movb %%al, %%bl\n"
-		"shrl $8, %%eax\n"
-		"xorb 1(%%esi), %%bl\n"
-		"xorl (%%edi,%%ebx,4), %%eax\n"
-
-		"decl %%ecx\n"
-		"jz 2f\n"
-
-		"movb %%al, %%bl\n"
-		"shrl $8, %%eax\n"
-		"xorb 2(%%esi), %%bl\n"
-		"xorl (%%edi,%%ebx,4), %%eax\n"
-	"2:\n"
-		: "=a" (_crc)
-		: "0" (_crc), "g" (p), "g" (len)
-		: "bx", "cx", "dx", "si", "di"
-	);
-
-	return _crc;
-}
-
-#else	/* ASM_CRC */
-
 static u32
 calc_crc32( u32 crc, u8 *p, u32 len )
 {
@@ -1642,9 +1556,6 @@ calc_crc32( u32 crc, u8 *p, u32 len )
 	return crc;
 }
 
-#endif	/* ASM_CRC */
-
-
 static u32 crc32tab[] __attribute__ ((aligned(8))) = {
 	0xD202EF8D, 0xA505DF1B, 0x3C0C8EA1, 0x4B0BBE37,
 	0xD56F2B94, 0xA2681B02, 0x3B614AB8, 0x4C667A2E,
@@ -163,6 +163,7 @@ struct xenvif {
 	unsigned long credit_usec;
 	unsigned long remaining_credit;
 	struct timer_list credit_timeout;
+	u64 credit_window_start;
 
 	/* Statistics */
 	unsigned long rx_gso_checksum_fixup;
@@ -312,8 +312,7 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	vif->credit_bytes = vif->remaining_credit = ~0UL;
 	vif->credit_usec = 0UL;
 	init_timer(&vif->credit_timeout);
-	/* Initialize 'expires' now: it's used to track the credit window. */
-	vif->credit_timeout.expires = jiffies;
+	vif->credit_window_start = get_jiffies_64();
 
 	dev->netdev_ops = &xenvif_netdev_ops;
 	dev->hw_features = NETIF_F_SG | NETIF_F_IP_CSUM | NETIF_F_TSO;
@@ -1185,9 +1185,8 @@ out:
 
 static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)
 {
-	unsigned long now = jiffies;
-	unsigned long next_credit =
-		vif->credit_timeout.expires +
+	u64 now = get_jiffies_64();
+	u64 next_credit = vif->credit_window_start +
 		msecs_to_jiffies(vif->credit_usec / 1000);
 
 	/* Timer could already be pending in rare cases. */
@@ -1195,8 +1194,8 @@ static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)
 		return true;
 
 	/* Passed the point where we can replenish credit? */
-	if (time_after_eq(now, next_credit)) {
-		vif->credit_timeout.expires = now;
+	if (time_after_eq64(now, next_credit)) {
+		vif->credit_window_start = now;
 		tx_add_credit(vif);
 	}
 
@@ -1208,6 +1207,7 @@ static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)
 			tx_credit_callback;
 		mod_timer(&vif->credit_timeout,
 			  next_credit);
+		vif->credit_window_start = next_credit;
 
 		return true;
 	}
@@ -24,7 +24,8 @@ struct netpoll {
 	struct net_device *dev;
 	char dev_name[IFNAMSIZ];
 	const char *name;
-	void (*rx_hook)(struct netpoll *, int, char *, int);
+	void (*rx_skb_hook)(struct netpoll *np, int source, struct sk_buff *skb,
+			    int offset, int len);
 
 	union inet_addr local_ip, remote_ip;
 	bool ipv6;
@@ -41,7 +42,7 @@ struct netpoll_info {
 	unsigned long rx_flags;
 	spinlock_t rx_lock;
 	struct semaphore dev_lock;
-	struct list_head rx_np; /* netpolls that registered an rx_hook */
+	struct list_head rx_np; /* netpolls that registered an rx_skb_hook */
 
 	struct sk_buff_head neigh_tx; /* list of neigh requests to reply to */
 	struct sk_buff_head txq;
@@ -165,6 +165,7 @@ static inline struct inet6_dev *ip6_dst_idev(struct dst_entry *dst)
 static inline void rt6_clean_expires(struct rt6_info *rt)
 {
 	rt->rt6i_flags &= ~RTF_EXPIRES;
+	rt->dst.expires = 0;
 }
 
 static inline void rt6_set_expires(struct rt6_info *rt, unsigned long expires)
@@ -64,7 +64,7 @@ netdev_tx_t br_dev_xmit(struct sk_buff *skb, struct net_device *dev)
 			br_flood_deliver(br, skb, false);
 			goto out;
 		}
-		if (br_multicast_rcv(br, NULL, skb)) {
+		if (br_multicast_rcv(br, NULL, skb, vid)) {
 			kfree_skb(skb);
 			goto out;
 		}
@@ -80,7 +80,7 @@ int br_handle_frame_finish(struct sk_buff *skb)
 		br_fdb_update(br, p, eth_hdr(skb)->h_source, vid);
 
 	if (!is_broadcast_ether_addr(dest) && is_multicast_ether_addr(dest) &&
-	    br_multicast_rcv(br, p, skb))
+	    br_multicast_rcv(br, p, skb, vid))
 		goto drop;
 
 	if (p->state == BR_STATE_LEARNING)
@@ -947,7 +947,8 @@ void br_multicast_disable_port(struct net_bridge_port *port)
 
 static int br_ip4_multicast_igmp3_report(struct net_bridge *br,
 					 struct net_bridge_port *port,
-					 struct sk_buff *skb)
+					 struct sk_buff *skb,
+					 u16 vid)
 {
 	struct igmpv3_report *ih;
 	struct igmpv3_grec *grec;
@@ -957,12 +958,10 @@ static int br_ip4_multicast_igmp3_report(struct net_bridge *br,
 	int type;
 	int err = 0;
 	__be32 group;
-	u16 vid = 0;
 
 	if (!pskb_may_pull(skb, sizeof(*ih)))
 		return -EINVAL;
 
-	br_vlan_get_tag(skb, &vid);
 	ih = igmpv3_report_hdr(skb);
 	num = ntohs(ih->ngrec);
 	len = sizeof(*ih);
@@ -1005,7 +1004,8 @@ static int br_ip4_multicast_igmp3_report(struct net_bridge *br,
 #if IS_ENABLED(CONFIG_IPV6)
 static int br_ip6_multicast_mld2_report(struct net_bridge *br,
 					struct net_bridge_port *port,
-					struct sk_buff *skb)
+					struct sk_buff *skb,
+					u16 vid)
 {
 	struct icmp6hdr *icmp6h;
 	struct mld2_grec *grec;
@@ -1013,12 +1013,10 @@ static int br_ip6_multicast_mld2_report(struct net_bridge *br,
 	int len;
 	int num;
 	int err = 0;
-	u16 vid = 0;
 
 	if (!pskb_may_pull(skb, sizeof(*icmp6h)))
 		return -EINVAL;
 
-	br_vlan_get_tag(skb, &vid);
 	icmp6h = icmp6_hdr(skb);
 	num = ntohs(icmp6h->icmp6_dataun.un_data16[1]);
 	len = sizeof(*icmp6h);
@@ -1141,7 +1139,8 @@ static void br_multicast_query_received(struct net_bridge *br,
 
 static int br_ip4_multicast_query(struct net_bridge *br,
 				  struct net_bridge_port *port,
-				  struct sk_buff *skb)
+				  struct sk_buff *skb,
+				  u16 vid)
 {
 	const struct iphdr *iph = ip_hdr(skb);
 	struct igmphdr *ih = igmp_hdr(skb);
@@ -1153,7 +1152,6 @@ static int br_ip4_multicast_query(struct net_bridge *br,
 	unsigned long now = jiffies;
 	__be32 group;
 	int err = 0;
-	u16 vid = 0;
 
 	spin_lock(&br->multicast_lock);
 	if (!netif_running(br->dev) ||
@@ -1189,7 +1187,6 @@ static int br_ip4_multicast_query(struct net_bridge *br,
 	if (!group)
 		goto out;
 
-	br_vlan_get_tag(skb, &vid);
 	mp = br_mdb_ip4_get(mlock_dereference(br->mdb, br), group, vid);
 	if (!mp)
 		goto out;
@@ -1219,7 +1216,8 @@ out:
 #if IS_ENABLED(CONFIG_IPV6)
 static int br_ip6_multicast_query(struct net_bridge *br,
 				  struct net_bridge_port *port,
-				  struct sk_buff *skb)
+				  struct sk_buff *skb,
+				  u16 vid)
 {
 	const struct ipv6hdr *ip6h = ipv6_hdr(skb);
 	struct mld_msg *mld;
@@ -1231,7 +1229,6 @@ static int br_ip6_multicast_query(struct net_bridge *br,
 	unsigned long now = jiffies;
 	const struct in6_addr *group = NULL;
 	int err = 0;
-	u16 vid = 0;
 
 	spin_lock(&br->multicast_lock);
 	if (!netif_running(br->dev) ||
@@ -1265,7 +1262,6 @@ static int br_ip6_multicast_query(struct net_bridge *br,
 	if (!group)
 		goto out;
 
-	br_vlan_get_tag(skb, &vid);
 	mp = br_mdb_ip6_get(mlock_dereference(br->mdb, br), group, vid);
 	if (!mp)
 		goto out;
@@ -1439,7 +1435,8 @@ static void br_ip6_multicast_leave_group(struct net_bridge *br,
 
 static int br_multicast_ipv4_rcv(struct net_bridge *br,
 				 struct net_bridge_port *port,
-				 struct sk_buff *skb)
+				 struct sk_buff *skb,
+				 u16 vid)
 {
 	struct sk_buff *skb2 = skb;
 	const struct iphdr *iph;
@@ -1447,7 +1444,6 @@ static int br_multicast_ipv4_rcv(struct net_bridge *br,
 	unsigned int len;
 	unsigned int offset;
 	int err;
-	u16 vid = 0;
 
 	/* We treat OOM as packet loss for now. */
 	if (!pskb_may_pull(skb, sizeof(*iph)))
@@ -1508,7 +1504,6 @@ static int br_multicast_ipv4_rcv(struct net_bridge *br,
 
 	err = 0;
 
-	br_vlan_get_tag(skb2, &vid);
 	BR_INPUT_SKB_CB(skb)->igmp = 1;
 	ih = igmp_hdr(skb2);
 
@@ -1519,10 +1514,10 @@ static int br_multicast_ipv4_rcv(struct net_bridge *br,
 		err = br_ip4_multicast_add_group(br, port, ih->group, vid);
 		break;
 	case IGMPV3_HOST_MEMBERSHIP_REPORT:
-		err = br_ip4_multicast_igmp3_report(br, port, skb2);
+		err = br_ip4_multicast_igmp3_report(br, port, skb2, vid);
 		break;
 	case IGMP_HOST_MEMBERSHIP_QUERY:
-		err = br_ip4_multicast_query(br, port, skb2);
+		err = br_ip4_multicast_query(br, port, skb2, vid);
 		break;
 	case IGMP_HOST_LEAVE_MESSAGE:
 		br_ip4_multicast_leave_group(br, port, ih->group, vid);
@@ -1540,7 +1535,8 @@ err_out:
 #if IS_ENABLED(CONFIG_IPV6)
 static int br_multicast_ipv6_rcv(struct net_bridge *br,
 				 struct net_bridge_port *port,
-				 struct sk_buff *skb)
+				 struct sk_buff *skb,
+				 u16 vid)
 {
 	struct sk_buff *skb2;
 	const struct ipv6hdr *ip6h;
@@ -1550,7 +1546,6 @@ static int br_multicast_ipv6_rcv(struct net_bridge *br,
 	unsigned int len;
 	int offset;
 	int err;
-	u16 vid = 0;
 
 	if (!pskb_may_pull(skb, sizeof(*ip6h)))
 		return -EINVAL;
@@ -1640,7 +1635,6 @@ static int br_multicast_ipv6_rcv(struct net_bridge *br,
 
 	err = 0;
 
-	br_vlan_get_tag(skb, &vid);
 	BR_INPUT_SKB_CB(skb)->igmp = 1;
 
 	switch (icmp6_type) {
@@ -1657,10 +1651,10 @@ static int br_multicast_ipv6_rcv(struct net_bridge *br,
 		break;
 	}
 	case ICMPV6_MLD2_REPORT:
-		err = br_ip6_multicast_mld2_report(br, port, skb2);
+		err = br_ip6_multicast_mld2_report(br, port, skb2, vid);
 		break;
 	case ICMPV6_MGM_QUERY:
-		err = br_ip6_multicast_query(br, port, skb2);
+		err = br_ip6_multicast_query(br, port, skb2, vid);
 		break;
 	case ICMPV6_MGM_REDUCTION:
 	{
@@ -1681,7 +1675,7 @@ out:
 #endif
 
 int br_multicast_rcv(struct net_bridge *br, struct net_bridge_port *port,
-		     struct sk_buff *skb)
+		     struct sk_buff *skb, u16 vid)
 {
 	BR_INPUT_SKB_CB(skb)->igmp = 0;
 	BR_INPUT_SKB_CB(skb)->mrouters_only = 0;
@@ -1691,10 +1685,10 @@ int br_multicast_rcv(struct net_bridge *br, struct net_bridge_port *port,
 
 	switch (skb->protocol) {
 	case htons(ETH_P_IP):
-		return br_multicast_ipv4_rcv(br, port, skb);
+		return br_multicast_ipv4_rcv(br, port, skb, vid);
 #if IS_ENABLED(CONFIG_IPV6)
 	case htons(ETH_P_IPV6):
-		return br_multicast_ipv6_rcv(br, port, skb);
+		return br_multicast_ipv6_rcv(br, port, skb, vid);
 #endif
 	}
 
@@ -451,7 +451,8 @@ extern int br_ioctl_deviceless_stub(struct net *net, unsigned int cmd, void __us
 extern unsigned int br_mdb_rehash_seq;
 extern int br_multicast_rcv(struct net_bridge *br,
 			    struct net_bridge_port *port,
-			    struct sk_buff *skb);
+			    struct sk_buff *skb,
+			    u16 vid);
 extern struct net_bridge_mdb_entry *br_mdb_get(struct net_bridge *br,
 					       struct sk_buff *skb, u16 vid);
 extern void br_multicast_add_port(struct net_bridge_port *port);
@@ -522,7 +523,8 @@ static inline bool br_multicast_querier_exists(struct net_bridge *br,
 #else
 static inline int br_multicast_rcv(struct net_bridge *br,
 				   struct net_bridge_port *port,
-				   struct sk_buff *skb)
+				   struct sk_buff *skb,
+				   u16 vid)
 {
 	return 0;
 }
@@ -181,6 +181,7 @@ static void ebt_ulog_packet(struct net *net, unsigned int hooknr,
 	ub->qlen++;
 
 	pm = nlmsg_data(nlh);
+	memset(pm, 0, sizeof(*pm));
 
 	/* Fill in the ulog data */
 	pm->version = EBT_ULOG_VERSION;
@@ -193,8 +194,6 @@ static void ebt_ulog_packet(struct net *net, unsigned int hooknr,
 	pm->hook = hooknr;
 	if (uloginfo->prefix != NULL)
 		strcpy(pm->prefix, uloginfo->prefix);
-	else
-		*(pm->prefix) = '\0';
 
 	if (in) {
 		strcpy(pm->physindev, in->name);
@@ -204,16 +203,14 @@ static void ebt_ulog_packet(struct net *net, unsigned int hooknr,
 			strcpy(pm->indev, br_port_get_rcu(in)->br->dev->name);
 		else
 			strcpy(pm->indev, in->name);
-	} else
-		pm->indev[0] = pm->physindev[0] = '\0';
+	}
 
 	if (out) {
 		/* If out exists, then out is a bridge port */
 		strcpy(pm->physoutdev, out->name);
 		/* rcu_read_lock()ed by nf_hook_slow */
 		strcpy(pm->outdev, br_port_get_rcu(out)->br->dev->name);
-	} else
-		pm->outdev[0] = pm->physoutdev[0] = '\0';
+	}
 
 	if (skb_copy_bits(skb, -ETH_HLEN, pm->data, copy_len) < 0)
 		BUG();
@@ -40,7 +40,7 @@ again:
 		struct iphdr _iph;
 ip:
 		iph = skb_header_pointer(skb, nhoff, sizeof(_iph), &_iph);
-		if (!iph)
+		if (!iph || iph->ihl < 5)
 			return false;
 
 		if (ip_is_fragment(iph))
@@ -636,8 +636,9 @@ static void netpoll_neigh_reply(struct sk_buff *skb, struct netpoll_info *npinfo
 
 			netpoll_send_skb(np, send_skb);
 
-			/* If there are several rx_hooks for the same address,
-			   we're fine by sending a single reply */
+			/* If there are several rx_skb_hooks for the same
+			 * address we're fine by sending a single reply
+			 */
 			break;
 		}
 		spin_unlock_irqrestore(&npinfo->rx_lock, flags);
@@ -719,8 +720,9 @@ static void netpoll_neigh_reply(struct sk_buff *skb, struct netpoll_info *npinfo
 
 			netpoll_send_skb(np, send_skb);
 
-			/* If there are several rx_hooks for the same address,
-			   we're fine by sending a single reply */
+			/* If there are several rx_skb_hooks for the same
+			 * address, we're fine by sending a single reply
+			 */
 			break;
 		}
 		spin_unlock_irqrestore(&npinfo->rx_lock, flags);
@@ -756,11 +758,12 @@ static bool pkt_is_ns(struct sk_buff *skb)
 
 int __netpoll_rx(struct sk_buff *skb, struct netpoll_info *npinfo)
 {
-	int proto, len, ulen;
-	int hits = 0;
+	int proto, len, ulen, data_len;
+	int hits = 0, offset;
 	const struct iphdr *iph;
 	struct udphdr *uh;
 	struct netpoll *np, *tmp;
+	uint16_t source;
 
 	if (list_empty(&npinfo->rx_np))
 		goto out;
@@ -820,7 +823,10 @@ int __netpoll_rx(struct sk_buff *skb, struct netpoll_info *npinfo)
 
 		len -= iph->ihl*4;
 		uh = (struct udphdr *)(((char *)iph) + iph->ihl*4);
+		offset = (unsigned char *)(uh + 1) - skb->data;
 		ulen = ntohs(uh->len);
+		data_len = skb->len - offset;
+		source = ntohs(uh->source);
 
 		if (ulen != len)
 			goto out;
@@ -834,9 +840,7 @@ int __netpoll_rx(struct sk_buff *skb, struct netpoll_info *npinfo)
 			if (np->local_port && np->local_port != ntohs(uh->dest))
 				continue;
 
-			np->rx_hook(np, ntohs(uh->source),
-				    (char *)(uh+1),
-				    ulen - sizeof(struct udphdr));
+			np->rx_skb_hook(np, source, skb, offset, data_len);
 			hits++;
 		}
 	} else {
@@ -859,7 +863,10 @@ int __netpoll_rx(struct sk_buff *skb, struct netpoll_info *npinfo)
 		if (!pskb_may_pull(skb, sizeof(struct udphdr)))
 			goto out;
 		uh = udp_hdr(skb);
+		offset = (unsigned char *)(uh + 1) - skb->data;
 		ulen = ntohs(uh->len);
+		data_len = skb->len - offset;
+		source = ntohs(uh->source);
 		if (ulen != skb->len)
 			goto out;
 		if (udp6_csum_init(skb, uh, IPPROTO_UDP))
@@ -872,9 +879,7 @@ int __netpoll_rx(struct sk_buff *skb, struct netpoll_info *npinfo)
 			if (np->local_port && np->local_port != ntohs(uh->dest))
 				continue;
 
-			np->rx_hook(np, ntohs(uh->source),
-				    (char *)(uh+1),
-				    ulen - sizeof(struct udphdr));
+			np->rx_skb_hook(np, source, skb, offset, data_len);
 			hits++;
 		}
 #endif
@@ -1062,7 +1067,7 @@ int __netpoll_setup(struct netpoll *np, struct net_device *ndev, gfp_t gfp)
 
 	npinfo->netpoll = np;
 
-	if (np->rx_hook) {
+	if (np->rx_skb_hook) {
 		spin_lock_irqsave(&npinfo->rx_lock, flags);
 		npinfo->rx_flags |= NETPOLL_RX_ENABLED;
 		list_add_tail(&np->rx, &npinfo->rx_np);
@@ -271,6 +271,11 @@ unsigned int arpt_do_table(struct sk_buff *skb,
 	local_bh_disable();
 	addend = xt_write_recseq_begin();
 	private = table->private;
+	/*
+	 * Ensure we load private-> members after we've fetched the base
+	 * pointer.
+	 */
+	smp_read_barrier_depends();
 	table_base = private->entries[smp_processor_id()];
 
 	e = get_entry(table_base, private->hook_entry[hook]);
@@ -327,6 +327,11 @@ ipt_do_table(struct sk_buff *skb,
 	addend = xt_write_recseq_begin();
 	private = table->private;
 	cpu        = smp_processor_id();
+	/*
+	 * Ensure we load private-> members after we've fetched the base
+	 * pointer.
+	 */
+	smp_read_barrier_depends();
 	table_base = private->entries[cpu];
 	jumpstack  = (struct ipt_entry **)private->jumpstack[cpu];
 	stackptr   = per_cpu_ptr(private->stackptr, cpu);
@@ -220,6 +220,7 @@ static void ipt_ulog_packet(struct net *net,
 	ub->qlen++;
 
 	pm = nlmsg_data(nlh);
+	memset(pm, 0, sizeof(*pm));
 
 	/* We might not have a timestamp, get one */
 	if (skb->tstamp.tv64 == 0)
@@ -238,8 +239,6 @@ static void ipt_ulog_packet(struct net *net,
 	}
 	else if (loginfo->prefix[0] != '\0')
 		strncpy(pm->prefix, loginfo->prefix, sizeof(pm->prefix));
-	else
-		*(pm->prefix) = '\0';
 
 	if (in && in->hard_header_len > 0 &&
 	    skb->mac_header != skb->network_header &&
@@ -251,13 +250,9 @@ static void ipt_ulog_packet(struct net *net,
 
 	if (in)
 		strncpy(pm->indev_name, in->name, sizeof(pm->indev_name));
-	else
-		pm->indev_name[0] = '\0';
 
 	if (out)
 		strncpy(pm->outdev_name, out->name, sizeof(pm->outdev_name));
-	else
-		pm->outdev_name[0] = '\0';
 
 	/* copy_len <= skb->len, so can't fail. */
 	if (skb_copy_bits(skb, 0, pm->payload, copy_len) < 0)
@@ -2856,7 +2856,8 @@ static inline bool tcp_ack_update_rtt(struct sock *sk, const int flag,
 	 * left edge of the send window.
 	 * See draft-ietf-tcplw-high-performance-00, section 3.3.
 	 */
-	if (seq_rtt < 0 && tp->rx_opt.saw_tstamp && tp->rx_opt.rcv_tsecr)
+	if (seq_rtt < 0 && tp->rx_opt.saw_tstamp && tp->rx_opt.rcv_tsecr &&
+	    flag & FLAG_ACKED)
 		seq_rtt = tcp_time_stamp - tp->rx_opt.rcv_tsecr;
 
 	if (seq_rtt < 0)
@@ -2871,14 +2872,19 @@ static inline bool tcp_ack_update_rtt(struct sock *sk, const int flag,
 }
 
 /* Compute time elapsed between (last) SYNACK and the ACK completing 3WHS. */
-static void tcp_synack_rtt_meas(struct sock *sk, struct request_sock *req)
+static void tcp_synack_rtt_meas(struct sock *sk, const u32 synack_stamp)
 {
 	struct tcp_sock *tp = tcp_sk(sk);
 	s32 seq_rtt = -1;
 
-	if (tp->lsndtime && !tp->total_retrans)
-		seq_rtt = tcp_time_stamp - tp->lsndtime;
-	tcp_ack_update_rtt(sk, FLAG_SYN_ACKED, seq_rtt, -1);
+	if (synack_stamp && !tp->total_retrans)
+		seq_rtt = tcp_time_stamp - synack_stamp;
+
+	/* If the ACK acks both the SYNACK and the (Fast Open'd) data packets
+	 * sent in SYN_RECV, SYNACK RTT is the smooth RTT computed in tcp_ack()
+	 */
+	if (!tp->srtt)
+		tcp_ack_update_rtt(sk, FLAG_SYN_ACKED, seq_rtt, -1);
 }
 
 static void tcp_cong_avoid(struct sock *sk, u32 ack, u32 in_flight)
@@ -2981,6 +2987,7 @@ static int tcp_clean_rtx_queue(struct sock *sk, int prior_fackets,
 	s32 seq_rtt = -1;
 	s32 ca_seq_rtt = -1;
 	ktime_t last_ackt = net_invalid_timestamp();
+	bool rtt_update;
 
 	while ((skb = tcp_write_queue_head(sk)) && skb != tcp_send_head(sk)) {
 		struct tcp_skb_cb *scb = TCP_SKB_CB(skb);
@@ -3057,14 +3064,13 @@ static int tcp_clean_rtx_queue(struct sock *sk, int prior_fackets,
 	if (skb && (TCP_SKB_CB(skb)->sacked & TCPCB_SACKED_ACKED))
 		flag |= FLAG_SACK_RENEGING;
 
-	if (tcp_ack_update_rtt(sk, flag, seq_rtt, sack_rtt) ||
-	    (flag & FLAG_ACKED))
-		tcp_rearm_rto(sk);
+	rtt_update = tcp_ack_update_rtt(sk, flag, seq_rtt, sack_rtt);
 
 	if (flag & FLAG_ACKED) {
 		const struct tcp_congestion_ops *ca_ops
 			= inet_csk(sk)->icsk_ca_ops;
 
+		tcp_rearm_rto(sk);
 		if (unlikely(icsk->icsk_mtup.probe_size &&
 			     !after(tp->mtu_probe.probe_seq_end, tp->snd_una))) {
 			tcp_mtup_probe_success(sk);
@@ -3103,6 +3109,13 @@ static int tcp_clean_rtx_queue(struct sock *sk, int prior_fackets,
 
 			ca_ops->pkts_acked(sk, pkts_acked, rtt_us);
 		}
+	} else if (skb && rtt_update && sack_rtt >= 0 &&
+		   sack_rtt > (s32)(now - TCP_SKB_CB(skb)->when)) {
+		/* Do not re-arm RTO if the sack RTT is measured from data sent
+		 * after when the head was last (re)transmitted. Otherwise the
+		 * timeout may continue to extend in loss recovery.
+		 */
+		tcp_rearm_rto(sk);
 	}
 
 #if FASTRETRANS_DEBUG > 0
@@ -5587,6 +5600,7 @@ int tcp_rcv_state_process(struct sock *sk, struct sk_buff *skb,
 	struct request_sock *req;
 	int queued = 0;
 	bool acceptable;
+	u32 synack_stamp;
 
 	tp->rx_opt.saw_tstamp = 0;
 
@@ -5669,9 +5683,11 @@ int tcp_rcv_state_process(struct sock *sk, struct sk_buff *skb,
 		 * so release it.
 		 */
 		if (req) {
+			synack_stamp = tcp_rsk(req)->snt_synack;
 			tp->total_retrans = req->num_retrans;
 			reqsk_fastopen_remove(sk, req, false);
 		} else {
+			synack_stamp = tp->lsndtime;
 			/* Make sure socket is routed, for correct metrics. */
 			icsk->icsk_af_ops->rebuild_header(sk);
 			tcp_init_congestion_control(sk);
@@ -5694,7 +5710,7 @@ int tcp_rcv_state_process(struct sock *sk, struct sk_buff *skb,
 		tp->snd_una = TCP_SKB_CB(skb)->ack_seq;
 		tp->snd_wnd = ntohs(th->window) << tp->rx_opt.snd_wscale;
 		tcp_init_wl(tp, TCP_SKB_CB(skb)->seq);
-		tcp_synack_rtt_meas(sk, req);
+		tcp_synack_rtt_meas(sk, synack_stamp);
 
 		if (tp->rx_opt.tstamp_ok)
 			tp->advmss -= TCPOLEN_TSTAMP_ALIGNED;
@@ -18,6 +18,7 @@ struct sk_buff *tcp_tso_segment(struct sk_buff *skb,
 				netdev_features_t features)
 {
 	struct sk_buff *segs = ERR_PTR(-EINVAL);
+	unsigned int sum_truesize = 0;
 	struct tcphdr *th;
 	unsigned int thlen;
 	unsigned int seq;
@@ -102,13 +103,7 @@ struct sk_buff *tcp_tso_segment(struct sk_buff *skb,
 		if (copy_destructor) {
 			skb->destructor = gso_skb->destructor;
 			skb->sk = gso_skb->sk;
-			/* {tcp|sock}_wfree() use exact truesize accounting :
-			 * sum(skb->truesize) MUST be exactly be gso_skb->truesize
-			 * So we account mss bytes of 'true size' for each segment.
-			 * The last segment will contain the remaining.
-			 */
-			skb->truesize = mss;
-			gso_skb->truesize -= mss;
+			sum_truesize += skb->truesize;
 		}
 		skb = skb->next;
 		th = tcp_hdr(skb);
@@ -125,7 +120,9 @@ struct sk_buff *tcp_tso_segment(struct sk_buff *skb,
 	if (copy_destructor) {
 		swap(gso_skb->sk, skb->sk);
 		swap(gso_skb->destructor, skb->destructor);
-		swap(gso_skb->truesize, skb->truesize);
+		sum_truesize += skb->truesize;
+		atomic_add(sum_truesize - gso_skb->truesize,
+			   &skb->sk->sk_wmem_alloc);
 	}
 
 	delta = htonl(oldlen + (skb_tail_pointer(skb) -
@@ -104,10 +104,14 @@ _decode_session4(struct sk_buff *skb, struct flowi *fl, int reverse)
 	const struct iphdr *iph = ip_hdr(skb);
 	u8 *xprth = skb_network_header(skb) + iph->ihl * 4;
 	struct flowi4 *fl4 = &fl->u.ip4;
+	int oif = 0;
+
+	if (skb_dst(skb))
+		oif = skb_dst(skb)->dev->ifindex;
 
 	memset(fl4, 0, sizeof(struct flowi4));
 	fl4->flowi4_mark = skb->mark;
-	fl4->flowi4_oif = skb_dst(skb)->dev->ifindex;
+	fl4->flowi4_oif = reverse ? skb->skb_iif : oif;
 
 	if (!ip_is_fragment(iph)) {
 		switch (iph->protocol) {
@@ -236,7 +240,7 @@ static struct dst_ops xfrm4_dst_ops = {
 	.destroy =		xfrm4_dst_destroy,
 	.ifdown =		xfrm4_dst_ifdown,
 	.local_out =		__ip_local_out,
-	.gc_thresh =		1024,
+	.gc_thresh =		32768,
 };
 
 static struct xfrm_policy_afinfo xfrm4_policy_afinfo = {
@@ -349,6 +349,11 @@ ip6t_do_table(struct sk_buff *skb,
 	local_bh_disable();
 	addend = xt_write_recseq_begin();
 	private = table->private;
+	/*
+	 * Ensure we load private-> members after we've fetched the base
+	 * pointer.
+	 */
+	smp_read_barrier_depends();
 	cpu        = smp_processor_id();
 	table_base = private->entries[cpu];
 	jumpstack  = (struct ip6t_entry **)private->jumpstack[cpu];
@@ -1087,10 +1087,13 @@ static struct dst_entry *ip6_dst_check(struct dst_entry *dst, u32 cookie)
 	if (rt->rt6i_genid != rt_genid_ipv6(dev_net(rt->dst.dev)))
 		return NULL;
 
-	if (rt->rt6i_node && (rt->rt6i_node->fn_sernum == cookie))
-		return dst;
+	if (!rt->rt6i_node || (rt->rt6i_node->fn_sernum != cookie))
+		return NULL;
 
-	return NULL;
+	if (rt6_check_expired(rt))
+		return NULL;
+
+	return dst;
 }
 
 static struct dst_entry *ip6_negative_advice(struct dst_entry *dst)
@@ -135,10 +135,14 @@ _decode_session6(struct sk_buff *skb, struct flowi *fl, int reverse)
 	struct ipv6_opt_hdr *exthdr;
 	const unsigned char *nh = skb_network_header(skb);
 	u8 nexthdr = nh[IP6CB(skb)->nhoff];
+	int oif = 0;
+
+	if (skb_dst(skb))
+		oif = skb_dst(skb)->dev->ifindex;
 
 	memset(fl6, 0, sizeof(struct flowi6));
 	fl6->flowi6_mark = skb->mark;
-	fl6->flowi6_oif = skb_dst(skb)->dev->ifindex;
+	fl6->flowi6_oif = reverse ? skb->skb_iif : oif;
 
 	fl6->daddr = reverse ? hdr->saddr : hdr->daddr;
 	fl6->saddr = reverse ? hdr->daddr : hdr->saddr;
@@ -285,7 +289,7 @@ static struct dst_ops xfrm6_dst_ops = {
 	.destroy =		xfrm6_dst_destroy,
 	.ifdown =		xfrm6_dst_ifdown,
 	.local_out =		__ip6_local_out,
-	.gc_thresh =		1024,
+	.gc_thresh =		32768,
 };
 
 static struct xfrm_policy_afinfo xfrm6_policy_afinfo = {
@@ -845,8 +845,13 @@ xt_replace_table(struct xt_table *table,
 		return NULL;
 	}
 
-	table->private = newinfo;
 	newinfo->initial_entries = private->initial_entries;
+	/*
+	 * Ensure contents of newinfo are visible before assigning to
+	 * private.
+	 */
+	smp_wmb();
+	table->private = newinfo;
 
 	/*
 	 * Even though table entries have now been swapped, other CPU's
@@ -147,6 +147,7 @@ nfqueue_tg_v3(struct sk_buff *skb, const struct xt_action_param *par)
 {
 	const struct xt_NFQ_info_v3 *info = par->targinfo;
 	u32 queue = info->queuenum;
+	int ret;
 
 	if (info->queues_total > 1) {
 		if (info->flags & NFQ_FLAG_CPU_FANOUT) {
@@ -157,7 +158,11 @@ nfqueue_tg_v3(struct sk_buff *skb, const struct xt_action_param *par)
 			queue = nfqueue_hash(skb, par);
 	}
 
-	return NF_QUEUE_NR(queue);
+	ret = NF_QUEUE_NR(queue);
+	if (info->flags & NFQ_FLAG_BYPASS)
+		ret |= NF_VERDICT_FLAG_QUEUE_BYPASS;
+
+	return ret;
 }
 
 static struct xt_target nfqueue_tg_reg[] __read_mostly = {
@@ -65,8 +65,7 @@ void ovs_dp_notify_wq(struct work_struct *work)
 				continue;
 
 			netdev_vport = netdev_vport_priv(vport);
-			if (netdev_vport->dev->reg_state == NETREG_UNREGISTERED ||
-			    netdev_vport->dev->reg_state == NETREG_UNREGISTERING)
+			if (!(netdev_vport->dev->priv_flags & IFF_OVS_DATAPATH))
 				dp_detach_port_notify(vport);
 		}
 	}
@@ -88,6 +87,10 @@ static int dp_device_event(struct notifier_block *unused, unsigned long event,
 		return NOTIFY_DONE;
 
 	if (event == NETDEV_UNREGISTER) {
+		/* upper_dev_unlink and decrement promisc immediately */
+		ovs_netdev_detach_dev(vport);
+
+		/* schedule vport destroy, dev_put and genl notification */
 		ovs_net = net_generic(dev_net(dev), ovs_net_id);
 		queue_work(system_wq, &ovs_net->dp_notify_work);
 	}
@@ -150,15 +150,25 @@ static void free_port_rcu(struct rcu_head *rcu)
 	ovs_vport_free(vport_from_priv(netdev_vport));
 }
 
+void ovs_netdev_detach_dev(struct vport *vport)
+{
+	struct netdev_vport *netdev_vport = netdev_vport_priv(vport);
+
+	ASSERT_RTNL();
+	netdev_vport->dev->priv_flags &= ~IFF_OVS_DATAPATH;
+	netdev_rx_handler_unregister(netdev_vport->dev);
+	netdev_upper_dev_unlink(netdev_vport->dev,
+				netdev_master_upper_dev_get(netdev_vport->dev));
+	dev_set_promiscuity(netdev_vport->dev, -1);
+}
+
 static void netdev_destroy(struct vport *vport)
 {
 	struct netdev_vport *netdev_vport = netdev_vport_priv(vport);
 
 	rtnl_lock();
-	netdev_vport->dev->priv_flags &= ~IFF_OVS_DATAPATH;
-	netdev_rx_handler_unregister(netdev_vport->dev);
-	netdev_upper_dev_unlink(netdev_vport->dev, get_dpdev(vport->dp));
-	dev_set_promiscuity(netdev_vport->dev, -1);
+	if (netdev_vport->dev->priv_flags & IFF_OVS_DATAPATH)
+		ovs_netdev_detach_dev(vport);
 	rtnl_unlock();
 
 	call_rcu(&netdev_vport->rcu, free_port_rcu);
@@ -39,5 +39,6 @@ netdev_vport_priv(const struct vport *vport)
 }
 
 const char *ovs_netdev_get_name(const struct vport *);
+void ovs_netdev_detach_dev(struct vport *);
 
 #endif /* vport_netdev.h */
@@ -255,6 +255,7 @@ static struct fq_flow *fq_classify(struct sk_buff *skb, struct fq_sched_data *q)
 			     f->socket_hash != sk->sk_hash)) {
 			f->credit = q->initial_quantum;
 			f->socket_hash = sk->sk_hash;
+			f->time_next_packet = 0ULL;
 		}
 		return f;
 	}
@@ -279,7 +279,9 @@ static void sctp_v6_get_dst(struct sctp_transport *t, union sctp_addr *saddr,
 	sctp_v6_to_addr(&dst_saddr, &fl6->saddr, htons(bp->port));
 	rcu_read_lock();
 	list_for_each_entry_rcu(laddr, &bp->address_list, list) {
-		if (!laddr->valid || (laddr->state != SCTP_ADDR_SRC))
+		if (!laddr->valid || laddr->state == SCTP_ADDR_DEL ||
+		    (laddr->state != SCTP_ADDR_SRC &&
+		     !asoc->src_out_of_asoc_ok))
 			continue;
 
 		/* Do not compare against v4 addrs */
@ -860,7 +860,6 @@ static void sctp_cmd_delete_tcb(sctp_cmd_seq_t *cmds,
|
||||
(!asoc->temp) && (sk->sk_shutdown != SHUTDOWN_MASK))
|
||||
return;
|
||||
|
||||
BUG_ON(asoc->peer.primary_path == NULL);
|
||||
sctp_unhash_established(asoc);
|
||||
sctp_association_free(asoc);
|
||||
}
|
||||
|
@@ -16,8 +16,8 @@ config X25
 	  if you want that) and the lower level data link layer protocol LAPB
 	  (say Y to "LAPB Data Link Driver" below if you want that).
 
-	  You can read more about X.25 at <http://www.sangoma.com/x25.htm> and
-	  <http://www.cisco.com/univercd/cc/td/doc/product/software/ios11/cbook/cx25.htm>.
+	  You can read more about X.25 at <http://www.sangoma.com/tutorials/x25/> and
+	  <http://docwiki.cisco.com/wiki/X.25>.
 	  Information about X.25 for Linux is contained in the files
 	  <file:Documentation/networking/x25.txt> and
 	  <file:Documentation/networking/x25-iface.txt>.
@@ -141,14 +141,14 @@ static int ipcomp_compress(struct xfrm_state *x, struct sk_buff *skb)
 	const int plen = skb->len;
 	int dlen = IPCOMP_SCRATCH_SIZE;
 	u8 *start = skb->data;
-	const int cpu = get_cpu();
-	u8 *scratch = *per_cpu_ptr(ipcomp_scratches, cpu);
-	struct crypto_comp *tfm = *per_cpu_ptr(ipcd->tfms, cpu);
+	struct crypto_comp *tfm;
+	u8 *scratch;
 	int err;
 
+	local_bh_disable();
+	scratch = *this_cpu_ptr(ipcomp_scratches);
+	tfm = *this_cpu_ptr(ipcd->tfms);
 	err = crypto_comp_compress(tfm, start, plen, scratch, &dlen);
 	if (err)
 		goto out;
 
@@ -158,13 +158,13 @@ static int ipcomp_compress(struct xfrm_state *x, struct sk_buff *skb)
 	}
 
 	memcpy(start + sizeof(struct ip_comp_hdr), scratch, dlen);
-	put_cpu();
+	local_bh_enable();
 
 	pskb_trim(skb, dlen + sizeof(struct ip_comp_hdr));
 	return 0;
 
 out:
-	put_cpu();
+	local_bh_enable();
 	return err;
 }