This patch addresses a kernel panic seen when setting up the interface.
Specifically, we see a NULL pointer dereference on the Tx descriptor
cleanup path when enabling interrupts. This change corrects that so it
cannot occur.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Acked-by: Greg Rose <gregory.v.rose@intel.com>
Tested-by: Sibai Li <sibai.li@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jeff Kirsher says:
====================
This series contains fixes to e1000e.
...
Bruce Allan (1):
e1000e: fix test for PHY being accessible on 82577/8/9 and I217
Tushar Dave (1):
e1000e: Correct link check logic for 82571 serdes
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Commit 4197aa7bb81877ebb06e4f2cc1b5fea2da23a7bd implements 64-bit
per-ring statistics, but the driver resets 'total_bytes' and
'total_packets' for the RX and TX rings to zero in the RX and TX
interrupt handlers. This results in statistics being lost and user
space reporting RX and TX statistics as zero. This patch addresses the
issue by not resetting the RX and TX ring statistics to zero.
Signed-off-by: Narendra K <narendra_k@dell.com>
Tested-by: Sibai Li <sibai.li@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Occasionally, the PHY can be initially inaccessible when the first read of
a PHY register, e.g. PHY_ID1, happens (signified by the returned value
0xFFFF) but subsequent accesses of the PHY work as expected. Add a retry
counter similar to how it is done in the generic e1000_get_phy_id().
Also, when the PHY is completely inaccessible (i.e. when subsequent reads
of the PHY_IDx registers return all F's) and the MDIO access mode must be
set to slow before attempting to read the PHY ID again, the functions that
perform these latter two actions expect the SW/FW/HW semaphore not to be
held already, so the semaphore must be released before and re-acquired
after calling them; otherwise there is an inordinate amount of delay
during device initialization.
Reported-by: Heikki Krogerus <heikki.krogerus@linux.intel.com>
Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Tested-by: Heikki Krogerus <heikki.krogerus@linux.intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
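A minimal sketch of the retry idea described above, assuming the usual
MDIO convention that a dead PHY reads back as 0xFFFF; read_phy_reg() is
a hypothetical stand-in for the driver's own register-read helper:

    u16 phy_id = 0xFFFF;
    int retries = 2;

    do {
        read_phy_reg(hw, PHY_ID1, &phy_id);   /* hypothetical accessor */
    } while (phy_id == 0xFFFF && retries--);

    if (phy_id == 0xFFFF) {
        /*
         * Still dead: switch the MDIO access mode to slow and retry,
         * releasing the SW/FW/HW semaphore around the helpers that
         * expect it to be free.
         */
    }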
The SYNCH and IV bits of the RXCW register are sticky. Before examining
these bits, RXCW should be read twice to filter out one-time false events
and obtain correct values for these bits. Incorrect values of these bits
in the link check logic can cause weird link stability issues if
auto-negotiation fails.
CC: stable <stable@vger.kernel.org> [2.6.38+]
Reported-by: Dean Nelson <dnelson@redhat.com>
Signed-off-by: Tushar Dave <tushar.n.dave@intel.com>
Reviewed-by: Bruce Allan <bruce.w.allan@intel.com>
Tested-by: Jeff Pieper <jeffrey.e.pieper@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
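A sketch of the double read, with er32() and the E1000_RXCW_* bit names
assumed from the driver's conventions rather than copied from the patch;
only the second read is trusted:

    u32 rxcw;
    bool link_ok = false;

    rxcw = er32(RXCW);    /* first read may still carry stale sticky bits */
    rxcw = er32(RXCW);    /* second read reflects the current state */

    if ((rxcw & E1000_RXCW_SYNCH) && !(rxcw & E1000_RXCW_IV))
        link_ok = true;   /* in sync and no invalid symbols */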
In rare cases, bnx2x_free_tx_skbs() can unmap the wrong DMA address
when it gets to the last entry of the tx ring. We were not using
the proper macro to skip the last entry when advancing the tx index.
Reported-by: Zongyun Lai <zlai@vmware.com>
Reviewed-by: Jeffrey Huang <huangjw@broadcom.com>
Signed-off-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
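An illustration of the pattern (macro shape assumed, not copied from the
driver): each ring page ends with a link descriptor pointing at the next
page, so the running index has to hop over it instead of being bumped by
one:

    #define MAX_TX_DESC_CNT  255   /* 256 slots per page; slot 255 is the link */
    #define NEXT_TX_BD(x)    ((((x) & MAX_TX_DESC_CNT) == (MAX_TX_DESC_CNT - 1)) ? \
                              (x) + 2 : (x) + 1)

    sw_cons = NEXT_TX_BD(sw_cons);  /* not sw_cons + 1, which can land on the link */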
Commit db83d136d7f753 ("gianfar: Fix missing sock reference when
processing TX time stamps") added a potential sk_wmem_alloc imbalance.
If the new skb has a different truesize than the old one, we can get a
negative sk_wmem_alloc once the new skb is orphaned at TX completion.
Now that we no longer orphan skbs early in dev_hard_start_xmit(), this
probably can lead to fatal bugs.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Tested-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Manfred Rudigier <manfred.rudigier@omicron.at>
Cc: Claudiu Manoil <claudiu.manoil@freescale.com>
Cc: Jiajun Wu <b06378@freescale.com>
Cc: Andy Fleming <afleming@freescale.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
If list_for_each_entry, etc complete a traversal of the list, the iterator
variable ends up pointing to an address at an offset from the list head,
and not a meaningful structure. Thus this value should not be used after
the end of the iterator. There does not seem to be a meaningful value to
provide to netdev_warn. Replace with pr_warn, since pr_err is used
elsewhere.
This problem was found using Coccinelle (http://coccinelle.lip6.fr/).
Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
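The shape of the bug and of the fix, with generic names:

    struct foo *pos;

    list_for_each_entry(pos, &foo_list, node)
        if (pos->id == wanted)
            return pos;

    /* BAD: netdev_warn(pos->netdev, ...);  'pos' is not a real entry here,
     * it is container_of() applied to the list head itself. */
    pr_warn("foo: no entry with id %d\n", wanted);
    return NULL;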
For the higher MTU sizes requiring a buffer size greater than 8192,
the buffers are sent or received using multiple DMA descriptors, or the
same descriptor with the multi-buffer handling option.
It was observed during tests that the driver was missing data packets
during normal ping operations if the data buffers in use were sized for
jumbo frame handling.
Memory barriers are added between the preparation of the DMA descriptors
in the jumbo frame handling path to ensure that all instructions are
complete before the DMA is enabled.
Signed-off-by: Deepak Sikri <deepak.sikri@st.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
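A sketch of where the barrier sits in the jumbo path; the descriptor
field and flag names are illustrative, not the driver's:

    desc->buf_addr = dma_addr;
    desc->buf_len  = frag_len;
    desc->flags    = FIRST_SEG | LAST_SEG;

    wmb();                               /* descriptor writes reach memory first */

    desc->status   = DESC_OWNED_BY_DMA;  /* only now may the engine fetch it */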
It was observed that NFS hangs during multiple reboots. The status of
the receive descriptors shows that all the descriptors were under CPU
control and none were assigned to the DMA.
The DMA status register also confirmed that the Rx buffer was
unavailable.
This patch fixes this by adding memory barriers to ensure that all
instructions are completed before enabling the Rx or Tx DMA, which
includes the proper setting of the ownership bit in the DMA
descriptors.
Signed-off-by: Deepak Sikri <deepak.sikri@st.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
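The same idea for the Rx/Tx start case, again with illustrative names:
the ownership bit and every earlier descriptor write must be visible in
memory before the DMA engine is kicked:

    rxdesc->status = DESC_OWNED_BY_DMA;   /* hand the descriptor back to HW */

    wmb();                                /* flush the writes above */

    writel(DMA_RX_START, ioaddr + DMA_CONTROL_REG);  /* illustrative register */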
Commit c0357e975afdbbedab5c662d19bef865f02adc17 ("bnx2: stop using
net_device.{base_addr, irq}") removed netdev->base_addr, so we need to
update cnic to get the MMIO base address from pci_resource_start().
Otherwise, mmap of the uio device will fail.
Signed-off-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
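A sketch of the replacement; pci_resource_start(), pci_resource_len()
and UIO_MEM_PHYS are standard kernel APIs, while the surrounding
uio_info setup is assumed context:

    uinfo->mem[0].addr    = pci_resource_start(pdev, 0); /* was netdev->base_addr */
    uinfo->mem[0].size    = pci_resource_len(pdev, 0);
    uinfo->mem[0].memtype = UIO_MEM_PHYS;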
DCB and SR-IOV cannot currently be enabled at the same time as the queueing
schemes are incompatible. If they are both enabled it will result in Tx
hangs since only the first Tx queue will be able to transmit any traffic.
The simple fix for this is to block us from enabling TCs in ixgbe_setup_tc
if SR-IOV is enabled. This change will be reverted once we can support
SR-IOV and DCB coexistence.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Acked-by: John Fastabend <john.r.fastabend@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
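A sketch of the guard in ixgbe_setup_tc(); the flag and logging macro
follow ixgbe's conventions but are assumed here rather than copied from
the patch:

    if (tc && (adapter->flags & IXGBE_FLAG_SRIOV_ENABLED)) {
        e_err(drv, "Enable failed: DCB is not supported with SR-IOV\n");
        return -EINVAL;
    }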
Some people report that atl1c can cause a system hang, with the
following kernel trace info:
---------------------------------------
WARNING: at.../net/sched/sch_generic.c:258 dev_watchdog+0x1db/0x1d0()
...
NETDEV WATCHDOG: eth0 (atl1c): transmit queue 0 timed out
...
---------------------------------------
This is caused by netif_stop_queue being called when the cable link is
down. So remove netif_stop_queue, because link_watch will take it over.
Signed-off-by: xiong <xiong@qca.qualcomm.com>
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Cloud Ren <cjren@qca.qualcomm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
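A sketch of the link-down path after the change; only the carrier state
is touched, and the queue management is left to the core:

    if (!link_up) {
        netif_carrier_off(netdev);
        /* netif_stop_queue(netdev);  -- removed: linkwatch handles the queue,
         * and stopping it here could trip the NETDEV watchdog */
    } else {
        netif_carrier_on(netdev);
        netif_wake_queue(netdev);
    }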
commit a1c7fff7e1 (net: netdev_alloc_skb() use build_skb()) broke b44 on
some 64bit machines.
It appears b44 and b43 use __netdev_alloc_skb() instead of alloc_skb()
for their bounce buffers.
There is no need to add an extra NET_SKB_PAD reservation for bounce
buffers :
- In TX path, NET_SKB_PAD is useless
- In RX path in b44, we force a copy of incoming frames if
GFP_DMA allocations were needed.
Reported-and-bisected-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Currently only used when packet split mode is enabled with jumbo frames,
IP payload checksum (for fragmented UDP packets) is mutually exclusive with
receive hashing offload since the hardware uses the same space in the
receive descriptor for the hardware-provided packet checksum and the RSS
hash, respectively. Users currently must disable jumbos when receive
hashing offload is enabled, or vice versa, because of this incompatibility.
Since testing has shown that IP payload checksum does not provide any real
benefit, just remove it so that there is no longer a choice between jumbos
or receive hashing offload but not both as done in other Intel GbE drivers
(e.g. e1000, igb).
Also, add a missing check for IP checksum error reported by the hardware;
let the stack verify the checksum when this happens.
CC: stable <stable@vger.kernel.org> [3.4]
Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Using ethtool -C ethX rx-usecs 0 crashes with a divide by zero.
Refactor this function to fix this issue and make it more clear
what the intent of each conditional is. Add comment regarding
using a setting of zero.
CC: stable <stable@vger.kernel.org> [3.3+]
CC: David Ahern <daahern@cisco.com>
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
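A sketch of the zero guard; the adapter fields are illustrative, only
struct ethtool_coalesce's rx_coalesce_usecs is the real interface:

    if (ec->rx_coalesce_usecs == 0) {
        /* 0 means interrupt throttling is disabled entirely;
         * there is no rate to compute, so never divide by it. */
        adapter->requested_itr = 0;
    } else {
        adapter->requested_itr = 1000000 / ec->rx_coalesce_usecs; /* ints/sec */
    }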
This fixes a number of warnings such as:
CC drivers/net/ethernet/ti/davinci_cpdma.o
drivers/net/ethernet/ti/davinci_cpdma.c:279:1: warning: data definition
has no type or storage class
drivers/net/ethernet/ti/davinci_cpdma.c:279:1: warning: type defaults to
‘int’ in declaration of ‘EXPORT_SYMBOL_GPL’
drivers/net/ethernet/ti/davinci_cpdma.c:279:1: warning: parameter names
(without types) in function declaration
Signed-off-by: Daniel Mack <zonque@gmail.com>
Cc: Vaibhav Hiremath <hvaibhav@ti.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Christian Riesch <christian.riesch@omicron.at>
Signed-off-by: David S. Miller <davem@davemloft.net>
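Those warnings are the usual symptom of EXPORT_SYMBOL_GPL() being
expanded without its declaration in scope; a sketch of the kind of
change, assuming that is what the patch adds:

    #include <linux/module.h>     /* provides EXPORT_SYMBOL_GPL() */

    EXPORT_SYMBOL_GPL(cpdma_ctlr_create);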
The correct behavior is to program the interrupt coalescing regs
(RXICr/TXICr) in accordance with the Rx/Tx Q's "rx/txcoalescing"
flag. That is, if the coalescing flag is 0 for a given Rx/Tx queue
then the corresponding coalescing register should be cleared.
This behavior is correctly implemented for the single-queue mode
(SQ_SG_MODE), but not for the multi-queue mode (MQ_MG_MODE).
This fixes the latter case.
Signed-off-by: Claudiu Manoil <claudiu.manoil@freescale.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
FCoE target mode was experiencing issues due to the fact that we were
sending up data frames that were padded to 60 bytes after the DDP logic had
already stripped the frame down to 52 or 56 depending on the use of VLANs.
This was resulting in the FCoE DDP logic having issues since it thought the
frame still had data in it due to the padding.
To resolve this, add code so that we do not pad FCoE frames prior to
handing them up to the stack.
CC: <stable@vger.kernel.org>
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add a missing resource release in ring cleanup.
Not doing this leaves a range of QPs that are being reserved,
and no one can use them.
Signed-off-by: Yevgeny Petrilin <yevgenyp@mellanox.co.il>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fix a crash in the error flow of the NOP command, which caused the
driver to try to use a completion vector that wasn't allocated.
Signed-off-by: Yevgeny Petrilin <yevgenyp@mellanox.co.il>
Signed-off-by: David S. Miller <davem@davemloft.net>
Set valid port parameters: MTU and flow control configuration when
configuring the port during HW device initialization,
prior to the net device open() being called.
Using invalid parameters (such as all zeros)
could lead to bad firmware behavior.
Signed-off-by: Yevgeny Petrilin <yevgenyp@mellanox.co.il>
Signed-off-by: David S. Miller <davem@davemloft.net>
The following commit couldn't work if the RMCR is not set to 1.
"net: sh_eth: fix the rxdesc pointer when rx descriptor empty happens"
commit id 79fba9f51755c704c0a7d7b7f0df10874dc0a744
If RMCR is not set, the controller will clear EDRRR after it receives
a frame. In this case, the driver doesn't need to fix the value of
cur_rx/dirty_rx. The driver only needs to do so when the controller
detects that the receive descriptors are empty.
Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Tested-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
The 8168evl (RTL_GIGA_MAC_VER_34) based Gigabyte GA-990FXA motherboards
are very prone to NETDEV watchdog problems without this change. See
https://bugzilla.kernel.org/show_bug.cgi?id=42899 for instance.
I don't know why it *works*. It's depressingly effective though.
For the record:
- the problem may go along with IOMMU (AMD-Vi) errors but it really looks
like a red herring.
- the patch sets the RX_MULTI_EN bit. If the 8168c doc is any guide,
the chipset now fetches several Rx descriptors at a time.
- long ago the driver ignored the RX_MULTI_EN bit.
e542a2269f232d61270ceddd42b73a4348dee2bb changed the RxConfig
settings. Whatever the problem it's now labeled a regression.
- Realtek's own driver can identify two different 8168evl devices
(CFG_METHOD_16 and CFG_METHOD_17) where the r8169 driver only
sees one. It sucks.
Signed-off-by: Francois Romieu <romieu@fr.zoreil.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch fixes a memory leak that was introduced in the 3.4 kernel. The
leak occurred when FCoE was enabled and traffic was passed over the FCoE
rings reserved for FCoE. The memory leak was due to us not populating the
compound page information on the order 1 pages needed for FCoE.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
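Populating the compound page information for an order-1 allocation is
done with __GFP_COMP, so that get_page()/put_page() on the head page
covers both pages; a sketch with illustrative context:

    unsigned int order = 1;               /* two-page FCoE receive buffer */
    struct page *page;

    page = alloc_pages(GFP_ATOMIC | __GFP_COMP, order);
    if (!page)
        return false;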
Fix Kconfig file to make sure that PTP and IGB/IXGBE are both either
in-kernel or modules, not mixed. Having the build status mixed causes
compile errors.
Signed-off-by: Carolyn Wyborny <carolyn.wyborny@intel.com>
Tested-by: Jeff Pieper <jeffrey.e.pieper@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
The i210/i211 devices have only 16 RAR address filters, like the 82575,
instead of 32 like the i350. This patch removes the entries for
i210/i211 in the get_invariants function, which were setting them to 32.
This ensures that they will get the default value, which is the correct
one.
Signed-off-by: Carolyn Wyborny <carolyn.wyborny@intel.com>
Tested-by: Jeff Pieper <jeffrey.e.pieper@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Increasing the hardware statistics structure to accommodate statistics for skyhawk.
Signed-off-by: Vasundhara Volam <vasundhara.volam@emulex.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Modify IOCTL error message to print subsystem also.
Signed-off-by: Vasundhara Volam <vasundhara.volam@emulex.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The maximum size of a packet that can be handled by the controller,
including the ethernet header, is 65535. Reduce gso_max_size accordingly.
Signed-off-by: Sarveshwar Bandi <sarveshwar.bandi@emulex.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
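A sketch of the cap, using the standard netif_set_gso_max_size() helper;
whether the driver applies exactly this value is assumed:

    netif_set_gso_max_size(netdev, 65535 - ETH_HLEN);  /* 65521 bytes of payload */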
This patch fixes a potential hole when configuring the cycle counter used to
generate the nanosecond time clock. This clock is based on the SYSTIME
registers along with the TIMINCA registers. The TIMINCA register determines
the increment to be added to the SYSTIME registers every DMA clock tick. This
register needs to be reconfigured whenever the link-speed changes. However,
the value calculated stays the same when link is down and when link is up.
Misconfiguration can occur if the link status changes due to a reset, which
causes the TIMINCA register to be reset. This reset puts the device in an
unstable state where the SYSTIME registers stop incrementing and the PTP
protocol does not function.
The solution is to double check the TIMINCA value and always reset the value
if the register is zero. This prevents a misconfiguration bug that halts the
PHC.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Acked-by: Don Skidmore <donald.c.skidmore@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When I2C is not responding it's usually due to a previous
unexpected reset during I2C operation. We release it by
powering down and up the SFP+ module.
Signed-off-by: Yaniv Rosner <yaniv.rosner@broadcom.com>
Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The adapter->npars[] array has QLCNIC_MAX_PCI_FUNC elements. We
allocate it that way a few lines earlier in the function. So this test
is off by one.
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Acked-by: Anirban Chakraborty <anirban.chakraborty@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
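The general shape of the bound check described above (index and field
names illustrative): an array of QLCNIC_MAX_PCI_FUNC elements only has
valid indices up to QLCNIC_MAX_PCI_FUNC - 1:

    if (pci_func >= QLCNIC_MAX_PCI_FUNC)   /* index == array size is out of range */
        return -EINVAL;

    adapter->npars[pci_func].active = 1;   /* illustrative use of the index */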
There is an off-by-one error in the minimal number of BDs in
bnx2x_start_xmit() and bnx2x_tx_int() before stopping/resuming the tx queue.
A full size GSO packet, with data included in skb->head really needs
(MAX_SKB_FRAGS + 4) BDs, because of bnx2x_tx_split()
This error triggers if BQL is disabled and heavy TCP transmit traffic
occurs.
bnx2x_tx_split() can definitely be called; remove a wrong comment.
Reported-by: Tomas Hruby <thruby@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Eilon Greenstein <eilong@broadcom.com>
Cc: Yaniv Rosner <yanivr@broadcom.com>
Cc: Merav Sicron <meravs@broadcom.com>
Cc: Tom Herbert <therbert@google.com>
Cc: Robert Evans <evansr@google.com>
Cc: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Pull networking fixes from David S. Miller:
This has the fix for the wireless issues I ran into the other week as
well as:
1) Fix CAN c_can driver transmit handling resulting in BUG check
triggers, from AnilKumar Ch.
2) Fix packet drop monitor sleeping in atomic context, from Eric
Dumazet.
3) Fix mv643xx_eth driver build regression, from Andrew Lunn.
4) Inetpeer freeing needs an RCU grace period in order to avoid races
during tree invalidation. From Eric Dumazet.
5) Fix endianness bugs in xt_HMARK netfilter module, from Hans
Schillstrom.
6) Add proper module refcounting to l2tp_eth to avoid crash on module
unload, from Eric Dumazet.
7) Fix truncation of neighbour entry dumps due to logic errors in
neigh_dump_info() and friends, from Eric Dumazet.
8) The conversion of fib6_age() to dst_neigh_lookup() accidentally
reversed the logic of a flags test, fix from Thomas Graf.
9) Fix checksum configuration in newer sky2 chips, from Stephen
Hemminger.
10) Revert BQL support in NIU driver, doesn't work.
11) l2tp_ip_sendmsg() illegally uses a route without a proper reference.
From Eric Dumazet.
12) be2net driver references an SKB after it's potentially been freed,
also from Eric Dumazet.
13) Fix RCU stalls in dummy net driver init. Also from Eric Dumazet.
14) lpc_eth has several bugs in its transmit engine leading to packet
leaks and improper queue wakes, from Eric Dumazet.
15) Apply short DMA workaround to more tg3 chips, from Matt Carlson.
16) Add tilegx network driver.
17) Bonding queue mapping for a packet can get corrupted, fix from Eric
Dumazet.
18) Fix bug in netpoll_send_udp() SKB management that can leave garbage
in the payload in certain situations. From Eric Dumazet.
19) bnx2x driver interprets chip RX checksum offload incorrectly in
encapsulation situations. Fix from Eric Dumazet.
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (75 commits)
bnx2x: fix checksum validation
netpoll: fix netpoll_send_udp() bugs
bonding: Fix corrupted queue_mapping
bonding:record primary when modify it via sysfs
tilegx network driver: initial support
tg3: Apply short DMA frag workaround to 5906
net: stmmac: Fix clock en-/disable calls
lpc_eth: fix tx completion
lpc_eth: add missing ndo_change_mtu()
dummy: fix rcu_sched self-detected stalls
net: Reorder initialization in ip_route_output to fix gcc warning
virtio-net: fix a race on 32bit arches
r8169: avoid NAPI scheduling delay.
net: Make linux/tcp.h C++ friendly (trivial)
netdev: fix drivers/net/phy/ kernel-doc warnings
net/core: fix kernel-doc warnings
be2net: fix a race in be_xmit()
l2tp: fix a race in l2tp_ip_sendmsg()
mac80211: add back channel change flag
NFC: Fix possible NULL ptr deref when getting the name of a socket
...
bnx2x driver incorrectly sets ip_summed to CHECKSUM_UNNECESSARY on
encapsulated segments. The TCP stack happily accepts frames with bad
checksums if they are inside a GRE or IPIP encapsulation.
Our understanding is that if no IP or L4 csum validation was done by the
hardware, we should leave ip_summed as is (CHECKSUM_NONE), since
hardware doesn't provide CHECKSUM_COMPLETE support in its cqe.
Then, if IP/L4 checksumming was done by the hardware, set
CHECKSUM_UNNECESSARY if no error was flagged.
Patch based on findings and analysis from Robert Evans.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Eilon Greenstein <eilong@broadcom.com>
Cc: Yaniv Rosner <yanivr@broadcom.com>
Cc: Merav Sicron <meravs@broadcom.com>
Cc: Tom Herbert <therbert@google.com>
Cc: Robert Evans <evansr@google.com>
Cc: Willem de Bruijn <willemb@google.com>
Acked-by: Eilon Greenstein <eilong@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
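A sketch of the rule spelled out above; the cqe flag helpers are
hypothetical, only the ip_summed handling mirrors the description:

    skb->ip_summed = CHECKSUM_NONE;           /* default: nothing was validated */

    if (hw_did_ip_and_l4_csum(cqe)) {         /* hypothetical helper */
        if (!hw_csum_error(cqe))              /* hypothetical helper */
            skb->ip_summed = CHECKSUM_UNNECESSARY;
        /* on error: stay with CHECKSUM_NONE and let the stack verify */
    }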
This change adds support for the tilegx network driver based on the
GXIO IORPC support in the tilegx software stack, using the on-chip
mPIPE packet processing engine.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5906 devices also need the short DMA fragment workaround. This patch
makes the necessary change.
Signed-off-by: Matt Carlson <mcarlson@broadcom.com>
Tested-by: Christian Kujau <lists@nerdbynature.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
clk_{un}prepare is mandatory for platforms using common clock framework.
Since these drivers are used by SPEAr platform, which supports common
clock framework, add clk_{un}prepare() support for them. Otherwise
the clocks are not correctly en-/disabled and ethernet support doesn't
work.
Signed-off-by: Stefan Roese <sr@denx.de>
Cc: Viresh Kumar <viresh.linux@gmail.com>
Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
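A sketch of the pattern; the clock handle name is assumed from the
driver's style:

    ret = clk_prepare_enable(priv->stmmac_clk);   /* was clk_enable() */
    if (ret)
        return ret;

    /* ... hardware in use ... */

    clk_disable_unprepare(priv->stmmac_clk);      /* was clk_disable() */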
__lpc_handle_xmit() has two bugs :
1) It can leak skbs in case TXSTATUS_ERROR is set
2) It can wake up txqueue while no slot was freed.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Roland Stigge <stigge@antcom.de>
Tested-by: Roland Stigge <stigge@antcom.de>
Cc: Kevin Wells <kevin.wells@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
lpc_eth does a copy of transmitted skbs to DMA area, without checking
skb lengths, so can trigger buffer overflows :
memcpy(pldat->tx_buff_v + txidx * ENET_MAXF_SIZE, skb->data, len);
One way to get bigger skbs is to allow MTU changes above the 1500 limit.
Calling eth_change_mtu() in ndo_change_mtu() makes sure this cannot
happen.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Roland Stigge <stigge@antcom.de>
Cc: Kevin Wells <kevin.wells@nxp.com>
Acked-by: Roland Stigge <stigge@antcom.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
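A sketch of the hook-up; eth_change_mtu() rejects anything above the
standard 1500-byte MTU, which keeps skb->len within the fixed
ENET_MAXF_SIZE copy buffers. The other ops and their names are assumed:

    static const struct net_device_ops lpc_eth_netdev_ops = {
        .ndo_start_xmit = lpc_eth_hard_start_xmit,  /* existing hook, name assumed */
        .ndo_change_mtu = eth_change_mtu,           /* bound MTU to 68..1500 */
        /* other ndo hooks elided */
    };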
While reworking the r8169 driver a few months ago to perform the
smallest amount of work in the irq handler, I took care of avoiding
any irq mask register operation in the slow work dedicated user
context thread. The slow work thread scheduled an extra round of NAPI
work which would ultimately set the irq mask register as required,
thus keeping such irq mask operations in the NAPI handler.
It would eventually race with the irq handler and delay NAPI execution
for - assuming no further irq - a whole ksoftirqd period. Mildly a
problem for rare link changes or corner case PCI events.
The race was always lost after the last bh disabling lock had been
removed from the work thread and people started wondering where those
pesky "NOHZ: local_softirq_pending 08" messages came from.
Actually the irq mask register _can_ be set up directly in the slow
work thread.
Signed-off-by: Francois Romieu <romieu@fr.zoreil.com>
Reported-by: Dave Jones <davej@redhat.com>
Tested-by: Marc Dionne <marc.c.dionne@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Hayes Wang <hayeswang@realtek.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
As soon as hardware is notified of a transmit, we no longer can assume
skb can be dereferenced, as TX completion might have freed the packet.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Sathya Perla <sathya.perla@emulex.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This reverts commit efa230f2c68abab817f13473077f8d0cc74f43f3.
BQL doesn't work with how this driver currently only takes TX
interrupts every 1/4 of the TX ring. That behavior needs to be fixed,
but that's a larger non-trivial task and for now we have to revert
BQL support as this makes the device currently completely unusable.
Signed-off-by: David S. Miller <davem@davemloft.net>
Commit ba27ec66ffeb78cbf fixes the Kconfig of the driver when built as
a module, allowing the PCI and platform modules, which are no longer
mutually exclusive, to be selected/unselected. This patch fixes and
guarantees that the driver builds on all platforms, with or without
PCI, and when selecting/unselecting the two stmmac supports. If there
are problems with either the configuration or the pci/platform
registration, module_init will fail.
v2: set the CONFIG_STMMAC_PLATFORM enabled by default.
I've just noticed that this can actually help on
some configurations that don't enable any STMMAC
options by default (e.g. SPEAr).
v3: change the printk level when the driver is not registered.
Reported-by: Fengguang Wu <wfg@linux.intel.com>
Signed-off-by: Giuseppe Cavallaro <peppe.cavallaro@st.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The newer flavors of Yukon II use a different method for receive
checksum offload. This is indicated in the driver by the SKY2_HW_NEW_LE
flag. On these newer chips, the BMU_ENA_RX_CHKSUM should not be set.
The driver would incorrectly toggle the bit, enabling the old checksum
logic on these chips and causing a BUG_ON() assertion if receive
checksum was toggled via ethtool.
Reported-by: Kirill Smelkov <kirr@mns.spb.ru>
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Commit 452503ebc (ARM: Orion: Eth: Add clk/clkdev support.) broke
the building of the driver on architectures which don't have clk
support. In particular PPC32 Pegasos which uses this driver.
Add #ifdef around the clk API usage.
Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
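A sketch of the guard; the private 'clk' member is assumed, while the
clk API calls and CONFIG_HAVE_CLK are the standard kernel symbols:

    #if defined(CONFIG_HAVE_CLK)
        mp->clk = clk_get(&pdev->dev, NULL);
        if (!IS_ERR(mp->clk))
            clk_prepare_enable(mp->clk);
    #endif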
Commit 096335b3f983 ("mlx4_core: Allow dynamic MTU configuration for
IB ports") modifies the port VL setting. This exposes a bug in
mlx4_common_set_port(), where the VL cap value passed in (inside the
command mailbox) is incorrectly zeroed-out:
mlx4_SET_PORT modifies the VL_cap field (byte 3 of the mailbox).
Since the SET_PORT command is paravirtualized on the master as well as
on the slaves, mlx4_SET_PORT_wrapper() is invoked on the master. This
calls mlx4_common_set_port() where mailbox byte 3 gets overwritten by
code which should only set a single bit in that byte (for the reset
qkey counter flag) -- but instead overwrites the entire byte.
The result is that when running in SR-IOV mode, the VL_cap will be set
to zero -- fix this.
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>