Also fixes an incorrect function comment (probably copy/paste).
Signed-off-by: Ben Boeckel <mathstuf@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jeff Kirsher says:
====================
This series contains updates to e1000, igb, ixgbe and ixgbevf.
Hong Zhiguo provides a fix for e1000 where tx_ring and adapter->tx_ring
are already of type "struct e1000_tx_ring *", so there is no need to
divide by the size of struct e1000_tx_ring in the idx calculation.
Emil provides a fix for ixgbevf to remove a redundant workaround related
to header split and a fix for ixgbe to resolve an issue where the MTA table
can be cleared when the interface is reset while in promisc mode.
Todd provides a fix for igb to prevent ethtool from writing to the iNVM
in i210/i211 devices. This issue was reported by Marek Vasut <marex@denx.de>.
Anton Blanchard provides a fix for ixgbe to reduce memory consumption
with larger page sizes, seen on PPC.
Don provides a cleanup in ixgbevf to replace the IXGBE_DESC_UNUSED macro
with the inline function ixgbevf_desc_unused() to make the logic a bit
more readable.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch resolves an issue where the MTA table can be cleared when the
interface is reset while in promisc mode. As a result, IPv6 traffic
between VFs is interrupted.
This patch makes the update of the MTA table unconditional to avoid the
inconsistent clearing on reset.
Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This patch just replaces the IXGBE_DESC_UNUSED macro with a like-named
inline function, ixgbevf_desc_unused(). The inline function makes the
logic a bit more readable.
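For illustration, a macro-to-inline conversion of this kind typically
looks like the sketch below; the ring fields are generic stand-ins, not
the driver's exact definitions:

  /* Minimal sketch, not the driver's actual structures. */
  struct ring {
          unsigned short next_to_clean;
          unsigned short next_to_use;
          unsigned short count;
  };

  /* Macro form: untyped and evaluates its argument several times. */
  #define DESC_UNUSED(R) \
          ((((R)->next_to_clean > (R)->next_to_use) ? 0 : (R)->count) + \
           (R)->next_to_clean - (R)->next_to_use - 1)

  /* Inline form: typed, single evaluation, easier to read. */
  static inline unsigned short desc_unused(const struct ring *ring)
  {
          unsigned short ntc = ring->next_to_clean;
          unsigned short ntu = ring->next_to_use;

          return ((ntc > ntu) ? 0 : ring->count) + ntc - ntu - 1;
  }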
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Don Skidmore <donald.c.skidmore@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
The ixgbe driver allocates pages for its receive rings. It currently
uses 512 pages, regardless of page size. During receive handling it
adds the unused part of the page back into the rx ring, avoiding the
need for a new allocation.
On a ppc64 box with 64 threads and 64kB pages, we end up with
512 entries * 64 rx queues * 64kB = 2GB memory used. Even more of a
concern is that we use up 2GB of IOMMU space in order to map all this
memory.
The driver makes a number of decisions based on whether PAGE_SIZE is
less than 8kB, so use this as the breakpoint and only allocate 128
entries on 8kB or larger page sizes.
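A minimal sketch of that kind of sizing logic, using a hypothetical
macro name rather than the driver's:

  #if (PAGE_SIZE < 8192)
  #define DEFAULT_RXD     512     /* small pages: keep the historical depth */
  #else
  #define DEFAULT_RXD     128     /* 8kB+ pages: a quarter of the entries suffices */
  #endif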
Signed-off-by: Anton Blanchard <anton@samba.org>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Don't let ethtool try to write to iNVM in i210/i211.
This fixes an issue seen by Marek Vasut.
Reported-by: Marek Vasut <marex@denx.de>
Signed-off-by: Todd Fujinaka <todd.fujinaka@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This patch removes a workaround related to header split, which is redundant
because the driver does not support splitting packet headers on Rx.
Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
tx_ring and adapter->tx_ring are already of type "struct
e1000_tx_ring *", so there is no need to divide by the size of
struct e1000_tx_ring when computing the ring index.
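An illustrative sketch of the pointer arithmetic involved (minimal,
hypothetical definitions, not the driver's code):

  struct e1000_tx_ring { int dummy; };

  struct e1000_adapter {
          struct e1000_tx_ring *tx_ring;  /* base of the ring array */
  };

  static int ring_index(struct e1000_adapter *adapter,
                        struct e1000_tx_ring *tx_ring)
  {
          /* Subtracting two "struct e1000_tx_ring *" pointers already
           * yields an element count, so dividing by
           * sizeof(struct e1000_tx_ring) again would be wrong. */
          return tx_ring - adapter->tx_ring;
  }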
Signed-off-by: Hong Zhiguo <zhiguohong@tencent.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
When using firmware assisted TSO, we use a single DMA mapping for
the linear area of a TSO skb.
We still have to segment the super-packet and insert a descriptor
containing the original headers before each segment of payload, so we
can unmap the linear area only after the last segment is completed.
The unmapping information for the linear area is therefore associated
with the last header descriptor.
We calculate the DMA address to unmap from using the map length and
the invariant that the end of the DMA mapping matches the end of
the data referenced by the last descriptor. But this invariant is
broken when there is TCP payload in the linear area.
Fix this by adding and using an explicit dma_offset field.
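A rough sketch of the two calculations; the field names below are
hypothetical, not sfc's actual buffer structure:

  #include <linux/types.h>

  struct tx_buffer {
          dma_addr_t dma_addr;      /* DMA address referenced by this descriptor */
          unsigned int len;         /* bytes referenced by this descriptor */
          unsigned int unmap_len;   /* length of the whole dma_map_single() mapping */
          unsigned int dma_offset;  /* offset of dma_addr within that mapping */
  };

  /* Old scheme: assumes the mapping ends exactly where this descriptor's
   * data ends; broken once TCP payload follows the headers in the
   * linear area. */
  static dma_addr_t unmap_start_old(const struct tx_buffer *b)
  {
          return b->dma_addr + b->len - b->unmap_len;
  }

  /* New scheme: record explicitly where the mapping starts. */
  static dma_addr_t unmap_start_new(const struct tx_buffer *b)
  {
          return b->dma_addr - b->dma_offset;
  }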
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
This patch removes the priv->can.do_get_state() callback, as it just returns
priv->can.state. The callback's only user can_fill_info() has direct access to
priv->can.state.
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Some devices, like the Kvaser Memorator Professional, have several bulk
in endpoints. The driver must use only the first one found. The same
holds for the bulk out endpoint. The official Kvaser driver (leaf) was
used as a reference for this patch.
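A sketch of picking only the first bulk-in and bulk-out endpoints (not
the driver's exact code):

  #include <linux/errno.h>
  #include <linux/usb.h>

  static int pick_endpoints(struct usb_interface *intf,
                            struct usb_endpoint_descriptor **in,
                            struct usb_endpoint_descriptor **out)
  {
          struct usb_host_interface *iface = intf->cur_altsetting;
          int i;

          *in = *out = NULL;

          for (i = 0; i < iface->desc.bNumEndpoints; i++) {
                  struct usb_endpoint_descriptor *ep = &iface->endpoint[i].desc;

                  /* keep only the first match of each kind */
                  if (!*in && usb_endpoint_is_bulk_in(ep))
                          *in = ep;
                  if (!*out && usb_endpoint_is_bulk_out(ep))
                          *out = ep;
          }

          return (*in && *out) ? 0 : -ENODEV;
  }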
Cc: linux-stable <stable@vger.kernel.org>
Signed-off-by: Olivier Sobrie <olivier@sobrie.be>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
If we handle end of block messages with higher priority than a lost
message, we can run into an endless interrupt loop.
This is reproducible with an am335x processor and "cansequence -r" at
1 Mbit. As soon as we lose a packet we can't escape from the interrupt
loop.
This patch fixes the problem by handling lost packets before EOB packets.
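In outline, the ordering change looks like the sketch below; the flag
and helper names are hypothetical, not the c_can driver's identifiers:

  #define MSG_CTRL_MSGLST (1 << 14)       /* hypothetical "message lost" bit */
  #define MSG_CTRL_EOB    (1 << 7)        /* hypothetical "end of block" bit */

  static void handle_lost_message(void) { /* bump stats, clear the overrun */ }
  static void handle_end_of_block(void) { /* normal end-of-FIFO handling */ }

  static void handle_rx_object(unsigned int msg_ctrl)
  {
          /* Check for a lost message before EOB; otherwise the overrun
           * is never cleared and the interrupt fires again immediately. */
          if (msg_ctrl & MSG_CTRL_MSGLST) {
                  handle_lost_message();
                  return;
          }

          if (msg_ctrl & MSG_CTRL_EOB)
                  handle_end_of_block();
  }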
Cc: linux-stable <stable@vger.kernel.org>
Signed-off-by: Markus Pargmann <mpa@pengutronix.de>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
The code sequence:
pdev->dev.coherent_dma_mask = DMA_BIT_MASK(64);
pdev->dev.dma_mask = &pdev->dev.coherent_dma_mask;
bypasses the architectures check on the DMA mask. It can be replaced
with dma_coerce_mask_and_coherent(), avoiding the direct initialization
of this mask.
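A minimal sketch of the replacement, assuming a 64-bit-capable platform
device as in the sequence above:

  #include <linux/dma-mapping.h>
  #include <linux/platform_device.h>

  static int example_probe(struct platform_device *pdev)
  {
          int ret;

          /* Replaces the open-coded coherent_dma_mask/dma_mask
           * assignments while keeping the architecture's DMA-mask
           * checking in the loop. */
          ret = dma_coerce_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
          if (ret)
                  return ret;

          return 0;
  }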
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The code sequence:
pldat->pdev->dev.coherent_dma_mask = 0xFFFFFFFF;
pldat->pdev->dev.dma_mask = &pldat->pdev->dev.coherent_dma_mask;
bypasses the architectures check on the DMA mask. It can be replaced
with dma_coerce_mask_and_coherent(), avoiding the direct initialization
of this mask.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Copying whole packets with skb_copy_from_linear_data_offset is a pretty
bad idea. CPU was spending time in __copy_user_common and network
performance was lower. With the new solution iperf-measured speed
increased from 116Mb/s to 134Mb/s.
Signed-off-by: Rafał Miłecki <zajec5@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Putting the context id of the primary phy context in
the placeholder of the secondary is obviously a bad
idea.
Spotted by smatch.
Fixes: dac94da8dba3 ("iwlwifi: mvm: new BT Coex API")
Reviewed-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Don't update the slot in "bgmac_dma_rx_skb_for_slot" unless both the
skb alloc and dma mapping are successful; and free the newly allocated
skb if a dma mapping error occurs. This relieves the caller of the need
to deduce/execute the appropriate cleanup action required when an error
occurs.
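The resulting error handling looks roughly like this sketch; the slot
structure and names are illustrative, not bgmac's exact ones:

  #include <linux/dma-mapping.h>
  #include <linux/netdevice.h>

  struct rx_slot {
          struct sk_buff *skb;
          dma_addr_t dma_addr;
  };

  static int rx_skb_for_slot(struct device *dma_dev, struct rx_slot *slot,
                             unsigned int buf_len)
  {
          struct sk_buff *skb;
          dma_addr_t dma_addr;

          skb = netdev_alloc_skb(NULL, buf_len);
          if (!skb)
                  return -ENOMEM;

          dma_addr = dma_map_single(dma_dev, skb->data, buf_len,
                                    DMA_FROM_DEVICE);
          if (dma_mapping_error(dma_dev, dma_addr)) {
                  dev_kfree_skb(skb);     /* free the skb we just allocated */
                  return -ENOMEM;
          }

          /* Update the slot only after both steps have succeeded. */
          slot->skb = skb;
          slot->dma_addr = dma_addr;
          return 0;
  }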
Signed-off-by: Nathan Hintz <nlhintz@hotmail.com>
Acked-by: Rafał Miłecki <zajec5@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Calls to mal_enable_eob_irq perform a read-modify-write of a dcr to
enable device irqs, which is protected by a spin lock. However, calls to
mal_disable_eob_irq do not take the corresponding lock.
This patch resolves the problem by ensuring that calls to
mal_disable_eob_irq also take the lock.
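An illustrative sketch of the locking; the dcr accessors and config bit
below are placeholders, not the driver's actual identifiers:

  #include <linux/spinlock.h>

  struct mal_instance {
          spinlock_t lock;
          unsigned int cfg;               /* stand-in for the dcr register */
  };

  #define CFG_EOB_IRQ_ENABLE 0x1          /* hypothetical enable bit */

  static unsigned int get_cfg_dcr(struct mal_instance *mal)
  {
          return mal->cfg;
  }

  static void set_cfg_dcr(struct mal_instance *mal, unsigned int val)
  {
          mal->cfg = val;
  }

  static void mal_disable_eob_irq(struct mal_instance *mal)
  {
          unsigned long flags;

          /* Take the same lock as mal_enable_eob_irq() so this
           * read-modify-write cannot race with the enable path. */
          spin_lock_irqsave(&mal->lock, flags);
          set_cfg_dcr(mal, get_cfg_dcr(mal) & ~CFG_EOB_IRQ_ENABLE);
          spin_unlock_irqrestore(&mal->lock, flags);
  }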
Signed-off-by: Alistair Popple <alistair@popple.id.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch fixes a bug which would trigger the BUG_ON() at
net/core/dev.c:4156. It was found that this was due to continuing
processing in the current poll call even when the call to
napi_reschedule failed, indicating the device was already on the
polling list. This resulted in an extra call to napi_complete which
triggered the BUG_ON().
This patch ensures that we only continue processing rotting packets in
the current mal_poll call if we are not already on the polling list.
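A sketch of the corrected poll-exit logic (not the driver's exact code):

  #include <linux/netdevice.h>

  static int example_poll(struct napi_struct *napi, int budget)
  {
          int received = 0;

          /* ... normal RX processing up to `budget` packets ... */

          if (received < budget) {
                  napi_complete(napi);

                  /* Only continue with the "rotting packet" pass if we
                   * get back onto the poll list ourselves.  If
                   * napi_reschedule() fails we are already scheduled,
                   * and a second napi_complete() later would trigger
                   * the BUG_ON(). */
                  if (napi_reschedule(napi)) {
                          /* ... process rotting packets ... */
                  }
          }

          return received;
  }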
Signed-off-by: Alistair Popple <alistair@popple.id.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Checking whether the MAC address is valid using is_valid_ether_addr()
is already done in of_get_mac_address(). While at it, reorganize the
checking so it matches the checks in other drivers.
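A common shape of the simplified pattern, using the of_get_mac_address()
calling convention of this era (illustrative, not necessarily this
driver's exact code):

  #include <linux/etherdevice.h>
  #include <linux/of_net.h>
  #include <linux/string.h>

  static void example_set_mac(struct net_device *ndev, struct device_node *np)
  {
          /* of_get_mac_address() already returns NULL unless the
           * address is valid, so a second is_valid_ether_addr() check
           * is redundant. */
          const void *mac = of_get_mac_address(np);

          if (mac)
                  memcpy(ndev->dev_addr, mac, ETH_ALEN);
          else
                  eth_hw_addr_random(ndev);
  }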
Signed-off-by: Luka Perkov <luka@openwrt.org>
CC: Alexey Brodkin <Alexey.Brodkin@synopsys.com>
CC: David Miller <davem@davemloft.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Checking whether the MAC address is valid using is_valid_ether_addr()
is already done in of_get_mac_address().
Signed-off-by: Luka Perkov <luka@openwrt.org>
Acked-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
CC: David Miller <davem@davemloft.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Checking whether the MAC address is valid using is_valid_ether_addr()
is already done in of_get_mac_address().
Signed-off-by: Luka Perkov <luka@openwrt.org>
Acked-by: David Daney <david.daney@cavium.com>
CC: David Miller <davem@davemloft.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
The dev_attrs field of struct bus_type is going away soon; dev_groups
should be used instead. This converts the MDIO bus code to use the
correct field.
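In outline the conversion looks like the sketch below; the attribute
shown is hypothetical, not one of the actual MDIO bus attributes:

  #include <linux/device.h>
  #include <linux/kernel.h>

  static ssize_t example_show(struct device *dev,
                              struct device_attribute *attr, char *buf)
  {
          return sprintf(buf, "example\n");
  }
  static DEVICE_ATTR_RO(example);

  static struct attribute *mdio_dev_attrs[] = {
          &dev_attr_example.attr,
          NULL,
  };
  ATTRIBUTE_GROUPS(mdio_dev);

  static struct bus_type example_bus_type = {
          .name           = "example",
          .dev_groups     = mdio_dev_groups,      /* instead of .dev_attrs */
  };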
Cc: David S. Miller <davem@davemloft.net>
Cc: Mark Brown <broonie@opensource.wolfsonmicro.com>
Cc: Nick Bowler <nbowler@elliptictech.com>
Cc: <netdev@vger.kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jeff Kirsher says:
====================
This series contains updates to vxlan, net, ixgbe, ixgbevf, and i40e.
Joseph provides a single patch against vxlan which removes the burden
from the NIC drivers to check if the vxlan driver is enabled in the
kernel and also makes available the vxlan headrooms to the drivers.
Jacob provides the majority of the patches, with patches against net,
ixgbe and ixgbevf. His net patch adds a might_sleep() call to
napi_disable so that every use of napi_disable in atomic context becomes
visible. Then Jacob provides a patch to fix the qv_lock_napi call in
ixgbe_napi_disable_all. The other ixgbe patches clean up the
ixgbe_check_minimum_link function to correctly show that there is some
minor loss of encoding, even though we don't calculate it, and remove
unnecessary duplication of the PCIe bandwidth display. Lastly, Jacob
provides 4 patches against ixgbevf to add ixgbevf_rx_skb, in line with
how ixgbe handles the variations on how packets can be received, and to
add support for tracking how many packets were cleaned during busy poll
as part of the extended statistics.
Wei Yongjun provides a fix for i40e to return -ENOMEM in the memory
allocation error handling case instead of returning 0, as done
elsewhere in this function.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Amend the documentation in the mvmdio driver to note the fact
that it is now used by both the mvneta and mv643xx_eth drivers.
Signed-off-by: Leigh Brown <leigh@solinno.co.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Make only a single call to mutex_unlock in orion_mdio_write.
Signed-off-by: Leigh Brown <leigh@solinno.co.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Replace manual poll of MVMDIO_SMI_READ_VALID with a call to
orion_mdio_wait_ready. This ensures a consistent timeout,
eliminates a busy loop, and allows for use of interrupts on
systems that support them.
Signed-off-by: Leigh Brown <leigh@solinno.co.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Amend orion_mdio_wait_ready so that the same timeout is used when
polling or using wait_event_timeout. Set the timeout to 1ms.
Replace udelay with usleep_range to avoid a busy loop, and set the
polling interval range as 45us to 55us, so that the first sleep
will be enough in almost all cases.
Generate the same log message at timeout when polling or using
wait_event_timeout.
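A sketch of the polling branch with those parameters; the register read
and message are placeholders, not the driver's exact code:

  #include <linux/delay.h>
  #include <linux/errno.h>
  #include <linux/io.h>
  #include <linux/jiffies.h>
  #include <linux/printk.h>

  #define MDIO_TIMEOUT_US         1000    /* 1 ms, polling or not */

  static int example_wait_ready(void __iomem *smi_reg, unsigned int done_bit)
  {
          unsigned long end = jiffies + usecs_to_jiffies(MDIO_TIMEOUT_US) + 1;

          while (!(readl(smi_reg) & done_bit)) {
                  if (time_is_before_jiffies(end)) {
                          pr_err("SMI bus busy timeout\n");
                          return -ETIMEDOUT;
                  }
                  /* sleep instead of busy-waiting; 45-55us so the first
                   * sleep is usually enough */
                  usleep_range(45, 55);
          }
          return 0;
  }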
Signed-off-by: Leigh Brown <leigh@solinno.co.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
The function doesn't need to be public, so make it static.
Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The interface type, which is tracked by "struct be_adapter::if_type",
isn't currently used, so we can remove it safely according to Sathya's
comments.
Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Use a more current logging style.
Convert printks to pr_<level>.
Consolidate multiple printks into a single printk to avoid
any possible dmesg interleaving. Add a default "event" msg
in case the listed types are ever expanded.
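An illustrative before/after of the consolidation (not the driver's
actual messages):

  #include <linux/printk.h>

  static void report_event(int type)
  {
          /* before: multiple printk calls whose output can interleave
           * with other messages in dmesg */
          printk(KERN_INFO "event:");
          printk(KERN_CONT " type %d", type);
          printk(KERN_CONT "\n");

          /* after: a single pr_info() keeps the line intact */
          pr_info("event: type %d\n", type);
  }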
Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This cleans up the code a bit and will be useful when allocating buffers
in other places (like the RX path, to avoid
skb_copy_from_linear_data_offset).
Signed-off-by: Rafał Miłecki <zajec5@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This change complements commits d0da7c002f7b2a93582187a9e3f73891a01d8ee4
[MIPS: DEC: Convert to new irq_chip functions] and
5359b938c088423a28c41499f183cd10824c1816 [MIPS: DECstation I/O ASIC DMA
interrupt handling fix] and implements automatic handling of the two
classes of DMA interrupts the I/O ASIC implements, informational and
errors.
Informational DMA interrupts do not stop the transfer and use the
`handle_edge_irq' handler that clears the request right away so that
another request may be recorded while the previous is being handled.
DMA error interrupts stop the transfer and require a corrective action
before DMA can be reenabled. Therefore they use the `handle_fasteoi_irq'
handler that only clears the request on the way out. Because the MIPS
processor interrupt inputs, one of which the I/O ASIC's interrupt
controller is cascaded to, are level-triggered, it is recommended that
error DMA interrupt action handlers be registered with the IRQF_ONESHOT
flag set so that they are run with the interrupt line masked.
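For example, registering such an error handler might look like the
sketch below; the handler body and names are illustrative only:

  #include <linux/interrupt.h>

  static irqreturn_t dma_err_handler(int irq, void *dev_id)
  {
          /* stop/reset the transfer, then re-enable DMA */
          return IRQ_HANDLED;
  }

  static int example_setup(int dma_err_irq, void *dev)
  {
          /* IRQF_ONESHOT keeps the level-triggered line masked while
           * the handler runs. */
          return request_irq(dma_err_irq, dma_err_handler, IRQF_ONESHOT,
                             "ioasic-dma-err", dev);
  }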
This change removes the export of clear_ioasic_dma_irq, which no longer
has to be called by device drivers to clear interrupts explicitly.
Originally these interrupts were cleared in the .end handler of the
`irq_chip' structure, before that handler was removed.
Signed-off-by: Maciej W. Rozycki <macro@linux-mips.org>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/5874/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Reported by "make includecheck".
Tested that the corresponding sources still compile on x86.
Signed-off-by: Michael Opdenacker <michael.opdenacker@free-electrons.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
If the firmware image that we attempt to load doesn't actually exist,
we have a broken firmware file or other code not checking things
correctly, so warn in such a case. Also avoid assigning
cur_ucode/ucode_loaded then.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
When writing the disable_power_off value, the LPRX
enable value also gets written unintentionally, so
fix that by adding the missing break statement.
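A generic illustration of this bug class; the names are hypothetical,
not the iwlwifi debugfs code:

  enum param { PARAM_DISABLE_POWER_OFF, PARAM_LPRX_ENABLE };

  static void write_disable_power_off(int val) { }
  static void write_lprx_enable(int val) { }

  static void write_param(enum param p, int val)
  {
          switch (p) {
          case PARAM_DISABLE_POWER_OFF:
                  write_disable_power_off(val);
                  break;  /* without this break we fall through and also
                           * write the LPRX enable value unintentionally */
          case PARAM_LPRX_ENABLE:
                  write_lprx_enable(val);
                  break;
          }
  }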
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
This can be useful when using the device as a sniffer.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Having a WARN_ON() followed by a printed message is
less useful than having the message in the warning
so move the message.
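Illustrative only; the condition and message below are placeholders:

  #include <linux/bug.h>
  #include <linux/printk.h>

  static void check_limit(int num_cmds, int max_cmds)
  {
          /* before: message printed separately from the backtrace */
          if (WARN_ON(num_cmds > max_cmds))
                  pr_err("too many commands: %d\n", num_cmds);

          /* after: the message is part of the warning itself */
          WARN(num_cmds > max_cmds, "too many commands: %d\n", num_cmds);
  }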
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
The number of commands can never be negative, so it should
be using an unsigned type. This also shuts up an smatch
warning elsewhere in the code.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Change old UAPSD bit to PM_CMD_SUPPORT, and add a new bit to indicate
real UAPSD support.
Don't use UAPSD when the firmware doesn't support it.
Signed-off-by: David Spinadel <david.spinadel@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Fix to return -ENOMEM in the memory alloc error handling
case instead of 0, as done elsewhere in this function.
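A sketch of the corrected error path; the structure and names are
placeholders:

  #include <linux/slab.h>

  struct example {
          void *buf;
  };

  static int example_alloc(struct example *ex, size_t size)
  {
          int err = 0;

          ex->buf = kzalloc(size, GFP_KERNEL);
          if (!ex->buf) {
                  err = -ENOMEM;  /* previously the function fell
                                   * through and returned 0 here */
                  goto out;
          }

          /* ... further setup ... */
  out:
          return err;
  }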
Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Tested-by: Kavindya Deegala <kavindya.s.deegala@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This patch removes the need to keep a zero_base variable in the adapter
structure. Now we just use two different macros to set the non-zero and
zero base. This adds to readability and shortens some of the structure
initialization to fit under 80 columns. The gathering of status for
ethtool was also slightly modified to better fit into 80 columns and
become a bit more readable.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Don Skidmore <donald.c.skidmore@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This patch adds the extended statistics similar to the ixgbe driver. These
statistics keep track of how often the busy polling yields, as well as how many
packets are cleaned or missed by the polling routine.
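A hypothetical shape for such counters, matching the description above;
the field names are illustrative, not the driver's:

  #include <linux/types.h>

  struct bp_extended_stats {
          u64 yields;     /* how often busy polling yielded */
          u64 cleaned;    /* packets cleaned by the busy-poll routine */
          u64 misses;     /* packets missed by the busy-poll routine */
  };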
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>