Refactor the packet split Rx code to properly use half-pages for
receives. The previous code was doing way more mapping and unmapping
than it needed to, and wasn't properly using half-pages.
Increment the page use count each time we give a half-page to an skb,
knowing that the stack will probably process and release the page before
we need it again. Only free and reallocate pages if the count shows that
both half-pages are in use. Add counters to track reallocations and page
reuse.
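For illustration, a minimal sketch of the half-page reuse scheme described above, using invented names rather than the driver's actual structures:

    #include <linux/mm.h>
    #include <linux/skbuff.h>

    struct rx_half_page_buf {               /* hypothetical bookkeeping */
        struct page *page;
        unsigned int page_offset;           /* 0 or PAGE_SIZE / 2 */
    };

    static void rx_give_half_page(struct rx_half_page_buf *bi,
                                  struct sk_buff *skb, unsigned int len)
    {
        /* Attach one half of the page to the skb and take an extra
         * reference, so the stack releasing its half does not free the
         * page while the driver still owns the other half.
         */
        skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, bi->page,
                        bi->page_offset, len, PAGE_SIZE / 2);
        get_page(bi->page);

        /* Flip to the other half for the next receive; a page is only
         * freed and reallocated once the use count shows both halves
         * are in flight.
         */
        bi->page_offset ^= PAGE_SIZE / 2;
    }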
Change-ID: I534b299196036b64be82b4861a0a4036310a8f22
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
The i40e and i40evf drivers now cleanly handle allocation
failures and can avoid kernel log spew from the memory allocator
when allocations fail, so set __GFP_NOWARN on Rx buffer alloc.
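As a hedged illustration (the function name is invented), the flag in question on an Rx page allocation:

    #include <linux/gfp.h>
    #include <linux/mm.h>

    static struct page *example_alloc_rx_page(void)
    {
        /* __GFP_NOWARN suppresses the allocator's warning splat when a
         * page is temporarily unavailable; the driver handles the NULL
         * return itself.
         */
        return alloc_page(GFP_ATOMIC | __GFP_NOWARN);
    }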
Change-ID: Ic9e1b83c495e2a3ef6b069ba7fb6e52ce134cd23
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This is the "Don't Give Up" patch. Previously the
driver could fail an allocation, and then possibly stall
a queue forever, by never coming back to continue receiving
or allocating buffers.
With this patch, the driver will keep polling trying to allocate
receive buffers until it succeeds. This should keep all receive
queues running even in the face of memory pressure.
Also update copyright year in file header.
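A minimal sketch of the idea, assuming a hypothetical clean helper that reports allocation failures (this is not the literal driver code):

    #include <linux/netdevice.h>

    /* hypothetical: cleans Rx, reports packets in *work_done, returns
     * true if any receive buffer allocation failed
     */
    static bool example_clean_rx(struct napi_struct *napi, int budget,
                                 int *work_done);

    static int example_napi_poll(struct napi_struct *napi, int budget)
    {
        int work_done = 0;

        if (example_clean_rx(napi, budget, &work_done))
            return budget;      /* don't complete NAPI: we get polled
                                 * again and the allocation is retried */

        if (work_done < budget)
            napi_complete(napi);

        return work_done;
    }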
Change-ID: I2b103d1ce95b9831288a7222c3343ffa1988b81b
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
While re-enabling interrupts the driver would clear all pending
causes. This meant that if an interrupt was generated while the driver
was cleaning or polling with interrupts disabled, then that interrupt
was lost. This could cause a queue to become dead, especially for
receive. Refactor the enable_icr0 function so that the caller can decide
whether the CLEARPBA (clear pending events) bit is set while re-enabling
the interrupt.
Also update copyright year in file headers.
Change-ID: Ic1db100a05e13c98919057696db147a258ca365a
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Now that the Force-WriteBack functionality in X710/XL710 devices
has been moved out of the clean routine and into the service task,
we need to make sure WriteBack-On-ITR is separated out since it
is still called from clean.
In the X722 devices, Force-WriteBack implies WriteBack-On-ITR but
without the interrupt, which puts the driver into a missed-interrupt
scenario and can lead to a tx-timeout report.
With this patch, we break the two functions apart and call the
appropriate one in the right place. This avoids creating missed-interrupt
scenarios for X722 devices.
Also update copyright year in file headers.
Change-ID: Iacbde39f95f332f82be8736864675052c3583a40
Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Don't bother trying to set up a TSO if the skb->ip_summed is not
set to CHECKSUM_PARTIAL.
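For clarity, the guard being described looks roughly like this (a sketch of the check, not the full TSO setup routine):

    #include <linux/skbuff.h>

    static bool example_wants_tso(const struct sk_buff *skb)
    {
        /* Only CHECKSUM_PARTIAL skbs are candidates for TSO setup. */
        return skb->ip_summed == CHECKSUM_PARTIAL;
    }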
Change-ID: I6495b3568e404907a2965b48cf3e2effa7c9ab55
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
The driver was using an offset based on the DMA handle when mapping and
unmapping with dma_sync_single_range_for_[cpu|device], where it should
pass the DMA handle itself (returned from alloc_coherent) and the offset
of the memory to be synced.
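Illustrative usage of the API with the corrected arguments (device and buffer names are placeholders):

    #include <linux/dma-mapping.h>

    static void example_sync_for_cpu(struct device *dev, dma_addr_t dma,
                                     unsigned long offset, size_t len)
    {
        /* Pass the DMA handle itself plus the offset within the mapping,
         * rather than an address computed by adding the offset to the
         * handle.
         */
        dma_sync_single_range_for_cpu(dev, dma, offset, len,
                                      DMA_FROM_DEVICE);
    }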
Change-ID: I208256565b1595ff0e9171ab852de06b997917c6
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Reviewed-by: Nelson, Shannon <shannon.nelson@intel.com>
Reviewed-by: Williams, Mitch A <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
We were not doing write-back on interrupt throttle for the Legacy case on
X722. This patch fixes that, so we do WB_ON_ITR for Legacy as well. It also
fixes the issue that we should still set NO_ITR when touching the DYN_CTLN
register, since we do not want to change the ITR setting here.
Change-ID: I5db8491ee1544118a389db839cecc93e1bbc480e
Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Add write-back on interrupt throttle rate timer expiration support
for the i40evf driver, when running on X722 devices.
Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
If the driver calls skb_set_hash, even with a zero hash, it
indicates to the stack that the hash calculation is offloaded
in hardware, so the stack does not do a SW hash. That SW hash is
required for load balancing if the user decides to turn off rx-hashing
on our device.
This patch fixes the path so that we do not call skb_set_hash
if the feature is disabled.
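A hedged sketch of the fixed path (the helper name is invented):

    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    static void example_rx_hash(struct net_device *netdev,
                                struct sk_buff *skb, u32 hw_hash)
    {
        /* If rx-hashing is turned off, report nothing so the stack
         * computes its own software hash for load balancing.
         */
        if (!(netdev->features & NETIF_F_RXHASH))
            return;

        skb_set_hash(skb, hw_hash, PKT_HASH_TYPE_L4);
    }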
Change-ID: Ic4debfa4ff91b5a72e447348a75768ed7a2d3e1b
Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
1) remove duplicate include of tcp.h
2) put an ampersand at the end of a line instead of the beginning
3) remove a useless dev_info
4) match declaration of function to the implementation
5) repair incorrect comment
6) correct whitespace
7) remove unused define
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
We shouldn't be using a bitwise operator here; it's not a bitwise
operation. Use a logical operator instead. Why doesn't C have a
logical-or-and-assign operator?
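The distinction, shown on a hypothetical poll-completion flag (names are illustrative):

    #include <linux/types.h>

    static bool example_clean_ring(void);   /* hypothetical per-ring helper */

    static bool example_poll_rings(void)
    {
        bool clean_complete = true;

        /* before the fix: clean_complete &= example_clean_ring(); */
        clean_complete = clean_complete && example_clean_ring();

        return clean_complete;
    }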
Change-ID: Id84f3ca884910bed7073c84b1e16a102e958d0de
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This patch fixes a memory leak which would otherwise be seen when the user
programs a flow director filter using ethtool (sideband filter programming).
When ethtool is used to program a flow director filter, 'raw_buf' gets
allocated and is supposed to be freed as part of queue cleanup. But the
check of 'tx_buffer->skb' was preventing it from being freed.
Change-ID: Ief4f0a1a32a653180498bf6e987c1b4342ab8923
Signed-off-by: Kiran Patil <kiran.patil@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This patch contains the following changes:
- detection and recovery logic (issue SW interrupt) has been moved to
service_task from the timeout function.
- added some more debug info from tx_timeout.
The logic to detect and recover a hung Tx queue is now a two-step process
(sketched below):
- service_task detects a hung Tx queue and sets a bit (hung_detected) if
it was not already set.
- if the bit was already set (meaning a back-to-back hung condition was
detected), issue a SW interrupt and clear the bit.
- napi_poll clears the bit unconditionally since it cleans the TX/RX queues.
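A minimal sketch of the two-step detection, with invented flag and helper names:

    #include <linux/bitops.h>

    #define EXAMPLE_HUNG_BIT    0                   /* hypothetical flag bit */

    static void example_trigger_sw_interrupt(void); /* hypothetical */

    /* Called from the service task for a queue with pending Tx work. */
    static void example_detect_recover_hang(unsigned long *state)
    {
        if (test_and_clear_bit(EXAMPLE_HUNG_BIT, state))
            example_trigger_sw_interrupt();         /* back-to-back detection */
        else
            set_bit(EXAMPLE_HUNG_BIT, state);       /* first detection: arm */
    }

    /* napi_poll clears the bit unconditionally whenever it runs. */
    static void example_napi_poll_clears(unsigned long *state)
    {
        clear_bit(EXAMPLE_HUNG_BIT, state);
    }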
Change-ID: Ieed03a48927c845a988b3ff375090bf37caeb903
Signed-off-by: Kiran Patil <kiran.patil@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Instead of awkwardly keeping a fixed array of pointers in the adapter
struct and then allocating ring structs individually, just keep a single
pointer and allocate a single blob for the arrays. This simplifies code,
shrinks the adapter structure, and future-proofs the driver by not
limiting the number of rings we can handle.
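Illustrative allocation of the rings as one contiguous array (the struct layout is an assumption, not the i40evf one):

    #include <linux/slab.h>

    struct example_ring {
        void *desc;     /* descriptor memory, stats, etc. (placeholder) */
    };

    static struct example_ring *example_alloc_rings(unsigned int num_queues)
    {
        /* one kcalloc() for all rings; ring i is simply rings[i] */
        return kcalloc(num_queues, sizeof(struct example_ring), GFP_KERNEL);
    }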
Change-ID: I31334ff911a6474954232cfe4bc98ccca3c769ff
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Issue a prefetch for data early in the transmit path.
This should not be generally needed for Tx traffic, but
it helps immensely for pktgen workloads and should help
for forwarding workloads as well.
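The prefetch itself is a one-liner; a hedged sketch of where it sits in an xmit routine:

    #include <linux/prefetch.h>
    #include <linux/skbuff.h>

    static void example_xmit_prefetch(struct sk_buff *skb)
    {
        /* Pull the packet headers toward the cache before the descriptor
         * setup code needs to read them.
         */
        prefetch(skb->data);
    }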
Change-ID: Iefee870c20599e0c4240e1d8637e4f16b625f83a
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This patch fixes the issue of forcing WB too often causing us to not
benefit from NAPI.
Without this patch we were forcing WB/arming interrupt too often taking
away the benefits of NAPI and causing a performance impact.
With this patch we disable force WB in the clean routine for X710
and XL710 adapters. X722 adapters do not enable interrupt to force
a WB and benefit from WB_ON_ITR and hence force WB is left enabled
for those adapters.
For XL710 and X710 adapters if we have less than 4 packets pending
a software Interrupt triggered from service task will force a WB.
This patch also changes the conditions for setting the RS bit, as described
in code comments. This optimizes when the HW does a tail bump and when
it does a WB. It also optimizes when we do a wmb.
Change-ID: Id831e1ae7d3e2ec3f52cd0917b41ce1d22d75d9d
Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
When in NAPI with interrupts disabled, the HW needs to be forced to do a
write back on TX if the number of descriptors pending is less than a
cache line.
This stat helps keep track of how many times we get into this situation.
Change-ID: I76c1bcc7ebccd6bffcc5aa33bfe05f2fa1c9a984
Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Code was moved into a separate function some time ago.
Change-ID: Icabbe71ce05cf5d716d3e1152cdd9cd41d11bcb5
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
We would like to automatically provide busy polling support
to all NAPI drivers, without them having to implement anything.
skb_mark_napi_id() can be called from napi_gro_receive() and
napi_get_frags().
Few drivers are still calling skb_mark_napi_id() because
they use netif_receive_skb(). They should eventually call
napi_gro_receive() instead. I will leave this to drivers
maintainers.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The adaptive ITR (interrupt throttle rate) algorithm was adjusting
the hardware's interrupt rate too frequently. This caused a lot
of variation in the interrupt rate for fairly constant workloads.
Change the code to have a counter and adjust only once every N
number of interrupts.
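A sketch of the rate limiting, assuming a per-vector countdown field (the names and the value of N are illustrative):

    #define EXAMPLE_ITR_COUNTDOWN_START     100

    struct example_q_vector {
        unsigned int itr_countdown;     /* interrupts until next adjustment */
    };

    static void example_update_itr(struct example_q_vector *q)
    {
        if (q->itr_countdown) {
            q->itr_countdown--;
            return;                     /* too soon, keep the current ITR */
        }

        /* ... run the adaptive ITR algorithm and program the new rate ... */
        q->itr_countdown = EXAMPLE_ITR_COUNTDOWN_START;
    }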
Change-ID: I0460f1f86571037484eca5aca36ac4d889cb8389
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
The dynamic algorithm, while now working, doesn't have good
performance in 40G mode.
One part of this patch addresses high CPU utilization in some small
streaming workloads where the driver should be reducing CPU usage.
It also changes the minimum ITR that the dynamic algorithm
will settle on, causing our minimum latency to go from 12us
to about 14us when using adaptive mode.
It also changes the BULK interrupt rate to allow maximum throughput
on a 40Gb connection with a single transmit thread; clamping the
interrupt rate to 8000 for TX makes single-thread traffic go too
slow.
The new ULTRA bulk setting is introduced and is used
when the Rx packet rate on this queue exceeds 40000 packets per
second. This value of 40000 was chosen because the automatic tuning
of minimum ITR=20us means that a single queue can't quite achieve
that many packets per second from a round-robin test.
Change-ID: Icce8faa128688ca5fd2c4229bdd9726877a92ea2
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
The driver was using a value expressed in 2us increments
for the divisor to figure out our bytes/usec values.
Fix the usecs variable to contain a value in microseconds.
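A sketch of the unit conversion (the register counts in 2 us steps; the helper name is invented):

    static unsigned int example_itr_reg_to_usecs(unsigned int itr_reg)
    {
        return itr_reg * 2;     /* ITR register units are 2 microseconds */
    }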
Change-ID: I5c20493103c295d6f201947bb908add7040b7c41
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This change moves a multi-line register setting into a function
which simplifies reading the flow of the enable function.
This also fixes a bug where the enable function was enabling
the interrupt twice while trying to update the two interrupt
throttle rate thresholds for Rx and Tx.
Change-ID: Ie308f9d0d48540204590cb9d7a5a7b1196f959bb
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
As per Eric Dumazet's previous patches:
(see commit (24d2e4a507) - tg3: use napi_complete_done())
Quoting verbatim:
Using napi_complete_done() instead of napi_complete() allows
us to use /sys/class/net/ethX/gro_flush_timeout
GRO layer can aggregate more packets if the flush is delayed a bit,
without having to set too big coalescing parameters that impact
latencies.
</end quote>
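Usage sketch of the API change being applied (driver-specific ring cleanup elided):

    #include <linux/netdevice.h>

    static int example_poll(struct napi_struct *napi, int budget)
    {
        int work_done = 0;

        /* ... clean the rings, counting Rx packets into work_done ... */

        if (work_done < budget)
            napi_complete_done(napi, work_done);  /* instead of napi_complete() */

        return work_done;
    }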
Tested configuration: low latency via ethtool -C ethx adaptive-rx off
rx-usecs 10 adaptive-tx off tx-usecs 15
workload: streaming rx using netperf TCP_MAERTS
igb:
MIGRATED TCP MAERTS TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.0.0.1 () port 0 AF_INET : demo
...
Interim result: 941.48 10^6bits/s over 1.000 seconds ending at 1440193171.589
Alignment Offset Bytes Bytes Recvs Bytes Sends
Local Remote Local Remote Xfered Per Per
Recv Send Recv Send Recv (avg) Send (avg)
8 8 0 0 1176930056 1475.36 797726 16384.00 71905
MIGRATED TCP MAERTS TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.0.0.1 () port 0 AF_INET : demo
...
Interim result: 941.49 10^6bits/s over 0.997 seconds ending at 1440193142.763
Alignment Offset Bytes Bytes Recvs Bytes Sends
Local Remote Local Remote Xfered Per Per
Recv Send Recv Send Recv (avg) Send (avg)
8 8 0 0 1175182320 50476.00 23282 16384.00 71816
i40e:
Hard to test because the traffic is incoming so fast (24Gb/s) that GRO
always receives 87kB, even at the highest interrupt rate.
Other drivers were only compile tested.
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
The code in i40e and i40evf is using an "IN_NETPOLL" flag that has never
added any value due to the fact that the Rx clean-up is handled in NAPI.
As such the flag was set, the queue was scheduled via NAPI, and then polled
from the netpoll controller, and if any Rx packets were processed they were
processed in the wrong context.
In addition the flag itself just added an unneeded conditional to the
hot path, so it can safely be dropped and save us a few instructions.
Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
The polling routine for i40e was rounding up the budget for Rx cleanup to
1. This is incorrect, as the netpoll poll call expects no Rx to be
processed when the budget passed is 0.
Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Add missing spaces after declarations, remove another __func__ use,
remove unnecessary braces, remove unneeded breaks and useless returns,
and generally fix up some code.
Change-ID: Ie715d6b64976c50e1c21531685fe0a2bd38c4244
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Keep track of how many times we ask the stack to linearize the
skb because the HW cannot handle skbs with more than 8 frags per
segment/single packet.
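A hedged sketch of the accounting (the fragment limit of 8 comes from the text above; the stat and helper names are invented):

    #include <linux/skbuff.h>
    #include <linux/types.h>

    #define EXAMPLE_MAX_FRAGS_PER_PACKET    8

    static int example_maybe_linearize(struct sk_buff *skb, u64 *tx_linearize)
    {
        if (skb_shinfo(skb)->nr_frags <= EXAMPLE_MAX_FRAGS_PER_PACKET)
            return 0;

        (*tx_linearize)++;              /* visible via ethtool stats */
        return skb_linearize(skb);      /* -ENOMEM if the copy fails */
    }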
Change-ID: If455452060963a769bbe6112cba952e79e944b52
Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Function i40e_clean_rx_irq() tries to reuse memory pages allocated
from the nearest node. To better support memoryless nodes, use
numa_mem_id() instead of numa_node_id() to get the nearest node with
memory.
This change should only affect performance.
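The substitution in context, as an illustrative page allocation:

    #include <linux/gfp.h>
    #include <linux/topology.h>

    static struct page *example_alloc_page_near(void)
    {
        /* numa_mem_id() returns the nearest node that actually has memory,
         * unlike numa_node_id() on a memoryless node.
         */
        return alloc_pages_node(numa_mem_id(), GFP_ATOMIC | __GFP_NOWARN, 0);
    }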
Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
The arm writeback (arm_wb) code is used for kicking the Tx ring to
make sure any pending work is completed even if interrupts are
disabled. It was running when it didn't need to, and not clearing
the ring->arm_wb state after it was set. This caused Tx hangs
to still occur occasionally when there really was no hang.
Fix this by resetting the variable right after it was used.
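A sketch of the fix, with invented field and helper names:

    struct example_tx_ring {
        bool arm_wb;                    /* writeback kick requested */
    };

    static void example_force_wb(struct example_tx_ring *ring);   /* hypothetical */

    static void example_maybe_force_wb(struct example_tx_ring *ring)
    {
        if (!ring->arm_wb)
            return;

        example_force_wb(ring);
        ring->arm_wb = false;   /* reset right after use so a stale flag
                                 * cannot make a healthy queue look hung */
    }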
Change-ID: I7bf75d552ba9c4bd203d40615213861a24bb5594
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This patch modifies the driver timeout logic by issuing a writeback
request via a software interrupt to the hardware the first time the
driver detects a hang. The driver was too aggressive in resetting a hung
queue, so back that off by removing logic to down the netdevice after
too many hangs, and move the function to the service task.
Change-ID: Ife100b9d124cd08cbdb81ab659008c1b9abbedea
Signed-off-by: Kiran Patil <kiran.patil@intel.com>
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Use CTLN1 instead of CTLN for the VF relative register space.
Change-ID: Iefba63faf0307af55fec8dbb64f26059f7d91318
Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
X722 supports offloading of outer UDP TX and RX checksum for tunneled
packets. This patch exposes the support and leaves it enabled by
default.
Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com>
Signed-off-by: Catherine Sullivan <catherine.sullivan@intel.com>
Tested-by: Jim Young <james.m.young@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
X722 fixes an issue from X710 where TX descriptor WB would not happen if
the interrupts were disabled. In order for the write-backs to happen, a
bit called WB_ON_ITR needs to be set in the dynamic interrupt control
register. With this feature, the SW driver need not arm SW interrupts to
work around the issue in X710.
Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com>
Signed-off-by: Catherine Sullivan <catherine.sullivan@intel.com>
Tested-by: Jim Young <james.m.young@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Use macros for abstracting (1 << foo) to BIT(foo)
and (1ULL << foo64) to BIT_ULL(foo64) in order to match
better with kernel requirements.
NOTE: the adminq_cmd.h file was not modified on purpose because
of the dependency upon firmware for that file.
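The substitution pattern, with placeholder flag names:

    #include <linux/bitops.h>

    #define EXAMPLE_FLAG_MSIX_ENABLED   BIT(3)      /* was (1 << 3) */
    #define EXAMPLE_HW_CAP_RSS          BIT_ULL(35) /* was (1ULL << 35) */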
Change-ID: I73ee2e48c880d671948aad19bd53ca6b2ac558fc
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Catherine Sullivan <catherine.sullivan@intel.com>
Tested-by: Jim Young <james.m.young@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This patch changes the switch statement for dynamic interrupt throttling
and adds a default case. With this patch, we check the latency setting
instead of the current ITR settings and the included refactor improves
performance.
Without this patch, the ITR setting would never change dynamically, and
there was no default.
Change-ID: Idb5a8a14c7109ec47c90f6e94bd43baa17d7ee37
Signed-off-by: Carolyn Wyborny <carolyn.wyborny@intel.com>
Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com>
Tested-by: Jim Young <james.m.young@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Add a prefetch for the next Tx descriptor to be used when we know
there are more coming.
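Hedged illustration of the descriptor prefetch (the descriptor layout is a placeholder):

    #include <linux/prefetch.h>
    #include <linux/types.h>

    struct example_tx_desc {
        __le64 buffer_addr;
        __le64 cmd_type_offset_bsz;
    };

    static void example_fill_desc(struct example_tx_desc *desc, bool more_to_come)
    {
        if (more_to_come)
            prefetch(desc + 1);     /* warm the next descriptor in the ring */

        /* ... fill out the current descriptor ... */
    }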
Change-ID: Ibb9acab11d508eec2db7da795df74debc16eeacb
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Jim Young <james.m.young@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Down was requesting queue disables, but then exited immediately
without waiting for the queues to actually disable. This could
allow any function called after i40evf_down to run immediately,
including i40evf_up, causing a memory leak.
Removing the whole reinit_locked function is the best way
to go about this, and allows for the driver to handle the
state changes by requesting reset from the periodic timer.
Also, add a couple WARN_ONs in slow path to help us recognize
if we re-introduce this issue or missed any cases.
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
The driver doesn't use the time_stamp member to determine if there is a
tx_hang any more. There really isn't any point to the variable at all
so just remove it. It was left over from a previous tx_hang design.
Change-ID: I4c814827e1bcb46e45118fe37acdcfa814fb62a0
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Jim Young <james.m.young@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Inlining these functions gives us about 15% more 64 byte packets per
second when using pktgen. 13.3 million to 15 million with a single
queue.
Also fix the function names in i40evf to use the i40evf prefix instead of
i40e while we are touching the function headers.
Change-ID: I3294ae9b085cf438672b6db5f9af122490ead9d0
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Catherine Sullivan <catherine.sullivan@intel.com>
Tested-by: Jim Young <james.m.young@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Eric added support for skb->xmit_more in i40e, this ports that into
i40evf as well.
Supporting skb->xmit_more in i40evf is straightforward; we need to move
the i40e_maybe_stop_tx() call so we correctly test netif_xmit_stopped()
before taking the decision to not kick the NIC.
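A sketch of that ordering, using the skb->xmit_more field as it existed at the time (the helper names are invented):

    #include <linux/io.h>
    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    static void example_kick_hw(struct net_device *netdev, struct sk_buff *skb,
                                unsigned int queue_index, void __iomem *tail,
                                u32 next_to_use)
    {
        struct netdev_queue *txq = netdev_get_tx_queue(netdev, queue_index);

        /* Only skip the doorbell when more packets are coming AND the
         * stack has not stopped this queue; otherwise write the tail.
         */
        if (!skb->xmit_more || netif_xmit_stopped(txq))
            writel(next_to_use, tail);
    }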
Change-ID: Idddda6a2e4a7ab335631c91ced51f55b25eb8468
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Jim Young <james.m.young@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
There's no need for a counter so remove the TODO comment.
Change-ID: I3321dda04934c4f5fda9b279ab666192bda44214
Signed-off-by: Greg Rose <gregory.v.rose@intel.com>
Tested-by: Jim Young <james.m.young@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Without this, RSS would have done inner header load balancing. Now we can
get the benefits of ATR for tunneled packets to better align TX and RX
queues with the right core/interrupt.
Change-ID: I07d0e0a192faf28fdd33b2f04c32b2a82ff97ddd
Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com>
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
None of those drivers uses last_rx for its own needs.
See 4dc89133f4 ("net: add a comment on
netdev->last_rx") for reference.
Signed-off-by: Francois Romieu <romieu@fr.zoreil.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Zhangfei Gao <zhangfei.gao@linaro.org>
Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Cc: Wingman Kwok <w-kwok2@ti.com>
Cc: Murali Karicheri <m-karicheri2@ti.com>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Update i40e and i40evf to use dma_rmb. This should improve performance by
decreasing the barrier overhead on strong ordered architectures.
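Usage sketch: the barrier sits between reading the descriptor's done bit and reading the rest of the descriptor (the bit position is illustrative):

    #include <asm/barrier.h>
    #include <linux/types.h>

    static bool example_desc_done(const volatile u64 *status_qword)
    {
        u64 qword = *status_qword;

        if (!(qword & 1))       /* illustrative "descriptor done" bit */
            return false;

        /* Ensure the DD bit is read before any other descriptor fields;
         * dma_rmb() is lighter than rmb() on strongly ordered CPUs.
         */
        dma_rmb();
        return true;
    }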
Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
If transmit VLAN HW offloads are disabled then the network stack sends up
an skb with the protocol set to 8021q. In that case to get the correct
checksum offloads we have to reset the skb protocol to the encapsulated
ethertype.
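A hedged sketch of the fixup: vlan_get_protocol() looks inside the 802.1Q header for the encapsulated ethertype (the wrapper name is invented).

    #include <linux/if_vlan.h>
    #include <linux/skbuff.h>

    static __be16 example_tx_protocol(const struct sk_buff *skb)
    {
        /* With VLAN Tx offload disabled, skb->protocol is ETH_P_8021Q;
         * the checksum offload code needs the inner ethertype instead.
         */
        return vlan_get_protocol(skb);
    }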
Change-ID: I903d78533de09b1c5d3ec695ee1990dd0fa5dd0d
Signed-off-by: Greg Rose <gregory.v.rose@intel.com>
Tested-by: Jim Young <james.m.young@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
If the skb allocation fails we should not continue using the skb
pointer. Breaking out at the point of failure means that at the next
RX interrupt the driver will try the allocation again.
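A sketch of the failure handling (the stat and helper names are invented):

    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    static struct sk_buff *example_get_rx_skb(struct net_device *netdev,
                                              unsigned int len,
                                              u64 *alloc_failed)
    {
        struct sk_buff *skb = netdev_alloc_skb_ip_align(netdev, len);

        if (!skb)
            (*alloc_failed)++;      /* the caller breaks out of the clean
                                     * loop; the next Rx interrupt retries */

        return skb;
    }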
Change-ID: Iefaad69856ced7418bfd92afe55322676341f82e
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Jim Young <james.m.young@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>