Size used with 'dma_alloc_coherent()' and 'dma_free_coherent()' should be
consistent.
Here, the size of a pointer is used in dma_alloc_coherent() and the size of the
pointed-to structure is used in dma_free_coherent().
This has been spotted with coccinelle, using the following script:
////////////////////
@r@
expression x0, x1, y0, y1, z0, z1, t0, t1, ret;
@@
* ret = dma_alloc_coherent(x0, y0, z0, t0);
...
* dma_free_coherent(x1, y1, ret, t1);
@script:python@
y0 << r.y0;
y1 << r.y1;
@@
if y1.find(y0) == -1:
    print "WARNING: sizes look different: '%s' vs '%s'" % (y0, y1)
////////////////////
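To illustrate the class of bug, here is a minimal, hypothetical fragment; the
structure and variable names are placeholders, not taken from the patched driver:

	struct foo_desc *desc;
	dma_addr_t desc_dma;

	/* Bug: allocates only sizeof(pointer) bytes ... */
	desc = dma_alloc_coherent(dev, sizeof(desc), &desc_dma, GFP_KERNEL);
	/* ... but frees the size of the pointed-to structure. */
	dma_free_coherent(dev, sizeof(*desc), desc, desc_dma);

	/* Fix: use the pointed-to structure's size in both calls. */
	desc = dma_alloc_coherent(dev, sizeof(*desc), &desc_dma, GFP_KERNEL);
	dma_free_coherent(dev, sizeof(*desc), desc, desc_dma);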
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
To ensure the dev->phydev pointer is not used after becoming invalid in
mdiobus_unregister, set it to NULL. This happens when removing the macb
driver without first taking its interface down, since unregister_netdev
will end up calling macb_close.
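A minimal sketch of the idea (placement and surrounding code are illustrative,
not the exact upstream diff):

	/* In the close path, once the PHY has been disconnected, drop the
	 * stale pointer so nothing can dereference it after
	 * mdiobus_unregister() runs, e.g. when the driver is removed with
	 * the interface still up. */
	phy_disconnect(dev->phydev);
	dev->phydev = NULL;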
Signed-off-by: Xander Huff <xander.huff@ni.com>
Signed-off-by: Nathan Sullivan <nathan.sullivan@ni.com>
Signed-off-by: Brad Mouring <brad.mouring@ni.com>
Reviewed-by: Moritz Fischer <moritz.fischer@ettus.com>
Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
In the case when a frontend only negotiates a single queue with xen-
netback it is possible for a skbuff with a s/w hash to result in a
hash extra_info segment being sent to the frontend even when no hash
algorithm has been configured. (The ndo_select_queue() entry point makes
sure the hash is not set if no algorithm is configured, but this entry
point is not called when there is only a single queue). This can result
in a frontend that is unable to handle extra_info segments being given
such a segment, causing it to crash.
This patch fixes the problem by clearing the hash in ndo_start_xmit()
instead, which is clearly guaranteed to be called irrespective of the
number of queues.
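A rough sketch of the approach (the field names are illustrative):

	/* In the driver's ndo_start_xmit handler: if no hash algorithm has
	 * been configured, drop any s/w hash so the frontend never sees an
	 * unexpected hash extra_info segment. */
	if (vif->hash.alg == XEN_NETIF_CTRL_HASH_ALGORITHM_NONE)
		skb_clear_hash(skb);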
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Acked-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When processing a REG_MR work request, if fw supports the
FW_RI_NSMR_TPTE_WR work request, and if the page list for this
registration is <= 2 pages, and the current state of the mr is INVALID,
then use FW_RI_NSMR_TPTE_WR to pass down a fully populated TPTE for FW
to write. This avoids FW having to do an async read of the TPTE, which blocks
the SQ until the read completes.
To know if the current MR state is INVALID or not, iw_cxgb4 must track the
state of each fastreg MR. The c4iw_mr struct state is updated as REG_MR
and LOCAL_INV WRs are posted and completed, when a reg_mr is destroyed,
and when RECV completions are processed that include a local invalidation.
This optimization increases small IO IOPS for both iSER and NVMF.
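A sketch of the state tracking and the resulting decision (identifier names are
placeholders, not necessarily the driver's exact ones):

	/* Per-MR state, updated as REG_MR and LOCAL_INV WRs complete and as
	 * RECV completions carrying a local invalidation are processed. */
	enum c4iw_mr_state {
		C4IW_MR_STATE_INVALID,	/* after a local invalidation */
		C4IW_MR_STATE_VALID,	/* after a REG_MR completes */
	};

	/* When posting a REG_MR work request: */
	if (fw_supports_nsmr_tpte_wr && mr->npages <= 2 &&
	    mr->state == C4IW_MR_STATE_INVALID)
		use_inline_tpte_wr = true;	/* build FW_RI_NSMR_TPTE_WR */
	else
		use_inline_tpte_wr = false;	/* classic REG_MR; FW reads the TPTE */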
Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Query firmware for the FW_PARAMS_PARAM_DEV_RI_FR_NSMR_TPTE_WR parameter.
If it exists and is 1, then advertise support for FR_NSMR_TPTE_WR to
the ULDs.
Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
In MLX qp packets, the LRH (built by the driver) has both a VL field
and an SL field. When building a QP1 packet, the VL field should
reflect the SLtoVL mapping and not arbitrarily contain zero (as is
done now). This bug causes credit problems in IB switches at
high rates of QP1 packets.
The fix is to cache the SL to VL mapping in the driver, and look up
the VL mapped to the SL provided in the send request when sending
QP1 packets.
For FW versions which support generating a port_management_config_change
event with subtype sl-to-vl-table-change, the driver uses that event
to update its sl-to-vl mapping cache. Otherwise, the driver snoops
incoming SMP mads to update the cache.
There remains the case where the FW is running in secure-host mode
(so no QP0 packets are delivered to the driver), and the FW does not
generate the sl2vl mapping change event. To support this case, when running in
secure-host mode the driver updates its sl2vl mapping cache (by querying the FW)
whenever it receives either a Port Up event or a client-reregister event (where
the port is still up, but there may have been an opensm failover).
OpenSM modifies the sl2vl mapping before Port Up and Client-reregister
events occur, so if there is a mapping change the driver's cache will
be properly updated.
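Conceptually, when the driver builds a QP1 packet it now does something like
this (sketch; the cache layout and names are illustrative):

	/* Look up the VL that the cached SL-to-VL table maps this SL to,
	 * instead of leaving the LRH VL field at zero.  The cache is kept
	 * up to date from the FW event, SMP snooping, or an explicit FW
	 * query, as described above. */
	header->lrh.virtual_lane = sl2vl_cache[port - 1][sl & 0xf];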
Fixes: 225c7b1feef1 ("IB/mlx4: Add a driver for Mellanox ConnectX InfiniBand adapters")
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Doug Ledford <dledford@redhat.com>
An extra entry for MDIO_XGENE got added during merging.
Delete it.
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Laura Abbott <labbott@redhat.com>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
If NO_DMA=y:
drivers/built-in.o: In function `emac_probe':
emac.c:(.text+0x3780b8): undefined reference to `bad_dma_ops'
emac.c:(.text+0x3780e2): undefined reference to `bad_dma_ops'
emac.c:(.text+0x378112): undefined reference to `bad_dma_ops'
emac.c:(.text+0x378146): undefined reference to `bad_dma_ops'
emac.c:(.text+0x37816e): undefined reference to `bad_dma_ops'
drivers/built-in.o:emac.c:(.text+0x37819a): more undefined references to `bad_dma_ops' follow
If NO_IOMEM=y:
drivers/net/ethernet/qualcomm/emac/emac.c: In function ‘emac_remove’:
drivers/net/ethernet/qualcomm/emac/emac.c:736:3: error: implicit declaration of function ‘iounmap’ [-Werror=implicit-function-declaration]
iounmap(adpt->phy.digital);
^
Add dependencies on HAS_DMA and HAS_IOMEM to fix this.
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Acked-by: Timur Tabi <timur@codeaurora.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Because hw lro is only supported starting from MT7623, the proper way to check
whether the feature is available is to look at the chip id rather than the dtsi.
Signed-off-by: Nelson Chang <nelson.chang@mediatek.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The driver gets the chip id by ETHSYS_CHIPID0_3/ETHSYS_CHIPID4_7 registers
in mtk_probe().
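A rough sketch of the probe-time read (register names as given above; the
surrounding code is illustrative):

	/* Read the chip id words through the ethsys regmap so that later
	 * code, e.g. the HW LRO capability check, can key off the chip id. */
	unsigned int id0, id4;

	regmap_read(eth->ethsys, ETHSYS_CHIPID0_3, &id0);
	regmap_read(eth->ethsys, ETHSYS_CHIPID4_7, &id4);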
Signed-off-by: Nelson Chang <nelson.chang@mediatek.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
During the conversion to the feature flags, a check against
ci->id != BCMA_CHIP_ID_BCM47162
became
bgmac->feature_flags & BGMAC_FEAT_CLKCTLS
instead of
!(bgmac->feature_flags & BGMAC_FEAT_CLKCTLS)
Reported-by: Rafał Miłecki <zajec5@gmail.com>
Signed-off-by: Jon Mason <jon.mason@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Wake-on-LAN (WoL) is an Ethernet networking standard that allows
a computer/device to be turned on or awakened by a network message.
The VSC8531 PHY supports this feature: it is configured through the driver's
set function, and the WoL status is read back through the driver's get function.
Tested on Beaglebone Black with VSC 8531 PHY.
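The wiring is roughly as follows (a sketch; the handler names and id value are
illustrative, not necessarily the driver's exact identifiers):

	static struct phy_driver vsc8531_driver = {
		.phy_id		= PHY_ID_VSC8531,
		.name		= "Microsemi VSC8531",
		/* ... other mandatory phy_driver fields ... */
		.get_wol	= &vsc8531_wol_get,	/* report current WoL state */
		.set_wol	= &vsc8531_wol_set,	/* program magic-packet wake-up */
	};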
Signed-off-by: Raju Lakkaraju <Raju.Lakkaraju@microsemi.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add support for enabling the CPSW RGMII internal delay (id mode) bits
when RGMII internal delay is configured in the PHY.
Signed-off-by: Mugunthan V N <mugunthanvnm@ti.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Trivial fix: dev_err messages are missing a \n, so add it. Also fix grammar
and spelling mistakes and add white space to various error messages.
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Trivial fix: the dev_dbg message is missing a \n, so add it.
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Trivial fix: dev_err messages are missing a \n, so add it.
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This allows full 64K skbuffs (with 1500 mtu ethernet, composed of 45
fragments) to be handled by netback for to-guest rx.
Signed-off-by: Ross Lagerwall <ross.lagerwall@citrix.com>
[re-based]
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Reviewed-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Instead of flushing the copy ops when a packet is complete, complete
packets when their copy ops are done. This improves performance by
reducing the number of grant copy hypercalls.
Latency is still limited by the relatively small size of the copy
batch.
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
[re-based]
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Instead of only placing one skb on the guest rx ring at a time, process
a batch of up to 64. This improves performance by ~10% in some tests.
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
[re-based]
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When an skb is removed from the guest rx queue, immediately wake the
tx queue, instead of after it has been processed.
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
[re-based]
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Refactor the to-guest (rx) path to:
1. Push responses for completed skbs earlier, reducing latency.
2. Reduce the per-queue memory overhead by greatly reducing the
maximum number of grant copy ops in each hypercall (from 4352 to
64). Each struct xenvif_queue is now only 44 kB instead of 220 kB.
3. Make the code more maintainable.
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
[re-based]
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
As far as I am aware only very old Windows network frontends make use of
this style of passing GSO packets from backend to frontend. These
frontends can easily be replaced by the freely available Xen Project
Windows PV network frontend, which uses the 'default' mechanism for
passing GSO packets, which is also used by all Linux frontends.
NOTE: Removal of this feature will not cause breakage in old Windows
frontends. They simply will no longer receive GSO packets - the
packets instead being fragmented in the backend.
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Reviewed-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The netback source module has become very large and somewhat confusing.
This patch simply moves all code related to the backend-to-frontend (i.e.
guest-side rx) data-path into a separate rx source module.
This patch contains no functional change, it is code movement and
minimal changes to avoid patch style-check issues.
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Madalin Bucur says:
====================
fsl/fman: cleanup and small fixes
This series contains fixes for the DPAA FMan driver.
Adding myself as maintainer of the driver.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Set bit 0 in register 1C.23 to enable the EDPD feature of the
KSZ9031 PHY. This reduces power consumption when the link is
down.
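Conceptually the change is a read-modify-write of that register (a sketch; it
assumes the 1C.xx registers are reached through MMD device address 0x1c --
check the driver for the exact access helper it uses):

	int val = phy_read_mmd(phydev, 0x1c, 0x23);

	if (val >= 0)
		phy_write_mmd(phydev, 0x1c, 0x23, val | BIT(0));	/* EDPD enable */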
Signed-off-by: Mike Looijmans <mike.looijmans@topic.nl>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Pull networking updates from David Miller:
1) BBR TCP congestion control, from Neal Cardwell, Yuchung Cheng and
co. at Google. https://lwn.net/Articles/701165/
2) Do TCP Small Queues for retransmits, from Eric Dumazet.
3) Support collect_md mode for all IPV4 and IPV6 tunnels, from Alexei
Starovoitov.
4) Allow cls_flower to classify packets in ip tunnels, from Amir Vadai.
5) Support DSA tagging in older mv88e6xxx switches, from Andrew Lunn.
6) Support GMAC protocol in iwlwifi mvm, from Ayala Beker.
7) Support ndo_poll_controller in mlx5, from Calvin Owens.
8) Move VRF processing to an output hook and allow l3mdev to be
loopback, from David Ahern.
9) Support SOCK_DESTROY for UDP sockets. Also from David Ahern.
10) Congestion control in RXRPC, from David Howells.
11) Support geneve RX offload in ixgbe, from Emil Tantilov.
12) When hitting pressure for new incoming TCP data SKBs, perform a
partial rather than a full purge of the OFO queue (which could be
huge). From Eric Dumazet.
13) Convert XFRM state and policy lookups to RCU, from Florian Westphal.
14) Support RX network flow classification to igb, from Gangfeng Huang.
15) Hardware offloading of eBPF in nfp driver, from Jakub Kicinski.
16) New skbmod packet action, from Jamal Hadi Salim.
17) Remove some inefficiencies in snmp proc output, from Jia He.
18) Add FIB notifications to properly propagate route changes to
hardware which is doing forwarding offloading. From Jiri Pirko.
19) New dsa driver for qca8xxx chips, from John Crispin.
20) Implement RFC7559 ipv6 router solicitation backoff, from Maciej
Żenczykowski.
21) Add L3 mode to ipvlan, from Mahesh Bandewar.
22) Support 802.1ad in mlx4, from Moshe Shemesh.
23) Support hardware LRO in mediatek driver, from Nelson Chang.
24) Add TC offloading to mlx5, from Or Gerlitz.
25) Convert various drivers to ethtool ksettings interfaces, from
Philippe Reynes.
26) TX max rate limiting for cxgb4, from Rahul Lakkireddy.
27) NAPI support for ath10k, from Rajkumar Manoharan.
28) Support XDP in mlx5, from Rana Shahout and Saeed Mahameed.
29) UDP replicast support in TIPC, from Richard Alpe.
30) Per-queue statistics for qed driver, from Sudarsana Reddy Kalluru.
31) Support BQL in thunderx driver, from Sunil Goutham.
32) TSO support in alx driver, from Tobias Regnery.
33) Add stream parser engine and use it in kcm.
34) Support async DHCP replies in ipconfig module, from Uwe
Kleine-König.
35) DSA port fast aging for mv88e6xxx driver, from Vivien Didelot.
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next: (1715 commits)
mlxsw: switchx2: Fix misuse of hard_header_len
mlxsw: spectrum: Fix misuse of hard_header_len
net/faraday: Stop NCSI device on shutdown
net/ncsi: Introduce ncsi_stop_dev()
net/ncsi: Rework the channel monitoring
net/ncsi: Allow to extend NCSI request properties
net/ncsi: Rework request index allocation
net/ncsi: Don't probe on the reserved channel ID (0x1f)
net/ncsi: Introduce NCSI_RESERVED_CHANNEL
net/ncsi: Avoid unused-value build warning from ia64-linux-gcc
net: Add netdev all_adj_list refcnt propagation to fix panic
net: phy: Add Edge-rate driver for Microsemi PHYs.
vmxnet3: Wake queue from reset work
i40e: avoid NULL pointer dereference and recursive errors on early PCI error
qed: Add RoCE ll2 & GSI support
qed: Add support for memory registeration verbs
qed: Add support for QP verbs
qed: PD,PKEY and CQ verb support
qed: Add support for RoCE hw init
qede: Add qedr framework
...
In order to specify that the mlxsw switchx2 driver needs additional
headroom for packets, the hard_header_len field of the netdevice struct
has been used.
This commit changes that to use needed_headroom instead, as this is the
correct way to do that.
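In essence (illustrative, not the literal diff):

	/* Before: hard_header_len was (mis)used to reserve room for the TX header. */
	dev->hard_header_len += MLXSW_TXHDR_LEN;

	/* After: needed_headroom is the field intended for extra headroom. */
	dev->needed_headroom = MLXSW_TXHDR_LEN;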
Fixes: 31557f0f9755 ("mlxsw: Introduce Mellanox SwitchX-2 ASIC support")
Signed-off-by: Yotam Gigi <yotamg@mellanox.com>
Acked-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
In order to specify that the mlxsw spectrum driver needs additional
headroom for packets, the hard_header_len field of the netdevice struct
has been used.
This commit changes that to use needed_headroom instead, as this is the
correct way to do that.
Fixes: 56ade8fe3fe1 ("mlxsw: spectrum: Add initial support for Spectrum ASIC")
Signed-off-by: Yotam Gigi <yotamg@mellanox.com>
Acked-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This stops the NCSI device when the network device is closed, so that the
NCSI device can be re-enabled later.
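Sketch of the call site (the driver-side names are illustrative):

	/* In the netdev's stop/close handler: stop the NCSI device so it can
	 * be cleanly re-enabled the next time the interface is opened. */
	if (priv->use_ncsi)
		ncsi_stop_dev(priv->ndev);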
Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Reviewed-by: Joel Stanley <joel@jms.id.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Edge-rate:
As system and networking speeds increase, a signal's output transition,
also known as the edge rate or slew rate (V/ns), takes on greater importance
because high-speed signals come with a price. That price is an assortment of
interference problems like ringing on the line, signal overshoot and
undershoot, extended signal settling times, crosstalk noise, transmission
line reflections, false signal detection by the receiving device and
electromagnetic interference (EMI) -- all of which can negate the potential
gains designers are seeking when they try to increase system speeds through
the use of higher performance logic devices. The fact is, faster signaling
edge rates can cause a higher level of electrical noise or other types of
interference that can actually lead to slower line speeds and lower maximum
system frequencies. This parameter allows board designers to change the
drive strength, and thereby the EMI behavior.
The edge-rate parameters (vddmac, edge-slowdown) are obtained from the Device Tree.
Tested on Beaglebone Black with VSC 8531 PHY.
Signed-off-by: Raju Lakkaraju <Raju.Lakkaraju@microsemi.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
vmxnet3_reset_work() expects tx queues to be stopped (via
vmxnet3_quiesce_dev -> netif_tx_disable). However, this races with the
netif_wake_queue() call in netif_tx_timeout() such that the driver's
start_xmit routine may be called unexpectedly, triggering one of the BUG_ON
in vmxnet3_map_pkt with a stack trace like this:
RIP: 0010:[<ffffffffa00cf4bc>] vmxnet3_map_pkt+0x3ac/0x4c0 [vmxnet3]
[<ffffffffa00cf7e0>] vmxnet3_tq_xmit+0x210/0x4e0 [vmxnet3]
[<ffffffff813ab144>] dev_hard_start_xmit+0x2e4/0x4c0
[<ffffffff813c956e>] sch_direct_xmit+0x17e/0x1e0
[<ffffffff813c96a7>] __qdisc_run+0xd7/0x130
[<ffffffff813a6a7a>] net_tx_action+0x10a/0x200
[<ffffffff810691df>] __do_softirq+0x11f/0x260
[<ffffffff81472fdc>] call_softirq+0x1c/0x30
[<ffffffff81004695>] do_softirq+0x65/0xa0
[<ffffffff81069b89>] local_bh_enable_ip+0x99/0xa0
[<ffffffffa031ff36>] destroy_conntrack+0x96/0x110 [nf_conntrack]
[<ffffffff813d65e2>] nf_conntrack_destroy+0x12/0x20
[<ffffffff8139c6d5>] skb_release_head_state+0xb5/0xf0
[<ffffffff8139d299>] skb_release_all+0x9/0x20
[<ffffffff8139cfe9>] __kfree_skb+0x9/0x90
[<ffffffffa00d0069>] vmxnet3_quiesce_dev+0x209/0x340 [vmxnet3]
[<ffffffffa00d020a>] vmxnet3_reset_work+0x6a/0xa0 [vmxnet3]
[<ffffffff8107d7cc>] process_one_work+0x16c/0x350
[<ffffffff810804fa>] worker_thread+0x17a/0x410
[<ffffffff810848c6>] kthread+0x96/0xa0
[<ffffffff81472ee4>] kernel_thread_helper+0x4/0x10
Signed-off-by: Benjamin Poirier <bpoirier@suse.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jeff Kirsher says:
====================
100GbE Intel Wired LAN Driver Updates 2016-10-02
This series contains updates to fm10k only.
Jake fixes an issue where PTP applications requesting software timestamps
may complain that the requested mode is not supported, so add a generic
callback for those drivers that have software transmit timestamp support
enabled. Then provides a trivial cleanup where code was not wrapped properly,
to make sure that the code looks good within the 80 character limit.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Although rare, it's possible to hit a PCI error early in device probe,
meaning some structs may not be entirely initialized, and some might even
be completely uninitialized, leading to a NULL pointer dereference.
The i40e driver currently presents "bad" behavior if the device hits such
an early PCI error: firstly, the struct i40e_pf might not be attached to
the pci_dev yet, leading to a NULL pointer dereference on access to
pf->state.
Even checking if the struct is NULL and avoiding the access in that
case isn't enough, since the driver cannot recover from PCI error
that early; in our experiments we saw multiple failures on kernel
log, like:
[549.664] i40e 0007:01:00.1: Initial pf_reset failed: -15
[549.664] i40e: probe of 0007:01:00.1 failed with error -15
[...]
[871.644] i40e 0007:01:00.1: The driver for the device stopped because the
device firmware failed to init. Try updating your NVM image.
[871.644] i40e: probe of 0007:01:00.1 failed with error -32
[...]
[872.516] i40e 0007:01:00.0: ARQ: Unknown event 0x0000 ignored
Between the first probe failure (error -15) and the second (error -32)
another PCI error happened due to the first bad probe. Also, driver
started to flood console with those ARQ event messages.
This patch will prevent these issues by allowing error recovery
mechanism to remove the failed device from the system instead of
trying to recover from early PCI errors during device probe.
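The essence of the change, sketched (close to, but not necessarily, the
literal patch):

	static pci_ers_result_t i40e_pci_error_detected(struct pci_dev *pdev,
							pci_channel_state_t error)
	{
		struct i40e_pf *pf = pci_get_drvdata(pdev);

		/* If probe never got far enough to attach the private struct,
		 * recovery is impossible -- ask the core to remove the device. */
		if (!pf) {
			dev_info(&pdev->dev,
				 "Cannot recover - error happened during device probe\n");
			return PCI_ERS_RESULT_DISCONNECT;
		}

		/* ... normal error handling continues here ... */
		return PCI_ERS_RESULT_NEED_RESET;
	}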
CC: <stable@vger.kernel.org>
Signed-off-by: Guilherme G Piccoli <gpiccoli@linux.vnet.ibm.com>
Acked-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jeff Kirsher says:
====================
40GbE Intel Wired LAN Driver Updates 2016-10-03
This series contains fixes to i40e only.
Stefan Assmann provides the changes in this series to resolve an issue
where, when we run out of MSIx vectors, iWARP gets disabled automatically.
First adds a check for "no vectors left" during MSIx vector allocation
for VMDq, which will prevent more vectors being allocated than available.
Then fixes the MSIx vector redistribution when we reach the hardware limit
for vectors, so that additional features like VMDq, iWARP, etc. do not get
starved for vectors because the PF is hogging all the resources. Lastly,
fixes the issue for flow director by moving the check for reaching the
vector limit earlier in the code so that a decision can be made on
disabling flow director.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Add the RoCE-specific LL2 logic [as well as GSI support] over
the 'generic' LL2 interface.
Signed-off-by: Ram Amrani <Ram.Amrani@caviumnetworks.com>
Signed-off-by: Yuval Mintz <Yuval.Mintz@caviumnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add slowpath configuration support for the registration of user, DMA and
memory regions.
Signed-off-by: Ram Amrani <Ram.Amrani@caviumnetworks.com>
Signed-off-by: Yuval Mintz <Yuval.Mintz@caviumnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add support for the slowpath configuration of Queue Pair verbs, which add,
delete, modify and query Queue Pairs.
Signed-off-by: Ram Amrani <Ram.Amrani@caviumnetworks.com>
Signed-off-by: Yuval Mintz <Yuval.Mintz@caviumnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add support for the configuration of the protection domain and
completion queues.
Signed-off-by: Ram Amrani <Ram.Amrani@caviumnetworks.com>
Signed-off-by: Yuval Mintz <Yuval.Mintz@caviumnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>