Merge branch 'ionic-Add-ionic-driver'

Shannon Nelson says:

====================
ionic: Add ionic driver

This is a patch series that adds the ionic driver, supporting the Pensando
Ethernet device.

In this initial patchset we implement basic transmit and receive.  Later
patchsets will add more advanced features.

Our thanks to Saeed Mahameed, David Miller, Andrew Lunn, Michal Kubecek,
Jakub Kicinski, Jiri Pirko, Yunsheng Lin, and the ever-present kbuild
test robots for their comments and suggestions.

New in v7:
 - stop Tx queue if no descriptor space left after a Tx
 - return ETIMEDOUT if the module data can't be copied out safely
 - remove unnecessary synchronize_irq() before free_irq()
 - use eth_prepare_mac_addr_change() and eth_commit_mac_addr_change() helpers
   (see the sketch after this list)
 - propagate error out of ionic_dl_info_get()
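
For illustration only (not code from the patches), the usual
ndo_set_mac_address pattern built on those two helpers looks roughly
like this; the function name is hypothetical:

    #include <linux/etherdevice.h>
    #include <linux/netdevice.h>

    static int example_set_mac_address(struct net_device *netdev, void *addr)
    {
            int err;

            /* rejects invalid addresses and, unless the device allows
             * live address changes, refuses to change while the netdev
             * is up
             */
            err = eth_prepare_mac_addr_change(netdev, addr);
            if (err)
                    return err;

            /* ... program the new unicast filter into the hardware ... */

            /* record the new address in netdev->dev_addr */
            eth_commit_mac_addr_change(netdev, addr);
            return 0;
    }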

New in v6:
 - added a new patch with devlink info tags for ASIC and general FW
 - use the new devlink info tags in the driver
 - fixed up TxRx cleanup on setup failure
 - allow for possible 0 address from dma mapping of Tx buffers
 - remove a few more unnecessary debugfs error checks
 - use innocuous hardcoded strings in the identify message
 - removed a couple of unused functions and definitions
 - fix a leak in the error handling of port_info setup
 - changed from BUILD_BUG_ON() to static_assert() (see the sketch after
   this list)
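
For illustration only: with static_assert() the layout checks can sit
at file scope, where BUILD_BUG_ON() (which must be inside a function
body) could not; this is the form ionic_dev.h uses further down:

    #include <linux/build_bug.h>
    #include "ionic_if.h"               /* struct ionic_txq_desc */

    /* compile-time descriptor layout check at file scope */
    static_assert(sizeof(struct ionic_txq_desc) == 16);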

New in v5:
 - code reorganized for more sane layout, with a side benefit of getting
   rid of a "defined but not used" complaint after patch 5
 - added "ionic_" prefix to struct definitions and fixed up remaining
   reverse Christmas tree formatting (I think I got them all...)
 - ndo_open and ndo_stop reworked for better error recovery
 - interrupt coalescing enabled at driver start
 - unnecessary log messaging removed from events
 - double copy added in the module prom read to ensure a clean copy
 - added BQL counting (see the sketch after this list)
 - fixed a TSO unmap issue found in testing
 - generalized a bit-flag wait with timeout
 - added devlink into earlier code and dropped patch 19
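
For illustration only, a sketch of the standard byte queue limits (BQL)
accounting pair with hypothetical function names; the driver's own
per-queue accounting lives in ionic_txrx.c:

    #include <linux/netdevice.h>

    /* after posting a frame of 'bytes' bytes on Tx queue 0 */
    static void example_tx_sent(struct net_device *netdev, unsigned int bytes)
    {
            netdev_tx_sent_queue(netdev_get_tx_queue(netdev, 0), bytes);
    }

    /* after reclaiming 'pkts' completed frames totalling 'bytes' bytes */
    static void example_tx_done(struct net_device *netdev,
                                unsigned int pkts, unsigned int bytes)
    {
            netdev_tx_completed_queue(netdev_get_tx_queue(netdev, 0),
                                      pkts, bytes);
    }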

New in v4:
 - use devlink struct alloc for ionic device specific struct
 - add support for devlink_port
 - fixup devlink fixed vs running version usage
 - use bitmap_copy() instead of memcpy() for link_ksettings
 - don't bother to zero out the advertising bits before copying
   in the support bits
 - drop unknown xcvr types (will be expanded on later)
 - flap the connection to force auto-negotiation
 - use is_power_of_2() rather than open code (see the sketch after this list)
 - simplify set/get_pauseparam use of pause->autoneg
 - add a couple comments about NIC status data updated in DMA spaces
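
Both helper swaps appear later in this series (bitmap_copy() in
ionic_get_link_ksettings(), is_power_of_2() in the ring size checks in
ionic_dev.c); as a reminder, the open-coded test the latter replaces:

    #include <linux/log2.h>

    static bool example_ring_count_ok(unsigned int num_descs)
    {
            /* open-coded equivalent:
             *   num_descs && !(num_descs & (num_descs - 1))
             */
            return is_power_of_2(num_descs);
    }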

New in v3:
 - use le32_to_cpu() on queue_count[] values in debugfs
 - dma_free_coherent() can handle NULL pointers
 - remove unused SS_TEST from ethtool handlers
 - one more case of stop the tx ring if there is no room
 - remove a couple of stray // comments

New in v2:
 - removed debugfs error checking and cut down on debugfs use
 - remove redundant bounds checking on incoming values for mtu and ethtool
 - don't alloc rx_filter memory until the match type has been checked
 - free the ionic struct on remove
 - simplified link_up and netif_carrier_ok comparison
 - put stats into ethtool -S, out of debugfs
 - moved dev_cmd and dev_info dumping to ethtool -d, out of debugfs
 - added devlink support
 - used the kernel's RSS init routines rather than open code (see the sketch
   after this list)
 - set the Kbuild dependent on 64BIT
 - cut down on some unnecessary log messaging
 - cleaned up ionic_get_link_ksettings
 - cleaned up other little code bits here and there
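
For illustration only, the RSS init helpers referred to above, with
hypothetical names (the driver's own setup lives elsewhere in the
series):

    #include <linux/netdevice.h>
    #include <linux/ethtool.h>

    static void example_rss_init(u8 *key, size_t key_len,
                                 u32 *indir, u32 indir_sz, u32 nrxqs)
    {
            u32 i;

            /* random hash key */
            netdev_rss_key_fill(key, key_len);

            /* spread the indirection table across the Rx queues */
            for (i = 0; i < indir_sz; i++)
                    indir[i] = ethtool_rxfh_indir_default(i, nrxqs);
    }
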
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
Committed by David S. Miller on 2019-09-05 09:24:44 +02:00
commit e7ac4ea0fe
32 changed files with 9692 additions and 0 deletions


@@ -23,6 +23,7 @@ Contents:
intel/ice
google/gve
mellanox/mlx5
pensando/ionic
.. only:: subproject


@@ -0,0 +1,43 @@
.. SPDX-License-Identifier: GPL-2.0+
==========================================================
Linux* Driver for the Pensando(R) Ethernet adapter family
==========================================================
Pensando Linux Ethernet driver.
Copyright(c) 2019 Pensando Systems, Inc
Contents
========
- Identifying the Adapter
- Support
Identifying the Adapter
=======================
To find if one or more Pensando PCI Ethernet devices are installed on the
host, check for the PCI devices::
$ lspci -d 1dd8:
b5:00.0 Ethernet controller: Device 1dd8:1002
b6:00.0 Ethernet controller: Device 1dd8:1002
If such devices are listed as above, then the ionic.ko driver should find
and configure them for use. There should be log entries in the kernel
messages such as these::
$ dmesg | grep ionic
ionic Pensando Ethernet NIC Driver, ver 0.15.0-k
ionic 0000:b5:00.0 enp181s0: renamed from eth0
ionic 0000:b6:00.0 enp182s0: renamed from eth0
Support
=======
For general Linux networking support, please use the netdev mailing
list, which is monitored by Pensando personnel::
netdev@vger.kernel.org
For more specific support needs, please use the Pensando driver support
email::
drivers@pensando.io


@@ -14,11 +14,27 @@ board.rev
Board design revision.
asic.id
=======
ASIC design identifier.
asic.rev
========
ASIC design revision.
board.manufacture
=================
An identifier of the company or the facility which produced the part.
fw
==
Overall firmware version, often representing the collection of
fw.mgmt, fw.app, etc.
fw.mgmt
=======


@@ -12608,6 +12608,14 @@ L: platform-driver-x86@vger.kernel.org
S: Maintained
F: drivers/platform/x86/peaq-wmi.c
PENSANDO ETHERNET DRIVERS
M: Shannon Nelson <snelson@pensando.io>
M: Pensando Drivers <drivers@pensando.io>
L: netdev@vger.kernel.org
S: Supported
F: Documentation/networking/device_drivers/pensando/ionic.rst
F: drivers/net/ethernet/pensando/
PER-CPU MEMORY ALLOCATOR
M: Dennis Zhou <dennis@kernel.org>
M: Tejun Heo <tj@kernel.org>


@@ -168,6 +168,7 @@ config ETHOC
source "drivers/net/ethernet/packetengines/Kconfig"
source "drivers/net/ethernet/pasemi/Kconfig"
source "drivers/net/ethernet/pensando/Kconfig"
source "drivers/net/ethernet/qlogic/Kconfig"
source "drivers/net/ethernet/qualcomm/Kconfig"
source "drivers/net/ethernet/rdc/Kconfig"


@@ -97,3 +97,4 @@ obj-$(CONFIG_NET_VENDOR_WIZNET) += wiznet/
obj-$(CONFIG_NET_VENDOR_XILINX) += xilinx/
obj-$(CONFIG_NET_VENDOR_XIRCOM) += xircom/
obj-$(CONFIG_NET_VENDOR_SYNOPSYS) += synopsys/
obj-$(CONFIG_NET_VENDOR_PENSANDO) += pensando/


@@ -0,0 +1,32 @@
# SPDX-License-Identifier: GPL-2.0
# Copyright (c) 2019 Pensando Systems, Inc
#
# Pensando device configuration
#
config NET_VENDOR_PENSANDO
bool "Pensando devices"
default y
help
If you have a network (Ethernet) card belonging to this class, say Y.
Note that the answer to this question doesn't directly affect the
kernel: saying N will just cause the configurator to skip all
the questions about Pensando cards. If you say Y, you will be asked
for your specific card in the following questions.
if NET_VENDOR_PENSANDO
config IONIC
tristate "Pensando Ethernet IONIC Support"
depends on 64BIT && PCI
help
This enables the support for the Pensando family of Ethernet
adapters. More specific information on this driver can be
found in
<file:Documentation/networking/device_drivers/pensando/ionic.rst>.
To compile this driver as a module, choose M here. The module
will be called ionic.
endif # NET_VENDOR_PENSANDO


@@ -0,0 +1,6 @@
# SPDX-License-Identifier: GPL-2.0
#
# Makefile for the Pensando network device drivers.
#
obj-$(CONFIG_IONIC) += ionic/


@@ -0,0 +1,8 @@
# SPDX-License-Identifier: GPL-2.0
# Copyright(c) 2017 - 2019 Pensando Systems, Inc
obj-$(CONFIG_IONIC) := ionic.o
ionic-y := ionic_main.o ionic_bus_pci.o ionic_devlink.o ionic_dev.o \
ionic_debugfs.o ionic_lif.o ionic_rx_filter.o ionic_ethtool.o \
ionic_txrx.o ionic_stats.o


@@ -0,0 +1,73 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2017 - 2019 Pensando Systems, Inc */
#ifndef _IONIC_H_
#define _IONIC_H_
struct ionic_lif;
#include "ionic_if.h"
#include "ionic_dev.h"
#include "ionic_devlink.h"
#define IONIC_DRV_NAME "ionic"
#define IONIC_DRV_DESCRIPTION "Pensando Ethernet NIC Driver"
#define IONIC_DRV_VERSION "0.15.0-k"
#define PCI_VENDOR_ID_PENSANDO 0x1dd8
#define PCI_DEVICE_ID_PENSANDO_IONIC_ETH_PF 0x1002
#define PCI_DEVICE_ID_PENSANDO_IONIC_ETH_VF 0x1003
#define IONIC_SUBDEV_ID_NAPLES_25 0x4000
#define IONIC_SUBDEV_ID_NAPLES_100_4 0x4001
#define IONIC_SUBDEV_ID_NAPLES_100_8 0x4002
#define DEVCMD_TIMEOUT 10
struct ionic {
struct pci_dev *pdev;
struct device *dev;
struct devlink_port dl_port;
struct ionic_dev idev;
struct mutex dev_cmd_lock; /* lock for dev_cmd operations */
struct dentry *dentry;
struct ionic_dev_bar bars[IONIC_BARS_MAX];
unsigned int num_bars;
struct ionic_identity ident;
struct list_head lifs;
struct ionic_lif *master_lif;
unsigned int nnqs_per_lif;
unsigned int neqs_per_lif;
unsigned int ntxqs_per_lif;
unsigned int nrxqs_per_lif;
DECLARE_BITMAP(lifbits, IONIC_LIFS_MAX);
unsigned int nintrs;
DECLARE_BITMAP(intrs, IONIC_INTR_CTRL_REGS_MAX);
struct work_struct nb_work;
struct notifier_block nb;
};
struct ionic_admin_ctx {
struct completion work;
union ionic_adminq_cmd cmd;
union ionic_adminq_comp comp;
};
int ionic_napi(struct napi_struct *napi, int budget, ionic_cq_cb cb,
ionic_cq_done_cb done_cb, void *done_arg);
int ionic_adminq_post_wait(struct ionic_lif *lif, struct ionic_admin_ctx *ctx);
int ionic_dev_cmd_wait(struct ionic *ionic, unsigned long max_wait);
int ionic_set_dma_mask(struct ionic *ionic);
int ionic_setup(struct ionic *ionic);
int ionic_identify(struct ionic *ionic);
int ionic_init(struct ionic *ionic);
int ionic_reset(struct ionic *ionic);
int ionic_port_identify(struct ionic *ionic);
int ionic_port_init(struct ionic *ionic);
int ionic_port_reset(struct ionic *ionic);
#endif /* _IONIC_H_ */


@@ -0,0 +1,16 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2017 - 2019 Pensando Systems, Inc */
#ifndef _IONIC_BUS_H_
#define _IONIC_BUS_H_
int ionic_bus_get_irq(struct ionic *ionic, unsigned int num);
const char *ionic_bus_info(struct ionic *ionic);
int ionic_bus_alloc_irq_vectors(struct ionic *ionic, unsigned int nintrs);
void ionic_bus_free_irq_vectors(struct ionic *ionic);
int ionic_bus_register_driver(void);
void ionic_bus_unregister_driver(void);
void __iomem *ionic_bus_map_dbpage(struct ionic *ionic, int page_num);
void ionic_bus_unmap_dbpage(struct ionic *ionic, void __iomem *page);
#endif /* _IONIC_BUS_H_ */


@@ -0,0 +1,292 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2017 - 2019 Pensando Systems, Inc */
#include <linux/module.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
#include <linux/pci.h>
#include "ionic.h"
#include "ionic_bus.h"
#include "ionic_lif.h"
#include "ionic_debugfs.h"
/* Supported devices */
static const struct pci_device_id ionic_id_table[] = {
{ PCI_VDEVICE(PENSANDO, PCI_DEVICE_ID_PENSANDO_IONIC_ETH_PF) },
{ PCI_VDEVICE(PENSANDO, PCI_DEVICE_ID_PENSANDO_IONIC_ETH_VF) },
{ 0, } /* end of table */
};
MODULE_DEVICE_TABLE(pci, ionic_id_table);
int ionic_bus_get_irq(struct ionic *ionic, unsigned int num)
{
return pci_irq_vector(ionic->pdev, num);
}
const char *ionic_bus_info(struct ionic *ionic)
{
return pci_name(ionic->pdev);
}
int ionic_bus_alloc_irq_vectors(struct ionic *ionic, unsigned int nintrs)
{
return pci_alloc_irq_vectors(ionic->pdev, nintrs, nintrs,
PCI_IRQ_MSIX);
}
void ionic_bus_free_irq_vectors(struct ionic *ionic)
{
pci_free_irq_vectors(ionic->pdev);
}
static int ionic_map_bars(struct ionic *ionic)
{
struct pci_dev *pdev = ionic->pdev;
struct device *dev = ionic->dev;
struct ionic_dev_bar *bars;
unsigned int i, j;
bars = ionic->bars;
ionic->num_bars = 0;
for (i = 0, j = 0; i < IONIC_BARS_MAX; i++) {
if (!(pci_resource_flags(pdev, i) & IORESOURCE_MEM))
continue;
bars[j].len = pci_resource_len(pdev, i);
/* only map the whole bar 0 */
if (j > 0) {
bars[j].vaddr = NULL;
} else {
bars[j].vaddr = pci_iomap(pdev, i, bars[j].len);
if (!bars[j].vaddr) {
dev_err(dev,
"Cannot memory-map BAR %d, aborting\n",
i);
return -ENODEV;
}
}
bars[j].bus_addr = pci_resource_start(pdev, i);
bars[j].res_index = i;
ionic->num_bars++;
j++;
}
return 0;
}
static void ionic_unmap_bars(struct ionic *ionic)
{
struct ionic_dev_bar *bars = ionic->bars;
unsigned int i;
for (i = 0; i < IONIC_BARS_MAX; i++) {
if (bars[i].vaddr) {
iounmap(bars[i].vaddr);
bars[i].bus_addr = 0;
bars[i].vaddr = NULL;
bars[i].len = 0;
}
}
}
void __iomem *ionic_bus_map_dbpage(struct ionic *ionic, int page_num)
{
return pci_iomap_range(ionic->pdev,
ionic->bars[IONIC_PCI_BAR_DBELL].res_index,
(u64)page_num << PAGE_SHIFT, PAGE_SIZE);
}
void ionic_bus_unmap_dbpage(struct ionic *ionic, void __iomem *page)
{
iounmap(page);
}
static int ionic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
{
struct device *dev = &pdev->dev;
struct ionic *ionic;
int err;
ionic = ionic_devlink_alloc(dev);
if (!ionic)
return -ENOMEM;
ionic->pdev = pdev;
ionic->dev = dev;
pci_set_drvdata(pdev, ionic);
mutex_init(&ionic->dev_cmd_lock);
/* Query system for DMA addressing limitation for the device. */
err = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(IONIC_ADDR_LEN));
if (err) {
dev_err(dev, "Unable to obtain 64-bit DMA for consistent allocations, aborting. err=%d\n",
err);
goto err_out_clear_drvdata;
}
ionic_debugfs_add_dev(ionic);
/* Setup PCI device */
err = pci_enable_device_mem(pdev);
if (err) {
dev_err(dev, "Cannot enable PCI device: %d, aborting\n", err);
goto err_out_debugfs_del_dev;
}
err = pci_request_regions(pdev, IONIC_DRV_NAME);
if (err) {
dev_err(dev, "Cannot request PCI regions: %d, aborting\n", err);
goto err_out_pci_disable_device;
}
pci_set_master(pdev);
err = ionic_map_bars(ionic);
if (err)
goto err_out_pci_clear_master;
/* Configure the device */
err = ionic_setup(ionic);
if (err) {
dev_err(dev, "Cannot setup device: %d, aborting\n", err);
goto err_out_unmap_bars;
}
err = ionic_identify(ionic);
if (err) {
dev_err(dev, "Cannot identify device: %d, aborting\n", err);
goto err_out_teardown;
}
err = ionic_init(ionic);
if (err) {
dev_err(dev, "Cannot init device: %d, aborting\n", err);
goto err_out_teardown;
}
/* Configure the ports */
err = ionic_port_identify(ionic);
if (err) {
dev_err(dev, "Cannot identify port: %d, aborting\n", err);
goto err_out_reset;
}
err = ionic_port_init(ionic);
if (err) {
dev_err(dev, "Cannot init port: %d, aborting\n", err);
goto err_out_reset;
}
/* Configure LIFs */
err = ionic_lif_identify(ionic, IONIC_LIF_TYPE_CLASSIC,
&ionic->ident.lif);
if (err) {
dev_err(dev, "Cannot identify LIFs: %d, aborting\n", err);
goto err_out_port_reset;
}
err = ionic_lifs_size(ionic);
if (err) {
dev_err(dev, "Cannot size LIFs: %d, aborting\n", err);
goto err_out_port_reset;
}
err = ionic_lifs_alloc(ionic);
if (err) {
dev_err(dev, "Cannot allocate LIFs: %d, aborting\n", err);
goto err_out_free_irqs;
}
err = ionic_lifs_init(ionic);
if (err) {
dev_err(dev, "Cannot init LIFs: %d, aborting\n", err);
goto err_out_free_lifs;
}
err = ionic_lifs_register(ionic);
if (err) {
dev_err(dev, "Cannot register LIFs: %d, aborting\n", err);
goto err_out_deinit_lifs;
}
err = ionic_devlink_register(ionic);
if (err) {
dev_err(dev, "Cannot register devlink: %d\n", err);
goto err_out_deregister_lifs;
}
return 0;
err_out_deregister_lifs:
ionic_lifs_unregister(ionic);
err_out_deinit_lifs:
ionic_lifs_deinit(ionic);
err_out_free_lifs:
ionic_lifs_free(ionic);
err_out_free_irqs:
ionic_bus_free_irq_vectors(ionic);
err_out_port_reset:
ionic_port_reset(ionic);
err_out_reset:
ionic_reset(ionic);
err_out_teardown:
ionic_dev_teardown(ionic);
err_out_unmap_bars:
ionic_unmap_bars(ionic);
pci_release_regions(pdev);
err_out_pci_clear_master:
pci_clear_master(pdev);
err_out_pci_disable_device:
pci_disable_device(pdev);
err_out_debugfs_del_dev:
ionic_debugfs_del_dev(ionic);
err_out_clear_drvdata:
mutex_destroy(&ionic->dev_cmd_lock);
ionic_devlink_free(ionic);
return err;
}
static void ionic_remove(struct pci_dev *pdev)
{
struct ionic *ionic = pci_get_drvdata(pdev);
if (!ionic)
return;
ionic_devlink_unregister(ionic);
ionic_lifs_unregister(ionic);
ionic_lifs_deinit(ionic);
ionic_lifs_free(ionic);
ionic_bus_free_irq_vectors(ionic);
ionic_port_reset(ionic);
ionic_reset(ionic);
ionic_dev_teardown(ionic);
ionic_unmap_bars(ionic);
pci_release_regions(pdev);
pci_clear_master(pdev);
pci_disable_device(pdev);
ionic_debugfs_del_dev(ionic);
mutex_destroy(&ionic->dev_cmd_lock);
ionic_devlink_free(ionic);
}
static struct pci_driver ionic_driver = {
.name = IONIC_DRV_NAME,
.id_table = ionic_id_table,
.probe = ionic_probe,
.remove = ionic_remove,
};
int ionic_bus_register_driver(void)
{
return pci_register_driver(&ionic_driver);
}
void ionic_bus_unregister_driver(void)
{
pci_unregister_driver(&ionic_driver);
}


@@ -0,0 +1,248 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2017 - 2019 Pensando Systems, Inc */
#include <linux/pci.h>
#include <linux/netdevice.h>
#include "ionic.h"
#include "ionic_bus.h"
#include "ionic_lif.h"
#include "ionic_debugfs.h"
#ifdef CONFIG_DEBUG_FS
static struct dentry *ionic_dir;
void ionic_debugfs_create(void)
{
ionic_dir = debugfs_create_dir(IONIC_DRV_NAME, NULL);
}
void ionic_debugfs_destroy(void)
{
debugfs_remove_recursive(ionic_dir);
}
void ionic_debugfs_add_dev(struct ionic *ionic)
{
ionic->dentry = debugfs_create_dir(ionic_bus_info(ionic), ionic_dir);
}
void ionic_debugfs_del_dev(struct ionic *ionic)
{
debugfs_remove_recursive(ionic->dentry);
ionic->dentry = NULL;
}
static int identity_show(struct seq_file *seq, void *v)
{
struct ionic *ionic = seq->private;
struct ionic_identity *ident;
ident = &ionic->ident;
seq_printf(seq, "nlifs: %d\n", ident->dev.nlifs);
seq_printf(seq, "nintrs: %d\n", ident->dev.nintrs);
seq_printf(seq, "ndbpgs_per_lif: %d\n", ident->dev.ndbpgs_per_lif);
seq_printf(seq, "intr_coal_mult: %d\n", ident->dev.intr_coal_mult);
seq_printf(seq, "intr_coal_div: %d\n", ident->dev.intr_coal_div);
seq_printf(seq, "max_ucast_filters: %d\n", ident->lif.eth.max_ucast_filters);
seq_printf(seq, "max_mcast_filters: %d\n", ident->lif.eth.max_mcast_filters);
return 0;
}
DEFINE_SHOW_ATTRIBUTE(identity);
void ionic_debugfs_add_ident(struct ionic *ionic)
{
debugfs_create_file("identity", 0400, ionic->dentry,
ionic, &identity_fops) ? 0 : -EOPNOTSUPP;
}
void ionic_debugfs_add_sizes(struct ionic *ionic)
{
debugfs_create_u32("nlifs", 0400, ionic->dentry,
(u32 *)&ionic->ident.dev.nlifs);
debugfs_create_u32("nintrs", 0400, ionic->dentry, &ionic->nintrs);
debugfs_create_u32("ntxqs_per_lif", 0400, ionic->dentry,
(u32 *)&ionic->ident.lif.eth.config.queue_count[IONIC_QTYPE_TXQ]);
debugfs_create_u32("nrxqs_per_lif", 0400, ionic->dentry,
(u32 *)&ionic->ident.lif.eth.config.queue_count[IONIC_QTYPE_RXQ]);
}
static int q_tail_show(struct seq_file *seq, void *v)
{
struct ionic_queue *q = seq->private;
seq_printf(seq, "%d\n", q->tail->index);
return 0;
}
DEFINE_SHOW_ATTRIBUTE(q_tail);
static int q_head_show(struct seq_file *seq, void *v)
{
struct ionic_queue *q = seq->private;
seq_printf(seq, "%d\n", q->head->index);
return 0;
}
DEFINE_SHOW_ATTRIBUTE(q_head);
static int cq_tail_show(struct seq_file *seq, void *v)
{
struct ionic_cq *cq = seq->private;
seq_printf(seq, "%d\n", cq->tail->index);
return 0;
}
DEFINE_SHOW_ATTRIBUTE(cq_tail);
static const struct debugfs_reg32 intr_ctrl_regs[] = {
{ .name = "coal_init", .offset = 0, },
{ .name = "mask", .offset = 4, },
{ .name = "credits", .offset = 8, },
{ .name = "mask_on_assert", .offset = 12, },
{ .name = "coal_timer", .offset = 16, },
};
void ionic_debugfs_add_qcq(struct ionic_lif *lif, struct ionic_qcq *qcq)
{
struct dentry *q_dentry, *cq_dentry, *intr_dentry, *stats_dentry;
struct ionic_dev *idev = &lif->ionic->idev;
struct debugfs_regset32 *intr_ctrl_regset;
struct ionic_intr_info *intr = &qcq->intr;
struct debugfs_blob_wrapper *desc_blob;
struct device *dev = lif->ionic->dev;
struct ionic_queue *q = &qcq->q;
struct ionic_cq *cq = &qcq->cq;
qcq->dentry = debugfs_create_dir(q->name, lif->dentry);
debugfs_create_x32("total_size", 0400, qcq->dentry, &qcq->total_size);
debugfs_create_x64("base_pa", 0400, qcq->dentry, &qcq->base_pa);
q_dentry = debugfs_create_dir("q", qcq->dentry);
debugfs_create_u32("index", 0400, q_dentry, &q->index);
debugfs_create_x64("base_pa", 0400, q_dentry, &q->base_pa);
if (qcq->flags & IONIC_QCQ_F_SG) {
debugfs_create_x64("sg_base_pa", 0400, q_dentry,
&q->sg_base_pa);
debugfs_create_u32("sg_desc_size", 0400, q_dentry,
&q->sg_desc_size);
}
debugfs_create_u32("num_descs", 0400, q_dentry, &q->num_descs);
debugfs_create_u32("desc_size", 0400, q_dentry, &q->desc_size);
debugfs_create_u32("pid", 0400, q_dentry, &q->pid);
debugfs_create_u32("qid", 0400, q_dentry, &q->hw_index);
debugfs_create_u32("qtype", 0400, q_dentry, &q->hw_type);
debugfs_create_u64("drop", 0400, q_dentry, &q->drop);
debugfs_create_u64("stop", 0400, q_dentry, &q->stop);
debugfs_create_u64("wake", 0400, q_dentry, &q->wake);
debugfs_create_file("tail", 0400, q_dentry, q, &q_tail_fops);
debugfs_create_file("head", 0400, q_dentry, q, &q_head_fops);
desc_blob = devm_kzalloc(dev, sizeof(*desc_blob), GFP_KERNEL);
if (!desc_blob)
return;
desc_blob->data = q->base;
desc_blob->size = (unsigned long)q->num_descs * q->desc_size;
debugfs_create_blob("desc_blob", 0400, q_dentry, desc_blob);
if (qcq->flags & IONIC_QCQ_F_SG) {
desc_blob = devm_kzalloc(dev, sizeof(*desc_blob), GFP_KERNEL);
if (!desc_blob)
return;
desc_blob->data = q->sg_base;
desc_blob->size = (unsigned long)q->num_descs * q->sg_desc_size;
debugfs_create_blob("sg_desc_blob", 0400, q_dentry,
desc_blob);
}
cq_dentry = debugfs_create_dir("cq", qcq->dentry);
debugfs_create_x64("base_pa", 0400, cq_dentry, &cq->base_pa);
debugfs_create_u32("num_descs", 0400, cq_dentry, &cq->num_descs);
debugfs_create_u32("desc_size", 0400, cq_dentry, &cq->desc_size);
debugfs_create_u8("done_color", 0400, cq_dentry,
(u8 *)&cq->done_color);
debugfs_create_file("tail", 0400, cq_dentry, cq, &cq_tail_fops);
desc_blob = devm_kzalloc(dev, sizeof(*desc_blob), GFP_KERNEL);
if (!desc_blob)
return;
desc_blob->data = cq->base;
desc_blob->size = (unsigned long)cq->num_descs * cq->desc_size;
debugfs_create_blob("desc_blob", 0400, cq_dentry, desc_blob);
if (qcq->flags & IONIC_QCQ_F_INTR) {
intr_dentry = debugfs_create_dir("intr", qcq->dentry);
debugfs_create_u32("index", 0400, intr_dentry,
&intr->index);
debugfs_create_u32("vector", 0400, intr_dentry,
&intr->vector);
intr_ctrl_regset = devm_kzalloc(dev, sizeof(*intr_ctrl_regset),
GFP_KERNEL);
if (!intr_ctrl_regset)
return;
intr_ctrl_regset->regs = intr_ctrl_regs;
intr_ctrl_regset->nregs = ARRAY_SIZE(intr_ctrl_regs);
intr_ctrl_regset->base = &idev->intr_ctrl[intr->index];
debugfs_create_regset32("intr_ctrl", 0400, intr_dentry,
intr_ctrl_regset);
}
if (qcq->flags & IONIC_QCQ_F_NOTIFYQ) {
stats_dentry = debugfs_create_dir("notifyblock", qcq->dentry);
debugfs_create_u64("eid", 0400, stats_dentry,
(u64 *)&lif->info->status.eid);
debugfs_create_u16("link_status", 0400, stats_dentry,
(u16 *)&lif->info->status.link_status);
debugfs_create_u32("link_speed", 0400, stats_dentry,
(u32 *)&lif->info->status.link_speed);
debugfs_create_u16("link_down_count", 0400, stats_dentry,
(u16 *)&lif->info->status.link_down_count);
}
}
static int netdev_show(struct seq_file *seq, void *v)
{
struct net_device *netdev = seq->private;
seq_printf(seq, "%s\n", netdev->name);
return 0;
}
DEFINE_SHOW_ATTRIBUTE(netdev);
void ionic_debugfs_add_lif(struct ionic_lif *lif)
{
lif->dentry = debugfs_create_dir(lif->name, lif->ionic->dentry);
debugfs_create_file("netdev", 0400, lif->dentry,
lif->netdev, &netdev_fops);
}
void ionic_debugfs_del_lif(struct ionic_lif *lif)
{
debugfs_remove_recursive(lif->dentry);
lif->dentry = NULL;
}
void ionic_debugfs_del_qcq(struct ionic_qcq *qcq)
{
debugfs_remove_recursive(qcq->dentry);
qcq->dentry = NULL;
}
#endif


@@ -0,0 +1,34 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2017 - 2019 Pensando Systems, Inc */
#ifndef _IONIC_DEBUGFS_H_
#define _IONIC_DEBUGFS_H_
#include <linux/debugfs.h>
#ifdef CONFIG_DEBUG_FS
void ionic_debugfs_create(void);
void ionic_debugfs_destroy(void);
void ionic_debugfs_add_dev(struct ionic *ionic);
void ionic_debugfs_del_dev(struct ionic *ionic);
void ionic_debugfs_add_ident(struct ionic *ionic);
void ionic_debugfs_add_sizes(struct ionic *ionic);
void ionic_debugfs_add_lif(struct ionic_lif *lif);
void ionic_debugfs_add_qcq(struct ionic_lif *lif, struct ionic_qcq *qcq);
void ionic_debugfs_del_lif(struct ionic_lif *lif);
void ionic_debugfs_del_qcq(struct ionic_qcq *qcq);
#else
static inline void ionic_debugfs_create(void) { }
static inline void ionic_debugfs_destroy(void) { }
static inline void ionic_debugfs_add_dev(struct ionic *ionic) { }
static inline void ionic_debugfs_del_dev(struct ionic *ionic) { }
static inline void ionic_debugfs_add_ident(struct ionic *ionic) { }
static inline void ionic_debugfs_add_sizes(struct ionic *ionic) { }
static inline void ionic_debugfs_add_lif(struct ionic_lif *lif) { }
static inline void ionic_debugfs_add_qcq(struct ionic_lif *lif, struct ionic_qcq *qcq) { }
static inline void ionic_debugfs_del_lif(struct ionic_lif *lif) { }
static inline void ionic_debugfs_del_qcq(struct ionic_qcq *qcq) { }
#endif
#endif /* _IONIC_DEBUGFS_H_ */


@@ -0,0 +1,500 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2017 - 2019 Pensando Systems, Inc */
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/errno.h>
#include <linux/io.h>
#include <linux/slab.h>
#include <linux/etherdevice.h>
#include "ionic.h"
#include "ionic_dev.h"
#include "ionic_lif.h"
void ionic_init_devinfo(struct ionic *ionic)
{
struct ionic_dev *idev = &ionic->idev;
idev->dev_info.asic_type = ioread8(&idev->dev_info_regs->asic_type);
idev->dev_info.asic_rev = ioread8(&idev->dev_info_regs->asic_rev);
memcpy_fromio(idev->dev_info.fw_version,
idev->dev_info_regs->fw_version,
IONIC_DEVINFO_FWVERS_BUFLEN);
memcpy_fromio(idev->dev_info.serial_num,
idev->dev_info_regs->serial_num,
IONIC_DEVINFO_SERIAL_BUFLEN);
idev->dev_info.fw_version[IONIC_DEVINFO_FWVERS_BUFLEN] = 0;
idev->dev_info.serial_num[IONIC_DEVINFO_SERIAL_BUFLEN] = 0;
dev_dbg(ionic->dev, "fw_version %s\n", idev->dev_info.fw_version);
}
int ionic_dev_setup(struct ionic *ionic)
{
struct ionic_dev_bar *bar = ionic->bars;
unsigned int num_bars = ionic->num_bars;
struct ionic_dev *idev = &ionic->idev;
struct device *dev = ionic->dev;
u32 sig;
/* BAR0: dev_cmd and interrupts */
if (num_bars < 1) {
dev_err(dev, "No bars found, aborting\n");
return -EFAULT;
}
if (bar->len < IONIC_BAR0_SIZE) {
dev_err(dev, "Resource bar size %lu too small, aborting\n",
bar->len);
return -EFAULT;
}
idev->dev_info_regs = bar->vaddr + IONIC_BAR0_DEV_INFO_REGS_OFFSET;
idev->dev_cmd_regs = bar->vaddr + IONIC_BAR0_DEV_CMD_REGS_OFFSET;
idev->intr_status = bar->vaddr + IONIC_BAR0_INTR_STATUS_OFFSET;
idev->intr_ctrl = bar->vaddr + IONIC_BAR0_INTR_CTRL_OFFSET;
sig = ioread32(&idev->dev_info_regs->signature);
if (sig != IONIC_DEV_INFO_SIGNATURE) {
dev_err(dev, "Incompatible firmware signature %x", sig);
return -EFAULT;
}
ionic_init_devinfo(ionic);
/* BAR1: doorbells */
bar++;
if (num_bars < 2) {
dev_err(dev, "Doorbell bar missing, aborting\n");
return -EFAULT;
}
idev->db_pages = bar->vaddr;
idev->phy_db_pages = bar->bus_addr;
return 0;
}
void ionic_dev_teardown(struct ionic *ionic)
{
/* place holder */
}
/* Devcmd Interface */
u8 ionic_dev_cmd_status(struct ionic_dev *idev)
{
return ioread8(&idev->dev_cmd_regs->comp.comp.status);
}
bool ionic_dev_cmd_done(struct ionic_dev *idev)
{
return ioread32(&idev->dev_cmd_regs->done) & IONIC_DEV_CMD_DONE;
}
void ionic_dev_cmd_comp(struct ionic_dev *idev, union ionic_dev_cmd_comp *comp)
{
memcpy_fromio(comp, &idev->dev_cmd_regs->comp, sizeof(*comp));
}
void ionic_dev_cmd_go(struct ionic_dev *idev, union ionic_dev_cmd *cmd)
{
memcpy_toio(&idev->dev_cmd_regs->cmd, cmd, sizeof(*cmd));
iowrite32(0, &idev->dev_cmd_regs->done);
iowrite32(1, &idev->dev_cmd_regs->doorbell);
}
/* Device commands */
void ionic_dev_cmd_identify(struct ionic_dev *idev, u8 ver)
{
union ionic_dev_cmd cmd = {
.identify.opcode = IONIC_CMD_IDENTIFY,
.identify.ver = ver,
};
ionic_dev_cmd_go(idev, &cmd);
}
void ionic_dev_cmd_init(struct ionic_dev *idev)
{
union ionic_dev_cmd cmd = {
.init.opcode = IONIC_CMD_INIT,
.init.type = 0,
};
ionic_dev_cmd_go(idev, &cmd);
}
void ionic_dev_cmd_reset(struct ionic_dev *idev)
{
union ionic_dev_cmd cmd = {
.reset.opcode = IONIC_CMD_RESET,
};
ionic_dev_cmd_go(idev, &cmd);
}
/* Port commands */
void ionic_dev_cmd_port_identify(struct ionic_dev *idev)
{
union ionic_dev_cmd cmd = {
.port_init.opcode = IONIC_CMD_PORT_IDENTIFY,
.port_init.index = 0,
};
ionic_dev_cmd_go(idev, &cmd);
}
void ionic_dev_cmd_port_init(struct ionic_dev *idev)
{
union ionic_dev_cmd cmd = {
.port_init.opcode = IONIC_CMD_PORT_INIT,
.port_init.index = 0,
.port_init.info_pa = cpu_to_le64(idev->port_info_pa),
};
ionic_dev_cmd_go(idev, &cmd);
}
void ionic_dev_cmd_port_reset(struct ionic_dev *idev)
{
union ionic_dev_cmd cmd = {
.port_reset.opcode = IONIC_CMD_PORT_RESET,
.port_reset.index = 0,
};
ionic_dev_cmd_go(idev, &cmd);
}
void ionic_dev_cmd_port_state(struct ionic_dev *idev, u8 state)
{
union ionic_dev_cmd cmd = {
.port_setattr.opcode = IONIC_CMD_PORT_SETATTR,
.port_setattr.index = 0,
.port_setattr.attr = IONIC_PORT_ATTR_STATE,
.port_setattr.state = state,
};
ionic_dev_cmd_go(idev, &cmd);
}
void ionic_dev_cmd_port_speed(struct ionic_dev *idev, u32 speed)
{
union ionic_dev_cmd cmd = {
.port_setattr.opcode = IONIC_CMD_PORT_SETATTR,
.port_setattr.index = 0,
.port_setattr.attr = IONIC_PORT_ATTR_SPEED,
.port_setattr.speed = cpu_to_le32(speed),
};
ionic_dev_cmd_go(idev, &cmd);
}
void ionic_dev_cmd_port_autoneg(struct ionic_dev *idev, u8 an_enable)
{
union ionic_dev_cmd cmd = {
.port_setattr.opcode = IONIC_CMD_PORT_SETATTR,
.port_setattr.index = 0,
.port_setattr.attr = IONIC_PORT_ATTR_AUTONEG,
.port_setattr.an_enable = an_enable,
};
ionic_dev_cmd_go(idev, &cmd);
}
void ionic_dev_cmd_port_fec(struct ionic_dev *idev, u8 fec_type)
{
union ionic_dev_cmd cmd = {
.port_setattr.opcode = IONIC_CMD_PORT_SETATTR,
.port_setattr.index = 0,
.port_setattr.attr = IONIC_PORT_ATTR_FEC,
.port_setattr.fec_type = fec_type,
};
ionic_dev_cmd_go(idev, &cmd);
}
void ionic_dev_cmd_port_pause(struct ionic_dev *idev, u8 pause_type)
{
union ionic_dev_cmd cmd = {
.port_setattr.opcode = IONIC_CMD_PORT_SETATTR,
.port_setattr.index = 0,
.port_setattr.attr = IONIC_PORT_ATTR_PAUSE,
.port_setattr.pause_type = pause_type,
};
ionic_dev_cmd_go(idev, &cmd);
}
/* LIF commands */
void ionic_dev_cmd_lif_identify(struct ionic_dev *idev, u8 type, u8 ver)
{
union ionic_dev_cmd cmd = {
.lif_identify.opcode = IONIC_CMD_LIF_IDENTIFY,
.lif_identify.type = type,
.lif_identify.ver = ver,
};
ionic_dev_cmd_go(idev, &cmd);
}
void ionic_dev_cmd_lif_init(struct ionic_dev *idev, u16 lif_index,
dma_addr_t info_pa)
{
union ionic_dev_cmd cmd = {
.lif_init.opcode = IONIC_CMD_LIF_INIT,
.lif_init.index = cpu_to_le16(lif_index),
.lif_init.info_pa = cpu_to_le64(info_pa),
};
ionic_dev_cmd_go(idev, &cmd);
}
void ionic_dev_cmd_lif_reset(struct ionic_dev *idev, u16 lif_index)
{
union ionic_dev_cmd cmd = {
.lif_init.opcode = IONIC_CMD_LIF_RESET,
.lif_init.index = cpu_to_le16(lif_index),
};
ionic_dev_cmd_go(idev, &cmd);
}
void ionic_dev_cmd_adminq_init(struct ionic_dev *idev, struct ionic_qcq *qcq,
u16 lif_index, u16 intr_index)
{
struct ionic_queue *q = &qcq->q;
struct ionic_cq *cq = &qcq->cq;
union ionic_dev_cmd cmd = {
.q_init.opcode = IONIC_CMD_Q_INIT,
.q_init.lif_index = cpu_to_le16(lif_index),
.q_init.type = q->type,
.q_init.index = cpu_to_le32(q->index),
.q_init.flags = cpu_to_le16(IONIC_QINIT_F_IRQ |
IONIC_QINIT_F_ENA),
.q_init.pid = cpu_to_le16(q->pid),
.q_init.intr_index = cpu_to_le16(intr_index),
.q_init.ring_size = ilog2(q->num_descs),
.q_init.ring_base = cpu_to_le64(q->base_pa),
.q_init.cq_ring_base = cpu_to_le64(cq->base_pa),
};
ionic_dev_cmd_go(idev, &cmd);
}
int ionic_db_page_num(struct ionic_lif *lif, int pid)
{
return (lif->hw_index * lif->dbid_count) + pid;
}
int ionic_cq_init(struct ionic_lif *lif, struct ionic_cq *cq,
struct ionic_intr_info *intr,
unsigned int num_descs, size_t desc_size)
{
struct ionic_cq_info *cur;
unsigned int ring_size;
unsigned int i;
if (desc_size == 0 || !is_power_of_2(num_descs))
return -EINVAL;
ring_size = ilog2(num_descs);
if (ring_size < 2 || ring_size > 16)
return -EINVAL;
cq->lif = lif;
cq->bound_intr = intr;
cq->num_descs = num_descs;
cq->desc_size = desc_size;
cq->tail = cq->info;
cq->done_color = 1;
cur = cq->info;
for (i = 0; i < num_descs; i++) {
if (i + 1 == num_descs) {
cur->next = cq->info;
cur->last = true;
} else {
cur->next = cur + 1;
}
cur->index = i;
cur++;
}
return 0;
}
void ionic_cq_map(struct ionic_cq *cq, void *base, dma_addr_t base_pa)
{
struct ionic_cq_info *cur;
unsigned int i;
cq->base = base;
cq->base_pa = base_pa;
for (i = 0, cur = cq->info; i < cq->num_descs; i++, cur++)
cur->cq_desc = base + (i * cq->desc_size);
}
void ionic_cq_bind(struct ionic_cq *cq, struct ionic_queue *q)
{
cq->bound_q = q;
}
unsigned int ionic_cq_service(struct ionic_cq *cq, unsigned int work_to_do,
ionic_cq_cb cb, ionic_cq_done_cb done_cb,
void *done_arg)
{
unsigned int work_done = 0;
if (work_to_do == 0)
return 0;
while (cb(cq, cq->tail)) {
if (cq->tail->last)
cq->done_color = !cq->done_color;
cq->tail = cq->tail->next;
DEBUG_STATS_CQE_CNT(cq);
if (++work_done >= work_to_do)
break;
}
if (work_done && done_cb)
done_cb(done_arg);
return work_done;
}
int ionic_q_init(struct ionic_lif *lif, struct ionic_dev *idev,
struct ionic_queue *q, unsigned int index, const char *name,
unsigned int num_descs, size_t desc_size,
size_t sg_desc_size, unsigned int pid)
{
struct ionic_desc_info *cur;
unsigned int ring_size;
unsigned int i;
if (desc_size == 0 || !is_power_of_2(num_descs))
return -EINVAL;
ring_size = ilog2(num_descs);
if (ring_size < 2 || ring_size > 16)
return -EINVAL;
q->lif = lif;
q->idev = idev;
q->index = index;
q->num_descs = num_descs;
q->desc_size = desc_size;
q->sg_desc_size = sg_desc_size;
q->tail = q->info;
q->head = q->tail;
q->pid = pid;
snprintf(q->name, sizeof(q->name), "L%d-%s%u", lif->index, name, index);
cur = q->info;
for (i = 0; i < num_descs; i++) {
if (i + 1 == num_descs)
cur->next = q->info;
else
cur->next = cur + 1;
cur->index = i;
cur->left = num_descs - i;
cur++;
}
return 0;
}
void ionic_q_map(struct ionic_queue *q, void *base, dma_addr_t base_pa)
{
struct ionic_desc_info *cur;
unsigned int i;
q->base = base;
q->base_pa = base_pa;
for (i = 0, cur = q->info; i < q->num_descs; i++, cur++)
cur->desc = base + (i * q->desc_size);
}
void ionic_q_sg_map(struct ionic_queue *q, void *base, dma_addr_t base_pa)
{
struct ionic_desc_info *cur;
unsigned int i;
q->sg_base = base;
q->sg_base_pa = base_pa;
for (i = 0, cur = q->info; i < q->num_descs; i++, cur++)
cur->sg_desc = base + (i * q->sg_desc_size);
}
void ionic_q_post(struct ionic_queue *q, bool ring_doorbell, ionic_desc_cb cb,
void *cb_arg)
{
struct device *dev = q->lif->ionic->dev;
struct ionic_lif *lif = q->lif;
q->head->cb = cb;
q->head->cb_arg = cb_arg;
q->head = q->head->next;
dev_dbg(dev, "lif=%d qname=%s qid=%d qtype=%d p_index=%d ringdb=%d\n",
q->lif->index, q->name, q->hw_type, q->hw_index,
q->head->index, ring_doorbell);
if (ring_doorbell)
ionic_dbell_ring(lif->kern_dbpage, q->hw_type,
q->dbval | q->head->index);
}
static bool ionic_q_is_posted(struct ionic_queue *q, unsigned int pos)
{
unsigned int mask, tail, head;
mask = q->num_descs - 1;
tail = q->tail->index;
head = q->head->index;
return ((pos - tail) & mask) < ((head - tail) & mask);
}
void ionic_q_service(struct ionic_queue *q, struct ionic_cq_info *cq_info,
unsigned int stop_index)
{
struct ionic_desc_info *desc_info;
ionic_desc_cb cb;
void *cb_arg;
/* check for empty queue */
if (q->tail->index == q->head->index)
return;
/* stop index must be for a descriptor that is not yet completed */
if (unlikely(!ionic_q_is_posted(q, stop_index)))
dev_err(q->lif->ionic->dev,
"ionic stop is not posted %s stop %u tail %u head %u\n",
q->name, stop_index, q->tail->index, q->head->index);
do {
desc_info = q->tail;
q->tail = desc_info->next;
cb = desc_info->cb;
cb_arg = desc_info->cb_arg;
desc_info->cb = NULL;
desc_info->cb_arg = NULL;
if (cb)
cb(q, desc_info, cq_info, cb_arg);
} while (desc_info->index != stop_index);
}


@@ -0,0 +1,299 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2017 - 2019 Pensando Systems, Inc */
#ifndef _IONIC_DEV_H_
#define _IONIC_DEV_H_
#include <linux/mutex.h>
#include <linux/workqueue.h>
#include "ionic_if.h"
#include "ionic_regs.h"
#define IONIC_MIN_MTU ETH_MIN_MTU
#define IONIC_MAX_MTU 9194
#define IONIC_MAX_TXRX_DESC 16384
#define IONIC_MIN_TXRX_DESC 16
#define IONIC_DEF_TXRX_DESC 4096
#define IONIC_LIFS_MAX 1024
#define IONIC_ITR_COAL_USEC_DEFAULT 64
#define IONIC_DEV_CMD_REG_VERSION 1
#define IONIC_DEV_INFO_REG_COUNT 32
#define IONIC_DEV_CMD_REG_COUNT 32
struct ionic_dev_bar {
void __iomem *vaddr;
phys_addr_t bus_addr;
unsigned long len;
int res_index;
};
/* Registers */
static_assert(sizeof(struct ionic_intr) == 32);
static_assert(sizeof(struct ionic_doorbell) == 8);
static_assert(sizeof(struct ionic_intr_status) == 8);
static_assert(sizeof(union ionic_dev_regs) == 4096);
static_assert(sizeof(union ionic_dev_info_regs) == 2048);
static_assert(sizeof(union ionic_dev_cmd_regs) == 2048);
static_assert(sizeof(struct ionic_lif_stats) == 1024);
static_assert(sizeof(struct ionic_admin_cmd) == 64);
static_assert(sizeof(struct ionic_admin_comp) == 16);
static_assert(sizeof(struct ionic_nop_cmd) == 64);
static_assert(sizeof(struct ionic_nop_comp) == 16);
/* Device commands */
static_assert(sizeof(struct ionic_dev_identify_cmd) == 64);
static_assert(sizeof(struct ionic_dev_identify_comp) == 16);
static_assert(sizeof(struct ionic_dev_init_cmd) == 64);
static_assert(sizeof(struct ionic_dev_init_comp) == 16);
static_assert(sizeof(struct ionic_dev_reset_cmd) == 64);
static_assert(sizeof(struct ionic_dev_reset_comp) == 16);
static_assert(sizeof(struct ionic_dev_getattr_cmd) == 64);
static_assert(sizeof(struct ionic_dev_getattr_comp) == 16);
static_assert(sizeof(struct ionic_dev_setattr_cmd) == 64);
static_assert(sizeof(struct ionic_dev_setattr_comp) == 16);
/* Port commands */
static_assert(sizeof(struct ionic_port_identify_cmd) == 64);
static_assert(sizeof(struct ionic_port_identify_comp) == 16);
static_assert(sizeof(struct ionic_port_init_cmd) == 64);
static_assert(sizeof(struct ionic_port_init_comp) == 16);
static_assert(sizeof(struct ionic_port_reset_cmd) == 64);
static_assert(sizeof(struct ionic_port_reset_comp) == 16);
static_assert(sizeof(struct ionic_port_getattr_cmd) == 64);
static_assert(sizeof(struct ionic_port_getattr_comp) == 16);
static_assert(sizeof(struct ionic_port_setattr_cmd) == 64);
static_assert(sizeof(struct ionic_port_setattr_comp) == 16);
/* LIF commands */
static_assert(sizeof(struct ionic_lif_init_cmd) == 64);
static_assert(sizeof(struct ionic_lif_init_comp) == 16);
static_assert(sizeof(struct ionic_lif_reset_cmd) == 64);
static_assert(sizeof(ionic_lif_reset_comp) == 16);
static_assert(sizeof(struct ionic_lif_getattr_cmd) == 64);
static_assert(sizeof(struct ionic_lif_getattr_comp) == 16);
static_assert(sizeof(struct ionic_lif_setattr_cmd) == 64);
static_assert(sizeof(struct ionic_lif_setattr_comp) == 16);
static_assert(sizeof(struct ionic_q_init_cmd) == 64);
static_assert(sizeof(struct ionic_q_init_comp) == 16);
static_assert(sizeof(struct ionic_q_control_cmd) == 64);
static_assert(sizeof(ionic_q_control_comp) == 16);
static_assert(sizeof(struct ionic_rx_mode_set_cmd) == 64);
static_assert(sizeof(ionic_rx_mode_set_comp) == 16);
static_assert(sizeof(struct ionic_rx_filter_add_cmd) == 64);
static_assert(sizeof(struct ionic_rx_filter_add_comp) == 16);
static_assert(sizeof(struct ionic_rx_filter_del_cmd) == 64);
static_assert(sizeof(ionic_rx_filter_del_comp) == 16);
/* RDMA commands */
static_assert(sizeof(struct ionic_rdma_reset_cmd) == 64);
static_assert(sizeof(struct ionic_rdma_queue_cmd) == 64);
/* Events */
static_assert(sizeof(struct ionic_notifyq_cmd) == 4);
static_assert(sizeof(union ionic_notifyq_comp) == 64);
static_assert(sizeof(struct ionic_notifyq_event) == 64);
static_assert(sizeof(struct ionic_link_change_event) == 64);
static_assert(sizeof(struct ionic_reset_event) == 64);
static_assert(sizeof(struct ionic_heartbeat_event) == 64);
static_assert(sizeof(struct ionic_log_event) == 64);
/* I/O */
static_assert(sizeof(struct ionic_txq_desc) == 16);
static_assert(sizeof(struct ionic_txq_sg_desc) == 128);
static_assert(sizeof(struct ionic_txq_comp) == 16);
static_assert(sizeof(struct ionic_rxq_desc) == 16);
static_assert(sizeof(struct ionic_rxq_sg_desc) == 128);
static_assert(sizeof(struct ionic_rxq_comp) == 16);
struct ionic_devinfo {
u8 asic_type;
u8 asic_rev;
char fw_version[IONIC_DEVINFO_FWVERS_BUFLEN + 1];
char serial_num[IONIC_DEVINFO_SERIAL_BUFLEN + 1];
};
struct ionic_dev {
union ionic_dev_info_regs __iomem *dev_info_regs;
union ionic_dev_cmd_regs __iomem *dev_cmd_regs;
u64 __iomem *db_pages;
dma_addr_t phy_db_pages;
struct ionic_intr __iomem *intr_ctrl;
u64 __iomem *intr_status;
u32 port_info_sz;
struct ionic_port_info *port_info;
dma_addr_t port_info_pa;
struct ionic_devinfo dev_info;
};
struct ionic_cq_info {
void *cq_desc;
struct ionic_cq_info *next;
unsigned int index;
bool last;
};
struct ionic_queue;
struct ionic_qcq;
struct ionic_desc_info;
typedef void (*ionic_desc_cb)(struct ionic_queue *q,
struct ionic_desc_info *desc_info,
struct ionic_cq_info *cq_info, void *cb_arg);
struct ionic_desc_info {
void *desc;
void *sg_desc;
struct ionic_desc_info *next;
unsigned int index;
unsigned int left;
ionic_desc_cb cb;
void *cb_arg;
};
#define QUEUE_NAME_MAX_SZ 32
struct ionic_queue {
u64 dbell_count;
u64 drop;
u64 stop;
u64 wake;
struct ionic_lif *lif;
struct ionic_desc_info *info;
struct ionic_desc_info *tail;
struct ionic_desc_info *head;
struct ionic_dev *idev;
unsigned int index;
unsigned int type;
unsigned int hw_index;
unsigned int hw_type;
u64 dbval;
void *base;
void *sg_base;
dma_addr_t base_pa;
dma_addr_t sg_base_pa;
unsigned int num_descs;
unsigned int desc_size;
unsigned int sg_desc_size;
unsigned int pid;
char name[QUEUE_NAME_MAX_SZ];
};
#define INTR_INDEX_NOT_ASSIGNED -1
#define INTR_NAME_MAX_SZ 32
struct ionic_intr_info {
char name[INTR_NAME_MAX_SZ];
unsigned int index;
unsigned int vector;
u64 rearm_count;
unsigned int cpu;
cpumask_t affinity_mask;
};
struct ionic_cq {
void *base;
dma_addr_t base_pa;
struct ionic_lif *lif;
struct ionic_cq_info *info;
struct ionic_cq_info *tail;
struct ionic_queue *bound_q;
struct ionic_intr_info *bound_intr;
bool done_color;
unsigned int num_descs;
u64 compl_count;
unsigned int desc_size;
};
struct ionic;
static inline void ionic_intr_init(struct ionic_dev *idev,
struct ionic_intr_info *intr,
unsigned long index)
{
ionic_intr_clean(idev->intr_ctrl, index);
intr->index = index;
}
static inline unsigned int ionic_q_space_avail(struct ionic_queue *q)
{
unsigned int avail = q->tail->index;
if (q->head->index >= avail)
avail += q->head->left - 1;
else
avail -= q->head->index + 1;
return avail;
}
static inline bool ionic_q_has_space(struct ionic_queue *q, unsigned int want)
{
return ionic_q_space_avail(q) >= want;
}
void ionic_init_devinfo(struct ionic *ionic);
int ionic_dev_setup(struct ionic *ionic);
void ionic_dev_teardown(struct ionic *ionic);
void ionic_dev_cmd_go(struct ionic_dev *idev, union ionic_dev_cmd *cmd);
u8 ionic_dev_cmd_status(struct ionic_dev *idev);
bool ionic_dev_cmd_done(struct ionic_dev *idev);
void ionic_dev_cmd_comp(struct ionic_dev *idev, union ionic_dev_cmd_comp *comp);
void ionic_dev_cmd_identify(struct ionic_dev *idev, u8 ver);
void ionic_dev_cmd_init(struct ionic_dev *idev);
void ionic_dev_cmd_reset(struct ionic_dev *idev);
void ionic_dev_cmd_port_identify(struct ionic_dev *idev);
void ionic_dev_cmd_port_init(struct ionic_dev *idev);
void ionic_dev_cmd_port_reset(struct ionic_dev *idev);
void ionic_dev_cmd_port_state(struct ionic_dev *idev, u8 state);
void ionic_dev_cmd_port_speed(struct ionic_dev *idev, u32 speed);
void ionic_dev_cmd_port_autoneg(struct ionic_dev *idev, u8 an_enable);
void ionic_dev_cmd_port_fec(struct ionic_dev *idev, u8 fec_type);
void ionic_dev_cmd_port_pause(struct ionic_dev *idev, u8 pause_type);
void ionic_dev_cmd_lif_identify(struct ionic_dev *idev, u8 type, u8 ver);
void ionic_dev_cmd_lif_init(struct ionic_dev *idev, u16 lif_index,
dma_addr_t addr);
void ionic_dev_cmd_lif_reset(struct ionic_dev *idev, u16 lif_index);
void ionic_dev_cmd_adminq_init(struct ionic_dev *idev, struct ionic_qcq *qcq,
u16 lif_index, u16 intr_index);
int ionic_db_page_num(struct ionic_lif *lif, int pid);
int ionic_cq_init(struct ionic_lif *lif, struct ionic_cq *cq,
struct ionic_intr_info *intr,
unsigned int num_descs, size_t desc_size);
void ionic_cq_map(struct ionic_cq *cq, void *base, dma_addr_t base_pa);
void ionic_cq_bind(struct ionic_cq *cq, struct ionic_queue *q);
typedef bool (*ionic_cq_cb)(struct ionic_cq *cq, struct ionic_cq_info *cq_info);
typedef void (*ionic_cq_done_cb)(void *done_arg);
unsigned int ionic_cq_service(struct ionic_cq *cq, unsigned int work_to_do,
ionic_cq_cb cb, ionic_cq_done_cb done_cb,
void *done_arg);
int ionic_q_init(struct ionic_lif *lif, struct ionic_dev *idev,
struct ionic_queue *q, unsigned int index, const char *name,
unsigned int num_descs, size_t desc_size,
size_t sg_desc_size, unsigned int pid);
void ionic_q_map(struct ionic_queue *q, void *base, dma_addr_t base_pa);
void ionic_q_sg_map(struct ionic_queue *q, void *base, dma_addr_t base_pa);
void ionic_q_post(struct ionic_queue *q, bool ring_doorbell, ionic_desc_cb cb,
void *cb_arg);
void ionic_q_rewind(struct ionic_queue *q, struct ionic_desc_info *start);
void ionic_q_service(struct ionic_queue *q, struct ionic_cq_info *cq_info,
unsigned int stop_index);
#endif /* _IONIC_DEV_H_ */


@@ -0,0 +1,99 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2017 - 2019 Pensando Systems, Inc */
#include <linux/module.h>
#include <linux/netdevice.h>
#include "ionic.h"
#include "ionic_bus.h"
#include "ionic_lif.h"
#include "ionic_devlink.h"
static int ionic_dl_info_get(struct devlink *dl, struct devlink_info_req *req,
struct netlink_ext_ack *extack)
{
struct ionic *ionic = devlink_priv(dl);
struct ionic_dev *idev = &ionic->idev;
char buf[16];
int err = 0;
err = devlink_info_driver_name_put(req, IONIC_DRV_NAME);
if (err)
goto info_out;
err = devlink_info_version_running_put(req,
DEVLINK_INFO_VERSION_GENERIC_FW,
idev->dev_info.fw_version);
if (err)
goto info_out;
snprintf(buf, sizeof(buf), "0x%x", idev->dev_info.asic_type);
err = devlink_info_version_fixed_put(req,
DEVLINK_INFO_VERSION_GENERIC_ASIC_ID,
buf);
if (err)
goto info_out;
snprintf(buf, sizeof(buf), "0x%x", idev->dev_info.asic_rev);
err = devlink_info_version_fixed_put(req,
DEVLINK_INFO_VERSION_GENERIC_ASIC_REV,
buf);
if (err)
goto info_out;
err = devlink_info_serial_number_put(req, idev->dev_info.serial_num);
info_out:
return err;
}
static const struct devlink_ops ionic_dl_ops = {
.info_get = ionic_dl_info_get,
};
struct ionic *ionic_devlink_alloc(struct device *dev)
{
struct devlink *dl;
dl = devlink_alloc(&ionic_dl_ops, sizeof(struct ionic));
return devlink_priv(dl);
}
void ionic_devlink_free(struct ionic *ionic)
{
struct devlink *dl = priv_to_devlink(ionic);
devlink_free(dl);
}
int ionic_devlink_register(struct ionic *ionic)
{
struct devlink *dl = priv_to_devlink(ionic);
int err;
err = devlink_register(dl, ionic->dev);
if (err) {
dev_warn(ionic->dev, "devlink_register failed: %d\n", err);
return err;
}
devlink_port_attrs_set(&ionic->dl_port, DEVLINK_PORT_FLAVOUR_PHYSICAL,
0, false, 0, NULL, 0);
err = devlink_port_register(dl, &ionic->dl_port, 0);
if (err)
dev_err(ionic->dev, "devlink_port_register failed: %d\n", err);
else
devlink_port_type_eth_set(&ionic->dl_port,
ionic->master_lif->netdev);
return err;
}
void ionic_devlink_unregister(struct ionic *ionic)
{
struct devlink *dl = priv_to_devlink(ionic);
devlink_port_unregister(&ionic->dl_port);
devlink_unregister(dl);
}


@@ -0,0 +1,14 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2017 - 2019 Pensando Systems, Inc */
#ifndef _IONIC_DEVLINK_H_
#define _IONIC_DEVLINK_H_
#include <net/devlink.h>
struct ionic *ionic_devlink_alloc(struct device *dev);
void ionic_devlink_free(struct ionic *ionic);
int ionic_devlink_register(struct ionic *ionic);
void ionic_devlink_unregister(struct ionic *ionic);
#endif /* _IONIC_DEVLINK_H_ */


@@ -0,0 +1,779 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2017 - 2019 Pensando Systems, Inc */
#include <linux/module.h>
#include <linux/netdevice.h>
#include "ionic.h"
#include "ionic_bus.h"
#include "ionic_lif.h"
#include "ionic_ethtool.h"
#include "ionic_stats.h"
static const char ionic_priv_flags_strings[][ETH_GSTRING_LEN] = {
#define PRIV_F_SW_DBG_STATS BIT(0)
"sw-dbg-stats",
};
#define PRIV_FLAGS_COUNT ARRAY_SIZE(ionic_priv_flags_strings)
static void ionic_get_stats_strings(struct ionic_lif *lif, u8 *buf)
{
u32 i;
for (i = 0; i < ionic_num_stats_grps; i++)
ionic_stats_groups[i].get_strings(lif, &buf);
}
static void ionic_get_stats(struct net_device *netdev,
struct ethtool_stats *stats, u64 *buf)
{
struct ionic_lif *lif;
u32 i;
lif = netdev_priv(netdev);
memset(buf, 0, stats->n_stats * sizeof(*buf));
for (i = 0; i < ionic_num_stats_grps; i++)
ionic_stats_groups[i].get_values(lif, &buf);
}
static int ionic_get_stats_count(struct ionic_lif *lif)
{
int i, num_stats = 0;
for (i = 0; i < ionic_num_stats_grps; i++)
num_stats += ionic_stats_groups[i].get_count(lif);
return num_stats;
}
static int ionic_get_sset_count(struct net_device *netdev, int sset)
{
struct ionic_lif *lif = netdev_priv(netdev);
int count = 0;
switch (sset) {
case ETH_SS_STATS:
count = ionic_get_stats_count(lif);
break;
case ETH_SS_PRIV_FLAGS:
count = PRIV_FLAGS_COUNT;
break;
}
return count;
}
static void ionic_get_strings(struct net_device *netdev,
u32 sset, u8 *buf)
{
struct ionic_lif *lif = netdev_priv(netdev);
switch (sset) {
case ETH_SS_STATS:
ionic_get_stats_strings(lif, buf);
break;
case ETH_SS_PRIV_FLAGS:
memcpy(buf, ionic_priv_flags_strings,
PRIV_FLAGS_COUNT * ETH_GSTRING_LEN);
break;
}
}
static void ionic_get_drvinfo(struct net_device *netdev,
struct ethtool_drvinfo *drvinfo)
{
struct ionic_lif *lif = netdev_priv(netdev);
struct ionic *ionic = lif->ionic;
strlcpy(drvinfo->driver, IONIC_DRV_NAME, sizeof(drvinfo->driver));
strlcpy(drvinfo->version, IONIC_DRV_VERSION, sizeof(drvinfo->version));
strlcpy(drvinfo->fw_version, ionic->idev.dev_info.fw_version,
sizeof(drvinfo->fw_version));
strlcpy(drvinfo->bus_info, ionic_bus_info(ionic),
sizeof(drvinfo->bus_info));
}
static int ionic_get_regs_len(struct net_device *netdev)
{
return (IONIC_DEV_INFO_REG_COUNT + IONIC_DEV_CMD_REG_COUNT) * sizeof(u32);
}
static void ionic_get_regs(struct net_device *netdev, struct ethtool_regs *regs,
void *p)
{
struct ionic_lif *lif = netdev_priv(netdev);
unsigned int size;
regs->version = IONIC_DEV_CMD_REG_VERSION;
size = IONIC_DEV_INFO_REG_COUNT * sizeof(u32);
memcpy_fromio(p, lif->ionic->idev.dev_info_regs->words, size);
size = IONIC_DEV_CMD_REG_COUNT * sizeof(u32);
memcpy_fromio(p, lif->ionic->idev.dev_cmd_regs->words, size);
}
static int ionic_get_link_ksettings(struct net_device *netdev,
struct ethtool_link_ksettings *ks)
{
struct ionic_lif *lif = netdev_priv(netdev);
struct ionic_dev *idev = &lif->ionic->idev;
int copper_seen = 0;
ethtool_link_ksettings_zero_link_mode(ks, supported);
/* The port_info data is found in a DMA space that the NIC keeps
* up-to-date, so there's no need to request the data from the
* NIC, we already have it in our memory space.
*/
switch (le16_to_cpu(idev->port_info->status.xcvr.pid)) {
/* Copper */
case IONIC_XCVR_PID_QSFP_100G_CR4:
ethtool_link_ksettings_add_link_mode(ks, supported,
100000baseCR4_Full);
copper_seen++;
break;
case IONIC_XCVR_PID_QSFP_40GBASE_CR4:
ethtool_link_ksettings_add_link_mode(ks, supported,
40000baseCR4_Full);
copper_seen++;
break;
case IONIC_XCVR_PID_SFP_25GBASE_CR_S:
case IONIC_XCVR_PID_SFP_25GBASE_CR_L:
case IONIC_XCVR_PID_SFP_25GBASE_CR_N:
ethtool_link_ksettings_add_link_mode(ks, supported,
25000baseCR_Full);
copper_seen++;
break;
case IONIC_XCVR_PID_SFP_10GBASE_AOC:
case IONIC_XCVR_PID_SFP_10GBASE_CU:
ethtool_link_ksettings_add_link_mode(ks, supported,
10000baseCR_Full);
copper_seen++;
break;
/* Fibre */
case IONIC_XCVR_PID_QSFP_100G_SR4:
case IONIC_XCVR_PID_QSFP_100G_AOC:
ethtool_link_ksettings_add_link_mode(ks, supported,
100000baseSR4_Full);
break;
case IONIC_XCVR_PID_QSFP_100G_LR4:
ethtool_link_ksettings_add_link_mode(ks, supported,
100000baseLR4_ER4_Full);
break;
case IONIC_XCVR_PID_QSFP_100G_ER4:
ethtool_link_ksettings_add_link_mode(ks, supported,
100000baseLR4_ER4_Full);
break;
case IONIC_XCVR_PID_QSFP_40GBASE_SR4:
case IONIC_XCVR_PID_QSFP_40GBASE_AOC:
ethtool_link_ksettings_add_link_mode(ks, supported,
40000baseSR4_Full);
break;
case IONIC_XCVR_PID_QSFP_40GBASE_LR4:
ethtool_link_ksettings_add_link_mode(ks, supported,
40000baseLR4_Full);
break;
case IONIC_XCVR_PID_SFP_25GBASE_SR:
case IONIC_XCVR_PID_SFP_25GBASE_AOC:
ethtool_link_ksettings_add_link_mode(ks, supported,
25000baseSR_Full);
break;
case IONIC_XCVR_PID_SFP_10GBASE_SR:
ethtool_link_ksettings_add_link_mode(ks, supported,
10000baseSR_Full);
break;
case IONIC_XCVR_PID_SFP_10GBASE_LR:
ethtool_link_ksettings_add_link_mode(ks, supported,
10000baseLR_Full);
break;
case IONIC_XCVR_PID_SFP_10GBASE_LRM:
ethtool_link_ksettings_add_link_mode(ks, supported,
10000baseLRM_Full);
break;
case IONIC_XCVR_PID_SFP_10GBASE_ER:
ethtool_link_ksettings_add_link_mode(ks, supported,
10000baseER_Full);
break;
case IONIC_XCVR_PID_UNKNOWN:
/* This means there's no module plugged in */
break;
default:
dev_info(lif->ionic->dev, "unknown xcvr type pid=%d / 0x%x\n",
idev->port_info->status.xcvr.pid,
idev->port_info->status.xcvr.pid);
break;
}
bitmap_copy(ks->link_modes.advertising, ks->link_modes.supported,
__ETHTOOL_LINK_MODE_MASK_NBITS);
ethtool_link_ksettings_add_link_mode(ks, supported, FEC_BASER);
ethtool_link_ksettings_add_link_mode(ks, supported, FEC_RS);
if (idev->port_info->config.fec_type == IONIC_PORT_FEC_TYPE_FC)
ethtool_link_ksettings_add_link_mode(ks, advertising, FEC_BASER);
else if (idev->port_info->config.fec_type == IONIC_PORT_FEC_TYPE_RS)
ethtool_link_ksettings_add_link_mode(ks, advertising, FEC_RS);
ethtool_link_ksettings_add_link_mode(ks, supported, FIBRE);
ethtool_link_ksettings_add_link_mode(ks, supported, Pause);
if (idev->port_info->status.xcvr.phy == IONIC_PHY_TYPE_COPPER ||
copper_seen)
ks->base.port = PORT_DA;
else if (idev->port_info->status.xcvr.phy == IONIC_PHY_TYPE_FIBER)
ks->base.port = PORT_FIBRE;
else
ks->base.port = PORT_NONE;
if (ks->base.port != PORT_NONE) {
ks->base.speed = le32_to_cpu(lif->info->status.link_speed);
if (le16_to_cpu(lif->info->status.link_status))
ks->base.duplex = DUPLEX_FULL;
else
ks->base.duplex = DUPLEX_UNKNOWN;
ethtool_link_ksettings_add_link_mode(ks, supported, Autoneg);
if (idev->port_info->config.an_enable) {
ethtool_link_ksettings_add_link_mode(ks, advertising,
Autoneg);
ks->base.autoneg = AUTONEG_ENABLE;
}
}
return 0;
}
static int ionic_set_link_ksettings(struct net_device *netdev,
const struct ethtool_link_ksettings *ks)
{
struct ionic_lif *lif = netdev_priv(netdev);
struct ionic *ionic = lif->ionic;
struct ionic_dev *idev;
u32 req_rs, req_fc;
u8 fec_type;
int err = 0;
idev = &lif->ionic->idev;
fec_type = IONIC_PORT_FEC_TYPE_NONE;
/* set autoneg */
if (ks->base.autoneg != idev->port_info->config.an_enable) {
mutex_lock(&ionic->dev_cmd_lock);
ionic_dev_cmd_port_autoneg(idev, ks->base.autoneg);
err = ionic_dev_cmd_wait(ionic, DEVCMD_TIMEOUT);
mutex_unlock(&ionic->dev_cmd_lock);
if (err)
return err;
}
/* set speed */
if (ks->base.speed != le32_to_cpu(idev->port_info->config.speed)) {
mutex_lock(&ionic->dev_cmd_lock);
ionic_dev_cmd_port_speed(idev, ks->base.speed);
err = ionic_dev_cmd_wait(ionic, DEVCMD_TIMEOUT);
mutex_unlock(&ionic->dev_cmd_lock);
if (err)
return err;
}
/* set FEC */
req_rs = ethtool_link_ksettings_test_link_mode(ks, advertising, FEC_RS);
req_fc = ethtool_link_ksettings_test_link_mode(ks, advertising, FEC_BASER);
if (req_rs && req_fc) {
netdev_info(netdev, "Only select one FEC mode at a time\n");
return -EINVAL;
} else if (req_fc) {
fec_type = IONIC_PORT_FEC_TYPE_FC;
} else if (req_rs) {
fec_type = IONIC_PORT_FEC_TYPE_RS;
} else if (!(req_rs | req_fc)) {
fec_type = IONIC_PORT_FEC_TYPE_NONE;
}
if (fec_type != idev->port_info->config.fec_type) {
mutex_lock(&ionic->dev_cmd_lock);
ionic_dev_cmd_port_fec(idev, fec_type);
err = ionic_dev_cmd_wait(ionic, DEVCMD_TIMEOUT);
mutex_unlock(&ionic->dev_cmd_lock);
if (err)
return err;
}
return 0;
}
static void ionic_get_pauseparam(struct net_device *netdev,
struct ethtool_pauseparam *pause)
{
struct ionic_lif *lif = netdev_priv(netdev);
u8 pause_type;
pause->autoneg = 0;
pause_type = lif->ionic->idev.port_info->config.pause_type;
if (pause_type) {
pause->rx_pause = pause_type & IONIC_PAUSE_F_RX ? 1 : 0;
pause->tx_pause = pause_type & IONIC_PAUSE_F_TX ? 1 : 0;
}
}
static int ionic_set_pauseparam(struct net_device *netdev,
struct ethtool_pauseparam *pause)
{
struct ionic_lif *lif = netdev_priv(netdev);
struct ionic *ionic = lif->ionic;
u32 requested_pause;
int err;
if (pause->autoneg)
return -EOPNOTSUPP;
/* change both at the same time */
requested_pause = IONIC_PORT_PAUSE_TYPE_LINK;
if (pause->rx_pause)
requested_pause |= IONIC_PAUSE_F_RX;
if (pause->tx_pause)
requested_pause |= IONIC_PAUSE_F_TX;
if (requested_pause == lif->ionic->idev.port_info->config.pause_type)
return 0;
mutex_lock(&ionic->dev_cmd_lock);
ionic_dev_cmd_port_pause(&lif->ionic->idev, requested_pause);
err = ionic_dev_cmd_wait(ionic, DEVCMD_TIMEOUT);
mutex_unlock(&ionic->dev_cmd_lock);
if (err)
return err;
return 0;
}
static int ionic_get_coalesce(struct net_device *netdev,
struct ethtool_coalesce *coalesce)
{
struct ionic_lif *lif = netdev_priv(netdev);
/* Tx uses Rx interrupt */
coalesce->tx_coalesce_usecs = lif->rx_coalesce_usecs;
coalesce->rx_coalesce_usecs = lif->rx_coalesce_usecs;
return 0;
}
static int ionic_set_coalesce(struct net_device *netdev,
struct ethtool_coalesce *coalesce)
{
struct ionic_lif *lif = netdev_priv(netdev);
struct ionic_identity *ident;
struct ionic_qcq *qcq;
unsigned int i;
u32 usecs;
u32 coal;
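/* Reject any coalescing parameters that we don't support changing */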
if (coalesce->rx_max_coalesced_frames ||
coalesce->rx_coalesce_usecs_irq ||
coalesce->rx_max_coalesced_frames_irq ||
coalesce->tx_max_coalesced_frames ||
coalesce->tx_coalesce_usecs_irq ||
coalesce->tx_max_coalesced_frames_irq ||
coalesce->stats_block_coalesce_usecs ||
coalesce->use_adaptive_rx_coalesce ||
coalesce->use_adaptive_tx_coalesce ||
coalesce->pkt_rate_low ||
coalesce->rx_coalesce_usecs_low ||
coalesce->rx_max_coalesced_frames_low ||
coalesce->tx_coalesce_usecs_low ||
coalesce->tx_max_coalesced_frames_low ||
coalesce->pkt_rate_high ||
coalesce->rx_coalesce_usecs_high ||
coalesce->rx_max_coalesced_frames_high ||
coalesce->tx_coalesce_usecs_high ||
coalesce->tx_max_coalesced_frames_high ||
coalesce->rate_sample_interval)
return -EINVAL;
ident = &lif->ionic->ident;
if (ident->dev.intr_coal_div == 0) {
netdev_warn(netdev, "bad HW value in dev.intr_coal_div = %d\n",
ident->dev.intr_coal_div);
return -EIO;
}
/* Tx uses Rx interrupt, so only change Rx */
if (coalesce->tx_coalesce_usecs != lif->rx_coalesce_usecs) {
netdev_warn(netdev, "only the rx-usecs can be changed\n");
return -EINVAL;
}
coal = ionic_coal_usec_to_hw(lif->ionic, coalesce->rx_coalesce_usecs);
if (coal > IONIC_INTR_CTRL_COAL_MAX)
return -ERANGE;
/* If they asked for non-zero and it resolved to zero, bump it up */
if (!coal && coalesce->rx_coalesce_usecs)
coal = 1;
/* Convert it back to get device resolution */
usecs = ionic_coal_hw_to_usec(lif->ionic, coal);
if (usecs != lif->rx_coalesce_usecs) {
lif->rx_coalesce_usecs = usecs;
if (test_bit(IONIC_LIF_UP, lif->state)) {
for (i = 0; i < lif->nxqs; i++) {
qcq = lif->rxqcqs[i].qcq;
ionic_intr_coal_init(lif->ionic->idev.intr_ctrl,
qcq->intr.index, coal);
}
}
}
return 0;
}
static void ionic_get_ringparam(struct net_device *netdev,
struct ethtool_ringparam *ring)
{
struct ionic_lif *lif = netdev_priv(netdev);
ring->tx_max_pending = IONIC_MAX_TXRX_DESC;
ring->tx_pending = lif->ntxq_descs;
ring->rx_max_pending = IONIC_MAX_TXRX_DESC;
ring->rx_pending = lif->nrxq_descs;
}
static int ionic_set_ringparam(struct net_device *netdev,
struct ethtool_ringparam *ring)
{
struct ionic_lif *lif = netdev_priv(netdev);
bool running;
if (ring->rx_mini_pending || ring->rx_jumbo_pending) {
netdev_info(netdev, "Changing jumbo or mini descriptors not supported\n");
return -EINVAL;
}
if (!is_power_of_2(ring->tx_pending) ||
!is_power_of_2(ring->rx_pending)) {
netdev_info(netdev, "Descriptor count must be a power of 2\n");
return -EINVAL;
}
/* if nothing to do return success */
if (ring->tx_pending == lif->ntxq_descs &&
ring->rx_pending == lif->nrxq_descs)
return 0;
if (!ionic_wait_for_bit(lif, IONIC_LIF_QUEUE_RESET))
return -EBUSY;
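/* Stop the queues if running, apply the new descriptor counts, then restart */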
running = test_bit(IONIC_LIF_UP, lif->state);
if (running)
ionic_stop(netdev);
lif->ntxq_descs = ring->tx_pending;
lif->nrxq_descs = ring->rx_pending;
if (running)
ionic_open(netdev);
clear_bit(IONIC_LIF_QUEUE_RESET, lif->state);
return 0;
}
static void ionic_get_channels(struct net_device *netdev,
struct ethtool_channels *ch)
{
struct ionic_lif *lif = netdev_priv(netdev);
/* report maximum channels */
ch->max_combined = lif->ionic->ntxqs_per_lif;
/* report current channels */
ch->combined_count = lif->nxqs;
}
static int ionic_set_channels(struct net_device *netdev,
struct ethtool_channels *ch)
{
struct ionic_lif *lif = netdev_priv(netdev);
bool running;
if (!ch->combined_count || ch->other_count ||
ch->rx_count || ch->tx_count)
return -EINVAL;
if (ch->combined_count == lif->nxqs)
return 0;
if (!ionic_wait_for_bit(lif, IONIC_LIF_QUEUE_RESET))
return -EBUSY;
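/* Stop the queues if running, change the queue count, then restart */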
running = test_bit(IONIC_LIF_UP, lif->state);
if (running)
ionic_stop(netdev);
lif->nxqs = ch->combined_count;
if (running)
ionic_open(netdev);
clear_bit(IONIC_LIF_QUEUE_RESET, lif->state);
return 0;
}
static u32 ionic_get_priv_flags(struct net_device *netdev)
{
struct ionic_lif *lif = netdev_priv(netdev);
u32 priv_flags = 0;
if (test_bit(IONIC_LIF_SW_DEBUG_STATS, lif->state))
priv_flags |= PRIV_F_SW_DBG_STATS;
return priv_flags;
}
static int ionic_set_priv_flags(struct net_device *netdev, u32 priv_flags)
{
struct ionic_lif *lif = netdev_priv(netdev);
u32 flags = lif->flags;
clear_bit(IONIC_LIF_SW_DEBUG_STATS, lif->state);
if (priv_flags & PRIV_F_SW_DBG_STATS)
set_bit(IONIC_LIF_SW_DEBUG_STATS, lif->state);
if (flags != lif->flags)
lif->flags = flags;
return 0;
}
static int ionic_get_rxnfc(struct net_device *netdev,
struct ethtool_rxnfc *info, u32 *rules)
{
struct ionic_lif *lif = netdev_priv(netdev);
int err = 0;
switch (info->cmd) {
case ETHTOOL_GRXRINGS:
info->data = lif->nxqs;
break;
default:
netdev_err(netdev, "Command parameter %d is not supported\n",
info->cmd);
err = -EOPNOTSUPP;
}
return err;
}
static u32 ionic_get_rxfh_indir_size(struct net_device *netdev)
{
struct ionic_lif *lif = netdev_priv(netdev);
return le16_to_cpu(lif->ionic->ident.lif.eth.rss_ind_tbl_sz);
}
static u32 ionic_get_rxfh_key_size(struct net_device *netdev)
{
return IONIC_RSS_HASH_KEY_SIZE;
}
static int ionic_get_rxfh(struct net_device *netdev, u32 *indir, u8 *key,
u8 *hfunc)
{
struct ionic_lif *lif = netdev_priv(netdev);
unsigned int i, tbl_sz;
if (indir) {
tbl_sz = le16_to_cpu(lif->ionic->ident.lif.eth.rss_ind_tbl_sz);
for (i = 0; i < tbl_sz; i++)
indir[i] = lif->rss_ind_tbl[i];
}
if (key)
memcpy(key, lif->rss_hash_key, IONIC_RSS_HASH_KEY_SIZE);
if (hfunc)
*hfunc = ETH_RSS_HASH_TOP;
return 0;
}
static int ionic_set_rxfh(struct net_device *netdev, const u32 *indir,
const u8 *key, const u8 hfunc)
{
struct ionic_lif *lif = netdev_priv(netdev);
int err;
if (hfunc != ETH_RSS_HASH_NO_CHANGE && hfunc != ETH_RSS_HASH_TOP)
return -EOPNOTSUPP;
err = ionic_lif_rss_config(lif, lif->rss_types, key, indir);
if (err)
return err;
return 0;
}
static int ionic_set_tunable(struct net_device *dev,
const struct ethtool_tunable *tuna,
const void *data)
{
struct ionic_lif *lif = netdev_priv(dev);
switch (tuna->id) {
case ETHTOOL_RX_COPYBREAK:
lif->rx_copybreak = *(u32 *)data;
break;
default:
return -EOPNOTSUPP;
}
return 0;
}
static int ionic_get_tunable(struct net_device *netdev,
const struct ethtool_tunable *tuna, void *data)
{
struct ionic_lif *lif = netdev_priv(netdev);
switch (tuna->id) {
case ETHTOOL_RX_COPYBREAK:
*(u32 *)data = lif->rx_copybreak;
break;
default:
return -EOPNOTSUPP;
}
return 0;
}
static int ionic_get_module_info(struct net_device *netdev,
struct ethtool_modinfo *modinfo)
{
struct ionic_lif *lif = netdev_priv(netdev);
struct ionic_dev *idev = &lif->ionic->idev;
struct ionic_xcvr_status *xcvr;
xcvr = &idev->port_info->status.xcvr;
/* report the module data type and length */
switch (xcvr->sprom[0]) {
case 0x03: /* SFP */
modinfo->type = ETH_MODULE_SFF_8079;
modinfo->eeprom_len = ETH_MODULE_SFF_8079_LEN;
break;
case 0x0D: /* QSFP */
case 0x11: /* QSFP28 */
modinfo->type = ETH_MODULE_SFF_8436;
modinfo->eeprom_len = ETH_MODULE_SFF_8436_LEN;
break;
default:
netdev_info(netdev, "unknown xcvr type 0x%02x\n",
xcvr->sprom[0]);
break;
}
return 0;
}
static int ionic_get_module_eeprom(struct net_device *netdev,
struct ethtool_eeprom *ee,
u8 *data)
{
struct ionic_lif *lif = netdev_priv(netdev);
struct ionic_dev *idev = &lif->ionic->idev;
struct ionic_xcvr_status *xcvr;
char tbuf[sizeof(xcvr->sprom)];
int count = 10;
u32 len;
/* The NIC keeps the module prom up-to-date in the DMA space
* so we can simply copy the module bytes into the data buffer.
*/
xcvr = &idev->port_info->status.xcvr;
len = min_t(u32, sizeof(xcvr->sprom), ee->len);
do {
memcpy(data, xcvr->sprom, len);
memcpy(tbuf, xcvr->sprom, len);
/* Let's make sure we got a consistent copy */
if (!memcmp(data, tbuf, len))
break;
} while (--count);
if (!count)
return -ETIMEDOUT;
return 0;
}
static int ionic_nway_reset(struct net_device *netdev)
{
struct ionic_lif *lif = netdev_priv(netdev);
struct ionic *ionic = lif->ionic;
int err = 0;
/* flap the link to force auto-negotiation */
mutex_lock(&ionic->dev_cmd_lock);
ionic_dev_cmd_port_state(&ionic->idev, IONIC_PORT_ADMIN_STATE_DOWN);
err = ionic_dev_cmd_wait(ionic, DEVCMD_TIMEOUT);
if (!err) {
ionic_dev_cmd_port_state(&ionic->idev, IONIC_PORT_ADMIN_STATE_UP);
err = ionic_dev_cmd_wait(ionic, DEVCMD_TIMEOUT);
}
mutex_unlock(&ionic->dev_cmd_lock);
return err;
}
static const struct ethtool_ops ionic_ethtool_ops = {
.get_drvinfo = ionic_get_drvinfo,
.get_regs_len = ionic_get_regs_len,
.get_regs = ionic_get_regs,
.get_link = ethtool_op_get_link,
.get_link_ksettings = ionic_get_link_ksettings,
.get_coalesce = ionic_get_coalesce,
.set_coalesce = ionic_set_coalesce,
.get_ringparam = ionic_get_ringparam,
.set_ringparam = ionic_set_ringparam,
.get_channels = ionic_get_channels,
.set_channels = ionic_set_channels,
.get_strings = ionic_get_strings,
.get_ethtool_stats = ionic_get_stats,
.get_sset_count = ionic_get_sset_count,
.get_priv_flags = ionic_get_priv_flags,
.set_priv_flags = ionic_set_priv_flags,
.get_rxnfc = ionic_get_rxnfc,
.get_rxfh_indir_size = ionic_get_rxfh_indir_size,
.get_rxfh_key_size = ionic_get_rxfh_key_size,
.get_rxfh = ionic_get_rxfh,
.set_rxfh = ionic_set_rxfh,
.get_tunable = ionic_get_tunable,
.set_tunable = ionic_set_tunable,
.get_module_info = ionic_get_module_info,
.get_module_eeprom = ionic_get_module_eeprom,
.get_pauseparam = ionic_get_pauseparam,
.set_pauseparam = ionic_set_pauseparam,
.set_link_ksettings = ionic_set_link_ksettings,
.nway_reset = ionic_nway_reset,
};
void ionic_ethtool_set_ops(struct net_device *netdev)
{
netdev->ethtool_ops = &ionic_ethtool_ops;
}


@@ -0,0 +1,9 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2017 - 2019 Pensando Systems, Inc */
#ifndef _IONIC_ETHTOOL_H_
#define _IONIC_ETHTOOL_H_
void ionic_ethtool_set_ops(struct net_device *netdev);
#endif /* _IONIC_ETHTOOL_H_ */

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -0,0 +1,277 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2017 - 2019 Pensando Systems, Inc */
#ifndef _IONIC_LIF_H_
#define _IONIC_LIF_H_
#include <linux/pci.h>
#include "ionic_rx_filter.h"
#define IONIC_ADMINQ_LENGTH 16 /* must be a power of two */
#define IONIC_NOTIFYQ_LENGTH 64 /* must be a power of two */
#define IONIC_MAX_NUM_NAPI_CNTR (NAPI_POLL_WEIGHT + 1)
#define IONIC_MAX_NUM_SG_CNTR (IONIC_TX_MAX_SG_ELEMS + 1)
#define IONIC_RX_COPYBREAK_DEFAULT 256
struct ionic_tx_stats {
u64 dma_map_err;
u64 pkts;
u64 bytes;
u64 clean;
u64 linearize;
u64 no_csum;
u64 csum;
u64 crc32_csum;
u64 tso;
u64 frags;
u64 sg_cntr[IONIC_MAX_NUM_SG_CNTR];
};
struct ionic_rx_stats {
u64 dma_map_err;
u64 alloc_err;
u64 pkts;
u64 bytes;
u64 csum_none;
u64 csum_complete;
u64 csum_error;
u64 buffers_posted;
};
#define IONIC_QCQ_F_INITED BIT(0)
#define IONIC_QCQ_F_SG BIT(1)
#define IONIC_QCQ_F_INTR BIT(2)
#define IONIC_QCQ_F_TX_STATS BIT(3)
#define IONIC_QCQ_F_RX_STATS BIT(4)
#define IONIC_QCQ_F_NOTIFYQ BIT(5)
struct ionic_napi_stats {
u64 poll_count;
u64 work_done_cntr[IONIC_MAX_NUM_NAPI_CNTR];
};
struct ionic_q_stats {
union {
struct ionic_tx_stats tx;
struct ionic_rx_stats rx;
};
};
struct ionic_qcq {
void *base;
dma_addr_t base_pa;
unsigned int total_size;
struct ionic_queue q;
struct ionic_cq cq;
struct ionic_intr_info intr;
struct napi_struct napi;
struct ionic_napi_stats napi_stats;
struct ionic_q_stats *stats;
unsigned int flags;
struct dentry *dentry;
};
struct ionic_qcqst {
struct ionic_qcq *qcq;
struct ionic_q_stats *stats;
};
#define q_to_qcq(q) container_of(q, struct ionic_qcq, q)
#define q_to_tx_stats(q) (&q_to_qcq(q)->stats->tx)
#define q_to_rx_stats(q) (&q_to_qcq(q)->stats->rx)
#define napi_to_qcq(napi) container_of(napi, struct ionic_qcq, napi)
#define napi_to_cq(napi) (&napi_to_qcq(napi)->cq)
enum ionic_deferred_work_type {
IONIC_DW_TYPE_RX_MODE,
IONIC_DW_TYPE_RX_ADDR_ADD,
IONIC_DW_TYPE_RX_ADDR_DEL,
IONIC_DW_TYPE_LINK_STATUS,
IONIC_DW_TYPE_LIF_RESET,
};
struct ionic_deferred_work {
struct list_head list;
enum ionic_deferred_work_type type;
union {
unsigned int rx_mode;
u8 addr[ETH_ALEN];
};
};
struct ionic_deferred {
spinlock_t lock; /* lock for deferred work list */
struct list_head list;
struct work_struct work;
};
struct ionic_lif_sw_stats {
u64 tx_packets;
u64 tx_bytes;
u64 rx_packets;
u64 rx_bytes;
u64 tx_tso;
u64 tx_no_csum;
u64 tx_csum;
u64 rx_csum_none;
u64 rx_csum_complete;
u64 rx_csum_error;
};
enum ionic_lif_state_flags {
IONIC_LIF_INITED,
IONIC_LIF_SW_DEBUG_STATS,
IONIC_LIF_UP,
IONIC_LIF_LINK_CHECK_REQUESTED,
IONIC_LIF_QUEUE_RESET,
/* leave this as last */
IONIC_LIF_STATE_SIZE
};
#define IONIC_LIF_NAME_MAX_SZ 32
struct ionic_lif {
char name[IONIC_LIF_NAME_MAX_SZ];
struct list_head list;
struct net_device *netdev;
DECLARE_BITMAP(state, IONIC_LIF_STATE_SIZE);
struct ionic *ionic;
bool registered;
unsigned int index;
unsigned int hw_index;
unsigned int kern_pid;
u64 __iomem *kern_dbpage;
spinlock_t adminq_lock; /* lock for AdminQ operations */
struct ionic_qcq *adminqcq;
struct ionic_qcq *notifyqcq;
struct ionic_qcqst *txqcqs;
struct ionic_qcqst *rxqcqs;
u64 last_eid;
unsigned int neqs;
unsigned int nxqs;
unsigned int ntxq_descs;
unsigned int nrxq_descs;
u32 rx_copybreak;
unsigned int rx_mode;
u64 hw_features;
bool mc_overflow;
unsigned int nmcast;
bool uc_overflow;
unsigned int nucast;
struct ionic_lif_info *info;
dma_addr_t info_pa;
u32 info_sz;
u16 rss_types;
u8 rss_hash_key[IONIC_RSS_HASH_KEY_SIZE];
u8 *rss_ind_tbl;
dma_addr_t rss_ind_tbl_pa;
u32 rss_ind_tbl_sz;
struct ionic_rx_filters rx_filters;
struct ionic_deferred deferred;
unsigned long *dbid_inuse;
unsigned int dbid_count;
struct dentry *dentry;
u32 rx_coalesce_usecs;
u32 flags;
struct work_struct tx_timeout_work;
};
#define lif_to_txqcq(lif, i) ((lif)->txqcqs[i].qcq)
#define lif_to_rxqcq(lif, i) ((lif)->rxqcqs[i].qcq)
#define lif_to_txq(lif, i) (&lif_to_txqcq((lif), i)->q)
#define lif_to_rxq(lif, i) (&lif_to_rxqcq((lif), i)->q)
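/* Try for up to a second to claim the named lif state bit */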
static inline int ionic_wait_for_bit(struct ionic_lif *lif, int bitname)
{
unsigned long tlimit = jiffies + HZ;
while (test_and_set_bit(bitname, lif->state) &&
time_before(jiffies, tlimit))
usleep_range(100, 200);
return test_bit(bitname, lif->state);
}
static inline u32 ionic_coal_usec_to_hw(struct ionic *ionic, u32 usecs)
{
u32 mult = le32_to_cpu(ionic->ident.dev.intr_coal_mult);
u32 div = le32_to_cpu(ionic->ident.dev.intr_coal_div);
/* Div-by-zero should never be an issue, but check anyway */
if (!div || !mult)
return 0;
/* Round up in case usecs is close to the next hw unit */
usecs += (div / mult) >> 1;
/* Convert from usecs to device units */
return (usecs * mult) / div;
}
static inline u32 ionic_coal_hw_to_usec(struct ionic *ionic, u32 units)
{
u32 mult = le32_to_cpu(ionic->ident.dev.intr_coal_mult);
u32 div = le32_to_cpu(ionic->ident.dev.intr_coal_div);
/* Div-by-zero should never be an issue, but check anyway */
if (!div || !mult)
return 0;
/* Convert from device units to usec */
return (units * div) / mult;
}
int ionic_lifs_alloc(struct ionic *ionic);
void ionic_lifs_free(struct ionic *ionic);
void ionic_lifs_deinit(struct ionic *ionic);
int ionic_lifs_init(struct ionic *ionic);
int ionic_lifs_register(struct ionic *ionic);
void ionic_lifs_unregister(struct ionic *ionic);
int ionic_lif_identify(struct ionic *ionic, u8 lif_type,
union ionic_lif_identity *lif_ident);
int ionic_lifs_size(struct ionic *ionic);
int ionic_lif_rss_config(struct ionic_lif *lif, u16 types,
const u8 *key, const u32 *indir);
int ionic_open(struct net_device *netdev);
int ionic_stop(struct net_device *netdev);
int ionic_reset_queues(struct ionic_lif *lif);
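/* Count doorbell rings and bucket Tx SG element usage for the debug stats */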
static inline void debug_stats_txq_post(struct ionic_qcq *qcq,
struct ionic_txq_desc *desc, bool dbell)
{
u8 num_sg_elems = ((le64_to_cpu(desc->cmd) >> IONIC_TXQ_DESC_NSGE_SHIFT)
& IONIC_TXQ_DESC_NSGE_MASK);
qcq->q.dbell_count += dbell;
if (num_sg_elems > (IONIC_MAX_NUM_SG_CNTR - 1))
num_sg_elems = IONIC_MAX_NUM_SG_CNTR - 1;
qcq->stats->tx.sg_cntr[num_sg_elems]++;
}
static inline void debug_stats_napi_poll(struct ionic_qcq *qcq,
unsigned int work_done)
{
qcq->napi_stats.poll_count++;
if (work_done > (IONIC_MAX_NUM_NAPI_CNTR - 1))
work_done = IONIC_MAX_NUM_NAPI_CNTR - 1;
qcq->napi_stats.work_done_cntr[work_done]++;
}
#define DEBUG_STATS_CQE_CNT(cq) ((cq)->compl_count++)
#define DEBUG_STATS_RX_BUFF_CNT(qcq) ((qcq)->stats->rx.buffers_posted++)
#define DEBUG_STATS_INTR_REARM(intr) ((intr)->rearm_count++)
#define DEBUG_STATS_TXQ_POST(qcq, txdesc, dbell) \
debug_stats_txq_post(qcq, txdesc, dbell)
#define DEBUG_STATS_NAPI_POLL(qcq, work_done) \
debug_stats_napi_poll(qcq, work_done)
#endif /* _IONIC_LIF_H_ */


@@ -0,0 +1,549 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2017 - 2019 Pensando Systems, Inc */
#include <linux/module.h>
#include <linux/version.h>
#include <linux/netdevice.h>
#include <linux/utsname.h>
#include "ionic.h"
#include "ionic_bus.h"
#include "ionic_lif.h"
#include "ionic_debugfs.h"
MODULE_DESCRIPTION(IONIC_DRV_DESCRIPTION);
MODULE_AUTHOR("Pensando Systems, Inc");
MODULE_LICENSE("GPL");
MODULE_VERSION(IONIC_DRV_VERSION);
static const char *ionic_error_to_str(enum ionic_status_code code)
{
switch (code) {
case IONIC_RC_SUCCESS:
return "IONIC_RC_SUCCESS";
case IONIC_RC_EVERSION:
return "IONIC_RC_EVERSION";
case IONIC_RC_EOPCODE:
return "IONIC_RC_EOPCODE";
case IONIC_RC_EIO:
return "IONIC_RC_EIO";
case IONIC_RC_EPERM:
return "IONIC_RC_EPERM";
case IONIC_RC_EQID:
return "IONIC_RC_EQID";
case IONIC_RC_EQTYPE:
return "IONIC_RC_EQTYPE";
case IONIC_RC_ENOENT:
return "IONIC_RC_ENOENT";
case IONIC_RC_EINTR:
return "IONIC_RC_EINTR";
case IONIC_RC_EAGAIN:
return "IONIC_RC_EAGAIN";
case IONIC_RC_ENOMEM:
return "IONIC_RC_ENOMEM";
case IONIC_RC_EFAULT:
return "IONIC_RC_EFAULT";
case IONIC_RC_EBUSY:
return "IONIC_RC_EBUSY";
case IONIC_RC_EEXIST:
return "IONIC_RC_EEXIST";
case IONIC_RC_EINVAL:
return "IONIC_RC_EINVAL";
case IONIC_RC_ENOSPC:
return "IONIC_RC_ENOSPC";
case IONIC_RC_ERANGE:
return "IONIC_RC_ERANGE";
case IONIC_RC_BAD_ADDR:
return "IONIC_RC_BAD_ADDR";
case IONIC_RC_DEV_CMD:
return "IONIC_RC_DEV_CMD";
case IONIC_RC_ERROR:
return "IONIC_RC_ERROR";
case IONIC_RC_ERDMA:
return "IONIC_RC_ERDMA";
default:
return "IONIC_RC_UNKNOWN";
}
}
static int ionic_error_to_errno(enum ionic_status_code code)
{
switch (code) {
case IONIC_RC_SUCCESS:
return 0;
case IONIC_RC_EVERSION:
case IONIC_RC_EQTYPE:
case IONIC_RC_EQID:
case IONIC_RC_EINVAL:
return -EINVAL;
case IONIC_RC_EPERM:
return -EPERM;
case IONIC_RC_ENOENT:
return -ENOENT;
case IONIC_RC_EAGAIN:
return -EAGAIN;
case IONIC_RC_ENOMEM:
return -ENOMEM;
case IONIC_RC_EFAULT:
return -EFAULT;
case IONIC_RC_EBUSY:
return -EBUSY;
case IONIC_RC_EEXIST:
return -EEXIST;
case IONIC_RC_ENOSPC:
return -ENOSPC;
case IONIC_RC_ERANGE:
return -ERANGE;
case IONIC_RC_BAD_ADDR:
return -EFAULT;
case IONIC_RC_EOPCODE:
case IONIC_RC_EINTR:
case IONIC_RC_DEV_CMD:
case IONIC_RC_ERROR:
case IONIC_RC_ERDMA:
case IONIC_RC_EIO:
default:
return -EIO;
}
}
static const char *ionic_opcode_to_str(enum ionic_cmd_opcode opcode)
{
switch (opcode) {
case IONIC_CMD_NOP:
return "IONIC_CMD_NOP";
case IONIC_CMD_INIT:
return "IONIC_CMD_INIT";
case IONIC_CMD_RESET:
return "IONIC_CMD_RESET";
case IONIC_CMD_IDENTIFY:
return "IONIC_CMD_IDENTIFY";
case IONIC_CMD_GETATTR:
return "IONIC_CMD_GETATTR";
case IONIC_CMD_SETATTR:
return "IONIC_CMD_SETATTR";
case IONIC_CMD_PORT_IDENTIFY:
return "IONIC_CMD_PORT_IDENTIFY";
case IONIC_CMD_PORT_INIT:
return "IONIC_CMD_PORT_INIT";
case IONIC_CMD_PORT_RESET:
return "IONIC_CMD_PORT_RESET";
case IONIC_CMD_PORT_GETATTR:
return "IONIC_CMD_PORT_GETATTR";
case IONIC_CMD_PORT_SETATTR:
return "IONIC_CMD_PORT_SETATTR";
case IONIC_CMD_LIF_INIT:
return "IONIC_CMD_LIF_INIT";
case IONIC_CMD_LIF_RESET:
return "IONIC_CMD_LIF_RESET";
case IONIC_CMD_LIF_IDENTIFY:
return "IONIC_CMD_LIF_IDENTIFY";
case IONIC_CMD_LIF_SETATTR:
return "IONIC_CMD_LIF_SETATTR";
case IONIC_CMD_LIF_GETATTR:
return "IONIC_CMD_LIF_GETATTR";
case IONIC_CMD_RX_MODE_SET:
return "IONIC_CMD_RX_MODE_SET";
case IONIC_CMD_RX_FILTER_ADD:
return "IONIC_CMD_RX_FILTER_ADD";
case IONIC_CMD_RX_FILTER_DEL:
return "IONIC_CMD_RX_FILTER_DEL";
case IONIC_CMD_Q_INIT:
return "IONIC_CMD_Q_INIT";
case IONIC_CMD_Q_CONTROL:
return "IONIC_CMD_Q_CONTROL";
case IONIC_CMD_RDMA_RESET_LIF:
return "IONIC_CMD_RDMA_RESET_LIF";
case IONIC_CMD_RDMA_CREATE_EQ:
return "IONIC_CMD_RDMA_CREATE_EQ";
case IONIC_CMD_RDMA_CREATE_CQ:
return "IONIC_CMD_RDMA_CREATE_CQ";
case IONIC_CMD_RDMA_CREATE_ADMINQ:
return "IONIC_CMD_RDMA_CREATE_ADMINQ";
case IONIC_CMD_FW_DOWNLOAD:
return "IONIC_CMD_FW_DOWNLOAD";
case IONIC_CMD_FW_CONTROL:
return "IONIC_CMD_FW_CONTROL";
default:
return "DEVCMD_UNKNOWN";
}
}
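/* Wipe out any descriptors still pending on the AdminQ so their callbacks never run */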
static void ionic_adminq_flush(struct ionic_lif *lif)
{
struct ionic_queue *adminq = &lif->adminqcq->q;
spin_lock(&lif->adminq_lock);
while (adminq->tail != adminq->head) {
memset(adminq->tail->desc, 0, sizeof(union ionic_adminq_cmd));
adminq->tail->cb = NULL;
adminq->tail->cb_arg = NULL;
adminq->tail = adminq->tail->next;
}
spin_unlock(&lif->adminq_lock);
}
static int ionic_adminq_check_err(struct ionic_lif *lif,
struct ionic_admin_ctx *ctx,
bool timeout)
{
struct net_device *netdev = lif->netdev;
const char *opcode_str;
const char *status_str;
int err = 0;
if (ctx->comp.comp.status || timeout) {
opcode_str = ionic_opcode_to_str(ctx->cmd.cmd.opcode);
status_str = ionic_error_to_str(ctx->comp.comp.status);
err = timeout ? -ETIMEDOUT :
ionic_error_to_errno(ctx->comp.comp.status);
netdev_err(netdev, "%s (%d) failed: %s (%d)\n",
opcode_str, ctx->cmd.cmd.opcode,
timeout ? "TIMEOUT" : status_str, err);
if (timeout)
ionic_adminq_flush(lif);
}
return err;
}
static void ionic_adminq_cb(struct ionic_queue *q,
struct ionic_desc_info *desc_info,
struct ionic_cq_info *cq_info, void *cb_arg)
{
struct ionic_admin_ctx *ctx = cb_arg;
struct ionic_admin_comp *comp;
struct device *dev;
if (!ctx)
return;
comp = cq_info->cq_desc;
dev = &q->lif->netdev->dev;
memcpy(&ctx->comp, comp, sizeof(*comp));
dev_dbg(dev, "comp admin queue command:\n");
dynamic_hex_dump("comp ", DUMP_PREFIX_OFFSET, 16, 1,
&ctx->comp, sizeof(ctx->comp), true);
complete_all(&ctx->work);
}
static int ionic_adminq_post(struct ionic_lif *lif, struct ionic_admin_ctx *ctx)
{
struct ionic_queue *adminq = &lif->adminqcq->q;
int err = 0;
WARN_ON(in_interrupt());
spin_lock(&lif->adminq_lock);
if (!ionic_q_has_space(adminq, 1)) {
err = -ENOSPC;
goto err_out;
}
memcpy(adminq->head->desc, &ctx->cmd, sizeof(ctx->cmd));
dev_dbg(&lif->netdev->dev, "post admin queue command:\n");
dynamic_hex_dump("cmd ", DUMP_PREFIX_OFFSET, 16, 1,
&ctx->cmd, sizeof(ctx->cmd), true);
ionic_q_post(adminq, true, ionic_adminq_cb, ctx);
err_out:
spin_unlock(&lif->adminq_lock);
return err;
}
int ionic_adminq_post_wait(struct ionic_lif *lif, struct ionic_admin_ctx *ctx)
{
struct net_device *netdev = lif->netdev;
unsigned long remaining;
const char *name;
int err;
err = ionic_adminq_post(lif, ctx);
if (err) {
name = ionic_opcode_to_str(ctx->cmd.cmd.opcode);
netdev_err(netdev, "Posting of %s (%d) failed: %d\n",
name, ctx->cmd.cmd.opcode, err);
return err;
}
remaining = wait_for_completion_timeout(&ctx->work,
HZ * (ulong)DEVCMD_TIMEOUT);
return ionic_adminq_check_err(lif, ctx, (remaining == 0));
}
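/* Service a completion queue from napi and unmask the interrupt when the budget is not exhausted */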
int ionic_napi(struct napi_struct *napi, int budget, ionic_cq_cb cb,
ionic_cq_done_cb done_cb, void *done_arg)
{
struct ionic_qcq *qcq = napi_to_qcq(napi);
struct ionic_cq *cq = &qcq->cq;
u32 work_done, flags = 0;
work_done = ionic_cq_service(cq, budget, cb, done_cb, done_arg);
if (work_done < budget && napi_complete_done(napi, work_done)) {
flags |= IONIC_INTR_CRED_UNMASK;
DEBUG_STATS_INTR_REARM(cq->bound_intr);
}
if (work_done || flags) {
flags |= IONIC_INTR_CRED_RESET_COALESCE;
ionic_intr_credits(cq->lif->ionic->idev.intr_ctrl,
cq->bound_intr->index,
work_done, flags);
}
DEBUG_STATS_NAPI_POLL(qcq, work_done);
return work_done;
}
int ionic_dev_cmd_wait(struct ionic *ionic, unsigned long max_seconds)
{
struct ionic_dev *idev = &ionic->idev;
unsigned long start_time;
unsigned long max_wait;
unsigned long duration;
int opcode;
int done;
int err;
WARN_ON(in_interrupt());
/* Wait for dev cmd to complete, retrying if we get EAGAIN,
* but don't wait any longer than max_seconds.
*/
max_wait = jiffies + (max_seconds * HZ);
try_again:
start_time = jiffies;
do {
done = ionic_dev_cmd_done(idev);
if (done)
break;
msleep(20);
} while (!done && time_before(jiffies, max_wait));
duration = jiffies - start_time;
opcode = idev->dev_cmd_regs->cmd.cmd.opcode;
dev_dbg(ionic->dev, "DEVCMD %s (%d) done=%d took %ld secs (%ld jiffies)\n",
ionic_opcode_to_str(opcode), opcode,
done, duration / HZ, duration);
if (!done && !time_before(jiffies, max_wait)) {
dev_warn(ionic->dev, "DEVCMD %s (%d) timeout after %ld secs\n",
ionic_opcode_to_str(opcode), opcode, max_seconds);
return -ETIMEDOUT;
}
err = ionic_dev_cmd_status(&ionic->idev);
if (err) {
if (err == IONIC_RC_EAGAIN && !time_after(jiffies, max_wait)) {
dev_err(ionic->dev, "DEV_CMD %s (%d) error, %s (%d) retrying...\n",
ionic_opcode_to_str(opcode), opcode,
ionic_error_to_str(err), err);
msleep(1000);
iowrite32(0, &idev->dev_cmd_regs->done);
iowrite32(1, &idev->dev_cmd_regs->doorbell);
goto try_again;
}
dev_err(ionic->dev, "DEV_CMD %s (%d) error, %s (%d) failed\n",
ionic_opcode_to_str(opcode), opcode,
ionic_error_to_str(err), err);
return ionic_error_to_errno(err);
}
return 0;
}
int ionic_setup(struct ionic *ionic)
{
int err;
err = ionic_dev_setup(ionic);
if (err)
return err;
return 0;
}
int ionic_identify(struct ionic *ionic)
{
struct ionic_identity *ident = &ionic->ident;
struct ionic_dev *idev = &ionic->idev;
size_t sz;
int err;
memset(ident, 0, sizeof(*ident));
ident->drv.os_type = cpu_to_le32(IONIC_OS_TYPE_LINUX);
strncpy(ident->drv.driver_ver_str, IONIC_DRV_VERSION,
sizeof(ident->drv.driver_ver_str) - 1);
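/* Pass the driver's identity info to the device, then read back the device's identity data */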
mutex_lock(&ionic->dev_cmd_lock);
sz = min(sizeof(ident->drv), sizeof(idev->dev_cmd_regs->data));
memcpy_toio(&idev->dev_cmd_regs->data, &ident->drv, sz);
ionic_dev_cmd_identify(idev, IONIC_IDENTITY_VERSION_1);
err = ionic_dev_cmd_wait(ionic, DEVCMD_TIMEOUT);
if (!err) {
sz = min(sizeof(ident->dev), sizeof(idev->dev_cmd_regs->data));
memcpy_fromio(&ident->dev, &idev->dev_cmd_regs->data, sz);
}
mutex_unlock(&ionic->dev_cmd_lock);
if (err)
goto err_out_unmap;
ionic_debugfs_add_ident(ionic);
return 0;
err_out_unmap:
return err;
}
int ionic_init(struct ionic *ionic)
{
struct ionic_dev *idev = &ionic->idev;
int err;
mutex_lock(&ionic->dev_cmd_lock);
ionic_dev_cmd_init(idev);
err = ionic_dev_cmd_wait(ionic, DEVCMD_TIMEOUT);
mutex_unlock(&ionic->dev_cmd_lock);
return err;
}
int ionic_reset(struct ionic *ionic)
{
struct ionic_dev *idev = &ionic->idev;
int err;
mutex_lock(&ionic->dev_cmd_lock);
ionic_dev_cmd_reset(idev);
err = ionic_dev_cmd_wait(ionic, DEVCMD_TIMEOUT);
mutex_unlock(&ionic->dev_cmd_lock);
return err;
}
int ionic_port_identify(struct ionic *ionic)
{
struct ionic_identity *ident = &ionic->ident;
struct ionic_dev *idev = &ionic->idev;
size_t sz;
int err;
mutex_lock(&ionic->dev_cmd_lock);
ionic_dev_cmd_port_identify(idev);
err = ionic_dev_cmd_wait(ionic, DEVCMD_TIMEOUT);
if (!err) {
sz = min(sizeof(ident->port), sizeof(idev->dev_cmd_regs->data));
memcpy_fromio(&ident->port, &idev->dev_cmd_regs->data, sz);
}
mutex_unlock(&ionic->dev_cmd_lock);
return err;
}
int ionic_port_init(struct ionic *ionic)
{
struct ionic_identity *ident = &ionic->ident;
struct ionic_dev *idev = &ionic->idev;
size_t sz;
int err;
if (idev->port_info)
return 0;
idev->port_info_sz = ALIGN(sizeof(*idev->port_info), PAGE_SIZE);
idev->port_info = dma_alloc_coherent(ionic->dev, idev->port_info_sz,
&idev->port_info_pa,
GFP_KERNEL);
if (!idev->port_info) {
dev_err(ionic->dev, "Failed to allocate port info, aborting\n");
return -ENOMEM;
}
sz = min(sizeof(ident->port.config), sizeof(idev->dev_cmd_regs->data));
mutex_lock(&ionic->dev_cmd_lock);
memcpy_toio(&idev->dev_cmd_regs->data, &ident->port.config, sz);
ionic_dev_cmd_port_init(idev);
err = ionic_dev_cmd_wait(ionic, DEVCMD_TIMEOUT);
ionic_dev_cmd_port_state(&ionic->idev, IONIC_PORT_ADMIN_STATE_UP);
(void)ionic_dev_cmd_wait(ionic, DEVCMD_TIMEOUT);
mutex_unlock(&ionic->dev_cmd_lock);
if (err) {
dev_err(ionic->dev, "Failed to init port\n");
dma_free_coherent(ionic->dev, idev->port_info_sz,
idev->port_info, idev->port_info_pa);
idev->port_info = NULL;
idev->port_info_pa = 0;
}
return err;
}
int ionic_port_reset(struct ionic *ionic)
{
struct ionic_dev *idev = &ionic->idev;
int err;
if (!idev->port_info)
return 0;
mutex_lock(&ionic->dev_cmd_lock);
ionic_dev_cmd_port_reset(idev);
err = ionic_dev_cmd_wait(ionic, DEVCMD_TIMEOUT);
mutex_unlock(&ionic->dev_cmd_lock);
dma_free_coherent(ionic->dev, idev->port_info_sz,
idev->port_info, idev->port_info_pa);
idev->port_info = NULL;
idev->port_info_pa = 0;
if (err)
dev_err(ionic->dev, "Failed to reset port\n");
return err;
}
static int __init ionic_init_module(void)
{
pr_info("%s %s, ver %s\n",
IONIC_DRV_NAME, IONIC_DRV_DESCRIPTION, IONIC_DRV_VERSION);
ionic_debugfs_create();
return ionic_bus_register_driver();
}
static void __exit ionic_cleanup_module(void)
{
ionic_bus_unregister_driver();
ionic_debugfs_destroy();
pr_info("%s removed\n", IONIC_DRV_NAME);
}
module_init(ionic_init_module);
module_exit(ionic_cleanup_module);


@@ -0,0 +1,136 @@
/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB OR BSD-2-Clause */
/* Copyright (c) 2018-2019 Pensando Systems, Inc. All rights reserved. */
#ifndef IONIC_REGS_H
#define IONIC_REGS_H
#include <linux/io.h>
/** struct ionic_intr - interrupt control register set.
* @coal_init: coalesce timer initial value.
* @mask: interrupt mask value.
* @credits: interrupt credit count and return.
* @mask_assert: interrupt mask value on assert.
* @coal: coalesce timer time remaining.
*/
struct ionic_intr {
u32 coal_init;
u32 mask;
u32 credits;
u32 mask_assert;
u32 coal;
u32 rsvd[3];
};
#define IONIC_INTR_CTRL_REGS_MAX 2048
#define IONIC_INTR_CTRL_COAL_MAX 0x3F
/** enum ionic_intr_mask_vals - valid values for mask and mask_assert.
* @IONIC_INTR_MASK_CLEAR: unmask interrupt.
* @IONIC_INTR_MASK_SET: mask interrupt.
*/
enum ionic_intr_mask_vals {
IONIC_INTR_MASK_CLEAR = 0,
IONIC_INTR_MASK_SET = 1,
};
/** enum ionic_intr_credits_bits - bitwise composition of credits values.
* @IONIC_INTR_CRED_COUNT: bit mask of credit count, no shift needed.
* @IONIC_INTR_CRED_COUNT_SIGNED: bit mask of credit count, including sign bit.
* @IONIC_INTR_CRED_UNMASK: unmask the interrupt.
* @IONIC_INTR_CRED_RESET_COALESCE: reset the coalesce timer.
* @IONIC_INTR_CRED_REARM: unmask the interrupt and reset the coalesce timer.
*/
enum ionic_intr_credits_bits {
IONIC_INTR_CRED_COUNT = 0x7fffu,
IONIC_INTR_CRED_COUNT_SIGNED = 0xffffu,
IONIC_INTR_CRED_UNMASK = 0x10000u,
IONIC_INTR_CRED_RESET_COALESCE = 0x20000u,
IONIC_INTR_CRED_REARM = (IONIC_INTR_CRED_UNMASK |
IONIC_INTR_CRED_RESET_COALESCE),
};
static inline void ionic_intr_coal_init(struct ionic_intr __iomem *intr_ctrl,
int intr_idx, u32 coal)
{
iowrite32(coal, &intr_ctrl[intr_idx].coal_init);
}
static inline void ionic_intr_mask(struct ionic_intr __iomem *intr_ctrl,
int intr_idx, u32 mask)
{
iowrite32(mask, &intr_ctrl[intr_idx].mask);
}
static inline void ionic_intr_credits(struct ionic_intr __iomem *intr_ctrl,
int intr_idx, u32 cred, u32 flags)
{
if (WARN_ON_ONCE(cred > IONIC_INTR_CRED_COUNT)) {
cred = ioread32(&intr_ctrl[intr_idx].credits);
cred &= IONIC_INTR_CRED_COUNT_SIGNED;
}
iowrite32(cred | flags, &intr_ctrl[intr_idx].credits);
}
static inline void ionic_intr_clean(struct ionic_intr __iomem *intr_ctrl,
int intr_idx)
{
u32 cred;
cred = ioread32(&intr_ctrl[intr_idx].credits);
cred &= IONIC_INTR_CRED_COUNT_SIGNED;
cred |= IONIC_INTR_CRED_RESET_COALESCE;
iowrite32(cred, &intr_ctrl[intr_idx].credits);
}
static inline void ionic_intr_mask_assert(struct ionic_intr __iomem *intr_ctrl,
int intr_idx, u32 mask)
{
iowrite32(mask, &intr_ctrl[intr_idx].mask_assert);
}
/** enum ionic_dbell_bits - bitwise composition of dbell values.
*
* @IONIC_DBELL_QID_MASK: unshifted mask of valid queue id bits.
* @IONIC_DBELL_QID_SHIFT: queue id shift amount in dbell value.
* @IONIC_DBELL_QID: macro to build QID component of dbell value.
*
* @IONIC_DBELL_RING_MASK: unshifted mask of valid ring bits.
* @IONIC_DBELL_RING_SHIFT: ring shift amount in dbell value.
* @IONIC_DBELL_RING: macro to build ring component of dbell value.
*
* @IONIC_DBELL_RING_0: ring zero dbell component value.
* @IONIC_DBELL_RING_1: ring one dbell component value.
* @IONIC_DBELL_RING_2: ring two dbell component value.
* @IONIC_DBELL_RING_3: ring three dbell component value.
*
* @IONIC_DBELL_INDEX_MASK: bit mask of valid index bits, no shift needed.
*/
enum ionic_dbell_bits {
IONIC_DBELL_QID_MASK = 0xffffff,
IONIC_DBELL_QID_SHIFT = 24,
#define IONIC_DBELL_QID(n) \
(((u64)(n) & IONIC_DBELL_QID_MASK) << IONIC_DBELL_QID_SHIFT)
IONIC_DBELL_RING_MASK = 0x7,
IONIC_DBELL_RING_SHIFT = 16,
#define IONIC_DBELL_RING(n) \
(((u64)(n) & IONIC_DBELL_RING_MASK) << IONIC_DBELL_RING_SHIFT)
IONIC_DBELL_RING_0 = 0,
IONIC_DBELL_RING_1 = IONIC_DBELL_RING(1),
IONIC_DBELL_RING_2 = IONIC_DBELL_RING(2),
IONIC_DBELL_RING_3 = IONIC_DBELL_RING(3),
IONIC_DBELL_INDEX_MASK = 0xffff,
};
static inline void ionic_dbell_ring(u64 __iomem *db_page, int qtype, u64 val)
{
writeq(val, &db_page[qtype]);
}
#endif /* IONIC_REGS_H */


@@ -0,0 +1,150 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2017 - 2019 Pensando Systems, Inc */
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
#include "ionic.h"
#include "ionic_lif.h"
#include "ionic_rx_filter.h"
void ionic_rx_filter_free(struct ionic_lif *lif, struct ionic_rx_filter *f)
{
struct device *dev = lif->ionic->dev;
hlist_del(&f->by_id);
hlist_del(&f->by_hash);
devm_kfree(dev, f);
}
int ionic_rx_filter_del(struct ionic_lif *lif, struct ionic_rx_filter *f)
{
struct ionic_admin_ctx ctx = {
.work = COMPLETION_INITIALIZER_ONSTACK(ctx.work),
.cmd.rx_filter_del = {
.opcode = IONIC_CMD_RX_FILTER_DEL,
.filter_id = cpu_to_le32(f->filter_id),
},
};
return ionic_adminq_post_wait(lif, &ctx);
}
int ionic_rx_filters_init(struct ionic_lif *lif)
{
unsigned int i;
spin_lock_init(&lif->rx_filters.lock);
for (i = 0; i < IONIC_RX_FILTER_HLISTS; i++) {
INIT_HLIST_HEAD(&lif->rx_filters.by_hash[i]);
INIT_HLIST_HEAD(&lif->rx_filters.by_id[i]);
}
return 0;
}
void ionic_rx_filters_deinit(struct ionic_lif *lif)
{
struct ionic_rx_filter *f;
struct hlist_head *head;
struct hlist_node *tmp;
unsigned int i;
for (i = 0; i < IONIC_RX_FILTER_HLISTS; i++) {
head = &lif->rx_filters.by_id[i];
hlist_for_each_entry_safe(f, tmp, head, by_id)
ionic_rx_filter_free(lif, f);
}
}
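/* Save a successfully added filter so it can later be found by hash or by filter id */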
int ionic_rx_filter_save(struct ionic_lif *lif, u32 flow_id, u16 rxq_index,
u32 hash, struct ionic_admin_ctx *ctx)
{
struct device *dev = lif->ionic->dev;
struct ionic_rx_filter_add_cmd *ac;
struct ionic_rx_filter *f;
struct hlist_head *head;
unsigned int key;
ac = &ctx->cmd.rx_filter_add;
switch (le16_to_cpu(ac->match)) {
case IONIC_RX_FILTER_MATCH_VLAN:
key = le16_to_cpu(ac->vlan.vlan);
break;
case IONIC_RX_FILTER_MATCH_MAC:
key = *(u32 *)ac->mac.addr;
break;
case IONIC_RX_FILTER_MATCH_MAC_VLAN:
key = le16_to_cpu(ac->mac_vlan.vlan);
break;
default:
return -EINVAL;
}
f = devm_kzalloc(dev, sizeof(*f), GFP_KERNEL);
if (!f)
return -ENOMEM;
f->flow_id = flow_id;
f->filter_id = le32_to_cpu(ctx->comp.rx_filter_add.filter_id);
f->rxq_index = rxq_index;
memcpy(&f->cmd, ac, sizeof(f->cmd));
INIT_HLIST_NODE(&f->by_hash);
INIT_HLIST_NODE(&f->by_id);
spin_lock_bh(&lif->rx_filters.lock);
key = hash_32(key, IONIC_RX_FILTER_HASH_BITS);
head = &lif->rx_filters.by_hash[key];
hlist_add_head(&f->by_hash, head);
key = f->filter_id & IONIC_RX_FILTER_HLISTS_MASK;
head = &lif->rx_filters.by_id[key];
hlist_add_head(&f->by_id, head);
spin_unlock_bh(&lif->rx_filters.lock);
return 0;
}
struct ionic_rx_filter *ionic_rx_filter_by_vlan(struct ionic_lif *lif, u16 vid)
{
struct ionic_rx_filter *f;
struct hlist_head *head;
unsigned int key;
key = hash_32(vid, IONIC_RX_FILTER_HASH_BITS);
head = &lif->rx_filters.by_hash[key];
hlist_for_each_entry(f, head, by_hash) {
if (le16_to_cpu(f->cmd.match) != IONIC_RX_FILTER_MATCH_VLAN)
continue;
if (le16_to_cpu(f->cmd.vlan.vlan) == vid)
return f;
}
return NULL;
}
struct ionic_rx_filter *ionic_rx_filter_by_addr(struct ionic_lif *lif,
const u8 *addr)
{
struct ionic_rx_filter *f;
struct hlist_head *head;
unsigned int key;
key = hash_32(*(u32 *)addr, IONIC_RX_FILTER_HASH_BITS);
head = &lif->rx_filters.by_hash[key];
hlist_for_each_entry(f, head, by_hash) {
if (le16_to_cpu(f->cmd.match) != IONIC_RX_FILTER_MATCH_MAC)
continue;
if (memcmp(addr, f->cmd.mac.addr, ETH_ALEN) == 0)
return f;
}
return NULL;
}


@@ -0,0 +1,35 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2017 - 2019 Pensando Systems, Inc */
#ifndef _IONIC_RX_FILTER_H_
#define _IONIC_RX_FILTER_H_
#define IONIC_RXQ_INDEX_ANY (0xFFFF)
struct ionic_rx_filter {
u32 flow_id;
u32 filter_id;
u16 rxq_index;
struct ionic_rx_filter_add_cmd cmd;
struct hlist_node by_hash;
struct hlist_node by_id;
};
#define IONIC_RX_FILTER_HASH_BITS 10
#define IONIC_RX_FILTER_HLISTS BIT(IONIC_RX_FILTER_HASH_BITS)
#define IONIC_RX_FILTER_HLISTS_MASK (IONIC_RX_FILTER_HLISTS - 1)
struct ionic_rx_filters {
spinlock_t lock; /* filter list lock */
struct hlist_head by_hash[IONIC_RX_FILTER_HLISTS]; /* by skb hash */
struct hlist_head by_id[IONIC_RX_FILTER_HLISTS]; /* by filter_id */
};
void ionic_rx_filter_free(struct ionic_lif *lif, struct ionic_rx_filter *f);
int ionic_rx_filter_del(struct ionic_lif *lif, struct ionic_rx_filter *f);
int ionic_rx_filters_init(struct ionic_lif *lif);
void ionic_rx_filters_deinit(struct ionic_lif *lif);
int ionic_rx_filter_save(struct ionic_lif *lif, u32 flow_id, u16 rxq_index,
u32 hash, struct ionic_admin_ctx *ctx);
struct ionic_rx_filter *ionic_rx_filter_by_vlan(struct ionic_lif *lif, u16 vid);
struct ionic_rx_filter *ionic_rx_filter_by_addr(struct ionic_lif *lif, const u8 *addr);
#endif /* _IONIC_RX_FILTER_H_ */


@@ -0,0 +1,310 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2017 - 2019 Pensando Systems, Inc */
#include <linux/kernel.h>
#include <linux/mutex.h>
#include <linux/netdevice.h>
#include "ionic.h"
#include "ionic_lif.h"
#include "ionic_stats.h"
static const struct ionic_stat_desc ionic_lif_stats_desc[] = {
IONIC_LIF_STAT_DESC(tx_packets),
IONIC_LIF_STAT_DESC(tx_bytes),
IONIC_LIF_STAT_DESC(rx_packets),
IONIC_LIF_STAT_DESC(rx_bytes),
IONIC_LIF_STAT_DESC(tx_tso),
IONIC_LIF_STAT_DESC(tx_no_csum),
IONIC_LIF_STAT_DESC(tx_csum),
IONIC_LIF_STAT_DESC(rx_csum_none),
IONIC_LIF_STAT_DESC(rx_csum_complete),
IONIC_LIF_STAT_DESC(rx_csum_error),
};
static const struct ionic_stat_desc ionic_tx_stats_desc[] = {
IONIC_TX_STAT_DESC(pkts),
IONIC_TX_STAT_DESC(bytes),
IONIC_TX_STAT_DESC(clean),
IONIC_TX_STAT_DESC(dma_map_err),
IONIC_TX_STAT_DESC(linearize),
IONIC_TX_STAT_DESC(frags),
};
static const struct ionic_stat_desc ionic_rx_stats_desc[] = {
IONIC_RX_STAT_DESC(pkts),
IONIC_RX_STAT_DESC(bytes),
IONIC_RX_STAT_DESC(dma_map_err),
IONIC_RX_STAT_DESC(alloc_err),
IONIC_RX_STAT_DESC(csum_none),
IONIC_RX_STAT_DESC(csum_complete),
IONIC_RX_STAT_DESC(csum_error),
};
static const struct ionic_stat_desc ionic_txq_stats_desc[] = {
IONIC_TX_Q_STAT_DESC(stop),
IONIC_TX_Q_STAT_DESC(wake),
IONIC_TX_Q_STAT_DESC(drop),
IONIC_TX_Q_STAT_DESC(dbell_count),
};
static const struct ionic_stat_desc ionic_dbg_cq_stats_desc[] = {
IONIC_CQ_STAT_DESC(compl_count),
};
static const struct ionic_stat_desc ionic_dbg_intr_stats_desc[] = {
IONIC_INTR_STAT_DESC(rearm_count),
};
static const struct ionic_stat_desc ionic_dbg_napi_stats_desc[] = {
IONIC_NAPI_STAT_DESC(poll_count),
};
#define IONIC_NUM_LIF_STATS ARRAY_SIZE(ionic_lif_stats_desc)
#define IONIC_NUM_TX_STATS ARRAY_SIZE(ionic_tx_stats_desc)
#define IONIC_NUM_RX_STATS ARRAY_SIZE(ionic_rx_stats_desc)
#define IONIC_NUM_TX_Q_STATS ARRAY_SIZE(ionic_txq_stats_desc)
#define IONIC_NUM_DBG_CQ_STATS ARRAY_SIZE(ionic_dbg_cq_stats_desc)
#define IONIC_NUM_DBG_INTR_STATS ARRAY_SIZE(ionic_dbg_intr_stats_desc)
#define IONIC_NUM_DBG_NAPI_STATS ARRAY_SIZE(ionic_dbg_napi_stats_desc)
#define MAX_Q(lif) ((lif)->netdev->real_num_tx_queues)
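/* Sum the per-queue Tx and Rx counters into the lif-wide software stats */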
static void ionic_get_lif_stats(struct ionic_lif *lif,
struct ionic_lif_sw_stats *stats)
{
struct ionic_tx_stats *tstats;
struct ionic_rx_stats *rstats;
struct ionic_qcq *txqcq;
struct ionic_qcq *rxqcq;
int q_num;
memset(stats, 0, sizeof(*stats));
for (q_num = 0; q_num < MAX_Q(lif); q_num++) {
txqcq = lif_to_txqcq(lif, q_num);
if (txqcq && txqcq->stats) {
tstats = &txqcq->stats->tx;
stats->tx_packets += tstats->pkts;
stats->tx_bytes += tstats->bytes;
stats->tx_tso += tstats->tso;
stats->tx_no_csum += tstats->no_csum;
stats->tx_csum += tstats->csum;
}
rxqcq = lif_to_rxqcq(lif, q_num);
if (rxqcq && rxqcq->stats) {
rstats = &rxqcq->stats->rx;
stats->rx_packets += rstats->pkts;
stats->rx_bytes += rstats->bytes;
stats->rx_csum_none += rstats->csum_none;
stats->rx_csum_complete += rstats->csum_complete;
stats->rx_csum_error += rstats->csum_error;
}
}
}
static u64 ionic_sw_stats_get_count(struct ionic_lif *lif)
{
u64 total = 0;
/* lif stats */
total += IONIC_NUM_LIF_STATS;
/* tx stats */
total += MAX_Q(lif) * IONIC_NUM_TX_STATS;
/* rx stats */
total += MAX_Q(lif) * IONIC_NUM_RX_STATS;
if (test_bit(IONIC_LIF_SW_DEBUG_STATS, lif->state)) {
/* tx debug stats */
total += MAX_Q(lif) * (IONIC_NUM_DBG_CQ_STATS +
IONIC_NUM_TX_Q_STATS +
IONIC_NUM_DBG_INTR_STATS +
IONIC_MAX_NUM_SG_CNTR);
/* rx debug stats */
total += MAX_Q(lif) * (IONIC_NUM_DBG_CQ_STATS +
IONIC_NUM_DBG_INTR_STATS +
IONIC_NUM_DBG_NAPI_STATS +
IONIC_MAX_NUM_NAPI_CNTR);
}
return total;
}
static void ionic_sw_stats_get_strings(struct ionic_lif *lif, u8 **buf)
{
int i, q_num;
for (i = 0; i < IONIC_NUM_LIF_STATS; i++) {
snprintf(*buf, ETH_GSTRING_LEN, ionic_lif_stats_desc[i].name);
*buf += ETH_GSTRING_LEN;
}
for (q_num = 0; q_num < MAX_Q(lif); q_num++) {
for (i = 0; i < IONIC_NUM_TX_STATS; i++) {
snprintf(*buf, ETH_GSTRING_LEN, "tx_%d_%s",
q_num, ionic_tx_stats_desc[i].name);
*buf += ETH_GSTRING_LEN;
}
if (test_bit(IONIC_LIF_SW_DEBUG_STATS, lif->state)) {
for (i = 0; i < IONIC_NUM_TX_Q_STATS; i++) {
snprintf(*buf, ETH_GSTRING_LEN,
"txq_%d_%s",
q_num,
ionic_txq_stats_desc[i].name);
*buf += ETH_GSTRING_LEN;
}
for (i = 0; i < IONIC_NUM_DBG_CQ_STATS; i++) {
snprintf(*buf, ETH_GSTRING_LEN,
"txq_%d_cq_%s",
q_num,
ionic_dbg_cq_stats_desc[i].name);
*buf += ETH_GSTRING_LEN;
}
for (i = 0; i < IONIC_NUM_DBG_INTR_STATS; i++) {
snprintf(*buf, ETH_GSTRING_LEN,
"txq_%d_intr_%s",
q_num,
ionic_dbg_intr_stats_desc[i].name);
*buf += ETH_GSTRING_LEN;
}
for (i = 0; i < IONIC_MAX_NUM_SG_CNTR; i++) {
snprintf(*buf, ETH_GSTRING_LEN,
"txq_%d_sg_cntr_%d",
q_num, i);
*buf += ETH_GSTRING_LEN;
}
}
}
for (q_num = 0; q_num < MAX_Q(lif); q_num++) {
for (i = 0; i < IONIC_NUM_RX_STATS; i++) {
snprintf(*buf, ETH_GSTRING_LEN,
"rx_%d_%s",
q_num, ionic_rx_stats_desc[i].name);
*buf += ETH_GSTRING_LEN;
}
if (test_bit(IONIC_LIF_SW_DEBUG_STATS, lif->state)) {
for (i = 0; i < IONIC_NUM_DBG_CQ_STATS; i++) {
snprintf(*buf, ETH_GSTRING_LEN,
"rxq_%d_cq_%s",
q_num,
ionic_dbg_cq_stats_desc[i].name);
*buf += ETH_GSTRING_LEN;
}
for (i = 0; i < IONIC_NUM_DBG_INTR_STATS; i++) {
snprintf(*buf, ETH_GSTRING_LEN,
"rxq_%d_intr_%s",
q_num,
ionic_dbg_intr_stats_desc[i].name);
*buf += ETH_GSTRING_LEN;
}
for (i = 0; i < IONIC_NUM_DBG_NAPI_STATS; i++) {
snprintf(*buf, ETH_GSTRING_LEN,
"rxq_%d_napi_%s",
q_num,
ionic_dbg_napi_stats_desc[i].name);
*buf += ETH_GSTRING_LEN;
}
for (i = 0; i < IONIC_MAX_NUM_NAPI_CNTR; i++) {
snprintf(*buf, ETH_GSTRING_LEN,
"rxq_%d_napi_work_done_%d",
q_num, i);
*buf += ETH_GSTRING_LEN;
}
}
}
}
static void ionic_sw_stats_get_values(struct ionic_lif *lif, u64 **buf)
{
struct ionic_lif_sw_stats lif_stats;
struct ionic_qcq *txqcq, *rxqcq;
int i, q_num;
ionic_get_lif_stats(lif, &lif_stats);
for (i = 0; i < IONIC_NUM_LIF_STATS; i++) {
**buf = IONIC_READ_STAT64(&lif_stats, &ionic_lif_stats_desc[i]);
(*buf)++;
}
for (q_num = 0; q_num < MAX_Q(lif); q_num++) {
txqcq = lif_to_txqcq(lif, q_num);
for (i = 0; i < IONIC_NUM_TX_STATS; i++) {
**buf = IONIC_READ_STAT64(&txqcq->stats->tx,
&ionic_tx_stats_desc[i]);
(*buf)++;
}
if (test_bit(IONIC_LIF_SW_DEBUG_STATS, lif->state)) {
for (i = 0; i < IONIC_NUM_TX_Q_STATS; i++) {
**buf = IONIC_READ_STAT64(&txqcq->q,
&ionic_txq_stats_desc[i]);
(*buf)++;
}
for (i = 0; i < IONIC_NUM_DBG_CQ_STATS; i++) {
**buf = IONIC_READ_STAT64(&txqcq->cq,
&ionic_dbg_cq_stats_desc[i]);
(*buf)++;
}
for (i = 0; i < IONIC_NUM_DBG_INTR_STATS; i++) {
**buf = IONIC_READ_STAT64(&txqcq->intr,
&ionic_dbg_intr_stats_desc[i]);
(*buf)++;
}
for (i = 0; i < IONIC_MAX_NUM_SG_CNTR; i++) {
**buf = txqcq->stats->tx.sg_cntr[i];
(*buf)++;
}
}
}
for (q_num = 0; q_num < MAX_Q(lif); q_num++) {
rxqcq = lif_to_rxqcq(lif, q_num);
for (i = 0; i < IONIC_NUM_RX_STATS; i++) {
**buf = IONIC_READ_STAT64(&rxqcq->stats->rx,
&ionic_rx_stats_desc[i]);
(*buf)++;
}
if (test_bit(IONIC_LIF_SW_DEBUG_STATS, lif->state)) {
for (i = 0; i < IONIC_NUM_DBG_CQ_STATS; i++) {
**buf = IONIC_READ_STAT64(&rxqcq->cq,
&ionic_dbg_cq_stats_desc[i]);
(*buf)++;
}
for (i = 0; i < IONIC_NUM_DBG_INTR_STATS; i++) {
**buf = IONIC_READ_STAT64(&rxqcq->intr,
&ionic_dbg_intr_stats_desc[i]);
(*buf)++;
}
for (i = 0; i < IONIC_NUM_DBG_NAPI_STATS; i++) {
**buf = IONIC_READ_STAT64(&rxqcq->napi_stats,
&ionic_dbg_napi_stats_desc[i]);
(*buf)++;
}
for (i = 0; i < IONIC_MAX_NUM_NAPI_CNTR; i++) {
**buf = rxqcq->napi_stats.work_done_cntr[i];
(*buf)++;
}
}
}
}
const struct ionic_stats_group_intf ionic_stats_groups[] = {
/* SW Stats group */
{
.get_strings = ionic_sw_stats_get_strings,
.get_values = ionic_sw_stats_get_values,
.get_count = ionic_sw_stats_get_count,
},
/* Add more stat groups here */
};
const int ionic_num_stats_grps = ARRAY_SIZE(ionic_stats_groups);


@@ -0,0 +1,53 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2017 - 2019 Pensando Systems, Inc */
#ifndef _IONIC_STATS_H_
#define _IONIC_STATS_H_
#define IONIC_STAT_TO_OFFSET(type, stat_name) (offsetof(type, stat_name))
#define IONIC_STAT_DESC(type, stat_name) { \
.name = #stat_name, \
.offset = IONIC_STAT_TO_OFFSET(type, stat_name) \
}
#define IONIC_LIF_STAT_DESC(stat_name) \
IONIC_STAT_DESC(struct ionic_lif_sw_stats, stat_name)
#define IONIC_TX_STAT_DESC(stat_name) \
IONIC_STAT_DESC(struct ionic_tx_stats, stat_name)
#define IONIC_RX_STAT_DESC(stat_name) \
IONIC_STAT_DESC(struct ionic_rx_stats, stat_name)
#define IONIC_TX_Q_STAT_DESC(stat_name) \
IONIC_STAT_DESC(struct ionic_queue, stat_name)
#define IONIC_CQ_STAT_DESC(stat_name) \
IONIC_STAT_DESC(struct ionic_cq, stat_name)
#define IONIC_INTR_STAT_DESC(stat_name) \
IONIC_STAT_DESC(struct ionic_intr_info, stat_name)
#define IONIC_NAPI_STAT_DESC(stat_name) \
IONIC_STAT_DESC(struct ionic_napi_stats, stat_name)
/* Interface structure for a particular stats group */
struct ionic_stats_group_intf {
void (*get_strings)(struct ionic_lif *lif, u8 **buf);
void (*get_values)(struct ionic_lif *lif, u64 **buf);
u64 (*get_count)(struct ionic_lif *lif);
};
extern const struct ionic_stats_group_intf ionic_stats_groups[];
extern const int ionic_num_stats_grps;
#define IONIC_READ_STAT64(base_ptr, desc_ptr) \
(*((u64 *)(((u8 *)(base_ptr)) + (desc_ptr)->offset)))
struct ionic_stat_desc {
char name[ETH_GSTRING_LEN];
u64 offset;
};
#endif /* _IONIC_STATS_H_ */


@@ -0,0 +1,925 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2017 - 2019 Pensando Systems, Inc */
#include <linux/ip.h>
#include <linux/ipv6.h>
#include <linux/if_vlan.h>
#include <net/ip6_checksum.h>
#include "ionic.h"
#include "ionic_lif.h"
#include "ionic_txrx.h"
static void ionic_rx_clean(struct ionic_queue *q, struct ionic_desc_info *desc_info,
struct ionic_cq_info *cq_info, void *cb_arg);
static inline void ionic_txq_post(struct ionic_queue *q, bool ring_dbell,
ionic_desc_cb cb_func, void *cb_arg)
{
DEBUG_STATS_TXQ_POST(q_to_qcq(q), q->head->desc, ring_dbell);
ionic_q_post(q, ring_dbell, cb_func, cb_arg);
}
static inline void ionic_rxq_post(struct ionic_queue *q, bool ring_dbell,
ionic_desc_cb cb_func, void *cb_arg)
{
ionic_q_post(q, ring_dbell, cb_func, cb_arg);
DEBUG_STATS_RX_BUFF_CNT(q_to_qcq(q));
}
static inline struct netdev_queue *q_to_ndq(struct ionic_queue *q)
{
return netdev_get_tx_queue(q->lif->netdev, q->index);
}
static void ionic_rx_recycle(struct ionic_queue *q, struct ionic_desc_info *desc_info,
struct sk_buff *skb)
{
struct ionic_rxq_desc *old = desc_info->desc;
struct ionic_rxq_desc *new = q->head->desc;
new->addr = old->addr;
new->len = old->len;
ionic_rxq_post(q, true, ionic_rx_clean, skb);
}
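/* Copy small packets into a fresh skb and recycle the original receive buffer */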
static bool ionic_rx_copybreak(struct ionic_queue *q, struct ionic_desc_info *desc_info,
struct ionic_cq_info *cq_info, struct sk_buff **skb)
{
struct ionic_rxq_comp *comp = cq_info->cq_desc;
struct ionic_rxq_desc *desc = desc_info->desc;
struct net_device *netdev = q->lif->netdev;
struct device *dev = q->lif->ionic->dev;
struct sk_buff *new_skb;
u16 clen, dlen;
clen = le16_to_cpu(comp->len);
dlen = le16_to_cpu(desc->len);
if (clen > q->lif->rx_copybreak) {
dma_unmap_single(dev, (dma_addr_t)le64_to_cpu(desc->addr),
dlen, DMA_FROM_DEVICE);
return false;
}
new_skb = netdev_alloc_skb_ip_align(netdev, clen);
if (!new_skb) {
dma_unmap_single(dev, (dma_addr_t)le64_to_cpu(desc->addr),
dlen, DMA_FROM_DEVICE);
return false;
}
dma_sync_single_for_cpu(dev, (dma_addr_t)le64_to_cpu(desc->addr),
clen, DMA_FROM_DEVICE);
memcpy(new_skb->data, (*skb)->data, clen);
ionic_rx_recycle(q, desc_info, *skb);
*skb = new_skb;
return true;
}
static void ionic_rx_clean(struct ionic_queue *q, struct ionic_desc_info *desc_info,
struct ionic_cq_info *cq_info, void *cb_arg)
{
struct ionic_rxq_comp *comp = cq_info->cq_desc;
struct ionic_qcq *qcq = q_to_qcq(q);
struct sk_buff *skb = cb_arg;
struct ionic_rx_stats *stats;
struct net_device *netdev;
stats = q_to_rx_stats(q);
netdev = q->lif->netdev;
if (comp->status) {
ionic_rx_recycle(q, desc_info, skb);
return;
}
if (unlikely(test_bit(IONIC_LIF_QUEUE_RESET, q->lif->state))) {
/* no packet processing while resetting */
ionic_rx_recycle(q, desc_info, skb);
return;
}
stats->pkts++;
stats->bytes += le16_to_cpu(comp->len);
ionic_rx_copybreak(q, desc_info, cq_info, &skb);
skb_put(skb, le16_to_cpu(comp->len));
skb->protocol = eth_type_trans(skb, netdev);
skb_record_rx_queue(skb, q->index);
if (netdev->features & NETIF_F_RXHASH) {
switch (comp->pkt_type_color & IONIC_RXQ_COMP_PKT_TYPE_MASK) {
case IONIC_PKT_TYPE_IPV4:
case IONIC_PKT_TYPE_IPV6:
skb_set_hash(skb, le32_to_cpu(comp->rss_hash),
PKT_HASH_TYPE_L3);
break;
case IONIC_PKT_TYPE_IPV4_TCP:
case IONIC_PKT_TYPE_IPV6_TCP:
case IONIC_PKT_TYPE_IPV4_UDP:
case IONIC_PKT_TYPE_IPV6_UDP:
skb_set_hash(skb, le32_to_cpu(comp->rss_hash),
PKT_HASH_TYPE_L4);
break;
}
}
if (netdev->features & NETIF_F_RXCSUM) {
if (comp->csum_flags & IONIC_RXQ_COMP_CSUM_F_CALC) {
skb->ip_summed = CHECKSUM_COMPLETE;
skb->csum = (__wsum)le16_to_cpu(comp->csum);
stats->csum_complete++;
}
} else {
stats->csum_none++;
}
if ((comp->csum_flags & IONIC_RXQ_COMP_CSUM_F_TCP_BAD) ||
(comp->csum_flags & IONIC_RXQ_COMP_CSUM_F_UDP_BAD) ||
(comp->csum_flags & IONIC_RXQ_COMP_CSUM_F_IP_BAD))
stats->csum_error++;
if (netdev->features & NETIF_F_HW_VLAN_CTAG_RX) {
if (comp->csum_flags & IONIC_RXQ_COMP_CSUM_F_VLAN)
__vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q),
le16_to_cpu(comp->vlan_tci));
}
napi_gro_receive(&qcq->napi, skb);
}
static bool ionic_rx_service(struct ionic_cq *cq, struct ionic_cq_info *cq_info)
{
struct ionic_rxq_comp *comp = cq_info->cq_desc;
struct ionic_queue *q = cq->bound_q;
struct ionic_desc_info *desc_info;
if (!color_match(comp->pkt_type_color, cq->done_color))
return false;
/* check for empty queue */
if (q->tail->index == q->head->index)
return false;
desc_info = q->tail;
if (desc_info->index != le16_to_cpu(comp->comp_index))
return false;
q->tail = desc_info->next;
/* clean the related q entry, only one per cq completion */
ionic_rx_clean(q, desc_info, cq_info, desc_info->cb_arg);
desc_info->cb = NULL;
desc_info->cb_arg = NULL;
return true;
}
static u32 ionic_rx_walk_cq(struct ionic_cq *rxcq, u32 limit)
{
u32 work_done = 0;
while (ionic_rx_service(rxcq, rxcq->tail)) {
if (rxcq->tail->last)
rxcq->done_color = !rxcq->done_color;
rxcq->tail = rxcq->tail->next;
DEBUG_STATS_CQE_CNT(rxcq);
if (++work_done >= limit)
break;
}
return work_done;
}
void ionic_rx_flush(struct ionic_cq *cq)
{
struct ionic_dev *idev = &cq->lif->ionic->idev;
u32 work_done;
work_done = ionic_rx_walk_cq(cq, cq->num_descs);
if (work_done)
ionic_intr_credits(idev->intr_ctrl, cq->bound_intr->index,
work_done, IONIC_INTR_CRED_RESET_COALESCE);
}
static struct sk_buff *ionic_rx_skb_alloc(struct ionic_queue *q, unsigned int len,
dma_addr_t *dma_addr)
{
struct ionic_lif *lif = q->lif;
struct ionic_rx_stats *stats;
struct net_device *netdev;
struct sk_buff *skb;
struct device *dev;
netdev = lif->netdev;
dev = lif->ionic->dev;
stats = q_to_rx_stats(q);
skb = netdev_alloc_skb_ip_align(netdev, len);
if (!skb) {
net_warn_ratelimited("%s: SKB alloc failed on %s!\n",
netdev->name, q->name);
stats->alloc_err++;
return NULL;
}
*dma_addr = dma_map_single(dev, skb->data, len, DMA_FROM_DEVICE);
if (dma_mapping_error(dev, *dma_addr)) {
dev_kfree_skb(skb);
net_warn_ratelimited("%s: DMA single map failed on %s!\n",
netdev->name, q->name);
stats->dma_map_err++;
return NULL;
}
return skb;
}
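/* Ring the Rx doorbell once every four buffers posted rather than on every descriptor */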
#define IONIC_RX_RING_DOORBELL_STRIDE ((1 << 2) - 1)
void ionic_rx_fill(struct ionic_queue *q)
{
struct net_device *netdev = q->lif->netdev;
struct ionic_rxq_desc *desc;
struct sk_buff *skb;
dma_addr_t dma_addr;
bool ring_doorbell;
unsigned int len;
unsigned int i;
len = netdev->mtu + ETH_HLEN;
for (i = ionic_q_space_avail(q); i; i--) {
skb = ionic_rx_skb_alloc(q, len, &dma_addr);
if (!skb)
return;
desc = q->head->desc;
desc->addr = cpu_to_le64(dma_addr);
desc->len = cpu_to_le16(len);
desc->opcode = IONIC_RXQ_DESC_OPCODE_SIMPLE;
ring_doorbell = ((q->head->index + 1) &
IONIC_RX_RING_DOORBELL_STRIDE) == 0;
ionic_rxq_post(q, ring_doorbell, ionic_rx_clean, skb);
}
}
static void ionic_rx_fill_cb(void *arg)
{
ionic_rx_fill(arg);
}
void ionic_rx_empty(struct ionic_queue *q)
{
struct device *dev = q->lif->ionic->dev;
struct ionic_desc_info *cur;
struct ionic_rxq_desc *desc;
for (cur = q->tail; cur != q->head; cur = cur->next) {
desc = cur->desc;
dma_unmap_single(dev, le64_to_cpu(desc->addr),
le16_to_cpu(desc->len), DMA_FROM_DEVICE);
dev_kfree_skb(cur->cb_arg);
cur->cb_arg = NULL;
}
}
int ionic_rx_napi(struct napi_struct *napi, int budget)
{
struct ionic_qcq *qcq = napi_to_qcq(napi);
struct ionic_cq *rxcq = napi_to_cq(napi);
unsigned int qi = rxcq->bound_q->index;
struct ionic_dev *idev;
struct ionic_lif *lif;
struct ionic_cq *txcq;
u32 work_done = 0;
u32 flags = 0;
lif = rxcq->bound_q->lif;
idev = &lif->ionic->idev;
txcq = &lif->txqcqs[qi].qcq->cq;
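/* Tx completions share the Rx interrupt, so service them from the Rx napi */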
ionic_tx_flush(txcq);
work_done = ionic_rx_walk_cq(rxcq, budget);
if (work_done)
ionic_rx_fill_cb(rxcq->bound_q);
if (work_done < budget && napi_complete_done(napi, work_done)) {
flags |= IONIC_INTR_CRED_UNMASK;
DEBUG_STATS_INTR_REARM(rxcq->bound_intr);
}
if (work_done || flags) {
flags |= IONIC_INTR_CRED_RESET_COALESCE;
ionic_intr_credits(idev->intr_ctrl, rxcq->bound_intr->index,
work_done, flags);
}
DEBUG_STATS_NAPI_POLL(qcq, work_done);
return work_done;
}
static dma_addr_t ionic_tx_map_single(struct ionic_queue *q, void *data, size_t len)
{
struct ionic_tx_stats *stats = q_to_tx_stats(q);
struct device *dev = q->lif->ionic->dev;
dma_addr_t dma_addr;
dma_addr = dma_map_single(dev, data, len, DMA_TO_DEVICE);
if (dma_mapping_error(dev, dma_addr)) {
net_warn_ratelimited("%s: DMA single map failed on %s!\n",
q->lif->netdev->name, q->name);
stats->dma_map_err++;
}
/* on failure, return the mapping-error cookie so the callers'
 * dma_mapping_error() checks can catch it
 */
return dma_addr;
}
static dma_addr_t ionic_tx_map_frag(struct ionic_queue *q, const skb_frag_t *frag,
size_t offset, size_t len)
{
struct ionic_tx_stats *stats = q_to_tx_stats(q);
struct device *dev = q->lif->ionic->dev;
dma_addr_t dma_addr;
dma_addr = skb_frag_dma_map(dev, frag, offset, len, DMA_TO_DEVICE);
if (dma_mapping_error(dev, dma_addr)) {
net_warn_ratelimited("%s: DMA frag map failed on %s!\n",
q->lif->netdev->name, q->name);
stats->dma_map_err++;
}
return dma_addr;
}
static void ionic_tx_clean(struct ionic_queue *q, struct ionic_desc_info *desc_info,
struct ionic_cq_info *cq_info, void *cb_arg)
{
struct ionic_txq_sg_desc *sg_desc = desc_info->sg_desc;
struct ionic_txq_sg_elem *elem = sg_desc->elems;
struct ionic_tx_stats *stats = q_to_tx_stats(q);
struct ionic_txq_desc *desc = desc_info->desc;
struct device *dev = q->lif->ionic->dev;
u8 opcode, flags, nsge;
u16 queue_index;
unsigned int i;
u64 addr;
decode_txq_desc_cmd(le64_to_cpu(desc->cmd),
&opcode, &flags, &nsge, &addr);
/* use unmap_single only if either this is not TSO,
* or this is the first descriptor of a TSO
*/
if (opcode != IONIC_TXQ_DESC_OPCODE_TSO ||
flags & IONIC_TXQ_DESC_FLAG_TSO_SOT)
dma_unmap_single(dev, (dma_addr_t)addr,
le16_to_cpu(desc->len), DMA_TO_DEVICE);
else
dma_unmap_page(dev, (dma_addr_t)addr,
le16_to_cpu(desc->len), DMA_TO_DEVICE);
for (i = 0; i < nsge; i++, elem++)
dma_unmap_page(dev, (dma_addr_t)le64_to_cpu(elem->addr),
le16_to_cpu(elem->len), DMA_TO_DEVICE);
if (cb_arg) {
struct sk_buff *skb = cb_arg;
u32 len = skb->len;
queue_index = skb_get_queue_mapping(skb);
if (unlikely(__netif_subqueue_stopped(q->lif->netdev,
queue_index))) {
netif_wake_subqueue(q->lif->netdev, queue_index);
q->wake++;
}
dev_kfree_skb_any(skb);
stats->clean++;
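/* BQL: report this packet's bytes back to the stack as completed */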
netdev_tx_completed_queue(q_to_ndq(q), 1, len);
}
}
void ionic_tx_flush(struct ionic_cq *cq)
{
struct ionic_txq_comp *comp = cq->tail->cq_desc;
struct ionic_dev *idev = &cq->lif->ionic->idev;
struct ionic_queue *q = cq->bound_q;
struct ionic_desc_info *desc_info;
unsigned int work_done = 0;
/* walk the completed cq entries */
while (work_done < cq->num_descs &&
color_match(comp->color, cq->done_color)) {
/* clean the related q entries; there can be
* several q entries completed for each cq completion
*/
do {
desc_info = q->tail;
q->tail = desc_info->next;
ionic_tx_clean(q, desc_info, cq->tail,
desc_info->cb_arg);
desc_info->cb = NULL;
desc_info->cb_arg = NULL;
} while (desc_info->index != le16_to_cpu(comp->comp_index));
if (cq->tail->last)
cq->done_color = !cq->done_color;
cq->tail = cq->tail->next;
comp = cq->tail->cq_desc;
DEBUG_STATS_CQE_CNT(cq);
work_done++;
}
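/* return the completion credits without unmasking the interrupt */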
if (work_done)
ionic_intr_credits(idev->intr_ctrl, cq->bound_intr->index,
work_done, 0);
}
static int ionic_tx_tcp_inner_pseudo_csum(struct sk_buff *skb)
{
int err;
err = skb_cow_head(skb, 0);
if (err)
return err;
if (skb->protocol == cpu_to_be16(ETH_P_IP)) {
inner_ip_hdr(skb)->check = 0;
inner_tcp_hdr(skb)->check =
~csum_tcpudp_magic(inner_ip_hdr(skb)->saddr,
inner_ip_hdr(skb)->daddr,
0, IPPROTO_TCP, 0);
} else if (skb->protocol == cpu_to_be16(ETH_P_IPV6)) {
inner_tcp_hdr(skb)->check =
~csum_ipv6_magic(&inner_ipv6_hdr(skb)->saddr,
&inner_ipv6_hdr(skb)->daddr,
0, IPPROTO_TCP, 0);
}
return 0;
}
static int ionic_tx_tcp_pseudo_csum(struct sk_buff *skb)
{
int err;
err = skb_cow_head(skb, 0);
if (err)
return err;
if (skb->protocol == cpu_to_be16(ETH_P_IP)) {
ip_hdr(skb)->check = 0;
tcp_hdr(skb)->check =
~csum_tcpudp_magic(ip_hdr(skb)->saddr,
ip_hdr(skb)->daddr,
0, IPPROTO_TCP, 0);
} else if (skb->protocol == cpu_to_be16(ETH_P_IPV6)) {
tcp_hdr(skb)->check =
~csum_ipv6_magic(&ipv6_hdr(skb)->saddr,
&ipv6_hdr(skb)->daddr,
0, IPPROTO_TCP, 0);
}
return 0;
}
static void ionic_tx_tso_post(struct ionic_queue *q, struct ionic_txq_desc *desc,
struct sk_buff *skb,
dma_addr_t addr, u8 nsge, u16 len,
unsigned int hdrlen, unsigned int mss,
bool outer_csum,
u16 vlan_tci, bool has_vlan,
bool start, bool done)
{
u8 flags = 0;
u64 cmd;
flags |= has_vlan ? IONIC_TXQ_DESC_FLAG_VLAN : 0;
flags |= outer_csum ? IONIC_TXQ_DESC_FLAG_ENCAP : 0;
flags |= start ? IONIC_TXQ_DESC_FLAG_TSO_SOT : 0;
flags |= done ? IONIC_TXQ_DESC_FLAG_TSO_EOT : 0;
cmd = encode_txq_desc_cmd(IONIC_TXQ_DESC_OPCODE_TSO, flags, nsge, addr);
desc->cmd = cpu_to_le64(cmd);
desc->len = cpu_to_le16(len);
desc->vlan_tci = cpu_to_le16(vlan_tci);
desc->hdr_len = cpu_to_le16(hdrlen);
desc->mss = cpu_to_le16(mss);
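/* the final segment does BQL accounting and attaches the skb for
 * cleanup, ringing the doorbell unless more frames are pending;
 * intermediate segments post with no completion arg
 */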
if (done) {
skb_tx_timestamp(skb);
netdev_tx_sent_queue(q_to_ndq(q), skb->len);
ionic_txq_post(q, !netdev_xmit_more(), ionic_tx_clean, skb);
} else {
ionic_txq_post(q, false, ionic_tx_clean, NULL);
}
}
static struct ionic_txq_desc *ionic_tx_tso_next(struct ionic_queue *q,
struct ionic_txq_sg_elem **elem)
{
struct ionic_txq_sg_desc *sg_desc = q->head->sg_desc;
struct ionic_txq_desc *desc = q->head->desc;
*elem = sg_desc->elems;
return desc;
}
static int ionic_tx_tso(struct ionic_queue *q, struct sk_buff *skb)
{
struct ionic_tx_stats *stats = q_to_tx_stats(q);
struct ionic_desc_info *abort = q->head;
struct device *dev = q->lif->ionic->dev;
struct ionic_desc_info *rewind = abort;
struct ionic_txq_sg_elem *elem;
struct ionic_txq_desc *desc;
unsigned int frag_left = 0;
unsigned int offset = 0;
unsigned int len_left;
dma_addr_t desc_addr;
dma_addr_t frag_addr;
unsigned int hdrlen;
unsigned int nfrags;
unsigned int seglen;
u64 total_bytes = 0;
u64 total_pkts = 0;
unsigned int left;
unsigned int len;
unsigned int mss;
skb_frag_t *frag;
bool start, done;
bool outer_csum;
bool has_vlan;
u16 desc_len;
u8 desc_nsge;
u16 vlan_tci;
bool encap;
int err;
mss = skb_shinfo(skb)->gso_size;
nfrags = skb_shinfo(skb)->nr_frags;
len_left = skb->len - skb_headlen(skb);
outer_csum = (skb_shinfo(skb)->gso_type & SKB_GSO_GRE_CSUM) ||
(skb_shinfo(skb)->gso_type & SKB_GSO_UDP_TUNNEL_CSUM);
has_vlan = !!skb_vlan_tag_present(skb);
vlan_tci = skb_vlan_tag_get(skb);
encap = skb->encapsulation;
/* Preload inner-most TCP csum field with IP pseudo hdr
* calculated with IP length set to zero. HW will later
* add in length to each TCP segment resulting from the TSO.
*/
if (encap)
err = ionic_tx_tcp_inner_pseudo_csum(skb);
else
err = ionic_tx_tcp_pseudo_csum(skb);
if (err)
return err;
if (encap)
hdrlen = skb_inner_transport_header(skb) - skb->data +
inner_tcp_hdrlen(skb);
else
hdrlen = skb_transport_offset(skb) + tcp_hdrlen(skb);
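/* the first segment carries the headers plus up to one mss of payload;
 * each later segment carries up to one mss
 */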
seglen = hdrlen + mss;
left = skb_headlen(skb);
desc = ionic_tx_tso_next(q, &elem);
start = true;
/* Chop skb->data up into desc segments */
while (left > 0) {
len = min(seglen, left);
frag_left = seglen - len;
desc_addr = ionic_tx_map_single(q, skb->data + offset, len);
if (dma_mapping_error(dev, desc_addr))
goto err_out_abort;
desc_len = len;
desc_nsge = 0;
left -= len;
offset += len;
if (nfrags > 0 && frag_left > 0)
continue;
done = (nfrags == 0 && left == 0);
ionic_tx_tso_post(q, desc, skb,
desc_addr, desc_nsge, desc_len,
hdrlen, mss,
outer_csum,
vlan_tci, has_vlan,
start, done);
total_pkts++;
total_bytes += start ? len : len + hdrlen;
desc = ionic_tx_tso_next(q, &elem);
start = false;
seglen = mss;
}
/* Chop skb frags into desc segments */
for (frag = skb_shinfo(skb)->frags; len_left; frag++) {
offset = 0;
left = skb_frag_size(frag);
len_left -= left;
nfrags--;
stats->frags++;
while (left > 0) {
if (frag_left > 0) {
len = min(frag_left, left);
frag_left -= len;
/* map via a cpu-order local so the error check sees
 * a dma_addr_t rather than a __le64
 */
frag_addr = ionic_tx_map_frag(q, frag,
offset, len);
if (dma_mapping_error(dev, frag_addr))
goto err_out_abort;
elem->addr = cpu_to_le64(frag_addr);
elem->len = cpu_to_le16(len);
elem++;
desc_nsge++;
left -= len;
offset += len;
if (nfrags > 0 && frag_left > 0)
continue;
done = (nfrags == 0 && left == 0);
ionic_tx_tso_post(q, desc, skb, desc_addr,
desc_nsge, desc_len,
hdrlen, mss, outer_csum,
vlan_tci, has_vlan,
start, done);
total_pkts++;
total_bytes += start ? len : len + hdrlen;
desc = ionic_tx_tso_next(q, &elem);
start = false;
} else {
len = min(mss, left);
frag_left = mss - len;
desc_addr = ionic_tx_map_frag(q, frag,
offset, len);
if (dma_mapping_error(dev, desc_addr))
goto err_out_abort;
desc_len = len;
desc_nsge = 0;
left -= len;
offset += len;
if (nfrags > 0 && frag_left > 0)
continue;
done = (nfrags == 0 && left == 0);
ionic_tx_tso_post(q, desc, skb, desc_addr,
desc_nsge, desc_len,
hdrlen, mss, outer_csum,
vlan_tci, has_vlan,
start, done);
total_pkts++;
total_bytes += start ? len : len + hdrlen;
desc = ionic_tx_tso_next(q, &elem);
start = false;
}
}
}
stats->pkts += total_pkts;
stats->bytes += total_bytes;
stats->tso++;
return 0;
err_out_abort:
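/* unwind any descriptors already built for this skb and
 * put the queue head back where it started
 */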
while (rewind->desc != q->head->desc) {
ionic_tx_clean(q, rewind, NULL, NULL);
rewind = rewind->next;
}
q->head = abort;
return -ENOMEM;
}
static int ionic_tx_calc_csum(struct ionic_queue *q, struct sk_buff *skb)
{
struct ionic_tx_stats *stats = q_to_tx_stats(q);
struct ionic_txq_desc *desc = q->head->desc;
struct device *dev = q->lif->ionic->dev;
dma_addr_t dma_addr;
bool has_vlan;
u8 flags = 0;
bool encap;
u64 cmd;
has_vlan = !!skb_vlan_tag_present(skb);
encap = skb->encapsulation;
dma_addr = ionic_tx_map_single(q, skb->data, skb_headlen(skb));
if (dma_mapping_error(dev, dma_addr))
return -ENOMEM;
flags |= has_vlan ? IONIC_TXQ_DESC_FLAG_VLAN : 0;
flags |= encap ? IONIC_TXQ_DESC_FLAG_ENCAP : 0;
cmd = encode_txq_desc_cmd(IONIC_TXQ_DESC_OPCODE_CSUM_PARTIAL,
flags, skb_shinfo(skb)->nr_frags, dma_addr);
desc->cmd = cpu_to_le64(cmd);
desc->len = cpu_to_le16(skb_headlen(skb));
desc->vlan_tci = cpu_to_le16(skb_vlan_tag_get(skb));
desc->csum_start = cpu_to_le16(skb_checksum_start_offset(skb));
desc->csum_offset = cpu_to_le16(skb->csum_offset);
if (skb->csum_not_inet)
stats->crc32_csum++;
else
stats->csum++;
return 0;
}
static int ionic_tx_calc_no_csum(struct ionic_queue *q, struct sk_buff *skb)
{
struct ionic_tx_stats *stats = q_to_tx_stats(q);
struct ionic_txq_desc *desc = q->head->desc;
struct device *dev = q->lif->ionic->dev;
dma_addr_t dma_addr;
bool has_vlan;
u8 flags = 0;
bool encap;
u64 cmd;
has_vlan = !!skb_vlan_tag_present(skb);
encap = skb->encapsulation;
dma_addr = ionic_tx_map_single(q, skb->data, skb_headlen(skb));
if (dma_mapping_error(dev, dma_addr))
return -ENOMEM;
flags |= has_vlan ? IONIC_TXQ_DESC_FLAG_VLAN : 0;
flags |= encap ? IONIC_TXQ_DESC_FLAG_ENCAP : 0;
cmd = encode_txq_desc_cmd(IONIC_TXQ_DESC_OPCODE_CSUM_NONE,
flags, skb_shinfo(skb)->nr_frags, dma_addr);
desc->cmd = cpu_to_le64(cmd);
desc->len = cpu_to_le16(skb_headlen(skb));
desc->vlan_tci = cpu_to_le16(skb_vlan_tag_get(skb));
stats->no_csum++;
return 0;
}
static int ionic_tx_skb_frags(struct ionic_queue *q, struct sk_buff *skb)
{
struct ionic_txq_sg_desc *sg_desc = q->head->sg_desc;
unsigned int len_left = skb->len - skb_headlen(skb);
struct ionic_txq_sg_elem *elem = sg_desc->elems;
struct ionic_tx_stats *stats = q_to_tx_stats(q);
struct device *dev = q->lif->ionic->dev;
dma_addr_t dma_addr;
skb_frag_t *frag;
u16 len;
for (frag = skb_shinfo(skb)->frags; len_left; frag++, elem++) {
len = skb_frag_size(frag);
elem->len = cpu_to_le16(len);
dma_addr = ionic_tx_map_frag(q, frag, 0, len);
if (dma_mapping_error(dev, dma_addr))
return -ENOMEM;
elem->addr = cpu_to_le64(dma_addr);
len_left -= len;
stats->frags++;
}
return 0;
}
static int ionic_tx(struct ionic_queue *q, struct sk_buff *skb)
{
struct ionic_tx_stats *stats = q_to_tx_stats(q);
int err;
/* set up the initial descriptor */
if (skb->ip_summed == CHECKSUM_PARTIAL)
err = ionic_tx_calc_csum(q, skb);
else
err = ionic_tx_calc_no_csum(q, skb);
if (err)
return err;
/* add frags */
err = ionic_tx_skb_frags(q, skb);
if (err)
return err;
skb_tx_timestamp(skb);
stats->pkts++;
stats->bytes += skb->len;
netdev_tx_sent_queue(q_to_ndq(q), skb->len);
ionic_txq_post(q, !netdev_xmit_more(), ionic_tx_clean, skb);
return 0;
}
static int ionic_tx_descs_needed(struct ionic_queue *q, struct sk_buff *skb)
{
struct ionic_tx_stats *stats = q_to_tx_stats(q);
int err;
/* If TSO, need roundup(skb->len/mss) descs */
if (skb_is_gso(skb))
return (skb->len / skb_shinfo(skb)->gso_size) + 1;
/* If non-TSO, just need 1 desc and nr_frags sg elems */
if (skb_shinfo(skb)->nr_frags <= IONIC_TX_MAX_SG_ELEMS)
return 1;
/* Too many frags, so linearize */
err = skb_linearize(skb);
if (err)
return err;
stats->linearize++;
/* Need 1 desc and zero sg elems */
return 1;
}
static int ionic_maybe_stop_tx(struct ionic_queue *q, int ndescs)
{
int stopped = 0;
if (unlikely(!ionic_q_has_space(q, ndescs))) {
netif_stop_subqueue(q->lif->netdev, q->index);
q->stop++;
stopped = 1;
/* Might race with ionic_tx_clean, check again */
smp_rmb();
if (ionic_q_has_space(q, ndescs)) {
netif_wake_subqueue(q->lif->netdev, q->index);
stopped = 0;
}
}
return stopped;
}
netdev_tx_t ionic_start_xmit(struct sk_buff *skb, struct net_device *netdev)
{
u16 queue_index = skb_get_queue_mapping(skb);
struct ionic_lif *lif = netdev_priv(netdev);
struct ionic_queue *q;
int ndescs;
int err;
if (unlikely(!test_bit(IONIC_LIF_UP, lif->state))) {
dev_kfree_skb(skb);
return NETDEV_TX_OK;
}
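/* fall back to queue 0 if the selected Tx queue is not set up */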
if (unlikely(!lif_to_txqcq(lif, queue_index)))
queue_index = 0;
q = lif_to_txq(lif, queue_index);
ndescs = ionic_tx_descs_needed(q, skb);
if (ndescs < 0)
goto err_out_drop;
if (unlikely(ionic_maybe_stop_tx(q, ndescs)))
return NETDEV_TX_BUSY;
if (skb_is_gso(skb))
err = ionic_tx_tso(q, skb);
else
err = ionic_tx(q, skb);
if (err)
goto err_out_drop;
/* Stop the queue if there aren't descriptors for the next packet.
* Since our SG lists per descriptor take care of most of the possible
* fragmentation, we don't need to have many descriptors available.
*/
ionic_maybe_stop_tx(q, 4);
return NETDEV_TX_OK;
err_out_drop:
q->stop++;
q->drop++;
dev_kfree_skb(skb);
return NETDEV_TX_OK;
}

@@ -0,0 +1,15 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2017 - 2019 Pensando Systems, Inc */
#ifndef _IONIC_TXRX_H_
#define _IONIC_TXRX_H_
void ionic_rx_flush(struct ionic_cq *cq);
void ionic_tx_flush(struct ionic_cq *cq);
void ionic_rx_fill(struct ionic_queue *q);
void ionic_rx_empty(struct ionic_queue *q);
int ionic_rx_napi(struct napi_struct *napi, int budget);
netdev_tx_t ionic_start_xmit(struct sk_buff *skb, struct net_device *netdev);
#endif /* _IONIC_TXRX_H_ */

@@ -458,6 +458,13 @@ enum devlink_param_generic_id {
/* Maker of the board */
#define DEVLINK_INFO_VERSION_GENERIC_BOARD_MANUFACTURE "board.manufacture"
/* Part number, identifier of asic design */
#define DEVLINK_INFO_VERSION_GENERIC_ASIC_ID "asic.id"
/* Revision of asic design */
#define DEVLINK_INFO_VERSION_GENERIC_ASIC_REV "asic.rev"
/* Overall FW version */
#define DEVLINK_INFO_VERSION_GENERIC_FW "fw"
/* Control processor FW version */
#define DEVLINK_INFO_VERSION_GENERIC_FW_MGMT "fw.mgmt"
/* Data path microcode controlling high-speed packet processing */