Merge tag 'mlx5-updates-2021-06-14' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux

Saeed Mahameed says:

====================
mlx5-updates-2021-06-14

1) Trivial Lag refactoring in preparation for the upcoming Single FDB lag feature
   - First 3 patches

2) Scalable IRQ distribution for Sub-functions

A subfunction (SF) is a lightweight function that has a parent PCI
function (PF) on which it is deployed.

Currently, an mlx5 subfunction shares its IRQs (MSI-X) with its parent
PCI function. Before this series the PF allocated enough IRQs to cover
all the cores in the system, and newly created SFs re-used all the IRQs
that the PF had allocated for itself. Hence, the more SFs are created,
the more EQs there are per IRQ. Therefore, whenever we handle an
interrupt, we need to poll all SF EQs and PF EQs, instead of only the
PF EQs as on a system without SFs. This has a hard impact on the
performance of both the SFs and the PF.

For example, on a machine with:
Intel(R) Xeon(R) CPU E5-2697 v3 @ 2.60GHz with 56 cores.
PCI Express 3 with BW of 126 Gb/s.
ConnectX-5 Ex; EDR IB (100Gb/s) and 100GbE; dual-port QSFP28; PCIe4.0 x16.

Test case: iperf TX BW, single CPU, affinity of app and IRQ are the same.
- PF only: no SFs on the system, 56 IRQs.
- SF (before): 250 SFs sharing the same 56 IRQs.
- SF (now): 250 SFs + 255 available IRQs for the NIC (see the IRQ spread
  scheme below).

                application  SF-IRQ   channel  BW(Gb/sec)   interrupts/sec
iperf TX affinity
PF only         cpu={0}      cpu={0}  cpu={0}  79           8200
SF (before)     cpu={0}      cpu={0}  cpu={0}  51.3 (-35%)  9500
SF (now)        cpu={0}      cpu={0}  cpu={0}  78 (-2%)     8200

command: $ taskset -c 0 iperf -c 11.1.1.1 -P 3 -i 6 -t 30 | grep SUM

The difference between the SF examples is that before this series we
allocated num_cpus (56) IRQs, all of which were shared among the PF and
the SFs. After this series we allocate 255 IRQs and spread the SFs
among them. This has significantly decreased the load on each IRQ, and
the number of EQs per IRQ is down by 95% (251->11).

The solution proposed in this patchset is a dedicated IRQ pool for SFs
to use. The pool allocates a large number of IRQs for SFs to grab from,
in order to minimize IRQ sharing between the different SFs. IRQs are
not requested from the OS until they are first requested by an SF
consumer, and they are eventually released when the last SF consumer
releases them.

For the detailed IRQ spread and allocation scheme, please see the last
patch: ("net/mlx5: Round-Robin EQs over IRQs")
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
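To make the sharing policy above concrete, here is a minimal userspace C sketch of the idea (not the driver code): an EQ consumer reuses an existing IRQ only while that IRQ serves fewer EQs than a threshold, otherwise a new IRQ is lazily "requested" from the pool, and once the pool is exhausted the least-loaded IRQ is shared. All names and constants here (sim_irq, pool_request, MAX_IRQS, MIN_THRESHOLD) are illustrative stand-ins; the real logic lives in irq_pool_request_affinity()/irq_pool_find_least_loaded() in the diff below.

/* Minimal sketch of the SF IRQ-pool sharing policy: reuse a lightly
 * loaded IRQ, otherwise lazily take a new one from the pool, and fall
 * back to the least-loaded IRQ once the pool is exhausted.
 * Constants and names are illustrative, not the driver's.
 */
#include <stdio.h>

#define MAX_IRQS      8   /* size of the simulated SF completion pool */
#define MIN_THRESHOLD 2   /* EQs an IRQ may serve before we prefer a new one */

struct sim_irq {
	int index;
	int refcount;     /* number of EQs attached to this IRQ */
	int requested;    /* whether the "OS" IRQ was actually requested */
};

static struct sim_irq pool[MAX_IRQS];

/* Pick a lightly loaded existing IRQ, or lazily request a new one. */
static struct sim_irq *pool_request(void)
{
	struct sim_irq *least = NULL;
	int i;

	for (i = 0; i < MAX_IRQS; i++) {
		if (!pool[i].requested)
			continue;
		if (pool[i].refcount < MIN_THRESHOLD)
			return &pool[i];              /* lightly loaded, share it */
		if (!least || pool[i].refcount < least->refcount)
			least = &pool[i];
	}
	/* No lightly loaded IRQ: try to request a fresh one from the pool. */
	for (i = 0; i < MAX_IRQS; i++) {
		if (!pool[i].requested) {
			pool[i].index = i;
			pool[i].requested = 1;
			return &pool[i];
		}
	}
	return least;   /* pool exhausted: share the least-loaded IRQ */
}

int main(void)
{
	int eq;

	/* Attach 20 EQs (e.g. SF consumers) and watch how they spread. */
	for (eq = 0; eq < 20; eq++) {
		struct sim_irq *irq = pool_request();

		irq->refcount++;
		printf("EQ %2d -> IRQ %d (now serves %d EQs)\n",
		       eq, irq->index, irq->refcount);
	}
	return 0;
}

Running it shows EQs filling each IRQ up to the threshold before a new one is taken from the pool, then spreading over the least-loaded IRQs once the pool is exhausted, which is the same shape as the round-robin spread described above.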
commit f0c227c7df
@ -1559,12 +1559,16 @@ int mlx5r_odp_create_eq(struct mlx5_ib_dev *dev, struct mlx5_ib_pf_eq *eq)
|
||||
}
|
||||
|
||||
eq->irq_nb.notifier_call = mlx5_ib_eq_pf_int;
|
||||
param = (struct mlx5_eq_param){
|
||||
.irq_index = 0,
|
||||
param = (struct mlx5_eq_param) {
|
||||
.nent = MLX5_IB_NUM_PF_EQE,
|
||||
};
|
||||
param.mask[0] = 1ull << MLX5_EVENT_TYPE_PAGE_FAULT;
|
||||
if (!zalloc_cpumask_var(¶m.affinity, GFP_KERNEL)) {
|
||||
err = -ENOMEM;
|
||||
goto err_wq;
|
||||
}
|
||||
eq->core = mlx5_eq_create_generic(dev->mdev, ¶m);
|
||||
free_cpumask_var(param.affinity);
|
||||
if (IS_ERR(eq->core)) {
|
||||
err = PTR_ERR(eq->core);
|
||||
goto err_wq;
|
||||
|
@ -5114,7 +5114,7 @@ static void mlx5e_nic_enable(struct mlx5e_priv *priv)
|
||||
mlx5e_set_netdev_mtu_boundaries(priv);
|
||||
mlx5e_set_dev_port_mtu(priv);
|
||||
|
||||
mlx5_lag_add(mdev, netdev);
|
||||
mlx5_lag_add_netdev(mdev, netdev);
|
||||
|
||||
mlx5e_enable_async_events(priv);
|
||||
mlx5e_enable_blocking_events(priv);
|
||||
@ -5162,7 +5162,7 @@ static void mlx5e_nic_disable(struct mlx5e_priv *priv)
|
||||
priv->en_trap = NULL;
|
||||
}
|
||||
mlx5e_disable_async_events(priv);
|
||||
mlx5_lag_remove(mdev);
|
||||
mlx5_lag_remove_netdev(mdev, priv->netdev);
|
||||
mlx5_vxlan_reset_to_default(mdev->vxlan);
|
||||
}
|
||||
|
||||
|
@ -976,7 +976,7 @@ static void mlx5e_uplink_rep_enable(struct mlx5e_priv *priv)
|
||||
if (MLX5_CAP_GEN(mdev, uplink_follow))
|
||||
mlx5_modify_vport_admin_state(mdev, MLX5_VPORT_STATE_OP_MOD_UPLINK,
|
||||
0, 0, MLX5_VPORT_ADMIN_STATE_AUTO);
|
||||
mlx5_lag_add(mdev, netdev);
|
||||
mlx5_lag_add_netdev(mdev, netdev);
|
||||
priv->events_nb.notifier_call = uplink_rep_async_event;
|
||||
mlx5_notifier_register(mdev, &priv->events_nb);
|
||||
mlx5e_dcbnl_initialize(priv);
|
||||
@ -1009,7 +1009,7 @@ static void mlx5e_uplink_rep_disable(struct mlx5e_priv *priv)
|
||||
mlx5e_dcbnl_delete_app(priv);
|
||||
mlx5_notifier_unregister(mdev, &priv->events_nb);
|
||||
mlx5e_rep_tc_disable(priv);
|
||||
mlx5_lag_remove(mdev);
|
||||
mlx5_lag_remove_netdev(mdev, priv->netdev);
|
||||
}
|
||||
|
||||
static MLX5E_DEFINE_STATS_GRP(sw_rep, 0);
|
||||
|
@ -1,33 +1,6 @@
|
||||
// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
|
||||
/*
|
||||
* Copyright (c) 2013-2015, Mellanox Technologies. All rights reserved.
|
||||
*
|
||||
* This software is available to you under a choice of one of two
|
||||
* licenses. You may choose to be licensed under the terms of the GNU
|
||||
* General Public License (GPL) Version 2, available from the file
|
||||
* COPYING in the main directory of this source tree, or the
|
||||
* OpenIB.org BSD license below:
|
||||
*
|
||||
* Redistribution and use in source and binary forms, with or
|
||||
* without modification, are permitted provided that the following
|
||||
* conditions are met:
|
||||
*
|
||||
* - Redistributions of source code must retain the above
|
||||
* copyright notice, this list of conditions and the following
|
||||
* disclaimer.
|
||||
*
|
||||
* - Redistributions in binary form must reproduce the above
|
||||
* copyright notice, this list of conditions and the following
|
||||
* disclaimer in the documentation and/or other materials
|
||||
* provided with the distribution.
|
||||
*
|
||||
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
|
||||
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
|
||||
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
|
||||
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
|
||||
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
|
||||
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
|
||||
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||
* SOFTWARE.
|
||||
* Copyright (c) 2013-2021, Mellanox Technologies inc. All rights reserved.
|
||||
*/
|
||||
|
||||
#include <linux/interrupt.h>
|
||||
@ -45,6 +18,7 @@
|
||||
#include "eswitch.h"
|
||||
#include "lib/clock.h"
|
||||
#include "diag/fw_tracer.h"
|
||||
#include "mlx5_irq.h"
|
||||
|
||||
enum {
|
||||
MLX5_EQE_OWNER_INIT_VAL = 0x1,
|
||||
@ -84,6 +58,9 @@ struct mlx5_eq_table {
|
||||
struct mutex lock; /* sync async eqs creations */
|
||||
int num_comp_eqs;
|
||||
struct mlx5_irq_table *irq_table;
|
||||
#ifdef CONFIG_RFS_ACCEL
|
||||
struct cpu_rmap *rmap;
|
||||
#endif
|
||||
};
|
||||
|
||||
#define MLX5_ASYNC_EVENT_MASK ((1ull << MLX5_EVENT_TYPE_PATH_MIG) | \
|
||||
@ -286,7 +263,7 @@ create_map_eq(struct mlx5_core_dev *dev, struct mlx5_eq *eq,
|
||||
u32 out[MLX5_ST_SZ_DW(create_eq_out)] = {0};
|
||||
u8 log_eq_stride = ilog2(MLX5_EQE_SIZE);
|
||||
struct mlx5_priv *priv = &dev->priv;
|
||||
u8 vecidx = param->irq_index;
|
||||
u16 vecidx = param->irq_index;
|
||||
__be64 *pas;
|
||||
void *eqc;
|
||||
int inlen;
|
||||
@ -309,13 +286,20 @@ create_map_eq(struct mlx5_core_dev *dev, struct mlx5_eq *eq,
|
||||
mlx5_init_fbc(eq->frag_buf.frags, log_eq_stride, log_eq_size, &eq->fbc);
|
||||
init_eq_buf(eq);
|
||||
|
||||
eq->irq = mlx5_irq_request(dev, vecidx, param->affinity);
|
||||
if (IS_ERR(eq->irq)) {
|
||||
err = PTR_ERR(eq->irq);
|
||||
goto err_buf;
|
||||
}
|
||||
|
||||
vecidx = mlx5_irq_get_index(eq->irq);
|
||||
inlen = MLX5_ST_SZ_BYTES(create_eq_in) +
|
||||
MLX5_FLD_SZ_BYTES(create_eq_in, pas[0]) * eq->frag_buf.npages;
|
||||
|
||||
in = kvzalloc(inlen, GFP_KERNEL);
|
||||
if (!in) {
|
||||
err = -ENOMEM;
|
||||
goto err_buf;
|
||||
goto err_irq;
|
||||
}
|
||||
|
||||
pas = (__be64 *)MLX5_ADDR_OF(create_eq_in, in, pas);
|
||||
@ -359,6 +343,8 @@ err_eq:
|
||||
err_in:
|
||||
kvfree(in);
|
||||
|
||||
err_irq:
|
||||
mlx5_irq_release(eq->irq);
|
||||
err_buf:
|
||||
mlx5_frag_buf_free(dev, &eq->frag_buf);
|
||||
return err;
|
||||
@ -377,10 +363,9 @@ err_buf:
|
||||
int mlx5_eq_enable(struct mlx5_core_dev *dev, struct mlx5_eq *eq,
|
||||
struct notifier_block *nb)
|
||||
{
|
||||
struct mlx5_eq_table *eq_table = dev->priv.eq_table;
|
||||
int err;
|
||||
|
||||
err = mlx5_irq_attach_nb(eq_table->irq_table, eq->vecidx, nb);
|
||||
err = mlx5_irq_attach_nb(eq->irq, nb);
|
||||
if (!err)
|
||||
eq_update_ci(eq, 1);
|
||||
|
||||
@ -399,9 +384,7 @@ EXPORT_SYMBOL(mlx5_eq_enable);
|
||||
void mlx5_eq_disable(struct mlx5_core_dev *dev, struct mlx5_eq *eq,
|
||||
struct notifier_block *nb)
|
||||
{
|
||||
struct mlx5_eq_table *eq_table = dev->priv.eq_table;
|
||||
|
||||
mlx5_irq_detach_nb(eq_table->irq_table, eq->vecidx, nb);
|
||||
mlx5_irq_detach_nb(eq->irq, nb);
|
||||
}
|
||||
EXPORT_SYMBOL(mlx5_eq_disable);
|
||||
|
||||
@ -415,10 +398,9 @@ static int destroy_unmap_eq(struct mlx5_core_dev *dev, struct mlx5_eq *eq)
|
||||
if (err)
|
||||
mlx5_core_warn(dev, "failed to destroy a previously created eq: eqn %d\n",
|
||||
eq->eqn);
|
||||
synchronize_irq(eq->irqn);
|
||||
mlx5_irq_release(eq->irq);
|
||||
|
||||
mlx5_frag_buf_free(dev, &eq->frag_buf);
|
||||
|
||||
return err;
|
||||
}
|
||||
|
||||
@ -490,14 +472,7 @@ static int create_async_eq(struct mlx5_core_dev *dev,
|
||||
int err;
|
||||
|
||||
mutex_lock(&eq_table->lock);
|
||||
/* Async EQs must share irq index 0 */
|
||||
if (param->irq_index != 0) {
|
||||
err = -EINVAL;
|
||||
goto unlock;
|
||||
}
|
||||
|
||||
err = create_map_eq(dev, eq, param);
|
||||
unlock:
|
||||
mutex_unlock(&eq_table->lock);
|
||||
return err;
|
||||
}
|
||||
@ -616,8 +591,11 @@ setup_async_eq(struct mlx5_core_dev *dev, struct mlx5_eq_async *eq,
|
||||
|
||||
eq->irq_nb.notifier_call = mlx5_eq_async_int;
|
||||
spin_lock_init(&eq->lock);
|
||||
if (!zalloc_cpumask_var(¶m->affinity, GFP_KERNEL))
|
||||
return -ENOMEM;
|
||||
|
||||
err = create_async_eq(dev, &eq->core, param);
|
||||
free_cpumask_var(param->affinity);
|
||||
if (err) {
|
||||
mlx5_core_warn(dev, "failed to create %s EQ %d\n", name, err);
|
||||
return err;
|
||||
@ -652,7 +630,6 @@ static int create_async_eqs(struct mlx5_core_dev *dev)
|
||||
mlx5_eq_notifier_register(dev, &table->cq_err_nb);
|
||||
|
||||
param = (struct mlx5_eq_param) {
|
||||
.irq_index = 0,
|
||||
.nent = MLX5_NUM_CMD_EQE,
|
||||
.mask[0] = 1ull << MLX5_EVENT_TYPE_CMD,
|
||||
};
|
||||
@ -665,7 +642,6 @@ static int create_async_eqs(struct mlx5_core_dev *dev)
|
||||
mlx5_cmd_allowed_opcode(dev, CMD_ALLOWED_OPCODE_ALL);
|
||||
|
||||
param = (struct mlx5_eq_param) {
|
||||
.irq_index = 0,
|
||||
.nent = MLX5_NUM_ASYNC_EQE,
|
||||
};
|
||||
|
||||
@ -675,7 +651,6 @@ static int create_async_eqs(struct mlx5_core_dev *dev)
|
||||
goto err2;
|
||||
|
||||
param = (struct mlx5_eq_param) {
|
||||
.irq_index = 0,
|
||||
.nent = /* TODO: sriov max_vf + */ 1,
|
||||
.mask[0] = 1ull << MLX5_EVENT_TYPE_PAGE_REQUEST,
|
||||
};
|
||||
@ -735,6 +710,9 @@ mlx5_eq_create_generic(struct mlx5_core_dev *dev,
|
||||
struct mlx5_eq *eq = kvzalloc(sizeof(*eq), GFP_KERNEL);
|
||||
int err;
|
||||
|
||||
if (!param->affinity)
|
||||
return ERR_PTR(-EINVAL);
|
||||
|
||||
if (!eq)
|
||||
return ERR_PTR(-ENOMEM);
|
||||
|
||||
@ -845,16 +823,21 @@ static int create_comp_eqs(struct mlx5_core_dev *dev)
|
||||
.irq_index = vecidx,
|
||||
.nent = nent,
|
||||
};
|
||||
err = create_map_eq(dev, &eq->core, ¶m);
|
||||
if (err) {
|
||||
kfree(eq);
|
||||
goto clean;
|
||||
|
||||
if (!zalloc_cpumask_var(¶m.affinity, GFP_KERNEL)) {
|
||||
err = -ENOMEM;
|
||||
goto clean_eq;
|
||||
}
|
||||
cpumask_set_cpu(cpumask_local_spread(i, dev->priv.numa_node),
|
||||
param.affinity);
|
||||
err = create_map_eq(dev, &eq->core, ¶m);
|
||||
free_cpumask_var(param.affinity);
|
||||
if (err)
|
||||
goto clean_eq;
|
||||
err = mlx5_eq_enable(dev, &eq->core, &eq->irq_nb);
|
||||
if (err) {
|
||||
destroy_unmap_eq(dev, &eq->core);
|
||||
kfree(eq);
|
||||
goto clean;
|
||||
goto clean_eq;
|
||||
}
|
||||
|
||||
mlx5_core_dbg(dev, "allocated completion EQN %d\n", eq->core.eqn);
|
||||
@ -863,7 +846,8 @@ static int create_comp_eqs(struct mlx5_core_dev *dev)
|
||||
}
|
||||
|
||||
return 0;
|
||||
|
||||
clean_eq:
|
||||
kfree(eq);
|
||||
clean:
|
||||
destroy_comp_eqs(dev);
|
||||
return err;
|
||||
@ -899,17 +883,23 @@ EXPORT_SYMBOL(mlx5_comp_vectors_count);
|
||||
struct cpumask *
|
||||
mlx5_comp_irq_get_affinity_mask(struct mlx5_core_dev *dev, int vector)
|
||||
{
|
||||
int vecidx = vector + MLX5_IRQ_VEC_COMP_BASE;
|
||||
struct mlx5_eq_table *table = dev->priv.eq_table;
|
||||
struct mlx5_eq_comp *eq, *n;
|
||||
int i = 0;
|
||||
|
||||
return mlx5_irq_get_affinity_mask(dev->priv.eq_table->irq_table,
|
||||
vecidx);
|
||||
list_for_each_entry_safe(eq, n, &table->comp_eqs_list, list) {
|
||||
if (i++ == vector)
|
||||
break;
|
||||
}
|
||||
|
||||
return mlx5_irq_get_affinity_mask(eq->core.irq);
|
||||
}
|
||||
EXPORT_SYMBOL(mlx5_comp_irq_get_affinity_mask);
|
||||
|
||||
#ifdef CONFIG_RFS_ACCEL
|
||||
struct cpu_rmap *mlx5_eq_table_get_rmap(struct mlx5_core_dev *dev)
|
||||
{
|
||||
return mlx5_irq_get_rmap(dev->priv.eq_table->irq_table);
|
||||
return dev->priv.eq_table->rmap;
|
||||
}
|
||||
#endif
|
||||
|
||||
@ -926,12 +916,57 @@ struct mlx5_eq_comp *mlx5_eqn2comp_eq(struct mlx5_core_dev *dev, int eqn)
|
||||
return ERR_PTR(-ENOENT);
|
||||
}
|
||||
|
||||
static void clear_rmap(struct mlx5_core_dev *dev)
|
||||
{
|
||||
#ifdef CONFIG_RFS_ACCEL
|
||||
struct mlx5_eq_table *eq_table = dev->priv.eq_table;
|
||||
|
||||
free_irq_cpu_rmap(eq_table->rmap);
|
||||
#endif
|
||||
}
|
||||
|
||||
static int set_rmap(struct mlx5_core_dev *mdev)
|
||||
{
|
||||
int err = 0;
|
||||
#ifdef CONFIG_RFS_ACCEL
|
||||
struct mlx5_eq_table *eq_table = mdev->priv.eq_table;
|
||||
int vecidx;
|
||||
|
||||
eq_table->rmap = alloc_irq_cpu_rmap(eq_table->num_comp_eqs);
|
||||
if (!eq_table->rmap) {
|
||||
err = -ENOMEM;
|
||||
mlx5_core_err(mdev, "Failed to allocate cpu_rmap. err %d", err);
|
||||
goto err_out;
|
||||
}
|
||||
|
||||
vecidx = MLX5_IRQ_VEC_COMP_BASE;
|
||||
for (; vecidx < eq_table->num_comp_eqs + MLX5_IRQ_VEC_COMP_BASE;
|
||||
vecidx++) {
|
||||
err = irq_cpu_rmap_add(eq_table->rmap,
|
||||
pci_irq_vector(mdev->pdev, vecidx));
|
||||
if (err) {
|
||||
mlx5_core_err(mdev, "irq_cpu_rmap_add failed. err %d",
|
||||
err);
|
||||
goto err_irq_cpu_rmap_add;
|
||||
}
|
||||
}
|
||||
return 0;
|
||||
|
||||
err_irq_cpu_rmap_add:
|
||||
clear_rmap(mdev);
|
||||
err_out:
|
||||
#endif
|
||||
return err;
|
||||
}
|
||||
|
||||
/* This function should only be called after mlx5_cmd_force_teardown_hca */
|
||||
void mlx5_core_eq_free_irqs(struct mlx5_core_dev *dev)
|
||||
{
|
||||
struct mlx5_eq_table *table = dev->priv.eq_table;
|
||||
|
||||
mutex_lock(&table->lock); /* sync with create/destroy_async_eq */
|
||||
if (!mlx5_core_is_sf(dev))
|
||||
clear_rmap(dev);
|
||||
mlx5_irq_table_destroy(dev);
|
||||
mutex_unlock(&table->lock);
|
||||
}
|
||||
@ -948,12 +983,19 @@ int mlx5_eq_table_create(struct mlx5_core_dev *dev)
|
||||
int num_eqs = MLX5_CAP_GEN(dev, max_num_eqs) ?
|
||||
MLX5_CAP_GEN(dev, max_num_eqs) :
|
||||
1 << MLX5_CAP_GEN(dev, log_max_eq);
|
||||
int max_eqs_sf;
|
||||
int err;
|
||||
|
||||
eq_table->num_comp_eqs =
|
||||
min_t(int,
|
||||
mlx5_irq_get_num_comp(eq_table->irq_table),
|
||||
mlx5_irq_table_get_num_comp(eq_table->irq_table),
|
||||
num_eqs - MLX5_MAX_ASYNC_EQS);
|
||||
if (mlx5_core_is_sf(dev)) {
|
||||
max_eqs_sf = min_t(int, MLX5_COMP_EQS_PER_SF,
|
||||
mlx5_irq_table_get_sfs_vec(eq_table->irq_table));
|
||||
eq_table->num_comp_eqs = min_t(int, eq_table->num_comp_eqs,
|
||||
max_eqs_sf);
|
||||
}
|
||||
|
||||
err = create_async_eqs(dev);
|
||||
if (err) {
|
||||
@ -961,6 +1003,18 @@ int mlx5_eq_table_create(struct mlx5_core_dev *dev)
|
||||
goto err_async_eqs;
|
||||
}
|
||||
|
||||
if (!mlx5_core_is_sf(dev)) {
|
||||
/* rmap is a mapping between irq number and queue number.
|
||||
* each irq can be assign only to a single rmap.
|
||||
* since SFs share IRQs, rmap mapping cannot function correctly
|
||||
* for irqs that are shared for different core/netdev RX rings.
|
||||
* Hence we don't allow netdev rmap for SFs
|
||||
*/
|
||||
err = set_rmap(dev);
|
||||
if (err)
|
||||
goto err_rmap;
|
||||
}
|
||||
|
||||
err = create_comp_eqs(dev);
|
||||
if (err) {
|
||||
mlx5_core_err(dev, "Failed to create completion EQs\n");
|
||||
@ -969,6 +1023,9 @@ int mlx5_eq_table_create(struct mlx5_core_dev *dev)
|
||||
|
||||
return 0;
|
||||
err_comp_eqs:
|
||||
if (!mlx5_core_is_sf(dev))
|
||||
clear_rmap(dev);
|
||||
err_rmap:
|
||||
destroy_async_eqs(dev);
|
||||
err_async_eqs:
|
||||
return err;
|
||||
@ -976,6 +1033,8 @@ err_async_eqs:
|
||||
|
||||
void mlx5_eq_table_destroy(struct mlx5_core_dev *dev)
|
||||
{
|
||||
if (!mlx5_core_is_sf(dev))
|
||||
clear_rmap(dev);
|
||||
destroy_comp_eqs(dev);
|
||||
destroy_async_eqs(dev);
|
||||
}
|
||||
|
@ -93,6 +93,64 @@ int mlx5_cmd_destroy_vport_lag(struct mlx5_core_dev *dev)
|
||||
}
|
||||
EXPORT_SYMBOL(mlx5_cmd_destroy_vport_lag);
|
||||
|
||||
static int mlx5_lag_netdev_event(struct notifier_block *this,
|
||||
unsigned long event, void *ptr);
|
||||
static void mlx5_do_bond_work(struct work_struct *work);
|
||||
|
||||
static void mlx5_ldev_free(struct kref *ref)
|
||||
{
|
||||
struct mlx5_lag *ldev = container_of(ref, struct mlx5_lag, ref);
|
||||
|
||||
if (ldev->nb.notifier_call)
|
||||
unregister_netdevice_notifier_net(&init_net, &ldev->nb);
|
||||
mlx5_lag_mp_cleanup(ldev);
|
||||
cancel_delayed_work_sync(&ldev->bond_work);
|
||||
destroy_workqueue(ldev->wq);
|
||||
kfree(ldev);
|
||||
}
|
||||
|
||||
static void mlx5_ldev_put(struct mlx5_lag *ldev)
|
||||
{
|
||||
kref_put(&ldev->ref, mlx5_ldev_free);
|
||||
}
|
||||
|
||||
static void mlx5_ldev_get(struct mlx5_lag *ldev)
|
||||
{
|
||||
kref_get(&ldev->ref);
|
||||
}
|
||||
|
||||
static struct mlx5_lag *mlx5_lag_dev_alloc(struct mlx5_core_dev *dev)
|
||||
{
|
||||
struct mlx5_lag *ldev;
|
||||
int err;
|
||||
|
||||
ldev = kzalloc(sizeof(*ldev), GFP_KERNEL);
|
||||
if (!ldev)
|
||||
return NULL;
|
||||
|
||||
ldev->wq = create_singlethread_workqueue("mlx5_lag");
|
||||
if (!ldev->wq) {
|
||||
kfree(ldev);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
kref_init(&ldev->ref);
|
||||
INIT_DELAYED_WORK(&ldev->bond_work, mlx5_do_bond_work);
|
||||
|
||||
ldev->nb.notifier_call = mlx5_lag_netdev_event;
|
||||
if (register_netdevice_notifier_net(&init_net, &ldev->nb)) {
|
||||
ldev->nb.notifier_call = NULL;
|
||||
mlx5_core_err(dev, "Failed to register LAG netdev notifier\n");
|
||||
}
|
||||
|
||||
err = mlx5_lag_mp_init(ldev);
|
||||
if (err)
|
||||
mlx5_core_err(dev, "Failed to init multipath lag err=%d\n",
|
||||
err);
|
||||
|
||||
return ldev;
|
||||
}
|
||||
|
||||
int mlx5_lag_dev_get_netdev_idx(struct mlx5_lag *ldev,
|
||||
struct net_device *ndev)
|
||||
{
|
||||
@ -258,6 +316,10 @@ static void mlx5_lag_add_devices(struct mlx5_lag *ldev)
|
||||
if (!ldev->pf[i].dev)
|
||||
continue;
|
||||
|
||||
if (ldev->pf[i].dev->priv.flags &
|
||||
MLX5_PRIV_FLAGS_DISABLE_ALL_ADEV)
|
||||
continue;
|
||||
|
||||
ldev->pf[i].dev->priv.flags &= ~MLX5_PRIV_FLAGS_DISABLE_IB_ADEV;
|
||||
mlx5_rescan_drivers_locked(ldev->pf[i].dev);
|
||||
}
|
||||
@ -276,6 +338,31 @@ static void mlx5_lag_remove_devices(struct mlx5_lag *ldev)
|
||||
}
|
||||
}
|
||||
|
||||
static void mlx5_disable_lag(struct mlx5_lag *ldev)
|
||||
{
|
||||
struct mlx5_core_dev *dev0 = ldev->pf[MLX5_LAG_P1].dev;
|
||||
struct mlx5_core_dev *dev1 = ldev->pf[MLX5_LAG_P2].dev;
|
||||
bool roce_lag;
|
||||
int err;
|
||||
|
||||
roce_lag = __mlx5_lag_is_roce(ldev);
|
||||
|
||||
if (roce_lag) {
|
||||
if (!(dev0->priv.flags & MLX5_PRIV_FLAGS_DISABLE_ALL_ADEV)) {
|
||||
dev0->priv.flags |= MLX5_PRIV_FLAGS_DISABLE_IB_ADEV;
|
||||
mlx5_rescan_drivers_locked(dev0);
|
||||
}
|
||||
mlx5_nic_vport_disable_roce(dev1);
|
||||
}
|
||||
|
||||
err = mlx5_deactivate_lag(ldev);
|
||||
if (err)
|
||||
return;
|
||||
|
||||
if (roce_lag)
|
||||
mlx5_lag_add_devices(ldev);
|
||||
}
|
||||
|
||||
static void mlx5_do_bond(struct mlx5_lag *ldev)
|
||||
{
|
||||
struct mlx5_core_dev *dev0 = ldev->pf[MLX5_LAG_P1].dev;
|
||||
@ -322,20 +409,7 @@ static void mlx5_do_bond(struct mlx5_lag *ldev)
|
||||
} else if (do_bond && __mlx5_lag_is_active(ldev)) {
|
||||
mlx5_modify_lag(ldev, &tracker);
|
||||
} else if (!do_bond && __mlx5_lag_is_active(ldev)) {
|
||||
roce_lag = __mlx5_lag_is_roce(ldev);
|
||||
|
||||
if (roce_lag) {
|
||||
dev0->priv.flags |= MLX5_PRIV_FLAGS_DISABLE_IB_ADEV;
|
||||
mlx5_rescan_drivers_locked(dev0);
|
||||
mlx5_nic_vport_disable_roce(dev1);
|
||||
}
|
||||
|
||||
err = mlx5_deactivate_lag(ldev);
|
||||
if (err)
|
||||
return;
|
||||
|
||||
if (roce_lag)
|
||||
mlx5_lag_add_devices(ldev);
|
||||
mlx5_disable_lag(ldev);
|
||||
}
|
||||
}
|
||||
|
||||
@ -495,55 +569,52 @@ static int mlx5_lag_netdev_event(struct notifier_block *this,
|
||||
return NOTIFY_DONE;
|
||||
}
|
||||
|
||||
static struct mlx5_lag *mlx5_lag_dev_alloc(void)
|
||||
{
|
||||
struct mlx5_lag *ldev;
|
||||
|
||||
ldev = kzalloc(sizeof(*ldev), GFP_KERNEL);
|
||||
if (!ldev)
|
||||
return NULL;
|
||||
|
||||
ldev->wq = create_singlethread_workqueue("mlx5_lag");
|
||||
if (!ldev->wq) {
|
||||
kfree(ldev);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
INIT_DELAYED_WORK(&ldev->bond_work, mlx5_do_bond_work);
|
||||
|
||||
return ldev;
|
||||
}
|
||||
|
||||
static void mlx5_lag_dev_free(struct mlx5_lag *ldev)
|
||||
{
|
||||
destroy_workqueue(ldev->wq);
|
||||
kfree(ldev);
|
||||
}
|
||||
|
||||
static int mlx5_lag_dev_add_pf(struct mlx5_lag *ldev,
|
||||
struct mlx5_core_dev *dev,
|
||||
struct net_device *netdev)
|
||||
static void mlx5_ldev_add_netdev(struct mlx5_lag *ldev,
|
||||
struct mlx5_core_dev *dev,
|
||||
struct net_device *netdev)
|
||||
{
|
||||
unsigned int fn = PCI_FUNC(dev->pdev->devfn);
|
||||
|
||||
if (fn >= MLX5_MAX_PORTS)
|
||||
return -EPERM;
|
||||
return;
|
||||
|
||||
spin_lock(&lag_lock);
|
||||
ldev->pf[fn].dev = dev;
|
||||
ldev->pf[fn].netdev = netdev;
|
||||
ldev->tracker.netdev_state[fn].link_up = 0;
|
||||
ldev->tracker.netdev_state[fn].tx_enabled = 0;
|
||||
|
||||
dev->priv.lag = ldev;
|
||||
|
||||
spin_unlock(&lag_lock);
|
||||
|
||||
return fn;
|
||||
}
|
||||
|
||||
static void mlx5_lag_dev_remove_pf(struct mlx5_lag *ldev,
|
||||
struct mlx5_core_dev *dev)
|
||||
static void mlx5_ldev_remove_netdev(struct mlx5_lag *ldev,
|
||||
struct net_device *netdev)
|
||||
{
|
||||
int i;
|
||||
|
||||
spin_lock(&lag_lock);
|
||||
for (i = 0; i < MLX5_MAX_PORTS; i++) {
|
||||
if (ldev->pf[i].netdev == netdev) {
|
||||
ldev->pf[i].netdev = NULL;
|
||||
break;
|
||||
}
|
||||
}
|
||||
spin_unlock(&lag_lock);
|
||||
}
|
||||
|
||||
static void mlx5_ldev_add_mdev(struct mlx5_lag *ldev,
|
||||
struct mlx5_core_dev *dev)
|
||||
{
|
||||
unsigned int fn = PCI_FUNC(dev->pdev->devfn);
|
||||
|
||||
if (fn >= MLX5_MAX_PORTS)
|
||||
return;
|
||||
|
||||
ldev->pf[fn].dev = dev;
|
||||
dev->priv.lag = ldev;
|
||||
}
|
||||
|
||||
/* Must be called with intf_mutex held */
|
||||
static void mlx5_ldev_remove_mdev(struct mlx5_lag *ldev,
|
||||
struct mlx5_core_dev *dev)
|
||||
{
|
||||
int i;
|
||||
|
||||
@ -554,19 +625,15 @@ static void mlx5_lag_dev_remove_pf(struct mlx5_lag *ldev,
|
||||
if (i == MLX5_MAX_PORTS)
|
||||
return;
|
||||
|
||||
spin_lock(&lag_lock);
|
||||
memset(&ldev->pf[i], 0, sizeof(*ldev->pf));
|
||||
|
||||
ldev->pf[i].dev = NULL;
|
||||
dev->priv.lag = NULL;
|
||||
spin_unlock(&lag_lock);
|
||||
}
|
||||
|
||||
/* Must be called with intf_mutex held */
|
||||
void mlx5_lag_add(struct mlx5_core_dev *dev, struct net_device *netdev)
|
||||
static void __mlx5_lag_dev_add_mdev(struct mlx5_core_dev *dev)
|
||||
{
|
||||
struct mlx5_lag *ldev = NULL;
|
||||
struct mlx5_core_dev *tmp_dev;
|
||||
int i, err;
|
||||
|
||||
if (!MLX5_CAP_GEN(dev, vport_group_manager) ||
|
||||
!MLX5_CAP_GEN(dev, lag_master) ||
|
||||
@ -578,67 +645,77 @@ void mlx5_lag_add(struct mlx5_core_dev *dev, struct net_device *netdev)
|
||||
ldev = tmp_dev->priv.lag;
|
||||
|
||||
if (!ldev) {
|
||||
ldev = mlx5_lag_dev_alloc();
|
||||
ldev = mlx5_lag_dev_alloc(dev);
|
||||
if (!ldev) {
|
||||
mlx5_core_err(dev, "Failed to alloc lag dev\n");
|
||||
return;
|
||||
}
|
||||
} else {
|
||||
mlx5_ldev_get(ldev);
|
||||
}
|
||||
|
||||
if (mlx5_lag_dev_add_pf(ldev, dev, netdev) < 0)
|
||||
mlx5_ldev_add_mdev(ldev, dev);
|
||||
|
||||
return;
|
||||
}
|
||||
|
||||
void mlx5_lag_remove_mdev(struct mlx5_core_dev *dev)
|
||||
{
|
||||
struct mlx5_lag *ldev;
|
||||
|
||||
ldev = mlx5_lag_dev(dev);
|
||||
if (!ldev)
|
||||
return;
|
||||
|
||||
mlx5_dev_list_lock();
|
||||
mlx5_ldev_remove_mdev(ldev, dev);
|
||||
mlx5_dev_list_unlock();
|
||||
mlx5_ldev_put(ldev);
|
||||
}
|
||||
|
||||
void mlx5_lag_add_mdev(struct mlx5_core_dev *dev)
|
||||
{
|
||||
mlx5_dev_list_lock();
|
||||
__mlx5_lag_dev_add_mdev(dev);
|
||||
mlx5_dev_list_unlock();
|
||||
}
|
||||
|
||||
/* Must be called with intf_mutex held */
|
||||
void mlx5_lag_remove_netdev(struct mlx5_core_dev *dev,
|
||||
struct net_device *netdev)
|
||||
{
|
||||
struct mlx5_lag *ldev;
|
||||
|
||||
ldev = mlx5_lag_dev(dev);
|
||||
if (!ldev)
|
||||
return;
|
||||
|
||||
if (__mlx5_lag_is_active(ldev))
|
||||
mlx5_disable_lag(ldev);
|
||||
|
||||
mlx5_ldev_remove_netdev(ldev, netdev);
|
||||
ldev->flags &= ~MLX5_LAG_FLAG_READY;
|
||||
}
|
||||
|
||||
/* Must be called with intf_mutex held */
|
||||
void mlx5_lag_add_netdev(struct mlx5_core_dev *dev,
|
||||
struct net_device *netdev)
|
||||
{
|
||||
struct mlx5_lag *ldev;
|
||||
int i;
|
||||
|
||||
ldev = mlx5_lag_dev(dev);
|
||||
if (!ldev)
|
||||
return;
|
||||
|
||||
mlx5_ldev_add_netdev(ldev, dev, netdev);
|
||||
|
||||
for (i = 0; i < MLX5_MAX_PORTS; i++)
|
||||
if (!ldev->pf[i].dev)
|
||||
break;
|
||||
|
||||
if (i >= MLX5_MAX_PORTS)
|
||||
ldev->flags |= MLX5_LAG_FLAG_READY;
|
||||
|
||||
if (!ldev->nb.notifier_call) {
|
||||
ldev->nb.notifier_call = mlx5_lag_netdev_event;
|
||||
if (register_netdevice_notifier_net(&init_net, &ldev->nb)) {
|
||||
ldev->nb.notifier_call = NULL;
|
||||
mlx5_core_err(dev, "Failed to register LAG netdev notifier\n");
|
||||
}
|
||||
}
|
||||
|
||||
err = mlx5_lag_mp_init(ldev);
|
||||
if (err)
|
||||
mlx5_core_err(dev, "Failed to init multipath lag err=%d\n",
|
||||
err);
|
||||
}
|
||||
|
||||
/* Must be called with intf_mutex held */
|
||||
void mlx5_lag_remove(struct mlx5_core_dev *dev)
|
||||
{
|
||||
struct mlx5_lag *ldev;
|
||||
int i;
|
||||
|
||||
ldev = mlx5_lag_dev_get(dev);
|
||||
if (!ldev)
|
||||
return;
|
||||
|
||||
if (__mlx5_lag_is_active(ldev))
|
||||
mlx5_deactivate_lag(ldev);
|
||||
|
||||
mlx5_lag_dev_remove_pf(ldev, dev);
|
||||
|
||||
ldev->flags &= ~MLX5_LAG_FLAG_READY;
|
||||
|
||||
for (i = 0; i < MLX5_MAX_PORTS; i++)
|
||||
if (ldev->pf[i].dev)
|
||||
break;
|
||||
|
||||
if (i == MLX5_MAX_PORTS) {
|
||||
if (ldev->nb.notifier_call) {
|
||||
unregister_netdevice_notifier_net(&init_net, &ldev->nb);
|
||||
ldev->nb.notifier_call = NULL;
|
||||
}
|
||||
mlx5_lag_mp_cleanup(ldev);
|
||||
cancel_delayed_work_sync(&ldev->bond_work);
|
||||
mlx5_lag_dev_free(ldev);
|
||||
}
|
||||
}
|
||||
|
||||
bool mlx5_lag_is_roce(struct mlx5_core_dev *dev)
|
||||
@ -647,7 +724,7 @@ bool mlx5_lag_is_roce(struct mlx5_core_dev *dev)
|
||||
bool res;
|
||||
|
||||
spin_lock(&lag_lock);
|
||||
ldev = mlx5_lag_dev_get(dev);
|
||||
ldev = mlx5_lag_dev(dev);
|
||||
res = ldev && __mlx5_lag_is_roce(ldev);
|
||||
spin_unlock(&lag_lock);
|
||||
|
||||
@ -661,7 +738,7 @@ bool mlx5_lag_is_active(struct mlx5_core_dev *dev)
|
||||
bool res;
|
||||
|
||||
spin_lock(&lag_lock);
|
||||
ldev = mlx5_lag_dev_get(dev);
|
||||
ldev = mlx5_lag_dev(dev);
|
||||
res = ldev && __mlx5_lag_is_active(ldev);
|
||||
spin_unlock(&lag_lock);
|
||||
|
||||
@ -675,7 +752,7 @@ bool mlx5_lag_is_sriov(struct mlx5_core_dev *dev)
|
||||
bool res;
|
||||
|
||||
spin_lock(&lag_lock);
|
||||
ldev = mlx5_lag_dev_get(dev);
|
||||
ldev = mlx5_lag_dev(dev);
|
||||
res = ldev && __mlx5_lag_is_sriov(ldev);
|
||||
spin_unlock(&lag_lock);
|
||||
|
||||
@ -688,7 +765,7 @@ void mlx5_lag_update(struct mlx5_core_dev *dev)
|
||||
struct mlx5_lag *ldev;
|
||||
|
||||
mlx5_dev_list_lock();
|
||||
ldev = mlx5_lag_dev_get(dev);
|
||||
ldev = mlx5_lag_dev(dev);
|
||||
if (!ldev)
|
||||
goto unlock;
|
||||
|
||||
@ -704,7 +781,7 @@ struct net_device *mlx5_lag_get_roce_netdev(struct mlx5_core_dev *dev)
|
||||
struct mlx5_lag *ldev;
|
||||
|
||||
spin_lock(&lag_lock);
|
||||
ldev = mlx5_lag_dev_get(dev);
|
||||
ldev = mlx5_lag_dev(dev);
|
||||
|
||||
if (!(ldev && __mlx5_lag_is_roce(ldev)))
|
||||
goto unlock;
|
||||
@ -733,7 +810,7 @@ u8 mlx5_lag_get_slave_port(struct mlx5_core_dev *dev,
|
||||
u8 port = 0;
|
||||
|
||||
spin_lock(&lag_lock);
|
||||
ldev = mlx5_lag_dev_get(dev);
|
||||
ldev = mlx5_lag_dev(dev);
|
||||
if (!(ldev && __mlx5_lag_is_roce(ldev)))
|
||||
goto unlock;
|
||||
|
||||
@ -769,7 +846,7 @@ int mlx5_lag_query_cong_counters(struct mlx5_core_dev *dev,
|
||||
memset(values, 0, sizeof(*values) * num_counters);
|
||||
|
||||
spin_lock(&lag_lock);
|
||||
ldev = mlx5_lag_dev_get(dev);
|
||||
ldev = mlx5_lag_dev(dev);
|
||||
if (ldev && __mlx5_lag_is_active(ldev)) {
|
||||
num_ports = MLX5_MAX_PORTS;
|
||||
mdev[MLX5_LAG_P1] = ldev->pf[MLX5_LAG_P1].dev;
|
||||
|
@ -40,6 +40,7 @@ struct lag_tracker {
|
||||
struct mlx5_lag {
|
||||
u8 flags;
|
||||
u8 v2p_map[MLX5_MAX_PORTS];
|
||||
struct kref ref;
|
||||
struct lag_func pf[MLX5_MAX_PORTS];
|
||||
struct lag_tracker tracker;
|
||||
struct workqueue_struct *wq;
|
||||
@ -49,7 +50,7 @@ struct mlx5_lag {
|
||||
};
|
||||
|
||||
static inline struct mlx5_lag *
|
||||
mlx5_lag_dev_get(struct mlx5_core_dev *dev)
|
||||
mlx5_lag_dev(struct mlx5_core_dev *dev)
|
||||
{
|
||||
return dev->priv.lag;
|
||||
}
|
||||
|
@ -28,7 +28,7 @@ bool mlx5_lag_is_multipath(struct mlx5_core_dev *dev)
|
||||
struct mlx5_lag *ldev;
|
||||
bool res;
|
||||
|
||||
ldev = mlx5_lag_dev_get(dev);
|
||||
ldev = mlx5_lag_dev(dev);
|
||||
res = ldev && __mlx5_lag_is_multipath(ldev);
|
||||
|
||||
return res;
|
||||
|
@ -1,5 +1,5 @@
|
||||
/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
|
||||
/* Copyright (c) 2018 Mellanox Technologies */
|
||||
/* Copyright (c) 2018-2021, Mellanox Technologies inc. All rights reserved. */
|
||||
|
||||
#ifndef __LIB_MLX5_EQ_H__
|
||||
#define __LIB_MLX5_EQ_H__
|
||||
@ -32,6 +32,7 @@ struct mlx5_eq {
|
||||
unsigned int irqn;
|
||||
u8 eqn;
|
||||
struct mlx5_rsc_debug *dbg;
|
||||
struct mlx5_irq *irq;
|
||||
};
|
||||
|
||||
struct mlx5_eq_async {
|
||||
|
drivers/net/ethernet/mellanox/mlx5/core/lib/sf.h (new file, 45 lines)
@ -0,0 +1,45 @@
|
||||
/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
|
||||
/* Copyright (c) 2021 Mellanox Technologies Ltd */
|
||||
|
||||
#ifndef __LIB_MLX5_SF_H__
|
||||
#define __LIB_MLX5_SF_H__
|
||||
|
||||
#include <linux/mlx5/driver.h>
|
||||
|
||||
static inline u16 mlx5_sf_start_function_id(const struct mlx5_core_dev *dev)
|
||||
{
|
||||
return MLX5_CAP_GEN(dev, sf_base_id);
|
||||
}
|
||||
|
||||
#ifdef CONFIG_MLX5_SF
|
||||
|
||||
static inline bool mlx5_sf_supported(const struct mlx5_core_dev *dev)
|
||||
{
|
||||
return MLX5_CAP_GEN(dev, sf);
|
||||
}
|
||||
|
||||
static inline u16 mlx5_sf_max_functions(const struct mlx5_core_dev *dev)
|
||||
{
|
||||
if (!mlx5_sf_supported(dev))
|
||||
return 0;
|
||||
if (MLX5_CAP_GEN(dev, max_num_sf))
|
||||
return MLX5_CAP_GEN(dev, max_num_sf);
|
||||
else
|
||||
return 1 << MLX5_CAP_GEN(dev, log_max_sf);
|
||||
}
|
||||
|
||||
#else
|
||||
|
||||
static inline bool mlx5_sf_supported(const struct mlx5_core_dev *dev)
|
||||
{
|
||||
return false;
|
||||
}
|
||||
|
||||
static inline u16 mlx5_sf_max_functions(const struct mlx5_core_dev *dev)
|
||||
{
|
||||
return 0;
|
||||
}
|
||||
|
||||
#endif
|
||||
|
||||
#endif
|
@ -76,6 +76,7 @@
|
||||
#include "sf/vhca_event.h"
|
||||
#include "sf/dev/dev.h"
|
||||
#include "sf/sf.h"
|
||||
#include "mlx5_irq.h"
|
||||
|
||||
MODULE_AUTHOR("Eli Cohen <eli@mellanox.com>");
|
||||
MODULE_DESCRIPTION("Mellanox 5th generation network adapters (ConnectX series) core driver");
|
||||
@ -1185,6 +1186,7 @@ static int mlx5_load(struct mlx5_core_dev *dev)
|
||||
}
|
||||
|
||||
mlx5_sf_dev_table_create(dev);
|
||||
mlx5_lag_add_mdev(dev);
|
||||
|
||||
return 0;
|
||||
|
||||
@ -1219,6 +1221,7 @@ err_irq_table:
|
||||
|
||||
static void mlx5_unload(struct mlx5_core_dev *dev)
|
||||
{
|
||||
mlx5_lag_remove_mdev(dev);
|
||||
mlx5_sf_dev_table_destroy(dev);
|
||||
mlx5_sriov_detach(dev);
|
||||
mlx5_ec_cleanup(dev);
|
||||
|
@ -164,27 +164,10 @@ int mlx5_query_mcam_reg(struct mlx5_core_dev *dev, u32 *mcap, u8 feature_group,
|
||||
int mlx5_query_qcam_reg(struct mlx5_core_dev *mdev, u32 *qcam,
|
||||
u8 feature_group, u8 access_reg_group);
|
||||
|
||||
void mlx5_lag_add(struct mlx5_core_dev *dev, struct net_device *netdev);
|
||||
void mlx5_lag_remove(struct mlx5_core_dev *dev);
|
||||
|
||||
int mlx5_irq_table_init(struct mlx5_core_dev *dev);
|
||||
void mlx5_irq_table_cleanup(struct mlx5_core_dev *dev);
|
||||
int mlx5_irq_table_create(struct mlx5_core_dev *dev);
|
||||
void mlx5_irq_table_destroy(struct mlx5_core_dev *dev);
|
||||
int mlx5_irq_attach_nb(struct mlx5_irq_table *irq_table, int vecidx,
|
||||
struct notifier_block *nb);
|
||||
int mlx5_irq_detach_nb(struct mlx5_irq_table *irq_table, int vecidx,
|
||||
struct notifier_block *nb);
|
||||
|
||||
int mlx5_set_msix_vec_count(struct mlx5_core_dev *dev, int devfn,
|
||||
int msix_vec_count);
|
||||
int mlx5_get_default_msix_vec_count(struct mlx5_core_dev *dev, int num_vfs);
|
||||
|
||||
struct cpumask *
|
||||
mlx5_irq_get_affinity_mask(struct mlx5_irq_table *irq_table, int vecidx);
|
||||
struct cpu_rmap *mlx5_irq_get_rmap(struct mlx5_irq_table *table);
|
||||
int mlx5_irq_get_num_comp(struct mlx5_irq_table *table);
|
||||
struct mlx5_irq_table *mlx5_irq_table_get(struct mlx5_core_dev *dev);
|
||||
void mlx5_lag_add_netdev(struct mlx5_core_dev *dev, struct net_device *netdev);
|
||||
void mlx5_lag_remove_netdev(struct mlx5_core_dev *dev, struct net_device *netdev);
|
||||
void mlx5_lag_add_mdev(struct mlx5_core_dev *dev);
|
||||
void mlx5_lag_remove_mdev(struct mlx5_core_dev *dev);
|
||||
|
||||
int mlx5_events_init(struct mlx5_core_dev *dev);
|
||||
void mlx5_events_cleanup(struct mlx5_core_dev *dev);
|
||||
|
drivers/net/ethernet/mellanox/mlx5/core/mlx5_irq.h (new file, 35 lines)
@ -0,0 +1,35 @@
|
||||
/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
|
||||
/* Copyright (c) 2021 Mellanox Technologies. */
|
||||
|
||||
#ifndef __MLX5_IRQ_H__
|
||||
#define __MLX5_IRQ_H__
|
||||
|
||||
#include <linux/mlx5/driver.h>
|
||||
|
||||
#define MLX5_COMP_EQS_PER_SF 8
|
||||
|
||||
#define MLX5_IRQ_EQ_CTRL (0)
|
||||
|
||||
struct mlx5_irq;
|
||||
|
||||
int mlx5_irq_table_init(struct mlx5_core_dev *dev);
|
||||
void mlx5_irq_table_cleanup(struct mlx5_core_dev *dev);
|
||||
int mlx5_irq_table_create(struct mlx5_core_dev *dev);
|
||||
void mlx5_irq_table_destroy(struct mlx5_core_dev *dev);
|
||||
int mlx5_irq_table_get_num_comp(struct mlx5_irq_table *table);
|
||||
int mlx5_irq_table_get_sfs_vec(struct mlx5_irq_table *table);
|
||||
struct mlx5_irq_table *mlx5_irq_table_get(struct mlx5_core_dev *dev);
|
||||
|
||||
int mlx5_set_msix_vec_count(struct mlx5_core_dev *dev, int devfn,
|
||||
int msix_vec_count);
|
||||
int mlx5_get_default_msix_vec_count(struct mlx5_core_dev *dev, int num_vfs);
|
||||
|
||||
struct mlx5_irq *mlx5_irq_request(struct mlx5_core_dev *dev, u16 vecidx,
|
||||
struct cpumask *affinity);
|
||||
void mlx5_irq_release(struct mlx5_irq *irq);
|
||||
int mlx5_irq_attach_nb(struct mlx5_irq *irq, struct notifier_block *nb);
|
||||
int mlx5_irq_detach_nb(struct mlx5_irq *irq, struct notifier_block *nb);
|
||||
struct cpumask *mlx5_irq_get_affinity_mask(struct mlx5_irq *irq);
|
||||
int mlx5_irq_get_index(struct mlx5_irq *irq);
|
||||
|
||||
#endif /* __MLX5_IRQ_H__ */
|
@ -6,61 +6,53 @@
|
||||
#include <linux/module.h>
|
||||
#include <linux/mlx5/driver.h>
|
||||
#include "mlx5_core.h"
|
||||
#include "mlx5_irq.h"
|
||||
#include "lib/sf.h"
|
||||
#ifdef CONFIG_RFS_ACCEL
|
||||
#include <linux/cpu_rmap.h>
|
||||
#endif
|
||||
|
||||
#define MLX5_MAX_IRQ_NAME (32)
|
||||
/* max irq_index is 255. three chars */
|
||||
#define MLX5_MAX_IRQ_IDX_CHARS (3)
|
||||
|
||||
#define MLX5_SFS_PER_CTRL_IRQ 64
|
||||
#define MLX5_IRQ_CTRL_SF_MAX 8
|
||||
/* min num of vectores for SFs to be enabled */
|
||||
#define MLX5_IRQ_VEC_COMP_BASE_SF 2
|
||||
|
||||
#define MLX5_EQ_SHARE_IRQ_MAX_COMP (8)
|
||||
#define MLX5_EQ_SHARE_IRQ_MAX_CTRL (UINT_MAX)
|
||||
#define MLX5_EQ_SHARE_IRQ_MIN_COMP (1)
|
||||
#define MLX5_EQ_SHARE_IRQ_MIN_CTRL (4)
|
||||
#define MLX5_EQ_REFS_PER_IRQ (2)
|
||||
|
||||
struct mlx5_irq {
|
||||
u32 index;
|
||||
struct atomic_notifier_head nh;
|
||||
cpumask_var_t mask;
|
||||
char name[MLX5_MAX_IRQ_NAME];
|
||||
struct kref kref;
|
||||
int irqn;
|
||||
struct mlx5_irq_pool *pool;
|
||||
};
|
||||
|
||||
struct mlx5_irq_pool {
|
||||
char name[MLX5_MAX_IRQ_NAME - MLX5_MAX_IRQ_IDX_CHARS];
|
||||
struct xa_limit xa_num_irqs;
|
||||
struct mutex lock; /* sync IRQs creations */
|
||||
struct xarray irqs;
|
||||
u32 max_threshold;
|
||||
u32 min_threshold;
|
||||
struct mlx5_core_dev *dev;
|
||||
};
|
||||
|
||||
struct mlx5_irq_table {
|
||||
struct mlx5_irq *irq;
|
||||
int nvec;
|
||||
#ifdef CONFIG_RFS_ACCEL
|
||||
struct cpu_rmap *rmap;
|
||||
#endif
|
||||
struct mlx5_irq_pool *pf_pool;
|
||||
struct mlx5_irq_pool *sf_ctrl_pool;
|
||||
struct mlx5_irq_pool *sf_comp_pool;
|
||||
};
|
||||
|
||||
int mlx5_irq_table_init(struct mlx5_core_dev *dev)
|
||||
{
|
||||
struct mlx5_irq_table *irq_table;
|
||||
|
||||
if (mlx5_core_is_sf(dev))
|
||||
return 0;
|
||||
|
||||
irq_table = kvzalloc(sizeof(*irq_table), GFP_KERNEL);
|
||||
if (!irq_table)
|
||||
return -ENOMEM;
|
||||
|
||||
dev->priv.irq_table = irq_table;
|
||||
return 0;
|
||||
}
|
||||
|
||||
void mlx5_irq_table_cleanup(struct mlx5_core_dev *dev)
|
||||
{
|
||||
if (mlx5_core_is_sf(dev))
|
||||
return;
|
||||
|
||||
kvfree(dev->priv.irq_table);
|
||||
}
|
||||
|
||||
int mlx5_irq_get_num_comp(struct mlx5_irq_table *table)
|
||||
{
|
||||
return table->nvec - MLX5_IRQ_VEC_COMP_BASE;
|
||||
}
|
||||
|
||||
static struct mlx5_irq *mlx5_irq_get(struct mlx5_core_dev *dev, int vecidx)
|
||||
{
|
||||
struct mlx5_irq_table *irq_table = dev->priv.irq_table;
|
||||
|
||||
return &irq_table->irq[vecidx];
|
||||
}
|
||||
|
||||
/**
|
||||
* mlx5_get_default_msix_vec_count - Get the default number of MSI-X vectors
|
||||
* to be ssigned to each VF.
|
||||
@ -146,34 +138,46 @@ out:
|
||||
return ret;
|
||||
}
|
||||
|
||||
int mlx5_irq_attach_nb(struct mlx5_irq_table *irq_table, int vecidx,
|
||||
struct notifier_block *nb)
|
||||
static void irq_release(struct kref *kref)
|
||||
{
|
||||
struct mlx5_irq *irq;
|
||||
struct mlx5_irq *irq = container_of(kref, struct mlx5_irq, kref);
|
||||
struct mlx5_irq_pool *pool = irq->pool;
|
||||
|
||||
irq = &irq_table->irq[vecidx];
|
||||
return atomic_notifier_chain_register(&irq->nh, nb);
|
||||
xa_erase(&pool->irqs, irq->index);
|
||||
/* free_irq requires that affinity and rmap will be cleared
|
||||
* before calling it. This is why there is asymmetry with set_rmap
|
||||
* which should be called after alloc_irq but before request_irq.
|
||||
*/
|
||||
irq_set_affinity_hint(irq->irqn, NULL);
|
||||
free_cpumask_var(irq->mask);
|
||||
free_irq(irq->irqn, &irq->nh);
|
||||
kfree(irq);
|
||||
}
|
||||
|
||||
int mlx5_irq_detach_nb(struct mlx5_irq_table *irq_table, int vecidx,
|
||||
struct notifier_block *nb)
|
||||
static void irq_put(struct mlx5_irq *irq)
|
||||
{
|
||||
struct mlx5_irq *irq;
|
||||
struct mlx5_irq_pool *pool = irq->pool;
|
||||
|
||||
irq = &irq_table->irq[vecidx];
|
||||
return atomic_notifier_chain_unregister(&irq->nh, nb);
|
||||
mutex_lock(&pool->lock);
|
||||
kref_put(&irq->kref, irq_release);
|
||||
mutex_unlock(&pool->lock);
|
||||
}
|
||||
|
||||
static irqreturn_t mlx5_irq_int_handler(int irq, void *nh)
|
||||
static irqreturn_t irq_int_handler(int irq, void *nh)
|
||||
{
|
||||
atomic_notifier_call_chain(nh, 0, NULL);
|
||||
return IRQ_HANDLED;
|
||||
}
|
||||
|
||||
static void irq_sf_set_name(struct mlx5_irq_pool *pool, char *name, int vecidx)
|
||||
{
|
||||
snprintf(name, MLX5_MAX_IRQ_NAME, "%s%d", pool->name, vecidx);
|
||||
}
|
||||
|
||||
static void irq_set_name(char *name, int vecidx)
|
||||
{
|
||||
if (vecidx == 0) {
|
||||
snprintf(name, MLX5_MAX_IRQ_NAME, "mlx5_async");
|
||||
snprintf(name, MLX5_MAX_IRQ_NAME, "mlx5_async%d", vecidx);
|
||||
return;
|
||||
}
|
||||
|
||||
@ -181,251 +185,431 @@ static void irq_set_name(char *name, int vecidx)
|
||||
vecidx - MLX5_IRQ_VEC_COMP_BASE);
|
||||
}
|
||||
|
||||
static int request_irqs(struct mlx5_core_dev *dev, int nvec)
|
||||
static struct mlx5_irq *irq_request(struct mlx5_irq_pool *pool, int i)
|
||||
{
|
||||
struct mlx5_core_dev *dev = pool->dev;
|
||||
char name[MLX5_MAX_IRQ_NAME];
|
||||
struct mlx5_irq *irq;
|
||||
int err;
|
||||
int i;
|
||||
|
||||
for (i = 0; i < nvec; i++) {
|
||||
struct mlx5_irq *irq = mlx5_irq_get(dev, i);
|
||||
int irqn = pci_irq_vector(dev->pdev, i);
|
||||
|
||||
irq = kzalloc(sizeof(*irq), GFP_KERNEL);
|
||||
if (!irq)
|
||||
return ERR_PTR(-ENOMEM);
|
||||
irq->irqn = pci_irq_vector(dev->pdev, i);
|
||||
if (!pool->name[0])
|
||||
irq_set_name(name, i);
|
||||
ATOMIC_INIT_NOTIFIER_HEAD(&irq->nh);
|
||||
snprintf(irq->name, MLX5_MAX_IRQ_NAME,
|
||||
"%s@pci:%s", name, pci_name(dev->pdev));
|
||||
err = request_irq(irqn, mlx5_irq_int_handler, 0, irq->name,
|
||||
&irq->nh);
|
||||
if (err) {
|
||||
mlx5_core_err(dev, "Failed to request irq\n");
|
||||
goto err_request_irq;
|
||||
}
|
||||
else
|
||||
irq_sf_set_name(pool, name, i);
|
||||
ATOMIC_INIT_NOTIFIER_HEAD(&irq->nh);
|
||||
snprintf(irq->name, MLX5_MAX_IRQ_NAME,
|
||||
"%s@pci:%s", name, pci_name(dev->pdev));
|
||||
err = request_irq(irq->irqn, irq_int_handler, 0, irq->name,
|
||||
&irq->nh);
|
||||
if (err) {
|
||||
mlx5_core_err(dev, "Failed to request irq. err = %d\n", err);
|
||||
goto err_req_irq;
|
||||
}
|
||||
return 0;
|
||||
|
||||
err_request_irq:
|
||||
while (i--) {
|
||||
struct mlx5_irq *irq = mlx5_irq_get(dev, i);
|
||||
int irqn = pci_irq_vector(dev->pdev, i);
|
||||
|
||||
free_irq(irqn, &irq->nh);
|
||||
}
|
||||
return err;
|
||||
}
|
||||
|
||||
static void irq_clear_rmap(struct mlx5_core_dev *dev)
|
||||
{
|
||||
#ifdef CONFIG_RFS_ACCEL
|
||||
struct mlx5_irq_table *irq_table = dev->priv.irq_table;
|
||||
|
||||
free_irq_cpu_rmap(irq_table->rmap);
|
||||
#endif
|
||||
}
|
||||
|
||||
static int irq_set_rmap(struct mlx5_core_dev *mdev)
|
||||
{
|
||||
int err = 0;
|
||||
#ifdef CONFIG_RFS_ACCEL
|
||||
struct mlx5_irq_table *irq_table = mdev->priv.irq_table;
|
||||
int num_affinity_vec;
|
||||
int vecidx;
|
||||
|
||||
num_affinity_vec = mlx5_irq_get_num_comp(irq_table);
|
||||
irq_table->rmap = alloc_irq_cpu_rmap(num_affinity_vec);
|
||||
if (!irq_table->rmap) {
|
||||
err = -ENOMEM;
|
||||
mlx5_core_err(mdev, "Failed to allocate cpu_rmap. err %d", err);
|
||||
goto err_out;
|
||||
}
|
||||
|
||||
vecidx = MLX5_IRQ_VEC_COMP_BASE;
|
||||
for (; vecidx < irq_table->nvec; vecidx++) {
|
||||
err = irq_cpu_rmap_add(irq_table->rmap,
|
||||
pci_irq_vector(mdev->pdev, vecidx));
|
||||
if (err) {
|
||||
mlx5_core_err(mdev, "irq_cpu_rmap_add failed. err %d",
|
||||
err);
|
||||
goto err_irq_cpu_rmap_add;
|
||||
}
|
||||
}
|
||||
return 0;
|
||||
|
||||
err_irq_cpu_rmap_add:
|
||||
irq_clear_rmap(mdev);
|
||||
err_out:
|
||||
#endif
|
||||
return err;
|
||||
}
|
||||
|
||||
/* Completion IRQ vectors */
|
||||
|
||||
static int set_comp_irq_affinity_hint(struct mlx5_core_dev *mdev, int i)
|
||||
{
|
||||
int vecidx = MLX5_IRQ_VEC_COMP_BASE + i;
|
||||
struct mlx5_irq *irq;
|
||||
int irqn;
|
||||
|
||||
irq = mlx5_irq_get(mdev, vecidx);
|
||||
irqn = pci_irq_vector(mdev->pdev, vecidx);
|
||||
if (!zalloc_cpumask_var(&irq->mask, GFP_KERNEL)) {
|
||||
mlx5_core_warn(mdev, "zalloc_cpumask_var failed");
|
||||
return -ENOMEM;
|
||||
mlx5_core_warn(dev, "zalloc_cpumask_var failed\n");
|
||||
err = -ENOMEM;
|
||||
goto err_cpumask;
|
||||
}
|
||||
|
||||
cpumask_set_cpu(cpumask_local_spread(i, mdev->priv.numa_node),
|
||||
irq->mask);
|
||||
if (IS_ENABLED(CONFIG_SMP) &&
|
||||
irq_set_affinity_hint(irqn, irq->mask))
|
||||
mlx5_core_warn(mdev, "irq_set_affinity_hint failed, irq 0x%.4x",
|
||||
irqn);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void clear_comp_irq_affinity_hint(struct mlx5_core_dev *mdev, int i)
|
||||
{
|
||||
int vecidx = MLX5_IRQ_VEC_COMP_BASE + i;
|
||||
struct mlx5_irq *irq;
|
||||
int irqn;
|
||||
|
||||
irq = mlx5_irq_get(mdev, vecidx);
|
||||
irqn = pci_irq_vector(mdev->pdev, vecidx);
|
||||
irq_set_affinity_hint(irqn, NULL);
|
||||
kref_init(&irq->kref);
|
||||
irq->index = i;
|
||||
err = xa_err(xa_store(&pool->irqs, irq->index, irq, GFP_KERNEL));
|
||||
if (err) {
|
||||
mlx5_core_err(dev, "Failed to alloc xa entry for irq(%u). err = %d\n",
|
||||
irq->index, err);
|
||||
goto err_xa;
|
||||
}
|
||||
irq->pool = pool;
|
||||
return irq;
|
||||
err_xa:
|
||||
free_cpumask_var(irq->mask);
|
||||
err_cpumask:
|
||||
free_irq(irq->irqn, &irq->nh);
|
||||
err_req_irq:
|
||||
kfree(irq);
|
||||
return ERR_PTR(err);
|
||||
}
|
||||
|
||||
static int set_comp_irq_affinity_hints(struct mlx5_core_dev *mdev)
|
||||
int mlx5_irq_attach_nb(struct mlx5_irq *irq, struct notifier_block *nb)
|
||||
{
|
||||
int nvec = mlx5_irq_get_num_comp(mdev->priv.irq_table);
|
||||
int err;
|
||||
int i;
|
||||
|
||||
for (i = 0; i < nvec; i++) {
|
||||
err = set_comp_irq_affinity_hint(mdev, i);
|
||||
if (err)
|
||||
goto err_out;
|
||||
}
|
||||
|
||||
return 0;
|
||||
|
||||
err_out:
|
||||
for (i--; i >= 0; i--)
|
||||
clear_comp_irq_affinity_hint(mdev, i);
|
||||
|
||||
err = kref_get_unless_zero(&irq->kref);
|
||||
if (WARN_ON_ONCE(!err))
|
||||
/* Something very bad happens here, we are enabling EQ
|
||||
* on non-existing IRQ.
|
||||
*/
|
||||
return -ENOENT;
|
||||
err = atomic_notifier_chain_register(&irq->nh, nb);
|
||||
if (err)
|
||||
irq_put(irq);
|
||||
return err;
|
||||
}
|
||||
|
||||
static void clear_comp_irqs_affinity_hints(struct mlx5_core_dev *mdev)
|
||||
int mlx5_irq_detach_nb(struct mlx5_irq *irq, struct notifier_block *nb)
|
||||
{
|
||||
int nvec = mlx5_irq_get_num_comp(mdev->priv.irq_table);
|
||||
int i;
|
||||
|
||||
for (i = 0; i < nvec; i++)
|
||||
clear_comp_irq_affinity_hint(mdev, i);
|
||||
irq_put(irq);
|
||||
return atomic_notifier_chain_unregister(&irq->nh, nb);
|
||||
}
|
||||
|
||||
struct cpumask *
|
||||
mlx5_irq_get_affinity_mask(struct mlx5_irq_table *irq_table, int vecidx)
|
||||
struct cpumask *mlx5_irq_get_affinity_mask(struct mlx5_irq *irq)
|
||||
{
|
||||
return irq_table->irq[vecidx].mask;
|
||||
return irq->mask;
|
||||
}
|
||||
|
||||
#ifdef CONFIG_RFS_ACCEL
|
||||
struct cpu_rmap *mlx5_irq_get_rmap(struct mlx5_irq_table *irq_table)
|
||||
int mlx5_irq_get_index(struct mlx5_irq *irq)
|
||||
{
|
||||
return irq_table->rmap;
|
||||
return irq->index;
|
||||
}
|
||||
#endif
|
||||
|
||||
static void unrequest_irqs(struct mlx5_core_dev *dev)
|
||||
/* irq_pool API */
|
||||
|
||||
/* creating an irq from irq_pool */
|
||||
static struct mlx5_irq *irq_pool_create_irq(struct mlx5_irq_pool *pool,
|
||||
struct cpumask *affinity)
|
||||
{
|
||||
struct mlx5_irq *irq;
|
||||
u32 irq_index;
|
||||
int err;
|
||||
|
||||
err = xa_alloc(&pool->irqs, &irq_index, NULL, pool->xa_num_irqs,
|
||||
GFP_KERNEL);
|
||||
if (err)
|
||||
return ERR_PTR(err);
|
||||
irq = irq_request(pool, irq_index);
|
||||
if (IS_ERR(irq))
|
||||
return irq;
|
||||
cpumask_copy(irq->mask, affinity);
|
||||
irq_set_affinity_hint(irq->irqn, irq->mask);
|
||||
return irq;
|
||||
}
|
||||
|
||||
/* looking for the irq with the smallest refcount and the same affinity */
|
||||
static struct mlx5_irq *irq_pool_find_least_loaded(struct mlx5_irq_pool *pool,
|
||||
struct cpumask *affinity)
|
||||
{
|
||||
int start = pool->xa_num_irqs.min;
|
||||
int end = pool->xa_num_irqs.max;
|
||||
struct mlx5_irq *irq = NULL;
|
||||
struct mlx5_irq *iter;
|
||||
unsigned long index;
|
||||
|
||||
lockdep_assert_held(&pool->lock);
|
||||
xa_for_each_range(&pool->irqs, index, iter, start, end) {
|
||||
if (!cpumask_equal(iter->mask, affinity))
|
||||
continue;
|
||||
if (kref_read(&iter->kref) < pool->min_threshold)
|
||||
return iter;
|
||||
if (!irq || kref_read(&iter->kref) <
|
||||
kref_read(&irq->kref))
|
||||
irq = iter;
|
||||
}
|
||||
return irq;
|
||||
}
|
||||
|
||||
/* requesting an irq from a given pool according to given affinity */
|
||||
static struct mlx5_irq *irq_pool_request_affinity(struct mlx5_irq_pool *pool,
|
||||
struct cpumask *affinity)
|
||||
{
|
||||
struct mlx5_irq *least_loaded_irq, *new_irq;
|
||||
|
||||
mutex_lock(&pool->lock);
|
||||
least_loaded_irq = irq_pool_find_least_loaded(pool, affinity);
|
||||
if (least_loaded_irq &&
|
||||
kref_read(&least_loaded_irq->kref) < pool->min_threshold)
|
||||
goto out;
|
||||
new_irq = irq_pool_create_irq(pool, affinity);
|
||||
if (IS_ERR(new_irq)) {
|
||||
if (!least_loaded_irq) {
|
||||
mlx5_core_err(pool->dev, "Didn't find IRQ for cpu = %u\n",
|
||||
cpumask_first(affinity));
|
||||
mutex_unlock(&pool->lock);
|
||||
return new_irq;
|
||||
}
|
||||
/* We failed to create a new IRQ for the requested affinity,
|
||||
* sharing existing IRQ.
|
||||
*/
|
||||
goto out;
|
||||
}
|
||||
least_loaded_irq = new_irq;
|
||||
goto unlock;
|
||||
out:
|
||||
kref_get(&least_loaded_irq->kref);
|
||||
if (kref_read(&least_loaded_irq->kref) > pool->max_threshold)
|
||||
mlx5_core_dbg(pool->dev, "IRQ %u overloaded, pool_name: %s, %u EQs on this irq\n",
|
||||
least_loaded_irq->irqn, pool->name,
|
||||
kref_read(&least_loaded_irq->kref) / MLX5_EQ_REFS_PER_IRQ);
|
||||
unlock:
|
||||
mutex_unlock(&pool->lock);
|
||||
return least_loaded_irq;
|
||||
}
|
||||
|
||||
/* requesting an irq from a given pool according to given index */
|
||||
static struct mlx5_irq *
|
||||
irq_pool_request_vector(struct mlx5_irq_pool *pool, int vecidx,
|
||||
struct cpumask *affinity)
|
||||
{
|
||||
struct mlx5_irq *irq;
|
||||
|
||||
mutex_lock(&pool->lock);
|
||||
irq = xa_load(&pool->irqs, vecidx);
|
||||
if (irq) {
|
||||
kref_get(&irq->kref);
|
||||
goto unlock;
|
||||
}
|
||||
irq = irq_request(pool, vecidx);
|
||||
if (IS_ERR(irq) || !affinity)
|
||||
goto unlock;
|
||||
cpumask_copy(irq->mask, affinity);
|
||||
irq_set_affinity_hint(irq->irqn, irq->mask);
|
||||
unlock:
|
||||
mutex_unlock(&pool->lock);
|
||||
return irq;
|
||||
}
|
||||
|
||||
static struct mlx5_irq_pool *find_sf_irq_pool(struct mlx5_irq_table *irq_table,
|
||||
int i, struct cpumask *affinity)
|
||||
{
|
||||
if (cpumask_empty(affinity) && i == MLX5_IRQ_EQ_CTRL)
|
||||
return irq_table->sf_ctrl_pool;
|
||||
return irq_table->sf_comp_pool;
|
||||
}
|
||||
|
||||
/**
|
||||
* mlx5_irq_release - release an IRQ back to the system.
|
||||
* @irq: irq to be released.
|
||||
*/
|
||||
void mlx5_irq_release(struct mlx5_irq *irq)
|
||||
{
|
||||
synchronize_irq(irq->irqn);
|
||||
irq_put(irq);
|
||||
}
|
||||
|
||||
/**
|
||||
* mlx5_irq_request - request an IRQ for mlx5 device.
|
||||
* @dev: mlx5 device that requesting the IRQ.
|
||||
* @vecidx: vector index of the IRQ. This argument is ignore if affinity is
|
||||
* provided.
|
||||
* @affinity: cpumask requested for this IRQ.
|
||||
*
|
||||
* This function returns a pointer to IRQ, or ERR_PTR in case of error.
|
||||
*/
|
||||
struct mlx5_irq *mlx5_irq_request(struct mlx5_core_dev *dev, u16 vecidx,
|
||||
struct cpumask *affinity)
|
||||
{
|
||||
struct mlx5_irq_table *irq_table = mlx5_irq_table_get(dev);
|
||||
struct mlx5_irq_pool *pool;
|
||||
struct mlx5_irq *irq;
|
||||
|
||||
if (mlx5_core_is_sf(dev)) {
|
||||
pool = find_sf_irq_pool(irq_table, vecidx, affinity);
|
||||
if (!pool)
|
||||
/* we don't have IRQs for SFs, using the PF IRQs */
|
||||
goto pf_irq;
|
||||
if (cpumask_empty(affinity) && !strcmp(pool->name, "mlx5_sf_comp"))
|
||||
/* In case an SF user request IRQ with vecidx */
|
||||
irq = irq_pool_request_vector(pool, vecidx, NULL);
|
||||
else
|
||||
irq = irq_pool_request_affinity(pool, affinity);
|
||||
goto out;
|
||||
}
|
||||
pf_irq:
|
||||
pool = irq_table->pf_pool;
|
||||
irq = irq_pool_request_vector(pool, vecidx, affinity);
|
||||
out:
|
||||
if (IS_ERR(irq))
|
||||
return irq;
|
||||
mlx5_core_dbg(dev, "irq %u mapped to cpu %*pbl, %u EQs on this irq\n",
|
||||
irq->irqn, cpumask_pr_args(affinity),
|
||||
kref_read(&irq->kref) / MLX5_EQ_REFS_PER_IRQ);
|
||||
return irq;
|
||||
}

static struct mlx5_irq_pool *
irq_pool_alloc(struct mlx5_core_dev *dev, int start, int size, char *name,
               u32 min_threshold, u32 max_threshold)
{
        struct mlx5_irq_pool *pool = kvzalloc(sizeof(*pool), GFP_KERNEL);

        if (!pool)
                return ERR_PTR(-ENOMEM);
        pool->dev = dev;
        xa_init_flags(&pool->irqs, XA_FLAGS_ALLOC);
        pool->xa_num_irqs.min = start;
        pool->xa_num_irqs.max = start + size - 1;
        if (name)
                snprintf(pool->name, MLX5_MAX_IRQ_NAME - MLX5_MAX_IRQ_IDX_CHARS,
                         name);
        pool->min_threshold = min_threshold * MLX5_EQ_REFS_PER_IRQ;
        pool->max_threshold = max_threshold * MLX5_EQ_REFS_PER_IRQ;
        mutex_init(&pool->lock);
        mlx5_core_dbg(dev, "pool->name = %s, pool->size = %d, pool->start = %d",
                      name, size, start);
        return pool;
}
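/*
 * Editor's note (not part of the patch): the thresholds are stored in the
 * same units as the IRQ kref, i.e. EQs multiplied by MLX5_EQ_REFS_PER_IRQ,
 * so they can be compared directly against kref_read() when judging how
 * loaded an IRQ is. A stand-alone illustration with hypothetical numbers
 * (the real MLX5_EQ_SHARE_IRQ_* and MLX5_EQ_REFS_PER_IRQ values may differ):
 */
#include <stdio.h>

#define EQ_REFS_PER_IRQ         2       /* hypothetical stand-in value */
#define SHARE_IRQ_MIN_COMP      1       /* hypothetical: at least 1 EQ per IRQ */
#define SHARE_IRQ_MAX_COMP      4       /* hypothetical: at most 4 EQs per IRQ */

int main(void)
{
        int min_threshold = SHARE_IRQ_MIN_COMP * EQ_REFS_PER_IRQ;
        int max_threshold = SHARE_IRQ_MAX_COMP * EQ_REFS_PER_IRQ;
        int kref = 6;                   /* e.g. an IRQ already serving 3 EQs */

        printf("min=%d max=%d, irq overloaded: %s\n", min_threshold,
               max_threshold, kref > max_threshold ? "yes" : "no");
        return 0;
}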

static void irq_pool_free(struct mlx5_irq_pool *pool)
{
        struct mlx5_irq *irq;
        unsigned long index;

        xa_for_each(&pool->irqs, index, irq)
                irq_release(&irq->kref);
        xa_destroy(&pool->irqs);
        kvfree(pool);
}

static int irq_pools_init(struct mlx5_core_dev *dev, int sf_vec, int pf_vec)
{
        struct mlx5_irq_table *table = dev->priv.irq_table;
        int i;
        int num_sf_ctrl_by_msix;
        int num_sf_ctrl_by_sfs;
        int num_sf_ctrl;
        int err;

        for (i = 0; i < table->nvec; i++)
                free_irq(pci_irq_vector(dev->pdev, i),
                         &mlx5_irq_get(dev, i)->nh);
        /* init pf_pool */
        table->pf_pool = irq_pool_alloc(dev, 0, pf_vec, NULL,
                                        MLX5_EQ_SHARE_IRQ_MIN_COMP,
                                        MLX5_EQ_SHARE_IRQ_MAX_COMP);
        if (IS_ERR(table->pf_pool))
                return PTR_ERR(table->pf_pool);
        if (!mlx5_sf_max_functions(dev))
                return 0;
        if (sf_vec < MLX5_IRQ_VEC_COMP_BASE_SF) {
                mlx5_core_err(dev, "Not enough IRQs for SFs. SF may run at lower performance\n");
                return 0;
        }

        /* init sf_ctrl_pool */
        num_sf_ctrl_by_msix = DIV_ROUND_UP(sf_vec, MLX5_COMP_EQS_PER_SF);
        num_sf_ctrl_by_sfs = DIV_ROUND_UP(mlx5_sf_max_functions(dev),
                                          MLX5_SFS_PER_CTRL_IRQ);
        num_sf_ctrl = min_t(int, num_sf_ctrl_by_msix, num_sf_ctrl_by_sfs);
        num_sf_ctrl = min_t(int, MLX5_IRQ_CTRL_SF_MAX, num_sf_ctrl);
        table->sf_ctrl_pool = irq_pool_alloc(dev, pf_vec, num_sf_ctrl,
                                             "mlx5_sf_ctrl",
                                             MLX5_EQ_SHARE_IRQ_MIN_CTRL,
                                             MLX5_EQ_SHARE_IRQ_MAX_CTRL);
        if (IS_ERR(table->sf_ctrl_pool)) {
                err = PTR_ERR(table->sf_ctrl_pool);
                goto err_pf;
        }
        /* init sf_comp_pool */
        table->sf_comp_pool = irq_pool_alloc(dev, pf_vec + num_sf_ctrl,
                                             sf_vec - num_sf_ctrl, "mlx5_sf_comp",
                                             MLX5_EQ_SHARE_IRQ_MIN_COMP,
                                             MLX5_EQ_SHARE_IRQ_MAX_COMP);
        if (IS_ERR(table->sf_comp_pool)) {
                err = PTR_ERR(table->sf_comp_pool);
                goto err_sf_ctrl;
        }
        return 0;
err_sf_ctrl:
        irq_pool_free(table->sf_ctrl_pool);
err_pf:
        irq_pool_free(table->pf_pool);
        return err;
}
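/*
 * Editor's sketch (not part of the patch): a stand-alone re-run of the
 * control/completion split above with hypothetical inputs. The macro values
 * are placeholders chosen for the example; only the formulas mirror
 * irq_pools_init().
 */
#include <stdio.h>

#define DIV_ROUND_UP(n, d)      (((n) + (d) - 1) / (d))
#define MIN(a, b)               ((a) < (b) ? (a) : (b))

#define COMP_EQS_PER_SF         8       /* placeholder */
#define SFS_PER_CTRL_IRQ        64      /* placeholder */
#define IRQ_CTRL_SF_MAX         8       /* placeholder */

int main(void)
{
        int sf_vec = 200;               /* vectors left over for SFs (example) */
        int max_sfs = 250;              /* mlx5_sf_max_functions() (example) */
        int by_msix = DIV_ROUND_UP(sf_vec, COMP_EQS_PER_SF);
        int by_sfs = DIV_ROUND_UP(max_sfs, SFS_PER_CTRL_IRQ);
        int num_sf_ctrl = MIN(MIN(by_msix, by_sfs), IRQ_CTRL_SF_MAX);

        printf("ctrl pool: %d IRQs, comp pool: %d IRQs\n",
               num_sf_ctrl, sf_vec - num_sf_ctrl);
        return 0;
}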

static void irq_pools_destroy(struct mlx5_irq_table *table)
{
        if (table->sf_ctrl_pool) {
                irq_pool_free(table->sf_comp_pool);
                irq_pool_free(table->sf_ctrl_pool);
        }
        irq_pool_free(table->pf_pool);
}

/* irq_table API */

int mlx5_irq_table_init(struct mlx5_core_dev *dev)
{
        struct mlx5_irq_table *irq_table;

        if (mlx5_core_is_sf(dev))
                return 0;

        irq_table = kvzalloc(sizeof(*irq_table), GFP_KERNEL);
        if (!irq_table)
                return -ENOMEM;

        dev->priv.irq_table = irq_table;
        return 0;
}

void mlx5_irq_table_cleanup(struct mlx5_core_dev *dev)
{
        if (mlx5_core_is_sf(dev))
                return;

        kvfree(dev->priv.irq_table);
}

int mlx5_irq_table_get_num_comp(struct mlx5_irq_table *table)
{
        return table->pf_pool->xa_num_irqs.max - table->pf_pool->xa_num_irqs.min;
}

int mlx5_irq_table_create(struct mlx5_core_dev *dev)
{
        struct mlx5_priv *priv = &dev->priv;
        struct mlx5_irq_table *table = priv->irq_table;
        int num_eqs = MLX5_CAP_GEN(dev, max_num_eqs) ?
                      MLX5_CAP_GEN(dev, max_num_eqs) :
                      1 << MLX5_CAP_GEN(dev, log_max_eq);
        int nvec;
        int total_vec;
        int pf_vec;
        int err;

        if (mlx5_core_is_sf(dev))
                return 0;

        nvec = MLX5_CAP_GEN(dev, num_ports) * num_online_cpus() +
               MLX5_IRQ_VEC_COMP_BASE;
        nvec = min_t(int, nvec, num_eqs);
        if (nvec <= MLX5_IRQ_VEC_COMP_BASE)
        pf_vec = MLX5_CAP_GEN(dev, num_ports) * num_online_cpus() +
                 MLX5_IRQ_VEC_COMP_BASE;
        pf_vec = min_t(int, pf_vec, num_eqs);
        if (pf_vec <= MLX5_IRQ_VEC_COMP_BASE)
                return -ENOMEM;

        table->irq = kcalloc(nvec, sizeof(*table->irq), GFP_KERNEL);
        if (!table->irq)
                return -ENOMEM;
        total_vec = pf_vec;
        if (mlx5_sf_max_functions(dev))
                total_vec += MLX5_IRQ_CTRL_SF_MAX +
                        MLX5_COMP_EQS_PER_SF * mlx5_sf_max_functions(dev);

        nvec = pci_alloc_irq_vectors(dev->pdev, MLX5_IRQ_VEC_COMP_BASE + 1,
                                     nvec, PCI_IRQ_MSIX);
        if (nvec < 0) {
                err = nvec;
                goto err_free_irq;
        }
        total_vec = pci_alloc_irq_vectors(dev->pdev, MLX5_IRQ_VEC_COMP_BASE + 1,
                                          total_vec, PCI_IRQ_MSIX);
        if (total_vec < 0)
                return total_vec;
        pf_vec = min(pf_vec, total_vec);

        table->nvec = nvec;

        err = irq_set_rmap(dev);
        err = irq_pools_init(dev, total_vec - pf_vec, pf_vec);
        if (err)
                goto err_set_rmap;
        pci_free_irq_vectors(dev->pdev);

        err = request_irqs(dev, nvec);
        if (err)
                goto err_request_irqs;

        err = set_comp_irq_affinity_hints(dev);
        if (err) {
                mlx5_core_err(dev, "Failed to alloc affinity hint cpumask\n");
                goto err_set_affinity;
        }

        return 0;

err_set_affinity:
        unrequest_irqs(dev);
err_request_irqs:
        irq_clear_rmap(dev);
err_set_rmap:
        pci_free_irq_vectors(dev->pdev);
err_free_irq:
        kfree(table->irq);
        return err;
}

void mlx5_irq_table_destroy(struct mlx5_core_dev *dev)
{
        struct mlx5_irq_table *table = dev->priv.irq_table;
        int i;

        if (mlx5_core_is_sf(dev))
                return;

        /* free_irq requires that affinity and rmap will be cleared
         * before calling it. This is why there is asymmetry with set_rmap
         * which should be called after alloc_irq but before request_irq.
         */
        /* There are cases where IRQs still will be in use when we reach
         * here. Hence, making sure all the irqs are released.
         */
        irq_clear_rmap(dev);
        clear_comp_irqs_affinity_hints(dev);
        for (i = 0; i < table->nvec; i++)
                free_irq(pci_irq_vector(dev->pdev, i),
                         &mlx5_irq_get(dev, i)->nh);
        irq_pools_destroy(table);
        pci_free_irq_vectors(dev->pdev);
        kfree(table->irq);
}

int mlx5_irq_table_get_sfs_vec(struct mlx5_irq_table *table)
{
        if (table->sf_comp_pool)
                return table->sf_comp_pool->xa_num_irqs.max -
                        table->sf_comp_pool->xa_num_irqs.min + 1;
        else
                return mlx5_irq_table_get_num_comp(table);
}

struct mlx5_irq_table *mlx5_irq_table_get(struct mlx5_core_dev *dev)

@@ -5,42 +5,7 @@
#define __MLX5_SF_H__

#include <linux/mlx5/driver.h>

static inline u16 mlx5_sf_start_function_id(const struct mlx5_core_dev *dev)
{
        return MLX5_CAP_GEN(dev, sf_base_id);
}

#ifdef CONFIG_MLX5_SF

static inline bool mlx5_sf_supported(const struct mlx5_core_dev *dev)
{
        return MLX5_CAP_GEN(dev, sf);
}

static inline u16 mlx5_sf_max_functions(const struct mlx5_core_dev *dev)
{
        if (!mlx5_sf_supported(dev))
                return 0;
        if (MLX5_CAP_GEN(dev, max_num_sf))
                return MLX5_CAP_GEN(dev, max_num_sf);
        else
                return 1 << MLX5_CAP_GEN(dev, log_max_sf);
}

#else

static inline bool mlx5_sf_supported(const struct mlx5_core_dev *dev)
{
        return false;
}

static inline u16 mlx5_sf_max_functions(const struct mlx5_core_dev *dev)
{
        return 0;
}

#endif
#include "lib/sf.h"

#ifdef CONFIG_MLX5_SF_MANAGER

@@ -34,6 +34,7 @@
#include <linux/mlx5/driver.h>
#include <linux/mlx5/vport.h>
#include "mlx5_core.h"
#include "mlx5_irq.h"
#include "eswitch.h"

static int sriov_restore_guids(struct mlx5_core_dev *dev, int vf)

@@ -16,6 +16,7 @@ struct mlx5_eq_param {
        u8 irq_index;
        int nent;
        u64 mask[4];
        cpumask_var_t affinity;
};

struct mlx5_eq *

@@ -3806,8 +3806,8 @@ struct mlx5_ifc_eqc_bits {

        u8 reserved_at_80[0x20];

        u8 reserved_at_a0[0x18];
        u8 intr[0x8];
        u8 reserved_at_a0[0x14];
        u8 intr[0xc];

        u8 reserved_at_c0[0x3];
        u8 log_page_size[0x5];