Merge tag 'net-6.7-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Paolo Abeni:
 "Including fixes from WiFi and bpf.

  Current release - regressions:

   - bpf: syzkaller found null ptr deref in unix_bpf proto add

   - eth: i40e: fix ST code value for clause 45

  Previous releases - regressions:

   - core: return error from sk_stream_wait_connect() if sk_wait_event()
     fails

   - ipv6: revert remove expired routes with a separated list of routes

   - wifi rfkill:
       - set GPIO direction
       - fix crash with WED rx support enabled

   - bluetooth:
       - fix deadlock in vhci_send_frame
       - fix use-after-free in bt_sock_recvmsg

   - eth: mlx5e: fix a race in command alloc flow

   - eth: ice: fix PF with enabled XDP going no-carrier after reset

   - eth: bnxt_en: do not map packet buffers twice

  Previous releases - always broken:

   - core:
       - check vlan filter feature in vlan_vids_add_by_dev() and
         vlan_vids_del_by_dev()
       - check dev->gso_max_size in gso_features_check()

   - mptcp: fix inconsistent state on fastopen race

   - phy: skip LED triggers on PHYs on SFP modules

   - eth: mlx5e:
       - fix double free of encap_header
       - fix slab-out-of-bounds in mlx5_query_nic_vport_mac_list()"

* tag 'net-6.7-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (69 commits)
  net: check dev->gso_max_size in gso_features_check()
  kselftest: rtnetlink.sh: use grep_fail when expecting the cmd fail
  net/ipv6: Revert remove expired routes with a separated list of routes
  net: avoid build bug in skb extension length calculation
  net: ethernet: mtk_wed: fix possible NULL pointer dereference in mtk_wed_wo_queue_tx_clean()
  net: stmmac: fix incorrect flag check in timestamp interrupt
  selftests: add vlan hw filter tests
  net: check vlan filter feature in vlan_vids_add_by_dev() and vlan_vids_del_by_dev()
  net: hns3: add new maintainer for the HNS3 ethernet driver
  net: mana: select PAGE_POOL
  net: ks8851: Fix TX stall caused by TX buffer overrun
  ice: Fix PF with enabled XDP going no-carrier after reset
  ice: alter feature support check for SRIOV and LAG
  ice: stop trashing VF VSI aggregator node ID information
  mailmap: add entries for Geliang Tang
  mptcp: fill in missing MODULE_DESCRIPTION()
  mptcp: fix inconsistent state on fastopen race
  selftests: mptcp: join: fix subflow_send_ack lookup
  net: phy: skip LED triggers on PHYs on SFP modules
  bpf: Add missing BPF_LINK_TYPE invocations
  ...
commit 7c5e046bdc
diff --git a/.mailmap b/.mailmap
@@ -191,6 +191,10 @@ Gao Xiang <xiang@kernel.org> <gaoxiang25@huawei.com>
 Gao Xiang <xiang@kernel.org> <hsiangkao@aol.com>
 Gao Xiang <xiang@kernel.org> <hsiangkao@linux.alibaba.com>
 Gao Xiang <xiang@kernel.org> <hsiangkao@redhat.com>
+Geliang Tang <geliang.tang@linux.dev> <geliang.tang@suse.com>
+Geliang Tang <geliang.tang@linux.dev> <geliangtang@xiaomi.com>
+Geliang Tang <geliang.tang@linux.dev> <geliangtang@gmail.com>
+Geliang Tang <geliang.tang@linux.dev> <geliangtang@163.com>
 Georgi Djakov <djakov@kernel.org> <georgi.djakov@linaro.org>
 Gerald Schaefer <gerald.schaefer@linux.ibm.com> <geraldsc@de.ibm.com>
 Gerald Schaefer <gerald.schaefer@linux.ibm.com> <gerald.schaefer@de.ibm.com>
diff --git a/MAINTAINERS b/MAINTAINERS
@@ -9524,6 +9524,7 @@ F:	drivers/bus/hisi_lpc.c
 HISILICON NETWORK SUBSYSTEM 3 DRIVER (HNS3)
 M:	Yisen Zhuang <yisen.zhuang@huawei.com>
 M:	Salil Mehta <salil.mehta@huawei.com>
+M:	Jijie Shao <shaojijie@huawei.com>
 L:	netdev@vger.kernel.org
 S:	Maintained
 W:	http://www.hisilicon.com
diff --git a/drivers/bluetooth/hci_vhci.c b/drivers/bluetooth/hci_vhci.c
@@ -11,6 +11,7 @@
 #include <linux/module.h>
 #include <asm/unaligned.h>
 
+#include <linux/atomic.h>
 #include <linux/kernel.h>
 #include <linux/init.h>
 #include <linux/slab.h>
@@ -44,6 +45,7 @@ struct vhci_data {
     bool wakeup;
     __u16 msft_opcode;
     bool aosp_capable;
+    atomic_t initialized;
 };
 
 static int vhci_open_dev(struct hci_dev *hdev)
@@ -75,11 +77,10 @@ static int vhci_send_frame(struct hci_dev *hdev, struct sk_buff *skb)
 
     memcpy(skb_push(skb, 1), &hci_skb_pkt_type(skb), 1);
 
-    mutex_lock(&data->open_mutex);
     skb_queue_tail(&data->readq, skb);
-    mutex_unlock(&data->open_mutex);
 
-    wake_up_interruptible(&data->read_wait);
+    if (atomic_read(&data->initialized))
+        wake_up_interruptible(&data->read_wait);
     return 0;
 }
@@ -464,7 +465,8 @@ static int __vhci_create_device(struct vhci_data *data, __u8 opcode)
     skb_put_u8(skb, 0xff);
     skb_put_u8(skb, opcode);
     put_unaligned_le16(hdev->id, skb_put(skb, 2));
-    skb_queue_tail(&data->readq, skb);
+    skb_queue_head(&data->readq, skb);
+    atomic_inc(&data->initialized);
 
     wake_up_interruptible(&data->read_wait);
     return 0;
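The hci_vhci change above is the classic open-vs-use race fix: rather than serializing the hot send path with a mutex, the driver refuses to wake readers until initialization has been published through an atomic flag. Below is a minimal userspace analogue, assuming POSIX threads and C11 atomics; only the "initialized" name mirrors the driver, everything else is illustrative:

    /* Userspace sketch of the hci_vhci pattern: the sender only wakes
     * readers after setup has been published via an atomic flag, so no
     * mutex is needed on the hot path. Not the kernel code.
     */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_int initialized;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t read_wait = PTHREAD_COND_INITIALIZER;

    static void send_frame(void)
    {
        /* enqueue work here ... */
        if (atomic_load(&initialized)) {    /* skip wakeups during setup */
            pthread_mutex_lock(&lock);
            pthread_cond_broadcast(&read_wait);
            pthread_mutex_unlock(&lock);
        }
    }

    static void create_device(void)
    {
        /* ... finish setup, then publish readiness exactly once */
        atomic_fetch_add(&initialized, 1);
        pthread_mutex_lock(&lock);
        pthread_cond_broadcast(&read_wait);
        pthread_mutex_unlock(&lock);
    }

    int main(void)
    {
        create_device();
        send_frame();
        puts("ok");
        return 0;
    }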
diff --git a/drivers/net/ethernet/atheros/atl1e/atl1e_main.c b/drivers/net/ethernet/atheros/atl1e/atl1e_main.c
@@ -866,10 +866,13 @@ static int atl1e_setup_ring_resources(struct atl1e_adapter *adapter)
         netdev_err(adapter->netdev, "offset(%d) > ring size(%d) !!\n",
                    offset, adapter->ring_size);
         err = -1;
-        goto failed;
+        goto free_buffer;
     }
 
     return 0;
+free_buffer:
+    kfree(tx_ring->tx_buffer);
+    tx_ring->tx_buffer = NULL;
 failed:
     if (adapter->ring_vir_addr != NULL) {
         dma_free_coherent(&pdev->dev, adapter->ring_size,
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
@@ -59,7 +59,6 @@ struct bnxt_sw_tx_bd *bnxt_xmit_bd(struct bnxt *bp,
     for (i = 0; i < num_frags ; i++) {
         skb_frag_t *frag = &sinfo->frags[i];
         struct bnxt_sw_tx_bd *frag_tx_buf;
-        struct pci_dev *pdev = bp->pdev;
         dma_addr_t frag_mapping;
         int frag_len;
 
@@ -73,16 +72,10 @@ struct bnxt_sw_tx_bd *bnxt_xmit_bd(struct bnxt *bp,
         txbd = &txr->tx_desc_ring[TX_RING(prod)][TX_IDX(prod)];
 
         frag_len = skb_frag_size(frag);
-        frag_mapping = skb_frag_dma_map(&pdev->dev, frag, 0,
-                                        frag_len, DMA_TO_DEVICE);
-
-        if (unlikely(dma_mapping_error(&pdev->dev, frag_mapping)))
-            return NULL;
-
-        dma_unmap_addr_set(frag_tx_buf, mapping, frag_mapping);
-
         flags = frag_len << TX_BD_LEN_SHIFT;
         txbd->tx_bd_len_flags_type = cpu_to_le32(flags);
+        frag_mapping = page_pool_get_dma_addr(skb_frag_page(frag)) +
+                       skb_frag_off(frag);
         txbd->tx_bd_haddr = cpu_to_le64(frag_mapping);
 
         len = frag_len;
diff --git a/drivers/net/ethernet/intel/i40e/i40e_register.h b/drivers/net/ethernet/intel/i40e/i40e_register.h
@@ -207,7 +207,7 @@
 #define I40E_GLGEN_MSCA_OPCODE_SHIFT 26
 #define I40E_GLGEN_MSCA_OPCODE_MASK(_i) I40E_MASK(_i, I40E_GLGEN_MSCA_OPCODE_SHIFT)
 #define I40E_GLGEN_MSCA_STCODE_SHIFT 28
-#define I40E_GLGEN_MSCA_STCODE_MASK I40E_MASK(0x1, I40E_GLGEN_MSCA_STCODE_SHIFT)
+#define I40E_GLGEN_MSCA_STCODE_MASK(_i) I40E_MASK(_i, I40E_GLGEN_MSCA_STCODE_SHIFT)
 #define I40E_GLGEN_MSCA_MDICMD_SHIFT 30
 #define I40E_GLGEN_MSCA_MDICMD_MASK I40E_MASK(0x1, I40E_GLGEN_MSCA_MDICMD_SHIFT)
 #define I40E_GLGEN_MSCA_MDIINPROGEN_SHIFT 31
diff --git a/drivers/net/ethernet/intel/i40e/i40e_prototype.h b/drivers/net/ethernet/intel/i40e/i40e_prototype.h
@@ -37,11 +37,11 @@ typedef void (*I40E_ADMINQ_CALLBACK)(struct i40e_hw *, struct i40e_aq_desc *);
 #define I40E_QTX_CTL_VM_QUEUE 0x1
 #define I40E_QTX_CTL_PF_QUEUE 0x2
 
-#define I40E_MDIO_CLAUSE22_STCODE_MASK I40E_GLGEN_MSCA_STCODE_MASK
+#define I40E_MDIO_CLAUSE22_STCODE_MASK I40E_GLGEN_MSCA_STCODE_MASK(1)
 #define I40E_MDIO_CLAUSE22_OPCODE_WRITE_MASK I40E_GLGEN_MSCA_OPCODE_MASK(1)
 #define I40E_MDIO_CLAUSE22_OPCODE_READ_MASK I40E_GLGEN_MSCA_OPCODE_MASK(2)
 
-#define I40E_MDIO_CLAUSE45_STCODE_MASK I40E_GLGEN_MSCA_STCODE_MASK
+#define I40E_MDIO_CLAUSE45_STCODE_MASK I40E_GLGEN_MSCA_STCODE_MASK(0)
 #define I40E_MDIO_CLAUSE45_OPCODE_ADDRESS_MASK I40E_GLGEN_MSCA_OPCODE_MASK(0)
 #define I40E_MDIO_CLAUSE45_OPCODE_WRITE_MASK I40E_GLGEN_MSCA_OPCODE_MASK(1)
 #define I40E_MDIO_CLAUSE45_OPCODE_READ_MASK I40E_GLGEN_MSCA_OPCODE_MASK(3)
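For context on the i40e change: in IEEE 802.3 MDIO framing, the start-of-frame (ST) field is 01 for Clause 22 but 00 for Clause 45, which is why a hard-coded 0x1 ST code broke Clause 45 accesses. A standalone sketch of the now-parameterized mask follows; the macro shapes are copied from the hunk above, and main() is purely illustrative:

    /* Demonstrates the parameterized STCODE mask: Clause 22 uses ST=01,
     * Clause 45 uses ST=00, so the value cannot be fixed at 0x1.
     */
    #include <stdio.h>
    #include <stdint.h>

    #define I40E_MASK(mask, shift)          ((uint32_t)(mask) << (shift))
    #define I40E_GLGEN_MSCA_STCODE_SHIFT    28
    #define I40E_GLGEN_MSCA_STCODE_MASK(_i) I40E_MASK(_i, I40E_GLGEN_MSCA_STCODE_SHIFT)

    int main(void)
    {
        uint32_t clause22 = I40E_GLGEN_MSCA_STCODE_MASK(1); /* ST = 01 */
        uint32_t clause45 = I40E_GLGEN_MSCA_STCODE_MASK(0); /* ST = 00 */

        printf("clause22 ST bits: 0x%08x\n", (unsigned)clause22); /* 0x10000000 */
        printf("clause45 ST bits: 0x%08x\n", (unsigned)clause45); /* 0x00000000 */
        return 0;
    }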
diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c
@@ -1850,14 +1850,14 @@ ice_phy_type_to_ethtool(struct net_device *netdev,
     linkmode_zero(ks->link_modes.supported);
     linkmode_zero(ks->link_modes.advertising);
 
-    for (i = 0; i < BITS_PER_TYPE(u64); i++) {
+    for (i = 0; i < ARRAY_SIZE(phy_type_low_lkup); i++) {
         if (phy_types_low & BIT_ULL(i))
             ice_linkmode_set_bit(&phy_type_low_lkup[i], ks,
                                  req_speeds, advert_phy_type_lo,
                                  i);
     }
 
-    for (i = 0; i < BITS_PER_TYPE(u64); i++) {
+    for (i = 0; i < ARRAY_SIZE(phy_type_high_lkup); i++) {
         if (phy_types_high & BIT_ULL(i))
             ice_linkmode_set_bit(&phy_type_high_lkup[i], ks,
                                  req_speeds, advert_phy_type_hi,
diff --git a/drivers/net/ethernet/intel/ice/ice_lag.c b/drivers/net/ethernet/intel/ice/ice_lag.c
@@ -1981,6 +1981,8 @@ int ice_init_lag(struct ice_pf *pf)
     int n, err;
 
+    ice_lag_init_feature_support_flag(pf);
+
     if (!ice_is_feature_supported(pf, ICE_F_SRIOV_LAG))
         return 0;
 
     pf->lag = kzalloc(sizeof(*lag), GFP_KERNEL);
     if (!pf->lag)
diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
@@ -2371,6 +2371,9 @@ static int ice_vsi_cfg_tc_lan(struct ice_pf *pf, struct ice_vsi *vsi)
         } else {
             max_txqs[i] = vsi->alloc_txq;
         }
+
+        if (vsi->type == ICE_VSI_PF)
+            max_txqs[i] += vsi->num_xdp_txq;
     }
 
     dev_dbg(dev, "vsi->tc_cfg.ena_tc = %d\n", vsi->tc_cfg.ena_tc);
@@ -2620,10 +2623,6 @@ void ice_vsi_decfg(struct ice_vsi *vsi)
     if (vsi->type == ICE_VSI_VF &&
         vsi->agg_node && vsi->agg_node->valid)
         vsi->agg_node->num_vsis--;
-    if (vsi->agg_node) {
-        vsi->agg_node->valid = false;
-        vsi->agg_node->agg_id = 0;
-    }
 }
 
 /**
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dcbnl.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dcbnl.c
@@ -399,9 +399,10 @@ static int otx2_dcbnl_ieee_getpfc(struct net_device *dev, struct ieee_pfc *pfc)
 static int otx2_dcbnl_ieee_setpfc(struct net_device *dev, struct ieee_pfc *pfc)
 {
     struct otx2_nic *pfvf = netdev_priv(dev);
+    u8 old_pfc_en;
     int err;
 
     /* Save PFC configuration to interface */
+    old_pfc_en = pfvf->pfc_en;
     pfvf->pfc_en = pfc->pfc_en;
 
     if (pfvf->hw.tx_queues >= NIX_PF_PFC_PRIO_MAX)
@@ -411,13 +412,17 @@ static int otx2_dcbnl_ieee_setpfc(struct net_device *dev, struct ieee_pfc *pfc)
      * supported by the tx queue configuration
      */
     err = otx2_check_pfc_config(pfvf);
-    if (err)
+    if (err) {
+        pfvf->pfc_en = old_pfc_en;
         return err;
+    }
 
process_pfc:
     err = otx2_config_priority_flow_ctrl(pfvf);
-    if (err)
+    if (err) {
+        pfvf->pfc_en = old_pfc_en;
         return err;
+    }
 
     /* Request Per channel Bpids */
     if (pfc->pfc_en)
@@ -425,6 +430,12 @@ process_pfc:
 
     err = otx2_pfc_txschq_update(pfvf);
     if (err) {
+        if (pfc->pfc_en)
+            otx2_nix_config_bp(pfvf, false);
+
         otx2_pfc_txschq_stop(pfvf);
+        pfvf->pfc_en = old_pfc_en;
+        otx2_config_priority_flow_ctrl(pfvf);
         dev_err(pfvf->dev, "%s failed to update TX schedulers\n", __func__);
         return err;
     }
diff --git a/drivers/net/ethernet/mediatek/mtk_wed_wo.c b/drivers/net/ethernet/mediatek/mtk_wed_wo.c
@@ -291,6 +291,9 @@ mtk_wed_wo_queue_tx_clean(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q)
     for (i = 0; i < q->n_desc; i++) {
         struct mtk_wed_wo_queue_entry *entry = &q->entry[i];
 
+        if (!entry->buf)
+            continue;
+
         dma_unmap_single(wo->hw->dev, entry->addr, entry->len,
                          DMA_TO_DEVICE);
         skb_free_frag(entry->buf);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
@@ -156,15 +156,18 @@ static u8 alloc_token(struct mlx5_cmd *cmd)
     return token;
 }
 
-static int cmd_alloc_index(struct mlx5_cmd *cmd)
+static int cmd_alloc_index(struct mlx5_cmd *cmd, struct mlx5_cmd_work_ent *ent)
 {
     unsigned long flags;
     int ret;
 
     spin_lock_irqsave(&cmd->alloc_lock, flags);
     ret = find_first_bit(&cmd->vars.bitmask, cmd->vars.max_reg_cmds);
-    if (ret < cmd->vars.max_reg_cmds)
+    if (ret < cmd->vars.max_reg_cmds) {
         clear_bit(ret, &cmd->vars.bitmask);
+        ent->idx = ret;
+        cmd->ent_arr[ent->idx] = ent;
+    }
     spin_unlock_irqrestore(&cmd->alloc_lock, flags);
 
     return ret < cmd->vars.max_reg_cmds ? ret : -ENOMEM;
@@ -979,7 +982,7 @@ static void cmd_work_handler(struct work_struct *work)
     sem = ent->page_queue ? &cmd->vars.pages_sem : &cmd->vars.sem;
     down(sem);
     if (!ent->page_queue) {
-        alloc_ret = cmd_alloc_index(cmd);
+        alloc_ret = cmd_alloc_index(cmd, ent);
         if (alloc_ret < 0) {
             mlx5_core_err_rl(dev, "failed to allocate command entry\n");
             if (ent->callback) {
@@ -994,15 +997,14 @@ static void cmd_work_handler(struct work_struct *work)
             up(sem);
             return;
         }
-        ent->idx = alloc_ret;
     } else {
         ent->idx = cmd->vars.max_reg_cmds;
         spin_lock_irqsave(&cmd->alloc_lock, flags);
         clear_bit(ent->idx, &cmd->vars.bitmask);
+        cmd->ent_arr[ent->idx] = ent;
         spin_unlock_irqrestore(&cmd->alloc_lock, flags);
     }
 
-    cmd->ent_arr[ent->idx] = ent;
     lay = get_inst(cmd, ent->idx);
     ent->lay = lay;
     memset(lay, 0, sizeof(*lay));
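The cmd.c fix closes a window in which a command slot was already reserved in the bitmap but its owner was not yet visible in ent_arr, so a completion could look up a stale entry. Below is a userspace sketch of the resulting allocate-and-publish-under-one-lock pattern, assuming POSIX threads; all names are illustrative:

    /* Reserve a slot and publish its owner under the same lock, so a
     * concurrent completer can never observe a reserved-but-unowned slot.
     */
    #include <pthread.h>
    #include <stdio.h>

    #define MAX_CMDS 8

    static pthread_mutex_t alloc_lock = PTHREAD_MUTEX_INITIALIZER;
    static unsigned long bitmask = (1UL << MAX_CMDS) - 1;  /* 1 = free */
    static void *ent_arr[MAX_CMDS];

    static int cmd_alloc_index(void *ent)
    {
        int i, ret = -1;

        pthread_mutex_lock(&alloc_lock);
        for (i = 0; i < MAX_CMDS; i++) {
            if (bitmask & (1UL << i)) {
                bitmask &= ~(1UL << i); /* reserve the slot ... */
                ent_arr[i] = ent;       /* ... and publish the owner atomically */
                ret = i;
                break;
            }
        }
        pthread_mutex_unlock(&alloc_lock);
        return ret;
    }

    int main(void)
    {
        int idx = cmd_alloc_index(&bitmask); /* any pointer works for the demo */

        printf("allocated index %d\n", idx);
        return 0;
    }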
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
@@ -718,7 +718,7 @@ static void mlx5_fw_tracer_handle_traces(struct work_struct *work)
 
     while (block_timestamp > tracer->last_timestamp) {
         /* Check block override if it's not the first block */
-        if (!tracer->last_timestamp) {
+        if (tracer->last_timestamp) {
             u64 *ts_event;
             /* To avoid block override be the HW in case of buffer
              * wraparound, the time stamp of the previous block
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/fs_tt_redirect.c b/drivers/net/ethernet/mellanox/mlx5/core/en/fs_tt_redirect.c
@@ -154,6 +154,7 @@ static int fs_udp_create_groups(struct mlx5e_flow_table *ft, enum fs_udp_type ty
     in = kvzalloc(inlen, GFP_KERNEL);
     if (!in || !ft->g) {
         kfree(ft->g);
+        ft->g = NULL;
         kvfree(in);
         return -ENOMEM;
     }
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/mirred.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/mirred.c
@@ -197,7 +197,7 @@ parse_mirred_encap(struct mlx5e_tc_act_parse_state *parse_state,
     }
     esw_attr->dests[esw_attr->out_count].flags |= MLX5_ESW_DEST_ENCAP;
     esw_attr->out_count++;
-    /* attr->dests[].rep is resolved when we handle encap */
+    /* attr->dests[].vport is resolved when we handle encap */
 
     return 0;
 }
@@ -270,7 +270,8 @@ parse_mirred(struct mlx5e_tc_act_parse_state *parse_state,
 
     out_priv = netdev_priv(out_dev);
     rpriv = out_priv->ppriv;
-    esw_attr->dests[esw_attr->out_count].rep = rpriv->rep;
+    esw_attr->dests[esw_attr->out_count].vport_valid = true;
+    esw_attr->dests[esw_attr->out_count].vport = rpriv->rep->vport;
     esw_attr->dests[esw_attr->out_count].mdev = out_priv->mdev;
 
     esw_attr->out_count++;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
@@ -300,6 +300,10 @@ int mlx5e_tc_tun_create_header_ipv4(struct mlx5e_priv *priv,
     if (err)
         goto destroy_neigh_entry;
 
+    e->encap_size = ipv4_encap_size;
+    e->encap_header = encap_header;
+    encap_header = NULL;
+
     if (!(nud_state & NUD_VALID)) {
         neigh_event_send(attr.n, NULL);
         /* the encap entry will be made valid on neigh update event
@@ -310,8 +314,8 @@ int mlx5e_tc_tun_create_header_ipv4(struct mlx5e_priv *priv,
 
     memset(&reformat_params, 0, sizeof(reformat_params));
     reformat_params.type = e->reformat_type;
-    reformat_params.size = ipv4_encap_size;
-    reformat_params.data = encap_header;
+    reformat_params.size = e->encap_size;
+    reformat_params.data = e->encap_header;
     e->pkt_reformat = mlx5_packet_reformat_alloc(priv->mdev, &reformat_params,
                                                  MLX5_FLOW_NAMESPACE_FDB);
     if (IS_ERR(e->pkt_reformat)) {
@@ -319,8 +323,6 @@ int mlx5e_tc_tun_create_header_ipv4(struct mlx5e_priv *priv,
         goto destroy_neigh_entry;
     }
 
-    e->encap_size = ipv4_encap_size;
-    e->encap_header = encap_header;
     e->flags |= MLX5_ENCAP_ENTRY_VALID;
     mlx5e_rep_queue_neigh_stats_work(netdev_priv(attr.out_dev));
     mlx5e_route_lookup_ipv4_put(&attr);
@@ -403,18 +405,23 @@ int mlx5e_tc_tun_update_header_ipv4(struct mlx5e_priv *priv,
     if (err)
         goto free_encap;
 
+    e->encap_size = ipv4_encap_size;
+    kfree(e->encap_header);
+    e->encap_header = encap_header;
+    encap_header = NULL;
+
     if (!(nud_state & NUD_VALID)) {
         neigh_event_send(attr.n, NULL);
         /* the encap entry will be made valid on neigh update event
          * and not used before that.
          */
-        goto free_encap;
+        goto release_neigh;
     }
 
     memset(&reformat_params, 0, sizeof(reformat_params));
     reformat_params.type = e->reformat_type;
-    reformat_params.size = ipv4_encap_size;
-    reformat_params.data = encap_header;
+    reformat_params.size = e->encap_size;
+    reformat_params.data = e->encap_header;
     e->pkt_reformat = mlx5_packet_reformat_alloc(priv->mdev, &reformat_params,
                                                  MLX5_FLOW_NAMESPACE_FDB);
     if (IS_ERR(e->pkt_reformat)) {
@@ -422,10 +429,6 @@ int mlx5e_tc_tun_update_header_ipv4(struct mlx5e_priv *priv,
         goto free_encap;
     }
 
-    e->encap_size = ipv4_encap_size;
-    kfree(e->encap_header);
-    e->encap_header = encap_header;
-
     e->flags |= MLX5_ENCAP_ENTRY_VALID;
     mlx5e_rep_queue_neigh_stats_work(netdev_priv(attr.out_dev));
     mlx5e_route_lookup_ipv4_put(&attr);
@@ -567,6 +570,10 @@ int mlx5e_tc_tun_create_header_ipv6(struct mlx5e_priv *priv,
     if (err)
         goto destroy_neigh_entry;
 
+    e->encap_size = ipv6_encap_size;
+    e->encap_header = encap_header;
+    encap_header = NULL;
+
     if (!(nud_state & NUD_VALID)) {
         neigh_event_send(attr.n, NULL);
         /* the encap entry will be made valid on neigh update event
@@ -577,8 +584,8 @@ int mlx5e_tc_tun_create_header_ipv6(struct mlx5e_priv *priv,
 
     memset(&reformat_params, 0, sizeof(reformat_params));
     reformat_params.type = e->reformat_type;
-    reformat_params.size = ipv6_encap_size;
-    reformat_params.data = encap_header;
+    reformat_params.size = e->encap_size;
+    reformat_params.data = e->encap_header;
     e->pkt_reformat = mlx5_packet_reformat_alloc(priv->mdev, &reformat_params,
                                                  MLX5_FLOW_NAMESPACE_FDB);
     if (IS_ERR(e->pkt_reformat)) {
@@ -586,8 +593,6 @@ int mlx5e_tc_tun_create_header_ipv6(struct mlx5e_priv *priv,
         goto destroy_neigh_entry;
     }
 
-    e->encap_size = ipv6_encap_size;
-    e->encap_header = encap_header;
     e->flags |= MLX5_ENCAP_ENTRY_VALID;
     mlx5e_rep_queue_neigh_stats_work(netdev_priv(attr.out_dev));
     mlx5e_route_lookup_ipv6_put(&attr);
@@ -669,18 +674,23 @@ int mlx5e_tc_tun_update_header_ipv6(struct mlx5e_priv *priv,
     if (err)
         goto free_encap;
 
+    e->encap_size = ipv6_encap_size;
+    kfree(e->encap_header);
+    e->encap_header = encap_header;
+    encap_header = NULL;
+
     if (!(nud_state & NUD_VALID)) {
         neigh_event_send(attr.n, NULL);
         /* the encap entry will be made valid on neigh update event
          * and not used before that.
          */
-        goto free_encap;
+        goto release_neigh;
     }
 
     memset(&reformat_params, 0, sizeof(reformat_params));
     reformat_params.type = e->reformat_type;
-    reformat_params.size = ipv6_encap_size;
-    reformat_params.data = encap_header;
+    reformat_params.size = e->encap_size;
+    reformat_params.data = e->encap_header;
     e->pkt_reformat = mlx5_packet_reformat_alloc(priv->mdev, &reformat_params,
                                                  MLX5_FLOW_NAMESPACE_FDB);
     if (IS_ERR(e->pkt_reformat)) {
@@ -688,10 +698,6 @@ int mlx5e_tc_tun_update_header_ipv6(struct mlx5e_priv *priv,
         goto free_encap;
     }
 
-    e->encap_size = ipv6_encap_size;
-    kfree(e->encap_header);
-    e->encap_header = encap_header;
-
     e->flags |= MLX5_ENCAP_ENTRY_VALID;
     mlx5e_rep_queue_neigh_stats_work(netdev_priv(attr.out_dev));
     mlx5e_route_lookup_ipv6_put(&attr);
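All of the tc_tun.c hunks apply one idea: hand ownership of encap_header to the entry the moment it becomes valid, and NULL the local pointer, so the error path can free both unconditionally without a double free. A self-contained sketch of that ownership transfer, with an illustrative struct rather than the mlx5 types:

    /* Ownership-transfer pattern from the double-free fix: after the
     * assignment, only e->encap_header owns the buffer; the local
     * pointer is NULL, so freeing both is always safe.
     */
    #include <stdlib.h>
    #include <string.h>

    struct encap_entry {
        char *encap_header;
        size_t encap_size;
    };

    static int build_encap(struct encap_entry *e, size_t size)
    {
        char *encap_header = malloc(size);

        if (!encap_header)
            return -1;
        memset(encap_header, 0, size);

        /* transfer ownership before anything below can fail */
        e->encap_size = size;
        free(e->encap_header);      /* drop any previous header */
        e->encap_header = encap_header;
        encap_header = NULL;

        /* ... later failures free only e->encap_header, exactly once ... */
        free(encap_header);         /* always safe: it is NULL here */
        return 0;
    }

    int main(void)
    {
        struct encap_entry e = { 0 };

        build_encap(&e, 64);
        free(e.encap_header);
        return 0;
    }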
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c
@@ -1064,7 +1064,8 @@ int mlx5e_tc_tun_encap_dests_set(struct mlx5e_priv *priv,
 
         out_priv = netdev_priv(encap_dev);
         rpriv = out_priv->ppriv;
-        esw_attr->dests[out_index].rep = rpriv->rep;
+        esw_attr->dests[out_index].vport_valid = true;
+        esw_attr->dests[out_index].vport = rpriv->rep->vport;
         esw_attr->dests[out_index].mdev = out_priv->mdev;
     }
 
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
@@ -493,6 +493,7 @@ mlx5e_xmit_xdp_frame(struct mlx5e_xdpsq *sq, struct mlx5e_xmit_data *xdptxd,
     dma_addr_t dma_addr = xdptxd->dma_addr;
     u32 dma_len = xdptxd->len;
     u16 ds_cnt, inline_hdr_sz;
+    unsigned int frags_size;
     u8 num_wqebbs = 1;
     int num_frags = 0;
     bool inline_ok;
@@ -503,8 +504,9 @@ mlx5e_xmit_xdp_frame(struct mlx5e_xdpsq *sq, struct mlx5e_xmit_data *xdptxd,
 
     inline_ok = sq->min_inline_mode == MLX5_INLINE_MODE_NONE ||
         dma_len >= MLX5E_XDP_MIN_INLINE;
+    frags_size = xdptxd->has_frags ? xdptxdf->sinfo->xdp_frags_size : 0;
 
-    if (unlikely(!inline_ok || sq->hw_mtu < dma_len)) {
+    if (unlikely(!inline_ok || sq->hw_mtu < dma_len + frags_size)) {
         stats->err++;
         return false;
     }
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c
@@ -2142,7 +2142,7 @@ static int mlx5e_ipsec_block_tc_offload(struct mlx5_core_dev *mdev)
 
 static void mlx5e_ipsec_unblock_tc_offload(struct mlx5_core_dev *mdev)
 {
-    mdev->num_block_tc++;
+    mdev->num_block_tc--;
 }
 
 int mlx5e_accel_ipsec_fs_add_rule(struct mlx5e_ipsec_sa_entry *sa_entry)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
@@ -49,7 +49,7 @@ void mlx5e_ethtool_get_drvinfo(struct mlx5e_priv *priv,
     count = snprintf(drvinfo->fw_version, sizeof(drvinfo->fw_version),
                      "%d.%d.%04d (%.16s)", fw_rev_maj(mdev),
                      fw_rev_min(mdev), fw_rev_sub(mdev), mdev->board_id);
-    if (count == sizeof(drvinfo->fw_version))
+    if (count >= sizeof(drvinfo->fw_version))
         snprintf(drvinfo->fw_version, sizeof(drvinfo->fw_version),
                  "%d.%d.%04d", fw_rev_maj(mdev),
                  fw_rev_min(mdev), fw_rev_sub(mdev));
|
||||
count = snprintf(drvinfo->fw_version, sizeof(drvinfo->fw_version),
|
||||
"%d.%d.%04d (%.16s)", fw_rev_maj(mdev),
|
||||
fw_rev_min(mdev), fw_rev_sub(mdev), mdev->board_id);
|
||||
if (count == sizeof(drvinfo->fw_version))
|
||||
if (count >= sizeof(drvinfo->fw_version))
|
||||
snprintf(drvinfo->fw_version, sizeof(drvinfo->fw_version),
|
||||
"%d.%d.%04d", fw_rev_maj(mdev),
|
||||
fw_rev_min(mdev), fw_rev_sub(mdev));
|
||||
|
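The two drvinfo hunks rely on snprintf() semantics: it returns the length the output would have had, so on truncation the return value can be strictly greater than the buffer size and an `==` comparison misses it. A tiny standalone demonstration:

    /* snprintf() returns the would-be length, so detect truncation
     * with >=, not ==.
     */
    #include <stdio.h>

    int main(void)
    {
        char buf[8];
        int count = snprintf(buf, sizeof(buf), "%s", "0123456789abcdef");

        /* count is 16 here, well past sizeof(buf) == 8 */
        if (count >= (int)sizeof(buf))
            printf("truncated: wanted %d bytes, have %zu\n",
                   count, sizeof(buf));
        return 0;
    }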
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
@@ -3778,7 +3778,8 @@ alloc_branch_attr(struct mlx5e_tc_flow *flow,
         break;
     case FLOW_ACTION_ACCEPT:
     case FLOW_ACTION_PIPE:
-        if (set_branch_dest_ft(flow->priv, attr))
+        err = set_branch_dest_ft(flow->priv, attr);
+        if (err)
             goto out_err;
         break;
     case FLOW_ACTION_JUMP:
@@ -3788,7 +3789,8 @@ alloc_branch_attr(struct mlx5e_tc_flow *flow,
             goto out_err;
         }
         *jump_count = cond->extval;
-        if (set_branch_dest_ft(flow->priv, attr))
+        err = set_branch_dest_ft(flow->priv, attr);
+        if (err)
             goto out_err;
         break;
     default:
@@ -5736,8 +5738,10 @@ int mlx5e_tc_action_miss_mapping_get(struct mlx5e_priv *priv, struct mlx5_flow_a
 
     esw = priv->mdev->priv.eswitch;
     attr->act_id_restore_rule = esw_add_restore_rule(esw, *act_miss_mapping);
-    if (IS_ERR(attr->act_id_restore_rule))
+    if (IS_ERR(attr->act_id_restore_rule)) {
+        err = PTR_ERR(attr->act_id_restore_rule);
         goto err_rule;
+    }
 
     return 0;
 
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
@@ -526,7 +526,8 @@ struct mlx5_esw_flow_attr {
     u8      total_vlan;
     struct {
         u32 flags;
-        struct mlx5_eswitch_rep *rep;
+        bool vport_valid;
+        u16 vport;
         struct mlx5_pkt_reformat *pkt_reformat;
         struct mlx5_core_dev *mdev;
         struct mlx5_termtbl_handle *termtbl;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
@@ -287,10 +287,9 @@ static void esw_put_dest_tables_loop(struct mlx5_eswitch *esw, struct mlx5_flow_
     for (i = from; i < to; i++)
         if (esw_attr->dests[i].flags & MLX5_ESW_DEST_CHAIN_WITH_SRC_PORT_CHANGE)
             mlx5_chains_put_table(chains, 0, 1, 0);
-        else if (mlx5_esw_indir_table_needed(esw, attr, esw_attr->dests[i].rep->vport,
+        else if (mlx5_esw_indir_table_needed(esw, attr, esw_attr->dests[i].vport,
                                              esw_attr->dests[i].mdev))
-            mlx5_esw_indir_table_put(esw, esw_attr->dests[i].rep->vport,
-                                     false);
+            mlx5_esw_indir_table_put(esw, esw_attr->dests[i].vport, false);
 }
 
 static bool
@@ -358,8 +357,8 @@ esw_is_indir_table(struct mlx5_eswitch *esw, struct mlx5_flow_attr *attr)
      * this criteria.
      */
     for (i = esw_attr->split_count; i < esw_attr->out_count; i++) {
-        if (esw_attr->dests[i].rep &&
-            mlx5_esw_indir_table_needed(esw, attr, esw_attr->dests[i].rep->vport,
+        if (esw_attr->dests[i].vport_valid &&
+            mlx5_esw_indir_table_needed(esw, attr, esw_attr->dests[i].vport,
                                         esw_attr->dests[i].mdev)) {
             result = true;
         } else {
@@ -388,7 +387,7 @@ esw_setup_indir_table(struct mlx5_flow_destination *dest,
         dest[*i].type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE;
 
         dest[*i].ft = mlx5_esw_indir_table_get(esw, attr,
-                                               esw_attr->dests[j].rep->vport, false);
+                                               esw_attr->dests[j].vport, false);
         if (IS_ERR(dest[*i].ft)) {
             err = PTR_ERR(dest[*i].ft);
             goto err_indir_tbl_get;
@@ -432,11 +431,11 @@ static bool esw_setup_uplink_fwd_ipsec_needed(struct mlx5_eswitch *esw,
                                               int attr_idx)
 {
     if (esw->offloads.ft_ipsec_tx_pol &&
-        esw_attr->dests[attr_idx].rep &&
-        esw_attr->dests[attr_idx].rep->vport == MLX5_VPORT_UPLINK &&
+        esw_attr->dests[attr_idx].vport_valid &&
+        esw_attr->dests[attr_idx].vport == MLX5_VPORT_UPLINK &&
         /* To be aligned with software, encryption is needed only for tunnel device */
         (esw_attr->dests[attr_idx].flags & MLX5_ESW_DEST_ENCAP_VALID) &&
-        esw_attr->dests[attr_idx].rep != esw_attr->in_rep &&
+        esw_attr->dests[attr_idx].vport != esw_attr->in_rep->vport &&
         esw_same_vhca_id(esw_attr->dests[attr_idx].mdev, esw->dev))
         return true;
 
@@ -469,7 +468,7 @@ esw_setup_dest_fwd_vport(struct mlx5_flow_destination *dest, struct mlx5_flow_ac
                          int attr_idx, int dest_idx, bool pkt_reformat)
 {
     dest[dest_idx].type = MLX5_FLOW_DESTINATION_TYPE_VPORT;
-    dest[dest_idx].vport.num = esw_attr->dests[attr_idx].rep->vport;
+    dest[dest_idx].vport.num = esw_attr->dests[attr_idx].vport;
     if (MLX5_CAP_ESW(esw->dev, merged_eswitch)) {
         dest[dest_idx].vport.vhca_id =
             MLX5_CAP_GEN(esw_attr->dests[attr_idx].mdev, vhca_id);
@@ -1177,9 +1176,9 @@ static int esw_add_fdb_peer_miss_rules(struct mlx5_eswitch *esw,
     struct mlx5_flow_handle *flow;
     struct mlx5_flow_spec *spec;
     struct mlx5_vport *vport;
+    int err, pfindex;
     unsigned long i;
     void *misc;
-    int err;
 
     if (!MLX5_VPORT_MANAGER(esw->dev) && !mlx5_core_is_ecpf_esw_manager(esw->dev))
         return 0;
@@ -1255,7 +1254,15 @@ static int esw_add_fdb_peer_miss_rules(struct mlx5_eswitch *esw,
             flows[vport->index] = flow;
         }
     }
-    esw->fdb_table.offloads.peer_miss_rules[mlx5_get_dev_index(peer_dev)] = flows;
+
+    pfindex = mlx5_get_dev_index(peer_dev);
+    if (pfindex >= MLX5_MAX_PORTS) {
+        esw_warn(esw->dev, "Peer dev index(%d) is over the max num defined(%d)\n",
+                 pfindex, MLX5_MAX_PORTS);
+        err = -EINVAL;
+        goto add_ec_vf_flow_err;
+    }
+    esw->fdb_table.offloads.peer_miss_rules[pfindex] = flows;
 
     kvfree(spec);
     return 0;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c
@@ -233,8 +233,8 @@ mlx5_eswitch_termtbl_required(struct mlx5_eswitch *esw,
 
     /* hairpin */
     for (i = esw_attr->split_count; i < esw_attr->out_count; i++)
-        if (!esw_attr->dest_int_port && esw_attr->dests[i].rep &&
-            esw_attr->dests[i].rep->vport == MLX5_VPORT_UPLINK)
+        if (!esw_attr->dest_int_port && esw_attr->dests[i].vport_valid &&
+            esw_attr->dests[i].vport == MLX5_VPORT_UPLINK)
             return true;
 
     return false;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/vport.c b/drivers/net/ethernet/mellanox/mlx5/core/vport.c
@@ -277,7 +277,7 @@ int mlx5_query_nic_vport_mac_list(struct mlx5_core_dev *dev,
         req_list_size = max_list_size;
     }
 
-    out_sz = MLX5_ST_SZ_BYTES(query_nic_vport_context_in) +
+    out_sz = MLX5_ST_SZ_BYTES(query_nic_vport_context_out) +
             req_list_size * MLX5_ST_SZ_BYTES(mac_address_layout);
 
     out = kvzalloc(out_sz, GFP_KERNEL);
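The vport.c slab-out-of-bounds came from sizing the reply buffer with the request layout, which is smaller than the reply layout, so parsing the response ran past the allocation. A plain C stand-in with made-up struct sizes to show the difference:

    /* Size the buffer from the layout you will actually parse. The
     * struct sizes here are invented for the demo.
     */
    #include <stdio.h>
    #include <stdlib.h>

    struct query_in  { char hdr[16]; };                 /* smaller request layout */
    struct query_out { char hdr[16]; char ctx[48]; };   /* larger reply layout */

    int main(void)
    {
        size_t entries = 4, entry_sz = 8;

        /* wrong: sizeof(struct query_in) under-allocates the reply */
        size_t bad = sizeof(struct query_in) + entries * entry_sz;
        /* right: size the buffer from the reply layout */
        size_t good = sizeof(struct query_out) + entries * entry_sz;

        void *out = calloc(1, good);

        printf("bad=%zu good=%zu\n", bad, good);
        free(out);
        return 0;
    }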
diff --git a/drivers/net/ethernet/micrel/ks8851.h b/drivers/net/ethernet/micrel/ks8851.h
@@ -350,6 +350,8 @@ union ks8851_tx_hdr {
 * @rxd: Space for receiving SPI data, in DMA-able space.
 * @txd: Space for transmitting SPI data, in DMA-able space.
 * @msg_enable: The message flags controlling driver output (see ethtool).
+ * @tx_space: Free space in the hardware TX buffer (cached copy of KS_TXMIR).
+ * @queued_len: Space required in hardware TX buffer for queued packets in txq.
 * @fid: Incrementing frame id tag.
 * @rc_ier: Cached copy of KS_IER.
 * @rc_ccr: Cached copy of KS_CCR.
@@ -399,6 +401,7 @@ struct ks8851_net {
     struct work_struct  rxctrl_work;
 
     struct sk_buff_head txq;
+    unsigned int        queued_len;
 
     struct eeprom_93cx6 eeprom;
     struct regulator    *vdd_reg;
diff --git a/drivers/net/ethernet/micrel/ks8851_common.c b/drivers/net/ethernet/micrel/ks8851_common.c
@@ -362,16 +362,18 @@ static irqreturn_t ks8851_irq(int irq, void *_ks)
         handled |= IRQ_RXPSI;
 
     if (status & IRQ_TXI) {
-        handled |= IRQ_TXI;
-
-        /* no lock here, tx queue should have been stopped */
-
-        /* update our idea of how much tx space is available to the
-         * system */
-        ks->tx_space = ks8851_rdreg16(ks, KS_TXMIR);
+        unsigned short tx_space = ks8851_rdreg16(ks, KS_TXMIR);
+
         netif_dbg(ks, intr, ks->netdev,
-                  "%s: txspace %d\n", __func__, ks->tx_space);
+                  "%s: txspace %d\n", __func__, tx_space);
+
+        spin_lock(&ks->statelock);
+        ks->tx_space = tx_space;
+        if (netif_queue_stopped(ks->netdev))
+            netif_wake_queue(ks->netdev);
+        spin_unlock(&ks->statelock);
+
+        handled |= IRQ_TXI;
     }
 
     if (status & IRQ_RXI)
@@ -414,9 +416,6 @@ static irqreturn_t ks8851_irq(int irq, void *_ks)
     if (status & IRQ_LCI)
         mii_check_link(&ks->mii);
 
-    if (status & IRQ_TXI)
-        netif_wake_queue(ks->netdev);
-
     return IRQ_HANDLED;
 }
 
@@ -500,6 +499,7 @@ static int ks8851_net_open(struct net_device *dev)
     ks8851_wrreg16(ks, KS_ISR, ks->rc_ier);
     ks8851_wrreg16(ks, KS_IER, ks->rc_ier);
 
+    ks->queued_len = 0;
     netif_start_queue(ks->netdev);
 
     netif_dbg(ks, ifup, ks->netdev, "network device up\n");
diff --git a/drivers/net/ethernet/micrel/ks8851_spi.c b/drivers/net/ethernet/micrel/ks8851_spi.c
@@ -286,6 +286,18 @@ static void ks8851_wrfifo_spi(struct ks8851_net *ks, struct sk_buff *txp,
         netdev_err(ks->netdev, "%s: spi_sync() failed\n", __func__);
 }
 
+/**
+ * calc_txlen - calculate size of message to send packet
+ * @len: Length of data
+ *
+ * Returns the size of the TXFIFO message needed to send
+ * this packet.
+ */
+static unsigned int calc_txlen(unsigned int len)
+{
+    return ALIGN(len + 4, 4);
+}
+
 /**
 * ks8851_rx_skb_spi - receive skbuff
 * @ks: The device state
@@ -305,7 +317,9 @@ static void ks8851_rx_skb_spi(struct ks8851_net *ks, struct sk_buff *skb)
 */
 static void ks8851_tx_work(struct work_struct *work)
 {
+    unsigned int dequeued_len = 0;
     struct ks8851_net_spi *kss;
+    unsigned short tx_space;
     struct ks8851_net *ks;
     unsigned long flags;
     struct sk_buff *txb;
@@ -322,6 +336,8 @@ static void ks8851_tx_work(struct work_struct *work)
         last = skb_queue_empty(&ks->txq);
 
         if (txb) {
+            dequeued_len += calc_txlen(txb->len);
+
             ks8851_wrreg16_spi(ks, KS_RXQCR,
                                ks->rc_rxqcr | RXQCR_SDA);
             ks8851_wrfifo_spi(ks, txb, last);
@@ -332,6 +348,13 @@ static void ks8851_tx_work(struct work_struct *work)
         }
     }
 
+    tx_space = ks8851_rdreg16_spi(ks, KS_TXMIR);
+
+    spin_lock(&ks->statelock);
+    ks->queued_len -= dequeued_len;
+    ks->tx_space = tx_space;
+    spin_unlock(&ks->statelock);
+
     ks8851_unlock_spi(ks, &flags);
 }
 
@@ -346,18 +369,6 @@ static void ks8851_flush_tx_work_spi(struct ks8851_net *ks)
     flush_work(&kss->tx_work);
 }
 
-/**
- * calc_txlen - calculate size of message to send packet
- * @len: Length of data
- *
- * Returns the size of the TXFIFO message needed to send
- * this packet.
- */
-static unsigned int calc_txlen(unsigned int len)
-{
-    return ALIGN(len + 4, 4);
-}
-
 /**
 * ks8851_start_xmit_spi - transmit packet using SPI
 * @skb: The buffer to transmit
@@ -386,16 +397,17 @@ static netdev_tx_t ks8851_start_xmit_spi(struct sk_buff *skb,
 
     spin_lock(&ks->statelock);
 
-    if (needed > ks->tx_space) {
+    if (ks->queued_len + needed > ks->tx_space) {
         netif_stop_queue(dev);
         ret = NETDEV_TX_BUSY;
     } else {
-        ks->tx_space -= needed;
+        ks->queued_len += needed;
         skb_queue_tail(&ks->txq, skb);
     }
 
     spin_unlock(&ks->statelock);
-    schedule_work(&kss->tx_work);
+    if (ret == NETDEV_TX_OK)
+        schedule_work(&kss->tx_work);
 
     return ret;
 }
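The ks8851 hunks add byte accounting so the stack can never queue more than the hardware TX FIFO can hold. Below is a toy userspace model of that accounting; the names mirror the driver, while locking and the real thresholds are deliberately simplified:

    /* Toy model of the ks8851 TX flow control: accept a packet only if
     * the already-queued bytes plus this packet still fit the cached
     * hardware TX space.
     */
    #include <stdio.h>

    #define ALIGN4(x)   (((x) + 3U) & ~3U)

    static unsigned int tx_space = 6144;    /* cached KS_TXMIR value */
    static unsigned int queued_len;

    static unsigned int calc_txlen(unsigned int len)
    {
        return ALIGN4(len + 4);     /* header + padding, as in the driver */
    }

    static int start_xmit(unsigned int pkt_len)
    {
        unsigned int needed = calc_txlen(pkt_len);

        if (queued_len + needed > tx_space)
            return -1;              /* NETDEV_TX_BUSY: stop the queue */
        queued_len += needed;       /* accounted until tx_work dequeues it */
        return 0;
    }

    int main(void)
    {
        unsigned int i, sent = 0;

        for (i = 0; i < 8; i++)
            if (start_xmit(1514) == 0)
                sent++;
        printf("accepted %u packets, queued_len=%u\n", sent, queued_len);
        return 0;
    }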
diff --git a/drivers/net/ethernet/microsoft/Kconfig b/drivers/net/ethernet/microsoft/Kconfig
@@ -20,6 +20,7 @@ config MICROSOFT_MANA
 	depends on PCI_MSI && X86_64
 	depends on PCI_HYPERV
 	select AUXILIARY_BUS
+	select PAGE_POOL
 	help
 	  This driver supports Microsoft Azure Network Adapter (MANA).
 	  So far, the driver is only supported on X86_64.
diff --git a/drivers/net/ethernet/mscc/ocelot_stats.c b/drivers/net/ethernet/mscc/ocelot_stats.c
@@ -582,10 +582,10 @@ static void ocelot_port_rmon_stats_cb(struct ocelot *ocelot, int port, void *pri
     rmon_stats->hist_tx[0] = s[OCELOT_STAT_TX_64];
     rmon_stats->hist_tx[1] = s[OCELOT_STAT_TX_65_127];
     rmon_stats->hist_tx[2] = s[OCELOT_STAT_TX_128_255];
-    rmon_stats->hist_tx[3] = s[OCELOT_STAT_TX_128_255];
-    rmon_stats->hist_tx[4] = s[OCELOT_STAT_TX_256_511];
-    rmon_stats->hist_tx[5] = s[OCELOT_STAT_TX_512_1023];
-    rmon_stats->hist_tx[6] = s[OCELOT_STAT_TX_1024_1526];
+    rmon_stats->hist_tx[3] = s[OCELOT_STAT_TX_256_511];
+    rmon_stats->hist_tx[4] = s[OCELOT_STAT_TX_512_1023];
+    rmon_stats->hist_tx[5] = s[OCELOT_STAT_TX_1024_1526];
+    rmon_stats->hist_tx[6] = s[OCELOT_STAT_TX_1527_MAX];
 }
 
 static void ocelot_port_pmac_rmon_stats_cb(struct ocelot *ocelot, int port,
@@ -610,10 +610,10 @@ static void ocelot_port_pmac_rmon_stats_cb(struct ocelot *ocelot, int port,
     rmon_stats->hist_tx[0] = s[OCELOT_STAT_TX_PMAC_64];
     rmon_stats->hist_tx[1] = s[OCELOT_STAT_TX_PMAC_65_127];
     rmon_stats->hist_tx[2] = s[OCELOT_STAT_TX_PMAC_128_255];
-    rmon_stats->hist_tx[3] = s[OCELOT_STAT_TX_PMAC_128_255];
-    rmon_stats->hist_tx[4] = s[OCELOT_STAT_TX_PMAC_256_511];
-    rmon_stats->hist_tx[5] = s[OCELOT_STAT_TX_PMAC_512_1023];
-    rmon_stats->hist_tx[6] = s[OCELOT_STAT_TX_PMAC_1024_1526];
+    rmon_stats->hist_tx[3] = s[OCELOT_STAT_TX_PMAC_256_511];
+    rmon_stats->hist_tx[4] = s[OCELOT_STAT_TX_PMAC_512_1023];
+    rmon_stats->hist_tx[5] = s[OCELOT_STAT_TX_PMAC_1024_1526];
+    rmon_stats->hist_tx[6] = s[OCELOT_STAT_TX_PMAC_1527_MAX];
 }
 
 void ocelot_port_get_rmon_stats(struct ocelot *ocelot, int port,
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
@@ -237,7 +237,7 @@ static void timestamp_interrupt(struct stmmac_priv *priv)
      */
     ts_status = readl(priv->ioaddr + GMAC_TIMESTAMP_STATUS);
 
-    if (priv->plat->flags & STMMAC_FLAG_EXT_SNAPSHOT_EN)
+    if (!(priv->plat->flags & STMMAC_FLAG_EXT_SNAPSHOT_EN))
         return;
 
     num_snapshot = (ts_status & GMAC_TIMESTAMP_ATSNS_MASK) >>
diff --git a/drivers/net/ethernet/wangxun/libwx/wx_lib.c b/drivers/net/ethernet/wangxun/libwx/wx_lib.c
@@ -160,60 +160,6 @@ static __le32 wx_test_staterr(union wx_rx_desc *rx_desc,
     return rx_desc->wb.upper.status_error & cpu_to_le32(stat_err_bits);
 }
 
-static bool wx_can_reuse_rx_page(struct wx_rx_buffer *rx_buffer,
-                                 int rx_buffer_pgcnt)
-{
-    unsigned int pagecnt_bias = rx_buffer->pagecnt_bias;
-    struct page *page = rx_buffer->page;
-
-    /* avoid re-using remote and pfmemalloc pages */
-    if (!dev_page_is_reusable(page))
-        return false;
-
-#if (PAGE_SIZE < 8192)
-    /* if we are only owner of page we can reuse it */
-    if (unlikely((rx_buffer_pgcnt - pagecnt_bias) > 1))
-        return false;
-#endif
-
-    /* If we have drained the page fragment pool we need to update
-     * the pagecnt_bias and page count so that we fully restock the
-     * number of references the driver holds.
-     */
-    if (unlikely(pagecnt_bias == 1)) {
-        page_ref_add(page, USHRT_MAX - 1);
-        rx_buffer->pagecnt_bias = USHRT_MAX;
-    }
-
-    return true;
-}
-
-/**
- * wx_reuse_rx_page - page flip buffer and store it back on the ring
- * @rx_ring: rx descriptor ring to store buffers on
- * @old_buff: donor buffer to have page reused
- *
- * Synchronizes page for reuse by the adapter
- **/
-static void wx_reuse_rx_page(struct wx_ring *rx_ring,
-                             struct wx_rx_buffer *old_buff)
-{
-    u16 nta = rx_ring->next_to_alloc;
-    struct wx_rx_buffer *new_buff;
-
-    new_buff = &rx_ring->rx_buffer_info[nta];
-
-    /* update, and store next to alloc */
-    nta++;
-    rx_ring->next_to_alloc = (nta < rx_ring->count) ? nta : 0;
-
-    /* transfer page from old buffer to new buffer */
-    new_buff->page = old_buff->page;
-    new_buff->page_dma = old_buff->page_dma;
-    new_buff->page_offset = old_buff->page_offset;
-    new_buff->pagecnt_bias = old_buff->pagecnt_bias;
-}
-
 static void wx_dma_sync_frag(struct wx_ring *rx_ring,
                              struct wx_rx_buffer *rx_buffer)
 {
@@ -270,8 +216,6 @@ static struct wx_rx_buffer *wx_get_rx_buffer(struct wx_ring *rx_ring,
                                   size,
                                   DMA_FROM_DEVICE);
 skip_sync:
-    rx_buffer->pagecnt_bias--;
-
     return rx_buffer;
 }
 
@@ -280,19 +224,9 @@ static void wx_put_rx_buffer(struct wx_ring *rx_ring,
                              struct sk_buff *skb,
                              int rx_buffer_pgcnt)
 {
-    if (wx_can_reuse_rx_page(rx_buffer, rx_buffer_pgcnt)) {
-        /* hand second half of page back to the ring */
-        wx_reuse_rx_page(rx_ring, rx_buffer);
-    } else {
-        if (!IS_ERR(skb) && WX_CB(skb)->dma == rx_buffer->dma)
-            /* the page has been released from the ring */
-            WX_CB(skb)->page_released = true;
-        else
-            page_pool_put_full_page(rx_ring->page_pool, rx_buffer->page, false);
-
-        __page_frag_cache_drain(rx_buffer->page,
-                                rx_buffer->pagecnt_bias);
-    }
+    if (!IS_ERR(skb) && WX_CB(skb)->dma == rx_buffer->dma)
+        /* the page has been released from the ring */
+        WX_CB(skb)->page_released = true;
 
     /* clear contents of rx_buffer */
     rx_buffer->page = NULL;
@@ -335,11 +269,12 @@ static struct sk_buff *wx_build_skb(struct wx_ring *rx_ring,
         if (size <= WX_RXBUFFER_256) {
             memcpy(__skb_put(skb, size), page_addr,
                    ALIGN(size, sizeof(long)));
-            rx_buffer->pagecnt_bias++;
-
+            page_pool_put_full_page(rx_ring->page_pool, rx_buffer->page, true);
             return skb;
         }
 
+        skb_mark_for_recycle(skb);
+
         if (!wx_test_staterr(rx_desc, WX_RXD_STAT_EOP))
             WX_CB(skb)->dma = rx_buffer->dma;
 
@@ -382,8 +317,6 @@ static bool wx_alloc_mapped_page(struct wx_ring *rx_ring,
     bi->page_dma = dma;
     bi->page = page;
     bi->page_offset = 0;
-    page_ref_add(page, USHRT_MAX - 1);
-    bi->pagecnt_bias = USHRT_MAX;
 
     return true;
 }
@@ -723,7 +656,6 @@ static int wx_clean_rx_irq(struct wx_q_vector *q_vector,
         /* exit if we failed to retrieve a buffer */
         if (!skb) {
             rx_ring->rx_stats.alloc_rx_buff_failed++;
-            rx_buffer->pagecnt_bias++;
             break;
         }
 
@@ -2248,8 +2180,6 @@ static void wx_clean_rx_ring(struct wx_ring *rx_ring)
 
         /* free resources associated with mapping */
         page_pool_put_full_page(rx_ring->page_pool, rx_buffer->page, false);
-        __page_frag_cache_drain(rx_buffer->page,
-                                rx_buffer->pagecnt_bias);
 
         i++;
         rx_buffer++;
diff --git a/drivers/net/ethernet/wangxun/libwx/wx_type.h b/drivers/net/ethernet/wangxun/libwx/wx_type.h
@@ -787,7 +787,6 @@ struct wx_rx_buffer {
     dma_addr_t page_dma;
     struct page *page;
    unsigned int page_offset;
-    u16 pagecnt_bias;
 };
 
 struct wx_queue_stats {
diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c
@@ -1548,7 +1548,8 @@ int phy_attach_direct(struct net_device *dev, struct phy_device *phydev,
         goto error;
 
     phy_resume(phydev);
-    phy_led_triggers_register(phydev);
+    if (!phydev->is_on_sfp_module)
+        phy_led_triggers_register(phydev);
 
     /**
      * If the external phy used by current mac interface is managed by
@@ -1817,7 +1818,8 @@ void phy_detach(struct phy_device *phydev)
     }
     phydev->phylink = NULL;
 
-    phy_led_triggers_unregister(phydev);
+    if (!phydev->is_on_sfp_module)
+        phy_led_triggers_unregister(phydev);
 
     if (phydev->mdio.dev.driver)
         module_put(phydev->mdio.dev.driver->owner);
diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
@@ -1385,7 +1385,7 @@ static void iwl_pcie_rx_handle_rb(struct iwl_trans *trans,
      * if it is true then one of the handlers took the page.
      */
 
-    if (reclaim) {
+    if (reclaim && txq) {
         u16 sequence = le16_to_cpu(pkt->hdr.sequence);
         int index = SEQ_TO_INDEX(sequence);
         int cmd_index = iwl_txq_get_cmd_index(txq, index);
diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
@@ -3106,7 +3106,7 @@ static u32 iwl_trans_pcie_dump_rbs(struct iwl_trans *trans,
     struct iwl_rxq *rxq = &trans_pcie->rxq[0];
     u32 i, r, j, rb_len = 0;
 
-    spin_lock(&rxq->lock);
+    spin_lock_bh(&rxq->lock);
 
     r = iwl_get_closed_rb_stts(trans, rxq);
 
@@ -3130,7 +3130,7 @@ static u32 iwl_trans_pcie_dump_rbs(struct iwl_trans *trans,
         *data = iwl_fw_error_next_data(*data);
     }
 
-    spin_unlock(&rxq->lock);
+    spin_unlock_bh(&rxq->lock);
 
     return rb_len;
 }
diff --git a/drivers/net/wireless/mediatek/mt76/dma.c b/drivers/net/wireless/mediatek/mt76/dma.c
@@ -783,7 +783,7 @@ mt76_dma_rx_reset(struct mt76_dev *dev, enum mt76_rxq_id qid)
 
 static void
 mt76_add_fragment(struct mt76_dev *dev, struct mt76_queue *q, void *data,
-                  int len, bool more, u32 info)
+                  int len, bool more, u32 info, bool allow_direct)
 {
     struct sk_buff *skb = q->rx_head;
     struct skb_shared_info *shinfo = skb_shinfo(skb);
@@ -795,7 +795,7 @@ mt76_add_fragment(struct mt76_dev *dev, struct mt76_queue *q, void *data,
 
         skb_add_rx_frag(skb, nr_frags, page, offset, len, q->buf_size);
     } else {
-        mt76_put_page_pool_buf(data, true);
+        mt76_put_page_pool_buf(data, allow_direct);
     }
 
     if (more)
@@ -815,6 +815,7 @@ mt76_dma_rx_process(struct mt76_dev *dev, struct mt76_queue *q, int budget)
     struct sk_buff *skb;
     unsigned char *data;
     bool check_ddone = false;
+    bool allow_direct = !mt76_queue_is_wed_rx(q);
     bool more;
 
     if (IS_ENABLED(CONFIG_NET_MEDIATEK_SOC_WED) &&
@@ -855,7 +856,8 @@ mt76_dma_rx_process(struct mt76_dev *dev, struct mt76_queue *q, int budget)
         }
 
         if (q->rx_head) {
-            mt76_add_fragment(dev, q, data, len, more, info);
+            mt76_add_fragment(dev, q, data, len, more, info,
+                              allow_direct);
             continue;
         }
 
@@ -884,7 +886,7 @@ mt76_dma_rx_process(struct mt76_dev *dev, struct mt76_queue *q, int budget)
             continue;
 
free_frag:
-        mt76_put_page_pool_buf(data, true);
+        mt76_put_page_pool_buf(data, allow_direct);
     }
 
     mt76_dma_rx_fill(dev, q, true);
diff --git a/include/linux/bpf_types.h b/include/linux/bpf_types.h
@@ -142,9 +142,13 @@ BPF_LINK_TYPE(BPF_LINK_TYPE_ITER, iter)
 #ifdef CONFIG_NET
 BPF_LINK_TYPE(BPF_LINK_TYPE_NETNS, netns)
 BPF_LINK_TYPE(BPF_LINK_TYPE_XDP, xdp)
+BPF_LINK_TYPE(BPF_LINK_TYPE_NETFILTER, netfilter)
+BPF_LINK_TYPE(BPF_LINK_TYPE_TCX, tcx)
+BPF_LINK_TYPE(BPF_LINK_TYPE_NETKIT, netkit)
 #endif
 #ifdef CONFIG_PERF_EVENTS
 BPF_LINK_TYPE(BPF_LINK_TYPE_PERF_EVENT, perf)
 #endif
 BPF_LINK_TYPE(BPF_LINK_TYPE_KPROBE_MULTI, kprobe_multi)
 BPF_LINK_TYPE(BPF_LINK_TYPE_STRUCT_OPS, struct_ops)
+BPF_LINK_TYPE(BPF_LINK_TYPE_UPROBE_MULTI, uprobe_multi)
diff --git a/include/linux/ieee80211.h b/include/linux/ieee80211.h
@@ -4447,7 +4447,8 @@ ieee80211_is_protected_dual_of_public_action(struct sk_buff *skb)
            action != WLAN_PUB_ACTION_LOC_TRACK_NOTI &&
            action != WLAN_PUB_ACTION_FTM_REQUEST &&
            action != WLAN_PUB_ACTION_FTM_RESPONSE &&
-           action != WLAN_PUB_ACTION_FILS_DISCOVERY;
+           action != WLAN_PUB_ACTION_FILS_DISCOVERY &&
+           action != WLAN_PUB_ACTION_VENDOR_SPECIFIC;
 }
 
 /**
diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
@@ -189,6 +189,7 @@ struct blocked_key {
 struct smp_csrk {
     bdaddr_t bdaddr;
     u8 bdaddr_type;
+    u8 link_type;
     u8 type;
     u8 val[16];
 };
@@ -198,6 +199,7 @@ struct smp_ltk {
     struct rcu_head rcu;
     bdaddr_t bdaddr;
     u8 bdaddr_type;
+    u8 link_type;
     u8 authenticated;
     u8 type;
     u8 enc_size;
@@ -212,6 +214,7 @@ struct smp_irk {
     bdaddr_t rpa;
     bdaddr_t bdaddr;
     u8 addr_type;
+    u8 link_type;
     u8 val[16];
 };
 
@@ -219,6 +222,8 @@ struct link_key {
     struct list_head list;
     struct rcu_head rcu;
     bdaddr_t bdaddr;
+    u8 bdaddr_type;
+    u8 link_type;
     u8 type;
     u8 val[HCI_LINK_KEY_SIZE];
     u8 pin_len;
@@ -1227,11 +1232,11 @@ static inline struct hci_conn *hci_conn_hash_lookup_cis(struct hci_dev *hdev,
             continue;
 
         /* Match CIG ID if set */
-        if (cig != BT_ISO_QOS_CIG_UNSET && cig != c->iso_qos.ucast.cig)
+        if (cig != c->iso_qos.ucast.cig)
             continue;
 
         /* Match CIS ID if set */
-        if (id != BT_ISO_QOS_CIS_UNSET && id != c->iso_qos.ucast.cis)
+        if (id != c->iso_qos.ucast.cis)
             continue;
 
         /* Match destination address if set */
diff --git a/include/net/ip6_fib.h b/include/net/ip6_fib.h
@@ -179,9 +179,6 @@ struct fib6_info {
 
     refcount_t          fib6_ref;
     unsigned long       expires;
-
-    struct hlist_node   gc_link;
-
     struct dst_metrics  *fib6_metrics;
 #define fib6_pmtu       fib6_metrics->metrics[RTAX_MTU-1]
 
@@ -250,6 +247,19 @@ static inline bool fib6_requires_src(const struct fib6_info *rt)
     return rt->fib6_src.plen > 0;
 }
 
+static inline void fib6_clean_expires(struct fib6_info *f6i)
+{
+    f6i->fib6_flags &= ~RTF_EXPIRES;
+    f6i->expires = 0;
+}
+
+static inline void fib6_set_expires(struct fib6_info *f6i,
+                                    unsigned long expires)
+{
+    f6i->expires = expires;
+    f6i->fib6_flags |= RTF_EXPIRES;
+}
+
 static inline bool fib6_check_expired(const struct fib6_info *f6i)
 {
     if (f6i->fib6_flags & RTF_EXPIRES)
@@ -257,11 +267,6 @@ static inline bool fib6_check_expired(const struct fib6_info *f6i)
     return false;
 }
 
-static inline bool fib6_has_expires(const struct fib6_info *f6i)
-{
-    return f6i->fib6_flags & RTF_EXPIRES;
-}
-
 /* Function to safely get fn->fn_sernum for passed in rt
  * and store result in passed in cookie.
  * Return true if we can get cookie safely
@@ -383,7 +388,6 @@ struct fib6_table {
     struct inet_peer_base   tb6_peers;
     unsigned int        flags;
     unsigned int        fib_seq;
-    struct hlist_head   tb6_gc_hlist;   /* GC candidates */
 #define RT6_TABLE_HAS_DFLT_ROUTER   BIT(0)
 };
 
@@ -500,48 +504,6 @@ void fib6_gc_cleanup(void);
 
 int fib6_init(void);
 
-/* fib6_info must be locked by the caller, and fib6_info->fib6_table can be
- * NULL.
- */
-static inline void fib6_set_expires_locked(struct fib6_info *f6i,
-                                           unsigned long expires)
-{
-    struct fib6_table *tb6;
-
-    tb6 = f6i->fib6_table;
-    f6i->expires = expires;
-    if (tb6 && !fib6_has_expires(f6i))
-        hlist_add_head(&f6i->gc_link, &tb6->tb6_gc_hlist);
-    f6i->fib6_flags |= RTF_EXPIRES;
-}
-
-/* fib6_info must be locked by the caller, and fib6_info->fib6_table can be
- * NULL. If fib6_table is NULL, the fib6_info will no be inserted into the
- * list of GC candidates until it is inserted into a table.
- */
-static inline void fib6_set_expires(struct fib6_info *f6i,
-                                    unsigned long expires)
-{
-    spin_lock_bh(&f6i->fib6_table->tb6_lock);
-    fib6_set_expires_locked(f6i, expires);
-    spin_unlock_bh(&f6i->fib6_table->tb6_lock);
-}
-
-static inline void fib6_clean_expires_locked(struct fib6_info *f6i)
-{
-    if (fib6_has_expires(f6i))
-        hlist_del_init(&f6i->gc_link);
-    f6i->fib6_flags &= ~RTF_EXPIRES;
-    f6i->expires = 0;
-}
-
-static inline void fib6_clean_expires(struct fib6_info *f6i)
-{
-    spin_lock_bh(&f6i->fib6_table->tb6_lock);
-    fib6_clean_expires_locked(f6i);
-    spin_unlock_bh(&f6i->fib6_table->tb6_lock);
-}
-
 struct ipv6_route_iter {
     struct seq_net_private p;
     struct fib6_walker w;
diff --git a/include/net/sock.h b/include/net/sock.h
@@ -2799,6 +2799,11 @@ static inline bool sk_is_tcp(const struct sock *sk)
     return sk->sk_type == SOCK_STREAM && sk->sk_protocol == IPPROTO_TCP;
 }
 
+static inline bool sk_is_stream_unix(const struct sock *sk)
+{
+    return sk->sk_family == AF_UNIX && sk->sk_type == SOCK_STREAM;
+}
+
 /**
  * sk_eat_skb - Release a skb if it is no longer needed
  * @sk: socket to eat this skb from
diff --git a/net/8021q/vlan_core.c b/net/8021q/vlan_core.c
@@ -407,6 +407,8 @@ int vlan_vids_add_by_dev(struct net_device *dev,
         return 0;
 
     list_for_each_entry(vid_info, &vlan_info->vid_list, list) {
+        if (!vlan_hw_filter_capable(by_dev, vid_info->proto))
+            continue;
         err = vlan_vid_add(dev, vid_info->proto, vid_info->vid);
         if (err)
             goto unwind;
@@ -417,6 +419,8 @@ unwind:
     list_for_each_entry_continue_reverse(vid_info,
                                          &vlan_info->vid_list,
                                          list) {
+        if (!vlan_hw_filter_capable(by_dev, vid_info->proto))
+            continue;
         vlan_vid_del(dev, vid_info->proto, vid_info->vid);
     }
 
@@ -436,8 +440,11 @@ void vlan_vids_del_by_dev(struct net_device *dev,
     if (!vlan_info)
         return;
 
-    list_for_each_entry(vid_info, &vlan_info->vid_list, list)
+    list_for_each_entry(vid_info, &vlan_info->vid_list, list) {
+        if (!vlan_hw_filter_capable(by_dev, vid_info->proto))
+            continue;
         vlan_vid_del(dev, vid_info->proto, vid_info->vid);
+    }
 }
 EXPORT_SYMBOL(vlan_vids_del_by_dev);
 
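The vlan_core.c fix is about symmetry: if a capability check can skip programming an entry, the same check must guard the unwind and teardown loops, or they will try to remove entries that were never added. An illustrative standalone sketch, where capable() is a stand-in for vlan_hw_filter_capable():

    /* Symmetric guard on add/remove: whatever predicate skips an item
     * at add time must skip it at removal time too.
     */
    #include <stdio.h>
    #include <stdbool.h>

    #define N 5

    static bool capable(int proto) { return proto % 2 == 0; } /* stand-in check */

    static void add_all(const int *protos, int n)
    {
        int i;

        for (i = 0; i < n; i++) {
            if (!capable(protos[i]))
                continue;           /* skipped: must skip on removal too */
            printf("add %d\n", protos[i]);
        }
    }

    static void del_all(const int *protos, int n)
    {
        int i;

        for (i = 0; i < n; i++) {
            if (!capable(protos[i]))
                continue;           /* mirror the add-side guard */
            printf("del %d\n", protos[i]);
        }
    }

    int main(void)
    {
        int protos[N] = { 0, 1, 2, 3, 4 };

        add_all(protos, N);
        del_all(protos, N);
        return 0;
    }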
diff --git a/net/bluetooth/af_bluetooth.c b/net/bluetooth/af_bluetooth.c
@@ -309,11 +309,14 @@ int bt_sock_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
     if (flags & MSG_OOB)
         return -EOPNOTSUPP;
 
+    lock_sock(sk);
+
     skb = skb_recv_datagram(sk, flags, &err);
     if (!skb) {
         if (sk->sk_shutdown & RCV_SHUTDOWN)
-            return 0;
+            err = 0;
 
+        release_sock(sk);
         return err;
     }
 
@@ -343,6 +346,8 @@ int bt_sock_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
 
     skb_free_datagram(sk, skb);
 
+    release_sock(sk);
+
     if (flags & MSG_TRUNC)
         copied = skblen;
 
diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
@@ -516,6 +516,9 @@ static u8 hci_cc_read_class_of_dev(struct hci_dev *hdev, void *data,
 {
     struct hci_rp_read_class_of_dev *rp = data;
 
+    if (WARN_ON(!hdev))
+        return HCI_ERROR_UNSPECIFIED;
+
     bt_dev_dbg(hdev, "status 0x%2.2x", rp->status);
 
     if (rp->status)
@@ -747,9 +750,23 @@ static u8 hci_cc_read_enc_key_size(struct hci_dev *hdev, void *data,
     } else {
         conn->enc_key_size = rp->key_size;
         status = 0;
+
+        if (conn->enc_key_size < hdev->min_enc_key_size) {
+            /* As slave role, the conn->state has been set to
+             * BT_CONNECTED and l2cap conn req might not be received
+             * yet, at this moment the l2cap layer almost does
+             * nothing with the non-zero status.
+             * So we also clear encrypt related bits, and then the
+             * handler of l2cap conn req will get the right secure
+             * state at a later time.
+             */
+            status = HCI_ERROR_AUTH_FAILURE;
+            clear_bit(HCI_CONN_ENCRYPT, &conn->flags);
+            clear_bit(HCI_CONN_AES_CCM, &conn->flags);
+        }
     }
 
-    hci_encrypt_cfm(conn, 0);
+    hci_encrypt_cfm(conn, status);
 
done:
     hci_dev_unlock(hdev);
@@ -820,8 +837,6 @@ static u8 hci_cc_write_auth_payload_timeout(struct hci_dev *hdev, void *data,
     if (!rp->status)
         conn->auth_payload_timeout = get_unaligned_le16(sent + 2);
 
-    hci_encrypt_cfm(conn, 0);
-
unlock:
     hci_dev_unlock(hdev);
 
@@ -2304,7 +2319,8 @@ static void hci_cs_inquiry(struct hci_dev *hdev, __u8 status)
         return;
     }
 
-    set_bit(HCI_INQUIRY, &hdev->flags);
+    if (hci_sent_cmd_data(hdev, HCI_OP_INQUIRY))
+        set_bit(HCI_INQUIRY, &hdev->flags);
 }
 
 static void hci_cs_create_conn(struct hci_dev *hdev, __u8 status)
@@ -3683,12 +3699,8 @@ static void hci_encrypt_change_evt(struct hci_dev *hdev, void *data,
         cp.handle = cpu_to_le16(conn->handle);
         cp.timeout = cpu_to_le16(hdev->auth_payload_timeout);
         if (hci_send_cmd(conn->hdev, HCI_OP_WRITE_AUTH_PAYLOAD_TO,
-                         sizeof(cp), &cp)) {
+                         sizeof(cp), &cp))
             bt_dev_err(hdev, "write auth payload timeout failed");
-            goto notify;
-        }
-
-        goto unlock;
     }
 
notify:
diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
@@ -6492,6 +6492,14 @@ drop:
     kfree_skb(skb);
 }
 
+static inline void l2cap_sig_send_rej(struct l2cap_conn *conn, u16 ident)
+{
+    struct l2cap_cmd_rej_unk rej;
+
+    rej.reason = cpu_to_le16(L2CAP_REJ_NOT_UNDERSTOOD);
+    l2cap_send_cmd(conn, ident, L2CAP_COMMAND_REJ, sizeof(rej), &rej);
+}
+
 static inline void l2cap_sig_channel(struct l2cap_conn *conn,
                                      struct sk_buff *skb)
 {
@@ -6517,23 +6525,24 @@ static inline void l2cap_sig_channel(struct l2cap_conn *conn,
 
         if (len > skb->len || !cmd->ident) {
             BT_DBG("corrupted command");
+            l2cap_sig_send_rej(conn, cmd->ident);
             break;
         }
 
         err = l2cap_bredr_sig_cmd(conn, cmd, len, skb->data);
         if (err) {
-            struct l2cap_cmd_rej_unk rej;
-
             BT_ERR("Wrong link type (%d)", err);
-
-            rej.reason = cpu_to_le16(L2CAP_REJ_NOT_UNDERSTOOD);
-            l2cap_send_cmd(conn, cmd->ident, L2CAP_COMMAND_REJ,
-                           sizeof(rej), &rej);
+            l2cap_sig_send_rej(conn, cmd->ident);
         }
 
         skb_pull(skb, len);
     }
 
+    if (skb->len > 0) {
+        BT_DBG("corrupted command");
+        l2cap_sig_send_rej(conn, 0);
+    }
+
drop:
     kfree_skb(skb);
 }
--- a/net/bluetooth/mgmt.c
+++ b/net/bluetooth/mgmt.c
@@ -2897,7 +2897,8 @@ static int load_link_keys(struct sock *sk, struct hci_dev *hdev, void *data,
 	for (i = 0; i < key_count; i++) {
 		struct mgmt_link_key_info *key = &cp->keys[i];
 
-		if (key->addr.type != BDADDR_BREDR || key->type > 0x08)
+		/* Considering SMP over BREDR/LE, there is no need to check addr_type */
+		if (key->type > 0x08)
 			return mgmt_cmd_status(sk, hdev->id,
 					       MGMT_OP_LOAD_LINK_KEYS,
 					       MGMT_STATUS_INVALID_PARAMS);
@@ -7130,6 +7131,7 @@ static int load_irks(struct sock *sk, struct hci_dev *hdev, void *cp_data,
 
 	for (i = 0; i < irk_count; i++) {
 		struct mgmt_irk_info *irk = &cp->irks[i];
+		u8 addr_type = le_addr_type(irk->addr.type);
 
 		if (hci_is_blocked_key(hdev,
 				       HCI_BLOCKED_KEY_TYPE_IRK,
@@ -7139,8 +7141,12 @@ static int load_irks(struct sock *sk, struct hci_dev *hdev, void *cp_data,
 			continue;
 		}
 
+		/* When using SMP over BR/EDR, the addr type should be set to BREDR */
+		if (irk->addr.type == BDADDR_BREDR)
+			addr_type = BDADDR_BREDR;
+
 		hci_add_irk(hdev, &irk->addr.bdaddr,
-			    le_addr_type(irk->addr.type), irk->val,
+			    addr_type, irk->val,
 			    BDADDR_ANY);
 	}
 
@@ -7221,6 +7227,7 @@ static int load_long_term_keys(struct sock *sk, struct hci_dev *hdev,
 	for (i = 0; i < key_count; i++) {
 		struct mgmt_ltk_info *key = &cp->keys[i];
 		u8 type, authenticated;
+		u8 addr_type = le_addr_type(key->addr.type);
 
 		if (hci_is_blocked_key(hdev,
 				       HCI_BLOCKED_KEY_TYPE_LTK,
@@ -7255,8 +7262,12 @@ static int load_long_term_keys(struct sock *sk, struct hci_dev *hdev,
 			continue;
 		}
 
+		/* When using SMP over BR/EDR, the addr type should be set to BREDR */
+		if (key->addr.type == BDADDR_BREDR)
+			addr_type = BDADDR_BREDR;
+
 		hci_add_ltk(hdev, &key->addr.bdaddr,
-			    le_addr_type(key->addr.type), type, authenticated,
+			    addr_type, type, authenticated,
 			    key->val, key->enc_size, key->ediv, key->rand);
 	}
 
@@ -9523,7 +9534,7 @@ void mgmt_new_link_key(struct hci_dev *hdev, struct link_key *key,
 
 	ev.store_hint = persistent;
 	bacpy(&ev.key.addr.bdaddr, &key->bdaddr);
-	ev.key.addr.type = BDADDR_BREDR;
+	ev.key.addr.type = link_to_bdaddr(key->link_type, key->bdaddr_type);
 	ev.key.type = key->type;
 	memcpy(ev.key.val, key->val, HCI_LINK_KEY_SIZE);
 	ev.key.pin_len = key->pin_len;
@@ -9574,7 +9585,7 @@ void mgmt_new_ltk(struct hci_dev *hdev, struct smp_ltk *key, bool persistent)
 	ev.store_hint = persistent;
 
 	bacpy(&ev.key.addr.bdaddr, &key->bdaddr);
-	ev.key.addr.type = link_to_bdaddr(LE_LINK, key->bdaddr_type);
+	ev.key.addr.type = link_to_bdaddr(key->link_type, key->bdaddr_type);
 	ev.key.type = mgmt_ltk_type(key);
 	ev.key.enc_size = key->enc_size;
 	ev.key.ediv = key->ediv;
@@ -9603,7 +9614,7 @@ void mgmt_new_irk(struct hci_dev *hdev, struct smp_irk *irk, bool persistent)
 
 	bacpy(&ev.rpa, &irk->rpa);
 	bacpy(&ev.irk.addr.bdaddr, &irk->bdaddr);
-	ev.irk.addr.type = link_to_bdaddr(LE_LINK, irk->addr_type);
+	ev.irk.addr.type = link_to_bdaddr(irk->link_type, irk->addr_type);
 	memcpy(ev.irk.val, irk->val, sizeof(irk->val));
 
 	mgmt_event(MGMT_EV_NEW_IRK, hdev, &ev, sizeof(ev), NULL);
@@ -9632,7 +9643,7 @@ void mgmt_new_csrk(struct hci_dev *hdev, struct smp_csrk *csrk,
 	ev.store_hint = persistent;
 
 	bacpy(&ev.key.addr.bdaddr, &csrk->bdaddr);
-	ev.key.addr.type = link_to_bdaddr(LE_LINK, csrk->bdaddr_type);
+	ev.key.addr.type = link_to_bdaddr(csrk->link_type, csrk->bdaddr_type);
 	ev.key.type = csrk->type;
 	memcpy(ev.key.val, csrk->val, sizeof(csrk->val));
--- a/net/bluetooth/smp.c
+++ b/net/bluetooth/smp.c
@@ -1059,6 +1059,7 @@ static void smp_notify_keys(struct l2cap_conn *conn)
 	}
 
 	if (smp->remote_irk) {
+		smp->remote_irk->link_type = hcon->type;
 		mgmt_new_irk(hdev, smp->remote_irk, persistent);
 
 		/* Now that user space can be considered to know the
@@ -1078,24 +1079,28 @@ static void smp_notify_keys(struct l2cap_conn *conn)
 	}
 
 	if (smp->csrk) {
+		smp->csrk->link_type = hcon->type;
 		smp->csrk->bdaddr_type = hcon->dst_type;
 		bacpy(&smp->csrk->bdaddr, &hcon->dst);
 		mgmt_new_csrk(hdev, smp->csrk, persistent);
 	}
 
 	if (smp->responder_csrk) {
+		smp->responder_csrk->link_type = hcon->type;
 		smp->responder_csrk->bdaddr_type = hcon->dst_type;
 		bacpy(&smp->responder_csrk->bdaddr, &hcon->dst);
 		mgmt_new_csrk(hdev, smp->responder_csrk, persistent);
 	}
 
 	if (smp->ltk) {
+		smp->ltk->link_type = hcon->type;
 		smp->ltk->bdaddr_type = hcon->dst_type;
 		bacpy(&smp->ltk->bdaddr, &hcon->dst);
 		mgmt_new_ltk(hdev, smp->ltk, persistent);
 	}
 
 	if (smp->responder_ltk) {
+		smp->responder_ltk->link_type = hcon->type;
 		smp->responder_ltk->bdaddr_type = hcon->dst_type;
 		bacpy(&smp->responder_ltk->bdaddr, &hcon->dst);
 		mgmt_new_ltk(hdev, smp->responder_ltk, persistent);
@@ -1115,6 +1120,8 @@ static void smp_notify_keys(struct l2cap_conn *conn)
 		key = hci_add_link_key(hdev, smp->conn->hcon, &hcon->dst,
 				       smp->link_key, type, 0, &persistent);
 		if (key) {
+			key->link_type = hcon->type;
+			key->bdaddr_type = hcon->dst_type;
 			mgmt_new_link_key(hdev, key, persistent);
 
 			/* Don't keep debug keys around if the relevant
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -3472,6 +3472,9 @@ static netdev_features_t gso_features_check(const struct sk_buff *skb,
 	if (gso_segs > READ_ONCE(dev->gso_max_segs))
 		return features & ~NETIF_F_GSO_MASK;
 
+	if (unlikely(skb->len >= READ_ONCE(dev->gso_max_size)))
+		return features & ~NETIF_F_GSO_MASK;
+
 	if (!skb_shinfo(skb)->gso_type) {
 		skb_warn_bad_offload(skb);
 		return features & ~NETIF_F_GSO_MASK;
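The added check mirrors the existing gso_max_segs test: an oversized packet simply has every GSO bit masked out of its feature set, forcing software segmentation. A tiny compilable sketch of that masking idiom, with made-up feature values rather than the kernel's netdev_features_t:

#include <stdint.h>
#include <stdio.h>

#define NETIF_F_SG       (1u << 0)   /* illustrative bit assignments */
#define NETIF_F_GSO_BIT0 (1u << 1)
#define NETIF_F_GSO_BIT1 (1u << 2)
#define NETIF_F_GSO_MASK (NETIF_F_GSO_BIT0 | NETIF_F_GSO_BIT1)

int main(void)
{
    uint32_t features = NETIF_F_SG | NETIF_F_GSO_MASK;
    uint32_t len = 70000, gso_max_size = 65536;

    if (len >= gso_max_size)            /* the new check in this hunk */
        features &= ~NETIF_F_GSO_MASK;  /* drop all GSO bits, keep SG */

    printf("features: %#x (SG kept: %d)\n", features,
           !!(features & NETIF_F_SG));
    return 0;
}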
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -4825,7 +4825,9 @@ static __always_inline unsigned int skb_ext_total_length(void)
 static void skb_extensions_init(void)
 {
 	BUILD_BUG_ON(SKB_EXT_NUM >= 8);
+#if !IS_ENABLED(CONFIG_KCOV_INSTRUMENT_ALL)
 	BUILD_BUG_ON(skb_ext_total_length() > 255);
+#endif
 
 	skbuff_ext_cache = kmem_cache_create("skbuff_ext_cache",
 					     SKB_EXT_ALIGN_VALUE * skb_ext_total_length(),
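BUILD_BUG_ON() turns a compile-time-constant condition into a build failure; the hunk guards the length check because, with KCOV instrumentation enabled, skb_ext_total_length() is reportedly no longer folded to a constant. A userspace analogue of the compile-time assert idiom (C11 _Static_assert, illustrative value):

#include <stdio.h>

#define SKB_EXT_NUM 5    /* illustrative value, not the kernel's */

/* fails the build, not the run, if the condition is violated */
_Static_assert(SKB_EXT_NUM < 8, "too many skb extensions for the bitmap");

int main(void)
{
    printf("SKB_EXT_NUM = %d fits the 8-slot bitmap\n", SKB_EXT_NUM);
    return 0;
}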
--- a/net/core/sock_map.c
+++ b/net/core/sock_map.c
@@ -536,6 +536,8 @@ static bool sock_map_sk_state_allowed(const struct sock *sk)
 {
 	if (sk_is_tcp(sk))
 		return (1 << sk->sk_state) & (TCPF_ESTABLISHED | TCPF_LISTEN);
+	if (sk_is_stream_unix(sk))
+		return (1 << sk->sk_state) & TCPF_ESTABLISHED;
 	return true;
 }
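The new branch reuses the (1 << sk_state) bitmask trick: each TCPF_* constant is one bit per state, so membership in an allowed set costs a single AND. A self-contained illustration (the state values match the kernel's enum; everything else is illustrative):

#include <stdio.h>

enum { TCP_ESTABLISHED = 1, TCP_SYN_SENT = 2, TCP_LISTEN = 10 };
#define TCPF_ESTABLISHED (1 << TCP_ESTABLISHED)
#define TCPF_LISTEN      (1 << TCP_LISTEN)

static int state_allowed(int sk_state)
{
    return (1 << sk_state) & (TCPF_ESTABLISHED | TCPF_LISTEN);
}

int main(void)
{
    printf("ESTABLISHED allowed: %d\n", !!state_allowed(TCP_ESTABLISHED));
    printf("LISTEN allowed:      %d\n", !!state_allowed(TCP_LISTEN));
    printf("SYN_SENT allowed:    %d\n", !!state_allowed(TCP_SYN_SENT));
    return 0;
}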
--- a/net/core/stream.c
+++ b/net/core/stream.c
@@ -79,7 +79,7 @@ int sk_stream_wait_connect(struct sock *sk, long *timeo_p)
 		remove_wait_queue(sk_sleep(sk), &wait);
 		sk->sk_write_pending--;
 	} while (!done);
-	return 0;
+	return done < 0 ? done : 0;
 }
 EXPORT_SYMBOL(sk_stream_wait_connect);
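The one-line change makes the wait loop propagate a negative result from sk_wait_event() instead of collapsing it to success. A compilable model of that return convention (the simulated result stands in for sk_wait_event()):

#include <errno.h>
#include <stdio.h>

static int wait_connect(int simulated_wait_result)
{
    int done = 0;

    do {
        done = simulated_wait_result;  /* stands in for sk_wait_event() */
    } while (!done);

    return done < 0 ? done : 0;        /* the fix: keep the error */
}

int main(void)
{
    printf("connected: %d\n", wait_connect(1));       /* 0: success */
    printf("failed:    %d\n", wait_connect(-EPIPE));  /* -EPIPE propagated */
    return 0;
}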
--- a/net/ife/ife.c
+++ b/net/ife/ife.c
@@ -82,6 +82,7 @@ void *ife_decode(struct sk_buff *skb, u16 *metalen)
 	if (unlikely(!pskb_may_pull(skb, total_pull)))
 		return NULL;
 
 	ifehdr = (struct ifeheadr *)(skb->data + skb->dev->hard_header_len);
+	skb_set_mac_header(skb, total_pull);
 	__skb_pull(skb, total_pull);
 	*metalen = ifehdrln - IFE_METAHDRLEN;
--- a/net/ipv6/ip6_fib.c
+++ b/net/ipv6/ip6_fib.c
@@ -160,8 +160,6 @@ struct fib6_info *fib6_info_alloc(gfp_t gfp_flags, bool with_fib6_nh)
 	INIT_LIST_HEAD(&f6i->fib6_siblings);
 	refcount_set(&f6i->fib6_ref, 1);
 
-	INIT_HLIST_NODE(&f6i->gc_link);
-
 	return f6i;
 }
 
@@ -248,7 +246,6 @@ static struct fib6_table *fib6_alloc_table(struct net *net, u32 id)
 					   net->ipv6.fib6_null_entry);
 		table->tb6_root.fn_flags = RTN_ROOT | RTN_TL_ROOT | RTN_RTINFO;
 		inet_peer_base_init(&table->tb6_peers);
-		INIT_HLIST_HEAD(&table->tb6_gc_hlist);
 	}
 
 	return table;
@@ -1060,8 +1057,6 @@ static void fib6_purge_rt(struct fib6_info *rt, struct fib6_node *fn,
 					   lockdep_is_held(&table->tb6_lock));
 		}
 	}
-
-	fib6_clean_expires_locked(rt);
 }
 
 /*
@@ -1123,10 +1118,9 @@ static int fib6_add_rt2node(struct fib6_node *fn, struct fib6_info *rt,
 			if (!(iter->fib6_flags & RTF_EXPIRES))
 				return -EEXIST;
 			if (!(rt->fib6_flags & RTF_EXPIRES))
-				fib6_clean_expires_locked(iter);
+				fib6_clean_expires(iter);
 			else
-				fib6_set_expires_locked(iter,
-							rt->expires);
+				fib6_set_expires(iter, rt->expires);
 
 			if (rt->fib6_pmtu)
 				fib6_metric_set(iter, RTAX_MTU,
@@ -1485,10 +1479,6 @@ int fib6_add(struct fib6_node *root, struct fib6_info *rt,
 		if (rt->nh)
 			list_add(&rt->nh_list, &rt->nh->f6i_list);
 		__fib6_update_sernum_upto_root(rt, fib6_new_sernum(info->nl_net));
-
-		if (fib6_has_expires(rt))
-			hlist_add_head(&rt->gc_link, &table->tb6_gc_hlist);
-
 		fib6_start_gc(info->nl_net, rt);
 	}
 
@@ -2291,8 +2281,9 @@ static void fib6_flush_trees(struct net *net)
  *	Garbage collection
  */
 
-static int fib6_age(struct fib6_info *rt, struct fib6_gc_args *gc_args)
+static int fib6_age(struct fib6_info *rt, void *arg)
 {
+	struct fib6_gc_args *gc_args = arg;
 	unsigned long now = jiffies;
 
 	/*
@@ -2300,7 +2291,7 @@ static int fib6_age(struct fib6_info *rt, void *arg)
 	 *	Routes are expired even if they are in use.
 	 */
 
-	if (fib6_has_expires(rt) && rt->expires) {
+	if (rt->fib6_flags & RTF_EXPIRES && rt->expires) {
 		if (time_after(now, rt->expires)) {
 			RT6_TRACE("expiring %p\n", rt);
 			return -1;
@@ -2317,40 +2308,6 @@
 	return 0;
 }
 
-static void fib6_gc_table(struct net *net,
-			  struct fib6_table *tb6,
-			  struct fib6_gc_args *gc_args)
-{
-	struct fib6_info *rt;
-	struct hlist_node *n;
-	struct nl_info info = {
-		.nl_net = net,
-		.skip_notify = false,
-	};
-
-	hlist_for_each_entry_safe(rt, n, &tb6->tb6_gc_hlist, gc_link)
-		if (fib6_age(rt, gc_args) == -1)
-			fib6_del(rt, &info);
-}
-
-static void fib6_gc_all(struct net *net, struct fib6_gc_args *gc_args)
-{
-	struct fib6_table *table;
-	struct hlist_head *head;
-	unsigned int h;
-
-	rcu_read_lock();
-	for (h = 0; h < FIB6_TABLE_HASHSZ; h++) {
-		head = &net->ipv6.fib_table_hash[h];
-		hlist_for_each_entry_rcu(table, head, tb6_hlist) {
-			spin_lock_bh(&table->tb6_lock);
-			fib6_gc_table(net, table, gc_args);
-			spin_unlock_bh(&table->tb6_lock);
-		}
-	}
-	rcu_read_unlock();
-}
-
 void fib6_run_gc(unsigned long expires, struct net *net, bool force)
 {
 	struct fib6_gc_args gc_args;
@@ -2366,7 +2323,7 @@ void fib6_run_gc(unsigned long expires, struct net *net, bool force)
 			net->ipv6.sysctl.ip6_rt_gc_interval;
 	gc_args.more = 0;
 
-	fib6_gc_all(net, &gc_args);
+	fib6_clean_all(net, fib6_age, &gc_args);
 	now = jiffies;
 	net->ipv6.ip6_rt_last_gc = now;
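The revert returns garbage collection to the fib6_clean_all() walker, which visits every route and hands an opaque void *arg to the callback; a return of -1 asks the walker to delete the entry. A userspace sketch of that callback contract (names and types are illustrative, not the kernel API):

#include <stdio.h>

struct route { int expired; };

static int age_cb(struct route *rt, void *arg)   /* models fib6_age() */
{
    int *deleted = arg;

    if (rt->expired) {
        (*deleted)++;
        return -1;               /* tell the walker to delete this entry */
    }
    return 0;
}

static void clean_all(struct route *tbl, int n,
                      int (*fn)(struct route *, void *), void *arg)
{
    for (int i = 0; i < n; i++)
        if (fn(&tbl[i], arg) == -1)
            tbl[i] = (struct route){0};   /* models fib6_del() */
}

int main(void)
{
    struct route tbl[] = { {0}, {1}, {1} };
    int deleted = 0;

    clean_all(tbl, 3, age_cb, &deleted);
    printf("deleted %d expired routes\n", deleted);
    return 0;
}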
--- a/net/ipv6/route.c
+++ b/net/ipv6/route.c
@@ -3763,10 +3763,10 @@ static struct fib6_info *ip6_route_info_create(struct fib6_config *cfg,
 		rt->dst_nocount = true;
 
 	if (cfg->fc_flags & RTF_EXPIRES)
-		fib6_set_expires_locked(rt, jiffies +
-					clock_t_to_jiffies(cfg->fc_expires));
+		fib6_set_expires(rt, jiffies +
+				 clock_t_to_jiffies(cfg->fc_expires));
 	else
-		fib6_clean_expires_locked(rt);
+		fib6_clean_expires(rt);
 
 	if (cfg->fc_protocol == RTPROT_UNSPEC)
 		cfg->fc_protocol = RTPROT_BOOT;
--- a/net/mac80211/cfg.c
+++ b/net/mac80211/cfg.c
@@ -1788,10 +1788,10 @@ static int sta_link_apply_parameters(struct ieee80211_local *local,
 				    lockdep_is_held(&local->hw.wiphy->mtx));
 
 	/*
-	 * If there are no changes, then accept a link that doesn't exist,
+	 * If there are no changes, then accept a link that exist,
 	 * unless it's a new link.
 	 */
-	if (params->link_id < 0 && !new_link &&
+	if (params->link_id >= 0 && !new_link &&
 	    !params->link_mac && !params->txpwr_set &&
 	    !params->supported_rates_len &&
 	    !params->ht_capa && !params->vht_capa &&
--- a/net/mac80211/driver-ops.c
+++ b/net/mac80211/driver-ops.c
@@ -1,7 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0-only
 /*
  * Copyright 2015 Intel Deutschland GmbH
- * Copyright (C) 2022 Intel Corporation
+ * Copyright (C) 2022-2023 Intel Corporation
  */
 #include <net/mac80211.h>
 #include "ieee80211_i.h"
@@ -589,6 +589,10 @@ int drv_change_sta_links(struct ieee80211_local *local,
 	if (ret)
 		return ret;
 
+	/* during reconfig don't add it to debugfs again */
+	if (local->in_reconfig)
+		return 0;
+
 	for_each_set_bit(link_id, &links_to_add, IEEE80211_MLD_MAX_NUM_LINKS) {
 		link_sta = rcu_dereference_protected(info->link[link_id],
 						     lockdep_is_held(&local->hw.wiphy->mtx));
--- a/net/mac80211/mesh_plink.c
+++ b/net/mac80211/mesh_plink.c
@@ -1068,8 +1068,8 @@ mesh_plink_get_event(struct ieee80211_sub_if_data *sdata,
 	case WLAN_SP_MESH_PEERING_OPEN:
 		if (!matches_local)
 			event = OPN_RJCT;
-		if (!mesh_plink_free_count(sdata) ||
-		    (sta->mesh->plid && sta->mesh->plid != plid))
+		else if (!mesh_plink_free_count(sdata) ||
+			 (sta->mesh->plid && sta->mesh->plid != plid))
 			event = OPN_IGNR;
 		else
 			event = OPN_ACPT;
@@ -1077,9 +1077,9 @@ mesh_plink_get_event(struct ieee80211_sub_if_data *sdata,
 	case WLAN_SP_MESH_PEERING_CONFIRM:
 		if (!matches_local)
 			event = CNF_RJCT;
-		if (!mesh_plink_free_count(sdata) ||
-		    sta->mesh->llid != llid ||
-		    (sta->mesh->plid && sta->mesh->plid != plid))
+		else if (!mesh_plink_free_count(sdata) ||
+			 sta->mesh->llid != llid ||
+			 (sta->mesh->plid && sta->mesh->plid != plid))
 			event = CNF_IGNR;
 		else
 			event = CNF_ACPT;
@@ -1247,6 +1247,8 @@ void mesh_rx_plink_frame(struct ieee80211_sub_if_data *sdata,
 		return;
 	}
 	elems = ieee802_11_parse_elems(baseaddr, len - baselen, true, NULL);
-	mesh_process_plink_frame(sdata, mgmt, elems, rx_status);
-	kfree(elems);
+	if (elems) {
+		mesh_process_plink_frame(sdata, mgmt, elems, rx_status);
+		kfree(elems);
+	}
 }
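The first two hunks above are a classic dangling-branch fix: without else, the follow-up condition could overwrite the OPN_RJCT/CNF_RJCT just assigned for a peer that failed to match. A minimal demonstration of the bug class (enum values illustrative):

#include <stdio.h>

enum { ACPT, RJCT, IGNR };

static int classify(int matches_local, int free_count)
{
    int event = ACPT;

    if (!matches_local)
        event = RJCT;
    else if (!free_count)   /* the fix: chain with else, or RJCT is lost */
        event = IGNR;
    return event;
}

int main(void)
{
    /* mismatching peer with no free slots: must reject, not ignore */
    printf("event = %d (expect %d)\n", classify(0, 0), RJCT);
    return 0;
}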
--- a/net/mac80211/mlme.c
+++ b/net/mac80211/mlme.c
@@ -5782,7 +5782,7 @@ static void ieee80211_ml_reconfiguration(struct ieee80211_sub_if_data *sdata,
 {
 	const struct ieee80211_multi_link_elem *ml;
 	const struct element *sub;
-	size_t ml_len;
+	ssize_t ml_len;
 	unsigned long removed_links = 0;
 	u16 link_removal_timeout[IEEE80211_MLD_MAX_NUM_LINKS] = {};
 	u8 link_id;
@@ -5798,6 +5798,8 @@ static void ieee80211_ml_reconfiguration(struct ieee80211_sub_if_data *sdata,
 					     elems->scratch + elems->scratch_len -
 					     elems->scratch_pos,
 					     WLAN_EID_FRAGMENT);
+	if (ml_len < 0)
+		return;
 
 	elems->ml_reconf = (const void *)elems->scratch_pos;
 	elems->ml_reconf_len = ml_len;
--- a/net/mptcp/crypto_test.c
+++ b/net/mptcp/crypto_test.c
@@ -70,3 +70,4 @@ static struct kunit_suite mptcp_crypto_suite = {
 kunit_test_suite(mptcp_crypto_suite);
 
 MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("KUnit tests for MPTCP Crypto");
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -3402,12 +3402,12 @@ static void mptcp_release_cb(struct sock *sk)
 		if (__test_and_clear_bit(MPTCP_CLEAN_UNA, &msk->cb_flags))
 			__mptcp_clean_una_wakeup(sk);
 		if (unlikely(msk->cb_flags)) {
-			/* be sure to set the current sk state before taking actions
+			/* be sure to sync the msk state before taking actions
 			 * depending on sk_state (MPTCP_ERROR_REPORT)
 			 * On sk release avoid actions depending on the first subflow
 			 */
-			if (__test_and_clear_bit(MPTCP_CONNECTED, &msk->cb_flags) && msk->first)
-				__mptcp_set_connected(sk);
+			if (__test_and_clear_bit(MPTCP_SYNC_STATE, &msk->cb_flags) && msk->first)
+				__mptcp_sync_state(sk, msk->pending_state);
 			if (__test_and_clear_bit(MPTCP_ERROR_REPORT, &msk->cb_flags))
 				__mptcp_error_report(sk);
 			if (__test_and_clear_bit(MPTCP_SYNC_SNDBUF, &msk->cb_flags))
--- a/net/mptcp/protocol.h
+++ b/net/mptcp/protocol.h
@@ -124,7 +124,7 @@
 #define MPTCP_ERROR_REPORT	3
 #define MPTCP_RETRANSMIT	4
 #define MPTCP_FLUSH_JOIN_LIST	5
-#define MPTCP_CONNECTED		6
+#define MPTCP_SYNC_STATE	6
 #define MPTCP_SYNC_SNDBUF	7
 
 struct mptcp_skb_cb {
@@ -296,6 +296,9 @@ struct mptcp_sock {
 	bool		use_64bit_ack; /* Set when we received a 64-bit DSN */
 	bool		csum_enabled;
 	bool		allow_infinite_fallback;
+	u8		pending_state; /* A subflow asked to set this sk_state,
+					* protected by the msk data lock
+					*/
 	u8		mpc_endpoint_id;
 	u8		recvmsg_inq:1,
 			cork:1,
@@ -728,7 +731,7 @@ void mptcp_get_options(const struct sk_buff *skb,
 		       struct mptcp_options_received *mp_opt);
 
 void mptcp_finish_connect(struct sock *sk);
-void __mptcp_set_connected(struct sock *sk);
+void __mptcp_sync_state(struct sock *sk, int state);
 void mptcp_reset_tout_timer(struct mptcp_sock *msk, unsigned long fail_tout);
 
 static inline void mptcp_stop_tout_timer(struct sock *sk)
@@ -1115,7 +1118,7 @@ static inline bool subflow_simultaneous_connect(struct sock *sk)
 {
 	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk);
 
-	return sk->sk_state == TCP_ESTABLISHED &&
+	return (1 << sk->sk_state) & (TCPF_ESTABLISHED | TCPF_FIN_WAIT1) &&
 	       is_active_ssk(subflow) &&
 	       !subflow->conn_finished;
 }
--- a/net/mptcp/subflow.c
+++ b/net/mptcp/subflow.c
@@ -419,22 +419,28 @@ static bool subflow_use_different_dport(struct mptcp_sock *msk, const struct soc
 	return inet_sk(sk)->inet_dport != inet_sk((struct sock *)msk)->inet_dport;
 }
 
-void __mptcp_set_connected(struct sock *sk)
+void __mptcp_sync_state(struct sock *sk, int state)
 {
-	__mptcp_propagate_sndbuf(sk, mptcp_sk(sk)->first);
+	struct mptcp_sock *msk = mptcp_sk(sk);
+
+	__mptcp_propagate_sndbuf(sk, msk->first);
 	if (sk->sk_state == TCP_SYN_SENT) {
-		inet_sk_state_store(sk, TCP_ESTABLISHED);
+		inet_sk_state_store(sk, state);
 		sk->sk_state_change(sk);
 	}
 }
 
-static void mptcp_set_connected(struct sock *sk)
+static void mptcp_propagate_state(struct sock *sk, struct sock *ssk)
 {
+	struct mptcp_sock *msk = mptcp_sk(sk);
+
 	mptcp_data_lock(sk);
-	if (!sock_owned_by_user(sk))
-		__mptcp_set_connected(sk);
-	else
-		__set_bit(MPTCP_CONNECTED, &mptcp_sk(sk)->cb_flags);
+	if (!sock_owned_by_user(sk)) {
+		__mptcp_sync_state(sk, ssk->sk_state);
+	} else {
+		msk->pending_state = ssk->sk_state;
+		__set_bit(MPTCP_SYNC_STATE, &msk->cb_flags);
+	}
 	mptcp_data_unlock(sk);
 }
 
@@ -496,7 +502,7 @@ static void subflow_finish_connect(struct sock *sk, const struct sk_buff *skb)
 		subflow_set_remote_key(msk, subflow, &mp_opt);
 		MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_MPCAPABLEACTIVEACK);
 		mptcp_finish_connect(sk);
-		mptcp_set_connected(parent);
+		mptcp_propagate_state(parent, sk);
 	} else if (subflow->request_join) {
 		u8 hmac[SHA256_DIGEST_SIZE];
 
@@ -540,7 +546,7 @@ static void subflow_finish_connect(struct sock *sk, const struct sk_buff *skb)
 	} else if (mptcp_check_fallback(sk)) {
 fallback:
 		mptcp_rcv_space_init(msk, sk);
-		mptcp_set_connected(parent);
+		mptcp_propagate_state(parent, sk);
 	}
 	return;
 
@@ -1740,7 +1746,7 @@ static void subflow_state_change(struct sock *sk)
 		mptcp_rcv_space_init(msk, sk);
 		pr_fallback(msk);
 		subflow->conn_finished = 1;
-		mptcp_set_connected(parent);
+		mptcp_propagate_state(parent, sk);
 	}
 
 	/* as recvmsg() does not acquire the subflow socket for ssk selection
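Across protocol.c, protocol.h and this file the pattern is: if the msk is owned by a user context, record the subflow's state plus the MPTCP_SYNC_STATE flag and let mptcp_release_cb() replay it once the lock drops; otherwise sync immediately. A compilable userspace model of that defer-under-ownership pattern (plain struct and ints in place of the real socket types):

#include <stdio.h>

struct msk {
    int owned_by_user;
    int sk_state;
    int pending_state;
    unsigned long cb_flags;
};
#define MPTCP_SYNC_STATE (1u << 6)

static void sync_state(struct msk *m, int state) { m->sk_state = state; }

static void propagate_state(struct msk *m, int ssk_state)
{
    if (!m->owned_by_user) {
        sync_state(m, ssk_state);        /* apply immediately */
    } else {
        m->pending_state = ssk_state;    /* defer to release_cb */
        m->cb_flags |= MPTCP_SYNC_STATE;
    }
}

static void release_cb(struct msk *m)    /* runs when the lock drops */
{
    if (m->cb_flags & MPTCP_SYNC_STATE) {
        m->cb_flags &= ~MPTCP_SYNC_STATE;
        sync_state(m, m->pending_state);
    }
}

int main(void)
{
    struct msk m = { .owned_by_user = 1 };

    propagate_state(&m, 4 /* TCP_FIN_WAIT1 */);
    release_cb(&m);
    printf("sk_state = %d\n", m.sk_state);
    return 0;
}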
--- a/net/mptcp/token_test.c
+++ b/net/mptcp/token_test.c
@@ -143,3 +143,4 @@ static struct kunit_suite mptcp_token_suite = {
 kunit_test_suite(mptcp_token_suite);
 
 MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("KUnit tests for MPTCP Token");
--- a/net/rfkill/rfkill-gpio.c
+++ b/net/rfkill/rfkill-gpio.c
@@ -126,6 +126,14 @@ static int rfkill_gpio_probe(struct platform_device *pdev)
 		return -EINVAL;
 	}
 
+	ret = gpiod_direction_output(rfkill->reset_gpio, true);
+	if (ret)
+		return ret;
+
+	ret = gpiod_direction_output(rfkill->shutdown_gpio, true);
+	if (ret)
+		return ret;
+
 	rfkill->rfkill_dev = rfkill_alloc(rfkill->name, &pdev->dev,
 					  rfkill->type, &rfkill_gpio_ops,
 					  rfkill);
--- a/net/rose/af_rose.c
+++ b/net/rose/af_rose.c
@@ -182,21 +182,47 @@ void rose_kill_by_neigh(struct rose_neigh *neigh)
  */
 static void rose_kill_by_device(struct net_device *dev)
 {
-	struct sock *s;
+	struct sock *sk, *array[16];
+	struct rose_sock *rose;
+	bool rescan;
+	int i, cnt;
 
+start:
+	rescan = false;
+	cnt = 0;
 	spin_lock_bh(&rose_list_lock);
-	sk_for_each(s, &rose_list) {
-		struct rose_sock *rose = rose_sk(s);
-
+	sk_for_each(sk, &rose_list) {
+		rose = rose_sk(sk);
 		if (rose->device == dev) {
-			rose_disconnect(s, ENETUNREACH, ROSE_OUT_OF_ORDER, 0);
+			if (cnt == ARRAY_SIZE(array)) {
+				rescan = true;
+				break;
+			}
+			sock_hold(sk);
+			array[cnt++] = sk;
+		}
+	}
+	spin_unlock_bh(&rose_list_lock);
+
+	for (i = 0; i < cnt; i++) {
+		sk = array[cnt];
+		rose = rose_sk(sk);
+		lock_sock(sk);
+		spin_lock_bh(&rose_list_lock);
+		if (rose->device == dev) {
+			rose_disconnect(sk, ENETUNREACH, ROSE_OUT_OF_ORDER, 0);
 			if (rose->neighbour)
 				rose->neighbour->use--;
 			netdev_put(rose->device, &rose->dev_tracker);
 			rose->device = NULL;
 		}
+		spin_unlock_bh(&rose_list_lock);
+		release_sock(sk);
+		sock_put(sk);
+		cond_resched();
 	}
-	spin_unlock_bh(&rose_list_lock);
+	if (rescan)
+		goto start;
 }
 
 /*
@@ -656,7 +682,10 @@ static int rose_release(struct socket *sock)
 		break;
 	}
 
+	spin_lock_bh(&rose_list_lock);
 	netdev_put(rose->device, &rose->dev_tracker);
 	rose->device = NULL;
+	spin_unlock_bh(&rose_list_lock);
+
 	sock->sk = NULL;
 	release_sock(sk);
 	sock_put(sk);
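The rewritten rose_kill_by_device() gathers matching sockets under the list spinlock into a bounded array, then disconnects them outside it, where lock_sock() may sleep; an overflowing batch triggers a rescan. A runnable model of that collect-then-process shape (the kernel rescans from the list head, since handled sockets stop matching; this sketch advances an index instead, and all names are illustrative):

#include <stdio.h>

#define BATCH 16

static int list[40];           /* stands in for rose_list */
static const int nitems = 40;

int main(void)
{
    int array[BATCH], cnt, i, start = 0, rescan, total = 0;

    do {
        rescan = 0;
        cnt = 0;
        /* spin_lock_bh(&rose_list_lock): no sleeping allowed here */
        for (i = start; i < nitems; i++) {
            if (cnt == BATCH) {
                rescan = 1;    /* batch full: come back for the rest */
                break;
            }
            array[cnt++] = list[i];   /* sock_hold() */
        }
        start = i;
        /* spin_unlock_bh(&rose_list_lock) */

        for (i = 0; i < cnt; i++)
            total++;           /* lock_sock(), disconnect, sock_put() */
    } while (rescan);

    printf("processed %d sockets in batches of %d\n", total, BATCH);
    return 0;
}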
--- /dev/null
+++ b/net/wireless/certs/wens.hex (new file)
@@ -0,0 +1,87 @@
+/* Chen-Yu Tsai's regdb certificate */
+0x30, 0x82, 0x02, 0xa7, 0x30, 0x82, 0x01, 0x8f,
+0x02, 0x14, 0x61, 0xc0, 0x38, 0x65, 0x1a, 0xab,
+0xdc, 0xf9, 0x4b, 0xd0, 0xac, 0x7f, 0xf0, 0x6c,
+0x72, 0x48, 0xdb, 0x18, 0xc6, 0x00, 0x30, 0x0d,
+0x06, 0x09, 0x2a, 0x86, 0x48, 0x86, 0xf7, 0x0d,
+0x01, 0x01, 0x0b, 0x05, 0x00, 0x30, 0x0f, 0x31,
+0x0d, 0x30, 0x0b, 0x06, 0x03, 0x55, 0x04, 0x03,
+0x0c, 0x04, 0x77, 0x65, 0x6e, 0x73, 0x30, 0x20,
+0x17, 0x0d, 0x32, 0x33, 0x31, 0x32, 0x30, 0x31,
+0x30, 0x37, 0x34, 0x31, 0x31, 0x34, 0x5a, 0x18,
+0x0f, 0x32, 0x31, 0x32, 0x33, 0x31, 0x31, 0x30,
+0x37, 0x30, 0x37, 0x34, 0x31, 0x31, 0x34, 0x5a,
+0x30, 0x0f, 0x31, 0x0d, 0x30, 0x0b, 0x06, 0x03,
+0x55, 0x04, 0x03, 0x0c, 0x04, 0x77, 0x65, 0x6e,
+0x73, 0x30, 0x82, 0x01, 0x22, 0x30, 0x0d, 0x06,
+0x09, 0x2a, 0x86, 0x48, 0x86, 0xf7, 0x0d, 0x01,
+0x01, 0x01, 0x05, 0x00, 0x03, 0x82, 0x01, 0x0f,
+0x00, 0x30, 0x82, 0x01, 0x0a, 0x02, 0x82, 0x01,
+0x01, 0x00, 0xa9, 0x7a, 0x2c, 0x78, 0x4d, 0xa7,
+0x19, 0x2d, 0x32, 0x52, 0xa0, 0x2e, 0x6c, 0xef,
+0x88, 0x7f, 0x15, 0xc5, 0xb6, 0x69, 0x54, 0x16,
+0x43, 0x14, 0x79, 0x53, 0xb7, 0xae, 0x88, 0xfe,
+0xc0, 0xb7, 0x5d, 0x47, 0x8e, 0x1a, 0xe1, 0xef,
+0xb3, 0x90, 0x86, 0xda, 0xd3, 0x64, 0x81, 0x1f,
+0xce, 0x5d, 0x9e, 0x4b, 0x6e, 0x58, 0x02, 0x3e,
+0xb2, 0x6f, 0x5e, 0x42, 0x47, 0x41, 0xf4, 0x2c,
+0xb8, 0xa8, 0xd4, 0xaa, 0xc0, 0x0e, 0xe6, 0x48,
+0xf0, 0xa8, 0xce, 0xcb, 0x08, 0xae, 0x37, 0xaf,
+0xf6, 0x40, 0x39, 0xcb, 0x55, 0x6f, 0x5b, 0x4f,
+0x85, 0x34, 0xe6, 0x69, 0x10, 0x50, 0x72, 0x5e,
+0x4e, 0x9d, 0x4c, 0xba, 0x38, 0x36, 0x0d, 0xce,
+0x73, 0x38, 0xd7, 0x27, 0x02, 0x2a, 0x79, 0x03,
+0xe1, 0xac, 0xcf, 0xb0, 0x27, 0x85, 0x86, 0x93,
+0x17, 0xab, 0xec, 0x42, 0x77, 0x37, 0x65, 0x8a,
+0x44, 0xcb, 0xd6, 0x42, 0x93, 0x92, 0x13, 0xe3,
+0x39, 0x45, 0xc5, 0x6e, 0x00, 0x4a, 0x7f, 0xcb,
+0x42, 0x17, 0x2b, 0x25, 0x8c, 0xb8, 0x17, 0x3b,
+0x15, 0x36, 0x59, 0xde, 0x42, 0xce, 0x21, 0xe6,
+0xb6, 0xc7, 0x6e, 0x5e, 0x26, 0x1f, 0xf7, 0x8a,
+0x57, 0x9e, 0xa5, 0x96, 0x72, 0xb7, 0x02, 0x32,
+0xeb, 0x07, 0x2b, 0x73, 0xe2, 0x4f, 0x66, 0x58,
+0x9a, 0xeb, 0x0f, 0x07, 0xb6, 0xab, 0x50, 0x8b,
+0xc3, 0x8f, 0x17, 0xfa, 0x0a, 0x99, 0xc2, 0x16,
+0x25, 0xbf, 0x2d, 0x6b, 0x1a, 0xaa, 0xe6, 0x3e,
+0x5f, 0xeb, 0x6d, 0x9b, 0x5d, 0x4d, 0x42, 0x83,
+0x2d, 0x39, 0xb8, 0xc9, 0xac, 0xdb, 0x3a, 0x91,
+0x50, 0xdf, 0xbb, 0xb1, 0x76, 0x6d, 0x15, 0x73,
+0xfd, 0xc6, 0xe6, 0x6b, 0x71, 0x9e, 0x67, 0x36,
+0x22, 0x83, 0x79, 0xb1, 0xd6, 0xb8, 0x84, 0x52,
+0xaf, 0x96, 0x5b, 0xc3, 0x63, 0x02, 0x4e, 0x78,
+0x70, 0x57, 0x02, 0x03, 0x01, 0x00, 0x01, 0x30,
+0x0d, 0x06, 0x09, 0x2a, 0x86, 0x48, 0x86, 0xf7,
+0x0d, 0x01, 0x01, 0x0b, 0x05, 0x00, 0x03, 0x82,
+0x01, 0x01, 0x00, 0x24, 0x28, 0xee, 0x22, 0x74,
+0x7f, 0x7c, 0xfa, 0x6c, 0x1f, 0xb3, 0x18, 0xd1,
+0xc2, 0x3d, 0x7d, 0x29, 0x42, 0x88, 0xad, 0x82,
+0xa5, 0xb1, 0x8a, 0x05, 0xd0, 0xec, 0x5c, 0x91,
+0x20, 0xf6, 0x82, 0xfd, 0xd5, 0x67, 0x60, 0x5f,
+0x31, 0xf5, 0xbd, 0x88, 0x91, 0x70, 0xbd, 0xb8,
+0xb9, 0x8c, 0x88, 0xfe, 0x53, 0xc9, 0x54, 0x9b,
+0x43, 0xc4, 0x7a, 0x43, 0x74, 0x6b, 0xdd, 0xb0,
+0xb1, 0x3b, 0x33, 0x45, 0x46, 0x78, 0xa3, 0x1c,
+0xef, 0x54, 0x68, 0xf7, 0x85, 0x9c, 0xe4, 0x51,
+0x6f, 0x06, 0xaf, 0x81, 0xdb, 0x2a, 0x7b, 0x7b,
+0x6f, 0xa8, 0x9c, 0x67, 0xd8, 0xcb, 0xc9, 0x91,
+0x40, 0x00, 0xae, 0xd9, 0xa1, 0x9f, 0xdd, 0xa6,
+0x43, 0x0e, 0x28, 0x7b, 0xaa, 0x1b, 0xe9, 0x84,
+0xdb, 0x76, 0x64, 0x42, 0x70, 0xc9, 0xc0, 0xeb,
+0xae, 0x84, 0x11, 0x16, 0x68, 0x4e, 0x84, 0x9e,
+0x7e, 0x92, 0x36, 0xee, 0x1c, 0x3b, 0x08, 0x63,
+0xeb, 0x79, 0x84, 0x15, 0x08, 0x9d, 0xaf, 0xc8,
+0x9a, 0xc7, 0x34, 0xd3, 0x94, 0x4b, 0xd1, 0x28,
+0x97, 0xbe, 0xd1, 0x45, 0x75, 0xdc, 0x35, 0x62,
+0xac, 0x1d, 0x1f, 0xb7, 0xb7, 0x15, 0x87, 0xc8,
+0x98, 0xc0, 0x24, 0x31, 0x56, 0x8d, 0xed, 0xdb,
+0x06, 0xc6, 0x46, 0xbf, 0x4b, 0x6d, 0xa6, 0xd5,
+0xab, 0xcc, 0x60, 0xfc, 0xe5, 0x37, 0xb6, 0x53,
+0x7d, 0x58, 0x95, 0xa9, 0x56, 0xc7, 0xf7, 0xee,
+0xc3, 0xa0, 0x76, 0xf7, 0x65, 0x4d, 0x53, 0xfa,
+0xff, 0x5f, 0x76, 0x33, 0x5a, 0x08, 0xfa, 0x86,
+0x92, 0x5a, 0x13, 0xfa, 0x1a, 0xfc, 0xf2, 0x1b,
+0x8c, 0x7f, 0x42, 0x6d, 0xb7, 0x7e, 0xb7, 0xb4,
+0xf0, 0xc7, 0x83, 0xbb, 0xa2, 0x81, 0x03, 0x2d,
+0xd4, 0x2a, 0x63, 0x3f, 0xf7, 0x31, 0x2e, 0x40,
+0x33, 0x5c, 0x46, 0xbc, 0x9b, 0xc1, 0x05, 0xa5,
+0x45, 0x4e, 0xc3,
--- a/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c
+++ b/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c
@@ -524,6 +524,37 @@ out:
 	test_sockmap_pass_prog__destroy(pass);
 }
 
+static void test_sockmap_unconnected_unix(void)
+{
+	int err, map, stream = 0, dgram = 0, zero = 0;
+	struct test_sockmap_pass_prog *skel;
+
+	skel = test_sockmap_pass_prog__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "open_and_load"))
+		return;
+
+	map = bpf_map__fd(skel->maps.sock_map_rx);
+
+	stream = xsocket(AF_UNIX, SOCK_STREAM, 0);
+	if (stream < 0)
+		return;
+
+	dgram = xsocket(AF_UNIX, SOCK_DGRAM, 0);
+	if (dgram < 0) {
+		close(stream);
+		return;
+	}
+
+	err = bpf_map_update_elem(map, &zero, &stream, BPF_ANY);
+	ASSERT_ERR(err, "bpf_map_update_elem(stream)");
+
+	err = bpf_map_update_elem(map, &zero, &dgram, BPF_ANY);
+	ASSERT_OK(err, "bpf_map_update_elem(dgram)");
+
+	close(stream);
+	close(dgram);
+}
+
 void test_sockmap_basic(void)
 {
 	if (test__start_subtest("sockmap create_update_free"))
@@ -566,4 +597,7 @@ void test_sockmap_basic(void)
 		test_sockmap_skb_verdict_fionread(false);
 	if (test__start_subtest("sockmap skb_verdict msg_f_peek"))
 		test_sockmap_skb_verdict_peek();
+
+	if (test__start_subtest("sockmap unconnected af_unix"))
+		test_sockmap_unconnected_unix();
 }
--- a/tools/testing/selftests/net/Makefile
+++ b/tools/testing/selftests/net/Makefile
@@ -91,6 +91,7 @@ TEST_PROGS += test_bridge_neigh_suppress.sh
 TEST_PROGS += test_vxlan_nolocalbypass.sh
 TEST_PROGS += test_bridge_backup_port.sh
 TEST_PROGS += fdb_flush.sh
+TEST_PROGS += vlan_hw_filter.sh
 
 TEST_FILES := settings
--- a/tools/testing/selftests/net/mptcp/mptcp_join.sh
+++ b/tools/testing/selftests/net/mptcp/mptcp_join.sh
@@ -2776,7 +2776,7 @@ backup_tests()
 	fi
 
 	if reset "mpc backup" &&
-	   continue_if mptcp_lib_kallsyms_doesnt_have "mptcp_subflow_send_ack$"; then
+	   continue_if mptcp_lib_kallsyms_doesnt_have "T mptcp_subflow_send_ack$"; then
 		pm_nl_add_endpoint $ns2 10.0.1.2 flags subflow,backup
 		speed=slow \
 			run_tests $ns1 $ns2 10.0.1.1
@@ -2785,7 +2785,7 @@ backup_tests()
 	fi
 
 	if reset "mpc backup both sides" &&
-	   continue_if mptcp_lib_kallsyms_doesnt_have "mptcp_subflow_send_ack$"; then
+	   continue_if mptcp_lib_kallsyms_doesnt_have "T mptcp_subflow_send_ack$"; then
 		pm_nl_add_endpoint $ns1 10.0.1.1 flags subflow,backup
 		pm_nl_add_endpoint $ns2 10.0.1.2 flags subflow,backup
 		speed=slow \
@@ -2795,7 +2795,7 @@ backup_tests()
 	fi
 
 	if reset "mpc switch to backup" &&
-	   continue_if mptcp_lib_kallsyms_doesnt_have "mptcp_subflow_send_ack$"; then
+	   continue_if mptcp_lib_kallsyms_doesnt_have "T mptcp_subflow_send_ack$"; then
 		pm_nl_add_endpoint $ns2 10.0.1.2 flags subflow
 		sflags=backup speed=slow \
 			run_tests $ns1 $ns2 10.0.1.1
@@ -2804,7 +2804,7 @@ backup_tests()
 	fi
 
 	if reset "mpc switch to backup both sides" &&
-	   continue_if mptcp_lib_kallsyms_doesnt_have "mptcp_subflow_send_ack$"; then
+	   continue_if mptcp_lib_kallsyms_doesnt_have "T mptcp_subflow_send_ack$"; then
 		pm_nl_add_endpoint $ns1 10.0.1.1 flags subflow
 		pm_nl_add_endpoint $ns2 10.0.1.2 flags subflow
 		sflags=backup speed=slow \
--- a/tools/testing/selftests/net/rtnetlink.sh
+++ b/tools/testing/selftests/net/rtnetlink.sh
@@ -297,7 +297,7 @@ kci_test_addrlft()
 	done
 
 	sleep 5
-	run_cmd_grep "10.23.11." ip addr show dev "$devdummy"
+	run_cmd_grep_fail "10.23.11." ip addr show dev "$devdummy"
 	if [ $? -eq 0 ]; then
 		check_err 1
 		end_test "FAIL: preferred_lft addresses remaining"
--- /dev/null
+++ b/tools/testing/selftests/net/vlan_hw_filter.sh (new executable file)
@@ -0,0 +1,29 @@
+#!/bin/sh
+# SPDX-License-Identifier: GPL-2.0
+
+readonly NETNS="ns-$(mktemp -u XXXXXX)"
+
+ret=0
+
+cleanup() {
+	ip netns del $NETNS
+}
+
+trap cleanup EXIT
+
+fail() {
+	echo "ERROR: ${1:-unexpected return code} (ret: $_)" >&2
+	ret=1
+}
+
+ip netns add ${NETNS}
+ip netns exec ${NETNS} ip link add bond0 type bond mode 0
+ip netns exec ${NETNS} ip link add bond_slave_1 type veth peer veth2
+ip netns exec ${NETNS} ip link set bond_slave_1 master bond0
+ip netns exec ${NETNS} ethtool -K bond0 rx-vlan-filter off
+ip netns exec ${NETNS} ip link add link bond_slave_1 name bond_slave_1.0 type vlan id 0
+ip netns exec ${NETNS} ip link add link bond0 name bond0.0 type vlan id 0
+ip netns exec ${NETNS} ip link set bond_slave_1 nomaster
+ip netns exec ${NETNS} ip link del veth2 || fail "Please check vlan HW filter function"
+
+exit $ret