Networking fixes for 5.19-rc1, including fixes from bpf and netfilter.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

Merge tag 'net-5.19-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
 "Including fixes from bpf and netfilter.

  Current release - new code bugs:

   - af_packet: make sure to pull the MAC header, avoid skb panic in GSO

   - ptp_clockmatrix: fix inverted logic in is_single_shot()

   - netfilter: flowtable: fix missing FLOWI_FLAG_ANYSRC flag

   - dt-bindings: net: adin: fix adi,phy-output-clock description syntax

   - wifi: iwlwifi: pcie: rename CAUSE macro, avoid MIPS build warning

  Previous releases - regressions:

   - Revert "net: af_key: add check for pfkey_broadcast in function
     pfkey_process"

   - tcp: fix tcp_mtup_probe_success vs wrong snd_cwnd

   - nf_tables: disallow non-stateful expression in sets earlier

   - nft_limit: clone packet limits' cost value

   - nf_tables: double hook unregistration in netns path

   - ping6: fix ping -6 with interface name

  Previous releases - always broken:

   - sched: fix memory barriers to prevent skbs from getting stuck in
     lockless qdiscs

   - neigh: set lower cap for neigh_managed_work rearming, avoid
     constantly scheduling the probe work

   - bpf: fix probe read error on big endian in ___bpf_prog_run()

   - amt: memory leak and error handling fixes

  Misc:

   - ipv6: expand & rename accept_unsolicited_na to accept_untracked_na"

* tag 'net-5.19-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (80 commits)
  net/af_packet: make sure to pull mac header
  net: add debug info to __skb_pull()
  net: CONFIG_DEBUG_NET depends on CONFIG_NET
  stmmac: intel: Add RPL-P PCI ID
  net: stmmac: use dev_err_probe() for reporting mdio bus registration failure
  tipc: check attribute length for bearer name
  ice: fix access-beyond-end in the switch code
  nfp: remove padding in nfp_nfdk_tx_desc
  ax25: Fix ax25 session cleanup problems
  net: usb: qmi_wwan: Add support for Cinterion MV31 with new baseline
  sfc/siena: fix wrong tx channel offset with efx_separate_tx_channels
  sfc/siena: fix considering that all channels have TX queues
  socket: Don't use u8 type in uapi socket.h
  net/sched: act_api: fix error code in tcf_ct_flow_table_fill_tuple_ipv6()
  net: ping6: Fix ping -6 with interface name
  macsec: fix UAF bug for real_dev
  octeontx2-af: fix error code in is_valid_offset()
  wifi: mac80211: fix use-after-free in chanctx code
  bonding: guard ns_targets by CONFIG_IPV6
  tcp: tcp_rtx_synack() can be called from process context
  ...
commit 58f9d52ff6
@@ -7,7 +7,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
 title: Analog Devices ADIN1200/ADIN1300 PHY
 
 maintainers:
-  - Alexandru Ardelean <alexandru.ardelean@analog.com>
+  - Alexandru Tachici <alexandru.tachici@analog.com>
 
 description: |
   Bindings for Analog Devices Industrial Ethernet PHYs
@@ -37,7 +37,8 @@ properties:
     default: 8
 
   adi,phy-output-clock:
-    description: Select clock output on GP_CLK pin. Two clocks are available:
+    description: |
+      Select clock output on GP_CLK pin. Two clocks are available:
       A 25MHz reference and a free-running 125MHz.
       The phy can alternatively automatically switch between the reference and
       the 125MHz clocks based on its internal state.
@@ -2474,21 +2474,16 @@ drop_unsolicited_na - BOOLEAN
 
 	By default this is turned off.
 
-accept_unsolicited_na - BOOLEAN
-	Add a new neighbour cache entry in STALE state for routers on receiving an
-	unsolicited neighbour advertisement with target link-layer address option
-	specified. This is as per router-side behavior documented in RFC9131.
-	This has lower precedence than drop_unsolicited_na.
-
-	==== ====== ====== ==============================================
-	drop accept fwding behaviour
-	---- ------ ------ ----------------------------------------------
-	1    X      X      Drop NA packet and don't pass up the stack
-	0    0      X      Pass NA packet up the stack, don't update NC
-	0    1      0      Pass NA packet up the stack, don't update NC
-	0    1      1      Pass NA packet up the stack, and add a STALE
-	                   NC entry
-	==== ====== ====== ==============================================
-
+accept_untracked_na - BOOLEAN
+	Add a new neighbour cache entry in STALE state for routers on receiving a
+	neighbour advertisement (either solicited or unsolicited) with target
+	link-layer address option specified if no neighbour entry is already
+	present for the advertised IPv6 address. Without this knob, NAs received
+	for untracked addresses (absent in neighbour cache) are silently ignored.
+
+	This is as per router-side behaviour documented in RFC9131.
+
+	This has lower precedence than drop_unsolicited_na.
+
 	This will optimize the return path for the initial off-link communication
 	that is initiated by a directly connected host, by ensuring that
@@ -57,7 +57,7 @@ static char *type_str[] = {
 	"AMT_MSG_MEMBERSHIP_QUERY",
 	"AMT_MSG_MEMBERSHIP_UPDATE",
 	"AMT_MSG_MULTICAST_DATA",
-	"AMT_MSG_TEARDOWM",
+	"AMT_MSG_TEARDOWN",
 };
 
 static char *action_str[] = {
@@ -2423,7 +2423,7 @@ static bool amt_update_handler(struct amt_dev *amt, struct sk_buff *skb)
 		}
 	}
 
-	return false;
+	return true;
 
 report:
 	iph = ip_hdr(skb);
@@ -2679,7 +2679,7 @@ static int amt_rcv(struct sock *sk, struct sk_buff *skb)
 	amt = rcu_dereference_sk_user_data(sk);
 	if (!amt) {
 		err = true;
-		goto out;
+		goto drop;
 	}
 
 	skb->dev = amt->dev;
@@ -6159,7 +6159,9 @@ static int bond_check_params(struct bond_params *params)
 		strscpy_pad(params->primary, primary, sizeof(params->primary));
 
 	memcpy(params->arp_targets, arp_target, sizeof(arp_target));
+#if IS_ENABLED(CONFIG_IPV6)
 	memset(params->ns_targets, 0, sizeof(struct in6_addr) * BOND_MAX_NS_TARGETS);
+#endif
 
 	return 0;
 }
@@ -290,11 +290,6 @@ static int bond_changelink(struct net_device *bond_dev, struct nlattr *tb[],
 
 			addr6 = nla_get_in6_addr(attr);
 
-			if (ipv6_addr_type(&addr6) & IPV6_ADDR_LINKLOCAL) {
-				NL_SET_ERR_MSG(extack, "Invalid IPv6 addr6");
-				return -EINVAL;
-			}
-
 			bond_opt_initextra(&newval, &addr6, sizeof(addr6));
 			err = __bond_opt_set(bond, BOND_OPT_NS_TARGETS,
 					     &newval);
@@ -34,10 +34,8 @@ static int bond_option_arp_ip_target_add(struct bonding *bond, __be32 target);
 static int bond_option_arp_ip_target_rem(struct bonding *bond, __be32 target);
 static int bond_option_arp_ip_targets_set(struct bonding *bond,
 					  const struct bond_opt_value *newval);
-#if IS_ENABLED(CONFIG_IPV6)
 static int bond_option_ns_ip6_targets_set(struct bonding *bond,
 					  const struct bond_opt_value *newval);
-#endif
 static int bond_option_arp_validate_set(struct bonding *bond,
 					 const struct bond_opt_value *newval);
 static int bond_option_arp_all_targets_set(struct bonding *bond,
@@ -299,7 +297,6 @@ static const struct bond_option bond_opts[BOND_OPT_LAST] = {
 		.flags = BOND_OPTFLAG_RAWVAL,
 		.set = bond_option_arp_ip_targets_set
 	},
-#if IS_ENABLED(CONFIG_IPV6)
 	[BOND_OPT_NS_TARGETS] = {
 		.id = BOND_OPT_NS_TARGETS,
 		.name = "ns_ip6_target",
@@ -307,7 +304,6 @@ static const struct bond_option bond_opts[BOND_OPT_LAST] = {
 		.flags = BOND_OPTFLAG_RAWVAL,
 		.set = bond_option_ns_ip6_targets_set
 	},
-#endif
 	[BOND_OPT_DOWNDELAY] = {
 		.id = BOND_OPT_DOWNDELAY,
 		.name = "downdelay",
@@ -1254,6 +1250,12 @@ static int bond_option_ns_ip6_targets_set(struct bonding *bond,
 
 	return 0;
 }
+#else
+static int bond_option_ns_ip6_targets_set(struct bonding *bond,
+					  const struct bond_opt_value *newval)
+{
+	return -EPERM;
+}
 #endif
 
 static int bond_option_arp_validate_set(struct bonding *bond,
@@ -129,6 +129,21 @@ static void bond_info_show_master(struct seq_file *seq)
 			printed = 1;
 		}
 		seq_printf(seq, "\n");
+
+#if IS_ENABLED(CONFIG_IPV6)
+		printed = 0;
+		seq_printf(seq, "NS IPv6 target/s (xx::xx form):");
+
+		for (i = 0; (i < BOND_MAX_NS_TARGETS); i++) {
+			if (ipv6_addr_any(&bond->params.ns_targets[i]))
+				break;
+			if (printed)
+				seq_printf(seq, ",");
+			seq_printf(seq, " %pI6c", &bond->params.ns_targets[i]);
+			printed = 1;
+		}
+		seq_printf(seq, "\n");
+#endif
 	}
 
 	if (BOND_MODE(bond) == BOND_MODE_8023AD) {
@@ -3960,6 +3960,7 @@ static int mv88e6xxx_mdios_register(struct mv88e6xxx_chip *chip,
 	 */
 	child = of_get_child_by_name(np, "mdio");
 	err = mv88e6xxx_mdio_register(chip, child, false);
+	of_node_put(child);
 	if (err)
 		return err;
 
@@ -1,32 +1,7 @@
-/* Copyright 2008 - 2016 Freescale Semiconductor Inc.
+// SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0-or-later
+/*
+ * Copyright 2008 - 2016 Freescale Semiconductor Inc.
  * Copyright 2020 NXP
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are met:
- *     * Redistributions of source code must retain the above copyright
- *       notice, this list of conditions and the following disclaimer.
- *     * Redistributions in binary form must reproduce the above copyright
- *       notice, this list of conditions and the following disclaimer in the
- *       documentation and/or other materials provided with the distribution.
- *     * Neither the name of Freescale Semiconductor nor the
- *       names of its contributors may be used to endorse or promote products
- *       derived from this software without specific prior written permission.
- *
- * ALTERNATIVELY, this software may be distributed under the terms of the
- * GNU General Public License ("GPL") as published by the Free Software
- * Foundation, either version 2 of that License or (at your option) any
- * later version.
- *
- * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
- * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
- * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
- * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
- * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
- * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
- * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
- * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
- * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
  */
 
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
@@ -1,31 +1,6 @@
-/* Copyright 2008 - 2016 Freescale Semiconductor Inc.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are met:
- *     * Redistributions of source code must retain the above copyright
- *       notice, this list of conditions and the following disclaimer.
- *     * Redistributions in binary form must reproduce the above copyright
- *       notice, this list of conditions and the following disclaimer in the
- *       documentation and/or other materials provided with the distribution.
- *     * Neither the name of Freescale Semiconductor nor the
- *       names of its contributors may be used to endorse or promote products
- *       derived from this software without specific prior written permission.
- *
- * ALTERNATIVELY, this software may be distributed under the terms of the
- * GNU General Public License ("GPL") as published by the Free Software
- * Foundation, either version 2 of that License or (at your option) any
- * later version.
- *
- * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
- * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
- * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
- * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
- * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
- * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
- * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
- * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
- * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+/* SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0-or-later */
+/*
+ * Copyright 2008 - 2016 Freescale Semiconductor Inc.
  */
 
 #ifndef __DPAA_H
@@ -1,32 +1,6 @@
-/* Copyright 2008-2016 Freescale Semiconductor Inc.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are met:
- *     * Redistributions of source code must retain the above copyright
- *       notice, this list of conditions and the following disclaimer.
- *     * Redistributions in binary form must reproduce the above copyright
- *       notice, this list of conditions and the following disclaimer in the
- *       documentation and/or other materials provided with the distribution.
- *     * Neither the name of Freescale Semiconductor nor the
- *       names of its contributors may be used to endorse or promote products
- *       derived from this software without specific prior written permission.
- *
- *
- * ALTERNATIVELY, this software may be distributed under the terms of the
- * GNU General Public License ("GPL") as published by the Free Software
- * Foundation, either version 2 of that License or (at your option) any
- * later version.
- *
- * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
- * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
- * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
- * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
- * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
- * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
- * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
- * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
- * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+// SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0-or-later
+/*
+ * Copyright 2008 - 2016 Freescale Semiconductor Inc.
  */
 
 #include <linux/init.h>
@@ -1,32 +1,6 @@
-/* Copyright 2013-2015 Freescale Semiconductor Inc.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are met:
- *     * Redistributions of source code must retain the above copyright
- *       notice, this list of conditions and the following disclaimer.
- *     * Redistributions in binary form must reproduce the above copyright
- *       notice, this list of conditions and the following disclaimer in the
- *       documentation and/or other materials provided with the distribution.
- *     * Neither the name of Freescale Semiconductor nor the
- *       names of its contributors may be used to endorse or promote products
- *       derived from this software without specific prior written permission.
- *
- *
- * ALTERNATIVELY, this software may be distributed under the terms of the
- * GNU General Public License ("GPL") as published by the Free Software
- * Foundation, either version 2 of that License or (at your option) any
- * later version.
- *
- * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
- * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
- * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
- * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
- * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
- * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
- * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
- * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
- * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+/* SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0-or-later */
+/*
+ * Copyright 2013-2015 Freescale Semiconductor Inc.
  */
 
 #undef TRACE_SYSTEM
@@ -1,32 +1,6 @@
-/* Copyright 2008-2016 Freescale Semiconductor, Inc.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are met:
- *     * Redistributions of source code must retain the above copyright
- *       notice, this list of conditions and the following disclaimer.
- *     * Redistributions in binary form must reproduce the above copyright
- *       notice, this list of conditions and the following disclaimer in the
- *       documentation and/or other materials provided with the distribution.
- *     * Neither the name of Freescale Semiconductor nor the
- *       names of its contributors may be used to endorse or promote products
- *       derived from this software without specific prior written permission.
- *
- *
- * ALTERNATIVELY, this software may be distributed under the terms of the
- * GNU General Public License ("GPL") as published by the Free Software
- * Foundation, either version 2 of that License or (at your option) any
- * later version.
- *
- * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
- * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
- * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
- * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
- * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
- * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
- * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
- * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
- * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+// SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0-or-later
+/*
+ * Copyright 2008 - 2016 Freescale Semiconductor Inc.
  */
 
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
@@ -69,7 +69,7 @@ static int enetc_pci_mdio_probe(struct pci_dev *pdev,
 	return 0;
 
 err_mdiobus_reg:
-	pci_release_mem_regions(pdev);
+	pci_release_region(pdev, 0);
 err_pci_mem_reg:
 	pci_disable_device(pdev);
 err_pci_enable:
@@ -88,7 +88,7 @@ static void enetc_pci_mdio_remove(struct pci_dev *pdev)
 	mdiobus_unregister(bus);
 	mdio_priv = bus->priv;
 	iounmap(mdio_priv->hw->port);
-	pci_release_mem_regions(pdev);
+	pci_release_region(pdev, 0);
 	pci_disable_device(pdev);
 }
 
@@ -47,8 +47,3 @@ ice-$(CONFIG_DCB) += ice_dcb.o ice_dcb_nl.o ice_dcb_lib.o
 ice-$(CONFIG_RFS_ACCEL) += ice_arfs.o
 ice-$(CONFIG_XDP_SOCKETS) += ice_xsk.o
 ice-$(CONFIG_ICE_SWITCHDEV) += ice_eswitch.o
-
-# FIXME: temporarily silence -Warray-bounds on non W=1+ builds
-ifndef KBUILD_EXTRA_WARN
-CFLAGS_ice_switch.o += -Wno-array-bounds
-endif
@@ -601,12 +601,30 @@ struct ice_aqc_sw_rules {
 	__le32 addr_low;
 };
 
+/* Add switch rule response:
+ * Content of return buffer is same as the input buffer. The status field and
+ * LUT index are updated as part of the response
+ */
+struct ice_aqc_sw_rules_elem_hdr {
+	__le16 type; /* Switch rule type, one of T_... */
+#define ICE_AQC_SW_RULES_T_LKUP_RX		0x0
+#define ICE_AQC_SW_RULES_T_LKUP_TX		0x1
+#define ICE_AQC_SW_RULES_T_LG_ACT		0x2
+#define ICE_AQC_SW_RULES_T_VSI_LIST_SET		0x3
+#define ICE_AQC_SW_RULES_T_VSI_LIST_CLEAR	0x4
+#define ICE_AQC_SW_RULES_T_PRUNE_LIST_SET	0x5
+#define ICE_AQC_SW_RULES_T_PRUNE_LIST_CLEAR	0x6
+	__le16 status;
+} __packed __aligned(sizeof(__le16));
+
 /* Add/Update/Get/Remove lookup Rx/Tx command/response entry
  * This structures describes the lookup rules and associated actions. "index"
  * is returned as part of a response to a successful Add command, and can be
  * used to identify the rule for Update/Get/Remove commands.
  */
 struct ice_sw_rule_lkup_rx_tx {
+	struct ice_aqc_sw_rules_elem_hdr hdr;
+
 	__le16 recipe_id;
 #define ICE_SW_RECIPE_LOGICAL_PORT_FWD	10
 	/* Source port for LOOKUP_RX and source VSI in case of LOOKUP_TX */
@@ -683,14 +701,16 @@ struct ice_sw_rule_lkup_rx_tx {
 	 * lookup-type
 	 */
 	__le16 hdr_len;
-	u8 hdr[];
-};
+	u8 hdr_data[];
+} __packed __aligned(sizeof(__le16));
 
 /* Add/Update/Remove large action command/response entry
  * "index" is returned as part of a response to a successful Add command, and
  * can be used to identify the action for Update/Get/Remove commands.
  */
 struct ice_sw_rule_lg_act {
+	struct ice_aqc_sw_rules_elem_hdr hdr;
+
 	__le16 index; /* Index in large action table */
 	__le16 size;
 	/* Max number of large actions */
@ -744,45 +764,19 @@ struct ice_sw_rule_lg_act {
|
|||||||
#define ICE_LG_ACT_STAT_COUNT_S 3
|
#define ICE_LG_ACT_STAT_COUNT_S 3
|
||||||
#define ICE_LG_ACT_STAT_COUNT_M (0x7F << ICE_LG_ACT_STAT_COUNT_S)
|
#define ICE_LG_ACT_STAT_COUNT_M (0x7F << ICE_LG_ACT_STAT_COUNT_S)
|
||||||
__le32 act[]; /* array of size for actions */
|
__le32 act[]; /* array of size for actions */
|
||||||
};
|
} __packed __aligned(sizeof(__le16));
|
||||||
|
|
||||||
/* Add/Update/Remove VSI list command/response entry
|
/* Add/Update/Remove VSI list command/response entry
|
||||||
* "index" is returned as part of a response to a successful Add command, and
|
* "index" is returned as part of a response to a successful Add command, and
|
||||||
* can be used to identify the VSI list for Update/Get/Remove commands.
|
* can be used to identify the VSI list for Update/Get/Remove commands.
|
||||||
*/
|
*/
|
||||||
struct ice_sw_rule_vsi_list {
|
struct ice_sw_rule_vsi_list {
|
||||||
|
struct ice_aqc_sw_rules_elem_hdr hdr;
|
||||||
|
|
||||||
__le16 index; /* Index of VSI/Prune list */
|
__le16 index; /* Index of VSI/Prune list */
|
||||||
__le16 number_vsi;
|
__le16 number_vsi;
|
||||||
__le16 vsi[]; /* Array of number_vsi VSI numbers */
|
__le16 vsi[]; /* Array of number_vsi VSI numbers */
|
||||||
};
|
} __packed __aligned(sizeof(__le16));
|
||||||
|
|
||||||
-/* Query VSI list command/response entry */
-struct ice_sw_rule_vsi_list_query {
-	__le16 index;
-	DECLARE_BITMAP(vsi_list, ICE_MAX_VSI);
-} __packed;
-
-/* Add switch rule response:
- * Content of return buffer is same as the input buffer. The status field and
- * LUT index are updated as part of the response
- */
-struct ice_aqc_sw_rules_elem {
-	__le16 type; /* Switch rule type, one of T_... */
-#define ICE_AQC_SW_RULES_T_LKUP_RX		0x0
-#define ICE_AQC_SW_RULES_T_LKUP_TX		0x1
-#define ICE_AQC_SW_RULES_T_LG_ACT		0x2
-#define ICE_AQC_SW_RULES_T_VSI_LIST_SET		0x3
-#define ICE_AQC_SW_RULES_T_VSI_LIST_CLEAR	0x4
-#define ICE_AQC_SW_RULES_T_PRUNE_LIST_SET	0x5
-#define ICE_AQC_SW_RULES_T_PRUNE_LIST_CLEAR	0x6
-	__le16 status;
-	union {
-		struct ice_sw_rule_lkup_rx_tx lkup_tx_rx;
-		struct ice_sw_rule_lg_act lg_act;
-		struct ice_sw_rule_vsi_list vsi_list;
-		struct ice_sw_rule_vsi_list_query vsi_list_query;
-	} __packed pdata;
-};
-
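For orientation, the shape the patch moves to — a common header embedded in each rule struct plus a trailing flexible array, instead of a wrapper struct with a union — can be sketched in plain C. The names and field layout below are illustrative stand-ins, not the driver's actual definitions:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* Illustrative stand-in for the shared ice_aqc_sw_rules_elem_hdr. */
struct elem_hdr {
	uint16_t type;
	uint16_t status;
};

/* Illustrative stand-in for a per-type rule struct such as
 * ice_sw_rule_vsi_list: header embedded directly, flexible array at the end.
 */
struct rule_vsi_list {
	struct elem_hdr hdr;
	uint16_t index;
	uint16_t number_vsi;
	uint16_t vsi[];		/* sized at allocation time */
};

/* Allocate a rule with room for n trailing VSI entries. */
static struct rule_vsi_list *alloc_rule(size_t n)
{
	struct rule_vsi_list *r =
		calloc(1, sizeof(*r) + n * sizeof(r->vsi[0]));

	if (r)
		r->number_vsi = (uint16_t)n;
	return r;
}
```

Because each rule type carries its own `hdr`, callers index fields directly (`s_rule->index`) instead of going through `pdata.vsi_list.index`, which is exactly the mechanical change visible in the .c hunks below.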
 /* Query PFC Mode (direct 0x0302)
  * Set PFC Mode (direct 0x0303)
@@ -1282,18 +1282,13 @@ static const struct ice_dummy_pkt_profile ice_dummy_pkt_profiles[] = {
 	ICE_PKT_PROFILE(tcp, 0),
 };
 
-#define ICE_SW_RULE_RX_TX_ETH_HDR_SIZE \
-	(offsetof(struct ice_aqc_sw_rules_elem, pdata.lkup_tx_rx.hdr) + \
-	 (DUMMY_ETH_HDR_LEN * \
-	  sizeof(((struct ice_sw_rule_lkup_rx_tx *)0)->hdr[0])))
-#define ICE_SW_RULE_RX_TX_NO_HDR_SIZE \
-	(offsetof(struct ice_aqc_sw_rules_elem, pdata.lkup_tx_rx.hdr))
-#define ICE_SW_RULE_LG_ACT_SIZE(n) \
-	(offsetof(struct ice_aqc_sw_rules_elem, pdata.lg_act.act) + \
-	 ((n) * sizeof(((struct ice_sw_rule_lg_act *)0)->act[0])))
-#define ICE_SW_RULE_VSI_LIST_SIZE(n) \
-	(offsetof(struct ice_aqc_sw_rules_elem, pdata.vsi_list.vsi) + \
-	 ((n) * sizeof(((struct ice_sw_rule_vsi_list *)0)->vsi[0])))
+#define ICE_SW_RULE_RX_TX_HDR_SIZE(s, l)	struct_size((s), hdr_data, (l))
+#define ICE_SW_RULE_RX_TX_ETH_HDR_SIZE(s)	\
+	ICE_SW_RULE_RX_TX_HDR_SIZE((s), DUMMY_ETH_HDR_LEN)
+#define ICE_SW_RULE_RX_TX_NO_HDR_SIZE(s)	\
+	ICE_SW_RULE_RX_TX_HDR_SIZE((s), 0)
+#define ICE_SW_RULE_LG_ACT_SIZE(s, n)	struct_size((s), act, (n))
+#define ICE_SW_RULE_VSI_LIST_SIZE(s, n)	struct_size((s), vsi, (n))
 
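The replacement macros lean on the kernel's struct_size() helper, which folds the fixed part of the struct and the flexible-array element count into one allocation size (with overflow saturation in the real kernel). A rough userspace approximation of what they expand to — the struct layout and STRUCT_SIZE below are simplified stand-ins, without the kernel's overflow checking:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-in for ice_sw_rule_lkup_rx_tx. */
struct lkup_rx_tx {
	uint16_t recipe_id;
	uint16_t src;
	uint32_t act;
	uint16_t index;
	uint16_t hdr_len;
	uint8_t hdr_data[];	/* flexible array member */
};

/* Unchecked approximation of the kernel's struct_size() helper:
 * size of the fixed part plus n trailing elements. The pointer is only
 * used inside sizeof, so it is never dereferenced.
 */
#define STRUCT_SIZE(p, member, n) \
	(sizeof(*(p)) + (n) * sizeof((p)->member[0]))

#define RULE_RX_TX_HDR_SIZE(s, l)	STRUCT_SIZE((s), hdr_data, (l))
#define RULE_RX_TX_NO_HDR_SIZE(s)	RULE_RX_TX_HDR_SIZE((s), 0)
```

Passing the pointer itself (the new `(s)` parameter) lets the macro derive the element type from the variable, which is why every call site in the hunks below gains an `s_rule`/`rx_tx`/`lg_act` argument.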
 /* this is a recipe to profile association bitmap */
 static DECLARE_BITMAP(recipe_to_profile[ICE_MAX_NUM_RECIPES],
@@ -2376,7 +2371,8 @@ static void ice_fill_sw_info(struct ice_hw *hw, struct ice_fltr_info *fi)
  */
 static void
 ice_fill_sw_rule(struct ice_hw *hw, struct ice_fltr_info *f_info,
-		 struct ice_aqc_sw_rules_elem *s_rule, enum ice_adminq_opc opc)
+		 struct ice_sw_rule_lkup_rx_tx *s_rule,
+		 enum ice_adminq_opc opc)
 {
 	u16 vlan_id = ICE_MAX_VLAN_ID + 1;
 	u16 vlan_tpid = ETH_P_8021Q;
@@ -2388,15 +2384,14 @@ ice_fill_sw_rule(struct ice_hw *hw, struct ice_fltr_info *f_info,
 	u8 q_rgn;
 
 	if (opc == ice_aqc_opc_remove_sw_rules) {
-		s_rule->pdata.lkup_tx_rx.act = 0;
-		s_rule->pdata.lkup_tx_rx.index =
-			cpu_to_le16(f_info->fltr_rule_id);
-		s_rule->pdata.lkup_tx_rx.hdr_len = 0;
+		s_rule->act = 0;
+		s_rule->index = cpu_to_le16(f_info->fltr_rule_id);
+		s_rule->hdr_len = 0;
 		return;
 	}
 
 	eth_hdr_sz = sizeof(dummy_eth_header);
-	eth_hdr = s_rule->pdata.lkup_tx_rx.hdr;
+	eth_hdr = s_rule->hdr_data;
 
 	/* initialize the ether header with a dummy header */
 	memcpy(eth_hdr, dummy_eth_header, eth_hdr_sz);
@@ -2481,14 +2476,14 @@ ice_fill_sw_rule(struct ice_hw *hw, struct ice_fltr_info *f_info,
 		break;
 	}
 
-	s_rule->type = (f_info->flag & ICE_FLTR_RX) ?
+	s_rule->hdr.type = (f_info->flag & ICE_FLTR_RX) ?
 		cpu_to_le16(ICE_AQC_SW_RULES_T_LKUP_RX) :
 		cpu_to_le16(ICE_AQC_SW_RULES_T_LKUP_TX);
 
 	/* Recipe set depending on lookup type */
-	s_rule->pdata.lkup_tx_rx.recipe_id = cpu_to_le16(f_info->lkup_type);
-	s_rule->pdata.lkup_tx_rx.src = cpu_to_le16(f_info->src);
-	s_rule->pdata.lkup_tx_rx.act = cpu_to_le32(act);
+	s_rule->recipe_id = cpu_to_le16(f_info->lkup_type);
+	s_rule->src = cpu_to_le16(f_info->src);
+	s_rule->act = cpu_to_le32(act);
 
 	if (daddr)
 		ether_addr_copy(eth_hdr + ICE_ETH_DA_OFFSET, daddr);
@@ -2502,7 +2497,7 @@ ice_fill_sw_rule(struct ice_hw *hw, struct ice_fltr_info *f_info,
 
 	/* Create the switch rule with the final dummy Ethernet header */
 	if (opc != ice_aqc_opc_update_sw_rules)
-		s_rule->pdata.lkup_tx_rx.hdr_len = cpu_to_le16(eth_hdr_sz);
+		s_rule->hdr_len = cpu_to_le16(eth_hdr_sz);
 }
 
 /**
@@ -2519,7 +2514,8 @@ static int
 ice_add_marker_act(struct ice_hw *hw, struct ice_fltr_mgmt_list_entry *m_ent,
 		   u16 sw_marker, u16 l_id)
 {
-	struct ice_aqc_sw_rules_elem *lg_act, *rx_tx;
+	struct ice_sw_rule_lkup_rx_tx *rx_tx;
+	struct ice_sw_rule_lg_act *lg_act;
 	/* For software marker we need 3 large actions
 	 * 1. FWD action: FWD TO VSI or VSI LIST
 	 * 2. GENERIC VALUE action to hold the profile ID
@@ -2540,18 +2536,18 @@ ice_add_marker_act(struct ice_hw *hw, struct ice_fltr_mgmt_list_entry *m_ent,
 	 * 1. Large Action
 	 * 2. Look up Tx Rx
 	 */
-	lg_act_size = (u16)ICE_SW_RULE_LG_ACT_SIZE(num_lg_acts);
-	rules_size = lg_act_size + ICE_SW_RULE_RX_TX_ETH_HDR_SIZE;
+	lg_act_size = (u16)ICE_SW_RULE_LG_ACT_SIZE(lg_act, num_lg_acts);
+	rules_size = lg_act_size + ICE_SW_RULE_RX_TX_ETH_HDR_SIZE(rx_tx);
 	lg_act = devm_kzalloc(ice_hw_to_dev(hw), rules_size, GFP_KERNEL);
 	if (!lg_act)
 		return -ENOMEM;
 
-	rx_tx = (struct ice_aqc_sw_rules_elem *)((u8 *)lg_act + lg_act_size);
+	rx_tx = (typeof(rx_tx))((u8 *)lg_act + lg_act_size);
 
 	/* Fill in the first switch rule i.e. large action */
-	lg_act->type = cpu_to_le16(ICE_AQC_SW_RULES_T_LG_ACT);
-	lg_act->pdata.lg_act.index = cpu_to_le16(l_id);
-	lg_act->pdata.lg_act.size = cpu_to_le16(num_lg_acts);
+	lg_act->hdr.type = cpu_to_le16(ICE_AQC_SW_RULES_T_LG_ACT);
+	lg_act->index = cpu_to_le16(l_id);
+	lg_act->size = cpu_to_le16(num_lg_acts);
 
 	/* First action VSI forwarding or VSI list forwarding depending on how
 	 * many VSIs
@@ -2563,13 +2559,13 @@ ice_add_marker_act(struct ice_hw *hw, struct ice_fltr_mgmt_list_entry *m_ent,
 	act |= (id << ICE_LG_ACT_VSI_LIST_ID_S) & ICE_LG_ACT_VSI_LIST_ID_M;
 	if (m_ent->vsi_count > 1)
 		act |= ICE_LG_ACT_VSI_LIST;
-	lg_act->pdata.lg_act.act[0] = cpu_to_le32(act);
+	lg_act->act[0] = cpu_to_le32(act);
 
 	/* Second action descriptor type */
 	act = ICE_LG_ACT_GENERIC;
 
 	act |= (1 << ICE_LG_ACT_GENERIC_VALUE_S) & ICE_LG_ACT_GENERIC_VALUE_M;
-	lg_act->pdata.lg_act.act[1] = cpu_to_le32(act);
+	lg_act->act[1] = cpu_to_le32(act);
 
 	act = (ICE_LG_ACT_GENERIC_OFF_RX_DESC_PROF_IDX <<
 	       ICE_LG_ACT_GENERIC_OFFSET_S) & ICE_LG_ACT_GENERIC_OFFSET_M;
@@ -2579,24 +2575,22 @@ ice_add_marker_act(struct ice_hw *hw, struct ice_fltr_mgmt_list_entry *m_ent,
 	act |= (sw_marker << ICE_LG_ACT_GENERIC_VALUE_S) &
 		ICE_LG_ACT_GENERIC_VALUE_M;
 
-	lg_act->pdata.lg_act.act[2] = cpu_to_le32(act);
+	lg_act->act[2] = cpu_to_le32(act);
 
 	/* call the fill switch rule to fill the lookup Tx Rx structure */
 	ice_fill_sw_rule(hw, &m_ent->fltr_info, rx_tx,
 			 ice_aqc_opc_update_sw_rules);
 
 	/* Update the action to point to the large action ID */
-	rx_tx->pdata.lkup_tx_rx.act =
-		cpu_to_le32(ICE_SINGLE_ACT_PTR |
-			    ((l_id << ICE_SINGLE_ACT_PTR_VAL_S) &
-			     ICE_SINGLE_ACT_PTR_VAL_M));
+	rx_tx->act = cpu_to_le32(ICE_SINGLE_ACT_PTR |
+				 ((l_id << ICE_SINGLE_ACT_PTR_VAL_S) &
+				  ICE_SINGLE_ACT_PTR_VAL_M));
 
 	/* Use the filter rule ID of the previously created rule with single
 	 * act. Once the update happens, hardware will treat this as large
 	 * action
 	 */
-	rx_tx->pdata.lkup_tx_rx.index =
-		cpu_to_le16(m_ent->fltr_info.fltr_rule_id);
+	rx_tx->index = cpu_to_le16(m_ent->fltr_info.fltr_rule_id);
 
 	status = ice_aq_sw_rules(hw, lg_act, rules_size, 2,
 				 ice_aqc_opc_update_sw_rules, NULL);
@@ -2658,7 +2652,7 @@ ice_update_vsi_list_rule(struct ice_hw *hw, u16 *vsi_handle_arr, u16 num_vsi,
 			 u16 vsi_list_id, bool remove, enum ice_adminq_opc opc,
 			 enum ice_sw_lkup_type lkup_type)
 {
-	struct ice_aqc_sw_rules_elem *s_rule;
+	struct ice_sw_rule_vsi_list *s_rule;
 	u16 s_rule_size;
 	u16 rule_type;
 	int status;
@@ -2681,7 +2675,7 @@ ice_update_vsi_list_rule(struct ice_hw *hw, u16 *vsi_handle_arr, u16 num_vsi,
 	else
 		return -EINVAL;
 
-	s_rule_size = (u16)ICE_SW_RULE_VSI_LIST_SIZE(num_vsi);
+	s_rule_size = (u16)ICE_SW_RULE_VSI_LIST_SIZE(s_rule, num_vsi);
 	s_rule = devm_kzalloc(ice_hw_to_dev(hw), s_rule_size, GFP_KERNEL);
 	if (!s_rule)
 		return -ENOMEM;
@@ -2691,13 +2685,13 @@ ice_update_vsi_list_rule(struct ice_hw *hw, u16 *vsi_handle_arr, u16 num_vsi,
 			goto exit;
 		}
 		/* AQ call requires hw_vsi_id(s) */
-		s_rule->pdata.vsi_list.vsi[i] =
+		s_rule->vsi[i] =
 			cpu_to_le16(ice_get_hw_vsi_num(hw, vsi_handle_arr[i]));
 	}
 
-	s_rule->type = cpu_to_le16(rule_type);
-	s_rule->pdata.vsi_list.number_vsi = cpu_to_le16(num_vsi);
-	s_rule->pdata.vsi_list.index = cpu_to_le16(vsi_list_id);
+	s_rule->hdr.type = cpu_to_le16(rule_type);
+	s_rule->number_vsi = cpu_to_le16(num_vsi);
+	s_rule->index = cpu_to_le16(vsi_list_id);
 
 	status = ice_aq_sw_rules(hw, s_rule, s_rule_size, 1, opc, NULL);
 
@@ -2745,13 +2739,14 @@ ice_create_pkt_fwd_rule(struct ice_hw *hw,
 			struct ice_fltr_list_entry *f_entry)
 {
 	struct ice_fltr_mgmt_list_entry *fm_entry;
-	struct ice_aqc_sw_rules_elem *s_rule;
+	struct ice_sw_rule_lkup_rx_tx *s_rule;
 	enum ice_sw_lkup_type l_type;
 	struct ice_sw_recipe *recp;
 	int status;
 
 	s_rule = devm_kzalloc(ice_hw_to_dev(hw),
-			      ICE_SW_RULE_RX_TX_ETH_HDR_SIZE, GFP_KERNEL);
+			      ICE_SW_RULE_RX_TX_ETH_HDR_SIZE(s_rule),
+			      GFP_KERNEL);
 	if (!s_rule)
 		return -ENOMEM;
 	fm_entry = devm_kzalloc(ice_hw_to_dev(hw), sizeof(*fm_entry),
@@ -2772,17 +2767,16 @@ ice_create_pkt_fwd_rule(struct ice_hw *hw,
 	ice_fill_sw_rule(hw, &fm_entry->fltr_info, s_rule,
 			 ice_aqc_opc_add_sw_rules);
 
-	status = ice_aq_sw_rules(hw, s_rule, ICE_SW_RULE_RX_TX_ETH_HDR_SIZE, 1,
+	status = ice_aq_sw_rules(hw, s_rule,
+				 ICE_SW_RULE_RX_TX_ETH_HDR_SIZE(s_rule), 1,
 				 ice_aqc_opc_add_sw_rules, NULL);
 	if (status) {
 		devm_kfree(ice_hw_to_dev(hw), fm_entry);
 		goto ice_create_pkt_fwd_rule_exit;
 	}
 
-	f_entry->fltr_info.fltr_rule_id =
-		le16_to_cpu(s_rule->pdata.lkup_tx_rx.index);
-	fm_entry->fltr_info.fltr_rule_id =
-		le16_to_cpu(s_rule->pdata.lkup_tx_rx.index);
+	f_entry->fltr_info.fltr_rule_id = le16_to_cpu(s_rule->index);
+	fm_entry->fltr_info.fltr_rule_id = le16_to_cpu(s_rule->index);
 
 	/* The book keeping entries will get removed when base driver
 	 * calls remove filter AQ command
@@ -2807,20 +2801,22 @@ ice_create_pkt_fwd_rule_exit:
 static int
 ice_update_pkt_fwd_rule(struct ice_hw *hw, struct ice_fltr_info *f_info)
 {
-	struct ice_aqc_sw_rules_elem *s_rule;
+	struct ice_sw_rule_lkup_rx_tx *s_rule;
 	int status;
 
 	s_rule = devm_kzalloc(ice_hw_to_dev(hw),
-			      ICE_SW_RULE_RX_TX_ETH_HDR_SIZE, GFP_KERNEL);
+			      ICE_SW_RULE_RX_TX_ETH_HDR_SIZE(s_rule),
+			      GFP_KERNEL);
 	if (!s_rule)
 		return -ENOMEM;
 
 	ice_fill_sw_rule(hw, f_info, s_rule, ice_aqc_opc_update_sw_rules);
 
-	s_rule->pdata.lkup_tx_rx.index = cpu_to_le16(f_info->fltr_rule_id);
+	s_rule->index = cpu_to_le16(f_info->fltr_rule_id);
 
 	/* Update switch rule with new rule set to forward VSI list */
-	status = ice_aq_sw_rules(hw, s_rule, ICE_SW_RULE_RX_TX_ETH_HDR_SIZE, 1,
+	status = ice_aq_sw_rules(hw, s_rule,
+				 ICE_SW_RULE_RX_TX_ETH_HDR_SIZE(s_rule), 1,
 				 ice_aqc_opc_update_sw_rules, NULL);
 
 	devm_kfree(ice_hw_to_dev(hw), s_rule);
@@ -3104,17 +3100,17 @@ static int
 ice_remove_vsi_list_rule(struct ice_hw *hw, u16 vsi_list_id,
 			 enum ice_sw_lkup_type lkup_type)
 {
-	struct ice_aqc_sw_rules_elem *s_rule;
+	struct ice_sw_rule_vsi_list *s_rule;
 	u16 s_rule_size;
 	int status;
 
-	s_rule_size = (u16)ICE_SW_RULE_VSI_LIST_SIZE(0);
+	s_rule_size = (u16)ICE_SW_RULE_VSI_LIST_SIZE(s_rule, 0);
 	s_rule = devm_kzalloc(ice_hw_to_dev(hw), s_rule_size, GFP_KERNEL);
 	if (!s_rule)
 		return -ENOMEM;
 
-	s_rule->type = cpu_to_le16(ICE_AQC_SW_RULES_T_VSI_LIST_CLEAR);
-	s_rule->pdata.vsi_list.index = cpu_to_le16(vsi_list_id);
+	s_rule->hdr.type = cpu_to_le16(ICE_AQC_SW_RULES_T_VSI_LIST_CLEAR);
+	s_rule->index = cpu_to_le16(vsi_list_id);
 
 	/* Free the vsi_list resource that we allocated. It is assumed that the
 	 * list is empty at this point.
@@ -3274,10 +3270,10 @@ ice_remove_rule_internal(struct ice_hw *hw, u8 recp_id,
 
 	if (remove_rule) {
 		/* Remove the lookup rule */
-		struct ice_aqc_sw_rules_elem *s_rule;
+		struct ice_sw_rule_lkup_rx_tx *s_rule;
 
 		s_rule = devm_kzalloc(ice_hw_to_dev(hw),
-				      ICE_SW_RULE_RX_TX_NO_HDR_SIZE,
+				      ICE_SW_RULE_RX_TX_NO_HDR_SIZE(s_rule),
 				      GFP_KERNEL);
 		if (!s_rule) {
 			status = -ENOMEM;
@@ -3288,8 +3284,8 @@ ice_remove_rule_internal(struct ice_hw *hw, u8 recp_id,
 				 ice_aqc_opc_remove_sw_rules);
 
 		status = ice_aq_sw_rules(hw, s_rule,
-					 ICE_SW_RULE_RX_TX_NO_HDR_SIZE, 1,
-					 ice_aqc_opc_remove_sw_rules, NULL);
+					 ICE_SW_RULE_RX_TX_NO_HDR_SIZE(s_rule),
+					 1, ice_aqc_opc_remove_sw_rules, NULL);
 
 		/* Remove a book keeping from the list */
 		devm_kfree(ice_hw_to_dev(hw), s_rule);
@@ -3437,7 +3433,7 @@ bool ice_vlan_fltr_exist(struct ice_hw *hw, u16 vlan_id, u16 vsi_handle)
  */
 int ice_add_mac(struct ice_hw *hw, struct list_head *m_list)
 {
-	struct ice_aqc_sw_rules_elem *s_rule, *r_iter;
+	struct ice_sw_rule_lkup_rx_tx *s_rule, *r_iter;
 	struct ice_fltr_list_entry *m_list_itr;
 	struct list_head *rule_head;
 	u16 total_elem_left, s_rule_size;
@@ -3501,7 +3497,7 @@ int ice_add_mac(struct ice_hw *hw, struct list_head *m_list)
 	rule_head = &sw->recp_list[ICE_SW_LKUP_MAC].filt_rules;
 
 	/* Allocate switch rule buffer for the bulk update for unicast */
-	s_rule_size = ICE_SW_RULE_RX_TX_ETH_HDR_SIZE;
+	s_rule_size = ICE_SW_RULE_RX_TX_ETH_HDR_SIZE(s_rule);
 	s_rule = devm_kcalloc(ice_hw_to_dev(hw), num_unicast, s_rule_size,
 			      GFP_KERNEL);
 	if (!s_rule) {
@@ -3517,8 +3513,7 @@ int ice_add_mac(struct ice_hw *hw, struct list_head *m_list)
 		if (is_unicast_ether_addr(mac_addr)) {
 			ice_fill_sw_rule(hw, &m_list_itr->fltr_info, r_iter,
 					 ice_aqc_opc_add_sw_rules);
-			r_iter = (struct ice_aqc_sw_rules_elem *)
-				((u8 *)r_iter + s_rule_size);
+			r_iter = (typeof(s_rule))((u8 *)r_iter + s_rule_size);
 		}
 	}
 
@@ -3527,7 +3522,7 @@ int ice_add_mac(struct ice_hw *hw, struct list_head *m_list)
 	/* Call AQ switch rule in AQ_MAX chunk */
 	for (total_elem_left = num_unicast; total_elem_left > 0;
 	     total_elem_left -= elem_sent) {
-		struct ice_aqc_sw_rules_elem *entry = r_iter;
+		struct ice_sw_rule_lkup_rx_tx *entry = r_iter;
 
 		elem_sent = min_t(u8, total_elem_left,
 				  (ICE_AQ_MAX_BUF_LEN / s_rule_size));
@@ -3536,7 +3531,7 @@ int ice_add_mac(struct ice_hw *hw, struct list_head *m_list)
 					 NULL);
 		if (status)
 			goto ice_add_mac_exit;
-		r_iter = (struct ice_aqc_sw_rules_elem *)
+		r_iter = (typeof(s_rule))
 			((u8 *)r_iter + (elem_sent * s_rule_size));
 	}
 
@@ -3548,8 +3543,7 @@ int ice_add_mac(struct ice_hw *hw, struct list_head *m_list)
 		struct ice_fltr_mgmt_list_entry *fm_entry;
 
 		if (is_unicast_ether_addr(mac_addr)) {
-			f_info->fltr_rule_id =
-				le16_to_cpu(r_iter->pdata.lkup_tx_rx.index);
+			f_info->fltr_rule_id = le16_to_cpu(r_iter->index);
 			f_info->fltr_act = ICE_FWD_TO_VSI;
 			/* Create an entry to track this MAC address */
 			fm_entry = devm_kzalloc(ice_hw_to_dev(hw),
@@ -3565,8 +3559,7 @@ int ice_add_mac(struct ice_hw *hw, struct list_head *m_list)
 			 */
 
 			list_add(&fm_entry->list_entry, rule_head);
-			r_iter = (struct ice_aqc_sw_rules_elem *)
-				((u8 *)r_iter + s_rule_size);
+			r_iter = (typeof(s_rule))((u8 *)r_iter + s_rule_size);
 		}
 	}
 
@@ -3865,7 +3858,7 @@ ice_rem_adv_rule_info(struct ice_hw *hw, struct list_head *rule_head)
  */
 int ice_cfg_dflt_vsi(struct ice_hw *hw, u16 vsi_handle, bool set, u8 direction)
 {
-	struct ice_aqc_sw_rules_elem *s_rule;
+	struct ice_sw_rule_lkup_rx_tx *s_rule;
 	struct ice_fltr_info f_info;
 	enum ice_adminq_opc opcode;
 	u16 s_rule_size;
@@ -3876,8 +3869,8 @@ int ice_cfg_dflt_vsi(struct ice_hw *hw, u16 vsi_handle, bool set, u8 direction)
 		return -EINVAL;
 	hw_vsi_id = ice_get_hw_vsi_num(hw, vsi_handle);
 
-	s_rule_size = set ? ICE_SW_RULE_RX_TX_ETH_HDR_SIZE :
-		ICE_SW_RULE_RX_TX_NO_HDR_SIZE;
+	s_rule_size = set ? ICE_SW_RULE_RX_TX_ETH_HDR_SIZE(s_rule) :
+			    ICE_SW_RULE_RX_TX_NO_HDR_SIZE(s_rule);
 
 	s_rule = devm_kzalloc(ice_hw_to_dev(hw), s_rule_size, GFP_KERNEL);
 	if (!s_rule)
@@ -3915,7 +3908,7 @@ int ice_cfg_dflt_vsi(struct ice_hw *hw, u16 vsi_handle, bool set, u8 direction)
 	if (status || !(f_info.flag & ICE_FLTR_TX_RX))
 		goto out;
 	if (set) {
-		u16 index = le16_to_cpu(s_rule->pdata.lkup_tx_rx.index);
+		u16 index = le16_to_cpu(s_rule->index);
 
 		if (f_info.flag & ICE_FLTR_TX) {
 			hw->port_info->dflt_tx_vsi_num = hw_vsi_id;
@@ -5641,7 +5634,7 @@ ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
  */
 static int
 ice_fill_adv_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
-			  struct ice_aqc_sw_rules_elem *s_rule,
+			  struct ice_sw_rule_lkup_rx_tx *s_rule,
 			  const struct ice_dummy_pkt_profile *profile)
 {
 	u8 *pkt;
@@ -5650,7 +5643,7 @@ ice_fill_adv_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
 	/* Start with a packet with a pre-defined/dummy content. Then, fill
 	 * in the header values to be looked up or matched.
 	 */
-	pkt = s_rule->pdata.lkup_tx_rx.hdr;
+	pkt = s_rule->hdr_data;
 
 	memcpy(pkt, profile->pkt, profile->pkt_len);
 
@@ -5740,7 +5733,7 @@ ice_fill_adv_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
 		}
 	}
 
-	s_rule->pdata.lkup_tx_rx.hdr_len = cpu_to_le16(profile->pkt_len);
+	s_rule->hdr_len = cpu_to_le16(profile->pkt_len);
 
 	return 0;
 }
@@ -5963,7 +5956,7 @@ ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
 		 struct ice_rule_query_data *added_entry)
 {
 	struct ice_adv_fltr_mgmt_list_entry *m_entry, *adv_fltr = NULL;
-	struct ice_aqc_sw_rules_elem *s_rule = NULL;
+	struct ice_sw_rule_lkup_rx_tx *s_rule = NULL;
 	const struct ice_dummy_pkt_profile *profile;
 	u16 rid = 0, i, rule_buf_sz, vsi_handle;
 	struct list_head *rule_head;
@@ -6040,7 +6033,7 @@ ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
 		}
 		return status;
 	}
-	rule_buf_sz = ICE_SW_RULE_RX_TX_NO_HDR_SIZE + profile->pkt_len;
+	rule_buf_sz = ICE_SW_RULE_RX_TX_HDR_SIZE(s_rule, profile->pkt_len);
 	s_rule = kzalloc(rule_buf_sz, GFP_KERNEL);
 	if (!s_rule)
 		return -ENOMEM;
@@ -6089,16 +6082,15 @@ ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
 	 * by caller)
 	 */
 	if (rinfo->rx) {
-		s_rule->type = cpu_to_le16(ICE_AQC_SW_RULES_T_LKUP_RX);
-		s_rule->pdata.lkup_tx_rx.src =
-			cpu_to_le16(hw->port_info->lport);
+		s_rule->hdr.type = cpu_to_le16(ICE_AQC_SW_RULES_T_LKUP_RX);
+		s_rule->src = cpu_to_le16(hw->port_info->lport);
 	} else {
-		s_rule->type = cpu_to_le16(ICE_AQC_SW_RULES_T_LKUP_TX);
-		s_rule->pdata.lkup_tx_rx.src = cpu_to_le16(rinfo->sw_act.src);
+		s_rule->hdr.type = cpu_to_le16(ICE_AQC_SW_RULES_T_LKUP_TX);
+		s_rule->src = cpu_to_le16(rinfo->sw_act.src);
 	}
 
-	s_rule->pdata.lkup_tx_rx.recipe_id = cpu_to_le16(rid);
-	s_rule->pdata.lkup_tx_rx.act = cpu_to_le32(act);
+	s_rule->recipe_id = cpu_to_le16(rid);
+	s_rule->act = cpu_to_le32(act);
 
 	status = ice_fill_adv_dummy_packet(lkups, lkups_cnt, s_rule, profile);
 	if (status)
@@ -6107,7 +6099,7 @@ ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
 	if (rinfo->tun_type != ICE_NON_TUN &&
 	    rinfo->tun_type != ICE_SW_TUN_AND_NON_TUN) {
 		status = ice_fill_adv_packet_tun(hw, rinfo->tun_type,
-						 s_rule->pdata.lkup_tx_rx.hdr,
+						 s_rule->hdr_data,
 						 profile->offsets);
 		if (status)
 			goto err_ice_add_adv_rule;
@@ -6135,8 +6127,7 @@ ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
 
 	adv_fltr->lkups_cnt = lkups_cnt;
 	adv_fltr->rule_info = *rinfo;
-	adv_fltr->rule_info.fltr_rule_id =
-		le16_to_cpu(s_rule->pdata.lkup_tx_rx.index);
+	adv_fltr->rule_info.fltr_rule_id = le16_to_cpu(s_rule->index);
 	sw = hw->switch_info;
 	sw->recp_list[rid].adv_rule = true;
 	rule_head = &sw->recp_list[rid].filt_rules;
@@ -6384,17 +6375,16 @@ ice_rem_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
 	}
 	mutex_unlock(rule_lock);
 	if (remove_rule) {
-		struct ice_aqc_sw_rules_elem *s_rule;
+		struct ice_sw_rule_lkup_rx_tx *s_rule;
 		u16 rule_buf_sz;
 
-		rule_buf_sz = ICE_SW_RULE_RX_TX_NO_HDR_SIZE;
+		rule_buf_sz = ICE_SW_RULE_RX_TX_NO_HDR_SIZE(s_rule);
 		s_rule = kzalloc(rule_buf_sz, GFP_KERNEL);
 		if (!s_rule)
 			return -ENOMEM;
-		s_rule->pdata.lkup_tx_rx.act = 0;
-		s_rule->pdata.lkup_tx_rx.index =
-			cpu_to_le16(list_elem->rule_info.fltr_rule_id);
-		s_rule->pdata.lkup_tx_rx.hdr_len = 0;
+		s_rule->act = 0;
+		s_rule->index = cpu_to_le16(list_elem->rule_info.fltr_rule_id);
+		s_rule->hdr_len = 0;
 		status = ice_aq_sw_rules(hw, (struct ice_aqc_sw_rules *)s_rule,
 					 rule_buf_sz, 1,
 					 ice_aqc_opc_remove_sw_rules, NULL);
@@ -23,9 +23,6 @@
 #define ICE_PROFID_IPV6_GTPU_TEID			46
 #define ICE_PROFID_IPV6_GTPU_IPV6_TCP_INNER		70
 
-#define ICE_SW_RULE_RX_TX_NO_HDR_SIZE \
-	(offsetof(struct ice_aqc_sw_rules_elem, pdata.lkup_tx_rx.hdr))
-
 /* VSI context structure for add/get/update/free operations */
 struct ice_vsi_ctx {
 	u16 vsi_num;
@@ -579,7 +579,7 @@ static bool is_valid_offset(struct rvu *rvu, struct cpt_rd_wr_reg_msg *req)
 
 	blkaddr = validate_and_get_cpt_blkaddr(req->blkaddr);
 	if (blkaddr < 0)
-		return blkaddr;
+		return false;
 
 	/* Registers that can be accessed from PF/VF */
 	if ((offset & 0xFF000) == CPT_AF_LFX_CTL(0) ||
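The one-line rvu_cpt fix above matters because is_valid_offset() returns bool: `return blkaddr;` with a negative error code converts to true, so an invalid block address would wrongly pass validation. A minimal demonstration of that C conversion rule, using hypothetical helper names:

```c
#include <assert.h>
#include <stdbool.h>

/* In C, any nonzero value assigned to _Bool becomes true,
 * including a negative errno-style error code.
 */
static bool buggy_check(int blkaddr)
{
	if (blkaddr < 0)
		return blkaddr;	/* bug: -EINVAL silently becomes true */
	return true;
}

static bool fixed_check(int blkaddr)
{
	if (blkaddr < 0)
		return false;	/* fix: report the failure explicitly */
	return true;
}
```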
@@ -2212,6 +2212,9 @@ static int mtk_hwlro_get_fdir_entry(struct net_device *dev,
 	struct ethtool_rx_flow_spec *fsp =
 		(struct ethtool_rx_flow_spec *)&cmd->fs;
 
+	if (fsp->location >= ARRAY_SIZE(mac->hwlro_ip))
+		return -EINVAL;
+
 	/* only tcp dst ipv4 is meaningful, others are meaningless */
 	fsp->flow_type = TCP_V4_FLOW;
 	fsp->h_u.tcp_ip4_spec.ip4dst = ntohl(mac->hwlro_ip[fsp->location]);
@ -571,18 +571,32 @@ static int _next_phys_dev(struct mlx5_core_dev *mdev,
|
|||||||
return 1;
|
return 1;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
static void *pci_get_other_drvdata(struct device *this, struct device *other)
|
||||||
|
{
|
||||||
|
if (this->driver != other->driver)
|
||||||
|
return NULL;
|
||||||
|
|
||||||
|
return pci_get_drvdata(to_pci_dev(other));
|
||||||
|
}
|
||||||
|
|
||||||
static int next_phys_dev(struct device *dev, const void *data)
|
static int next_phys_dev(struct device *dev, const void *data)
|
||||||
{
|
{
|
||||||
struct mlx5_adev *madev = container_of(dev, struct mlx5_adev, adev.dev);
|
struct mlx5_core_dev *mdev, *this = (struct mlx5_core_dev *)data;
|
||||||
struct mlx5_core_dev *mdev = madev->mdev;
|
|
||||||
|
mdev = pci_get_other_drvdata(this->device, dev);
|
||||||
|
if (!mdev)
|
||||||
|
return 0;
|
||||||
|
|
||||||
return _next_phys_dev(mdev, data);
|
return _next_phys_dev(mdev, data);
|
||||||
}
|
}
|
||||||
|
|
||||||
static int next_phys_dev_lag(struct device *dev, const void *data)
|
static int next_phys_dev_lag(struct device *dev, const void *data)
|
||||||
{
|
{
|
||||||
struct mlx5_adev *madev = container_of(dev, struct mlx5_adev, adev.dev);
|
struct mlx5_core_dev *mdev, *this = (struct mlx5_core_dev *)data;
|
||||||
struct mlx5_core_dev *mdev = madev->mdev;
|
|
||||||
|
mdev = pci_get_other_drvdata(this->device, dev);
|
||||||
|
if (!mdev)
|
||||||
|
return 0;
|
||||||
|
|
||||||
if (!MLX5_CAP_GEN(mdev, vport_group_manager) ||
|
if (!MLX5_CAP_GEN(mdev, vport_group_manager) ||
|
||||||
!MLX5_CAP_GEN(mdev, lag_master) ||
|
!MLX5_CAP_GEN(mdev, lag_master) ||
|
||||||
@ -596,19 +610,17 @@ static int next_phys_dev_lag(struct device *dev, const void *data)
|
|||||||
static struct mlx5_core_dev *mlx5_get_next_dev(struct mlx5_core_dev *dev,
|
static struct mlx5_core_dev *mlx5_get_next_dev(struct mlx5_core_dev *dev,
|
||||||
int (*match)(struct device *dev, const void *data))
|
int (*match)(struct device *dev, const void *data))
|
||||||
{
|
{
|
||||||
struct auxiliary_device *adev;
|
struct device *next;
|
||||||
struct mlx5_adev *madev;
|
|
||||||
|
|
||||||
if (!mlx5_core_is_pf(dev))
|
if (!mlx5_core_is_pf(dev))
|
||||||
return NULL;
|
return NULL;
|
||||||
|
|
||||||
adev = auxiliary_find_device(NULL, dev, match);
|
next = bus_find_device(&pci_bus_type, NULL, dev, match);
|
||||||
if (!adev)
|
if (!next)
|
||||||
return NULL;
|
return NULL;
|
||||||
|
|
||||||
madev = container_of(adev, struct mlx5_adev, adev);
|
put_device(next);
|
||||||
put_device(&adev->dev);
|
return pci_get_drvdata(to_pci_dev(next));
|
||||||
return madev->mdev;
|
|
||||||
}
|
}
|
||||||
|
|
||||||
/* Must be called with intf_mutex held */
|
/* Must be called with intf_mutex held */
|
||||||
|
@@ -764,6 +764,7 @@ struct mlx5e_rq {
 	u8 wq_type;
 	u32 rqn;
 	struct mlx5_core_dev *mdev;
+	struct mlx5e_channel *channel;
 	u32 umr_mkey;
 	struct mlx5e_dma_info wqe_overflow;
 
@@ -1076,6 +1077,9 @@ void mlx5e_close_cq(struct mlx5e_cq *cq);
 int mlx5e_open_locked(struct net_device *netdev);
 int mlx5e_close_locked(struct net_device *netdev);
 
+void mlx5e_trigger_napi_icosq(struct mlx5e_channel *c);
+void mlx5e_trigger_napi_sched(struct napi_struct *napi);
+
 int mlx5e_open_channels(struct mlx5e_priv *priv,
 			struct mlx5e_channels *chs);
 void mlx5e_close_channels(struct mlx5e_channels *chs);
@@ -12,6 +12,7 @@ struct mlx5e_post_act;
 enum {
 	MLX5E_TC_FT_LEVEL = 0,
 	MLX5E_TC_TTC_FT_LEVEL,
+	MLX5E_TC_MISS_LEVEL,
 };
 
 struct mlx5e_tc_table {
@@ -20,6 +21,7 @@ struct mlx5e_tc_table {
 	 */
 	struct mutex t_lock;
 	struct mlx5_flow_table *t;
+	struct mlx5_flow_table *miss_t;
 	struct mlx5_fs_chains *chains;
 	struct mlx5e_post_act *post_act;
 
@@ -736,6 +736,7 @@ void mlx5e_ptp_activate_channel(struct mlx5e_ptp *c)
 	if (test_bit(MLX5E_PTP_STATE_RX, c->state)) {
 		mlx5e_ptp_rx_set_fs(c->priv);
 		mlx5e_activate_rq(&c->rq);
+		mlx5e_trigger_napi_sched(&c->napi);
 	}
 }
 
@@ -123,6 +123,8 @@ static int mlx5e_rx_reporter_err_icosq_cqe_recover(void *ctx)
 		xskrq->stats->recover++;
 	}
 
+	mlx5e_trigger_napi_icosq(icosq->channel);
+
 	mutex_unlock(&icosq->channel->icosq_recovery_lock);
 
 	return 0;
@@ -166,6 +168,10 @@ static int mlx5e_rx_reporter_err_rq_cqe_recover(void *ctx)
 	clear_bit(MLX5E_RQ_STATE_RECOVERING, &rq->state);
 	mlx5e_activate_rq(rq);
 	rq->stats->recover++;
+	if (rq->channel)
+		mlx5e_trigger_napi_icosq(rq->channel);
+	else
+		mlx5e_trigger_napi_sched(rq->cq.napi);
 	return 0;
 out:
 	clear_bit(MLX5E_RQ_STATE_RECOVERING, &rq->state);
@@ -715,7 +715,7 @@ mlx5_tc_ct_entry_create_mod_hdr(struct mlx5_tc_ct_priv *ct_priv,
 				struct mlx5_flow_attr *attr,
 				struct flow_rule *flow_rule,
 				struct mlx5e_mod_hdr_handle **mh,
-				u8 zone_restore_id, bool nat)
+				u8 zone_restore_id, bool nat_table, bool has_nat)
 {
 	DECLARE_MOD_HDR_ACTS_ACTIONS(actions_arr, MLX5_CT_MIN_MOD_ACTS);
 	DECLARE_MOD_HDR_ACTS(mod_acts, actions_arr);
@@ -731,11 +731,12 @@ mlx5_tc_ct_entry_create_mod_hdr(struct mlx5_tc_ct_priv *ct_priv,
 					&attr->ct_attr.ct_labels_id);
 	if (err)
 		return -EOPNOTSUPP;
-	if (nat) {
-		err = mlx5_tc_ct_entry_create_nat(ct_priv, flow_rule,
-						  &mod_acts);
+	if (nat_table) {
+		if (has_nat) {
+			err = mlx5_tc_ct_entry_create_nat(ct_priv, flow_rule, &mod_acts);
 			if (err)
 				goto err_mapping;
+		}
 
 		ct_state |= MLX5_CT_STATE_NAT_BIT;
 	}
@@ -750,7 +751,7 @@ mlx5_tc_ct_entry_create_mod_hdr(struct mlx5_tc_ct_priv *ct_priv,
 	if (err)
 		goto err_mapping;
 
-	if (nat) {
+	if (nat_table && has_nat) {
 		attr->modify_hdr = mlx5_modify_header_alloc(ct_priv->dev, ct_priv->ns_type,
 							    mod_acts.num_actions,
 							    mod_acts.actions);
@@ -818,7 +819,9 @@ mlx5_tc_ct_entry_add_rule(struct mlx5_tc_ct_priv *ct_priv,
 
 	err = mlx5_tc_ct_entry_create_mod_hdr(ct_priv, attr, flow_rule,
 					      &zone_rule->mh,
-					      zone_restore_id, nat);
+					      zone_restore_id,
+					      nat,
+					      mlx5_tc_ct_entry_has_nat(entry));
 	if (err) {
 		ct_dbg("Failed to create ct entry mod hdr");
 		goto err_mod_hdr;
@@ -179,6 +179,7 @@ static void mlx5e_activate_trap(struct mlx5e_trap *trap)
 {
 	napi_enable(&trap->napi);
 	mlx5e_activate_rq(&trap->rq);
+	mlx5e_trigger_napi_sched(&trap->napi);
 }
 
 void mlx5e_deactivate_trap(struct mlx5e_priv *priv)
@@ -117,6 +117,7 @@ static int mlx5e_xsk_enable_locked(struct mlx5e_priv *priv,
 		goto err_remove_pool;
 
 	mlx5e_activate_xsk(c);
+	mlx5e_trigger_napi_icosq(c);
 
 	/* Don't wait for WQEs, because the newer xdpsock sample doesn't provide
 	 * any Fill Ring entries at the setup stage.
@@ -64,6 +64,7 @@ static int mlx5e_init_xsk_rq(struct mlx5e_channel *c,
 	rq->clock = &mdev->clock;
 	rq->icosq = &c->icosq;
 	rq->ix = c->ix;
+	rq->channel = c;
 	rq->mdev = mdev;
 	rq->hw_mtu = MLX5E_SW2HW_MTU(params, params->sw_mtu);
 	rq->xdpsq = &c->rq_xdpsq;
@@ -179,10 +180,6 @@ void mlx5e_activate_xsk(struct mlx5e_channel *c)
 	mlx5e_reporter_icosq_resume_recovery(c);
 
 	/* TX queue is created active. */
-
-	spin_lock_bh(&c->async_icosq_lock);
-	mlx5e_trigger_irq(&c->async_icosq);
-	spin_unlock_bh(&c->async_icosq_lock);
 }
 
 void mlx5e_deactivate_xsk(struct mlx5e_channel *c)
@@ -475,6 +475,7 @@ static int mlx5e_init_rxq_rq(struct mlx5e_channel *c, struct mlx5e_params *param
 	rq->clock = &mdev->clock;
 	rq->icosq = &c->icosq;
 	rq->ix = c->ix;
+	rq->channel = c;
 	rq->mdev = mdev;
 	rq->hw_mtu = MLX5E_SW2HW_MTU(params, params->sw_mtu);
 	rq->xdpsq = &c->rq_xdpsq;
@@ -1066,13 +1067,6 @@ err_free_rq:
 void mlx5e_activate_rq(struct mlx5e_rq *rq)
 {
 	set_bit(MLX5E_RQ_STATE_ENABLED, &rq->state);
-	if (rq->icosq) {
-		mlx5e_trigger_irq(rq->icosq);
-	} else {
-		local_bh_disable();
-		napi_schedule(rq->cq.napi);
-		local_bh_enable();
-	}
 }
 
 void mlx5e_deactivate_rq(struct mlx5e_rq *rq)
@@ -2227,6 +2221,20 @@ static int mlx5e_channel_stats_alloc(struct mlx5e_priv *priv, int ix, int cpu)
 	return 0;
 }
 
+void mlx5e_trigger_napi_icosq(struct mlx5e_channel *c)
+{
+	spin_lock_bh(&c->async_icosq_lock);
+	mlx5e_trigger_irq(&c->async_icosq);
+	spin_unlock_bh(&c->async_icosq_lock);
+}
+
+void mlx5e_trigger_napi_sched(struct napi_struct *napi)
+{
+	local_bh_disable();
+	napi_schedule(napi);
+	local_bh_enable();
+}
+
 static int mlx5e_open_channel(struct mlx5e_priv *priv, int ix,
 			      struct mlx5e_params *params,
 			      struct mlx5e_channel_param *cparam,
@@ -2308,6 +2316,8 @@ static void mlx5e_activate_channel(struct mlx5e_channel *c)
 
 	if (test_bit(MLX5E_CHANNEL_STATE_XSK, c->state))
 		mlx5e_activate_xsk(c);
+
+	mlx5e_trigger_napi_icosq(c);
 }
 
 static void mlx5e_deactivate_channel(struct mlx5e_channel *c)
@@ -4559,6 +4569,11 @@ static int mlx5e_xdp_set(struct net_device *netdev, struct bpf_prog *prog)
 
 unlock:
 	mutex_unlock(&priv->state_lock);
+
+	/* Need to fix some features. */
+	if (!err)
+		netdev_update_features(netdev);
+
 	return err;
 }
 
@@ -4714,6 +4714,33 @@ static int mlx5e_tc_nic_get_ft_size(struct mlx5_core_dev *dev)
 	return tc_tbl_size;
 }
 
+static int mlx5e_tc_nic_create_miss_table(struct mlx5e_priv *priv)
+{
+	struct mlx5_flow_table **ft = &priv->fs.tc.miss_t;
+	struct mlx5_flow_table_attr ft_attr = {};
+	struct mlx5_flow_namespace *ns;
+	int err = 0;
+
+	ft_attr.max_fte = 1;
+	ft_attr.autogroup.max_num_groups = 1;
+	ft_attr.level = MLX5E_TC_MISS_LEVEL;
+	ft_attr.prio = 0;
+	ns = mlx5_get_flow_namespace(priv->mdev, MLX5_FLOW_NAMESPACE_KERNEL);
+
+	*ft = mlx5_create_auto_grouped_flow_table(ns, &ft_attr);
+	if (IS_ERR(*ft)) {
+		err = PTR_ERR(*ft);
+		netdev_err(priv->netdev, "failed to create tc nic miss table err=%d\n", err);
+	}
+
+	return err;
+}
+
+static void mlx5e_tc_nic_destroy_miss_table(struct mlx5e_priv *priv)
+{
+	mlx5_destroy_flow_table(priv->fs.tc.miss_t);
+}
+
 int mlx5e_tc_nic_init(struct mlx5e_priv *priv)
 {
 	struct mlx5e_tc_table *tc = &priv->fs.tc;
@@ -4746,19 +4773,23 @@ int mlx5e_tc_nic_init(struct mlx5e_priv *priv)
 	}
 	tc->mapping = chains_mapping;
 
+	err = mlx5e_tc_nic_create_miss_table(priv);
+	if (err)
+		goto err_chains;
+
 	if (MLX5_CAP_FLOWTABLE_NIC_RX(priv->mdev, ignore_flow_level))
 		attr.flags = MLX5_CHAINS_AND_PRIOS_SUPPORTED |
			MLX5_CHAINS_IGNORE_FLOW_LEVEL_SUPPORTED;
 	attr.ns = MLX5_FLOW_NAMESPACE_KERNEL;
 	attr.max_ft_sz = mlx5e_tc_nic_get_ft_size(dev);
 	attr.max_grp_num = MLX5E_TC_TABLE_NUM_GROUPS;
-	attr.default_ft = mlx5e_vlan_get_flowtable(priv->fs.vlan);
+	attr.default_ft = priv->fs.tc.miss_t;
 	attr.mapping = chains_mapping;
 
 	tc->chains = mlx5_chains_create(dev, &attr);
 	if (IS_ERR(tc->chains)) {
 		err = PTR_ERR(tc->chains);
-		goto err_chains;
+		goto err_miss;
 	}
 
 	tc->post_act = mlx5e_tc_post_act_init(priv, tc->chains, MLX5_FLOW_NAMESPACE_KERNEL);
@@ -4781,6 +4812,8 @@ err_reg:
 	mlx5_tc_ct_clean(tc->ct);
 	mlx5e_tc_post_act_destroy(tc->post_act);
 	mlx5_chains_destroy(tc->chains);
+err_miss:
+	mlx5e_tc_nic_destroy_miss_table(priv);
 err_chains:
 	mapping_destroy(chains_mapping);
 err_mapping:
@@ -4821,6 +4854,7 @@ void mlx5e_tc_nic_cleanup(struct mlx5e_priv *priv)
 	mlx5e_tc_post_act_destroy(tc->post_act);
 	mapping_destroy(tc->mapping);
 	mlx5_chains_destroy(tc->chains);
+	mlx5e_tc_nic_destroy_miss_table(priv);
 }
 
 int mlx5e_tc_ht_init(struct rhashtable *tc_ht)
@@ -114,7 +114,7 @@
 #define KERNEL_MIN_LEVEL (KERNEL_NIC_PRIO_NUM_LEVELS + 1)
 
 #define KERNEL_NIC_TC_NUM_PRIOS  1
-#define KERNEL_NIC_TC_NUM_LEVELS 2
+#define KERNEL_NIC_TC_NUM_LEVELS 3
 
 #define ANCHOR_NUM_LEVELS 1
 #define ANCHOR_NUM_PRIOS 1
@@ -44,11 +44,10 @@ static int set_miss_action(struct mlx5_flow_root_namespace *ns,
 	err = mlx5dr_table_set_miss_action(ft->fs_dr_table.dr_table, action);
 	if (err && action) {
 		err = mlx5dr_action_destroy(action);
-		if (err) {
-			action = NULL;
-			mlx5_core_err(ns->dev, "Failed to destroy action (%d)\n",
-				      err);
-		}
+		if (err)
+			mlx5_core_err(ns->dev,
+				      "Failed to destroy action (%d)\n", err);
+		action = NULL;
 	}
 	ft->fs_dr_table.miss_action = action;
 	if (old_miss_action) {
@@ -1164,9 +1164,14 @@ static int lan743x_phy_open(struct lan743x_adapter *adapter)
 		if (!phydev)
 			goto return_error;
 
-		ret = phy_connect_direct(netdev, phydev,
-					 lan743x_phy_link_status_change,
-					 PHY_INTERFACE_MODE_GMII);
+		if (adapter->is_pci11x1x)
+			ret = phy_connect_direct(netdev, phydev,
+						 lan743x_phy_link_status_change,
+						 PHY_INTERFACE_MODE_RGMII);
+		else
+			ret = phy_connect_direct(netdev, phydev,
+						 lan743x_phy_link_status_change,
+						 PHY_INTERFACE_MODE_GMII);
 		if (ret)
 			goto return_error;
 	}
@@ -2936,20 +2941,27 @@ static int lan743x_mdiobus_init(struct lan743x_adapter *adapter)
 			lan743x_csr_write(adapter, SGMII_CTL, sgmii_ctl);
 			netif_dbg(adapter, drv, adapter->netdev,
 				  "SGMII operation\n");
+			adapter->mdiobus->probe_capabilities = MDIOBUS_C22_C45;
+			adapter->mdiobus->read = lan743x_mdiobus_c45_read;
+			adapter->mdiobus->write = lan743x_mdiobus_c45_write;
+			adapter->mdiobus->name = "lan743x-mdiobus-c45";
+			netif_dbg(adapter, drv, adapter->netdev,
+				  "lan743x-mdiobus-c45\n");
 		} else {
 			sgmii_ctl = lan743x_csr_read(adapter, SGMII_CTL);
 			sgmii_ctl &= ~SGMII_CTL_SGMII_ENABLE_;
 			sgmii_ctl |= SGMII_CTL_SGMII_POWER_DN_;
 			lan743x_csr_write(adapter, SGMII_CTL, sgmii_ctl);
 			netif_dbg(adapter, drv, adapter->netdev,
-				  "(R)GMII operation\n");
+				  "RGMII operation\n");
+			// Only C22 support when RGMII I/F
+			adapter->mdiobus->probe_capabilities = MDIOBUS_C22;
+			adapter->mdiobus->read = lan743x_mdiobus_read;
+			adapter->mdiobus->write = lan743x_mdiobus_write;
+			adapter->mdiobus->name = "lan743x-mdiobus";
+			netif_dbg(adapter, drv, adapter->netdev,
+				  "lan743x-mdiobus\n");
 		}
-
-		adapter->mdiobus->probe_capabilities = MDIOBUS_C22_C45;
-		adapter->mdiobus->read = lan743x_mdiobus_c45_read;
-		adapter->mdiobus->write = lan743x_mdiobus_c45_write;
-		adapter->mdiobus->name = "lan743x-mdiobus-c45";
-		netif_dbg(adapter, drv, adapter->netdev, "lan743x-mdiobus-c45\n");
 	} else {
 		adapter->mdiobus->read = lan743x_mdiobus_read;
 		adapter->mdiobus->write = lan743x_mdiobus_write;
@@ -1120,8 +1120,13 @@ static int lan966x_probe(struct platform_device *pdev)
 		lan966x->ports[p]->fwnode = fwnode_handle_get(portnp);
 
 		serdes = devm_of_phy_get(lan966x->dev, to_of_node(portnp), NULL);
-		if (!IS_ERR(serdes))
-			lan966x->ports[p]->serdes = serdes;
+		if (PTR_ERR(serdes) == -ENODEV)
+			serdes = NULL;
+		if (IS_ERR(serdes)) {
+			err = PTR_ERR(serdes);
+			goto cleanup_ports;
+		}
+		lan966x->ports[p]->serdes = serdes;
 
 		lan966x_port_init(lan966x->ports[p]);
 	}
@@ -314,7 +314,7 @@ netdev_tx_t nfp_nfdk_tx(struct sk_buff *skb, struct net_device *netdev)
 		FIELD_PREP(NFDK_DESC_TX_TYPE_HEAD, type);
 
 	txd->dma_len_type = cpu_to_le16(dlen_type);
-	nfp_desc_set_dma_addr(txd, dma_addr);
+	nfp_nfdk_tx_desc_set_dma_addr(txd, dma_addr);
 
 	/* starts at bit 0 */
 	BUILD_BUG_ON(!(NFDK_DESC_TX_DMA_LEN_HEAD & 1));
@@ -339,7 +339,7 @@ netdev_tx_t nfp_nfdk_tx(struct sk_buff *skb, struct net_device *netdev)
 		dlen_type = FIELD_PREP(NFDK_DESC_TX_DMA_LEN, dma_len);
 
 		txd->dma_len_type = cpu_to_le16(dlen_type);
-		nfp_desc_set_dma_addr(txd, dma_addr);
+		nfp_nfdk_tx_desc_set_dma_addr(txd, dma_addr);
 
 		dma_len -= dlen_type;
 		dma_addr += dlen_type + 1;
@@ -929,7 +929,7 @@ nfp_nfdk_tx_xdp_buf(struct nfp_net_dp *dp, struct nfp_net_rx_ring *rx_ring,
 		FIELD_PREP(NFDK_DESC_TX_TYPE_HEAD, type);
 
 	txd->dma_len_type = cpu_to_le16(dlen_type);
-	nfp_desc_set_dma_addr(txd, dma_addr);
+	nfp_nfdk_tx_desc_set_dma_addr(txd, dma_addr);
 
 	tmp_dlen = dlen_type & NFDK_DESC_TX_DMA_LEN_HEAD;
 	dma_len -= tmp_dlen;
@@ -940,7 +940,7 @@ nfp_nfdk_tx_xdp_buf(struct nfp_net_dp *dp, struct nfp_net_rx_ring *rx_ring,
 		dma_len -= 1;
 		dlen_type = FIELD_PREP(NFDK_DESC_TX_DMA_LEN, dma_len);
 		txd->dma_len_type = cpu_to_le16(dlen_type);
-		nfp_desc_set_dma_addr(txd, dma_addr);
+		nfp_nfdk_tx_desc_set_dma_addr(txd, dma_addr);
 
 		dlen_type &= NFDK_DESC_TX_DMA_LEN;
 		dma_len -= dlen_type;
@@ -1332,7 +1332,7 @@ nfp_nfdk_ctrl_tx_one(struct nfp_net *nn, struct nfp_net_r_vector *r_vec,
 		FIELD_PREP(NFDK_DESC_TX_TYPE_HEAD, type);
 
 	txd->dma_len_type = cpu_to_le16(dlen_type);
-	nfp_desc_set_dma_addr(txd, dma_addr);
+	nfp_nfdk_tx_desc_set_dma_addr(txd, dma_addr);
 
 	tmp_dlen = dlen_type & NFDK_DESC_TX_DMA_LEN_HEAD;
 	dma_len -= tmp_dlen;
@@ -1343,7 +1343,7 @@ nfp_nfdk_ctrl_tx_one(struct nfp_net *nn, struct nfp_net_r_vector *r_vec,
 		dma_len -= 1;
 		dlen_type = FIELD_PREP(NFDK_DESC_TX_DMA_LEN, dma_len);
 		txd->dma_len_type = cpu_to_le16(dlen_type);
-		nfp_desc_set_dma_addr(txd, dma_addr);
+		nfp_nfdk_tx_desc_set_dma_addr(txd, dma_addr);
 
 		dlen_type &= NFDK_DESC_TX_DMA_LEN;
 		dma_len -= dlen_type;
@@ -46,8 +46,7 @@
 struct nfp_nfdk_tx_desc {
 	union {
 		struct {
-			u8 dma_addr_hi; /* High bits of host buf address */
-			u8 padding; /* Must be zero */
+			__le16 dma_addr_hi; /* High bits of host buf address */
 			__le16 dma_len_type; /* Length to DMA for this desc */
 			__le32 dma_addr_lo; /* Low 32bit of host buf addr */
 		};
@@ -117,13 +117,22 @@ struct nfp_nfdk_tx_buf;
 /* Convenience macro for writing dma address into RX/TX descriptors */
 #define nfp_desc_set_dma_addr(desc, dma_addr)				\
 	do {								\
-		__typeof(desc) __d = (desc);				\
+		__typeof__(desc) __d = (desc);				\
 		dma_addr_t __addr = (dma_addr);				\
 									\
 		__d->dma_addr_lo = cpu_to_le32(lower_32_bits(__addr));	\
 		__d->dma_addr_hi = upper_32_bits(__addr) & 0xff;	\
 	} while (0)
 
+#define nfp_nfdk_tx_desc_set_dma_addr(desc, dma_addr)			\
+	do {								\
+		__typeof__(desc) __d = (desc);				\
+		dma_addr_t __addr = (dma_addr);				\
+									\
+		__d->dma_addr_hi = cpu_to_le16(upper_32_bits(__addr) & 0xff); \
+		__d->dma_addr_lo = cpu_to_le32(lower_32_bits(__addr));	\
+	} while (0)
+
 /**
  * struct nfp_net_tx_ring - TX ring structure
  * @r_vec:	Back pointer to ring vector structure
@@ -289,8 +289,6 @@ nfp_net_get_link_ksettings(struct net_device *netdev,
 
 	/* Init to unknowns */
 	ethtool_link_ksettings_add_link_mode(cmd, supported, FIBRE);
-	ethtool_link_ksettings_add_link_mode(cmd, supported, Pause);
-	ethtool_link_ksettings_add_link_mode(cmd, advertising, Pause);
 	cmd->base.port = PORT_OTHER;
 	cmd->base.speed = SPEED_UNKNOWN;
 	cmd->base.duplex = DUPLEX_UNKNOWN;
@@ -298,6 +296,8 @@ nfp_net_get_link_ksettings(struct net_device *netdev,
 	port = nfp_port_from_netdev(netdev);
 	eth_port = nfp_port_get_eth_port(port);
 	if (eth_port) {
+		ethtool_link_ksettings_add_link_mode(cmd, supported, Pause);
+		ethtool_link_ksettings_add_link_mode(cmd, advertising, Pause);
 		cmd->base.autoneg = eth_port->aneg != NFP_ANEG_DISABLED ?
 			AUTONEG_ENABLE : AUTONEG_DISABLE;
 		nfp_net_set_fec_link_mode(eth_port, cmd);
@ -298,6 +298,7 @@ int efx_probe_interrupts(struct efx_nic *efx)
|
|||||||
efx->n_channels = 1;
|
efx->n_channels = 1;
|
||||||
efx->n_rx_channels = 1;
|
efx->n_rx_channels = 1;
|
||||||
efx->n_tx_channels = 1;
|
efx->n_tx_channels = 1;
|
||||||
|
efx->tx_channel_offset = 0;
|
||||||
efx->n_xdp_channels = 0;
|
efx->n_xdp_channels = 0;
|
||||||
efx->xdp_channel_offset = efx->n_channels;
|
efx->xdp_channel_offset = efx->n_channels;
|
||||||
rc = pci_enable_msi(efx->pci_dev);
|
rc = pci_enable_msi(efx->pci_dev);
|
||||||
@ -318,6 +319,7 @@ int efx_probe_interrupts(struct efx_nic *efx)
|
|||||||
efx->n_channels = 1 + (efx_separate_tx_channels ? 1 : 0);
|
efx->n_channels = 1 + (efx_separate_tx_channels ? 1 : 0);
|
||||||
efx->n_rx_channels = 1;
|
efx->n_rx_channels = 1;
|
||||||
efx->n_tx_channels = 1;
|
efx->n_tx_channels = 1;
|
||||||
|
efx->tx_channel_offset = 1;
|
||||||
efx->n_xdp_channels = 0;
|
efx->n_xdp_channels = 0;
|
||||||
efx->xdp_channel_offset = efx->n_channels;
|
efx->xdp_channel_offset = efx->n_channels;
|
||||||
efx->legacy_irq = efx->pci_dev->irq;
|
efx->legacy_irq = efx->pci_dev->irq;
|
||||||
@ -954,10 +956,6 @@ int efx_set_channels(struct efx_nic *efx)
|
|||||||
struct efx_channel *channel;
|
struct efx_channel *channel;
|
||||||
int rc;
|
int rc;
|
||||||
|
|
||||||
efx->tx_channel_offset =
|
|
||||||
efx_separate_tx_channels ?
|
|
||||||
efx->n_channels - efx->n_tx_channels : 0;
|
|
||||||
|
|
||||||
if (efx->xdp_tx_queue_count) {
|
if (efx->xdp_tx_queue_count) {
|
||||||
EFX_WARN_ON_PARANOID(efx->xdp_tx_queues);
|
EFX_WARN_ON_PARANOID(efx->xdp_tx_queues);
|
||||||
|
|
||||||
|
@ -1530,7 +1530,7 @@ static inline bool efx_channel_is_xdp_tx(struct efx_channel *channel)
|
|||||||
|
|
||||||
static inline bool efx_channel_has_tx_queues(struct efx_channel *channel)
|
static inline bool efx_channel_has_tx_queues(struct efx_channel *channel)
|
||||||
{
|
{
|
||||||
return true;
|
return channel && channel->channel >= channel->efx->tx_channel_offset;
|
||||||
}
|
}
|
||||||
|
|
||||||
static inline unsigned int efx_channel_num_tx_queues(struct efx_channel *channel)
|
static inline unsigned int efx_channel_num_tx_queues(struct efx_channel *channel)
|
@@ -299,6 +299,7 @@ int efx_siena_probe_interrupts(struct efx_nic *efx)
 		efx->n_channels = 1;
 		efx->n_rx_channels = 1;
 		efx->n_tx_channels = 1;
+		efx->tx_channel_offset = 0;
 		efx->n_xdp_channels = 0;
 		efx->xdp_channel_offset = efx->n_channels;
 		rc = pci_enable_msi(efx->pci_dev);
@@ -319,6 +320,7 @@ int efx_siena_probe_interrupts(struct efx_nic *efx)
 		efx->n_channels = 1 + (efx_siena_separate_tx_channels ? 1 : 0);
 		efx->n_rx_channels = 1;
 		efx->n_tx_channels = 1;
+		efx->tx_channel_offset = 1;
 		efx->n_xdp_channels = 0;
 		efx->xdp_channel_offset = efx->n_channels;
 		efx->legacy_irq = efx->pci_dev->irq;
@@ -958,10 +960,6 @@ int efx_siena_set_channels(struct efx_nic *efx)
 	struct efx_channel *channel;
 	int rc;
 
-	efx->tx_channel_offset =
-		efx_siena_separate_tx_channels ?
-		efx->n_channels - efx->n_tx_channels : 0;
-
 	if (efx->xdp_tx_queue_count) {
 		EFX_WARN_ON_PARANOID(efx->xdp_tx_queues);
 
@@ -1529,7 +1529,7 @@ static inline bool efx_channel_is_xdp_tx(struct efx_channel *channel)
 
 static inline bool efx_channel_has_tx_queues(struct efx_channel *channel)
 {
-	return true;
+	return channel && channel->channel >= channel->efx->tx_channel_offset;
 }
 
 static inline unsigned int efx_channel_num_tx_queues(struct efx_channel *channel)
@@ -1161,6 +1161,7 @@ static SIMPLE_DEV_PM_OPS(intel_eth_pm_ops, intel_eth_pci_suspend,
 #define PCI_DEVICE_ID_INTEL_ADLS_SGMII1G_0	0x7aac
 #define PCI_DEVICE_ID_INTEL_ADLS_SGMII1G_1	0x7aad
 #define PCI_DEVICE_ID_INTEL_ADLN_SGMII1G	0x54ac
+#define PCI_DEVICE_ID_INTEL_RPLP_SGMII1G	0x51ac
 
 static const struct pci_device_id intel_eth_pci_id_table[] = {
 	{ PCI_DEVICE_DATA(INTEL, QUARK, &quark_info) },
@@ -1179,6 +1180,7 @@ static const struct pci_device_id intel_eth_pci_id_table[] = {
 	{ PCI_DEVICE_DATA(INTEL, ADLS_SGMII1G_0, &adls_sgmii1g_phy0_info) },
 	{ PCI_DEVICE_DATA(INTEL, ADLS_SGMII1G_1, &adls_sgmii1g_phy1_info) },
 	{ PCI_DEVICE_DATA(INTEL, ADLN_SGMII1G, &tgl_sgmii1g_phy0_info) },
+	{ PCI_DEVICE_DATA(INTEL, RPLP_SGMII1G, &tgl_sgmii1g_phy0_info) },
 	{}
 };
 MODULE_DEVICE_TABLE(pci, intel_eth_pci_id_table);
@@ -7129,9 +7129,9 @@ int stmmac_dvr_probe(struct device *device,
 		/* MDIO bus Registration */
 		ret = stmmac_mdio_register(ndev);
 		if (ret < 0) {
-			dev_err(priv->device,
-				"%s: MDIO bus (id: %d) registration failed",
+			dev_err_probe(priv->device, ret,
+				      "%s: MDIO bus (id: %d) registration failed\n",
 				__func__, priv->plat->bus_id);
 			goto error_mdio_register;
 		}
 	}
@@ -482,7 +482,7 @@ int stmmac_mdio_register(struct net_device *ndev)
 
 	err = of_mdiobus_register(new_bus, mdio_node);
 	if (err != 0) {
-		dev_err(dev, "Cannot register the MDIO bus\n");
+		dev_err_probe(dev, err, "Cannot register the MDIO bus\n");
 		goto bus_register_fail;
 	}
 
@@ -9,6 +9,7 @@
 #include <linux/etherdevice.h>
 #include <linux/if_vlan.h>
 #include <linux/interrupt.h>
+#include <linux/irqdomain.h>
 #include <linux/kernel.h>
 #include <linux/kmemleak.h>
 #include <linux/module.h>
@@ -1788,6 +1789,7 @@ static int am65_cpsw_init_cpts(struct am65_cpsw_common *common)
 	if (IS_ERR(cpts)) {
 		int ret = PTR_ERR(cpts);
 
+		of_node_put(node);
 		if (ret == -EOPNOTSUPP) {
 			dev_info(dev, "cpts disabled\n");
 			return 0;
@@ -1981,7 +1983,9 @@ am65_cpsw_nuss_init_port_ndev(struct am65_cpsw_common *common, u32 port_idx)
 
 	phy_interface_set_rgmii(port->slave.phylink_config.supported_interfaces);
 
-	phylink = phylink_create(&port->slave.phylink_config, dev->fwnode, port->slave.phy_if,
+	phylink = phylink_create(&port->slave.phylink_config,
+				 of_node_to_fwnode(port->slave.phy_node),
+				 port->slave.phy_if,
 				 &am65_cpsw_phylink_mac_ops);
 	if (IS_ERR(phylink))
 		return PTR_ERR(phylink);
@@ -2662,9 +2666,9 @@ static int am65_cpsw_nuss_probe(struct platform_device *pdev)
 	if (!node)
 		return -ENOENT;
 	common->port_num = of_get_child_count(node);
+	of_node_put(node);
 	if (common->port_num < 1 || common->port_num > AM65_CPSW_MAX_PORTS)
 		return -ENOENT;
-	of_node_put(node);
 
 	common->rx_flow_id_base = -1;
 	init_completion(&common->tdown_complete);
@@ -1095,7 +1095,7 @@ static int ipa_endpoint_replenish_one(struct ipa_endpoint *endpoint,
 
 	ret = gsi_trans_page_add(trans, page, len, offset);
 	if (ret)
-		__free_pages(page, get_order(buffer_size));
+		put_page(page);
 	else
 		trans->data = page;	/* transaction owns page now */
 
@@ -1418,11 +1418,8 @@ void ipa_endpoint_trans_release(struct ipa_endpoint *endpoint,
 	} else {
 		struct page *page = trans->data;
 
-		if (page) {
-			u32 buffer_size = endpoint->config.rx.buffer_size;
-
-			__free_pages(page, get_order(buffer_size));
-		}
+		if (page)
+			put_page(page);
 	}
 }
 
@@ -99,6 +99,7 @@ struct pcpu_secy_stats {
  * struct macsec_dev - private data
  * @secy: SecY config
  * @real_dev: pointer to underlying netdevice
+ * @dev_tracker: refcount tracker for @real_dev reference
  * @stats: MACsec device stats
  * @secys: linked list of SecY's on the underlying device
  * @gro_cells: pointer to the Generic Receive Offload cell
@@ -107,6 +108,7 @@ struct pcpu_secy_stats {
 struct macsec_dev {
 	struct macsec_secy secy;
 	struct net_device *real_dev;
+	netdevice_tracker dev_tracker;
 	struct pcpu_secy_stats __percpu *stats;
 	struct list_head secys;
 	struct gro_cells gro_cells;
@@ -3459,6 +3461,9 @@ static int macsec_dev_init(struct net_device *dev)
 	if (is_zero_ether_addr(dev->broadcast))
 		memcpy(dev->broadcast, real_dev->broadcast, dev->addr_len);
 
+	/* Get macsec's reference to real_dev */
+	dev_hold_track(real_dev, &macsec->dev_tracker, GFP_KERNEL);
+
 	return 0;
 }
 
@@ -3704,6 +3709,8 @@ static void macsec_free_netdev(struct net_device *dev)
 	free_percpu(macsec->stats);
 	free_percpu(macsec->secy.tx_sc.stats);
 
+	/* Get rid of the macsec's reference to real_dev */
+	dev_put_track(macsec->real_dev, &macsec->dev_tracker);
 }
 
 static void macsec_setup(struct net_device *dev)
@@ -433,20 +433,21 @@ static void at803x_context_restore(struct phy_device *phydev,
 static int at803x_set_wol(struct phy_device *phydev,
 			  struct ethtool_wolinfo *wol)
 {
-	struct net_device *ndev = phydev->attached_dev;
-	const u8 *mac;
 	int ret, irq_enabled;
-	unsigned int i;
-	static const unsigned int offsets[] = {
-		AT803X_LOC_MAC_ADDR_32_47_OFFSET,
-		AT803X_LOC_MAC_ADDR_16_31_OFFSET,
-		AT803X_LOC_MAC_ADDR_0_15_OFFSET,
-	};
-
-	if (!ndev)
-		return -ENODEV;
 
 	if (wol->wolopts & WAKE_MAGIC) {
+		struct net_device *ndev = phydev->attached_dev;
+		const u8 *mac;
+		unsigned int i;
+		static const unsigned int offsets[] = {
+			AT803X_LOC_MAC_ADDR_32_47_OFFSET,
+			AT803X_LOC_MAC_ADDR_16_31_OFFSET,
+			AT803X_LOC_MAC_ADDR_0_15_OFFSET,
+		};
+
+		if (!ndev)
+			return -ENODEV;
+
 		mac = (const u8 *) ndev->dev_addr;
 
 		if (!is_valid_ether_addr(mac))
@@ -857,6 +858,9 @@ static int at803x_probe(struct phy_device *phydev)
 	if (phydev->drv->phy_id == ATH8031_PHY_ID) {
 		int ccr = phy_read(phydev, AT803X_REG_CHIP_CONFIG);
 		int mode_cfg;
+		struct ethtool_wolinfo wol = {
+			.wolopts = 0,
+		};
 
 		if (ccr < 0)
 			goto err;
@@ -872,6 +876,13 @@ static int at803x_probe(struct phy_device *phydev)
 			priv->is_fiber = true;
 			break;
 		}
+
+		/* Disable WOL by default */
+		ret = at803x_set_wol(phydev, &wol);
+		if (ret < 0) {
+			phydev_err(phydev, "failed to disable WOL on probe: %d\n", ret);
+			goto err;
+		}
 	}
 
 	return 0;
@@ -180,7 +180,7 @@ static void fixed_phy_del(int phy_addr)
 			if (fp->link_gpiod)
 				gpiod_put(fp->link_gpiod);
 			kfree(fp);
-			ida_simple_remove(&phy_fixed_ida, phy_addr);
+			ida_free(&phy_fixed_ida, phy_addr);
 			return;
 		}
 	}
@@ -244,13 +244,13 @@ static struct phy_device *__fixed_phy_register(unsigned int irq,
 	}
 
 	/* Get the next available PHY address, up to PHY_MAX_ADDR */
-	phy_addr = ida_simple_get(&phy_fixed_ida, 0, PHY_MAX_ADDR, GFP_KERNEL);
+	phy_addr = ida_alloc_max(&phy_fixed_ida, PHY_MAX_ADDR - 1, GFP_KERNEL);
 	if (phy_addr < 0)
 		return ERR_PTR(phy_addr);
 
 	ret = fixed_phy_add_gpiod(irq, phy_addr, status, gpiod);
 	if (ret < 0) {
-		ida_simple_remove(&phy_fixed_ida, phy_addr);
+		ida_free(&phy_fixed_ida, phy_addr);
 		return ERR_PTR(ret);
 	}
 
@@ -1366,6 +1366,7 @@ static const struct usb_device_id products[] = {
 	{QMI_FIXED_INTF(0x1bc7, 0x1200, 5)},	/* Telit LE920 */
 	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1201, 2)},	/* Telit LE920, LE920A4 */
 	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1230, 2)},	/* Telit LE910Cx */
+	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1250, 0)},	/* Telit LE910Cx */
 	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1260, 2)},	/* Telit LE910Cx */
 	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1261, 2)},	/* Telit LE910Cx */
 	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1900, 1)},	/* Telit LN940 series */
@@ -1388,6 +1389,7 @@ static const struct usb_device_id products[] = {
 	{QMI_FIXED_INTF(0x1e2d, 0x0083, 4)},	/* Cinterion PHxx,PXxx (1 RmNet + USB Audio)*/
 	{QMI_QUIRK_SET_DTR(0x1e2d, 0x00b0, 4)},	/* Cinterion CLS8 */
 	{QMI_FIXED_INTF(0x1e2d, 0x00b7, 0)},	/* Cinterion MV31 RmNet */
+	{QMI_FIXED_INTF(0x1e2d, 0x00b9, 0)},	/* Cinterion MV31 RmNet based on new baseline */
 	{QMI_FIXED_INTF(0x413c, 0x81a2, 8)},	/* Dell Wireless 5806 Gobi(TM) 4G LTE Mobile Broadband Card */
 	{QMI_FIXED_INTF(0x413c, 0x81a3, 8)},	/* Dell Wireless 5570 HSPA+ (42Mbps) Mobile Broadband Card */
 	{QMI_FIXED_INTF(0x413c, 0x81a4, 8)},	/* Dell Wireless 5570e HSPA+ (42Mbps) Mobile Broadband Card */
@@ -1090,7 +1090,7 @@ struct iwl_causes_list {
 	u8 addr;
 };
 
-#define CAUSE(reg, mask) \
+#define IWL_CAUSE(reg, mask) \
 	{ \
 		.mask_reg = reg, \
 		.bit = ilog2(mask), \
@@ -1101,28 +1101,28 @@ struct iwl_causes_list {
 	}
 
 static const struct iwl_causes_list causes_list_common[] = {
-	CAUSE(CSR_MSIX_FH_INT_MASK_AD, MSIX_FH_INT_CAUSES_D2S_CH0_NUM),
-	CAUSE(CSR_MSIX_FH_INT_MASK_AD, MSIX_FH_INT_CAUSES_D2S_CH1_NUM),
-	CAUSE(CSR_MSIX_FH_INT_MASK_AD, MSIX_FH_INT_CAUSES_S2D),
-	CAUSE(CSR_MSIX_FH_INT_MASK_AD, MSIX_FH_INT_CAUSES_FH_ERR),
-	CAUSE(CSR_MSIX_HW_INT_MASK_AD, MSIX_HW_INT_CAUSES_REG_ALIVE),
-	CAUSE(CSR_MSIX_HW_INT_MASK_AD, MSIX_HW_INT_CAUSES_REG_WAKEUP),
-	CAUSE(CSR_MSIX_HW_INT_MASK_AD, MSIX_HW_INT_CAUSES_REG_RESET_DONE),
-	CAUSE(CSR_MSIX_HW_INT_MASK_AD, MSIX_HW_INT_CAUSES_REG_CT_KILL),
-	CAUSE(CSR_MSIX_HW_INT_MASK_AD, MSIX_HW_INT_CAUSES_REG_RF_KILL),
-	CAUSE(CSR_MSIX_HW_INT_MASK_AD, MSIX_HW_INT_CAUSES_REG_PERIODIC),
-	CAUSE(CSR_MSIX_HW_INT_MASK_AD, MSIX_HW_INT_CAUSES_REG_SCD),
-	CAUSE(CSR_MSIX_HW_INT_MASK_AD, MSIX_HW_INT_CAUSES_REG_FH_TX),
-	CAUSE(CSR_MSIX_HW_INT_MASK_AD, MSIX_HW_INT_CAUSES_REG_HW_ERR),
-	CAUSE(CSR_MSIX_HW_INT_MASK_AD, MSIX_HW_INT_CAUSES_REG_HAP),
+	IWL_CAUSE(CSR_MSIX_FH_INT_MASK_AD, MSIX_FH_INT_CAUSES_D2S_CH0_NUM),
+	IWL_CAUSE(CSR_MSIX_FH_INT_MASK_AD, MSIX_FH_INT_CAUSES_D2S_CH1_NUM),
+	IWL_CAUSE(CSR_MSIX_FH_INT_MASK_AD, MSIX_FH_INT_CAUSES_S2D),
+	IWL_CAUSE(CSR_MSIX_FH_INT_MASK_AD, MSIX_FH_INT_CAUSES_FH_ERR),
+	IWL_CAUSE(CSR_MSIX_HW_INT_MASK_AD, MSIX_HW_INT_CAUSES_REG_ALIVE),
+	IWL_CAUSE(CSR_MSIX_HW_INT_MASK_AD, MSIX_HW_INT_CAUSES_REG_WAKEUP),
+	IWL_CAUSE(CSR_MSIX_HW_INT_MASK_AD, MSIX_HW_INT_CAUSES_REG_RESET_DONE),
+	IWL_CAUSE(CSR_MSIX_HW_INT_MASK_AD, MSIX_HW_INT_CAUSES_REG_CT_KILL),
+	IWL_CAUSE(CSR_MSIX_HW_INT_MASK_AD, MSIX_HW_INT_CAUSES_REG_RF_KILL),
+	IWL_CAUSE(CSR_MSIX_HW_INT_MASK_AD, MSIX_HW_INT_CAUSES_REG_PERIODIC),
+	IWL_CAUSE(CSR_MSIX_HW_INT_MASK_AD, MSIX_HW_INT_CAUSES_REG_SCD),
+	IWL_CAUSE(CSR_MSIX_HW_INT_MASK_AD, MSIX_HW_INT_CAUSES_REG_FH_TX),
+	IWL_CAUSE(CSR_MSIX_HW_INT_MASK_AD, MSIX_HW_INT_CAUSES_REG_HW_ERR),
+	IWL_CAUSE(CSR_MSIX_HW_INT_MASK_AD, MSIX_HW_INT_CAUSES_REG_HAP),
 };
 
 static const struct iwl_causes_list causes_list_pre_bz[] = {
-	CAUSE(CSR_MSIX_HW_INT_MASK_AD, MSIX_HW_INT_CAUSES_REG_SW_ERR),
+	IWL_CAUSE(CSR_MSIX_HW_INT_MASK_AD, MSIX_HW_INT_CAUSES_REG_SW_ERR),
 };
 
 static const struct iwl_causes_list causes_list_bz[] = {
-	CAUSE(CSR_MSIX_HW_INT_MASK_AD, MSIX_HW_INT_CAUSES_REG_SW_ERR_BZ),
+	IWL_CAUSE(CSR_MSIX_HW_INT_MASK_AD, MSIX_HW_INT_CAUSES_REG_SW_ERR_BZ),
 };
 
 static void iwl_pcie_map_list(struct iwl_trans *trans,
@@ -1053,7 +1053,6 @@ static int lbs_set_authtype(struct lbs_private *priv,
  */
 #define LBS_ASSOC_MAX_CMD_SIZE \
 	(sizeof(struct cmd_ds_802_11_associate) \
-	 - 512 /* cmd_ds_802_11_associate.iebuf */ \
 	 + LBS_MAX_SSID_TLV_SIZE \
 	 + LBS_MAX_CHANNEL_TLV_SIZE \
 	 + LBS_MAX_CF_PARAM_TLV_SIZE \
@@ -1130,8 +1129,7 @@ static int lbs_associate(struct lbs_private *priv,
 	if (sme->ie && sme->ie_len)
 		pos += lbs_add_wpa_tlv(pos, sme->ie, sme->ie_len);
 
-	len = (sizeof(*cmd) - sizeof(cmd->iebuf)) +
-		(u16)(pos - (u8 *) &cmd->iebuf);
+	len = sizeof(*cmd) + (u16)(pos - (u8 *) &cmd->iebuf);
 	cmd->hdr.size = cpu_to_le16(len);
 
 	lbs_deb_hex(LBS_DEB_ASSOC, "ASSOC_CMD", (u8 *) cmd,
@@ -528,7 +528,8 @@ struct cmd_ds_802_11_associate {
 	__le16 listeninterval;
 	__le16 bcnperiod;
 	u8 dtimperiod;
-	u8 iebuf[512];    /* Enough for required and most optional IEs */
+	/* 512 permitted - enough for required and most optional IEs */
+	u8 iebuf[];
 } __packed;
 
 struct cmd_ds_802_11_associate_response {
@@ -537,7 +538,8 @@ struct cmd_ds_802_11_associate_response {
 	__le16 capability;
 	__le16 statuscode;
 	__le16 aid;
-	u8 iebuf[512];
+	/* max 512 */
+	u8 iebuf[];
 } __packed;
 
 struct cmd_ds_802_11_set_wep {
@@ -1602,6 +1602,16 @@ free:
 	return ret;
 }
 
+void rtw_fw_update_beacon_work(struct work_struct *work)
+{
+	struct rtw_dev *rtwdev = container_of(work, struct rtw_dev,
+					      update_beacon_work);
+
+	mutex_lock(&rtwdev->mutex);
+	rtw_fw_download_rsvd_page(rtwdev);
+	mutex_unlock(&rtwdev->mutex);
+}
+
 static void rtw_fw_read_fifo_page(struct rtw_dev *rtwdev, u32 offset, u32 size,
 				  u32 *buf, u32 residue, u16 start_pg)
 {
@@ -809,6 +809,7 @@ void rtw_add_rsvd_page_pno(struct rtw_dev *rtwdev,
 void rtw_add_rsvd_page_sta(struct rtw_dev *rtwdev,
 			   struct rtw_vif *rtwvif);
 int rtw_fw_download_rsvd_page(struct rtw_dev *rtwdev);
+void rtw_fw_update_beacon_work(struct work_struct *work);
 void rtw_send_rsvd_page_h2c(struct rtw_dev *rtwdev);
 int rtw_dump_drv_rsvd_page(struct rtw_dev *rtwdev,
 			   u32 offset, u32 size, u32 *buf);
@@ -493,9 +493,7 @@ static int rtw_ops_set_tim(struct ieee80211_hw *hw, struct ieee80211_sta *sta,
 {
 	struct rtw_dev *rtwdev = hw->priv;
 
-	mutex_lock(&rtwdev->mutex);
-	rtw_fw_download_rsvd_page(rtwdev);
-	mutex_unlock(&rtwdev->mutex);
+	ieee80211_queue_work(hw, &rtwdev->update_beacon_work);
 
 	return 0;
 }
@@ -1442,6 +1442,7 @@ void rtw_core_stop(struct rtw_dev *rtwdev)
 	mutex_unlock(&rtwdev->mutex);
 
 	cancel_work_sync(&rtwdev->c2h_work);
+	cancel_work_sync(&rtwdev->update_beacon_work);
 	cancel_delayed_work_sync(&rtwdev->watch_dog_work);
 	cancel_delayed_work_sync(&coex->bt_relink_work);
 	cancel_delayed_work_sync(&coex->bt_reenable_work);
@@ -1998,6 +1999,7 @@ int rtw_core_init(struct rtw_dev *rtwdev)
 	INIT_WORK(&rtwdev->c2h_work, rtw_c2h_work);
 	INIT_WORK(&rtwdev->ips_work, rtw_ips_work);
 	INIT_WORK(&rtwdev->fw_recovery_work, rtw_fw_recovery_work);
+	INIT_WORK(&rtwdev->update_beacon_work, rtw_fw_update_beacon_work);
 	INIT_WORK(&rtwdev->ba_work, rtw_txq_ba_work);
 	skb_queue_head_init(&rtwdev->c2h_queue);
 	skb_queue_head_init(&rtwdev->coex.queue);
@@ -2008,6 +2008,7 @@ struct rtw_dev {
 	struct work_struct c2h_work;
 	struct work_struct ips_work;
 	struct work_struct fw_recovery_work;
+	struct work_struct update_beacon_work;
 
 	/* used to protect txqs list */
 	spinlock_t txq_lock;
@@ -828,7 +828,7 @@ static void xenvif_tx_build_gops(struct xenvif_queue *queue,
 			break;
 		}
 
-		work_to_do = RING_HAS_UNCONSUMED_REQUESTS(&queue->tx);
+		work_to_do = XEN_RING_NR_UNCONSUMED_REQUESTS(&queue->tx);
 		if (!work_to_do)
 			break;
 
@@ -267,7 +267,7 @@ static int arm_tod_read_trig_sel_refclk(struct idtcm_channel *channel, u8 ref)
 static bool is_single_shot(u8 mask)
 {
 	/* Treat single bit ToD masks as continuous trigger */
-	return mask <= 8 && is_power_of_2(mask);
+	return !(mask <= 8 && is_power_of_2(mask));
 }
 
 static int idtcm_extts_enable(struct idtcm_channel *channel,
@@ -61,7 +61,7 @@ struct ipv6_devconf {
 	__s32		suppress_frag_ndisc;
 	__s32		accept_ra_mtu;
 	__s32		drop_unsolicited_na;
-	__s32		accept_unsolicited_na;
+	__s32		accept_untracked_na;
 	struct ipv6_stable_secret {
 		bool initialized;
 		struct in6_addr secret;
@@ -5176,12 +5176,11 @@ struct mlx5_ifc_query_qp_out_bits {
 
 	u8         syndrome[0x20];
 
-	u8         reserved_at_40[0x20];
-	u8         ece[0x20];
+	u8         reserved_at_40[0x40];
 
 	u8         opt_param_mask[0x20];
 
-	u8         reserved_at_a0[0x20];
+	u8         ece[0x20];
 
 	struct mlx5_ifc_qpc_bits qpc;
 
|
@ -2696,7 +2696,14 @@ void *skb_pull(struct sk_buff *skb, unsigned int len);
|
|||||||
static inline void *__skb_pull(struct sk_buff *skb, unsigned int len)
|
static inline void *__skb_pull(struct sk_buff *skb, unsigned int len)
|
||||||
{
|
{
|
||||||
skb->len -= len;
|
skb->len -= len;
|
||||||
BUG_ON(skb->len < skb->data_len);
|
if (unlikely(skb->len < skb->data_len)) {
|
||||||
|
#if defined(CONFIG_DEBUG_NET)
|
||||||
|
skb->len += len;
|
||||||
|
pr_err("__skb_pull(len=%u)\n", len);
|
||||||
|
skb_dump(KERN_ERR, skb, false);
|
||||||
|
#endif
|
||||||
|
BUG();
|
||||||
|
}
|
||||||
return skb->data += len;
|
return skb->data += len;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
@ -15,7 +15,7 @@ enum amt_msg_type {
|
|||||||
AMT_MSG_MEMBERSHIP_QUERY,
|
AMT_MSG_MEMBERSHIP_QUERY,
|
||||||
AMT_MSG_MEMBERSHIP_UPDATE,
|
AMT_MSG_MEMBERSHIP_UPDATE,
|
||||||
AMT_MSG_MULTICAST_DATA,
|
AMT_MSG_MULTICAST_DATA,
|
||||||
AMT_MSG_TEARDOWM,
|
AMT_MSG_TEARDOWN,
|
||||||
__AMT_MSG_MAX,
|
__AMT_MSG_MAX,
|
||||||
};
|
};
|
||||||
|
|
||||||
|
@ -228,6 +228,7 @@ typedef struct ax25_dev {
|
|||||||
ax25_dama_info dama;
|
ax25_dama_info dama;
|
||||||
#endif
|
#endif
|
||||||
refcount_t refcount;
|
refcount_t refcount;
|
||||||
|
bool device_up;
|
||||||
} ax25_dev;
|
} ax25_dev;
|
||||||
|
|
||||||
typedef struct ax25_cb {
|
typedef struct ax25_cb {
|
@@ -149,7 +149,9 @@ struct bond_params {
 	struct reciprocal_value reciprocal_packets_per_slave;
 	u16 ad_actor_sys_prio;
 	u16 ad_user_port_key;
+#if IS_ENABLED(CONFIG_IPV6)
 	struct in6_addr ns_targets[BOND_MAX_NS_TARGETS];
+#endif
 
 	/* 2 bytes of padding : see ether_addr_equal_64bits() */
 	u8 ad_actor_system[ETH_ALEN + 2];
@@ -503,12 +505,14 @@ static inline int bond_is_ip_target_ok(__be32 addr)
 	return !ipv4_is_lbcast(addr) && !ipv4_is_zeronet(addr);
 }
 
+#if IS_ENABLED(CONFIG_IPV6)
 static inline int bond_is_ip6_target_ok(struct in6_addr *addr)
 {
 	return !ipv6_addr_any(addr) &&
 	       !ipv6_addr_loopback(addr) &&
 	       !ipv6_addr_is_multicast(addr);
 }
+#endif
 
 /* Get the oldest arp which we've received on this slave for bond's
  * arp_targets.
@@ -746,6 +750,7 @@ static inline int bond_get_targets_ip(__be32 *targets, __be32 ip)
 	return -1;
 }
 
+#if IS_ENABLED(CONFIG_IPV6)
 static inline int bond_get_targets_ip6(struct in6_addr *targets, struct in6_addr *ip)
 {
 	int i;
@@ -758,6 +763,7 @@ static inline int bond_get_targets_ip6(struct in6_addr *targets, struct in6_addr
 
 	return -1;
 }
+#endif
 
 /* exported from bond_main.c */
 extern unsigned int bond_net_id;
@@ -58,8 +58,13 @@ static inline int nf_conntrack_confirm(struct sk_buff *skb)
 	int ret = NF_ACCEPT;
 
 	if (ct) {
-		if (!nf_ct_is_confirmed(ct))
+		if (!nf_ct_is_confirmed(ct)) {
 			ret = __nf_conntrack_confirm(skb);
+
+			if (ret == NF_ACCEPT)
+				ct = (struct nf_conn *)skb_nfct(skb);
+		}
+
 		if (ret == NF_ACCEPT && nf_ct_ecache_exist(ct))
 			nf_ct_deliver_cached_events(ct);
 	}
||||||
|
@@ -187,37 +187,17 @@ static inline bool qdisc_run_begin(struct Qdisc *qdisc)
 		if (spin_trylock(&qdisc->seqlock))
 			return true;
 
-		/* Paired with smp_mb__after_atomic() to make sure
-		 * STATE_MISSED checking is synchronized with clearing
-		 * in pfifo_fast_dequeue().
+		/* No need to insist if the MISSED flag was already set.
+		 * Note that test_and_set_bit() also gives us memory ordering
+		 * guarantees wrt potential earlier enqueue() and below
+		 * spin_trylock(), both of which are necessary to prevent races
 		 */
-		smp_mb__before_atomic();
-
-		/* If the MISSED flag is set, it means other thread has
-		 * set the MISSED flag before second spin_trylock(), so
-		 * we can return false here to avoid multi cpus doing
-		 * the set_bit() and second spin_trylock() concurrently.
-		 */
-		if (test_bit(__QDISC_STATE_MISSED, &qdisc->state))
+		if (test_and_set_bit(__QDISC_STATE_MISSED, &qdisc->state))
 			return false;
 
-		/* Set the MISSED flag before the second spin_trylock(),
-		 * if the second spin_trylock() return false, it means
-		 * other cpu holding the lock will do dequeuing for us
-		 * or it will see the MISSED flag set after releasing
-		 * lock and reschedule the net_tx_action() to do the
-		 * dequeuing.
-		 */
-		set_bit(__QDISC_STATE_MISSED, &qdisc->state);
-
-		/* spin_trylock() only has load-acquire semantic, so use
-		 * smp_mb__after_atomic() to ensure STATE_MISSED is set
-		 * before doing the second spin_trylock().
-		 */
-		smp_mb__after_atomic();
-
-		/* Retry again in case other CPU may not see the new flag
-		 * after it releases the lock at the end of qdisc_run_end().
+		/* Try to take the lock again to make sure that we will either
+		 * grab it or the CPU that still has it will see MISSED set
+		 * when testing it in qdisc_run_end()
 		 */
 		return spin_trylock(&qdisc->seqlock);
 	}
@@ -229,6 +209,12 @@ static inline void qdisc_run_end(struct Qdisc *qdisc)
 	if (qdisc->flags & TCQ_F_NOLOCK) {
 		spin_unlock(&qdisc->seqlock);
 
+		/* spin_unlock() only has store-release semantic. The unlock
+		 * and test_bit() ordering is a store-load ordering, so a full
+		 * memory barrier is needed here.
+		 */
+		smp_mb();
+
 		if (unlikely(test_bit(__QDISC_STATE_MISSED,
 				      &qdisc->state)))
 			__netif_schedule(qdisc);
@@ -194,7 +194,7 @@ enum {
 	DEVCONF_IOAM6_ID,
 	DEVCONF_IOAM6_ID_WIDE,
 	DEVCONF_NDISC_EVICT_NOCARRIER,
-	DEVCONF_ACCEPT_UNSOLICITED_NA,
+	DEVCONF_ACCEPT_UNTRACKED_NA,
 	DEVCONF_MAX
 };
 
@@ -31,7 +31,7 @@ struct __kernel_sockaddr_storage {
 
 #define SOCK_BUF_LOCK_MASK (SOCK_SNDBUF_LOCK | SOCK_RCVBUF_LOCK)
 
-#define SOCK_TXREHASH_DEFAULT	((u8)-1)
+#define SOCK_TXREHASH_DEFAULT	255
 #define SOCK_TXREHASH_DISABLED	0
 #define SOCK_TXREHASH_ENABLED	1
 
@@ -1953,6 +1953,11 @@ out:
 		CONT;							\
 	LDX_MEM_##SIZEOP:						\
 		DST = *(SIZE *)(unsigned long) (SRC + insn->off);	\
+		CONT;							\
+	LDX_PROBE_MEM_##SIZEOP:						\
+		bpf_probe_read_kernel(&DST, sizeof(SIZE),		\
+				      (const void *)(long) (SRC + insn->off));	\
+		DST = *((SIZE *)&DST);					\
 		CONT;
 
 	LDST(B,   u8)
@@ -1960,15 +1965,6 @@ out:
 	LDST(W,  u32)
 	LDST(DW, u64)
 #undef LDST
-#define LDX_PROBE(SIZEOP, SIZE)						\
-	LDX_PROBE_MEM_##SIZEOP:						\
-		bpf_probe_read_kernel(&DST, SIZE, (const void *)(long) (SRC + insn->off));	\
-		CONT;
-	LDX_PROBE(B,  1)
-	LDX_PROBE(H,  2)
-	LDX_PROBE(W,  4)
-	LDX_PROBE(DW, 8)
-#undef LDX_PROBE
 
 #define ATOMIC_ALU_OP(BOP, KOP)						\
 	case BOP:							\
@@ -20,7 +20,7 @@ config NET_NS_REFCNT_TRACKER
 
 config DEBUG_NET
 	bool "Add generic networking debug"
-	depends on DEBUG_KERNEL
+	depends on DEBUG_KERNEL && NET
 	help
 	  Enable extra sanity checks in networking.
 	  This is mostly used by fuzzers, but is safe to select.
@@ -62,12 +62,12 @@ static void ax25_free_sock(struct sock *sk)
  */
 static void ax25_cb_del(ax25_cb *ax25)
 {
+	spin_lock_bh(&ax25_list_lock);
 	if (!hlist_unhashed(&ax25->ax25_node)) {
-		spin_lock_bh(&ax25_list_lock);
 		hlist_del_init(&ax25->ax25_node);
-		spin_unlock_bh(&ax25_list_lock);
 		ax25_cb_put(ax25);
 	}
+	spin_unlock_bh(&ax25_list_lock);
 }
 
 /*
@@ -81,6 +81,7 @@ static void ax25_kill_by_device(struct net_device *dev)
 
 	if ((ax25_dev = ax25_dev_ax25dev(dev)) == NULL)
 		return;
+	ax25_dev->device_up = false;
 
 	spin_lock_bh(&ax25_list_lock);
 again:
@@ -91,6 +92,7 @@ again:
 			spin_unlock_bh(&ax25_list_lock);
 			ax25_disconnect(s, ENETUNREACH);
 			s->ax25_dev = NULL;
+			ax25_cb_del(s);
 			spin_lock_bh(&ax25_list_lock);
 			goto again;
 		}
@@ -103,6 +105,7 @@ again:
 				dev_put_track(ax25_dev->dev, &ax25_dev->dev_tracker);
 				ax25_dev_put(ax25_dev);
 			}
+			ax25_cb_del(s);
 			release_sock(sk);
 			spin_lock_bh(&ax25_list_lock);
 			sock_put(sk);
@@ -995,9 +998,11 @@ static int ax25_release(struct socket *sock)
 	if (sk->sk_type == SOCK_SEQPACKET) {
 		switch (ax25->state) {
 		case AX25_STATE_0:
-			release_sock(sk);
-			ax25_disconnect(ax25, 0);
-			lock_sock(sk);
+			if (!sock_flag(ax25->sk, SOCK_DEAD)) {
+				release_sock(sk);
+				ax25_disconnect(ax25, 0);
+				lock_sock(sk);
+			}
 			ax25_destroy_socket(ax25);
 			break;
 
@@ -1053,11 +1058,13 @@ static int ax25_release(struct socket *sock)
 		ax25_destroy_socket(ax25);
 	}
 	if (ax25_dev) {
-		del_timer_sync(&ax25->timer);
-		del_timer_sync(&ax25->t1timer);
-		del_timer_sync(&ax25->t2timer);
-		del_timer_sync(&ax25->t3timer);
-		del_timer_sync(&ax25->idletimer);
+		if (!ax25_dev->device_up) {
+			del_timer_sync(&ax25->timer);
+			del_timer_sync(&ax25->t1timer);
+			del_timer_sync(&ax25->t2timer);
+			del_timer_sync(&ax25->t3timer);
+			del_timer_sync(&ax25->idletimer);
+		}
 		dev_put_track(ax25_dev->dev, &ax25_dev->dev_tracker);
 		ax25_dev_put(ax25_dev);
 	}
@@ -62,6 +62,7 @@ void ax25_dev_device_up(struct net_device *dev)
 	ax25_dev->dev     = dev;
 	dev_hold_track(dev, &ax25_dev->dev_tracker, GFP_ATOMIC);
 	ax25_dev->forward = NULL;
+	ax25_dev->device_up = true;
 
 	ax25_dev->values[AX25_VALUES_IPDEFMODE] = AX25_DEF_IPDEFMODE;
 	ax25_dev->values[AX25_VALUES_AXDEFMODE] = AX25_DEF_AXDEFMODE;
@@ -268,7 +268,7 @@ void ax25_disconnect(ax25_cb *ax25, int reason)
 		del_timer_sync(&ax25->t3timer);
 		del_timer_sync(&ax25->idletimer);
 	} else {
-		if (!ax25->sk || !sock_flag(ax25->sk, SOCK_DESTROY))
+		if (ax25->sk && !sock_flag(ax25->sk, SOCK_DESTROY))
 			ax25_stop_heartbeat(ax25);
 		ax25_stop_t1timer(ax25);
 		ax25_stop_t2timer(ax25);
@@ -1579,7 +1579,7 @@ static void neigh_managed_work(struct work_struct *work)
 	list_for_each_entry(neigh, &tbl->managed_list, managed_list)
 		neigh_event_send_probe(neigh, NULL, false);
 	queue_delayed_work(system_power_efficient_wq, &tbl->managed_work,
-			   NEIGH_VAR(&tbl->parms, DELAY_PROBE_TIME));
+			   max(NEIGH_VAR(&tbl->parms, DELAY_PROBE_TIME), HZ));
 	write_unlock_bh(&tbl->lock);
 }
 
@@ -2706,12 +2706,15 @@ static void tcp_mtup_probe_success(struct sock *sk)
 {
 	struct tcp_sock *tp = tcp_sk(sk);
 	struct inet_connection_sock *icsk = inet_csk(sk);
+	u64 val;
 
-	/* FIXME: breaks with very large cwnd */
 	tp->prior_ssthresh = tcp_current_ssthresh(sk);
-	tcp_snd_cwnd_set(tp, tcp_snd_cwnd(tp) *
-		       tcp_mss_to_mtu(sk, tp->mss_cache) /
-		       icsk->icsk_mtup.probe_size);
+
+	val = (u64)tcp_snd_cwnd(tp) * tcp_mss_to_mtu(sk, tp->mss_cache);
+	do_div(val, icsk->icsk_mtup.probe_size);
+	DEBUG_NET_WARN_ON_ONCE((u32)val != val);
+	tcp_snd_cwnd_set(tp, max_t(u32, 1U, val));
+
 	tp->snd_cwnd_cnt = 0;
 	tp->snd_cwnd_stamp = tcp_jiffies32;
 	tp->snd_ssthresh = tcp_current_ssthresh(sk);
@@ -1207,8 +1207,8 @@ int tcp_md5_do_add(struct sock *sk, const union tcp_md5_addr *addr,
 	key->l3index = l3index;
 	key->flags = flags;
 	memcpy(&key->addr, addr,
-	       (family == AF_INET6) ? sizeof(struct in6_addr) :
+	       (IS_ENABLED(CONFIG_IPV6) && family == AF_INET6) ? sizeof(struct in6_addr) :
 				      sizeof(struct in_addr));
 	hlist_add_head_rcu(&key->node, &md5sig->head);
 	return 0;
 }
@@ -4115,8 +4115,8 @@ int tcp_rtx_synack(const struct sock *sk, struct request_sock *req)
 	res = af_ops->send_synack(sk, NULL, &fl, req, NULL, TCP_SYNACK_NORMAL,
 				  NULL);
 	if (!res) {
-		__TCP_INC_STATS(sock_net(sk), TCP_MIB_RETRANSSEGS);
-		__NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPSYNRETRANS);
+		TCP_INC_STATS(sock_net(sk), TCP_MIB_RETRANSSEGS);
+		NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPSYNRETRANS);
 		if (unlikely(tcp_passive_fastopen(sk)))
 			tcp_sk(sk)->total_retrans++;
 		trace_tcp_retransmit_synack(sk, req);
@@ -5586,7 +5586,7 @@ static inline void ipv6_store_devconf(struct ipv6_devconf *cnf,
 	array[DEVCONF_IOAM6_ID] = cnf->ioam6_id;
 	array[DEVCONF_IOAM6_ID_WIDE] = cnf->ioam6_id_wide;
 	array[DEVCONF_NDISC_EVICT_NOCARRIER] = cnf->ndisc_evict_nocarrier;
-	array[DEVCONF_ACCEPT_UNSOLICITED_NA] = cnf->accept_unsolicited_na;
+	array[DEVCONF_ACCEPT_UNTRACKED_NA] = cnf->accept_untracked_na;
 }
 
 static inline size_t inet6_ifla6_size(void)
@@ -7038,8 +7038,8 @@ static const struct ctl_table addrconf_sysctl[] = {
 		.extra2		= (void *)SYSCTL_ONE,
 	},
 	{
-		.procname	= "accept_unsolicited_na",
-		.data		= &ipv6_devconf.accept_unsolicited_na,
+		.procname	= "accept_untracked_na",
+		.data		= &ipv6_devconf.accept_untracked_na,
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
@@ -979,7 +979,7 @@ static void ndisc_recv_na(struct sk_buff *skb)
 	struct inet6_dev *idev = __in6_dev_get(dev);
 	struct inet6_ifaddr *ifp;
 	struct neighbour *neigh;
-	bool create_neigh;
+	u8 new_state;
 
 	if (skb->len < sizeof(struct nd_msg)) {
 		ND_PRINTK(2, warn, "NA: packet too short\n");
@@ -1000,7 +1000,7 @@ static void ndisc_recv_na(struct sk_buff *skb)
 	/* For some 802.11 wireless deployments (and possibly other networks),
 	 * there will be a NA proxy and unsolicitd packets are attacks
 	 * and thus should not be accepted.
-	 * drop_unsolicited_na takes precedence over accept_unsolicited_na
+	 * drop_unsolicited_na takes precedence over accept_untracked_na
 	 */
 	if (!msg->icmph.icmp6_solicited && idev &&
 	    idev->cnf.drop_unsolicited_na)
@@ -1041,25 +1041,33 @@ static void ndisc_recv_na(struct sk_buff *skb)
 			in6_ifa_put(ifp);
 		return;
 	}
 
+	neigh = neigh_lookup(&nd_tbl, &msg->target, dev);
+
 	/* RFC 9131 updates original Neighbour Discovery RFC 4861.
-	 * An unsolicited NA can now create a neighbour cache entry
-	 * on routers if it has Target LL Address option.
+	 * NAs with Target LL Address option without a corresponding
+	 * entry in the neighbour cache can now create a STALE neighbour
+	 * cache entry on routers.
+	 *
+	 *   entry  accept  fwding  solicited        behaviour
+	 * ------- ------  ------  ---------    ----------------------
+	 * present      X       X          0    Set state to STALE
+	 * present      X       X          1    Set state to REACHABLE
+	 *  absent      0       X          X    Do nothing
+	 *  absent      1       0          X    Do nothing
+	 *  absent      1       1          X    Add a new STALE entry
 	 *
-	 * drop   accept  fwding                   behaviour
-	 * ----   ------  ------  ----------------------------------------------
-	 *    1        X       X  Drop NA packet and don't pass up the stack
-	 *    0        0       X  Pass NA packet up the stack, don't update NC
-	 *    0        1       0  Pass NA packet up the stack, don't update NC
-	 *    0        1       1  Pass NA packet up the stack, and add a STALE
-	 *                        NC entry
 	 * Note that we don't do a (daddr == all-routers-mcast) check.
 	 */
-	create_neigh = !msg->icmph.icmp6_solicited && lladdr &&
-		       idev && idev->cnf.forwarding &&
-		       idev->cnf.accept_unsolicited_na;
-	neigh = __neigh_lookup(&nd_tbl, &msg->target, dev, create_neigh);
+	new_state = msg->icmph.icmp6_solicited ? NUD_REACHABLE : NUD_STALE;
+	if (!neigh && lladdr &&
+	    idev && idev->cnf.forwarding &&
+	    idev->cnf.accept_untracked_na) {
+		neigh = neigh_create(&nd_tbl, &msg->target, dev);
+		new_state = NUD_STALE;
+	}
 
-	if (neigh) {
+	if (neigh && !IS_ERR(neigh)) {
 		u8 old_flags = neigh->flags;
 		struct net *net = dev_net(dev);
 
@@ -1079,7 +1087,7 @@ static void ndisc_recv_na(struct sk_buff *skb)
 		}
 
 		ndisc_update(dev, neigh, lladdr,
-			     msg->icmph.icmp6_solicited ? NUD_REACHABLE : NUD_STALE,
+			     new_state,
 			     NEIGH_UPDATE_F_WEAK_OVERRIDE|
 			     (msg->icmph.icmp6_override ? NEIGH_UPDATE_F_OVERRIDE : 0)|
 			     NEIGH_UPDATE_F_OVERRIDE_ISROUTER|
@@ -101,6 +101,9 @@ static int ping_v6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
 	ipc6.sockc.tsflags = sk->sk_tsflags;
 	ipc6.sockc.mark = sk->sk_mark;
 
+	memset(&fl6, 0, sizeof(fl6));
+	fl6.flowi6_oif = oif;
+
 	if (msg->msg_controllen) {
 		struct ipv6_txoptions opt = {};
 
@@ -112,17 +115,14 @@ static int ping_v6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
 			return err;
 
 		/* Changes to txoptions and flow info are not implemented, yet.
-		 * Drop the options, fl6 is wiped below.
+		 * Drop the options.
 		 */
 		ipc6.opt = NULL;
 	}
 
-	memset(&fl6, 0, sizeof(fl6));
-
 	fl6.flowi6_proto = IPPROTO_ICMPV6;
 	fl6.saddr = np->saddr;
 	fl6.daddr = *daddr;
-	fl6.flowi6_oif = oif;
 	fl6.flowi6_mark = ipc6.sockc.mark;
 	fl6.flowi6_uid = sk->sk_uid;
 	fl6.fl6_icmp_type = user_icmph.icmp6_type;
@@ -2826,10 +2826,12 @@ static int pfkey_process(struct sock *sk, struct sk_buff *skb, const struct sadb
 	void *ext_hdrs[SADB_EXT_MAX];
 	int err;
 
-	err = pfkey_broadcast(skb_clone(skb, GFP_KERNEL), GFP_KERNEL,
-			      BROADCAST_PROMISC_ONLY, NULL, sock_net(sk));
-	if (err)
-		return err;
+	/* Non-zero return value of pfkey_broadcast() does not always signal
+	 * an error and even on an actual error we may still want to process
+	 * the message so rather ignore the return value.
+	 */
+	pfkey_broadcast(skb_clone(skb, GFP_KERNEL), GFP_KERNEL,
+			BROADCAST_PROMISC_ONLY, NULL, sock_net(sk));
 
 	memset(ext_hdrs, 0, sizeof(ext_hdrs));
 	err = parse_exthdrs(skb, hdr, ext_hdrs);
@@ -1749,12 +1749,9 @@ int ieee80211_vif_use_reserved_context(struct ieee80211_sub_if_data *sdata)
 
 	if (new_ctx->replace_state == IEEE80211_CHANCTX_REPLACE_NONE) {
 		if (old_ctx)
-			err = ieee80211_vif_use_reserved_reassign(sdata);
-		else
-			err = ieee80211_vif_use_reserved_assign(sdata);
+			return ieee80211_vif_use_reserved_reassign(sdata);
 
-		if (err)
-			return err;
+		return ieee80211_vif_use_reserved_assign(sdata);
 	}
 
 	/*
@@ -222,12 +222,18 @@ err_register:
 }
 
 static void nft_netdev_unregister_hooks(struct net *net,
-					struct list_head *hook_list)
+					struct list_head *hook_list,
+					bool release_netdev)
 {
-	struct nft_hook *hook;
+	struct nft_hook *hook, *next;
 
-	list_for_each_entry(hook, hook_list, list)
+	list_for_each_entry_safe(hook, next, hook_list, list) {
 		nf_unregister_net_hook(net, &hook->ops);
+		if (release_netdev) {
+			list_del(&hook->list);
+			kfree_rcu(hook, rcu);
+		}
+	}
 }
 
 static int nf_tables_register_hook(struct net *net,
@@ -253,9 +259,10 @@ static int nf_tables_register_hook(struct net *net,
 	return nf_register_net_hook(net, &basechain->ops);
 }
 
-static void nf_tables_unregister_hook(struct net *net,
+static void __nf_tables_unregister_hook(struct net *net,
 				      const struct nft_table *table,
-				      struct nft_chain *chain)
+				      struct nft_chain *chain,
+				      bool release_netdev)
 {
 	struct nft_base_chain *basechain;
 	const struct nf_hook_ops *ops;
@@ -270,11 +277,19 @@ static void nf_tables_unregister_hook(struct net *net,
 		return basechain->type->ops_unregister(net, ops);
 
 	if (nft_base_chain_netdev(table->family, basechain->ops.hooknum))
-		nft_netdev_unregister_hooks(net, &basechain->hook_list);
+		nft_netdev_unregister_hooks(net, &basechain->hook_list,
+					    release_netdev);
 	else
 		nf_unregister_net_hook(net, &basechain->ops);
 }
 
+static void nf_tables_unregister_hook(struct net *net,
+				      const struct nft_table *table,
+				      struct nft_chain *chain)
+{
+	return __nf_tables_unregister_hook(net, table, chain, false);
+}
+
 static void nft_trans_commit_list_add_tail(struct net *net, struct nft_trans *trans)
 {
 	struct nftables_pernet *nft_net = nft_pernet(net);
@@ -2873,27 +2888,31 @@ static struct nft_expr *nft_expr_init(const struct nft_ctx *ctx,
 
 	err = nf_tables_expr_parse(ctx, nla, &expr_info);
 	if (err < 0)
-		goto err1;
+		goto err_expr_parse;
 
+	err = -EOPNOTSUPP;
+	if (!(expr_info.ops->type->flags & NFT_EXPR_STATEFUL))
+		goto err_expr_stateful;
+
 	err = -ENOMEM;
 	expr = kzalloc(expr_info.ops->size, GFP_KERNEL_ACCOUNT);
 	if (expr == NULL)
-		goto err2;
+		goto err_expr_stateful;
 
 	err = nf_tables_newexpr(ctx, &expr_info, expr);
 	if (err < 0)
-		goto err3;
+		goto err_expr_new;
 
 	return expr;
-err3:
+err_expr_new:
 	kfree(expr);
-err2:
+err_expr_stateful:
 	owner = expr_info.ops->type->owner;
 	if (expr_info.ops->type->release_ops)
 		expr_info.ops->type->release_ops(expr_info.ops);
 
 	module_put(owner);
-err1:
+err_expr_parse:
 	return ERR_PTR(err);
 }
 
@@ -4242,6 +4261,9 @@ static int nft_set_desc_concat_parse(const struct nlattr *attr,
 	u32 len;
 	int err;
 
+	if (desc->field_count >= ARRAY_SIZE(desc->field_len))
+		return -E2BIG;
+
 	err = nla_parse_nested_deprecated(tb, NFTA_SET_FIELD_MAX, attr,
 					  nft_concat_policy, NULL);
 	if (err < 0)
@@ -4251,9 +4273,8 @@ static int nft_set_desc_concat_parse(const struct nlattr *attr,
 		return -EINVAL;
 
 	len = ntohl(nla_get_be32(tb[NFTA_SET_FIELD_LEN]));
-
-	if (len * BITS_PER_BYTE / 32 > NFT_REG32_COUNT)
-		return -E2BIG;
+	if (!len || len > U8_MAX)
+		return -EINVAL;
 
 	desc->field_len[desc->field_count++] = len;
 
@@ -4264,7 +4285,8 @@ static int nft_set_desc_concat(struct nft_set_desc *desc,
 			       const struct nlattr *nla)
 {
 	struct nlattr *attr;
-	int rem, err;
+	u32 num_regs = 0;
+	int rem, err, i;
 
 	nla_for_each_nested(attr, nla, rem) {
 		if (nla_type(attr) != NFTA_LIST_ELEM)
@@ -4275,6 +4297,12 @@ static int nft_set_desc_concat(struct nft_set_desc *desc,
 			return err;
 	}
 
+	for (i = 0; i < desc->field_count; i++)
+		num_regs += DIV_ROUND_UP(desc->field_len[i], sizeof(u32));
+
+	if (num_regs > NFT_REG32_COUNT)
+		return -E2BIG;
+
 	return 0;
 }
 
@@ -5344,8 +5372,10 @@ static int nf_tables_getsetelem(struct sk_buff *skb,
 
 	nla_for_each_nested(attr, nla[NFTA_SET_ELEM_LIST_ELEMENTS], rem) {
 		err = nft_get_set_elem(&ctx, set, attr);
-		if (err < 0)
+		if (err < 0) {
+			NL_SET_BAD_ATTR(extack, attr);
 			break;
+		}
 	}
 
 	return err;
@@ -5413,9 +5443,6 @@ struct nft_expr *nft_set_elem_expr_alloc(const struct nft_ctx *ctx,
 		return expr;
 
 	err = -EOPNOTSUPP;
-	if (!(expr->ops->type->flags & NFT_EXPR_STATEFUL))
-		goto err_set_elem_expr;
-
 	if (expr->ops->type->flags & NFT_EXPR_GC) {
 		if (set->flags & NFT_SET_TIMEOUT)
 			goto err_set_elem_expr;
@@ -6125,8 +6152,10 @@ static int nf_tables_newsetelem(struct sk_buff *skb,
 
 	nla_for_each_nested(attr, nla[NFTA_SET_ELEM_LIST_ELEMENTS], rem) {
 		err = nft_add_set_elem(&ctx, set, attr, info->nlh->nlmsg_flags);
-		if (err < 0)
+		if (err < 0) {
+			NL_SET_BAD_ATTR(extack, attr);
 			return err;
+		}
 	}
 
 	if (nft_net->validate_state == NFT_VALIDATE_DO)
@@ -6396,8 +6425,10 @@ static int nf_tables_delsetelem(struct sk_buff *skb,
 
 	nla_for_each_nested(attr, nla[NFTA_SET_ELEM_LIST_ELEMENTS], rem) {
 		err = nft_del_setelem(&ctx, set, attr);
-		if (err < 0)
+		if (err < 0) {
+			NL_SET_BAD_ATTR(extack, attr);
 			break;
+		}
 	}
 	return err;
 }
@@ -7291,13 +7322,25 @@ static void nft_unregister_flowtable_hook(struct net *net,
 				    FLOW_BLOCK_UNBIND);
 }
 
+static void __nft_unregister_flowtable_net_hooks(struct net *net,
+						 struct list_head *hook_list,
+						 bool release_netdev)
+{
+	struct nft_hook *hook, *next;
+
+	list_for_each_entry_safe(hook, next, hook_list, list) {
+		nf_unregister_net_hook(net, &hook->ops);
+		if (release_netdev) {
+			list_del(&hook->list);
+			kfree_rcu(hook);
+		}
+	}
+}
+
 static void nft_unregister_flowtable_net_hooks(struct net *net,
 					       struct list_head *hook_list)
 {
-	struct nft_hook *hook;
-
-	list_for_each_entry(hook, hook_list, list)
-		nf_unregister_net_hook(net, &hook->ops);
+	__nft_unregister_flowtable_net_hooks(net, hook_list, false);
 }
 
 static int nft_register_flowtable_net_hooks(struct net *net,
@@ -9739,9 +9782,10 @@ static void __nft_release_hook(struct net *net, struct nft_table *table)
 	struct nft_chain *chain;
 
 	list_for_each_entry(chain, &table->chains, list)
-		nf_tables_unregister_hook(net, table, chain);
+		__nf_tables_unregister_hook(net, table, chain, true);
 	list_for_each_entry(flowtable, &table->flowtables, list)
-		nft_unregister_flowtable_net_hooks(net, &flowtable->hook_list);
+		__nft_unregister_flowtable_net_hooks(net, &flowtable->hook_list,
+						     true);
 }
 
 static void __nft_release_hooks(struct net *net)
@@ -9880,7 +9924,11 @@ static int __net_init nf_tables_init_net(struct net *net)
 
 static void __net_exit nf_tables_pre_exit_net(struct net *net)
 {
+	struct nftables_pernet *nft_net = nft_pernet(net);
+
+	mutex_lock(&nft_net->commit_mutex);
 	__nft_release_hooks(net);
+	mutex_unlock(&nft_net->commit_mutex);
 }
 
 static void __net_exit nf_tables_exit_net(struct net *net)
@@ -45,7 +45,6 @@ MODULE_DESCRIPTION("Netfilter messages via netlink socket");
 static unsigned int nfnetlink_pernet_id __read_mostly;
 
 struct nfnl_net {
-	unsigned int ctnetlink_listeners;
 	struct sock *nfnl;
 };
 
@@ -673,18 +672,8 @@ static int nfnetlink_bind(struct net *net, int group)
 
 #ifdef CONFIG_NF_CONNTRACK_EVENTS
 	if (type == NFNL_SUBSYS_CTNETLINK) {
-		struct nfnl_net *nfnlnet = nfnl_pernet(net);
-
 		nfnl_lock(NFNL_SUBSYS_CTNETLINK);
-		if (WARN_ON_ONCE(nfnlnet->ctnetlink_listeners == UINT_MAX)) {
-			nfnl_unlock(NFNL_SUBSYS_CTNETLINK);
-			return -EOVERFLOW;
-		}
-
-		nfnlnet->ctnetlink_listeners++;
-		if (nfnlnet->ctnetlink_listeners == 1)
-			WRITE_ONCE(net->ct.ctnetlink_has_listener, true);
+		WRITE_ONCE(net->ct.ctnetlink_has_listener, true);
 		nfnl_unlock(NFNL_SUBSYS_CTNETLINK);
 	}
 #endif
@@ -694,15 +683,12 @@ static int nfnetlink_bind(struct net *net, int group)
 static void nfnetlink_unbind(struct net *net, int group)
 {
 #ifdef CONFIG_NF_CONNTRACK_EVENTS
-	int type = nfnl_group2type[group];
-
-	if (type == NFNL_SUBSYS_CTNETLINK) {
-		struct nfnl_net *nfnlnet = nfnl_pernet(net);
+	if (group <= NFNLGRP_NONE || group > NFNLGRP_MAX)
+		return;
 
+	if (nfnl_group2type[group] == NFNL_SUBSYS_CTNETLINK) {
 		nfnl_lock(NFNL_SUBSYS_CTNETLINK);
-		WARN_ON_ONCE(nfnlnet->ctnetlink_listeners == 0);
-		nfnlnet->ctnetlink_listeners--;
-		if (nfnlnet->ctnetlink_listeners == 0)
+		if (!nfnetlink_has_listeners(net, group))
 			WRITE_ONCE(net->ct.ctnetlink_has_listener, false);
 		nfnl_unlock(NFNL_SUBSYS_CTNETLINK);
 	}
@@ -35,12 +35,13 @@ static unsigned int nfct_timeout_id __read_mostly;
 
 struct ctnl_timeout {
 	struct list_head	head;
+	struct list_head	free_head;
 	struct rcu_head		rcu_head;
 	refcount_t		refcnt;
 	char			name[CTNL_TIMEOUT_NAME_MAX];
-	struct nf_ct_timeout	timeout;
 
-	struct list_head	free_head;
+	/* must be at the end */
+	struct nf_ct_timeout	timeout;
 };
 
 struct nfct_timeout_pernet {
@@ -232,19 +232,21 @@ static int nft_flow_route(const struct nft_pktinfo *pkt,
 	switch (nft_pf(pkt)) {
 	case NFPROTO_IPV4:
 		fl.u.ip4.daddr = ct->tuplehash[dir].tuple.src.u3.ip;
-		fl.u.ip4.saddr = ct->tuplehash[dir].tuple.dst.u3.ip;
+		fl.u.ip4.saddr = ct->tuplehash[!dir].tuple.src.u3.ip;
 		fl.u.ip4.flowi4_oif = nft_in(pkt)->ifindex;
 		fl.u.ip4.flowi4_iif = this_dst->dev->ifindex;
 		fl.u.ip4.flowi4_tos = RT_TOS(ip_hdr(pkt->skb)->tos);
 		fl.u.ip4.flowi4_mark = pkt->skb->mark;
+		fl.u.ip4.flowi4_flags = FLOWI_FLAG_ANYSRC;
 		break;
 	case NFPROTO_IPV6:
 		fl.u.ip6.daddr = ct->tuplehash[dir].tuple.src.u3.in6;
-		fl.u.ip6.saddr = ct->tuplehash[dir].tuple.dst.u3.in6;
+		fl.u.ip6.saddr = ct->tuplehash[!dir].tuple.src.u3.in6;
 		fl.u.ip6.flowi6_oif = nft_in(pkt)->ifindex;
 		fl.u.ip6.flowi6_iif = this_dst->dev->ifindex;
 		fl.u.ip6.flowlabel = ip6_flowinfo(ipv6_hdr(pkt->skb));
 		fl.u.ip6.flowi6_mark = pkt->skb->mark;
+		fl.u.ip6.flowi6_flags = FLOWI_FLAG_ANYSRC;
 		break;
 	}
 
@@ -213,6 +213,8 @@ static int nft_limit_pkts_clone(struct nft_expr *dst, const struct nft_expr *src
 	struct nft_limit_priv_pkts *priv_dst = nft_expr_priv(dst);
 	struct nft_limit_priv_pkts *priv_src = nft_expr_priv(src);
 
+	priv_dst->cost = priv_src->cost;
+
 	return nft_limit_clone(&priv_dst->limit, &priv_src->limit);
 }
 
@@ -975,7 +975,7 @@ static void nfc_release(struct device *d)
 		kfree(se);
 	}
 
-	ida_simple_remove(&nfc_index_ida, dev->idx);
+	ida_free(&nfc_index_ida, dev->idx);
 
 	kfree(dev);
 }
@@ -1066,7 +1066,7 @@ struct nfc_dev *nfc_allocate_device(const struct nfc_ops *ops,
 	if (!dev)
 		return NULL;
 
-	rc = ida_simple_get(&nfc_index_ida, 0, 0, GFP_KERNEL);
+	rc = ida_alloc(&nfc_index_ida, GFP_KERNEL);
 	if (rc < 0)
 		goto err_free_dev;
 	dev->idx = rc;
@@ -1935,8 +1935,10 @@ static void packet_parse_headers(struct sk_buff *skb, struct socket *sock)
 	/* Move network header to the right position for VLAN tagged packets */
 	if (likely(skb->dev->type == ARPHRD_ETHER) &&
 	    eth_type_vlan(skb->protocol) &&
-	    __vlan_get_protocol(skb, skb->protocol, &depth) != 0)
-		skb_set_network_header(skb, depth);
+	    __vlan_get_protocol(skb, skb->protocol, &depth) != 0) {
+		if (pskb_may_pull(skb, depth))
+			skb_set_network_header(skb, depth);
+	}
 
 	skb_probe_transport_header(skb);
 }
@@ -548,7 +548,7 @@ tcf_ct_flow_table_fill_tuple_ipv6(struct sk_buff *skb,
 		break;
 #endif
 	default:
-		return -1;
+		return false;
 	}
 
 	if (ip6h->hop_limit <= 1)
@@ -2161,6 +2161,7 @@ static void smc_find_rdma_v2_device_serv(struct smc_sock *new_smc,
 
 not_found:
 	ini->smcr_version &= ~SMC_V2;
+	ini->smcrv2.ib_dev_v2 = NULL;
 	ini->check_smcrv2 = false;
 }
 
@@ -82,7 +82,7 @@ int smc_cdc_get_free_slot(struct smc_connection *conn,
 		/* abnormal termination */
 		if (!rc)
 			smc_wr_tx_put_slot(link,
-					   (struct smc_wr_tx_pend_priv *)pend);
+					   (struct smc_wr_tx_pend_priv *)(*pend));
 		rc = -EPIPE;
 	}
 	return rc;
@@ -259,9 +259,8 @@ static int tipc_enable_bearer(struct net *net, const char *name,
 	u32 i;
 
 	if (!bearer_name_validate(name, &b_names)) {
-		errstr = "illegal name";
 		NL_SET_ERR_MSG(extack, "Illegal name");
-		goto rejected;
+		return res;
 	}
 
 	if (prio > TIPC_MAX_LINK_PRI && prio != TIPC_MEDIA_LINK_PRI) {
@@ -273,6 +273,7 @@ static int xfrm4_beet_encap_add(struct xfrm_state *x, struct sk_buff *skb)
  */
 static int xfrm4_tunnel_encap_add(struct xfrm_state *x, struct sk_buff *skb)
 {
+	bool small_ipv6 = (skb->protocol == htons(ETH_P_IPV6)) && (skb->len <= IPV6_MIN_MTU);
 	struct dst_entry *dst = skb_dst(skb);
 	struct iphdr *top_iph;
 	int flags;
@@ -303,7 +304,7 @@ static int xfrm4_tunnel_encap_add(struct xfrm_state *x, struct sk_buff *skb)
 	if (flags & XFRM_STATE_NOECN)
 		IP_ECN_clear(top_iph);
 
-	top_iph->frag_off = (flags & XFRM_STATE_NOPMTUDISC) ?
+	top_iph->frag_off = (flags & XFRM_STATE_NOPMTUDISC) || small_ipv6 ?
 			    0 : (XFRM_MODE_SKB_CB(skb)->frag_off & htons(IP_DF));
 
 	top_iph->ttl = ip4_dst_hoplimit(xfrm_dst_child(dst));
@@ -39,7 +39,7 @@ struct {
 	__type(value, stack_trace_t);
 } stack_amap SEC(".maps");
 
-SEC("kprobe/urandom_read")
+SEC("kprobe/urandom_read_iter")
 int oncpu(struct pt_regs *args)
 {
 	__u32 max_len = sizeof(struct bpf_stack_build_id)
@@ -1,15 +1,14 @@
 #!/bin/bash
 # SPDX-License-Identifier: GPL-2.0
 
-# This test is for the accept_unsolicited_na feature to
+# This test is for the accept_untracked_na feature to
 # enable RFC9131 behaviour. The following is the test-matrix.
 # drop   accept  fwding  behaviour
 # ----   ------  ------  ----------------------------------------------
-#    1        X       X  Drop NA packet and don't pass up the stack
-#    0        0       X  Pass NA packet up the stack, don't update NC
-#    0        1       0  Pass NA packet up the stack, don't update NC
-#    0        1       1  Pass NA packet up the stack, and add a STALE
-#                        NC entry
+#    1        X       X  Don't update NC
+#    0        0       X  Don't update NC
+#    0        1       0  Don't update NC
+#    0        1       1  Add a STALE NC entry
 
 ret=0
 # Kselftest framework requirement - SKIP code is 4.
@@ -72,7 +71,7 @@ setup()
 	set -e
 
 	local drop_unsolicited_na=$1
-	local accept_unsolicited_na=$2
+	local accept_untracked_na=$2
 	local forwarding=$3
 
 	# Setup two namespaces and a veth tunnel across them.
@@ -93,7 +92,7 @@ setup()
 	${IP_ROUTER_EXEC} sysctl -qw \
 		${ROUTER_CONF}.drop_unsolicited_na=${drop_unsolicited_na}
 	${IP_ROUTER_EXEC} sysctl -qw \
-		${ROUTER_CONF}.accept_unsolicited_na=${accept_unsolicited_na}
+		${ROUTER_CONF}.accept_untracked_na=${accept_untracked_na}
 	${IP_ROUTER_EXEC} sysctl -qw ${ROUTER_CONF}.disable_ipv6=0
 	${IP_ROUTER} addr add ${ROUTER_ADDR_WITH_MASK} dev ${ROUTER_INTF}
 
@@ -144,13 +143,13 @@ link_up() {
 
 verify_ndisc() {
 	local drop_unsolicited_na=$1
-	local accept_unsolicited_na=$2
+	local accept_untracked_na=$2
 	local forwarding=$3
 
 	neigh_show_output=$(${IP_ROUTER} neigh show \
 		to ${HOST_ADDR} dev ${ROUTER_INTF} nud stale)
 	if [ ${drop_unsolicited_na} -eq 0 ] && \
-			[ ${accept_unsolicited_na} -eq 1 ] && \
+			[ ${accept_untracked_na} -eq 1 ] && \
 			[ ${forwarding} -eq 1 ]; then
 		# Neighbour entry expected to be present for 011 case
 		[[ ${neigh_show_output} ]]
@@ -179,14 +178,14 @@ test_unsolicited_na_combination() {
 	test_unsolicited_na_common $1 $2 $3
 	test_msg=("test_unsolicited_na: "
 		"drop_unsolicited_na=$1 "
-		"accept_unsolicited_na=$2 "
+		"accept_untracked_na=$2 "
 		"forwarding=$3")
 	log_test $? 0 "${test_msg[*]}"
 	cleanup
 }
 
 test_unsolicited_na_combinations() {
-	# Args: drop_unsolicited_na accept_unsolicited_na forwarding
+	# Args: drop_unsolicited_na accept_untracked_na forwarding
 
 	# Expect entry
 	test_unsolicited_na_combination 0 1 1