Merge tag 'net-5.15-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes and stragglers from Jakub Kicinski:
 "Networking stragglers and fixes, including changes from netfilter,
  wireless and can.

  Current release - regressions:

   - qrtr: revert check in qrtr_endpoint_post(), fixes audio and wifi

   - ip_gre: validate csum_start only on pull

   - bnxt_en: fix 64-bit doorbell operation on 32-bit kernels

   - ionic: fix double use of queue-lock, fix a sleeping in atomic

   - can: c_can: fix null-ptr-deref on ioctl()

   - cs89x0: disable compile testing on powerpc

  Current release - new code bugs:

   - bridge: mcast: fix vlan port router deadlock, consistently disable
     BH

  Previous releases - regressions:

   - dsa: tag_rtl4_a: fix egress tags, only port 0 was working

   - mptcp: fix possible divide by zero

   - netfilter: nft_ct: protect nft_ct_pcpu_template_refcnt with mutex

   - netfilter: socket: icmp6: fix use-after-scope

   - stmmac: fix MAC not working when system resume back with WoL active

  Previous releases - always broken:

   - ip/ip6_gre: use the same logic as SIT interfaces when computing
     v6LL address

   - seg6: set fc_nlinfo in nh_create_ipv4, nh_create_ipv6

   - mptcp: only send extra TCP acks in eligible socket states

   - dsa: lantiq_gswip: fix maximum frame length

   - stmmac: fix overall budget calculation for rxtx_napi

   - bnxt_en: fix firmware version reporting via devlink

   - renesas: sh_eth: add missing barrier to fix freeing wrong tx
     descriptor

  Stragglers:

   - netfilter: conntrack: switch to siphash

   - netfilter: refuse insertion if chain has grown too large

   - ncsi: add get MAC address command to get Intel i210 MAC address"

* tag 'net-5.15-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (76 commits)
  ieee802154: Remove redundant initialization of variable ret
  net: stmmac: fix MAC not working when system resume back with WoL active
  net: phylink: add suspend/resume support
  net: renesas: sh_eth: Fix freeing wrong tx descriptor
  bonding: 3ad: pass parameter bond_params by reference
  cxgb3: fix oops on module removal
  can: c_can: fix null-ptr-deref on ioctl()
  can: rcar_canfd: add __maybe_unused annotation to silence warning
  net: wwan: iosm: Unify IO accessors used in the driver
  net: wwan: iosm: Replace io.*64_lo_hi() with regular accessors
  net: qcom/emac: Replace strlcpy with strscpy
  ip6_gre: Revert "ip6_gre: add validation for csum_start"
  net: hns3: make hclgevf_cmd_caps_bit_map0 and hclge_cmd_caps_bit_map0 static
  selftests/bpf: Test XDP bonding nest and unwind
  bonding: Fix negative jump label count on nested bonding
  MAINTAINERS: add VM SOCKETS (AF_VSOCK) entry
  stmmac: dwmac-loongson:Fix missing return value
  iwlwifi: fix printk format warnings in uefi.c
  net: create netdev->dev_addr assignment helpers
  bnxt_en: Fix possible unintended driver initiated error recovery
  ...
Commit 626bf91a29 by Linus Torvalds, 2021-09-07 14:02:58 -07:00
89 changed files with 1066 additions and 389 deletions

@ -17,9 +17,8 @@ nf_conntrack_acct - BOOLEAN
nf_conntrack_buckets - INTEGER
Size of hash table. If not specified as parameter during module
loading, the default size is calculated by dividing total memory
by 16384 to determine the number of buckets but the hash table will
never have fewer than 32 and limited to 16384 buckets. For systems
with more than 4GB of memory it will be 65536 buckets.
by 16384 to determine the number of buckets. The hash table will
never have fewer than 1024 and never more than 262144 buckets.
This sysctl is only writeable in the initial net namespace.
nf_conntrack_checksum - BOOLEAN
@ -100,8 +99,12 @@ nf_conntrack_log_invalid - INTEGER
Log invalid packets of a type specified by value.
nf_conntrack_max - INTEGER
Size of connection tracking table. Default value is
nf_conntrack_buckets value * 4.
Maximum number of allowed connection tracking entries. This value is set
to nf_conntrack_buckets by default.
Note that connection tracking entries are added to the table twice -- once
for the original direction and once for the reply direction (i.e., with
the reversed address). This means that with default settings a maxed-out
table will have an average hash chain length of 2, not 1.
nf_conntrack_tcp_be_liberal - BOOLEAN
- 0 - disabled (default)
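
The sizing rule documented above is easy to check by hand. A minimal userspace sketch of the stated default (assuming only what the text says: buckets = total RAM / 16384, clamped to [1024, 262144]; the helper name is hypothetical):

#include <stdio.h>

/* buckets = clamp(total_ram / 16384, 1024, 262144), per the text above */
static unsigned long default_buckets(unsigned long long total_ram_bytes)
{
	unsigned long long n = total_ram_bytes / 16384;

	if (n < 1024)
		n = 1024;
	if (n > 262144)
		n = 262144;
	return (unsigned long)n;
}

int main(void)
{
	/* 1 GiB -> 65536 buckets; 8 GiB already hits the 262144 cap */
	printf("%lu\n", default_buckets(1ULL << 30));
	printf("%lu\n", default_buckets(8ULL << 30));
	return 0;
}

Since nf_conntrack_max now defaults to the same value and every flow occupies two entries (original plus reply direction), a full table averages a chain length of 2, exactly as the note above says.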


@ -19761,18 +19761,11 @@ L: kvm@vger.kernel.org
L: virtualization@lists.linux-foundation.org
L: netdev@vger.kernel.org
S: Maintained
F: drivers/net/vsockmon.c
F: drivers/vhost/vsock.c
F: include/linux/virtio_vsock.h
F: include/uapi/linux/virtio_vsock.h
F: include/uapi/linux/vm_sockets_diag.h
F: include/uapi/linux/vsockmon.h
F: net/vmw_vsock/af_vsock_tap.c
F: net/vmw_vsock/diag.c
F: net/vmw_vsock/virtio_transport.c
F: net/vmw_vsock/virtio_transport_common.c
F: net/vmw_vsock/vsock_loopback.c
F: tools/testing/vsock/
VIRTIO BLOCK AND SCSI DRIVERS
M: "Michael S. Tsirkin" <mst@redhat.com>
@ -19977,6 +19970,19 @@ F: drivers/staging/vme/
F: drivers/vme/
F: include/linux/vme*
VM SOCKETS (AF_VSOCK)
M: Stefano Garzarella <sgarzare@redhat.com>
L: virtualization@lists.linux-foundation.org
L: netdev@vger.kernel.org
S: Maintained
F: drivers/net/vsockmon.c
F: include/net/af_vsock.h
F: include/uapi/linux/vm_sockets.h
F: include/uapi/linux/vm_sockets_diag.h
F: include/uapi/linux/vsockmon.h
F: net/vmw_vsock/
F: tools/testing/vsock/
VMWARE BALLOON DRIVER
M: Nadav Amit <namit@vmware.com>
M: "VMware, Inc." <pv-drivers@vmware.com>


@ -96,7 +96,7 @@ static int ad_marker_send(struct port *port, struct bond_marker *marker);
static void ad_mux_machine(struct port *port, bool *update_slave_arr);
static void ad_rx_machine(struct lacpdu *lacpdu, struct port *port);
static void ad_tx_machine(struct port *port);
static void ad_periodic_machine(struct port *port, struct bond_params bond_params);
static void ad_periodic_machine(struct port *port, struct bond_params *bond_params);
static void ad_port_selection_logic(struct port *port, bool *update_slave_arr);
static void ad_agg_selection_logic(struct aggregator *aggregator,
bool *update_slave_arr);
@ -1298,7 +1298,7 @@ static void ad_tx_machine(struct port *port)
*
* Turn ntt flag on priodically to perform periodic transmission of lacpdu's.
*/
static void ad_periodic_machine(struct port *port, struct bond_params bond_params)
static void ad_periodic_machine(struct port *port, struct bond_params *bond_params)
{
periodic_states_t last_state;
@ -1308,7 +1308,7 @@ static void ad_periodic_machine(struct port *port, struct bond_params bond_param
/* check if port was reinitialized */
if (((port->sm_vars & AD_PORT_BEGIN) || !(port->sm_vars & AD_PORT_LACP_ENABLED) || !port->is_enabled) ||
(!(port->actor_oper_port_state & LACP_STATE_LACP_ACTIVITY) && !(port->partner_oper.port_state & LACP_STATE_LACP_ACTIVITY)) ||
!bond_params.lacp_active) {
!bond_params->lacp_active) {
port->sm_periodic_state = AD_NO_PERIODIC;
}
/* check if state machine should change state */
@ -2342,7 +2342,7 @@ void bond_3ad_state_machine_handler(struct work_struct *work)
}
ad_rx_machine(NULL, port);
ad_periodic_machine(port, bond->params);
ad_periodic_machine(port, &bond->params);
ad_port_selection_logic(port, &update_slave_arr);
ad_mux_machine(port, &update_slave_arr);
ad_tx_machine(port);
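
The bond_3ad.c hunks above are a pure calling-convention fix: bond->params was previously copied onto the stack on every state-machine tick. A standalone sketch of the difference (struct and names are hypothetical; struct bond_params itself is far larger than a pointer):

#include <stdio.h>

struct params {             /* stand-in for struct bond_params */
	int lacp_active;
	char filler[512];   /* large enough that copying hurts */
};

/* by value: the whole struct is copied on every call */
static void machine_byval(struct params p)
{
	if (!p.lacp_active)
		puts("periodic tx off");
}

/* by reference: only a pointer is passed */
static void machine_byref(const struct params *p)
{
	if (!p->lacp_active)
		puts("periodic tx off");
}

int main(void)
{
	struct params p = { .lacp_active = 0 };

	machine_byval(p);   /* copies sizeof(p) bytes */
	machine_byref(&p);  /* copies sizeof(void *) bytes */
	return 0;
}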


@ -2169,7 +2169,7 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev,
res = -EOPNOTSUPP;
goto err_sysfs_del;
}
} else {
} else if (bond->xdp_prog) {
struct netdev_bpf xdp = {
.command = XDP_SETUP_PROG,
.flags = 0,
@ -2910,9 +2910,9 @@ static void bond_arp_send_all(struct bonding *bond, struct slave *slave)
* probe to generate any traffic (arp_validate=0)
*/
if (bond->params.arp_validate)
net_warn_ratelimited("%s: no route to arp_ip_target %pI4 and arp_validate is set\n",
bond->dev->name,
&targets[i]);
pr_warn_once("%s: no route to arp_ip_target %pI4 and arp_validate is set\n",
bond->dev->name,
&targets[i]);
bond_arp_send(slave, ARPOP_REQUEST, targets[i],
0, tags);
continue;
@ -5224,13 +5224,12 @@ static int bond_xdp_set(struct net_device *dev, struct bpf_prog *prog,
bpf_prog_inc(prog);
}
if (old_prog)
bpf_prog_put(old_prog);
if (prog)
if (prog) {
static_branch_inc(&bpf_master_redirect_enabled_key);
else
} else if (old_prog) {
bpf_prog_put(old_prog);
static_branch_dec(&bpf_master_redirect_enabled_key);
}
return 0;


@ -15,10 +15,8 @@ static void c_can_get_drvinfo(struct net_device *netdev,
struct ethtool_drvinfo *info)
{
struct c_can_priv *priv = netdev_priv(netdev);
struct platform_device *pdev = to_platform_device(priv->device);
strscpy(info->driver, "c_can", sizeof(info->driver));
strscpy(info->bus_info, pdev->name, sizeof(info->bus_info));
strscpy(info->bus_info, dev_name(priv->device), sizeof(info->bus_info));
}
static void c_can_get_ringparam(struct net_device *netdev,


@ -2017,7 +2017,7 @@ static int __maybe_unused rcar_canfd_resume(struct device *dev)
static SIMPLE_DEV_PM_OPS(rcar_canfd_pm_ops, rcar_canfd_suspend,
rcar_canfd_resume);
static const struct of_device_id rcar_canfd_of_table[] = {
static const __maybe_unused struct of_device_id rcar_canfd_of_table[] = {
{ .compatible = "renesas,rcar-gen3-canfd", .data = (void *)RENESAS_RCAR_GEN3 },
{ .compatible = "renesas,rzg2l-canfd", .data = (void *)RENESAS_RZG2L },
{ }


@ -1144,7 +1144,7 @@ static void b53_force_link(struct b53_device *dev, int port, int link)
u8 reg, val, off;
/* Override the port settings */
if (port == dev->cpu_port) {
if (port == dev->imp_port) {
off = B53_PORT_OVERRIDE_CTRL;
val = PORT_OVERRIDE_EN;
} else {
@ -1168,7 +1168,7 @@ static void b53_force_port_config(struct b53_device *dev, int port,
u8 reg, val, off;
/* Override the port settings */
if (port == dev->cpu_port) {
if (port == dev->imp_port) {
off = B53_PORT_OVERRIDE_CTRL;
val = PORT_OVERRIDE_EN;
} else {
@ -1236,7 +1236,7 @@ static void b53_adjust_link(struct dsa_switch *ds, int port,
b53_force_link(dev, port, phydev->link);
if (is531x5(dev) && phy_interface_is_rgmii(phydev)) {
if (port == 8)
if (port == dev->imp_port)
off = B53_RGMII_CTRL_IMP;
else
off = B53_RGMII_CTRL_P(port);
@ -2280,6 +2280,7 @@ struct b53_chip_data {
const char *dev_name;
u16 vlans;
u16 enabled_ports;
u8 imp_port;
u8 cpu_port;
u8 vta_regs[3];
u8 arl_bins;
@ -2304,6 +2305,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
.enabled_ports = 0x1f,
.arl_bins = 2,
.arl_buckets = 1024,
.imp_port = 5,
.cpu_port = B53_CPU_PORT_25,
.duplex_reg = B53_DUPLEX_STAT_FE,
},
@ -2314,6 +2316,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
.enabled_ports = 0x1f,
.arl_bins = 2,
.arl_buckets = 1024,
.imp_port = 5,
.cpu_port = B53_CPU_PORT_25,
.duplex_reg = B53_DUPLEX_STAT_FE,
},
@ -2324,6 +2327,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
.enabled_ports = 0x1f,
.arl_bins = 4,
.arl_buckets = 1024,
.imp_port = 8,
.cpu_port = B53_CPU_PORT,
.vta_regs = B53_VTA_REGS,
.duplex_reg = B53_DUPLEX_STAT_GE,
@ -2337,6 +2341,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
.enabled_ports = 0x1f,
.arl_bins = 4,
.arl_buckets = 1024,
.imp_port = 8,
.cpu_port = B53_CPU_PORT,
.vta_regs = B53_VTA_REGS,
.duplex_reg = B53_DUPLEX_STAT_GE,
@ -2350,6 +2355,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
.enabled_ports = 0x1f,
.arl_bins = 4,
.arl_buckets = 1024,
.imp_port = 8,
.cpu_port = B53_CPU_PORT,
.vta_regs = B53_VTA_REGS_9798,
.duplex_reg = B53_DUPLEX_STAT_GE,
@ -2363,6 +2369,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
.enabled_ports = 0x7f,
.arl_bins = 4,
.arl_buckets = 1024,
.imp_port = 8,
.cpu_port = B53_CPU_PORT,
.vta_regs = B53_VTA_REGS_9798,
.duplex_reg = B53_DUPLEX_STAT_GE,
@ -2377,6 +2384,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
.arl_bins = 4,
.arl_buckets = 1024,
.vta_regs = B53_VTA_REGS,
.imp_port = 8,
.cpu_port = B53_CPU_PORT,
.duplex_reg = B53_DUPLEX_STAT_GE,
.jumbo_pm_reg = B53_JUMBO_PORT_MASK,
@ -2389,6 +2397,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
.enabled_ports = 0xff,
.arl_bins = 4,
.arl_buckets = 1024,
.imp_port = 8,
.cpu_port = B53_CPU_PORT,
.vta_regs = B53_VTA_REGS,
.duplex_reg = B53_DUPLEX_STAT_GE,
@ -2402,6 +2411,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
.enabled_ports = 0x1ff,
.arl_bins = 4,
.arl_buckets = 1024,
.imp_port = 8,
.cpu_port = B53_CPU_PORT,
.vta_regs = B53_VTA_REGS,
.duplex_reg = B53_DUPLEX_STAT_GE,
@ -2415,6 +2425,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
.enabled_ports = 0, /* pdata must provide them */
.arl_bins = 4,
.arl_buckets = 1024,
.imp_port = 8,
.cpu_port = B53_CPU_PORT,
.vta_regs = B53_VTA_REGS_63XX,
.duplex_reg = B53_DUPLEX_STAT_63XX,
@ -2428,6 +2439,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
.enabled_ports = 0x1f,
.arl_bins = 4,
.arl_buckets = 1024,
.imp_port = 8,
.cpu_port = B53_CPU_PORT_25, /* TODO: auto detect */
.vta_regs = B53_VTA_REGS,
.duplex_reg = B53_DUPLEX_STAT_GE,
@ -2441,6 +2453,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
.enabled_ports = 0x1bf,
.arl_bins = 4,
.arl_buckets = 1024,
.imp_port = 8,
.cpu_port = B53_CPU_PORT_25, /* TODO: auto detect */
.vta_regs = B53_VTA_REGS,
.duplex_reg = B53_DUPLEX_STAT_GE,
@ -2454,6 +2467,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
.enabled_ports = 0x1bf,
.arl_bins = 4,
.arl_buckets = 1024,
.imp_port = 8,
.cpu_port = B53_CPU_PORT_25, /* TODO: auto detect */
.vta_regs = B53_VTA_REGS,
.duplex_reg = B53_DUPLEX_STAT_GE,
@ -2467,6 +2481,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
.enabled_ports = 0x1f,
.arl_bins = 4,
.arl_buckets = 1024,
.imp_port = 8,
.cpu_port = B53_CPU_PORT_25, /* TODO: auto detect */
.vta_regs = B53_VTA_REGS,
.duplex_reg = B53_DUPLEX_STAT_GE,
@ -2480,6 +2495,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
.enabled_ports = 0x1f,
.arl_bins = 4,
.arl_buckets = 1024,
.imp_port = 8,
.cpu_port = B53_CPU_PORT_25, /* TODO: auto detect */
.vta_regs = B53_VTA_REGS,
.duplex_reg = B53_DUPLEX_STAT_GE,
@ -2493,6 +2509,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
.enabled_ports = 0x1ff,
.arl_bins = 4,
.arl_buckets = 1024,
.imp_port = 8,
.cpu_port = B53_CPU_PORT,
.vta_regs = B53_VTA_REGS,
.duplex_reg = B53_DUPLEX_STAT_GE,
@ -2506,6 +2523,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
.enabled_ports = 0x103,
.arl_bins = 4,
.arl_buckets = 1024,
.imp_port = 8,
.cpu_port = B53_CPU_PORT,
.vta_regs = B53_VTA_REGS,
.duplex_reg = B53_DUPLEX_STAT_GE,
@ -2520,6 +2538,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
.enabled_ports = 0x1bf,
.arl_bins = 4,
.arl_buckets = 256,
.imp_port = 8,
.cpu_port = 8, /* TODO: ports 4, 5, 8 */
.vta_regs = B53_VTA_REGS,
.duplex_reg = B53_DUPLEX_STAT_GE,
@ -2533,6 +2552,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
.enabled_ports = 0x1ff,
.arl_bins = 4,
.arl_buckets = 1024,
.imp_port = 8,
.cpu_port = B53_CPU_PORT,
.vta_regs = B53_VTA_REGS,
.duplex_reg = B53_DUPLEX_STAT_GE,
@ -2546,6 +2566,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
.enabled_ports = 0x1ff,
.arl_bins = 4,
.arl_buckets = 256,
.imp_port = 8,
.cpu_port = B53_CPU_PORT,
.vta_regs = B53_VTA_REGS,
.duplex_reg = B53_DUPLEX_STAT_GE,
@ -2571,6 +2592,7 @@ static int b53_switch_init(struct b53_device *dev)
dev->vta_regs[1] = chip->vta_regs[1];
dev->vta_regs[2] = chip->vta_regs[2];
dev->jumbo_pm_reg = chip->jumbo_pm_reg;
dev->imp_port = chip->imp_port;
dev->cpu_port = chip->cpu_port;
dev->num_vlans = chip->vlans;
dev->num_arl_bins = chip->arl_bins;
@ -2612,9 +2634,10 @@ static int b53_switch_init(struct b53_device *dev)
dev->cpu_port = 5;
}
/* cpu port is always last */
dev->num_ports = dev->cpu_port + 1;
dev->enabled_ports |= BIT(dev->cpu_port);
dev->num_ports = fls(dev->enabled_ports);
dev->ds->num_ports = min_t(unsigned int, dev->num_ports, DSA_MAX_PORTS);
/* Include non standard CPU port built-in PHYs to be probed */
if (is539x(dev) || is531x5(dev)) {
@ -2660,7 +2683,6 @@ struct b53_device *b53_switch_alloc(struct device *base,
return NULL;
ds->dev = base;
ds->num_ports = DSA_MAX_PORTS;
dev = devm_kzalloc(base, sizeof(*dev), GFP_KERNEL);
if (!dev)


@ -123,6 +123,7 @@ struct b53_device {
/* used ports mask */
u16 enabled_ports;
unsigned int imp_port;
unsigned int cpu_port;
/* connect specific data */


@ -843,7 +843,8 @@ static int gswip_setup(struct dsa_switch *ds)
gswip_switch_mask(priv, 0, GSWIP_MAC_CTRL_2_MLEN,
GSWIP_MAC_CTRL_2p(cpu_port));
gswip_switch_w(priv, VLAN_ETH_FRAME_LEN + 8, GSWIP_MAC_FLEN);
gswip_switch_w(priv, VLAN_ETH_FRAME_LEN + 8 + ETH_FCS_LEN,
GSWIP_MAC_FLEN);
gswip_switch_mask(priv, 0, GSWIP_BM_QUEUE_GCTRL_GL_MOD,
GSWIP_BM_QUEUE_GCTRL);


@ -2786,7 +2786,7 @@ static void
dump_tx_ring(struct net_device *dev)
{
if (vortex_debug > 0) {
struct vortex_private *vp = netdev_priv(dev);
struct vortex_private *vp = netdev_priv(dev);
void __iomem *ioaddr = vp->ioaddr;
if (vp->full_bus_master_tx) {


@ -305,13 +305,15 @@ static bool bnxt_vf_pciid(enum board_idx idx)
writel(DB_CP_FLAGS | RING_CMP(idx), (db)->doorbell)
#define BNXT_DB_NQ_P5(db, idx) \
writeq((db)->db_key64 | DBR_TYPE_NQ | RING_CMP(idx), (db)->doorbell)
bnxt_writeq(bp, (db)->db_key64 | DBR_TYPE_NQ | RING_CMP(idx), \
(db)->doorbell)
#define BNXT_DB_CQ_ARM(db, idx) \
writel(DB_CP_REARM_FLAGS | RING_CMP(idx), (db)->doorbell)
#define BNXT_DB_NQ_ARM_P5(db, idx) \
writeq((db)->db_key64 | DBR_TYPE_NQ_ARM | RING_CMP(idx), (db)->doorbell)
bnxt_writeq(bp, (db)->db_key64 | DBR_TYPE_NQ_ARM | RING_CMP(idx),\
(db)->doorbell)
static void bnxt_db_nq(struct bnxt *bp, struct bnxt_db_info *db, u32 idx)
{
@ -332,8 +334,8 @@ static void bnxt_db_nq_arm(struct bnxt *bp, struct bnxt_db_info *db, u32 idx)
static void bnxt_db_cq(struct bnxt *bp, struct bnxt_db_info *db, u32 idx)
{
if (bp->flags & BNXT_FLAG_CHIP_P5)
writeq(db->db_key64 | DBR_TYPE_CQ_ARMALL | RING_CMP(idx),
db->doorbell);
bnxt_writeq(bp, db->db_key64 | DBR_TYPE_CQ_ARMALL |
RING_CMP(idx), db->doorbell);
else
BNXT_DB_CQ(db, idx);
}
@ -2200,25 +2202,34 @@ static int bnxt_async_event_process(struct bnxt *bp,
if (!fw_health)
goto async_event_process_exit;
fw_health->enabled = EVENT_DATA1_RECOVERY_ENABLED(data1);
fw_health->master = EVENT_DATA1_RECOVERY_MASTER_FUNC(data1);
if (!fw_health->enabled) {
if (!EVENT_DATA1_RECOVERY_ENABLED(data1)) {
fw_health->enabled = false;
netif_info(bp, drv, bp->dev,
"Error recovery info: error recovery[0]\n");
break;
}
fw_health->master = EVENT_DATA1_RECOVERY_MASTER_FUNC(data1);
fw_health->tmr_multiplier =
DIV_ROUND_UP(fw_health->polling_dsecs * HZ,
bp->current_interval * 10);
fw_health->tmr_counter = fw_health->tmr_multiplier;
fw_health->last_fw_heartbeat =
bnxt_fw_health_readl(bp, BNXT_FW_HEARTBEAT_REG);
fw_health->last_fw_reset_cnt =
bnxt_fw_health_readl(bp, BNXT_FW_RESET_CNT_REG);
if (!fw_health->enabled) {
fw_health->last_fw_heartbeat =
bnxt_fw_health_readl(bp, BNXT_FW_HEARTBEAT_REG);
fw_health->last_fw_reset_cnt =
bnxt_fw_health_readl(bp, BNXT_FW_RESET_CNT_REG);
}
netif_info(bp, drv, bp->dev,
"Error recovery info: error recovery[1], master[%d], reset count[%u], health status: 0x%x\n",
fw_health->master, fw_health->last_fw_reset_cnt,
bnxt_fw_health_readl(bp, BNXT_FW_HEALTH_REG));
if (!fw_health->enabled) {
/* Make sure tmr_counter is set and visible to
* bnxt_health_check() before setting enabled to true.
*/
smp_wmb();
fw_health->enabled = true;
}
goto async_event_process_exit;
}
case ASYNC_EVENT_CMPL_EVENT_ID_DEBUG_NOTIFICATION:
@ -2638,8 +2649,8 @@ static void __bnxt_poll_cqs_done(struct bnxt *bp, struct bnxt_napi *bnapi,
if (cpr2 && cpr2->had_work_done) {
db = &cpr2->cp_db;
writeq(db->db_key64 | dbr_type |
RING_CMP(cpr2->cp_raw_cons), db->doorbell);
bnxt_writeq(bp, db->db_key64 | dbr_type |
RING_CMP(cpr2->cp_raw_cons), db->doorbell);
cpr2->had_work_done = 0;
}
}
@ -4639,6 +4650,13 @@ static int bnxt_hwrm_tunnel_dst_port_free(struct bnxt *bp, u8 tunnel_type)
struct hwrm_tunnel_dst_port_free_input *req;
int rc;
if (tunnel_type == TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_VXLAN &&
bp->vxlan_fw_dst_port_id == INVALID_HW_RING_ID)
return 0;
if (tunnel_type == TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_GENEVE &&
bp->nge_fw_dst_port_id == INVALID_HW_RING_ID)
return 0;
rc = hwrm_req_init(bp, req, HWRM_TUNNEL_DST_PORT_FREE);
if (rc)
return rc;
@ -4648,10 +4666,12 @@ static int bnxt_hwrm_tunnel_dst_port_free(struct bnxt *bp, u8 tunnel_type)
switch (tunnel_type) {
case TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_VXLAN:
req->tunnel_dst_port_id = cpu_to_le16(bp->vxlan_fw_dst_port_id);
bp->vxlan_port = 0;
bp->vxlan_fw_dst_port_id = INVALID_HW_RING_ID;
break;
case TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_GENEVE:
req->tunnel_dst_port_id = cpu_to_le16(bp->nge_fw_dst_port_id);
bp->nge_port = 0;
bp->nge_fw_dst_port_id = INVALID_HW_RING_ID;
break;
default:
@ -4689,10 +4709,12 @@ static int bnxt_hwrm_tunnel_dst_port_alloc(struct bnxt *bp, __be16 port,
switch (tunnel_type) {
case TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_VXLAN:
bp->vxlan_port = port;
bp->vxlan_fw_dst_port_id =
le16_to_cpu(resp->tunnel_dst_port_id);
break;
case TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_GENEVE:
bp->nge_port = port;
bp->nge_fw_dst_port_id = le16_to_cpu(resp->tunnel_dst_port_id);
break;
default:
@ -8221,12 +8243,10 @@ static int bnxt_hwrm_port_qstats_ext(struct bnxt *bp, u8 flags)
static void bnxt_hwrm_free_tunnel_ports(struct bnxt *bp)
{
if (bp->vxlan_fw_dst_port_id != INVALID_HW_RING_ID)
bnxt_hwrm_tunnel_dst_port_free(
bp, TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_VXLAN);
if (bp->nge_fw_dst_port_id != INVALID_HW_RING_ID)
bnxt_hwrm_tunnel_dst_port_free(
bp, TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_GENEVE);
bnxt_hwrm_tunnel_dst_port_free(bp,
TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_VXLAN);
bnxt_hwrm_tunnel_dst_port_free(bp,
TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_GENEVE);
}
static int bnxt_set_tpa(struct bnxt *bp, bool set_tpa)
@ -11247,6 +11267,8 @@ static void bnxt_fw_health_check(struct bnxt *bp)
if (!fw_health->enabled || test_bit(BNXT_STATE_IN_FW_RESET, &bp->state))
return;
/* Make sure it is enabled before checking the tmr_counter. */
smp_rmb();
if (fw_health->tmr_counter) {
fw_health->tmr_counter--;
return;
@ -12625,13 +12647,10 @@ static int bnxt_udp_tunnel_sync(struct net_device *netdev, unsigned int table)
unsigned int cmd;
udp_tunnel_nic_get_port(netdev, table, 0, &ti);
if (ti.type == UDP_TUNNEL_TYPE_VXLAN) {
bp->vxlan_port = ti.port;
if (ti.type == UDP_TUNNEL_TYPE_VXLAN)
cmd = TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_VXLAN;
} else {
bp->nge_port = ti.port;
else
cmd = TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_GENEVE;
}
if (ti.port)
return bnxt_hwrm_tunnel_dst_port_alloc(bp, ti.port, cmd);


@ -28,6 +28,7 @@
#include <net/dst_metadata.h>
#include <net/xdp.h>
#include <linux/dim.h>
#include <linux/io-64-nonatomic-lo-hi.h>
#ifdef CONFIG_TEE_BNXT_FW
#include <linux/firmware/broadcom/tee_bnxt_fw.h>
#endif
@ -1981,7 +1982,7 @@ struct bnxt {
struct mutex sriov_lock;
#endif
#ifndef writeq
#if BITS_PER_LONG == 32
/* ensure atomic 64-bit doorbell writes on 32-bit systems. */
spinlock_t db_lock;
#endif
@ -2110,24 +2111,36 @@ static inline u32 bnxt_tx_avail(struct bnxt *bp, struct bnxt_tx_ring_info *txr)
((txr->tx_prod - txr->tx_cons) & bp->tx_ring_mask);
}
#ifndef writeq
#define writeq(val64, db) \
do { \
spin_lock(&bp->db_lock); \
writel((val64) & 0xffffffff, db); \
writel((val64) >> 32, (db) + 4); \
spin_unlock(&bp->db_lock); \
} while (0)
#define writeq_relaxed writeq
static inline void bnxt_writeq(struct bnxt *bp, u64 val,
volatile void __iomem *addr)
{
#if BITS_PER_LONG == 32
spin_lock(&bp->db_lock);
lo_hi_writeq(val, addr);
spin_unlock(&bp->db_lock);
#else
writeq(val, addr);
#endif
}
static inline void bnxt_writeq_relaxed(struct bnxt *bp, u64 val,
volatile void __iomem *addr)
{
#if BITS_PER_LONG == 32
spin_lock(&bp->db_lock);
lo_hi_writeq_relaxed(val, addr);
spin_unlock(&bp->db_lock);
#else
writeq_relaxed(val, addr);
#endif
}
/* For TX and RX ring doorbells with no ordering guarantee*/
static inline void bnxt_db_write_relaxed(struct bnxt *bp,
struct bnxt_db_info *db, u32 idx)
{
if (bp->flags & BNXT_FLAG_CHIP_P5) {
writeq_relaxed(db->db_key64 | idx, db->doorbell);
bnxt_writeq_relaxed(bp, db->db_key64 | idx, db->doorbell);
} else {
u32 db_val = db->db_key32 | idx;
@ -2142,7 +2155,7 @@ static inline void bnxt_db_write(struct bnxt *bp, struct bnxt_db_info *db,
u32 idx)
{
if (bp->flags & BNXT_FLAG_CHIP_P5) {
writeq(db->db_key64 | idx, db->doorbell);
bnxt_writeq(bp, db->db_key64 | idx, db->doorbell);
} else {
u32 db_val = db->db_key32 | idx;
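
The idea behind bnxt_writeq() above: on a 32-bit kernel a 64-bit MMIO write decomposes into two 32-bit writes, so db_lock is needed to stop concurrent writers from interleaving their halves. A hedged sketch of what lo_hi_writeq() expands to on such systems, mirroring the macro this patch removes (not the literal header code):

/* Roughly what <linux/io-64-nonatomic-lo-hi.h> provides when
 * BITS_PER_LONG == 32 (sketch only):
 */
static inline void lo_hi_writeq_sketch(u64 val, volatile void __iomem *addr)
{
	writel((u32)val, addr);             /* low 32 bits first */
	writel((u32)(val >> 32), addr + 4); /* then the high 32 bits */
}

Without the lock, two CPUs could interleave their low/high halves and the NIC would observe a doorbell value that neither of them wrote.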


@ -352,13 +352,16 @@ static void bnxt_copy_from_nvm_data(union devlink_param_value *dst,
dst->vu8 = (u8)val32;
}
static int bnxt_hwrm_get_nvm_cfg_ver(struct bnxt *bp,
union devlink_param_value *nvm_cfg_ver)
static int bnxt_hwrm_get_nvm_cfg_ver(struct bnxt *bp, u32 *nvm_cfg_ver)
{
struct hwrm_nvm_get_variable_input *req;
u16 bytes = BNXT_NVM_CFG_VER_BYTES;
u16 bits = BNXT_NVM_CFG_VER_BITS;
union devlink_param_value ver;
union bnxt_nvm_data *data;
dma_addr_t data_dma_addr;
int rc;
int rc, i = 2;
u16 dim = 1;
rc = hwrm_req_init(bp, req, HWRM_NVM_GET_VARIABLE);
if (rc)
@ -370,16 +373,34 @@ static int bnxt_hwrm_get_nvm_cfg_ver(struct bnxt *bp,
goto exit;
}
/* earlier devices present as an array of raw bytes */
if (!BNXT_CHIP_P5(bp)) {
dim = 0;
i = 0;
bits *= 3; /* array of 3 version components */
bytes *= 4; /* copy whole word */
}
hwrm_req_hold(bp, req);
req->dest_data_addr = cpu_to_le64(data_dma_addr);
req->data_len = cpu_to_le16(BNXT_NVM_CFG_VER_BITS);
req->data_len = cpu_to_le16(bits);
req->option_num = cpu_to_le16(NVM_OFF_NVM_CFG_VER);
req->dimensions = cpu_to_le16(dim);
rc = hwrm_req_send_silent(bp, req);
if (!rc)
bnxt_copy_from_nvm_data(nvm_cfg_ver, data,
BNXT_NVM_CFG_VER_BITS,
BNXT_NVM_CFG_VER_BYTES);
while (i >= 0) {
req->index_0 = cpu_to_le16(i--);
rc = hwrm_req_send_silent(bp, req);
if (rc)
goto exit;
bnxt_copy_from_nvm_data(&ver, data, bits, bytes);
if (BNXT_CHIP_P5(bp)) {
*nvm_cfg_ver <<= 8;
*nvm_cfg_ver |= ver.vu8;
} else {
*nvm_cfg_ver = ver.vu32;
}
}
exit:
hwrm_req_drop(bp, req);
@ -416,12 +437,12 @@ static int bnxt_dl_info_get(struct devlink *dl, struct devlink_info_req *req,
{
struct hwrm_nvm_get_dev_info_output nvm_dev_info;
struct bnxt *bp = bnxt_get_bp_from_dl(dl);
union devlink_param_value nvm_cfg_ver;
struct hwrm_ver_get_output *ver_resp;
char mgmt_ver[FW_VER_STR_LEN];
char roce_ver[FW_VER_STR_LEN];
char ncsi_ver[FW_VER_STR_LEN];
char buf[32];
u32 ver = 0;
int rc;
rc = devlink_info_driver_name_put(req, DRV_MODULE_NAME);
@ -456,7 +477,7 @@ static int bnxt_dl_info_get(struct devlink *dl, struct devlink_info_req *req,
return rc;
ver_resp = &bp->ver_resp;
sprintf(buf, "%X", ver_resp->chip_rev);
sprintf(buf, "%c%d", 'A' + ver_resp->chip_rev, ver_resp->chip_metal);
rc = bnxt_dl_info_put(bp, req, BNXT_VERSION_FIXED,
DEVLINK_INFO_VERSION_GENERIC_ASIC_REV, buf);
if (rc)
@ -475,11 +496,9 @@ static int bnxt_dl_info_get(struct devlink *dl, struct devlink_info_req *req,
if (rc)
return rc;
if (BNXT_PF(bp) && !bnxt_hwrm_get_nvm_cfg_ver(bp, &nvm_cfg_ver)) {
u32 ver = nvm_cfg_ver.vu32;
sprintf(buf, "%d.%d.%d", (ver >> 16) & 0xf, (ver >> 8) & 0xf,
ver & 0xf);
if (BNXT_PF(bp) && !bnxt_hwrm_get_nvm_cfg_ver(bp, &ver)) {
sprintf(buf, "%d.%d.%d", (ver >> 16) & 0xff, (ver >> 8) & 0xff,
ver & 0xff);
rc = bnxt_dl_info_put(bp, req, BNXT_VERSION_STORED,
DEVLINK_INFO_VERSION_GENERIC_FW_PSID,
buf);


@ -40,8 +40,8 @@ static inline void bnxt_link_bp_to_dl(struct bnxt *bp, struct devlink *dl)
#define NVM_OFF_ENABLE_SRIOV 401
#define NVM_OFF_NVM_CFG_VER 602
#define BNXT_NVM_CFG_VER_BITS 24
#define BNXT_NVM_CFG_VER_BYTES 4
#define BNXT_NVM_CFG_VER_BITS 8
#define BNXT_NVM_CFG_VER_BYTES 1
#define BNXT_MSIX_VEC_MAX 512
#define BNXT_MSIX_VEC_MIN_MAX 128
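
On P5 chips the loop above reads three one-byte version components (option index 2 down to 0) and packs them into a u32 that devlink then prints with the widened 0xff masks. A standalone sketch of the pack/unpack round trip (component values are illustrative):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint8_t comp[3] = { 1, 2, 3 };  /* components read at index 2, 1, 0 */
	uint32_t ver = 0;
	int i;

	for (i = 0; i < 3; i++)         /* mirrors: ver <<= 8; ver |= vu8 */
		ver = (ver << 8) | comp[i];

	/* mirrors the devlink print: "%d.%d.%d" with 0xff masks */
	printf("%u.%u.%u\n", (ver >> 16) & 0xff, (ver >> 8) & 0xff, ver & 0xff);
	return 0;
}

The matching header change shrinks BNXT_NVM_CFG_VER_BITS/BYTES to one byte per read, which is also why the old 0xf masks would have truncated any component above 15.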


@ -145,11 +145,11 @@ void hwrm_req_timeout(struct bnxt *bp, void *req, unsigned int timeout)
* @bp: The driver context.
* @req: The request for which calls to hwrm_req_dma_slice() will have altered
* allocation flags.
* @flags: A bitmask of GFP flags. These flags are passed to
* dma_alloc_coherent() whenever it is used to allocate backing memory
* for slices. Note that calls to hwrm_req_dma_slice() will not always
* result in new allocations, however, memory suballocated from the
* request buffer is already __GFP_ZERO.
* @gfp: A bitmask of GFP flags. These flags are passed to dma_alloc_coherent()
* whenever it is used to allocate backing memory for slices. Note that
* calls to hwrm_req_dma_slice() will not always result in new allocations,
* however, memory suballocated from the request buffer is already
* __GFP_ZERO.
*
* Sets the GFP allocation flags associated with the request for subsequent
* calls to hwrm_req_dma_slice(). This can be useful for specifying __GFP_ZERO
@ -698,8 +698,8 @@ int hwrm_req_send_silent(struct bnxt *bp, void *req)
* @bp: The driver context.
* @req: The request for which indirect data will be associated.
* @size: The size of the allocation.
* @dma: The bus address associated with the allocation. The HWRM API has no
* knowledge about the type of the request and so cannot infer how the
* @dma_handle: The bus address associated with the allocation. The HWRM API has
* no knowledge about the type of the request and so cannot infer how the
* caller intends to use the indirect data. Thus, the caller is
* responsible for configuring the request object appropriately to
* point to the associated indirect memory. Note, DMA handle has the


@ -1111,6 +1111,7 @@ static int init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
if (!adapter->registered_device_map) {
pr_err("%s: could not register any net devices\n",
pci_name(pdev));
err = -EINVAL;
goto out_release_adapter_res;
}


@ -3301,6 +3301,9 @@ void t3_sge_stop(struct adapter *adap)
t3_sge_stop_dma(adap);
/* workqueues aren't initialized otherwise */
if (!(adap->flags & FULL_INIT_DONE))
return;
for (i = 0; i < SGE_QSETS; ++i) {
struct sge_qset *qs = &adap->sge.qs[i];


@ -38,7 +38,7 @@ config CS89x0_ISA
config CS89x0_PLATFORM
tristate "CS89x0 platform driver support"
depends on ARM || COMPILE_TEST
depends on ARM || (COMPILE_TEST && !PPC)
select CS89x0
help
Say Y to compile the cs89x0 platform driver. This makes this driver


@ -362,7 +362,7 @@ static void hclge_set_default_capability(struct hclge_dev *hdev)
}
}
const struct hclge_caps_bit_map hclge_cmd_caps_bit_map0[] = {
static const struct hclge_caps_bit_map hclge_cmd_caps_bit_map0[] = {
{HCLGE_CAP_UDP_GSO_B, HNAE3_DEV_SUPPORT_UDP_GSO_B},
{HCLGE_CAP_PTP_B, HNAE3_DEV_SUPPORT_PTP_B},
{HCLGE_CAP_INT_QL_B, HNAE3_DEV_SUPPORT_INT_QL_B},


@ -342,7 +342,7 @@ static void hclgevf_set_default_capability(struct hclgevf_dev *hdev)
set_bit(HNAE3_DEV_SUPPORT_FEC_B, ae_dev->caps);
}
const struct hclgevf_caps_bit_map hclgevf_cmd_caps_bit_map0[] = {
static const struct hclgevf_caps_bit_map hclgevf_cmd_caps_bit_map0[] = {
{HCLGEVF_CAP_UDP_GSO_B, HNAE3_DEV_SUPPORT_UDP_GSO_B},
{HCLGEVF_CAP_INT_QL_B, HNAE3_DEV_SUPPORT_INT_QL_B},
{HCLGEVF_CAP_TQP_TXRX_INDEP_B, HNAE3_DEV_SUPPORT_TQP_TXRX_INDEP_B},


@ -314,7 +314,7 @@ static int __init sun3_82586_probe(void)
err = register_netdev(dev);
if (err)
goto out2;
return dev;
return 0;
out2:
release_region(ioaddr, SUN3_82586_TOTAL_SIZE);


@ -1487,7 +1487,7 @@ static int cgx_lmac_init(struct cgx *cgx)
MAX_DMAC_ENTRIES_PER_CGX / cgx->lmac_count;
err = rvu_alloc_bitmap(&lmac->mac_to_index_bmap);
if (err)
return err;
goto err_name_free;
/* Reserve first entry for default MAC address */
set_bit(0, lmac->mac_to_index_bmap.bmap);
@ -1497,7 +1497,7 @@ static int cgx_lmac_init(struct cgx *cgx)
spin_lock_init(&lmac->event_cb_lock);
err = cgx_configure_interrupt(cgx, lmac, lmac->lmac_id, false);
if (err)
goto err_irq;
goto err_bitmap_free;
/* Add reference */
cgx->lmac_idmap[lmac->lmac_id] = lmac;
@ -1507,7 +1507,9 @@ static int cgx_lmac_init(struct cgx *cgx)
return cgx_lmac_verify_fwi_version(cgx);
err_irq:
err_bitmap_free:
rvu_free_bitmap(&lmac->mac_to_index_bmap);
err_name_free:
kfree(lmac->name);
err_lmac_free:
kfree(lmac);


@ -92,7 +92,8 @@ static void rvu_setup_hw_capabilities(struct rvu *rvu)
*/
int rvu_poll_reg(struct rvu *rvu, u64 block, u64 offset, u64 mask, bool zero)
{
unsigned long timeout = jiffies + usecs_to_jiffies(10000);
unsigned long timeout = jiffies + usecs_to_jiffies(20000);
bool twice = false;
void __iomem *reg;
u64 reg_val;
@ -107,6 +108,15 @@ again:
usleep_range(1, 5);
goto again;
}
/* In scenarios where CPU is scheduled out before checking
* 'time_before' (above) and gets scheduled in such that
* jiffies are beyond timeout value, then check again if HW is
* done with the operation in the meantime.
*/
if (!twice) {
twice = true;
goto again;
}
return -EBUSY;
}
@ -201,6 +211,11 @@ int rvu_alloc_bitmap(struct rsrc_bmap *rsrc)
return 0;
}
void rvu_free_bitmap(struct rsrc_bmap *rsrc)
{
kfree(rsrc->bmap);
}
/* Get block LF's HW index from a PF_FUNC's block slot number */
int rvu_get_lf(struct rvu *rvu, struct rvu_block *block, u16 pcifunc, u16 slot)
{
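
The twice flag added above is a common hardening of jiffies-based polling: if the thread slept past the deadline, the hardware may well have finished in the meantime, so poll once more before returning -EBUSY. The shape of the pattern as a standalone sketch (hw_done() is a hypothetical stand-in for the register read):

#include <stdbool.h>
#include <stdio.h>
#include <time.h>

static bool hw_done(void)            /* hypothetical register poll */
{
	return false;
}

static int poll_with_recheck(double timeout_s)
{
	clock_t deadline = clock() + (clock_t)(timeout_s * CLOCKS_PER_SEC);
	bool twice = false;

again:
	do {
		if (hw_done())
			return 0;
	} while (clock() < deadline);

	/* Deadline passed, but we may simply have been scheduled out:
	 * give the hardware one final look before failing.
	 */
	if (!twice) {
		twice = true;
		goto again;
	}
	return -1;                   /* -EBUSY in the driver */
}

int main(void)
{
	printf("%d\n", poll_with_recheck(0.01));
	return 0;
}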


@ -638,6 +638,7 @@ static inline bool is_rvu_fwdata_valid(struct rvu *rvu)
}
int rvu_alloc_bitmap(struct rsrc_bmap *rsrc);
void rvu_free_bitmap(struct rsrc_bmap *rsrc);
int rvu_alloc_rsrc(struct rsrc_bmap *rsrc);
void rvu_free_rsrc(struct rsrc_bmap *rsrc, int id);
bool is_rsrc_free(struct rsrc_bmap *rsrc, int id);


@ -27,7 +27,8 @@ int cn10k_lmtst_init(struct otx2_nic *pfvf)
{
struct lmtst_tbl_setup_req *req;
int qcount, err;
struct otx2_lmt_info *lmt_info;
int err, cpu;
if (!test_bit(CN10K_LMTST, &pfvf->hw.cap_flag)) {
pfvf->hw_ops = &otx2_hw_ops;
@ -35,15 +36,9 @@ int cn10k_lmtst_init(struct otx2_nic *pfvf)
}
pfvf->hw_ops = &cn10k_hw_ops;
qcount = pfvf->hw.max_queues;
/* LMTST lines allocation
* qcount = num_online_cpus();
* NPA = TX + RX + XDP.
* NIX = TX * 32 (For Burst SQE flush).
*/
pfvf->tot_lmt_lines = (qcount * 3) + (qcount * 32);
pfvf->npa_lmt_lines = qcount * 3;
pfvf->nix_lmt_size = LMT_BURST_SIZE * LMT_LINE_SIZE;
/* Total LMTLINES = num_online_cpus() * 32 (For Burst flush).*/
pfvf->tot_lmt_lines = (num_online_cpus() * LMT_BURST_SIZE);
pfvf->hw.lmt_info = alloc_percpu(struct otx2_lmt_info);
mutex_lock(&pfvf->mbox.lock);
req = otx2_mbox_alloc_msg_lmtst_tbl_setup(&pfvf->mbox);
@ -66,6 +61,13 @@ int cn10k_lmtst_init(struct otx2_nic *pfvf)
err = otx2_sync_mbox_msg(&pfvf->mbox);
mutex_unlock(&pfvf->mbox.lock);
for_each_possible_cpu(cpu) {
lmt_info = per_cpu_ptr(pfvf->hw.lmt_info, cpu);
lmt_info->lmt_addr = ((u64)pfvf->hw.lmt_base +
(cpu * LMT_BURST_SIZE * LMT_LINE_SIZE));
lmt_info->lmt_id = cpu * LMT_BURST_SIZE;
}
return 0;
}
EXPORT_SYMBOL(cn10k_lmtst_init);
@ -74,13 +76,6 @@ int cn10k_sq_aq_init(void *dev, u16 qidx, u16 sqb_aura)
{
struct nix_cn10k_aq_enq_req *aq;
struct otx2_nic *pfvf = dev;
struct otx2_snd_queue *sq;
sq = &pfvf->qset.sq[qidx];
sq->lmt_addr = (u64 *)((u64)pfvf->hw.nix_lmt_base +
(qidx * pfvf->nix_lmt_size));
sq->lmt_id = pfvf->npa_lmt_lines + (qidx * LMT_BURST_SIZE);
/* Get memory to put this msg */
aq = otx2_mbox_alloc_msg_nix_cn10k_aq_enq(&pfvf->mbox);
@ -125,8 +120,7 @@ void cn10k_refill_pool_ptrs(void *dev, struct otx2_cq_queue *cq)
if (otx2_alloc_buffer(pfvf, cq, &bufptr)) {
if (num_ptrs--)
__cn10k_aura_freeptr(pfvf, cq->cq_idx, ptrs,
num_ptrs,
cq->rbpool->lmt_addr);
num_ptrs);
break;
}
cq->pool_ptrs--;
@ -134,8 +128,7 @@ void cn10k_refill_pool_ptrs(void *dev, struct otx2_cq_queue *cq)
num_ptrs++;
if (num_ptrs == NPA_MAX_BURST || cq->pool_ptrs == 0) {
__cn10k_aura_freeptr(pfvf, cq->cq_idx, ptrs,
num_ptrs,
cq->rbpool->lmt_addr);
num_ptrs);
num_ptrs = 1;
}
}
@ -143,20 +136,23 @@ void cn10k_refill_pool_ptrs(void *dev, struct otx2_cq_queue *cq)
void cn10k_sqe_flush(void *dev, struct otx2_snd_queue *sq, int size, int qidx)
{
struct otx2_lmt_info *lmt_info;
struct otx2_nic *pfvf = dev;
u64 val = 0, tar_addr = 0;
lmt_info = per_cpu_ptr(pfvf->hw.lmt_info, smp_processor_id());
/* FIXME: val[0:10] LMT_ID.
* [12:15] no of LMTST - 1 in the burst.
* [19:63] data size of each LMTST in the burst except first.
*/
val = (sq->lmt_id & 0x7FF);
val = (lmt_info->lmt_id & 0x7FF);
/* Target address for LMTST flush tells HW how many 128bit
* words are present.
* tar_addr[6:4] size of first LMTST - 1 in units of 128b.
*/
tar_addr |= sq->io_addr | (((size / 16) - 1) & 0x7) << 4;
dma_wmb();
memcpy(sq->lmt_addr, sq->sqe_base, size);
memcpy((u64 *)lmt_info->lmt_addr, sq->sqe_base, size);
cn10k_lmt_flush(val, tar_addr);
sq->head++;
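
The setup loop above carves one burst-sized LMTST region per possible CPU out of a single flat mapping: address base + cpu * LMT_BURST_SIZE * LMT_LINE_SIZE and id cpu * LMT_BURST_SIZE. A standalone sketch of that carve-up (constants copied from the header further down; the base address is made up):

#include <stdio.h>
#include <stdint.h>

#define LMT_LINE_SIZE  128
#define LMT_BURST_SIZE 32   /* 32 LMTST lines for burst SQE flush */

struct lmt_info {           /* mirrors struct otx2_lmt_info */
	uint64_t lmt_addr;
	uint16_t lmt_id;
};

int main(void)
{
	uint64_t lmt_base = 0x840000000000ULL; /* illustrative base */
	struct lmt_info info[4];               /* pretend 4 CPUs */
	int cpu;

	for (cpu = 0; cpu < 4; cpu++) {
		info[cpu].lmt_addr = lmt_base +
			(uint64_t)cpu * LMT_BURST_SIZE * LMT_LINE_SIZE;
		info[cpu].lmt_id = cpu * LMT_BURST_SIZE;
		printf("cpu%d: id=%u addr=0x%llx\n", cpu, info[cpu].lmt_id,
		       (unsigned long long)info[cpu].lmt_addr);
	}
	return 0;
}

Keying the lines to smp_processor_id() rather than to each queue is what lets the patch delete the per-queue NPA/NIX reservation arithmetic.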


@ -1230,11 +1230,6 @@ static int otx2_pool_init(struct otx2_nic *pfvf, u16 pool_id,
pool->rbsize = buf_size;
/* Set LMTST addr for NPA batch free */
if (test_bit(CN10K_LMTST, &pfvf->hw.cap_flag))
pool->lmt_addr = (__force u64 *)((u64)pfvf->hw.npa_lmt_base +
(pool_id * LMT_LINE_SIZE));
/* Initialize this pool's context via AF */
aq = otx2_mbox_alloc_msg_npa_aq_enq(&pfvf->mbox);
if (!aq) {


@ -53,6 +53,10 @@ enum arua_mapped_qtypes {
/* Send skid of 2000 packets required for CQ size of 4K CQEs. */
#define SEND_CQ_SKID 2000
struct otx2_lmt_info {
u64 lmt_addr;
u16 lmt_id;
};
/* RSS configuration */
struct otx2_rss_ctx {
u8 ind_tbl[MAX_RSS_INDIR_TBL_SIZE];
@ -224,8 +228,7 @@ struct otx2_hw {
#define LMT_LINE_SIZE 128
#define LMT_BURST_SIZE 32 /* 32 LMTST lines for burst SQE flush */
u64 *lmt_base;
u64 *npa_lmt_base;
u64 *nix_lmt_base;
struct otx2_lmt_info __percpu *lmt_info;
};
enum vfperm {
@ -407,17 +410,18 @@ static inline bool is_96xx_B0(struct pci_dev *pdev)
*/
#define PCI_REVISION_ID_96XX 0x00
#define PCI_REVISION_ID_95XX 0x10
#define PCI_REVISION_ID_LOKI 0x20
#define PCI_REVISION_ID_95XXN 0x20
#define PCI_REVISION_ID_98XX 0x30
#define PCI_REVISION_ID_95XXMM 0x40
#define PCI_REVISION_ID_95XXO 0xE0
static inline bool is_dev_otx2(struct pci_dev *pdev)
{
u8 midr = pdev->revision & 0xF0;
return (midr == PCI_REVISION_ID_96XX || midr == PCI_REVISION_ID_95XX ||
midr == PCI_REVISION_ID_LOKI || midr == PCI_REVISION_ID_98XX ||
midr == PCI_REVISION_ID_95XXMM);
midr == PCI_REVISION_ID_95XXN || midr == PCI_REVISION_ID_98XX ||
midr == PCI_REVISION_ID_95XXMM || midr == PCI_REVISION_ID_95XXO);
}
static inline void otx2_setup_dev_hw_settings(struct otx2_nic *pfvf)
@ -562,15 +566,16 @@ static inline u64 otx2_atomic64_add(u64 incr, u64 *ptr)
#endif
static inline void __cn10k_aura_freeptr(struct otx2_nic *pfvf, u64 aura,
u64 *ptrs, u64 num_ptrs,
u64 *lmt_addr)
u64 *ptrs, u64 num_ptrs)
{
struct otx2_lmt_info *lmt_info;
u64 size = 0, count_eot = 0;
u64 tar_addr, val = 0;
lmt_info = per_cpu_ptr(pfvf->hw.lmt_info, smp_processor_id());
tar_addr = (__force u64)otx2_get_regaddr(pfvf, NPA_LF_AURA_BATCH_FREE0);
/* LMTID is same as AURA Id */
val = (aura & 0x7FF) | BIT_ULL(63);
val = (lmt_info->lmt_id & 0x7FF) | BIT_ULL(63);
/* Set if [127:64] of last 128bit word has a valid pointer */
count_eot = (num_ptrs % 2) ? 0ULL : 1ULL;
/* Set AURA ID to free pointer */
@ -586,7 +591,7 @@ static inline void __cn10k_aura_freeptr(struct otx2_nic *pfvf, u64 aura,
size++;
tar_addr |= ((size - 1) & 0x7) << 4;
}
memcpy(lmt_addr, ptrs, sizeof(u64) * num_ptrs);
memcpy((u64 *)lmt_info->lmt_addr, ptrs, sizeof(u64) * num_ptrs);
/* Perform LMTST flush */
cn10k_lmt_flush(val, tar_addr);
}
@ -594,12 +599,11 @@ static inline void __cn10k_aura_freeptr(struct otx2_nic *pfvf, u64 aura,
static inline void cn10k_aura_freeptr(void *dev, int aura, u64 buf)
{
struct otx2_nic *pfvf = dev;
struct otx2_pool *pool;
u64 ptrs[2];
pool = &pfvf->qset.pool[aura];
ptrs[1] = buf;
__cn10k_aura_freeptr(pfvf, aura, ptrs, 2, pool->lmt_addr);
/* Free only one buffer at a time during init and teardown */
__cn10k_aura_freeptr(pfvf, aura, ptrs, 2);
}
/* Alloc pointer from pool/aura */


@ -16,8 +16,8 @@
#include "otx2_common.h"
#include "otx2_ptp.h"
#define DRV_NAME "octeontx2-nicpf"
#define DRV_VF_NAME "octeontx2-nicvf"
#define DRV_NAME "rvu-nicpf"
#define DRV_VF_NAME "rvu-nicvf"
struct otx2_stat {
char name[ETH_GSTRING_LEN];


@ -1533,14 +1533,6 @@ int otx2_open(struct net_device *netdev)
if (!qset->rq)
goto err_free_mem;
if (test_bit(CN10K_LMTST, &pf->hw.cap_flag)) {
/* Reserve LMT lines for NPA AURA batch free */
pf->hw.npa_lmt_base = pf->hw.lmt_base;
/* Reserve LMT lines for NIX TX */
pf->hw.nix_lmt_base = (u64 *)((u64)pf->hw.npa_lmt_base +
(pf->npa_lmt_lines * LMT_LINE_SIZE));
}
err = otx2_init_hw_resources(pf);
if (err)
goto err_free_mem;
@ -2668,6 +2660,8 @@ err_del_mcam_entries:
err_ptp_destroy:
otx2_ptp_destroy(pf);
err_detach_rsrc:
if (pf->hw.lmt_info)
free_percpu(pf->hw.lmt_info);
if (test_bit(CN10K_LMTST, &pf->hw.cap_flag))
qmem_free(pf->dev, pf->dync_lmt);
otx2_detach_resources(&pf->mbox);
@ -2811,6 +2805,8 @@ static void otx2_remove(struct pci_dev *pdev)
otx2_mcam_flow_del(pf);
otx2_shutdown_tc(pf);
otx2_detach_resources(&pf->mbox);
if (pf->hw.lmt_info)
free_percpu(pf->hw.lmt_info);
if (test_bit(CN10K_LMTST, &pf->hw.cap_flag))
qmem_free(pf->dev, pf->dync_lmt);
otx2_disable_mbox_intr(pf);


@ -80,7 +80,6 @@ struct otx2_snd_queue {
u16 num_sqbs;
u16 sqe_thresh;
u8 sqe_per_sqb;
u32 lmt_id;
u64 io_addr;
u64 *aura_fc_addr;
u64 *lmt_addr;
@ -111,7 +110,6 @@ struct otx2_cq_poll {
struct otx2_pool {
struct qmem *stack;
struct qmem *fc_addr;
u64 *lmt_addr;
u16 rbsize;
};


@ -582,7 +582,10 @@ static int ionic_set_ringparam(struct net_device *netdev,
qparam.ntxq_descs = ring->tx_pending;
qparam.nrxq_descs = ring->rx_pending;
mutex_lock(&lif->queue_lock);
err = ionic_reconfigure_queues(lif, &qparam);
mutex_unlock(&lif->queue_lock);
if (err)
netdev_info(netdev, "Ring reconfiguration failed, changes canceled: %d\n", err);
@ -679,7 +682,9 @@ static int ionic_set_channels(struct net_device *netdev,
return 0;
}
mutex_lock(&lif->queue_lock);
err = ionic_reconfigure_queues(lif, &qparam);
mutex_unlock(&lif->queue_lock);
if (err)
netdev_info(netdev, "Queue reconfiguration failed, changes canceled: %d\n", err);


@ -1715,7 +1715,6 @@ static int ionic_set_mac_address(struct net_device *netdev, void *sa)
static void ionic_stop_queues_reconfig(struct ionic_lif *lif)
{
/* Stop and clean the queues before reconfiguration */
mutex_lock(&lif->queue_lock);
netif_device_detach(lif->netdev);
ionic_stop_queues(lif);
ionic_txrx_deinit(lif);
@ -1734,8 +1733,7 @@ static int ionic_start_queues_reconfig(struct ionic_lif *lif)
* DOWN and UP to try to reset and clear the issue.
*/
err = ionic_txrx_init(lif);
mutex_unlock(&lif->queue_lock);
ionic_link_status_check_request(lif, CAN_SLEEP);
ionic_link_status_check_request(lif, CAN_NOT_SLEEP);
netif_device_attach(lif->netdev);
return err;
@ -1765,9 +1763,13 @@ static int ionic_change_mtu(struct net_device *netdev, int new_mtu)
return 0;
}
mutex_lock(&lif->queue_lock);
ionic_stop_queues_reconfig(lif);
netdev->mtu = new_mtu;
return ionic_start_queues_reconfig(lif);
err = ionic_start_queues_reconfig(lif);
mutex_unlock(&lif->queue_lock);
return err;
}
static void ionic_tx_timeout_work(struct work_struct *ws)
@ -1783,8 +1785,10 @@ static void ionic_tx_timeout_work(struct work_struct *ws)
if (!netif_running(lif->netdev))
return;
mutex_lock(&lif->queue_lock);
ionic_stop_queues_reconfig(lif);
ionic_start_queues_reconfig(lif);
mutex_unlock(&lif->queue_lock);
}
static void ionic_tx_timeout(struct net_device *netdev, unsigned int txqueue)
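
All of the ionic hunks above follow a single rule: queue_lock moves out of the stop/start helpers and into every caller, so each path holds the mutex across the whole stop-reconfigure-start sequence and the helpers can no longer deadlock by acquiring it twice. The same discipline in a generic, standalone form (pthread mutex; all names hypothetical):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;

/* helpers now assume the caller already holds queue_lock */
static void stop_queues(void)  { puts("queues stopped"); }
static int  start_queues(void) { puts("queues started"); return 0; }

/* each entry point takes the lock once, around the whole sequence */
static int change_mtu(int new_mtu)
{
	int err;

	pthread_mutex_lock(&queue_lock);
	stop_queues();
	printf("mtu = %d\n", new_mtu);
	err = start_queues();
	pthread_mutex_unlock(&queue_lock);
	return err;
}

int main(void)
{
	return change_mtu(9000);
}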


@ -318,7 +318,7 @@ void ionic_rx_filter_sync(struct ionic_lif *lif)
if (f->state == IONIC_FILTER_STATE_NEW ||
f->state == IONIC_FILTER_STATE_OLD) {
sync_item = devm_kzalloc(dev, sizeof(*sync_item),
GFP_KERNEL);
GFP_ATOMIC);
if (!sync_item)
goto loop_out;


@ -437,7 +437,6 @@ int qlcnic_pinit_from_rom(struct qlcnic_adapter *adapter)
QLCWR32(adapter, QLCNIC_CRB_PEG_NET_4 + 0x3c, 1);
msleep(20);
qlcnic_rom_unlock(adapter);
/* big hammer don't reset CAM block on reset */
QLCWR32(adapter, QLCNIC_ROMUSB_GLB_SW_RESET, 0xfeffffff);


@ -100,7 +100,7 @@ static void emac_get_strings(struct net_device *netdev, u32 stringset, u8 *data)
case ETH_SS_STATS:
for (i = 0; i < EMAC_STATS_LEN; i++) {
strlcpy(data, emac_ethtool_stat_strings[i],
strscpy(data, emac_ethtool_stat_strings[i],
ETH_GSTRING_LEN);
data += ETH_GSTRING_LEN;
}


@ -2533,6 +2533,7 @@ static netdev_tx_t sh_eth_start_xmit(struct sk_buff *skb,
else
txdesc->status |= cpu_to_le32(TD_TACT);
wmb(); /* cur_tx must be incremented after TACT bit was set */
mdp->cur_tx++;
if (!(sh_eth_read(ndev, EDTRR) & mdp->cd->edtrr_trns))


@ -1550,7 +1550,7 @@ static int smc911x_ethtool_getregslen(struct net_device *dev)
}
static void smc911x_ethtool_getregs(struct net_device *dev,
struct ethtool_regs* regs, void *buf)
struct ethtool_regs *regs, void *buf)
{
struct smc911x_local *lp = netdev_priv(dev);
unsigned long flags;
@ -1600,7 +1600,7 @@ static int smc911x_ethtool_wait_eeprom_ready(struct net_device *dev)
}
static inline int smc911x_ethtool_write_eeprom_cmd(struct net_device *dev,
int cmd, int addr)
int cmd, int addr)
{
struct smc911x_local *lp = netdev_priv(dev);
int ret;
@ -1614,7 +1614,7 @@ static inline int smc911x_ethtool_write_eeprom_cmd(struct net_device *dev,
}
static inline int smc911x_ethtool_read_eeprom_byte(struct net_device *dev,
u8 *data)
u8 *data)
{
struct smc911x_local *lp = netdev_priv(dev);
int ret;
@ -1626,7 +1626,7 @@ static inline int smc911x_ethtool_read_eeprom_byte(struct net_device *dev,
}
static inline int smc911x_ethtool_write_eeprom_byte(struct net_device *dev,
u8 data)
u8 data)
{
struct smc911x_local *lp = netdev_priv(dev);
int ret;
@ -1638,7 +1638,7 @@ static inline int smc911x_ethtool_write_eeprom_byte(struct net_device *dev,
}
static int smc911x_ethtool_geteeprom(struct net_device *dev,
struct ethtool_eeprom *eeprom, u8 *data)
struct ethtool_eeprom *eeprom, u8 *data)
{
u8 eebuf[SMC911X_EEPROM_LEN];
int i, ret;
@ -1654,7 +1654,7 @@ static int smc911x_ethtool_geteeprom(struct net_device *dev,
}
static int smc911x_ethtool_seteeprom(struct net_device *dev,
struct ethtool_eeprom *eeprom, u8 *data)
struct ethtool_eeprom *eeprom, u8 *data)
{
int i, ret;


@ -109,8 +109,10 @@ static int loongson_dwmac_probe(struct pci_dev *pdev, const struct pci_device_id
plat->bus_id = pci_dev_id(pdev);
phy_mode = device_get_phy_mode(&pdev->dev);
if (phy_mode < 0)
if (phy_mode < 0) {
dev_err(&pdev->dev, "phy_mode not found\n");
return phy_mode;
}
plat->phy_interface = phy_mode;
plat->interface = PHY_INTERFACE_MODE_GMII;


@ -5347,7 +5347,7 @@ static int stmmac_napi_poll_rxtx(struct napi_struct *napi, int budget)
struct stmmac_channel *ch =
container_of(napi, struct stmmac_channel, rxtx_napi);
struct stmmac_priv *priv = ch->priv_data;
int rx_done, tx_done;
int rx_done, tx_done, rxtx_done;
u32 chan = ch->index;
priv->xstats.napi_poll++;
@ -5357,14 +5357,16 @@ static int stmmac_napi_poll_rxtx(struct napi_struct *napi, int budget)
rx_done = stmmac_rx_zc(priv, budget, chan);
rxtx_done = max(tx_done, rx_done);
/* If either TX or RX work is not complete, return budget
* and keep pooling
*/
if (tx_done >= budget || rx_done >= budget)
if (rxtx_done >= budget)
return budget;
/* all work done, exit the polling mode */
if (napi_complete_done(napi, rx_done)) {
if (napi_complete_done(napi, rxtx_done)) {
unsigned long flags;
spin_lock_irqsave(&ch->lock, flags);
@ -5375,7 +5377,7 @@ static int stmmac_napi_poll_rxtx(struct napi_struct *napi, int budget)
spin_unlock_irqrestore(&ch->lock, flags);
}
return min(rx_done, budget - 1);
return min(rxtx_done, budget - 1);
}
/**
@ -7121,8 +7123,6 @@ int stmmac_suspend(struct device *dev)
if (!ndev || !netif_running(ndev))
return 0;
phylink_mac_change(priv->phylink, false);
mutex_lock(&priv->lock);
netif_device_detach(ndev);
@ -7148,14 +7148,6 @@ int stmmac_suspend(struct device *dev)
stmmac_pmt(priv, priv->hw, priv->wolopts);
priv->irq_wake = 1;
} else {
mutex_unlock(&priv->lock);
rtnl_lock();
if (device_may_wakeup(priv->device))
phylink_speed_down(priv->phylink, false);
phylink_stop(priv->phylink);
rtnl_unlock();
mutex_lock(&priv->lock);
stmmac_mac_set(priv, priv->ioaddr, false);
pinctrl_pm_select_sleep_state(priv->device);
/* Disable clock in case of PWM is off */
@ -7169,6 +7161,16 @@ int stmmac_suspend(struct device *dev)
mutex_unlock(&priv->lock);
rtnl_lock();
if (device_may_wakeup(priv->device) && priv->plat->pmt) {
phylink_suspend(priv->phylink, true);
} else {
if (device_may_wakeup(priv->device))
phylink_speed_down(priv->phylink, false);
phylink_suspend(priv->phylink, false);
}
rtnl_unlock();
if (priv->dma_cap.fpesel) {
/* Disable FPE */
stmmac_fpe_configure(priv, priv->ioaddr,
@ -7259,13 +7261,15 @@ int stmmac_resume(struct device *dev)
return ret;
}
if (!device_may_wakeup(priv->device) || !priv->plat->pmt) {
rtnl_lock();
phylink_start(priv->phylink);
/* We may have called phylink_speed_down before */
phylink_speed_up(priv->phylink);
rtnl_unlock();
rtnl_lock();
if (device_may_wakeup(priv->device) && priv->plat->pmt) {
phylink_resume(priv->phylink);
} else {
phylink_resume(priv->phylink);
if (device_may_wakeup(priv->device))
phylink_speed_up(priv->phylink);
}
rtnl_unlock();
rtnl_lock();
mutex_lock(&priv->lock);
@ -7286,8 +7290,6 @@ int stmmac_resume(struct device *dev)
mutex_unlock(&priv->lock);
rtnl_unlock();
phylink_mac_change(priv->phylink, true);
netif_device_attach(ndev);
return 0;
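
The rxtx_napi change earlier in this file restores the NAPI polling contract: return the full budget while work remains, and a value strictly below budget (after napi_complete_done()) once done; max(tx_done, rx_done) yields one completion figure for a channel serving both rings. Condensed into a hedged sketch (do_rx/do_tx/reenable_irq are hypothetical stand-ins, not the stmmac code itself):

/* Sketch of the poll-return contract only */
static int rxtx_poll_sketch(struct napi_struct *napi, int budget)
{
	int tx_done = do_tx(budget);
	int rx_done = do_rx(budget);
	int done = max(tx_done, rx_done);

	if (done >= budget)
		return budget;        /* not finished: stay in polling mode */

	if (napi_complete_done(napi, done))
		reenable_irq();       /* only if nobody rescheduled us */

	return min(done, budget - 1); /* completion requires < budget */
}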


@ -16,7 +16,6 @@
#include <linux/ptp_clock_kernel.h>
#include <linux/platform_device.h>
#include <linux/soc/ixp4xx/cpu.h>
#include <linux/module.h>
#include <mach/ixp4xx-regs.h>
#include "ixp46x_ts.h"


@ -33,6 +33,7 @@
enum {
PHYLINK_DISABLE_STOPPED,
PHYLINK_DISABLE_LINK,
PHYLINK_DISABLE_MAC_WOL,
};
/**
@ -1282,6 +1283,9 @@ EXPORT_SYMBOL_GPL(phylink_start);
* network device driver's &struct net_device_ops ndo_stop() method. The
* network device's carrier state should not be changed prior to calling this
* function.
*
* This will synchronously bring down the link if the link is not already
* down (in other words, it will trigger a mac_link_down() method call.)
*/
void phylink_stop(struct phylink *pl)
{
@ -1301,6 +1305,84 @@ void phylink_stop(struct phylink *pl)
}
EXPORT_SYMBOL_GPL(phylink_stop);
/**
* phylink_suspend() - handle a network device suspend event
* @pl: a pointer to a &struct phylink returned from phylink_create()
* @mac_wol: true if the MAC needs to receive packets for Wake-on-Lan
*
* Handle a network device suspend event. There are several cases:
* - If Wake-on-Lan is not active, we can bring down the link between
* the MAC and PHY by calling phylink_stop().
* - If Wake-on-Lan is active, and being handled only by the PHY, we
* can also bring down the link between the MAC and PHY.
* - If Wake-on-Lan is active, but being handled by the MAC, the MAC
* still needs to receive packets, so we can not bring the link down.
*/
void phylink_suspend(struct phylink *pl, bool mac_wol)
{
ASSERT_RTNL();
if (mac_wol && (!pl->netdev || pl->netdev->wol_enabled)) {
/* Wake-on-Lan enabled, MAC handling */
mutex_lock(&pl->state_mutex);
/* Stop the resolver bringing the link up */
__set_bit(PHYLINK_DISABLE_MAC_WOL, &pl->phylink_disable_state);
/* Disable the carrier, to prevent transmit timeouts,
* but one would hope all packets have been sent. This
* also means phylink_resolve() will do nothing.
*/
netif_carrier_off(pl->netdev);
/* We do not call mac_link_down() here as we want the
* link to remain up to receive the WoL packets.
*/
mutex_unlock(&pl->state_mutex);
} else {
phylink_stop(pl);
}
}
EXPORT_SYMBOL_GPL(phylink_suspend);
/**
* phylink_resume() - handle a network device resume event
* @pl: a pointer to a &struct phylink returned from phylink_create()
*
* Undo the effects of phylink_suspend(), returning the link to an
* operational state.
*/
void phylink_resume(struct phylink *pl)
{
ASSERT_RTNL();
if (test_bit(PHYLINK_DISABLE_MAC_WOL, &pl->phylink_disable_state)) {
/* Wake-on-Lan enabled, MAC handling */
/* Call mac_link_down() so we keep the overall state balanced.
* Do this under the state_mutex lock for consistency. This
* will cause a "Link Down" message to be printed during
* resume, which is harmless - the true link state will be
* printed when we run a resolve.
*/
mutex_lock(&pl->state_mutex);
phylink_link_down(pl);
mutex_unlock(&pl->state_mutex);
/* Re-apply the link parameters so that all the settings get
* restored to the MAC.
*/
phylink_mac_initial_config(pl, true);
/* Re-enable and re-resolve the link parameters */
clear_bit(PHYLINK_DISABLE_MAC_WOL, &pl->phylink_disable_state);
phylink_run_resolve(pl);
} else {
phylink_start(pl);
}
}
EXPORT_SYMBOL_GPL(phylink_resume);
/**
* phylink_ethtool_get_wol() - get the wake on lan parameters for the PHY
* @pl: a pointer to a &struct phylink returned from phylink_create()
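
For driver authors, the intended calling pattern for the new pair is the stmmac conversion earlier in this commit: under the RTNL, suspend with mac_wol=true only when the MAC itself must keep receiving wake packets, and mirror it on resume. A condensed sketch (priv, dev and mac_handles_wol are driver-specific placeholders):

/* suspend path */
rtnl_lock();
if (device_may_wakeup(dev) && mac_handles_wol) {
	phylink_suspend(priv->phylink, true);   /* keep the link up for WoL */
} else {
	if (device_may_wakeup(dev))
		phylink_speed_down(priv->phylink, false);
	phylink_suspend(priv->phylink, false);  /* behaves like phylink_stop() */
}
rtnl_unlock();

/* resume path mirrors it */
rtnl_lock();
phylink_resume(priv->phylink);
if (!mac_handles_wol && device_may_wakeup(dev))
	phylink_speed_up(priv->phylink);
rtnl_unlock();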


@ -654,6 +654,11 @@ static const struct usb_device_id mbim_devs[] = {
.driver_info = (unsigned long)&cdc_mbim_info_avoid_altsetting_toggle,
},
/* Telit LN920 */
{ USB_DEVICE_AND_INTERFACE_INFO(0x1bc7, 0x1061, USB_CLASS_COMM, USB_CDC_SUBCLASS_MBIM, USB_CDC_PROTO_NONE),
.driver_info = (unsigned long)&cdc_mbim_info_avoid_altsetting_toggle,
},
/* default entry */
{ USB_INTERFACE_INFO(USB_CLASS_COMM, USB_CDC_SUBCLASS_MBIM, USB_CDC_PROTO_NONE),
.driver_info = (unsigned long)&cdc_mbim_info_zlp,


@ -2535,13 +2535,17 @@ static struct hso_device *hso_create_net_device(struct usb_interface *interface,
if (!hso_net->mux_bulk_tx_buf)
goto err_free_tx_urb;
add_net_device(hso_dev);
result = add_net_device(hso_dev);
if (result) {
dev_err(&interface->dev, "Failed to add net device\n");
goto err_free_tx_buf;
}
/* registering our net device */
result = register_netdev(net);
if (result) {
dev_err(&interface->dev, "Failed to register device\n");
goto err_free_tx_buf;
goto err_rmv_ndev;
}
hso_log_port(hso_dev);
@ -2550,8 +2554,9 @@ static struct hso_device *hso_create_net_device(struct usb_interface *interface,
return hso_dev;
err_free_tx_buf:
err_rmv_ndev:
remove_net_device(hso_dev);
err_free_tx_buf:
kfree(hso_net->mux_bulk_tx_buf);
err_free_tx_urb:
usb_free_urb(hso_net->mux_bulk_tx_urb);
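
The hso fix above is a textbook goto-unwind repair: once add_net_device() can fail it needs its own label, and the labels must release resources in exact reverse order of acquisition so every failure point frees precisely what exists so far. The generic shape of the idiom (standalone sketch; alloc_a/alloc_b/register_thing stand in for the real steps):

#include <stdlib.h>

static void *alloc_a(void) { return malloc(16); }
static void *alloc_b(void) { return malloc(16); }
static int register_thing(void) { return -1; }  /* pretend it fails */

static int create(void)
{
	void *a, *b;
	int err = -1;

	a = alloc_a();
	if (!a)
		return -1;

	b = alloc_b();
	if (!b)
		goto err_free_a;     /* only 'a' exists yet */

	err = register_thing();
	if (err)
		goto err_free_b;     /* unwind in reverse order */

	return 0;

err_free_b:
	free(b);
err_free_a:
	free(a);
	return err;
}

int main(void)
{
	return create() ? 1 : 0;
}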


@ -1354,6 +1354,7 @@ static const struct usb_device_id products[] = {
{QMI_QUIRK_SET_DTR(0x1bc7, 0x1031, 3)}, /* Telit LE910C1-EUX */
{QMI_QUIRK_SET_DTR(0x1bc7, 0x1040, 2)}, /* Telit LE922A */
{QMI_QUIRK_SET_DTR(0x1bc7, 0x1050, 2)}, /* Telit FN980 */
{QMI_QUIRK_SET_DTR(0x1bc7, 0x1060, 2)}, /* Telit LN920 */
{QMI_FIXED_INTF(0x1bc7, 0x1100, 3)}, /* Telit ME910 */
{QMI_FIXED_INTF(0x1bc7, 0x1101, 3)}, /* Telit ME910 dual modem */
{QMI_FIXED_INTF(0x1bc7, 0x1200, 5)}, /* Telit LE920 */

@ -9,7 +9,7 @@
#include "iwl-prph.h"
/* Highest firmware API version supported */
#define IWL_22000_UCODE_API_MAX 65
#define IWL_22000_UCODE_API_MAX 66
/* Lowest firmware API version supported */
#define IWL_22000_UCODE_API_MIN 39

@ -231,6 +231,7 @@ static int iwl_pnvm_get_from_fs(struct iwl_trans *trans, u8 **data, size_t *len)
{
const struct firmware *pnvm;
char pnvm_name[MAX_PNVM_NAME];
size_t new_len;
int ret;
iwl_pnvm_get_fs_name(trans, pnvm_name, sizeof(pnvm_name));
@ -242,11 +243,14 @@ static int iwl_pnvm_get_from_fs(struct iwl_trans *trans, u8 **data, size_t *len)
return ret;
}
new_len = pnvm->size;
*data = kmemdup(pnvm->data, pnvm->size, GFP_KERNEL);
release_firmware(pnvm);
if (!*data)
return -ENOMEM;
*len = pnvm->size;
*len = new_len;
return 0;
}
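The bug fixed above was a use-after-free: the old code read pnvm->size for *len after release_firmware() had already freed the object. The safe ordering, as a minimal hedged sketch (load_blob() and its parameters are illustrative; request_firmware(), kmemdup() and release_firmware() are the real APIs):

static int load_blob(struct device *dev, const char *name,
		     u8 **data, size_t *len)
{
	const struct firmware *fw;
	size_t new_len;

	if (request_firmware(&fw, name, dev))
		return -ENOENT;

	new_len = fw->size;	/* capture while fw is still valid */
	*data = kmemdup(fw->data, fw->size, GFP_KERNEL);
	release_firmware(fw);	/* fw and fw->size must not be used now */
	if (!*data)
		return -ENOMEM;

	*len = new_len;
	return 0;
}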

@ -558,6 +558,7 @@ static const struct iwl_dev_info iwl_dev_info_table[] = {
IWL_DEV_INFO(0xA0F0, 0x1652, killer1650i_2ax_cfg_qu_b0_hr_b0, NULL),
IWL_DEV_INFO(0xA0F0, 0x2074, iwl_ax201_cfg_qu_hr, NULL),
IWL_DEV_INFO(0xA0F0, 0x4070, iwl_ax201_cfg_qu_hr, NULL),
IWL_DEV_INFO(0xA0F0, 0x6074, iwl_ax201_cfg_qu_hr, NULL),
IWL_DEV_INFO(0x02F0, 0x0070, iwl_ax201_cfg_quz_hr, NULL),
IWL_DEV_INFO(0x02F0, 0x0074, iwl_ax201_cfg_quz_hr, NULL),
IWL_DEV_INFO(0x02F0, 0x6074, iwl_ax201_cfg_quz_hr, NULL),

@ -69,7 +69,7 @@ void ipc_mmio_update_cp_capability(struct iosm_mmio *ipc_mmio)
unsigned int ver;
ver = ipc_mmio_get_cp_version(ipc_mmio);
cp_cap = readl(ipc_mmio->base + ipc_mmio->offset.cp_capability);
cp_cap = ioread32(ipc_mmio->base + ipc_mmio->offset.cp_capability);
ipc_mmio->has_mux_lite = (ver >= IOSM_CP_VERSION) &&
!(cp_cap & DL_AGGR) && !(cp_cap & UL_AGGR);
@ -150,8 +150,8 @@ enum ipc_mem_exec_stage ipc_mmio_get_exec_stage(struct iosm_mmio *ipc_mmio)
if (!ipc_mmio)
return IPC_MEM_EXEC_STAGE_INVALID;
return (enum ipc_mem_exec_stage)readl(ipc_mmio->base +
ipc_mmio->offset.exec_stage);
return (enum ipc_mem_exec_stage)ioread32(ipc_mmio->base +
ipc_mmio->offset.exec_stage);
}
void ipc_mmio_copy_chip_info(struct iosm_mmio *ipc_mmio, void *dest,
@ -167,8 +167,8 @@ enum ipc_mem_device_ipc_state ipc_mmio_get_ipc_state(struct iosm_mmio *ipc_mmio)
if (!ipc_mmio)
return IPC_MEM_DEVICE_IPC_INVALID;
return (enum ipc_mem_device_ipc_state)
readl(ipc_mmio->base + ipc_mmio->offset.ipc_status);
return (enum ipc_mem_device_ipc_state)ioread32(ipc_mmio->base +
ipc_mmio->offset.ipc_status);
}
enum rom_exit_code ipc_mmio_get_rom_exit_code(struct iosm_mmio *ipc_mmio)
@ -176,8 +176,8 @@ enum rom_exit_code ipc_mmio_get_rom_exit_code(struct iosm_mmio *ipc_mmio)
if (!ipc_mmio)
return IMEM_ROM_EXIT_FAIL;
return (enum rom_exit_code)readl(ipc_mmio->base +
ipc_mmio->offset.rom_exit_code);
return (enum rom_exit_code)ioread32(ipc_mmio->base +
ipc_mmio->offset.rom_exit_code);
}
void ipc_mmio_config(struct iosm_mmio *ipc_mmio)
@ -188,10 +188,10 @@ void ipc_mmio_config(struct iosm_mmio *ipc_mmio)
/* AP memory window (full window is open and active so that modem checks
* each AP address) 0 means don't check on modem side.
*/
iowrite64_lo_hi(0, ipc_mmio->base + ipc_mmio->offset.ap_win_base);
iowrite64_lo_hi(0, ipc_mmio->base + ipc_mmio->offset.ap_win_end);
iowrite64(0, ipc_mmio->base + ipc_mmio->offset.ap_win_base);
iowrite64(0, ipc_mmio->base + ipc_mmio->offset.ap_win_end);
iowrite64_lo_hi(ipc_mmio->context_info_addr,
iowrite64(ipc_mmio->context_info_addr,
ipc_mmio->base + ipc_mmio->offset.context_info);
}
@ -201,8 +201,8 @@ void ipc_mmio_set_psi_addr_and_size(struct iosm_mmio *ipc_mmio, dma_addr_t addr,
if (!ipc_mmio)
return;
iowrite64_lo_hi(addr, ipc_mmio->base + ipc_mmio->offset.psi_address);
writel(size, ipc_mmio->base + ipc_mmio->offset.psi_size);
iowrite64(addr, ipc_mmio->base + ipc_mmio->offset.psi_address);
iowrite32(size, ipc_mmio->base + ipc_mmio->offset.psi_size);
}
void ipc_mmio_set_contex_info_addr(struct iosm_mmio *ipc_mmio, phys_addr_t addr)
@ -218,6 +218,8 @@ void ipc_mmio_set_contex_info_addr(struct iosm_mmio *ipc_mmio, phys_addr_t addr)
int ipc_mmio_get_cp_version(struct iosm_mmio *ipc_mmio)
{
return ipc_mmio ? readl(ipc_mmio->base + ipc_mmio->offset.cp_version) :
-EFAULT;
if (ipc_mmio)
return ioread32(ipc_mmio->base + ipc_mmio->offset.cp_version);
return -EFAULT;
}

@ -299,6 +299,18 @@ static inline void ether_addr_copy(u8 *dst, const u8 *src)
#endif
}
/**
* eth_hw_addr_set - Assign Ethernet address to a net_device
* @dev: pointer to net_device structure
* @addr: address to assign
*
* Assign the given address to the net_device; addr_assign_type is not changed.
*/
static inline void eth_hw_addr_set(struct net_device *dev, const u8 *addr)
{
ether_addr_copy(dev->dev_addr, addr);
}
/**
* eth_hw_addr_inherit - Copy dev_addr from another net_device
* @dst: pointer to net_device to copy dev_addr to

@ -4641,6 +4641,24 @@ void __hw_addr_unsync_dev(struct netdev_hw_addr_list *list,
void __hw_addr_init(struct netdev_hw_addr_list *list);
/* Functions used for device addresses handling */
static inline void
__dev_addr_set(struct net_device *dev, const u8 *addr, size_t len)
{
memcpy(dev->dev_addr, addr, len);
}
static inline void dev_addr_set(struct net_device *dev, const u8 *addr)
{
__dev_addr_set(dev, addr, dev->addr_len);
}
static inline void
dev_addr_mod(struct net_device *dev, unsigned int offset,
const u8 *addr, size_t len)
{
memcpy(&dev->dev_addr[offset], addr, len);
}
int dev_addr_add(struct net_device *dev, const unsigned char *addr,
unsigned char addr_type);
int dev_addr_del(struct net_device *dev, const unsigned char *addr,

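The two hunks above add setters so that drivers stop writing dev->dev_addr directly, which appears to be groundwork for eventually constifying the address. A hedged usage sketch, with foo_assign_mac() and mac as stand-ins:

static void foo_assign_mac(struct net_device *ndev, const u8 *mac)
{
	/* previously open-coded as: memcpy(ndev->dev_addr, mac, ETH_ALEN); */
	eth_hw_addr_set(ndev, mac);	/* Ethernet: copies ETH_ALEN bytes */

	/* generic equivalents for other link layers:
	 *   dev_addr_set(ndev, mac);		full dev->addr_len copy
	 *   dev_addr_mod(ndev, off, p, n);	update a sub-range only
	 */
}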
@ -18,6 +18,7 @@ struct ip_conntrack_stat {
unsigned int expect_create;
unsigned int expect_delete;
unsigned int search_restart;
unsigned int chaintoolong;
};
#define NFCT_INFOMASK 7UL

@ -451,6 +451,9 @@ void phylink_mac_change(struct phylink *, bool up);
void phylink_start(struct phylink *);
void phylink_stop(struct phylink *);
void phylink_suspend(struct phylink *pl, bool mac_wol);
void phylink_resume(struct phylink *pl);
void phylink_ethtool_get_wol(struct phylink *, struct ethtool_wolinfo *);
int phylink_ethtool_set_wol(struct phylink *, struct ethtool_wolinfo *);

@ -22,12 +22,17 @@
: [rs]"r" (ioaddr)); \
(result); \
})
/*
* STEORL store to memory with release semantics.
* This will avoid using DMB barrier after each LMTST
* operation.
*/
#define cn10k_lmt_flush(val, addr) \
({ \
__asm__ volatile(".cpu generic+lse\n" \
"steor %x[rf],[%[rs]]" \
: [rf]"+r"(val) \
: [rs]"r"(addr)); \
"steorl %x[rf],[%[rs]]" \
: [rf] "+r"(val) \
: [rs] "r"(addr)); \
})
#else
#define otx2_lmt_flush(ioaddr) ({ 0; })

@ -194,7 +194,7 @@ static inline struct flowi *flowi4_to_flowi(struct flowi4 *fl4)
static inline struct flowi_common *flowi4_to_flowi_common(struct flowi4 *fl4)
{
return &(flowi4_to_flowi(fl4)->u.__fl_common);
return &(fl4->__fl_common);
}
static inline struct flowi *flowi6_to_flowi(struct flowi6 *fl6)
@ -204,7 +204,7 @@ static inline struct flowi *flowi6_to_flowi(struct flowi6 *fl6)
static inline struct flowi_common *flowi6_to_flowi_common(struct flowi6 *fl6)
{
return &(flowi6_to_flowi(fl6)->u.__fl_common);
return &(fl6->__fl_common);
}
static inline struct flowi *flowidn_to_flowi(struct flowidn *fldn)

@ -258,6 +258,7 @@ enum ctattr_stats_cpu {
CTA_STATS_ERROR,
CTA_STATS_SEARCH_RESTART,
CTA_STATS_CLASH_RESOLVE,
CTA_STATS_CHAIN_TOOLONG,
__CTA_STATS_MAX,
};
#define CTA_STATS_MAX (__CTA_STATS_MAX - 1)

@ -827,6 +827,8 @@ struct tc_codel_xstats {
/* FQ_CODEL */
#define FQ_CODEL_QUANTUM_MAX (1 << 20)
enum {
TCA_FQ_CODEL_UNSPEC,
TCA_FQ_CODEL_TARGET,

@ -4255,7 +4255,7 @@ int br_multicast_set_port_router(struct net_bridge_mcast_port *pmctx,
bool del = false;
brmctx = br_multicast_port_ctx_get_global(pmctx);
spin_lock(&brmctx->br->multicast_lock);
spin_lock_bh(&brmctx->br->multicast_lock);
if (pmctx->multicast_router == val) {
/* Refresh the temp router port timer */
if (pmctx->multicast_router == MDB_RTR_TYPE_TEMP) {
@ -4305,7 +4305,7 @@ int br_multicast_set_port_router(struct net_bridge_mcast_port *pmctx,
}
err = 0;
unlock:
spin_unlock(&brmctx->br->multicast_lock);
spin_unlock_bh(&brmctx->br->multicast_lock);
return err;
}

@ -3602,7 +3602,6 @@ out:
static int pktgen_thread_worker(void *arg)
{
DEFINE_WAIT(wait);
struct pktgen_thread *t = arg;
struct pktgen_dev *pkt_dev = NULL;
int cpu = t->cpu;

@ -3884,7 +3884,7 @@ struct sk_buff *skb_segment_list(struct sk_buff *skb,
skb_push(nskb, -skb_network_offset(nskb) + offset);
skb_release_head_state(nskb);
__copy_skb_header(nskb, skb);
skb_headers_offset_update(nskb, skb_headroom(nskb) - skb_headroom(skb));
skb_copy_from_linear_data_offset(skb, -tnl_hlen,

@ -54,9 +54,10 @@ static struct sk_buff *rtl4a_tag_xmit(struct sk_buff *skb,
p = (__be16 *)tag;
*p = htons(RTL4_A_ETHERTYPE);
out = (RTL4_A_PROTOCOL_RTL8366RB << 12) | (2 << 8);
/* The lower bits is the port number */
out |= (u8)dp->index;
out = (RTL4_A_PROTOCOL_RTL8366RB << RTL4_A_PROTOCOL_SHIFT) | (2 << 8);
/* The lower bits indicate the port number */
out |= BIT(dp->index);
p = (__be16 *)(tag + 2);
*p = htons(out);
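Two separate mistakes are corrected above: the protocol field now uses the named shift (matching the open-coded 12), and the low bits carry a port bitmask rather than a port index. A worked example for switch port 3, as a hedged illustration of the arithmetic only:

	u16 out = (RTL4_A_PROTOCOL_RTL8366RB << RTL4_A_PROTOCOL_SHIFT) | (2 << 8);

	out |= BIT(3);	/* port 3 as a mask: 0x0008 */
	/* the old "out |= (u8)3" yielded 0x0003, i.e. the bits for ports 0-1 */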

@ -465,16 +465,14 @@ void cipso_v4_doi_free(struct cipso_v4_doi *doi_def)
if (!doi_def)
return;
if (doi_def->map.std) {
switch (doi_def->type) {
case CIPSO_V4_MAP_TRANS:
kfree(doi_def->map.std->lvl.cipso);
kfree(doi_def->map.std->lvl.local);
kfree(doi_def->map.std->cat.cipso);
kfree(doi_def->map.std->cat.local);
kfree(doi_def->map.std);
break;
}
switch (doi_def->type) {
case CIPSO_V4_MAP_TRANS:
kfree(doi_def->map.std->lvl.cipso);
kfree(doi_def->map.std->lvl.local);
kfree(doi_def->map.std->cat.cipso);
kfree(doi_def->map.std->cat.local);
kfree(doi_def->map.std);
break;
}
kfree(doi_def);
}

@ -473,8 +473,6 @@ static void __gre_xmit(struct sk_buff *skb, struct net_device *dev,
static int gre_handle_offloads(struct sk_buff *skb, bool csum)
{
if (csum && skb_checksum_start(skb) < skb->data)
return -EINVAL;
return iptunnel_handle_offloads(skb, csum ? SKB_GSO_GRE_CSUM : SKB_GSO_GRE);
}
@ -632,15 +630,20 @@ static netdev_tx_t ipgre_xmit(struct sk_buff *skb,
}
if (dev->header_ops) {
const int pull_len = tunnel->hlen + sizeof(struct iphdr);
if (skb_cow_head(skb, 0))
goto free_skb;
tnl_params = (const struct iphdr *)skb->data;
if (pull_len > skb_transport_offset(skb))
goto free_skb;
/* Pull skb since ip_tunnel_xmit() needs skb->data pointing
* to gre header.
*/
skb_pull(skb, tunnel->hlen + sizeof(struct iphdr));
skb_pull(skb, pull_len);
skb_reset_mac_header(skb);
} else {
if (skb_cow_head(skb, dev->needed_headroom))

@ -2490,6 +2490,7 @@ static int nh_create_ipv4(struct net *net, struct nexthop *nh,
.fc_gw4 = cfg->gw.ipv4,
.fc_gw_family = cfg->gw.ipv4 ? AF_INET : 0,
.fc_flags = cfg->nh_flags,
.fc_nlinfo = cfg->nlinfo,
.fc_encap = cfg->nh_encap,
.fc_encap_type = cfg->nh_encap_type,
};
@ -2528,6 +2529,7 @@ static int nh_create_ipv6(struct net *net, struct nexthop *nh,
.fc_ifindex = cfg->nh_ifindex,
.fc_gateway = cfg->gw.ipv6,
.fc_flags = cfg->nh_flags,
.fc_nlinfo = cfg->nlinfo,
.fc_encap = cfg->nh_encap,
.fc_encap_type = cfg->nh_encap_type,
.fc_is_fdb = cfg->nh_fdb,

@ -3092,19 +3092,22 @@ static void add_addr(struct inet6_dev *idev, const struct in6_addr *addr,
}
}
#if IS_ENABLED(CONFIG_IPV6_SIT)
static void sit_add_v4_addrs(struct inet6_dev *idev)
#if IS_ENABLED(CONFIG_IPV6_SIT) || IS_ENABLED(CONFIG_NET_IPGRE) || IS_ENABLED(CONFIG_IPV6_GRE)
static void add_v4_addrs(struct inet6_dev *idev)
{
struct in6_addr addr;
struct net_device *dev;
struct net *net = dev_net(idev->dev);
int scope, plen;
int scope, plen, offset = 0;
u32 pflags = 0;
ASSERT_RTNL();
memset(&addr, 0, sizeof(struct in6_addr));
memcpy(&addr.s6_addr32[3], idev->dev->dev_addr, 4);
/* in case of IP6GRE the dev_addr is an IPv6 address and therefore we use only the last 4 bytes */
if (idev->dev->addr_len == sizeof(struct in6_addr))
offset = sizeof(struct in6_addr) - 4;
memcpy(&addr.s6_addr32[3], idev->dev->dev_addr + offset, 4);
if (idev->dev->flags&IFF_POINTOPOINT) {
addr.s6_addr32[0] = htonl(0xfe800000);
@ -3342,8 +3345,6 @@ static void addrconf_dev_config(struct net_device *dev)
(dev->type != ARPHRD_IEEE1394) &&
(dev->type != ARPHRD_TUNNEL6) &&
(dev->type != ARPHRD_6LOWPAN) &&
(dev->type != ARPHRD_IP6GRE) &&
(dev->type != ARPHRD_IPGRE) &&
(dev->type != ARPHRD_TUNNEL) &&
(dev->type != ARPHRD_NONE) &&
(dev->type != ARPHRD_RAWIP)) {
@ -3391,14 +3392,14 @@ static void addrconf_sit_config(struct net_device *dev)
return;
}
sit_add_v4_addrs(idev);
add_v4_addrs(idev);
if (dev->flags&IFF_POINTOPOINT)
addrconf_add_mroute(dev);
}
#endif
#if IS_ENABLED(CONFIG_NET_IPGRE)
#if IS_ENABLED(CONFIG_NET_IPGRE) || IS_ENABLED(CONFIG_IPV6_GRE)
static void addrconf_gre_config(struct net_device *dev)
{
struct inet6_dev *idev;
@ -3411,7 +3412,13 @@ static void addrconf_gre_config(struct net_device *dev)
return;
}
addrconf_addr_gen(idev, true);
if (dev->type == ARPHRD_ETHER) {
addrconf_addr_gen(idev, true);
return;
}
add_v4_addrs(idev);
if (dev->flags & IFF_POINTOPOINT)
addrconf_add_mroute(dev);
}
@ -3587,7 +3594,8 @@ static int addrconf_notify(struct notifier_block *this, unsigned long event,
addrconf_sit_config(dev);
break;
#endif
#if IS_ENABLED(CONFIG_NET_IPGRE)
#if IS_ENABLED(CONFIG_NET_IPGRE) || IS_ENABLED(CONFIG_IPV6_GRE)
case ARPHRD_IP6GRE:
case ARPHRD_IPGRE:
addrconf_gre_config(dev);
break;

@ -629,8 +629,6 @@ drop:
static int gre_handle_offloads(struct sk_buff *skb, bool csum)
{
if (csum && skb_checksum_start(skb) < skb->data)
return -EINVAL;
return iptunnel_handle_offloads(skb,
csum ? SKB_GSO_GRE_CSUM : SKB_GSO_GRE);
}

@ -1356,8 +1356,8 @@ static int mld_process_v1(struct inet6_dev *idev, struct mld_msg *mld,
return 0;
}
static int mld_process_v2(struct inet6_dev *idev, struct mld2_query *mld,
unsigned long *max_delay)
static void mld_process_v2(struct inet6_dev *idev, struct mld2_query *mld,
unsigned long *max_delay)
{
*max_delay = max(msecs_to_jiffies(mldv2_mrc(mld)), 1UL);
@ -1367,7 +1367,7 @@ static int mld_process_v2(struct inet6_dev *idev, struct mld2_query *mld,
idev->mc_maxdelay = *max_delay;
return 0;
return;
}
/* called with rcu_read_lock() */
@ -1454,9 +1454,7 @@ static void __mld_query_work(struct sk_buff *skb)
mlh2 = (struct mld2_query *)skb_transport_header(skb);
err = mld_process_v2(idev, mlh2, &max_delay);
if (err < 0)
goto out;
mld_process_v2(idev, mlh2, &max_delay);
if (group_type == IPV6_ADDR_ANY) { /* general query */
if (mlh2->mld2q_nsrcs)

@ -99,7 +99,7 @@ struct sock *nf_sk_lookup_slow_v6(struct net *net, const struct sk_buff *skb,
{
__be16 dport, sport;
const struct in6_addr *daddr = NULL, *saddr = NULL;
struct ipv6hdr *iph = ipv6_hdr(skb);
struct ipv6hdr *iph = ipv6_hdr(skb), ipv6_var;
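/* ipv6_var is hoisted to function scope because extract_icmp6_fields()
 * leaves saddr/daddr pointing into it; the old block-local copy below
 * was read after its scope had ended.
 */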
struct sk_buff *data_skb = NULL;
int doff = 0;
int thoff = 0, tproto;
@ -129,8 +129,6 @@ struct sock *nf_sk_lookup_slow_v6(struct net *net, const struct sk_buff *skb,
thoff + sizeof(*hp);
} else if (tproto == IPPROTO_ICMPV6) {
struct ipv6hdr ipv6_var;
if (extract_icmp6_fields(skb, thoff, &tproto, &saddr, &daddr,
&sport, &dport, &ipv6_var))
return NULL;

@ -385,7 +385,7 @@ static int seg6_output_core(struct net *net, struct sock *sk,
struct dst_entry *orig_dst = skb_dst(skb);
struct dst_entry *dst = NULL;
struct seg6_lwt *slwt;
int err = -EINVAL;
int err;
err = seg6_do_srh(skb);
if (unlikely(err))

@ -617,7 +617,7 @@ ieee802154_if_add(struct ieee802154_local *local, const char *name,
{
struct net_device *ndev = NULL;
struct ieee802154_sub_if_data *sdata = NULL;
int ret = -ENOMEM;
int ret;
ASSERT_RTNL();

@ -644,15 +644,12 @@ void mptcp_pm_nl_addr_send_ack(struct mptcp_sock *msk)
subflow = list_first_entry_or_null(&msk->conn_list, typeof(*subflow), node);
if (subflow) {
struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
bool slow;
spin_unlock_bh(&msk->pm.lock);
pr_debug("send ack for %s",
mptcp_pm_should_add_signal(msk) ? "add_addr" : "rm_addr");
slow = lock_sock_fast(ssk);
tcp_send_ack(ssk);
unlock_sock_fast(ssk, slow);
mptcp_subflow_send_ack(ssk);
spin_lock_bh(&msk->pm.lock);
}
}
@ -669,7 +666,6 @@ int mptcp_pm_nl_mp_prio_send_ack(struct mptcp_sock *msk,
struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
struct sock *sk = (struct sock *)msk;
struct mptcp_addr_info local;
bool slow;
local_address((struct sock_common *)ssk, &local);
if (!addresses_equal(&local, addr, addr->port))
@ -682,9 +678,7 @@ int mptcp_pm_nl_mp_prio_send_ack(struct mptcp_sock *msk,
spin_unlock_bh(&msk->pm.lock);
pr_debug("send ack for mp_prio");
slow = lock_sock_fast(ssk);
tcp_send_ack(ssk);
unlock_sock_fast(ssk, slow);
mptcp_subflow_send_ack(ssk);
spin_lock_bh(&msk->pm.lock);
return 0;

@ -440,19 +440,22 @@ static bool tcp_can_send_ack(const struct sock *ssk)
(TCPF_SYN_SENT | TCPF_SYN_RECV | TCPF_TIME_WAIT | TCPF_CLOSE | TCPF_LISTEN));
}
void mptcp_subflow_send_ack(struct sock *ssk)
{
bool slow;
slow = lock_sock_fast(ssk);
if (tcp_can_send_ack(ssk))
tcp_send_ack(ssk);
unlock_sock_fast(ssk, slow);
}
static void mptcp_send_ack(struct mptcp_sock *msk)
{
struct mptcp_subflow_context *subflow;
mptcp_for_each_subflow(msk, subflow) {
struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
bool slow;
slow = lock_sock_fast(ssk);
if (tcp_can_send_ack(ssk))
tcp_send_ack(ssk);
unlock_sock_fast(ssk, slow);
}
mptcp_for_each_subflow(msk, subflow)
mptcp_subflow_send_ack(mptcp_subflow_tcp_sock(subflow));
}
static void mptcp_subflow_cleanup_rbuf(struct sock *ssk)
@ -1003,6 +1006,13 @@ static void mptcp_wmem_uncharge(struct sock *sk, int size)
msk->wmem_reserved += size;
}
static void __mptcp_mem_reclaim_partial(struct sock *sk)
{
lockdep_assert_held_once(&sk->sk_lock.slock);
__mptcp_update_wmem(sk);
sk_mem_reclaim_partial(sk);
}
static void mptcp_mem_reclaim_partial(struct sock *sk)
{
struct mptcp_sock *msk = mptcp_sk(sk);
@ -1094,12 +1104,8 @@ static void __mptcp_clean_una(struct sock *sk)
msk->recovery = false;
out:
if (cleaned) {
if (tcp_under_memory_pressure(sk)) {
__mptcp_update_wmem(sk);
sk_mem_reclaim_partial(sk);
}
}
if (cleaned && tcp_under_memory_pressure(sk))
__mptcp_mem_reclaim_partial(sk);
if (snd_una == READ_ONCE(msk->snd_nxt) && !msk->recovery) {
if (mptcp_timer_pending(sk) && !mptcp_data_fin_enabled(msk))
@ -1179,6 +1185,7 @@ struct mptcp_sendmsg_info {
u16 limit;
u16 sent;
unsigned int flags;
bool data_lock_held;
};
static int mptcp_check_allowed_size(struct mptcp_sock *msk, u64 data_seq,
@ -1250,17 +1257,17 @@ static bool __mptcp_alloc_tx_skb(struct sock *sk, struct sock *ssk, gfp_t gfp)
return false;
}
static bool mptcp_must_reclaim_memory(struct sock *sk, struct sock *ssk)
static bool mptcp_alloc_tx_skb(struct sock *sk, struct sock *ssk, bool data_lock_held)
{
return !ssk->sk_tx_skb_cache &&
tcp_under_memory_pressure(sk);
}
gfp_t gfp = data_lock_held ? GFP_ATOMIC : sk->sk_allocation;
static bool mptcp_alloc_tx_skb(struct sock *sk, struct sock *ssk)
{
if (unlikely(mptcp_must_reclaim_memory(sk, ssk)))
mptcp_mem_reclaim_partial(sk);
return __mptcp_alloc_tx_skb(sk, ssk, sk->sk_allocation);
if (unlikely(tcp_under_memory_pressure(sk))) {
if (data_lock_held)
__mptcp_mem_reclaim_partial(sk);
else
mptcp_mem_reclaim_partial(sk);
}
return __mptcp_alloc_tx_skb(sk, ssk, gfp);
}
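/* Hedged note on the gfp choice above: data_lock_held means the caller
 * already holds the msk data lock in an atomic section, so allocations
 * must not sleep (GFP_ATOMIC); the plain sendmsg path can still use the
 * socket's sk_allocation.
 */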
/* note: this always recompute the csum on the whole skb, even
@ -1284,7 +1291,7 @@ static int mptcp_sendmsg_frag(struct sock *sk, struct sock *ssk,
bool zero_window_probe = false;
struct mptcp_ext *mpext = NULL;
struct sk_buff *skb, *tail;
bool can_collapse = false;
bool must_collapse = false;
int size_bias = 0;
int avail_size;
size_t ret = 0;
@ -1304,16 +1311,24 @@ static int mptcp_sendmsg_frag(struct sock *sk, struct sock *ssk,
* SSN association set here
*/
mpext = skb_ext_find(skb, SKB_EXT_MPTCP);
can_collapse = (info->size_goal - skb->len > 0) &&
mptcp_skb_can_collapse_to(data_seq, skb, mpext);
if (!can_collapse) {
if (!mptcp_skb_can_collapse_to(data_seq, skb, mpext)) {
TCP_SKB_CB(skb)->eor = 1;
} else {
goto alloc_skb;
}
must_collapse = (info->size_goal - skb->len > 0) &&
(skb_shinfo(skb)->nr_frags < sysctl_max_skb_frags);
if (must_collapse) {
size_bias = skb->len;
avail_size = info->size_goal - skb->len;
}
}
alloc_skb:
if (!must_collapse && !ssk->sk_tx_skb_cache &&
!mptcp_alloc_tx_skb(sk, ssk, info->data_lock_held))
return 0;
/* Zero window and all data acked? Probe. */
avail_size = mptcp_check_allowed_size(msk, data_seq, avail_size);
if (avail_size == 0) {
@ -1343,7 +1358,6 @@ static int mptcp_sendmsg_frag(struct sock *sk, struct sock *ssk,
if (skb == tail) {
TCP_SKB_CB(tail)->tcp_flags &= ~TCPHDR_PSH;
mpext->data_len += ret;
WARN_ON_ONCE(!can_collapse);
WARN_ON_ONCE(zero_window_probe);
goto out;
}
@ -1530,15 +1544,6 @@ void __mptcp_push_pending(struct sock *sk, unsigned int flags)
if (ssk != prev_ssk)
lock_sock(ssk);
/* keep it simple and always provide a new skb for the
* subflow, even if we will not use it when collapsing
* on the pending one
*/
if (!mptcp_alloc_tx_skb(sk, ssk)) {
mptcp_push_release(sk, ssk, &info);
goto out;
}
ret = mptcp_sendmsg_frag(sk, ssk, dfrag, &info);
if (ret <= 0) {
mptcp_push_release(sk, ssk, &info);
@ -1571,7 +1576,9 @@ out:
static void __mptcp_subflow_push_pending(struct sock *sk, struct sock *ssk)
{
struct mptcp_sock *msk = mptcp_sk(sk);
struct mptcp_sendmsg_info info;
struct mptcp_sendmsg_info info = {
.data_lock_held = true,
};
struct mptcp_data_frag *dfrag;
struct sock *xmit_ssk;
int len, copied = 0;
@ -1597,13 +1604,6 @@ static void __mptcp_subflow_push_pending(struct sock *sk, struct sock *ssk)
goto out;
}
if (unlikely(mptcp_must_reclaim_memory(sk, ssk))) {
__mptcp_update_wmem(sk);
sk_mem_reclaim_partial(sk);
}
if (!__mptcp_alloc_tx_skb(sk, ssk, GFP_ATOMIC))
goto out;
ret = mptcp_sendmsg_frag(sk, ssk, dfrag, &info);
if (ret <= 0)
goto out;
@ -2409,9 +2409,6 @@ static void __mptcp_retrans(struct sock *sk)
info.sent = 0;
info.limit = READ_ONCE(msk->csum_enabled) ? dfrag->data_len : dfrag->already_sent;
while (info.sent < info.limit) {
if (!mptcp_alloc_tx_skb(sk, ssk))
break;
ret = mptcp_sendmsg_frag(sk, ssk, dfrag, &info);
if (ret <= 0)
break;

@ -34,7 +34,7 @@
#define OPTIONS_MPTCP_MPC (OPTION_MPTCP_MPC_SYN | OPTION_MPTCP_MPC_SYNACK | \
OPTION_MPTCP_MPC_ACK)
#define OPTIONS_MPTCP_MPJ (OPTION_MPTCP_MPJ_SYN | OPTION_MPTCP_MPJ_SYNACK | \
OPTION_MPTCP_MPJ_SYNACK)
OPTION_MPTCP_MPJ_ACK)
/* MPTCP option subtypes */
#define MPTCPOPT_MP_CAPABLE 0
@ -573,6 +573,7 @@ void __init mptcp_subflow_init(void);
void mptcp_subflow_shutdown(struct sock *sk, struct sock *ssk, int how);
void mptcp_close_ssk(struct sock *sk, struct sock *ssk,
struct mptcp_subflow_context *subflow);
void mptcp_subflow_send_ack(struct sock *ssk);
void mptcp_subflow_reset(struct sock *ssk);
void mptcp_sock_graft(struct sock *sk, struct socket *parent);
struct socket *__mptcp_nmpc_socket(const struct mptcp_sock *msk);

@ -80,6 +80,7 @@ enum {
#define NCSI_OEM_MFR_BCM_ID 0x113d
#define NCSI_OEM_MFR_INTEL_ID 0x157
/* Intel specific OEM command */
#define NCSI_OEM_INTEL_CMD_GMA 0x06 /* CMD ID for Get MAC */
#define NCSI_OEM_INTEL_CMD_KEEP_PHY 0x20 /* CMD ID for Keep PHY up */
/* Broadcom specific OEM Command */
#define NCSI_OEM_BCM_CMD_GMA 0x01 /* CMD ID for Get MAC */
@ -89,6 +90,7 @@ enum {
#define NCSI_OEM_MLX_CMD_SMAF 0x01 /* CMD ID for Set MC Affinity */
#define NCSI_OEM_MLX_CMD_SMAF_PARAM 0x07 /* Parameter for SMAF */
/* OEM Command payload lengths*/
#define NCSI_OEM_INTEL_CMD_GMA_LEN 5
#define NCSI_OEM_INTEL_CMD_KEEP_PHY_LEN 7
#define NCSI_OEM_BCM_CMD_GMA_LEN 12
#define NCSI_OEM_MLX_CMD_GMA_LEN 8
@ -99,6 +101,7 @@ enum {
/* Mac address offset in OEM response */
#define BCM_MAC_ADDR_OFFSET 28
#define MLX_MAC_ADDR_OFFSET 8
#define INTEL_MAC_ADDR_OFFSET 1
struct ncsi_channel_version {

@ -795,13 +795,36 @@ static int ncsi_oem_smaf_mlx(struct ncsi_cmd_arg *nca)
return ret;
}
static int ncsi_oem_gma_handler_intel(struct ncsi_cmd_arg *nca)
{
unsigned char data[NCSI_OEM_INTEL_CMD_GMA_LEN];
int ret = 0;
nca->payload = NCSI_OEM_INTEL_CMD_GMA_LEN;
memset(data, 0, NCSI_OEM_INTEL_CMD_GMA_LEN);
*(unsigned int *)data = ntohl((__force __be32)NCSI_OEM_MFR_INTEL_ID);
data[4] = NCSI_OEM_INTEL_CMD_GMA;
nca->data = data;
ret = ncsi_xmit_cmd(nca);
if (ret)
netdev_err(nca->ndp->ndev.dev,
"NCSI: Failed to transmit cmd 0x%x during configure\n",
nca->type);
return ret;
}
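/* The 5-byte payload built above, as it appears on the wire: the
 * manufacturer ID in network byte order followed by the command ID,
 * i.e. 0x00 0x00 0x01 0x57 (NCSI_OEM_MFR_INTEL_ID) then 0x06 (GMA).
 */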
/* OEM Command handlers initialization */
static struct ncsi_oem_gma_handler {
unsigned int mfr_id;
int (*handler)(struct ncsi_cmd_arg *nca);
} ncsi_oem_gma_handlers[] = {
{ NCSI_OEM_MFR_BCM_ID, ncsi_oem_gma_handler_bcm },
{ NCSI_OEM_MFR_MLX_ID, ncsi_oem_gma_handler_mlx }
{ NCSI_OEM_MFR_MLX_ID, ncsi_oem_gma_handler_mlx },
{ NCSI_OEM_MFR_INTEL_ID, ncsi_oem_gma_handler_intel }
};
static int ncsi_gma_handler(struct ncsi_cmd_arg *nca, unsigned int mf_id)

@ -178,6 +178,12 @@ struct ncsi_rsp_oem_bcm_pkt {
unsigned char data[]; /* Cmd specific Data */
};
/* Intel Response Data */
struct ncsi_rsp_oem_intel_pkt {
unsigned char cmd; /* OEM Command ID */
unsigned char data[]; /* Cmd specific Data */
};
/* Get Link Status */
struct ncsi_rsp_gls_pkt {
struct ncsi_rsp_pkt_hdr rsp; /* Response header */

@ -699,9 +699,51 @@ static int ncsi_rsp_handler_oem_bcm(struct ncsi_request *nr)
return 0;
}
/* Response handler for Intel command Get Mac Address */
static int ncsi_rsp_handler_oem_intel_gma(struct ncsi_request *nr)
{
struct ncsi_dev_priv *ndp = nr->ndp;
struct net_device *ndev = ndp->ndev.dev;
const struct net_device_ops *ops = ndev->netdev_ops;
struct ncsi_rsp_oem_pkt *rsp;
struct sockaddr saddr;
int ret = 0;
/* Get the response header */
rsp = (struct ncsi_rsp_oem_pkt *)skb_network_header(nr->rsp);
saddr.sa_family = ndev->type;
ndev->priv_flags |= IFF_LIVE_ADDR_CHANGE;
memcpy(saddr.sa_data, &rsp->data[INTEL_MAC_ADDR_OFFSET], ETH_ALEN);
/* Increase mac address by 1 for BMC's address */
eth_addr_inc((u8 *)saddr.sa_data);
if (!is_valid_ether_addr((const u8 *)saddr.sa_data))
return -ENXIO;
/* Set the flag for GMA command which should only be called once */
ndp->gma_flag = 1;
ret = ops->ndo_set_mac_address(ndev, &saddr);
if (ret < 0)
netdev_warn(ndev,
"NCSI: 'Writing mac address to device failed\n");
return ret;
}
/* Response handler for Intel card */
static int ncsi_rsp_handler_oem_intel(struct ncsi_request *nr)
{
struct ncsi_rsp_oem_intel_pkt *intel;
struct ncsi_rsp_oem_pkt *rsp;
/* Get the response header */
rsp = (struct ncsi_rsp_oem_pkt *)skb_network_header(nr->rsp);
intel = (struct ncsi_rsp_oem_intel_pkt *)(rsp->data);
if (intel->cmd == NCSI_OEM_INTEL_CMD_GMA)
return ncsi_rsp_handler_oem_intel_gma(nr);
return 0;
}

@ -21,7 +21,6 @@
#include <linux/stddef.h>
#include <linux/slab.h>
#include <linux/random.h>
#include <linux/jhash.h>
#include <linux/siphash.h>
#include <linux/err.h>
#include <linux/percpu.h>
@ -78,6 +77,8 @@ static __read_mostly bool nf_conntrack_locks_all;
#define GC_SCAN_INTERVAL (120u * HZ)
#define GC_SCAN_MAX_DURATION msecs_to_jiffies(10)
#define MAX_CHAINLEN 64u
static struct conntrack_gc_work conntrack_gc_work;
void nf_conntrack_lock(spinlock_t *lock) __acquires(lock)
@ -184,25 +185,31 @@ EXPORT_SYMBOL_GPL(nf_conntrack_htable_size);
unsigned int nf_conntrack_max __read_mostly;
EXPORT_SYMBOL_GPL(nf_conntrack_max);
seqcount_spinlock_t nf_conntrack_generation __read_mostly;
static unsigned int nf_conntrack_hash_rnd __read_mostly;
static siphash_key_t nf_conntrack_hash_rnd __read_mostly;
static u32 hash_conntrack_raw(const struct nf_conntrack_tuple *tuple,
const struct net *net)
{
unsigned int n;
u32 seed;
struct {
struct nf_conntrack_man src;
union nf_inet_addr dst_addr;
u32 net_mix;
u16 dport;
u16 proto;
} __aligned(SIPHASH_ALIGNMENT) combined;
get_random_once(&nf_conntrack_hash_rnd, sizeof(nf_conntrack_hash_rnd));
/* The direction must be ignored, so we hash everything up to the
* destination ports (which is a multiple of 4) and treat the last
* three bytes manually.
*/
seed = nf_conntrack_hash_rnd ^ net_hash_mix(net);
n = (sizeof(tuple->src) + sizeof(tuple->dst.u3)) / sizeof(u32);
return jhash2((u32 *)tuple, n, seed ^
(((__force __u16)tuple->dst.u.all << 16) |
tuple->dst.protonum));
memset(&combined, 0, sizeof(combined));
/* The direction must be ignored, so handle usable members manually. */
combined.src = tuple->src;
combined.dst_addr = tuple->dst.u3;
combined.net_mix = net_hash_mix(net);
combined.dport = (__force __u16)tuple->dst.u.all;
combined.proto = tuple->dst.protonum;
return (u32)siphash(&combined, sizeof(combined), &nf_conntrack_hash_rnd);
}
static u32 scale_hash(u32 hash)
@ -835,7 +842,9 @@ nf_conntrack_hash_check_insert(struct nf_conn *ct)
unsigned int hash, reply_hash;
struct nf_conntrack_tuple_hash *h;
struct hlist_nulls_node *n;
unsigned int chainlen = 0;
unsigned int sequence;
int err = -EEXIST;
zone = nf_ct_zone(ct);
@ -849,15 +858,24 @@ nf_conntrack_hash_check_insert(struct nf_conn *ct)
} while (nf_conntrack_double_lock(net, hash, reply_hash, sequence));
/* See if there's one in the list already, including reverse */
hlist_nulls_for_each_entry(h, n, &nf_conntrack_hash[hash], hnnode)
hlist_nulls_for_each_entry(h, n, &nf_conntrack_hash[hash], hnnode) {
if (nf_ct_key_equal(h, &ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple,
zone, net))
goto out;
hlist_nulls_for_each_entry(h, n, &nf_conntrack_hash[reply_hash], hnnode)
if (chainlen++ > MAX_CHAINLEN)
goto chaintoolong;
}
chainlen = 0;
hlist_nulls_for_each_entry(h, n, &nf_conntrack_hash[reply_hash], hnnode) {
if (nf_ct_key_equal(h, &ct->tuplehash[IP_CT_DIR_REPLY].tuple,
zone, net))
goto out;
if (chainlen++ > MAX_CHAINLEN)
goto chaintoolong;
}
smp_wmb();
/* The caller holds a reference to this object */
@ -867,11 +885,13 @@ nf_conntrack_hash_check_insert(struct nf_conn *ct)
NF_CT_STAT_INC(net, insert);
local_bh_enable();
return 0;
chaintoolong:
NF_CT_STAT_INC(net, chaintoolong);
err = -ENOSPC;
out:
nf_conntrack_double_unlock(hash, reply_hash);
local_bh_enable();
return -EEXIST;
return err;
}
EXPORT_SYMBOL_GPL(nf_conntrack_hash_check_insert);
@ -1084,6 +1104,7 @@ int
__nf_conntrack_confirm(struct sk_buff *skb)
{
const struct nf_conntrack_zone *zone;
unsigned int chainlen = 0, sequence;
unsigned int hash, reply_hash;
struct nf_conntrack_tuple_hash *h;
struct nf_conn *ct;
@ -1091,7 +1112,6 @@ __nf_conntrack_confirm(struct sk_buff *skb)
struct hlist_nulls_node *n;
enum ip_conntrack_info ctinfo;
struct net *net;
unsigned int sequence;
int ret = NF_DROP;
ct = nf_ct_get(skb, &ctinfo);
@ -1151,15 +1171,28 @@ __nf_conntrack_confirm(struct sk_buff *skb)
/* See if there's one in the list already, including reverse:
NAT could have grabbed it without realizing, since we're
not in the hash. If there is, we lost race. */
hlist_nulls_for_each_entry(h, n, &nf_conntrack_hash[hash], hnnode)
hlist_nulls_for_each_entry(h, n, &nf_conntrack_hash[hash], hnnode) {
if (nf_ct_key_equal(h, &ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple,
zone, net))
goto out;
if (chainlen++ > MAX_CHAINLEN)
goto chaintoolong;
}
hlist_nulls_for_each_entry(h, n, &nf_conntrack_hash[reply_hash], hnnode)
chainlen = 0;
hlist_nulls_for_each_entry(h, n, &nf_conntrack_hash[reply_hash], hnnode) {
if (nf_ct_key_equal(h, &ct->tuplehash[IP_CT_DIR_REPLY].tuple,
zone, net))
goto out;
if (chainlen++ > MAX_CHAINLEN) {
chaintoolong:
nf_ct_add_to_dying_list(ct);
NF_CT_STAT_INC(net, chaintoolong);
NF_CT_STAT_INC(net, insert_failed);
ret = NF_DROP;
goto dying;
}
}
/* Timer relative to confirmation time, not original
setting time, otherwise we'd get timer wrap in
@ -2594,26 +2627,24 @@ int nf_conntrack_init_start(void)
spin_lock_init(&nf_conntrack_locks[i]);
if (!nf_conntrack_htable_size) {
/* Idea from tcp.c: use 1/16384 of memory.
* On i386: 32MB machine has 512 buckets.
* >= 1GB machines have 16384 buckets.
* >= 4GB machines have 65536 buckets.
*/
nf_conntrack_htable_size
= (((nr_pages << PAGE_SHIFT) / 16384)
/ sizeof(struct hlist_head));
if (nr_pages > (4 * (1024 * 1024 * 1024 / PAGE_SIZE)))
nf_conntrack_htable_size = 65536;
if (BITS_PER_LONG >= 64 &&
nr_pages > (4 * (1024 * 1024 * 1024 / PAGE_SIZE)))
nf_conntrack_htable_size = 262144;
else if (nr_pages > (1024 * 1024 * 1024 / PAGE_SIZE))
nf_conntrack_htable_size = 16384;
if (nf_conntrack_htable_size < 32)
nf_conntrack_htable_size = 32;
nf_conntrack_htable_size = 65536;
/* Use a max. factor of four by default to get the same max as
* with the old struct list_heads. When a table size is given
* we use the old value of 8 to avoid reducing the max.
* entries. */
max_factor = 4;
if (nf_conntrack_htable_size < 1024)
nf_conntrack_htable_size = 1024;
/* Use a max. factor of one by default to keep the average
* hash chain length at 2 entries. Each entry has to be added
* twice (once for original direction, once for reply).
* When a table size is given we use the old value of 8 to
* avoid implicit reduction of the max entries setting.
*/
max_factor = 1;
}
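	/* Worked example, hedged (assumes nf_conntrack_max is set to
	 * max_factor * nf_conntrack_htable_size as elsewhere in this
	 * file): 65536 buckets give nf_conntrack_max = 1 * 65536, and
	 * with every entry hashed twice (original + reply tuple) the
	 * expected chain length at capacity is 2 * 65536 / 65536 = 2.
	 */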
nf_conntrack_hash = nf_ct_alloc_hashtable(&nf_conntrack_htable_size, 1);

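The conntrack, expectation (below) and NAT hashes all move from jhash with a 32-bit seed to siphash with a 128-bit key. Distilled, the shared pattern looks like the following hedged sketch; bucket_of() is illustrative, while siphash(), get_random_once() and reciprocal_scale() are the real helpers, and callers zero and SIPHASH_ALIGNMENT-align a fixed-layout struct as in the hunks:

static siphash_key_t bucket_key __read_mostly;

static u32 bucket_of(const void *blob, size_t len, u32 nbuckets)
{
	/* 128-bit key drawn once at first use; remote peers can no
	 * longer precompute colliding chains from a guessable seed.
	 */
	get_random_once(&bucket_key, sizeof(bucket_key));
	return reciprocal_scale((u32)siphash(blob, len, &bucket_key), nbuckets);
}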
@ -17,7 +17,7 @@
#include <linux/err.h>
#include <linux/percpu.h>
#include <linux/kernel.h>
#include <linux/jhash.h>
#include <linux/siphash.h>
#include <linux/moduleparam.h>
#include <linux/export.h>
#include <net/net_namespace.h>
@ -41,7 +41,7 @@ EXPORT_SYMBOL_GPL(nf_ct_expect_hash);
unsigned int nf_ct_expect_max __read_mostly;
static struct kmem_cache *nf_ct_expect_cachep __read_mostly;
static unsigned int nf_ct_expect_hashrnd __read_mostly;
static siphash_key_t nf_ct_expect_hashrnd __read_mostly;
/* nf_conntrack_expect helper functions */
void nf_ct_unlink_expect_report(struct nf_conntrack_expect *exp,
@ -81,15 +81,26 @@ static void nf_ct_expectation_timed_out(struct timer_list *t)
static unsigned int nf_ct_expect_dst_hash(const struct net *n, const struct nf_conntrack_tuple *tuple)
{
unsigned int hash, seed;
struct {
union nf_inet_addr dst_addr;
u32 net_mix;
u16 dport;
u8 l3num;
u8 protonum;
} __aligned(SIPHASH_ALIGNMENT) combined;
u32 hash;
get_random_once(&nf_ct_expect_hashrnd, sizeof(nf_ct_expect_hashrnd));
seed = nf_ct_expect_hashrnd ^ net_hash_mix(n);
memset(&combined, 0, sizeof(combined));
hash = jhash2(tuple->dst.u3.all, ARRAY_SIZE(tuple->dst.u3.all),
(((tuple->dst.protonum ^ tuple->src.l3num) << 16) |
(__force __u16)tuple->dst.u.all) ^ seed);
combined.dst_addr = tuple->dst.u3;
combined.net_mix = net_hash_mix(n);
combined.dport = (__force __u16)tuple->dst.u.all;
combined.l3num = tuple->src.l3num;
combined.protonum = tuple->dst.protonum;
hash = siphash(&combined, sizeof(combined), &nf_ct_expect_hashrnd);
return reciprocal_scale(hash, nf_ct_expect_hsize);
}

@ -2528,7 +2528,9 @@ ctnetlink_ct_stat_cpu_fill_info(struct sk_buff *skb, u32 portid, u32 seq,
nla_put_be32(skb, CTA_STATS_SEARCH_RESTART,
htonl(st->search_restart)) ||
nla_put_be32(skb, CTA_STATS_CLASH_RESOLVE,
htonl(st->clash_resolve)))
htonl(st->clash_resolve)) ||
nla_put_be32(skb, CTA_STATS_CHAIN_TOOLONG,
htonl(st->chaintoolong)))
goto nla_put_failure;
nlmsg_end(skb, nlh);

@ -432,7 +432,7 @@ static int ct_cpu_seq_show(struct seq_file *seq, void *v)
unsigned int nr_conntracks;
if (v == SEQ_START_TOKEN) {
seq_puts(seq, "entries clashres found new invalid ignore delete delete_list insert insert_failed drop early_drop icmp_error expect_new expect_create expect_delete search_restart\n");
seq_puts(seq, "entries clashres found new invalid ignore delete chainlength insert insert_failed drop early_drop icmp_error expect_new expect_create expect_delete search_restart\n");
return 0;
}
@ -447,7 +447,7 @@ static int ct_cpu_seq_show(struct seq_file *seq, void *v)
st->invalid,
0,
0,
0,
st->chaintoolong,
st->insert,
st->insert_failed,
st->drop,

@ -13,7 +13,7 @@
#include <linux/skbuff.h>
#include <linux/gfp.h>
#include <net/xfrm.h>
#include <linux/jhash.h>
#include <linux/siphash.h>
#include <linux/rtnetlink.h>
#include <net/netfilter/nf_conntrack.h>
@ -34,7 +34,7 @@ static unsigned int nat_net_id __read_mostly;
static struct hlist_head *nf_nat_bysource __read_mostly;
static unsigned int nf_nat_htable_size __read_mostly;
static unsigned int nf_nat_hash_rnd __read_mostly;
static siphash_key_t nf_nat_hash_rnd __read_mostly;
struct nf_nat_lookup_hook_priv {
struct nf_hook_entries __rcu *entries;
@ -153,12 +153,22 @@ static unsigned int
hash_by_src(const struct net *n, const struct nf_conntrack_tuple *tuple)
{
unsigned int hash;
struct {
struct nf_conntrack_man src;
u32 net_mix;
u32 protonum;
} __aligned(SIPHASH_ALIGNMENT) combined;
get_random_once(&nf_nat_hash_rnd, sizeof(nf_nat_hash_rnd));
memset(&combined, 0, sizeof(combined));
/* Original src, to ensure we map it consistently if poss. */
hash = jhash2((u32 *)&tuple->src, sizeof(tuple->src) / sizeof(u32),
tuple->dst.protonum ^ nf_nat_hash_rnd ^ net_hash_mix(n));
combined.src = tuple->src;
combined.net_mix = net_hash_mix(n);
combined.protonum = tuple->dst.protonum;
hash = siphash(&combined, sizeof(combined), &nf_nat_hash_rnd);
return reciprocal_scale(hash, nf_nat_htable_size);
}

@ -41,6 +41,7 @@ struct nft_ct_helper_obj {
#ifdef CONFIG_NF_CONNTRACK_ZONES
static DEFINE_PER_CPU(struct nf_conn *, nft_ct_pcpu_template);
static unsigned int nft_ct_pcpu_template_refcnt __read_mostly;
static DEFINE_MUTEX(nft_ct_pcpu_mutex);
#endif
static u64 nft_ct_get_eval_counter(const struct nf_conn_counter *c,
@ -525,8 +526,10 @@ static void __nft_ct_set_destroy(const struct nft_ctx *ctx, struct nft_ct *priv)
#endif
#ifdef CONFIG_NF_CONNTRACK_ZONES
case NFT_CT_ZONE:
mutex_lock(&nft_ct_pcpu_mutex);
if (--nft_ct_pcpu_template_refcnt == 0)
nft_ct_tmpl_put_pcpu();
mutex_unlock(&nft_ct_pcpu_mutex);
break;
#endif
default:
@ -564,9 +567,13 @@ static int nft_ct_set_init(const struct nft_ctx *ctx,
#endif
#ifdef CONFIG_NF_CONNTRACK_ZONES
case NFT_CT_ZONE:
if (!nft_ct_tmpl_alloc_pcpu())
mutex_lock(&nft_ct_pcpu_mutex);
if (!nft_ct_tmpl_alloc_pcpu()) {
mutex_unlock(&nft_ct_pcpu_mutex);
return -ENOMEM;
}
nft_ct_pcpu_template_refcnt++;
mutex_unlock(&nft_ct_pcpu_mutex);
len = sizeof(u16);
break;
#endif

@ -493,7 +493,7 @@ int qrtr_endpoint_post(struct qrtr_endpoint *ep, const void *data, size_t len)
goto err;
}
if (!size || size & 3 || len != size + hdrlen)
if (!size || len != ALIGN(size, 4) + hdrlen)
goto err;
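/* Worked example of the relaxed check: a 5-byte payload that the
 * transport padded to a 4-byte boundary arrives with
 * len == ALIGN(5, 4) + hdrlen == 8 + hdrlen; the old
 * "len != size + hdrlen" test wrongly dropped it.
 */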
if (cb->dst_port != QRTR_PORT_CTRL && cb->type != QRTR_TYPE_DATA &&

@ -369,6 +369,7 @@ static int fq_codel_change(struct Qdisc *sch, struct nlattr *opt,
{
struct fq_codel_sched_data *q = qdisc_priv(sch);
struct nlattr *tb[TCA_FQ_CODEL_MAX + 1];
u32 quantum = 0;
int err;
if (!opt)
@ -386,6 +387,13 @@ static int fq_codel_change(struct Qdisc *sch, struct nlattr *opt,
q->flows_cnt > 65536)
return -EINVAL;
}
if (tb[TCA_FQ_CODEL_QUANTUM]) {
quantum = max(256U, nla_get_u32(tb[TCA_FQ_CODEL_QUANTUM]));
if (quantum > FQ_CODEL_QUANTUM_MAX) {
NL_SET_ERR_MSG(extack, "Invalid quantum");
return -EINVAL;
}
}
sch_tree_lock(sch);
if (tb[TCA_FQ_CODEL_TARGET]) {
@ -412,8 +420,8 @@ static int fq_codel_change(struct Qdisc *sch, struct nlattr *opt,
if (tb[TCA_FQ_CODEL_ECN])
q->cparams.ecn = !!nla_get_u32(tb[TCA_FQ_CODEL_ECN]);
if (tb[TCA_FQ_CODEL_QUANTUM])
q->quantum = max(256U, nla_get_u32(tb[TCA_FQ_CODEL_QUANTUM]));
if (quantum)
q->quantum = quantum;
if (tb[TCA_FQ_CODEL_DROP_BATCH_SIZE])
q->drop_batch_size = max(1U, nla_get_u32(tb[TCA_FQ_CODEL_DROP_BATCH_SIZE]));

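The net effect of the reshuffle: the quantum attribute is validated before sch_tree_lock() is taken, so a bad value can no longer be rejected after other attributes have already been applied. Schematically, condensed from the hunk above:

	u32 quantum = 0;

	/* validate outside the lock; nothing has been touched yet */
	if (tb[TCA_FQ_CODEL_QUANTUM]) {
		quantum = max(256U, nla_get_u32(tb[TCA_FQ_CODEL_QUANTUM]));
		if (quantum > FQ_CODEL_QUANTUM_MAX)
			return -EINVAL;
	}

	sch_tree_lock(sch);
	if (quantum)		/* commit under the lock */
		q->quantum = quantum;
	sch_tree_unlock(sch);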
@ -1426,7 +1426,7 @@ static int __tipc_sendmsg(struct socket *sock, struct msghdr *m, size_t dlen)
if (ua) {
if (!tipc_uaddr_valid(ua, m->msg_namelen))
return -EINVAL;
atype = ua->addrtype;
}
/* If socket belongs to a communication group follow other paths */

@ -384,8 +384,7 @@ static void test_xdp_bonding_attach(struct skeletons *skeletons)
{
struct bpf_link *link = NULL;
struct bpf_link *link2 = NULL;
int veth, bond;
int err;
int veth, bond, err;
if (!ASSERT_OK(system("ip link add veth type veth"), "add veth"))
goto out;
@ -399,22 +398,18 @@ static void test_xdp_bonding_attach(struct skeletons *skeletons)
if (!ASSERT_GE(bond, 0, "if_nametoindex bond"))
goto out;
/* enslaving with a XDP program loaded fails */
/* enslaving with a XDP program loaded is allowed */
link = bpf_program__attach_xdp(skeletons->xdp_dummy->progs.xdp_dummy_prog, veth);
if (!ASSERT_OK_PTR(link, "attach program to veth"))
goto out;
err = system("ip link set veth master bond");
if (!ASSERT_NEQ(err, 0, "attaching slave with xdp program expected to fail"))
if (!ASSERT_OK(err, "set veth master"))
goto out;
bpf_link__destroy(link);
link = NULL;
err = system("ip link set veth master bond");
if (!ASSERT_OK(err, "set veth master"))
goto out;
/* attaching to slave when master has no program is allowed */
link = bpf_program__attach_xdp(skeletons->xdp_dummy->progs.xdp_dummy_prog, veth);
if (!ASSERT_OK_PTR(link, "attach program to slave when enslaved"))
@ -434,8 +429,26 @@ static void test_xdp_bonding_attach(struct skeletons *skeletons)
goto out;
/* attaching to slave not allowed when master has program loaded */
link2 = bpf_program__attach_xdp(skeletons->xdp_dummy->progs.xdp_dummy_prog, bond);
ASSERT_ERR_PTR(link2, "attach program to slave when master has program");
link2 = bpf_program__attach_xdp(skeletons->xdp_dummy->progs.xdp_dummy_prog, veth);
if (!ASSERT_ERR_PTR(link2, "attach program to slave when master has program"))
goto out;
bpf_link__destroy(link);
link = NULL;
/* test program unwinding with a non-XDP slave */
if (!ASSERT_OK(system("ip link add vxlan type vxlan id 1 remote 1.2.3.4 dstport 0 dev lo"),
"add vxlan"))
goto out;
err = system("ip link set vxlan master bond");
if (!ASSERT_OK(err, "set vxlan master"))
goto out;
/* attaching not allowed when one slave does not support XDP */
link = bpf_program__attach_xdp(skeletons->xdp_dummy->progs.xdp_dummy_prog, bond);
if (!ASSERT_ERR_PTR(link, "attach program to master when slave does not support XDP"))
goto out;
out:
bpf_link__destroy(link);
@ -443,6 +456,44 @@ out:
system("ip link del veth");
system("ip link del bond");
system("ip link del vxlan");
}
/* Test with nested bonding devices to catch issue with negative jump label count */
static void test_xdp_bonding_nested(struct skeletons *skeletons)
{
struct bpf_link *link = NULL;
int bond, err;
if (!ASSERT_OK(system("ip link add bond type bond"), "add bond"))
goto out;
bond = if_nametoindex("bond");
if (!ASSERT_GE(bond, 0, "if_nametoindex bond"))
goto out;
if (!ASSERT_OK(system("ip link add bond_nest1 type bond"), "add bond_nest1"))
goto out;
err = system("ip link set bond_nest1 master bond");
if (!ASSERT_OK(err, "set bond_nest1 master"))
goto out;
if (!ASSERT_OK(system("ip link add bond_nest2 type bond"), "add bond_nest1"))
goto out;
err = system("ip link set bond_nest2 master bond_nest1");
if (!ASSERT_OK(err, "set bond_nest2 master"))
goto out;
link = bpf_program__attach_xdp(skeletons->xdp_dummy->progs.xdp_dummy_prog, bond);
ASSERT_OK_PTR(link, "attach program to master");
out:
bpf_link__destroy(link);
system("ip link del bond");
system("ip link del bond_nest1");
system("ip link del bond_nest2");
}
static int libbpf_debug_print(enum libbpf_print_level level,
@ -496,6 +547,9 @@ void test_xdp_bonding(void)
if (test__start_subtest("xdp_bonding_attach"))
test_xdp_bonding_attach(&skeletons);
if (test__start_subtest("xdp_bonding_nested"))
test_xdp_bonding_nested(&skeletons);
for (i = 0; i < ARRAY_SIZE(bond_test_cases); i++) {
struct bond_test_case *test_case = &bond_test_cases[i];

@ -27,6 +27,7 @@ TEST_PROGS += udpgro_fwd.sh
TEST_PROGS += veth.sh
TEST_PROGS += ioam6.sh
TEST_PROGS += gro.sh
TEST_PROGS += gre_gso.sh
TEST_PROGS_EXTENDED := in_netns.sh
TEST_GEN_FILES = socket nettest
TEST_GEN_FILES += psock_fanout psock_tpacket msg_zerocopy reuseport_addr_any

@ -0,0 +1,236 @@
#!/bin/bash
# SPDX-License-Identifier: GPL-2.0
# This test is for checking GRE GSO.
ret=0
# Kselftest framework requirement - SKIP code is 4.
ksft_skip=4
# all tests in this script. Can be overridden with -t option
TESTS="gre_gso"
VERBOSE=0
PAUSE_ON_FAIL=no
PAUSE=no
IP="ip -netns ns1"
NS_EXEC="ip netns exec ns1"
TMPFILE=`mktemp`
PID=
log_test()
{
local rc=$1
local expected=$2
local msg="$3"
if [ ${rc} -eq ${expected} ]; then
printf " TEST: %-60s [ OK ]\n" "${msg}"
nsuccess=$((nsuccess+1))
else
ret=1
nfail=$((nfail+1))
printf " TEST: %-60s [FAIL]\n" "${msg}"
if [ "${PAUSE_ON_FAIL}" = "yes" ]; then
echo
echo "hit enter to continue, 'q' to quit"
read a
[ "$a" = "q" ] && exit 1
fi
fi
if [ "${PAUSE}" = "yes" ]; then
echo
echo "hit enter to continue, 'q' to quit"
read a
[ "$a" = "q" ] && exit 1
fi
}
setup()
{
set -e
ip netns add ns1
ip netns set ns1 auto
$IP link set dev lo up
ip link add veth0 type veth peer name veth1
ip link set veth0 up
ip link set veth1 netns ns1
$IP link set veth1 name veth0
$IP link set veth0 up
dd if=/dev/urandom of=$TMPFILE bs=1024 count=2048 &>/dev/null
set +e
}
cleanup()
{
rm -rf $TMPFILE
[ -n "$PID" ] && kill $PID
ip link del dev gre1 &> /dev/null
ip link del dev veth0 &> /dev/null
ip netns del ns1
}
get_linklocal()
{
local dev=$1
local ns=$2
local addr
[ -n "$ns" ] && ns="-netns $ns"
addr=$(ip -6 -br $ns addr show dev ${dev} | \
awk '{
for (i = 3; i <= NF; ++i) {
if ($i ~ /^fe80/)
print $i
}
}'
)
addr=${addr/\/*}
[ -z "$addr" ] && return 1
echo $addr
return 0
}
gre_create_tun()
{
local a1=$1
local a2=$2
local mode
[[ $a1 =~ ^[0-9.]*$ ]] && mode=gre || mode=ip6gre
ip tunnel add gre1 mode $mode local $a1 remote $a2 dev veth0
ip link set gre1 up
$IP tunnel add gre1 mode $mode local $a2 remote $a1 dev veth0
$IP link set gre1 up
}
gre_gst_test_checks()
{
local name=$1
local addr=$2
$NS_EXEC nc -kl $port >/dev/null &
PID=$!
while ! $NS_EXEC ss -ltn | grep -q $port; do ((i++)); sleep 0.01; done
cat $TMPFILE | timeout 1 nc $addr $port
log_test $? 0 "$name - copy file w/ TSO"
ethtool -K veth0 tso off
cat $TMPFILE | timeout 1 nc $addr $port
log_test $? 0 "$name - copy file w/ GSO"
ethtool -K veth0 tso on
kill $PID
PID=
}
gre6_gso_test()
{
local port=7777
setup
a1=$(get_linklocal veth0)
a2=$(get_linklocal veth0 ns1)
gre_create_tun $a1 $a2
ip addr add 172.16.2.1/24 dev gre1
$IP addr add 172.16.2.2/24 dev gre1
ip -6 addr add 2001:db8:1::1/64 dev gre1 nodad
$IP -6 addr add 2001:db8:1::2/64 dev gre1 nodad
sleep 2
gre_gst_test_checks GREv6/v4 172.16.2.2
gre_gst_test_checks GREv6/v6 2001:db8:1::2
cleanup
}
gre_gso_test()
{
gre6_gso_test
}
################################################################################
# usage
usage()
{
cat <<EOF
usage: ${0##*/} OPTS
-t <test> Test(s) to run (default: all)
(options: $TESTS)
-p Pause on fail
-P Pause after each test before cleanup
-v verbose mode (show commands and output)
EOF
}
################################################################################
# main
while getopts :t:pPhv o
do
case $o in
t) TESTS=$OPTARG;;
p) PAUSE_ON_FAIL=yes;;
P) PAUSE=yes;;
v) VERBOSE=$(($VERBOSE + 1));;
h) usage; exit 0;;
*) usage; exit 1;;
esac
done
PEER_CMD="ip netns exec ${PEER_NS}"
# make sure we don't pause twice
[ "${PAUSE}" = "yes" ] && PAUSE_ON_FAIL=no
if [ "$(id -u)" -ne 0 ];then
echo "SKIP: Need root privileges"
exit $ksft_skip;
fi
if [ ! -x "$(command -v ip)" ]; then
echo "SKIP: Could not run test without ip tool"
exit $ksft_skip
fi
if [ ! -x "$(command -v nc)" ]; then
echo "SKIP: Could not run test without nc tool"
exit $ksft_skip
fi
# start clean
cleanup &> /dev/null
for t in $TESTS
do
case $t in
gre_gso) gre_gso_test;;
help) echo "Test names: $TESTS"; exit 0;;
esac
done
if [ "$TESTS" != "none" ]; then
printf "\nTests passed: %3d\n" ${nsuccess}
printf "Tests failed: %3d\n" ${nfail}
fi
exit $ret

@ -22,8 +22,8 @@ usage() {
cleanup()
{
rm -f "$cin" "$cout"
rm -f "$sin" "$sout"
rm -f "$cout" "$sout"
rm -f "$large" "$small"
rm -f "$capout"
local netns