Merge tag 'net-5.19-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Paolo Abeni:
 "Including fixes from bpf and netfilter.

  Current release - regressions:
   - netfilter: cttimeout: fix slab-out-of-bounds read in cttimeout_net_exit

  Current release - new code bugs:
   - bpf: ftrace: keep address offset in ftrace_lookup_symbols
   - bpf: force cookies array to follow symbols sorting

  Previous releases - regressions:
   - ipv4: ping: fix bind address validity check
   - tipc: fix use-after-free read in tipc_named_reinit
   - eth: veth: add updating of trans_start

  Previous releases - always broken:
   - sock: redo the psock vs ULP protection check
   - netfilter: nf_dup_netdev: fix skb_under_panic
   - bpf: fix request_sock leak in sk lookup helpers
   - eth: igb: fix a use-after-free issue in igb_clean_tx_ring
   - eth: ice: prohibit improper channel config for DCB
   - eth: at803x: fix null pointer dereference on AR9331 phy
   - eth: virtio_net: fix xdp_rxq_info bug after suspend/resume

  Misc:
   - eth: hinic: replace memcpy() with direct assignment"

Signed-off-by: Paolo Abeni <pabeni@redhat.com>

* tag 'net-5.19-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (47 commits)
  net: openvswitch: fix parsing of nw_proto for IPv6 fragments
  sock: redo the psock vs ULP protection check
  Revert "net/tls: fix tls_sk_proto_close executed repeatedly"
  virtio_net: fix xdp_rxq_info bug after suspend/resume
  igb: Make DMA faster when CPU is active on the PCIe link
  net: dsa: qca8k: reduce mgmt ethernet timeout
  net: dsa: qca8k: reset cpu port on MTU change
  MAINTAINERS: Add a maintainer for OCP Time Card
  hinic: Replace memcpy() with direct assignment
  Revert "drivers/net/ethernet/neterion/vxge: Fix a use-after-free bug in vxge-main.c"
  net: phy: smsc: Disable Energy Detect Power-Down in interrupt mode
  ice: ethtool: Prohibit improper channel config for DCB
  ice: ethtool: advertise 1000M speeds properly
  ice: Fix switchdev rules book keeping
  ice: ignore protocol field in GTP offload
  netfilter: nf_dup_netdev: add and use recursion counter
  netfilter: nf_dup_netdev: do not push mac header a second time
  selftests: netfilter: correct PKTGEN_SCRIPT_PATHS in nft_concat_range.sh
  net/tls: fix tls_sk_proto_close executed repeatedly
  erspan: do not assume transport header is always set
  ...
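A note on the "bpf: force cookies array to follow symbols sorting" fix (kernel/trace/bpf_trace.c further down): kprobe-multi cookies are associated with symbols purely by array index, so sorting the symbol array must apply the same permutation to the cookies array — which is why the diff switches from sort() to sort_r() with a symbols_swap_r() callback. Below is a minimal user-space sketch of that invariant; the names and the insertion sort are illustrative only, not kernel code:

/* Sketch: keep a parallel "cookies" array in sync while sorting a
 * symbol-name array. Illustrative names, not kernel APIs.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Swap the name and, when present, its cookie, so names[i] and
 * cookies[i] always describe the same probe target.
 */
static void swap_sym(const char **names, uint64_t *cookies, size_t a, size_t b)
{
	const char *tn = names[a];

	names[a] = names[b];
	names[b] = tn;

	if (cookies) {
		uint64_t tc = cookies[a];

		cookies[a] = cookies[b];
		cookies[b] = tc;
	}
}

/* Insertion sort by name; every element move goes through swap_sym(). */
static void sort_syms(const char **names, uint64_t *cookies, size_t n)
{
	for (size_t i = 1; i < n; i++)
		for (size_t j = i; j > 0 && strcmp(names[j - 1], names[j]) > 0; j--)
			swap_sym(names, cookies, j - 1, j);
}

int main(void)
{
	const char *names[] = { "vfs_read", "do_sys_open", "tcp_v4_rcv" };
	uint64_t cookies[]  = { 1,          2,             3 };

	sort_syms(names, cookies, 3);
	for (int i = 0; i < 3; i++)
		printf("%s -> cookie %llu\n", names[i],
		       (unsigned long long)cookies[i]);
	return 0;
}

Compiled and run, this prints the names in sorted order with each cookie still attached to its original symbol, which is exactly what the broken sort() call lost.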
commit 399bd66e21

 MAINTAINERS | 42 +++++++++++++++++++++++++++++---------------
MAINTAINERS
@@ -3662,7 +3662,7 @@ BPF JIT for ARM
 M:	Shubham Bansal <illusionist.neo@gmail.com>
 L:	netdev@vger.kernel.org
 L:	bpf@vger.kernel.org
-S:	Maintained
+S:	Odd Fixes
 F:	arch/arm/net/
 
 BPF JIT for ARM64
@@ -3686,14 +3686,15 @@ BPF JIT for NFP NICs
 M:	Jakub Kicinski <kuba@kernel.org>
 L:	netdev@vger.kernel.org
 L:	bpf@vger.kernel.org
-S:	Supported
+S:	Odd Fixes
 F:	drivers/net/ethernet/netronome/nfp/bpf/
 
 BPF JIT for POWERPC (32-BIT AND 64-BIT)
 M:	Naveen N. Rao <naveen.n.rao@linux.ibm.com>
+M:	Michael Ellerman <mpe@ellerman.id.au>
 L:	netdev@vger.kernel.org
 L:	bpf@vger.kernel.org
-S:	Maintained
+S:	Supported
 F:	arch/powerpc/net/
 
 BPF JIT for RISC-V (32-bit)
@@ -3719,7 +3720,7 @@ M: Heiko Carstens <hca@linux.ibm.com>
 M:	Vasily Gorbik <gor@linux.ibm.com>
 L:	netdev@vger.kernel.org
 L:	bpf@vger.kernel.org
-S:	Maintained
+S:	Supported
 F:	arch/s390/net/
 X:	arch/s390/net/pnet.c
 
@@ -3727,14 +3728,14 @@ BPF JIT for SPARC (32-BIT AND 64-BIT)
 M:	David S. Miller <davem@davemloft.net>
 L:	netdev@vger.kernel.org
 L:	bpf@vger.kernel.org
-S:	Maintained
+S:	Odd Fixes
 F:	arch/sparc/net/
 
 BPF JIT for X86 32-BIT
 M:	Wang YanQing <udknight@gmail.com>
 L:	netdev@vger.kernel.org
 L:	bpf@vger.kernel.org
-S:	Maintained
+S:	Odd Fixes
 F:	arch/x86/net/bpf_jit_comp32.c
 
 BPF JIT for X86 64-BIT
@@ -3757,6 +3758,19 @@ F: include/linux/bpf_lsm.h
 F:	kernel/bpf/bpf_lsm.c
 F:	security/bpf/
 
+BPF L7 FRAMEWORK
+M:	John Fastabend <john.fastabend@gmail.com>
+M:	Jakub Sitnicki <jakub@cloudflare.com>
+L:	netdev@vger.kernel.org
+L:	bpf@vger.kernel.org
+S:	Maintained
+F:	include/linux/skmsg.h
+F:	net/core/skmsg.c
+F:	net/core/sock_map.c
+F:	net/ipv4/tcp_bpf.c
+F:	net/ipv4/udp_bpf.c
+F:	net/unix/unix_bpf.c
+
 BPFTOOL
 M:	Quentin Monnet <quentin@isovalent.com>
 L:	bpf@vger.kernel.org
@@ -11098,20 +11112,6 @@ S: Maintained
 F:	include/net/l3mdev.h
 F:	net/l3mdev
 
-L7 BPF FRAMEWORK
-M:	John Fastabend <john.fastabend@gmail.com>
-M:	Daniel Borkmann <daniel@iogearbox.net>
-M:	Jakub Sitnicki <jakub@cloudflare.com>
-L:	netdev@vger.kernel.org
-L:	bpf@vger.kernel.org
-S:	Maintained
-F:	include/linux/skmsg.h
-F:	net/core/skmsg.c
-F:	net/core/sock_map.c
-F:	net/ipv4/tcp_bpf.c
-F:	net/ipv4/udp_bpf.c
-F:	net/unix/unix_bpf.c
-
 LANDLOCK SECURITY MODULE
 M:	Mickaël Salaün <mic@digikod.net>
 L:	linux-security-module@vger.kernel.org
@@ -13954,7 +13954,6 @@ F: net/ipv6/tcp*.c
 NETWORKING [TLS]
 M:	Boris Pismenny <borisp@nvidia.com>
 M:	John Fastabend <john.fastabend@gmail.com>
-M:	Daniel Borkmann <daniel@iogearbox.net>
 M:	Jakub Kicinski <kuba@kernel.org>
 L:	netdev@vger.kernel.org
 S:	Maintained
@@ -14871,6 +14870,7 @@ F: include/dt-bindings/
 
 OPENCOMPUTE PTP CLOCK DRIVER
 M:	Jonathan Lemon <jonathan.lemon@gmail.com>
+M:	Vadim Fedorenko <vadfed@fb.com>
 L:	netdev@vger.kernel.org
 S:	Maintained
 F:	drivers/ptp/ptp_ocp.c
arch/x86/net/bpf_jit_comp.c
@@ -1420,8 +1420,9 @@ st: if (is_imm8(insn->off))
 		case BPF_JMP | BPF_CALL:
 			func = (u8 *) __bpf_call_base + imm32;
 			if (tail_call_reachable) {
+				/* mov rax, qword ptr [rbp - rounded_stack_depth - 8] */
 				EMIT3_off32(0x48, 0x8B, 0x85,
-					    -(bpf_prog->aux->stack_depth + 8));
+					    -round_up(bpf_prog->aux->stack_depth, 8) - 8);
 				if (!imm32 || emit_call(&prog, func, image + addrs[i - 1] + 7))
 					return -EINVAL;
 			} else {
drivers/net/bonding/bond_main.c
@@ -3684,9 +3684,11 @@ re_arm:
 	if (!rtnl_trylock())
 		return;
 
-	if (should_notify_peers)
+	if (should_notify_peers) {
+		bond->send_peer_notif--;
 		call_netdevice_notifiers(NETDEV_NOTIFY_PEERS,
 					 bond->dev);
+	}
 	if (should_notify_rtnl) {
 		bond_slave_state_notify(bond);
 		bond_slave_link_notify(bond);
drivers/net/dsa/qca8k.c
@@ -2334,6 +2334,7 @@ static int
 qca8k_port_change_mtu(struct dsa_switch *ds, int port, int new_mtu)
 {
 	struct qca8k_priv *priv = ds->priv;
+	int ret;
 
 	/* We have only have a general MTU setting.
 	 * DSA always set the CPU port's MTU to the largest MTU of the slave
@@ -2344,8 +2345,27 @@ qca8k_port_change_mtu(struct dsa_switch *ds, int port, int new_mtu)
 	if (!dsa_is_cpu_port(ds, port))
 		return 0;
 
+	/* To change the MAX_FRAME_SIZE the cpu ports must be off or
+	 * the switch panics.
+	 * Turn off both cpu ports before applying the new value to prevent
+	 * this.
+	 */
+	if (priv->port_enabled_map & BIT(0))
+		qca8k_port_set_status(priv, 0, 0);
+
+	if (priv->port_enabled_map & BIT(6))
+		qca8k_port_set_status(priv, 6, 0);
+
 	/* Include L2 header / FCS length */
-	return qca8k_write(priv, QCA8K_MAX_FRAME_SIZE, new_mtu + ETH_HLEN + ETH_FCS_LEN);
+	ret = qca8k_write(priv, QCA8K_MAX_FRAME_SIZE, new_mtu + ETH_HLEN + ETH_FCS_LEN);
+
+	if (priv->port_enabled_map & BIT(0))
+		qca8k_port_set_status(priv, 0, 1);
+
+	if (priv->port_enabled_map & BIT(6))
+		qca8k_port_set_status(priv, 6, 1);
+
+	return ret;
 }
 
 static int
drivers/net/dsa/qca8k.h
@@ -15,7 +15,7 @@
 
 #define QCA8K_ETHERNET_MDIO_PRIORITY			7
 #define QCA8K_ETHERNET_PHY_PRIORITY			6
-#define QCA8K_ETHERNET_TIMEOUT				100
+#define QCA8K_ETHERNET_TIMEOUT				5
 
 #define QCA8K_NUM_PORTS					7
 #define QCA8K_NUM_CPU_PORTS				2
drivers/net/ethernet/huawei/hinic/hinic_devlink.c
@@ -43,9 +43,7 @@ static bool check_image_valid(struct hinic_devlink_priv *priv, const u8 *buf,
 
 	for (i = 0; i < fw_image->fw_info.fw_section_cnt; i++) {
 		len += fw_image->fw_section_info[i].fw_section_len;
-		memcpy(&host_image->image_section_info[i],
-		       &fw_image->fw_section_info[i],
-		       sizeof(struct fw_section_info_st));
+		host_image->image_section_info[i] = fw_image->fw_section_info[i];
 	}
 
 	if (len != fw_image->fw_len ||
drivers/net/ethernet/intel/ice/ice_ethtool.c
@@ -2189,6 +2189,42 @@ ice_setup_autoneg(struct ice_port_info *p, struct ethtool_link_ksettings *ks,
 	return err;
 }
 
+/**
+ * ice_set_phy_type_from_speed - set phy_types based on speeds
+ * and advertised modes
+ * @ks: ethtool link ksettings struct
+ * @phy_type_low: pointer to the lower part of phy_type
+ * @phy_type_high: pointer to the higher part of phy_type
+ * @adv_link_speed: targeted link speeds bitmap
+ */
+static void
+ice_set_phy_type_from_speed(const struct ethtool_link_ksettings *ks,
+			    u64 *phy_type_low, u64 *phy_type_high,
+			    u16 adv_link_speed)
+{
+	/* Handle 1000M speed in a special way because ice_update_phy_type
+	 * enables all link modes, but having mixed copper and optical
+	 * standards is not supported.
+	 */
+	adv_link_speed &= ~ICE_AQ_LINK_SPEED_1000MB;
+
+	if (ethtool_link_ksettings_test_link_mode(ks, advertising,
+						  1000baseT_Full))
+		*phy_type_low |= ICE_PHY_TYPE_LOW_1000BASE_T |
+				 ICE_PHY_TYPE_LOW_1G_SGMII;
+
+	if (ethtool_link_ksettings_test_link_mode(ks, advertising,
+						  1000baseKX_Full))
+		*phy_type_low |= ICE_PHY_TYPE_LOW_1000BASE_KX;
+
+	if (ethtool_link_ksettings_test_link_mode(ks, advertising,
+						  1000baseX_Full))
+		*phy_type_low |= ICE_PHY_TYPE_LOW_1000BASE_SX |
+				 ICE_PHY_TYPE_LOW_1000BASE_LX;
+
+	ice_update_phy_type(phy_type_low, phy_type_high, adv_link_speed);
+}
+
 /**
  * ice_set_link_ksettings - Set Speed and Duplex
  * @netdev: network interface device structure
@@ -2320,7 +2356,8 @@ ice_set_link_ksettings(struct net_device *netdev,
 		adv_link_speed = curr_link_speed;
 
 	/* Convert the advertise link speeds to their corresponded PHY_TYPE */
-	ice_update_phy_type(&phy_type_low, &phy_type_high, adv_link_speed);
+	ice_set_phy_type_from_speed(ks, &phy_type_low, &phy_type_high,
+				    adv_link_speed);
 
 	if (!autoneg_changed && adv_link_speed == curr_link_speed) {
 		netdev_info(netdev, "Nothing changed, exiting without setting anything.\n");
@@ -3470,6 +3507,16 @@ static int ice_set_channels(struct net_device *dev, struct ethtool_channels *ch)
 	new_rx = ch->combined_count + ch->rx_count;
 	new_tx = ch->combined_count + ch->tx_count;
 
+	if (new_rx < vsi->tc_cfg.numtc) {
+		netdev_err(dev, "Cannot set less Rx channels, than Traffic Classes you have (%u)\n",
+			   vsi->tc_cfg.numtc);
+		return -EINVAL;
+	}
+	if (new_tx < vsi->tc_cfg.numtc) {
+		netdev_err(dev, "Cannot set less Tx channels, than Traffic Classes you have (%u)\n",
+			   vsi->tc_cfg.numtc);
+		return -EINVAL;
+	}
 	if (new_rx > ice_get_max_rxq(pf)) {
 		netdev_err(dev, "Maximum allowed Rx channels is %d\n",
 			   ice_get_max_rxq(pf));
drivers/net/ethernet/intel/ice/ice_lib.c
@@ -909,7 +909,7 @@ static void ice_set_dflt_vsi_ctx(struct ice_hw *hw, struct ice_vsi_ctx *ctxt)
  * @vsi: the VSI being configured
  * @ctxt: VSI context structure
  */
-static void ice_vsi_setup_q_map(struct ice_vsi *vsi, struct ice_vsi_ctx *ctxt)
+static int ice_vsi_setup_q_map(struct ice_vsi *vsi, struct ice_vsi_ctx *ctxt)
 {
 	u16 offset = 0, qmap = 0, tx_count = 0, pow = 0;
 	u16 num_txq_per_tc, num_rxq_per_tc;
@@ -982,7 +982,18 @@ static void ice_vsi_setup_q_map(struct ice_vsi *vsi, struct ice_vsi_ctx *ctxt)
 	else
 		vsi->num_rxq = num_rxq_per_tc;
 
+	if (vsi->num_rxq > vsi->alloc_rxq) {
+		dev_err(ice_pf_to_dev(vsi->back), "Trying to use more Rx queues (%u), than were allocated (%u)!\n",
+			vsi->num_rxq, vsi->alloc_rxq);
+		return -EINVAL;
+	}
+
 	vsi->num_txq = tx_count;
+	if (vsi->num_txq > vsi->alloc_txq) {
+		dev_err(ice_pf_to_dev(vsi->back), "Trying to use more Tx queues (%u), than were allocated (%u)!\n",
+			vsi->num_txq, vsi->alloc_txq);
+		return -EINVAL;
+	}
 
 	if (vsi->type == ICE_VSI_VF && vsi->num_txq != vsi->num_rxq) {
 		dev_dbg(ice_pf_to_dev(vsi->back), "VF VSI should have same number of Tx and Rx queues. Hence making them equal\n");
@@ -1000,6 +1011,8 @@ static void ice_vsi_setup_q_map(struct ice_vsi *vsi, struct ice_vsi_ctx *ctxt)
 	 */
 	ctxt->info.q_mapping[0] = cpu_to_le16(vsi->rxq_map[0]);
 	ctxt->info.q_mapping[1] = cpu_to_le16(vsi->num_rxq);
+
+	return 0;
 }
 
 /**
@@ -1187,7 +1200,10 @@ static int ice_vsi_init(struct ice_vsi *vsi, bool init_vsi)
 	if (vsi->type == ICE_VSI_CHNL) {
 		ice_chnl_vsi_setup_q_map(vsi, ctxt);
 	} else {
-		ice_vsi_setup_q_map(vsi, ctxt);
+		ret = ice_vsi_setup_q_map(vsi, ctxt);
+		if (ret)
+			goto out;
+
 		if (!init_vsi) /* means VSI being updated */
 			/* must to indicate which section of VSI context are
 			 * being modified
@@ -3464,7 +3480,7 @@ void ice_vsi_cfg_netdev_tc(struct ice_vsi *vsi, u8 ena_tc)
 *
 * Prepares VSI tc_config to have queue configurations based on MQPRIO options.
 */
-static void
+static int
 ice_vsi_setup_q_map_mqprio(struct ice_vsi *vsi, struct ice_vsi_ctx *ctxt,
 			   u8 ena_tc)
 {
@@ -3513,7 +3529,18 @@ ice_vsi_setup_q_map_mqprio(struct ice_vsi *vsi, struct ice_vsi_ctx *ctxt,
 
 	/* Set actual Tx/Rx queue pairs */
 	vsi->num_txq = offset + qcount_tx;
+	if (vsi->num_txq > vsi->alloc_txq) {
+		dev_err(ice_pf_to_dev(vsi->back), "Trying to use more Tx queues (%u), than were allocated (%u)!\n",
+			vsi->num_txq, vsi->alloc_txq);
+		return -EINVAL;
+	}
+
 	vsi->num_rxq = offset + qcount_rx;
+	if (vsi->num_rxq > vsi->alloc_rxq) {
+		dev_err(ice_pf_to_dev(vsi->back), "Trying to use more Rx queues (%u), than were allocated (%u)!\n",
+			vsi->num_rxq, vsi->alloc_rxq);
+		return -EINVAL;
+	}
 
 	/* Setup queue TC[0].qmap for given VSI context */
 	ctxt->info.tc_mapping[0] = cpu_to_le16(qmap);
@@ -3531,6 +3558,8 @@ ice_vsi_setup_q_map_mqprio(struct ice_vsi *vsi, struct ice_vsi_ctx *ctxt,
 	dev_dbg(ice_pf_to_dev(vsi->back), "vsi->num_rxq = %d\n", vsi->num_rxq);
 	dev_dbg(ice_pf_to_dev(vsi->back), "all_numtc %u, all_enatc: 0x%04x, tc_cfg.numtc %u\n",
 		vsi->all_numtc, vsi->all_enatc, vsi->tc_cfg.numtc);
+
+	return 0;
 }
 
 /**
@@ -3580,9 +3609,12 @@ int ice_vsi_cfg_tc(struct ice_vsi *vsi, u8 ena_tc)
 
 	if (vsi->type == ICE_VSI_PF &&
 	    test_bit(ICE_FLAG_TC_MQPRIO, pf->flags))
-		ice_vsi_setup_q_map_mqprio(vsi, ctx, ena_tc);
+		ret = ice_vsi_setup_q_map_mqprio(vsi, ctx, ena_tc);
 	else
-		ice_vsi_setup_q_map(vsi, ctx);
+		ret = ice_vsi_setup_q_map(vsi, ctx);
+
+	if (ret)
+		goto out;
 
 	/* must to indicate which section of VSI context are being modified */
 	ctx->info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_RXQ_MAP_VALID);
drivers/net/ethernet/intel/ice/ice_tc_lib.c
@@ -524,6 +524,7 @@ ice_eswitch_add_tc_fltr(struct ice_vsi *vsi, struct ice_tc_flower_fltr *fltr)
 	 */
 	fltr->rid = rule_added.rid;
 	fltr->rule_id = rule_added.rule_id;
+	fltr->dest_id = rule_added.vsi_handle;
 
 exit:
 	kfree(list);
@@ -993,7 +994,9 @@ ice_parse_cls_flower(struct net_device *filter_dev, struct ice_vsi *vsi,
 		n_proto_key = ntohs(match.key->n_proto);
 		n_proto_mask = ntohs(match.mask->n_proto);
 
-		if (n_proto_key == ETH_P_ALL || n_proto_key == 0) {
+		if (n_proto_key == ETH_P_ALL || n_proto_key == 0 ||
+		    fltr->tunnel_type == TNL_GTPU ||
+		    fltr->tunnel_type == TNL_GTPC) {
 			n_proto_key = 0;
 			n_proto_mask = 0;
 		} else {
drivers/net/ethernet/intel/igb/igb_main.c
@@ -4819,8 +4819,11 @@ static void igb_clean_tx_ring(struct igb_ring *tx_ring)
 	while (i != tx_ring->next_to_use) {
 		union e1000_adv_tx_desc *eop_desc, *tx_desc;
 
-		/* Free all the Tx ring sk_buffs */
-		dev_kfree_skb_any(tx_buffer->skb);
+		/* Free all the Tx ring sk_buffs or xdp frames */
+		if (tx_buffer->type == IGB_TYPE_SKB)
+			dev_kfree_skb_any(tx_buffer->skb);
+		else
+			xdp_return_frame(tx_buffer->xdpf);
 
 		/* unmap skb header data */
 		dma_unmap_single(tx_ring->dev,
@@ -9898,11 +9901,10 @@ static void igb_init_dmac(struct igb_adapter *adapter, u32 pba)
 	struct e1000_hw *hw = &adapter->hw;
 	u32 dmac_thr;
 	u16 hwm;
+	u32 reg;
 
 	if (hw->mac.type > e1000_82580) {
 		if (adapter->flags & IGB_FLAG_DMAC) {
-			u32 reg;
-
 			/* force threshold to 0. */
 			wr32(E1000_DMCTXTH, 0);
 
@@ -9935,7 +9937,6 @@ static void igb_init_dmac(struct igb_adapter *adapter, u32 pba)
 			/* Disable BMC-to-OS Watchdog Enable */
 			if (hw->mac.type != e1000_i354)
 				reg &= ~E1000_DMACR_DC_BMC2OSW_EN;
-
 			wr32(E1000_DMACR, reg);
 
 			/* no lower threshold to disable
@@ -9952,12 +9953,12 @@ static void igb_init_dmac(struct igb_adapter *adapter, u32 pba)
 			 */
 			wr32(E1000_DMCTXTH, (IGB_MIN_TXPBSIZE -
 			     (IGB_TX_BUF_4096 + adapter->max_frame_size)) >> 6);
+		}
 
-			/* make low power state decision controlled
-			 * by DMA coal
-			 */
+		if (hw->mac.type >= e1000_i210 ||
+		    (adapter->flags & IGB_FLAG_DMAC)) {
 			reg = rd32(E1000_PCIEMISC);
-			reg &= ~E1000_PCIEMISC_LX_DECISION;
+			reg |= E1000_PCIEMISC_LX_DECISION;
 			wr32(E1000_PCIEMISC, reg);
 		} /* endif adapter->dmac is not disabled */
 	} else if (hw->mac.type == e1000_82580) {
drivers/net/hamradio/6pack.c
@@ -99,6 +99,7 @@ struct sixpack {
 
 	unsigned int		rx_count;
 	unsigned int		rx_count_cooked;
+	spinlock_t		rxlock;
 
 	int			mtu;		/* Our mtu (to spot changes!) */
 	int			buffsize;	/* Max buffers sizes */
@@ -565,6 +566,7 @@ static int sixpack_open(struct tty_struct *tty)
 	sp->dev = dev;
 
 	spin_lock_init(&sp->lock);
+	spin_lock_init(&sp->rxlock);
 	refcount_set(&sp->refcnt, 1);
 	init_completion(&sp->dead);
 
@@ -913,6 +915,7 @@ static void decode_std_command(struct sixpack *sp, unsigned char cmd)
 			sp->led_state = 0x60;
 			/* fill trailing bytes with zeroes */
 			sp->tty->ops->write(sp->tty, &sp->led_state, 1);
+			spin_lock_bh(&sp->rxlock);
 			rest = sp->rx_count;
 			if (rest != 0)
 				for (i = rest; i <= 3; i++)
@@ -930,6 +933,7 @@ static void decode_std_command(struct sixpack *sp, unsigned char cmd)
 				sp_bump(sp, 0);
 			}
 			sp->rx_count_cooked = 0;
+			spin_unlock_bh(&sp->rxlock);
 		}
 		break;
 	case SIXP_TX_URUN: printk(KERN_DEBUG "6pack: TX underrun\n");
@@ -959,8 +963,11 @@ sixpack_decode(struct sixpack *sp, const unsigned char *pre_rbuff, int count)
 			decode_prio_command(sp, inbyte);
 		else if ((inbyte & SIXP_STD_CMD_MASK) != 0)
 			decode_std_command(sp, inbyte);
-		else if ((sp->status & SIXP_RX_DCD_MASK) == SIXP_RX_DCD_MASK)
+		else if ((sp->status & SIXP_RX_DCD_MASK) == SIXP_RX_DCD_MASK) {
+			spin_lock_bh(&sp->rxlock);
 			decode_data(sp, inbyte);
+			spin_unlock_bh(&sp->rxlock);
+		}
 	}
 }
drivers/net/phy/aquantia_main.c
@@ -34,6 +34,8 @@
 #define MDIO_AN_VEND_PROV			0xc400
 #define MDIO_AN_VEND_PROV_1000BASET_FULL	BIT(15)
 #define MDIO_AN_VEND_PROV_1000BASET_HALF	BIT(14)
+#define MDIO_AN_VEND_PROV_5000BASET_FULL	BIT(11)
+#define MDIO_AN_VEND_PROV_2500BASET_FULL	BIT(10)
 #define MDIO_AN_VEND_PROV_DOWNSHIFT_EN		BIT(4)
 #define MDIO_AN_VEND_PROV_DOWNSHIFT_MASK	GENMASK(3, 0)
 #define MDIO_AN_VEND_PROV_DOWNSHIFT_DFLT	4
@@ -231,9 +233,20 @@ static int aqr_config_aneg(struct phy_device *phydev)
 			      phydev->advertising))
 		reg |= MDIO_AN_VEND_PROV_1000BASET_HALF;
 
+	/* Handle the case when the 2.5G and 5G speeds are not advertised */
+	if (linkmode_test_bit(ETHTOOL_LINK_MODE_2500baseT_Full_BIT,
+			      phydev->advertising))
+		reg |= MDIO_AN_VEND_PROV_2500BASET_FULL;
+
+	if (linkmode_test_bit(ETHTOOL_LINK_MODE_5000baseT_Full_BIT,
+			      phydev->advertising))
+		reg |= MDIO_AN_VEND_PROV_5000BASET_FULL;
+
 	ret = phy_modify_mmd_changed(phydev, MDIO_MMD_AN, MDIO_AN_VEND_PROV,
 				     MDIO_AN_VEND_PROV_1000BASET_HALF |
-				     MDIO_AN_VEND_PROV_1000BASET_FULL, reg);
+				     MDIO_AN_VEND_PROV_1000BASET_FULL |
+				     MDIO_AN_VEND_PROV_2500BASET_FULL |
+				     MDIO_AN_VEND_PROV_5000BASET_FULL, reg);
 	if (ret < 0)
 		return ret;
 	if (ret > 0)
drivers/net/phy/at803x.c
@@ -2072,6 +2072,8 @@ static struct phy_driver at803x_driver[] = {
 	/* ATHEROS AR9331 */
 	PHY_ID_MATCH_EXACT(ATH9331_PHY_ID),
 	.name			= "Qualcomm Atheros AR9331 built-in PHY",
+	.probe			= at803x_probe,
+	.remove			= at803x_remove,
 	.suspend		= at803x_suspend,
 	.resume			= at803x_resume,
 	.flags			= PHY_POLL_CABLE_TEST,
@@ -2087,6 +2089,8 @@ static struct phy_driver at803x_driver[] = {
 	/* Qualcomm Atheros QCA9561 */
 	PHY_ID_MATCH_EXACT(QCA9561_PHY_ID),
 	.name			= "Qualcomm Atheros QCA9561 built-in PHY",
+	.probe			= at803x_probe,
+	.remove			= at803x_remove,
 	.suspend		= at803x_suspend,
 	.resume			= at803x_resume,
 	.flags			= PHY_POLL_CABLE_TEST,
@@ -2151,6 +2155,8 @@ static struct phy_driver at803x_driver[] = {
 	PHY_ID_MATCH_EXACT(QCA8081_PHY_ID),
 	.name			= "Qualcomm QCA8081",
 	.flags			= PHY_POLL_CABLE_TEST,
+	.probe			= at803x_probe,
+	.remove			= at803x_remove,
 	.config_intr		= at803x_config_intr,
 	.handle_interrupt	= at803x_handle_interrupt,
 	.get_tunable		= at803x_get_tunable,
drivers/net/phy/smsc.c
@@ -110,7 +110,7 @@ static int smsc_phy_config_init(struct phy_device *phydev)
 	struct smsc_phy_priv *priv = phydev->priv;
 	int rc;
 
-	if (!priv->energy_enable)
+	if (!priv->energy_enable || phydev->irq != PHY_POLL)
 		return 0;
 
 	rc = phy_read(phydev, MII_LAN83C185_CTRL_STATUS);
@@ -210,6 +210,8 @@ static int lan95xx_config_aneg_ext(struct phy_device *phydev)
 * response on link pulses to detect presence of plugged Ethernet cable.
 * The Energy Detect Power-Down mode is enabled again in the end of procedure to
 * save approximately 220 mW of power if cable is unplugged.
+* The workaround is only applicable to poll mode. Energy Detect Power-Down may
+* not be used in interrupt mode lest link change detection becomes unreliable.
 */
 static int lan87xx_read_status(struct phy_device *phydev)
 {
@@ -217,7 +219,7 @@ static int lan87xx_read_status(struct phy_device *phydev)
 
 	int err = genphy_read_status(phydev);
 
-	if (!phydev->link && priv->energy_enable) {
+	if (!phydev->link && priv->energy_enable && phydev->irq == PHY_POLL) {
 		/* Disable EDPD to wake up PHY */
 		int rc = phy_read(phydev, MII_LAN83C185_CTRL_STATUS);
 		if (rc < 0)
|
@ -312,6 +312,7 @@ static bool veth_skb_is_eligible_for_gro(const struct net_device *dev,
|
|||||||
static netdev_tx_t veth_xmit(struct sk_buff *skb, struct net_device *dev)
|
static netdev_tx_t veth_xmit(struct sk_buff *skb, struct net_device *dev)
|
||||||
{
|
{
|
||||||
struct veth_priv *rcv_priv, *priv = netdev_priv(dev);
|
struct veth_priv *rcv_priv, *priv = netdev_priv(dev);
|
||||||
|
struct netdev_queue *queue = NULL;
|
||||||
struct veth_rq *rq = NULL;
|
struct veth_rq *rq = NULL;
|
||||||
struct net_device *rcv;
|
struct net_device *rcv;
|
||||||
int length = skb->len;
|
int length = skb->len;
|
||||||
@ -329,6 +330,7 @@ static netdev_tx_t veth_xmit(struct sk_buff *skb, struct net_device *dev)
|
|||||||
rxq = skb_get_queue_mapping(skb);
|
rxq = skb_get_queue_mapping(skb);
|
||||||
if (rxq < rcv->real_num_rx_queues) {
|
if (rxq < rcv->real_num_rx_queues) {
|
||||||
rq = &rcv_priv->rq[rxq];
|
rq = &rcv_priv->rq[rxq];
|
||||||
|
queue = netdev_get_tx_queue(dev, rxq);
|
||||||
|
|
||||||
/* The napi pointer is available when an XDP program is
|
/* The napi pointer is available when an XDP program is
|
||||||
* attached or when GRO is enabled
|
* attached or when GRO is enabled
|
||||||
@ -340,6 +342,8 @@ static netdev_tx_t veth_xmit(struct sk_buff *skb, struct net_device *dev)
|
|||||||
|
|
||||||
skb_tx_timestamp(skb);
|
skb_tx_timestamp(skb);
|
||||||
if (likely(veth_forward_skb(rcv, skb, rq, use_napi) == NET_RX_SUCCESS)) {
|
if (likely(veth_forward_skb(rcv, skb, rq, use_napi) == NET_RX_SUCCESS)) {
|
||||||
|
if (queue)
|
||||||
|
txq_trans_cond_update(queue);
|
||||||
if (!use_napi)
|
if (!use_napi)
|
||||||
dev_lstats_add(dev, length);
|
dev_lstats_add(dev, length);
|
||||||
} else {
|
} else {
|
||||||
|
drivers/net/virtio_net.c
@@ -2768,7 +2768,6 @@ static const struct ethtool_ops virtnet_ethtool_ops = {
 static void virtnet_freeze_down(struct virtio_device *vdev)
 {
 	struct virtnet_info *vi = vdev->priv;
-	int i;
 
 	/* Make sure no work handler is accessing the device */
 	flush_work(&vi->config_work);
@@ -2776,14 +2775,8 @@ static void virtnet_freeze_down(struct virtio_device *vdev)
 	netif_tx_lock_bh(vi->dev);
 	netif_device_detach(vi->dev);
 	netif_tx_unlock_bh(vi->dev);
-	cancel_delayed_work_sync(&vi->refill);
-
-	if (netif_running(vi->dev)) {
-		for (i = 0; i < vi->max_queue_pairs; i++) {
-			napi_disable(&vi->rq[i].napi);
-			virtnet_napi_tx_disable(&vi->sq[i].napi);
-		}
-	}
+	if (netif_running(vi->dev))
+		virtnet_close(vi->dev);
 }
 
 static int init_vqs(struct virtnet_info *vi);
@@ -2791,7 +2784,7 @@ static int init_vqs(struct virtnet_info *vi);
 static int virtnet_restore_up(struct virtio_device *vdev)
 {
 	struct virtnet_info *vi = vdev->priv;
-	int err, i;
+	int err;
 
 	err = init_vqs(vi);
 	if (err)
@@ -2800,15 +2793,9 @@ static int virtnet_restore_up(struct virtio_device *vdev)
 	virtio_device_ready(vdev);
 
 	if (netif_running(vi->dev)) {
-		for (i = 0; i < vi->curr_queue_pairs; i++)
-			if (!try_fill_recv(vi, &vi->rq[i], GFP_KERNEL))
-				schedule_delayed_work(&vi->refill, 0);
-
-		for (i = 0; i < vi->max_queue_pairs; i++) {
-			virtnet_napi_enable(vi->rq[i].vq, &vi->rq[i].napi);
-			virtnet_napi_tx_enable(vi, vi->sq[i].vq,
-					       &vi->sq[i].napi);
-		}
+		err = virtnet_open(vi->dev);
+		if (err)
+			return err;
 	}
 
 	netif_tx_lock_bh(vi->dev);
include/net/inet_sock.h
@@ -253,6 +253,11 @@ struct inet_sock {
 #define IP_CMSG_CHECKSUM	BIT(7)
 #define IP_CMSG_RECVFRAGSIZE	BIT(8)
 
+static inline bool sk_is_inet(struct sock *sk)
+{
+	return sk->sk_family == AF_INET || sk->sk_family == AF_INET6;
+}
+
 /**
  * sk_to_full_sk - Access to a full socket
  * @sk: pointer to a socket
kernel/bpf/btf.c
@@ -4815,6 +4815,7 @@ static int btf_check_type_tags(struct btf_verifier_env *env,
 	n = btf_nr_types(btf);
 	for (i = start_id; i < n; i++) {
 		const struct btf_type *t;
+		int chain_limit = 32;
 		u32 cur_id = i;
 
 		t = btf_type_by_id(btf, i);
@@ -4827,6 +4828,10 @@ static int btf_check_type_tags(struct btf_verifier_env *env,
 
 		in_tags = btf_type_is_type_tag(t);
 		while (btf_type_is_modifier(t)) {
+			if (!chain_limit--) {
+				btf_verifier_log(env, "Max chain length or cycle detected");
+				return -ELOOP;
+			}
 			if (btf_type_is_type_tag(t)) {
 				if (!in_tags) {
 					btf_verifier_log(env, "Type tags don't precede modifiers");
kernel/trace/bpf_trace.c
@@ -2423,7 +2423,7 @@ kprobe_multi_link_handler(struct fprobe *fp, unsigned long entry_ip,
 	kprobe_multi_link_prog_run(link, entry_ip, regs);
 }
 
-static int symbols_cmp(const void *a, const void *b)
+static int symbols_cmp_r(const void *a, const void *b, const void *priv)
 {
 	const char **str_a = (const char **) a;
 	const char **str_b = (const char **) b;
@@ -2431,6 +2431,28 @@ static int symbols_cmp(const void *a, const void *b)
 	return strcmp(*str_a, *str_b);
 }
 
+struct multi_symbols_sort {
+	const char **funcs;
+	u64 *cookies;
+};
+
+static void symbols_swap_r(void *a, void *b, int size, const void *priv)
+{
+	const struct multi_symbols_sort *data = priv;
+	const char **name_a = a, **name_b = b;
+
+	swap(*name_a, *name_b);
+
+	/* If defined, swap also related cookies. */
+	if (data->cookies) {
+		u64 *cookie_a, *cookie_b;
+
+		cookie_a = data->cookies + (name_a - data->funcs);
+		cookie_b = data->cookies + (name_b - data->funcs);
+		swap(*cookie_a, *cookie_b);
+	}
+}
+
 int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *prog)
 {
 	struct bpf_kprobe_multi_link *link = NULL;
@@ -2468,25 +2490,6 @@ int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *prog)
 	if (!addrs)
 		return -ENOMEM;
 
-	if (uaddrs) {
-		if (copy_from_user(addrs, uaddrs, size)) {
-			err = -EFAULT;
-			goto error;
-		}
-	} else {
-		struct user_syms us;
-
-		err = copy_user_syms(&us, usyms, cnt);
-		if (err)
-			goto error;
-
-		sort(us.syms, cnt, sizeof(*us.syms), symbols_cmp, NULL);
-		err = ftrace_lookup_symbols(us.syms, cnt, addrs);
-		free_user_syms(&us);
-		if (err)
-			goto error;
-	}
-
 	ucookies = u64_to_user_ptr(attr->link_create.kprobe_multi.cookies);
 	if (ucookies) {
 		cookies = kvmalloc_array(cnt, sizeof(*addrs), GFP_KERNEL);
@@ -2500,6 +2503,33 @@ int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *prog)
 		}
 	}
 
+	if (uaddrs) {
+		if (copy_from_user(addrs, uaddrs, size)) {
+			err = -EFAULT;
+			goto error;
+		}
+	} else {
+		struct multi_symbols_sort data = {
+			.cookies = cookies,
+		};
+		struct user_syms us;
+
+		err = copy_user_syms(&us, usyms, cnt);
+		if (err)
+			goto error;
+
+		if (cookies)
+			data.funcs = us.syms;
+
+		sort_r(us.syms, cnt, sizeof(*us.syms), symbols_cmp_r,
+		       symbols_swap_r, &data);
+
+		err = ftrace_lookup_symbols(us.syms, cnt, addrs);
+		free_user_syms(&us);
+		if (err)
+			goto error;
+	}
+
 	link = kzalloc(sizeof(*link), GFP_KERNEL);
 	if (!link) {
 		err = -ENOMEM;
kernel/trace/ftrace.c
@@ -8029,15 +8029,23 @@ static int kallsyms_callback(void *data, const char *name,
 			     struct module *mod, unsigned long addr)
 {
 	struct kallsyms_data *args = data;
+	const char **sym;
+	int idx;
 
-	if (!bsearch(&name, args->syms, args->cnt, sizeof(*args->syms), symbols_cmp))
+	sym = bsearch(&name, args->syms, args->cnt, sizeof(*args->syms), symbols_cmp);
+	if (!sym)
+		return 0;
+
+	idx = sym - args->syms;
+	if (args->addrs[idx])
 		return 0;
 
 	addr = ftrace_location(addr);
 	if (!addr)
 		return 0;
 
-	args->addrs[args->found++] = addr;
+	args->addrs[idx] = addr;
+	args->found++;
 	return args->found == args->cnt ? 1 : 0;
 }
 
@@ -8062,6 +8070,7 @@ int ftrace_lookup_symbols(const char **sorted_syms, size_t cnt, unsigned long *addrs)
 	struct kallsyms_data args;
 	int err;
 
+	memset(addrs, 0, sizeof(*addrs) * cnt);
 	args.addrs = addrs;
 	args.syms = sorted_syms;
 	args.cnt = cnt;
kernel/trace/rethook.c
@@ -154,6 +154,15 @@ struct rethook_node *rethook_try_get(struct rethook *rh)
 	if (unlikely(!handler))
 		return NULL;
 
+	/*
+	 * This expects the caller will set up a rethook on a function entry.
+	 * When the function returns, the rethook will eventually be reclaimed
+	 * or released in the rethook_recycle() with call_rcu().
+	 * This means the caller must be run in the RCU-availabe context.
+	 */
+	if (unlikely(!rcu_is_watching()))
+		return NULL;
+
 	fn = freelist_try_get(&rh->pool);
 	if (!fn)
 		return NULL;
net/core/dev.c
@@ -397,15 +397,17 @@ static void list_netdevice(struct net_device *dev)
 /* Device list removal
  * caller must respect a RCU grace period before freeing/reusing dev
  */
-static void unlist_netdevice(struct net_device *dev)
+static void unlist_netdevice(struct net_device *dev, bool lock)
 {
 	ASSERT_RTNL();
 
 	/* Unlink dev from the device chain */
-	write_lock(&dev_base_lock);
+	if (lock)
+		write_lock(&dev_base_lock);
 	list_del_rcu(&dev->dev_list);
 	netdev_name_node_del(dev->name_node);
 	hlist_del_rcu(&dev->index_hlist);
-	write_unlock(&dev_base_lock);
+	if (lock)
+		write_unlock(&dev_base_lock);
 
 	dev_base_seq_inc(dev_net(dev));
@@ -10043,11 +10045,11 @@ int register_netdevice(struct net_device *dev)
 		goto err_uninit;
 
 	ret = netdev_register_kobject(dev);
-	if (ret) {
-		dev->reg_state = NETREG_UNREGISTERED;
+	write_lock(&dev_base_lock);
+	dev->reg_state = ret ? NETREG_UNREGISTERED : NETREG_REGISTERED;
+	write_unlock(&dev_base_lock);
+	if (ret)
 		goto err_uninit;
-	}
-	dev->reg_state = NETREG_REGISTERED;
 
 	__netdev_update_features(dev);
@@ -10329,7 +10331,9 @@ void netdev_run_todo(void)
 			continue;
 		}
 
+		write_lock(&dev_base_lock);
 		dev->reg_state = NETREG_UNREGISTERED;
+		write_unlock(&dev_base_lock);
 		linkwatch_forget_dev(dev);
 	}
 
@@ -10810,9 +10814,10 @@ void unregister_netdevice_many(struct list_head *head)
 
 	list_for_each_entry(dev, head, unreg_list) {
 		/* And unlink it from device chain. */
-		unlist_netdevice(dev);
+		write_lock(&dev_base_lock);
+		unlist_netdevice(dev, false);
 		dev->reg_state = NETREG_UNREGISTERING;
+		write_unlock(&dev_base_lock);
 	}
 	flush_all_backlogs();
 
@@ -10959,7 +10964,7 @@ int __dev_change_net_namespace(struct net_device *dev, struct net *net,
 	dev_close(dev);
 
 	/* And unlink it from device chain */
-	unlist_netdevice(dev);
+	unlist_netdevice(dev, true);
 
 	synchronize_net();
 
|
@ -6516,11 +6516,22 @@ __bpf_sk_lookup(struct sk_buff *skb, struct bpf_sock_tuple *tuple, u32 len,
|
|||||||
ifindex, proto, netns_id, flags);
|
ifindex, proto, netns_id, flags);
|
||||||
|
|
||||||
if (sk) {
|
if (sk) {
|
||||||
sk = sk_to_full_sk(sk);
|
struct sock *sk2 = sk_to_full_sk(sk);
|
||||||
if (!sk_fullsock(sk)) {
|
|
||||||
|
/* sk_to_full_sk() may return (sk)->rsk_listener, so make sure the original sk
|
||||||
|
* sock refcnt is decremented to prevent a request_sock leak.
|
||||||
|
*/
|
||||||
|
if (!sk_fullsock(sk2))
|
||||||
|
sk2 = NULL;
|
||||||
|
if (sk2 != sk) {
|
||||||
sock_gen_put(sk);
|
sock_gen_put(sk);
|
||||||
|
/* Ensure there is no need to bump sk2 refcnt */
|
||||||
|
if (unlikely(sk2 && !sock_flag(sk2, SOCK_RCU_FREE))) {
|
||||||
|
WARN_ONCE(1, "Found non-RCU, unreferenced socket!");
|
||||||
return NULL;
|
return NULL;
|
||||||
}
|
}
|
||||||
|
sk = sk2;
|
||||||
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
return sk;
|
return sk;
|
||||||
@ -6553,11 +6564,22 @@ bpf_sk_lookup(struct sk_buff *skb, struct bpf_sock_tuple *tuple, u32 len,
|
|||||||
flags);
|
flags);
|
||||||
|
|
||||||
if (sk) {
|
if (sk) {
|
||||||
sk = sk_to_full_sk(sk);
|
struct sock *sk2 = sk_to_full_sk(sk);
|
||||||
if (!sk_fullsock(sk)) {
|
|
||||||
|
/* sk_to_full_sk() may return (sk)->rsk_listener, so make sure the original sk
|
||||||
|
* sock refcnt is decremented to prevent a request_sock leak.
|
||||||
|
*/
|
||||||
|
if (!sk_fullsock(sk2))
|
||||||
|
sk2 = NULL;
|
||||||
|
if (sk2 != sk) {
|
||||||
sock_gen_put(sk);
|
sock_gen_put(sk);
|
||||||
|
/* Ensure there is no need to bump sk2 refcnt */
|
||||||
|
if (unlikely(sk2 && !sock_flag(sk2, SOCK_RCU_FREE))) {
|
||||||
|
WARN_ONCE(1, "Found non-RCU, unreferenced socket!");
|
||||||
return NULL;
|
return NULL;
|
||||||
}
|
}
|
||||||
|
sk = sk2;
|
||||||
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
return sk;
|
return sk;
|
||||||
|
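The two net/core/filter.c hunks above share one pattern worth isolating: sk_to_full_sk() may return a different socket (a request_sock's listener) than the one the lookup took a reference on, so the original reference must be dropped exactly once before continuing with the replacement. A small user-space sketch of that hand-off over a toy refcount; all types and names here are illustrative, not kernel APIs:

#include <stdio.h>

struct obj {
	int refs;
	struct obj *parent;	/* e.g. request_sock -> listener */
};

static void put(struct obj *o)
{
	if (--o->refs == 0)
		printf("freed obj %p\n", (void *)o);
}

/* Like sk_to_full_sk(): may return the object itself or its parent,
 * without taking a new reference on the parent.
 */
static struct obj *upgrade(struct obj *o)
{
	return o->parent ? o->parent : o;
}

static struct obj *lookup_full(struct obj *o)
{
	struct obj *full = upgrade(o);

	/* Drop the lookup's reference when we continue with a different
	 * object; keeping it was the leak the fix above closes.
	 */
	if (full != o)
		put(o);
	return full;
}

int main(void)
{
	struct obj listener = { .refs = 1, .parent = NULL };
	struct obj req = { .refs = 1, .parent = &listener };
	struct obj *full = lookup_full(&req);	/* req's ref dropped here */

	printf("full == listener? %s\n", full == &listener ? "yes" : "no");
	return 0;
}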
net/core/net-sysfs.c
@@ -33,6 +33,7 @@ static const char fmt_dec[] = "%d\n";
 static const char fmt_ulong[] = "%lu\n";
 static const char fmt_u64[] = "%llu\n";
 
+/* Caller holds RTNL or dev_base_lock */
 static inline int dev_isalive(const struct net_device *dev)
 {
 	return dev->reg_state <= NETREG_REGISTERED;
net/core/skmsg.c
@@ -699,6 +699,11 @@ struct sk_psock *sk_psock_init(struct sock *sk, int node)
 
 	write_lock_bh(&sk->sk_callback_lock);
 
+	if (sk_is_inet(sk) && inet_csk_has_ulp(sk)) {
+		psock = ERR_PTR(-EINVAL);
+		goto out;
+	}
+
 	if (sk->sk_user_data) {
 		psock = ERR_PTR(-EBUSY);
 		goto out;
net/ethtool/eeprom.c
@@ -36,7 +36,7 @@ static int fallback_set_params(struct eeprom_req_info *request,
 	if (request->page)
 		offset = request->page * ETH_MODULE_EEPROM_PAGE_LEN + offset;
 
-	if (modinfo->type == ETH_MODULE_SFF_8079 &&
+	if (modinfo->type == ETH_MODULE_SFF_8472 &&
 	    request->i2c_address == 0x51)
 		offset += ETH_MODULE_EEPROM_PAGE_LEN * 2;
 
net/ipv4/ip_gre.c
@@ -524,7 +524,6 @@ static void erspan_fb_xmit(struct sk_buff *skb, struct net_device *dev)
 	int tunnel_hlen;
 	int version;
 	int nhoff;
-	int thoff;
 
 	tun_info = skb_tunnel_info(skb);
 	if (unlikely(!tun_info || !(tun_info->mode & IP_TUNNEL_INFO_TX) ||
@@ -558,10 +557,16 @@ static void erspan_fb_xmit(struct sk_buff *skb, struct net_device *dev)
 	    (ntohs(ip_hdr(skb)->tot_len) > skb->len - nhoff))
 		truncate = true;
 
-	if (skb_transport_header_was_set(skb))
-		thoff = skb_transport_header(skb) - skb_mac_header(skb);
-	if (skb->protocol == htons(ETH_P_IPV6) &&
-	    (ntohs(ipv6_hdr(skb)->payload_len) > skb->len - thoff))
-		truncate = true;
+	if (skb->protocol == htons(ETH_P_IPV6)) {
+		int thoff;
+
+		if (skb_transport_header_was_set(skb))
+			thoff = skb_transport_header(skb) - skb_mac_header(skb);
+		else
+			thoff = nhoff + sizeof(struct ipv6hdr);
+		if (ntohs(ipv6_hdr(skb)->payload_len) > skb->len - thoff)
+			truncate = true;
+	}
 
 	if (version == 1) {
 		erspan_build_header(skb, ntohl(tunnel_id_to_key32(key->tun_id)),
net/ipv4/ping.c
@@ -319,12 +319,16 @@ static int ping_check_bind_addr(struct sock *sk, struct inet_sock *isk,
 		pr_debug("ping_check_bind_addr(sk=%p,addr=%pI4,port=%d)\n",
 			 sk, &addr->sin_addr.s_addr, ntohs(addr->sin_port));
 
+		if (addr->sin_addr.s_addr == htonl(INADDR_ANY))
+			return 0;
+
 		tb_id = l3mdev_fib_table_by_index(net, sk->sk_bound_dev_if) ? : tb_id;
 		chk_addr_ret = inet_addr_type_table(net, addr->sin_addr.s_addr, tb_id);
 
-		if (!inet_addr_valid_or_nonlocal(net, inet_sk(sk),
-						 addr->sin_addr.s_addr,
-						 chk_addr_ret))
+		if (chk_addr_ret == RTN_MULTICAST ||
+		    chk_addr_ret == RTN_BROADCAST ||
+		    (chk_addr_ret != RTN_LOCAL &&
+		     !inet_can_nonlocal_bind(net, isk)))
 			return -EADDRNOTAVAIL;
 
 #if IS_ENABLED(CONFIG_IPV6)
net/ipv4/tcp_bpf.c
@@ -611,9 +611,6 @@ int tcp_bpf_update_proto(struct sock *sk, struct sk_psock *psock, bool restore)
 		return 0;
 	}
 
-	if (inet_csk_has_ulp(sk))
-		return -EINVAL;
-
 	if (sk->sk_family == AF_INET6) {
 		if (tcp_bpf_assert_proto_ops(psock->sk_proto))
 			return -EINVAL;
@@ -939,7 +939,6 @@ static netdev_tx_t ip6erspan_tunnel_xmit(struct sk_buff *skb,
 	__be16 proto;
 	__u32 mtu;
 	int nhoff;
-	int thoff;

 	if (!pskb_inet_may_pull(skb))
 		goto tx_err;
@@ -960,10 +959,16 @@ static netdev_tx_t ip6erspan_tunnel_xmit(struct sk_buff *skb,
 	    (ntohs(ip_hdr(skb)->tot_len) > skb->len - nhoff))
 		truncate = true;

-	thoff = skb_transport_header(skb) - skb_mac_header(skb);
-	if (skb->protocol == htons(ETH_P_IPV6) &&
-	    (ntohs(ipv6_hdr(skb)->payload_len) > skb->len - thoff))
-		truncate = true;
+	if (skb->protocol == htons(ETH_P_IPV6)) {
+		int thoff;
+
+		if (skb_transport_header_was_set(skb))
+			thoff = skb_transport_header(skb) - skb_mac_header(skb);
+		else
+			thoff = nhoff + sizeof(struct ipv6hdr);
+		if (ntohs(ipv6_hdr(skb)->payload_len) > skb->len - thoff)
+			truncate = true;
+	}

 	if (skb_cow_head(skb, dev->needed_headroom ?: t->hlen))
 		goto tx_err;
@@ -13,14 +13,31 @@
 #include <net/netfilter/nf_tables_offload.h>
 #include <net/netfilter/nf_dup_netdev.h>

-static void nf_do_netdev_egress(struct sk_buff *skb, struct net_device *dev)
+#define NF_RECURSION_LIMIT	2
+
+static DEFINE_PER_CPU(u8, nf_dup_skb_recursion);
+
+static void nf_do_netdev_egress(struct sk_buff *skb, struct net_device *dev,
+				enum nf_dev_hooks hook)
 {
-	if (skb_mac_header_was_set(skb))
+	if (__this_cpu_read(nf_dup_skb_recursion) > NF_RECURSION_LIMIT)
+		goto err;
+
+	if (hook == NF_NETDEV_INGRESS && skb_mac_header_was_set(skb)) {
+		if (skb_cow_head(skb, skb->mac_len))
+			goto err;
+
 		skb_push(skb, skb->mac_len);
+	}

 	skb->dev = dev;
 	skb_clear_tstamp(skb);
+	__this_cpu_inc(nf_dup_skb_recursion);
 	dev_queue_xmit(skb);
+	__this_cpu_dec(nf_dup_skb_recursion);
+	return;
+err:
+	kfree_skb(skb);
 }

 void nf_fwd_netdev_egress(const struct nft_pktinfo *pkt, int oif)
@@ -33,7 +50,7 @@ void nf_fwd_netdev_egress(const struct nft_pktinfo *pkt, int oif)
 		return;
 	}

-	nf_do_netdev_egress(pkt->skb, dev);
+	nf_do_netdev_egress(pkt->skb, dev, nft_hook(pkt));
 }
 EXPORT_SYMBOL_GPL(nf_fwd_netdev_egress);

@@ -48,7 +65,7 @@ void nf_dup_netdev_egress(const struct nft_pktinfo *pkt, int oif)

 	skb = skb_clone(pkt->skb, GFP_ATOMIC);
 	if (skb)
-		nf_do_netdev_egress(skb, dev);
+		nf_do_netdev_egress(skb, dev, nft_hook(pkt));
 }
 EXPORT_SYMBOL_GPL(nf_dup_netdev_egress);

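The per-CPU counter bounds how many times a duplicated skb may re-enter the egress path before it is dropped, and pushing the mac header is now restricted to the ingress hook where one is actually present. A user-space analogy of the recursion guard (a thread-local counter in place of the per-CPU one; transmit() and dup_egress() are invented stand-ins for dev_queue_xmit() re-entering the hook):

#include <stdio.h>

#define RECURSION_LIMIT 2

static _Thread_local unsigned int depth;

static void transmit(int pkt);

static void dup_egress(int pkt)
{
	if (depth > RECURSION_LIMIT) {
		printf("drop pkt %d (loop)\n", pkt);	/* kfree_skb() analogue */
		return;
	}
	depth++;
	transmit(pkt);
	depth--;
}

static void transmit(int pkt)
{
	printf("xmit pkt %d at depth %u\n", pkt, depth);
	/* an nft "dup" rule on the egress path loops straight back: */
	dup_egress(pkt);
}

int main(void)
{
	dup_egress(1);	/* transmits three times, then drops the loop */
	return 0;
}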
@@ -614,7 +614,7 @@ static void __net_exit cttimeout_net_exit(struct net *net)

 	nf_ct_untimeout(net, NULL);

-	list_for_each_entry_safe(cur, tmp, &pernet->nfct_timeout_freelist, head) {
+	list_for_each_entry_safe(cur, tmp, &pernet->nfct_timeout_freelist, free_head) {
 		list_del(&cur->free_head);

 		if (refcount_dec_and_test(&cur->refcnt))
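The one-word change matters because list_for_each_entry_safe() computes entry pointers with container_of() against the named member; naming head while the nodes are linked through free_head biases every dereference by the distance between the two members, hence the slab-out-of-bounds read. A tiny offset demo (the struct is invented; only the idiom is real):

#include <stddef.h>
#include <stdio.h>

struct node {
	int refcnt;
	struct { void *next, *prev; } head;      /* linked on one list */
	struct { void *next, *prev; } free_head; /* linked on another  */
};

int main(void)
{
	printf("offsetof(head)      = %zu\n", offsetof(struct node, head));
	printf("offsetof(free_head) = %zu\n", offsetof(struct node, free_head));
	/* iterating the freelist via `head` would shift every computed
	 * entry pointer by the difference of these two offsets
	 */
	return 0;
}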
@@ -14,6 +14,7 @@
 #include <linux/in.h>
 #include <linux/ip.h>
 #include <linux/ipv6.h>
+#include <linux/random.h>
 #include <linux/smp.h>
 #include <linux/static_key.h>
 #include <net/dst.h>
@@ -32,8 +33,6 @@
 #define NFT_META_SECS_PER_DAY		86400
 #define NFT_META_DAYS_PER_WEEK		7

-static DEFINE_PER_CPU(struct rnd_state, nft_prandom_state);
-
 static u8 nft_meta_weekday(void)
 {
 	time64_t secs = ktime_get_real_seconds();
@@ -271,13 +270,6 @@ static bool nft_meta_get_eval_ifname(enum nft_meta_keys key, u32 *dest,
 	return true;
 }

-static noinline u32 nft_prandom_u32(void)
-{
-	struct rnd_state *state = this_cpu_ptr(&nft_prandom_state);
-
-	return prandom_u32_state(state);
-}
-
 #ifdef CONFIG_IP_ROUTE_CLASSID
 static noinline bool
 nft_meta_get_eval_rtclassid(const struct sk_buff *skb, u32 *dest)
@@ -389,7 +381,7 @@ void nft_meta_get_eval(const struct nft_expr *expr,
 		break;
 #endif
 	case NFT_META_PRANDOM:
-		*dest = nft_prandom_u32();
+		*dest = get_random_u32();
 		break;
 #ifdef CONFIG_XFRM
 	case NFT_META_SECPATH:
@@ -518,7 +510,6 @@ int nft_meta_get_init(const struct nft_ctx *ctx,
 		len = IFNAMSIZ;
 		break;
 	case NFT_META_PRANDOM:
-		prandom_init_once(&nft_prandom_state);
 		len = sizeof(u32);
 		break;
 #ifdef CONFIG_XFRM
@@ -9,12 +9,11 @@
 #include <linux/netlink.h>
 #include <linux/netfilter.h>
 #include <linux/netfilter/nf_tables.h>
+#include <linux/random.h>
 #include <linux/static_key.h>
 #include <net/netfilter/nf_tables.h>
 #include <net/netfilter/nf_tables_core.h>

-static DEFINE_PER_CPU(struct rnd_state, nft_numgen_prandom_state);
-
 struct nft_ng_inc {
 	u8 dreg;
 	u32 modulus;
@@ -135,12 +134,9 @@ struct nft_ng_random {
 	u32 offset;
 };

-static u32 nft_ng_random_gen(struct nft_ng_random *priv)
+static u32 nft_ng_random_gen(const struct nft_ng_random *priv)
 {
-	struct rnd_state *state = this_cpu_ptr(&nft_numgen_prandom_state);
-
-	return reciprocal_scale(prandom_u32_state(state), priv->modulus) +
-	       priv->offset;
+	return reciprocal_scale(get_random_u32(), priv->modulus) + priv->offset;
 }

 static void nft_ng_random_eval(const struct nft_expr *expr,
@@ -168,8 +164,6 @@ static int nft_ng_random_init(const struct nft_ctx *ctx,
 	if (priv->offset + priv->modulus - 1 < priv->offset)
 		return -EOVERFLOW;

-	prandom_init_once(&nft_numgen_prandom_state);
-
 	return nft_parse_register_store(ctx, tb[NFTA_NG_DREG], &priv->dreg,
 					NULL, NFT_DATA_VALUE, sizeof(u32));
 }
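get_random_u32() returns a full-range 32-bit value, and reciprocal_scale() folds it into [0, modulus) with one multiply instead of a division: the 32x32 product is taken as 64 bits and the high half is kept. A user-space copy of that arithmetic (the helper mirrors the kernel's reciprocal_scale(); the fixed val stands in for a random draw so the result is visible):

#include <stdint.h>
#include <stdio.h>

static uint32_t reciprocal_scale(uint32_t val, uint32_t mod)
{
	return (uint32_t)(((uint64_t)val * mod) >> 32);
}

int main(void)
{
	uint32_t val = 0x80000000u;	/* pretend random draw = 2^31 */

	/* halfway through the 32-bit range lands halfway into the modulus */
	printf("%u\n", reciprocal_scale(val, 10));	/* prints 5 */
	return 0;
}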
@@ -407,7 +407,7 @@ static int parse_ipv6hdr(struct sk_buff *skb, struct sw_flow_key *key)
 	if (flags & IP6_FH_F_FRAG) {
 		if (frag_off) {
 			key->ip.frag = OVS_FRAG_TYPE_LATER;
-			key->ip.proto = nexthdr;
+			key->ip.proto = NEXTHDR_FRAGMENT;
 			return 0;
 		}
 		key->ip.frag = OVS_FRAG_TYPE_FIRST;
@@ -1146,9 +1146,9 @@ static int netem_dump(struct Qdisc *sch, struct sk_buff *skb)
 	struct tc_netem_rate rate;
 	struct tc_netem_slot slot;

-	qopt.latency = min_t(psched_tdiff_t, PSCHED_NS2TICKS(q->latency),
+	qopt.latency = min_t(psched_time_t, PSCHED_NS2TICKS(q->latency),
 			     UINT_MAX);
-	qopt.jitter = min_t(psched_tdiff_t, PSCHED_NS2TICKS(q->jitter),
+	qopt.jitter = min_t(psched_time_t, PSCHED_NS2TICKS(q->jitter),
 			    UINT_MAX);
 	qopt.limit = q->limit;
 	qopt.loss = q->loss;
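min_t() compares after casting both operands to the named type, so the type argument decides whether a large tick count survives the clamp or wraps negative first. A hedged demo of that difference (the macro copies min_t's shape; the typedefs only mimic the signedness of psched_tdiff_t and psched_time_t, and the exact kernel failure mode is not reproduced here):

#include <stdint.h>
#include <stdio.h>

#define min_t(type, a, b) ((type)(a) < (type)(b) ? (type)(a) : (type)(b))

typedef int64_t  sdiff_t;	/* signed, like psched_tdiff_t   */
typedef uint64_t stime_t;	/* unsigned, like psched_time_t  */

int main(void)
{
	uint64_t ticks = 1ULL << 63;	/* wraps negative as signed 64-bit */

	/* the signed cast turns a huge value into a huge negative one,
	 * which then "wins" the comparison against UINT32_MAX
	 */
	printf("signed clamp:   %lld\n",
	       (long long)min_t(sdiff_t, ticks, UINT32_MAX));
	printf("unsigned clamp: %llu\n",
	       (unsigned long long)min_t(stime_t, ticks, UINT32_MAX));
	return 0;
}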
@@ -109,10 +109,9 @@ static void __net_exit tipc_exit_net(struct net *net)
 	struct tipc_net *tn = tipc_net(net);

 	tipc_detach_loopback(net);
+	tipc_net_stop(net);
 	/* Make sure the tipc_net_finalize_work() finished */
 	cancel_work_sync(&tn->work);
-	tipc_net_stop(net);
-
 	tipc_bcast_stop(net);
 	tipc_nametbl_stop(net);
 	tipc_sk_rht_destroy(net);
@@ -921,6 +921,8 @@ static void tls_update(struct sock *sk, struct proto *p,
 {
 	struct tls_context *ctx;

+	WARN_ON_ONCE(sk->sk_prot == p);
+
 	ctx = tls_get_ctx(sk);
 	if (likely(ctx)) {
 		ctx->sk_write_space = write_space;
@@ -538,12 +538,6 @@ static int xsk_generic_xmit(struct sock *sk)
 			goto out;
 		}

-		skb = xsk_build_skb(xs, &desc);
-		if (IS_ERR(skb)) {
-			err = PTR_ERR(skb);
-			goto out;
-		}
-
 		/* This is the backpressure mechanism for the Tx path.
 		 * Reserve space in the completion queue and only proceed
 		 * if there is space in it. This avoids having to implement
@@ -552,11 +546,19 @@ static int xsk_generic_xmit(struct sock *sk)
 		spin_lock_irqsave(&xs->pool->cq_lock, flags);
 		if (xskq_prod_reserve(xs->pool->cq)) {
 			spin_unlock_irqrestore(&xs->pool->cq_lock, flags);
-			kfree_skb(skb);
 			goto out;
 		}
 		spin_unlock_irqrestore(&xs->pool->cq_lock, flags);

+		skb = xsk_build_skb(xs, &desc);
+		if (IS_ERR(skb)) {
+			err = PTR_ERR(skb);
+			spin_lock_irqsave(&xs->pool->cq_lock, flags);
+			xskq_prod_cancel(xs->pool->cq);
+			spin_unlock_irqrestore(&xs->pool->cq_lock, flags);
+			goto out;
+		}
+
 		err = __dev_direct_xmit(skb, xs->queue_id);
 		if (err == NETDEV_TX_BUSY) {
 			/* Tell user-space to retry the send */
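The reordering matters: reserving the completion-queue slot before building the skb means a failed build can simply cancel the reservation, rather than freeing an skb while the queue accounting is left inconsistent. A toy model of the sequence (the counter queue and all names are invented; cq_cancel() plays the role of xskq_prod_cancel()):

#include <stdbool.h>
#include <stdio.h>

struct cq { int reserved, capacity; };

static bool cq_reserve(struct cq *q)
{
	if (q->reserved == q->capacity)
		return false;
	q->reserved++;
	return true;
}

static void cq_cancel(struct cq *q)
{
	q->reserved--;
}

static bool build_skb(bool fail)
{
	return !fail;
}

static void xmit_one(struct cq *q, bool build_fails)
{
	if (!cq_reserve(q))
		return;			/* backpressure: no slot, no skb */
	if (!build_skb(build_fails)) {
		cq_cancel(q);		/* roll back, nothing leaked */
		return;
	}
	printf("sent, %d slot(s) reserved\n", q->reserved);
}

int main(void)
{
	struct cq q = { 0, 1 };

	xmit_one(&q, true);	/* build fails: reservation rolled back */
	xmit_one(&q, false);	/* succeeds using the freed slot */
	return 0;
}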
@@ -21,6 +21,7 @@
 #define BACKTRACE_DEPTH 16
 #define MAX_SYMBOL_LEN 4096
 struct fprobe sample_probe;
+static unsigned long nhit;

 static char symbol[MAX_SYMBOL_LEN] = "kernel_clone";
 module_param_string(symbol, symbol, sizeof(symbol), 0644);
@@ -28,6 +29,8 @@ static char nosymbol[MAX_SYMBOL_LEN] = "";
 module_param_string(nosymbol, nosymbol, sizeof(nosymbol), 0644);
 static bool stackdump = true;
 module_param(stackdump, bool, 0644);
+static bool use_trace = false;
+module_param(use_trace, bool, 0644);

 static void show_backtrace(void)
 {
@@ -40,7 +43,15 @@ static void show_backtrace(void)

 static void sample_entry_handler(struct fprobe *fp, unsigned long ip, struct pt_regs *regs)
 {
-	pr_info("Enter <%pS> ip = 0x%p\n", (void *)ip, (void *)ip);
+	if (use_trace)
+		/*
+		 * This is just an example, no kernel code should call
+		 * trace_printk() except when actively debugging.
+		 */
+		trace_printk("Enter <%pS> ip = 0x%p\n", (void *)ip, (void *)ip);
+	else
+		pr_info("Enter <%pS> ip = 0x%p\n", (void *)ip, (void *)ip);
+	nhit++;
 	if (stackdump)
 		show_backtrace();
 }
@@ -49,8 +60,17 @@ static void sample_exit_handler(struct fprobe *fp, unsigned long ip, struct pt_r
 {
 	unsigned long rip = instruction_pointer(regs);

-	pr_info("Return from <%pS> ip = 0x%p to rip = 0x%p (%pS)\n",
-		(void *)ip, (void *)ip, (void *)rip, (void *)rip);
+	if (use_trace)
+		/*
+		 * This is just an example, no kernel code should call
+		 * trace_printk() except when actively debugging.
+		 */
+		trace_printk("Return from <%pS> ip = 0x%p to rip = 0x%p (%pS)\n",
+			(void *)ip, (void *)ip, (void *)rip, (void *)rip);
+	else
+		pr_info("Return from <%pS> ip = 0x%p to rip = 0x%p (%pS)\n",
+			(void *)ip, (void *)ip, (void *)rip, (void *)rip);
+	nhit++;
 	if (stackdump)
 		show_backtrace();
 }
@@ -112,7 +132,8 @@ static void __exit fprobe_exit(void)
 {
 	unregister_fprobe(&sample_probe);

-	pr_info("fprobe at %s unregistered\n", symbol);
+	pr_info("fprobe at %s unregistered. %ld times hit, %ld times missed\n",
+		symbol, nhit, sample_probe.nmissed);
 }

 module_init(fprobe_init)
@@ -121,24 +121,24 @@ static void kprobe_multi_link_api_subtest(void)
 	})

 	GET_ADDR("bpf_fentry_test1", addrs[0]);
-	GET_ADDR("bpf_fentry_test2", addrs[1]);
-	GET_ADDR("bpf_fentry_test3", addrs[2]);
-	GET_ADDR("bpf_fentry_test4", addrs[3]);
-	GET_ADDR("bpf_fentry_test5", addrs[4]);
-	GET_ADDR("bpf_fentry_test6", addrs[5]);
-	GET_ADDR("bpf_fentry_test7", addrs[6]);
+	GET_ADDR("bpf_fentry_test3", addrs[1]);
+	GET_ADDR("bpf_fentry_test4", addrs[2]);
+	GET_ADDR("bpf_fentry_test5", addrs[3]);
+	GET_ADDR("bpf_fentry_test6", addrs[4]);
+	GET_ADDR("bpf_fentry_test7", addrs[5]);
+	GET_ADDR("bpf_fentry_test2", addrs[6]);
 	GET_ADDR("bpf_fentry_test8", addrs[7]);

 #undef GET_ADDR

-	cookies[0] = 1;
-	cookies[1] = 2;
-	cookies[2] = 3;
-	cookies[3] = 4;
-	cookies[4] = 5;
-	cookies[5] = 6;
-	cookies[6] = 7;
-	cookies[7] = 8;
+	cookies[0] = 1; /* bpf_fentry_test1 */
+	cookies[1] = 2; /* bpf_fentry_test3 */
+	cookies[2] = 3; /* bpf_fentry_test4 */
+	cookies[3] = 4; /* bpf_fentry_test5 */
+	cookies[4] = 5; /* bpf_fentry_test6 */
+	cookies[5] = 6; /* bpf_fentry_test7 */
+	cookies[6] = 7; /* bpf_fentry_test2 */
+	cookies[7] = 8; /* bpf_fentry_test8 */

 	opts.kprobe_multi.addrs = (const unsigned long *) &addrs;
 	opts.kprobe_multi.cnt = ARRAY_SIZE(addrs);
@@ -149,14 +149,14 @@ static void kprobe_multi_link_api_subtest(void)
 	if (!ASSERT_GE(link1_fd, 0, "link1_fd"))
 		goto cleanup;

-	cookies[0] = 8;
-	cookies[1] = 7;
-	cookies[2] = 6;
-	cookies[3] = 5;
-	cookies[4] = 4;
-	cookies[5] = 3;
-	cookies[6] = 2;
-	cookies[7] = 1;
+	cookies[0] = 8; /* bpf_fentry_test1 */
+	cookies[1] = 7; /* bpf_fentry_test3 */
+	cookies[2] = 6; /* bpf_fentry_test4 */
+	cookies[3] = 5; /* bpf_fentry_test5 */
+	cookies[4] = 4; /* bpf_fentry_test6 */
+	cookies[5] = 3; /* bpf_fentry_test7 */
+	cookies[6] = 2; /* bpf_fentry_test2 */
+	cookies[7] = 1; /* bpf_fentry_test8 */

 	opts.kprobe_multi.flags = BPF_F_KPROBE_MULTI_RETURN;
 	prog_fd = bpf_program__fd(skel->progs.test_kretprobe);
@@ -181,12 +181,12 @@ static void kprobe_multi_attach_api_subtest(void)
 	struct kprobe_multi *skel = NULL;
 	const char *syms[8] = {
 		"bpf_fentry_test1",
-		"bpf_fentry_test2",
 		"bpf_fentry_test3",
 		"bpf_fentry_test4",
 		"bpf_fentry_test5",
 		"bpf_fentry_test6",
 		"bpf_fentry_test7",
+		"bpf_fentry_test2",
 		"bpf_fentry_test8",
 	};
 	__u64 cookies[8];
@@ -198,14 +198,14 @@ static void kprobe_multi_attach_api_subtest(void)
 	skel->bss->pid = getpid();
 	skel->bss->test_cookie = true;

-	cookies[0] = 1;
-	cookies[1] = 2;
-	cookies[2] = 3;
-	cookies[3] = 4;
-	cookies[4] = 5;
-	cookies[5] = 6;
-	cookies[6] = 7;
-	cookies[7] = 8;
+	cookies[0] = 1; /* bpf_fentry_test1 */
+	cookies[1] = 2; /* bpf_fentry_test3 */
+	cookies[2] = 3; /* bpf_fentry_test4 */
+	cookies[3] = 4; /* bpf_fentry_test5 */
+	cookies[4] = 5; /* bpf_fentry_test6 */
+	cookies[5] = 6; /* bpf_fentry_test7 */
+	cookies[6] = 7; /* bpf_fentry_test2 */
+	cookies[7] = 8; /* bpf_fentry_test8 */

 	opts.syms = syms;
 	opts.cnt = ARRAY_SIZE(syms);
@@ -216,14 +216,14 @@ static void kprobe_multi_attach_api_subtest(void)
 	if (!ASSERT_OK_PTR(link1, "bpf_program__attach_kprobe_multi_opts"))
 		goto cleanup;

-	cookies[0] = 8;
-	cookies[1] = 7;
-	cookies[2] = 6;
-	cookies[3] = 5;
-	cookies[4] = 4;
-	cookies[5] = 3;
-	cookies[6] = 2;
-	cookies[7] = 1;
+	cookies[0] = 8; /* bpf_fentry_test1 */
+	cookies[1] = 7; /* bpf_fentry_test3 */
+	cookies[2] = 6; /* bpf_fentry_test4 */
+	cookies[3] = 5; /* bpf_fentry_test5 */
+	cookies[4] = 4; /* bpf_fentry_test6 */
+	cookies[5] = 3; /* bpf_fentry_test7 */
+	cookies[6] = 2; /* bpf_fentry_test2 */
+	cookies[7] = 1; /* bpf_fentry_test8 */

 	opts.retprobe = true;

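The shuffle exists because the kernel sorts the attach addresses, and each cookie must be carried along with its symbol through that sort; supplying bpf_fentry_test2 out of order exercises exactly that. A user-space sketch of the pairing that must survive the sort (invented data; qsort over key/value pairs stands in for the kernel's cookie-aware sort):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct pair { const char *sym; unsigned long long cookie; };

static int cmp(const void *a, const void *b)
{
	return strcmp(((const struct pair *)a)->sym,
		      ((const struct pair *)b)->sym);
}

int main(void)
{
	struct pair p[] = {
		{ "bpf_fentry_test1", 1 },
		{ "bpf_fentry_test3", 2 },
		{ "bpf_fentry_test2", 7 },	/* deliberately out of order */
	};

	/* sorting keys and values together keeps cookie 7 on test2 */
	qsort(p, 3, sizeof(p[0]), cmp);
	for (int i = 0; i < 3; i++)
		printf("%s -> %llu\n", p[i].sym, p[i].cookie);
	return 0;
}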
@@ -364,6 +364,9 @@ static int get_syms(char ***symsp, size_t *cntp)
 			continue;
 		if (!strncmp(name, "rcu_", 4))
 			continue;
+		if (!strncmp(name, "__ftrace_invalid_address__",
+			     sizeof("__ftrace_invalid_address__") - 1))
+			continue;
 		err = hashmap__add(map, name, NULL);
 		if (err) {
 			free(name);
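The sizeof("...") - 1 idiom yields the prefix length at compile time (sizeof counts the terminating NUL), so strncmp() performs a pure prefix match against the symbol list the test reads. The same filter, runnable stand-alone:

#include <stdio.h>
#include <string.h>

static int skip_symbol(const char *name)
{
	if (!strncmp(name, "rcu_", 4))
		return 1;
	if (!strncmp(name, "__ftrace_invalid_address__",
		     sizeof("__ftrace_invalid_address__") - 1))
		return 1;
	return 0;
}

int main(void)
{
	printf("%d\n", skip_symbol("__ftrace_invalid_address__123"));	/* 1 */
	printf("%d\n", skip_symbol("tcp_v4_rcv"));			/* 0 */
	return 0;
}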
@@ -831,6 +831,59 @@ out:
 	bpf_object__close(obj);
 }

+#include "tailcall_bpf2bpf6.skel.h"
+
+/* Tail call counting works even when there is data on stack which is
+ * not aligned to 8 bytes.
+ */
+static void test_tailcall_bpf2bpf_6(void)
+{
+	struct tailcall_bpf2bpf6 *obj;
+	int err, map_fd, prog_fd, main_fd, data_fd, i, val;
+	LIBBPF_OPTS(bpf_test_run_opts, topts,
+		.data_in = &pkt_v4,
+		.data_size_in = sizeof(pkt_v4),
+		.repeat = 1,
+	);
+
+	obj = tailcall_bpf2bpf6__open_and_load();
+	if (!ASSERT_OK_PTR(obj, "open and load"))
+		return;
+
+	main_fd = bpf_program__fd(obj->progs.entry);
+	if (!ASSERT_GE(main_fd, 0, "entry prog fd"))
+		goto out;
+
+	map_fd = bpf_map__fd(obj->maps.jmp_table);
+	if (!ASSERT_GE(map_fd, 0, "jmp_table map fd"))
+		goto out;
+
+	prog_fd = bpf_program__fd(obj->progs.classifier_0);
+	if (!ASSERT_GE(prog_fd, 0, "classifier_0 prog fd"))
+		goto out;
+
+	i = 0;
+	err = bpf_map_update_elem(map_fd, &i, &prog_fd, BPF_ANY);
+	if (!ASSERT_OK(err, "jmp_table map update"))
+		goto out;
+
+	err = bpf_prog_test_run_opts(main_fd, &topts);
+	ASSERT_OK(err, "entry prog test run");
+	ASSERT_EQ(topts.retval, 0, "tailcall retval");
+
+	data_fd = bpf_map__fd(obj->maps.bss);
+	if (!ASSERT_GE(map_fd, 0, "bss map fd"))
+		goto out;
+
+	i = 0;
+	err = bpf_map_lookup_elem(data_fd, &i, &val);
+	ASSERT_OK(err, "bss map lookup");
+	ASSERT_EQ(val, 1, "done flag is set");
+
+out:
+	tailcall_bpf2bpf6__destroy(obj);
+}
+
 void test_tailcalls(void)
 {
 	if (test__start_subtest("tailcall_1"))
@@ -855,4 +908,6 @@ void test_tailcalls(void)
 		test_tailcall_bpf2bpf_4(false);
 	if (test__start_subtest("tailcall_bpf2bpf_5"))
 		test_tailcall_bpf2bpf_4(true);
+	if (test__start_subtest("tailcall_bpf2bpf_6"))
+		test_tailcall_bpf2bpf_6();
 }
@@ -54,21 +54,21 @@ static void kprobe_multi_check(void *ctx, bool is_return)

 	if (is_return) {
 		SET(kretprobe_test1_result, &bpf_fentry_test1, 8);
-		SET(kretprobe_test2_result, &bpf_fentry_test2, 7);
-		SET(kretprobe_test3_result, &bpf_fentry_test3, 6);
-		SET(kretprobe_test4_result, &bpf_fentry_test4, 5);
-		SET(kretprobe_test5_result, &bpf_fentry_test5, 4);
-		SET(kretprobe_test6_result, &bpf_fentry_test6, 3);
-		SET(kretprobe_test7_result, &bpf_fentry_test7, 2);
+		SET(kretprobe_test2_result, &bpf_fentry_test2, 2);
+		SET(kretprobe_test3_result, &bpf_fentry_test3, 7);
+		SET(kretprobe_test4_result, &bpf_fentry_test4, 6);
+		SET(kretprobe_test5_result, &bpf_fentry_test5, 5);
+		SET(kretprobe_test6_result, &bpf_fentry_test6, 4);
+		SET(kretprobe_test7_result, &bpf_fentry_test7, 3);
 		SET(kretprobe_test8_result, &bpf_fentry_test8, 1);
 	} else {
 		SET(kprobe_test1_result, &bpf_fentry_test1, 1);
-		SET(kprobe_test2_result, &bpf_fentry_test2, 2);
-		SET(kprobe_test3_result, &bpf_fentry_test3, 3);
-		SET(kprobe_test4_result, &bpf_fentry_test4, 4);
-		SET(kprobe_test5_result, &bpf_fentry_test5, 5);
-		SET(kprobe_test6_result, &bpf_fentry_test6, 6);
-		SET(kprobe_test7_result, &bpf_fentry_test7, 7);
+		SET(kprobe_test2_result, &bpf_fentry_test2, 7);
+		SET(kprobe_test3_result, &bpf_fentry_test3, 2);
+		SET(kprobe_test4_result, &bpf_fentry_test4, 3);
+		SET(kprobe_test5_result, &bpf_fentry_test5, 4);
+		SET(kprobe_test6_result, &bpf_fentry_test6, 5);
+		SET(kprobe_test7_result, &bpf_fentry_test7, 6);
 		SET(kprobe_test8_result, &bpf_fentry_test8, 8);
 	}

tools/testing/selftests/bpf/progs/tailcall_bpf2bpf6.c (new file, 42 lines)
@@ -0,0 +1,42 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+
+#define __unused __attribute__((unused))
+
+struct {
+	__uint(type, BPF_MAP_TYPE_PROG_ARRAY);
+	__uint(max_entries, 1);
+	__uint(key_size, sizeof(__u32));
+	__uint(value_size, sizeof(__u32));
+} jmp_table SEC(".maps");
+
+int done = 0;
+
+SEC("tc")
+int classifier_0(struct __sk_buff *skb __unused)
+{
+	done = 1;
+	return 0;
+}
+
+static __noinline
+int subprog_tail(struct __sk_buff *skb)
+{
+	/* Don't propagate the constant to the caller */
+	volatile int ret = 1;
+
+	bpf_tail_call_static(skb, &jmp_table, 0);
+	return ret;
+}
+
+SEC("tc")
+int entry(struct __sk_buff *skb)
+{
+	/* Have data on stack which size is not a multiple of 8 */
+	volatile char arr[1] = {};
+
+	return subprog_tail(skb);
+}
+
+char __license[] SEC("license") = "GPL";
@@ -70,6 +70,10 @@ NSB_LO_IP6=2001:db8:2::2
 NL_IP=172.17.1.1
 NL_IP6=2001:db8:4::1

+# multicast and broadcast addresses
+MCAST_IP=224.0.0.1
+BCAST_IP=255.255.255.255
+
 MD5_PW=abc123
 MD5_WRONG_PW=abc1234

@@ -308,6 +312,9 @@ addr2str()
 	127.0.0.1) echo "loopback";;
 	::1) echo "IPv6 loopback";;

+	${BCAST_IP}) echo "broadcast";;
+	${MCAST_IP}) echo "multicast";;
+
 	${NSA_IP}) echo "ns-A IP";;
 	${NSA_IP6}) echo "ns-A IPv6";;
 	${NSA_LO_IP}) echo "ns-A loopback IP";;
@@ -1793,12 +1800,33 @@ ipv4_addr_bind_novrf()
 	done

 	#
-	# raw socket with nonlocal bind
+	# tests for nonlocal bind
 	#
 	a=${NL_IP}
 	log_start
-	run_cmd nettest -s -R -P icmp -f -l ${a} -I ${NSA_DEV} -b
-	log_test_addr ${a} $? 0 "Raw socket bind to nonlocal address after device bind"
+	run_cmd nettest -s -R -f -l ${a} -b
+	log_test_addr ${a} $? 0 "Raw socket bind to nonlocal address"
+
+	log_start
+	run_cmd nettest -s -f -l ${a} -b
+	log_test_addr ${a} $? 0 "TCP socket bind to nonlocal address"
+
+	log_start
+	run_cmd nettest -s -D -P icmp -f -l ${a} -b
+	log_test_addr ${a} $? 0 "ICMP socket bind to nonlocal address"
+
+	#
+	# check that ICMP sockets cannot bind to broadcast and multicast addresses
+	#
+	a=${BCAST_IP}
+	log_start
+	run_cmd nettest -s -D -P icmp -l ${a} -b
+	log_test_addr ${a} $? 1 "ICMP socket bind to broadcast address"
+
+	a=${MCAST_IP}
+	log_start
+	run_cmd nettest -s -D -P icmp -l ${a} -b
+	log_test_addr ${a} $? 1 "ICMP socket bind to multicast address"
+
 	#
 	# tcp sockets
@@ -1850,13 +1878,34 @@ ipv4_addr_bind_vrf()
 	log_test_addr ${a} $? 1 "Raw socket bind to out of scope address after VRF bind"

 	#
-	# raw socket with nonlocal bind
+	# tests for nonlocal bind
 	#
 	a=${NL_IP}
 	log_start
-	run_cmd nettest -s -R -P icmp -f -l ${a} -I ${VRF} -b
+	run_cmd nettest -s -R -f -l ${a} -I ${VRF} -b
 	log_test_addr ${a} $? 0 "Raw socket bind to nonlocal address after VRF bind"
+
+	log_start
+	run_cmd nettest -s -f -l ${a} -I ${VRF} -b
+	log_test_addr ${a} $? 0 "TCP socket bind to nonlocal address after VRF bind"
+
+	log_start
+	run_cmd nettest -s -D -P icmp -f -l ${a} -I ${VRF} -b
+	log_test_addr ${a} $? 0 "ICMP socket bind to nonlocal address after VRF bind"
+
+	#
+	# check that ICMP sockets cannot bind to broadcast and multicast addresses
+	#
+	a=${BCAST_IP}
+	log_start
+	run_cmd nettest -s -D -P icmp -l ${a} -I ${VRF} -b
+	log_test_addr ${a} $? 1 "ICMP socket bind to broadcast address after VRF bind"
+
+	a=${MCAST_IP}
+	log_start
+	run_cmd nettest -s -D -P icmp -l ${a} -I ${VRF} -b
+	log_test_addr ${a} $? 1 "ICMP socket bind to multicast address after VRF bind"

 	#
 	# tcp sockets
 	#
@@ -1889,10 +1938,12 @@ ipv4_addr_bind()

 	log_subsection "No VRF"
 	setup
+	set_sysctl net.ipv4.ping_group_range='0 2147483647' 2>/dev/null
 	ipv4_addr_bind_novrf

 	log_subsection "With VRF"
 	setup "yes"
+	set_sysctl net.ipv4.ping_group_range='0 2147483647' 2>/dev/null
 	ipv4_addr_bind_vrf
 }

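The -f flag these tests pass to nettest requests a nonlocal bind; on Linux that is conventionally done with the IP_FREEBIND socket option (that nettest implements -f with exactly this option is an assumption here). A minimal sketch of the TCP variant; 172.17.1.1 mirrors the script's NL_IP and is assumed not to be configured locally:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	struct sockaddr_in sin = { .sin_family = AF_INET };
	int one = 1;
	int fd = socket(AF_INET, SOCK_STREAM, 0);

	if (fd < 0 || setsockopt(fd, IPPROTO_IP, IP_FREEBIND,
				 &one, sizeof(one)) < 0) {
		perror("setup");
		return 1;
	}
	inet_pton(AF_INET, "172.17.1.1", &sin.sin_addr);
	if (bind(fd, (struct sockaddr *)&sin, sizeof(sin)) < 0)
		perror("bind");		/* fails without IP_FREEBIND */
	else
		puts("bound to nonlocal address");
	close(fd);
	return 0;
}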
@@ -31,7 +31,7 @@ BUGS="flush_remove_add reload"

 # List of possible paths to pktgen script from kernel tree for performance tests
 PKTGEN_SCRIPT_PATHS="
-	../../../samples/pktgen/pktgen_bench_xmit_mode_netif_receive.sh
+	../../../../samples/pktgen/pktgen_bench_xmit_mode_netif_receive.sh
 	pktgen/pktgen_bench_xmit_mode_netif_receive.sh"

 # Definition of set types: