Merge tag 'net-6.8-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
 "Including fixes from can, wireless and netfilter.

  Current release - regressions:

   - af_unix: fix task hung while purging oob_skb in GC

   - pds_core: do not try to run health-thread in VF path

  Current release - new code bugs:

   - sched: act_mirred: don't zero blockid when net device is being
     deleted

  Previous releases - regressions:

   - netfilter:
       - nat: restore default DNAT behavior
       - nf_tables: fix bidirectional offload, broken when
         unidirectional offload support was added

   - openvswitch: limit the number of recursions from action sets

   - eth: i40e: do not allow untrusted VF to remove administratively
     set MAC address

  Previous releases - always broken:

   - tls: fix races and bugs in use of async crypto

   - mptcp: prevent data races on some of the main socket fields, fix
     races in fastopen handling

   - dpll: fix possible deadlock during netlink dump operation

   - dsa: lan966x: fix crash when adding interface under a lag when
     some of the ports are disabled

   - can: j1939: prevent deadlock by changing j1939_socks_lock to
     rwlock

  Misc:

   - a handful of fixes and reliability improvements for selftests

   - fix sysfs documentation missing net/ in paths

   - finish the work of squashing the missing MODULE_DESCRIPTION()
     warnings in networking"

* tag 'net-6.8-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (92 commits)
  net: fill in MODULE_DESCRIPTION()s for missing arcnet
  net: fill in MODULE_DESCRIPTION()s for mdio_devres
  net: fill in MODULE_DESCRIPTION()s for ppp
  net: fill in MODULE_DESCRIPTION()s for fddik/skfp
  net: fill in MODULE_DESCRIPTION()s for plip
  net: fill in MODULE_DESCRIPTION()s for ieee802154/fakelb
  net: fill in MODULE_DESCRIPTION()s for xen-netback
  net: ravb: Count packets instead of descriptors in GbEth RX path
  pppoe: Fix memory leak in pppoe_sendmsg()
  net: sctp: fix skb leak in sctp_inq_free()
  net: bcmasp: Handle RX buffer allocation failure
  net-timestamp: make sk_tskey more predictable in error path
  selftests: tls: increase the wait in poll_partial_rec_async
  ice: Add check for lport extraction to LAG init
  netfilter: nf_tables: fix bidirectional offload regression
  netfilter: nat: restore default DNAT behavior
  netfilter: nft_set_pipapo: fix missing : in kdoc
  igc: Remove temporary workaround
  igb: Fix string truncation warnings in igb_set_fw_version
  can: netlink: Fix TDCO calculation using the old data bittiming
  ...
This commit is contained in:
commit 4f5e5092fd

 .mailmap | 9
@@ -191,10 +191,11 @@ Gao Xiang <xiang@kernel.org> <gaoxiang25@huawei.com>
 Gao Xiang <xiang@kernel.org> <hsiangkao@aol.com>
 Gao Xiang <xiang@kernel.org> <hsiangkao@linux.alibaba.com>
 Gao Xiang <xiang@kernel.org> <hsiangkao@redhat.com>
-Geliang Tang <geliang.tang@linux.dev> <geliang.tang@suse.com>
-Geliang Tang <geliang.tang@linux.dev> <geliangtang@xiaomi.com>
-Geliang Tang <geliang.tang@linux.dev> <geliangtang@gmail.com>
-Geliang Tang <geliang.tang@linux.dev> <geliangtang@163.com>
+Geliang Tang <geliang@kernel.org> <geliang.tang@linux.dev>
+Geliang Tang <geliang@kernel.org> <geliang.tang@suse.com>
+Geliang Tang <geliang@kernel.org> <geliangtang@xiaomi.com>
+Geliang Tang <geliang@kernel.org> <geliangtang@gmail.com>
+Geliang Tang <geliang@kernel.org> <geliangtang@163.com>
 Georgi Djakov <djakov@kernel.org> <georgi.djakov@linaro.org>
 Gerald Schaefer <gerald.schaefer@linux.ibm.com> <geraldsc@de.ibm.com>
 Gerald Schaefer <gerald.schaefer@linux.ibm.com> <gerald.schaefer@de.ibm.com>
@@ -1,4 +1,4 @@
-What:		/sys/class/<iface>/statistics/collisions
+What:		/sys/class/net/<iface>/statistics/collisions
 Date:		April 2005
 KernelVersion:	2.6.12
 Contact:	netdev@vger.kernel.org
@@ -6,7 +6,7 @@ Description:
 		Indicates the number of collisions seen by this network device.
 		This value might not be relevant with all MAC layers.
 
-What:		/sys/class/<iface>/statistics/multicast
+What:		/sys/class/net/<iface>/statistics/multicast
 Date:		April 2005
 KernelVersion:	2.6.12
 Contact:	netdev@vger.kernel.org
@@ -14,7 +14,7 @@ Description:
 		Indicates the number of multicast packets received by this
 		network device.
 
-What:		/sys/class/<iface>/statistics/rx_bytes
+What:		/sys/class/net/<iface>/statistics/rx_bytes
 Date:		April 2005
 KernelVersion:	2.6.12
 Contact:	netdev@vger.kernel.org
@@ -23,7 +23,7 @@ Description:
 		See the network driver for the exact meaning of when this
 		value is incremented.
 
-What:		/sys/class/<iface>/statistics/rx_compressed
+What:		/sys/class/net/<iface>/statistics/rx_compressed
 Date:		April 2005
 KernelVersion:	2.6.12
 Contact:	netdev@vger.kernel.org
@@ -32,7 +32,7 @@ Description:
 		network device. This value might only be relevant for interfaces
 		that support packet compression (e.g: PPP).
 
-What:		/sys/class/<iface>/statistics/rx_crc_errors
+What:		/sys/class/net/<iface>/statistics/rx_crc_errors
 Date:		April 2005
 KernelVersion:	2.6.12
 Contact:	netdev@vger.kernel.org
@@ -41,7 +41,7 @@ Description:
 		by this network device. Note that the specific meaning might
 		depend on the MAC layer used by the interface.
 
-What:		/sys/class/<iface>/statistics/rx_dropped
+What:		/sys/class/net/<iface>/statistics/rx_dropped
 Date:		April 2005
 KernelVersion:	2.6.12
 Contact:	netdev@vger.kernel.org
@@ -51,7 +51,7 @@ Description:
 		packet processing. See the network driver for the exact
 		meaning of this value.
 
-What:		/sys/class/<iface>/statistics/rx_errors
+What:		/sys/class/net/<iface>/statistics/rx_errors
 Date:		April 2005
 KernelVersion:	2.6.12
 Contact:	netdev@vger.kernel.org
@@ -59,7 +59,7 @@ Description:
 		Indicates the number of receive errors on this network device.
 		See the network driver for the exact meaning of this value.
 
-What:		/sys/class/<iface>/statistics/rx_fifo_errors
+What:		/sys/class/net/<iface>/statistics/rx_fifo_errors
 Date:		April 2005
 KernelVersion:	2.6.12
 Contact:	netdev@vger.kernel.org
@@ -68,7 +68,7 @@ Description:
 		network device. See the network driver for the exact
 		meaning of this value.
 
-What:		/sys/class/<iface>/statistics/rx_frame_errors
+What:		/sys/class/net/<iface>/statistics/rx_frame_errors
 Date:		April 2005
 KernelVersion:	2.6.12
 Contact:	netdev@vger.kernel.org
@@ -78,7 +78,7 @@ Description:
 		on the MAC layer protocol used. See the network driver for
 		the exact meaning of this value.
 
-What:		/sys/class/<iface>/statistics/rx_length_errors
+What:		/sys/class/net/<iface>/statistics/rx_length_errors
 Date:		April 2005
 KernelVersion:	2.6.12
 Contact:	netdev@vger.kernel.org
@@ -87,7 +87,7 @@ Description:
 		error, oversized or undersized. See the network driver for the
 		exact meaning of this value.
 
-What:		/sys/class/<iface>/statistics/rx_missed_errors
+What:		/sys/class/net/<iface>/statistics/rx_missed_errors
 Date:		April 2005
 KernelVersion:	2.6.12
 Contact:	netdev@vger.kernel.org
@@ -96,7 +96,7 @@ Description:
 		due to lack of capacity in the receive side. See the network
 		driver for the exact meaning of this value.
 
-What:		/sys/class/<iface>/statistics/rx_nohandler
+What:		/sys/class/net/<iface>/statistics/rx_nohandler
 Date:		February 2016
 KernelVersion:	4.6
 Contact:	netdev@vger.kernel.org
@@ -104,7 +104,7 @@ Description:
 		Indicates the number of received packets that were dropped on
 		an inactive device by the network core.
 
-What:		/sys/class/<iface>/statistics/rx_over_errors
+What:		/sys/class/net/<iface>/statistics/rx_over_errors
 Date:		April 2005
 KernelVersion:	2.6.12
 Contact:	netdev@vger.kernel.org
@@ -114,7 +114,7 @@ Description:
 		(e.g: larger than MTU). See the network driver for the exact
 		meaning of this value.
 
-What:		/sys/class/<iface>/statistics/rx_packets
+What:		/sys/class/net/<iface>/statistics/rx_packets
 Date:		April 2005
 KernelVersion:	2.6.12
 Contact:	netdev@vger.kernel.org
@@ -122,7 +122,7 @@ Description:
 		Indicates the total number of good packets received by this
 		network device.
 
-What:		/sys/class/<iface>/statistics/tx_aborted_errors
+What:		/sys/class/net/<iface>/statistics/tx_aborted_errors
 Date:		April 2005
 KernelVersion:	2.6.12
 Contact:	netdev@vger.kernel.org
@@ -132,7 +132,7 @@ Description:
 		a medium collision). See the network driver for the exact
 		meaning of this value.
 
-What:		/sys/class/<iface>/statistics/tx_bytes
+What:		/sys/class/net/<iface>/statistics/tx_bytes
 Date:		April 2005
 KernelVersion:	2.6.12
 Contact:	netdev@vger.kernel.org
@@ -143,7 +143,7 @@ Description:
 		transmitted packets or all packets that have been queued for
 		transmission.
 
-What:		/sys/class/<iface>/statistics/tx_carrier_errors
+What:		/sys/class/net/<iface>/statistics/tx_carrier_errors
 Date:		April 2005
 KernelVersion:	2.6.12
 Contact:	netdev@vger.kernel.org
@@ -152,7 +152,7 @@ Description:
 		because of carrier errors (e.g: physical link down). See the
 		network driver for the exact meaning of this value.
 
-What:		/sys/class/<iface>/statistics/tx_compressed
+What:		/sys/class/net/<iface>/statistics/tx_compressed
 Date:		April 2005
 KernelVersion:	2.6.12
 Contact:	netdev@vger.kernel.org
@@ -161,7 +161,7 @@ Description:
 		this might only be relevant for devices that support
 		compression (e.g: PPP).
 
-What:		/sys/class/<iface>/statistics/tx_dropped
+What:		/sys/class/net/<iface>/statistics/tx_dropped
 Date:		April 2005
 KernelVersion:	2.6.12
 Contact:	netdev@vger.kernel.org
@@ -170,7 +170,7 @@ Description:
 		See the driver for the exact reasons as to why the packets were
 		dropped.
 
-What:		/sys/class/<iface>/statistics/tx_errors
+What:		/sys/class/net/<iface>/statistics/tx_errors
 Date:		April 2005
 KernelVersion:	2.6.12
 Contact:	netdev@vger.kernel.org
@@ -179,7 +179,7 @@ Description:
 		a network device. See the driver for the exact reasons as to
 		why the packets were dropped.
 
-What:		/sys/class/<iface>/statistics/tx_fifo_errors
+What:		/sys/class/net/<iface>/statistics/tx_fifo_errors
 Date:		April 2005
 KernelVersion:	2.6.12
 Contact:	netdev@vger.kernel.org
@@ -188,7 +188,7 @@ Description:
 		FIFO error. See the driver for the exact reasons as to why the
 		packets were dropped.
 
-What:		/sys/class/<iface>/statistics/tx_heartbeat_errors
+What:		/sys/class/net/<iface>/statistics/tx_heartbeat_errors
 Date:		April 2005
 KernelVersion:	2.6.12
 Contact:	netdev@vger.kernel.org
@@ -197,7 +197,7 @@ Description:
 		reported as heartbeat errors. See the driver for the exact
 		reasons as to why the packets were dropped.
 
-What:		/sys/class/<iface>/statistics/tx_packets
+What:		/sys/class/net/<iface>/statistics/tx_packets
 Date:		April 2005
 KernelVersion:	2.6.12
 Contact:	netdev@vger.kernel.org
@@ -206,7 +206,7 @@ Description:
 		device. See the driver for whether this reports the number of all
 		attempted or successful transmissions.
 
-What:		/sys/class/<iface>/statistics/tx_window_errors
+What:		/sys/class/net/<iface>/statistics/tx_window_errors
 Date:		April 2005
 KernelVersion:	2.6.12
 Contact:	netdev@vger.kernel.org
@@ -384,8 +384,6 @@ operations:
             - type
 
       dump:
-        pre: dpll-lock-dumpit
-        post: dpll-unlock-dumpit
         reply: *dev-attrs
 
     -
@@ -473,8 +471,6 @@ operations:
             - fractional-frequency-offset
 
       dump:
-        pre: dpll-lock-dumpit
-        post: dpll-unlock-dumpit
         request:
           attributes:
             - id
@@ -126,7 +126,7 @@ Users may also set the RoCE capability of the function using
 `devlink port function set roce` command.
 
 Users may also set the function as migratable using
-'devlink port function set migratable' command.
+`devlink port function set migratable` command.
 
 Users may also set the IPsec crypto capability of the function using
 `devlink port function set ipsec_crypto` command.
@@ -136,8 +136,8 @@ struct_netpoll_info* npinfo -
 possible_net_t nd_net - read_mostly (dev_net)napi_busy_loop,tcp_v(4/6)_rcv,ip(v6)_rcv,ip(6)_input,ip(6)_input_finish
 void* ml_priv
 enum_netdev_ml_priv_type ml_priv_type
-struct_pcpu_lstats__percpu* lstats
-struct_pcpu_sw_netstats__percpu* tstats
+struct_pcpu_lstats__percpu* lstats read_mostly dev_lstats_add()
+struct_pcpu_sw_netstats__percpu* tstats read_mostly dev_sw_netstats_tx_add()
 struct_pcpu_dstats__percpu* dstats
 struct_garp_port* garp_port
 struct_mrp_port* mrp_port
@@ -38,13 +38,13 @@ u32 max_window read_mostly -
 u32 mss_cache read_mostly read_mostly tcp_rate_check_app_limited,tcp_current_mss,tcp_sync_mss,tcp_sndbuf_expand,tcp_tso_should_defer(tx);tcp_update_pacing_rate,tcp_clean_rtx_queue(rx)
 u32 window_clamp read_mostly read_write tcp_rcv_space_adjust,__tcp_select_window
 u32 rcv_ssthresh read_mostly - __tcp_select_window
-u8 scaling_ratio
+u8 scaling_ratio read_mostly read_mostly tcp_win_from_space
 struct tcp_rack
 u16 advmss - read_mostly tcp_rcv_space_adjust
 u8 compressed_ack
 u8:2 dup_ack_counter
 u8:1 tlp_retrans
-u8:1 tcp_usec_ts
+u8:1 tcp_usec_ts read_mostly read_mostly
 u32 chrono_start read_write - tcp_chrono_start/stop(tcp_write_xmit,tcp_cwnd_validate,tcp_send_syn_data)
 u32[3] chrono_stat read_write - tcp_chrono_start/stop(tcp_write_xmit,tcp_cwnd_validate,tcp_send_syn_data)
 u8:2 chrono_type read_write - tcp_chrono_start/stop(tcp_write_xmit,tcp_cwnd_validate,tcp_send_syn_data)
@@ -15324,7 +15324,7 @@ K:	\bmdo_
 NETWORKING [MPTCP]
 M:	Matthieu Baerts <matttbe@kernel.org>
 M:	Mat Martineau <martineau@kernel.org>
-R:	Geliang Tang <geliang.tang@linux.dev>
+R:	Geliang Tang <geliang@kernel.org>
 L:	netdev@vger.kernel.org
 L:	mptcp@lists.linux.dev
 S:	Maintained
@@ -108,9 +108,8 @@ static inline void send_msg(struct cn_msg *msg)
 		filter_data[1] = 0;
 	}
 
-	if (cn_netlink_send_mult(msg, msg->len, 0, CN_IDX_PROC, GFP_NOWAIT,
-				 cn_filter, (void *)filter_data) == -ESRCH)
-		atomic_set(&proc_event_num_listeners, 0);
+	cn_netlink_send_mult(msg, msg->len, 0, CN_IDX_PROC, GFP_NOWAIT,
+			     cn_filter, (void *)filter_data);
 
 	local_unlock(&local_event.lock);
 }
@@ -1199,6 +1199,7 @@ int dpll_nl_pin_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb)
 	unsigned long i;
 	int ret = 0;
 
+	mutex_lock(&dpll_lock);
 	xa_for_each_marked_start(&dpll_pin_xa, i, pin, DPLL_REGISTERED,
 				 ctx->idx) {
 		if (!dpll_pin_available(pin))
@@ -1218,6 +1219,8 @@ int dpll_nl_pin_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb)
 		}
 		genlmsg_end(skb, hdr);
 	}
+	mutex_unlock(&dpll_lock);
+
 	if (ret == -EMSGSIZE) {
 		ctx->idx = i;
 		return skb->len;
@@ -1373,6 +1376,7 @@ int dpll_nl_device_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb)
 	unsigned long i;
 	int ret = 0;
 
+	mutex_lock(&dpll_lock);
 	xa_for_each_marked_start(&dpll_device_xa, i, dpll, DPLL_REGISTERED,
 				 ctx->idx) {
 		hdr = genlmsg_put(skb, NETLINK_CB(cb->skb).portid,
@@ -1389,6 +1393,8 @@ int dpll_nl_device_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb)
 		}
 		genlmsg_end(skb, hdr);
 	}
+	mutex_unlock(&dpll_lock);
+
 	if (ret == -EMSGSIZE) {
 		ctx->idx = i;
 		return skb->len;
@@ -1439,20 +1445,6 @@ dpll_unlock_doit(const struct genl_split_ops *ops, struct sk_buff *skb,
 	mutex_unlock(&dpll_lock);
 }
 
-int dpll_lock_dumpit(struct netlink_callback *cb)
-{
-	mutex_lock(&dpll_lock);
-
-	return 0;
-}
-
-int dpll_unlock_dumpit(struct netlink_callback *cb)
-{
-	mutex_unlock(&dpll_lock);
-
-	return 0;
-}
-
 int dpll_pin_pre_doit(const struct genl_split_ops *ops, struct sk_buff *skb,
 		      struct genl_info *info)
 {
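The dpll hunks above take and release dpll_lock inside each dumpit call rather than pinning it from the .start callback to the .done callback, where it would stay held across the many recvmsg() round-trips a netlink dump can span. A rough userspace sketch of the resulting pattern, with invented names (`dump_some`, `devices`) standing in for the xarray walk:

```c
#include <assert.h>
#include <pthread.h>

/* Stand-ins for dpll_lock and the table of registered devices. */
static pthread_mutex_t dpll_lock = PTHREAD_MUTEX_INITIALIZER;
static int devices[4] = { 10, 20, 30, 40 };

/* One dump pass: the lock is held only for the duration of a single
 * call, and `start` lets the next call resume where this one stopped,
 * mirroring how ctx->idx resumes the xarray iteration. */
static int dump_some(int *out, int start, int max)
{
	int n = 0;

	pthread_mutex_lock(&dpll_lock);
	for (int i = start; i < 4 && n < max; i++)
		out[n++] = devices[i];
	pthread_mutex_unlock(&dpll_lock);
	return n;
}
```

Because the lock is dropped between passes, a slow or stalled dumper can no longer hold the subsystem lock indefinitely, which is the deadlock the commit message describes.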
@@ -95,9 +95,7 @@ static const struct genl_split_ops dpll_nl_ops[] = {
 	},
 	{
 		.cmd	= DPLL_CMD_DEVICE_GET,
-		.start	= dpll_lock_dumpit,
 		.dumpit	= dpll_nl_device_get_dumpit,
-		.done	= dpll_unlock_dumpit,
 		.flags	= GENL_ADMIN_PERM | GENL_CMD_CAP_DUMP,
 	},
 	{
@@ -129,9 +127,7 @@ static const struct genl_split_ops dpll_nl_ops[] = {
 	},
 	{
 		.cmd		= DPLL_CMD_PIN_GET,
-		.start		= dpll_lock_dumpit,
 		.dumpit		= dpll_nl_pin_get_dumpit,
-		.done		= dpll_unlock_dumpit,
 		.policy		= dpll_pin_get_dump_nl_policy,
 		.maxattr	= DPLL_A_PIN_ID,
 		.flags		= GENL_ADMIN_PERM | GENL_CMD_CAP_DUMP,
@@ -30,8 +30,6 @@ dpll_post_doit(const struct genl_split_ops *ops, struct sk_buff *skb,
 void
 dpll_pin_post_doit(const struct genl_split_ops *ops, struct sk_buff *skb,
 		   struct genl_info *info);
-int dpll_lock_dumpit(struct netlink_callback *cb);
-int dpll_unlock_dumpit(struct netlink_callback *cb);
 
 int dpll_nl_device_id_get_doit(struct sk_buff *skb, struct genl_info *info);
 int dpll_nl_device_get_doit(struct sk_buff *skb, struct genl_info *info);
@@ -186,4 +186,5 @@ static void __exit arcnet_raw_exit(void)
 module_init(arcnet_raw_init);
 module_exit(arcnet_raw_exit);
 
+MODULE_DESCRIPTION("ARCnet raw mode packet interface module");
 MODULE_LICENSE("GPL");
@@ -312,6 +312,7 @@ module_param(node, int, 0);
 module_param(io, int, 0);
 module_param(irq, int, 0);
 module_param_string(device, device, sizeof(device), 0);
+MODULE_DESCRIPTION("ARCnet COM90xx RIM I chipset driver");
 MODULE_LICENSE("GPL");
 
 static struct net_device *my_dev;
@@ -265,4 +265,5 @@ static void __exit capmode_module_exit(void)
 module_init(capmode_module_init);
 module_exit(capmode_module_exit);
 
+MODULE_DESCRIPTION("ARCnet CAP mode packet interface module");
 MODULE_LICENSE("GPL");
@@ -61,6 +61,7 @@ module_param(timeout, int, 0);
 module_param(backplane, int, 0);
 module_param(clockp, int, 0);
 module_param(clockm, int, 0);
+MODULE_DESCRIPTION("ARCnet COM20020 chipset PCI driver");
 MODULE_LICENSE("GPL");
 
 static void led_tx_set(struct led_classdev *led_cdev,
@@ -399,6 +399,7 @@ EXPORT_SYMBOL(com20020_found);
 EXPORT_SYMBOL(com20020_netdev_ops);
 #endif
 
+MODULE_DESCRIPTION("ARCnet COM20020 chipset core driver");
 MODULE_LICENSE("GPL");
 
 #ifdef MODULE
@@ -97,6 +97,7 @@ module_param(backplane, int, 0);
 module_param(clockp, int, 0);
 module_param(clockm, int, 0);
 
+MODULE_DESCRIPTION("ARCnet COM20020 chipset PCMCIA driver");
 MODULE_LICENSE("GPL");
 
 /*====================================================================*/
@@ -350,6 +350,7 @@ static char device[9];		/* use eg. device=arc1 to change name */
 module_param_hw(io, int, ioport, 0);
 module_param_hw(irq, int, irq, 0);
 module_param_string(device, device, sizeof(device), 0);
+MODULE_DESCRIPTION("ARCnet COM90xx IO mapped chipset driver");
 MODULE_LICENSE("GPL");
 
 #ifndef MODULE
@@ -645,6 +645,7 @@ static void com90xx_copy_from_card(struct net_device *dev, int bufnum,
 	TIME(dev, "memcpy_fromio", count, memcpy_fromio(buf, memaddr, count));
 }
 
+MODULE_DESCRIPTION("ARCnet COM90xx normal chipset driver");
 MODULE_LICENSE("GPL");
 
 static int __init com90xx_init(void)
@@ -78,6 +78,7 @@ static void __exit arcnet_rfc1051_exit(void)
 module_init(arcnet_rfc1051_init);
 module_exit(arcnet_rfc1051_exit);
 
+MODULE_DESCRIPTION("ARCNet packet format (RFC 1051) module");
 MODULE_LICENSE("GPL");
 
 /* Determine a packet's protocol ID.
@@ -35,6 +35,7 @@
 
 #include "arcdevice.h"
 
+MODULE_DESCRIPTION("ARCNet packet format (RFC 1201) module");
 MODULE_LICENSE("GPL");
 
 static __be16 type_trans(struct sk_buff *skb, struct net_device *dev);
@@ -1819,6 +1819,8 @@ void bond_xdp_set_features(struct net_device *bond_dev)
 	bond_for_each_slave(bond, slave, iter)
 		val &= slave->dev->xdp_features;
 
+	val &= ~NETDEV_XDP_ACT_XSK_ZEROCOPY;
+
 	xdp_set_features_flag(bond_dev, val);
 }
 
@@ -5909,9 +5911,6 @@ void bond_setup(struct net_device *bond_dev)
 	if (BOND_MODE(bond) == BOND_MODE_ACTIVEBACKUP)
 		bond_dev->features |= BOND_XFRM_FEATURES;
 #endif /* CONFIG_XFRM_OFFLOAD */
-
-	if (bond_xdp_check(bond))
-		bond_dev->xdp_features = NETDEV_XDP_ACT_MASK;
 }
 
 /* Destroy a bonding device.
@@ -346,7 +346,7 @@ static int can_changelink(struct net_device *dev, struct nlattr *tb[],
 			/* Neither of TDC parameters nor TDC flags are
 			 * provided: do calculation
 			 */
-			can_calc_tdco(&priv->tdc, priv->tdc_const, &priv->data_bittiming,
+			can_calc_tdco(&priv->tdc, priv->tdc_const, &dbt,
 				      &priv->ctrlmode, priv->ctrlmode_supported);
 		} /* else: both CAN_CTRLMODE_TDC_{AUTO,MANUAL} are explicitly
 		   * turned off. TDC is disabled: do nothing
@@ -32,4 +32,5 @@ static int __init dsa_loop_bdinfo_init(void)
 }
 arch_initcall(dsa_loop_bdinfo_init)
 
+MODULE_DESCRIPTION("DSA mock-up switch driver");
 MODULE_LICENSE("GPL");
@@ -451,6 +451,9 @@ static void pdsc_remove(struct pci_dev *pdev)
 
 static void pdsc_stop_health_thread(struct pdsc *pdsc)
 {
+	if (pdsc->pdev->is_virtfn)
+		return;
+
 	timer_shutdown_sync(&pdsc->wdtimer);
 	if (pdsc->health_work.func)
 		cancel_work_sync(&pdsc->health_work);
@@ -458,6 +461,9 @@ static void pdsc_stop_health_thread(struct pdsc *pdsc)
 
 static void pdsc_restart_health_thread(struct pdsc *pdsc)
 {
+	if (pdsc->pdev->is_virtfn)
+		return;
+
 	timer_setup(&pdsc->wdtimer, pdsc_wdtimer_cb, 0);
 	mod_timer(&pdsc->wdtimer, jiffies + 1);
 }
@@ -684,6 +684,8 @@ static int bcmasp_init_rx(struct bcmasp_intf *intf)
 
 	intf->rx_buf_order = get_order(RING_BUFFER_SIZE);
 	buffer_pg = alloc_pages(GFP_KERNEL, intf->rx_buf_order);
+	if (!buffer_pg)
+		return -ENOMEM;
 
 	dma = dma_map_page(kdev, buffer_pg, 0, RING_BUFFER_SIZE,
 			   DMA_FROM_DEVICE);
@@ -1092,6 +1094,7 @@ static int bcmasp_netif_init(struct net_device *dev, bool phy_connect)
 	return 0;
 
 err_reclaim_tx:
+	netif_napi_del(&intf->tx_napi);
 	bcmasp_reclaim_free_all_tx(intf);
 err_phy_disconnect:
 	if (phydev)
@@ -1091,10 +1091,10 @@ bnad_cb_tx_resume(struct bnad *bnad, struct bna_tx *tx)
  * Free all TxQs buffers and then notify TX_E_CLEANUP_DONE to Tx fsm.
  */
 static void
-bnad_tx_cleanup(struct delayed_work *work)
+bnad_tx_cleanup(struct work_struct *work)
 {
 	struct bnad_tx_info *tx_info =
-		container_of(work, struct bnad_tx_info, tx_cleanup_work);
+		container_of(work, struct bnad_tx_info, tx_cleanup_work.work);
 	struct bnad *bnad = NULL;
 	struct bna_tcb *tcb;
 	unsigned long flags;
@@ -1170,7 +1170,7 @@ bnad_cb_rx_stall(struct bnad *bnad, struct bna_rx *rx)
  * Free all RxQs buffers and then notify RX_E_CLEANUP_DONE to Rx fsm.
  */
 static void
-bnad_rx_cleanup(void *work)
+bnad_rx_cleanup(struct work_struct *work)
 {
 	struct bnad_rx_info *rx_info =
 		container_of(work, struct bnad_rx_info, rx_cleanup_work);
@@ -1991,8 +1991,7 @@ bnad_setup_tx(struct bnad *bnad, u32 tx_id)
 	}
 	tx_info->tx = tx;
 
-	INIT_DELAYED_WORK(&tx_info->tx_cleanup_work,
-			  (work_func_t)bnad_tx_cleanup);
+	INIT_DELAYED_WORK(&tx_info->tx_cleanup_work, bnad_tx_cleanup);
 
 	/* Register ISR for the Tx object */
 	if (intr_info->intr_type == BNA_INTR_T_MSIX) {
@@ -2248,8 +2247,7 @@ bnad_setup_rx(struct bnad *bnad, u32 rx_id)
 	rx_info->rx = rx;
 	spin_unlock_irqrestore(&bnad->bna_lock, flags);
 
-	INIT_WORK(&rx_info->rx_cleanup_work,
-		  (work_func_t)(bnad_rx_cleanup));
+	INIT_WORK(&rx_info->rx_cleanup_work, bnad_rx_cleanup);
 
 	/*
 	 * Init NAPI, so that state is set to NAPI_STATE_SCHED,
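The bnad hunks above replace handlers that were cast to work_func_t despite taking the wrong argument type; the fixed handlers take struct work_struct * and recover their container with container_of, walking out through the embedded .work member for the delayed case. A self-contained sketch of that pointer arithmetic, with simplified stand-in struct layouts rather than the real kernel types:

```c
#include <assert.h>
#include <stddef.h>

/* Minimal container_of, as in the kernel. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* A delayed work item embeds a plain work item; the handler receives a
 * pointer to the inner work member, so it must use the compound member
 * path tx_cleanup_work.work, as the fixed bnad_tx_cleanup() does. */
struct work_struct { int pending; };
struct delayed_work { struct work_struct work; int timer; };
struct tx_info { int id; struct delayed_work tx_cleanup_work; };

static struct tx_info *tx_from_work(struct work_struct *w)
{
	return container_of(w, struct tx_info, tx_cleanup_work.work);
}
```

Casting a mismatched function pointer instead of doing this walk "works" only by accident of layout and trips control-flow-integrity checking, which is why the driver was converted.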
@@ -1523,7 +1523,7 @@ void i40e_dcb_hw_rx_ets_bw_config(struct i40e_hw *hw, u8 *bw_share,
 		reg = rd32(hw, I40E_PRTDCB_RETSTCC(i));
 		reg &= ~(I40E_PRTDCB_RETSTCC_BWSHARE_MASK |
 			 I40E_PRTDCB_RETSTCC_UPINTC_MODE_MASK |
-			 I40E_PRTDCB_RETSTCC_ETSTC_SHIFT);
+			 I40E_PRTDCB_RETSTCC_ETSTC_MASK);
 		reg |= FIELD_PREP(I40E_PRTDCB_RETSTCC_BWSHARE_MASK,
 				  bw_share[i]);
 		reg |= FIELD_PREP(I40E_PRTDCB_RETSTCC_UPINTC_MODE_MASK,
@@ -4926,27 +4926,23 @@ int i40e_vsi_start_rings(struct i40e_vsi *vsi)
 void i40e_vsi_stop_rings(struct i40e_vsi *vsi)
 {
 	struct i40e_pf *pf = vsi->back;
-	int pf_q, err, q_end;
+	u32 pf_q, tx_q_end, rx_q_end;
 
 	/* When port TX is suspended, don't wait */
 	if (test_bit(__I40E_PORT_SUSPENDED, vsi->back->state))
 		return i40e_vsi_stop_rings_no_wait(vsi);
 
-	q_end = vsi->base_queue + vsi->num_queue_pairs;
-	for (pf_q = vsi->base_queue; pf_q < q_end; pf_q++)
-		i40e_pre_tx_queue_cfg(&pf->hw, (u32)pf_q, false);
+	tx_q_end = vsi->base_queue +
+		vsi->alloc_queue_pairs * (i40e_enabled_xdp_vsi(vsi) ? 2 : 1);
+	for (pf_q = vsi->base_queue; pf_q < tx_q_end; pf_q++)
+		i40e_pre_tx_queue_cfg(&pf->hw, pf_q, false);
 
-	for (pf_q = vsi->base_queue; pf_q < q_end; pf_q++) {
-		err = i40e_control_wait_rx_q(pf, pf_q, false);
-		if (err)
-			dev_info(&pf->pdev->dev,
-				 "VSI seid %d Rx ring %d disable timeout\n",
-				 vsi->seid, pf_q);
-	}
+	rx_q_end = vsi->base_queue + vsi->num_queue_pairs;
+	for (pf_q = vsi->base_queue; pf_q < rx_q_end; pf_q++)
+		i40e_control_rx_q(pf, pf_q, false);
 
 	msleep(I40E_DISABLE_TX_GAP_MSEC);
-	pf_q = vsi->base_queue;
-	for (pf_q = vsi->base_queue; pf_q < q_end; pf_q++)
+	for (pf_q = vsi->base_queue; pf_q < tx_q_end; pf_q++)
 		wr32(&pf->hw, I40E_QTX_ENA(pf_q), 0);
 
 	i40e_vsi_wait_queues_disabled(vsi);
@@ -5360,7 +5356,7 @@ static int i40e_pf_wait_queues_disabled(struct i40e_pf *pf)
 {
 	int v, ret = 0;
 
-	for (v = 0; v < pf->hw.func_caps.num_vsis; v++) {
+	for (v = 0; v < pf->num_alloc_vsi; v++) {
 		if (pf->vsi[v]) {
 			ret = i40e_vsi_wait_queues_disabled(pf->vsi[v]);
 			if (ret)
@ -2848,6 +2848,24 @@ error_param:
|
||||
(u8 *)&stats, sizeof(stats));
|
||||
}
|
||||
|
||||
/**
|
||||
* i40e_can_vf_change_mac
|
||||
* @vf: pointer to the VF info
|
||||
*
|
||||
* Return true if the VF is allowed to change its MAC filters, false otherwise
|
||||
*/
|
||||
static bool i40e_can_vf_change_mac(struct i40e_vf *vf)
|
||||
{
|
||||
/* If the VF MAC address has been set administratively (via the
|
||||
* ndo_set_vf_mac command), then deny permission to the VF to
|
||||
* add/delete unicast MAC addresses, unless the VF is trusted
|
||||
*/
|
||||
if (vf->pf_set_mac && !vf->trusted)
|
||||
return false;
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
#define I40E_MAX_MACVLAN_PER_HW 3072
|
||||
#define I40E_MAX_MACVLAN_PER_PF(num_ports) (I40E_MAX_MACVLAN_PER_HW / \
|
||||
(num_ports))
|
||||
@ -2907,8 +2925,8 @@ static inline int i40e_check_vf_permission(struct i40e_vf *vf,
|
||||
* The VF may request to set the MAC address filter already
|
||||
* assigned to it so do not return an error in that case.
|
||||
*/
|
||||
if (!test_bit(I40E_VIRTCHNL_VF_CAP_PRIVILEGE, &vf->vf_caps) &&
|
||||
!is_multicast_ether_addr(addr) && vf->pf_set_mac &&
|
||||
if (!i40e_can_vf_change_mac(vf) &&
|
||||
!is_multicast_ether_addr(addr) &&
|
||||
!ether_addr_equal(addr, vf->default_lan_addr.addr)) {
|
||||
dev_err(&pf->pdev->dev,
|
||||
"VF attempting to override administratively set MAC address, bring down and up the VF interface to resume normal operation\n");
|
||||
@ -3114,19 +3132,29 @@ static int i40e_vc_del_mac_addr_msg(struct i40e_vf *vf, u8 *msg)
|
||||
ret = -EINVAL;
|
||||
goto error_param;
|
||||
}
|
||||
if (ether_addr_equal(al->list[i].addr, vf->default_lan_addr.addr))
|
||||
was_unimac_deleted = true;
|
||||
}
|
||||
vsi = pf->vsi[vf->lan_vsi_idx];
|
||||
|
||||
spin_lock_bh(&vsi->mac_filter_hash_lock);
|
||||
/* delete addresses from the list */
|
||||
for (i = 0; i < al->num_elements; i++)
|
||||
for (i = 0; i < al->num_elements; i++) {
|
||||
const u8 *addr = al->list[i].addr;
|
||||
|
||||
/* Allow to delete VF primary MAC only if it was not set
|
||||
* administratively by PF or if VF is trusted.
|
||||
*/
|
||||
if (ether_addr_equal(addr, vf->default_lan_addr.addr) &&
|
||||
i40e_can_vf_change_mac(vf))
|
||||
was_unimac_deleted = true;
|
||||
else
|
||||
continue;
|
||||
|
||||
if (i40e_del_mac_filter(vsi, al->list[i].addr)) {
|
||||
ret = -EINVAL;
|
||||
spin_unlock_bh(&vsi->mac_filter_hash_lock);
|
||||
goto error_param;
|
||||
}
|
||||
}
|
||||
|
||||
spin_unlock_bh(&vsi->mac_filter_hash_lock);
|
||||
|
||||
|
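[The i40e permission change above boils down to one small predicate gating all unicast MAC filter changes. A user-space sketch of that logic — struct and function names here are illustrative stand-ins, not the driver's API:]

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for struct i40e_vf: only the two flags the
 * permission check reads. */
struct vf_state {
	bool pf_set_mac;	/* MAC was pinned administratively via ndo_set_vf_mac */
	bool trusted;		/* PF marked this VF as trusted */
};

/* Mirrors the i40e_can_vf_change_mac() logic: an untrusted VF may not
 * add or delete unicast MAC filters once the PF has pinned its MAC. */
static bool can_vf_change_mac(const struct vf_state *vf)
{
	if (vf->pf_set_mac && !vf->trusted)
		return false;
	return true;
}
```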
@@ -151,6 +151,27 @@ ice_lag_find_hw_by_lport(struct ice_lag *lag, u8 lport)
 	return NULL;
 }
 
+/**
+ * ice_pkg_has_lport_extract - check if lport extraction supported
+ * @hw: HW struct
+ */
+static bool ice_pkg_has_lport_extract(struct ice_hw *hw)
+{
+	int i;
+
+	for (i = 0; i < hw->blk[ICE_BLK_SW].es.count; i++) {
+		u16 offset;
+		u8 fv_prot;
+
+		ice_find_prot_off(hw, ICE_BLK_SW, ICE_SW_DEFAULT_PROFILE, i,
+				  &fv_prot, &offset);
+		if (fv_prot == ICE_FV_PROT_MDID &&
+		    offset == ICE_LP_EXT_BUF_OFFSET)
+			return true;
+	}
+	return false;
+}
+
 /**
  * ice_lag_find_primary - returns pointer to primary interfaces lag struct
  * @lag: local interfaces lag struct
@@ -1206,7 +1227,7 @@ static void ice_lag_del_prune_list(struct ice_lag *lag, struct ice_pf *event_pf)
 }
 
 /**
- * ice_lag_init_feature_support_flag - Check for NVM support for LAG
+ * ice_lag_init_feature_support_flag - Check for package and NVM support for LAG
  * @pf: PF struct
  */
 static void ice_lag_init_feature_support_flag(struct ice_pf *pf)
@@ -1219,7 +1240,7 @@ static void ice_lag_init_feature_support_flag(struct ice_pf *pf)
 	else
 		ice_clear_feature_support(pf, ICE_F_ROCE_LAG);
 
-	if (caps->sriov_lag)
+	if (caps->sriov_lag && ice_pkg_has_lport_extract(&pf->hw))
 		ice_set_feature_support(pf, ICE_F_SRIOV_LAG);
 	else
 		ice_clear_feature_support(pf, ICE_F_SRIOV_LAG);
 
@@ -17,6 +17,9 @@ enum ice_lag_role {
 #define ICE_LAG_INVALID_PORT 0xFF
 
 #define ICE_LAG_RESET_RETRIES		5
+#define ICE_SW_DEFAULT_PROFILE		0
+#define ICE_FV_PROT_MDID		255
+#define ICE_LP_EXT_BUF_OFFSET		32
 
 struct ice_pf;
 struct ice_vf;
@@ -637,7 +637,7 @@ struct igb_adapter {
 		struct timespec64 period;
 	} perout[IGB_N_PEROUT];
 
-	char fw_version[32];
+	char fw_version[48];
 #ifdef CONFIG_IGB_HWMON
 	struct hwmon_buff *igb_hwmon_buff;
 	bool ets;
 
@@ -3069,7 +3069,6 @@ void igb_set_fw_version(struct igb_adapter *adapter)
 {
 	struct e1000_hw *hw = &adapter->hw;
 	struct e1000_fw_version fw;
-	char *lbuf;
 
 	igb_get_fw_version(hw, &fw);
 
@@ -3077,34 +3076,36 @@ void igb_set_fw_version(struct igb_adapter *adapter)
 	case e1000_i210:
 	case e1000_i211:
 		if (!(igb_get_flash_presence_i210(hw))) {
-			lbuf = kasprintf(GFP_KERNEL, "%2d.%2d-%d",
-					 fw.invm_major, fw.invm_minor,
-					 fw.invm_img_type);
+			snprintf(adapter->fw_version,
+				 sizeof(adapter->fw_version),
+				 "%2d.%2d-%d",
+				 fw.invm_major, fw.invm_minor,
+				 fw.invm_img_type);
 			break;
 		}
 		fallthrough;
 	default:
 		/* if option rom is valid, display its version too */
 		if (fw.or_valid) {
-			lbuf = kasprintf(GFP_KERNEL, "%d.%d, 0x%08x, %d.%d.%d",
-					 fw.eep_major, fw.eep_minor,
-					 fw.etrack_id, fw.or_major, fw.or_build,
-					 fw.or_patch);
+			snprintf(adapter->fw_version,
+				 sizeof(adapter->fw_version),
+				 "%d.%d, 0x%08x, %d.%d.%d",
+				 fw.eep_major, fw.eep_minor, fw.etrack_id,
+				 fw.or_major, fw.or_build, fw.or_patch);
 		/* no option rom */
 		} else if (fw.etrack_id != 0X0000) {
-			lbuf = kasprintf(GFP_KERNEL, "%d.%d, 0x%08x",
-					 fw.eep_major, fw.eep_minor,
-					 fw.etrack_id);
+			snprintf(adapter->fw_version,
+				 sizeof(adapter->fw_version),
+				 "%d.%d, 0x%08x",
+				 fw.eep_major, fw.eep_minor, fw.etrack_id);
 		} else {
-			lbuf = kasprintf(GFP_KERNEL, "%d.%d.%d", fw.eep_major,
-					 fw.eep_minor, fw.eep_build);
+			snprintf(adapter->fw_version,
+				 sizeof(adapter->fw_version),
+				 "%d.%d.%d",
+				 fw.eep_major, fw.eep_minor, fw.eep_build);
 		}
 		break;
 	}
-
-	/* the truncate happens here if it doesn't fit */
-	strscpy(adapter->fw_version, lbuf, sizeof(adapter->fw_version));
-	kfree(lbuf);
 }
 
 /**
@@ -130,11 +130,7 @@ void igc_power_down_phy_copper(struct igc_hw *hw)
 	/* The PHY will retain its settings across a power down/up cycle */
 	hw->phy.ops.read_reg(hw, PHY_CONTROL, &mii_reg);
 	mii_reg |= MII_CR_POWER_DOWN;
-
-	/* Temporary workaround - should be removed when PHY will implement
-	 * IEEE registers as properly
-	 */
-	/* hw->phy.ops.write_reg(hw, PHY_CONTROL, mii_reg);*/
+	hw->phy.ops.write_reg(hw, PHY_CONTROL, mii_reg);
 	usleep_range(1000, 2000);
 }
@@ -61,28 +61,6 @@ int rvu_npc_get_tx_nibble_cfg(struct rvu *rvu, u64 nibble_ena)
 	return 0;
 }
 
-static int npc_mcam_verify_pf_func(struct rvu *rvu,
-				   struct mcam_entry *entry_data, u8 intf,
-				   u16 pcifunc)
-{
-	u16 pf_func, pf_func_mask;
-
-	if (is_npc_intf_rx(intf))
-		return 0;
-
-	pf_func_mask = (entry_data->kw_mask[0] >> 32) &
-			NPC_KEX_PF_FUNC_MASK;
-	pf_func = (entry_data->kw[0] >> 32) & NPC_KEX_PF_FUNC_MASK;
-
-	pf_func = be16_to_cpu((__force __be16)pf_func);
-	if (pf_func_mask != NPC_KEX_PF_FUNC_MASK ||
-	    ((pf_func & ~RVU_PFVF_FUNC_MASK) !=
-	     (pcifunc & ~RVU_PFVF_FUNC_MASK)))
-		return -EINVAL;
-
-	return 0;
-}
-
 void rvu_npc_set_pkind(struct rvu *rvu, int pkind, struct rvu_pfvf *pfvf)
 {
 	int blkaddr;
@@ -2851,12 +2829,6 @@ int rvu_mbox_handler_npc_mcam_write_entry(struct rvu *rvu,
 	else
 		nix_intf = pfvf->nix_rx_intf;
 
-	if (!is_pffunc_af(pcifunc) &&
-	    npc_mcam_verify_pf_func(rvu, &req->entry_data, req->intf, pcifunc)) {
-		rc = NPC_MCAM_INVALID_REQ;
-		goto exit;
-	}
-
 	/* For AF installed rules, the nix_intf should be set to target NIX */
 	if (is_pffunc_af(req->hdr.pcifunc))
 		nix_intf = req->intf;
@@ -3208,10 +3180,6 @@ int rvu_mbox_handler_npc_mcam_alloc_and_write_entry(struct rvu *rvu,
 	if (!is_npc_interface_valid(rvu, req->intf))
 		return NPC_MCAM_INVALID_REQ;
 
-	if (npc_mcam_verify_pf_func(rvu, &req->entry_data, req->intf,
-				    req->hdr.pcifunc))
-		return NPC_MCAM_INVALID_REQ;
-
 	/* Try to allocate a MCAM entry */
 	entry_req.hdr.pcifunc = req->hdr.pcifunc;
 	entry_req.contig = true;
@@ -389,7 +389,7 @@ static void mlx5_dpll_remove(struct auxiliary_device *adev)
 	struct mlx5_dpll *mdpll = auxiliary_get_drvdata(adev);
 	struct mlx5_core_dev *mdev = mdpll->mdev;
 
-	cancel_delayed_work(&mdpll->work);
+	cancel_delayed_work_sync(&mdpll->work);
 	mlx5_dpll_mdev_netdev_untrack(mdpll, mdev);
 	destroy_workqueue(mdpll->wq);
 	dpll_pin_unregister(mdpll->dpll, mdpll->dpll_pin,
@@ -37,19 +37,24 @@ static void lan966x_lag_set_aggr_pgids(struct lan966x *lan966x)
 
 	/* Now, set PGIDs for each active LAG */
 	for (lag = 0; lag < lan966x->num_phys_ports; ++lag) {
-		struct net_device *bond = lan966x->ports[lag]->bond;
+		struct lan966x_port *port = lan966x->ports[lag];
 		int num_active_ports = 0;
+		struct net_device *bond;
 		unsigned long bond_mask;
 		u8 aggr_idx[16];
 
-		if (!bond || (visited & BIT(lag)))
+		if (!port || !port->bond || (visited & BIT(lag)))
 			continue;
 
+		bond = port->bond;
 		bond_mask = lan966x_lag_get_mask(lan966x, bond);
 
 		for_each_set_bit(p, &bond_mask, lan966x->num_phys_ports) {
 			struct lan966x_port *port = lan966x->ports[p];
 
+			if (!port)
+				continue;
+
 			lan_wr(ANA_PGID_PGID_SET(bond_mask),
 			       lan966x, ANA_PGID(p));
 			if (port->lag_tx_active)
@@ -579,6 +579,9 @@ int ionic_tx_napi(struct napi_struct *napi, int budget)
 	work_done = ionic_cq_service(cq, budget,
 				     ionic_tx_service, NULL, NULL);
 
+	if (unlikely(!budget))
+		return budget;
+
 	if (work_done < budget && napi_complete_done(napi, work_done)) {
 		ionic_dim_update(qcq, IONIC_LIF_F_TX_DIM_INTR);
 		flags |= IONIC_INTR_CRED_UNMASK;
@@ -607,6 +610,9 @@ int ionic_rx_napi(struct napi_struct *napi, int budget)
 	u32 work_done = 0;
 	u32 flags = 0;
 
+	if (unlikely(!budget))
+		return budget;
+
 	lif = cq->bound_q->lif;
 	idev = &lif->ionic->idev;
 
@@ -656,6 +662,9 @@ int ionic_txrx_napi(struct napi_struct *napi, int budget)
 	tx_work_done = ionic_cq_service(txcq, IONIC_TX_BUDGET_DEFAULT,
 					ionic_tx_service, NULL, NULL);
 
+	if (unlikely(!budget))
+		return budget;
+
 	rx_work_done = ionic_cq_service(rxcq, budget,
 					ionic_rx_service, NULL, NULL);
 
@@ -772,29 +772,25 @@ static bool ravb_rx_gbeth(struct net_device *ndev, int *quota, int q)
 	struct ravb_rx_desc *desc;
 	struct sk_buff *skb;
 	dma_addr_t dma_addr;
+	int rx_packets = 0;
 	u8 desc_status;
-	int boguscnt;
 	u16 pkt_len;
 	u8 die_dt;
 	int entry;
 	int limit;
+	int i;
 
 	entry = priv->cur_rx[q] % priv->num_rx_ring[q];
-	boguscnt = priv->dirty_rx[q] + priv->num_rx_ring[q] - priv->cur_rx[q];
+	limit = priv->dirty_rx[q] + priv->num_rx_ring[q] - priv->cur_rx[q];
 	stats = &priv->stats[q];
 
-	boguscnt = min(boguscnt, *quota);
-	limit = boguscnt;
 	desc = &priv->gbeth_rx_ring[entry];
-	while (desc->die_dt != DT_FEMPTY) {
+	for (i = 0; i < limit && rx_packets < *quota && desc->die_dt != DT_FEMPTY; i++) {
 		/* Descriptor type must be checked before all other reads */
 		dma_rmb();
 		desc_status = desc->msc;
 		pkt_len = le16_to_cpu(desc->ds_cc) & RX_DS;
 
-		if (--boguscnt < 0)
-			break;
-
 		/* We use 0-byte descriptors to mark the DMA mapping errors */
 		if (!pkt_len)
 			continue;
@@ -820,7 +816,7 @@ static bool ravb_rx_gbeth(struct net_device *ndev, int *quota, int q)
 			skb_put(skb, pkt_len);
 			skb->protocol = eth_type_trans(skb, ndev);
 			napi_gro_receive(&priv->napi[q], skb);
-			stats->rx_packets++;
+			rx_packets++;
 			stats->rx_bytes += pkt_len;
 			break;
 		case DT_FSTART:
@@ -848,7 +844,7 @@ static bool ravb_rx_gbeth(struct net_device *ndev, int *quota, int q)
 			eth_type_trans(priv->rx_1st_skb, ndev);
 			napi_gro_receive(&priv->napi[q],
 					 priv->rx_1st_skb);
-			stats->rx_packets++;
+			rx_packets++;
 			stats->rx_bytes += pkt_len;
 			break;
 		}
@@ -887,9 +883,9 @@ static bool ravb_rx_gbeth(struct net_device *ndev, int *quota, int q)
 		desc->die_dt = DT_FEMPTY;
 	}
 
-	*quota -= limit - (++boguscnt);
-
-	return boguscnt <= 0;
+	stats->rx_packets += rx_packets;
+	*quota -= rx_packets;
+	return *quota == 0;
 }
 
 /* Packet receive function for Ethernet AVB */
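[The ravb rework above charges the NAPI quota per delivered packet instead of per descriptor walked, since some descriptors (zero-length DMA-error markers, A-MSDU-style multi-descriptor frames) do not correspond one-to-one with packets. A simplified user-space model of that loop shape — not the driver code, just the accounting:]

```c
#include <assert.h>

/* Walk at most `limit` descriptors, stop once `quota` packets have been
 * delivered, and charge the quota per *packet*: descriptors with a zero
 * length carry no packet and must not consume budget. Returns the number
 * of packets delivered and decrements *quota accordingly. */
static int rx_poll(const int *pkt_len, int ndesc, int limit, int *quota)
{
	int rx_packets = 0;
	int i;

	for (i = 0; i < limit && i < ndesc && rx_packets < *quota; i++) {
		if (!pkt_len[i])	/* descriptor without a packet */
			continue;
		rx_packets++;
	}

	*quota -= rx_packets;
	return rx_packets;
}
```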
@@ -830,41 +830,42 @@ static const struct dwxgmac3_error_desc dwxgmac3_dma_errors[32]= {
 	{ false, "UNKNOWN", "Unknown Error" }, /* 31 */
 };
 
-static const char * const dpp_rx_err = "Read Rx Descriptor Parity checker Error";
-static const char * const dpp_tx_err = "Read Tx Descriptor Parity checker Error";
+#define DPP_RX_ERR "Read Rx Descriptor Parity checker Error"
+#define DPP_TX_ERR "Read Tx Descriptor Parity checker Error"
+
 static const struct dwxgmac3_error_desc dwxgmac3_dma_dpp_errors[32] = {
-	{ true, "TDPES0", dpp_tx_err },
-	{ true, "TDPES1", dpp_tx_err },
-	{ true, "TDPES2", dpp_tx_err },
-	{ true, "TDPES3", dpp_tx_err },
-	{ true, "TDPES4", dpp_tx_err },
-	{ true, "TDPES5", dpp_tx_err },
-	{ true, "TDPES6", dpp_tx_err },
-	{ true, "TDPES7", dpp_tx_err },
-	{ true, "TDPES8", dpp_tx_err },
-	{ true, "TDPES9", dpp_tx_err },
-	{ true, "TDPES10", dpp_tx_err },
-	{ true, "TDPES11", dpp_tx_err },
-	{ true, "TDPES12", dpp_tx_err },
-	{ true, "TDPES13", dpp_tx_err },
-	{ true, "TDPES14", dpp_tx_err },
-	{ true, "TDPES15", dpp_tx_err },
-	{ true, "RDPES0", dpp_rx_err },
-	{ true, "RDPES1", dpp_rx_err },
-	{ true, "RDPES2", dpp_rx_err },
-	{ true, "RDPES3", dpp_rx_err },
-	{ true, "RDPES4", dpp_rx_err },
-	{ true, "RDPES5", dpp_rx_err },
-	{ true, "RDPES6", dpp_rx_err },
-	{ true, "RDPES7", dpp_rx_err },
-	{ true, "RDPES8", dpp_rx_err },
-	{ true, "RDPES9", dpp_rx_err },
-	{ true, "RDPES10", dpp_rx_err },
-	{ true, "RDPES11", dpp_rx_err },
-	{ true, "RDPES12", dpp_rx_err },
-	{ true, "RDPES13", dpp_rx_err },
-	{ true, "RDPES14", dpp_rx_err },
-	{ true, "RDPES15", dpp_rx_err },
+	{ true, "TDPES0", DPP_TX_ERR },
+	{ true, "TDPES1", DPP_TX_ERR },
+	{ true, "TDPES2", DPP_TX_ERR },
+	{ true, "TDPES3", DPP_TX_ERR },
+	{ true, "TDPES4", DPP_TX_ERR },
+	{ true, "TDPES5", DPP_TX_ERR },
+	{ true, "TDPES6", DPP_TX_ERR },
+	{ true, "TDPES7", DPP_TX_ERR },
+	{ true, "TDPES8", DPP_TX_ERR },
+	{ true, "TDPES9", DPP_TX_ERR },
+	{ true, "TDPES10", DPP_TX_ERR },
+	{ true, "TDPES11", DPP_TX_ERR },
+	{ true, "TDPES12", DPP_TX_ERR },
+	{ true, "TDPES13", DPP_TX_ERR },
+	{ true, "TDPES14", DPP_TX_ERR },
+	{ true, "TDPES15", DPP_TX_ERR },
+	{ true, "RDPES0", DPP_RX_ERR },
+	{ true, "RDPES1", DPP_RX_ERR },
+	{ true, "RDPES2", DPP_RX_ERR },
+	{ true, "RDPES3", DPP_RX_ERR },
+	{ true, "RDPES4", DPP_RX_ERR },
+	{ true, "RDPES5", DPP_RX_ERR },
+	{ true, "RDPES6", DPP_RX_ERR },
+	{ true, "RDPES7", DPP_RX_ERR },
+	{ true, "RDPES8", DPP_RX_ERR },
+	{ true, "RDPES9", DPP_RX_ERR },
+	{ true, "RDPES10", DPP_RX_ERR },
+	{ true, "RDPES11", DPP_RX_ERR },
+	{ true, "RDPES12", DPP_RX_ERR },
+	{ true, "RDPES13", DPP_RX_ERR },
+	{ true, "RDPES14", DPP_RX_ERR },
+	{ true, "RDPES15", DPP_RX_ERR },
 };
 
 static void dwxgmac3_handle_dma_err(struct net_device *ndev,
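[The stmmac hunk above swaps `static const char * const` variables for `#define` string literals because, in C, the address of another object is not a constant expression usable in a file-scope initializer — a macro expands to a string literal at each use, which is always valid there (and compilers typically deduplicate identical literals). A minimal illustration:]

```c
#include <assert.h>
#include <string.h>

/* With a macro, each table entry is initialized from a string literal,
 * which C accepts in a static initializer; initializing from a
 * `const char *const` variable would not be a constant expression. */
#define DPP_RX_ERR "Read Rx Descriptor Parity checker Error"
#define DPP_TX_ERR "Read Tx Descriptor Parity checker Error"

struct error_desc {
	int valid;
	const char *acronym;
	const char *msg;
};

static const struct error_desc dpp_errors[] = {
	{ 1, "TDPES0", DPP_TX_ERR },
	{ 1, "RDPES0", DPP_RX_ERR },
};
```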
@@ -189,6 +189,7 @@ config TI_ICSSG_PRUETH
 	select TI_K3_CPPI_DESC_POOL
 	depends on PRU_REMOTEPROC
 	depends on ARCH_K3 && OF && TI_K3_UDMA_GLUE_LAYER
+	depends on PTP_1588_CLOCK_OPTIONAL
 	help
 	  Support dual Gigabit Ethernet ports over the ICSSG PRU Subsystem.
 	  This subsystem is available starting with the AM65 platform.
@@ -638,6 +638,16 @@ static void cpts_calc_mult_shift(struct cpts *cpts)
 		 freq, cpts->cc.mult, cpts->cc.shift, (ns - NSEC_PER_SEC));
 }
 
+static void cpts_clk_unregister(void *clk)
+{
+	clk_hw_unregister_mux(clk);
+}
+
+static void cpts_clk_del_provider(void *np)
+{
+	of_clk_del_provider(np);
+}
+
 static int cpts_of_mux_clk_setup(struct cpts *cpts, struct device_node *node)
 {
 	struct device_node *refclk_np;
@@ -687,9 +697,7 @@ static int cpts_of_mux_clk_setup(struct cpts *cpts, struct device_node *node)
 		goto mux_fail;
 	}
 
-	ret = devm_add_action_or_reset(cpts->dev,
-				       (void(*)(void *))clk_hw_unregister_mux,
-				       clk_hw);
+	ret = devm_add_action_or_reset(cpts->dev, cpts_clk_unregister, clk_hw);
 	if (ret) {
 		dev_err(cpts->dev, "add clkmux unreg action %d", ret);
 		goto mux_fail;
@@ -699,8 +707,7 @@ static int cpts_of_mux_clk_setup(struct cpts *cpts, struct device_node *node)
 	if (ret)
 		goto mux_fail;
 
-	ret = devm_add_action_or_reset(cpts->dev,
-				       (void(*)(void *))of_clk_del_provider,
+	ret = devm_add_action_or_reset(cpts->dev, cpts_clk_del_provider,
 				       refclk_np);
 	if (ret) {
 		dev_err(cpts->dev, "add clkmux provider unreg action %d", ret);
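[The cpts hunks above replace `(void (*)(void *))` casts on cleanup callbacks with tiny wrapper functions. Calling a function through a pointer of a different type is undefined behaviour and trips kernel Control Flow Integrity checking; a wrapper with the exact expected signature is the portable fix. A user-space sketch of the same pattern (all names here are illustrative):]

```c
#include <assert.h>

static int unregistered;

struct mux { int id; };

/* A typed cleanup API, analogous to clk_hw_unregister_mux(). */
static void unregister_mux(struct mux *m)
{
	(void)m;
	unregistered = 1;
}

/* Wrapper with the exact void(void *) signature the action framework
 * expects; no function-pointer cast is needed anywhere. */
static void mux_cleanup(void *data)
{
	unregister_mux(data);
}

typedef void (*cleanup_fn)(void *);

/* Stand-in for devm_add_action_or_reset() invoking the action later. */
static void run_cleanup(cleanup_fn fn, void *data)
{
	fn(data);	/* called through the correct pointer type */
}
```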
@@ -153,6 +153,7 @@ static const struct pci_device_id skfddi_pci_tbl[] = {
 	{ }			/* Terminating entry */
 };
 MODULE_DEVICE_TABLE(pci, skfddi_pci_tbl);
+MODULE_DESCRIPTION("SysKonnect FDDI PCI driver");
 MODULE_LICENSE("GPL");
 MODULE_AUTHOR("Mirko Lindner <mlindner@syskonnect.de>");
 
@@ -259,4 +259,5 @@ static __exit void fake_remove_module(void)
 
 module_init(fakelb_init_module);
 module_exit(fake_remove_module);
+MODULE_DESCRIPTION("IEEE 802.15.4 loopback driver");
 MODULE_LICENSE("GPL");
 
@@ -237,4 +237,5 @@ static void __exit ipvtap_exit(void)
 module_exit(ipvtap_exit);
 MODULE_ALIAS_RTNL_LINK("ipvtap");
 MODULE_AUTHOR("Sainath Grandhi <sainath.grandhi@intel.com>");
+MODULE_DESCRIPTION("IP-VLAN based tap driver");
 MODULE_LICENSE("GPL");
 
@@ -131,4 +131,5 @@ int __devm_of_mdiobus_register(struct device *dev, struct mii_bus *mdio,
 EXPORT_SYMBOL(__devm_of_mdiobus_register);
 #endif /* CONFIG_OF_MDIO */
 
+MODULE_DESCRIPTION("Network MDIO bus devres helpers");
 MODULE_LICENSE("GPL");
 
@@ -1437,4 +1437,5 @@ static int __init plip_init (void)
 
 module_init(plip_init);
 module_exit(plip_cleanup_module);
+MODULE_DESCRIPTION("PLIP (parallel port) network module");
 MODULE_LICENSE("GPL");
 
@@ -1166,5 +1166,6 @@ static void __exit bsdcomp_cleanup(void)
 
 module_init(bsdcomp_init);
 module_exit(bsdcomp_cleanup);
+MODULE_DESCRIPTION("PPP BSD-Compress compression module");
 MODULE_LICENSE("Dual BSD/GPL");
 MODULE_ALIAS("ppp-compress-" __stringify(CI_BSD_COMPRESS));
 
@@ -87,6 +87,7 @@ struct asyncppp {
 static int flag_time = HZ;
 module_param(flag_time, int, 0);
 MODULE_PARM_DESC(flag_time, "ppp_async: interval between flagged packets (in clock ticks)");
+MODULE_DESCRIPTION("PPP async serial channel module");
 MODULE_LICENSE("GPL");
 MODULE_ALIAS_LDISC(N_PPP);
 
@@ -630,6 +630,7 @@ static void __exit deflate_cleanup(void)
 
 module_init(deflate_init);
 module_exit(deflate_cleanup);
+MODULE_DESCRIPTION("PPP Deflate compression module");
 MODULE_LICENSE("Dual BSD/GPL");
 MODULE_ALIAS("ppp-compress-" __stringify(CI_DEFLATE));
 MODULE_ALIAS("ppp-compress-" __stringify(CI_DEFLATE_DRAFT));
 
@@ -3604,6 +3604,7 @@ EXPORT_SYMBOL(ppp_input_error);
 EXPORT_SYMBOL(ppp_output_wakeup);
 EXPORT_SYMBOL(ppp_register_compressor);
 EXPORT_SYMBOL(ppp_unregister_compressor);
+MODULE_DESCRIPTION("Generic PPP layer driver");
 MODULE_LICENSE("GPL");
 MODULE_ALIAS_CHARDEV(PPP_MAJOR, 0);
 MODULE_ALIAS_RTNL_LINK("ppp");
 
@@ -724,5 +724,6 @@ ppp_sync_cleanup(void)
 
 module_init(ppp_sync_init);
 module_exit(ppp_sync_cleanup);
+MODULE_DESCRIPTION("PPP synchronous TTY channel module");
 MODULE_LICENSE("GPL");
 MODULE_ALIAS_LDISC(N_SYNC_PPP);
@@ -1007,26 +1007,21 @@ static int pppoe_recvmsg(struct socket *sock, struct msghdr *m,
 	struct sk_buff *skb;
 	int error = 0;
 
-	if (sk->sk_state & PPPOX_BOUND) {
-		error = -EIO;
-		goto end;
-	}
+	if (sk->sk_state & PPPOX_BOUND)
+		return -EIO;
 
 	skb = skb_recv_datagram(sk, flags, &error);
-	if (error < 0)
-		goto end;
+	if (!skb)
+		return error;
 
-	if (skb) {
-		total_len = min_t(size_t, total_len, skb->len);
-		error = skb_copy_datagram_msg(skb, 0, m, total_len);
-		if (error == 0) {
-			consume_skb(skb);
-			return total_len;
-		}
+	total_len = min_t(size_t, total_len, skb->len);
+	error = skb_copy_datagram_msg(skb, 0, m, total_len);
+	if (error == 0) {
+		consume_skb(skb);
+		return total_len;
 	}
 
 	kfree_skb(skb);
-end:
 	return error;
 }
@@ -618,7 +618,7 @@ int iwl_sar_get_wrds_table(struct iwl_fw_runtime *fwrt)
 					 &tbl_rev);
 	if (!IS_ERR(wifi_pkg)) {
 		if (tbl_rev != 2) {
-			ret = PTR_ERR(wifi_pkg);
+			ret = -EINVAL;
 			goto out_free;
 		}
 
@@ -634,7 +634,7 @@ int iwl_sar_get_wrds_table(struct iwl_fw_runtime *fwrt)
 					 &tbl_rev);
 	if (!IS_ERR(wifi_pkg)) {
 		if (tbl_rev != 1) {
-			ret = PTR_ERR(wifi_pkg);
+			ret = -EINVAL;
 			goto out_free;
 		}
 
@@ -650,7 +650,7 @@ int iwl_sar_get_wrds_table(struct iwl_fw_runtime *fwrt)
 					 &tbl_rev);
 	if (!IS_ERR(wifi_pkg)) {
 		if (tbl_rev != 0) {
-			ret = PTR_ERR(wifi_pkg);
+			ret = -EINVAL;
 			goto out_free;
 		}
 
@@ -707,7 +707,7 @@ int iwl_sar_get_ewrd_table(struct iwl_fw_runtime *fwrt)
 					 &tbl_rev);
 	if (!IS_ERR(wifi_pkg)) {
 		if (tbl_rev != 2) {
-			ret = PTR_ERR(wifi_pkg);
+			ret = -EINVAL;
 			goto out_free;
 		}
 
@@ -723,7 +723,7 @@ int iwl_sar_get_ewrd_table(struct iwl_fw_runtime *fwrt)
 					 &tbl_rev);
 	if (!IS_ERR(wifi_pkg)) {
 		if (tbl_rev != 1) {
-			ret = PTR_ERR(wifi_pkg);
+			ret = -EINVAL;
 			goto out_free;
 		}
 
@@ -739,7 +739,7 @@ int iwl_sar_get_ewrd_table(struct iwl_fw_runtime *fwrt)
 					 &tbl_rev);
 	if (!IS_ERR(wifi_pkg)) {
 		if (tbl_rev != 0) {
-			ret = PTR_ERR(wifi_pkg);
+			ret = -EINVAL;
 			goto out_free;
 		}
 
@@ -1116,6 +1116,9 @@ int iwl_acpi_get_ppag_table(struct iwl_fw_runtime *fwrt)
 		goto read_table;
 	}
 
+	ret = PTR_ERR(wifi_pkg);
+	goto out_free;
+
 read_table:
 	fwrt->ppag_ver = tbl_rev;
 	flags = &wifi_pkg->package.elements[1];
@@ -3687,6 +3687,9 @@ iwl_mvm_sta_state_notexist_to_none(struct iwl_mvm *mvm,
 					   NL80211_TDLS_SETUP);
 	}
 
+	if (ret)
+		return ret;
+
 	for_each_sta_active_link(vif, sta, link_sta, i)
 		link_sta->agg.max_rc_amsdu_len = 1;
 
@@ -505,6 +505,10 @@ static bool iwl_mvm_is_dup(struct ieee80211_sta *sta, int queue,
 		return false;
 
 	mvm_sta = iwl_mvm_sta_from_mac80211(sta);
+
+	if (WARN_ON_ONCE(!mvm_sta->dup_data))
+		return false;
+
 	dup_data = &mvm_sta->dup_data[queue];
 
 	/*
@@ -1,6 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
 /*
- * Copyright (C) 2012-2014, 2018-2023 Intel Corporation
+ * Copyright (C) 2012-2014, 2018-2024 Intel Corporation
  * Copyright (C) 2013-2015 Intel Mobile Communications GmbH
  * Copyright (C) 2017 Intel Deutschland GmbH
  */
@@ -972,6 +972,7 @@ void iwl_mvm_rx_session_protect_notif(struct iwl_mvm *mvm,
 	if (!le32_to_cpu(notif->status) || !le32_to_cpu(notif->start)) {
 		/* End TE, notify mac80211 */
 		mvmvif->time_event_data.id = SESSION_PROTECT_CONF_MAX_ID;
+		mvmvif->time_event_data.link_id = -1;
 		iwl_mvm_p2p_roc_finished(mvm);
 		ieee80211_remain_on_channel_expired(mvm->hw);
 	} else if (le32_to_cpu(notif->start)) {
@@ -520,13 +520,24 @@ static void iwl_mvm_set_tx_cmd_crypto(struct iwl_mvm *mvm,
 	}
 }
 
+static void iwl_mvm_copy_hdr(void *cmd, const void *hdr, int hdrlen,
+			     const u8 *addr3_override)
+{
+	struct ieee80211_hdr *out_hdr = cmd;
+
+	memcpy(cmd, hdr, hdrlen);
+	if (addr3_override)
+		memcpy(out_hdr->addr3, addr3_override, ETH_ALEN);
+}
+
 /*
  * Allocates and sets the Tx cmd the driver data pointers in the skb
  */
 static struct iwl_device_tx_cmd *
 iwl_mvm_set_tx_params(struct iwl_mvm *mvm, struct sk_buff *skb,
 		      struct ieee80211_tx_info *info, int hdrlen,
-		      struct ieee80211_sta *sta, u8 sta_id)
+		      struct ieee80211_sta *sta, u8 sta_id,
+		      const u8 *addr3_override)
 {
 	struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
 	struct iwl_device_tx_cmd *dev_cmd;
@@ -584,7 +595,7 @@ iwl_mvm_set_tx_params(struct iwl_mvm *mvm, struct sk_buff *skb,
 			cmd->len = cpu_to_le16((u16)skb->len);
 
 			/* Copy MAC header from skb into command buffer */
-			memcpy(cmd->hdr, hdr, hdrlen);
+			iwl_mvm_copy_hdr(cmd->hdr, hdr, hdrlen, addr3_override);
 
 			cmd->flags = cpu_to_le16(flags);
 			cmd->rate_n_flags = cpu_to_le32(rate_n_flags);
@@ -599,7 +610,7 @@ iwl_mvm_set_tx_params(struct iwl_mvm *mvm, struct sk_buff *skb,
 			cmd->len = cpu_to_le16((u16)skb->len);
 
 			/* Copy MAC header from skb into command buffer */
-			memcpy(cmd->hdr, hdr, hdrlen);
+			iwl_mvm_copy_hdr(cmd->hdr, hdr, hdrlen, addr3_override);
 
 			cmd->flags = cpu_to_le32(flags);
 			cmd->rate_n_flags = cpu_to_le32(rate_n_flags);
@@ -617,7 +628,7 @@ iwl_mvm_set_tx_params(struct iwl_mvm *mvm, struct sk_buff *skb,
 	iwl_mvm_set_tx_cmd_rate(mvm, tx_cmd, info, sta, hdr->frame_control);
 
 	/* Copy MAC header from skb into command buffer */
-	memcpy(tx_cmd->hdr, hdr, hdrlen);
+	iwl_mvm_copy_hdr(tx_cmd->hdr, hdr, hdrlen, addr3_override);
 
 out:
 	return dev_cmd;
@@ -820,7 +831,8 @@ int iwl_mvm_tx_skb_non_sta(struct iwl_mvm *mvm, struct sk_buff *skb)
 
 	IWL_DEBUG_TX(mvm, "station Id %d, queue=%d\n", sta_id, queue);
 
-	dev_cmd = iwl_mvm_set_tx_params(mvm, skb, &info, hdrlen, NULL, sta_id);
+	dev_cmd = iwl_mvm_set_tx_params(mvm, skb, &info, hdrlen, NULL, sta_id,
+					NULL);
 	if (!dev_cmd)
 		return -1;
 
@@ -1140,7 +1152,8 @@ static int iwl_mvm_tx_pkt_queued(struct iwl_mvm *mvm,
 */
 static int iwl_mvm_tx_mpdu(struct iwl_mvm *mvm, struct sk_buff *skb,
 			   struct ieee80211_tx_info *info,
-			   struct ieee80211_sta *sta)
+			   struct ieee80211_sta *sta,
+			   const u8 *addr3_override)
 {
 	struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
 	struct iwl_mvm_sta *mvmsta;
@@ -1172,7 +1185,8 @@ static int iwl_mvm_tx_mpdu(struct iwl_mvm *mvm, struct sk_buff *skb,
 	iwl_mvm_probe_resp_set_noa(mvm, skb);
 
 	dev_cmd = iwl_mvm_set_tx_params(mvm, skb, info, hdrlen,
-					sta, mvmsta->deflink.sta_id);
+					sta, mvmsta->deflink.sta_id,
+					addr3_override);
 	if (!dev_cmd)
 		goto drop;
 
@@ -1294,9 +1308,11 @@ int iwl_mvm_tx_skb_sta(struct iwl_mvm *mvm, struct sk_buff *skb,
 	struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta);
 	struct ieee80211_tx_info info;
 	struct sk_buff_head mpdus_skbs;
+	struct ieee80211_vif *vif;
 	unsigned int payload_len;
 	int ret;
 	struct sk_buff *orig_skb = skb;
+	const u8 *addr3;
 
 	if (WARN_ON_ONCE(!mvmsta))
 		return -1;
@@ -1307,26 +1323,59 @@ int iwl_mvm_tx_skb_sta(struct iwl_mvm *mvm, struct sk_buff *skb,
 	memcpy(&info, skb->cb, sizeof(info));
 
 	if (!skb_is_gso(skb))
-		return iwl_mvm_tx_mpdu(mvm, skb, &info, sta);
+		return iwl_mvm_tx_mpdu(mvm, skb, &info, sta, NULL);
 
 	payload_len = skb_tail_pointer(skb) - skb_transport_header(skb) -
 		tcp_hdrlen(skb) + skb->data_len;
 
 	if (payload_len <= skb_shinfo(skb)->gso_size)
-		return iwl_mvm_tx_mpdu(mvm, skb, &info, sta);
+		return iwl_mvm_tx_mpdu(mvm, skb, &info, sta, NULL);
 
 	__skb_queue_head_init(&mpdus_skbs);
 
+	vif = info.control.vif;
+	if (!vif)
+		return -1;
+
 	ret = iwl_mvm_tx_tso(mvm, skb, &info, sta, &mpdus_skbs);
 	if (ret)
 		return ret;
 
 	WARN_ON(skb_queue_empty(&mpdus_skbs));
 
-	while (!skb_queue_empty(&mpdus_skbs)) {
-		skb = __skb_dequeue(&mpdus_skbs);
+	/*
+	 * As described in IEEE sta 802.11-2020, table 9-30 (Address
+	 * field contents), A-MSDU address 3 should contain the BSSID
+	 * address.
+	 * Pass address 3 down to iwl_mvm_tx_mpdu() and further to set it
+	 * in the command header. We need to preserve the original
+	 * address 3 in the skb header to correctly create all the
+	 * A-MSDU subframe headers from it.
+	 */
+	switch (vif->type) {
+	case NL80211_IFTYPE_STATION:
+		addr3 = vif->cfg.ap_addr;
+		break;
+	case NL80211_IFTYPE_AP:
+		addr3 = vif->addr;
+		break;
+	default:
+		addr3 = NULL;
+		break;
+	}
 
-		ret = iwl_mvm_tx_mpdu(mvm, skb, &info, sta);
+	while (!skb_queue_empty(&mpdus_skbs)) {
+		struct ieee80211_hdr *hdr;
+		bool amsdu;
+
+		skb = __skb_dequeue(&mpdus_skbs);
+		hdr = (void *)skb->data;
+		amsdu = ieee80211_is_data_qos(hdr->frame_control) &&
+			(*ieee80211_get_qos_ctl(hdr) &
+			 IEEE80211_QOS_CTL_A_MSDU_PRESENT);
+
+		ret = iwl_mvm_tx_mpdu(mvm, skb, &info, sta,
+				      amsdu ? addr3 : NULL);
 		if (ret) {
 			/* Free skbs created as part of TSO logic that have not yet been dequeued */
 			__skb_queue_purge(&mpdus_skbs);
@@ -1778,5 +1778,6 @@ static void __exit netback_fini(void)
 }
 module_exit(netback_fini);
 
+MODULE_DESCRIPTION("Xen backend network device module");
 MODULE_LICENSE("Dual BSD/GPL");
 MODULE_ALIAS("xen-backend:vif");
@@ -2141,6 +2141,11 @@ struct net_device {
 
 	/* TXRX read-mostly hotpath */
 	__cacheline_group_begin(net_device_read_txrx);
+	union {
+		struct pcpu_lstats __percpu		*lstats;
+		struct pcpu_sw_netstats __percpu	*tstats;
+		struct pcpu_dstats __percpu		*dstats;
+	};
 	unsigned int		flags;
 	unsigned short		hard_header_len;
 	netdev_features_t	features;
@@ -2395,11 +2400,6 @@ struct net_device {
 	enum netdev_ml_priv_type	ml_priv_type;
 
 	enum netdev_stat_type		pcpu_stat_type:8;
-	union {
-		struct pcpu_lstats __percpu		*lstats;
-		struct pcpu_sw_netstats __percpu	*tstats;
-		struct pcpu_dstats __percpu		*dstats;
-	};
 
 #if IS_ENABLED(CONFIG_GARP)
 	struct garp_port __rcu *garp_port;
@@ -221,8 +221,10 @@ struct tcp_sock {
 	u32	lost_out;	/* Lost packets			*/
 	u32	sacked_out;	/* SACK'd packets			*/
 	u16	tcp_header_len;	/* Bytes of tcp header to send		*/
+	u8	scaling_ratio;	/* see tcp_win_from_space() */
 	u8	chrono_type : 2,	/* current chronograph type */
 		repair      : 1,
+		tcp_usec_ts : 1, /* TSval values in usec */
 		is_sack_reneg:1,    /* in recovery from loss with SACK reneg? */
 		is_cwnd_limited:1;/* forward progress limited by snd_cwnd? */
 	__cacheline_group_end(tcp_sock_read_txrx);
@@ -352,7 +354,6 @@ struct tcp_sock {
 	u32	compressed_ack_rcv_nxt;
 	struct list_head tsq_node;	/* anchor in tsq_tasklet.head list */
 
-	u8	scaling_ratio;	/* see tcp_win_from_space() */
 	/* Information of the most recently (s)acked skb */
 	struct tcp_rack {
 		u64 mstamp; /* (Re)sent time of the skb */
@@ -368,8 +369,7 @@ struct tcp_sock {
 	u8	compressed_ack;
 	u8	dup_ack_counter:2,
 		tlp_retrans:1,	/* TLP is a retransmission */
-		tcp_usec_ts:1, /* TSval values in usec */
-		unused:4;
+		unused:5;
 	u8	thin_lto    : 1,/* Use linear timeouts for thin streams */
 		recvmsg_inq : 1,/* Indicate # of bytes in queue upon recvmsg */
 		fastopen_connect:1, /* FASTOPEN_CONNECT sockopt */
@@ -97,9 +97,6 @@ struct tls_sw_context_tx {
 	struct tls_rec *open_rec;
 	struct list_head tx_list;
 	atomic_t encrypt_pending;
-	/* protect crypto_wait with encrypt_pending */
-	spinlock_t encrypt_compl_lock;
-	int async_notify;
 	u8 async_capable:1;
 
 #define BIT_TX_SCHEDULED	0
@@ -136,8 +133,6 @@ struct tls_sw_context_rx {
 	struct tls_strparser strp;
 
 	atomic_t decrypt_pending;
-	/* protect crypto_wait with decrypt_pending*/
-	spinlock_t decrypt_compl_lock;
 	struct sk_buff_head async_hold;
 	struct wait_queue_head wq;
 };
@@ -179,4 +179,5 @@ static void __exit lowpan_module_exit(void)
 module_init(lowpan_module_init);
 module_exit(lowpan_module_exit);
 
+MODULE_DESCRIPTION("IPv6 over Low-Power Wireless Personal Area Network core module");
 MODULE_LICENSE("GPL");
@@ -1532,4 +1532,5 @@ static void __exit atm_mpoa_cleanup(void)
 module_init(atm_mpoa_init);
 module_exit(atm_mpoa_cleanup);
 
+MODULE_DESCRIPTION("Multi-Protocol Over ATM (MPOA) driver");
 MODULE_LICENSE("GPL");
@@ -86,7 +86,7 @@ struct j1939_priv {
 	unsigned int tp_max_packet_size;
 
 	/* lock for j1939_socks list */
-	spinlock_t j1939_socks_lock;
+	rwlock_t j1939_socks_lock;
 	struct list_head j1939_socks;
 
 	struct kref rx_kref;
@@ -301,6 +301,7 @@ struct j1939_sock {
 
 	int ifindex;
 	struct j1939_addr addr;
+	spinlock_t filters_lock;
 	struct j1939_filter *filters;
 	int nfilters;
 	pgn_t pgn_rx_filter;
@@ -274,7 +274,7 @@ struct j1939_priv *j1939_netdev_start(struct net_device *ndev)
 		return ERR_PTR(-ENOMEM);
 
 	j1939_tp_init(priv);
-	spin_lock_init(&priv->j1939_socks_lock);
+	rwlock_init(&priv->j1939_socks_lock);
 	INIT_LIST_HEAD(&priv->j1939_socks);
 
 	mutex_lock(&j1939_netdev_lock);
@@ -80,16 +80,16 @@ static void j1939_jsk_add(struct j1939_priv *priv, struct j1939_sock *jsk)
 	jsk->state |= J1939_SOCK_BOUND;
 	j1939_priv_get(priv);
 
-	spin_lock_bh(&priv->j1939_socks_lock);
+	write_lock_bh(&priv->j1939_socks_lock);
 	list_add_tail(&jsk->list, &priv->j1939_socks);
-	spin_unlock_bh(&priv->j1939_socks_lock);
+	write_unlock_bh(&priv->j1939_socks_lock);
 }
 
 static void j1939_jsk_del(struct j1939_priv *priv, struct j1939_sock *jsk)
 {
-	spin_lock_bh(&priv->j1939_socks_lock);
+	write_lock_bh(&priv->j1939_socks_lock);
 	list_del_init(&jsk->list);
-	spin_unlock_bh(&priv->j1939_socks_lock);
+	write_unlock_bh(&priv->j1939_socks_lock);
 
 	j1939_priv_put(priv);
 	jsk->state &= ~J1939_SOCK_BOUND;
@@ -262,12 +262,17 @@ static bool j1939_sk_match_dst(struct j1939_sock *jsk,
 static bool j1939_sk_match_filter(struct j1939_sock *jsk,
 				  const struct j1939_sk_buff_cb *skcb)
 {
-	const struct j1939_filter *f = jsk->filters;
-	int nfilter = jsk->nfilters;
+	const struct j1939_filter *f;
+	int nfilter;
+
+	spin_lock_bh(&jsk->filters_lock);
+
+	f = jsk->filters;
+	nfilter = jsk->nfilters;
 
 	if (!nfilter)
 		/* receive all when no filters are assigned */
-		return true;
+		goto filter_match_found;
 
 	for (; nfilter; ++f, --nfilter) {
 		if ((skcb->addr.pgn & f->pgn_mask) != f->pgn)
@@ -276,9 +281,15 @@ static bool j1939_sk_match_filter(struct j1939_sock *jsk,
 			continue;
 		if ((skcb->addr.src_name & f->name_mask) != f->name)
 			continue;
-		return true;
+		goto filter_match_found;
 	}
 
+	spin_unlock_bh(&jsk->filters_lock);
 	return false;
+
+filter_match_found:
+	spin_unlock_bh(&jsk->filters_lock);
+	return true;
 }
 
 static bool j1939_sk_recv_match_one(struct j1939_sock *jsk,
@@ -329,13 +340,13 @@ bool j1939_sk_recv_match(struct j1939_priv *priv, struct j1939_sk_buff_cb *skcb)
 	struct j1939_sock *jsk;
 	bool match = false;
 
-	spin_lock_bh(&priv->j1939_socks_lock);
+	read_lock_bh(&priv->j1939_socks_lock);
 	list_for_each_entry(jsk, &priv->j1939_socks, list) {
 		match = j1939_sk_recv_match_one(jsk, skcb);
 		if (match)
 			break;
 	}
-	spin_unlock_bh(&priv->j1939_socks_lock);
+	read_unlock_bh(&priv->j1939_socks_lock);
 
 	return match;
 }
@@ -344,11 +355,11 @@ void j1939_sk_recv(struct j1939_priv *priv, struct sk_buff *skb)
 {
 	struct j1939_sock *jsk;
 
-	spin_lock_bh(&priv->j1939_socks_lock);
+	read_lock_bh(&priv->j1939_socks_lock);
 	list_for_each_entry(jsk, &priv->j1939_socks, list) {
 		j1939_sk_recv_one(jsk, skb);
 	}
-	spin_unlock_bh(&priv->j1939_socks_lock);
+	read_unlock_bh(&priv->j1939_socks_lock);
 }
 
 static void j1939_sk_sock_destruct(struct sock *sk)
@@ -401,6 +412,7 @@ static int j1939_sk_init(struct sock *sk)
 	atomic_set(&jsk->skb_pending, 0);
 	spin_lock_init(&jsk->sk_session_queue_lock);
 	INIT_LIST_HEAD(&jsk->sk_session_queue);
+	spin_lock_init(&jsk->filters_lock);
 
 	/* j1939_sk_sock_destruct() depends on SOCK_RCU_FREE flag */
 	sock_set_flag(sk, SOCK_RCU_FREE);
@@ -703,9 +715,11 @@ static int j1939_sk_setsockopt(struct socket *sock, int level, int optname,
 		}
 
 		lock_sock(&jsk->sk);
+		spin_lock_bh(&jsk->filters_lock);
 		ofilters = jsk->filters;
 		jsk->filters = filters;
 		jsk->nfilters = count;
+		spin_unlock_bh(&jsk->filters_lock);
 		release_sock(&jsk->sk);
 		kfree(ofilters);
 		return 0;
@@ -1080,12 +1094,12 @@ void j1939_sk_errqueue(struct j1939_session *session,
 	}
 
 	/* spread RX notifications to all sockets subscribed to this session */
-	spin_lock_bh(&priv->j1939_socks_lock);
+	read_lock_bh(&priv->j1939_socks_lock);
 	list_for_each_entry(jsk, &priv->j1939_socks, list) {
 		if (j1939_sk_recv_match_one(jsk, &session->skcb))
 			__j1939_sk_errqueue(session, &jsk->sk, type);
 	}
-	spin_unlock_bh(&priv->j1939_socks_lock);
+	read_unlock_bh(&priv->j1939_socks_lock);
 };
 
 void j1939_sk_send_loop_abort(struct sock *sk, int err)
@@ -1273,7 +1287,7 @@ void j1939_sk_netdev_event_netdown(struct j1939_priv *priv)
 	struct j1939_sock *jsk;
 	int error_code = ENETDOWN;
 
-	spin_lock_bh(&priv->j1939_socks_lock);
+	read_lock_bh(&priv->j1939_socks_lock);
 	list_for_each_entry(jsk, &priv->j1939_socks, list) {
 		jsk->sk.sk_err = error_code;
 		if (!sock_flag(&jsk->sk, SOCK_DEAD))
@@ -1281,7 +1295,7 @@ void j1939_sk_netdev_event_netdown(struct j1939_priv *priv)
 
 		j1939_sk_queue_drop_all(priv, jsk, error_code);
 	}
-	spin_unlock_bh(&priv->j1939_socks_lock);
+	read_unlock_bh(&priv->j1939_socks_lock);
 }
 
 static int j1939_sk_no_ioctlcmd(struct socket *sock, unsigned int cmd,
@@ -336,7 +336,7 @@ int netdev_name_node_alt_create(struct net_device *dev, const char *name)
 		return -ENOMEM;
 	netdev_name_node_add(net, name_node);
 	/* The node that holds dev->name acts as a head of per-device list. */
-	list_add_tail(&name_node->list, &dev->name_node->list);
+	list_add_tail_rcu(&name_node->list, &dev->name_node->list);
 
 	return 0;
 }
@@ -11652,11 +11652,12 @@ static void __init net_dev_struct_check(void)
 	CACHELINE_ASSERT_GROUP_SIZE(struct net_device, net_device_read_tx, 160);
 
 	/* TXRX read-mostly hotpath */
+	CACHELINE_ASSERT_GROUP_MEMBER(struct net_device, net_device_read_txrx, lstats);
 	CACHELINE_ASSERT_GROUP_MEMBER(struct net_device, net_device_read_txrx, flags);
 	CACHELINE_ASSERT_GROUP_MEMBER(struct net_device, net_device_read_txrx, hard_header_len);
 	CACHELINE_ASSERT_GROUP_MEMBER(struct net_device, net_device_read_txrx, features);
 	CACHELINE_ASSERT_GROUP_MEMBER(struct net_device, net_device_read_txrx, ip6_ptr);
-	CACHELINE_ASSERT_GROUP_SIZE(struct net_device, net_device_read_txrx, 30);
+	CACHELINE_ASSERT_GROUP_SIZE(struct net_device, net_device_read_txrx, 38);
 
 	/* RX read-mostly hotpath */
 	CACHELINE_ASSERT_GROUP_MEMBER(struct net_device, net_device_read_rx, ptype_specific);
@@ -1020,14 +1020,17 @@ static size_t rtnl_xdp_size(void)
 static size_t rtnl_prop_list_size(const struct net_device *dev)
 {
 	struct netdev_name_node *name_node;
-	size_t size;
+	unsigned int cnt = 0;
+
+	rcu_read_lock();
+	list_for_each_entry_rcu(name_node, &dev->name_node->list, list)
+		cnt++;
+	rcu_read_unlock();
 
-	if (list_empty(&dev->name_node->list))
+	if (!cnt)
 		return 0;
-	size = nla_total_size(0);
-	list_for_each_entry(name_node, &dev->name_node->list, list)
-		size += nla_total_size(ALTIFNAMSIZ);
-	return size;
+
+	return nla_total_size(0) + cnt * nla_total_size(ALTIFNAMSIZ);
 }
 
 static size_t rtnl_proto_down_size(const struct net_device *dev)
@@ -471,7 +471,10 @@ static void handshake_req_destroy_test1(struct kunit *test)
 	handshake_req_cancel(sock->sk);
 
 	/* Act */
-	fput(filp);
+	/* Ensure the close/release/put process has run to
+	 * completion before checking the result.
+	 */
+	__fput_sync(filp);
 
 	/* Assert */
 	KUNIT_EXPECT_PTR_EQ(test, handshake_req_destroy_test, req);
@@ -597,5 +597,6 @@ static void __exit ah4_fini(void)
 
 module_init(ah4_init);
 module_exit(ah4_fini);
+MODULE_DESCRIPTION("IPv4 AH transformation library");
 MODULE_LICENSE("GPL");
 MODULE_ALIAS_XFRM_TYPE(AF_INET, XFRM_PROTO_AH);
@@ -1247,5 +1247,6 @@ static void __exit esp4_fini(void)
 
 module_init(esp4_init);
 module_exit(esp4_fini);
+MODULE_DESCRIPTION("IPv4 ESP transformation library");
 MODULE_LICENSE("GPL");
 MODULE_ALIAS_XFRM_TYPE(AF_INET, XFRM_PROTO_ESP);
@@ -1793,6 +1793,7 @@ static void __exit ipgre_fini(void)
 
 module_init(ipgre_init);
 module_exit(ipgre_fini);
+MODULE_DESCRIPTION("IPv4 GRE tunnels over IP library");
 MODULE_LICENSE("GPL");
 MODULE_ALIAS_RTNL_LINK("gre");
 MODULE_ALIAS_RTNL_LINK("gretap");
@@ -972,8 +972,8 @@ static int __ip_append_data(struct sock *sk,
 	unsigned int maxfraglen, fragheaderlen, maxnonfragsize;
 	int csummode = CHECKSUM_NONE;
 	struct rtable *rt = (struct rtable *)cork->dst;
+	bool paged, hold_tskey, extra_uref = false;
 	unsigned int wmem_alloc_delta = 0;
-	bool paged, extra_uref = false;
 	u32 tskey = 0;
 
 	skb = skb_peek_tail(queue);
@@ -982,10 +982,6 @@ static int __ip_append_data(struct sock *sk,
 	mtu = cork->gso_size ? IP_MAX_MTU : cork->fragsize;
 	paged = !!cork->gso_size;
 
-	if (cork->tx_flags & SKBTX_ANY_TSTAMP &&
-	    READ_ONCE(sk->sk_tsflags) & SOF_TIMESTAMPING_OPT_ID)
-		tskey = atomic_inc_return(&sk->sk_tskey) - 1;
-
 	hh_len = LL_RESERVED_SPACE(rt->dst.dev);
 
 	fragheaderlen = sizeof(struct iphdr) + (opt ? opt->optlen : 0);
@@ -1052,6 +1048,11 @@ static int __ip_append_data(struct sock *sk,
 
 	cork->length += length;
 
+	hold_tskey = cork->tx_flags & SKBTX_ANY_TSTAMP &&
+		     READ_ONCE(sk->sk_tsflags) & SOF_TIMESTAMPING_OPT_ID;
+	if (hold_tskey)
+		tskey = atomic_inc_return(&sk->sk_tskey) - 1;
+
 	/* So, what's going on in the loop below?
 	 *
 	 * We use calculated fragment length to generate chained skb,
@@ -1274,6 +1275,8 @@ error:
 	cork->length -= length;
 	IP_INC_STATS(sock_net(sk), IPSTATS_MIB_OUTDISCARDS);
 	refcount_add(wmem_alloc_delta, &sk->sk_wmem_alloc);
+	if (hold_tskey)
+		atomic_dec(&sk->sk_tskey);
 	return err;
 }
 
@@ -1298,4 +1298,5 @@ void ip_tunnel_setup(struct net_device *dev, unsigned int net_id)
 }
 EXPORT_SYMBOL_GPL(ip_tunnel_setup);
 
+MODULE_DESCRIPTION("IPv4 tunnel implementation library");
 MODULE_LICENSE("GPL");
@@ -721,6 +721,7 @@ static void __exit vti_fini(void)
 
 module_init(vti_init);
 module_exit(vti_fini);
+MODULE_DESCRIPTION("Virtual (secure) IP tunneling library");
 MODULE_LICENSE("GPL");
 MODULE_ALIAS_RTNL_LINK("vti");
 MODULE_ALIAS_NETDEV("ip_vti0");
@@ -658,6 +658,7 @@ static void __exit ipip_fini(void)
 
 module_init(ipip_init);
 module_exit(ipip_fini);
+MODULE_DESCRIPTION("IP/IP protocol decoder library");
 MODULE_LICENSE("GPL");
 MODULE_ALIAS_RTNL_LINK("ipip");
 MODULE_ALIAS_NETDEV("tunl0");
@@ -4615,7 +4615,8 @@ static void __init tcp_struct_check(void)
 	CACHELINE_ASSERT_GROUP_MEMBER(struct tcp_sock, tcp_sock_read_txrx, prr_out);
 	CACHELINE_ASSERT_GROUP_MEMBER(struct tcp_sock, tcp_sock_read_txrx, lost_out);
 	CACHELINE_ASSERT_GROUP_MEMBER(struct tcp_sock, tcp_sock_read_txrx, sacked_out);
-	CACHELINE_ASSERT_GROUP_SIZE(struct tcp_sock, tcp_sock_read_txrx, 31);
+	CACHELINE_ASSERT_GROUP_MEMBER(struct tcp_sock, tcp_sock_read_txrx, scaling_ratio);
+	CACHELINE_ASSERT_GROUP_SIZE(struct tcp_sock, tcp_sock_read_txrx, 32);
 
 	/* RX read-mostly hotpath cache lines */
 	CACHELINE_ASSERT_GROUP_MEMBER(struct tcp_sock, tcp_sock_read_rx, copied_seq);
@@ -294,4 +294,5 @@ static void __exit tunnel4_fini(void)
 
 module_init(tunnel4_init);
 module_exit(tunnel4_fini);
+MODULE_DESCRIPTION("IPv4 XFRM tunnel library");
 MODULE_LICENSE("GPL");
@@ -253,4 +253,5 @@ struct rtable *udp_tunnel_dst_lookup(struct sk_buff *skb,
 }
 EXPORT_SYMBOL_GPL(udp_tunnel_dst_lookup);
 
+MODULE_DESCRIPTION("IPv4 Foo over UDP tunnel driver");
 MODULE_LICENSE("GPL");
@@ -114,5 +114,6 @@ static void __exit ipip_fini(void)
 
 module_init(ipip_init);
 module_exit(ipip_fini);
+MODULE_DESCRIPTION("IPv4 XFRM tunnel driver");
 MODULE_LICENSE("GPL");
 MODULE_ALIAS_XFRM_TYPE(AF_INET, XFRM_PROTO_IPIP);
@@ -800,5 +800,6 @@ static void __exit ah6_fini(void)
 module_init(ah6_init);
 module_exit(ah6_fini);
 
+MODULE_DESCRIPTION("IPv6 AH transformation helpers");
 MODULE_LICENSE("GPL");
 MODULE_ALIAS_XFRM_TYPE(AF_INET6, XFRM_PROTO_AH);
@@ -1301,5 +1301,6 @@ static void __exit esp6_fini(void)
 module_init(esp6_init);
 module_exit(esp6_fini);
 
+MODULE_DESCRIPTION("IPv6 ESP transformation helpers");
 MODULE_LICENSE("GPL");
 MODULE_ALIAS_XFRM_TYPE(AF_INET6, XFRM_PROTO_ESP);
@@ -1424,11 +1424,11 @@ static int __ip6_append_data(struct sock *sk,
 	bool zc = false;
 	u32 tskey = 0;
 	struct rt6_info *rt = (struct rt6_info *)cork->dst;
+	bool paged, hold_tskey, extra_uref = false;
 	struct ipv6_txoptions *opt = v6_cork->opt;
 	int csummode = CHECKSUM_NONE;
 	unsigned int maxnonfragsize, headersize;
 	unsigned int wmem_alloc_delta = 0;
-	bool paged, extra_uref = false;
 
 	skb = skb_peek_tail(queue);
 	if (!skb) {
@@ -1440,10 +1440,6 @@ static int __ip6_append_data(struct sock *sk,
 	mtu = cork->gso_size ? IP6_MAX_MTU : cork->fragsize;
 	orig_mtu = mtu;
 
-	if (cork->tx_flags & SKBTX_ANY_TSTAMP &&
-	    READ_ONCE(sk->sk_tsflags) & SOF_TIMESTAMPING_OPT_ID)
-		tskey = atomic_inc_return(&sk->sk_tskey) - 1;
-
 	hh_len = LL_RESERVED_SPACE(rt->dst.dev);
 
 	fragheaderlen = sizeof(struct ipv6hdr) + rt->rt6i_nfheader_len +
@@ -1538,6 +1534,11 @@ emsgsize:
 		flags &= ~MSG_SPLICE_PAGES;
 	}
 
+	hold_tskey = cork->tx_flags & SKBTX_ANY_TSTAMP &&
+		     READ_ONCE(sk->sk_tsflags) & SOF_TIMESTAMPING_OPT_ID;
+	if (hold_tskey)
+		tskey = atomic_inc_return(&sk->sk_tskey) - 1;
+
 	/*
 	 * Let's try using as much space as possible.
 	 * Use MTU if total length of the message fits into the MTU.
@@ -1794,6 +1795,8 @@ error:
 	cork->length -= length;
 	IP6_INC_STATS(sock_net(sk), rt->rt6i_idev, IPSTATS_MIB_OUTDISCARDS);
 	refcount_add(wmem_alloc_delta, &sk->sk_wmem_alloc);
+	if (hold_tskey)
+		atomic_dec(&sk->sk_tskey);
 	return err;
 }
 
@@ -182,4 +182,5 @@ struct dst_entry *udp_tunnel6_dst_lookup(struct sk_buff *skb,
 }
 EXPORT_SYMBOL_GPL(udp_tunnel6_dst_lookup);
 
+MODULE_DESCRIPTION("IPv6 Foo over UDP tunnel driver");
 MODULE_LICENSE("GPL");
@@ -405,6 +405,7 @@ static void __exit mip6_fini(void)
 module_init(mip6_init);
 module_exit(mip6_fini);
 
+MODULE_DESCRIPTION("IPv6 Mobility driver");
 MODULE_LICENSE("GPL");
 MODULE_ALIAS_XFRM_TYPE(AF_INET6, XFRM_PROTO_DSTOPTS);
 MODULE_ALIAS_XFRM_TYPE(AF_INET6, XFRM_PROTO_ROUTING);
@@ -1956,6 +1956,7 @@ xfrm_tunnel_failed:
 
 module_init(sit_init);
 module_exit(sit_cleanup);
+MODULE_DESCRIPTION("IPv6-in-IPv4 tunnel SIT driver");
 MODULE_LICENSE("GPL");
 MODULE_ALIAS_RTNL_LINK("sit");
 MODULE_ALIAS_NETDEV("sit0");
@@ -302,4 +302,5 @@ static void __exit tunnel6_fini(void)
 
 module_init(tunnel6_init);
 module_exit(tunnel6_fini);
+MODULE_DESCRIPTION("IP-in-IPv6 tunnel driver");
 MODULE_LICENSE("GPL");
@@ -401,5 +401,6 @@ static void __exit xfrm6_tunnel_fini(void)
 
 module_init(xfrm6_tunnel_init);
 module_exit(xfrm6_tunnel_fini);
+MODULE_DESCRIPTION("IPv6 XFRM tunnel driver");
 MODULE_LICENSE("GPL");
 MODULE_ALIAS_XFRM_TYPE(AF_INET6, XFRM_PROTO_IPV6);
@@ -3924,5 +3924,6 @@ out_unregister_key_proto:
 
 module_init(ipsec_pfkey_init);
 module_exit(ipsec_pfkey_exit);
+MODULE_DESCRIPTION("PF_KEY socket helpers");
 MODULE_LICENSE("GPL");
 MODULE_ALIAS_NETPROTO(PF_KEY);
@@ -5,7 +5,7 @@
  * Copyright 2006-2007	Jiri Benc <jbenc@suse.cz>
  * Copyright 2007	Johannes Berg <johannes@sipsolutions.net>
  * Copyright 2013-2014  Intel Mobile Communications GmbH
- * Copyright (C) 2018-2022 Intel Corporation
+ * Copyright (C) 2018-2024 Intel Corporation
  *
  * Transmit and frame generation functions.
  */
@@ -3927,6 +3927,7 @@ begin:
 		goto begin;
 
 	skb = __skb_dequeue(&tx.skbs);
+	info = IEEE80211_SKB_CB(skb);
 
 	if (!skb_queue_empty(&tx.skbs)) {
 		spin_lock_bh(&fq->lock);
@@ -3971,7 +3972,7 @@ begin:
 	}
 
 encap_out:
-	IEEE80211_SKB_CB(skb)->control.vif = vif;
+	info->control.vif = vif;
 
 	if (tx.sta &&
 	    wiphy_ext_feature_isset(local->hw.wiphy, NL80211_EXT_FEATURE_AQL)) {
@@ -59,13 +59,12 @@ void mptcp_fastopen_subflow_synack_set_params(struct mptcp_subflow_context *subf
 	mptcp_data_unlock(sk);
 }
 
-void mptcp_fastopen_gen_msk_ackseq(struct mptcp_sock *msk, struct mptcp_subflow_context *subflow,
-				   const struct mptcp_options_received *mp_opt)
+void __mptcp_fastopen_gen_msk_ackseq(struct mptcp_sock *msk, struct mptcp_subflow_context *subflow,
+				     const struct mptcp_options_received *mp_opt)
 {
 	struct sock *sk = (struct sock *)msk;
 	struct sk_buff *skb;
 
-	mptcp_data_lock(sk);
 	skb = skb_peek_tail(&sk->sk_receive_queue);
 	if (skb) {
 		WARN_ON_ONCE(MPTCP_SKB_CB(skb)->end_seq);
@@ -77,5 +76,4 @@ void mptcp_fastopen_gen_msk_ackseq(struct mptcp_sock *msk, struct mptcp_subflow_
 	}
 
 	pr_debug("msk=%p ack_seq=%llx", msk, msk->ack_seq);
-	mptcp_data_unlock(sk);
 }
@@ -962,9 +962,7 @@ static bool check_fully_established(struct mptcp_sock *msk, struct sock *ssk,
 		/* subflows are fully established as soon as we get any
 		 * additional ack, including ADD_ADDR.
 		 */
-		subflow->fully_established = 1;
-		WRITE_ONCE(msk->fully_established, true);
-		goto check_notify;
+		goto set_fully_established;
 	}
 
 	/* If the first established packet does not contain MP_CAPABLE + data
@@ -986,7 +984,10 @@ static bool check_fully_established(struct mptcp_sock *msk, struct sock *ssk,
 set_fully_established:
 	if (unlikely(!READ_ONCE(msk->pm.server_side)))
 		pr_warn_once("bogus mpc option on established client sk");
-	mptcp_subflow_fully_established(subflow, mp_opt);
+
+	mptcp_data_lock((struct sock *)msk);
+	__mptcp_subflow_fully_established(msk, subflow, mp_opt);
+	mptcp_data_unlock((struct sock *)msk);
 
 check_notify:
 	/* if the subflow is not already linked into the conn_list, we can't
@@ -130,10 +130,21 @@ int mptcp_userspace_pm_get_flags_and_ifindex_by_id(struct mptcp_sock *msk,
 int mptcp_userspace_pm_get_local_id(struct mptcp_sock *msk,
 				    struct mptcp_addr_info *skc)
 {
-	struct mptcp_pm_addr_entry new_entry;
+	struct mptcp_pm_addr_entry *entry = NULL, *e, new_entry;
 	__be16 msk_sport =  ((struct inet_sock *)
 			     inet_sk((struct sock *)msk))->inet_sport;
 
+	spin_lock_bh(&msk->pm.lock);
+	list_for_each_entry(e, &msk->pm.userspace_pm_local_addr_list, list) {
+		if (mptcp_addresses_equal(&e->addr, skc, false)) {
+			entry = e;
+			break;
+		}
+	}
+	spin_unlock_bh(&msk->pm.lock);
+	if (entry)
+		return entry->addr.id;
+
 	memset(&new_entry, 0, sizeof(struct mptcp_pm_addr_entry));
 	new_entry.addr = *skc;
 	new_entry.addr.id = 0;
@@ -1505,8 +1505,11 @@ static void mptcp_update_post_push(struct mptcp_sock *msk,
 
 void mptcp_check_and_set_pending(struct sock *sk)
 {
-	if (mptcp_send_head(sk))
-		mptcp_sk(sk)->push_pending |= BIT(MPTCP_PUSH_PENDING);
+	if (mptcp_send_head(sk)) {
+		mptcp_data_lock(sk);
+		mptcp_sk(sk)->cb_flags |= BIT(MPTCP_PUSH_PENDING);
+		mptcp_data_unlock(sk);
+	}
 }
 
 static int __subflow_push_pending(struct sock *sk, struct sock *ssk,
@@ -1960,6 +1963,9 @@ static void mptcp_rcv_space_adjust(struct mptcp_sock *msk, int copied)
 	if (copied <= 0)
 		return;
 
+	if (!msk->rcvspace_init)
+		mptcp_rcv_space_init(msk, msk->first);
+
 	msk->rcvq_space.copied += copied;
 
 	mstamp = div_u64(tcp_clock_ns(), NSEC_PER_USEC);
@@ -3142,7 +3148,6 @@ static int mptcp_disconnect(struct sock *sk, int flags)
 	mptcp_destroy_common(msk, MPTCP_CF_FASTCLOSE);
 	WRITE_ONCE(msk->flags, 0);
 	msk->cb_flags = 0;
-	msk->push_pending = 0;
 	msk->recovery = false;
 	msk->can_ack = false;
 	msk->fully_established = false;
@@ -3158,6 +3163,7 @@ static int mptcp_disconnect(struct sock *sk, int flags)
 	msk->bytes_received = 0;
 	msk->bytes_sent = 0;
 	msk->bytes_retrans = 0;
+	msk->rcvspace_init = 0;
 
 	WRITE_ONCE(sk->sk_shutdown, 0);
 	sk_error_report(sk);
@@ -3180,6 +3186,7 @@ struct sock *mptcp_sk_clone_init(const struct sock *sk,
 {
 	struct mptcp_subflow_request_sock *subflow_req = mptcp_subflow_rsk(req);
 	struct sock *nsk = sk_clone_lock(sk, GFP_ATOMIC);
+	struct mptcp_subflow_context *subflow;
 	struct mptcp_sock *msk;
 
 	if (!nsk)
@@ -3220,7 +3227,8 @@ struct sock *mptcp_sk_clone_init(const struct sock *sk,
 
 	/* The msk maintain a ref to each subflow in the connections list */
 	WRITE_ONCE(msk->first, ssk);
-	list_add(&mptcp_subflow_ctx(ssk)->node, &msk->conn_list);
+	subflow = mptcp_subflow_ctx(ssk);
+	list_add(&subflow->node, &msk->conn_list);
 	sock_hold(ssk);
 
 	/* new mpc subflow takes ownership of the newly
@@ -3235,6 +3243,9 @@ struct sock *mptcp_sk_clone_init(const struct sock *sk,
 	__mptcp_propagate_sndbuf(nsk, ssk);
 
 	mptcp_rcv_space_init(msk, ssk);
+
+	if (mp_opt->suboptions & OPTION_MPTCP_MPC_ACK)
+		__mptcp_subflow_fully_established(msk, subflow, mp_opt);
 	bh_unlock_sock(nsk);
 
 	/* note: the newly allocated socket refcount is 2 now */
@@ -3245,6 +3256,7 @@ void mptcp_rcv_space_init(struct mptcp_sock *msk, const struct sock *ssk)
 {
 	const struct tcp_sock *tp = tcp_sk(ssk);
 
+	msk->rcvspace_init = 1;
 	msk->rcvq_space.copied = 0;
 	msk->rcvq_space.rtt_us = 0;
 
@@ -3255,8 +3267,6 @@ void mptcp_rcv_space_init(struct mptcp_sock *msk, const struct sock *ssk)
 				      TCP_INIT_CWND * tp->advmss);
 	if (msk->rcvq_space.space == 0)
 		msk->rcvq_space.space = TCP_INIT_CWND * TCP_MSS_DEFAULT;
-
-	WRITE_ONCE(msk->wnd_end, msk->snd_nxt + tcp_sk(ssk)->snd_wnd);
 }
 
 void mptcp_destroy_common(struct mptcp_sock *msk, unsigned int flags)
@@ -3330,8 +3340,7 @@ static void mptcp_release_cb(struct sock *sk)
 	struct mptcp_sock *msk = mptcp_sk(sk);
 
 	for (;;) {
-		unsigned long flags = (msk->cb_flags & MPTCP_FLAGS_PROCESS_CTX_NEED) |
-				      msk->push_pending;
+		unsigned long flags = (msk->cb_flags & MPTCP_FLAGS_PROCESS_CTX_NEED);
 		struct list_head join_list;
 
 		if (!flags)
@@ -3347,7 +3356,6 @@ static void mptcp_release_cb(struct sock *sk)
 		 * datapath acquires the msk socket spinlock while helding
 		 * the subflow socket lock
 		 */
-		msk->push_pending = 0;
 		msk->cb_flags &= ~flags;
 		spin_unlock_bh(&sk->sk_lock.slock);
 
|
||||
* accessing the field below
|
||||
*/
|
||||
WRITE_ONCE(msk->local_key, subflow->local_key);
|
||||
WRITE_ONCE(msk->write_seq, subflow->idsn + 1);
|
||||
WRITE_ONCE(msk->snd_nxt, msk->write_seq);
|
||||
WRITE_ONCE(msk->snd_una, msk->write_seq);
|
||||
|
||||
mptcp_pm_new_connection(msk, ssk, 0);
|
||||
|
||||
mptcp_rcv_space_init(msk, ssk);
|
||||
}
|
||||
|
||||
void mptcp_sock_graft(struct sock *sk, struct socket *parent)
|
||||
|
@@ -286,7 +286,6 @@ struct mptcp_sock {
 	int		rmem_released;
 	unsigned long	flags;
 	unsigned long	cb_flags;
-	unsigned long	push_pending;
 	bool		recovery;		/* closing subflow write queue reinjected */
 	bool		can_ack;
 	bool		fully_established;
@@ -305,7 +304,8 @@ struct mptcp_sock {
 			nodelay:1,
 			fastopening:1,
 			in_accept_queue:1,
-			free_first:1;
+			free_first:1,
+			rcvspace_init:1;
 	struct work_struct work;
 	struct sk_buff	*ooo_last_skb;
 	struct rb_root	out_of_order_queue;
@@ -622,8 +622,9 @@ unsigned int mptcp_stale_loss_cnt(const struct net *net);
 unsigned int mptcp_close_timeout(const struct sock *sk);
 int mptcp_get_pm_type(const struct net *net);
 const char *mptcp_get_scheduler(const struct net *net);
-void mptcp_subflow_fully_established(struct mptcp_subflow_context *subflow,
-				     const struct mptcp_options_received *mp_opt);
+void __mptcp_subflow_fully_established(struct mptcp_sock *msk,
+				       struct mptcp_subflow_context *subflow,
+				       const struct mptcp_options_received *mp_opt);
 bool __mptcp_retransmit_pending_data(struct sock *sk);
 void mptcp_check_and_set_pending(struct sock *sk);
 void __mptcp_push_pending(struct sock *sk, unsigned int flags);
@@ -952,8 +953,8 @@ void mptcp_event_pm_listener(const struct sock *ssk,
 			     enum mptcp_event_type event);
 bool mptcp_userspace_pm_active(const struct mptcp_sock *msk);
 
-void mptcp_fastopen_gen_msk_ackseq(struct mptcp_sock *msk, struct mptcp_subflow_context *subflow,
-				   const struct mptcp_options_received *mp_opt);
+void __mptcp_fastopen_gen_msk_ackseq(struct mptcp_sock *msk, struct mptcp_subflow_context *subflow,
+				     const struct mptcp_options_received *mp_opt);
 void mptcp_fastopen_subflow_synack_set_params(struct mptcp_subflow_context *subflow,
 					      struct request_sock *req);
 
|
||||
{
|
||||
struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk);
|
||||
|
||||
return (1 << sk->sk_state) & (TCPF_ESTABLISHED | TCPF_FIN_WAIT1) &&
|
||||
return (1 << sk->sk_state) &
|
||||
(TCPF_ESTABLISHED | TCPF_FIN_WAIT1 | TCPF_FIN_WAIT2 | TCPF_CLOSING) &&
|
||||
is_active_ssk(subflow) &&
|
||||
!subflow->conn_finished;
|
||||
}
|
||||
|
@@ -421,29 +421,26 @@ static bool subflow_use_different_dport(struct mptcp_sock *msk, const struct soc
 
 void __mptcp_sync_state(struct sock *sk, int state)
 {
+	struct mptcp_subflow_context *subflow;
 	struct mptcp_sock *msk = mptcp_sk(sk);
+	struct sock *ssk = msk->first;
 
-	__mptcp_propagate_sndbuf(sk, msk->first);
+	subflow = mptcp_subflow_ctx(ssk);
+	__mptcp_propagate_sndbuf(sk, ssk);
+	if (!msk->rcvspace_init)
+		mptcp_rcv_space_init(msk, ssk);
+
 	if (sk->sk_state == TCP_SYN_SENT) {
+		/* subflow->idsn is always available is TCP_SYN_SENT state,
+		 * even for the FASTOPEN scenarios
+		 */
+		WRITE_ONCE(msk->write_seq, subflow->idsn + 1);
+		WRITE_ONCE(msk->snd_nxt, msk->write_seq);
 		mptcp_set_state(sk, state);
 		sk->sk_state_change(sk);
 	}
 }
 
-static void mptcp_propagate_state(struct sock *sk, struct sock *ssk)
-{
-	struct mptcp_sock *msk = mptcp_sk(sk);
-
-	mptcp_data_lock(sk);
-	if (!sock_owned_by_user(sk)) {
-		__mptcp_sync_state(sk, ssk->sk_state);
-	} else {
-		msk->pending_state = ssk->sk_state;
-		__set_bit(MPTCP_SYNC_STATE, &msk->cb_flags);
-	}
-	mptcp_data_unlock(sk);
-}
-
 static void subflow_set_remote_key(struct mptcp_sock *msk,
 				   struct mptcp_subflow_context *subflow,
 				   const struct mptcp_options_received *mp_opt)
@@ -465,6 +462,31 @@ static void subflow_set_remote_key(struct mptcp_sock *msk,
 	atomic64_set(&msk->rcv_wnd_sent, subflow->iasn);
 }
 
+static void mptcp_propagate_state(struct sock *sk, struct sock *ssk,
+				  struct mptcp_subflow_context *subflow,
+				  const struct mptcp_options_received *mp_opt)
+{
+	struct mptcp_sock *msk = mptcp_sk(sk);
+
+	mptcp_data_lock(sk);
+	if (mp_opt) {
+		/* Options are available only in the non fallback cases
+		 * avoid updating rx path fields otherwise
+		 */
+		WRITE_ONCE(msk->snd_una, subflow->idsn + 1);
+		WRITE_ONCE(msk->wnd_end, subflow->idsn + 1 + tcp_sk(ssk)->snd_wnd);
+		subflow_set_remote_key(msk, subflow, mp_opt);
+	}
+
+	if (!sock_owned_by_user(sk)) {
+		__mptcp_sync_state(sk, ssk->sk_state);
+	} else {
+		msk->pending_state = ssk->sk_state;
+		__set_bit(MPTCP_SYNC_STATE, &msk->cb_flags);
+	}
+	mptcp_data_unlock(sk);
+}
+
 static void subflow_finish_connect(struct sock *sk, const struct sk_buff *skb)
 {
 	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk);
@ -499,10 +521,9 @@ static void subflow_finish_connect(struct sock *sk, const struct sk_buff *skb)
|
||||
if (mp_opt.deny_join_id0)
|
||||
WRITE_ONCE(msk->pm.remote_deny_join_id0, true);
|
||||
subflow->mp_capable = 1;
|
||||
subflow_set_remote_key(msk, subflow, &mp_opt);
|
||||
MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_MPCAPABLEACTIVEACK);
|
||||
mptcp_finish_connect(sk);
|
||||
mptcp_propagate_state(parent, sk);
|
||||
mptcp_propagate_state(parent, sk, subflow, &mp_opt);
|
||||
} else if (subflow->request_join) {
|
||||
u8 hmac[SHA256_DIGEST_SIZE];
|
||||
|
||||
@ -545,8 +566,7 @@ static void subflow_finish_connect(struct sock *sk, const struct sk_buff *skb)
|
||||
}
|
||||
} else if (mptcp_check_fallback(sk)) {
|
||||
fallback:
|
||||
mptcp_rcv_space_init(msk, sk);
|
||||
mptcp_propagate_state(parent, sk);
|
||||
mptcp_propagate_state(parent, sk, subflow, NULL);
|
||||
}
|
||||
return;
|
||||
|
@@ -731,17 +751,16 @@ void mptcp_subflow_drop_ctx(struct sock *ssk)
 	kfree_rcu(ctx, rcu);
 }
 
-void mptcp_subflow_fully_established(struct mptcp_subflow_context *subflow,
-				     const struct mptcp_options_received *mp_opt)
+void __mptcp_subflow_fully_established(struct mptcp_sock *msk,
+				       struct mptcp_subflow_context *subflow,
+				       const struct mptcp_options_received *mp_opt)
 {
-	struct mptcp_sock *msk = mptcp_sk(subflow->conn);
-
 	subflow_set_remote_key(msk, subflow, mp_opt);
 	subflow->fully_established = 1;
 	WRITE_ONCE(msk->fully_established, true);
 
 	if (subflow->is_mptfo)
-		mptcp_fastopen_gen_msk_ackseq(msk, subflow, mp_opt);
+		__mptcp_fastopen_gen_msk_ackseq(msk, subflow, mp_opt);
 }
 
 static struct sock *subflow_syn_recv_sock(const struct sock *sk,
@@ -834,7 +853,6 @@ create_child:
 		 * mpc option
 		 */
 		if (mp_opt.suboptions & OPTION_MPTCP_MPC_ACK) {
-			mptcp_subflow_fully_established(ctx, &mp_opt);
 			mptcp_pm_fully_established(owner, child);
 			ctx->pm_notified = 1;
 		}
@@ -1744,10 +1762,9 @@ static void subflow_state_change(struct sock *sk)
 	msk = mptcp_sk(parent);
 	if (subflow_simultaneous_connect(sk)) {
 		mptcp_do_fallback(sk);
-		mptcp_rcv_space_init(msk, sk);
 		pr_fallback(msk);
 		subflow->conn_finished = 1;
-		mptcp_propagate_state(parent, sk);
+		mptcp_propagate_state(parent, sk, subflow, NULL);
 	}
 
 	/* as recvmsg() does not acquire the subflow socket for ssk selection
@@ -551,8 +551,11 @@ static void nf_nat_l4proto_unique_tuple(struct nf_conntrack_tuple *tuple,
 find_free_id:
 	if (range->flags & NF_NAT_RANGE_PROTO_OFFSET)
 		off = (ntohs(*keyptr) - ntohs(range->base_proto.all));
-	else
+	else if ((range->flags & NF_NAT_RANGE_PROTO_RANDOM_ALL) ||
+		 maniptype != NF_NAT_MANIP_DST)
 		off = get_random_u16();
+	else
+		off = 0;
 
 	attempts = range_size;
 	if (attempts > NF_NAT_MAX_ATTEMPTS)
@@ -361,6 +361,7 @@ static void nft_flow_offload_eval(const struct nft_expr *expr,
 		ct->proto.tcp.seen[1].flags |= IP_CT_TCP_FLAG_BE_LIBERAL;
 	}
 
+	__set_bit(NF_FLOW_HW_BIDIRECTIONAL, &flow->flags);
 	ret = flow_offload_add(flowtable, flow);
 	if (ret < 0)
 		goto err_flow_add;