9187210eee
Core & protocols
----------------
 - Large effort by Eric to lower rtnl_lock pressure and remove locks:
    - Make commonly used parts of rtnetlink (address, route dumps etc.) lockless, protected by RCU instead of rtnl_lock.
    - Add a netns exit callback which already holds rtnl_lock, allowing netns exit to take rtnl_lock once in the core instead of once for each driver / callback.
    - Remove locks / serialization in the socket diag interface.
    - Remove 6 calls to synchronize_rcu() while holding rtnl_lock.
    - Remove the dev_base_lock, depend on RCU where necessary.
 - Support busy polling on a per-epoll context basis. Poll length and budget parameters can be set independently of system defaults (a userspace sketch follows this message).
 - Introduce struct net_hotdata, to make sure read-mostly global config variables fit in as few cache lines as possible.
 - Add optional per-nexthop statistics to ease monitoring / debug of ECMP imbalance problems.
 - Support TCP_NOTSENT_LOWAT in MPTCP (a setsockopt() sketch also follows this message).
 - Ensure that IPv6 temporary addresses' preferred lifetimes are long enough, compared to other configured lifetimes, and at least 2 sec.
 - Support forwarding of ICMP Error messages in IPSec, per RFC 4301.
 - Add support for the independent control state machine for bonding per IEEE 802.1AX-2008 5.4.15 in addition to the existing coupled control state machine.
 - Add "network ID" to MCTP socket APIs to support hosts with multiple disjoint MCTP networks.
 - Re-use the mono_delivery_time skbuff bit for packets which user space wants to be sent at a specified time. Maintain the timing information while traversing veth links, bridge etc.
 - Take advantage of MSG_SPLICE_PAGES for RxRPC DATA and ACK packets.
 - Simplify many places iterating over netdevs by using an xarray instead of a hash table walk (hash table remains in place, for use on fastpaths).
 - Speed up scanning for expired routes by keeping a dedicated list.
 - Speed up "generic" XDP by trying harder to avoid large allocations.
 - Support attaching arbitrary metadata to netconsole messages.

Things we sprinkled into general kernel code
--------------------------------------------
 - Enforce VM_IOREMAP flag and range in ioremap_page_range and introduce VM_SPARSE kind and vm_area_[un]map_pages (used by bpf_arena).
 - Rework selftest harness to enable the use of the full range of ksft exit codes (pass, fail, skip, xfail, xpass).

Netfilter
---------
 - Allow userspace to define a table that is exclusively owned by a daemon (via netlink socket aliveness) without auto-removing this table when the userspace program exits. Such a table gets marked as orphaned and a restarting management daemon can re-attach/regain ownership.
 - Speed up element insertions to nftables' concatenated-ranges set type. Compact a few related data structures.

BPF
---
 - Add BPF token support for delegating a subset of BPF subsystem functionality from privileged system-wide daemons such as systemd through special mount options for userns-bound BPF fs to a trusted & unprivileged application.
 - Introduce bpf_arena, which is a sparse shared memory region between a BPF program and user space where structures inside the arena can have pointers to other areas of the arena, and pointers work seamlessly for both user-space programs and BPF programs.
 - Introduce the may_goto instruction that is a contract between the verifier and the program. The verifier allows the program to loop assuming it's behaving well, but reserves the right to terminate it.
 - Extend the BPF verifier to enable static subprog calls in spin lock critical sections.
 - Support registration of struct_ops types from modules, which helps projects like fuse-bpf that seek to implement a new struct_ops type.
 - Add support for retrieval of cookies for perf/kprobe multi links.
 - Support arbitrary TCP SYN cookie generation / validation in the TC layer with BPF to allow creating SYN flood handling in BPF firewalls.
 - Add code generation to inline the bpf_kptr_xchg() helper which improves performance when stashing/popping the allocated BPF objects.

Wireless
--------
 - Add SPP (signaling and payload protected) AMSDU support.
 - Support wider bandwidth OFDMA, as required for EHT operation.

Driver API
----------
 - Major overhaul of the Energy Efficient Ethernet internals to support new link modes (2.5GE, 5GE), share more code between drivers (especially those using phylib), and encourage more uniform behavior. Convert and clean up drivers.
 - Define an API for querying per netdev queue statistics from drivers.
 - IPSec: account in global stats for fully offloaded sessions.
 - Create a concept of Ethernet PHY Packages at the Device Tree level, to allow parameterizing the existing PHY package code.
 - Enable Rx hashing (RSS) on GTP protocol fields.

Misc
----
 - Improvements and refactoring all over networking selftests.
 - Create uniform module aliases for TC classifiers, actions, and packet schedulers to simplify creating modprobe policies.
 - Address all missing MODULE_DESCRIPTION() warnings in networking.
 - Extend the Netlink descriptions in YAML to cover message encapsulation or "Netlink polymorphism", where interpretation of nested attributes depends on link type, classifier type or some other "class type".

Drivers
-------
 - Ethernet high-speed NICs:
    - Add a new driver for Marvell's Octeon PCI Endpoint NIC VF.
    - Intel (100G, ice, idpf):
       - support E825-C devices
    - nVidia/Mellanox:
       - support devices with one port and multiple PCIe links
    - Broadcom (bnxt):
       - support n-tuple filters
       - support configuring the RSS key
    - Wangxun (ngbe/txgbe):
       - implement irq_domain for TXGBE's sub-interrupts
    - Pensando/AMD:
       - support XDP
       - optimize queue submission and wakeup handling (+17% bps)
       - optimize struct layout, saving 28% of memory on queues
 - Ethernet NICs embedded and virtual:
    - Google cloud vNIC:
       - refactor driver to perform memory allocations for new queue config before stopping and freeing the old queue memory
    - Synopsys (stmmac):
       - obey queueMaxSDU and implement counters required by 802.1Qbv
    - Renesas (ravb):
       - support packet checksum offload
       - suspend to RAM and runtime PM support
 - Ethernet switches:
    - nVidia/Mellanox:
       - support for nexthop group statistics
    - Microchip:
       - ksz8: implement PHY loopback
       - add support for KSZ8567, a 7-port 10/100Mbps switch
 - PTP:
    - New driver for RENESAS FemtoClock3 Wireless clock generator.
    - Support OCP PTP cards designed and built by Adva.
 - CAN:
    - Support recvmsg() flags for own, local and remote traffic on CAN BCM sockets.
    - Support for esd GmbH PCIe/402 CAN device family.
    - m_can:
       - Rx/Tx submission coalescing
       - wake on frame Rx
 - WiFi:
    - Intel (iwlwifi):
       - enable signaling and payload protected A-MSDUs
       - support wider-bandwidth OFDMA
       - support for new devices
       - bump FW API to 89 for AX devices; 90 for BZ/SC devices
    - MediaTek (mt76):
       - mt7915: newer ADIE version support
       - mt7925: radio temperature sensor support
    - Qualcomm (ath11k):
       - support 6 GHz station power modes: Low Power Indoor (LPI), Standard Power (SP) and Very Low Power (VLP)
       - QCA6390 & WCN6855: support 2 concurrent station interfaces
       - QCA2066 support
    - Qualcomm (ath12k):
       - refactoring in preparation for Multi-Link Operation (MLO) support
       - 1024 Block Ack window size support
       - firmware-2.bin support
       - support having multiple identical PCI devices (firmware needs to have ATH12K_FW_FEATURE_MULTI_QRTR_ID)
       - QCN9274: support split-PHY devices
       - WCN7850: enable Power Save Mode in station mode
       - WCN7850: P2P support
    - RealTek:
       - rtw88: support for more rtw8811cu and rtw8821cu devices
       - rtw89: support SCAN_RANDOM_SN and SET_SCAN_DWELL
       - rtlwifi: speed up USB firmware initialization
       - rtl8xxxu:
          - RTL8188F: concurrent interface support
          - Channel Switch Announcement (CSA) support in AP mode
    - Broadcom (brcmfmac):
       - per-vendor feature support
       - per-vendor SAE password setup
       - DMI nvram filename quirk for ACEPC W5 Pro

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

Merge tag 'net-next-6.9' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next

Pull networking updates from Jakub Kicinski.

* tag 'net-next-6.9' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next: (2255 commits)
  nexthop: Fix splat with CONFIG_DEBUG_PREEMPT=y
  nexthop: Fix out-of-bounds access during attribute validation
  nexthop: Only parse NHA_OP_FLAGS for dump messages that require it
  nexthop: Only parse NHA_OP_FLAGS for get messages that require it
  bpf: move sleepable flag from bpf_prog_aux to bpf_prog
  bpf: hardcode BPF_PROG_PACK_SIZE to 2MB * num_possible_nodes()
  selftests/bpf: Add kprobe multi triggering benchmarks
  ptp: Move from simple ida to xarray
  vxlan: Remove generic .ndo_get_stats64
  vxlan: Do not alloc tstats manually
  devlink: Add comments to use netlink gen tool
  nfp: flower: handle acti_netdevs allocation failure
  net/packet: Add getsockopt support for PACKET_COPY_THRESH
  net/netlink: Add getsockopt support for NETLINK_LISTEN_ALL_NSID
  selftests/bpf: Add bpf_arena_htab test.
  selftests/bpf: Add bpf_arena_list test.
  selftests/bpf: Add unit tests for bpf_arena_alloc/free_pages
  bpf: Add helper macro bpf_addr_space_cast()
  libbpf: Recognize __arena global variables.
  bpftool: Recognize arena map type
  ...
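
Two of the items above add user-visible knobs that are easy to demonstrate. First, per-epoll busy polling: the sketch below is a minimal illustration, assuming the EPIOCSPARAMS ioctl and struct epoll_params uAPI merged in this cycle; it repeats those definitions locally so it builds even where installed kernel headers predate them, and the chosen values are arbitrary examples.

/*
 * Sketch: enable NAPI busy polling for one epoll instance only.
 * The epoll_params layout and EPIOCSPARAMS number below mirror what is
 * assumed to be in include/uapi/linux/eventpoll.h after this merge.
 */
#include <stdio.h>
#include <stdint.h>
#include <sys/epoll.h>
#include <sys/ioctl.h>

struct epoll_params {
	uint32_t busy_poll_usecs;	/* how long to busy poll, in microseconds */
	uint16_t busy_poll_budget;	/* max packets to process per poll attempt */
	uint8_t  prefer_busy_poll;	/* prefer busy polling over IRQ deferral */
	uint8_t  __pad;			/* must be zero */
};

#define EPOLL_IOC_TYPE 0x8A
#define EPIOCSPARAMS _IOW(EPOLL_IOC_TYPE, 0x01, struct epoll_params)

int main(void)
{
	struct epoll_params params = {
		.busy_poll_usecs  = 64,	/* example value, not a recommendation */
		.busy_poll_budget = 16,
		.prefer_busy_poll = 1,
	};
	int epfd = epoll_create1(0);

	if (epfd < 0) {
		perror("epoll_create1");
		return 1;
	}
	/* Applies to this epoll context only; system-wide defaults stay untouched. */
	if (ioctl(epfd, EPIOCSPARAMS, &params) < 0) {
		perror("ioctl(EPIOCSPARAMS)");
		return 1;
	}
	/* ... add fds with epoll_ctl() and run the usual epoll_wait() loop ... */
	return 0;
}

Built with any C compiler, this only affects the one epoll instance; the existing net.core busy-poll sysctls are left alone.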
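Second, TCP_NOTSENT_LOWAT on MPTCP: per the item in the summary above, the option is now honoured for MPTCP sockets, so a plain setsockopt() at the TCP level should suffice. This is a minimal sketch under that assumption; the IPPROTO_MPTCP and TCP_NOTSENT_LOWAT fallback defines only cover older userspace headers.

/* Sketch: cap the amount of unsent data queued on an MPTCP socket. */
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

#ifndef IPPROTO_MPTCP
#define IPPROTO_MPTCP 262
#endif
#ifndef TCP_NOTSENT_LOWAT
#define TCP_NOTSENT_LOWAT 25
#endif

int main(void)
{
	int lowat = 128 * 1024;	/* keep at most ~128 KiB of unsent data queued */
	int fd = socket(AF_INET, SOCK_STREAM, IPPROTO_MPTCP);

	if (fd < 0) {
		perror("socket(IPPROTO_MPTCP)");
		return 1;
	}
	if (setsockopt(fd, IPPROTO_TCP, TCP_NOTSENT_LOWAT, &lowat, sizeof(lowat)) < 0) {
		perror("setsockopt(TCP_NOTSENT_LOWAT)");
		return 1;
	}
	/* ... connect() and write() as with a regular TCP socket ... */
	return 0;
}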
// SPDX-License-Identifier: GPL-2.0-only
/*
 * Stack tracing support
 *
 * Copyright (C) 2012 ARM Ltd.
 */
#include <linux/kernel.h>
#include <linux/efi.h>
#include <linux/export.h>
#include <linux/filter.h>
#include <linux/ftrace.h>
#include <linux/kprobes.h>
#include <linux/sched.h>
#include <linux/sched/debug.h>
#include <linux/sched/task_stack.h>
#include <linux/stacktrace.h>

#include <asm/efi.h>
#include <asm/irq.h>
#include <asm/stack_pointer.h>
#include <asm/stacktrace.h>

/*
 * Kernel unwind state
 *
 * @common:	Common unwind state.
 * @task:	The task being unwound.
 * @kr_cur:	When KRETPROBES is selected, holds the kretprobe instance
 *		associated with the most recently encountered replacement lr
 *		value.
 */
struct kunwind_state {
	struct unwind_state common;
	struct task_struct *task;
#ifdef CONFIG_KRETPROBES
	struct llist_node *kr_cur;
#endif
};

static __always_inline void
kunwind_init(struct kunwind_state *state,
	     struct task_struct *task)
{
	unwind_init_common(&state->common);
	state->task = task;
}

/*
 * Start an unwind from a pt_regs.
 *
 * The unwind will begin at the PC within the regs.
 *
 * The regs must be on a stack currently owned by the calling task.
 */
static __always_inline void
kunwind_init_from_regs(struct kunwind_state *state,
		       struct pt_regs *regs)
{
	kunwind_init(state, current);

	state->common.fp = regs->regs[29];
	state->common.pc = regs->pc;
}

/*
 * Start an unwind from a caller.
 *
 * The unwind will begin at the caller of whichever function this is inlined
 * into.
 *
 * The function which invokes this must be noinline.
 */
static __always_inline void
kunwind_init_from_caller(struct kunwind_state *state)
{
	kunwind_init(state, current);

	state->common.fp = (unsigned long)__builtin_frame_address(1);
	state->common.pc = (unsigned long)__builtin_return_address(0);
}

/*
 * Start an unwind from a blocked task.
 *
 * The unwind will begin at the blocked tasks saved PC (i.e. the caller of
 * cpu_switch_to()).
 *
 * The caller should ensure the task is blocked in cpu_switch_to() for the
 * duration of the unwind, or the unwind will be bogus. It is never valid to
 * call this for the current task.
 */
static __always_inline void
kunwind_init_from_task(struct kunwind_state *state,
		       struct task_struct *task)
{
	kunwind_init(state, task);

	state->common.fp = thread_saved_fp(task);
	state->common.pc = thread_saved_pc(task);
}

static __always_inline int
kunwind_recover_return_address(struct kunwind_state *state)
{
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
	if (state->task->ret_stack &&
	    (state->common.pc == (unsigned long)return_to_handler)) {
		unsigned long orig_pc;
		orig_pc = ftrace_graph_ret_addr(state->task, NULL,
						state->common.pc,
						(void *)state->common.fp);
		if (WARN_ON_ONCE(state->common.pc == orig_pc))
			return -EINVAL;
		state->common.pc = orig_pc;
	}
#endif /* CONFIG_FUNCTION_GRAPH_TRACER */

#ifdef CONFIG_KRETPROBES
	if (is_kretprobe_trampoline(state->common.pc)) {
		unsigned long orig_pc;
		orig_pc = kretprobe_find_ret_addr(state->task,
						  (void *)state->common.fp,
						  &state->kr_cur);
		state->common.pc = orig_pc;
	}
#endif /* CONFIG_KRETPROBES */

	return 0;
}

/*
 * Unwind from one frame record (A) to the next frame record (B).
 *
 * We terminate early if the location of B indicates a malformed chain of frame
 * records (e.g. a cycle), determined based on the location and fp value of A
 * and the location (but not the fp value) of B.
 */
static __always_inline int
kunwind_next(struct kunwind_state *state)
{
	struct task_struct *tsk = state->task;
	unsigned long fp = state->common.fp;
	int err;

	/* Final frame; nothing to unwind */
	if (fp == (unsigned long)task_pt_regs(tsk)->stackframe)
		return -ENOENT;

	err = unwind_next_frame_record(&state->common);
	if (err)
		return err;

	state->common.pc = ptrauth_strip_kernel_insn_pac(state->common.pc);

	return kunwind_recover_return_address(state);
}

typedef bool (*kunwind_consume_fn)(const struct kunwind_state *state, void *cookie);

static __always_inline void
do_kunwind(struct kunwind_state *state, kunwind_consume_fn consume_state,
	   void *cookie)
{
	if (kunwind_recover_return_address(state))
		return;

	while (1) {
		int ret;

		if (!consume_state(state, cookie))
			break;
		ret = kunwind_next(state);
		if (ret < 0)
			break;
	}
}

/*
 * Per-cpu stacks are only accessible when unwinding the current task in a
 * non-preemptible context.
 */
#define STACKINFO_CPU(name)					\
	({							\
		((task == current) && !preemptible())		\
			? stackinfo_get_##name()		\
			: stackinfo_get_unknown();		\
	})

/*
 * SDEI stacks are only accessible when unwinding the current task in an NMI
 * context.
 */
#define STACKINFO_SDEI(name)					\
	({							\
		((task == current) && in_nmi())			\
			? stackinfo_get_sdei_##name()		\
			: stackinfo_get_unknown();		\
	})

#define STACKINFO_EFI						\
	({							\
		((task == current) && current_in_efi())		\
			? stackinfo_get_efi()			\
			: stackinfo_get_unknown();		\
	})

static __always_inline void
kunwind_stack_walk(kunwind_consume_fn consume_state,
		   void *cookie, struct task_struct *task,
		   struct pt_regs *regs)
{
	struct stack_info stacks[] = {
		stackinfo_get_task(task),
		STACKINFO_CPU(irq),
#if defined(CONFIG_VMAP_STACK)
		STACKINFO_CPU(overflow),
#endif
#if defined(CONFIG_VMAP_STACK) && defined(CONFIG_ARM_SDE_INTERFACE)
		STACKINFO_SDEI(normal),
		STACKINFO_SDEI(critical),
#endif
#ifdef CONFIG_EFI
		STACKINFO_EFI,
#endif
	};
	struct kunwind_state state = {
		.common = {
			.stacks = stacks,
			.nr_stacks = ARRAY_SIZE(stacks),
		},
	};

	if (regs) {
		if (task != current)
			return;
		kunwind_init_from_regs(&state, regs);
	} else if (task == current) {
		kunwind_init_from_caller(&state);
	} else {
		kunwind_init_from_task(&state, task);
	}

	do_kunwind(&state, consume_state, cookie);
}

struct kunwind_consume_entry_data {
	stack_trace_consume_fn consume_entry;
	void *cookie;
};

static __always_inline bool
arch_kunwind_consume_entry(const struct kunwind_state *state, void *cookie)
{
	struct kunwind_consume_entry_data *data = cookie;
	return data->consume_entry(data->cookie, state->common.pc);
}

noinline noinstr void arch_stack_walk(stack_trace_consume_fn consume_entry,
				      void *cookie, struct task_struct *task,
				      struct pt_regs *regs)
{
	struct kunwind_consume_entry_data data = {
		.consume_entry = consume_entry,
		.cookie = cookie,
	};

	kunwind_stack_walk(arch_kunwind_consume_entry, &data, task, regs);
}

struct bpf_unwind_consume_entry_data {
	bool (*consume_entry)(void *cookie, u64 ip, u64 sp, u64 fp);
	void *cookie;
};

static bool
arch_bpf_unwind_consume_entry(const struct kunwind_state *state, void *cookie)
{
	struct bpf_unwind_consume_entry_data *data = cookie;

	return data->consume_entry(data->cookie, state->common.pc, 0,
				   state->common.fp);
}

noinline noinstr void arch_bpf_stack_walk(bool (*consume_entry)(void *cookie, u64 ip, u64 sp,
								u64 fp), void *cookie)
{
	struct bpf_unwind_consume_entry_data data = {
		.consume_entry = consume_entry,
		.cookie = cookie,
	};

	kunwind_stack_walk(arch_bpf_unwind_consume_entry, &data, current, NULL);
}

static bool dump_backtrace_entry(void *arg, unsigned long where)
{
	char *loglvl = arg;
	printk("%s %pSb\n", loglvl, (void *)where);
	return true;
}

void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk,
		    const char *loglvl)
{
	pr_debug("%s(regs = %p tsk = %p)\n", __func__, regs, tsk);

	if (regs && user_mode(regs))
		return;

	if (!tsk)
		tsk = current;

	if (!try_get_task_stack(tsk))
		return;

	printk("%sCall trace:\n", loglvl);
	arch_stack_walk(dump_backtrace_entry, (void *)loglvl, tsk, regs);

	put_task_stack(tsk);
}

void show_stack(struct task_struct *tsk, unsigned long *sp, const char *loglvl)
{
	dump_backtrace(NULL, tsk, loglvl);
	barrier();
}
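
For context on how the arch_stack_walk() interface defined above is consumed, here is a hedged sketch (not part of this file) of a generic caller in the style of kernel/stacktrace.c: the consume_entry callback receives one recovered PC per frame and returns false to stop the walk early. The my_* names are illustrative only, not kernel APIs.

#include <linux/sched.h>
#include <linux/stacktrace.h>

/* Cookie passed through arch_stack_walk() to collect PCs into a buffer. */
struct my_stack_cookie {
	unsigned long *store;
	unsigned int size;
	unsigned int len;
};

static bool my_consume_entry(void *cookie, unsigned long addr)
{
	struct my_stack_cookie *c = cookie;

	if (c->len >= c->size)
		return false;		/* buffer full: stop unwinding */
	c->store[c->len++] = addr;
	return true;			/* keep walking to the next frame */
}

/* Save the current task's stack trace; returns the number of entries stored. */
static unsigned int my_stack_trace_save(unsigned long *store, unsigned int size)
{
	struct my_stack_cookie c = { .store = store, .size = size };

	/* No regs: the unwind starts from the caller of this function. */
	arch_stack_walk(my_consume_entry, &c, current, NULL);
	return c.len;
}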