aa137a6d30
The livepatch consistency model requires reliable stack tracing architecture support in order to work properly. To achieve this, two main issues have to be solved. First, reliable and consistent call chain backtracing has to be ensured. Second, the unwinder needs to be able to detect stack corruptions and return errors.

The "zSeries ELF Application Binary Interface Supplement" says:

  "The stack pointer points to the first word of the lowest allocated stack frame. If the "back chain" is implemented this word will point to the previously allocated stack frame (towards higher addresses), except for the first stack frame, which shall have a back chain of zero (NULL). The stack shall grow downwards, in other words towards lower addresses."

The "back chain" is optional; the GCC option -mbackchain enables it. Quoting Martin Schwidefsky [1]:

  "The compiler is called with the -mbackchain option, all normal C function will store the backchain in the function prologue. All functions written in assembler code should do the same, if you find one that does not we should fix that. The end result is that a task that *voluntarily* called schedule() should have a proper backchain at all times.

  Dependent on the use case this may or may not be enough. Asynchronous interrupts may stop the CPU at the beginning of a function, if kernel preemption is enabled we can end up with a broken backchain. The production kernels for IBM Z are all compiled *without* kernel preemption. So yes, we might get away without the objtool support.

  On a side-note, we do have a line item to implement the ORC unwinder for the kernel, that includes the objtool support. Once we have that we can drop the -mbackchain option for the kernel build. That gives us a nice little performance benefit. I hope that the change from backchain to the ORC unwinder will not be too hard to implement in the livepatch tools."

Since -mbackchain is enabled by default when the kernel is compiled, call chain backtracing should currently be ensured, and objtool should not be necessary for livepatch purposes.

Regarding the second issue, stack corruptions and non-reliable states have to be recognized by the unwinder. Mainly this means detecting preemption and page faults, making sure the end of the task stack has been reached, verifying that return addresses are valid text addresses, and properly detecting hacks such as function graph tracing and kretprobes.

Unwinding a running task's stack is not a problem, because there is a livepatch requirement that every checked task is blocked, except for the current task. Thanks to that, the implementation can be much simpler than the existing non-reliable infrastructure: only a task's kernel/thread stack needs to be considered, and the other stacks can be skipped.

[1] 20180912121106.31ffa97c@mschwideX1 [not archived on lore.kernel.org]

Link: https://lkml.kernel.org/r/20191106095601.29986-5-mbenes@suse.cz
Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Tested-by: Miroslav Benes <mbenes@suse.cz>
Signed-off-by: Miroslav Benes <mbenes@suse.cz>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
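To make the quoted ABI layout concrete, here is a minimal sketch of a raw back-chain walk. It assumes a frame layout along the lines of struct stack_frame in arch/s390/include/asm/stacktrace.h (the first word of a frame is the back chain, gprs[8] holds the saved r14, i.e. the return address); walk_backchain() and the absence of any bounds checking are illustrative only, not part of this patch:

#include <linux/printk.h>

/* Roughly mirrors struct stack_frame from asm/stacktrace.h. */
struct stack_frame {
	unsigned long back_chain;	/* 0 (NULL) in the oldest frame */
	unsigned long empty1[5];
	unsigned long gprs[10];		/* saved r6..r15; gprs[8] == r14 */
};

static void walk_backchain(unsigned long sp)
{
	struct stack_frame *sf = (struct stack_frame *)sp;

	/* The stack grows down, so the chain leads towards higher addresses. */
	while (sf) {
		printk("%pS\n", (void *)sf->gprs[8]);
		sf = (struct stack_frame *)sf->back_chain;
	}
}

Such a naive walk trusts every back chain word it reads; the unwinder used in the code below adds the validation (stack boundaries, pt_regs frames, valid return addresses) that makes the resulting trace reliable.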
// SPDX-License-Identifier: GPL-2.0
/*
 * Stack trace management functions
 *
 * Copyright IBM Corp. 2006
 * Author(s): Heiko Carstens <heiko.carstens@de.ibm.com>
 */

#include <linux/stacktrace.h>
#include <asm/stacktrace.h>
#include <asm/unwind.h>
#include <asm/kprobes.h>

void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
		     struct task_struct *task, struct pt_regs *regs)
{
	struct unwind_state state;
	unsigned long addr;

	unwind_for_each_frame(&state, task, regs, 0) {
		addr = unwind_get_return_address(&state);
		if (!addr || !consume_entry(cookie, addr, false))
			break;
	}
}
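
/*
 * Side note: arch_stack_walk() above is the architecture backend behind the
 * generic stack_trace_save() helpers. The consume_entry callback decides
 * what happens to each address and stops the walk by returning false. A
 * minimal, hypothetical consumer (struct trace_buf and save_entry() are
 * illustrative names, not the kernel's internal cookie type from
 * kernel/stacktrace.c) could look like this:
 */
struct trace_buf {
	unsigned long *entries;
	unsigned int nr;
	unsigned int max;
};

static bool save_entry(void *cookie, unsigned long addr, bool reliable)
{
	struct trace_buf *buf = cookie;

	if (buf->nr >= buf->max)
		return false;	/* buffer full: stop the walk */
	buf->entries[buf->nr++] = addr;
	return true;		/* keep walking */
}
/*
 * A call such as arch_stack_walk(save_entry, &buf, current, NULL) would then
 * record the current task's return addresses; in practice this happens via
 * the stack_trace_save() wrappers in kernel/stacktrace.c.
 */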

/*
 * This function returns an error if it detects any unreliable features of the
 * stack. Otherwise it guarantees that the stack trace is reliable.
 *
 * If the task is not 'current', the caller *must* ensure the task is inactive.
 */
int arch_stack_walk_reliable(stack_trace_consume_fn consume_entry,
			     void *cookie, struct task_struct *task)
{
	struct unwind_state state;
	unsigned long addr;

	unwind_for_each_frame(&state, task, NULL, 0) {
		/* Only the task's own kernel/thread stack is reliable. */
		if (state.stack_info.type != STACK_TYPE_TASK)
			return -EINVAL;

		/*
		 * A pt_regs frame means the task was interrupted here
		 * (e.g. preemption or a page fault), so the backchain
		 * cannot be trusted.
		 */
		if (state.regs)
			return -EINVAL;

		addr = unwind_get_return_address(&state);
		if (!addr)
			return -EINVAL;

#ifdef CONFIG_KPROBES
		/*
		 * Mark stacktraces with kretprobed functions on them
		 * as unreliable.
		 */
		if (state.ip == (unsigned long)kretprobe_trampoline)
			return -EINVAL;
#endif

		if (!consume_entry(cookie, addr, false))
			return -EINVAL;
	}

	/* Check for stack corruption */
	if (unwind_error(&state))
		return -EINVAL;
	return 0;
}
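
/*
 * For context: livepatch consumes arch_stack_walk_reliable() through the
 * generic stack_trace_save_tsk_reliable() wrapper. The sketch below is a
 * simplified, hypothetical version of the check done by klp_check_stack()
 * in kernel/livepatch/transition.c; MAX_STACK_ENTRIES and
 * check_task_stack() are illustrative stand-ins.
 */
#define MAX_STACK_ENTRIES	100	/* illustrative cap */

static int check_task_stack(struct task_struct *task)
{
	static unsigned long entries[MAX_STACK_ENTRIES];
	int nr_entries;

	/* Returns a negative errno if any reliability check above trips. */
	nr_entries = stack_trace_save_tsk_reliable(task, entries,
						   MAX_STACK_ENTRIES);
	if (nr_entries < 0)
		return nr_entries;

	/* Livepatch would now scan entries[0..nr_entries-1] for patched functions. */
	return 0;
}
/*
 * If the trace is unreliable, livepatch leaves the task alone and retries
 * the transition later instead of patching over a live frame.
 */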