#
# Makefile for the linux kernel.
#
CPPFLAGS_vmlinux.lds := -DTEXT_OFFSET=$(TEXT_OFFSET)
AFLAGS_head.o := -DTEXT_OFFSET=$(TEXT_OFFSET)
CFLAGS_armv8_deprecated.o := -I$(src)
CFLAGS_REMOVE_ftrace.o = -pg
CFLAGS_REMOVE_insn.o = -pg
CFLAGS_REMOVE_return_address.o = -pg
CFLAGS_setup.o = -DUTS_MACHINE='"$(UTS_MACHINE)"'
# Object file lists.
arm64-obj-y := debug-monitors.o entry.o irq.o fpsimd.o \
		entry-fpsimd.o process.o ptrace.o setup.o signal.o \
		sys.o stacktrace.o time.o traps.o io.o vdso.o \
		hyp-stub.o psci.o cpu_ops.o insn.o \
		return_address.o cpuinfo.o cpu_errata.o \
		cpufeature.o alternative.o cacheinfo.o \
		smp.o smp_spin_table.o topology.o smccc-call.o
extra-$(CONFIG_EFI) := efi-entry.o
OBJCOPYFLAGS := --prefix-symbols=__efistub_
$(obj)/%.stub.o: $(obj)/%.o FORCE
	$(call if_changed,objcopy)
arm64-obj-$(CONFIG_COMPAT) += sys32.o kuser32.o signal32.o \
		sys_compat.o entry32.o
arm64-obj-$(CONFIG_FUNCTION_TRACER) += ftrace.o entry-ftrace.o
arm64-obj-$(CONFIG_MODULES) += arm64ksyms.o module.o
arm64-obj-$(CONFIG_ARM64_MODULE_PLTS) += module-plts.o
arm64-obj-$(CONFIG_PERF_EVENTS) += perf_regs.o perf_callchain.o
arm64-obj-$(CONFIG_HW_PERF_EVENTS) += perf_event.o
arm64-obj-$(CONFIG_HAVE_HW_BREAKPOINT) += hw_breakpoint.o
arm64-obj-$(CONFIG_CPU_PM) += sleep.o suspend.o
arm64-obj-$(CONFIG_CPU_IDLE) += cpuidle.o
arm64-obj-$(CONFIG_JUMP_LABEL) += jump_label.o
arm64-obj-$(CONFIG_KGDB) += kgdb.o
arm64-obj-$(CONFIG_EFI) += efi.o efi-entry.stub.o
arm64-obj-$(CONFIG_PCI) += pci.o
arm64-obj-$(CONFIG_ARMV8_DEPRECATED) += armv8_deprecated.o
arm64-obj-$(CONFIG_ACPI) += acpi.o
arm64-obj-$(CONFIG_ACPI_NUMA) += acpi_numa.o
arm64-obj-$(CONFIG_ARM64_ACPI_PARKING_PROTOCOL) += acpi_parking_protocol.o
arm64-obj-$(CONFIG_PARAVIRT) += paravirt.o
arm64-obj-$(CONFIG_RANDOMIZE_BASE) += kaslr.o
arm64-obj-$(CONFIG_HIBERNATION) += hibernate.o hibernate-asm.o
arm64-obj-$(CONFIG_KEXEC) += machine_kexec.o relocate_kernel.o \
		cpu-reset.o
obj-y += $(arm64-obj-y) vdso/ probes/
obj-m += $(arm64-obj-m)
head-y := head.o
extra-y += $(head-y) vmlinux.lds
ifeq ($(CONFIG_DEBUG_EFI),y)
AFLAGS_head.o += -DVMLINUX_PATH="\"$(realpath $(objtree)/vmlinux)\""
endif