Initialize max_mapnr using the set_max_mapnr() helper instead of
assigning the variable directly. Also do not add PHYS_PFN_OFFSET to
max_pfn, since max_pfn already includes it.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ganapatrao Kulkarni <ganapatrao.kulkarni@caviumnetworks.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
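As a rough sketch of the kind of change described (illustrative only;
the exact expressions depend on the surrounding bootmem_init() code):

	/* Before (illustrative): direct assignment, with the offset added twice */
	max_mapnr = pfn_to_page(max_pfn + PHYS_PFN_OFFSET) - mem_map;

	/* After: use the helper; max_pfn already includes PHYS_PFN_OFFSET */
	set_max_mapnr(pfn_to_page(max_pfn) - mem_map);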
The kernel wants to enable reporting of asynchronous interrupts (i.e.
System Errors) as early as possible. But if this happens too early then
any pending System Error on initial entry into the kernel may never be
reported where a user can see it. This situation occurs if the kernel
is configured with CONFIG_PANIC_ON_OOPS set and enabled (by default or
on the command line): the kernel will panic as intended, but the
logging messages describing the failure will remain in the kernel ring
buffer and never be flushed out to the (not yet configured) console.
Therefore, this patch moves the enabling of asynchronous interrupts
during early setup to as early as is reasonable, but after any earlycon
parameters have been parsed and earlycon has been set up.
Signed-off-by: Jon Masters <jcm@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
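A minimal sketch of the ordering described above, assuming the
unmasking happens in setup_arch() and that local_async_enable() is the
primitive used to unmask SError:

	parse_early_param();	/* earlycon= parameters are parsed here */

	/* earlycon is now set up, so a pending System Error reported
	 * from this point on is visible to the user */
	local_async_enable();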
Remove '#' from the immediate parameter in the AArch64 inline assembly
in the mmu code. This code now compiles with both gcc and clang.
Signed-off-by: Mark Charlebois <charlebm@gmail.com>
Signed-off-by: Behan Webster <behanw@converseincode.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
ARM64 irq work self-IPI support depends on __smp_cross_call pointing to
the relevant IRQ controller operations. This information should be
available after the call to init_IRQ().
Let's implement arch_irq_work_has_interrupt() accordingly.
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
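A sketch of what the arm64 implementation described above might look
like, assuming __smp_cross_call is the function pointer set up by
init_IRQ():

	static inline bool arch_irq_work_has_interrupt(void)
	{
		/* self-IPIs work once the IRQ controller hook is in place */
		return !!__smp_cross_call;
	}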
The nohz full code needs irq work to trigger its own interrupt so that
the subsystem can work even when the tick is stopped.
Let's introduce arch_irq_work_has_interrupt(), which archs can override
to report whether they support this ability.
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
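A hedged sketch of the generic fallback this introduces, assuming the
usual pattern of an overridable default in a generic header:

	/* archs that can self-IPI provide their own definition */
	#ifndef arch_irq_work_has_interrupt
	static inline bool arch_irq_work_has_interrupt(void)
	{
		return false;	/* default: no irq work interrupt */
	}
	#endif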
Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 fixes from Will Deacon:
"Just a couple of stragglers here:
- fix an issue migrating interrupts on CPU hotplug
- fix a potential information leak of TLS registers across an exec
(Nathan has sent a corresponding patch for arch/arm/ to rmk)"
* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
arm64: flush TLS registers during exec
arm64: use irq_set_affinity with force=false when migrating irqs
The start address needs to actually be updated after it is detected
to be unaligned. Adjust both it and the end address properly.
Reported-by: Zi Shen Lim <zlim.lnx@gmail.com>
Reviewed-by: Zi Shen Lim <zlim.lnx@gmail.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
On ARM64, when the BPF JIT compiler fills the JIT image body with
opcodes during translation of eBPF into ARM64 opcodes, we may fail
for several reasons during that phase: one being that we jump to
the notyet label for not yet supported eBPF instructions such as
BPF_ST. In that case we only free the offsets array, but not the
actual allocated target image where the opcodes are stored. Fix it by
calling module_free() at dismantle time in case of errors.
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Acked-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
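A hedged sketch of the error-path fix described above (the ctx/image
variable names follow the usual JIT convention and are assumptions
here, not the exact patch):

	if (build_body(&ctx)) {
		/* e.g. we hit the notyet label for BPF_ST */
		module_free(NULL, ctx.image);	/* free the image, not just offsets */
		goto out;
	}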
* cpuidle:
arm64: add PSCI CPU_SUSPEND based cpu_suspend support
arm64: kernel: introduce cpu_init_idle CPU operation
arm64: kernel: refactor the CPU suspend API for retention states
Documentation: arm: define DT idle states bindings
This patch implements the cpu_suspend cpu operations method through
the PSCI CPU SUSPEND API. The PSCI implementation translates the idle state
index passed by the cpu_suspend core call into a valid PSCI state according to
the PSCI states initialized at boot through the cpu_init_idle() CPU
operations hook.
The PSCI CPU suspend operation hook checks if the PSCI state is a
standby state. If it is, it calls the PSCI suspend implementation
straight away, without saving any context. If the state is a power
down state, the kernel calls the __cpu_suspend API (which saves the
CPU context) and passes the PSCI suspend finisher as a parameter, so
that __cpu_suspend can invoke PSCI as the last function before power
down, after saving and flushing the context.
For power down states, the entry point is set to the physical address
of cpu_resume, which represents the default kernel execution address
following a CPU reset.
Reviewed-by: Ashwin Chaugule <ashwin.chaugule@linaro.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
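A condensed sketch of the branching described above (the helper names
on the PSCI side are illustrative, not the exact kernel API):

	static int psci_cpu_suspend(unsigned long index)
	{
		u32 state = psci_power_state[index];	/* set up by cpu_init_idle() */

		if (psci_state_is_standby(state))
			/* retention: no context to save, call PSCI directly */
			return psci_ops.cpu_suspend(state, 0);

		/* power down: save context, let the finisher invoke PSCI */
		return __cpu_suspend(index, psci_suspend_finisher);
	}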
The CPUidle subsystem on ARM64 machines requires the idle states
implementation back-end to initialize idle state parameters at boot.
This patch adds a hook in the CPU operations structure that should be
initialized by the CPU operations back-end in order to provide a
function that initializes CPU idle states.
This patch also adds the infrastructure required by the arm64 kernel
to export the CPU-operations-based initialization interface, so that
drivers (i.e. CPUidle) can use it when they are initialized at probe
time.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
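A hedged sketch of the dispatch through the new hook (shape assumed
from the description above; error handling simplified):

	int cpu_init_idle(unsigned int cpu)
	{
		int ret = -EOPNOTSUPP;
		struct device_node *cpu_node = of_cpu_device_node_get(cpu);

		if (!cpu_node)
			return -ENODEV;

		/* the back-end (e.g. PSCI) initializes the idle states */
		if (cpu_ops[cpu] && cpu_ops[cpu]->cpu_init_idle)
			ret = cpu_ops[cpu]->cpu_init_idle(cpu_node, cpu);

		of_node_put(cpu_node);
		return ret;
	}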
CPU suspend is the standard kernel interface for entering low-power
states on ARM64 systems. The current cpu_suspend implementation
assumes by default that all low-power states lose the CPU context, so
the CPU registers must be saved and cleaned to DRAM upon state entry.
Furthermore, the current cpu_suspend() implementation assumes that if
the CPU suspend back-end method returns when called, this must be
treated as an error regardless of the return code (which may indicate
success), since the CPU was expected to come back only through the
cpu_resume code path - e.g. by returning from the reset vector.
All in all, this means that the current API does not cope well with
low-power states that preserve the CPU context when entered (i.e.
retention states): the context is saved for nothing on entry into
those states, and a successful state entry can return as a normal
function return, which the current CPU suspend implementation
considers an error.
This patch refactors the cpu_suspend() API so that it is split into
two separate functionalities. The arm64 cpu_suspend API now just
provides a wrapper around the CPU operations suspend hook. A new
function is introduced (for architecture code use only) for states
that require context saving upon entry:
__cpu_suspend(unsigned long arg, int (*fn)(unsigned long))
__cpu_suspend() saves the context on function entry and calls the
so-called suspend finisher (i.e. fn) to complete the suspend operation.
The finisher is not expected to return, unless it fails, in which case
the error is propagated back to the __cpu_suspend caller.
The API refactoring results in the following pseudo code call sequence for a
suspending CPU, when triggered from a kernel subsystem:
/*
 * int cpu_suspend(unsigned long idx)
 * @idx: idle state index
 */
{
-> cpu_suspend(idx)
	|---> CPU operations suspend hook called, if present
	      |--> if (retention_state)
		       |--> direct suspend back-end call (eg PSCI suspend)
		   else
		       |--> __cpu_suspend(idx, &back_end_finisher);
}
By refactoring the cpu_suspend API this way, the CPU operations
back-end can detect whether an idle state requires state saving and
call the required suspend operations accordingly: either through a
simple function call, or indirectly through __cpu_suspend(), which
carries out the state saving and dispatches the suspend finisher to
complete idle state entry.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Nathan reports that we leak TLS information from the parent context
during an exec, as we don't clear the TLS registers when flushing the
thread state.
This patch updates the flushing code so that we:
(1) Unconditionally zero the tpidr_el0 register (since this is fully
context switched for native tasks and zeroed for compat tasks)
(2) Zero the tp_value state in thread_info before clearing the
tpidrro_el0 register for compat tasks (since this is only writable
by the set_tls compat syscall and therefore not fully switched).
A missing compiler barrier is also added to the compat set_tls syscall.
Cc: <stable@vger.kernel.org>
Acked-by: Nathan Lynch <Nathan_Lynch@mentor.com>
Reported-by: Nathan Lynch <Nathan_Lynch@mentor.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
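A sketch of the flushing logic described above (assumed shape, not the
exact patch):

	static void tls_thread_flush(void)
	{
		asm ("msr tpidr_el0, xzr");	/* (1) unconditional zeroing */

		if (is_compat_task()) {
			current->thread.tp_value = 0;	/* (2) clear saved state first */
			barrier();	/* keep the write ordered before... */
			asm ("msr tpidrro_el0, xzr");	/* ...the register clear */
		}
	}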
The ISS encoding for an exception from a Data Abort has a WnR
bit[6] that indicates whether the Data Abort was caused by a
read or a write instruction. While there are several fields
in the encoding that are only valid if the ISV bit[24] is set,
WnR is not one of them, so we can read it unconditionally.
Instead of fixing both implementations of kvm_is_write_fault()
in place, reimplement it just once using kvm_vcpu_dabt_iswrite(),
which already does the right thing with respect to the WnR bit.
Also fix up the callers to pass 'vcpu'.
Acked-by: Laszlo Ersek <lersek@redhat.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
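A sketch of the single reimplementation described above (hedged; the
two accessors are the existing KVM/arm trap-decoding helpers):

	static bool kvm_is_write_fault(struct kvm_vcpu *vcpu)
	{
		/* instruction aborts are never writes */
		if (kvm_vcpu_trap_is_iabt(vcpu))
			return false;

		/* WnR is valid regardless of ISV, so this is safe */
		return kvm_vcpu_dabt_iswrite(vcpu);
	}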
Commit 86c8b27a01cf ("arm64: ignore DT memreserve entries when
booting in UEFI mode") prevents early_init_fdt_scan_reserved_mem()
from being called for arm64 kernels booting via UEFI. This was done
because the kernel will use the UEFI memory map to determine reserved
memory regions.
That approach has problems in that early_init_fdt_scan_reserved_mem()
also reserves the FDT itself and any node-specific reserved memory.
With some kernel configs, the FDT may by chance be overwritten before
it can be unflattened, and the kernel will fail to boot. More subtle
problems will result if the FDT has node-specific reserved memory
which is not really reserved.
This patch has the UEFI stub remove the memory reserve map entries
from the FDT as it does with the memory nodes. This allows
early_init_fdt_scan_reserved_mem() to be called unconditionally
so that the other needed reservations are made.
Signed-off-by: Mark Salter <msalter@redhat.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
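A minimal sketch of removing the reserve map entries in the stub,
using the standard libfdt calls (the loop shape is an assumption):

	/* drop every /memreserve/ entry; the kernel will rebuild
	 * reservations from the UEFI memory map instead */
	while (fdt_num_mem_rsv(fdt) > 0)
		fdt_del_mem_rsv(fdt, 0);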
Raise the current maximum CPU limit to 64. This is needed for Cavium's
Thunder systems, which will have at least 48 cores per die.
The change keeps the current memory footprint of the cpumask
structures and does not break existing code: a NR_CPUS=64 kernel
still boots on systems with fewer CPUs.
Mark's Juno happily booted with a NR_CPUS=64 kernel.
Tested on our Thunder system with 48 cores; we could see interrupts
on all cores.
Cc: Radha Mohan Chintakuntla <rchintakuntla@cavium.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Robert Richter <rrichter@cavium.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The JIT compiler emits A64 instructions. It supports eBPF only.
Legacy BPF is supported thanks to conversion by BPF core.
JIT is enabled in the same way as for other architectures:
echo 1 > /proc/sys/net/core/bpf_jit_enable
Or for additional compiler output:
echo 2 > /proc/sys/net/core/bpf_jit_enable
See Documentation/networking/filter.txt for more information.
The implementation passes all 57 tests in lib/test_bpf.c
on ARMv8 Foundation Model :) Also tested by Will on Juno platform.
Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Introduce function to generate logical (shifted register)
instructions.
Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
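As a hedged illustration of what such an encoder does (field positions
per the A64 "logical (shifted register)" encoding class; the function
name and signature here are illustrative, not the kernel's actual API):

	u32 gen_logical_shifted_reg(u8 rd, u8 rn, u8 rm,
				    u8 shift_type, u8 shift_amount, u32 base_opcode)
	{
		u32 insn = base_opcode;			/* AND/ORR/EOR/... variant */

		insn |= (u32)(shift_type & 0x3) << 22;	/* LSL/LSR/ASR/ROR */
		insn |= (u32)(rm & 0x1f) << 16;
		insn |= (u32)(shift_amount & 0x3f) << 10;
		insn |= (u32)(rn & 0x1f) << 5;
		insn |= (u32)(rd & 0x1f);

		return insn;
	}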
Introduce function to generate data-processing (3 source) instructions.
Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Introduce function to generate data-processing (2 source) instructions.
Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Introduce function to generate data-processing (1 source) instructions.
Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Introduce function to generate add/subtract (shifted register)
instructions.
Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Introduce function to generate move wide (immediate) instructions.
Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Introduce function to generate bitfield instructions.
Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Introduce function to generate add/subtract (immediate) instructions.
Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Introduce function to generate load/store pair instructions.
Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Introduce function to generate load/store (register offset)
instructions.
Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Introduce function to generate conditional branch (immediate)
instructions.
Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Introduce function to generate unconditional branch (register)
instructions.
Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Introduce function to generate compare & branch (immediate)
instructions.
Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The global register variable current_stack_pointer holds the current
stack pointer. This change allows the kernel to be compiled with both
gcc and clang.
Author: Mark Charlebois <charlebm@gmail.com>
Signed-off-by: Mark Charlebois <charlebm@gmail.com>
Signed-off-by: Behan Webster <behanw@converseincode.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
To support both Clang and GCC, use the global stack register variable
instead of a local register variable.
Author: Mark Charlebois <charlebm@gmail.com>
Signed-off-by: Mark Charlebois <charlebm@gmail.com>
Signed-off-by: Behan Webster <behanw@converseincode.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Use the global current_stack_pointer to get the value of the stack
pointer. This change allows the kernel to be compiled with both gcc
and clang.
Signed-off-by: Behan Webster <behanw@converseincode.com>
Signed-off-by: Mark Charlebois <charlebm@gmail.com>
Reviewed-by: Olof Johansson <olof@lixom.net>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Use the global current_stack_pointer to get the value of the stack
pointer. This change allows the kernel to be compiled with both gcc
and clang.
Signed-off-by: Behan Webster <behanw@converseincode.com>
Signed-off-by: Mark Charlebois <charlebm@gmail.com>
Reviewed-by: Jan-Simon Möller <dl9pf@gmx.de>
Reviewed-by: Olof Johansson <olof@lixom.net>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Use the global current_stack_pointer to get the value of the stack
pointer. This change allows the kernel to be compiled with both gcc
and clang.
Signed-off-by: Behan Webster <behanw@converseincode.com>
Signed-off-by: Mark Charlebois <charlebm@gmail.com>
Reviewed-by: Jan-Simon Möller <dl9pf@gmx.de>
Reviewed-by: Olof Johansson <olof@lixom.net>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Define a global named register for current_stack_pointer. The use of this new
variable guarantees that both gcc and clang can access this register in C code.
Signed-off-by: Behan Webster <behanw@converseincode.com>
Reviewed-by: Jan-Simon Möller <dl9pf@gmx.de>
Reviewed-by: Mark Charlebois <charlebm@gmail.com>
Reviewed-by: Olof Johansson <olof@lixom.net>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
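A sketch of the definition this series relies on (hedged as to exact
header placement, but the named-register idiom is):

	/* global named register: both gcc and clang resolve reads of
	 * current_stack_pointer to the sp register */
	register unsigned long current_stack_pointer asm ("sp");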
In a similar fashion to other architectures, add the infrastructure
and Kconfig option to enable DEBUG_SET_MODULE_RONX support. When
enabled, module ranges will be marked read-only/no-execute as
appropriate.
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
[will: fixed off-by-one in module end check]
Signed-off-by: Will Deacon <will.deacon@arm.com>
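A hedged sketch of the marking step described above, using the generic
set_memory_* page-attribute helpers this support builds on (the call
site and function name here are illustrative):

	static void module_protect_text(unsigned long start, unsigned long end)
	{
		unsigned long npages = (end - start) >> PAGE_SHIFT;

		set_memory_ro(start, npages);	/* no stray writes to text */
		set_memory_x(start, npages);	/* text remains executable */
	}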
It's useful to be able to change individual bits in ptes at times.
Introduce functions for this and update the existing pte_mk* functions
to use these primitives.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
[will: added missing inline keyword for new header functions]
Signed-off-by: Will Deacon <will.deacon@arm.com>
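A sketch of the primitives described, following the common set/clear
pattern (the names match the usual arm64 helpers but are hedged here):

	static inline pte_t clear_pte_bit(pte_t pte, pgprot_t prot)
	{
		pte_val(pte) &= ~pgprot_val(prot);
		return pte;
	}

	static inline pte_t set_pte_bit(pte_t pte, pgprot_t prot)
	{
		pte_val(pte) |= pgprot_val(prot);
		return pte;
	}

	/* existing modifiers then become one-liners, e.g.: */
	static inline pte_t pte_mkwrite(pte_t pte)
	{
		return set_pte_bit(pte, __pgprot(PTE_WRITE));
	}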
The current soft_restart() and setup_restart implementations incorrectly
assume that the compiler will not spill/fill values to/from the stack.
However, this assumption is wrong, as revealed by disassembly of the
existing code (v3.16) built with Linaro GCC 4.9-2014.05.
ffffffc000085224 <soft_restart>:
ffffffc000085224: a9be7bfd stp x29, x30, [sp,#-32]!
ffffffc000085228: 910003fd mov x29, sp
ffffffc00008522c: f9000fa0 str x0, [x29,#24]
ffffffc000085230: 94003d21 bl ffffffc0000946b4 <setup_mm_for_reboot>
ffffffc000085234: 94003b33 bl ffffffc000093f00 <flush_cache_all>
ffffffc000085238: 94003dfa bl ffffffc000094a20 <cpu_cache_off>
ffffffc00008523c: 94003b31 bl ffffffc000093f00 <flush_cache_all>
ffffffc000085240: b0003321 adrp x1, ffffffc0006ea000 <reset_devices>
ffffffc000085244: f9400fa0 ldr x0, [x29,#24] ----> spilled addr
ffffffc000085248: f942fc22 ldr x2, [x1,#1528] ----> global memstart_addr
ffffffc00008524c: f0000061 adrp x1, ffffffc000094000 <__inval_cache_range+0x40>
ffffffc000085250: 91290021 add x1, x1, #0xa40
ffffffc000085254: 8b010041 add x1, x2, x1
ffffffc000085258: d2c00802 mov x2, #0x4000000000 // #274877906944
ffffffc00008525c: 8b020021 add x1, x1, x2
ffffffc000085260: d63f0020 blr x1
...
Here the compiler generates memory accesses after the cache is disabled,
loading stale values for the spilled value and the global variable. As
we cannot control when the compiler will access memory, we must rewrite
these functions in assembly to stash the values we need in registers
prior to disabling the cache, avoiding the use of memory.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Arun Chandran <achandran@mvista.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
If we cannot relocate the kernel Image to its preferred offset of base of DRAM
plus TEXT_OFFSET, instead relocate it to the lowest available 2 MB boundary plus
TEXT_OFFSET. We may lose a bit of memory at the low end, but we can still
proceed normally otherwise.
Acked-by: Mark Salter <msalter@redhat.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Leif Lindholm <leif.lindholm@linaro.org>
Tested-by: Leif Lindholm <leif.lindholm@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The static memory footprint of a kernel Image at boot is larger than
the Image file itself. Things like .bss data and initial page tables
are allocated statically but populated dynamically, so their content
is not contained in the Image file.
However, if EFI (or GRUB) has loaded the Image at precisely the desired offset
of base of DRAM + TEXT_OFFSET, the Image will be booted in place, and we have
to make sure that the allocation done by the PE/COFF loader is large enough.
Fix this by growing the PE/COFF .text section to cover the entire static
memory footprint. The part of the section that is not covered by the payload
will be zero initialised by the PE/COFF loader.
Acked-by: Mark Salter <msalter@redhat.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Leif Lindholm <leif.lindholm@linaro.org>
Tested-by: Leif Lindholm <leif.lindholm@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
In certain cases the cpu-release-addr of a CPU may not fall in the
linear mapping (e.g. when the kernel is loaded above this address due to
the presence of other images in memory). This is problematic for the
spin-table code as it assumes that it can trivially convert a
cpu-release-addr to a valid VA in the linear map.
This patch modifies the spin-table code to use a temporary cached
mapping to write to a given cpu-release-addr, enabling us to support
addresses regardless of whether they are covered by the linear mapping.
Acked-by: Leif Lindholm <leif.lindholm@linaro.org>
Tested-by: Leif Lindholm <leif.lindholm@linaro.org>
Tested-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
[ardb: added (__force void *) cast]
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
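A hedged sketch of the temporary-mapping write described above
(simplified from the shape the description implies):

	__le64 __iomem *release_addr;

	/* map the release address with a cached temporary mapping,
	 * valid whether or not it lies in the linear map */
	release_addr = ioremap_cache(cpu_release_addr[cpu], sizeof(*release_addr));
	if (!release_addr)
		return -ENOMEM;

	writeq_relaxed(__pa(secondary_holding_pen), release_addr);
	__flush_dcache_area((__force void *)release_addr, sizeof(*release_addr));

	iounmap(release_addr);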
VIPT caches are non-aliasing if the index is derived from address bits that
are always equal between VA and PA. Classifying these as aliasing results in
unnecessary flushing which may hurt performance.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
This adds helper functions and #defines to <asm/cachetype.h> to read the
line size and the number of sets from the level 1 instruction cache.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
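A hedged sketch of the kind of helpers described, extracting geometry
from CCSIDR_EL1 (field positions per the ARMv8 ARM; the macro names are
illustrative):

	/* LineSize[2:0] encodes log2(words per line) - 2 => bytes per line */
	#define CACHE_LINESIZE(ccsidr)	(1 << (((ccsidr) & 0x7) + 4))

	/* NumSets[27:13] encodes the number of sets - 1 */
	#define CACHE_NUMSETS(ccsidr)	((((ccsidr) >> 13) & 0x7fff) + 1)

With these, the non-aliasing test from the previous patch reduces to
checking that sets * line size (the way size) does not exceed
PAGE_SIZE, i.e. that all index bits fall within the page offset and
are therefore equal between VA and PA.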
Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm
Pull kvm fixes from Paolo Bonzini:
"A smattering of bug fixes across most architectures"
* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
powerpc/kvm/cma: Fix panic introduces by signed shift operation
KVM: s390/mm: Fix guest storage key corruption in ptep_set_access_flags
KVM: s390/mm: Fix storage key corruption during swapping
arm/arm64: KVM: Complete WFI/WFE instructions
ARM/ARM64: KVM: Nuke Hyp-mode tlbs before enabling MMU
KVM: s390/mm: try a cow on read only pages for key ops
KVM: s390: Fix user triggerable bug in dead code
The arm64 interrupt migration code on cpu offline calls
irqchip.irq_set_affinity() with the argument force=true. Originally
this argument had no effect because it was not used by any interrupt
chip driver and no semantics were defined for it.
This changed with commit 01f8fa4f01d8 ("genirq: Allow forcing cpu
affinity of interrupts") which made the force argument useful to route
interrupts to not yet online cpus without checking the target cpu
against the cpu online mask. The following commit ffde1de64012
("irqchip: gic: Support forced affinity setting") implemented this for
the GIC interrupt controller.
As a consequence, the cpu offline irq migration fails if CPU0 is
offlined, because CPU0 is still set in the affinity mask and the
validation against the cpu online mask is skipped due to the force
argument being true. The subsequent first_cpu(mask) selection then
always selects CPU0 as the target.
Commit 601c942176d8 ("arm64: use cpu_online_mask when using forced
irq_set_affinity") intended to fix the above mentioned issue, but
introduced another issue where the affinity can be migrated to a
wrong CPU due to an unconditional copy of cpu_online_mask.
As was done for arm, solve the issue by calling irq_set_affinity()
with force=false from the CPU offline irq migration code, so that the
GIC driver validates the affinity mask against the CPU online mask
and therefore removes CPU0 from the possible target candidates. Also
revert the changes done in commit 601c942176d8, as they are no longer
needed.
Tested on Juno platform.
Fixes: 601c942176d8 ("arm64: use cpu_online_mask when using forced
irq_set_affinity")
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: <stable@vger.kernel.org> # 3.10.x
Signed-off-by: Will Deacon <will.deacon@arm.com>
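A minimal sketch of the one-argument change described above (migration
loop context omitted; 'c' and 'd' stand for the irqchip and irq_data
of the interrupt being migrated):

	/* was: c->irq_set_affinity(d, affinity, true);
	 * force=false lets the GIC validate the affinity against
	 * cpu_online_mask and drop the offlined CPU0 */
	ret = c->irq_set_affinity(d, affinity, false);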
All the arm64 irqchip drivers have been converted to handle_domain_irq,
making it possible to remove the handle_IRQ stub entirely.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lkml.kernel.org/r/1409047421-27649-26-git-send-email-marc.zyngier@arm.com
Signed-off-by: Jason Cooper <jason@lakedaemon.net>
In order to limit code duplication, convert the architecture specific
handle_IRQ to use the generic __handle_domain_irq function.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lkml.kernel.org/r/1409047421-27649-3-git-send-email-marc.zyngier@arm.com
Signed-off-by: Jason Cooper <jason@lakedaemon.net>
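A sketch of the converted arm64 handle_IRQ described above (hedged,
though the wrapper is essentially a one-liner):

	void handle_IRQ(unsigned int irq, struct pt_regs *regs)
	{
		/* no domain: treat 'irq' as an already-mapped linux irq */
		__handle_domain_irq(NULL, irq, false, regs);
	}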
It turns out that vendors are relying on the format of /proc/cpuinfo,
and we've even spotted out-of-tree hacks attempting to make it look
identical to the format used by arch/arm/. That means we can't afford to
churn this interface in mainline, so revert the recent reformatting of
the file for arm64 pending discussions on the list to find out what
people actually want.
This reverts commit d7a49086f263164a2c4c178eb76412d48cd671d7.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>