The replacement of <asm/pgtable.h> with <linux/pgtable.h> left the include
of the latter in the middle of the asm includes. Fix this up with the aid of
the script below and manual adjustments here and there.
import sys
import re

if len(sys.argv) != 3:
    print("USAGE: %s <file> <header>" % sys.argv[0])
    sys.exit(1)

# Header to relocate, e.g. "pgtable.h" -> "#include <linux/pgtable.h>"
hdr_to_move = "#include <linux/%s>" % sys.argv[2]
moved = False
in_hdrs = False

with open(sys.argv[1], "r") as f:
    lines = f.readlines()
    for _line in lines:
        line = _line.rstrip('\n')
        # Drop the misplaced include; it is re-emitted below.
        if line == hdr_to_move:
            continue
        if line.startswith("#include <linux/"):
            in_hdrs = True
        elif not moved and in_hdrs:
            # First line after the <linux/...> block: emit the header here.
            moved = True
            print(hdr_to_move)
        print(line)
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Cain <bcain@codeaurora.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chris Zankel <chris@zankel.net>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Greentime Hu <green.hu@gmail.com>
Cc: Greg Ungerer <gerg@linux-m68k.org>
Cc: Guan Xuetao <gxt@pku.edu.cn>
Cc: Guo Ren <guoren@kernel.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Helge Deller <deller@gmx.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Ley Foon Tan <ley.foon.tan@intel.com>
Cc: Mark Salter <msalter@redhat.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Nick Hu <nickhu@andestech.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: Rich Felker <dalias@libc.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Stafford Horne <shorne@gmail.com>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Vincent Chen <deanbo422@gmail.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Will Deacon <will@kernel.org>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Link: http://lkml.kernel.org/r/20200514170327.31389-4-rppt@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The include/linux/pgtable.h is going to be the home of generic page table
manipulation functions.
Start with moving asm-generic/pgtable.h to include/linux/pgtable.h and
make the latter include asm/pgtable.h.
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Cain <bcain@codeaurora.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chris Zankel <chris@zankel.net>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Greentime Hu <green.hu@gmail.com>
Cc: Greg Ungerer <gerg@linux-m68k.org>
Cc: Guan Xuetao <gxt@pku.edu.cn>
Cc: Guo Ren <guoren@kernel.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Helge Deller <deller@gmx.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Ley Foon Tan <ley.foon.tan@intel.com>
Cc: Mark Salter <msalter@redhat.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Nick Hu <nickhu@andestech.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: Rich Felker <dalias@libc.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Stafford Horne <shorne@gmail.com>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Vincent Chen <deanbo422@gmail.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Will Deacon <will@kernel.org>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Link: http://lkml.kernel.org/r/20200514170327.31389-3-rppt@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
It looks like a section directive was using "Solaris style" to declare
the section flags. Replace this with the GNU style so that Clang's
integrated assembler can assemble this directive.
The modified instances were identified via:
$ ag \.section | grep #
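For example, a representative before/after (illustrative directive, not a
specific file from the patch):

	@ Solaris style: flags spelled as #-prefixed keywords, which Clang's
	@ integrated assembler rejects:
	.section .text.fixup, #alloc, #execinstr

	@ GNU style: flags given as a quoted string ("a" = allocatable,
	@ "x" = executable), optionally followed by the section type:
	.section .text.fixup, "ax", %progbits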
Link: https://ftp.gnu.org/old-gnu/Manuals/gas-2.9.1/html_chapter/as_7.html#SEC119
Link: https://github.com/ClangBuiltLinux/linux/issues/744
Link: https://bugs.llvm.org/show_bug.cgi?id=43759
Link: https://reviews.llvm.org/D69296
Acked-by: Nicolas Pitre <nico@fluxnic.net>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Stefan Agner <stefan@agner.ch>
Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
Suggested-by: Fangrui Song <maskray@google.com>
Suggested-by: Jian Cai <jiancai@google.com>
Suggested-by: Peter Smith <peter.smith@linaro.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
- Add a "cut here" to make it clearer where oops dumps should be cut
from - we already have a marker for the end of the dumps.
- Add logging severity to show_pte()
- Drop unnecessary common-page-size linker flag
- Errata workarounds for Cortex A12 857271, Cortex A17 857272 and
Cortex A7 814220.
- Remove some unused variables that had started to provoke a compiler
warning.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
iQIVAwUAXSMct/TnkBvkraxkAQIpKBAAoE+5ZLeBanLZ6GMUpIMW+wVfVYaI4p8g
kZd0Dm2EB5Y9x9kfYwCLmdUp5A/Lh3dmAkAGHSP0TVdDj34qK9KXxCA3t+k85ph5
+VDzdpAtUrbuDWaF/6KIaGMLgA4ss+VXpoqMOvZuwDQReXbXaT325WVzcTFlNLe7
sUBS8k2DxTTjiF0HM/jnxtbvVmc3sfkPij+MTEz9XEcS9K0Hilz9suoJEKZCXY6H
ULX3n+8wJAxbnbnXdGLyQ095BqtaQC88mEcMUvBci7NBGiRne19vvRqdLwe1mJPc
PTz1v/Pjvbfj1apSZDGVJvIu3BVEkwJNBF1NujRGUVrYnKQd/YJSKJkqjXat5Q5M
H/G2l+y0vmzn4VSbn1LUEvleuou6Hq/1nucksS1DHruEB7tH/aQRtf0EduzGRsop
VmYkTiIiaF28N+eHDolDtUzSg5RNzEMCQnni4y6ajGhbWiu1Q63GBR7k9zwB27Bj
3t6RdrqIaE52uKUdgwc9APcpZhdJR/ncqrI68B+wIwbuObhUSY5sa1Ec8uiELOiK
g3zCp4AWRBbYQFNeQy87A811JSFU+L/lCbABs32Bj8UEKGn/qdmVwEaH9rZ8/PHk
TAlcu8PtwBVQQIEgs+ur+OyfZcLt2U4DUDiL/26GCyG+pzzL7fYap5GSoBuqR02p
MmoHFr87YZM=
=Ru/4
-----END PGP SIGNATURE-----
Merge tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm
Pull ARM updates from Russell King:
- Add a "cut here" to make it clearer where oops dumps should be cut
from - we already have a marker for the end of the dumps.
- Add logging severity to show_pte()
- Drop unnecessary common-page-size linker flag
- Errata workarounds for Cortex A12 857271, Cortex A17 857272 and
Cortex A7 814220.
- Remove some unused variables that had started to provoke a compiler
warning.
* tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm:
ARM: 8863/1: stm32: select ARM errata 814220
ARM: 8862/1: errata: 814220-B-Cache maintenance by set/way operations can execute out of order
ARM: 8865/1: mm: remove unused variables
ARM: 8864/1: Add workaround for I-Cache line size mismatch between CPU cores
ARM: 8861/1: errata: Workaround errata A12 857271 / A17 857272
ARM: 8860/1: VDSO: Drop implicit common-page-size linker flag
ARM: arrange show_pte() to issue severity-based messages
ARM: add "8<--- cut here ---" to kernel dumps
This adds support for working around errata A12 857271 / A17 857272.
These errata were causing hangs on rk3288-based Chromebooks and it was
confirmed that this workaround fixed the problems. In the Chrome OS
3.14 kernel this was treated as two errata: ERRATA_FOOBAR [1] and
ERRATA_CR711784 [2]. Apparently the two errata got lumped together at
some point in time.
Let's actually get the workaround landed.
[1] https://crrev.com/c/342753
[2] https://crbug.com/711784
Signed-off-by: Douglas Anderson <dianders@chromium.org>
Signed-off-by: Sonny Rao <sonnyrao@chromium.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Based on 2 normalized pattern(s):

  this program is free software you can redistribute it and or modify
  it under the terms of the gnu general public license version 2 as
  published by the free software foundation

  this program is free software you can redistribute it and or modify
  it under the terms of the gnu general public license version 2 as
  published by the free software foundation #

extracted by the scancode license scanner the SPDX license identifier

  GPL-2.0-only

has been chosen to replace the boilerplate/reference in 4122 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Enrico Weigelt <info@metux.net>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190604081206.933168790@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Pull ARM fix from Russell King:
"Ard spotted a typo in one of the assembly files which leads to a
kernel oops when that code path is executed. Fix this"
* 'spectre' of git://git.armlinux.org.uk/~rmk/linux-arm:
ARM: 8809/1: proc-v7: fix Thumb annotation of cpu_v7_hvc_switch_mm
Due to what appears to be a copy/paste error, the opening ENTRY()
of cpu_v7_hvc_switch_mm() lacks a matching ENDPROC(), and instead,
the one for cpu_v7_smc_switch_mm() is duplicated.
Given that it is ENDPROC() that emits the Thumb annotation, the
cpu_v7_hvc_switch_mm() routine will be called in ARM mode on a
Thumb2 kernel, resulting in the following splat:
Internal error: Oops - undefined instruction: 0 [#1] SMP THUMB2
Modules linked in:
CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.18.0-rc1-00030-g4d28ad89189d-dirty #488
Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
PC is at cpu_v7_hvc_switch_mm+0x12/0x18
LR is at flush_old_exec+0x31b/0x570
pc : [<c0316efe>] lr : [<c04117c7>] psr: 00000013
sp : ee899e50 ip : 00000000 fp : 00000001
r10: eda28f34 r9 : eda31800 r8 : c12470e0
r7 : eda1fc00 r6 : eda53000 r5 : 00000000 r4 : ee88c000
r3 : c0316eec r2 : 00000001 r1 : eda53000 r0 : 6da6c000
Flags: nzcv IRQs on FIQs on Mode SVC_32 ISA ARM Segment none
Note the 'ISA ARM' in the last line.
Fix this by using the correct name in ENDPROC().
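In sketch form, the fix described above amounts to (surrounding code elided):

ENTRY(cpu_v7_hvc_switch_mm)
	...
ENDPROC(cpu_v7_hvc_switch_mm)	@ was: ENDPROC(cpu_v7_smc_switch_mm)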
Cc: <stable@vger.kernel.org>
Fixes: 10115105cb ("ARM: spectre-v2: add firmware based hardening")
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Add firmware based hardening for cores that require more complex
handling in firmware.
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Boot-tested-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
In order to prevent aliasing attacks on the branch predictor,
invalidate the BTB or instruction cache on CPUs that are known to be
affected when taking an abort on a address that is outside of a user
task limit:
Cortex A8, A9, A12, A17, A73, A75: flush BTB.
Cortex A15, Brahma B15: invalidate icache.
If the IBE bit is not set, then there is little point in enabling the
workaround.
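For reference, both operations are single CP15 writes; a hedged sketch (the
patch wires these into the per-CPU abort handlers):

	mov	r0, #0
	mcr	p15, 0, r0, c7, c5, 6	@ BPIALL: flush the branch predictor
					@ (Cortex A8/A9/A12/A17/A73/A75 case)
	mcr	p15, 0, r0, c7, c5, 0	@ ICIALLU: invalidate the I-cache
					@ (Cortex A15 / Brahma B15 case)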
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Boot-tested-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Tony Lindgren <tony@atomide.com>
When the branch predictor hardening is enabled, firmware must have set
the IBE bit in the auxiliary control register. If this bit has not
been set, the Spectre workarounds will not be functional.
Add validation that this bit is set, and print a warning at alert level
if this is not the case.
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Boot-tested-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Tony Lindgren <tony@atomide.com>
Harden the branch predictor against Spectre v2 attacks on context
switches for ARMv7 and later CPUs. We do this by:
Cortex A9, A12, A17, A73, A75: invalidating the BTB.
Cortex A15, Brahma B15: invalidating the instruction cache.
Cortex A57 and Cortex A72 are not addressed in this patch.
Cortex R7 and Cortex R8 are also not addressed as we do not enforce
memory protection on these cores.
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Boot-tested-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Tony Lindgren <tony@atomide.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Cortex-R8 has identical initialisation requirements to Cortex-R7, so
hook it up in proc-v7.S in the same way.
Signed-off-by: Luca Scalabrino <luca.scalabrino@arm.com>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
If we detect that we are running on a Broadcom Brahma-B15 CPU, and
CONFIG_CACHE_B15_RAC is enabled, make sure that we pick-up the
b15_cache_fns function operations.
If CONFIG_CACHE_B15_RAC is enabled, but we are not running on a Broadcom
Brahma-B15 CPU, we will fall back to calling into the regular
v7_cache_fns with no cost. If CONFIG_CACHE_B15_RAC is disabled, there is
no cost and we just use the regular v7_cache_fns.
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
In preparation for adding support for the Broadcom Brahma-B15 read-ahead
cache which requires a different set of cache functions, allow the
__v7_proc macro to override the cache_fns settings, and default to
v7_cache_fns unless specified otherwise.
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
cpu_v7_reset() now takes a second parameter indicating whether
we should reboot in HYP or not. Update the documentation to
reflect this.
Tested-by: Keerthy <j-keerthy@ti.com>
Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
When we soft-reboot (eg, kexec) from one kernel into the next, we need
to ensure that we enter the new kernel in the same processor mode as
when we were entered, so that (eg) the new kernel can install its own
hypervisor - the old kernel's hypervisor will have been overwritten.
In order to do this, we need to pass a flag to cpu_reset() so it knows
what to do, and we need to modify the kernel's own hypervisor stub to
allow it to handle a soft-reboot.
As we are always guaranteed to install our own hypervisor if we're
entered in HYP32 mode, and KVM will have moved itself out of the way
on kexec/normal reboot, we can assume that our hypervisor is in place
when we want to kexec, so changing our hypervisor API should not be a
problem.
Tested-by: Keerthy <j-keerthy@ti.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
Commit d781145549 ("ARM: 8512/1: proc-v7.S: Adjust stack address when
XIP_KERNEL") introduced a macro which lives under asm/memory.h.
Unfortunately, for MMU-less systems (like R-class) it leads to a build failure:
arch/arm/mm/proc-v7.S: Assembler messages:
arch/arm/mm/proc-v7.S:538: Error: unrecognised relocation suffix
make[1]: *** [arch/arm/mm/proc-v7.o] Error 1
make: *** [arch/arm/mm] Error 2
since it is implicitly pulled in via asm/pgtable.h for MMU-capable systems only.
To fix it, include asm/memory.h explicitly.
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The workaround for both errata is to set bit 24 in the diagnostic
register. There are no known end-user bugs solved by fixing these
errata, but the fix is trivial and it seems sane to apply it.
The arguments for why this needs to be in the kernel are similar to the
arguments made in the patch "Workaround errata A12 818325/852422 A17
852423".
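In sketch form, the workaround is a read-modify-write of that register (the
CP15 encoding shown is the one conventionally used for the diagnostic
register in proc-v7.S; treat the details as illustrative):

	mrc	p15, 0, r10, c15, c0, 1	@ read diagnostic register
	orr	r10, r10, #1 << 24	@ set bit #24
	mcr	p15, 0, r10, c15, c0, 1	@ write diagnostic register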
Signed-off-by: Douglas Anderson <dianders@chromium.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This erratum has a very simple workaround (set a bit in a register), so
let's apply it. Apparently the workaround's downside is a very slight
power impact.
Note that applying this workaround fixes deadlocks that are easy to
reproduce with real-world applications.
The arguments for why this needs to be in the kernel are similar to the
arguments made in the patch "Workaround errata A12 818325/852422 A17
852423".
Signed-off-by: Douglas Anderson <dianders@chromium.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
There are several similar errata on Cortex A12 and A17 that all have the same workaround: setting bit[12] of the Feature Register.
Technically the list of errata are:
- A12 818325: Execution of an UNPREDICTABLE STR or STM instruction
might deadlock. Fixed in r0p1.
- A12 852422: Execution of a sequence of instructions might lead to
either a data corruption or a CPU deadlock. Not fixed in any A12s
yet.
- A17 852423: Execution of a sequence of instructions might lead to
either a data corruption or a CPU deadlock. Not fixed in any A17s
yet.
Since the A12 got renamed to the A17 it seems likely that there won't be any
future Cortex-A12 cores, so we'll enable the workaround for all Cortex-A12 revisions.
For Cortex-A17 I believe that all known revisions are affected and that "all known revisions" means <= r1p2. Presumably if a new A17 was
released it would have this problem fixed.
Note that in <https://patchwork.kernel.org/patch/4735341/> folks
previously expressed opposition to this change because:
A) It was thought to only apply to r0p0 and there were no known r0p0
boards supported in mainline.
B) It was argued that such a workaround belonged in firmware.
Now that this same fix solves other errata on real boards (like
rk3288) point A) is addressed.
Point B) is impossible to address on boards like rk3288. On rk3288
the firmware doesn't stay resident in RAM and isn't involved at all in
the suspend/resume process nor in the SMP bringup process. That means
that the most the firmware could do would be to set the bit on "core
0" and this bit would be lost at suspend/resume time. It is true that
we could write a "generic" solution that saved the boot-time "core 0"
value of this register and applied it at SMP bringup / resume time.
However, since this register (described as the "Feature Register" in
errata) appears to be undocumented (as far as I can tell) and is only
modified for these errata, it is questionable whether that "generic"
solution is really cleaner. The generic solution also won't fix existing users that
haven't happened to do a FW update.
Note that in ARM64 presumably PSCI will be universal and fixes like
this will end up in ATF. Hopefully we are nearing the end of this
style of errata workaround.
Signed-off-by: Douglas Anderson <dianders@chromium.org>
Signed-off-by: Huang Tao <huangtao@rock-chips.com>
Signed-off-by: Kever Yang <kever.yang@rock-chips.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Masahiro Yamada reports that we can fail to set the FW bit in the
auxiliary control register, which enables broadcasting the cache
maintenance operations. This occurs because we only check that the
SMP/nAMP bit is set, rather than checking whether all the bits we
want to be set are set.
Rearrange the code to ensure that all desired bits are set, and only
update the register if we discover some required bits are not set.
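A minimal sketch of the intended logic (ACTLR encoding is standard; the bit
positions shown are the Cortex-A9 FW and SMP bits and are illustrative):

	mrc	p15, 0, r0, c1, c0, 1		@ read aux control register
	and	r10, r0, #(1 << 6) | (1 << 0)	@ isolate SMP and FW bits
	teq	r10, #(1 << 6) | (1 << 0)	@ are both already set?
	orrne	r0, r0, #(1 << 6) | (1 << 0)	@ no: set the missing ones
	mcrne	p15, 0, r0, c1, c0, 1		@ and write back only then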
Tested-by: Masahiro Yamada <yamada.masahiro@socionext.com>
The physical-relative calculation between the XIP text and data sections
introduced by the previous patch was far from obvious. Let's simplify it
by turning it into a macro which takes the two (virtual) addresses.
This allows us to arrange the calculation in a more obvious manner - we
can make it two sub-expressions which calculate the physical address for
each symbol, and then take the difference of those physical addresses.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
When XIP_KERNEL is enabled, the virt to phys address translation for RAM
is not the same as the virt to phys address translation for .text.
The only way to know where physical RAM is located is to use
PLAT_PHYS_OFFSET.
This macro will also be useful in other places where there is a similar problem.
Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Chris Brandt <chris.brandt@renesas.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The proc-v7.S code uses a small temporary stack to preserve register
content in its setup code. This stack is located in the .text section
which is normally meant to be read-only.
Move that temporary stack to the .bss section and get its address in
a position independent way, similarly to what we do in other parts
of the kernel.
While at it, one comment was updated to reflect reality, and the list
of saved registers in the proc-v7.S case was updated to match the comment
next to it, for consistency.
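A minimal sketch of the general technique (labels and register choice
illustrative):

	adr	r12, 1f			@ runtime address of the literal below
	ldr	r10, [r12]		@ link-time offset to the stack
	add	r12, r12, r10		@ runtime address of the .bss stack
	stmia	r12, {r1 - r6, lr}	@ use it as temporary storage
	...
1:	.long	__v7_setup_stack - 1b

	.bss
	.align	2
__v7_setup_stack:
	.space	4 * 7			@ room for the saved registers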
Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
In the cpu_v7_do_suspend routine, r11 is used but NOT saved/restored.
Different compilers may use the ARM general-purpose registers differently,
so this can cause problems when calling cpu_v7_do_suspend.
We hit a kernel fault when using GCC 4.8.3: r11 contains a valid value
before the call into cpu_v7_do_suspend, but on return from this routine
r11 is corrupted, leading to the fault.
Registers that are clobbered in assembly code must be saved and restored.
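A sketch of the kind of fix this implies (register range illustrative):

ENTRY(cpu_v7_do_suspend)
	stmfd	sp!, {r4 - r11, lr}	@ now also preserves r11
	...				@ save the CP15 state as before
	ldmfd	sp!, {r4 - r11, pc}	@ and restores it on return
ENDPROC(cpu_v7_do_suspend)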
Signed-off-by: Anson Huang <Anson.Huang@freescale.com>
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Cc: <stable@vger.kernel.org> # v3.3+
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
We must invalidate the L1 cache before enabling coherency, otherwise
secondary CPUs can inject invalid cache lines into the coherent CPU
cluster, which could then be migrated to other CPUs. This fixes a
recent regression with SoCFPGA randomly failing to boot.
Fixes: 02b4e2756e ("ARM: v7 setup function should invalidate L1 cache")
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Document that r13 is not a stack in the initialisation function, in
case anyone gets other ideas.
Document the registers available for the errata workarounds, and
specifically which registers contain parts of the MIDR register, as
well as which registers must be preserved.
Lastly, use the lowest numbered available register (r0) rather than
r10 for temporary storage.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
We already have the main ID register available in r9, there's no need
to refetch it. Use the saved value.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Rather than having a long sprawling __v7_setup function, which is hard
to maintain properly, move the CPU errata out of line.
While doing this, it was discovered that the Cortex-A15 errata had been
incorrectly added:
	ldr	r10, =0x00000c08	@ Cortex-A8 primary part number
	teq	r0, r10
	bne	2f
	/* Cortex-A8 errata */
	b	3f
2:	ldr	r10, =0x00000c09	@ Cortex-A9 primary part number
	teq	r0, r10
	bne	3f
	/* Cortex-A9 errata */
3:	ldr	r10, =0x00000c0f	@ Cortex-A15 primary part number
	teq	r0, r10
	bne	4f
	/* Cortex-A15 errata */
4:
This results in the Cortex-A15 test always being executed after the
Cortex-A8 and Cortex-A9 errata, which is obviously not what is intended.
The 'b 3f' branches should have been updated to 'b 4f'. The new structure
of:
	/* Cortex-A8 Errata */
	ldr	r10, =0x00000c08	@ Cortex-A8 primary part number
	teq	r0, r10
	beq	__ca8_errata

	/* Cortex-A9 Errata */
	ldr	r10, =0x00000c09	@ Cortex-A9 primary part number
	teq	r0, r10
	beq	__ca9_errata

	/* Cortex-A15 Errata */
	ldr	r10, =0x00000c0f	@ Cortex-A15 primary part number
	teq	r0, r10
	beq	__ca15_errata

__errata_finish:
is much cleaner and makes it easier to see that this kind of thing doesn't
happen.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Re-engineer the LPAE TTBR setup code. Rather than passing some shifted
address in order to fit in a CPU register, pass either a full physical
address (in the case of r4, r5 for TTBR0) or a PFN (for TTBR1).
This removes the ARCH_PGD_SHIFT hack, and the last dangerous user of
cpu_set_ttbr() in the secondary CPU startup code path (which was there
to re-set TTBR1 to the appropriate high physical address space on
Keystone2).
Tested-by: Murali Karicheri <m-karicheri2@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
All ARMv5 and older CPUs invalidate their caches in the early assembly
setup function, prior to enabling the MMU. This is because the L1
cache should not contain any data relevant to the execution of the
kernel at this point; all data should have been flushed out to memory.
This requirement should also be true for ARMv6 and ARMv7 CPUs - indeed,
these typically do not search their caches when caching is disabled (as
it needs to be when the MMU is disabled) so this change should be safe.
ARMv7 allows there to be CPUs which search their caches while caching is
disabled, and it's permitted that the cache is uninitialised at boot;
for these, the architecture reference manual requires that an
implementation specific code sequence is used immediately after reset
to ensure that the cache is placed into a sane state. Such
functionality is definitely outside the remit of the Linux kernel, and
must be done by the SoC's firmware before _any_ CPU gets to the Linux
kernel.
Changing the data cache clean+invalidate to a mere invalidate allows us
to get rid of a lot of platform specific hacks around this issue for
their secondary CPU bringup paths - some of which were buggy.
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Tested-by: Florian Fainelli <f.fainelli@gmail.com>
Tested-by: Heiko Stuebner <heiko@sntech.de>
Tested-by: Dinh Nguyen <dinguyen@opensource.altera.com>
Acked-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
Tested-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
Acked-by: Shawn Guo <shawn.guo@linaro.org>
Tested-by: Thierry Reding <treding@nvidia.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Tested-by: Geert Uytterhoeven <geert+renesas@glider.be>
Tested-by: Michal Simek <michal.simek@xilinx.com>
Tested-by: Wei Xu <xuwei5@hisilicon.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Avoid the errata 430973 workaround for non-Cortex A8 CPUs. Having this
workaround enabled introduces an additional branch target buffer flush
into the context switching path, something we wish to avoid. To allow
this errata to be enabled in multiplatform kernels while reducing its
impact, rearrange the Cortex-A8 CPU support to avoid impacting on other
Version 7 CPUs.
Tested-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch replaces the 'branch to setup()' instructions embedded
in the PROCINFO structs with the offset to that setup function
relative to the base of the struct. This preserves the position
independent nature of that field, but uses a data item rather
than an instruction.
This is mainly done to prevent linker failures on large kernels,
where the setup function is out of reach for the branch.
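Conceptually (symbol names illustrative, not the exact macro from the patch):

	@ Before: a branch embedded in the proc_info struct; the setup
	@ function must be within branch range of the struct itself.
	b	__v7_setup

	@ After: a data word holding the setup function's offset from the
	@ struct base; the caller adds the struct's runtime address back
	@ before calling it, so there is no range limit.
	.long	__v7_setup - __v7_proc_info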
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Under extremely rare conditions, in an MPCore node consisting of at
least 3 CPUs, two CPUs trying to perform a STREX to data on the same
shared cache line can enter a livelock situation.
This patch enables the HW mechanism that overcomes the bug. This fixes
the incorrect setup of the STREX backoff delay bit due to a wrong
description in the specification.
Note that enabling the STREX backoff delay mechanism is done by
leaving the bit *cleared*, while the bit was currently being set by
the proc-v7.S code.
[Thomas: adapt to latest mainline, slightly reword the commit log, add
stable markers.]
Fixes: de4901933f ("arm: mm: Add support for PJ4B cpu and init routines")
Cc: <stable@vger.kernel.org> # v3.8+
Signed-off-by: Nadav Haklai <nadavh@marvell.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Acked-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Acked-by: Jason Cooper <jason@lakedaemon.net>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Certain versions of the Krait processor don't report that they
support the fused multiply accumulate instruction via the MVFR1
register despite the fact that they actually do. Unfortunately we
use this register to identify support for VFPv4. Override the
hwcap on all Krait processors to indicate support for VFPv4 to
work around this.
Tested-by: Rob Clark <robdclark@gmail.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The Brahma-B15's ISAR0 correctly advertises UDIV/SDIV support in both ARM
and Thumb2 modes (CPUID_EXT_ISAR0=02101110), so we don't need to
manually apply this hwcap.
The code in question actually predates the following commit, which made
our hwcaps unnecessary:
  commit 8164f7af88
  Author: Stephen Boyd <sboyd@codeaurora.org>
  Date:   Mon Mar 18 19:44:15 2013 +0100

      ARM: 7680/1: Detect support for SDIV/UDIV from ISAR0 register
Signed-off-by: Brian Norris <computersforpeace@gmail.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Perform any CPU-specific initialization required on the
Broadcom Brahma-15 core.
Signed-off-by: Marc Carino <marc.ceeeee@gmail.com>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Brian Norris <computersforpeace@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The CP15 diagnostic register holds ARM errata bits on Cortex-A9, so it
needs to be saved/restored on suspend/resume. Otherwise, the
effectiveness of the errata workarounds is lost together with the diagnostic
register bits across a suspend/resume cycle. The CP15 power control
register of the Cortex-A9 has the same problem.
The patch adds a couple of Cortex-A9 specific suspend/resume functions
to save/restore these two Cortex-A9 CP15 registers across the
suspend/resume cycle.
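In sketch form, the suspend side saves the two extra registers and falls
through to the generic v7 code (CP15 encodings are those conventionally used
for the Cortex-A9 diagnostic and power control registers; details
illustrative):

ENTRY(cpu_ca9mp_do_suspend)
	stmfd	sp!, {r4, r5}
	mrc	p15, 0, r4, c15, c0, 1	@ diagnostic register
	mrc	p15, 0, r5, c15, c0, 0	@ power control register
	stmia	r0!, {r4, r5}		@ store into the suspend buffer
	ldmfd	sp!, {r4, r5}
	b	cpu_v7_do_suspend
ENDPROC(cpu_ca9mp_do_suspend)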
Signed-off-by: Shawn Guo <shawn.guo@freescale.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Since pj4b suspend/resume routines are implemented based on generic
ARMv7 ones, instead of hard-coding cpu_pj4b_suspend_size, we should have
it be cpu_v7_suspend_size plus pj4b specific bytes. Otherwise, if
cpu_v7_suspend_size gets updated alone, the pj4b suspend/resume will
likely be broken.
While at it, fix the comments in cpu_pj4b_do_resume, as we're restoring
CP15 registers rather than saving them there.
Signed-off-by: Shawn Guo <shawn.guo@freescale.com>
Acked-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Tested-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
ARMv6 and greater introduced a new instruction ("bx") which can be used
to return from function calls. Recent CPUs perform better when the
"bx lr" instruction is used rather than the "mov pc, lr" instruction,
and this sequence is strongly recommended to be used by the ARM
architecture manual (section A.4.1.1).
We provide a new macro "ret" with all its variants for the condition
code which will resolve to the appropriate instruction.
Rather than doing this piecemeal, and miss some instances, change all
the "mov pc" instances to use the new macro, with the exception of
the "movs" instruction and the kprobes code. This allows us to detect
the "mov pc, lr" case and fix it up - and also gives us the possibility
of deploying this for other registers depending on the CPU selection.
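A simplified sketch of such a macro (the in-tree version also generates the
condition-code variants, e.g. reteq/retne):

	.macro	ret, reg
#if __LINUX_ARM_ARCH__ < 6
	mov	pc, \reg		@ pre-ARMv6: bx is not available
#else
	.ifeqs	"\reg", "lr"
	bx	\reg			@ preferred return on ARMv6 and later
	.else
	mov	pc, \reg		@ other registers keep the old form
	.endif
#endif
	.endm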
Reported-by: Will Deacon <will.deacon@arm.com>
Tested-by: Stephen Warren <swarren@nvidia.com> # Tegra Jetson TK1
Tested-by: Robert Jarzmik <robert.jarzmik@free.fr> # mioa701_bootresume.S
Tested-by: Andrew Lunn <andrew@lunn.ch> # Kirkwood
Tested-by: Shawn Guo <shawn.guo@freescale.com>
Tested-by: Tony Lindgren <tony@atomide.com> # OMAPs
Tested-by: Gregory CLEMENT <gregory.clement@free-electrons.com> # Armada XP, 375, 385
Acked-by: Sekhar Nori <nsekhar@ti.com> # DaVinci
Acked-by: Christoffer Dall <christoffer.dall@linaro.org> # kvm/hyp
Acked-by: Haojian Zhuang <haojian.zhuang@gmail.com> # PXA3xx
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> # Xen
Tested-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> # ARMv7M
Tested-by: Simon Horman <horms+renesas@verge.net.au> # Shmobile
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Cortex-A17 has identical initialisation requirements to Cortex-A12, so
hook it up in proc-v7.S in the same way.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
PJ4B needs extra instructions for suspend and resume, so instead of
using the armv7 version, this commit introduces specific versions for
PJ4B.
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The A12 behaves as the A7/A15 does with respect to setting the SMP bit, and
doesn't require TLB ops broadcasting to be explicitly enabled like the A9 does.
Note that as the ACTLR cannot (usually) be written from non-secure, it is the
responsibility of the bootloader/firmware to set this bit per core - it is
done here in Linux as a last resort in case of bad firmware.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Jonathan Austin <jonathan.austin@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
During __v{6,7}_setup, we invalidate the TLBs since we are about to
enable the MMU on return to head.S. Unfortunately, without a subsequent
dsb instruction, the invalidation is not guaranteed to have completed by
the time we write to the sctlr, potentially exposing us to junk/stale
translations cached in the TLB.
This patch reworks the init functions so that the dsb used to ensure
completion of cache/predictor maintenance is also used to ensure
completion of the TLB invalidation.
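In sketch form (register usage illustrative):

	mov	r10, #0
	mcr	p15, 0, r10, c8, c7, 0	@ invalidate entire unified TLB
	...				@ I-cache / branch predictor maintenance
	dsb				@ one barrier now covers the TLB, cache
	isb				@ and predictor ops before the SCTLR is
					@ written on return to head.S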
Cc: <stable@vger.kernel.org>
Reported-by: Albin Tonnerre <Albin.Tonnerre@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>