Add support for parsing the explicit topology bindings to discover the
topology of the system.
Since it is not currently clear how to map multi-level clusters for the
scheduler, all leaf clusters are presented to the scheduler at the same
level. This should be enough to provide good support for current systems.
Signed-off-by: Mark Brown <broonie@linaro.org>
Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
As a legacy of the way 32-bit ARM did things, the topology code uses a null
topology map by default and then overwrites it later, mapping cores with no
information to a cluster by themselves. In order to make it simpler to reset
things as part of recovering from parse failures in firmware information,
directly set this configuration on init. A core will always be its own
sibling, so there should be no risk of confusion with firmware-provided
information.
Signed-off-by: Mark Brown <broonie@linaro.org>
Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Remove unused and deprecated mc_capable() and smt_capable().
Both were added recently by f6e763b93a ("arm64: topology:
Implement basic CPU topology support"). Uses of both were removed
by 8e7fbcbc22 ("sched: Remove stale power aware scheduling
remnants and dysfunctional knobs").
Signed-off-by: Zi Shen Lim <zlim@broadcom.com>
Signed-off-by: Mark Brown <broonie@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This reverts commit bc07c2c6e9.
While the aim is increased security for --x memory maps, it does not
protect against kernel-level reads. Until SECCOMP is implemented for
arm64, revert this patch to avoid giving a false idea of execute-only
mappings.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Merge tag 'for-3.16' of git://git.linaro.org/people/ard.biesheuvel/linux-arm into upstream
FPSIMD register bank context switching and crypto algorithms
optimisations for arm64 from Ard Biesheuvel.
* tag 'for-3.16' of git://git.linaro.org/people/ard.biesheuvel/linux-arm:
arm64/crypto: AES-ECB/CBC/CTR/XTS using ARMv8 NEON and Crypto Extensions
arm64: pull in <asm/simd.h> from asm-generic
arm64/crypto: AES in CCM mode using ARMv8 Crypto Extensions
arm64/crypto: AES using ARMv8 Crypto Extensions
arm64/crypto: GHASH secure hash using ARMv8 Crypto Extensions
arm64/crypto: SHA-224/SHA-256 using ARMv8 Crypto Extensions
arm64/crypto: SHA-1 using ARMv8 Crypto Extensions
arm64: add support for kernel mode NEON in interrupt context
arm64: defer reloading a task's FPSIMD state to userland resume
arm64: add abstractions for FPSIMD state manipulation
asm-generic: allow generic unaligned access if the arch supports it
Conflicts:
arch/arm64/include/asm/thread_info.h
This adds ARMv8 implementations of AES in ECB, CBC, CTR and XTS modes,
both for ARMv8 with Crypto Extensions and for plain ARMv8 NEON.
The Crypto Extensions version can only run on ARMv8 implementations that
have support for these optional extensions.
The plain NEON version is a table-based yet time-invariant implementation.
All S-box substitutions are performed in parallel, leveraging the wide range
of ARMv8's tbl/tbx instructions and the huge NEON register file, which can
comfortably hold the entire S-box and still have room to spare for doing the
actual computations.
The key expansion routines were borrowed from aes_generic.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds support for the AES-CCM encryption algorithm for CPUs that
have support for the AES part of the ARM v8 Crypto Extensions.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds support for the AES symmetric encryption algorithm for CPUs
that have support for the AES part of the ARM v8 Crypto Extensions.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
This is a port to ARMv8 (Crypto Extensions) of the Intel implementation of the
GHASH Secure Hash (used in the Galois/Counter chaining mode). It relies on the
optional PMULL/PMULL2 instructions (polynomial multiply long, what Intel calls
carry-less multiplication).
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds support for the SHA-224 and SHA-256 Secure Hash Algorithms
for CPUs that have support for the SHA-2 part of the ARM v8 Crypto Extensions.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds support for the SHA-1 Secure Hash Algorithm for CPUs that
have support for the SHA-1 part of the ARM v8 Crypto Extensions.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Some kernel files may include both linux/compat.h and asm/compat.h directly
or indirectly. Since both header files contain is_compat_task() under
!CONFIG_COMPAT, compiling them with !CONFIG_COMPAT will eventually fail.
Such files include kernel/auditsc.c, kernel/seccomp.c and init/do_mounts.c
(do_mounts.c may read asm/compat.h via asm/ftrace.h once ftrace is
implemented).
So this patch proactively
1) removes is_compat_task() under !CONFIG_COMPAT from asm/compat.h, and
2) replaces asm/compat.h with linux/compat.h in kernel/*.c;
asm/compat.h is still necessary in ptrace.c and process.c because
they use is_compat_thread().
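A minimal sketch of the resulting arrangement, assuming the !CONFIG_COMPAT
stub lives only in linux/compat.h as the description implies:

/* linux/compat.h: the single home of the !CONFIG_COMPAT stub */
#ifndef CONFIG_COMPAT
static inline int is_compat_task(void)
{
	return 0;	/* without compat support, no task is a compat task */
}
#endif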
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This macro, regs_return_value, is used mainly by audit to record a system
call's result, but may also be used in test_kprobes.c.
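On arm64 the return value lives in x0, so a sketch of the macro (illustrative;
the exact definition may differ) is a one-liner:

/* the syscall/function return value is held in regs->regs[0], i.e. x0 */
#define regs_return_value(regs)	((regs)->regs[0])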
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Richard Guy Briggs <rgb@redhat.com>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
As done on arm, this change makes it easy to confirm that we invoke the
syscall-related hooks, including the syscall tracepoint, audit and seccomp
(to be implemented later), in the correct order: operations are undone on
exit in the opposite order to that in which they were performed on entry.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Currently syscall_trace() is called only for ptrace.
With additional TIF_xx flags defined, it is now also called in the audit,
ftrace and seccomp cases, in addition to ptrace.
Acked-by: Richard Guy Briggs <rgb@redhat.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Since mdscr_el1 is part of the debug register group, it is highly likely
to be trapped by a hypervisor to prevent virtual machines from debugging
(buggering?) each other. Unfortunately, this absolutely destroys our
performance, since we access the register on many of our low-level
fault handling paths to keep track of the various debug state machines.
This patch removes our dependency on mdscr_el1 in the case that debugging
is not being used. More specifically we:
- Use TIF_SINGLESTEP to indicate that a task is stepping at EL0 and
avoid disabling step in the MDSCR when we don't need to.
MDSCR_EL1.SS handling is moved to kernel_entry, when trapping from
userspace.
- Ensure debug exceptions are re-enabled on *all* exception entry
paths, even the debug exception handling path (where we re-enable
exceptions after invoking the handler). Since we can now rely on
MDSCR_EL1.SS being cleared by the entry code, exception handlers can
usually enable debug immediately before enabling interrupts.
- Remove all debug exception unmasking from ret_to_user and
el1_preempt, since we will never get here with debug exceptions
masked.
This results in a slight change to kernel debug behaviour, where we now
step into interrupt handlers and data aborts from EL1 when debugging the
kernel, which is actually a useful thing to do. A side-effect of this is
that it *does* potentially prevent stepping off {break,watch}points when
there is a high-frequency interrupt source (e.g. a timer), so a debugger
would need to use either breakpoints or manually disable interrupts to
get around this issue.
With this patch applied, guest performance is restored under KVM when
debug register accesses are trapped (and we get a measurable performance
increase on the host on Cortex-A57 too).
Cc: Ian Campbell <ian.campbell@citrix.com>
Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
In order to ensure ordering and completion of inner-shareable maintenance
instructions (cache and TLB) on AArch64, we can use the -ish suffix to
the dmb and dsb instructions respectively.
This patch updates our low-level cache and tlb maintenance routines to
use the inner-shareable barrier variants where appropriate.
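As an illustrative sketch (not the patch itself), a full TLB flush using the
inner-shareable variants looks roughly like:

/* sketch: invalidate all stage-1 TLB entries across the IS domain */
static inline void flush_tlb_all(void)
{
	asm volatile(
	"	dsb	ishst\n"	/* complete prior page-table stores */
	"	tlbi	vmalle1is\n"	/* broadcast invalidate, inner-shareable */
	"	dsb	ish\n"		/* wait for the invalidation to finish */
	"	isb"
	: : : "memory");
}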
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
In order to ensure completion of inner-shareable maintenance instructions
(cache and TLB) on AArch64, we can use the -ish suffix to the dsb
instruction.
This patch relaxes our dsb sy instructions to dsb ish where possible.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
set_cpu_boot_mode_flag is used to identify which exception levels are
encountered across the system by CPUs trying to enter the kernel. The
basic algorithm is: if a CPU is booting at EL2, it will set a flag at
an offset of #4 from __boot_cpu_mode, a cacheline-aligned variable.
Otherwise, a flag is set at an offset of zero into the same cacheline.
This enables us to check that all CPUs booted at the same exception
level.
This cacheline is written with the stage-1 MMU off (that is, via a
strongly-ordered mapping) and will bypass any clean lines in the cache,
leading to potential coherence problems when the variable is later
checked via the normal, cacheable mapping of the kernel image.
This patch reworks the broken flushing code so that we:
(1) Use a DMB to order the strongly-ordered write of the cacheline
against the subsequent cache-maintenance operation (by-VA
operations only hazard against normal, cacheable accesses).
(2) Use a single dc ivac instruction to invalidate any clean lines
containing a stale copy of the line after it has been updated.
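In C-flavoured form (the real sequence lives in assembly in head.S; the helper
name here is hypothetical), the fixed flush is roughly:

/* sketch: publish a flag written while the stage-1 MMU is off */
static void set_boot_mode_flag(volatile unsigned long *flag, unsigned long mode)
{
	*flag = mode;				/* strongly-ordered write */
	asm volatile("dmb sy" : : : "memory");	/* (1) order write vs. CMO */
	asm volatile("dc ivac, %0"		/* (2) invalidate stale clean lines */
		     : : "r" (flag) : "memory");
	asm volatile("dsb sy" : : : "memory");	/* wait for completion */
}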
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The recently introduced acquire/release accessors refer to smp_mb()
in the !CONFIG_SMP case. This is confusing when reading the code, so use
barrier() directly when we know we're UP.
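A sketch of the UP variants after the change (macro shapes follow the kernel's
acquire/release accessors of that era; details may differ):

#ifndef CONFIG_SMP
#define smp_load_acquire(p)						\
({									\
	typeof(*p) ___p1 = ACCESS_ONCE(*p);				\
	barrier();	/* compiler barrier only: no other CPUs on UP */\
	___p1;								\
})

#define smp_store_release(p, v)						\
do {									\
	barrier();	/* was smp_mb(), which read confusingly here */	\
	ACCESS_ONCE(*p) = (v);						\
} while (0)
#endif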
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Now that all callers of the barrier macros are updated to pass the
mandatory options, update the macros so the option is actually used.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
When calling our low-level barrier macros directly, we can often suffice
with more relaxed behaviour than the default "all accesses, full system"
option.
This patch updates the users of dsb() to specify the option which they
actually require.
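A sketch of the parameterised macros, with the option pasted straight into the
instruction, plus a typical relaxed use (the caller below is hypothetical):

#define dsb(opt)	asm volatile("dsb " #opt : : : "memory")
#define dmb(opt)	asm volatile("dmb " #opt : : : "memory")

/* a store-only, inner-shareable barrier is enough to order a PTE
 * write against a subsequent TLB invalidation */
static inline void publish_pte_write(void)
{
	dsb(ishst);
}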
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The TLB maintenance functions __cpu_flush_user_tlb_range and
__cpu_flush_kern_tlb_range do not take the page granule into consideration
when looping through the address range, and repeatedly flush
TLB entries for the same page when operating with 64K pages.
This patch reworks the logic so that we instead advance the loop by
1 << (PAGE_SHIFT - 12), avoiding repeated flushes of the same page; the TLBI
address operand is expressed in 4KB units, so this stride corresponds to
exactly one page.
The routines have also been converted from assembler to static inline
functions to aid legibility and enable potential compiler optimisations.
The isb() has been removed from flush_tlb_kernel_range() as it is
only needed when changing the execute permission of a mapping. If one
needs to set an area of the kernel as execute/non-execute, an isb()
must be inserted after the call to flush_tlb_kernel_range().
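A simplified sketch of the reworked loop (ASID handling abbreviated;
PAGE_SHIFT comes from the configured granule):

/* sketch: one TLBI per page; the operand is in 4KB units */
static inline void flush_user_tlb_range(unsigned long asid,
					unsigned long start, unsigned long end)
{
	unsigned long addr;

	start = (asid << 48) | (start >> 12);
	end   = (asid << 48) | (end >> 12);

	asm volatile("dsb ishst" : : : "memory");
	for (addr = start; addr < end; addr += 1 << (PAGE_SHIFT - 12))
		asm volatile("tlbi vae1is, %0" : : "r" (addr));
	asm volatile("dsb ish" : : : "memory");
}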
Cc: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Steve Capper <steve.capper@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Some users of xchg() don't bother using the return value, which results
in a compiler warning like the following (from kgdb):
In file included from linux/arch/arm64/include/asm/atomic.h:27:0,
from include/linux/atomic.h:4,
from include/linux/spinlock.h:402,
from include/linux/seqlock.h:35,
from include/linux/time.h:5,
from include/uapi/linux/timex.h:56,
from include/linux/timex.h:56,
from include/linux/sched.h:19,
from include/linux/pid_namespace.h:4,
from kernel/debug/debug_core.c:30:
kernel/debug/debug_core.c: In function ‘kgdb_cpu_enter’:
linux/arch/arm64/include/asm/cmpxchg.h:75:3: warning: value computed is not used [-Wunused-value]
((__typeof__(*(ptr)))__xchg((unsigned long)(x),(ptr),sizeof(*(ptr))))
^
linux/arch/arm64/include/asm/atomic.h:132:30: note: in expansion of macro ‘xchg’
#define atomic_xchg(v, new) (xchg(&((v)->counter), new))
kernel/debug/debug_core.c:504:4: note: in expansion of macro ‘atomic_xchg’
atomic_xchg(&kgdb_active, cpu);
^
This patch makes use of the same trick as we do for cmpxchg, by assigning
the return value to a dummy variable in the xchg() macro itself.
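The trick, sketched (matching the cmpxchg pattern referenced above):

/* consume the result inside the macro so callers may ignore it */
#define xchg(ptr, x)							\
({									\
	__typeof__(*(ptr)) __ret;					\
	__ret = (__typeof__(*(ptr)))					\
		__xchg((unsigned long)(x), (ptr), sizeof(*(ptr)));	\
	__ret;								\
})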
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
We have the capability to map 1GB level 1 blocks when using a 4K
granule.
This patch adjusts the create_mapping logic such that, when mapping physical
memory on boot, we attempt to use a 1GB block if both the VA and PA
start and end are 1GB aligned. This both reduces the levels of lookup
required to resolve a kernel logical address and reduces TLB
pressure on cores that support 1GB TLB entries.
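A sketch of the alignment test (with the 4K granule a level-1 block spans
PUD_SIZE = 1GB; the fallback helper name is illustrative):

/* use a 1GB block only when VA, PA and the mapping end all line up */
if (((addr | next | phys) & ~PUD_MASK) == 0)
	set_pud(pud, __pud(phys | PROT_SECT_NORMAL_EXEC));
else
	alloc_init_pmd(pud, addr, next, phys);	/* fall back to 2MB/4K */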
Signed-off-by: Steve Capper <steve.capper@linaro.org>
Tested-by: Jungseok Lee <jays.lee@samsung.com>
[catalin.marinas@arm.com: s/prot_sect_kernel/PROT_SECT_NORMAL_EXEC/]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The primary aim of this patchset is to remove the pgprot_default and
prot_sect_default global variables and rely strictly on predefined
values. The original goal was to be able to run SMP kernels on UP
hardware by not setting the Shareability bit. However, it is unlikely to
see UP ARMv8 hardware and even if we do, the Shareability bit is no
longer assumed to disable cacheable accesses.
A side effect is that the device mappings now have the Shareability
attribute set. The hardware, however, should ignore it since Device
accesses are always Outer Shareable.
Following the removal of the two global variables, there is some PROT_*
macro reshuffling and cleanup, including the __PAGE_* macros (replaced
by PAGE_*).
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
The ARMv8 architecture allows execute-only user permissions by clearing
the PTE_UXN and PTE_USER bits. The kernel, however, can still access
such a page, so execute-only page permission does not protect against
read(2)/write(2) etc. accesses. Systems requiring such protection must
implement/enable features like SECCOMP.
This patch changes the arm64 __P100 and __S100 protection_map[] macros
to the new __PAGE_EXECONLY attributes. A side effect is that
pte_valid_user() no longer triggers for __PAGE_EXECONLY since PTE_USER
isn't set. To work around this, the check is done on the PTE_NG bit via
the pte_valid_ng() macro. VM_READ is also checked now for page faults.
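A sketch of the resulting definitions (attribute spellings as described
above):

/* execute-only: valid and not-global, but neither PTE_USER nor PTE_UXN */
#define __PAGE_EXECONLY	__pgprot(_PAGE_DEFAULT | PTE_NG | PTE_PXN)

/* user mappings are now recognised by nG rather than PTE_USER */
#define pte_valid_ng(pte) \
	((pte_val(pte) & (PTE_VALID | PTE_NG)) == (PTE_VALID | PTE_NG))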
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This information is useful for instruction emulators to detect the
read/write direction and access size without having to decode the faulting
instruction. The current patch exports it via sigcontext (struct
esr_context); it is only valid for SIGSEGV and SIGBUS.
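The record added to the signal frame, sketched consistently with the
description:

#define ESR_MAGIC	0x45535201

struct esr_context {
	struct _aarch64_ctx head;	/* magic + size, as for other records */
	__u64 esr;			/* syndrome of the faulting access */
};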
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This patch removes the aux_context structure (and the containing file)
to allow the placement of the _aarch64_ctx end magic based on the
context stored on the signal stack.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
For AArch32, bit 11 (WnR) of the FSR/ESR register is set when the fault
was caused by a write access and applications like Qemu rely on such
information being provided in sigcontext. This patch introduces the
ESR_EL1 tracking for the arm64 kernel faults and sets bit 11 accordingly
in compat sigcontext.
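As a sketch of the fixup (ESR_ELx.WnR is bit 6 for data aborts; the helper
name here is hypothetical):

/* fold the saved ESR's WnR into bit 11 of the compat FSR */
static u32 compat_fsr_wnr(u32 fsr, unsigned long esr)
{
	if (esr & (1UL << 6))		/* fault caused by a write access */
		fsr |= 1U << 11;	/* FSR.WnR */
	return fsr;
}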
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The synchronisation with the boot thread already happens in __cpu_up()
via wait_for_completion_timeout(). In addition, __cpu_up() calls are
protected by the cpu_add_remove_lock mutex and already serialised.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The hardware provides the maximum cache line size in the system via the
CTR_EL0.CWG bits. This patch implements the cache_line_size() function
to read this information, together with a sanity check that fires if the
statically defined L1_CACHE_BYTES is smaller than the hardware value.
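A sketch of the derivation (CWG sits in CTR_EL0 bits [27:24] and encodes the
writeback granule as 4 << CWG bytes; zero means the value is not provided):

static inline int cache_line_size(void)
{
	u32 cwg = (read_cpuid_cachetype() >> 24) & 0xf;	/* CTR_EL0.CWG */

	return cwg ? 4 << cwg : L1_CACHE_BYTES;
}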
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
This patch modifies kernel_neon_begin() and kernel_neon_end(), so
they may be called from any context. To address the case where only
a couple of registers are needed, kernel_neon_begin_partial(u32) is
introduced which takes as a parameter the number of bottom 'n' NEON
q-registers required. To mark the end of such a partial section, the
regular kernel_neon_end() should be used.
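Typical usage, sketched:

static void neon_work(void)
{
	kernel_neon_begin_partial(4);	/* preserve only q0-q3 */
	/* ... NEON code, now safe even in interrupt context ... */
	kernel_neon_end();
}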
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
If a task gets scheduled out and back in again and nothing has touched
its FPSIMD state in the mean time, there is really no reason to reload
it from memory. Similarly, repeated calls to kernel_neon_begin() and
kernel_neon_end() will preserve and restore the FPSIMD state every time.
This patch defers the FPSIMD state restore to the last possible moment,
i.e., right before the task returns to userland. If a task does not return to
userland at all (for any reason), the existing FPSIMD state is preserved
and may be reused by the owning task if it gets scheduled in again on the
same CPU.
This patch adds two more functions to abstract away from straight FPSIMD
register file saves and restores:
- fpsimd_restore_current_state -> ensure current's FPSIMD state is loaded
- fpsimd_flush_task_state -> invalidate live copies of a task's FPSIMD state
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
There are two tacit assumptions in the FPSIMD handling code that will no longer
hold after the next patch that optimizes away some FPSIMD state restores:
- the FPSIMD registers of this CPU contain the userland FPSIMD state of
  task 'current';
- when switching to a task, its FPSIMD state will always be restored from
  memory.
This patch adds the following functions to abstract away from straight FPSIMD
register file saves and restores:
- fpsimd_preserve_current_state -> ensure current's FPSIMD state is saved
- fpsimd_update_current_state -> replace current's FPSIMD state
Where necessary, the signal handling and fork code are updated to use the above
wrappers instead of poking into the FPSIMD registers directly.
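A sketch of how the signal path uses the wrappers (the two helper names below
are hypothetical; frame copying is elided):

static void sigframe_save_fpsimd(void)
{
	/* make current->thread's memory copy authoritative before
	 * copying it out to the user signal frame */
	fpsimd_preserve_current_state();
}

static void sigreturn_restore_fpsimd(struct fpsimd_state *new_state)
{
	/* install the state read back from the frame, replacing
	 * whatever is live in the registers */
	fpsimd_update_current_state(new_state);
}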
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
- vexpress platform clocks initialisation moved earlier following the
arm64 move of of_clk_init() call in a previous commit
- Default DMA ops changed to non-coherent to preserve compatibility with
32-bit ARM DT files. The "dma-coherent" property can be used to
explicitly mark a device coherent. The Applied Micro DT file has been
updated to avoid DMA cache maintenance for the X-Gene SATA controller
(the only arm64 related driver with such assumption in -rc mainline)
- Fixmap correction for earlyprintk
- kern_addr_valid() fix for huge pages
Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 fixes from Catalin Marinas:
"These are mostly arm64 fixes with an additional arm(64) platform fix
for the initialisation of vexpress clocks (the latter only affecting
arm64; the arch/arm64 code is SoC agnostic and does not rely on early
SoC-specific calls)
- vexpress platform clocks initialisation moved earlier following the
arm64 move of of_clk_init() call in a previous commit
- Default DMA ops changed to non-coherent to preserve compatibility
with 32-bit ARM DT files. The "dma-coherent" property can be used
to explicitly mark a device coherent. The Applied Micro DT file
has been updated to avoid DMA cache maintenance for the X-Gene SATA
controller (the only arm64 related driver with such assumption in
-rc mainline)
- Fixmap correction for earlyprintk
- kern_addr_valid() fix for huge pages"
* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
vexpress: Initialise the sysregs before setting up the clocks
arm64: Mark the Applied Micro X-Gene SATA controller as DMA coherent
arm64: Use bus notifiers to set per-device coherent DMA ops
arm64: Make default dma_ops to be noncoherent
arm64: fixmap: fix missing sub-page offset for earlyprintk
arm64: Fix for the arm64 kern_addr_valid() function
Since the default DMA ops for arm64 are non-coherent, mark the X-Gene
controller explicitly as dma-coherent to avoid additional cache
maintenance.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Loc Ho <lho@apm.com>
Recently, the default DMA ops have been changed to non-coherent for
alignment with 32-bit ARM platforms (and DT files). This patch adds bus
notifiers to be able to set the coherent DMA ops (with no cache
maintenance) for devices explicitly marked as coherent via the
"dma-coherent" DT property.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Currently the arm64 dma_ops are coherent by default, which makes the default
policy the opposite of arm's.
Make the default dma_ops non-coherent (same as arm), as there are currently
no DMA-capable drivers that assume coherent ops.
Signed-off-by: Ritesh Harjani <ritesh.harjani@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Commit d57c33c5da (add generic fixmap.h) added (among other
similar things) set_fixmap_io to deal with early ioremap of devices.
More recently, commit bf4b558eba (arm64: add early_ioremap support)
converted the arm64 earlyprintk to use set_fixmap_io. A side effect of
this conversion is that my virtual machines have stopped booting when
I pass "earlyprintk=uart8250-8bit,0x3f8" to the guest kernel.
Turns out that the new earlyprintk code doesn't care at all about
sub-page offsets, and just assumes that the earlyprintk device will
be page-aligned. Obviously, that doesn't play well with the above example.
Further investigation shows that set_fixmap_io uses __set_fixmap instead
of __set_fixmap_offset. A fix is to introduce a set_fixmap_offset_io that
uses the latter, and to remove the superfluous call to fix_to_virt
(which only returns the value that set_fixmap_io has already given us).
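The new helper, sketched by analogy with the existing generic fixmap macros:

/* like set_fixmap_io, but preserve phys's sub-page offset in the VA */
#define set_fixmap_offset_io(idx, phys) \
	__set_fixmap_offset(idx, phys, FIXMAP_PAGE_IO)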
With this applied, my VMs are back in business. Tested on a Cortex-A57
platform with kvmtool as platform emulation.
Cc: Will Deacon <will.deacon@arm.com>
Acked-by: Mark Salter <msalter@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Fix for the arm64 kern_addr_valid() function to recognize
virtual addresses in the kernel logical memory map. The
function fails as written because it does not check whether
the addresses in that region are mapped at the pmd level to
2MB or 512MB pages; it continues the page table walk to the
pte level and passes a garbage value to pfn_valid().
Tested on 4K-page and 64K-page kernels.
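The missing check, sketched in a simplified walk (2MB blocks with 4K pages,
512MB with 64K; the real function has a few more guards):

int kern_addr_valid(unsigned long addr)
{
	pgd_t *pgd = pgd_offset_k(addr);
	pud_t *pud;
	pmd_t *pmd;
	pte_t *pte;

	if (pgd_none(*pgd))
		return 0;
	pud = pud_offset(pgd, addr);
	if (pud_none(*pud))
		return 0;
	pmd = pmd_offset(pud, addr);
	if (pmd_none(*pmd))
		return 0;
	if (pmd_sect(*pmd))	/* stop here for a pmd-level block mapping */
		return pfn_valid(pmd_pfn(*pmd));
	pte = pte_offset_kernel(pmd, addr);
	if (pte_none(*pte))
		return 0;
	return pfn_valid(pte_pfn(*pte));
}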
Signed-off-by: Dave Anderson <anderson@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Pull irq fixes from Thomas Gleixner:
"This udpate delivers:
- A fix for dynamic interrupt allocation on x86 which is required to
exclude the GSI interrupts from the dynamic allocatable range.
This was detected with the newfangled tablet SoCs which have GPIOs
and therefore allocate a range of interrupts. The MSI allocations
already excluded the GSI range, so we never noticed before.
- The last missing set_irq_affinity() repair, which was delayed due
to testing issues
- A few bug fixes for the armada SoC interrupt controller
- A memory allocation fix for the TI crossbar interrupt controller
- A trivial kernel-doc warning fix"
* 'irq-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
irqchip: irq-crossbar: Not allocating enough memory
irqchip: armanda: Sanitize set_irq_affinity()
genirq: x86: Ensure that dynamic irq allocation does not conflict
linux/interrupt.h: fix new kernel-doc warnings
irqchip: armada-370-xp: Fix releasing of MSIs
irqchip: armada-370-xp: implement the ->check_device() msi_chip operation
irqchip: armada-370-xp: fix invalid cast of signed value into unsigned variable
Pull x86 fixes from Peter Anvin:
"Two very small changes: one fix for the vSMP Foundation platform, and
one to help LLVM not choke on options it doesn't understand (although
it probably should)"
* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86/vsmp: Fix irq routing
x86: LLVMLinux: Wrap -mno-80387 with cc-option
- Fix for a Haswell regression in nested virtualization, introduced during
the merge window.
- A fix from Oleg to async page faults.
- A bunch of small ARM changes.
- A trivial patch to use the new MSI-X API introduced during the merge
window.
Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm
Pull KVM fixes from Paolo Bonzini:
- Fix for a Haswell regression in nested virtualization, introduced
during the merge window.
- A fix from Oleg to async page faults.
- A bunch of small ARM changes.
- A trivial patch to use the new MSI-X API introduced during the merge
window.
* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
KVM: ARM: vgic: Fix the overlap check action about setting the GICD & GICC base address.
KVM: arm/arm64: vgic: fix GICD_ICFGR register accesses
KVM: async_pf: mm->mm_users can not pin apf->mm
KVM: ARM: vgic: Fix sgi dispatch problem
MAINTAINERS: co-maintainance of KVM/{arm,arm64}
arm: KVM: fix possible misalignment of PGDs and bounce page
KVM: x86: Check for host supported fields in shadow vmcs
kvm: Use pci_enable_msix_exact() instead of pci_enable_msix()
ARM: KVM: disable KVM in Kconfig on big-endian systems
Pull s390 fixes from Martin Schwidefsky:
"Two bug fixes, one to fix a potential information leak in the BPF jit
and common-io-layer fix for old firmware levels"
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux:
s390/bpf,jit: initialize A register if 1st insn is BPF_S_LDX_B_MSH
s390/chsc: fix SEI usage on old FW levels
Pull parisc fixes from Helge Deller:
"Drop the architecture-specifc value for_STK_LIM_MAX to fix stack
related problems with GNU make"
* 'parisc-3.15-3' of git://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux:
parisc: Use generic uapi/asm/resource.h file
parisc: remove _STK_LIM_MAX override
There are only a couple of architectures that override _STK_LIM_MAX to
a non-infinity value. This changes the stack allocation semantics in
subtle ways. For example, GNU make changes its stack allocation to the
hard maximum defined by _STK_LIM_MAX. As a result, threads executed
by processes running under make are allocated a stack size of
_STK_LIM_MAX rather than a sensible default value. This causes various
thread stress tests to fail when they can't muster more than about 50
threads.
The attached change implements the default behavior used by the
majority of architectures.
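The generic default that parisc now inherits, as a sketch:

/* uapi/asm-generic/resource.h: no architecture hard cap by default */
#ifndef _STK_LIM_MAX
# define _STK_LIM_MAX	RLIM_INFINITY
#endif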
Signed-off-by: John David Anglin <dave.anglin@bell.net>
Reviewed-by: Carlos O'Donell <carlos@systemhalted.org>
Cc: stable@vger.kernel.org # 3.14
Signed-off-by: Helge Deller <deller@gmx.de>