534 Commits

Marc Zyngier
c5b2c0f520 arm64: KVM: vgic: byteswap GICv2 access on world switch if BE
Ensure that accesses to the GICH_* registers are byteswapped
when the kernel is compiled as big-endian.
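
The GICH_* registers are always little-endian, so a BE kernel has to
reverse the bytes of every 32-bit value it moves to or from them. A
minimal user-space sketch of the swap involved (the fix itself lives
in the hyp world-switch assembly):

    #include <stdint.h>
    #include <stdio.h>

    /* Reverse the bytes of a 32-bit value, as a BE CPU must do when
     * accessing a little-endian device register. */
    static uint32_t swab32_demo(uint32_t x)
    {
        return ((x & 0x000000ffu) << 24) | ((x & 0x0000ff00u) << 8) |
               ((x & 0x00ff0000u) >> 8)  | ((x & 0xff000000u) >> 24);
    }

    int main(void)
    {
        uint32_t le_reg = 0x12345678u;
        printf("raw: 0x%08x  swapped: 0x%08x\n", le_reg, swab32_demo(le_reg));
        return 0;
    }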

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-11-06 10:10:12 +00:00
Marc Zyngier
18ea3dbc9e arm64: KVM: initialize HYP mode following the kernel endianness
Force SCTLR_EL2.EE to 1 if the kernel is compiled as BE.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-11-06 10:10:01 +00:00
T.J. Purtell
aa62c20911 arm64: compat: Clear the IT state independent of the 32-bit ARM or Thumb-2 mode
The ARM Architecture Reference Manual specifies that the IT state
bits in the PSR must all be zero in ARM mode; otherwise, behavior is
unspecified. If an ARM function is registered as a signal handler,
and that signal is delivered inside a block of instructions following
an IT instruction, some of the instructions at the beginning of the
signal handler may be skipped if the IT state bits of the Program
Status Register are not cleared by the kernel.
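
A sketch of the masking involved, assuming the compat PSR layout used
by arm64 (IT[1:0] in bits [26:25] and IT[7:2] in bits [15:10], i.e. a
mask of 0x0600fc00):

    #include <stdint.h>

    #define PSR_IT_MASK 0x0600fc00u  /* IT[1:0] at [26:25], IT[7:2] at [15:10] */

    /* Clear the IT state before pointing the PSR at a signal handler,
     * for both ARM and Thumb-2 handlers. */
    uint32_t psr_for_sighandler(uint32_t psr)
    {
        return psr & ~PSR_IT_MASK;
    }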

Signed-off-by: T.J. Purtell <tj@mobisocial.us>
[catalin.marinas@arm.com: code comment and commit log updated]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-11-05 17:47:46 +00:00
Catalin Marinas
847264fb7e arm64: Use 42-bit address space with 64K pages
This patch expands VA_BITS to 42 when the 64K page configuration is
enabled, allowing a 2TB kernel linear mapping. Linux still uses two
levels of page tables in this configuration, with the pgd now being a
full page.
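
The arithmetic behind the 42 bits (each table is one 64K page holding
8192 eight-byte entries, so each level resolves 13 bits):

    page offset:           log2(64K)           = 16 bits
    PTE level:             log2(64K / 8 bytes) = 13 bits
    PGD level (full page): log2(64K / 8 bytes) = 13 bits
                                         total = 42 bits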

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
2013-11-05 17:23:52 +00:00
Will Deacon
122e2fa0d3 arm64: module: ensure instruction is little-endian before manipulation
Relocations that require an instruction immediate to be re-encoded must
ensure that the instruction pattern is represented in a little-endian
format for the manipulation code to work correctly.

This patch converts the loaded instruction into native endianness
prior to encoding and then converts back to little-endian byte order
before updating memory.
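
A user-space sketch of the pattern; encode_imm() here is a
hypothetical stand-in for the per-relocation immediate encoding:

    #include <endian.h>   /* le32toh()/htole32() (glibc) */
    #include <stdint.h>
    #include <string.h>

    /* Hypothetical encoder: place a 12-bit immediate at bits [21:10]. */
    static uint32_t encode_imm(uint32_t insn, uint32_t imm)
    {
        return (insn & ~(0xfffu << 10)) | ((imm & 0xfffu) << 10);
    }

    static void reloc_insn(void *place, uint32_t imm)
    {
        uint32_t insn;

        memcpy(&insn, place, sizeof(insn));
        insn = le32toh(insn);          /* AArch64 instructions are always LE */
        insn = encode_imm(insn, imm);  /* manipulate in native byte order */
        insn = htole32(insn);          /* convert back before storing */
        memcpy(place, &insn, sizeof(insn));
    }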

Signed-off-by: Will Deacon <will.deacon@arm.com>
Tested-by: Matthew Leach <matthew.leach@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-11-05 10:23:13 +00:00
Catalin Marinas
dab7ea3609 arm64: defconfig: Enable CONFIG_PREEMPT by default
This way we can spot early bugs when just testing with the default
config.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-11-05 10:03:53 +00:00
Marc Zyngier
717321fcb5 arm64: fix access to preempt_count from assembly code
preempt_count is defined as an int. Oddly enough, we access it as a
64-bit value. Things become interesting when running a BE kernel and
looking at the current CPU number, which is stored as an int next to
preempt_count (in a per-cpu interrupt handler, for example).

Using a 32-bit access fixes the issue for good.
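
A user-space illustration of the hazard, assuming a thread_info-like
layout where 'cpu' sits immediately after 'preempt_count':

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    struct ti {
        int32_t preempt_count;  /* what the assembly meant to read */
        int32_t cpu;            /* what a 64-bit load drags in as well */
    };

    int main(void)
    {
        struct ti t = { .preempt_count = 1, .cpu = 3 };
        uint64_t wide;

        memcpy(&wide, &t, sizeof(wide));  /* the buggy 64-bit access */
        printf("64-bit load: 0x%016llx  32-bit load: 0x%08x\n",
               (unsigned long long)wide, (uint32_t)t.preempt_count);
        return 0;
    }

Which field lands in which half of the 64-bit value depends on
endianness, so the wide access that happened to work on LE reads the
wrong word on BE.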

Cc: Matthew Leach <matthew.leach@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-11-05 09:33:57 +00:00
Marc Zyngier
7ade67b598 arm64: move enabling of GIC before CPUs are set online
Commit 53ae3acd (arm64: Only enable local interrupts after the CPU
is marked online) moved the enabling of the GIC after the CPUs are
marked online.

This has an interesting effect:
[...]
[<ffffffc0002eefd8>] gic_raise_softirq+0xf8/0x160
[<ffffffc000088f58>] smp_send_reschedule+0x38/0x40
[<ffffffc0000c8728>] resched_task+0x84/0xc0
[<ffffffc0000c8cdc>] check_preempt_curr+0x58/0x98
[<ffffffc0000c8d38>] ttwu_do_wakeup+0x1c/0xf4
[<ffffffc0000c8f90>] ttwu_do_activate.constprop.84+0x64/0x70
[<ffffffc0000cad30>] try_to_wake_up+0x1d4/0x2b4
[<ffffffc0000cae6c>] default_wake_function+0x10/0x18
[<ffffffc0000c5ca4>] __wake_up_common+0x60/0xa0
[<ffffffc0000c7784>] complete+0x48/0x64
[<ffffffc000088bec>] secondary_start_kernel+0xe8/0x110
[...]

Here, we end up calling gic_raise_softirq without having initialized
the interrupt controller for this CPU. While this goes unnoticed
with GICv2 (the distributor is always accessible), it explodes with
GICv3.

The fix is to move the call to notify_cpu_starting before we set
the secondary CPU online.
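
A simplified sketch (not the literal kernel code) of the corrected
ordering at the end of secondary_start_kernel():

    #include <linux/cpu.h>
    #include <linux/irqflags.h>

    static void secondary_bringup_tail(unsigned int cpu)
    {
        notify_cpu_starting(cpu);   /* GIC per-CPU init happens here */
        set_cpu_online(cpu, true);  /* only now may other CPUs IPI us */
        local_irq_enable();
    }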

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-11-04 18:18:05 +00:00
Mark Salter
3c620626c0 arm64: use generic RW_DATA_SECTION macro in linker script
The .data section in the arm64 linker script currently lacks a
definition for page-aligned data. This leads to a .page_aligned
section being placed between the end of data and start of bss.
This patch corrects that by using the generic RW_DATA_SECTION
macro which includes support for page-aligned data.

Signed-off-by: Mark Salter <msalter@redhat.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-11-04 18:17:25 +00:00
Catalin Marinas
264666e628 arm64: Slightly improve the warning on CPU0 enable-method
Commit e8765b265a69 (arm64: read enable-method for CPU0) introduced
checks for the enable method on CPU0 (to be later used with CPU
suspend). However, if the kernel is compiled for UP and a DT file is
used with a method like 'spin-table', Linux complains about 'invalid
enable method'. This patch turns it into an 'unsupported enable method'
warning.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-10-31 16:37:26 +00:00
Sudeep KarkadaNagesha
248f0e7f5f ARM64: simplify cpu_read_bootcpu_ops using OF/DT helper
Once the cpu_logical_map for any logical cpu is populated with the
corresponding physical identifier (i.e. mpidr), its device node can
be retrieved using the DT helper 'of_get_cpu_node'. Currently the
device tree parsing code to get the boot cpu node is duplicated in
'cpu_read_bootcpu_ops'.

This patch replaces the code parsing the device tree for the boot
cpu with of_get_cpu_node.
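
The replacement boils down to a single helper call; a sketch, assuming
the boot CPU is logical cpu 0:

    #include <linux/of.h>

    static struct device_node *bootcpu_node(void)
    {
        /* resolves logical cpu 0 to its DT node via the stored MPIDR,
         * replacing the open-coded /cpus parsing */
        return of_get_cpu_node(0, NULL);
    }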

Signed-off-by: Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-10-30 17:54:49 +00:00
Sudeep KarkadaNagesha
6e15d0e04b ARM64: DT: define ARM64 specific arch_match_cpu_phys_id
OF/DT core library provides architecture specific hook to match the
logical cpu index with the corresponding physical identifier.

On ARM64, the MPIDR_EL1 register contains specific bitfields
(MPIDR_EL1.Aff{3..0}) which uniquely identify a CPU, in addition to
some non-identifying information and reserved bits. The ARM cpu
binding defines the 'reg' property to contain only the affinity bits,
and any cpu nodes with other bits set in their 'reg' entry are
skipped.

This patch overrides the weak definition of arch_match_cpu_phys_id
with an ARM64-specific version which uses MPIDR_EL1.Aff{3..0} as the
cpu physical identifier.
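
The override is essentially a one-line comparison; a sketch consistent
with the commit text:

    #include <linux/types.h>
    #include <asm/smp_plat.h>

    bool arch_match_cpu_phys_id(int cpu, u64 phys_id)
    {
        /* cpu_logical_map() holds MPIDR_EL1.Aff{3..0} for each cpu */
        return phys_id == cpu_logical_map(cpu);
    }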

Signed-off-by: Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-10-30 12:10:37 +00:00
Mark Salter
c04e8e2fe5 arm64: allow ioremap_cache() to use existing RAM mappings
Some drivers (ACPI notably) use ioremap_cache() to map an area which could
either be outside of kernel RAM or in an already mapped reserved area of
RAM. To avoid aliases with different caching attributes, ioremap() does
not allow RAM to be remapped. But for ioremap_cache(), the existing kernel
mapping may be used.
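
A kernel-style sketch of the policy, using arm64-internal helper
names (simplified; not the exact patch):

    #include <linux/io.h>

    void __iomem *ioremap_cache_sketch(phys_addr_t phys, size_t size)
    {
        /* RAM already covered by the linear map: hand back the existing
         * cacheable alias instead of creating a conflicting one. */
        if (pfn_valid(__phys_to_pfn(phys)))
            return (void __iomem *)__phys_to_virt(phys);

        /* Otherwise create a fresh cacheable mapping as usual. */
        return __ioremap(phys, size, __pgprot(PROT_NORMAL));
    }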

Signed-off-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-10-30 12:10:37 +00:00
Marc Zyngier
d241aac798 arm64: KVM: Yield CPU when vcpu executes a WFE
On an (even slightly) oversubscribed system, spinlocks quickly become
a bottleneck, as some vcpus spin waiting for a lock to be released
while the vcpu holding the lock may not be running at all.

The solution is to trap blocking WFEs and tell KVM that we're
now spinning. This ensures that other vcpus will get a scheduling
boost, allowing the lock to be released more quickly. Also, using
CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT slightly improves the performance
when the VM is severely overcommitted.
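
Conceptually the trap handler is tiny; a sketch using KVM's existing
spin-loop helper:

    #include <linux/kvm_host.h>

    /* The guest executed a blocking WFE: it is almost certainly spinning
     * on a lock, so yield and let another vcpu of the same VM run. */
    static int handle_wfe(struct kvm_vcpu *vcpu)
    {
        kvm_vcpu_on_spin(vcpu);
        return 1;   /* resume the guest */
    }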

Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2013-10-29 18:25:25 +00:00
Christoph Lameter
1436c1aa62 ARM: 7862/1: pcpu: replace __get_cpu_var_uses
This is the ARM part of Christoph's patchset cleaning up the various
uses of __get_cpu_var across the tree.

The idea is to convert __get_cpu_var into either an explicit address
calculation using this_cpu_ptr() or into a use of this_cpu operations
that use the offset. This avoids address calculations and uses fewer
registers when code is generated.

[will: fixed debug ref counting checks and pcpu array accesses]

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-10-29 11:06:27 +00:00
Paolo Bonzini
5bb3398dd2 Updates for KVM/ARM, take 2, including:
 - Transparent Huge Pages and hugetlbfs support for KVM/ARM
 - Yield CPU when guest executes WFE to speed up CPU overcommit

Merge tag 'kvm-arm-for-3.13-2' of git://git.linaro.org/people/cdall/linux-kvm-arm into kvm-queue
2013-10-28 13:15:55 +01:00
Robin Murphy
d0f38f9130 arm64: update 32-bit kuser helpers to ARMv8
This patch updates the barrier semantics in the kuser helper functions
to take advantage of the ARMv8 additions to AArch32, which are
guaranteed to be available in situations where these functions will be
called.

Note that this slightly changes the cmpxchg functions in that they are
no longer necessarily full barriers if they return 1. However, the
documentation only states they include their own barriers "as needed",
not that they are obligated to act as a full barrier for the caller.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
CC: Matthew Leach <matthew.leach@arm.com>
CC: Dave Martin <dave.martin@arm.com>
CC: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-10-28 10:40:28 +00:00
Vinayak Kale
c019de3de6 arm64: perf: fix event number mask
This patch fixes the ARMV8_EVTYPE_* macros, since the evtCount (event
number) field is 10 bits wide in the event selection register.
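
The arithmetic of the fix: a 10-bit field needs a 10-bit mask,
(1 << 10) - 1 = 0x3ff:

    #define ARMV8_EVTYPE_EVENT 0x3ff  /* evtCount event number mask */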

Signed-off-by: Vinayak Kale <vkale@apm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-10-25 16:23:52 +01:00
Will Deacon
a872013d6d arm64: kconfig: allow CPU_BIG_ENDIAN to be selected
This patch wires up CONFIG_CPU_BIG_ENDIAN for the AArch64 kernel
configuration.

Selecting this option builds a big-endian kernel which can boot into a
big-endian userspace.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-10-25 16:10:45 +01:00
Catalin Marinas
4a12cae7ef arm64: Fix the endianness of arch_spinlock_t
The owner and next members of the arch_spinlock_t structure need to be
swapped when compiling for big endian.
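
The swap follows the usual endian-dependent layout pattern; a sketch
along the lines of the arm64 definition:

    typedef struct {
    #ifdef __AARCH64EB__
        u16 next;
        u16 owner;
    #else
        u16 owner;
        u16 next;
    #endif
    } __aligned(4) arch_spinlock_t;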

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Matthew Leach <matthew.leach@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
2013-10-25 16:10:22 +01:00
Matthew Leach
710be9ac4e arm64: big-endian: write CPU holding pen address as LE
Currently when CPUs are brought online via a spin-table, the address
they should jump to is written to the cpu-release-addr in the kernel's
native endianness. As the kernel may switch endianness, secondaries
might read the value byte-reversed from what was intended, and they
would jump to the wrong address.

As the only current arm64 spin-table implementations are
little-endian, tighten the arm64 spin-table definition such that the
value written to cpu-release-addr is _always_ little-endian
regardless of the endianness of any CPU. If a spinning CPU is
operating big-endian, it must byte-reverse the value before jumping
to it.
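
A user-space sketch of the writer side (the kernel annotates the slot
as __le64; glibc's htole64() plays that role here):

    #include <endian.h>
    #include <stdint.h>

    /* Store the secondary entry point as little-endian regardless of
     * the endianness of the writing CPU, per the stricter binding. */
    static void write_cpu_release_addr(volatile uint64_t *release_addr,
                                       uint64_t entry_point)
    {
        *release_addr = htole64(entry_point);
    }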

Signed-off-by: Matthew Leach <matthew.leach@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-10-25 15:59:42 +01:00
Matthew Leach
9cf7172893 arm64: big-endian: set correct endianess on kernel entry
The endianness of memory accesses at EL2 and EL1 are configured by
SCTLR_EL2.EE and SCTLR_EL1.EE respectively. When the kernel is booted,
the state of SCTLR_EL{2,1}.EE is unknown, and thus the kernel must
ensure that they are set before performing any memory accesses.

This patch ensures that SCTLR_EL{2,1} are configured appropriately at
boot for kernels of either endianness.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Matthew Leach <matthew.leach@arm.com>
[catalin.marinas@arm.com: fix SCTLR_EL1.E0E bit setting in head.S]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-10-25 15:59:41 +01:00
Matthew Leach
828e9834e9 arm64: head: create a new function for setting the boot_cpu_mode flag
Currently, the code for setting the __cpu_boot_mode flag is munged
in with el2_setup. This makes things difficult on a BE bringup, as a
memory access has to occur before el2_setup, which is where we would
like to set the endianness for the current EL.

Create a new function for setting __cpu_boot_mode and have el2_setup
return the mode the CPU booted in. Also define a new constant in
virt.h, BOOT_CPU_MODE_EL1, for readability.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Matthew Leach <matthew.leach@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-10-25 15:59:39 +01:00
Matthew Leach
e68bedaa03 arm64: asm: add CPU_LE & CPU_BE assembler helpers
Add CPU_LE and CPU_BE to select assembler code in little and big
endian configurations respectively.
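
The helpers boil down to conditional assembly via the preprocessor; a
sketch consistent with the commit:

    /* Emit the wrapped code only when building for the matching
     * endianness. */
    #ifdef CONFIG_CPU_BIG_ENDIAN
    #define CPU_LE(code...)
    #define CPU_BE(code...) code
    #else
    #define CPU_LE(code...) code
    #define CPU_BE(code...)
    #endif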

Signed-off-by: Matthew Leach <matthew.leach@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-10-25 15:59:38 +01:00
Matthew Leach
a1d5ebaf8c arm64: big-endian: don't treat code as data when copying sigret code
Currently the sigreturn compat code is copied to an offset in the
vectors table. When using a BE kernel this data will be stored in the
wrong endianness, so when returning from a signal on a 32-bit BE
system, arbitrary code will be executed.

Instead of declaring the code inside a struct and copying that, use
the assembler's .byte directives to store the code in the correct
endianness regardless of platform endianness.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Matthew Leach <matthew.leach@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-10-25 15:59:37 +01:00
Matthew Leach
55b89540b0 arm64: compat: correct register concatenation for syscall wrappers
The arm64 port contains wrappers for arm32 syscalls that pass 64-bit
values. These wrappers concatenate the two registers to hold a 64-bit
value in a single X register. On BE, however, the lower and higher
words are swapped.

Create a new assembler macro, regs_to_64, that on BE systems swaps
the registers in the orr instruction.
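
In C terms, the concatenation the macro performs looks like this
(hypothetical helper; w_even/w_odd model the even/odd-numbered
AArch32 registers of the pair):

    #include <stdint.h>

    /* On BE the even-numbered register carries the high word, so the
     * operands of the orr must be swapped relative to LE. */
    static uint64_t regs_to_64_demo(uint32_t w_even, uint32_t w_odd,
                                    int big_endian)
    {
        return big_endian ? ((uint64_t)w_even << 32) | w_odd
                          : ((uint64_t)w_odd  << 32) | w_even;
    }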

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Matthew Leach <matthew.leach@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-10-25 15:59:36 +01:00
Will Deacon
a795a38eb9 arm64: compat: add support for big-endian (BE8) AArch32 binaries
This patch adds support for BE8 AArch32 tasks to the compat layer.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-10-25 15:59:35 +01:00
Will Deacon
94ed1f2cb5 arm64: setup: report ELF_PLATFORM as the machine for utsname
uname -m reports the machine field from the current utsname, which should
reflect the endianness of the system.

This patch reports ELF_PLATFORM for the field, so that everything appears
consistent from userspace.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-10-25 15:59:34 +01:00
Will Deacon
5436b5c830 arm64: ELF: add support for big-endian executables
This patch adds support for the aarch64_be ELF format to the AArch64 ELF
loader.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-10-25 15:59:33 +01:00
Will Deacon
c194520ada arm64: big-endian: fix byteorder include
For big-endian processors, we must include
linux/byteorder/big_endian.h to get the relevant definitions for
swabbing between CPU order and a defined endianness.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-10-25 15:59:32 +01:00
Will Deacon
a0974e6e21 arm64: big-endian: add big-endian support to top-level arch Makefile
This patch adds big-endian support to the AArch64 top-level Makefile.
This currently just passes the relevant flags to the toolchain and is
predicated on a Kconfig option that will be introduced later on.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-10-25 15:59:31 +01:00
Stefano Stabellini
3d1975b570 arm,arm64: do not always merge biovec if we are running on Xen
This is similar to what is done on x86: biovecs are prevented from
merging, as otherwise every dma request would be forced to bounce on
the swiotlb buffer.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>

Changes in v7:
- remove the extra autotranslate check in biomerge.c.
2013-10-25 10:33:26 +00:00
Stefano Stabellini
7100b077ab xen: introduce xen_dma_map/unmap_page and xen_dma_sync_single_for_cpu/device
Introduce xen_dma_map_page, xen_dma_unmap_page,
xen_dma_sync_single_for_cpu and xen_dma_sync_single_for_device.
They have empty implementations on x86 and ia64 but they call the
corresponding platform dma_ops function on arm and arm64.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Changes in v9:
- xen_dma_map_page return void, avoid page_to_phys.
2013-10-25 10:39:49 +00:00
Mark Rutland
831ccf79b4 arm64: add PSCI CPU_OFF-based hotplug support
This patch adds support for using PSCI CPU_OFF calls for CPU hotplug.
With this code it is possible to hot unplug CPUs with "psci" as their
boot-method, as long as there's an appropriate cpu_off function id
specified in the psci node.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-10-25 11:33:21 +01:00
Mark Rutland
9327e2c6bb arm64: add CPU_HOTPLUG infrastructure
This patch adds the basic infrastructure necessary to support
CPU_HOTPLUG on arm64, based on the arm implementation. Actual hotplug
support will depend on an implementation's cpu_operations (e.g. PSCI).

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-10-25 11:33:21 +01:00
Mark Rutland
e8765b265a arm64: read enable-method for CPU0
With the advent of CPU_HOTPLUG, the enable-method property for CPU0
may tell us something useful (i.e. how to hotplug it back on), so we
must read it along with the enable methods of all the other CPUs.
Even on UP the enable-method may tell us useful information (e.g. if
a core has some mechanism that might be usable for cpuidle), so we
should always read it.

This patch factors out the reading of the enable method, and ensures
that CPU0's enable method is read regardless of whether the kernel is
built with SMP support.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-10-25 11:33:20 +01:00
Mark Rutland
652af89979 arm64: factor out spin-table boot method
The arm64 kernel has an internal holding pen, which is necessary for
some systems where we can't bring CPUs online individually and must hold
multiple CPUs in a safe area until the kernel is able to handle them.
The current SMP infrastructure for arm64 is closely coupled to this
holding pen, and alternative boot methods must launch CPUs into the pen,
where they sit before they are launched into the kernel proper.

With PSCI (and possibly other future boot methods), we can bring CPUs
online individually, and need not perform the secondary_holding_pen
dance. Instead, this patch factors the holding pen management code out
to the spin-table boot method code, as it is the only boot method
requiring the pen.

A new entry point for secondaries, secondary_entry, is added for
other boot methods to use, which bypasses the holding pen and its
associated overhead when bringing CPUs online. The smp.pen.text
section is also removed, as the pen can live in head.text without
problem.

The cpu_operations structure is extended with two new functions,
cpu_boot and cpu_postboot, for bringing a cpu into the kernel and
performing any post-boot cleanup required by a boot method (e.g.
resetting secondary_holding_pen_release to INVALID_HWID).
Documentation is added for cpu_operations; a sketch of the structure
is shown below.
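
A sketch of the resulting structure (field list as described above;
the exact prototypes are an assumption):

    struct device_node;

    struct cpu_operations {
        const char *name;
        int  (*cpu_init)(struct device_node *dn, unsigned int cpu);
        int  (*cpu_prepare)(unsigned int cpu);
        int  (*cpu_boot)(unsigned int cpu);  /* bring the cpu into the kernel */
        void (*cpu_postboot)(void);          /* e.g. reset the holding pen */
    };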

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-10-25 11:33:20 +01:00
Mark Rutland
cd1aebf527 arm64: reorganise smp_enable_ops
For hotplug support, we're going to want a place to store operations
that do more than bring CPUs online, and it makes sense to group these
with our current smp_enable_ops. For cpuidle support, we'll want to
group additional functions, and we may want them even for UP kernels.

This patch renames smp_enable_ops to the more general cpu_operations,
and pulls the definitions out of smp code such that they can be used in
UP kernels. While we're at it, fix up instances of the cpu parameter to
be an unsigned int, drop the init markings and rename the *_cpu
functions to cpu_* to reduce future churn when cpu_operations is
extended.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-10-25 11:33:20 +01:00
Mark Rutland
00ef54bb97 arm64: unify smp_psci.c and psci.c
The functions in psci.c are only used from smp_psci.c, and smp_psci
cannot function without psci.c. Additionally psci.c is built when !SMP,
where it's expected that cpu_suspend may be useful.

This patch unifies the two files, removing pointless duplication and
paving the way for PSCI support in UP systems.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-10-25 11:33:19 +01:00
Catalin Marinas
2a3f912c78 arm64: Export __copy_in_user() to modules
This function may be called from loadable modules, so it needs
exporting.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Loc Ho <lho@apm.com>
2013-10-24 15:47:19 +01:00
Will Deacon
cf10b79a7d arm64: cmpxchg: implement cmpxchg64_relaxed
This patch introduces cmpxchg64_relaxed for arm64 using the existing
cmpxchg_local macro, which performs a cmpxchg operation (up to 64 bits)
without barrier semantics.
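
In C11 terms, "relaxed" means atomicity with no ordering guarantees;
a user-space analogue:

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* Compare-and-swap with no barrier semantics, the C11 analogue of
     * cmpxchg64_relaxed(). */
    static bool cas64_relaxed(_Atomic uint64_t *p, uint64_t old, uint64_t new)
    {
        return atomic_compare_exchange_strong_explicit(
                p, &old, new, memory_order_relaxed, memory_order_relaxed);
    }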

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-10-24 15:46:35 +01:00
Will Deacon
5686b06cea arm64: lockref: add support for lockless lockrefs using cmpxchg
Our spinlocks are only 32-bit (2x16-bit tickets) and our cmpxchg can
deal with 8 bytes (as one would hope!).

This patch wires up the cmpxchg-based lockless lockref implementation
for arm64.
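
The trick, sketched in user-space C11: the 32-bit lock word and the
32-bit reference count share one 64-bit word, so while the lock is
observed free the count can be bumped with a single 64-bit
compare-and-swap, never touching the lock itself:

    #include <stdatomic.h>
    #include <stdint.h>

    struct lockref_demo {
        _Atomic uint64_t lock_count;  /* low 32: lock word, high 32: count */
    };

    static int get_ref_lockless(struct lockref_demo *lr)
    {
        uint64_t old = atomic_load(&lr->lock_count);

        while ((uint32_t)old == 0) {  /* lock word zero: free, no waiters */
            uint64_t new = old + (1ULL << 32);  /* bump only the count */
            if (atomic_compare_exchange_weak(&lr->lock_count, &old, new))
                return 1;   /* got a reference without taking the lock */
        }
        return 0;           /* contended: fall back to the locked path */
    }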

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-10-24 15:46:34 +01:00
Will Deacon
52ea2a560a arm64: locks: introduce ticket-based spinlock implementation
This patch introduces a ticket lock implementation for arm64, along the
same lines as the implementation for arch/arm/.
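
The idea in portable C11 (the arm64 version is assembly with
acquire/release semantics and wfe-based waiting):

    #include <stdatomic.h>
    #include <stdint.h>

    typedef struct {
        _Atomic uint16_t next;   /* ticket dispenser */
        _Atomic uint16_t owner;  /* ticket currently being served */
    } ticket_lock_t;

    static void ticket_lock(ticket_lock_t *l)
    {
        uint16_t me = atomic_fetch_add(&l->next, 1);  /* take a ticket */
        while (atomic_load(&l->owner) != me)
            ;   /* spin until our number comes up */
    }

    static void ticket_unlock(ticket_lock_t *l)
    {
        atomic_fetch_add(&l->owner, 1);  /* serve the next ticket */
    }

Tickets hand the lock over in strict FIFO order, the fairness
property this scheme buys over a plain test-and-set lock.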

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-10-24 15:46:33 +01:00
AKASHI Takahiro
7b22c03536 arm64: check for number of arguments in syscall_get/set_arguments()
In ftrace_syscall_enter(),
    syscall_get_arguments(..., 0, n, ...)
        if (i == 0) { <handle orig_x0> ...; n--;}
        memcpy(..., n * sizeof(args[0]));
If the number of arguments (n) is zero and the argument index (i) is
also zero in syscall_get_arguments(), none of the arguments should be
copied by memcpy(). Otherwise 'n--' underflows to a large positive
number and an unexpected amount of data will be copied. Tracing
system calls which take no arguments, say sync(void), may hit this
case and eventually corrupt the system.
This patch fixes the issue in both syscall_get_arguments() and
syscall_set_arguments().
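
A sketch of the guarded copy after the fix (simplified signature; the
real code operates on struct pt_regs):

    #include <string.h>

    static void get_args_fixed(unsigned long *args, unsigned int i,
                               unsigned int n, const unsigned long *regs,
                               unsigned long orig_x0)
    {
        if (n == 0)
            return;   /* nothing to copy; prevents the n-- underflow */
        if (i == 0) {
            *args++ = orig_x0;  /* x0 was clobbered by the return value */
            i++;
            n--;
        }
        memcpy(args, &regs[i], n * sizeof(args[0]));
    }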

Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-10-23 15:45:35 +01:00
Marc Zyngier
79c648806f arm/arm64: KVM: PSCI: use MPIDR to identify a target CPU
The KVM PSCI code blindly assumes that vcpu_id and MPIDR are
the same thing. This is true when vcpus are organized as a flat
topology, but is wrong when trying to emulate any other topology
(such as A15 clusters).

Change the KVM PSCI CPU_ON code to look at the MPIDR instead
of the vcpu_id to pick a target CPU.
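
A sketch of the target lookup (kvm_for_each_vcpu is KVM's real
iterator; vcpu_mpidr() is a hypothetical stand-in for however the
vcpu's MPIDR is fetched):

    #include <linux/kvm_host.h>

    unsigned long vcpu_mpidr(struct kvm_vcpu *vcpu);  /* hypothetical */

    static struct kvm_vcpu *find_target_vcpu(struct kvm *kvm,
                                             unsigned long mpidr)
    {
        struct kvm_vcpu *vcpu;
        int i;

        kvm_for_each_vcpu(i, vcpu, kvm) {
            /* match on affinity bits, not on the flat vcpu_id */
            if (vcpu_mpidr(vcpu) == mpidr)
                return vcpu;
        }
        return NULL;
    }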

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2013-10-22 08:00:06 -07:00
Christoffer Dall
ad361f093c KVM: ARM: Support hugetlbfs backed huge pages
Support huge pages in KVM/ARM and KVM/ARM64.  The pud_huge checking on
the unmap path may feel a bit silly as the pud_huge check is always
defined to false, but the compiler should be smart about this.

Note: This deals only with VMAs marked as huge which are allocated by
users through hugetlbfs only.  Transparent huge pages can only be
detected by looking at the underlying pages (or the page tables
themselves) and this patch so far simply maps these on a page-by-page
level in the Stage-2 page tables.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2013-10-17 17:06:20 -07:00
Gleb Natapov
13acfd5715 Powerpc KVM work is based on a commit after rc4.
Merging master into next to satisfy the dependencies.

Conflicts:
	arch/arm/kvm/reset.c
2013-10-17 17:41:49 +03:00
Gleb Natapov
d570142674 Updates for KVM/ARM including cpu=host and Cortex-A7 support

Merge tag 'kvm-arm-for-3.13-1' of git://git.linaro.org/people/cdall/linux-kvm-arm into next
2013-10-16 15:30:32 +03:00
Christoffer Dall
ef0cfe71c2 KVM: arm64: Get rid of KVM_HPAGE defines
Now that the main kvm code relying on these defines has been moved to
the x86-specific part of the world, we can get rid of them.

Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
2013-10-14 10:12:06 +03:00
Ingo Molnar
8a749de5e3 Merge branch 'fortglx/3.13/time' of git://git.linaro.org/people/jstultz/linux into timers/core
Pull more timekeeping items for v3.13 from John Stultz:

  * Small cleanup in the clocksource code.

  * Fix for rtc-pl031 to let it work with alarmtimers.

  * Move arm64 to using the generic sched_clock framework & resulting
    cleanup in the generic sched_clock code.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-10-10 06:25:23 +02:00