1296 Commits

Author SHA1 Message Date
Linus Walleij
201043f227 ARM: 6985/1: export functions to determine the presence of I/DTCM
By allowing code to detect whether DTCM or ITCM is present, code paths
involving TCM can be avoided when running on platforms that lack it.
This is useful for creating a single kernel image across several
platforms, where some of them utilize TCM but others don't.

Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-07-06 20:49:45 +01:00
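
A minimal sketch of the kind of run-time check this enables, assuming
the exported helpers are named tcm_dtcm_present()/tcm_itcm_present(),
with hypothetical setup_*() placeholders standing in for platform code:

  #include <asm/tcm.h>            /* assumed home of the new helpers */

  static int __init myplat_buffer_init(void)
  {
          if (tcm_dtcm_present())
                  return setup_dtcm_buffers();    /* fast path on TCM-equipped SoCs */
          return setup_sdram_buffers();           /* same kernel image, TCM-less SoC */
  }
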
Linus Walleij
9715efb8dc ARM: 6984/1: enhance TCM robustness
The PB11MPCore reports "3" DTCM banks, but anything above 2 is an
"undefined" value, so treat this as 0. Further, add checks for the
case where code is compiled to TCM even though no D/ITCM is present
in the system, and for whether the compiled code actually fits. We
don't BUG() since that isn't helpful - it's better to deal with
non-present TCM dynamically. If nothing is compiled to the TCM and no
TCM is detected, the code will now simply stay quiet even if TCM
support is enabled.

Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-07-06 20:49:45 +01:00
Russell King
0371d3f7e8 ARM: move memory layout sanity checking before meminfo initialization
Ensure that the meminfo array is sanity checked before we pass the
memory to memblock.  This helps to ensure that memblock and meminfo
agree on the dimensions of memory, especially when more memory is
passed than the kernel can deal with.

Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-07-05 20:27:16 +01:00
Will Deacon
f4f38430c9 ARM: 6989/1: perf: do not start the PMU when no events are present
armpmu_enable can be called in situations where no events are present
(for example, from the event rotation tick after a profiled task has
exited). In this case, we currently start the PMU anyway, which may
leave it active indefinitely without any events being monitored.

This patch adds a simple check to the enabling code so that we avoid
starting the PMU when no events are present.

Cc: <stable@kernel.org>
Reported-by: Ashwin Chaugle <ashwinc@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-07-05 12:37:23 +01:00
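
The idea in simplified form (not the exact in-tree code; the
bitmap_weight() test stands in for however the driver counts the
currently scheduled events):

  static void armpmu_enable(struct pmu *pmu)
  {
          struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
          int enabled = bitmap_weight(cpuc->used_mask, armpmu->num_events);

          /* ... enable the individual events as before ... */

          if (enabled)
                  armpmu->start();        /* only start the PMU if something is scheduled */
  }
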
Russell King
30891c90d8 ARM: entry: no need to reload the SPSR value from struct pt_regs
The SVC IRQ, prefetch and data abort handlers preserve the SPSR value
via r5 across the exception.  Rather than re-loading it from pt_regs,
use the preserved value instead.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-07-02 10:56:12 +01:00
Russell King
da74047257 ARM: entry: data abort: tail-call the main data abort handler
Tail-call the main C data abort handler code from the per-CPU helper
code.  Update the comments in the code wrt the new calling and return
register state.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-07-02 10:56:11 +01:00
Russell King
3e287bec6f ARM: entry: data abort: arrange for CPU abort helpers to take pc/psr in r4/r5
Re-jig the CPU abort helpers to take the PC/PSR in r4/r5 rather
than r2/r3.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-07-02 10:56:11 +01:00
Russell King
8dfe7ac96f ARM: entry: prefetch abort: tail-call the main prefetch abort handler
Tail-call the main C prefetch abort handler code from the per-CPU
helper code.  Also note that the helper function becomes ABI
compliant in terms of the registers preserved.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-07-02 10:56:10 +01:00
Russell King
d9600c99c5 ARM: entry: re-allocate registers in irq entry assembly macros
This avoids the irq entry assembly corrupting r5, thereby allowing it
to be preserved through to the svc exit code.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-07-02 10:56:10 +01:00
Russell King
f2741b78b6 ARM: entry: consolidate trace_hardirqs_off into (svc|usr)_entry macros
All handlers now call trace_hardirqs_off, so move this common code into
the (svc|usr)_entry assembler macros.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-07-02 10:56:10 +01:00
Russell King
bc089602d2 ARM: entry: instrument usr exception handlers with irqsoff tracing
As we no longer re-enable interrupts in these exception handlers, add
the irqsoff tracing calls to them so that the kernel tracks the state
more accurately.

Note that these calls are conditional on IRQSOFF_TRACER:

  kernel ----------> user ---------> kernel
          ^ irqs enabled   ^ irqs disabled

No kernel code can run on the local CPU until we've re-entered the
kernel through one of the exception handlers - and userspace cannot
take any locks etc.  So the kernel doesn't care about the IRQ mask
state while userspace is running unless we're doing IRQ-off latency
tracing, and we can therefore (and do) avoid the overhead of updating
the IRQ mask state on every kernel->user and user->kernel transition.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-07-02 10:56:10 +01:00
Russell King
df295df6c3 ARM: entry: instrument svc undefined exception handler with irqtrace
Add irqtrace function calls to the undefined exception handler, so
that we get sane lockdep traces from locking problems in undefined
exception handlers.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-07-02 10:56:10 +01:00
Russell King
02fe2845d6 ARM: entry: avoid enabling interrupts in prefetch/data abort handlers
Avoid enabling interrupts if the parent context had interrupts enabled
in the abort handler assembly code, and move this into the breakpoint/
page/alignment fault handlers instead.

This gets rid of some special-casing for the breakpoint fault handlers
from the low level abort handler path.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-07-02 10:56:00 +01:00
Russell King
29cb3cd208 ARM: pm: allow suspend finisher to return error codes
There are SoCs where attempting to enter a low power state is ignored,
and the CPU continues executing instructions with all state preserved.
It is over-complex at that point to disable the MMU just to call the
resume path.

Instead, allow the suspend finisher to return error codes to abort
suspend in this circumstance, where the cpu_suspend internals will then
unwind the saved state on the stack.  Also omit the tlb flush as no
changes to the page tables will have happened.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-07-02 09:54:01 +01:00
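
A sketch of what a platform finisher can now do; the mysoc_* names are
hypothetical and the cpu_suspend() prototype is the one this series
converges on:

  static int mysoc_suspend_finisher(unsigned long arg)
  {
          if (!mysoc_enter_low_power(arg))        /* the SoC ignored the request */
                  return -EBUSY;                  /* cpu_suspend() unwinds and returns this */
          return 0;                               /* not reached if power really went away */
  }

  ret = cpu_suspend(0, mysoc_suspend_finisher);
  if (ret)
          pr_warn("suspend aborted: %d\n", ret);
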
Avi Kivity
4dc0da8696 perf: Add context field to perf_event
The perf_event overflow handler does not receive any caller-derived
argument, so many callers need to resort to looking up the perf_event
in their local data structure.  This is ugly and doesn't scale if a
single callback services many perf_events.

Fix by adding a context parameter to perf_event_create_kernel_counter()
(and derived hardware breakpoints APIs) and storing it in the perf_event.
The field can be accessed from the callback as event->overflow_handler_context.
All callers are updated.

Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1309362157-6596-2-git-send-email-avi@redhat.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2011-07-01 11:06:38 +02:00
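
A sketch of the resulting API use; my_handler/my_state are hypothetical
caller code, and the perf_event_create_kernel_counter() arguments
follow the new prototype described above:

  static void my_handler(struct perf_event *event,
                         struct perf_sample_data *data, struct pt_regs *regs)
  {
          struct my_state *st = event->overflow_handler_context;
          /* ... no lookup table keyed on the event is needed ... */
  }

  event = perf_event_create_kernel_counter(&attr, cpu, NULL,
                                           my_handler, &my_state);
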
Peter Zijlstra
89d6c0b5bd perf, arch: Add generic NODE cache events
Add a NODE level to the generic cache events which is used to measure
local vs remote memory accesses. Like all other cache events, an
ACCESS is HIT+MISS; if there is no way to distinguish between reads
and writes, count reads only, etc.

The below needs filling out for !x86 (which I filled out with
unsupported events).

I'm fairly sure ARM can leave it like that since it doesn't strike me as
an architecture that even has NUMA support. SH might have something since
it does appear to have some NUMA bits.

Sparc64, PowerPC and MIPS certainly want a good look there since they
clearly are NUMA capable.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: David Miller <davem@davemloft.net>
Cc: Anton Blanchard <anton@samba.org>
Cc: David Daney <ddaney@caviumnetworks.com>
Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Robert Richter <robert.richter@amd.com>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1303508226.4865.8.camel@laptop
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2011-07-01 11:06:38 +02:00
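
The new level plugs into the usual generic cache-event encoding
(id | op << 8 | result << 16); a sketch of requesting node-level read
misses through perf_event_attr:

  struct perf_event_attr attr = {
          .type   = PERF_TYPE_HW_CACHE,
          .config = PERF_COUNT_HW_CACHE_NODE |
                    (PERF_COUNT_HW_CACHE_OP_READ << 8) |
                    (PERF_COUNT_HW_CACHE_RESULT_MISS << 16),
  };
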
Peter Zijlstra
a8b0ca17b8 perf: Remove the nmi parameter from the swevent and overflow interface
The nmi parameter indicated whether we could do wakeups from the
current context; if not, we would set some state, self-IPI, and let
the resulting interrupt do the wakeup.

For the various event classes:

  - hardware: nmi=0; PMI is in fact an NMI or we run irq_work_run from
    the PMI-tail (ARM etc.)
  - tracepoint: nmi=0; since tracepoint could be from NMI context.
  - software: nmi=[0,1]; some, like the schedule thing cannot
    perform wakeups, and hence need 0.

As one can see, there is very little nmi=1 usage, and the down-side of
not using it is that on some platforms some software events can have a
jiffy delay in wakeup (when arch_irq_work_raise isn't implemented).

The up-side however is that we can remove the nmi parameter and save a
bunch of conditionals in fast paths.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Michael Cree <mcree@orcon.net.nz>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
Cc: Anton Blanchard <anton@samba.org>
Cc: Eric B Munson <emunson@mgebm.net>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: David S. Miller <davem@davemloft.net>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jason Wessel <jason.wessel@windriver.com>
Cc: Don Zickus <dzickus@redhat.com>
Link: http://lkml.kernel.org/n/tip-agjev8eu666tvknpb3iaj0fg@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2011-07-01 11:06:35 +02:00
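
An illustrative before/after for one typical call site (a page-fault
software event); the exact call sites touched by the patch vary:

  /* before: callers had to pass an nmi flag */
  perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, 0 /* nmi */, regs, address);

  /* after: the flag is gone; wakeups are deferred via irq_work when needed */
  perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
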
Russell King
8b4186160b ARM: entry: prefetch abort helper: pass aborted pc in r4 rather than r0
This avoids unnecessary instructions for CPUs which implement the IFAR
(instruction fault address register).

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-06-30 11:04:59 +01:00
Russell King
b059bdc393 ARM: entry: rejig register allocation in exception entry handlers
This allows us to avoid moving registers twice to work around the
clobbered registers when we add calls to trace_hardirqs_{on,off}.

Ensure that all SVC handlers return with SPSR in r5 for consistency.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-06-30 11:04:59 +01:00
Mark Rutland
e4b6381009 ARM: 6977/1: pmu: add platform_device_id table support
This patch adds support for platform_device_id tables, allowing new
PMU types to be registered with the correct type, without requiring
new platform_driver shims to provide the type. A single entry for
existing devices is provided.

Macros matching functionality of the of_device_id table macros are
provided for convenience.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Jamie Iles <jamie@jamieiles.com>
Cc: Rob Herring <rob.herring@calxeda.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-06-29 10:27:09 +01:00
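
A sketch of what a PMU platform driver can now carry; the mypmu_*
names are hypothetical and ARM_PMU_DEVICE_CPU is assumed to be the
existing CPU PMU type:

  static struct platform_device_id mypmu_plat_ids[] = {
          { .name = "arm-pmu", .driver_data = ARM_PMU_DEVICE_CPU },
          { },
  };

  static struct platform_driver mypmu_driver = {
          .driver   = { .name = "arm-pmu" },
          .probe    = mypmu_probe,        /* reads pdev->id_entry->driver_data */
          .id_table = mypmu_plat_ids,
  };
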
Mark Rutland
e73c34c3d5 ARM: 6976/1: pmu: add OF probing support
This is based on an earlier patch from Rob Herring <rob.herring@calxeda.com>

> Add OF match table to enable OF style driver binding. The dts entry is like
> this:
>
> pmu {
> 	compatible = "arm,cortex-a9-pmu";
> 	interrupts = <100 101>;
> };
>
> The use of pdev->id as an index breaks with OF device binding, so set the type
> based on the OF compatible string.

This modification sets the PMU hardware type based on data embedded in the
binding, allowing easy addition of new PMU types in future.

Support for new PMU types not provided by devicetree can be added later using
platform_device_id tables in a similar fashion.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Jamie Iles <jamie@jamieiles.com>
Acked-by: Rob Herring <rob.herring@calxeda.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-06-29 10:27:08 +01:00
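
A sketch of the corresponding of_device_id table, with the PMU type
carried in .data; the exact table contents in the patch may differ:

  static struct of_device_id mypmu_of_ids[] = {
          { .compatible = "arm,cortex-a9-pmu", .data = (void *)ARM_PMU_DEVICE_CPU },
          { },
  };

  /* hooked up via .driver.of_match_table = mypmu_of_ids in the platform_driver */
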
Mark Rutland
ae0c3751ab ARM: 6975/1: pmu: reject duplicate PMU registrations
Currently, the PMU reservation framework allows for multiple PMUs of
the same type to register themselves. This can lead to a bug with the
sequence:

register_pmu(pmu1);
reserve_pmu(pmu_type);
register_pmu(pmu2);
release_pmu(pmu1);

Here, pmu1 cannot be released, and pmu2 cannot be reserved.

This patch modifies register_pmu to reject registrations where a PMU is
already present, preventing this problem. PMUs which can have multiple
instances should not use the PMU reservation framework.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Jamie Iles <jamie@jamieiles.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-06-29 10:27:08 +01:00
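
The new behaviour, sketched with the names used in the sequence above;
the in-tree function, array and error code may differ:

  static struct platform_device *pmu_devices[ARM_NUM_PMU_DEVICES];

  static int register_pmu(enum arm_pmu_type type, struct platform_device *pdev)
  {
          if (pmu_devices[type])
                  return -EBUSY;          /* a PMU of this type is already registered */
          pmu_devices[type] = pdev;
          return 0;
  }
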
Mark Rutland
f12482c939 ARM: 6974/1: pmu: refactor reservation
Currently, PMU platform_device reservation relies on some minor abuse
of the platform_device::id field for determining the type of PMU. This
is problematic for device tree based probing, where the ID cannot be
controlled.

This patch removes reliance on the id field, and depends on each PMU's
platform driver to figure out which type it is. As all PMUs handled by
the current platform_driver name "arm-pmu" are CPU PMUs, this
convention is hardcoded. New PMU types can be supported through the use
of {of,platform}_device_id tables.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Jamie Iles <jamie@jamieiles.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Rob Herring <rob.herring@calxeda.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-06-29 10:27:08 +01:00
Russell King
fbab1c8094 ARM: entry: no need to check parent IRQ mask in IRQ handler return
There's no point checking to see whether IRQs were masked in the parent
context when returning from IRQ handling - the fact that we're handling
an IRQ means that the parent context must have had IRQs unmasked.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-06-29 10:06:37 +01:00
Russell King
1613cc1119 ARM: entry: no need to increase preempt count for IRQ handlers
irq_enter() and irq_exit() already take care of the preempt_count
handling for interrupts, which increment and decrement the hardirq
bits of the preempt count.  So we can remove the preempt count handling
in our IRQ entry/exit assembly, like x86 did some 9 years ago.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-06-29 10:06:37 +01:00
Russell King
0402becef9 ARM: entry: prefetch/data abort helpers: avoid corrupting r4
Replace r4 with ip for calling abort helpers - ip is allowed to be
corrupted by called functions in the ABI, so it makes more sense to
use such a register.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-06-29 10:05:33 +01:00
Russell King
ac8b9c1ce0 ARM: entry: prefetch/data abort helpers: convert to macros
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-06-29 10:03:02 +01:00
Russell King
2ff0720933 Merge branch 'cmpxchg64' of git://git.linaro.org/people/nico/linux into devel-stable 2011-06-28 21:23:00 +01:00
Nicolas Pitre
40fb79c8a8 ARM: add a kuser_cmpxchg64 user space helper
Some user space applications are designed around the ability to perform
atomic operations on 64 bit values.  Since this is natively possible
only with ARMv6k and above, let's provide a new kuser helper to perform
the operation with kernel supervision on pre ARMv6k hardware.

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Tested-by: Dave Martin <dave.martin@linaro.org>
2011-06-28 15:47:47 -04:00
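
A user-space sketch of calling the helper, following the kuser-helper
convention of a fixed entry address and a zero return on success (the
address and prototype should be checked against the kuser helper
documentation):

  #include <stdint.h>

  typedef int (__kuser_cmpxchg64_t)(const int64_t *oldval,
                                    const int64_t *newval,
                                    volatile int64_t *ptr);
  #define __kuser_cmpxchg64 (*(__kuser_cmpxchg64_t *)0xffff0f60)

  static int cmpxchg64_once(volatile int64_t *ptr, int64_t old, int64_t new)
  {
          return __kuser_cmpxchg64(&old, &new, ptr) == 0; /* 0: *ptr was updated */
  }
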
Russell King
2c74a0cefa ARM: pm: hide 1st and 2nd arguments to cpu_suspend from platform code
The first and second arguments shouldn't concern platform code, so
hide them from each platform's caller.

Tested-by: Kevin Hilman <khilman@ti.com>
Acked-by: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-06-24 09:54:39 +01:00
Russell King
14cd8fd574 ARM: pm: move cpu_init() call into core code
As we have core code dealing with CPU suspend/resume, we can
re-initialize the CPUs exception banked registers via that code rather
than having platforms deal with that level of detail.  So, move the
call to cpu_init() out of platform code into core code.

Tested-by: Kevin Hilman <khilman@ti.com>
Acked-by: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-06-24 08:48:43 +01:00
Russell King
e8856a8797 ARM: pm: convert cpu_suspend() to a normal function
cpu_suspend() has a weird calling method which makes it only possible to
call from assembly code: it returns with a modified stack pointer to
finish the suspend, but on resume, it 'returns' via a provided pointer.

We can make cpu_suspend() appear to be a normal function merely by
swapping the resume pointer argument and the link register.

Do so, and update all callers to take account of this more traditional
behaviour.

Acked-by: Frank Hofmann <frank.hofmann@tomtom.com>
Tested-by: Kevin Hilman <khilman@ti.com>
Acked-by: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-06-24 08:48:43 +01:00
Russell King
3799bbe578 ARM: pm: rejig suspend follow-on function calling convention
Save the suspend function pointer onto the stack for use when returning.
Allocate r2 to pass an argument to the suspend function.

Acked-by: Frank Hofmann <frank.hofmann@tomtom.com>
Tested-by: Kevin Hilman <khilman@ti.com>
Acked-by: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-06-24 08:47:36 +01:00
Russell King
8111eaa6d4 ARM: pm: reallocate registers to avoid r2, r3
Avoid using r2 and r3 in the suspend code, allowing these to be
passed further into the function as arguments.

Acked-by: Frank Hofmann <frank.hofmann@tomtom.com>
Tested-by: Kevin Hilman <khilman@ti.com>
Acked-by: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-06-24 08:47:32 +01:00
Russell King
5fa94c812c ARM: pm: preserve r4 - r11 across a suspend
Make the cpu_suspend()..return path preserve r4 to r11 across a
suspend cycle.  This is in preparation for relieving platform support
code of this task.

Acked-by: Frank Hofmann <frank.hofmann@tomtom.com>
Tested-by: Kevin Hilman <khilman@ti.com>
Acked-by: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-06-24 08:47:29 +01:00
Russell King
3fd431bd0c ARM: pm: extract common code from MULTI_CPU/!MULTI_CPU paths
Very little code is different between these two paths now, so extract
the common code.

Acked-by: Frank Hofmann <frank.hofmann@tomtom.com>
Tested-by: Kevin Hilman <khilman@ti.com>
Acked-by: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-06-24 08:47:26 +01:00
Russell King
2fefbcd585 ARM: pm: move return address (for cpu_resume) to top of stack
Move the return address for cpu_resume to the top of stack so that
cpu_resume looks more like a normal function.

Acked-by: Frank Hofmann <frank.hofmann@tomtom.com>
Tested-by: Kevin Hilman <khilman@ti.com>
Acked-by: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-06-24 08:47:23 +01:00
Russell King
6b5f6ab0e1 ARM: pm: make MULTI_CPU and !MULTI_CPU resume paths the same
Eliminate the differences between MULTI_CPU and non-MULTI_CPU resume
paths, making the saved structure identical irrespective of the way
the kernel was configured.

Acked-by: Frank Hofmann <frank.hofmann@tomtom.com>
Tested-by: Kevin Hilman <khilman@ti.com>
Acked-by: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-06-24 08:47:20 +01:00
Russell King
b69874e4f5 ARM: pm: arrange for cpu_proc_init() to be called on resume
cpu_proc_init() does processor-specific initialization, which we do
at boot time.  We have been omitting this on resume, which causes
some of that initialization to be skipped.  We have also been
skipping it during SMP initialization.

Ensure that cpu_proc_init() is always called appropriately by
moving it into cpu_init(), and move cpu_init() to a more appropriate
point in the boot initialization.

Tested-by: Kevin Hilman <khilman@ti.com>
Acked-by: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-06-24 08:47:12 +01:00
Russell King
573619d165 ARM: SMP: wait for CPU to be marked active
When we bring a CPU online, we should wait for it to become active
before entering the idle thread, so we know that the scheduler and
thread migration are going to work.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-06-21 11:09:05 +01:00
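
The essence of the change in the secondary CPU bring-up path,
simplified:

  /* secondary_start_kernel(), just before entering the idle loop */
  while (!cpu_active(cpu))
          cpu_relax();

  cpu_idle();     /* the scheduler and migration now see this CPU */
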
Nicolas Pitre
37b8304642 ARM: kuser: move interface documentation out of the source code
Digging into an assembly file in order to get information about the
kuser helpers is not that convenient.  Let's move that information to
a better-formatted file in Documentation/arm/ and improve on it a bit.

Thanks to Dave Martin <dave.martin@linaro.org> for the initial cleanup and
clarifications.

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Acked-by: Dave Martin <dave.martin@linaro.org>
2011-06-20 10:49:24 -04:00
Dave Martin
9a00318ead ARM: 6963/1: Thumb-2: Relax relocation requirements for non-function symbols
The "Thumb bit" of a symbol is only really meaningful for function
symbols (STT_FUNC).

However, sometimes a branch is relocated against a non-function
symbol; for example, PC-relative branches to anonymous assembler
local symbols are typically fixed up against the start-of-section
symbol, which is not a function symbol.  Some inline assembler
generates references of this type, such as fixup code generated by
macros in <asm/uaccess.h>.

The existing relocation code for R_ARM_THM_CALL/R_ARM_THM_JUMP24
interprets this case as an error, because the target symbol appears
to be an ARM symbol; but this is really not the case, since the
target symbol is just a base in these cases.  The addend defines
the precise offset to the target location, but since the addend is
encoded in a non-interworking Thumb branch instruction, there is no
explicit Thumb bit in the addend.  Because these instructions never
interwork, the implied Thumb bit in the addend is 1, and the
destination is Thumb by definition.

This patch removes the extraneous Thumb bit check for non-function
symbols, enabling modules containing the affected relocation types
to be loaded.  No modification to the actual relocation code is
required, since this code does not take bit[0] of the
location->destination offset into account in any case.

Function symbols are always checked for interworking conflicts, as
before.

Signed-off-by: Dave Martin <dave.martin@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-06-17 11:25:04 +01:00
Russell King
a9011580a9 ARM: extend Code: line by one 16-bit quantity for Thumb instructions
Dump out the 16-bit quantity following the faulting instruction in
the Code: line, so that 32-bit Thumb-2 instructions are shown in full
and can be properly decoded.

Tested-by: Kevin Hilman <khilman@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-06-09 23:55:45 +01:00
Po-Yu Chuang
373ce3020b ARM: 6955/1: cmpxchg syscall should data abort if page not write
If the page to be cmpxchg'd is user-mode read-only (not writable),
we should simulate a data abort first.

Signed-off-by: Po-Yu Chuang <ratbert@faraday-tech.com>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-06-09 10:15:07 +01:00
Nicolas Pitre
f506cd48a4 ARM: 6953/1: DT: don't try to access physical address zero
If the DT physical address is zero, this is equivalent to no DT.
Especially when the actual RAM physical address is not located at zero,
the result of phys_to_virt() would point to la-la-land and crash the
kernel, and such a crash is completely silent this early during boot.

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Acked-by: Grant Likely <grant.likely@secretlab.ca>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-06-09 10:15:06 +01:00
Ming Lei
9fc2552a68 ARM: 6952/1: fix lockdep warning of "unannotated irqs-off"
This patch fixes the lockdep warning of "unannotated irqs-off"[1].

After entering __irq_usr, the ARM core disables interrupts automatically,
but __irq_usr does not annotate the irq disable, so lockdep may emit
this warning if it gets a chance to check the state in the irq handler.

This patch adds trace_hardirqs_off in __irq_usr before entering
irq_handler to handle the irq, and also calls ret_to_user_from_irq to
avoid calling disable_irq again.

This is also a fix for irq off tracer.

[1], lockdep warning log of "unannotated irqs-off"

[   13.804687] ------------[ cut here ]------------
[   13.809570] WARNING: at kernel/lockdep.c:3335 check_flags+0x78/0x1d0()
[   13.816467] Modules linked in:
[   13.819732] Backtrace:
[   13.822357] [<c01cb42c>] (dump_backtrace+0x0/0x100) from [<c06abb14>] (dump_stack+0x20/0x24)
[   13.831268]  r6:c07d8c2c r5:00000d07 r4:00000000 r3:00000000
[   13.837280] [<c06abaf4>] (dump_stack+0x0/0x24) from [<c01ffc04>] (warn_slowpath_common+0x5c/0x74)
[   13.846649] [<c01ffba8>] (warn_slowpath_common+0x0/0x74) from [<c01ffc48>] (warn_slowpath_null+0x2c/0x34)
[   13.856781]  r8:00000000 r7:00000000 r6:c18b8194 r5:60000093 r4:ef182000
[   13.863708] r3:00000009
[   13.866485] [<c01ffc1c>] (warn_slowpath_null+0x0/0x34) from [<c0237d84>] (check_flags+0x78/0x1d0)
[   13.875823] [<c0237d0c>] (check_flags+0x0/0x1d0) from [<c023afc8>] (lock_acquire+0x4c/0x150)
[   13.884704] [<c023af7c>] (lock_acquire+0x0/0x150) from [<c06af638>] (_raw_spin_lock+0x4c/0x84)
[   13.893798] [<c06af5ec>] (_raw_spin_lock+0x0/0x84) from [<c01f9a44>] (sched_ttwu_pending+0x58/0x8c)
[   13.903320]  r6:ef92d040 r5:00000003 r4:c18b8180
[   13.908233] [<c01f99ec>] (sched_ttwu_pending+0x0/0x8c) from [<c01f9a90>] (scheduler_ipi+0x18/0x1c)
[   13.917663]  r6:ef183fb0 r5:00000003 r4:00000000 r3:00000001
[   13.923645] [<c01f9a78>] (scheduler_ipi+0x0/0x1c) from [<c01bc458>] (do_IPI+0x9c/0xfc)
[   13.932006] [<c01bc3bc>] (do_IPI+0x0/0xfc) from [<c06b0888>] (__irq_usr+0x48/0xe0)
[   13.939971] Exception stack(0xef183fb0 to 0xef183ff8)
[   13.945281] 3fa0:                                     ffffffc3 0001500c 00000001 0001500c
[   13.953948] 3fc0: 00000050 400b45f0 400d9000 00000000 00000001 400d9600 6474e552 bea05b3c
[   13.962585] 3fe0: 400d96c0 bea059c0 400b6574 400b65d8 20000010 ffffffff
[   13.969573]  r6:00000403 r5:fa240100 r4:ffffffff r3:20000010
[   13.975585] ---[ end trace efc4896ab0fb62cb ]---
[   13.980468] possible reason: unannotated irqs-off.
[   13.985534] irq event stamp: 1610
[   13.989044] hardirqs last  enabled at (1610): [<c01c703c>] no_work_pending+0x8/0x2c
[   13.997131] hardirqs last disabled at (1609): [<c01c7024>] ret_slow_syscall+0xc/0x1c
[   14.005371] softirqs last  enabled at (0): [<c01fe5e4>] copy_process+0x2cc/0xa24
[   14.013183] softirqs last disabled at (0): [<  (null)>]   (null)

Signed-off-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-06-06 10:56:22 +01:00
Linus Torvalds
571503e100 Merge branch 'setns'
* setns:
  ns: Wire up the setns system call

Done as a merge to make it easier to fix up conflicts in ARM due to
the addition of the sendmmsg system call
2011-05-28 10:51:01 -07:00
Eric W. Biederman
7b21fddd08 ns: Wire up the setns system call
32bit and 64bit on x86 are tested and working.  The rest I have looked
at closely and I can't find any problems.

setns is an easy system call to wire up.  It just takes two ints so I
don't expect any weird architecture porting problems.

While doing this I have noticed that we have some architectures that are
very slow to get new system calls.  cris seems to be the slowest where
the last system calls wired up were preadv and pwritev.  avr32 is weird
in that recvmmsg was wired up but never declared in unistd.h.  frv is
behind with perf_event_open being the last syscall wired up.  On h8300
the last system call wired up was epoll_wait.  On m32r the last system
call wired up was fallocate.  mn10300 has recvmmsg as the last system
call wired up.  The rest seem to at least have syncfs wired up, which
was new in 2.6.39.

v2: Most of the architecture support added by Daniel Lezcano <dlezcano@fr.ibm.com>
v3: ported to v2.6.36-rc4 by: Eric W. Biederman <ebiederm@xmission.com>
v4: Moved wiring up of the system call to another patch
v5: ported to v2.6.39-rc6
v6: rebased onto parisc-next and net-next to avoid syscall conflicts.
v7: ported to Linus's latest post 2.6.39 tree.

>  arch/blackfin/include/asm/unistd.h     |    3 ++-
>  arch/blackfin/mach-common/entry.S      |    1 +
Acked-by: Mike Frysinger <vapier@gentoo.org>

Oh - ia64 wiring looks good.
Acked-by: Tony Luck <tony.luck@intel.com>

Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2011-05-28 10:48:39 -07:00
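
A user-space sketch of the call once it is wired up, using the raw
syscall number (the glibc setns() wrapper came later); the pid and
namespace chosen here are purely illustrative:

  #define _GNU_SOURCE
  #include <sched.h>              /* CLONE_NEWNET */
  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  static int join_net_ns(pid_t pid)
  {
          char path[64];
          int fd, ret;

          snprintf(path, sizeof(path), "/proc/%d/ns/net", (int)pid);
          fd = open(path, O_RDONLY);
          if (fd < 0)
                  return -1;
          ret = syscall(__NR_setns, fd, CLONE_NEWNET);    /* 0 on success */
          close(fd);
          return ret;
  }
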
Russell King
239df0fd5e Merge branches 'devel', 'devel-stable' and 'fixes' into for-linus 2011-05-27 22:59:57 +01:00
Catalin Marinas
d427958a46 ARM: 6942/1: mm: make TTBR1 always point to swapper_pg_dir on ARMv6/7
This patch makes TTBR1 point to swapper_pg_dir so that global, kernel
mappings can be used exclusively on v6 and v7 cores where they are
needed.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-05-26 12:14:32 +01:00