By allowing code to detect whether DTCM or ITCM is present, code paths
involving TCM can be avoided when running on platforms that lack it.
This is useful for building a single kernel image covering several
platforms, where some of them utilize TCM and others don't.
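For illustration, the detection might look roughly like this sketch,
which reads the TCM Type Register (CP15 c0, c0, 2); the function and
variable names here are illustrative, not the exact patch:

        /* sketch: probe TCMTR once at boot, guard TCM code paths on it */
        static u32 dtcm_banks, itcm_banks;

        static void __init tcm_probe(void)
        {
                u32 tcmtr;

                /* TCM Type Register: DTCM banks in [18:16], ITCM in [2:0] */
                asm("mrc p15, 0, %0, c0, c0, 2" : "=r" (tcmtr));
                dtcm_banks = (tcmtr >> 16) & 7;
                itcm_banks = tcmtr & 7;
        }

Code that touches DTCM or ITCM then bails out early when the
respective bank count is zero.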
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The PB11MPCore reports "3" DTCM banks, but anything above 2 is an
"undefined" value, so force this to 0. Further, add checks for the
case where code is compiled for TCM even though no D/ITCM is present
in the system, and verify that the compiled code actually fits. We
don't BUG() since that's not helpful; it's better to handle
non-present TCM dynamically. If nothing is compiled to TCM and no TCM
is detected, the code now just stays silent even if TCM support is
enabled.
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Ensure that the meminfo array is sanity checked before we pass the
memory to memblock. This helps to ensure that memblock and meminfo
agree on the dimensions of memory, especially when more memory is
passed than the kernel can deal with.
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
armpmu_enable can be called in situations where no events are present
(for example, from the event rotation tick after a profiled task has
exited). In this case, we currently start the PMU anyway, which may
leave it active indefinitely without any events being monitored.
This patch adds a simple check to the enabling code so that we avoid
starting the PMU when no events are present.
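The guard boils down to something like the following (condensed from
the description above, not the verbatim patch):

        static void armpmu_enable(struct pmu *pmu)
        {
                struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
                int idx, enabled = 0;

                for (idx = 0; idx <= armpmu->num_events; ++idx) {
                        struct perf_event *event = cpuc->events[idx];

                        if (!event)
                                continue;

                        armpmu->enable(&event->hw, idx);
                        enabled = 1;
                }

                /* only start the PMU if at least one event is scheduled in */
                if (enabled)
                        armpmu->start();
        }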
Cc: <stable@kernel.org>
Reported-by: Ashwin Chaugle <ashwinc@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The SVC IRQ, prefetch and data abort handlers preserve the SPSR value
via r5 across the exception. Rather than re-loading it from pt_regs,
use the preserved value instead.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tail-call the main C data abort handler code from the per-CPU helper
code. Update the comments in the code with respect to the new calling
and return register state.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tail-call the main C prefetch abort handler code from the per-CPU
helper code. Also note that the helper function becomes ABI
compliant in terms of the registers preserved.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This avoids the irq entry assembly corrupting r5, thereby allowing it
to be preserved through to the svc exit code.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
All handlers now call trace_hardirqs_off, so move this common code into
the (svc|usr)_entry assembler macros.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
As we no longer re-enable interrupts in these exception handlers, add
the irqsoff tracing calls to them so that the kernel tracks the state
more accurately.
Note that these calls are conditional on IRQSOFF_TRACER:
        kernel ----------> user ---------> kernel
            ^ irqs enabled     ^ irqs disabled
No kernel code can run on the local CPU until we've re-entered the
kernel through one of the exception handlers - and userspace can not
take any locks etc. So, the kernel doesn't care about the IRQ mask
state while userspace is running unless we're doing IRQ off latency
tracing. So, we can (and do) avoid the overhead of updating the IRQ
mask state on every kernel->user and user->kernel transition.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Add irqtrace function calls to the undefined exception handler, so
that we get sane lockdep traces from locking problems in undefined
exception handlers.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Rather than enabling interrupts in the abort handler assembly code
when the parent context had them enabled, move this into the
breakpoint/page/alignment fault handlers instead.
This gets rid of some special-casing for the breakpoint fault handlers
from the low level abort handler path.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
There are SoCs where attempting to enter a low power state is ignored,
and the CPU continues executing instructions with all state preserved.
It is over-complex at that point to disable the MMU just to call the
resume path.
Instead, allow the suspend finisher to return error codes to abort
suspend in this circumstance, where the cpu_suspend internals will then
unwind the saved state on the stack. Also omit the tlb flush as no
changes to the page tables will have happened.
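A platform finisher can then look roughly like this sketch (the
function name is illustrative, and the prototype assumed is
int cpu_suspend(unsigned long arg, int (*fn)(unsigned long))):

        static int platform_finish_suspend(unsigned long arg)
        {
                cpu_do_idle();          /* request the low power state */

                /*
                 * Still executing: the SoC ignored the request.
                 * Returning non-zero makes cpu_suspend() unwind the
                 * state saved on the stack and hand this error back
                 * to the caller.
                 */
                return -EIO;
        }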
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The perf_event overflow handler does not receive any caller-derived
argument, so many callers need to resort to looking up the perf_event
in their local data structure. This is ugly and doesn't scale if a
single callback services many perf_events.
Fix by adding a context parameter to perf_event_create_kernel_counter()
(and derived hardware breakpoints APIs) and storing it in the perf_event.
The field can be accessed from the callback as event->overflow_handler_context.
All callers are updated.
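In sketch form (the handler and state structure names here are made
up for illustration):

        struct perf_event *
        perf_event_create_kernel_counter(struct perf_event_attr *attr,
                                         int cpu, struct task_struct *task,
                                         perf_overflow_handler_t overflow_handler,
                                         void *context);        /* new parameter */

        struct my_state { unsigned long overflows; };

        static void my_overflow_handler(struct perf_event *event,
                                        struct perf_sample_data *data,
                                        struct pt_regs *regs)
        {
                /* per-event state, no global lookup required */
                struct my_state *state = event->overflow_handler_context;

                state->overflows++;
        }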
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1309362157-6596-2-git-send-email-avi@redhat.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Add a NODE level to the generic cache events, used to measure local
vs remote memory accesses. Like all other cache events, an ACCESS is
HIT+MISS; if there is no way to distinguish between reads and writes,
do reads only, etc.
The below needs filling out for !x86 (which I filled out with
unsupported events).
I'm fairly sure ARM can leave it like that since it doesn't strike me as
an architecture that even has NUMA support. SH might have something since
it does appear to have some NUMA bits.
Sparc64, PowerPC and MIPS certainly want a good look there since they
clearly are NUMA capable.
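The new level slots into the generic cache enum after the branch
predictor unit; roughly:

        enum perf_hw_cache_id {
                PERF_COUNT_HW_CACHE_L1D         = 0,
                PERF_COUNT_HW_CACHE_L1I         = 1,
                PERF_COUNT_HW_CACHE_LL          = 2,
                PERF_COUNT_HW_CACHE_DTLB        = 3,
                PERF_COUNT_HW_CACHE_ITLB        = 4,
                PERF_COUNT_HW_CACHE_BPU         = 5,
                PERF_COUNT_HW_CACHE_NODE        = 6,    /* local vs remote */

                PERF_COUNT_HW_CACHE_MAX,                /* non-ABI */
        };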
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: David Miller <davem@davemloft.net>
Cc: Anton Blanchard <anton@samba.org>
Cc: David Daney <ddaney@caviumnetworks.com>
Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Robert Richter <robert.richter@amd.com>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1303508226.4865.8.camel@laptop
Signed-off-by: Ingo Molnar <mingo@elte.hu>
The nmi parameter indicated whether we could do wakeups from the
current context; if not, we would set some state and self-IPI, and
let the resulting interrupt do the wakeup.
For the various event classes:
- hardware: nmi=0; PMI is in fact an NMI or we run irq_work_run from
the PMI-tail (ARM etc.)
- tracepoint: nmi=0; since tracepoint could be from NMI context.
- software: nmi=[0,1]; some, like the schedule thing cannot
perform wakeups, and hence need 0.
As one can see, there is very little nmi=1 usage, and the down-side of
not using it is that on some platforms some software events can have a
jiffy delay in wakeup (when arch_irq_work_raise isn't implemented).
The up-side however is that we can remove the nmi parameter and save a
bunch of conditionals in fast paths.
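The interface change itself is just the dropped parameter, e.g.:

        /* before: every caller had to pass an nmi flag */
        extern int perf_event_overflow(struct perf_event *event, int nmi,
                                       struct perf_sample_data *data,
                                       struct pt_regs *regs);

        /* after: wakeups that can't be done in-context go via irq_work */
        extern int perf_event_overflow(struct perf_event *event,
                                       struct perf_sample_data *data,
                                       struct pt_regs *regs);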
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Michael Cree <mcree@orcon.net.nz>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
Cc: Anton Blanchard <anton@samba.org>
Cc: Eric B Munson <emunson@mgebm.net>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: David S. Miller <davem@davemloft.net>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jason Wessel <jason.wessel@windriver.com>
Cc: Don Zickus <dzickus@redhat.com>
Link: http://lkml.kernel.org/n/tip-agjev8eu666tvknpb3iaj0fg@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
This avoids unnecessary instructions for CPUs which implement the IFAR
(instruction fault address register).
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This allows us to avoid moving registers twice to work around the
clobbered registers when we add calls to trace_hardirqs_{on,off}.
Ensure that all SVC handlers return with SPSR in r5 for consistency.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch adds support for platform_device_id tables, allowing new
PMU types to be registered with the correct type, without requiring
new platform_driver shims to provide the type. A single entry for
existing devices is provided.
Macros matching functionality of the of_device_id table macros are
provided for convenience.
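The single entry might look like this sketch (identifier names are
approximate, with ARM_PMU_DEVICE_CPU from asm/pmu.h):

        static struct platform_device_id armpmu_plat_device_ids[] = {
                { .name = "arm-pmu", .driver_data = ARM_PMU_DEVICE_CPU },
                { },
        };

        static struct platform_driver armpmu_driver = {
                .driver         = { .name = "arm-pmu" },
                .probe          = armpmu_device_probe,
                .id_table       = armpmu_plat_device_ids,
        };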
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Jamie Iles <jamie@jamieiles.com>
Cc: Rob Herring <rob.herring@calxeda.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This is based on an earlier patch from Rob Herring <rob.herring@calxeda.com>
> Add OF match table to enable OF style driver binding. The dts entry is like
> this:
>
> pmu {
>         compatible = "arm,cortex-a9-pmu";
>         interrupts = <100 101>;
> };
>
> The use of pdev->id as an index breaks with OF device binding, so set the type
> based on the OF compatible string.
This modification sets the PMU hardware type based on data embedded in the
binding, allowing easy addition of new PMU types in future.
Support for new PMU types not provided by devicetree can be added later using
platform_device_id tables in a similar fashion.
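A sketch of how the type can be embedded in the match data (again,
identifier names are approximate):

        static struct of_device_id armpmu_of_device_ids[] = {
                { .compatible = "arm,cortex-a9-pmu",
                  .data = (void *)ARM_PMU_DEVICE_CPU },
                { },
        };

The driver's .of_match_table then points at this array, so the probe
routine can recover the PMU type from the matched entry instead of
pdev->id.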
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Jamie Iles <jamie@jamieiles.com>
Acked-by: Rob Herring <rob.herring@calxeda.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Currently, the PMU reservation framework allows for multiple PMUs of
the same type to register themselves. This can lead to a bug with the
sequence:
        register_pmu(pmu1);
        reserve_pmu(pmu_type);
        register_pmu(pmu2);
        release_pmu(pmu1);
Here, pmu1 cannot be released, and pmu2 cannot be reserved.
This patch modifies register_pmu to reject registrations where a PMU is
already present, preventing this problem. PMUs which can have multiple
instances should not use the PMU reservation framework.
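Conceptually the registration check is just this (sketch, not the
verbatim patch):

        static struct platform_device *pmu_devices[ARM_NUM_PMU_DEVICES];

        int register_pmu(enum arm_pmu_type type, struct platform_device *pdev)
        {
                if (pmu_devices[type])
                        return -EBUSY;  /* a PMU of this type already exists */

                pmu_devices[type] = pdev;
                return 0;
        }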
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Jamie Iles <jamie@jamieiles.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Currently, PMU platform_device reservation relies on some minor abuse
of the platform_device::id field for determining the type of PMU. This
is problematic for device tree based probing, where the ID cannot be
controlled.
This patch removes reliance on the id field, and depends on each PMU's
platform driver to figure out which type it is. As all PMUs handled by
the current platform_driver name "arm-pmu" are CPU PMUs, this
convention is hardcoded. New PMU types can be supported through the
use of {of,platform}_device_id tables.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Jamie Iles <jamie@jamieiles.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Rob Herring <rob.herring@calxeda.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
There's no point checking to see whether IRQs were masked in the parent
context when returning from IRQ handling - the fact that we're handling
an IRQ means that the parent context must have had IRQs unmasked.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
irq_enter() and irq_exit() already take care of the preempt_count
handling for interrupts, which increment and decrement the hardirq
bits of the preempt count. So we can remove the preempt count handling
in our IRQ entry/exit assembly, like x86 did some 9 years ago.
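For reference, the generic helpers of this era already do the hardirq
accounting; roughly, from <linux/hardirq.h>:

        #define __irq_enter()                                   \
                do {                                            \
                        account_system_vtime(current);          \
                        add_preempt_count(HARDIRQ_OFFSET);      \
                        trace_hardirq_enter();                  \
                } while (0)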
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Replace r4 with ip for calling abort helpers - ip is allowed to be
corrupted by called functions in the ABI, so it makes more sense to
use such a register.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Some user space applications are designed around the ability to perform
atomic operations on 64 bit values. Since this is natively possible
only with ARMv6k and above, let's provide a new kuser helper to perform
the operation with kernel supervision on pre ARMv6k hardware.
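From user space, the helper ends up being used roughly like this (per
the calling convention documented for the kuser helpers; the helper
returns zero on success):

        #include <stdint.h>

        typedef int (__kuser_cmpxchg64_t)(const int64_t *oldval,
                                          const int64_t *newval,
                                          volatile int64_t *ptr);
        #define __kuser_cmpxchg64 (*(__kuser_cmpxchg64_t *)0xffff0f60)

        int64_t atomic_add64(volatile int64_t *ptr, int64_t val)
        {
                int64_t old, new;

                do {
                        old = *ptr;
                        new = old + val;
                } while (__kuser_cmpxchg64(&old, &new, ptr));

                return new;
        }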
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Tested-by: Dave Martin <dave.martin@linaro.org>
The first and second arguments shouldn't concern platform code, so
hide them from each platform's caller.
Tested-by: Kevin Hilman <khilman@ti.com>
Acked-by: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
As we have core code dealing with CPU suspend/resume, we can
re-initialize the CPU's banked exception registers via that code rather
than having platforms deal with that level of detail. So, move the
call to cpu_init() out of platform code into core code.
Tested-by: Kevin Hilman <khilman@ti.com>
Acked-by: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
cpu_suspend() has a weird calling method which makes it only possible to
call from assembly code: it returns with a modified stack pointer to
finish the suspend, but on resume, it 'returns' via a provided pointer.
We can make cpu_suspend() appear to be a normal function merely by
swapping the resume pointer argument and the link register.
Do so, and update all callers to take account of this more traditional
behaviour.
Acked-by: Frank Hofmann <frank.hofmann@tomtom.com>
Tested-by: Kevin Hilman <khilman@ti.com>
Acked-by: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Save the suspend function pointer onto the stack for use when returning.
Allocate r2 to pass an argument to the suspend function.
Acked-by: Frank Hofmann <frank.hofmann@tomtom.com>
Tested-by: Kevin Hilman <khilman@ti.com>
Acked-by: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Avoid using r2 and r3 in the suspend code, allowing these to be
passed further into the function as arguments.
Acked-by: Frank Hofmann <frank.hofmann@tomtom.com>
Tested-by: Kevin Hilman <khilman@ti.com>
Acked-by: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Make the code path from cpu_suspend() through to return preserve r4
to r11 across a suspend cycle. This is in preparation for relieving
platform support code of this task.
Acked-by: Frank Hofmann <frank.hofmann@tomtom.com>
Tested-by: Kevin Hilman <khilman@ti.com>
Acked-by: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Very little code is different between these two paths now, so extract
the common code.
Acked-by: Frank Hofmann <frank.hofmann@tomtom.com>
Tested-by: Kevin Hilman <khilman@ti.com>
Acked-by: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Move the return address for cpu_resume to the top of stack so that
cpu_resume looks more like a normal function.
Acked-by: Frank Hofmann <frank.hofmann@tomtom.com>
Tested-by: Kevin Hilman <khilman@ti.com>
Acked-by: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Eliminate the differences between MULTI_CPU and non-MULTI_CPU resume
paths, making the saved structure identical irrespective of the way
the kernel was configured.
Acked-by: Frank Hofmann <frank.hofmann@tomtom.com>
Tested-by: Kevin Hilman <khilman@ti.com>
Acked-by: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
cpu_proc_init() does processor-specific initialization, which we do
at boot time. We have been omitting this on resume, which causes some
of this initialization to be skipped. We have also been skipping it
during SMP initialization.
Ensure that cpu_proc_init() is always called appropriately by
moving it into cpu_init(), and move cpu_init() to a more appropriate
point in the boot initialization.
Tested-by: Kevin Hilman <khilman@ti.com>
Acked-by: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
When we bring a CPU online, we should wait for it to become active
before entering the idle thread, so we know that the scheduler and
thread migration are going to work.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Digging into an assembly file in order to get information about the
kuser helpers is not that convenient. Let's move that information to
a better formatted file in Documentation/arm/ and improve on it a bit.
Thanks to Dave Martin <dave.martin@linaro.org> for the initial cleanup and
clarifications.
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Acked-by: Dave Martin <dave.martin@linaro.org>
The "Thumb bit" of a symbol is only really meaningful for function
symbols (STT_FUNC).
However, sometimes a branch is relocated against a non-function
symbol; for example, PC-relative branches to anonymous assembler
local symbols are typically fixed up against the start-of-section
symbol, which is not a function symbol. Some inline assembler
generates references of this type, such as fixup code generated by
macros in <asm/uaccess.h>.
The existing relocation code for R_ARM_THM_CALL/R_ARM_THM_JUMP24
interprets this case as an error, because the target symbol appears
to be an ARM symbol; but this is really not the case, since the
target symbol is just a base in these cases. The addend defines
the precise offset to the target location, but since the addend is
encoded in a non-interworking Thumb branch instruction, there is no
explicit Thumb bit in the addend. Because these instructions never
interwork, the implied Thumb bit in the addend is 1, and the
destination is Thumb by definition.
This patch removes the extraneous Thumb bit check for non-function
symbols, enabling modules containing the affected relocation types
to be loaded. No modification to the actual relocation code is
required, since this code does not take bit[0] of the
location->destination offset into account in any case.
Function symbols are always checked for interworking conflicts, as
before.
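The resulting check amounts to something like this sketch (the helper
name is illustrative):

        /*
         * Only function symbols can present an interworking conflict:
         * an STT_FUNC target with bit 0 clear is ARM code, which a
         * non-interworking Thumb branch can never reach.
         */
        static bool thumb_branch_conflict(const Elf32_Sym *sym)
        {
                return ELF32_ST_TYPE(sym->st_info) == STT_FUNC &&
                       !(sym->st_value & 1);
        }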
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Dump out the 16-bit instruction following the faulting instruction
in the Code: line. This allows Thumb-2 instructions to be properly
encoded.
Tested-by: Kevin Hilman <khilman@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
If the page targeted by the cmpxchg is user-mode read-only (not
writable), we should simulate a data abort first.
Signed-off-by: Po-Yu Chuang <ratbert@faraday-tech.com>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
If the DT physical address is zero, this is equivalent to no DT.
Especially when the actual RAM physical address is not located at zero,
the result of phys_to_virt() would point to la-la-land and crash the
kernel, and the crash is completely silent this early during boot.
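The fix amounts to an early bail-out in setup_machine_fdt(); roughly:

        struct machine_desc * __init setup_machine_fdt(unsigned int dt_phys)
        {
                struct boot_param_header *devtree;
                struct machine_desc *mdesc = NULL;

                /* a zero address means no DT was passed at all */
                if (!dt_phys)
                        return NULL;

                devtree = phys_to_virt(dt_phys);
                if (be32_to_cpu(devtree->magic) != OF_DT_HEADER)
                        return NULL;

                /* ... scan the tree and pick the matching machine ... */
                return mdesc;
        }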
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Acked-by: Grant Likely <grant.likely@secretlab.ca>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch fixes the lockdep warning of "unannotated irqs-off" [1].
After entering __irq_usr, the ARM core disables interrupts
automatically, but __irq_usr does not annotate the disable, so lockdep
may emit the warning if it gets a chance to check the state in an irq
handler.
This patch adds trace_hardirqs_off in __irq_usr before entering
irq_handler to handle the irq, and also calls ret_to_user_from_irq to
avoid disabling interrupts again.
This is also a fix for the irq-off tracer.
[1], lockdep warning log of "unannotated irqs-off"
[ 13.804687] ------------[ cut here ]------------
[ 13.809570] WARNING: at kernel/lockdep.c:3335 check_flags+0x78/0x1d0()
[ 13.816467] Modules linked in:
[ 13.819732] Backtrace:
[ 13.822357] [<c01cb42c>] (dump_backtrace+0x0/0x100) from [<c06abb14>] (dump_stack+0x20/0x24)
[ 13.831268] r6:c07d8c2c r5:00000d07 r4:00000000 r3:00000000
[ 13.837280] [<c06abaf4>] (dump_stack+0x0/0x24) from [<c01ffc04>] (warn_slowpath_common+0x5c/0x74)
[ 13.846649] [<c01ffba8>] (warn_slowpath_common+0x0/0x74) from [<c01ffc48>] (warn_slowpath_null+0x2c/0x34)
[ 13.856781] r8:00000000 r7:00000000 r6:c18b8194 r5:60000093 r4:ef182000
[ 13.863708] r3:00000009
[ 13.866485] [<c01ffc1c>] (warn_slowpath_null+0x0/0x34) from [<c0237d84>] (check_flags+0x78/0x1d0)
[ 13.875823] [<c0237d0c>] (check_flags+0x0/0x1d0) from [<c023afc8>] (lock_acquire+0x4c/0x150)
[ 13.884704] [<c023af7c>] (lock_acquire+0x0/0x150) from [<c06af638>] (_raw_spin_lock+0x4c/0x84)
[ 13.893798] [<c06af5ec>] (_raw_spin_lock+0x0/0x84) from [<c01f9a44>] (sched_ttwu_pending+0x58/0x8c)
[ 13.903320] r6:ef92d040 r5:00000003 r4:c18b8180
[ 13.908233] [<c01f99ec>] (sched_ttwu_pending+0x0/0x8c) from [<c01f9a90>] (scheduler_ipi+0x18/0x1c)
[ 13.917663] r6:ef183fb0 r5:00000003 r4:00000000 r3:00000001
[ 13.923645] [<c01f9a78>] (scheduler_ipi+0x0/0x1c) from [<c01bc458>] (do_IPI+0x9c/0xfc)
[ 13.932006] [<c01bc3bc>] (do_IPI+0x0/0xfc) from [<c06b0888>] (__irq_usr+0x48/0xe0)
[ 13.939971] Exception stack(0xef183fb0 to 0xef183ff8)
[ 13.945281] 3fa0: ffffffc3 0001500c 00000001 0001500c
[ 13.953948] 3fc0: 00000050 400b45f0 400d9000 00000000 00000001 400d9600 6474e552 bea05b3c
[ 13.962585] 3fe0: 400d96c0 bea059c0 400b6574 400b65d8 20000010 ffffffff
[ 13.969573] r6:00000403 r5:fa240100 r4:ffffffff r3:20000010
[ 13.975585] ---[ end trace efc4896ab0fb62cb ]---
[ 13.980468] possible reason: unannotated irqs-off.
[ 13.985534] irq event stamp: 1610
[ 13.989044] hardirqs last enabled at (1610): [<c01c703c>] no_work_pending+0x8/0x2c
[ 13.997131] hardirqs last disabled at (1609): [<c01c7024>] ret_slow_syscall+0xc/0x1c
[ 14.005371] softirqs last enabled at (0): [<c01fe5e4>] copy_process+0x2cc/0xa24
[ 14.013183] softirqs last disabled at (0): [< (null)>] (null)
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
32bit and 64bit on x86 are tested and working. The rest I have looked
at closely and I can't find any problems.
setns is an easy system call to wire up. It just takes two ints so I
don't expect any weird architecture porting problems.
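For example, the ARM wiring is (I believe) just the syscall number
plus a call table entry:

        /* arch/arm/include/asm/unistd.h */
        #define __NR_setns              (__NR_SYSCALL_BASE + 375)

        /* arch/arm/kernel/calls.S */
        /* 375 */       CALL(sys_setns)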
While doing this I have noticed that we have some architectures that are
very slow to get new system calls. cris seems to be the slowest where
the last system calls wired up were preadv and pwritev. avr32 is weird
in that recvmmsg was wired up but never declared in unistd.h. frv is
behind with perf_event_open being the last syscall wired up. On h8300
the last system call wired up was epoll_wait. On m32r the last system
call wired up was fallocate. mn10300 has recvmmsg as the last system
call wired up. The rest seem to at least have syncfs wired up, which
was new in 2.6.39.
v2: Most of the architecture support added by Daniel Lezcano <dlezcano@fr.ibm.com>
v3: ported to v2.6.36-rc4 by: Eric W. Biederman <ebiederm@xmission.com>
v4: Moved wiring up of the system call to another patch
v5: ported to v2.6.39-rc6
v6: rebased onto parisc-next and net-next to avoid syscall conflicts.
v7: ported to Linus's latest post 2.6.39 tree.
> arch/blackfin/include/asm/unistd.h | 3 ++-
> arch/blackfin/mach-common/entry.S | 1 +
Acked-by: Mike Frysinger <vapier@gentoo.org>
Oh - ia64 wiring looks good.
Acked-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This patch makes TTBR1 point to swapper_pg_dir so that global, kernel
mappings can be used exclusively on v6 and v7 cores where they are
needed.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>