arch/arm/mm/dma-mapping.c: In function `dma_sync_sg_for_cpu':
arch/arm/mm/dma-mapping.c:588: warning: statement with no effect
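For illustration only (this is not the code the patch touches): warnings of
this class typically come from a stub macro that expands to a bare value,
which has no effect when used as a statement; the usual do { } while (0)
idiom avoids it.

  /* Illustrative stubs, not the kernel's actual definitions. */
  #define sync_stub_warns(dev, sg, nents, dir)    (1)
  #define sync_stub_quiet(dev, sg, nents, dir)    do { } while (0)

  static void example(void)
  {
          sync_stub_warns(NULL, NULL, 0, 0);  /* gcc: statement with no effect */
          sync_stub_quiet(NULL, NULL, 0, 0);  /* no warning */
  }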
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Now that the critical read back to flush the next descriptor address is
fixed, we can downgrade some BUG_ONs that need only be enabled when testing
changes to the driver.
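A hedged sketch of the idea (the config symbol and macro name below are
hypothetical, not the driver's own):

  /* Debug-only assertion; production builds compile the check out. */
  #ifdef CONFIG_IOAT_DEBUG                /* hypothetical option */
  #define ioat_assert(cond)       BUG_ON(!(cond))
  #else
  #define ioat_assert(cond)       do { } while (0)
  #endif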
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Mikael Pettersson reported:
The 2.6.28-rc kernels fail to detect PCI device 0000:00:01.0
(the first ethernet port) on my Thecus n2100 XScale box.
There is however still a strange "ghost" device that gets partially
detected in 2.6.28-rc2 vanilla.
The IOP321 manual says:
The user designates the memory region containing the OCCDR as
non-cacheable and non-bufferable from the Intel® XScale™ core.
This guarantees that all load/stores to the OCCDR are only of
DWORD quantities.
Ensure that the OCCDR is so mapped.
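A hedged sketch of what such a mapping can look like on ARM (the symbols,
length and memory type below are placeholders, not taken from the patch);
the point is only that the region covering the OCCDR gets a non-cacheable,
non-bufferable mapping:

  #include <asm/mach/map.h>

  static struct map_desc iop321_occdr_desc __initdata = {
          .virtual  = IOP321_OCCDR_VIRT,                  /* hypothetical */
          .pfn      = __phys_to_pfn(IOP321_OCCDR_PHYS),   /* hypothetical */
          .length   = SZ_4K,
          .type     = MT_UNCACHED,        /* non-cacheable, non-bufferable */
  };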
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
As a result of the ptebits changes, we ended up marking device mappings
as normal memory on ARMv7 CPUs, resulting in undesirable behaviour with
serial ports and the like. While reviewing the section mapping table
entries, other errors in the memory type settings for devices were
detected and confirmed to prevent Xscale3 platforms booting.
Tested on:
OMAP34xx (ARMv7),
OMAP24xx (ARMv6),
OMAP16xx (ARM926T, ARMv5),
PXA311 (Xscale3),
PXA272 (Xscale),
PXA255 (Xscale),
IXP42x (Xscale),
S3C2410 (ARM920T, ARMv4T),
ARM720T (ARMv4T),
StrongARM-110 (ARMv4)
Acked-by: Tony Lindgren <tony@atomide.com>
Tested-by: Robert Jarzmik <robert.jarzmik@free.fr>
Tested-by: Mike Rapoport <mike@compulab.co.il>
Tested-by: Ben Dooks <ben-linux@fluff.org>
Tested-by: Anders Grafström <grfstrm@users.sourceforge.net>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
As of 73bdf0a60e, the kernel needs
to know where modules are located in the virtual address space.
On ARM, we located this region between MODULE_START and MODULE_END.
Unfortunately, everyone else calls it MODULES_VADDR and MODULES_END.
Update ARM to use the same naming, so is_vmalloc_or_module_addr()
can work properly. Also update the comment in mm/vmalloc.c to
reflect that ARM also places modules in a separate region from the
vmalloc space.
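For reference, a hedged, simplified sketch of the check in mm/vmalloc.c that
the common naming enables (not the exact kernel code):

  /* Treat module space as vmalloc-like on architectures that place
   * modules in their own region. */
  int is_vmalloc_or_module_addr(const void *x)
  {
  #if defined(CONFIG_MODULES) && defined(MODULES_VADDR)
          unsigned long addr = (unsigned long)x;

          if (addr >= MODULES_VADDR && addr < MODULES_END)
                  return 1;
  #endif
          return is_vmalloc_addr(x);
  }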
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Those inline assembly segments using the umlal instruction must have
the & modifier so as to be sure that a purely-input register won't alias
one of the registers used as input+output. In most cases, the inputs
are still used after the outputs are written, and most binutils versions
insist that "rdhi, rdlo and rm must all be different" even for ARMv6+.
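A hedged illustration of the constraint usage (a representative 64-bit
multiply-accumulate, not one of the patched call sites): the '&' on the
outputs stops the compiler from allocating the pure inputs into the same
registers as rdlo/rdhi, while the tied operands still share registers with
the accumulator.

  static inline unsigned long long mla64(unsigned int a, unsigned int b,
                                         unsigned long long acc)
  {
          unsigned int lo = acc;
          unsigned int hi = acc >> 32;

          /* rdhi:rdlo += a * b; the early clobber keeps a/b out of lo/hi */
          asm ("umlal  %0, %1, %2, %3"
               : "=&r" (lo), "=&r" (hi)
               : "r" (a), "r" (b), "0" (lo), "1" (hi));

          return ((unsigned long long)hi << 32) | lo;
  }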
Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Due to confusion between the ftrace infrastructure and the gcc profiling
tracer "ftrace", this patch renames the config options from FTRACE to
FUNCTION_TRACER. The other two names that are offspring from FTRACE,
DYNAMIC_FTRACE and FTRACE_MCOUNT_RECORD, will stay the same.
This patch was generated mostly by script, and partially by hand.
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
The SET_PERSONALITY macro is always called with a second argument of 0.
Remove the ibcs argument and the various tests to set the PER_SVR4
personality.
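A hedged, representative before/after sketch (not any particular
architecture's header):

  /* before: the second argument selected the SVR4 personality */
  #define SET_PERSONALITY_OLD(ex, ibcs2) \
          set_personality((ibcs2) ? PER_SVR4 : PER_LINUX)

  /* after: callers always passed 0, so the argument and test go away */
  #define SET_PERSONALITY(ex) \
          set_personality(PER_LINUX)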
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* 'for-linus' of master.kernel.org:/home/rmk/linux-2.6-arm: (236 commits)
[ARM] 5300/1: fixup spitz reset during boot
[ARM] 5295/1: make ZONE_DMA optional
[ARM] 5239/1: Palm Zire 72 power management support
[ARM] 5298/1: Drop desc_handle_irq()
[ARM] 5297/1: [KS8695] Fix two compile-time warnings
[ARM] 5296/1: [KS8695] Replace macro's with trailing underscores.
[ARM] pxa: allow multi-machine PCMCIA builds
[ARM] pxa: add preliminary CPUFREQ support for PXA3xx
[ARM] pxa: add missing ACCR bit definitions to pxa3xx-regs.h
[ARM] pxa: rename cpu-pxa.c to cpufreq-pxa2xx.c
[ARM] pxa/zylonite: add support for USB OHCI
[ARM] ohci-pxa27x: use ioremap() and offset for register access
[ARM] ohci-pxa27x: introduce pxa27x_clear_otgph()
[ARM] ohci-pxa27x: use platform_get_{irq,resource} for the resource
[ARM] ohci-pxa27x: move OHCI controller specific registers into the driver
[ARM] ohci-pxa27x: introduce flags to avoid direct access to OHCI registers
[ARM] pxa: move I2S register and bit definitions into pxa2xx-i2s.c
[ARM] pxa: simplify DMA register definitions
[ARM] pxa: make additional DCSR bits valid for PXA3xx
[ARM] pxa: move i2c register and bit definitions into i2c-pxa.c
...
Fixed up conflicts in
arch/arm/mach-versatile/core.c
sound/soc/pxa/pxa2xx-ac97.c
sound/soc/pxa/pxa2xx-i2s.c
manually.
Most ARM machines don't need a special "DMA" memory zone, and
when configured out, the kernel becomes a bit smaller:
|    text    data     bss     dec    hex filename
| 3826182  102384  111700 4040266 3da64a vmlinux
| 3823593  101616  111700 4036909 3d992d vmlinux.nodmazone
This is because the system now has only one zone in total, the effect of
which is to optimize away many conditionals in the page allocation paths.
So let's configure this zone only on machines that need split zones.
Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Provide helpers for getting physical addresses or pfns from the
meminfo array, and use them. Move for_each_nodebank() to
asm/setup.h alongside the meminfo structure definition.
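A hedged sketch of the kind of helpers meant here (modelled on the
description above; treat the exact names as illustrative):

  /* Derive pfns and physical addresses from a membank entry instead of
   * open-coding the arithmetic at each call site. */
  #define bank_pfn_start(bank)    __phys_to_pfn((bank)->start)
  #define bank_pfn_end(bank)      __phys_to_pfn((bank)->start + (bank)->size)
  #define bank_pfn_size(bank)     ((bank)->size >> PAGE_SHIFT)
  #define bank_phys_start(bank)   ((bank)->start)
  #define bank_phys_end(bank)     ((bank)->start + (bank)->size)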
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Add support for detecting non-executable stack binaries, and adjust
permissions to prevent execution from data and stack areas. Also,
ensure that READ_IMPLIES_EXEC is enabled for older CPUs where that
is true, and for any executable-stack binary.
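A hedged, simplified sketch of that policy (the symbols for the stack flag
and architecture check stand in for whatever the port actually uses):

  /* Force READ_IMPLIES_EXEC where the hardware cannot mark pages
   * non-executable, or where the binary asks for an executable stack. */
  static int read_implies_exec_policy(int executable_stack, int cpu_arch)
  {
          if (cpu_arch < CPU_ARCH_ARMv6)          /* no XN page bit */
                  return 1;
          if (executable_stack == EXSTACK_ENABLE_X)
                  return 1;
          return 0;
  }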
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
As of the previous commit, MT_DEVICE_IXP2000 encodes to the same
PTE bit encoding as MT_DEVICE, so it's now redundant. Convert
MT_DEVICE_IXP2000 to use MT_DEVICE instead, and remove its aliases.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Provide L_PTE_MT_xxx definitions to describe the memory types that we
use in Linux/ARM. These definitions are carefully picked such that:
1. their LSBs match what is required for pre-ARMv6 CPUs.
2. they all have a unique encoding, including after modification
by build_mem_type_table() (the result being that some memory types
end up with more than one valid encoding.)
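A hedged illustration of the scheme (example values, not necessarily the
exact ones merged): the type lives in a small field of the Linux PTE, with
the low bits chosen so that pre-ARMv6 CPUs see their usual C and B bits.

  #define L_PTE_MT_UNCACHED       (0x00 << 2)     /* CB = 00 */
  #define L_PTE_MT_BUFFERABLE     (0x01 << 2)     /* CB = 01 */
  #define L_PTE_MT_WRITETHROUGH   (0x02 << 2)     /* CB = 10 */
  #define L_PTE_MT_WRITEBACK      (0x03 << 2)     /* CB = 11 */
  #define L_PTE_MT_MASK           (0x0f << 2)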
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
There's no point scattering this around the tree, the parsing
of the parameter might as well live beside the code which uses
it. That also means we can make vmalloc_reserve a static
variable.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
As per the dma_unmap_* calls, we don't touch the cache when a DMA
buffer transitions from device to CPU ownership. Presently, no
problems have been identified with speculative cache prefetching
which in itself is a new feature in later architectures. We may
have to revisit the DMA API later for these architectures anyway.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Validate the direction argument like x86 does. In addition,
validate the dma_unmap_* parameters against those passed to
dma_map_* when using the DMA bounce code.
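A hedged sketch of the kind of check meant here (modelled on x86's
valid_dma_direction(); the helper name and placement are illustrative):

  #include <linux/bug.h>
  #include <linux/dma-mapping.h>

  static inline void check_dma_direction(enum dma_data_direction dir)
  {
          BUG_ON(dir != DMA_TO_DEVICE &&
                 dir != DMA_FROM_DEVICE &&
                 dir != DMA_BIDIRECTIONAL);
  }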
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The dmabounce dma_sync_xxx() implementations have been broken for
quite some time; they all copy data between the DMA buffer and
the CPU visible buffer irrespective of the direction of the change
of ownership. (IOW, a DMA_FROM_DEVICE mapping copies data from the
DMA buffer to the CPU buffer during a call to dma_sync_single_for_device().)
Fix it by getting rid of sync_single(), moving the contents into
the recently created dmabounce_sync_for_xxx() functions and adjusting
appropriately.
This also makes it possible to properly support the DMA range sync
functions.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Delete ARM's own cnt32_to_63.h as the copy in include/linux/ should now be
used instead.
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
We can translate a struct page directly to a DMA address using
page_to_dma(). No need to use page_address() followed by
virt_to_dma().
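A hedged illustration of the simplification (the helper is made up for the
example):

  static dma_addr_t example_page_dma(struct device *dev, struct page *page)
  {
          /* before: return virt_to_dma(dev, page_address(page)); */
          return page_to_dma(dev, page);
  }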
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Rather than trying to (inaccurately) decode the cache type from the
registers each time we need to decide what type of cache we have,
use a bitmask initialized early during boot.
Since the setup is a one-off initialization, we can be a little more
clever and take account of the CPU architecture as well.
Note that we continue to achieve the compactness on optimised kernels
by forcing tests to always-false or always-true as appropriate, thereby
allowing the compiler to do build-time code elimination.
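A hedged sketch of the pattern (names and bit values are illustrative): a
bitmask set once during boot, with test macros that collapse to constants
when the configured CPU set allows only one answer.

  extern unsigned int cacheid;            /* initialised once at boot */

  #define CACHEID_VIVT                    (1 << 0)
  #define CACHEID_VIPT_NONALIASING        (1 << 1)

  #ifdef CONFIG_CPU_CACHE_VIVT            /* hypothetical single-cache build */
  #define cache_is_vivt()                 1       /* compile-time constant */
  #else
  #define cache_is_vivt()                 (cacheid & CACHEID_VIVT)
  #endif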
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
PCI_DMA_BUS_IS_PHYS was defined to be zero, which meant we ignored
the DMA mask for IDE and SCSI transfers. This is wrong - we have
no DMA translation hardware. We want to obey DMA masks so that the
block layer performs bouncing itself.
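In other words (a hedged one-line sketch of the effect described above):

  /* No DMA translation hardware: declare the PCI DMA bus "physical" so
   * the block layer honours device DMA masks and bounces buffers itself. */
  #define PCI_DMA_BUS_IS_PHYS     (1)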
Reported-by: Mikael Pettersson <mikpe@it.uu.se>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch provides an ARM implementation of ioremap_wc().
We use different page table attributes depending on which CPU we
are running on:
- Non-XScale ARMv5 and earlier systems: The ARMv5 ARM documents four
possible mapping types (CB=00/01/10/11). We can't use any of the
cached memory types (CB=10/11), since that breaks coherency with
peripheral devices. Both CB=00 and CB=01 are suitable for _wc, and
CB=01 (Uncached/Buffered) allows the hardware more freedom than
CB=00, so we'll use that.
(The ARMv5 ARM seems to suggest that CB=01 is allowed to delay stores
but isn't allowed to merge them, but there is no other mapping type
we can use that allows the hardware to delay and merge stores, so
we'll go with CB=01.)
- XScale v1/v2 (ARMv5): same as the ARMv5 case above, with the slight
difference that on these platforms, CB=01 actually _does_ allow
merging stores. (If you want noncoalescing bufferable behavior
on Xscale v1/v2, you need to use XCB=101.)
- Xscale v3 (ARMv5) and ARMv6+: on these systems, we use TEXCB=00100
mappings (Inner/Outer Uncacheable in xsc3 parlance, Uncached Normal
in ARMv6 parlance).
The ARMv6 ARM explicitly says that any accesses to Normal memory can
be merged, which makes Normal memory more suitable for _wc mappings
than Device or Strongly Ordered memory, as the latter two mapping
types are guaranteed to maintain transaction number, size and order.
We use the Uncached variety of Normal mappings for the same reason
that we can't use C=1 mappings on ARMv5.
The xsc3 Architecture Specification documents TEXCB=00100 as being
Uncacheable and allowing coalescing of writes, which is also just
what we need.
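A hedged usage sketch (the mapping target is illustrative, not part of the
patch):

  #include <linux/io.h>

  /* Map a device frame buffer write-combined: the CPU may delay and
   * merge stores to this mapping. */
  static void __iomem *map_fb_wc(unsigned long phys, size_t len)
  {
          return ioremap_wc(phys, len);
  }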
Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
pc_pointer() was a function to mask the PC for 26-bit ARMs, which
we no longer support. Remove it.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This reverts commit ae82cbfc8b. It
needs the new byteorder headers to be exported to userspace, and
they aren't yet -- and probably shouldn't be, at this point in the
2.6.27 release cycle (or ever, for that matter).
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>