Merge tag 'dma-mapping-5.15' of git://git.infradead.org/users/hch/dma-mapping
Pull dma-mapping updates from Christoph Hellwig:
- fix debugfs initialization order (Anthony Iliopoulos)
- use memory_intersects() directly (Kefeng Wang)
- allow returning specific errors from ->map_sg (Logan Gunthorpe,
Martin Oliveira)
- turn the dma_map_sg return value into an unsigned int (me)
- provide a common global coherent pool implementation (me)
* tag 'dma-mapping-5.15' of git://git.infradead.org/users/hch/dma-mapping: (31 commits)
hexagon: use the generic global coherent pool
dma-mapping: make the global coherent pool conditional
dma-mapping: add a dma_init_global_coherent helper
dma-mapping: simplify dma_init_coherent_memory
dma-mapping: allow using the global coherent pool for !ARM
ARM/nommu: use the generic dma-direct code for non-coherent devices
dma-direct: add support for dma_coherent_default_memory
dma-mapping: return an unsigned int from dma_map_sg{,_attrs}
dma-mapping: disallow .map_sg operations from returning zero on error
dma-mapping: return error code from dma_dummy_map_sg()
x86/amd_gart: don't set failed sg dma_address to DMA_MAPPING_ERROR
x86/amd_gart: return error code from gart_map_sg()
xen: swiotlb: return error code from xen_swiotlb_map_sg()
parisc: return error code from .map_sg() ops
sparc/iommu: don't set failed sg dma_address to DMA_MAPPING_ERROR
sparc/iommu: return error codes from .map_sg() ops
s390/pci: don't set failed sg dma_address to DMA_MAPPING_ERROR
s390/pci: return error code from s390_dma_map_sg()
powerpc/iommu: don't set failed sg dma_address to DMA_MAPPING_ERROR
powerpc/iommu: return error code from .map_sg() ops
...
Select the right options to just use the generic dma-direct code
instead of reimplementing it.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: Dillon Min <dillon.minfei@gmail.com>
This fixes a Keystone 2 regression discovered as a side effect of
defining and passing the physical start/end sections of the kernel
to the MMU remapping code.
As the Keystone applies an offset to all physical addresses,
including those identified and patched by phys2virt, we fail to
account for this offset in the kernel_sec_start and kernel_sec_end
variables.
Further, these offsets can extend into the 64bit range on LPAE
systems such as the Keystone 2.
Fix it like this:
- Extend kernel_sec_start and kernel_sec_end to be 64bit
- Add the offset also to kernel_sec_start and kernel_sec_end
As passing kernel_sec_start and kernel_sec_end as 64bit invariably
incurs BE8 endianness issues, I have attempted to dry-code around
these.
Tested on the Vexpress QEMU model both with and without LPAE
enabled.
Fixes: 6e121df14c ("ARM: 9090/1: Map the lowmem and kernel separately")
Reported-by: Nishanth Menon <nmenon@kernel.org>
Suggested-by: Russell King <rmk+kernel@armlinux.org.uk>
Tested-by: Grygorii Strashko <grygorii.strashko@ti.com>
Tested-by: Nishanth Menon <nmenon@kernel.org>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Setting the ->dma_address to DMA_MAPPING_ERROR is not part of the
->map_sg calling convention, so remove it.
Link: https://lore.kernel.org/linux-mips/20210716063241.GC13345@lst.de/
Suggested-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Signed-off-by: Christoph Hellwig <hch@lst.de>
The .map_sg() op now expects an error code instead of zero on failure.
In the case of a DMA_MAPPING_ERROR, -EIO is returned. Otherwise,
-ENOMEM or -EINVAL is returned depending on the error from
__map_sg_chunk().
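For illustration, a minimal sketch of the new convention; the helper name
__map_sg_chunk() comes from the description above, everything else
(function name, signature) is illustrative rather than the actual arm
dma-mapping code:
        static int foo_map_sg(struct device *dev, struct scatterlist *sg,
                              int nents, enum dma_data_direction dir,
                              unsigned long attrs)
        {
                struct scatterlist *s;
                int i, ret;

                for_each_sg(sg, s, nents, i) {
                        ret = __map_sg_chunk(dev, s, dir, attrs);
                        if (ret)
                                return ret;     /* -ENOMEM or -EINVAL */
                        if (s->dma_address == DMA_MAPPING_ERROR)
                                return -EIO;    /* mapping error */
                }
                return nents;   /* success: number of mapped entries */
        }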
Signed-off-by: Martin Oliveira <martin.oliveira@eideticom.com>
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Merge tag 'fixes-2021-07-09' of git://git.kernel.org/pub/scm/linux/kernel/git/rppt/memblock
Pull memblock fix from Mike Rapoport:
"This is a fix for the rework of ARM's pfn_valid() implementation
merged during this merge window.
Don't abuse pfn_valid() to check if pfn is in RAM
The semantics of pfn_valid() is to check presence of the memory map
for a PFN and not whether a PFN is in RAM. The memory map may be
present for a hole in the physical memory and if such hole corresponds
to an MMIO range, __arm_ioremap_pfn_caller() will produce a WARN() and
fail.
Use memblock_is_map_memory() instead of pfn_valid() to check if a PFN
is in RAM or not"
* tag 'fixes-2021-07-09' of git://git.kernel.org/pub/scm/linux/kernel/git/rppt/memblock:
arm: ioremap: don't abuse pfn_valid() to check if pfn is in RAM
Merge tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm
Pull ARM development updates from Russell King:
- Make it clear __swp_entry_to_pte() uses PTE_TYPE_FAULT
- Updates for setting vmalloc size via command line to resolve an issue
with the 8MiB hole not properly being accounted for, and clean up the
code.
- ftrace support for module PLTs
- Spelling fixes
- kbuild updates for removing generated files and pattern rules for
generating files
- Clang/llvm updates
- Change the way the kernel is mapped, placing it in vmalloc space
instead.
- Remove arm_pm_restart from arm and aarch64.
* tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm: (29 commits)
ARM: 9098/1: ftrace: MODULE_PLT: Fix build problem without DYNAMIC_FTRACE
ARM: 9097/1: mmu: Declare section start/end correctly
ARM: 9096/1: Remove arm_pm_restart()
ARM: 9095/1: ARM64: Remove arm_pm_restart()
ARM: 9094/1: Register with kernel restart handler
ARM: 9093/1: drivers: firmware: psci: Register with kernel restart handler
ARM: 9092/1: xen: Register with kernel restart handler
ARM: 9091/1: Revert "mm: qsd8x50: Fix incorrect permission faults"
ARM: 9090/1: Map the lowmem and kernel separately
ARM: 9089/1: Define kernel physical section start and end
ARM: 9088/1: Split KERNEL_OFFSET from PAGE_OFFSET
ARM: 9087/1: kprobes: test-thumb: fix for LLVM_IAS=1
ARM: 9086/1: syscalls: use pattern rules to generate syscall headers
ARM: 9085/1: remove unneeded abi parameter to syscallnr.sh
ARM: 9084/1: simplify the build rule of mach-types.h
ARM: 9083/1: uncompress: atags_to_fdt: Spelling s/REturn/Return/
ARM: 9082/1: [v2] mark prepare_page_table as __init
ARM: 9079/1: ftrace: Add MODULE_PLTS support
ARM: 9078/1: Add warn suppress parameter to arm_gen_branch_link()
ARM: 9077/1: PLT: Move struct plt_entries definition to header
...
Merge tag 'memblock-v5.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rppt/memblock
Pull memblock updates from Mike Rapoport:
"Fix arm crashes caused by holes in the memory map.
The coordination between freeing of unused memory map, pfn_valid() and
core mm assumptions about validity of the memory map in various ranges
was not designed for complex layouts of the physical memory with a lot
of holes all over the place.
Kefeng Wang reported crashes in move_freepages() on a system with the
following memory layout [1]:
node 0: [mem 0x0000000080a00000-0x00000000855fffff]
node 0: [mem 0x0000000086a00000-0x0000000087dfffff]
node 0: [mem 0x000000008bd00000-0x000000008c4fffff]
node 0: [mem 0x000000008e300000-0x000000008ecfffff]
node 0: [mem 0x0000000090d00000-0x00000000bfffffff]
node 0: [mem 0x00000000cc000000-0x00000000dc9fffff]
node 0: [mem 0x00000000de700000-0x00000000de9fffff]
node 0: [mem 0x00000000e0800000-0x00000000e0bfffff]
node 0: [mem 0x00000000f4b00000-0x00000000f6ffffff]
node 0: [mem 0x00000000fda00000-0x00000000ffffefff]
These crashes can be mitigated by enabling CONFIG_HOLES_IN_ZONE on ARM
and essentially turning pfn_valid_within() to pfn_valid() instead of
having it hardwired to 1 on that architecture, but this would require
to keep CONFIG_HOLES_IN_ZONE solely for this purpose.
A cleaner approach is to update ARM's implementation of pfn_valid() to
take into account the rounding of the freed memory map to pageblock
boundaries and make sure it returns true for PFNs that have memory map
entries even if there is no physical memory backing those PFNs"
Link: https://lore.kernel.org/lkml/2a1592ad-bc9d-4664-fd19-f7448a37edc0@huawei.com [1]
* tag 'memblock-v5.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rppt/memblock:
arm: extend pfn_valid to take into account freed memory map alignment
memblock: ensure there is no overflow in memblock_overlaps_region()
memblock: align freed memory map on pageblock boundaries with SPARSEMEM
memblock: free_unused_memmap: use pageblock units instead of MAX_ORDER
When the unused memory map is freed, the preserved part of the memory map is
extended to match pageblock boundaries because lots of core mm
functionality relies on homogeneity of the memory map within pageblock
boundaries.
Since pfn_valid() is used to check whether there is a valid memory map
entry for a PFN, make it return true also for PFNs that have memory map
entries even if there is no actual memory populated there.
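A sketch of the resulting ARM pfn_valid(), following the description above
(close to, but not guaranteed to match, the committed version):
        int pfn_valid(unsigned long pfn)
        {
                phys_addr_t addr = __pfn_to_phys(pfn);
                unsigned long pageblock_size = PAGE_SIZE * pageblock_nr_pages;

                if (__phys_to_pfn(addr) != pfn)
                        return 0;

                /*
                 * The freed memory map is rounded to pageblock boundaries,
                 * so a PFN within one pageblock of present memory still has
                 * a memory map entry even if the memory itself is absent.
                 */
                return memblock_overlaps_region(&memblock.memory,
                                                ALIGN_DOWN(addr, pageblock_size),
                                                pageblock_size);
        }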
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Tested-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Tested-by: Tony Lindgren <tony@atomide.com>
1. These tlb flush functions have been taking a vma instead of an mm for a
long time now, but some comments still describe the parameter as mm.
2. The actual struct we use is vm_area_struct, not vma_struct.
3. Remove the unused flush_kern_tlb_page.
Link: https://lkml.kernel.org/r/87k0oaq311.wl-chenli@uniontech.com
Signed-off-by: Chen Li <chenli@uniontech.com>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Chris Zankel <chris@zankel.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This reverts commit e220ba6022.
VERIFY_PERMISSION_FAULT was introduced back in 2009 but nothing
uses it, so revert it and clean up the now-unused comment.
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Using our knowledge of where the physical kernel sections start
and end we can split mapping of lowmem and kernel apart.
This is helpful when you want to place the kernel independently
from lowmem and not be limited to putting it into lowmem only,
but also into places such as the VMALLOC area.
We extensively rewrite the lowmem mapping code to account for
all cases where the kernel image overlaps with the lowmem in
different ways. This is helpful to handle situations which
occur when the kernel is loaded in different places and makes
it possible to place the kernel in a more random manner
which is done with e.g. KASLR.
We sprinkle some comments with illustrations and pr_debug()
over it so it is also very evident to readers what is happening.
We now use kernel_sec_start and kernel_sec_end instead
of relying on __pa() (virt_to_phys) to provide this. This
is helpful if we want to resolve physical-to-virtual and
virtual-to-physical mappings at runtime rather than at
compile time, especially if we are not using phys-to-virt
patching.
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
In some configurations when building with gcc-11, prepare_page_table
does not get inlined, which causes a build-time warning about a section
mismatch:
WARNING: modpost: vmlinux.o(.text.unlikely+0xce8): Section mismatch in reference from the function prepare_page_table() to the (unknown reference) .init.data:(unknown)
The function prepare_page_table() references
the (unknown reference) __initdata (unknown).
This is often because prepare_page_table lacks a __initdata
annotation or the annotation of (unknown) is wrong.
Mark the function as __init to avoid the warning regardless of the
inlining, and remove the 'inline' keyword. The compiler is
free to ignore the 'inline' here and it doesn't result in better
object code or more readable source.
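Schematically (bodies elided), the change is just the annotation:
        /* before: the compiler may emit an out-of-line copy in .text.unlikely */
        static inline void prepare_page_table(void) { /* ... */ }

        /* after: explicitly placed in .init.text, next to the __initdata it uses */
        static void __init prepare_page_table(void) { /* ... */ }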
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Rather than using "m" (which is the unit of metres, or milli), and
"MB" in the printk statements, use MiB to make it clear that we are
talking about the power-of-2 megabytes, aka mebibytes.
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Yanfei Xu <yanfei.xu@windriver.com>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Make the default vmalloc size clearer by using a more natural
multiplication by SZ_1M rather than a shift left by 20 bits.
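Illustrative only; the macro name and the 240 MiB default stand in for the
real definitions in arch/arm/mm/mmu.c:
        /* was: #define VMALLOC_DEFAULT_SIZE  (240 << 20) */
        #define VMALLOC_DEFAULT_SIZE    (240 * SZ_1M)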
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Yanfei Xu <yanfei.xu@windriver.com>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Rather than storing the start of vmalloc space, store the size, and
move the calculation into adjust_lowmem_limit(). We now have one single
place where this calculation takes place.
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Yanfei Xu <yanfei.xu@windriver.com>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Change the current vmalloc_min, which is supposed to be the lowest
address of vmalloc space including the VMALLOC_OFFSET, to vmalloc_start
which does not include VMALLOC_OFFSET.
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Yanfei Xu <yanfei.xu@windriver.com>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
We calculate the maximum size of the vmalloc space twice in
early_vmalloc(). Use a temporary variable to hold this value.
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Yanfei Xu <yanfei.xu@windriver.com>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
vmalloc_min is currently a void pointer, but everywhere it's used
there is a cast - either to a void pointer when setting it or back to
an integer type when being used. Eliminate these casts by changing
its type to unsigned long.
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Yanfei Xu <yanfei.xu@windriver.com>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Pull swiotlb updates from Konrad Rzeszutek Wilk:
"Christoph Hellwig has taken a cleaver and trimmed off the not-needed
code and nicely folded duplicate code in the generic framework.
This lays the groundwork for more work to add extra DMA-backend-ish in
the future. Along with that some bug-fixes to make this a nice working
package"
* 'stable/for-linus-5.13' of git://git.kernel.org/pub/scm/linux/kernel/git/konrad/swiotlb:
swiotlb: don't override user specified size in swiotlb_adjust_size
swiotlb: Fix the type of index
swiotlb: Make SWIOTLB_NO_FORCE perform no allocation
ARM: Qualify enabling of swiotlb_init()
swiotlb: remove swiotlb_nr_tbl
swiotlb: dynamically allocate io_tlb_default_mem
swiotlb: move global variables into a new io_tlb_mem structure
xen-swiotlb: remove the unused size argument from xen_swiotlb_fixup
xen-swiotlb: split xen_swiotlb_init
swiotlb: lift the double initialization protection from xen-swiotlb
xen-swiotlb: remove xen_io_tlb_start and xen_io_tlb_nslabs
xen-swiotlb: remove xen_set_nslabs
xen-swiotlb: use io_tlb_end in xen_swiotlb_dma_supported
xen-swiotlb: use is_swiotlb_buffer in is_xen_swiotlb_buffer
swiotlb: split swiotlb_tbl_sync_single
swiotlb: move orig addr and size validation into swiotlb_bounce
swiotlb: remove the alloc_size parameter to swiotlb_tbl_unmap_single
powerpc/svm: stop using io_tlb_start
mem_init_print_info() is called in mem_init() on each architecture, and
is always passed a NULL argument, so drop the argument and move the call
into mm_init().
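A schematic of the interface change (the declaration lives in
include/linux/mm.h; treat the exact location as an assumption):
        /* before: every architecture called this from its mem_init(), passing NULL */
        void mem_init_print_info(const char *str);

        /* after: no argument, and the single call site is mm_init() in init/main.c */
        void mem_init_print_info(void);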
Link: https://lkml.kernel.org/r/20210317015210.33641-1-wangkefeng.wang@huawei.com
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Acked-by: Dave Hansen <dave.hansen@linux.intel.com> [x86]
Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr> [powerpc]
Acked-by: David Hildenbrand <david@redhat.com>
Tested-by: Anatoly Pugachev <matorola@gmail.com> [sparc64]
Acked-by: Russell King <rmk+kernel@armlinux.org.uk> [arm]
Acked-by: Mike Rapoport <rppt@linux.ibm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Guo Ren <guoren@kernel.org>
Cc: Yoshinori Sato <ysato@users.osdn.me>
Cc: Huacai Chen <chenhuacai@kernel.org>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: "Peter Zijlstra" <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
page_mapping_file() is only used by some architectures, and then it
is usually only used in one place. Make it a static inline function
so other architectures don't have to carry this dead code.
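Roughly the static inline this becomes (a sketch of the pagemap.h
definition, from memory):
        static inline struct address_space *page_mapping_file(struct page *page)
        {
                /* swap-cache pages have no file mapping */
                if (unlikely(PageSwapCache(page)))
                        return NULL;
                return page_mapping(page);
        }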
Link: https://lkml.kernel.org/r/20210317123011.350118-1-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Acked-by: Mike Rapoport <rppt@linux.ibm.com>
Cc: Huang Ying <ying.huang@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Use DEFINE_SHOW_ATTRIBUTE macro to simplify the code.
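A hedged example of the pattern (the foo_* names are made up): a seq_file
show callback plus DEFINE_SHOW_ATTRIBUTE() replaces the hand-rolled
open/read/release boilerplate and generates foo_fops for
debugfs_create_file():
        #include <linux/debugfs.h>
        #include <linux/seq_file.h>

        static int foo_show(struct seq_file *m, void *unused)
        {
                seq_puts(m, "hello\n");
                return 0;
        }
        DEFINE_SHOW_ATTRIBUTE(foo);     /* generates foo_open() and foo_fops */

        /* e.g.: debugfs_create_file("foo", 0400, parent_dir, NULL, &foo_fops); */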
Signed-off-by: Jisheng Zhang <Jisheng.Zhang@synaptics.com>
Reviewed-by: Steven Price <steven.price@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
They are not needed after booting, so mark them as __init to move them
to the .init section.
Signed-off-by: Jisheng Zhang <Jisheng.Zhang@synaptics.com>
Reviewed-by: Steven Price <steven.price@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
After commit 5a735583b7 ("arm/ftrace: Use __patch_text()"), the last
and only user of these functions has gone, so remove them.
Signed-off-by: Jisheng Zhang <Jisheng.Zhang@synaptics.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
for_each_mem_range() uses a loop variable, yet looking into the code it
is not just an iteration counter but a more complex entity which encodes
information about the memblock. Thus the condition i == 0 looks fragile.
Indeed, it broke boot of R-class platforms since the i == 0 path was
never taken (because i was set to 1). Fix that by restoring the original
flag check.
Fixes: b10d6bca87 ("arch, drivers: replace for_each_membock() with for_each_mem_range()")
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Acked-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
The debugging code for kmap_local() doubles the number of per-CPU fixmap
slots allocated for kmap_local(), in order to use half of them as guard
regions. This causes the fixmap region to grow downwards beyond the start
of its reserved window if the supported number of CPUs is large, and collide
with the newly added virtual DT mapping right below it, which is obviously
not good.
One manifestation of this is EFI boot on a kernel built with NR_CPUS=32
and CONFIG_DEBUG_KMAP_LOCAL=y, which may pass the FDT in highmem, resulting
in block entries below the fixmap region that the fixmap code misidentifies
as fixmap table entries, and subsequently tries to dereference using a
phys-to-virt translation that is only valid for lowmem. This results in a
cryptic splat such as the one below.
ftrace: allocating 45548 entries in 89 pages
8<--- cut here ---
Unable to handle kernel paging request at virtual address fc6006f0
pgd = (ptrval)
[fc6006f0] *pgd=80000040207003, *pmd=00000000
Internal error: Oops: a06 [#1] SMP ARM
Modules linked in:
CPU: 0 PID: 0 Comm: swapper Not tainted 5.11.0+ #382
Hardware name: Generic DT based system
PC is at cpu_ca15_set_pte_ext+0x24/0x30
LR is at __set_fixmap+0xe4/0x118
pc : [<c041ac9c>] lr : [<c04189d8>] psr: 400000d3
sp : c1601ed8 ip : 00400000 fp : 00800000
r10: 0000071f r9 : 00421000 r8 : 00c00000
r7 : 00c00000 r6 : 0000071f r5 : ffade000 r4 : 4040171f
r3 : 00c00000 r2 : 4040171f r1 : c041ac78 r0 : fc6006f0
Flags: nZcv IRQs off FIQs off Mode SVC_32 ISA ARM Segment none
Control: 30c5387d Table: 40203000 DAC: 00000001
Process swapper (pid: 0, stack limit = 0x(ptrval))
So let's limit CONFIG_NR_CPUS to 16 when CONFIG_DEBUG_KMAP_LOCAL=y. Also,
fix the BUILD_BUG_ON() check that was supposed to catch this, by checking
whether the region grows below the start address rather than above the end
address.
Fixes: 2a15ba82fa ("ARM: highmem: Switch to generic kmap atomic")
Reported-by: Peter Robinson <pbrobinson@gmail.com>
Tested-by: Peter Robinson <pbrobinson@gmail.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
We do not need a SWIOTLB unless we have DRAM that is addressable beyond
the arm_dma_limit. Compare max_pfn with arm_dma_pfn_limit to determine
whether we do need a SWIOTLB to be initialized.
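A sketch of the qualified initialization in arch/arm/mm/init.c; the exact
guard may differ from the committed fix:
        #ifdef CONFIG_ARM_LPAE
                if (swiotlb_force == SWIOTLB_FORCE ||
                    max_pfn > arm_dma_pfn_limit)
                        swiotlb_init(1);
                else
                        swiotlb_force = SWIOTLB_NO_FORCE;
        #endif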
Fixes: ad3c7b18c5 ("arm: use swiotlb for bounce buffering on LPAE configs")
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Now that we have reduced the number of registers that we need to
preserve when calling v7_invalidate_l1 from the boot code, we can use
scratch registers to preserve the remaining ones, and get rid of the
mini stack entirely. This works around any issues regarding cache
behavior in relation to the uncached accesses to this memory, which is
hard to get right in the general case (i.e., both bare metal and under
virtualization).
While at it, switch v7_invalidate_l1 to using ip as a scratch register
instead of r4. This makes the function AAPCS compliant, and removes the
need to stash r4 in ip across the call.
Acked-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
The cache invalidation code in v7_invalidate_l1 can be tweaked to
re-read the associativity from CCSIDR, and keep the way identifier
component in a single register that is assigned in the outer loop. This
way, we need 2 registers less.
Given that the number of sets is typically much larger than the
associativity, rearrange the code so that the outer loop has the smaller
number of iterations, ensuring that the re-read of CCSIDR only occurs a
handful of times in practice.
Fix the whitespace while at it, and update the comment to indicate that
this code is no longer a clone of anything else.
Acked-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
A write to CSSELR needs to complete before its results can be observed
via CCSIDR. So add an ISB to ensure that this is the case.
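The ordering requirement, written as C inline asm for illustration (the
actual change is in the v7 cache-maintenance assembly):
        u32 csselr = 0;         /* cache level/type selector value */
        u32 ccsidr;

        asm volatile("mcr p15, 2, %0, c0, c0, 0" : : "r" (csselr)); /* write CSSELR */
        isb();          /* complete the write before CCSIDR is read */
        asm volatile("mrc p15, 1, %0, c0, c0, 0" : "=r" (ccsidr));  /* read CCSIDR */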
Acked-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Merge tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm
Pull ARM updates from Russell King:
- Generalise byte swapping assembly
- Update debug addresses for STI
- Validate start of physical memory with DTB
- Do not clear SCTLR.nTLSMD in decompressor
- amba/locomo/sa1111 devices remove method return type is void
- address markers for KASAN in page table dump
* tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm:
ARM: 9065/1: OABI compat: fix build when EPOLL is not enabled
ARM: 9055/1: mailbox: arm_mhuv2: make remove callback return void
amba: Make use of bus_type functions
amba: Make the remove callback return void
vfio: platform: simplify device removal
amba: reorder functions
amba: Fix resource leak for drivers without .remove
ARM: 9054/1: arch/arm/mm/mmu.c: Remove duplicate header
ARM: 9053/1: arm/mm/ptdump:Add address markers for KASAN regions
ARM: 9051/1: vdso: remove unneded extra-y addition
ARM: 9050/1: Kconfig: Select ARCH_HAVE_NMI_SAFE_CMPXCHG where possible
ARM: 9049/1: locomo: make locomo bus's remove callback return void
ARM: 9048/1: sa1111: make sa1111 bus's remove callback return void
ARM: 9047/1: smp: remove unused variable
ARM: 9046/1: decompressor: Do not clear SCTLR.nTLSMD for ARMv7+ cores
ARM: 9045/1: uncompress: Validate start of physical memory against passed DTB
ARM: 9042/1: debug: no uncompress debugging while semihosting
ARM: 9041/1: sti LL_UART: add STiH418 SBC UART0 support
ARM: 9040/1: use DEBUG_UART_PHYS and DEBUG_UART_VIRT for sti LL_UART
ARM: 9039/1: assembler: generalize byte swapping macro into rev_l
Remove asm/fixmap.h which is included more than once.
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Hailong Liu <liu.hailong6@zte.com.cn>
Signed-off-by: Hailong Liu <carver4lio@163.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
ARM has recently gained KASAN support, so it's time to add the KASAN
regions to PTDUMP on ARM.
This patch has been tested with QEMU + vexpress-a15, both with and
without CONFIG_ARM_LPAE.
The result after patching looks like this:
---[ Kasan shadow start ]---
0x6ee00000-0x7af00000 193M RW NX SHD MEM/CACHED/WBWA
0x7b000000-0x7f000000 64M ro NX SHD MEM/CACHED/WBWA
---[ Kasan shadow end ]---
---[ Modules ]---
---[ Kernel Mapping ]---
......
---[ vmalloc() Area ]---
......
---[ vmalloc() End ]---
---[ Fixmap Area ]---
---[ Vectors ]---
......
---[ Vectors End ]---
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Hailong liu <liu.hailong6@zte.com.cn>
Signed-off-by: Hailong liu <carver4lio@163.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
I didn't touch this code since it served as a platform to introduce
ARMv7-M support to Linux. The only known machine that runs Linux has only
4 MiB of RAM (that originally only exists to hold the display's framebuffer).
There are no known users and no further use foreseeable, so drop the
code.
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Link: https://lore.kernel.org/r/20210115155130.185010-2-u.kleine-koenig@pengutronix.de'
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Merge tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux
Pull ARM updates from Russell King:
- Rework phys/virt translation
- Add KASan support
- Move DT out of linear map region
- Use more PC-relative addressing in assembly
- Remove FP emulation handling while in kernel mode
- Link with '-z norelro'
- remove old check for GCC <= 4.2 in ARM unwinder code
- disable big endian if using clang's linker
* tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm: (46 commits)
ARM: 9027/1: head.S: explicitly map DT even if it lives in the first physical section
ARM: 9038/1: Link with '-z norelro'
ARM: 9037/1: uncompress: Add OF_DT_MAGIC macro
ARM: 9036/1: uncompress: Fix dbgadtb size parameter name
ARM: 9035/1: uncompress: Add be32tocpu macro
ARM: 9033/1: arm/smp: Drop the macro S(x,s)
ARM: 9032/1: arm/mm: Convert PUD level pgtable helper macros into functions
ARM: 9031/1: hyp-stub: remove unused .L__boot_cpu_mode_offset symbol
ARM: 9044/1: vfp: use undef hook for VFP support detection
ARM: 9034/1: __div64_32(): straighten up inline asm constraints
ARM: 9030/1: entry: omit FP emulation for UND exceptions taken in kernel mode
ARM: 9029/1: Make iwmmxt.S support Clang's integrated assembler
ARM: 9028/1: disable KASAN in call stack capturing routines
ARM: 9026/1: unwind: remove old check for GCC <= 4.2
ARM: 9025/1: Kconfig: CPU_BIG_ENDIAN depends on !LD_IS_LLD
ARM: 9024/1: Drop useless cast of "u64" to "long long"
ARM: 9023/1: Spelling s/mmeory/memory/
ARM: 9022/1: Change arch/arm/lib/mem*.S to use WEAK instead of .weak
ARM: kvm: replace open coded VA->PA calculations with adr_l call
ARM: head.S: use PC relative insn sequence to calculate PHYS_OFFSET
...
We have a handful of new kernel features for 5.11:
* Support for the contiguous memory allocator.
* Support for IRQ Time Accounting
* Support for stack tracing
* Support for strict /dev/mem
* Support for kernel section protection
I'm being a bit conservative on the cutoff for this round due to the
timing, so this is all the new development I'm going to take for this
cycle (even if some of it probably normally would have been OK). There
are, however, some fixes on the list that I will likely be sending along
either later this week or early next week.
There is one issue in here: one of my test configurations
(PREEMPT{,_DEBUG}=y) fails to boot on QEMU 5.0.0 (from April) as of the
.text.init alignment patch. With any luck we'll sort out the issue, but
given how many bugs get fixed all over the place and how unrelated those
features seem my guess is that we're just running into something that's
been lurking for a while and has already been fixed in the newer QEMU
(though I wouldn't be surprised if it's one of these implicit
assumptions we have in the boot flow). If it was hardware I'd be
strongly inclined to look more closely, but given that users can upgrade
their simulators I'm less worried about it.
There are two merge conflicts, both in build files. They're both a bit
clunky: arch/riscv/Kconfig is out of order (I have a script that's
supposed to keep them in order, I'll fix it) and lib/Makefile is out of
order (though GENERIC_LIB here doesn't mean quite what it does above).
Merge tag 'riscv-for-linus-5.11-mw0' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux
Pull RISC-V updates from Palmer Dabbelt:
"We have a handful of new kernel features for 5.11:
- Support for the contiguous memory allocator.
- Support for IRQ Time Accounting
- Support for stack tracing
- Support for strict /dev/mem
- Support for kernel section protection
I'm being a bit conservative on the cutoff for this round due to the
timing, so this is all the new development I'm going to take for this
cycle (even if some of it probably normally would have been OK). There
are, however, some fixes on the list that I will likely be sending
along either later this week or early next week.
There is one issue in here: one of my test configurations
(PREEMPT{,_DEBUG}=y) fails to boot on QEMU 5.0.0 (from April) as of
the .text.init alignment patch.
With any luck we'll sort out the issue, but given how many bugs get
fixed all over the place and how unrelated those features seem my
guess is that we're just running into something that's been lurking
for a while and has already been fixed in the newer QEMU (though I
wouldn't be surprised if it's one of these implicit assumptions we
have in the boot flow). If it was hardware I'd be strongly inclined to
look more closely, but given that users can upgrade their simulators
I'm less worried about it"
* tag 'riscv-for-linus-5.11-mw0' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux:
arm64: Use the generic devmem_is_allowed()
arm: Use the generic devmem_is_allowed()
RISC-V: Use the new generic devmem_is_allowed()
lib: Add a generic version of devmem_is_allowed()
riscv: Fixed kernel test robot warning
riscv: kernel: Drop unused clean rule
riscv: provide memmove implementation
RISC-V: Move dynamic relocation section under __init
RISC-V: Protect all kernel sections including init early
RISC-V: Align the .init.text section
RISC-V: Initialize SBI early
riscv: Enable ARCH_STACKWALK
riscv: Make stack walk callback consistent with generic code
riscv: Cleanup stacktrace
riscv: Add HAVE_IRQ_TIME_ACCOUNTING
riscv: Enable CMA support
riscv: Ignore Image.* and loader.bin
riscv: Clean up boot dir
riscv: Fix compressed Image formats build
RISC-V: Add kernel image sections to the resource tree
Merge misc updates from Andrew Morton:
- a few random little subsystems
- almost all of the MM patches which are staged ahead of linux-next
material. I'll trickle to post-linux-next work in as the dependents
get merged up.
Subsystems affected by this patch series: kthread, kbuild, ide, ntfs,
ocfs2, arch, and mm (slab-generic, slab, slub, dax, debug, pagecache,
gup, swap, shmem, memcg, pagemap, mremap, hmm, vmalloc, documentation,
kasan, pagealloc, memory-failure, hugetlb, vmscan, z3fold, compaction,
oom-kill, migration, cma, page-poison, userfaultfd, zswap, zsmalloc,
uaccess, zram, and cleanups).
* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (200 commits)
mm: cleanup kstrto*() usage
mm: fix fall-through warnings for Clang
mm: slub: convert sysfs sprintf family to sysfs_emit/sysfs_emit_at
mm: shmem: convert shmem_enabled_show to use sysfs_emit_at
mm:backing-dev: use sysfs_emit in macro defining functions
mm: huge_memory: convert remaining use of sprintf to sysfs_emit and neatening
mm: use sysfs_emit for struct kobject * uses
mm: fix kernel-doc markups
zram: break the strict dependency from lzo
zram: add stat to gather incompressible pages since zram set up
zram: support page writeback
mm/process_vm_access: remove redundant initialization of iov_r
mm/zsmalloc.c: rework the list_add code in insert_zspage()
mm/zswap: move to use crypto_acomp API for hardware acceleration
mm/zswap: fix passing zero to 'PTR_ERR' warning
mm/zswap: make struct kernel_param_ops definitions const
userfaultfd/selftests: hint the test runner on required privilege
userfaultfd/selftests: fix retval check for userfaultfd_open()
userfaultfd/selftests: always dump something in modes
userfaultfd: selftests: make __{s,u}64 format specifiers portable
...
ARM and ARM64 free unused parts of the memory map just before the
initialization of the page allocator. To allow holes in the memory map both
architectures overload pfn_valid() and define HAVE_ARCH_PFN_VALID.
Allowing holes in the memory map for FLATMEM may be useful for small
machines, such as ARC and m68k and will enable those architectures to cease
using DISCONTIGMEM and still support more than one memory bank.
Move the functions that free unused memory map to generic mm and enable
them in case HAVE_ARCH_PFN_VALID=y.
Link: https://lkml.kernel.org/r/20201101170454.9567-10-rppt@kernel.org
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com> [arm64]
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Greg Ungerer <gerg@linux-m68k.org>
Cc: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Meelis Roos <mroos@linux.ee>
Cc: Michael Schmitz <schmitzmic@gmail.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Merge tag 'core-mm-2020-12-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull kmap updates from Thomas Gleixner:
"The new preemtible kmap_local() implementation:
- Consolidate all kmap_atomic() internals into a generic
implementation which builds the base for the kmap_local() API and
make the kmap_atomic() interface wrappers which handle the
disabling/enabling of preemption and pagefaults.
- Switch the storage from per-CPU to per task and provide scheduler
support for clearing mapping when scheduling out and restoring them
when scheduling back in.
- Merge the migrate_disable/enable() code, which is also part of the
scheduler pull request. This was required to make the kmap_local()
interface available which does not disable preemption when a
mapping is established. It has to disable migration instead to
guarantee that the virtual address of the mapped slot is the same
across preemption.
- Provide better debug facilities: guard pages and enforced
utilization of the mapping mechanics on 64bit systems when the
architecture allows it.
- Provide the new kmap_local() API which can now be used to cleanup
the kmap_atomic() usage sites all over the place. Most of the usage
sites do not require the implicit disabling of preemption and
pagefaults so the penalty on 64bit and 32bit non-highmem systems is
removed and quite some of the code can be simplified. A wholesale
conversion is not possible because some usage depends on the
implicit side effects and some need to be cleaned up because they
work around these side effects.
The migrate disable side effect is only effective on highmem
systems and when enforced debugging is enabled. On 64bit and 32bit
non-highmem systems the overhead is completely avoided"
* tag 'core-mm-2020-12-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (33 commits)
ARM: highmem: Fix cache_is_vivt() reference
x86/crashdump/32: Simplify copy_oldmem_page()
io-mapping: Provide iomap_local variant
mm/highmem: Provide kmap_local*
sched: highmem: Store local kmaps in task struct
x86: Support kmap_local() forced debugging
mm/highmem: Provide CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP
mm/highmem: Provide and use CONFIG_DEBUG_KMAP_LOCAL
microblaze/mm/highmem: Add dropped #ifdef back
xtensa/mm/highmem: Make generic kmap_atomic() work correctly
mm/highmem: Take kmap_high_get() properly into account
highmem: High implementation details and document API
Documentation/io-mapping: Remove outdated blurb
io-mapping: Cleanup atomic iomap
mm/highmem: Remove the old kmap_atomic cruft
highmem: Get rid of kmap_types.h
xtensa/mm/highmem: Switch to generic kmap atomic
sparc/mm/highmem: Switch to generic kmap atomic
powerpc/mm/highmem: Switch to generic kmap atomic
nds32/mm/highmem: Switch to generic kmap atomic
...
As part of adding STRICT_DEVMEM support to the RISC-V port, Zong provided an
implementation of devmem_is_allowed() that's exactly the same as the version in
a handful of other ports. Rather than duplicate code, I've put a generic
version of this in lib/ and used it for the RISC-V port.
* palmer/generic-devmem:
arm64: Use the generic devmem_is_allowed()
arm: Use the generic devmem_is_allowed()
RISC-V: Use the new generic devmem_is_allowed()
lib: Add a generic version of devmem_is_allowed()
This is exactly the same as the arm64 version, which I recently copied
into lib/ for use by the RISC-V port.
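As I understand the arm64 version this copies, the shared helper is
essentially the following (treat as a sketch of lib/devmem_is_allowed.c):
        int devmem_is_allowed(unsigned long pfn)
        {
                if (iomem_is_exclusive(pfn << PAGE_SHIFT))
                        return 0;       /* region claimed exclusively: deny */
                if (!page_is_ram(pfn))
                        return 1;       /* not RAM: allow */
                return 0;               /* RAM: deny under STRICT_DEVMEM */
        }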
Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
LLD does not yet support any big endian architectures. Make this config
non-selectable when using LLD until LLD is fixed.
Link: https://github.com/ClangBuiltLinux/linux/issues/965
Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
Tested-by: Nathan Chancellor <natechancellor@gmail.com>
Reviewed-by: Nathan Chancellor <natechancellor@gmail.com>
Reported-by: kbuild test robot <lkp@intel.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
free_highpages() iterates over the free memblock regions in high
memory, and marks each page as available for the memory management
system.
Until commit cddb5ddf2b ("arm, xtensa: simplify initialization of
high memory pages") it rounded beginning of each region upwards and end of
each region downwards.
However, after that commit free_highmem() rounds the beginning and end of
each region downwards, and we may end up freeing a page that is
memblock_reserve()d, resulting in memory corruption.
Restore the original rounding of the region boundaries to avoid freeing
reserved pages.
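A sketch of the restored rounding in free_highpages(), matching the intent
above rather than the exact committed lines:
        phys_addr_t range_start, range_end;
        u64 i;

        for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
                                &range_start, &range_end, NULL) {
                unsigned long start = PFN_UP(range_start);   /* round up   */
                unsigned long end = PFN_DOWN(range_end);     /* round down */

                /* free only the whole pages that lie inside the region */
        }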
Fixes: cddb5ddf2b ("arm, xtensa: simplify initialization of high memory pages")
Link: https://lore.kernel.org/r/20201029110334.4118-1-ardb@kernel.org/
Link: https://lore.kernel.org/r/20201031094345.6984-1-rppt@kernel.org
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Co-developed-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Acked-by: Max Filippov <jcmvbkbc@gmail.com>
This patch initializes the KASan shadow region's page table and memory.
There are two stages to KASan initialization:
1. At the early boot stage the whole shadow region is mapped to just
one physical page (kasan_zero_page). This is done by the function
kasan_early_init(), which is called by __mmap_switched() (arch/arm/kernel/
head-common.S).
2. After paging_init() has been called, we use kasan_zero_page as the zero
shadow for memory that KASan does not need to track, and we allocate
new shadow space for the memory that KASan does need to track. This is
done by the function kasan_init(), which is called by setup_arch().
When using KASan we also need to increase the THREAD_SIZE_ORDER
from 1 to 2 as the extra calls for shadow memory uses quite a bit
of stack.
As we need to make a temporary copy of the PGD when setting up
shadow memory we create a helpful PGD_SIZE definition for both
LPAE and non-LPAE setups.
The KASan core code unconditionally calls pud_populate() so this
needs to be changed from BUG() to do {} while (0) when building
with KASan enabled.
After the initial development by Andrey Ryabinin several modifications
have been made to this code:
Abbott Liu <liuwenliang@huawei.com>:
- Add support for ARM LPAE: If LPAE is enabled, the KASan shadow region's
mapping table needs to be copied in the pgd_alloc() function.
- Change kasan_pte_populate, kasan_pmd_populate, kasan_pud_populate,
kasan_pgd_populate from the .meminit.text section to the .init.text section.
Reported by Florian Fainelli <f.fainelli@gmail.com>
Linus Walleij <linus.walleij@linaro.org>:
- Drop the custom manipulation of TTBR0 and just use
cpu_switch_mm() to switch the pgd table.
- Adapt to handle 4th level page table folding.
- Rewrite the entire page directory and page entry initialization
sequence to be recursive based on ARM64's kasan_init.c.
Ard Biesheuvel <ardb@kernel.org>:
- Necessary underlying fixes.
- Crucial bug fixes to the memory set-up code.
Co-developed-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Co-developed-by: Abbott Liu <liuwenliang@huawei.com>
Co-developed-by: Ard Biesheuvel <ardb@kernel.org>
Cc: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: kasan-dev@googlegroups.com
Cc: Mike Rapoport <rppt@linux.ibm.com>
Acked-by: Mike Rapoport <rppt@linux.ibm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Ard Biesheuvel <ardb@kernel.org> # QEMU/KVM/mach-virt/LPAE/8G
Tested-by: Florian Fainelli <f.fainelli@gmail.com> # Brahma SoCs
Tested-by: Ahmad Fatoum <a.fatoum@pengutronix.de> # i.MX6Q
Reported-by: Russell King - ARM Linux <rmk+kernel@armlinux.org.uk>
Reported-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Define KASAN_SHADOW_OFFSET,KASAN_SHADOW_START and KASAN_SHADOW_END for
the Arm kernel address sanitizer. We are "stealing" lowmem (the 4GB
addressable by a 32bit architecture) out of the virtual address
space to use as shadow memory for KASan as follows:
+----+ 0xffffffff
| |
| | |-> Static kernel image (vmlinux) BSS and page table
| |/
+----+ PAGE_OFFSET
| |
| | |-> Loadable kernel modules virtual address space area
| |/
+----+ MODULES_VADDR = KASAN_SHADOW_END
| |
| | |-> The shadow area of kernel virtual address.
| |/
+----+-> TASK_SIZE (start of kernel space) = KASAN_SHADOW_START the
| | shadow address of MODULES_VADDR
| | |
| | |
| | |-> The user space area in lowmem. The kernel address
| | | sanitizer do not use this space, nor does it map it.
| | |
| | |
| | |
| | |
| |/
------ 0
0 .. TASK_SIZE is the memory that can be used by shared
userspace/kernelspace. It is used for userspace processes and for
passing parameters and memory buffers in system calls etc. We do not
need to shadow this area.
KASAN_SHADOW_START:
This value is the shadow address of MODULES_VADDR. It is the
start of kernel virtual space. Since we have modules to load, we need
to cover also that area with shadow memory so we can find memory
bugs in modules.
KASAN_SHADOW_END
This value is the 0x100000000's shadow address: the mapping that would
be after the end of the kernel memory at 0xffffffff. It is the end of
kernel address sanitizer shadow area. It is also the start of the
module area.
KASAN_SHADOW_OFFSET:
This value is used to map an address to the corresponding shadow
address by the following formula:
shadow_addr = (address >> 3) + KASAN_SHADOW_OFFSET;
As you would expect, >> 3 is equal to dividing by 8, meaning each
byte in the shadow memory covers 8 bytes of kernel memory, so one
bit of shadow memory per byte of kernel memory is used.
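This is the same formula the generic KASan code uses (a sketch;
KASAN_SHADOW_SCALE_SHIFT is 3):
        static inline void *kasan_mem_to_shadow(const void *addr)
        {
                return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
                        + KASAN_SHADOW_OFFSET;
        }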
The KASAN_SHADOW_OFFSET is provided in a Kconfig option depending
on the VMSPLIT layout of the system: the kernel and userspace can
split up lowmem in different ways according to needs, so we calculate
the shadow offset depending on this.
When kasan is enabled, the definition of TASK_SIZE is not an 8-bit
rotated constant, so we need to modify the TASK_SIZE access code in the
*.s file.
The kernel and modules may use different amounts of memory,
according to the VMSPLIT configuration, which in turn
determines the PAGE_OFFSET.
We use the following KASAN_SHADOW_OFFSETs depending on how the
virtual memory is split up:
- 0x1f000000 if we have 1G userspace / 3G kernelspace split:
- The kernel address space is 3G (0xc0000000)
- PAGE_OFFSET is then set to 0x40000000 so the kernel static
image (vmlinux) uses addresses 0x40000000 .. 0xffffffff
- On top of that we have the MODULES_VADDR which under
the worst case (using ARM instructions) is
PAGE_OFFSET - 16M (0x01000000) = 0x3f000000
so the modules use addresses 0x3f000000 .. 0x3fffffff
- So the addresses 0x3f000000 .. 0xffffffff need to be
covered with shadow memory. That is 0xc1000000 bytes
of memory.
- 1/8 of that is needed for its shadow memory, so
0x18200000 bytes of shadow memory is needed. We
"steal" that from the remaining lowmem.
- The KASAN_SHADOW_START becomes 0x26e00000, to
KASAN_SHADOW_END at 0x3effffff.
- Now we can calculate the KASAN_SHADOW_OFFSET for any
kernel address as 0x3f000000 needs to map to the first
byte of shadow memory and 0xffffffff needs to map to
the last byte of shadow memory. Since:
SHADOW_ADDR = (address >> 3) + KASAN_SHADOW_OFFSET
0x26e00000 = (0x3f000000 >> 3) + KASAN_SHADOW_OFFSET
KASAN_SHADOW_OFFSET = 0x26e00000 - (0x3f000000 >> 3)
KASAN_SHADOW_OFFSET = 0x26e00000 - 0x07e00000
KASAN_SHADOW_OFFSET = 0x1f000000
- 0x5f000000 if we have 2G userspace / 2G kernelspace split:
- The kernel space is 2G (0x80000000)
- PAGE_OFFSET is set to 0x80000000 so the kernel static
image uses 0x80000000 .. 0xffffffff.
- On top of that we have the MODULES_VADDR which under
the worst case (using ARM instructions) is
PAGE_OFFSET - 16M (0x01000000) = 0x7f000000
so the modules use addresses 0x7f000000 .. 0x7fffffff
- So the addresses 0x7f000000 .. 0xffffffff need to be
covered with shadow memory. That is 0x81000000 bytes
of memory.
- 1/8 of that is needed for its shadow memory, so
0x10200000 bytes of shadow memory is needed. We
"steal" that from the remaining lowmem.
- The KASAN_SHADOW_START becomes 0x6ee00000, to
KASAN_SHADOW_END at 0x7effffff.
- Now we can calculate the KASAN_SHADOW_OFFSET for any
kernel address as 0x7f000000 needs to map to the first
byte of shadow memory and 0xffffffff needs to map to
the last byte of shadow memory. Since:
SHADOW_ADDR = (address >> 3) + KASAN_SHADOW_OFFSET
0x6ee00000 = (0x7f000000 >> 3) + KASAN_SHADOW_OFFSET
KASAN_SHADOW_OFFSET = 0x6ee00000 - (0x7f000000 >> 3)
KASAN_SHADOW_OFFSET = 0x6ee00000 - 0x0fe00000
KASAN_SHADOW_OFFSET = 0x5f000000
- 0x9f000000 if we have 3G userspace / 1G kernelspace split,
and this is the default split for ARM:
- The kernel address space is 1GB (0x40000000)
- PAGE_OFFSET is set to 0xc0000000 so the kernel static
image uses 0xc0000000 .. 0xffffffff.
- On top of that we have the MODULES_VADDR which under
the worst case (using ARM instructions) is
PAGE_OFFSET - 16M (0x01000000) = 0xbf000000
so the modules use addresses 0xbf000000 .. 0xbfffffff
- So the addresses 0xbf000000 .. 0xffffffff need to be
covered with shadow memory. That is 0x41000000 bytes
of memory.
- 1/8 of that is needed for its shadow memory, so
0x08200000 bytes of shadow memory is needed. We
"steal" that from the remaining lowmem.
- The KASAN_SHADOW_START becomes 0xb6e00000, to
KASAN_SHADOW_END at 0xbfffffff.
- Now we can calculate the KASAN_SHADOW_OFFSET for any
kernel address as 0xbf000000 needs to map to the first
byte of shadow memory and 0xffffffff needs to map to
the last byte of shadow memory. Since:
SHADOW_ADDR = (address >> 3) + KASAN_SHADOW_OFFSET
0xb6e00000 = (0xbf000000 >> 3) + KASAN_SHADOW_OFFSET
KASAN_SHADOW_OFFSET = 0xb6e00000 - (0xbf000000 >> 3)
KASAN_SHADOW_OFFSET = 0xb6e00000 - 0x17e00000
KASAN_SHADOW_OFFSET = 0x9f000000
- 0x8f000000 if we have 3G userspace / 1G kernelspace with
full 1 GB low memory (VMSPLIT_3G_OPT):
- The kernel address space is 1GB (0x40000000)
- PAGE_OFFSET is set to 0xb0000000 so the kernel static
image uses 0xb0000000 .. 0xffffffff.
- On top of that we have the MODULES_VADDR which under
the worst case (using ARM instructions) is
PAGE_OFFSET - 16M (0x01000000) = 0xaf000000
so the modules use addresses 0xaf000000 .. 0xafffffff
- So the addresses 0xaf000000 .. 0xffffffff need to be
covered with shadow memory. That is 0x51000000 bytes
of memory.
- 1/8 of that is needed for its shadow memory, so
0x0a200000 bytes of shadow memory is needed. We
"steal" that from the remaining lowmem.
- The KASAN_SHADOW_START becomes 0xa4e00000, to
KASAN_SHADOW_END at 0xaeffffff.
- Now we can calculate the KASAN_SHADOW_OFFSET for any
kernel address as 0xaf000000 needs to map to the first
byte of shadow memory and 0xffffffff needs to map to
the last byte of shadow memory. Since:
SHADOW_ADDR = (address >> 3) + KASAN_SHADOW_OFFSET
0xa4e00000 = (0xaf000000 >> 3) + KASAN_SHADOW_OFFSET
KASAN_SHADOW_OFFSET = 0xa4e00000 - (0xaf000000 >> 3)
KASAN_SHADOW_OFFSET = 0xa4e00000 - 0x15e00000
KASAN_SHADOW_OFFSET = 0x8f000000
- The default value of 0xffffffff for KASAN_SHADOW_OFFSET
is an error value. We should always match one of the
above shadow offsets.
When we do this, TASK_SIZE will sometimes get slightly odd values
that will not fit into immediate mov assembly instructions.
To account for this, we need to rewrite some assembly using
TASK_SIZE like this:
- mov r1, #TASK_SIZE
+ ldr r1, =TASK_SIZE
or
- cmp r4, #TASK_SIZE
+ ldr r0, =TASK_SIZE
+ cmp r4, r0
This is done to avoid an immediate #TASK_SIZE that needs to
fit into a limited number of bits.
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: kasan-dev@googlegroups.com
Cc: Mike Rapoport <rppt@linux.ibm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Ard Biesheuvel <ardb@kernel.org> # QEMU/KVM/mach-virt/LPAE/8G
Tested-by: Florian Fainelli <f.fainelli@gmail.com> # Brahma SoCs
Tested-by: Ahmad Fatoum <a.fatoum@pengutronix.de> # i.MX6Q
Reported-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Disable instrumentation for arch/arm/boot/compressed/*
since that code is executed before the kernel has even
set up its mappings and is definitely out of scope for
KASan.
Disable instrumentation of arch/arm/vdso/* because that code
is not linked with the kernel image, so the KASan management
code would fail to link.
Disable instrumentation of arch/arm/mm/physaddr.c. See commit
ec6d06efb0 ("arm64: Add support for CONFIG_DEBUG_VIRTUAL")
for more details.
Disable the KASan check in the function unwind_pop_register() because
it does not matter if KASan checks fail when unwind_pop_register()
reads the stack memory of a task.
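A sketch of the uninstrumented stack read in unwind_pop_register()
(surrounding code abbreviated; the rest of the function is assumed):
        static int unwind_pop_register(struct unwind_ctrl_block *ctrl,
                                       unsigned long **vsp, unsigned int reg)
        {
                /*
                 * A task's stack may legitimately look poisoned to KASan
                 * here, so read it without instrumentation.
                 */
                ctrl->vrs[reg] = READ_ONCE_NOCHECK(*(*vsp));
                (*vsp)++;
                return URC_OK;
        }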
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: kasan-dev@googlegroups.com
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Ard Biesheuvel <ardb@kernel.org> # QEMU/KVM/mach-virt/LPAE/8G
Tested-by: Florian Fainelli <f.fainelli@gmail.com> # Brahma SoCs
Tested-by: Ahmad Fatoum <a.fatoum@pengutronix.de> # i.MX6Q
Reported-by: Florian Fainelli <f.fainelli@gmail.com>
Reported-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>