linux/arch/powerpc/mm
Michael Ellerman bb5f33c069 Merge "Use hugepages to map kernel mem on 8xx" into next
Merge Christophe's large series to use huge pages for the linear
mapping on 8xx.

From his cover letter:

The main purpose of this big series is to:
- reorganise huge page handling to avoid using mm_slices.
- use huge pages to map kernel memory on the 8xx.

The 8xx supports 4 page sizes: 4k, 16k, 512k and 8M.
It uses two-level page tables: the PGD has 1024 entries, each entry
covering 4M of address space, and each page table has 1024 entries.
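For a quick sanity check of that geometry, here is a minimal standalone
sketch (plain C, not kernel code; the constants simply mirror the figures
quoted above for the 4k base page size):

#include <stdio.h>

/*
 * Standalone sketch of the 8xx two-level page table geometry described
 * above.  The constants mirror the cover letter (4k base page size);
 * this is an illustration, not kernel code.
 */
#define PGD_ENTRIES	1024ULL		/* entries in the PGD     */
#define PTE_ENTRIES	1024ULL		/* entries per page table */
#define BASE_PAGE	(4ULL << 10)	/* 4k base pages          */

int main(void)
{
	unsigned long long pgd_span = PTE_ENTRIES * BASE_PAGE;	/* 4M */
	unsigned long long total    = PGD_ENTRIES * pgd_span;	/* 4G */

	printf("each PGD entry covers %llu MB\n", pgd_span >> 20);
	printf("the whole PGD covers  %llu GB\n", total >> 30);
	printf("an 8M page spans %llu PGD entries\n",
	       (8ULL << 20) / pgd_span);
	return 0;
}

The last figure (an 8M page spanning two PGD entries) is what motivates
the separate hugepd handling described below.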

At present, page sizes are managed at the PGD entry level, which
implies the use of mm_slices, as pages of different sizes cannot be
mixed within a single page table.

The first purpose of this series is to reorganise things so that
standard page tables can also handle 512k pages. This is done by
adding a new _PAGE_HUGE flag which will be copied into the Level 1
entry in the TLB miss handler. That done, we have two kinds of PGD
entries:
- PGD entries pointing to regular page tables handling 4k, 16k and 512k pages
- PGD entries pointing to hugepd tables handling 8M pages.

There is no need to mix 8M pages with other sizes, because an 8M page
uses more than a single PGD entry covers.
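To make the split concrete, here is a minimal self-contained sketch of a
two-level lookup in that spirit (plain C with made-up flag values and
helper names, not the kernel's actual 8xx PTE format or TLB miss code;
16k pages are left out for brevity):

#include <stdint.h>
#include <stdio.h>

/* Hypothetical flags, loosely modelled on the scheme described above. */
#define DEMO_PAGE_HUGE	0x800u	/* stands in for _PAGE_HUGE in a L2 entry */
#define DEMO_PGD_IS_8M	0x001u	/* stands in for a hugepd-style PGD entry */

#define PGD_SHIFT	22	/* each PGD entry covers 4M */
#define PTE_SHIFT	12	/* 4k base pages            */

struct demo_mm {
	uint32_t pgd[1024];		/* flags only, for the demo */
	uint32_t pte[1024][1024];	/* flags only, for the demo */
};

/* Size in bytes of the mapping covering 'addr'. */
static unsigned long lookup_size(struct demo_mm *mm, uint32_t addr)
{
	uint32_t pgd_idx = addr >> PGD_SHIFT;
	uint32_t pte_idx = (addr >> PTE_SHIFT) & 1023;

	if (mm->pgd[pgd_idx] & DEMO_PGD_IS_8M)
		return 8UL << 20;	/* hugepd entry: 8M page        */
	if (mm->pte[pgd_idx][pte_idx] & DEMO_PAGE_HUGE)
		return 512UL << 10;	/* 512k page in a standard table */
	return 4UL << 10;		/* plain 4k page                */
}

int main(void)
{
	static struct demo_mm mm;

	mm.pte[0][0] |= DEMO_PAGE_HUGE;	/* 512k page at 0x00000000      */
	mm.pgd[2] |= DEMO_PGD_IS_8M;	/* 8M page at 0x00800000 ...    */
	mm.pgd[3] |= DEMO_PGD_IS_8M;	/* ... spanning two PGD entries */

	printf("0x00000000 -> %lu k\n", lookup_size(&mm, 0x00000000) >> 10);
	printf("0x00800000 -> %lu k\n", lookup_size(&mm, 0x00800000) >> 10);
	printf("0x00500000 -> %lu k\n", lookup_size(&mm, 0x00500000) >> 10);
	return 0;
}

In the real series, as the cover letter says, the huge flag is copied
into the Level 1 entry by the TLB miss handler, so the hardware sees the
larger page size while the software walk stays a normal two-level walk.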

Then comes the second purpose of this series. Until now, the 8xx has
had special handling in the TLB miss handlers in order to transparently
map the kernel linear address space and the IMMR with huge pages, by
building the TLB entries in assembly at the time of the exception.

As mm_slices only applies to user space pages, and as it would in any
case not be convenient to slice the kernel address space, it was not
possible to use huge pages for the kernel address space. After step one
of the series, huge pages can now be used much more flexibly.

This series drops all assembly 'just in time' handling of huge pages
and uses huge pages in the page tables instead.

Once the above is done, then comes the icing on the cake:
- Use huge pages for KASAN shadow mapping
- Allow pinned TLBs with strict kernel rwx
- Allow pinned TLBs with debug pagealloc

Then, last but not least, those modifications for the 8xx allow the
following improvements on book3s/32:
- Mapping KASAN shadow with BATs
- Allowing BATs with debug pagealloc

All this considerably simplifies the TLB miss handlers and the
associated initialisation. The overhead of reading the page tables is
negligible compared to the reduction of the miss handlers.

While touching pte_update(), some cleanup was done there too.

Tested widely on 8xx and 832x. Boot tested on QEMU MAC99.
2020-05-26 22:54:27 +10:00
book3s32 powerpc/32s: Allow mapping with BATs with DEBUG_PAGEALLOC 2020-05-26 22:22:23 +10:00
book3s64 powerpc/64s/hash: Add stress_slb kernel boot option to increase SLB faults 2020-05-20 23:39:58 +10:00
kasan powerpc/32s: Implement dedicated kasan_init_region() 2020-05-26 22:22:23 +10:00
nohash powerpc/8xx: Allow large TLBs with DEBUG_PAGEALLOC 2020-05-26 22:22:23 +10:00
ptdump powerpc/8xx: Add a function to early map kernel via huge pages 2020-05-26 22:22:22 +10:00
copro_fault.c treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 153 2019-05-30 11:26:32 -07:00
dma-noncoherent.c dma-mapping: drop the dev argument to arch_sync_dma_for_* 2019-11-20 20:31:38 +01:00
drmem.c treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 152 2019-05-30 11:26:32 -07:00
fault.c powerpc: Add a probe_user_read_inst() function 2020-05-19 00:10:37 +10:00
highmem.c powerpc/highmem: Change BUG_ON() to WARN_ON() 2019-04-20 22:02:11 +10:00
hugetlbpage.c powerpc/8xx: Only 8M pages are hugepte pages now 2020-05-26 22:22:21 +10:00
init_32.c powerpc/32s: Allow mapping with BATs with DEBUG_PAGEALLOC 2020-05-26 22:22:23 +10:00
init_64.c powerpc: Replace _ALIGN_DOWN() by ALIGN_DOWN() 2020-05-11 23:15:15 +10:00
init-common.c powerpc: introduce kernstart_virt_addr to store the kernel base 2019-11-13 19:27:32 +11:00
ioremap_32.c powerpc/ioremap: warn on early use of ioremap() 2019-11-19 19:38:38 +11:00
ioremap_64.c powerpc/ioremap: warn on early use of ioremap() 2019-11-19 19:38:38 +11:00
ioremap.c mm/memremap_pages: Introduce memremap_compat_align() 2020-02-20 16:58:55 -08:00
Makefile powerpc/mm: Move ioremap functions out of pgtable_32/64.c 2019-08-27 13:03:35 +10:00
mem.c mm/memory_hotplug: add pgprot_t to mhp_params 2020-04-10 15:36:21 -07:00
mmap.c treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 156 2019-05-30 11:26:35 -07:00
mmu_context.c treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 152 2019-05-30 11:26:32 -07:00
mmu_decl.h powerpc/8xx: Don't set IMMR map anymore at boot 2020-05-26 22:22:21 +10:00
numa.c powerpc/numa: Remove late request for home node associativity 2020-03-04 22:44:31 +11:00
pgtable_32.c powerpc/8xx: Add a function to early map kernel via huge pages 2020-05-26 22:22:22 +10:00
pgtable_64.c powerpc/mm: Move ioremap functions out of pgtable_32/64.c 2019-08-27 13:03:35 +10:00
pgtable-frag.c mm: treewide: clarify pgtable_page_{ctor,dtor}() naming 2019-09-26 10:10:44 -07:00
pgtable.c powerpc/8xx: Manage 512k huge pages as standard pages. 2020-05-26 22:22:21 +10:00
slice.c powerpc: Replace _ALIGN_UP() by ALIGN() 2020-05-11 23:15:15 +10:00