License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner, Kate Stewart, and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and where references to a
license had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier should be applied
to a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files created by Philippe Ombredanne. Philippe prepared the
base worksheet, and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file by file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to the file. She confirmed any determination that was
not immediately clear with lawyers working with the Linux Foundation.
The criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source
- File already had some variant of a license header in it (even if <5
lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, the file was
considered to have no license information in it, and the top level
COPYING file license applied.
For non */uapi/* files that summary was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note", otherwise it was "GPL-2.0". The results of that were:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note 930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL-family license was found in the file or if it had no licensing
in it (per the prior point). Results summary:
SPDX license identifier # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note 270
GPL-2.0+ WITH Linux-syscall-note 169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17
LGPL-2.1+ WITH Linux-syscall-note 15
GPL-1.0+ WITH Linux-syscall-note 14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5
LGPL-2.0+ WITH Linux-syscall-note 4
LGPL-2.1 WITH Linux-syscall-note 3
((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later.
In total, over 70 hours of logged manual review was done on the
spreadsheet by Kate, Philippe and Thomas to determine the SPDX license
identifiers to apply to the source files, in some cases with confirmation
by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights. The
Windriver scanner is based in part on an older version of FOSSology, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and have been fixed to reflect the
correct identifier.
Additionally, Philippe spent 10 hours doing a detailed manual
inspection and review of the 12,461 patched files from the initial patch
version, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

// SPDX-License-Identifier: GPL-2.0
/*
 *    Copyright IBM Corp. 2006
 *    Author(s): Heiko Carstens <heiko.carstens@de.ibm.com>
 */

#include <linux/memory_hotplug.h>
#include <linux/memblock.h>
#include <linux/pfn.h>
#include <linux/mm.h>
#include <linux/init.h>
#include <linux/list.h>
#include <linux/hugetlb.h>

include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h
percpu.h is included by sched.h and module.h and thus ends up being
included when building most .c files. percpu.h includes slab.h which
in turn includes gfp.h making everything defined by the two files
universally available and complicating inclusion dependencies.
percpu.h -> slab.h dependency is about to be removed. Prepare for
this change by updating users of gfp and slab facilities to include those
headers directly instead of assuming availability. As this conversion
needs to touch a large number of source files, the following script is
used as the basis of conversion.
http://userweb.kernel.org/~tj/misc/slabh-sweep.py
The script does the following.
* Scan files for gfp and slab usages and update includes such that
only the necessary includes are there, i.e. if only gfp is used,
gfp.h; if slab is used, slab.h.
* When the script inserts a new include, it looks at the include
blocks and tries to put the new include such that its order conforms
to its surroundings. It's put in the include block which contains
core kernel includes, in the same order that the rest are ordered -
alphabetical, Christmas tree, rev-Xmas-tree or at the end if there
doesn't seem to be any matching order.
* If the script can't find a place to put a new include (mostly
because the file doesn't have a fitting include block), it prints out
an error message indicating which .h file needs to be added to the
file.
The conversion was done in the following steps.
1. The initial automatic conversion of all .c files updated slightly
over 4000 files, deleting around 700 includes and adding ~480 gfp.h
and ~3000 slab.h inclusions. The script emitted errors for ~400
files.
2. Each error was manually checked. Some didn't need the inclusion,
some needed manual addition, and for others it was more appropriate to
add it to an implementation .h or embedding .c file. This step added
inclusions to around 150 files.
3. The script was run again and the output was compared to the edits
from #2 to make sure no file was left behind.
4. Several build tests were done and a couple of problems were fixed.
e.g. lib/decompress_*.c used malloc/free() wrappers around slab
APIs requiring slab.h to be added manually.
5. The script was run on all .h files but without automatically
editing them as sprinkling gfp.h and slab.h inclusions around .h
files could easily lead to inclusion dependency hell. Most gfp.h
inclusion directives were ignored as stuff from gfp.h was usually
widely available and often used in preprocessor macros. Each
slab.h inclusion directive was examined and added manually as
necessary.
6. percpu.h was updated not to include slab.h.
7. Build tests were done on the following configurations and failures
were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my
distributed build env didn't work with gcov compiles) and a few
more options had to be turned off depending on archs to make things
build (like ipr on powerpc/64 which failed due to missing writeq).
* x86 and x86_64 UP and SMP allmodconfig and a custom test config.
* powerpc and powerpc64 SMP allmodconfig
* sparc and sparc64 SMP allmodconfig
* ia64 SMP allmodconfig
* s390 SMP allmodconfig
* alpha SMP allmodconfig
* um on x86_64 SMP allmodconfig
8. percpu.h modifications were reverted so that it could be applied as
a separate patch and serve as bisection point.
Given the fact that I had only a couple of failures from tests on step
6, I'm fairly confident about the coverage of this conversion patch.
If there is a breakage, it's likely to be something in one of the arch
headers which should be easily discoverable on most builds of
the specific arch.
Signed-off-by: Tejun Heo <tj@kernel.org>
Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>

#include <linux/slab.h>
#include <asm/cacheflush.h>
#include <asm/nospec-branch.h>
#include <asm/pgalloc.h>
#include <asm/setup.h>
#include <asm/tlbflush.h>
#include <asm/sections.h>
#include <asm/set_memory.h>

static DEFINE_MUTEX(vmem_mutex);

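/*
 * Allocate pages for page tables: use the page allocator once slab is up,
 * fall back to memblock during early boot.
 */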
static void __ref *vmem_alloc_pages(unsigned int order)
{
        unsigned long size = PAGE_SIZE << order;

        if (slab_is_available())
                return (void *) __get_free_pages(GFP_KERNEL, order);
        return memblock_alloc(size, size);
}

static void vmem_free_pages(unsigned long addr, int order)
{
        /* We don't expect boot memory to be removed ever. */
        if (!slab_is_available() ||
            WARN_ON_ONCE(PageReserved(virt_to_page(addr))))
                return;
        free_pages(addr, order);
}

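/*
 * Allocate a region or segment table (CRST) and preinitialize all of its
 * entries with @val, e.g. _SEGMENT_ENTRY_EMPTY or _REGION3_ENTRY_EMPTY.
 */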
void *vmem_crst_alloc(unsigned long val)
{
        unsigned long *table;

        table = vmem_alloc_pages(CRST_ALLOC_ORDER);
        if (table)
                crst_table_init(table, val);
        return table;
}

pte_t __ref *vmem_pte_alloc(void)
{
        unsigned long size = PTRS_PER_PTE * sizeof(pte_t);
        pte_t *pte;

        if (slab_is_available())
                pte = (pte_t *) page_table_alloc(&init_mm);
        else
                pte = (pte_t *) memblock_alloc(size, size);
        if (!pte)
                return NULL;
        memset64((u64 *)pte, _PAGE_INVALID, PTRS_PER_PTE);
        return pte;
}

static void vmem_pte_free(unsigned long *table)
{
        /* We don't expect boot memory to be removed ever. */
        if (!slab_is_available() ||
            WARN_ON_ONCE(PageReserved(virt_to_page(table))))
                return;
        page_table_free(&init_mm, table);
}

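/*
 * When the vmemmap is backed by large (PMD-mapped) frames, a frame may only
 * be partially in use. Unused sub-PMD ranges are filled with the PAGE_UNUSED
 * pattern below; once a whole PMD range consists solely of PAGE_UNUSED bytes,
 * the large frame can be freed again.
 */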
#define PAGE_UNUSED 0xFD

/*
 * The unused vmemmap range, which was not yet memset(PAGE_UNUSED) ranges
 * from unused_sub_pmd_start to next PMD_SIZE boundary.
 */
static unsigned long unused_sub_pmd_start;

static void vmemmap_flush_unused_sub_pmd(void)
{
        if (!unused_sub_pmd_start)
                return;
        memset((void *)unused_sub_pmd_start, PAGE_UNUSED,
               ALIGN(unused_sub_pmd_start, PMD_SIZE) - unused_sub_pmd_start);
        unused_sub_pmd_start = 0;
}

static void vmemmap_mark_sub_pmd_used(unsigned long start, unsigned long end)
{
        /*
         * As we expect to add in the same granularity as we remove, it's
         * sufficient to mark only some piece used to block the memmap page from
         * getting removed (just in case the memmap never gets initialized,
         * e.g., because the memory block never gets onlined).
         */
        memset((void *)start, 0, sizeof(struct page));
}

static void vmemmap_use_sub_pmd(unsigned long start, unsigned long end)
{
        /*
         * We only optimize if the new used range directly follows the
         * previously unused range (esp., when populating consecutive sections).
         */
        if (unused_sub_pmd_start == start) {
                unused_sub_pmd_start = end;
                if (likely(IS_ALIGNED(unused_sub_pmd_start, PMD_SIZE)))
                        unused_sub_pmd_start = 0;
                return;
        }
        vmemmap_flush_unused_sub_pmd();
        vmemmap_mark_sub_pmd_used(start, end);
}

static void vmemmap_use_new_sub_pmd(unsigned long start, unsigned long end)
{
        unsigned long page = ALIGN_DOWN(start, PMD_SIZE);

        vmemmap_flush_unused_sub_pmd();

        /* Could be our memmap page is filled with PAGE_UNUSED already ... */
        vmemmap_mark_sub_pmd_used(start, end);

        /* Mark the unused parts of the new memmap page PAGE_UNUSED. */
        if (!IS_ALIGNED(start, PMD_SIZE))
                memset((void *)page, PAGE_UNUSED, start - page);
        /*
         * We want to avoid memset(PAGE_UNUSED) when populating the vmemmap of
         * consecutive sections. Remember for the last added PMD the last
         * unused range in the populated PMD.
         */
        if (!IS_ALIGNED(end, PMD_SIZE))
                unused_sub_pmd_start = end;
}

/* Returns true if the PMD is completely unused and can be freed. */
static bool vmemmap_unuse_sub_pmd(unsigned long start, unsigned long end)
{
        unsigned long page = ALIGN_DOWN(start, PMD_SIZE);

        vmemmap_flush_unused_sub_pmd();
        memset((void *)start, PAGE_UNUSED, end - start);
        return !memchr_inv((void *)page, PAGE_UNUSED, PMD_SIZE);
}

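/*
 * The modify_*_table() helpers below each walk one page-table level.
 * "add" selects between mapping (populate) and unmapping a range, while
 * "direct" distinguishes the identity (1:1) mapping from the vmemmap:
 * for the identity mapping, pages are mapped in place; for the vmemmap,
 * backing pages are allocated and freed as needed.
 */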
/* __ref: we'll only call vmemmap_alloc_block() via vmemmap_populate() */
static int __ref modify_pte_table(pmd_t *pmd, unsigned long addr,
                                  unsigned long end, bool add, bool direct)
{
        unsigned long prot, pages = 0;
        int ret = -ENOMEM;
        pte_t *pte;

        prot = pgprot_val(PAGE_KERNEL);
        if (!MACHINE_HAS_NX)
                prot &= ~_PAGE_NOEXEC;

        pte = pte_offset_kernel(pmd, addr);
        for (; addr < end; addr += PAGE_SIZE, pte++) {
                if (!add) {
                        if (pte_none(*pte))
                                continue;
                        if (!direct)
                                vmem_free_pages((unsigned long) pfn_to_virt(pte_pfn(*pte)), 0);
                        pte_clear(&init_mm, addr, pte);
                } else if (pte_none(*pte)) {
                        if (!direct) {
                                void *new_page = vmemmap_alloc_block(PAGE_SIZE, NUMA_NO_NODE);

                                if (!new_page)
                                        goto out;
                                pte_val(*pte) = __pa(new_page) | prot;
                        } else {
                                pte_val(*pte) = __pa(addr) | prot;
                        }
                } else {
                        continue;
                }
                pages++;
        }
        ret = 0;
out:
        if (direct)
                update_page_count(PG_DIRECT_MAP_4K, add ? pages : -pages);
        return ret;
}

static void try_free_pte_table(pmd_t *pmd, unsigned long start)
{
        pte_t *pte;
        int i;

        /* We can safely assume this is fully in 1:1 mapping & vmemmap area */
        pte = pte_offset_kernel(pmd, start);
        for (i = 0; i < PTRS_PER_PTE; i++, pte++) {
                if (!pte_none(*pte))
                        return;
        }
        vmem_pte_free((unsigned long *) pmd_deref(*pmd));
        pmd_clear(pmd);
}

/* __ref: we'll only call vmemmap_alloc_block() via vmemmap_populate() */
static int __ref modify_pmd_table(pud_t *pud, unsigned long addr,
                                  unsigned long end, bool add, bool direct)
{
        unsigned long next, prot, pages = 0;
        int ret = -ENOMEM;
        pmd_t *pmd;
        pte_t *pte;

        prot = pgprot_val(SEGMENT_KERNEL);
        if (!MACHINE_HAS_NX)
                prot &= ~_SEGMENT_ENTRY_NOEXEC;

        pmd = pmd_offset(pud, addr);
        for (; addr < end; addr = next, pmd++) {
                next = pmd_addr_end(addr, end);
                if (!add) {
                        if (pmd_none(*pmd))
                                continue;
                        if (pmd_large(*pmd)) {
                                if (IS_ALIGNED(addr, PMD_SIZE) &&
                                    IS_ALIGNED(next, PMD_SIZE)) {
                                        if (!direct)
                                                vmem_free_pages(pmd_deref(*pmd), get_order(PMD_SIZE));
                                        pmd_clear(pmd);
                                        pages++;
                                } else if (!direct && vmemmap_unuse_sub_pmd(addr, next)) {
                                        vmem_free_pages(pmd_deref(*pmd), get_order(PMD_SIZE));
                                        pmd_clear(pmd);
                                }
                                continue;
                        }
                } else if (pmd_none(*pmd)) {
                        if (IS_ALIGNED(addr, PMD_SIZE) &&
                            IS_ALIGNED(next, PMD_SIZE) &&
                            MACHINE_HAS_EDAT1 && addr && direct &&
                            !debug_pagealloc_enabled()) {
                                pmd_val(*pmd) = __pa(addr) | prot;
                                pages++;
                                continue;
                        } else if (!direct && MACHINE_HAS_EDAT1) {
                                void *new_page;

                                /*
                                 * Use 1MB frames for vmemmap if available. We
                                 * always use large frames even if they are only
                                 * partially used. Otherwise we would have also
                                 * page tables since vmemmap_populate gets
                                 * called for each section separately.
                                 */
                                new_page = vmemmap_alloc_block(PMD_SIZE, NUMA_NO_NODE);
                                if (new_page) {
                                        pmd_val(*pmd) = __pa(new_page) | prot;
                                        if (!IS_ALIGNED(addr, PMD_SIZE) ||
                                            !IS_ALIGNED(next, PMD_SIZE)) {
                                                vmemmap_use_new_sub_pmd(addr, next);
                                        }
                                        continue;
                                }
                        }
                        pte = vmem_pte_alloc();
                        if (!pte)
                                goto out;
                        pmd_populate(&init_mm, pmd, pte);
                } else if (pmd_large(*pmd)) {
                        if (!direct)
                                vmemmap_use_sub_pmd(addr, next);
                        continue;
                }
                ret = modify_pte_table(pmd, addr, next, add, direct);
                if (ret)
                        goto out;
                if (!add)
                        try_free_pte_table(pmd, addr & PMD_MASK);
        }
        ret = 0;
out:
        if (direct)
                update_page_count(PG_DIRECT_MAP_1M, add ? pages : -pages);
        return ret;
}

static void try_free_pmd_table(pud_t *pud, unsigned long start)
{
        const unsigned long end = start + PUD_SIZE;
        pmd_t *pmd;
        int i;

        /* Don't mess with any tables not fully in 1:1 mapping & vmemmap area */
        if (end > VMALLOC_START)
                return;
#ifdef CONFIG_KASAN
        if (start < KASAN_SHADOW_END && KASAN_SHADOW_START > end)
                return;
#endif
        pmd = pmd_offset(pud, start);
        for (i = 0; i < PTRS_PER_PMD; i++, pmd++)
                if (!pmd_none(*pmd))
                        return;
        vmem_free_pages(pud_deref(*pud), CRST_ALLOC_ORDER);
        pud_clear(pud);
}

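/*
 * Same pattern as modify_pmd_table(), one level up: 2GB frames are used
 * for the identity mapping if EDAT2 is available.
 */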
static int modify_pud_table(p4d_t *p4d, unsigned long addr, unsigned long end,
                            bool add, bool direct)
{
        unsigned long next, prot, pages = 0;
        int ret = -ENOMEM;
        pud_t *pud;
        pmd_t *pmd;

        prot = pgprot_val(REGION3_KERNEL);
        if (!MACHINE_HAS_NX)
                prot &= ~_REGION_ENTRY_NOEXEC;
        pud = pud_offset(p4d, addr);
        for (; addr < end; addr = next, pud++) {
                next = pud_addr_end(addr, end);
                if (!add) {
                        if (pud_none(*pud))
                                continue;
                        if (pud_large(*pud)) {
                                if (IS_ALIGNED(addr, PUD_SIZE) &&
                                    IS_ALIGNED(next, PUD_SIZE)) {
                                        pud_clear(pud);
                                        pages++;
                                }
                                continue;
                        }
                } else if (pud_none(*pud)) {
                        if (IS_ALIGNED(addr, PUD_SIZE) &&
                            IS_ALIGNED(next, PUD_SIZE) &&
                            MACHINE_HAS_EDAT2 && addr && direct &&
                            !debug_pagealloc_enabled()) {
                                pud_val(*pud) = __pa(addr) | prot;
                                pages++;
                                continue;
                        }
                        pmd = vmem_crst_alloc(_SEGMENT_ENTRY_EMPTY);
                        if (!pmd)
                                goto out;
                        pud_populate(&init_mm, pud, pmd);
                } else if (pud_large(*pud)) {
                        continue;
                }
                ret = modify_pmd_table(pud, addr, next, add, direct);
                if (ret)
                        goto out;
                if (!add)
                        try_free_pmd_table(pud, addr & PUD_MASK);
        }
        ret = 0;
out:
        if (direct)
                update_page_count(PG_DIRECT_MAP_2G, add ? pages : -pages);
        return ret;
}

static void try_free_pud_table(p4d_t *p4d, unsigned long start)
{
        const unsigned long end = start + P4D_SIZE;
        pud_t *pud;
        int i;

        /* Don't mess with any tables not fully in 1:1 mapping & vmemmap area */
        if (end > VMALLOC_START)
                return;
#ifdef CONFIG_KASAN
        if (start < KASAN_SHADOW_END && KASAN_SHADOW_START > end)
                return;
#endif
        pud = pud_offset(p4d, start);
        for (i = 0; i < PTRS_PER_PUD; i++, pud++) {
                if (!pud_none(*pud))
                        return;
        }
        vmem_free_pages(p4d_deref(*p4d), CRST_ALLOC_ORDER);
        p4d_clear(p4d);
}

static int modify_p4d_table(pgd_t *pgd, unsigned long addr, unsigned long end,
                            bool add, bool direct)
{
        unsigned long next;
        int ret = -ENOMEM;
        p4d_t *p4d;
        pud_t *pud;

        p4d = p4d_offset(pgd, addr);
        for (; addr < end; addr = next, p4d++) {
                next = p4d_addr_end(addr, end);
                if (!add) {
                        if (p4d_none(*p4d))
                                continue;
                } else if (p4d_none(*p4d)) {
                        pud = vmem_crst_alloc(_REGION3_ENTRY_EMPTY);
                        if (!pud)
                                goto out;
                        p4d_populate(&init_mm, p4d, pud);
                }
                ret = modify_pud_table(p4d, addr, next, add, direct);
                if (ret)
                        goto out;
                if (!add)
                        try_free_pud_table(p4d, addr & P4D_MASK);
        }
        ret = 0;
out:
        return ret;
}

static void try_free_p4d_table(pgd_t *pgd, unsigned long start)
{
        const unsigned long end = start + PGDIR_SIZE;
        p4d_t *p4d;
        int i;

        /* Don't mess with any tables not fully in 1:1 mapping & vmemmap area */
        if (end > VMALLOC_START)
                return;
#ifdef CONFIG_KASAN
        if (start < KASAN_SHADOW_END && KASAN_SHADOW_START > end)
                return;
#endif
        p4d = p4d_offset(pgd, start);
        for (i = 0; i < PTRS_PER_P4D; i++, p4d++) {
                if (!p4d_none(*p4d))
                        return;
        }
        vmem_free_pages(pgd_deref(*pgd), CRST_ALLOC_ORDER);
        pgd_clear(pgd);
}

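/*
 * Walk the kernel page tables from pgd down to pte level for [start, end)
 * and either populate (add=true) or unpopulate (add=false) the range.
 * When removing, lower-level tables that became empty are freed again and
 * the TLB is flushed for the whole range.
 */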
static int modify_pagetable(unsigned long start, unsigned long end, bool add,
                            bool direct)
{
        unsigned long addr, next;
        int ret = -ENOMEM;
        pgd_t *pgd;
        p4d_t *p4d;

        if (WARN_ON_ONCE(!PAGE_ALIGNED(start | end)))
                return -EINVAL;
        for (addr = start; addr < end; addr = next) {
                next = pgd_addr_end(addr, end);
                pgd = pgd_offset_k(addr);

                if (!add) {
                        if (pgd_none(*pgd))
                                continue;
                } else if (pgd_none(*pgd)) {
                        p4d = vmem_crst_alloc(_REGION2_ENTRY_EMPTY);
                        if (!p4d)
                                goto out;
                        pgd_populate(&init_mm, pgd, p4d);
                }
                ret = modify_p4d_table(pgd, addr, next, add, direct);
                if (ret)
                        goto out;
                if (!add)
                        try_free_p4d_table(pgd, addr & PGDIR_MASK);
        }
        ret = 0;
out:
        if (!add)
                flush_tlb_kernel_range(start, end);
        return ret;
}

static int add_pagetable(unsigned long start, unsigned long end, bool direct)
{
        return modify_pagetable(start, end, true, direct);
}

static int remove_pagetable(unsigned long start, unsigned long end, bool direct)
{
        return modify_pagetable(start, end, false, direct);
}

/*
 * Add a physical memory range to the 1:1 mapping.
 */
static int vmem_add_range(unsigned long start, unsigned long size)
{
        return add_pagetable(start, start + size, true);
}

/*
 * Remove a physical memory range from the 1:1 mapping.
 */
static void vmem_remove_range(unsigned long start, unsigned long size)
{
        remove_pagetable(start, start + size, true);
}

/*
 * Add a backed mem_map array to the virtual mem_map array.
 */
int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
                               struct vmem_altmap *altmap)
{
        int ret;

        mutex_lock(&vmem_mutex);
        /* We don't care about the node, just use NUMA_NO_NODE on allocations */
        ret = add_pagetable(start, end, false);
        if (ret)
                remove_pagetable(start, end, false);
        mutex_unlock(&vmem_mutex);
        return ret;
}

void vmemmap_free(unsigned long start, unsigned long end,
                  struct vmem_altmap *altmap)
{
        mutex_lock(&vmem_mutex);
        remove_pagetable(start, end, false);
        mutex_unlock(&vmem_mutex);
}

void vmem_remove_mapping(unsigned long start, unsigned long size)
{
        mutex_lock(&vmem_mutex);
        vmem_remove_range(start, size);
        mutex_unlock(&vmem_mutex);
}

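/*
 * Tell the memory hotplug core which physical address range can be added:
 * everything up to VMEM_MAX_PHYS can be mapped.
 */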
struct range arch_get_mappable_range(void)
{
        struct range mhp_range;

        mhp_range.start = 0;
        mhp_range.end = VMEM_MAX_PHYS - 1;
        return mhp_range;
}

int vmem_add_mapping(unsigned long start, unsigned long size)
{
        struct range range = arch_get_mappable_range();
        int ret;

        if (start < range.start ||
            start + size > range.end + 1 ||
            start + size < start)
                return -ERANGE;

        mutex_lock(&vmem_mutex);
        ret = vmem_add_range(start, size);
        if (ret)
                vmem_remove_range(start, size);
        mutex_unlock(&vmem_mutex);
        return ret;
}

/*
 * map whole physical memory to virtual memory (identity mapping)
 * we reserve enough space in the vmalloc area for vmemmap to hotplug
 * additional memory segments.
 */
void __init vmem_map_init(void)
{
        phys_addr_t base, end;
        u64 i;

        for_each_mem_range(i, &base, &end)
                vmem_add_range(base, end - base);
        __set_memory((unsigned long)_stext,
                     (unsigned long)(_etext - _stext) >> PAGE_SHIFT,
                     SET_MEMORY_RO | SET_MEMORY_X);
        __set_memory((unsigned long)_etext,
                     (unsigned long)(__end_rodata - _etext) >> PAGE_SHIFT,
                     SET_MEMORY_RO);
        __set_memory((unsigned long)_sinittext,
                     (unsigned long)(_einittext - _sinittext) >> PAGE_SHIFT,
                     SET_MEMORY_RO | SET_MEMORY_X);
        __set_memory(__stext_amode31, (__etext_amode31 - __stext_amode31) >> PAGE_SHIFT,
                     SET_MEMORY_RO | SET_MEMORY_X);

        if (nospec_uses_trampoline() || !static_key_enabled(&cpu_has_bear)) {
                /*
                 * Lowcore must be executable for LPSWE
                 * and expoline trampoline branch instructions.
                 */
                set_memory_x(0, 1);
        }

        pr_info("Write protected kernel read-only data: %luk\n",
                (unsigned long)(__end_rodata - _stext) >> 10);
}