/* Generate assembler source containing symbol information
 *
 * Copyright 2002 by Kai Germaschewski
 *
 * This software may be used and distributed according to the terms
 * of the GNU General Public License, incorporated herein by reference.
 *
 * Usage: kallsyms [--all-symbols] [--absolute-percpu]
 *                 [--base-relative] in.map > out.S
 *
 * Table compression uses all the unused char codes on the symbols and
 * maps these to the most used substrings (tokens). For instance, it might
 * map char code 0xF7 to represent "write_" and then in every symbol where
 * "write_" appears it can be replaced by 0xF7, saving 5 bytes.
 * The used codes themselves are also placed in the table so that the
 * decompression can work without "special cases".
 * Applied to kernel symbols, this usually produces a compression ratio
 * of about 50%.
 *
 */
#include <getopt.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <ctype.h>
#include <limits.h>
#define ARRAY_SIZE(arr) (sizeof(arr) / sizeof(arr[0]))

#define _stringify_1(x)	#x
#define _stringify(x)	_stringify_1(x)

#define KSYM_NAME_LEN		512
/*
 * A substantially bigger size than the current maximum.
 *
 * It cannot be defined as an expression because it gets stringified
 * for the fscanf() format string. Therefore, a _Static_assert() is
 * used instead to maintain the relationship with KSYM_NAME_LEN.
 */
#define KSYM_NAME_LEN_BUFFER	2048
_Static_assert(
	KSYM_NAME_LEN_BUFFER == KSYM_NAME_LEN * 4,
	"Please keep KSYM_NAME_LEN_BUFFER in sync with KSYM_NAME_LEN"
);
struct sym_entry {
	unsigned long long addr;
	unsigned int len;
	unsigned int seq;
	unsigned int start_pos;
	unsigned int percpu_absolute;
	unsigned char sym[];
};

struct addr_range {
	const char *start_sym, *end_sym;
	unsigned long long start, end;
};

static unsigned long long _text;
static unsigned long long relative_base;

static struct addr_range text_ranges[] = {
	{ "_stext",     "_etext"     },
	{ "_sinittext", "_einittext" },
};
#define text_range_text     (&text_ranges[0])
#define text_range_inittext (&text_ranges[1])

static struct addr_range percpu_range = {
	"__per_cpu_start", "__per_cpu_end", -1ULL, 0
};

static struct sym_entry **table;
static unsigned int table_size, table_cnt;

static int all_symbols;
static int absolute_percpu;
static int base_relative;
static int lto_clang;

static int token_profit[0x10000];

/* the table that holds the result of the compression */
static unsigned char best_table[256][2];
static unsigned char best_table_len[256];

static void usage(void)
{
	fprintf(stderr, "Usage: kallsyms [--all-symbols] [--absolute-percpu] "
" [--base-relative] [--lto-clang] in.map > out.S \n " ) ;
2005-04-17 02:20:36 +04:00
exit ( 1 ) ;
}

static char *sym_name(const struct sym_entry *s)
{
	return (char *)s->sym + 1;
}

static bool is_ignored_symbol(const char *name, char type)
{
	/* Symbol names that exactly match to the following are ignored. */
	static const char * const ignored_symbols[] = {
		/*
		 * Symbols which vary between passes. Passes 1 and 2 must have
		 * identical symbol lists. The kallsyms_* symbols below are
		 * only added after pass 1, they would be included in pass 2
		 * when --all-symbols is specified so exclude them to get a
		 * stable symbol list.
		 */
		"kallsyms_addresses",
		"kallsyms_offsets",
		"kallsyms_relative_base",
		"kallsyms_num_syms",
		"kallsyms_names",
		"kallsyms_markers",
		"kallsyms_token_table",
		"kallsyms_token_index",
		/* Exclude linker generated symbols which vary between passes */
		"_SDA_BASE_",		/* ppc */
		"_SDA2_BASE_",		/* ppc */
		NULL
	};

	/* Symbol names that begin with the following are ignored. */
	static const char * const ignored_prefixes[] = {
		"__efistub_",		/* arm64 EFI stub namespace */
		"__kvm_nvhe_$",		/* arm64 local symbols in non-VHE KVM namespace */
		"__kvm_nvhe_.L",	/* arm64 local symbols in non-VHE KVM namespace */
		"__AArch64ADRPThunk_",	/* arm64 lld */
		"__ARMV5PILongThunk_",	/* arm lld */
		"__ARMV7PILongThunk_",
		"__ThumbV7PILongThunk_",
		"__LA25Thunk_",		/* mips lld */
		"__microLA25Thunk_",
		"__kcfi_typeid_",	/* CFI type identifiers */
		NULL
	};

	/* Symbol names that end with the following are ignored. */
	static const char * const ignored_suffixes[] = {
		"_from_arm",		/* arm */
		"_from_thumb",		/* arm */
		"_veneer",		/* arm */
		NULL
	};

	/* Symbol names that contain the following are ignored. */
	static const char * const ignored_matches[] = {
		".long_branch.",	/* ppc stub */
		".plt_branch.",		/* ppc stub */
		NULL
	};

	const char * const *p;

	for (p = ignored_symbols; *p; p++)
		if (!strcmp(name, *p))
			return true;

	for (p = ignored_prefixes; *p; p++)
		if (!strncmp(name, *p, strlen(*p)))
			return true;

	for (p = ignored_suffixes; *p; p++) {
		int l = strlen(name) - strlen(*p);

		if (l >= 0 && !strcmp(name + l, *p))
			return true;
	}

	for (p = ignored_matches; *p; p++) {
		if (strstr(name, *p))
			return true;
	}

	if (type == 'U' || type == 'u')
		return true;
	/* exclude debugging symbols */
	if (type == 'N' || type == 'n')
		return true;

	if (toupper(type) == 'A') {
		/* Keep these useful absolute symbols */
		if (strcmp(name, "__kernel_syscall_via_break") &&
		    strcmp(name, "__kernel_syscall_via_epc") &&
		    strcmp(name, "__kernel_sigtramp") &&
		    strcmp(name, "__gp"))
			return true;
	}

	return false;
}

static void check_symbol_range(const char *sym, unsigned long long addr,
			       struct addr_range *ranges, int entries)
{
	size_t i;
	struct addr_range *ar;

	for (i = 0; i < entries; ++i) {
		ar = &ranges[i];

		if (strcmp(sym, ar->start_sym) == 0) {
			ar->start = addr;
			return;
		} else if (strcmp(sym, ar->end_sym) == 0) {
			ar->end = addr;
			return;
		}
	}
}

static struct sym_entry *read_symbol(FILE *in)
{
	char name[KSYM_NAME_LEN_BUFFER+1], type;
	unsigned long long addr;
	unsigned int len;
	struct sym_entry *sym;
	int rc;

	rc = fscanf(in, "%llx %c %" _stringify(KSYM_NAME_LEN_BUFFER) "s\n", &addr, &type, name);
	if (rc != 3) {
		if (rc != EOF && fgets(name, ARRAY_SIZE(name), in) == NULL)
			fprintf(stderr, "Read error or end of file.\n");
		return NULL;
	}
	if (strlen(name) >= KSYM_NAME_LEN) {
		fprintf(stderr, "Symbol %s too long for kallsyms (%zu >= %d).\n"
				"Please increase KSYM_NAME_LEN both in kernel and kallsyms.c\n",
			name, strlen(name), KSYM_NAME_LEN);
		return NULL;
	}

	if (strcmp(name, "_text") == 0)
		_text = addr;

	/* Ignore most absolute/undefined (?) symbols. */
	if (is_ignored_symbol(name, type))
		return NULL;

	check_symbol_range(name, addr, text_ranges, ARRAY_SIZE(text_ranges));
	check_symbol_range(name, addr, &percpu_range, 1);

	/* include the type field in the symbol name, so that it gets
	 * compressed together */
	len = strlen(name) + 1;

	sym = malloc(sizeof(*sym) + len + 1);
	if (!sym) {
		fprintf(stderr, "kallsyms failure: "
			"unable to allocate required amount of memory\n");
		exit(EXIT_FAILURE);
	}
	sym->addr = addr;
	sym->len = len;
	sym->sym[0] = type;
	strcpy(sym_name(sym), name);
	sym->percpu_absolute = 0;

	return sym;
}

static int symbol_in_range(const struct sym_entry *s,
			   const struct addr_range *ranges, int entries)
{
	size_t i;
	const struct addr_range *ar;

	for (i = 0; i < entries; ++i) {
		ar = &ranges[i];

		if (s->addr >= ar->start && s->addr <= ar->end)
			return 1;
	}

	return 0;
}

static int symbol_valid(const struct sym_entry *s)
{
	const char *name = sym_name(s);

	/* if --all-symbols is not specified, then symbols outside the text
	 * and inittext sections are discarded */
	if (!all_symbols) {
		if (symbol_in_range(s, text_ranges,
				    ARRAY_SIZE(text_ranges)) == 0)
			return 0;
		/* Corner case. Discard any symbols with the same value as
		 * _etext _einittext; they can move between pass 1 and 2 when
		 * the kallsyms data are added. If these symbols move then
		 * they may get dropped in pass 2, which breaks the kallsyms
		 * rules.
		 */
		if ((s->addr == text_range_text->end &&
		     strcmp(name, text_range_text->end_sym)) ||
		    (s->addr == text_range_inittext->end &&
		     strcmp(name, text_range_inittext->end_sym)))
			return 0;
	}

	return 1;
}

/* remove all the invalid symbols from the table */
static void shrink_table(void)
{
	unsigned int i, pos;

	pos = 0;
	for (i = 0; i < table_cnt; i++) {
		if (symbol_valid(table[i])) {
			if (pos != i)
				table[pos] = table[i];
			pos++;
		} else {
			free(table[i]);
		}
	}
	table_cnt = pos;

	/* When valid symbol is not registered, exit to error */
	if (!table_cnt) {
		fprintf(stderr, "No valid symbol.\n");
		exit(1);
	}
}

static void read_map(const char *in)
{
	FILE *fp;
	struct sym_entry *sym;

	fp = fopen(in, "r");
	if (!fp) {
		perror(in);
		exit(1);
	}

	while (!feof(fp)) {
		sym = read_symbol(fp);
		if (!sym)
			continue;

		sym->start_pos = table_cnt;

		if (table_cnt >= table_size) {
			table_size += 10000;
			table = realloc(table, sizeof(*table) * table_size);
			if (!table) {
				fprintf(stderr, "out of memory\n");
				fclose(fp);
				exit(1);
			}
		}

		table[table_cnt++] = sym;
	}

	fclose(fp);
}

static void output_label(const char *label)
{
	printf(".globl %s\n", label);
	printf("\tALGN\n");
	printf("%s:\n", label);
}

/* Provide proper symbols relocatability by their '_text' relativeness. */
static void output_address(unsigned long long addr)
{
	if (_text <= addr)
		printf("\tPTR\t_text + %#llx\n", addr - _text);
	else
		printf("\tPTR\t_text - %#llx\n", _text - addr);
}

/* uncompress a compressed symbol. When this function is called, the best table
 * might still be compressed itself, so the function needs to be recursive */
static int expand_symbol(const unsigned char *data, int len, char *result)
{
	int c, rlen, total = 0;

	while (len) {
		c = *data;
		/* if the table holds a single char that is the same as the one
		 * we are looking for, then end the search */
		if (best_table[c][0] == c && best_table_len[c] == 1) {
			*result++ = c;
			total++;
		} else {
			/* if not, recurse and expand */
			rlen = expand_symbol(best_table[c], best_table_len[c], result);
			total += rlen;
			result += rlen;
		}
		data++;
		len--;
	}
	*result = 0;

	return total;
}

static int symbol_absolute(const struct sym_entry *s)
{
	return s->percpu_absolute;
}
static char *s_name(char *buf)
{
	/* Skip the symbol type */
	return buf + 1;
}

static void cleanup_symbol_name(char *s)
{
	char *p;

	if (!lto_clang)
		return;

	/*
	 * ASCII[.]   = 2e
	 * ASCII[0-9] = 30,39
	 * ASCII[A-Z] = 41,5a
	 * ASCII[_]   = 5f
	 * ASCII[a-z] = 61,7a
	 *
	 * As above, replacing '.' with '\0' does not affect the main sorting,
	 * but it helps us with subsorting.
	 */
	p = strchr(s, '.');
	if (p)
		*p = '\0';
}
kallsyms: Improve the performance of kallsyms_lookup_name()
Currently, to search for a symbol, we need to expand the symbols in
'kallsyms_names' one by one, and then use the expanded string for
comparison. It's O(n).
If we sort names in ascending order like addresses, we can also use
binary search. It's O(log(n)).
In order not to change the implementation of "/proc/kallsyms", the table
kallsyms_names[] is still stored in a one-to-one correspondence with the
address in ascending order.
Add array kallsyms_seqs_of_names[], it's indexed by the sequence number
of the sorted names, and the corresponding content is the sequence number
of the sorted addresses. For example:
Assume that the index of NameX in array kallsyms_seqs_of_names[] is 'i',
the content of kallsyms_seqs_of_names[i] is 'k', then the corresponding
address of NameX is kallsyms_addresses[k]. The offset in kallsyms_names[]
is get_symbol_offset(k).
Note that the memory usage will increase by (4 * kallsyms_num_syms)
bytes, the next two patches will reduce (1 * kallsyms_num_syms) bytes
and properly handle the case CONFIG_LTO_CLANG=y.
Performance test results: (x86)
Before:
min=234, max=10364402, avg=5206926
min=267, max=11168517, avg=5207587
After:
min=1016, max=90894, avg=7272
min=1014, max=93470, avg=7293
The average lookup performance of kallsyms_lookup_name() improved 715x.
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
2022-11-02 11:49:14 +03:00
static int compare_names(const void *a, const void *b)
{
	int ret;
	char sa_namebuf[KSYM_NAME_LEN];
	char sb_namebuf[KSYM_NAME_LEN];
	const struct sym_entry *sa = *(const struct sym_entry **)a;
	const struct sym_entry *sb = *(const struct sym_entry **)b;

	expand_symbol(sa->sym, sa->len, sa_namebuf);
	expand_symbol(sb->sym, sb->len, sb_namebuf);
	cleanup_symbol_name(s_name(sa_namebuf));
	cleanup_symbol_name(s_name(sb_namebuf));
	ret = strcmp(s_name(sa_namebuf), s_name(sb_namebuf));
	if (!ret) {
		if (sa->addr > sb->addr)
			return 1;
		else if (sa->addr < sb->addr)
			return -1;

		/* keep old order */
		return (int)(sa->seq - sb->seq);
	}

	return ret;
}

static void sort_symbols_by_name(void)
{
	qsort(table, table_cnt, sizeof(table[0]), compare_names);
}
static void write_src(void)
{
	unsigned int i, k, off;
	unsigned int best_idx[256];
	unsigned int *markers;
	char buf[KSYM_NAME_LEN];

	printf("#include <asm/bitsperlong.h>\n");
	printf("#if BITS_PER_LONG == 64\n");
	printf("#define PTR .quad\n");
	printf("#define ALGN .balign 8\n");
	printf("#else\n");
	printf("#define PTR .long\n");
	printf("#define ALGN .balign 4\n");
	printf("#endif\n");

	printf("\t.section .rodata, \"a\"\n");
	if (!base_relative)
		output_label("kallsyms_addresses");
	else
		output_label("kallsyms_offsets");

	for (i = 0; i < table_cnt; i++) {
		if (base_relative) {
			/*
			 * Use the offset relative to the lowest value
			 * encountered of all relative symbols, and emit
			 * non-relocatable fixed offsets that will be fixed
			 * up at runtime.
			 */
			long long offset;
			int overflow;

			if (!absolute_percpu) {
				offset = table[i]->addr - relative_base;
				overflow = (offset < 0 || offset > UINT_MAX);
			} else if (symbol_absolute(table[i])) {
				offset = table[i]->addr;
				overflow = (offset < 0 || offset > INT_MAX);
			} else {
				offset = relative_base - table[i]->addr - 1;
				overflow = (offset < INT_MIN || offset >= 0);
			}
			if (overflow) {
				fprintf(stderr, "kallsyms failure: "
					"%s symbol value %#llx out of range in relative mode\n",
					symbol_absolute(table[i]) ? "absolute" : "relative",
					table[i]->addr);
				exit(EXIT_FAILURE);
			}
			printf("\t.long\t%#x\n", (int)offset);
		} else if (!symbol_absolute(table[i])) {
			output_address(table[i]->addr);
		} else {
			printf("\tPTR\t%#llx\n", table[i]->addr);
		}
	}
	printf("\n");
	if (base_relative) {
		output_label("kallsyms_relative_base");
		output_address(relative_base);
		printf("\n");
	}

	output_label("kallsyms_num_syms");
	printf("\t.long\t%u\n", table_cnt);
	printf("\n");

	/* table of offset markers, that give the offset in the compressed stream
	 * every 256 symbols */
	markers = malloc(sizeof(unsigned int) * ((table_cnt + 255) / 256));
	if (!markers) {
		fprintf(stderr, "kallsyms failure: "
			"unable to allocate required memory\n");
		exit(EXIT_FAILURE);
	}

	output_label("kallsyms_names");
	off = 0;
	for (i = 0; i < table_cnt; i++) {
		if ((i & 0xFF) == 0)
			markers[i >> 8] = off;
kallsyms: Improve the performance of kallsyms_lookup_name()
Currently, to search for a symbol, we need to expand the symbols in
'kallsyms_names' one by one, and then use the expanded string for
comparison. It's O(n).
If we sort names in ascending order like addresses, we can also use
binary search. It's O(log(n)).
In order not to change the implementation of "/proc/kallsyms", the table
kallsyms_names[] is still stored in a one-to-one correspondence with the
address in ascending order.
Add array kallsyms_seqs_of_names[], it's indexed by the sequence number
of the sorted names, and the corresponding content is the sequence number
of the sorted addresses. For example:
Assume that the index of NameX in array kallsyms_seqs_of_names[] is 'i',
the content of kallsyms_seqs_of_names[i] is 'k', then the corresponding
address of NameX is kallsyms_addresses[k]. The offset in kallsyms_names[]
is get_symbol_offset(k).
Note that the memory usage will increase by (4 * kallsyms_num_syms)
bytes, the next two patches will reduce (1 * kallsyms_num_syms) bytes
and properly handle the case CONFIG_LTO_CLANG=y.
Performance test results: (x86)
Before:
min=234, max=10364402, avg=5206926
min=267, max=11168517, avg=5207587
After:
min=1016, max=90894, avg=7272
min=1014, max=93470, avg=7293
The average lookup performance of kallsyms_lookup_name() improved 715x.
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
2022-11-02 11:49:14 +03:00
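The name-sorted index scheme the message describes can be sketched with toy data (the names, addresses, and array sizes below are hypothetical, not the kernel's real tables): `names[]` and `addrs[]` are in address order, while `seqs_of_names[]` lists address-order indices sorted by name, so a binary search over the name-sorted view yields the address-order index directly.

```c
#include <string.h>

/* Toy tables: names[] and addrs[] are in address order, as in kallsyms. */
static const char *names[] = { "zeta", "alpha", "mid" };
static const unsigned long addrs[] = { 0x1000, 0x2000, 0x3000 };

/* seqs_of_names[] holds address-order indices sorted by name:
 * "alpha"(1), "mid"(2), "zeta"(0). */
static const int seqs_of_names[] = { 1, 2, 0 };

/* Binary-search the name-sorted view; return the address-order index,
 * or -1 if the name is not present. */
static int lookup_seq(const char *name)
{
	int lo = 0, hi = 2;	/* 3 entries in the toy table */

	while (lo <= hi) {
		int mid = lo + (hi - lo) / 2;
		int k = seqs_of_names[mid];	/* address-order index */
		int cmp = strcmp(names[k], name);

		if (cmp == 0)
			return k;
		if (cmp < 0)
			lo = mid + 1;
		else
			hi = mid - 1;
	}
	return -1;
}
```

With this layout, `lookup_seq("mid")` returns the address-order index 2, and the symbol's address is then `addrs[2]` — the O(log n) path the commit message describes.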
		table[i]->seq = i;

		/* There cannot be any symbol of length zero. */
		if (table[i]->len == 0) {
			fprintf(stderr, "kallsyms failure: "
				"unexpected zero symbol length\n");
			exit(EXIT_FAILURE);
		}

		/* Only lengths that fit in up-to-two-byte ULEB128 are supported. */
		if (table[i]->len > 0x3FFF) {
			fprintf(stderr, "kallsyms failure: "
				"unexpected huge symbol length\n");
			exit(EXIT_FAILURE);
		}

		/* Encode length with ULEB128. */
		if (table[i]->len <= 0x7F) {
			/* Most symbols use a single byte for the length. */
			printf("\t.byte 0x%02x", table[i]->len);
			off += table[i]->len + 1;
		} else {
			/* "Big" symbols use two bytes. */
			printf("\t.byte 0x%02x, 0x%02x",
			       (table[i]->len & 0x7F) | 0x80,
			       (table[i]->len >> 7) & 0x7F);
			off += table[i]->len + 2;
		}

		for (k = 0; k < table[i]->len; k++)
			printf(", 0x%02x", table[i]->sym[k]);

		printf("\n");
	}
	printf("\n");

	output_label("kallsyms_markers");
	for (i = 0; i < ((table_cnt + 255) >> 8); i++)
		printf("\t.long\t%u\n", markers[i]);
	printf("\n");

	free(markers);
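The length prefix emitted above is ULEB128 restricted to at most two bytes (the `> 0x3FFF` check guarantees two bytes always suffice). A minimal sketch of the matching encoder and decoder, with hypothetical helper names:

```c
/* Encode a length (< 0x4000) as one- or two-byte ULEB128, the same way
 * the generator emits it; returns the number of bytes written. */
static int uleb128_encode_len(unsigned int len, unsigned char out[2])
{
	if (len <= 0x7F) {
		out[0] = len;		/* single byte, continuation clear */
		return 1;
	}
	out[0] = (len & 0x7F) | 0x80;	/* low 7 bits, continuation set */
	out[1] = (len >> 7) & 0x7F;	/* high 7 bits */
	return 2;
}

/* Matching decoder: consumes one or two bytes, returns the length. */
static unsigned int uleb128_decode_len(const unsigned char *in, int *used)
{
	if (!(in[0] & 0x80)) {
		*used = 1;
		return in[0];
	}
	*used = 2;
	return (in[0] & 0x7F) | ((unsigned int)in[1] << 7);
}
```

For example, 300 encodes as `0xAC 0x02`: the low seven bits with the continuation bit set, then the high bits.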
	sort_symbols_by_name();
	output_label("kallsyms_seqs_of_names");
	for (i = 0; i < table_cnt; i++)
		printf("\t.byte 0x%02x, 0x%02x, 0x%02x\n",
		       (unsigned char)(table[i]->seq >> 16),
		       (unsigned char)(table[i]->seq >> 8),
		       (unsigned char)(table[i]->seq >> 0));
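Each sequence number is emitted as three bytes, most significant first — a 24-bit big-endian value. A sketch of the packing and the reverse operation a consumer would perform (function names are illustrative):

```c
/* Pack a sequence number into three big-endian bytes, mirroring the
 * .byte triple emitted above. */
static void seq_pack(unsigned int seq, unsigned char out[3])
{
	out[0] = (seq >> 16) & 0xFF;
	out[1] = (seq >> 8) & 0xFF;
	out[2] = seq & 0xFF;
}

/* Reverse: reassemble the 24-bit value from the three bytes. */
static unsigned int seq_unpack(const unsigned char in[3])
{
	return ((unsigned int)in[0] << 16) |
	       ((unsigned int)in[1] << 8) |
	       in[2];
}
```

Three bytes cap the table at 2^24 symbols, which comfortably covers kernel symbol counts while saving a byte per entry over a `.long`.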
	printf("\n");

	output_label("kallsyms_token_table");
	off = 0;
	for (i = 0; i < 256; i++) {
		best_idx[i] = off;
		expand_symbol(best_table[i], best_table_len[i], buf);
		printf("\t.asciz\t\"%s\"\n", buf);
		off += strlen(buf) + 1;
	}
	printf("\n");

	output_label("kallsyms_token_index");
	for (i = 0; i < 256; i++)
		printf("\t.short\t%d\n", best_idx[i]);
	printf("\n");
}
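The token table emitted above is what the runtime decompressor walks: each byte of a compressed symbol selects a token string. A toy sketch of that expansion — note the table here is hypothetical and already fully expanded, whereas the real `expand_symbol()` resolves tokens recursively because a token's bytes may themselves be token codes:

```c
#include <string.h>

/* Toy token table: codes 0 and 1 map to single characters, code 2 maps
 * to the substring "write_" (a made-up example token). */
static const char *toy_tokens[3] = { "a", "b", "write_" };

/* Expand a compressed symbol: replace every byte code by its token. */
static void toy_expand(const unsigned char *data, int len, char *result)
{
	int i;

	result[0] = '\0';
	for (i = 0; i < len; i++)
		strcat(result, toy_tokens[data[i]]);
}
```

Expanding the compressed bytes `{2, 0, 1}` against this table yields `"write_ab"` — the per-symbol inverse of the compression described in the file header.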
/* table lookup compression functions */

/* count all the possible tokens in a symbol */
static void learn_symbol(const unsigned char *symbol, int len)
{
	int i;

	for (i = 0; i < len - 1; i++)
		token_profit[symbol[i] + (symbol[i + 1] << 8)]++;
}

/* decrease the count for all the possible tokens in a symbol */
static void forget_symbol(const unsigned char *symbol, int len)
{
	int i;

	for (i = 0; i < len - 1; i++)
		token_profit[symbol[i] + (symbol[i + 1] << 8)]--;
}

/* do the initial token count */
static void build_initial_token_table(void)
{
	unsigned int i;

	for (i = 0; i < table_cnt; i++)
		learn_symbol(table[i]->sym, table[i]->len);
}

static unsigned char *find_token(unsigned char *str, int len,
				 const unsigned char *token)
{
	int i;

	for (i = 0; i < len - 1; i++) {
		if (str[i] == token[0] && str[i + 1] == token[1])
			return &str[i];
	}
	return NULL;
}
2005-04-17 02:20:36 +04:00
/* replace a given token in all the valid symbols. Use the sampled symbols
* to update the counts */
2019-11-23 19:04:38 +03:00
static void compress_symbols ( const unsigned char * str , int idx )
2005-04-17 02:20:36 +04:00
{
2005-09-07 02:16:31 +04:00
unsigned int i , len , size ;
unsigned char * p1 , * p2 ;
2005-04-17 02:20:36 +04:00
2005-09-07 02:16:31 +04:00
for ( i = 0 ; i < table_cnt ; i + + ) {
2005-04-17 02:20:36 +04:00
2020-02-02 08:09:21 +03:00
len = table [ i ] - > len ;
p1 = table [ i ] - > sym ;
2005-09-07 02:16:31 +04:00
/* find the token on the symbol */
2007-06-20 21:09:00 +04:00
p2 = find_token ( p1 , len , str ) ;
2005-09-07 02:16:31 +04:00
if ( ! p2 ) continue ;
/* decrease the counts for this symbol's tokens */
2020-02-02 08:09:21 +03:00
forget_symbol ( table [ i ] - > sym , len ) ;
2005-09-07 02:16:31 +04:00
size = len ;
2005-04-17 02:20:36 +04:00
do {
2005-09-07 02:16:31 +04:00
* p2 = idx ;
p2 + + ;
size - = ( p2 - p1 ) ;
memmove ( p2 , p2 + 1 , size ) ;
p1 = p2 ;
len - - ;
if ( size < 2 ) break ;
2005-04-17 02:20:36 +04:00
/* find the token on the symbol */
2007-06-20 21:09:00 +04:00
p2 = find_token ( p1 , size , str ) ;
2005-04-17 02:20:36 +04:00
2005-09-07 02:16:31 +04:00
} while ( p2 ) ;
2005-04-17 02:20:36 +04:00
2020-02-02 08:09:21 +03:00
table [ i ] - > len = len ;
2005-04-17 02:20:36 +04:00
2005-09-07 02:16:31 +04:00
/* increase the counts for this symbol's new tokens */
2020-02-02 08:09:21 +03:00
learn_symbol ( table [ i ] - > sym , len ) ;
2005-04-17 02:20:36 +04:00
}
}
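The pointer-juggling core of `compress_symbols()` boils down to "replace every occurrence of a two-byte token with one code byte, compacting in place". A simplified standalone sketch of that per-symbol step (`replace_token` is an illustrative name, not part of the tool):

```c
#include <string.h>

/* Replace every occurrence of the two-byte token in buf with the single
 * byte code, shifting the tail left each time; returns the new length.
 * A simplified version of the in-place loop in compress_symbols(). */
static int replace_token(unsigned char *buf, int len,
			 const unsigned char token[2], unsigned char code)
{
	int i = 0;

	while (i < len - 1) {
		if (buf[i] == token[0] && buf[i + 1] == token[1]) {
			buf[i] = code;
			/* close the one-byte gap left by the pair */
			memmove(&buf[i + 1], &buf[i + 2], len - i - 2);
			len--;
		}
		i++;
	}
	return len;
}
```

So `"abab c"`-style data shrinks by one byte per match, which is exactly the "saving 5 bytes per occurrence of `write_`" effect the file header describes, applied one pair at a time.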
/* search the token with the maximum profit */
static int find_best_token(void)
{
	int i, best, bestprofit;

	bestprofit = -10000;
	best = 0;

	for (i = 0; i < 0x10000; i++) {
		if (token_profit[i] > bestprofit) {
			best = i;
			bestprofit = token_profit[i];
		}
	}
	return best;
}

/* this is the core of the algorithm: calculate the "best" table */
static void optimize_result(void)
{
	int i, best;

	/* using the '\0' symbol last allows compress_symbols to use standard
	 * fast string functions */
	for (i = 255; i >= 0; i--) {

		/* if this table slot is empty (it is not used by an actual
		 * original char code) */
		if (!best_table_len[i]) {

			/* find the token with the best profit value */
			best = find_best_token();
			if (token_profit[best] == 0)
				break;

			/* place it in the "best" table */
			best_table_len[i] = 2;
			best_table[i][0] = best & 0xFF;
			best_table[i][1] = (best >> 8) & 0xFF;

			/* replace this token in all the valid symbols */
			compress_symbols(best_table[i], i);
		}
	}
}
/* start by placing the symbols that are actually used on the table */
static void insert_real_symbols_in_table(void)
{
	unsigned int i, j, c;

	for (i = 0; i < table_cnt; i++) {
		for (j = 0; j < table[i]->len; j++) {
			c = table[i]->sym[j];
			best_table[c][0] = c;
			best_table_len[c] = 1;
		}
	}
}

static void optimize_token_table(void)
{
	build_initial_token_table();
	insert_real_symbols_in_table();
	optimize_result();
}
kallsyms, tracing: output more proper symbol name
Impact: bugfix, output more reliable symbol lookup result
Debug tools (dump_stack(), ftrace, ...) like to print out symbols, but
they always print the first aliased symbol (aliased symbols are symbols
with the same address), and the first aliased symbol is sometimes not
the proper one.
# echo function_graph > current_tracer
# cat trace
......
1) 1.923 us | select_nohz_load_balancer();
1) + 76.692 us | }
1) | default_idle() {
1) ==========> | __irqentry_text_start() {
1) 0.000 us | native_apic_mem_write();
1) | irq_enter() {
1) 0.000 us | idle_cpu();
1) | tick_check_idle() {
1) 0.000 us | tick_check_oneshot_broadcast();
1) | tick_nohz_stop_idle() {
......
It's very embarrassing: it outputs "__irqentry_text_start()" when it
should actually output "smp_apic_timer_interrupt()".
(These two symbols have the same address, but "__irqentry_text_start"
is deemed the first aliased symbol by scripts/kallsyms.)
This patch demotes symbols like "__irqentry_text_start" behind the
other aliased symbols, so that a more proper symbol name becomes the first.
Aliased symbols mostly come from linker script. The solution is
guessing "is this symbol defined in linker script", the symbols
defined in linker script will not become the first aliased symbol.
And if symbols are found to be equal in this "linker script provided"
criteria, symbols are sorted by the number of prefix underscores.
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Acked-by: Sam Ravnborg <sam@ravnborg.org>
Reviewed-by: Paulo Marques <pmarques@grupopie.com>
LKML-Reference: <49BA06E2.7080807@cn.fujitsu.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-03-13 10:10:26 +03:00
/* guess for "linker script provide" symbol */
static int may_be_linker_script_provide_symbol(const struct sym_entry *se)
{
	const char *symbol = sym_name(se);
	int len = se->len - 1;

	if (len < 8)
		return 0;

	if (symbol[0] != '_' || symbol[1] != '_')
		return 0;

	/* __start_XXXXX */
	if (!memcmp(symbol + 2, "start_", 6))
		return 1;

	/* __stop_XXXXX */
	if (!memcmp(symbol + 2, "stop_", 5))
		return 1;

	/* __end_XXXXX */
	if (!memcmp(symbol + 2, "end_", 4))
		return 1;

	/* __XXXXX_start */
	if (!memcmp(symbol + len - 6, "_start", 6))
		return 1;

	/* __XXXXX_end */
	if (!memcmp(symbol + len - 4, "_end", 4))
		return 1;

	return 0;
}
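The heuristic above checks fixed prefixes and suffixes on a `__`-prefixed name of at least eight characters. Restated over a plain NUL-terminated string (the original operates on a `sym_entry`, where `sym_name()` skips the type byte and `len` is `se->len - 1`):

```c
#include <string.h>

/* Same heuristic as above, over a plain NUL-terminated symbol name. */
static int looks_linker_provided(const char *name)
{
	size_t len = strlen(name);

	if (len < 8)
		return 0;
	if (name[0] != '_' || name[1] != '_')
		return 0;
	if (!memcmp(name + 2, "start_", 6))	/* __start_XXXXX */
		return 1;
	if (!memcmp(name + 2, "stop_", 5))	/* __stop_XXXXX */
		return 1;
	if (!memcmp(name + 2, "end_", 4))	/* __end_XXXXX */
		return 1;
	if (!memcmp(name + len - 6, "_start", 6))	/* __XXXXX_start */
		return 1;
	if (!memcmp(name + len - 4, "_end", 4))	/* __XXXXX_end */
		return 1;
	return 0;
}
```

It matches section markers such as `__start_kallsyms` or `__irqentry_text_start` but rejects ordinary function names, which is all the sort tie-breaker needs.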
static int compare_symbols(const void *a, const void *b)
{
	const struct sym_entry *sa = *(const struct sym_entry **)a;
	const struct sym_entry *sb = *(const struct sym_entry **)b;
	int wa, wb;

	/* sort by address first */
	if (sa->addr > sb->addr)
		return 1;
	if (sa->addr < sb->addr)
		return -1;

	/* sort by "weakness" type */
	wa = (sa->sym[0] == 'w') || (sa->sym[0] == 'W');
	wb = (sb->sym[0] == 'w') || (sb->sym[0] == 'W');
	if (wa != wb)
		return wa - wb;
	/* sort by "linker script provide" type */
	wa = may_be_linker_script_provide_symbol(sa);
	wb = may_be_linker_script_provide_symbol(sb);
	if (wa != wb)
		return wa - wb;

	/* sort by the number of prefix underscores */
	wa = strspn(sym_name(sa), "_");
	wb = strspn(sym_name(sb), "_");
	if (wa != wb)
		return wa - wb;

	/* sort by initial order, so that other symbols are left undisturbed */
	return sa->start_pos - sb->start_pos;
}

static void sort_symbols(void)
{
	qsort(table, table_cnt, sizeof(table[0]), compare_symbols);
}
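`compare_symbols()` is a classic multi-key `qsort()` comparator: address is the primary key, and the later keys only break ties, with the original position last to keep the sort stable. A toy sketch of the same key chain (`struct rec` and `cmp_rec` are illustrative, not the tool's types):

```c
#include <stdlib.h>

/* Hypothetical record: sort by addr ascending, then strong before weak,
 * then original position -- the key chain compare_symbols() uses. */
struct rec {
	unsigned long addr;
	int weak;	/* 1 for weak symbols, 0 for strong */
	int start_pos;	/* original input order, for stability */
};

static int cmp_rec(const void *a, const void *b)
{
	const struct rec *ra = a;
	const struct rec *rb = b;

	if (ra->addr > rb->addr)
		return 1;
	if (ra->addr < rb->addr)
		return -1;
	if (ra->weak != rb->weak)	/* tie-break: strong first */
		return ra->weak - rb->weak;
	return ra->start_pos - rb->start_pos;	/* final tie-break */
}
```

Returning the key difference only at the first level where the records differ is what makes each later key a pure tie-breaker.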
static void make_percpus_absolute(void)
{
	unsigned int i;

	for (i = 0; i < table_cnt; i++)
		if (symbol_in_range(table[i], &percpu_range, 1)) {
kallsyms: don't overload absolute symbol type for percpu symbols
Commit c6bda7c988a5 ("kallsyms: fix percpu vars on x86-64 with
relocation") overloaded the 'A' (absolute) symbol type to signify that a
symbol is not subject to dynamic relocation. However, the original A
type does not imply that at all, and depending on the version of the
toolchain, many A type symbols are emitted that are in fact relative to
the kernel text, i.e., if the kernel is relocated at runtime, these
symbols should be updated as well.
For instance, on sparc32, the following symbols are emitted as absolute
(kindly provided by Guenter Roeck):
f035a420 A _etext
f03d9000 A _sdata
f03de8c4 A jiffies
f03f8860 A _edata
f03fc000 A __init_begin
f041bdc8 A __init_text_end
f0423000 A __bss_start
f0423000 A __init_end
f044457d A __bss_stop
f044457d A _end
On x86_64, similar behavior can be observed:
ffffffff81a00000 A __end_rodata_hpage_align
ffffffff81b19000 A __vvar_page
ffffffff81d3d000 A _end
Even if only a couple of them pass the symbol range check that results
in them to be taken into account for the final kallsyms symbol table, it
is obvious that 'A' does not mean the symbol does not need to be updated
at relocation time, and overloading its meaning to signify that is
perhaps not a good idea.
So instead, add a new percpu_absolute member to struct sym_entry, and
when --absolute-percpu is in effect, use it to record symbols whose
addresses should be emitted as final values rather than values that
still require relocation at runtime. That way, we can drop the check
against the 'A' type.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Reviewed-by: Kees Cook <keescook@chromium.org>
Tested-by: Kees Cook <keescook@chromium.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Michal Marek <mmarek@suse.cz>
Acked-by: Rusty Russell <rusty@rustcorp.com.au>
Cc: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-03-16 00:58:15 +03:00
			/*
			 * Keep the 'A' override for percpu symbols to
			 * ensure consistent behavior compared to older
			 * versions of this tool.
			 */
			table[i]->sym[0] = 'A';
			table[i]->percpu_absolute = 1;
		}
}
/* find the minimum non-absolute symbol address */
static void record_relative_base(void)
{
	unsigned int i;

	for (i = 0; i < table_cnt; i++)
		if (!symbol_absolute(table[i])) {
			/*
			 * The table is sorted by address.
			 * Take the first non-absolute symbol value.
			 */
			relative_base = table[i]->addr;
			return;
		}
}
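With `--base-relative`, addresses are stored as 32-bit offsets from the `relative_base` recorded above, halving the table on 64-bit targets. A sketch of the simple positive-offset case (function names are illustrative; the absolute-percpu variant, which encodes per-cpu addresses as positive values and relative ones as negatives, is not shown):

```c
#include <stdint.h>

/* Store addresses as 32-bit offsets from the lowest symbol address,
 * the plain base-relative case record_relative_base() enables. */
static void encode_offsets(const uint64_t *addrs, unsigned int n,
			   uint64_t *base, uint32_t *offsets)
{
	unsigned int i;

	*base = addrs[0];	/* table is sorted; first entry is lowest */
	for (i = 0; i < n; i++)
		offsets[i] = (uint32_t)(addrs[i] - *base);
}

/* Runtime reconstruction: base plus offset gives the full address. */
static uint64_t decode_offset(uint64_t base, uint32_t off)
{
	return base + off;
}
```

Since the base is a single relocated value, none of the per-symbol offsets need relocation entries — the space saving the commit message quantifies.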
int main(int argc, char **argv)
{
	while (1) {
		static struct option long_options[] = {
			{"all-symbols", no_argument, &all_symbols, 1},
			{"absolute-percpu", no_argument, &absolute_percpu, 1},
			{"base-relative", no_argument, &base_relative, 1},
kallsyms: Correctly sequence symbols when CONFIG_LTO_CLANG=y
LLVM appends various suffixes for local functions and variables, suffixes
observed:
- foo.llvm.[0-9a-f]+
- foo.[0-9a-f]+
Therefore, when CONFIG_LTO_CLANG=y, kallsyms_lookup_name() needs to
truncate the suffix of the symbol name before comparing the local function
or variable name.
Old implementation code:
- if (strcmp(namebuf, name) == 0)
- return kallsyms_sym_address(i);
- if (cleanup_symbol_name(namebuf) && strcmp(namebuf, name) == 0)
- return kallsyms_sym_address(i);
The preceding process is traversed by address from low to high. That is,
for those with the same name after the suffix is removed, the one with
the smallest address is returned first. Therefore, when sorting in the
tool, if the raw names are the same, they should be sorted by address in
ascending order.
ASCII[.] = 2e
ASCII[0-9] = 30,39
ASCII[A-Z] = 41,5a
ASCII[_] = 5f
ASCII[a-z] = 61,7a
According to the preceding ASCII code values, the following sorting result
is strictly followed.
 -----------------------------------
 | main-key        | sub-key       |
 |---------------------------------|
 |                 | addr_lowest   |
 | <name>          | ...           |
 | <name>.<suffix> | ...           |
 |                 | addr_highest  |
 |---------------------------------|
 | <name>?<others> |               |   /* ? is [_A-Za-z0-9] */
 -----------------------------------
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
2022-11-02 11:49:15 +03:00
			{"lto-clang",       no_argument, &lto_clang,       1},
2022-09-26 12:02:28 +03:00
			{},
		};

		int c = getopt_long(argc, argv, "", long_options, NULL);

		if (c == -1)
			break;
		if (c != 0)
			usage();
	}
	if (optind >= argc)
2005-04-17 02:20:36 +04:00
		usage();
2022-09-26 12:02:28 +03:00
	read_map(argv[optind]);
2019-11-23 19:04:31 +03:00
	shrink_table();
2014-03-17 07:35:46 +04:00
	if (absolute_percpu)
		make_percpus_absolute();
2019-11-23 19:04:32 +03:00
	sort_symbols();
kallsyms: add support for relative offsets in kallsyms address table
Similar to how relative extables are implemented, it is possible to emit
the kallsyms table in such a way that it contains offsets relative to
some anchor point in the kernel image rather than absolute addresses.
On 64-bit architectures, it cuts the size of the kallsyms address table
in half, since offsets between kernel symbols can typically be expressed
in 32 bits. This saves several hundreds of kilobytes of permanent
.rodata on average. In addition, the kallsyms address table is no
longer subject to dynamic relocation when CONFIG_RELOCATABLE is in
effect, so the relocation work done after decompression now doesn't have
to do relocation updates for all these values. This saves up to 24
bytes (i.e., the size of an ELF64 RELA relocation table entry) per value,
which easily adds up to a couple of megabytes of uncompressed __init
data on ppc64 or arm64. Even if these relocation entries typically
compress well, the combined size reduction of 2.8 MB uncompressed for a
ppc64_defconfig build (of which 2.4 MB is __init data) results in a ~500
KB space saving in the compressed image.
Since it is useful for some architectures (like x86) to retain the
ability to emit absolute values as well, this patch also adds support
for capturing both absolute and relative values when
KALLSYMS_ABSOLUTE_PERCPU is in effect, by emitting absolute per-cpu
addresses as positive 32-bit values, and addresses relative to the
lowest encountered relative symbol as negative values, which are
subtracted from the runtime address of this base symbol to produce the
actual address.
Support for the above is enabled by default for all architectures except
IA-64 and Tile-GX, whose symbols are too far apart to capture in this
manner.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Reviewed-by: Kees Cook <keescook@chromium.org>
Tested-by: Kees Cook <keescook@chromium.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Michal Marek <mmarek@suse.cz>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-03-16 00:58:19 +03:00
	if (base_relative)
		record_relative_base();
2009-01-14 23:38:20 +03:00
	optimize_token_table();
2005-04-17 02:20:36 +04:00
	write_src();

	return 0;
}