/*
 * ld script to make ARM Linux kernel
 * taken from the i386 version by Russell King
 * Written by Martin Mares <mj@atrey.karlin.mff.cuni.cz>
 */

#include <asm-generic/vmlinux.lds.h>
#include <asm/cache.h>
#include <asm/kernel-pgtable.h>
#include <asm/thread_info.h>
#include <asm/memory.h>
#include <asm/page.h>
#include <asm/pgtable.h>

#include "image.h"

/* .exit.text needed in case of alternative patching */
#define ARM_EXIT_KEEP(x)	x
#define ARM_EXIT_DISCARD(x)

OUTPUT_ARCH(aarch64)
ENTRY(_text)

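/* jiffies is simply an alias of the 64-bit jiffies_64 counter. */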
jiffies = jiffies_64;

#define HYPERVISOR_TEXT					\
	/*						\
	 * Align to 4 KB so that			\
	 * a) the HYP vector table is at its minimum	\
	 *    alignment of 2048 bytes			\
	 * b) the HYP init code will not cross a page	\
	 *    boundary if its size does not exceed	\
	 *    4 KB (see related ASSERT() below)		\
	 */						\
	. = ALIGN(SZ_4K);				\
	VMLINUX_SYMBOL(__hyp_idmap_text_start) = .;	\
	*(.hyp.idmap.text)				\
	VMLINUX_SYMBOL(__hyp_idmap_text_end) = .;	\
	VMLINUX_SYMBOL(__hyp_text_start) = .;		\
	*(.hyp.text)					\
	VMLINUX_SYMBOL(__hyp_text_end) = .;

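/*
 * The .idmap.text output collected below must fit in a single 4 KB page;
 * see the corresponding ASSERT() at the end of this file.
 */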
#define IDMAP_TEXT					\
	. = ALIGN(SZ_4K);				\
	VMLINUX_SYMBOL(__idmap_text_start) = .;		\
	*(.idmap.text)					\
	VMLINUX_SYMBOL(__idmap_text_end) = .;

/*
 * The size of the PE/COFF section that covers the kernel image, which
 * runs from stext to _edata, must be a round multiple of the PE/COFF
 * FileAlignment, which we set to its minimum value of 0x200. 'stext'
 * itself is 4 KB aligned, so padding out _edata to a 0x200 aligned
 * boundary should be sufficient.
 */
PECOFF_FILE_ALIGNMENT = 0x200;

#ifdef CONFIG_EFI
#define PECOFF_EDATA_PADDING	\
	.pecoff_edata_padding : { BYTE(0); . = ALIGN(PECOFF_FILE_ALIGNMENT); }
#else
#define PECOFF_EDATA_PADDING
#endif

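/*
 * When rodata protection is enabled, the boundaries marked with
 * ALIGN_DEBUG_RO_MIN() are aligned so that memory permissions can later be
 * changed at section or page granularity.
 */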
#if defined(CONFIG_DEBUG_ALIGN_RODATA)
#define ALIGN_DEBUG_RO			. = ALIGN(1<<SECTION_SHIFT);
#define ALIGN_DEBUG_RO_MIN(min)		ALIGN_DEBUG_RO
#elif defined(CONFIG_DEBUG_RODATA)
#define ALIGN_DEBUG_RO			. = ALIGN(1<<PAGE_SHIFT);
#define ALIGN_DEBUG_RO_MIN(min)		ALIGN_DEBUG_RO
#else
#define ALIGN_DEBUG_RO
#define ALIGN_DEBUG_RO_MIN(min)		. = ALIGN(min);
#endif

SECTIONS
{
	/*
	 * XXX: The linker does not define how output sections are
	 * assigned to input sections when there are multiple statements
	 * matching the same input section name.  There is no documented
	 * order of matching.
	 */
	/DISCARD/ : {
		ARM_EXIT_DISCARD(EXIT_TEXT)
		ARM_EXIT_DISCARD(EXIT_DATA)
		EXIT_CALL
		*(.discard)
		*(.discard.*)
		*(.interp .dynamic)
	}

	. = KIMAGE_VADDR + TEXT_OFFSET;

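	/*
	 * .head.text carries the Image header and the early entry code from
	 * head.S; it must stay at the very start of the image (see the
	 * "HEAD is misaligned" ASSERT() at the end of this file).
	 */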
	.head.text : {
		_text = .;
		HEAD_TEXT
	}
	ALIGN_DEBUG_RO_MIN(PAGE_SIZE)
	.text : {			/* Real text segment		*/
		_stext = .;		/* Text and read-only data	*/
			__exception_text_start = .;
			*(.exception.text)
			__exception_text_end = .;
			IRQENTRY_TEXT
			TEXT_TEXT
			SCHED_TEXT
			LOCK_TEXT
			HYPERVISOR_TEXT
			IDMAP_TEXT
			*(.fixup)
			*(.gnu.warning)
		. = ALIGN(16);
		*(.got)			/* Global offset table		*/
	}

	RO_DATA(PAGE_SIZE)
	EXCEPTION_TABLE(8)
	NOTES

	ALIGN_DEBUG_RO_MIN(PAGE_SIZE)
	_etext = .;			/* End of text and rodata section */
	__init_begin = .;

	INIT_TEXT_SECTION(8)
	.exit.text : {
		ARM_EXIT_KEEP(EXIT_TEXT)
	}

	.init.data : {
		INIT_DATA
		INIT_SETUP(16)
		INIT_CALLS
		CON_INITCALL
		SECURITY_INITCALL
		INIT_RAM_FS
	}
	.exit.data : {
		ARM_EXIT_KEEP(EXIT_DATA)
	}

	PERCPU_SECTION(L1_CACHE_BYTES)

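	/*
	 * .altinstructions holds the descriptors used to patch in alternative
	 * instruction sequences at boot, based on the CPU features and errata
	 * that are detected.
	 */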
	. = ALIGN(4);
	.altinstructions : {
		__alt_instructions = .;
		*(.altinstructions)
		__alt_instructions_end = .;
	}
	.altinstr_replacement : {
		*(.altinstr_replacement)
	}

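	/*
	 * Dynamic relocation and symbol sections, emitted when the kernel is
	 * linked as a relocatable image; the relocations in .rela are applied
	 * during early boot.
	 */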
	.rela : ALIGN(8) {
		__reloc_start = .;
		*(.rela .rela*)
		__reloc_end = .;
	}
	.dynsym : ALIGN(8) {
		__dynsym_start = .;
		*(.dynsym)
	}
	.dynstr : {
		*(.dynstr)
	}
	.hash : {
		*(.hash)
	}

	. = ALIGN(PAGE_SIZE);
	__init_end = .;

	_data = .;
	_sdata = .;
	RW_DATA_SECTION(L1_CACHE_BYTES, PAGE_SIZE, THREAD_SIZE)
	PECOFF_EDATA_PADDING
	_edata = .;

	BSS_SECTION(0, 0, 0)

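	/*
	 * Reserve page-aligned space for the initial identity-map and swapper
	 * page tables, which are populated during early boot. They take up no
	 * space in the binary but are part of the kernel's memory footprint.
	 */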
	. = ALIGN(PAGE_SIZE);
	idmap_pg_dir = .;
	. += IDMAP_DIR_SIZE;
	swapper_pg_dir = .;
	. += SWAPPER_DIR_SIZE;

	_end = .;

	STABS_DEBUG

	HEAD_SYMBOLS
}

/*
 * The HYP init code and ID map text can't be longer than a page each,
 * and should not cross a page boundary.
 */
ASSERT(__hyp_idmap_text_end - (__hyp_idmap_text_start & ~(SZ_4K - 1)) <= SZ_4K,
	"HYP init code too big or misaligned")
ASSERT(__idmap_text_end - (__idmap_text_start & ~(SZ_4K - 1)) <= SZ_4K,
	"ID map text too big or misaligned")

/*
 * If padding is applied before .head.text, virt<->phys conversions will fail.
 */
ASSERT(_text == (KIMAGE_VADDR + TEXT_OFFSET), "HEAD is misaligned")