/*
 * ld script to make ARM Linux kernel
 * taken from the i386 version by Russell King
 * Written by Martin Mares <mj@atrey.karlin.mff.cuni.cz>
 */

#include <asm-generic/vmlinux.lds.h>
#include <asm/cache.h>
#include <asm/kernel-pgtable.h>
#include <asm/thread_info.h>
#include <asm/memory.h>
#include <asm/page.h>
#include <asm/pgtable.h>
#include "image.h"
/* .exit.text needed in case of alternative patching */
#define ARM_EXIT_KEEP(x)	x
#define ARM_EXIT_DISCARD(x)
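/*
 * ARM_EXIT_KEEP() emits its argument, while ARM_EXIT_DISCARD() expands
 * to nothing, so EXIT_TEXT/EXIT_DATA land in .exit.text/.exit.data
 * below instead of being thrown away via /DISCARD/.
 */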
OUTPUT_ARCH(aarch64)
ENTRY(_text)

jiffies = jiffies_64;
#define HYPERVISOR_TEXT					\
	/*						\
	 * Align to 4 KB so that			\
	 * a) the HYP vector table is at its minimum	\
	 *    alignment of 2048 bytes			\
	 * b) the HYP init code will not cross a page	\
	 *    boundary if its size does not exceed	\
	 *    4 KB (see related ASSERT() below)		\
	 */						\
	. = ALIGN(SZ_4K);				\
	VMLINUX_SYMBOL(__hyp_idmap_text_start) = .;	\
	*(.hyp.idmap.text)				\
	VMLINUX_SYMBOL(__hyp_idmap_text_end) = .;	\
	VMLINUX_SYMBOL(__hyp_text_start) = .;		\
	*(.hyp.text)					\
	VMLINUX_SYMBOL(__hyp_text_end) = .;
#define IDMAP_TEXT					\
	. = ALIGN(SZ_4K);				\
	VMLINUX_SYMBOL(__idmap_text_start) = .;		\
	*(.idmap.text)					\
	VMLINUX_SYMBOL(__idmap_text_end) = .;
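/*
 * Note: .idmap.text is assumed here to hold the code that runs via the
 * identity mapping (e.g. around enabling the MMU); keeping it within a
 * single 4 KB window lets one ID map page cover it, as checked by the
 * ASSERT() at the end of this file.
 */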
#ifdef CONFIG_HIBERNATION
#define HIBERNATE_TEXT					\
	. = ALIGN(SZ_4K);				\
	VMLINUX_SYMBOL(__hibernate_exit_text_start) = .;\
	*(.hibernate_exit.text)				\
	VMLINUX_SYMBOL(__hibernate_exit_text_end) = .;
#else
#define HIBERNATE_TEXT
#endif
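/*
 * Like the HYP and ID map text above, the hibernate exit code is
 * aligned to 4 KB so it fits within a single page; the corresponding
 * ASSERT() at the end of this file enforces the size limit.
 */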
/*
 * The size of the PE/COFF section that covers the kernel image, which
 * runs from stext to _edata, must be a round multiple of the PE/COFF
 * FileAlignment, which we set to its minimum value of 0x200. 'stext'
 * itself is 4 KB aligned, so padding out _edata to a 0x200 aligned
 * boundary should be sufficient.
 */
PECOFF_FILE_ALIGNMENT = 0x200;

#ifdef CONFIG_EFI
#define PECOFF_EDATA_PADDING	\
	.pecoff_edata_padding : { BYTE(0); . = ALIGN(PECOFF_FILE_ALIGNMENT); }
#else
#define PECOFF_EDATA_PADDING
#endif
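/*
 * The BYTE(0) presumably keeps .pecoff_edata_padding from being
 * discarded as an empty section, so the ALIGN() always takes effect
 * and _edata ends up on a PECOFF_FILE_ALIGNMENT boundary when
 * CONFIG_EFI=y.
 */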
#if defined(CONFIG_DEBUG_ALIGN_RODATA)
/*
 *  4 KB granule:   1 level 2 entry
 * 16 KB granule: 128 level 3 entries, with contiguous bit
 * 64 KB granule:  32 level 3 entries, with contiguous bit
 */
#define SEGMENT_ALIGN			SZ_2M
#else
/*
 *  4 KB granule:  16 level 3 entries, with contiguous bit
 * 16 KB granule:   4 level 3 entries, without contiguous bit
 * 64 KB granule:   1 level 3 entry
 */
#define SEGMENT_ALIGN			SZ_64K
#endif
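/*
 * The entry counts above follow from simple division: SZ_64K is
 * 16 x 4 KB, 4 x 16 KB or 1 x 64 KB pages, while SZ_2M is one 4 KB
 * granule level 2 block, or 128 x 16 KB / 32 x 64 KB contiguous
 * level 3 entries.
 */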
SECTIONS
{
	/*
	 * XXX: The linker does not define how output sections are
	 * assigned to input sections when there are multiple statements
	 * matching the same input section name.  There is no documented
	 * order of matching.
	 */
	/DISCARD/ : {
		ARM_EXIT_DISCARD(EXIT_TEXT)
		ARM_EXIT_DISCARD(EXIT_DATA)
		EXIT_CALL
		*(.discard)
		*(.discard.*)
		*(.interp .dynamic)
	}

	. = KIMAGE_VADDR + TEXT_OFFSET;
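	/*
	 * The image is linked at KIMAGE_VADDR plus TEXT_OFFSET; the
	 * ASSERT() on _text at the end of this file catches anything
	 * being emitted ahead of .head.text and shifting this base.
	 */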
	.head.text : {
		_text = .;
		HEAD_TEXT
	}
	.text : {			/* Real text segment		*/
		_stext = .;		/* Text and read-only data	*/
			__exception_text_start = .;
			*(.exception.text)
			__exception_text_end = .;
			IRQENTRY_TEXT
			SOFTIRQENTRY_TEXT
			TEXT_TEXT
			SCHED_TEXT
			LOCK_TEXT
			HYPERVISOR_TEXT
			IDMAP_TEXT
			HIBERNATE_TEXT
			*(.fixup)
			*(.gnu.warning)
		. = ALIGN(16);
		*(.got)			/* Global offset table		*/
	}
	. = ALIGN(SEGMENT_ALIGN);
	RO_DATA(PAGE_SIZE)		/* everything from this point to     */
	EXCEPTION_TABLE(8)		/* _etext will be marked RO NX	     */
	NOTES
	. = ALIGN(SEGMENT_ALIGN);
	_etext = .;			/* End of text and rodata section */
	__init_begin = .;

	INIT_TEXT_SECTION(8)
	.exit.text : {
		ARM_EXIT_KEEP(EXIT_TEXT)
	}
	.init.data : {
		INIT_DATA
		INIT_SETUP(16)
		INIT_CALLS
		CON_INITCALL
		SECURITY_INITCALL
		INIT_RAM_FS
		*(.init.rodata.* .init.bss)	/* from the EFI stub */
	}
	.exit.data : {
		ARM_EXIT_KEEP(EXIT_DATA)
	}
	PERCPU_SECTION(L1_CACHE_BYTES)
	. = ALIGN(4);
	.altinstructions : {
		__alt_instructions = .;
		*(.altinstructions)
		__alt_instructions_end = .;
	}
	.altinstr_replacement : {
		*(.altinstr_replacement)
	}
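	/*
	 * __alt_instructions/__alt_instructions_end bracket the table
	 * consumed by the alternatives patching code at boot (see the
	 * .exit.text comment near the top of this file).
	 */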
	.rela : ALIGN(8) {
		*(.rela .rela*)
	}
	.dynsym : ALIGN(8) {
		*(.dynsym)
	}
	.dynstr : {
		*(.dynstr)
	}
	.hash : {
		*(.hash)
	}
	__rela_offset	= ADDR(.rela) - KIMAGE_VADDR;
	__rela_size	= SIZEOF(.rela);
	__dynsym_offset	= ADDR(.dynsym) - KIMAGE_VADDR;
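	/*
	 * These offsets are taken relative to KIMAGE_VADDR so that the
	 * early boot code can locate the RELA and dynsym tables from the
	 * runtime image base when the kernel is built as relocatable.
	 */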
	. = ALIGN(SEGMENT_ALIGN);
	__init_end = .;
	_data = .;
	_sdata = .;
	RW_DATA_SECTION(L1_CACHE_BYTES, PAGE_SIZE, THREAD_SIZE)
	PECOFF_EDATA_PADDING
	_edata = .;

	BSS_SECTION(0, 0, 0)
	. = ALIGN(PAGE_SIZE);
	idmap_pg_dir = .;
	. += IDMAP_DIR_SIZE;
	swapper_pg_dir = .;
	. += SWAPPER_DIR_SIZE;
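	/*
	 * The initial ID map and swapper page tables are carved out of
	 * the image here, after the BSS but before _end, using the sizes
	 * provided by asm/kernel-pgtable.h, so they count towards the
	 * kernel's runtime footprint.
	 */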
	_end = .;

	STABS_DEBUG
	HEAD_SYMBOLS
}
/*
 * The HYP init code and ID map text can't be longer than a page each,
 * and should not cross a page boundary.
 */
ASSERT(__hyp_idmap_text_end - (__hyp_idmap_text_start & ~(SZ_4K - 1)) <= SZ_4K,
	"HYP init code too big or misaligned")
ASSERT(__idmap_text_end - (__idmap_text_start & ~(SZ_4K - 1)) <= SZ_4K,
	"ID map text too big or misaligned")
#ifdef CONFIG_HIBERNATION
ASSERT(__hibernate_exit_text_end - (__hibernate_exit_text_start & ~(SZ_4K - 1))
	<= SZ_4K, "Hibernate exit text too big or misaligned")
#endif
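/*
 * Each ASSERT() above rounds the start symbol down to a 4 KB boundary
 * and requires the end to lie within 4 KB of it, i.e. the code must
 * both fit in and not cross a single 4 KB page.
 */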
/*
 * If padding is applied before .head.text, virt<->phys conversions will fail.
 */
ASSERT(_text == (KIMAGE_VADDR + TEXT_OFFSET), "HEAD is misaligned")