/*
 * Copyright (C) 2012,2013 - ARM Ltd
 * Author: Marc Zyngier <marc.zyngier@arm.com>
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License, version 2, as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */
#include <linux/linkage.h>

#include <asm/assembler.h>
#include <asm/kvm_arm.h>
#include <asm/kvm_mmu.h>
#include <asm/pgtable-hwdef.h>
#include <asm/sysreg.h>
#include <asm/virt.h>
	.text
	.pushsection	.hyp.idmap.text, "ax"

	.align	11
ENTRY(__kvm_hyp_init)
	ventry	__invalid		// Synchronous EL2t
	ventry	__invalid		// IRQ EL2t
	ventry	__invalid		// FIQ EL2t
	ventry	__invalid		// Error EL2t

	ventry	__invalid		// Synchronous EL2h
	ventry	__invalid		// IRQ EL2h
	ventry	__invalid		// FIQ EL2h
	ventry	__invalid		// Error EL2h

	ventry	__do_hyp_init		// Synchronous 64-bit EL1
	ventry	__invalid		// IRQ 64-bit EL1
	ventry	__invalid		// FIQ 64-bit EL1
	ventry	__invalid		// Error 64-bit EL1

	ventry	__invalid		// Synchronous 32-bit EL1
	ventry	__invalid		// IRQ 32-bit EL1
	ventry	__invalid		// FIQ 32-bit EL1
	ventry	__invalid		// Error 32-bit EL1

__invalid:
	b	.
	/*
	 * x0: HYP pgd
	 * x1: HYP stack
	 * x2: HYP vectors
	 */
__do_hyp_init:
	/* Check for a stub HVC call */
	cmp	x0, #HVC_STUB_HCALL_NR
	b.lo	__kvm_handle_stub_hvc
	phys_to_ttbr x4, x0
	msr	ttbr0_el2, x4

	mrs	x4, tcr_el1
	ldr	x5, =TCR_EL2_MASK
	and	x4, x4, x5
	mov	x5, #TCR_EL2_RES1
	orr	x4, x4, x5
	/*
	 * The ID map may be configured to use an extended virtual address
	 * range. This is only the case if system RAM is out of range for the
	 * currently configured page size and VA_BITS, in which case we will
	 * also need the extended virtual range for the HYP ID map, or we won't
	 * be able to enable the EL2 MMU.
	 *
	 * However, at EL2, there is only one TTBR register, and we can't switch
	 * between translation tables *and* update TCR_EL2.T0SZ at the same
	 * time. Bottom line: we need to use the extended range with *both* our
	 * translation tables.
	 *
	 * So use the same T0SZ value we use for the ID map.
	 */
	ldr_l	x5, idmap_t0sz
	bfi	x4, x5, TCR_T0SZ_OFFSET, TCR_TxSZ_WIDTH
	/*
	 * Set the PS bits in TCR_EL2.
	 */
	tcr_compute_pa_size x4, #TCR_EL2_PS_SHIFT, x5, x6

	msr	tcr_el2, x4

	mrs	x4, mair_el1
	msr	mair_el2, x4
	isb

	/* Invalidate the stale TLBs from Bootloader */
	tlbi	alle2
	dsb	sy
	/*
	 * Preserve all the RES1 bits while setting the default flags,
	 * as well as the EE bit on BE. Drop the A flag since the compiler
	 * is allowed to generate unaligned accesses.
	 */
	ldr	x4, =(SCTLR_EL2_RES1 | (SCTLR_ELx_FLAGS & ~SCTLR_ELx_A))
CPU_BE(	orr	x4, x4, #SCTLR_ELx_EE)
	msr	sctlr_el2, x4
	isb
	/* Set the stack and new vectors */
	kern_hyp_va	x1
	mov	sp, x1
	kern_hyp_va	x2
	msr	vbar_el2, x2

	/* copy tpidr_el1 into tpidr_el2 for use by HYP */
	mrs	x1, tpidr_el1
	msr	tpidr_el2, x1

	/* Hello, World! */
	eret
ENDPROC(__kvm_hyp_init)
ENTRY(__kvm_handle_stub_hvc)
	cmp	x0, #HVC_SOFT_RESTART
	b.ne	1f

	/* This is where we're about to jump, staying at EL2 */
	msr	elr_el2, x1
	mov	x0, #(PSR_F_BIT | PSR_I_BIT | PSR_A_BIT | PSR_D_BIT | PSR_MODE_EL2h)
	msr	spsr_el2, x0

	/* Shuffle the arguments, and don't come back */
	mov	x0, x2
	mov	x1, x3
	mov	x2, x4
	b	reset

1:	cmp	x0, #HVC_RESET_VECTORS
	b.ne	1f
reset:
	/*
	 * Reset kvm back to the hyp stub. Do not clobber x0-x4 in
	 * case we are coming via HVC_SOFT_RESTART.
	 */
	mrs	x5, sctlr_el2
	ldr	x6, =SCTLR_ELx_FLAGS
	bic	x5, x5, x6		// Clear SCTLR_ELx_M and friends
	pre_disable_mmu_workaround
	msr	sctlr_el2, x5
	isb

	/* Install stub vectors */
	adr_l	x5, __hyp_stub_vectors
	msr	vbar_el2, x5

	mov	x0, xzr
	eret
1:	/* Bad stub call */
	ldr	x0, =HVC_STUB_ERR
	eret
ENDPROC(__kvm_handle_stub_hvc)
	.ltorg

	.popsection