#include <linux/irqchip/arm-gic.h>
#include <asm/assembler.h>
KVM: ARM: World-switch implementation
Provides a complete world-switch implementation to switch to other guests
running in non-secure modes. Includes Hyp exception handlers that
capture the necessary exception information and store it on the
VCPU and KVM structures.
The following Hyp-ABI is also documented in the code:
Hyp-ABI: Calling HYP-mode functions from host (in SVC mode):
Switching to Hyp mode is done through a simple HVC #0 instruction. The
exception vector code will check that the HVC comes from VMID==0 and if
so will push the necessary state (SPSR, lr_usr) on the Hyp stack.
- r0 contains a pointer to a HYP function
- r1, r2, and r3 contain arguments to the above function.
- The HYP function will be called with its arguments in r0, r1 and r2.
On HYP function return, we return directly to SVC.
A call to a function executing in Hyp mode is performed like the following:
<svc code>
ldr r0, =BSYM(my_hyp_fn)
ldr r1, =my_param
hvc #0 ; Call my_hyp_fn(my_param) from HYP mode
<svc code>
Otherwise, the world switch is pretty straightforward. All state that
can be modified by the guest is first backed up on the Hyp stack, and the
VCPU values are loaded onto the hardware. State that is not loaded, but
theoretically modifiable by the guest, is protected through the
virtualization features so that it generates a trap and causes software
emulation. Upon guest return, all state is restored from hardware onto the
VCPU struct and the original state is restored from the Hyp stack onto the
hardware.
SMP support, using the VMPIDR calculated on the basis of the host MPIDR
and overriding the low bits with the KVM vcpu_id, was contributed by Marc
Zyngier. Reuse of VMIDs was implemented by Antonios Motakis and adapted from
a separate patch into the appropriate patches introducing the
functionality. Note that the VMIDs are stored per VM as required by the ARM
architecture reference manual.
To support VFP/NEON we trap those instructions using the HCPTR. When
we trap, we switch the FPU. After a guest exit, the VFP state is
returned to the host. When disabling access to floating point
instructions, we also mask FPEXC_EN in order to avoid the guest
receiving Undefined instruction exceptions before we have a chance to
switch back the floating point state. We are reusing vfp_hard_struct,
so we depend on VFPv3 being enabled in the host kernel; if it is not, we
still trap cp10 and cp11 in order to inject an undefined instruction
exception whenever the guest tries to use VFP/NEON. VFP/NEON support was
developed by Antonios Motakis and Rusty Russell.
Aborts that are permission faults, and not stage-1 page table walk faults,
do not report the faulting address in the HPFAR. We have to resolve the
IPA ourselves and store it on the VCPU struct, just like the HPFAR
register. If the IPA cannot be resolved, it means another CPU is playing
with the page tables, and we simply restart the guest. This quirk was
fixed by Marc Zyngier.
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Antonios Motakis <a.motakis@virtualopensystems.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <c.dall@virtualopensystems.com>
#define VCPU_USR_REG(_reg_nr)	(VCPU_USR_REGS + (_reg_nr * 4))
#define VCPU_USR_SP		(VCPU_USR_REG(13))
#define VCPU_USR_LR		(VCPU_USR_REG(14))
#define CP15_OFFSET(_cp15_reg_idx) (VCPU_CP15 + (_cp15_reg_idx * 4))
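/*
 * Example (editorial sketch, not part of the original source):
 * CP15_OFFSET(c1_SCTLR) expands to (VCPU_CP15 + (c1_SCTLR * 4)), the
 * byte offset of the saved 32-bit SCTLR copy within the vcpu struct,
 * so it can be used directly as an ldr/str immediate offset:
 *
 *	str	r2, [vcpu, #CP15_OFFSET(c1_SCTLR)]
 */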
/*
 * Many of these macros need to access the VCPU structure, which is always
 * held in r0. These macros should never clobber r1, as it is used to hold the
 * exception code on the return path (except of course the macro that switches
 * all the registers before the final jump to the VM).
 */
vcpu	.req	r0		@ vcpu pointer always in r0
/* Clobbers {r2-r6} */
.macro store_vfp_state vfp_base
	@ The VFPFMRX and VFPFMXR macros are the VMRS and VMSR instructions
	VFPFMRX	r2, FPEXC
	@ Make sure VFP is enabled so we can touch the registers.
	orr	r6, r2, #FPEXC_EN
	VFPFMXR	FPEXC, r6

	VFPFMRX	r3, FPSCR
	tst	r2, #FPEXC_EX		@ Check for VFP Subarchitecture
	beq	1f
	@ If FPEXC_EX is 0, then FPINST/FPINST2 reads are unpredictable, so
	@ we only need to save them if FPEXC_EX is set.
	VFPFMRX	r4, FPINST
	tst	r2, #FPEXC_FP2V
	VFPFMRX	r5, FPINST2, ne		@ vmrsne
	bic	r6, r2, #FPEXC_EX	@ FPEXC_EX disable
	VFPFMXR	FPEXC, r6
1:
	VFPFSTMIA \vfp_base, r6		@ Save VFP registers
	stm	\vfp_base, {r2-r5}	@ Save FPEXC, FPSCR, FPINST, FPINST2
.endm
/* Assume FPEXC_EN is on and FPEXC_EX is off, clobbers {r2-r6} */
.macro restore_vfp_state vfp_base
	VFPFLDMIA \vfp_base, r6		@ Load VFP registers
	ldm	\vfp_base, {r2-r5}	@ Load FPEXC, FPSCR, FPINST, FPINST2

	VFPFMXR	FPSCR, r3
	tst	r2, #FPEXC_EX		@ Check for VFP Subarchitecture
	beq	1f
	VFPFMXR	FPINST, r4
	tst	r2, #FPEXC_FP2V
	VFPFMXR	FPINST2, r5, ne
1:
	VFPFMXR	FPEXC, r2		@ FPEXC	(last, in case !EN)
.endm
/* These are simply for the macros to work - values don't have meaning */
.equ usr, 0
.equ svc, 1
.equ abt, 2
.equ und, 3
.equ irq, 4
.equ fiq, 5
.macro push_host_regs_mode mode
	mrs	r2, SP_\mode
	mrs	r3, LR_\mode
	mrs	r4, SPSR_\mode
	push	{r2, r3, r4}
.endm
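/*
 * Example (editorial sketch, not part of the original source):
 * "push_host_regs_mode svc" expands to the following, saving the
 * SVC-mode banked state on the Hyp stack:
 *
 *	mrs	r2, SP_svc
 *	mrs	r3, LR_svc
 *	mrs	r4, SPSR_svc
 *	push	{r2, r3, r4}
 */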
/*
 * Store all host persistent registers on the stack.
 * Clobbers all registers, in all modes, except r0 and r1.
 */
.macro save_host_regs
	/* Hyp regs. Only ELR_hyp (SPSR_hyp already saved) */
	mrs	r2, ELR_hyp
	push	{r2}

	/* usr regs */
	push	{r4-r12}	@ r0-r3 are always clobbered
	mrs	r2, SP_usr
	mov	r3, lr
	push	{r2, r3}

	push_host_regs_mode svc
	push_host_regs_mode abt
	push_host_regs_mode und
	push_host_regs_mode irq

	/* fiq regs */
	mrs	r2, r8_fiq
	mrs	r3, r9_fiq
	mrs	r4, r10_fiq
	mrs	r5, r11_fiq
	mrs	r6, r12_fiq
	mrs	r7, SP_fiq
	mrs	r8, LR_fiq
	mrs	r9, SPSR_fiq
	push	{r2-r9}
.endm
.macro pop_host_regs_mode mode
	pop	{r2, r3, r4}
	msr	SP_\mode, r2
	msr	LR_\mode, r3
	msr	SPSR_\mode, r4
.endm
/*
 * Restore all host registers from the stack.
 * Clobbers all registers, in all modes, except r0 and r1.
 */
.macro restore_host_regs
	pop	{r2-r9}
	msr	r8_fiq, r2
	msr	r9_fiq, r3
	msr	r10_fiq, r4
	msr	r11_fiq, r5
	msr	r12_fiq, r6
	msr	SP_fiq, r7
	msr	LR_fiq, r8
	msr	SPSR_fiq, r9

	pop_host_regs_mode irq
	pop_host_regs_mode und
	pop_host_regs_mode abt
	pop_host_regs_mode svc

	pop	{r2, r3}
	msr	SP_usr, r2
	mov	lr, r3
	pop	{r4-r12}

	pop	{r2}
	msr	ELR_hyp, r2
.endm
/*
 * Restore SP, LR and SPSR for a given mode. offset is the offset of
 * this mode's registers from the VCPU base.
 *
 * Assumes vcpu pointer in vcpu reg
 *
 * Clobbers r1, r2, r3, r4.
 */
.macro restore_guest_regs_mode mode, offset
	add	r1, vcpu, \offset
	ldm	r1, {r2, r3, r4}
	msr	SP_\mode, r2
	msr	LR_\mode, r3
	msr	SPSR_\mode, r4
.endm
/*
 * Restore all guest registers from the vcpu struct.
 *
 * Assumes vcpu pointer in vcpu reg
 *
 * Clobbers *all* registers.
 */
.macro restore_guest_regs
	restore_guest_regs_mode svc, #VCPU_SVC_REGS
	restore_guest_regs_mode abt, #VCPU_ABT_REGS
	restore_guest_regs_mode und, #VCPU_UND_REGS
	restore_guest_regs_mode irq, #VCPU_IRQ_REGS

	add	r1, vcpu, #VCPU_FIQ_REGS
	ldm	r1, {r2-r9}
	msr	r8_fiq, r2
	msr	r9_fiq, r3
	msr	r10_fiq, r4
	msr	r11_fiq, r5
	msr	r12_fiq, r6
	msr	SP_fiq, r7
	msr	LR_fiq, r8
	msr	SPSR_fiq, r9

	@ Load return state
	ldr	r2, [vcpu, #VCPU_PC]
	ldr	r3, [vcpu, #VCPU_CPSR]
	msr	ELR_hyp, r2
	msr	SPSR_cxsf, r3

	@ Load user registers
	ldr	r2, [vcpu, #VCPU_USR_SP]
	ldr	r3, [vcpu, #VCPU_USR_LR]
	msr	SP_usr, r2
	mov	lr, r3
	add	vcpu, vcpu, #(VCPU_USR_REGS)
	ldm	vcpu, {r0-r12}
.endm
/*
 * Save SP, LR and SPSR for a given mode. offset is the offset of
 * this mode's registers from the VCPU base.
 *
 * Assumes vcpu pointer in vcpu reg
 *
 * Clobbers r2, r3, r4, r5.
 */
.macro save_guest_regs_mode mode, offset
	add	r2, vcpu, \offset
	mrs	r3, SP_\mode
	mrs	r4, LR_\mode
	mrs	r5, SPSR_\mode
	stm	r2, {r3, r4, r5}
.endm
/*
 * Save all guest registers to the vcpu struct
 * Expects guest's r0, r1, r2 on the stack.
 *
 * Assumes vcpu pointer in vcpu reg
 *
 * Clobbers r2, r3, r4, r5.
 */
.macro save_guest_regs
	@ Store usr registers
	add	r2, vcpu, #VCPU_USR_REG(3)
	stm	r2, {r3-r12}
	add	r2, vcpu, #VCPU_USR_REG(0)
	pop	{r3, r4, r5}		@ r0, r1, r2
	stm	r2, {r3, r4, r5}
	mrs	r2, SP_usr
	mov	r3, lr
	str	r2, [vcpu, #VCPU_USR_SP]
	str	r3, [vcpu, #VCPU_USR_LR]

	@ Store return state
	mrs	r2, ELR_hyp
	mrs	r3, spsr
	str	r2, [vcpu, #VCPU_PC]
	str	r3, [vcpu, #VCPU_CPSR]

	@ Store other guest registers
	save_guest_regs_mode svc, #VCPU_SVC_REGS
	save_guest_regs_mode abt, #VCPU_ABT_REGS
	save_guest_regs_mode und, #VCPU_UND_REGS
	save_guest_regs_mode irq, #VCPU_IRQ_REGS
.endm
/*
 * Reads cp15 registers from hardware and stores them in memory
 * @store_to_vcpu: If 0, registers are written in-order to the stack,
 *		   otherwise to the VCPU struct pointed to by vcpup
 *
 * Assumes vcpu pointer in vcpu reg
 *
 * Clobbers r2 - r12
 */
.macro read_cp15_state store_to_vcpu
	mrc	p15, 0, r2, c1, c0, 0	@ SCTLR
	mrc	p15, 0, r3, c1, c0, 2	@ CPACR
	mrc	p15, 0, r4, c2, c0, 2	@ TTBCR
	mrc	p15, 0, r5, c3, c0, 0	@ DACR
	mrrc	p15, 0, r6, r7, c2	@ TTBR 0
	mrrc	p15, 1, r8, r9, c2	@ TTBR 1
	mrc	p15, 0, r10, c10, c2, 0	@ PRRR
	mrc	p15, 0, r11, c10, c2, 1	@ NMRR
	mrc	p15, 2, r12, c0, c0, 0	@ CSSELR

	.if \store_to_vcpu == 0
	push	{r2-r12}		@ Push CP15 registers
	.else
	str	r2, [vcpu, #CP15_OFFSET(c1_SCTLR)]
	str	r3, [vcpu, #CP15_OFFSET(c1_CPACR)]
	str	r4, [vcpu, #CP15_OFFSET(c2_TTBCR)]
	str	r5, [vcpu, #CP15_OFFSET(c3_DACR)]
	add	r2, vcpu, #CP15_OFFSET(c2_TTBR0)
	strd	r6, r7, [r2]
	add	r2, vcpu, #CP15_OFFSET(c2_TTBR1)
	strd	r8, r9, [r2]
	str	r10, [vcpu, #CP15_OFFSET(c10_PRRR)]
	str	r11, [vcpu, #CP15_OFFSET(c10_NMRR)]
	str	r12, [vcpu, #CP15_OFFSET(c0_CSSELR)]
	.endif

	mrc	p15, 0, r2, c13, c0, 1	@ CID
	mrc	p15, 0, r3, c13, c0, 2	@ TID_URW
	mrc	p15, 0, r4, c13, c0, 3	@ TID_URO
	mrc	p15, 0, r5, c13, c0, 4	@ TID_PRIV
	mrc	p15, 0, r6, c5, c0, 0	@ DFSR
	mrc	p15, 0, r7, c5, c0, 1	@ IFSR
	mrc	p15, 0, r8, c5, c1, 0	@ ADFSR
	mrc	p15, 0, r9, c5, c1, 1	@ AIFSR
	mrc	p15, 0, r10, c6, c0, 0	@ DFAR
	mrc	p15, 0, r11, c6, c0, 2	@ IFAR
	mrc	p15, 0, r12, c12, c0, 0	@ VBAR

	.if \store_to_vcpu == 0
	push	{r2-r12}		@ Push CP15 registers
	.else
	str	r2, [vcpu, #CP15_OFFSET(c13_CID)]
	str	r3, [vcpu, #CP15_OFFSET(c13_TID_URW)]
	str	r4, [vcpu, #CP15_OFFSET(c13_TID_URO)]
	str	r5, [vcpu, #CP15_OFFSET(c13_TID_PRIV)]
	str	r6, [vcpu, #CP15_OFFSET(c5_DFSR)]
	str	r7, [vcpu, #CP15_OFFSET(c5_IFSR)]
	str	r8, [vcpu, #CP15_OFFSET(c5_ADFSR)]
	str	r9, [vcpu, #CP15_OFFSET(c5_AIFSR)]
	str	r10, [vcpu, #CP15_OFFSET(c6_DFAR)]
	str	r11, [vcpu, #CP15_OFFSET(c6_IFAR)]
	str	r12, [vcpu, #CP15_OFFSET(c12_VBAR)]
	.endif
	mrc	p15, 0, r2, c14, c1, 0	@ CNTKCTL
	mrrc	p15, 0, r4, r5, c7	@ PAR
	mrc	p15, 0, r6, c10, c3, 0	@ AMAIR0
	mrc	p15, 0, r7, c10, c3, 1	@ AMAIR1

	.if \store_to_vcpu == 0
	push	{r2,r4-r7}
	.else
	str	r2, [vcpu, #CP15_OFFSET(c14_CNTKCTL)]
	add	r12, vcpu, #CP15_OFFSET(c7_PAR)
	strd	r4, r5, [r12]
	str	r6, [vcpu, #CP15_OFFSET(c10_AMAIR0)]
	str	r7, [vcpu, #CP15_OFFSET(c10_AMAIR1)]
	.endif
.endm
/*
 * Reads cp15 registers from memory and writes them to hardware
 * @read_from_vcpu: If 0, registers are read in-order from the stack,
 *		    otherwise from the VCPU struct pointed to by vcpup
 *
 * Assumes vcpu pointer in vcpu reg
 */
.macro write_cp15_state read_from_vcpu
	.if \read_from_vcpu == 0
	pop	{r2,r4-r7}
	.else
	ldr	r2, [vcpu, #CP15_OFFSET(c14_CNTKCTL)]
	add	r12, vcpu, #CP15_OFFSET(c7_PAR)
	ldrd	r4, r5, [r12]
	ldr	r6, [vcpu, #CP15_OFFSET(c10_AMAIR0)]
	ldr	r7, [vcpu, #CP15_OFFSET(c10_AMAIR1)]
	.endif

	mcr	p15, 0, r2, c14, c1, 0	@ CNTKCTL
	mcrr	p15, 0, r4, r5, c7	@ PAR
	mcr	p15, 0, r6, c10, c3, 0	@ AMAIR0
	mcr	p15, 0, r7, c10, c3, 1	@ AMAIR1
	.if \read_from_vcpu == 0
	pop	{r2-r12}
	.else
	ldr	r2, [vcpu, #CP15_OFFSET(c13_CID)]
	ldr	r3, [vcpu, #CP15_OFFSET(c13_TID_URW)]
	ldr	r4, [vcpu, #CP15_OFFSET(c13_TID_URO)]
	ldr	r5, [vcpu, #CP15_OFFSET(c13_TID_PRIV)]
	ldr	r6, [vcpu, #CP15_OFFSET(c5_DFSR)]
	ldr	r7, [vcpu, #CP15_OFFSET(c5_IFSR)]
	ldr	r8, [vcpu, #CP15_OFFSET(c5_ADFSR)]
	ldr	r9, [vcpu, #CP15_OFFSET(c5_AIFSR)]
	ldr	r10, [vcpu, #CP15_OFFSET(c6_DFAR)]
	ldr	r11, [vcpu, #CP15_OFFSET(c6_IFAR)]
	ldr	r12, [vcpu, #CP15_OFFSET(c12_VBAR)]
	.endif

	mcr	p15, 0, r2, c13, c0, 1	@ CID
	mcr	p15, 0, r3, c13, c0, 2	@ TID_URW
	mcr	p15, 0, r4, c13, c0, 3	@ TID_URO
	mcr	p15, 0, r5, c13, c0, 4	@ TID_PRIV
	mcr	p15, 0, r6, c5, c0, 0	@ DFSR
	mcr	p15, 0, r7, c5, c0, 1	@ IFSR
	mcr	p15, 0, r8, c5, c1, 0	@ ADFSR
	mcr	p15, 0, r9, c5, c1, 1	@ AIFSR
	mcr	p15, 0, r10, c6, c0, 0	@ DFAR
	mcr	p15, 0, r11, c6, c0, 2	@ IFAR
	mcr	p15, 0, r12, c12, c0, 0	@ VBAR

	.if \read_from_vcpu == 0
	pop	{r2-r12}
	.else
	ldr	r2, [vcpu, #CP15_OFFSET(c1_SCTLR)]
	ldr	r3, [vcpu, #CP15_OFFSET(c1_CPACR)]
	ldr	r4, [vcpu, #CP15_OFFSET(c2_TTBCR)]
	ldr	r5, [vcpu, #CP15_OFFSET(c3_DACR)]
	add	r12, vcpu, #CP15_OFFSET(c2_TTBR0)
	ldrd	r6, r7, [r12]
	add	r12, vcpu, #CP15_OFFSET(c2_TTBR1)
	ldrd	r8, r9, [r12]
	ldr	r10, [vcpu, #CP15_OFFSET(c10_PRRR)]
	ldr	r11, [vcpu, #CP15_OFFSET(c10_NMRR)]
	ldr	r12, [vcpu, #CP15_OFFSET(c0_CSSELR)]
	.endif

	mcr	p15, 0, r2, c1, c0, 0	@ SCTLR
	mcr	p15, 0, r3, c1, c0, 2	@ CPACR
	mcr	p15, 0, r4, c2, c0, 2	@ TTBCR
	mcr	p15, 0, r5, c3, c0, 0	@ DACR
	mcrr	p15, 0, r6, r7, c2	@ TTBR 0
	mcrr	p15, 1, r8, r9, c2	@ TTBR 1
	mcr	p15, 0, r10, c10, c2, 0	@ PRRR
	mcr	p15, 0, r11, c10, c2, 1	@ NMRR
	mcr	p15, 2, r12, c0, c0, 0	@ CSSELR
.endm
/*
 * Save the VGIC CPU state into memory
 *
 * Assumes vcpu pointer in vcpu reg
 */
.macro save_vgic_state
	/* Get VGIC VCTRL base into r2 */
	ldr	r2, [vcpu, #VCPU_KVM]
	ldr	r2, [r2, #KVM_VGIC_VCTRL]
	cmp	r2, #0
	beq	2f

	/* Compute the address of struct vgic_cpu */
	add	r11, vcpu, #VCPU_VGIC_CPU

	/* Save all interesting registers */
	ldr	r4, [r2, #GICH_VMCR]
	ldr	r5, [r2, #GICH_MISR]
	ldr	r6, [r2, #GICH_EISR0]
	ldr	r7, [r2, #GICH_EISR1]
	ldr	r8, [r2, #GICH_ELRSR0]
	ldr	r9, [r2, #GICH_ELRSR1]
	ldr	r10, [r2, #GICH_APR]
ARM_BE8(rev	r4, r4	)
ARM_BE8(rev	r5, r5	)
ARM_BE8(rev	r6, r6	)
ARM_BE8(rev	r7, r7	)
ARM_BE8(rev	r8, r8	)
ARM_BE8(rev	r9, r9	)
ARM_BE8(rev	r10, r10	)

	str	r4, [r11, #VGIC_V2_CPU_VMCR]
	str	r5, [r11, #VGIC_V2_CPU_MISR]
#ifdef CONFIG_CPU_ENDIAN_BE8
	str	r6, [r11, #(VGIC_V2_CPU_EISR + 4)]
	str	r7, [r11, #VGIC_V2_CPU_EISR]
	str	r8, [r11, #(VGIC_V2_CPU_ELRSR + 4)]
	str	r9, [r11, #VGIC_V2_CPU_ELRSR]
#else
	str	r6, [r11, #VGIC_V2_CPU_EISR]
	str	r7, [r11, #(VGIC_V2_CPU_EISR + 4)]
	str	r8, [r11, #VGIC_V2_CPU_ELRSR]
	str	r9, [r11, #(VGIC_V2_CPU_ELRSR + 4)]
#endif
	str	r10, [r11, #VGIC_V2_CPU_APR]

	/* Clear GICH_HCR */
	mov	r5, #0
	str	r5, [r2, #GICH_HCR]

	/* Save list registers */
	add	r2, r2, #GICH_LR0
	add	r3, r11, #VGIC_V2_CPU_LR
	ldr	r4, [r11, #VGIC_CPU_NR_LR]
1:	ldr	r6, [r2], #4
ARM_BE8(rev	r6, r6	)
	str	r6, [r3], #4
	subs	r4, r4, #1
	bne	1b
2:
.endm
/*
 * Restore the VGIC CPU state from memory
 *
 * Assumes vcpu pointer in vcpu reg
 */
.macro restore_vgic_state
	/* Get VGIC VCTRL base into r2 */
	ldr	r2, [vcpu, #VCPU_KVM]
	ldr	r2, [r2, #KVM_VGIC_VCTRL]
	cmp	r2, #0
	beq	2f

	/* Compute the address of struct vgic_cpu */
	add	r11, vcpu, #VCPU_VGIC_CPU

	/* We only restore a minimal set of registers */
	ldr	r3, [r11, #VGIC_V2_CPU_HCR]
	ldr	r4, [r11, #VGIC_V2_CPU_VMCR]
	ldr	r8, [r11, #VGIC_V2_CPU_APR]
ARM_BE8(rev	r3, r3	)
ARM_BE8(rev	r4, r4	)
ARM_BE8(rev	r8, r8	)

	str	r3, [r2, #GICH_HCR]
	str	r4, [r2, #GICH_VMCR]
	str	r8, [r2, #GICH_APR]

	/* Restore list registers */
	add	r2, r2, #GICH_LR0
	add	r3, r11, #VGIC_V2_CPU_LR
	ldr	r4, [r11, #VGIC_CPU_NR_LR]
1:	ldr	r6, [r3], #4
ARM_BE8(rev	r6, r6	)
	str	r6, [r2], #4
	subs	r4, r4, #1
	bne	1b
2:
.endm
#define CNTHCTL_PL1PCTEN	(1 << 0)
#define CNTHCTL_PL1PCEN		(1 << 1)

/*
 * Save the timer state onto the VCPU and allow physical timer/counter access
 * for the host.
 *
 * Assumes vcpu pointer in vcpu reg
 * Clobbers r2-r5
 */
.macro save_timer_state
	ldr	r4, [vcpu, #VCPU_KVM]
	ldr	r2, [r4, #KVM_TIMER_ENABLED]
	cmp	r2, #0
	beq	1f

	mrc	p15, 0, r2, c14, c3, 1	@ CNTV_CTL
	str	r2, [vcpu, #VCPU_TIMER_CNTV_CTL]
	bic	r2, #1			@ Clear ENABLE
	mcr	p15, 0, r2, c14, c3, 1	@ CNTV_CTL
	isb

	mrrc	p15, 3, rr_lo_hi(r2, r3), c14	@ CNTV_CVAL
	ldr	r4, =VCPU_TIMER_CNTV_CVAL
	add	r5, vcpu, r4
	strd	r2, r3, [r5]

	@ Ensure host CNTVCT == CNTPCT
	mov	r2, #0
	mcrr	p15, 4, r2, r2, c14	@ CNTVOFF

1:
	@ Allow physical timer/counter access for the host
	mrc	p15, 4, r2, c14, c1, 0	@ CNTHCTL
	orr	r2, r2, #(CNTHCTL_PL1PCEN | CNTHCTL_PL1PCTEN)
	mcr	p15, 4, r2, c14, c1, 0	@ CNTHCTL
.endm
/ *
* Load t h e t i m e r s t a t e f r o m t h e V C P U a n d d e n y p h y s i c a l t i m e r / c o u n t e r a c c e s s
* for t h e h o s t .
*
* Assumes v c p u p o i n t e r i n v c p u r e g
2013-01-23 13:21:59 -05:00
* Clobbers r2 - r5
2013-01-23 13:21:58 -05:00
* /
.macro restore_timer_state
@ Disallow physical timer access for the guest
@ Physical counter access is allowed
mrc p15 , 4 , r2 , c14 , c1 , 0 @ CNTHCTL
orr r2 , r2 , #C N T H C T L _ P L 1 P C T E N
bic r2 , r2 , #C N T H C T L _ P L 1 P C E N
mcr p15 , 4 , r2 , c14 , c1 , 0 @ CNTHCTL
2013-01-23 13:21:59 -05:00
ldr r4 , [ v c p u , #V C P U _ K V M ]
ldr r2 , [ r4 , #K V M _ T I M E R _ E N A B L E D ]
cmp r2 , #0
beq 1 f
ldr r2 , [ r4 , #K V M _ T I M E R _ C N T V O F F ]
ldr r3 , [ r4 , #( K V M _ T I M E R _ C N T V O F F + 4 ) ]
2014-06-12 09:30:02 -07:00
mcrr p15 , 4 , r r _ l o _ h i ( r2 , r3 ) , c14 @ CNTVOFF
2013-01-23 13:21:59 -05:00
ldr r4 , =VCPU_TIMER_CNTV_CVAL
add r5 , v c p u , r4
ldrd r2 , r3 , [ r5 ]
2014-06-12 09:30:02 -07:00
mcrr p15 , 3 , r r _ l o _ h i ( r2 , r3 ) , c14 @ CNTV_CVAL
2013-01-23 13:21:59 -05:00
isb
ldr r2 , [ v c p u , #V C P U _ T I M E R _ C N T V _ C T L ]
and r2 , r2 , #3
mcr p15 , 0 , r2 , c14 , c3 , 1 @ CNTV_CTL
1 :
2013-01-23 13:21:58 -05:00
.endm
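
@ Usage sketch (hypothetical ordering, not part of the original file; the
@ actual call sites live in the world-switch code):
@	restore_timer_state	@ guest entry: load guest timer, trap physical timer
@	...	run the guest	...
@	save_timer_state	@ guest exit: save guest timer, reopen host access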
.equ vmentry,	0
.equ vmexit,	1

/* Configures the HSTR (Hyp System Trap Register) on entry/return
 * (hardware reset value is 0) */
.macro set_hstr operation
	mrc	p15, 4, r2, c1, c1, 3
	ldr	r3, =HSTR_T(15)
	.if \operation == vmentry
	orr	r2, r2, r3		@ Trap CR{15}
	.else
	bic	r2, r2, r3		@ Don't trap any CRx accesses
	.endif
	mcr	p15, 4, r2, c1, c1, 3
.endm

/* Configures the HCPTR (Hyp Coprocessor Trap Register) on entry/return
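
@ Usage sketch (hypothetical call sites; in practice the macro is invoked
@ from the world-switch path):
@	set_hstr vmentry	@ trap guest accesses to cp15 c15 before entry
@	set_hstr vmexit		@ stop trapping after returning to the host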
 * (hardware reset value is 0). Keep previous value in r2.
 * An ISB is emitted on vmexit/vmtrap, but executed on vmexit only if
 * VFP wasn't already enabled (always executed on vmtrap).
 * If a label is specified with vmexit, it is branched to if VFP wasn't
 * enabled.
 */
.macro set_hcptr operation, mask, label = none
	mrc	p15, 4, r2, c1, c1, 2
	ldr	r3, =\mask
	.if \operation == vmentry
	orr	r3, r2, r3		@ Trap coproc-accesses defined in mask
	.else
	bic	r3, r2, r3		@ Don't trap defined coproc-accesses
	.endif
	mcr	p15, 4, r3, c1, c1, 2
	.if \operation != vmentry
	.if \operation == vmexit
	tst	r2, #(HCPTR_TCP(10) | HCPTR_TCP(11))
	beq	1f
	.endif
	isb
	.if \label != none
	b	\label
	.endif
1:
	.endif
.endm
/* Configures the HDCR (Hyp Debug Configuration Register) on entry/return
 * (hardware reset value is 0) */
.macro set_hdcr operation
	mrc	p15, 4, r2, c1, c1, 1
	ldr	r3, =(HDCR_TPM|HDCR_TPMCR)
	.if \operation == vmentry
	orr	r2, r2, r3		@ Trap some perfmon accesses
	.else
	bic	r2, r2, r3		@ Don't trap any perfmon accesses
	.endif
	mcr	p15, 4, r2, c1, c1, 1
.endm

/* Enable/Disable: stage-2 trans., trap interrupts, trap wfi, trap smc */
.macro configure_hyp_role operation
	.if \operation == vmentry
	ldr	r2, [vcpu, #VCPU_HCR]
	ldr	r3, [vcpu, #VCPU_IRQ_LINES]
	orr	r2, r2, r3
	.else
	mov	r2, #0
	.endif

	mcr	p15, 4, r2, c1, c1, 0	@ HCR
.endm

.macro load_vcpu
	mrc	p15, 4, vcpu, c13, c0, 2	@ HTPIDR
.endm