/* SPDX-License-Identifier: GPL-2.0-only */
/*
 *
 * Copyright SUSE Linux Products GmbH 2009
 *
 * Authors: Alexander Graf <agraf@suse.de>
 */

#include <asm/ppc_asm.h>
#include <asm/kvm_asm.h>
#include <asm/reg.h>
#include <asm/page.h>
#include <asm/asm-offsets.h>
#include <asm/exception-64s.h>
#include <asm/asm-compat.h>
#if defined(CONFIG_PPC_BOOK3S_64)
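
/*
 * Under the ELFv2 ABI a function is called through its plain text symbol;
 * under ELFv1 the text entry point is the dot-prefixed symbol, hence
 * GLUE(.,name) below.
 */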
#ifdef PPC64_ELF_ABI_v2
#define FUNC(name)		name
#else
#define FUNC(name)		GLUE(.,name)
#endif
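/* On 64-bit the shadow vcpu is embedded in the PACA, so it is addressed via r13. */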
#define GET_SHADOW_VCPU(reg)	addi	reg, r13, PACA_SVCPU
#elif defined(CONFIG_PPC_BOOK3S_32)
#define FUNC(name)		name
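/* On 32-bit the shadow vcpu pointer hangs off the thread struct; r2 holds current. */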
#define GET_SHADOW_VCPU(reg)	lwz	reg, (THREAD + THREAD_KVM_SVCPU)(r2)
#endif /* CONFIG_PPC_BOOK3S_64 */
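/* Load the guest's non-volatile GPRs (r14 - r31) from the vcpu struct. */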
#define VCPU_LOAD_NVGPRS(vcpu) \
	PPC_LL	r14, VCPU_GPR(R14)(vcpu); \
	PPC_LL	r15, VCPU_GPR(R15)(vcpu); \
	PPC_LL	r16, VCPU_GPR(R16)(vcpu); \
	PPC_LL	r17, VCPU_GPR(R17)(vcpu); \
	PPC_LL	r18, VCPU_GPR(R18)(vcpu); \
	PPC_LL	r19, VCPU_GPR(R19)(vcpu); \
	PPC_LL	r20, VCPU_GPR(R20)(vcpu); \
	PPC_LL	r21, VCPU_GPR(R21)(vcpu); \
	PPC_LL	r22, VCPU_GPR(R22)(vcpu); \
	PPC_LL	r23, VCPU_GPR(R23)(vcpu); \
	PPC_LL	r24, VCPU_GPR(R24)(vcpu); \
	PPC_LL	r25, VCPU_GPR(R25)(vcpu); \
	PPC_LL	r26, VCPU_GPR(R26)(vcpu); \
	PPC_LL	r27, VCPU_GPR(R27)(vcpu); \
	PPC_LL	r28, VCPU_GPR(R28)(vcpu); \
	PPC_LL	r29, VCPU_GPR(R29)(vcpu); \
	PPC_LL	r30, VCPU_GPR(R30)(vcpu); \
	PPC_LL	r31, VCPU_GPR(R31)(vcpu); \

/*****************************************************************************
 *                                                                           *
 *     Guest entry / exit code that is in kernel module memory (highmem)    *
 *                                                                           *
 ****************************************************************************/

/* Registers:
 *  r3: vcpu pointer
 */
_GLOBAL(__kvmppc_vcpu_run)
kvm_start_entry:
/* Write correct stack frame */
	mflr	r0
	PPC_STL	r0, PPC_LR_STKOFF(r1)
/* Save host state to the stack */
	PPC_STLU r1, -SWITCH_FRAME_SIZE(r1)
/* Save r3 (vcpu) */
	SAVE_GPR(3, r1)
/* Save non-volatile registers (r14 - r31) */
	SAVE_NVGPRS(r1)
/* Save CR */
	mfcr	r14
	stw	r14, _CCR(r1)
/* Save LR */
	PPC_STL	r0, _LINK(r1)
/* Load non-volatile guest state from the vcpu */
	VCPU_LOAD_NVGPRS(r3)
kvm_start_lightweight:
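	/*
	 * Lightweight (re-)entry point: host state is already saved on the
	 * stack, so only the guest context needs to be set up again.
	 */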
/* Copy registers into shadow vcpu so we can access them in real mode */
	bl	FUNC(kvmppc_copy_to_svcpu)
	nop
	REST_GPR(3, r1)
#ifdef CONFIG_PPC_BOOK3S_64
/* Get the dcbz32 flag */
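	/*
	 * Only bit 0 (BOOK3S_HFLAG_DCBZ32) matters here; the real-mode entry
	 * code reads HSTATE_RESTORE_HID5 to decide whether HID5 needs the
	 * 32-byte dcbz setting for this guest.
	 */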
	PPC_LL	r0, VCPU_HFLAGS(r3)
	rldicl	r0, r0, 0, 63		/* r0 &= 1 */
	stb	r0, HSTATE_RESTORE_HID5(r13)
/* Load up guest SPRG3 value, since it's user readable */
	lbz	r4, VCPU_SHAREDBE(r3)
	cmpwi	r4, 0
	ld	r5, VCPU_SHARED(r3)
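	/*
	 * The shared page is kept in the guest's endianness; if that differs
	 * from the kernel's build endianness, the SPRG3 value is loaded
	 * byte-reversed with ldbrx.
	 */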
	beq	sprg3_little_endian
sprg3_big_endian:
#ifdef __BIG_ENDIAN__
	ld	r4, VCPU_SHARED_SPRG3(r5)
#else
	addi	r5, r5, VCPU_SHARED_SPRG3
	ldbrx	r4, 0, r5
#endif
	b	after_sprg3_load
sprg3_little_endian:
#ifdef __LITTLE_ENDIAN__
	ld	r4, VCPU_SHARED_SPRG3(r5)
#else
	addi	r5, r5, VCPU_SHARED_SPRG3
	ldbrx	r4, 0, r5
#endif

after_sprg3_load:
	mtspr	SPRN_SPRG3, r4
#endif /* CONFIG_PPC_BOOK3S_64 */
	PPC_LL	r4, VCPU_SHADOW_MSR(r3)	/* get shadow_msr */
/* Jump to segment patching handler and into our guest */
	bl	FUNC(kvmppc_entry_trampoline)
	nop
/*
 * This is the handler in module memory. It is jumped to from the
 * lowmem trampoline code, so it is basically the guest exit code.
 */

/*
 * Register usage at this point:
 *
 * R1       = host R1
 * R2       = host R2
 * R12      = exit handler id
 * R13      = PACA
 * SVCPU.*  = guest *
 * MSR.EE   = 1
 *
 */
	PPC_LL	r3, GPR3(r1)		/* vcpu pointer */
	/*
	 * kvmppc_copy_from_svcpu can clobber volatile registers, so save
	 * the exit handler id in the vcpu and restore it from there later.
	 */
	stw	r12, VCPU_TRAP(r3)
/* Transfer reg values from shadow vcpu back to vcpu struct */
	bl	FUNC(kvmppc_copy_from_svcpu)
	nop
#ifdef CONFIG_PPC_BOOK3S_64
	/*
	 * Reload kernel SPRG3 value.
	 * No need to save guest value as usermode can't modify SPRG3.
	 */
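	/* The saved host value (used by the vDSO) lives in the PACA. */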
	ld	r3, PACA_SPRG_VDSO(r13)
	mtspr	SPRN_SPRG_VDSO_WRITE, r3
#endif /* CONFIG_PPC_BOOK3S_64 */
/* R7 = vcpu */
	PPC_LL	r7, GPR3(r1)
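	/* Save the guest's non-volatile GPRs (r14 - r31) back into the vcpu struct. */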
	PPC_STL	r14, VCPU_GPR(R14)(r7)
	PPC_STL	r15, VCPU_GPR(R15)(r7)
	PPC_STL	r16, VCPU_GPR(R16)(r7)
	PPC_STL	r17, VCPU_GPR(R17)(r7)
	PPC_STL	r18, VCPU_GPR(R18)(r7)
	PPC_STL	r19, VCPU_GPR(R19)(r7)
	PPC_STL	r20, VCPU_GPR(R20)(r7)
	PPC_STL	r21, VCPU_GPR(R21)(r7)
	PPC_STL	r22, VCPU_GPR(R22)(r7)
	PPC_STL	r23, VCPU_GPR(R23)(r7)
	PPC_STL	r24, VCPU_GPR(R24)(r7)
	PPC_STL	r25, VCPU_GPR(R25)(r7)
	PPC_STL	r26, VCPU_GPR(R26)(r7)
	PPC_STL	r27, VCPU_GPR(R27)(r7)
	PPC_STL	r28, VCPU_GPR(R28)(r7)
	PPC_STL	r29, VCPU_GPR(R29)(r7)
	PPC_STL	r30, VCPU_GPR(R30)(r7)
	PPC_STL	r31, VCPU_GPR(R31)(r7)
	/* Pass the exit number as 2nd argument to kvmppc_handle_exit_pr */
	lwz	r4, VCPU_TRAP(r7)
/* Restore r3 (vcpu) */
	REST_GPR(3, r1)
	bl	FUNC(kvmppc_handle_exit_pr)
/* If RESUME_GUEST, get back in the loop */
	cmpwi	r3, RESUME_GUEST
	beq	kvm_loop_lightweight
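	/*
	 * RESUME_GUEST_NV means the guest's non-volatile registers must be
	 * reloaded from the vcpu as well, so take the heavyweight re-entry path.
	 */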
	cmpwi	r3, RESUME_GUEST_NV
	beq	kvm_loop_heavyweight
kvm_exit_loop:
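	/* Not re-entering the guest: restore host LR, CR and non-volatile regs, then return. */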
	PPC_LL	r4, _LINK(r1)
	mtlr	r4
	lwz	r14, _CCR(r1)
	mtcr	r14
/* Restore non-volatile host registers (r14 - r31) */
	REST_NVGPRS(r1)

	addi	r1, r1, SWITCH_FRAME_SIZE
	blr

kvm_loop_heavyweight:
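	/* Put the host return address back in the caller's LR save slot before re-entering. */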
	PPC_LL	r4, _LINK(r1)
	PPC_STL	r4, (PPC_LR_STKOFF + SWITCH_FRAME_SIZE)(r1)
/* Load vcpu */
	REST_GPR(3, r1)
/* Load non-volatile guest state from the vcpu */
	VCPU_LOAD_NVGPRS(r3)
/* Jump back into the beginning of this function */
	b	kvm_start_lightweight
kvm_loop_lightweight:
/* We'll need the vcpu pointer */
	REST_GPR(3, r1)
/* Jump back into the beginning of this function */
	b	kvm_start_lightweight