KVM: VMX: Move VM-Enter + VM-Exit handling to non-inline sub-routines
Transitioning to/from a VMX guest requires KVM to manually save/load
the bulk of CPU state that the guest is allowed to directly access,
e.g. XSAVE state, CR2, GPRs, etc... For obvious reasons, loading the
guest's GPR snapshot prior to VM-Enter and saving the snapshot after
VM-Exit is done via handcoded assembly. The assembly blob is written
as inline asm so that it can easily access KVM-defined structs that
are used to hold guest state, e.g. moving the blob to a standalone
assembly file would require generating defines for struct offsets.
The other relevant aspect of VMX transitions in KVM is the handling of
VM-Exits. KVM doesn't employ a separate VM-Exit handler per se, but
rather treats the VMX transition as a mega instruction (with many side
effects), i.e. sets the VMCS.HOST_RIP to a label immediately following
VMLAUNCH/VMRESUME. The label is then exposed to C code via a global
variable definition in the inline assembly.
Because of the global variable, KVM takes steps to (attempt to) ensure
only a single instance of the owning C function, e.g. vmx_vcpu_run, is
generated by the compiler. The earliest approach placed the inline
assembly in a separate noinline function[1]. Later, the assembly was
folded back into vmx_vcpu_run() and tagged with __noclone[2][3], which
is still used today.
After moving to __noclone, an edge case was encountered where GCC's
-ftracer optimization resulted in the inline assembly blob being
duplicated. This was "fixed" by explicitly disabling -ftracer in the
__noclone definition[4].
Recently, it was found that disabling -ftracer causes build warnings
for unsuspecting users of __noclone[5], and more importantly for KVM,
prevents the compiler from properly optimizing vmx_vcpu_run()[6]. And
perhaps most importantly of all, it was pointed out that there is no
way to prevent duplication of a function with 100% reliability[7],
i.e. more edge cases may be encountered in the future.
So to summarize, the only way to prevent the compiler from duplicating
the global variable definition is to move the variable out of inline
assembly, which has been suggested several times over[1][7][8].
Resolve the aforementioned issues by moving the VMLAUNCH+VMRESUME and
VM-Exit "handler" to standalone assembly sub-routines. Moving only
the core VMX transition codes allows the struct indexing to remain as
inline assembly and also allows the sub-routines to be used by
nested_vmx_check_vmentry_hw(). Reusing the sub-routines has a happy
side-effect of eliminating two VMWRITEs in the nested_early_check path
as there is no longer a need to dynamically change VMCS.HOST_RIP.
Note that callers to vmx_vmenter() must account for the CALL modifying
RSP, e.g. must subtract op-size from RSP when synchronizing RSP with
VMCS.HOST_RSP and "restore" RSP prior to the CALL. There are no great
alternatives to fudging RSP. Saving RSP in vmx_vmenter() is difficult
because doing so requires a second register (VMWRITE does not provide
an immediate encoding for the VMCS field and KVM supports Hyper-V's
memory-based eVMCS ABI). The other more drastic alternative would be
to eschew VMCS.HOST_RSP and manually save/load RSP using a per-cpu
variable (which can be encoded as e.g. gs:[imm]). But because a valid
stack is needed at the time of VM-Exit (NMIs aren't blocked and a user
could theoretically insert INT3/INT1ICEBRK at the VM-Exit handler), a
dedicated per-cpu VM-Exit stack would be required. A dedicated stack
isn't difficult to implement, but it would require at least one page
per CPU and knowledge of the stack in the dumpstack routines. And in
most cases there is essentially zero overhead in dynamically updating
VMCS.HOST_RSP, e.g. the VMWRITE can be avoided for all but the first
VMLAUNCH unless nested_early_check=1, which is not a fast path. In
other words, avoiding the VMCS.HOST_RSP by using a dedicated stack
would only make the code marginally less ugly while requiring at least
one page per CPU and forcing the kernel to be aware (and approve) of
the VM-Exit stack shenanigans.
[1] cea15c24ca39 ("KVM: Move KVM context switch into own function")
[2] a3b5ba49a8c5 ("KVM: VMX: add the __noclone attribute to vmx_vcpu_run")
[3] 104f226bfd0a ("KVM: VMX: Fold __vmx_vcpu_run() into vmx_vcpu_run()")
[4] 95272c29378e ("compiler-gcc: disable -ftracer for __noclone functions")
[5] https://lkml.kernel.org/r/20181218140105.ajuiglkpvstt3qxs@treble
[6] https://patchwork.kernel.org/patch/8707981/#21817015
[7] https://lkml.kernel.org/r/ri6y38lo23g.fsf@suse.cz
[8] https://lkml.kernel.org/r/20181218212042.GE25620@tassilo.jf.intel.com
Suggested-by: Andi Kleen <ak@linux.intel.com>
Suggested-by: Martin Jambor <mjambor@suse.cz>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Nadav Amit <namit@vmware.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Martin Jambor <mjambor@suse.cz>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Miroslav Benes <mbenes@suse.cz>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-12-20 12:25:17 -08:00
/* SPDX-License-Identifier: GPL-2.0 */
#include <linux/linkage.h>
#include <asm/asm.h>
#include <asm/bitsperlong.h>
#include <asm/kvm_vcpu_regs.h>
#include <asm/nospec-branch.h>
#include <asm/percpu.h>
#include <asm/segment.h>
#include "kvm-asm-offsets.h"
#include "run_flags.h"

#define WORD_SIZE (BITS_PER_LONG / 8)

#define VCPU_RAX	__VCPU_REGS_RAX * WORD_SIZE
#define VCPU_RCX	__VCPU_REGS_RCX * WORD_SIZE
#define VCPU_RDX	__VCPU_REGS_RDX * WORD_SIZE
#define VCPU_RBX	__VCPU_REGS_RBX * WORD_SIZE
/* Intentionally omit RSP as it's context switched by hardware */
#define VCPU_RBP	__VCPU_REGS_RBP * WORD_SIZE
#define VCPU_RSI	__VCPU_REGS_RSI * WORD_SIZE
#define VCPU_RDI	__VCPU_REGS_RDI * WORD_SIZE

#ifdef CONFIG_X86_64
#define VCPU_R8		__VCPU_REGS_R8  * WORD_SIZE
#define VCPU_R9		__VCPU_REGS_R9  * WORD_SIZE
#define VCPU_R10	__VCPU_REGS_R10 * WORD_SIZE
#define VCPU_R11	__VCPU_REGS_R11 * WORD_SIZE
#define VCPU_R12	__VCPU_REGS_R12 * WORD_SIZE
#define VCPU_R13	__VCPU_REGS_R13 * WORD_SIZE
#define VCPU_R14	__VCPU_REGS_R14 * WORD_SIZE
#define VCPU_R15	__VCPU_REGS_R15 * WORD_SIZE
#endif
.macro VMX_DO_EVENT_IRQOFF call_insn call_target
	/*
	 * Unconditionally create a stack frame, getting the correct RSP on the
	 * stack (for x86-64) would take two instructions anyways, and RBP can
	 * be used to restore RSP to make objtool happy (see below).
	 */
	push %_ASM_BP
	mov %_ASM_SP, %_ASM_BP

#ifdef CONFIG_X86_64
	/*
	 * Align RSP to a 16-byte boundary (to emulate CPU behavior) before
	 * creating the synthetic interrupt stack frame for the IRQ/NMI.
	 */
	and  $-16, %rsp
	push $__KERNEL_DS
	push %rbp
#endif
	pushf
	push $__KERNEL_CS
	\call_insn \call_target

	/*
	 * "Restore" RSP from RBP, even though IRET has already unwound RSP to
	 * the correct value.  objtool doesn't know the callee will IRET and,
	 * without the explicit restore, thinks the stack is getting walloped.
	 * Using an unwind hint is problematic due to x86-64's dynamic alignment.
	 */
	mov %_ASM_BP, %_ASM_SP
	pop %_ASM_BP
	RET
.endm

.section .noinstr.text, "ax"
/**
 * __vmx_vcpu_run - Run a vCPU via a transition to VMX guest mode
 * @vmx:	struct vcpu_vmx *
 * @regs:	unsigned long * (to guest registers)
 * @flags:	VMX_RUN_VMRESUME:	use VMRESUME instead of VMLAUNCH
 *		VMX_RUN_SAVE_SPEC_CTRL: save guest SPEC_CTRL into vmx->spec_ctrl
 *
 * Returns:
 *	0 on VM-Exit, 1 on VM-Fail
 */
SYM_FUNC_START(__vmx_vcpu_run)
	push %_ASM_BP
	mov  %_ASM_SP, %_ASM_BP
#ifdef CONFIG_X86_64
	push %r15
	push %r14
	push %r13
	push %r12
#else
	push %edi
	push %esi
#endif
	push %_ASM_BX

	/* Save @vmx for SPEC_CTRL handling */
	push %_ASM_ARG1

	/* Save @flags for SPEC_CTRL handling */
	push %_ASM_ARG3

	/*
	 * Save @regs, _ASM_ARG2 may be modified by vmx_update_host_rsp() and
	 * @regs is needed after VM-Exit to save the guest's register values.
	 */
	push %_ASM_ARG2

	/* Copy @flags to EBX, _ASM_ARG3 is volatile. */
	mov %_ASM_ARG3L, %ebx

	lea (%_ASM_SP), %_ASM_ARG2
	call vmx_update_host_rsp
	ALTERNATIVE "jmp .Lspec_ctrl_done", "", X86_FEATURE_MSR_SPEC_CTRL

	/*
	 * SPEC_CTRL handling: if the guest's SPEC_CTRL value differs from the
	 * host's, write the MSR.
	 *
	 * IMPORTANT: To avoid RSB underflow attacks and any other nastiness,
	 * there must not be any returns or indirect branches between this code
	 * and vmentry.
	 */
	mov 2*WORD_SIZE(%_ASM_SP), %_ASM_DI
	movl VMX_spec_ctrl(%_ASM_DI), %edi
	movl PER_CPU_VAR(x86_spec_ctrl_current), %esi
	cmp %edi, %esi
	je .Lspec_ctrl_done
	mov $MSR_IA32_SPEC_CTRL, %ecx
	xor %edx, %edx
	mov %edi, %eax
	wrmsr

.Lspec_ctrl_done:

	/*
	 * Since vmentry is serializing on affected CPUs, there's no need for
	 * an LFENCE to stop speculation from skipping the wrmsr.
	 */
	/* Load @regs to RAX. */
	mov (%_ASM_SP), %_ASM_AX

	/* Check if vmlaunch or vmresume is needed */
	test $VMX_RUN_VMRESUME, %ebx

	/* Load guest registers.  Don't clobber flags. */
	mov VCPU_RCX(%_ASM_AX), %_ASM_CX
	mov VCPU_RDX(%_ASM_AX), %_ASM_DX
	mov VCPU_RBX(%_ASM_AX), %_ASM_BX
	mov VCPU_RBP(%_ASM_AX), %_ASM_BP
	mov VCPU_RSI(%_ASM_AX), %_ASM_SI
	mov VCPU_RDI(%_ASM_AX), %_ASM_DI
#ifdef CONFIG_X86_64
	mov VCPU_R8 (%_ASM_AX),  %r8
	mov VCPU_R9 (%_ASM_AX),  %r9
	mov VCPU_R10(%_ASM_AX), %r10
	mov VCPU_R11(%_ASM_AX), %r11
	mov VCPU_R12(%_ASM_AX), %r12
	mov VCPU_R13(%_ASM_AX), %r13
	mov VCPU_R14(%_ASM_AX), %r14
	mov VCPU_R15(%_ASM_AX), %r15
#endif
	/* Load guest RAX.  This kills the @regs pointer! */
	mov VCPU_RAX(%_ASM_AX), %_ASM_AX

	/* Check EFLAGS.ZF from 'test VMX_RUN_VMRESUME' above */
	jz .Lvmlaunch
	/*
	 * After a successful VMRESUME/VMLAUNCH, control flow "magically"
	 * resumes below at 'vmx_vmexit' due to the VMCS HOST_RIP setting.
	 * So this isn't a typical function and objtool needs to be told to
	 * save the unwind state here and restore it below.
	 */
	UNWIND_HINT_SAVE

	/*
	 * If VMRESUME/VMLAUNCH and corresponding vmexit succeed, execution
	 * resumes at the 'vmx_vmexit' label below.
	 */
.Lvmresume:
	vmresume
	jmp .Lvmfail

.Lvmlaunch:
	vmlaunch
	jmp .Lvmfail

	_ASM_EXTABLE(.Lvmresume, .Lfixup)
	_ASM_EXTABLE(.Lvmlaunch, .Lfixup)

SYM_INNER_LABEL_ALIGN(vmx_vmexit, SYM_L_GLOBAL)

	/* Restore unwind state from before the VMRESUME/VMLAUNCH. */
	UNWIND_HINT_RESTORE
	ENDBR
	/* Temporarily save guest's RAX. */
	push %_ASM_AX

	/* Reload @regs to RAX. */
	mov WORD_SIZE(%_ASM_SP), %_ASM_AX

	/* Save all guest registers, including RAX from the stack */
	pop VCPU_RAX(%_ASM_AX)
	mov %_ASM_CX, VCPU_RCX(%_ASM_AX)
	mov %_ASM_DX, VCPU_RDX(%_ASM_AX)
	mov %_ASM_BX, VCPU_RBX(%_ASM_AX)
	mov %_ASM_BP, VCPU_RBP(%_ASM_AX)
	mov %_ASM_SI, VCPU_RSI(%_ASM_AX)
	mov %_ASM_DI, VCPU_RDI(%_ASM_AX)
#ifdef CONFIG_X86_64
	mov %r8,  VCPU_R8 (%_ASM_AX)
	mov %r9,  VCPU_R9 (%_ASM_AX)
	mov %r10, VCPU_R10(%_ASM_AX)
	mov %r11, VCPU_R11(%_ASM_AX)
	mov %r12, VCPU_R12(%_ASM_AX)
	mov %r13, VCPU_R13(%_ASM_AX)
	mov %r14, VCPU_R14(%_ASM_AX)
	mov %r15, VCPU_R15(%_ASM_AX)
#endif

	/* Clear return value to indicate VM-Exit (as opposed to VM-Fail). */
	xor %ebx, %ebx
.Lclear_regs:
	/* Discard @regs.  The register is irrelevant, it just can't be RBX. */
	pop %_ASM_AX

	/*
	 * Clear all general purpose registers except RSP and RBX to prevent
	 * speculative use of the guest's values, even those that are reloaded
	 * via the stack.  In theory, an L1 cache miss when restoring registers
	 * could lead to speculative execution with the guest's values.
	 * Zeroing XORs are dirt cheap, i.e. the extra paranoia is essentially
	 * free.  RSP and RBX are exempt as RSP is restored by hardware during
	 * VM-Exit and RBX is explicitly loaded with 0 or 1 to hold the return
	 * value.
	 */
	xor %eax, %eax
	xor %ecx, %ecx
	xor %edx, %edx
	xor %ebp, %ebp
	xor %esi, %esi
	xor %edi, %edi
#ifdef CONFIG_X86_64
	xor %r8d,  %r8d
	xor %r9d,  %r9d
	xor %r10d, %r10d
	xor %r11d, %r11d
	xor %r12d, %r12d
	xor %r13d, %r13d
	xor %r14d, %r14d
	xor %r15d, %r15d
#endif
2022-06-14 23:16:13 +02:00
/ *
* IMPORTANT : RSB f i l l i n g a n d S P E C _ C T R L h a n d l i n g m u s t b e d o n e b e f o r e
* the f i r s t u n b a l a n c e d R E T a f t e r v m e x i t !
*
2022-06-14 23:16:15 +02:00
* For r e t p o l i n e o r I B R S , R S B f i l l i n g i s n e e d e d t o p r e v e n t p o i s o n e d R S B
* entries a n d ( i n s o m e c a s e s ) R S B u n d e r f l o w .
2022-06-14 23:16:13 +02:00
*
* eIBRS h a s i t s o w n p r o t e c t i o n a g a i n s t p o i s o n e d R S B , s o i t d o e s n ' t
x86/speculation: Add RSB VM Exit protections
tl;dr: The Enhanced IBRS mitigation for Spectre v2 does not work as
documented for RET instructions after VM exits. Mitigate it with a new
one-entry RSB stuffing mechanism and a new LFENCE.
== Background ==
Indirect Branch Restricted Speculation (IBRS) was designed to help
mitigate Branch Target Injection and Speculative Store Bypass, i.e.
Spectre, attacks. IBRS prevents software run in less privileged modes
from affecting branch prediction in more privileged modes. IBRS requires
the MSR to be written on every privilege level change.
To overcome some of the performance issues of IBRS, Enhanced IBRS was
introduced. eIBRS is an "always on" IBRS, in other words, just turn
it on once instead of writing the MSR on every privilege level change.
When eIBRS is enabled, more privileged modes should be protected from
less privileged modes, including protecting VMMs from guests.
== Problem ==
Here's a simplification of how guests are run on Linux' KVM:
void run_kvm_guest(void)
{
// Prepare to run guest
VMRESUME();
// Clean up after guest runs
}
The execution flow for that would look something like this to the
processor:
1. Host-side: call run_kvm_guest()
2. Host-side: VMRESUME
3. Guest runs, does "CALL guest_function"
4. VM exit, host runs again
5. Host might make some "cleanup" function calls
6. Host-side: RET from run_kvm_guest()
Now, when back on the host, there are a couple of possible scenarios of
post-guest activity the host needs to do before executing host code:
* on pre-eIBRS hardware (legacy IBRS, or nothing at all), the RSB is not
touched and Linux has to do a 32-entry stuffing.
* on eIBRS hardware, VM exit with IBRS enabled, or restoring the host
IBRS=1 shortly after VM exit, has a documented side effect of flushing
the RSB except in this PBRSB situation where the software needs to stuff
the last RSB entry "by hand".
IOW, with eIBRS supported, host RET instructions should no longer be
 * need the RSB filling sequence.  But it does need to be enabled, and a
 * single call to retire, before the first unbalanced RET.
2022-12-21 20:28:49 +08:00
 */
2022-06-14 23:16:13 +02:00
x86/speculation: Add RSB VM Exit protections
tl;dr: The Enhanced IBRS mitigation for Spectre v2 does not work as
documented for RET instructions after VM exits. Mitigate it with a new
one-entry RSB stuffing mechanism and a new LFENCE.
== Background ==
Indirect Branch Restricted Speculation (IBRS) was designed to help
mitigate Branch Target Injection and Speculative Store Bypass, i.e.
Spectre, attacks. IBRS prevents software run in less privileged modes
from affecting branch prediction in more privileged modes. IBRS requires
the MSR to be written on every privilege level change.
To overcome some of the performance issues of IBRS, Enhanced IBRS was
introduced. eIBRS is an "always on" IBRS, in other words, just turn
it on once instead of writing the MSR on every privilege level change.
When eIBRS is enabled, more privileged modes should be protected from
less privileged modes, including protecting VMMs from guests.
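The "turn it on once" model above boils down to a single WRMSR early during boot. A minimal, illustrative sketch (MSR and bit names as defined in the kernel's msr-index.h; the kernel's actual enablement path is more involved):

```asm
	/*
	 * Illustrative only: enable eIBRS by setting SPEC_CTRL_IBRS (bit 0)
	 * in MSR_IA32_SPEC_CTRL (0x48) once, instead of rewriting the MSR
	 * on every privilege level change as legacy IBRS requires.
	 */
	movl	$MSR_IA32_SPEC_CTRL, %ecx
	movl	$SPEC_CTRL_IBRS, %eax
	xorl	%edx, %edx
	wrmsr
```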
== Problem ==
Here's a simplification of how guests are run on Linux's KVM:
void run_kvm_guest(void)
{
// Prepare to run guest
VMRESUME();
// Clean up after guest runs
}
The execution flow for that would look something like this to the
processor:
1. Host-side: call run_kvm_guest()
2. Host-side: VMRESUME
3. Guest runs, does "CALL guest_function"
4. VM exit, host runs again
5. Host might make some "cleanup" function calls
6. Host-side: RET from run_kvm_guest()
Now, when back on the host, there are a couple of possible scenarios of
post-guest activity the host needs to do before executing host code:
* on pre-eIBRS hardware (legacy IBRS, or nothing at all), the RSB is not
touched and Linux has to do a 32-entry stuffing.
* on eIBRS hardware, VM exit with IBRS enabled, or restoring the host
IBRS=1 shortly after VM exit, has a documented side effect of flushing
the RSB except in this PBRSB situation where the software needs to stuff
the last RSB entry "by hand".
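The 32-entry stuffing mentioned above can be sketched as follows (a simplified, unrolled-by-one variant of the kernel's __FILL_RETURN_BUFFER macro; labels and loop structure are illustrative):

```asm
	/*
	 * Illustrative sketch: fill the RSB with 32 benign entries.  Each
	 * CALL pushes an RSB entry whose target is a speculation trap
	 * (INT3); the stack pointer is rewound at the end so architectural
	 * state is unchanged.
	 */
	mov	$32, %ecx
1:
	call	2f			/* push one benign RSB entry	*/
	int3				/* speculation trap for any RET	*/
					/* that consumes that entry	*/
2:
	dec	%ecx
	jnz	1b
	add	$(32 * WORD_SIZE), %_ASM_SP	/* rewind the 32 pushed
						 * return addresses	*/
```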
IOW, with eIBRS supported, host RET instructions should no longer be
influenced by guest behavior after the host retires a single CALL
instruction.
However, if the RET instructions are "unbalanced" with CALLs after a VM
exit as is the RET in #6, it might speculatively use the address for the
instruction after the CALL in #3 as an RSB prediction. This is a problem
since the (untrusted) guest controls this address.
Balanced CALL/RET instruction pairs such as in step #5 are not affected.
== Solution ==
The PBRSB issue affects a wide variety of Intel processors which
support eIBRS. But not all of them need mitigation. Today,
X86_FEATURE_RSB_VMEXIT triggers an RSB filling sequence that mitigates
PBRSB. Systems setting RSB_VMEXIT need no further mitigation - i.e.,
eIBRS systems which enable legacy IBRS explicitly.
However, such systems (X86_FEATURE_IBRS_ENHANCED) do not set RSB_VMEXIT
and most of them need a new mitigation.
Therefore, introduce a new feature flag X86_FEATURE_RSB_VMEXIT_LITE
which triggers a lighter-weight PBRSB mitigation versus RSB_VMEXIT.
The lighter-weight mitigation performs a CALL instruction which is
immediately followed by a speculative execution barrier (INT3). This
steers speculative execution to the barrier -- just like a retpoline
-- which ensures that speculation can never reach an unbalanced RET.
Then, ensure this CALL is retired before continuing execution with an
LFENCE.
In other words, the window of exposure is opened at VM exit where RET
behavior is troublesome. While the window is open, force RSB prediction
sampling for RET targets to a dead end at the INT3. Close the window
with the LFENCE.
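Put together, the lighter-weight sequence described above amounts to something like the following (an illustrative sketch, not the exact upstream macro):

```asm
	/*
	 * One CALL pushes a single "safe" RSB entry whose target is a
	 * speculation barrier: any RET that speculatively consumes this
	 * entry dead-ends at the INT3.
	 */
	call	1f
	int3			/* speculative execution barrier	*/
1:
	lfence			/* ensure the CALL above has retired	*/
				/* before the first unbalanced RET	*/
```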
There is a subset of eIBRS systems which are not vulnerable to PBRSB.
Add these systems to the cpu_vuln_whitelist[] as NO_EIBRS_PBRSB.
Future systems that aren't vulnerable will set ARCH_CAP_PBRSB_NO.
[ bp: Massage, incorporate review comments from Andy Cooper. ]
Signed-off-by: Daniel Sneddon <daniel.sneddon@linux.intel.com>
Co-developed-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
2022-08-02 15:47:01 -07:00
	FILL_RETURN_BUFFER %_ASM_CX, RSB_CLEAR_LOOPS, X86_FEATURE_RSB_VMEXIT,\
			   X86_FEATURE_RSB_VMEXIT_LITE
2022-06-14 23:16:13 +02:00
	pop %_ASM_ARG2	/* @flags */
	pop %_ASM_ARG1	/* @vmx */

	call vmx_spec_ctrl_restore_host

	/* Put return value in AX */
	mov %_ASM_BX, %_ASM_AX
2022-06-14 23:16:11 +02:00
	pop %_ASM_BX
2019-01-25 07:41:18 -08:00
#ifdef CONFIG_X86_64
	pop %r12
	pop %r13
	pop %r14
	pop %r15
#else
	pop %esi
	pop %edi
#endif
2019-01-25 07:41:12 -08:00
	pop %_ASM_BP
2021-12-04 14:43:40 +01:00
RET
2019-01-25 07:41:12 -08:00
2022-06-14 23:16:11 +02:00
.Lfixup:
	cmpb $0, kvm_rebooting
	jne .Lvmfail
	ud2
.Lvmfail:
/* VM-Fail: set return value to 1 */
2022-06-14 23:16:13 +02:00
	mov $1, %_ASM_BX
2022-06-14 23:16:11 +02:00
	jmp .Lclear_regs
2019-10-11 13:51:04 +02:00
SYM_FUNC_END(__vmx_vcpu_run)
2020-03-26 09:07:12 -07:00
2022-12-13 06:09:12 +00:00
SYM_FUNC_START(vmx_do_nmi_irqoff)
	VMX_DO_EVENT_IRQOFF call asm_exc_nmi_kvm_vmx
SYM_FUNC_END(vmx_do_nmi_irqoff)
2020-07-08 21:51:57 +02:00
.section .text, "ax"
2022-09-28 23:20:15 +00:00
#ifndef CONFIG_CC_HAS_ASM_GOTO_OUTPUT
2020-03-26 09:07:12 -07:00
/**
 * vmread_error_trampoline - Trampoline from inline asm to vmread_error()
 * @field: VMCS field encoding that failed
 * @fault: %true if the VMREAD faulted, %false if it failed
2022-12-21 20:28:49 +08:00
*
2020-03-26 09:07:12 -07:00
 * Save and restore volatile registers across a call to vmread_error().  Note,
 * all parameters are passed on the stack.
 */
SYM_FUNC_START(vmread_error_trampoline)
	push %_ASM_BP
	mov  %_ASM_SP, %_ASM_BP
	push %_ASM_AX
	push %_ASM_CX
	push %_ASM_DX
#ifdef CONFIG_X86_64
	push %rdi
	push %rsi
	push %r8
	push %r9
	push %r10
	push %r11
#endif
2022-08-17 16:40:45 +02:00
2020-03-26 09:07:12 -07:00
/* Load @field and @fault to arg1 and arg2 respectively. */
2022-08-17 16:40:45 +02:00
	mov 3*WORD_SIZE(%_ASM_BP), %_ASM_ARG2
	mov 2*WORD_SIZE(%_ASM_BP), %_ASM_ARG1
2020-03-26 09:07:12 -07:00
	call vmread_error
/* Zero out @fault, which will be popped into the result register. */
	_ASM_MOV $0, 3*WORD_SIZE(%_ASM_BP)
#ifdef CONFIG_X86_64
	pop %r11
	pop %r10
	pop %r9
	pop %r8
	pop %rsi
	pop %rdi
#endif
	pop %_ASM_DX
	pop %_ASM_CX
	pop %_ASM_AX
	pop %_ASM_BP
2021-12-04 14:43:40 +01:00
RET
2020-03-26 09:07:12 -07:00
SYM_FUNC_END(vmread_error_trampoline)
2022-09-28 23:20:15 +00:00
#endif
2020-09-15 12:15:04 -07:00
2022-12-13 06:09:11 +00:00
SYM_FUNC_START(vmx_do_interrupt_irqoff)
	VMX_DO_EVENT_IRQOFF CALL_NOSPEC _ASM_ARG1
SYM_FUNC_END(vmx_do_interrupt_irqoff)