/* SPDX-License-Identifier: GPL-2.0-only */
/* CPU virtualization extensions handling
*
* This should carry the code for handling CPU virtualization extensions
 * that needs to live in the kernel core.
 *
 * Author: Eduardo Habkost <ehabkost@redhat.com>
 *
 * Copyright (C) 2008, Red Hat Inc.
 *
 * Contains code from KVM, Copyright (C) 2006 Qumranet, Inc.
*/
#ifndef _ASM_X86_VIRTEX_H
#define _ASM_X86_VIRTEX_H

#include <asm/processor.h>

#include <asm/vmx.h>
#include <asm/svm.h>
#include <asm/tlbflush.h>
/*
 * VMX functions:
*/
static inline int cpu_has_vmx(void)
{
	unsigned long ecx = cpuid_ecx(1);
	return test_bit(5, &ecx); /* CPUID.1:ECX.VMX[bit 5] -> VT */
}
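/*
 * Illustrative sketch only (not part of the original header): callers are
 * expected to gate any VMX work on this check, e.g.
 *
 *	if (!cpu_has_vmx())
 *		return;		// CPUID.1:ECX[5] clear, no VT-x on this CPU
 */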
/**
 * cpu_vmxoff() - Disable VMX on the current CPU
 *
 * Disable VMX and clear CR4.VMXE (even if VMXOFF faults)
 *
 * Note, VMXOFF causes a #UD if the CPU is !post-VMXON, but it's impossible to
 * atomically track post-VMXON state, e.g. this may be called in NMI context.
 * Eat all faults, as all other faults on VMXOFF are mode related, i.e. faults
 * are guaranteed to be due to the !post-VMXON check unless the CPU is
 * magically in RM, VM86, compat mode, or at CPL>0.
 */
static inline int cpu_vmxoff(void)
{
	asm_volatile_goto("1: vmxoff\n\t"
			  _ASM_EXTABLE(1b, %l[fault])
			  ::: "cc", "memory" : fault);

	cr4_clear_bits(X86_CR4_VMXE);
	return 0;

fault:
	cr4_clear_bits(X86_CR4_VMXE);
	return -EIO;
}
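/*
 * Illustrative sketch only (not from this header): both exit paths above
 * clear CR4.VMXE, so a caller that merely wants out of VMX can ignore the
 * return value, while one that cares whether the CPU was actually post-VMXON
 * can check it, e.g.
 *
 *	if (cpu_vmxoff())
 *		pr_warn("VMXOFF faulted; CPU was not post-VMXON\n");
 */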
static inline int cpu_vmx_enabled(void)
{
	return __read_cr4() & X86_CR4_VMXE;
}
/** Disable VMX if it is enabled on the current CPU
*
 * You shouldn't call this if cpu_has_vmx() returns 0.
*/
static inline void __cpu_emergency_vmxoff(void)
{
	if (cpu_vmx_enabled())
		cpu_vmxoff();
}
/** Disable VMX if it is supported and enabled on the current CPU
*/
static inline void cpu_emergency_vmxoff(void)
{
	if (cpu_has_vmx())
		__cpu_emergency_vmxoff();
}
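/*
 * Illustrative sketch only: the two emergency helpers above differ in whether
 * the caller has already established VMX support, e.g.
 *
 *	cpu_emergency_vmxoff();			// safe on any CPU
 *
 *	if (cpu_has_vmx())			// support already checked,
 *		__cpu_emergency_vmxoff();	// so skip the re-check
 */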
/*
 * SVM functions:
*/
/** Check if the CPU has SVM support
*
 * You can use the 'msg' arg to get a message describing the problem,
 * if the function returns zero.  Simply pass NULL if you are not interested
 * in the messages; gcc should take care of not generating code for
 * the messages in this case.
*/
static inline int cpu_has_svm(const char **msg)
{
	if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD &&
	    boot_cpu_data.x86_vendor != X86_VENDOR_HYGON) {
		if (msg)
			*msg = "not amd or hygon";
		return 0;
	}

	if (boot_cpu_data.extended_cpuid_level < SVM_CPUID_FUNC) {
		if (msg)
			*msg = "can't execute cpuid_8000000a";
		return 0;
	}

	if (!boot_cpu_has(X86_FEATURE_SVM)) {
		if (msg)
			*msg = "svm not available";
		return 0;
	}
	return 1;
}
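/*
 * Illustrative sketch only (not part of the original header): a typical
 * caller passes a string pointer to report why SVM is unusable, e.g.
 *
 *	const char *msg;
 *
 *	if (!cpu_has_svm(&msg))
 *		pr_err("SVM unavailable: %s\n", msg);
 */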
/** Disable SVM on the current CPU
*
 * You should call this only if cpu_has_svm() returned true.
*/
static inline void cpu_svm_disable(void)
{
	uint64_t efer;

	wrmsrl(MSR_VM_HSAVE_PA, 0);
	rdmsrl(MSR_EFER, efer);
	wrmsrl(MSR_EFER, efer & ~EFER_SVME);
}
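/*
 * Illustrative sketch only (purely for illustration, not from this header):
 * after cpu_svm_disable() returns, EFER.SVME is clear, which a caller could
 * sanity-check with
 *
 *	u64 efer;
 *
 *	rdmsrl(MSR_EFER, efer);
 *	WARN_ON(efer & EFER_SVME);
 */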
/** Makes sure SVM is disabled, if it is supported on the CPU
*/
static inline void cpu_emergency_svm_disable(void)
{
	if (cpu_has_svm(NULL))
		cpu_svm_disable();
}
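/*
 * Illustrative sketch (hypothetical helper name, not part of this header):
 * reboot/crash paths typically want both vendors covered and call the two
 * emergency helpers back to back, e.g.
 *
 *	static void example_emergency_virt_disable(void)
 *	{
 *		cpu_emergency_vmxoff();
 *		cpu_emergency_svm_disable();
 *	}
 */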
#endif /* _ASM_X86_VIRTEX_H */