// SPDX-License-Identifier: GPL-2.0-only
/*
 * Copyright (C) 2012 - Virtual Open Systems and Columbia University
 * Author: Christoffer Dall <c.dall@virtualopensystems.com>
*/
#include <linux/bug.h>
#include <linux/cpu_pm.h>
#include <linux/errno.h>
#include <linux/err.h>
#include <linux/kvm_host.h>
#include <linux/list.h>
#include <linux/module.h>
#include <linux/vmalloc.h>
#include <linux/fs.h>
#include <linux/mman.h>
#include <linux/sched.h>
#include <linux/kvm.h>
#include <linux/kvm_irqfd.h>
#include <linux/irqbypass.h>
#include <linux/sched/stat.h>
#include <trace/events/kvm.h>

#define CREATE_TRACE_POINTS
#include "trace.h"

#include <linux/uaccess.h>
#include <asm/ptrace.h>
#include <asm/mman.h>
#include <asm/tlbflush.h>
#include <asm/cacheflush.h>
#include <asm/cpufeature.h>
#include <asm/virt.h>
#include <asm/kvm_arm.h>
#include <asm/kvm_asm.h>
#include <asm/kvm_mmu.h>
#include <asm/kvm_emulate.h>
#include <asm/kvm_coproc.h>
#include <asm/sections.h>

#include <kvm/arm_hypercalls.h>
#include <kvm/arm_pmu.h>
#include <kvm/arm_psci.h>

#ifdef REQUIRES_VIRT
__asm__(".arch_extension	virt");
#endif

DEFINE_PER_CPU(kvm_host_data_t, kvm_host_data);
static DEFINE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page);

/* The VMID used in the VTTBR */
static atomic64_t kvm_vmid_gen = ATOMIC64_INIT(1);
static u32 kvm_next_vmid;
static DEFINE_SPINLOCK(kvm_vmid_lock);
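/*
 * Note (assumed rationale, not from the original file): the generation
 * counter starts at 1 so that a zeroed per-VM vmid_gen always compares
 * as stale, forcing a fresh VMID allocation the first time a VM runs.
 */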

static bool vgic_present;

static DEFINE_PER_CPU(unsigned char, kvm_arm_hardware_enabled);
DEFINE_STATIC_KEY_FALSE(userspace_irqchip_in_use);

int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu)
{
	return kvm_vcpu_exiting_guest_mode(vcpu) == IN_GUEST_MODE;
}

int kvm_arch_hardware_setup(void)
{
	return 0;
}

int kvm_arch_check_processor_compat(void)
{
	return 0;
}

int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
			    struct kvm_enable_cap *cap)
{
	int r;

	if (cap->flags)
		return -EINVAL;

	switch (cap->cap) {
	case KVM_CAP_ARM_NISV_TO_USER:
		r = 0;
		kvm->arch.return_nisv_io_abort_to_user = true;
		break;
	default:
		r = -EINVAL;
		break;
	}

	return r;
}
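
/*
 * Illustrative userspace sketch (not part of this file): the NISV
 * reporting behaviour is opted into with the KVM_ENABLE_CAP ioctl on
 * an open VM file descriptor (vm_fd below is assumed):
 *
 *	struct kvm_enable_cap cap = {
 *		.cap = KVM_CAP_ARM_NISV_TO_USER,
 *	};
 *	ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
 *
 * Non-ISV data aborts outside memslots then exit to userspace with
 * KVM_EXIT_ARM_NISV rather than failing KVM_RUN.
 */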

/**
 * kvm_arch_init_vm - initializes a VM data structure
 * @kvm:	pointer to the KVM struct
 */
int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
{
	int ret, cpu;

	ret = kvm_arm_setup_stage2(kvm, type);
	if (ret)
		return ret;

	kvm->arch.last_vcpu_ran = alloc_percpu(typeof(*kvm->arch.last_vcpu_ran));
	if (!kvm->arch.last_vcpu_ran)
		return -ENOMEM;

	for_each_possible_cpu(cpu)
		*per_cpu_ptr(kvm->arch.last_vcpu_ran, cpu) = -1;

	ret = kvm_alloc_stage2_pgd(kvm);
	if (ret)
		goto out_fail_alloc;

	ret = create_hyp_mappings(kvm, kvm + 1, PAGE_HYP);
	if (ret)
		goto out_free_stage2_pgd;

	kvm_vgic_early_init(kvm);

	/* Mark the initial VMID generation invalid */
	kvm->arch.vmid.vmid_gen = 0;

	/* The maximum number of VCPUs is limited by the host's GIC model */
	kvm->arch.max_vcpus = vgic_present ?
				kvm_vgic_get_max_vcpus() : KVM_MAX_VCPUS;

	return ret;
out_free_stage2_pgd:
	kvm_free_stage2_pgd(kvm);
out_fail_alloc:
	free_percpu(kvm->arch.last_vcpu_ran);
	kvm->arch.last_vcpu_ran = NULL;
	return ret;
}

int kvm_arch_create_vcpu_debugfs(struct kvm_vcpu *vcpu)
{
	return 0;
}

vm_fault_t kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf)
{
	return VM_FAULT_SIGBUS;
}

/**
 * kvm_arch_destroy_vm - destroy the VM data structure
 * @kvm:	pointer to the KVM struct
 */
void kvm_arch_destroy_vm(struct kvm *kvm)
{
	int i;

	kvm_vgic_destroy(kvm);

	free_percpu(kvm->arch.last_vcpu_ran);
	kvm->arch.last_vcpu_ran = NULL;

	for (i = 0; i < KVM_MAX_VCPUS; ++i) {
		if (kvm->vcpus[i]) {
			kvm_vcpu_destroy(kvm->vcpus[i]);
			kvm->vcpus[i] = NULL;
		}
	}
	atomic_set(&kvm->online_vcpus, 0);
}

int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
{
	int r;

	switch (ext) {
	case KVM_CAP_IRQCHIP:
		r = vgic_present;
		break;
	case KVM_CAP_IOEVENTFD:
	case KVM_CAP_DEVICE_CTRL:
	case KVM_CAP_USER_MEMORY:
	case KVM_CAP_SYNC_MMU:
	case KVM_CAP_DESTROY_MEMORY_REGION_WORKS:
	case KVM_CAP_ONE_REG:
	case KVM_CAP_ARM_PSCI:
	case KVM_CAP_ARM_PSCI_0_2:
	case KVM_CAP_READONLY_MEM:
	case KVM_CAP_MP_STATE:
	case KVM_CAP_IMMEDIATE_EXIT:
	case KVM_CAP_VCPU_EVENTS:
	case KVM_CAP_ARM_IRQ_LINE_LAYOUT_2:
	case KVM_CAP_ARM_NISV_TO_USER:
	case KVM_CAP_ARM_INJECT_EXT_DABT:
		r = 1;
		break;
	case KVM_CAP_ARM_SET_DEVICE_ADDR:
		r = 1;
		break;
	case KVM_CAP_NR_VCPUS:
		r = num_online_cpus();
		break;
	case KVM_CAP_MAX_VCPUS:
		r = KVM_MAX_VCPUS;
		break;
	case KVM_CAP_MAX_VCPU_ID:
		r = KVM_MAX_VCPU_ID;
		break;
	case KVM_CAP_MSI_DEVID:
		if (!kvm)
			r = -EINVAL;
		else
			r = kvm->arch.vgic.msis_require_devid;
		break;
	case KVM_CAP_ARM_USER_IRQ:
		/*
		 * 1: EL1_VTIMER, EL1_PTIMER, and PMU.
		 * (bump this number if adding more devices)
		 */
		r = 1;
		break;
	default:
		r = kvm_arch_vm_ioctl_check_extension(kvm, ext);
		break;
	}
	return r;
}
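
/*
 * Illustrative userspace sketch (not part of this file): these
 * capabilities are probed with the KVM_CHECK_EXTENSION ioctl, e.g.
 *
 *	int max = ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_MAX_VCPUS);
 *
 * vm_fd is assumed to be an open VM (or /dev/kvm) file descriptor.
 */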

long kvm_arch_dev_ioctl(struct file *filp,
			unsigned int ioctl, unsigned long arg)
{
	return -EINVAL;
}
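
/*
 * Note on the split below (assumed rationale, not from the original
 * file): without VHE, EL2 code translates kernel pointers with
 * kern_hyp_va(), which is only valid for the kernel linear map, so the
 * VM structure must come from kzalloc(). With VHE the kernel itself
 * runs at EL2 and a vzalloc() allocation is sufficient.
 */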
struct kvm *kvm_arch_alloc_vm(void)
{
	if (!has_vhe())
		return kzalloc(sizeof(struct kvm), GFP_KERNEL);

	return vzalloc(sizeof(struct kvm));
}

void kvm_arch_free_vm(struct kvm *kvm)
{
	if (!has_vhe())
		kfree(kvm);
	else
		vfree(kvm);
}

int kvm_arch_vcpu_precreate(struct kvm *kvm, unsigned int id)
{
	if (irqchip_in_kernel(kvm) && vgic_initialized(kvm))
		return -EBUSY;

	if (id >= kvm->arch.max_vcpus)
		return -EINVAL;

	return 0;
}

int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
{
	int err;

	/* Force users to call KVM_ARM_VCPU_INIT */
	vcpu->arch.target = -1;
	bitmap_zero(vcpu->arch.features, KVM_VCPU_MAX_FEATURES);

	/* Set up the timer */
	kvm_timer_vcpu_init(vcpu);

	kvm_pmu_vcpu_init(vcpu);

	kvm_arm_reset_debug_ptr(vcpu);

	kvm_arm_pvtime_vcpu_init(&vcpu->arch);

	err = kvm_vgic_vcpu_init(vcpu);
	if (err)
		return err;

	return create_hyp_mappings(vcpu, vcpu + 1, PAGE_HYP);
}

void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu)
{
}

void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
{
	if (vcpu->arch.has_run_once && unlikely(!irqchip_in_kernel(vcpu->kvm)))
		static_branch_dec(&userspace_irqchip_in_use);

	kvm_mmu_free_memory_caches(vcpu);
	kvm_timer_vcpu_terminate(vcpu);
	kvm_pmu_vcpu_destroy(vcpu);
	kvm_arm_vcpu_destroy(vcpu);
}

int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu)
{
	return kvm_timer_is_pending(vcpu);
}

void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu)
{
	/*
	 * If we're about to block (most likely because we've just hit a
	 * WFI), we need to sync back the state of the GIC CPU interface
	 * so that we have the latest PMR and group enables. This ensures
	 * that kvm_arch_vcpu_runnable has up-to-date data to decide
	 * whether we have pending interrupts.
	 *
	 * For the same reason, we want to tell GICv4 that we need
	 * doorbells to be signalled, should an interrupt become pending.
	 */
	preempt_disable();
	kvm_vgic_vmcr_sync(vcpu);
	vgic_v4_put(vcpu, true);
	preempt_enable();
}

void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu)
{
	preempt_disable();
	vgic_v4_load(vcpu);
	preempt_enable();
}

void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
{
	int *last_ran;
	kvm_host_data_t *cpu_data;

	last_ran = this_cpu_ptr(vcpu->kvm->arch.last_vcpu_ran);
	cpu_data = this_cpu_ptr(&kvm_host_data);

	/*
	 * We might get preempted before the vCPU actually runs, but
	 * over-invalidation doesn't affect correctness.
	 */
	if (*last_ran != vcpu->vcpu_id) {
		kvm_call_hyp(__kvm_tlb_flush_local_vmid, vcpu);
		*last_ran = vcpu->vcpu_id;
	}

	vcpu->cpu = cpu;
	vcpu->arch.host_cpu_context = &cpu_data->host_ctxt;

	kvm_vgic_load(vcpu);
	kvm_timer_vcpu_load(vcpu);
	kvm_vcpu_load_sysregs(vcpu);
	kvm_arch_vcpu_load_fp(vcpu);
	kvm_vcpu_pmu_restore_guest(vcpu);
	if (kvm_arm_is_pvtime_enabled(&vcpu->arch))
		kvm_make_request(KVM_REQ_RECORD_STEAL, vcpu);

	if (single_task_running())
		vcpu_clear_wfx_traps(vcpu);
	else
		vcpu_set_wfx_traps(vcpu);

	vcpu_ptrauth_setup_lazy(vcpu);
}

void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
{
	kvm_arch_vcpu_put_fp(vcpu);
	kvm_vcpu_put_sysregs(vcpu);
	kvm_timer_vcpu_put(vcpu);
	kvm_vgic_put(vcpu);
	kvm_vcpu_pmu_restore_host(vcpu);

	vcpu->cpu = -1;
}

static void vcpu_power_off(struct kvm_vcpu *vcpu)
{
	vcpu->arch.power_off = true;
	kvm_make_request(KVM_REQ_SLEEP, vcpu);
	kvm_vcpu_kick(vcpu);
}

int kvm_arch_vcpu_ioctl_get_mpstate(struct kvm_vcpu *vcpu,
				    struct kvm_mp_state *mp_state)
{
	if (vcpu->arch.power_off)
		mp_state->mp_state = KVM_MP_STATE_STOPPED;
	else
		mp_state->mp_state = KVM_MP_STATE_RUNNABLE;

	return 0;
}

int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu,
				    struct kvm_mp_state *mp_state)
{
	int ret = 0;

	switch (mp_state->mp_state) {
	case KVM_MP_STATE_RUNNABLE:
		vcpu->arch.power_off = false;
		break;
	case KVM_MP_STATE_STOPPED:
		vcpu_power_off(vcpu);
		break;
	default:
		ret = -EINVAL;
	}

	return ret;
}
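
/*
 * Illustrative userspace sketch (not part of this file): a vcpu is
 * powered off through the KVM_SET_MP_STATE ioctl on its vcpu fd:
 *
 *	struct kvm_mp_state st = { .mp_state = KVM_MP_STATE_STOPPED };
 *	ioctl(vcpu_fd, KVM_SET_MP_STATE, &st);
 */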

/**
 * kvm_arch_vcpu_runnable - determine if the vcpu can be scheduled
 * @v:		The VCPU pointer
 *
 * If the guest CPU is not waiting for interrupts, or an interrupt line is
 * asserted, the CPU is by definition runnable.
 */
int kvm_arch_vcpu_runnable(struct kvm_vcpu *v)
{
	bool irq_lines = *vcpu_hcr(v) & (HCR_VI | HCR_VF);

	return ((irq_lines || kvm_vgic_vcpu_pending_irq(v))
		&& !v->arch.power_off && !v->arch.pause);
}

bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu)
{
	return vcpu_mode_priv(vcpu);
}

/* Just ensure a guest exit from a particular CPU */
static void exit_vm_noop(void *info)
{
}

void force_vm_exit(const cpumask_t *mask)
{
	preempt_disable();
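	/* The IPI itself is the mechanism: taking it forces a guest exit. */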
smp_call_function_many(mask, exit_vm_noop, NULL, true);
2016-03-07 19:50:36 +03:00
preempt_enable();
2013-01-21 03:47:42 +04:00
}
/**
* need_new_vmid_gen - check that the VMID is still valid
2018-12-11 17:26:31 +03:00
* @vmid: The VMID to check
2013-01-21 03:47:42 +04:00
*
* Return true if there is a new generation of VMIDs being used.
*
2018-12-11 17:26:31 +03:00
* The hardware supports a limited set of values, with the value zero reserved
* for the host, so we check whether an assigned value belongs to a previous
* generation, which requires us to assign a new value. If we're the first to
* use a VMID for the new generation, we must flush the necessary caches and
* TLBs on all CPUs. (A standalone model of the whole scheme follows
* update_vmid() below.)
2013-01-21 03:47:42 +04:00
*/
2018-12-11 17:26:31 +03:00
static bool need_new_vmid_gen(struct kvm_vmid *vmid)
2013-01-21 03:47:42 +04:00
{
2018-12-11 15:23:57 +03:00
u64 current_vmid_gen = atomic64_read(&kvm_vmid_gen);
smp_rmb(); /* Orders read of kvm_vmid_gen and kvm->arch.vmid */
2018-12-11 17:26:31 +03:00
return unlikely(READ_ONCE(vmid->vmid_gen) != current_vmid_gen);
2013-01-21 03:47:42 +04:00
}
/**
2018-12-11 17:26:31 +03:00
* update_vmid - Update the vmid with a valid VMID for the current generation
* @vmid: The stage-2 VMID information struct
2013-01-21 03:47:42 +04:00
*/
2018-12-11 17:26:31 +03:00
static void update_vmid(struct kvm_vmid *vmid)
2013-01-21 03:47:42 +04:00
{
2018-12-11 17:26:31 +03:00
if (!need_new_vmid_gen(vmid))
2013-01-21 03:47:42 +04:00
return;
2018-12-11 15:23:57 +03:00
spin_lock(&kvm_vmid_lock);
2013-01-21 03:47:42 +04:00
/*
* We need to re-check the vmid_gen here to ensure that if another vcpu
* already allocated a valid vmid for this vm, then this vcpu should
* use the same vmid.
*/
2018-12-11 17:26:31 +03:00
if (!need_new_vmid_gen(vmid)) {
2018-12-11 15:23:57 +03:00
spin_unlock(&kvm_vmid_lock);
2013-01-21 03:47:42 +04:00
return;
}
/* First user of a new VMID generation? */
if (unlikely(kvm_next_vmid == 0)) {
atomic64_inc(&kvm_vmid_gen);
kvm_next_vmid = 1;

/*
* On SMP we know no other CPUs can use this CPU's or each
* other's VMID after force_vm_exit returns since the
* kvm_vmid_lock blocks them from reentry to the guest.
*/
force_vm_exit(cpu_all_mask);

/*
* Now broadcast TLB + ICACHE invalidation over the inner
* shareable domain to make sure all data structures are
* clean.
*/
kvm_call_hyp(__kvm_flush_vm_context);
}
2018-12-11 17:26:31 +03:00
vmid->vmid = kvm_next_vmid;
2013-01-21 03:47:42 +04:00
kvm_next_vmid++;
2018-12-11 17:26:31 +03:00
kvm_next_vmid &= (1 << kvm_get_vmid_bits()) - 1;
2013-01-21 03:47:42 +04:00
2018-12-11 15:23:57 +03:00
smp_wmb();
2018-12-11 17:26:31 +03:00
WRITE_ONCE(vmid->vmid_gen, atomic64_read(&kvm_vmid_gen));
2018-12-11 15:23:57 +03:00
spin_unlock(&kvm_vmid_lock);
2013-01-21 03:47:42 +04:00
}
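Since the generation scheme is spread across need_new_vmid_gen() and update_vmid(), here is the standalone model promised above: a hedged, userspace-only rendering of generation-based ID recycling, using C11 atomics and a pthread mutex in place of the kernel's primitives. Every name in it is invented for illustration; the kernel additionally forces guest exits and flushes TLBs at the rollover point, which the model only notes in a comment.

/* Standalone model of generation-based ID recycling (illustrative only). */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define ID_BITS 8 /* models kvm_get_vmid_bits() */

struct vm_id_slot {
	_Atomic uint64_t gen; /* generation this id was allocated in */
	uint32_t id;
};

static _Atomic uint64_t global_gen = 1;
static uint32_t next_id = 1; /* 0 stays reserved, as for the host VMID */
static pthread_mutex_t id_lock = PTHREAD_MUTEX_INITIALIZER;

static bool needs_new_id(struct vm_id_slot *s)
{
	return atomic_load(&s->gen) != atomic_load(&global_gen);
}

static void refresh_id(struct vm_id_slot *s)
{
	if (!needs_new_id(s))
		return;

	pthread_mutex_lock(&id_lock);
	/* Re-check: another thread may have refreshed this slot already. */
	if (!needs_new_id(s)) {
		pthread_mutex_unlock(&id_lock);
		return;
	}
	/* Rollover: bump the generation, invalidating every old id at once. */
	if (next_id == 0) {
		atomic_fetch_add(&global_gen, 1);
		next_id = 1;
		/* The kernel would force guest exits and flush TLBs here. */
	}
	s->id = next_id++;
	next_id &= (1u << ID_BITS) - 1;
	/* Publish the id before the generation; atomics order this here,
	 * where the kernel pairs an smp_wmb() with the reader's smp_rmb(). */
	atomic_store(&s->gen, atomic_load(&global_gen));
	pthread_mutex_unlock(&id_lock);
}

int main(void)
{
	struct vm_id_slot s = { 0 };

	refresh_id(&s);
	printf("id=%u gen=%llu\n", s.id,
	       (unsigned long long)atomic_load(&s.gen));
	return 0;
}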
static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
{
2014-12-12 23:19:23 +03:00
struct kvm *kvm = vcpu->kvm;
2016-05-18 18:26:00 +03:00
int ret = 0;
2013-09-24 01:55:55 +04:00
2013-01-21 03:47:42 +04:00
if (likely(vcpu->arch.has_run_once))
return 0;
2018-12-19 17:27:01 +03:00
if (!kvm_arm_vcpu_is_finalized(vcpu))
return -EPERM;
2013-01-21 03:47:42 +04:00
vcpu->arch.has_run_once = true;
2013-01-21 03:28:13 +04:00
2017-10-27 20:57:51 +03:00
if (likely(irqchip_in_kernel(kvm))) {
/*
* Map the VGIC hardware resources before running a vcpu the
* first time on this VM.
*/
if (unlikely(!vgic_ready(kvm))) {
ret = kvm_vgic_map_resources(kvm);
if (ret)
return ret;
}
} else {
/*
* Tell the rest of the code that there are userspace irqchip
* VMs in the wild.
*/
static_branch_inc(&userspace_irqchip_in_use);
2013-01-22 04:36:16 +04:00
}
KVM: arm/arm64: Support arch timers with a userspace gic
If you're running with a userspace gic or other interrupt controller
(that is, no vgic in the kernel), then you have so far not been able to
use the architected timers, because the output of the architected
timers, which are driven inside the kernel, was a kernel-only construct
between the arch timer code and the vgic.
This patch implements the new KVM_CAP_ARM_USER_IRQ feature, where we use a
side channel on the kvm_run structure, run->s.regs.device_irq_level, to
always notify userspace of the timer output levels when using a userspace
irqchip.
This works by ensuring that before we enter the guest, if the timer
output level has changed compared to what we last told userspace, we
don't enter the guest, but instead return to userspace to notify it of
the new level. If we are exiting, because of an MMIO for example, and
the level changed at the same time, the value is also updated and
userspace can sample the line as it needs. This is nicely achieved by
simply always updating the timer_irq_level field after the main run
loop.
Note that the kvm_timer_update_irq trace event is changed to show the
host IRQ number for the timer instead of the guest IRQ number, because
the kernel no longer knows which IRQ userspace wires up the timer signal
to.
Also note that this patch implements all required functionality but does
not yet advertise the capability.
Reviewed-by: Alexander Graf <agraf@suse.de>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
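A hedged sketch of the side channel this message describes: remember the level last reported to userspace and refuse to enter the guest when a freshly sampled level differs. Only run->s.regs.device_irq_level comes from the message itself; every other name below is invented for the model.

/* Illustrative model of the timer-level side channel (names invented). */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct run_regs {
	uint64_t device_irq_level; /* mirrors run->s.regs.device_irq_level */
};

/*
 * Returns true when we must return to userspace instead of entering the
 * guest, because the timer output level changed since the last report.
 */
static bool sync_timer_level(struct run_regs *run, uint64_t *last_level,
			     uint64_t current_level)
{
	if (current_level == *last_level)
		return false;

	*last_level = current_level;
	run->device_irq_level = current_level;
	return true;
}

int main(void)
{
	struct run_regs run = { 0 };
	uint64_t last = 0;

	printf("exit to userspace: %d\n", sync_timer_level(&run, &last, 1));
	printf("exit to userspace: %d\n", sync_timer_level(&run, &last, 1));
	return 0;
}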
2016-09-27 22:08:06 +03:00
ret = kvm_timer_enable(vcpu);
2017-05-02 14:41:02 +03:00
if (ret)
return ret;

ret = kvm_arm_pmu_v3_enable(vcpu);
2014-12-12 23:19:23 +03:00
2016-05-18 18:26:00 +03:00
return ret;
KVM: ARM: World-switch implementation
Provides complete world-switch implementation to switch to other guests
running in non-secure modes. Includes Hyp exception handlers that
capture necessary exception information and stores the information on
the VCPU and KVM structures.
The following Hyp-ABI is also documented in the code:
Hyp-ABI: Calling HYP-mode functions from host (in SVC mode):
Switching to Hyp mode is done through a simple HVC #0 instruction. The
exception vector code will check that the HVC comes from VMID==0 and if
so will push the necessary state (SPSR, lr_usr) on the Hyp stack.
- r0 contains a pointer to a HYP function
- r1, r2, and r3 contain arguments to the above function.
- The HYP function will be called with its arguments in r0, r1 and r2.
On HYP function return, we return directly to SVC.
A call to a function executing in Hyp mode is performed like the following:
<svc code>
ldr r0, =BSYM(my_hyp_fn)
ldr r1, =my_param
hvc #0 ; Call my_hyp_fn(my_param) from HYP mode
<svc code>
Otherwise, the world-switch is pretty straight-forward. All state that
can be modified by the guest is first backed up on the Hyp stack and the
VCPU values is loaded onto the hardware. State, which is not loaded, but
theoretically modifiable by the guest is protected through the
virtualiation features to generate a trap and cause software emulation.
Upon guest returns, all state is restored from hardware onto the VCPU
struct and the original state is restored from the Hyp-stack onto the
hardware.
SMP support using the VMPIDR calculated on the basis of the host MPIDR
and overriding the low bits with KVM vcpu_id contributed by Marc Zyngier.
Reuse of VMIDs has been implemented by Antonios Motakis and adapated from
a separate patch into the appropriate patches introducing the
functionality. Note that the VMIDs are stored per VM as required by the ARM
architecture reference manual.
To support VFP/NEON we trap those instructions using the HCPTR. When
we trap, we switch the FPU. After a guest exit, the VFP state is
returned to the host. When disabling access to floating point
instructions, we also mask FPEXC_EN in order to avoid the guest
receiving Undefined instruction exceptions before we have a chance to
switch back the floating point state. We are reusing vfp_hard_struct,
so we depend on VFPv3 being enabled in the host kernel; if not, we still
trap cp10 and cp11 in order to inject an undefined instruction exception
whenever the guest tries to use VFP/NEON. VFP/NEON developed by
Antonios Motakis and Rusty Russell.
Aborts that are permission faults, and not stage-1 page table walks, do
not report the faulting address in the HPFAR. We have to resolve the
IPA, and store it just like the HPFAR register on the VCPU struct. If
the IPA cannot be resolved, it means another CPU is playing with the
page tables, and we simply restart the guest. This quirk was fixed by
Marc Zyngier.
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Antonios Motakis <a.motakis@virtualopensystems.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <c.dall@virtualopensystems.com>
2013-01-21 03:47:42 +04:00
}
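To make the Hyp-ABI described above concrete from the C side, here is a hedged sketch of how a host function dispatches into HYP mode; my_hyp_fn and my_param are the illustrative names from the commit message, and the sketch assumes the kvm_call_hyp() wrapper that implements the HVC #0 sequence.
/* Hedged sketch of the SVC-side calling convention described above. */
static void my_hyp_fn(unsigned long my_param);	/* runs in HYP mode */

static void example_call_into_hyp(unsigned long my_param)
{
	/*
	 * Expands to the HVC #0 sequence documented above: r0 holds
	 * &my_hyp_fn, r1 holds my_param, and the exception vector code
	 * invokes my_hyp_fn(my_param) in HYP mode before returning to
	 * SVC.
	 */
	kvm_call_hyp(my_hyp_fn, my_param);
}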
2015-03-04 13:14:34 +03:00
bool kvm_arch_intc_initialized(struct kvm *kvm)
{
return vgic_initialized(kvm);
}
2016-04-27 12:28:00 +03:00
void kvm_arm_halt_guest(struct kvm *kvm)
2015-09-26 00:41:17 +03:00
{
int i;
struct kvm_vcpu *vcpu;
kvm_for_each_vcpu(i, vcpu, kvm)
vcpu->arch.pause = true;
2017-06-04 15:43:58 +03:00
kvm_make_all_cpus_request(kvm, KVM_REQ_SLEEP);
2015-09-26 00:41:17 +03:00
}
2016-04-27 12:28:00 +03:00
void kvm_arm_resume_guest(struct kvm *kvm)
2015-09-26 00:41:17 +03:00
{
int i;
struct kvm_vcpu *vcpu;
2017-05-06 21:01:24 +03:00
kvm_for_each_vcpu(i, vcpu, kvm) {
vcpu->arch.pause = false;
2018-06-12 11:34:52 +03:00
swake_up_one(kvm_arch_vcpu_wq(vcpu));
2017-05-06 21:01:24 +03:00
}
2015-09-26 00:41:17 +03:00
}
2017-06-04 15:43:58 +03:00
static void vcpu_req_sleep(struct kvm_vcpu *vcpu)
2013-01-21 03:28:13 +04:00
{
2016-02-19 11:46:39 +03:00
struct swait_queue_head *wq = kvm_arch_vcpu_wq(vcpu);
2013-01-21 03:28:13 +04:00
2018-06-12 11:34:52 +03:00
swait_event_interruptible_exclusive(*wq, ((!vcpu->arch.power_off) &&
2015-09-26 00:41:17 +03:00
(!vcpu->arch.pause)));
2017-06-04 15:43:55 +03:00
2017-06-04 15:43:57 +03:00
if (vcpu->arch.power_off || vcpu->arch.pause) {
2017-06-04 15:43:55 +03:00
/* Awaken to handle a signal, request we sleep again later. */
2017-06-04 15:43:58 +03:00
kvm_make_request(KVM_REQ_SLEEP, vcpu);
2017-06-04 15:43:55 +03:00
}
2018-12-20 14:36:07 +03:00
/*
* Make sure we will observe a potential reset request if we've
* observed a change to the power state. Pairs with the smp_wmb() in
* kvm_psci_vcpu_on().
*/
smp_rmb();
2013-01-21 03:28:13 +04:00
}
2013-05-09 02:28:06 +04:00
static int kvm_vcpu_initialized(struct kvm_vcpu *vcpu)
{
return vcpu->arch.target >= 0;
}
2017-06-04 15:43:55 +03:00
static void check_vcpu_requests(struct kvm_vcpu *vcpu)
{
if (kvm_request_pending(vcpu)) {
2017-06-04 15:43:58 +03:00
if (kvm_check_request(KVM_REQ_SLEEP, vcpu))
vcpu_req_sleep(vcpu);
2017-06-04 15:43:59 +03:00
2018-12-20 14:36:07 +03:00
if (kvm_check_request(KVM_REQ_VCPU_RESET, vcpu))
kvm_reset_vcpu(vcpu);
2017-06-04 15:43:59 +03:00
/*
* Clear IRQ_PENDING requests that were made to guarantee
* that a VCPU sees new virtual interrupts.
*/
kvm_check_request(KVM_REQ_IRQ_PENDING, vcpu);
2019-10-21 18:28:18 +03:00
if (kvm_check_request(KVM_REQ_RECORD_STEAL, vcpu))
kvm_update_stolen_time(vcpu);
2017-06-04 15:43:55 +03:00
}
}
2013-01-21 03:47:42 +04:00
/**
* kvm_arch_vcpu_ioctl_run - the main VCPU run function to execute guest code
* @vcpu: The VCPU pointer
* @run: The kvm_run structure pointer used for userspace state exchange
*
* This function is called through the VCPU_RUN ioctl called from user space. It
* will execute VM code in a loop until the time slice for the process is used
* up or some emulation is needed from user space, in which case the function
* will return with return value 0 and with the kvm_run structure filled in
* with the required data for the requested emulation.
*/
2013-01-21 03:28:06 +04:00
int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
{
2013-01-21 03:47:42 +04:00
int ret;
2013-05-09 02:28:06 +04:00
if (unlikely(!kvm_vcpu_initialized(vcpu)))
2013-01-21 03:47:42 +04:00
return -ENOEXEC;
ret = kvm_vcpu_first_run_init(vcpu);
if (ret)
2017-11-29 18:37:53 +03:00
return ret;
2013-01-21 03:47:42 +04:00
2013-01-21 03:43:58 +04:00
if (run->exit_reason == KVM_EXIT_MMIO) {
ret = kvm_handle_mmio_return(vcpu, vcpu->run);
if (ret)
2017-11-29 18:37:53 +03:00
return ret;
2013-01-21 03:43:58 +04:00
}
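For illustration, a hedged userspace sketch of the VCPU_RUN contract described in the function comment above: re-entering KVM_RUN after a KVM_EXIT_MMIO is what lets kvm_handle_mmio_return() complete the in-flight access. vcpu_fd and handle_mmio() are hypothetical VMM-side names, not part of the kernel.
/* Hedged userspace sketch; not kernel code. */
#include <errno.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

extern void handle_mmio(__u64 addr, __u8 *data, __u32 len, int is_write);

static int run_vcpu_loop(int vcpu_fd, struct kvm_run *run)
{
	for (;;) {
		if (ioctl(vcpu_fd, KVM_RUN, 0) < 0 && errno != EINTR)
			return -1;

		switch (run->exit_reason) {
		case KVM_EXIT_MMIO:
			/*
			 * Emulate the access; for reads the VMM fills
			 * run->mmio.data so the kernel can complete the
			 * instruction on the next KVM_RUN (see
			 * kvm_handle_mmio_return() above).
			 */
			handle_mmio(run->mmio.phys_addr, run->mmio.data,
				    run->mmio.len, run->mmio.is_write);
			break;
		case KVM_EXIT_INTR:
			/* Pending signal or userspace-irqchip notification. */
			break;
		default:
			return 0;
		}
	}
}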
2017-11-29 18:37:53 +03:00
if (run->immediate_exit)
return -EINTR;
vcpu_load(vcpu);
2017-02-08 13:50:15 +03:00
2017-11-25 00:39:01 +03:00
kvm_sigset_activate(vcpu);
2013-01-21 03:47:42 +04:00
ret = 1;
run->exit_reason = KVM_EXIT_UNKNOWN;
while (ret > 0) {
/*
* Check conditions before entering the guest
*/
cond_resched();
2018-12-11 17:26:31 +03:00
update_vmid(&vcpu->kvm->arch.vmid);
2013-01-21 03:47:42 +04:00
2017-06-04 15:43:55 +03:00
check_vcpu_requests(vcpu);
2015-06-08 17:00:28 +03:00
/*
* Preparing the interrupts to be injected also
* involves poking the GIC, which must be done in a
* non-preemptible context.
*/
2015-05-28 21:49:10 +03:00
preempt_disable();
2016-03-24 13:21:04 +03:00
2016-02-26 14:29:19 +03:00
kvm_pmu_flush_hwstate(vcpu);
2016-03-24 13:21:04 +03:00
2013-01-21 03:47:42 +04:00
local_irq_disable();
2015-06-08 17:00:28 +03:00
kvm_vgic_flush_hwstate(vcpu);
2013-01-21 03:47:42 +04:00
/*
2017-10-27 20:57:51 +03:00
* Exit if we have a signal pending so that we can deliver the
* signal to user space.
2013-01-21 03:47:42 +04:00
*/
2017-10-27 20:57:51 +03:00
if (signal_pending(current)) {
2013-01-21 03:47:42 +04:00
ret = -EINTR;
run->exit_reason = KVM_EXIT_INTR;
}
2017-10-27 20:57:51 +03:00
/*
* If we're using a userspace irqchip, then check if we need
* to tell a userspace irqchip about timer or PMU level
* changes and if so, exit to userspace (the actual level
* state gets updated in kvm_timer_update_run and
* kvm_pmu_update_run below).
*/
if (static_branch_unlikely(&userspace_irqchip_in_use)) {
if (kvm_timer_should_notify_user(vcpu) ||
kvm_pmu_should_notify_user(vcpu)) {
ret = -EINTR;
run->exit_reason = KVM_EXIT_INTR;
}
}
2017-06-04 15:43:54 +03:00
/*
* Ensure we set mode to IN_GUEST_MODE after we disable
* interrupts and before the final VCPU requests check.
* See the comment in kvm_vcpu_exiting_guest_mode() and
2019-07-24 10:24:49 +03:00
* Documentation/virt/kvm/vcpu-requests.rst
2017-06-04 15:43:54 +03:00
*/
smp_store_mb(vcpu->mode, IN_GUEST_MODE);
2018-12-11 17:26:31 +03:00
if (ret <= 0 || need_new_vmid_gen(&vcpu->kvm->arch.vmid) ||
2017-06-04 15:43:57 +03:00
    kvm_request_pending(vcpu)) {
2017-06-04 15:43:54 +03:00
vcpu->mode = OUTSIDE_GUEST_MODE;
2017-10-05 00:42:32 +03:00
isb(); /* Ensure work in x_flush_hwstate is committed */
2016-02-26 14:29:19 +03:00
kvm_pmu_sync_hwstate(vcpu);
2017-10-27 20:57:51 +03:00
if (static_branch_unlikely(&userspace_irqchip_in_use))
kvm_timer_sync_hwstate(vcpu);
2013-01-22 04:36:12 +04:00
kvm_vgic_sync_hwstate(vcpu);
2016-10-16 21:24:30 +03:00
local_irq_enable();
2015-06-08 17:00:28 +03:00
preempt_enable();
2013-01-21 03:47:42 +04:00
continue;
}
2015-07-07 19:29:56 +03:00
kvm_arm_setup_debug(vcpu);
2013-01-21 03:47:42 +04:00
/**************************************************************
* Enter the guest
*/
trace_kvm_entry(*vcpu_pc(vcpu));
2016-06-15 16:18:26 +03:00
guest_enter_irqoff();
2013-01-21 03:47:42 +04:00
2017-10-03 15:02:12 +03:00
if (has_vhe()) {
kvm_arm_vhe_guest_enter();
ret = kvm_vcpu_run_vhe(vcpu);
2018-01-15 22:39:00 +03:00
kvm_arm_vhe_guest_exit();
2017-10-03 15:02:12 +03:00
} else {
2019-01-05 18:49:50 +03:00
ret = kvm_call_hyp_ret(__kvm_vcpu_run_nvhe, vcpu);
2017-10-03 15:02:12 +03:00
}
2013-01-21 03:47:42 +04:00
vcpu->mode = OUTSIDE_GUEST_MODE;
2015-11-26 13:09:43 +03:00
vcpu->stat.exits++;
2015-05-28 21:49:10 +03:00
/*
* Back from guest
**************************************************************/
2015-07-07 19:29:56 +03:00
kvm_arm_clear_debug(vcpu);
2016-10-16 21:24:30 +03:00
/*
KVM: arm/arm64: Avoid timer save/restore in vcpu entry/exit
We don't need to save and restore the hardware timer state and examine
if it generates interrupts on every entry/exit to the guest. The
timer hardware is perfectly capable of telling us when it has expired
by signaling interrupts.
When taking a vtimer interrupt in the host, we don't want to mess with
the timer configuration, we just want to forward the physical interrupt
to the guest as a virtual interrupt. We can use the split priority drop
and deactivate feature of the GIC to do this, which leaves an EOI'ed
interrupt active on the physical distributor, making sure we don't keep
taking timer interrupts which would prevent the guest from running. We
can then forward the physical interrupt to the VM using the HW bit in
the LR of the GIC, like we do already, which lets the guest directly
deactivate both the physical and virtual timer simultaneously, allowing
the timer hardware to exit the VM and generate a new physical interrupt
when the timer output is again asserted later on.
We do, however, need to capture this state when migrating VCPUs between
physical CPUs; for this we use the vcpu put/load functions, which are
called through preempt notifiers whenever the thread is scheduled away
from the CPU, or directly if we return from the ioctl to
userspace.
One caveat is that we have to save and restore the timer state in both
kvm_timer_vcpu_[put/load] and kvm_timer_[schedule/unschedule], because
we can have the following flows:
1. kvm_vcpu_block
2. kvm_timer_schedule
3. schedule
4. kvm_timer_vcpu_put (preempt notifier)
5. schedule (vcpu thread gets scheduled back)
6. kvm_timer_vcpu_load (preempt notifier)
7. kvm_timer_unschedule
And a version where we don't actually call schedule:
1. kvm_vcpu_block
2. kvm_timer_schedule
7. kvm_timer_unschedule
Since kvm_timer_[schedule/unschedule] may not be followed by put/load,
but put/load also may be called independently, we call the timer
save/restore functions from both paths. Since they rely on the loaded
flag to never save/restore when unnecessary, this doesn't cause any
harm, and we ensure that all invocations of either set of functions work
as intended.
An added benefit beyond not having to read and write the timer sysregs
on every entry and exit is that we no longer have to actively write the
active state to the physical distributor, because we configured the
irq for the vtimer to only get a priority drop when handling the
interrupt in the GIC driver (we called irq_set_vcpu_affinity()), and
the interrupt stays active after firing on the host.
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
2016-10-16 21:30:38 +03:00
 * We must sync the PMU state before the vgic state so
2016-10-16 21:24:30 +03:00
 * that the vgic can properly sample the updated state of the
 * interrupt line.
 */
kvm_pmu_sync_hwstate(vcpu);
2016-10-16 21:30:38 +03:00
/*
 * Sync the vgic state before syncing the timer state because
 * the timer code needs to know if the virtual timer
 * interrupts are active.
 */
2016-10-16 21:24:30 +03:00
kvm_vgic_sync_hwstate(vcpu);
2016-10-16 21:30:38 +03:00
/*
 * Sync the timer hardware state before enabling interrupts as
 * we don't want vtimer interrupts to race with syncing the
 * timer virtual interrupt state.
 */
2017-10-27 20:57:51 +03:00
if (static_branch_unlikely(&userspace_irqchip_in_use))
	kvm_timer_sync_hwstate(vcpu);
2016-10-16 21:30:38 +03:00
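An illustrative sketch (not the kernel's actual code) of the "loaded" flag idea described in the timer commit message above: both the put/load and schedule/unschedule paths may call save/restore, and the flag makes the second call a no-op. All type, field, and variable names here are invented for the example; hw_cnt_ctl/hw_cnt_cval stand in for the CNTV_CTL/CNTV_CVAL sysreg accessors.

#include <stdbool.h>
#include <stdint.h>

/* Stand-ins for the virtual timer sysregs (illustrative only). */
static uint64_t hw_cnt_ctl, hw_cnt_cval;

struct vtimer_context {
	bool loaded;		/* true while the state lives in hardware */
	uint64_t cnt_ctl;
	uint64_t cnt_cval;
};

/* Reached from both vcpu_put and timer_schedule: only the first one saves. */
static void timer_save_state(struct vtimer_context *t)
{
	if (!t->loaded)
		return;
	t->cnt_ctl = hw_cnt_ctl;
	t->cnt_cval = hw_cnt_cval;
	t->loaded = false;
}

/* Reached from both vcpu_load and timer_unschedule: only the first restores. */
static void timer_restore_state(struct vtimer_context *t)
{
	if (t->loaded)
		return;
	hw_cnt_cval = t->cnt_cval;
	hw_cnt_ctl = t->cnt_ctl;
	t->loaded = true;
}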
2018-04-06 16:55:59 +03:00
kvm_arch_vcpu_ctxsync_fp(vcpu);
2013-01-21 03:47:42 +04:00
/*
 * We may have taken a host interrupt in HYP mode (ie
 * while executing the guest). This interrupt is still
 * pending, as we haven't serviced it yet!
 *
 * We're now back in SVC mode, with interrupts
 * disabled. Enabling the interrupts now will have
 * the effect of taking the interrupt again, in SVC
 * mode this time.
 */
local_irq_enable();
/*
2016-06-15 16:18:26 +03:00
 * We do local_irq_enable() before calling guest_exit() so
2015-05-28 21:49:10 +03:00
 * that if a timer interrupt hits while running the guest we
 * account that tick as being spent in the guest. We enable
2016-06-15 16:18:26 +03:00
 * preemption after calling guest_exit() so that if we get
2015-05-28 21:49:10 +03:00
 * preempted we make sure ticks after that are not counted as
 * guest time.
 */
2016-06-15 16:18:26 +03:00
guest_exit();
2015-08-30 16:55:22 +03:00
trace_kvm_exit(ret, kvm_vcpu_trap_get_class(vcpu), *vcpu_pc(vcpu));
2015-05-28 21:49:10 +03:00
2018-01-15 22:39:04 +03:00
/* Exit types that need handling before we can be preempted */
handle_exit_early(vcpu, run, ret);
2015-06-08 17:00:28 +03:00
preempt_enable();
2013-01-21 03:47:42 +04:00
ret = handle_exit(vcpu, run, ret);
}
KVM: arm/arm64: Support arch timers with a userspace gic
If you're running with a userspace gic or other interrupt controller
(that is no vgic in the kernel), then you have so far not been able to
use the architected timers, because the output of the architected
timers, which are driven inside the kernel, was a kernel-only construct
between the arch timer code and the vgic.
This patch implements the new KVM_CAP_ARM_USER_IRQ feature, where we use a
side channel on the kvm_run structure, run->s.regs.device_irq_level, to
always notify userspace of the timer output levels when using a userspace
irqchip.
This works by ensuring that before we enter the guest, if the timer
output level has changed compared to what we last told userspace, we
don't enter the guest, but instead return to userspace to notify it of
the new level. If we are exiting, because of an MMIO for example, and
the level changed at the same time, the value is also updated and
userspace can sample the line as it needs. This is nicely achieved by
simply always updating the timer_irq_level field after the main run
loop.
Note that the kvm_timer_update_irq trace event is changed to show the
host IRQ number for the timer instead of the guest IRQ number, because
the kernel no longer knows which IRQ userspace wires up the timer signal
to.
Also note that this patch implements all required functionality but does
not yet advertise the capability.
Reviewed-by: Alexander Graf <agraf@suse.de>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2016-09-27 22:08:06 +03:00
/* Tell userspace about in-kernel device output levels */
2017-02-01 14:51:52 +03:00
if (unlikely(!irqchip_in_kernel(vcpu->kvm))) {
	kvm_timer_update_run(vcpu);
	kvm_pmu_update_run(vcpu);
}
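A hedged userspace sketch of the side channel described in the commit message above: after KVM_RUN returns with a userspace irqchip (KVM_CAP_ARM_USER_IRQ enabled), the VMM can sample the device output levels that the kernel mirrors into the sync-regs area of kvm_run. Only uapi fields and flags are used here; how the level is fed into the userspace GIC model is up to the VMM.

#include <stdbool.h>
#include <linux/kvm.h>

/* Guest's virtual EL1 timer output level, as published after KVM_RUN. */
static bool vtimer_level(const struct kvm_run *run)
{
	return run->s.regs.device_irq_level & KVM_ARM_DEV_EL1_VTIMER;
}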
2016-09-27 22:08:06 +03:00
2017-11-25 00:39:01 +03:00
kvm_sigset_deactivate(vcpu);
2017-12-04 23:35:25 +03:00
vcpu_put(vcpu);
2013-01-21 03:47:42 +04:00
return ret;
2013-01-21 03:28:06 +04:00
}
2013-01-21 03:28:08 +04:00
static int vcpu_interrupt_line(struct kvm_vcpu *vcpu, int number, bool level)
{
	int bit_index;
	bool set;
2017-08-03 13:09:05 +03:00
	unsigned long *hcr;
2013-01-21 03:28:08 +04:00
	if (number == KVM_ARM_IRQ_CPU_IRQ)
		bit_index = __ffs(HCR_VI);
	else /* KVM_ARM_IRQ_CPU_FIQ */
		bit_index = __ffs(HCR_VF);
2017-08-03 13:09:05 +03:00
	hcr = vcpu_hcr(vcpu);
2013-01-21 03:28:08 +04:00
	if (level)
2017-08-03 13:09:05 +03:00
		set = test_and_set_bit(bit_index, hcr);
2013-01-21 03:28:08 +04:00
	else
2017-08-03 13:09:05 +03:00
		set = test_and_clear_bit(bit_index, hcr);
2013-01-21 03:28:08 +04:00
	/*
	 * If we didn't change anything, no need to wake up or kick other CPUs
	 */
	if (set == level)
		return 0;

	/*
	 * The vcpu irq_lines field was updated, wake up sleeping VCPUs and
	 * trigger a world-switch round on the running physical CPU to set the
	 * virtual IRQ/FIQ fields in the HCR appropriately.
	 */
2017-06-04 15:43:59 +03:00
	kvm_make_request(KVM_REQ_IRQ_PENDING, vcpu);
2013-01-21 03:28:08 +04:00
	kvm_vcpu_kick(vcpu);

	return 0;
}
2013-04-16 21:21:41 +04:00
int kvm_vm_ioctl_irq_line(struct kvm *kvm, struct kvm_irq_level *irq_level,
			  bool line_status)
2013-01-21 03:28:08 +04:00
{
	u32 irq = irq_level->irq;
	unsigned int irq_type, vcpu_idx, irq_num;
	int nrcpus = atomic_read(&kvm->online_vcpus);
	struct kvm_vcpu *vcpu = NULL;
	bool level = irq_level->level;

	irq_type = (irq >> KVM_ARM_IRQ_TYPE_SHIFT) & KVM_ARM_IRQ_TYPE_MASK;
	vcpu_idx = (irq >> KVM_ARM_IRQ_VCPU_SHIFT) & KVM_ARM_IRQ_VCPU_MASK;
KVM: arm/arm64: vgic: Allow more than 256 vcpus for KVM_IRQ_LINE
While parts of the VGIC support a large number of vcpus (we
bravely allow up to 512), other parts are more limited.
One of these limits is visible in the KVM_IRQ_LINE ioctl, which
only allows 256 vcpus to be signalled when using the CPU or PPI
types. Unfortunately, we've cornered ourselves badly by allocating
all the bits in the irq field.
Since the irq_type subfield (8 bit wide) is currently only taking
the values 0, 1 and 2 (and we have been careful not to allow anything
else), let's reduce this field to only 4 bits, and allocate the
remaining 4 bits to a vcpu2_index, which acts as a multiplier:
vcpu_id = 256 * vcpu2_index + vcpu_index
With that, and a new capability (KVM_CAP_ARM_IRQ_LINE_LAYOUT_2)
allowing this to be discovered, it becomes possible to inject
PPIs to up to 4096 vcpus. But please just don't.
Whilst we're there, add a clarification about the use of KVM_IRQ_LINE
on arm, which is not completely conditioned by KVM_CAP_IRQCHIP.
Reported-by: Zenghui Yu <yuzenghui@huawei.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
2019-08-18 16:09:47 +03:00
vcpu_idx += ((irq >> KVM_ARM_IRQ_VCPU2_SHIFT) & KVM_ARM_IRQ_VCPU2_MASK) * (KVM_ARM_IRQ_VCPU_MASK + 1);
2013-01-21 03:28:08 +04:00
	irq_num = (irq >> KVM_ARM_IRQ_NUM_SHIFT) & KVM_ARM_IRQ_NUM_MASK;

	trace_kvm_irq_line(irq_type, vcpu_idx, irq_num, irq_level->level);
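A hedged userspace sketch of the irq-field layout this function decodes, including the vcpu2_index extension described in the commit message above (vcpu_id = 256 * vcpu2_index + vcpu_index). The shift/mask constants are from the uapi headers; the helper name itself is invented for the example.

#include <linux/kvm.h>

static __u32 encode_arm_irq(__u32 type, __u32 vcpu_id, __u32 irq_num)
{
	__u32 vcpu2 = vcpu_id / (KVM_ARM_IRQ_VCPU_MASK + 1);
	__u32 vcpu = vcpu_id % (KVM_ARM_IRQ_VCPU_MASK + 1);

	/* Pack the fields exactly as kvm_vm_ioctl_irq_line() unpacks them. */
	return (type << KVM_ARM_IRQ_TYPE_SHIFT) |
	       (vcpu2 << KVM_ARM_IRQ_VCPU2_SHIFT) |
	       (vcpu << KVM_ARM_IRQ_VCPU_SHIFT) |
	       (irq_num << KVM_ARM_IRQ_NUM_SHIFT);
}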
2013-01-22 04:36:15 +04:00
	switch (irq_type) {
	case KVM_ARM_IRQ_TYPE_CPU:
		if (irqchip_in_kernel(kvm))
			return -ENXIO;
2013-01-21 03:28:08 +04:00
2013-01-22 04:36:15 +04:00
		if (vcpu_idx >= nrcpus)
			return -EINVAL;
2013-01-21 03:28:08 +04:00
2013-01-22 04:36:15 +04:00
		vcpu = kvm_get_vcpu(kvm, vcpu_idx);
		if (!vcpu)
			return -EINVAL;
2013-01-21 03:28:08 +04:00
2013-01-22 04:36:15 +04:00
		if (irq_num > KVM_ARM_IRQ_CPU_FIQ)
			return -EINVAL;

		return vcpu_interrupt_line(vcpu, irq_num, level);
	case KVM_ARM_IRQ_TYPE_PPI:
		if (!irqchip_in_kernel(kvm))
			return -ENXIO;

		if (vcpu_idx >= nrcpus)
			return -EINVAL;

		vcpu = kvm_get_vcpu(kvm, vcpu_idx);
		if (!vcpu)
			return -EINVAL;

		if (irq_num < VGIC_NR_SGIS || irq_num >= VGIC_NR_PRIVATE_IRQS)
			return -EINVAL;
2013-01-21 03:28:08 +04:00
2017-05-16 13:41:18 +03:00
		return kvm_vgic_inject_irq(kvm, vcpu->vcpu_id, irq_num, level, NULL);
2013-01-22 04:36:15 +04:00
	case KVM_ARM_IRQ_TYPE_SPI:
		if (!irqchip_in_kernel(kvm))
			return -ENXIO;
KVM: arm/arm64: check IRQ number on userland injection
When userland injects a SPI via the KVM_IRQ_LINE ioctl we currently
only check it against a fixed limit, which historically is set
to 127. With the new dynamic IRQ allocation the effective limit may
actually be smaller (64).
So when now a malicious or buggy userland injects a SPI in that
range, we spill over on our VGIC bitmaps and bytemaps memory.
I could trigger a host kernel NULL pointer dereference with current
mainline by injecting some bogus IRQ number from a hacked kvmtool:
-----------------
....
DEBUG: kvm_vgic_inject_irq(kvm, cpu=0, irq=114, level=1)
DEBUG: vgic_update_irq_pending(kvm, cpu=0, irq=114, level=1)
DEBUG: IRQ #114 still in the game, writing to bytemap now...
Unable to handle kernel NULL pointer dereference at virtual address 00000000
pgd = ffffffc07652e000
[00000000] *pgd=00000000f658b003, *pud=00000000f658b003, *pmd=0000000000000000
Internal error: Oops: 96000006 [#1] PREEMPT SMP
Modules linked in:
CPU: 1 PID: 1053 Comm: lkvm-msi-irqinj Not tainted 4.0.0-rc7+ #3027
Hardware name: FVP Base (DT)
task: ffffffc0774e9680 ti: ffffffc0765a8000 task.ti: ffffffc0765a8000
PC is at kvm_vgic_inject_irq+0x234/0x310
LR is at kvm_vgic_inject_irq+0x30c/0x310
pc : [<ffffffc0000ae0a8>] lr : [<ffffffc0000ae180>] pstate: 80000145
.....
So this patch fixes this by checking the SPI number against the
actual limit. Also we remove the former legacy hard limit of
127 in the ioctl code.
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
CC: <stable@vger.kernel.org> # 4.0, 3.19, 3.18
[maz: wrap KVM_ARM_IRQ_GIC_MAX with #ifndef __KERNEL__,
as suggested by Christopher Covington]
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2015-04-10 18:17:59 +03:00
		if (irq_num < VGIC_NR_PRIVATE_IRQS)
2013-01-22 04:36:15 +04:00
			return -EINVAL;
2017-05-16 13:41:18 +03:00
		return kvm_vgic_inject_irq(kvm, 0, irq_num, level, NULL);
2013-01-22 04:36:15 +04:00
}
	return -EINVAL;
2013-01-21 03:28:08 +04:00
}
2014-10-16 18:40:53 +04:00
static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
			       const struct kvm_vcpu_init *init)
{
2019-04-04 20:42:30 +03:00
	unsigned int i, ret;
2014-10-16 18:40:53 +04:00
	int phys_target = kvm_target_cpu();

	if (init->target != phys_target)
		return -EINVAL;

	/*
	 * Secondary and subsequent calls to KVM_ARM_VCPU_INIT must
	 * use the same target.
	 */
	if (vcpu->arch.target != -1 && vcpu->arch.target != init->target)
		return -EINVAL;

	/* -ENOENT for unknown features, -EINVAL for invalid combinations. */
	for (i = 0; i < sizeof(init->features) * 8; i++) {
		bool set = (init->features[i / 32] & (1 << (i % 32)));

		if (set && i >= KVM_VCPU_MAX_FEATURES)
			return -ENOENT;

		/*
		 * Secondary and subsequent calls to KVM_ARM_VCPU_INIT must
		 * use the same feature set.
		 */
		if (vcpu->arch.target != -1 && i < KVM_VCPU_MAX_FEATURES &&
		    test_bit(i, vcpu->arch.features) != set)
			return -EINVAL;

		if (set)
			set_bit(i, vcpu->arch.features);
	}

	vcpu->arch.target = phys_target;

	/* Now we know what it is, we can reset it. */
2019-04-04 20:42:30 +03:00
	ret = kvm_reset_vcpu(vcpu);
	if (ret) {
		vcpu->arch.target = -1;
		bitmap_zero(vcpu->arch.features, KVM_VCPU_MAX_FEATURES);
	}
2014-10-16 18:40:53 +04:00
2019-04-04 20:42:30 +03:00
	return ret;
}
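A hedged userspace sketch of how the features bitmap decoded by the loop above gets populated: bit i lives in features[i / 32] at position i % 32. The target and feature values are examples only; a real VMM would take the target from KVM_ARM_PREFERRED_TARGET.

#include <string.h>
#include <linux/kvm.h>

static void init_vcpu_features(struct kvm_vcpu_init *init, __u32 target)
{
	memset(init, 0, sizeof(*init));
	init->target = target;

	/* Request PSCI 0.2 by setting its feature bit, as the loop expects. */
	init->features[KVM_ARM_VCPU_PSCI_0_2 / 32] |=
		1U << (KVM_ARM_VCPU_PSCI_0_2 % 32);
}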
2014-10-16 18:40:53 +04:00
2013-11-20 05:43:19 +04:00
static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
					 struct kvm_vcpu_init *init)
{
	int ret;

	ret = kvm_vcpu_set_target(vcpu, init);
	if (ret)
		return ret;
2014-11-27 12:35:03 +03:00
	/*
	 * Ensure a rebooted VM will fault in RAM pages and detect if the
	 * guest MMU is turned off and flush the caches as needed.
	 */
	if (vcpu->arch.has_run_once)
		stage2_unmap_vm(vcpu->kvm);
2014-10-16 19:21:16 +04:00
	vcpu_reset_hcr(vcpu);
2013-11-20 05:43:19 +04:00
/*
2015-09-26 00:41:14 +03:00
 * Handle the "start in power-off" case.
2013-11-20 05:43:19 +04:00
*/
2014-12-02 17:27:51 +03:00
	if (test_bit(KVM_ARM_VCPU_POWER_OFF, vcpu->arch.features))
2017-06-04 15:43:57 +03:00
		vcpu_power_off(vcpu);
2014-10-16 18:14:43 +04:00
	else
2015-09-26 00:41:14 +03:00
		vcpu->arch.power_off = false;
2013-11-20 05:43:19 +04:00
	return 0;
}
2016-01-11 15:56:17 +03:00
static int kvm_arm_vcpu_set_attr(struct kvm_vcpu *vcpu,
				 struct kvm_device_attr *attr)
{
	int ret = -ENXIO;

	switch (attr->group) {
	default:
2016-01-11 16:35:32 +03:00
		ret = kvm_arm_vcpu_arch_set_attr(vcpu, attr);
2016-01-11 15:56:17 +03:00
		break;
	}

	return ret;
}

static int kvm_arm_vcpu_get_attr(struct kvm_vcpu *vcpu,
				 struct kvm_device_attr *attr)
{
	int ret = -ENXIO;

	switch (attr->group) {
	default:
2016-01-11 16:35:32 +03:00
		ret = kvm_arm_vcpu_arch_get_attr(vcpu, attr);
2016-01-11 15:56:17 +03:00
		break;
	}

	return ret;
}

static int kvm_arm_vcpu_has_attr(struct kvm_vcpu *vcpu,
				 struct kvm_device_attr *attr)
{
	int ret = -ENXIO;

	switch (attr->group) {
	default:
2016-01-11 16:35:32 +03:00
		ret = kvm_arm_vcpu_arch_has_attr(vcpu, attr);
2016-01-11 15:56:17 +03:00
		break;
	}

	return ret;
}
2018-07-19 18:24:24 +03:00
static int kvm_arm_vcpu_get_events(struct kvm_vcpu *vcpu,
				   struct kvm_vcpu_events *events)
{
	memset(events, 0, sizeof(*events));

	return __kvm_arm_vcpu_get_events(vcpu, events);
}

static int kvm_arm_vcpu_set_events(struct kvm_vcpu *vcpu,
				   struct kvm_vcpu_events *events)
{
	int i;

	/* check whether the reserved field is zero */
	for (i = 0; i < ARRAY_SIZE(events->reserved); i++)
		if (events->reserved[i])
			return -EINVAL;

	/* check whether the pad field is zero */
	for (i = 0; i < ARRAY_SIZE(events->exception.pad); i++)
		if (events->exception.pad[i])
			return -EINVAL;

	return __kvm_arm_vcpu_set_events(vcpu, events);
}
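A hedged userspace sketch of a KVM_SET_VCPU_EVENTS caller: the set path above rejects any nonzero reserved/pad bytes, so the whole struct must be zeroed first. The helper name is invented for the example.

#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static int inject_serror(int vcpu_fd)
{
	struct kvm_vcpu_events events;

	/* Zero everything so the kernel's reserved/pad checks pass. */
	memset(&events, 0, sizeof(events));
	events.exception.serror_pending = 1;

	return ioctl(vcpu_fd, KVM_SET_VCPU_EVENTS, &events);
}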
2013-01-21 03:28:06 +04:00
long kvm_arch_vcpu_ioctl(struct file *filp,
			 unsigned int ioctl, unsigned long arg)
{
	struct kvm_vcpu *vcpu = filp->private_data;
	void __user *argp = (void __user *)arg;
2016-01-11 15:56:17 +03:00
	struct kvm_device_attr attr;
2017-12-04 23:35:36 +03:00
	long r;
2013-01-21 03:28:06 +04:00
	switch (ioctl) {
	case KVM_ARM_VCPU_INIT: {
		struct kvm_vcpu_init init;

2017-12-04 23:35:36 +03:00
		r = -EFAULT;
2013-01-21 03:28:06 +04:00
		if (copy_from_user(&init, argp, sizeof(init)))
2017-12-04 23:35:36 +03:00
			break;
2013-01-21 03:28:06 +04:00
2017-12-04 23:35:36 +03:00
		r = kvm_arch_vcpu_ioctl_vcpu_init(vcpu, &init);
		break;
2013-01-21 03:28:06 +04:00
}
	case KVM_SET_ONE_REG:
	case KVM_GET_ONE_REG: {
		struct kvm_one_reg reg;

2013-05-09 02:28:06 +04:00
2017-12-04 23:35:36 +03:00
		r = -ENOEXEC;
2013-05-09 02:28:06 +04:00
		if (unlikely(!kvm_vcpu_initialized(vcpu)))
2017-12-04 23:35:36 +03:00
			break;
2013-05-09 02:28:06 +04:00
2017-12-04 23:35:36 +03:00
		r = -EFAULT;
2013-01-21 03:28:06 +04:00
		if (copy_from_user(&reg, argp, sizeof(reg)))
2017-12-04 23:35:36 +03:00
			break;
2013-01-21 03:28:06 +04:00
		if (ioctl == KVM_SET_ONE_REG)
2017-12-04 23:35:36 +03:00
			r = kvm_arm_set_reg(vcpu, &reg);
2013-01-21 03:28:06 +04:00
		else
2017-12-04 23:35:36 +03:00
			r = kvm_arm_get_reg(vcpu, &reg);
		break;
2013-01-21 03:28:06 +04:00
}
	case KVM_GET_REG_LIST: {
		struct kvm_reg_list __user *user_list = argp;
		struct kvm_reg_list reg_list;
		unsigned n;

2017-12-04 23:35:36 +03:00
		r = -ENOEXEC;
2013-05-09 02:28:06 +04:00
		if (unlikely(!kvm_vcpu_initialized(vcpu)))
2017-12-04 23:35:36 +03:00
			break;
2013-05-09 02:28:06 +04:00
2018-12-19 17:27:01 +03:00
		r = -EPERM;
		if (!kvm_arm_vcpu_is_finalized(vcpu))
			break;
2017-12-04 23:35:36 +03:00
		r = -EFAULT;
2013-01-21 03:28:06 +04:00
		if (copy_from_user(&reg_list, user_list, sizeof(reg_list)))
2017-12-04 23:35:36 +03:00
			break;
2013-01-21 03:28:06 +04:00
		n = reg_list.n;
		reg_list.n = kvm_arm_num_regs(vcpu);
		if (copy_to_user(user_list, &reg_list, sizeof(reg_list)))
2017-12-04 23:35:36 +03:00
			break;
		r = -E2BIG;
2013-01-21 03:28:06 +04:00
		if (n < reg_list.n)
2017-12-04 23:35:36 +03:00
			break;
		r = kvm_arm_copy_reg_indices(vcpu, user_list->reg);
		break;
2013-01-21 03:28:06 +04:00
}
2016-01-11 15:56:17 +03:00
	case KVM_SET_DEVICE_ATTR: {
2017-12-04 23:35:36 +03:00
		r = -EFAULT;
2016-01-11 15:56:17 +03:00
		if (copy_from_user(&attr, argp, sizeof(attr)))
2017-12-04 23:35:36 +03:00
			break;
		r = kvm_arm_vcpu_set_attr(vcpu, &attr);
		break;
2016-01-11 15:56:17 +03:00
	}
	case KVM_GET_DEVICE_ATTR: {
2017-12-04 23:35:36 +03:00
		r = -EFAULT;
2016-01-11 15:56:17 +03:00
		if (copy_from_user(&attr, argp, sizeof(attr)))
2017-12-04 23:35:36 +03:00
			break;
		r = kvm_arm_vcpu_get_attr(vcpu, &attr);
		break;
2016-01-11 15:56:17 +03:00
	}
	case KVM_HAS_DEVICE_ATTR: {
2017-12-04 23:35:36 +03:00
		r = -EFAULT;
2016-01-11 15:56:17 +03:00
		if (copy_from_user(&attr, argp, sizeof(attr)))
2017-12-04 23:35:36 +03:00
			break;
		r = kvm_arm_vcpu_has_attr(vcpu, &attr);
		break;
2016-01-11 15:56:17 +03:00
}
2018-07-19 18:24:22 +03:00
	case KVM_GET_VCPU_EVENTS: {
		struct kvm_vcpu_events events;

		if (kvm_arm_vcpu_get_events(vcpu, &events))
			return -EINVAL;

		if (copy_to_user(argp, &events, sizeof(events)))
			return -EFAULT;

		return 0;
	}
	case KVM_SET_VCPU_EVENTS: {
		struct kvm_vcpu_events events;

		if (copy_from_user(&events, argp, sizeof(events)))
			return -EFAULT;

		return kvm_arm_vcpu_set_events(vcpu, &events);
	}
2018-12-19 17:27:01 +03:00
	case KVM_ARM_VCPU_FINALIZE: {
		int what;

		if (!kvm_vcpu_initialized(vcpu))
			return -ENOEXEC;

		if (get_user(what, (const int __user *)argp))
			return -EFAULT;

		return kvm_arm_vcpu_finalize(vcpu, what);
	}
2013-01-21 03:28:06 +04:00
	default:
2017-12-04 23:35:36 +03:00
		r = -EINVAL;
2013-01-21 03:28:06 +04:00
	}

2017-12-04 23:35:36 +03:00
	return r;
2013-01-21 03:28:06 +04:00
}
2015-01-16 02:58:57 +03:00
/**
 * kvm_vm_ioctl_get_dirty_log - get and clear the log of dirty pages in a slot
 * @kvm: kvm instance
 * @log: slot id and address to which we copy the log
 *
 * Steps 1-4 below provide general overview of dirty page logging. See
 * kvm_get_dirty_log_protect() function description for additional details.
 *
 * We call kvm_get_dirty_log_protect() to handle steps 1-3, upon return we
 * always flush the TLB (step 4) even if previous step failed and the dirty
 * bitmap may be corrupt. Regardless of previous outcome the KVM logging API
 * does not preclude user space subsequent dirty log read. Flushing TLB ensures
 * writes will be marked dirty for next log read.
 *
 *   1. Take a snapshot of the bit and clear it if needed.
 *   2. Write protect the corresponding page.
 *   3. Copy the snapshot to the userspace.
 *   4. Flush TLB's if needed.
 */
2013-01-21 03:28:06 +04:00
int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
{
2018-10-23 03:18:42 +03:00
	bool flush = false;
2015-01-16 02:58:57 +03:00
	int r;

	mutex_lock(&kvm->slots_lock);
2018-10-23 03:18:42 +03:00
	r = kvm_get_dirty_log_protect(kvm, log, &flush);
2015-01-16 02:58:57 +03:00
2018-10-23 03:18:42 +03:00
	if (flush)
2015-01-16 02:58:57 +03:00
		kvm_flush_remote_tlbs(kvm);

	mutex_unlock(&kvm->slots_lock);
	return r;
2013-01-21 03:28:06 +04:00
}
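A hedged userspace sketch of the matching KVM_GET_DIRTY_LOG call: the caller supplies the memslot id and a bitmap with one bit per page in the slot (rounded up to 64-bit words). The helper name is invented for the example.

#include <sys/ioctl.h>
#include <linux/kvm.h>

static int fetch_dirty_log(int vm_fd, __u32 slot, void *bitmap)
{
	struct kvm_dirty_log log = {
		.slot = slot,
		.dirty_bitmap = bitmap,
	};

	return ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log);
}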
kvm: introduce manual dirty log reprotect
There are two problems with KVM_GET_DIRTY_LOG. First, and less important,
it can take kvm->mmu_lock for an extended period of time. Second, its user
can actually see many false positives in some cases. The latter is due
to a benign race like this:
1. KVM_GET_DIRTY_LOG returns a set of dirty pages and write protects
them.
2. The guest modifies the pages, causing them to be marked dirty.
3. Userspace actually copies the pages.
4. KVM_GET_DIRTY_LOG returns those pages as dirty again, even though
they were not written to since (3).
This is especially a problem for large guests, where the time between
(1) and (3) can be substantial. This patch introduces a new
capability which, when enabled, makes KVM_GET_DIRTY_LOG not
write-protect the pages it returns. Instead, userspace has to
explicitly clear the dirty log bits just before using the content
of the page. The new KVM_CLEAR_DIRTY_LOG ioctl can also operate on a
64-page granularity rather than requiring to sync a full memslot;
this way, the mmu_lock is taken for small amounts of time, and
only a small amount of time will pass between write protection
of pages and the sending of their content.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-10-23 03:36:47 +03:00
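A hedged userspace sketch of the new ioctl described above: with the manual-reprotect capability (KVM_CAP_MANUAL_DIRTY_LOG_PROTECT) enabled on the VM, userspace re-protects only the pages it has just harvested, at the 64-page granularity the message mentions. The helper name and values are illustrative.

#include <sys/ioctl.h>
#include <linux/kvm.h>

static int clear_dirty_range(int vm_fd, __u32 slot, __u64 first_page,
			     __u32 num_pages, void *bitmap)
{
	struct kvm_clear_dirty_log clear = {
		.slot = slot,
		.num_pages = num_pages,
		.first_page = first_page,
		.dirty_bitmap = bitmap,
	};

	return ioctl(vm_fd, KVM_CLEAR_DIRTY_LOG, &clear);
}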
int kvm_vm_ioctl_clear_dirty_log(struct kvm *kvm, struct kvm_clear_dirty_log *log)
{
	bool flush = false;
2015-01-16 02:58:57 +03:00
	int r;

	mutex_lock(&kvm->slots_lock);
2018-10-23 03:36:47 +03:00
	r = kvm_clear_dirty_log_protect(kvm, log, &flush);
2015-01-16 02:58:57 +03:00
2018-10-23 03:36:47 +03:00
	if (flush)
2015-01-16 02:58:57 +03:00
		kvm_flush_remote_tlbs(kvm);

	mutex_unlock(&kvm->slots_lock);
	return r;
2013-01-21 03:28:06 +04:00
}
2013-01-23 22:18:04 +04:00
static int kvm_vm_ioctl_set_device_addr(struct kvm *kvm,
					struct kvm_arm_device_addr *dev_addr)
{
2013-01-22 04:36:13 +04:00
	unsigned long dev_id, type;

	dev_id = (dev_addr->id & KVM_ARM_DEVICE_ID_MASK) >>
		KVM_ARM_DEVICE_ID_SHIFT;
	type = (dev_addr->id & KVM_ARM_DEVICE_TYPE_MASK) >>
		KVM_ARM_DEVICE_TYPE_SHIFT;

	switch (dev_id) {
	case KVM_ARM_DEVICE_VGIC_V2:
2015-12-18 14:38:43 +03:00
		if (!vgic_present)
			return -ENXIO;
2013-09-24 01:55:56 +04:00
		return kvm_vgic_addr(kvm, type, &dev_addr->addr, true);
2013-01-22 04:36:13 +04:00
	default:
		return -ENODEV;
}
2013-01-23 22:18:04 +04:00
}
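A hedged userspace sketch composing the id field that kvm_vm_ioctl_set_device_addr() decodes above, here placing a GICv2 distributor. The base address and helper name are examples only.

#include <sys/ioctl.h>
#include <linux/kvm.h>

static int set_vgic_dist_addr(int vm_fd, __u64 base)
{
	struct kvm_arm_device_addr dev_addr = {
		/* device id in the upper field, address type in the lower */
		.id = (KVM_ARM_DEVICE_VGIC_V2 << KVM_ARM_DEVICE_ID_SHIFT) |
		      KVM_VGIC_V2_ADDR_TYPE_DIST,
		.addr = base,
	};

	return ioctl(vm_fd, KVM_ARM_SET_DEVICE_ADDR, &dev_addr);
}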
2013-01-21 03:28:06 +04:00
long kvm_arch_vm_ioctl(struct file *filp,
		       unsigned int ioctl, unsigned long arg)
{
2013-01-23 22:18:04 +04:00
	struct kvm *kvm = filp->private_data;
	void __user *argp = (void __user *)arg;

	switch (ioctl) {
2013-01-22 04:36:15 +04:00
	case KVM_CREATE_IRQCHIP: {
2016-08-09 20:13:01 +03:00
		int ret;

2015-12-18 14:38:43 +03:00
		if (!vgic_present)
			return -ENXIO;

2016-08-09 20:13:01 +03:00
		mutex_lock(&kvm->lock);
		ret = kvm_vgic_create(kvm, KVM_DEV_TYPE_ARM_VGIC_V2);
		mutex_unlock(&kvm->lock);
		return ret;
2013-01-22 04:36:15 +04:00
}
2013-01-23 22:18:04 +04:00
	case KVM_ARM_SET_DEVICE_ADDR: {
		struct kvm_arm_device_addr dev_addr;

		if (copy_from_user(&dev_addr, argp, sizeof(dev_addr)))
			return -EFAULT;
		return kvm_vm_ioctl_set_device_addr(kvm, &dev_addr);
	}
2013-09-30 12:50:07 +04:00
	case KVM_ARM_PREFERRED_TARGET: {
		int err;
		struct kvm_vcpu_init init;

		err = kvm_vcpu_preferred_target(&init);
		if (err)
			return err;

		if (copy_to_user(argp, &init, sizeof(init)))
			return -EFAULT;

		return 0;
	}
2013-01-23 22:18:04 +04:00
	default:
		return -EINVAL;
}
2013-01-21 03:28:06 +04:00
}
2019-11-21 10:15:59 +03:00
static void cpu_init_hyp_mode(void)
2013-01-21 03:28:06 +04:00
{
2013-05-14 15:11:37 +04:00
	phys_addr_t pgd_ptr;
2013-01-21 03:28:06 +04:00
	unsigned long hyp_stack_ptr;
	unsigned long stack_page;
	unsigned long vector_ptr;
/* Switch from the HYP stub to our own HYP init vector */
ARM: KVM: switch to a dual-step HYP init code
Our HYP init code suffers from two major design issues:
- it cannot support CPU hotplug, as we tear down the idmap very early
- it cannot perform a TLB invalidation when switching from init to
runtime mappings, as pages are manipulated from PL1 exclusively
The hotplug problem mandates that we keep two sets of page tables
(boot and runtime). The TLB problem mandates that we're able to
transition from one PGD to another while in HYP, invalidating the TLBs
in the process.
To be able to do this, we need to share a page between the two page
tables. A page that will have the same VA in both configurations. All we
need is a VA that has the following properties:
- This VA can't be used to represent a kernel mapping.
- This VA will not conflict with the physical address of the kernel text
The vectors page seems to satisfy this requirement:
- The kernel never maps anything else there
- Since the kernel text is copied to the beginning of physical memory,
it is unlikely to use the last 64kB (I doubt we'll ever support KVM
on a system with something like 4MB of RAM, but patches are very
welcome).
Let's call this VA the trampoline VA.
Now, we map our init page at 3 locations:
- idmap in the boot pgd
- trampoline VA in the boot pgd
- trampoline VA in the runtime pgd
The init scenario is now the following:
- We jump in HYP with four parameters: boot HYP pgd, runtime HYP pgd,
runtime stack, runtime vectors
- Enable the MMU with the boot pgd
- Jump to a target into the trampoline page (remember, this is the same
physical page!)
- Now switch to the runtime pgd (same VA, and still the same physical
page!)
- Invalidate TLBs
- Set stack and vectors
- Profit! (or eret, if you only care about the code).
Note that we keep the boot mapping permanently (it is not strictly an
idmap anymore) to allow for CPU hotplug in later patches.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@cs.columbia.edu>
2013-04-12 22:12:06 +04:00
	__hyp_set_vectors(kvm_get_idmap_vector());
2013-01-21 03:28:06 +04:00
2013-05-14 15:11:37 +04:00
	pgd_ptr = kvm_mmu_get_httbr();
2013-10-21 16:17:08 +04:00
	stack_page = __this_cpu_read(kvm_arm_hyp_stack_page);
2013-01-21 03:28:06 +04:00
	hyp_stack_ptr = stack_page + PAGE_SIZE;
2018-01-03 19:38:35 +03:00
	vector_ptr = (unsigned long)kvm_get_hyp_vector();
2013-01-21 03:28:06 +04:00
2016-06-30 20:40:45 +03:00
	__cpu_init_hyp_mode(pgd_ptr, hyp_stack_ptr, vector_ptr);
2016-02-01 20:54:35 +03:00
	__cpu_init_stage2();
2013-01-21 03:28:06 +04:00
}
2017-04-03 21:38:01 +03:00
static void cpu_hyp_reset(void)
{
	if (!is_kernel_in_hyp_mode())
		__hyp_reset_vectors();
}
2016-03-30 20:33:04 +03:00
static void cpu_hyp_reinit(void)
{
2019-07-06 01:35:56 +03:00
	kvm_init_host_cpu_context(&this_cpu_ptr(&kvm_host_data)->host_ctxt);
2017-04-03 21:38:01 +03:00
	cpu_hyp_reset();
2018-10-01 15:41:32 +03:00
	if (is_kernel_in_hyp_mode())
2017-06-12 17:37:48 +03:00
		kvm_timer_init_vhe();
2018-10-01 15:41:32 +03:00
	else
2019-11-21 10:15:59 +03:00
		cpu_init_hyp_mode();
2017-03-18 15:56:56 +03:00
2018-10-17 19:42:10 +03:00
	kvm_arm_init_debug();
2017-03-18 15:56:56 +03:00
	if (vgic_present)
		kvm_vgic_init_cpu_hardware();
2016-03-30 20:33:04 +03:00
}
arm64: kvm: allows kvm cpu hotplug
The current kvm implementation on arm64 does cpu-specific initialization
at system boot, and has no way to gracefully shutdown a core in terms of
kvm. This prevents kexec from rebooting the system at EL2.
This patch adds a cpu tear-down function and also puts the existing cpu-init
code into a separate function, kvm_arch_hardware_disable() and
kvm_arch_hardware_enable() respectively.
We don't need the arm64 specific cpu hotplug hook any more.
Since this patch modifies common code between arm and arm64, one stub
definition, __cpu_reset_hyp_mode(), is added on arm side to avoid
compilation errors.
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
[Rebase, added separate VHE init/exit path, changed resets use of
kvm_call_hyp() to the __version, en/disabled hardware in init_subsystems(),
added icache maintenance to __kvm_hyp_reset() and removed lr restore, removed
guest-enter after teardown handling]
Signed-off-by: James Morse <james.morse@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-04-27 19:47:05 +03:00
static void _kvm_arch_hardware_enable(void *discard)
{
	if (!__this_cpu_read(kvm_arm_hardware_enabled)) {
2016-03-30 20:33:04 +03:00
		cpu_hyp_reinit();
2016-04-27 19:47:05 +03:00
		__this_cpu_write(kvm_arm_hardware_enabled, 1);
2013-04-12 22:12:07 +04:00
}
2016-04-27 19:47:05 +03:00
}
2013-04-12 22:12:07 +04:00
2016-04-27 19:47:05 +03:00
int kvm_arch_hardware_enable(void)
{
	_kvm_arch_hardware_enable(NULL);
	return 0;
2013-01-21 03:28:06 +04:00
}
2016-04-27 19:47:05 +03:00
static void _kvm_arch_hardware_disable(void *discard)
{
	if (__this_cpu_read(kvm_arm_hardware_enabled)) {
		cpu_hyp_reset();
		__this_cpu_write(kvm_arm_hardware_enabled, 0);
	}
}

void kvm_arch_hardware_disable(void)
{
	_kvm_arch_hardware_disable(NULL);
}
2013-04-12 22:12:07 +04:00
2013-08-05 18:04:46 +04:00
#ifdef CONFIG_CPU_PM
static int hyp_init_cpu_pm_notifier(struct notifier_block *self,
				    unsigned long cmd,
				    void *v)
{
2016-04-27 19:47:05 +03:00
	/*
	 * kvm_arm_hardware_enabled is left with its old value over
	 * PM_ENTER->PM_EXIT. It is used to indicate PM_EXIT should
	 * re-enable hyp.
	 */
	switch (cmd) {
	case CPU_PM_ENTER:
		if (__this_cpu_read(kvm_arm_hardware_enabled))
			/*
			 * don't update kvm_arm_hardware_enabled here
			 * so that the hardware will be re-enabled
			 * when we resume. See below.
			 */
			cpu_hyp_reset();
2013-08-05 18:04:46 +04:00
		return NOTIFY_OK;
2018-01-22 21:19:06 +03:00
	case CPU_PM_ENTER_FAILED:
2016-04-27 19:47:05 +03:00
	case CPU_PM_EXIT:
		if (__this_cpu_read(kvm_arm_hardware_enabled))
			/* The hardware was enabled before suspend. */
			cpu_hyp_reinit();
2013-08-05 18:04:46 +04:00
2016-04-27 19:47:05 +03:00
		return NOTIFY_OK;

	default:
		return NOTIFY_DONE;
	}
2013-08-05 18:04:46 +04:00
}
static struct notifier_block hyp_init_cpu_pm_nb = {
	.notifier_call = hyp_init_cpu_pm_notifier,
};

static void __init hyp_cpu_pm_init(void)
{
	cpu_pm_register_notifier(&hyp_init_cpu_pm_nb);
}
2016-04-04 16:46:51 +03:00
static void __init hyp_cpu_pm_exit(void)
{
	cpu_pm_unregister_notifier(&hyp_init_cpu_pm_nb);
}
2013-08-05 18:04:46 +04:00
#else
static inline void hyp_cpu_pm_init(void)
{
}
2016-04-04 16:46:51 +03:00
static inline void hyp_cpu_pm_exit(void)
{
}
2013-08-05 18:04:46 +04:00
#endif
2015-01-29 14:59:54 +03:00
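/*
 * Resources shared by the VHE and non-VHE paths: derive the supported
 * guest IPA size limit from the host's capabilities.
 */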
static int init_common_resources(void)
{
2018-09-26 19:32:52 +03:00
	kvm_set_ipa_limit();
2015-01-29 14:59:54 +03:00
	return 0;
}
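/*
 * Initialise the vGIC, the architected timer, perf and the coproc
 * tables. Hyp is enabled on all CPUs for the duration so that the
 * subsystem init code can access EL2, and disabled again on the way
 * out.
 */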
static int init_subsystems(void)
{
2016-04-27 19:47:05 +03:00
	int err = 0;
2015-01-29 14:59:54 +03:00
2016-03-30 20:33:04 +03:00
	/*
	 * Enable hardware so that subsystem initialisation can access EL2.
2016-03-30 20:33:04 +03:00
	 */
2016-04-27 19:47:05 +03:00
	on_each_cpu(_kvm_arch_hardware_enable, NULL, 1);
2016-03-30 20:33:04 +03:00
	/*
	 * Register CPU lower-power notifier
	 */
	hyp_cpu_pm_init();
2015-01-29 14:59:54 +03:00
	/*
	 * Init HYP view of VGIC
	 */
	err = kvm_vgic_hyp_init();
	switch (err) {
	case 0:
		vgic_present = true;
		break;
	case -ENODEV:
	case -ENXIO:
		vgic_present = false;
2016-04-27 19:47:05 +03:00
		err = 0;
2015-01-29 14:59:54 +03:00
		break;
	default:
2016-04-27 19:47:05 +03:00
		goto out;
2015-01-29 14:59:54 +03:00
	}

	/*
	 * Init HYP architected timer support
	 */
2017-12-07 14:46:15 +03:00
	err = kvm_timer_hyp_init(vgic_present);
2015-01-29 14:59:54 +03:00
	if (err)
2016-04-27 19:47:05 +03:00
		goto out;
2015-01-29 14:59:54 +03:00
	kvm_perf_init();
	kvm_coproc_table_init();
2016-04-27 19:47:05 +03:00
out:
	on_each_cpu(_kvm_arch_hardware_disable, NULL, 1);
	return err;
2015-01-29 14:59:54 +03:00
}
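/*
 * Undo the effects of init_hyp_mode(): free the hyp page tables and
 * the per-CPU hyp stack pages.
 */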
static void teardown_hyp_mode(void)
{
	int cpu;

	free_hyp_pgds();
	for_each_possible_cpu(cpu)
		free_page(per_cpu(kvm_arm_hyp_stack_page, cpu));
}
2013-01-21 03:28:06 +04:00
/**
 * Inits Hyp-mode on all online CPUs
 */
static int init_hyp_mode(void)
{
	int cpu;
	int err = 0;

	/*
	 * Allocate Hyp PGD and setup Hyp identity mapping
	 */
	err = kvm_mmu_init();
	if (err)
		goto out_err;

	/*
	 * Allocate stack pages for Hypervisor-mode
	 */
	for_each_possible_cpu(cpu) {
		unsigned long stack_page;

		stack_page = __get_free_page(GFP_KERNEL);
		if (!stack_page) {
			err = -ENOMEM;
2015-01-29 14:59:54 +03:00
			goto out_err;
2013-01-21 03:28:06 +04:00
		}

		per_cpu(kvm_arm_hyp_stack_page, cpu) = stack_page;
	}

	/*
	 * Map the Hyp-code called directly from the host
	 */
Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 updates from Catalin Marinas:
"Here are the main arm64 updates for 4.6. There are some relatively
intrusive changes to support KASLR, the reworking of the kernel
virtual memory layout and initial page table creation.
Summary:
- Initial page table creation reworked to avoid breaking large block
mappings (huge pages) into smaller ones. The ARM architecture
requires break-before-make in such cases to avoid TLB conflicts but
that's not always possible on live page tables
- Kernel virtual memory layout: the kernel image is no longer linked
to the bottom of the linear mapping (PAGE_OFFSET) but at the bottom
of the vmalloc space, allowing the kernel to be loaded (nearly)
anywhere in physical RAM
- Kernel ASLR: position independent kernel Image and modules being
randomly mapped in the vmalloc space, with the randomness provided
by UEFI (efi_get_random_bytes() patches merged via the arm64 tree,
acked by Matt Fleming)
- Implement relative exception tables for arm64, required by KASLR
(initial code for ARCH_HAS_RELATIVE_EXTABLE added to lib/extable.c
but actual x86 conversion deferred to 4.7 because of the merge
dependencies)
- Support for the User Access Override feature of ARMv8.2: this
allows uaccess functions (get_user etc.) to be implemented using
LDTR/STTR instructions. Such instructions, when run by the kernel,
perform unprivileged accesses, adding an extra level of protection.
The set_fs() macro is used to "upgrade" such instructions to
privileged accesses via the UAO bit
- Half-precision floating point support (part of ARMv8.2)
- Optimisations for CPUs with or without a hardware prefetcher (using
run-time code patching)
- copy_page performance improvement to deal with 128 bytes at a time
- Sanity checks on the CPU capabilities (via CPUID) to prevent
incompatible secondary CPUs from being brought up (e.g. weird
big.LITTLE configurations)
- valid_user_regs() reworked for better sanity check of the
sigcontext information (restored pstate information)
- ACPI parking protocol implementation
- CONFIG_DEBUG_RODATA enabled by default
- VDSO code marked as read-only
- DEBUG_PAGEALLOC support
- ARCH_HAS_UBSAN_SANITIZE_ALL enabled
- Erratum workaround for the Cavium ThunderX SoC
- set_pte_at() fix for PROT_NONE mappings
- Code clean-ups"
2016-03-18 06:03:47 +03:00
	err = create_hyp_mappings(kvm_ksym_ref(__hyp_text_start),
2016-06-13 17:00:48 +03:00
				  kvm_ksym_ref(__hyp_text_end), PAGE_HYP_EXEC);
2013-01-21 03:28:06 +04:00
	if (err) {
		kvm_err("Cannot map world-switch code\n");
2015-01-29 14:59:54 +03:00
		goto out_err;
2013-01-21 03:28:06 +04:00
	}
2016-02-16 15:52:39 +03:00
	err = create_hyp_mappings(kvm_ksym_ref(__start_rodata),
2016-06-13 17:00:47 +03:00
				  kvm_ksym_ref(__end_rodata), PAGE_HYP_RO);
2015-10-27 15:18:48 +03:00
	if (err) {
		kvm_err("Cannot map rodata section\n");
2016-10-20 12:17:21 +03:00
		goto out_err;
	}

	err = create_hyp_mappings(kvm_ksym_ref(__bss_start),
				  kvm_ksym_ref(__bss_stop), PAGE_HYP_RO);
	if (err) {
		kvm_err("Cannot map bss section\n");
2015-01-29 14:59:54 +03:00
		goto out_err;
2015-10-27 15:18:48 +03:00
	}
2018-01-03 19:38:35 +03:00
	err = kvm_map_vectors();
	if (err) {
		kvm_err("Cannot map vectors\n");
		goto out_err;
	}
2013-01-21 03:28:06 +04:00
	/*
	 * Map the Hyp stack pages
	 */
	for_each_possible_cpu(cpu) {
		char *stack_page = (char *)per_cpu(kvm_arm_hyp_stack_page, cpu);
2016-06-13 17:00:45 +03:00
		err = create_hyp_mappings(stack_page, stack_page + PAGE_SIZE,
					  PAGE_HYP);
2013-01-21 03:28:06 +04:00
		if (err) {
			kvm_err("Cannot map hyp stack\n");
2015-01-29 14:59:54 +03:00
			goto out_err;
2013-01-21 03:28:06 +04:00
		}
	}

	for_each_possible_cpu(cpu) {
2019-04-09 22:22:11 +03:00
		kvm_host_data_t *cpu_data;
2013-01-21 03:28:06 +04:00
2019-04-09 22:22:11 +03:00
		cpu_data = per_cpu_ptr(&kvm_host_data, cpu);
		err = create_hyp_mappings(cpu_data, cpu_data + 1, PAGE_HYP);
2013-01-21 03:28:06 +04:00
		if (err) {
2013-04-08 19:47:19 +04:00
kvm_err ( " Cannot map host CPU state: %d \n " , err ) ;
2015-01-29 14:59:54 +03:00
			goto out_err;
2013-01-21 03:28:06 +04:00
		}
	}
2018-05-29 15:11:16 +03:00
	err = hyp_map_aux_data();
	if (err)
2019-02-18 01:36:26 +03:00
kvm_err ( " Cannot map host auxiliary data: %d \n " , err ) ;
2018-05-29 15:11:16 +03:00
2013-01-21 03:28:06 +04:00
	return 0;
2015-01-29 14:59:54 +03:00
2013-01-21 03:28:06 +04:00
out_err:
2015-01-29 14:59:54 +03:00
	teardown_hyp_mode();
2013-01-21 03:28:06 +04:00
kvm_err ( " error initializing Hyp mode: %d \n " , err ) ;
return err ;
}
2013-04-17 14:52:01 +04:00
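/*
 * Run on each online CPU from kvm_arch_init(); records the result of
 * kvm_target_cpu(), which is negative if the CPU is not supported.
 */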
static void check_kvm_target_cpu(void *ret)
{
	*(int *)ret = kvm_target_cpu();
}
2014-06-02 17:37:13 +04:00
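/*
 * Find the vcpu whose MPIDR affinity fields match the given hardware
 * MPIDR value, or NULL if the VM has no such vcpu.
 */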
struct kvm_vcpu *kvm_mpidr_to_vcpu(struct kvm *kvm, unsigned long mpidr)
{
	struct kvm_vcpu *vcpu;
	int i;

	mpidr &= MPIDR_HWID_BITMASK;
	kvm_for_each_vcpu(i, vcpu, kvm) {
		if (mpidr == kvm_vcpu_get_mpidr_aff(vcpu))
			return vcpu;
	}
	return NULL;
}
2017-10-27 17:28:31 +03:00
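/*
 * IRQ bypass support: connect an irqfd producer's interrupt directly
 * to the guest via the vGIC's GICv4 forwarding path, avoiding a trip
 * through the host's interrupt handling.
 */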
bool kvm_arch_has_irq_bypass(void)
{
	return true;
}

int kvm_arch_irq_bypass_add_producer(struct irq_bypass_consumer *cons,
				     struct irq_bypass_producer *prod)
{
	struct kvm_kernel_irqfd *irqfd =
		container_of(cons, struct kvm_kernel_irqfd, consumer);
2017-10-27 17:28:39 +03:00
	return kvm_vgic_v4_set_forwarding(irqfd->kvm, prod->irq,
					  &irqfd->irq_entry);
2017-10-27 17:28:31 +03:00
}
void kvm_arch_irq_bypass_del_producer(struct irq_bypass_consumer *cons,
				      struct irq_bypass_producer *prod)
{
	struct kvm_kernel_irqfd *irqfd =
		container_of(cons, struct kvm_kernel_irqfd, consumer);
2017-10-27 17:28:39 +03:00
	kvm_vgic_v4_unset_forwarding(irqfd->kvm, prod->irq,
				     &irqfd->irq_entry);
2017-10-27 17:28:31 +03:00
}
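/*
 * The irq bypass core brackets connection changes with stop/start;
 * halting the guest ensures no vcpu is running while the forwarding
 * state is updated.
 */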
void kvm_arch_irq_bypass_stop(struct irq_bypass_consumer *cons)
{
	struct kvm_kernel_irqfd *irqfd =
		container_of(cons, struct kvm_kernel_irqfd, consumer);

	kvm_arm_halt_guest(irqfd->kvm);
}

void kvm_arch_irq_bypass_start(struct irq_bypass_consumer *cons)
{
	struct kvm_kernel_irqfd *irqfd =
		container_of(cons, struct kvm_kernel_irqfd, consumer);

	kvm_arm_resume_guest(irqfd->kvm);
}
2013-01-21 03:28:06 +04:00
/**
 * Initialize Hyp-mode and memory mappings on all CPUs.
 */
2013-01-21 03:28:06 +04:00
int kvm_arch_init(void *opaque)
{
2013-01-21 03:28:06 +04:00
	int err;
2013-04-17 14:52:01 +04:00
	int ret, cpu;
2017-10-20 14:34:16 +03:00
	bool in_hyp_mode;
2013-01-21 03:28:06 +04:00
	if (!is_hyp_mode_available()) {
2017-11-28 18:18:19 +03:00
kvm_info ( " HYP mode not available \n " ) ;
2013-01-21 03:28:06 +04:00
		return -ENODEV;
	}
2018-12-06 20:31:20 +03:00
	in_hyp_mode = is_kernel_in_hyp_mode();

	if (!in_hyp_mode && kvm_arch_requires_vhe()) {
		kvm_pr_unimpl("CPU unsupported in non-VHE mode, not initializing\n");
2018-04-20 18:20:43 +03:00
		return -ENODEV;
	}
2013-04-17 14:52:01 +04:00
	for_each_online_cpu(cpu) {
		smp_call_function_single(cpu, check_kvm_target_cpu, &ret, 1);
		if (ret < 0) {
			kvm_err("Error, CPU %d not supported!\n", cpu);
			return -ENODEV;
		}
2013-01-21 03:28:06 +04:00
	}
2015-01-29 14:59:54 +03:00
	err = init_common_resources();
2013-01-21 03:28:06 +04:00
	if (err)
2015-01-29 14:59:54 +03:00
		return err;
2013-01-21 03:28:06 +04:00
2019-04-12 17:30:58 +03:00
	err = kvm_arm_init_sve();
2019-02-28 21:33:00 +03:00
	if (err)
		return err;
2017-10-20 14:34:16 +03:00
	if (!in_hyp_mode) {
2015-01-29 14:59:54 +03:00
		err = init_hyp_mode();
2017-10-20 14:34:16 +03:00
		if (err)
			goto out_err;
	}
arm, kvm: Fix CPU hotplug callback registration
Subsystems that want to register CPU hotplug callbacks, as well as perform
initialization for the CPUs that are already online, often do it as shown
below:
	get_online_cpus();

	for_each_online_cpu(cpu)
		init_cpu(cpu);

	register_cpu_notifier(&foobar_cpu_notifier);

	put_online_cpus();
This is wrong, since it is prone to ABBA deadlocks involving the
cpu_add_remove_lock and the cpu_hotplug.lock (when running concurrently
with CPU hotplug operations).
Instead, the correct and race-free way of performing the callback
registration is:
	cpu_notifier_register_begin();

	for_each_online_cpu(cpu)
		init_cpu(cpu);

	/* Note the use of the double underscored version of the API */
	__register_cpu_notifier(&foobar_cpu_notifier);

	cpu_notifier_register_done();
In the existing arm kvm code, there is no synchronization with CPU hotplug
to avoid missing the hotplug events that might occur after invoking
init_hyp_mode() and before calling register_cpu_notifier(). Fix this bug
and also use the new synchronization method (instead of get/put_online_cpus())
to ensure that we don't deadlock with CPU hotplug.
Cc: Gleb Natapov <gleb@kernel.org>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Ingo Molnar <mingo@kernel.org>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2014-03-18 14:23:05 +04:00
2015-01-29 14:59:54 +03:00
	err = init_subsystems();
	if (err)
		goto out_hyp;
2013-08-05 18:04:46 +04:00
2017-10-20 14:34:16 +03:00
	if (in_hyp_mode)
		kvm_info("VHE mode initialized successfully\n");
	else
		kvm_info("Hyp mode initialized successfully\n");
2013-01-21 03:28:06 +04:00
	return 0;
2015-01-29 14:59:54 +03:00
out_hyp:
2019-12-02 10:42:11 +03:00
	hyp_cpu_pm_exit();
2017-10-20 14:34:16 +03:00
	if (!in_hyp_mode)
		teardown_hyp_mode();
2013-01-21 03:28:06 +04:00
out_err:
	return err;
2013-01-21 03:28:06 +04:00
}
/* NOP: Compiling as a module not supported */
void kvm_arch_exit(void)
{
2013-03-05 07:18:00 +04:00
	kvm_perf_teardown();
2013-01-21 03:28:06 +04:00
}

static int arm_init(void)
{
	int rc = kvm_init(NULL, sizeof(struct kvm_vcpu), 0, THIS_MODULE);
	return rc;
}
module_init(arm_init);