KVM: x86: Optimize kvm->lock and SRCU interaction (KVM_SET_PMU_EVENT_FILTER)
Reduce time spent holding kvm->lock: unlock the mutex before calling
synchronize_srcu_expedited().  There is no need to hold kvm->lock until
all vCPUs have been kicked; KVM only needs to guarantee that all vCPUs
will switch to the new filter before exiting to userspace.

Protecting the write to __reprogram_pmi is also unnecessary, as a vCPU
may process a set bit before receiving the final KVM_REQ_PMU, but the
per-vCPU writes are guaranteed to occur after all vCPUs have switched
to the new filter.

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Michal Luczaj <mhal@rbox.co>
Link: https://lore.kernel.org/r/20230107001256.2365304-2-mhal@rbox.co
[sean: expand changelog]
Signed-off-by: Sean Christopherson <seanjc@google.com>
commit 95744a90db (parent 096691e0d2)
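For context, vCPUs consume the filter from within an SRCU read-side critical
section on kvm->srcu.  The snippet below is a simplified, hand-written sketch of
the reader side (check_pmu_event_filter() in arch/x86/kvm/pmu.c, which is not
part of this diff), included only to illustrate why the SRCU grace period, rather
than kvm->lock, is what the ioctl has to wait on:

	/*
	 * Reader side, vCPU context: runs inside an
	 * srcu_read_lock(&kvm->srcu) section, so the writer's
	 * synchronize_srcu_expedited() waits for any vCPU still using
	 * the old filter to drop out of its read-side critical section.
	 */
	filter = srcu_dereference(kvm->arch.pmu_event_filter, &kvm->srcu);
	if (!filter)
		return true;	/* no filter installed, allow the event */
	/* ... match pmc->eventsel or the fixed counter index against the filter ... */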
@@ -634,6 +634,7 @@ int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp)
 	mutex_lock(&kvm->lock);
 	filter = rcu_replace_pointer(kvm->arch.pmu_event_filter, filter,
 				     mutex_is_locked(&kvm->lock));
+	mutex_unlock(&kvm->lock);
 	synchronize_srcu_expedited(&kvm->srcu);
 
 	BUILD_BUG_ON(sizeof(((struct kvm_pmu *)0)->reprogram_pmi) >
@@ -644,8 +645,6 @@ int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp)
 
 	kvm_make_all_cpus_request(kvm, KVM_REQ_PMU);
 
-	mutex_unlock(&kvm->lock);
-
 	r = 0;
 cleanup:
 	kfree(filter);
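After this change, the writer-side ordering in kvm_vm_ioctl_set_pmu_event_filter()
is roughly as follows.  This is a hand-written sketch, not verbatim kernel source:
argument validation and the cleanup path are omitted, and the kvm_for_each_vcpu()
loop comes from the surrounding context of pmu.c rather than from this diff.

	/*
	 * Publish the new filter while holding kvm->lock so that
	 * concurrent KVM_SET_PMU_EVENT_FILTER ioctls serialize their
	 * rcu_replace_pointer() calls, then drop the lock immediately.
	 */
	mutex_lock(&kvm->lock);
	filter = rcu_replace_pointer(kvm->arch.pmu_event_filter, filter,
				     mutex_is_locked(&kvm->lock));
	mutex_unlock(&kvm->lock);

	/*
	 * Wait for vCPUs still reading the old filter; this only has to
	 * complete before the ioctl returns to userspace, so holding
	 * kvm->lock across the grace period buys nothing.
	 */
	synchronize_srcu_expedited(&kvm->srcu);

	/*
	 * Mark every counter dirty and kick all vCPUs.  A vCPU may
	 * process a set bit before receiving the final KVM_REQ_PMU,
	 * which is fine: these writes occur after all vCPUs are
	 * guaranteed to have switched to the new filter.
	 */
	kvm_for_each_vcpu(i, vcpu, kvm)
		atomic64_set(&vcpu_to_pmu(vcpu)->__reprogram_pmi, -1ull);

	kvm_make_all_cpus_request(kvm, KVM_REQ_PMU);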