5a3f49364c
Currently the HV KVM code takes the kvm->lock around calls to
kvm_for_each_vcpu() and kvm_get_vcpu_by_id() (which can call
kvm_for_each_vcpu() internally). However, that leads to a lock order
inversion problem, because these are called in contexts where the vcpu
mutex is held, but the vcpu mutexes nest within kvm->lock according to
Documentation/virtual/kvm/locking.txt. Hence there is a possibility of
deadlock.

To fix this, we simply don't take the kvm->lock mutex around these
calls. This is safe because kvm_for_each_vcpu() and
kvm_get_vcpu_by_id() are designed so that they can safely be called
locklessly.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
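For illustration, a minimal sketch of the before/after pattern the message describes follows. The helper functions (count_vcpus_locked(), count_vcpus()) and their bodies are hypothetical and not taken from the patch; only struct kvm, struct kvm_vcpu, kvm->lock and the kvm_for_each_vcpu() iterator come from the KVM API.

#include <linux/kvm_host.h>

/*
 * Hypothetical example of the old pattern: walking the vcpus under
 * kvm->lock.  If the caller already holds a vcpu->mutex, this inverts
 * the documented lock order (vcpu mutexes nest inside kvm->lock) and
 * can deadlock against a thread that takes kvm->lock first and then a
 * vcpu->mutex.
 */
static int count_vcpus_locked(struct kvm *kvm)
{
	struct kvm_vcpu *vcpu;
	int i, n = 0;

	mutex_lock(&kvm->lock);
	kvm_for_each_vcpu(i, vcpu, kvm)
		n++;
	mutex_unlock(&kvm->lock);
	return n;
}

/*
 * The fixed pattern: kvm_for_each_vcpu() is safe to call locklessly,
 * so the kvm->lock acquisition is simply dropped.
 */
static int count_vcpus(struct kvm *kvm)
{
	struct kvm_vcpu *vcpu;
	int i, n = 0;

	kvm_for_each_vcpu(i, vcpu, kvm)
		n++;
	return n;
}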