3c9bd4006b
It could take kvm->mmu_lock for an extended period of time when
enabling dirty log for the first time. The main cost is to clear
all the D-bits of last level SPTEs. This situation can benefit from
manual dirty log protect as well, which can reduce the mmu_lock
time taken. The sequence is like this:

1. Initialize all the bits of the dirty bitmap to 1 when enabling
   dirty log for the first time
2. Only write protect the huge pages
3. KVM_GET_DIRTY_LOG returns the dirty bitmap info
4. KVM_CLEAR_DIRTY_LOG will clear D-bit for each of the leaf level
   SPTEs gradually in small chunks

Under the Intel(R) Xeon(R) Gold 6152 CPU @ 2.10GHz environment,
I did some tests with a 128G windows VM and counted the time taken
of memory_global_dirty_log_start; here are the numbers:

VM Size    Before    After optimization
128G       460ms     10ms

Signed-off-by: Jay Zhou <jianjay.zhou@huawei.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
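For context, here is a rough userspace sketch (not part of the patch) of how
a VMM might drive this mode: enable KVM_DIRTY_LOG_INITIALLY_SET together with
manual protect, then fetch the bitmap and re-protect pages in chunks with
KVM_CLEAR_DIRTY_LOG. Step 2 (write protecting the huge pages) happens inside
KVM when dirty logging is enabled on the memslot. The vm_fd is assumed to come
from KVM_CREATE_VM; SLOT_ID, SLOT_NPAGES and CLEAR_CHUNK are illustrative
values, and error handling is omitted.

	/* Illustrative sketch only, built on the uapi in <linux/kvm.h>. */
	#include <linux/kvm.h>
	#include <stdint.h>
	#include <sys/ioctl.h>

	#define SLOT_ID      0
	#define SLOT_NPAGES  ((128ULL << 30) >> 12)  /* 128G guest RAM, 4K pages */
	#define CLEAR_CHUNK  (256 * 1024)            /* pages re-protected per ioctl */

	static void enable_initially_set(int vm_fd)
	{
		struct kvm_enable_cap cap = {
			.cap = KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2,
			/* step 1: bitmap starts all 1s, D-bits are not cleared yet */
			.args[0] = KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE |
				   KVM_DIRTY_LOG_INITIALLY_SET,
		};

		ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
	}

	static void harvest_and_clear(int vm_fd, unsigned long *bitmap)
	{
		struct kvm_dirty_log log = {
			.slot = SLOT_ID,
			.dirty_bitmap = bitmap,
		};
		uint64_t first;

		/* step 3: fetch the bitmap; on the first pass every bit is set */
		ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log);

		/*
		 * step 4: re-arm dirty tracking a chunk at a time, so clearing
		 * the leaf D-bits is spread over many short mmu_lock sections
		 * instead of one long one at enable time.
		 */
		for (first = 0; first < SLOT_NPAGES; first += CLEAR_CHUNK) {
			struct kvm_clear_dirty_log clear = {
				.slot = SLOT_ID,
				.first_page = first,
				.num_pages = CLEAR_CHUNK,
				/* bit 0 of the passed bitmap maps to first_page */
				.dirty_bitmap = bitmap + first / (8 * sizeof(*bitmap)),
			};

			ioctl(vm_fd, KVM_CLEAR_DIRTY_LOG, &clear);
		}
	}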