# SPDX-License-Identifier: GPL-2.0-only
/aarch64/arch_timer
/aarch64/debug-exceptions
/aarch64/get-reg-list
/aarch64/hypercalls
/aarch64/psci_test
/aarch64/vcpu_width_config
/aarch64/vgic_init
/aarch64/vgic_irq
/s390x/memop
/s390x/resets
/s390x/sync_regs_test
/s390x/tprot
/x86_64/amx_test
/x86_64/cpuid_test
/x86_64/cr4_cpuid_sync_test
/x86_64/debug_regs
/x86_64/evmcs_test
/x86_64/emulator_error_test
/x86_64/fix_hypercall_test
/x86_64/get_msr_index_features
/x86_64/kvm_clock_test
/x86_64/kvm_pv_test
/x86_64/hyperv_clock
/x86_64/hyperv_cpuid
/x86_64/hyperv_features
/x86_64/hyperv_svm_test
/x86_64/max_vcpuid_cap_test
/x86_64/mmio_warning_test
/x86_64/monitor_mwait_test
/x86_64/nx_huge_pages_test
/x86_64/platform_info_test
/x86_64/pmu_event_filter_test
/x86_64/set_boot_cpu_id
/x86_64/set_sregs_test
/x86_64/sev_migrate_tests
/x86_64/smm_test
/x86_64/state_test
/x86_64/svm_vmcall_test
/x86_64/svm_int_ctl_test
/x86_64/svm_nested_soft_inject_test
/x86_64/sync_regs_test
/x86_64/tsc_msrs_test
/x86_64/tsc_scaling_sync
/x86_64/ucna_injection_test
/x86_64/userspace_io_test
/x86_64/userspace_msr_exit_test
/x86_64/vmx_apic_access_test
/x86_64/vmx_close_while_nested_test
/x86_64/vmx_dirty_log_test
/x86_64/vmx_exception_with_invalid_guest_state
/x86_64/vmx_invalid_nested_guest_state
/x86_64/vmx_preemption_timer_test
/x86_64/vmx_set_nested_state_test
/x86_64/vmx_tsc_adjust_test
/x86_64/vmx_nested_tsc_scaling_test
/x86_64/xapic_ipi_test
/x86_64/xapic_state_test
/x86_64/xen_shinfo_test
/x86_64/xen_vmcall_test
/x86_64/xss_msr_test
/x86_64/vmx_pmu_caps_test
/x86_64/triple_fault_event_test
/access_tracking_perf_test
/demand_paging_test
/dirty_log_test
/dirty_log_perf_test
/hardware_disable_test
/kvm_create_max_vcpus
/kvm_page_table_test
/max_guest_memory_test
/memslot_modification_stress_test
/memslot_perf_test
/rseq_test
/set_memory_region_test
/steal_time
/kvm_binary_stats_test
/system_counter_offset_test