KVM: x86/pmu: Add PDIR++ and PDist support for SPR and later models
The PEBS capability on SPR is basically the same as on Ice Lake Server, with the exception of two special facilities that have been enhanced and require special handling.

Upon triggering a PEBS assist, there is a finite delay between the time the counter overflows and when the microcode starts to carry out its data-collection obligations. Even if the delay is constant in core clock space, it invariably manifests as a variable "skid" in instruction address space.

On Ice Lake Server, the Precise Distribution of Instructions Retired (PDIR) facility mitigates the skid problem by providing an early indication of when the counter is about to overflow. On SPR, the available PDIR counter (Fixed 0) is unchanged, but the capability is enhanced to Instruction-Accurate PDIR (PDIR++), where the PEBS record is taken on the next instruction after the one that caused the overflow.

SPR also introduces a new Precise Distribution (PDist) facility, available only on general-purpose programmable counter 0. Per the Intel SDM, PDist eliminates any skid or shadowing effects from PEBS. With PDist, the PEBS record is generated precisely upon completion of the instruction or operation that causes the counter to overflow (there is no "wait for next occurrence" by default).

In terms of KVM handling, when the guest accesses one of these special counters, KVM needs to request the same-index counter via the perf_event kernel subsystem to ensure that the guest uses the correct PEBS hardware counter (PDIR++ or PDist). This is achieved mainly by raising the event's precise level to the maximum, where the semantics of this magic number are defined by the internal software context of perf_event; it is also backwards compatible as part of the user-space interface.

Opportunistically, refine the confusing comment about TNT+, as the only models that currently support pebs_ept are Ice Lake Server and SPR (GLC+).
Signed-off-by: Like Xu <likexu@tencent.com>
Link: https://lore.kernel.org/r/20221109082802.27543-3-likexu@tencent.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
This commit is contained in:
parent 2de154f541
commit 974850be01
@@ -29,9 +29,18 @@
 struct x86_pmu_capability __read_mostly kvm_pmu_cap;
 EXPORT_SYMBOL_GPL(kvm_pmu_cap);
 
-static const struct x86_cpu_id vmx_icl_pebs_cpu[] = {
+/* Precise Distribution of Instructions Retired (PDIR) */
+static const struct x86_cpu_id vmx_pebs_pdir_cpu[] = {
 	X86_MATCH_INTEL_FAM6_MODEL(ICELAKE_D, NULL),
 	X86_MATCH_INTEL_FAM6_MODEL(ICELAKE_X, NULL),
+	/* Instruction-Accurate PDIR (PDIR++) */
+	X86_MATCH_INTEL_FAM6_MODEL(SAPPHIRERAPIDS_X, NULL),
+	{}
+};
+
+/* Precise Distribution (PDist) */
+static const struct x86_cpu_id vmx_pebs_pdist_cpu[] = {
+	X86_MATCH_INTEL_FAM6_MODEL(SAPPHIRERAPIDS_X, NULL),
 	{}
 };
 
@@ -156,6 +165,28 @@ static void kvm_perf_overflow(struct perf_event *perf_event,
 	kvm_make_request(KVM_REQ_PMU, pmc->vcpu);
 }
 
+static u64 pmc_get_pebs_precise_level(struct kvm_pmc *pmc)
+{
+	/*
+	 * For some model specific pebs counters with special capabilities
+	 * (PDIR, PDIR++, PDIST), KVM needs to raise the event precise
+	 * level to the maximum value (currently 3, backwards compatible)
+	 * so that the perf subsystem would assign specific hardware counter
+	 * with that capability for vPMC.
+	 */
+	if ((pmc->idx == 0 && x86_match_cpu(vmx_pebs_pdist_cpu)) ||
+	    (pmc->idx == 32 && x86_match_cpu(vmx_pebs_pdir_cpu)))
+		return 3;
+
+	/*
+	 * The non-zero precision level of guest event makes the ordinary
+	 * guest event becomes a guest PEBS event and triggers the host
+	 * PEBS PMI handler to determine whether the PEBS overflow PMI
+	 * comes from the host counters or the guest.
+	 */
+	return 1;
+}
+
 static int pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type, u64 config,
 				 bool exclude_user, bool exclude_kernel,
 				 bool intr)
@@ -187,22 +218,12 @@ static int pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type, u64 config,
 	}
 	if (pebs) {
 		/*
-		 * The non-zero precision level of guest event makes the ordinary
-		 * guest event becomes a guest PEBS event and triggers the host
-		 * PEBS PMI handler to determine whether the PEBS overflow PMI
-		 * comes from the host counters or the guest.
-		 *
 		 * For most PEBS hardware events, the difference in the software
 		 * precision levels of guest and host PEBS events will not affect
 		 * the accuracy of the PEBS profiling result, because the "event IP"
 		 * in the PEBS record is calibrated on the guest side.
-		 *
-		 * On Icelake everything is fine. Other hardware (GLC+, TNT+) that
-		 * could possibly care here is unsupported and needs changes.
 		 */
-		attr.precise_ip = 1;
-		if (x86_match_cpu(vmx_icl_pebs_cpu) && pmc->idx == 32)
-			attr.precise_ip = 3;
+		attr.precise_ip = pmc_get_pebs_precise_level(pmc);
 	}
 
 	event = perf_event_create_kernel_counter(&attr, -1, current,