Expand the comment about what bits are and aren't checked when emulating
PMC events in software. As pointed out by Jim, AMD's mask includes bits
35:32, which on Intel overlap with the IN_TX and IN_TXCP bits (32 and 33)
as well as reserved bits (34 and 35).
Checking the IN_TX* bits is actually correct, as it's safe to assert that
the vCPU can't be in an HLE/RTM transaction if KVM is emulating an
instruction, i.e. KVM *shouldn't* count if either of those bits is set.
For the reserved bits, KVM has equal odds of being right if Intel adds
new behavior, i.e. ignoring them is just as likely to be correct as
checking them.
Opportunistically explain *why* the other flags aren't checked.
Suggested-by: Jim Mattson <jmattson@google.com>
Link: https://lore.kernel.org/r/20231110022857.1273836-9-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Snapshot the event selectors for the events that KVM emulates in software,
which is currently instructions retired and branch instructions retired.
The event selectors are tied to the underlying CPU, i.e. are constant for a
given platform even though perf doesn't manage the mappings as such.
Getting the event selectors from perf isn't exactly cheap, especially if
mitigations are enabled, as at least one indirect call is involved.
Snapshot the values in KVM instead of optimizing perf as working with the
raw event selectors will be required if KVM ever wants to emulate events
that aren't part of perf's uABI, i.e. that don't have an "enum perf_hw_id"
entry.
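Very roughly, the snapshotting reduces to something like the following
standalone sketch (all names here are hypothetical, not KVM's actual code;
the values are Intel's architectural encodings, event select C0H/umask 00H
for instructions retired and C4H/00H for branch instructions retired):

  #include <stdint.h>

  /* Hypothetical stand-in for perf's lookup; on Intel the architectural
   * encodings are C0H/00H (instructions) and C4H/00H (branches). */
  static uint64_t toy_query_hw_event_config(int hw_id)
  {
          return hw_id == 1 ? 0x00c0 : 0x00c4;
  }

  static uint64_t toy_instructions_retired_eventsel;
  static uint64_t toy_branches_retired_eventsel;

  /* Snapshot once at init; the selectors are constant for the platform,
   * so there's no need to pay for an indirect call on every event. */
  static void toy_init_pmu_capability(void)
  {
          toy_instructions_retired_eventsel = toy_query_hw_event_config(1);
          toy_branches_retired_eventsel     = toy_query_hw_event_config(4);
  }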
Link: https://lore.kernel.org/r/20231110022857.1273836-8-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Mask off disabled counters based on PERF_GLOBAL_CTRL *before* iterating
over PMCs to emulate (branch) instructions retired events in software. In
the common case where the guest isn't utilizing the PMU, pre-checking for
enabled counters turns a relatively expensive search into a few AND uops
and a Jcc.
Sadly, PMUs without PERF_GLOBAL_CTRL, e.g. most existing AMD CPUs, are out
of luck as there is no way to check that a PMC isn't being used without
checking the PMC's event selector.
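For illustration, the shape of the optimization is roughly the following
(a standalone sketch with hypothetical names, not KVM's actual code):

  #include <stdint.h>

  struct toy_pmu {
          uint64_t all_valid_pmc_idx;  /* bitmap of existing PMCs */
          uint64_t global_ctrl;        /* guest's PERF_GLOBAL_CTRL */
  };

  static void toy_trigger_event(struct toy_pmu *pmu, void (*emulate)(int))
  {
          /* The pre-check: a couple of ALU ops and a branch when the
           * guest isn't using the PMU, instead of a per-PMC search. */
          uint64_t bitmap = pmu->all_valid_pmc_idx & pmu->global_ctrl;
          int idx;

          if (!bitmap)
                  return;

          for (idx = 0; idx < 64; idx++) {
                  if (bitmap & (1ull << idx))
                          emulate(idx);
          }
  }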
Cc: Konstantin Khorenko <khorenko@virtuozzo.com>
Link: https://lore.kernel.org/r/20231110022857.1273836-7-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Add and use kvm_for_each_pmc() to dedup a variety of open coded for-loops
that iterate over valid PMCs given a bitmap (and because seeing checkpatch
whine about bad macro style is always amusing).
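A toy version of such a macro, with hypothetical names (the trailing
continue/else construction is exactly the style checkpatch objects to):

  #include <stdint.h>

  struct toy_pmc { int idx; };
  struct toy_pmu { struct toy_pmc pmcs[64]; uint64_t valid; };

  static struct toy_pmc *toy_idx_to_pmc(struct toy_pmu *pmu, int i)
  {
          return (pmu->valid & (1ull << i)) ? &pmu->pmcs[i] : NULL;
  }

  /* Iterate over set bits, skipping indices with no backing PMC. */
  #define toy_for_each_pmc(pmu, pmc, i, bitmap)                     \
          for ((i) = 0; (i) < 64; (i)++)                            \
                  if (!((bitmap) & (1ull << (i))) ||                \
                      !((pmc) = toy_idx_to_pmc((pmu), (i))))        \
                          continue;                                 \
                  else

The caller's loop body hangs off the final "else", e.g.
toy_for_each_pmc(pmu, pmc, i, bitmap) { <use pmc> }.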
No functional change intended.
Link: https://lore.kernel.org/r/20231110022857.1273836-6-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Refactor the handling of the reprogramming bitmap to snapshot and clear
to-be-processed bits before doing the reprogramming, and then explicitly
set bits for PMCs that need to be reprogrammed (again). This will allow
adding a macro to iterate over all valid PMCs without having to add
special handling for the reprogramming bit, which (a) can have bits set
for non-existent PMCs and (b) needs to clear such bits to avoid wasting
cycles in perpetuity.
Note, the existing behavior of clearing bits after reprogramming does NOT
have a race with kvm_vm_ioctl_set_pmu_event_filter(). Setting a new PMU
filter synchronizes SRCU _before_ setting the bitmap, i.e. guarantees that
the vCPU isn't in the middle of reprogramming with a stale filter prior to
setting the bitmap.
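A sketch of the snapshot-and-clear pattern, using C11 atomics in place of
KVM's actual primitives (all names are illustrative):

  #include <stdatomic.h>
  #include <stdint.h>

  struct toy_pmu { _Atomic uint64_t reprogram_pmi; };

  /* Stand-in for the real reprogramming; returns 0 on success. */
  static int toy_reprogram_pmc(struct toy_pmu *pmu, int idx)
  {
          (void)pmu;
          (void)idx;
          return 0;
  }

  static void toy_handle_reprogramming(struct toy_pmu *pmu)
  {
          /* Snapshot and clear the to-be-processed bits up front... */
          uint64_t bitmap = atomic_exchange(&pmu->reprogram_pmi, 0);
          int idx;

          for (idx = 0; idx < 64; idx++) {
                  if (!(bitmap & (1ull << idx)))
                          continue;
                  /* ...and explicitly re-set a bit only if that PMC needs
                   * to be reprogrammed again on the next pass. */
                  if (toy_reprogram_pmc(pmu, idx))
                          atomic_fetch_or(&pmu->reprogram_pmi,
                                          1ull << idx);
          }
  }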
Link: https://lore.kernel.org/r/20231110022857.1273836-5-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Add a common helper for *internal* PMC lookups, and delete the ops hook
and Intel's implementation. Keep AMD's implementation, but rename it to
amd_pmu_get_pmc() to make it somewhat more obvious that it's suited for
both KVM-internal and guest-initiated lookups.
Because KVM tracks all counters in a single bitmap, getting a counter
when iterating over a bitmap, e.g. of all valid PMCs, requires a small
amount of math that, while simple, isn't super obvious and doesn't use the
same semantics as PMC lookups from RDPMC! Although AMD doesn't support
fixed counters, the common PMU code still behaves as if there is a split, the
high half of which just happens to always be empty.
Opportunistically add a comment to explain both what is going on, and why
KVM uses a single bitmap, e.g. the boilerplate for iterating over separate
bitmaps could be done via macros, so it's not (just) about deduplicating
code.
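The math in question amounts to something like the following sketch
(hypothetical names; bits 31:0 track GP counters, bits 63:32 fixed
counters):

  #include <stdint.h>

  #define TOY_FIXED_PMC_BASE_IDX 32

  struct toy_pmc { uint64_t counter; };
  struct toy_pmu {
          struct toy_pmc gp_counters[32];
          struct toy_pmc fixed_counters[32];
  };

  /* Map a bit in the unified bitmap to the underlying counter; note that
   * these semantics differ from RDPMC, where fixed counters are selected
   * via a type in ECX[31:16], not via a bit offset. */
  static struct toy_pmc *toy_idx_to_pmc(struct toy_pmu *pmu, int idx)
  {
          if (idx < TOY_FIXED_PMC_BASE_IDX)
                  return &pmu->gp_counters[idx];
          return &pmu->fixed_counters[idx - TOY_FIXED_PMC_BASE_IDX];
  }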
Link: https://lore.kernel.org/r/20231110022857.1273836-4-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Add a common define to "officially" solidify KVM's split of counters,
i.e. to commit to using bits 31:0 to track general purpose counters and
bits 63:32 to track fixed counters (which only Intel supports). KVM
already bleeds this behavior all over common PMU code, and adding a KVM-
defined macro allows clarifying that the value is a _base_, as opposed to
the _flag_ that is used to access fixed PMCs via RDPMC (which perf
confusingly calls INTEL_PMC_FIXED_RDPMC_BASE).
No functional change intended.
Link: https://lore.kernel.org/r/20231110022857.1273836-3-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Move the purging of common PMU metadata from intel_pmu_refresh() to
kvm_pmu_refresh(), and invoke the vendor refresh() hook if and only if
the VM is supposed to have a vPMU.
KVM already denies access to the PMU based on kvm->arch.enable_pmu, as
get_gp_pmc_amd() returns NULL for all PMCs in that case, i.e. KVM already
violates AMD's architecture by not virtualizing a PMU (kernels have long
since learned to not panic when the PMU is unavailable). But configuring
the PMU as if it were enabled causes unwanted side effects, e.g. calls to
kvm_pmu_trigger_event() waste an absurd number of cycles due to the
all_valid_pmc_idx bitmap being non-zero.
Fixes: b1d66dad65 ("KVM: x86/svm: Add module param to control PMU virtualization")
Reported-by: Konstantin Khorenko <khorenko@virtuozzo.com>
Closes: https://lore.kernel.org/all/20231109180646.2963718-2-khorenko@virtuozzo.com
Link: https://lore.kernel.org/r/20231110022857.1273836-2-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Extend the read/write PMU counters subtest to verify that RDPMC also reads
back the written value. Opportunistically verify that attempting to use
the "fast" mode of RDPMC fails, as the "fast" flag is only supported by
non-architectural PMUs, which KVM doesn't virtualize.
Tested-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Link: https://lore.kernel.org/r/20240109230250.424295-30-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Add helpers for safe and safe-with-forced-emulations versions of RDMSR,
RDPMC, and XGETBV. Use macro shenanigans to eliminate the rather large
amount of boilerplate needed to get values in and out of registers.
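The boilerplate being eliminated is the "ECX in, EDX:EAX out" plumbing; a
simplified (non-safe, no exception fixup) sketch of the idea, with
hypothetical names, and which only makes sense to execute from a selftest
guest with sufficient privileges:

  #include <stdint.h>

  /* Stamp out wrappers for instructions that take an index in ECX and
   * return a 64-bit value in EDX:EAX. */
  #define TOY_BUILD_READ_U64(name, insn)                               \
  static inline uint64_t name(uint32_t idx)                            \
  {                                                                    \
          uint32_t lo, hi;                                             \
          __asm__ volatile(insn : "=a"(lo), "=d"(hi) : "c"(idx));      \
          return ((uint64_t)hi << 32) | lo;                            \
  }

  TOY_BUILD_READ_U64(toy_rdmsr, "rdmsr")
  TOY_BUILD_READ_U64(toy_rdpmc, "rdpmc")
  TOY_BUILD_READ_U64(toy_xgetbv, "xgetbv")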
Tested-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Link: https://lore.kernel.org/r/20240109230250.424295-29-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Add KVM_ASM_SAFE_FEP() to allow forcing emulation on an instruction that
might fault. Note, KVM skips RIP past the FEP prefix before injecting an
exception, i.e. the fixup needs to be on the instruction itself. Do not
check for FEP support; that is firmly the responsibility of whatever code
wants to use KVM_ASM_SAFE_FEP().
Sadly, chaining variadic arguments that contain commas doesn't work, thus
the unfortunate amount of copy+paste.
Tested-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Link: https://lore.kernel.org/r/20240109230250.424295-28-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Extend the PMC counters test to use forced emulation to verify that KVM
emulates counter events for instructions retired and branches retired.
Force emulation for only a subset of the measured code to test that KVM
does the right thing when mixing perf events with emulated events.
Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Tested-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Link: https://lore.kernel.org/r/20240109230250.424295-27-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Move the KVM_FEP definition, a.k.a. the KVM force emulation prefix, into
processor.h so that it can be used for other tests besides the MSR filter
test.
Tested-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Link: https://lore.kernel.org/r/20240109230250.424295-26-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Add a helper to detect KVM support for forced emulation by querying the
module param, and use the helper to detect support for the MSR filtering
test instead of throwing a noodle/NOP at KVM to see if it sticks.
Cc: Aaron Lewis <aaronlewis@google.com>
Tested-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Link: https://lore.kernel.org/r/20240109230250.424295-25-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Add helpers to read integer module params, which is painfully non-trivial
because the pain of dealing with strings in C is exacerbated by the kernel
inserting a newline.
Don't bother differentiating between int, uint, short, etc. They all fit
in an int, and KVM (thankfully) doesn't have any integer params larger
than an int.
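A minimal userspace sketch of the parsing (simplified error handling;
names are illustrative): the kernel appends a newline to the value, but
strtol() conveniently treats the '\n' as the end of the number, so the
integer case needs no trimming.

  #include <stdio.h>
  #include <stdlib.h>

  static int toy_get_module_param_int(const char *module, const char *param)
  {
          char path[256], buf[64];
          FILE *f;

          snprintf(path, sizeof(path), "/sys/module/%s/parameters/%s",
                   module, param);
          f = fopen(path, "r");
          if (!f || !fgets(buf, sizeof(buf), f))
                  exit(1);
          fclose(f);
          return (int)strtol(buf, NULL, 0);
  }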
Tested-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Link: https://lore.kernel.org/r/20240109230250.424295-24-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Add a helper to probe KVM's "enable_pmu" param; open coding strings in
multiple places is just asking for false negatives and/or runtime errors
due to typos.
Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Tested-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Link: https://lore.kernel.org/r/20240109230250.424295-23-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Expand the PMU counters test to verify that LLC references and misses have
non-zero counts when the code being executed while the LLC event(s) is
active is evicted via CLFLUSH{,OPT}. Note, CLFLUSH{,OPT} requires a fence
of some kind to ensure the cache lines are flushed before execution
continues. Use MFENCE for simplicity (performance is not a concern).
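The flush-plus-fence idiom looks roughly like this (illustrative, not the
test's literal code):

  /* Evict a cache line, then fence so the flush completes before the
   * measured code continues; MFENCE orders CLFLUSH. */
  static inline void toy_clflush(void *p)
  {
          __asm__ volatile("clflush (%0)\n\t"
                           "mfence" : : "r"(p) : "memory");
  }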
Suggested-by: Jim Mattson <jmattson@google.com>
Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Tested-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Link: https://lore.kernel.org/r/20240109230250.424295-22-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Extend the fixed counters test to verify that supported counters can
actually be enabled in the control MSRs, that unsupported counters cannot,
and that enabled counters actually count.
Co-developed-by: Like Xu <likexu@tencent.com>
Signed-off-by: Like Xu <likexu@tencent.com>
Signed-off-by: Jinrong Liang <cloudliang@tencent.com>
[sean: fold into the rd/wr access test, massage changelog]
Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Tested-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Link: https://lore.kernel.org/r/20240109230250.424295-21-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Extend the PMU counters test to verify KVM emulation of fixed counters in
addition to general purpose counters. Fixed counters add an extra wrinkle
in the form of an extra supported bitmask. Thus quoth the SDM:
fixed-function performance counter 'i' is supported if ECX[i] || (EDX[4:0] > i)
Test that KVM handles a counter being available through either method.
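In C, the quoted rule reduces to a simple predicate over the CPUID.0xA
outputs (names here are illustrative):

  #include <stdbool.h>
  #include <stdint.h>

  /* Fixed counter i is supported if its bit is set in ECX, OR if the
   * fixed counter count in EDX[4:0] exceeds i. */
  static bool toy_fixed_counter_supported(uint32_t ecx, uint32_t edx,
                                          unsigned int i)
  {
          return (ecx & (1u << i)) || (edx & 0x1f) > i;
  }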
Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Co-developed-by: Like Xu <likexu@tencent.com>
Signed-off-by: Like Xu <likexu@tencent.com>
Signed-off-by: Jinrong Liang <cloudliang@tencent.com>
Co-developed-by: Sean Christopherson <seanjc@google.com>
Tested-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Link: https://lore.kernel.org/r/20240109230250.424295-20-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Add a test to verify that KVM correctly emulates MSR-based accesses to
general purpose counters based on guest CPUID, e.g. that accesses to
non-existent counters #GP and accesses to existent counters succeed.
Note, for compatibility reasons, KVM does not emulate #GP when
MSR_P6_PERFCTR[0|1] is not present (writes should be dropped).
Co-developed-by: Like Xu <likexu@tencent.com>
Signed-off-by: Like Xu <likexu@tencent.com>
Signed-off-by: Jinrong Liang <cloudliang@tencent.com>
Co-developed-by: Sean Christopherson <seanjc@google.com>
Tested-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Link: https://lore.kernel.org/r/20240109230250.424295-19-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Extend the PMU counters test to validate architectural events using fixed
counters. The core logic is largely the same, the biggest difference
being that if a fixed counter exists, its associated event is available
(the SDM doesn't explicitly state this to be true, but it's KVM's ABI and
letting software program a fixed counter that doesn't actually count would
be quite bizarre).
Note, fixed counters rely on PERF_GLOBAL_CTRL.
Reviewed-by: Jim Mattson <jmattson@google.com>
Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Co-developed-by: Like Xu <likexu@tencent.com>
Signed-off-by: Like Xu <likexu@tencent.com>
Signed-off-by: Jinrong Liang <cloudliang@tencent.com>
Co-developed-by: Sean Christopherson <seanjc@google.com>
Tested-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Link: https://lore.kernel.org/r/20240109230250.424295-18-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Add test cases to verify that Intel's Architectural PMU events work as
expected when they are available according to guest CPUID. Iterate over a
range of sane PMU versions, with and without full-width writes enabled,
and over interesting combinations of lengths/masks for the bit vector that
enumerates unavailable events.
Test up to vPMU version 5, i.e. the current architectural max. KVM only
officially supports up to version 2, but the behavior of the counters is
backwards compatible, i.e. KVM shouldn't do something completely different
for a higher, architecturally-defined vPMU version. Verify KVM behavior
against the effective vPMU version, e.g. advertising vPMU 5 when KVM only
supports vPMU 2 shouldn't magically unlock vPMU 5 features.
According to the Intel SDM, the number of architectural events is reported
through CPUID.0AH:EAX[31:24] and the architectural event x is supported
if EBX[x]=0 && EAX[31:24]>x.
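As a predicate (illustrative names):

  #include <stdbool.h>
  #include <stdint.h>

  /* Architectural event x is supported if it's within the reported
   * vector length (EAX[31:24]) and its "unavailable" bit in EBX is 0. */
  static bool toy_arch_event_supported(uint32_t eax, uint32_t ebx,
                                       unsigned int x)
  {
          return x < ((eax >> 24) & 0xff) && !(ebx & (1u << x));
  }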
Handcode the entirety of the measured section so that the test can
precisely assert on the number of instructions and branches retired.
Co-developed-by: Like Xu <likexu@tencent.com>
Signed-off-by: Like Xu <likexu@tencent.com>
Signed-off-by: Jinrong Liang <cloudliang@tencent.com>
Co-developed-by: Sean Christopherson <seanjc@google.com>
Tested-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Link: https://lore.kernel.org/r/20240109230250.424295-17-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Add a PMU library for x86 selftests to help eliminate open-coded event
encodings, and to reduce the amount of copy+paste between PMU selftests.
Use the new common macro definitions in the existing PMU event filter test.
Cc: Aaron Lewis <aaronlewis@google.com>
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Jinrong Liang <cloudliang@tencent.com>
Co-developed-by: Sean Christopherson <seanjc@google.com>
Tested-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Link: https://lore.kernel.org/r/20240109230250.424295-16-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Extend the kvm_x86_pmu_feature framework to allow querying for fixed
counters via {kvm,this}_pmu_has(). Like architectural events, checking
for a fixed counter annoyingly requires checking multiple CPUID fields, as
a fixed counter exists if:
FxCtr[i]_is_supported := ECX[i] || (EDX[4:0] > i);
Note, KVM currently doesn't actually support exposing fixed counters via
the bitmask, but that will hopefully change sooner than later, and Intel's
SDM explicitly "recommends" checking both the number of counters and the
mask.
Rename the intermediate "anti_feature" field to simply 'f' since the fixed
counter bitmask (thankfully) doesn't have reversed polarity like the
architectural events bitmask.
Note, ideally the helpers would use BUILD_BUG_ON() to assert on the
incoming register, but the expected usage in PMU tests can't guarantee the
inputs are compile-time constants.
Opportunistically define macros for all of the known architectural events
and fixed counters.
Tested-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Link: https://lore.kernel.org/r/20240109230250.424295-15-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Drop the "name" parameter from KVM_X86_PMU_FEATURE(), it's unused and
the name is redundant with the macro, i.e. it's truly useless.
Reviewed-by: Jim Mattson <jmattson@google.com>
Tested-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Link: https://lore.kernel.org/r/20240109230250.424295-14-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Add vcpu_set_cpuid_property() helper function for setting properties, and
use it instead of open coding an equivalent for MAX_PHY_ADDR. Future vPMU
testcases will also need to stuff various CPUID properties.
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Jinrong Liang <cloudliang@tencent.com>
Co-developed-by: Sean Christopherson <seanjc@google.com>
Tested-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Link: https://lore.kernel.org/r/20240109230250.424295-13-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Explicitly check for attempts to read unsupported PMC types instead of
letting the bounds check fail. Functionally, letting the check fail is
ok, but it's unnecessarily subtle and does a poor job of documenting the
architectural behavior that KVM is emulating.
Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Tested-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Link: https://lore.kernel.org/r/20240109230250.424295-12-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Refactor KVM's handling of ECX for RDPMC to treat the FIXED modifier as an
explicit value, not a flag (minus one wart). While non-architectural PMUs
do use bit 31 as a flag (for "fast" reads), architectural PMUs use the
upper half of ECX to encode the type. From the SDM:
ECX[31:16] specifies type of PMC while ECX[15:0] specifies the index of
the PMC to be read within that type
Note, the fact that the known supported types, 4000H and 2000H, look a
lot like flags doesn't contradict the above statement that ECX[31:16]
holds the type, at least not by any sane reading of the SDM.
Keep the explicit clearing of the FIXED "flag", as KVM subtly relies on
that behavior to disallow unsupported types while allowing the correct
indices for fixed counters. This wart will be cleaned up in short order.
Opportunistically grab the per-type bitmask in the if-else blocks to
eliminate the one-off usage of the local "fixed" bool.
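Decoding along those lines might look like the following (hypothetical
macros; the known types from the SDM are 4000H for fixed counters and
2000H for performance metrics):

  #include <stdint.h>

  /* ECX[31:16] is the PMC type, ECX[15:0] the index within the type. */
  #define TOY_RDPMC_TYPE_MASK     0xffff0000u
  #define TOY_RDPMC_IDX_MASK      0x0000ffffu
  #define TOY_RDPMC_TYPE_FIXED    (0x4000u << 16)
  #define TOY_RDPMC_TYPE_METRICS  (0x2000u << 16)

  static uint32_t toy_rdpmc_type(uint32_t ecx)
  {
          return ecx & TOY_RDPMC_TYPE_MASK;
  }

  static uint32_t toy_rdpmc_idx(uint32_t ecx)
  {
          return ecx & TOY_RDPMC_IDX_MASK;
  }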
Reported-by: Jim Mattson <jmattson@google.com>
Tested-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Link: https://lore.kernel.org/r/20240109230250.424295-11-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Inject #GP on RDPMC if the "fast" flag is set for architectural Intel
PMUs, i.e. if the PMU version is non-zero. Per Intel's SDM, and confirmed
on bare metal, the "fast" flag is supported only for non-architectural
PMUs, and is reserved for architectural PMUs.
If the processor does not support architectural performance monitoring
(CPUID.0AH:EAX[7:0]=0), ECX[30:0] specifies the index of the PMC to be
read. Setting ECX[31] selects “fast” read mode if supported. In this mode,
RDPMC returns bits 31:0 of the PMC in EAX while clearing EDX to zero.
If the processor does support architectural performance monitoring
(CPUID.0AH:EAX[7:0] ≠ 0), ECX[31:16] specifies type of PMC while ECX[15:0]
specifies the index of the PMC to be read within that type. The following
PMC types are currently defined:
— General-purpose counters use type 0. The index x (to read IA32_PMCx)
must be less than the value enumerated by CPUID.0AH.EAX[15:8] (thus
ECX[15:8] must be zero).
— Fixed-function counters use type 4000H. The index x (to read
IA32_FIXED_CTRx) can be used if either CPUID.0AH.EDX[4:0] > x or
CPUID.0AH.ECX[x] = 1 (thus ECX[15:5] must be 0).
— Performance metrics use type 2000H. This type can be used only if
IA32_PERF_CAPABILITIES.PERF_METRICS_AVAILABLE[bit 15]=1. For this type,
the index in ECX[15:0] is implementation specific.
Opportunistically WARN if KVM ever actually tries to complete RDPMC for a
non-architectural PMU, and drop the non-existent "support" for fast RDPMC,
as KVM doesn't support such PMUs, i.e. kvm_pmu_rdpmc() should reject the
RDPMC before getting to the Intel code.
Fixes: f5132b0138 ("KVM: Expose a version 2 architectural PMU to a guests")
Fixes: 67f4d4288c ("KVM: x86: rdpmc emulation checks the counter incorrectly")
Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Tested-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Link: https://lore.kernel.org/r/20240109230250.424295-10-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Move the handling of "fast" RDPMC instructions, which drop bits 63:32 of
the count, to Intel. The "fast" flag, and all modifiers for that matter,
are Intel-only and aren't supported by AMD.
Opportunistically replace open coded bit crud with proper #defines, and
add comments to try and disentangle the flags vs. values mess for
non-architectural vs. architectural PMUs.
Fixes: ca724305a2 ("KVM: x86/vPMU: Implement AMD vPMU code for KVM")
Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Tested-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Link: https://lore.kernel.org/r/20240109230250.424295-9-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Apply the pre-intercepts RDPMC validity check only to AMD, and rename all
relevant functions to make it as clear as possible that the check is not a
standard PMC index check. On Intel, the basic rule is that only invalid
opcodes and privilege/permission/mode checks have priority over VM-Exit,
i.e. RDPMC with an invalid index should VM-Exit, not #GP. While the SDM
doesn't explicitly call out RDPMC, it _does_ explicitly use RDMSR of a
non-existent MSR as an example where VM-Exit has priority over #GP, and
RDPMC is effectively just a variation of RDMSR.
Manually testing on various Intel CPUs confirms this behavior, and the
inverted priority was introduced for SVM compatibility, i.e. was not an
intentional change for Intel PMUs. On AMD, *all* exceptions on RDPMC have
priority over VM-Exit.
Check for a NULL kvm_pmu_ops.check_rdpmc_early instead of using a RET0
static call so as to provide a convenient location to document the
difference between Intel and AMD, and to again try to make it as obvious
as possible that the early check is a one-off thing, not a generic "is
this PMC valid?" helper.
Fixes: 8061252ee0 ("KVM: SVM: Add intercept checks for remaining twobyte instructions")
Cc: Jim Mattson <jmattson@google.com>
Tested-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Link: https://lore.kernel.org/r/20240109230250.424295-8-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Stop stripping bits 31:30 prior to validating/consuming the RDPMC index on
AMD. Per the APM's documentation of RDPMC, *values* greater than 27 are
reserved. The behavior of upper bits being flags is firmly Intel-only.
Fixes: ca724305a2 ("KVM: x86/vPMU: Implement AMD vPMU code for KVM")
Tested-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Link: https://lore.kernel.org/r/20240109230250.424295-7-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Get the event selectors used to effectively request fixed counters for
perf events from perf itself instead of hardcoding them in KVM and hoping
that they match the underlying hardware. While fixed counters 0 and 1 use
architectural events, as of ffbe4ab0be ("perf/x86/intel: Extend the
ref-cycles event to GP counters") fixed counter 2 (reference TSC cycles)
may use a software-defined pseudo-encoding or a real hardware-defined
encoding.
Reported-by: Kan Liang <kan.liang@linux.intel.com>
Closes: https://lkml.kernel.org/r/4281eee7-6423-4ec8-bb18-c6aeee1faf2c%40linux.intel.com
Reviewed-by: Kan Liang <kan.liang@linux.intel.com>
Tested-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Link: https://lore.kernel.org/r/20240109230250.424295-6-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Set the eventsel for all fixed counters during PMU initialization; the
eventsel is hardcoded and consumed if and only if the counter is supported,
i.e. there is no reason to redo the setup every time the PMU is refreshed.
Configuring all KVM-supported fixed counters also eliminates a potential
pitfall if/when KVM supports discontiguous fixed counters, in which case
configuring only nr_arch_fixed_counters will be insufficient (ignoring the
fact that KVM will need many other changes to support discontiguous fixed
counters).
Tested-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Link: https://lore.kernel.org/r/20240109230250.424295-5-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Drop KVM's enumeration of Intel's architectural event encodings, and
instead open code the three encodings (of which only two are real) that
KVM uses to emulate fixed counters. Now that KVM doesn't incorrectly
enforce the availability of architectural encodings, there is no reason
for KVM to ever care about the encodings themselves, at least not in the
current format of an array indexed by the encoding's position in CPUID.
Opportunistically add a comment to explain why KVM cares about eventsel
values for fixed counters.
Suggested-by: Jim Mattson <jmattson@google.com>
Tested-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Link: https://lore.kernel.org/r/20240109230250.424295-4-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Remove KVM's bogus restriction that the guest can't program an event whose
encoding matches an unsupported architectural event. The enumeration of
an architectural event only says that if a CPU supports an architectural
event, then the event can be programmed using the architectural encoding.
The enumeration does NOT say anything about the encoding when the CPU
doesn't report support for the architectural event.
Preventing the guest from counting events whose encoding happens to match
an architectural event breaks existing functionality whenever Intel adds
an architectural encoding that was *ever* used for a CPU that doesn't
enumerate support for the architectural event, even if the encoding is for
the exact same event!
E.g. the architectural encoding for Top-Down Slots is 0x01a4. On Broadwell
CPUs, which do not support the Top-Down Slots architectural event, 0x01a4
is a valid, model-specific event. Denying guest usage of 0x01a4 if/when
KVM adds support for Top-Down slots would break any Broadwell-based guest.
Reported-by: Kan Liang <kan.liang@linux.intel.com>
Closes: https://lore.kernel.org/all/2004baa6-b494-462c-a11f-8104ea152c6a@linux.intel.com
Fixes: a21864486f ("KVM: x86/pmu: Fix available_event_types check for REF_CPU_CYCLES event")
Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Reviewed-by: Kan Liang <kan.liang@linux.intel.com>
Tested-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Link: https://lore.kernel.org/r/20240109230250.424295-3-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Treat fixed counters as available when they are supported, i.e. don't
silently ignore an enabled fixed counter just because guest CPUID says the
associated general purpose architectural event is unavailable.
KVM originally treated fixed counters as always available, but that got
changed as part of a fix to avoid confusing REF_CPU_CYCLES, which does NOT
map to an architectural event, with the actual architectural event
associated with bit 7, TOPDOWN_SLOTS.
The commit justified the change with:
If the event is marked as unavailable in the Intel guest CPUID
0AH.EBX leaf, we need to avoid any perf_event creation, whether
it's a gp or fixed counter.
but that justification doesn't mesh with reality. The Intel SDM uses
"architectural events" to refer to both general purpose events (the ones
with the reverse polarity mask in CPUID.0xA.EBX) and the events for fixed
counters, e.g. the SDM makes statements like:
Each of the fixed-function PMC can count only one architectural
performance event.
but the fact that fixed counter 2 (TSC reference cycles) doesn't have an
associated general purpose architectural event makes trying to apply the
mask from CPUID.0xA.EBX impossible.
Furthermore, the lack of enumeration for an architectural event in CPUID
only means the CPU doesn't officially support the architectural encoding,
i.e. it doesn't mean using the architectural encoding _won't_ work, it
simply means there are no guarantees that it will work as expected. E.g.
if KVM is running in a VM that advertises a fixed counter but not the
corresponding architectural event encoding, and perf decides to use a
general purpose counter instead of a fixed counter, odds are very good
that the underlying hardware actually does support the architectural
encoding, and that programming the encoding will count the right thing.
In other words, asking perf to count the event will probably work, whereas
intentionally doing nothing is obviously guaranteed to fail.
Note, at the time of the change, KVM didn't enforce hardware support, i.e.
didn't prevent userspace from enumerating support in guest CPUID.0xA.EBX
for architectural events that aren't supported in hardware. I.e. silently
dropping the fixed counter didn't somehow protect against counting the
wrong event, it just enforced guest CPUID. And practically speaking, this
issue is almost certainly limited to running KVM on a funky virtual CPU
model. No known real hardware has an asymmetric PMU where a fixed counter
is supported but the associated architectural event is not.
Fixes: a21864486f ("KVM: x86/pmu: Fix available_event_types check for REF_CPU_CYCLES event")
Tested-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Link: https://lore.kernel.org/r/20240109230250.424295-2-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Merge tag 'cxl-fixes-6.8-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/cxl/cxl
Pull cxl fixes from Dan Williams:
"A build regression fix, a device compatibility fix, and an original
bug preventing creation of large (16 device) interleave sets:
- Fix unit test build regression fallout from global
"missing-prototypes" change
- Fix compatibility with devices that do not support interrupts
- Fix overflow when calculating the capacity of large interleave sets"
* tag 'cxl-fixes-6.8-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/cxl/cxl:
cxl/region: Fix overflow issue in alloc_hpa()
cxl/pci: Skip irq features if MSI/MSI-X are not supported
tools/testing/nvdimm: Disable "missing prototypes / declarations" warnings
tools/testing/cxl: Disable "missing prototypes / declarations" warnings
Merge tag 'locking_urgent_for_v6.8_rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull locking fix from Borislav Petkov:
- Prevent an inconsistent futex operation leading to stale state
exposure
* tag 'locking_urgent_for_v6.8_rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
futex: Prevent the reuse of stale pi_state
Merge tag 'irq_urgent_for_v6.8_rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull irq fix from Borislav Petkov:
- Initialize the resend node of each IRQ descriptor, not only the first
one
* tag 'irq_urgent_for_v6.8_rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
genirq: Initialize resend_node hlist for all interrupt descriptors
Merge tag 'timers_urgent_for_v6.8_rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull timer fixes from Borislav Petkov:
- Preserve the number of idle calls and sleep entries across CPU
hotplug events in order to be able to compute correct averages
- Limit the duration of the clocksource watchdog checking interval as
too long intervals lead to wrongly marking the TSC as unstable
* tag 'timers_urgent_for_v6.8_rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
tick/sched: Preserve number of idle sleeps across CPU hotplug events
clocksource: Skip watchdog check for large watchdog intervals
Merge tag 'x86_urgent_for_v6.8_rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 fixes from Borislav Petkov:
- Make sure 32-bit syscall registers are properly sign-extended
- Add detection for AMD's Zen5 generation CPUs and Intel's Clearwater
Forest CPU model number
- Make a stub function export non-GPL because it is part of the
paravirt alternatives and that can be used by non-GPL code
* tag 'x86_urgent_for_v6.8_rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86/CPU/AMD: Add more models to X86_FEATURE_ZEN5
x86/entry/ia32: Ensure s32 is sign extended to s64
x86/cpu: Add model number for Intel Clearwater Forest processor
x86/CPU/AMD: Add X86_FEATURE_ZEN5
x86/paravirt: Make BUG_func() usable by non-GPL modules
Merge tag 'fixes-2024-01-28' of git://git.kernel.org/pub/scm/linux/kernel/git/rppt/memblock
Pull memblock fix from Mike Rapoport:
"Fix crash when reserved memory is not added to memory.
When CONFIG_DEFERRED_STRUCT_PAGE_INIT is enabled, the initialization
of reserved pages may cause access of NODE_DATA() with invalid nid and
crash.
Add a fall back to early_pfn_to_nid() in memmap_init_reserved_pages()
to ensure a valid node id is always passed to init_reserved_page()"
* tag 'fixes-2024-01-28' of git://git.kernel.org/pub/scm/linux/kernel/git/rppt/memblock:
memblock: fix crash when reserved memory is not added to memory
Merge tag 'platform-drivers-x86-v6.8-2' of git://git.kernel.org/pub/scm/linux/kernel/git/pdx86/platform-drivers-x86
Pull x86 platform driver fixes from Hans de Goede:
- WMI bus driver fixes
- Second attempt (previously reverted) at P2SB PCI rescan deadlock fix
- AMD PMF driver improvements
- MAINTAINERS updates
- Misc other small fixes and hw-id additions
* tag 'platform-drivers-x86-v6.8-2' of git://git.kernel.org/pub/scm/linux/kernel/git/pdx86/platform-drivers-x86:
platform/x86: touchscreen_dmi: Add info for the TECLAST X16 Plus tablet
platform/x86/intel/ifs: Call release_firmware() when handling errors.
platform/x86/amd/pmf: Fix memory leak in amd_pmf_get_pb_data()
platform/x86/amd/pmf: Get ambient light information from AMD SFH driver
platform/x86/amd/pmf: Get Human presence information from AMD SFH driver
platform/mellanox: mlxbf-pmc: Fix offset calculation for crspace events
platform/mellanox: mlxbf-tmfifo: Drop Tx network packet when Tx TmFIFO is full
MAINTAINERS: remove defunct acpi4asus project info from asus notebooks section
MAINTAINERS: add Luke Jones as maintainer for asus notebooks
MAINTAINERS: Remove Perry Yuan as DELL WMI HARDWARE PRIVACY SUPPORT maintainer
platform/x86: silicom-platform: Add missing "Description:" for power_cycle sysfs attr
platform/x86: intel-wmi-sbl-fw-update: Fix function name in error message
platform/x86: p2sb: Use pci_resource_n() in p2sb_read_bar0()
platform/x86: p2sb: Allow p2sb_bar() calls during PCI device probe
platform/x86: intel-uncore-freq: Fix types in sysfs callbacks
platform/x86: wmi: Fix wmi_dev_probe()
platform/x86: wmi: Fix notify callback locking
platform/x86: wmi: Decouple legacy WMI notify handlers from wmi_block_list
platform/x86: wmi: Return immediately if a suitable WMI event is found
platform/x86: wmi: Fix error handling in legacy WMI notify handler functions
Merge tag 'loongarch-fixes-6.8-1' of git://git.kernel.org/pub/scm/linux/kernel/git/chenhuacai/linux-loongson
Pull LoongArch fixes from Huacai Chen:
"Fix boot failure on machines with more than 8 nodes, and fix two build
errors about KVM"
* tag 'loongarch-fixes-6.8-1' of git://git.kernel.org/pub/scm/linux/kernel/git/chenhuacai/linux-loongson:
LoongArch: KVM: Add returns to SIMD stubs
LoongArch: KVM: Fix build due to API changes
LoongArch/smp: Call rcutree_report_cpu_starting() at tlb_init()
Merge tag 'xfs-6.8-fixes-1' of git://git.kernel.org/pub/scm/fs/xfs/xfs-linux
Pull xfs fix from Chandan Babu:
- Fix read only mounts when using fsopen mount API
* tag 'xfs-6.8-fixes-1' of git://git.kernel.org/pub/scm/fs/xfs/xfs-linux:
xfs: read only mounts with fsopen mount API are busted
Merge tag 'bcachefs-2024-01-26' of https://evilpiepirate.org/git/bcachefs
Pull bcachefs fixes from Kent Overstreet:
- fix for REQ_OP_FLUSH usage; this fixes filesystems going read only
with -EOPNOTSUPP from the block layer.
(this really should have gone in with the block layer patch causing
the -EOPNOTSUPP, or should have gone in before).
- fix an allocation in non-sleepable context
- fix one source of srcu lock latency, on devices with terrible discard
latency
- fix a reattach_inode() issue in fsck
* tag 'bcachefs-2024-01-26' of https://evilpiepirate.org/git/bcachefs:
bcachefs: __lookup_dirent() works in snapshot, not subvol
bcachefs: discard path uses unlock_long()
bcachefs: fix incorrect usage of REQ_OP_FLUSH
bcachefs: Add gfp flags param to bch2_prt_task_backtrace()
Merge tag '6.8-rc2-smb3-server-fixes' of git://git.samba.org/ksmbd
Pull smb server fixes from Steve French:
- Fix netlink OOB
- Minor kernel doc fix
* tag '6.8-rc2-smb3-server-fixes' of git://git.samba.org/ksmbd:
ksmbd: fix global oob in ksmbd_nl_policy
smb: Fix some kernel-doc comments