Pull more RISC-V updates from Palmer Dabbelt:
- Support for hibernation
- The .rela.dyn section has been moved to the init area
- A fix for the SBI probing to allow for implementation-defined
behavior
- Various other fixes and cleanups throughout the tree
* tag 'riscv-for-linus-6.4-mw2' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux:
RISC-V: include cpufeature.h in cpufeature.c
riscv: Move .rela.dyn to the init sections
dt-bindings: riscv: explicitly mention assumption of Zicsr & Zifencei support
riscv: compat_syscall_table: Fixup compile warning
RISC-V: fixup in-flight collision with ARCH_WANT_OPTIMIZE_VMEMMAP rename
RISC-V: fix sifive and thead section mismatches in errata
RISC-V: Align SBI probe implementation with spec
riscv: mm: remove redundant parameter of create_fdt_early_page_table
riscv: Adjust dependencies of HAVE_DYNAMIC_FTRACE selection
RISC-V: Add arch functions to support hibernation/suspend-to-disk
RISC-V: mm: Enable huge page support to kernel_page_present() function
RISC-V: Factor out common code of __cpu_resume_enter()
RISC-V: Change suspend_save_csrs and suspend_restore_csrs to public function
Automation complains:
warning: symbol '__pcpu_scope_misaligned_access_speed' was not declared. Should it be static?
cpufeature.c doesn't actually include the header of the same name, as it
had not previously used anything from it.
The per-cpu variable is declared there, so include it to silence the
complaints.
Fixes: 62a31d6e38 ("RISC-V: hwprobe: Support probing of misaligned access performance")
Signed-off-by: Conor Dooley <conor.dooley@microchip.com>
Reviewed-by: Evan Green <evan@rivosinc.com>
Link: https://lore.kernel.org/r/20230420-wound-gizzard-2b2b589d9bea@spud
Cc: stable@vger.kernel.org
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
Evan Green <evan@rivosinc.com> says:
There's been a bunch of off-list discussions about this, including at
Plumbers. The original plan was to do something involving providing an
ISA string to userspace, but ISA strings just aren't sufficient for a
stable ABI any more: in order to parse an ISA string users need the
version of the specifications that the string is written to, the version
of each extension (sometimes at a finer granularity than the RISC-V
releases/versions encode), and the expected use case for the ISA string
(i.e., is it a U-mode or M-mode string). That's a lot of complexity to
try and keep ABI compatible and it's probably going to continue to grow,
as even if there's no more complexity in the specifications we'll have
to deal with the various ISA string parsing oddities that end up all
over userspace.
Instead this patch set takes a very different approach and provides a set
of key/value pairs that encode various bits about the system. The big
advantage here is that we can clearly define what these mean so we can
ensure ABI stability, but it also allows us to encode information that's
unlikely to ever appear in an ISA string (see the misaligned access
performance, for example). The resulting interface looks a lot like
what arm64 and x86 do, and will hopefully fit well into something like
ACPI in the future.
The actual user interface is a syscall, with a vDSO function in front of
it. The vDSO function can answer some queries without a syscall at all,
and falls back to the syscall for cases it doesn't have answers to.
Currently we prepopulate it with an array of answers for all keys and
a CPU set of "all CPUs". This can be adjusted as necessary to provide
fast answers to the most common queries.
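For illustration, a minimal sketch of how userspace could query one of
these key/value pairs through the raw syscall (the vDSO function takes the
same arguments); it assumes the UAPI names from this series
(<asm/hwprobe.h>, __NR_riscv_hwprobe, RISCV_HWPROBE_KEY_CPUPERF_0 and the
RISCV_HWPROBE_MISALIGNED_* values):

/* Hedged sketch: query misaligned-access performance for all CPUs. */
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <asm/hwprobe.h>

int main(void)
{
        struct riscv_hwprobe pair = { .key = RISCV_HWPROBE_KEY_CPUPERF_0 };

        /* cpusetsize == 0 and cpus == NULL ask about all online CPUs. */
        if (syscall(__NR_riscv_hwprobe, &pair, 1, 0, NULL, 0) != 0)
                return 1;

        if ((pair.value & RISCV_HWPROBE_MISALIGNED_MASK) ==
            RISCV_HWPROBE_MISALIGNED_FAST)
                printf("misaligned accesses are fast\n");
        else
                printf("misaligned accesses are slow, emulated or unknown\n");
        return 0;
}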
An example series in glibc exposing this syscall and using it in an
ifunc selector for memcpy can be found at [1].
I was asked about the performance delta between this and something like
sysfs. I created a small test program and ran it on a Nezha D1
Allwinner board. Doing each operation 100000 times and dividing, these
operations take the following amount of time:
- open()+read()+close() of /sys/kernel/cpu_byteorder: 3.8us
- access("/sys/kernel/cpu_byteorder", R_OK): 1.3us
- riscv_hwprobe() vDSO and syscall: 0.0094us
- riscv_hwprobe() vDSO with no syscall: 0.0091us
These numbers get farther apart if we query multiple keys, as sysfs will
scale linearly with the number of keys, where the dedicated syscall
stays the same. To frame these numbers, I also did a tight
fork/exec/wait loop, which I measured as 4.8ms. So doing 4
open/read/close operations is a delta of about 0.3%, while a single vDSO
call is a delta of essentially zero.
[1] https://patchwork.ozlabs.org/project/glibc/list/?series=343050
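The measurement methodology is simply to run each probing method in a
tight loop and divide by the iteration count; a rough sketch, using the
same sysfs path and iteration count quoted above:

#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
        const long iters = 100000;
        struct timespec start, end;

        clock_gettime(CLOCK_MONOTONIC, &start);
        for (long i = 0; i < iters; i++)
                access("/sys/kernel/cpu_byteorder", R_OK);
        clock_gettime(CLOCK_MONOTONIC, &end);

        double ns = (end.tv_sec - start.tv_sec) * 1e9 +
                    (end.tv_nsec - start.tv_nsec);
        printf("access(): %.4f us per call\n", ns / iters / 1000.0);
        return 0;
}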
* b4-shazam-merge:
RISC-V: Add hwprobe vDSO function and data
selftests: Test the new RISC-V hwprobe interface
RISC-V: hwprobe: Support probing of misaligned access performance
RISC-V: hwprobe: Add support for RISCV_HWPROBE_BASE_BEHAVIOR_IMA
RISC-V: Add a syscall for HW probing
RISC-V: Move struct riscv_cpuinfo to new header
Link: https://lore.kernel.org/r/20230407231103.2622178-1-evan@rivosinc.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
This allows userspace to select various routines to use based on the
performance of misaligned access on the target hardware.
Rather than adding DT bindings, this change taps into the alternatives
mechanism used to probe CPU errata. Add a new function pointer alongside
the vendor-specific errata_patch_func() that probes for desirable errata
(otherwise known as "features"). Unlike the errata_patch_func(), this
function is called on each CPU as it comes up, so it can save
feature information per-CPU.
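Roughly, the vendor hooks end up shaped like the sketch below; the struct
and field names here are illustrative placeholders rather than the exact
in-kernel ones:

struct cpu_vendor_hooks {
        /* Patches alternative sites based on the errata present. */
        void (*errata_patch_func)(void *alt_start, void *alt_end,
                                  unsigned long archid, unsigned long impid,
                                  unsigned int stage);
        /* Runs on each CPU as it comes up and may record per-CPU feature
         * information, such as misaligned access performance. */
        void (*feature_probe_func)(unsigned int cpu,
                                   unsigned long archid, unsigned long impid);
};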
The T-head C906 has fast unaligned access, both as defined by GCC [1]
and as measured by a basic benchmark, which determined that byte copies
are >50% slower than a misaligned word copy of the same data size (source
for this test at [2]):
bytecopy size f000 count 50000 offset 0 took 31664899 us
wordcopy size f000 count 50000 offset 0 took 5180919 us
wordcopy size f000 count 50000 offset 1 took 13416949 us
[1] https://github.com/gcc-mirror/gcc/blob/master/gcc/config/riscv/riscv.cc#L353
[2] https://pastebin.com/EPXvDHSW
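For reference, a sketch of the kind of byte-copy vs. misaligned word-copy
comparison described above (the author's actual test program is at [2]);
the word copy assumes the compiler is allowed to emit word-sized accesses
regardless of alignment (e.g. -mno-strict-align):

#include <stddef.h>
#include <stdint.h>
#include <string.h>

static void bytecopy(uint8_t *dst, const uint8_t *src, size_t n)
{
        while (n--)
                *dst++ = *src++;
}

static void wordcopy(uint8_t *dst, const uint8_t *src, size_t n)
{
        /* Word-at-a-time copy that ignores src/dst alignment; only fast on
         * hardware with efficient misaligned access. */
        while (n >= sizeof(unsigned long)) {
                unsigned long w;

                memcpy(&w, src, sizeof(w));     /* possibly misaligned load */
                memcpy(dst, &w, sizeof(w));     /* possibly misaligned store */
                src += sizeof(w);
                dst += sizeof(w);
                n -= sizeof(w);
        }
        while (n--)
                *dst++ = *src++;
}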
Co-developed-by: Palmer Dabbelt <palmer@rivosinc.com>
Signed-off-by: Evan Green <evan@rivosinc.com>
Reviewed-by: Heiko Stuebner <heiko.stuebner@vrull.eu>
Tested-by: Heiko Stuebner <heiko.stuebner@vrull.eu>
Reviewed-by: Conor Dooley <conor.dooley@microchip.com>
Reviewed-by: Paul Walmsley <paul.walmsley@sifive.com>
Link: https://lore.kernel.org/r/20230407231103.2622178-5-evan@rivosinc.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
Andrew Jones <ajones@ventanamicro.com> says:
When the Zicboz extension is available we can more rapidly zero naturally
aligned, Zicboz-block-sized chunks of memory. As pages are always page
aligned and larger than any Zicboz block size will be, clear_page()
appears to be a good candidate for the extension. While cycle
count and energy consumption should also be considered, we can be pretty
certain that implementing clear_page() with the Zicboz extension is a win
by comparing the new dynamic instruction count with its current count[1].
Doing so we see that the new count is just over a quarter of the old count
(see patch6's commit message for more details).
For those of you who reviewed v1[2], you may be looking for the memset()
patches. As pointed out in v1, and a couple follow-up emails, it's not
clear that patching memset() is a win yet. When I get a chance to test
on real hardware with a comprehensive benchmark collection then I can
post the memset() patches separately (assuming the benchmarks show it's
worthwhile).
* b4-shazam-merge:
RISC-V: KVM: Expose Zicboz to the guest
RISC-V: KVM: Provide UAPI for Zicboz block size
RISC-V: Use Zicboz in clear_page when available
RISC-V: cpufeatures: Put the upper 16 bits of patch ID to work
RISC-V: Add Zicboz detection and block size parsing
dt-bindings: riscv: Document cboz-block-size
RISC-V: Factor out body of riscv_init_cbom_blocksize loop
RISC-V: alternatives: Support patching multiple insns in assembly
Link: https://lore.kernel.org/r/20230224162631.405473-1-ajones@ventanamicro.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
Using memset() to zero a 4K page takes 563 total instructions, where
20 are branches. clear_page(), with Zicboz and a 64 byte block size,
takes 169 total instructions, where 4 are branches and 33 are nops.
Even though the block size is a variable, thanks to alternatives, we
can still implement a Duff device without having to do any preliminary
calculations. This is achieved by using the alternatives' cpufeature
value (the upper 16 bits of patch_id). The value used is the maximum
Zicboz block size order accepted at the patch site. This enables us
to stop patching / unrolling when 4K bytes have been zeroed (we would
loop and continue after 4K if the page size were larger).
For 4K pages, unrolling 16 times allows block sizes of 64 and 128 to
only loop a few times and larger block sizes to not loop at all. Since
cbo.zero doesn't take an offset, we also need an 'add' after each
instruction, making the loop body 112 to 160 bytes. Hopefully this
is small enough to not cause icache misses.
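As a sketch of the core idea only (no alternatives patching, no Duff
device), zeroing a page one Zicboz block at a time looks roughly like the
following; it assumes a fixed 64-byte block size and a toolchain that
accepts the cbo.zero mnemonic, whereas the in-kernel version unrolls 16
cbo.zero/add pairs and patches them based on the probed block size:

#define PAGE_SIZE       4096UL
#define ZICBOZ_BLOCK    64UL    /* assumed block size */

static inline void zero_page_zicboz(void *page)
{
        unsigned long addr = (unsigned long)page;
        unsigned long end = addr + PAGE_SIZE;

        do {
                /* Zero one Zicboz block starting at addr. */
                asm volatile ("cbo.zero (%0)" : : "r" (addr) : "memory");
                addr += ZICBOZ_BLOCK;   /* cbo.zero takes no offset */
        } while (addr < end);
}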
Signed-off-by: Andrew Jones <ajones@ventanamicro.com>
Acked-by: Conor Dooley <conor.dooley@microchip.com>
Link: https://lore.kernel.org/r/20230224162631.405473-7-ajones@ventanamicro.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
cpufeature IDs are consecutive integers starting at 26, so a 32-bit
patch ID allows an aircraft carrier load of feature IDs. Repurposing
the upper 16 bits still leaves a boat load of feature IDs and gains
16 bits which may be used to control patching on a per patch-site
basis.
This will be initially used in Zicboz's application to clear_page(),
as Zicboz's block size must also be considered. In that case, the
upper 16-bit value's role will be to convey the maximum block size
which the Zicboz clear_page() implementation supports.
cpufeature patch sites which need to check for the existence or
absence of other cpufeatures may also be able to make use of this.
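In other words, a patch site's patch_id is split roughly as sketched
below; the macro names are placeholders for illustration:

#include <stdint.h>

/* Low 16 bits: which cpufeature this patch site depends on. */
#define PATCH_ID_CPUFEATURE_ID(p)       ((uint16_t)((p) & 0xffff))
/* High 16 bits: a feature-specific value, e.g. the maximum Zicboz
 * block-size order the clear_page() patch site supports. */
#define PATCH_ID_CPUFEATURE_VALUE(p)    ((uint16_t)((uint32_t)(p) >> 16))
#define MAKE_PATCH_ID(id, value) \
        (((uint32_t)(uint16_t)(value) << 16) | (uint16_t)(id))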
Signed-off-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Conor Dooley <conor.dooley@microchip.com>
Link: https://lore.kernel.org/r/20230224162631.405473-6-ajones@ventanamicro.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
Andrew Jones <ajones@ventanamicro.com> says:
This series has no intended functional change. These cleanups were
found while renaming errata_id to patch_id in order to better
convey that its purpose is larger than errata (it's also for
cpufeatures).
* b4-shazam-merge:
riscv: cpufeature: Drop errata_list.h and other unused includes
riscv: lib: Include hwcap.h directly
riscv: alternatives: Rename errata_id to patch_id
riscv: alternatives: Remove unnecessary define and unused struct
riscv: Rename Kconfig.erratas to Kconfig.errata
riscv: Clarify RISCV_ALTERNATIVE help text
Link: https://lore.kernel.org/r/20230224154601.88163-1-ajones@ventanamicro.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
Qinglin Pan <panqinglin00@gmail.com> says:
Svnapot is a RISC-V extension for marking contiguous 4K pages as a non-4K
page. This patch set is for using Svnapot in hugetlb fs and huge vmap.
This patchset adds a Kconfig item for using Svnapot in
"Platform type"->"SVNAPOT extension support". Its default value is on,
and users can turn it off if they do not want the kernel to detect and
leverage Svnapot hardware support.
Tested on:
- qemu rv64 with "Svnapot support" off and svnapot=true.
- qemu rv64 with "Svnapot support" on and svnapot=true.
- qemu rv64 with "Svnapot support" off and svnapot=false.
- qemu rv64 with "Svnapot support" on and svnapot=false.
* b4-shazam-merge:
riscv: mm: support Svnapot in huge vmap
riscv: mm: support Svnapot in hugetlb page
riscv: mm: modify pte format for Svnapot
Link: https://lore.kernel.org/r/20230209131647.17245-1-panqinglin00@gmail.com
[Palmer: fix up the feature ordering in the merge]
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
Add one alternative to enable/disable Svnapot support: the static key is
enabled when "svnapot" is present in the "riscv,isa" field of the fdt and
the SVNAPOT compile option is set. It influences the behavior of
has_svnapot(); all code depending on Svnapot should first make sure that
has_svnapot() returns true.
Modify the PTE definition for Svnapot, and create some functions in
pgtable.h to mark a PTE as napot and to check whether it is a Svnapot PTE.
For now only the 64KB napot size is supported by the spec, so some macros
have only a 64KB version.
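For illustration, a sketch of the 64KB NAPOT encoding from the Svnapot
spec (N is PTE bit 63, and ppn[3:0] = 0b1000 marks a 64KB, i.e. 16-page,
region); the names are illustrative, not the exact kernel macros, and the
helper assumes the pfn is already 64KB aligned:

#include <stdbool.h>

typedef unsigned long pte_t;    /* standalone stand-in for the kernel type */

#define _PAGE_NAPOT             (1UL << 63)     /* the Svnapot "N" bit */
#define _PAGE_PFN_SHIFT         10              /* PPN starts at PTE bit 10 */
#define NAPOT_CONT64KB_PATTERN  0x8UL           /* ppn[3:0] = 0b1000 => 64KB */

static inline bool pte_is_napot(pte_t pte)
{
        return pte & _PAGE_NAPOT;
}

static inline pte_t pte_mknapot_64kb(pte_t pte)
{
        return pte | _PAGE_NAPOT | (NAPOT_CONT64KB_PATTERN << _PAGE_PFN_SHIFT);
}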
Signed-off-by: Qinglin Pan <panqinglin00@gmail.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Link: https://lore.kernel.org/r/20230209131647.17245-2-panqinglin00@gmail.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
Jisheng Zhang <jszhang@kernel.org> says:
Generally, riscv ISA extensions are fixed for any specific hardware
platform, so a hart's features won't change after booting. This
characteristic makes it straightforward to use a static branch to check
whether a specific ISA extension is supported, in order to optimize
performance. However, some ISA extensions such as SVPBMT and ZICBOM are
handled via alternative sequences.
Basically, for ease of maintenance, we prefer to use static branches
in C code, but recently, Samuel found that the static branch usage in
cpu_relax() breaks building with CONFIG_CC_OPTIMIZE_FOR_SIZE[1]. As
Samuel pointed out, "Having a static branch in cpu_relax() is
problematic because that function is widely inlined, including in some
quite complex functions like in the VDSO. A quick measurement shows
this static branch is responsible by itself for around 40% of the jump
table."
Samuel's findings point out one of the downsides of using static branches
in C code to handle ISA extensions detected at boot time: the static
branches' metadata in the __jump_table section, which is not discarded
after ISA extensions are finalized, wastes some space.
I want to try to solve the issue for all possible dynamic handling of
ISA extensions at boot time. Inspired by Mark[2], this patch introduces
riscv_has_extension_*() helpers, which work like static branches but
are patched using alternatives, thus the metadata can be freed after
patching.
[1]https://lore.kernel.org/linux-riscv/20220922060958.44203-1-samuel@sholland.org/
[2]https://lore.kernel.org/linux-arm-kernel/20220912162210.3626215-8-mark.rutland@arm.com/
[3]https://lore.kernel.org/linux-riscv/20221130225614.1594256-1-heiko@sntech.de/
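Usage-wise the helpers read just like a static branch; a kernel-context
sketch modeled on the cpu_relax() case above, assuming <asm/hwcap.h> and
the helpers introduced in this series:

static inline void cpu_relax_sketch(void)
{
        if (riscv_has_extension_likely(RISCV_ISA_EXT_ZIHINTPAUSE))
                /* Encoding of the Zihintpause PAUSE instruction. */
                __asm__ __volatile__ (".4byte 0x0100000f");
        else
                barrier();
}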
* b4-shazam-merge:
riscv: remove riscv_isa_ext_keys[] array and related usage
riscv: KVM: Switch has_svinval() to riscv_has_extension_unlikely()
riscv: cpu_relax: switch to riscv_has_extension_likely()
riscv: alternative: patch alternatives in the vDSO
riscv: switch to relative alternative entries
riscv: module: Add ADD16 and SUB16 rela types
riscv: module: move find_section to module.h
riscv: fpu: switch has_fpu() to riscv_has_extension_likely()
riscv: introduce riscv_has_extension_[un]likely()
riscv: cpufeature: extend riscv_cpufeature_patch_func to all ISA extensions
riscv: hwcap: make ISA extension ids can be used in asm
riscv: cpufeature: detect RISCV_ALTERNATIVES_EARLY_BOOT earlier
riscv: move riscv_noncoherent_supported() out of ZICBOM probe
Link: https://lore.kernel.org/r/20230128172856.3814-1-jszhang@kernel.org
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
Instead of using absolute addresses for both the old instructions and
the alternative instructions, use offsets relative to the alt_entry
values. This not only cuts the size of the alternative entry, but
also meets the prerequisite for patching alternatives in the vDSO,
since absolute alternative entries are subject to dynamic relocation,
which is incompatible with building the vDSO.
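Schematically, the change looks like the sketch below; the field names
are placeholders for illustration, not the exact in-kernel ones:

#include <stdint.h>

/* Before: absolute addresses, which need dynamic relocation at load time. */
struct alt_entry_absolute {
        void            *old_ptr;       /* address of the original instructions */
        void            *alt_ptr;       /* address of the replacement */
        uint16_t        vendor_id;
        uint16_t        alt_len;        /* replacement length in bytes */
        uint32_t        patch_id;
};

/* After: offsets relative to the entry itself, so no relocations are
 * needed (which is what allows patching the vDSO) and the entry shrinks
 * on 64-bit. */
struct alt_entry_relative {
        int32_t         old_offset;     /* old instructions, relative to entry */
        int32_t         alt_offset;     /* replacement, relative to entry */
        uint16_t        vendor_id;
        uint16_t        alt_len;
        uint32_t        patch_id;
};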
Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Conor Dooley <conor.dooley@microchip.com>
Link: https://lore.kernel.org/r/20230128172856.3814-10-jszhang@kernel.org
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
This cleans up the ISA string handling to more closely match a version
of the ISA spec. This is visible in /proc/cpuinfo and the ordering
changes may break something in userspace, but these orderings have
changed before without issues so with any luck that's still the case.
This also adds documentation so userspace has a better idea of what is
intended when it comes to compatibility for /proc/cpuinfo, which should
help everyone as this will likely keep changing.
* b4-shazam-merge:
Documentation: riscv: add a section about ISA string ordering in /proc/cpuinfo
RISC-V: resort all extensions in consistent orders
RISC-V: clarify ISA string ordering rules in cpu.c
Link: https://lore.kernel.org/r/20221205144525.2148448-1-conor.dooley@microchip.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
When a DT puts zicbom in the isa string, but does not provide a block
size, ALT_CMO_OP() will attempt to do cache operations on address
zero since the start address will be ANDed with zero. We can't simply
BUG() in riscv_init_cbom_blocksize() when we fail to find a block
size because the failure will happen before logging works, leaving
users to scratch their heads as to why the boot hung. Instead, ensure
Zicbom is disabled and output an error which will hopefully alert
people that the DT needs to be fixed. While at it, add a check that
the block size is a power-of-2 too.
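A kernel-context sketch of the guard described above (the function and
variable names are illustrative, not the exact in-kernel ones):

/* 0 means Zicbom stays disabled even if the isa string advertises it. */
static unsigned long cbom_block_size;

static void cbom_validate_block_size(unsigned long probed)
{
        if (!probed || (probed & (probed - 1))) {
                /* Better a loud error than ALT_CMO_OP() on address zero. */
                pr_err("Zicbom disabled: missing or non-power-of-2 riscv,cbom-block-size\n");
                return;
        }
        cbom_block_size = probed;
}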
Signed-off-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Conor Dooley <conor.dooley@microchip.com>
Reviewed-by: Heiko Stuebner <heiko@sntech.de>
Link: https://lore.kernel.org/r/20221129143447.49714-4-ajones@ventanamicro.com
[Palmer: base on 5c20a3a9df ("RISC-V: Fix compilation without RISCV_ISA_ZICBOM"]
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
Currently any isa extension found in the isa string is set in the
isa bitmap. An isa extension set in the bitmap indicates that the
extension is present and may be used (a.k.a. is enabled). However,
when an extension cannot be used due to missing dependencies or
errata it should not be added to the bitmap. Introduce a function
where additional checks may be placed in order to determine if an
extension should be enabled or not.
Note, the checks may simply indicate an issue with the DT, but,
since extensions may be used in early boot, it's not always possible
to simply produce an error at the point the issue is determined.
It's best to keep the extension disabled and produce an error.
No functional change intended, as the function is only introduced
and always returns true. A later patch will provide checks for an
isa extension.
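A kernel-context sketch of the hook as introduced here: it always returns
true for now, and per-extension checks get filled in by later patches:

/* Returns whether the extension with this isa-ext id may be enabled,
 * i.e. set in the isa bitmap. */
static bool riscv_isa_extension_check(int ext_id)
{
        switch (ext_id) {
        default:
                /* No extra constraints yet; later patches add checks here,
                 * e.g. validating Zicbom's block size before enabling it. */
                return true;
        }
}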
Signed-off-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Conor Dooley <conor.dooley@microchip.com>
Reviewed-by: Heiko Stuebner <heiko@sntech.de>
Link: https://lore.kernel.org/r/20221129143447.49714-3-ajones@ventanamicro.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
Pull more RISC-V updates from Palmer Dabbelt:
- DT updates for the PolarFire SOC
- a fix to correct the handling of write-only mappings
- m{vendor,arch,imp}id is now in /proc/cpuinfo
- the SiFive L2 cache controller support has been refactored to also
support L3 caches
- misc fixes, cleanups and improvements throughout the tree
* tag 'riscv-for-linus-6.1-mw2' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux: (42 commits)
MAINTAINERS: add RISC-V's patchwork
RISC-V: Make port I/O string accessors actually work
riscv: enable software resend of irqs
RISC-V: Re-enable counter access from userspace
riscv: vdso: fix NULL deference in vdso_join_timens() when vfork
riscv: Add cache information in AUX vector
soc: sifive: ccache: define the macro for the register shifts
soc: sifive: ccache: use pr_fmt() to remove CCACHE: prefixes
soc: sifive: ccache: reduce printing on init
soc: sifive: ccache: determine the cache level from dts
soc: sifive: ccache: Rename SiFive L2 cache to Composable cache.
dt-bindings: sifive-ccache: change Sifive L2 cache to Composable cache
riscv: check for kernel config option in t-head memory types errata
riscv: use BIT() marco for cpufeature probing
riscv: use BIT() macros in t-head errata init
riscv: drop some idefs from CMO initialization
riscv: cleanup svpbmt cpufeature probing
riscv: Pass -mno-relax only on lld < 15.0.0
RISC-V: Avoid dereferening NULL regs in die()
dt-bindings: riscv: add new riscv,isa strings for emulators
...
Heiko Stuebner <heiko@sntech.de> says:
As noted by some people, some parts of the recently added extensions
(svpbmt, zicbom) + t-head errata could use some styling upgrades.
So this series provides these.
changes in v2:
- add patch also converting cpufeature probe to BIT()
- update commit message in patch1 (Conor)
Heiko Stuebner (5):
riscv: cleanup svpbmt cpufeature probing
riscv: drop some idefs from CMO initialization
riscv: use BIT() macros in t-head errata init
riscv: use BIT() marco for cpufeature probing
riscv: check for kernel config option in t-head memory types errata
arch/riscv/errata/thead/errata.c | 14 ++++++-----
arch/riscv/include/asm/cacheflush.h | 2 ++
arch/riscv/kernel/cpufeature.c | 39 ++++++++++++-----------------
3 files changed, 26 insertions(+), 29 deletions(-)
Link: https://lore.kernel.org/r/20220905111027.2463297-1-heiko@sntech.de
* b4-shazam-merge:
riscv: check for kernel config option in t-head memory types errata
riscv: use BIT() marco for cpufeature probing
riscv: use BIT() macros in t-head errata init
riscv: drop some idefs from CMO initialization
riscv: cleanup svpbmt cpufeature probing
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
riscv_isa_ext_keys[] is an array of static keys used in the unified
ISA extension framework. The keys added to this array may be used
anywhere, including in modules. Ensure the keys remain writable by
placing them in the data section.
The need to change riscv_isa_ext_keys[]'s section was found when the
kvm module started failing to load. Commit 8eb060e101 ("arch/riscv:
add Zihintpause support") adds a static branch check for a newly
added isa-ext key to cpu_relax(), which kvm uses.
Fixes: c360cbec35 ("riscv: introduce unified static key mechanism for ISA extensions")
Signed-off-by: Andrew Jones <ajones@ventanamicro.com>
Cc: stable@vger.kernel.org
Reported-by: Ron Economos <re@w6rz.net>
Reported-by: Anup Patel <apatel@ventanamicro.com>
Reported-by: Conor Dooley <conor.dooley@microchip.com>
Tested-by: Atish Patra <atishp@rivosinc.com>
Link: https://lore.kernel.org/r/20220816163058.3004536-1-ajones@ventanamicro.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
This series implements support for the Sstc extension, which was ratified
recently. Before the Sstc extension, an SBI call is necessary to
generate timer interrupts, as only M-mode has access to the timecompare
registers. Thus, there is significant latency to generate timer
interrupts in the kernel. For virtualized environments it's even worse, as
KVM handles the SBI call and uses a software timer to emulate the
timecompare register.
The Sstc extension solves both these problems by defining
stimecmp/vstimecmp at the supervisor (host/guest) level. It allows the
kernel to program a timer and receive interrupts without supervisor
execution environment (M-mode/HS-mode) intervention.
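With Sstc, arming the next timer event on RV64 becomes a single CSR write
(on RV32 the 64-bit compare value is split across STIMECMP/STIMECMPH and
needs careful write ordering); a kernel-context sketch, assuming the
CSR_STIMECMP definition added by this series:

static void sstc_set_next_event_rv64(u64 next_cycles)
{
        /* No SBI call and no M-mode trap on the timer path. */
        csr_write(CSR_STIMECMP, next_cycles);
}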
* palmer/riscv-sstc:
RISC-V: Prefer sstc extension if available
RISC-V: Enable sstc extension parsing from DT
RISC-V: Add SSTC extension CSR details
This series is based on the alternatives changes done in my svpbmt
series and thus also depends on Atish's isa-extension parsing series.
It implements using the cache-management instructions from the Zicbom
extension to handle cache flush etc. actions on platforms needing them.
SoCs using CPU cores from T-Head, like the Allwinner D1, implement a
different set of cache instructions. But while the instructions are
different, they provide the same functionality, so a variant can easily
hook into the existing alternatives mechanism on those platforms.
[Palmer: Some minor fixups, including a RISCV_ISA_ZICBOM dependency on
MMU that's probably not strictly necessary. The Zicbom support will
trip up sparse for users that have new toolchains, I just sent a patch.]
Link: https://lore.kernel.org/all/20220706231536.2041855-1-heiko@sntech.de/
Link: https://lore.kernel.org/linux-sparse/20220811033138.20676-1-palmer@rivosinc.com/T/#u
* palmer/riscv-zicbom:
riscv: implement cache-management errata for T-Head SoCs
riscv: Add support for non-coherent devices using zicbom extension
dt-bindings: riscv: document cbom-block-size
of: also handle dma-noncoherent in of_dma_is_coherent()
The Zicbom ISA extension was ratified in November 2021
and introduces instructions for dcache invalidate, clean
and flush operations.
Implement cache management operations for non-coherent devices
based on them.
Of course not all cores will support this, so implement an
alternative-based mechanism that patches the Zicbom instructions in
over empty placeholder instructions.
As discussed in previous versions, assume the platform
is coherent by default, so that non-coherent devices need
to be marked accordingly by firmware.
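As a rough sketch of the resulting cache-management helper (ignoring the
alternatives patching and the T-Head variant), walking a range in
cbom-block-size steps and issuing cbo.flush per block; it assumes a
toolchain that accepts the Zicbom mnemonics:

static unsigned long cbom_block_size = 64;      /* probed from the DT */

static void zicbom_flush_range(unsigned long start, unsigned long size)
{
        unsigned long addr = start & ~(cbom_block_size - 1);
        unsigned long end = start + size;

        for (; addr < end; addr += cbom_block_size)
                asm volatile ("cbo.flush (%0)" : : "r" (addr) : "memory");
}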
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Heiko Stuebner <heiko@sntech.de>
Reviewed-by: Guo Ren <guoren@kernel.org>
Link: https://lore.kernel.org/r/20220706231536.2041855-4-heiko@sntech.de
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
The hartid can be a 64bit value on RV64 platforms. This series updates
the code so that 64bit hartid can be supported on RV64 platforms.
* 'riscv-64bit_hartid' of git://git.kernel.org/pub/scm/linux/kernel/git/palmer/linux.git:
riscv/efi_stub: Add 64bit boot-hartid support on RV64
riscv: cpu: Add 64bit hartid support on RV64
riscv: smp: Add 64bit hartid support on RV64
riscv: spinwait: Fix hartid variable type
riscv: cpu_ops_sbi: Add 64bit hartid support on RV64
Some additional comments, and notes from autobuilders received after the
series was applied, warranted some changes.
* 'riscv-svpbmt' of ssh://gitolite.kernel.org/pub/scm/linux/kernel/git/palmer/linux:
riscv: remove usage of function-pointers from cpufeatures and t-head errata
riscv: make patch-function pointer more generic in cpu_manufacturer_info struct
riscv: Improve description for RISCV_ISA_SVPBMT Kconfig symbol
riscv: drop cpufeature_apply_feature tracking variable
riscv: fix dependency for t-head errata
Having a list of alternatives to check, with a per-entry function pointer
to a check function, is nice style-wise. But in the case of early
alternatives it can clash with the not-yet-relocated kernel, with the
function pointer in the list pointing to a completely wrong location.
This isn't an issue with one or two list entries, as in that case the
compiler seems to unroll the loop, elide use of the list structure, and
only do relative jumps into the check functions.
When adding a third entry to either list though, the issue that was
hiding there from the beginning is triggered, resulting in a jump to a
memory address that isn't part of the kernel at all.
The list of features/errata only contained an unused name and the
pointer to the check function, so an easy solution for the problem
is to just unroll the loop in code, dismantle the whole list structure
and call the relevant check functions one by one ourselves.
For the T-Head errata this includes moving the stage check inside
the check functions.
The issue is only relevant for things that might be called for early
alternatives (T-Head and possibly future main extensions), so the
SiFive errata were not affected from the beginning, as they got
an early return for early alternatives in the original patchset.
Signed-off-by: Heiko Stuebner <heiko@sntech.de>
Tested-by: Samuel Holland <samuel@sholland.org>
Link: https://lore.kernel.org/r/20220526205646.258337-6-heiko@sntech.de
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
Currently, riscv has several extensions which may not be supported on
all riscv platforms, for example the FPU and so on. To support a unified
kernel Image, we need to check whether a feature is supported or not.
If the check sits in a hot code path, performance will be impacted a
lot. A static key can be used to solve the issue. In the past, FPU
support was converted to use the static key mechanism, and I believe
we will have similar cases in the future.
This patch adds a unified mechanism to use static keys for some ISA
extensions by implementing an array of default-false static keys and
enabling them when detected.
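A kernel-context sketch of the mechanism: an array of default-false
static keys, one per ISA extension, enabled once the extension has been
detected at boot; the names follow this series but should be treated as
illustrative:

/* One default-false key per extension of interest. */
DEFINE_STATIC_KEY_ARRAY_FALSE(riscv_isa_ext_keys, RISCV_ISA_EXT_KEY_MAX);

/* During boot, after the ISA string has been parsed: */
static void riscv_isa_ext_enable(int key)
{
        static_branch_enable(&riscv_isa_ext_keys[key]);
}

/* In a hot path the check compiles down to a patched branch: */
static __always_inline bool has_fpu_sketch(void)
{
        return static_branch_likely(&riscv_isa_ext_keys[RISCV_ISA_EXT_KEY_FPU]);
}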
Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20220522153543.2656-2-jszhang@kernel.org
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
Pull bitmap updates from Yury Norov:
- bitmap: optimize bitmap_weight() usage, from me
- lib/bitmap.c make bitmap_print_bitmask_to_buf parseable, from Mauro
Carvalho Chehab
- include/linux/find: Fix documentation, from Anna-Maria Behnsen
- bitmap: fix conversion from/to fix-sized arrays, from me
- bitmap: Fix return values to be unsigned, from Kees Cook
It has been in linux-next for at least a week with no problems.
* tag 'bitmap-for-5.19-rc1' of https://github.com/norov/linux: (31 commits)
nodemask: Fix return values to be unsigned
bitmap: Fix return values to be unsigned
KVM: x86: hyper-v: replace bitmap_weight() with hweight64()
KVM: x86: hyper-v: fix type of valid_bank_mask
ia64: cleanup remove_siblinginfo()
drm/amd/pm: use bitmap_{from,to}_arr32 where appropriate
KVM: s390: replace bitmap_copy with bitmap_{from,to}_arr64 where appropriate
lib/bitmap: add test for bitmap_{from,to}_arr64
lib: add bitmap_{from,to}_arr64
lib/bitmap: extend comment for bitmap_(from,to)_arr32()
include/linux/find: Fix documentation
lib/bitmap.c make bitmap_print_bitmask_to_buf parseable
MAINTAINERS: add cpumask and nodemask files to BITMAP_API
arch/x86: replace nodes_weight with nodes_empty where appropriate
mm/vmstat: replace cpumask_weight with cpumask_empty where appropriate
clocksource: replace cpumask_weight with cpumask_empty in clocksource.c
genirq/affinity: replace cpumask_weight with cpumask_empty where appropriate
irq: mips: replace cpumask_weight with cpumask_empty where appropriate
drm/i915/pmu: replace cpumask_weight with cpumask_empty where appropriate
arch/x86: replace cpumask_weight with cpumask_empty where appropriate
...