// SPDX-License-Identifier: GPL-2.0-only
/*
 * SMP initialisation and IPI support
 * Based on arch/arm64/kernel/smp.c
 *
 * Copyright (C) 2012 ARM Ltd.
 * Copyright (C) 2015 Regents of the University of California
 * Copyright (C) 2017 SiFive
 */

#include <linux/acpi.h>
#include <linux/arch_topology.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/sched.h>
#include <linux/kernel_stat.h>
#include <linux/notifier.h>
#include <linux/cpu.h>
#include <linux/percpu.h>
#include <linux/delay.h>
#include <linux/err.h>
#include <linux/irq.h>
#include <linux/of.h>
#include <linux/sched/task_stack.h>
#include <linux/sched/mm.h>
#include <asm/cpu_ops.h>
#include <asm/cpufeature.h>
#include <asm/irq.h>
#include <asm/mmu_context.h>
#include <asm/numa.h>
#include <asm/tlbflush.h>
#include <asm/sections.h>
#include <asm/smp.h>
#include <uapi/asm/hwcap.h>
#include <asm/vector.h>

#include "head.h"

static DECLARE_COMPLETION(cpu_running);

void __init smp_prepare_boot_cpu(void)
{
}

void __init smp_prepare_cpus(unsigned int max_cpus)
{
	int cpuid;
	int ret;
	unsigned int curr_cpuid;

	init_cpu_topology();

	curr_cpuid = smp_processor_id();
	store_cpu_topology(curr_cpuid);
	numa_store_cpu_info(curr_cpuid);
	numa_add_cpu(curr_cpuid);

	/* This covers non-smp usecase mandated by "nosmp" option */
	if (max_cpus == 0)
		return;

	for_each_possible_cpu(cpuid) {
		if (cpuid == curr_cpuid)
			continue;
		if (cpu_ops[cpuid]->cpu_prepare) {
			ret = cpu_ops[cpuid]->cpu_prepare(cpuid);
			if (ret)
				continue;
		}
		set_cpu_present(cpuid, true);
		numa_store_cpu_info(cpuid);
	}
}

#ifdef CONFIG_ACPI
static unsigned int cpu_count = 1;

static int __init acpi_parse_rintc(union acpi_subtable_headers *header, const unsigned long end)
{
	unsigned long hart;
	static bool found_boot_cpu;
	struct acpi_madt_rintc *processor = (struct acpi_madt_rintc *)header;

	/*
	 * Each RINTC structure in MADT will have a flag. If ACPI_MADT_ENABLED
	 * bit in the flag is not enabled, it means OS should not try to enable
	 * the cpu to which RINTC belongs.
	 */
	if (!(processor->flags & ACPI_MADT_ENABLED))
		return 0;

	if (BAD_MADT_ENTRY(processor, end))
		return -EINVAL;

	acpi_table_print_madt_entry(&header->common);

	hart = processor->hart_id;
	if (hart == INVALID_HARTID) {
		pr_warn("Invalid hartid\n");
		return 0;
	}

	if (hart == cpuid_to_hartid_map(0)) {
		BUG_ON(found_boot_cpu);
		found_boot_cpu = true;
		early_map_cpu_to_node(0, acpi_numa_get_nid(cpu_count));
		return 0;
	}

	if (cpu_count >= NR_CPUS) {
		pr_warn("NR_CPUS is too small for the number of ACPI tables.\n");
		return 0;
	}

	cpuid_to_hartid_map(cpu_count) = hart;
	early_map_cpu_to_node(cpu_count, acpi_numa_get_nid(cpu_count));
	cpu_count++;

	return 0;
}

static void __init acpi_parse_and_init_cpus(void)
{
	int cpuid;

	cpu_set_ops(0);

	acpi_table_parse_madt(ACPI_MADT_TYPE_RINTC, acpi_parse_rintc, 0);

	for (cpuid = 1; cpuid < nr_cpu_ids; cpuid++) {
		if (cpuid_to_hartid_map(cpuid) != INVALID_HARTID) {
			cpu_set_ops(cpuid);
			set_cpu_possible(cpuid, true);
		}
	}
}
#else
#define acpi_parse_and_init_cpus(...)	do { } while (0)
#endif

static void __init of_parse_and_init_cpus(void)
{
	struct device_node *dn;
	unsigned long hart;
	bool found_boot_cpu = false;
	int cpuid = 1;
	int rc;

	cpu_set_ops(0);

	for_each_of_cpu_node(dn) {
		rc = riscv_early_of_processor_hartid(dn, &hart);
		if (rc < 0)
			continue;

		if (hart == cpuid_to_hartid_map(0)) {
			BUG_ON(found_boot_cpu);
			found_boot_cpu = true;
			early_map_cpu_to_node(0, of_node_to_nid(dn));
			continue;
		}
		if (cpuid >= NR_CPUS) {
			pr_warn("Invalid cpuid [%d] for hartid [%lu]\n",
				cpuid, hart);
			continue;
		}

		cpuid_to_hartid_map(cpuid) = hart;
		early_map_cpu_to_node(cpuid, of_node_to_nid(dn));
		cpuid++;
	}

	BUG_ON(!found_boot_cpu);

	if (cpuid > nr_cpu_ids)
		pr_warn("Total number of cpus [%d] is greater than nr_cpus option value [%d]\n",
			cpuid, nr_cpu_ids);

	for (cpuid = 1; cpuid < nr_cpu_ids; cpuid++) {
		if (cpuid_to_hartid_map(cpuid) != INVALID_HARTID) {
			cpu_set_ops(cpuid);
			set_cpu_possible(cpuid, true);
		}
	}
}

void __init setup_smp(void)
{
	if (acpi_disabled)
		of_parse_and_init_cpus();
	else
		acpi_parse_and_init_cpus();
}

static int start_secondary_cpu(int cpu, struct task_struct *tidle)
{
	if (cpu_ops[cpu]->cpu_start)
		return cpu_ops[cpu]->cpu_start(cpu, tidle);

	return -EOPNOTSUPP;
}

int __cpu_up(unsigned int cpu, struct task_struct *tidle)
{
	int ret = 0;

	tidle->thread_info.cpu = cpu;

	ret = start_secondary_cpu(cpu, tidle);
	if (!ret) {
		wait_for_completion_timeout(&cpu_running,
					    msecs_to_jiffies(1000));

		if (!cpu_online(cpu)) {
			pr_crit("CPU%u: failed to come online\n", cpu);
			ret = -EIO;
		}
	} else {
		pr_crit("CPU%u: failed to start\n", cpu);
	}

	return ret;
}

void __init smp_cpus_done(unsigned int max_cpus)
{
}

/*
 * C entry point for a secondary processor.
 */
asmlinkage __visible void smp_callin(void)
{
	struct mm_struct *mm = &init_mm;
	unsigned int curr_cpuid = smp_processor_id();

	/* All kernel threads share the same mm context. */
	mmgrab(mm);
	current->active_mm = mm;

	store_cpu_topology(curr_cpuid);
	notify_cpu_starting(curr_cpuid);
	riscv_ipi_enable();
	numa_add_cpu(curr_cpuid);
	set_cpu_online(curr_cpuid, 1);

	/*
	 * Probe misaligned access speed rather than deferring to a vendor
	 * function: time short copy loops for a fixed number of jiffies and
	 * keep only the fastest single iteration, which filters out
	 * interrupts and warm-up effects such as branch prediction. If a
	 * misaligned word access beats the equivalent byte accesses, the
	 * hart is marked FAST; otherwise SLOW (systems that actually
	 * emulate the access are mislabelled SLOW until misaligned traps
	 * can be delegated to the kernel and tested explicitly). If rdtime
	 * is too coarse to time a single copy (a loop completes in 0
	 * ticks), the setting is left at UNKNOWN.
	 */
	check_unaligned_access(curr_cpuid);

	if (has_vector()) {
		if (riscv_v_setup_vsize())
			elf_hwcap &= ~COMPAT_HWCAP_ISA_V;
	}

	/*
	 * Remote TLB flushes are ignored while the CPU is offline, so emit
	 * a local TLB flush right now just in case.
	 */
	local_flush_tlb_all();

	complete(&cpu_running);

	/*
	 * Disable preemption before enabling interrupts, so we don't try to
	 * schedule a CPU that hasn't actually started yet.
	 */
	local_irq_enable();

	cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
}