Merge branch 'pm-cpufreq'

Merge cpufreq changes for 6.11-rc1:

 - Add Loongson-3 CPUFreq driver support (Huacai Chen).

 - Add support for the Arrow Lake and Lunar Lake platforms and
   the out-of-band (OOB) mode on Emerald Rapids to the intel_pstate
   cpufreq driver, make it support the highest performance change
   interrupt and clean it up (Srinivas Pandruvada).

 - Switch cpufreq to new Intel CPU model defines (Tony Luck).

 - Simplify the cpufreq driver interface by switching the .exit() driver
   callback to a void return type (Lizhe, Viresh Kumar); a minimal sketch
   of the new signature follows this list.

 - Make cpufreq_boost_enabled() return bool (Dhruva Gole).

 - Add fast CPPC support to the amd-pstate cpufreq driver, address
   multiple assorted issues in it and clean it up (Perry Yuan, Mario
   Limonciello, Dhananjay Ugwekar, Meng Li, Xiaojian Du).

 - Add Allwinner H700 speed bin to the sun50i cpufreq driver (Ryan
   Walklin).

 - Fix memory leaks and of_node_put() usage in the sun50i and qcom-nvmem
   cpufreq drivers (Javier Carrasco).

 - Clean up the sti and dt-platdev cpufreq drivers (Jeff Johnson,
   Raphael Gallais-Pou).

 - Fix deferred probe handling in the TI cpufreq driver and wrong return
   values of ti_opp_supply_probe(), and add OPP tables for the AM62Ax and
   AM62Px SoCs to it (Bryan Brattlof, Primoz Fiser).

 - Avoid overflow of target_freq in .fast_switch() in the SCMI cpufreq
   driver (Jagadeesh Kona).

 - Use dev_err_probe() in every error path of the probe function in the
   Mediatek cpufreq driver (Nícolas Prado).

 - Fix kernel-doc param for longhaul_setstate in the longhaul cpufreq
   driver (Yang Li).

 - Fix system resume handling in the CPPC cpufreq driver (Riwen Lu).
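
A minimal sketch of the new void .exit() callback signature referenced
above (the "foo" driver name is illustrative, not taken from this merge):

#include <linux/cpufreq.h>
#include <linux/slab.h>

static void foo_cpufreq_exit(struct cpufreq_policy *policy)
{
	/* Release per-policy state; the callback no longer returns a status. */
	kfree(policy->driver_data);
}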

* pm-cpufreq: (55 commits)
  cpufreq: sti: fix build warning
  cpufreq: mediatek: Use dev_err_probe in every error path in probe
  cpufreq: Add Loongson-3 CPUFreq driver support
  cpufreq: Make cpufreq_driver->exit() return void
  cpufreq/amd-pstate: Fix the scaling_max_freq setting on shared memory CPPC systems
  cpufreq/amd-pstate-ut: Convert nominal_freq to khz during comparisons
  cpufreq: pcc: Remove empty exit() callback
  cpufreq: loongson2: Remove empty exit() callback
  cpufreq: nforce2: Remove empty exit() callback
  cpufreq: docs: Add missing scaling_available_frequencies description
  cpufreq: make cpufreq_boost_enabled() return bool
  cpufreq: intel_pstate: Support highest performance change interrupt
  x86/cpufeatures: Add HWP highest perf change feature flag
  Documentation: cpufreq: amd-pstate: update doc for Per CPU boost control method
  cpufreq: amd-pstate: Cap the CPPC.max_perf to nominal_perf if CPB is off
  cpufreq: amd-pstate: initialize core precision boost state
  cpufreq: acpi: move MSR_K7_HWCR_CPB_DIS_BIT into msr-index.h
  cpufreq: sti: add missing MODULE_DEVICE_TABLE entry for stih418
  cpufreq: intel_pstate: Replace boot_cpu_has()
  cpufreq: ti: update OPP table for AM62Px SoCs
  ...
Rafael J. Wysocki, 2024-07-15 18:51:35 +02:00, commit a18abb873b
51 changed files with 987 additions and 384 deletions


@ -281,6 +281,22 @@ integer values defined between 0 and 255 when the EPP feature is enabled by platform
firmware; if the EPP feature is disabled, the driver will ignore the written value.
This attribute is read-write.
``boost``
The ``boost`` sysfs attribute provides control over the CPU core
performance boost, allowing users to manage the maximum frequency limit
of the CPU. It can be used to enable or disable the boost feature on
individual CPUs.
When the boost feature is enabled, the CPU can dynamically raise its frequency
beyond the base frequency, providing enhanced performance for demanding workloads.
Disabling the boost feature restricts the CPU to operating at the base
frequency, which may be desirable in certain scenarios to prioritize power
efficiency or manage temperature.
To manipulate the ``boost`` attribute, write ``0`` to disable the boost or
``1`` to enable it for the respective CPU, using the sysfs path
``/sys/devices/system/cpu/cpuX/cpufreq/boost``, where ``X`` represents the CPU number.
Other performance and frequency values can be read back from
``/sys/devices/system/cpu/cpuX/acpi_cppc/``, see :ref:`cppc_sysfs`.
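
For illustration, a minimal userspace sketch that enables boost for one CPU
through the attribute described above (assuming CPU0 and sufficient
privileges; this example is not part of the patch itself):

#include <stdio.h>

int main(void)
{
	/* Per-CPU boost control path documented above; cpu0 is just an example. */
	FILE *f = fopen("/sys/devices/system/cpu/cpu0/cpufreq/boost", "w");

	if (!f) {
		perror("fopen");
		return 1;
	}
	fputs("1\n", f);	/* write "0" instead to disable boost */
	return fclose(f) ? 1 : 0;
}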
@ -406,7 +422,7 @@ control its functionality at the system level. They are located in the
``/sys/devices/system/cpu/amd_pstate/`` directory and affect all CPUs.
``status``
Operation mode of the driver: "active", "passive" or "disable".
Operation mode of the driver: "active", "passive", "guided" or "disable".
"active"
The driver is functional and in the ``active mode``


@ -267,6 +267,10 @@ are the following:
``related_cpus``
List of all (online and offline) CPUs belonging to this policy.
``scaling_available_frequencies``
List of available frequencies of the CPUs belonging to this policy
(in kHz).
``scaling_available_governors``
List of ``CPUFreq`` scaling governors present in the kernel that can
be attached to this policy or (if the |intel_pstate| scaling driver is


@ -12966,6 +12966,7 @@ F: Documentation/arch/loongarch/
F: Documentation/translations/zh_CN/arch/loongarch/
F: arch/loongarch/
F: drivers/*/*loongarch*
F: drivers/cpufreq/loongson3_cpufreq.c
LOONGSON GPIO DRIVER
M: Yinbo Zhu <zhuyinbo@loongson.cn>


@ -361,6 +361,7 @@
#define X86_FEATURE_HWP_ACT_WINDOW (14*32+ 9) /* HWP Activity Window */
#define X86_FEATURE_HWP_EPP (14*32+10) /* HWP Energy Perf. Preference */
#define X86_FEATURE_HWP_PKG_REQ (14*32+11) /* HWP Package Level Request */
#define X86_FEATURE_HWP_HIGHEST_PERF_CHANGE (14*32+15) /* "" HWP Highest perf change */
#define X86_FEATURE_HFI (14*32+19) /* Hardware Feedback Interface */
/* AMD SVM Feature Identification, CPUID level 0x8000000a (EDX), word 15 */
@ -470,6 +471,7 @@
#define X86_FEATURE_BHI_CTRL (21*32+ 2) /* "" BHI_DIS_S HW control available */
#define X86_FEATURE_CLEAR_BHB_HW (21*32+ 3) /* "" BHI_DIS_S HW control enabled */
#define X86_FEATURE_CLEAR_BHB_LOOP_ON_VMEXIT (21*32+ 4) /* "" Clear branch history at vmexit using SW loop */
#define X86_FEATURE_FAST_CPPC (21*32 + 5) /* "" AMD Fast CPPC */
/*
* BUG word(s)


@ -781,6 +781,8 @@
#define MSR_K7_HWCR_IRPERF_EN BIT_ULL(MSR_K7_HWCR_IRPERF_EN_BIT)
#define MSR_K7_FID_VID_CTL 0xc0010041
#define MSR_K7_FID_VID_STATUS 0xc0010042
#define MSR_K7_HWCR_CPB_DIS_BIT 25
#define MSR_K7_HWCR_CPB_DIS BIT_ULL(MSR_K7_HWCR_CPB_DIS_BIT)
/* K6 MSRs */
#define MSR_K6_WHCR 0xc0000082


@ -45,6 +45,7 @@ static const struct cpuid_bit cpuid_bits[] = {
{ X86_FEATURE_HW_PSTATE, CPUID_EDX, 7, 0x80000007, 0 },
{ X86_FEATURE_CPB, CPUID_EDX, 9, 0x80000007, 0 },
{ X86_FEATURE_PROC_FEEDBACK, CPUID_EDX, 11, 0x80000007, 0 },
{ X86_FEATURE_FAST_CPPC, CPUID_EDX, 15, 0x80000007, 0 },
{ X86_FEATURE_MBA, CPUID_EBX, 6, 0x80000008, 0 },
{ X86_FEATURE_SMBA, CPUID_EBX, 2, 0x80000020, 0 },
{ X86_FEATURE_BMEC, CPUID_EBX, 3, 0x80000020, 0 },


@ -262,6 +262,18 @@ config LOONGSON2_CPUFREQ
If in doubt, say N.
endif
if LOONGARCH
config LOONGSON3_CPUFREQ
tristate "Loongson3 CPUFreq Driver"
help
This option adds a CPUFreq driver for Loongson processors that
support software-configurable CPU frequency.
Loongson-3 family processors support this feature.
If in doubt, say N.
endif
if SPARC64
config SPARC_US3_CPUFREQ
tristate "UltraSPARC-III CPU Frequency driver"


@ -71,6 +71,7 @@ config X86_AMD_PSTATE_DEFAULT_MODE
config X86_AMD_PSTATE_UT
tristate "selftest for AMD Processor P-State driver"
depends on X86 && ACPI_PROCESSOR
depends on X86_AMD_PSTATE
default n
help
This kernel module is used for testing. It's safe to say M here.


@ -103,6 +103,7 @@ obj-$(CONFIG_POWERNV_CPUFREQ) += powernv-cpufreq.o
# Other platform drivers
obj-$(CONFIG_BMIPS_CPUFREQ) += bmips-cpufreq.o
obj-$(CONFIG_LOONGSON2_CPUFREQ) += loongson2_cpufreq.o
obj-$(CONFIG_LOONGSON3_CPUFREQ) += loongson3_cpufreq.o
obj-$(CONFIG_SH_CPU_FREQ) += sh-cpufreq.o
obj-$(CONFIG_SPARC_US2E_CPUFREQ) += sparc-us2e-cpufreq.o
obj-$(CONFIG_SPARC_US3_CPUFREQ) += sparc-us3-cpufreq.o


@ -50,8 +50,6 @@ enum {
#define AMD_MSR_RANGE (0x7)
#define HYGON_MSR_RANGE (0x7)
#define MSR_K7_HWCR_CPB_DIS (1ULL << 25)
struct acpi_cpufreq_data {
unsigned int resume;
unsigned int cpu_feature;
@ -908,7 +906,7 @@ err_free:
return result;
}
static int acpi_cpufreq_cpu_exit(struct cpufreq_policy *policy)
static void acpi_cpufreq_cpu_exit(struct cpufreq_policy *policy)
{
struct acpi_cpufreq_data *data = policy->driver_data;
@ -921,8 +919,6 @@ static int acpi_cpufreq_cpu_exit(struct cpufreq_policy *policy)
free_cpumask_var(data->freqdomain_cpus);
kfree(policy->freq_table);
kfree(data);
return 0;
}
static int acpi_cpufreq_resume(struct cpufreq_policy *policy)


@ -202,6 +202,7 @@ static void amd_pstate_ut_check_freq(u32 index)
int cpu = 0;
struct cpufreq_policy *policy = NULL;
struct amd_cpudata *cpudata = NULL;
u32 nominal_freq_khz;
for_each_possible_cpu(cpu) {
policy = cpufreq_cpu_get(cpu);
@ -209,13 +210,14 @@ static void amd_pstate_ut_check_freq(u32 index)
break;
cpudata = policy->driver_data;
if (!((cpudata->max_freq >= cpudata->nominal_freq) &&
(cpudata->nominal_freq > cpudata->lowest_nonlinear_freq) &&
nominal_freq_khz = cpudata->nominal_freq * 1000;
if (!((cpudata->max_freq >= nominal_freq_khz) &&
(nominal_freq_khz > cpudata->lowest_nonlinear_freq) &&
(cpudata->lowest_nonlinear_freq > cpudata->min_freq) &&
(cpudata->min_freq > 0))) {
amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL;
pr_err("%s cpu%d max=%d >= nominal=%d > lowest_nonlinear=%d > min=%d > 0, the formula is incorrect!\n",
__func__, cpu, cpudata->max_freq, cpudata->nominal_freq,
__func__, cpu, cpudata->max_freq, nominal_freq_khz,
cpudata->lowest_nonlinear_freq, cpudata->min_freq);
goto skip_test;
}
@ -229,13 +231,13 @@ static void amd_pstate_ut_check_freq(u32 index)
if (cpudata->boost_supported) {
if ((policy->max == cpudata->max_freq) ||
(policy->max == cpudata->nominal_freq))
(policy->max == nominal_freq_khz))
amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_PASS;
else {
amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL;
pr_err("%s cpu%d policy_max=%d should be equal cpu_max=%d or cpu_nominal=%d !\n",
__func__, cpu, policy->max, cpudata->max_freq,
cpudata->nominal_freq);
nominal_freq_khz);
goto skip_test;
}
} else {


@ -51,6 +51,7 @@
#define AMD_PSTATE_TRANSITION_LATENCY 20000
#define AMD_PSTATE_TRANSITION_DELAY 1000
#define AMD_PSTATE_FAST_CPPC_TRANSITION_DELAY 600
#define CPPC_HIGHEST_PERF_PERFORMANCE 196
#define CPPC_HIGHEST_PERF_DEFAULT 166
@ -85,15 +86,6 @@ struct quirk_entry {
u32 lowest_freq;
};
/*
* TODO: We need more time to fine tune processors with shared memory solution
* with community together.
*
* There are some performance drops on the CPU benchmarks which reports from
* Suse. We are co-working with them to fine tune the shared memory solution. So
* we disable it by default to go acpi-cpufreq on these processors and add a
* module parameter to be able to enable it manually for debugging.
*/
static struct cpufreq_driver *current_pstate_driver;
static struct cpufreq_driver amd_pstate_driver;
static struct cpufreq_driver amd_pstate_epp_driver;
@ -157,7 +149,7 @@ static int __init dmi_matched_7k62_bios_bug(const struct dmi_system_id *dmi)
* broken BIOS lacking the nominal_freq and lowest_freq capability
* definitions in the ACPI tables
*/
if (boot_cpu_has(X86_FEATURE_ZEN2)) {
if (cpu_feature_enabled(X86_FEATURE_ZEN2)) {
quirks = dmi->driver_data;
pr_info("Overriding nominal and lowest frequencies for %s\n", dmi->ident);
return 1;
@ -199,7 +191,7 @@ static s16 amd_pstate_get_epp(struct amd_cpudata *cpudata, u64 cppc_req_cached)
u64 epp;
int ret;
if (boot_cpu_has(X86_FEATURE_CPPC)) {
if (cpu_feature_enabled(X86_FEATURE_CPPC)) {
if (!cppc_req_cached) {
epp = rdmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ,
&cppc_req_cached);
@ -247,12 +239,32 @@ static int amd_pstate_get_energy_pref_index(struct amd_cpudata *cpudata)
return index;
}
static void pstate_update_perf(struct amd_cpudata *cpudata, u32 min_perf,
u32 des_perf, u32 max_perf, bool fast_switch)
{
if (fast_switch)
wrmsrl(MSR_AMD_CPPC_REQ, READ_ONCE(cpudata->cppc_req_cached));
else
wrmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ,
READ_ONCE(cpudata->cppc_req_cached));
}
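/*
* Perf updates are routed through a static call so that the MSR path above
* or the shared-memory CPPC path (cppc_update_perf) can be selected once at
* driver init via static_call_update(), avoiding an indirect branch on every
* update.
*/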
DEFINE_STATIC_CALL(amd_pstate_update_perf, pstate_update_perf);
static inline void amd_pstate_update_perf(struct amd_cpudata *cpudata,
u32 min_perf, u32 des_perf,
u32 max_perf, bool fast_switch)
{
static_call(amd_pstate_update_perf)(cpudata, min_perf, des_perf,
max_perf, fast_switch);
}
static int amd_pstate_set_epp(struct amd_cpudata *cpudata, u32 epp)
{
int ret;
struct cppc_perf_ctrls perf_ctrls;
if (boot_cpu_has(X86_FEATURE_CPPC)) {
if (cpu_feature_enabled(X86_FEATURE_CPPC)) {
u64 value = READ_ONCE(cpudata->cppc_req_cached);
value &= ~GENMASK_ULL(31, 24);
@ -263,6 +275,9 @@ static int amd_pstate_set_epp(struct amd_cpudata *cpudata, u32 epp)
if (!ret)
cpudata->epp_cached = epp;
} else {
amd_pstate_update_perf(cpudata, cpudata->min_limit_perf, 0U,
cpudata->max_limit_perf, false);
perf_ctrls.energy_perf = epp;
ret = cppc_set_epp_perf(cpudata->cpu, &perf_ctrls, 1);
if (ret) {
@ -281,10 +296,8 @@ static int amd_pstate_set_energy_pref_index(struct amd_cpudata *cpudata,
int epp = -EINVAL;
int ret;
if (!pref_index) {
pr_debug("EPP pref_index is invalid\n");
return -EINVAL;
}
if (!pref_index)
epp = cpudata->epp_default;
if (epp == -EINVAL)
epp = epp_values[pref_index];
@ -452,16 +465,6 @@ static inline int amd_pstate_init_perf(struct amd_cpudata *cpudata)
return static_call(amd_pstate_init_perf)(cpudata);
}
static void pstate_update_perf(struct amd_cpudata *cpudata, u32 min_perf,
u32 des_perf, u32 max_perf, bool fast_switch)
{
if (fast_switch)
wrmsrl(MSR_AMD_CPPC_REQ, READ_ONCE(cpudata->cppc_req_cached));
else
wrmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ,
READ_ONCE(cpudata->cppc_req_cached));
}
static void cppc_update_perf(struct amd_cpudata *cpudata,
u32 min_perf, u32 des_perf,
u32 max_perf, bool fast_switch)
@ -475,16 +478,6 @@ static void cppc_update_perf(struct amd_cpudata *cpudata,
cppc_set_perf(cpudata->cpu, &perf_ctrls);
}
DEFINE_STATIC_CALL(amd_pstate_update_perf, pstate_update_perf);
static inline void amd_pstate_update_perf(struct amd_cpudata *cpudata,
u32 min_perf, u32 des_perf,
u32 max_perf, bool fast_switch)
{
static_call(amd_pstate_update_perf)(cpudata, min_perf, des_perf,
max_perf, fast_switch);
}
static inline bool amd_pstate_sample(struct amd_cpudata *cpudata)
{
u64 aperf, mperf, tsc;
@ -521,7 +514,10 @@ static inline bool amd_pstate_sample(struct amd_cpudata *cpudata)
static void amd_pstate_update(struct amd_cpudata *cpudata, u32 min_perf,
u32 des_perf, u32 max_perf, bool fast_switch, int gov_flags)
{
unsigned long max_freq;
struct cpufreq_policy *policy = cpufreq_cpu_get(cpudata->cpu);
u64 prev = READ_ONCE(cpudata->cppc_req_cached);
u32 nominal_perf = READ_ONCE(cpudata->nominal_perf);
u64 value = prev;
min_perf = clamp_t(unsigned long, min_perf, cpudata->min_limit_perf,
@ -530,6 +526,9 @@ static void amd_pstate_update(struct amd_cpudata *cpudata, u32 min_perf,
cpudata->max_limit_perf);
des_perf = clamp_t(unsigned long, des_perf, min_perf, max_perf);
max_freq = READ_ONCE(cpudata->max_limit_freq);
policy->cur = div_u64(des_perf * max_freq, max_perf);
if ((cppc_state == AMD_PSTATE_GUIDED) && (gov_flags & CPUFREQ_GOV_DYNAMIC_SWITCHING)) {
min_perf = des_perf;
des_perf = 0;
@ -541,6 +540,10 @@ static void amd_pstate_update(struct amd_cpudata *cpudata, u32 min_perf,
value &= ~AMD_CPPC_DES_PERF(~0L);
value |= AMD_CPPC_DES_PERF(des_perf);
/* limit the max perf when core performance boost feature is disabled */
if (!cpudata->boost_supported)
max_perf = min_t(unsigned long, nominal_perf, max_perf);
value &= ~AMD_CPPC_MAX_PERF(~0L);
value |= AMD_CPPC_MAX_PERF(max_perf);
@ -651,10 +654,9 @@ static void amd_pstate_adjust_perf(unsigned int cpu,
unsigned long capacity)
{
unsigned long max_perf, min_perf, des_perf,
cap_perf, lowest_nonlinear_perf, max_freq;
cap_perf, lowest_nonlinear_perf;
struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);
struct amd_cpudata *cpudata = policy->driver_data;
unsigned int target_freq;
if (policy->min != cpudata->min_limit_freq || policy->max != cpudata->max_limit_freq)
amd_pstate_update_min_max_limit(policy);
@ -662,7 +664,6 @@ static void amd_pstate_adjust_perf(unsigned int cpu,
cap_perf = READ_ONCE(cpudata->highest_perf);
lowest_nonlinear_perf = READ_ONCE(cpudata->lowest_nonlinear_perf);
max_freq = READ_ONCE(cpudata->max_freq);
des_perf = cap_perf;
if (target_perf < capacity)
@ -680,14 +681,59 @@ static void amd_pstate_adjust_perf(unsigned int cpu,
max_perf = min_perf;
des_perf = clamp_t(unsigned long, des_perf, min_perf, max_perf);
target_freq = div_u64(des_perf * max_freq, max_perf);
policy->cur = target_freq;
amd_pstate_update(cpudata, min_perf, des_perf, max_perf, true,
policy->governor->flags);
cpufreq_cpu_put(policy);
}
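/*
* Apply a boost on/off decision to one CPU: program the CPPC max perf limit
* (highest_perf when boost is on, nominal_perf otherwise) and update the
* policy's cpuinfo.max_freq and, in passive mode, its frequency QoS request.
*/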
static int amd_pstate_cpu_boost_update(struct cpufreq_policy *policy, bool on)
{
struct amd_cpudata *cpudata = policy->driver_data;
struct cppc_perf_ctrls perf_ctrls;
u32 highest_perf, nominal_perf, nominal_freq, max_freq;
int ret;
highest_perf = READ_ONCE(cpudata->highest_perf);
nominal_perf = READ_ONCE(cpudata->nominal_perf);
nominal_freq = READ_ONCE(cpudata->nominal_freq);
max_freq = READ_ONCE(cpudata->max_freq);
if (boot_cpu_has(X86_FEATURE_CPPC)) {
u64 value = READ_ONCE(cpudata->cppc_req_cached);
value &= ~GENMASK_ULL(7, 0);
value |= on ? highest_perf : nominal_perf;
WRITE_ONCE(cpudata->cppc_req_cached, value);
wrmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, value);
} else {
perf_ctrls.max_perf = on ? highest_perf : nominal_perf;
ret = cppc_set_perf(cpudata->cpu, &perf_ctrls);
if (ret) {
cpufreq_cpu_release(policy);
pr_debug("Failed to set max perf on CPU:%d. ret:%d\n",
cpudata->cpu, ret);
return ret;
}
}
if (on)
policy->cpuinfo.max_freq = max_freq;
else if (policy->cpuinfo.max_freq > nominal_freq * 1000)
policy->cpuinfo.max_freq = nominal_freq * 1000;
policy->max = policy->cpuinfo.max_freq;
if (cppc_state == AMD_PSTATE_PASSIVE) {
ret = freq_qos_update_request(&cpudata->req[1], policy->cpuinfo.max_freq);
if (ret < 0)
pr_debug("Failed to update freq constraint: CPU%d\n", cpudata->cpu);
}
return ret < 0 ? ret : 0;
}
static int amd_pstate_set_boost(struct cpufreq_policy *policy, int state)
{
struct amd_cpudata *cpudata = policy->driver_data;
@ -695,36 +741,51 @@ static int amd_pstate_set_boost(struct cpufreq_policy *policy, int state)
if (!cpudata->boost_supported) {
pr_err("Boost mode is not supported by this processor or SBIOS\n");
return -EINVAL;
return -EOPNOTSUPP;
}
mutex_lock(&amd_pstate_driver_lock);
ret = amd_pstate_cpu_boost_update(policy, state);
WRITE_ONCE(cpudata->boost_state, !ret ? state : false);
policy->boost_enabled = !ret ? state : false;
refresh_frequency_limits(policy);
mutex_unlock(&amd_pstate_driver_lock);
if (state)
policy->cpuinfo.max_freq = cpudata->max_freq;
else
policy->cpuinfo.max_freq = cpudata->nominal_freq * 1000;
policy->max = policy->cpuinfo.max_freq;
ret = freq_qos_update_request(&cpudata->req[1],
policy->cpuinfo.max_freq);
if (ret < 0)
return ret;
return 0;
return ret;
}
static void amd_pstate_boost_init(struct amd_cpudata *cpudata)
static int amd_pstate_init_boost_support(struct amd_cpudata *cpudata)
{
u32 highest_perf, nominal_perf;
u64 boost_val;
int ret = -1;
highest_perf = READ_ONCE(cpudata->highest_perf);
nominal_perf = READ_ONCE(cpudata->nominal_perf);
/*
* If the platform has no CPB support, or CPB has been disabled, initialize
* the driver's boost_enabled state to false. This is not an error, and the
* cpufreq core handles it.
*/
if (!cpu_feature_enabled(X86_FEATURE_CPB)) {
pr_debug_once("Boost CPB capabilities not present in the processor\n");
ret = 0;
goto exit_err;
}
if (highest_perf <= nominal_perf)
return;
cpudata->boost_supported = true;
/* at least one CPU supports CPB, even if others fail to set it up later on */
current_pstate_driver->boost_enabled = true;
ret = rdmsrl_on_cpu(cpudata->cpu, MSR_K7_HWCR, &boost_val);
if (ret) {
pr_err_once("failed to read initial CPU boost state!\n");
ret = -EIO;
goto exit_err;
}
if (!(boost_val & MSR_K7_HWCR_CPB_DIS))
cpudata->boost_supported = true;
return 0;
exit_err:
cpudata->boost_supported = false;
return ret;
}
static void amd_perf_ctl_reset(unsigned int cpu)
@ -753,7 +814,7 @@ static int amd_pstate_get_highest_perf(int cpu, u32 *highest_perf)
{
int ret;
if (boot_cpu_has(X86_FEATURE_CPPC)) {
if (cpu_feature_enabled(X86_FEATURE_CPPC)) {
u64 cap1;
ret = rdmsrl_safe_on_cpu(cpu, MSR_AMD_CPPC_CAP1, &cap1);
@ -849,8 +910,12 @@ static u32 amd_pstate_get_transition_delay_us(unsigned int cpu)
u32 transition_delay_ns;
transition_delay_ns = cppc_get_transition_latency(cpu);
if (transition_delay_ns == CPUFREQ_ETERNAL)
return AMD_PSTATE_TRANSITION_DELAY;
if (transition_delay_ns == CPUFREQ_ETERNAL) {
if (cpu_feature_enabled(X86_FEATURE_FAST_CPPC))
return AMD_PSTATE_FAST_CPPC_TRANSITION_DELAY;
else
return AMD_PSTATE_TRANSITION_DELAY;
}
return transition_delay_ns / NSEC_PER_USEC;
}
@ -921,12 +986,30 @@ static int amd_pstate_init_freq(struct amd_cpudata *cpudata)
WRITE_ONCE(cpudata->nominal_freq, nominal_freq);
WRITE_ONCE(cpudata->max_freq, max_freq);
/*
* The values below must be initialized correctly, otherwise the driver will fail to load.
* max_freq is calculated as (nominal_freq * highest_perf) / nominal_perf.
* lowest_nonlinear_freq is a value in the range [min_freq, nominal_freq].
* Check the _CPC objects in the ACPI tables if any values are incorrect.
*/
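/*
* Illustrative example with hypothetical values: nominal_freq = 2600 MHz,
* highest_perf = 196 and nominal_perf = 166 would give
* max_freq = 2600 * 196 / 166 ≈ 3070 MHz.
*/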
if (min_freq <= 0 || max_freq <= 0 || nominal_freq <= 0 || min_freq > max_freq) {
pr_err("min_freq(%d) or max_freq(%d) or nominal_freq(%d) value is incorrect\n",
min_freq, max_freq, nominal_freq * 1000);
return -EINVAL;
}
if (lowest_nonlinear_freq <= min_freq || lowest_nonlinear_freq > nominal_freq * 1000) {
pr_err("lowest_nonlinear_freq(%d) value is out of range [min_freq(%d), nominal_freq(%d)]\n",
lowest_nonlinear_freq, min_freq, nominal_freq * 1000);
return -EINVAL;
}
return 0;
}
static int amd_pstate_cpu_init(struct cpufreq_policy *policy)
{
int min_freq, max_freq, nominal_freq, ret;
int min_freq, max_freq, ret;
struct device *dev;
struct amd_cpudata *cpudata;
@ -955,18 +1038,12 @@ static int amd_pstate_cpu_init(struct cpufreq_policy *policy)
if (ret)
goto free_cpudata1;
ret = amd_pstate_init_boost_support(cpudata);
if (ret)
goto free_cpudata1;
min_freq = READ_ONCE(cpudata->min_freq);
max_freq = READ_ONCE(cpudata->max_freq);
nominal_freq = READ_ONCE(cpudata->nominal_freq);
if (min_freq <= 0 || max_freq <= 0 ||
nominal_freq <= 0 || min_freq > max_freq) {
dev_err(dev,
"min_freq(%d) or max_freq(%d) or nominal_freq (%d) value is incorrect, check _CPC in ACPI tables\n",
min_freq, max_freq, nominal_freq);
ret = -EINVAL;
goto free_cpudata1;
}
policy->cpuinfo.transition_latency = amd_pstate_get_transition_latency(policy->cpu);
policy->transition_delay_us = amd_pstate_get_transition_delay_us(policy->cpu);
@ -977,10 +1054,12 @@ static int amd_pstate_cpu_init(struct cpufreq_policy *policy)
policy->cpuinfo.min_freq = min_freq;
policy->cpuinfo.max_freq = max_freq;
policy->boost_enabled = READ_ONCE(cpudata->boost_supported);
/* It will be updated by governor */
policy->cur = policy->cpuinfo.min_freq;
if (boot_cpu_has(X86_FEATURE_CPPC))
if (cpu_feature_enabled(X86_FEATURE_CPPC))
policy->fast_switch_possible = true;
ret = freq_qos_add_request(&policy->constraints, &cpudata->req[0],
@ -1002,7 +1081,6 @@ static int amd_pstate_cpu_init(struct cpufreq_policy *policy)
policy->driver_data = cpudata;
amd_pstate_boost_init(cpudata);
if (!current_pstate_driver->adjust_perf)
current_pstate_driver->adjust_perf = amd_pstate_adjust_perf;
@ -1015,7 +1093,7 @@ free_cpudata1:
return ret;
}
static int amd_pstate_cpu_exit(struct cpufreq_policy *policy)
static void amd_pstate_cpu_exit(struct cpufreq_policy *policy)
{
struct amd_cpudata *cpudata = policy->driver_data;
@ -1023,8 +1101,6 @@ static int amd_pstate_cpu_exit(struct cpufreq_policy *policy)
freq_qos_remove_request(&cpudata->req[0]);
policy->fast_switch_possible = false;
kfree(cpudata);
return 0;
}
static int amd_pstate_cpu_resume(struct cpufreq_policy *policy)
@ -1213,7 +1289,7 @@ static int amd_pstate_change_mode_without_dvr_change(int mode)
cppc_state = mode;
if (boot_cpu_has(X86_FEATURE_CPPC) || cppc_state == AMD_PSTATE_ACTIVE)
if (cpu_feature_enabled(X86_FEATURE_CPPC) || cppc_state == AMD_PSTATE_ACTIVE)
return 0;
for_each_present_cpu(cpu) {
@ -1386,7 +1462,7 @@ static bool amd_pstate_acpi_pm_profile_undefined(void)
static int amd_pstate_epp_cpu_init(struct cpufreq_policy *policy)
{
int min_freq, max_freq, nominal_freq, ret;
int min_freq, max_freq, ret;
struct amd_cpudata *cpudata;
struct device *dev;
u64 value;
@ -1417,17 +1493,12 @@ static int amd_pstate_epp_cpu_init(struct cpufreq_policy *policy)
if (ret)
goto free_cpudata1;
ret = amd_pstate_init_boost_support(cpudata);
if (ret)
goto free_cpudata1;
min_freq = READ_ONCE(cpudata->min_freq);
max_freq = READ_ONCE(cpudata->max_freq);
nominal_freq = READ_ONCE(cpudata->nominal_freq);
if (min_freq <= 0 || max_freq <= 0 ||
nominal_freq <= 0 || min_freq > max_freq) {
dev_err(dev,
"min_freq(%d) or max_freq(%d) or nominal_freq(%d) value is incorrect, check _CPC in ACPI tables\n",
min_freq, max_freq, nominal_freq);
ret = -EINVAL;
goto free_cpudata1;
}
policy->cpuinfo.min_freq = min_freq;
policy->cpuinfo.max_freq = max_freq;
@ -1436,11 +1507,13 @@ static int amd_pstate_epp_cpu_init(struct cpufreq_policy *policy)
policy->driver_data = cpudata;
cpudata->epp_cached = amd_pstate_get_epp(cpudata, 0);
cpudata->epp_cached = cpudata->epp_default = amd_pstate_get_epp(cpudata, 0);
policy->min = policy->cpuinfo.min_freq;
policy->max = policy->cpuinfo.max_freq;
policy->boost_enabled = READ_ONCE(cpudata->boost_supported);
/*
* Set the policy to provide a valid fallback value in case
* the default cpufreq governor is neither powersave nor performance.
@ -1451,7 +1524,7 @@ static int amd_pstate_epp_cpu_init(struct cpufreq_policy *policy)
else
policy->policy = CPUFREQ_POLICY_POWERSAVE;
if (boot_cpu_has(X86_FEATURE_CPPC)) {
if (cpu_feature_enabled(X86_FEATURE_CPPC)) {
ret = rdmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, &value);
if (ret)
return ret;
@ -1462,7 +1535,6 @@ static int amd_pstate_epp_cpu_init(struct cpufreq_policy *policy)
return ret;
WRITE_ONCE(cpudata->cppc_cap1_cached, value);
}
amd_pstate_boost_init(cpudata);
return 0;
@ -1471,7 +1543,7 @@ free_cpudata1:
return ret;
}
static int amd_pstate_epp_cpu_exit(struct cpufreq_policy *policy)
static void amd_pstate_epp_cpu_exit(struct cpufreq_policy *policy)
{
struct amd_cpudata *cpudata = policy->driver_data;
@ -1481,7 +1553,6 @@ static int amd_pstate_epp_cpu_exit(struct cpufreq_policy *policy)
}
pr_debug("CPU %d exiting\n", policy->cpu);
return 0;
}
static void amd_pstate_epp_update_limit(struct cpufreq_policy *policy)
@ -1541,7 +1612,7 @@ static void amd_pstate_epp_update_limit(struct cpufreq_policy *policy)
epp = 0;
/* Set initial EPP value */
if (boot_cpu_has(X86_FEATURE_CPPC)) {
if (cpu_feature_enabled(X86_FEATURE_CPPC)) {
value &= ~GENMASK_ULL(31, 24);
value |= (u64)epp << 24;
}
@ -1564,6 +1635,12 @@ static int amd_pstate_epp_set_policy(struct cpufreq_policy *policy)
amd_pstate_epp_update_limit(policy);
/*
* policy->cur is never updated by the amd_pstate_epp driver, but it
* is used as a stale frequency value. So, keep it within limits.
*/
policy->cur = policy->min;
return 0;
}
@ -1580,7 +1657,7 @@ static void amd_pstate_epp_reenable(struct amd_cpudata *cpudata)
value = READ_ONCE(cpudata->cppc_req_cached);
max_perf = READ_ONCE(cpudata->highest_perf);
if (boot_cpu_has(X86_FEATURE_CPPC)) {
if (cpu_feature_enabled(X86_FEATURE_CPPC)) {
wrmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, value);
} else {
perf_ctrls.max_perf = max_perf;
@ -1614,7 +1691,7 @@ static void amd_pstate_epp_offline(struct cpufreq_policy *policy)
value = READ_ONCE(cpudata->cppc_req_cached);
mutex_lock(&amd_pstate_limits_lock);
if (boot_cpu_has(X86_FEATURE_CPPC)) {
if (cpu_feature_enabled(X86_FEATURE_CPPC)) {
cpudata->epp_policy = CPUFREQ_POLICY_UNKNOWN;
/* Set max perf same as min perf */
@ -1718,6 +1795,7 @@ static struct cpufreq_driver amd_pstate_epp_driver = {
.suspend = amd_pstate_epp_suspend,
.resume = amd_pstate_epp_resume,
.update_limits = amd_pstate_update_limits,
.set_boost = amd_pstate_set_boost,
.name = "amd-pstate-epp",
.attr = amd_pstate_epp_attr,
};
@ -1741,6 +1819,46 @@ static int __init amd_pstate_set_driver(int mode_idx)
return -EINVAL;
}
/*
* CPPC is not supported on family 0x17 processors with model IDs ranging from 0x10 to 0x2F.
* Print a debug message to help check whether the CPU has CPPC support, which is
* useful when diagnosing driver loading issues.
*/
static bool amd_cppc_supported(void)
{
struct cpuinfo_x86 *c = &cpu_data(0);
bool warn = false;
if ((boot_cpu_data.x86 == 0x17) && (boot_cpu_data.x86_model < 0x30)) {
pr_debug_once("CPPC feature is not supported by the processor\n");
return false;
}
/*
* If the CPPC feature is disabled in the BIOS for processors that support MSR-based CPPC,
* the AMD Pstate driver may not function correctly.
* Check the CPPC flag and display a warning message if the platform supports CPPC.
* Note: the check below will not abort the driver registration process, because
* it is only added for debugging purposes.
*/
if (!cpu_feature_enabled(X86_FEATURE_CPPC)) {
if (cpu_feature_enabled(X86_FEATURE_ZEN1) || cpu_feature_enabled(X86_FEATURE_ZEN2)) {
if (c->x86_model > 0x60 && c->x86_model < 0xaf)
warn = true;
} else if (cpu_feature_enabled(X86_FEATURE_ZEN3) || cpu_feature_enabled(X86_FEATURE_ZEN4)) {
if ((c->x86_model > 0x10 && c->x86_model < 0x1F) ||
(c->x86_model > 0x40 && c->x86_model < 0xaf))
warn = true;
} else if (cpu_feature_enabled(X86_FEATURE_ZEN5)) {
warn = true;
}
}
if (warn)
pr_warn_once("The CPPC feature is supported but currently disabled by the BIOS.\n"
"Please enable it if your BIOS has the CPPC option.\n");
return true;
}
static int __init amd_pstate_init(void)
{
struct device *dev_root;
@ -1749,6 +1867,11 @@ static int __init amd_pstate_init(void)
if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD)
return -ENODEV;
/* show debug message only if CPPC is not supported */
if (!amd_cppc_supported())
return -EOPNOTSUPP;
/* show a warning message when the BIOS is broken or ACPI is disabled */
if (!acpi_cpc_valid()) {
pr_warn_once("the _CPC object is not present in SBIOS or ACPI disabled\n");
return -ENODEV;
@ -1763,35 +1886,43 @@ static int __init amd_pstate_init(void)
/* check if this machine needs CPPC quirks */
dmi_check_system(amd_pstate_quirks_table);
switch (cppc_state) {
case AMD_PSTATE_UNDEFINED:
/*
* Determine the driver mode from the command line or the kernel config.
* If no command-line input is provided, cppc_state will be AMD_PSTATE_UNDEFINED.
* Command-line options override the kernel config settings.
*/
if (cppc_state == AMD_PSTATE_UNDEFINED) {
/* Disable on the following configs by default:
* 1. Undefined platforms
* 2. Server platforms
* 3. Shared memory designs
*/
if (amd_pstate_acpi_pm_profile_undefined() ||
amd_pstate_acpi_pm_profile_server() ||
!boot_cpu_has(X86_FEATURE_CPPC)) {
amd_pstate_acpi_pm_profile_server()) {
pr_info("driver load is disabled, boot with specific mode to enable this\n");
return -ENODEV;
}
ret = amd_pstate_set_driver(CONFIG_X86_AMD_PSTATE_DEFAULT_MODE);
if (ret)
return ret;
break;
/* get driver mode from kernel config option [1:4] */
cppc_state = CONFIG_X86_AMD_PSTATE_DEFAULT_MODE;
}
switch (cppc_state) {
case AMD_PSTATE_DISABLE:
pr_info("driver load is disabled, boot with specific mode to enable this\n");
return -ENODEV;
case AMD_PSTATE_PASSIVE:
case AMD_PSTATE_ACTIVE:
case AMD_PSTATE_GUIDED:
ret = amd_pstate_set_driver(cppc_state);
if (ret)
return ret;
break;
default:
return -EINVAL;
}
/* capability check */
if (boot_cpu_has(X86_FEATURE_CPPC)) {
if (cpu_feature_enabled(X86_FEATURE_CPPC)) {
pr_debug("AMD CPPC MSR based functionality is supported\n");
if (cppc_state != AMD_PSTATE_ACTIVE)
current_pstate_driver->adjust_perf = amd_pstate_adjust_perf;
@ -1805,13 +1936,15 @@ static int __init amd_pstate_init(void)
/* enable amd pstate feature */
ret = amd_pstate_enable(true);
if (ret) {
pr_err("failed to enable with return %d\n", ret);
pr_err("failed to enable driver mode(%d)\n", cppc_state);
return ret;
}
ret = cpufreq_register_driver(current_pstate_driver);
if (ret)
if (ret) {
pr_err("failed to register with return %d\n", ret);
goto disable_driver;
}
dev_root = bus_get_dev_root(&cpu_subsys);
if (dev_root) {
@ -1827,6 +1960,8 @@ static int __init amd_pstate_init(void)
global_attr_free:
cpufreq_unregister_driver(current_pstate_driver);
disable_driver:
amd_pstate_enable(false);
return ret;
}
device_initcall(amd_pstate_init);


@ -99,6 +99,8 @@ struct amd_cpudata {
u32 policy;
u64 cppc_cap1_cached;
bool suspended;
s16 epp_default;
bool boost_state;
};
#endif /* _LINUX_AMD_PSTATE_H */


@ -305,7 +305,7 @@ out_iounmap:
return ret;
}
static int apple_soc_cpufreq_exit(struct cpufreq_policy *policy)
static void apple_soc_cpufreq_exit(struct cpufreq_policy *policy)
{
struct apple_cpu_priv *priv = policy->driver_data;
@ -313,8 +313,6 @@ static int apple_soc_cpufreq_exit(struct cpufreq_policy *policy)
dev_pm_opp_remove_all_dynamic(priv->cpu_dev);
iounmap(priv->reg_base);
kfree(priv);
return 0;
}
static struct cpufreq_driver apple_soc_cpufreq_driver = {


@ -121,11 +121,9 @@ static int bmips_cpufreq_target_index(struct cpufreq_policy *policy,
return 0;
}
static int bmips_cpufreq_exit(struct cpufreq_policy *policy)
static void bmips_cpufreq_exit(struct cpufreq_policy *policy)
{
kfree(policy->freq_table);
return 0;
}
static int bmips_cpufreq_init(struct cpufreq_policy *policy)


@ -291,15 +291,10 @@ static int cppc_cpufreq_set_target(struct cpufreq_policy *policy,
struct cppc_cpudata *cpu_data = policy->driver_data;
unsigned int cpu = policy->cpu;
struct cpufreq_freqs freqs;
u32 desired_perf;
int ret = 0;
desired_perf = cppc_khz_to_perf(&cpu_data->perf_caps, target_freq);
/* Return if it is exactly the same perf */
if (desired_perf == cpu_data->perf_ctrls.desired_perf)
return ret;
cpu_data->perf_ctrls.desired_perf = desired_perf;
cpu_data->perf_ctrls.desired_perf =
cppc_khz_to_perf(&cpu_data->perf_caps, target_freq);
freqs.old = policy->cur;
freqs.new = target_freq;
@ -688,7 +683,7 @@ out:
return ret;
}
static int cppc_cpufreq_cpu_exit(struct cpufreq_policy *policy)
static void cppc_cpufreq_cpu_exit(struct cpufreq_policy *policy)
{
struct cppc_cpudata *cpu_data = policy->driver_data;
struct cppc_perf_caps *caps = &cpu_data->perf_caps;
@ -705,7 +700,6 @@ static int cppc_cpufreq_cpu_exit(struct cpufreq_policy *policy)
caps->lowest_perf, cpu, ret);
cppc_cpufreq_put_cpu_data(policy);
return 0;
}
static inline u64 get_delta(u64 t1, u64 t0)


@ -233,4 +233,5 @@ create_pdev:
sizeof(struct cpufreq_dt_platform_data)));
}
core_initcall(cpufreq_dt_platdev_init);
MODULE_DESCRIPTION("Generic DT based cpufreq platdev driver");
MODULE_LICENSE("GPL");


@ -157,10 +157,9 @@ static int cpufreq_offline(struct cpufreq_policy *policy)
return 0;
}
static int cpufreq_exit(struct cpufreq_policy *policy)
static void cpufreq_exit(struct cpufreq_policy *policy)
{
clk_put(policy->clk);
return 0;
}
static struct cpufreq_driver dt_cpufreq_driver = {


@ -359,11 +359,6 @@ static int nforce2_cpu_init(struct cpufreq_policy *policy)
return 0;
}
static int nforce2_cpu_exit(struct cpufreq_policy *policy)
{
return 0;
}
static struct cpufreq_driver nforce2_driver = {
.name = "nforce2",
.flags = CPUFREQ_NO_AUTO_DYNAMIC_SWITCHING,
@ -371,7 +366,6 @@ static struct cpufreq_driver nforce2_driver = {
.target = nforce2_target,
.get = nforce2_get,
.init = nforce2_cpu_init,
.exit = nforce2_cpu_exit,
};
#ifdef MODULE


@ -608,16 +608,15 @@ EXPORT_SYMBOL_GPL(cpufreq_policy_transition_delay_us);
static ssize_t show_boost(struct kobject *kobj,
struct kobj_attribute *attr, char *buf)
{
return sprintf(buf, "%d\n", cpufreq_driver->boost_enabled);
return sysfs_emit(buf, "%d\n", cpufreq_driver->boost_enabled);
}
static ssize_t store_boost(struct kobject *kobj, struct kobj_attribute *attr,
const char *buf, size_t count)
{
int ret, enable;
bool enable;
ret = sscanf(buf, "%d", &enable);
if (ret != 1 || enable < 0 || enable > 1)
if (kstrtobool(buf, &enable))
return -EINVAL;
if (cpufreq_boost_trigger_state(enable)) {
@ -641,10 +640,10 @@ static ssize_t show_local_boost(struct cpufreq_policy *policy, char *buf)
static ssize_t store_local_boost(struct cpufreq_policy *policy,
const char *buf, size_t count)
{
int ret, enable;
int ret;
bool enable;
ret = kstrtoint(buf, 10, &enable);
if (ret || enable < 0 || enable > 1)
if (kstrtobool(buf, &enable))
return -EINVAL;
if (!cpufreq_driver->boost_enabled)
@ -739,7 +738,7 @@ static struct cpufreq_governor *cpufreq_parse_governor(char *str_governor)
static ssize_t show_##file_name \
(struct cpufreq_policy *policy, char *buf) \
{ \
return sprintf(buf, "%u\n", policy->object); \
return sysfs_emit(buf, "%u\n", policy->object); \
}
show_one(cpuinfo_min_freq, cpuinfo.min_freq);
@ -760,11 +759,11 @@ static ssize_t show_scaling_cur_freq(struct cpufreq_policy *policy, char *buf)
freq = arch_freq_get_on_cpu(policy->cpu);
if (freq)
ret = sprintf(buf, "%u\n", freq);
ret = sysfs_emit(buf, "%u\n", freq);
else if (cpufreq_driver->setpolicy && cpufreq_driver->get)
ret = sprintf(buf, "%u\n", cpufreq_driver->get(policy->cpu));
ret = sysfs_emit(buf, "%u\n", cpufreq_driver->get(policy->cpu));
else
ret = sprintf(buf, "%u\n", policy->cur);
ret = sysfs_emit(buf, "%u\n", policy->cur);
return ret;
}
@ -798,9 +797,9 @@ static ssize_t show_cpuinfo_cur_freq(struct cpufreq_policy *policy,
unsigned int cur_freq = __cpufreq_get(policy);
if (cur_freq)
return sprintf(buf, "%u\n", cur_freq);
return sysfs_emit(buf, "%u\n", cur_freq);
return sprintf(buf, "<unknown>\n");
return sysfs_emit(buf, "<unknown>\n");
}
/*
@ -809,12 +808,11 @@ static ssize_t show_cpuinfo_cur_freq(struct cpufreq_policy *policy,
static ssize_t show_scaling_governor(struct cpufreq_policy *policy, char *buf)
{
if (policy->policy == CPUFREQ_POLICY_POWERSAVE)
return sprintf(buf, "powersave\n");
return sysfs_emit(buf, "powersave\n");
else if (policy->policy == CPUFREQ_POLICY_PERFORMANCE)
return sprintf(buf, "performance\n");
return sysfs_emit(buf, "performance\n");
else if (policy->governor)
return scnprintf(buf, CPUFREQ_NAME_PLEN, "%s\n",
policy->governor->name);
return sysfs_emit(buf, "%s\n", policy->governor->name);
return -EINVAL;
}
@ -873,7 +871,7 @@ static ssize_t show_scaling_available_governors(struct cpufreq_policy *policy,
struct cpufreq_governor *t;
if (!has_target()) {
i += sprintf(buf, "performance powersave");
i += sysfs_emit(buf, "performance powersave");
goto out;
}
@ -882,11 +880,11 @@ static ssize_t show_scaling_available_governors(struct cpufreq_policy *policy,
if (i >= (ssize_t) ((PAGE_SIZE / sizeof(char))
- (CPUFREQ_NAME_LEN + 2)))
break;
i += scnprintf(&buf[i], CPUFREQ_NAME_PLEN, "%s ", t->name);
i += sysfs_emit_at(buf, i, "%s ", t->name);
}
mutex_unlock(&cpufreq_governor_mutex);
out:
i += sprintf(&buf[i], "\n");
i += sysfs_emit_at(buf, i, "\n");
return i;
}
@ -896,7 +894,7 @@ ssize_t cpufreq_show_cpus(const struct cpumask *mask, char *buf)
unsigned int cpu;
for_each_cpu(cpu, mask) {
i += scnprintf(&buf[i], (PAGE_SIZE - i - 2), "%u ", cpu);
i += sysfs_emit_at(buf, i, "%u ", cpu);
if (i >= (PAGE_SIZE - 5))
break;
}
@ -904,7 +902,7 @@ ssize_t cpufreq_show_cpus(const struct cpumask *mask, char *buf)
/* Remove the extra space at the end */
i--;
i += sprintf(&buf[i], "\n");
i += sysfs_emit_at(buf, i, "\n");
return i;
}
EXPORT_SYMBOL_GPL(cpufreq_show_cpus);
@ -947,7 +945,7 @@ static ssize_t store_scaling_setspeed(struct cpufreq_policy *policy,
static ssize_t show_scaling_setspeed(struct cpufreq_policy *policy, char *buf)
{
if (!policy->governor || !policy->governor->show_setspeed)
return sprintf(buf, "<unsupported>\n");
return sysfs_emit(buf, "<unsupported>\n");
return policy->governor->show_setspeed(policy, buf);
}
@ -961,8 +959,8 @@ static ssize_t show_bios_limit(struct cpufreq_policy *policy, char *buf)
int ret;
ret = cpufreq_driver->bios_limit(policy->cpu, &limit);
if (!ret)
return sprintf(buf, "%u\n", limit);
return sprintf(buf, "%u\n", policy->cpuinfo.max_freq);
return sysfs_emit(buf, "%u\n", limit);
return sysfs_emit(buf, "%u\n", policy->cpuinfo.max_freq);
}
cpufreq_freq_attr_ro_perm(cpuinfo_cur_freq, 0400);
@ -2876,7 +2874,7 @@ int cpufreq_enable_boost_support(void)
}
EXPORT_SYMBOL_GPL(cpufreq_enable_boost_support);
int cpufreq_boost_enabled(void)
bool cpufreq_boost_enabled(void)
{
return cpufreq_driver->boost_enabled;
}


@ -360,14 +360,13 @@ static int eps_cpu_init(struct cpufreq_policy *policy)
return 0;
}
static int eps_cpu_exit(struct cpufreq_policy *policy)
static void eps_cpu_exit(struct cpufreq_policy *policy)
{
unsigned int cpu = policy->cpu;
/* Bye */
kfree(eps_cpu[cpu]);
eps_cpu[cpu] = NULL;
return 0;
}
static struct cpufreq_driver eps_driver = {


@ -300,6 +300,7 @@ static struct cpufreq_driver *intel_pstate_driver __read_mostly;
#define HYBRID_SCALING_FACTOR 78741
#define HYBRID_SCALING_FACTOR_MTL 80000
#define HYBRID_SCALING_FACTOR_LNL 86957
static int hybrid_scaling_factor = HYBRID_SCALING_FACTOR;
@ -1625,17 +1626,24 @@ static void intel_pstate_notify_work(struct work_struct *work)
static DEFINE_SPINLOCK(hwp_notify_lock);
static cpumask_t hwp_intr_enable_mask;
#define HWP_GUARANTEED_PERF_CHANGE_STATUS BIT(0)
#define HWP_HIGHEST_PERF_CHANGE_STATUS BIT(3)
void notify_hwp_interrupt(void)
{
unsigned int this_cpu = smp_processor_id();
u64 value, status_mask;
unsigned long flags;
u64 value;
if (!hwp_active || !boot_cpu_has(X86_FEATURE_HWP_NOTIFY))
if (!hwp_active || !cpu_feature_enabled(X86_FEATURE_HWP_NOTIFY))
return;
status_mask = HWP_GUARANTEED_PERF_CHANGE_STATUS;
if (cpu_feature_enabled(X86_FEATURE_HWP_HIGHEST_PERF_CHANGE))
status_mask |= HWP_HIGHEST_PERF_CHANGE_STATUS;
rdmsrl_safe(MSR_HWP_STATUS, &value);
if (!(value & 0x01))
if (!(value & status_mask))
return;
spin_lock_irqsave(&hwp_notify_lock, flags);
@ -1659,7 +1667,7 @@ static void intel_pstate_disable_hwp_interrupt(struct cpudata *cpudata)
{
bool cancel_work;
if (!boot_cpu_has(X86_FEATURE_HWP_NOTIFY))
if (!cpu_feature_enabled(X86_FEATURE_HWP_NOTIFY))
return;
/* wrmsrl_on_cpu has to be outside spinlock as this can result in IPC */
@ -1673,17 +1681,25 @@ static void intel_pstate_disable_hwp_interrupt(struct cpudata *cpudata)
cancel_delayed_work_sync(&cpudata->hwp_notify_work);
}
#define HWP_GUARANTEED_PERF_CHANGE_REQ BIT(0)
#define HWP_HIGHEST_PERF_CHANGE_REQ BIT(2)
static void intel_pstate_enable_hwp_interrupt(struct cpudata *cpudata)
{
/* Enable HWP notification interrupt for guaranteed performance change */
/* Enable HWP notification interrupt for performance change */
if (boot_cpu_has(X86_FEATURE_HWP_NOTIFY)) {
u64 interrupt_mask = HWP_GUARANTEED_PERF_CHANGE_REQ;
spin_lock_irq(&hwp_notify_lock);
INIT_DELAYED_WORK(&cpudata->hwp_notify_work, intel_pstate_notify_work);
cpumask_set_cpu(cpudata->cpu, &hwp_intr_enable_mask);
spin_unlock_irq(&hwp_notify_lock);
if (cpu_feature_enabled(X86_FEATURE_HWP_HIGHEST_PERF_CHANGE))
interrupt_mask |= HWP_HIGHEST_PERF_CHANGE_REQ;
/* wrmsrl_on_cpu has to be outside spinlock as this can result in IPC */
wrmsrl_on_cpu(cpudata->cpu, MSR_HWP_INTERRUPT, 0x01);
wrmsrl_on_cpu(cpudata->cpu, MSR_HWP_INTERRUPT, interrupt_mask);
wrmsrl_on_cpu(cpudata->cpu, MSR_HWP_STATUS, 0);
}
}
@ -2367,54 +2383,54 @@ static const struct pstate_funcs knl_funcs = {
.get_val = core_get_val,
};
#define X86_MATCH(model, policy) \
X86_MATCH_VENDOR_FAM_MODEL_FEATURE(INTEL, 6, INTEL_FAM6_##model, \
X86_FEATURE_APERFMPERF, &policy)
#define X86_MATCH(vfm, policy) \
X86_MATCH_VFM_FEATURE(vfm, X86_FEATURE_APERFMPERF, &policy)
static const struct x86_cpu_id intel_pstate_cpu_ids[] = {
X86_MATCH(SANDYBRIDGE, core_funcs),
X86_MATCH(SANDYBRIDGE_X, core_funcs),
X86_MATCH(ATOM_SILVERMONT, silvermont_funcs),
X86_MATCH(IVYBRIDGE, core_funcs),
X86_MATCH(HASWELL, core_funcs),
X86_MATCH(BROADWELL, core_funcs),
X86_MATCH(IVYBRIDGE_X, core_funcs),
X86_MATCH(HASWELL_X, core_funcs),
X86_MATCH(HASWELL_L, core_funcs),
X86_MATCH(HASWELL_G, core_funcs),
X86_MATCH(BROADWELL_G, core_funcs),
X86_MATCH(ATOM_AIRMONT, airmont_funcs),
X86_MATCH(SKYLAKE_L, core_funcs),
X86_MATCH(BROADWELL_X, core_funcs),
X86_MATCH(SKYLAKE, core_funcs),
X86_MATCH(BROADWELL_D, core_funcs),
X86_MATCH(XEON_PHI_KNL, knl_funcs),
X86_MATCH(XEON_PHI_KNM, knl_funcs),
X86_MATCH(ATOM_GOLDMONT, core_funcs),
X86_MATCH(ATOM_GOLDMONT_PLUS, core_funcs),
X86_MATCH(SKYLAKE_X, core_funcs),
X86_MATCH(COMETLAKE, core_funcs),
X86_MATCH(ICELAKE_X, core_funcs),
X86_MATCH(TIGERLAKE, core_funcs),
X86_MATCH(SAPPHIRERAPIDS_X, core_funcs),
X86_MATCH(EMERALDRAPIDS_X, core_funcs),
X86_MATCH(INTEL_SANDYBRIDGE, core_funcs),
X86_MATCH(INTEL_SANDYBRIDGE_X, core_funcs),
X86_MATCH(INTEL_ATOM_SILVERMONT, silvermont_funcs),
X86_MATCH(INTEL_IVYBRIDGE, core_funcs),
X86_MATCH(INTEL_HASWELL, core_funcs),
X86_MATCH(INTEL_BROADWELL, core_funcs),
X86_MATCH(INTEL_IVYBRIDGE_X, core_funcs),
X86_MATCH(INTEL_HASWELL_X, core_funcs),
X86_MATCH(INTEL_HASWELL_L, core_funcs),
X86_MATCH(INTEL_HASWELL_G, core_funcs),
X86_MATCH(INTEL_BROADWELL_G, core_funcs),
X86_MATCH(INTEL_ATOM_AIRMONT, airmont_funcs),
X86_MATCH(INTEL_SKYLAKE_L, core_funcs),
X86_MATCH(INTEL_BROADWELL_X, core_funcs),
X86_MATCH(INTEL_SKYLAKE, core_funcs),
X86_MATCH(INTEL_BROADWELL_D, core_funcs),
X86_MATCH(INTEL_XEON_PHI_KNL, knl_funcs),
X86_MATCH(INTEL_XEON_PHI_KNM, knl_funcs),
X86_MATCH(INTEL_ATOM_GOLDMONT, core_funcs),
X86_MATCH(INTEL_ATOM_GOLDMONT_PLUS, core_funcs),
X86_MATCH(INTEL_SKYLAKE_X, core_funcs),
X86_MATCH(INTEL_COMETLAKE, core_funcs),
X86_MATCH(INTEL_ICELAKE_X, core_funcs),
X86_MATCH(INTEL_TIGERLAKE, core_funcs),
X86_MATCH(INTEL_SAPPHIRERAPIDS_X, core_funcs),
X86_MATCH(INTEL_EMERALDRAPIDS_X, core_funcs),
{}
};
MODULE_DEVICE_TABLE(x86cpu, intel_pstate_cpu_ids);
#ifdef CONFIG_ACPI
static const struct x86_cpu_id intel_pstate_cpu_oob_ids[] __initconst = {
X86_MATCH(BROADWELL_D, core_funcs),
X86_MATCH(BROADWELL_X, core_funcs),
X86_MATCH(SKYLAKE_X, core_funcs),
X86_MATCH(ICELAKE_X, core_funcs),
X86_MATCH(SAPPHIRERAPIDS_X, core_funcs),
X86_MATCH(INTEL_BROADWELL_D, core_funcs),
X86_MATCH(INTEL_BROADWELL_X, core_funcs),
X86_MATCH(INTEL_SKYLAKE_X, core_funcs),
X86_MATCH(INTEL_ICELAKE_X, core_funcs),
X86_MATCH(INTEL_SAPPHIRERAPIDS_X, core_funcs),
X86_MATCH(INTEL_EMERALDRAPIDS_X, core_funcs),
{}
};
#endif
static const struct x86_cpu_id intel_pstate_cpu_ee_disable_ids[] = {
X86_MATCH(KABYLAKE, core_funcs),
X86_MATCH(INTEL_KABYLAKE, core_funcs),
{}
};
@ -2699,13 +2715,11 @@ static int intel_pstate_cpu_offline(struct cpufreq_policy *policy)
return intel_cpufreq_cpu_offline(policy);
}
static int intel_pstate_cpu_exit(struct cpufreq_policy *policy)
static void intel_pstate_cpu_exit(struct cpufreq_policy *policy)
{
pr_debug("CPU %d exiting\n", policy->cpu);
policy->fast_switch_possible = false;
return 0;
}
static int __intel_pstate_cpu_init(struct cpufreq_policy *policy)
@ -3036,7 +3050,7 @@ pstate_exit:
return ret;
}
static int intel_cpufreq_cpu_exit(struct cpufreq_policy *policy)
static void intel_cpufreq_cpu_exit(struct cpufreq_policy *policy)
{
struct freq_qos_request *req;
@ -3046,7 +3060,7 @@ static int intel_cpufreq_cpu_exit(struct cpufreq_policy *policy)
freq_qos_remove_request(req);
kfree(req);
return intel_pstate_cpu_exit(policy);
intel_pstate_cpu_exit(policy);
}
static int intel_cpufreq_suspend(struct cpufreq_policy *policy)
@ -3350,14 +3364,13 @@ static inline void intel_pstate_request_control_from_smm(void) {}
#define INTEL_PSTATE_HWP_BROADWELL 0x01
#define X86_MATCH_HWP(model, hwp_mode) \
X86_MATCH_VENDOR_FAM_MODEL_FEATURE(INTEL, 6, INTEL_FAM6_##model, \
X86_FEATURE_HWP, hwp_mode)
#define X86_MATCH_HWP(vfm, hwp_mode) \
X86_MATCH_VFM_FEATURE(vfm, X86_FEATURE_HWP, hwp_mode)
static const struct x86_cpu_id hwp_support_ids[] __initconst = {
X86_MATCH_HWP(BROADWELL_X, INTEL_PSTATE_HWP_BROADWELL),
X86_MATCH_HWP(BROADWELL_D, INTEL_PSTATE_HWP_BROADWELL),
X86_MATCH_HWP(ANY, 0),
X86_MATCH_HWP(INTEL_BROADWELL_X, INTEL_PSTATE_HWP_BROADWELL),
X86_MATCH_HWP(INTEL_BROADWELL_D, INTEL_PSTATE_HWP_BROADWELL),
X86_MATCH_HWP(INTEL_ANY, 0),
{}
};
@ -3390,15 +3403,19 @@ static const struct x86_cpu_id intel_epp_default[] = {
* which can result in one core turbo frequency for
* AlderLake Mobile CPUs.
*/
X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE_L, HWP_SET_DEF_BALANCE_PERF_EPP(102)),
X86_MATCH_INTEL_FAM6_MODEL(SAPPHIRERAPIDS_X, HWP_SET_DEF_BALANCE_PERF_EPP(32)),
X86_MATCH_INTEL_FAM6_MODEL(METEORLAKE_L, HWP_SET_EPP_VALUES(HWP_EPP_POWERSAVE,
HWP_EPP_BALANCE_POWERSAVE, 115, 16)),
X86_MATCH_VFM(INTEL_ALDERLAKE_L, HWP_SET_DEF_BALANCE_PERF_EPP(102)),
X86_MATCH_VFM(INTEL_SAPPHIRERAPIDS_X, HWP_SET_DEF_BALANCE_PERF_EPP(32)),
X86_MATCH_VFM(INTEL_METEORLAKE_L, HWP_SET_EPP_VALUES(HWP_EPP_POWERSAVE,
179, 64, 16)),
X86_MATCH_VFM(INTEL_ARROWLAKE, HWP_SET_EPP_VALUES(HWP_EPP_POWERSAVE,
179, 64, 16)),
{}
};
static const struct x86_cpu_id intel_hybrid_scaling_factor[] = {
X86_MATCH_INTEL_FAM6_MODEL(METEORLAKE_L, HYBRID_SCALING_FACTOR_MTL),
X86_MATCH_VFM(INTEL_METEORLAKE_L, HYBRID_SCALING_FACTOR_MTL),
X86_MATCH_VFM(INTEL_ARROWLAKE, HYBRID_SCALING_FACTOR_MTL),
X86_MATCH_VFM(INTEL_LUNARLAKE_M, HYBRID_SCALING_FACTOR_LNL),
{}
};


@ -236,8 +236,9 @@ static void do_powersaver(int cx_address, unsigned int mults_index,
}
/**
* longhaul_set_cpu_frequency()
* @mults_index : bitpattern of the new multiplier.
* longhaul_setstate()
* @policy: cpufreq_policy structure containing the current policy.
* @table_index: index of the frequency within the cpufreq_frequency_table.
*
* Sets a new clock ratio.
*/


@ -85,18 +85,12 @@ static int loongson2_cpufreq_cpu_init(struct cpufreq_policy *policy)
return 0;
}
static int loongson2_cpufreq_exit(struct cpufreq_policy *policy)
{
return 0;
}
static struct cpufreq_driver loongson2_cpufreq_driver = {
.name = "loongson2",
.init = loongson2_cpufreq_cpu_init,
.verify = cpufreq_generic_frequency_table_verify,
.target_index = loongson2_cpufreq_target,
.get = cpufreq_generic_get,
.exit = loongson2_cpufreq_exit,
.attr = cpufreq_generic_attr,
};


@ -0,0 +1,395 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* CPUFreq driver for the Loongson-3 processors.
*
* All revisions of the Loongson-3 processor support the cpu_has_scalefreq feature.
*
* Author: Huacai Chen <chenhuacai@loongson.cn>
* Copyright (C) 2024 Loongson Technology Corporation Limited
*/
#include <linux/cpufreq.h>
#include <linux/delay.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/units.h>
#include <asm/idle.h>
#include <asm/loongarch.h>
#include <asm/loongson.h>
/* Message */
union smc_message {
u32 value;
struct {
u32 id : 4;
u32 info : 4;
u32 val : 16;
u32 cmd : 6;
u32 extra : 1;
u32 complete : 1;
};
};
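/*
* The bitfields above mirror the 32-bit message layout exchanged through the
* LOONGARCH_IOCSR_SMCMBX mailbox register; "value" provides raw access for
* iocsr_read32()/iocsr_write32().
*/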
/* Command return values */
#define CMD_OK 0 /* No error */
#define CMD_ERROR 1 /* Regular error */
#define CMD_NOCMD 2 /* Command not supported */
#define CMD_INVAL 3 /* Invalid Parameter */
/* Version commands */
/*
* CMD_GET_VERSION - Get interface version
* Input: none
* Output: version
*/
#define CMD_GET_VERSION 0x1
/* Feature commands */
/*
* CMD_GET_FEATURE - Get feature state
* Input: feature ID
* Output: feature flag
*/
#define CMD_GET_FEATURE 0x2
/*
* CMD_SET_FEATURE - Set feature state
* Input: feature ID, feature flag
* Output: none
*/
#define CMD_SET_FEATURE 0x3
/* Feature IDs */
#define FEATURE_SENSOR 0
#define FEATURE_FAN 1
#define FEATURE_DVFS 2
/* Sensor feature flags */
#define FEATURE_SENSOR_ENABLE BIT(0)
#define FEATURE_SENSOR_SAMPLE BIT(1)
/* Fan feature flags */
#define FEATURE_FAN_ENABLE BIT(0)
#define FEATURE_FAN_AUTO BIT(1)
/* DVFS feature flags */
#define FEATURE_DVFS_ENABLE BIT(0)
#define FEATURE_DVFS_BOOST BIT(1)
#define FEATURE_DVFS_AUTO BIT(2)
#define FEATURE_DVFS_SINGLE_BOOST BIT(3)
/* Sensor commands */
/*
* CMD_GET_SENSOR_NUM - Get number of sensors
* Input: none
* Output: number
*/
#define CMD_GET_SENSOR_NUM 0x4
/*
* CMD_GET_SENSOR_STATUS - Get sensor status
* Input: sensor ID, type
* Output: sensor status
*/
#define CMD_GET_SENSOR_STATUS 0x5
/* Sensor types */
#define SENSOR_INFO_TYPE 0
#define SENSOR_INFO_TYPE_TEMP 1
/* Fan commands */
/*
* CMD_GET_FAN_NUM - Get number of fans
* Input: none
* Output: number
*/
#define CMD_GET_FAN_NUM 0x6
/*
* CMD_GET_FAN_INFO - Get fan status
* Input: fan ID, type
* Output: fan info
*/
#define CMD_GET_FAN_INFO 0x7
/*
* CMD_SET_FAN_INFO - Set fan status
* Input: fan ID, type, value
* Output: none
*/
#define CMD_SET_FAN_INFO 0x8
/* Fan types */
#define FAN_INFO_TYPE_LEVEL 0
/* DVFS commands */
/*
* CMD_GET_FREQ_LEVEL_NUM - Get number of freq levels
* Input: CPU ID
* Output: number
*/
#define CMD_GET_FREQ_LEVEL_NUM 0x9
/*
* CMD_GET_FREQ_BOOST_LEVEL - Get the first boost level
* Input: CPU ID
* Output: number
*/
#define CMD_GET_FREQ_BOOST_LEVEL 0x10
/*
* CMD_GET_FREQ_LEVEL_INFO - Get freq level info
* Input: CPU ID, level ID
* Output: level info
*/
#define CMD_GET_FREQ_LEVEL_INFO 0x11
/*
* CMD_GET_FREQ_INFO - Get freq info
* Input: CPU ID, type
* Output: freq info
*/
#define CMD_GET_FREQ_INFO 0x12
/*
* CMD_SET_FREQ_INFO - Set freq info
* Input: CPU ID, type, value
* Output: none
*/
#define CMD_SET_FREQ_INFO 0x13
/* Freq types */
#define FREQ_INFO_TYPE_FREQ 0
#define FREQ_INFO_TYPE_LEVEL 1
#define FREQ_MAX_LEVEL 16
struct loongson3_freq_data {
unsigned int def_freq_level;
struct cpufreq_frequency_table table[];
};
static struct mutex cpufreq_mutex[MAX_PACKAGES];
static struct cpufreq_driver loongson3_cpufreq_driver;
static DEFINE_PER_CPU(struct loongson3_freq_data *, freq_data);
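/*
* Send one request to the package-local SMC mailbox and poll for completion
* (up to 10000 iterations with short sleeps). Returns the 16-bit response
* value on success, or -EPERM if the mailbox is busy or the command fails.
*/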
static inline int do_service_request(u32 id, u32 info, u32 cmd, u32 val, u32 extra)
{
int retries;
unsigned int cpu = smp_processor_id();
unsigned int package = cpu_data[cpu].package;
union smc_message msg, last;
mutex_lock(&cpufreq_mutex[package]);
last.value = iocsr_read32(LOONGARCH_IOCSR_SMCMBX);
if (!last.complete) {
mutex_unlock(&cpufreq_mutex[package]);
return -EPERM;
}
msg.id = id;
msg.info = info;
msg.cmd = cmd;
msg.val = val;
msg.extra = extra;
msg.complete = 0;
iocsr_write32(msg.value, LOONGARCH_IOCSR_SMCMBX);
iocsr_write32(iocsr_read32(LOONGARCH_IOCSR_MISC_FUNC) | IOCSR_MISC_FUNC_SOFT_INT,
LOONGARCH_IOCSR_MISC_FUNC);
for (retries = 0; retries < 10000; retries++) {
msg.value = iocsr_read32(LOONGARCH_IOCSR_SMCMBX);
if (msg.complete)
break;
usleep_range(8, 12);
}
if (!msg.complete || msg.cmd != CMD_OK) {
mutex_unlock(&cpufreq_mutex[package]);
return -EPERM;
}
mutex_unlock(&cpufreq_mutex[package]);
return msg.val;
}
static unsigned int loongson3_cpufreq_get(unsigned int cpu)
{
int ret;
ret = do_service_request(cpu, FREQ_INFO_TYPE_FREQ, CMD_GET_FREQ_INFO, 0, 0);
return ret * KILO;
}
static int loongson3_cpufreq_target(struct cpufreq_policy *policy, unsigned int index)
{
int ret;
ret = do_service_request(cpu_data[policy->cpu].core,
FREQ_INFO_TYPE_LEVEL, CMD_SET_FREQ_INFO, index, 0);
return (ret >= 0) ? 0 : ret;
}
static int configure_freq_table(int cpu)
{
int i, ret, boost_level, max_level, freq_level;
struct platform_device *pdev = cpufreq_get_driver_data();
struct loongson3_freq_data *data;
if (per_cpu(freq_data, cpu))
return 0;
ret = do_service_request(cpu, 0, CMD_GET_FREQ_LEVEL_NUM, 0, 0);
if (ret < 0)
return ret;
max_level = ret;
ret = do_service_request(cpu, 0, CMD_GET_FREQ_BOOST_LEVEL, 0, 0);
if (ret < 0)
return ret;
boost_level = ret;
freq_level = min(max_level, FREQ_MAX_LEVEL);
data = devm_kzalloc(&pdev->dev, struct_size(data, table, freq_level + 1), GFP_KERNEL);
if (!data)
return -ENOMEM;
data->def_freq_level = boost_level - 1;
for (i = 0; i < freq_level; i++) {
ret = do_service_request(cpu, FREQ_INFO_TYPE_FREQ, CMD_GET_FREQ_LEVEL_INFO, i, 0);
if (ret < 0) {
devm_kfree(&pdev->dev, data);
return ret;
}
data->table[i].frequency = ret * KILO;
data->table[i].flags = (i >= boost_level) ? CPUFREQ_BOOST_FREQ : 0;
}
data->table[freq_level].flags = 0;
data->table[freq_level].frequency = CPUFREQ_TABLE_END;
per_cpu(freq_data, cpu) = data;
return 0;
}
static int loongson3_cpufreq_cpu_init(struct cpufreq_policy *policy)
{
int i, ret, cpu = policy->cpu;
ret = configure_freq_table(cpu);
if (ret < 0)
return ret;
policy->cpuinfo.transition_latency = 10000;
policy->freq_table = per_cpu(freq_data, cpu)->table;
policy->suspend_freq = policy->freq_table[per_cpu(freq_data, cpu)->def_freq_level].frequency;
cpumask_copy(policy->cpus, topology_sibling_cpumask(cpu));
for_each_cpu(i, policy->cpus) {
if (i != cpu)
per_cpu(freq_data, i) = per_cpu(freq_data, cpu);
}
if (policy_has_boost_freq(policy)) {
ret = cpufreq_enable_boost_support();
if (ret < 0) {
pr_warn("cpufreq: Failed to enable boost: %d\n", ret);
return ret;
}
loongson3_cpufreq_driver.boost_enabled = true;
}
return 0;
}
static void loongson3_cpufreq_cpu_exit(struct cpufreq_policy *policy)
{
int cpu = policy->cpu;
loongson3_cpufreq_target(policy, per_cpu(freq_data, cpu)->def_freq_level);
}
static int loongson3_cpufreq_cpu_online(struct cpufreq_policy *policy)
{
return 0;
}
static int loongson3_cpufreq_cpu_offline(struct cpufreq_policy *policy)
{
return 0;
}
static struct cpufreq_driver loongson3_cpufreq_driver = {
.name = "loongson3",
.flags = CPUFREQ_CONST_LOOPS,
.init = loongson3_cpufreq_cpu_init,
.exit = loongson3_cpufreq_cpu_exit,
.online = loongson3_cpufreq_cpu_online,
.offline = loongson3_cpufreq_cpu_offline,
.get = loongson3_cpufreq_get,
.target_index = loongson3_cpufreq_target,
.attr = cpufreq_generic_attr,
.verify = cpufreq_generic_frequency_table_verify,
.suspend = cpufreq_generic_suspend,
};
static int loongson3_cpufreq_probe(struct platform_device *pdev)
{
int i, ret;
for (i = 0; i < MAX_PACKAGES; i++)
devm_mutex_init(&pdev->dev, &cpufreq_mutex[i]);
ret = do_service_request(0, 0, CMD_GET_VERSION, 0, 0);
if (ret <= 0)
return -EPERM;
ret = do_service_request(FEATURE_DVFS, 0, CMD_SET_FEATURE,
FEATURE_DVFS_ENABLE | FEATURE_DVFS_BOOST, 0);
if (ret < 0)
return -EPERM;
loongson3_cpufreq_driver.driver_data = pdev;
ret = cpufreq_register_driver(&loongson3_cpufreq_driver);
if (ret)
return ret;
pr_info("cpufreq: Loongson-3 CPU frequency driver.\n");
return 0;
}
static void loongson3_cpufreq_remove(struct platform_device *pdev)
{
cpufreq_unregister_driver(&loongson3_cpufreq_driver);
}
static struct platform_device_id cpufreq_id_table[] = {
{ "loongson3_cpufreq", },
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(platform, cpufreq_id_table);
static struct platform_driver loongson3_platform_driver = {
.driver = {
.name = "loongson3_cpufreq",
},
.id_table = cpufreq_id_table,
.probe = loongson3_cpufreq_probe,
.remove_new = loongson3_cpufreq_remove,
};
module_platform_driver(loongson3_platform_driver);
MODULE_AUTHOR("Huacai Chen <chenhuacai@loongson.cn>");
MODULE_DESCRIPTION("CPUFreq driver for Loongson-3 processors");
MODULE_LICENSE("GPL");
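For context, the table that configure_freq_table() fills in is an ordinary cpufreq frequency table: entries run until the CPUFREQ_TABLE_END sentinel and boost entries carry the CPUFREQ_BOOST_FREQ flag. A minimal sketch of a consumer (a hypothetical helper, not part of the driver) that walks such a table:

#include <linux/cpufreq.h>

/* Hypothetical helper: highest non-boost frequency (in kHz) in a table. */
static unsigned int max_base_freq(const struct cpufreq_frequency_table *table)
{
	unsigned int i, max = 0;

	for (i = 0; table[i].frequency != CPUFREQ_TABLE_END; i++) {
		if (table[i].flags & CPUFREQ_BOOST_FREQ)
			continue;
		if (table[i].frequency != CPUFREQ_ENTRY_INVALID &&
		    table[i].frequency > max)
			max = table[i].frequency;
	}

	return max;
}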

View File

@@ -260,7 +260,7 @@ static int mtk_cpufreq_hw_cpu_init(struct cpufreq_policy *policy)
return 0;
}
static int mtk_cpufreq_hw_cpu_exit(struct cpufreq_policy *policy)
static void mtk_cpufreq_hw_cpu_exit(struct cpufreq_policy *policy)
{
struct mtk_cpufreq_data *data = policy->driver_data;
struct resource *res = data->res;
@@ -270,8 +270,6 @@ static int mtk_cpufreq_hw_cpu_exit(struct cpufreq_policy *policy)
writel_relaxed(0x0, data->reg_bases[REG_FREQ_ENABLE]);
iounmap(base);
release_mem_region(res->start, resource_size(res));
return 0;
}
static void mtk_cpufreq_register_em(struct cpufreq_policy *policy)

View File

@@ -390,28 +390,23 @@ static int mtk_cpu_dvfs_info_init(struct mtk_cpu_dvfs_info *info, int cpu)
int ret;
cpu_dev = get_cpu_device(cpu);
if (!cpu_dev) {
dev_err(cpu_dev, "failed to get cpu%d device\n", cpu);
return -ENODEV;
}
if (!cpu_dev)
return dev_err_probe(cpu_dev, -ENODEV, "failed to get cpu%d device\n", cpu);
info->cpu_dev = cpu_dev;
info->ccifreq_bound = false;
if (info->soc_data->ccifreq_supported) {
info->cci_dev = of_get_cci(info->cpu_dev);
if (IS_ERR(info->cci_dev)) {
ret = PTR_ERR(info->cci_dev);
dev_err(cpu_dev, "cpu%d: failed to get cci device\n", cpu);
return -ENODEV;
}
if (IS_ERR(info->cci_dev))
return dev_err_probe(cpu_dev, PTR_ERR(info->cci_dev),
"cpu%d: failed to get cci device\n",
cpu);
}
info->cpu_clk = clk_get(cpu_dev, "cpu");
if (IS_ERR(info->cpu_clk)) {
ret = PTR_ERR(info->cpu_clk);
return dev_err_probe(cpu_dev, ret,
if (IS_ERR(info->cpu_clk))
return dev_err_probe(cpu_dev, PTR_ERR(info->cpu_clk),
"cpu%d: failed to get cpu clk\n", cpu);
}
info->inter_clk = clk_get(cpu_dev, "intermediate");
if (IS_ERR(info->inter_clk)) {
@@ -431,7 +426,7 @@ static int mtk_cpu_dvfs_info_init(struct mtk_cpu_dvfs_info *info, int cpu)
ret = regulator_enable(info->proc_reg);
if (ret) {
dev_warn(cpu_dev, "cpu%d: failed to enable vproc\n", cpu);
dev_err_probe(cpu_dev, ret, "cpu%d: failed to enable vproc\n", cpu);
goto out_free_proc_reg;
}
@@ -439,14 +434,17 @@ static int mtk_cpu_dvfs_info_init(struct mtk_cpu_dvfs_info *info, int cpu)
info->sram_reg = regulator_get_optional(cpu_dev, "sram");
if (IS_ERR(info->sram_reg)) {
ret = PTR_ERR(info->sram_reg);
if (ret == -EPROBE_DEFER)
if (ret == -EPROBE_DEFER) {
dev_err_probe(cpu_dev, ret,
"cpu%d: Failed to get sram regulator\n", cpu);
goto out_disable_proc_reg;
}
info->sram_reg = NULL;
} else {
ret = regulator_enable(info->sram_reg);
if (ret) {
dev_warn(cpu_dev, "cpu%d: failed to enable vsram\n", cpu);
dev_err_probe(cpu_dev, ret, "cpu%d: failed to enable vsram\n", cpu);
goto out_free_sram_reg;
}
}
@@ -454,31 +452,34 @@ static int mtk_cpu_dvfs_info_init(struct mtk_cpu_dvfs_info *info, int cpu)
/* Get OPP-sharing information from "operating-points-v2" bindings */
ret = dev_pm_opp_of_get_sharing_cpus(cpu_dev, &info->cpus);
if (ret) {
dev_err(cpu_dev,
dev_err_probe(cpu_dev, ret,
"cpu%d: failed to get OPP-sharing information\n", cpu);
goto out_disable_sram_reg;
}
ret = dev_pm_opp_of_cpumask_add_table(&info->cpus);
if (ret) {
dev_warn(cpu_dev, "cpu%d: no OPP table\n", cpu);
dev_err_probe(cpu_dev, ret, "cpu%d: no OPP table\n", cpu);
goto out_disable_sram_reg;
}
ret = clk_prepare_enable(info->cpu_clk);
if (ret)
if (ret) {
dev_err_probe(cpu_dev, ret, "cpu%d: failed to enable cpu clk\n", cpu);
goto out_free_opp_table;
}
ret = clk_prepare_enable(info->inter_clk);
if (ret)
if (ret) {
dev_err_probe(cpu_dev, ret, "cpu%d: failed to enable inter clk\n", cpu);
goto out_disable_mux_clock;
}
if (info->soc_data->ccifreq_supported) {
info->vproc_on_boot = regulator_get_voltage(info->proc_reg);
if (info->vproc_on_boot < 0) {
ret = info->vproc_on_boot;
dev_err(info->cpu_dev,
"invalid Vproc value: %d\n", info->vproc_on_boot);
ret = dev_err_probe(info->cpu_dev, info->vproc_on_boot,
"invalid Vproc value\n");
goto out_disable_inter_clock;
}
}
@@ -487,8 +488,8 @@ static int mtk_cpu_dvfs_info_init(struct mtk_cpu_dvfs_info *info, int cpu)
rate = clk_get_rate(info->inter_clk);
opp = dev_pm_opp_find_freq_ceil(cpu_dev, &rate);
if (IS_ERR(opp)) {
dev_err(cpu_dev, "cpu%d: failed to get intermediate opp\n", cpu);
ret = PTR_ERR(opp);
ret = dev_err_probe(cpu_dev, PTR_ERR(opp),
"cpu%d: failed to get intermediate opp\n", cpu);
goto out_disable_inter_clock;
}
info->intermediate_voltage = dev_pm_opp_get_voltage(opp);
@@ -501,7 +502,7 @@ static int mtk_cpu_dvfs_info_init(struct mtk_cpu_dvfs_info *info, int cpu)
info->opp_nb.notifier_call = mtk_cpufreq_opp_notifier;
ret = dev_pm_opp_register_notifier(cpu_dev, &info->opp_nb);
if (ret) {
dev_err(cpu_dev, "cpu%d: failed to register opp notifier\n", cpu);
dev_err_probe(cpu_dev, ret, "cpu%d: failed to register opp notifier\n", cpu);
goto out_disable_inter_clock;
}
@@ -599,13 +600,11 @@ static int mtk_cpufreq_init(struct cpufreq_policy *policy)
return 0;
}
static int mtk_cpufreq_exit(struct cpufreq_policy *policy)
static void mtk_cpufreq_exit(struct cpufreq_policy *policy)
{
struct mtk_cpu_dvfs_info *info = policy->driver_data;
dev_pm_opp_free_cpufreq_table(info->cpu_dev, &policy->freq_table);
return 0;
}
static struct cpufreq_driver mtk_cpufreq_driver = {
@@ -629,11 +628,9 @@ static int mtk_cpufreq_probe(struct platform_device *pdev)
int cpu, ret;
data = dev_get_platdata(&pdev->dev);
if (!data) {
dev_err(&pdev->dev,
"failed to get mtk cpufreq platform data\n");
return -ENODEV;
}
if (!data)
return dev_err_probe(&pdev->dev, -ENODEV,
"failed to get mtk cpufreq platform data\n");
for_each_possible_cpu(cpu) {
info = mtk_cpu_dvfs_info_lookup(cpu);
@@ -642,25 +639,22 @@ static int mtk_cpufreq_probe(struct platform_device *pdev)
info = devm_kzalloc(&pdev->dev, sizeof(*info), GFP_KERNEL);
if (!info) {
ret = -ENOMEM;
ret = dev_err_probe(&pdev->dev, -ENOMEM,
"Failed to allocate dvfs_info\n");
goto release_dvfs_info_list;
}
info->soc_data = data;
ret = mtk_cpu_dvfs_info_init(info, cpu);
if (ret) {
dev_err(&pdev->dev,
"failed to initialize dvfs info for cpu%d\n",
cpu);
if (ret)
goto release_dvfs_info_list;
}
list_add(&info->list_head, &dvfs_info_list);
}
ret = cpufreq_register_driver(&mtk_cpufreq_driver);
if (ret) {
dev_err(&pdev->dev, "failed to register mtk cpufreq driver\n");
dev_err_probe(&pdev->dev, ret, "failed to register mtk cpufreq driver\n");
goto release_dvfs_info_list;
}
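All of the conversions above rely on the same property of dev_err_probe(): it returns the error code it is given, logs at error level for real failures, and stays quiet for -EPROBE_DEFER while recording the deferral reason. That is what lets each log-then-return pair collapse into a single statement; roughly (an illustrative before/after, not taken verbatim from this driver):

	/* Before: */
	if (IS_ERR(clk)) {
		dev_err(dev, "failed to get clk\n");
		return PTR_ERR(clk);
	}

	/* After: */
	if (IS_ERR(clk))
		return dev_err_probe(dev, PTR_ERR(clk), "failed to get clk\n");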

View File

@@ -135,11 +135,10 @@ static int omap_cpu_init(struct cpufreq_policy *policy)
return 0;
}
static int omap_cpu_exit(struct cpufreq_policy *policy)
static void omap_cpu_exit(struct cpufreq_policy *policy)
{
freq_table_free();
clk_put(policy->clk);
return 0;
}
static struct cpufreq_driver omap_driver = {

View File

@@ -204,21 +204,19 @@ out:
return err;
}
static int pas_cpufreq_cpu_exit(struct cpufreq_policy *policy)
static void pas_cpufreq_cpu_exit(struct cpufreq_policy *policy)
{
/*
* We don't support CPU hotplug. Don't unmap after the system
* has already made it to a running state.
*/
if (system_state >= SYSTEM_RUNNING)
return 0;
return;
if (sdcasr_mapbase)
iounmap(sdcasr_mapbase);
if (sdcpwr_mapbase)
iounmap(sdcpwr_mapbase);
return 0;
}
static int pas_cpufreq_target(struct cpufreq_policy *policy,

View File

@@ -562,18 +562,12 @@ out:
return result;
}
static int pcc_cpufreq_cpu_exit(struct cpufreq_policy *policy)
{
return 0;
}
static struct cpufreq_driver pcc_cpufreq_driver = {
.flags = CPUFREQ_CONST_LOOPS,
.get = pcc_get_freq,
.verify = pcc_cpufreq_verify,
.target = pcc_cpufreq_target,
.init = pcc_cpufreq_cpu_init,
.exit = pcc_cpufreq_cpu_exit,
.name = "pcc-cpufreq",
};

View File

@@ -219,7 +219,7 @@ have_busfreq:
}
static int powernow_k6_cpu_exit(struct cpufreq_policy *policy)
static void powernow_k6_cpu_exit(struct cpufreq_policy *policy)
{
unsigned int i;
@@ -234,10 +234,9 @@ static int powernow_k6_cpu_exit(struct cpufreq_policy *policy)
cpufreq_freq_transition_begin(policy, &freqs);
powernow_k6_target(policy, i);
cpufreq_freq_transition_end(policy, &freqs, 0);
break;
return;
}
}
return 0;
}
static unsigned int powernow_k6_get(unsigned int cpu)

View File

@@ -644,7 +644,7 @@ static int powernow_cpu_init(struct cpufreq_policy *policy)
return 0;
}
static int powernow_cpu_exit(struct cpufreq_policy *policy)
static void powernow_cpu_exit(struct cpufreq_policy *policy)
{
#ifdef CONFIG_X86_POWERNOW_K7_ACPI
if (acpi_processor_perf) {
@@ -655,7 +655,6 @@ static int powernow_cpu_exit(struct cpufreq_policy *policy)
#endif
kfree(powernow_table);
return 0;
}
static struct cpufreq_driver powernow_driver = {

View File

@@ -1089,13 +1089,13 @@ err_out:
return -ENODEV;
}
static int powernowk8_cpu_exit(struct cpufreq_policy *pol)
static void powernowk8_cpu_exit(struct cpufreq_policy *pol)
{
struct powernow_k8_data *data = per_cpu(powernow_data, pol->cpu);
int cpu;
if (!data)
return -EINVAL;
return;
powernow_k8_cpu_exit_acpi(data);
@@ -1104,8 +1104,6 @@ static int powernowk8_cpu_exit(struct cpufreq_policy *pol)
/* pol->cpus will be empty here, use related_cpus instead. */
for_each_cpu(cpu, pol->related_cpus)
per_cpu(powernow_data, cpu) = NULL;
return 0;
}
static void query_values_on_cpu(void *_err)

View File

@@ -874,7 +874,7 @@ static int powernv_cpufreq_cpu_init(struct cpufreq_policy *policy)
return 0;
}
static int powernv_cpufreq_cpu_exit(struct cpufreq_policy *policy)
static void powernv_cpufreq_cpu_exit(struct cpufreq_policy *policy)
{
struct powernv_smp_call_data freq_data;
struct global_pstate_info *gpstates = policy->driver_data;
@@ -886,8 +886,6 @@ static int powernv_cpufreq_cpu_exit(struct cpufreq_policy *policy)
del_timer_sync(&gpstates->timer);
kfree(policy->driver_data);
return 0;
}
static int powernv_cpufreq_reboot_notifier(struct notifier_block *nb,

View File

@@ -113,10 +113,9 @@ static int cbe_cpufreq_cpu_init(struct cpufreq_policy *policy)
return 0;
}
static int cbe_cpufreq_cpu_exit(struct cpufreq_policy *policy)
static void cbe_cpufreq_cpu_exit(struct cpufreq_policy *policy)
{
cbe_cpufreq_pmi_policy_exit(policy);
return 0;
}
static int cbe_cpufreq_target(struct cpufreq_policy *policy,

View File

@@ -573,7 +573,7 @@ static int qcom_cpufreq_hw_cpu_init(struct cpufreq_policy *policy)
return qcom_cpufreq_hw_lmh_init(policy, index);
}
static int qcom_cpufreq_hw_cpu_exit(struct cpufreq_policy *policy)
static void qcom_cpufreq_hw_cpu_exit(struct cpufreq_policy *policy)
{
struct device *cpu_dev = get_cpu_device(policy->cpu);
struct qcom_cpufreq_data *data = policy->driver_data;
@@ -583,8 +583,6 @@ static int qcom_cpufreq_hw_cpu_exit(struct cpufreq_policy *policy)
qcom_cpufreq_hw_lmh_exit(data);
kfree(policy->freq_table);
kfree(data);
return 0;
}
static void qcom_cpufreq_ready(struct cpufreq_policy *policy)

View File

@@ -455,7 +455,6 @@ static int qcom_cpufreq_probe(struct platform_device *pdev)
{
struct qcom_cpufreq_drv *drv;
struct nvmem_cell *speedbin_nvmem;
struct device_node *np;
struct device *cpu_dev;
char pvs_name_buffer[] = "speedXX-pvsXX-vXX";
char *pvs_name = pvs_name_buffer;
@@ -467,16 +466,15 @@ static int qcom_cpufreq_probe(struct platform_device *pdev)
if (!cpu_dev)
return -ENODEV;
np = dev_pm_opp_of_get_opp_desc_node(cpu_dev);
struct device_node *np __free(device_node) =
dev_pm_opp_of_get_opp_desc_node(cpu_dev);
if (!np)
return -ENOENT;
ret = of_device_is_compatible(np, "operating-points-v2-kryo-cpu") ||
of_device_is_compatible(np, "operating-points-v2-krait-cpu");
if (!ret) {
of_node_put(np);
if (!ret)
return -ENOENT;
}
drv = devm_kzalloc(&pdev->dev, struct_size(drv, cpus, num_possible_cpus()),
GFP_KERNEL);
@@ -502,7 +500,6 @@ static int qcom_cpufreq_probe(struct platform_device *pdev)
}
nvmem_cell_put(speedbin_nvmem);
}
of_node_put(np);
for_each_possible_cpu(cpu) {
struct device **virt_devs = NULL;
@@ -638,7 +635,7 @@ MODULE_DEVICE_TABLE(of, qcom_cpufreq_match_list);
*/
static int __init qcom_cpufreq_init(void)
{
struct device_node *np = of_find_node_by_path("/");
struct device_node *np __free(device_node) = of_find_node_by_path("/");
const struct of_device_id *match;
int ret;
@@ -646,7 +643,6 @@ static int __init qcom_cpufreq_init(void)
return -ENODEV;
match = of_match_node(qcom_cpufreq_match_list, np);
of_node_put(np);
if (!match)
return -ENODEV;
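Both hunks above use the scope-based cleanup support from <linux/cleanup.h>: a pointer declared with __free(device_node) has of_node_put() run on it automatically when the variable goes out of scope, which is why the early returns can drop their explicit puts (the sun50i conversions later in this commit use the same helpers). A minimal sketch of the pattern, assuming the device_node cleanup class declared in <linux/of.h>:

	struct device_node *np __free(device_node) = of_find_node_by_path("/");

	if (!np)
		return -ENODEV;
	/* ... use np; of_node_put(np) runs on every return path ... */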

View File

@@ -225,7 +225,7 @@ err_np:
return -ENODEV;
}
static int qoriq_cpufreq_cpu_exit(struct cpufreq_policy *policy)
static void qoriq_cpufreq_cpu_exit(struct cpufreq_policy *policy)
{
struct cpu_data *data = policy->driver_data;
@@ -233,8 +233,6 @@ static int qoriq_cpufreq_cpu_exit(struct cpufreq_policy *policy)
kfree(data->table);
kfree(data);
policy->driver_data = NULL;
return 0;
}
static int qoriq_cpufreq_target(struct cpufreq_policy *policy,

View File

@@ -63,9 +63,9 @@ static unsigned int scmi_cpufreq_fast_switch(struct cpufreq_policy *policy,
unsigned int target_freq)
{
struct scmi_data *priv = policy->driver_data;
unsigned long freq = target_freq;
if (!perf_ops->freq_set(ph, priv->domain_id,
target_freq * 1000, true))
if (!perf_ops->freq_set(ph, priv->domain_id, freq * 1000, true))
return target_freq;
return 0;
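The temporary matters because of C arithmetic: target_freq is an unsigned int holding kHz, so target_freq * 1000 is computed in 32 bits and wraps for any frequency above about 4.29 GHz, whereas widening to unsigned long first keeps the multiplication exact on 64-bit platforms. A tiny userspace illustration (not kernel code, assuming an LP64 system):

	unsigned int khz = 4500000;	/* 4.5 GHz expressed in kHz */
	unsigned long hz = khz;		/* widen before multiplying */

	/* khz * 1000 wraps to 205032704; hz * 1000 is 4500000000. */
	printf("%u %lu\n", khz * 1000, hz * 1000);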
@@ -308,7 +308,7 @@ out_free_priv:
return ret;
}
static int scmi_cpufreq_exit(struct cpufreq_policy *policy)
static void scmi_cpufreq_exit(struct cpufreq_policy *policy)
{
struct scmi_data *priv = policy->driver_data;
@@ -316,8 +316,6 @@ static int scmi_cpufreq_exit(struct cpufreq_policy *policy)
dev_pm_opp_remove_all_dynamic(priv->cpu_dev);
free_cpumask_var(priv->opp_shared_cpus);
kfree(priv);
return 0;
}
static void scmi_cpufreq_register_em(struct cpufreq_policy *policy)

View File

@@ -167,7 +167,7 @@ out_free_opp:
return ret;
}
static int scpi_cpufreq_exit(struct cpufreq_policy *policy)
static void scpi_cpufreq_exit(struct cpufreq_policy *policy)
{
struct scpi_data *priv = policy->driver_data;
@@ -175,8 +175,6 @@ static int scpi_cpufreq_exit(struct cpufreq_policy *policy)
dev_pm_opp_free_cpufreq_table(priv->cpu_dev, &policy->freq_table);
dev_pm_opp_remove_all_dynamic(priv->cpu_dev);
kfree(priv);
return 0;
}
static struct cpufreq_driver scpi_cpufreq_driver = {

View File

@@ -135,14 +135,12 @@ static int sh_cpufreq_cpu_init(struct cpufreq_policy *policy)
return 0;
}
static int sh_cpufreq_cpu_exit(struct cpufreq_policy *policy)
static void sh_cpufreq_cpu_exit(struct cpufreq_policy *policy)
{
unsigned int cpu = policy->cpu;
struct clk *cpuclk = &per_cpu(sh_cpuclk, cpu);
clk_put(cpuclk);
return 0;
}
static struct cpufreq_driver sh_cpufreq_driver = {

View File

@@ -296,10 +296,9 @@ static int us2e_freq_cpu_init(struct cpufreq_policy *policy)
return 0;
}
static int us2e_freq_cpu_exit(struct cpufreq_policy *policy)
static void us2e_freq_cpu_exit(struct cpufreq_policy *policy)
{
us2e_freq_target(policy, 0);
return 0;
}
static struct cpufreq_driver cpufreq_us2e_driver = {

View File

@@ -140,10 +140,9 @@ static int us3_freq_cpu_init(struct cpufreq_policy *policy)
return 0;
}
static int us3_freq_cpu_exit(struct cpufreq_policy *policy)
static void us3_freq_cpu_exit(struct cpufreq_policy *policy)
{
us3_freq_target(policy, 0);
return 0;
}
static struct cpufreq_driver cpufreq_us3_driver = {

View File

@@ -400,16 +400,12 @@ static int centrino_cpu_init(struct cpufreq_policy *policy)
return 0;
}
static int centrino_cpu_exit(struct cpufreq_policy *policy)
static void centrino_cpu_exit(struct cpufreq_policy *policy)
{
unsigned int cpu = policy->cpu;
if (!per_cpu(centrino_model, cpu))
return -ENODEV;
per_cpu(centrino_model, cpu) = NULL;
return 0;
if (per_cpu(centrino_model, cpu))
per_cpu(centrino_model, cpu) = NULL;
}
/**
@@ -520,10 +516,10 @@ static struct cpufreq_driver centrino_driver = {
* or ASCII model IDs.
*/
static const struct x86_cpu_id centrino_ids[] = {
X86_MATCH_VENDOR_FAM_MODEL_FEATURE(INTEL, 6, 9, X86_FEATURE_EST, NULL),
X86_MATCH_VENDOR_FAM_MODEL_FEATURE(INTEL, 6, 13, X86_FEATURE_EST, NULL),
X86_MATCH_VENDOR_FAM_MODEL_FEATURE(INTEL, 15, 3, X86_FEATURE_EST, NULL),
X86_MATCH_VENDOR_FAM_MODEL_FEATURE(INTEL, 15, 4, X86_FEATURE_EST, NULL),
X86_MATCH_VFM_FEATURE(IFM( 6, 9), X86_FEATURE_EST, NULL),
X86_MATCH_VFM_FEATURE(IFM( 6, 13), X86_FEATURE_EST, NULL),
X86_MATCH_VFM_FEATURE(IFM(15, 3), X86_FEATURE_EST, NULL),
X86_MATCH_VFM_FEATURE(IFM(15, 4), X86_FEATURE_EST, NULL),
{}
};
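The new matchers take a single packed vendor-family-model (VFM) value instead of separate family and model arguments, with IFM(family, model) as the Intel-vendor shorthand. Roughly, the packing looks like this (a sketch of the layout used by the VFM helpers; the macro is renamed to make clear it is illustrative):

	/* model in bits 0-7, family in bits 8-15, vendor in bits 16-23 */
	#define SKETCH_VFM_MAKE(vendor, family, model) \
		(((vendor) << 16) | ((family) << 8) | (model))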

View File

@@ -18,7 +18,7 @@
#include <linux/regmap.h>
#define VERSION_ELEMENTS 3
#define MAX_PCODE_NAME_LEN 7
#define MAX_PCODE_NAME_LEN 16
#define VERSION_SHIFT 28
#define HW_INFO_INDEX 1
@@ -293,6 +293,7 @@ module_init(sti_cpufreq_init);
static const struct of_device_id __maybe_unused sti_cpufreq_of_match[] = {
{ .compatible = "st,stih407" },
{ .compatible = "st,stih410" },
{ .compatible = "st,stih418" },
{ },
};
MODULE_DEVICE_TABLE(of, sti_cpufreq_of_match);

View File

@@ -91,6 +91,9 @@ static u32 sun50i_h616_efuse_xlate(u32 speedbin)
case 0x5d00:
value = 0;
break;
case 0x6c00:
value = 5;
break;
default:
pr_warn("sun50i-cpufreq-nvmem: unknown speed bin 0x%x, using default bin 0\n",
speedbin & 0xffff);
@@ -131,26 +134,24 @@ static const struct of_device_id cpu_opp_match_list[] = {
static bool dt_has_supported_hw(void)
{
bool has_opp_supported_hw = false;
struct device_node *np, *opp;
struct device *cpu_dev;
cpu_dev = get_cpu_device(0);
if (!cpu_dev)
return false;
np = dev_pm_opp_of_get_opp_desc_node(cpu_dev);
struct device_node *np __free(device_node) =
dev_pm_opp_of_get_opp_desc_node(cpu_dev);
if (!np)
return false;
for_each_child_of_node(np, opp) {
for_each_child_of_node_scoped(np, opp) {
if (of_find_property(opp, "opp-supported-hw", NULL)) {
has_opp_supported_hw = true;
break;
}
}
of_node_put(np);
return has_opp_supported_hw;
}
@@ -165,7 +166,6 @@ static int sun50i_cpufreq_get_efuse(void)
const struct sunxi_cpufreq_data *opp_data;
struct nvmem_cell *speedbin_nvmem;
const struct of_device_id *match;
struct device_node *np;
struct device *cpu_dev;
u32 *speedbin;
int ret;
@@ -174,19 +174,18 @@
if (!cpu_dev)
return -ENODEV;
np = dev_pm_opp_of_get_opp_desc_node(cpu_dev);
struct device_node *np __free(device_node) =
dev_pm_opp_of_get_opp_desc_node(cpu_dev);
if (!np)
return -ENOENT;
match = of_match_node(cpu_opp_match_list, np);
if (!match) {
of_node_put(np);
if (!match)
return -ENOENT;
}
opp_data = match->data;
speedbin_nvmem = of_nvmem_cell_get(np, NULL);
of_node_put(np);
if (IS_ERR(speedbin_nvmem))
return dev_err_probe(cpu_dev, PTR_ERR(speedbin_nvmem),
"Could not get nvmem cell\n");
@@ -301,14 +300,9 @@ MODULE_DEVICE_TABLE(of, sun50i_cpufreq_match_list);
static const struct of_device_id *sun50i_cpufreq_match_node(void)
{
const struct of_device_id *match;
struct device_node *np;
struct device_node *np __free(device_node) = of_find_node_by_path("/");
np = of_find_node_by_path("/");
match = of_match_node(sun50i_cpufreq_match_list, np);
of_node_put(np);
return match;
return of_match_node(sun50i_cpufreq_match_list, np);
}
/*

View File

@@ -551,14 +551,12 @@ static int tegra194_cpufreq_offline(struct cpufreq_policy *policy)
return 0;
}
static int tegra194_cpufreq_exit(struct cpufreq_policy *policy)
static void tegra194_cpufreq_exit(struct cpufreq_policy *policy)
{
struct device *cpu_dev = get_cpu_device(policy->cpu);
dev_pm_opp_remove_all_dynamic(cpu_dev);
dev_pm_opp_of_cpumask_remove_table(policy->related_cpus);
return 0;
}
static int tegra194_cpufreq_set_target(struct cpufreq_policy *policy,

View File

@@ -47,6 +47,35 @@
#define AM625_SUPPORT_S_MPU_OPP BIT(1)
#define AM625_SUPPORT_T_MPU_OPP BIT(2)
enum {
AM62A7_EFUSE_M_MPU_OPP = 13,
AM62A7_EFUSE_N_MPU_OPP,
AM62A7_EFUSE_O_MPU_OPP,
AM62A7_EFUSE_P_MPU_OPP,
AM62A7_EFUSE_Q_MPU_OPP,
AM62A7_EFUSE_R_MPU_OPP,
AM62A7_EFUSE_S_MPU_OPP,
/*
* The V, U, and T speed grade numbering is out of order
* to align with the AM625 more uniformly. I promise I know
* my ABCs ;)
*/
AM62A7_EFUSE_V_MPU_OPP,
AM62A7_EFUSE_U_MPU_OPP,
AM62A7_EFUSE_T_MPU_OPP,
};
#define AM62A7_SUPPORT_N_MPU_OPP BIT(0)
#define AM62A7_SUPPORT_R_MPU_OPP BIT(1)
#define AM62A7_SUPPORT_V_MPU_OPP BIT(2)
#define AM62P5_EFUSE_O_MPU_OPP 15
#define AM62P5_EFUSE_S_MPU_OPP 19
#define AM62P5_EFUSE_U_MPU_OPP 21
#define AM62P5_SUPPORT_O_MPU_OPP BIT(0)
#define AM62P5_SUPPORT_U_MPU_OPP BIT(2)
#define VERSION_COUNT 2
struct ti_cpufreq_data;
@@ -112,6 +141,49 @@ static unsigned long omap3_efuse_xlate(struct ti_cpufreq_data *opp_data,
return BIT(efuse);
}
static unsigned long am62p5_efuse_xlate(struct ti_cpufreq_data *opp_data,
unsigned long efuse)
{
unsigned long calculated_efuse = AM62P5_SUPPORT_O_MPU_OPP;
switch (efuse) {
case AM62P5_EFUSE_U_MPU_OPP:
case AM62P5_EFUSE_S_MPU_OPP:
calculated_efuse |= AM62P5_SUPPORT_U_MPU_OPP;
fallthrough;
case AM62P5_EFUSE_O_MPU_OPP:
calculated_efuse |= AM62P5_SUPPORT_O_MPU_OPP;
}
return calculated_efuse;
}
static unsigned long am62a7_efuse_xlate(struct ti_cpufreq_data *opp_data,
unsigned long efuse)
{
unsigned long calculated_efuse = AM62A7_SUPPORT_N_MPU_OPP;
switch (efuse) {
case AM62A7_EFUSE_V_MPU_OPP:
case AM62A7_EFUSE_U_MPU_OPP:
case AM62A7_EFUSE_T_MPU_OPP:
case AM62A7_EFUSE_S_MPU_OPP:
calculated_efuse |= AM62A7_SUPPORT_V_MPU_OPP;
fallthrough;
case AM62A7_EFUSE_R_MPU_OPP:
case AM62A7_EFUSE_Q_MPU_OPP:
case AM62A7_EFUSE_P_MPU_OPP:
case AM62A7_EFUSE_O_MPU_OPP:
calculated_efuse |= AM62A7_SUPPORT_R_MPU_OPP;
fallthrough;
case AM62A7_EFUSE_N_MPU_OPP:
case AM62A7_EFUSE_M_MPU_OPP:
calculated_efuse |= AM62A7_SUPPORT_N_MPU_OPP;
}
return calculated_efuse;
}
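Because the switch cases fall through, a part fused for a higher speed grade also advertises every lower grade's OPPs: for example, an efuse value of AM62A7_EFUSE_S_MPU_OPP accumulates AM62A7_SUPPORT_V_MPU_OPP | AM62A7_SUPPORT_R_MPU_OPP | AM62A7_SUPPORT_N_MPU_OPP, i.e. all three support bits.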
static unsigned long am625_efuse_xlate(struct ti_cpufreq_data *opp_data,
unsigned long efuse)
{
@@ -234,6 +306,24 @@ static struct ti_cpufreq_soc_data am625_soc_data = {
.multi_regulator = false,
};
static struct ti_cpufreq_soc_data am62a7_soc_data = {
.efuse_xlate = am62a7_efuse_xlate,
.efuse_offset = 0x0,
.efuse_mask = 0x07c0,
.efuse_shift = 0x6,
.rev_offset = 0x0014,
.multi_regulator = false,
};
static struct ti_cpufreq_soc_data am62p5_soc_data = {
.efuse_xlate = am62p5_efuse_xlate,
.efuse_offset = 0x0,
.efuse_mask = 0x07c0,
.efuse_shift = 0x6,
.rev_offset = 0x0014,
.multi_regulator = false,
};
/**
* ti_cpufreq_get_efuse() - Parse and return efuse value present on SoC
* @opp_data: pointer to ti_cpufreq_data context
@@ -337,8 +427,8 @@ static const struct of_device_id ti_cpufreq_of_match[] = {
{ .compatible = "ti,omap34xx", .data = &omap34xx_soc_data, },
{ .compatible = "ti,omap36xx", .data = &omap36xx_soc_data, },
{ .compatible = "ti,am625", .data = &am625_soc_data, },
{ .compatible = "ti,am62a7", .data = &am625_soc_data, },
{ .compatible = "ti,am62p5", .data = &am625_soc_data, },
{ .compatible = "ti,am62a7", .data = &am62a7_soc_data, },
{ .compatible = "ti,am62p5", .data = &am62p5_soc_data, },
/* legacy */
{ .compatible = "ti,omap3430", .data = &omap34xx_soc_data, },
{ .compatible = "ti,omap3630", .data = &omap36xx_soc_data, },
@@ -417,7 +507,7 @@ static int ti_cpufreq_probe(struct platform_device *pdev)
ret = dev_pm_opp_set_config(opp_data->cpu_dev, &config);
if (ret < 0) {
dev_err(opp_data->cpu_dev, "Failed to set OPP config\n");
dev_err_probe(opp_data->cpu_dev, ret, "Failed to set OPP config\n");
goto fail_put_node;
}

View File

@@ -447,7 +447,7 @@ static int ve_spc_cpufreq_init(struct cpufreq_policy *policy)
return 0;
}
static int ve_spc_cpufreq_exit(struct cpufreq_policy *policy)
static void ve_spc_cpufreq_exit(struct cpufreq_policy *policy)
{
struct device *cpu_dev;
@@ -455,11 +455,10 @@ static int ve_spc_cpufreq_exit(struct cpufreq_policy *policy)
if (!cpu_dev) {
pr_err("%s: failed to get cpu%d device\n", __func__,
policy->cpu);
return -ENODEV;
return;
}
put_cluster_clk_and_freq_table(cpu_dev, policy->related_cpus);
return 0;
}
static struct cpufreq_driver ve_spc_cpufreq_driver = {

View File

@@ -393,10 +393,12 @@ static int ti_opp_supply_probe(struct platform_device *pdev)
}
ret = dev_pm_opp_set_config_regulators(cpu_dev, ti_opp_config_regulators);
if (ret < 0)
if (ret < 0) {
_free_optimized_voltages(dev, &opp_data);
return ret;
}
return ret;
return 0;
}
static struct platform_driver ti_opp_supply_driver = {

View File

@@ -396,7 +396,7 @@ struct cpufreq_driver {
int (*online)(struct cpufreq_policy *policy);
int (*offline)(struct cpufreq_policy *policy);
int (*exit)(struct cpufreq_policy *policy);
void (*exit)(struct cpufreq_policy *policy);
int (*suspend)(struct cpufreq_policy *policy);
int (*resume)(struct cpufreq_policy *policy);
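With .exit() returning void, a driver's teardown callback simply releases its resources; the core never had a sensible way to act on a failure at that point anyway. A minimal sketch of the new shape (hypothetical driver, names illustrative):

	static void example_cpufreq_exit(struct cpufreq_policy *policy)
	{
		struct example_data *data = policy->driver_data;

		kfree(data->table);
		kfree(data);
		policy->driver_data = NULL;
	}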
@@ -785,7 +785,7 @@ ssize_t cpufreq_show_cpus(const struct cpumask *mask, char *buf);
#ifdef CONFIG_CPU_FREQ
int cpufreq_boost_trigger_state(int state);
int cpufreq_boost_enabled(void);
bool cpufreq_boost_enabled(void);
int cpufreq_enable_boost_support(void);
bool policy_has_boost_freq(struct cpufreq_policy *policy);
@@ -1164,9 +1164,9 @@ static inline int cpufreq_boost_trigger_state(int state)
{
return 0;
}
static inline int cpufreq_boost_enabled(void)
static inline bool cpufreq_boost_enabled(void)
{
return 0;
return false;
}
static inline int cpufreq_enable_boost_support(void)