Merge tag 'pm-5.13-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull power management updates from Rafael Wysocki:
 "These add some new hardware support (for example, IceLake-D idle
  states in intel_idle), fix some issues (for example, the handling of
  negative "sleep length" values in cpuidle governors), add new
  functionality to the existing drivers (for example, scale-invariance
  support in the ACPI CPPC cpufreq driver) and clean up code all over.

  Specifics:

   - Add idle states table for IceLake-D to the intel_idle driver and
     update IceLake-X C6 data in it (Artem Bityutskiy).

   - Fix the C7 idle state on Tegra114 in the tegra cpuidle driver and
     drop the unused do_idle() firmware call from it (Dmitry Osipenko).

   - Fix cpuidle-qcom-spm Kconfig entry (He Ying).

   - Fix handling of possible negative tick_nohz_get_next_hrtimer()
     return values in cpuidle governors (Rafael Wysocki).

   - Add support for frequency-invariance to the ACPI CPPC cpufreq
     driver and update the frequency-invariance engine (FIE) to use it
     as needed (Viresh Kumar).

   - Simplify the default delay_us setting in the ACPI CPPC cpufreq
     driver (Tom Saeger).

   - Clean up frequency-related computations in the intel_pstate cpufreq
     driver (Rafael Wysocki).

   - Fix TBG parent setting for load levels in the armada-37xx cpufreq
     driver and drop the CPU PM clock .set_parent method for armada-37xx
     (Marek Behún).

   - Fix multiple issues in the armada-37xx cpufreq driver (Pali Rohár).

   - Fix handling of dev_pm_opp_of_cpumask_add_table() return values in
     cpufreq-dt to take the -EPROBE_DEFER one into account as
     appropriate (Quanyang Wang).

   - Fix format string in ia64-acpi-cpufreq (Sergei Trofimovich).

   - Drop the unused for_each_policy() macro from cpufreq (Shaokun
     Zhang).

   - Simplify computations in the schedutil cpufreq governor to avoid
     unnecessary overhead (Yue Hu).

   - Fix typos in the s5pv210 cpufreq driver (Bhaskar Chowdhury).

   - Fix cpufreq documentation links in Kconfig (Alexander Monakov).

   - Fix PCI device power state handling in pci_enable_device_flags() to
     avoid issues in some cases when the device depends on an ACPI power
     resource (Rafael Wysocki).

   - Add missing documentation of pm_runtime_resume_and_get() (Alan
     Stern).

   - Add missing static inline stub for pm_runtime_has_no_callbacks() to
     pm_runtime.h and drop the unused try_to_freeze_nowarn() definition
     (YueHaibing).

   - Drop duplicate struct device declaration from pm.h and fix a
     structure type declaration in intel_rapl.h (Wan Jiabing).

   - Use dev_set_name() instead of an open-coded equivalent of it in the
     wakeup sources code and drop a redundant local variable
     initialization from it (Andy Shevchenko, Colin Ian King).

   - Use crc32 instead of md5 for e820 memory map integrity check during
     resume from hibernation on x86 (Chris von Recklinghausen).

   - Fix typos in comments in the system-wide and hibernation support
     code (Lu Jialin).

   - Modify the generic power domains (genpd) code to avoid resuming
     devices in the "prepare" phase of system-wide suspend and
     hibernation (Ulf Hansson).

   - Add Hygon Fam18h RAPL support to the intel_rapl power capping
     driver (Pu Wen).

   - Add MAINTAINERS entry for the dynamic thermal power management
     (DTPM) code (Daniel Lezcano).

   - Add devm variants of operating performance points (OPP) API
     functions and switch over some users of the OPP framework to the
     new resource-managed API (Yangtao Li and Dmitry Osipenko).

   - Update devfreq core:

      * Register devfreq devices as cooling devices on demand (Daniel
        Lezcano).

      * Add missing unlock operation in devfreq_add_device() (Lukasz
        Luba).

      * Use the next frequency as resume_freq instead of the previous
        frequency when using the opp-suspend property (Dong Aisheng).

      * Check get_dev_status in devfreq_update_stats() (Dong Aisheng).

      * Fix set_freq path for the userspace governor in Kconfig (Dong
        Aisheng).

      * Remove invalid description of get_target_freq() (Dong Aisheng).

   - Update devfreq drivers:

      * imx8m-ddrc: Remove imx8m_ddrc_get_dev_status() and unneeded
        of_match_ptr() (Dong Aisheng, Fabio Estevam).

      * rk3399_dmc: dt-bindings: Add rockchip,pmu phandle and drop
        references to undefined symbols (Enric Balletbo i Serra, Gaël
        PORTAY).

      * rk3399_dmc: Use dev_err_probe() to simplify the code (Krzysztof
        Kozlowski).

      * imx-bus: Remove unneeded of_match_ptr() (Fabio Estevam).

   - Fix kernel-doc warnings in three places (Pierre-Louis Bossart).

   - Fix typo in the pm-graph utility code (Ricardo Ribalda)"

* tag 'pm-5.13-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (74 commits)
  PM: wakeup: remove redundant assignment to variable retval
  PM: hibernate: x86: Use crc32 instead of md5 for hibernation e820 integrity check
  cpufreq: Kconfig: fix documentation links
  PM: wakeup: use dev_set_name() directly
  PM: runtime: Add documentation for pm_runtime_resume_and_get()
  cpufreq: intel_pstate: Simplify intel_pstate_update_perf_limits()
  cpufreq: armada-37xx: Fix module unloading
  cpufreq: armada-37xx: Remove cur_frequency variable
  cpufreq: armada-37xx: Fix determining base CPU frequency
  cpufreq: armada-37xx: Fix driver cleanup when registration failed
  clk: mvebu: armada-37xx-periph: Fix workaround for switching from L1 to L0
  clk: mvebu: armada-37xx-periph: Fix switching CPU freq from 250 Mhz to 1 GHz
  cpufreq: armada-37xx: Fix the AVS value for load L1
  clk: mvebu: armada-37xx-periph: remove .set_parent method for CPU PM clock
  cpufreq: armada-37xx: Fix setting TBG parent for load levels
  cpuidle: Fix ARM_QCOM_SPM_CPUIDLE configuration
  cpuidle: tegra: Remove do_idle firmware call
  cpuidle: tegra: Fix C7 idling state on Tegra114
  PM: sleep: fix typos in comments
  cpufreq: Remove unused for_each_policy macro
  ...
Committed by Linus Torvalds, 2021-04-26 15:10:25 -07:00
66 changed files with 1028 additions and 754 deletions

Documentation/ABI/testing/sysfs-class-devfreq

@@ -97,10 +97,7 @@ Description:
 		object. The values are represented in ms. If the value is
 		less than 1 jiffy, it is considered to be 0, which means
 		no polling. This value is meaningless if the governor is
-		not polling; thus. If the governor is not using
-		devfreq-provided central polling
-		(/sys/class/devfreq/.../central_polling is 0), this value
-		may be useless.
+		not polling.
 
 		A list of governors that support the node:
 		- simple_ondmenad

Documentation/devicetree/bindings/devfreq/rk3399_dmc.txt

@@ -12,6 +12,8 @@ Required properties:
 			for details.
 - center-supply:	DMC supply node.
 - status:		Marks the node enabled/disabled.
+- rockchip,pmu:		Phandle to the syscon managing the "PMU general register
+			files".
 
 Optional properties:
 - interrupts:		The CPU interrupt number. The interrupt specifier
@@ -77,24 +79,23 @@ Following properties relate to DDR timing:
 - rockchip,ddr3_drv :		 When the DRAM type is DDR3, this parameter defines
 				 the DRAM side driver strength in ohms. Default
-				 value is DDR3_DS_40ohm.
+				 value is 40.
 - rockchip,ddr3_odt :		 When the DRAM type is DDR3, this parameter defines
 				 the DRAM side ODT strength in ohms. Default value
-				 is DDR3_ODT_120ohm.
+				 is 120.
 - rockchip,phy_ddr3_ca_drv :	 When the DRAM type is DDR3, this parameter defines
 				 the phy side CA line (incluing command line,
 				 address line and clock line) driver strength.
-				 Default value is PHY_DRV_ODT_40.
+				 Default value is 40.
 - rockchip,phy_ddr3_dq_drv :	 When the DRAM type is DDR3, this parameter defines
 				 the PHY side DQ line (including DQS/DQ/DM line)
-				 driver strength. Default value is PHY_DRV_ODT_40.
+				 driver strength. Default value is 40.
 - rockchip,phy_ddr3_odt :	 When the DRAM type is DDR3, this parameter defines
-				 the PHY side ODT strength. Default value is
-				 PHY_DRV_ODT_240.
+				 the PHY side ODT strength. Default value is 240.
 
 - rockchip,lpddr3_odt_dis_freq : When the DRAM type is LPDDR3, this parameter defines
 				 then ODT disable frequency in MHz (Mega Hz).
@@ -104,25 +105,23 @@ Following properties relate to DDR timing:
 - rockchip,lpddr3_drv :		 When the DRAM type is LPDDR3, this parameter defines
 				 the DRAM side driver strength in ohms. Default
-				 value is LP3_DS_34ohm.
+				 value is 34.
 - rockchip,lpddr3_odt :		 When the DRAM type is LPDDR3, this parameter defines
 				 the DRAM side ODT strength in ohms. Default value
-				 is LP3_ODT_240ohm.
+				 is 240.
 - rockchip,phy_lpddr3_ca_drv :	 When the DRAM type is LPDDR3, this parameter defines
 				 the PHY side CA line (including command line,
 				 address line and clock line) driver strength.
-				 Default value is PHY_DRV_ODT_40.
+				 Default value is 40.
 - rockchip,phy_lpddr3_dq_drv :	 When the DRAM type is LPDDR3, this parameter defines
 				 the PHY side DQ line (including DQS/DQ/DM line)
-				 driver strength. Default value is
-				 PHY_DRV_ODT_40.
+				 driver strength. Default value is 40.
 - rockchip,phy_lpddr3_odt :	 When dram type is LPDDR3, this parameter define
-				 the phy side odt strength, default value is
-				 PHY_DRV_ODT_240.
+				 the phy side odt strength, default value is 240.
 
 - rockchip,lpddr4_odt_dis_freq : When the DRAM type is LPDDR4, this parameter
 				 defines the ODT disable frequency in
@@ -132,32 +131,30 @@ Following properties relate to DDR timing:
 - rockchip,lpddr4_drv :		 When the DRAM type is LPDDR4, this parameter defines
 				 the DRAM side driver strength in ohms. Default
-				 value is LP4_PDDS_60ohm.
+				 value is 60.
 - rockchip,lpddr4_dq_odt :	 When the DRAM type is LPDDR4, this parameter defines
 				 the DRAM side ODT on DQS/DQ line strength in ohms.
-				 Default value is LP4_DQ_ODT_40ohm.
+				 Default value is 40.
 - rockchip,lpddr4_ca_odt :	 When the DRAM type is LPDDR4, this parameter defines
 				 the DRAM side ODT on CA line strength in ohms.
-				 Default value is LP4_CA_ODT_40ohm.
+				 Default value is 40.
 - rockchip,phy_lpddr4_ca_drv :	 When the DRAM type is LPDDR4, this parameter defines
 				 the PHY side CA line (including command address
-				 line) driver strength. Default value is
-				 PHY_DRV_ODT_40.
+				 line) driver strength. Default value is 40.
 - rockchip,phy_lpddr4_ck_cs_drv : When the DRAM type is LPDDR4, this parameter defines
 				 the PHY side clock line and CS line driver
-				 strength. Default value is PHY_DRV_ODT_80.
+				 strength. Default value is 80.
 - rockchip,phy_lpddr4_dq_drv :	 When the DRAM type is LPDDR4, this parameter defines
 				 the PHY side DQ line (including DQS/DQ/DM line)
-				 driver strength. Default value is PHY_DRV_ODT_80.
+				 driver strength. Default value is 80.
 - rockchip,phy_lpddr4_odt :	 When the DRAM type is LPDDR4, this parameter defines
-				 the PHY side ODT strength. Default value is
-				 PHY_DRV_ODT_60.
+				 the PHY side ODT strength. Default value is 60.
 
 Example:
 	dmc_opp_table: dmc_opp_table {
@@ -193,23 +190,23 @@ Example:
 		rockchip,phy_dll_dis_freq = <125>;
 		rockchip,auto_pd_dis_freq = <666>;
 		rockchip,ddr3_odt_dis_freq = <333>;
-		rockchip,ddr3_drv = <DDR3_DS_40ohm>;
-		rockchip,ddr3_odt = <DDR3_ODT_120ohm>;
-		rockchip,phy_ddr3_ca_drv = <PHY_DRV_ODT_40>;
-		rockchip,phy_ddr3_dq_drv = <PHY_DRV_ODT_40>;
-		rockchip,phy_ddr3_odt = <PHY_DRV_ODT_240>;
+		rockchip,ddr3_drv = <40>;
+		rockchip,ddr3_odt = <120>;
+		rockchip,phy_ddr3_ca_drv = <40>;
+		rockchip,phy_ddr3_dq_drv = <40>;
+		rockchip,phy_ddr3_odt = <240>;
 		rockchip,lpddr3_odt_dis_freq = <333>;
-		rockchip,lpddr3_drv = <LP3_DS_34ohm>;
-		rockchip,lpddr3_odt = <LP3_ODT_240ohm>;
-		rockchip,phy_lpddr3_ca_drv = <PHY_DRV_ODT_40>;
-		rockchip,phy_lpddr3_dq_drv = <PHY_DRV_ODT_40>;
-		rockchip,phy_lpddr3_odt = <PHY_DRV_ODT_240>;
+		rockchip,lpddr3_drv = <34>;
+		rockchip,lpddr3_odt = <240>;
+		rockchip,phy_lpddr3_ca_drv = <40>;
+		rockchip,phy_lpddr3_dq_drv = <40>;
+		rockchip,phy_lpddr3_odt = <240>;
 		rockchip,lpddr4_odt_dis_freq = <333>;
-		rockchip,lpddr4_drv = <LP4_PDDS_60ohm>;
-		rockchip,lpddr4_dq_odt = <LP4_DQ_ODT_40ohm>;
-		rockchip,lpddr4_ca_odt = <LP4_CA_ODT_40ohm>;
-		rockchip,phy_lpddr4_ca_drv = <PHY_DRV_ODT_40>;
-		rockchip,phy_lpddr4_ck_cs_drv = <PHY_DRV_ODT_80>;
-		rockchip,phy_lpddr4_dq_drv = <PHY_DRV_ODT_80>;
-		rockchip,phy_lpddr4_odt = <PHY_DRV_ODT_60>;
+		rockchip,lpddr4_drv = <60>;
+		rockchip,lpddr4_dq_odt = <40>;
+		rockchip,lpddr4_ca_odt = <40>;
+		rockchip,phy_lpddr4_ca_drv = <40>;
+		rockchip,phy_lpddr4_ck_cs_drv = <80>;
+		rockchip,phy_lpddr4_dq_drv = <80>;
+		rockchip,phy_lpddr4_odt = <60>;
 	};

Documentation/power/runtime_pm.rst

@@ -339,6 +339,10 @@ drivers/base/power/runtime.c and include/linux/pm_runtime.h:
       checked additionally, and -EACCES means that 'power.disable_depth' is
       different from 0
 
+  `int pm_runtime_resume_and_get(struct device *dev);`
+    - run pm_runtime_resume(dev) and if successful, increment the device's
+      usage counter; return the result of pm_runtime_resume
+
   `int pm_request_idle(struct device *dev);`
     - submit a request to execute the subsystem-level idle callback for the
      device (the request is represented by a work item in pm_wq); returns 0 on
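
For orientation, this is how a driver typically consumes the helper documented above; a minimal sketch, with a hypothetical foo_xact() as the caller (not part of this commit):

static int foo_xact(struct device *dev)
{
	int ret;

	/* Resume the device and take a usage reference in one call. */
	ret = pm_runtime_resume_and_get(dev);
	if (ret < 0)
		return ret;	/* usage counter already dropped on failure */

	/* ... access the now-active device ... */

	pm_runtime_put(dev);	/* release the reference when done */
	return 0;
}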

MAINTAINERS

@@ -14439,6 +14439,15 @@ F:	include/linux/pm_*
 F:	include/linux/powercap.h
 F:	kernel/configs/nopm.config
 
+DYNAMIC THERMAL POWER MANAGEMENT (DTPM)
+M:	Daniel Lezcano <daniel.lezcano@kernel.org>
+L:	linux-pm@vger.kernel.org
+S:	Supported
+B:	https://bugzilla.kernel.org
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
+F:	drivers/powercap/dtpm*
+F:	include/linux/dtpm.h
+
 POWER STATE COORDINATION INTERFACE (PSCI)
 M:	Mark Rutland <mark.rutland@arm.com>
 M:	Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>

arch/arm64/include/asm/topology.h

@@ -17,17 +17,9 @@ int pcibus_to_node(struct pci_bus *bus);
 #include <linux/arch_topology.h>
 
 void update_freq_counters_refs(void);
-void topology_scale_freq_tick(void);
-
-#ifdef CONFIG_ARM64_AMU_EXTN
-/*
- * Replace task scheduler's default counter-based
- * frequency-invariance scale factor setting.
- */
-#define arch_scale_freq_tick topology_scale_freq_tick
-#endif /* CONFIG_ARM64_AMU_EXTN */
 
 /* Replace task scheduler's default frequency-invariant accounting */
+#define arch_scale_freq_tick topology_scale_freq_tick
 #define arch_set_freq_scale topology_set_freq_scale
 #define arch_scale_freq_capacity topology_get_freq_scale
 #define arch_scale_freq_invariant topology_scale_freq_invariant

arch/arm64/kernel/topology.c

@@ -199,107 +199,10 @@ static int freq_inv_set_max_ratio(int cpu, u64 max_rate, u64 ref_rate)
 	return 0;
 }
 
-static DEFINE_STATIC_KEY_FALSE(amu_fie_key);
-#define amu_freq_invariant() static_branch_unlikely(&amu_fie_key)
-
-static void amu_fie_setup(const struct cpumask *cpus)
-{
-	bool invariant;
-	int cpu;
-
-	/* We are already set since the last insmod of cpufreq driver */
-	if (unlikely(cpumask_subset(cpus, amu_fie_cpus)))
-		return;
-
-	for_each_cpu(cpu, cpus) {
-		if (!freq_counters_valid(cpu) ||
-		    freq_inv_set_max_ratio(cpu,
-					   cpufreq_get_hw_max_freq(cpu) * 1000,
-					   arch_timer_get_rate()))
-			return;
-	}
-
-	cpumask_or(amu_fie_cpus, amu_fie_cpus, cpus);
-
-	invariant = topology_scale_freq_invariant();
-
-	/* We aren't fully invariant yet */
-	if (!invariant && !cpumask_equal(amu_fie_cpus, cpu_present_mask))
-		return;
-
-	static_branch_enable(&amu_fie_key);
-
-	pr_debug("CPUs[%*pbl]: counters will be used for FIE.",
-		 cpumask_pr_args(cpus));
-
-	/*
-	 * Task scheduler behavior depends on frequency invariance support,
-	 * either cpufreq or counter driven. If the support status changes as
-	 * a result of counter initialisation and use, retrigger the build of
-	 * scheduling domains to ensure the information is propagated properly.
-	 */
-	if (!invariant)
-		rebuild_sched_domains_energy();
-}
-
-static int init_amu_fie_callback(struct notifier_block *nb, unsigned long val,
-				 void *data)
-{
-	struct cpufreq_policy *policy = data;
-
-	if (val == CPUFREQ_CREATE_POLICY)
-		amu_fie_setup(policy->related_cpus);
-
-	/*
-	 * We don't need to handle CPUFREQ_REMOVE_POLICY event as the AMU
-	 * counters don't have any dependency on cpufreq driver once we have
-	 * initialized AMU support and enabled invariance. The AMU counters will
-	 * keep on working just fine in the absence of the cpufreq driver, and
-	 * for the CPUs for which there are no counters available, the last set
-	 * value of freq_scale will remain valid as that is the frequency those
-	 * CPUs are running at.
-	 */
-
-	return 0;
-}
-
-static struct notifier_block init_amu_fie_notifier = {
-	.notifier_call = init_amu_fie_callback,
-};
-
-static int __init init_amu_fie(void)
-{
-	int ret;
-
-	if (!zalloc_cpumask_var(&amu_fie_cpus, GFP_KERNEL))
-		return -ENOMEM;
-
-	ret = cpufreq_register_notifier(&init_amu_fie_notifier,
-					CPUFREQ_POLICY_NOTIFIER);
-	if (ret)
-		free_cpumask_var(amu_fie_cpus);
-
-	return ret;
-}
-core_initcall(init_amu_fie);
-
-bool arch_freq_counters_available(const struct cpumask *cpus)
-{
-	return amu_freq_invariant() &&
-	       cpumask_subset(cpus, amu_fie_cpus);
-}
-
-void topology_scale_freq_tick(void)
+static void amu_scale_freq_tick(void)
 {
 	u64 prev_core_cnt, prev_const_cnt;
 	u64 core_cnt, const_cnt, scale;
-	int cpu = smp_processor_id();
-
-	if (!amu_freq_invariant())
-		return;
-
-	if (!cpumask_test_cpu(cpu, amu_fie_cpus))
-		return;
 
 	prev_const_cnt = this_cpu_read(arch_const_cycles_prev);
 	prev_core_cnt = this_cpu_read(arch_core_cycles_prev);
@@ -327,9 +230,79 @@ void topology_scale_freq_tick(void)
 		     const_cnt - prev_const_cnt);
 
 	scale = min_t(unsigned long, scale, SCHED_CAPACITY_SCALE);
-	this_cpu_write(freq_scale, (unsigned long)scale);
+	this_cpu_write(arch_freq_scale, (unsigned long)scale);
 }
 
+static struct scale_freq_data amu_sfd = {
+	.source = SCALE_FREQ_SOURCE_ARCH,
+	.set_freq_scale = amu_scale_freq_tick,
+};
+
+static void amu_fie_setup(const struct cpumask *cpus)
+{
+	int cpu;
+
+	/* We are already set since the last insmod of cpufreq driver */
+	if (unlikely(cpumask_subset(cpus, amu_fie_cpus)))
+		return;
+
+	for_each_cpu(cpu, cpus) {
+		if (!freq_counters_valid(cpu) ||
+		    freq_inv_set_max_ratio(cpu,
+					   cpufreq_get_hw_max_freq(cpu) * 1000,
+					   arch_timer_get_rate()))
+			return;
+	}
+
+	cpumask_or(amu_fie_cpus, amu_fie_cpus, cpus);
+
+	topology_set_scale_freq_source(&amu_sfd, amu_fie_cpus);
+
+	pr_debug("CPUs[%*pbl]: counters will be used for FIE.",
+		 cpumask_pr_args(cpus));
+}
+
+static int init_amu_fie_callback(struct notifier_block *nb, unsigned long val,
+				 void *data)
+{
+	struct cpufreq_policy *policy = data;
+
+	if (val == CPUFREQ_CREATE_POLICY)
+		amu_fie_setup(policy->related_cpus);
+
+	/*
+	 * We don't need to handle CPUFREQ_REMOVE_POLICY event as the AMU
+	 * counters don't have any dependency on cpufreq driver once we have
+	 * initialized AMU support and enabled invariance. The AMU counters will
+	 * keep on working just fine in the absence of the cpufreq driver, and
+	 * for the CPUs for which there are no counters available, the last set
+	 * value of arch_freq_scale will remain valid as that is the frequency
+	 * those CPUs are running at.
+	 */
+
+	return 0;
+}
+
+static struct notifier_block init_amu_fie_notifier = {
+	.notifier_call = init_amu_fie_callback,
+};
+
+static int __init init_amu_fie(void)
+{
+	int ret;
+
+	if (!zalloc_cpumask_var(&amu_fie_cpus, GFP_KERNEL))
+		return -ENOMEM;
+
+	ret = cpufreq_register_notifier(&init_amu_fie_notifier,
+					CPUFREQ_POLICY_NOTIFIER);
+	if (ret)
+		free_cpumask_var(amu_fie_cpus);
+
+	return ret;
+}
+core_initcall(init_amu_fie);
+
 #ifdef CONFIG_ACPI_CPPC_LIB
 #include <acpi/cppc_acpi.h>

arch/x86/kernel/e820.c

@@ -31,8 +31,8 @@
  *   - inform the user about the firmware's notion of memory layout
  *     via /sys/firmware/memmap
  *
- *   - the hibernation code uses it to generate a kernel-independent MD5
- *     fingerprint of the physical memory layout of a system.
+ *   - the hibernation code uses it to generate a kernel-independent CRC32
+ *     checksum of the physical memory layout of a system.
  *
  * - 'e820_table_kexec': a slightly modified (by the kernel) firmware version
  *   passed to us by the bootloader - the major difference between

arch/x86/power/hibernate.c

@@ -13,8 +13,8 @@
 #include <linux/kdebug.h>
 #include <linux/cpu.h>
 #include <linux/pgtable.h>
-
-#include <crypto/hash.h>
+#include <linux/types.h>
+#include <linux/crc32.h>
 
 #include <asm/e820/api.h>
 #include <asm/init.h>
@@ -54,95 +54,33 @@ int pfn_is_nosave(unsigned long pfn)
 	return pfn >= nosave_begin_pfn && pfn < nosave_end_pfn;
 }
 
-#define MD5_DIGEST_SIZE 16
-
 struct restore_data_record {
 	unsigned long jump_address;
 	unsigned long jump_address_phys;
 	unsigned long cr3;
 	unsigned long magic;
-	u8 e820_digest[MD5_DIGEST_SIZE];
+	unsigned long e820_checksum;
 };
 
-#if IS_BUILTIN(CONFIG_CRYPTO_MD5)
 /**
- * get_e820_md5 - calculate md5 according to given e820 table
+ * compute_e820_crc32 - calculate crc32 of a given e820 table
  *
  * @table: the e820 table to be calculated
- * @buf: the md5 result to be stored to
+ *
+ * Return: the resulting checksum
  */
-static int get_e820_md5(struct e820_table *table, void *buf)
+static inline u32 compute_e820_crc32(struct e820_table *table)
 {
-	struct crypto_shash *tfm;
-	struct shash_desc *desc;
-	int size;
-	int ret = 0;
-
-	tfm = crypto_alloc_shash("md5", 0, 0);
-	if (IS_ERR(tfm))
-		return -ENOMEM;
-
-	desc = kmalloc(sizeof(struct shash_desc) + crypto_shash_descsize(tfm),
-		       GFP_KERNEL);
-	if (!desc) {
-		ret = -ENOMEM;
-		goto free_tfm;
-	}
-
-	desc->tfm = tfm;
-
-	size = offsetof(struct e820_table, entries) +
+	int size = offsetof(struct e820_table, entries) +
 		sizeof(struct e820_entry) * table->nr_entries;
 
-	if (crypto_shash_digest(desc, (u8 *)table, size, buf))
-		ret = -EINVAL;
-
-	kfree_sensitive(desc);
-
-free_tfm:
-	crypto_free_shash(tfm);
-	return ret;
-}
-
-static int hibernation_e820_save(void *buf)
-{
-	return get_e820_md5(e820_table_firmware, buf);
-}
-
-static bool hibernation_e820_mismatch(void *buf)
-{
-	int ret;
-	u8 result[MD5_DIGEST_SIZE];
-
-	memset(result, 0, MD5_DIGEST_SIZE);
-	/* If there is no digest in suspend kernel, let it go. */
-	if (!memcmp(result, buf, MD5_DIGEST_SIZE))
-		return false;
-
-	ret = get_e820_md5(e820_table_firmware, result);
-	if (ret)
-		return true;
-
-	return memcmp(result, buf, MD5_DIGEST_SIZE) ? true : false;
-}
-#else
-static int hibernation_e820_save(void *buf)
-{
-	return 0;
-}
-
-static bool hibernation_e820_mismatch(void *buf)
-{
-	/* If md5 is not builtin for restore kernel, let it go. */
-	return false;
-}
-#endif
+	return ~crc32_le(~0, (unsigned char const *)table, size);
+}
 
 #ifdef CONFIG_X86_64
-#define RESTORE_MAGIC	0x23456789ABCDEF01UL
+#define RESTORE_MAGIC	0x23456789ABCDEF02UL
 #else
-#define RESTORE_MAGIC	0x12345678UL
+#define RESTORE_MAGIC	0x12345679UL
 #endif
 
@@ -179,7 +117,8 @@ int arch_hibernation_header_save(void *addr, unsigned int max_size)
 	 */
 	rdr->cr3 = restore_cr3 & ~CR3_PCID_MASK;
 
-	return hibernation_e820_save(rdr->e820_digest);
+	rdr->e820_checksum = compute_e820_crc32(e820_table_firmware);
+	return 0;
 }
 
@@ -200,7 +139,7 @@ int arch_hibernation_header_restore(void *addr)
 	jump_address_phys = rdr->jump_address_phys;
 	restore_cr3 = rdr->cr3;
 
-	if (hibernation_e820_mismatch(rdr->e820_digest)) {
+	if (rdr->e820_checksum != compute_e820_crc32(e820_table_firmware)) {
 		pr_crit("Hibernate inconsistent memory map detected!\n");
 		return -ENODEV;
 	}
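
The checksum convention used by compute_e820_crc32() above is worth noting: seeding crc32_le() with all ones and inverting the result yields the standard IEEE CRC-32, the same value zlib's crc32() computes for a buffer. A self-contained sketch of that pattern (the helper name is hypothetical):

#include <linux/crc32.h>

/* Standard CRC-32 of an arbitrary buffer, matching the convention
 * used by compute_e820_crc32() above.
 */
static u32 buf_checksum(const void *buf, size_t len)
{
	return ~crc32_le(~0, buf, len);
}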

drivers/base/arch_topology.c

@@ -21,17 +21,94 @@
 #include <linux/sched.h>
 #include <linux/smp.h>
 
+static DEFINE_PER_CPU(struct scale_freq_data *, sft_data);
+static struct cpumask scale_freq_counters_mask;
+static bool scale_freq_invariant;
+
+static bool supports_scale_freq_counters(const struct cpumask *cpus)
+{
+	return cpumask_subset(cpus, &scale_freq_counters_mask);
+}
+
 bool topology_scale_freq_invariant(void)
 {
 	return cpufreq_supports_freq_invariance() ||
-	       arch_freq_counters_available(cpu_online_mask);
+	       supports_scale_freq_counters(cpu_online_mask);
 }
 
-__weak bool arch_freq_counters_available(const struct cpumask *cpus)
+static void update_scale_freq_invariant(bool status)
 {
-	return false;
+	if (scale_freq_invariant == status)
+		return;
+
+	/*
+	 * Task scheduler behavior depends on frequency invariance support,
+	 * either cpufreq or counter driven. If the support status changes as
+	 * a result of counter initialisation and use, retrigger the build of
+	 * scheduling domains to ensure the information is propagated properly.
+	 */
+	if (topology_scale_freq_invariant() == status) {
+		scale_freq_invariant = status;
+		rebuild_sched_domains_energy();
+	}
 }
-DEFINE_PER_CPU(unsigned long, freq_scale) = SCHED_CAPACITY_SCALE;
+
+void topology_set_scale_freq_source(struct scale_freq_data *data,
+				    const struct cpumask *cpus)
+{
+	struct scale_freq_data *sfd;
+	int cpu;
+
+	/*
+	 * Avoid calling rebuild_sched_domains() unnecessarily if FIE is
+	 * supported by cpufreq.
+	 */
+	if (cpumask_empty(&scale_freq_counters_mask))
+		scale_freq_invariant = topology_scale_freq_invariant();
+
+	for_each_cpu(cpu, cpus) {
+		sfd = per_cpu(sft_data, cpu);
+
+		/* Use ARCH provided counters whenever possible */
+		if (!sfd || sfd->source != SCALE_FREQ_SOURCE_ARCH) {
+			per_cpu(sft_data, cpu) = data;
+			cpumask_set_cpu(cpu, &scale_freq_counters_mask);
+		}
+	}
+
+	update_scale_freq_invariant(true);
+}
+EXPORT_SYMBOL_GPL(topology_set_scale_freq_source);
+
+void topology_clear_scale_freq_source(enum scale_freq_source source,
+				      const struct cpumask *cpus)
+{
+	struct scale_freq_data *sfd;
+	int cpu;
+
+	for_each_cpu(cpu, cpus) {
+		sfd = per_cpu(sft_data, cpu);
+
+		if (sfd && sfd->source == source) {
+			per_cpu(sft_data, cpu) = NULL;
+			cpumask_clear_cpu(cpu, &scale_freq_counters_mask);
+		}
+	}
+
+	update_scale_freq_invariant(false);
+}
+EXPORT_SYMBOL_GPL(topology_clear_scale_freq_source);
+
+void topology_scale_freq_tick(void)
+{
+	struct scale_freq_data *sfd = *this_cpu_ptr(&sft_data);
+
+	if (sfd)
+		sfd->set_freq_scale();
+}
+
+DEFINE_PER_CPU(unsigned long, arch_freq_scale) = SCHED_CAPACITY_SCALE;
+EXPORT_PER_CPU_SYMBOL_GPL(arch_freq_scale);
 
 void topology_set_freq_scale(const struct cpumask *cpus, unsigned long cur_freq,
 			     unsigned long max_freq)
@@ -47,13 +124,13 @@ void topology_set_freq_scale(const struct cpumask *cpus, unsigned long cur_freq,
 	 * want to update the scale factor with information from CPUFREQ.
 	 * Instead the scale factor will be updated from arch_scale_freq_tick.
 	 */
-	if (arch_freq_counters_available(cpus))
+	if (supports_scale_freq_counters(cpus))
 		return;
 
 	scale = (cur_freq << SCHED_CAPACITY_SHIFT) / max_freq;
 
 	for_each_cpu(i, cpus)
-		per_cpu(freq_scale, i) = scale;
+		per_cpu(arch_freq_scale, i) = scale;
 }
 
 DEFINE_PER_CPU(unsigned long, cpu_scale) = SCHED_CAPACITY_SCALE;
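
Both counter providers in this merge (the arm64 AMU code above and the CPPC cpufreq driver below) plug into this interface the same way. A condensed registration sketch, with hypothetical mydrv_* names (illustrative only, not from this commit):

#include <linux/arch_topology.h>
#include <linux/cpumask.h>
#include <linux/init.h>

/* Called from the scheduler tick on each registered CPU; publishes the
 * current/max frequency ratio, where SCHED_CAPACITY_SCALE means "at max".
 */
static void mydrv_scale_freq_tick(void)
{
	/* A real driver derives the ratio from hardware counters. */
	this_cpu_write(arch_freq_scale, SCHED_CAPACITY_SCALE);
}

static struct scale_freq_data mydrv_sfd = {
	.source		= SCALE_FREQ_SOURCE_ARCH,
	.set_freq_scale	= mydrv_scale_freq_tick,
};

static int __init mydrv_init(void)
{
	/* Claim tick-time scale-factor updates for the online CPUs. */
	topology_set_scale_freq_source(&mydrv_sfd, cpu_online_mask);
	return 0;
}

static void __exit mydrv_exit(void)
{
	topology_clear_scale_freq_source(SCALE_FREQ_SOURCE_ARCH,
					 cpu_online_mask);
}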

drivers/base/power/clock_ops.c

@@ -140,7 +140,7 @@ static void pm_clk_op_unlock(struct pm_subsys_data *psd, unsigned long *flags)
 }
 
 /**
- * pm_clk_enable - Enable a clock, reporting any errors
+ * __pm_clk_enable - Enable a clock, reporting any errors
  * @dev: The device for the given clock
  * @ce: PM clock entry corresponding to the clock.
  */

drivers/base/power/domain.c

@@ -1087,34 +1087,6 @@ static void genpd_sync_power_on(struct generic_pm_domain *genpd, bool use_lock,
 	genpd->status = GENPD_STATE_ON;
 }
 
-/**
- * resume_needed - Check whether to resume a device before system suspend.
- * @dev: Device to check.
- * @genpd: PM domain the device belongs to.
- *
- * There are two cases in which a device that can wake up the system from sleep
- * states should be resumed by genpd_prepare(): (1) if the device is enabled
- * to wake up the system and it has to remain active for this purpose while the
- * system is in the sleep state and (2) if the device is not enabled to wake up
- * the system from sleep states and it generally doesn't generate wakeup signals
- * by itself (those signals are generated on its behalf by other parts of the
- * system). In the latter case it may be necessary to reconfigure the device's
- * wakeup settings during system suspend, because it may have been set up to
- * signal remote wakeup from the system's working state as needed by runtime PM.
- * Return 'true' in either of the above cases.
- */
-static bool resume_needed(struct device *dev,
-			  const struct generic_pm_domain *genpd)
-{
-	bool active_wakeup;
-
-	if (!device_can_wakeup(dev))
-		return false;
-
-	active_wakeup = genpd_is_active_wakeup(genpd);
-	return device_may_wakeup(dev) ? active_wakeup : !active_wakeup;
-}
-
 /**
  * genpd_prepare - Start power transition of a device in a PM domain.
  * @dev: Device to start the transition of.
@@ -1135,14 +1107,6 @@ static int genpd_prepare(struct device *dev)
 	if (IS_ERR(genpd))
 		return -EINVAL;
 
-	/*
-	 * If a wakeup request is pending for the device, it should be woken up
-	 * at this point and a system wakeup event should be reported if it's
-	 * set up to wake up the system from sleep states.
-	 */
-	if (resume_needed(dev, genpd))
-		pm_runtime_resume(dev);
-
 	genpd_lock(genpd);
 
 	if (genpd->prepared_count++ == 0)

drivers/base/power/runtime.c

@@ -951,7 +951,7 @@ static void pm_runtime_work(struct work_struct *work)
 
 /**
  * pm_suspend_timer_fn - Timer function for pm_schedule_suspend().
- * @data: Device pointer passed by pm_schedule_suspend().
+ * @timer: hrtimer used by pm_schedule_suspend().
  *
  * Check if the time is right and queue a suspend request.
  */

drivers/base/power/wakeup.c

@@ -400,9 +400,9 @@ void device_wakeup_detach_irq(struct device *dev)
 }
 
 /**
- * device_wakeup_arm_wake_irqs(void)
+ * device_wakeup_arm_wake_irqs -
  *
- * Itereates over the list of device wakeirqs to arm them.
+ * Iterates over the list of device wakeirqs to arm them.
  */
 void device_wakeup_arm_wake_irqs(void)
 {
@@ -416,9 +416,9 @@ void device_wakeup_arm_wake_irqs(void)
 }
 
 /**
- * device_wakeup_disarm_wake_irqs(void)
+ * device_wakeup_disarm_wake_irqs -
  *
- * Itereates over the list of device wakeirqs to disarm them.
+ * Iterates over the list of device wakeirqs to disarm them.
  */
 void device_wakeup_disarm_wake_irqs(void)
 {
@@ -532,6 +532,7 @@ EXPORT_SYMBOL_GPL(device_init_wakeup);
 /**
  * device_set_wakeup_enable - Enable or disable a device to wake up the system.
  * @dev: Device to handle.
+ * @enable: enable/disable flag
  */
 int device_set_wakeup_enable(struct device *dev, bool enable)
 {
@@ -581,7 +582,7 @@ static bool wakeup_source_not_registered(struct wakeup_source *ws)
  */
 
 /**
- * wakup_source_activate - Mark given wakeup source as active.
+ * wakeup_source_activate - Mark given wakeup source as active.
  * @ws: Wakeup source to handle.
  *
  * Update the @ws' statistics and, if @ws has just been activated, notify the PM
@@ -686,7 +687,7 @@ static inline void update_prevent_sleep_time(struct wakeup_source *ws,
 #endif
 
 /**
- * wakup_source_deactivate - Mark given wakeup source as inactive.
+ * wakeup_source_deactivate - Mark given wakeup source as inactive.
  * @ws: Wakeup source to handle.
 *
 * Update the @ws' statistics and notify the PM core that the wakeup source has
@@ -785,7 +786,7 @@ EXPORT_SYMBOL_GPL(pm_relax);
 
 /**
  * pm_wakeup_timer_fn - Delayed finalization of a wakeup event.
- * @data: Address of the wakeup source object associated with the event source.
+ * @t: timer list
  *
 * Call wakeup_source_deactivate() for the wakeup source whose address is stored
 * in @data if it is currently active and its timer has not been canceled and
@@ -1021,7 +1022,7 @@ bool pm_save_wakeup_count(unsigned int count)
 #ifdef CONFIG_PM_AUTOSLEEP
 /**
  * pm_wakep_autosleep_enabled - Modify autosleep_enabled for all wakeup sources.
- * @enabled: Whether to set or to clear the autosleep_enabled flags.
+ * @set: Whether to set or to clear the autosleep_enabled flags.
  */
 void pm_wakep_autosleep_enabled(bool set)
 {

drivers/base/power/wakeup_stats.c

@@ -137,7 +137,7 @@ static struct device *wakeup_source_device_create(struct device *parent,
 					   struct wakeup_source *ws)
 {
 	struct device *dev = NULL;
-	int retval = -ENODEV;
+	int retval;
 
 	dev = kzalloc(sizeof(*dev), GFP_KERNEL);
 	if (!dev) {

drivers/clk/mvebu/armada-37xx-periph.c

@@ -84,6 +84,7 @@ struct clk_pm_cpu {
 	void __iomem *reg_div;
 	u8 shift_div;
 	struct regmap *nb_pm_base;
+	unsigned long l1_expiration;
 };
 
 #define to_clk_double_div(_hw) container_of(_hw, struct clk_double_div, hw)
@@ -440,33 +441,6 @@ static u8 clk_pm_cpu_get_parent(struct clk_hw *hw)
 	return val;
 }
 
-static int clk_pm_cpu_set_parent(struct clk_hw *hw, u8 index)
-{
-	struct clk_pm_cpu *pm_cpu = to_clk_pm_cpu(hw);
-	struct regmap *base = pm_cpu->nb_pm_base;
-	int load_level;
-
-	/*
-	 * We set the clock parent only if the DVFS is available but
-	 * not enabled.
-	 */
-	if (IS_ERR(base) || armada_3700_pm_dvfs_is_enabled(base))
-		return -EINVAL;
-
-	/* Set the parent clock for all the load level */
-	for (load_level = 0; load_level < LOAD_LEVEL_NR; load_level++) {
-		unsigned int reg, mask, val,
-			offset = ARMADA_37XX_NB_TBG_SEL_OFF;
-
-		armada_3700_pm_dvfs_update_regs(load_level, &reg, &offset);
-
-		val = index << offset;
-		mask = ARMADA_37XX_NB_TBG_SEL_MASK << offset;
-		regmap_update_bits(base, reg, mask, val);
-	}
-	return 0;
-}
-
 static unsigned long clk_pm_cpu_recalc_rate(struct clk_hw *hw,
 					    unsigned long parent_rate)
 {
@@ -514,8 +488,10 @@ static long clk_pm_cpu_round_rate(struct clk_hw *hw, unsigned long rate,
 }
 
 /*
- * Switching the CPU from the L2 or L3 frequencies (300 and 200 Mhz
- * respectively) to L0 frequency (1.2 Ghz) requires a significant
+ * Workaround when base CPU frequnecy is 1000 or 1200 MHz
+ *
+ * Switching the CPU from the L2 or L3 frequencies (250/300 or 200 MHz
+ * respectively) to L0 frequency (1/1.2 GHz) requires a significant
  * amount of time to let VDD stabilize to the appropriate
  * voltage. This amount of time is large enough that it cannot be
  * covered by the hardware countdown register. Due to this, the CPU
@@ -525,26 +501,56 @@ static long clk_pm_cpu_round_rate(struct clk_hw *hw, unsigned long rate,
 * To work around this problem, we prevent switching directly from the
 * L2/L3 frequencies to the L0 frequency, and instead switch to the L1
 * frequency in-between. The sequence therefore becomes:
- * 1. First switch from L2/L3(200/300MHz) to L1(600MHZ)
+ * 1. First switch from L2/L3 (200/250/300 MHz) to L1 (500/600 MHz)
  * 2. Sleep 20ms for stabling VDD voltage
- * 3. Then switch from L1(600MHZ) to L0(1200Mhz).
+ * 3. Then switch from L1 (500/600 MHz) to L0 (1000/1200 MHz).
  */
-static void clk_pm_cpu_set_rate_wa(unsigned long rate, struct regmap *base)
+static void clk_pm_cpu_set_rate_wa(struct clk_pm_cpu *pm_cpu,
+				   unsigned int new_level, unsigned long rate,
+				   struct regmap *base)
 {
 	unsigned int cur_level;
 
-	if (rate != 1200 * 1000 * 1000)
-		return;
-
 	regmap_read(base, ARMADA_37XX_NB_CPU_LOAD, &cur_level);
 	cur_level &= ARMADA_37XX_NB_CPU_LOAD_MASK;
-	if (cur_level <= ARMADA_37XX_DVFS_LOAD_1)
+
+	if (cur_level == new_level)
 		return;
 
+	/*
+	 * System wants to go to L1 on its own. If we are going from L2/L3,
+	 * remember when 20ms will expire. If from L0, set the value so that
+	 * next switch to L0 won't have to wait.
+	 */
+	if (new_level == ARMADA_37XX_DVFS_LOAD_1) {
+		if (cur_level == ARMADA_37XX_DVFS_LOAD_0)
+			pm_cpu->l1_expiration = jiffies;
+		else
+			pm_cpu->l1_expiration = jiffies + msecs_to_jiffies(20);
+		return;
+	}
+
+	/*
+	 * If we are setting to L2/L3, just invalidate L1 expiration time,
+	 * sleeping is not needed.
+	 */
+	if (rate < 1000*1000*1000)
+		goto invalidate_l1_exp;
+
+	/*
+	 * We are going to L0 with rate >= 1GHz. Check whether we have been at
+	 * L1 for long enough time. If not, go to L1 for 20ms.
+	 */
+	if (pm_cpu->l1_expiration && jiffies >= pm_cpu->l1_expiration)
+		goto invalidate_l1_exp;
+
 	regmap_update_bits(base, ARMADA_37XX_NB_CPU_LOAD,
 			   ARMADA_37XX_NB_CPU_LOAD_MASK,
 			   ARMADA_37XX_DVFS_LOAD_1);
 	msleep(20);
+
+invalidate_l1_exp:
+	pm_cpu->l1_expiration = 0;
 }
 
 static int clk_pm_cpu_set_rate(struct clk_hw *hw, unsigned long rate,
@@ -578,7 +584,9 @@ static int clk_pm_cpu_set_rate(struct clk_hw *hw, unsigned long rate,
 			reg = ARMADA_37XX_NB_CPU_LOAD;
 			mask = ARMADA_37XX_NB_CPU_LOAD_MASK;
 
-			clk_pm_cpu_set_rate_wa(rate, base);
+			/* Apply workaround when base CPU frequency is 1000 or 1200 MHz */
+			if (parent_rate >= 1000*1000*1000)
+				clk_pm_cpu_set_rate_wa(pm_cpu, load_level, rate, base);
 
 			regmap_update_bits(base, reg, mask, load_level);
 
@@ -592,7 +600,6 @@ static int clk_pm_cpu_set_rate(struct clk_hw *hw, unsigned long rate,
 
 static const struct clk_ops clk_pm_cpu_ops = {
 	.get_parent = clk_pm_cpu_get_parent,
-	.set_parent = clk_pm_cpu_set_parent,
 	.round_rate = clk_pm_cpu_round_rate,
 	.set_rate = clk_pm_cpu_set_rate,
 	.recalc_rate = clk_pm_cpu_recalc_rate,

drivers/cpufreq/Kconfig

@@ -13,7 +13,8 @@ config CPU_FREQ
 	  clock speed, you need to either enable a dynamic cpufreq governor
 	  (see below) after boot, or use a userspace tool.
 
-	  For details, take a look at <file:Documentation/cpu-freq>.
+	  For details, take a look at
+	  <file:Documentation/admin-guide/pm/cpufreq.rst>.
 
 	  If in doubt, say N.
 
@@ -140,8 +141,6 @@ config CPU_FREQ_GOV_USERSPACE
 	  To compile this driver as a module, choose M here: the
 	  module will be called cpufreq_userspace.
 
-	  For details, take a look at <file:Documentation/cpu-freq/>.
-
 	  If in doubt, say Y.
 
 config CPU_FREQ_GOV_ONDEMAND
@@ -158,7 +157,8 @@ config CPU_FREQ_GOV_ONDEMAND
 	  To compile this driver as a module, choose M here: the
 	  module will be called cpufreq_ondemand.
 
-	  For details, take a look at linux/Documentation/cpu-freq.
+	  For details, take a look at
+	  <file:Documentation/admin-guide/pm/cpufreq.rst>.
 
 	  If in doubt, say N.
 
@@ -182,7 +182,8 @@ config CPU_FREQ_GOV_CONSERVATIVE
 	  To compile this driver as a module, choose M here: the
 	  module will be called cpufreq_conservative.
 
-	  For details, take a look at linux/Documentation/cpu-freq.
+	  For details, take a look at
+	  <file:Documentation/admin-guide/pm/cpufreq.rst>.
 
 	  If in doubt, say N.
 
@@ -246,8 +247,6 @@ config IA64_ACPI_CPUFREQ
 	  This driver adds a CPUFreq driver which utilizes the ACPI
 	  Processor Performance States.
 
-	  For details, take a look at <file:Documentation/cpu-freq/>.
-
 	  If in doubt, say N.
 
 endif
@@ -271,8 +270,6 @@ config LOONGSON2_CPUFREQ
 	  Loongson2F and it's successors support this feature.
 
-	  For details, take a look at <file:Documentation/cpu-freq/>.
-
 	  If in doubt, say N.
 
 config LOONGSON1_CPUFREQ
@@ -282,8 +279,6 @@ config LOONGSON1_CPUFREQ
 	  This option adds a CPUFreq driver for loongson1 processors which
 	  support software configurable cpu frequency.
 
-	  For details, take a look at <file:Documentation/cpu-freq/>.
-
 	  If in doubt, say N.
 
 endif
@@ -293,8 +288,6 @@ config SPARC_US3_CPUFREQ
 	help
 	  This adds the CPUFreq driver for UltraSPARC-III processors.
 
-	  For details, take a look at <file:Documentation/cpu-freq>.
-
 	  If in doubt, say N.
 
 config SPARC_US2E_CPUFREQ
@@ -302,8 +295,6 @@ config SPARC_US2E_CPUFREQ
 	help
 	  This adds the CPUFreq driver for UltraSPARC-IIe processors.
 
-	  For details, take a look at <file:Documentation/cpu-freq>.
-
 	  If in doubt, say N.
 
 endif
@@ -318,8 +309,6 @@ config SH_CPU_FREQ
 	  will also generate a notice in the boot log before disabling
 	  itself if the CPU in question is not capable of rate rounding.
 
-	  For details, take a look at <file:Documentation/cpu-freq>.
-
 	  If unsure, say N.
 
 endif

drivers/cpufreq/Kconfig.arm

@@ -19,6 +19,16 @@ config ACPI_CPPC_CPUFREQ
 
 	  If in doubt, say N.
 
+config ACPI_CPPC_CPUFREQ_FIE
+	bool "Frequency Invariance support for CPPC cpufreq driver"
+	depends on ACPI_CPPC_CPUFREQ && GENERIC_ARCH_TOPOLOGY
+	default y
+	help
+	  This extends frequency invariance support in the CPPC cpufreq driver,
+	  by using CPPC delivered and reference performance counters.
+
+	  If in doubt, say N.
+
 config ARM_ALLWINNER_SUN50I_CPUFREQ_NVMEM
 	tristate "Allwinner nvmem based SUN50I CPUFreq driver"
 	depends on ARCH_SUNXI
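
For reference, the "delivered and reference performance counters" mentioned in the help text feed a ratio computation; a sketch of the standard CPPC arithmetic (illustrative names, the driver's real helper is cppc_perf_from_fbctrs() in the file below):

#include <linux/math64.h>
#include <linux/types.h>

/* Delivered performance between two feedback-counter snapshots:
 * scale the reference performance by the ratio of counter deltas.
 */
static u64 perf_from_fb_ctrs(u64 reference_perf,
			     u64 ref_t0, u64 delivered_t0,
			     u64 ref_t1, u64 delivered_t1)
{
	u64 delta_reference = ref_t1 - ref_t0;
	u64 delta_delivered = delivered_t1 - delivered_t0;

	/* No elapsed reference ticks: report reference performance. */
	if (!delta_reference)
		return reference_perf;

	return div64_u64(reference_perf * delta_delivered, delta_reference);
}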

View File

@ -25,6 +25,10 @@
#include "cpufreq-dt.h" #include "cpufreq-dt.h"
/* Clk register set */
#define ARMADA_37XX_CLK_TBG_SEL 0
#define ARMADA_37XX_CLK_TBG_SEL_CPU_OFF 22
/* Power management in North Bridge register set */ /* Power management in North Bridge register set */
#define ARMADA_37XX_NB_L0L1 0x18 #define ARMADA_37XX_NB_L0L1 0x18
#define ARMADA_37XX_NB_L2L3 0x1C #define ARMADA_37XX_NB_L2L3 0x1C
@ -69,6 +73,8 @@
#define LOAD_LEVEL_NR 4 #define LOAD_LEVEL_NR 4
#define MIN_VOLT_MV 1000 #define MIN_VOLT_MV 1000
#define MIN_VOLT_MV_FOR_L1_1000MHZ 1108
#define MIN_VOLT_MV_FOR_L1_1200MHZ 1155
/* AVS value for the corresponding voltage (in mV) */ /* AVS value for the corresponding voltage (in mV) */
static int avs_map[] = { static int avs_map[] = {
@ -80,6 +86,8 @@ static int avs_map[] = {
}; };
struct armada37xx_cpufreq_state { struct armada37xx_cpufreq_state {
struct platform_device *pdev;
struct device *cpu_dev;
struct regmap *regmap; struct regmap *regmap;
u32 nb_l0l1; u32 nb_l0l1;
u32 nb_l2l3; u32 nb_l2l3;
@ -120,10 +128,15 @@ static struct armada_37xx_dvfs *armada_37xx_cpu_freq_info_get(u32 freq)
* will be configured then the DVFS will be enabled. * will be configured then the DVFS will be enabled.
*/ */
static void __init armada37xx_cpufreq_dvfs_setup(struct regmap *base, static void __init armada37xx_cpufreq_dvfs_setup(struct regmap *base,
struct clk *clk, u8 *divider) struct regmap *clk_base, u8 *divider)
{ {
u32 cpu_tbg_sel;
int load_lvl; int load_lvl;
struct clk *parent;
/* Determine to which TBG clock is CPU connected */
regmap_read(clk_base, ARMADA_37XX_CLK_TBG_SEL, &cpu_tbg_sel);
cpu_tbg_sel >>= ARMADA_37XX_CLK_TBG_SEL_CPU_OFF;
cpu_tbg_sel &= ARMADA_37XX_NB_TBG_SEL_MASK;
for (load_lvl = 0; load_lvl < LOAD_LEVEL_NR; load_lvl++) { for (load_lvl = 0; load_lvl < LOAD_LEVEL_NR; load_lvl++) {
unsigned int reg, mask, val, offset = 0; unsigned int reg, mask, val, offset = 0;
@ -142,6 +155,11 @@ static void __init armada37xx_cpufreq_dvfs_setup(struct regmap *base,
mask = (ARMADA_37XX_NB_CLK_SEL_MASK mask = (ARMADA_37XX_NB_CLK_SEL_MASK
<< ARMADA_37XX_NB_CLK_SEL_OFF); << ARMADA_37XX_NB_CLK_SEL_OFF);
/* Set TBG index, for all levels we use the same TBG */
val = cpu_tbg_sel << ARMADA_37XX_NB_TBG_SEL_OFF;
mask = (ARMADA_37XX_NB_TBG_SEL_MASK
<< ARMADA_37XX_NB_TBG_SEL_OFF);
/* /*
* Set cpu divider based on the pre-computed array in * Set cpu divider based on the pre-computed array in
* order to have balanced step. * order to have balanced step.
@ -160,14 +178,6 @@ static void __init armada37xx_cpufreq_dvfs_setup(struct regmap *base,
regmap_update_bits(base, reg, mask, val); regmap_update_bits(base, reg, mask, val);
} }
/*
* Set cpu clock source, for all the level we keep the same
* clock source that the one already configured. For this one
* we need to use the clock framework
*/
parent = clk_get_parent(clk);
clk_set_parent(clk, parent);
} }
/* /*
@ -202,6 +212,8 @@ static u32 armada_37xx_avs_val_match(int target_vm)
* - L2 & L3 voltage should be about 150mv smaller than L0 voltage. * - L2 & L3 voltage should be about 150mv smaller than L0 voltage.
* This function calculates L1 & L2 & L3 AVS values dynamically based * This function calculates L1 & L2 & L3 AVS values dynamically based
* on L0 voltage and fill all AVS values to the AVS value table. * on L0 voltage and fill all AVS values to the AVS value table.
* When base CPU frequency is 1000 or 1200 MHz then there is additional
* minimal avs value for load L1.
*/ */
static void __init armada37xx_cpufreq_avs_configure(struct regmap *base, static void __init armada37xx_cpufreq_avs_configure(struct regmap *base,
struct armada_37xx_dvfs *dvfs) struct armada_37xx_dvfs *dvfs)
@ -233,6 +245,19 @@ static void __init armada37xx_cpufreq_avs_configure(struct regmap *base,
for (load_level = 1; load_level < LOAD_LEVEL_NR; load_level++) for (load_level = 1; load_level < LOAD_LEVEL_NR; load_level++)
dvfs->avs[load_level] = avs_min; dvfs->avs[load_level] = avs_min;
/*
* Set the avs values for load L0 and L1 when base CPU frequency
* is 1000/1200 MHz to its typical initial values according to
* the Armada 3700 Hardware Specifications.
*/
if (dvfs->cpu_freq_max >= 1000*1000*1000) {
if (dvfs->cpu_freq_max >= 1200*1000*1000)
avs_min = armada_37xx_avs_val_match(MIN_VOLT_MV_FOR_L1_1200MHZ);
else
avs_min = armada_37xx_avs_val_match(MIN_VOLT_MV_FOR_L1_1000MHZ);
dvfs->avs[0] = dvfs->avs[1] = avs_min;
}
return; return;
} }
@ -252,6 +277,26 @@ static void __init armada37xx_cpufreq_avs_configure(struct regmap *base,
target_vm = avs_map[l0_vdd_min] - 150; target_vm = avs_map[l0_vdd_min] - 150;
target_vm = target_vm > MIN_VOLT_MV ? target_vm : MIN_VOLT_MV; target_vm = target_vm > MIN_VOLT_MV ? target_vm : MIN_VOLT_MV;
dvfs->avs[2] = dvfs->avs[3] = armada_37xx_avs_val_match(target_vm); dvfs->avs[2] = dvfs->avs[3] = armada_37xx_avs_val_match(target_vm);
/*
* Fix the avs value for load L1 when base CPU frequency is 1000/1200 MHz,
* otherwise the CPU gets stuck when switching from load L1 to load L0.
* Also ensure that avs value for load L1 is not higher than for L0.
*/
if (dvfs->cpu_freq_max >= 1000*1000*1000) {
u32 avs_min_l1;
if (dvfs->cpu_freq_max >= 1200*1000*1000)
avs_min_l1 = armada_37xx_avs_val_match(MIN_VOLT_MV_FOR_L1_1200MHZ);
else
avs_min_l1 = armada_37xx_avs_val_match(MIN_VOLT_MV_FOR_L1_1000MHZ);
if (avs_min_l1 > dvfs->avs[0])
avs_min_l1 = dvfs->avs[0];
if (dvfs->avs[1] < avs_min_l1)
dvfs->avs[1] = avs_min_l1;
}
} }
static void __init armada37xx_cpufreq_avs_setup(struct regmap *base, static void __init armada37xx_cpufreq_avs_setup(struct regmap *base,
@ -357,12 +402,17 @@ static int __init armada37xx_cpufreq_driver_init(void)
struct armada_37xx_dvfs *dvfs; struct armada_37xx_dvfs *dvfs;
	struct platform_device *pdev;
	unsigned long freq;
-	unsigned int cur_frequency, base_frequency;
+	unsigned int base_frequency;
-	struct regmap *nb_pm_base, *avs_base;
+	struct regmap *nb_clk_base, *nb_pm_base, *avs_base;
	struct device *cpu_dev;
	int load_lvl, ret;
	struct clk *clk, *parent;

+	nb_clk_base =
+		syscon_regmap_lookup_by_compatible("marvell,armada-3700-periph-clock-nb");
+	if (IS_ERR(nb_clk_base))
+		return -ENODEV;
+
	nb_pm_base =
		syscon_regmap_lookup_by_compatible("marvell,armada-3700-nb-pm");

@@ -413,15 +463,7 @@ static int __init armada37xx_cpufreq_driver_init(void)
		return -EINVAL;
	}

-	/* Get nominal (current) CPU frequency */
-	cur_frequency = clk_get_rate(clk);
-	if (!cur_frequency) {
-		dev_err(cpu_dev, "Failed to get clock rate for CPU\n");
-		clk_put(clk);
-		return -EINVAL;
-	}
-
-	dvfs = armada_37xx_cpu_freq_info_get(cur_frequency);
+	dvfs = armada_37xx_cpu_freq_info_get(base_frequency);
	if (!dvfs) {
		clk_put(clk);
		return -EINVAL;

@@ -439,7 +481,7 @@ static int __init armada37xx_cpufreq_driver_init(void)
	armada37xx_cpufreq_avs_configure(avs_base, dvfs);
	armada37xx_cpufreq_avs_setup(avs_base, dvfs);

-	armada37xx_cpufreq_dvfs_setup(nb_pm_base, clk, dvfs->divider);
+	armada37xx_cpufreq_dvfs_setup(nb_pm_base, nb_clk_base, dvfs->divider);
	clk_put(clk);

	for (load_lvl = ARMADA_37XX_DVFS_LOAD_0; load_lvl < LOAD_LEVEL_NR;

@@ -466,6 +508,9 @@ static int __init armada37xx_cpufreq_driver_init(void)
	if (ret)
		goto disable_dvfs;

+	armada37xx_cpufreq_state->cpu_dev = cpu_dev;
+	armada37xx_cpufreq_state->pdev = pdev;
+	platform_set_drvdata(pdev, dvfs);
	return 0;

disable_dvfs:

@@ -473,7 +518,7 @@ disable_dvfs:
remove_opp:
	/* clean-up the already added opp before leaving */
	while (load_lvl-- > ARMADA_37XX_DVFS_LOAD_0) {
-		freq = cur_frequency / dvfs->divider[load_lvl];
+		freq = base_frequency / dvfs->divider[load_lvl];
		dev_pm_opp_remove(cpu_dev, freq);
	}

@@ -484,6 +529,26 @@ remove_opp:
/* late_initcall, to guarantee the driver is loaded after A37xx clock driver */
late_initcall(armada37xx_cpufreq_driver_init);

+static void __exit armada37xx_cpufreq_driver_exit(void)
+{
+	struct platform_device *pdev = armada37xx_cpufreq_state->pdev;
+	struct armada_37xx_dvfs *dvfs = platform_get_drvdata(pdev);
+	unsigned long freq;
+	int load_lvl;
+
+	platform_device_unregister(pdev);
+
+	armada37xx_cpufreq_disable_dvfs(armada37xx_cpufreq_state->regmap);
+
+	for (load_lvl = ARMADA_37XX_DVFS_LOAD_0; load_lvl < LOAD_LEVEL_NR; load_lvl++) {
+		freq = dvfs->cpu_freq_max / dvfs->divider[load_lvl];
+		dev_pm_opp_remove(armada37xx_cpufreq_state->cpu_dev, freq);
+	}
+
+	kfree(armada37xx_cpufreq_state);
+}
+module_exit(armada37xx_cpufreq_driver_exit);
+
static const struct of_device_id __maybe_unused armada37xx_cpufreq_of_match[] = {
	{ .compatible = "marvell,armada-3700-nb-pm" },
	{ },
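
For orientation, and not part of the patch itself: each Armada 37xx load level
runs at the base CPU frequency divided by a fixed per-level divider, so the
OPPs the driver registers (and must remove again on the error and module-exit
paths above) are fully determined by base_frequency and dvfs->divider[]. A
standalone sketch of that relationship, with invented divider values:

	#include <stdio.h>

	#define LOAD_LEVEL_NR	4

	int main(void)
	{
		/* Hypothetical values, for illustration only. */
		unsigned long base_frequency = 1000000000UL;	/* 1 GHz */
		unsigned int divider[LOAD_LEVEL_NR] = { 1, 2, 4, 8 };

		for (int load_lvl = 0; load_lvl < LOAD_LEVEL_NR; load_lvl++)
			printf("load level %d: %lu Hz\n",
			       load_lvl, base_frequency / divider[load_lvl]);
		return 0;
	}

With these numbers the driver would register OPPs at 1000, 500, 250 and
125 MHz, which is why both cleanup loops recompute exactly base_frequency /
divider[load_lvl] before calling dev_pm_opp_remove().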


@@ -10,14 +10,18 @@

#define pr_fmt(fmt)	"CPPC Cpufreq:" fmt

+#include <linux/arch_topology.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/delay.h>
#include <linux/cpu.h>
#include <linux/cpufreq.h>
#include <linux/dmi.h>
+#include <linux/irq_work.h>
+#include <linux/kthread.h>
#include <linux/time.h>
#include <linux/vmalloc.h>
+#include <uapi/linux/sched/types.h>

#include <asm/unaligned.h>

@@ -57,6 +61,204 @@ static struct cppc_workaround_oem_info wa_info[] = {
	}
};
#ifdef CONFIG_ACPI_CPPC_CPUFREQ_FIE
/* Frequency invariance support */
struct cppc_freq_invariance {
int cpu;
struct irq_work irq_work;
struct kthread_work work;
struct cppc_perf_fb_ctrs prev_perf_fb_ctrs;
struct cppc_cpudata *cpu_data;
};
static DEFINE_PER_CPU(struct cppc_freq_invariance, cppc_freq_inv);
static struct kthread_worker *kworker_fie;
static bool fie_disabled;
static struct cpufreq_driver cppc_cpufreq_driver;
static unsigned int hisi_cppc_cpufreq_get_rate(unsigned int cpu);
static int cppc_perf_from_fbctrs(struct cppc_cpudata *cpu_data,
struct cppc_perf_fb_ctrs fb_ctrs_t0,
struct cppc_perf_fb_ctrs fb_ctrs_t1);
/**
* cppc_scale_freq_workfn - CPPC arch_freq_scale updater for frequency invariance
* @work: The work item.
*
* The CPPC driver registers itself with the topology core to provide its own
* implementation (cppc_scale_freq_tick()) of topology_scale_freq_tick() which
* gets called by the scheduler on every tick.
*
* Note that the arch specific counters have higher priority than CPPC counters,
* if available, though the CPPC driver doesn't need to have any special
* handling for that.
*
* On an invocation of cppc_scale_freq_tick(), we schedule an irq work (since we
* reach here from hard-irq context), which then schedules a normal work item
* and cppc_scale_freq_workfn() updates the per_cpu arch_freq_scale variable
* based on the counter updates since the last tick.
*/
static void cppc_scale_freq_workfn(struct kthread_work *work)
{
struct cppc_freq_invariance *cppc_fi;
struct cppc_perf_fb_ctrs fb_ctrs = {0};
struct cppc_cpudata *cpu_data;
unsigned long local_freq_scale;
u64 perf;
cppc_fi = container_of(work, struct cppc_freq_invariance, work);
cpu_data = cppc_fi->cpu_data;
if (cppc_get_perf_ctrs(cppc_fi->cpu, &fb_ctrs)) {
pr_warn("%s: failed to read perf counters\n", __func__);
return;
}
	perf = cppc_perf_from_fbctrs(cpu_data, cppc_fi->prev_perf_fb_ctrs,
				     fb_ctrs);
	cppc_fi->prev_perf_fb_ctrs = fb_ctrs;
perf <<= SCHED_CAPACITY_SHIFT;
local_freq_scale = div64_u64(perf, cpu_data->perf_caps.highest_perf);
if (WARN_ON(local_freq_scale > 1024))
local_freq_scale = 1024;
per_cpu(arch_freq_scale, cppc_fi->cpu) = local_freq_scale;
}
static void cppc_irq_work(struct irq_work *irq_work)
{
struct cppc_freq_invariance *cppc_fi;
cppc_fi = container_of(irq_work, struct cppc_freq_invariance, irq_work);
kthread_queue_work(kworker_fie, &cppc_fi->work);
}
static void cppc_scale_freq_tick(void)
{
struct cppc_freq_invariance *cppc_fi = &per_cpu(cppc_freq_inv, smp_processor_id());
/*
* cppc_get_perf_ctrs() can potentially sleep, call that from the right
* context.
*/
irq_work_queue(&cppc_fi->irq_work);
}
static struct scale_freq_data cppc_sftd = {
.source = SCALE_FREQ_SOURCE_CPPC,
.set_freq_scale = cppc_scale_freq_tick,
};
static void cppc_freq_invariance_policy_init(struct cpufreq_policy *policy,
struct cppc_cpudata *cpu_data)
{
struct cppc_perf_fb_ctrs fb_ctrs = {0};
struct cppc_freq_invariance *cppc_fi;
int i, ret;
if (cppc_cpufreq_driver.get == hisi_cppc_cpufreq_get_rate)
return;
if (fie_disabled)
return;
for_each_cpu(i, policy->cpus) {
cppc_fi = &per_cpu(cppc_freq_inv, i);
cppc_fi->cpu = i;
cppc_fi->cpu_data = cpu_data;
kthread_init_work(&cppc_fi->work, cppc_scale_freq_workfn);
init_irq_work(&cppc_fi->irq_work, cppc_irq_work);
ret = cppc_get_perf_ctrs(i, &fb_ctrs);
if (ret) {
pr_warn("%s: failed to read perf counters: %d\n",
__func__, ret);
fie_disabled = true;
} else {
cppc_fi->prev_perf_fb_ctrs = fb_ctrs;
}
}
}
static void __init cppc_freq_invariance_init(void)
{
struct sched_attr attr = {
.size = sizeof(struct sched_attr),
.sched_policy = SCHED_DEADLINE,
.sched_nice = 0,
.sched_priority = 0,
/*
* Fake (unused) bandwidth; workaround to "fix"
* priority inheritance.
*/
.sched_runtime = 1000000,
.sched_deadline = 10000000,
.sched_period = 10000000,
};
int ret;
if (cppc_cpufreq_driver.get == hisi_cppc_cpufreq_get_rate)
return;
if (fie_disabled)
return;
kworker_fie = kthread_create_worker(0, "cppc_fie");
if (IS_ERR(kworker_fie))
return;
ret = sched_setattr_nocheck(kworker_fie->task, &attr);
if (ret) {
pr_warn("%s: failed to set SCHED_DEADLINE: %d\n", __func__,
ret);
kthread_destroy_worker(kworker_fie);
return;
}
/* Register for freq-invariance */
topology_set_scale_freq_source(&cppc_sftd, cpu_present_mask);
}
static void cppc_freq_invariance_exit(void)
{
struct cppc_freq_invariance *cppc_fi;
int i;
if (cppc_cpufreq_driver.get == hisi_cppc_cpufreq_get_rate)
return;
if (fie_disabled)
return;
topology_clear_scale_freq_source(SCALE_FREQ_SOURCE_CPPC, cpu_present_mask);
for_each_possible_cpu(i) {
cppc_fi = &per_cpu(cppc_freq_inv, i);
irq_work_sync(&cppc_fi->irq_work);
}
kthread_destroy_worker(kworker_fie);
kworker_fie = NULL;
}
#else
static inline void
cppc_freq_invariance_policy_init(struct cpufreq_policy *policy,
struct cppc_cpudata *cpu_data)
{
}
static inline void cppc_freq_invariance_init(void)
{
}
static inline void cppc_freq_invariance_exit(void)
{
}
#endif /* CONFIG_ACPI_CPPC_CPUFREQ_FIE */
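
The three-stage deferral implemented above (scheduler tick, then irq_work, then
kthread worker) exists because topology_scale_freq_tick() is called in hard-IRQ
context while cppc_get_perf_ctrs() may sleep, for instance when the feedback
counters live behind a PCC mailbox. A generic sketch of the same kernel
pattern, independent of CPPC; this is a fragment of a hypothetical module, not
a drop-in implementation:

	/* Deferring sleepable work from hard-IRQ context, sketch only. */
	#include <linux/irq_work.h>
	#include <linux/kthread.h>

	static struct kthread_worker *worker;	/* from kthread_create_worker() */

	static void my_workfn(struct kthread_work *work)
	{
		/* May sleep here: mailbox transfers, mutexes, etc. */
	}
	static DEFINE_KTHREAD_WORK(my_work, my_workfn);

	static void my_irq_workfn(struct irq_work *iw)
	{
		/* Still atomic, but queueing kthread work is safe here. */
		kthread_queue_work(worker, &my_work);
	}
	static DEFINE_IRQ_WORK(my_irq_work, my_irq_workfn);

	static void my_tick_hook(void)	/* hard-IRQ context */
	{
		irq_work_queue(&my_irq_work);
	}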
/* Callback function used to retrieve the max frequency from DMI */
static void cppc_find_dmi_mhz(const struct dmi_header *dm, void *private)
{

@@ -216,26 +418,16 @@ static unsigned int cppc_cpufreq_get_transition_delay_us(unsigned int cpu)
{
	unsigned long implementor = read_cpuid_implementor();
	unsigned long part_num = read_cpuid_part_number();
-	unsigned int delay_us = 0;

	switch (implementor) {
	case ARM_CPU_IMP_QCOM:
		switch (part_num) {
		case QCOM_CPU_PART_FALKOR_V1:
		case QCOM_CPU_PART_FALKOR:
-			delay_us = 10000;
-			break;
-		default:
-			delay_us = cppc_get_transition_latency(cpu) / NSEC_PER_USEC;
-			break;
+			return 10000;
		}
-		break;
-	default:
-		delay_us = cppc_get_transition_latency(cpu) / NSEC_PER_USEC;
-		break;
	}

-	return delay_us;
+	return cppc_get_transition_latency(cpu) / NSEC_PER_USEC;
}

#else

@@ -355,9 +547,12 @@ static int cppc_cpufreq_cpu_init(struct cpufreq_policy *policy)
	cpu_data->perf_ctrls.desired_perf = caps->highest_perf;
	ret = cppc_set_perf(cpu, &cpu_data->perf_ctrls);
-	if (ret)
+	if (ret) {
		pr_debug("Err setting perf value:%d on CPU:%d. ret:%d\n",
			 caps->highest_perf, cpu, ret);
+	} else {
+		cppc_freq_invariance_policy_init(policy, cpu_data);
+	}

	return ret;
}

@@ -370,12 +565,12 @@ static inline u64 get_delta(u64 t1, u64 t0)
	return (u32)t1 - (u32)t0;
}

-static int cppc_get_rate_from_fbctrs(struct cppc_cpudata *cpu_data,
-				     struct cppc_perf_fb_ctrs fb_ctrs_t0,
-				     struct cppc_perf_fb_ctrs fb_ctrs_t1)
+static int cppc_perf_from_fbctrs(struct cppc_cpudata *cpu_data,
+				 struct cppc_perf_fb_ctrs fb_ctrs_t0,
+				 struct cppc_perf_fb_ctrs fb_ctrs_t1)
{
	u64 delta_reference, delta_delivered;
-	u64 reference_perf, delivered_perf;
+	u64 reference_perf;

	reference_perf = fb_ctrs_t0.reference_perf;

@@ -384,12 +579,21 @@ static int cppc_get_rate_from_fbctrs(struct cppc_cpudata *cpu_data,
	delta_delivered = get_delta(fb_ctrs_t1.delivered,
				    fb_ctrs_t0.delivered);

-	/* Check to avoid divide-by zero */
-	if (delta_reference || delta_delivered)
-		delivered_perf = (reference_perf * delta_delivered) /
-				 delta_reference;
-	else
-		delivered_perf = cpu_data->perf_ctrls.desired_perf;
+	/* Check to avoid divide-by zero and invalid delivered_perf */
+	if (!delta_reference || !delta_delivered)
+		return cpu_data->perf_ctrls.desired_perf;
+
+	return (reference_perf * delta_delivered) / delta_reference;
+}
+
+static int cppc_get_rate_from_fbctrs(struct cppc_cpudata *cpu_data,
+				     struct cppc_perf_fb_ctrs fb_ctrs_t0,
+				     struct cppc_perf_fb_ctrs fb_ctrs_t1)
+{
+	u64 delivered_perf;
+
+	delivered_perf = cppc_perf_from_fbctrs(cpu_data, fb_ctrs_t0,
+					       fb_ctrs_t1);

	return cppc_cpufreq_perf_to_khz(cpu_data, delivered_perf);
}
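
Two details of the split above are worth spelling out. First, the reworked
guard returns the last requested performance level whenever either delta is
zero; the old condition still performed the division when only delta_delivered
was non-zero, i.e. with a zero delta_reference. Second, the computation itself
is a plain ratio, shown here as a standalone sketch with invented counter
values:

	#include <stdint.h>
	#include <stdio.h>

	#define SCHED_CAPACITY_SHIFT	10

	int main(void)
	{
		/* Hypothetical CPPC feedback counters, one tick apart. */
		uint64_t reference_perf = 100;	/* perf level of ref counter */
		uint64_t delta_reference = 10000;
		uint64_t delta_delivered = 7500;
		uint64_t highest_perf = 300;

		/* delivered perf, as in cppc_perf_from_fbctrs() */
		uint64_t perf = reference_perf * delta_delivered /
				delta_reference;

		/* frequency scale, as in cppc_scale_freq_workfn() */
		uint64_t scale = (perf << SCHED_CAPACITY_SHIFT) / highest_perf;

		printf("perf=%llu scale=%llu/1024\n",
		       (unsigned long long)perf, (unsigned long long)scale);
		return 0;
	}

Here perf works out to 75 and scale to 256, i.e. the CPU delivered a quarter of
its highest performance since the last sample, and the utilization seen by the
scheduler is scaled accordingly.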
@@ -514,6 +718,8 @@ static void cppc_check_hisi_workaround(void)

static int __init cppc_cpufreq_init(void)
{
+	int ret;
+
	if ((acpi_disabled) || !acpi_cpc_valid())
		return -ENODEV;

@@ -521,7 +727,11 @@ static int __init cppc_cpufreq_init(void)
	cppc_check_hisi_workaround();

-	return cpufreq_register_driver(&cppc_cpufreq_driver);
+	ret = cpufreq_register_driver(&cppc_cpufreq_driver);
+	if (!ret)
+		cppc_freq_invariance_init();
+
+	return ret;
}

static inline void free_cpu_data(void)

@@ -538,6 +748,7 @@ static inline void free_cpu_data(void)

static void __exit cppc_cpufreq_exit(void)
{
+	cppc_freq_invariance_exit();
	cpufreq_unregister_driver(&cppc_cpufreq_driver);

	free_cpu_data();


@@ -255,10 +255,15 @@ static int dt_cpufreq_early_init(struct device *dev, int cpu)
	 * before updating priv->cpus. Otherwise, we will end up creating
	 * duplicate OPPs for the CPUs.
	 *
-	 * OPPs might be populated at runtime, don't check for error here.
+	 * OPPs might be populated at runtime, don't fail for error here unless
+	 * it is -EPROBE_DEFER.
	 */
-	if (!dev_pm_opp_of_cpumask_add_table(priv->cpus))
+	ret = dev_pm_opp_of_cpumask_add_table(priv->cpus);
+	if (!ret) {
		priv->have_static_opps = true;
+	} else if (ret == -EPROBE_DEFER) {
+		goto out;
+	}

	/*
	 * The OPP table must be initialized, statically or dynamically, by this
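
The distinction matters because -EPROBE_DEFER is not a real failure: it means a
resource provider (here, something the OPP table depends on) has not probed yet
and the driver core should retry the whole probe later. Swallowing it, as the
old code did, silently turned "try again later" into "continue without static
OPPs". A minimal sketch of the general rule, with a hypothetical get_resource()
helper:

	/* Sketch: propagate -EPROBE_DEFER, tolerate other optional errors. */
	static int my_early_init(struct device *dev)
	{
		int ret = get_resource(dev);	/* hypothetical helper */

		if (ret == -EPROBE_DEFER)
			return ret;		/* driver core will re-probe */
		if (ret)
			dev_warn(dev, "optional resource missing: %d\n", ret);

		return 0;
	}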


@@ -42,9 +42,6 @@ static LIST_HEAD(cpufreq_policy_list);
#define for_each_inactive_policy(__policy)		\
	for_each_suitable_policy(__policy, false)

-#define for_each_policy(__policy)			\
-	list_for_each_entry(__policy, &cpufreq_policy_list, policy_list)
-
/* Iterate over governors */
static LIST_HEAD(cpufreq_governor_list);
#define for_each_governor(__governor)			\


@@ -54,7 +54,7 @@ processor_set_pstate (
	retval = ia64_pal_set_pstate((u64)value);

	if (retval) {
-		pr_debug("Failed to set freq to 0x%x, with error 0x%lx\n",
+		pr_debug("Failed to set freq to 0x%x, with error 0x%llx\n",
			value, retval);
		return -ENODEV;
	}

@@ -77,7 +77,7 @@ processor_get_pstate (
	if (retval)
		pr_debug("Failed to get current freq with "
-			"error 0x%lx, idx 0x%x\n", retval, *value);
+			"error 0x%llx, idx 0x%x\n", retval, *value);

	return (int)retval;
}


@@ -819,19 +819,21 @@ static struct freq_attr *hwp_cpufreq_attrs[] = {
	NULL,
};

-static void intel_pstate_get_hwp_max(struct cpudata *cpu, int *phy_max,
-				     int *current_max)
+static void __intel_pstate_get_hwp_cap(struct cpudata *cpu)
{
	u64 cap;

	rdmsrl_on_cpu(cpu->cpu, MSR_HWP_CAPABILITIES, &cap);
	WRITE_ONCE(cpu->hwp_cap_cached, cap);
-	if (global.no_turbo || global.turbo_disabled)
-		*current_max = HWP_GUARANTEED_PERF(cap);
-	else
-		*current_max = HWP_HIGHEST_PERF(cap);
-
-	*phy_max = HWP_HIGHEST_PERF(cap);
+	cpu->pstate.max_pstate = HWP_GUARANTEED_PERF(cap);
+	cpu->pstate.turbo_pstate = HWP_HIGHEST_PERF(cap);
+}
+
+static void intel_pstate_get_hwp_cap(struct cpudata *cpu)
+{
+	__intel_pstate_get_hwp_cap(cpu);
+	cpu->pstate.max_freq = cpu->pstate.max_pstate * cpu->pstate.scaling;
+	cpu->pstate.turbo_freq = cpu->pstate.turbo_pstate * cpu->pstate.scaling;
}

static void intel_pstate_hwp_set(unsigned int cpu)

@@ -1195,12 +1197,13 @@ static ssize_t store_no_turbo(struct kobject *a, struct kobj_attribute *b,

static void update_qos_request(enum freq_qos_req_type type)
{
-	int max_state, turbo_max, freq, i, perf_pct;
	struct freq_qos_request *req;
	struct cpufreq_policy *policy;
+	int i;

	for_each_possible_cpu(i) {
		struct cpudata *cpu = all_cpu_data[i];
+		unsigned int freq, perf_pct;

		policy = cpufreq_cpu_get(i);
		if (!policy)

@@ -1213,9 +1216,7 @@ static void update_qos_request(enum freq_qos_req_type type)
			continue;

		if (hwp_active)
-			intel_pstate_get_hwp_max(cpu, &turbo_max, &max_state);
-		else
-			turbo_max = cpu->pstate.turbo_pstate;
+			intel_pstate_get_hwp_cap(cpu);

		if (type == FREQ_QOS_MIN) {
			perf_pct = global.min_perf_pct;

@@ -1224,8 +1225,7 @@ static void update_qos_request(enum freq_qos_req_type type)
			perf_pct = global.max_perf_pct;
		}

-		freq = DIV_ROUND_UP(turbo_max * perf_pct, 100);
-		freq *= cpu->pstate.scaling;
+		freq = DIV_ROUND_UP(cpu->pstate.turbo_freq * perf_pct, 100);

		if (freq_qos_update_request(req, freq) < 0)
			pr_warn("Failed to update freq constraint: CPU%d\n", i);
@@ -1715,21 +1715,17 @@ static void intel_pstate_get_cpu_pstates(struct cpudata *cpu)
{
	cpu->pstate.min_pstate = pstate_funcs.get_min();
	cpu->pstate.max_pstate_physical = pstate_funcs.get_max_physical();
-	cpu->pstate.turbo_pstate = pstate_funcs.get_turbo();
	cpu->pstate.scaling = pstate_funcs.get_scaling();

	if (hwp_active && !hwp_mode_bdw) {
-		unsigned int phy_max, current_max;
-
-		intel_pstate_get_hwp_max(cpu, &phy_max, &current_max);
-		cpu->pstate.turbo_freq = phy_max * cpu->pstate.scaling;
-		cpu->pstate.turbo_pstate = phy_max;
-		cpu->pstate.max_pstate = HWP_GUARANTEED_PERF(READ_ONCE(cpu->hwp_cap_cached));
+		__intel_pstate_get_hwp_cap(cpu);
	} else {
-		cpu->pstate.turbo_freq = cpu->pstate.turbo_pstate * cpu->pstate.scaling;
		cpu->pstate.max_pstate = pstate_funcs.get_max();
+		cpu->pstate.turbo_pstate = pstate_funcs.get_turbo();
	}

	cpu->pstate.max_freq = cpu->pstate.max_pstate * cpu->pstate.scaling;
+	cpu->pstate.turbo_freq = cpu->pstate.turbo_pstate * cpu->pstate.scaling;

	if (pstate_funcs.get_aperf_mperf_shift)
		cpu->aperf_mperf_shift = pstate_funcs.get_aperf_mperf_shift();
@@ -2199,41 +2195,34 @@ static void intel_pstate_update_perf_limits(struct cpudata *cpu,
					    unsigned int policy_min,
					    unsigned int policy_max)
{
+	int scaling = cpu->pstate.scaling;
	int32_t max_policy_perf, min_policy_perf;
-	int max_state, turbo_max;
-	int max_freq;

	/*
-	 * HWP needs some special consideration, because on BDX the
-	 * HWP_REQUEST uses abstract value to represent performance
-	 * rather than pure ratios.
+	 * HWP needs some special consideration, because HWP_REQUEST uses
+	 * abstract values to represent performance rather than pure ratios.
	 */
-	if (hwp_active) {
-		intel_pstate_get_hwp_max(cpu, &turbo_max, &max_state);
-	} else {
-		max_state = global.no_turbo || global.turbo_disabled ?
-			cpu->pstate.max_pstate : cpu->pstate.turbo_pstate;
-		turbo_max = cpu->pstate.turbo_pstate;
-	}
-	max_freq = max_state * cpu->pstate.scaling;
+	if (hwp_active)
+		intel_pstate_get_hwp_cap(cpu);

-	max_policy_perf = max_state * policy_max / max_freq;
+	max_policy_perf = policy_max / scaling;
	if (policy_max == policy_min) {
		min_policy_perf = max_policy_perf;
	} else {
-		min_policy_perf = max_state * policy_min / max_freq;
+		min_policy_perf = policy_min / scaling;
		min_policy_perf = clamp_t(int32_t, min_policy_perf,
					  0, max_policy_perf);
	}

-	pr_debug("cpu:%d max_state %d min_policy_perf:%d max_policy_perf:%d\n",
-		 cpu->cpu, max_state, min_policy_perf, max_policy_perf);
+	pr_debug("cpu:%d min_policy_perf:%d max_policy_perf:%d\n",
+		 cpu->cpu, min_policy_perf, max_policy_perf);

	/* Normalize user input to [min_perf, max_perf] */
	if (per_cpu_limits) {
		cpu->min_perf_ratio = min_policy_perf;
		cpu->max_perf_ratio = max_policy_perf;
	} else {
+		int turbo_max = cpu->pstate.turbo_pstate;
		int32_t global_min, global_max;

		/* Global limits are in percent of the maximum turbo P-state. */
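
The simplification is exact: the old code computed max_policy_perf as
max_state * policy_max / max_freq with max_freq = max_state * scaling, and the
max_state factor cancels, leaving policy_max / scaling. A standalone arithmetic
check with illustrative numbers (the policy limits are in kHz, scaling in kHz
per P-state step):

	#include <stdio.h>

	int main(void)
	{
		int scaling = 100000;			/* 100 MHz per step */
		unsigned int policy_max = 3600000;	/* kHz */
		unsigned int policy_min = 800000;	/* kHz */

		printf("max_policy_perf=%u min_policy_perf=%u\n",
		       policy_max / scaling, policy_min / scaling);
		return 0;
	}

This prints ratios 36 and 8, the same values the removed max_state-based
expressions produced.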
@@ -2322,10 +2311,9 @@ static void intel_pstate_verify_cpu_policy(struct cpudata *cpu,
	update_turbo_state();
	if (hwp_active) {
-		int max_state, turbo_max;
-
-		intel_pstate_get_hwp_max(cpu, &turbo_max, &max_state);
-		max_freq = max_state * cpu->pstate.scaling;
+		intel_pstate_get_hwp_cap(cpu);
+		max_freq = global.no_turbo || global.turbo_disabled ?
+				cpu->pstate.max_freq : cpu->pstate.turbo_freq;
	} else {
		max_freq = intel_pstate_get_max_freq(cpu);
	}
@@ -2416,25 +2404,15 @@ static int __intel_pstate_cpu_init(struct cpufreq_policy *policy)
	cpu->max_perf_ratio = 0xFF;
	cpu->min_perf_ratio = 0;

-	policy->min = cpu->pstate.min_pstate * cpu->pstate.scaling;
-	policy->max = cpu->pstate.turbo_pstate * cpu->pstate.scaling;
-
	/* cpuinfo and default policy values */
	policy->cpuinfo.min_freq = cpu->pstate.min_pstate * cpu->pstate.scaling;
	update_turbo_state();
	global.turbo_disabled_mf = global.turbo_disabled;
	policy->cpuinfo.max_freq = global.turbo_disabled ?
-			cpu->pstate.max_pstate : cpu->pstate.turbo_pstate;
-	policy->cpuinfo.max_freq *= cpu->pstate.scaling;
-
-	if (hwp_active) {
-		unsigned int max_freq;
-
-		max_freq = global.turbo_disabled ?
			cpu->pstate.max_freq : cpu->pstate.turbo_freq;
-		if (max_freq < policy->cpuinfo.max_freq)
-			policy->cpuinfo.max_freq = max_freq;
-	}
+
+	policy->min = policy->cpuinfo.min_freq;
+	policy->max = policy->cpuinfo.max_freq;

	intel_pstate_init_acpi_perf_limits(policy);
@@ -2683,10 +2661,10 @@ static void intel_cpufreq_adjust_perf(unsigned int cpunum,

static int intel_cpufreq_cpu_init(struct cpufreq_policy *policy)
{
-	int max_state, turbo_max, min_freq, max_freq, ret;
	struct freq_qos_request *req;
	struct cpudata *cpu;
	struct device *dev;
+	int ret, freq;

	dev = get_cpu_device(policy->cpu);
	if (!dev)

@@ -2711,30 +2689,31 @@ static int intel_cpufreq_cpu_init(struct cpufreq_policy *policy)
	if (hwp_active) {
		u64 value;

-		intel_pstate_get_hwp_max(cpu, &turbo_max, &max_state);
		policy->transition_delay_us = INTEL_CPUFREQ_TRANSITION_DELAY_HWP;
+		intel_pstate_get_hwp_cap(cpu);

		rdmsrl_on_cpu(cpu->cpu, MSR_HWP_REQUEST, &value);
		WRITE_ONCE(cpu->hwp_req_cached, value);
		cpu->epp_cached = intel_pstate_get_epp(cpu, value);
	} else {
-		turbo_max = cpu->pstate.turbo_pstate;
		policy->transition_delay_us = INTEL_CPUFREQ_TRANSITION_DELAY;
	}

-	min_freq = DIV_ROUND_UP(turbo_max * global.min_perf_pct, 100);
-	min_freq *= cpu->pstate.scaling;
-	max_freq = DIV_ROUND_UP(turbo_max * global.max_perf_pct, 100);
-	max_freq *= cpu->pstate.scaling;
+	freq = DIV_ROUND_UP(cpu->pstate.turbo_freq * global.min_perf_pct, 100);

	ret = freq_qos_add_request(&policy->constraints, req, FREQ_QOS_MIN,
-				   min_freq);
+				   freq);
	if (ret < 0) {
		dev_err(dev, "Failed to add min-freq constraint (%d)\n", ret);
		goto free_req;
	}

+	freq = DIV_ROUND_UP(cpu->pstate.turbo_freq * global.max_perf_pct, 100);
	ret = freq_qos_add_request(&policy->constraints, req + 1, FREQ_QOS_MAX,
-				   max_freq);
+				   freq);
	if (ret < 0) {
		dev_err(dev, "Failed to add max-freq constraint (%d)\n", ret);
		goto remove_min_req;


@@ -91,7 +91,7 @@ static DEFINE_MUTEX(set_freq_lock);
/* Use 800MHz when entering sleep mode */
#define SLEEP_FREQ	(800 * 1000)

-/* Tracks if cpu freqency can be updated anymore */
+/* Tracks if CPU frequency can be updated anymore */
static bool no_cpufreq_access;

/*

@@ -190,7 +190,7 @@ static u32 clkdiv_val[5][11] = {
/*
 * This function set DRAM refresh counter
- * accoriding to operating frequency of DRAM
+ * according to operating frequency of DRAM
 * ch: DMC port number 0 or 1
 * freq: Operating frequency of DRAM(KHz)
 */

@@ -320,7 +320,7 @@ static int s5pv210_target(struct cpufreq_policy *policy, unsigned int index)
	/*
	 * 3. DMC1 refresh count for 133Mhz if (index == L4) is
-	 * true refresh counter is already programed in upper
+	 * true refresh counter is already programmed in upper
	 * code. 0x287@83Mhz
	 */
	if (!bus_speed_changing)

@@ -378,7 +378,7 @@ static int s5pv210_target(struct cpufreq_policy *policy, unsigned int index)
	/*
	 * 6. Turn on APLL
	 * 6-1. Set PMS values
-	 * 6-2. Wait untile the PLL is locked
+	 * 6-2. Wait until the PLL is locked
	 */
	if (index == L0)
		writel_relaxed(APLL_VAL_1000, S5P_APLL_CON);

@@ -390,7 +390,7 @@ static int s5pv210_target(struct cpufreq_policy *policy, unsigned int index)
	} while (!(reg & (0x1 << 29)));

	/*
-	 * 7. Change souce clock from SCLKMPLL(667Mhz)
+	 * 7. Change source clock from SCLKMPLL(667Mhz)
	 * to SCLKA2M(200Mhz) in MFC_MUX and G3D MUX
	 * (667/4=166)->(200/4=50)Mhz
	 */

@@ -439,8 +439,8 @@ static int s5pv210_target(struct cpufreq_policy *policy, unsigned int index)
	}

	/*
-	 * L4 level need to change memory bus speed, hence onedram clock divier
-	 * and memory refresh parameter should be changed
+	 * L4 level needs to change memory bus speed, hence ONEDRAM clock
+	 * divider and memory refresh parameter should be changed
	 */
	if (bus_speed_changing) {
		reg = readl_relaxed(S5P_CLK_DIV6);


@@ -107,7 +107,7 @@ config ARM_TEGRA_CPUIDLE

config ARM_QCOM_SPM_CPUIDLE
	bool "CPU Idle Driver for Qualcomm Subsystem Power Manager (SPM)"
-	depends on (ARCH_QCOM || COMPILE_TEST) && !ARM64
+	depends on (ARCH_QCOM || COMPILE_TEST) && !ARM64 && MMU
	select ARM_CPU_SUSPEND
	select CPU_IDLE_MULTIPLE_DRIVERS
	select DT_IDLE_STATES


@@ -48,11 +48,6 @@ enum tegra_state {
static atomic_t tegra_idle_barrier;
static atomic_t tegra_abort_flag;

-static inline bool tegra_cpuidle_using_firmware(void)
-{
-	return firmware_ops->prepare_idle && firmware_ops->do_idle;
-}
-
static void tegra_cpuidle_report_cpus_state(void)
{
	unsigned long cpu, lcpu, csr;

@@ -135,13 +130,9 @@ static int tegra_cpuidle_c7_enter(void)
{
	int err;

-	if (tegra_cpuidle_using_firmware()) {
-		err = call_firmware_op(prepare_idle, TF_PM_MODE_LP2_NOFLUSH_L2);
-		if (err)
-			return err;
-
-		return call_firmware_op(do_idle, 0);
-	}
+	err = call_firmware_op(prepare_idle, TF_PM_MODE_LP2_NOFLUSH_L2);
+	if (err && err != -ENOSYS)
+		return err;

	return cpu_suspend(0, tegra30_pm_secondary_cpu_suspend);
}

@@ -356,9 +347,7 @@ static int tegra_cpuidle_probe(struct platform_device *pdev)
	 * is disabled.
	 */
	if (!IS_ENABLED(CONFIG_PM_SLEEP)) {
-		if (!tegra_cpuidle_using_firmware())
-			tegra_cpuidle_disable_state(TEGRA_C7);
+		tegra_cpuidle_disable_state(TEGRA_C7);
		tegra_cpuidle_disable_state(TEGRA_CC6);
	}


@@ -181,9 +181,13 @@ static void __cpuidle_driver_init(struct cpuidle_driver *drv)
		 */
		if (s->target_residency > 0)
			s->target_residency_ns = s->target_residency * NSEC_PER_USEC;
+		else if (s->target_residency_ns < 0)
+			s->target_residency_ns = 0;

		if (s->exit_latency > 0)
			s->exit_latency_ns = s->exit_latency * NSEC_PER_USEC;
+		else if (s->exit_latency_ns < 0)
+			s->exit_latency_ns = 0;
	}
}


@@ -271,7 +271,7 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
	u64 predicted_ns;
	u64 interactivity_req;
	unsigned long nr_iowaiters;
-	ktime_t delta_next;
+	ktime_t delta, delta_tick;
	int i, idx;

	if (data->needs_update) {

@@ -280,7 +280,12 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
	}

	/* determine the expected residency time, round up */
-	data->next_timer_ns = tick_nohz_get_sleep_length(&delta_next);
+	delta = tick_nohz_get_sleep_length(&delta_tick);
+	if (unlikely(delta < 0)) {
+		delta = 0;
+		delta_tick = 0;
+	}
+	data->next_timer_ns = delta;

	nr_iowaiters = nr_iowait_cpu(dev->cpu);
	data->bucket = which_bucket(data->next_timer_ns, nr_iowaiters);
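
The new guard handles the fact that tick_nohz_get_sleep_length() can now
legitimately return a negative value, since the next timer event may already be
in the past by the time it is sampled. Clamping both values to zero keeps the
later comparisons against unsigned durations sane. The hazard in isolation:

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		/* Hypothetical: the next timer fired 500 ns ago. */
		int64_t delta = -500, delta_tick = -500;

		if (delta < 0) {	/* the unlikely() branch above */
			delta = 0;
			delta_tick = 0;
		}

		/* Without the clamp, storing -500 into an unsigned
		 * next_timer_ns would wrap to roughly 1.8e19 ns. */
		printf("next_timer_ns=%llu\n", (unsigned long long)delta);
		return 0;
	}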
@@ -318,7 +323,7 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
		 * state selection.
		 */
		if (predicted_ns < TICK_NSEC)
-			predicted_ns = delta_next;
+			predicted_ns = data->next_timer_ns;
	} else {
		/*
		 * Use the performance multiplier and the user-configurable

@@ -377,7 +382,7 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
			 * stuck in the shallow one for too long.
			 */
			if (drv->states[idx].target_residency_ns < TICK_NSEC &&
-			    s->target_residency_ns <= delta_next)
+			    s->target_residency_ns <= delta_tick)
				idx = i;

			return idx;

@@ -399,7 +404,7 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
	    predicted_ns < TICK_NSEC) && !tick_nohz_tick_stopped()) {
		*stop_tick = false;

-		if (idx > 0 && drv->states[idx].target_residency_ns > delta_next) {
+		if (idx > 0 && drv->states[idx].target_residency_ns > delta_tick) {
			/*
			 * The tick is not going to be stopped and the target
			 * residency of the state to be returned is not within

@@ -411,7 +416,7 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
					continue;

				idx = i;
-				if (drv->states[i].target_residency_ns <= delta_next)
+				if (drv->states[i].target_residency_ns <= delta_tick)
					break;
			}
		}


@@ -100,8 +100,8 @@ struct teo_idle_state {
 * @intervals: Saved idle duration values.
 */
struct teo_cpu {
-	u64 time_span_ns;
-	u64 sleep_length_ns;
+	s64 time_span_ns;
+	s64 sleep_length_ns;
	struct teo_idle_state states[CPUIDLE_STATE_MAX];
	int interval_idx;
	u64 intervals[INTERVALS];

@@ -117,7 +117,8 @@ static DEFINE_PER_CPU(struct teo_cpu, teo_cpus);
static void teo_update(struct cpuidle_driver *drv, struct cpuidle_device *dev)
{
	struct teo_cpu *cpu_data = per_cpu_ptr(&teo_cpus, dev->cpu);
-	int i, idx_hit = -1, idx_timer = -1;
+	int i, idx_hit = 0, idx_timer = 0;
+	unsigned int hits, misses;
	u64 measured_ns;

	if (cpu_data->time_span_ns >= cpu_data->sleep_length_ns) {
@@ -174,25 +175,22 @@ static void teo_update(struct cpuidle_driver *drv, struct cpuidle_device *dev)
	 * also increase the "early hits" metric for the state that actually
	 * matches the measured idle duration.
	 */
-	if (idx_timer >= 0) {
-		unsigned int hits = cpu_data->states[idx_timer].hits;
-		unsigned int misses = cpu_data->states[idx_timer].misses;
-
-		hits -= hits >> DECAY_SHIFT;
-		misses -= misses >> DECAY_SHIFT;
-
-		if (idx_timer > idx_hit) {
-			misses += PULSE;
-			if (idx_hit >= 0)
-				cpu_data->states[idx_hit].early_hits += PULSE;
-		} else {
-			hits += PULSE;
-		}
-
-		cpu_data->states[idx_timer].misses = misses;
-		cpu_data->states[idx_timer].hits = hits;
-	}
+	hits = cpu_data->states[idx_timer].hits;
+	hits -= hits >> DECAY_SHIFT;
+
+	misses = cpu_data->states[idx_timer].misses;
+	misses -= misses >> DECAY_SHIFT;
+
+	if (idx_timer == idx_hit) {
+		hits += PULSE;
+	} else {
+		misses += PULSE;
+		cpu_data->states[idx_hit].early_hits += PULSE;
+	}
+
+	cpu_data->states[idx_timer].misses = misses;
+	cpu_data->states[idx_timer].hits = hits;
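
The hits/misses bookkeeping above is an exponentially decaying counter: each
update first decays the stored value by 1/2^DECAY_SHIFT and then adds PULSE to
the matching bucket. A standalone sketch of how such a metric behaves, assuming
the governor's usual constants (DECAY_SHIFT of 3 and PULSE of 1024):

	#include <stdio.h>

	#define DECAY_SHIFT	3
	#define PULSE		1024

	int main(void)
	{
		unsigned int hits = 0;

		/* Ten consecutive "timer wakeup matched this state" events. */
		for (int i = 0; i < 10; i++) {
			hits -= hits >> DECAY_SHIFT;
			hits += PULSE;
			printf("after event %d: hits=%u\n", i + 1, hits);
		}
		return 0;
	}

The counter converges towards PULSE * 2^DECAY_SHIFT (8192 here), so recent
behaviour dominates while old samples fade away. The rewrite keeps this
arithmetic unchanged; what it removes are the idx_timer/idx_hit < 0 special
cases, which initializing both indices to 0 makes unnecessary.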
	/*
	 * Save idle duration values corresponding to non-timer wakeups for
	 * pattern detection.

@@ -216,7 +214,7 @@ static bool teo_time_ok(u64 interval_ns)
 */
static int teo_find_shallower_state(struct cpuidle_driver *drv,
				    struct cpuidle_device *dev, int state_idx,
-				    u64 duration_ns)
+				    s64 duration_ns)
{
	int i;

@@ -242,10 +240,10 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
{
	struct teo_cpu *cpu_data = per_cpu_ptr(&teo_cpus, dev->cpu);
	s64 latency_req = cpuidle_governor_latency_req(dev->cpu);
-	u64 duration_ns;
+	int max_early_idx, prev_max_early_idx, constraint_idx, idx0, idx, i;
	unsigned int hits, misses, early_hits;
-	int max_early_idx, prev_max_early_idx, constraint_idx, idx, i;
	ktime_t delta_tick;
+	s64 duration_ns;

	if (dev->last_state_idx >= 0) {
		teo_update(drv, dev);

@@ -264,6 +262,7 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
	prev_max_early_idx = -1;
	constraint_idx = drv->state_count;
	idx = -1;
+	idx0 = idx;

	for (i = 0; i < drv->state_count; i++) {
		struct cpuidle_state *s = &drv->states[i];

@@ -324,6 +323,7 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
			idx = i; /* first enabled state */
			hits = cpu_data->states[i].hits;
			misses = cpu_data->states[i].misses;
+			idx0 = i;
		}

		if (s->target_residency_ns > duration_ns)

@@ -376,11 +376,16 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,

	if (idx < 0) {
		idx = 0; /* No states enabled. Must use 0. */
-	} else if (idx > 0) {
+	} else if (idx > idx0) {
		unsigned int count = 0;
		u64 sum = 0;

		/*
+		 * The target residencies of at least two different enabled idle
+		 * states are less than or equal to the current expected idle
+		 * duration. Try to refine the selection using the most recent
+		 * measured idle duration values.
+		 *
		 * Count and sum the most recent idle duration values less than
		 * the current expected idle duration value.
		 */

@@ -428,7 +433,8 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
		 * till the closest timer including the tick, try to correct
		 * that.
		 */
-		if (idx > 0 && drv->states[idx].target_residency_ns > delta_tick)
+		if (idx > idx0 &&
+		    drv->states[idx].target_residency_ns > delta_tick)
			idx = teo_find_shallower_state(drv, dev, idx, delta_tick);
	}

View File

@@ -62,7 +62,7 @@ config DEVFREQ_GOV_USERSPACE
	help
	  Sets the frequency at the user specified one.
	  This governor returns the user configured frequency if there
-	  has been an input to /sys/devices/.../power/devfreq_set_freq.
+	  has been an input to /sys/devices/.../userspace/set_freq.
	  Otherwise, the governor does not change the frequency
	  given at the initialization.


@@ -11,6 +11,7 @@
#include <linux/kmod.h>
#include <linux/sched.h>
#include <linux/debugfs.h>
+#include <linux/devfreq_cooling.h>
#include <linux/errno.h>
#include <linux/err.h>
#include <linux/init.h>

@@ -387,7 +388,7 @@ static int devfreq_set_target(struct devfreq *devfreq, unsigned long new_freq,
	devfreq->previous_freq = new_freq;

	if (devfreq->suspend_freq)
-		devfreq->resume_freq = cur_freq;
+		devfreq->resume_freq = new_freq;

	return err;
}

@@ -821,7 +822,8 @@ struct devfreq *devfreq_add_device(struct device *dev,

	if (devfreq->profile->timer < 0
		|| devfreq->profile->timer >= DEVFREQ_TIMER_NUM) {
-		goto err_out;
+		mutex_unlock(&devfreq->lock);
+		goto err_dev;
	}

	if (!devfreq->profile->max_state && !devfreq->profile->freq_table) {

@@ -935,6 +937,12 @@ struct devfreq *devfreq_add_device(struct device *dev,

	mutex_unlock(&devfreq_list_lock);

+	if (devfreq->profile->is_cooling_device) {
+		devfreq->cdev = devfreq_cooling_em_register(devfreq, NULL);
+		if (IS_ERR(devfreq->cdev))
+			devfreq->cdev = NULL;
+	}
+
	return devfreq;

err_init:

@@ -960,6 +968,8 @@ int devfreq_remove_device(struct devfreq *devfreq)
	if (!devfreq)
		return -EINVAL;

+	devfreq_cooling_unregister(devfreq->cdev);
+
	if (devfreq->governor) {
		devfreq->governor->event_handler(devfreq,
						 DEVFREQ_GOV_STOP, NULL);


@@ -57,8 +57,6 @@
 *			Basically, get_target_freq will run
 *			devfreq_dev_profile.get_dev_status() to get the
 *			status of the device (load = busy_time / total_time).
- *			If no_central_polling is set, this callback is called
- *			only with update_devfreq() notified by OPP.
 * @event_handler:	Callback for devfreq core framework to notify events
 *			to governors. Events include per device governor
 *			init and exit, opp changes out of devfreq, suspend

@@ -91,6 +89,9 @@ int devfreq_update_target(struct devfreq *devfreq, unsigned long freq);

static inline int devfreq_update_stats(struct devfreq *df)
{
+	if (!df->profile->get_dev_status)
+		return -EINVAL;
+
	return df->profile->get_dev_status(df->dev.parent, &df->last_status);
}
#endif /* _GOVERNOR_H */
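
With this guard, devfreq_update_stats() now fails cleanly with -EINVAL for
profiles that do not implement get_dev_status(), instead of dereferencing a
NULL callback (the imx8m-ddrc change further down removes exactly such a
stub). A hedged sketch of a typical load-based consumer, for illustration only:

	/* Sketch of a simple-ondemand-style governor callback. */
	static int my_get_target_freq(struct devfreq *df, unsigned long *freq)
	{
		int err = devfreq_update_stats(df);

		if (err)	/* now includes -EINVAL: no get_dev_status() */
			return err;

		/* Go to the top frequency above ~90% load, else to the
		 * bottom; real governors interpolate instead. */
		if (df->last_status.total_time &&
		    100 * df->last_status.busy_time >
		    90 * df->last_status.total_time)
			*freq = DEVFREQ_MAX_FREQ;
		else
			*freq = DEVFREQ_MIN_FREQ;

		return 0;
	}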


@@ -169,7 +169,7 @@ static struct platform_driver imx_bus_platdrv = {
	.probe = imx_bus_probe,
	.driver = {
		.name = "imx-bus-devfreq",
-		.of_match_table = of_match_ptr(imx_bus_of_match),
+		.of_match_table = imx_bus_of_match,
	},
};
module_platform_driver(imx_bus_platdrv);


@@ -280,18 +280,6 @@ static int imx8m_ddrc_get_cur_freq(struct device *dev, unsigned long *freq)
	return 0;
}

-static int imx8m_ddrc_get_dev_status(struct device *dev,
-				     struct devfreq_dev_status *stat)
-{
-	struct imx8m_ddrc *priv = dev_get_drvdata(dev);
-
-	stat->busy_time = 0;
-	stat->total_time = 0;
-	stat->current_frequency = clk_get_rate(priv->dram_core);
-
-	return 0;
-}
-
static int imx8m_ddrc_init_freq_info(struct device *dev)
{
	struct imx8m_ddrc *priv = dev_get_drvdata(dev);

@@ -429,9 +417,7 @@ static int imx8m_ddrc_probe(struct platform_device *pdev)
	if (ret < 0)
		goto err;

-	priv->profile.polling_ms = 1000;
	priv->profile.target = imx8m_ddrc_target;
-	priv->profile.get_dev_status = imx8m_ddrc_get_dev_status;
	priv->profile.exit = imx8m_ddrc_exit;
	priv->profile.get_cur_freq = imx8m_ddrc_get_cur_freq;
	priv->profile.initial_freq = clk_get_rate(priv->dram_core);

@@ -461,7 +447,7 @@ static struct platform_driver imx8m_ddrc_platdrv = {
	.probe		= imx8m_ddrc_probe,
	.driver = {
		.name = "imx8m-ddrc-devfreq",
-		.of_match_table = of_match_ptr(imx8m_ddrc_of_match),
+		.of_match_table = imx8m_ddrc_of_match,
	},
};
module_platform_driver(imx8m_ddrc_platdrv);


@@ -324,22 +324,14 @@ static int rk3399_dmcfreq_probe(struct platform_device *pdev)
	mutex_init(&data->lock);

	data->vdd_center = devm_regulator_get(dev, "center");
-	if (IS_ERR(data->vdd_center)) {
-		if (PTR_ERR(data->vdd_center) == -EPROBE_DEFER)
-			return -EPROBE_DEFER;
-
-		dev_err(dev, "Cannot get the regulator \"center\"\n");
-		return PTR_ERR(data->vdd_center);
-	}
+	if (IS_ERR(data->vdd_center))
+		return dev_err_probe(dev, PTR_ERR(data->vdd_center),
+				     "Cannot get the regulator \"center\"\n");

	data->dmc_clk = devm_clk_get(dev, "dmc_clk");
-	if (IS_ERR(data->dmc_clk)) {
-		if (PTR_ERR(data->dmc_clk) == -EPROBE_DEFER)
-			return -EPROBE_DEFER;
-
-		dev_err(dev, "Cannot get the clk dmc_clk\n");
-		return PTR_ERR(data->dmc_clk);
-	}
+	if (IS_ERR(data->dmc_clk))
+		return dev_err_probe(dev, PTR_ERR(data->dmc_clk),
+				     "Cannot get the clk dmc_clk\n");

	data->edev = devfreq_event_get_edev_by_phandle(dev, "devfreq-events", 0);
	if (IS_ERR(data->edev))
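
dev_err_probe() folds the old four-line pattern into a single call: it returns
the error passed to it, logs via dev_err() only when that error is not
-EPROBE_DEFER, and records the deferral reason (visible in
/sys/kernel/debug/devices_deferred) otherwise, so the two conversions above are
behaviour-preserving. The shape of the transformation, shown on one resource:

	/* Old style: open-coded -EPROBE_DEFER special case. */
	clk = devm_clk_get(dev, "dmc_clk");
	if (IS_ERR(clk)) {
		if (PTR_ERR(clk) == -EPROBE_DEFER)
			return -EPROBE_DEFER;
		dev_err(dev, "Cannot get the clk dmc_clk\n");
		return PTR_ERR(clk);
	}

	/* New style: one call handles logging and deferral. */
	clk = devm_clk_get(dev, "dmc_clk");
	if (IS_ERR(clk))
		return dev_err_probe(dev, PTR_ERR(clk),
				     "Cannot get the clk dmc_clk\n");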


@@ -99,20 +99,12 @@ void lima_devfreq_fini(struct lima_device *ldev)
		devm_devfreq_remove_device(ldev->dev, devfreq->devfreq);
		devfreq->devfreq = NULL;
	}
-
-	dev_pm_opp_of_remove_table(ldev->dev);
-
-	dev_pm_opp_put_regulators(devfreq->regulators_opp_table);
-	dev_pm_opp_put_clkname(devfreq->clkname_opp_table);
-	devfreq->regulators_opp_table = NULL;
-	devfreq->clkname_opp_table = NULL;
}

int lima_devfreq_init(struct lima_device *ldev)
{
	struct thermal_cooling_device *cooling;
	struct device *dev = ldev->dev;
-	struct opp_table *opp_table;
	struct devfreq *devfreq;
	struct lima_devfreq *ldevfreq = &ldev->devfreq;
	struct dev_pm_opp *opp;

@@ -125,40 +117,28 @@ int lima_devfreq_init(struct lima_device *ldev)
	spin_lock_init(&ldevfreq->lock);

-	opp_table = dev_pm_opp_set_clkname(dev, "core");
-	if (IS_ERR(opp_table)) {
-		ret = PTR_ERR(opp_table);
-		goto err_fini;
-	}
-	ldevfreq->clkname_opp_table = opp_table;
-
-	opp_table = dev_pm_opp_set_regulators(dev,
-					      (const char *[]){ "mali" },
-					      1);
-	if (IS_ERR(opp_table)) {
-		ret = PTR_ERR(opp_table);
+	ret = devm_pm_opp_set_clkname(dev, "core");
+	if (ret)
+		return ret;

+	ret = devm_pm_opp_set_regulators(dev, (const char *[]){ "mali" }, 1);
+	if (ret) {
		/* Continue if the optional regulator is missing */
		if (ret != -ENODEV)
-			goto err_fini;
-	} else {
-		ldevfreq->regulators_opp_table = opp_table;
+			return ret;
	}

-	ret = dev_pm_opp_of_add_table(dev);
+	ret = devm_pm_opp_of_add_table(dev);
	if (ret)
-		goto err_fini;
+		return ret;

	lima_devfreq_reset(ldevfreq);

	cur_freq = clk_get_rate(ldev->clk_gpu);

	opp = devfreq_recommended_opp(dev, &cur_freq, 0);
-	if (IS_ERR(opp)) {
-		ret = PTR_ERR(opp);
-		goto err_fini;
-	}
+	if (IS_ERR(opp))
+		return PTR_ERR(opp);

	lima_devfreq_profile.initial_freq = cur_freq;
	dev_pm_opp_put(opp);

@@ -167,8 +147,7 @@ int lima_devfreq_init(struct lima_device *ldev)
					  DEVFREQ_GOV_SIMPLE_ONDEMAND, NULL);
	if (IS_ERR(devfreq)) {
		dev_err(dev, "Couldn't initialize GPU devfreq\n");
-		ret = PTR_ERR(devfreq);
-		goto err_fini;
+		return PTR_ERR(devfreq);
	}

	ldevfreq->devfreq = devfreq;

@@ -180,10 +159,6 @@ int lima_devfreq_init(struct lima_device *ldev)
	ldevfreq->cooling = cooling;

	return 0;
-
-err_fini:
-	lima_devfreq_fini(ldev);
-	return ret;
}

void lima_devfreq_record_busy(struct lima_devfreq *devfreq)


@@ -8,15 +8,12 @@
#include <linux/ktime.h>

struct devfreq;
-struct opp_table;
struct thermal_cooling_device;

struct lima_device;

struct lima_devfreq {
	struct devfreq *devfreq;
-	struct opp_table *clkname_opp_table;
-	struct opp_table *regulators_opp_table;
	struct thermal_cooling_device *cooling;

	ktime_t busy_time;


@@ -89,29 +89,25 @@ int panfrost_devfreq_init(struct panfrost_device *pfdev)
	unsigned long cur_freq;
	struct device *dev = &pfdev->pdev->dev;
	struct devfreq *devfreq;
-	struct opp_table *opp_table;
	struct thermal_cooling_device *cooling;
	struct panfrost_devfreq *pfdevfreq = &pfdev->pfdevfreq;

-	opp_table = dev_pm_opp_set_regulators(dev, pfdev->comp->supply_names,
-					      pfdev->comp->num_supplies);
-	if (IS_ERR(opp_table)) {
-		ret = PTR_ERR(opp_table);
+	ret = devm_pm_opp_set_regulators(dev, pfdev->comp->supply_names,
+					 pfdev->comp->num_supplies);
+	if (ret) {
		/* Continue if the optional regulator is missing */
		if (ret != -ENODEV) {
			DRM_DEV_ERROR(dev, "Couldn't set OPP regulators\n");
-			goto err_fini;
+			return ret;
		}
-	} else {
-		pfdevfreq->regulators_opp_table = opp_table;
	}

-	ret = dev_pm_opp_of_add_table(dev);
+	ret = devm_pm_opp_of_add_table(dev);
	if (ret) {
		/* Optional, continue without devfreq */
		if (ret == -ENODEV)
			ret = 0;
-		goto err_fini;
+		return ret;
	}
	pfdevfreq->opp_of_table_added = true;

@@ -122,10 +118,8 @@ int panfrost_devfreq_init(struct panfrost_device *pfdev)
	cur_freq = clk_get_rate(pfdev->clock);

	opp = devfreq_recommended_opp(dev, &cur_freq, 0);
-	if (IS_ERR(opp)) {
-		ret = PTR_ERR(opp);
-		goto err_fini;
-	}
+	if (IS_ERR(opp))
+		return PTR_ERR(opp);

	panfrost_devfreq_profile.initial_freq = cur_freq;
	dev_pm_opp_put(opp);

@@ -134,8 +128,7 @@ int panfrost_devfreq_init(struct panfrost_device *pfdev)
					  DEVFREQ_GOV_SIMPLE_ONDEMAND, NULL);
	if (IS_ERR(devfreq)) {
		DRM_DEV_ERROR(dev, "Couldn't initialize GPU devfreq\n");
-		ret = PTR_ERR(devfreq);
-		goto err_fini;
+		return PTR_ERR(devfreq);
	}
	pfdevfreq->devfreq = devfreq;

@@ -146,10 +139,6 @@ int panfrost_devfreq_init(struct panfrost_device *pfdev)
	pfdevfreq->cooling = cooling;

	return 0;
-
-err_fini:
-	panfrost_devfreq_fini(pfdev);
-	return ret;
}

void panfrost_devfreq_fini(struct panfrost_device *pfdev)

@@ -160,14 +149,6 @@ void panfrost_devfreq_fini(struct panfrost_device *pfdev)
		devfreq_cooling_unregister(pfdevfreq->cooling);
		pfdevfreq->cooling = NULL;
	}
-
-	if (pfdevfreq->opp_of_table_added) {
-		dev_pm_opp_of_remove_table(&pfdev->pdev->dev);
-		pfdevfreq->opp_of_table_added = false;
-	}
-
-	dev_pm_opp_put_regulators(pfdevfreq->regulators_opp_table);
-	pfdevfreq->regulators_opp_table = NULL;
}

void panfrost_devfreq_resume(struct panfrost_device *pfdev)
void panfrost_devfreq_resume(struct panfrost_device *pfdev) void panfrost_devfreq_resume(struct panfrost_device *pfdev)

View File

@@ -8,14 +8,12 @@
#include <linux/ktime.h>

struct devfreq;
-struct opp_table;
struct thermal_cooling_device;

struct panfrost_device;

struct panfrost_devfreq {
	struct devfreq *devfreq;
-	struct opp_table *regulators_opp_table;
	struct thermal_cooling_device *cooling;

	bool opp_of_table_added;


@@ -744,8 +744,8 @@ static struct cpuidle_state icx_cstates[] __initdata = {
		.name = "C6",
		.desc = "MWAIT 0x20",
		.flags = MWAIT2flg(0x20) | CPUIDLE_FLAG_TLB_FLUSHED,
-		.exit_latency = 128,
-		.target_residency = 384,
+		.exit_latency = 170,
+		.target_residency = 600,
		.enter = &intel_idle,
		.enter_s2idle = intel_idle_s2idle, },
	{

@@ -1156,6 +1156,7 @@ static const struct x86_cpu_id intel_idle_ids[] __initconst = {
	X86_MATCH_INTEL_FAM6_MODEL(KABYLAKE,		&idle_cpu_skl),
	X86_MATCH_INTEL_FAM6_MODEL(SKYLAKE_X,		&idle_cpu_skx),
	X86_MATCH_INTEL_FAM6_MODEL(ICELAKE_X,		&idle_cpu_icx),
+	X86_MATCH_INTEL_FAM6_MODEL(ICELAKE_D,		&idle_cpu_icx),
	X86_MATCH_INTEL_FAM6_MODEL(XEON_PHI_KNL,	&idle_cpu_knl),
	X86_MATCH_INTEL_FAM6_MODEL(XEON_PHI_KNM,	&idle_cpu_knl),
	X86_MATCH_INTEL_FAM6_MODEL(ATOM_GOLDMONT,	&idle_cpu_bxt),


@@ -343,7 +343,7 @@ static int exynos5_init_freq_table(struct exynos5_dmc *dmc,
	int idx;
	unsigned long freq;

-	ret = dev_pm_opp_of_add_table(dmc->dev);
+	ret = devm_pm_opp_of_add_table(dmc->dev);
	if (ret < 0) {
		dev_err(dmc->dev, "Failed to get OPP table\n");
		return ret;

@@ -354,7 +354,7 @@ static int exynos5_init_freq_table(struct exynos5_dmc *dmc,
	dmc->opp = devm_kmalloc_array(dmc->dev, dmc->opp_count,
				      sizeof(struct dmc_opp_table), GFP_KERNEL);
	if (!dmc->opp)
-		goto err_opp;
+		return -ENOMEM;

	idx = dmc->opp_count - 1;
	for (i = 0, freq = ULONG_MAX; i < dmc->opp_count; i++, freq--) {

@@ -362,7 +362,7 @@ static int exynos5_init_freq_table(struct exynos5_dmc *dmc,
		opp = dev_pm_opp_find_freq_floor(dmc->dev, &freq);
		if (IS_ERR(opp))
-			goto err_opp;
+			return PTR_ERR(opp);

		dmc->opp[idx - i].freq_hz = freq;
		dmc->opp[idx - i].volt_uv = dev_pm_opp_get_voltage(opp);

@@ -371,11 +371,6 @@ static int exynos5_init_freq_table(struct exynos5_dmc *dmc,
	}

	return 0;
-
-err_opp:
-	dev_pm_opp_of_remove_table(dmc->dev);
-
-	return -EINVAL;
}

/**

@@ -1569,8 +1564,6 @@ static int exynos5_dmc_remove(struct platform_device *pdev)
	clk_disable_unprepare(dmc->mout_bpll);
	clk_disable_unprepare(dmc->fout_bpll);

-	dev_pm_opp_remove_table(dmc->dev);
-
	return 0;
}


@@ -264,7 +264,6 @@ struct sdhci_msm_host {
	struct clk_bulk_data bulk_clks[5];
	unsigned long clk_rate;
	struct mmc_host *mmc;
-	struct opp_table *opp_table;
	bool use_14lpp_dll_reset;
	bool tuning_done;
	bool calibration_done;

@@ -2551,17 +2550,15 @@ static int sdhci_msm_probe(struct platform_device *pdev)
	if (ret)
		goto bus_clk_disable;

-	msm_host->opp_table = dev_pm_opp_set_clkname(&pdev->dev, "core");
-	if (IS_ERR(msm_host->opp_table)) {
-		ret = PTR_ERR(msm_host->opp_table);
+	ret = devm_pm_opp_set_clkname(&pdev->dev, "core");
+	if (ret)
		goto bus_clk_disable;
-	}

	/* OPP table is optional */
-	ret = dev_pm_opp_of_add_table(&pdev->dev);
+	ret = devm_pm_opp_of_add_table(&pdev->dev);
	if (ret && ret != -ENODEV) {
		dev_err(&pdev->dev, "Invalid OPP table in Device tree\n");
-		goto opp_put_clkname;
+		goto bus_clk_disable;
	}

	/* Vote for maximum clock rate for maximum performance */

@@ -2587,7 +2584,7 @@ static int sdhci_msm_probe(struct platform_device *pdev)
	ret = clk_bulk_prepare_enable(ARRAY_SIZE(msm_host->bulk_clks),
				      msm_host->bulk_clks);
	if (ret)
-		goto opp_cleanup;
+		goto bus_clk_disable;

	/*
	 * xo clock is needed for FLL feature of cm_dll.

@@ -2732,10 +2729,6 @@ pm_runtime_disable:
clk_disable:
	clk_bulk_disable_unprepare(ARRAY_SIZE(msm_host->bulk_clks),
				   msm_host->bulk_clks);
-opp_cleanup:
-	dev_pm_opp_of_remove_table(&pdev->dev);
-opp_put_clkname:
-	dev_pm_opp_put_clkname(msm_host->opp_table);
bus_clk_disable:
	if (!IS_ERR(msm_host->bus_clk))
		clk_disable_unprepare(msm_host->bus_clk);

@@ -2754,8 +2747,6 @@ static int sdhci_msm_remove(struct platform_device *pdev)

	sdhci_remove_host(host, dead);

-	dev_pm_opp_of_remove_table(&pdev->dev);
-	dev_pm_opp_put_clkname(msm_host->opp_table);
	pm_runtime_get_sync(&pdev->dev);
	pm_runtime_disable(&pdev->dev);
	pm_runtime_put_noidle(&pdev->dev);


@@ -1857,6 +1857,35 @@ void dev_pm_opp_put_supported_hw(struct opp_table *opp_table)
}
EXPORT_SYMBOL_GPL(dev_pm_opp_put_supported_hw);
static void devm_pm_opp_supported_hw_release(void *data)
{
dev_pm_opp_put_supported_hw(data);
}
/**
* devm_pm_opp_set_supported_hw() - Set supported platforms
* @dev: Device for which supported-hw has to be set.
* @versions: Array of hierarchy of versions to match.
* @count: Number of elements in the array.
*
* This is a resource-managed variant of dev_pm_opp_set_supported_hw().
*
* Return: 0 on success and errorno otherwise.
*/
int devm_pm_opp_set_supported_hw(struct device *dev, const u32 *versions,
unsigned int count)
{
struct opp_table *opp_table;
opp_table = dev_pm_opp_set_supported_hw(dev, versions, count);
if (IS_ERR(opp_table))
return PTR_ERR(opp_table);
return devm_add_action_or_reset(dev, devm_pm_opp_supported_hw_release,
opp_table);
}
EXPORT_SYMBOL_GPL(devm_pm_opp_set_supported_hw);
/**
 * dev_pm_opp_set_prop_name() - Set prop-extn name
 * @dev: Device for which the prop-name has to be set.

@@ -2047,6 +2076,36 @@ put_opp_table:
}
EXPORT_SYMBOL_GPL(dev_pm_opp_put_regulators);
static void devm_pm_opp_regulators_release(void *data)
{
dev_pm_opp_put_regulators(data);
}
/**
* devm_pm_opp_set_regulators() - Set regulator names for the device
* @dev: Device for which regulator name is being set.
* @names: Array of pointers to the names of the regulator.
* @count: Number of regulators.
*
* This is a resource-managed variant of dev_pm_opp_set_regulators().
*
* Return: 0 on success and errorno otherwise.
*/
int devm_pm_opp_set_regulators(struct device *dev,
const char * const names[],
unsigned int count)
{
struct opp_table *opp_table;
opp_table = dev_pm_opp_set_regulators(dev, names, count);
if (IS_ERR(opp_table))
return PTR_ERR(opp_table);
return devm_add_action_or_reset(dev, devm_pm_opp_regulators_release,
opp_table);
}
EXPORT_SYMBOL_GPL(devm_pm_opp_set_regulators);
/** /**
* dev_pm_opp_set_clkname() - Set clk name for the device * dev_pm_opp_set_clkname() - Set clk name for the device
* @dev: Device for which clk name is being set. * @dev: Device for which clk name is being set.
@@ -2119,6 +2178,33 @@ void dev_pm_opp_put_clkname(struct opp_table *opp_table)
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_put_clkname);
 
+static void devm_pm_opp_clkname_release(void *data)
+{
+	dev_pm_opp_put_clkname(data);
+}
+
+/**
+ * devm_pm_opp_set_clkname() - Set clk name for the device
+ * @dev: Device for which clk name is being set.
+ * @name: Clk name.
+ *
+ * This is a resource-managed variant of dev_pm_opp_set_clkname().
+ *
+ * Return: 0 on success and errorno otherwise.
+ */
+int devm_pm_opp_set_clkname(struct device *dev, const char *name)
+{
+	struct opp_table *opp_table;
+
+	opp_table = dev_pm_opp_set_clkname(dev, name);
+	if (IS_ERR(opp_table))
+		return PTR_ERR(opp_table);
+
+	return devm_add_action_or_reset(dev, devm_pm_opp_clkname_release,
+					opp_table);
+}
+EXPORT_SYMBOL_GPL(devm_pm_opp_set_clkname);
+
 /**
  * dev_pm_opp_register_set_opp_helper() - Register custom set OPP helper
  * @dev: Device for which the helper is getting registered.
@@ -2209,25 +2295,19 @@ static void devm_pm_opp_unregister_set_opp_helper(void *data)
  *
  * This is a resource-managed version of dev_pm_opp_register_set_opp_helper().
  *
- * Return: pointer to 'struct opp_table' on success and errorno otherwise.
+ * Return: 0 on success and errorno otherwise.
  */
-struct opp_table *
-devm_pm_opp_register_set_opp_helper(struct device *dev,
-				    int (*set_opp)(struct dev_pm_set_opp_data *data))
+int devm_pm_opp_register_set_opp_helper(struct device *dev,
+					int (*set_opp)(struct dev_pm_set_opp_data *data))
 {
 	struct opp_table *opp_table;
-	int err;
 
 	opp_table = dev_pm_opp_register_set_opp_helper(dev, set_opp);
 	if (IS_ERR(opp_table))
-		return opp_table;
+		return PTR_ERR(opp_table);
 
-	err = devm_add_action_or_reset(dev, devm_pm_opp_unregister_set_opp_helper,
-				       opp_table);
-	if (err)
-		return ERR_PTR(err);
-
-	return opp_table;
+	return devm_add_action_or_reset(dev, devm_pm_opp_unregister_set_opp_helper,
+					opp_table);
 }
 EXPORT_SYMBOL_GPL(devm_pm_opp_register_set_opp_helper);
@@ -2380,25 +2460,19 @@ static void devm_pm_opp_detach_genpd(void *data)
  *
  * This is a resource-managed version of dev_pm_opp_attach_genpd().
  *
- * Return: pointer to 'struct opp_table' on success and errorno otherwise.
+ * Return: 0 on success and errorno otherwise.
  */
-struct opp_table *
-devm_pm_opp_attach_genpd(struct device *dev, const char **names,
-			 struct device ***virt_devs)
+int devm_pm_opp_attach_genpd(struct device *dev, const char **names,
+			     struct device ***virt_devs)
 {
 	struct opp_table *opp_table;
-	int err;
 
 	opp_table = dev_pm_opp_attach_genpd(dev, names, virt_devs);
 	if (IS_ERR(opp_table))
-		return opp_table;
+		return PTR_ERR(opp_table);
 
-	err = devm_add_action_or_reset(dev, devm_pm_opp_detach_genpd,
-				       opp_table);
-	if (err)
-		return ERR_PTR(err);
-
-	return opp_table;
+	return devm_add_action_or_reset(dev, devm_pm_opp_detach_genpd,
+					opp_table);
 }
 EXPORT_SYMBOL_GPL(devm_pm_opp_attach_genpd);
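Each of the devm_ wrappers added above follows the same devres idiom: perform the unmanaged setup call, then hand the resulting handle to devm_add_action_or_reset(), which queues the release callback for driver detach and, if registering the action itself fails, runs the callback immediately and returns the error. A generic sketch of the idiom (the resource type and the example_get()/example_put() helpers are hypothetical):

	#include <linux/device.h>
	#include <linux/err.h>

	static void devm_example_release(void *data)
	{
		example_put(data);	/* hypothetical unmanaged cleanup */
	}

	int devm_example_get(struct device *dev)
	{
		struct example *res;

		res = example_get(dev);	/* hypothetical unmanaged setup */
		if (IS_ERR(res))
			return PTR_ERR(res);

		/*
		 * On success the release callback runs at detach; if the
		 * registration fails it runs right here and the error code
		 * is returned to the caller.
		 */
		return devm_add_action_or_reset(dev, devm_example_release, res);
	}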


@@ -1104,6 +1104,42 @@ static int _of_add_table_indexed(struct device *dev, int index, bool getclk)
 	return ret;
 }
 
+static void devm_pm_opp_of_table_release(void *data)
+{
+	dev_pm_opp_of_remove_table(data);
+}
+
+/**
+ * devm_pm_opp_of_add_table() - Initialize opp table from device tree
+ * @dev: device pointer used to lookup OPP table.
+ *
+ * Register the initial OPP table with the OPP library for given device.
+ *
+ * The opp_table structure will be freed after the device is destroyed.
+ *
+ * Return:
+ * 0		On success OR
+ *		Duplicate OPPs (both freq and volt are same) and opp->available
+ * -EEXIST	Freq are same and volt are different OR
+ *		Duplicate OPPs (both freq and volt are same) and !opp->available
+ * -ENOMEM	Memory allocation failure
+ * -ENODEV	when 'operating-points' property is not found or is invalid data
+ *		in device node.
+ * -ENODATA	when empty 'operating-points' property is found
+ * -EINVAL	when invalid entries are found in opp-v2 table
+ */
+int devm_pm_opp_of_add_table(struct device *dev)
+{
+	int ret;
+
+	ret = dev_pm_opp_of_add_table(dev);
+	if (ret)
+		return ret;
+
+	return devm_add_action_or_reset(dev, devm_pm_opp_of_table_release, dev);
+}
+EXPORT_SYMBOL_GPL(devm_pm_opp_of_add_table);
+
 /**
  * dev_pm_opp_of_add_table() - Initialize opp table from device tree
  * @dev: device pointer used to lookup OPP table.


@@ -1870,20 +1870,10 @@ static int pci_enable_device_flags(struct pci_dev *dev, unsigned long flags)
 	int err;
 	int i, bars = 0;
 
-	/*
-	 * Power state could be unknown at this point, either due to a fresh
-	 * boot or a device removal call. So get the current power state
-	 * so that things like MSI message writing will behave as expected
-	 * (e.g. if the device really is in D0 at enable time).
-	 */
-	if (dev->pm_cap) {
-		u16 pmcsr;
-
-		pci_read_config_word(dev, dev->pm_cap + PCI_PM_CTRL, &pmcsr);
-		dev->current_state = (pmcsr & PCI_PM_CTRL_STATE_MASK);
-	}
-
-	if (atomic_inc_return(&dev->enable_cnt) > 1)
+	if (atomic_inc_return(&dev->enable_cnt) > 1) {
+		pci_update_current_state(dev, dev->current_state);
 		return 0;		/* already enabled */
+	}
 
 	bridge = pci_upstream_bridge(dev);
 	if (bridge)


@@ -1069,6 +1069,7 @@ static const struct x86_cpu_id rapl_ids[] __initconst = {
 	X86_MATCH_VENDOR_FAM(AMD, 0x17, &rapl_defaults_amd),
 	X86_MATCH_VENDOR_FAM(AMD, 0x19, &rapl_defaults_amd),
+	X86_MATCH_VENDOR_FAM(HYGON, 0x18, &rapl_defaults_amd),
 	{}
 };
 MODULE_DEVICE_TABLE(x86cpu, rapl_ids);


@@ -150,6 +150,7 @@ static int rapl_msr_probe(struct platform_device *pdev)
 	case X86_VENDOR_INTEL:
 		rapl_msr_priv = &rapl_msr_priv_intel;
 		break;
+	case X86_VENDOR_HYGON:
 	case X86_VENDOR_AMD:
 		rapl_msr_priv = &rapl_msr_priv_amd;
 		break;


@@ -691,14 +691,15 @@ static int spi_geni_probe(struct platform_device *pdev)
 	mas->se.wrapper = dev_get_drvdata(dev->parent);
 	mas->se.base = base;
 	mas->se.clk = clk;
-	mas->se.opp_table = dev_pm_opp_set_clkname(&pdev->dev, "se");
-	if (IS_ERR(mas->se.opp_table))
-		return PTR_ERR(mas->se.opp_table);
+
+	ret = devm_pm_opp_set_clkname(&pdev->dev, "se");
+	if (ret)
+		return ret;
+
 	/* OPP table is optional */
-	ret = dev_pm_opp_of_add_table(&pdev->dev);
+	ret = devm_pm_opp_of_add_table(&pdev->dev);
 	if (ret && ret != -ENODEV) {
 		dev_err(&pdev->dev, "invalid OPP table in device tree\n");
-		goto put_clkname;
+		return ret;
 	}
 
 	spi->bus_num = -1;
@@ -750,9 +751,6 @@ spi_geni_probe_free_irq:
 	free_irq(mas->irq, spi);
 spi_geni_probe_runtime_disable:
 	pm_runtime_disable(dev);
-	dev_pm_opp_of_remove_table(&pdev->dev);
-put_clkname:
-	dev_pm_opp_put_clkname(mas->se.opp_table);
 	return ret;
 }
 
@@ -766,8 +764,6 @@ static int spi_geni_remove(struct platform_device *pdev)
 	free_irq(mas->irq, spi);
 
 	pm_runtime_disable(&pdev->dev);
-	dev_pm_opp_of_remove_table(&pdev->dev);
-	dev_pm_opp_put_clkname(mas->se.opp_table);
 	return 0;
 }


@@ -142,7 +142,6 @@ struct qcom_qspi {
 	struct clk_bulk_data *clks;
 	struct qspi_xfer xfer;
 	struct icc_path *icc_path_cpu_to_qspi;
-	struct opp_table *opp_table;
 	unsigned long last_speed;
 	/* Lock to protect data accessed by IRQs */
 	spinlock_t lock;
@@ -530,14 +529,14 @@ static int qcom_qspi_probe(struct platform_device *pdev)
 	master->handle_err = qcom_qspi_handle_err;
 	master->auto_runtime_pm = true;
 
-	ctrl->opp_table = dev_pm_opp_set_clkname(&pdev->dev, "core");
-	if (IS_ERR(ctrl->opp_table))
-		return PTR_ERR(ctrl->opp_table);
+	ret = devm_pm_opp_set_clkname(&pdev->dev, "core");
+	if (ret)
+		return ret;
+
 	/* OPP table is optional */
-	ret = dev_pm_opp_of_add_table(&pdev->dev);
+	ret = devm_pm_opp_of_add_table(&pdev->dev);
 	if (ret && ret != -ENODEV) {
 		dev_err(&pdev->dev, "invalid OPP table in device tree\n");
-		goto exit_probe_put_clkname;
+		return ret;
 	}
 
 	pm_runtime_use_autosuspend(dev);
@@ -549,10 +548,6 @@ static int qcom_qspi_probe(struct platform_device *pdev)
 	return 0;
 
 	pm_runtime_disable(dev);
-	dev_pm_opp_of_remove_table(&pdev->dev);
-exit_probe_put_clkname:
-	dev_pm_opp_put_clkname(ctrl->opp_table);
 	return ret;
 }
 
@@ -560,14 +555,11 @@ exit_probe_put_clkname:
 static int qcom_qspi_remove(struct platform_device *pdev)
 {
 	struct spi_master *master = platform_get_drvdata(pdev);
-	struct qcom_qspi *ctrl = spi_master_get_devdata(master);
 
 	/* Unregister _before_ disabling pm_runtime() so we stop transfers */
 	spi_unregister_master(master);
 
 	pm_runtime_disable(&pdev->dev);
-	dev_pm_opp_of_remove_table(&pdev->dev);
-	dev_pm_opp_put_clkname(ctrl->opp_table);
 
 	return 0;
 }


@@ -1426,14 +1426,14 @@ static int qcom_geni_serial_probe(struct platform_device *pdev)
 	if (of_property_read_bool(pdev->dev.of_node, "cts-rts-swap"))
 		port->cts_rts_swap = true;
 
-	port->se.opp_table = dev_pm_opp_set_clkname(&pdev->dev, "se");
-	if (IS_ERR(port->se.opp_table))
-		return PTR_ERR(port->se.opp_table);
+	ret = devm_pm_opp_set_clkname(&pdev->dev, "se");
+	if (ret)
+		return ret;
+
 	/* OPP table is optional */
-	ret = dev_pm_opp_of_add_table(&pdev->dev);
+	ret = devm_pm_opp_of_add_table(&pdev->dev);
 	if (ret && ret != -ENODEV) {
 		dev_err(&pdev->dev, "invalid OPP table in device tree\n");
-		goto put_clkname;
+		return ret;
 	}
 
 	port->private_data.drv = drv;
@@ -1443,7 +1443,7 @@ static int qcom_geni_serial_probe(struct platform_device *pdev)
 	ret = uart_add_one_port(drv, uport);
 	if (ret)
-		goto err;
+		return ret;
 
 	irq_set_status_flags(uport->irq, IRQ_NOAUTOEN);
 	ret = devm_request_irq(uport->dev, uport->irq, qcom_geni_serial_isr,
@@ -1451,7 +1451,7 @@ static int qcom_geni_serial_probe(struct platform_device *pdev)
 	if (ret) {
 		dev_err(uport->dev, "Failed to get IRQ ret %d\n", ret);
 		uart_remove_one_port(drv, uport);
-		goto err;
+		return ret;
 	}
 
 	/*
@@ -1468,16 +1468,11 @@ static int qcom_geni_serial_probe(struct platform_device *pdev)
 		if (ret) {
 			device_init_wakeup(&pdev->dev, false);
 			uart_remove_one_port(drv, uport);
-			goto err;
+			return ret;
 		}
 	}
 
 	return 0;
-err:
-	dev_pm_opp_of_remove_table(&pdev->dev);
-put_clkname:
-	dev_pm_opp_put_clkname(port->se.opp_table);
-	return ret;
 }
 
 static int qcom_geni_serial_remove(struct platform_device *pdev)
@@ -1485,8 +1480,6 @@ static int qcom_geni_serial_remove(struct platform_device *pdev)
 	struct qcom_geni_serial_port *port = platform_get_drvdata(pdev);
 	struct uart_driver *drv = port->private_data.drv;
 
-	dev_pm_opp_of_remove_table(&pdev->dev);
-	dev_pm_opp_put_clkname(port->se.opp_table);
 	dev_pm_clear_wake_irq(&pdev->dev);
 	device_init_wakeup(&pdev->dev, false);
 	uart_remove_one_port(drv, &port->uport);


@@ -23,18 +23,31 @@ static inline unsigned long topology_get_cpu_scale(int cpu)
 void topology_set_cpu_scale(unsigned int cpu, unsigned long capacity);
 
-DECLARE_PER_CPU(unsigned long, freq_scale);
+DECLARE_PER_CPU(unsigned long, arch_freq_scale);
 
 static inline unsigned long topology_get_freq_scale(int cpu)
 {
-	return per_cpu(freq_scale, cpu);
+	return per_cpu(arch_freq_scale, cpu);
 }
 
 void topology_set_freq_scale(const struct cpumask *cpus, unsigned long cur_freq,
 			     unsigned long max_freq);
 bool topology_scale_freq_invariant(void);
 
-bool arch_freq_counters_available(const struct cpumask *cpus);
+enum scale_freq_source {
+	SCALE_FREQ_SOURCE_CPUFREQ = 0,
+	SCALE_FREQ_SOURCE_ARCH,
+	SCALE_FREQ_SOURCE_CPPC,
+};
+
+struct scale_freq_data {
+	enum scale_freq_source source;
+	void (*set_freq_scale)(void);
+};
+
+void topology_scale_freq_tick(void);
+void topology_set_scale_freq_source(struct scale_freq_data *data, const struct cpumask *cpus);
+void topology_clear_scale_freq_source(enum scale_freq_source source, const struct cpumask *cpus);
 
 DECLARE_PER_CPU(unsigned long, thermal_pressure);
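This replaces the old arch_freq_counters_available() hook with a generic registration interface: a provider hands the topology code a scale_freq_data whose set_freq_scale() callback is invoked from topology_scale_freq_tick() on every scheduler tick on the CPUs it registered for, and sources are prioritized by their enum value so that, per CPU, a higher-priority source supersedes a lower-priority one. A hedged sketch of how a source might register itself (the callback body is an assumption; in this cycle the arm64 AMU code and the CPPC cpufreq FIE are the users of this interface):

	#include <linux/arch_topology.h>
	#include <linux/cpumask.h>

	static void my_set_freq_scale(void)
	{
		/*
		 * Derive the delivered/maximum frequency ratio for this CPU
		 * from whatever counters the source owns and publish it via
		 * the per-CPU arch_freq_scale (details depend on the source).
		 */
	}

	static struct scale_freq_data my_sfd = {
		.source		= SCALE_FREQ_SOURCE_CPPC,
		.set_freq_scale	= my_set_freq_scale,
	};

	static void my_source_init(const struct cpumask *cpus)
	{
		topology_set_scale_freq_source(&my_sfd, cpus);
	}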


@@ -49,8 +49,8 @@ struct cpuidle_state {
 	char		name[CPUIDLE_NAME_LEN];
 	char		desc[CPUIDLE_DESC_LEN];
 
-	u64		exit_latency_ns;
-	u64		target_residency_ns;
+	s64		exit_latency_ns;
+	s64		target_residency_ns;
 	unsigned int	flags;
 	unsigned int	exit_latency; /* in US */
 	int		power_usage; /* in mW */


@@ -38,6 +38,7 @@ enum devfreq_timer {
 
 struct devfreq;
 struct devfreq_governor;
+struct thermal_cooling_device;
 
 /**
  * struct devfreq_dev_status - Data given from devfreq user device to
@@ -98,11 +99,15 @@ struct devfreq_dev_status {
  * @freq_table:		Optional list of frequencies to support statistics
  *			and freq_table must be generated in ascending order.
  * @max_state:		The size of freq_table.
+ *
+ * @is_cooling_device:	A self-explanatory boolean giving the device a
+ *			cooling effect property.
  */
 struct devfreq_dev_profile {
 	unsigned long initial_freq;
 	unsigned int polling_ms;
 	enum devfreq_timer timer;
+	bool is_cooling_device;
 
 	int (*target)(struct device *dev, unsigned long *freq, u32 flags);
 	int (*get_dev_status)(struct device *dev,
@@ -156,6 +161,7 @@ struct devfreq_stats {
  * @suspend_count:	 suspend requests counter for a device.
  * @stats:	 Statistics of devfreq device behavior
  * @transition_notifier_list: list head of DEVFREQ_TRANSITION_NOTIFIER notifier
+ * @cdev:	Cooling device pointer if the devfreq has cooling property
  * @nb_min:	Notifier block for DEV_PM_QOS_MIN_FREQUENCY
  * @nb_max:	Notifier block for DEV_PM_QOS_MAX_FREQUENCY
  *
@@ -198,6 +204,9 @@ struct devfreq {
 	struct srcu_notifier_head transition_notifier_list;
 
+	/* Pointer to the cooling device if used for thermal mitigation */
+	struct thermal_cooling_device *cdev;
+
 	struct notifier_block nb_min;
 	struct notifier_block nb_max;
 };
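With these fields in place, a devfreq driver can opt into passive thermal mitigation simply by setting the new flag in its profile; the devfreq core is then expected to register a thermal cooling device on the driver's behalf and keep it in @cdev. An illustrative profile (callback bodies, names and values are invented for the sketch):

	#include <linux/devfreq.h>

	static int my_target(struct device *dev, unsigned long *freq, u32 flags)
	{
		return 0;	/* hypothetical: program the requested frequency */
	}

	static int my_get_status(struct device *dev, struct devfreq_dev_status *stat)
	{
		return 0;	/* hypothetical: fill in busy/total load figures */
	}

	static struct devfreq_dev_profile my_profile = {
		.initial_freq		= 600000000,	/* Hz */
		.polling_ms		= 100,
		.timer			= DEVFREQ_TIMER_DEFERRABLE,
		.is_cooling_device	= true,		/* new in this cycle */
		.target			= my_target,
		.get_dev_status		= my_get_status,
	};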


@@ -279,7 +279,6 @@ static inline int freeze_kernel_threads(void) { return -ENOSYS; }
 static inline void thaw_processes(void) {}
 static inline void thaw_kernel_threads(void) {}
 
-static inline bool try_to_freeze_nowarn(void) { return false; }
 static inline bool try_to_freeze(void) { return false; }
 
 static inline void freezer_do_not_count(void) {}


@@ -33,7 +33,7 @@ enum rapl_domain_reg_id {
 	RAPL_DOMAIN_REG_MAX,
 };
 
-struct rapl_package;
+struct rapl_domain;
 
 enum rapl_primitives {
 	ENERGY_COUNTER,

@@ -39,7 +39,6 @@ static inline void pm_vt_switch_unregister(struct device *dev)
  * Device power management
  */
 
-struct device;
 
 #ifdef CONFIG_PM
 
 extern const char power_group_name[];	/* = "power" */


@@ -144,18 +144,21 @@ int dev_pm_opp_unregister_notifier(struct device *dev, struct notifier_block *nb
 struct opp_table *dev_pm_opp_set_supported_hw(struct device *dev, const u32 *versions, unsigned int count);
 void dev_pm_opp_put_supported_hw(struct opp_table *opp_table);
+int devm_pm_opp_set_supported_hw(struct device *dev, const u32 *versions, unsigned int count);
 struct opp_table *dev_pm_opp_set_prop_name(struct device *dev, const char *name);
 void dev_pm_opp_put_prop_name(struct opp_table *opp_table);
 struct opp_table *dev_pm_opp_set_regulators(struct device *dev, const char * const names[], unsigned int count);
 void dev_pm_opp_put_regulators(struct opp_table *opp_table);
+int devm_pm_opp_set_regulators(struct device *dev, const char * const names[], unsigned int count);
 struct opp_table *dev_pm_opp_set_clkname(struct device *dev, const char *name);
 void dev_pm_opp_put_clkname(struct opp_table *opp_table);
+int devm_pm_opp_set_clkname(struct device *dev, const char *name);
 struct opp_table *dev_pm_opp_register_set_opp_helper(struct device *dev, int (*set_opp)(struct dev_pm_set_opp_data *data));
 void dev_pm_opp_unregister_set_opp_helper(struct opp_table *opp_table);
-struct opp_table *devm_pm_opp_register_set_opp_helper(struct device *dev, int (*set_opp)(struct dev_pm_set_opp_data *data));
+int devm_pm_opp_register_set_opp_helper(struct device *dev, int (*set_opp)(struct dev_pm_set_opp_data *data));
 struct opp_table *dev_pm_opp_attach_genpd(struct device *dev, const char **names, struct device ***virt_devs);
 void dev_pm_opp_detach_genpd(struct opp_table *opp_table);
-struct opp_table *devm_pm_opp_attach_genpd(struct device *dev, const char **names, struct device ***virt_devs);
+int devm_pm_opp_attach_genpd(struct device *dev, const char **names, struct device ***virt_devs);
 struct dev_pm_opp *dev_pm_opp_xlate_required_opp(struct opp_table *src_table, struct opp_table *dst_table, struct dev_pm_opp *src_opp);
 int dev_pm_opp_xlate_performance_state(struct opp_table *src_table, struct opp_table *dst_table, unsigned int pstate);
 int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq);
@@ -319,6 +322,13 @@ static inline struct opp_table *dev_pm_opp_set_supported_hw(struct device *dev,
 static inline void dev_pm_opp_put_supported_hw(struct opp_table *opp_table) {}
 
+static inline int devm_pm_opp_set_supported_hw(struct device *dev,
+					       const u32 *versions,
+					       unsigned int count)
+{
+	return -EOPNOTSUPP;
+}
+
 static inline struct opp_table *dev_pm_opp_register_set_opp_helper(struct device *dev,
 			int (*set_opp)(struct dev_pm_set_opp_data *data))
 {
@@ -327,11 +337,10 @@ static inline struct opp_table *dev_pm_opp_register_set_opp_helper(struct device
 static inline void dev_pm_opp_unregister_set_opp_helper(struct opp_table *opp_table) {}
 
-static inline struct opp_table *
-devm_pm_opp_register_set_opp_helper(struct device *dev,
+static inline int devm_pm_opp_register_set_opp_helper(struct device *dev,
 			int (*set_opp)(struct dev_pm_set_opp_data *data))
 {
-	return ERR_PTR(-EOPNOTSUPP);
+	return -EOPNOTSUPP;
 }
 
 static inline struct opp_table *dev_pm_opp_set_prop_name(struct device *dev, const char *name)
@@ -348,6 +357,13 @@ static inline struct opp_table *dev_pm_opp_set_regulators(struct device *dev, co
 static inline void dev_pm_opp_put_regulators(struct opp_table *opp_table) {}
 
+static inline int devm_pm_opp_set_regulators(struct device *dev,
+					     const char * const names[],
+					     unsigned int count)
+{
+	return -EOPNOTSUPP;
+}
+
 static inline struct opp_table *dev_pm_opp_set_clkname(struct device *dev, const char *name)
 {
 	return ERR_PTR(-EOPNOTSUPP);
@@ -355,6 +371,11 @@ static inline struct opp_table *dev_pm_opp_set_clkname(struct device *dev, const
 static inline void dev_pm_opp_put_clkname(struct opp_table *opp_table) {}
 
+static inline int devm_pm_opp_set_clkname(struct device *dev, const char *name)
+{
+	return -EOPNOTSUPP;
+}
+
 static inline struct opp_table *dev_pm_opp_attach_genpd(struct device *dev, const char **names, struct device ***virt_devs)
 {
 	return ERR_PTR(-EOPNOTSUPP);
@@ -362,10 +383,11 @@ static inline struct opp_table *dev_pm_opp_attach_genpd(struct device *dev, cons
 static inline void dev_pm_opp_detach_genpd(struct opp_table *opp_table) {}
 
-static inline struct opp_table *devm_pm_opp_attach_genpd(struct device *dev,
-			const char **names, struct device ***virt_devs)
+static inline int devm_pm_opp_attach_genpd(struct device *dev,
+					   const char **names,
+					   struct device ***virt_devs)
 {
-	return ERR_PTR(-EOPNOTSUPP);
+	return -EOPNOTSUPP;
 }
 
 static inline struct dev_pm_opp *dev_pm_opp_xlate_required_opp(struct opp_table *src_table,
@@ -419,6 +441,7 @@ int dev_pm_opp_of_add_table(struct device *dev);
 int dev_pm_opp_of_add_table_indexed(struct device *dev, int index);
 int dev_pm_opp_of_add_table_noclk(struct device *dev, int index);
 void dev_pm_opp_of_remove_table(struct device *dev);
+int devm_pm_opp_of_add_table(struct device *dev);
 int dev_pm_opp_of_cpumask_add_table(const struct cpumask *cpumask);
 void dev_pm_opp_of_cpumask_remove_table(const struct cpumask *cpumask);
 int dev_pm_opp_of_get_sharing_cpus(struct device *cpu_dev, struct cpumask *cpumask);
@@ -451,6 +474,11 @@ static inline void dev_pm_opp_of_remove_table(struct device *dev)
 {
 }
 
+static inline int devm_pm_opp_of_add_table(struct device *dev)
+{
+	return -EOPNOTSUPP;
+}
+
 static inline int dev_pm_opp_of_cpumask_add_table(const struct cpumask *cpumask)
 {
 	return -EOPNOTSUPP;


@@ -265,7 +265,7 @@ static inline void pm_runtime_no_callbacks(struct device *dev) {}
 static inline void pm_runtime_irq_safe(struct device *dev) {}
 static inline bool pm_runtime_is_irq_safe(struct device *dev) { return false; }
 
-static inline bool pm_runtime_callbacks_present(struct device *dev) { return false; }
+static inline bool pm_runtime_has_no_callbacks(struct device *dev) { return false; }
 static inline void pm_runtime_mark_last_busy(struct device *dev) {}
 static inline void __pm_runtime_use_autosuspend(struct device *dev,
 						bool use) {}


@@ -47,7 +47,6 @@ struct geni_icc_path {
  * @num_clk_levels:	Number of valid clock levels in clk_perf_tbl
  * @clk_perf_tbl:	Table of clock frequency input to serial engine clock
  * @icc_paths:		Array of ICC paths for SE
- * @opp_table:		Pointer to the OPP table
  */
 struct geni_se {
 	void __iomem *base;
@@ -57,7 +56,6 @@ struct geni_se {
 	unsigned int num_clk_levels;
 	unsigned long *clk_perf_tbl;
 	struct geni_icc_path icc_paths[3];
-	struct opp_table *opp_table;
 };
 
 /* Common SE registers */


@@ -54,7 +54,7 @@ static void try_to_suspend(struct work_struct *work)
 		goto out;
 
 	/*
-	 * If the wakeup occured for an unknown reason, wait to prevent the
+	 * If the wakeup occurred for an unknown reason, wait to prevent the
 	 * system from trying to suspend and waking up in a tight loop.
 	 */
 	if (final_count == initial_count)


@@ -329,7 +329,7 @@ static void *chain_alloc(struct chain_allocator *ca, unsigned int size)
 /**
  * Data types related to memory bitmaps.
  *
- * Memory bitmap is a structure consiting of many linked lists of
+ * Memory bitmap is a structure consisting of many linked lists of
  * objects. The main list's elements are of type struct zone_bitmap
  * and each of them corresonds to one zone. For each zone bitmap
  * object there is a list of objects of type struct bm_block that


@@ -884,7 +884,7 @@ out_clean:
  *	enough_swap - Make sure we have enough swap to save the image.
  *
  *	Returns TRUE or FALSE after checking the total amount of swap
- *	space avaiable from the resume partition.
+ *	space available from the resume partition.
  */
 
 static int enough_swap(unsigned int nr_pages)


@@ -6384,6 +6384,7 @@ int sched_setattr_nocheck(struct task_struct *p, const struct sched_attr *attr)
 {
 	return __sched_setscheduler(p, attr, false, true);
 }
+EXPORT_SYMBOL_GPL(sched_setattr_nocheck);
 
 /**
  * sched_setscheduler_nocheck - change the scheduling policy and/or RT priority of a thread from kernelspace.


@@ -114,19 +114,8 @@ static bool sugov_update_next_freq(struct sugov_policy *sg_policy, u64 time,
 	return true;
 }
 
-static void sugov_fast_switch(struct sugov_policy *sg_policy, u64 time,
-			      unsigned int next_freq)
+static void sugov_deferred_update(struct sugov_policy *sg_policy)
 {
-	if (sugov_update_next_freq(sg_policy, time, next_freq))
-		cpufreq_driver_fast_switch(sg_policy->policy, next_freq);
-}
-
-static void sugov_deferred_update(struct sugov_policy *sg_policy, u64 time,
-				  unsigned int next_freq)
-{
-	if (!sugov_update_next_freq(sg_policy, time, next_freq))
-		return;
-
 	if (!sg_policy->work_in_progress) {
 		sg_policy->work_in_progress = true;
 		irq_work_queue(&sg_policy->irq_work);
@@ -366,16 +355,19 @@ static void sugov_update_single_freq(struct update_util_data *hook, u64 time,
 		sg_policy->cached_raw_freq = cached_freq;
 	}
 
+	if (!sugov_update_next_freq(sg_policy, time, next_f))
+		return;
+
 	/*
 	 * This code runs under rq->lock for the target CPU, so it won't run
 	 * concurrently on two different CPUs for the same target and it is not
 	 * necessary to acquire the lock in the fast switch case.
	 */
 	if (sg_policy->policy->fast_switch_enabled) {
-		sugov_fast_switch(sg_policy, time, next_f);
+		cpufreq_driver_fast_switch(sg_policy->policy, next_f);
 	} else {
 		raw_spin_lock(&sg_policy->update_lock);
-		sugov_deferred_update(sg_policy, time, next_f);
+		sugov_deferred_update(sg_policy);
 		raw_spin_unlock(&sg_policy->update_lock);
 	}
 }
@@ -454,12 +446,15 @@ sugov_update_shared(struct update_util_data *hook, u64 time, unsigned int flags)
 	if (sugov_should_update_freq(sg_policy, time)) {
 		next_f = sugov_next_freq_shared(sg_cpu, time);
 
-		if (sg_policy->policy->fast_switch_enabled)
-			sugov_fast_switch(sg_policy, time, next_f);
-		else
-			sugov_deferred_update(sg_policy, time, next_f);
-	}
+		if (!sugov_update_next_freq(sg_policy, time, next_f))
+			goto unlock;
 
+		if (sg_policy->policy->fast_switch_enabled)
+			cpufreq_driver_fast_switch(sg_policy->policy, next_f);
+		else
+			sugov_deferred_update(sg_policy);
+	}
+unlock:
 	raw_spin_unlock(&sg_policy->update_lock);
 }


@@ -1124,7 +1124,11 @@ ktime_t tick_nohz_get_next_hrtimer(void)
 * tick_nohz_get_sleep_length - return the expected length of the current sleep
 * @delta_next: duration until the next event if the tick cannot be stopped
 *
- * Called from power state control code with interrupts disabled
+ * Called from power state control code with interrupts disabled.
+ *
+ * The return value of this function and/or the value returned by it through the
+ * @delta_next pointer can be negative which must be taken into account by its
+ * callers.
 */
 ktime_t tick_nohz_get_sleep_length(ktime_t *delta_next)
 {
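The cpuidle governors consuming this value were adjusted accordingly: since both the return value and *delta_next may now be negative, a caller should clamp before converting to an unsigned duration. A minimal hedged sketch of such defensive use (not the exact governor code):

	ktime_t delta_next;
	s64 duration_ns;

	duration_ns = ktime_to_ns(tick_nohz_get_sleep_length(&delta_next));
	if (duration_ns < 0)
		duration_ns = 0;	/* treat a negative sleep length as zero */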


@@ -6819,7 +6819,7 @@ if __name__ == '__main__':
 			sysvals.outdir = val
 			sysvals.notestrun = True
 			if(os.path.isdir(val) == False):
-				doError('%s is not accesible' % val)
+				doError('%s is not accessible' % val)
 		elif(arg == '-filter'):
 			try:
 				val = next(args)