thermal: core: Resume thermal zones asynchronously

The resume of thermal zones in thermal_pm_notify() is carried out
sequentially, which may be a problem if __thermal_zone_device_update()
takes a significant time to run for some thermal zones: other thermal
zones may then need to wait for them to resume, and any other PM
notifiers invoked after the thermal one will need to wait for it too.

To address this, make thermal_pm_notify() switch the poll_queue delayed
work over to a one-shot thermal_zone_device_resume() work function,
which will restore the original work function while resuming the
thermal zone, and queue up poll_queue without a delay for each thermal
zone, so the zones can resume concurrently.

Link: https://lore.kernel.org/linux-pm/20231120234015.3273143-1-radusolea@google.com/
Reported-by: Radu Solea <radusolea@google.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Rafael J. Wysocki 2023-12-18 20:28:31 +01:00
parent 33fcb595dc
commit 5a5efdaffd

@@ -1532,6 +1532,22 @@ exit:
 }
 EXPORT_SYMBOL_GPL(thermal_zone_get_zone_by_name);
 
+static void thermal_zone_device_resume(struct work_struct *work)
+{
+	struct thermal_zone_device *tz;
+
+	tz = container_of(work, struct thermal_zone_device, poll_queue.work);
+
+	mutex_lock(&tz->lock);
+
+	tz->suspended = false;
+
+	thermal_zone_device_init(tz);
+	__thermal_zone_device_update(tz, THERMAL_EVENT_UNSPECIFIED);
+
+	mutex_unlock(&tz->lock);
+}
+
 static int thermal_pm_notify(struct notifier_block *nb,
 			     unsigned long mode, void *_unused)
 {
@@ -1563,10 +1579,16 @@ static int thermal_pm_notify(struct notifier_block *nb,
-			tz->suspended = false;
-
-			thermal_zone_device_init(tz);
-			__thermal_zone_device_update(tz,
-						     THERMAL_EVENT_UNSPECIFIED);
+			cancel_delayed_work(&tz->poll_queue);
+
+			/*
+			 * Replace the work function with the resume one, which
+			 * will restore the original work function and schedule
+			 * the polling work if needed.
+			 */
+			INIT_DELAYED_WORK(&tz->poll_queue,
+					  thermal_zone_device_resume);
+			/* Queue up the work without a delay. */
+			mod_delayed_work(system_freezable_power_efficient_wq,
+					 &tz->poll_queue, 0);
 
 			mutex_unlock(&tz->lock);
 		}