sched/rt: Re-instate old behavior in select_task_rq_rt()
When RT Capacity Awareness support was added, the logic in select_task_rq_rt() was modified to force a search for a fitting CPU if the task currently doesn't run on one.

But if the search failed, and the search was only triggered to fulfill the fitness request, we could end up selecting a new CPU unnecessarily.

Fix this and re-instate the original behavior by ensuring we bail out in that case.

This behavior change only affected asymmetric systems that are using util_clamp to implement capacity awareness. Non-asymmetric systems were not affected.

LINK: https://lore.kernel.org/lkml/20200218041620.GD28029@codeaurora.org/
Reported-by: Pavan Kondeti <pkondeti@codeaurora.org>
Signed-off-by: Qais Yousef <qais.yousef@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Fixes: 804d402fb6f6 ("sched/rt: Make RT capacity-aware")
Link: https://lkml.kernel.org/r/20200302132721.8353-3-qais.yousef@arm.com
@@ -1474,6 +1474,13 @@ select_task_rq_rt(struct task_struct *p, int cpu, int sd_flag, int flags)
 	if (test || !rt_task_fits_capacity(p, cpu)) {
 		int target = find_lowest_rq(p);
 
+		/*
+		 * Bail out if we were forcing a migration to find a better
+		 * fitting CPU but our search failed.
+		 */
+		if (!test && target != -1 && !rt_task_fits_capacity(p, target))
+			goto out_unlock;
+
 		/*
 		 * Don't bother moving it if the destination CPU is
 		 * not running a lower priority task.
@@ -1482,6 +1489,8 @@ select_task_rq_rt(struct task_struct *p, int cpu, int sd_flag, int flags)
 		    p->prio < cpu_rq(target)->rt.highest_prio.curr)
 			cpu = target;
 	}
+
+out_unlock:
 	rcu_read_unlock();
 
 out:
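For context, below is a simplified sketch (not part of this patch) of the fitness check that gates this path. It follows the rt_task_fits_capacity() helper introduced by 804d402fb6f6, showing how the task's util_clamp values are compared against the CPU's original capacity; exact details may differ between kernel versions.

/*
 * Sketch of the fitness test used above: a task "fits" a CPU when the
 * CPU's original capacity covers the task's effective uclamp request.
 * On symmetric systems the check short-circuits to true, which is why
 * only asymmetric systems see the behavior change in this changelog.
 */
static inline bool rt_task_fits_capacity(struct task_struct *p, int cpu)
{
	unsigned int min_cap, max_cap, cpu_cap;

	/* Only heterogeneous (asymmetric capacity) systems need this check */
	if (!static_branch_unlikely(&sched_asym_cpucapacity))
		return true;

	min_cap = uclamp_eff_value(p, UCLAMP_MIN);
	max_cap = uclamp_eff_value(p, UCLAMP_MAX);

	cpu_cap = capacity_orig_of(cpu);

	return cpu_cap >= min(min_cap, max_cap);
}

With the fix above, when this check fails for the task's current CPU and the migration was forced only to satisfy fitness (!test), select_task_rq_rt() migrates only if find_lowest_rq() actually returned a fitting CPU; otherwise the new goto out_unlock path keeps the previously selected CPU, as before 804d402fb6f6.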