sched/fair: Fix overutilized update in enqueue_task_fair()
enqueue_task_fair() attempts to skip the overutilized update for new
tasks as their util_avg is not accurate yet. However, the flag we check
to do so is overwritten earlier on in the function, which makes the
condition pretty much a nop.

Fix this by saving the flag early on.

Fixes: 2802bf3cd936 ("sched/fair: Add over-utilization/tipping point indicator")
Reported-by: Rick Yiu <rickyiu@google.com>
Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
Link: https://lkml.kernel.org/r/20201112111201.2081902-1-qperret@google.com
commit 8e1ac4299a
parent 8d4d9c7b43
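For illustration, here is a minimal user-space sketch of the bug pattern
this patch fixes. It is not kernel code: enqueue_task_fair(),
ENQUEUE_WAKEUP and update_overutilized_status() are the real names from
the patch, but the bodies and the loop below are simplified stand-ins.
In the real function, the enqueue loop over the sched_entity hierarchy
reassigns flags = ENQUEUE_WAKEUP for the child cfs_rq enqueues, so the
later check "if (flags & ENQUEUE_WAKEUP)" was always true; saving
task_new up front preserves the caller's original value.

#include <stdio.h>

#define ENQUEUE_WAKEUP 0x01

/* Stand-in for the kernel helper; here it just reports being called. */
static void update_overutilized_status(void)
{
	printf("overutilized status updated\n");
}

static void enqueue_task_fair_sketch(int flags)
{
	/* The fix: capture the answer before `flags` is clobbered. */
	int task_new = !(flags & ENQUEUE_WAKEUP);

	/*
	 * Stand-in for the enqueue loop: the real code sets
	 * flags = ENQUEUE_WAKEUP for each child enqueue, losing the
	 * caller's original value.
	 */
	for (int i = 0; i < 2; i++)
		flags = ENQUEUE_WAKEUP;

	/*
	 * The old check was `if (flags & ENQUEUE_WAKEUP)`, which is
	 * always true at this point, so new tasks (whose util_avg is
	 * not yet accurate) were never skipped as intended.
	 */
	if (!task_new)
		update_overutilized_status();
	else
		printf("new task: overutilized update skipped\n");
}

int main(void)
{
	enqueue_task_fair_sketch(0);              /* new task enqueue */
	enqueue_task_fair_sketch(ENQUEUE_WAKEUP); /* wakeup enqueue */
	return 0;
}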
@@ -5477,6 +5477,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 	struct cfs_rq *cfs_rq;
 	struct sched_entity *se = &p->se;
 	int idle_h_nr_running = task_has_idle_policy(p);
+	int task_new = !(flags & ENQUEUE_WAKEUP);
 
 	/*
 	 * The code below (indirectly) updates schedutil which looks at
@@ -5549,7 +5550,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 	 * into account, but that is not straightforward to implement,
 	 * and the following generally works well enough in practice.
 	 */
-	if (flags & ENQUEUE_WAKEUP)
+	if (!task_new)
 		update_overutilized_status(rq);
 
 enqueue_throttle: