sched/fair: Optimize enqueue_task_fair()
enqueue_task_fair() jumps to the enqueue_throttle label when cfs_rq_of(se) is throttled, which means that se can't be NULL in that case, so we can move the label after the if (!se) statement. Furthermore, the latter can be removed entirely, because se is always NULL when that point is reached.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Phil Auld <pauld@redhat.com>
Link: https://lkml.kernel.org/r/20200513135502.4672-1-vincent.guittot@linaro.org
commit 7d148be69e
parent 9013196a46
@@ -5512,28 +5512,27 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 			list_add_leaf_cfs_rq(cfs_rq);
 	}
 
-enqueue_throttle:
-	if (!se) {
-		add_nr_running(rq, 1);
-		/*
-		 * Since new tasks are assigned an initial util_avg equal to
-		 * half of the spare capacity of their CPU, tiny tasks have the
-		 * ability to cross the overutilized threshold, which will
-		 * result in the load balancer ruining all the task placement
-		 * done by EAS. As a way to mitigate that effect, do not account
-		 * for the first enqueue operation of new tasks during the
-		 * overutilized flag detection.
-		 *
-		 * A better way of solving this problem would be to wait for
-		 * the PELT signals of tasks to converge before taking them
-		 * into account, but that is not straightforward to implement,
-		 * and the following generally works well enough in practice.
-		 */
-		if (flags & ENQUEUE_WAKEUP)
-			update_overutilized_status(rq);
+	/* At this point se is NULL and we are at root level*/
+	add_nr_running(rq, 1);
 
-	}
+	/*
+	 * Since new tasks are assigned an initial util_avg equal to
+	 * half of the spare capacity of their CPU, tiny tasks have the
+	 * ability to cross the overutilized threshold, which will
+	 * result in the load balancer ruining all the task placement
+	 * done by EAS. As a way to mitigate that effect, do not account
+	 * for the first enqueue operation of new tasks during the
+	 * overutilized flag detection.
+	 *
+	 * A better way of solving this problem would be to wait for
+	 * the PELT signals of tasks to converge before taking them
+	 * into account, but that is not straightforward to implement,
+	 * and the following generally works well enough in practice.
+	 */
+	if (flags & ENQUEUE_WAKEUP)
+		update_overutilized_status(rq);
 
+enqueue_throttle:
 	if (cfs_bandwidth_used()) {
 		/*
 		 * When bandwidth control is enabled; the cfs_rq_throttled()
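For context, here is a minimal standalone C sketch of the control-flow invariant the patch exploits. This is illustrative code only, not the fair.c source: the two-level entity "hierarchy", the cfs_rq_throttled() stub (which here takes a sched_entity rather than a cfs_rq), and the printf stand-ins for add_nr_running() and the bandwidth bookkeeping are all invented for the demonstration.

#include <stdio.h>
#include <stddef.h>

/* Toy stand-in for the kernel's sched_entity; fields are invented. */
struct sched_entity {
	struct sched_entity *parent;
	int throttled;
};

/* Stub: the real cfs_rq_throttled() takes a cfs_rq, not an entity. */
static int cfs_rq_throttled(const struct sched_entity *se)
{
	return se->throttled;
}

static void enqueue_task_fair_sketch(struct sched_entity *se)
{
	/* Walk up the hierarchy, as for_each_sched_entity() does. */
	for (; se; se = se->parent) {
		if (cfs_rq_throttled(se))
			goto enqueue_throttle;	/* se != NULL on this path */
	}

	/*
	 * Reached only when the loop ran to completion, i.e. se == NULL,
	 * which is why the old "if (!se)" test was always true here and
	 * could be dropped once the label moved below this block.
	 */
	printf("root level: add_nr_running()\n");

enqueue_throttle:
	/* Work needed on both paths stays after the moved label. */
	printf("cfs bandwidth bookkeeping\n");
}

int main(void)
{
	struct sched_entity root = { NULL, 0 };
	struct sched_entity child = { &root, 0 };

	enqueue_task_fair_sketch(&child);	/* prints both lines */

	child.throttled = 1;
	enqueue_task_fair_sketch(&child);	/* bookkeeping only */
	return 0;
}

The design point: after the label move, the fall-through path executes the former if (!se) body unconditionally, so the test and one level of indentation disappear, while the goto path still skips straight to the shared bandwidth code.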