sched/fair: Only update stats for allowed CPUs when looking for dst group
Load imbalance is observed when the workload frequently forks new threads.
Due to CPU affinity, the workload can run on CPUs 0-7 in the first group,
but only on CPUs 8-11 in the second group. CPUs 12-15 are always idle
(* marks the CPUs the workload is allowed to run on):

  { 0 1 2 3 4 5 6 7 }  { 8 9 10 11 12 13 14 15 }
    * * * * * * * *      * * *  *
When looking for a dst group for newly forked threads,
update_sg_wakeup_stats() often reports that the second group has more
idle CPUs than the first. The scheduler therefore considers the second
group less busy and selects the least busy CPU among CPUs 8-11. As a
result, CPUs 8-11 can become crowded with newly forked threads while
CPUs 0-7 sit idle.
A task may not use all the CPUs in a sched group due to CPU affinity.
Only update sched group statistics for allowed CPUs.
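
To see why restricting the stats to allowed CPUs changes the group
ranking, here is a minimal userspace C sketch. The bitmasks and the
count_idle() helper are hypothetical illustrations of the scenario
above, not kernel code:

	#include <stdio.h>

	/* Hypothetical 16-CPU machine, one bit per CPU.
	 * CPUs 0-11 are busy with the workload; CPUs 12-15 are always idle.
	 */
	#define IDLE_MASK	0xf000u	/* CPUs 12-15 idle */
	#define GROUP0_SPAN	0x00ffu	/* first group: CPUs 0-7 */
	#define GROUP1_SPAN	0xff00u	/* second group: CPUs 8-15 */
	#define ALLOWED_MASK	0x0fffu	/* p->cpus_ptr: CPUs 0-11 */

	static int count_idle(unsigned int span, unsigned int idle)
	{
		return __builtin_popcount(span & idle);
	}

	int main(void)
	{
		/* Before the patch: stats cover the whole group span. */
		printf("old: group0=%d group1=%d\n",
		       count_idle(GROUP0_SPAN, IDLE_MASK),	/* 0 */
		       count_idle(GROUP1_SPAN, IDLE_MASK));	/* 4 */

		/* After the patch: stats cover only allowed CPUs. */
		printf("new: group0=%d group1=%d\n",
		       count_idle(GROUP0_SPAN & ALLOWED_MASK, IDLE_MASK),  /* 0 */
		       count_idle(GROUP1_SPAN & ALLOWED_MASK, IDLE_MASK)); /* 0 */
		return 0;
	}

With the old counting, the second group always reports four idle CPUs
the task can never use, so it keeps winning the ranking. With the
patched counting both groups report zero idle CPUs and the remaining
statistics decide.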
Signed-off-by: Adam Li <adamli@os.amperecomputing.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
parent 4d6dd05d07
commit 82d6e01a06
@@ -10683,7 +10683,7 @@ static inline void update_sg_wakeup_stats(struct sched_domain *sd,
 	if (sd->flags & SD_ASYM_CPUCAPACITY)
 		sgs->group_misfit_task_load = 1;
 
-	for_each_cpu(i, sched_group_span(group)) {
+	for_each_cpu_and(i, sched_group_span(group), p->cpus_ptr) {
 		struct rq *rq = cpu_rq(i);
 		unsigned int local;
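
For readers unfamiliar with the cpumask iterators: for_each_cpu_and()
walks only the CPUs set in both masks, so the one-line change behaves
like this conceptual expansion (a hedged sketch, not the kernel's
actual macro body):

	for_each_cpu(i, sched_group_span(group)) {
		if (!cpumask_test_cpu(i, p->cpus_ptr))
			continue;	/* skip CPUs the task is not allowed on */
		/* ... update sgs for CPU i ... */
	}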