sched/fair: Tune down misfit NOHZ kicks
author    Valentin Schneider <valentin.schneider@arm.com>
          Mon, 11 Feb 2019 17:59:45 +0000 (17:59 +0000)
committer Ingo Molnar <mingo@kernel.org>
          Tue, 19 Mar 2019 11:06:15 +0000 (12:06 +0100)
In this commit:

  3b1baa6496e6 ("sched/fair: Add 'group_misfit_task' load-balance type")

we set rq->misfit_task_load whenever the currently running task has a
utilization greater than 80% of rq->cpu_capacity. A non-zero value in
this field enables misfit load balancing.
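
Roughly, that detection boils down to a capacity-margin test. Here is a
simplified sketch (the 1280/1024 ratio is the ~80% margin; details such
as the sched_asym_cpucapacity static key are omitted):

  /* A task "fits" a CPU if its utilization stays below ~80% of capacity. */
  static inline int task_fits_capacity(struct task_struct *p, long capacity)
  {
          return capacity * 1024 > task_util_est(p) * 1280;
  }

  static inline void update_misfit_status(struct task_struct *p, struct rq *rq)
  {
          /* Nothing running, or the task fits its CPU: clear the marker. */
          if (!p || task_fits_capacity(p, capacity_of(cpu_of(rq)))) {
                  rq->misfit_task_load = 0;
                  return;
          }

          /* Otherwise flag the rq so load balancing can try to help. */
          rq->misfit_task_load = task_h_load(p);
  }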

However, if the task being looked at is already running on a CPU of
highest capacity, there's nothing more we can do for it. We can
currently spot this in update_sd_pick_busiest(), which prevents us
from selecting a sched_group of group_type == group_misfit_task as the
busiest group, but we don't do any of that in nohz_balancer_kick().
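
For reference, the check in question looks roughly like this (simplified
from the fair.c of that era; exact helper names may differ):

  /* In update_sd_pick_busiest(): don't pull misfit tasks we can't help. */
  if (sgs->group_type == group_misfit_task &&
      (!group_smaller_max_cpu_capacity(sg, sds->local) ||
       !group_has_capacity(env, &sds->local_stat)))
          return false;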

This means that we could repeatedly kick NOHZ CPUs when there is no
load-balance improvement to be had.

Introduce a check_misfit_status() helper that returns true iff there
is a CPU in the system that could give more CPU capacity to the rq's
misfit task - IOW, there exists a CPU of higher capacity_orig, or the
rq's CPU is severely pressured by rt/IRQ activity.
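
The "severely pressured" case reuses the existing check_cpu_capacity()
helper (its tail is visible in the diff context below), which compares
the rq's remaining capacity against its original capacity, scaled by
the domain's imbalance_pct. For reference:

  static inline int
  check_cpu_capacity(struct rq *rq, struct sched_domain *sd)
  {
          /*
           * True if rt/IRQ pressure has eaten a significant share of the
           * CPU's original capacity (threshold set by sd->imbalance_pct).
           */
          return ((rq->cpu_capacity * sd->imbalance_pct) <
                                  (rq->cpu_capacity_orig * 100));
  }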

Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Dietmar.Eggemann@arm.com
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@surriel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: morten.rasmussen@arm.com
Cc: vincent.guittot@linaro.org
Link: https://lkml.kernel.org/r/20190211175946.4961-3-valentin.schneider@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e6f7d39d4d45586459716d1ec9c555a1b6450683..f0d2f8a352bf19360d7e5860e2124972e1d73cd7 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8058,6 +8058,18 @@ check_cpu_capacity(struct rq *rq, struct sched_domain *sd)
                                (rq->cpu_capacity_orig * 100));
 }
 
+/*
+ * Check whether a rq has a misfit task and if it looks like we can actually
+ * help that task: we can migrate the task to a CPU of higher capacity, or
+ * the task's current CPU is heavily pressured.
+ */
+static inline int check_misfit_status(struct rq *rq, struct sched_domain *sd)
+{
+       return rq->misfit_task_load &&
+               (rq->cpu_capacity_orig < rq->rd->max_cpu_capacity ||
+                check_cpu_capacity(rq, sd));
+}
+
 /*
  * Group imbalance indicates (and tries to solve) the problem where balancing
  * groups is inadequate due to ->cpus_allowed constraints.
@@ -9585,7 +9597,7 @@ static void nohz_balancer_kick(struct rq *rq)
        if (time_before(now, nohz.next_balance))
                goto out;
 
-       if (rq->nr_running >= 2 || rq->misfit_task_load) {
+       if (rq->nr_running >= 2) {
                flags = NOHZ_KICK_MASK;
                goto out;
        }
@@ -9623,6 +9635,18 @@ static void nohz_balancer_kick(struct rq *rq)
                }
        }
 
+       sd = rcu_dereference(per_cpu(sd_asym_cpucapacity, cpu));
+       if (sd) {
+               /*
+                * When ASYM_CPUCAPACITY; see if there's a higher capacity CPU
+                * to run the misfit task on.
+                */
+               if (check_misfit_status(rq, sd)) {
+                       flags = NOHZ_KICK_MASK;
+                       goto unlock;
+               }
+       }
+
        sd = rcu_dereference(per_cpu(sd_asym_packing, cpu));
        if (sd) {
                /*