sched/numa: Consider 'imbalance_pct' when comparing loads in numa_has_capacity()
author Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Tue, 16 Jun 2015 11:56:00 +0000 (17:26 +0530)
committer Ingo Molnar <mingo@kernel.org>
Tue, 7 Jul 2015 06:46:10 +0000 (08:46 +0200)
This is consistent with all other load-balancing instances, where we
absorb unfairness of up to env->imbalance_pct. Absorbing unfairness up
to env->imbalance_pct allows tasks to be pulled to, and retained on,
their preferred nodes.
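
For example (illustrative numbers, not from the patch): with
imbalance_pct = 112 and nodes of equal compute capacity, a source load
of 950 against a destination load of 1000 still passes the check
(950 * 112 > 1000 * 100), so a task whose preferred node is the
slightly busier destination can still be moved there; under the old
comparison the move would have been rejected.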

Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Rik van Riel <riel@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1434455762-30857-3-git-send-email-srikar@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
kernel/sched/fair.c

index 43ee84f05d1ef318e1ae63967d53aa335bc324d8..a53a610095e63da814298114daf03e56631c19e7 100644
@@ -1415,8 +1415,8 @@ static bool numa_has_capacity(struct task_numa_env *env)
         * --------------------- vs ---------------------
         * src->compute_capacity    dst->compute_capacity
         */
-       if (src->load * dst->compute_capacity >
-           dst->load * src->compute_capacity)
+       if (src->load * dst->compute_capacity * env->imbalance_pct >
+           dst->load * src->compute_capacity * 100)
                return true;
 
        return false;
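
To see the effect of the scaled comparison in isolation, here is a
minimal standalone C sketch (not kernel code; the helper name and all
load/capacity numbers are made up for illustration). It re-expresses
the cross-multiplied check and shows a move that the unscaled
comparison rejects but the imbalance_pct-scaled one accepts:

  /* Illustrative only: a userspace re-expression of the check above. */
  #include <stdbool.h>
  #include <stdio.h>

  /*
   * True when the source node is loaded enough, relative to the
   * destination, for a NUMA move to be considered. Scaling the source
   * side by imbalance_pct/100 tolerates a bounded load gap.
   */
  static bool move_considered(unsigned long src_load, unsigned long src_cap,
                              unsigned long dst_load, unsigned long dst_cap,
                              unsigned int imbalance_pct)
  {
          return src_load * dst_cap * imbalance_pct >
                 dst_load * src_cap * 100;
  }

  int main(void)
  {
          /* Hypothetical equal-capacity nodes; source ~5% less loaded. */
          unsigned long cap = 1024, src_load = 950, dst_load = 1000;

          /* Unscaled comparison (imbalance_pct == 100): move rejected. */
          printf("pct=100: %d\n", move_considered(src_load, cap,
                                                  dst_load, cap, 100));

          /* Scaled comparison, e.g. imbalance_pct == 112: move accepted,
           * so a task can still be pulled to its slightly busier
           * preferred node. */
          printf("pct=112: %d\n", move_considered(src_load, cap,
                                                  dst_load, cap, 112));
          return 0;
  }

Cross-multiplying avoids division, and expressing the margin as a
percentage keeps the arithmetic in integers, in the same way the
regular load balancer applies sd->imbalance_pct to its load
comparisons.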