numa,sched: fix load_too_imbalanced logic inversion
authorRik van Riel <riel@redhat.com>
Sun, 8 Jun 2014 20:55:57 +0000 (16:55 -0400)
committerLinus Torvalds <torvalds@linux-foundation.org>
Sun, 8 Jun 2014 21:35:05 +0000 (14:35 -0700)
This function is supposed to return true if the new load imbalance is
worse than the old one.  It didn't.  I can only hope brown paper bags
are in style.

Now things converge much better on both the 4 node and 8 node systems.

I am not sure why this did not seem to impact specjbb performance on the
4 node system, which is the system I have full-time access to.

This bug was introduced recently, with commit e63da03639cc ("sched/numa:
Allow task switch if load imbalance improves").

Signed-off-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
kernel/sched/fair.c

index 17de1956ddad6110704e0de84bf5672bb4cac528..9855e87d671a54982238d325f014162327a974e8 100644 (file)
@@ -1120,7 +1120,7 @@ static bool load_too_imbalanced(long orig_src_load, long orig_dst_load,
        old_imb = orig_dst_load * 100 - orig_src_load * env->imbalance_pct;
 
        /* Would this change make things worse? */
-       return (old_imb > imb);
+       return (imb > old_imb);
 }
 
 /*