sched/fair: Rewrite PELT migration propagation
author    Peter Zijlstra <peterz@infradead.org>
          Mon, 8 May 2017 15:30:46 +0000 (17:30 +0200)
committer Ingo Molnar <mingo@kernel.org>
          Fri, 29 Sep 2017 17:35:15 +0000 (19:35 +0200)
commit    0e2d2aaaae52c247c047d14999b93486bdbd3431
tree      7bff425ce22d58f3cdd054065eec5b5bd2ea8edf
parent    2a2f5d4e44ed160a5ed822c94e04f918f9fbb487
sched/fair: Rewrite PELT migration propagation

When an entity migrates into (or out of) a runqueue, we need to add its
contribution to (or remove it from) the entire PELT hierarchy, because
even non-runnable entities are included in the load average sums.

In order to do this we have some propagation logic that updates the
PELT tree; however, the way it 'propagates' the runnable (or load)
change is (more or less):

                     tg->weight * grq->avg.load_avg
  ge->avg.load_avg = ------------------------------
                               tg->load_avg

But that is the expression for ge->weight, and per the definition of
load_avg:

  ge->avg.load_avg := ge->weight * ge->avg.runnable_avg

That destroys the runnable_avg we wanted to propagate, by implicitly
setting it to 1.

Instead directly propagate runnable_sum.
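
As a rough illustration of what direct propagation means, here is a
standalone sketch (not the kernel's actual code; the struct layout,
the propagate_runnable_sum() name and the PELT_DIVIDER constant are
invented for the example): the child rq's runnable_sum delta is added
to the group entity's own sum, and load_avg is re-derived from that
sum and the weight, so the runnable ratio survives.

  #include <stdint.h>

  #define PELT_DIVIDER 47742    /* stand-in for the maximum PELT sum */

  struct ge_avg {
          uint64_t runnable_sum;        /* decayed runnable time */
          unsigned long load_avg;       /* ~ weight * runnable_sum / divider */
  };

  struct group_entity {
          unsigned long weight;         /* ge->weight on the parent rq */
          struct ge_avg avg;
  };

  static void propagate_runnable_sum(struct group_entity *ge, int64_t delta)
  {
          int64_t sum = (int64_t)ge->avg.runnable_sum + delta;

          if (sum < 0)                  /* sums never go negative */
                  sum = 0;

          ge->avg.runnable_sum = (uint64_t)sum;
          ge->avg.load_avg = (unsigned long)((ge->weight * ge->avg.runnable_sum) / PELT_DIVIDER);
  }

The point is that load_avg is reconstructed from the weight and the
propagated sum, rather than the sum being (incorrectly) inferred from
a weight-scaled load_avg.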

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
kernel/sched/debug.c
kernel/sched/fair.c
kernel/sched/sched.h