sched/fair: Implement synchronous PELT detach on load-balance migrate
author		Peter Zijlstra <peterz@infradead.org>
		Thu, 11 May 2017 15:57:24 +0000 (17:57 +0200)
committer	Ingo Molnar <mingo@kernel.org>
		Fri, 29 Sep 2017 17:35:16 +0000 (19:35 +0200)
commit		144d8487bc6e9b741895709cb46d4e19b748a725
tree		00e02dd5dfbfa99e3be67ed6e2015bf60b7bed2f
parent		1ea6c46a23f1213d1972bfae220db5c165e27bba
sched/fair: Implement synchronous PELT detach on load-balance migrate

Vincent wondered why his self-migrating task had a roughly 50% dip in
load_avg when landing on the new CPU. This is because we unconditionally
take the asynchronous detach_entity route, which can lead to the
attach on the new CPU still seeing the old CPU's contribution to
tg->load_avg, effectively halving the new CPU's shares.

While in general this is something we have to live with, there is the
special case of runnable migration where we can do better: there the
old runqueue's lock is held, so the entity can be detached synchronously
before it is attached on the new CPU, as the sketch below illustrates.
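
Not the patch itself, but a minimal user-space model of the accounting
involved (the names tg, cfs_rq, detach_async() and detach_sync() are
simplified stand-ins invented here for the kernel's task_group, cfs_rq
and the remove_entity_load_avg() vs. detach_entity_load_avg() paths):

	#include <stdio.h>

	/* Stripped-down stand-ins for task_group and cfs_rq. */
	struct tg { long load_avg; };
	struct cfs_rq { long load_avg; long removed; struct tg *tg; };

	/*
	 * Asynchronous detach: only record the removed load; the old
	 * CPU folds it into tg->load_avg at some later update of its
	 * blocked averages.
	 */
	static void detach_async(struct cfs_rq *from, long load)
	{
		from->removed += load;	/* subtracted later, not now */
	}

	/*
	 * Synchronous detach: subtract from the runqueue and the group
	 * sum immediately, possible when the old rq lock is held on a
	 * runnable (load-balance) migration.
	 */
	static void detach_sync(struct cfs_rq *from, long load)
	{
		from->load_avg -= load;
		from->tg->load_avg -= load;
	}

	static void attach(struct cfs_rq *to, long load)
	{
		to->load_avg += load;
		to->tg->load_avg += load;
	}

	/* The new CPU's slice of the group's shares. */
	static long shares(struct cfs_rq *rq, long total)
	{
		return total * rq->load_avg / rq->tg->load_avg;
	}

	int main(void)
	{
		struct tg tg = { .load_avg = 1024 };
		struct cfs_rq cpu0 = { .load_avg = 1024, .removed = 0, .tg = &tg };
		struct cfs_rq cpu1 = { .load_avg = 0, .removed = 0, .tg = &tg };

		/* Async: tg->load_avg still carries cpu0's stale 1024
		 * when the attach on cpu1 runs -> 1024 of 2048 total. */
		detach_async(&cpu0, 1024);
		attach(&cpu1, 1024);
		printf("async: cpu1 gets %ld of 1024 shares\n", shares(&cpu1, 1024));

		/* Reset and redo the migration with a synchronous detach. */
		tg.load_avg = 1024; cpu0.load_avg = 1024; cpu0.removed = 0;
		cpu1.load_avg = 0;

		detach_sync(&cpu0, 1024);
		attach(&cpu1, 1024);
		printf("sync:  cpu1 gets %ld of 1024 shares\n", shares(&cpu1, 1024));

		return 0;
	}

With a single task of load 1024 as the group's only load, this prints
512 of 1024 shares for the asynchronous path (the roughly 50% dip) and
the full 1024 of 1024 for the synchronous one.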

Tested-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
kernel/sched/fair.c