sched/fair: Rewrite cfs_rq->removed_*avg
author Peter Zijlstra <peterz@infradead.org>
Mon, 8 May 2017 14:51:41 +0000 (16:51 +0200)
committer Ingo Molnar <mingo@kernel.org>
Fri, 29 Sep 2017 17:35:14 +0000 (19:35 +0200)
commit 2a2f5d4e44ed160a5ed822c94e04f918f9fbb487
tree 044c01816758a1501c3565f6ebb53ef2c34c3ea9
parent 9059393e4ec1c8c6623a120b405ef2c90b968d80
sched/fair: Rewrite cfs_rq->removed_*avg

Since on wakeup migration we don't hold the rq->lock for the old CPU,
we cannot update its state. Instead we add the removed 'load' to an
atomic variable and have the next PELT update on that CPU collect and
process it.

Currently we have two atomic variables, which already have the problem
that they can be read out-of-sync. Also, two atomic ops on a single
cacheline are already more expensive than an uncontended lock.

Since we want to add more of them, convert the thing over to an
explicit cacheline with a lock on it.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
kernel/sched/debug.c
kernel/sched/fair.c
kernel/sched/sched.h