sched/core: Robustify preemption leak checks
author		Peter Zijlstra <peterz@infradead.org>
		Mon, 28 Sep 2015 15:57:39 +0000 (17:57 +0200)
committer	Ingo Molnar <mingo@kernel.org>
		Tue, 6 Oct 2015 15:08:17 +0000 (17:08 +0200)
commit		1dc0fffc48af94513e621f95dff730ed4f7317ec
tree		602dbd67f0565830ea99196d71e7f47b17d849e3
parent		3d8f74dd4ca1da8a1a464bbafcf679e40c2fc10f
sched/core: Robustify preemption leak checks

When we warn about a preempt_count leak, reset the preempt_count to
the known good value so that the problem does not ripple forward.
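
In kernel/sched/core.c terms, the shape of the fix is: when the
in_atomic_preempt_off() check in schedule_debug() fires, force the
count back to the value expected at that point via preempt_count_set().
A sketch of that shape, reconstructed from the description above using
the upstream helper names; not the verbatim hunk:

	static inline void schedule_debug(struct task_struct *prev)
	{
		if (unlikely(in_atomic_preempt_off())) {
			__schedule_bug(prev);	/* warn about the leak */
			/* reset to the known good value for this point */
			preempt_count_set(PREEMPT_DISABLED);
		}
		rcu_sleep_check();
	}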

This is most important on x86, which has a per-cpu preempt_count that
is not saved/restored (after this series). So if you schedule with an
invalid (!= 2*PREEMPT_DISABLE_OFFSET) preempt_count, the next task is
messed up too.

Enforcing this invariant limits the borkage to just the one task.
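
kernel/exit.c gets the same treatment: do_exit() already warns when a
task exits with a non-zero preempt_count, and resetting the count right
after that warning keeps the dying task's leak from being inherited.
Again a sketch along the lines of the description, not the exact hunk:

	if (unlikely(in_atomic())) {
		pr_info("note: %s[%d] exited with preempt_count %d\n",
			current->comm, task_pid_nr(current),
			preempt_count());
		/* reset so the leak stops with this task */
		preempt_count_set(PREEMPT_ENABLED);
	}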

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Frederic Weisbecker <fweisbec@gmail.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Steven Rostedt <rostedt@goodmis.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
kernel/exit.c
kernel/sched/core.c