x86/dumpstack: Try harder to get a call trace on stack overflow
author    Andy Lutomirski <luto@kernel.org>
Thu, 14 Jul 2016 20:22:52 +0000 (13:22 -0700)
committer Ingo Molnar <mingo@kernel.org>
Fri, 15 Jul 2016 08:26:26 +0000 (10:26 +0200)
If we overflow the stack, print_context_stack() will abort.  Detect
this case and rewind back into the valid part of the stack so that
we can trace it.

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/ee1690eb2715ccc5dc187fde94effa4ca0ccbbcd.1468527351.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
arch/x86/kernel/dumpstack.c

index ef8017ca5ba9dab6c59e51ae91b8c353ddaa00da..cc88e25d73e9ef80bb056154250ba424091cf2b8 100644
@@ -87,7 +87,7 @@ static inline int valid_stack_ptr(struct task_struct *task,
                else
                        return 0;
        }
-       return p > t && p < t + THREAD_SIZE - size;
+       return p >= t && p < t + THREAD_SIZE - size;
 }
 
 unsigned long
@@ -98,6 +98,14 @@ print_context_stack(struct task_struct *task,
 {
        struct stack_frame *frame = (struct stack_frame *)bp;
 
+       /*
+        * If we overflowed the stack into a guard page, jump back to the
+        * bottom of the usable stack.
+        */
+       if ((unsigned long)task_stack_page(task) - (unsigned long)stack <
+           PAGE_SIZE)
+               stack = (unsigned long *)task_stack_page(task);
+
        while (valid_stack_ptr(task, stack, sizeof(*stack), end)) {
                unsigned long addr;
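
The idea of the patch in a nutshell: the x86 kernel stack grows down towards task_stack_page(task), so an overflow pushes the stack pointer into the guard page just below the stack; the patch snaps such a pointer back to the lowest usable stack address, and relaxes the check in valid_stack_ptr() from '>' to '>=' so that rewound address is accepted. Below is a minimal, self-contained userspace sketch of the same arithmetic; STACK_SIZE, GUARD_SIZE, stack_base and rewind_into_stack() are made-up stand-ins (for THREAD_SIZE, PAGE_SIZE, task_stack_page() and the new hunk in print_context_stack()), not kernel APIs.

#include <stdio.h>
#include <stdint.h>

#define STACK_SIZE  (16 * 1024)   /* stand-in for THREAD_SIZE */
#define GUARD_SIZE  (4 * 1024)    /* stand-in for PAGE_SIZE   */

static uintptr_t rewind_into_stack(uintptr_t stack_base, uintptr_t sp)
{
        /*
         * If sp overflowed below the stack into the guard page, jump
         * back to the lowest usable stack address (mirrors the new
         * code added to print_context_stack()).
         */
        if (stack_base - sp < GUARD_SIZE)
                sp = stack_base;
        return sp;
}

static int valid_stack_ptr(uintptr_t stack_base, uintptr_t p, unsigned long size)
{
        /* '>=' rather than '>' so the very bottom word of the stack is accepted */
        return p >= stack_base && p < stack_base + STACK_SIZE - size;
}

int main(void)
{
        uintptr_t base = 0x100000;   /* pretend stack base                 */
        uintptr_t sp   = base - 64;  /* overflowed 64 bytes into the guard */

        sp = rewind_into_stack(base, sp);
        printf("rewound sp: %#lx, valid: %d\n",
               (unsigned long)sp,
               valid_stack_ptr(base, sp, sizeof(unsigned long)));
        return 0;
}

Note that an sp at or above stack_base makes stack_base - sp wrap around to a huge unsigned value, so the guard-page rewind is automatically skipped for pointers that are already inside the stack; the kernel code relies on the same unsigned-subtraction trick.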