uprobes: Change prepare_uretprobe() to (try to) flush the dead frames
author	Oleg Nesterov <oleg@redhat.com>
	Tue, 21 Jul 2015 13:40:23 +0000 (15:40 +0200)
committer	Ingo Molnar <mingo@kernel.org>
	Fri, 31 Jul 2015 08:38:05 +0000 (10:38 +0200)
Change prepare_uretprobe() to flush the return_instances for which
arch_uretprobe_is_alive() is false. This is not needed for correctness,
but it can help to avoid spurious failures caused by hitting
MAX_URETPROBE_DEPTH.

Note: in this case arch_uretprobe_is_alive() can return a false
positive, because the stack can grow again after longjmp().
Unfortunately, the kernel can't solve this problem 100%, but see the
next patch.
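To illustrate the pruning logic, here is a minimal user-space model of
the new cleanup_return_instances() loop. The struct layouts and the
liveness predicate are simplified assumptions for illustration only
(the real arch_uretprobe_is_alive() is architecture-specific and takes
pt_regs); only the list-walking shape mirrors the patch:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/* Hypothetical user-space model of the uretprobe return_instance
 * chain; names mirror the patch but the types are simplified. */
struct return_instance {
	unsigned long stack;		/* stack pointer at probe time */
	struct return_instance *next;
};

struct uprobe_task {
	struct return_instance *return_instances;
	int depth;
};

/* Simplified liveness check (illustrative assumption): on a
 * downward-growing stack, a frame recorded below the current stack
 * pointer was discarded by longjmp() and is dead. */
static bool is_alive(struct return_instance *ri, unsigned long sp)
{
	return ri->stack >= sp;
}

static struct return_instance *free_ret_instance(struct return_instance *ri)
{
	struct return_instance *next = ri->next;
	free(ri);
	return next;
}

/* Mirrors cleanup_return_instances(): pop dead frames off the head
 * of the list and fix up the recursion depth. */
static void cleanup_return_instances(struct uprobe_task *utask,
				     unsigned long sp)
{
	struct return_instance *ri = utask->return_instances;
	while (ri && !is_alive(ri, sp)) {
		ri = free_ret_instance(ri);
		utask->depth--;
	}
	utask->return_instances = ri;
}
```

After a longjmp() that unwinds past the two innermost frames, only the
outermost return_instance survives and utask->depth shrinks to match.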

Tested-by: Pratyush Anand <panand@redhat.com>
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Acked-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Acked-by: Anton Arapov <arapov@gmail.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20150721134023.GA4776@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
kernel/events/uprobes.c

index 93d939c80cd92e73d018f52ea6b1d40a4a18d9aa..7e61c8ca27e04c47e10d6a1cf4e38b9e0aa05d19 100644 (file)
@@ -1511,6 +1511,16 @@ static unsigned long get_trampoline_vaddr(void)
        return trampoline_vaddr;
 }
 
+static void cleanup_return_instances(struct uprobe_task *utask, struct pt_regs *regs)
+{
+       struct return_instance *ri = utask->return_instances;
+       while (ri && !arch_uretprobe_is_alive(ri, regs)) {
+               ri = free_ret_instance(ri);
+               utask->depth--;
+       }
+       utask->return_instances = ri;
+}
+
 static void prepare_uretprobe(struct uprobe *uprobe, struct pt_regs *regs)
 {
        struct return_instance *ri;
@@ -1541,6 +1551,9 @@ static void prepare_uretprobe(struct uprobe *uprobe, struct pt_regs *regs)
        if (orig_ret_vaddr == -1)
                goto fail;
 
+       /* drop the entries invalidated by longjmp() */
+       cleanup_return_instances(utask, regs);
+
        /*
         * We don't want to keep trampoline address in stack, rather keep the
         * original return address of first caller thru all the consequent