x86, fpu: Fix math_state_restore() race with kernel_fpu_begin()
author	Oleg Nesterov <oleg@redhat.com>
Thu, 15 Jan 2015 19:20:28 +0000 (20:20 +0100)
committer	Thomas Gleixner <tglx@linutronix.de>
Tue, 20 Jan 2015 12:53:07 +0000 (13:53 +0100)
math_state_restore() can race with kernel_fpu_begin(): if an irq comes in
right after __thread_fpu_begin(), __save_init_fpu() will overwrite the
fpu->state we are about to restore.

Add two simple helpers, kernel_fpu_disable() and kernel_fpu_enable(),
which simply set/clear in_kernel_fpu, and change math_state_restore()
to exclude kernel_fpu_begin() in between.

Alternatively we could use local_irq_save/restore, but these new helpers
will probably find more users.

Perhaps they should disable/enable preemption themselves; in that case
we could remove the preempt_disable() in __restore_xstate_sig().

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: matt.fleming@intel.com
Cc: bp@suse.de
Cc: pbonzini@redhat.com
Cc: luto@amacapital.net
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Suresh Siddha <sbsiddha@gmail.com>
Link: http://lkml.kernel.org/r/20150115192028.GD27332@redhat.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
arch/x86/include/asm/i387.h
arch/x86/kernel/i387.c
arch/x86/kernel/traps.c

index 5e275d31802e728da6aeea0ef2173457efdf7d72..6eb6fcb83f6362c2126af95207971b5ff3620c80 100644 (file)
@@ -51,6 +51,10 @@ static inline void kernel_fpu_end(void)
        preempt_enable();
 }
 
+/* Must be called with preempt disabled */
+extern void kernel_fpu_disable(void);
+extern void kernel_fpu_enable(void);
+
 /*
  * Some instructions like VIA's padlock instructions generate a spurious
  * DNA fault but don't modify SSE registers. And these instructions
index 12088a3f459f7cc9ebd6fb27d365607b9b79156d..81049ffab2d601cf67ce6bdf455edb4d65abbc46 100644 (file)
 
 static DEFINE_PER_CPU(bool, in_kernel_fpu);
 
+void kernel_fpu_disable(void)
+{
+       WARN_ON(this_cpu_read(in_kernel_fpu));
+       this_cpu_write(in_kernel_fpu, true);
+}
+
+void kernel_fpu_enable(void)
+{
+       this_cpu_write(in_kernel_fpu, false);
+}
+
 /*
  * Were we in an interrupt that interrupted kernel mode?
  *
index 88900e288021f23a2f22aebf739e25070f456971..fb4cb6adf2259b991796d2e88d72577790f140bd 100644 (file)
@@ -788,18 +788,16 @@ void math_state_restore(void)
                local_irq_disable();
        }
 
+       /* Avoid __kernel_fpu_begin() right after __thread_fpu_begin() */
+       kernel_fpu_disable();
        __thread_fpu_begin(tsk);
-
-       /*
-        * Paranoid restore. send a SIGSEGV if we fail to restore the state.
-        */
        if (unlikely(restore_fpu_checking(tsk))) {
                drop_init_fpu(tsk);
                force_sig_info(SIGSEGV, SEND_SIG_PRIV, tsk);
-               return;
+       } else {
+               tsk->thread.fpu_counter++;
        }
-
-       tsk->thread.fpu_counter++;
+       kernel_fpu_enable();
 }
 EXPORT_SYMBOL_GPL(math_state_restore);