x86/asm/entry: Delay loading sp0 slightly on task switch
author		Andy Lutomirski <luto@amacapital.net>
		Sat, 7 Mar 2015 01:50:18 +0000 (17:50 -0800)
committer	Ingo Molnar <mingo@kernel.org>
		Sat, 7 Mar 2015 08:34:03 +0000 (09:34 +0100)
The change:

  75182b1632a8 ("x86/asm/entry: Switch all C consumers of kernel_stack to this_cpu_sp0()")

had the unintended side effect of changing the return value of
current_thread_info() during part of the context switch process.
Change it back.

This has no effect as far as I can tell -- it's just for
consistency.
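
For reference, after the change above current_thread_info() is derived
from the per-CPU sp0 value (which load_sp0() writes into the TSS)
rather than from the kernel_stack per-CPU variable.  A minimal sketch
of that derivation -- illustrative only; the exact expression lives in
arch/x86/include/asm/thread_info.h and may differ in detail:

  /*
   * Sketch: thread_info sits at the bottom of the task's kernel
   * stack, so it can be recovered from the stack top recorded in
   * the per-CPU sp0 value.
   */
  static inline struct thread_info *current_thread_info(void)
  {
          return (struct thread_info *)(this_cpu_sp0() - THREAD_SIZE);
  }

With load_sp0() called near the top of __switch_to(), that value (and
hence current_thread_info()) switched to the next task partway through
the context switch; moving the call down next to the kernel_stack
update restores the old ordering.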

Signed-off-by: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/9fcaa47dd8487db59eed7a3911b6ae409476763e.1425692936.git.luto@amacapital.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
arch/x86/kernel/process_32.c
arch/x86/kernel/process_64.c

diff --git a/arch/x86/kernel/process_32.c b/arch/x86/kernel/process_32.c
index d3460af3d27a7173a9625da63f837c94446c9d51..0405cab6634d0b83ea8966528fdbadd8a6e37381 100644
@@ -255,11 +255,6 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
 
        fpu = switch_fpu_prepare(prev_p, next_p, cpu);
 
-       /*
-        * Reload esp0.
-        */
-       load_sp0(tss, next);
-
        /*
         * Save away %gs. No need to save %fs, as it was saved on the
         * stack on entry.  No need to save %es and %ds, as those are
@@ -310,6 +305,11 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
         */
        arch_end_context_switch(next_p);
 
+       /*
+        * Reload esp0.  This changes current_thread_info().
+        */
+       load_sp0(tss, next);
+
        this_cpu_write(kernel_stack,
                  (unsigned long)task_stack_page(next_p) +
                  THREAD_SIZE - KERNEL_STACK_OFFSET);
diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
index 2cd562f96c1f20eda81fd10953319dd1297e80fb..1e393d27d7015f5b67d8d9dce44d09265a27edfe 100644
@@ -283,9 +283,6 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
 
        fpu = switch_fpu_prepare(prev_p, next_p, cpu);
 
-       /* Reload esp0 and ss1. */
-       load_sp0(tss, next);
-
        /* We must save %fs and %gs before load_TLS() because
         * %fs and %gs may be cleared by load_TLS().
         *
@@ -413,6 +410,9 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
        task_thread_info(prev_p)->saved_preempt_count = this_cpu_read(__preempt_count);
        this_cpu_write(__preempt_count, task_thread_info(next_p)->saved_preempt_count);
 
+       /* Reload esp0 and ss1.  This changes current_thread_info(). */
+       load_sp0(tss, next);
+
        this_cpu_write(kernel_stack,
                  (unsigned long)task_stack_page(next_p) +
                  THREAD_SIZE - KERNEL_STACK_OFFSET);