x86/entry/32: Check for VM86 mode in slow-path check
Author:     Joerg Roedel <jroedel@suse.de>
AuthorDate: Fri, 20 Jul 2018 16:22:23 +0000 (18:22 +0200)
Commit:     Thomas Gleixner <tglx@linutronix.de>
CommitDate: Fri, 20 Jul 2018 20:33:41 +0000 (22:33 +0200)
The SWITCH_TO_KERNEL_STACK macro only checks for CPL == 0 to take the
slow and paranoid entry path. The problem is that this check also
evaluates true when coming from VM86 mode. That is not a problem by
itself, as the paranoid path handles VM86 stack frames just fine, but it
is unnecessary because the normal (and faster) code path handles VM86
mode as well.

Extend the check to include VM86 mode. This also makes an optimization of
the paranoid path possible.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: linux-mm@kvack.org
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: David Laight <David.Laight@aculab.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Eduardo Valentin <eduval@amazon.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: aliguori@amazon.com
Cc: daniel.gruss@iaik.tugraz.at
Cc: hughd@google.com
Cc: keescook@google.com
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Waiman Long <llong@redhat.com>
Cc: Pavel Machek <pavel@ucw.cz>
Cc: "David H. Gutteridge" <dhgutteridge@sympatico.ca>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: joro@8bytes.org
Link: https://lkml.kernel.org/r/1532103744-31902-3-git-send-email-joro@8bytes.org
diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
index 010cdb41e3c73237a471a8dd4c748a462579ebf8..2767c625a52cf68891b9bbfa2af1fe9a0b3dfd00 100644
--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
        andl    $(0x0000ffff), PT_CS(%esp)
 
        /* Special case - entry from kernel mode via entry stack */
-       testl   $SEGMENT_RPL_MASK, PT_CS(%esp)
-       jz      .Lentry_from_kernel_\@
+#ifdef CONFIG_VM86
+       movl    PT_EFLAGS(%esp), %ecx           # mix EFLAGS and CS
+       movb    PT_CS(%esp), %cl
+       andl    $(X86_EFLAGS_VM | SEGMENT_RPL_MASK), %ecx
+#else
+       movl    PT_CS(%esp), %ecx
+       andl    $SEGMENT_RPL_MASK, %ecx
+#endif
+       cmpl    $USER_RPL, %ecx
+       jb      .Lentry_from_kernel_\@
 
        /* Bytes to copy */
        movl    $PTREGS_SIZE, %ecx