x86/mm: Make the SME mask a u64
author Borislav Petkov <bp@suse.de>
Thu, 7 Sep 2017 09:38:37 +0000 (11:38 +0200)
committer Ingo Molnar <mingo@kernel.org>
Thu, 7 Sep 2017 09:53:11 +0000 (11:53 +0200)
commit 21d9bb4a05bac50fb4f850517af4030baecd00f6
tree 729adf81c36b0c7d2745226f225b6b64fc655eb9
parent 1c9fe4409ce3e9c78b1ed96ee8ed699d4f03bf33
x86/mm: Make the SME mask a u64

The SME encryption mask is used for masking 64-bit pagetable entries. As
an unsigned long it works fine on X86_64, but on 32-bit builds it
truncates bits, leading to Xen guests crashing very early.

And regardless, the SME mask handling shouldn't have leaked into 32-bit
at all, because SME is an X86_64-only feature. So, first make the mask a
u64. Then, add trivial 32-bit versions of the __sme_* macros so that
nothing happens there.

Reported-and-tested-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Tested-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Tom Lendacky <Thomas.Lendacky@amd.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas <Thomas.Lendacky@amd.com>
Fixes: 21729f81ce8a ("x86/mm: Provide general kernel support for memory encryption")
Link: http://lkml.kernel.org/r/20170907093837.76zojtkgebwtqc74@pd.tnic
Signed-off-by: Ingo Molnar <mingo@kernel.org>
arch/x86/include/asm/mem_encrypt.h
arch/x86/mm/mem_encrypt.c
include/linux/mem_encrypt.h