powerpc/spufs: Fix hash faults for kernel regions
author Jeremy Kerr <jk@ozlabs.org>
Wed, 24 May 2017 06:49:59 +0000 (16:49 +1000)
committer Michael Ellerman <mpe@ellerman.id.au>
Thu, 25 May 2017 13:07:44 +0000 (23:07 +1000)
Commit ac29c64089b7 ("powerpc/mm: Replace _PAGE_USER with
_PAGE_PRIVILEGED") swapped _PAGE_USER for _PAGE_PRIVILEGED, and
introduced check_pte_access() which denied kernel access to
non-_PAGE_PRIVILEGED pages.

However, it didn't add _PAGE_PRIVILEGED to the hash fault handler
for spufs' kernel accesses, so the DMAs required to establish SPE
memory no longer work.

This change adds _PAGE_PRIVILEGED to the hash fault handler for
kernel accesses.

Fixes: ac29c64089b7 ("powerpc/mm: Replace _PAGE_USER with _PAGE_PRIVILEGED")
Cc: stable@vger.kernel.org # v4.7+
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Reported-by: Sombat Tragolgosol <sombat3960@gmail.com>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
diff --git a/arch/powerpc/platforms/cell/spu_base.c b/arch/powerpc/platforms/cell/spu_base.c
index 96c2b8a406303194737962905330c0a7a3900944..0c45cdbac4cfc80bcb79e287486d5a8a85fefdef 100644
--- a/arch/powerpc/platforms/cell/spu_base.c
+++ b/arch/powerpc/platforms/cell/spu_base.c
@@ -197,7 +197,9 @@ static int __spu_trap_data_map(struct spu *spu, unsigned long ea, u64 dsisr)
            (REGION_ID(ea) != USER_REGION_ID)) {
 
                spin_unlock(&spu->register_lock);
-               ret = hash_page(ea, _PAGE_PRESENT | _PAGE_READ, 0x300, dsisr);
+               ret = hash_page(ea,
+                               _PAGE_PRESENT | _PAGE_READ | _PAGE_PRIVILEGED,
+                               0x300, dsisr);
                spin_lock(&spu->register_lock);
 
                if (!ret) {