x86/mm/cpa: Convert noop to functional fix
author    Andrea Arcangeli <aarcange@redhat.com>
          Wed, 10 Apr 2013 13:28:25 +0000 (15:28 +0200)
committer Ingo Molnar <mingo@kernel.org>
          Thu, 11 Apr 2013 08:34:42 +0000 (10:34 +0200)
Commit:

  a8aed3e0752b ("x86/mm/pageattr: Prevent PSE and GLOABL leftovers to confuse pmd/pte_present and pmd_huge")

introduced a valid fix, but one location, which did not trigger the bug
that led to finding those (small) problems, was not updated to use the
right variable.

The wrong variable was also initialized for no good reason, which may
have been the source of the confusion. Remove the noop initialization
accordingly.

Commit a8aed3e0752b also erroneously removed one canon_pgprot pass meant
to clear pmd bitflags not supported in hardware by older CPUs. This patch
corrects that as well, by applying canon_pgprot to the right variable in
the new location.
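
For illustration only, here is a minimal standalone sketch of the idea
behind that canon_pgprot pass: bits the running CPU cannot use (here
_PAGE_GLOBAL, as on pre-PGE hardware) are masked out of the requested
protection. The bit values and the supported-mask variable below are
simplified stand-ins, not the kernel's actual definitions:

  #include <stdint.h>
  #include <stdio.h>

  #define _PAGE_PRESENT (1ULL << 0)
  #define _PAGE_PSE     (1ULL << 7)
  #define _PAGE_GLOBAL  (1ULL << 8)

  /* On a CPU without global-page support, _PAGE_GLOBAL is not in the mask. */
  static uint64_t supported_pte_mask = ~_PAGE_GLOBAL;

  /* Sketch of canonicalization: drop bits the hardware does not support. */
  static uint64_t canon_pgprot_sketch(uint64_t prot)
  {
          return prot & supported_pte_mask;
  }

  int main(void)
  {
          uint64_t req_prot = _PAGE_PRESENT | _PAGE_PSE | _PAGE_GLOBAL;

          req_prot = canon_pgprot_sketch(req_prot);
          /* _PAGE_GLOBAL has been stripped for this "ancient" CPU. */
          printf("req_prot = %#llx\n", (unsigned long long)req_prot);
          return 0;
  }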

Reported-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
Acked-by: Borislav Petkov <bp@alien8.de>
Cc: Andy Whitcroft <apw@canonical.com>
Cc: Mel Gorman <mgorman@suse.de>
Link: http://lkml.kernel.org/r/1365600505-19314-1-git-send-email-aarcange@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index 091934e1d0d97ca26570eb9962aa1890e08e7bd8..7896f7190fda3de59736b44bff6e8900bea8ec81 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -467,7 +467,7 @@ try_preserve_large_page(pte_t *kpte, unsigned long address,
         * We are safe now. Check whether the new pgprot is the same:
         */
        old_pte = *kpte;
-       old_prot = new_prot = req_prot = pte_pgprot(old_pte);
+       old_prot = req_prot = pte_pgprot(old_pte);
 
        pgprot_val(req_prot) &= ~pgprot_val(cpa->mask_clr);
        pgprot_val(req_prot) |= pgprot_val(cpa->mask_set);
@@ -478,12 +478,12 @@ try_preserve_large_page(pte_t *kpte, unsigned long address,
         * a non present pmd. The canon_pgprot will clear _PAGE_GLOBAL
         * for the ancient hardware that doesn't support it.
         */
-       if (pgprot_val(new_prot) & _PAGE_PRESENT)
-               pgprot_val(new_prot) |= _PAGE_PSE | _PAGE_GLOBAL;
+       if (pgprot_val(req_prot) & _PAGE_PRESENT)
+               pgprot_val(req_prot) |= _PAGE_PSE | _PAGE_GLOBAL;
        else
-               pgprot_val(new_prot) &= ~(_PAGE_PSE | _PAGE_GLOBAL);
+               pgprot_val(req_prot) &= ~(_PAGE_PSE | _PAGE_GLOBAL);
 
-       new_prot = canon_pgprot(new_prot);
+       req_prot = canon_pgprot(req_prot);
 
        /*
         * old_pte points to the large page base address. So we need