mm: use ptep_get() instead of directly dereferencing pte_t*
author     Ryan Roberts <ryan.roberts@arm.com>
           Mon, 10 Mar 2025 14:04:17 +0000 (14:04 +0000)
committer  Andrew Morton <akpm@linux-foundation.org>
           Tue, 18 Mar 2025 05:07:02 +0000 (22:07 -0700)
It is best practice for all pte accesses to go via the arch helpers, to
ensure non-torn values and to allow the arch to intervene where needed
(contpte for arm64 for example).  While in this case it was probably safe
to directly dereference, let's tidy it up for consistency.

Link: https://lkml.kernel.org/r/20250310140418.1737409-1-ryan.roberts@arm.com
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reviewed-by: Qi Zheng <zhengqi.arch@bytedance.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: Dev Jain <dev.jain@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
mm/migrate.c

index c0adea67cd62df281f18dc4ba823d1c490975f79..f3ee6d8d5e2eaab4313403aea3e68ddb1736fb63 100644
@@ -202,7 +202,7 @@ static bool try_to_map_unused_to_zeropage(struct page_vma_mapped_walk *pvmw,
                return false;
        VM_BUG_ON_PAGE(!PageAnon(page), page);
        VM_BUG_ON_PAGE(!PageLocked(page), page);
-       VM_BUG_ON_PAGE(pte_present(*pvmw->pte), page);
+       VM_BUG_ON_PAGE(pte_present(ptep_get(pvmw->pte)), page);
 
        if (folio_test_mlocked(folio) || (pvmw->vma->vm_flags & VM_LOCKED) ||
            mm_forbids_zeropage(pvmw->vma->vm_mm))