mm/hugetlb: use hugetlb_pte_stable in migration race check
Author:     Peter Xu <peterx@redhat.com>
AuthorDate: Tue, 4 Oct 2022 19:33:59 +0000 (15:33 -0400)
Commit:     Andrew Morton <akpm@linux-foundation.org>
CommitDate: Thu, 13 Oct 2022 01:51:50 +0000 (18:51 -0700)
Now that hugetlb_pte_stable() has been introduced, we can also rewrite the
migration race check against page allocation to use the new helper.

Link: https://lkml.kernel.org/r/20221004193400.110155-3-peterx@redhat.com
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Axel Rasmussen <axelrasmussen@google.com>
Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
Cc: Nadav Amit <nadav.amit@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
mm/hugetlb.c

index bf9d8d04bf4f82760173765fe6d2da856de154a5..9b26055f311978164066dbfc4fc7f8ab43ecb98c 100644 (file)
@@ -5634,11 +5634,10 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
                         * here.  Before returning error, get ptl and make
                         * sure there really is no pte entry.
                         */
-                       ptl = huge_pte_lock(h, mm, ptep);
-                       ret = 0;
-                       if (huge_pte_none(huge_ptep_get(ptep)))
+                       if (hugetlb_pte_stable(h, mm, ptep, old_pte))
                                ret = vmf_error(PTR_ERR(page));
-                       spin_unlock(ptl);
+                       else
+                               ret = 0;
                        goto out;
                }
                clear_huge_page(page, address, pages_per_huge_page(h));