mm/hugetlb: do not call vma_add_reservation upon ENOMEM
authorOscar Salvador <osalvador@suse.de>
Tue, 28 May 2024 20:53:23 +0000 (22:53 +0200)
committerAndrew Morton <akpm@linux-foundation.org>
Thu, 6 Jun 2024 02:19:26 +0000 (19:19 -0700)
syzbot reported a splat [1] in __unmap_hugepage_range().  This is because
vma_needs_reservation() can return -ENOMEM if
allocate_file_region_entries() fails to allocate the file_region struct
for the reservation.

Check for that and do not call vma_add_reservation() if that is the case,
otherwise region_abort() and region_del() will be called on a reserve map
that does not contain any file_regions.

If we detect that vma_needs_reservation() returned -ENOMEM, we clear the
hugetlb_restore_reserve flag as if this reservation was still consumed, so
free_huge_folio() will not increment the resv count.

[1] https://lore.kernel.org/linux-mm/0000000000004096100617c58d54@google.com/T/#ma5983bc1ab18a54910da83416b3f89f3c7ee43aa

Link: https://lkml.kernel.org/r/20240528205323.20439-1-osalvador@suse.de
Fixes: df7a6d1f6405 ("mm/hugetlb: restore the reservation if needed")
Signed-off-by: Oscar Salvador <osalvador@suse.de>
Reported-and-tested-by: syzbot+d3fe2dc5ffe9380b714b@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/linux-mm/0000000000004096100617c58d54@google.com/
Cc: Breno Leitao <leitao@debian.org>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
mm/hugetlb.c

index 6be78e7d4f6e058ef1c72e54aa8fc2cadc76cc06..f35abff8be60f8703d7e36bdd1c21aa94eaeff76 100644 (file)
@@ -5768,8 +5768,20 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
                 * do_exit() will not see it, and will keep the reservation
                 * forever.
                 */
-               if (adjust_reservation && vma_needs_reservation(h, vma, address))
-                       vma_add_reservation(h, vma, address);
+               if (adjust_reservation) {
+                       int rc = vma_needs_reservation(h, vma, address);
+
+                       if (rc < 0)
+                       /* Presumably allocate_file_region_entries failed
+                                * to allocate a file_region struct. Clear
+                                * hugetlb_restore_reserve so that global reserve
+                                * count will not be incremented by free_huge_folio.
+                                * Act as if we consumed the reservation.
+                                */
+                               folio_clear_hugetlb_restore_reserve(page_folio(page));
+                       else if (rc)
+                               vma_add_reservation(h, vma, address);
+               }
 
                tlb_remove_page_size(tlb, page, huge_page_size(h));
                /*