mm: correct page_mapped_in_vma() for large folios
Author:     Matthew Wilcox (Oracle) <willy@infradead.org>
AuthorDate: Thu, 28 Mar 2024 22:58:27 +0000 (22:58 +0000)
Commit:     Andrew Morton <akpm@linux-foundation.org>
CommitDate: Fri, 26 Apr 2024 03:56:31 +0000 (20:56 -0700)

Patch series "Unify vma_address and vma_pgoff_address".

The current vma_address() pretends that the ambiguity between head & tail
page is an advantage.  If you pass a head page to vma_address(), it will
operate on all pages in the folio, while if you pass a tail page, it will
operate on a single page.  That's not what any of the callers actually
want, so first convert all callers to use vma_pgoff_address() and then
rename vma_pgoff_address() to vma_address().
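
A rough sketch of the conversion pattern (illustrative only; the real
caller updates come in the later patches of this series, and the
whole-folio variant below is an assumption about how such a caller
might look):

        /* Before: coverage depends on whether 'page' is a head or a tail page. */
        addr = vma_address(page, vma);

        /* After: the caller states the range explicitly: the whole folio ... */
        addr = vma_pgoff_address(folio->index, folio_nr_pages(folio), vma);

        /* ... or exactly one page, as this patch does below. */
        addr = vma_pgoff_address(folio->index + folio_page_idx(folio, page),
                                 1, vma);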

This patch (of 3):

If 'page' is the first page of a large folio then vma_address() will scan
for any page in the entire folio.  This can lead to page_mapped_in_vma()
returning true if some of the tail pages are mapped and the head page is
not.  This could lead to memory failure choosing to kill a task
unnecessarily.
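
To make the offset arithmetic of the fix concrete (the numbers here are
hypothetical, not taken from the patch): for the head page of a large
folio whose first page sits at file offset 64, the lookup is now pinned
to that single page instead of matching anywhere in the folio:

        struct folio *folio = page_folio(page);     /* 'page' is the head page */
        pgoff_t pgoff = folio->index + folio_page_idx(folio, page);
                                                     /* 64 + 0 == 64 */
        unsigned long addr = vma_pgoff_address(pgoff, 1, vma);
                                                     /* -EFAULT if pgoff 64 is
                                                        not covered by this vma */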

Link: https://lkml.kernel.org/r/20240328225831.1765286-1-willy@infradead.org
Link: https://lkml.kernel.org/r/20240328225831.1765286-2-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index 74d2de15fb5e09324bf265d06d4e48aac3cd803e..ac48d6284badcedd68d421fdd43f7962dc5543ef 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -325,6 +325,8 @@ next_pte:
  */
 int page_mapped_in_vma(struct page *page, struct vm_area_struct *vma)
 {
+       struct folio *folio = page_folio(page);
+       pgoff_t pgoff = folio->index + folio_page_idx(folio, page);
        struct page_vma_mapped_walk pvmw = {
                .pfn = page_to_pfn(page),
                .nr_pages = 1,
@@ -332,7 +334,7 @@ int page_mapped_in_vma(struct page *page, struct vm_area_struct *vma)
                .flags = PVMW_SYNC,
        };
 
-       pvmw.address = vma_address(page, vma);
+       pvmw.address = vma_pgoff_address(pgoff, 1, vma);
        if (pvmw.address == -EFAULT)
                return 0;
        if (!page_vma_mapped_walk(&pvmw))