mm: introduce arch_do_swap_page_nr() which allows restoring metadata for nr pages
author    Barry Song <v-songbaohua@oppo.com>
Wed, 29 May 2024 08:28:22 +0000 (20:28 +1200)
committer    Andrew Morton <akpm@linux-foundation.org>
Thu, 4 Jul 2024 02:30:01 +0000 (19:30 -0700)
Once do_swap_page() is able to map a large folio directly, metadata must be
restored for a number of pages, nr, rather than for a single page.  Note
that restoring this metadata is only needed on the SPARC platform, which
does not enable THP_SWAP; with current kernel configurations there is
therefore no practical case in which metadata actually has to be restored
for nr pages.  Platforms that do implement THP_SWAP may call this function
with nr greater than 1 once do_swap_page() successfully maps an entire
large folio, but their arch_do_swap_page_nr() implementations are empty
stubs.
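
Illustrative sketch only (not part of this patch; restore_swap_metadata()
and its folio/addr/pte arguments are hypothetical stand-ins for
fault-handler state): a swap-in path that maps a whole large folio in one
go could restore metadata for all of its subpages with a single call:

    static void restore_swap_metadata(struct vm_area_struct *vma,
                                      unsigned long addr, pte_t pte,
                                      pte_t orig_pte, struct folio *folio)
    {
            /* nr covers every subpage of the (possibly large) folio */
            int nr = folio_nr_pages(folio);

            arch_do_swap_page_nr(vma->vm_mm, vma, addr, pte, orig_pte, nr);
    }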

Link: https://lkml.kernel.org/r/20240529082824.150954-5-21cnbao@gmail.com
Signed-off-by: Barry Song <v-songbaohua@oppo.com>
Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
Reviewed-by: Khalid Aziz <khalid.aziz@oracle.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Andreas Larsson <andreas@gaisler.com>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: Chris Li <chrisl@kernel.org>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Chuanhua Han <hanchuanhua@oppo.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Gao Xiang <xiang@kernel.org>
Cc: "Huang, Ying" <ying.huang@intel.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Kairui Song <kasong@tencent.com>
Cc: Len Brown <len.brown@intel.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Pavel Machek <pavel@ucw.cz>
Cc: "Rafael J. Wysocki" <rafael@kernel.org>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Yosry Ahmed <yosryahmed@google.com>
Cc: Yu Zhao <yuzhao@google.com>
Cc: Zi Yan <ziy@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
include/linux/pgtable.h
mm/memory.c

diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 117b807e3f894d665d1f2f315e85698def301518..2f32eaccf0b9378fad481096ec918abb3aa14d31 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -1089,6 +1089,15 @@ static inline int pgd_same(pgd_t pgd_a, pgd_t pgd_b)
 })
 
 #ifndef __HAVE_ARCH_DO_SWAP_PAGE
+static inline void arch_do_swap_page_nr(struct mm_struct *mm,
+                                    struct vm_area_struct *vma,
+                                    unsigned long addr,
+                                    pte_t pte, pte_t oldpte,
+                                    int nr)
+{
+
+}
+#else
 /*
  * Some architectures support metadata associated with a page. When a
  * page is being swapped out, this metadata must be saved so it can be
@@ -1097,12 +1106,17 @@ static inline int pgd_same(pgd_t pgd_a, pgd_t pgd_b)
  * page as metadata for the page. arch_do_swap_page() can restore this
  * metadata when a page is swapped back in.
  */
-static inline void arch_do_swap_page(struct mm_struct *mm,
-                                    struct vm_area_struct *vma,
-                                    unsigned long addr,
-                                    pte_t pte, pte_t oldpte)
-{
-
+static inline void arch_do_swap_page_nr(struct mm_struct *mm,
+                                       struct vm_area_struct *vma,
+                                       unsigned long addr,
+                                       pte_t pte, pte_t oldpte,
+                                       int nr)
+{
+       for (int i = 0; i < nr; i++) {
+               arch_do_swap_page(vma->vm_mm, vma, addr + i * PAGE_SIZE,
+                               pte_advance_pfn(pte, i),
+                               pte_advance_pfn(oldpte, i));
+       }
 }
 #endif
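
For reference, an architecture override has roughly the following shape (an
illustrative sketch, not SPARC's actual code; my_arch_restore_tags() is a
hypothetical helper): the architecture defines __HAVE_ARCH_DO_SWAP_PAGE and
provides only the per-page arch_do_swap_page(), and the generic
arch_do_swap_page_nr() above then fans that call out once per PTE in the
range:

    /* in the architecture's <asm/pgtable.h> */
    #define __HAVE_ARCH_DO_SWAP_PAGE
    static inline void arch_do_swap_page(struct mm_struct *mm,
                                         struct vm_area_struct *vma,
                                         unsigned long addr,
                                         pte_t pte, pte_t oldpte)
    {
            /* restore per-page metadata (e.g. memory tags) saved at swap-out */
            my_arch_restore_tags(mm, vma, addr, pte, oldpte);
    }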
 
diff --git a/mm/memory.c b/mm/memory.c
index 1411edbb55d052e7a8afa810afe66fe4978f4086..56cc3dd50c56bafcde73ab12d82ef8fdc340076b 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4308,7 +4308,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
        VM_BUG_ON(!folio_test_anon(folio) ||
                        (pte_write(pte) && !PageAnonExclusive(page)));
        set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte);
-       arch_do_swap_page(vma->vm_mm, vma, vmf->address, pte, vmf->orig_pte);
+       arch_do_swap_page_nr(vma->vm_mm, vma, vmf->address,
+                       pte, vmf->orig_pte, 1);
 
        folio_unlock(folio);
        if (folio != swapcache && swapcache) {