mm: remove folio from deferred split list before uncharging it
author Matthew Wilcox (Oracle) <willy@infradead.org>
Mon, 11 Mar 2024 19:18:34 +0000 (19:18 +0000)
committer Andrew Morton <akpm@linux-foundation.org>
Tue, 12 Mar 2024 20:07:16 +0000 (13:07 -0700)
When freeing a large folio, we must remove it from the deferred split list
before we uncharge it as each memcg has its own deferred split list (with
associated lock) and removing a folio from the deferred split list while
holding the wrong lock will corrupt that list and cause various related
problems.

Link: https://lore.kernel.org/linux-mm/367a14f7-340e-4b29-90ae-bc3fcefdd5f4@arm.com/
Link: https://lkml.kernel.org/r/20240311191835.312162-1-willy@infradead.org
Fixes: f77171d241e3 ("mm: allow non-hugetlb large folios to be batch processed")
Fixes: 29f3843026cf ("mm: free folios directly in move_folios_to_lru()")
Fixes: bc2ff4cbc329 ("mm: free folios in a batch in shrink_folio_list()")
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Debugged-by: Ryan Roberts <ryan.roberts@arm.com>
Tested-by: Ryan Roberts <ryan.roberts@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
mm/swap.c
mm/vmscan.c

index 6b697d33fa5b1ecf5e59aca1569ac6fd0692c2ab..e43a5911b170ce1bd141122d340fec26c5a50113 100644 (file)
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -1012,6 +1012,9 @@ void folios_put_refs(struct folio_batch *folios, unsigned int *refs)
                        free_huge_folio(folio);
                        continue;
                }
+               if (folio_test_large(folio) &&
+                   folio_test_large_rmappable(folio))
+                       folio_undo_large_rmappable(folio);
 
                __page_cache_release(folio, &lruvec, &flags);
 
index e3349b75f15bab37d1bf0561ae1773f1ba2b1143..61606fa8350492e1e3ad3e3d83eb5574375f86c0 100644 (file)
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1413,6 +1413,9 @@ free_it:
                 */
                nr_reclaimed += nr_pages;
 
+               if (folio_test_large(folio) &&
+                   folio_test_large_rmappable(folio))
+                       folio_undo_large_rmappable(folio);
                if (folio_batch_add(&free_folios, folio) == 0) {
                        mem_cgroup_uncharge_folios(&free_folios);
                        try_to_unmap_flush();
@@ -1819,6 +1822,9 @@ static unsigned int move_folios_to_lru(struct lruvec *lruvec,
                if (unlikely(folio_put_testzero(folio))) {
                        __folio_clear_lru_flags(folio);
 
+                       if (folio_test_large(folio) &&
+                           folio_test_large_rmappable(folio))
+                               folio_undo_large_rmappable(folio);
                        if (folio_batch_add(&free_folios, folio) == 0) {
                                spin_unlock_irq(&lruvec->lru_lock);
                                mem_cgroup_uncharge_folios(&free_folios);