mm: fix list corruption in put_pages_list
Author:     Matthew Wilcox (Oracle) <willy@infradead.org>
AuthorDate: Wed, 6 Mar 2024 21:27:30 +0000 (21:27 +0000)
Committer:  Andrew Morton <akpm@linux-foundation.org>
CommitDate: Tue, 12 Mar 2024 20:07:16 +0000 (13:07 -0700)
My recent change to put_pages_list() dereferences folio->lru.next after
returning the folio to the page allocator.  Usually this is now on the pcp
list with other free folios, so we try to free an already-free folio.
This only happens with lists that have more than 15 entries, so it wasn't
immediately discovered.  Revert to using list_for_each_safe() so we
dereference lru.next before disposing of the folio.

Link: https://lkml.kernel.org/r/20240306212749.1823380-1-willy@infradead.org
Fixes: 24835f899c01 ("mm: use free_unref_folios() in put_pages_list()")
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reported-by: "Borah, Chaitanya Kumar" <chaitanya.kumar.borah@intel.com>
Closes: https://lore.kernel.org/intel-gfx/SJ1PR11MB61292145F3B79DA58ADDDA63B9232@SJ1PR11MB6129.namprd11.prod.outlook.com/
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
diff --git a/mm/swap.c b/mm/swap.c
index e43a5911b170ce1bd141122d340fec26c5a50113..500a09a48dfd3afe33f06722305532d325e43727 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -152,10 +152,10 @@ EXPORT_SYMBOL(__folio_put);
 void put_pages_list(struct list_head *pages)
 {
        struct folio_batch fbatch;
-       struct folio *folio;
+       struct folio *folio, *next;
 
        folio_batch_init(&fbatch);
-       list_for_each_entry(folio, pages, lru) {
+       list_for_each_entry_safe(folio, next, pages, lru) {
                if (!folio_put_testzero(folio))
                        continue;
                if (folio_test_large(folio)) {