mm: page_alloc: fix missed updates of PGFREE in free_unref_{page/folios}
author Yosry Ahmed <yosryahmed@google.com>
Wed, 4 Sep 2024 20:54:19 +0000 (20:54 +0000)
committer Andrew Morton <akpm@linux-foundation.org>
Mon, 9 Sep 2024 23:39:14 +0000 (16:39 -0700)
PGFREE is currently updated in two code paths:

- __free_pages_ok(): for pages freed to the buddy allocator.
- free_unref_page_commit(): for pages freed to the pcplists.

Before commit df1acc856923 ("mm/page_alloc: avoid conflating IRQs disabled
with zone->lock"), free_unref_page_commit() used to fall back to freeing
isolated pages directly to the buddy allocator through free_one_page().
This was done _after_ updating PGFREE, so the counter was correctly
updated.

However, that commit moved the fallback logic to its callers (now called
free_unref_page() and free_unref_folios()), so PGFREE was no longer
updated in this fallback case.

As the code has evolved, more cases have appeared in free_unref_page()
and free_unref_folios() where we fall back to calling free_one_page()
(e.g. when !pcp_allowed_order(), or when pcp_spin_trylock() fails).
These cases also miss updating PGFREE.

To make sure PGFREE is updated in all cases where pages are freed to the
buddy allocator, move the update down the stack to free_one_page().

This was noticed through code inspection, although it should be noticeable
at runtime (at least with some workloads).

Link: https://lkml.kernel.org/r/20240904205419.821776-1-yosryahmed@google.com
Fixes: df1acc856923 ("mm/page_alloc: avoid conflating IRQs disabled with zone->lock")
Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
Cc: Brendan Jackman <jackmanb@google.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
mm/page_alloc.c

index ee377bf5c033a3d647ae78f3116fdb873ffc5d15..a6adc8f4b7c09ef784f46f29125e570ed8c844e5 100644 (file)
@@ -1231,6 +1231,8 @@ static void free_one_page(struct zone *zone, struct page *page,
        spin_lock_irqsave(&zone->lock, flags);
        split_large_buddy(zone, page, pfn, order, fpi_flags);
        spin_unlock_irqrestore(&zone->lock, flags);
+
+       __count_vm_events(PGFREE, 1 << order);
 }
 
 static void __free_pages_ok(struct page *page, unsigned int order,
@@ -1239,12 +1241,8 @@ static void __free_pages_ok(struct page *page, unsigned int order,
        unsigned long pfn = page_to_pfn(page);
        struct zone *zone = page_zone(page);
 
-       if (!free_pages_prepare(page, order))
-               return;
-
-       free_one_page(zone, page, pfn, order, fpi_flags);
-
-       __count_vm_events(PGFREE, 1 << order);
+       if (free_pages_prepare(page, order))
+               free_one_page(zone, page, pfn, order, fpi_flags);
 }
 
 void __meminit __free_pages_core(struct page *page, unsigned int order,