From: Eric Dumazet
Date: Tue, 22 Mar 2022 21:43:57 +0000 (-0700)
Subject: mm/page_alloc: call check_new_pages() while zone spinlock is not held
X-Git-Tag: v5.18-rc1~168^2~120
X-Git-Url: https://git.kernel.dk/?a=commitdiff_plain;h=3313204c8ad553cf93f1ee8cc89456c73a7df938;p=linux-block.git

mm/page_alloc: call check_new_pages() while zone spinlock is not held

For high-order pages not using pcp, rmqueue() currently calls the costly
check_new_pages() while the zone spinlock is held and hard IRQs are
masked.  This is not needed; we can release the spinlock sooner to
reduce zone spinlock contention.

Note that after this patch, we call __mod_zone_freepage_state() before
deciding to leak the page because it is in a bad state.

Link: https://lkml.kernel.org/r/20220304170215.1868106-1-eric.dumazet@gmail.com
Signed-off-by: Eric Dumazet
Reviewed-by: Shakeel Butt
Acked-by: David Rientjes
Acked-by: Mel Gorman
Reviewed-by: Vlastimil Babka
Cc: Michal Hocko
Cc: Wei Xu
Cc: Greg Thelen
Cc: Hugh Dickins
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
---

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 30d35f24d7a5..5d126853e239 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3665,10 +3665,10 @@ struct page *rmqueue(struct zone *preferred_zone,
 	 * allocate greater than order-1 page units with __GFP_NOFAIL.
 	 */
 	WARN_ON_ONCE((gfp_flags & __GFP_NOFAIL) && (order > 1));
-	spin_lock_irqsave(&zone->lock, flags);
 
 	do {
 		page = NULL;
+		spin_lock_irqsave(&zone->lock, flags);
 		/*
 		 * order-0 request can reach here when the pcplist is skipped
 		 * due to non-CMA allocation context. HIGHATOMIC area is
@@ -3680,15 +3680,15 @@ struct page *rmqueue(struct zone *preferred_zone,
 			if (page)
 				trace_mm_page_alloc_zone_locked(page, order, migratetype);
 		}
-		if (!page)
+		if (!page) {
 			page = __rmqueue(zone, order, migratetype, alloc_flags);
-	} while (page && check_new_pages(page, order));
-	if (!page)
-		goto failed;
-
-	__mod_zone_freepage_state(zone, -(1 << order),
-				  get_pcppage_migratetype(page));
-	spin_unlock_irqrestore(&zone->lock, flags);
+			if (!page)
+				goto failed;
+		}
+		__mod_zone_freepage_state(zone, -(1 << order),
+					  get_pcppage_migratetype(page));
+		spin_unlock_irqrestore(&zone->lock, flags);
+	} while (check_new_pages(page, order));
 
 	__count_zid_vm_events(PGALLOC, page_zonenum(page), 1 << order);
 	zone_statistics(preferred_zone, zone, 1);
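
The locking change above follows a general pattern: shrink the critical
section so that expensive validation runs with the lock dropped, and retry
the whole take-and-check sequence if validation fails.  Below is a minimal
standalone userspace sketch of that pattern, built on pthreads; it is not
kernel code, and every name in it (take_item(), item_looks_bad(), ...) is
hypothetical, standing in loosely for __rmqueue() and check_new_pages().

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_spinlock_t list_lock;
static int freelist[16];
static int nr_free;

/* Hypothetical stand-in for __rmqueue(): pop one item under the lock. */
static bool take_item(int *out)
{
	if (nr_free == 0)
		return false;
	*out = freelist[--nr_free];
	return true;
}

/* Hypothetical stand-in for check_new_pages(): costly validation that
 * we want to run WITHOUT holding list_lock. */
static bool item_looks_bad(int item)
{
	return item < 0;	/* pretend negative items are corrupted */
}

static bool alloc_item(int *out)
{
	int item;

	do {
		pthread_spin_lock(&list_lock);
		if (!take_item(&item)) {
			/* nothing left: the analogue of "goto failed" */
			pthread_spin_unlock(&list_lock);
			return false;
		}
		/* accounting would happen here, still under the lock,
		 * before we know whether the item is good ... */
		pthread_spin_unlock(&list_lock);
		/* ... and the costly check runs with the lock dropped;
		 * a bad item is leaked (not returned) and we retry. */
	} while (item_looks_bad(item));

	*out = item;
	return true;
}

int main(void)
{
	int i, v;

	pthread_spin_init(&list_lock, PTHREAD_PROCESS_PRIVATE);
	for (i = 0; i < 16; i++)
		freelist[i] = i;
	freelist[15] = -1;	/* plant one "bad" item to force a retry */
	nr_free = 16;

	if (alloc_item(&v))
		printf("got item %d after leaking one bad item\n", v);

	pthread_spin_destroy(&list_lock);
	return 0;
}

As in the patch, the sketch deliberately leaks a bad item rather than
putting it back on the list, and the accounting update happens under the
lock before the item's state is known, which is exactly the trade-off the
commit message calls out for __mod_zone_freepage_state().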