mm/page_alloc: drain the requested list first during bulk free
author	Mel Gorman <mgorman@techsingularity.net>
	Mon, 28 Feb 2022 23:01:00 +0000 (10:01 +1100)
committer	Stephen Rothwell <sfr@canb.auug.org.au>
	Mon, 28 Feb 2022 23:01:00 +0000 (10:01 +1100)
commit	e63309bc5c49842c4ce798bf1dda89521b089a54
tree	996c3c2e419fc3ff946f3aaf51f371af623ddc72
parent	d65818c7d81815776284539d5060af13ca9f80b5
mm/page_alloc: drain the requested list first during bulk free

Prior to the series, pindex 0 (order-0 MIGRATE_UNMOVABLE) was always
skipped first, and the precise reason has been forgotten.  One possibility
is that it was meant to artificially preserve MIGRATE_UNMOVABLE pages, but
there is no reason why that would be optimal as it depends on the
workload.  The more likely explanation is that a pre-increment was less
complicated than a post-increment in terms of overall code flow.  As
free_pcppages_bulk() now typically receives the pindex of the PCP list
that exceeded high, always start draining that list.
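
For illustration only, below is a minimal userspace sketch of the
iteration order this change implies; it is not the mm/page_alloc.c code.
The list count, array, and function names are assumptions made up for the
example.  The point is that draining starts at the pindex the caller
passed in (the list that exceeded high) and wraps around the remaining
lists, rather than always stepping past pindex 0 first.

	/*
	 * Illustrative sketch, not the kernel implementation: drain PCP-style
	 * lists starting from the requested pindex and wrap around.
	 */
	#include <stdio.h>

	#define NR_LISTS 12	/* stand-in for NR_PCP_LISTS */

	/* Simplified stand-in for per-cpu page lists: just page counts. */
	static int pcp_count[NR_LISTS];

	static void drain_lists(int count, int pindex)
	{
		int i, total = 0;

		/* Clamp to what is available, in the spirit of pcp->count. */
		for (i = 0; i < NR_LISTS; i++)
			total += pcp_count[i];
		if (count > total)
			count = total;

		while (count > 0) {
			/* Find the next non-empty list, starting at the requested one. */
			while (pcp_count[pindex] == 0)
				pindex = (pindex + 1) % NR_LISTS;

			/* Free from this list until it is empty or count is satisfied. */
			while (pcp_count[pindex] > 0 && count > 0) {
				pcp_count[pindex]--;
				count--;
				printf("freed a page from list %d\n", pindex);
			}
		}
	}

	int main(void)
	{
		pcp_count[3] = 2;	/* pretend list 3 exceeded its high limit */
		pcp_count[0] = 5;

		drain_lists(4, 3);	/* list 3 is drained first, then wrap */
		return 0;
	}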

Link: https://lkml.kernel.org/r/20220217002227.5739-5-mgorman@techsingularity.net
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Tested-by: Aaron Lu <aaron.lu@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: Michal Hocko <mhocko@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
mm/page_alloc.c