mm, thp: only collapse hugepages to nodes with affinity for zone_reclaim_mode
author    David Rientjes <rientjes@google.com>
          Wed, 6 Aug 2014 23:07:29 +0000 (16:07 -0700)
committer Linus Torvalds <torvalds@linux-foundation.org>
          Thu, 7 Aug 2014 01:01:20 +0000 (18:01 -0700)
Commit 9f1b868a13ac ("mm: thp: khugepaged: add policy for finding target
node") improved the previous khugepaged logic, which allocated a
transparent hugepage from the node of the first page being collapsed.

However, it is still possible to collapse pages to remote memory, which
may suffer from additional access latency.  With the current policy, it
is possible that 255 pages (with PAGE_SHIFT == 12) will be collapsed
remotely if the majority are allocated from the target node: a 2MB
hugepage spans 512 base pages, so a bare majority of 257 pages on one
node still leaves up to 255 pages elsewhere.

When zone_reclaim_mode is enabled, the VM should make every attempt to
allocate memory locally to prevent NUMA performance degradation.  In
this case, we do not want to collapse hugepages to remote nodes, where
accesses would suffer increased latency.  Thus, when zone_reclaim_mode
is enabled, only allow collapsing to a node if its distance from every
node already counted in the scan is RECLAIM_DISTANCE or less.
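
For illustration only (not part of the commit), here is a minimal
userspace sketch of that policy.  scan_abort() mirrors the
khugepaged_scan_abort() added in the diff below; the distance table and
node loads are hypothetical stand-ins for the kernel's SLIT-derived
node_distance() data, and RECLAIM_DISTANCE uses the default of 30 from
include/linux/topology.h.

#include <stdbool.h>
#include <stdio.h>

#define MAX_NUMNODES     4
#define RECLAIM_DISTANCE 30     /* default in include/linux/topology.h */

static int zone_reclaim_mode = 1;
static int node_load[MAX_NUMNODES];

/* Hypothetical SLIT-style distances: nodes 0-2 close, node 3 remote. */
static const int distance[MAX_NUMNODES][MAX_NUMNODES] = {
        { 10, 20, 20, 40 },
        { 20, 10, 20, 40 },
        { 20, 20, 10, 40 },
        { 40, 40, 40, 10 },
};

static bool scan_abort(int nid)
{
        int i;

        if (!zone_reclaim_mode)         /* no locality effort requested */
                return false;
        if (node_load[nid])             /* already counted: acceptable */
                return false;
        for (i = 0; i < MAX_NUMNODES; i++) {
                if (!node_load[i])
                        continue;
                if (distance[nid][i] > RECLAIM_DISTANCE)
                        return true;    /* too far from a counted node */
        }
        return false;
}

int main(void)
{
        node_load[0] = 5;       /* pretend 5 pages were seen on node 0 */
        printf("node 1: %s\n", scan_abort(1) ? "abort" : "ok");
        printf("node 3: %s\n", scan_abort(3) ? "abort" : "ok");
        return 0;
}

With these made-up distances it prints "ok" for node 1 (distance 20
from counted node 0) and "abort" for node 3 (distance 40 exceeds
RECLAIM_DISTANCE).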

There is no functional change for systems that disable
zone_reclaim_mode.

Signed-off-by: David Rientjes <rientjes@google.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Mel Gorman <mgorman@suse.de>
Cc: Rik van Riel <riel@redhat.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Bob Liu <bob.liu@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mm/huge_memory.c

index 24e354c2b59e5b8e166fb51a6e7bc61753a35df8..3630d577e9879e9d6dc6a80912e2eb88d5f1c959 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2233,6 +2233,30 @@ static void khugepaged_alloc_sleep(void)
 
 static int khugepaged_node_load[MAX_NUMNODES];
 
+static bool khugepaged_scan_abort(int nid)
+{
+       int i;
+
+       /*
+        * If zone_reclaim_mode is disabled, then no extra effort is made to
+        * allocate memory locally.
+        */
+       if (!zone_reclaim_mode)
+               return false;
+
+       /* If there is a count for this node already, it must be acceptable */
+       if (khugepaged_node_load[nid])
+               return false;
+
+       for (i = 0; i < MAX_NUMNODES; i++) {
+               if (!khugepaged_node_load[i])
+                       continue;
+               if (node_distance(nid, i) > RECLAIM_DISTANCE)
+                       return true;
+       }
+       return false;
+}
+
 #ifdef CONFIG_NUMA
 static int khugepaged_find_target_node(void)
 {
@@ -2545,6 +2569,8 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
                 * hit record.
                 */
                node = page_to_nid(page);
+               if (khugepaged_scan_abort(node))
+                       goto out_unmap;
                khugepaged_node_load[node]++;
                VM_BUG_ON_PAGE(PageCompound(page), page);
                if (!PageLRU(page) || PageLocked(page) || !PageAnon(page))
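
Note: zone_reclaim_mode is controlled via the vm.zone_reclaim_mode
sysctl (/proc/sys/vm/zone_reclaim_mode), so the new check only takes
effect on systems where it has been enabled; everywhere else the
collapse behavior is unchanged, as stated above.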