mm/gup: speed up check_and_migrate_cma_pages() on huge page
author Pingfan Liu <kernelfans@gmail.com>
Fri, 12 Jul 2019 03:57:39 +0000 (20:57 -0700)
committer Linus Torvalds <torvalds@linux-foundation.org>
Fri, 12 Jul 2019 18:05:45 +0000 (11:05 -0700)
Both hugetlb and THP pages sit on pageblocks of a single migration type,
since they are allocated from the same free_list[].  Based on this fact, it
is enough to check a single subpage to decide the migration type of the
whole huge page.  This saves (2M/4K - 1) loop iterations per pmd_huge page
on x86, and similarly on other archs.
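
To make the arithmetic concrete, a minimal sketch (the 2M/order-9 figures
assume x86 with 4K base pages; compound_head() and compound_order() are the
helpers the patch below uses):

	struct page *head = compound_head(pages[i]);

	/*
	 * A 2M THP has compound_order(head) == 9, i.e. 512 subpages.
	 * If gup happened to start at the third subpage, then
	 * pages[i] - head == 2 and step = (1 << 9) - 2 = 510, so the
	 * loop clears the rest of this huge page in one iteration
	 * instead of 510.
	 */
	step = (1 << compound_order(head)) - (pages[i] - head);
	i += step;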

Furthermore, when executing isolate_huge_page(), this avoids taking the
global hugetlb_lock many times, and avoids the pointless removing and
re-adding of the same head page on the local linked list cma_page_list.

[akpm@linux-foundation.org: make `i' and `step' unsigned]
Link: http://lkml.kernel.org/r/1561612545-28997-1-git-send-email-kernelfans@gmail.com
Signed-off-by: Pingfan Liu <kernelfans@gmail.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Cc: Mike Rapoport <rppt@linux.ibm.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Keith Busch <keith.busch@intel.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mm/gup.c

index 83d480e9b05fbd9b0acea983cc969646746c2936..f411bab037f527c901b7c0ecf12fd8745a5464a7 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1449,25 +1449,31 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
                                        struct vm_area_struct **vmas,
                                        unsigned int gup_flags)
 {
-       long i;
+       unsigned long i;
+       unsigned long step;
        bool drain_allow = true;
        bool migrate_allow = true;
        LIST_HEAD(cma_page_list);
 
 check_again:
-       for (i = 0; i < nr_pages; i++) {
+       for (i = 0; i < nr_pages;) {
+
+               struct page *head = compound_head(pages[i]);
+
+               /*
+                * gup may start from a tail page. Advance step by the left
+                * part.
+                */
+               step = (1 << compound_order(head)) - (pages[i] - head);
                /*
                 * If we get a page from the CMA zone, since we are going to
                 * be pinning these entries, we might as well move them out
                 * of the CMA zone if possible.
                 */
-               if (is_migrate_cma_page(pages[i])) {
-
-                       struct page *head = compound_head(pages[i]);
-
-                       if (PageHuge(head)) {
+               if (is_migrate_cma_page(head)) {
+                       if (PageHuge(head))
                                isolate_huge_page(head, &cma_page_list);
-                       } else {
+                       else {
                                if (!PageLRU(head) && drain_allow) {
                                        lru_add_drain_all();
                                        drain_allow = false;
@@ -1482,6 +1488,8 @@ check_again:
                                }
                        }
                }
+
+               i += step;
        }
 
        if (!list_empty(&cma_page_list)) {
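
For reference, the resulting loop reads roughly as below, assembled from the
hunks above (a sketch, not the full function; the LRU-isolation details
elided by the second hunk are summarized in a comment):

check_again:
	for (i = 0; i < nr_pages;) {

		struct page *head = compound_head(pages[i]);

		/*
		 * gup may start from a tail page. Advance step by the left
		 * part.
		 */
		step = (1 << compound_order(head)) - (pages[i] - head);
		/*
		 * If we get a page from the CMA zone, since we are going to
		 * be pinning these entries, we might as well move them out
		 * of the CMA zone if possible.
		 */
		if (is_migrate_cma_page(head)) {
			if (PageHuge(head))
				isolate_huge_page(head, &cma_page_list);
			else {
				if (!PageLRU(head) && drain_allow) {
					lru_add_drain_all();
					drain_allow = false;
				}
				/* ... isolate the LRU page onto cma_page_list ... */
			}
		}

		i += step;
	}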