mm/sparse: only sub-section aligned range would be populated
authorWei Yang <richard.weiyang@linux.alibaba.com>
Fri, 7 Aug 2020 06:23:59 +0000 (23:23 -0700)
committerLinus Torvalds <torvalds@linux-foundation.org>
Fri, 7 Aug 2020 18:33:27 +0000 (11:33 -0700)
There are two code paths which invoke __populate_section_memmap():

  * sparse_init_nid()
  * sparse_add_section()

In both cases we are sure the memory range is sub-section aligned:

  * we pass PAGES_PER_SECTION to sparse_init_nid()
  * the range is checked by check_pfn_span() before sparse_add_section()
    is called

Also, in the counterpart of __populate_section_memmap() we don't do such
calculation and check, since the range is already checked by
check_pfn_span() in __remove_pages().

Remove the rounding calculation and make the alignment requirement an
explicit check, to keep the code simple and consistent with its
counterpart.

Signed-off-by: Wei Yang <richard.weiyang@linux.alibaba.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: David Hildenbrand <david@redhat.com>
Link: http://lkml.kernel.org/r/20200703031828.14645-1-richard.weiyang@linux.alibaba.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mm/sparse-vmemmap.c

index 41eeac67723bcf5c9dad7a508fec1e68abf5ef04..16183d85a7d505538a14bc1d958ae672a547148d 100644 (file)
@@ -251,20 +251,12 @@ int __meminit vmemmap_populate_basepages(unsigned long start, unsigned long end,
 struct page * __meminit __populate_section_memmap(unsigned long pfn,
                unsigned long nr_pages, int nid, struct vmem_altmap *altmap)
 {
-       unsigned long start;
-       unsigned long end;
-
-       /*
-        * The minimum granularity of memmap extensions is
-        * PAGES_PER_SUBSECTION as allocations are tracked in the
-        * 'subsection_map' bitmap of the section.
-        */
-       end = ALIGN(pfn + nr_pages, PAGES_PER_SUBSECTION);
-       pfn &= PAGE_SUBSECTION_MASK;
-       nr_pages = end - pfn;
-
-       start = (unsigned long) pfn_to_page(pfn);
-       end = start + nr_pages * sizeof(struct page);
+       unsigned long start = (unsigned long) pfn_to_page(pfn);
+       unsigned long end = start + nr_pages * sizeof(struct page);
+
+       if (WARN_ON_ONCE(!IS_ALIGNED(pfn, PAGES_PER_SUBSECTION) ||
+               !IS_ALIGNED(nr_pages, PAGES_PER_SUBSECTION)))
+               return NULL;
 
        if (vmemmap_populate(start, end, nid, altmap))
                return NULL;