mm/memblock.c: warn if zero alignment was requested
Author:     Mike Rapoport <rppt@linux.vnet.ibm.com>
AuthorDate: Tue, 30 Oct 2018 22:10:01 +0000 (15:10 -0700)
Commit:     Linus Torvalds <torvalds@linux-foundation.org>
CommitDate: Wed, 31 Oct 2018 15:54:17 +0000 (08:54 -0700)
After updating all memblock users to explicitly specify SMP_CACHE_BYTES
alignment rather than using 0, it is still possible that users missed by
the conversion sneak in.  Add a WARN_ON_ONCE for such cases.

[sfr@canb.auug.org.au: use dump_stack() instead of WARN_ON_ONCE for the alignment checks]
Link: http://lkml.kernel.org/r/20181016131927.6ceba6ab@canb.auug.org.au
[akpm@linux-foundation.org: add apologetic comment]
Link: http://lkml.kernel.org/r/20181011060850.GA19822@rapoport-lnx
Signed-off-by: Mike Rapoport <rppt@linux.vnet.ibm.com>
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
diff --git a/mm/memblock.c b/mm/memblock.c
index 8395311338167196a6747cf867ceb578e61d160c..7df468c8ebc8c0ada5b560ed70c9c821e5127c05 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1247,6 +1247,12 @@ static phys_addr_t __init memblock_alloc_range_nid(phys_addr_t size,
 {
        phys_addr_t found;
 
+       if (!align) {
+               /* Can't use WARNs this early in boot on powerpc */
+               dump_stack();
+               align = SMP_CACHE_BYTES;
+       }
+
        found = memblock_find_in_range_node(size, align, start, end, nid,
                                            flags);
        if (found && !memblock_reserve(found, size)) {
@@ -1369,6 +1375,11 @@ static void * __init memblock_alloc_internal(
        if (WARN_ON_ONCE(slab_is_available()))
                return kzalloc_node(size, GFP_NOWAIT, nid);
 
+       if (!align) {
+               dump_stack();
+               align = SMP_CACHE_BYTES;
+       }
+
        if (max_addr > memblock.current_limit)
                max_addr = memblock.current_limit;
 again:
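
For illustration only (not part of the patch): a minimal sketch of a
hypothetical early-boot caller that would trip the new check.  The function
example_setup() is made up, and memblock_alloc() here refers to the
interface as converted by this series, taking an explicit alignment.

	#include <linux/memblock.h>
	#include <linux/cache.h>	/* SMP_CACHE_BYTES */

	void __init example_setup(void)
	{
		/*
		 * align == 0: the allocation still succeeds because the
		 * new check rewrites align to SMP_CACHE_BYTES, but
		 * dump_stack() prints this caller's backtrace to the boot
		 * log so the offender can be found and converted.
		 */
		void *buf = memblock_alloc(PAGE_SIZE, 0);

		/* Preferred form: state the alignment explicitly. */
		void *buf2 = memblock_alloc(PAGE_SIZE, SMP_CACHE_BYTES);
	}

As the sfr note above explains, dump_stack() is used rather than
WARN_ON_ONCE() because the WARN machinery is not safe this early in boot on
powerpc; dump_stack() still identifies the caller without risking a crash.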