mm: kmsan: panic on failure to allocate early boot metadata
author	Pedro Falcato <pedro.falcato@gmail.com>
Mon, 16 Oct 2023 15:34:46 +0000 (16:34 +0100)
committer	Andrew Morton <akpm@linux-foundation.org>
Wed, 25 Oct 2023 23:47:10 +0000 (16:47 -0700)
Given large enough allocations and a machine with low enough memory (e.g. a
default QEMU VM), it's entirely possible that
kmsan_init_alloc_meta_for_range's shadow+origin allocation fails.

Instead of eating a NULL-deref kernel oops, check explicitly for
memblock_alloc() failure and panic with a clear error message.

Alexander Potapenko said:

For posterity, it is generally quite important for the allocated shadow
and origin to be contiguous, otherwise an unaligned memory write may
result in memory corruption (the corresponding unaligned shadow write will
be assuming that shadow pages are adjacent).  So instead of panicking we
could have split the range into smaller ones until the allocation
succeeds, but that would've led to hard-to-debug problems in the future.
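
A minimal userspace sketch of that reasoning (not kernel code): the
one-shadow-byte-per-data-byte mapping at a fixed offset, the 4 KiB page size
and the addresses below are illustrative assumptions, not KMSAN's actual
layout.

/*
 * Model of why the shadow for a contiguous kernel range must itself be
 * contiguous: an unaligned store straddling a page boundary updates its
 * shadow with a single base+offset access, so the shadow pages of adjacent
 * data pages must also be adjacent.
 */
#include <stdint.h>
#include <stdio.h>

#define MODEL_PAGE_SIZE 4096UL

/* Hypothetical tracked range and its shadow, at a constant offset. */
static const uintptr_t range_start  = 0x100000;
static const uintptr_t shadow_start = 0x800000;

static uintptr_t shadow_for(uintptr_t addr)
{
	return shadow_start + (addr - range_start);
}

int main(void)
{
	/* An 8-byte store whose last 4 bytes land on the next data page. */
	uintptr_t write_addr = range_start + MODEL_PAGE_SIZE - 4;
	uintptr_t s_lo = shadow_for(write_addr);
	uintptr_t s_hi = shadow_for(write_addr + 7);

	printf("shadow bytes span %#lx..%#lx\n",
	       (unsigned long)s_lo, (unsigned long)s_hi);
	printf("crosses a shadow page boundary: %s\n",
	       (s_lo / MODEL_PAGE_SIZE) != (s_hi / MODEL_PAGE_SIZE) ?
	       "yes" : "no");
	/*
	 * The instrumented write touches both shadow pages in one access;
	 * if each page's shadow had been allocated independently, the bytes
	 * past the first shadow page would corrupt unrelated memory, which
	 * is why the range is not split into smaller allocations on failure.
	 */
	return 0;
}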

Link: https://lkml.kernel.org/r/20231016153446.132763-1-pedro.falcato@gmail.com
Signed-off-by: Pedro Falcato <pedro.falcato@gmail.com>
Reviewed-by: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Marco Elver <elver@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
mm/kmsan/shadow.c

index 87318f9170f197878fa2b81708cd698dc756f5f3..b9d05aff313e21f0dd73d4b3d7b75d9be7aa60fc 100644
@@ -285,12 +285,17 @@ void __init kmsan_init_alloc_meta_for_range(void *start, void *end)
        size = PAGE_ALIGN((u64)end - (u64)start);
        shadow = memblock_alloc(size, PAGE_SIZE);
        origin = memblock_alloc(size, PAGE_SIZE);
+
+       if (!shadow || !origin)
+               panic("%s: Failed to allocate metadata memory for early boot range of size %llu",
+                     __func__, size);
+
        for (u64 addr = 0; addr < size; addr += PAGE_SIZE) {
                page = virt_to_page_or_null((char *)start + addr);
-               shadow_p = virt_to_page_or_null((char *)shadow + addr);
+               shadow_p = virt_to_page((char *)shadow + addr);
                set_no_shadow_origin_page(shadow_p);
                shadow_page_for(page) = shadow_p;
-               origin_p = virt_to_page_or_null((char *)origin + addr);
+               origin_p = virt_to_page((char *)origin + addr);
                set_no_shadow_origin_page(origin_p);
                origin_page_for(page) = origin_p;
        }