arm64: memblock: don't permit memblock resizing until linear mapping is up
author Ard Biesheuvel <ard.biesheuvel@linaro.org>
Wed, 7 Nov 2018 14:16:06 +0000 (15:16 +0100)
committer Catalin Marinas <catalin.marinas@arm.com>
Thu, 8 Nov 2018 17:54:03 +0000 (17:54 +0000)
commit 24cc61d8cb5a9232fadf21a830061853c1268fdd
tree b8536afd53e89dc7d859a871f084ad4453f563aa
parent 26a4676faa1ad5d99317e0cd701e5d6f3e716b77
arm64: memblock: don't permit memblock resizing until linear mapping is up

Bhupesh reports that having numerous memblock reservations at early
boot may result in the following crash:

  Unable to handle kernel paging request at virtual address ffff80003ffe0000
  ...
  Call trace:
   __memcpy+0x110/0x180
   memblock_add_range+0x134/0x2e8
   memblock_reserve+0x70/0xb8
   memblock_alloc_base_nid+0x6c/0x88
   __memblock_alloc_base+0x3c/0x4c
   memblock_alloc_base+0x28/0x4c
   memblock_alloc+0x2c/0x38
   early_pgtable_alloc+0x20/0xb0
   paging_init+0x28/0x7f8

This is caused by the fact that we permit memblock resizing before the
linear mapping is up, and so the memblock.reserved regions array may be
relocated into memory that is not mapped yet.
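
For context, the resize path that faults here works roughly as follows
(a simplified sketch, not the actual mm/memblock.c code; the helper
find_free_phys_range() is hypothetical and stands in for memblock's real
range-finding logic): once memblock_allow_resize() has been called,
adding a region to a full array makes memblock allocate a doubled array
and copy the old entries across. The new array's address is obtained via
__va(), i.e. it lives in the linear map, so touching it before the linear
mapping has been created faults -- the __memcpy frame in the trace above.

  #include <linux/errno.h>
  #include <linux/memblock.h>
  #include <linux/string.h>

  /*
   * Simplified illustration only: the real resizing code is
   * memblock_double_array() in mm/memblock.c and handles many more
   * cases.  find_free_phys_range() is a hypothetical stand-in for the
   * real range-finding logic.
   */
  static int double_region_array_sketch(struct memblock_type *type)
  {
  	size_t new_size = type->max * 2 * sizeof(struct memblock_region);
  	phys_addr_t addr = find_free_phys_range(new_size);
  	struct memblock_region *new_array;

  	if (!addr)
  		return -ENOMEM;

  	/*
  	 * __va() yields a linear-map address.  Before the linear
  	 * mapping is in place, this memory is not mapped yet, so the
  	 * memcpy() below takes the fault shown in the trace.
  	 */
  	new_array = __va(addr);
  	memcpy(new_array, type->regions, type->cnt * sizeof(*new_array));

  	type->regions = new_array;
  	type->max *= 2;
  	return 0;
  }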

So let's ensure that this crash can no longer occur, by deferring the
call to memblock_allow_resize() until after the linear mapping has been
created.
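
The shape of the change, sketched below with the surrounding code elided
(this is a paraphrase of the two touched functions, not the literal
diff): the memblock_allow_resize() call moves out of arm64_memblock_init()
in arch/arm64/mm/init.c and into the tail of paging_init() in
arch/arm64/mm/mmu.c, once the linear mapping exists.

  /* arch/arm64/mm/init.c -- sketch, details elided */
  void __init arm64_memblock_init(void)
  {
  	/* ... memory discovery and early reservations ... */

  	/*
  	 * No memblock_allow_resize() here any more: growing the
  	 * region arrays needs the linear mapping to be in place.
  	 */
  }

  /* arch/arm64/mm/mmu.c -- sketch, details elided */
  void __init paging_init(void)
  {
  	/* ... create the kernel and linear mappings ... */

  	/* Only now is it safe for memblock to relocate its arrays. */
  	memblock_allow_resize();
  }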

Reported-by: Bhupesh Sharma <bhsharma@redhat.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
arch/arm64/mm/init.c
arch/arm64/mm/mmu.c