riscv: mm: still create swiotlb buffer for kmalloc() bouncing if required
author	Jisheng Zhang <jszhang@kernel.org>
	Mon, 25 Mar 2024 11:00:36 +0000 (19:00 +0800)
committer	Palmer Dabbelt <palmer@rivosinc.com>
	Tue, 30 Apr 2024 17:35:45 +0000 (10:35 -0700)
commit	dcb2743d1e701fc1a986c187adc11f6148316d21
tree	33263065e01dc2f7af6fda2665cabb2cf903399c
parent	0fdbb06379b1126a8c69ceec28e6e506088614a2
riscv: mm: still create swiotlb buffer for kmalloc() bouncing if required

After commit f51f7a0fc2f4 ("riscv: enable DMA_BOUNCE_UNALIGNED_KMALLOC
for !dma_coherent"), non-coherent platforms with less than 4GB of
memory rely on users passing the "swiotlb=mmnn,force" kernel parameter
to enable DMA bouncing for unaligned kmalloc() buffers. Now let's go
further: if no bouncing is needed for ZONE_DMA, let the kernel
automatically allocate a 1MB swiotlb buffer per 1GB of RAM for
kmalloc() bouncing on non-coherent platforms, so that there is no
need to pass "swiotlb=mmnn,force" any more.

The sizing of "1MB swiotlb buffer per 1GB of RAM for kmalloc()
bouncing" is taken from arm64. Users can still force a smaller
swiotlb buffer by passing "swiotlb=mmnn".

Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
Reviewed-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Link: https://lore.kernel.org/r/20240325110036.1564-1-jszhang@kernel.org
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
arch/riscv/include/asm/cache.h
arch/riscv/mm/init.c