vmalloc: use atomic_long_add_return_relaxed()
authorUladzislau Rezki (Sony) <urezki@gmail.com>
Tue, 15 Apr 2025 11:26:46 +0000 (13:26 +0200)
committerAndrew Morton <akpm@linux-foundation.org>
Mon, 12 May 2025 00:48:30 +0000 (17:48 -0700)
Switch from the atomic_long_add_return() to its relaxed version.

We do not need a full memory barrier or any memory ordering when
incrementing the "vmap_lazy_nr" variable.  All we need is for the update
to be atomic.  This is exactly what atomic_long_add_return_relaxed()
guarantees.

AARCH64:

<snip>
Default:
    40ec:       d34cfe94        lsr     x20, x20, #12
    40f0:       14000044        b       4200 <free_vmap_area_noflush+0x19c>
    40f4:       94000000        bl      0 <__sanitizer_cov_trace_pc>
    40f8:       90000000        adrp    x0, 0 <__traceiter_alloc_vmap_area>
    40fc:       91000000        add     x0, x0, #0x0
    4100:       f8f40016        ldaddal x20, x22, [x0]
    4104:       8b160296        add     x22, x20, x22

Relaxed:
    40ec:       d34cfe94        lsr     x20, x20, #12
    40f0:       14000044        b       4200 <free_vmap_area_noflush+0x19c>
    40f4:       94000000        bl      0 <__sanitizer_cov_trace_pc>
    40f8:       90000000        adrp    x0, 0 <__traceiter_alloc_vmap_area>
    40fc:       91000000        add     x0, x0, #0x0
    4100:       f8340016        ldadd   x20, x22, [x0]
    4104:       8b160296        add     x22, x20, x22
<snip>

Link: https://lkml.kernel.org/r/20250415112646.113091-1-urezki@gmail.com
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
Reviewed-by: Baoquan He <bhe@redhat.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Mateusz Guzik <mjguzik@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
mm/vmalloc.c

index 9eb3880887b679b1cdf8e79903a531e875a8411b..dc33ebeb8b1bde8d2dbe58aeb1714b5a8992df1c 100644 (file)
@@ -2370,7 +2370,7 @@ static void free_vmap_area_noflush(struct vmap_area *va)
        if (WARN_ON_ONCE(!list_empty(&va->list)))
                return;
 
-       nr_lazy = atomic_long_add_return(va_size(va) >> PAGE_SHIFT,
+       nr_lazy = atomic_long_add_return_relaxed(va_size(va) >> PAGE_SHIFT,
                                         &vmap_lazy_nr);
 
        /*