powerpc/vdso: refactor error handling
author	Michael Ellerman <mpe@ellerman.id.au>
	Mon, 12 Aug 2024 08:26:05 +0000 (18:26 +1000)
committer	Andrew Morton <akpm@linux-foundation.org>
	Mon, 2 Sep 2024 03:26:13 +0000 (20:26 -0700)
Linus noticed that the error handling in __arch_setup_additional_pages()
fails to clear the mm VDSO pointer if _install_special_mapping() fails.
In practice there should be no actual bug, because if there's an error the
VDSO pointer is cleared later in arch_setup_additional_pages().

However it's no longer necessary to set the pointer before installing the
mapping.  Commit c1bab64360e6 ("powerpc/vdso: Move to
_install_special_mapping() and remove arch_vma_name()") reworked the code
so that the VMA name comes from the vm_special_mapping.name, rather than
relying on arch_vma_name().

So rework the code to only set the VDSO pointer once the mappings have
been installed correctly, and remove the stale comment.

Link: https://lkml.kernel.org/r/20240812082605.743814-4-mpe@ellerman.id.au
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Jeff Xu <jeffxu@google.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Pedro Falcato <pedro.falcato@gmail.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
arch/powerpc/kernel/vdso.c

index 220a76cae7c180c5f0290d6d3f70e2fd5032bef3..ee4b9d676cff546caced78f6dd76923239697a3c 100644
@@ -214,13 +214,6 @@ static int __arch_setup_additional_pages(struct linux_binprm *bprm, int uses_int
        /* Add required alignment. */
        vdso_base = ALIGN(vdso_base, VDSO_ALIGNMENT);
 
-       /*
-        * Put vDSO base into mm struct. We need to do this before calling
-        * install_special_mapping or the perf counter mmap tracking code
-        * will fail to recognise it as a vDSO.
-        */
-       mm->context.vdso = (void __user *)vdso_base + vvar_size;
-
        vma = _install_special_mapping(mm, vdso_base, vvar_size,
                                       VM_READ | VM_MAYREAD | VM_IO |
                                       VM_DONTDUMP | VM_PFNMAP, &vvar_spec);
@@ -240,10 +233,15 @@ static int __arch_setup_additional_pages(struct linux_binprm *bprm, int uses_int
        vma = _install_special_mapping(mm, vdso_base + vvar_size, vdso_size,
                                       VM_READ | VM_EXEC | VM_MAYREAD |
                                       VM_MAYWRITE | VM_MAYEXEC, vdso_spec);
-       if (IS_ERR(vma))
+       if (IS_ERR(vma)) {
                do_munmap(mm, vdso_base, vvar_size, NULL);
+               return PTR_ERR(vma);
+       }
 
-       return PTR_ERR_OR_ZERO(vma);
+       /* Now that the mappings are in place, set the mm VDSO pointer. */
+       mm->context.vdso = (void __user *)vdso_base + vvar_size;
+
+       return 0;
 }
 
 int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
@@ -257,8 +255,6 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
                return -EINTR;
 
        rc = __arch_setup_additional_pages(bprm, uses_interp);
-       if (rc)
-               mm->context.vdso = NULL;
 
        mmap_write_unlock(mm);
        return rc;