s390/uv: convert gmap_destroy_page() from follow_page() to folio_walk
Author:     David Hildenbrand <david@redhat.com>
AuthorDate: Fri, 2 Aug 2024 15:55:21 +0000 (17:55 +0200)
Committer:  Andrew Morton <akpm@linux-foundation.org>
CommitDate: Mon, 2 Sep 2024 03:26:01 +0000 (20:26 -0700)
Let's get rid of another follow_page() user and perform the UV calls under
PTL -- which likely should be fine.

No need for an additional reference while holding the PTL:
uv_destroy_folio() and uv_convert_from_secure_folio() raise the refcount,
so any concurrent make_folio_secure() would see an unexpected reference and
cannot set PG_arch_1 concurrently.

Do we really need a writable PTE?  Likely yes, because the "destroy" part
is, in comparison to the export, a destructive operation.  So we'll keep
the writability check for now.

We'll lose the secretmem check from follow_page().  Likely we don't care
about that here.

Link: https://lkml.kernel.org/r/20240802155524.517137-9-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Cc: Alexander Gordeev <agordeev@linux.ibm.com>
Cc: Christian Borntraeger <borntraeger@linux.ibm.com>
Cc: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: Janosch Frank <frankja@linux.ibm.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Zi Yan <ziy@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
diff --git a/arch/s390/kernel/uv.c b/arch/s390/kernel/uv.c
index 35ed2aea8891..9646f773208a 100644
--- a/arch/s390/kernel/uv.c
+++ b/arch/s390/kernel/uv.c
@@ -14,6 +14,7 @@
 #include <linux/memblock.h>
 #include <linux/pagemap.h>
 #include <linux/swap.h>
+#include <linux/pagewalk.h>
 #include <asm/facility.h>
 #include <asm/sections.h>
 #include <asm/uv.h>
@@ -462,9 +463,9 @@ EXPORT_SYMBOL_GPL(gmap_convert_to_secure);
 int gmap_destroy_page(struct gmap *gmap, unsigned long gaddr)
 {
        struct vm_area_struct *vma;
+       struct folio_walk fw;
        unsigned long uaddr;
        struct folio *folio;
-       struct page *page;
        int rc;
 
        rc = -EFAULT;
@@ -483,11 +484,15 @@ int gmap_destroy_page(struct gmap *gmap, unsigned long gaddr)
                goto out;
 
        rc = 0;
-       /* we take an extra reference here */
-       page = follow_page(vma, uaddr, FOLL_WRITE | FOLL_GET);
-       if (IS_ERR_OR_NULL(page))
+       folio = folio_walk_start(&fw, vma, uaddr, 0);
+       if (!folio)
                goto out;
-       folio = page_folio(page);
+       /*
+        * See gmap_make_secure(): large folios cannot be secure. Small
+        * folio implies FW_LEVEL_PTE.
+        */
+       if (folio_test_large(folio) || !pte_write(fw.pte))
+               goto out_walk_end;
        rc = uv_destroy_folio(folio);
        /*
         * Fault handlers can race; it is possible that two CPUs will fault
@@ -500,7 +505,8 @@ int gmap_destroy_page(struct gmap *gmap, unsigned long gaddr)
         */
        if (rc)
                rc = uv_convert_from_secure_folio(folio);
-       folio_put(folio);
+out_walk_end:
+       folio_walk_end(&fw, vma);
 out:
        mmap_read_unlock(gmap->mm);
        return rc;