mm/mmu_gather: add tlb_remove_tlb_entries()
author	David Hildenbrand <david@redhat.com>
	Wed, 14 Feb 2024 20:44:32 +0000 (21:44 +0100)
committer	Andrew Morton <akpm@linux-foundation.org>
	Thu, 22 Feb 2024 23:27:17 +0000 (15:27 -0800)
Let's add a helper that lets us batch-process multiple consecutive PTEs.
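
For illustration only (not part of this patch): a caller that so far
remembered nr consecutive PTEs one at a time,

	for (i = 0; i < nr; i++, ptep++, address += PAGE_SIZE)
		tlb_remove_tlb_entry(tlb, ptep, address);

can batch the whole range in a single call:

	tlb_remove_tlb_entries(tlb, ptep, nr, address);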

Note that the loop will get optimized out on all architectures except
powerpc.  We have to add an early declaration of __tlb_remove_tlb_entry() on
ppc to make the compiler happy (and to avoid making tlb_remove_tlb_entries()
a macro).
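
Roughly, the resulting ordering in arch/powerpc/include/asm/tlb.h looks as
follows (a sketch; the body of the ppc implementation is elided).  Without
the early declaration, the call from the new generic tlb_remove_tlb_entries()
would reference a function that has not been declared yet:

	static inline void __tlb_remove_tlb_entry(struct mmu_gather *tlb, pte_t *ptep,
						  unsigned long address);
	#define __tlb_remove_tlb_entry __tlb_remove_tlb_entry

	#include <asm-generic/tlb.h>	/* tlb_remove_tlb_entries() calls it here */

	static inline void __tlb_remove_tlb_entry(struct mmu_gather *tlb, pte_t *ptep,
						  unsigned long address)
	{
		/* arch implementation, elided here */
	}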

[arnd@kernel.org: change __tlb_remove_tlb_entry() to an inline function]
Link: https://lkml.kernel.org/r/20240221154549.2026073-1-arnd@kernel.org
Link: https://lkml.kernel.org/r/20240214204435.167852-8-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
Cc: Alexander Gordeev <agordeev@linux.ibm.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christian Borntraeger <borntraeger@linux.ibm.com>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Hocko <mhocko@suse.com>
Cc: "Naveen N. Rao" <naveen.n.rao@linux.ibm.com>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Yin Fengwei <fengwei.yin@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
arch/powerpc/include/asm/tlb.h
include/asm-generic/tlb.h

diff --git a/arch/powerpc/include/asm/tlb.h b/arch/powerpc/include/asm/tlb.h
index b3de6102a90779739a598d9784ab9b55ab6e1ee0..1ca7d4c4b90dbf49cb7e002376fc5e73ad65a9ba 100644
--- a/arch/powerpc/include/asm/tlb.h
+++ b/arch/powerpc/include/asm/tlb.h
@@ -19,6 +19,8 @@
 
 #include <linux/pagemap.h>
 
+static inline void __tlb_remove_tlb_entry(struct mmu_gather *tlb, pte_t *ptep,
+                                         unsigned long address);
 #define __tlb_remove_tlb_entry __tlb_remove_tlb_entry
 
 #define tlb_flush tlb_flush
diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index 2eb7b0d4f5d2b5de62a2c599594889e814671846..127a8230a40abc26ac315c070a8b5e37429ac938 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -592,7 +592,9 @@ static inline void tlb_flush_p4d_range(struct mmu_gather *tlb,
 }
 
 #ifndef __tlb_remove_tlb_entry
-#define __tlb_remove_tlb_entry(tlb, ptep, address) do { } while (0)
+static inline void __tlb_remove_tlb_entry(struct mmu_gather *tlb, pte_t *ptep, unsigned long address)
+{
+}
 #endif
 
 /**
@@ -608,6 +610,26 @@ static inline void tlb_flush_p4d_range(struct mmu_gather *tlb,
                __tlb_remove_tlb_entry(tlb, ptep, address);     \
        } while (0)
 
+/**
+ * tlb_remove_tlb_entries - remember unmapping of multiple consecutive ptes for
+ *                         later tlb invalidation.
+ *
+ * Similar to tlb_remove_tlb_entry(), but remember unmapping of multiple
+ * consecutive ptes instead of only a single one.
+ */
+static inline void tlb_remove_tlb_entries(struct mmu_gather *tlb,
+               pte_t *ptep, unsigned int nr, unsigned long address)
+{
+       tlb_flush_pte_range(tlb, address, PAGE_SIZE * nr);
+       for (;;) {
+               __tlb_remove_tlb_entry(tlb, ptep, address);
+               if (--nr == 0)
+                       break;
+               ptep++;
+               address += PAGE_SIZE;
+       }
+}
+
 #define tlb_remove_huge_tlb_entry(h, tlb, ptep, address)       \
        do {                                                    \
                unsigned long _sz = huge_page_size(h);          \
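
For reference, a minimal hypothetical caller of the new helper (the function
name and the PTE-clearing step are illustrative, not part of this patch):

	/*
	 * Unmap nr consecutive PTEs starting at addr and remember them
	 * for a deferred TLB invalidation via the mmu_gather.
	 */
	static void unmap_consecutive_ptes(struct mmu_gather *tlb,
			struct vm_area_struct *vma, pte_t *ptep,
			unsigned long addr, unsigned int nr)
	{
		unsigned int i;

		for (i = 0; i < nr; i++)
			ptep_get_and_clear(vma->vm_mm, addr + i * PAGE_SIZE,
					   ptep + i);
		tlb_remove_tlb_entries(tlb, ptep, nr, addr);
	}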