net: page_pool: avoid touching slow on the fastpath
author	Jakub Kicinski <kuba@kernel.org>
Tue, 21 Nov 2023 00:00:35 +0000 (16:00 -0800)
committer	Jakub Kicinski <kuba@kernel.org>
Wed, 22 Nov 2023 01:22:30 +0000 (17:22 -0800)
To fully benefit from the previous commit, add one byte of state
in the first cache line recording whether we need to look at
the slow part.

The packing isn't all that impressive right now; we create
a 7B hole. I'm expecting Olek's rework will reshuffle this
anyway.

Acked-by: Jesper Dangaard Brouer <hawk@kernel.org>
Reviewed-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Reviewed-by: Mina Almasry <almasrymina@google.com>
Link: https://lore.kernel.org/r/20231121000048.789613-3-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
include/net/page_pool/types.h
net/core/page_pool.c

index 23950fcc4eca3320404365a8775075662fbc0aa2..e1bb92c192def10c55d8c194f1c8cee177d796cc 100644 (file)
@@ -125,6 +125,8 @@ struct page_pool_stats {
 struct page_pool {
        struct page_pool_params_fast p;
 
+       bool has_init_callback;
+
        long frag_users;
        struct page *frag_page;
        unsigned int frag_offset;
index ab22a2fdae5780451e6b38b9cd087d493e885575..df2a06d7da522cbc38911201769b93eae141c678 100644 (file)
@@ -212,6 +212,8 @@ static int page_pool_init(struct page_pool *pool,
                 */
        }
 
+       pool->has_init_callback = !!pool->slow.init_callback;
+
 #ifdef CONFIG_PAGE_POOL_STATS
        pool->recycle_stats = alloc_percpu(struct page_pool_recycle_stats);
        if (!pool->recycle_stats)
@@ -389,7 +391,7 @@ static void page_pool_set_pp_info(struct page_pool *pool,
         * the overhead is negligible.
         */
        page_pool_fragment_page(page, 1);
-       if (pool->slow.init_callback)
+       if (pool->has_init_callback)
                pool->slow.init_callback(page, pool->slow.init_arg);
 }
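
The pattern the patch applies can be sketched in plain C: a predicate derived from cold (slow-path) data is cached as one byte in the hot part of the struct at init time, so the per-packet fast path never has to pull the cold cache line just to test a pointer. The struct and function names below are illustrative stand-ins, not the kernel's actual definitions.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

typedef void (*init_cb_t)(void *page, void *arg);

struct pool_slow {            /* cold: touched only on the control path */
	init_cb_t init_callback;
	void *init_arg;
};

struct pool {                 /* hot: first cache line, read per packet */
	bool has_init_callback;   /* cached copy of !!slow.init_callback */
	/* ... other fast-path fields ... */
	struct pool_slow slow;
};

/* Control path: run once, record whether the slow part matters. */
static void pool_init(struct pool *pool)
{
	pool->has_init_callback = !!pool->slow.init_callback;
}

/* Fast path: only the hot cache line is read unless a callback
 * was actually registered. */
static void pool_set_page_info(struct pool *pool, void *page)
{
	if (pool->has_init_callback)
		pool->slow.init_callback(page, pool->slow.init_arg);
}

/* Demo callback used below; counts invocations. */
static int demo_calls;
static void demo_cb(void *page, void *arg)
{
	(void)page;
	(void)arg;
	demo_calls++;
}
```

The trade-off is one byte of duplicated state (and, as the commit notes, a temporary packing hole) in exchange for keeping the fast path confined to the first cache line.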