veth: use napi_skb_cache_get_bulk() instead of xdp_alloc_skb_bulk()
author	Alexander Lobakin <aleksander.lobakin@intel.com>
Tue, 25 Feb 2025 17:17:49 +0000 (18:17 +0100)
committer	Paolo Abeni <pabeni@redhat.com>
Thu, 27 Feb 2025 13:03:52 +0000 (14:03 +0100)
Now that we can bulk-allocate skbs from the NAPI cache, use
napi_skb_cache_get_bulk() in veth as well instead of allocating directly
from the kmem caches. veth runs under NAPI and uses GRO, so the new
helper is both safe to call in this context and beneficial.

Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
drivers/net/veth.c

index ba3ae2d8092f83a7110caa6d166e96fba9829964..05f5eeef539f0427809b5eece79cc5db66f387ba 100644
@@ -684,8 +684,7 @@ static void veth_xdp_rcv_bulk_skb(struct veth_rq *rq, void **frames,
        void *skbs[VETH_XDP_BATCH];
        int i;
 
-       if (xdp_alloc_skb_bulk(skbs, n_xdpf,
-                              GFP_ATOMIC | __GFP_ZERO) < 0) {
+       if (unlikely(!napi_skb_cache_get_bulk(skbs, n_xdpf))) {
                for (i = 0; i < n_xdpf; i++)
                        xdp_return_frame(frames[i]);
                stats->rx_drops += n_xdpf;
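
For context, a rough sketch of how a caller consumes the two helpers
(illustrative only, not the veth code: the function name and batch size
below are made up, and the return-value semantics are as described in the
helpers' kernel-doc):

    /* Illustrative sketch, assuming NAPI (softirq) context. */
    #include <linux/skbuff.h>
    #include <net/xdp.h>

    #define EXAMPLE_BATCH	16	/* arbitrary batch size for the example */

    static void example_frames_to_skbs(struct xdp_frame **frames, u32 n)
    {
    	void *skbs[EXAMPLE_BATCH];
    	u32 got, i;

    	/* Old helper: allocates straight from the skbuff kmem cache and
    	 * returns 0 or a negative errno; callers passed __GFP_ZERO to get
    	 * zeroed heads:
    	 *
    	 *	if (xdp_alloc_skb_bulk(skbs, n, GFP_ATOMIC | __GFP_ZERO) < 0)
    	 *		goto drop_all;
    	 */

    	/* New helper: takes heads from the per-CPU NAPI cache, refilling
    	 * it from the kmem cache only when needed, zeroes them itself, and
    	 * returns the number of heads it could provide (0 on failure).
    	 */
    	got = napi_skb_cache_get_bulk(skbs, n);

    	/* Frames we could not get an skb head for go back to their
    	 * memory pool.
    	 */
    	for (i = got; i < n; i++)
    		xdp_return_frame(frames[i]);

    	for (i = 0; i < got; i++) {
    		/* build skbs[i] from frames[i], e.g. via
    		 * __xdp_build_skb_from_frame(), and hand it to the stack
    		 */
    	}
    }

Note the gfp flags disappear: the new helper is NAPI-context only and
implies GFP_ATOMIC, which is why the diff above can drop
GFP_ATOMIC | __GFP_ZERO entirely.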