.. SPDX-License-Identifier: GPL-2.0

=============
Page Pool API
=============

The page_pool allocator is optimized for the XDP mode that uses one frame
per page, but it can fall back to the regular page allocator APIs.

Basic use involves replacing alloc_pages() calls with the
page_pool_alloc_pages() call. Drivers should use page_pool_dev_alloc_pages()
in place of dev_alloc_pages().

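For instance, a minimal sketch of the swap in an Rx refill path (the
``rx_ring->page_pool`` member is a hypothetical driver field):

.. code-block:: c

    /* Before: plain page allocator */
    page = dev_alloc_pages(0);

    /* After: served from the pool's caches; the helper allocates with
     * GFP_ATOMIC internally, so it is safe in the Rx softirq path.
     */
    page = page_pool_dev_alloc_pages(rx_ring->page_pool);
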
The API keeps track of in-flight pages in order to let its users know
when it is safe to free a page_pool object. Thus, API users
must call page_pool_put_page() to free the page, or attach
the page to a page_pool-aware object, such as an skb marked with
skb_mark_for_recycle().

API users must call page_pool_put_page() exactly once on a page: it
will either recycle the page or, in case of refcnt > 1, release the
DMA mapping and in-flight state accounting.

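A minimal sketch of the two release paths (``pool``, ``page``, and ``skb``
are assumed driver state):

.. code-block:: c

    if (drop) {
        /* Return the page to the pool; allow_direct (the last
         * argument) may only be true from a safe (e.g. NAPI) context.
         */
        page_pool_put_full_page(pool, page, true);
    } else {
        /* The page is attached to this skb; once the skb is marked,
         * the stack recycles the page into the pool when the skb is
         * freed.
         */
        skb_mark_for_recycle(skb);
    }
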
Architecture overview
=====================

.. code-block:: none

    +------------------+
    |       Driver     |
    +------------------+
            ^
            |
            |
            v
    +--------------------------------------------+
    |                request memory              |
    +--------------------------------------------+
        ^                                  ^
        |                                  |
        | Pool empty                       | Pool has entries
        |                                  |
        v                                  v
    +-----------------------+     +------------------------+
    | alloc (and map) pages |     |  get page from cache   |
    +-----------------------+     +------------------------+
                                    ^                    ^
                                    |                    |
                                    | cache available    | No entries, refill
                                    |                    | from ptr-ring
                                    v                    v
                            +-----------------+   +------------------+
                            |   Fast cache    |   |  ptr-ring cache  |
                            +-----------------+   +------------------+

API interface
=============
The number of pools created **must** match the number of hardware queues
unless hardware restrictions make that impossible. Anything else would defeat
the purpose of page_pool, which is to allocate pages quickly from a cache
without locking. This lockless guarantee naturally comes from running under a
NAPI softirq. The protection does not strictly have to be NAPI; any guarantee
that allocating a page will cause no race conditions is enough.

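For instance, a driver would typically create one pool per Rx queue at setup
time, each pool used only from that queue's NAPI poll (a sketch; ``priv``
and its ``rxq`` array are hypothetical driver state):

.. code-block:: c

    int i;

    for (i = 0; i < priv->num_rx_queues; i++) {
        struct page_pool_params pp = { 0 };
        struct page_pool *pool;

        pp.pool_size = priv->rx_desc_num;
        pp.nid = NUMA_NO_NODE;
        pp.dev = priv->dev;
        pp.dma_dir = DMA_FROM_DEVICE;
        pp.flags = PP_FLAG_DMA_MAP;

        pool = page_pool_create(&pp);
        if (IS_ERR(pool))
            return PTR_ERR(pool);
        priv->rxq[i].page_pool = pool;
    }
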
* page_pool_create(): Create a pool.
    * flags:      PP_FLAG_DMA_MAP, PP_FLAG_DMA_SYNC_DEV
    * order:      2^order pages on allocation
    * pool_size:  size of the ptr_ring
    * nid:        preferred NUMA node for allocation
    * dev:        struct device. Used on DMA operations
    * dma_dir:    DMA direction
    * max_len:    max DMA sync memory size
    * offset:     DMA address offset

* page_pool_put_page(): The outcome of this depends on the page refcnt. If the
  driver bumps the refcnt > 1 this will unmap the page. If the page refcnt is 1
  the allocator owns the page and will try to recycle it in one of the pool
  caches. If PP_FLAG_DMA_SYNC_DEV is set, the page will be synced for_device
  using dma_sync_single_range_for_device().

* page_pool_put_full_page(): Similar to page_pool_put_page(), but will DMA sync
  the entire memory area configured in pool->max_len.

* page_pool_recycle_direct(): Similar to page_pool_put_full_page() but the
  caller must guarantee a safe context (e.g. NAPI), since it will recycle the
  page directly into the pool fast cache.

* page_pool_dev_alloc_pages(): Get a page from the page allocator or page_pool
  caches.

* page_pool_get_dma_addr(): Retrieve the stored DMA address (see the second
  sketch after this list).

* page_pool_get_dma_dir(): Retrieve the stored DMA direction.

* page_pool_put_page_bulk(): Tries to refill a number of pages into the
  ptr_ring cache while holding the ptr_ring producer lock. If the ptr_ring is
  full, page_pool_put_page_bulk() will release the leftover pages to the page
  allocator. page_pool_put_page_bulk() is suitable to be run inside the driver
  NAPI tx completion loop for the XDP_REDIRECT use case (see the first sketch
  after this list).
  Please note that the caller must not use the data area after running
  page_pool_put_page_bulk(), as this function overwrites it.

* page_pool_get_stats(): Retrieve statistics about the page_pool. This API
  is only available if the kernel has been configured with
  ``CONFIG_PAGE_POOL_STATS=y``. The caller passes a pointer to a
  caller-allocated ``struct page_pool_stats``, which the API fills in. The
  caller can then report those stats to the user (perhaps via ethtool,
  debugfs, etc.). See below for an example usage of this API.

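As referenced in the list above, a sketch of page_pool_put_page_bulk() in an
XDP_REDIRECT Tx completion loop (``BULK_SIZE``, ``tx_ring``, and the
descriptor-walk helpers are hypothetical driver details):

.. code-block:: c

    void *data[BULK_SIZE];
    int count = 0;

    while (tx_desc_completed(tx_ring)) {
        data[count++] = tx_desc_page(tx_ring);
        if (count == BULK_SIZE) {
            page_pool_put_page_bulk(pool, data, count);
            count = 0; /* data[] contents are stale after the call */
        }
    }
    if (count)
        page_pool_put_page_bulk(pool, data, count);

And a sketch of using the stored DMA address and direction to sync a received
buffer for the CPU before reading it (``priv->dev``, ``headroom``, and
``len`` are assumed driver state):

.. code-block:: c

    dma_addr_t dma_addr = page_pool_get_dma_addr(page);

    dma_sync_single_range_for_cpu(priv->dev, dma_addr, headroom, len,
                                  page_pool_get_dma_dir(pool));
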
Stats API and structures
------------------------
If the kernel is configured with ``CONFIG_PAGE_POOL_STATS=y``, the API
``page_pool_get_stats()`` and the structures described below are available.
It takes a pointer to a ``struct page_pool`` and a pointer to a ``struct
page_pool_stats`` allocated by the caller.

The API will fill in the provided ``struct page_pool_stats`` with
statistics about the page_pool.

The stats structure has the following fields::

    struct page_pool_stats {
        struct page_pool_alloc_stats alloc_stats;
        struct page_pool_recycle_stats recycle_stats;
    };

The ``struct page_pool_alloc_stats`` has the following fields:
  * ``fast``: successful fast path allocations
  * ``slow``: slow path order-0 allocations
  * ``slow_high_order``: slow path high order allocations
  * ``empty``: ptr ring is empty, so a slow path allocation was forced
  * ``refill``: an allocation which triggered a refill of the cache
  * ``waive``: pages obtained from the ptr ring that cannot be added to
    the cache due to a NUMA mismatch

The ``struct page_pool_recycle_stats`` has the following fields:
  * ``cached``: recycling placed page in the page pool cache
  * ``cache_full``: page pool cache was full
  * ``ring``: page placed into the ptr ring
  * ``ring_full``: page released from page pool because the ptr ring was full
  * ``released_refcnt``: page released (and not recycled) because refcnt > 1

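Note that page_pool_get_stats() adds each counter into the passed structure
rather than overwriting it, so a driver with one pool per queue can collect
totals in a single zeroed struct (a sketch; ``priv`` and its fields are
hypothetical):

.. code-block:: c

    struct page_pool_stats stats = { 0 };
    int i;

    /* Counters accumulate, so one struct gathers totals across all
     * per-queue pools.
     */
    for (i = 0; i < priv->num_rx_queues; i++)
        page_pool_get_stats(priv->rxq[i].page_pool, &stats);
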
Coding examples
===============

Registration
------------

.. code-block:: c

    /* Page pool registration */
    struct page_pool_params pp_params = { 0 };
    struct xdp_rxq_info xdp_rxq;
    int err;

    /* internal DMA mapping in page_pool */
    pp_params.flags = PP_FLAG_DMA_MAP;
    pp_params.pool_size = DESC_NUM;
    pp_params.nid = NUMA_NO_NODE;
    pp_params.dev = priv->dev;
    pp_params.napi = napi; /* only if locking is tied to NAPI */
    pp_params.dma_dir = xdp_prog ? DMA_BIDIRECTIONAL : DMA_FROM_DEVICE;
    page_pool = page_pool_create(&pp_params);
    if (IS_ERR(page_pool)) {
        err = PTR_ERR(page_pool);
        goto err_out;
    }

    err = xdp_rxq_info_reg(&xdp_rxq, ndev, 0);
    if (err)
        goto err_out;

    err = xdp_rxq_info_reg_mem_model(&xdp_rxq, MEM_TYPE_PAGE_POOL, page_pool);
    if (err)
        goto err_out;

NAPI poller
-----------

.. code-block:: c

    /* NAPI Rx poller */
    enum dma_data_direction dma_dir;

    dma_dir = page_pool_get_dma_dir(dring->page_pool);
    while (done < budget) {
        if (some error)
            page_pool_recycle_direct(page_pool, page);
        if (packet_is_xdp) {
            if XDP_DROP:
                page_pool_recycle_direct(page_pool, page);
        } else if (packet_is_skb) {
            skb_mark_for_recycle(skb);
            new_page = page_pool_dev_alloc_pages(page_pool);
        }
    }

Stats
-----

.. code-block:: c

    #ifdef CONFIG_PAGE_POOL_STATS
    /* retrieve stats */
    struct page_pool_stats stats = { 0 };

    if (page_pool_get_stats(page_pool, &stats)) {
        /* perhaps the driver reports statistics with ethtool */
        ethtool_print_allocation_stats(&stats.alloc_stats);
        ethtool_print_recycle_stats(&stats.recycle_stats);
    }
    #endif

Driver unload
-------------

.. code-block:: c

    /* Driver unload */
    page_pool_put_full_page(page_pool, page, false);
    xdp_rxq_info_unreg(&xdp_rxq);