net: page_pool: avoid touching slow on the fastpath

To fully benefit from the previous commit, add one byte of state
to the first cache line, recording whether we need to look at
the slow part.

The packing isn't all that impressive right now; we create
a 7B hole. I'm expecting Olek's rework will reshuffle this
anyway.
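
The idea in miniature, as a standalone userspace sketch (all names
here are illustrative, this is not the page_pool code): derive a
one-byte predicate from the cold part of the structure once at init
time, store it in the hot first cache line, and let the fast path
branch on that byte so the cold cache line is only pulled in when
the callback actually exists.

/* Sketch of the hot/cold split plus cached-predicate trick. */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct slow_params {
	/* Cold state: consulted rarely. */
	void (*init_callback)(void *item, void *arg);
	void *init_arg;
};

struct pool {
	/* Hot state: everything the fast path reads lives up front,
	 * ideally within the first cache line. */
	bool has_init_callback;	/* cached: slow.init_callback != NULL */
	char other_hot_fields[48];

	/* Cold state follows, in later cache lines. */
	struct slow_params slow;
};

/* Init time (slow path): derive the one-byte predicate once. */
static void pool_init(struct pool *p)
{
	p->has_init_callback = !!p->slow.init_callback;
}

/* Fast path: a one-byte read from the hot line decides whether the
 * cold line holding slow.init_callback is touched at all. */
static void pool_alloc_one(struct pool *p, void *item)
{
	if (p->has_init_callback)
		p->slow.init_callback(item, p->slow.init_arg);
}

static void item_init(void *item, void *arg)
{
	(void)item;
	printf("init_callback(arg=%s)\n", (const char *)arg);
}

int main(void)
{
	struct pool p = {
		.slow = { .init_callback = item_init, .init_arg = "ring0" },
	};

	pool_init(&p);
	pool_alloc_one(&p, NULL);
	printf("has_init_callback sits at offset %zu\n",
	       offsetof(struct pool, has_init_callback));
	return 0;
}

The 7B hole mentioned above comes from the real struct's layout: a
lone bool immediately followed by an 8-byte-aligned long (frag_users
in the diff below) leaves 7 bytes of padding.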

Acked-by: Jesper Dangaard Brouer <hawk@kernel.org>
Reviewed-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Reviewed-by: Mina Almasry <almasrymina@google.com>
Link: https://lore.kernel.org/r/20231121000048.789613-3-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
commit 2da0cac1e9
parent 5027ec19f1
Author: Jakub Kicinski <kuba@kernel.org>
Date:   2023-11-20 16:00:35 -08:00

 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
--- a/include/net/page_pool/types.h
+++ b/include/net/page_pool/types.h
@@ -125,6 +125,8 @@ struct page_pool_stats {
 struct page_pool {
 	struct page_pool_params_fast p;
 
+	bool has_init_callback;
+
 	long frag_users;
 	struct page *frag_page;
 	unsigned int frag_offset;
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -212,6 +212,8 @@ static int page_pool_init(struct page_pool *pool,
 		 */
 	}
 
+	pool->has_init_callback = !!pool->slow.init_callback;
+
 #ifdef CONFIG_PAGE_POOL_STATS
 	pool->recycle_stats = alloc_percpu(struct page_pool_recycle_stats);
 	if (!pool->recycle_stats)
@@ -389,7 +391,7 @@ static void page_pool_set_pp_info(struct page_pool *pool,
 	 * the overhead is negligible.
 	 */
 	page_pool_fragment_page(page, 1);
-	if (pool->slow.init_callback)
+	if (pool->has_init_callback)
 		pool->slow.init_callback(page, pool->slow.init_arg);
 }
 
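
For context, a sketch of the driver side this flag caches for; the
mydrv_* names and parameter values are assumptions for illustration,
while init_callback and init_arg are the real page_pool_params
members referenced in the hunks above:

#include <net/page_pool/helpers.h>

/* Hypothetical driver state; only the page_pool bits are real. */
struct mydrv_rx_ring;

static void mydrv_page_init(struct page *page, void *arg)
{
	struct mydrv_rx_ring *ring = arg;

	/* One-time per-page setup for this ring would go here. */
	(void)ring;
	(void)page;
}

static struct page_pool *mydrv_create_pool(struct mydrv_rx_ring *ring,
					   struct device *dev)
{
	struct page_pool_params pp_params = {
		.flags		= PP_FLAG_DMA_MAP,
		.order		= 0,
		.pool_size	= 256,
		.nid		= NUMA_NO_NODE,
		.dev		= dev,
		.dma_dir	= DMA_FROM_DEVICE,
		.init_callback	= mydrv_page_init,	/* lands in pool->slow */
		.init_arg	= ring,
	};

	return page_pool_create(&pp_params);
}

The callback itself still lives in (and is invoked through)
pool->slow; only its presence is mirrored into the first cache line,
since that is all the allocation path needs to know up front.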