Eric Dumazet 31e1da773a tcp: avoid premature drops in tcp_add_backlog()
[ Upstream commit ec00ed472bdb7d0af840da68c8c11bff9f4d9caa ]

While testing TCP performance with latest trees,
I saw suspicious SOCKET_BACKLOG drops.

tcp_add_backlog() computes its limit with:

    limit = (u32)READ_ONCE(sk->sk_rcvbuf) +
            (u32)(READ_ONCE(sk->sk_sndbuf) >> 1);
    limit += 64 * 1024;

This does not take into account that sk->sk_backlog.len
is reset only at the very end of __release_sock().

Both sk->sk_backlog.len and sk->sk_rmem_alloc can reach
sk_rcvbuf under normal conditions.
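
The admission check counts both fields against the limit, so their sum
can approach 2 * sk_rcvbuf and exceed the old limit. A rough sketch of
that check, modeled on sk_rcvqueues_full() in include/net/sock.h (the
exact helper may differ between trees):

    /* Pending backlog bytes plus already-charged receive memory
     * are compared against the computed limit; exceeding it results
     * in a SOCKET_BACKLOG drop.
     */
    unsigned int qsize = sk->sk_backlog.len +
                         atomic_read(&sk->sk_rmem_alloc);
    if (qsize > limit)
        goto drop;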

We should double the sk->sk_rcvbuf contribution in the formula
to absorb bubbles in the backlog, which happen more often
for very fast flows.
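
As a minimal sketch of that adjustment (mirroring the snippet quoted
above; the exact upstream diff may differ), the computation becomes:

    /* Double the sk_rcvbuf term so a backlog holding close to
     * sk_rcvbuf bytes, on top of sk_rmem_alloc, is not dropped
     * prematurely.
     */
    limit = 2 * (u32)READ_ONCE(sk->sk_rcvbuf) +
            (u32)(READ_ONCE(sk->sk_sndbuf) >> 1);
    limit += 64 * 1024;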

This change maintains decent protection against abuses.

Fixes: c377411f2494 ("net: sk_add_backlog() take rmem_alloc into account")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Link: https://lore.kernel.org/r/20240423125620.3309458-1-edumazet@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-06-16 13:28:35 +02:00