linux/net/ipv4
Eric Dumazet 05c6b74734 tcp: avoid premature drops in tcp_add_backlog()
[ Upstream commit ec00ed472b ]

While testing TCP performance with the latest trees,
I saw suspicious SOCKET_BACKLOG drops.

tcp_add_backlog() computes its limit with:

    limit = (u32)READ_ONCE(sk->sk_rcvbuf) +
            (u32)(READ_ONCE(sk->sk_sndbuf) >> 1);
    limit += 64 * 1024;

This does not take into account that sk->sk_backlog.len
is reset only at the very end of __release_sock().

Both sk->sk_backlog.len and sk->sk_rmem_alloc could reach
sk_rcvbuf in normal conditions.

We should double sk->sk_rcvbuf contribution in the formula
to absorb bubbles in the backlog, which happen more often
for very fast flows.

This change maintains decent protection against abuses.

Fixes: c377411f24 ("net: sk_add_backlog() take rmem_alloc into account")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Link: https://lore.kernel.org/r/20240423125620.3309458-1-edumazet@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-06-16 13:39:22 +02:00