Soheil Hassas Yeganeh bafbb9c732 tcp: eliminate negative reordering in tcp_clean_rtx_queue
tcp_ack() can call tcp_fragment(), which may decrement
tp->fackets_out when the MSS changes. When prior_fackets
is larger than tp->fackets_out, tcp_clean_rtx_queue() can
invoke tcp_update_reordering() with a negative value. This
results in absurd tp->reordering values higher than
sysctl_tcp_max_reordering.

Note that tcp_update_reordering indeed sets tp->reordering
to min(sysctl_tcp_max_reordering, metric), but because
the comparison is signed, a negative metric always wins.
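
For illustration only, a minimal userspace sketch (not the
kernel code, with made-up values) of why the cap fails: the
signed min() lets the negative metric through, and storing it
in an unsigned reordering field wraps to a huge value.

    #include <stdio.h>

    #define min(a, b) ((a) < (b) ? (a) : (b))

    int main(void)
    {
            int sysctl_tcp_max_reordering = 300;  /* default cap */
            unsigned int reordering = 3;          /* like tp->reordering */

            /* prior_fackets (12) ended up larger than tp->fackets_out (10)
             * after tcp_fragment(), so the metric passed down is negative.
             */
            int metric = 10 - 12;                 /* -2 */

            /* Signed min(): -2 < 300, so the negative metric wins, and
             * the assignment to the unsigned field wraps to a value far
             * above sysctl_tcp_max_reordering.
             */
            reordering = min(sysctl_tcp_max_reordering, metric);

            printf("reordering = %u\n", reordering);  /* 4294967294 */
            return 0;
    }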

Fixes: c7caf8d3ed7a ("[TCP]: Fix reord detection due to snd_una covered holes")
Reported-by: Rebecca Isaacs <risaacs@google.com>
Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-05-16 12:45:21 -04:00