Soheil Hassas Yeganeh 0174b07408 tcp: eliminate negative reordering in tcp_clean_rtx_queue
[ Upstream commit bafbb9c73241760023d8981191ddd30bb1c6dbac ]

tcp_ack() can call tcp_fragment(), which may reduce the
value of tp->fackets_out when the MSS changes. When prior_fackets
is larger than tp->fackets_out, tcp_clean_rtx_queue() can
invoke tcp_update_reordering() with negative values. This
results in absurd tp->reordering values higher than
sysctl_tcp_max_reordering.

Note that tcp_update_reordering() indeed sets tp->reordering
to min(sysctl_tcp_max_reordering, metric), but because
the comparison is signed, a negative metric always wins.
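
A minimal standalone sketch of the effect (illustrative values and a
plain min() macro, not the kernel code itself): a signed comparison
lets the negative metric win, and storing the result in an unsigned
field produces a value far above the sysctl limit.

    #include <stdio.h>

    #define min(a, b) ((a) < (b) ? (a) : (b))

    int main(void)
    {
            int sysctl_tcp_max_reordering = 300;
            unsigned int reordering;       /* tp->reordering is unsigned */

            int prior_fackets = 12;
            int fackets_out = 10;          /* shrunk after tcp_fragment() */
            int metric = fackets_out - prior_fackets;   /* -2 */

            /* signed comparison: the negative metric wins the min() */
            reordering = min(sysctl_tcp_max_reordering, metric);

            /* prints 4294967294, far above the 300 limit */
            printf("reordering = %u\n", reordering);
            return 0;
    }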

Fixes: c7caf8d3ed7a ("[TCP]: Fix reord detection due to snd_una covered holes")
Reported-by: Rebecca Isaacs <risaacs@google.com>
Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-06-07 12:07:44 +02:00