Eric Dumazet de614973ee net: speed up skb_rbtree_purge()
commit 7c90584c66cc4b033a3b684b0e0950f79e7b7166 upstream.

As measured in my prior patch ("sch_netem: faster rb tree removal"),
rbtree_postorder_for_each_entry_safe() is nice looking but much slower
than using rb_next() directly, except when the tree is small enough
to fit in CPU caches (then the cost is the same).

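For reference, a minimal sketch of what an rb_next()-based purge loop
looks like; this is the general shape of the technique described above,
not necessarily the exact hunk in the patch (the rbnode field name and
helper choices are assumptions based on common kernel usage):

	#include <linux/rbtree.h>
	#include <linux/skbuff.h>

	void skb_rbtree_purge(struct rb_root *root)
	{
		struct rb_node *p = rb_first(root);

		while (p) {
			struct sk_buff *skb = rb_entry(p, struct sk_buff, rbnode);

			/* Advance before erasing, since rb_erase() unlinks the node. */
			p = rb_next(p);
			rb_erase(&skb->rbnode, root);
			kfree_skb(skb);
		}
	}
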
Also note that there is no increase in text size:
$ size net/core/skbuff.o.before net/core/skbuff.o
   text	   data	    bss	    dec	    hex	filename
  40711	   1298	      0	  42009	   a419	net/core/skbuff.o.before
  40711	   1298	      0	  42009	   a419	net/core/skbuff.o

From: Eric Dumazet <edumazet@google.com>

Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Mao Wenan <maowenan@huawei.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>