Eric Dumazet 7c90584c66 net: speed up skb_rbtree_purge()
As measured in my prior patch ("sch_netem: faster rb tree removal"),
rbtree_postorder_for_each_entry_safe() is nice looking but much slower
than using rb_next() directly, except when the tree is small enough
to fit in CPU caches (in which case the cost is the same).

Also note that there is no increase in text size:
$ size net/core/skbuff.o.before net/core/skbuff.o
   text	   data	    bss	    dec	    hex	filename
  40711	   1298	      0	  42009	   a419	net/core/skbuff.o.before
  40711	   1298	      0	  42009	   a419	net/core/skbuff.o

From: Eric Dumazet <edumazet@google.com>

Signed-off-by: David S. Miller <davem@davemloft.net>
2017-09-25 20:35:11 -07:00