net_sched: pfifo_head_drop problem

commit 57dbb2d83d (sched: add head drop fifo queue)
introduced pfifo_head_drop and broke the invariant that
sch->bstats.bytes and sch->bstats.packets are COUNTERs
(monotonically increasing counters only).

This can break estimators because est_timer() handles unsigned deltas
only. A decreasing counter can then give a huge unsigned delta.
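To illustrate the estimator problem, here is a standalone sketch (not the
actual est_timer() code) of what happens when the sampled counter goes
backwards and the delta is computed with unsigned arithmetic:

/* Standalone sketch: an unsigned delta over a counter that decreased
 * wraps around instead of going negative.
 */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t last_bytes = 100000;	/* counter at the previous sample */
	uint64_t bytes      =  98500;	/* counter after a head-drop decrement */

	uint64_t delta = bytes - last_bytes;	/* wraps to a value near 2^64 */

	printf("delta = %llu\n", (unsigned long long)delta);
	return 0;
}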

My mid-term suggestion would be to change things so that
sch->bstats.bytes and sch->bstats.packets are incremented at dequeue()
time only, not at enqueue() time. We could also add
drop_bytes/drop_packets counters and provide drop rate estimations.
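As a rough sketch of that idea (hypothetical code, not part of this
patch; example_dequeue is an invented name), the accounting would move
from enqueue() to the dequeue path:

/* Hypothetical sketch only: account bytes/packets when the packet
 * actually leaves the qdisc, instead of at enqueue time.
 */
static struct sk_buff *example_dequeue(struct Qdisc *sch)
{
	struct sk_buff *skb = qdisc_dequeue_head(sch);

	if (skb) {
		sch->bstats.bytes += qdisc_pkt_len(skb);
		sch->bstats.packets++;
	}
	return skb;
}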

That would be more sensible anyway for very low speeds and big bursts.
Right now, if we drop packets, they are still accounted in the absolute
byte/packet counters and in the rate estimators.

Until that mid-term change, this patch makes pfifo_head_drop behave
like other qdiscs in case of drops:
don't decrement sch->bstats.bytes and sch->bstats.packets.

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Acked-by: Hagen Paul Pfeifer <hagen@jauu.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet 2011-01-05 10:35:02 +00:00 committed by David S. Miller
parent dbbe68bb12
commit 44b8288308

@@ -54,8 +54,6 @@ static int pfifo_tail_enqueue(struct sk_buff *skb, struct Qdisc* sch)
 	/* queue full, remove one skb to fulfill the limit */
 	skb_head = qdisc_dequeue_head(sch);
-	sch->bstats.bytes -= qdisc_pkt_len(skb_head);
-	sch->bstats.packets--;
 	sch->qstats.drops++;
 	kfree_skb(skb_head);