1. 24 Nov, 2013 (1 commit)
    • sch_tbf: handle too small burst · 4d0820cf
      Authored by Eric Dumazet
      If a too small burst is inadvertently set on TBF, we might trigger
      a bug in tbf_segment(), as 'skb' instead of 'segs' was used in a
      qdisc_reshape_fail() call.
      
      tc qdisc add dev eth0 root handle 1: tbf latency 50ms burst 1KB rate 50mbit
      
      Fix the bug, and add a warning, since such a configuration will not
      work anyway for non-GSO packets.
      
      (For some reason, one has to use a burst >= 1520 to get a working
      configuration, even with old kernels. This is a probable iproute2/tc
      bug)
      
      Based on a report and initial patch from Yang Yingliang
      
      Fixes: e43ac79a ("sch_tbf: segment too big GSO packets")
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Reported-by: Yang Yingliang <yangyingliang@huawei.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  2. 10 Nov, 2013 (1 commit)
  3. 21 Sep, 2013 (1 commit)
  4. 03 Jun, 2013 (1 commit)
  5. 23 May, 2013 (1 commit)
  6. 13 Feb, 2013 (1 commit)
    • tbf: improved accuracy at high rates · b757c933
      Authored by Jiri Pirko
      The current TBF uses a rate table computed by the "tc" userspace
      program, which has the following issue:
      
      The rate table has 256 entries mapping packet lengths to tokens
      (time units). With TSO-sized packets, this 256-entry granularity
      leads to a loss or gain of rate, making the token bucket inaccurate.
      
      Thus, instead of relying on the rate table, this patch computes the
      time explicitly and accounts for packet transmission times with
      nanosecond granularity.
      
      This is a followup to 56b765b7
      ("htb: improved accuracy at high rates").
      Signed-off-by: Jiri Pirko <jiri@resnulli.us>
      Acked-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  7. 02 Apr, 2012 (1 commit)
  8. 30 Dec, 2011 (1 commit)
    • sch_tbf: report backlog information · b0460e44
      Authored by Eric Dumazet
      Provide child qdisc backlog (byte count) information so that
      "tc -s qdisc" can report it to the user.
      
      qdisc netem 30: root refcnt 18 limit 1000 delay 20.0ms  10.0ms
       Sent 948517 bytes 898 pkt (dropped 0, overlimits 0 requeues 1)
       rate 175056bit 16pps backlog 114b 1p requeues 1
      qdisc tbf 40: parent 30: rate 256000bit burst 20Kb/8 mpu 0b lat 0us
       Sent 948517 bytes 898 pkt (dropped 15, overlimits 611 requeues 0)
       backlog 18168b 12p requeues 0
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  9. 21 Jan, 2011 (2 commits)
    • net_sched: accurate bytes/packets stats/rates · 9190b3b3
      Authored by Eric Dumazet
      In commit 44b82883 (net_sched: pfifo_head_drop problem), we fixed a
      problem with pfifo_head drops that incorrectly decreased
      sch->bstats.bytes and sch->bstats.packets.

      Several qdiscs (CHOKe, SFQ, pfifo_head, ...) can drop a previously
      enqueued packet, and bstats cannot be adjusted afterwards, so
      bstats/rates are inaccurate (overestimated).
      
      This patch changes the qdisc_bstats updates to be done at dequeue()
      time instead of enqueue() time. bstats counters no longer account
      for dropped frames, and rates are more accurate, since enqueue()
      bursts don't affect the dequeue() rate.
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Acked-by: Stephen Hemminger <shemminger@vyatta.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net_sched: move TCQ_F_THROTTLED flag · fd245a4a
      Authored by Eric Dumazet
      In commit 37112105 (net: QDISC_STATE_RUNNING dont need atomic bit
      ops) I moved QDISC_STATE_RUNNING flag to __state container, located in
      the cache line containing qdisc lock and often dirtied fields.
      
      I now move the TCQ_F_THROTTLED bit too, so that the first cache line
      stays read-mostly and shared by all cpus. This should speed up
      HTB/CBQ, for example.
      
      Avoiding test_bit()/__clear_bit()/__test_and_set_bit() lets the
      __state container be a plain "unsigned int", shrinking struct Qdisc
      by 8 bytes.
      
      Introduce helpers to hide implementation details.
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      CC: Patrick McHardy <kaber@trash.net>
      CC: Jesper Dangaard Brouer <hawk@diku.dk>
      CC: Jarek Poplawski <jarkao2@gmail.com>
      CC: Jamal Hadi Salim <hadi@cyberus.ca>
      CC: Stephen Hemminger <shemminger@vyatta.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  10. 20 Jan, 2011 (1 commit)
  11. 11 Jan, 2011 (1 commit)
  12. 10 Aug, 2010 (1 commit)
  13. 18 May, 2010 (1 commit)
  14. 06 Sep, 2009 (3 commits)
  15. 22 Mar, 2009 (1 commit)
  16. 20 Nov, 2008 (1 commit)
  17. 14 Nov, 2008 (1 commit)
  18. 31 Oct, 2008 (2 commits)
  19. 18 Aug, 2008 (1 commit)
  20. 05 Aug, 2008 (1 commit)
    • net_sched: Add qdisc __NET_XMIT_STOLEN flag · 378a2f09
      Authored by Jarek Poplawski
      Patrick McHardy <kaber@trash.net> noticed:
      "The other problem that affects all qdiscs supporting actions is
      TC_ACT_QUEUED/TC_ACT_STOLEN getting mapped to NET_XMIT_SUCCESS
      even though the packet is not queued, corrupting upper qdiscs'
      qlen counters."
      
      and later explained:
      "The reason why it translates it at all seems to be to not increase
      the drops counter. Within a single qdisc this could be avoided by
      other means easily, upper qdiscs would still increase the counter
      when we return anything besides NET_XMIT_SUCCESS though.
      
      This means we need a new NET_XMIT return value to indicate this to
      the upper qdiscs. So I'd suggest to introduce NET_XMIT_STOLEN,
      return that to upper qdiscs and translate it to NET_XMIT_SUCCESS
      in dev_queue_xmit, similar to NET_XMIT_BYPASS."
      
      David Miller <davem@davemloft.net> noticed:
      "Maybe these NET_XMIT_* values being passed around should be a set of
      bits. They could be composed of base meanings, combined with specific
      attributes.
      
      So you could say "NET_XMIT_DROP | __NET_XMIT_NO_DROP_COUNT"
      
      The attributes get masked out by the top-level ->enqueue() caller,
      such that the base meanings are the only thing that make their
      way up into the stack. If it's only about communication within the
      qdisc tree, let's simply code it that way."
      
      This patch realizes these ideas.
      Signed-off-by: Jarek Poplawski <jarkao2@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  21. 20 Jul, 2008 (2 commits)
  22. 06 Jul, 2008 (1 commit)
  23. 29 Jan, 2008 (5 commits)
  24. 11 Oct, 2007 (1 commit)
  25. 15 Jul, 2007 (2 commits)
    • [NET_SCHED]: Kill CONFIG_NET_CLS_POLICE · c3bc7cff
      Authored by Patrick McHardy
      The NET_CLS_ACT option is now a full replacement for NET_CLS_POLICE,
      so remove the old code. The config option will be kept around for a
      short time to select the equivalent NET_CLS_ACT options, allowing
      easier upgrades.
      Signed-off-by: Patrick McHardy <kaber@trash.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [NET_SCHED]: act_api: qdisc internal reclassify support · 73ca4918
      Authored by Patrick McHardy
      The behaviour of NET_CLS_POLICE for TC_POLICE_RECLASSIFY was to return
      it to the qdisc, which could handle it internally or ignore it. With
      NET_CLS_ACT however, tc_classify starts over at the first classifier
      and never returns it to the qdisc. This makes it impossible to support
      qdisc-internal reclassification, which in turn makes it impossible to
      remove the old NET_CLS_POLICE code without breaking compatibility since
      we have two qdiscs (CBQ and ATM) that support this.
      
      This patch adds a tc_classify_compat function that handles
      reclassification the old way and changes CBQ and ATM to use it.
      
      This again is of course not fully backwards compatible with the previous
      NET_CLS_ACT behaviour. Unfortunately there is no way to fully maintain
      compatibility *and* support qdisc internal reclassification with
      NET_CLS_ACT, but this seems like the better choice over keeping the two
      incompatible options around forever.
      Signed-off-by: Patrick McHardy <kaber@trash.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  26. 11 Jul, 2007 (1 commit)
  27. 26 Apr, 2007 (4 commits)