1. 01 Nov 2011, 1 commit
  2. 21 Oct 2011, 1 commit
  3. 06 Jul 2011, 1 commit
  4. 24 Mar 2011, 1 commit
    • net_sched: fix THROTTLED/RUNNING race · ef352e7c
      Committed by Eric Dumazet
      commit fd245a4a (net_sched: move TCQ_F_THROTTLED flag)
      added a race.
      
      qdisc_watchdog() runs from softirq context, so special care must be
      taken, or we can lose a state transition (THROTTLED/RUNNING).
      
      Prior to fd245a4a, we were manipulating q->flags (qdisc->flags &=
      ~TCQ_F_THROTTLED;), and that manipulation could only race with
      qdisc_warn_nonwc().
      
      Since we want to avoid atomic ops in the qdisc fast path - that was
      the point of commit 37112105 (QDISC_STATE_RUNNING dont need atomic
      bit ops) - the fix is to move the THROTTLED bit into the 'state'
      field, which is manipulated with SMP- and IRQ-safe operations.
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      ef352e7c
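
      For illustration, the shape of the fix, with the THROTTLED bit moved
      into the atomically managed 'state' bitmap (helper names follow the
      patch; treat the code as a sketch, not the literal diff):

      static inline bool qdisc_is_throttled(const struct Qdisc *qdisc)
      {
              return test_bit(__QDISC_STATE_THROTTLED, &qdisc->state);
      }

      static inline void qdisc_throttled(struct Qdisc *qdisc)
      {
              set_bit(__QDISC_STATE_THROTTLED, &qdisc->state);
      }

      static inline void qdisc_unthrottled(struct Qdisc *qdisc)
      {
              /* safe against the softirq watchdog: atomic RMW on 'state' */
              clear_bit(__QDISC_STATE_THROTTLED, &qdisc->state);
      }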
  5. 04 Mar 2011, 1 commit
  6. 24 Feb 2011, 1 commit
  7. 21 Jan 2011, 3 commits
    • net_sched: accurate bytes/packets stats/rates · 9190b3b3
      Committed by Eric Dumazet
      In commit 44b82883 (net_sched: pfifo_head_drop problem), we fixed a
      problem where pfifo_head drops incorrectly decreased
      sch->bstats.bytes and sch->bstats.packets.
      
      Several qdiscs (CHOKe, SFQ, pfifo_head, ...) are able to drop a
      previously enqueued packet, but bstats cannot be adjusted afterwards,
      so bstats/rates are inaccurate (overestimated).
      
      This patch changes the qdisc_bstats updates to be done at dequeue()
      time instead of enqueue() time. bstats counters no longer account for
      dropped frames, and rates are more correct, since enqueue() bursts
      don't affect the dequeue() rate.
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Acked-by: Stephen Hemminger <shemminger@vyatta.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      9190b3b3
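
      The resulting pattern, sketched: counters are bumped by one helper on
      the dequeue path only (the helper matches the patch; the surrounding
      dequeue function here is hypothetical):

      static inline void qdisc_bstats_update(struct Qdisc *sch,
                                             const struct sk_buff *skb)
      {
              sch->bstats.bytes += qdisc_pkt_len(skb);
              sch->bstats.packets++;
      }

      /* e.g. in some qdisc's ->dequeue() (illustrative): */
      static struct sk_buff *example_dequeue(struct Qdisc *sch)
      {
              struct sk_buff *skb = __skb_dequeue(&sch->q);

              if (skb)
                      qdisc_bstats_update(sch, skb); /* count only what leaves */
              return skb;
      }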
    • net_sched: RCU conversion of stab · a2da570d
      Committed by Eric Dumazet
      This patch converts stab qdisc management to RCU, so that we can
      perform the qdisc_calculate_pkt_len() call before taking the qdisc
      lock.
      
      This shortens the time the lock is held in __dev_xmit_skb().
      
      This permits more qdiscs to get TCQ_F_CAN_BYPASS status, avoiding a
      lot of cache misses and thus reducing latencies.
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      CC: Patrick McHardy <kaber@trash.net>
      CC: Jesper Dangaard Brouer <hawk@diku.dk>
      CC: Jarek Poplawski <jarkao2@gmail.com>
      CC: Jamal Hadi Salim <hadi@cyberus.ca>
      CC: Stephen Hemminger <shemminger@vyatta.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      a2da570d
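
      Roughly how the read side looks after the conversion, assuming an
      RCU-annotated 'stab' pointer in struct Qdisc (a sketch, not the
      literal patch):

      static inline void qdisc_calculate_pkt_len(struct sk_buff *skb,
                                                 const struct Qdisc *sch)
      {
              const struct qdisc_size_table *stab;

              /* lockless read; writers publish with rcu_assign_pointer()
               * and free the old table only after a grace period */
              stab = rcu_dereference_bh(sch->stab);
              if (unlikely(stab))
                      __qdisc_calculate_pkt_len(skb, stab);
      }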
    • net_sched: move TCQ_F_THROTTLED flag · fd245a4a
      Committed by Eric Dumazet
      In commit 37112105 (net: QDISC_STATE_RUNNING dont need atomic bit
      ops) I moved the QDISC_STATE_RUNNING flag into the __state container,
      located in the cache line containing the qdisc lock and other
      often-dirtied fields.
      
      I now move the TCQ_F_THROTTLED bit too, so that the first cache line
      stays read-mostly and can be shared by all cpus. This should speed up
      HTB/CBQ, for example.
      
      Not using test_bit()/__clear_bit()/__test_and_set_bit() allows using
      an "unsigned int" for the __state container, reducing Qdisc size by
      8 bytes.
      
      Introduce helpers to hide implementation details.
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      CC: Patrick McHardy <kaber@trash.net>
      CC: Jesper Dangaard Brouer <hawk@diku.dk>
      CC: Jarek Poplawski <jarkao2@gmail.com>
      CC: Jamal Hadi Salim <hadi@cyberus.ca>
      CC: Stephen Hemminger <shemminger@vyatta.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      fd245a4a
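
      The idea, sketched: RUNNING and THROTTLED become plain bits in a
      small __state word, updated without atomic ops because writers hold
      the qdisc lock (names mirror the patch; illustrative only). Note the
      ef352e7c entry above: the non-atomic THROTTLED handling later proved
      racy against the watchdog and was moved back to atomic 'state'.

      enum qdisc___state_t {
              __QDISC___STATE_RUNNING   = 1,
              __QDISC___STATE_THROTTLED = 2,
      };

      static inline bool qdisc_is_throttled(const struct Qdisc *qdisc)
      {
              return (qdisc->__state & __QDISC___STATE_THROTTLED) ? true : false;
      }

      static inline void qdisc_throttled(struct Qdisc *qdisc)
      {
              qdisc->__state |= __QDISC___STATE_THROTTLED;   /* plain RMW */
      }

      static inline void qdisc_unthrottled(struct Qdisc *qdisc)
      {
              qdisc->__state &= ~__QDISC___STATE_THROTTLED;
      }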
  8. 11 Jan 2011, 1 commit
  9. 21 Dec 2010, 1 commit
  10. 17 Dec 2010, 1 commit
  11. 21 Oct 2010, 1 commit
  12. 24 Sep 2010, 1 commit
  13. 03 Jul 2010, 2 commits
  14. 29 Jun 2010, 1 commit
  15. 02 Jun 2010, 3 commits
    • net: add additional lock to qdisc to increase throughput · 79640a4c
      Committed by Eric Dumazet
      When many cpus compete for sending frames on a given qdisc, the qdisc
      spinlock suffers from very high contention.
      
      The cpu owning the __QDISC_STATE_RUNNING bit has the same priority as
      the others when acquiring the lock, and cannot dequeue packets fast
      enough, since it must reacquire the lock for each dequeued packet.
      
      One solution to this problem is to make all cpus spin on a second
      lock before trying to take the main lock, when/if they see
      __QDISC_STATE_RUNNING already set.
      
      The owning cpu then competes with at most one other cpu for the main
      lock, allowing a higher dequeue rate.
      
      Based on a previous patch from Alexander Duyck. I added the heuristic
      to avoid the atomic operation in the fast path, and put the new lock
      far away from the cache line used by the dequeue worker. The busylock
      is also released as late as possible.
      
      Tests with the following script gave a boost from ~50,000 pps to
      ~600,000 pps on a dual quad-core machine (E5450 @3.00GHz) with the
      tg3 driver. (A single netperf flow can reach ~800,000 pps on this
      platform.)
      
      for j in `seq 0 3`; do
        for i in `seq 0 7`; do
          netperf -H 192.168.0.1 -t UDP_STREAM -l 60 -N -T $i -- -m 6 &
        done
      done
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Acked-by: Alexander Duyck <alexander.h.duyck@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      79640a4c
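
      A condensed sketch of the locking dance in __dev_xmit_skb() (the
      wrapper function name here is invented, and error handling is
      omitted; the real patch also drops the busylock early once this cpu
      becomes the owner):

      static int sketch_xmit(struct sk_buff *skb, struct Qdisc *q)
      {
              spinlock_t *root_lock = qdisc_lock(q);
              bool contended = qdisc_is_running(q); /* heuristic: owner busy? */
              int rc;

              if (unlikely(contended))
                      spin_lock(&q->busylock); /* only one extra waiter spins
                                                * on the main lock */
              spin_lock(root_lock);
              rc = qdisc_enqueue_root(skb, q);
              qdisc_run(q);                    /* owner drains the queue */
              spin_unlock(root_lock);

              if (unlikely(contended))
                      spin_unlock(&q->busylock);
              return rc;
      }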
    • net: QDISC_STATE_RUNNING dont need atomic bit ops · 37112105
      Committed by Eric Dumazet
      __QDISC_STATE_RUNNING is always changed while the qdisc lock is held.
      
      We can avoid two atomic operations in the xmit path if we move this
      bit into a new __state container.
      
      The location of this __state container is carefully chosen so that
      the fast path only dirties one qdisc cache line.
      
      The THROTTLED bit could later be moved into this __state location
      too, to avoid dirtying the first qdisc cache line.
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      37112105
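
      Roughly what the accessors become (a sketch using bit numbers, which
      fd245a4a above later turns into plain mask operations; the
      non-atomic __ variants are safe only because every writer holds the
      qdisc lock):

      static inline bool qdisc_is_running(struct Qdisc *qdisc)
      {
              return test_bit(__QDISC___STATE_RUNNING, &qdisc->__state);
      }

      static inline bool qdisc_run_begin(struct Qdisc *qdisc)
      {
              /* non-atomic RMW: no lock-prefixed instruction on x86 */
              return !__test_and_set_bit(__QDISC___STATE_RUNNING,
                                         &qdisc->__state);
      }

      static inline void qdisc_run_end(struct Qdisc *qdisc)
      {
              __clear_bit(__QDISC___STATE_RUNNING, &qdisc->__state);
      }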
    • net: Define accessors to manipulate QDISC_STATE_RUNNING · bc135b23
      Committed by Eric Dumazet
      Define three helpers to manipulate the QDISC_STATE_RUNNING flag,
      which a second patch will move to another location.
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      bc135b23
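
      The three helpers, roughly as first introduced (still atomic bit ops
      on 'state' at this point; a sketch rather than the literal diff):

      static inline bool qdisc_is_running(struct Qdisc *qdisc)
      {
              return test_bit(__QDISC_STATE_RUNNING, &qdisc->state);
      }

      static inline bool qdisc_run_begin(struct Qdisc *qdisc)
      {
              /* true if this cpu just took ownership */
              return !test_and_set_bit(__QDISC_STATE_RUNNING, &qdisc->state);
      }

      static inline void qdisc_run_end(struct Qdisc *qdisc)
      {
              clear_bit(__QDISC_STATE_RUNNING, &qdisc->state);
      }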
  16. 02 Apr 2010, 1 commit
    • gen_estimator: deadlock fix · 5d944c64
      Committed by Eric Dumazet
      One of my test machines got a deadlock during "tc" sessions,
      adding/deleting classes & filters, using traffic estimators.
      
      After some analysis, I believe we have a potential use-after-free
      case in est_timer():
      
      spin_lock(e->stats_lock); << HERE >>
      read_lock(&est_lock);
      if (e->bstats == NULL)   << TEST >>
      	goto skip;
      
      The test is done a bit late: after the estimator is killed, and
      before the RCU grace period has elapsed, we might already have
      freed/reused the memory that e->stats_lock points to (some
      qdisc->q.lock).
      
      A possible fix is to respect an RCU grace period at Qdisc dismantle
      time.
      
      On 64-bit, sizeof(struct Qdisc) is exactly 192 bytes. Adding 16 bytes
      to it (for struct rcu_head) is a problem because it might change
      performance, given QDISC_ALIGNTO is 32 bytes.
      
      This is why I also change QDISC_ALIGNTO to 64 bytes, to satisfy most
      current alignment requirements.
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      5d944c64
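
      The dismantle-side fix, sketched: defer the final free by one RCU
      grace period so est_timer() can never dereference freed memory (the
      callback name and 'padded' field are taken as assumptions from the
      description; illustrative only):

      static void qdisc_rcu_free(struct rcu_head *head)
      {
              struct Qdisc *qdisc = container_of(head, struct Qdisc, rcu_head);

              kfree((char *)qdisc - qdisc->padded); /* padded = alignment slack */
      }

      /* at the end of qdisc_destroy(), instead of an immediate kfree(): */
      call_rcu(&qdisc->rcu_head, qdisc_rcu_free);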
  17. 29 Jan 2010, 1 commit
  18. 04 Nov 2009, 1 commit
  19. 15 Sep 2009, 1 commit
    • pkt_sched: Fix tx queue selection in tc_modify_qdisc · 926e61b7
      Committed by Jarek Poplawski
      After the recent mq change there is the new select_queue qdisc class
      method used in tc_modify_qdisc, but it works correctly only for
      direct children of the mq qdisc. Grandchildren always get the first
      tx queue, which would give wrong qdisc_root etc. results (e.g. for
      sch_htb as a child of sch_prio). This patch fixes it by using the
      parent's dev_queue for such grandchild qdiscs. The select_queue
      method's return type is changed as well.
      
      With feedback from: Patrick McHardy <kaber@trash.net>
      Signed-off-by: Jarek Poplawski <jarkao2@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      926e61b7
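
      A sketch of the selection logic in tc_modify_qdisc() after the fix
      (reconstructed from the description above; variable names are
      guesses, not the literal patch):

      /* p = parent qdisc (if any) */
      if (p && p->ops->cl_ops && p->ops->cl_ops->select_queue)
              dev_queue = p->ops->cl_ops->select_queue(p, tcm); /* mq child */
      else if (p)
              dev_queue = p->dev_queue;  /* grandchild: inherit parent's queue */
      else
              dev_queue = netdev_get_tx_queue(dev, 0);  /* root */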
  20. 10 Sep 2009, 1 commit
    • net_sched: fix estimator lock selection for mq child qdiscs · 23bcf634
      Committed by Patrick McHardy
      When new child qdiscs are attached to the mq qdisc, they are actually
      attached as root qdiscs to the device queues. The lock selection for
      new estimators incorrectly picks the root lock of the existing,
      to-be-replaced qdisc, which results in a use-after-free once the old
      qdisc has been destroyed.
      
      Mark mq qdisc instances with a new flag and treat qdiscs attached to
      mq as children similar to regular root qdiscs.
      
      Additionally prevent estimators from being attached to the mq qdisc
      itself since it only updates its byte and packet counters during dumps.
      Signed-off-by: Patrick McHardy <kaber@trash.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      23bcf634
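
      The two checks, sketched (simplified from the description of the
      qdisc setup path; only the TCQ_F_MQROOT flag name is taken from the
      patch itself, the rest is an assumed reconstruction):

      /* refuse an estimator on the mq qdisc itself */
      if (sch->flags & TCQ_F_MQROOT)
              return -EOPNOTSUPP;

      /* estimator lock: qdiscs attached under mq count as root qdiscs */
      if (sch->parent != TC_H_ROOT &&
          !(sch->flags & TCQ_F_INGRESS) &&
          (!p || !(p->flags & TCQ_F_MQROOT)))
              root_lock = qdisc_root_sleeping_lock(sch);
      else
              root_lock = qdisc_lock(sch);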
  21. 06 Sep 2009, 2 commits
  22. 18 Aug 2009, 1 commit
  23. 07 Aug 2009, 1 commit
    • net: Avoid enqueuing skb for default qdiscs · bbd8a0d3
      Committed by Krishna Kumar
      dev_queue_xmit enqueues an skb and calls qdisc_run, which dequeues
      the skb and xmits it. In most cases, the skb that is enqueued is the
      same one that is dequeued (unless the queue gets stopped, or multiple
      cpus write to the same queue and race with qdisc_run). For default
      qdiscs, we can remove the redundant enqueue/dequeue and simply xmit
      the skb, since the default qdisc is work-conserving.
      
      The patch uses a new flag - TCQ_F_CAN_BYPASS - to identify the
      default fast queue. The controversial part of the patch is
      incrementing qlen when an skb is requeued - this is to avoid checks
      like the second line below:
      
      +  } else if ((q->flags & TCQ_F_CAN_BYPASS) && !qdisc_qlen(q) &&
      >>         !q->gso_skb &&
      +          !test_and_set_bit(__QDISC_STATE_RUNNING, &q->state)) {
      
      Results of 2 hours of testing with multiple netperf sessions (1, 2,
      4, 8, 12 sessions on a 4-cpu System-X). The BW numbers are aggregate
      Mb/s across iterations, tested with this version on System-X boxes
      with Chelsio 10Gbps cards:
      
      ----------------------------------
      Size |  ORG BW          NEW BW   |
      ----------------------------------
      128K |  156964          159381   |
      256K |  158650          162042   |
      ----------------------------------
      
      Changes from ver1:
      
      1. Move sch_direct_xmit declaration from sch_generic.h to
         pkt_sched.h
      2. Update qdisc basic statistics for direct xmit path.
      3. Set qlen to zero in qdisc_reset.
      4. Changed some function names to more meaningful ones.
      Signed-off-by: Krishna Kumar <krkumar2@in.ibm.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      bbd8a0d3
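
      The bypass path, roughly, inside __dev_xmit_skb() (simplified; txq,
      dev and root_lock come from the surrounding function, and requeue
      details are omitted):

      if ((q->flags & TCQ_F_CAN_BYPASS) && !qdisc_qlen(q) &&
          !test_and_set_bit(__QDISC_STATE_RUNNING, &q->state)) {
              /* empty work-conserving qdisc: xmit without enqueue/dequeue */
              __qdisc_update_bstats(q, skb->len);
              if (sch_direct_xmit(skb, q, dev, txq, root_lock))
                      __qdisc_run(q);   /* driver pushed back: drain */
              else
                      clear_bit(__QDISC_STATE_RUNNING, &q->state);
              rc = NET_XMIT_SUCCESS;
      } else {
              rc = qdisc_enqueue_root(skb, q);
              qdisc_run(q);
      }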
  24. 20 Mar 2009, 1 commit
    • net: reorder struct Qdisc for better SMP performance · 5e140dfc
      Committed by Eric Dumazet
      dev_queue_xmit() needs to dirty the fields "state", "q", "bstats"
      and "qstats".
      
      On x86_64, they currently span three cache lines, causing more
      cache-line ping-pong than necessary and lengthening the hold time of
      the queue spinlock.
      
      We can reduce this to one cache line by grouping all read-mostly
      fields at the beginning of the structure. (Or should I say, all
      highly modified fields at the end :))
      
      Before patch :
      
      offsetof(struct Qdisc, state)=0x38
      offsetof(struct Qdisc, q)=0x48
      offsetof(struct Qdisc, bstats)=0x80
      offsetof(struct Qdisc, qstats)=0x90
      sizeof(struct Qdisc)=0xc8
      
      After patch :
      
      offsetof(struct Qdisc, state)=0x80
      offsetof(struct Qdisc, q)=0x88
      offsetof(struct Qdisc, bstats)=0xa0
      offsetof(struct Qdisc, qstats)=0xac
      sizeof(struct Qdisc)=0xc0
      Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      5e140dfc
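
      The layout idea, sketched (field list abridged and approximate; the
      point is the grouping, not the exact members):

      struct Qdisc {
              /* read-mostly: shared cleanly between cpus */
              int                     (*enqueue)(struct sk_buff *skb,
                                                 struct Qdisc *dev);
              struct sk_buff *        (*dequeue)(struct Qdisc *dev);
              unsigned int            flags;
              const struct Qdisc_ops  *ops;
              /* ... */

              /* dirtied on every xmit: packed together at the end */
              unsigned long           state;
              struct sk_buff_head     q;
              struct gnet_stats_basic bstats;
              struct gnet_stats_queue qstats;
      };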
  25. 01 Feb 2009, 1 commit
  26. 14 Nov 2008, 1 commit
  27. 06 Nov 2008, 1 commit
  28. 31 Oct 2008, 3 commits
  29. 07 Oct 2008, 1 commit
  30. 23 Sep 2008, 1 commit
  31. 27 Aug 2008, 2 commits