1. 06 Sep, 2009 2 commits
  2. 05 Sep, 2009 1 commit
  3. 02 Sep, 2009 1 commit
    • D
      pkt_sched: Revert tasklet_hrtimer changes. · 2fbd3da3
David S. Miller committed
These are full of unresolved problems, mainly that the conversion from
hrtimers to tasklet_hrtimers is not 1:1 because, unlike hrtimers,
tasklets can't be killed from softirq context.
      
      And when a qdisc gets reset, that's exactly what we need to do here.
      
      We'll work this out in the net-next-2.6 tree and if warranted we'll
      backport that work to -stable.
      
      This reverts the following 3 changesets:
      
      a2cb6a4d
      ("pkt_sched: Fix bogon in tasklet_hrtimer changes.")
      
      38acce2d
      ("pkt_sched: Convert CBQ to tasklet_hrtimer.")
      
      ee5f9757
      ("pkt_sched: Convert qdisc_watchdog to tasklet_hrtimer")
Signed-off-by: David S. Miller <davem@davemloft.net>
      2fbd3da3
4. 01 Sep, 2009 1 commit
  5. 31 Aug, 2009 1 commit
    • K
      pkt_sched: Fix resource limiting in pfifo_fast · a453e068
Krishna Kumar committed
      pfifo_fast_enqueue has this check:
              if (skb_queue_len(list) < qdisc_dev(qdisc)->tx_queue_len) {
      
which allows each band to enqueue up to tx_queue_len skbs, for a
total of 3 * tx_queue_len skbs. I am not sure this was the
intended limit for the qdisc.
      
The patch compiled, and 32 simultaneous netperf sessions ran fine. Also:
      # tc -s qdisc show dev eth2
      qdisc pfifo_fast 0: root bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
       Sent 16835026752 bytes 373116 pkt (dropped 0, overlimits 0 requeues 25) 
       rate 0bit 0pps backlog 0b 0p requeues 25 
Signed-off-by: Krishna Kumar <krkumar2@in.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
      a453e068
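The per-band check quoted above can be modeled in a few lines of userspace C. This is a minimal sketch, not kernel code: the `toy_qdisc` struct and function names are hypothetical, and it only shows how checking each band against tx_queue_len in isolation lets the qdisc as a whole hold up to 3 * tx_queue_len packets.

```c
#include <assert.h>
#include <stdbool.h>

#define PFIFO_FAST_BANDS 3

/* Hypothetical userspace model: each band only checks its own length
 * against tx_queue_len, so the whole qdisc can back up to
 * 3 * tx_queue_len packets. */
struct toy_qdisc {
    unsigned int tx_queue_len;               /* per-device limit */
    unsigned int band_len[PFIFO_FAST_BANDS]; /* packets queued per band */
};

/* Mirrors the check quoted in the commit message:
 *   if (skb_queue_len(list) < qdisc_dev(qdisc)->tx_queue_len) */
static bool toy_enqueue(struct toy_qdisc *q, int band)
{
    if (q->band_len[band] < q->tx_queue_len) {
        q->band_len[band]++;
        return true;             /* accepted */
    }
    return false;                /* dropped */
}

static unsigned int toy_backlog(const struct toy_qdisc *q)
{
    unsigned int total = 0;
    for (int i = 0; i < PFIFO_FAST_BANDS; i++)
        total += q->band_len[i];
    return total;
}
```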
6. 29 Aug, 2009 1 commit
  7. 25 Aug, 2009 1 commit
  8. 24 Aug, 2009 1 commit
  9. 23 Aug, 2009 1 commit
  10. 18 Aug, 2009 1 commit
  11. 07 Aug, 2009 1 commit
    • K
      net: Avoid enqueuing skb for default qdiscs · bbd8a0d3
Krishna Kumar committed
dev_queue_xmit enqueues an skb and calls qdisc_run, which
dequeues the skb and xmits it. In most cases, the skb that
is enqueued is the same one that is dequeued (unless the
queue gets stopped, or multiple CPUs write to the same queue
and end up racing with qdisc_run). For default qdiscs, we
can remove the redundant enqueue/dequeue and simply xmit the
skb, since the default qdisc is work-conserving.
      
The patch uses a new flag, TCQ_F_CAN_BYPASS, to identify the
default fast queue. The controversial part of the patch is
incrementing qlen when an skb is requeued; this avoids
checks like the second line below:
      
      +  } else if ((q->flags & TCQ_F_CAN_BYPASS) && !qdisc_qlen(q) &&
      >>         !q->gso_skb &&
      +          !test_and_set_bit(__QDISC_STATE_RUNNING, &q->state)) {
      
Results of two hours of testing with multiple netperf sessions (1,
2, 4, 8, 12 sessions on a 4-CPU System-X). The BW numbers are
aggregate Mb/s across iterations, tested with this version on
System-X boxes with Chelsio 10 Gbps cards:
      
      ----------------------------------
      Size |  ORG BW          NEW BW   |
      ----------------------------------
      128K |  156964          159381   |
      256K |  158650          162042   |
      ----------------------------------
      
      Changes from ver1:
      
      1. Move sch_direct_xmit declaration from sch_generic.h to
         pkt_sched.h
      2. Update qdisc basic statistics for direct xmit path.
      3. Set qlen to zero in qdisc_reset.
      4. Changed some function names to more meaningful ones.
Signed-off-by: Krishna Kumar <krkumar2@in.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
      bbd8a0d3
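The bypass decision described above can be sketched as a small userspace model. This is an illustrative assumption-laden sketch, not the kernel implementation: struct and function names are hypothetical, the flag value is arbitrary, and a plain bool stands in for the atomic test_and_set_bit() on __QDISC_STATE_RUNNING.

```c
#include <assert.h>
#include <stdbool.h>

#define TCQ_F_CAN_BYPASS 0x01   /* value is illustrative, not the kernel's */

/* Hypothetical model of the bypass: skip enqueue/dequeue when the
 * qdisc is empty, work-conserving, and not already running on
 * another CPU. Because the patch counts a requeued skb in qlen,
 * qlen == 0 alone is enough to allow the bypass (no separate
 * gso_skb check needed). */
struct toy_bypass_q {
    unsigned int flags;
    unsigned int qlen;      /* includes requeued skbs, per the patch */
    bool running;           /* models __QDISC_STATE_RUNNING */
};

static bool try_bypass(struct toy_bypass_q *q)
{
    if ((q->flags & TCQ_F_CAN_BYPASS) && q->qlen == 0 && !q->running) {
        q->running = true;  /* models test_and_set_bit() */
        /* ... sch_direct_xmit() would transmit the skb here ... */
        q->running = false;
        return true;        /* transmitted directly */
    }
    return false;           /* fall back to enqueue + qdisc_run */
}
```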
12. 06 Jul, 2009 1 commit
  13. 18 Jun, 2009 2 commits
  14. 15 Jun, 2009 1 commit
  15. 13 Jun, 2009 1 commit
  16. 09 Jun, 2009 2 commits
  17. 03 Jun, 2009 2 commits
  18. 02 Jun, 2009 1 commit
  19. 27 May, 2009 1 commit
  20. 26 May, 2009 1 commit
    • E
      net: txq_trans_update() helper · 08baf561
Eric Dumazet committed
We would like to get rid of the netdev->trans_start = jiffies; that nearly
all net drivers have to do in their start_xmit() function, and use
txq->trans_start instead.

This can be done generically in the core network code, as suggested by David.

Some devices (particularly loopback) don't need the trans_start update,
because they don't have a transmit watchdog. We could add a new device flag,
or rely on the fact that txq->trans_start only needs updating when
txq->xmit_lock_owner is different from -1. Use a helper function to hide
our choice.
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
      08baf561
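The helper idea above fits in a few lines. This is a minimal userspace sketch under stated assumptions: `toy_txq` and `toy_txq_trans_update` are hypothetical names, and a plain global stands in for the kernel's jiffies counter.

```c
#include <assert.h>

/* Fake jiffies for this userspace sketch. */
static unsigned long toy_jiffies = 1000;

struct toy_txq {
    int xmit_lock_owner;        /* -1 means lockless / no watchdog */
    unsigned long trans_start;
};

/* Models the commit's choice: update trans_start only for queues
 * whose driver actually takes the xmit lock (xmit_lock_owner != -1);
 * loopback-style devices keep -1 and need no watchdog timestamp. */
static void toy_txq_trans_update(struct toy_txq *txq)
{
    if (txq->xmit_lock_owner != -1)
        txq->trans_start = toy_jiffies;
}
```

The point of hiding the check in a helper is that core code can call it unconditionally after every xmit, without a new device flag.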
21. 20 May, 2009 1 commit
  22. 19 May, 2009 1 commit
  23. 18 May, 2009 2 commits
  24. 07 May, 2009 1 commit
  25. 03 May, 2009 1 commit
  26. 20 Apr, 2009 1 commit
    • J
      net: sch_netem: Fix an inconsistency in ingress netem timestamps. · 8caf1539
Jarek Poplawski committed
      Alex Sidorenko reported:
      
      "while experimenting with 'netem' we have found some strange behaviour. It
      seemed that ingress delay as measured by 'ping' command shows up on some
      hosts but not on others.
      
      After some investigation I have found that the problem is that skbuff->tstamp
      field value depends on whether there are any packet sniffers enabled. That
      is:
      
      - if any ptype_all handler is registered, the tstamp field is as expected
      - if there are no ptype_all handlers, the tstamp field does not show the delay"
      
This patch prevents the unnecessary update of tstamp in dev_queue_xmit_nit()
on the ingress path (with act_mirred) by adding a check, so there is minimal
overhead on the fast path and the update only happens when sniffers etc. are
active.

Since netem at ingress logically emulates a network in front of a host,
tstamp is zeroed to trigger the update and make the delays appear to come
from outside.
Reported-by: Alex Sidorenko <alexandre.sidorenko@hp.com>
Tested-by: Alex Sidorenko <alexandre.sidorenko@hp.com>
Signed-off-by: Jarek Poplawski <jarkao2@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
      8caf1539
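The stamp-only-if-unset logic the patch adds can be modeled in userspace. A minimal sketch, with hypothetical names (`toy_skb`, `toy_xmit_nit`) and a fake clock: the tap path stamps a packet only when its tstamp is still zero, and netem at ingress zeroes tstamp before re-injecting, so the packet is re-stamped after the emulated delay regardless of whether sniffers are running.

```c
#include <assert.h>

struct toy_skb {
    unsigned long long tstamp;  /* 0 means "not yet stamped" */
};

static unsigned long long toy_now = 500;  /* fake current time */

/* Models the check added to dev_queue_xmit_nit(): don't overwrite an
 * existing timestamp, only fill one in when it is still zero. */
static void toy_xmit_nit(struct toy_skb *skb)
{
    if (skb->tstamp == 0)
        skb->tstamp = toy_now;
}

/* Models netem at ingress: zero the stamp so the next tap sees the
 * packet as fresh and the emulated delay becomes visible. */
static void toy_netem_ingress(struct toy_skb *skb)
{
    skb->tstamp = 0;
}
```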
27. 14 Apr, 2009 1 commit
  28. 22 Mar, 2009 1 commit
  29. 16 Mar, 2009 1 commit
    • J
      pkt_sched: Change misleading code in class delete. · 7cd0a638
Jarek Poplawski committed
While looking for a possible cause of the bugzilla report on an HTB oops:
http://bugzilla.kernel.org/show_bug.cgi?id=12858
I found that the code in htb_delete calling htb_destroy_class on a zero
refcount is very misleading: it suggests this is a common path and that
destroy is called under sch_tree_lock. Actually, this can never happen,
because cops->get() is done before deletion, and after the delete the
class is still used by tclass_notify. The class destroy is always called
from cops->put(), so without sch_tree_lock.
      
This doesn't mean much now (since 2.6.27), because all the vulnerable
calls were moved from htb_destroy_class to htb_delete, but there was a
bug in older kernels. The same change is made to the other classful
scheds, which, it seems, didn't have similar locking problems here.
Reported-by: m0sia <m0sia@m0sia.ru>
Signed-off-by: Jarek Poplawski <jarkao2@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
      7cd0a638
30. 05 Mar, 2009 1 commit
  31. 27 Feb, 2009 1 commit
  32. 10 Feb, 2009 1 commit
  33. 01 Feb, 2009 3 commits