1. 23 Jul 2008, 4 commits
  2. 17 Apr 2008, 1 commit
  3. 26 Mar 2008, 2 commits
  4. 31 Oct 2007, 1 commit
  5. 11 Oct 2007, 1 commit
    • [NET]: Make NAPI polling independent of struct net_device objects. · bea3348e
      Authored by Stephen Hemminger
      Several devices have multiple independent RX queues per net
      device, and some have a single interrupt doorbell for several
      queues.
      
      In either case, it's easier to support layouts like that if the
      structure representing the poll is independent from the net
      device itself.
      
      The signature of the ->poll() call back goes from:
      
      	int foo_poll(struct net_device *dev, int *budget)
      
      to
      
      	int foo_poll(struct napi_struct *napi, int budget)
      
      The caller is returned the number of RX packets processed (or
      the number of "NAPI credits" consumed if you want to get
      abstract).  The callee no longer messes around bumping
      dev->quota, *budget, etc. because that is all handled in the
      caller upon return.
      
      The napi_struct is to be embedded in the device driver private data
      structures.
      
      Furthermore, it is the driver's responsibility to disable all NAPI
      instances in its ->stop() device close handler.  Since the
      napi_struct is privatized into the driver's private data structures,
      only the driver knows how to get at all of the napi_struct instances
      it may have per-device.
      
      With lots of help and suggestions from Rusty Russell, Roland Dreier,
      Michael Chan, Jeff Garzik, and Jamal Hadi Salim.
      
      Bug fixes from Thomas Graf, Roland Dreier, Peter Zijlstra,
      Joseph Fannin, Scott Wood, Hans J. Koch, and Michael Chan.
      
      [ Ported to current tree and all drivers converted.  Integrated
        Stephen's follow-on kerneldoc additions, and restored poll_list
        handling to the old style to fix mutual exclusion issues.  -DaveM ]
      Signed-off-by: Stephen Hemminger <shemminger@linux-foundation.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      bea3348e
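The calling convention described above can be sketched outside the kernel with simplified stand-in types. This struct napi_struct and its pending field are illustrative placeholders, not the real definitions from <linux/netdevice.h>:

```c
/* Simplified stand-in for the kernel's struct napi_struct; the real
 * definition lives in <linux/netdevice.h> and carries a poll_list,
 * state bits, and the ->poll callback pointer. */
struct napi_struct {
    int pending;                  /* RX packets waiting in the ring */
};

/* New-style poll callback: takes the napi_struct and a budget, and
 * returns the number of RX packets ("NAPI credits") consumed.  The
 * callee no longer bumps dev->quota or *budget; the caller does all
 * budget accounting on return. */
static int foo_poll(struct napi_struct *napi, int budget)
{
    int done = napi->pending < budget ? napi->pending : budget;
    napi->pending -= done;        /* pretend we processed that many */
    return done;
}
```

Because the napi_struct is embedded in the driver's private data rather than in struct net_device, a device with several RX queues simply embeds one such instance per queue.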
  6. 18 May 2007, 1 commit
  7. 28 Apr 2007, 1 commit
  8. 18 Feb 2007, 1 commit
  9. 06 Feb 2007, 2 commits
  10. 02 Dec 2006, 3 commits
    • e1000: add dynamic itr modes · 835bb129
      Authored by Jesse Brandeburg
      Add a new dynamic itr algorithm, with 2 modes, and make it the default
      operation mode. This greatly reduces latency and increases small packet
      performance, at the "cost" of some CPU utilization. Bulk traffic
      throughput is unaffected.
      
      The driver can limit the amount of interrupts per second that the
      adapter will generate for incoming packets. It does this by writing a
      value to the adapter that is based on the maximum amount of interrupts
      that the adapter will generate per second.
      
      Setting InterruptThrottleRate to a value greater or equal to 100 will
      program the adapter to send out a maximum of that many interrupts per
      second, even if more packets have come in. This reduces interrupt
      load on the system and can lower CPU utilization under heavy load,
      but will increase latency as packets are not processed as quickly.
      
      The default behaviour of the driver previously assumed a static
      InterruptThrottleRate value of 8000, providing a good fallback value
      for all traffic types, but lacking in small packet performance and
      latency. The hardware can handle many more small packets per second
      however, and for this reason an adaptive interrupt moderation algorithm
      was implemented.
      
      Since 7.3.x, the driver has two adaptive modes (setting 1 or 3) in
      which it dynamically adjusts the InterruptThrottleRate value based on
      the traffic that it receives. After determining the type of incoming
      traffic in the last timeframe, it will adjust the InterruptThrottleRate
      to an appropriate value for that traffic.
      
      The algorithm classifies the incoming traffic every interval into
      classes.  Once the class is determined, the InterruptThrottleRate
      value is adjusted to suit that traffic type the best. There are
      three classes defined: "Bulk traffic", for large amounts of packets
      of normal size; "Low latency", for small amounts of traffic and/or
      a significant percentage of small packets; and "Lowest latency",
      for almost completely small packets or minimal traffic.
      
      In dynamic conservative mode, the InterruptThrottleRate value is
      set to 4000 for traffic that falls in class "Bulk traffic". If
      traffic falls in the "Low latency" or "Lowest latency" class, the
      InterruptThrottleRate is increased stepwise to 20000. This default
      mode is suitable for most applications.
      
      For situations where low latency is vital such as cluster or
      grid computing, the algorithm can reduce latency even more when
      InterruptThrottleRate is set to mode 1. In this mode, which operates
      the same as mode 3, the InterruptThrottleRate will be increased
      stepwise to 70000 for traffic in class "Lowest latency".
      
      Setting InterruptThrottleRate to 0 turns off any interrupt moderation
      and may improve small packet latency, but is generally not suitable
      for bulk throughput traffic.
      Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
      Cc: Rick Jones <rick.jones2@hp.com>
      Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
      835bb129
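The classify-then-adjust loop described above can be sketched as follows. The class thresholds and the step size are hypothetical placeholders; only the 4000 and 20000 ITR targets for conservative mode come from the text:

```c
/* Traffic classes from the commit message. */
enum itr_class { ITR_BULK, ITR_LOW_LATENCY, ITR_LOWEST_LATENCY };

/* Classify the last interval's traffic.  The 128/512-byte average
 * sizes and the 32-packet cutoff are illustrative thresholds, not
 * the driver's actual constants. */
static enum itr_class classify_traffic(unsigned int packets,
                                       unsigned int bytes)
{
    if (packets == 0)
        return ITR_LOWEST_LATENCY;          /* minimal traffic */
    unsigned int avg = bytes / packets;
    if (avg < 128)
        return ITR_LOWEST_LATENCY;          /* almost all small packets */
    if (avg < 512 || packets < 32)
        return ITR_LOW_LATENCY;             /* small and/or light traffic */
    return ITR_BULK;                        /* normal-size bulk traffic */
}

/* Dynamic conservative mode (3): bulk traffic pins the rate at 4000
 * interrupts/sec; the latency classes step it up toward 20000. */
static unsigned int next_itr_conservative(unsigned int cur, enum itr_class c)
{
    switch (c) {
    case ITR_BULK:
        return 4000;
    case ITR_LOW_LATENCY:
    case ITR_LOWEST_LATENCY:
        cur += 2000;                        /* stepwise increase */
        return cur > 20000 ? 20000 : cur;
    }
    return cur;
}
```

Mode 1 would differ only in letting the "Lowest latency" class step all the way to 70000 instead of capping at 20000.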
    • e1000: add queue restart counter · fcfb1224
      Authored by Jesse Brandeburg
      Add a netif_wake/start_queue counter to the ethtool statistics to indicate
      to the user that their transmit ring could be too small for their workload.
      Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
      Cc: Jamal Hadi <hadi@cyberus.ca>
      Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
      fcfb1224
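A minimal sketch of such a counter, with hypothetical structure and function names (the real driver bumps it where it calls netif_wake_queue and exports it through its ethtool stats table):

```c
/* Hypothetical per-queue state; not the driver's actual layout. */
struct tx_queue {
    int stopped;                       /* 1 while the TX ring is full */
    unsigned long restart_queue;       /* shown by `ethtool -S` */
};

/* Called from the TX-clean path once descriptors have been reclaimed
 * and the ring has room again.  A steadily rising restart_queue count
 * tells the user the transmit ring may be too small for the workload. */
static void wake_tx_queue(struct tx_queue *q)
{
    if (q->stopped) {
        q->stopped = 0;
        q->restart_queue++;
    }
}
```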
    • e1000: FIX: enable hw TSO for IPV6 · 87ca4e5b
      Authored by Auke Kok
      Enable TSO for IPV6. All e1000 hardware supports it. This reduces CPU
      utilization by 50% when transmitting IPv6 frames.
      
      Fix symbol naming enabling ipv6 TSO. Turn off TSO6 for 10/100.
      Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
      87ca4e5b
  11. 28 Sep 2006, 3 commits
  12. 01 Sep 2006, 1 commit
  13. 29 Aug 2006, 2 commits
  14. 15 Jul 2006, 1 commit
  15. 01 Jul 2006, 1 commit
  16. 28 Jun 2006, 3 commits
  17. 15 Apr 2006, 3 commits
  18. 12 Mar 2006, 1 commit
  19. 03 Mar 2006, 5 commits
  20. 01 Mar 2006, 1 commit
    • [PATCH] e1000: revert to single descriptor for legacy receive path · a1415ee6
      Authored by Jeff Kirsher
      A recent patch attempted to enable more efficient memory usage by using
      only 2kB descriptors for jumbo frames.  The method used to implement this
      has since been commented upon as "illegal" and in recent kernels even
      causes a BUG when receiving ip fragments while using jumbo frames.
      This patch simply goes back to the way things were.  We expect some
      complaints, since order-3 allocation failures will come back with this
      change.
      Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
      a1415ee6
  21. 17 Jan 2006, 2 commits