1. 17 Mar, 2012: 1 commit
      ixgbe: Replace standard receive path with a page based receive · f800326d
      Committed by Alexander Duyck
      This patch replaces the existing Rx hot path in the ixgbe driver with a new
      implementation based on a double-buffered receive.  The ixgbe driver already
      had something similar in place for its packet-split path; however, in that
      case we were still receiving the packet header into the sk_buff.  The big
      change here is that the entire receive path now receives into pages only,
      and then pulls the header out of the page and copies it into the sk_buff
      data.  There are several motivations behind this approach.
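
      A minimal sketch of the header-pull idea described above; the helper, its
      parameters, and the 256-byte header budget are illustrative assumptions,
      not the driver's actual code:

          #include <linux/kernel.h>
          #include <linux/string.h>
          #include <linux/netdevice.h>
          #include <linux/skbuff.h>
          #include <linux/mm.h>

          /* Hypothetical helper: data has already landed in a page, so the
           * sk_buff is allocated only now, the header bytes are copied into
           * skb->data, and the rest of the page is attached as a fragment.
           */
          static struct sk_buff *example_build_skb(struct net_device *netdev,
                                                   struct page *page,
                                                   unsigned int offset,
                                                   unsigned int len)
          {
                  void *data = page_address(page) + offset;
                  /* arbitrary example header budget, not the driver's value */
                  unsigned int hlen = min_t(unsigned int, len, 256);
                  struct sk_buff *skb;

                  skb = netdev_alloc_skb_ip_align(netdev, hlen);
                  if (!skb)
                          return NULL;

                  memcpy(__skb_put(skb, hlen), data, hlen);

                  /* attach the remainder of the data as a paged fragment */
                  if (len > hlen)
                          skb_add_rx_frag(skb, 0, page, offset + hlen,
                                          len - hlen, PAGE_SIZE / 2);
                  return skb;
          }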
      
      First, this allows us to avoid several cache misses: previously we took one
      set of cache misses to allocate the sk_buff and then another set to receive
      data into it.  We now avoid those misses on receive because the sk_buff is
      allocated only once data is available.
      
      Second, we see a considerable performance gain when an IOMMU is enabled
      because we are no longer unmapping every buffer on receive.  Instead we can
      delay the unmap until we are unable to reuse the page, and in the meantime
      simply call sync_single_range on the half of the page that contains new
      data.
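
      A hedged sketch of the deferred-unmap idea, assuming a hypothetical
      rx-buffer struct (field and function names here are illustrative, not
      the driver's):

          #include <linux/dma-mapping.h>
          #include <linux/mm.h>

          /* Hypothetical per-buffer bookkeeping for a half-page receive. */
          struct example_rx_buffer {
                  struct page *page;
                  dma_addr_t dma;            /* mapping kept alive across receives */
                  unsigned int page_offset;  /* which half the NIC wrote into */
          };

          /* Instead of dma_unmap_page() on every receive, sync only the half
           * of the page the hardware just filled, then flip to the other
           * half so the page can be reused.
           */
          static void example_rx_sync(struct device *dev,
                                      struct example_rx_buffer *rx_buf,
                                      unsigned int len)
          {
                  dma_sync_single_range_for_cpu(dev, rx_buf->dma,
                                                rx_buf->page_offset, len,
                                                DMA_FROM_DEVICE);

                  rx_buf->page_offset ^= PAGE_SIZE / 2;
          }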
      
      Finally, we are able to drop a considerable amount of code from the driver
      as we no longer have to support two different receive modes, packet split
      and one buffer.  This allows us to optimize the Rx path further since less
      branching is required.
      Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
      Tested-by: Ross Brattain <ross.b.brattain@intel.com>
      Tested-by: Stephen Ko <stephen.s.ko@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
  2. 14 Mar, 2012: 1 commit
  3. 13 Mar, 2012: 5 commits
  4. 11 Feb, 2012: 4 commits
  5. 03 Feb, 2012: 1 commit
  6. 06 Jan, 2012: 1 commit
  7. 18 Oct, 2011: 1 commit
  8. 17 Oct, 2011: 1 commit
  9. 13 Oct, 2011: 1 commit
  10. 29 Sep, 2011: 1 commit
  11. 24 Sep, 2011: 2 commits
  12. 17 Sep, 2011: 2 commits
  13. 16 Sep, 2011: 2 commits
  14. 29 Aug, 2011: 1 commit
  15. 27 Aug, 2011: 1 commit
  16. 19 Aug, 2011: 2 commits
  17. 11 Aug, 2011: 1 commit
  18. 22 Jul, 2011: 4 commits
  19. 25 Jun, 2011: 3 commits
  20. 24 Jun, 2011: 2 commits
  21. 21 Jun, 2011: 1 commit
      ixgbe: DCB use existing TX and RX queues · e901acd6
      Committed by John Fastabend
      The number of TX and RX queues allocated depends on the device
      type, the current feature set, the number of online CPUs, and
      various compile flags.
      
      To enable DCB with multiple queues and allow it to coexist with
      all the features currently implemented, it has to set up a valid
      queue count.  This is done at init time using the FDIR and RSS
      max queue counts and allowing each TC to allocate a queue per
      CPU.
      
      DCB will now use available queues up to (8 x TCs); this is a somewhat
      arbitrary cap, but it allows DCB to use up to 64 queues.  It's easy to
      increase this later if that is needed.
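
      An illustrative sketch of the queue-count math described above (the
      constant and helper names are assumptions, not the driver's):

          #include <linux/kernel.h>
          #include <linux/cpumask.h>

          /* Hypothetical cap: at most 8 queues per traffic class. */
          #define EXAMPLE_MAX_QUEUES_PER_TC 8

          /* Each TC may use up to one queue per online CPU, capped at 8,
           * so 8 TCs never exceed 64 queues in total.
           */
          static unsigned int example_dcb_queue_count(unsigned int num_tcs)
          {
                  unsigned int per_tc = min_t(unsigned int, num_online_cpus(),
                                              EXAMPLE_MAX_QUEUES_PER_TC);

                  return num_tcs * per_tc;
          }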
      
      This is prep work to enable Flow Director with DCB.  After this,
      DCB can easily coexist with existing features and no longer
      needs its own DCB feature ring.
      Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
      Tested-by: Ross Brattain <ross.b.brattain@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
  22. 15 May, 2011: 2 commits