1. 11 Aug 2011 (1 commit)
  2. 22 Jul 2011 (1 commit)
  3. 18 Apr 2011 (1 commit)
  4. 21 Oct 2010 (1 commit)
  5. 25 Feb 2010 (1 commit)
      RDMA/cxgb3: Doorbell overflow avoidance and recovery · e998f245
      Authored by Steve Wise
      T3 hardware doorbell FIFO overflows can cause application stalls due
      to lost doorbell ring events.  This has been seen when running large
      NP IMB alltoall MPI jobs.  The T3 hardware supports an xon/xoff-type
      flow control mechanism to help avoid overflowing the HW doorbell FIFO.
      
      This patch uses these interrupts to disable RDMA QP doorbell rings
      when we near an overflow condition, and then turn them back on (and
      ring all the active QP doorbells) when the doorbell FIFO empties
      out.  In addition, if a doorbell ring is dropped by the hardware, the
      code will now recover.
      
      Design:
      
      cxgb3:
      - enable these DB interrupts
      - in the interrupt handler, schedule work tasks to call the ULPs event
        handlers with the new events.
      - ring all the qset txqs when an overflow is detected.
      
      iw_cxgb3:
      - disable db ringing on all active qps when we get the DB_FULL event
      - enable db ringing on all active qps and ring all active dbs when we get
        the DB_EMPTY event
      - On DB_DROP event:
             - disable db rings in the event handler
             - delay-schedule a work task which rings and enables the dbs on
               all active qps.
      - in post_send and post_recv logic, don't ring the db if it's disabled.
      Signed-off-by: Steve Wise <swise@opengridcomputing.com>
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
  6. 13 Oct 2009 (1 commit)
  7. 01 Sep 2009 (1 commit)
  8. 09 Jul 2009 (2 commits)
  9. 11 Jun 2009 (1 commit)
      cxgb3: remove __GFP_NOFAIL usage · 74b793e1
      Authored by Divy Le Ray
      Pre-allocate a skb at init time to be used for control messages to the HW
      if skb allocation fails.
      
      Tolerate failures to send messages initializing some memories at the cost of
      parity error detection for these memories.
      Retry sending connection id release messages if both alloc_skb(GFP_ATOMIC)
      and alloc_skb(GFP_KERNEL) fail.
      Do not bring the interface up if messages binding queue set to port fail to
      be sent.
      Signed-off-by: Divy Le Ray <divy@chelsio.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  10. 29 May 2009 (1 commit)
      cxgb3: fix dma mapping regression · 10b6d956
      Authored by Divy Le Ray
      Commit 5e68b772
        cxgb3: map entire Rx page, feed map+offset to Rx ring.
      
      introduced a regression on platforms defining DECLARE_PCI_UNMAP_ADDR()
      and related macros as no-ops.
      
      Rx descriptors are fed with a page buffer bus address plus a page
      chunk offset.  The page buffer bus address is set and retrieved
      through pci_unmap_addr_set() and pci_unmap_addr().  These functions
      are no-ops on x86 (when CONFIG_DMA_API_DEBUG is not set), so the HW
      ends up with a bogus bus address.
      
      This patch saves the page buffer bus address on all platforms.
      Signed-off-by: Divy Le Ray <divy@chelsio.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  11. 16 Apr 2009 (1 commit)
  12. 27 Mar 2009 (2 commits)
  13. 14 Mar 2009 (3 commits)
  14. 22 Jan 2009 (2 commits)
  15. 11 Jan 2009 (1 commit)
      cxgb3: Keep LRO off if disabled when interface is down · 47fd23fe
      Authored by Roland Dreier
      I have a system with a Chelsio adapter (driven by cxgb3) whose ports are
      part of a Linux bridge.  Recently I updated the kernel and discovered
      that things stopped working because cxgb3 was doing LRO on packets that
      were passed into the bridge code for forwarding.  (Incidentally, this
      problem manifested itself in a strange way that made debugging a bit
      interesting -- for some reason, the skb_warn_if_lro() check in bridge
      didn't trigger and these LROed packets were forwarded out a forcedeth
      interface, and caused the forcedeth transmit path to get stuck.)
      
      This is because cxgb3 has no way of keeping state for the LRO flag until
      the interface is brought up, so if the bridging code disables LRO while
      the interface is down, then cxgb3_up() will just reenable LRO, and on my
      Debian system at least, the init scripts add interfaces to a bridge
      before bringing the interfaces up.
      
      Fix this by keeping track of each interface's LRO state in cxgb3 so that
      when bridge disables LRO, it stays disabled in cxgb3_up() when the
      interface is brought up.  I did this by changing the rx_csum_offload
      flag into a pair of bit flags; the effect of this on the rx_eth() fast
      path is minuscule enough that it should be fine (e.g. on x86, a cmpb
      instruction becomes a testb instruction).
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  16. 19 Dec 2008 (1 commit)
  17. 16 Dec 2008 (1 commit)
  18. 14 Oct 2008 (1 commit)
  19. 09 Oct 2008 (3 commits)
  20. 22 Sep 2008 (1 commit)
  21. 22 May 2008 (2 commits)
  22. 13 May 2008 (1 commit)
  23. 19 Apr 2008 (1 commit)
  24. 29 Jan 2008 (1 commit)
  25. 24 Oct 2007 (1 commit)
  26. 20 Oct 2007 (1 commit)
  27. 11 Oct 2007 (1 commit)
      [NET]: Make NAPI polling independent of struct net_device objects. · bea3348e
      Authored by Stephen Hemminger
      Several devices have multiple independent RX queues per net
      device, and some have a single interrupt doorbell for several
      queues.
      
      In either case, it's easier to support layouts like that if the
      structure representing the poll is independent from the net
      device itself.
      
      The signature of the ->poll() call back goes from:
      
      	int foo_poll(struct net_device *dev, int *budget)
      
      to
      
      	int foo_poll(struct napi_struct *napi, int budget)
      
      The caller is returned the number of RX packets processed (or
      the number of "NAPI credits" consumed if you want to get
      abstract).  The callee no longer messes around bumping
      dev->quota, *budget, etc. because that is all handled in the
      caller upon return.
      
      The napi_struct is to be embedded in the device driver private data
      structures.
      
      Furthermore, it is the driver's responsibility to disable all NAPI
      instances in its ->stop() device close handler.  Since the
      napi_struct is privatized into the driver's private data structures,
      only the driver knows how to get at all of the napi_struct instances
      it may have per-device.
      
      With lots of help and suggestions from Rusty Russell, Roland Dreier,
      Michael Chan, Jeff Garzik, and Jamal Hadi Salim.
      
      Bug fixes from Thomas Graf, Roland Dreier, Peter Zijlstra,
      Joseph Fannin, Scott Wood, Hans J. Koch, and Michael Chan.
      
      [ Ported to current tree and all drivers converted.  Integrated
        Stephen's follow-on kerneldoc additions, and restored poll_list
        handling to the old style to fix mutual exclusion issues.  -DaveM ]
      Signed-off-by: Stephen Hemminger <shemminger@linux-foundation.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  28. 31 Aug 2007 (1 commit)
  29. 09 Jul 2007 (1 commit)
  30. 27 Feb 2007 (2 commits)
  31. 06 Feb 2007 (1 commit)