1. 30 May 2009, 1 commit
  2. 29 May 2009, 2 commits
    • cxgb3: fix dma mapping regression · 10b6d956
      Divy Le Ray committed
      Commit 5e68b772 ("cxgb3: map entire Rx page, feed map+offset to Rx ring")
      introduced a regression on platforms that define DECLARE_PCI_UNMAP_ADDR()
      and related macros as no-ops.
      
      Rx descriptors are fed with a page buffer bus address + page chunk offset.
      The page buffer bus address is set and retrieved through
      pci_unmap_addr_set() and pci_unmap_addr(). These functions are no-ops
      on x86 (when CONFIG_DMA_API_DEBUG is not set), so the HW ends up with
      a bogus bus address.
      
      This patch saves the page buffer bus address for all platforms.
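      
      A minimal sketch of the idea (structure and field names are illustrative,
      not the actual cxgb3 code): the address fed to the HW is kept in a plain
      dma_addr_t that is always written, instead of living only behind
      pci_unmap_addr_set(), which compiles away on the affected platforms.
      
          /* Sketch only -- not the real cxgb3 structures. */
          struct rx_sw_desc_sketch {
              struct page *page;
              dma_addr_t dma_addr;        /* always stored, on every platform */
          };
      
          static dma_addr_t sketch_map_rx_page(struct pci_dev *pdev,
                                               struct rx_sw_desc_sketch *d,
                                               struct page *page,
                                               unsigned int len)
          {
              /*
               * pci_unmap_addr_set()/pci_unmap_addr() may expand to nothing
               * when DECLARE_PCI_UNMAP_ADDR() is a no-op, so the address the
               * HW needs must live in a field that always exists.
               */
              d->page = page;
              d->dma_addr = pci_map_page(pdev, page, 0, len, PCI_DMA_FROMDEVICE);
              return d->dma_addr;
          }
      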
      Signed-off-by: Divy Le Ray <divy@chelsio.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: dont update dev->trans_start in 10GB drivers · 28679751
      Eric Dumazet committed
      Follow-up to commits 9d21493b ("net: tx scalability works : trans_start")
      and 08baf561 ("net: txq_trans_update() helper").
      
      Now that the core network stack takes care of trans_start updates, don't
      do it in the drivers themselves, if possible. Multiqueue drivers can
      avoid one cache miss (on dev->trans_start) in their start_xmit()
      handler.
      
      Exceptions are NETIF_F_LLTX drivers (vxge & tehuti).
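      
      An illustrative sketch of the pattern in a generic, non-LLTX driver (not
      taken from any driver touched by this patch): the xmit handler no longer
      writes the timestamp itself, because the core updates the queue's
      trans_start via txq_trans_update() after calling the handler.
      
          /* Generic sketch; "foo" is a made-up driver name. */
          static int foo_start_xmit(struct sk_buff *skb, struct net_device *dev)
          {
              /* ... post the skb to the hardware Tx ring ... */
      
              /*
               * Old habit, now unnecessary here:
               *
               *     dev->trans_start = jiffies;
               *
               * The core maintains the per-queue trans_start for us, so this
               * write would only add a cache miss on the hot path.
               */
              return NETDEV_TX_OK;
          }
      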
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  3. 16 April 2009, 1 commit
  4. 27 March 2009, 3 commits
  5. 14 March 2009, 6 commits
  6. 05 February 2009, 1 commit
  7. 28 January 2009, 1 commit
  8. 22 January 2009, 1 commit
  9. 20 January 2009, 1 commit
  10. 11 January 2009, 1 commit
    • cxgb3: Keep LRO off if disabled when interface is down · 47fd23fe
      Roland Dreier committed
      I have a system with a Chelsio adapter (driven by cxgb3) whose ports are
      part of a Linux bridge.  Recently I updated the kernel and discovered
      that things stopped working because cxgb3 was doing LRO on packets that
      were passed into the bridge code for forwarding.  (Incidentally, this
      problem manifested itself in a strange way that made debugging a bit
      interesting -- for some reason, the skb_warn_if_lro() check in bridge
      didn't trigger and these LROed packets were forwarded out a forcedeth
      interface, and caused the forcedeth transmit path to get stuck.)
      
      This is because cxgb3 has no way of keeping state for the LRO flag until
      the interface is brought up, so if the bridging code disables LRO while
      the interface is down, then cxgb3_up() will just reenable LRO, and on my
      Debian system at least, the init scripts add interfaces to a bridge
      before bringing the interfaces up.
      
      Fix this by keeping track of each interface's LRO state in cxgb3, so that
      when the bridge code disables LRO, it stays disabled in cxgb3_up() when the
      interface is brought up.  I did this by changing the rx_csum_offload
      flag into a pair of bit flags; the effect of this on the rx_eth() fast
      path is minuscule enough that it should be fine (e.g. on x86, a cmpb
      instruction becomes a testb instruction).
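      
      Roughly, the shape of the change (flag and field names below are
      illustrative, not the exact cxgb3 ones): the lone rx-checksum integer
      becomes a flags word, and the up path tests the LRO bit instead of
      unconditionally re-enabling LRO.
      
          /* Illustrative only -- not the real cxgb3 names. */
          #define SKETCH_RX_CSUM_EN   0x1
          #define SKETCH_RX_LRO_EN    0x2
      
          struct port_sketch {
              unsigned int rx_offload;    /* replaces a lone rx_csum flag */
          };
      
          /* fast path: a bit test instead of a compare against zero */
          static inline int sketch_rx_csum_enabled(const struct port_sketch *p)
          {
              return p->rx_offload & SKETCH_RX_CSUM_EN;
          }
      
          /* slow path (cxgb3_up): honour the LRO bit rather than re-enable it */
          static inline int sketch_lro_enabled(const struct port_sketch *p)
          {
              return p->rx_offload & SKETCH_RX_LRO_EN;
          }
      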
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  11. 19 December 2008, 1 commit
  12. 16 December 2008, 1 commit
  13. 29 November 2008, 1 commit
    • cxgb3: Fix sparse warning and micro-optimize is_pure_response() · c5419e6f
      Roland Dreier committed
      The function is_pure_response() does "ntohl(var) & const" and then
      essentially just tests whether the result is 0 or not; this can be done
      more efficiently by computing "var & htonl(const)" instead and doing the
      byte swap at compile time instead of run time.
      
      This change slightly shrinks the compiled code; eg on x86-64 we save a
      couple of bswapl instructions:
      
      add/remove: 0/0 grow/shrink: 0/1 up/down: 0/-8 (-8)
      function                                     old     new   delta
      t3_sge_intr_msix_napi                        544     536      -8
      
      and this also has the pleasant side effect of fixing a sparse warning:
      
          drivers/net/cxgb3/sge.c:2313:15: warning: restricted degrades to integer
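      
      The transformation, sketched on a stand-alone example (the constant and
      names are made up; the real function tests flag bits in a big-endian
      response-descriptor word):
      
          #include <stdint.h>
          #include <arpa/inet.h>          /* htonl()/ntohl() */
      
          #define SKETCH_FLAGS 0x30000000u    /* illustrative constant */
      
          /* before: byte swap at run time on every call */
          static inline int is_pure_before(uint32_t be_flags)
          {
              return (ntohl(be_flags) & SKETCH_FLAGS) == 0;
          }
      
          /* after: swap the constant at compile time and test the raw word;
           * comparing against zero gives the same answer either way */
          static inline int is_pure_after(uint32_t be_flags)
          {
              return (be_flags & htonl(SKETCH_FLAGS)) == 0;
          }
      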
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  14. 27 November 2008, 1 commit
  15. 04 November 2008, 1 commit
  16. 14 October 2008, 1 commit
  17. 09 October 2008, 2 commits
  18. 25 September 2008, 1 commit
  19. 22 September 2008, 1 commit
  20. 27 July 2008, 1 commit
    • dma-mapping: add the device argument to dma_mapping_error() · 8d8bb39b
      FUJITA Tomonori committed
      Add per-device dma_mapping_ops support for CONFIG_X86_64, as the POWER
      architecture does:
      
      This enables us to cleanly fix the Calgary IOMMU issue that some devices
      are not behind the IOMMU (http://lkml.org/lkml/2008/5/8/423).
      
      I think that per-device dma_mapping_ops support would also be helpful for
      KVM people to support PCI passthrough, but Andi thinks that this makes it
      difficult to support PCI passthrough (see the above thread).  So I
      CC'ed this to the KVM camp.  Comments are appreciated.
      
      A pointer to dma_mapping_ops is added to struct dev_archdata.  If the
      pointer is non-NULL, DMA operations in asm/dma-mapping.h use it.  If it's
      NULL, the system-wide dma_ops pointer is used as before.
      
      If it's useful for KVM people, I plan to implement a mechanism to register
      a hook called when a new PCI (or DMA-capable) device is created (it works
      with hot plugging).  It enables IOMMUs to set up an appropriate
      dma_mapping_ops per device.
      
      The major obstacle is that dma_mapping_error doesn't take a pointer to the
      device, unlike the other DMA operations.  So x86 can't have dma_mapping_ops
      per device.  Note that all the POWER IOMMUs use the same dma_mapping_error
      function, so this is not a problem for POWER, but x86 IOMMUs use different
      dma_mapping_error functions.
      
      The first patch adds the device argument to dma_mapping_error.  The patch
      is trivial but large, since it touches lots of drivers and dma-mapping.h in
      all the architectures.
      
      This patch:
      
      dma_mapping_error() doesn't take a pointer to the device, unlike the other
      DMA operations, so we can't have dma_mapping_ops per device.
      
      Note that POWER already has dma_mapping_ops per device, but all the POWER
      IOMMUs use the same dma_mapping_error function.  x86 IOMMUs use different
      dma_mapping_error functions, hence the device argument is needed there.
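      
      The resulting change in driver calling convention, sketched on a generic
      mapping helper (the helper itself is made up for illustration):
      
          /* Sketch: a generic driver mapping path after this series. */
          static int sketch_map_buffer(struct pci_dev *pdev, void *buf,
                                       size_t len, dma_addr_t *out)
          {
              dma_addr_t mapping = dma_map_single(&pdev->dev, buf, len,
                                                  DMA_TO_DEVICE);
      
              /* old form: if (dma_mapping_error(mapping)) ... */
              if (dma_mapping_error(&pdev->dev, mapping))
                  return -ENOMEM;         /* let the caller unwind */
      
              *out = mapping;
              return 0;
          }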
      
      [akpm@linux-foundation.org: fix sge]
      [akpm@linux-foundation.org: fix svc_rdma]
      [akpm@linux-foundation.org: build fix]
      [akpm@linux-foundation.org: fix bnx2x]
      [akpm@linux-foundation.org: fix s2io]
      [akpm@linux-foundation.org: fix pasemi_mac]
      [akpm@linux-foundation.org: fix sdhci]
      [akpm@linux-foundation.org: build fix]
      [akpm@linux-foundation.org: fix sparc]
      [akpm@linux-foundation.org: fix ibmvscsi]
      Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
      Cc: Muli Ben-Yehuda <muli@il.ibm.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Avi Kivity <avi@qumranet.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  21. 22 May 2008, 3 commits
  22. 13 May 2008, 1 commit
  23. 26 March 2008, 1 commit
    • cxgb3: Fix lockdep problems with sge.reg_lock · b1186dee
      Roland Dreier committed
      Using iWARP with a Chelsio T3 NIC generates the following lockdep warning:
      
          =================================
          [ INFO: inconsistent lock state ]
          2.6.25-rc6 #50
          ---------------------------------
          inconsistent {softirq-on-W} -> {in-softirq-W} usage.
          swapper/0 [HC0[0]:SC1[1]:HE0:SE0] takes:
           (&adap->sge.reg_lock){-+..}, at: [<ffffffff880e5ee2>] cxgb_offload_ctl+0x3af/0x507 [cxgb3]
      
      The problem is that reg_lock is used with plain spin_lock() in
      drivers/net/cxgb3/sge.c but is used with spin_lock_irqsave() in
      drivers/net/cxgb3/cxgb3_offload.c.  This is technically a false
      positive, since the uses in sge.c are only in the initialization and
      cleanup paths and cannot overlap with any use in interrupt context.
      
      The best fix is probably just to use spin_lock_irq() with reg_lock in
      sge.c.  Even though it's not strictly required for correctness, it
      avoids triggering lockdep, and the extra overhead of disabling
      interrupts is not important at all in the initialization and cleanup
      slow paths.
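      
      The shape of the fix, sketched (the real changes are in the init/teardown
      paths of drivers/net/cxgb3/sge.c; the functions below are illustrative):
      
          /* Before: plain spin_lock() in process context, while other call
           * sites take the same lock with spin_lock_irqsave() from softirq
           * context -- lockdep flags the inconsistent usage. */
          static void sketch_setup_before(spinlock_t *reg_lock)
          {
              spin_lock(reg_lock);
              /* ... program SGE context registers ... */
              spin_unlock(reg_lock);
          }
      
          /* After: also disable interrupts; cheap, because this only runs on
           * the initialization and cleanup slow paths. */
          static void sketch_setup_after(spinlock_t *reg_lock)
          {
              spin_lock_irq(reg_lock);
              /* ... program SGE context registers ... */
              spin_unlock_irq(reg_lock);
          }
      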
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
      Signed-off-by: Jeff Garzik <jeff@garzik.org>
  24. 17 March 2008, 1 commit
    • cxgb3: Fix transmit queue stop mechanism · cd7e9034
      Divy Le Ray committed
      The last change to the Tx queue stop mechanism opens a window
      where the Tx queue might be stopped after the pending credits
      have been returned.
      
      Tx credits are returned via a control message generated by the HW.
      It returns Tx credits on demand, triggered by a completion bit
      set in the headers of selected transmit packets.
      
      The current code can leave the Tx queue stopped
      with all pending credits already returned and the current frame
      not triggering a credit return.  The Tx queue would then never be
      woken.
      
      The driver could alternatively request a completion for the packets
      that stop the queue.  It is, however, safer at this point to go back
      to the pre-existing behaviour.
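      
      For context, the usual way drivers guard against this kind of stall is a
      stop-then-recheck step in the xmit path; a generic sketch of that pattern
      (not the literal cxgb3 change) follows:
      
          /* Illustrative ring bookkeeping, not the cxgb3 structures. */
          struct tx_q_sketch {
              unsigned int size;      /* total descriptors in the ring */
              unsigned int in_use;    /* descriptors currently outstanding */
          };
      
          static void sketch_maybe_stop_tx(struct net_device *dev,
                                           struct tx_q_sketch *q,
                                           unsigned int needed)
          {
              if (likely(q->size - q->in_use >= needed))
                  return;
      
              netif_stop_queue(dev);
      
              /*
               * Credits may have been returned between the check above and
               * the stop; if so, wake the queue here, because no further
               * completion is guaranteed to arrive and do it for us.
               * (Real code also wants a memory barrier between the stop and
               * the re-check.)
               */
              if (q->size - q->in_use >= needed)
                  netif_wake_queue(dev);
          }
      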
      Signed-off-by: Divy Le Ray <divy@chelsio.com>
      Signed-off-by: Jeff Garzik <jeff@garzik.org>
  25. 11 February 2008, 1 commit
    • Optimize cxgb3 xmit path (a bit) · a8cc21f6
      Krishna Kumar committed
      	1. Add common code for stopping the queue (see the sketch below).
      	2. There is no need to call netif_stop_queue followed by
      	   netif_wake_queue (in fact a netif_start_queue could have been used
      	   instead); instead, call the stop-queue helper only if required, and
      	   remove the code under the USE_GTS macro.
      	3. There is no need to check for netif_queue_stopped, as the network
      	   core guarantees that for us (I am sure every driver could remove
      	   that check, e.g. e1000 - I have tested that path a few billion times
      	   with about a few hundred thousand qstops but the condition never
      	   hit even once).
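      
      A minimal sketch of the reworked path under those three points; all names
      below are made up, not the actual cxgb3 functions:
      
          struct tx_ring_sketch {
              unsigned int size;      /* total descriptors in the ring */
              unsigned int in_use;    /* descriptors currently in flight */
          };
      
          #define SKETCH_MAX_DESC 4   /* illustrative worst case per skb */
      
          /* (1) the stop-queue code, factored into one shared helper */
          static void sketch_stop_queue(struct net_device *dev,
                                        struct tx_ring_sketch *q)
          {
              netif_stop_queue(dev);
              /* completions may have freed room after the caller's test */
              if (q->size - q->in_use >= SKETCH_MAX_DESC)
                  netif_wake_queue(dev);
          }
      
          static int sketch_eth_xmit(struct sk_buff *skb, struct net_device *dev)
          {
              struct tx_ring_sketch *q = netdev_priv(dev);
      
              /* (3) no netif_queue_stopped() check: the core does not call
               * the xmit handler while the queue is stopped */
      
              q->in_use += SKETCH_MAX_DESC;
      
              /* (2) stop only when actually needed -- no stop-then-wake dance */
              if (unlikely(q->size - q->in_use < SKETCH_MAX_DESC))
                  sketch_stop_queue(dev, q);
      
              /* ... build descriptors and kick the hardware ... */
              return NETDEV_TX_OK;
          }
      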
      Signed-off-by: Krishna Kumar <krkumar2@in.ibm.com>
      Signed-off-by: Jeff Garzik <jeff@garzik.org>
  26. 03 February 2008, 1 commit
  27. 29 January 2008, 3 commits