1. 30 Sep 2006, 1 commit
    • [BNX2]: Disable MSI on 5706 if AMD 8132 bridge is present. · f9317a40
      Authored by Michael Chan
      MSI is defined to be a 32-bit write.  The 5706 does 64-bit MSI writes
      with byte enables disabled on the unused 32-bit word.  This is legal
      but causes problems on the AMD 8132, which will eventually stop
      responding.
      
      Without this patch, the MSI test done by the driver during open will
      pass, but MSI will eventually stop working after a few MSIs are
      written by the device.
      
      AMD believes this incompatibility is unique to the 5706, and
      prefers to locally disable MSI rather than globally disabling it
      using pci_msi_quirk.
      
      Update version to 1.4.45.
      Signed-off-by: Michael Chan <mchan@broadcom.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  2. 23 Sep 2006, 1 commit
  3. 14 Sep 2006, 2 commits
  4. 20 Aug 2006, 1 commit
  5. 18 Aug 2006, 2 commits
    • [BNX2]: Convert to netdev_alloc_skb() · 932f3772
      Authored by Michael Chan
      Convert dev_alloc_skb() to netdev_alloc_skb() and increase the default
      rx ring size to 255.  The old ring size of 100 was too small.
      
      Update version to 1.4.44.
      Signed-off-by: Michael Chan <mchan@broadcom.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [BNX2]: Fix tx race condition. · 2f8af120
      Authored by Michael Chan
      Fix a subtle race condition between bnx2_start_xmit() and bnx2_tx_int()
      similar to the one in tg3 discovered by Herbert Xu:
      
      CPU0					CPU1
      bnx2_start_xmit()
      	if (tx_ring_full) {
      		tx_lock
      					bnx2_tx()
      						if (!netif_queue_stopped)
      		netif_stop_queue()
      		if (!tx_ring_full)
      						update_tx_ring
      			netif_wake_queue()
      		tx_unlock
      	}
      
      Even though tx_ring is updated before the if statement in bnx2_tx_int() in
      program order, it can be re-ordered by the CPU as shown above.  This
      scenario can cause the tx queue to be stopped forever if bnx2_tx_int() has
      just freed up the entire tx_ring.  The possibility of this happening
      should be very rare though.
      
      The following changes are made, very much identical to the tg3 fix:
      
      1. Add memory barrier to fix the above race condition.
      
      2. Eliminate the private tx_lock altogether and rely solely on
      netif_tx_lock.  This eliminates one spinlock in bnx2_start_xmit()
      when the ring is full.
      
      3. Because of 2, use netif_tx_lock in bnx2_tx_int() before calling
      netif_wake_queue().
      
      4. Add memory barrier to bnx2_tx_avail().
      
      5. Add bp->tx_wake_thresh which is set to half the tx ring size.
      
      6. Check for the full wake queue condition before getting
      netif_tx_lock in bnx2_tx_int().  This reduces the number of unnecessary
      spinlocks when the tx ring is full in a steady-state condition.
      Signed-off-by: Michael Chan <mchan@broadcom.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  6. 09 Jul 2006, 1 commit
  7. 06 Jul 2006, 2 commits
  8. 03 Jul 2006, 1 commit
  9. 01 Jul 2006, 1 commit
  10. 30 Jun 2006, 3 commits
  11. 23 Jun 2006, 1 commit
    • [NET]: Merge TSO/UFO fields in sk_buff · 7967168c
      Authored by Herbert Xu
      Having separate fields in sk_buff for TSO/UFO (tso_size/ufo_size) is not
      going to scale if we add any more segmentation methods (e.g., DCCP).  So
      let's merge them.
      
      They were used to tell the protocol of a packet.  This function has been
      subsumed by the new gso_type field.  This is essentially a set of netdev
      feature bits (shifted by 16 bits) that are required to process a specific
      skb.  As such it's easy to tell whether a given device can process a GSO
      skb: you just AND the gso_type field with the netdev's features
      field.
      
      I've made gso_type a conjunction.  The idea is that you have a base type
      (e.g., SKB_GSO_TCPV4) that can be modified further to support new features.
      For example, if we add a hardware TSO type that supports ECN, they would
      declare NETIF_F_TSO | NETIF_F_TSO_ECN.  All TSO packets with CWR set would
      have a gso_type of SKB_GSO_TCPV4 | SKB_GSO_TCPV4_ECN while all other TSO
      packets would be SKB_GSO_TCPV4.  This means that only the CWR packets need
      to be emulated in software.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  12. 18 Jun 2006, 7 commits
  13. 23 May 2006, 2 commits
  14. 13 Apr 2006, 1 commit
  15. 23 Mar 2006, 5 commits
  16. 21 Mar 2006, 7 commits
  17. 04 Mar 2006, 1 commit
  18. 24 Jan 2006, 1 commit