1. 18 Aug, 2006 (1 commit)
    • [BNX2]: Fix tx race condition. · 2f8af120
      Authored by Michael Chan
      Fix a subtle race condition between bnx2_start_xmit() and bnx2_tx_int()
      similar to the one in tg3 discovered by Herbert Xu:
      
      CPU0					CPU1
      bnx2_start_xmit()
      	if (tx_ring_full) {
      		tx_lock
      					bnx2_tx()
      						if (!netif_queue_stopped)
      		netif_stop_queue()
      		if (!tx_ring_full)
      						update_tx_ring
      			netif_wake_queue()
      		tx_unlock
      	}
      
      Even though tx_ring is updated before the if statement in bnx2_tx_int() in
      program order, it can be re-ordered by the CPU as shown above.  This
      scenario can cause the tx queue to be stopped forever if bnx2_tx_int() has
      just freed up the entire tx_ring.  The possibility of this happening
      should be very rare though.
      
      The following changes are made, essentially identical to the tg3 fix:
      
      1. Add memory barrier to fix the above race condition.
      
      2. Eliminate the private tx_lock altogether and rely solely on
      netif_tx_lock.  This eliminates one spinlock in bnx2_start_xmit()
      when the ring is full.
      
      3. Because of 2, use netif_tx_lock in bnx2_tx_int() before calling
      netif_wake_queue().
      
      4. Add memory barrier to bnx2_tx_avail().
      
      5. Add bp->tx_wake_thresh which is set to half the tx ring size.
      
      6. Check for the full wake queue condition before getting
      netif_tx_lock in bnx2_tx_int().  This reduces the number of unnecessary
      spinlocks when the tx ring is full in a steady-state condition
      (the wake/stop pattern is sketched after this entry).
      Signed-off-by: Michael Chan <mchan@broadcom.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
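      The change list above maps onto a small amount of code.  Below is a
      minimal sketch, in kernel-style C, of the barrier and wake/stop pattern
      it describes; it is not the verbatim patch.  bnx2_tx_avail(),
      bnx2_start_xmit(), bnx2_tx_int() and bp->tx_wake_thresh come from the
      message above, while the helper names (bnx2_tx_wake, bnx2_tx_maybe_stop)
      and the ring-index fields (tx_prod, tx_cons, tx_ring_size) are assumed
      for illustration.

      static inline u32 bnx2_tx_avail(struct bnx2 *bp)
      {
              /* Item 4: the barrier keeps the reads of the producer/consumer
               * indices from being reordered before the tx ring update that
               * preceded this call.  Ring-index arithmetic is simplified. */
              smp_mb();
              return bp->tx_ring_size -
                     ((bp->tx_prod - bp->tx_cons) & (bp->tx_ring_size - 1));
      }

      /* Consumer side, called from bnx2_tx_int() (items 3 and 6). */
      static void bnx2_tx_wake(struct bnx2 *bp)
      {
              /* Item 6: unlocked check first, so netif_tx_lock is only taken
               * when a wakeup is actually plausible. */
              if (unlikely(netif_queue_stopped(bp->dev)) &&
                  bnx2_tx_avail(bp) > bp->tx_wake_thresh) {
                      netif_tx_lock(bp->dev);          /* item 3 */
                      if (netif_queue_stopped(bp->dev) &&
                          bnx2_tx_avail(bp) > bp->tx_wake_thresh)
                              netif_wake_queue(bp->dev);
                      netif_tx_unlock(bp->dev);
              }
      }

      /* Producer side, end of bnx2_start_xmit(), already under netif_tx_lock
       * (item 2: the private tx_lock is gone). */
      static void bnx2_tx_maybe_stop(struct bnx2 *bp)
      {
              if (unlikely(bnx2_tx_avail(bp) <= MAX_SKB_FRAGS)) {
                      netif_stop_queue(bp->dev);
                      /* Re-check after stopping: bnx2_tx_int() may have freed
                       * descriptors meanwhile, and the barrier in
                       * bnx2_tx_avail() makes the new consumer index visible
                       * here (item 1). */
                      if (bnx2_tx_avail(bp) > bp->tx_wake_thresh)
                              netif_wake_queue(bp->dev);
              }
      }

      The unlocked test in bnx2_tx_wake() is only an optimization; the
      decisive re-check happens under netif_tx_lock, which bnx2_start_xmit()
      also holds, so the stop and wake paths cannot both miss each other.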
  2. 09 Aug, 2006 (1 commit)
  3. 08 Aug, 2006 (2 commits)
    • [TG3]: Fix tx race condition · 1b2a7205
      Authored by Michael Chan
      Fix a subtle race condition between tg3_start_xmit() and tg3_tx()
      discovered by Herbert Xu <herbert@gondor.apana.org.au>:
      
      CPU0					CPU1
      tg3_start_xmit()
      	if (tx_ring_full) {
      		tx_lock
      					tg3_tx()
      						if (!netif_queue_stopped)
      		netif_stop_queue()
      		if (!tx_ring_full)
      						update_tx_ring 
      			netif_wake_queue()
      		tx_unlock
      	}
      
      Even though tx_ring is updated before the if statement in tg3_tx() in
      program order, it can be re-ordered by the CPU as shown above.  This
      scenario can cause the tx queue to be stopped forever if tg3_tx() has
      just freed up the entire tx_ring.  The possibility of this happening
      should be very rare though.
      
      The following changes are made:
      
      1. Add memory barrier to fix the above race condition.
      
      2. Eliminate the private tx_lock altogether and rely solely on
      netif_tx_lock.  This eliminates one spinlock in tg3_start_xmit()
      when the ring is full.
      
      3. Because of 2, use netif_tx_lock in tg3_tx() before calling
      netif_wake_queue().
      
      4. Change TX_BUFFS_AVAIL to an inline function with a memory barrier
      (sketched after this entry).  Herbert and David suggested using the
      memory barrier instead of volatile.
      
      5. Check for the full wake queue condition before getting
      netif_tx_lock in tg3_tx().  This reduces the number of unnecessary
      spinlocks when the tx ring is full in a steady-state condition.
      
      6. Update version to 3.65.
      Signed-off-by: Michael Chan <mchan@broadcom.com>
      Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
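      The wake/stop half of this fix is the same pattern sketched under the
      bnx2 entry above, so only item 4 is sketched here: converting the old
      TX_BUFFS_AVAIL macro into an inline with an explicit barrier.  This is
      a minimal illustration, not the verbatim patch; the name tg3_tx_avail()
      and the fields tx_pending, tx_prod, tx_cons and TG3_TX_RING_SIZE are
      assumed for the sketch.

      static inline u32 tg3_tx_avail(struct tg3 *tp)
      {
              /* The barrier replaces the old reliance on volatile: it orders
               * these reads after the tx ring update done by the caller, on
               * the CPU as well as in the compiler. */
              smp_mb();
              return tp->tx_pending -
                     ((tp->tx_prod - tp->tx_cons) & (TG3_TX_RING_SIZE - 1));
      }

      A volatile qualifier would only keep the compiler from caching the
      indices; it does nothing about CPU reordering, which is exactly what
      the race above relies on, hence the suggestion to use a memory barrier
      instead.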
    • [TG3]: skb->dev assignment is done by netdev_alloc_skb · d14cc9a3
      Authored by Christoph Hellwig
      All callers of netdev_alloc_skb need to assign skb->dev shortly
      afterwards.  Move it into common code (the caller-side effect is
      illustrated after this entry).
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
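      As a rough illustration of the caller-side effect (hypothetical RX
      refill code with made-up locals dev and buf_size, not taken from the
      patch): the skb->dev assignment drivers used to make right after
      allocation can simply be dropped, since netdev_alloc_skb() now sets it
      in common code.

      struct sk_buff *skb = netdev_alloc_skb(dev, buf_size);
      if (!skb)
              return -ENOMEM;
      /* skb->dev = dev;  -- no longer needed in each caller */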
  4. 04 Aug, 2006 (5 commits)
  5. 03 Aug, 2006 (9 commits)
  6. 29 Jul, 2006 (2 commits)
  7. 28 Jul, 2006 (5 commits)
  8. 26 Jul, 2006 (3 commits)
  9. 22 Jul, 2006 (5 commits)
  10. 21 Jul, 2006 (1 commit)
  11. 18 Jul, 2006 (3 commits)
  12. 15 Jul, 2006 (3 commits)