1. 13 Nov 2007, 2 commits
  2. 22 Oct 2007, 2 commits
  3. 11 Oct 2007, 8 commits
  4. 19 Jul 2007, 1 commit
    • [TG3]: Fix msi issue with kexec/kdump. · ee6a99b5
      Michael Chan committed
      Tina Yang <tina.yang@oracle.com> discovered an MSI-related problem
      when doing kdump.  The problem is that the kexec kernel is booted
      without going through system reset, and as a result, MSI may already
      be enabled when tg3_init_one() is called.  tg3_init_one() calls
      pci_save_state() which will save the stale MSI state.  Later on in
      tg3_open(), we call pci_enable_msi() to reconfigure MSI on the chip
      before we reset the chip.  After chip reset, we call
      pci_restore_state() which will put the stale MSI address/data back
      onto the chip.
      
      This is no longer a problem in the latest kernel because
      pci_restore_state() has been changed to restore MSI state from
      internal data structures which will guarantee restoring the proper
      MSI state.
      
      But I think we should still fix it.  Our save and restore sequence
      can still cause very subtle problems down the road.  The fix is to
      have our own functions save and restore precisely what we need.  We
      also change it to save and restore state inside tg3_chip_reset() in a
      more straightforward way.
      
      Thanks to Tina for helping to test and debug the problem.
      
      [ Bump driver version and release date. -DaveM ]
      Signed-off-by: Michael Chan <mchan@broadcom.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
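
      A minimal sketch of the save/restore approach described above, assuming
      hypothetical helpers tg3_save_pci_state()/tg3_restore_pci_state() and a
      reduced register list; the driver's actual implementation may track more
      state:

          /* Sketch (assumed names): snapshot only the config registers the
           * driver needs, then replay exactly those after the reset, so a
           * stale MSI address/data pair captured at probe time can never be
           * written back to the chip. */
          struct tg3_saved_pci_state {
                  u16 command;
                  u8  cacheline_sz;
                  u8  latency_timer;
          };

          static void tg3_save_pci_state(struct pci_dev *pdev,
                                         struct tg3_saved_pci_state *s)
          {
                  pci_read_config_word(pdev, PCI_COMMAND, &s->command);
                  pci_read_config_byte(pdev, PCI_CACHE_LINE_SIZE, &s->cacheline_sz);
                  pci_read_config_byte(pdev, PCI_LATENCY_TIMER, &s->latency_timer);
          }

          static void tg3_restore_pci_state(struct pci_dev *pdev,
                                            const struct tg3_saved_pci_state *s)
          {
                  pci_write_config_word(pdev, PCI_COMMAND, s->command);
                  pci_write_config_byte(pdev, PCI_CACHE_LINE_SIZE, s->cacheline_sz);
                  pci_write_config_byte(pdev, PCI_LATENCY_TIMER, s->latency_timer);
          }

          /* Inside tg3_chip_reset(): save, reset the core, then restore. */
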
  5. 12 Jul 2007, 2 commits
  6. 10 May 2007, 1 commit
    • tg3: use flush_work_keventd() · 2b3cb2e7
      Andrew Morton committed
      Convert tg3 over to flush_work_keventd().  Remove nasty now-unneeded deadlock
      avoidance logic.
      
      (akpm: bypassed maintainers, sorry.  There are other patches which depend on
      this)
      
      Cc: "Maciej W. Rozycki" <macro@linux-mips.org>
      Cc: David Howells <dhowells@redhat.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Michael Chan <mchan@broadcom.com>
      Cc: Jeff Garzik <jeff@garzik.org>
      Cc: Oleg Nesterov <oleg@tv-sign.ru>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  7. 07 May 2007, 1 commit
  8. 06 May 2007, 4 commits
  9. 26 Mar 2007, 2 commits
    • [TG3]: Exit irq handler during chip reset. · d18edcb2
      Michael Chan committed
      On most tg3 chips, the memory enable bit in the PCI command register
      gets cleared during chip reset and must be restored before accessing
      PCI registers using memory cycles.  The chip does not generate
      interrupts during chip reset, but the irq handler can still be called
      because of irq sharing or irqpoll.  Reading a register in the irq
      handler can cause a master abort in this scenario and may result in a
      crash on some architectures.
      
      Use the TG3_FLAG_CHIP_RESETTING flag to tell the irq handler to exit
      without touching any registers.  The checking of the flag is in the
      "slow" path of the irq handler and will not affect normal performance.
      The MSI handler is not shared and therefore does not require checking
      the flag.
      
      Thanks to Bernhard Walle <bwalle@suse.de> for reporting the problem.
      Signed-off-by: Michael Chan <mchan@broadcom.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
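
      A hedged sketch of the slow-path check described above; apart from
      TG3_FLAG_CHIP_RESETTING, which the message names, the handler shape and
      field names here are assumptions:

          static irqreturn_t tg3_interrupt(int irq, void *dev_id)
          {
                  struct net_device *dev = dev_id;
                  struct tg3 *tp = netdev_priv(dev);

                  /* During tg3_chip_reset() the PCI memory-enable bit may be
                   * cleared; a shared-irq or irqpoll invocation must not touch
                   * any register or it risks a master abort. */
                  if (tp->tg3_flags & TG3_FLAG_CHIP_RESETTING)
                          return IRQ_NONE;

                  /* ... normal status-block processing ... */
                  return IRQ_HANDLED;
          }
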
    • [TG3]: Eliminate the unused TG3_FLAG_SPLIT_MODE flag. · 1c46ae05
      Michael Chan committed
      This flag to support multiple PCIX split completions was never used
      because of hardware bugs.  This will make room for a new flag.
      Signed-off-by: Michael Chan <mchan@broadcom.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  10. 27 Feb 2007, 1 commit
    • [TG3]: TSO workaround fixes. · 7f62ad5d
      Michael Chan committed
      1.  Add race condition check after netif_stop_queue().  tg3_tx() runs
          without netif_tx_lock and can race with tg3_start_xmit_dma_bug() ->
          tg3_tso_bug().
      
      2.  Firmware TSO in 5703/5704/5705 also has the same TSO limitation,
          i.e. they cannot handle TSO headers bigger than 80 bytes.  Rename
          TG3_FL2_HW_TSO_1_BUG to TG3_FL2_TSO_BUG and set this flag on
          these chips as well.
      
      3.  Update version to 3.74.
      Signed-off-by: Michael Chan <mchan@broadcom.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
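
      A sketch of the re-check behind item 1; tg3_tx_avail() and
      TG3_TX_WAKEUP_THRESH are used here as illustrative names for the
      ring-space helpers:

          /* In the xmit path, running under netif_tx_lock. */
          if (unlikely(tg3_tx_avail(tp) <= (MAX_SKB_FRAGS + 1))) {
                  netif_stop_queue(dev);

                  /* tg3_tx() runs without netif_tx_lock: it may have freed the
                   * ring between our space check and the stop above, and since
                   * it saw the queue still running it did not wake it.  Re-check
                   * here and undo the stop ourselves if space is back. */
                  if (tg3_tx_avail(tp) > TG3_TX_WAKEUP_THRESH)
                          netif_wake_queue(dev);
          }
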
  11. 14 Feb 2007, 1 commit
  12. 09 Jan 2007, 1 commit
  13. 18 Dec 2006, 1 commit
  14. 07 Dec 2006, 1 commit
  15. 29 Sep 2006, 6 commits
  16. 08 Aug 2006, 1 commit
    • [TG3]: Fix tx race condition · 1b2a7205
      Michael Chan committed
      Fix a subtle race condition between tg3_start_xmit() and tg3_tx()
      discovered by Herbert Xu <herbert@gondor.apana.org.au>:
      
      CPU0					CPU1
      tg3_start_xmit()
      	if (tx_ring_full) {
      		tx_lock
      					tg3_tx()
      						if (!netif_queue_stopped)
      		netif_stop_queue()
      		if (!tx_ring_full)
      						update_tx_ring 
      			netif_wake_queue()
      		tx_unlock
      	}
      
      Even though tx_ring is updated before the if statement in tg3_tx() in
      program order, it can be re-ordered by the CPU as shown above.  This
      scenario can cause the tx queue to be stopped forever if tg3_tx() has
      just freed up the entire tx_ring.  The possibility of this happening
      should be very rare though.
      
      The following changes are made:
      
      1. Add memory barrier to fix the above race condition.
      
      2. Eliminate the private tx_lock altogether and rely solely on
      netif_tx_lock.  This eliminates one spinlock in tg3_start_xmit()
      when the ring is full.
      
      3. Because of 2, use netif_tx_lock in tg3_tx() before calling
      netif_wake_queue().
      
      4. Change TX_BUFFS_AVAIL to an inline function with a memory barrier.
      Herbert and David suggested using the memory barrier instead of
      volatile.
      
      5. Check for the full wake queue condition before getting
      netif_tx_lock in tg3_tx().  This reduces the number of unnecessary
      spinlocks when the tx ring is full in a steady-state condition.
      
      6. Update version to 3.65.
      Signed-off-by: Michael Chan <mchan@broadcom.com>
      Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
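
      A hedged sketch of items 1-5 taken together; the inline below mirrors the
      described TX_BUFFS_AVAIL replacement, and the barrier pairing is the
      essential part (names and surrounding code are approximate):

          /* TX_BUFFS_AVAIL as an inline with an explicit barrier instead of
           * marking the ring indices volatile. */
          static inline u32 tg3_tx_avail(struct tg3 *tp)
          {
                  smp_mb();
                  return tp->tx_pending -
                         ((tp->tx_prod - tp->tx_cons) & (TG3_TX_RING_SIZE - 1));
          }

          /* Consumer side (tg3_tx): publish tx_cons before testing whether the
           * queue needs waking.  The smp_mb() here pairs with the one in
           * tg3_tx_avail() on the producer side, so at least one side observes
           * the other's update and the queue cannot stay stopped forever. */
          tp->tx_cons = sw_idx;
          smp_mb();

          /* Check the wake condition before taking netif_tx_lock, so the
           * spinlock is only touched when a wakeup is actually plausible. */
          if (unlikely(netif_queue_stopped(dev) &&
                       tg3_tx_avail(tp) > TG3_TX_WAKEUP_THRESH)) {
                  netif_tx_lock(dev);
                  if (netif_queue_stopped(dev) &&
                      tg3_tx_avail(tp) > TG3_TX_WAKEUP_THRESH)
                          netif_wake_queue(dev);
                  netif_tx_unlock(dev);
          }
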
  17. 01 Jul 2006, 3 commits
  18. 18 Jun 2006, 2 commits
    • [TG3]: Convert to non-LLTX · 00b70504
      Michael Chan committed
      Herbert Xu pointed out that it is unsafe to call netif_tx_disable()
      from LLTX drivers because it uses dev->xmit_lock to synchronize
      whereas LLTX drivers use private locks.
      
      Convert tg3 to non-LLTX to fix this issue. tg3 is a lockless driver
      where hard_start_xmit and tx completion handling can run concurrently
      under normal conditions. A tx_lock is only needed to prevent
      netif_stop_queue and netif_wake_queue race conditions when the queue
      is full.
      
      So whether we use LLTX or non-LLTX, it makes practically no
      difference.
      Signed-off-by: Michael Chan <mchan@broadcom.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
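
      For context on why the LLTX case is unsafe: netif_tx_disable() of that
      era stops the queue under the core transmit lock, roughly as sketched
      below (reconstructed from the 2.6.17-era API; exact details may differ),
      so it only excludes a concurrent hard_start_xmit() if the driver actually
      transmits under that same lock, which an LLTX driver does not:

          /* Approximate 2.6.17-era semantics. */
          static inline void netif_tx_disable(struct net_device *dev)
          {
                  netif_tx_lock_bh(dev);   /* takes dev->xmit_lock */
                  netif_stop_queue(dev);
                  netif_tx_unlock_bh(dev);
          }
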
    • [TG3]: Add recovery logic when MMIOs are re-ordered · df3e6548
      Michael Chan committed
      Add recovery logic when we suspect that the system is re-ordering
      MMIOs. Re-ordered MMIOs to the send mailbox can cause bogus tx
      completions and hit BUG_ON() in the tx completion path.
      
      tg3 already has logic to handle re-ordered MMIOs by flushing the MMIOs
      that must be strictly ordered (such as the send mailbox).  Determining
      when to enable the flush is currently a manual process of adding known
      chipsets to a list.
      
      The new code replaces the BUG_ON() in the tx completion path with the
      call to tg3_tx_recover(). It will set the TG3_FLAG_MBOX_WRITE_REORDER
      flag and reset the chip later in the workqueue to recover and start
      flushing MMIOs to the mailbox.
      
      A message to report the problem will be printed. We will then decide
      whether or not to add the host bridge to the list of chipsets that do
      re-ordering.
      
      We may add some additional code later to print the host bridge's ID so
      that the user can report it more easily.
      
      The assumption that re-ordering can only happen on x86 systems is also
      removed.
      Signed-off-by: Michael Chan <mchan@broadcom.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
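
      A hedged sketch of the recovery path: tg3_tx_recover(),
      TG3_FLAG_MBOX_WRITE_REORDER and the reset_task are named in the message
      above, while the pending flag and the wording of the warning are
      illustrative:

          /* Called from the tx completion path instead of BUG_ON() when the
           * hardware reports a completion for a descriptor the driver never
           * submitted, i.e. a write to the send mailbox was likely re-ordered. */
          static void tg3_tx_recover(struct tg3 *tp)
          {
                  printk(KERN_WARNING "tg3: %s: possible MMIO re-ordering to the "
                         "send mailbox detected, attempting recovery.\n",
                         tp->dev->name);

                  /* From now on, flush writes to mailboxes that must be
                   * strictly ordered... */
                  tp->tg3_flags |= TG3_FLAG_MBOX_WRITE_REORDER;

                  /* ...and reset the chip from the workqueue to resynchronize
                   * the tx ring state (illustrative pending flag). */
                  tp->tg3_flags |= TG3_FLAG_TX_RECOVERY_PENDING;
                  schedule_work(&tp->reset_task);
          }
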