1. 23 Apr, 2009 (9 commits)
  2. 22 Apr, 2009 (2 commits)
  3. 21 Apr, 2009 (20 commits)
  4. 20 Apr, 2009 (9 commits)
    • tun: Fix sk_sleep races when attaching/detaching · c40af84a
      Committed by Herbert Xu
      As the sk_sleep wait queue actually lives in tfile, which may be
      detached from the tun device, bad things will happen when we use
      sk_sleep after detaching.
      
      Since the tun device is the persistent data structure here (when
      requested by the user), it makes much more sense to have the wait
      queue live there.  There is no reason to have it in tfile at all
      since the only time we can wait is if we have a tun attached.
      In fact we already have a wait queue in tun_struct, so we might
      as well use it.
      Reported-by: Eric W. Biederman <ebiederm@xmission.com>
      Tested-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Tested-by: Patrick McHardy <kaber@trash.net>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      c40af84a
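The ownership argument above (the wait queue must live in the persistent tun device, not in the detachable tfile) can be sketched as a toy model. All names here (`tun_dev_model`, `tun_file_model`, `can_wait`) are illustrative, not the actual driver code; an `int` stands in for `wait_queue_head_t`:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* The wait queue state lives in the persistent device object, so it
 * stays valid no matter how often files attach and detach. */
struct tun_dev_model {
    int wq_waiters;              /* stand-in for wait_queue_head_t */
};

struct tun_file_model {
    struct tun_dev_model *dev;   /* NULL when detached */
};

static void attach(struct tun_file_model *f, struct tun_dev_model *d)
{
    f->dev = d;
}

static void detach(struct tun_file_model *f)
{
    f->dev = NULL;               /* the queue itself survives in the device */
}

/* Waiting is only meaningful while a tun device is attached,
 * exactly as the commit message argues. */
static bool can_wait(const struct tun_file_model *f)
{
    return f->dev != NULL;
}
```

The point of the model: after `detach()`, the file no longer reaches the queue, but the queue's memory (owned by the device) was never at risk.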
    • tun: Only free a netdev when all tun descriptors are closed · 9c3fea6a
      Committed by Herbert Xu
      The commit c70f1829 ("tun: Fix
      races between tun_net_close and free_netdev") fixed a race where
      an asynchronous deletion of a tun device can hose a poll(2) on
      a tun fd attached to that device.
      
      However, this came at the cost of moving the tun wait queue into
      the tun file data structure.  The problem with this is that it
      imposes restrictions on when and where the tun device can access
      the wait queue since the tun file may change at any time due to
      detaching and reattaching.
      
      In particular, now that we need to use the wait queue on the
      receive path it becomes difficult to properly synchronise this
      with the detachment of the tun device.
      
      This patch solves the original race in a different way.  Since
      the race only exists because the underlying memory gets freed, we
      can prevent it simply by ensuring that we don't do that until
      all tun descriptors ever attached to the device (even if they
      have since been detached, because they may still be sitting in poll)
      have been closed.
      
      This is done by reference counting the attached tun file
      descriptors.  The refcount in tun->sk has been reappropriated
      for this purpose, since it was already being used for that, albeit
      from the opposite angle.
      
      Note that we no longer zero tfile->tun since tun_get will return
      NULL anyway after the refcount on tfile hits zero.  Instead it
      represents whether this file has ever been attached to a device.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      9c3fea6a
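The deferred-free idea in this commit, that the backing memory may only be released once every descriptor ever attached has been closed, is plain reference counting. A minimal userspace sketch, with illustrative names (`device`, `device_get`, `device_put`) rather than the real tun driver code:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Illustrative device: freed only when the last reference is dropped,
 * mirroring the patch's rule for free_netdev(). */
struct device {
    atomic_int refcnt;   /* one count per attached descriptor, plus creation */
    bool freed;          /* stands in for the memory actually being freed */
};

static void device_get(struct device *dev)
{
    atomic_fetch_add(&dev->refcnt, 1);
}

static void device_put(struct device *dev)
{
    /* atomic_fetch_sub returns the previous value, so the caller that
     * drops the count from 1 to 0 performs the free. */
    if (atomic_fetch_sub(&dev->refcnt, 1) == 1)
        dev->freed = true;
}
```

A descriptor that has been detached but is still sitting in poll(2) simply holds its reference a little longer; the memory cannot go away underneath it.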
    • loopback: packet drops accounting · 7eebb0b2
      Committed by Eric Dumazet
      We can in some situations drop packets in netif_rx().
      
      The loopback driver does not report these (unlikely) drops to its stats,
      and incorrectly changes its packet/byte counts.
      
      After this patch is applied, "ifconfig lo" reports these drops, as in:
      
      # ifconfig lo
      lo        Link encap:Local Loopback
                inet addr:127.0.0.1  Mask:255.0.0.0
                UP LOOPBACK RUNNING  MTU:16436  Metric:1
                RX packets:692562900 errors:3228 dropped:3228 overruns:0 frame:0
                TX packets:692562900 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:0
                RX bytes:2865674174 (2.6 GiB)  TX bytes:2865674174 (2.6 GiB)
      
      I initially chose to reflect those errors only in tx_dropped/tx_errors, but David
      convinced me that they are really RX errors, as loopback_xmit() really starts
      an RX process (calling eth_type_trans(), for example, which itself pulls the ethernet header).
      
      These errors are accounted in rx_dropped/rx_errors.
      Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      7eebb0b2
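The accounting decision can be sketched as a simplified model: a loopback transmit that the receive path rejects is charged to the RX drop/error counters, and the packet/byte counters are bumped only on success. The names (`lo_stats`, `lo_xmit`) are illustrative; the real driver updates `struct net_device_stats` from `loopback_xmit()`:

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified stats, mirroring the net_device_stats fields named
 * in the commit message. */
struct lo_stats {
    unsigned long rx_packets, rx_bytes;
    unsigned long rx_dropped, rx_errors;
};

/* rx_ok models whether netif_rx() accepted the packet. */
static void lo_xmit(struct lo_stats *s, unsigned long len, bool rx_ok)
{
    if (rx_ok) {
        s->rx_packets++;        /* counted only when actually delivered */
        s->rx_bytes += len;
    } else {
        s->rx_dropped++;        /* drop is visible in "ifconfig lo" */
        s->rx_errors++;
    }
}
```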
    • net: fix "compatibility" typos · eb39c57f
      Committed by Marcin Slusarz
      Signed-off-by: Marcin Slusarz <marcin.slusarz@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      eb39c57f
    • cxgb3: Fix EEH final recovery attempt · e8d19370
      Committed by Divy Le Ray
      EEH attempts to recover up to 6 times.
      The last attempt leaves all the ports and the adapter down.
      The driver is then unloaded, bringing the adapter down again
      unconditionally, and the unload hangs.
      Check whether the adapter is already down before trying to bring it down again.
      Signed-off-by: Divy Le Ray <divy@chelsio.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      e8d19370
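The fix is an idempotency guard: teardown checks whether it has already run. A minimal sketch with illustrative names (`adapter`, `adapter_down`; `down_calls` exists only to make the guard observable):

```c
#include <assert.h>
#include <stdbool.h>

struct adapter {
    bool is_up;
    int down_calls;   /* counts how often real teardown work ran */
};

/* Guarded shutdown: bringing an already-down adapter down again is a
 * no-op, so a final EEH recovery attempt followed by driver unload
 * cannot run the teardown path twice. */
static void adapter_down(struct adapter *a)
{
    if (!a->is_up)
        return;       /* already down: skip, avoiding the hang */
    a->is_up = false;
    a->down_calls++;
}
```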
    • cxgb3: Fix potential msi-x vector leak · 2c2f409f
      Committed by Divy Le Ray
      Release vectors when a MSI-X allocation fails.
      Signed-off-by: Divy Le Ray <divy@chelsio.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      2c2f409f
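The underlying pattern is cleanup-on-failure: if acquiring the full set of resources fails partway, release everything already acquired before reporting the error. A hedged userspace sketch; the real driver deals with `pci_enable_msix()` and `struct msix_entry`, while `vec_table`, `alloc_vectors`, and the simulated `fail_at` index are illustrative:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

struct vec_table {
    int *vecs;
    int nvecs;
};

/* Allocate nvecs "vectors"; on any failure release everything already
 * taken before returning the error, so nothing leaks.  fail_at < 0
 * means no simulated failure. */
static bool alloc_vectors(struct vec_table *t, int nvecs, int fail_at)
{
    t->vecs = calloc((size_t)nvecs, sizeof(*t->vecs));
    if (!t->vecs)
        return false;
    for (int i = 0; i < nvecs; i++) {
        if (i == fail_at) {        /* simulated allocation failure */
            free(t->vecs);         /* release vectors already taken */
            t->vecs = NULL;
            t->nvecs = 0;
            return false;
        }
        t->vecs[i] = i;
    }
    t->nvecs = nvecs;
    return true;
}
```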
    • cxgb3: fix workqueue flush issues · c80b0c28
      Committed by Divy Le Ray
      The fatal error task can be scheduled while processing an offload packet
      in NAPI context when the connection handle is bogus.  This can race
      with the ports being brought down and the cxgb3 workqueue being flushed.
      Stop NAPI processing before flushing the work queue.
      
      The ULP drivers (iSCSI, iWARP) might also schedule a task on keventd_wq
      while releasing a connection handle (cxgb3_offload.c::cxgb3_queue_tid_release()).
      The driver, however, does not flush any work on keventd_wq while being unloaded.
      This patch also fixes that.
      
      Also, call cancel_delayed_work_sync() in place of the deprecated
      cancel_rearming_delayed_workqueue().
      Signed-off-by: Divy Le Ray <divy@chelsio.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      c80b0c28
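The race and its fix come down to shutdown ordering: stop the producer (NAPI polling, which can schedule work) before flushing the queue of pending work, so nothing can be queued behind the flush. A toy model with illustrative names (`ctx`, `queue_work_model`, `shutdown`), not the cxgb3 code:

```c
#include <assert.h>
#include <stdbool.h>

struct ctx {
    bool napi_running;   /* the producer: NAPI poll context */
    int pending;         /* work items sitting on the queue */
};

/* Work can only be scheduled while the producer is running. */
static bool queue_work_model(struct ctx *c)
{
    if (!c->napi_running)
        return false;    /* producer stopped: nothing new gets queued */
    c->pending++;
    return true;
}

/* The fix's ordering: stop NAPI first, then flush.  After this,
 * the flush cannot be overtaken by newly scheduled work. */
static void shutdown(struct ctx *c)
{
    c->napi_running = false;  /* 1. stop NAPI processing */
    c->pending = 0;           /* 2. flush the work queue */
}
```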
    • cxgb3: fix link fault handling · 3851c66c
      Committed by Divy Le Ray
      Use the existing periodic task to handle link faults.
      The link fault interrupt handler was also being called in work queue context,
      which is wrong and might cause deadlocks.
      Signed-off-by: Divy Le Ray <divy@chelsio.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      3851c66c
    • e1000: init link state correctly · eb62efd2
      Committed by Jesse Brandeburg
      As reported by Andrew Lutomirski <amluto@gmail.com>:
      
      All the Intel wired ethernet drivers were calling netif_carrier_off
      and netif_stop_queue (or variants) before calling register_netdevice.
      
      This is incorrect behavior, as was pointed out by davem, and causes
      ifconfig and friends to report a strange state before the first link
      after the driver is loaded.
      
      This apparently confused *some* versions of NetworkManager.
      
      Andy tested this for e1000e and confirmed it was working for him.
      Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
      Reported-by: Andrew Lutomirski <amluto@gmail.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      eb62efd2
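The bug is again an ordering constraint: link-state calls like netif_carrier_off only make sense on a registered device. A toy model of that constraint, with illustrative names (`netdev`, `register_netdev_model`, `netif_carrier_off_model`) standing in for the real netdevice API:

```c
#include <assert.h>
#include <stdbool.h>

struct netdev {
    bool registered;
    bool carrier_ok;
};

/* Registration is the point from which link state is meaningful. */
static void register_netdev_model(struct netdev *d)
{
    d->registered = true;
    d->carrier_ok = true;    /* model: carrier assumed up until told otherwise */
}

/* Touching carrier state before registration is rejected in this
 * model, mirroring the rule the drivers were violating. */
static bool netif_carrier_off_model(struct netdev *d)
{
    if (!d->registered)
        return false;        /* too early: state would be inconsistent */
    d->carrier_ok = false;
    return true;
}
```

The fixed drivers call the carrier/queue functions only after `register_netdevice`, so tools like ifconfig never see the half-initialized state.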