1. 11 Jul, 2009 (29 commits)
  2. 10 Jul, 2009 (5 commits)
    • cxgb3: Fix crash caused by stashing wrong netdev_queue · e594e96e
      Committed by Roland Dreier
      Commit c3a8c5b6 ("cxgb3: move away from LLTX") exposed a bug in how
      cxgb3 looks up the netdev_queue it stashes away in a qset during
      initialization.  For multiport devices, the TX queue index it uses is
      offset by the first_qset index of each port.  This leads to a crash
      once LLTX is removed, since hard_start_xmit is called with one TX
      queue lock held, while the TX reclaim timer task grabs a different
      (wrong) TX queue lock when it frees skbs.
      
      Fix this by removing the first_qset offset used to look up the TX
      queue passed into t3_sge_alloc_qset() from setup_sge_qsets() (a
      sketch of this change follows this entry).
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
      Acked-by: Divy Le Ray <divy@chelsio.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      e594e96e
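
      A minimal sketch, in C, of the lookup change described above.  The loop
      shape, the port_info fields and the t3_sge_alloc_qset() argument list are
      simplified assumptions; only the removal of the first_qset offset reflects
      the commit.

      /* Sketch, not the actual cxgb3 source.  Each port owns pi->nqsets
       * qsets starting at the adapter-global index pi->first_qset, but a
       * netdev's TX queues are numbered locally from 0 on every port. */
      for (j = 0; j < pi->nqsets; ++j, ++qset_idx) {
              /* before: adapter-global index; on a multiport device this
               * stashed another port's netdev_queue in the qset */
              /* txq = netdev_get_tx_queue(dev, pi->first_qset + j); */

              /* after: index local to this port's own TX queues */
              struct netdev_queue *txq = netdev_get_tx_queue(dev, j);

              err = t3_sge_alloc_qset(adap, qset_idx, 1, irq_idx,
                                      &adap->params.sge.qset[qset_idx],
                                      ntxq, dev, txq);
      }
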
    • ixgbe: Fix coexistence of FCoE and Flow Director in 82599 · 8faa2a78
      Committed by Yi Zou
      Fix coexistence of Fiber Channel over Ethernet (FCoE) and Flow Director (FDIR)
      in 82599 and remove the disabling of FDIR when FCoE is enabled.
      
      Currently, FDIR is turned off when FCoE is enabled, under the assumption
      that FCoE is always enabled together with DCB.  However, FDIR does not
      have to be turned off whenever FCoE is enabled, since FCoE can be enabled
      without DCB, e.g., using link pause only.  This patch makes sure that FDIR
      is turned on or off together with DCB; enabling FCoE no longer disables
      FDIR but instead sets FDIR up properly, so FCoE and FDIR can coexist
      regardless of whether DCB is on or off (see the sketch after this entry).
      Signed-off-by: Yi Zou <yi.zou@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      8faa2a78
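
      A rough sketch of the flag handling described above, assuming the ixgbe
      feature-flag names of that era (IXGBE_FLAG_DCB_ENABLED,
      IXGBE_FLAG_FDIR_HASH_CAPABLE, IXGBE_FLAG_FDIR_PERFECT_CAPABLE); the helper
      function itself is hypothetical and only illustrates the described policy.

      /* Hypothetical helper, not the actual patch: tie Flow Director to the
       * DCB state instead of to FCoE. */
      static void ixgbe_sync_fdir_with_dcb(struct ixgbe_adapter *adapter)
      {
              if (adapter->flags & IXGBE_FLAG_DCB_ENABLED) {
                      /* DCB on: Flow Director off */
                      adapter->flags &= ~(IXGBE_FLAG_FDIR_HASH_CAPABLE |
                                          IXGBE_FLAG_FDIR_PERFECT_CAPABLE);
              } else {
                      /* DCB off: Flow Director stays on, even with FCoE
                       * enabled (e.g. link pause only), so the two coexist */
                      adapter->flags |= IXGBE_FLAG_FDIR_HASH_CAPABLE;
              }
      }
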
    • memory barrier: adding smp_mb__after_lock · ad462769
      Committed by Jiri Olsa
      Adding an smp_mb__after_lock define to be used as an smp_mb() call after
      taking a lock (a sketch follows this entry).
      
      Making it a no-op for x86, since {read|write|spin}_lock() on x86 are full
      memory barriers.
      Signed-off-by: Jiri Olsa <jolsa@redhat.com>
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      ad462769
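
      A minimal sketch of such a define, reconstructed from the description
      above: the generic fallback expands to smp_mb(), and x86 overrides it with
      a no-op.  The header locations are approximate.

      /* Generic fallback (roughly include/linux/spinlock.h): taking a lock is
       * not guaranteed to be a full barrier, so default to a real smp_mb(). */
      #ifndef smp_mb__after_lock
      #define smp_mb__after_lock()    smp_mb()
      #endif

      /* x86 override (roughly arch/x86/include/asm/spinlock.h):
       * {read|write|spin}_lock() are already full memory barriers there, so
       * the extra barrier can be a no-op. */
      #define smp_mb__after_lock()    do { } while (0)
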
    • net: adding memory barrier to the poll and receive callbacks · a57de0b4
      Committed by Jiri Olsa
      Adding a memory barrier after the poll_wait function, paired with the
      receive callbacks.  Adding the functions sock_poll_wait and sk_has_sleeper
      to wrap the memory barrier (see the sketch after this entry).
      
      Without the memory barrier, the following race can happen.  The race
      fires when the two code paths below meet and the tp->rcv_nxt and
      __add_wait_queue updates stay in the CPUs' caches.
      
      CPU1                         CPU2
      
      sys_select                   receive packet
        ...                        ...
        __add_wait_queue           update tp->rcv_nxt
        ...                        ...
        tp->rcv_nxt check          sock_def_readable
        ...                        {
        schedule                      ...
                                      if (sk->sk_sleep && waitqueue_active(sk->sk_sleep))
                                              wake_up_interruptible(sk->sk_sleep)
                                      ...
                                   }
      
      If there were no caching, the code would work correctly, since the
      wait_queue and rcv_nxt updates happen in the opposite order on the two
      sides.
      
      That means once tp->rcv_nxt is updated by CPU2, CPU1 has either already
      passed the tp->rcv_nxt check and gone to sleep, or will see the new value
      of tp->rcv_nxt and return with a new data mask.  In both cases the process
      (CPU1) has already been added to the wait queue, so the waitqueue_active
      call on CPU2 cannot miss it and will wake up CPU1.
      
      The bad case is when the __add_wait_queue change made by CPU1 stays in
      its cache, and so does the tp->rcv_nxt update on the CPU2 side.  CPU1 then
      ends up calling schedule and sleeping forever if no more data arrives on
      the socket.
      
      Calls to poll_wait in the following modules were omitted:
      	net/bluetooth/af_bluetooth.c
      	net/irda/af_irda.c
      	net/irda/irnet/irnet_ppp.c
      	net/mac80211/rc80211_pid_debugfs.c
      	net/phonet/socket.c
      	net/rds/af_rds.c
      	net/rfkill/core.c
      	net/sunrpc/cache.c
      	net/sunrpc/rpc_pipe.c
      	net/tipc/socket.c
      Signed-off-by: Jiri Olsa <jolsa@redhat.com>
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      a57de0b4
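
      A sketch of the two wrappers introduced here, reconstructed from the
      description above: a full barrier after poll_wait() on the select/poll
      side, paired with a full barrier before the waitqueue_active() check on
      the receive side.  The exact code in include/net/sock.h may differ in
      detail.

      /* Receive side (e.g. sock_def_readable()): the barrier orders the
       * earlier state update (such as tp->rcv_nxt) before the sleeper check,
       * so a waiter already added by __add_wait_queue() is not missed. */
      static inline int sk_has_sleeper(struct sock *sk)
      {
              smp_mb();       /* pairs with the barrier in sock_poll_wait() */
              return sk->sk_sleep && waitqueue_active(sk->sk_sleep);
      }

      /* Poll side: the barrier orders __add_wait_queue() (done inside
       * poll_wait()) before the later socket-state checks, so the caller
       * either sees the new data or is visible on the wait queue. */
      static inline void sock_poll_wait(struct file *filp,
                                        wait_queue_head_t *wait_address,
                                        poll_table *p)
      {
              if (p && wait_address) {
                      poll_wait(filp, wait_address, p);
                      smp_mb();       /* pairs with the barrier in sk_has_sleeper() */
              }
      }
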
  3. 09 Jul, 2009 (6 commits)