1. 22 Jul 2010 — 28 commits
  2. 21 Jul 2010 — 8 commits
  3. 20 Jul 2010 — 4 commits
    • bridge: Partially disable netpoll support · 573201f3
      Herbert Xu authored
      The new netpoll code in bridging contains use-after-free bugs
      that are non-trivial to fix.
      
      This patch fixes this by removing the code that uses skbs after
      they're freed.
      
      As a consequence, this means that we can no longer call bridge
      from the netpoll path, so this patch also removes the controller
      function in order to disable netpoll.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • ipv6: Make IP6CB(skb)->nhoff 16-bit. · e7c38157
      David S. Miller authored
      Even with jumbograms I cannot see any way in which we would need
      to record a next-header offset larger than 65535.
      
      The maximum extension header length is (256 << 3) == 2048.
      There are only a handful of extension headers specified which
      we'd even accept (say 5 or 6), therefore the largest next-header
      offset we'd ever have to contend with is something less than
      say 16k.
      
      Therefore make it a u16 instead of a u32.
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • bnx2: Update version to 2.0.17. · 5ae482e0
      Michael Chan authored
      Signed-off-by: Michael Chan <mchan@broadcom.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • bnx2: Remove some unnecessary smp_mb() in tx fast path. · 11848b96
      Michael Chan authored
      smp_mb() inside bnx2_tx_avail() is used twice in the normal
      bnx2_start_xmit() path (see illustration below).  The full memory
      barrier is only necessary during race conditions with tx completion.
      We can speed up the tx path by replacing smp_mb() in bnx2_tx_avail()
      with a compiler barrier, which merely forces the compiler to
      re-fetch tx_prod and tx_cons from memory.
      
      In the race condition between bnx2_start_xmit() and bnx2_tx_int(),
      we have the following situation:
      
      bnx2_start_xmit()                       bnx2_tx_int()
          if (!bnx2_tx_avail())
                  BUG();
      
          ...
      
          if (!bnx2_tx_avail())
                  netif_tx_stop_queue();          update_tx_index();
                  smp_mb();                       smp_mb();
                  if (bnx2_tx_avail())            if (netif_tx_queue_stopped() &&
                          netif_tx_wake_queue();      bnx2_tx_avail())
      
      With smp_mb() removed from bnx2_tx_avail(), we need to add smp_mb() to
      bnx2_start_xmit() as shown above to properly order netif_tx_stop_queue()
      against the bnx2_tx_avail() re-check of the ring indices.  Without that
      strict ordering, the tx queue can remain stopped forever.
      
      This improves performance by about 5% with 2 ports running bi-directional
      64-byte packets.
      Reviewed-by: Benjamin Li <benli@broadcom.com>
      Reviewed-by: Matt Carlson <mcarlson@broadcom.com>
      Signed-off-by: Michael Chan <mchan@broadcom.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>