1. 09 Oct 2008, 34 commits
  2. 08 Oct 2008, 6 commits
    • tcp: Fix tcp_hybla zero congestion window growth with small rho and large cwnd. · 9d2c27e1
      Daniele Lacamera authored
      Because of rounding, under certain conditions (i.e. when, in the
      congestion avoidance state, rho is smaller than 1/128 of the current
      cwnd), TCP Hybla congestion control starves and the cwnd is kept
      constant forever.

      This patch forces an increment of one segment after snd_cwnd calls
      without an increment (NewReno behavior).
      Signed-off-by: Daniele Lacamera <root@danielinux.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      9d2c27e1
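      The starvation comes from fixed-point rounding: Hybla computes the per-ACK
      increase in 1/128-segment units, so once that increment always rounds down
      to zero the window can never grow. The user-space sketch below illustrates
      the repair idea; the names (hybla_cong_avoid, snd_cwnd_cnt, rho2_7ls) only
      mirror kernel style and this is not the actual tcp_hybla.c patch.

          /* Illustrative sketch, not the kernel patch: fall back to NewReno-style
           * growth when the Hybla increment rounds down to nothing. */
          #include <stdio.h>

          struct hybla_state {
              unsigned int snd_cwnd;     /* congestion window, in segments          */
              unsigned int snd_cwnd_cnt; /* ACKs seen since the last cwnd increase  */
              unsigned int rho2_7ls;     /* rho^2 scaled by 128 (1/128 fixed point) */
          };

          static void hybla_cong_avoid(struct hybla_state *st)
          {
              /* Per-ACK increment in 1/128 segment units. */
              unsigned int increment = st->rho2_7ls / st->snd_cwnd;

              if (increment >= 128) {
                  st->snd_cwnd += increment >> 7;
                  st->snd_cwnd_cnt = 0;
                  return;
              }

              /* The increment rounds away to nothing; without this fallback the
               * cwnd would stay constant forever. */
              if (++st->snd_cwnd_cnt >= st->snd_cwnd) {
                  st->snd_cwnd++;        /* one segment per cwnd ACKs, as in NewReno */
                  st->snd_cwnd_cnt = 0;
              }
          }

          int main(void)
          {
              struct hybla_state st = { .snd_cwnd = 200, .snd_cwnd_cnt = 0, .rho2_7ls = 128 };

              for (int ack = 0; ack < 1000; ack++)
                  hybla_cong_avoid(&st);
              printf("cwnd after 1000 ACKs: %u\n", st.snd_cwnd); /* grows instead of stalling */
              return 0;
          }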
    • net: Fix netdev_run_todo dead-lock · 58ec3b4d
      Herbert Xu authored
      Benjamin Thery tracked down a bug that explains many instances
      of the error
      
      unregister_netdevice: waiting for %s to become free. Usage count = %d
      
      It turns out that netdev_run_todo can dead-lock with itself if
      a second instance of it is run in a thread that will then free
      a reference to the device waited on by the first instance.
      
      The problem is really quite silly.  We were trying to create
      parallelism where none was required.  As netdev_run_todo always
      follows an RTNL section, and todo tasks can only be added with
      the RTNL held, by definition you should only need to wait for
      the very ones that you've added and be done with it.
      
      There is no need for a second mutex or spinlock.
      
      This is exactly what the following patch does.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      58ec3b4d
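      Because todo entries can only be queued while the RTNL is held, and
      netdev_run_todo() always runs right after an RTNL section, each caller can
      steal the current list and drain it locally, waiting only for the devices
      it queued itself. The user-space sketch below shows that pattern with a
      pthread mutex standing in for the RTNL; it is illustrative, not the
      net/core/dev.c code.

          /* Simplified model of the "splice the todo list and process it locally"
           * pattern; no second mutex is needed. */
          #include <pthread.h>
          #include <stdlib.h>

          struct todo_dev {
              struct todo_dev *next;
              /* device state would live here */
          };

          static pthread_mutex_t rtnl_mutex = PTHREAD_MUTEX_INITIALIZER; /* stands in for the RTNL */
          static struct todo_dev *net_todo_list;   /* only touched with rtnl_mutex held */

          /* Mirrors the rule that todo entries are added with the RTNL held. */
          static void net_todo_add(struct todo_dev *dev)
          {
              dev->next = net_todo_list;
              net_todo_list = dev;
          }

          /* Called when leaving an RTNL section: take the list while still locked,
           * then process it without any extra lock.  Each caller waits only for
           * the devices it queued itself, so two instances cannot dead-lock on
           * each other's references. */
          static void netdev_run_todo(void)
          {
              struct todo_dev *list = net_todo_list;

              net_todo_list = NULL;
              pthread_mutex_unlock(&rtnl_mutex);

              while (list) {
                  struct todo_dev *dev = list;

                  list = list->next;
                  /* here the real code waits for dev's refcount to drop */
                  free(dev);
              }
          }

          int main(void)
          {
              struct todo_dev *dev = calloc(1, sizeof(*dev));

              pthread_mutex_lock(&rtnl_mutex);
              if (dev)
                  net_todo_add(dev);
              netdev_run_todo();   /* releases rtnl_mutex and drains the local list */
              return 0;
          }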
    • tcp: Fix possible double-ack w/ user dma · 53240c20
      Ali Saidi authored
      From: Ali Saidi <saidi@engin.umich.edu>
      
      When TCP receive copy offload is enabled, it's possible that
      tcp_rcv_established() will cause two ACKs to be sent for a single
      packet. When tcp_dma_early_copy() succeeds, copied_early is set to
      true, which causes tcp_cleanup_rbuf() to be called early, and that
      call can send an ACK. Further along in tcp_rcv_established(),
      __tcp_ack_snd_check() is called and will schedule a delayed ACK. If
      no packets are processed before the delayed-ACK timer expires, the
      packet will be ACKed twice.
      Signed-off-by: David S. Miller <davem@davemloft.net>
      53240c20
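      One way to picture the fix is as a guard on the delayed-ACK path: if the
      early DMA-copy path already ran tcp_cleanup_rbuf() and may have ACKed the
      data, a second, delayed ACK should not be armed for the same packet. The
      control-flow sketch below is purely illustrative and does not reproduce
      the actual tcp_input.c change.

          /* Illustrative control flow only: one ACK per received packet,
           * whichever path handled it. */
          #include <stdbool.h>
          #include <stdio.h>

          static void send_ack(const char *why)  { printf("ACK sent (%s)\n", why); }
          static void schedule_delayed_ack(void) { printf("delayed ACK scheduled\n"); }

          static void rcv_established(bool copied_early)
          {
              if (copied_early)
                  send_ack("early cleanup after DMA copy"); /* tcp_cleanup_rbuf() path */

              /* Guard in spirit: only arm the delayed ACK when the early-copy
               * path did not already acknowledge the data. */
              if (!copied_early)
                  schedule_delayed_ack();                   /* __tcp_ack_snd_check() path */
          }

          int main(void)
          {
              rcv_established(true);   /* DMA copy succeeded: exactly one ACK */
              rcv_established(false);  /* normal path: delayed ACK as before  */
              return 0;
          }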
    • net: only invoke dev->change_rx_flags when device is UP · b6c40d68
      Patrick McHardy authored
      Jesper Dangaard Brouer <hawk@comx.dk> reported a bug when setting
      down a VLAN device that is in promiscuous mode:

      When the VLAN device is set down, the promiscuous count on the real
      device is decremented by one by vlan_dev_stop(). When removing the
      promiscuous flag from the VLAN device afterwards, the promiscuous
      count on the real device is decremented a second time by the
      vlan_change_rx_flags() callback.
      
      The root cause is that the ->change_rx_flags() callback is invoked
      while the device is down. The synchronization is meant to mirror the
      behaviour of the ->set_rx_mode callbacks: the ->open() function is
      responsible for doing a full sync on open, the ->close() function is
      responsible for doing a full cleanup on ->stop(), and
      ->change_rx_flags() is meant to do incremental changes while the
      device is UP.
      
      Only invoke ->change_rx_flags() while the device is UP to provide the
      intended behaviour.
      Tested-by: Jesper Dangaard Brouer <jdb@comx.dk>
      Signed-off-by: Patrick McHardy <kaber@trash.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      b6c40d68
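      The shape of the fix is a guard in the core: forward rx-flag changes to
      the driver callback only while the device is administratively UP, so a
      down device, which already did its full cleanup in ->stop(), is never
      asked to drop promiscuity a second time. A self-contained sketch with
      simplified types follows; the helper name is assumed for illustration.

          /* Minimal model of guarding ->change_rx_flags() on IFF_UP. */
          #include <stdio.h>

          #define IFF_UP      0x1
          #define IFF_PROMISC 0x100

          struct net_device {
              unsigned int flags;
              void (*change_rx_flags)(struct net_device *dev, int change);
          };

          /* Illustrative helper: only call into the driver while the device is UP. */
          static void dev_change_rx_flags(struct net_device *dev, int change)
          {
              if ((dev->flags & IFF_UP) && dev->change_rx_flags)
                  dev->change_rx_flags(dev, change);
          }

          static void vlan_change_rx_flags(struct net_device *dev, int change)
          {
              (void)dev;
              if (change & IFF_PROMISC)
                  printf("forwarding promiscuity change to the real device\n");
          }

          int main(void)
          {
              struct net_device vlan = { .flags = 0, .change_rx_flags = vlan_change_rx_flags };

              dev_change_rx_flags(&vlan, IFF_PROMISC); /* device down: no callback */
              vlan.flags |= IFF_UP;
              dev_change_rx_flags(&vlan, IFF_PROMISC); /* device up: callback runs once */
              return 0;
          }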
    • SLOB: fix bogus ksize calculation · 85ba94ba
      Matt Mackall authored
      SLOB's ksize calculation was braindamaged and generally harmlessly
      underreported the allocation size. But for very small buffers, it could
      in fact overreport them, causing code that depends on krealloc() to
      overrun the allocation and trample other data.
      Signed-off-by: Matt Mackall <mpm@selenic.com>
      Tested-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      85ba94ba
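      The danger of over-reporting shows up in how krealloc() uses ksize(): if
      ksize() claims more usable space than the allocator really reserved, the
      block is kept in place and the caller then writes past its true end. The
      user-space model below illustrates that contract; it is not the mm/slob.c
      code.

          /* Model of the ksize()/krealloc() contract: ksize() must never report
           * more than the space actually reserved for the object. */
          #include <stdlib.h>
          #include <string.h>

          struct block {
              size_t reserved;   /* bytes the allocator really set aside */
              char  *mem;
          };

          /* A correct ksize() reports at most 'reserved'. */
          static size_t block_ksize(const struct block *b)
          {
              return b->reserved;
          }

          /* krealloc() in spirit: reuse the block when ksize() says it is big enough. */
          static char *block_krealloc(struct block *b, size_t new_size)
          {
              char *n;

              if (block_ksize(b) >= new_size)
                  return b->mem;                  /* grow in place */

              n = realloc(b->mem, new_size);      /* otherwise move the data */
              if (!n)
                  return NULL;
              b->mem = n;
              b->reserved = new_size;
              return b->mem;
          }

          int main(void)
          {
              struct block b = { .reserved = 8, .mem = malloc(8) };
              char *p = block_krealloc(&b, 16);

              /* Had ksize() over-reported (say, 16 for this 8-byte buffer), the
               * in-place branch above would be taken and this memset would trample
               * the neighbouring allocation, which is the bug described above. */
              if (p)
                  memset(p, 0, 16);
              free(p);
              return 0;
          }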