1. 08 Sep 2008, 1 commit
  2. 09 Oct 2008, 1 commit
    • ext4: Avoid printk floods in the face of directory corruption · 9d9f1775
      Eric Sandeen authored
      Note: some people think this represents a security bug, since it
      might make the system go away while it is printing a large number of
      console messages, especially if a serial console is involved.  Hence,
      it has been assigned CVE-2008-3528, but it requires that the attacker
      either has physical access to your machine to insert a USB disk with a
      corrupted filesystem image (at which point why not just hit the power
      button), or is otherwise able to convince the system administrator to
      mount an arbitrary filesystem image (at which point why not just
      include a setuid shell or world-writable hard disk device file or some
      such).  Me, I think they're just being silly. --tytso
      Signed-off-by: Eric Sandeen <sandeen@redhat.com>
      Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
      Cc: linux-ext4@vger.kernel.org
      Cc: Eugene Teo <eugeneteo@kernel.sg>
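      The fix the title describes amounts to throttling, not suppressing, the corruption reports. Below is a minimal sketch of that idea using the kernel's printk_ratelimit() helper; the function name and message text are illustrative assumptions, not the actual ext4 patch.

      ```c
      #include <linux/kernel.h>

      /* Illustrative sketch: report directory corruption, but let
       * printk_ratelimit() drop excess messages so a crafted filesystem
       * image cannot flood the console with thousands of error lines. */
      static void report_dirent_corruption(const char *devname,
                                           unsigned long block,
                                           unsigned int offset)
      {
              if (!printk_ratelimit())
                      return;         /* over the rate limit: stay quiet */

              printk(KERN_ERR "EXT4-fs error (device %s): "
                     "bad entry in directory: block %lu, offset %u\n",
                     devname, block, offset);
      }
      ```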
  3. 14 Sep 2008, 2 commits
  4. 09 Sep 2008, 3 commits
  5. 09 Oct 2008, 2 commits
  6. 10 Oct 2008, 1 commit
  7. 09 Sep 2008, 1 commit
  8. 09 Oct 2008, 1 commit
    • ext4: Make sure all the block allocation paths reserve blocks · a30d542a
      Aneesh Kumar K.V authored
      With delayed allocation we need to make sure blocks are reserved before
      we attempt to allocate them. Otherwise we get a block allocation failure
      (ENOSPC) during writepages which cannot be handled. This would mean
      silent data loss (we do a printk stating that data will be lost). This patch
      updates the DIO and fallocate code paths to do block reservation before
      block allocation. This is needed to make sure parallel DIO and fallocate
      requests don't take blocks out of the delayed reserve space.

      When the free blocks count goes below a threshold we switch to a slow path
      which looks at the other CPUs' accumulated percpu counter values.
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
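      The "slow path" mentioned above is the standard percpu_counter pattern: read the cheap approximate value first and pay for an exact, all-CPU sum only when the count is near the limit. A rough sketch with illustrative names, not the real ext4 reservation code:

      ```c
      #include <linux/percpu_counter.h>

      /* Illustrative sketch: decide whether 'nblocks' more blocks can be
       * reserved.  percpu_counter_read_positive() is a cheap, approximate
       * read; percpu_counter_sum_positive() folds in every CPU's local
       * delta and is used only when the approximate value is close to
       * the threshold. */
      static int can_reserve_blocks(struct percpu_counter *free_blocks,
                                    s64 nblocks, s64 threshold)
      {
              s64 free = percpu_counter_read_positive(free_blocks);

              if (free - nblocks < threshold)    /* near the edge: be exact */
                      free = percpu_counter_sum_positive(free_blocks);

              return free - nblocks >= 0;
      }
      ```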
  9. 20 Aug 2008, 1 commit
  10. 09 Sep 2008, 3 commits
  11. 10 Oct 2008, 7 commits
  12. 09 Oct 2008, 9 commits
  13. 08 Oct 2008, 6 commits
    • tcp: Fix tcp_hybla zero congestion window growth with small rho and large cwnd. · 9d2c27e1
      Daniele Lacamera authored
      Because of rounding, in certain conditions, i.e. when in congestion
      avoidance state rho is smaller than 1/128 of the current cwnd, TCP
      Hybla congestion control starves and the cwnd is kept constant
      forever.
      
      This patch forces an increment by one segment after snd_cwnd calls
      without an increment (NewReno behavior).
      Signed-off-by: Daniele Lacamera <root@danielinux.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
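      In other words, when the fractional Hybla increment rounds down to nothing, the patch falls back to the classic NewReno rule of one extra segment per window of ACKs. A hedged sketch of that fallback; the names and the 1/128 fixed-point unit are illustrative, not the actual tcp_hybla.c change.

      ```c
      #include <stdint.h>

      /* Illustrative congestion-avoidance step.  'increment_128ths' is the
       * computed cwnd growth in 1/128-segment units; when it amounts to
       * less than one segment, fall back to NewReno: count ACKs in
       * *cwnd_cnt and add one segment every *cwnd ACKs, so the window
       * can never stay constant forever. */
      static void cong_avoid_step(uint32_t *cwnd, uint32_t *cwnd_cnt,
                                  uint32_t increment_128ths)
      {
              if (increment_128ths >= 128) {
                      *cwnd += increment_128ths / 128;   /* normal growth */
                      return;
              }

              if (++(*cwnd_cnt) >= *cwnd) {              /* NewReno fallback */
                      *cwnd_cnt = 0;
                      (*cwnd)++;
              }
      }
      ```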
    • net: Fix netdev_run_todo dead-lock · 58ec3b4d
      Herbert Xu authored
      Benjamin Thery tracked down a bug that explains many instances
      of the error
      
      unregister_netdevice: waiting for %s to become free. Usage count = %d
      
      It turns out that netdev_run_todo can dead-lock with itself if
      a second instance of it is run in a thread that will then free
      a reference to the device waited on by the first instance.
      
      The problem is really quite silly.  We were trying to create
      parallelism where none was required.  As netdev_run_todo always
      follows an RTNL section, and todo tasks can only be added
      with the RTNL held, by definition you should only need to wait
      for the very ones that you've added and be done with it.
      
      There is no need for a second mutex or spinlock.
      
      This is exactly what the following patch does.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
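      A sketch of the pattern the commit describes: entries are only queued while the RTNL is held, so the unlocking path can snapshot exactly the entries its own RTNL section added and walk them without any extra mutex. Kernel-flavoured illustration, not the literal patch:

      ```c
      #include <linux/list.h>
      #include <linux/netdevice.h>
      #include <linux/rtnetlink.h>

      static LIST_HEAD(todo_list);        /* only touched under the RTNL */

      static void run_todo_snapshot(void)
      {
              LIST_HEAD(list);

              /* Still under RTNL: steal everything queued by this RTNL
               * section; the global list is left empty for the next one. */
              list_replace_init(&todo_list, &list);

              __rtnl_unlock();

              while (!list_empty(&list)) {
                      struct net_device *dev =
                              list_first_entry(&list, struct net_device,
                                               todo_list);

                      list_del(&dev->todo_list);
                      /* ... wait for the refcount to drop, then finish
                       * the unregister; no second lock is needed ... */
              }
      }
      ```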
    • tcp: Fix possible double-ack w/ user dma · 53240c20
      Ali Saidi authored
      From: Ali Saidi <saidi@engin.umich.edu>
      
      When TCP receive copy offload is enabled it's possible that
      tcp_rcv_established() will cause two acks to be sent for a single
      packet. In the case that a tcp_dma_early_copy() is successful,
      copied_early is set to true which causes tcp_cleanup_rbuf() to be
      called early which can send an ack. Further along in
      tcp_rcv_established(), __tcp_ack_snd_check() is called and will
      schedule a delayed ACK. If no packets are processed before the delayed
      ack timer expires the packet will be acked twice.
      Signed-off-by: David S. Miller <davem@davemloft.net>
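      The shape of the fix follows from the description: the later ACK check should be skipped when the early-copy path's tcp_cleanup_rbuf() has already acknowledged everything received (sending an ACK advances rcv_wup to rcv_nxt). An illustrative guard under that assumption, not the literal patch:

      ```c
      #include <net/tcp.h>

      /* Illustrative sketch: should the later __tcp_ack_snd_check() run
       * at all?  rcv_nxt == rcv_wup means an ACK covering the current
       * data was already sent by the early cleanup, so scheduling another
       * (delayed) ACK would acknowledge the same packet twice. */
      static bool need_second_ack_check(const struct tcp_sock *tp,
                                        bool copied_early)
      {
              return !copied_early || tp->rcv_nxt != tp->rcv_wup;
      }
      ```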
    • net: only invoke dev->change_rx_flags when device is UP · b6c40d68
      Patrick McHardy authored
      Jesper Dangaard Brouer <hawk@comx.dk> reported a bug when setting a VLAN
      device down that is in promiscuous mode:

      When the VLAN device is set down, the promiscuous count on the real
      device is decremented by one by vlan_dev_stop(). When removing the
      promiscuous flag from the VLAN device afterwards, the promiscuous
      count on the real device is decremented a second time by the
      vlan_change_rx_flags() callback.
      
      The root cause for this is that the ->change_rx_flags() callback is
      invoked while the device is down. The synchronization is meant to mirror
      the behaviour of the ->set_rx_mode callbacks, meaning the ->open function
      is responsible for doing a full sync on open, the ->close() function is
      responsible for doing full cleanup on ->stop() and ->change_rx_flags()
      is meant to do incremental changes while the device is UP.
      
      Only invoke ->change_rx_flags() while the device is UP to provide the
      intended behaviour.
      Tested-by: Jesper Dangaard Brouer <jdb@comx.dk>
      Signed-off-by: Patrick McHardy <kaber@trash.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
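      The fix boils down to a small guard in the core: forward the flag change to the driver only while the device is UP, leaving ->open() and ->stop() responsible for the full sync. A sketch of that guard; the helper name is illustrative.

      ```c
      #include <linux/netdevice.h>

      /* Illustrative helper: pass promiscuity/allmulti changes to the
       * driver only while the device is UP; a down device gets a full
       * resync from ->open() later instead of incremental updates now. */
      static void change_rx_flags_guarded(struct net_device *dev, int flags)
      {
              if ((dev->flags & IFF_UP) && dev->change_rx_flags)
                      dev->change_rx_flags(dev, flags);
      }
      ```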
    • SLOB: fix bogus ksize calculation · 85ba94ba
      Matt Mackall authored
      SLOB's ksize calculation was braindamaged and generally harmlessly
      underreported the allocation size. But for very small buffers, it could
      in fact overreport them, leading code depending on krealloc to overrun
      the allocation and trample other data.
      Signed-off-by: Matt Mackall <mpm@selenic.com>
      Tested-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
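      Why over-reporting is worse than under-reporting: krealloc() trusts ksize() to decide whether the existing buffer is already large enough. A simplified sketch of that dependency (not the SLOB fix itself, and with error handling trimmed):

      ```c
      #include <linux/slab.h>
      #include <linux/string.h>

      /* Simplified krealloc()-like logic: if ksize() over-reports the real
       * allocation size, the old buffer is kept and the caller believes it
       * may use new_size bytes, so it writes past the true end of the
       * object and tramples whatever the allocator placed next to it. */
      static void *krealloc_sketch(const void *p, size_t new_size, gfp_t flags)
      {
              void *ret;

              if (p && ksize(p) >= new_size)
                      return (void *)p;  /* relies on ksize() not over-reporting */

              ret = kmalloc(new_size, flags);
              if (ret && p) {
                      memcpy(ret, p, ksize(p));  /* only reached when ksize < new_size */
                      kfree(p);
              }
              return ret;
      }
      ```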
  14. 07 Oct 2008, 2 commits