1. 23 Feb 2006 (2 commits)
  2. 21 Feb 2006 (1 commit)
  3. 19 Feb 2006 (1 commit)
  4. 18 Feb 2006 (3 commits)
  5. 17 Feb 2006 (6 commits)
  6. 16 Feb 2006 (1 commit)
  7. 15 Feb 2006 (3 commits)
  8. 14 Feb 2006 (1 commit)
  9. 13 Feb 2006 (1 commit)
  10. 12 Feb 2006 (2 commits)
    • [PATCH] select: fix returned timeval · 643a6545
      Committed by Andrew Morton
      With David Woodhouse <dwmw2@infradead.org>
      
      select() presently has a habit of increasing the value of the user's
      `timeout' argument on return.
      
      We were writing back a timeout larger than the original.  We _deliberately_
      round up, since we know we must wait at _least_ as long as the caller asks
      us to.
      
      The patch adds a couple of helper functions for magnitude comparison of
      timespecs and of timevals, and uses them to prevent the various poll and
      select functions from returning a timeout which is larger than the one which
      was passed in.
      
      The patch also fixes a bug in compat_sys_pselect7(): it was adding the new
      timeout value to the old one and was returning that.  It should just return
      the new timeout value.
      
      (We have various handy timespec/timeval-to-from-nsec conversion functions in
      time.h.  But this code open-codes it all).
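      The comparison helpers the patch describes can be sketched in userspace roughly as follows; the function names here are illustrative, not necessarily the ones that landed in the kernel's time.h:

```c
#include <time.h>

/* Userspace sketch of a timespec magnitude comparison helper.
 * Returns <0, 0, or >0 as lhs is earlier than, equal to, or later
 * than rhs. */
static int timespec_cmp(const struct timespec *lhs, const struct timespec *rhs)
{
	if (lhs->tv_sec < rhs->tv_sec)
		return -1;
	if (lhs->tv_sec > rhs->tv_sec)
		return 1;
	return (int)(lhs->tv_nsec - rhs->tv_nsec);
}

/* Clamp the remaining time written back to userspace so it can never
 * exceed the timeout originally passed in. */
static void clamp_remaining(struct timespec *remaining,
			    const struct timespec *original)
{
	if (timespec_cmp(remaining, original) > 0)
		*remaining = *original;
}
```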
      
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Andi Kleen <ak@muc.de>
      Cc: Ulrich Drepper <drepper@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: george anzinger <george@mvista.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] fstatat64 support · cff2b760
      Committed by Ulrich Drepper
      The *at patches introduced fstatat and, due to insufficient research, I
      used the newfstat functions generally as the guideline.  The result is that
      on 32-bit platforms we don't have all the information needed to implement
      fstatat64.
      
      This patch modifies the code to pass up 64-bit information if
      __ARCH_WANT_STAT64 is defined.  I renamed the syscall entry point to make
      this clear.  Other archs will continue to use the existing code.  On x86-64
      the compat code is implemented using a new sys32_ function.  This is what
      is done for the other stat syscalls as well.
      
      This patch might break some other archs (those which define
      __ARCH_WANT_STAT64 and which already wired up the syscall).  Yet others
      might need changes to accommodate the compatibility mode.  I really don't
      want to do that work because all this stat handling is a mess (more so in
      glibc, but the kernel is also affected).  It should be done by the arch
      maintainers.  I'll provide some stand-alone test shortly.  Those who are
      eager could compile glibc and run 'make check' (no installation needed).
      
      The patch below has been tested on x86 and x86-64.
      Signed-off-by: Ulrich Drepper <drepper@redhat.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Andi Kleen <ak@muc.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  11. 08 Feb 2006 (10 commits)
  12. 07 Feb 2006 (3 commits)
  13. 06 Feb 2006 (5 commits)
    • 170aa3d0
    • [PATCH] VFS: Ensure LOOKUP_CONTINUE flag is preserved by link_path_walk() · f55eab82
      Committed by Trond Myklebust
      When walking a path, the LOOKUP_CONTINUE flag is used by some filesystems
      (for instance NFS) in order to determine whether or not it is looking up
      the last component of the path.  If this is the case, it may have to look
      at the intent information in order to perform various tasks such as atomic
      open.
      
      A problem currently occurs when link_path_walk() hits a symlink.  In this
      case LOOKUP_CONTINUE may be cleared prematurely when we hit the end of the
      path passed by __vfs_follow_link() (i.e.  the end of the symlink path)
      rather than when we hit the end of the path passed by the user.
      
      The solution is to have link_path_walk() clear LOOKUP_CONTINUE if and only
      if that flag was unset when we entered the function.
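      The fix can be modelled with a toy flags word standing in for nameidata, the component walk elided; the key change is snapshotting the caller's LOOKUP_CONTINUE bit on entry and restoring it at the last component instead of unconditionally clearing it:

```c
#define LOOKUP_CONTINUE 0x0004

/* Sketch only: returns the flags word as it would look on exit from
 * link_path_walk(), given the flags it was entered with. */
static unsigned int link_path_walk_flags(unsigned int flags)
{
	unsigned int on_entry = flags & LOOKUP_CONTINUE;

	/* ... intermediate components are walked with the flag set ... */
	flags |= LOOKUP_CONTINUE;

	/* Last component: clear the flag only if we set it ourselves, so
	 * a symlink walk started mid-path keeps the caller's setting. */
	if (!on_entry)
		flags &= ~LOOKUP_CONTINUE;
	return flags;
}
```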
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      Cc: Al Viro <viro@ftp.linux.org.uk>
      Cc: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] jbd: fix transaction batching · fe1dcbc4
      Committed by Andrew Morton
      Ben points out that:
      
        When writing files out using O_SYNC, jbd's 1 jiffy delay results in a
        significant drop in throughput as the disk sits idle.  The patch below
        results in a 4-5x performance improvement (from 6.5MB/s to ~24-30MB/s on my
        IDE test box) when writing out files using O_SYNC.
      
      So optimise the batching code by omitting it entirely if the process which is
      doing a sync write is the same as the one which did the most recent sync
      write.  If that's true, we're unlikely to get any other processes joining the
      transaction.
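      A toy model of that optimisation (names are illustrative, not jbd's): track the pid of the last synchronous writer, and if the same process syncs again, skip the one-jiffy batching sleep, since no other process is likely to join the transaction:

```c
#include <sys/types.h>

static pid_t last_sync_writer = -1;

/* Returns 1 if the commit should sleep a jiffy hoping other writers
 * join, 0 if it should commit immediately. */
static int should_sleep_for_batching(pid_t pid)
{
	int sleep = (pid != last_sync_writer);

	last_sync_writer = pid;
	return sleep;
}
```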
      
      (Has been in -mm for ages - it took me a long time to get on to performance
      testing it)
      
      Numbers, on write-cache-disabled IDE:
      
      /usr/bin/time -p synctest -n 10 -uf -t 1 -p 1 dir-name
      
      Unpatched:
      	40 seconds
      Patched:
      	35 seconds
      Batching disabled:
      	35 seconds
      
      This is the problematic single-process-doing-fsync case.  With multiple
      fsyncing processes the numbers are AFAICT unaltered by the patch.
      
      Aside: performance testing and instrumentation shows that the transaction
      batching almost doesn't help (testing with synctest -n 1 -uf -t 100 -p 10
      dir-name on non-writeback-caching IDE).  This is because by the time one
      process is running a synchronous commit, a bunch of other processes already
      have a transaction handle open, so they're all going to batch into the same
      transaction anyway.
      
      The batching seems to offer maybe 5-10% speedup with this workload, but I'm
      pretty sure it was more important than that when it was first developed 4-odd
      years ago...
      
      Cc: "Stephen C. Tweedie" <sct@redhat.com>
      Cc: Benjamin LaHaise <bcrl@kvack.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] fuse: fix request_end() vs fuse_reset_request() race · 7128ec2a
      Committed by Miklos Szeredi
      The last fix for this function in fact opened up a much more often
      triggering race.
      
      It was uncommented, tricky code that was buggy.  This patch adds a
      comment, makes the code less tricky, and fixes the bug.
      Signed-off-by: Miklos Szeredi <miklos@szeredi.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] percpu data: only iterate over possible CPUs · 88a2a4ac
      Committed by Eric Dumazet
      percpu_data blindly allocates bootmem memory to store NR_CPUS instances of
      cpudata, instead of allocating memory only for possible cpus.
      
      As a preparation for changing that, we need to convert various 0 -> NR_CPUS
      loops to use for_each_cpu().
      
      (The above only applies to users of asm-generic/percpu.h.  powerpc has gone it
      alone and is presently only allocating memory for present CPUs, so it's
      currently corrupting memory).
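      The shape of the conversion can be sketched in userspace, with the possible-cpu map modelled as a bitmask (NR_CPUS and the names here are illustrative; the kernel iterates with for_each_cpu()):

```c
#define NR_CPUS 8

/* Sum per-cpu counters, visiting only the cpus marked possible in the
 * mask, rather than blindly looping 0..NR_CPUS. */
static long sum_percpu_counters(const long counters[NR_CPUS],
				unsigned int possible_mask)
{
	long total = 0;

	for (int cpu = 0; cpu < NR_CPUS; cpu++)	/* for_each_cpu() analogue */
		if (possible_mask & (1u << cpu))
			total += counters[cpu];
	return total;
}
```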
      Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: James Bottomley <James.Bottomley@steeleye.com>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Cc: Jens Axboe <axboe@suse.de>
      Cc: Anton Blanchard <anton@samba.org>
      Acked-by: William Irwin <wli@holomorphy.com>
      Cc: Andi Kleen <ak@muc.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  14. 04 Feb 2006 (1 commit)