1. 16 January 2009, 1 commit
  2. 14 January 2009, 26 commits
  3. 10 January 2009, 9 commits
  4. 09 January 2009, 4 commits
    • [XFS] use scalable vmap API · 0087167c
      Committed by Nick Piggin
      Implement XFS's large buffer support with the new vmap APIs. See the vmap
      rewrite (db64fe02) for some numbers. The biggest improvement that comes from
      using the new APIs is avoiding the global KVA allocation lock on every call.
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Reviewed-by: Christoph Hellwig <hch@infradead.org>
      Signed-off-by: Lachlan McIlroy <lachlan@sgi.com>
      0087167c
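
      A minimal sketch of the kind of change this commit makes: mapping a multi-page
      buffer through the per-CPU vm_map_ram()/vm_unmap_ram() interface from the vmap
      rewrite instead of plain vmap()/vunmap(). The structure and helper names
      (my_buf, map_buffer_pages) are hypothetical, not the actual xfs_buf code, and
      the vm_map_ram() signature shown is the one from that era, which still took a
      pgprot_t argument.

      #include <linux/mm.h>
      #include <linux/vmalloc.h>

      struct my_buf {                         /* hypothetical buffer descriptor */
              void            *b_addr;
              struct page     **b_pages;
              unsigned int    b_page_count;
      };

      /*
       * vm_map_ram() hands out KVA from per-CPU blocks instead of taking
       * the global vmap allocation lock on every call, and vm_unmap_ram()
       * defers the TLB flush until whole blocks are purged lazily.
       */
      static int map_buffer_pages(struct my_buf *bp)
      {
              /* old way: bp->b_addr = vmap(bp->b_pages, bp->b_page_count,
               *                            VM_MAP, PAGE_KERNEL); */
              bp->b_addr = vm_map_ram(bp->b_pages, bp->b_page_count,
                                      -1 /* any node */, PAGE_KERNEL);
              return bp->b_addr ? 0 : -ENOMEM;
      }

      static void unmap_buffer_pages(struct my_buf *bp)
      {
              /* old way: vunmap(bp->b_addr); */
              vm_unmap_ram(bp->b_addr, bp->b_page_count);
      }
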
    • [XFS] remove old vmap cache · 958f8c0e
      Committed by Nick Piggin
      XFS's vmap batching simply defers a number (up to 64) of vunmaps, and keeps
      track of them in a list. To purge the batch, it just goes through the list and
      calls vunmap on each one. This is pretty poor: a global TLB flush is generally
      still performed on each vunmap, with the most expensive parts of the operation
      being the broadcast IPIs and locking involved in the SMP callouts, and the
      locking involved in the vmap management -- none of these are avoided by just
      batching up the calls. I'm actually surprised it ever made much difference.
      (Now that the lazy vmap allocator is upstream, this description is not quite
      right, but the vunmap batching still doesn't seem to do much)
      
      Rip all this logic out of XFS completely. I will improve vmap performance
      and scalability directly in a subsequent patch.
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Reviewed-by: Christoph Hellwig <hch@infradead.org>
      Signed-off-by: Lachlan McIlroy <lachlan@sgi.com>
      958f8c0e
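
      For reference, a schematic of the batching pattern being ripped out, loosely
      modeled on the description above; the names (a_entry, purge_addresses, as_lock)
      are illustrative rather than the exact xfs_buf.c code. The point is that the
      purge loop still ends up in vunmap() once per entry, so the global locking and
      cross-CPU TLB flushes happen anyway.

      #include <linux/slab.h>
      #include <linux/spinlock.h>
      #include <linux/vmalloc.h>

      struct a_entry {                        /* one deferred unmap */
              struct a_entry  *next;
              void            *vm_addr;
      };

      static struct a_entry *free_head;       /* batch of up to 64 entries */
      static DEFINE_SPINLOCK(as_lock);

      static void purge_addresses(void)
      {
              struct a_entry *entry, *old;

              /* detach the whole batch under the lock ... */
              spin_lock(&as_lock);
              entry = free_head;
              free_head = NULL;
              spin_unlock(&as_lock);

              /* ... then call vunmap() on each one: every call still takes
               * the global vmap locks and broadcasts TLB-flush IPIs, which
               * is why batching buys so little. */
              while ((old = entry) != NULL) {
                      vunmap(entry->vm_addr);
                      entry = entry->next;
                      kfree(old);
              }
      }
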
    • [XFS] make xfs_ino_t an unsigned long long · 058652a3
      Committed by Christoph Hellwig
      Currently xfs_ino_t is defined as a u64, which can be either an unsigned
      long long or, on some 64-bit platforms, an unsigned long.  Just making
      it an unsigned long long means it's still always 64 bits wide, but we
      don't need to resort to casts to print it.
      
      Fixes a warning regression on 64-bit powerpc in current git.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Dave Chinner <david@fromorbit.com>
      Signed-off-by: Lachlan McIlroy <lachlan@sgi.com>
      058652a3
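
      The printk-format issue being fixed, in a minimal hypothetical example (not
      the actual XFS code): once the inode type is plain unsigned long long, %llu
      works on every architecture without a cast, whereas a u64 that resolves to
      unsigned long on 64-bit powerpc makes the same line warn unless the value is
      cast.

      #include <linux/kernel.h>

      typedef unsigned long long xfs_ino_t;   /* always matches %llu */

      static void report_inode(xfs_ino_t ino)
      {
              /*
               * If xfs_ino_t were a u64 that resolved to unsigned long here,
               * this line would need "(unsigned long long)ino" to avoid a
               * format-string warning on 64-bit powerpc.
               */
              printk(KERN_DEBUG "inode %llu\n", ino);
      }
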
    • [XFS] truncate readdir offsets to signed 32 bit values · 15440319
      Committed by Christoph Hellwig
      John Stanley reported EOVERFLOW errors in readdir from his self-built
      glibc.  I traced this down to glibc enabling d_off overflow checks
      in one of the about five million different getdents implementations.
      
      In 2.6.28 Dave Woodhouse moved our readdir double buffering, required
      for NFS4 readdirplus, into nfsd, and at that point we lost the capping
      of the directory offsets to 32-bit signed values.  John's glibc used
      getdents64 to implement even plain readdir for normal 32-bit offset
      dirents, and failed with EOVERFLOW only if this happened on the first
      dirent in a getdents call.  I managed to come up with a testcase that
      uses raw getdents and does the EOVERFLOW check manually.  We always hit
      it with our last entry due to the special end-of-directory marker.
      
      The patch below is a dumb version of just putting back the masking,
      to make sure we have the same behavior as in 2.6.27 and earlier.
      
      I will work on a better and cleaner fix for 2.6.30.
      Reported-by: John Stanley <jpsinthemix@verizon.net>
      Tested-by: John Stanley <jpsinthemix@verizon.net>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Dave Chinner <david@fromorbit.com>
      Signed-off-by: Lachlan McIlroy <lachlan@sgi.com>
      15440319
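
      A minimal sketch of the kind of testcase described above (not the author's
      actual program): read a directory with raw getdents64 and apply the
      32-bit-signed d_off check manually, which is the check glibc performs before
      failing readdir with EOVERFLOW.

      #define _GNU_SOURCE
      #include <dirent.h>
      #include <fcntl.h>
      #include <stdint.h>
      #include <stdio.h>
      #include <sys/syscall.h>
      #include <unistd.h>

      int main(int argc, char **argv)
      {
              char buf[65536];
              int fd = open(argc > 1 ? argv[1] : ".", O_RDONLY | O_DIRECTORY);
              long n;

              if (fd < 0)
                      return 1;

              /* Walk the raw dirents and flag any d_off that would not fit
               * in a signed 32-bit off_t. */
              while ((n = syscall(SYS_getdents64, fd, buf, sizeof(buf))) > 0) {
                      for (long off = 0; off < n; ) {
                              struct dirent64 *d = (struct dirent64 *)(buf + off);

                              if (d->d_off > INT32_MAX || d->d_off < INT32_MIN)
                                      printf("EOVERFLOW candidate: %s d_off=%lld\n",
                                             d->d_name, (long long)d->d_off);
                              off += d->d_reclen;
                      }
              }
              close(fd);
              return 0;
      }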