1. 27 July 2010 (19 commits)
  2. 20 July 2010 (2 commits)
    • xfs: track AGs with reclaimable inodes in per-ag radix tree · 16fd5367
      Committed by Dave Chinner
      https://bugzilla.kernel.org/show_bug.cgi?id=16348
      
      When the filesystem grows to a large number of allocation groups,
      the summing of reclaimable inodes gets expensive. In many cases,
      most AGs won't have any reclaimable inodes and so we are wasting CPU
      time aggregating over these AGs. This is particularly important for
      the inode shrinker that gets called frequently under memory
      pressure.
      
      To avoid the overhead, track AGs with reclaimable inodes in the
      per-ag radix tree so that we can find all the AGs with reclaimable
      inodes via a simple gang tag lookup. This involves setting the tag
      when the first reclaimable inode is tracked in the AG, and removing
      the tag when the last reclaimable inode is removed from the tree.
      Then the summation process becomes a loop walking the radix tree,
      summing AGs with the reclaim tag set (see the sketch below).
      
      This significantly reduces the overhead of scanning - a 6400 AG
      filesystem now uses only about 25% of a CPU in kswapd while slab
      reclaim progresses, instead of being permanently stuck at 100% CPU
      and making little progress. Clean filesystems will see no overhead,
      and the overhead increases only linearly with the number of dirty
      AGs.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
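      The tagging scheme described above maps onto the kernel's radix tree
      tag API roughly as follows. This is a minimal sketch, assuming a
      per-mount radix tree of per-AG structures; fs_mount, per_ag and all
      field names are illustrative, not the real XFS ones:

          #include <linux/radix-tree.h>
          #include <linux/spinlock.h>

          #define RECLAIM_TAG 0   /* stands in for XFS_ICI_RECLAIM_TAG */

          struct per_ag {
              unsigned long agno;          /* AG number == radix tree index */
              long          reclaim_count; /* reclaimable inodes in this AG */
          };

          struct fs_mount {                /* hypothetical mount context */
              spinlock_t             perag_lock;
              struct radix_tree_root perag_tree;
          };

          /* First reclaimable inode in an AG: tag the AG in the per-mount tree. */
          static void ag_set_reclaim_tag(struct fs_mount *mp, unsigned long agno)
          {
              spin_lock(&mp->perag_lock);
              radix_tree_tag_set(&mp->perag_tree, agno, RECLAIM_TAG);
              spin_unlock(&mp->perag_lock);
          }

          /* Last reclaimable inode removed from the AG: clear the tag again. */
          static void ag_clear_reclaim_tag(struct fs_mount *mp, unsigned long agno)
          {
              spin_lock(&mp->perag_lock);
              radix_tree_tag_clear(&mp->perag_tree, agno, RECLAIM_TAG);
              spin_unlock(&mp->perag_lock);
          }

          /* The summation becomes a gang lookup that only visits tagged AGs. */
          static long count_reclaimable(struct fs_mount *mp)
          {
              struct per_ag *pag[16];
              unsigned long agno = 0;
              long total = 0;
              int found, i;

              while ((found = radix_tree_gang_lookup_tag(&mp->perag_tree,
                              (void **)pag, agno, 16, RECLAIM_TAG)) > 0) {
                  for (i = 0; i < found; i++)
                      total += pag[i]->reclaim_count;
                  agno = pag[found - 1]->agno + 1;
              }
              return total;
          }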
    • xfs: convert inode shrinker to per-filesystem contexts · 70e60ce7
      Committed by Dave Chinner
      Now that the shrinker passes us a context, wire up a shrinker context
      per filesystem. This allows us to remove the global mount list and
      the locking problems it introduced. It also means that a shrinker
      call does not need to traverse clean filesystems before finding a
      filesystem with reclaimable inodes.  This significantly reduces
      scanning overhead when many filesystems are present (see the
      registration sketch below).
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
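      A minimal sketch of the per-filesystem wiring, assuming the
      context-aware shrinker API added by the mm commit below; fs_mount and
      fs_shrink_inodes are hypothetical names, not the actual XFS ones:

          #include <linux/mm.h>

          struct fs_mount {
              struct shrinker inode_shrink;   /* embedded, one per mount */
              /* ... per-filesystem inode cache state ... */
          };

          static int fs_shrink_inodes(struct shrinker *shrink, int nr_to_scan,
                                      gfp_t gfp_mask);

          /* Register at mount time: no global mount list, no global lock. */
          static void fs_register_shrinker(struct fs_mount *mp)
          {
              mp->inode_shrink.shrink = fs_shrink_inodes;
              mp->inode_shrink.seeks = DEFAULT_SEEKS;
              register_shrinker(&mp->inode_shrink);
          }

          /* Unregister at unmount, before the inode cache is torn down. */
          static void fs_unregister_shrinker(struct fs_mount *mp)
          {
              unregister_shrinker(&mp->inode_shrink);
          }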
  3. 19 July 2010 (1 commit)
    • mm: add context argument to shrinker callback · 7f8275d0
      Committed by Dave Chinner
      The current shrinker implementation requires the registered callback
      to have global state to work from. This makes it difficult to shrink
      caches that are not global (e.g. per-filesystem caches). Pass the
      shrinker structure to the callback so that users can embed it in the
      context the shrinker needs to operate on, and get back to that
      context in the callback via container_of() (see the sketch below).
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
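      The container_of() pattern this enables looks roughly like the
      following sketch, reusing the hypothetical fs_mount from the entry
      above; fs_reclaim_inodes and fs_count_reclaimable are invented names:

          #include <linux/kernel.h>
          #include <linux/mm.h>

          static int fs_shrink_inodes(struct shrinker *shrink, int nr_to_scan,
                                      gfp_t gfp_mask)
          {
              /* Recover the per-filesystem context the shrinker is embedded in. */
              struct fs_mount *mp = container_of(shrink, struct fs_mount,
                                                 inode_shrink);

              if (nr_to_scan)
                  fs_reclaim_inodes(mp, nr_to_scan, gfp_mask);  /* hypothetical */

              /* The callback reports how many reclaimable objects remain. */
              return fs_count_reclaimable(mp);                  /* hypothetical */
          }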
  4. 24 June 2010 (3 commits)
    • xfs: remove block number from inode lookup code · 7b6259e7
      Committed by Dave Chinner
      The block number comes from bulkstat-based inode lookups to shortcut
      the mapping calculations. We are not able to trust anything from
      bulkstat, so drop the block number as well so that the correct
      lookups and mappings are always done.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
    • xfs: rename XFS_IGET_BULKSTAT to XFS_IGET_UNTRUSTED · 1920779e
      Committed by Dave Chinner
      Inode numbers may come from somewhere external to the filesystem
      (e.g. file handles, bulkstat information) and so are inherently
      untrusted. Rename the flag we use for these lookups to make it
      obvious we are doing a lookup of an untrusted inode number and need
      to verify it completely before trying to read it from disk.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
    • xfs: validate untrusted inode numbers during lookup · 7124fe0a
      Committed by Dave Chinner
      When we decode a handle or do a bulkstat lookup, we are using an
      inode number we cannot trust to be valid. If we are deleting inode
      chunks from disk (default noikeep mode), then we cannot trust the on
      disk inode buffer for any given inode number to correctly reflect
      whether the inode has been unlinked, as neither the di_mode nor the
      generation number may have been updated on disk.
      
      This is due to the fact that when we delete an inode chunk, we do
      not write the clusters back to disk when they are removed - instead
      we mark them stale to avoid them being written back potentially over
      the top of something that has been subsequently allocated at that
      location. The result is that we can have locations on disk that look
      like they contain valid inodes but in reality do not. Hence we
      cannot simply convert the inode number to a block number and read
      the location from disk to determine if the inode is valid or not.
      
      As a result, an XFS_IGET_BULKSTAT lookup needs to actually look the
      inode up in the inode allocation btree to determine if the inode
      number is valid or not.
      
      It should be noted that even on ikeep filesystems there is the
      possibility that blocks on disk may look like valid inode clusters,
      e.g. if there are filesystem images hosted on the filesystem. Hence
      even for ikeep filesystems we really need to validate that the inode
      number is valid before issuing the inode buffer read (see the sketch
      below).
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
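      A sketch of the validation described above; the inobt helpers and
      types here are invented for illustration and only approximate the
      real inode btree lookup code:

          #include <linux/errno.h>
          #include <linux/types.h>

          /* Illustrative: one chunk record from the inode allocation btree. */
          struct inobt_rec {
              u64 startino;   /* first inode number in the chunk */
              int count;      /* number of inodes in the chunk */
          };

          /*
           * Validate an untrusted inode number against the inode allocation
           * btree before trusting any block mapping derived from it.
           */
          static int imap_untrusted(struct fs_mount *mp, u64 ino,
                                    struct inode_map *imap)
          {
              struct inobt_rec rec;

              /* Look up the chunk record that should contain 'ino'. */
              if (!inobt_lookup(mp, ino, &rec))            /* hypothetical */
                  return -EINVAL;  /* no such chunk: the number is garbage */

              /* The chunk exists; check that 'ino' lies inside it. */
              if (ino < rec.startino || ino >= rec.startino + rec.count)
                  return -EINVAL;

              /* Only now compute the mapping and read the inode buffer. */
              return imap_from_rec(mp, &rec, ino, imap);   /* hypothetical */
          }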
  5. 23 June 2010 (1 commit)
  6. 24 June 2010 (1 commit)
  7. 09 June 2010 (1 commit)
  8. 03 June 2010 (3 commits)
    • xfs: improve xfs_isilocked · f9369729
      Committed by Christoph Hellwig
      Use rwsem_is_locked to make the assertions for shared locks work (see
      the sketch below).
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      
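      A sketch of the improvement, assuming the inode lock is backed by an
      rw_semaphore plus a writer flag (the real code goes through XFS's
      mrlock wrapper; the names here are illustrative):

          #include <linux/rwsem.h>

          struct ilock {
              struct rw_semaphore rwsem;
              int                 writer;  /* set while held for write */
          };

          static int ilock_held(struct ilock *lock, int want_exclusive)
          {
              if (want_exclusive)
                  return lock->writer;
              /*
               * A shared holder leaves no owner record, but rwsem_is_locked()
               * returns true whenever the rwsem is held at all, which is
               * exactly what an "at least shared" assertion needs.
               */
              return rwsem_is_locked(&lock->rwsem);
          }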
    • xfs: skip writeback from reclaim context · 070ecdca
      Committed by Christoph Hellwig
      Allowing writeback from reclaim context causes massive problems with stack
      overflows, as we can call into the writeback code, which tends to be a
      heavy stack user both in the generic code and in XFS, from random contexts
      that perform memory allocations.
      
      Follow the example of btrfs (and, in slightly different form, ext4) and
      refuse to write out data from reclaim context (see the sketch below).
      This issue should really be handled by the VM so that we can tune better
      for this case, but until we get it sorted out there we have to hack around
      this in each filesystem with a complex writeback path.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      
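      The refusal amounts to an early bail-out in ->writepage along these
      lines; this is a sketch of the pattern, not the literal XFS change,
      and fs_do_writepage is a hypothetical stand-in:

          #include <linux/mm.h>
          #include <linux/pagemap.h>
          #include <linux/sched.h>
          #include <linux/writeback.h>

          static int fs_writepage(struct page *page,
                                  struct writeback_control *wbc)
          {
              /*
               * PF_MEMALLOC means we were called from memory reclaim,
               * potentially deep in an unpredictable call chain. Redirty the
               * page and let the flusher threads write it out later from a
               * shallow, dedicated stack.
               */
              if (current->flags & PF_MEMALLOC) {
                  redirty_page_for_writepage(wbc, page);
                  unlock_page(page);
                  return 0;
              }

              return fs_do_writepage(page, wbc);
          }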
    • xfs: fix race in inode cluster freeing failing to stale inodes · 5b257b4a
      Committed by Dave Chinner
      When an inode cluster is freed, it needs to mark all inodes in memory as
      XFS_ISTALE before marking the buffer as stale. This is needed because the inodes
      have a different life cycle to the buffer, and once the buffer is torn down
      during transaction completion, we must ensure none of the inodes get written
      back (which is what XFS_ISTALE does).
      
      Unfortunately, xfs_ifree_cluster() has some bugs that lead to inodes not being
      marked with XFS_ISTALE. This shows up when xfs_iflush() is called on these
      inodes either during inode reclaim or tail pushing on the AIL.  The buffer is
      read back, but no longer contains inodes and so triggers assert failures and
      shutdowns. This was reproducible with a run.dbench10 invocation from xfstests.
      
      There are two main causes of xfs_ifree_cluster() failing. The first is simple -
      it checks in-memory inodes it finds in the per-ag icache to see if they are
      clean without holding the flush lock. If they are clean, it skips them
      completely. However, if an inode is flushed delwri, it will
      appear clean, but is not guaranteed to be written back until the flush lock has
      been dropped. Hence we may have raced on the clean check and the inode may
      actually be dirty. Hence always mark inodes found in memory stale before we
      check properly if they are clean.
      
      The second is more complex, and makes the first problem easier to hit.
      Basically the in-memory inode scan is done with full knowledge it can be racing
      with inode flushing and AIL tail pushing, which means that inodes it can't
      get the flush lock on might not be attached to the buffer after the in-memory
      inode scan due to IO completion occurring. This is actually documented in the
      code as "needs better interlocking"; i.e. this is a zero-day bug.
      
      Effectively, the in-memory scan must be done while the inode buffer is locked
      and IO cannot be issued on it while we do the in-memory inode scan. This
      ensures that inodes we couldn't get the flush lock on are guaranteed to be
      attached to the cluster buffer, so we can then catch all in-memory inodes and
      mark them stale.
      
      Now that the inode cluster buffer is locked before the in-memory scan is
      done, there is no need for the two-phase update of the in-memory inodes,
      so simplify the code into two loops and remove the allocation of the
      temporary buffer used to hold locked inodes across the phases (a
      pseudocode sketch of the new ordering follows below).
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
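      The corrected ordering reads roughly like the pseudocode sketch below;
      all names are illustrative, not the real xfs_ifree_cluster() code:

          static void free_inode_cluster(struct fs_mount *mp, u64 cluster_blk,
                                         int ninodes)
          {
              struct fs_buf *bp;
              int i;

              /*
               * Lock the cluster buffer *before* scanning in-memory inodes so
               * that IO completion cannot detach inode log items from it
               * while the scan runs.
               */
              bp = read_and_lock_buf(mp, cluster_blk);

              for (i = 0; i < ninodes; i++) {
                  struct fs_inode *ip = icache_lookup(mp, cluster_blk, i);

                  if (!ip)
                      continue;
                  /*
                   * Mark the inode stale before any cleanliness check: a
                   * delwri-flushed inode can look clean while its flush lock
                   * is still held, so a clean-check-first ordering races.
                   */
                  mark_inode_stale(ip);
              }

              /* Stale the buffer itself; attached log items go with it. */
              mark_buf_stale(bp);
              release_buf(bp);
          }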
  9. 29 May 2010 (9 commits)