1. 19 May 2010 (36 commits)
  2. 30 April 2010 (1 commit)
    • xfs: add a shrinker to background inode reclaim · 9bf729c0
      Committed by Dave Chinner
      On low memory boxes or those with highmem, the kernel can OOM
      before background inode reclaim via xfssyncd kicks in. Add a
      shrinker to run inode reclaim so that reclaim is expedited when
      memory is low.
      
      This is more complex than it needs to be because the VM folk don't
      want a context added to the shrinker infrastructure. Hence we need
      to add a global list of XFS mount structures so the shrinker can
      traverse them.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      9bf729c0
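
      A minimal user-space sketch of the arrangement described above, not
      the actual kernel code: because the shrinker callback carries no
      per-filesystem context, reclaim has to walk a global, lock-protected
      list of mount structures. All names here (xfs_mount_model,
      register_mount, inode_reclaim_shrink) are hypothetical.

        #include <pthread.h>
        #include <stdio.h>

        struct xfs_mount_model {
            struct xfs_mount_model *next;   /* global list linkage */
            int reclaimable_inodes;         /* inodes waiting for background reclaim */
        };

        static struct xfs_mount_model *mount_list;  /* list the shrinker traverses */
        static pthread_mutex_t mount_list_lock = PTHREAD_MUTEX_INITIALIZER;

        static void register_mount(struct xfs_mount_model *mp)
        {
            pthread_mutex_lock(&mount_list_lock);
            mp->next = mount_list;
            mount_list = mp;
            pthread_mutex_unlock(&mount_list_lock);
        }

        /* Shrinker-style callback: no private context, so walk the global list. */
        static int inode_reclaim_shrink(int nr_to_scan)
        {
            struct xfs_mount_model *mp;
            int remaining = 0;

            pthread_mutex_lock(&mount_list_lock);
            for (mp = mount_list; mp; mp = mp->next) {
                int take = mp->reclaimable_inodes < nr_to_scan ?
                           mp->reclaimable_inodes : nr_to_scan;

                mp->reclaimable_inodes -= take;     /* "reclaim" some inodes */
                remaining += mp->reclaimable_inodes;
            }
            pthread_mutex_unlock(&mount_list_lock);
            return remaining;   /* report how much reclaimable work is left */
        }

        int main(void)
        {
            struct xfs_mount_model a = { .reclaimable_inodes = 100 };
            struct xfs_mount_model b = { .reclaimable_inodes = 40 };

            register_mount(&a);
            register_mount(&b);
            printf("reclaimable after one pass: %d\n", inode_reclaim_shrink(25));
            return 0;
        }
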
  3. 27 April 2010 (1 commit)
    • xfs: more swap extent fixes for dynamic fork offsets · dd77ef92
      Committed by Dave Chinner
      A new xfsqa test (226) with a prototype xfs_fsr change to try to
      handle dynamic fork offsets better triggers an assertion failure
      where the inode data fork is in btree format, yet there is room in
      the inode for it to be in extent format. The two inodes look like:
      
      before: ino 0x101 (target), num_extents 11, max in-fork extents 6, broot size 40, fork offset 96
      before: ino 0x115 (temp),   num_extents  5, max in-fork extents 3, broot size 40, fork offset 56
      after:  ino 0x101 (target), num_extents  5, max in-fork extents 6, broot size 40, fork offset 96
      after:  ino 0x115 (temp),   num_extents 11, max in-fork extents 3, broot size 40, fork offset 56
      
      Basically the target inode ends up with 5 extents in btree format,
      but it has space for 6 extents in extent format, so it ends up in
      the wrong format. Notably, the broot size is the same on both
      inodes, and that is where the kernel code goes wrong - the btree
      root will fit, so it lets the swap go ahead.
      
      The check should not allow the swap to take place if the number of
      extents in the btree-format fork is less than the number of extents
      that can fit in the inode in extent format. Adding that check
      prevents this swap, and the resulting corruption, from occurring.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      dd77ef92
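
      A stand-alone sketch of the check described above, using the figures
      from this commit message. The structure and helper names
      (inode_model, swap_target_ok, fork_space as a simplified stand-in
      for the space implied by the fork offset) are hypothetical, not the
      real inode fork fields checked in xfs_swap_extents(); the point is
      only the added rule for btree-format forks.

        #include <stdbool.h>
        #include <stdio.h>

        enum fork_format { FMT_EXTENTS, FMT_BTREE };

        struct inode_model {
            enum fork_format format;    /* current data fork format */
            int nextents;               /* extents currently in the data fork */
            int max_infork_extents;     /* extents that fit in extent format */
            int broot_bytes;            /* btree root size when in btree format */
            int fork_space;             /* bytes available for the fork root */
        };

        /* May src's data fork be swapped into dst without ending up malformed? */
        static bool swap_target_ok(const struct inode_model *src,
                                   const struct inode_model *dst)
        {
            if (src->format == FMT_BTREE) {
                /* existing check: the btree root must physically fit */
                if (src->broot_bytes > dst->fork_space)
                    return false;
                /*
                 * added check: if the btree fork holds fewer extents than the
                 * destination could hold in extent format, the result would be
                 * an inode in btree format that should be in extent format.
                 */
                if (src->nextents < dst->max_infork_extents)
                    return false;
            }
            return true;
        }

        int main(void)
        {
            /* figures from the commit: temp 0x115 swapping into target 0x101 */
            struct inode_model temp   = { FMT_BTREE, 5, 3, 40, 56 };
            struct inode_model target = { FMT_BTREE, 11, 6, 40, 96 };

            printf("swap allowed: %s\n",
                   swap_target_ok(&temp, &target) ? "yes" : "no");  /* prints "no" */
            return 0;
        }
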
  4. 17 April 2010 (2 commits)
    • xfs: don't warn on EAGAIN in inode reclaim · f1d486a3
      Committed by Dave Chinner
      Any inode reclaim flush that returns EAGAIN will result in the inode
      reclaim being attempted again later. There is no need to issue a
      warning into the logs about this situation.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Alex Elder <aelder@sgi.com>
      Signed-off-by: Alex Elder <aelder@sgi.com>
      f1d486a3
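
      A small sketch of the behavioural point above, with invented names
      rather than the actual reclaim code: EAGAIN from the flush just
      means the inode will be revisited on a later reclaim pass, so it is
      returned without logging anything, while other errors still warrant
      a warning.

        #include <errno.h>
        #include <stdio.h>

        /* Pretend flush: the inode is busy and cannot be flushed right now. */
        static int flush_inode(unsigned long ino)
        {
            (void)ino;
            return -EAGAIN;
        }

        static int reclaim_inode(unsigned long ino)
        {
            int error = flush_inode(ino);

            if (error == -EAGAIN)
                return error;   /* retried later; nothing worth logging */
            if (error)
                fprintf(stderr, "inode 0x%lx: flush failed, error %d\n",
                        ino, error);
            return error;
        }

        int main(void)
        {
            reclaim_inode(0x101);   /* no warning is printed for EAGAIN */
            return 0;
        }
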
    • xfs: ensure that sync updates the log tail correctly · b6f8dd49
      Committed by Dave Chinner
      Updates to the VFS layer removed an extra ->sync_fs call into the
      filesystem during the sync process (from the quota code).
      Unfortunately the sync code was unknowingly relying on this call to
      make sure metadata buffers were flushed via an xfs_buftarg_flush()
      call to move the tail of the log forward in memory before the final
      transactions of the sync process were issued.
      
      As a result, the old code would write a very recent log tail value
      to the log by the end of the sync process, and so a subsequent crash
      would leave nothing for log recovery to do. Hence in qa test 182,
      log recovery only replayed a small handful of inode fsync
      transactions in this case.
      
      However, with the removal of the extra ->sync_fs call, the log tail
      was now not moved forward with the inode fsync transactions near the
      end of the sync process because the first (and only) buftarg flush occurred
      after these transactions went to disk. The result is that log
      recovery now sees a large number of transactions for metadata that
      is already on disk.
      
      This usually isn't a problem, but when the transactions include
      inode chunk allocation, the inode create transactions and all
      subsequent changes are replayed as we cannot rely on what is on
      disk being valid. As a result, if the inode was written and contains
      unlogged changes, the unlogged changes are lost, thereby violating
      sync semantics.
      
      The fix is to always issue a transaction after the buftarg flush
      occurs if the log is not idle or covered. This results in a dummy
      transaction being written that contains the up-to-date log tail
      value, which will be very recent. Indeed, it will be at least as
      recent as the old code would have left on disk, so log recovery
      will behave exactly as it used to in this situation.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Alex Elder <aelder@sgi.com>
      b6f8dd49
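
      A stand-alone model of the fix described above, with invented names
      (log_model, flush_buftarg, write_dummy_transaction): after the
      buftarg flush has moved the in-memory tail forward, a dummy
      transaction is issued whenever the log is neither idle nor covered,
      so the tail value that reaches disk is the up-to-date one.

        #include <stdbool.h>
        #include <stdio.h>

        struct log_model {
            long tail_lsn_on_disk;  /* tail value last written into the log */
            long tail_lsn_in_mem;   /* tail after metadata buffers were flushed */
            bool idle;              /* no active or dirty log items */
            bool covered;           /* log already covered by dummy transactions */
        };

        static void flush_buftarg(struct log_model *log)
        {
            /* flushing metadata buffers lets the in-memory tail move forward */
            log->tail_lsn_in_mem += 100;
        }

        static void write_dummy_transaction(struct log_model *log)
        {
            /* the dummy transaction carries the current tail value to disk */
            log->tail_lsn_on_disk = log->tail_lsn_in_mem;
        }

        static void sync_fs(struct log_model *log)
        {
            flush_buftarg(log);
            /* the fix: don't stop here if the tail on disk is now stale */
            if (!log->idle && !log->covered)
                write_dummy_transaction(log);
        }

        int main(void)
        {
            struct log_model log = { .tail_lsn_on_disk = 500,
                                     .tail_lsn_in_mem = 500 };

            sync_fs(&log);
            printf("tail on disk after sync: %ld\n", log.tail_lsn_on_disk);  /* 600 */
            return 0;
        }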