1. 09 Sep 2014, 1 commit
    • xfs: export log_recovery_delay to delay mount time log recovery · 2e227178
      Brian Foster authored
      XFS log recovery has been discovered to have race conditions with
      buffers when I/O errors occur. External tools are available to simulate
      I/O errors to XFS, but this alone is not sufficient for testing log
      recovery. XFS unconditionally resets the inactive region of the log
      prior to log recovery to avoid confusion over processing any partially
      written log records that might have been written before an unclean
      shutdown. Therefore, unconditional write I/O failures at mount time are
      caught by the reset sequence rather than log recovery and hinder the
      ability to test the latter.
      
      The device-mapper dm-flakey module uses an up/down timer to define a
      cycle for when to fail I/Os. Create a pre log recovery delay tunable
      that can be used to coordinate XFS log recovery with I/O errors
      simulated by dm-flakey. This facilitates coordination in userspace that
      allows the reset of stale log blocks to succeed and writes due to log
      recovery to fail. For example, define a dm-flakey instance with an
      uptime long enough to allow log reset to succeed and a log recovery
      delay long enough to allow the dm-flakey uptime to expire.
      
      The 'log_recovery_delay' sysfs tunable is exported under
      /sys/fs/xfs/debug and is only enabled for kernels compiled in XFS debug
      mode. The value is exported in units of seconds and allows for a delay
      of up to 60 seconds. Note that this is for XFS debug and test
      instrumentation purposes only and should not be used by applications. No
      delay is enabled by default.
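      
      A minimal userspace sketch of the documented behaviour (illustrative only;
      in the kernel the value lives in /sys/fs/xfs/debug/log_recovery_delay and
      the delay is applied before recovery starts):
      
      /* Sketch: a delay tunable in seconds, capped at 60, 0 = disabled. */
      #include <stdio.h>
      #include <unistd.h>
      
      static unsigned int log_recovery_delay;    /* what the sysfs file would hold */
      
      static int set_log_recovery_delay(unsigned int secs)
      {
              if (secs > 60)                     /* documented upper bound */
                      return -1;
              log_recovery_delay = secs;
              return 0;
      }
      
      static void run_log_recovery(void)
      {
              if (log_recovery_delay) {
                      /* window in which the dm-flakey "up" interval can expire,
                       * so the later recovery writes hit the failing phase */
                      printf("delaying log recovery for %u seconds\n",
                             log_recovery_delay);
                      sleep(log_recovery_delay);
              }
              /* the real log recovery would run here */
      }
      
      int main(void)
      {
              set_log_recovery_delay(15);        /* hypothetical value */
              run_log_recovery();
              return 0;
      }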
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      2e227178
  2. 04 Aug 2014, 2 commits
    • xfs: dquot recovery needs verifiers · ad3714b8
      Dave Chinner authored
      dquot recovery should add verifiers to the dquot buffers that it
      recovers changes into. Unfortunately, it doesn't attach the
      verifiers to the buffers in a consistent manner. For example,
      xlog_recover_dquot_pass2() reads dquot buffers without a verifier
      and then writes them back without ever having attached a verifier
      to the buffer.
      
      Further, dquot buffer recovery may write a dquot buffer that has not
      been modified, or indeed should not be written at all because quotas
      are not enabled and hence changes to the buffer were not replayed. In
      this case, we again write buffers without verifiers attached because
      that doesn't happen until after the buffer changes have been replayed.
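      
      A stripped-down sketch of why that matters, assuming a simplified buffer
      type with optional verifier ops (the names are illustrative, not the XFS
      structures):
      
      /* Sketch: the CRC is only recalculated on write if verifier ops are
       * attached, so writing a modified buffer without ops leaves a stale CRC. */
      #include <stdio.h>
      
      struct buf;
      struct buf_ops {
              void (*verify_write)(struct buf *bp);   /* recompute CRC before I/O */
      };
      struct buf {
              unsigned int data;                      /* stand-in for dquot contents */
              unsigned int crc;
              const struct buf_ops *ops;              /* NULL if never attached */
      };
      
      static unsigned int checksum(unsigned int d) { return d ^ 0x5a5a5a5a; }
      static void dquot_verify_write(struct buf *bp) { bp->crc = checksum(bp->data); }
      static const struct buf_ops dquot_ops = { .verify_write = dquot_verify_write };
      
      static void write_buf(struct buf *bp)
      {
              if (bp->ops)
                      bp->ops->verify_write(bp);      /* CRC kept in sync */
              printf("write: data=%u crc %s\n", bp->data,
                     bp->crc == checksum(bp->data) ? "good" : "STALE");
      }
      
      int main(void)
      {
              struct buf bp = { .data = 1, .crc = checksum(1), .ops = 0 };
      
              bp.data = 2;            /* "recovery" modifies the buffer ...      */
              write_buf(&bp);         /* ... no verifier attached: stale CRC     */
              bp.ops = &dquot_ops;
              write_buf(&bp);         /* with ops attached the CRC is recomputed */
              return 0;
      }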
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      ad3714b8
    • xfs: ensure verifiers are attached to recovered buffers · 67dc288c
      Dave Chinner authored
      Crash testing of CRC enabled filesystems has resulted in a number of
      reports of bad CRCs being detected after the filesystem was mounted.
      Errors such as the following were being seen:
      
      XFS (sdb3): Mounting V5 Filesystem
      XFS (sdb3): Starting recovery (logdev: internal)
      XFS (sdb3): Metadata CRC error detected at xfs_agf_read_verify+0x5a/0x100 [xfs], block 0x1
      XFS (sdb3): Unmount and run xfs_repair
      XFS (sdb3): First 64 bytes of corrupted metadata buffer:
      ffff880136ffd600: 58 41 47 46 00 00 00 01 00 00 00 00 00 0f aa 40  XAGF...........@
      ffff880136ffd610: 00 02 6d 53 00 02 77 f8 00 00 00 00 00 00 00 01  ..mS..w.........
      ffff880136ffd620: 00 00 00 01 00 00 00 00 00 00 00 00 00 00 00 03  ................
      ffff880136ffd630: 00 00 00 04 00 08 81 d0 00 08 81 a7 00 00 00 00  ................
      XFS (sdb3): metadata I/O error: block 0x1 ("xfs_trans_read_buf_map") error 74 numblks 1
      
      The errors were typically being seen in AGF, AGI and their related
      btree block buffers some time after log recovery had run. Often it
      wasn't until later subsequent mounts that the problem was
      discovered. The common symptom was a buffer with the correct
      contents, but a CRC and an LSN that matched an older version of the
      contents.
      
      Some debug added to _xfs_buf_ioapply() indicated that buffers were
      being written without verifiers attached to them from log recovery,
      and Jan Kara isolated the cause to log recovery readahead and its
      interactions with buffers that had a more recent LSN on disk than
      the transaction being recovered. In this case, the buffer did not
      get a verifier attached, and so when the second phase of log
      recovery ran and recovered EFIs and unlinked inodes, the buffers
      were modified and written without the verifier running. Hence they
      had up to date contents, but stale LSNs and CRCs.
      
      Fix it by attaching verifiers to buffers we skip due to future LSN
      values so they don't escape into the buffer cache without the
      correct verifier attached.
      
      This patch is based on analysis and a patch from Jan Kara.
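      
      A hedged, compile-only sketch of the decision point the fix adds, using
      made-up types rather than the real xlog recovery code:
      
      /* Attach the verifier even when replay is skipped because the on-disk
       * LSN is newer than the transaction being recovered. */
      struct buf_ops;                         /* verifier ops, as in the sketch above */
      struct buf { const struct buf_ops *ops; };
      typedef long long lsn_t;
      
      static int recover_buf(struct buf *bp, lsn_t disk_lsn, lsn_t tx_lsn,
                             const struct buf_ops *ops)
      {
              bp->ops = ops;          /* attach in both cases, so the buffer never
                                         escapes into the cache unverified */
              if (disk_lsn >= tx_lsn)
                      return 0;       /* future LSN on disk: do not overwrite */
              /* ... replay the logged changes into the buffer here ... */
              return 1;
      }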
      
      cc: <stable@vger.kernel.org>
      Reported-by: Jan Kara <jack@suse.cz>
      Reported-by: Fanael Linithien <fanael4@gmail.com>
      Reported-by: Grozdan <neutrino8@gmail.com>
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      67dc288c
  3. 25 Jun 2014, 1 commit
    • xfs: global error sign conversion · 2451337d
      Dave Chinner authored
      Convert all the errors in the core XFS code to negative error signs,
      like the rest of the kernel, and remove all the sign conversion we
      do in the interface layers.
      
      Errors for conversion (and comparison) found via searches like:
      
      $ git grep " E" fs/xfs
      $ git grep "return E" fs/xfs
      $ git grep " E[A-Z].*;$" fs/xfs
      
      Negation points found via searches like:
      
      $ git grep "= -[a-z,A-Z]" fs/xfs
      $ git grep "return -[a-z,A-D,F-Z]" fs/xfs
      $ git grep " -[a-z].*;" fs/xfs
      
      [ with some bits I missed from Brian Foster ]
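      
      A tiny illustration of the convention being converted to (a generic,
      compile-only example, not actual XFS functions):
      
      #include <errno.h>
      
      /* Core functions now return negative errnos directly, like the rest
       * of the kernel ... */
      static int core_lookup(int have_it)
      {
              if (!have_it)
                      return -ENOENT;         /* was: return ENOENT; */
              return 0;
      }
      
      /* ... so the interface layers no longer negate on the way out. */
      static int vfs_facing_lookup(int have_it)
      {
              return core_lookup(have_it);    /* was: return -core_lookup(have_it); */
      }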
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      2451337d
  4. 22 Jun 2014, 2 commits
  5. 24 Apr 2014, 1 commit
  6. 14 Apr 2014, 2 commits
  7. 17 Dec 2013, 1 commit
    • xfs: remove xfsbdstrat error · 83a0adc3
      Christoph Hellwig authored
      The xfsbdstrat helper is a small but useless wrapper for xfs_buf_iorequest that
      handles the case of a shut down filesystem.  Most of the users have private,
      uncached buffers that can just be freed in this case, but the complex error
      handling in xfs_bioerror_relse messes up the case when it's called without
      a locked buffer.
      
      Remove xfsbdstrat and opencode the error handling in the callers.  All but
      one can simply return an error and don't need to deal with buffer state,
      and the one caller that cares about the buffer state could do with a major
      cleanup as well, but we'll defer that to later.
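      
      A rough, compile-only sketch of the open-coded pattern the callers end up
      with (the helper names here are hypothetical, not the actual XFS call
      sites):
      
      #include <errno.h>
      
      struct mount;
      struct buf;
      int  fs_is_shutdown(struct mount *mp);      /* illustrative helpers only */
      void free_uncached_buf(struct buf *bp);
      int  submit_buf_io(struct buf *bp);
      
      /* Each caller checks for shutdown itself: free the private, uncached
       * buffer and simply return an error instead of going through a wrapper. */
      static int submit_private_buffer(struct mount *mp, struct buf *bp)
      {
              if (fs_is_shutdown(mp)) {
                      free_uncached_buf(bp);
                      return -EIO;
              }
              return submit_buf_io(bp);
      }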
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Ben Myers <bpm@sgi.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
      83a0adc3
  8. 13 Dec 2013, 3 commits
  9. 06 Dec 2013, 1 commit
  10. 24 Oct 2013, 5 commits
    • xfs: decouple inode and bmap btree header files · a4fbe6ab
      Dave Chinner authored
      Currently the xfs_inode.h header has a dependency on the definition
      of the BMAP btree records as the inode fork includes an array of
      xfs_bmbt_rec_host_t objects in its definition.
      
      Move all the btree format definitions from xfs_btree.h,
      xfs_bmap_btree.h, xfs_alloc_btree.h and xfs_ialloc_btree.h to
      xfs_format.h to continue the process of centralising the on-disk
      format definitions. With this done, the xfs inode definitions are no
      longer dependent on btree header files.
      
      This enables a massive culling of unnecessary includes, with close to
      200 #include directives removed from the XFS kernel code base.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Ben Myers <bpm@sgi.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
      a4fbe6ab
    • xfs: decouple log and transaction headers · 239880ef
      Dave Chinner authored
      xfs_trans.h has a dependency on xfs_log.h for a couple of
      structures. Most code that does transactions doesn't need to know
      anything about the log, but this dependency means that they have to
      include xfs_log.h. Decouple the xfs_trans.h and xfs_log.h header
      files and clean up the includes to be in dependency order.
      
      In doing this, remove the direct include of xfs_trans_reserve.h from
      xfs_trans.h so that we remove the dependency between xfs_trans.h and
      xfs_mount.h. Hence the xfs_trans.h include can be moved to indicate
      the actual dependencies that other header files have on it.
      
      Note that these are kernel only header files, so this does not
      translate to any userspace changes at all.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Ben Myers <bpm@sgi.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
      239880ef
    • xfs: split dquot buffer operations out · 9aede1d8
      Dave Chinner authored
      Parts of userspace want to be able to read and modify dquot buffers
      (e.g. xfs_db), so we need to split out the reading and writing of
      these buffers to make it easy to share code with libxfs in userspace.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Ben Myers <bpm@sgi.com>
      9aede1d8
    • xfs: unify directory/attribute format definitions · 57062787
      Dave Chinner authored
      The on-disk format definitions for the directory and attribute
      structures are spread across 3 header files right now, only one of
      which is dedicated to defining on-disk structures and their
      manipulation (xfs_dir2_format.h). Pull all the format definitions
      into a single header file - xfs_da_format.h - and switch all the
      code over to point at that.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Ben Myers <bpm@sgi.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
      57062787
    • xfs: create a shared header file for format-related information · 70a9883c
      Dave Chinner authored
      All of the buffer operations structures need to be exported for
      xfs_db, so move them all to a common location rather than spreading
      them all over the place. They verify the on-disk format, so while
      xfs_format.h might seem like a good place, the verifier code is not
      itself part of the on-disk format.
      
      Hence we need to create a new header file in which we centralise
      these related definitions. Start by moving the buffer operations
      structures, and then also move all the other definitions that have
      crept into xfs_log_format.h and xfs_format.h as there was no other
      shared header file to put them in.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Ben Myers <bpm@sgi.com>
      70a9883c
  11. 18 Oct 2013, 1 commit
  12. 05 Oct 2013, 2 commits
  13. 01 Oct 2013, 2 commits
  14. 25 Sep 2013, 1 commit
    • xfs: log recovery lsn ordering needs uuid check · 566055d3
      Dave Chinner authored
      After a fair number of xfstests runs, xfs/182 started to fail
      regularly with a corrupted directory - a directory read verifier was
      failing after recovery because it found a block with a XARM magic
      number (remote attribute block) rather than a directory data block.
      
      The first time I saw this repeated failure I did /something/ and the
      problem went away, so I was never able to find the underlying
      problem. Test xfs/182 failed again today, and I found the root
      cause before I did /something else/ that made it go away.
      
      Tracing indicated that the block in question was being correctly
      logged, the log was being flushed by sync, but the buffer was not
      being written back before the shutdown occurred. Tracing also
      indicated that log recovery was also reading the block, but then
      never writing it before log recovery invalidated the cache,
      indicating that it was not modified by log recovery.
      
      More detailed analysis of the corpse indicated that the filesystem
      had a uuid of "a4131074-1872-4cac-9323-2229adbcb886" but the XARM
      block had a uuid of "8f32f043-c3c9-e7f8-f947-4e7f989c05d3", which
      indicated it was a block from an older filesystem. The reason that
      log recovery didn't replay it was that the LSN in the XARM block was
      larger than the LSN of the transaction being replayed, and so the
      block was not overwritten by log recovery.
      
      Hence, log recovery can't blindly trust the magic number and LSN in
      the block - it must verify that it belongs to the filesystem being
      recovered before using the LSN. i.e. if the UUIDs don't match, we
      need to unconditionally recover the change held in the log.
      
      This patch was first tested on a block device that was repeatedly
      causing xfs/182 to fail with the same failure on the same block with
      the same directory read corruption signature (i.e. XARM block). It
      did not fail, and hasn't failed since.
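      
      The decision described above, sketched as a standalone, compile-only
      function (types and names are illustrative):
      
      #include <string.h>
      
      enum { REPLAY, SKIP };
      
      /* Only trust the on-disk LSN if the block belongs to the filesystem
       * being recovered; a stale block from an older filesystem must be
       * recovered unconditionally. */
      static int recover_decision(const unsigned char disk_uuid[16],
                                  const unsigned char sb_uuid[16],
                                  long long disk_lsn, long long tx_lsn)
      {
              if (memcmp(disk_uuid, sb_uuid, 16) != 0)
                      return REPLAY;          /* foreign block: its LSN means nothing */
              if (disk_lsn >= tx_lsn)
                      return SKIP;            /* already newer than this transaction */
              return REPLAY;
      }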
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Ben Myers <bpm@sgi.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
      566055d3
  15. 12 Sep 2013, 1 commit
  16. 11 Sep 2013, 1 commit
    • xfs: recovery of swap extents operations for CRC filesystems · 638f4416
      Dave Chinner authored
      This is the recovery side of the btree block owner change operation
      performed by swapext on CRC enabled filesystems. We detect that an
      owner change is needed by the flag that has been placed on the inode
      log format flag field. Because the inode recovery is being replayed
      after the buffers that make up the BMBT in the given checkpoint, we
      can walk all the buffers and directly modify them when we see the
      flag set on an inode.
      
      Because the inode can be relogged and hence present in multiple
      checkpoints with the "change owner" flag set, we could do multiple
      passes across the inode to do this change. While this isn't optimal,
      we can't simply ignore the flag, as there may be multiple independent
      swap extent operations being replayed on the same inode in different
      checkpoints.
      
      Further, because the owner change operation uses ordered buffers, we
      might have buffers that are newer on disk than the current
      checkpoint and so already have the owner changed in them. Hence we
      cannot just peek at a buffer in the tree and check that it has the
      correct owner and assume that the change was completed.
      
      So, for the moment just brute force the owner change every time we
      see an inode with the flag set. Note that we have to be careful here
      because the owner of the buffers may point to either the old owner
      or the new owner. Currently the verifier can't verify the owner
      directly, so there is no failure case here right now. If we verify
      the owner exactly in future, then we'll have to take this into
      account.
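      
      A very rough, compile-only sketch of that brute-force pass (hypothetical
      structures; the real code walks the inode's BMBT buffers during replay):
      
      struct btree_block { unsigned long long owner; };
      
      /* If a recovered inode log item carries the "change owner" flag, walk
       * its btree block buffers and rewrite the owner field.  The pass is
       * idempotent, so repeating it for every checkpoint is safe, just not
       * optimal. */
      static void recover_owner_change(int owner_change_flagged,
                                       unsigned long long new_owner,
                                       struct btree_block **blocks, int nblocks)
      {
              int i;
      
              if (!owner_change_flagged)
                      return;
              for (i = 0; i < nblocks; i++)
                      blocks[i]->owner = new_owner;   /* old or new owner: overwrite */
      }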
      
      This was tested in terms of normal operation via xfstests - all of
      the fsr tests now pass without failure. However, we really need to
      modify xfs/227 to stress v3 inodes correctly to ensure we fully
      cover this case for v5 filesystems.
      
      In terms of recovery testing, I used a hacked version of xfs_fsr
      that held the temp inode open for a few seconds before exiting, so
      that the filesystem could be shut down with owner change recovery
      flags still set on at least the temp inode. fsr leaves the temp
      inode unlinked and in btree format, so this was necessary for the
      owner change to be reliably replayed.
      
      logprint confirmed the tmp inode in the log had the correct flag set:
      
      INO: cnt:3 total:3 a:0x69e9e0 len:56 a:0x69ea20 len:176 a:0x69eae0 len:88
              INODE: #regs:3   ino:0x44  flags:0x209   dsize:88
      	                                 ^^^^^
      
      0x200 is set, indicating a data fork owner change needed to be
      replayed on inode 0x44.  A printk in the recovery code confirmed that
      the inode change was recovered:
      
      XFS (vdc): Mounting Filesystem
      XFS (vdc): Starting recovery (logdev: internal)
      recovering owner change ino 0x44
      XFS (vdc): Version 5 superblock detected. This kernel has EXPERIMENTAL support enabled!
      Use of these features in this kernel is at your own risk!
      XFS (vdc): Ending recovery (logdev: internal)
      
      The script used to test this was:
      
      $ cat ./recovery-fsr.sh
      #!/bin/bash
      
      dev=/dev/vdc
      mntpt=/mnt/scratch
      testfile=$mntpt/testfile
      
      umount $mntpt
      mkfs.xfs -f -m crc=1 $dev
      mount $dev $mntpt
      chmod 777 $mntpt
      
      for i in `seq 10000 -1 0`; do
              xfs_io -f -d -c "pwrite $(($i * 4096)) 4096" $testfile > /dev/null 2>&1
      done
      xfs_bmap -vp $testfile |head -20
      
      xfs_fsr -d -v $testfile &
      sleep 10
      /home/dave/src/xfstests-dev/src/godown -f $mntpt
      wait
      umount $mntpt
      
      xfs_logprint -t $dev |tail -20
      time mount $dev $mntpt
      xfs_bmap -vp $testfile
      umount $mntpt
      $
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Mark Tinguely <tinguely@sgi.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
      638f4416
  17. 10 Sep 2013, 1 commit
  18. 31 Aug 2013, 2 commits
    • xfs: inode buffers may not be valid during recovery readahead · d8914002
      Dave Chinner authored
      CRC enabled filesystems fail log recovery with 100% reliability on
      xfstests xfs/085 with the following failure:
      
      XFS (vdb): Mounting Filesystem
      XFS (vdb): Starting recovery (logdev: internal)
      XFS (vdb): Corruption detected. Unmount and run xfs_repair
      XFS (vdb): bad inode magic/vsn daddr 144 #0 (magic=0)
      XFS: Assertion failed: 0, file: fs/xfs/xfs_inode_buf.c, line: 95
      
      The problem is that the inode buffer has not been recovered before
      the readahead on the inode buffer is issued. The checkpoint being
      recovered actually allocates the inode chunk we are doing readahead
      from, so what comes from disk during readahead is essentially
      random and the verifier barfs on it.
      
      This inode buffer readahead problem affects non-crc filesystems,
      too, but xfstests does not trigger it at all on such
      configurations....
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Ben Myers <bpm@sgi.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
      d8914002
    • xfs: check LSN ordering for v5 superblocks during recovery · 50d5c8d8
      Dave Chinner authored
      Log recovery has some strict ordering requirements which unordered
      or reordered metadata writeback can defeat. This can occur when an
      item is logged in a transaction, written back to disk, and then
      logged in a new transaction before the tail of the log is moved past
      the original modification.
      
      The result of this is that when we read an object off disk for
      recovery purposes, the buffer that we read may not contain the
      object type that recovery is expecting and hence at the end of the
      checkpoint being recovered we have an invalid object in memory.
      
      This isn't usually a problem, as recovery will then replay all the
      other checkpoints and that brings the object back to a valid and
      correct state, but the issue is that while the object is in the
      invalid state it can be flushed to disk. This results in the object
      verifier failing and triggering a corruption shutdown of log
      recovery. This is correct behaviour for the verifiers - the problem
      is that we are not detecting that the object we've read off disk is
      newer than the transaction we are replaying.
      
      All metadata in v5 filesystems has the LSN of its last modification
      stamped in it. This enables log recovery to read that field and
      determine the age of the object on disk correctly. If the LSN of the
      object on disk is older than the transaction being replayed, then we
      replay the modification. If the LSN of the object matches or is more
      recent than the transaction's LSN, then we should avoid overwriting
      the object as that is what leads to the transient corrupt state.
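      
      A small sketch of that comparison; here an LSN is treated as the usual
      pair of a log cycle number (high 32 bits) and log block number (low 32
      bits), a simplified view rather than the kernel's own macros:
      
      #include <stdint.h>
      
      static uint32_t lsn_cycle(uint64_t lsn) { return (uint32_t)(lsn >> 32); }
      static uint32_t lsn_block(uint64_t lsn) { return (uint32_t)lsn; }
      
      /* Replay only if the on-disk object predates the transaction being
       * recovered; otherwise leave it alone to avoid the transient corrupt
       * state described above. */
      static int should_replay(uint64_t disk_lsn, uint64_t tx_lsn)
      {
              if (lsn_cycle(disk_lsn) != lsn_cycle(tx_lsn))
                      return lsn_cycle(disk_lsn) < lsn_cycle(tx_lsn);
              return lsn_block(disk_lsn) < lsn_block(tx_lsn);
      }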
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Mark Tinguely <tinguely@sgi.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
      50d5c8d8
  19. 29 Aug 2013, 2 commits
    • xfs: fix bad dquot buffer size in log recovery readahead · 0f0d3345
      Dave Chinner authored
      xfstests xfs/087 fails 100% reliably with this assert:
      
      XFS (vdb): Mounting Filesystem
      XFS (vdb): Starting recovery (logdev: internal)
      XFS: Assertion failed: bp->b_flags & XBF_STALE, file: fs/xfs/xfs_buf.c, line: 548
      
      while trying to read a dquot buffer in xlog_recover_dquot_ra_pass2().
      
      The issue is that the buffer length to read that is passed to
      xfs_buf_readahead is in units of filesystem blocks, not disk blocks
      (i.e. FSB, not daddr). Fix it by putting the correct conversion in
      place.
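      
      The unit conversion in one line of arithmetic (a sketch of the idea, not
      the XFS macro itself):
      
      /* A filesystem block (FSB) is blocksize bytes; a daddr/basic block is
       * 512 bytes.  Readahead wants a length in basic blocks, so an FSB count
       * must be scaled up before being passed in. */
      static unsigned int fsb_to_basic_blocks(unsigned int fsb_count,
                                              unsigned int blocklog)  /* log2(blocksize) */
      {
              return fsb_count << (blocklog - 9);     /* e.g. 1 FSB @ 4096B = 8 BBs */
      }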
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Ben Myers <bpm@sgi.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
      0f0d3345
    • xfs: don't account buffer cancellation during log recovery readahead · 84a5b730
      Dave Chinner authored
      When doing readahead in log recovery, we check to see if buffers are
      cancelled before doing readahead. If we find a cancelled buffer,
      however, we always decrement the reference count we have on it, and
      that means that readahead is causing a double decrement of the
      cancelled buffer reference count.
      
      This results in log recovery *replaying cancelled buffers* as the
      actual recovery pass does not find the cancelled buffer entry in the
      commit phase of the second pass across a transaction. On debug
      kernels, this results in an ASSERT failure like so:
      
      XFS: Assertion failed: !(flags & XFS_BLF_CANCEL), file: fs/xfs/xfs_log_recover.c, line: 1815
      
      xfstests generic/311 reproduces this ASSERT failure with 100%
      reproducibility.
      
      Fix it by making readahead only peek at the buffer cancelled state
      rather than the full accounting that xlog_check_buffer_cancelled()
      does.
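      
      The distinction the fix draws, sketched with simplified types (the real
      code tracks cancelled buffers in a hash table with reference counts):
      
      struct cancel_entry { int refcount; };      /* NULL pointer = not cancelled */
      
      /* Actual recovery pass: consume one reference so the entry is accounted
       * against the cancellation records in the log. */
      static int buffer_cancelled_consume(struct cancel_entry *e)
      {
              if (!e)
                      return 0;
              e->refcount--;
              return 1;
      }
      
      /* Readahead: only peek.  Decrementing here too would make the later
       * recovery pass miss the entry and replay a cancelled buffer. */
      static int buffer_cancelled_peek(const struct cancel_entry *e)
      {
              return e != 0;
      }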
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Ben Myers <bpm@sgi.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
      84a5b730
  20. 24 Aug 2013, 1 commit
  21. 21 Aug 2013, 3 commits
  22. 14 Aug 2013, 2 commits
  23. 13 Aug 2013, 2 commits