  1. 17 Jan 2013 (1 commit)
  2. 04 Dec 2012 (1 commit)
    • xfs: fix sparse reported log CRC endian issue · f9668a09
      Committed by Dave Chinner
      Not a bug as such, just warning noise from xlog_cksum() returning
      a __be32 type when it should return a __le32 type.
      
      On Wed, Nov 28, 2012 at 08:30:59AM -0500, Christoph Hellwig wrote:
      > But why are we storing the crc field little endian while all other on
      > disk formats are big endian? (And yes I realize it might as well have
      > been me who did that back in the day, but I still have no idea why)
      
      Because the CRC calculation always returns its result in LE format,
      even on BE systems. So rather than always having to byte swap it
      everywhere and have all the force casts and annotations for sparse, it
      seems simpler to just make it a __le32 everywhere....
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Ben Myers <bpm@sgi.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Mark Tinguely <tinguely@sgi.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
      f9668a09
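      A minimal, hedged sketch of the approach (not the actual XFS code; the
      struct, helper name and seed are illustrative): keeping the on-disk CRC
      field as __le32 means the value returned by crc32c() is annotated once
      and never byte swapped or force-cast elsewhere.

      #include <linux/types.h>
      #include <linux/crc32c.h>
      #include <asm/byteorder.h>

      /* hypothetical on-disk record: big-endian fields, little-endian CRC */
      struct demo_rec_header {
          __be32  h_magic;    /* ordinary on-disk fields stay big-endian */
          __le32  h_crc;      /* the CRC is stored in little-endian form */
      };

      static __le32 demo_cksum(const void *buf, unsigned int len)
      {
          /* crc32c() returns a CPU-order u32; annotate it exactly once here */
          return cpu_to_le32(crc32c(~0U, buf, len));
      }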
  3. 30 Nov 2012 (3 commits)
    • xfs: fix stray dquot unlock when reclaiming dquots · b870553c
      Committed by Dave Chinner
      When we fail to get a dquot lock during reclaim, we jump to an error
      handler that unlocks the dquot. This is wrong as we didn't lock the
      dquot, and unlocking it means whoever is holding the lock has had it
      silently taken away, which results in a lock imbalance.
      
      Found by inspection while modifying the code for the numa-lru
      patchset. This fixes a random hang I've been seeing on xfstest 232
      for the past several months.
      
      cc: <stable@vger.kernel.org>
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Ben Myers <bpm@sgi.com>
      b870553c
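      For illustration, a hedged, generic sketch of the locking rule behind
      this fix (plain mutex code, not the XFS dquot reclaim path): an error
      path may only unlock what it actually locked.

      #include <linux/mutex.h>
      #include <linux/errno.h>

      /* hypothetical reclaim step using trylock */
      static int demo_reclaim_one(struct mutex *lock)
      {
          if (!mutex_trylock(lock))
              return -EAGAIN; /* we never took the lock: do NOT unlock here */

          /* ... reclaim work goes here ... */

          mutex_unlock(lock); /* balanced with the successful trylock above */
          return 0;
      }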
    • xfs: fix direct IO nested transaction deadlock. · 437a255a
      Committed by Dave Chinner
      The direct IO path can do a nested transaction reservation when
      writing past the EOF. The first transaction is the append
      transaction for setting the file size at IO completion, but we may
      also need a transaction for the allocation of blocks. If the log is
      low on space due to outstanding reservations and a small log, the
      append transaction can be granted, after waiting for space, as the
      only active transaction in the system. It then attempts a reservation
      for an allocation, for which there is no space left in the log, and
      the reservation sleeps. The result is that there is nothing left in
      the system to wake up all the processes waiting for log space to
      come free.
      
      The stack trace that shows this deadlock is relatively innocuous:
      
       xlog_grant_head_wait
       xlog_grant_head_check
       xfs_log_reserve
       xfs_trans_reserve
       xfs_iomap_write_direct
       __xfs_get_blocks
       xfs_get_blocks_direct
       do_blockdev_direct_IO
       __blockdev_direct_IO
       xfs_vm_direct_IO
       generic_file_direct_write
       xfs_file_dio_aio_write
       xfs_file_aio_write
       do_sync_write
       vfs_write
      
      This was discovered on a filesystem with a log of only 10MB and a
      log stripe unit of 256k, which increased the base reservations by
      512k. Hence an allocation transaction requires 1.2MB of log space to
      be available instead of only 260k, which greatly increased the
      chance that there wouldn't be enough log space available for the
      nested transaction to succeed. The key to reproducing it is this
      mkfs command:
      
      mkfs.xfs -f -d agcount=16,su=256k,sw=12 -l su=256k,size=2560b $SCRATCH_DEV
      
      The test case was 1000 fsstress processes running with random
      freezes and unfreezes every few seconds. Thanks to Eryu Guan
      (eguan@redhat.com) for writing the test that found this on a system
      with a somewhat unique default configuration....
      
      cc: <stable@vger.kernel.org>
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Andrew Dahl <adahl@sgi.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
      437a255a
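      The deadlock shape described above can be modelled in userspace. This
      hedged toy sketch (nothing to do with the real XFS transaction code)
      treats log space as a counting semaphore; taking a second, nested
      reservation while holding the first hangs as soon as the pool is too
      small for both.

      #include <semaphore.h>
      #include <stdio.h>

      #define LOG_UNITS 2 /* toy "log": room for two reservations */

      static sem_t log_space;

      /* deadlock-prone shape: reserve again while still holding a reservation */
      static void write_past_eof(void)
      {
          sem_wait(&log_space); /* append transaction reservation */
          sem_wait(&log_space); /* nested allocation reservation: with only one
                                 * unit in the pool and no other transaction able
                                 * to finish and return space, this sleeps forever */
          sem_post(&log_space);
          sem_post(&log_space);
      }

      int main(void)
      {
          sem_init(&log_space, 0, LOG_UNITS);
          write_past_eof(); /* completes with 2 units, hangs if LOG_UNITS is 1 */
          puts("done");
          return 0;
      }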
    • xfs: byte range granularity for XFS_IOC_ZERO_RANGE · ef9d8733
      Committed by Dave Chinner
      XFS_IOC_ZERO_RANGE simply does not work properly for ranges that are
      not page cache aligned. Neither test 242 nor 290 exercises this
      correctly, so the behaviour is completely busted even though the
      tests pass.
      
      Fix it to support full byte range granularity as was originally
      intended for this ioctl.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Ben Myers <bpm@sgi.com>
      ef9d8733
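      For context, a hedged userspace sketch of calling the ioctl at arbitrary
      byte granularity. It assumes the xfsprogs headers provide
      XFS_IOC_ZERO_RANGE and struct xfs_flock64; the helper name is made up.

      #include <xfs/xfs.h>    /* XFS_IOC_ZERO_RANGE, struct xfs_flock64 */
      #include <sys/ioctl.h>
      #include <unistd.h>

      /* zero [start, start + len) on an XFS file; no page-cache alignment needed */
      static int zero_byte_range(int fd, off_t start, off_t len)
      {
          struct xfs_flock64 arg = {
              .l_whence = SEEK_SET,
              .l_start  = start,  /* any byte offset */
              .l_len    = len,    /* any byte count */
          };

          return ioctl(fd, XFS_IOC_ZERO_RANGE, &arg);
      }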
  4. 27 Nov 2012 (1 commit)
  5. 20 Nov 2012 (2 commits)
    • xfs: add CRC checks to the log · 0e446be4
      Committed by Christoph Hellwig
      Implement CRCs for the log buffers.  We re-use a field in
      struct xlog_rec_header that was used for a weak checksum of the
      log buffer payload in debug builds before.
      
      The new checksumming uses the crc32c checksum we will use elsewhere
      in XFS, and also protects the record header and additional cycle data.
      
      Due to this there are some interesting changes in xlog_sync, as we
      need to do the cycle wrapping for the split buffer case much earlier;
      otherwise we would touch the buffer after generating the checksum.
      
      The CRC calculation is always enabled, even for non-CRC filesystems,
      as adding this CRC does not change the log format. On non-CRC
      filesystems, only issue an alert if a CRC mismatch is found and
      allow recovery to continue - this will act as an indicator that
      log recovery problems are a result of log corruption. On CRC enabled
      filesystems, however, log recovery will fail.
      
      Note that existing debug kernels will write a simple checksum value
      to the log, so the first time this is run on a filesystem that was
      last used on a debug kernel it will throw CRC mismatch warnings.
      These can be ignored.
      
      Initially based on a patch from Dave Chinner, then modified
      significantly by Christoph Hellwig.  Modified again by Dave Chinner
      to get to this version.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Mark Tinguely <tinguely@sgi.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
      0e446be4
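      The recovery-time policy described above (warn and continue without
      CRCs, fail with them) can be sketched as follows; this is a hedged
      illustration with made-up names, not the actual log recovery code.

      #include <linux/types.h>
      #include <linux/printk.h>
      #include <linux/errno.h>
      #include <asm/byteorder.h>

      /* hypothetical per-record check run during log recovery */
      static int demo_check_record_crc(bool crcs_enabled, __le32 found, __le32 want)
      {
          if (found == want)
              return 0;

          if (!crcs_enabled) {
              /* non-CRC filesystem: alert, but let recovery continue */
              pr_alert("log record CRC mismatch: found 0x%x, expected 0x%x\n",
                       le32_to_cpu(found), le32_to_cpu(want));
              return 0;
          }

          /* CRC-enabled filesystem: treat the mismatch as fatal corruption */
          return -EIO;
      }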
    • xfs: add CRC infrastructure · bc02e869
      Committed by Christoph Hellwig
       - add a mount feature bit for CRC enabled filesystems
       - add some helpers for generating and verifying the CRCs
       - add a copy_uuid helper
      
      The checksumming helpers are loosely based on similar ones in sctp;
      all other bits come from Dave Chinner.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Mark Tinguely <tinguely@sgi.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
      bc02e869
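      A hedged sketch of the kind of crc32c helpers this adds (names, seed and
      signatures are illustrative, not the actual xfs_cksum.h interfaces):
      checksum the buffer with its own CRC field treated as zero, then compare
      against the stored value on verify.

      #include <linux/types.h>
      #include <linux/crc32c.h>
      #include <asm/byteorder.h>

      /* CRC over 'len' bytes of 'buf', with the CRC field at 'crc_off' zeroed */
      static inline __le32 demo_buf_cksum(const char *buf, unsigned int len,
                                          unsigned int crc_off)
      {
          u32 zero = 0;
          u32 crc;

          crc = crc32c(~0U, buf, crc_off);                /* up to the CRC field */
          crc = crc32c(crc, &zero, sizeof(zero));         /* the field, as zero */
          crc = crc32c(crc, buf + crc_off + sizeof(zero), /* the remainder */
                       len - crc_off - sizeof(zero));

          return cpu_to_le32(crc);
      }

      static inline bool demo_buf_verify_cksum(const char *buf, unsigned int len,
                                               unsigned int crc_off)
      {
          return *(const __le32 *)(buf + crc_off) == demo_buf_cksum(buf, len, crc_off);
      }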
  6. 17 Nov 2012 (3 commits)
    • xfs: drop buffer io reference when a bad bio is built · d69043c4
      Committed by Dave Chinner
      Error handling in xfs_buf_ioapply_map() does not handle IO reference
      counts correctly. We increment the b_io_remaining count before
      building the bio, but then fail to decrement it in the failure case.
      This leads to the buffer never running IO completion and releasing
      the reference that the IO holds, so at unmount we can leak the
      buffer. This leak is captured by this assert failure during unmount:
      
      XFS: Assertion failed: atomic_read(&pag->pag_ref) == 0, file: fs/xfs/xfs_mount.c, line: 273
      
      This is not a new bug - the b_io_remaining accounting has had this
      problem for a long, long time - it's just very hard to get a
      zero-length bio built by this code...
      
      Further, the buffer IO error can be overwritten on a multi-segment
      buffer by subsequent bio completions for partial sections of the
      buffer. Hence we should only set the buffer error status if the
      buffer is not already carrying an error status. This ensures that a
      partial IO error on a multi-segment buffer will not be lost. This
      part of the problem is a regression, however.
      
      cc: <stable@vger.kernel.org>
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Mark Tinguely <tinguely@sgi.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
      d69043c4
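      A hedged sketch of the two rules this fix enforces, using a made-up
      buffer type rather than struct xfs_buf: drop the I/O reference taken for
      a bio that will never complete, and never let a later completion
      overwrite an earlier error.

      #include <linux/atomic.h>
      #include <linux/errno.h>

      struct demo_buf {
          atomic_t    b_io_remaining; /* outstanding bio count, plus one */
          int         b_error;        /* first error seen, 0 if none */
      };

      static void demo_io_done(struct demo_buf *bp, int error)
      {
          /* keep the first error: completions for other segments must not clobber it */
          if (error && !bp->b_error)
              bp->b_error = error;

          /* drop the reference this (possibly never-submitted) bio held */
          if (atomic_dec_and_test(&bp->b_io_remaining)) {
              /* last reference gone: run buffer I/O completion here */
          }
      }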
    • xfs: fix broken error handling in xfs_vm_writepage · 3daed8bc
      Committed by Dave Chinner
      When we shut down the filesystem, the shutdown might first be
      detected in writeback when we are allocating an inode size
      transaction. This happens after we have moved all the pages into the
      writeback state and unlocked them. Unfortunately, if we fail to set
      up the transaction we then abort writeback and try to invalidate the
      current page. This triggers a BUG() in block_invalidatepage()
      because we are trying to invalidate an unlocked page.
      
      Fixing this is a bit of a chicken and egg problem - we can't
      allocate the transaction until we've clustered all the pages into
      the IO and we know the size of it (i.e. whether the last block of
      the IO is beyond the current EOF or not). However, we don't want to
      hold pages locked for long periods of time, especially while we lock
      other pages to cluster them into the write.
      
      To fix this, we need to make a clear delineation in writeback where
      errors can only be handled by IO completion processing. That is,
      once we have marked a page for writeback and unlocked it, we have to
      report errors via IO completion because we've already started the
      IO. We may not have submitted any IO, but we've changed the page
      state to indicate that it is under IO so we must now use the IO
      completion path to report errors.
      
      To do this, add an error argument to xfs_submit_ioend() to pass it
      the error that occurred during the building of the ioend chain. When
      this is non-zero, mark each ioend with the error and call
      xfs_finish_ioend() directly rather than building bios. This will
      immediately push the ioends through completion processing with the
      error that has occurred.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Mark Tinguely <tinguely@sgi.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
      3daed8bc
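      To make the delineation concrete, a hedged sketch with made-up types
      (not the actual xfs_aops code): once an error is attached to the ioend
      chain, every ioend is pushed straight through completion processing and
      no bios are built.

      struct demo_ioend {
          struct demo_ioend   *io_next;
          int                 io_error;
      };

      /* submit a chain of ioends; 'fail' is any error hit while building it */
      static void demo_submit_ioends(struct demo_ioend *head, int fail)
      {
          struct demo_ioend *io;

          for (io = head; io; io = io->io_next) {
              if (fail) {
                  /* pages are already marked as under writeback, so report the
                   * error via completion instead of invalidating them */
                  io->io_error = fail;
                  /* call the I/O completion handler for 'io' here */
                  continue;
              }
              /* normal path: build and submit bios for 'io' here */
          }
      }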
    • xfs: fix attr tree double split corruption · 42e2976f
      Committed by Dave Chinner
      In certain circumstances, a double split of an attribute tree is
      needed to insert or replace an attribute. In rare situations, this
      can go wrong, leaving the attribute tree corrupted. In this case,
      the attr being replaced is the last attr in a leaf node, and the
      replacement is larger so it doesn't fit in the same leaf node.

      The initial condition is a node-format attribute btree with two
      leaves at index 1 and 2. Call them L1 and L2. The leaf L1 is
      completely full; there is not a single byte of free space in it. L2
      is mostly empty. The attribute being replaced - call it X - is the
      last attribute in L1.
      
      The way an attribute replace is executed is that the replacement
      attribute - call it Y - is first inserted into the tree, but has an
      INCOMPLETE flag set on it so that list traversals ignore it. Once
      this transaction is committed, a second transaction is run to
      atomically mark Y as COMPLETE and X as INCOMPLETE, so that a
      traversal will now find Y and skip X. Once that transaction is
      committed, attribute X is then removed.
      
      So, the initial condition is:
      
           +--------+     +--------+
           |   L1   |     |   L2   |
           | fwd: 2 |---->| fwd: 0 |
           | bwd: 0 |<----| bwd: 1 |
           | fsp: 0 |     | fsp: N |
           |--------|     |--------|
           | attr A |     | attr 1 |
           |--------|     |--------|
           | attr B |     | attr 2 |
           |--------|     |--------|
           ..........     ..........
           |--------|     |--------|
           | attr X |     | attr n |
           +--------+     +--------+
      
      So now we go to replace X, and see that L1:fsp = 0 - it is full so
      we can't insert Y in the same leaf. So we record the location of
      attribute X so we can track it for later use, then we split L1 into
      L1 and L3 and rebalance across the two leaves. We end with:
      
           +--------+     +--------+     +--------+
           |   L1   |     |   L3   |     |   L2   |
           | fwd: 3 |---->| fwd: 2 |---->| fwd: 0 |
           | bwd: 0 |<----| bwd: 1 |<----| bwd: 3 |
           | fsp: M |     | fsp: J |     | fsp: N |
           |--------|     |--------|     |--------|
           | attr A |     | attr X |     | attr 1 |
           |--------|     +--------+     |--------|
           | attr B |                    | attr 2 |
           |--------|                    |--------|
           ..........                    ..........
           |--------|                    |--------|
           | attr W |                    | attr n |
           +--------+                    +--------+
      
      And we track that the original attribute is now at L3:0.
      
      We then try to insert Y into L1 again, and find that there isn't
      enough room because the new attribute is larger than the old one.
      Hence we have to split again to make room for Y. We end up with
      this:
      
           +--------+     +--------+     +--------+     +--------+
           |   L1   |     |   L4   |     |   L3   |     |   L2   |
           | fwd: 4 |---->| fwd: 3 |---->| fwd: 2 |---->| fwd: 0 |
           | bwd: 0 |<----| bwd: 1 |<----| bwd: 4 |<----| bwd: 3 |
           | fsp: M |     | fsp: J |     | fsp: J |     | fsp: N |
           |--------|     |--------|     |--------|     |--------|
           | attr A |     | attr Y |     | attr X |     | attr 1 |
           |--------|     + INCOMP +     +--------+     |--------|
           | attr B |     +--------+                    | attr 2 |
           |--------|                                   |--------|
           ..........                                   ..........
           |--------|                                   |--------|
           | attr W |                                   | attr n |
           +--------+                                   +--------+
      
      And now we have the new (incomplete) attribute @ L4:0, and the
      original attribute at L3:0. At this point, the first transaction is
      committed, and we move to the flipping of the flags.
      
      This is where we are supposed to end up with this:
      
           +--------+     +--------+     +--------+     +--------+
           |   L1   |     |   L4   |     |   L3   |     |   L2   |
           | fwd: 4 |---->| fwd: 3 |---->| fwd: 2 |---->| fwd: 0 |
           | bwd: 0 |<----| bwd: 1 |<----| bwd: 4 |<----| bwd: 3 |
           | fsp: M |     | fsp: J |     | fsp: J |     | fsp: N |
           |--------|     |--------|     |--------|     |--------|
           | attr A |     | attr Y |     | attr X |     | attr 1 |
           |--------|     +--------+     + INCOMP +     |--------|
           | attr B |                    +--------+     | attr 2 |
           |--------|                                   |--------|
           ..........                                   ..........
           |--------|                                   |--------|
           | attr W |                                   | attr n |
           +--------+                                   +--------+
      
      But that doesn't happen properly - the attribute tracking indexes
      are not pointing to the right locations. What we end up with is both
      the old attribute to be removed pointing at L4:0 and the new
      attribute at L4:1.  On a debug kernel, this assert fails like so:
      
      XFS: Assertion failed: args->index2 < be16_to_cpu(leaf2->hdr.count), file: fs/xfs/xfs_attr_leaf.c, line: 2725
      
      because the new attribute location does not exist. On a production
      kernel, this goes unnoticed and the code proceeds ahead merrily and
      removes L4 because it thinks that is the block that is no longer
      needed. This leaves the hash index node pointing to entries
      L1, L4 and L2, but only blocks L1, L3 and L2 actually exist. Further, the
      leaf level sibling list is L1 <-> L4 <-> L2, but L4 is now free
      space, and so everything is busted. This corruption is caused by the
      removal of the old attribute triggering a join - it joins everything
      correctly but then frees the wrong block.
      
      xfs_repair will report something like:
      
      bad sibling back pointer for block 4 in attribute fork for inode 131
      problem with attribute contents in inode 131
      would clear attr fork
      bad nblocks 8 for inode 131, would reset to 3
      bad anextents 4 for inode 131, would reset to 0
      
      The problem lies in the assignment of the old/new blocks for
      tracking purposes when the double leaf split occurs. The first split
      tries to place the new attribute inside the current leaf (i.e.
      "inleaf == true") and moves the old attribute (X) to the new block.
      This sets up the old block/index to L1:X, and the newly allocated
      block to L3:0. It then moves attr X to the new block and tries to
      insert attr Y at the old index. That fails, so it splits again.
      
      With the second split, the rebalance ends up placing the new attr in
      the second new block - L4:0 - and this is where the code goes wrong.
      What it does is set both the new and old block indexes to the
      second new block. Hence it inserts attr Y at the right place (L4:0)
      but overwrites the current location of the attr to replace that is
      held in the new block index (currently L3:0). It overwrites it with
      L4:1 - the index the assert later fails on.
      
      Hopefully this table will show this in a format that is a bit easier
      to understand:
      
      Split		old attr index		new attr index
      		vanilla	patched		vanilla	patched
      before 1st	L1:26	L1:26		N/A	N/A
      after 1st	L3:0	L3:0		L1:26	L1:26
      after 2nd	L4:0	L3:0		L4:1	L4:0
                      ^^^^			^^^^
      		wrong			wrong
      
      For all this analysis, the fix is surprisingly simple: just stop the
      rebalance in the out-of-leaf case from overwriting the new attr
      index - it's already correct for the double split case.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Mark Tinguely <tinguely@sgi.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
      42e2976f
  7. 16 Nov 2012 (22 commits)
  8. 15 Nov 2012 (5 commits)
  9. 14 Nov 2012 (2 commits)
    • xfs: make growfs initialise the AGFL header · de497688
      Committed by Dave Chinner
      For verification purposes, AGFLs need to be initialised to a known
      set of values. For upcoming CRC changes, they are also headers that
      need to be initialised. Currently, growfs does neither for the AGFLs
      - it ignores them completely. Add initialisation of the AGFL so it is
      full of invalid block numbers (NULLAGBLOCK), putting in place the
      infrastructure needed for CRC support.
      
      Includes a comment clarification from Jeff Liu.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Rich Johnston <rjohnston@sgi.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
      de497688
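      A hedged sketch of the initialisation described above (the helper and
      slot count are illustrative; NULLAGBLOCK mirrors XFS's invalid AG block
      number, (xfs_agblock_t)-1):

      #include <linux/types.h>
      #include <asm/byteorder.h>

      #define DEMO_NULLAGBLOCK    ((u32)-1)   /* "no block here" marker */

      /* fill every AGFL slot with an invalid block number so verifiers can
       * tell a properly initialised, empty AGFL from stale garbage */
      static void demo_init_agfl(__be32 *agfl, unsigned int nr_slots)
      {
          unsigned int i;

          for (i = 0; i < nr_slots; i++)
              agfl[i] = cpu_to_be32(DEMO_NULLAGBLOCK);
      }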
    • xfs: growfs: use uncached buffers for new headers · fd23683c
      Committed by Dave Chinner
      When writing the new AG headers to disk, we can't attach write
      verifiers because they depend on the struct xfs_perag being attached
      to the buffer and fully initialised, and growfs can't fully
      initialise it until later in the process.
      
      The simplest way to avoid this problem is to use uncached buffers
      for writing the new headers. These buffers don't have the xfs_perag
      attached to them, so it's simple to detect this in the write verifier
      and skip the checks that need the xfs_perag.
      
      This enables us to attach the appropriate buffer ops to the buffer
      and hence calculate CRCs on the way to disk. It also means that the
      buffer is torn down immediately, so the first access to the AG
      headers will re-read the header from disk and perform full
      verification of the buffer. This way we can also catch corruption
      due to problems that went undetected in growfs.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Rich Johnston <rjohnston@sgi.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
      fd23683c
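      A hedged sketch of the write-verifier behaviour this relies on
      (hypothetical types and helpers, not the real xfs_buf_ops): when no
      per-AG state is attached, as with growfs's uncached buffers, only the
      checks that don't need it are run.

      struct demo_pag;                        /* per-AG state, opaque here */

      struct demo_growfs_buf {
          struct demo_pag *b_pag;             /* NULL for uncached buffers */
      };

      static void demo_agf_write_verify(struct demo_growfs_buf *bp)
      {
          /* on-disk field checks and CRC calculation always run here */

          if (!bp->b_pag)
              return;     /* uncached growfs buffer: skip per-AG checks */

          /* checks that need the attached per-AG state go here */
      }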