1. 29 Oct 2013 (2 commits)
  2. 27 Oct 2013 (1 commit)
  3. 03 Oct 2013 (1 commit)
  4. 26 Sep 2013 (1 commit)
    • xfs: fix node forward in xfs_node_toosmall · 997def25
      Committed by Mark Tinguely
      Commit f5ea1100 cleans up the disk-to-host conversions for
      node directory entries, but because a variable is reused in
      xfs_node_toosmall(), the next node is not correctly found.
      If the original node is small enough (<= 3/8 of the node size),
      this change may incorrectly cause a node collapse when it should
      not, which triggers the following assert in xfstest generic/319:
      
         Assertion failed: first <= last && last < BBTOB(bp->b_length),
         file: /root/newest/xfs/fs/xfs/xfs_trans_buf.c, line: 569
      
      Keep the original node header to get the correct forward node.
      
      (When a node is considered for a merge with a sibling, it overwrites the
       sibling pointers of the original incore nodehdr with the sibling's
       pointers.  This leads to the loop considering the original node as a
       merge candidate with itself in the second pass, and so it incorrectly
       determines that a merge should occur.)
      Signed-off-by: Mark Tinguely <tinguely@sgi.com>
      Reviewed-by: Ben Myers <bpm@sgi.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
      
      [v3: added Dave Chinner's (slightly modified) suggestion to the commit
      header, cleaned up whitespace. -bpm]
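      The following is a minimal sketch in C of the pattern the fix restores,
      using simplified, hypothetical types rather than the real structures in
      fs/xfs: the sibling's header is decoded into a scratch variable, so the
      original node's forw/back sibling pointers survive between the two
      probe passes.

         #include <stdint.h>

         /* Simplified incore node header (hypothetical; not xfs_da_format.h). */
         struct nodehdr {
             uint32_t forw;   /* forward sibling block number */
             uint32_t back;   /* backward sibling block number */
             int      count;  /* live entries in the node */
         };

         /* Stands in for the disk-to-host header conversion helper. */
         void decode_hdr(struct nodehdr *dst, const void *disk_node);

         int pick_merge_sibling(const struct nodehdr *nodehdr,
                                const void *fwd_disk, const void *bwd_disk,
                                int max_entries)
         {
             struct nodehdr thdr;  /* scratch header for the sibling */
             int i;

             for (i = 0; i < 2; i++) {
                 /* First pass probes the forward sibling, second the back. */
                 uint32_t blkno = i ? nodehdr->back : nodehdr->forw;
                 if (blkno == 0)
                     continue;
                 /*
                  * Decode into the scratch thdr, NOT into *nodehdr: reusing
                  * nodehdr here was the bug, letting the second pass chase
                  * the sibling's pointers and treat the original node as a
                  * merge candidate with itself.
                  */
                 decode_hdr(&thdr, i ? bwd_disk : fwd_disk);
                 if (nodehdr->count + thdr.count <= max_entries)
                     return i ? -1 : 1;  /* merge candidate found */
             }
             return 0;  /* node is left alone */
         }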
  5. 25 Sep 2013 (6 commits)
    • fs/ocfs2/super.c: use a bigger nodestr in ocfs2_dismount_volume · 99d7a882
      Committed by Goldwyn Rodrigues
      While printing 32-bit node numbers, an 8-byte string is not enough:
      a 32-bit value can take up to 10 digits plus a terminating NUL.
      Increase the size of the string to 12 chars.
      
      This got left out in commit 49fa8140 ("fs/ocfs2/super.c: Use bigger
      nodestr to accomodate 32-bit node numbers").
      Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
      Cc: Joel Becker <jlbec@evilplan.org>
      Cc: Mark Fasheh <mfasheh@suse.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
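      A tiny illustration of the sizing argument (standalone C, not the ocfs2
      source; names here are made up): an unsigned 32-bit node number can be
      up to 4294967295, i.e. 10 digits plus a terminating NUL, so an 8-byte
      buffer truncates it while 12 bytes is comfortably enough.

         #include <stdio.h>

         void show_node(unsigned int node_num)
         {
             char nodestr[12];  /* an [8] buffer truncates 4294967295 */

             snprintf(nodestr, sizeof(nodestr), "%u", node_num);
             printf("dismounting node %s\n", nodestr);
         }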
    • block: Fix bio_copy_data() · 2f6cf0de
      Committed by Kent Overstreet
      The memcpy() in bio_copy_data() was using the wrong offset variables,
      leading to data corruption in unusual setups.
      Signed-off-by: Kent Overstreet <kmo@daterainc.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: linux-stable <stable@vger.kernel.org> # >= v3.9
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
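      A sketch of the bug class in plain C, with invented segment types
      standing in for the block layer's bio/bvec machinery: when data is
      copied in chunks between two segments, the memcpy must use the
      advancing per-side offsets, not the segments' fixed starting offsets.

         #include <stddef.h>
         #include <string.h>

         struct seg {
             char   *base;    /* start of the underlying buffer */
             size_t  offset;  /* where the segment's data begins */
             size_t  len;     /* bytes in this segment */
         };

         void copy_seg(struct seg *dst, struct seg *src)
         {
             size_t dst_off = dst->offset;  /* advancing positions */
             size_t src_off = src->offset;
             size_t dst_end = dst->offset + dst->len;
             size_t src_end = src->offset + src->len;

             while (dst_off < dst_end && src_off < src_end) {
                 size_t bytes = dst_end - dst_off;

                 if (src_end - src_off < bytes)
                     bytes = src_end - src_off;
                 if (bytes > 4096)
                     bytes = 4096;  /* chunked, page-at-a-time style */

                 /*
                  * Correct: use dst_off/src_off. The buggy form used the
                  * fixed dst->offset/src->offset here, so every chunk after
                  * the first landed back at the start of the segment.
                  */
                 memcpy(dst->base + dst_off, src->base + src_off, bytes);

                 dst_off += bytes;
                 src_off += bytes;
             }
         }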
    • xfs: log recovery lsn ordering needs uuid check · 566055d3
      Committed by Dave Chinner
      After a fair number of xfstests runs, xfs/182 started to fail
      regularly with a corrupted directory - a directory read verifier was
      failing after recovery because it found a block with an XARM magic
      number (remote attribute block) rather than a directory data block.
      
      The first time I saw this repeated failure I did /something/ and the
      problem went away, so I was never able to find the underlying
      problem. Test xfs/182 failed again today, and I found the root
      cause before I did /something else/ that made it go away.
      
      Tracing indicated that the block in question was being correctly
      logged, the log was being flushed by sync, but the buffer was not
      being written back before the shutdown occurred. Tracing also
      indicated that log recovery was also reading the block, but then
      never writing it before log recovery invalidated the cache,
      indicating that it was not modified by log recovery.
      
      More detailed analysis of the corpse indicated that the filesystem
      had a uuid of "a4131074-1872-4cac-9323-2229adbcb886" but the XARM
      block had a uuid of "8f32f043-c3c9-e7f8-f947-4e7f989c05d3", which
      indicated it was a block from an older filesystem. The reason that
      log recovery didn't replay it was that the LSN in the XARM block was
      larger than the LSN of the transaction being replayed, and so the
      block was not overwritten by log recovery.
      
      Hence, log recovery can't blindly trust the magic number and LSN in
      the block - it must verify that the block belongs to the filesystem
      being recovered before using the LSN. i.e. if the UUIDs don't match,
      we need to unconditionally recover the change held in the log.
      
      This patch was first tested on a block device that was repeatedly
      causing xfs/182 to fail with the same failure on the same block with
      the same directory read corruption signature (i.e. XARM block). It
      did not fail, and hasn't failed since.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Ben Myers <bpm@sgi.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
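      The decision the fix enforces can be sketched as follows (hypothetical
      helper and types, not the kernel's recovery code): a block's LSN may
      only be used to skip replay when the block's UUID proves it belongs to
      the filesystem being recovered.

         #include <stdbool.h>
         #include <stdint.h>
         #include <string.h>

         typedef uint64_t lsn_t;

         struct blk_hdr {
             unsigned char uuid[16];  /* filesystem UUID stamped in the block */
             lsn_t         lsn;       /* LSN of the last write to the block */
         };

         bool should_replay(const struct blk_hdr *blk,
                            const unsigned char fs_uuid[16],
                            lsn_t transaction_lsn)
         {
             /* Foreign block: its LSN is meaningless here, always replay. */
             if (memcmp(blk->uuid, fs_uuid, 16) != 0)
                 return true;

             /* Our block: skip replay only if it is already up to date. */
             return blk->lsn < transaction_lsn;
         }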
    • xfs: fix XFS_IOC_FREE_EOFBLOCKS definition · b771af2f
      Committed by Dave Chinner
      The ioctl definition uses a kernel-internal structure rather than
      the user-visible structure that is passed to the ioctl.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Mark Tinguely <tinguely@sgi.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
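      Why this matters, sketched with illustrative names (the magic number
      and struct layout below are assumptions; the authoritative definition
      is in the XFS uapi headers): the _IOR() macro bakes sizeof(type) into
      the command number, so defining the ioctl with the kernel-internal
      struct yields a different command value than userspace computes from
      the user-visible struct.

         #include <linux/ioctl.h>
         #include <linux/types.h>

         /* Illustrative stand-in for the user-visible argument layout. */
         struct fs_eofblocks_example {
             __u32 eof_version;
             __u32 eof_flags;
             /* ... filter fields and padding ... */
         };

         /*
          * Correct: encode the struct userspace actually passes. Using a
          * kernel-internal struct of a different size changes the command
          * number and breaks the user/kernel ABI.
          */
         #define EXAMPLE_IOC_FREE_EOFBLOCKS \
             _IOR('X', 58, struct fs_eofblocks_example)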
    • xfs: asserting lock not held during freeing not valid · b313a5f1
      Committed by Dave Chinner
      When we free an inode, we do so via RCU. As an RCU lookup can occur
      at any time before we free an inode, and that lookup takes the inode
      flags lock, we cannot safely assert that the flags lock is not held
      just before marking it dead and running call_rcu() to free the
      inode.
      
      We check on allocation of a new inode structure that the lock is not
      held, so we still have protection against locks being leaked and
      hence not correctly initialised when allocated out of the slab.
      Hence just remove the assert...
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Mark Tinguely <tinguely@sgi.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
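      A generic sketch of why the assert is racy (plain spinlock/RCU pattern
      with invented types, not the XFS inode code): a reader found via RCU
      may legitimately hold the flags lock at the moment the freeing path
      runs, right up until the grace period ends.

         #include <linux/rcupdate.h>
         #include <linux/spinlock.h>
         #include <linux/types.h>

         struct obj {
             spinlock_t      flags_lock;
             unsigned long   flags;
             struct rcu_head rcu_head;
         };

         static void obj_free_callback(struct rcu_head *head)
         {
             /* the actual kfree()/slab return would happen here */
         }

         /* Reader side: can run any time before the grace period ends. */
         bool obj_flags_test(struct obj *o, unsigned long mask)
         {
             bool set;

             spin_lock(&o->flags_lock);
             set = (o->flags & mask) != 0;
             spin_unlock(&o->flags_lock);
             return set;
         }

         /* Free side: cannot assert the lock is unheld here. */
         void obj_free(struct obj *o)
         {
             /* WRONG: ASSERT(!spin_is_locked(&o->flags_lock)); races vs reader */
             call_rcu(&o->rcu_head, obj_free_callback);
         }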
    • xfs: lock the AIL before removing the buffer item · 48852358
      Committed by Dave Chinner
      Regression introduced by commit 46f9d2eb ("xfs: aborted buf items can
      be in the AIL") which fails to lock the AIL before removing the
      item. Spinlock debugging throws a warning about this.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Mark Tinguely <tinguely@sgi.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
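      The corrected pattern, sketched with generic names (the real lock is
      the AIL's spinlock inside fs/xfs): unlinking a log item from the AIL
      list must happen with that lock held.

         #include <linux/list.h>
         #include <linux/spinlock.h>

         struct ail {
             spinlock_t       ail_lock;  /* protects ail_head */
             struct list_head ail_head;
         };

         struct log_item {
             struct list_head li_ail;
         };

         /* Correct: unlink only while holding the AIL lock. */
         void ail_remove_item(struct ail *ailp, struct log_item *lip)
         {
             spin_lock(&ailp->ail_lock);   /* the missing step in 46f9d2eb */
             list_del_init(&lip->li_ail);
             spin_unlock(&ailp->ail_lock);
         }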
  6. 24 Sep 2013 (3 commits)
    • reiserfs: fix race with flush_used_journal_lists and flush_journal_list · 721a769c
      Committed by Jeff Mahoney
      There are two locks involved in managing the journal lists: the general
      reiserfs_write_lock and the journal->j_flush_mutex.
      
      While flush_journal_list is sleeping to acquire the j_flush_mutex or to
      submit a block for write, it will drop the write lock. This allows
      another thread to acquire the write lock and ultimately call
      flush_used_journal_lists to traverse the list of journal lists and
      select one for flushing. It can select the journal_list that has just
      had flush_journal_list called on it in the original thread and call it
      again with the same journal_list.
      
      The second thread then drops the write lock to acquire j_flush_mutex and
      the first thread reacquires it and continues execution and eventually
      clears and frees the journal list before dropping j_flush_mutex and
      returning.
      
      The second thread acquires j_flush_mutex and ends up operating on a
      journal_list that has already been released. If the memory hasn't
      been reused, we'll soon after hit a BUG_ON because the transaction id
      has already been cleared. If it's been reused, we'll crash in other
      fun ways.
      
      Since flush_journal_list will synchronize on j_flush_mutex, we can fix
      the race by taking a proper reference in flush_used_journal_lists
      and checking to see if it's still valid after the mutex is taken. It's
      safe to iterate the list of journal lists and pick a list with
      just the write lock as long as a reference is taken on the journal list
      before we drop the lock. We already have code to handle whether a
      transaction has been flushed already so we can use that to handle the
      race and get rid of the trans_id BUG_ON.
      Signed-off-by: Jeff Mahoney <jeffm@suse.com>
      Signed-off-by: Jan Kara <jack@suse.cz>
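      The fix pattern can be sketched like this (invented names and a
      bare-bones refcount; the reiserfs code is considerably more involved):
      pin the candidate list while the big lock is still held, then
      revalidate its transaction id once the flush mutex is finally acquired.

         #include <linux/mutex.h>

         struct journal_list {
             unsigned long trans_id;  /* cleared when flushed and freed */
             int           refcount;
         };

         void flush_one_list(struct journal_list *jl, struct mutex *flush_mutex)
         {
             unsigned long trans_id = jl->trans_id;

             jl->refcount++;  /* pin: taken while the write lock is held */

             /* The write lock is dropped while we sleep on the mutex. */
             mutex_lock(flush_mutex);
             if (jl->trans_id == trans_id) {
                 /* still the same transaction: safe to flush it */
             } else {
                 /* another thread already flushed it: nothing to do */
             }
             mutex_unlock(flush_mutex);

             jl->refcount--;  /* unpin */
         }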
    • reiserfs: remove useless flush_old_journal_lists · 7bc9cc07
      Committed by Jeff Mahoney
      Commit a3172027 introduced test_transaction as a requirement for
      flushing old lists -- but it can never return 1 unless the transaction
      has already been flushed.
      
      As a result, we have a routine that iterates the j_realblocks list but
      doesn't actually do anything. Since it's been this way since 2006 and
      the latency numbers were what Chris expected, let's just rip it out.
      Signed-off-by: Jeff Mahoney <jeffm@suse.com>
      Signed-off-by: Jan Kara <jack@suse.cz>
    • udf: Fortify LVID loading · 69d75671
      Committed by Jan Kara
      A user has reported an oops in udf_statfs() caused by a corrupted
      numOfPartitions entry in the LVID structure. Fix the problem by
      verifying that numOfPartitions makes sense, at least to the extent
      that the LVID fits into a single block as it should.
      Reported-by: Juergen Weigert <jw@suse.com>
      Signed-off-by: Jan Kara <jack@suse.cz>
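      The shape of the check, sketched against an illustrative layout (see
      the kernel's ECMA-167 headers for the real logicalVolIntegrityDesc):
      the LVID is followed by two per-partition tables and an
      implementation-use area, so a sane numOfPartitions must keep the whole
      descriptor within one block.

         #include <stdbool.h>
         #include <stdint.h>

         struct lvid_example {
             /* ... fixed descriptor fields ... */
             uint32_t numOfPartitions;  /* count of per-partition entries */
             uint32_t lengthOfImpUse;   /* bytes of implementation use */
             /* followed by 2 * numOfPartitions uint32 tables + impUse */
         };

         bool lvid_fits_in_block(const struct lvid_example *lvid,
                                 uint32_t blocksize)
         {
             uint64_t need = sizeof(*lvid)
                     + 2ULL * sizeof(uint32_t) * lvid->numOfPartitions
                     + lvid->lengthOfImpUse;

             return need <= blocksize;  /* reject corrupted counts */
         }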
  7. 21 Sep 2013 (26 commits)