1. 02 Nov 2011, 1 commit
  2. 27 Oct 2011, 1 commit
  3. 04 Sep 2011, 2 commits
    • jbd2: use gfp_t instead of int · d2159fb7
      Committed by Dan Carpenter
      This silences some Sparse warnings (a standalone illustration of the gfp_t type check follows this entry):
      fs/jbd2/transaction.c:135:69: warning: incorrect type in argument 2 (different base types)
      fs/jbd2/transaction.c:135:69:    expected restricted gfp_t [usertype] flags
      fs/jbd2/transaction.c:135:69:    got int [signed] gfp_mask
      Signed-off-by: Dan Carpenter <error27@gmail.com>
      Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
      d2159fb7
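      A standalone userspace sketch of the gfp_t type check behind these warnings (this is not the kernel source; start_handle() and GFP_NOFS_DEMO are made-up stand-ins). Compiled with gcc the annotations are no-ops; run under Sparse, the second call reproduces the "incorrect type in argument 2 (different base types)" warning quoted above, which the properly typed flag avoids.

      	#include <stdio.h>

      	/* Under Sparse (__CHECKER__ defined) these expand to real attributes,
      	 * making gfp_t a distinct "restricted" type; under gcc they vanish. */
      	#ifdef __CHECKER__
      	#define __bitwise __attribute__((bitwise))
      	#define __force   __attribute__((force))
      	#else
      	#define __bitwise
      	#define __force
      	#endif

      	typedef unsigned int __bitwise gfp_t;         /* mirrors the kernel typedef */

      	#define GFP_NOFS_DEMO ((__force gfp_t)0x10u)  /* illustrative flag value */

      	/* Stand-in for a helper that expects gfp_t, not int. */
      	static int start_handle(int nblocks, gfp_t gfp_mask)
      	{
      		printf("nblocks=%d gfp=0x%x\n", nblocks, (__force unsigned int)gfp_mask);
      		return 0;
      	}

      	int main(void)
      	{
      		int gfp_mask = 0x10;                 /* plain int, as in the warning */

      		start_handle(1, GFP_NOFS_DEMO);      /* clean under Sparse */
      		start_handle(1, gfp_mask);           /* Sparse: incorrect type in argument 2 */
      		return 0;
      	}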
    • jbd2: add debugging information to jbd2_journal_dirty_metadata() · 9ea7a0df
      Committed by Theodore Ts'o
      Add debugging information in case jbd2_journal_dirty_metadata() is
      called with a buffer_head which didn't have
      jbd2_journal_get_write_access() called on it, or if the journal_head
      has the wrong transaction in it.  In addition, return an error code.
      This won't change anything for ocfs2, which will BUG_ON() the non-zero
      exit code.
      
      For ext4, the caller of this function is ext4_handle_dirty_metadata(),
      which, on seeing a non-zero return code, calls __ext4_journal_stop();
      that prints the function and line number of the (buggy) caller and
      aborts the journal.  This allows us to recover instead of BUG-halting,
      which is better from a robustness and reliability point of view (a
      minimal model of this error-propagation flow follows this entry).
      Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
      9ea7a0df
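      A minimal userspace model of the error-propagation flow described above (not the ext4/jbd2 code; struct journal and the function names here are illustrative stand-ins): the low-level check returns an error instead of crashing, and the caller reports the offender and aborts the journal.

      	#include <errno.h>
      	#include <stdbool.h>
      	#include <stdio.h>

      	struct journal { bool aborted; };

      	/* Stand-in for jbd2_journal_dirty_metadata(): on misuse, complain and
      	 * return an error code instead of halting the machine. */
      	static int dirty_metadata(bool have_write_access)
      	{
      		if (!have_write_access) {
      			fprintf(stderr, "dirty_metadata: buffer lacks write access\n");
      			return -EINVAL;
      		}
      		return 0;
      	}

      	/* Stand-in for ext4_handle_dirty_metadata()/__ext4_journal_stop():
      	 * report the buggy caller and abort the journal so we can recover. */
      	static void handle_dirty_metadata(struct journal *j, bool have_write_access,
      					  const char *caller)
      	{
      		int err = dirty_metadata(have_write_access);

      		if (err) {
      			fprintf(stderr, "%s: error %d, aborting journal\n", caller, err);
      			j->aborted = true;
      		}
      	}

      	int main(void)
      	{
      		struct journal j = { .aborted = false };

      		handle_dirty_metadata(&j, true,  "well_behaved_caller");
      		handle_dirty_metadata(&j, false, "buggy_caller");
      		printf("journal aborted: %s\n", j.aborted ? "yes" : "no");
      		return 0;
      	}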
  4. 14 Jun 2011, 1 commit
    • jbd2: Fix oops in jbd2_journal_remove_journal_head() · de1b7941
      Committed by Jan Kara
      jbd2_journal_remove_journal_head() can oops when trying to access the
      journal_head returned by bh2jh(). This is caused, for example, by the
      following race:
      
      	TASK1					TASK2
        jbd2_journal_commit_transaction()
          ...
          processing t_forget list
            __jbd2_journal_refile_buffer(jh);
            if (!jh->b_transaction) {
              jbd_unlock_bh_state(bh);
      					jbd2_journal_try_to_free_buffers()
      					  jbd2_journal_grab_journal_head(bh)
      					  jbd_lock_bh_state(bh)
      					  __journal_try_to_free_buffer()
      					  jbd2_journal_put_journal_head(jh)
              jbd2_journal_remove_journal_head(bh);
      
      jbd2_journal_put_journal_head() in TASK2 sees that b_jcount == 0 and
      that the buffer is not part of any transaction, and thus frees the
      journal_head before TASK1 gets to doing so. Note that even the
      buffer_head can be released by try_to_free_buffers() after
      jbd2_journal_put_journal_head(), which widens the window for an oops
      even further (but I didn't see this happen in reality).
      
      Fix the problem by making transactions hold their own journal_head
      reference (in b_jcount). That way we don't have to remove the
      journal_head explicitly via jbd2_journal_remove_journal_head();
      instead, the journal_head is freed when b_jcount drops to zero. The
      result of this is that [__]jbd2_journal_refile_buffer(),
      [__]jbd2_journal_unfile_buffer(), and
      __jbd2_journal_remove_checkpoint() can free the journal_head, which
      requires modifying a few callers. We also have to be careful because
      once the journal_head is removed, the buffer_head might be freed as
      well, so we have to take our own buffer_head reference where it
      matters (a simplified refcounting model follows this entry).
      Signed-off-by: Jan Kara <jack@suse.cz>
      Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
      de1b7941
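      A simplified userspace model of the refcounting scheme described above (not the jbd2 code, and locking is omitted; jh_grab()/jh_put() are illustrative stand-ins for jbd2_journal_grab_journal_head()/jbd2_journal_put_journal_head()):

      	#include <stdio.h>
      	#include <stdlib.h>

      	struct journal_head {
      		int b_jcount;   /* reference count, as in jbd2 */
      	};

      	static struct journal_head *jh_grab(struct journal_head *jh)
      	{
      		jh->b_jcount++;
      		return jh;
      	}

      	/* The last put frees the journal_head; there is no separate "remove"
      	 * call that could race with another holder. */
      	static void jh_put(struct journal_head *jh)
      	{
      		if (--jh->b_jcount == 0) {
      			printf("b_jcount hit zero, freeing journal_head\n");
      			free(jh);
      		}
      	}

      	int main(void)
      	{
      		struct journal_head *jh = calloc(1, sizeof(*jh));

      		if (!jh)
      			return 1;
      		jh_grab(jh);   /* reference held by the committing transaction */
      		jh_grab(jh);   /* reference held by a concurrent user of the bh */

      		jh_put(jh);    /* transaction refiles the buffer and drops its ref */
      		jh_put(jh);    /* concurrent user drops the last ref -> freed here */
      		return 0;
      	}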
  5. 13 Jun 2011, 1 commit
  6. 26 May 2011, 1 commit
  7. 25 May 2011, 1 commit
  8. 24 May 2011, 1 commit
    • jbd2: fix sending of data flush on journal commit · 81be12c8
      Committed by Jan Kara
      
      In data=ordered mode, it's theoretically possible (however rare) that
      an inode is filed to the transaction's t_inode_list, a flusher thread
      writes out all of its data, and the inode is reclaimed before the
      transaction starts to commit.  In such a case, we could erroneously
      omit sending a flush to the file system device when it is different
      from the journal device (because the data may still sit only in the
      disk cache).

      Fix the problem by setting a flag on the transaction when an inode is
      added to it, and then sending a disk flush in the commit code when the
      flag is set (sketched after this entry).
      Signed-off-by: Jan Kara <jack@suse.cz>
      Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
      81be12c8
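      A small userspace sketch of the "flag the transaction, flush at commit" idea (the struct, field, and issue_flush() helper are illustrative, not the jbd2 names):

      	#include <stdbool.h>
      	#include <stdio.h>

      	struct transaction {
      		bool need_data_flush;   /* set when an inode is added to t_inode_list */
      	};

      	static void add_inode_to_transaction(struct transaction *t)
      	{
      		/* Ordered-mode data may be written and the inode reclaimed before
      		 * commit, so remember that a cache flush is still required. */
      		t->need_data_flush = true;
      	}

      	static void issue_flush(const char *dev)
      	{
      		printf("FLUSH %s\n", dev);
      	}

      	static void commit_transaction(struct transaction *t, bool separate_fs_dev)
      	{
      		/* Flush the fs device even if t_inode_list is empty by commit time. */
      		if (t->need_data_flush && separate_fs_dev)
      			issue_flush("fs-device");
      		issue_flush("journal-device");
      	}

      	int main(void)
      	{
      		struct transaction t = { .need_data_flush = false };

      		add_inode_to_transaction(&t);
      		commit_transaction(&t, true);
      		return 0;
      	}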
  9. 23 May 2011, 1 commit
    • jbd2: Fix the wrong calculation of t_max_wait in update_t_max_wait · 28e35e42
      Committed by Tao Ma
      t_max_wait was added in commit 8e85fb3f to record how long we waited
      for a new transaction to start. In commit 6d0bf005, the calculation
      was moved into a separate function, update_t_max_wait(), to avoid a
      build warning. The problem is that 'ts' was originally initialized at
      the start of start_this_handle(), so t_max_wait could be calculated
      correctly; with that change, ts is initialized inside the new
      function, so t_max_wait can never be calculated correctly.

      This patch moves the initialization of ts back to the beginning of
      start_this_handle() and passes it to update_t_max_wait(), so that the
      wait time is measured correctly while the build warning is still
      avoided (see the sketch after this entry).
      
      Cc: Jan Kara <jack@suse.cz>
      Signed-off-by: Tao Ma <boyu.mt@taobao.com>
      Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
      Reviewed-by: Eric Sandeen <sandeen@redhat.com>
      28e35e42
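      A userspace sketch of why the start timestamp must be captured by the caller (illustrative names and clock; the real code uses jiffies/ktime inside start_this_handle() and update_t_max_wait()): the delta only covers the wait if 'ts' is taken before the wait begins and passed down.

      	#define _POSIX_C_SOURCE 199309L
      	#include <stdio.h>
      	#include <time.h>

      	static double t_max_wait;   /* longest observed wait for a new transaction */

      	/* The caller captured 'start' before it began waiting, so the
      	 * measured delta covers the whole wait. */
      	static void update_t_max_wait(struct timespec start)
      	{
      		struct timespec now;
      		double waited;

      		clock_gettime(CLOCK_MONOTONIC, &now);
      		waited = (now.tv_sec - start.tv_sec) +
      			 (now.tv_nsec - start.tv_nsec) / 1e9;
      		if (waited > t_max_wait)
      			t_max_wait = waited;
      	}

      	static void start_this_handle(void)
      	{
      		struct timespec ts, delay = { .tv_sec = 0, .tv_nsec = 10000000 };

      		clock_gettime(CLOCK_MONOTONIC, &ts);  /* capture before waiting */
      		nanosleep(&delay, NULL);              /* stand-in for the real wait */
      		update_t_max_wait(ts);                /* pass the caller's timestamp */
      	}

      	int main(void)
      	{
      		start_this_handle();
      		printf("t_max_wait = %.3f s\n", t_max_wait);
      		return 0;
      	}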
  10. 31 Mar 2011, 1 commit
  11. 12 Feb 2011, 1 commit
    • jbd2: call __jbd2_log_start_commit with j_state_lock write locked · e4471831
      Committed by Theodore Ts'o
      On an SMP ARM system running ext4, I've received a report that the
      first J_ASSERT in jbd2_journal_commit_transaction has been triggering:

      	J_ASSERT(journal->j_running_transaction != NULL);

      While investigating possible causes for this problem, I noticed that
      __jbd2_log_start_commit() was getting called with j_state_lock only
      read-locked, in spite of the fact that it may need to modify
      j_commit_request.  Fix this by grabbing the necessary information
      while still holding the read lock, so we can test whether we need to
      start a new transaction before dropping it, and then calling
      jbd2_log_start_commit(), which will take the write lock (a locking
      sketch follows this entry).
      Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
      e4471831
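      A pthread sketch of the locking pattern described above (not the jbd2 code; the data and helper names are illustrative): gather what you need and decide under the read lock, drop it, and let the function that actually modifies j_commit_request take the write lock.

      	#include <pthread.h>
      	#include <stdbool.h>
      	#include <stdio.h>

      	static pthread_rwlock_t j_state_lock = PTHREAD_RWLOCK_INITIALIZER;
      	static unsigned long j_commit_request;  /* tid whose commit was requested */
      	static unsigned long running_tid = 42;  /* stand-in for the running transaction */

      	/* Modifies j_commit_request, so it must hold the write lock. */
      	static void log_start_commit(unsigned long tid)
      	{
      		pthread_rwlock_wrlock(&j_state_lock);
      		if (j_commit_request != tid) {
      			j_commit_request = tid;
      			printf("commit of transaction %lu requested\n", tid);
      		}
      		pthread_rwlock_unlock(&j_state_lock);
      	}

      	static void maybe_start_commit(void)
      	{
      		unsigned long tid;
      		bool need_commit;

      		/* Only read under the read lock: remember the tid and the decision... */
      		pthread_rwlock_rdlock(&j_state_lock);
      		tid = running_tid;
      		need_commit = (j_commit_request != tid);
      		pthread_rwlock_unlock(&j_state_lock);

      		/* ...then do the modification with the write lock held. */
      		if (need_commit)
      			log_start_commit(tid);
      	}

      	int main(void)
      	{
      		maybe_start_commit();
      		maybe_start_commit();   /* second call finds the request already made */
      		return 0;
      	}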
  12. 19 Dec 2010, 3 commits
  13. 10 Dec 2010, 1 commit
  14. 28 Oct 2010, 1 commit
  15. 10 Aug 2010, 1 commit
  16. 04 Aug 2010, 2 commits
  17. 02 Aug 2010, 1 commit
  18. 27 Jul 2010, 1 commit
    • jbd2: Remove __GFP_NOFAIL from jbd2 layer · 47def826
      Committed by Theodore Ts'o
      __GFP_NOFAIL is going away, so add our own retry loop.  Also add
      jbd2__journal_start() and jbd2__journal_restart(), which take a gfp
      mask, so that file systems can optionally (re)start transaction
      handles using GFP_KERNEL.  If they do this, they need to be prepared
      to handle receiving a PTR_ERR(-ENOMEM) error, and be ready to reflect
      that error up to userspace (both behaviors are sketched after this
      entry).
      Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
      47def826
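      A userspace sketch of the two behaviors described above (illustrative names; the kernel code retries around its slab allocation and reports failure via ERR_PTR(-ENOMEM)): either loop with a short back-off until the allocation succeeds, or fail and let the caller push the error up to userspace.

      	#define _POSIX_C_SOURCE 199309L
      	#include <errno.h>
      	#include <stdbool.h>
      	#include <stdio.h>
      	#include <stdlib.h>
      	#include <time.h>

      	/* With retry=true, keep trying with a short back-off (the old
      	 * __GFP_NOFAIL behaviour, now open-coded); with retry=false, return
      	 * NULL and let the caller turn that into -ENOMEM. */
      	static void *alloc_handle(size_t size, bool retry)
      	{
      		for (;;) {
      			void *h = malloc(size);
      			struct timespec delay = { .tv_sec = 0, .tv_nsec = 100000000 };

      			if (h || !retry)
      				return h;
      			fprintf(stderr, "alloc_handle: out of memory, retrying\n");
      			nanosleep(&delay, NULL);  /* crude stand-in for the kernel's back-off */
      		}
      	}

      	int main(void)
      	{
      		void *h = alloc_handle(128, false);

      		if (!h) {
      			fprintf(stderr, "journal_start: failing with -ENOMEM (%d)\n", -ENOMEM);
      			return 1;   /* the caller must be prepared for this */
      		}
      		free(h);
      		return 0;
      	}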
  19. 16 Jul 2010, 1 commit
    • jbd2/ocfs2: Fix block checksumming when a buffer is used in several transactions · 13ceef09
      Committed by Jan Kara
      OCFS2 uses the t_commit trigger to compute and store a checksum of the
      just-committed blocks. When a buffer has b_frozen_data, the checksum
      is computed for it instead of b_data, but this can result in an old
      checksum being written to the filesystem in the following scenario:
      
      1) transaction1 is opened
      2) handle1 is opened
      3) journal_access(handle1, bh)
          - This sets jh->b_transaction to transaction1
      4) modify(bh)
      5) journal_dirty(handle1, bh)
      6) handle1 is closed
      7) start committing transaction1, opening transaction2
      8) handle2 is opened
      9) journal_access(handle2, bh)
          - This copies off b_frozen_data to make it safe for transaction1 to commit.
            jh->b_next_transaction is set to transaction2.
      10) jbd2_journal_write_metadata() checksums b_frozen_data
      11) the journal correctly writes b_frozen_data to the disk journal
      12) handle2 is closed
          - There was no dirty call for the bh on handle2, so it is never queued for
            any more journal operation
      13) Checkpointing finally happens, and it just spools the bh via normal buffer
      writeback.  This will write b_data, which was never triggered on and thus
      contains a wrong (old) checksum.
      
      This patch fixes the problem by calling the trigger at the moment the
      data is frozen for journal commit - i.e., either when b_frozen_data is
      created by do_get_write_access() or just before we write the buffer to
      the log if b_frozen_data does not exist. We also rename the trigger to
      t_frozen, as that better describes when it is called (a minimal
      trigger sketch follows this entry).
      Signed-off-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Mark Fasheh <mfasheh@suse.com>
      Signed-off-by: Joel Becker <joel.becker@oracle.com>
      13ceef09
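      A schematic of the t_frozen idea (userspace sketch; the struct layout and function names are illustrative): the trigger runs on whichever copy of the data will actually be written to the log - b_frozen_data if it was created, otherwise b_data just before the write - so a later modification of b_data cannot leave a stale checksum in the log.

      	#include <stdio.h>
      	#include <string.h>

      	struct buffer {
      		char b_data[16];          /* live copy, may keep changing */
      		char b_frozen_data[16];   /* snapshot made for the committing transaction */
      		int  has_frozen;
      	};

      	/* Stand-in for the t_frozen trigger: e.g. recompute a checksum in place. */
      	static void t_frozen_trigger(char *data, size_t len)
      	{
      		printf("trigger ran on \"%.*s\"\n", (int)len, data);
      	}

      	/* Freezing the buffer for commit: copy b_data and trigger on the copy. */
      	static void freeze_for_commit(struct buffer *b)
      	{
      		memcpy(b->b_frozen_data, b->b_data, sizeof(b->b_frozen_data));
      		b->has_frozen = 1;
      		t_frozen_trigger(b->b_frozen_data, sizeof(b->b_frozen_data));
      	}

      	/* Writing to the log: if no frozen copy exists, trigger on b_data now. */
      	static void write_to_log(struct buffer *b)
      	{
      		char *src = b->has_frozen ? b->b_frozen_data : b->b_data;

      		if (!b->has_frozen)
      			t_frozen_trigger(b->b_data, sizeof(b->b_data));
      		printf("log <- \"%.*s\"\n", (int)sizeof(b->b_data), src);
      	}

      	int main(void)
      	{
      		struct buffer b = { .b_data = "metadata v1" };

      		freeze_for_commit(&b);            /* path 1: frozen copy made and triggered */
      		strcpy(b.b_data, "metadata v2");  /* handle2 modifies the live copy */
      		write_to_log(&b);                 /* the triggered, frozen copy hits the log */
      		return 0;
      	}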
  20. 16 May 2010, 1 commit
  21. 16 Feb 2010, 1 commit
  22. 18 Aug 2009, 1 commit
  23. 11 Aug 2009, 1 commit
  24. 18 Jun 2009, 1 commit
  25. 14 Jul 2009, 1 commit
    • jbd2: Fix a race between checkpointing code and journal_get_write_access() · f91d1d04
      Committed by Jan Kara
      The following race can happen:
      
       CPU1                          CPU2
                                     checkpointing code checks the buffer, adds
                                       it to an array for writeback
       do_get_write_access()
       ...
       lock_buffer()
       unlock_buffer()
                                     flush_batch() submits the buffer for IO
       __jbd2_journal_file_buffer()
      
      So a buffer under writeout is returned from
      do_get_write_access(). Since the filesystem code relies on the fact
      that journaled buffers cannot be written out, it does not take the
      buffer lock and so can modify the buffer while it is under writeout.
      That can lead to filesystem corruption if we crash at just the right
      moment.
      
      We fix the problem by clearing the buffer dirty bit under buffer_lock
      even if the buffer is on the BJ_None list. In fact, we clear the dirty
      bit regardless of which list the buffer is on, and warn about it if
      the buffer is already journaled (a small sketch follows this entry).

      Thanks for spotting the problem go to dingdinghua <dingdinghua85@gmail.com>.
      Reported-by: dingdinghua <dingdinghua85@gmail.com>
      Signed-off-by: Jan Kara <jack@suse.cz>
      Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
      f91d1d04
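      A small pthread sketch of "clear the dirty bit under the buffer lock" (illustrative model; the real code uses lock_buffer(), test_clear_buffer_dirty() and the jbddirty bit): once write access moves the dirty bit under the lock, writeback no longer sees the buffer as dirty.

      	#include <pthread.h>
      	#include <stdbool.h>
      	#include <stdio.h>

      	struct buffer {
      		pthread_mutex_t lock;   /* stand-in for the buffer_head lock */
      		bool dirty;             /* stand-in for BH_Dirty */
      		bool jbddirty;          /* stand-in for BH_JBDDirty */
      		bool journaled;
      	};

      	/* Taking write access: move the dirty bit to jbddirty while holding
      	 * the buffer lock, and warn if the buffer was already journaled. */
      	static void get_write_access(struct buffer *b)
      	{
      		pthread_mutex_lock(&b->lock);
      		if (b->dirty) {
      			if (b->journaled)
      				fprintf(stderr, "warning: journaled buffer was dirty\n");
      			b->dirty = false;
      			b->jbddirty = true;
      		}
      		pthread_mutex_unlock(&b->lock);
      	}

      	/* Writeback only submits buffers it still sees dirty under the lock. */
      	static void writeback(struct buffer *b)
      	{
      		pthread_mutex_lock(&b->lock);
      		fputs(b->dirty ? "submitting buffer for IO\n"
      			       : "buffer not dirty, skipping\n", stdout);
      		pthread_mutex_unlock(&b->lock);
      	}

      	int main(void)
      	{
      		struct buffer b = { .lock = PTHREAD_MUTEX_INITIALIZER, .dirty = true };

      		get_write_access(&b);   /* clears the dirty bit under the lock */
      		writeback(&b);          /* writeback now skips the journaled buffer */
      		return 0;
      	}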
  26. 26 Mar 2009, 1 commit
  27. 11 Feb 2009, 1 commit
    • jbd2: Avoid possible NULL dereference in jbd2_journal_begin_ordered_truncate() · 7f5aa215
      Committed by Jan Kara
      If we race with the commit code setting i_transaction to NULL, we
      could end up dereferencing a NULL pointer.  Proper locking requires
      the journal pointer (to access journal->j_list_lock), which we don't
      have.  So we have to change the prototype of the function so that the
      filesystem passes us the journal pointer (see the sketch after this
      entry).  Also add a more detailed comment about why
      jbd2_journal_begin_ordered_truncate() does what it does and how it
      should be used.

      Thanks to Dan Carpenter <error27@gmail.com> for pointing to the
      suspicious code.
      Signed-off-by: Jan Kara <jack@suse.cz>
      Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
      Acked-by: Joel Becker <joel.becker@oracle.com>
      CC: linux-ext4@vger.kernel.org
      CC: ocfs2-devel@oss.oracle.com
      CC: mfasheh@suse.de
      CC: Dan Carpenter <error27@gmail.com>
      7f5aa215
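      A sketch of the prototype change (userspace model with illustrative stand-ins; the real function is jbd2_journal_begin_ordered_truncate() and the lock is journal->j_list_lock): the filesystem hands in the journal so the i_transaction check can be made under the right lock.

      	#include <pthread.h>
      	#include <stdio.h>

      	struct transaction { int t_tid; };

      	struct journal {
      		pthread_mutex_t j_list_lock;   /* protects the inode's transaction link */
      	};

      	struct jbd2_inode {
      		struct transaction *i_transaction;  /* commit code may set this to NULL */
      	};

      	/* New-style prototype: the caller passes the journal, so we can take
      	 * j_list_lock before looking at i_transaction. */
      	static int begin_ordered_truncate(struct journal *journal,
      					  struct jbd2_inode *jinode)
      	{
      		int tid = -1;

      		pthread_mutex_lock(&journal->j_list_lock);
      		if (jinode->i_transaction)     /* safe: cannot flip to NULL under us */
      			tid = jinode->i_transaction->t_tid;
      		pthread_mutex_unlock(&journal->j_list_lock);

      		return tid;                    /* -1 means nothing to wait for */
      	}

      	int main(void)
      	{
      		struct journal j = { .j_list_lock = PTHREAD_MUTEX_INITIALIZER };
      		struct transaction t = { .t_tid = 7 };
      		struct jbd2_inode ino = { .i_transaction = &t };

      		printf("tid to wait for: %d\n", begin_ordered_truncate(&j, &ino));
      		ino.i_transaction = NULL;      /* commit code detached the inode */
      		printf("tid to wait for: %d\n", begin_ordered_truncate(&j, &ino));
      		return 0;
      	}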
  28. 06 Jan 2009, 1 commit
    • jbd2: Add buffer triggers · e06c8227
      Committed by Joel Becker
      Filesystems often do compute-intensive operations on some metadata.
      If such an operation is repeated many times, it can be very expensive.
      It would be much nicer if the operation could be performed once before
      a buffer goes to disk.
      
      This adds triggers to jbd2 buffer heads.  Just before writing a metadata
      buffer to the journal, jbd2 will optionally call a commit trigger associated
      with the buffer.  If the journal is aborted, an abort trigger will be
      called on any dirty buffers as they are dropped from pending
      transactions.
      
      ocfs2 will use this feature.
      
      Initially I tried to come up with a more generic trigger that could be
      used for non-buffer-related events like transaction completion.  It
      doesn't tie together nicely, because the information a buffer trigger
      needs (specific to a journal_head) isn't the same as what a
      transaction trigger needs (specific to a transaction_t or perhaps
      journal_t).  So I implemented a buffer trigger set, with the
      understanding that journal/transaction-wide triggers should be
      implemented separately.
      
      There is only one trigger set allowed per buffer.  I can't think of any
      reason to attach more than one set.  Contrast this with a journal or
      transaction in which multiple places may want to watch the entire
      transaction separately.
      
      The trigger sets are considered statically allocated from the jbd2
      perspective.  ocfs2 will just have one trigger set per block type,
      setting the same set on every bh of that type (a minimal sketch of
      such a trigger set follows this entry).
      Signed-off-by: Joel Becker <joel.becker@oracle.com>
      Cc: "Theodore Ts'o" <tytso@mit.edu>
      Cc: <linux-ext4@vger.kernel.org>
      Signed-off-by: Mark Fasheh <mfasheh@suse.com>
      e06c8227
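      A minimal sketch of a per-buffer trigger set (userspace model; the struct layout and hook names are illustrative, patterned after the commit/abort hooks described above): one statically allocated set per block type, attached to every bh of that type.

      	#include <stdio.h>

      	struct buffer_head;   /* opaque here */

      	/* A trigger set: one hook runs just before a metadata buffer is written
      	 * to the journal, the other on dirty buffers dropped after an abort. */
      	struct trigger_set {
      		void (*on_commit)(struct buffer_head *bh, void *data, size_t size);
      		void (*on_abort)(struct buffer_head *bh);
      	};

      	static void inode_block_commit(struct buffer_head *bh, void *data, size_t size)
      	{
      		(void)bh; (void)data;
      		printf("recomputing checksum over %zu bytes of an inode block\n", size);
      	}

      	static void inode_block_abort(struct buffer_head *bh)
      	{
      		(void)bh;
      		printf("journal aborted with a dirty inode block\n");
      	}

      	/* One static set per block type, set on every bh of that type. */
      	static const struct trigger_set inode_block_triggers = {
      		.on_commit = inode_block_commit,
      		.on_abort  = inode_block_abort,
      	};

      	int main(void)
      	{
      		char block[512] = { 0 };

      		/* The journaling layer would call this right before writing the block. */
      		inode_block_triggers.on_commit(NULL, block, sizeof(block));
      		return 0;
      	}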
  29. 04 Jan 2009, 1 commit
  30. 26 Nov 2008, 1 commit
    • jbd2: improve jbd2 fsync batching · e07f7183
      Committed by Josef Bacik
      This patch removes the static sleep time in favor of a more
      self-optimizing approach: we measure the average amount of time it
      takes to commit a transaction to disk and the amount of time the
      current transaction has been running.  Traditionally, if somebody did
      a sync write or an fsync() we would sleep for 1 jiffy, which,
      depending on the value of HZ, could be a significant amount of time
      compared to how long it takes to commit a transaction to the
      underlying storage.  With this patch, instead of sleeping for a jiffy,
      we check whether the amount of time this transaction has been running
      is less than the average commit time, and if it is we sleep for the
      delta using schedule_hrtimeout() to get a higher-precision sleep time.
      This greatly benefits high-end storage, where you could otherwise end
      up sleeping for longer than it takes to commit the transaction and
      therefore sit idle; keeping the sleep time to a minimum makes sure we
      are always doing something useful (a userspace model of this adaptive
      sleep follows this entry).
      Signed-off-by: Josef Bacik <jbacik@redhat.com>
      Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
      e07f7183
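      A userspace model of the adaptive sleep (illustrative; the kernel tracks commit times with ktime and sleeps with schedule_hrtimeout()): keep a running average of the commit time and, on fsync, sleep only for the remaining difference instead of a fixed jiffy.

      	#define _POSIX_C_SOURCE 199309L
      	#include <stdio.h>
      	#include <time.h>

      	static double avg_commit_sec = 0.004;   /* running average of commit time */

      	/* Update the running average after each commit (simple smoothing). */
      	static void record_commit_time(double measured_sec)
      	{
      		avg_commit_sec = (avg_commit_sec + measured_sec) / 2.0;
      	}

      	/* On a sync write/fsync: if this transaction has been running for less
      	 * time than an average commit takes, sleep only for the difference so
      	 * other writers can join the transaction before it commits. */
      	static void fsync_batching_sleep(double running_sec)
      	{
      		double delta = avg_commit_sec - running_sec;

      		if (delta > 0) {
      			struct timespec ts = {
      				.tv_sec  = (time_t)delta,
      				.tv_nsec = (long)((delta - (time_t)delta) * 1e9),
      			};

      			printf("sleeping %.6f s to batch more work\n", delta);
      			nanosleep(&ts, NULL);   /* high-resolution, not a whole jiffy */
      		}
      	}

      	int main(void)
      	{
      		record_commit_time(0.006);    /* a commit took 6 ms */
      		fsync_batching_sleep(0.001);  /* transaction ran only 1 ms: short sleep */
      		fsync_batching_sleep(0.010);  /* already ran longer than a commit: no sleep */
      		return 0;
      	}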
  31. 17 Oct 2008, 1 commit
    • ext4: Replace hackish ext4_mb_poll_new_transaction with commit callback · 3e624fc7
      Committed by Theodore Ts'o
      The multiblock allocator needs to be able to release blocks (and issue
      a blkdev discard request) when the transaction which freed those
      blocks is committed.  Previously this was done via a polling mechanism
      when blocks were allocated or freed.  A much better way of doing
      things is to create a jbd2 callback function and attach the list of
      blocks to be freed directly to the transaction structure (a small
      sketch follows this entry).
      Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
      3e624fc7
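      A sketch of attaching a to-be-freed block list to the transaction and releasing it from a commit callback (userspace model; all names are illustrative): the allocator hangs its freed extents on the transaction, and the journal walks the list once the transaction has committed, instead of being polled.

      	#include <stdio.h>
      	#include <stdlib.h>

      	struct freed_extent {
      		unsigned long block;
      		unsigned long count;
      		struct freed_extent *next;
      	};

      	struct transaction {
      		struct freed_extent *freed_list;   /* attached by the block allocator */
      	};

      	/* Filesystem side: record blocks to release once the commit is durable. */
      	static void attach_freed_extent(struct transaction *t,
      					unsigned long block, unsigned long count)
      	{
      		struct freed_extent *fe = malloc(sizeof(*fe));

      		if (!fe)
      			return;
      		fe->block = block;
      		fe->count = count;
      		fe->next = t->freed_list;
      		t->freed_list = fe;
      	}

      	/* Journal side: the commit callback runs after the transaction commits. */
      	static void commit_callback(struct transaction *t)
      	{
      		while (t->freed_list) {
      			struct freed_extent *fe = t->freed_list;

      			t->freed_list = fe->next;
      			printf("release blocks %lu..%lu (could issue a discard)\n",
      			       fe->block, fe->block + fe->count - 1);
      			free(fe);
      		}
      	}

      	int main(void)
      	{
      		struct transaction t = { .freed_list = NULL };

      		attach_freed_extent(&t, 1000, 8);
      		attach_freed_extent(&t, 2048, 16);
      		commit_callback(&t);   /* invoked once the commit is done */
      		return 0;
      	}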
  32. 11 Aug 2008, 2 commits
  33. 12 Jul 2008, 2 commits
  34. 14 Jul 2008, 1 commit
    • jbd2: fix race between jbd2_journal_try_to_free_buffers() and jbd2 commit transaction · 530576bb
      Committed by Mingming Cao
      journal_try_to_free_buffers() can race with the jbd2 commit code when
      the latter is holding the buffer reference while waiting for the data
      buffer to flush to disk. If the caller of
      journal_try_to_free_buffers() is trying hard to release the buffers,
      it will treat the failure as an error and return it to its caller. We
      have seen direct IO fail due to this race.  Some callers of
      releasepage() also expect the buffer to be dropped when they pass the
      GFP_KERNEL mask down to releasepage()->journal_try_to_free_buffers().

      With this patch, if the caller passes GFP_KERNEL to indicate that this
      call may wait, then when try_to_free_buffers() fails we wait for
      journal_commit_transaction() to finish committing the current
      committing transaction, and then try to free those buffers again with
      the journal locked (a simplified model follows this entry).
      Signed-off-by: Mingming Cao <cmm@us.ibm.com>
      Reviewed-by: Badari Pulavarty <pbadari@us.ibm.com>
      Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
      530576bb
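      A simplified model of the retry logic (userspace sketch; all names are illustrative): when the caller may wait (GFP_KERNEL-style), a failed attempt to free the buffers is retried after the committing transaction finishes and drops its reference.

      	#include <stdbool.h>
      	#include <stdio.h>

      	static bool commit_in_progress = true;  /* the committing transaction holds a ref */

      	/* Fails while the commit still holds a reference on the buffer. */
      	static bool try_to_free_buffers(void)
      	{
      		return !commit_in_progress;
      	}

      	/* Stand-in for waiting on journal_commit_transaction() to finish. */
      	static void wait_for_commit(void)
      	{
      		printf("waiting for the committing transaction to finish\n");
      		commit_in_progress = false;
      	}

      	static bool release_page(bool may_wait /* e.g. GFP_KERNEL was passed */)
      	{
      		if (try_to_free_buffers())
      			return true;
      		if (!may_wait)
      			return false;           /* cannot block: report failure upward */

      		wait_for_commit();              /* blocks until the commit drops its ref */
      		return try_to_free_buffers();   /* the second attempt now succeeds */
      	}

      	int main(void)
      	{
      		printf("releasepage(GFP_KERNEL) -> %s\n",
      		       release_page(true) ? "freed" : "failed");
      		return 0;
      	}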