1. 09 Feb 2013, 1 commit
  2. 07 Feb 2013, 1 commit
    • jbd2: track request delay statistics · 9fff24aa
      Authored by Theodore Ts'o
      Track the delay between when we first request that the commit begin
      and when it actually begins, so we can see how much of a gap exists.
      In theory, this should just be the remaining scheduling quantum of
      the thread which requested the commit (assuming it was not a
      synchronous operation which triggered the commit request) plus
      scheduling overhead; however, it's possible that real-time processes
      might prevent the kjournald thread from executing.
      Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
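
      A minimal sketch of the idea (not the actual patch): note the time when
      a commit is first requested and report the gap once the commit thread
      picks it up. The struct and field names below are hypothetical.

      #include <linux/jiffies.h>

      struct commit_request_stats {
              unsigned long ts_requested;      /* jiffies when the commit was requested */
              unsigned long request_delay_ms;  /* measured request-to-start gap */
      };

      /* Called where __jbd2_log_start_commit() first asks for a commit. */
      static void note_commit_requested(struct commit_request_stats *s)
      {
              s->ts_requested = jiffies;
      }

      /* Called once the commit actually starts running in kjournald2. */
      static void note_commit_started(struct commit_request_stats *s)
      {
              s->request_delay_ms = jiffies_to_msecs(jiffies - s->ts_requested);
      }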
  3. 26 Dec 2012, 1 commit
    • ext4: fix deadlock in journal_unmap_buffer() · 53e87268
      Authored by Jan Kara
      We cannot wait for transaction commit in journal_unmap_buffer()
      because we hold page lock which ranks below transaction start.  We
      solve the issue by bailing out of journal_unmap_buffer() and
      jbd2_journal_invalidatepage() with -EBUSY.  The caller is then
      responsible for waiting for the transaction commit to finish and
      trying the invalidation again. Since the issue can happen only for a
      page straddling i_size, it is simple enough to manually call
      jbd2_journal_invalidatepage() for such a page from ext4_setattr(),
      check the return value, and wait if necessary.
      Signed-off-by: Jan Kara <jack@suse.cz>
      Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
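
      A sketch of the caller-side retry pattern this change enables. The helper
      name is hypothetical, and the three-argument jbd2_journal_invalidatepage()
      shown is the form from this kernel era.

      static void invalidate_page_waiting_for_commit(journal_t *journal,
                                                     struct page *page)
      {
              tid_t commit_tid;

              while (jbd2_journal_invalidatepage(journal, page, 0) == -EBUSY) {
                      /* The buffer still belongs to the committing transaction:
                       * wait for that commit to finish and try again. */
                      read_lock(&journal->j_state_lock);
                      commit_tid = journal->j_committing_transaction ?
                              journal->j_committing_transaction->t_tid : 0;
                      read_unlock(&journal->j_state_lock);
                      if (commit_tid)
                              jbd2_log_wait_commit(journal, commit_tid);
              }
      }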
  4. 21 Dec 2012, 1 commit
    • jbd2: fix assertion failure in jbd2_journal_flush() · d7961c7f
      Authored by Jan Kara
      The following race is possible between start_this_handle() and someone
      calling jbd2_journal_flush().
      
      Process A                              Process B
      start_this_handle().
        if (journal->j_barrier_count) # false
        if (!journal->j_running_transaction) { #true
          read_unlock(&journal->j_state_lock);
                                             jbd2_journal_lock_updates()
                                             jbd2_journal_flush()
                                               write_lock(&journal->j_state_lock);
                                               if (journal->j_running_transaction) {
                                                 # false
                                               ... wait for committing trans ...
                                               write_unlock(&journal->j_state_lock);
          ...
          write_lock(&journal->j_state_lock);
          if (!journal->j_running_transaction) { # true
            jbd2_get_transaction(journal, new_transaction);
          write_unlock(&journal->j_state_lock);
          goto repeat; # eventually blocks on j_barrier_count > 0
                                               ...
                                               J_ASSERT(!journal->j_running_transaction);
                                                 # fails
      
      We fix the race by rechecking j_barrier_count after reacquiring j_state_lock
      in exclusive mode.
      
      Reported-by: yjwsignal@empal.com
      Signed-off-by: Jan Kara <jack@suse.cz>
      Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
      Cc: stable@vger.kernel.org
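
      A sketch of where the recheck lands in start_this_handle(); the helper
      shape below is illustrative, not the literal patch.

      /* Returns true if the caller must go back to its repeat: label. */
      static bool install_new_transaction(journal_t *journal,
                                          transaction_t *new_transaction)
      {
              write_lock(&journal->j_state_lock);
              /* Recheck under the exclusive lock: jbd2_journal_lock_updates()
               * may have raised j_barrier_count while we were unlocked. */
              if (journal->j_barrier_count) {
                      write_unlock(&journal->j_state_lock);
                      return true;    /* repeat and wait for the barrier to drop */
              }
              if (!journal->j_running_transaction)
                      jbd2_get_transaction(journal, new_transaction);
              write_unlock(&journal->j_state_lock);
              return false;
      }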
  5. 19 Nov 2012, 1 commit
  6. 09 Nov 2012, 1 commit
  7. 27 Sep 2012, 1 commit
    • jbd2: fix assertion failure in commit code due to lacking transaction credits · b794e7a6
      Authored by Jan Kara
      ext4 users of data=journal mode with blocksize < pagesize were
      occasionally hitting an assertion failure in
      jbd2_journal_commit_transaction() checking whether the transaction has
      at least as many credits reserved as buffers attached.  The core of the
      problem is that when a file gets truncated, buffers that still need
      checkpointing or that are attached to the committing transaction are
      left with buffer_mapped set. When this happens to buffers beyond i_size
      attached to a page straddling i_size, a subsequent write extending the
      file will see these buffers and, as they are mapped (but the underlying
      blocks were freed), things go awry from there.
      
      The assertion failure triggers only coincidentally (and in this case
      luckily, as we would otherwise start corrupting the filesystem) because
      the journal_head is not properly cleaned up either.
      
      We fix the problem by unmapping buffers if possible (in lots of cases
      we just need a buffer attached to a transaction as a placeholder, but
      it must not be written out anyway).  And in one case, we just have to
      bite the bullet and wait for the transaction commit to finish.
      
      CC: Josef Bacik <jbacik@fusionio.com>
      Signed-off-by: Jan Kara <jack@suse.cz>
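
      A minimal sketch of the "unmap it" half of the fix; the helper name is
      hypothetical, and the real change is spread across journal_unmap_buffer()
      and the commit path.

      /* For a buffer beyond i_size that only stays attached to the (committing)
       * transaction as a placeholder, strip the state that would make a later
       * extending write treat it as a live, mapped buffer. */
      static void zap_buffer_past_eof(struct buffer_head *bh)
      {
              clear_buffer_dirty(bh);
              clear_buffer_mapped(bh);
              clear_buffer_new(bh);
              clear_buffer_req(bh);
              bh->b_bdev = NULL;
      }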
  8. 01 Jun 2012, 1 commit
  9. 20 Mar 2012, 1 commit
  10. 14 Mar 2012, 2 commits
  11. 21 Feb 2012, 2 commits
    • jbd2: allocate transaction from separate slab cache · 0c2022ec
      Authored by Yongqiang Yang
      There is normally only a handful of these active at any one time, but
      putting them in a separate slab cache makes debugging memory
      corruption problems easier.  Manish Katiyar also wanted this to make
      it easier to test memory failure scenarios in the jbd2 layer.
      
      Cc: Manish Katiyar <mkatiyar@gmail.com>
      Signed-off-by: Yongqiang Yang <xiaoqiangnk@gmail.com>
      Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
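
      A minimal sketch of a dedicated slab cache for transaction_t; the cache
      setup/teardown wiring into journal init and exit is omitted and the
      function names are illustrative.

      static struct kmem_cache *transaction_cache;

      static int jbd2_journal_init_transaction_cache(void)
      {
              transaction_cache = KMEM_CACHE(transaction_s, SLAB_HWCACHE_ALIGN);
              return transaction_cache ? 0 : -ENOMEM;
      }

      static transaction_t *alloc_transaction(gfp_t gfp_mask)
      {
              /* A dedicated cache means slab debugging (poisoning, red zones)
               * points straight at jbd2 when a transaction object is corrupted
               * or used after free. */
              return kmem_cache_zalloc(transaction_cache, gfp_mask);
      }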
    • jbd2: clear BH_Delay & BH_Unwritten in journal_unmap_buffer · 15291164
      Authored by Eric Sandeen
      journal_unmap_buffer()'s zap_buffer: code clears a lot of buffer head
      state ala discard_buffer(), but does not touch _Delay or _Unwritten as
      discard_buffer() does.
      
      This can be problematic in some areas of the ext4 code which assume
      that if they have found a buffer marked unwritten or delay, then it's
      a live one.  Perhaps those spots should check whether it is mapped
      as well, but if jbd2 is going to tear down a buffer, let's really
      tear it down completely.
      
      Without this I get some fsx failures on sub-page-block filesystems
      up until v3.2, at which point 4e96b2db
      and 189e868f make the failures go
      away, because buried within that large change is some more flag
      clearing.  I still think it's worth doing in jbd2, since
      ->invalidatepage leads here directly, and it's the right place
      to clear away these flags.
      Signed-off-by: Eric Sandeen <sandeen@redhat.com>
      Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
      Cc: stable@vger.kernel.org
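
      The gist of the change, as a fragment of the zap_buffer: path (surrounding
      code omitted):

              clear_buffer_dirty(bh);
              clear_buffer_uptodate(bh);
              clear_buffer_mapped(bh);
              clear_buffer_req(bh);
              clear_buffer_new(bh);
              clear_buffer_delay(bh);         /* added: match discard_buffer() */
              clear_buffer_unwritten(bh);     /* added: match discard_buffer() */
              bh->b_bdev = NULL;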
  12. 05 Jan 2012, 1 commit
    • jbd2: fix hung processes in jbd2_journal_lock_updates() · 9837d8e9
      Authored by Jan Kara
      Toshiyuki Okajima found out that when running
      
      for ((i=0; i < 100000; i++)); do
              if ((i%2 == 0)); then
                      chattr +j /mnt/file
              else
                      chattr -j /mnt/file
              fi
              echo "0" >> /mnt/file
      done
      
      a process sometimes hangs indefinitely in jbd2_journal_lock_updates().
      
      Toshiyuki identified that the following race happens:
      
      jbd2_journal_lock_updates()            |jbd2_journal_stop()
      ---------------------------------------+---------------------------------------
       write_lock(&journal->j_state_lock)    |    .
       ++journal->j_barrier_count            |    .
       spin_lock(&tran->t_handle_lock)       |    .
       atomic_read(&tran->t_updates) //not 0 |
                                             | atomic_dec_and_test(&tran->t_updates)
                                             |    // t_updates = 0
                                             | wake_up(&journal->j_wait_updates)
       prepare_to_wait()                     |    // no process is woken up.
       spin_unlock(&tran->t_handle_lock)     |
       write_unlock(&journal->j_state_lock)  |
       schedule() // never return            |
      
      We fix the problem by first calling prepare_to_wait() and only after that
      checking t_updates in jbd2_journal_lock_updates().
      Reported-and-analyzed-by: Toshiyuki Okajima <toshi.okajima@jp.fujitsu.com>
      Signed-off-by: Jan Kara <jack@suse.cz>
      Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
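
      A fragment sketching the fixed wait (simplified from the real loop): the
      task registers on the waitqueue before checking t_updates, so a wakeup
      issued between the check and schedule() can no longer be lost.

              DEFINE_WAIT(wait);

              prepare_to_wait(&journal->j_wait_updates, &wait,
                              TASK_UNINTERRUPTIBLE);
              if (!atomic_read(&transaction->t_updates)) {
                      /* Nothing to wait for after all. */
                      spin_unlock(&transaction->t_handle_lock);
                      finish_wait(&journal->j_wait_updates, &wait);
              } else {
                      spin_unlock(&transaction->t_handle_lock);
                      write_unlock(&journal->j_state_lock);
                      schedule();
                      finish_wait(&journal->j_wait_updates, &wait);
                      write_lock(&journal->j_state_lock);
              }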
  13. 02 Nov 2011, 1 commit
  14. 27 Oct 2011, 1 commit
  15. 04 Sep 2011, 2 commits
    • jbd2: use gfp_t instead of int · d2159fb7
      Authored by Dan Carpenter
      This silences some Sparse warnings:
      fs/jbd2/transaction.c:135:69: warning: incorrect type in argument 2 (different base types)
      fs/jbd2/transaction.c:135:69:    expected restricted gfp_t [usertype] flags
      fs/jbd2/transaction.c:135:69:    got int [signed] gfp_mask
      Signed-off-by: Dan Carpenter <error27@gmail.com>
      Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
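
      The essence of the type fix as an illustrative prototype: gfp_t is the
      restricted type Sparse checks for allocation flags, so passing a plain
      int trips the warning above.

      /* before */ static int start_this_handle(journal_t *journal,
                                                handle_t *handle, int gfp_mask);
      /* after  */ static int start_this_handle(journal_t *journal,
                                                handle_t *handle, gfp_t gfp_mask);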
    • jbd2: add debugging information to jbd2_journal_dirty_metadata() · 9ea7a0df
      Authored by Theodore Ts'o
      Add debugging information in case jbd2_journal_dirty_metadata() is
      called with a buffer_head which didn't have
      jbd2_journal_get_write_access() called on it, or if the journal_head
      has the wrong transaction in it.  In addition, return an error code.
      This won't change anything for ocfs2, which will BUG_ON() the non-zero
      exit code.
      
      For ext4, the caller of this function is ext4_handle_dirty_metadata(),
      and on seeing a non-zero return code, will call __ext4_journal_stop(),
      which will print the function and line number of the (buggy) calling
      function and abort the journal.  This will allow us to recover instead
      of bug halting, which is better from a robustness and reliability
      point of view.
      Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
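
      A caller-side sketch of the recover-instead-of-BUG idea; ext4's real
      helpers differ in detail.

              int err = jbd2_journal_dirty_metadata(handle, bh);

              if (err) {
                      /* Identify the (buggy) caller, then abort the journal
                       * rather than BUGing the whole machine. */
                      printk(KERN_ERR "%s: jbd2_journal_dirty_metadata failed: %d\n",
                             __func__, err);
                      jbd2_journal_abort(handle->h_transaction->t_journal, err);
              }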
  16. 14 Jun 2011, 1 commit
    • jbd2: Fix oops in jbd2_journal_remove_journal_head() · de1b7941
      Authored by Jan Kara
      jbd2_journal_remove_journal_head() can oops when trying to access
      journal_head returned by bh2jh(). This is caused for example by the
      following race:
      
      	TASK1					TASK2
        jbd2_journal_commit_transaction()
          ...
          processing t_forget list
            __jbd2_journal_refile_buffer(jh);
            if (!jh->b_transaction) {
              jbd_unlock_bh_state(bh);
      					jbd2_journal_try_to_free_buffers()
      					  jbd2_journal_grab_journal_head(bh)
      					  jbd_lock_bh_state(bh)
      					  __journal_try_to_free_buffer()
      					  jbd2_journal_put_journal_head(jh)
              jbd2_journal_remove_journal_head(bh);
      
      jbd2_journal_put_journal_head() in TASK2 sees that b_jcount == 0 and
      buffer is not part of any transaction and thus frees journal_head
      before TASK1 gets to doing so. Note that even the buffer_head can be
      released by try_to_free_buffers() after
      jbd2_journal_put_journal_head(), which creates an even larger
      opportunity for an oops (but I didn't see this happen in reality).
      
      Fix the problem by making transactions hold their own journal_head
      reference (in b_jcount). That way we don't have to remove journal_head
      explicitly via jbd2_journal_remove_journal_head() and instead just
      remove journal_head when b_jcount drops to zero. The result of this is
      that [__]jbd2_journal_refile_buffer(),
      [__]jbd2_journal_unfile_buffer(), and
      __jbd2_journal_remove_checkpoint() can free journal_head which needs
      modification of a few callers. Also we have to be careful because once
      journal_head is removed, buffer_head might be freed as well. So we
      have to get our own buffer_head reference where it matters.
      Signed-off-by: Jan Kara <jack@suse.cz>
      Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
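
      A fragment sketching the "take your own buffer_head reference" rule the
      patch introduces for such callers; the surrounding context (for example
      the commit-phase t_forget loop) is omitted.

              struct buffer_head *bh = jh2bh(jh);

              /* Pin the buffer_head: refiling may drop the transaction's
               * journal_head reference, after which jh (and, without this
               * get_bh(), possibly bh itself) could be freed under us. */
              get_bh(bh);
              __jbd2_journal_refile_buffer(jh);
              jbd_unlock_bh_state(bh);
              __brelse(bh);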
  17. 13 Jun 2011, 1 commit
  18. 26 May 2011, 1 commit
  19. 25 May 2011, 1 commit
  20. 24 May 2011, 1 commit
    • jbd2: fix sending of data flush on journal commit · 81be12c8
      Authored by Jan Kara
      
      In data=ordered mode, it's theoretically possible (however rare) that
      an inode is added to a transaction's t_inode_list, a flusher thread
      writes out all of its data, and the inode is reclaimed before the
      transaction starts to commit.  In such a case, we could erroneously
      omit sending a flush to the file system device when it is different
      from the journal device (because the data may still sit only in the
      disk cache).

      Fix the problem by setting a flag in the transaction when an inode is
      added to it, and then sending a disk flush in the commit code when the
      flag is set.
      Signed-off-by: Jan Kara <jack@suse.cz>
      Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
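
      The two halves of the fix, sketched. The flag name t_need_data_flush and
      the exact flush call are assumptions based on the description above;
      blkdev_issue_flush() is shown with its signature from this kernel era.

      /* 1) When an inode is attached to the running transaction: */
              transaction->t_need_data_flush = 1;

      /* 2) In the commit code, before writing the commit record: */
              if (commit_transaction->t_need_data_flush &&
                  journal->j_fs_dev != journal->j_dev)
                      blkdev_issue_flush(journal->j_fs_dev, GFP_NOFS, NULL);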
  21. 23 May 2011, 1 commit
    • jbd2: Fix the wrong calculation of t_max_wait in update_t_max_wait · 28e35e42
      Authored by Tao Ma
      t_max_wait was added in commit 8e85fb3f to indicate how long we
      waited for a new transaction to start. In commit 6d0bf005 the
      calculation was moved into a separate function, update_t_max_wait(),
      to avoid a build warning. The problem is that the original 'ts' was
      initialized at the start of start_this_handle(), so t_max_wait could
      be calculated correctly; after that change, ts is initialized inside
      update_t_max_wait() itself, so t_max_wait can never be calculated
      correctly.

      This patch moves the initialization of ts back to the beginning of
      start_this_handle() and passes it to update_t_max_wait(), so that
      t_max_wait is calculated correctly while the build warning is still
      avoided.
      
      Cc: Jan Kara <jack@suse.cz>
      Signed-off-by: Tao Ma <boyu.mt@taobao.com>
      Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
      Reviewed-by: Eric Sandeen <sandeen@redhat.com>
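
      A simplified sketch of the fixed shape: the timestamp is taken once at
      the top of start_this_handle() and passed down. The locking and the
      CONFIG_JBD2_DEBUG guard of the real code are omitted.

      /* at the very top of start_this_handle():  unsigned long ts = jiffies; */

      static inline void update_t_max_wait(transaction_t *transaction,
                                           unsigned long ts)
      {
              if (time_after(transaction->t_start, ts) &&
                  jbd2_time_diff(ts, transaction->t_start) > transaction->t_max_wait)
                      transaction->t_max_wait = jbd2_time_diff(ts,
                                                               transaction->t_start);
      }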
  22. 31 Mar 2011, 1 commit
  23. 12 Feb 2011, 1 commit
    • jbd2: call __jbd2_log_start_commit with j_state_lock write locked · e4471831
      Authored by Theodore Ts'o
      On an SMP ARM system running ext4, I've received a report that the
      first J_ASSERT in jbd2_journal_commit_transaction has been triggering:
      
      	J_ASSERT(journal->j_running_transaction != NULL);
      
      While investigating possible causes for this problem, I noticed that
      __jbd2_log_start_commit() is getting called with j_state_lock only
      read-locked, in spite of the fact that it may need to modify
      j_commit_request.  Fix this by grabbing the necessary information so
      we can test to see if we need to start a new transaction before
      dropping the read lock, and then calling jbd2_log_start_commit(),
      which will grab the write lock.
      Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
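
      A fragment sketching the resulting pattern: remember the running
      transaction's tid while still under the read lock, drop it, and let
      jbd2_log_start_commit() (which takes j_state_lock for writing) update
      j_commit_request.

              tid_t tid = 0;

              read_lock(&journal->j_state_lock);
              if (journal->j_running_transaction)
                      tid = journal->j_running_transaction->t_tid;
              read_unlock(&journal->j_state_lock);

              if (tid)
                      jbd2_log_start_commit(journal, tid);    /* takes the write lock */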
  24. 19 Dec 2010, 3 commits
  25. 10 Dec 2010, 1 commit
  26. 28 Oct 2010, 1 commit
  27. 10 Aug 2010, 1 commit
  28. 04 Aug 2010, 2 commits
  29. 02 Aug 2010, 1 commit
  30. 27 Jul 2010, 1 commit
    • jbd2: Remove __GFP_NOFAIL from jbd2 layer · 47def826
      Authored by Theodore Ts'o
      __GFP_NOFAIL is going away, so add our own retry loop.  Also add
      jbd2__journal_start() and jbd2__journal_restart() which take a gfp
      mask, so that file systems can optionally (re)start transaction
      handles using GFP_KERNEL.  If they do this, then they need to be
      prepared to handle receiving an PTR_ERR(-ENOMEM) error, and be ready
      to reflect that error up to userspace.
      Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
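
      A sketch of the retry loop that replaces __GFP_NOFAIL, as a fragment from
      the start of start_this_handle(); the exact back-off call is an
      assumption.

      alloc_transaction:
              if (!journal->j_running_transaction) {
                      new_transaction = kzalloc(sizeof(*new_transaction), gfp_mask);
                      if (!new_transaction) {
                              /* GFP_NOFS callers sit in fs/writeback paths and
                               * must not fail, so retry the allocation here
                               * instead of relying on __GFP_NOFAIL. */
                              if ((gfp_mask & __GFP_FS) == 0) {
                                      congestion_wait(BLK_RW_ASYNC, HZ / 50);
                                      goto alloc_transaction;
                              }
                              /* GFP_KERNEL callers of jbd2__journal_start() get
                               * an ERR_PTR(-ENOMEM) handle instead. */
                              return -ENOMEM;
                      }
              }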
  31. 16 Jul 2010, 1 commit
    • jbd2/ocfs2: Fix block checksumming when a buffer is used in several transactions · 13ceef09
      Authored by Jan Kara
      OCFS2 uses the t_commit trigger to compute and store a checksum of the
      just-committed blocks. When a buffer has b_frozen_data, the checksum is
      computed from it instead of from b_data, but this can result in an old
      checksum being written to the filesystem in the following scenario:
      
      1) transaction1 is opened
      2) handle1 is opened
      3) journal_access(handle1, bh)
          - This sets jh->b_transaction to transaction1
      4) modify(bh)
      5) journal_dirty(handle1, bh)
      6) handle1 is closed
      7) start committing transaction1, opening transaction2
      8) handle2 is opened
      9) journal_access(handle2, bh)
          - This copies off b_frozen_data to make it safe for transaction1 to commit.
            jh->b_next_transaction is set to transaction2.
      10) jbd2_journal_write_metadata() checksums b_frozen_data
      11) the journal correctly writes b_frozen_data to the disk journal
      12) handle2 is closed
          - There was no dirty call for the bh on handle2, so it is never queued for
            any more journal operation
      13) Checkpointing finally happens, and it just spools the bh via normal buffer
      writeback.  This will write b_data, which was never triggered on and thus
      contains a wrong (old) checksum.
      
      This patch fixes the problem by calling the trigger at the moment data is
      frozen for journal commit - i.e., either when b_frozen_data is created by
      do_get_write_access or just before we write a buffer to the log if
      b_frozen_data does not exist. We also rename the trigger to t_frozen as
      that better describes when it is called.
      Signed-off-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Mark Fasheh <mfasheh@suse.com>
      Signed-off-by: Joel Becker <joel.becker@oracle.com>
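
      A fragment sketching the first half of the fix in do_get_write_access():
      fire the renamed frozen-data trigger just before the copy is made, so the
      checksum covers exactly what will reach the log. Variable names follow
      the surrounding jbd2 code of that era; treat the call site as
      illustrative.

              /* Fire the frozen-data trigger on the source data just before
               * copying it into jh->b_frozen_data, so OCFS2's checksum is
               * computed over the bytes that actually go to the journal. */
              jbd2_buffer_frozen_trigger(jh, source + offset, jh->b_triggers);
              memcpy(jh->b_frozen_data, source + offset, bh->b_size);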
  32. 16 May 2010, 1 commit
  33. 16 Feb 2010, 1 commit
  34. 18 Aug 2009, 1 commit