1. 22 Nov 2011, 1 commit
    • jbd: clear revoked flag on buffers before a new transaction started · 8c111b3f
      Yongqiang Yang authored
      Currently, we clear the revoked flag only when a block is reused.  However,
      this can trigger a false journal error.  Consider a situation when a block
      is used as a meta block and is deleted (revoked) in ordered mode, then the
      block is allocated as a data block to a file.  At this moment, the user changes
      the file's journal mode from ordered to journaled and truncates the file.
      The block will be considered re-revoked by the journal because it still has the
      revoked flag pending from the last transaction, and an assertion triggers.
      
      We fix the problem by keeping the revoked status more up to date - we clear
      the revoked flag when switching revoke tables to reflect that there are no
      revoked buffers in the current transaction any more.
      Signed-off-by: Yongqiang Yang <xiaoqiangnk@gmail.com>
      Signed-off-by: Jan Kara <jack@suse.cz>
      8c111b3f
  2. 02 Nov 2011, 1 commit
    • jbd/jbd2: validate sb->s_first in journal_get_superblock() · 8762202d
      Eryu Guan authored
      I hit a J_ASSERT(blocknr != 0) failure in cleanup_journal_tail() when
      mounting a fsfuzzed ext3 image. It turns out that the corrupted ext3
      image has s_first = 0 in its journal superblock, and the 0 is passed to
      journal->j_head in journal_reset(), then to blocknr in
      cleanup_journal_tail(), where the J_ASSERT finally fails.
      
      So validate s_first after reading journal superblock from disk in
      journal_get_superblock() to ensure s_first is valid.
      
      The following script could reproduce it:
      
      fstype=ext3
      blocksize=1024
      img=$fstype.img
      offset=0
      found=0
      magic="c0 3b 39 98"
      
      dd if=/dev/zero of=$img bs=1M count=8
      mkfs -t $fstype -b $blocksize -F $img
      filesize=`stat -c %s $img`
      while [ $offset -lt $filesize ]
      do
              if od -j $offset -N 4 -t x1 $img | grep -i "$magic";then
                      echo "Found journal: $offset"
                      found=1
                      break
              fi
              offset=`echo "$offset+$blocksize" | bc`
      done
      
      if [ $found -ne 1 ];then
              echo "Magic \"$magic\" not found"
              exit 1
      fi
      
      dd if=/dev/zero of=$img seek=$(($offset+23)) conv=notrunc bs=1 count=1
      
      mkdir -p ./mnt
      mount -o loop $img ./mnt
      
      Cc: Jan Kara <jack@suse.cz>
      Signed-off-by: Eryu Guan <guaneryu@gmail.com>
      Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
      8762202d
  3. 28 Jun 2011, 1 commit
    • jbd: Use WRITE_SYNC in journal checkpoint. · a212d1a7
      Tao Ma authored
      In journal checkpoint, we write the buffer and wait for it to finish.
      But in cfq, the async queue has a very low priority, and in our test,
      if there are too many sync queues and every queue is filled up with
      requests, the process will hang waiting for log space.
      
      So this patch uses WRITE_SYNC in __flush_batch so that the requests are
      moved into the sync queue and handled by cfq in a timely manner. We also use
      the new plug API so that all the WRITE_SYNC requests can be issued as a whole
      when we unplug.
      Reported-by: Robin Dong <sanbai@taobao.com>
      Signed-off-by: Tao Ma <boyu.mt@taobao.com>
      Signed-off-by: Jan Kara <jack@suse.cz>
      a212d1a7
  4. 27 Jun 2011, 1 commit
    • jbd: Fix oops in journal_remove_journal_head() · bb189247
      Jan Kara authored
      journal_remove_journal_head() can oops when trying to access a journal_head
      returned by bh2jh(). This is caused, for example, by the following race:
      
      	TASK1					TASK2
        journal_commit_transaction()
          ...
          processing t_forget list
            __journal_refile_buffer(jh);
            if (!jh->b_transaction) {
              jbd_unlock_bh_state(bh);
      					journal_try_to_free_buffers()
      					  journal_grab_journal_head(bh)
      					  jbd_lock_bh_state(bh)
      					  __journal_try_to_free_buffer()
      					  journal_put_journal_head(jh)
              journal_remove_journal_head(bh);
      
      journal_put_journal_head() in TASK2 sees that b_jcount == 0 and the buffer is
      not part of any transaction, and thus frees the journal_head before TASK1 gets
      to doing so. Note that even the buffer_head can be released by
      try_to_free_buffers() after journal_put_journal_head(), which opens an even
      larger window for an oops (but I didn't see this happen in reality).
      
      Fix the problem by making transactions hold their own journal_head reference
      (in b_jcount). That way we don't have to remove the journal_head explicitly via
      journal_remove_journal_head() and instead just remove journal_head when
      b_jcount drops to zero. The result of this is that [__]journal_refile_buffer(),
      [__]journal_unfile_buffer(), and __journal_remove_checkpoint() can free
      journal_head which needs modification of a few callers. Also we have to be
      careful because once journal_head is removed, buffer_head might be freed as
      well. So we have to get our own buffer_head reference where it matters.
      Signed-off-by: Jan Kara <jack@suse.cz>
      bb189247
  5. 25 Jun 2011, 3 commits
    • jbd: fix a bug of leaking jh->b_jcount · bd5c9e18
      Ding Dinghua authored
      journal_get_create_access() should drop jh->b_jcount in its error handling path.
      Signed-off-by: Ding Dinghua <dingdinghua@nrchpc.ac.cn>
      Signed-off-by: Jan Kara <jack@suse.cz>
      bd5c9e18
    • jbd: remove dependency on __GFP_NOFAIL · 05713082
      Jan Kara authored
      The callers of start_this_handle() (or rather ext3_journal_start()) are not
      really prepared to handle allocation failures. Such failures can for example
      result in silent data loss when they happen in ext3_..._writepage().  OTOH
      __GFP_NOFAIL is going away, so we just retry the allocation in start_this_handle().
      
      This loop is potentially dangerous because the oom killer cannot be invoked
      for a GFP_NOFS allocation, so there is a potential for infinite looping.
      But still this is better than silent data loss.
      Signed-off-by: Jan Kara <jack@suse.cz>
      05713082
    • jbd: Add fixed tracepoints · 99cb1a31
      Lukas Czerner authored
      This commit adds fixed tracepoints for jbd, based on the fixed
      tracepoints for jbd2. The ones for collecting statistics are missing,
      since I think adding them would require a more intrusive patch, so that
      should get its own commit if someone decides it is needed. There are also
      new tracepoints in __journal_drop_transaction() and
      journal_update_superblock().
      
      The list of jbd tracepoints:
      
      jbd_checkpoint
      jbd_start_commit
      jbd_commit_locking
      jbd_commit_flushing
      jbd_commit_logging
      jbd_drop_transaction
      jbd_end_commit
      jbd_do_submit_data
      jbd_cleanup_journal_tail
      jbd_update_superblock_end
      Signed-off-by: Lukas Czerner <lczerner@redhat.com>
      Cc: Jan Kara <jack@suse.cz>
      Signed-off-by: Jan Kara <jack@suse.cz>
      99cb1a31
  6. 24 May 2011, 1 commit
  7. 17 May 2011, 3 commits
    • jbd/jbd2: remove obsolete summarise_journal_usage. · 9199e665
      Tao Ma authored
      summarise_journal_usage seems to have been obsolete for a long time,
      so remove it.
      
      Cc: Jan Kara <jack@suse.cz>
      Signed-off-by: Tao Ma <boyu.mt@taobao.com>
      Signed-off-by: Jan Kara <jack@suse.cz>
      9199e665
    • jbd: Fix forever sleeping process in do_get_write_access() · 2842bb20
      Jan Kara authored
      In do_get_write_access() we wait on the BH_Unshadow bit for the buffer to get
      out of the shadow state. The waking code in journal_commit_transaction() has
      a bug: it does not issue a memory barrier after the buffer is moved
      out of the shadow state and before wake_up_bit() is called. Thus the waitqueue
      check can happen before the buffer is actually moved out of the shadow state,
      and the waiting process may never be woken. Fix the problem by issuing the
      proper barrier.
      
      CC: stable@kernel.org
      Reported-by: Tao Ma <boyu.mt@taobao.com>
      Signed-off-by: Jan Kara <jack@suse.cz>
      2842bb20
    • jbd: fix fsync() tid wraparound bug · d9b01934
      Ted Ts'o authored
      If an application program does not make any changes to the indirect
      blocks or extent tree, i_datasync_tid will not get updated.  If there
      are enough commits (i.e., 2**31) such that tid_geq()'s calculations
      wrap, and there isn't a currently active transaction at the time of
      the fdatasync() call, this can end up triggering a BUG_ON in
      fs/jbd/commit.c:
      
      	J_ASSERT(journal->j_running_transaction != NULL);
      
      It's pretty rare that this can happen, since it requires the use of
      fdatasync() plus *very* frequent and excessive use of fsync().  But
      with the right workload, it can.
      
      We fix this by replacing the use of tid_geq() with an equality test,
      since there's only one transaction id that is valid for us to
      start: namely, the currently running transaction (if it exists).
      
      CC: stable@kernel.org
      Reported-by: Martin_Zielinski@McAfee.com
      Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
      Signed-off-by: Jan Kara <jack@suse.cz>
      d9b01934
  8. 31 Mar 2011, 1 commit
  9. 17 Mar 2011, 1 commit
  10. 10 Mar 2011, 1 commit
    • block: kill off REQ_UNPLUG · 721a9602
      Jens Axboe authored
      With plugging now being explicitly controlled by the
      submitter, callers need not pass down unplugging hints
      to the block layer. If they want to unplug, it's because they
      manually plugged on their own - in which case, they should just
      unplug at will.
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
      721a9602
  11. 01 Mar 2011, 1 commit
  12. 10 Dec 2010, 1 commit
  13. 28 Oct 2010, 10 commits
  14. 20 Sep 2010, 1 commit
    • cfq: improve fsync performance for small files · 749ef9f8
      Corrado Zoccolo authored
      The fsync performance cfq achieves for small files on high-end disks is
      lower than what deadline can achieve, due to the idling introduced between
      the sync write happening in process context and the journal commit.
      
      Moreover, when competing with a sequential reader, a process writing
      small files and fsync-ing them is starved.
      
      This patch fixes the two problems by:
      - marking journal commits as WRITE_SYNC, so that they get the REQ_NOIDLE
        flag set,
      - force all queues that have REQ_NOIDLE requests to be put in the noidle
        tree.
      
      Having the queue associated with the fsync-ing process and the one associated
      with journal commits in the noidle tree allows:
      - switching between them without idling,
      - fairness vs. competing idling queues, since they will be serviced only
        after the noidle tree expires its slice.
      Acked-by: Vivek Goyal <vgoyal@redhat.com>
      Reviewed-by: Jeff Moyer <jmoyer@redhat.com>
      Tested-by: Jeff Moyer <jmoyer@redhat.com>
      Signed-off-by: Corrado Zoccolo <czoccolo@gmail.com>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
      749ef9f8
  15. 10 Sep 2010, 1 commit
  16. 18 Aug 2010, 2 commits
    • remove SWRITE* I/O types · 9cb569d6
      Christoph Hellwig authored
      These flags aren't real I/O types, but tell ll_rw_block to always
      lock the buffer instead of giving up on a failed trylock.
      
      Instead add a new write_dirty_buffer helper that implements these semantics
      and use it for the existing SWRITE* callers.  Note that the ll_rw_block
      code had a bug where it didn't promote WRITE_SYNC_PLUG properly, which
      this patch fixes.
      
      In the ufs code clean up the helper that used to call ll_rw_block
      to mirror sync_dirty_buffer, which is the function it implements for
      compound buffers.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      9cb569d6
    • kill BH_Ordered flag · 87e99511
      Christoph Hellwig authored
      Instead of abusing a buffer_head flag just add a variant of
      sync_dirty_buffer which allows passing the exact type of write
      flag required.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      87e99511
  17. 21 Jul 2010, 1 commit
  18. 22 May 2010, 2 commits
  19. 30 Mar 2010, 1 commit
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking... · 5a0e3ad6
      Tejun Heo authored
      include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h
      
      percpu.h is included by sched.h and module.h and thus ends up being
      included when building most .c files.  percpu.h includes slab.h which
      in turn includes gfp.h making everything defined by the two files
      universally available and complicating inclusion dependencies.
      
      percpu.h -> slab.h dependency is about to be removed.  Prepare for
      this change by updating users of gfp and slab facilities to include those
      headers directly instead of assuming their availability.  As this conversion
      needs to touch a large number of source files, the following script was
      used as the basis of the conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py
      
      The script does the following.
      
      * Scan files for gfp and slab usages and update includes such that
        only the necessary includes are there.  I.e., if only gfp is used,
        gfp.h; if slab is used, slab.h.
      
      * When the script inserts a new include, it looks at the include
        blocks and tries to put the new include so that its order conforms
        to its surroundings.  It's put in the include block which contains
        core kernel includes, in the same order that the rest are ordered -
        alphabetical, Christmas tree, rev-Xmas-tree - or at the end if there
        doesn't seem to be any matching order.
      
      * If the script can't find a place to put a new include (mostly
        because the file doesn't have a fitting include block), it prints out
        an error message indicating which .h file needs to be added to the
        file.
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was manually checked.  Some didn't need the inclusion,
         some needed manual addition, while for others adding it to an
         implementation .h or embedded .c file was more appropriate.  This
         step added inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed.
         e.g. lib/decompress_*.c used malloc/free() wrappers around slab
         APIs requiring slab.h to be added manually.
      
      5. The script was run on all .h files but without automatically
         editing them as sprinkling gfp.h and slab.h inclusions around .h
         files could easily lead to inclusion dependency hell.  Most gfp.h
         inclusion directives were ignored as stuff from gfp.h was usually
         widely available and often used in preprocessor macros.  Each
         slab.h inclusion directive was examined and added manually as
         necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build tests were done on the following configurations and failures
         were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
         distributed build env didn't work with gcov compiles) and a few
         more options had to be turned off depending on archs to make things
         build (like ipr on powerpc/64 which failed due to missing writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. percpu.h modifications were reverted so that it could be applied as
         a separate patch and serve as bisection point.
      
      Given the fact that I had only a couple of failures from tests on step
      6, I'm fairly confident about the coverage of this conversion patch.
      If there is a breakage, it's likely to be something in one of the arch
      headers which should be easily discoverable on most builds of the
      specific arch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      5a0e3ad6
  20. 05 Mar 2010, 1 commit
    • jbd: Delay discarding buffers in journal_unmap_buffer · 86963918
      Jan Kara authored
      Delay discarding buffers in journal_unmap_buffer until
      we know that the "add to orphan" operation has definitely been
      committed, otherwise the log space of the committing transaction
      may be freed and reused before the truncate gets committed, and
      updates may get lost if a crash happens.
      
      This patch is a backport of the JBD2 fix by dingdinghua <dingdinghua@nrchpc.ac.cn>.
      Signed-off-by: Jan Kara <jack@suse.cz>
      86963918
  21. 09 Feb 2010, 1 commit
  22. 23 Dec 2009, 1 commit
  23. 18 Dec 2009, 1 commit
  24. 16 Dec 2009, 1 commit
  25. 12 Nov 2009, 1 commit