1. 02 Sep 2006, 1 commit
  2. 28 Aug 2006, 1 commit
  3. 23 Jun 2006, 2 commits
    • [PATCH] jbd: avoid kfree(NULL) · 304c4c84
      Andrew Morton authored
      There are a couple of places where JBD has to check to see whether an unneeded
      memory allocation was performed.  Usually it _was_ needed, so we end up
      calling kfree(NULL).  We can micro-optimise that by checking the pointer
      before calling kfree().
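      
      A minimal userspace analogue of the pattern, offered as a sketch rather
      than the actual JBD code (free(NULL), like kfree(NULL), is a legal no-op;
      the check merely skips the redundant call in the common case where the
      speculative allocation was consumed):
      
        #include <stdlib.h>
      
        int main(void)
        {
            char *spare = malloc(64);   /* speculative allocation */
            char *committed = NULL;
      
            if (spare) {                /* the allocation turned out to be needed */
                committed = spare;
                spare = NULL;
            }
      
            /* Micro-optimisation: skip the call instead of free(spare) unconditionally. */
            if (spare)
                free(spare);
      
            free(committed);
            return 0;
        }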
      
      Thanks to Steven Rostedt <rostedt@goodmis.org> for identifying this.
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] jbd: fix BUG in journal_commit_transaction() · 9ada7340
      Jan Kara authored
      Fix a possible assertion failure in journal_commit_transaction() on
      jh->b_next_transaction == NULL (hit when we are processing the BJ_Forget
      list and the buffer is not jbddirty).
      
      !jbddirty buffers can be placed on the BJ_Forget list, for example by
      journal_forget() or by __dispose_buffer(); generally such a buffer means
      that it has been freed by this transaction.
      
      Freed buffers should not be reallocated until the transaction has committed
      (that's why we have the assertion there), but they *can* be reallocated once
      the transaction has already been committed to disk and we are just
      processing the BJ_Forget list (as soon as we remove b_committed_data from
      the bitmap bh, ext3 is able to reallocate buffers freed by the committing
      transaction).  So we also have to account for the case where the buffer has
      been reallocated and b_next_transaction has already been set.
      
      And one more subtle point: it can happen that we manage to reallocate the
      buffer and also mark it jbddirty.  In that case we also add the freed buffer
      to the checkpoint list of the committing transaction, but that should do no
      harm.
      
      Non-jbddirty buffers should be filed on the BJ_Reserved list, not the
      BJ_Metadata list.  It can actually happen that we refile such buffers during
      the commit phase, when the running transaction reallocates blocks deleted in
      the committing transaction (and that can happen if the committing
      transaction has already written all the data and is just cleaning up the
      BJ_Forget list).
      Signed-off-by: Jan Kara <jack@suse.cz>
      Acked-by: "Stephen C. Tweedie" <sct@redhat.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  4. 27 Mar 2006, 1 commit
  5. 26 Mar 2006, 1 commit
  6. 23 Mar 2006, 1 commit
  7. 06 Feb 2006, 1 commit
    • [PATCH] jbd: fix transaction batching · fe1dcbc4
      Andrew Morton authored
      Ben points out that:
      
        When writing files out using O_SYNC, jbd's 1 jiffy delay results in a
        significant drop in throughput as the disk sits idle.  The patch below
        results in a 4-5x performance improvement (from 6.5MB/s to ~24-30MB/s on my
        IDE test box) when writing out files using O_SYNC.
      
      So optimise the batching code by omitting it entirely if the process which is
      doing a sync write is the same as the one which did the most recent sync
      write.  If that's true, we're unlikely to get any other processes joining the
      transaction.
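      
      A small userspace sketch of that decision, using made-up field and
      function names rather than the actual jbd code: remember which process
      performed the last sync write, and only wait for other writers to batch
      when somebody else wrote last.
      
        #include <stdio.h>
        #include <sys/types.h>
        #include <unistd.h>
      
        struct journal {
            pid_t last_sync_writer;     /* pid of the previous sync writer, 0 if none */
        };
      
        /* Returns nonzero if it is worth sleeping to let other writers join. */
        static int should_wait_for_batch(struct journal *j, pid_t me)
        {
            int wait = (j->last_sync_writer != me);
      
            j->last_sync_writer = me;   /* remember us for the next commit */
            return wait;
        }
      
        int main(void)
        {
            struct journal j = { .last_sync_writer = 0 };
            pid_t me = getpid();
      
            /* First sync write: somebody else (or nobody) wrote last, so wait. */
            printf("first commit waits:  %d\n", should_wait_for_batch(&j, me));
            /* Repeated sync writes by the same process: skip the 1-jiffy delay. */
            printf("second commit waits: %d\n", should_wait_for_batch(&j, me));
            return 0;
        }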
      
      (Has been in -mm for ages - it took me a long time to get on to performance
      testing it)
      
      Numbers, on write-cache-disabled IDE:
      
      /usr/bin/time -p synctest -n 10 -uf -t 1 -p 1 dir-name
      
      Unpatched:
      	40 seconds
      Patched:
      	35 seconds
      Batching disabled:
      	35 seconds
      
      This is the problematic single-process-doing-fsync case.  With multiple
      fsyncing processes the numbers are, AFAICT, unaltered by the patch.
      
      Aside: performance testing and instrumentation show that the transaction
      batching helps very little (testing with synctest -n 1 -uf -t 100 -p 10
      dir-name on non-writeback-caching IDE).  This is because by the time one
      process is running a synchronous commit, a bunch of other processes already
      have a transaction handle open, so they are all going to batch into the same
      transaction anyway.
      
      The batching seems to offer maybe 5-10% speedup with this workload, but I'm
      pretty sure it was more important than that when it was first developed 4-odd
      years ago...
      
      Cc: "Stephen C. Tweedie" <sct@redhat.com>
      Cc: Benjamin LaHaise <bcrl@kvack.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  8. 07 Nov 2005, 1 commit
  9. 28 Oct 2005, 1 commit
    • [PATCH] gfp_t: fs/* · 27496a8c
      Al Viro authored
       - ->releasepage() annotated (s/int/gfp_t), instances updated
       - missing gfp_t in fs/* added
       - fixed a misannotation from the original sweep, caught by the bitwise
         checks: XFS used __nocast both for gfp_t and for the flags used by the
         XFS allocator.  The latter are left as unsigned int __nocast; we might
         want to add a different type for those, but for now let's leave them
         alone.  That, BTW, is a case where __nocast use had been actively
         confusing - it had been used in the same code for two different and
         similar types, with no way to catch misuses.  Switching gfp_t to
         bitwise caught that immediately...
      
      One tricky bit is left alone to be dealt with later - mapping->flags is
      a mix of gfp_t and error indications.  Left alone for now.
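      
      A self-contained sketch of the sparse __bitwise mechanism behind these
      annotations (simplified macro definitions, a made-up flag constant and a
      stand-in releasepage(); the real definitions live in the kernel headers).
      Plain gcc compiles it silently because __bitwise expands to nothing, while
      sparse flags the second call - exactly the class of misuse the gfp_t
      conversion is meant to catch:
      
        #ifdef __CHECKER__
        #define __bitwise   __attribute__((bitwise))
        #define __force     __attribute__((force))
        #else
        #define __bitwise
        #define __force
        #endif
      
        typedef unsigned int __bitwise gfp_t;   /* page-allocation flags */
        typedef unsigned int other_flags_t;     /* some unrelated flag word */
      
        #define GFP_EXAMPLE ((__force gfp_t)0x10u)  /* made-up flag value */
      
        static int releasepage(void *page, gfp_t gfp)  /* s/int/gfp_t/ as in the patch */
        {
            (void)page;
            (void)gfp;
            return 1;
        }
      
        int main(void)
        {
            other_flags_t flags = 0x4;
      
            releasepage(0, GFP_EXAMPLE);    /* fine: a genuine gfp_t */
            releasepage(0, flags);          /* sparse warns: wrong flag type */
            return 0;
        }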
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  10. 11 Sep 2005, 1 commit
  11. 08 Sep 2005, 1 commit
    • [PATCH] Fix race in do_get_write_access() · 4407c2b6
      Jan Kara authored
      The attached patch should fix the following race:
           Proc 1                             Proc 2

           __flush_batch()
             ll_rw_block()
                                              do_get_write_access()
                                                lock_buffer()
                                                jh is only waiting for checkpoint
                                                  -> b_transaction == NULL ->
                                                     do nothing
                                                unlock_buffer()
               test_set_buffer_locked()
               test_clear_buffer_dirty()
                                                __journal_file_buffer()
                                              change the data
               submit_bh()
      
      and we have sent wrong data to disk...  We now clear the dirty buffer flag
      under the buffer lock in all cases, and hence we know that whenever a buffer
      is starting to be journaled we either finish the pending write-out before
      attaching the buffer to a transaction, or we won't write the buffer until
      the transaction is going to be committed.
      
      The test in jbd_unexpected_dirty_buffer() is now redundant, so remove it.
      Furthermore, we have to clear the buffer dirty bit under the buffer lock to
      prevent races with buffer write-out (and hence avoid returning a buffer with
      I/O still in flight).
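      
      A userspace sketch of the locking rule the fix enforces, using made-up
      structure and function names rather than the jbd code: the dirty flag is
      only tested and cleared while the buffer lock is held, so the write-out
      path and the journaling path can never both believe they own a dirty
      buffer.
      
        #include <pthread.h>
        #include <stdio.h>
      
        struct buffer {
            pthread_mutex_t lock;   /* stands in for the buffer_head lock bit */
            int dirty;              /* stands in for the dirty bit */
        };
      
        /* Write-out path: claim the buffer for I/O only if it is still dirty. */
        static int start_writeout(struct buffer *b)
        {
            int do_io;
      
            pthread_mutex_lock(&b->lock);
            do_io = b->dirty;
            b->dirty = 0;           /* cleared under the lock */
            pthread_mutex_unlock(&b->lock);
            return do_io;
        }
      
        /* Journaling path: take over the buffer's data under the same lock. */
        static void get_write_access(struct buffer *b)
        {
            pthread_mutex_lock(&b->lock);
            if (b->dirty)
                b->dirty = 0;       /* the transaction now owns the data */
            /* ...file the buffer on the transaction's list here... */
            pthread_mutex_unlock(&b->lock);
        }
      
        int main(void)
        {
            struct buffer b = { PTHREAD_MUTEX_INITIALIZER, 1 };
      
            get_write_access(&b);
            /* The pending write-out now finds the buffer clean and skips the I/O. */
            printf("writeout submits I/O: %d\n", start_writeout(&b));
            return 0;
        }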
      Signed-off-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  12. 17 Apr 2005, 2 commits
    • [PATCH] jbd dirty buffer leak fix · d13df84f
      akpm@osdl.org authored
      This fixes the lots-of-fsx-linux-instances-cause-a-slow-leak bug.
      
      It's been there since 2.6.6, caused by:
      
      ftp://ftp.kernel.org/pub/linux/kernel/people/akpm/patches/2.6/2.6.5/2.6.5-mm4/broken-out/jbd-move-locked-buffers.patch
      
      That patch moves under-writeout ordered-data buffers onto a separate journal
      list during commit.  It took out the old code which was based on a single
      list.
      
      The old code (necessarily) had logic which would restart I/O against buffers
      which had been redirtied while they were on the committing transaction's
      t_sync_datalist list.  The new code only writes buffers once, ignoring
      redirtyings by a later transaction, which is good.
      
      But over on the truncate side of things, in journal_unmap_buffer(), we're
      treating buffers on the t_locked_list as inviolable things which belong to the
      committing transaction, and we just leave them alone during concurrent
      truncate-vs-commit.
      
      The net effect is that when truncate tries to invalidate a page whose buffers
      are on t_locked_list and have been redirtied, journal_unmap_buffer() just
      leaves those buffers alone.  truncate will remove the page from its mapping
      and we end up with an anonymous clean page with dirty buffers, which is an
      illegal state for a page.  The JBD commit will not clean those buffers as they
      are removed from t_locked_list.  The VM (try_to_free_buffers) cannot reclaim
      these pages.
      
      The patch teaches journal_unmap_buffer() about buffers which are on the
      committing transaction's t_locked_list.  These buffers have been written and
      I/O has completed.  We can take them off the transaction and undirty them
      within the context of journal_invalidatepage()->journal_unmap_buffer().
      Acked-by: "Stephen C. Tweedie" <sct@redhat.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • Linux-2.6.12-rc2 · 1da177e4
      Linus Torvalds authored
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!