1. 15 Oct 2009, 1 commit
    • kill-the-bkl/reiserfs: drop the fs race watchdog from _get_block_create_0() · 27b3a5c5
      Committed by Frederic Weisbecker
      We had a watchdog in _get_block_create_0() that jumped to a fixup retry
      path in case the bkl got relaxed while calling kmap().
      This is not necessary anymore since we now have a reiserfs lock that is
      not implicitly relaxed while sleeping.
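      For reference, the dropped watchdog had roughly this shape (an
      illustrative sketch, not the verbatim diff; get_generation(),
      fs_changed() and item_moved() are existing reiserfs helpers):

      	fs_gen = get_generation(inode->i_sb);
      	p = (char *)kmap(bh_result->b_page);
      	/* kmap() may sleep, and sleeping used to relax the bkl... */
      	if (fs_changed(fs_gen, inode->i_sb) && item_moved(&tmp_ih, &path))
      		goto research;	/* ...so re-check the item and retry */

      With the new per-superblock write lock, nothing is dropped across
      kmap(), so the re-check can go.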
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Jeff Mahoney <jeffm@suse.com>
      Cc: Chris Mason <chris.mason@oracle.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Alexander Beregalov <a.beregalov@gmail.com>
      Cc: Laurent Riffard <laurent.riffard@free.fr>
      Cc: Thomas Gleixner <tglx@linutronix.de>
  2. 14 Sep 2009, 5 commits
    • kill-the-bkl/reiserfs: fix recursive reiserfs write lock in reiserfs_commit_write() · 7e942770
      Committed by Frederic Weisbecker
      reiserfs_commit_write() is always called with the write lock held.
      Thus the current calls to reiserfs_write_lock() in this function are
      acquiring the lock recursively.
      We can safely drop them.
      
      This also keeps valid the assumption, made further along this code path,
      that the lock is really released by a call to reiserfs_write_unlock().
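      Schematically, the change looks like this (an illustrative sketch, not
      the verbatim diff):

      	int reiserfs_commit_write(...)
      	{
      	-	reiserfs_write_lock(inode->i_sb);   /* caller holds it: only depth++ */
      		/* ... commit the write, possibly touching the journal ... */
      	-	reiserfs_write_unlock(inode->i_sb); /* only depth--, never released */
      	}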
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Jeff Mahoney <jeffm@suse.com>
      Cc: Chris Mason <chris.mason@oracle.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Alexander Beregalov <a.beregalov@gmail.com>
      Cc: Laurent Riffard <laurent.riffard@free.fr>
    • kill-the-bkl/reiserfs: factorize the locking in reiserfs_write_end() · d6f5b0aa
      Committed by Frederic Weisbecker
      reiserfs_write_end() is a hot path in reiserfs.
      It contains two wasteful write lock acquire/release pairs that can be
      gathered into one without changing the code logic.
      
      This patch factors them into a single protected section, reducing the
      number of contentions inside.
      
      [ Impact: reduce lock contention in a reiserfs hotpath ]
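      The shape of the change, as an illustrative sketch (not the verbatim
      diff):

      	-	reiserfs_write_lock(inode->i_sb);
      	-	/* first critical section */
      	-	reiserfs_write_unlock(inode->i_sb);
      	-	/* ... */
      	-	reiserfs_write_lock(inode->i_sb);
      	-	/* second critical section */
      	-	reiserfs_write_unlock(inode->i_sb);
      	+	reiserfs_write_lock(inode->i_sb);
      	+	/* first and second critical sections, gathered */
      	+	reiserfs_write_unlock(inode->i_sb);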
      
      Cc: Jeff Mahoney <jeffm@suse.com>
      Cc: Chris Mason <chris.mason@oracle.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Alexander Beregalov <a.beregalov@gmail.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
    • kill-the-bkl/reiserfs: lock only once on reiserfs_get_block() · 26931309
      Committed by Frederic Weisbecker
      reiserfs_get_block() is one of those sites where the write lock might
      be acquired recursively.
      
      It's a particular problem because this function is called very often.
      It's a hot spot that needs to reschedule periodically while converting
      direct items to indirect ones, because that conversion can take some time.
      
      So if we apply the usual write lock release/reacquire pattern around
      schedule() here, it may not produce the desired effect, since we may
      hold the lock at more than one depth.
      
      The solution is to use reiserfs_write_lock_once(), which won't
      acquire the lock recursively. Then the lock will be *really*
      released before schedule().
      
      Also, we only release the lock when TIF_NEED_RESCHED is set, to avoid
      creating numerous wasteful contentions.
      
      [ Impact: fix a case where the lock was held too long in reiserfs_get_block() ]
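      A minimal sketch of the resulting pattern (illustrative; the
      surrounding code is elided):

      	int depth;

      	depth = reiserfs_write_lock_once(inode->i_sb);
      	/* ... */
      	/* inside the direct -> indirect conversion */
      	if (need_resched()) {
      		/* really releases the lock if we took it at base depth */
      		reiserfs_write_unlock_once(inode->i_sb, depth);
      		schedule();
      		depth = reiserfs_write_lock_once(inode->i_sb);
      	}
      	/* ... */
      	reiserfs_write_unlock_once(inode->i_sb, depth);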
      
      Cc: Jeff Mahoney <jeffm@suse.com>
      Cc: Chris Mason <chris.mason@oracle.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Alexander Beregalov <a.beregalov@gmail.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
    • kill-the-BKL/reiserfs: lock only once in reiserfs_truncate_file · 22c963ad
      Committed by Frederic Weisbecker
      Impact: fix a deadlock
      
      reiserfs_truncate_file() can be called from multiple contexts where
      the write lock may or may not already be held.
      
      This function also acquires (possibly recursively) the write
      lock. Subsequent releases before sleeping will not actually release
      the lock, because we may be at more than one lock depth.
      
      A typical case is:
      
      reiserfs_file_release {
      	acquire_the_lock()
      	reiserfs_truncate_file()
      		reacquire_the_lock()
      		journal_begin() {
      			do_journal_begin_r() {
      				reiserfs_wait_on_write_block() {
      					/*
      					 * Not released because still one
      					 * depth owned
      					 */
      					release_lock()
      					wait_for_event()
      
      At this stage the event never happens, because the task that would
      provide it needs the write lock.
      
      We use reiserfs_write_lock_once() here to ensure that we don't acquire the
      write lock recursively.
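      Usage, as a minimal sketch (signature and body elided):

      	void reiserfs_truncate_file(struct inode *inode, ...)
      	{
      		/* takes the lock only if we don't already own it */
      		int depth = reiserfs_write_lock_once(inode->i_sb);

      		/* ... journal_begin(), truncation work, journal_end() ... */

      		/* really unlocks only if we took the lock above */
      		reiserfs_write_unlock_once(inode->i_sb, depth);
      	}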
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Alessio Igor Bogani <abogani@texware.it>
      Cc: Jeff Mahoney <jeffm@suse.com>
      Cc: Alexander Beregalov <a.beregalov@gmail.com>
      Cc: Chris Mason <chris.mason@oracle.com>
      LKML-Reference: <1239680065-25013-3-git-send-email-fweisbec@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • reiserfs: kill-the-BKL · 8ebc4232
      Committed by Frederic Weisbecker
      This patch is an attempt to remove the BKL-based locking scheme from
      reiserfs.
      
      It is somewhat inspired by an old attempt from Peter Zijlstra:
      
         http://lkml.indiana.edu/hypermail/linux/kernel/0704.2/2174.html
      
      The BKL is heavily used in this filesystem to prevent concurrent
      write accesses to the filesystem.
      
      Reiserfs makes deep use of the BKL's specific properties:
      
      - It can be acquired recursively by the same task
      - It is released on schedule() calls and reacquired when schedule() returns
      
      The two properties above shape the reiserfs write locking, so it's
      very hard to simply replace the BKL with a common mutex:
      
      - We need a lock that can be acquired recursively, unless we want to
        restructure several blocks of the code.
      - We need to identify the sites where the BKL was implicitly relaxed
        (schedule, wait, sync, etc...) so that we can in turn release and
        reacquire our new lock explicitly.
        Such implicit releases of the lock are often required to let other
        resource producers/consumers do their job, or we can suffer unexpected
        starvations or deadlocks.
      
      So the new lock that replaces the BKL here is a per-superblock mutex
      with a specific property: it can be acquired recursively by the same
      task, like the BKL.
      
      For that purpose, we add a lock owner and a lock depth field to the
      superblock info structure.
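      A minimal sketch of such a recursive wrapper (field names are
      illustrative; the real ones live in struct reiserfs_sb_info):

      	void reiserfs_write_lock(struct super_block *s)
      	{
      		struct reiserfs_sb_info *sb_i = REISERFS_SB(s);

      		if (sb_i->lock_owner != current) {
      			mutex_lock(&sb_i->lock);
      			sb_i->lock_owner = current;
      		}
      		/* only the lock owner touches the depth: no race here */
      		sb_i->lock_depth++;
      	}

      	void reiserfs_write_unlock(struct super_block *s)
      	{
      		struct reiserfs_sb_info *sb_i = REISERFS_SB(s);

      		/* the depth is -1 when the lock is free */
      		if (--sb_i->lock_depth == -1) {
      			sb_i->lock_owner = NULL;
      			mutex_unlock(&sb_i->lock);
      		}
      	}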
      
      The first axis of this patch is to turn the reiserfs_write_(un)lock()
      functions into wrappers that manage this mutex, as sketched above. Some
      explicit calls to lock_kernel() have also been converted to
      reiserfs_write_lock() helpers.
      
      The second axis is to find the important blocking sites (schedule(),
      wait_on_buffer(), sync_dirty_buffer(), etc...) and then apply an
      explicit release of the write lock at these locations before blocking.
      Then we can safely wait for those who can give us resources or those
      who need some. Typically this is a fight between the current writer,
      the reiserfs workqueue (aka the async committer) and the pdflush
      threads.
      
      The third axis is a consequence of the second. The write lock usually
      sits at the top of a lock dependency chain which can include the
      journal lock, the flush lock or the commit lock. So it's dangerous to
      release and then try to reacquire the write lock while we still hold
      other locks.
      
      This is fine with the BKL:
      
            T1                       T2
      
      lock_kernel()
          mutex_lock(A)
          unlock_kernel()
          // do something
                                  lock_kernel()
                                      mutex_lock(A) -> already locked by T1
                                      schedule() (and then unlock_kernel())
          lock_kernel()
          mutex_unlock(A)
          ....
      
      This is not fine with a mutex:
      
            T1                       T2
      
      mutex_lock(write)
          mutex_lock(A)
          mutex_unlock(write)
          // do something
                                 mutex_lock(write)
                                    mutex_lock(A) -> already locked by T1
                                    schedule()
      
          mutex_lock(write) -> already locked by T2
          deadlock
      
      The solution in this patch is to provide a helper which releases the
      write lock and sleeps a bit if we can't lock a mutex that depends on
      it. It's another simulation of the BKL behaviour.
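      A sketch of such a helper (the reiserfs_mutex_lock_safe name is
      borrowed from later kernel sources; treat the body as illustrative):

      	static inline void reiserfs_mutex_lock_safe(struct mutex *m,
      						    struct super_block *s)
      	{
      		/*
      		 * Drop the write lock first so that the owner of m can
      		 * make progress, then take both locks in a safe order.
      		 */
      		reiserfs_write_unlock(s);
      		mutex_lock(m);
      		reiserfs_write_lock(s);
      	}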
      
      The last axis is to locate the fs callbacks that are called with the
      BKL held, according to Documentation/filesystems/Locking.
      
      Those are:
      
      - reiserfs_remount
      - reiserfs_fill_super
      - reiserfs_put_super
      
      Reiserfs didn't need to take the lock explicitly in these callbacks
      because of the context they are called from. But now we must take care
      of that with the new locking.
      
      After this patch, reiserfs suffers from a slight performance regression (for now).
      On UP, a high volume write with dd reports an average of 27 MB/s instead
      of 30 MB/s without the patch applied.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Reviewed-by: Ingo Molnar <mingo@elte.hu>
      Cc: Jeff Mahoney <jeffm@suse.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Bron Gondwana <brong@fastmail.fm>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      LKML-Reference: <1239070789-13354-1-git-send-email-fweisbec@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  3. 24 Jun 2009, 1 commit
  4. 31 Mar 2009, 9 commits
  5. 26 Mar 2009, 1 commit
  6. 06 Jan 2009, 1 commit
  7. 05 Jan 2009, 1 commit
    • fs: symlink write_begin allocation context fix · 54566b2c
      Committed by Nick Piggin
      With the write_begin/write_end aops, page_symlink was broken because it
      could no longer pass a GFP_NOFS type mask into the point where the
      allocations happened.  They are done in write_begin, which would always
      assume that the filesystem can be entered from reclaim.  This bug could
      cause filesystem deadlocks.
      
      The funny thing with having a gfp_t mask there is that it doesn't really
      allow the caller to arbitrarily tinker with the context in which it can be
      called.  It couldn't ever be GFP_ATOMIC, for example, because it needs to
      take the page lock.  The only thing any callers care about is __GFP_FS
      anyway, so turn that into a single flag.
      
      Add a new flag for write_begin, AOP_FLAG_NOFS.  Filesystems can now act on
      this flag in their write_begin function.  Change __grab_cache_page to
      accept a nofs argument as well, to honour that flag (while we're there,
      change the name to grab_cache_page_write_begin which is more instructive
      and does away with random leading underscores).
      
      This is really a more flexible way to go in the end anyway -- if a
      filesystem happens to want any extra allocations aside from the pagecache
      ones in its write_begin function, it may now use GFP_KERNEL (rather than
      GFP_NOFS) for common case allocations (e.g. ocfs2_alloc_write_ctxt, for a
      random example).
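      A sketch of how the flag plumbs into the allocation (illustrative; the
      page-cache insertion and error handling are elided):

      	struct page *grab_cache_page_write_begin(struct address_space *mapping,
      						 pgoff_t index, unsigned flags)
      	{
      		gfp_t gfp_notmask = 0;
      		struct page *page;

      		/* the caller told us it cannot be re-entered from reclaim */
      		if (flags & AOP_FLAG_NOFS)
      			gfp_notmask = __GFP_FS;

      		page = find_lock_page(mapping, index);
      		if (page)
      			return page;

      		page = __page_cache_alloc(mapping_gfp_mask(mapping) & ~gfp_notmask);
      		/* ... add the page to the page cache, locked ... */
      		return page;
      	}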
      
      [kosaki.motohiro@jp.fujitsu.com: fix ubifs]
      [kosaki.motohiro@jp.fujitsu.com: fix fuse]
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: <stable@kernel.org>		[2.6.28.x]
      Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      [ Cleaned up the calling convention: just pass in the AOP flags
        untouched to the grab_cache_page_write_begin() function.  That
        just simplifies everybody, and may even allow future expansion of the
        logic.   - Linus ]
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  8. 01 Jan 2009, 1 commit
  9. 23 Oct 2008, 1 commit
  10. 05 Aug 2008, 1 commit
  11. 09 Jul 2008, 1 commit
  12. 08 Feb 2008, 1 commit
  13. 06 Feb 2008, 1 commit
    • Pagecache zeroing: zero_user_segment, zero_user_segments and zero_user · eebd2aa3
      Committed by Christoph Lameter
      Simplify page cache zeroing of segments of pages through three
      functions:
      
      zero_user_segments(page, start1, end1, start2, end2)
      
              Zeros two segments of the page. It takes the positions where
              the zeroing starts and ends, which avoids length calculations
              and makes the code clearer.
      
      zero_user_segment(page, start, end)
      
              Same for a single segment.
      
      zero_user(page, start, length)
      
              Length variant for the case where we know the length.
      
      We remove the zero_user_page macro. Issues:
      
      1. It's a macro. Inline functions are preferable.
      
      2. The KM_USER0 macro is only defined for HIGHMEM.
      
         Having to treat this special case everywhere makes the
         code needlessly complex. The parameter for zeroing is always
         KM_USER0 except in one single case that we open code.
      
      Avoiding KM_USER0 means a lot of code no longer has to deal with
      the special casing for HIGHMEM. Dealing with kmap is only necessary
      for HIGHMEM configurations, and in those configurations we use
      KM_USER0 like we do for a series of other functions defined in
      highmem.h.
      
      Since KM_USER0 depends on HIGHMEM, the existing zero_user_page
      function could not be a macro. The zero_user_* functions introduced
      here can be inline because that constant is not used when these
      functions are called.
      
      Also extract the flushing of the caches to be outside of the kmap.
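      The core of the new helpers looks roughly like this (a sketch of the
      HIGHMEM variant; bounds checks elided):

      	static inline void zero_user_segments(struct page *page,
      		unsigned start1, unsigned end1,
      		unsigned start2, unsigned end2)
      	{
      		void *kaddr = kmap_atomic(page, KM_USER0);

      		if (end1 > start1)
      			memset(kaddr + start1, 0, end1 - start1);
      		if (end2 > start2)
      			memset(kaddr + start2, 0, end2 - start2);
      		kunmap_atomic(kaddr, KM_USER0);
      		/* cache flushing moved outside of the kmap */
      		flush_dcache_page(page);
      	}

      	static inline void zero_user_segment(struct page *page,
      		unsigned start, unsigned end)
      	{
      		zero_user_segments(page, start, end, 0, 0);
      	}

      	static inline void zero_user(struct page *page,
      		unsigned start, unsigned size)
      	{
      		zero_user_segments(page, start, start + size, 0, 0);
      	}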
      
      [akpm@linux-foundation.org: fix nfs and ntfs build]
      [akpm@linux-foundation.org: fix ntfs build some more]
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Cc: Steven French <sfrench@us.ibm.com>
      Cc: Michael Halcrow <mhalcrow@us.ibm.com>
      Cc: <linux-ext4@vger.kernel.org>
      Cc: Steven Whitehouse <swhiteho@redhat.com>
      Cc: Trond Myklebust <trond.myklebust@fys.uio.no>
      Cc: "J. Bruce Fields" <bfields@fieldses.org>
      Cc: Anton Altaparmakov <aia21@cantab.net>
      Cc: Mark Fasheh <mark.fasheh@oracle.com>
      Cc: David Chinner <dgc@sgi.com>
      Cc: Michael Halcrow <mhalcrow@us.ibm.com>
      Cc: Steven French <sfrench@us.ibm.com>
      Cc: Steven Whitehouse <swhiteho@redhat.com>
      Cc: Trond Myklebust <trond.myklebust@fys.uio.no>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  14. 22 Oct 2007, 1 commit
  15. 20 Oct 2007, 1 commit
  16. 19 Oct 2007, 1 commit
  17. 17 Oct 2007, 3 commits
  18. 18 Jul 2007, 1 commit
  19. 10 May 2007, 1 commit
  20. 23 Jan 2007, 1 commit
    • [PATCH] reiserfs: avoid tail packing if an inode was ever mmapped · de14569f
      Committed by Vladimir Saveliev
      This patch fixes a confusion reiserfs has had for a long time.
      
      On the file release operation, reiserfs used to try to pack file data
      stored in the last incomplete page of some files into metadata blocks.
      After packing, the page got cleared with clear_page_dirty. This did not
      take into account that the page may be mmapped into another process's
      address space. The recent replacement for clear_page_dirty,
      cancel_dirty_page, caught the confusion with a sanity check that the
      page must not be mapped.
      
      The patch fixes the confusion by making reiserfs avoid tail packing if
      an inode was ever mmapped. reiserfs_mmap and reiserfs_file_release are
      serialized with a mutex in the reiserfs-specific inode. reiserfs_mmap
      locks the mutex and sets a bit in the reiserfs-specific inode flags.
      reiserfs_file_release checks the bit with the mutex held; if the bit is
      set, tail packing is avoided. This eliminates the possibility that an
      mmapped page gets cancel_dirty_page-ed.
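      A sketch of the pattern (the i_mmap_lock and i_ever_mapped names are
      illustrative, not necessarily those used by the patch):

      	static int reiserfs_file_mmap(struct file *file, struct vm_area_struct *vma)
      	{
      		struct inode *inode = file->f_path.dentry->d_inode;

      		mutex_lock(&REISERFS_I(inode)->i_mmap_lock);
      		REISERFS_I(inode)->i_flags |= i_ever_mapped;
      		mutex_unlock(&REISERFS_I(inode)->i_mmap_lock);

      		return generic_file_mmap(file, vma);
      	}

      	/* and in reiserfs_file_release(), under the same mutex: */
      	if (REISERFS_I(inode)->i_flags & i_ever_mapped)
      		goto out;	/* don't pack the tail */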
      Signed-off-by: Vladimir Saveliev <vs@namesys.com>
      Cc: Jeff Mahoney <jeffm@suse.com>
      Cc: Chris Mason <mason@suse.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  21. 09 Dec 2006, 1 commit
  22. 08 Dec 2006, 2 commits
    • [PATCH] reiser: replace kmalloc+memset with kzalloc · 01afb213
      Committed by Yan Burman
      Replace kmalloc+memset with kzalloc.
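      The transformation, shown on a hypothetical call site:

      	-	ptr = kmalloc(sizeof(*ptr), GFP_KERNEL);
      	-	if (ptr)
      	-		memset(ptr, 0, sizeof(*ptr));
      	+	ptr = kzalloc(sizeof(*ptr), GFP_KERNEL);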
      Signed-off-by: Yan Burman <burman.yan@gmail.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] fix reiserfs bad path release panic · 87b4126f
      Committed by Suzuki K P
      A member of our test team hit a reiserfs_panic while running fsstress
      tests on 2.6.19-rc1. The message looks like:
      
        REISERFS: panic(device Null superblock):
        reiserfs[5676]: assertion !(p->path_length != 1 ) failed at
        fs/reiserfs/stree.c:397:reiserfs_check_path: path not properly relsed.
      
      The backtrace looked like:
      
        kernel BUG in reiserfs_panic at fs/reiserfs/prints.c:361!
      	.reiserfs_check_path+0x58/0x74
      	.reiserfs_get_block+0x1444/0x1508
      	.__block_prepare_write+0x1c8/0x558
      	.block_prepare_write+0x34/0x64
      	.reiserfs_prepare_write+0x118/0x1d0
      	.generic_file_buffered_write+0x314/0x82c
      	.__generic_file_aio_write_nolock+0x350/0x3e0
      	.__generic_file_write_nolock+0x78/0xb0
      	.generic_file_write+0x60/0xf0
      	.reiserfs_file_write+0x198/0x2038
      	.vfs_write+0xd0/0x1b4
      	.sys_write+0x4c/0x8c
      	syscall_exit+0x0/0x4
      
      Upon debugging, I found that restart_transaction() was not releasing
      the path if th->t_refcount was > 1.
      
      /*static*/
      int restart_transaction(struct reiserfs_transaction_handle *th,
                                 			struct inode *inode, struct path *path)
      {
      	[...]
      
               /* we cannot restart while nested */
               if (th->t_refcount > 1) { <<- Path is not released in this case!
                       return 0;
               }
      
               pathrelse(path); <<- Path released here.
      	[...]
      
      This could happen in a situation like this:
      
      In reiserfs/inode.c, reiserfs_get_block():
      
            if (repeat == NO_DISK_SPACE || repeat == QUOTA_EXCEEDED) {
                /* restart the transaction to give the journal a chance to free
                 ** some blocks.  releases the path, so we have to go back to
                 ** research if we succeed on the second try
                 */
                SB_JOURNAL(inode->i_sb)->j_next_async_flush = 1;
      
              -->>  retval = restart_transaction(th, inode, &path); <<--
      
      We are supposed to release the path whether we succeed or fail. But
      if th->t_refcount is > 1, the path is still held. And,
      
                if (retval)
                         goto failure;
                repeat =
                    _allocate_block(th, block, inode,
                                   &allocated_block_nr, NULL, create);
      
      If the above _allocate_block() fails with NO_DISK_SPACE or
      QUOTA_EXCEEDED, we would have a path which is never released:
      
               if (repeat != NO_DISK_SPACE && repeat != QUOTA_EXCEEDED) {
                         goto research;
               }
               if (repeat == QUOTA_EXCEEDED)
                         retval = -EDQUOT;
               else
                         retval = -ENOSPC;
               goto failure;
      	[...]
      
             failure:
      	[...]
               reiserfs_check_path(&path); << Panics here !
      
      Attached here is a patch which fixes the issue: make
      restart_transaction() in reiserfs/inode.c release the path in all
      cases.
      
      restart_transaction() doesn't release the path when the journal
      handle has a refcount > 1, which triggers a reiserfs_panic() if we
      encounter an -ENOSPC / -EDQUOT in reiserfs_get_block().
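      The essence of the fix, as a sketch: release the path before the
      nested-transaction early return.

      	static int restart_transaction(struct reiserfs_transaction_handle *th,
      				       struct inode *inode, struct path *path)
      	{
      		[...]
      		/* release the path up front, so it is dropped in every case */
      		pathrelse(path);

      		/* we cannot restart while nested */
      		if (th->t_refcount > 1)
      			return 0;
      		[...]
      	}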
      Signed-off-by: Suzuki K P <suzuki@in.ibm.com>
      Cc: "Vladimir V. Saveliev" <vs@namesys.com>
      Cc: <reiserfs-dev@namesys.com>
      Cc: Jeff Mahoney <jeffm@suse.com>
      Acked-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  23. 04 Oct 2006, 1 commit
  24. 30 Sep 2006, 2 commits