1. 24 Feb 2011 (1 commit)
    • mm: prevent concurrent unmap_mapping_range() on the same inode · 2aa15890
      Authored by Miklos Szeredi
      Michael Leun reported that running parallel opens on a fuse filesystem
      can trigger a "kernel BUG at mm/truncate.c:475"
      
      Gurudas Pai reported the same bug on NFS.
      
      The reason is that unmap_mapping_range() is not prepared for more than
      one concurrent invocation per inode.  For example:
      
        thread1: going through a big range, stops in the middle of a vma and
           stores the restart address in vm_truncate_count.
      
        thread2: comes in with a small (e.g. single page) unmap request on
           the same vma, somewhere before restart_address, finds that the
           vma was already unmapped up to the restart address and happily
           returns without doing anything.
      
      Another scenario would be two big unmap requests, both having to
      restart the unmapping and each one setting vm_truncate_count to its
      own value.  This could go on forever without any of them being able to
      finish.
      
      Truncate and hole punching already serialize with i_mutex.  Other
      callers of unmap_mapping_range() do not, and it's difficult to get
      i_mutex protection for all callers.  In particular ->d_revalidate(),
      which calls invalidate_inode_pages2_range() in fuse, may be called
      with or without i_mutex.
      
      This patch adds a new mutex to 'struct address_space' to prevent
      multiple concurrent invocations of unmap_mapping_range() on the same
      mapping.
      
      [ We'll hopefully get rid of all this with the upcoming mm
        preemptibility series by Peter Zijlstra, the "mm: Remove i_mmap_mutex
        lockbreak" patch in particular.  But that is for 2.6.39 ]
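      
      A minimal sketch of the idea, assuming the new field is called
      unmap_mutex (the field name and exact placement are illustrative; the
      real patch may differ in detail):
      
        /* sketch: serialize unmap_mapping_range() callers on one mapping */
        struct address_space {
                /* ... existing fields: i_mmap tree, locks, ... */
                struct mutex unmap_mutex;  /* assumed name for the new mutex */
        };
      
        void unmap_mapping_range(struct address_space *mapping,
                                 loff_t holebegin, loff_t holelen, int even_cows)
        {
                mutex_lock(&mapping->unmap_mutex);   /* one unmapper per inode */
                /* ... existing code: walk mapping->i_mmap, unmap the range,
                 *     restarting via vm_truncate_count as needed ... */
                mutex_unlock(&mapping->unmap_mutex);
        }
      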
      Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
      Reported-by: Michael Leun <lkml20101129@newton.leun.net>
      Reported-by: Gurudas Pai <gurudas.pai@oracle.com>
      Tested-by: Gurudas Pai <gurudas.pai@oracle.com>
      Acked-by: Hugh Dickins <hughd@google.com>
      Cc: stable@kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  2. 10 Jan 2011 (2 commits)
  3. 23 Oct 2010 (3 commits)
    • nilfs2: get rid of GCDAT inode · c1c1d709
      Authored by Ryusuke Konishi
      This applies the previously prepared rollback and redirect functions for
      metadata files to the DAT file, and eliminates the GCDAT inode.
      Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
    • nilfs2: add routines to redirect access to buffers of DAT file · b1f6a4f2
      Authored by Ryusuke Konishi
      During garbage collection (GC), the DAT file, which converts virtual
      block numbers to real block numbers, may return a disk block number that
      is not yet written to the device.
      
      To avoid access to unwritten blocks, the current implementation stores
      changes in the GCDAT caches during GC and atomically commits the changes
      into the DAT file after they are written to the device.
      
      This patch, instead, adds a function that makes a copy of a specified
      buffer and stores it in nilfs_shadow_map, and a function to get the
      backup copy as needed (nilfs_mdt_freeze_buffer and
      nilfs_mdt_get_frozen_buffer, respectively).
      
      Before the DAT changes a block number in an entry block, it makes a copy
      and redirects access to the buffer so that the address conversion
      function (i.e. nilfs_dat_translate) refers to the old address saved in
      the copy.
      
      This patch provides the prerequisites for such redirection.
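      
      A rough usage sketch of that redirection, based only on the description
      above (the helper signatures and the surrounding function are assumed
      for illustration):
      
        /* sketch: keep the old DAT entry visible to readers during GC */
        static int example_dat_entry_update(struct inode *dat,
                                            struct buffer_head *entry_bh)
        {
                struct buffer_head *frozen_bh;
                int err;
      
                /* snapshot the entry block before changing its block number */
                err = nilfs_mdt_freeze_buffer(dat, entry_bh);
                if (err)
                        return err;
      
                /* ... update the entry in entry_bh to point at the new block ... */
      
                /* address conversion (nilfs_dat_translate) can consult the
                 * frozen copy and keep returning the old, written block */
                frozen_bh = nilfs_mdt_get_frozen_buffer(dat, entry_bh);
                if (frozen_bh)
                        brelse(frozen_bh);
                return 0;
        }
      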
      Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
    • nilfs2: add routines to roll back state of DAT file · ebdfed4d
      Authored by Ryusuke Konishi
      This adds an optional function to metadata files which makes a copy of
      the bmap, page caches, and b-tree node cache, and rolls back to the copy
      as needed.
      
      This enhancement is intended to displace the gcdat inode, which provides
      a similar function in a different way.
      
      In this patch, the nilfs_shadow_map structure is added to store a copy
      of the foregoing states.  nilfs_mdt_setup_shadow_map() relates this
      structure to a metadata file, and nilfs_mdt_save_to_shadow_map() and
      nilfs_mdt_restore_from_shadow_map() provide the save and restore
      functions respectively.  Finally, nilfs_mdt_clear_shadow_map() clears
      the state of nilfs_shadow_map.
      
      The copies of the b-tree node cache and page cache are made by
      duplicating only dirty pages into the corresponding caches in
      nilfs_shadow_map.  Restoration is done by clearing dirty pages from the
      original caches and copying the dirty pages back from nilfs_shadow_map.
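      
      A simplified sketch of the save/restore flow described above (the
      structure members and the calling sequence are assumptions for
      illustration, not the exact nilfs2 definitions):
      
        /* sketch: shadow state of one metadata file */
        struct nilfs_shadow_map {
                struct nilfs_bmap_store bmap_store;     /* saved bmap state (assumed member) */
                struct address_space    frozen_data;    /* copies of dirty data pages */
                struct address_space    frozen_btnodes; /* copies of dirty b-tree node pages */
        };
      
        /* usage as described in the text */
        nilfs_mdt_setup_shadow_map(dat, &shadow);       /* tie the shadow to the DAT file */
        nilfs_mdt_save_to_shadow_map(dat);              /* duplicate dirty pages and bmap */
        /* ... garbage collection rewrites the DAT ... */
        if (gc_failed)
                nilfs_mdt_restore_from_shadow_map(dat); /* clear dirty pages, copy back saved ones */
        nilfs_mdt_clear_shadow_map(dat);                /* drop the saved state */
      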
      Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
  4. 23 Jul 2010 (1 commit)
  5. 30 Mar 2010 (1 commit)
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking... · 5a0e3ad6
      Authored by Tejun Heo
      include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h
      
      percpu.h is included by sched.h and module.h and thus ends up being
      included when building most .c files.  percpu.h includes slab.h which
      in turn includes gfp.h making everything defined by the two files
      universally available and complicating inclusion dependencies.
      
      The percpu.h -> slab.h dependency is about to be removed.  Prepare for
      this change by updating users of gfp and slab facilities to include
      those headers directly instead of assuming their availability.  As this
      conversion needs to touch a large number of source files, the following
      script is used as the basis of conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py
      
      The script does the following (a small example of the kind of include
      edit it produces is sketched after this list):
      
      * Scan files for gfp and slab usages and update includes such that
        only the necessary includes are there, i.e. gfp.h if only gfp is
        used, and slab.h if slab is used.
      
      * When the script inserts a new include, it looks at the include
        blocks and tries to put the new include such that its order conforms
        to its surroundings.  It is put in the include block which contains
        core kernel includes, in the same order that the rest are ordered -
        alphabetical, Christmas tree, reverse Christmas tree, or at the end
        if there doesn't seem to be any matching order.
      
      * If the script can't find a place to put a new include (mostly
        because the file doesn't have a fitting include block), it prints out
        an error message indicating which .h file needs to be added to the
        file.
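      
      For illustration, the kind of edit the script makes to a typical .c
      file might look like this (hypothetical file; only the include block is
      shown):
      
        /* before: kmalloc()/kfree() worked only because slab.h arrived
         * implicitly through percpu.h */
        #include <linux/kernel.h>
        #include <linux/module.h>
      
        /* after: the slab user includes slab.h explicitly, placed in the
         * core-kernel include block in the surrounding order */
        #include <linux/kernel.h>
        #include <linux/module.h>
        #include <linux/slab.h>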
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was manually checked.  Some didn't need the inclusion,
         some needed manual addition, and for others adding it to an
         implementation .h or embedding .c file was more appropriate.  This
         step added inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed.
         e.g. lib/decompress_*.c used malloc/free() wrappers around slab
         APIs requiring slab.h to be added manually.
      
      5. The script was run on all .h files but without automatically
         editing them as sprinkling gfp.h and slab.h inclusions around .h
         files could easily lead to inclusion dependency hell.  Most gfp.h
         inclusion directives were ignored as stuff from gfp.h was usually
         widely available and often used in preprocessor macros.  Each
         slab.h inclusion directive was examined and added manually as
         necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build tests were done on the following configurations and failures
         were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
         distributed build env didn't work with gcov compiles) and a few
         more options had to be turned off depending on archs to make things
         build (like ipr on powerpc/64 which failed due to missing writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. percpu.h modifications were reverted so that it could be applied as
         a separate patch and serve as a bisection point.
      
      Given the fact that I had only a couple of failures from the tests in
      step 7, I'm fairly confident about the coverage of this conversion
      patch.  If there is a breakage, it's likely to be something in one of
      the arch headers which should be discoverable easily on most builds of
      the specific arch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
  6. 14 Mar 2010 (1 commit)
  7. 10 May 2009 (1 commit)
  8. 07 Apr 2009 (2 commits)
    • nilfs2: replace BUG_ON and BUG calls triggerable from ioctl · 1f5abe7e
      Authored by Ryusuke Konishi
      Pekka Enberg advised me:
      > It would be nice if BUG(), BUG_ON(), and panic() calls would be
      > converted to proper error handling using WARN_ON() calls. The BUG()
      > call in nilfs_cpfile_delete_checkpoints(), for example, looks to be
      > triggerable from user-space via the ioctl() system call.
      
      This patch follows that advice and keeps such calls to a minimum.
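      
      As an illustration of the kind of conversion (hypothetical example, not
      the exact nilfs2 code):
      
        /* before: a bad argument reachable from ioctl() brings the kernel down */
        BUG_ON(start > end);
      
        /* after: warn for debugging and fail the operation instead */
        if (WARN_ON(start > end))
                return -EINVAL;
      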
      Acked-by: Pekka Enberg <penberg@cs.helsinki.fi>
      Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • nilfs2: buffer and page operations · 0bd49f94
      Authored by Ryusuke Konishi
      This adds common routines for buffer/page operations used in B-tree
      node caches, metadata files, and the segment constructor (log writer).
      
      NILFS uses copy functions for buffers and pages for the following
      reasons:
      
       1) Relocation required for COW
          Since NILFS changes the addresses of on-disk blocks, buffers that
          are not addressed by a file offset need to be moved within the page
          cache.  If the buffer size is smaller than the page size, this
          involves partial copying of pages.
      
       2) Freezing mmapped pages
          NILFS calculates checksums for each log to ensure its validity.
          If page data changes after the checksum calculation, this validity
          check will not work correctly.  To avoid this failure for mmapped
          pages, NILFS freezes their data by copying.
      
       3) Copy-on-write for DAT pages
          NILFS makes clones of DAT page caches in a copy-on-write manner
          during GC processes, and this ensures atomicity and consistency
          of the DAT in the transient state.
      
      In addition, NILFS uses two functions of its own,
      nilfs_mark_buffer_dirty() and nilfs_clear_page_dirty(), instead of the
      generic equivalents (a minimal sketch of the former follows the list
      below).
      
      * nilfs_mark_buffer_dirty() was required to avoid NULL pointer
        dereference faults:
      
        Since the page cache of B-tree node pages and the data page cache of
        pseudo inodes do not have a valid mapping->host, calling
        mark_buffer_dirty() for their buffers causes a fault; it calls
        __mark_inode_dirty(NULL) through __set_page_dirty().
      
      * nilfs_clear_page_dirty() was needed in two cases:
      
       1) For B-tree node pages and data pages of the dat/gcdat, NILFS2 clears
          page dirty flags when it copies back pages from the cloned cache
          (gcdat->{i_mapping,i_btnode_cache}) to its original cache
          (dat->{i_mapping,i_btnode_cache}).
      
       2) Some B-tree operations like insertion or deletion may dispose of
          buffers in a dirty state, and this needs to cancel the dirty state
          of their pages.  clear_page_dirty_for_io() caused faults because it
          does not clear the dirty tag on the page cache.
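      
      A minimal sketch of the first helper, assuming it only needs to set the
      dirty bits without reaching __mark_inode_dirty() (the real nilfs2 code
      may differ):
      
        /* sketch: mark a buffer dirty without mark_buffer_dirty(), which
         * would end up dereferencing a NULL mapping->host for these caches */
        void nilfs_mark_buffer_dirty(struct buffer_head *bh)
        {
                if (!buffer_dirty(bh) && !test_set_buffer_dirty(bh))
                        __set_page_dirty_nobuffers(bh->b_page);
        }
      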
      Signed-off-by: Seiji Kihara <kihara.seiji@lab.ntt.co.jp>
      Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>