1. 10 January 2011 (2 commits)
  2. 07 January 2011 (1 commit)
  3. 23 October 2010 (10 commits)
  4. 10 August 2010 (4 commits)
    • convert nilfs2 to ->evict_inode() · 6fd1e5c9
      Committed by Al Viro
      [folded build fix from sfr]
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    • remove inode_setattr · 1025774c
      Committed by Christoph Hellwig
      Replace inode_setattr with opencoded variants of it in all callers.  This
      moves the remaining call to vmtruncate into the filesystem methods where it
      can be replaced with the proper truncate sequence.
      
      In a few cases it was obvious that we would never end up calling vmtruncate
      so it was left out in the opencoded variant:
      
       spufs: explicitly checks for ATTR_SIZE earlier
       btrfs,hugetlbfs,logfs,dlmfs: explicitly clears ATTR_SIZE earlier
       ufs: contains an opencoded simple_setattr + truncate that sets the filesize just above
      
      In addition to that, ncpfs called inode_setattr with handcrafted iattrs,
      which allowed the opencoded variant to be trimmed down.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
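      For illustration, a minimal sketch of the opencoded ->setattr pattern this
      commit describes, assuming the generic VFS helpers of that era
      (inode_change_ok(), vmtruncate(), setattr_copy()); example_setattr and the
      surrounding filesystem are hypothetical:

       #include <linux/fs.h>
       #include <linux/mm.h>   /* vmtruncate() */

       /* Opencoded replacement for the old inode_setattr() call. */
       static int example_setattr(struct dentry *dentry, struct iattr *attr)
       {
               struct inode *inode = dentry->d_inode;
               int err;

               err = inode_change_ok(inode, attr);
               if (err)
                       return err;

               /* The truncate formerly hidden inside inode_setattr(), now done
                * explicitly so it can become the proper truncate sequence. */
               if ((attr->ia_valid & ATTR_SIZE) &&
                   attr->ia_size != i_size_read(inode)) {
                       err = vmtruncate(inode, attr->ia_size);
                       if (err)
                               return err;
               }

               setattr_copy(inode, attr);
               mark_inode_dirty(inode);
               return 0;
       }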
    • get rid of block_write_begin_newtrunc · 155130a4
      Committed by Christoph Hellwig
      Move the call to vmtruncate, used to get rid of excess blocks, into the
      callers in preparation for the new truncate sequence, and rename the
      non-truncating version to block_write_begin.
      
      While we're at it, also remove several unused arguments to block_write_begin.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
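      As a hedged sketch (not taken from the patch itself), this is roughly what
      a caller looks like after the conversion: block_write_begin() no longer
      truncates, so the ->write_begin method trims blocks instantiated past
      i_size on failure itself.  example_write_begin and example_get_block are
      hypothetical:

       #include <linux/fs.h>
       #include <linux/mm.h>           /* vmtruncate() */
       #include <linux/buffer_head.h>  /* block_write_begin() */

       static int example_get_block(struct inode *inode, sector_t iblock,
                                    struct buffer_head *bh_result, int create);

       static int example_write_begin(struct file *file, struct address_space *mapping,
                                      loff_t pos, unsigned len, unsigned flags,
                                      struct page **pagep, void **fsdata)
       {
               int ret;

               ret = block_write_begin(mapping, pos, len, flags, pagep,
                                       example_get_block);
               if (unlikely(ret)) {
                       loff_t isize = mapping->host->i_size;

                       /* drop any blocks instantiated beyond i_size */
                       if (pos + len > isize)
                               vmtruncate(mapping->host, isize);
               }
               return ret;
       }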
    • sort out blockdev_direct_IO variants · eafdc7d1
      Committed by Christoph Hellwig
      Move the call to vmtruncate, used to get rid of excess blocks, into the
      callers in preparation for the new truncate calling sequence.  This was
      only done for DIO_LOCKING filesystems, so the __blockdev_direct_IO_newtrunc
      variant was not needed anyway.  Get rid of blockdev_direct_IO_no_locking and
      its _newtrunc variant while at it, as just opencoding the two additional
      parameters is shorter than the name suffix.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
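      A hedged sketch of the resulting caller pattern, close to what DIO_LOCKING
      filesystems such as nilfs2 ended up with; example_direct_IO and
      example_get_block are hypothetical:

       #include <linux/fs.h>
       #include <linux/mm.h>   /* vmtruncate() */
       #include <linux/aio.h>  /* struct kiocb */
       #include <linux/uio.h>  /* iov_length() */

       static int example_get_block(struct inode *inode, sector_t iblock,
                                    struct buffer_head *bh_result, int create);

       static ssize_t example_direct_IO(int rw, struct kiocb *iocb,
                                        const struct iovec *iov, loff_t offset,
                                        unsigned long nr_segs)
       {
               struct inode *inode = iocb->ki_filp->f_mapping->host;
               ssize_t ret;

               ret = blockdev_direct_IO(rw, iocb, inode, inode->i_sb->s_bdev,
                                        iov, offset, nr_segs,
                                        example_get_block, NULL);

               /* A failed extending write may have instantiated blocks past
                * i_size; the caller now trims them off itself. */
               if (unlikely((rw & WRITE) && ret < 0)) {
                       loff_t isize = i_size_read(inode);
                       loff_t end = offset + iov_length(iov, nr_segs);

                       if (end > isize)
                               vmtruncate(inode, isize);
               }
               return ret;
       }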
  5. 22 May 2010 (1 commit)
  6. 10 May 2010 (1 commit)
  7. 30 March 2010 (1 commit)
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h · 5a0e3ad6
      Committed by Tejun Heo
      
      percpu.h is included by sched.h and module.h and thus ends up being
      included when building most .c files.  percpu.h includes slab.h which
      in turn includes gfp.h making everything defined by the two files
      universally available and complicating inclusion dependencies.
      
      percpu.h -> slab.h dependency is about to be removed.  Prepare for
      this change by updating users of gfp and slab facilities include those
      headers directly instead of assuming availability.  As this conversion
      needs to touch large number of source files, the following script is
      used as the basis of conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py
      
      The script does the following.
      
      * Scan files for gfp and slab usages and update includes such that
        only the necessary includes are there.  I.e., if only gfp is used,
        gfp.h; if slab is used, slab.h.
      
      * When the script inserts a new include, it looks at the include
        blocks and tries to place the new include so that its order conforms
        to its surroundings.  It's put in the include block which contains
        core kernel includes, in the same order as the rest are ordered:
        alphabetical, Christmas tree, reverse Christmas tree, or at the end
        if there doesn't seem to be any matching order.
      
      * If the script can't find a place to put a new include (mostly
        because the file doesn't have a fitting include block), it prints
        out an error message indicating which .h file needs to be added to
        the file.
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was manually checked.  Some didn't need the inclusion,
         some needed manual addition, and for others adding it to an
         implementation .h or embedding .c file was more appropriate.  This
         step added inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed.
         e.g. lib/decompress_*.c used malloc/free() wrappers around slab
         APIs requiring slab.h to be added manually.
      
      5. The script was run on all .h files, but without automatically
         editing them, as sprinkling gfp.h and slab.h inclusions around .h
         files could easily lead to inclusion dependency hell.  Most gfp.h
         inclusion directives were ignored, as stuff from gfp.h was usually
         widely available and often used in preprocessor macros.  Each
         slab.h inclusion directive was examined and added manually as
         necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build tests were done on the following configurations and failures
         were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
         distributed build env didn't work with gcov compiles) and a few
         more options had to be turned off depending on the arch to make
         things build (like ipr on powerpc/64, which failed due to missing
         writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. percpu.h modifications were reverted so that it could be applied as
         a separate patch and serve as bisection point.
      
      Given the fact that I had only a couple of failures from the tests on step
      6, I'm fairly confident about the coverage of this conversion patch.
      If there is a breakage, it's likely to be something in one of the arch
      headers, which should be easily discoverable on most builds of
      the specific arch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
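      A hedged before/after illustration of what the conversion amounts to in a
      typical .c file (the header names are real, the file itself is
      hypothetical):

       /* Before: kmalloc()/GFP_KERNEL only compiled because slab.h and gfp.h
        * were dragged in implicitly through percpu.h via sched.h/module.h. */
       #include <linux/module.h>

       /* After: include what is actually used, placed in the core-kernel
        * include block in the file's existing order. */
       #include <linux/module.h>
       #include <linux/slab.h> /* kmalloc(), kfree() */
       #include <linux/gfp.h>  /* GFP_KERNEL and friends */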
  8. 27 November 2009 (2 commits)
  9. 20 November 2009 (2 commits)
    • nilfs2: move out mark_inode_dirty calls from bmap routines · 9cb4e0d2
      Committed by Ryusuke Konishi
      Previously, nilfs_bmap_add_blocks() and nilfs_bmap_sub_blocks() called
      mark_inode_dirty() after they changed the number of data blocks.
      
      This moves those calls outside the outermost bmap functions such as
      nilfs_bmap_insert() and nilfs_bmap_truncate().
      
      This will mitigate the overhead of truncate and delete operations, since
      they repeatedly remove sets of blocks.  Nearly a 10 percent improvement
      was observed for the removal of a large file:
      
       # dd if=/dev/zero of=/test/aaa bs=1M count=512
       # time rm /test/aaa
      
        real  2.968s -> 2.705s
      
      Further optimization may be possible by eliminating these
      mark_inode_dirty() uses, though I avoid mixing separate changes here.
      Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
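      A hedged sketch of the pattern only (nilfs2 internals are simplified away;
      the helper names here are hypothetical):

       #include <linux/fs.h>

       static int bmap_delete_range(struct inode *inode, unsigned long from);

       /* Before (sketch): the low-level block accounting dirtied the inode on
        * every counter update, i.e. once per removed block range. */
       static void bmap_sub_blocks_old(struct inode *inode, int n)
       {
               inode->i_blocks -= n;
               mark_inode_dirty(inode);        /* repeated many times per truncate */
       }

       /* After (sketch): the outermost operation dirties the inode just once. */
       static int example_truncate_blocks(struct inode *inode, unsigned long from)
       {
               int ret = bmap_delete_range(inode, from);

               if (!ret)
                       mark_inode_dirty(inode);        /* once per operation */
               return ret;
       }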
    • nilfs2: remove buffer locking in nilfs_mark_inode_dirty · a49762fd
      Committed by Ryusuke Konishi
      This lock can be eliminated because inodes on the buffer can be updated
      independently.  Although the log writer also fills in bmap data on the
      on-disk inodes, that update is done exclusively under the log writer lock.
      Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
  10. 15 November 2009 (1 commit)
  11. 29 September 2009 (1 commit)
  12. 22 September 2009 (1 commit)
  13. 14 September 2009 (1 commit)
  14. 24 June 2009 (1 commit)
  15. 10 June 2009 (3 commits)
    • nilfs2: support contiguous lookup of blocks · c3a7abf0
      Committed by Ryusuke Konishi
      Although the get_block() callback function can return an extent of
      contiguous blocks via bh->b_size, the nilfs_get_block() function did not
      support this feature.
      
      This adds a contiguous lookup feature to the block mapping code of
      nilfs and allows the nilfs_get_blocks() function to return the extent
      information by applying this feature.
      Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
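      A hedged sketch of how a get_block_t implementation reports an extent
      through bh->b_size (lookup_contig() is a hypothetical stand-in for the
      filesystem's contiguous block lookup; allocation for create is omitted):

       #include <linux/kernel.h>       /* min_t() */
       #include <linux/fs.h>
       #include <linux/buffer_head.h>  /* map_bh() */

       /* Returns the number of contiguous blocks mapped starting at iblock
        * (physical start in *pblock), or 0 for a hole. */
       static unsigned long lookup_contig(struct inode *inode, sector_t iblock,
                                          sector_t *pblock);

       static int example_get_block(struct inode *inode, sector_t iblock,
                                    struct buffer_head *bh_result, int create)
       {
               sector_t pblock;
               unsigned long n = lookup_contig(inode, iblock, &pblock);

               if (!n)
                       return 0;       /* hole: leave the buffer unmapped */

               map_bh(bh_result, inode->i_sb, pblock);
               /* b_size comes in as the size the caller wants mapped; shrink
                * it to the extent actually found. */
               bh_result->b_size = min_t(size_t, bh_result->b_size,
                                         (size_t)n << inode->i_blkbits);
               return 0;
       }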
    • nilfs2: enable sync_page method · e85dc1d5
      Committed by Ryusuke Konishi
      This adds the missing sync_page method, which unplugs bio requests when
      waiting for page locks.  This will improve the read performance of nilfs.
      
      Here is a measurement result using the dd command.
      
      Without this patch:
      
       # mount -t nilfs2 /dev/sde1 /test
       # dd if=/test/aaa of=/dev/null bs=512k
       1024+0 records in
       1024+0 records out
       536870912 bytes (537 MB) copied, 6.00688 seconds, 89.4 MB/s
      
      With this patch:
      
       # mount -t nilfs2 /dev/sde1 /test
       # dd if=/test/aaa of=/dev/null bs=512k
       1024+0 records in
       1024+0 records out
       536870912 bytes (537 MB) copied, 3.54998 seconds, 151 MB/s
      Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
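      A hedged sketch of the wiring: the readpage/writepage methods here are
      hypothetical, while block_sync_page() was the stock helper for this hook
      in kernels of that era:

       #include <linux/fs.h>
       #include <linux/buffer_head.h>  /* block_sync_page() */

       static int example_readpage(struct file *file, struct page *page);
       static int example_writepage(struct page *page, struct writeback_control *wbc);

       static const struct address_space_operations example_aops = {
               .readpage       = example_readpage,
               .writepage      = example_writepage,
               /* unplug the block queue while a reader waits on a locked page */
               .sync_page      = block_sync_page,
       };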
    • NILFS2: Pagecache usage optimization on NILFS2 · 258ef67e
      Committed by Hisashi Hifumi
      
      I introduced an "is_partially_uptodate" aops method for NILFS2.
      
      A page can have multiple buffers, and even if a page is not uptodate, some
      buffers can be uptodate in a pagesize != blocksize environment.
      This aops method checks whether all buffers corresponding to the part of
      the file we want to read are uptodate.  If so, we do not have to issue an
      actual read I/O to the HDD even if the page is not uptodate, because the
      portion we want to read is uptodate.
      The "block_is_partially_uptodate" function is already used by ext2/3/4.
      With this patch, mixed random read/write workloads, or random reads after
      random writes, can be optimized, giving a performance improvement.
      
      I did a performance test using sysbench.
      
       1 --file-block-size=8K --file-total-size=2G --file-test-mode=rndrw --file-fsync-freq=0 --file-rw-ratio=1 run
      
      -2.6.30-rc5
      
      Test execution summary:
          total time:                          151.2907s
          total number of events:              200000
          total time taken by event execution: 2409.8387
          per-request statistics:
               min:                            0.0000s
               avg:                            0.0120s
               max:                            0.9306s
               approx.  95 percentile:         0.0439s
      
      Threads fairness:
          events (avg/stddev):           12500.0000/238.52
          execution time (avg/stddev):   150.6149/0.01
      
      -2.6.30-rc5-patched
      
      Test execution summary:
          total time:                          140.8828s
          total number of events:              200000
          total time taken by event execution: 2240.8577
          per-request statistics:
               min:                            0.0000s
               avg:                            0.0112s
               max:                            0.8750s
               approx.  95 percentile:         0.0418s
      
      Threads fairness:
          events (avg/stddev):           12500.0000/218.43
          execution time (avg/stddev):   140.0536/0.01
      
      arch: ia64
      pagesize: 16k
      
      Signed-off-by: Hisashi Hifumi <hifumi.hisashi@oss.ntt.co.jp>
      Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
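      A hedged sketch of the wiring described above; block_is_partially_uptodate()
      is the generic helper already used by ext2/3/4, while example_readpage and
      the rest of the aops are hypothetical:

       #include <linux/fs.h>
       #include <linux/buffer_head.h>  /* block_is_partially_uptodate() */

       static int example_readpage(struct file *file, struct page *page);

       static const struct address_space_operations example_aops = {
               .readpage               = example_readpage,
               /* with blocksize < pagesize, a read that falls entirely within
                * uptodate buffers can skip the page-level read I/O */
               .is_partially_uptodate  = block_is_partially_uptodate,
       };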
  16. 07 April 2009 (6 commits)