1. 10 Jul 2014, 1 commit
  2. 23 Jun 2014, 2 commits
  3. 12 Jun 2014, 1 commit
    • ->splice_write() via ->write_iter() · 8d020765
      Committed by Al Viro
      iter_file_splice_write() - a ->splice_write() instance that gathers the
      pipe buffers, builds a bio_vec-based iov_iter covering those and feeds
      it to ->write_iter().  A bunch of simple cases converted to that...
      
      [AV: fixed the braino spotted by Cyrill]
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
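
      For a filesystem that already implements ->write_iter(), adopting the new
      helper is typically just a matter of wiring it into file_operations; a
      minimal sketch, where the foo_* names are illustrative placeholders rather
      than anything from this commit:

          /* Minimal sketch: hand splice writes to the ->write_iter() path via
           * iter_file_splice_write(); foo_file_write_iter() is a placeholder.
           */
          #include <linux/fs.h>

          static ssize_t foo_file_write_iter(struct kiocb *iocb, struct iov_iter *from);

          static const struct file_operations foo_file_operations = {
                  .llseek        = generic_file_llseek,
                  .read_iter     = generic_file_read_iter,
                  .write_iter    = foo_file_write_iter,
                  /* gathers the pipe buffers into a bio_vec-based iov_iter
                   * and feeds them to ->write_iter() */
                  .splice_write  = iter_file_splice_write,
          };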
  4. 08 Jun 2014, 1 commit
  5. 07 Jun 2014, 1 commit
  6. 07 May 2014, 7 commits
  7. 08 Apr 2014, 1 commit
  8. 07 Apr 2014, 1 commit
    • f2fs: introduce f2fs_issue_flush to avoid redundant flush issue · 6b4afdd7
      Committed by Jaegeuk Kim
      Some storage devices show relatively high latencies for completing cache_flush
      commands, even though their normal IO speed is quite high. In such cases it
      helps to merge cache_flush commands as much as possible, to avoid issuing
      them redundantly.
      So, this patch introduces a mount option, "-o flush_merge", to mitigate that
      overhead.
      
      If this option is enabled by the user, F2FS merges the pending cache_flush
      commands and issues just one cache_flush on behalf of them. Once that single
      command finishes, F2FS sends a completion signal to all the pending threads.
      
      Note that this option is intended for workloads with very intensive concurrent
      fsync calls on storage that handles cache_flush commands slowly.
      Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
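
      The merging policy itself is generic; below is a userspace analogy in plain C
      with pthreads (not the actual f2fs implementation) showing how concurrent
      flush requests can be collapsed so that one fsync() completes a whole batch
      of waiters:

          /* Userspace analogy of flush_merge: the first requester of a generation
           * issues one fsync() on behalf of everyone who queued up for that
           * generation; later arrivals wait for the next generation.
           */
          #include <pthread.h>
          #include <unistd.h>

          static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
          static pthread_cond_t  done = PTHREAD_COND_INITIALIZER;
          static unsigned long next_gen = 1;      /* generation the next flush serves */
          static unsigned long completed_gen;     /* last generation actually flushed */
          static int flush_in_progress;

          static void merged_flush(int fd)
          {
                  pthread_mutex_lock(&lock);
                  unsigned long my_gen = next_gen; /* the flush that will cover me */

                  while (completed_gen < my_gen) {
                          if (!flush_in_progress) {
                                  /* become the issuer of one flush for the batch */
                                  flush_in_progress = 1;
                                  next_gen++;      /* later arrivals join the next one */
                                  pthread_mutex_unlock(&lock);
                                  fsync(fd);       /* the single merged cache_flush */
                                  pthread_mutex_lock(&lock);
                                  completed_gen = my_gen;
                                  flush_in_progress = 0;
                                  pthread_cond_broadcast(&done);
                          } else {
                                  pthread_cond_wait(&done, &lock);
                          }
                  }
                  pthread_mutex_unlock(&lock);
          }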
  9. 20 Mar 2014, 3 commits
  10. 03 Mar 2014, 1 commit
  11. 17 Feb 2014, 1 commit
  12. 26 Jan 2014, 1 commit
  13. 20 Jan 2014, 1 commit
  14. 08 Jan 2014, 1 commit
    • f2fs: improve write performance under frequent fsync calls · fb5566da
      Committed by Jaegeuk Kim
      When considering a bunch of data writes with very frequent fsync calls, the
      following performance regression can be observed.
      
      N: Node IO, D: Data IO, IO scheduler: cfq
      
      Issue    pending IOs
      	 D1 D2 D3 D4
       D1         D2 D3 D4 N1
       D2            D3 D4 N1 N2
       N1            D3 D4 N2 D1
       --> N1 can be selected by cfq because N and D have the same priority.
           Then D3 and D4 would be delayed, resulting in performance degradation.
      
      So, when processing the fsync call, it is better to give data IOs higher
      priority than node IOs, by issuing them with WRITE_SYNC and WRITE respectively.
      This patch improves random write performance with frequent fsync calls by up
      to 10%.
      Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
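
      In 3.x-era kernels this priority split comes down to the rw flags passed to
      submit_bio(); a hedged sketch of the idea (the helper and its arguments are
      illustrative, not the patch itself):

          /* Sketch, assuming the 3.x block API submit_bio(int rw, struct bio *bio);
           * data IOs issued during fsync are marked WRITE_SYNC so cfq serves them
           * ahead of node IOs issued as plain WRITE.
           */
          #include <linux/bio.h>
          #include <linux/fs.h>

          static void submit_fsync_ios(struct bio *data_bio, struct bio *node_bio)
          {
                  submit_bio(WRITE_SYNC, data_bio);  /* sync class: higher priority */
                  submit_bio(WRITE, node_bio);       /* async class: can wait */
          }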
  15. 06 Jan 2014, 3 commits
    • f2fs: add inline_data recovery routine · 1e1bb4ba
      Committed by Jaegeuk Kim
      This patch adds an inline_data recovery routine with the following policy.
      
      [prev.] [next] of inline_data flag
         o       o  -> recover inline_data
         o       x  -> remove inline_data, and then recover data blocks
         x       o  -> remove data blocks, and then recover inline_data
         x       x  -> recover data blocks
      Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
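
      The four cases map directly onto a small decision helper; a hedged sketch in
      which every function name is a placeholder for the real f2fs recovery code:

          /* Sketch of the policy table above; prev is the on-disk state, next is
           * the state being recovered, and all helpers are illustrative placeholders.
           */
          static void recover_inline_policy(struct inode *prev, struct inode *next)
          {
                  bool prev_inline = has_inline_data(prev);
                  bool next_inline = has_inline_data(next);

                  if (prev_inline && next_inline) {
                          recover_inline(next);           /* o o */
                  } else if (prev_inline && !next_inline) {
                          remove_inline(prev);            /* o x */
                          recover_blocks(next);
                  } else if (!prev_inline && next_inline) {
                          remove_blocks(prev);            /* x o */
                          recover_inline(next);
                  } else {
                          recover_blocks(next);           /* x x */
                  }
          }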
    • f2fs: refactor f2fs_convert_inline_data · 9e09fc85
      Committed by Jaegeuk Kim
      Change log from v1:
       o handle a NULL pointer from grab_cache_page_write_begin(), pointed out by Chao Yu.
      
      This patch refactors f2fs_convert_inline_data to check a couple of conditions
      internally for deciding whether it needs to convert inline_data or not.
      
      So, the new f2fs_convert_inline_data initially checks:
      1) f2fs_has_inline_data(), and
      2) the data size to be changed.
      
      If the inode has inline_data but the size to fill is less than MAX_INLINE_DATA,
      then we don't need to convert the inline_data by allocating a data block.
      Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
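
      A hedged sketch of that early-exit logic; only f2fs_has_inline_data() and
      MAX_INLINE_DATA come from the commit text, the rest is illustrative:

          /* Sketch: decide whether inline_data actually has to be converted. */
          static bool need_convert_inline_data(struct inode *inode, loff_t size)
          {
                  if (!f2fs_has_inline_data(inode))
                          return false;   /* nothing inline, nothing to convert */
                  if (size < MAX_INLINE_DATA)
                          return false;   /* still fits inline, keep it as is */
                  return true;            /* must allocate a block and convert */
          }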
    • f2fs: convert inline_data for punch_hole · 8230a0a4
      Committed by Jaegeuk Kim
      In punch_hole(), let's convert inline_data unconditionally, for simplicity and
      to avoid potential deadlock conditions; the cost of doing so is negligible.
      Reviewed-by: Chao Yu <chao2.yu@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
  16. 27 Dec 2013, 1 commit
  17. 26 Dec 2013, 1 commit
    • f2fs: handle inline data operations · 9ffe0fb5
      Committed by Huajun Li
      Hook inline data read/write, truncate, fallocate, setattr, etc.
      
      A file needs to meet the following two requirements to be inlined:
       1) its size is not greater than MAX_INLINE_DATA;
       2) it does not pre-allocate data blocks via fallocate().
      
      FI_INLINE_DATA will not be set when creating a new regular inode, because
      most files are bigger than ~3.4K. FI_INLINE_DATA is set only when data is
      submitted to the block layer, rather than at inode creation; this also avoids
      converting data from inline to a normal data block and vice versa.
      
      While writing inline data into the inode block, the first data block should be
      released if the file has a block indexed by i_addr[0].
      
      On the other hand, when a file operation is applied to a file with inline
      data, we need to test whether this file can remain inline after the
      operation; otherwise it should be converted into a normal file by reserving
      a new data block, copying the inline data into that block, and clearing the
      FI_INLINE_DATA flag. Because reserving the new data block makes use of
      i_addr[0], if we saved inline data in i_addr[0..872] the first 4 bytes would
      be overwritten. This problem is avoided simply by not using i_addr[0] for
      inline data.
      Signed-off-by: Huajun Li <huajun.li@intel.com>
      Signed-off-by: Haicheng Li <haicheng.li@linux.intel.com>
      Signed-off-by: Weihong Xu <weihong.xu@intel.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
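
      A hedged sketch of the eligibility rule those two requirements describe; the
      helper and its second parameter are illustrative, only MAX_INLINE_DATA and
      the fallocate() condition come from the commit text:

          /* Sketch: a file may hold inline data only if it is small enough and
           * fallocate() has not already reserved data blocks for it.
           */
          static bool may_hold_inline_data(loff_t i_size, bool has_fallocated_blocks)
          {
                  if (i_size > MAX_INLINE_DATA)
                          return false;   /* 1) does not fit in the inode block */
                  if (has_fallocated_blocks)
                          return false;   /* 2) blocks already pre-allocated */
                  return true;
          }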
  18. 23 Dec 2013, 3 commits
  19. 31 Oct 2013, 1 commit
  20. 29 Oct 2013, 1 commit
  21. 25 Oct 2013, 1 commit
  22. 07 Oct 2013, 1 commit
    • f2fs: use rw_sem instead of fs_lock(locks mutex) · e479556b
      Committed by Gu Zheng
      The fs_lock is used to block other operations (e.g. recovery) while a checkpoint
      is running, and every routine other than checkpoint needs to acquire one of the
      fs_locks. This creates a nasty problem: when many concurrent threads contend for
      the fs_locks, they block each other and performance suffers, which is not what
      we want. Several optimization patches have improved how the fs_locks are used,
      but the thorough solution is to replace them with an rw_sem.
      The checkpoint routine takes the write semaphore and all other operations take
      the read semaphore, so checkpoint still blocks the other operations (e.g.
      recovery) while they no longer disturb each other; this avoids the problem
      described above completely.
      Because of a weakness of rw_sem, this change could introduce a new problem: the
      checkpoint thread might be starved if other threads keep taking the read
      semaphore for I/O (pointed out by Xu Jin). To avoid this, a wait_list is
      introduced: subsequent read-semaphore acquisitions are parked on the wait_list
      while the checkpoint thread is waiting for the write semaphore, and are woken
      up when the checkpoint thread releases it.
      Thanks to Kim's previous review and testing; performance results from others
      are very welcome.
      
      V2:
        - fix the potential starvation problem.
        - use a more suitable function name, as suggested by Xu Jin.
      Signed-off-by: Gu Zheng <guz.fnst@cn.fujitsu.com>
      [Jaegeuk Kim: adjust minor coding standard]
      Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
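
      The core locking change can be sketched with the stock rw_semaphore API; the
      semaphore and helper names below are illustrative, and the starvation-avoiding
      wait_list described above is not shown:

          /* Sketch: checkpoint takes the semaphore for writing, every other
           * operation takes it for reading, so normal ops no longer serialize
           * against each other but are still excluded during a checkpoint.
           */
          #include <linux/rwsem.h>

          static DECLARE_RWSEM(cp_rwsem);

          static void lock_op_sketch(void)          /* any op except checkpoint */
          {
                  down_read(&cp_rwsem);
                  /* ... perform the filesystem operation ... */
                  up_read(&cp_rwsem);
          }

          static void checkpoint_sketch(void)       /* the checkpoint path */
          {
                  down_write(&cp_rwsem);            /* excludes all other ops */
                  /* ... write the checkpoint ... */
                  up_write(&cp_rwsem);
          }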
  23. 26 Aug 2013, 1 commit
    • f2fs: reserve the xattr space dynamically · de93653f
      Committed by Jaegeuk Kim
      This patch enables the number of direct pointers inside the on-disk inode block
      to be changed dynamically, according to the size of the inline xattr space.
      
      The number of direct pointers, ADDRS_PER_INODE, can be changed only if the file
      has inline xattr flag.
      
      The number of direct pointers that will be used by inline xattrs is defined as
      F2FS_INLINE_XATTR_ADDRS.
      The current patch temporarily sets F2FS_INLINE_XATTR_ADDRS to 0.
      Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
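
      The dynamic reservation boils down to a per-inode pointer count; a hedged
      sketch in which DEF_ADDRS_PER_INODE and the flag test are assumptions, while
      ADDRS_PER_INODE and F2FS_INLINE_XATTR_ADDRS are the names used in the commit:

          /* Sketch: carve the inline-xattr area out of the direct-pointer space
           * only when the inode carries the inline xattr flag. With
           * F2FS_INLINE_XATTR_ADDRS set to 0, the result is unchanged for now.
           */
          static unsigned int addrs_per_inode_sketch(struct f2fs_inode_info *fi)
          {
                  if (inode_has_inline_xattr(fi))   /* placeholder flag test */
                          return DEF_ADDRS_PER_INODE - F2FS_INLINE_XATTR_ADDRS;
                  return DEF_ADDRS_PER_INODE;
          }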
  24. 09 Aug 2013, 2 commits
  25. 30 Jul 2013, 2 commits