1. 19 Jul 2016, 1 commit
  2. 16 Jul 2016, 3 commits
  3. 09 Jul 2016, 2 commits
  4. 07 Jul 2016, 1 commit
  5. 14 Jun 2016, 1 commit
  6. 08 Jun 2016, 1 commit
  7. 03 Jun 2016, 5 commits
  8. 21 May 2016, 2 commits
  9. 19 May 2016, 1 commit
  10. 12 May 2016, 3 commits
  11. 08 May 2016, 3 commits
    • f2fs: read node blocks ahead when truncating blocks · 79344efb
      Committed by Jaegeuk Kim
      This patch enables reading node blocks in advance when truncating large
      data blocks.
      
       > time rm $MNT/testfile (500GB) after drop_caches
      Before : 9.422 s
      After  : 4.821 s
      Reported-by: Stephen Bates <stephen.bates@microsemi.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      79344efb
    • f2fs: fallocate data blocks in single locked node page · e12dd7bd
      Committed by Jaegeuk Kim
      This patch improves expand_inode speed in fallocate by allocating as many
      data blocks as possible in a single locked node page.
      
      In SSD,
       # time fallocate -l 500G $MNT/testfile
      
      Before : 1m 33.410 s
      After  : 24.758 s
      Reported-by: Stephen Bates <stephen.bates@microsemi.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      e12dd7bd
    • f2fs: avoid panic when truncating to max filesize · 09210c97
      Committed by Chao Yu
      The following panic occurs when truncating an inode that has an inline
      xattr to the maximum filesize.
      
      [<ffffffffa013d3be>] get_dnode_of_data+0x4e/0x580 [f2fs]
      [<ffffffffa013aca1>] ? read_node_page+0x51/0x90 [f2fs]
      [<ffffffffa013ad99>] ? get_node_page.part.34+0xb9/0x170 [f2fs]
      [<ffffffffa01235b1>] truncate_blocks+0x131/0x3f0 [f2fs]
      [<ffffffffa01238e3>] f2fs_truncate+0x73/0x100 [f2fs]
      [<ffffffffa01239d2>] f2fs_setattr+0x62/0x2a0 [f2fs]
      [<ffffffff811a72c8>] notify_change+0x158/0x300
      [<ffffffff8118a42b>] do_truncate+0x6b/0xa0
      [<ffffffff8118e539>] ? __sb_start_write+0x49/0x100
      [<ffffffff8118a798>] do_sys_ftruncate.constprop.12+0x118/0x170
      [<ffffffff8118a82e>] SyS_ftruncate+0xe/0x10
      [<ffffffff8169efcf>] tracesys+0xe1/0xe6
      [<ffffffffa0139ae0>] get_node_path+0x210/0x220 [f2fs]
       <ffff880206a89ce8>
      ---[ end trace 5fea664dfbcc6625 ]---
      
      The reason is that truncate_blocks tries to truncate all node and data
      blocks starting from the specified block offset of (max filesize / block
      size); but the maximum valid block offset is actually (max filesize /
      block size) - 1, so f2fs catches the invalid block offset with a BUG_ON
      in the truncation path.
      
      This patch lets f2fs skip truncating data that exceeds the maximum
      filesize.
      Signed-off-by: Chao Yu <yuchao0@huawei.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      09210c97
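      The off-by-one reasoning above can be illustrated with a small userspace sketch. The names `should_truncate`, `blocksize`, and `max_filesize` are illustrative stand-ins, not f2fs's actual identifiers or limits; only the bound check mirrors the fix.

      ```c
      #include <assert.h>
      #include <stdint.h>
      #include <stdio.h>

      /* Returns 1 if truncation should proceed from this starting block
       * offset, 0 if the offset is already past the last valid block and
       * can be skipped -- the guard this patch effectively adds. */
      static int should_truncate(uint64_t free_from, uint64_t max_file_blocks)
      {
          return free_from < max_file_blocks;
      }

      int main(void)
      {
          /* Illustrative numbers: 4 KiB blocks and a hypothetical
           * 16 GiB maximum filesize. */
          const uint64_t blocksize = 4096;
          const uint64_t max_filesize = 16ULL * 1024 * 1024 * 1024;
          const uint64_t max_file_blocks = max_filesize / blocksize;

          /* Valid block offsets are [0, max_file_blocks - 1]; the offset
           * (max filesize / block size) itself is one past the end, which
           * is what tripped the BUG_ON before this patch. */
          assert(should_truncate(max_file_blocks, max_file_blocks) == 0);
          assert(should_truncate(max_file_blocks - 1, max_file_blocks) == 1);

          printf("max_file_blocks=%llu\n", (unsigned long long)max_file_blocks);
          return 0;
      }
      ```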
  12. 02 May 2016, 2 commits
  13. 27 Apr 2016, 3 commits
  14. 15 Apr 2016, 3 commits
  15. 13 Apr 2016, 1 commit
  16. 05 Apr 2016, 1 commit
    • mm, fs: get rid of PAGE_CACHE_* and page_cache_{get,release} macros · 09cbfeaf
      Committed by Kirill A. Shutemov
      The PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} macros were introduced a *long*
      time ago with the promise that one day it would be possible to implement
      a page cache with chunks bigger than PAGE_SIZE.
      
      This promise never materialized, and it is unlikely to.
      
      We have many places where PAGE_CACHE_SIZE is assumed to be equal to
      PAGE_SIZE, and it is a constant source of confusion about whether a
      PAGE_CACHE_* or PAGE_* constant should be used in a particular case,
      especially on the border between fs and mm.
      
      Globally switching to PAGE_CACHE_SIZE != PAGE_SIZE would cause too much
      breakage to be doable.
      
      Let's stop pretending that pages in page cache are special.  They are
      not.
      
      The changes are pretty straightforward:
      
       - <foo> << (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
      
       - <foo> >> (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
      
       - PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} -> PAGE_{SIZE,SHIFT,MASK,ALIGN};
      
       - page_cache_get() -> get_page();
      
       - page_cache_release() -> put_page();
      
      This patch contains automated changes generated with coccinelle using
      script below.  For some reason, coccinelle doesn't patch header files.
      I've called spatch for them manually.
      
      The only adjustment after coccinelle is a revert of the changes to the
      PAGE_CACHE_ALIGN definition: we are going to drop it later.
      
      There are a few places in the code that coccinelle didn't reach. I'll
      fix them manually in a separate patch. Comments and documentation will
      also be addressed in a separate patch.
      
      virtual patch
      
      @@
      expression E;
      @@
      - E << (PAGE_CACHE_SHIFT - PAGE_SHIFT)
      + E
      
      @@
      expression E;
      @@
      - E >> (PAGE_CACHE_SHIFT - PAGE_SHIFT)
      + E
      
      @@
      @@
      - PAGE_CACHE_SHIFT
      + PAGE_SHIFT
      
      @@
      @@
      - PAGE_CACHE_SIZE
      + PAGE_SIZE
      
      @@
      @@
      - PAGE_CACHE_MASK
      + PAGE_MASK
      
      @@
      expression E;
      @@
      - PAGE_CACHE_ALIGN(E)
      + PAGE_ALIGN(E)
      
      @@
      expression E;
      @@
      - page_cache_get(E)
      + get_page(E)
      
      @@
      expression E;
      @@
      - page_cache_release(E)
      + put_page(E)
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      09cbfeaf
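      The conversion rules in the semantic patch above are purely mechanical, which a small self-contained sketch can show. The kernel macros are stubbed here with userspace stand-ins (4 KiB pages assumed) so the example compiles outside the kernel; `index_old`/`index_new` are illustrative names, not kernel functions.

      ```c
      #include <assert.h>
      #include <stdio.h>

      /* Userspace stand-ins for the kernel constants (4 KiB pages). */
      #define PAGE_SHIFT 12
      #define PAGE_SIZE  (1UL << PAGE_SHIFT)

      /* Before the conversion, PAGE_CACHE_SHIFT was always defined
       * equal to PAGE_SHIFT anyway: */
      #define PAGE_CACHE_SHIFT PAGE_SHIFT

      /* Old-style page-index computation. */
      static unsigned long index_old(unsigned long pos)
      {
          return pos >> PAGE_CACHE_SHIFT;
      }

      /* After the coccinelle conversion, the same expression simply
       * uses PAGE_SHIFT. */
      static unsigned long index_new(unsigned long pos)
      {
          return pos >> PAGE_SHIFT;
      }

      int main(void)
      {
          unsigned long pos = 123456;

          /* The rename changes nothing semantically... */
          assert(index_old(pos) == index_new(pos));

          /* ...and the shift-difference patterns collapse entirely:
           * x << (PAGE_CACHE_SHIFT - PAGE_SHIFT) == x. */
          assert((pos << (PAGE_CACHE_SHIFT - PAGE_SHIFT)) == pos);

          printf("index=%lu\n", index_new(pos));
          return 0;
      }
      ```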
  17. 18 Mar 2016, 2 commits
  18. 27 Feb 2016, 1 commit
  19. 26 Feb 2016, 1 commit
    • f2fs: fix incorrect upper bound when iterating inode mapping tree · 80dd9c0e
      Committed by Chao Yu
      1. The inode mapping tree can index pages in the range [0, ULONG_MAX];
      however, in some places f2fs only searches or iterates pages in the range
      [0, LONG_MAX], resulting in missed hits in the page cache.
      
      2. filemap_fdatawait_range accepts range parameters in units of bytes, so
      the maximum range it covers should be [0, LLONG_MAX]; if we use
      [0, LONG_MAX] as the range for waiting on writeback, a large number of
      pages will not be covered.
      
      This patch corrects the above two issues.
      Signed-off-by: Chao Yu <chao2.yu@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      80dd9c0e
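      The two bounds can be checked in a small userspace sketch; the `pgoff_t` typedef here just mirrors the kernel's definition (an unsigned long) for illustration.

      ```c
      #include <assert.h>
      #include <limits.h>
      #include <stdio.h>

      /* In the kernel, page indices are pgoff_t (unsigned long), so the
       * full index range is [0, ULONG_MAX], not [0, LONG_MAX]. */
      typedef unsigned long pgoff_t;

      int main(void)
      {
          pgoff_t full_end = ULONG_MAX; /* correct upper bound for indices */
          pgoff_t half_end = LONG_MAX;  /* the buggy upper bound */

          /* Using LONG_MAX covers only the lower half of the index space:
           * ULONG_MAX == 2 * LONG_MAX + 1. */
          assert(half_end < full_end);
          assert(full_end == 2 * (pgoff_t)LONG_MAX + 1);

          /* filemap_fdatawait_range() takes byte offsets, so the
           * end-of-file sentinel there must be LLONG_MAX instead. */
          long long byte_end = LLONG_MAX;
          assert(byte_end > 0);

          printf("half=%lu full=%lu\n", half_end, full_end);
          return 0;
      }
      ```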
  20. 23 Feb 2016, 3 commits
    • f2fs crypto: handle unexpected lack of encryption keys · ae108668
      Committed by Chao Yu
      This patch syncs f2fs with commit abdd438b ("ext4 crypto: handle
      unexpected lack of encryption keys") from ext4.
      
      Fix up attempts by users to write to a file when they don't have access
      to the encryption key.
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      Signed-off-by: Chao Yu <chao2.yu@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      ae108668
    • f2fs: support revoking atomic written pages · 28bc106b
      Committed by Chao Yu
      f2fs supports atomic writes with the following semantics:
      1. open db file
      2. ioctl start atomic write
      3. (write db file) * n
      4. ioctl commit atomic write
      5. close db file
      
      With this flow we can avoid the file becoming corrupted on an abnormal
      power cut, because we hold the transaction's data in referenced pages
      linked into the inode's inmem_pages list without setting them dirty, so
      the data won't be persisted unless we commit it in step 4.
      
      But we should still keep the journal db file in memory by using volatile
      writes, because our 'atomic write support' semantics are incomplete: in
      step 4 we could fail to submit all of the transaction's dirty data, and
      once partial dirty data has been committed to storage, the db file will
      be corrupted forever after a checkpoint and an abnormal power cut.
      
      So this patch improves the atomic write flow by adding a revoking step:
      once an internal error occurs during commit, it gives us another chance
      to revoke the partially submitted data of the current transaction,
      making the commit operation closer to atomic.
      
      If we're unlucky and the revoke operation also fails, EAGAIN is reported
      to the user, suggesting either recovery with the held journal file or
      retrying the current transaction.
      Signed-off-by: Chao Yu <chao2.yu@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      28bc106b
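      The five-step flow above maps onto the f2fs atomic-write ioctls (defined in fs/f2fs/f2fs.h, kernel-internal at the time of this commit). A minimal userspace sketch follows; the db path is illustrative, and on a non-f2fs filesystem the ioctls simply fail with ENOTTY, which the sketch tolerates rather than modeling real commit/revoke behavior.

      ```c
      #include <errno.h>
      #include <fcntl.h>
      #include <stdio.h>
      #include <string.h>
      #include <sys/ioctl.h>
      #include <unistd.h>

      /* From fs/f2fs/f2fs.h at the time of this commit. */
      #define F2FS_IOCTL_MAGIC             0xf5
      #define F2FS_IOC_START_ATOMIC_WRITE  _IO(F2FS_IOCTL_MAGIC, 1)
      #define F2FS_IOC_COMMIT_ATOMIC_WRITE _IO(F2FS_IOCTL_MAGIC, 2)

      int main(void)
      {
          /* Step 1: open the db file (path is illustrative). */
          int fd = open("/tmp/test.db", O_RDWR | O_CREAT, 0600);
          if (fd < 0) {
              perror("open");
              return 1;
          }

          /* Step 2: start the atomic write; fails with ENOTTY unless
           * the file actually lives on f2fs. */
          if (ioctl(fd, F2FS_IOC_START_ATOMIC_WRITE) < 0)
              fprintf(stderr, "start: %s (expected off f2fs)\n", strerror(errno));

          /* Step 3: write the transaction data (n writes). */
          const char buf[] = "transaction record";
          if (write(fd, buf, sizeof(buf)) < 0)
              perror("write");

          /* Step 4: commit; with this patch, EAGAIN here means the
           * commit was revoked and the caller may recover using the
           * held journal file or retry the transaction. */
          if (ioctl(fd, F2FS_IOC_COMMIT_ATOMIC_WRITE) < 0)
              fprintf(stderr, "commit: %s (expected off f2fs)\n", strerror(errno));

          /* Step 5: close the db file. */
          close(fd);
          printf("done\n");
          return 0;
      }
      ```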
    • f2fs: split drop_inmem_pages from commit_inmem_pages · 29b96b54
      Committed by Chao Yu
      Split drop_inmem_pages out of commit_inmem_pages for code readability,
      and to prepare for the following modification.
      Signed-off-by: Chao Yu <chao2.yu@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      29b96b54