1. 24 Nov 2016, 6 commits
  2. 01 Oct 2016, 4 commits
  3. 23 Sep 2016, 1 commit
  4. 13 Sep 2016, 2 commits
  5. 08 Sep 2016, 3 commits
  6. 30 Aug 2016, 2 commits
  7. 25 Aug 2016, 1 commit
  8. 16 Jul 2016, 2 commits
  9. 09 Jul 2016, 2 commits
    • f2fs: fix to avoid redundant discard during fstrim · c24a0fd6
      Committed by Chao Yu
      With the test steps below, f2fs issues a redundant discard when doing fstrim.
      The reason is that we issue discards both for prefree segments and for the
      consecutive freed region the user wants to trim, and parts of the regions they
      cover overlap. Here, we change the code to not issue any discards for prefree
      segments within the trimmed range.
      
      1. mount -t f2fs -o discard /dev/zram0 /mnt/f2fs
      2. fstrim -o 0 -l 3221225472 -m 2097152 -v /mnt/f2fs/
      3. dd if=/dev/zero  of=/mnt/f2fs/a bs=2M count=1
      4. dd if=/dev/zero  of=/mnt/f2fs/b bs=1M count=1
      5. sync
      6. rm /mnt/f2fs/a /mnt/f2fs/b
      7. fstrim -o 0 -l 3221225472 -m 2097152 -v /mnt/f2fs/
      
      Before:
      <...>-5428  [001] ...1  9511.052125: f2fs_issue_discard: dev = (251,0), blkstart = 0x2200, blklen = 0x200
      <...>-5428  [001] ...1  9511.052787: f2fs_issue_discard: dev = (251,0), blkstart = 0x2200, blklen = 0x300
      
      After:
      <...>-6764  [000] ...1  9720.382504: f2fs_issue_discard: dev = (251,0), blkstart = 0x2200, blklen = 0x300
      Signed-off-by: Chao Yu <yuchao0@huawei.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
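      
      The gist of the fix can be sketched as below. This is only an illustration,
      not the actual f2fs code: the helpers is_prefree_segment() and
      issue_discard() are hypothetical stand-ins for the real segment bitmap and
      discard machinery.
      
      /* Hypothetical helpers standing in for the real f2fs machinery. */
      bool is_prefree_segment(unsigned int segno);
      void issue_discard(block_t blkstart, block_t blklen);
      
      /* Skip prefree segments inside the trimmed range: the "-o discard"
       * checkpoint path already discards them, so issuing another discard
       * here would be redundant. */
      static void trim_one_extent(unsigned int segno, block_t blkstart,
                                  block_t blklen, block_t trim_minlen)
      {
              if (is_prefree_segment(segno))
                      return;
      
              if (blklen < trim_minlen)
                      return;
      
              issue_discard(blkstart, blklen);
      }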
    • f2fs: avoid mismatching block range for discard · c7b41e16
      Committed by Yunlei He
      This patch skips discarding block ranges that are smaller than trim_minlen
      and cannot be merged with a neighbouring range.
      Signed-off-by: Yunlei He <heyunlei@huawei.com>
      Reviewed-by: Chao Yu <yuchao0@huawei.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
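      
      In other words, a candidate range is only discarded when it is at least
      trim_minlen blocks long or when it can be merged into the neighbouring
      pending discard. A minimal sketch of that check, with illustrative names
      rather than the real f2fs ones:
      
      /* dc_start/dc_len describe a hypothetical pending discard entry that
       * immediately precedes the candidate range. */
      static bool worth_discarding(block_t start, block_t len, block_t trim_minlen,
                                   block_t dc_start, block_t dc_len)
      {
              /* mergeable with the neighbouring pending discard? */
              if (dc_len && dc_start + dc_len == start)
                      return true;
      
              /* otherwise only ranges of at least trim_minlen blocks qualify */
              return len >= trim_minlen;
      }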
  10. 07 Jul 2016, 2 commits
  11. 14 Jun 2016, 1 commit
  12. 09 Jun 2016, 2 commits
  13. 08 Jun 2016, 3 commits
  14. 03 Jun 2016, 2 commits
  15. 04 May 2016, 1 commit
  16. 15 Apr 2016, 2 commits
  17. 05 Apr 2016, 1 commit
    • mm, fs: get rid of PAGE_CACHE_* and page_cache_{get,release} macros · 09cbfeaf
      Committed by Kirill A. Shutemov
      The PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} macros were introduced a *long* time
      ago with the promise that one day it would be possible to implement the
      page cache with chunks bigger than PAGE_SIZE.
      
      This promise never materialized, and it is unlikely that it ever will.
      
      We have many places where PAGE_CACHE_SIZE is assumed to be equal to
      PAGE_SIZE, and it is a constant source of confusion whether the
      PAGE_CACHE_* or the PAGE_* constant should be used in a particular case,
      especially on the border between fs and mm.
      
      Globally switching to PAGE_CACHE_SIZE != PAGE_SIZE would cause too much
      breakage to be doable.
      
      Let's stop pretending that pages in the page cache are special. They are
      not.
      
      The changes are pretty straightforward:
      
       - <foo> << (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
      
       - <foo> >> (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
      
       - PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} -> PAGE_{SIZE,SHIFT,MASK,ALIGN};
      
       - page_cache_get() -> get_page();
      
       - page_cache_release() -> put_page();
      
      This patch contains automated changes generated with coccinelle using the
      script below. For some reason, coccinelle doesn't patch header files; I've
      called spatch for them manually.
      
      The only adjustment after coccinelle is reverting the changes to the
      PAGE_CACHE_ALIGN definition: we are going to drop it later.
      
      There are a few places in the code that coccinelle didn't reach. I'll fix
      them manually in a separate patch. Comments and documentation will also be
      addressed in a separate patch.
      
      virtual patch
      
      @@
      expression E;
      @@
      - E << (PAGE_CACHE_SHIFT - PAGE_SHIFT)
      + E
      
      @@
      expression E;
      @@
      - E >> (PAGE_CACHE_SHIFT - PAGE_SHIFT)
      + E
      
      @@
      @@
      - PAGE_CACHE_SHIFT
      + PAGE_SHIFT
      
      @@
      @@
      - PAGE_CACHE_SIZE
      + PAGE_SIZE
      
      @@
      @@
      - PAGE_CACHE_MASK
      + PAGE_MASK
      
      @@
      expression E;
      @@
      - PAGE_CACHE_ALIGN(E)
      + PAGE_ALIGN(E)
      
      @@
      expression E;
      @@
      - page_cache_get(E)
      + get_page(E)
      
      @@
      expression E;
      @@
      - page_cache_release(E)
      + put_page(E)
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
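      
      As an illustration of what the semantic patch does, here is a made-up
      filesystem helper before and after the conversion (the function itself is
      hypothetical; only the PAGE_* names and page reference APIs are real
      kernel ones):
      
      /* Before: page-cache-specific names. */
      static pgoff_t example_last_index(struct inode *inode)
      {
              loff_t size = i_size_read(inode);
      
              /* byte size -> index of the last page in the page cache */
              return (size + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
      }
      
      /* After: only the plain PAGE_* names remain. */
      static pgoff_t example_last_index(struct inode *inode)
      {
              loff_t size = i_size_read(inode);
      
              return (size + PAGE_SIZE - 1) >> PAGE_SHIFT;
      }
      
      /* The reference-counting helpers are renamed the same way:
       * page_cache_get(page) becomes get_page(page), and
       * page_cache_release(page) becomes put_page(page). */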
  18. 27 Feb 2016, 2 commits
    • f2fs: introduce f2fs_update_data_blkaddr for cleanup · f28b3434
      Committed by Chao Yu
      Add a new helper, f2fs_update_data_blkaddr, to clean up redundant code.
      Signed-off-by: Chao Yu <chao2.yu@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
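      
      The log does not show the helper's body. As a rough sketch of the kind of
      consolidation meant here (the callees below are an assumption, not taken
      from this log), the helper would fold the repeated "store the new block
      address and refresh the related state" sequence into one place:
      
      /* Sketch only, not the actual f2fs implementation. */
      void f2fs_update_data_blkaddr(struct dnode_of_data *dn, block_t blkaddr)
      {
              dn->data_blkaddr = blkaddr;      /* remember the new address */
              set_data_blkaddr(dn);            /* write it into the node page */
              f2fs_update_extent_cache(dn);    /* keep the extent cache in sync */
      }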
    • f2fs crypto: fix incorrect positioning for GCing encrypted data page · 4356e48e
      Committed by Chao Yu
      Currently, the flow of GCing an encrypted data page is:
      1) try to grab a meta page in the meta inode's mapping, indexed by the old
      block address of that data page
      2) load the ciphertext of the data page into the meta page
      3) allocate a new block address
      4) write the meta page to the new block address
      5) update the block address pointer in the direct node page.
      
      Other readers/writers use f2fs_wait_on_encrypted_page_writeback to check
      for and wait on GCed encrypted data cached in a meta page that is being
      written back, in order to avoid inconsistency among the data page cache,
      the meta page cache, and the on-disk data when updating.
      
      However, we use the new block address updated in step 5) as the index to
      look up the meta page in the internal bio buffer. That is wrong: we will
      never find the meta page being GCed, since it was inserted with the old
      block address as its index in step 1).
      
      This patch fixes the issue by swapping the order of steps 1) and 3), so
      that step 1) grabs the page with the index generated in step 3).
      Signed-off-by: Chao Yu <chao2.yu@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
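      
      The corrected ordering can be sketched as follows. All helper names here
      are illustrative stand-ins, not the actual f2fs functions; the point is
      only that the new block address is allocated before the meta page is
      grabbed, and is then used as that page's index.
      
      /* Hypothetical helpers standing in for the real GC/crypto machinery. */
      block_t allocate_new_block(void);
      struct page *grab_meta_page_by_addr(block_t blkaddr);
      void load_ciphertext(struct page *meta_page, struct inode *inode, pgoff_t index);
      void write_meta_page_to(struct page *meta_page, block_t blkaddr);
      void update_dnode_blkaddr(struct inode *inode, pgoff_t index, block_t blkaddr);
      
      static void gc_move_encrypted_page(struct inode *inode, pgoff_t index)
      {
              /* 3) allocate the new block address first ... */
              block_t new_blkaddr = allocate_new_block();
      
              /* 1) ... then grab the meta page, indexed by the NEW address, so
               * that waiters which look the page up by the address stored in
               * the dnode can actually find it */
              struct page *meta_page = grab_meta_page_by_addr(new_blkaddr);
      
              /* 2) load the ciphertext of the data page into the meta page */
              load_ciphertext(meta_page, inode, index);
      
              /* 4) write the meta page out to the new block address */
              write_meta_page_to(meta_page, new_blkaddr);
      
              /* 5) update the block address pointer in the direct node page */
              update_dnode_blkaddr(inode, index, new_blkaddr);
      }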
  19. 23 Feb 2016, 1 commit
    • f2fs: trace old block address for CoWed page · 7a9d7548
      Committed by Chao Yu
      This patch enables tracing the old block address of a CoWed page, for
      better debugging.
      
      f2fs_submit_page_mbio: dev = (1,0), ino = 1, page_index = 0x1d4f0, oldaddr = 0xfe8ab, newaddr = 0xfee90 rw = WRITE_SYNC, type = NODE
      f2fs_submit_page_mbio: dev = (1,0), ino = 1, page_index = 0x1d4f8, oldaddr = 0xfe8b0, newaddr = 0xfee91 rw = WRITE_SYNC, type = NODE
      f2fs_submit_page_mbio: dev = (1,0), ino = 1, page_index = 0x1d4fa, oldaddr = 0xfe8ae, newaddr = 0xfee92 rw = WRITE_SYNC, type = NODE
      
      f2fs_submit_page_mbio: dev = (1,0), ino = 134824, page_index = 0x96, oldaddr = 0xf049b, newaddr = 0x2bbe rw = WRITE, type = DATA
      f2fs_submit_page_mbio: dev = (1,0), ino = 134824, page_index = 0x97, oldaddr = 0xf049c, newaddr = 0x2bbf rw = WRITE, type = DATA
      f2fs_submit_page_mbio: dev = (1,0), ino = 134824, page_index = 0x98, oldaddr = 0xf049d, newaddr = 0x2bc0 rw = WRITE, type = DATA
      
      f2fs_submit_page_mbio: dev = (1,0), ino = 135260, page_index = 0x47, oldaddr = 0xffffffff, newaddr = 0xf2631 rw = WRITE, type = DATA
      f2fs_submit_page_mbio: dev = (1,0), ino = 135260, page_index = 0x48, oldaddr = 0xffffffff, newaddr = 0xf2632 rw = WRITE, type = DATA
      f2fs_submit_page_mbio: dev = (1,0), ino = 135260, page_index = 0x49, oldaddr = 0xffffffff, newaddr = 0xf2633 rw = WRITE, type = DATA
      Signed-off-by: Chao Yu <chao2.yu@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
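      
      Printing both addresses, as in the f2fs_submit_page_mbio lines above, means
      the trace event records the old and the new block address side by side. A
      sketch of such a tracepoint definition is shown below; the event and field
      names are illustrative, not copied from the f2fs trace headers (block_t is
      f2fs's 32-bit block address type).
      
      TRACE_EVENT(sample_submit_page,
      
              TP_PROTO(dev_t dev, unsigned long ino, pgoff_t index,
                       block_t old_blkaddr, block_t new_blkaddr),
      
              TP_ARGS(dev, ino, index, old_blkaddr, new_blkaddr),
      
              TP_STRUCT__entry(
                      __field(dev_t,          dev)
                      __field(unsigned long,  ino)
                      __field(pgoff_t,        index)
                      __field(block_t,        old_blkaddr)
                      __field(block_t,        new_blkaddr)
              ),
      
              TP_fast_assign(
                      __entry->dev         = dev;
                      __entry->ino         = ino;
                      __entry->index       = index;
                      __entry->old_blkaddr = old_blkaddr;
                      __entry->new_blkaddr = new_blkaddr;
              ),
      
              TP_printk("dev = (%d,%d), ino = %lu, page_index = 0x%lx, "
                        "oldaddr = 0x%x, newaddr = 0x%x",
                        MAJOR(__entry->dev), MINOR(__entry->dev), __entry->ino,
                        (unsigned long)__entry->index,
                        __entry->old_blkaddr, __entry->new_blkaddr)
      );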