1. 30 Sep 2020, 1 commit
• f2fs: fix slab leak of rpages pointer · adfc6943
Authored by Jaegeuk Kim
This fixes the memory leak shown below.
      
      [  130.157600] =============================================================================
      [  130.159662] BUG f2fs_page_array_entry-252:16 (Tainted: G        W  O     ): Objects remaining in f2fs_page_array_entry-252:16 on __kmem_cache_shutdown()
      [  130.162742] -----------------------------------------------------------------------------
      [  130.162742]
      [  130.164979] Disabling lock debugging due to kernel taint
      [  130.166188] INFO: Slab 0x000000009f5a52d2 objects=22 used=4 fp=0x00000000ba72c3e9 flags=0xfffffc0010200
      [  130.168269] CPU: 7 PID: 3560 Comm: umount Tainted: G    B   W  O      5.9.0-rc4+ #35
      [  130.170019] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.13.0-1 04/01/2014
      [  130.171941] Call Trace:
      [  130.172528]  dump_stack+0x74/0x9a
      [  130.173298]  slab_err+0xb7/0xdc
      [  130.174044]  ? kernel_poison_pages+0xc0/0xc0
      [  130.175065]  ? on_each_cpu_cond_mask+0x48/0x90
      [  130.176096]  __kmem_cache_shutdown.cold+0x34/0x141
      [  130.177190]  kmem_cache_destroy+0x59/0x100
      [  130.178223]  f2fs_destroy_page_array_cache+0x15/0x20 [f2fs]
      [  130.179527]  f2fs_put_super+0x1bc/0x380 [f2fs]
      [  130.180538]  generic_shutdown_super+0x72/0x110
      [  130.181547]  kill_block_super+0x27/0x50
      [  130.182438]  kill_f2fs_super+0x76/0xe0 [f2fs]
      [  130.183448]  deactivate_locked_super+0x3b/0x80
      [  130.184456]  deactivate_super+0x3e/0x50
      [  130.185363]  cleanup_mnt+0x109/0x160
      [  130.186179]  __cleanup_mnt+0x12/0x20
      [  130.187003]  task_work_run+0x70/0xb0
      [  130.187841]  exit_to_user_mode_prepare+0x18f/0x1b0
      [  130.188917]  syscall_exit_to_user_mode+0x31/0x170
      [  130.189989]  do_syscall_64+0x45/0x90
      [  130.190828]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
      [  130.191986] RIP: 0033:0x7faf868ea2eb
      [  130.192815] Code: 7b 0c 00 f7 d8 64 89 01 48 83 c8 ff c3 66 90 f3 0f 1e fa 31 f6 e9 05 00 00 00 0f 1f 44 00 00 f3 0f 1e fa b8 a6 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 75 7b 0c 00 f7 d8 64 89 01
      [  130.196872] RSP: 002b:00007fffb7edb478 EFLAGS: 00000246 ORIG_RAX: 00000000000000a6
      [  130.198494] RAX: 0000000000000000 RBX: 00007faf86a18204 RCX: 00007faf868ea2eb
      [  130.201021] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 000055971df71c50
      [  130.203415] RBP: 000055971df71a40 R08: 0000000000000000 R09: 00007fffb7eda1f0
      [  130.205772] R10: 00007faf86a04339 R11: 0000000000000246 R12: 000055971df71c50
      [  130.208150] R13: 0000000000000000 R14: 000055971df71b38 R15: 0000000000000000
      [  130.210515] INFO: Object 0x00000000a980843a @offset=744
      [  130.212476] INFO: Allocated in page_array_alloc+0x3d/0xe0 [f2fs] age=1572 cpu=0 pid=3297
      [  130.215030] 	__slab_alloc+0x20/0x40
      [  130.216566] 	kmem_cache_alloc+0x2a0/0x2e0
      [  130.218217] 	page_array_alloc+0x3d/0xe0 [f2fs]
      [  130.219940] 	f2fs_init_compress_ctx+0x1f/0x40 [f2fs]
      [  130.221736] 	f2fs_write_cache_pages+0x3db/0x860 [f2fs]
      [  130.223591] 	f2fs_write_data_pages+0x2c9/0x300 [f2fs]
      [  130.225414] 	do_writepages+0x43/0xd0
      [  130.226907] 	__filemap_fdatawrite_range+0xd5/0x110
      [  130.228632] 	filemap_write_and_wait_range+0x48/0xb0
      [  130.230336] 	__generic_file_write_iter+0x18a/0x1d0
      [  130.232035] 	f2fs_file_write_iter+0x226/0x550 [f2fs]
      [  130.233737] 	new_sync_write+0x113/0x1a0
      [  130.235204] 	vfs_write+0x1a6/0x200
      [  130.236579] 	ksys_write+0x67/0xe0
      [  130.237898] 	__x64_sys_write+0x1a/0x20
      [  130.239309] 	do_syscall_64+0x38/0x90
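The allocation path above points at the rpages array handed out by page_array_alloc() in f2fs_init_compress_ctx(). Below is a hypothetical sketch of the kind of teardown that prevents the leak; page_array_free() and the compress_ctx fields are assumptions inferred from the trace, not the verbatim patch.

  /* Hypothetical sketch, not the upstream change: release the slab-backed
   * rpages pointer array before the compress context goes out of scope, so
   * no f2fs_page_array_entry objects survive until kmem_cache_destroy(). */
  static void f2fs_destroy_compress_ctx(struct compress_ctx *cc)
  {
          if (!cc->rpages)
                  return;

          /* frees only the pointer array, not the pages it refers to */
          page_array_free(F2FS_I_SB(cc->inode), cc->rpages, cc->nr_rpages);
          cc->rpages = NULL;
          cc->nr_rpages = 0;
          cc->nr_cpages = 0;
  }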
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2. 15 Sep 2020, 1 commit
3. 12 Sep 2020, 4 commits
• f2fs: change return value of f2fs_disable_compressed_file to bool · 78134d03
Authored by Daeho Jeong
The returned integer is not used anywhere, so change the return value
to bool.
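A minimal sketch of the shape of this change; the body is a placeholder, not the actual f2fs logic:

  /* Sketch only: the function now answers a yes/no question instead of
   * returning an error code that no caller consumed. */
  static bool f2fs_disable_compressed_file(struct inode *inode)
  {
          if (!f2fs_compressed_file(inode))
                  return true;    /* not a compressed file, nothing to do */

          /* ... try to clear the per-inode compression state ... */

          return true;            /* success/failure as a plain boolean */
  }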
Signed-off-by: Daeho Jeong <daehojeong@google.com>
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
• f2fs: add block address limit check to compressed file · 4eda1682
Authored by Daeho Jeong
Add a block address range check for the compressed file case and avoid
calling get_data_block_bmap() for compressed files.
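A hedged sketch of the intent in the FIBMAP path; f2fs_bmap_compress() and the exact limit check are placeholders for illustration, not the verbatim change:

  static sector_t f2fs_bmap(struct address_space *mapping, sector_t block)
  {
          struct inode *inode = mapping->host;

          /* reject logical blocks beyond the maximum file size up front
           * (the real code uses f2fs's own max-blocks limit) */
          if (block >= (sector_t)(inode->i_sb->s_maxbytes >> inode->i_blkbits))
                  return 0;

          /* never hand a compressed inode to get_data_block_bmap() */
          if (f2fs_compressed_file(inode))
                  return f2fs_bmap_compress(inode, block);  /* assumed helper */

          return generic_block_bmap(mapping, block, get_data_block_bmap);
  }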
Signed-off-by: Daeho Jeong <daehojeong@google.com>
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
• f2fs: correct statistic of APP_DIRECT_IO/APP_DIRECT_READ_IO · 335cac8b
Authored by Jack Qiu
The APP_DIRECT_IO/APP_DIRECT_READ_IO statistics were not updated when
receiving async DIO. For example:
fio -filename=/data/test.0 -bs=1m -ioengine=libaio -direct=1
      		-name=fill -size=10m -numjobs=1 -iodepth=32 -rw=write
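A conceptual sketch of the accounting fix; the helper below and its call site are assumptions based on the counter names in the subject line, not the exact patch:

  /* Sketch only: account DIO bytes for async (libaio) submissions too,
   * not just for synchronous direct reads and writes. */
  static void f2fs_count_dio(struct kiocb *iocb, ssize_t bytes, bool is_read)
  {
          struct f2fs_sb_info *sbi = F2FS_I_SB(file_inode(iocb->ki_filp));

          if (bytes <= 0 || !(iocb->ki_flags & IOCB_DIRECT))
                  return;

          f2fs_update_iostat(sbi, is_read ? APP_DIRECT_READ_IO : APP_DIRECT_IO,
                             bytes);
  }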
Signed-off-by: Jack Qiu <jack.qiu@huawei.com>
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
• f2fs: support age threshold based garbage collection · 093749e2
Authored by Chao Yu
There are several issues in the current background GC algorithm:
- The valid block count is a key factor in the cost-benefit calculation,
so a segment with few valid blocks is chosen as a victim even when it is
young or sits in a hot area; that is not appropriate.
- GCed data/node blocks go to the existing logs regardless of whether
the data's update frequency matches, so hot and cold data may be mixed
again.
- The GC allocator mainly uses LFS-type segments, which consumes free
segments more quickly.
      
This patch introduces a new algorithm, age threshold based garbage
collection, to solve the above issues. It works in three main steps
(a small model of the candidate selection is sketched after the list):

1. Select a source victim:
- Set an age threshold and select candidates based on it, e.g. with
 0 meaning youngest and 100 meaning oldest, an age threshold of 80
 selects dirty segments whose age is in the range [80, 100] as
 candidates;
- Set a candidate_ratio threshold and select candidates based on that
ratio, so the candidate set shrinks to the oldest segments;
- Pick the segment with the fewest valid blocks, so blocks can be
migrated at minimum cost.

2. Select a target victim:
- Select candidates based on the age threshold;
- Set a candidate_radius threshold and search for candidates whose age
is close to the source victim's; the search radius should be less than
the radius threshold;
- Pick the segment with the most valid blocks, to avoid migrating the
current target segment.

3. Merge valid blocks from the source victim into the target victim
with the SSR allocator.
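As a rough userspace model of step 1 (not the f2fs implementation; the struct, field names, and thresholds are made up for the example), source-victim selection amounts to filtering dirty segments by age and then taking the cheapest one to migrate:

  #include <limits.h>
  #include <stddef.h>

  struct seg { unsigned age; unsigned vblocks; };  /* age: 0 young .. 100 old */

  /* Among segments at least as old as age_threshold, consider at most
   * max_candidates of them (the candidate_ratio-style cap) and return the
   * one with the fewest valid blocks, i.e. the cheapest to migrate. */
  static int pick_source_victim(const struct seg *segs, size_t nr,
                                unsigned age_threshold, size_t max_candidates)
  {
          size_t considered = 0;
          unsigned best = UINT_MAX;
          int victim = -1;

          for (size_t i = 0; i < nr && considered < max_candidates; i++) {
                  if (segs[i].age < age_threshold)
                          continue;               /* too young, skip */
                  considered++;
                  if (segs[i].vblocks < best) {
                          best = segs[i].vblocks;
                          victim = (int)i;
                  }
          }
          return victim;                          /* -1 if no candidate */
  }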
      
      Test steps:
      - create 160 dirty segments:
       * half of them have 128 valid blocks per segment
 * the other half have 384 valid blocks per segment
      - run background GC
      
Benefit: both the GC call count and the block movement count drop noticeably:
      
      - Before:
        - Valid: 86
        - Dirty: 1
        - Prefree: 11
        - Free: 6001 (6001)
      
      GC calls: 162 (BG: 220)
        - data segments : 160 (160)
        - node segments : 2 (2)
      Try to move 41454 blocks (BG: 41454)
        - data blocks : 40960 (40960)
        - node blocks : 494 (494)
      
      IPU: 0 blocks
      SSR: 0 blocks in 0 segments
      LFS: 41364 blocks in 81 segments
      
      - After:
      
        - Valid: 87
        - Dirty: 0
        - Prefree: 4
        - Free: 6008 (6008)
      
      GC calls: 75 (BG: 76)
        - data segments : 74 (74)
        - node segments : 1 (1)
      Try to move 12813 blocks (BG: 12813)
        - data blocks : 12544 (12544)
        - node blocks : 269 (269)
      
      IPU: 0 blocks
      SSR: 12032 blocks in 77 segments
      LFS: 855 blocks in 2 segments
Signed-off-by: Chao Yu <yuchao0@huawei.com>
      [Jaegeuk Kim: fix a bug along with pinfile in-mem segment & clean up]
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
4. 11 Sep 2020, 3 commits
5. 09 Sep 2020, 1 commit
• f2fs: Return EOF on unaligned end of file DIO read · 20d0a107
Authored by Gabriel Krisman Bertazi
On f2fs, reading past the end of file returns EOF for aligned DIO reads
but -EINVAL for unaligned ones. While the documentation is not strict
about this corner case, most filesystems, such as the iomap-based ones,
return EOF here. This patch consolidates the behavior for f2fs by making
it return EOF (0).
      
This can be verified with a read loop on a file that does a partial read
before EOF (a file that does not end at an aligned address). The
following loop fails on such a file on f2fs, but not on btrfs, ext4, or
xfs.
      
        while (done < total) {
          ssize_t delta = pread(fd, buf + done, total - done, off + done);
          if (!delta)
            break;
          ...
        }
      
      It is arguable whether filesystems should actually return EOF or
      -EINVAL, but since iomap filesystems support it, and so does the
      original DIO code, it seems reasonable to consolidate on that.
Signed-off-by: Gabriel Krisman Bertazi <krisman@collabora.com>
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
6. 04 Aug 2020, 1 commit
7. 26 Jul 2020, 2 commits
8. 17 Jul 2020, 1 commit
9. 09 Jul 2020, 1 commit
10. 08 Jul 2020, 8 commits
11. 09 Jun 2020, 2 commits
12. 05 Jun 2020, 1 commit
• f2fs: fix retry logic in f2fs_write_cache_pages() · e78790f8
Authored by Sahitya Tummala
When a compressed file is being overwritten, the current retry logic
does not include the current page in the retry pass, because it sets the
new start index to 0 and the new end index to writeback_index - 1. The
corresponding cluster is then uncompressed and written as normal pages
without compression. Fix this by allowing writeback to be retried for
the current page as well (when a compressed page is retried due to a
mismatch with its cluster index), so that the cluster can be written
compressed in the overwrite case.
      
Also, align f2fs_write_cache_pages() with commit 64081362
("mm/page-writeback.c: fix range_cyclic writeback vs writepages
deadlock").
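For reference, a conceptual fragment of the old range_cyclic retry that the text describes (variable names follow write_cache_pages(); this is not the verbatim patch):

          /* old retry: the wrapped pass stops at writeback_index - 1, so the
           * page whose cluster index mismatched on the first pass is never
           * revisited and its cluster falls back to uncompressed writeback */
          if (!cycled && !done) {
                  cycled = 1;
                  index = 0;
                  end = writeback_index - 1;      /* current page excluded */
                  goto retry;
          }

With the fix, the retry covers the current page as well, so the mismatched cluster gets another chance to be written out compressed.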
Signed-off-by: Sahitya Tummala <stummala@codeaurora.org>
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
13. 04 Jun 2020, 3 commits
14. 03 Jun 2020, 3 commits
15. 12 May 2020, 3 commits
16. 08 May 2020, 2 commits
17. 24 Apr 2020, 1 commit
18. 18 Apr 2020, 1 commit
19. 17 Apr 2020, 1 commit