1. 12 Feb, 2015 (3 commits)
  2. 10 Jan, 2015 (1 commit)
    • f2fs: reuse inode_entry_slab in gc procedure for using slab more effectively · 06292073
      Committed by Chao Yu
      There are two slab caches, inode_entry_slab and winode_slab, that use the same
      structure, as shown below:
      
      struct dir_inode_entry {
      	struct list_head list;	/* list head */
      	struct inode *inode;	/* vfs inode pointer */
      };
      
      struct inode_entry {
      	struct list_head list;
      	struct inode *inode;
      };
      
      It is a bit wasteful that the two caches cannot share their memory with each
      other.
      So this patch removes the redundant winode_slab cache, keeps the more general
      name struct inode_entry for the remaining structure, and reuses inode_entry_slab
      to store both dirty-dir items and gc items, which uses the slab more effectively
      (see the sketch below).
      Signed-off-by: Chao Yu <chao2.yu@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
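      Below is a minimal sketch of the shared-cache idea, assuming the unified
      struct inode_entry above; the helper names (create_inode_entry_cache,
      add_gc_inode) are illustrative rather than f2fs's exact ones, and error
      handling is trimmed.
      
      	static struct kmem_cache *inode_entry_slab;	/* one cache for both users */
      
      	int create_inode_entry_cache(void)
      	{
      		inode_entry_slab = kmem_cache_create("f2fs_inode_entry",
      				sizeof(struct inode_entry), 0,
      				SLAB_RECLAIM_ACCOUNT, NULL);
      		return inode_entry_slab ? 0 : -ENOMEM;
      	}
      
      	/* gc side: remember an inode so it can be iput'ed after the pass */
      	static void add_gc_inode(struct inode *inode, struct list_head *ilist)
      	{
      		struct inode_entry *new_ie;
      
      		new_ie = kmem_cache_alloc(inode_entry_slab, GFP_NOFS);
      		if (!new_ie)
      			return;	/* the real code retries or handles failure */
      		new_ie->inode = inode;
      		list_add_tail(&new_ie->list, ilist);
      	}
      
      	/* the dirty-dir tracking code allocates from the same slab */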
  3. 09 Dec, 2014 (1 commit)
  4. 06 Dec, 2014 (1 commit)
    • f2fs: call radix_tree_preload before radix_tree_insert · 769ec6e5
      Committed by Jaegeuk Kim
      This patch tries to fix:
      
       BUG: using smp_processor_id() in preemptible [00000000] code: f2fs_gc-254:0/384
        (radix_tree_node_alloc+0x14/0x74) from [<c033d8a0>] (radix_tree_insert+0x110/0x200)
        (radix_tree_insert+0x110/0x200) from [<c02e8264>] (gc_data_segment+0x340/0x52c)
        (gc_data_segment+0x340/0x52c) from [<c02e8658>] (f2fs_gc+0x208/0x400)
        (f2fs_gc+0x208/0x400) from [<c02e8a98>] (gc_thread_func+0x248/0x28c)
        (gc_thread_func+0x248/0x28c) from [<c0139944>] (kthread+0xa0/0xac)
        (kthread+0xa0/0xac) from [<c0105ef8>] (ret_from_fork+0x14/0x3c)
      
      The reason is that f2fs calls radix_tree_insert while preemption is enabled,
      so we need to call radix_tree_preload before it (see the sketch below).
      
      Otherwise, we would have to use __GFP_WAIT for the radix tree and cover the
      radix-tree operations with a mutex or semaphore.
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
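      A minimal sketch of the preload pattern the fix applies; the wrapper name and
      the GFP_NOFS mask are assumptions here, only the radix-tree calls themselves
      are the kernel API.
      
      	#include <linux/radix-tree.h>
      
      	static int add_gc_entry(struct radix_tree_root *root,
      				unsigned long ino, void *entry)
      	{
      		int err;
      
      		/* may sleep to refill the per-cpu node pool; on success it
      		 * returns with preemption disabled, which is what makes the
      		 * smp_processor_id() use inside the insert path legal */
      		err = radix_tree_preload(GFP_NOFS);
      		if (err)
      			return err;
      
      		err = radix_tree_insert(root, ino, entry);
      
      		radix_tree_preload_end();	/* re-enables preemption */
      		return err;
      	}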
  5. 03 Dec, 2014 (1 commit)
  6. 28 Nov, 2014 (1 commit)
  7. 20 Nov, 2014 (1 commit)
    • f2fs: avoid unable to restart gc thread in remount · 6c029932
      Committed by Chao Yu
      In f2fs_remount, when the new options no longer include BG_GC, we stop the gc
      thread and set need_restart_gc to true, so that the thread can be restarted if
      any error occurs later in the remount procedure.
      But that restart fails, because start_gc_thread checks BG_GC, which is not set
      in the new options. So we'd better move this condition check out of
      start_gc_thread to fix the issue (see the sketch below).
      Signed-off-by: Chao Yu <chao2.yu@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
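      An illustrative sketch of the resulting control flow (not the exact diff);
      test_opt and start_gc_thread are existing f2fs names, the warning text is
      made up.
      
      	/* normal mount / remount path: honour the BG_GC option here,
      	 * now that start_gc_thread() no longer checks it itself */
      	if (test_opt(sbi, BG_GC)) {
      		err = start_gc_thread(sbi);
      		if (err)
      			goto free_gc;
      	}
      
      	/* f2fs_remount error path: the old options had a gc thread running,
      	 * so restart it unconditionally, regardless of the rejected options */
      	if (need_restart_gc && start_gc_thread(sbi))
      		pr_warn("f2fs: background gc thread has stopped\n");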
  8. 05 Nov, 2014 (1 commit)
  9. 04 Nov, 2014 (1 commit)
  10. 01 Oct, 2014 (2 commits)
    • f2fs: check the use of macros on block counts and addresses · 7cd8558b
      Committed by Jaegeuk Kim
      This patch cleans up the existing and new macros for readability.
      
      The rule is as follows.
      
               ,-----------------------------------------> MAX_BLKADDR -,
               |  ,------------- TOTAL_BLKS ----------------------------,
               |  |                                                     |
               |  ,- seg0_blkaddr   ,----- sit/nat/ssa/main blkaddress  |
      block    |  | (SEG0_BLKADDR)  | | | |   (e.g., MAIN_BLKADDR)      |
      address  0..x................ a b c d .............................
                  |                                                     |
      global seg# 0...................... m .............................
                  |                       |                             |
                  |                       `------- MAIN_SEGS -----------'
                  `-------------- TOTAL_SEGS ---------------------------'
                                          |                             |
       seg#                               0..........xx..................
      
      = Note =
       o GET_SEGNO_FROM_SEG0 : blk address -> global segno
       o GET_SEGNO           : blk address -> segno
       o START_BLOCK         : segno -> starting block address
      Reviewed-by: Chao Yu <chao2.yu@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
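      A heavily simplified sketch of what the diagram maps to; the real macros in
      fs/f2fs/segment.h additionally handle NULL_SEGNO, free-segmap offset
      translation, and similar details, so treat this only as an illustration of
      the address arithmetic.
      
      	/* block address -> global segment number (counted from seg0) */
      	#define GET_SEGNO_FROM_SEG0(sbi, blk_addr)			\
      		(((blk_addr) - SEG0_BLKADDR(sbi)) >> (sbi)->log_blocks_per_seg)
      
      	/* block address -> segment number inside the main area */
      	#define GET_SEGNO(sbi, blk_addr)				\
      		(GET_SEGNO_FROM_SEG0(sbi, blk_addr) -			\
      		 GET_SEGNO_FROM_SEG0(sbi, MAIN_BLKADDR(sbi)))
      
      	/* main-area segment number -> its starting block address */
      	#define START_BLOCK(sbi, segno)					\
      		(MAIN_BLKADDR(sbi) +					\
      		 ((block_t)(segno) << (sbi)->log_blocks_per_seg))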
    • f2fs: introduce cp_control structure · 75ab4cb8
      Committed by Jaegeuk Kim
      This patch adds a new data structure to control checkpoint parameters.
      Currently, it carries the reason for the checkpoint, such as is_umount or a
      normal sync (see the sketch below).
      Reviewed-by: Chao Yu <chao2.yu@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
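      A minimal sketch of the structure and a caller, following the names in the
      message; the exact reason values (e.g. CP_SYNC, CP_UMOUNT) and fields are
      assumptions about this patch.
      
      	struct cp_control {
      		int reason;		/* why this checkpoint is being written */
      	};
      
      	/* e.g. on the sync path */
      	struct cp_control cpc = {
      		.reason = CP_SYNC,	/* CP_UMOUNT when called from put_super */
      	};
      
      	write_checkpoint(sbi, &cpc);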
  11. 24 Sep, 2014 (1 commit)
    • f2fs: fix to search whole dirty segmap when get_victim · 210f41bc
      Committed by Chao Yu
      In ->get_victim we read the max_search value from dirty_i->nr_dirty without
      the protection of seglist_lock, so nr_dirty can be increased or decreased
      before we acquire the lock.
      Then, in the main loop, we attempt to traverse every dirty section once to
      find a victim section, but using max_search as the total loop count is
      inaccurate: if nr_dirty changed in the meantime, we may skip some sections or
      check others redundantly.
      Signed-off-by: Chao Yu <chao2.yu@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
  12. 16 Sep, 2014 (1 commit)
  13. 10 Sep, 2014 (1 commit)
    • f2fs: avoid node page to be written twice in gc_node_segment · 9a01b56b
      Committed by Huang Ying
      In gc_node_segment, if node page gc runs concurrently with node page
      writeback, and check_valid_map and get_node_page run after the page is locked
      but before cur_valid_map is updated, as shown below, the page can be written
      twice unnecessarily.
      
      			sync_node_pages
      			  try_lock_page
      			  ...
      check_valid_map		  f2fs_write_node_page
      			    ...
      			    write_node_page
      			      do_write_page
      			        allocate_data_block
      				  ...
      				  refresh_sit_entry /* update cur_valid_map */
      				  ...
      			    ...
      			    unlock_page
      get_node_page
      ...
      set_page_dirty
      ...
      f2fs_put_page
        unlock_page
      
      This can be solved by calling check_valid_map again after get_node_page (see
      the sketch below).
      Signed-off-by: Huang, Ying <ying.huang@intel.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
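      A sketch of the re-check pattern in gc_node_segment, using the names from the
      message; the surrounding loop and FG_GC/BG_GC handling are omitted.
      
      	node_page = get_node_page(sbi, nid);
      	if (IS_ERR(node_page))
      		continue;
      
      	/* the block may have been migrated while we slept for the page
      	 * lock, so validate the mapping once more before redirtying */
      	if (check_valid_map(sbi, segno, off) == 0) {
      		f2fs_put_page(node_page, 1);
      		continue;
      	}
      
      	set_page_dirty(node_page);
      	f2fs_put_page(node_page, 1);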
  14. 02 Sep, 2014 (1 commit)
    • f2fs: reposition unlock_new_inode to prevent accessing invalid inode · b73e5282
      Committed by Chao Yu
      Due to a race condition on the inode cache, the following scenario can occur:
      [Thread a]				[Thread b]
      					->f2fs_mkdir
      					  ->f2fs_add_link
      					    ->__f2fs_add_link
      					      ->init_inode_metadata failed here
      ->gc_thread_func
        ->f2fs_gc
          ->do_garbage_collect
            ->gc_data_segment
              ->f2fs_iget
                ->iget_locked
                  ->wait_on_inode
      					  ->unlock_new_inode
              ->move_data_page
      					  ->make_bad_inode
      					  ->iput
      
      When create/symlink/mkdir/mknod/tmpfile fails, the newly allocated inode
      should be marked bad to keep other threads from accessing it. In the scenario
      above, however, f2fs can access the invalid inode before it is marked bad.
      This patch fixes that potential problem (see the sketch below); the issue was
      found by code review.
      
      change log from v1:
       o Add condition judgment in gc_data_segment() suggested by Changman Lee.
       o use iget_failed to simplify code.
      Signed-off-by: Chao Yu <chao2.yu@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
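      A hedged sketch of the fixed error path in a namei operation; the surrounding
      setup is trimmed, and the key point is iget_failed(), which marks the inode
      bad, unlocks it, and drops it in one step so no other thread sees a
      usable-looking inode in between.
      
      	inode = f2fs_new_inode(dir, S_IFDIR | mode);
      	if (IS_ERR(inode))
      		return PTR_ERR(inode);
      
      	/* set inode ops etc. omitted */
      
      	err = f2fs_add_link(dentry, inode);
      	if (err)
      		goto out_fail;
      	/* d_instantiate() and unlock_new_inode() on success omitted */
      	return 0;
      
      out_fail:
      	clear_nlink(inode);
      	iget_failed(inode);	/* make_bad_inode + unlock_new_inode + iput */
      	return err;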
  15. 22 Aug, 2014 (1 commit)
  16. 20 Aug, 2014 (1 commit)
  17. 05 Aug, 2014 (1 commit)
  18. 10 Mar, 2014 (1 commit)
  19. 27 Feb, 2014 (1 commit)
  20. 17 Feb, 2014 (2 commits)
  21. 14 Jan, 2014 (1 commit)
  22. 08 Jan, 2014 (1 commit)
    • f2fs: add a sysfs entry to control max_victim_search · b1c57c1c
      Committed by Jaegeuk Kim
      Previously, during SSR and GC, the maximum number of retries to find a victim
      segment was hard-coded by MAX_VICTIM_SEARCH, 4096 by default.
      
      This number affects IO locality when SSR mode is active, which results in
      performance fluctuation on some low-end devices.
      
      If max_victim_search = 4, the victim will be searched like below.
      ("D" represents a dirty segment, and "*" indicates a selected victim segment.)
      
       D1 D2 D3 D4 D5 D6 D7 D8 D9
      [   *       ]
            [   *    ]
                  [         * ]
      	                [ ....]
      
      This patch adds a sysfs entry to control the number dynamically through:
        /sys/fs/f2fs/$dev/max_victim_search
      Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
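      A sketch of how the tunable caps the search inside the victim-selection
      policy; the field names follow the sysfs entry, while the exact placement in
      the policy code is an assumption.
      
      	/* never scan more dirty segments than the sysfs-tunable limit */
      	p->max_search = dirty_i->nr_dirty[type];
      	if (p->max_search > sbi->max_victim_search)
      		p->max_search = sbi->max_victim_search;
      
      At runtime the bound can then be adjusted through the entry above, e.g. by
      writing a smaller value such as 64 into /sys/fs/f2fs/$dev/max_victim_search.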
  23. 23 Dec, 2013 (6 commits)
    • f2fs: remove the rw_flag domain from f2fs_io_info · 7e8f2308
      Committed by Gu Zheng
      When f2fs_io_info is used at the low level, we still need to merge rw and
      rw_flag, so let rw hold all the io flags directly and remove the rw_flag
      field.
      
      P.S. This is based on the previous patch:
      f2fs: move all the bio initialization into __bio_alloc
      Signed-off-by: Gu Zheng <guz.fnst@cn.fujitsu.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
    • f2fs: refactor bio->rw handling · 458e6197
      Committed by Jaegeuk Kim
      This patch introduces f2fs_io_info to mitigate the complex parameter list.
      
      struct f2fs_io_info {
      	enum page_type type;		/* contains DATA/NODE/META/META_FLUSH */
      	int rw;				/* contains R/RS/W/WS */
      	int rw_flag;			/* contains REQ_META/REQ_PRIO */
      }
      
      1. f2fs_write_data_pages
       - DATA
       - WRITE_SYNC is set when wbc->WB_SYNC_ALL.
      
      2. sync_node_pages
       - NODE
       - WRITE_SYNC all the time
      
      3. sync_meta_pages
       - META
       - WRITE_SYNC all the time
       - REQ_META | REQ_PRIO all the time
      
       ** f2fs_submit_merged_bio() handles META_FLUSH.
      
      4. ra_nat_pages, ra_sit_pages, ra_sum_pages
       - META
       - READ_SYNC
      
      Cc: Fan Li <fanofcode.li@samsung.com>
      Cc: Changman Lee <cm224.lee@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
    • f2fs: merge pages with the same sync_mode flag · 63a0b7cb
      Committed by Fan Li
      Previously, f2fs submitted most write requests using WRITE_SYNC, but
      f2fs_write_data_pages submitted the last write requests with the sync_mode
      flag its callers pass.
      
      This causes a performance problem, since contiguous pages with different sync
      flags cannot be merged by the cfq IO scheduler (thanks to Chao Yu for pointing
      it out), and synchronous requests often take more time.
      
      This patch makes the following changes to DATA writeback (see the sketch
      below):
      
      1. every page is written back using the sync mode the caller passes.
      2. only pages with the same sync mode can be merged in one bio request.
      
      These changes are restricted to DATA pages. Other types of writeback are
      modified to remain synchronous.
      
      In my test with tiotest, f2fs sequential write performance improves by about
      7%-10%, and this patch has no obvious impact on other performance tests.
      Signed-off-by: Fan Li <fanofcode.li@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
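      A hedged sketch of the rule for DATA pages; submit_cached_bio() is a
      hypothetical stand-in for f2fs's internal merged-bio submit helper, and the
      surrounding writepage context is omitted.
      
      	int rw = (wbc->sync_mode == WB_SYNC_ALL) ? WRITE_SYNC : WRITE;
      
      	/* only pages with the same rw may share a bio, so a pending bio
      	 * built with a different sync flag has to be submitted first */
      	if (io->bio && io->rw != rw)
      		submit_cached_bio(sbi, DATA, io);	/* hypothetical helper */
      
      	io->rw = rw;
      	/* then merge the current page into io->bio as usual */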
    • f2fs: add unlikely() macro for compiler more aggressively · 6bacf52f
      Committed by Jaegeuk Kim
      This patch adds the unlikely() macro to most of the code (see the example
      below).
      The basic rule is to add it when:
      - checking unusual errors,
      - checking page mappings,
      - and the other unlikely conditions.
      
      Change log from v1:
       - Don't add unlikely for the NULL test and error test: advised by Andi Kleen.
      
      Cc: Chao Yu <chao2.yu@samsung.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Reviewed-by: Chao Yu <chao2.yu@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
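      A typical instance of the pattern after this patch, mirroring the page-mapping
      check the rules above mention (illustrative, not a specific hunk).
      
      	page = find_lock_page(mapping, index);
      	if (!page)
      		goto repeat;
      	/* a truncated or remapped page is the rare case, so hint the compiler */
      	if (unlikely(page->mapping != mapping)) {
      		f2fs_put_page(page, 1);
      		goto repeat;
      	}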
    • f2fs: refactor bio-related operations · 93dfe2ac
      Committed by Jaegeuk Kim
      This patch integrates redundant bio operations on read and write IOs.
      
      1. Move bio-related code to the top of data.c.
      2. Replace f2fs_submit_bio with f2fs_submit_merged_bio, which handles read
         bios additionally.
      3. Introduce __submit_merged_bio to submit the merged bio.
      4. Change f2fs_readpage to f2fs_submit_page_bio.
      5. Introduce f2fs_submit_page_mbio to integrate previous submit_read_page and
         submit_write_page.
      Reviewed-by: Gu Zheng <guz.fnst@cn.fujitsu.com>
      Reviewed-by: Chao Yu <chao2.yu@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
    • f2fs: remove unnecessary condition checks · 031fa8cc
      Committed by Jaegeuk Kim
      This patch removes the unnecessary condition checks on:
      
      fs/f2fs/gc.c:667 do_garbage_collect() warn: 'sum_page' isn't an ERR_PTR
      fs/f2fs/f2fs.h:795 f2fs_put_page() warn: 'page' isn't an ERR_PTR
      Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
  24. 25 Oct, 2013 (3 commits)
  25. 22 Oct, 2013 (1 commit)
  26. 24 Sep, 2013 (1 commit)
    • f2fs: optimize the victim searching loop slightly · a57e564d
      Committed by Jin Xu
      Since MAX_VICTIM_SEARCH has been enlarged from 20 to 4096, the victim
      searching overhead is much higher than before, especially for SSR, which
      searches for a victim quite often.
      This patch intends to reduce the overhead a little bit by:
      - making get_gc_cost an inline routine to reduce function-call overhead
      - reducing multiplication and division operations
      - reducing unnecessary comparison operations
      Signed-off-by: Jin Xu <jinuxstyle@gmail.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
  27. 05 Sep, 2013 (1 commit)
    • f2fs: optimize gc for better performance · a26b7c8a
      Committed by Jin Xu
      This patch improves the gc efficiency by optimizing the victim
      selection policy. With this optimization, the random re-write
      performance could increase up to 20%.
      
      For f2fs, when the disk runs short of free space, gc selects dirty segments
      and moves their valid blocks around to make more space available. The gc cost
      of a segment is determined by the number of valid blocks in the segment: the
      fewer the valid blocks, the higher the efficiency.
      The ideal victim segment is the one that has the most garbage blocks.
      
      Currently, it searches up to 20 dirty segments for a victim segment. The
      selected victim is unlikely to be the best one when there are many more dirty
      segments. Why not search more dirty segments for a better victim? The cost of
      searching dirty segments is negligible in comparison to moving blocks.
      
      This patch enlarges MAX_VICTIM_SEARCH to 4096 to search more aggressively for
      a possibly better victim. Since this also applies to victim selection for SSR,
      it will likely improve SSR efficiency as well.
      
      The test case is simple. It creates files until the disk is full; the size of
      each file is 32KB. Then it writes 100000 records of 4KB to random offsets of
      random files in sync mode. The testing was done on a 2GB partition of an SDHC
      card. Let's look at the test results of f2fs without and with the patch.
      
      ---------------------------------------
      2GB partition, SDHC
      create 52023 files of size 32768 bytes
      random re-write 100000 records of 4KB
      ---------------------------------------
      | file creation (s) | rewrite time (s) | gc count | gc garbage blocks |
      [no patch]  341         4227             1174          174840
      [patched]   324         2958             645           106682
      
      It's obvious that, with the patch, f2fs finishes the test in 20+% less time
      than without it, and internally it does much less gc with higher efficiency
      than before.
      
      Since the performance improvement is related to gc, it might not be so obvious
      in other tests that do not trigger gc as often as this one (f2fs selects dirty
      segments for SSR use most of the time when free space is in shortage). The
      well-known iozone test tool was not used for benchmarking the patch because it
      does not seem to have a test case that performs random re-writes on a full
      disk.
      
      This patch is the revised version based on the suggestion from
      Jaegeuk Kim.
      Signed-off-by: Jin Xu <jinuxstyle@gmail.com>
      [Jaegeuk Kim: suggested simpler solution]
      Reviewed-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
  28. 26 Aug, 2013 (1 commit)
    • f2fs: reserve the xattr space dynamically · de93653f
      Committed by Jaegeuk Kim
      This patch enables the number of direct pointers inside on-disk inode block to
      be changed dynamically according to the size of inline xattr space.
      
      The number of direct pointers, ADDRS_PER_INODE, can be changed only if the file
      has inline xattr flag.
      
      The number of direct pointers that will be used by inline xattrs is defined as
      F2FS_INLINE_XATTR_ADDRS.
      The current patch assigns F2FS_INLINE_XATTR_ADDRS to 0 temporarily (see the
      sketch below).
      Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
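      A hedged sketch of how the per-inode direct-pointer count could be derived;
      the helper name and the inline-xattr flag test are assumptions based on the
      message.
      
      	#define F2FS_INLINE_XATTR_ADDRS	0	/* temporarily 0, per the message */
      
      	static inline unsigned int addrs_per_inode(struct f2fs_inode_info *fi)
      	{
      		/* inline xattrs steal part of the direct-pointer area */
      		if (is_inode_flag_set(fi, FI_INLINE_XATTR))
      			return DEF_ADDRS_PER_INODE - F2FS_INLINE_XATTR_ADDRS;
      		return DEF_ADDRS_PER_INODE;
      	}
      
      	#define ADDRS_PER_INODE(fi)	addrs_per_inode(fi)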
  29. 06 Aug, 2013 (1 commit)
    • f2fs: fix a deadlock in fsync · a569469e
      Committed by Jin Xu
      This patch fixes a deadlock bug that occurs quite often when there are
      concurrent write and fsync operations on the same file.
      
      The following is the simplified call trace when the tasks hang.
      
      fsync thread:
      - f2fs_sync_file
       ...
       - f2fs_write_data_pages
       ...
        - update_extent_cache
        ...
         - update_inode
          - wait_on_page_writeback
      
      bdi writeback thread
      - __writeback_single_inode
       - f2fs_write_data_pages
        - mutex_lock(sbi->writepages)
      
      The deadlock happens when the fsync thread waits on an inode page that has
      been added to f2fs' cached bio sbi->bio[NODE], and unfortunately no one else
      is able to submit the cached bio to the block layer for writeback. This is
      because the fsync thread already holds sbi->fs_lock and the sbi->writepages
      lock, so the bdi thread is blocked when it attempts to write data pages for
      the same inode. At the same time, the f2fs_gc thread does not notice the
      situation and cannot help. Even the sync syscall gets blocked.
      
      To fix it, we submit the cached bio first before waiting on an inode page that
      is being written back (see the sketch below).
      Signed-off-by: Jin Xu <jinuxstyle@gmail.com>
      [Jaegeuk Kim: add more cases to use f2fs_wait_on_page_writeback]
      Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
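      A sketch of the helper's shape as described above; the signature and the
      f2fs_submit_bio() call follow that era's cached-bio scheme from the message,
      so treat the details as assumptions.
      
      	void f2fs_wait_on_page_writeback(struct f2fs_sb_info *sbi,
      					 struct page *page,
      					 enum page_type type, bool sync)
      	{
      		if (PageWriteback(page)) {
      			/* push the cached bio (e.g. sbi->bio[NODE]) to the
      			 * block layer first, otherwise nothing will ever
      			 * complete this page's writeback */
      			f2fs_submit_bio(sbi, type, sync);
      			wait_on_page_writeback(page);
      		}
      	}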