1. 01 Jan, 2016 (1 commit)
  2. 31 Dec, 2015 (6 commits)
  3. 23 Dec, 2015 (1 commit)
  4. 18 Dec, 2015 (1 commit)
  5. 17 Dec, 2015 (3 commits)
  6. 16 Dec, 2015 (2 commits)
  7. 05 Dec, 2015 (5 commits)
  8. 21 Oct, 2015 (1 commit)
  9. 14 Oct, 2015 (1 commit)
    • f2fs crypto: fix racing of accessing encrypted page among different competitors · 08b39fbd
      Committed by Chao Yu
      
      We use different page caches (normally the inode's page cache for R/W
      and the meta inode's page cache for GC) to cache the same physical
      block belonging to an encrypted inode. Writeback of these two page
      caches should be exclusive, but currently the writeback state is not
      handled well, so there is a potential race:
      
      a)
      kworker:				f2fs_gc:
       - f2fs_write_data_pages
        - f2fs_write_data_page
         - do_write_data_page
          - write_data_page
           - f2fs_submit_page_mbio
      (page#1 in the inode's page cache was queued
      in the f2fs bio cache, ready to be written
      to the new blkaddr)
      					 - gc_data_segment
      					  - move_encrypted_block
      					   - pagecache_get_page
      					(page#2 in the meta inode's page cache
      					was cached with the invalid data of
      					the physical block located at the new
      					blkaddr)
      					   - f2fs_submit_page_mbio
      					(page#1 was submitted, later, page#2
      					with invalid data will be submitted)
      
      b)
      f2fs_gc:
       - gc_data_segment
        - move_encrypted_block
         - f2fs_submit_page_mbio
      (page#1 in the meta inode's page cache was
      queued in the f2fs bio cache, ready to be
      written to the new blkaddr)
      					user thread:
      					 - f2fs_write_begin
      					  - f2fs_submit_page_bio
      					(we submit the request to the block
      					layer to update page#2 in the inode's
      					page cache with the physical block
      					located at the new blkaddr, so here we
      					may read garbage data from the new
      					blkaddr since GC hasn't written back
      					page#1 yet)
      
      This patch fixes the above potential race for encrypted inodes.
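
      The core of the fix, as a minimal sketch (the helper name and call
      site below are assumptions for illustration, not the patch's exact
      code): before writing an encrypted block through one page cache,
      wait for any in-flight writeback of the same blkaddr in the other
      page cache.

      /*
       * Illustrative sketch: serialize with any writeback of this block
       * going through the meta inode's page cache (the GC path).
       */
      static void wait_on_encrypted_block_writeback(struct f2fs_sb_info *sbi,
      					block_t blkaddr)
      {
      	struct page *page;

      	/* GC caches encrypted blocks in the meta inode's mapping */
      	page = find_get_page(META_MAPPING(sbi), blkaddr);
      	if (page) {
      		wait_on_page_writeback(page);
      		page_cache_release(page);
      	}
      }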
      Signed-off-by: Chao Yu <chao2.yu@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
  10. 13 Oct, 2015 (6 commits)
  11. 10 Oct, 2015 (8 commits)
  12. 27 Aug, 2015 (1 commit)
    • f2fs: update extent tree in batches · 19b2c30d
      Committed by Chao Yu
      This patch introduces a new helper, f2fs_update_extent_tree_range,
      which updates extent mappings over a specified range.

      The main idea is (sketched below):
      1) punch out all mapping info in the extent node(s) that overlap the
         specified range;
      2) try to merge the new extent mapping with an adjacent node, or
         failing that, insert the mapping into the extent tree as a new
         node.
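
      A toy user-space model of those two steps (everything here is
      illustrative: a sorted array stands in for the rbtree, and the names
      are made up, not f2fs code):

      #include <string.h>

      struct extent { unsigned fofs, blk, len; };	/* file offset, start block, length */

      static struct extent tree[16];
      static int count;

      /* Step 1: punch out the parts of existing extents overlapping [fofs, fofs+len). */
      static void punch_range(unsigned fofs, unsigned len)
      {
      	struct extent out[17];	/* one extra slot for a middle split */
      	int n = 0, i;

      	for (i = 0; i < count; i++) {
      		struct extent e = tree[i];
      		unsigned end = e.fofs + e.len, rend = fofs + len;

      		if (end <= fofs || e.fofs >= rend) {	/* no overlap: keep whole */
      			out[n++] = e;
      			continue;
      		}
      		if (e.fofs < fofs)	/* keep the head piece */
      			out[n++] = (struct extent){ e.fofs, e.blk, fofs - e.fofs };
      		if (end > rend)		/* keep the tail piece */
      			out[n++] = (struct extent){ rend, e.blk + (rend - e.fofs), end - rend };
      	}
      	memcpy(tree, out, n * sizeof(*out));
      	count = n;
      }

      /* Step 2: merge with a back-adjacent node, else insert as a new node. */
      static void insert_extent(struct extent ne)
      {
      	int i;

      	for (i = 0; i < count; i++) {
      		struct extent *e = &tree[i];

      		if (e->fofs + e->len == ne.fofs && e->blk + e->len == ne.blk) {
      			e->len += ne.len;	/* contiguous: extend in place */
      			return;
      		}
      	}
      	tree[count++] = ne;	/* simplified: append; real code keeps rbtree order */
      }

      void update_extent_tree_range(unsigned fofs, unsigned blk, unsigned len)
      {
      	punch_range(fofs, len);
      	if (len)
      		insert_extent((struct extent){ fofs, blk, len });
      }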
      
      In order to measure the benefit, I added a function that reads the
      CPU time stamp counter (TSC), as below:
      
      #include <stdint.h>

      /* Read the x86 time stamp counter; EDX:EAX holds the 64-bit value. */
      uint64_t rdtsc(void)
      {
      	uint32_t lo, hi;
      	__asm__ __volatile__ ("rdtsc" : "=a" (lo), "=d" (hi));
      	return (uint64_t)hi << 32 | lo;
      }
      
      My test environment: Ubuntu, Intel i7-3770, 16GB memory, 256GB Micron SSD.
      
      truncation path:	update extent cache from truncate_data_blocks_range
      non-truncation path:	update extent cache from other paths
      total:			all update paths
      
      a) Removing 128MB file which has one extent node mapping whole range of
      file:
      1. dd if=/dev/zero of=/mnt/f2fs/128M bs=1M count=128
      2. sync
      3. rm /mnt/f2fs/128M
      
      Before:
      		total		count		average
      truncation:	7651022		32768		233.49
      
      Patched:
      		total		count		average
      truncation:	3321		33		100.64
      
      b) fsstress:
      fsstress -d /mnt/f2fs -l 5 -n 100 -p 20
      Test runs:		5 (results averaged)
      
      Before:
      		total		count		average
      truncation:	5812480.6	20911.6		277.95
      non-truncation:	7783845.6	13440.8		579.12
      total:		13596326.2	34352.4		395.79
      
      Patched:
      		total		count		average
      truncation:	1281283.0	3041.6		421.25
      non-truncation:	7355844.4	13662.8		538.38
      total:		8637127.4	16704.4		517.06
      
      1) For the updates in the truncation path:
       - updating in batches clearly reduces both the total TSC and the
         update count;
       - besides, a single batched update punches multiple extent nodes in
         a loop and so executes more operations, which is why the average
         TSC per update increases markedly.
      2) For the updates in the non-truncation path:
       - there is a small improvement, because when we only need to update
         the head or tail of an extent node, the new interface updates the
         info in the extent node directly, rather than removing the
         original extent node and then inserting the updated one back into
         the cache as a new node.
      Signed-off-by: Chao Yu <chao2.yu@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
  13. 25 Aug, 2015 (3 commits)
    • f2fs: fix to release inode correctly · 13ec7297
      Committed by Chao Yu
      In the following call stack, if we unfortunately lose every chance to
      truncate the inode page in remove_inode_page, we will eventually add
      the previously allocated nid into the free nid cache; this nid then
      has NID_NEW status with NEW_ADDR in its blkaddr pointer:
      
       - f2fs_create
        - f2fs_add_link
         - __f2fs_add_link
          - init_inode_metadata
           - new_inode_page
            - new_node_page
             - set_node_addr(, NEW_ADDR)
           - f2fs_init_acl   failed
           - remove_inode_page  failed
        - handle_failed_inode
         - remove_inode_page  failed
         - iput
          - f2fs_evict_inode
           - remove_inode_page  failed
       - alloc_nid_failed   caches a nid whose blkaddr is still NEW_ADDR
      
      This may not only leak the resources of the previous inode, but may
      also cause incorrect use of the previous blkaddr recorded in that
      nid's node entry once the nid is reused by others.
      
      This patch adds the inode to the orphan list when we fail to truncate
      it, so that we get a second chance to release it in the orphan
      recovery flow.
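
      A minimal sketch of that second-chance fallback (the error handling
      and call shapes are simplified assumptions for illustration;
      add_orphan_inode and f2fs_truncate are existing f2fs functions of
      that era, but this is not the patch's exact code):

      	/* inside the failure path, e.g. handle_failed_inode() */
      	if (f2fs_truncate(inode))
      		/* truncation failed: record the inode as an orphan so
      		 * orphan recovery can release it on a later mount */
      		add_orphan_inode(sbi, inode->i_ino);
      	iput(inode);	/* drop the last reference as before */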
      Signed-off-by: Chao Yu <chao2.yu@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
    • f2fs: handle f2fs_truncate error correctly · b0154891
      Committed by Chao Yu
      This patch makes f2fs_truncate return its error number, so that
      callers can handle the error correctly.
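
      A caller can now propagate the failure instead of ignoring it; a
      sketch (the surrounding call site is illustrative):

      	err = f2fs_truncate(inode);
      	if (err)
      		return err;	/* report instead of silently continuing */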
      Signed-off-by: Chao Yu <chao2.yu@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
    • f2fs: use __GFP_NOFAIL to avoid infinite loop · 80c54505
      Committed by Jaegeuk Kim
      With __GFP_NOFAIL the allocator itself retries until it succeeds, so
      we can drop the open-coded retry loops around kmem_cache_alloc and
      bio_alloc. It also fixes the incorrect uses of GFP_ATOMIC.
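
      The shape of the change, roughly (illustrative; the exact call sites
      differ in the patch):

      	struct bio *bio;

      	/* before: loop forever until the allocation succeeds */
      	do {
      		bio = bio_alloc(GFP_NOIO, npages);
      	} while (!bio);

      	/* after: the allocator retries internally, never returns NULL */
      	bio = bio_alloc(GFP_NOIO | __GFP_NOFAIL, npages);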
      Suggested-by: Chao Yu <chao2.yu@samsung.com>
      Reviewed-by: Chao Yu <chao2.yu@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
  14. 22 Aug, 2015 (1 commit)
    • f2fs: adjust showing of extent cache stat · 029e13cc
      Committed by Chao Yu
      This patch replaces the total hit stat with an rbtree hit stat, and
      adjusts how the extent cache stats are shown:

      Hit Count:
      L1-1: largest node hit count;
      L1-2: last cached node hit count;
      L2: extent node hits found by looking up the rbtree.
      
      Hit Ratio:
      ratio (hit count / total lookup count)
      
      Inner Struct Count:
      tree count, node count.
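
      The ratio shown below is simply the combined hits over all lookups; a
      sketch with assumed counter names (not the f2fs debug code itself):

      	/* counter names are assumptions for illustration */
      	unsigned long long total_hit = hit_largest + hit_cached + hit_rbtree;
      	/* patched output below: (4871 + 2074 + 208) = 7153 hits, 550751 lookups */
      	unsigned int ratio = total_lookup ? total_hit * 100 / total_lookup : 0;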
      
      Before:
      Extent Hit Ratio: 0 / 2
      
      Extent Tree Count: 3
      
      Extent Node Count: 2
      
      Patched:
      Extent Cache:
        - Hit Count: L1-1:4871 L1-2:2074 L2:208
        - Hit Ratio: 1% (7153 / 550751)
        - Inner Struct Count: tree: 26560, node: 11824
      Signed-off-by: Chao Yu <chao2.yu@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>