1. 01 Jan, 2016 1 commit
  2. 31 Dec, 2015 6 commits
  3. 17 Dec, 2015 2 commits
  4. 16 Dec, 2015 1 commit
  5. 15 Dec, 2015 2 commits
  6. 10 Dec, 2015 1 commit
  7. 05 Dec, 2015 4 commits
  8. 22 Oct, 2015 1 commit
  9. 14 Oct, 2015 1 commit
    •
      f2fs crypto: fix racing of accessing encrypted page among different competitors · 08b39fbd
      Chao Yu authored
      
      We use two different page caches (normally the inode's page cache for
      R/W and the meta inode's page cache for GC) to cache the same physical
      block belonging to an encrypted inode. Writeback from these two page
      caches should be exclusive, but we do not currently handle the
      writeback state well, so there are potential races:
      
      a)
      kworker:				f2fs_gc:
       - f2fs_write_data_pages
        - f2fs_write_data_page
         - do_write_data_page
          - write_data_page
           - f2fs_submit_page_mbio
      (page#1 in the inode's page cache was
      queued in the f2fs bio cache, ready to
      be written to the new blkaddr)
      					 - gc_data_segment
      					  - move_encrypted_block
      					   - pagecache_get_page
      					(page#2 in the meta inode's page cache
      					was filled with the invalid data of
      					the physical block located at the new
      					blkaddr)
      					   - f2fs_submit_page_mbio
      					(page#1 is submitted; later, page#2
      					with invalid data will be submitted)
      
      b)
      f2fs_gc:
       - gc_data_segment
        - move_encrypted_block
         - f2fs_submit_page_mbio
      (page#1 in the meta inode's page cache
      was queued in the f2fs bio cache, ready
      to be written to the new blkaddr)
      					user thread:
      					 - f2fs_write_begin
      					  - f2fs_submit_page_bio
      					(we submit a request to the block
      					layer to update page#2 in the inode's
      					page cache with the physical block
      					located at the new blkaddr, so we may
      					read garbage data from the new blkaddr
      					since GC hasn't written back page#1 yet)
      
      This patch fixes the above potential races for encrypted inodes.
      Signed-off-by: Chao Yu <chao2.yu@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      08b39fbd
  10. 13 Oct, 2015 3 commits
  11. 10 Oct, 2015 7 commits
  12. 27 Aug, 2015 1 commit
    •
      f2fs: update extent tree in batches · 19b2c30d
      Chao Yu authored
      This patch introduces a new helper, f2fs_update_extent_tree_range,
      which can update the extent mapping over a specified range.
      
      The main idea is:
      1) punch out all mapping info in the extent node(s) that fall within
         the specified range;
      2) try to merge the new extent mapping with an adjacent node, or
         failing that, insert the mapping into the extent tree as a new node.
      
      To measure the benefit, I added a helper for collecting
      timestamp-counter statistics:
      
      uint64_t rdtsc(void)
      {
      	uint32_t lo, hi;
      	__asm__ __volatile__ ("rdtsc" : "=a" (lo), "=d" (hi));
      	return (uint64_t)hi << 32 | lo;
      }
      
      My test environment: Ubuntu, Intel i7-3770, 16GB memory, 256GB Micron SSD.
      
      truncation path:	update extent cache from truncate_data_blocks_range
      non-truncation path:	update extent cache from other paths
      total:			all update paths
      
      a) Removing 128MB file which has one extent node mapping whole range of
      file:
      1. dd if=/dev/zero of=/mnt/f2fs/128M bs=1M count=128
      2. sync
      3. rm /mnt/f2fs/128M
      
      Before:
      		total		count		average
      truncation:	7651022		32768		233.49
      
      Patched:
      		total		count		average
      truncation:	3321		33		100.64
      
      b) fsstress:
      fsstress -d /mnt/f2fs -l 5 -n 100 -p 20
      Test times:		5 times.
      
      Before:
      		total		count		average
      truncation:	5812480.6	20911.6		277.95
      non-truncation:	7783845.6	13440.8		579.12
      total:		13596326.2	34352.4		395.79
      
      Patched:
      		total		count		average
      truncation:	1281283.0	3041.6		421.25
      non-truncation:	7355844.4	13662.8		538.38
      total:		8637127.4	16704.4		517.06
      
      1) For the updates in the truncation path:
       - batched updates clearly reduce the total tsc and the update count;
       - however, a single batched update punches multiple extent nodes in a
         loop and thus executes more operations, so the average tsc per
         update increases noticeably.
      2) For the updates in the non-truncation path:
       - there is a small improvement because, in the scenario where we only
         need to update the head or tail of an extent node, the new
         interface updates the node in place instead of removing the
         original node and then inserting the updated one back into the
         cache as a new node.
      Signed-off-by: Chao Yu <chao2.yu@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      19b2c30d
  13. 25 Aug, 2015 1 commit
  14. 21 Aug, 2015 1 commit
  15. 11 Aug, 2015 1 commit
  16. 05 Aug, 2015 7 commits
    •
      f2fs: handle error cases in commit_inmem_pages · edb27dee
      Jaegeuk Kim authored
      This patch adds error handling to commit_inmem_pages.
      If an error occurs, it stops writing the pages and returns the error
      right away.
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      edb27dee
    •
      f2fs: convert inline data before set atomic/volatile flag · f4c9c743
      Chao Yu authored
      In f2fs_ioc_start_{atomic,volatile}_write, if converting inline data
      fails, we report the error to the user but leave the atomic/volatile
      flag set in the inode, which impacts further writes to this file.
      Fix it.
      Signed-off-by: Chao Yu <chao2.yu@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      f4c9c743
    •
      f2fs: fix to wait all atomic written pages writeback · a5f64b6a
      Chao Yu authored
      This patch fixes the incorrect range (0, LONG_MAX) used in ranged
      fsync. If we use LONG_MAX as the parameter indicating the end of the
      file we want to synchronize, then on 32-bit architectures data beyond
      the 4GB offset may not be persisted to storage after ->fsync returns.
      
      Here, we alter LONG_MAX to LLONG_MAX to fix this issue.
      Signed-off-by: Chao Yu <chao2.yu@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      a5f64b6a
    •
      f2fs: fix double lock in handle_failed_inode · 55f57d2c
      Chao Yu authored
      In handle_failed_inode, there is a potential deadlock in the call path
      below:
      
      - f2fs_create
       - f2fs_lock_op   down_read(cp_rwsem)
       - f2fs_add_link
        - __f2fs_add_link
         - init_inode_metadata
          - f2fs_init_security    failed
          - truncate_blocks    failed
       - handle_failed_inode
        - f2fs_truncate
         - truncate_blocks(..,true)
      					- write_checkpoint
      					 - block_operations
      					  - f2fs_lock_all  down_write(cp_rwsem)
          - f2fs_lock_op   down_read(cp_rwsem)
      
      So in this path, we pass a parameter to f2fs_truncate to make sure
      truncate_blocks does not take cp_rwsem again.
      Signed-off-by: Chao Yu <chao2.yu@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      55f57d2c
    •
      f2fs: reduce region of cp_rwsem covered in f2fs_do_collapse · ecbaa406
      Chao Yu authored
      In f2fs_do_collapse, the region covered by cp_rwsem is large, since
      the lock is held until all blocks have been shifted left; so if we
      collapse a small area at the beginning of a large file, a checkpoint
      that wants to grab the writer's lock on cp_rwsem can be delayed for a
      long time.
      
      To avoid this, lock and unlock cp_rwsem around each shift operation
      instead.
      Signed-off-by: Chao Yu <chao2.yu@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      ecbaa406
    •
      f2fs: warm up cold page after mmaped write · 5b339124
      Chao Yu authored
      With the cost-benefit method, background gc considers old sections
      with fewer valid blocks as candidate victims; the old blocks in such a
      section are treated as cold data and will later be moved into a cold
      segment.
      
      But if a page being gc'ed is touched by the user through a buffered or
      mmaped write, we should reset it as non-cold, because the page is
      likely to be updated again.
      
      So add the missing flag-clearing code for the 'mmap' case.
      Signed-off-by: Chao Yu <chao2.yu@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      5b339124
    •
      f2fs: add new ioctl F2FS_IOC_GARBAGE_COLLECT · c1c1b583
      Chao Yu authored
      When background gc is off, the only way to trigger gc is to run a
      forced gc from an operation that wants to grab disk space.
      
      The conditions for running it are limited: to run a forced gc, we must
      wait until there is almost no free section left for LFS allocation.
      This is not reasonable for users who want to control gc triggering
      themselves.
      
      This patch introduces the F2FS_IOC_GARBAGE_COLLECT interface for
      triggering garbage collection via ioctl. It gives our users one more
      option for triggering gc.
      Signed-off-by: Chao Yu <chao2.yu@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      c1c1b583