1. 28 May 2013 (2 commits)
    • f2fs: skip get_node_page if locked node page is passed · 1646cfac
      Committed by Jaegeuk Kim
      If get_dnode_of_data gets a locked node page, let's skip redundant
      get_node_page calls.
      This is for further enhancement.
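      A minimal sketch of the change (using dn->inode_page as the carrier of the
      pre-locked page follows f2fs of this era; exact placement is illustrative):

      	/* in get_dnode_of_data(): reuse a node page the caller already
      	 * locked and passed in, instead of looking it up again */
      	npage[0] = dn->inode_page;
      	if (!npage[0]) {
      		npage[0] = get_node_page(sbi, nids[0]);
      		if (IS_ERR(npage[0]))
      			return PTR_ERR(npage[0]);
      	}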
      Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
    • f2fs: fix inconsistency of block count during recovery · 65e5cd0a
      Committed by Jaegeuk Kim
      Currently f2fs recovers the dentry of fsynced files.
      When power-off recovery is conducted, such a newly recovered inode should
      increase the node block count as well as the inode block count.

      This patch resolves the inconsistency, which otherwise plays out as follows:
      
      1. create a file
      2. write data
      3. fsync
      4. reboot without sync
      5. mount and recover the file
      6. node block count is 1 and inode block count is 2
       : fall into the inconsistent state
      7. unlink the file
       : trigger the following BUG_ON
      
      ------------[ cut here ]------------
      kernel BUG at /home/zeus/f2fs_test/src/fs/f2fs/f2fs.h:716!
      Call Trace:
       [<ffffffffa0344100>] ? get_node_page+0x50/0x1a0 [f2fs]
       [<ffffffffa0344bfc>] remove_inode_page+0x8c/0x100 [f2fs]
       [<ffffffffa03380f0>] ? f2fs_evict_inode+0x180/0x2d0 [f2fs]
       [<ffffffffa033812e>] f2fs_evict_inode+0x1be/0x2d0 [f2fs]
       [<ffffffff811c7a67>] evict+0xa7/0x1a0
       [<ffffffff811c82b5>] iput+0x105/0x190
       [<ffffffff811c2b30>] d_kill+0xe0/0x120
       [<ffffffff811c2c57>] dput+0xe7/0x1e0
       [<ffffffff811acc3d>] __fput+0x19d/0x2d0
       [<ffffffff811acd7e>] ____fput+0xe/0x10
       [<ffffffff81070645>] task_work_run+0xb5/0xe0
       [<ffffffff81002941>] do_notify_resume+0x71/0xb0
       [<ffffffff8175f14a>] int_signal+0x12/0x17
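      The fix, roughly sketched (the exact call site in the recovery path is an
      assumption; inc_valid_node_count() is the existing f2fs accounting helper):

      	/* when recovering the inode's node page, account its node block
      	 * so the later truncation/eviction path sees consistent counts */
      	if (!inc_valid_node_count(sbi, NULL, 1))
      		WARN_ON(1);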
      Reported-and-Tested-by: Chris Fries <C.Fries@motorola.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
  2. 08 May 2013 (4 commits)
  3. 30 April 2013 (2 commits)
    • f2fs: modify the number of issued pages to merge IOs · ac5d156c
      Committed by Jaegeuk Kim
      When testing f2fs on an SSD, I found that some 128-page IOs followed by a
      1-page IO were issued by f2fs_write_node_pages.
      This means there was some mishandling in the flow, which degrades performance.
      
      The previous f2fs_write_node_pages determined the number of pages to be
      written, nr_to_write, as follows.
      
      1. bio_get_nr_vecs returns 129 pages.
      2. bio_alloc makes room for 128 pages.
      3. The initial 128 pages go into one bio.
      4. That bio is submitted, and a new bio is prepared for the last page.
      5. Finally, sync_node_pages submits the last one-page bio.
      
      The problem stems from the use of bio_get_nr_vecs, so this patch replaces
      it with max_hw_blocks, which uses queue_max_sectors.
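      A sketch of the replacement helper (SECTOR_TO_BLOCK is the f2fs
      sector-to-block conversion macro; treat the exact form as illustrative):

      	/* size node writeback batches by the device's maximum request
      	 * size, so each batch can be merged into a single IO */
      	static inline block_t max_hw_blocks(struct f2fs_sb_info *sbi)
      	{
      		struct block_device *bdev = sbi->sb->s_bdev;
      		struct request_queue *q = bdev_get_queue(bdev);

      		return SECTOR_TO_BLOCK(sbi, queue_max_sectors(q));
      	}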
      Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
    • f2fs: fix inconsistent using of NM_WOUT_THRESHOLD · 6cac3759
      Committed by Haicheng Li
      try_to_free_nats() is usually called by flush_nat_entries() during the
      checkpoint process, with the parameter nr_shrink set to
      	"nm_i->nat_cnt - NM_WOUT_THRESHOLD".

      However, this is inconsistent with the actual threshold check,
      	"if (nm_i->nat_cnt < 2 * NM_WOUT_THRESHOLD)"
      , which ignores the free_nats requests whenever
      	NM_WOUT_THRESHOLD < nm_i->nat_cnt < 2 * NM_WOUT_THRESHOLD.

      So fix the threshold check condition.
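      A sketch of the corrected guard in try_to_free_nats() (as I read the
      change; the old condition is the one quoted above):

      	/* old check: nm_i->nat_cnt < 2 * NM_WOUT_THRESHOLD, which let
      	 * counts between the two thresholds skip shrinking entirely */
      	if (nm_i->nat_cnt <= NM_WOUT_THRESHOLD)
      		return 0;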
      Signed-off-by: Haicheng Li <haicheng.li@linux.intel.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
  4. 29 April 2013 (2 commits)
  5. 26 April 2013 (2 commits)
    • f2fs: check nid == 0 in add_free_nid · 9198aceb
      Committed by Jaegeuk Kim
      It is more obvious for add_free_nid itself to check whether the free nid is zero or not.
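      A minimal sketch of the check (its position at the top of add_free_nid is
      an assumption):

      	static int add_free_nid(struct f2fs_nm_info *nm_i, nid_t nid)
      	{
      		/* nid 0 is reserved and must never enter the free list */
      		if (nid == 0)
      			return 0;
      		/* ... allocate and insert the free_nid entry ... */
      	}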
      Reviewed-by: Namjae Jeon <namjae.jeon@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
    • f2fs: give a chance to merge IOs by IO scheduler · c718379b
      Committed by Jaegeuk Kim
      Previously, background GC submitted many 4KB read requests to load victim
      blocks and/or their (i)node blocks.
      
      ...
      f2fs_gc : f2fs_readpage: ino = 1, page_index = 0xb61, blkaddr = 0x3b964ed
      f2fs_gc : block_rq_complete: 8,16 R () 499854968 + 8 [0]
      f2fs_gc : f2fs_readpage: ino = 1, page_index = 0xb6f, blkaddr = 0x3b964ee
      f2fs_gc : block_rq_complete: 8,16 R () 499854976 + 8 [0]
      f2fs_gc : f2fs_readpage: ino = 1, page_index = 0xb79, blkaddr = 0x3b964ef
      f2fs_gc : block_rq_complete: 8,16 R () 499854984 + 8 [0]
      ...
      
      However, since many of these IOs are sequential, we can give the IO
      scheduler a chance to merge them.
      In order to do that, let's use blk_plug (a sketch follows the note below).
      
      ...
      f2fs_gc : f2fs_iget: ino = 143
      f2fs_gc : f2fs_readpage: ino = 143, page_index = 0x1c6, blkaddr = 0x2e6ee
      f2fs_gc : f2fs_iget: ino = 143
      f2fs_gc : f2fs_readpage: ino = 143, page_index = 0x1c7, blkaddr = 0x2e6ef
      <idle> : block_rq_complete: 8,16 R () 1519616 + 8 [0]
      <idle> : block_rq_complete: 8,16 R () 1519848 + 8 [0]
      <idle> : block_rq_complete: 8,16 R () 1520432 + 96 [0]
      <idle> : block_rq_complete: 8,16 R () 1520536 + 104 [0]
      <idle> : block_rq_complete: 8,16 R () 1521008 + 112 [0]
      <idle> : block_rq_complete: 8,16 R () 1521440 + 152 [0]
      <idle> : block_rq_complete: 8,16 R () 1521688 + 144 [0]
      <idle> : block_rq_complete: 8,16 R () 1522128 + 192 [0]
      <idle> : block_rq_complete: 8,16 R () 1523256 + 328 [0]
      ...
      
      Note that this issue should also be addressed in the checkpoint and some
      readahead flows.
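      A sketch of the plugging (blk_start_plug()/blk_finish_plug() are the stock
      block-layer API; the work being wrapped is illustrative):

      	struct blk_plug plug;

      	blk_start_plug(&plug);
      	/* issue the per-victim 4KB read requests here; they queue up
      	 * on the plug instead of being dispatched one by one */
      	blk_finish_plug(&plug);	/* unplug: adjacent requests can merge */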
      Reviewed-by: Namjae Jeon <namjae.jeon@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
  6. 23 April 2013 (1 commit)
  7. 09 April 2013 (1 commit)
    • f2fs: introduce a new global lock scheme · 39936837
      Committed by Jaegeuk Kim
      In the previous version, f2fs used global locks according to usage type,
      such as directory operations, block allocation, block write, and so on.
      
      See the following lock types in f2fs.h.
      enum lock_type {
      	RENAME,		/* for renaming operations */
      	DENTRY_OPS,	/* for directory operations */
      	DATA_WRITE,	/* for data write */
      	DATA_NEW,	/* for data allocation */
      	DATA_TRUNC,	/* for data truncate */
      	NODE_NEW,	/* for node allocation */
      	NODE_TRUNC,	/* for node truncate */
      	NODE_WRITE,	/* for node write */
      	NR_LOCK_TYPE,
      };
      
      In that case, we lose performance in a multi-threaded environment, since
      every type of operation must be conducted one at a time.
      
      In order to address the problem, let's share the locks globally through a
      mutex array, regardless of type.
      So, let users grab a mutex and perform their jobs in parallel as much as
      possible.
      
      For this, I propose a new global lock scheme as follows.
      
      0. Data structure
       - f2fs_sb_info -> mutex_lock[NR_GLOBAL_LOCKS]
       - f2fs_sb_info -> node_write
      
      1. mutex_lock_op(sbi)
       - tries to get an available lock from the array.
       - returns the index of the acquired lock.
      
      2. mutex_unlock_op(sbi, index of the lock)
       - unlocks the lock at the given index.
      
      3. mutex_lock_all(sbi)
       - grab all the locks in the array before the checkpoint.
      
      4. mutex_unlock_all(sbi)
       - release all the locks in the array after the checkpoint.
      
      5. block_operations()
       - call mutex_lock_all()
       - sync_dirty_dir_inodes()
       - grab node_write
       - sync_node_pages()
      
      Note that the pairs of mutex_lock_op()/mutex_unlock_op() and
      mutex_lock_all()/mutex_unlock_all() should be used together.
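      A sketch of the array-lock helpers described above (field names such as
      fs_lock and next_lock_num follow the patch as I recall it):

      	static inline int mutex_lock_op(struct f2fs_sb_info *sbi)
      	{
      		unsigned char next_lock = sbi->next_lock_num % NR_GLOBAL_LOCKS;
      		int i;

      		/* fast path: take any currently free lock in the array */
      		for (i = 0; i < NR_GLOBAL_LOCKS; i++)
      			if (mutex_trylock(&sbi->fs_lock[i]))
      				return i;

      		/* all busy: block on a round-robin victim for fairness */
      		mutex_lock(&sbi->fs_lock[next_lock]);
      		sbi->next_lock_num++;
      		return next_lock;
      	}

      	static inline void mutex_unlock_op(struct f2fs_sb_info *sbi, int ilock)
      	{
      		mutex_unlock(&sbi->fs_lock[ilock]);
      	}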
      Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
  8. 03 April 2013 (3 commits)
  9. 31 March 2013 (1 commit)
  10. 27 March 2013 (1 commit)
  11. 20 March 2013 (5 commits)
  12. 18 March 2013 (7 commits)
  13. 12 February 2013 (4 commits)
  14. 08 February 2013 (2 commits)
  15. 22 January 2013 (2 commits)
  16. 28 December 2012 (1 commit)