1. 08 May 2016, 5 commits
  2. 28 April 2016, 1 commit
    • f2fs: move node pages only in victim section during GC · da011cc0
      Chao Yu authored
      For foreground GC, we cache node blocks in the victim section and set them
      dirty, then call sync_node_pages to flush these node pages. Meanwhile,
      however, node pages that are not located in the victim section get flushed
      together with them, wasting bandwidth and occupying contiguous free space
      unnecessarily.
      
      For this case, it is better to leave those unrelated node pages in the
      cache for future write hits, and let checkpoint (CP) or the VM flush them later.
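      To illustrate the idea, here is a minimal user-space sketch of the "does
      this block belong to the victim section?" test. The helper and its
      parameters (blks_per_seg, segs_per_sec) only mirror f2fs layout notions;
      this is not the kernel implementation.

      #include <stdbool.h>
      #include <stdio.h>

      /* Illustrative only: flush a node page during GC only when its block
       * falls inside the section selected as the GC victim; everything else
       * stays cached for later write hits. */
      static bool blk_in_victim_section(unsigned int blkaddr,
                                        unsigned int victim_secno,
                                        unsigned int blks_per_seg,
                                        unsigned int segs_per_sec)
      {
              unsigned int segno = blkaddr / blks_per_seg;
              unsigned int secno = segno / segs_per_sec;

              return secno == victim_secno;
      }

      int main(void)
      {
              /* Example: 512 blocks per segment, 1 segment per section. */
              printf("%d\n", blk_in_victim_section(1200, 2, 512, 1)); /* 1: flush it   */
              printf("%d\n", blk_in_victim_section(9000, 2, 512, 1)); /* 0: keep cached */
              return 0;
      }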
      Signed-off-by: Chao Yu <yuchao0@huawei.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      da011cc0
  3. 27 April 2016, 2 commits
  4. 15 April 2016, 3 commits
    • f2fs: fix to convert inline directory correctly · 675f10bd
      Chao Yu authored
      With the following sequence of steps, we lose part of the dirents:
      
      1) mount f2fs with inline_dentry option
      2) echo 1 > /sys/fs/f2fs/sdX/dir_level
      3) mkdir dir
      4) touch 180 files named [1-180] in dir
      5) touch 181 in dir
      6) echo 3 > /proc/sys/vm/drop_caches
      7) ll dir
      
      ls: cannot access 2: No such file or directory
      ls: cannot access 4: No such file or directory
      ls: cannot access 5: No such file or directory
      ls: cannot access 6: No such file or directory
      ls: cannot access 8: No such file or directory
      ls: cannot access 9: No such file or directory
      ...
      total 360
      drwxr-xr-x 2 root root 4096 Feb 19 15:12 ./
      drwxr-xr-x 3 root root 4096 Feb 19 15:11 ../
      -rw-r--r-- 1 root root    0 Feb 19 15:12 1
      -rw-r--r-- 1 root root    0 Feb 19 15:12 10
      -rw-r--r-- 1 root root    0 Feb 19 15:12 100
      -????????? ? ?    ?       ?            ? 101
      -????????? ? ?    ?       ?            ? 102
      -????????? ? ?    ?       ?            ? 103
      ...
      
      The reason is that when doing the inline dir conversion, we did not
      consider that a directory has a hierarchical hash structure which can be
      configured through the sysfs interface 'dir_level'.

      By default, the dir_level of a directory inode is 0, which means the
      first-level hash table has a single bucket and all dirents are hashed
      into it, so there is no problem in simply copying dirents from the
      inline dentry page to the converted normal dentry page.

      However, if dir_level is configured with a value N greater than 0, the
      bucket count of the first-level hash table is expanded by 2^N - 1, and
      dirents are hashed into different buckets according to their hash values.
      If we still move all dirents into the first bucket, inline dirents end up
      in the wrong locations; as a result, although we can iterate over all
      dirents through ->readdir, we cannot stat some of them in ->lookup, which
      is based on hash table searching.

      This patch fixes the issue by rehashing dirents into their correct
      positions when converting an inline directory.
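      As a rough sketch of why bucket placement matters (illustrative code,
      not the kernel hashing routine): with dir_level = N, level 0 of the hash
      table has 2^N buckets, and each dirent must be copied into the bucket
      selected by its hash rather than into bucket 0.

      #include <stdio.h>

      /* Illustrative only: pick the level-0 bucket for a dirent hash when the
       * directory uses dir_level = N (2^N buckets at the first level). */
      static unsigned int level0_bucket(unsigned int hash, unsigned int dir_level)
      {
              unsigned int nbuckets = 1u << dir_level;

              return hash % nbuckets;
      }

      int main(void)
      {
              unsigned int hash = 0x2b3du;

              /* dir_level 0: a single bucket, so a plain copy happens to work. */
              printf("dir_level 0 -> bucket %u\n", level0_bucket(hash, 0));
              /* dir_level 2: four buckets; dumping everything into bucket 0
               * would make ->lookup miss the entries that hash elsewhere. */
              printf("dir_level 2 -> bucket %u\n", level0_bucket(hash, 2));
              return 0;
      }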
      Signed-off-by: Chao Yu <chao2.yu@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      675f10bd
    • f2fs: give -EINVAL for norecovery and rw mount · 6781eabb
      Jaegeuk Kim authored
      Once f2fs detects something to recover while the norecovery and rw mount
      options are given, it should stop mounting.
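      A minimal sketch of the check this implies; the flag names are
      illustrative, not the actual f2fs variables.

      #include <errno.h>
      #include <stdbool.h>

      /* Illustrative only: refuse the mount when fsynced data needs recovery
       * but the user asked for norecovery on a read-write mount. */
      int check_recovery_options(bool need_recovery, bool norecovery, bool rdonly)
      {
              if (need_recovery && norecovery && !rdonly)
                      return -EINVAL; /* rw + norecovery cannot skip needed recovery */
              return 0;
      }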
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      6781eabb
    • f2fs: recover superblock at RW remounts · df728b0f
      Jaegeuk Kim authored
      This patch adds an sbi flag, SBI_NEED_SB_WRITE, which indicates that the
      superblock needs to be recovered when (re)mounting as RW. It is set only
      while f2fs is mounted as RO.
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      df728b0f
  5. 05 April 2016, 1 commit
    • mm, fs: get rid of PAGE_CACHE_* and page_cache_{get,release} macros · 09cbfeaf
      Kirill A. Shutemov authored
      The PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} macros were introduced a *long* time
      ago with the promise that one day it would be possible to implement the
      page cache with chunks bigger than PAGE_SIZE.
      
      This promise never materialized, and it is unlikely that it ever will.
      
      We have many places where PAGE_CACHE_SIZE is assumed to be equal to
      PAGE_SIZE, and it is a constant source of confusion whether the
      PAGE_CACHE_* or PAGE_* constant should be used in a particular case,
      especially on the border between fs and mm.
      
      Switching globally to PAGE_CACHE_SIZE != PAGE_SIZE would cause too much
      breakage to be doable.
      
      Let's stop pretending that pages in page cache are special.  They are
      not.
      
      The changes are pretty straightforward:
      
       - <foo> << (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
      
       - <foo> >> (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
      
       - PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} -> PAGE_{SIZE,SHIFT,MASK,ALIGN};
      
       - page_cache_get() -> get_page();
      
       - page_cache_release() -> put_page();
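      For example, a typical conversion in filesystem code looks like the
      fragment below (illustrative, not taken from the patch):

      /* Before: page-cache units that in practice always equal page units. */
      index  = pos >> PAGE_CACHE_SHIFT;
      offset = pos & (PAGE_CACHE_SIZE - 1);
      page_cache_release(page);

      /* After: plain page units, identical behaviour. */
      index  = pos >> PAGE_SHIFT;
      offset = pos & (PAGE_SIZE - 1);
      put_page(page);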
      
      This patch contains automated changes generated with coccinelle using
      script below.  For some reason, coccinelle doesn't patch header files.
      I've called spatch for them manually.
      
      The only adjustment after coccinelle is reverting the changes to the
      PAGE_CACHE_ALIGN definition: we are going to drop it later.
      
      There are a few places in the code that coccinelle didn't reach. I'll
      fix them manually in a separate patch. Comments and documentation will
      also be addressed in a separate patch.
      
      virtual patch
      
      @@
      expression E;
      @@
      - E << (PAGE_CACHE_SHIFT - PAGE_SHIFT)
      + E
      
      @@
      expression E;
      @@
      - E >> (PAGE_CACHE_SHIFT - PAGE_SHIFT)
      + E
      
      @@
      @@
      - PAGE_CACHE_SHIFT
      + PAGE_SHIFT
      
      @@
      @@
      - PAGE_CACHE_SIZE
      + PAGE_SIZE
      
      @@
      @@
      - PAGE_CACHE_MASK
      + PAGE_MASK
      
      @@
      expression E;
      @@
      - PAGE_CACHE_ALIGN(E)
      + PAGE_ALIGN(E)
      
      @@
      expression E;
      @@
      - page_cache_get(E)
      + get_page(E)
      
      @@
      expression E;
      @@
      - page_cache_release(E)
      + put_page(E)
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      09cbfeaf
  6. 18 March 2016, 2 commits
  7. 03 March 2016, 1 commit
    • f2fs: mutex can't be used by down_write_nest_lock() · 59692b7c
      Yang Shi authored
      f2fs_lock_all() calls down_write_nest_lock() to acquire an rw_sem while
      passing a mutex as the nest lock, but down_write_nest_lock() is designed
      for two rw_sems according to the comment in include/linux/rwsem.h. Other
      than f2fs, it is only called in mm/mmap.c, with two rwsems.

      So f2fs appears to be using it incorrectly, and it also causes the compile
      warning below on -rt kernels.
      
      In file included from fs/f2fs/xattr.c:25:0:
      fs/f2fs/f2fs.h: In function 'f2fs_lock_all':
      fs/f2fs/f2fs.h:962:34: warning: passing argument 2 of 'down_write_nest_lock' from incompatible pointer type [-Wincompatible-pointer-types]
        f2fs_down_write(&sbi->cp_rwsem, &sbi->cp_mutex);
                                        ^
      fs/f2fs/f2fs.h:27:55: note: in definition of macro 'f2fs_down_write'
       #define f2fs_down_write(x, y) down_write_nest_lock(x, y)
                                                             ^
      In file included from include/linux/rwsem.h:22:0,
                       from fs/f2fs/xattr.c:21:
      include/linux/rwsem_rt.h:138:20: note: expected 'struct rw_semaphore *' but argument is of type 'struct mutex *'
       static inline void down_write_nest_lock(struct rw_semaphore *sem,
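      A minimal sketch of the shape of the fix (kernel-style, illustrative
      rather than the actual patch): take cp_rwsem with a plain down_write()
      instead of annotating it against the cp_mutex.

      /* Illustrative only, not the actual f2fs change. */
      static inline void f2fs_lock_all_sketch(struct f2fs_sb_info *sbi)
      {
              /* down_write_nest_lock(&sbi->cp_rwsem, &sbi->cp_mutex) paired an
               * rw_semaphore with a mutex; a plain down_write() avoids both the
               * incompatible-pointer warning and the bogus nesting annotation. */
              down_write(&sbi->cp_rwsem);
      }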
      Signed-off-by: Yang Shi <yang.shi@linaro.org>
      Reviewed-by: Chao Yu <chao2.yu@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      59692b7c
  8. 27 February 2016, 3 commits
  9. 23 February 2016, 18 commits
    • f2fs: trace old block address for CoWed page · 7a9d7548
      Chao Yu authored
      This patch enables tracing the old block address of CoWed pages for better
      debugging.
      
      f2fs_submit_page_mbio: dev = (1,0), ino = 1, page_index = 0x1d4f0, oldaddr = 0xfe8ab, newaddr = 0xfee90 rw = WRITE_SYNC, type = NODE
      f2fs_submit_page_mbio: dev = (1,0), ino = 1, page_index = 0x1d4f8, oldaddr = 0xfe8b0, newaddr = 0xfee91 rw = WRITE_SYNC, type = NODE
      f2fs_submit_page_mbio: dev = (1,0), ino = 1, page_index = 0x1d4fa, oldaddr = 0xfe8ae, newaddr = 0xfee92 rw = WRITE_SYNC, type = NODE
      
      f2fs_submit_page_mbio: dev = (1,0), ino = 134824, page_index = 0x96, oldaddr = 0xf049b, newaddr = 0x2bbe rw = WRITE, type = DATA
      f2fs_submit_page_mbio: dev = (1,0), ino = 134824, page_index = 0x97, oldaddr = 0xf049c, newaddr = 0x2bbf rw = WRITE, type = DATA
      f2fs_submit_page_mbio: dev = (1,0), ino = 134824, page_index = 0x98, oldaddr = 0xf049d, newaddr = 0x2bc0 rw = WRITE, type = DATA
      
      f2fs_submit_page_mbio: dev = (1,0), ino = 135260, page_index = 0x47, oldaddr = 0xffffffff, newaddr = 0xf2631 rw = WRITE, type = DATA
      f2fs_submit_page_mbio: dev = (1,0), ino = 135260, page_index = 0x48, oldaddr = 0xffffffff, newaddr = 0xf2632 rw = WRITE, type = DATA
      f2fs_submit_page_mbio: dev = (1,0), ino = 135260, page_index = 0x49, oldaddr = 0xffffffff, newaddr = 0xf2633 rw = WRITE, type = DATA
      Signed-off-by: Chao Yu <chao2.yu@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      7a9d7548
    • f2fs: move sanity checking of cp into get_valid_checkpoint · 984ec63c
      Shawn Lin authored
      From the name of get_valid_checkpoint, it seems it should return a valid
      cp or NULL for the caller to check. If no valid one is found,
      f2fs_fill_super prints an error log. But if get_valid_checkpoint returns
      one that looks valid (the return value indicates it is valid, although it
      actually turns out to be invalid after sanity checking), another similar
      error log is printed. That seems strange. Let's keep the sanity checking
      inside the procedure of getting a valid cp. Another improvement gained
      from this move is that even when a large volume is supported, the cp is
      checked in advance, so the following procedure can be skipped if the
      sanity check fails.
      Signed-off-by: Shawn Lin <shawn.lin@rock-chips.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      984ec63c
    • f2fs: introduce f2fs_journal struct to wrap journal info · dfc08a12
      Chao Yu authored
      Introduce a new structure f2fs_journal to wrap journal info in struct
      f2fs_summary_block for readability.
      
      struct f2fs_journal {
      	union {
      		__le16 n_nats;
      		__le16 n_sits;
      	};
      	union {
      		struct nat_journal nat_j;
      		struct sit_journal sit_j;
      		struct f2fs_extra_info info;
      	};
      } __packed;
      
      struct f2fs_summary_block {
      	struct f2fs_summary entries[ENTRIES_IN_SUM];
      	struct f2fs_journal journal;
      	struct summary_footer footer;
      } __packed;
      Signed-off-by: Chao Yu <chao2.yu@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      dfc08a12
    • f2fs crypto: avoid unneeded memory allocation when {en/de}crypting symlink · 922ec355
      Chao Yu authored
      This patch adapts f2fs to follow ext4's code; it removes unneeded memory
      allocation in the symlink creation/access paths.
      Signed-off-by: Chao Yu <chao2.yu@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      922ec355
    • f2fs: support revoking atomic written pages · 28bc106b
      Chao Yu authored
      f2fs supports atomic writes with the following semantics:
      1. open db file
      2. ioctl start atomic write
      3. (write db file) * n
      4. ioctl commit atomic write
      5. close db file
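      From user space, the flow above maps onto a pair of f2fs ioctls. A
      minimal sketch follows; the ioctl numbers mirror the f2fs ABI, the "db"
      path is a placeholder, and error handling is trimmed.

      #include <fcntl.h>
      #include <sys/ioctl.h>
      #include <unistd.h>

      #define F2FS_IOCTL_MAGIC                0xf5
      #define F2FS_IOC_START_ATOMIC_WRITE     _IO(F2FS_IOCTL_MAGIC, 1)
      #define F2FS_IOC_COMMIT_ATOMIC_WRITE    _IO(F2FS_IOCTL_MAGIC, 2)

      int main(void)
      {
              int fd = open("db", O_RDWR);                 /* 1. open db file          */

              ioctl(fd, F2FS_IOC_START_ATOMIC_WRITE);      /* 2. start atomic write    */
              write(fd, "record", 6);                      /* 3. (write db file) * n   */
              ioctl(fd, F2FS_IOC_COMMIT_ATOMIC_WRITE);     /* 4. commit atomic write   */
              close(fd);                                   /* 5. close db file         */
              return 0;
      }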
      
      With this flow, we can prevent the file from becoming corrupted by an
      abnormal power cut, because we hold the transaction's data in referenced
      pages linked into the inode's inmem_pages list without setting them dirty,
      so the data will not be persisted unless we commit it in step 4.
      
      But we should still hold the journal db file in memory by using volatile
      writes, because our 'atomic write support' semantics are incomplete: in
      step 4 we could fail to submit all of the transaction's dirty data, and
      once partial dirty data has been committed to storage, a checkpoint plus
      an abnormal power cut will leave the db file corrupted forever.
      
      So this patch tries to improve the atomic write flow by adding a revoke
      flow: once an internal error occurs during the commit, it gives us another
      chance to revoke the partially submitted data of the current transaction,
      which makes the commit operation closer to atomic.
      
      If we are not lucky and the revoke operation itself fails, EAGAIN is
      reported to the user, suggesting either doing recovery with the held
      journal file or retrying the current transaction.
      Signed-off-by: Chao Yu <chao2.yu@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      28bc106b
    • f2fs: split drop_inmem_pages from commit_inmem_pages · 29b96b54
      Chao Yu authored
      Split drop_inmem_pages out of commit_inmem_pages for better code
      readability, and to prepare for the following change.
      Signed-off-by: Chao Yu <chao2.yu@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      29b96b54
    • f2fs crypto: f2fs_page_crypto() doesn't need a encryption context · ce855a3b
      Jaegeuk Kim authored
      This patch adopts:
      	ext4 crypto: ext4_page_crypto() doesn't need a encryption context
      
      Since ext4_page_crypto() doesn't need an encryption context (at least
      not any more), this allows us to simplify a number of function signatures
      and also allows us to avoid needing to allocate a context in
      ext4_block_write_begin().  It also means we no longer need a separate
      ext4_decrypt_one() function.
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      ce855a3b
    • f2fs: preallocate blocks for buffered aio writes · 24b84912
      Jaegeuk Kim authored
      This patch preallocates data blocks for buffered aio writes.
      With this patch, we can avoid redundant locking and unlocking of node pages
      given consecutive aio requests.
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      24b84912
    • f2fs: move dio preallocation into f2fs_file_write_iter · b439b103
      Jaegeuk Kim authored
      This patch moves preallocation code for direct IOs into f2fs_file_write_iter.
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      b439b103
    • f2fs: introduce f2fs_submit_merged_bio_cond · 0c3a5797
      Chao Yu authored
      f2fs uses a single bio buffer per data type (META/NODE/DATA) to cache as
      many writes with contiguous block addresses as possible. After submitting,
      some writes may still be cached in the bio buffer, so we have to flush
      the cached writes in the bio buffer by calling f2fs_submit_merged_bio.
      
      Unfortunately, under high concurrency the bio buffer could be flushed by
      someone else before we submit it, for the following reasons:
      a) there is no space left in the bio buffer;
      b) a request of a different type (SYNC, ASYNC) is added;
      c) a discontiguous block address is added.

      In this situation, f2fs_submit_merged_bio is harmful, because it can break
      the subsequent merging of writes in the bio buffer, splitting one big bio
      into two smaller ones.
      
      This patch introduces f2fs_submit_merged_bio_cond, which submits the bio
      buffer conditionally. Before submitting, it judges whether:
       - a page in the DATA-type bio buffer matches the specified page;
       - a page in the DATA-type bio buffer belongs to the specified inode;
       - a page in the NODE-type bio buffer belongs to the specified inode.
      If there is no eligible page in the bio buffer, we skip the submit step,
      which gives us more chances to merge consecutive block IOs in the bio
      cache.
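      A rough user-space model of that decision (illustrative only, not the
      kernel function): flush the cached bio only when it actually holds a
      page we care about.

      #include <stdbool.h>
      #include <stddef.h>

      /* Illustrative only: model of the "is there an eligible page?" test
       * that decides whether the cached bio buffer must be flushed. */
      struct cached_page {
              unsigned long ino;      /* owner inode of the cached page */
              const void *page;       /* the cached page itself */
      };

      bool need_submit(const struct cached_page *cached, size_t n,
                       unsigned long ino, const void *page)
      {
              for (size_t i = 0; i < n; i++) {
                      if (page && cached[i].page == page)
                              return true;    /* the exact page is in the bio   */
                      if (!page && cached[i].ino == ino)
                              return true;    /* a page of this inode is cached */
              }
              return false;                   /* nothing relevant: keep merging */
      }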
      Signed-off-by: Chao Yu <chao2.yu@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      0c3a5797
    • f2fs: speed up handling holes in fiemap · da85985c
      Chao Yu authored
      This patch makes f2fs_map_blocks support returning the next potential page
      offset, which skips hole regions in the inode's indirect tree, and uses it
      to speed up fiemap when handling a big hole.
      
      Test method:
      xfs_io -f /mnt/f2fs/file  -c "pwrite 1099511627776 4096"
      time xfs_io -f /mnt/f2fs/file -c "fiemap -v"
      
      Before:
      time xfs_io -f /mnt/f2fs/file -c "fiemap -v"
      /mnt/f2fs/file:
       EXT: FILE-OFFSET              BLOCK-RANGE      TOTAL FLAGS
         0: [0..2147483647]:         hole             2147483648
         1: [2147483648..2147483655]: 81920..81927         8   0x1
      
      real    3m3.518s
      user    0m0.000s
      sys     3m3.456s
      
      After:
      time xfs_io -f /mnt/f2fs/file -c "fiemap -v"
      /mnt/f2fs/file:
       EXT: FILE-OFFSET              BLOCK-RANGE      TOTAL FLAGS
         0: [0..2147483647]:         hole             2147483648
         1: [2147483648..2147483655]: 81920..81927         8   0x1
      
      real    0m0.008s
      user    0m0.000s
      sys     0m0.008s
      Signed-off-by: Chao Yu <chao2.yu@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      da85985c
    • f2fs: introduce get_next_page_offset to speed up SEEK_DATA · 3cf45747
      Chao Yu authored
      When seeking data in ->llseek, if we encounter a big hole that covers
      several dnode pages, we try to continue seeking from the index of the
      first page of the next dnode page, so at most we can skip searching
      (ADDRS_PER_BLOCK - 1) pages.

      However, this is still not efficient: if an indirect or double-indirect
      pointer is NULL, no dnode pages exist in the subtree that pointer points
      to, so it is unnecessary to search that whole region.
      
      This patch introduces get_next_page_offset to calculate the next page
      offset based on the current search level and the maximum search level
      returned from get_dnode_of_data; with this, we can skip searching an
      entire area whose indirect or double-indirect node block does not exist.
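      To give a feel for the win, here is a small sketch of the span that can
      be skipped when lookup stops at a missing pointer. The constants mirror
      the f2fs layout with 4KB blocks, but the helper is illustrative only.

      #include <stdio.h>

      #define ADDRS_PER_BLOCK 1018ULL   /* data addrs in a direct node block  */
      #define NIDS_PER_BLOCK  1018ULL   /* node ids in an indirect node block */

      /* Illustrative only: how many data pages a missing pointer at the given
       * level would have covered, i.e. how far the next search can jump. */
      static unsigned long long skip_span(int missing_level)
      {
              switch (missing_level) {
              case 1: return ADDRS_PER_BLOCK;                                   /* direct node     */
              case 2: return NIDS_PER_BLOCK * ADDRS_PER_BLOCK;                  /* indirect        */
              case 3: return NIDS_PER_BLOCK * NIDS_PER_BLOCK * ADDRS_PER_BLOCK; /* double indirect */
              default: return 1;
              }
      }

      int main(void)
      {
              printf("a NULL indirect pointer lets us skip %llu pages\n", skip_span(2));
              return 0;
      }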
      Signed-off-by: Chao Yu <chao2.yu@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      3cf45747
    • f2fs: remove unneeded pointer conversion · 81ca7350
      Chao Yu authored
      There are redundant pointer conversions in the following call stack:
       - at position a, the inode is converted to an f2fs_inode_info;
       - at position b, the f2fs_inode_info is converted back to an inode.
      
       - truncate_blocks(inode,..)
        - fi = F2FS_I(inode)		---a
        - ADDRS_PER_PAGE(node_page, fi)
         - addrs_per_inode(fi)
          - inode = &fi->vfs_inode	---b
          - f2fs_has_inline_xattr(inode)
           - fi = F2FS_I(inode)
           - is_inode_flag_set(fi,..)
      
      In order to avoid the unneeded conversion, alter ADDRS_PER_PAGE and
      addrs_per_inode to accept a parameter of inode pointer type.
      Signed-off-by: Chao Yu <chao2.yu@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      81ca7350
    • f2fs: introduce lifetime write IO statistics · 8f1dbbbb
      Shuoran Liu authored
      This patch introduces lifetime write IO statistics exposed through the
      sysfs interface. The write IO amount is obtained from the block layer,
      accumulated in the file system, and stored in the hot node summary of the
      checkpoint.
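      A small sketch of reading the counter from user space. The attribute
      name lifetime_write_kbytes and the sdX device directory are assumptions
      based on the sysfs layout used elsewhere in this log, so treat the path
      as a placeholder.

      #include <stdio.h>

      int main(void)
      {
              /* Path is illustrative; substitute the real device directory. */
              FILE *f = fopen("/sys/fs/f2fs/sdX/lifetime_write_kbytes", "r");
              unsigned long long kbytes = 0;

              if (f && fscanf(f, "%llu", &kbytes) == 1)
                      printf("lifetime writes: %llu KB\n", kbytes);
              if (f)
                      fclose(f);
              return 0;
      }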
      Signed-off-by: Shuoran Liu <liushuoran@huawei.com>
      Signed-off-by: Pengyang Hou <houpengyang@huawei.com>
      [Jaegeuk Kim: add sysfs documentation]
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      8f1dbbbb
    • f2fs: improve shrink performance of extent nodes · 201ef5e0
      Hou Pengyang authored
      In the worst case, we need to scan the whole radix tree and every rb-tree
      to free the victim extent_nodes when shrinking.

      Pengyang initially introduced a victim_list to record the victim
      extent_nodes and free them by just scanning a list.

      Later, Chao Yu enhanced the original patch to improve the memory footprint
      by removing the victim list.
      
      The policy of lru list shrinking becomes:
      1) lock lru list's lock
      2) trylock extent tree's lock
      3) remove extent node from lru list
      4) unlock lru list's lock
      5) do shrink
      6) repeat 1) to 5)
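      A kernel-style sketch of that loop; the type, field, and helper names
      are made up for illustration and this is not the actual implementation.

      /* Illustrative only: free victims straight off the LRU list, taking each
       * extent tree's lock with a trylock so shrinking never blocks on a busy
       * tree; busy entries are pushed to the tail and retried later. */
      unsigned int scanned = 0;

      spin_lock(&lru_lock);                                 /* 1) lock LRU list       */
      while (nr_to_free && !list_empty(&lru_list) && scanned++ < max_scan) {
              struct extent_node *en =
                      list_first_entry(&lru_list, struct extent_node, list);
              struct extent_tree *et = en->tree;

              if (!write_trylock(&et->lock)) {              /* 2) trylock extent tree */
                      list_move_tail(&en->list, &lru_list); /* busy tree: retry later */
                      continue;
              }
              list_del_init(&en->list);                     /* 3) remove from LRU     */
              spin_unlock(&lru_lock);                       /* 4) unlock LRU list     */

              detach_and_free_extent_node(et, en);          /* 5) do the shrink       */
              write_unlock(&et->lock);
              nr_to_free--;

              spin_lock(&lru_lock);                         /* 6) repeat 1) to 5)     */
      }
      spin_unlock(&lru_lock);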
      Signed-off-by: Hou Pengyang <houpengyang@huawei.com>
      Signed-off-by: Chao Yu <chao2.yu@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      201ef5e0
    • f2fs: use wait_for_stable_page to avoid contention · fec1d657
      Jaegeuk Kim authored
      In write_begin, unless the backing storage requires stable pages during
      writeback, we don't need to wait for writeback before updating the page's
      contents.
      This patch uses wait_for_stable_page, which only waits in the stable-page
      case, instead of the unconditional wait_on_page_writeback.
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      fec1d657
    • f2fs: export dirty_nats_ratio in sysfs · 2304cb0c
      Chao Yu authored
      This patch exports a new sysfs entry, 'dirty_nats_ratio', to control the
      threshold of dirty nat entries; if the current ratio exceeds the configured
      threshold, a checkpoint is triggered in f2fs_balance_fs_bg to flush the
      dirty nats.
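      The threshold test this describes is essentially the following; an
      illustrative sketch, not the kernel code.

      #include <stdbool.h>

      /* Illustrative only: trigger a checkpoint once dirty NAT entries exceed
       * the configured percentage of all cached NAT entries. */
      bool excess_dirty_nats(unsigned long dirty_nats, unsigned long total_nats,
                             unsigned int dirty_nats_ratio /* percent */)
      {
              return dirty_nats >= total_nats * dirty_nats_ratio / 100;
      }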
      Signed-off-by: Chao Yu <chao2.yu@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      2304cb0c
    • f2fs: relocate is_merged_page · 0fd785eb
      Chao Yu authored
      The operations in is_merged_page are related to the internal bio cache,
      so move it to data.c.
      Signed-off-by: Chao Yu <chao2.yu@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      0fd785eb
  10. 12 January 2016, 3 commits
  11. 09 January 2016, 1 commit