1. 28 December 2012 (2 commits)
    • f2fs: invalidate the node page if allocation is failed · 71e9fec5
      Jaegeuk Kim committed
      new_node_page() proceeds through the following steps:
      
      1. A new node page is allocated.
      2. Set PageUptodate with proper footer information.
      3. Check if there is a free space for allocation
       4.a. If there is no space, f2fs returns with -ENOSPC.
       4.b. Otherwise, go next.
      
      In case 4.a, f2fs leaves a stale node page in the page cache
      with the uptodate flag set.
      
      Also, even when a new node page is allocated successfully, an error can
      occur afterwards if allocating one of the other data structures fails.
      In such a case, remove_inode_page() is triggered, so we have to
      clear the uptodate flag in truncate_node() too.
      
      So, we should clear the uptodate flag when the allocation fails, as
      sketched below.
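      A minimal sketch of the fixed error path, under stated assumptions:
      has_free_space() is a hypothetical stand-in for the real free-space
      check, and the footer setup is elided; this is illustrative, not the
      verbatim kernel code.
      
      	/* Sketch only: on failure, undo step 2 so no stale uptodate
      	 * node page survives in the page cache. */
      	static struct page *new_node_page_sketch(struct f2fs_sb_info *sbi,
      						 nid_t nid)
      	{
      		/* step 1: allocate (and lock) a new node page */
      		struct page *page =
      			grab_cache_page(sbi->node_inode->i_mapping, nid);
      
      		if (!page)
      			return ERR_PTR(-ENOMEM);
      
      		/* step 2: fill the footer and mark the page uptodate */
      		SetPageUptodate(page);
      
      		/* steps 3/4.a: no free space, fail cleanly */
      		if (!has_free_space(sbi)) {	/* hypothetical helper */
      			ClearPageUptodate(page);
      			f2fs_put_page(page, 1);	/* unlock and release */
      			return ERR_PTR(-ENOSPC);
      		}
      		return page;			/* step 4.b: success */
      	}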
      Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
    • f2fs: add missing #include <linux/prefetch.h> · 690e4a3e
      Geert Uytterhoeven committed
      m68k allmodconfig:
      
      fs/f2fs/data.c: In function ‘read_end_io’:
      fs/f2fs/data.c:311: error: implicit declaration of function ‘prefetchw’
      
      fs/f2fs/segment.c: In function ‘f2fs_end_io_write’:
      fs/f2fs/segment.c:628: error: implicit declaration of function ‘prefetchw’
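      The fix itself is just the missing include; the sketch below shows how
      the header relates to the failing calls. The prefetchw() call site is
      paraphrased from the error output above, not quoted from the sources.
      
      	#include <linux/prefetch.h>	/* declares prefetchw(); m68k does
      					 * not pull it in transitively */
      
      	/* e.g. inside read_end_io() / f2fs_end_io_write(): */
      	prefetchw(&page->flags);	/* prefetch page flags for write */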
      Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
      Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
  2. 26 December 2012 (6 commits)
    • f2fs: do f2fs_balance_fs in front of dir operations · 1efef832
      Jaegeuk Kim committed
      In order to conserve free sections for worst-case scenarios, f2fs
      should be able to freeze all directory operations, especially when
      there are not enough free sections. f2fs_balance_fs() serves this
      purpose.
      
      When FS utilization approaches 100%, directory operations frequently
      fail with -ENOSPC, which occasionally leaves dirty node pages behind.
      
      Previously, f2fs_balance_fs() could not be triggered in such a case,
      since it was invoked only when the directory operation succeeded.
      
      So, this patch triggers f2fs_balance_fs() first, before handling each
      directory operation, as sketched below.
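      A hedged sketch of the pattern, using a create-style operation as the
      example; f2fs_balance_fs(), f2fs_add_link(), and F2FS_SB() follow f2fs
      conventions, while the inode-creation details are simplified and the
      function name is illustrative.
      
      	static int f2fs_create_sketch(struct inode *dir,
      				      struct dentry *dentry, umode_t mode)
      	{
      		struct f2fs_sb_info *sbi = F2FS_SB(dir->i_sb);
      		struct inode *inode;
      		int err;
      
      		/* Throttle first: reclaim free sections before the
      		 * operation dirties anything, so balancing happens even
      		 * if the operation later fails with -ENOSPC. */
      		f2fs_balance_fs(sbi);
      
      		inode = f2fs_new_inode(dir, mode);	/* simplified */
      		if (IS_ERR(inode))
      			return PTR_ERR(inode);
      
      		err = f2fs_add_link(dentry, inode);
      		if (err) {
      			iput(inode);
      			return err;
      		}
      		d_instantiate(dentry, inode);
      		return 0;
      	}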
      Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
    • f2fs: should recover orphan and fsync data · 30f0c758
      Jaegeuk Kim committed
      The recovery routine should run every time at mount, regardless of
      whether the previous umount was a normal one.
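      A loose sketch of the intended mount-time behavior; recover_orphan_inodes()
      and recover_fsync_data() are the f2fs recovery entry points, but their
      exact signatures and the surrounding fill_super logic are assumptions
      for illustration.
      
      	/* Called unconditionally from the mount path: no clean-umount
      	 * flag check guards the recovery steps any more. */
      	static int f2fs_do_recovery_sketch(struct f2fs_sb_info *sbi)
      	{
      		int err;
      
      		/* free orphan inodes left by unlinked-but-open files */
      		err = recover_orphan_inodes(sbi);	/* assumed signature */
      		if (err)
      			return err;
      
      		/* roll forward data fsync'ed after the last checkpoint */
      		return recover_fsync_data(sbi);		/* assumed signature */
      	}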
      Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
    • f2fs: fix handling errors got by f2fs_write_inode · 398b1ac5
      Jaegeuk Kim committed
      Ruslan reported that f2fs hangs with an infinite loop in f2fs_sync_file():
      
      	while (sync_node_pages(sbi, inode->i_ino, &wbc) == 0)
      		f2fs_write_inode(inode, NULL);
      
      The root cause turned out to be that the cold flag was not set even
      though this inode is a regular file. Therefore, sync_node_pages() skips
      writing its node blocks, since it only writes cold node blocks.
      
      The cold flag is stored in the node_footer of a node block, and whenever
      a new node page is allocated, the flag is set according to the file
      type, regular file or directory.
      
      But after a sudden power-off, f2fs does not restore the cold flag when
      recovering the inode page.
      
      So, let's assign the cold flag in the right places; see the sketch
      after the note below.
      
      One more thing:
      if f2fs_write_inode() returns an error for whatever reason, there will
      be no dirty node pages, so sync_node_pages() returns zero (i.e., zero
      means nothing was written) and the loop above never terminates.
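      A rough sketch of the cold-flag assignment, based on the node_footer
      behavior described above; COLD_BIT_SHIFT and the footer field follow
      f2fs conventions, but the body should be read as illustrative rather
      than verbatim.
      
      	/* Mark the node footer cold for regular files (directories stay
      	 * hot), so sync_node_pages() will later write the block. The fix
      	 * also calls this when recovering an inode page after a sudden
      	 * power-off, restoring the lost flag. */
      	static void set_cold_node_sketch(struct inode *inode,
      					 struct page *page)
      	{
      		struct f2fs_node *rn = page_address(page);
      		unsigned int flag = le32_to_cpu(rn->footer.flag);
      
      		if (S_ISDIR(inode->i_mode))
      			flag &= ~(0x1 << COLD_BIT_SHIFT);
      		else
      			flag |= (0x1 << COLD_BIT_SHIFT);
      		rn->footer.flag = cpu_to_le32(flag);
      	}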
      Reported-by: Ruslan N. Marchenko <me@ruff.mobi>
      Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
    • f2fs: fix up f2fs_get_parent issue to retrieve correct parent inode number · 38e0abdc
      Namjae Jeon committed
      Test Case:
      [NFS Client]
      ls -lR .
      
      [NFS Server]
      while [ 1 ]
      do
      echo 3 > /proc/sys/vm/drop_caches
      done
      
      Error on NFS Client: "No such file or directory"
      
      When the cache is dropped at the server, lookup fails at the NFS client
      because the connection with the parent is lost. On the default path, a
      lookup is initiated by calculating the hash value for the name; however,
      the hash values stored on disk for "." and ".." are kept as zero, so
      find_in_block() fails because the hash values do not match.
      Fix this up by using the correct hash values for these entries, as
      sketched below.
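      A sketch of the fix in the dentry-hash helper; __f2fs_dentry_hash() is
      a hypothetical name standing for the original hashing body, and the
      wrapper name is illustrative.
      
      	f2fs_hash_t f2fs_dentry_hash_sketch(const char *name, int len)
      	{
      		/* "." and ".." are stored on disk with a hash of zero,
      		 * so return zero instead of hashing them; this makes
      		 * find_in_block() match the on-disk entries. */
      		if (len <= 2 && name[0] == '.' &&
      		    (len == 1 || name[1] == '.'))
      			return 0;
      
      		return __f2fs_dentry_hash(name, len);	/* hypothetical */
      	}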
      Signed-off-by: Namjae Jeon <namjae.jeon@samsung.com>
      Signed-off-by: Amit Sahrawat <a.sahrawat@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
    • f2fs: fix wrong calculation on f_files in statfs · 1362b5e3
      Jaegeuk Kim committed
      In f2fs_statfs(), f_files should be the total number of available inodes
      instead of the currently allocated inodes.
      So, this patch should resolve the reported bug below.
      
      Note that showing 10% usage is not a bug: f2fs exposes as much of the
      whole volume size as possible and reports its own metadata overhead as
      *used*. This policy is fair enough with respect to other file systems;
      a sketch of the corrected accounting follows.
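      A minimal sketch of the corrected accounting, assuming the f2fs_sb_info
      fields of that era (total_node_count, valid_inode_count()); treat the
      exact names as illustrative.
      
      	static void f2fs_statfs_sketch(struct f2fs_sb_info *sbi,
      				       struct kstatfs *buf)
      	{
      		/* capacity, not the number of inodes currently allocated */
      		buf->f_files = sbi->total_node_count;
      
      		/* free = capacity - in use; this keeps IUsed in `df -i`
      		 * from going negative */
      		buf->f_ffree = sbi->total_node_count -
      				valid_inode_count(sbi);
      	}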
      
      <Reported Bug>
      (loop0 is backed by 1GiB file)
      
      $ mkfs.f2fs /dev/loop0
      
      F2FS-tools: Ver: 1.1.0 (2012-12-11)
      Info: sector size = 512
      Info: total sectors = 2097152 (in 512bytes)
      Info: zone aligned segment0 blkaddr: 512
      Info: format successful
      
      $ mount /dev/loop0 mnt/
      
      $ df mnt/
      Filesystem     1K-blocks  Used Available Use% Mounted on
      /dev/loop0       1046528 98312    929784  10%
      /home/zeta/linux-devel/mtd-bench/mnt
      
      $ df mnt/ -i
      Filesystem     Inodes   IUsed  IFree IUse% Mounted on
      /dev/loop0       1 -465918 465919     - /home/zeta/linux-devel/mtd-bench/mnt
      
      Notice IUsed is negative. Also, 10% usage on a fresh f2fs seems too
      much to be correct.
      Reported-and-Tested-by: Ezequiel Garcia <elezegarcia@gmail.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
    • f2fs: remove set_page_dirty for atomic f2fs_end_io_write · dfb7c0ce
      Jaegeuk Kim committed
      We must guarantee that we never *schedule while atomic*.
      I found that the atomic f2fs_end_io_write() contains a set_page_dirty()
      call used to deal with IO errors.
      
      But set_page_dirty() calls:
       -> f2fs_set_data_page_dirty()
         -> set_dirty_dir_page()
            -> cond_resched(), which results in scheduling.
      
      To avoid this, I'd like to simply remove the set_page_dirty() call,
      since the page is already marked as ERROR and f2fs will operate in
      read-only mode from that point anyway.
      So, there is no recovery issue with this change; a sketch follows.
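      A hedged sketch of the error handling in the atomic write-completion
      path after the change; the bio-iteration details are elided and the
      function name and signature are illustrative of early f2fs, not
      verbatim.
      
      	static void f2fs_end_io_write_sketch(struct page *page,
      					     bool uptodate)
      	{
      		if (!uptodate) {
      			SetPageError(page);	/* remember the failure */
      			/* No set_page_dirty() here: it can reach
      			 * cond_resched() via f2fs_set_data_page_dirty()
      			 * -> set_dirty_dir_page(), i.e. it may schedule
      			 * while atomic. The fs turns read-only on error,
      			 * so the page need not be redirtied. */
      		}
      		end_page_writeback(page);
      	}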
      Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
  3. 24 December 2012 (3 commits)
  4. 23 December 2012 (1 commit)
  5. 22 December 2012 (28 commits)