1. 23 Feb 2017 (2 commits)
  2. 29 Jan 2017 (2 commits)
    • f2fs: drop exist_data for inline_data when truncated to 0 · bb95d9ab
      Committed by Jaegeuk Kim
      A test program gets two different SEEK_DATA results for a newly
      created file versus an existing file on an f2fs filesystem.
      
      F2FS filesystem,  (the first "test1" is a new file)
      SEEK_DATA size != 0 (offset = 8192)
      SEEK_DATA size != 0 (offset = 4096)
      
      PNFS filesystem, (the first "test1" is a new file)
      SEEK_DATA size != 0 (offset = 4096)
      SEEK_DATA size != 0 (offset = 4096)
      
      #define _GNU_SOURCE             /* SEEK_DATA needs _GNU_SOURCE on glibc */
      #include <errno.h>
      #include <fcntl.h>
      #include <stdio.h>
      #include <unistd.h>

      int main(int argc, char **argv)
      {
              char *filename = argv[1];
              int offset = 1, i = 0, fd = -1;
      
              if (argc < 2) {
                      printf("Usage: %s f2fsfilename\n", argv[0]);
                      return -1;
              }
      
              /*
              if (!access(filename, F_OK) || errno != ENOENT) {
                      printf("Needs a new file for test, %m\n");
                      return -1;
              }*/
      
              fd = open(filename, O_RDWR | O_CREAT, 0777);
              if (fd < 0) {
                      printf("Create test file %s failed, %m\n", filename);
                      return -1;
              }
      
              for (i = 0; i < 20; i++) {
                      offset = 1 << i;
                      ftruncate(fd, 0);
                      lseek(fd, offset, SEEK_SET);
                      write(fd, "test", 5);
                /* SEEK_DATA from 0 returns the first data offset; non-zero means the file head is a hole */
                      if (lseek(fd, 0, SEEK_DATA)) {
                              printf("SEEK_DATA size != 0 (offset = %d)\n", offset);
                              break;
                      }
              }
      
              close(fd);
              return 0;
      }
      Reported-and-Tested-by: Kinglong Mee <kinglongmee@gmail.com>
      Reviewed-by: Chao Yu <yuchao0@huawei.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
    • f2fs: show the max number of atomic operations · 26a28a0c
      Committed by Jaegeuk Kim
      This patch shows the maximum number of atomic operations that are being
      conducted concurrently (a small sketch follows this entry).
      Reviewed-by: Chao Yu <yuchao0@huawei.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
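      The value shown is a high-water mark of atomic operations running at the
      same time. A minimal user-space sketch of that kind of bookkeeping, with
      invented counters (cur_aw, max_aw) rather than the in-kernel fields:

      #include <stdio.h>

      static int cur_aw;      /* atomic writes currently in flight (invented name) */
      static int max_aw;      /* high-water mark reported to the user (invented name) */

      static void start_atomic_write(void)
      {
              if (++cur_aw > max_aw)
                      max_aw = cur_aw;        /* remember the peak */
      }

      static void end_atomic_write(void)
      {
              cur_aw--;
      }

      int main(void)
      {
              start_atomic_write();
              start_atomic_write();           /* two operations overlap here */
              end_atomic_write();
              start_atomic_write();
              printf("max concurrent atomic ops: %d\n", max_aw);   /* prints 2 */
              return 0;
      }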
  3. 13 Dec 2016 (1 commit)
  4. 12 Dec 2016 (1 commit)
  5. 06 Dec 2016 (1 commit)
  6. 30 Nov 2016 (1 commit)
    • f2fs: do not activate auto_recovery for fallocated i_size · 26787236
      Committed by Jaegeuk Kim
      If a file needs to keep its i_size after a keep-size fallocate, we need to
      turn off auto recovery of i_size during roll-forward recovery.

      This resolves the scenario below (a small sketch of the recovery-time
      decision follows this entry).
      
      1. xfs_io -f /mnt/f2fs/file -c "pwrite 0 4096" -c "fsync"
      2. xfs_io -f /mnt/f2fs/file -c "falloc -k 4096 4096" -c "fsync"
      3. md5sum /mnt/f2fs/file;
      4. godown /mnt/f2fs/
      5. umount /mnt/f2fs/
      6. mount -t f2fs /dev/sdx /mnt/f2fs
      7. md5sum /mnt/f2fs/file
      Reported-by: Chao Yu <chao@kernel.org>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
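      A rough sketch of the recovery-time decision as I read the scenario, with
      invented fields and no relation to the real in-kernel structures; it only
      illustrates the intended outcome for a keep-size fallocation:

      #include <stdbool.h>
      #include <stdio.h>

      /* Invented per-file view, for illustration only. */
      struct file_state {
              long long isize;           /* i_size recorded by the last fsync */
              long long last_block_end;  /* end offset of the last allocated block */
              bool keep_size_falloc;     /* blocks were fallocated with keep-size */
      };

      /* What i_size should roll-forward recovery restore? */
      static long long recovered_isize(const struct file_state *st)
      {
              /* Without the fix, auto recovery could extend i_size to cover every
               * allocated block, breaking a keep-size fallocation after a crash. */
              if (st->keep_size_falloc)
                      return st->isize;
              return st->last_block_end > st->isize ? st->last_block_end : st->isize;
      }

      int main(void)
      {
              /* Mirrors the scenario: 4096 bytes written, 4096 more fallocated -k. */
              struct file_state st = { 4096, 8192, true };
              printf("recovered i_size = %lld\n", recovered_isize(&st)); /* 4096 */
              return 0;
      }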
  7. 26 Nov 2016 (4 commits)
    • f2fs: fix fdatasync · 281518c6
      Committed by Chao Yu
      For the two cases below, we can't guarantee data consistency:
      
      a)
      1. xfs_io "pwrite 0 4195328" "fsync"
      2. xfs_io "pwrite 4195328 1024" "fdatasync"
      3. godown
      4. umount & mount
      --> the i_size we updated before fdatasync won't be recovered
      
      b)
      1. xfs_io "pwrite -S 0xcc 0 4202496" "fsync"
      2. xfs_io "fpunch 4194304 4096" "fdatasync"
      3. godown
      4. umount & mount
      --> the dnode we punched before fdatasync won't be recovered
      
      The reason is that fdatasync is normally unaware of metadata modifications
      in the file, e.g. i_size changes or dnode updates, so in ->fsync we skip
      flushing node pages for the above cases, and as a result the fdatasynced
      updates are lost during recovery.

      Since we have introduced a global DIRTY_META list in sbi for tracking
      dirty inodes selectively, fdatasync can now decide whether to flush nodes
      depending on the dirty state of the current inode in that list (see the
      sketch after this entry).
      Signed-off-by: Chao Yu <yuchao0@huawei.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
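      A minimal sketch of the decision this enables, using invented helpers
      rather than the real f2fs code; the in-kernel check consults the per-sb
      DIRTY_META inode list:

      #include <stdbool.h>
      #include <stdio.h>

      struct file_state {
              bool on_dirty_meta_list;   /* inode tracked as having dirty metadata */
      };

      /* Should a datasync-only fsync still flush node pages? */
      static bool need_node_flush(const struct file_state *st, bool datasync)
      {
              if (!datasync)
                      return true;               /* full fsync always flushes nodes */
              /* Before the fix, fdatasync skipped node pages unconditionally and
               * lost i_size/dnode updates across a crash; now it checks whether
               * this inode has dirty metadata pending. */
              return st->on_dirty_meta_list;
      }

      int main(void)
      {
              struct file_state st = { .on_dirty_meta_list = true };
              printf("flush nodes on fdatasync: %d\n", need_node_flush(&st, true));
              return 0;
      }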
    • f2fs: don't wait writeback for datas during checkpoint · 36951b38
      Committed by Chao Yu
      Normally, while committing a checkpoint, we wait for writeback of all
      pages, no matter whether a page holds data or metadata, so in a scenario
      where lots of data IO is submitted together with metadata, we may suffer
      long latency waiting for writeback during the checkpoint.

      We only need persistence for pages holding metadata, not for data pages,
      since filesystem consistency depends only on metadata. To avoid the long
      latency in the scenario above, let's tag metadata in submitted IOs and
      wait for writeback only on metadata (a small sketch follows this entry).
      Signed-off-by: Chao Yu <yuchao0@huawei.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
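      A small illustration of the idea, with invented names and counters, not
      the real tagging mechanism: classify submitted pages and make the
      checkpoint wait only on the classes that matter for consistency.

      #include <stdio.h>

      enum page_class { PAGE_DATA, PAGE_NODE, PAGE_META, NR_CLASSES };

      static int inflight[NR_CLASSES];   /* writeback counts per class (invented) */

      static void submit_writeback(enum page_class c) { inflight[c]++; }
      static void writeback_done(enum page_class c)   { inflight[c]--; }

      /* The checkpoint only has to wait for classes that affect consistency. */
      static int pages_checkpoint_waits_on(void)
      {
              return inflight[PAGE_NODE] + inflight[PAGE_META];
      }

      int main(void)
      {
              submit_writeback(PAGE_DATA);   /* plain file data: not waited on */
              submit_writeback(PAGE_NODE);   /* node page: waited on */
              submit_writeback(PAGE_META);   /* meta page: waited on */
              writeback_done(PAGE_META);
              printf("pages checkpoint still waits on: %d\n",
                     pages_checkpoint_waits_on());   /* prints 1 */
              return 0;
      }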
    • f2fs: avoid BG_GC in f2fs_balance_fs · 7702bdbe
      Committed by Jaegeuk Kim
      If many threads hit has_not_enough_free_secs() in f2fs_balance_fs() at the
      same time, all of them would do FG_GC or BG_GC. On this critical path we
      don't need to do BG_GC at all, so let's avoid it (see the sketch after
      this entry).
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
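      A sketch of the intended behaviour on this path, with invented helpers:
      garbage-collect in the foreground only when free sections are actually
      short, and never fall back to background GC from the allocation path.

      #include <stdbool.h>
      #include <stdio.h>

      static bool has_not_enough_free_secs(int free_secs, int needed)
      {
              return free_secs < needed;
      }

      static void balance_fs(int free_secs, int needed)
      {
              if (has_not_enough_free_secs(free_secs, needed)) {
                      /* Reclaim space synchronously before allocating more. */
                      printf("FG_GC (free=%d, needed=%d)\n", free_secs, needed);
                      return;
              }
              /* Otherwise do nothing: background GC runs from its own thread,
               * not from this allocation path. */
              printf("no GC on this path (free=%d, needed=%d)\n", free_secs, needed);
      }

      int main(void)
      {
              balance_fs(10, 20);     /* tight on space -> foreground GC */
              balance_fs(50, 20);     /* plenty of space -> nothing here */
              return 0;
      }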
    • f2fs: use err for f2fs_preallocate_blocks · a7de6086
      Committed by Jaegeuk Kim
      This patch has no functional change.
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
  8. 24 Nov 2016 (4 commits)
  9. 08 Oct 2016 (1 commit)
  10. 28 Sep 2016 (1 commit)
  11. 22 Sep 2016 (1 commit)
  12. 15 Sep 2016 (1 commit)
  13. 14 Sep 2016 (1 commit)
  14. 13 Sep 2016 (2 commits)
  15. 10 Sep 2016 (1 commit)
  16. 08 Sep 2016 (2 commits)
  17. 30 Aug 2016 (1 commit)
  18. 19 Aug 2016 (2 commits)
  19. 21 Jul 2016 (1 commit)
  20. 19 Jul 2016 (1 commit)
  21. 16 Jul 2016 (3 commits)
  22. 09 Jul 2016 (2 commits)
  23. 07 Jul 2016 (1 commit)
  24. 14 Jun 2016 (1 commit)
  25. 08 Jun 2016 (1 commit)
  26. 03 Jun 2016 (1 commit)