1. 24 Sep 2014, 1 commit
  2. 16 Sep 2014, 5 commits
  3. 11 Sep 2014, 1 commit
  4. 10 Sep 2014, 10 commits
    • f2fs: fix negative value for lseek offset · 0b4c5afd
      Committed by Jaegeuk Kim
      If an application passes a negative offset to lseek with SEEK_DATA|SEEK_HOLE,
      f2fs previously hit a BUG_ON in get_dnode_of_data, as reported by
      Tommi Rantala.
      
      He reproduced it with a simple call:
      	lseek(fd, -17595150933902LL, SEEK_DATA);
      
      This patch should resolve that bug.
      Reported-by: Tommi Rantala <tt.rantala@gmail.com>
      [Jaegeuk Kim: relocate the condition as suggested by Chao]
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      0b4c5afd
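      A minimal sketch of the kind of guard this patch describes, placed in the
      llseek path before any dnode lookup. The function name and locals below are
      illustrative, not the exact f2fs code:
      
          /* Hedged sketch: reject impossible offsets before walking dnodes, so a
           * negative lseek(SEEK_DATA/SEEK_HOLE) offset can never reach
           * get_dnode_of_data() and trip its BUG_ON. */
          static loff_t f2fs_seek_block_sketch(struct file *file, loff_t offset)
          {
                  struct inode *inode = file->f_mapping->host;
                  loff_t maxbytes = inode->i_sb->s_maxbytes;
      
                  if (offset < 0 || offset > maxbytes)
                          return -ENXIO;  /* no data/hole at an impossible offset */
      
                  /* ... safe to look up dnodes from here on ... */
                  return offset;
          }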
    • f2fs: avoid node page to be written twice in gc_node_segment · 9a01b56b
      Committed by Huang Ying
      In gc_node_segment, if node page gc runs concurrently with node page
      writeback, and check_valid_map and get_node_page run after the page is
      locked but before cur_valid_map is updated, as below, it is possible for
      the page to be written twice unnecessarily.
      
      			sync_node_pages
      			  try_lock_page
      			  ...
      check_valid_map		  f2fs_write_node_page
      			    ...
      			    write_node_page
      			      do_write_page
      			        allocate_data_block
      				  ...
      				  refresh_sit_entry /* update cur_valid_map */
      				  ...
      			    ...
      			    unlock_page
      get_node_page
      ...
      set_page_dirty
      ...
      f2fs_put_page
        unlock_page
      
      This can be solved by calling check_valid_map again after get_node_page.
      Signed-off-by: Huang, Ying <ying.huang@intel.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      9a01b56b
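      A hedged sketch of the re-check pattern this patch describes, as a fragment
      of the loop body in gc_node_segment(); the helper names come from the
      message above, but the surrounding loop is simplified:
      
          /* Hedged sketch: the block may have been moved (and cur_valid_map
           * updated) while we were acquiring the node page, so the validity
           * check must be repeated once we hold the page. */
          node_page = get_node_page(sbi, nid);
          if (IS_ERR(node_page))
                  continue;
      
          if (check_valid_map(sbi, segno, off) == 0) {
                  /* already written back by f2fs_write_node_page() */
                  f2fs_put_page(node_page, 1);
                  continue;
          }
      
          set_page_dirty(node_page);
          f2fs_put_page(node_page, 1);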
    • f2fs: use lock-less list(llist) to simplify the flush cmd management · 721bd4d5
      Committed by Gu Zheng
      We use flush cmd control to collect many flush cmds and flush them
      together. Currently we use two lists to manage the flush cmds (collect
      and dispatch), protected by one spin lock. In fact, the lock-less list
      (llist) is very well suited to this case, so use it to simplify the routine.
      
      -
      v2:
      -use llist_for_each_entry_safe to fix possible use-after-free issue.
      -remove the unused field from struct flush_cmd.
      Thanks to Yu for the suggestion.
      -
      Signed-off-by: Gu Zheng <guz.fnst@cn.fujitsu.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      721bd4d5
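      A hedged sketch of the lock-less list pattern this patch describes:
      producers push flush commands onto an llist_head without taking a lock,
      and the issue thread drains everything at once with llist_del_all().
      The structure fields and function names below are illustrative:
      
          #include <linux/completion.h>
          #include <linux/llist.h>
      
          /* Hedged sketch, not the exact f2fs structures. */
          struct flush_cmd {
                  struct completion wait;
                  struct llist_node llnode;
                  int ret;
          };
      
          struct flush_cmd_control {
                  struct llist_head issue_list;   /* producers add here, lock-free */
                  /* ... issue thread, wait queue, ... */
          };
      
          /* Producer side: lock-less add, then wait for the issue thread. */
          static int submit_flush_cmd(struct flush_cmd_control *fcc,
                                      struct flush_cmd *cmd)
          {
                  init_completion(&cmd->wait);
                  llist_add(&cmd->llnode, &fcc->issue_list);
                  /* wake up the issue thread here */
                  wait_for_completion(&cmd->wait);
                  return cmd->ret;
          }
      
          /* Issue thread: grab everything queued so far in one shot. */
          static void dispatch_flush_cmds(struct flush_cmd_control *fcc, int ret)
          {
                  struct flush_cmd *cmd, *next;
                  struct llist_node *list = llist_del_all(&fcc->issue_list);
      
                  /* _safe variant: complete() may let the entry's owner free it */
                  llist_for_each_entry_safe(cmd, next, list, llnode) {
                          cmd->ret = ret;
                          complete(&cmd->wait);
                  }
          }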
    • f2fs: refactor flush_sit_entries codes for reducing SIT writes · 184a5cd2
      Committed by Chao Yu
      In commit aec71382 ("f2fs: refactor flush_nat_entries codes for reducing NAT
      writes"), we described the issue as below:
      
      "Although building NAT journal in cursum reduce the read/write work for NAT
      block, but previous design leave us lower performance when write checkpoint
      frequently for these cases:
      1. if journal in cursum has already full, it's a bit of waste that we flush all
         nat entries to page for persistence, but not to cache any entries.
      2. if journal in cursum is not full, we fill nat entries to journal until the
         journal is full, then flush the remaining dirty entries to disk without
         merging the journaled entries, so these journaled entries may be flushed to
         disk at the next checkpoint but lost the chance to be flushed last time."
      
      Actually, we have the same problem in using SIT journal area.
      
      In this patch, we first update the sit journal with as many dirty entries as
      possible. Second, if there is no space left in the sit journal, we remove all
      entries from the journal and walk through the whole dirty entry bitmap of sit,
      accounting dirty sit entries located in the same SIT block to a sit entry set.
      All entry sets are linked to the list sit_entry_set in sm_info, sorted in
      ascending order by the count of entries in each set. Later we flush the sets
      with the fewest entries into the journal as far as space allows, and then
      flush the dense sets with merged entries to disk.
      
      In this way we can use the sit journal area more effectively and reduce SIT
      updates, resulting in a performance gain and a longer lifetime for the flash
      device.
      
      In my testing environment, it shows this patch can help to reduce SIT block
      update obviously.
      
      virtual machine + hard disk:
      fsstress -p 20 -n 400 -l 5
      		sit page num	cp count	sit pages/cp
      based		2006.50		1349.75		1.486
      patched		1566.25		1463.25		1.070
      
      The latency of the merge operation stays small even when handling a large
      number of dirty SIT entries in flush_sit_entries:
      latency(ns)	dirty sit count
      36038		2151
      49168		2123
      37174		2232
      Signed-off-by: Chao Yu <chao2.yu@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      184a5cd2
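      A hedged sketch of the set bookkeeping this patch describes: each dirty SIT
      entry is accounted to the set covering its SIT block, and the sets are kept
      on a list ordered ascending by entry count so the smallest sets can be
      journaled first. The names below are illustrative, not the exact f2fs code:
      
          /* Hedged sketch of the per-SIT-block set bookkeeping. */
          struct sit_entry_set {
                  struct list_head set_list;      /* linked into sm_info's list */
                  unsigned int start_segno;       /* first segno of this SIT block */
                  unsigned int entry_cnt;         /* dirty entries in this block */
          };
      
          static void account_sit_entry_sketch(unsigned int segno,
                                               struct list_head *head)
          {
                  struct sit_entry_set *ses, *nxt;
                  unsigned int start = rounddown(segno, SIT_ENTRY_PER_BLOCK);
      
                  list_for_each_entry(ses, head, set_list) {
                          if (ses->start_segno == start) {
                                  ses->entry_cnt++;
                                  goto reorder;
                          }
                  }
      
                  ses = kmalloc(sizeof(*ses), GFP_NOFS); /* error handling omitted */
                  ses->start_segno = start;
                  ses->entry_cnt = 1;
                  list_add(&ses->set_list, head);  /* count 1 belongs at the front */
                  return;
      
          reorder:
                  /* keep ascending order: the grown set may need to move toward
                   * the tail, past neighbours with smaller counts */
                  while (ses->set_list.next != head) {
                          nxt = list_entry(ses->set_list.next,
                                           struct sit_entry_set, set_list);
                          if (nxt->entry_cnt >= ses->entry_cnt)
                                  break;
                          list_move(&ses->set_list, &nxt->set_list);
                  }
          }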
    • f2fs: remove unneeded sit_i in macro SIT_BLOCK_OFFSET/START_SEGNO · d3a14afd
      Committed by Chao Yu
      The sit_i argument in the macros SIT_BLOCK_OFFSET/START_SEGNO is not used; remove it.
      Signed-off-by: Chao Yu <chao2.yu@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      d3a14afd
    • f2fs: need fsck.f2fs if the recovery was failed · b0c44f05
      Committed by Jaegeuk Kim
      If roll-forward recovery failed, we should run fsck.f2fs.
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      b0c44f05
    • f2fs: handle bug cases by letting fsck.f2fs initiate · ec325b52
      Committed by Jaegeuk Kim
      This patch adds handling for buggy corner cases so that fsck.f2fs can be initiated.
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      ec325b52
    • f2fs: add BUG cases to initiate fsck.f2fs · 05796763
      Committed by Jaegeuk Kim
      This patch replaces BUG cases with f2fs_bug_on so that information for
      fsck.f2fs is retained. It also implements some void functions to initiate
      fsck.f2fs.
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      05796763
    • f2fs: need fsck.f2fs when f2fs_bug_on is triggered · 9850cf4a
      Committed by Jaegeuk Kim
      If any f2fs_bug_on is triggered, fsck.f2fs is needed.
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      9850cf4a
    • f2fs: retain inconsistency information to initiate fsck.f2fs · 2ae4c673
      Committed by Jaegeuk Kim
      This patch adds sbi->need_fsck so that fsck.f2fs can be run later.
      This flag can only be removed by fsck.f2fs.
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      2ae4c673
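      A hedged sketch of how this group of fsck.f2fs patches fits together: the
      assertion macro records an inconsistency flag in the superblock info instead
      of only crashing, and that flag is cleared only by fsck.f2fs. The definitions
      below are illustrative, not the verbatim f2fs code:
      
          /* Hedged sketch of the need_fsck bookkeeping. */
          struct f2fs_sb_info {
                  /* ... */
                  bool need_fsck; /* inconsistency seen; cleared only by fsck.f2fs */
          };
      
          #define f2fs_bug_on(sbi, condition)                             \
                  do {                                                    \
                          if (unlikely(condition)) {                      \
                                  WARN_ON(1);                             \
                                  (sbi)->need_fsck = true;                \
                          }                                               \
                  } while (0)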
  5. 04 Sep 2014, 1 commit
  6. 02 Sep 2014, 1 commit
    • f2fs: reposition unlock_new_inode to prevent accessing invalid inode · b73e5282
      Committed by Chao Yu
      Due to a race condition on the inode cache, the following scenario can occur:
      [Thread a]				[Thread b]
      					->f2fs_mkdir
      					  ->f2fs_add_link
      					    ->__f2fs_add_link
      					      ->init_inode_metadata failed here
      ->gc_thread_func
        ->f2fs_gc
          ->do_garbage_collect
            ->gc_data_segment
              ->f2fs_iget
                ->iget_locked
                  ->wait_on_inode
      					  ->unlock_new_inode
              ->move_data_page
      					  ->make_bad_inode
      					  ->iput
      
      When we fail in create/symlink/mkdir/mknod/tmpfile, the newly allocated inode
      should be marked bad to prevent other threads from accessing it. But in the
      above scenario, f2fs can access the invalid inode before it is marked bad.
      This patch fixes that potential problem; the issue was found by code review.
      
      change log from v1:
       o Add condition judgment in gc_data_segment() suggested by Changman Lee.
       o use iget_failed to simplify code.
      Signed-off-by: Chao Yu <chao2.yu@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      b73e5282
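      A hedged sketch of the error-path ordering this patch describes: when a setup
      step such as init_inode_metadata() fails for a newly allocated inode,
      iget_failed() marks it bad and unlocks it in one step, so a concurrent
      iget_locked() never sees a half-initialized inode. The function below is a
      simplified illustration, not the exact f2fs code:
      
          /* Hedged sketch of the mkdir error path. */
          static int f2fs_mkdir_sketch(struct inode *dir, struct dentry *dentry,
                                       umode_t mode)
          {
                  struct inode *inode;
                  int err;
      
                  inode = f2fs_new_inode(dir, S_IFDIR | mode);
                  if (IS_ERR(inode))
                          return PTR_ERR(inode);
      
                  err = f2fs_add_link(dentry, inode);
                  if (err)
                          goto out_fail;
      
                  d_instantiate(dentry, inode);
                  unlock_new_inode(inode);  /* publish only a fully set-up inode */
                  return 0;
      
          out_fail:
                  /* make_bad_inode() + unlock_new_inode() + iput() in one helper,
                   * leaving no window for GC or lookup to use the inode */
                  iget_failed(inode);
                  return err;
          }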
  7. 30 Aug 2014, 4 commits
  8. 29 Aug 2014, 6 commits
    • f2fs: fix wrong casting for dentry name · 3304b564
      Committed by Jaegeuk Kim
      The dentry name type is unsigned char *.
      If we don't match this type, some character codes can be misinterpreted
      because of the sign bit.
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      3304b564
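      A hedged userspace toy example of the problem described above: the same name
      byte looks different depending on whether it travels through char * or
      unsigned char *:
      
          #include <stdio.h>
      
          int main(void)
          {
                  const unsigned char name[] = { 0xe4, 0xb8, 0xad, 0 }; /* UTF-8 bytes */
                  const char *as_signed = (const char *)name;
      
                  /* if plain char is signed (common), 0xe4 reads as -28;
                   * as unsigned char it is 228 */
                  printf("signed: %d, unsigned: %u\n",
                         (int)as_signed[0], (unsigned)name[0]);
                  return 0;
          }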
    • ext4: fix same-dir rename when inline data directory overflows · d80d448c
      Committed by Darrick J. Wong
      When performing a same-directory rename, it's possible that adding or
      setting the new directory entry will cause the directory to overflow
      the inline data area, which causes the directory to be converted to an
      extent-based directory.  Under this circumstance it is necessary to
      re-read the directory when deleting the old dirent because the "old
      directory" context still points to i_block in the inode table, which
      is now an extent tree root!  The delete fails with an FS error, and
      the subsequent fsck complains about incorrect link counts and
      hardlinked directories.
      
      Test case (originally found with flat_dir_test in the metadata_csum
      test program):
      
      # mkfs.ext4 -O inline_data /dev/sda
      # mount /dev/sda /mnt
      # mkdir /mnt/x
      # touch /mnt/x/changelog.gz /mnt/x/copyright /mnt/x/README.Debian
      # sync
      # for i in /mnt/x/*; do mv $i $i.longer; done
      # ls -la /mnt/x/
      total 0
      -rw-r--r-- 1 root root 0 Aug 25 12:03 changelog.gz.longer
      -rw-r--r-- 1 root root 0 Aug 25 12:03 copyright
      -rw-r--r-- 1 root root 0 Aug 25 12:03 copyright.longer
      -rw-r--r-- 1 root root 0 Aug 25 12:03 README.Debian.longer
      
      (Hey!  Why are there four files now??)
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      Cc: stable@vger.kernel.org
      d80d448c
    • jbd2: fix descriptor block size handling errors with journal_csum · db9ee220
      Committed by Darrick J. Wong
      It turns out that there are some serious problems with the on-disk
      format of journal checksum v2.  The foremost is that the function to
      calculate descriptor tag size returns sizes that are too big.  This
      causes alignment issues on some architectures and is compounded by the
      fact that some parts of jbd2 use the structure size (incorrectly) to
      determine the presence of a 64bit journal instead of checking the
      feature flags.
      
      Therefore, introduce journal checksum v3, which enlarges the
      descriptor block tag format to allow for full 32-bit checksums of
      journal blocks, fix the journal tag function to return the correct
      sizes, and fix the jbd2 recovery code to use feature flags to
      determine 64bitness.
      
      Add a few function helpers so we don't have to open-code quite so
      many pieces.
      
      Switching to a 16-byte block size was found to increase journal size
      overhead by a maximum of 0.1%, to convert a 32-bit journal with no
      checksumming to a 32-bit journal with checksum v3 enabled.
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Reported-by: TR Reardon <thomas_reardon@hotmail.com>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      Cc: stable@vger.kernel.org
      db9ee220
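      A hedged sketch of the two corrections described above for the tag-size
      helper: the size is derived from whichever tag structure the enabled checksum
      feature actually uses, and 64-bitness is decided by the feature flag rather
      than by the structure size. The helper and feature-test names below are
      illustrative, not the verbatim jbd2 code:
      
          /* Hedged sketch of a descriptor tag size helper. */
          static size_t journal_tag_bytes_sketch(journal_t *journal)
          {
                  size_t sz;
      
                  /* csum v3 tags are wide enough for a full 32-bit checksum */
                  if (journal_has_csum_v3(journal))
                          return sizeof(journal_block_tag3_t);
      
                  sz = sizeof(journal_block_tag_t);
      
                  /* csum v2 appended a 16-bit checksum to the classic tag */
                  if (journal_has_csum_v2(journal))
                          sz += sizeof(__u16);
      
                  /* 64bitness comes from the feature flag, never from sizeof() */
                  if (journal_has_64bit(journal))
                          return sz;
      
                  return sz - sizeof(__u32); /* drop the high block-number word */
          }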
    • jbd2: fix infinite loop when recovering corrupt journal blocks · 022eaa75
      Committed by Darrick J. Wong
      When recovering the journal, don't fall into an infinite loop if we
      encounter a corrupt journal block.  Instead, just skip the block and
      return an error, which fails the mount and thus forces the user to run
      a full filesystem fsck.
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      Cc: stable@vger.kernel.org
      022eaa75
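      A hedged sketch of the recovery-loop behaviour described above: when a journal
      block is unreadable or fails verification, the pass records an error and stops
      rather than retrying the same block forever. The loop structure and helper
      names are illustrative, not the actual jbd2 do_one_pass():
      
          /* Hedged sketch of a recovery pass. */
          static int recovery_pass_sketch(journal_t *journal,
                                          unsigned long first_block,
                                          unsigned long last_block)
          {
                  unsigned long next_log_block = first_block;
                  int err = 0;
      
                  while (next_log_block < last_block) {
                          struct buffer_head *bh =
                                  read_journal_block(journal, next_log_block);
      
                          if (!bh || !journal_block_ok(journal, bh)) {
                                  /* Corrupt block: record the error and stop, so
                                   * the mount fails and the user runs a full fsck,
                                   * instead of looping on the same block forever. */
                                  err = -EIO;
                                  brelse(bh);
                                  break;
                          }
      
                          /* ... replay or verify the block here ... */
                          next_log_block++;  /* always make forward progress */
                          brelse(bh);
                  }
                  return err;
          }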
    • ext4: update i_disksize coherently with block allocation on error path · 6603120e
      Committed by Dmitry Monakhov
      In the delalloc case, i_disksize may be less than i_size, so we have to
      update i_disksize each time we allocate and submit blocks beyond
      i_disksize. We weren't doing this on the error paths, so fix this.
      
      testcase: xfstest generic/019
      Signed-off-by: Dmitry Monakhov <dmonakhov@openvz.org>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      Cc: stable@vger.kernel.org
      6603120e
    • f2fs: simplify by using a literal · 922cedbd
      Committed by Dan Carpenter
      We can make the code a bit simpler because we know that "!retry" is
      zero.
      Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      922cedbd
  9. 28 Aug 2014, 2 commits
  10. 27 Aug 2014, 3 commits
  11. 26 Aug 2014, 1 commit
  12. 25 Aug 2014, 1 commit
    • aio: fix reqs_available handling · d856f32a
      Committed by Benjamin LaHaise
      As reported by Dan Aloni, commit f8567a38 ("aio: fix aio request
      leak when events are reaped by userspace") introduces a regression when
      user code attempts to perform io_submit() with more events than are
      available in the ring buffer.  Reverting that commit would reintroduce a
      regression when user space event reaping is used.
      
      Fixing this bug is a bit more involved than the previous attempts to fix
      this regression.  Since we do not have a single point at which we can
      count events as being reaped by user space and io_getevents(), we have
      to track event completion by looking at the number of events left in the
      event ring.  So long as there are as many events in the ring buffer as
      there have been completion events generated, we cannot call
      put_reqs_available().  The code to check for this is now placed in
      refill_reqs_available().
      
      A test program from Dan and modified by me for verifying this bug is available
      at http://www.kvack.org/~bcrl/20140824-aio_bug.c .
      Reported-by: Dan Aloni <dan@kernelim.com>
      Signed-off-by: Benjamin LaHaise <bcrl@kvack.org>
      Acked-by: Dan Aloni <dan@kernelim.com>
      Cc: Kent Overstreet <kmo@daterainc.com>
      Cc: Mateusz Guzik <mguzik@redhat.com>
      Cc: Petr Matousek <pmatouse@redhat.com>
      Cc: stable@vger.kernel.org      # v3.16 and anything that f8567a38 was backported to
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d856f32a
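      A hedged sketch of the bookkeeping described above: request slots are only
      handed back (put_reqs_available) once the corresponding completion events
      have actually left the ring, computed from the ring head and tail. This
      follows the description in the message; field and helper names are
      illustrative:
      
          /* Hedged sketch of refilling reqs_available from reaped events. */
          static void refill_reqs_available_sketch(struct kioctx *ctx,
                                                   unsigned head, unsigned tail)
          {
                  unsigned events_in_ring, completed;
      
                  /* events produced but not yet reaped by user space */
                  head %= ctx->nr_events;
                  if (head <= tail)
                          events_in_ring = tail - head;
                  else
                          events_in_ring = ctx->nr_events - (head - tail);
      
                  completed = ctx->completed_events;
                  if (completed > events_in_ring)
                          completed -= events_in_ring; /* these were reaped */
                  else
                          completed = 0;               /* nothing reaped yet */
      
                  if (!completed)
                          return;
      
                  ctx->completed_events -= completed;
                  put_reqs_available(ctx, completed);
          }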
  13. 24 Aug 2014, 4 commits
    • Btrfs: fix task hang under heavy compressed write · 9e0af237
      Committed by Liu Bo
      This has been reported and discussed for a long time, and this hang occurs in
      both 3.15 and 3.16.
      
      Btrfs has now migrated to the kernel workqueue, but that migration introduced
      this hang.
      
      Btrfs has a kind of work that is queued in an ordered way, which means that its
      ordered_func() must be processed FIFO, so it usually looks like --
      
      normal_work_helper(arg)
          work = container_of(arg, struct btrfs_work, normal_work);
      
          work->func() <---- (we name it work X)
          for ordered_work in wq->ordered_list
                  ordered_work->ordered_func()
                  ordered_work->ordered_free()
      
      The hang is a rare case: first, when we look for free space, we get an uncached
      block group, then we go to read its free space cache inode for free space
      information, so it will
      
      file a readahead request
          btrfs_readpages()
               for page that is not in page cache
                      __do_readpage()
                           submit_extent_page()
                                 btrfs_submit_bio_hook()
                                       btrfs_bio_wq_end_io()
                                       submit_bio()
                                       end_workqueue_bio() <--(ret by the 1st endio)
                                            queue a work(named work Y) for the 2nd
                                            also the real endio()
      
      So the hang occurs when work Y's work_struct and work X's work_struct happens
      to share the same address.
      
      A bit more explanation,
      
      A,B,C -- struct btrfs_work
      arg   -- struct work_struct
      
      kthread:
      worker_thread()
          pick up a work_struct from @worklist
          process_one_work(arg)
      	worker->current_work = arg;  <-- arg is A->normal_work
      	worker->current_func(arg)
      		normal_work_helper(arg)
      		     A = container_of(arg, struct btrfs_work, normal_work);
      
      		     A->func()
      		     A->ordered_func()
      		     A->ordered_free()  <-- A gets freed
      
      		     B->ordered_func()
      			  submit_compressed_extents()
      			      find_free_extent()
      				  load_free_space_inode()
      				      ...   <-- (the above readahead stack)
      				      end_workqueue_bio()
      					   btrfs_queue_work(work C)
      		     B->ordered_free()
      
      Since work A sits early in wq->ordered_list and there are more ordered works
      queued after it, such as B, its memory can be freed before normal_work_helper()
      returns, which means the kernel workqueue code in worker_thread() still has
      worker->current_work pointing at work A->normal_work, i.e. arg's address.
      
      Meanwhile, work C is allocated after work A is freed, and work C->normal_work
      and work A->normal_work are likely to share the same address (I confirmed this
      with ftrace output, so I'm not just guessing; it's rare though).
      
      When another kthread picks up work C->normal_work to process, and finds our
      kthread is processing it (see find_worker_executing_work()), it treats work C
      as a collision and skips it, which ends up with nobody processing work C.
      
      So the situation is that our kthread is waiting forever on work C.
      
      Besides, there are other cases that can lead to deadlock, but the real problem
      is that all btrfs workqueues share one work->func -- normal_work_helper. So
      this patch makes each workqueue have its own helper function, which is only a
      wrapper of normal_work_helper.
      
      With this patch, I no longer hit the above hang.
      Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
      Signed-off-by: Chris Mason <clm@fb.com>
      9e0af237
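      A hedged sketch of the shape of the fix described at the end of the message:
      each btrfs workqueue gets its own trivial helper that only wraps
      normal_work_helper(), so two works on different workqueues can never look
      identical to find_worker_executing_work() even if a recycled work_struct
      reuses the same address. The macro and names below illustrate the approach
      rather than reproducing the exact btrfs code:
      
          /* Hedged sketch: one thin wrapper per workqueue, so work->func (and
           * therefore worker->current_func) differs between queues. */
          #define BTRFS_WORK_HELPER_SKETCH(name)                            \
          void btrfs_##name(struct work_struct *arg)                        \
          {                                                                 \
                  struct btrfs_work *work =                                 \
                          container_of(arg, struct btrfs_work, normal_work);\
                  normal_work_helper(work);                                 \
          }
      
          BTRFS_WORK_HELPER_SKETCH(endio_helper);
          BTRFS_WORK_HELPER_SKETCH(submit_helper);
          BTRFS_WORK_HELPER_SKETCH(delalloc_helper);
      
          /* Work initialization would then record the per-queue helper as the
           * work_struct's function instead of a single shared helper. */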
    • ext4: move i_size,i_disksize update routines to helper function · 4631dbf6
      Committed by Dmitry Monakhov
      Cc: stable@vger.kernel.org # needed for bug fix patches
      Signed-off-by: Dmitry Monakhov <dmonakhov@openvz.org>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      4631dbf6
    • ext4: fix BUG_ON in mb_free_blocks() · c99d1e6e
      Committed by Theodore Ts'o
      If we suffer a block allocation failure (for example due to a memory
      allocation failure), it's possible that we will call
      ext4_discard_allocated_blocks() before we've actually allocated any
      blocks.  In that case, fe_len and fe_start in ac->ac_f_ex will still
      be zero, and this will result in mb_free_blocks(inode, e4b, 0, 0)
      triggering the BUG_ON on mb_free_blocks():
      
      	BUG_ON(last >= (sb->s_blocksize << 3));
      
      Fix this by bailing out of ext4_discard_allocated_blocks() if fe_len
      is zero.
      
      Also fix a missing ext4_mb_unload_buddy() call in
      ext4_discard_allocated_blocks().
      
      Google-Bug-Id: 16844242
      
      Fixes: 86f0afd4
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      Cc: stable@vger.kernel.org
      c99d1e6e
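      A hedged, heavily simplified sketch of the two fixes described above: bail out
      of the discard path when no blocks were actually allocated (fe_len == 0), and
      make sure the buddy bitmap is unloaded on the path that does free blocks:
      
          /* Hedged sketch of the discard path. */
          static void discard_allocated_blocks_sketch(struct ext4_allocation_context *ac)
          {
                  struct ext4_buddy e4b;
                  int err;
      
                  if (ac->ac_f_ex.fe_len == 0)
                          return; /* allocation failed before any blocks were taken */
      
                  err = ext4_mb_load_buddy(ac->ac_sb, ac->ac_f_ex.fe_group, &e4b);
                  if (err)
                          return;
      
                  ext4_lock_group(ac->ac_sb, ac->ac_f_ex.fe_group);
                  mb_free_blocks(ac->ac_inode, &e4b,
                                 ac->ac_f_ex.fe_start, ac->ac_f_ex.fe_len);
                  ext4_unlock_group(ac->ac_sb, ac->ac_f_ex.fe_group);
      
                  ext4_mb_unload_buddy(&e4b); /* the missing unload noted above */
          }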
    • ext4: propagate errors up to ext4_find_entry()'s callers · 36de9286
      Committed by Theodore Ts'o
      If we run into some kind of error, such as ENOMEM, while calling
      ext4_getblk() or ext4_dx_find_entry(), we need to make sure this error
      gets propagated up to ext4_find_entry() and then to its callers.  This
      way, transient errors such as ENOMEM can get propagated to the VFS.
      This is important so that the system calls return the appropriate
      error, and also so that in the case of ext4_lookup(), we return an
      error instead of a NULL inode, since that will result in a negative
      dentry cache entry that will stick around long past the OOM condition
      which caused a transient ENOMEM error.
      
      Google-Bug-Id: #17142205
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      Cc: stable@vger.kernel.org
      36de9286
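      A hedged sketch of the propagation pattern described above: ext4_find_entry()
      returns an ERR_PTR() rather than NULL when the lookup fails for a reason other
      than "not found", and callers such as ext4_lookup() check for it with IS_ERR()
      so a transient ENOMEM reaches the VFS instead of creating a negative dentry.
      The function below is illustrative, not the exact ext4 code:
      
          /* Hedged sketch of the caller-side handling. */
          static struct dentry *ext4_lookup_sketch(struct inode *dir,
                                                   struct dentry *dentry,
                                                   unsigned int flags)
          {
                  struct ext4_dir_entry_2 *de;
                  struct buffer_head *bh;
                  struct inode *inode = NULL;
      
                  bh = ext4_find_entry(dir, &dentry->d_name, &de, NULL);
                  if (IS_ERR(bh))
                          return ERR_CAST(bh); /* ENOMEM etc. go up to the VFS */
      
                  if (bh) {
                          __u32 ino = le32_to_cpu(de->inode);
                          brelse(bh);
                          inode = ext4_iget(dir->i_sb, ino);
                          if (IS_ERR(inode))
                                  return ERR_CAST(inode);
                  }
                  /* bh == NULL just means "not found": a negative dentry is fine */
                  return d_splice_alias(inode, dentry);
          }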