1. 03 Feb, 2015 (2 commits)
  2. 03 Jan, 2015 (1 commit)
    • Btrfs: don't delay inode ref updates during log replay · 6f896054
      Committed by Chris Mason
      Commit 1d52c78a (Btrfs: try not to ENOSPC on log replay) added a
      check to skip delayed inode updates during log replay because it
      confuses the enospc code.  But the delayed processing will end up
      ignoring delayed refs from log replay because the inode itself wasn't
      put through the delayed code.
      
      This can end up triggering a warning at commit time:
      
      WARNING: CPU: 2 PID: 778 at fs/btrfs/delayed-inode.c:1410 btrfs_assert_delayed_root_empty+0x32/0x34()
      
      This warning is repeated for each commit because we never process the delayed
      inode ref update.
      
      The fix used here is to change btrfs_delayed_delete_inode_ref to return
      an error if we're currently in log replay.  The caller will do the ref
      deletion immediately and everything will work properly.
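
      A minimal userspace model of that fallback (log_replay_active,
      delayed_delete_inode_ref and the -EAGAIN return below are illustrative
      stand-ins, not the actual btrfs code):

      #include <errno.h>
      #include <stdbool.h>
      #include <stdio.h>

      /* Illustrative stand-in for the fs state the real check looks at. */
      static bool log_replay_active;

      /* Models btrfs_delayed_delete_inode_ref(): refuse to queue the ref update
       * while log replay is running, so the caller falls back to the immediate,
       * non-delayed path. */
      static int delayed_delete_inode_ref(void)
      {
              if (log_replay_active)
                      return -EAGAIN;
              printf("queued delayed inode ref deletion\n");
              return 0;
      }

      static void delete_inode_ref(void)
      {
              if (delayed_delete_inode_ref() == 0)
                      return;
              /* Delayed path refused the work: do the ref deletion right away. */
              printf("deleting inode ref immediately\n");
      }

      int main(void)
      {
              log_replay_active = true;
              delete_inode_ref();     /* immediate path during log replay */
              log_replay_active = false;
              delete_inode_ref();     /* delayed path otherwise */
              return 0;
      }
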
      Signed-off-by: Chris Mason <clm@fb.com>
      cc: stable@vger.kernel.org # v3.18 and any stable series that picked 1d52c78a
      6f896054
  3. 18 Sep, 2014 (1 commit)
  4. 24 Aug, 2014 (1 commit)
    • Btrfs: fix task hang under heavy compressed write · 9e0af237
      Committed by Liu Bo
      This has been reported and discussed for a long time, and this hang occurs in
      both 3.15 and 3.16.
      
      Btrfs has now migrated to the kernel workqueue, but the migration introduced
      this hang problem.
      
      Btrfs has a kind of work that is queued in an ordered way, which means that its
      ordered_func() must be processed FIFO, so it usually looks like this:
      
      normal_work_helper(arg)
          work = container_of(arg, struct btrfs_work, normal_work);
      
          work->func() <---- (we name it work X)
          for ordered_work in wq->ordered_list
                  ordered_work->ordered_func()
                  ordered_work->ordered_free()
      
      The hang is a rare case: first, when we look for free space, we get an uncached
      block group, then we go to read its free space cache inode for the free space
      information, so it will
      
      file a readahead request
          btrfs_readpages()
               for page that is not in page cache
                      __do_readpage()
                           submit_extent_page()
                                 btrfs_submit_bio_hook()
                                       btrfs_bio_wq_end_io()
                                       submit_bio()
                                       end_workqueue_bio() <--(ret by the 1st endio)
                                             queue a work (named work Y) for the 2nd,
                                             the real, endio()
      
      So the hang occurs when work Y's work_struct and work X's work_struct happen
      to share the same address.
      
      A bit more explanation,
      
      A,B,C -- struct btrfs_work
      arg   -- struct work_struct
      
      kthread:
      worker_thread()
          pick up a work_struct from @worklist
          process_one_work(arg)
      	worker->current_work = arg;  <-- arg is A->normal_work
      	worker->current_func(arg)
      		normal_work_helper(arg)
      		     A = container_of(arg, struct btrfs_work, normal_work);
      
      		     A->func()
      		     A->ordered_func()
      		     A->ordered_free()  <-- A gets freed
      
      		     B->ordered_func()
      			  submit_compressed_extents()
      			      find_free_extent()
      				  load_free_space_inode()
      				      ...   <-- (the above readhead stack)
      				      end_workqueue_bio()
      					   btrfs_queue_work(work C)
      		     B->ordered_free()
      
      If work A has a high priority in wq->ordered_list and there are more ordered
      works queued after it, such as B's ordered_func(), work A's memory can be freed
      before normal_work_helper() returns, which means that the kernel workqueue code
      in worker_thread() still has worker->current_work pointing to work
      A->normal_work, i.e. arg's address.
      
      Meanwhile, work C is allocated after work A is freed, and work C->normal_work
      and work A->normal_work are likely to share the same address (I confirmed this
      with ftrace output, so I'm not just guessing; it's rare though).
      
      When another kthread picks up work C->normal_work to process and finds that our
      kthread is processing it (see find_worker_executing_work()), it treats work C as
      a collision and skips it, so nobody ends up processing work C.
      
      So the situation is that our kthread is waiting forever on work C.
      
      Besides, there are other cases that can lead to deadlock, but the real problem
      is that all btrfs workqueues share one work->func, normal_work_helper.  So this
      patch gives each workqueue its own helper function, which is only a wrapper
      around normal_work_helper.
      
      With this patch, I no longer hit the above hang.
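
      A minimal userspace model of that change (the struct, macro and function
      names below are made up, not the real btrfs ones): every queue still runs
      the same shared helper, but through its own thin wrapper, so each queue's
      work items carry a distinct function pointer.

      #include <stdio.h>

      struct work_model {
              void (*wq_func)(struct work_model *work);   /* per-queue wrapper */
              const char *name;
      };

      /* The single shared helper that used to be every queue's work->func. */
      static void normal_work_helper_model(struct work_model *work)
      {
              printf("processing %s\n", work->name);
      }

      /* One thin wrapper per workqueue: the bodies are identical, but the
       * function addresses differ, so work from one queue can no longer be
       * mistaken for work from another. */
      #define DEFINE_WORK_HELPER(queue)                                 \
              static void queue##_helper(struct work_model *work)       \
              {                                                         \
                      printf("%s: ", __func__);                         \
                      normal_work_helper_model(work);                   \
              }

      DEFINE_WORK_HELPER(endio)
      DEFINE_WORK_HELPER(delalloc)

      int main(void)
      {
              struct work_model a = { endio_helper, "endio work" };
              struct work_model b = { delalloc_helper, "delalloc work" };

              a.wq_func(&a);
              b.wq_func(&b);
              return 0;
      }

      Since the function run for each queue now differs, a recycled work_struct
      address alone is no longer enough for find_worker_executing_work() to treat
      new work as a collision with work from another queue.
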
      Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
      Signed-off-by: Chris Mason <clm@fb.com>
      9e0af237
  5. 10 Jun, 2014 (1 commit)
  6. 11 Mar, 2014 (2 commits)
  7. 29 Jan, 2014 (7 commits)
  8. 12 Nov, 2013 (4 commits)
  9. 01 Sep, 2013 (3 commits)
  10. 29 Jun, 2013 (1 commit)
  11. 14 Jun, 2013 (1 commit)
  12. 07 May, 2013 (2 commits)
  13. 07 Mar, 2013 (1 commit)
    • Btrfs: improve the delayed inode throttling · de3cb945
      Committed by Chris Mason
      The delayed inode code batches up changes to the btree in hopes of doing
      them in bulk.  As the changes build up, processes kick off worker
      threads and wait for them to make progress.
      
      The current code kicks off an async work queue item for each delayed
      node, which creates a lot of churn.  It also uses a fixed 1 HZ waiting
      period for the throttle, which allows us to build a lot of pending
      work and can slow down the commit.
      
      This changes the code to watch a sequence counter as it is bumped during the
      operations.  We kick off fewer work items and have each work item do
      more work.
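
      A rough single-threaded model of the throttling idea (the names below are
      made up, not the btrfs functions): waiters watch a counter that is bumped
      as delayed nodes are processed, instead of sleeping a fixed 1 HZ interval.

      #include <stdio.h>

      /* Bumped once per processed delayed node. */
      static unsigned long delayed_seq;

      static void process_one_delayed_node(void)
      {
              /* ... flush one batched inode update to the btree ... */
              delayed_seq++;
      }

      /* Wait until at least @min_progress nodes have been processed since we
       * started waiting.  A kernel version would sleep on a waitqueue and be
       * woken as the counter advances, rather than helping in a loop. */
      static void throttle_on_progress(unsigned long min_progress)
      {
              unsigned long start = delayed_seq;

              while (delayed_seq - start < min_progress)
                      process_one_delayed_node();

              printf("made progress: %lu nodes\n", delayed_seq - start);
      }

      int main(void)
      {
              throttle_on_progress(4);
              return 0;
      }
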
      Signed-off-by: Chris Mason <chris.mason@fusionio.com>
      de3cb945
  14. 21 Feb, 2013 (1 commit)
  15. 20 Feb, 2013 (2 commits)
  16. 13 Dec, 2012 (1 commit)
  17. 12 Dec, 2012 (1 commit)
    • Btrfs: improve the noflush reservation · 08e007d2
      Committed by Miao Xie
      In some places (such as when evicting an inode), we simply cannot flush the
      reserved space of delalloc; flushing the delayed directory index and delayed
      inode is OK, but we don't try to flush those things and just give up when
      there is not enough space to be reserved.  This patch fixes this problem.
      
      We define 3 types of flush operations: NO_FLUSH, FLUSH_LIMIT and FLUSH_ALL.
      If we are inside a transaction, we must not flush anything or a deadlock
      would happen, so use NO_FLUSH.  If flushing the reserved space of delalloc
      would cause a deadlock, use FLUSH_LIMIT.  In the other cases, FLUSH_ALL is
      used, and we will flush everything.
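
      A compact sketch of how the three levels might steer the fallback when a
      reservation attempt fails; only the three enum values come from the patch,
      the rest of the names are made up.

      #include <stdio.h>

      enum flush_level {
              NO_FLUSH,       /* inside a transaction: flushing could deadlock */
              FLUSH_LIMIT,    /* flushing delalloc could deadlock; only flush
                                 delayed dir indexes and delayed inodes */
              FLUSH_ALL,      /* safe to flush everything, including delalloc */
      };

      /* Model of the fallback path: a reservation attempt has failed and the
       * flush level decides what we are allowed to do about it. */
      static void handle_failed_reservation(enum flush_level flush)
      {
              switch (flush) {
              case NO_FLUSH:
                      printf("no flushing allowed, return -ENOSPC\n");
                      break;
              case FLUSH_LIMIT:
                      printf("flush delayed items only, not delalloc\n");
                      break;
              case FLUSH_ALL:
                      printf("flush delalloc and delayed items, then retry\n");
                      break;
              }
      }

      int main(void)
      {
              handle_failed_reservation(FLUSH_LIMIT);  /* e.g. while evicting an inode */
              return 0;
      }
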
      Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
      Signed-off-by: Chris Mason <chris.mason@fusionio.com>
      08e007d2
  18. 02 Oct, 2012 (2 commits)
  19. 21 Sep, 2012 (1 commit)
  20. 29 Aug, 2012 (2 commits)
  21. 24 Jul, 2012 (2 commits)
    • Btrfs: zero unused bytes in inode item · 293f7e07
      Committed by Li Zefan
      The otime field is not zeroed, so users will see random otime values on an old
      filesystem with a future kernel that has otime support.

      The reserved bytes are also not zeroed, and we'll have a compatibility issue
      if we ever make use of those bytes.
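
      A tiny illustration of the problem; the struct below is a made-up stand-in,
      not the real btrfs_inode_item layout.  If the item is filled field by field,
      whatever was previously in the buffer shows through in the fields we never
      touch; zeroing the unused bytes keeps them well defined.

      #include <stdint.h>
      #include <stdio.h>
      #include <string.h>

      struct fake_inode_item {
              uint64_t size;
              uint64_t otime;         /* not filled in by older kernels */
              uint8_t  reserved[16];  /* must stay zero for future use */
      };

      static void fill_item(struct fake_inode_item *item, uint64_t size)
      {
              /* Zero the whole item first so otime and the reserved bytes are
               * defined even though we never assign them explicitly. */
              memset(item, 0, sizeof(*item));
              item->size = size;
      }

      int main(void)
      {
              struct fake_inode_item item;

              /* Simulate a dirty buffer that previously held other data. */
              memset(&item, 0xab, sizeof(item));
              fill_item(&item, 4096);
              printf("size=%llu otime=%llu reserved[0]=%u\n",
                     (unsigned long long)item.size,
                     (unsigned long long)item.otime,
                     (unsigned)item.reserved[0]);
              return 0;
      }
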
      Signed-off-by: Li Zefan <lizefan@huawei.com>
      293f7e07
    • Btrfs: flush delayed inodes if we're short on space · 96c3f433
      Committed by Josef Bacik
      Those crazy gentoo guys have been complaining about ENOSPC errors on their
      portage volumes.  This is because doing things like untar tends to create
      lots of new files, which soak up all the reservation space in the delayed
      inodes.  Usually this gets papered over by the fact that we will try to
      commit the transaction; however, if this happens in the wrong spot or we
      choose not to commit the transaction, you will be screwed.  So add the
      ability to explicitly flush delayed inodes to free up space (a rough model
      of the idea is sketched below).  Please test this out, guys, to make sure
      it works, since as usual I cannot reproduce it.
      Thanks,
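
      A rough model of that flush-and-retry idea (the counters and names below
      are made up, not the btrfs accounting): when a metadata reservation falls
      short, run the delayed inode items to release the space they hold, then
      retry before returning ENOSPC.

      #include <errno.h>
      #include <stdio.h>

      static long bytes_held_by_delayed_inodes = 1 << 20;
      static long bytes_free;

      /* Stand-in for flushing the delayed inode items back to the btree,
       * which releases the metadata space they had reserved. */
      static void run_delayed_items(void)
      {
              bytes_free += bytes_held_by_delayed_inodes;
              bytes_held_by_delayed_inodes = 0;
      }

      static int reserve_metadata(long bytes)
      {
              if (bytes_free >= bytes) {
                      bytes_free -= bytes;
                      return 0;
              }
              /* Short on space: flush delayed inodes and retry once. */
              run_delayed_items();
              if (bytes_free >= bytes) {
                      bytes_free -= bytes;
                      return 0;
              }
              return -ENOSPC;
      }

      int main(void)
      {
              printf("reserve returned %d\n", reserve_metadata(4096));
              return 0;
      }
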
      Signed-off-by: Josef Bacik <jbacik@fusionio.com>
      96c3f433
  22. 15 Jun, 2012 (1 commit)