1. 13 Apr 2015, 5 commits
  2. 27 Mar 2015, 1 commit
    • Btrfs: change the insertion criteria for the qgroup operations rbtree · bf691960
      Committed by Filipe Manana
      After looking at Liu Bo's recent patch (titled
      "Btrfs: fix comp_oper to get right order") I realized the search made by
      qgroup_oper_exists() was buggy because its rbtree navigation comparison
      function, comp_oper_exist(), only looks at the fields bytenr and ref_root
      of a tree node, ignoring the seq field completely. This was wrong because
      when we insert a node into the rbtree we use comp_oper(), which makes a
      decision based first on bytenr, then on seq, and then on the ref_root field.
      That means qgroup_oper_exists() could miss the fact that at least one
      operation with a given bytenr and ref_root exists.
      
      Consider the following simple example of a 3-node qgroup operations
      rbtree (created using comp_oper before this patch), where each node's key
      is a tuple with the shape (bytenr, seq, ref_root, op):
      
                                [ (4096, 2, 20, op X) ]
                               /                       \
                              /                         \
         [ (4096, 1, 5, op Y) ]                         [ (4096, 3, 10, op Z) ]
      
      qgroup_oper_exists(), when called to search for an existing operation for
      bytenr 4096 and ref root 10, wouldn't find anything because it would go to
      the left subtree instead of the right subtree, since comp_oper_exist()
      ignores the seq field completely.
      
      Fix this by changing the insertion navigation function to use the ref_root
      field right after using the bytenr field and before using the seq field,
      so that qgroup_oper_exists() / comp_oper_exist() work as expected.
      
      This patch applies on top of the patch mentioned above from Liu.
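
      As a minimal illustrative sketch of the new comparison order (simplified
      struct and comparator, not the actual code in fs/btrfs/qgroup.c):

        struct qgroup_oper {
                u64 bytenr;     /* extent byte number */
                u64 ref_root;   /* root the reference belongs to */
                u64 seq;        /* sequence number of the operation */
        };

        /* Order by bytenr, then ref_root, then seq, so a lookup keyed only on
         * (bytenr, ref_root) can never be steered into the wrong subtree by
         * differing seq values. */
        static int comp_oper(struct qgroup_oper *a, struct qgroup_oper *b)
        {
                if (a->bytenr != b->bytenr)
                        return a->bytenr < b->bytenr ? -1 : 1;
                if (a->ref_root != b->ref_root)
                        return a->ref_root < b->ref_root ? -1 : 1;
                if (a->seq != b->seq)
                        return a->seq < b->seq ? -1 : 1;
                return 0;
        }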
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Chris Mason <clm@fb.com>
  3. 14 Mar 2015, 1 commit
  4. 04 Mar 2015, 1 commit
  5. 17 Feb 2015, 1 commit
    • Btrfs: disk-io: replace root args iff only fs_info used · 01d58472
      Committed by Daniel Dressler
      This is the 3rd independent patch of a larger project to clean up btrfs's
      internal usage of btrfs_root. Many functions take a btrfs_root only to
      grab the fs_info struct.
      
      Requiring a root adds programmer overhead, and it is not obvious until
      inspection that these functions accept any valid root.
      
      This patch reduces the specificity of such functions to accept the
      fs_info directly.
      
      These patches can be applied independently and thus are not being
      submitted as a patch series. There should be about 26 patches by the
      project's completion. Each patch will clean up between 1 and 34
      functions.  Each patch covers a single file's functions.
      
      This patch affects the following function(s):
        1) csum_tree_block
        2) csum_dirty_buffer
        3) check_tree_block_fsid
        4) btrfs_find_tree_block
        5) clean_tree_block
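
      As an illustrative sketch of the pattern (simplified signatures, not the
      exact kernel prototypes), a function that only ever dereferences
      root->fs_info changes roughly like this:

        /* before: any valid root works, which is not obvious to callers */
        static int check_tree_block_fsid(struct btrfs_root *root,
                                         struct extent_buffer *eb)
        {
                struct btrfs_fs_info *fs_info = root->fs_info;
                /* ... uses fs_info only ... */
                return 0;
        }

        /* after: the real dependency is visible in the signature */
        static int check_tree_block_fsid(struct btrfs_fs_info *fs_info,
                                         struct extent_buffer *eb)
        {
                /* ... */
                return 0;
        }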
      Signed-off-by: Daniel Dressler <danieru.dressler@gmail.com>
      Signed-off-by: David Sterba <dsterba@suse.cz>
  6. 22 Jan 2015, 1 commit
  7. 02 Oct 2014, 1 commit
  8. 18 Sep 2014, 3 commits
    • btrfs: don't go readonly on existing qgroup items · 0b4699dc
      Committed by Mark Fasheh
      btrfs_drop_snapshot() leaves subvolume qgroup items on disk after
      completion. This can cause problems with snapshot creation. If a new
      snapshot tries to claim the deleted subvolume's id, btrfs will get -EEXIST
      from add_qgroup_item() and go read-only. The following commands will
      reproduce this problem (assume btrfs is on /dev/sda and is mounted at
      /btrfs):
      
      mkfs.btrfs -f /dev/sda
      mount -t btrfs /dev/sda /btrfs/
      btrfs quota enable /btrfs/
      btrfs su sna /btrfs/ /btrfs/snap
      btrfs su de /btrfs/snap
      sleep 45
      umount /btrfs/
      mount -t btrfs /dev/sda /btrfs/
      
      We can fix this by catching -EEXIST in add_qgroup_item() and
      initializing the existing items. We have the problem of orphaned
      relation items being on disk from an old snapshot, but that is outside
      the scope of this patch.
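
      A rough sketch of the shape of the fix in add_qgroup_item() (simplified;
      the item size argument is illustrative):

        ret = btrfs_insert_empty_item(trans, quota_root, path, &key,
                                      sizeof(struct btrfs_qgroup_info_item));
        /* An item left behind by a deleted subvolume is not fatal: fall
         * through and (re)initialize the existing item instead of erroring
         * out and forcing the filesystem read-only. */
        if (ret && ret != -EEXIST)
                goto out;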
      Signed-off-by: Mark Fasheh <mfasheh@suse.de>
      Signed-off-by: Chris Mason <clm@fb.com>
    • btrfs: add trace for qgroup accounting · d3982100
      Committed by Mark Fasheh
      We want this to debug qgroup changes on live systems.
      Signed-off-by: Mark Fasheh <mfasheh@suse.de>
      Reviewed-by: Josef Bacik <jbacik@fb.com>
      Signed-off-by: Chris Mason <clm@fb.com>
    • btrfs: use nodesize everywhere, kill leafsize · 707e8a07
      Committed by David Sterba
      The nodesize and leafsize were never of different values. Unify the
      usage and make nodesize the only one. Clean up the redundant checks and
      helpers.
      
      Shaves a few bytes from .text:
      
        text    data     bss     dec     hex filename
      852418   24560   23112  900090   dbbfa btrfs.ko.before
      851074   24584   23112  898770   db6d2 btrfs.ko.after
      Signed-off-by: David Sterba <dsterba@suse.cz>
      Signed-off-by: Chris Mason <clm@fb.com>
  9. 24 Aug 2014, 1 commit
    • Btrfs: fix task hang under heavy compressed write · 9e0af237
      Committed by Liu Bo
      This has been reported and discussed for a long time, and the hang occurs in
      both 3.15 and 3.16.

      Btrfs has migrated to the kernel workqueue, but the conversion introduced this
      hang.

      Btrfs has a kind of work that is queued in an ordered way, which means its
      ordered_func() must be processed in FIFO order, so it usually looks like:
      
      normal_work_helper(arg)
          work = container_of(arg, struct btrfs_work, normal_work);
      
          work->func() <---- (we name it work X)
          for ordered_work in wq->ordered_list
                  ordered_work->ordered_func()
                  ordered_work->ordered_free()
      
      The hang is a rare case. First, when we look for free space, we get an uncached
      block group, then we go to read its free space cache inode for free space
      information, so it will
      
      file a readahead request
          btrfs_readpages()
               for page that is not in page cache
                      __do_readpage()
                           submit_extent_page()
                                 btrfs_submit_bio_hook()
                                       btrfs_bio_wq_end_io()
                                       submit_bio()
                                       end_workqueue_bio() <--(ret by the 1st endio)
                                            queue a work(named work Y) for the 2nd
                                            also the real endio()
      
      So the hang occurs when work Y's work_struct and work X's work_struct happen
      to share the same address.

      A bit more explanation:
      
      A,B,C -- struct btrfs_work
      arg   -- struct work_struct
      
      kthread:
      worker_thread()
          pick up a work_struct from @worklist
          process_one_work(arg)
      	worker->current_work = arg;  <-- arg is A->normal_work
      	worker->current_func(arg)
      		normal_work_helper(arg)
      		     A = container_of(arg, struct btrfs_work, normal_work);
      
      		     A->func()
      		     A->ordered_func()
      		     A->ordered_free()  <-- A gets freed
      
      		     B->ordered_func()
      			  submit_compressed_extents()
      			      find_free_extent()
      				  load_free_space_inode()
      				      ...   <-- (the above readahead stack)
      				      end_workqueue_bio()
      					   btrfs_queue_work(work C)
      		     B->ordered_free()
      
      If work A sits early in wq->ordered_list and there are more ordered works
      queued after it, such as B->ordered_func(), its memory can be freed before
      normal_work_helper() returns, which means the kernel workqueue code in
      worker_thread() still has worker->current_work pointing at work A's
      normal_work, i.e. arg's address.

      Meanwhile, work C is allocated after work A is freed, and work C->normal_work
      and work A->normal_work are likely to share the same address (I confirmed this
      with ftrace output, so I'm not just guessing; it's rare though).

      When another kthread picks up work C->normal_work to process and finds our
      kthread is processing it (see find_worker_executing_work()), it treats
      work C as a collision and skips it, which ends up with nobody processing work C.
      
      So the situation is that our kthread is waiting forever on work C.
      
      Besides, there are other cases that can lead to deadlock, but the real problem
      is that all btrfs workqueues share one work->func, normal_work_helper. This
      patch therefore gives each workqueue its own helper function, which is only a
      wrapper of normal_work_helper.

      With this patch, I no longer hit the above hang.
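
      The per-queue helpers are roughly of this shape (a sketch; the helper
      names below are only examples):

        /* Each queue gets its own trivial wrapper, so the kernel workqueue's
         * collision check (same work_struct address *and* same current_func)
         * can no longer mistake a new work item that reuses a freed
         * work_struct address for one that is still being processed. */
        #define BTRFS_WORK_HELPER(name)                                    \
        void btrfs_##name(struct work_struct *arg)                         \
        {                                                                  \
                struct btrfs_work *work;                                   \
                work = container_of(arg, struct btrfs_work, normal_work); \
                normal_work_helper(work);                                  \
        }

        BTRFS_WORK_HELPER(endio_helper);
        BTRFS_WORK_HELPER(submit_helper);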
      Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
      Signed-off-by: Chris Mason <clm@fb.com>
  10. 21 Aug 2014, 1 commit
  11. 15 Aug 2014, 2 commits
    • btrfs: correctly handle return from ulist_add · f90e579c
      Committed by Mark Fasheh
      ulist_add() can return '1' on success, which qgroup_subtree_accounting()
      doesn't take into account. As a result, that value can be bubbled up to
      callers, causing an error to be printed. Fix this by only returning the
      value of ulist_add() when it indicates an error.
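
      A minimal sketch of the pattern (illustrative variable names, not the
      exact kernel hunk):

        ret = ulist_add(qgroups, qg->qgroupid, (u64)(uintptr_t)qg, GFP_ATOMIC);
        if (ret < 0)            /* a real error: propagate it */
                goto out;
        ret = 0;                /* ret == 1 only means "newly added" */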
      Signed-off-by: Mark Fasheh <mfasheh@suse.de>
      Signed-off-by: Chris Mason <clm@fb.com>
    • btrfs: qgroup: account shared subtrees during snapshot delete · 1152651a
      Committed by Mark Fasheh
      During its tree walk, btrfs_drop_snapshot() will skip any shared
      subtrees it encounters. This is incorrect when we have qgroups
      turned on as those subtrees need to have their contents
      accounted. In particular, the case we're concerned with is when
      removing our snapshot root leaves the subtree with only one root
      reference.
      
      In those cases we need to find the last remaining root and add
      each extent in the subtree to the corresponding qgroup exclusive
      counts.
      
      This patch implements the shared subtree walk and a new qgroup
      operation, BTRFS_QGROUP_OPER_SUB_SUBTREE. When an operation of
      this type is encountered during qgroup accounting, we search for
      any root references to that extent and, in the case that we find
      only one reference left, we go ahead and do the math on its
      exclusive counts.
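
      A rough sketch of that accounting decision (simplified, with illustrative
      variable names; the real logic lives in fs/btrfs/qgroup.c):

        struct ulist *roots = NULL;

        /* which roots still reference this extent after the snapshot delete? */
        ret = btrfs_find_all_roots(trans, fs_info, oper->bytenr, seq, &roots);
        if (ret < 0)
                goto out;

        /* the extent only becomes exclusive to a qgroup when exactly one root
         * reference remains, so only then adjust the exclusive counters */
        if (roots->nnodes == 1) {
                /* ... add the subtree's bytes to that root's qgroup excl ... */
        }
        ulist_free(roots);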
      Signed-off-by: Mark Fasheh <mfasheh@suse.de>
      Reviewed-by: Josef Bacik <jbacik@fb.com>
      Signed-off-by: Chris Mason <clm@fb.com>
  12. 14 Jun 2014, 1 commit
  13. 10 Jun 2014, 3 commits
    • Btrfs: free tmp ulist for qgroup rescan · 2a108409
      Committed by Josef Bacik
      Memory leaks are bad mmkay?
      Signed-off-by: Josef Bacik <jbacik@fb.com>
      Signed-off-by: Chris Mason <clm@fb.com>
    • Btrfs: add sanity tests for new qgroup accounting code · faa2dbf0
      Committed by Josef Bacik
      This exercises the various parts of the new qgroup accounting code.  We do some
      basic stuff and do some things with the shared refs to make sure all that code
      works.  I had to add a bunch of infrastructure because I needed to be able to
      insert items into a fake tree without having to do all the hard work myself;
      hopefully this will be useful in the future.  Thanks,
      Signed-off-by: Josef Bacik <jbacik@fb.com>
      Signed-off-by: Chris Mason <clm@fb.com>
    • Btrfs: rework qgroup accounting · fcebe456
      Committed by Josef Bacik
      Currently qgroups account for space by intercepting delayed ref updates to fs
      trees.  It does this by adding sequence numbers to delayed ref updates so that
      it can figure out how the tree looked before the update and adjust the
      counters properly.  The problem with this is that it does not allow delayed refs
      to be merged, so if you are, say, defragging an extent with 5k snapshots pointing
      to it, we will thrash the delayed ref lock because we need to go back and
      manually merge these things together.  Instead we want to process quota changes
      when we know they are going to happen, like when we first allocate an extent,
      free a reference for an extent, add new references, etc.  This patch
      accomplishes this by only adding qgroup operations for real ref changes.  We
      only modify the sequence number when we need to look up roots for bytenrs; this
      reduces the amount of churn on the sequence number and allows us to merge
      delayed refs as we add them most of the time.  This patch encompasses a bunch of
      architectural changes:
      
      1) qgroup ref operations: instead of tracking qgroup operations through the
      delayed refs we simply add new ref operations whenever we notice that we need to
      when we've modified the refs themselves.
      
      2) tree mod seq: we no longer have this separation of major/minor counters.
      This makes the sequence number stuff much more sane and we can remove some
      locking that was needed to protect the counter.

      3) delayed ref seq: we now read the tree mod seq number and use that as our
      sequence.  This means each new delayed ref doesn't have its own unique sequence
      number; rather, whenever we go to look up backrefs we increment the sequence
      number so we can make sure to keep any new operations from screwing up our
      world view at that given point.  This allows us to merge delayed refs during
      runtime.
      
      With all of these changes the delayed ref stuff is a little saner and the qgroup
      accounting stuff no longer goes negative in some cases like it was before.
      Thanks,
      Signed-off-by: Josef Bacik <jbacik@fb.com>
      Signed-off-by: Chris Mason <clm@fb.com>
  14. 11 Mar 2014, 2 commits
  15. 29 Jan 2014, 3 commits
  16. 01 Sep 2013, 5 commits
  17. 14 Jun 2013, 5 commits
    • Btrfs: fix qgroup rescan resume on mount · b382a324
      Committed by Jan Schmidt
      When called during mount, we cannot start the rescan worker thread until
      open_ctree is done. This commit restructures the qgroup rescan internals to
      enable a clean deferral of the rescan resume operation.

      First of all, the struct qgroup_rescan is removed, saving us a malloc and
      some initialization synchronization problems. Its only element (the worker
      struct) now lives within fs_info just as the rest of the rescan code.
      
      Then setting up a rescan worker is split into several reusable stages.
      Currently we have three different rescan startup scenarios:
      	(A) rescan ioctl
      	(B) rescan resume by mount
      	(C) rescan by quota enable
      
      Each case needs its own combination of the four following steps:
      	(1) set the progress [A, C: zero; B: state of umount]
      	(2) commit the transaction [A]
      	(3) set the counters [A, C: zero; B: state of umount]
      	(4) start worker [A, B, C]
      
      qgroup_rescan_init does step (1). There's no extra function added to commit
      a transaction, we've got that already. qgroup_rescan_zero_tracking does
      step (3). Step (4) is nothing more than a call to the generic
      btrfs_queue_worker.
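
      As a sketch, the ioctl path (case A) then composes the steps roughly like
      this (call shapes simplified; argument lists are illustrative):

        /* (1) set the progress to zero and mark a rescan as pending */
        ret = qgroup_rescan_init(fs_info, 0, 1);
        if (ret)
                return ret;

        /* (2) commit the running transaction so the rescan starts from a
         *     consistent point on disk */
        btrfs_commit_transaction(trans, fs_info->fs_root);

        /* (3) zero the qgroup counters that the rescan will recompute */
        qgroup_rescan_zero_tracking(fs_info);

        /* (4) hand the prepared worker to the generic queue */
        btrfs_queue_worker(&fs_info->qgroup_rescan_workers,
                           &fs_info->qgroup_rescan_work);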
      
      We also get rid of a double check for the rescan progress during
      btrfs_qgroup_account_ref, which is no longer required due to having step 2
      from the list above.
      
      As a side effect, this commit prepares to move the rescan start code from
      btrfs_run_qgroups (which is run during commit) to a less time-critical
      section.
      Signed-off-by: Jan Schmidt <list.btrfs@jan-o-sch.net>
      Signed-off-by: Josef Bacik <jbacik@fusionio.com>
    • Btrfs: avoid double free of fs_info->qgroup_ulist · eb1716af
      Committed by Jan Schmidt
      When btrfs_read_qgroup_config or btrfs_quota_enable returns non-zero, we've
      already freed fs_info->qgroup_ulist. The final btrfs_free_qgroup_config
      called from quota_disable then makes another ulist_free(fs_info->qgroup_ulist)
      call.
      
      We set fs_info->qgroup_ulist to NULL on the mentioned error paths, turning
      the ulist_free in btrfs_free_qgroup_config into a noop.
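
      The pattern on those error paths is the usual one (sketch):

        ulist_free(fs_info->qgroup_ulist);
        fs_info->qgroup_ulist = NULL;   /* a later ulist_free(NULL) is a no-op */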
      
      Cc: Wang Shilong <wangsl-fnst@cn.fujitsu.com>
      Signed-off-by: Jan Schmidt <list.btrfs@jan-o-sch.net>
      Signed-off-by: Josef Bacik <jbacik@fusionio.com>
    • Btrfs: fix memory patcher through fs_info->qgroup_ulist · 4373519d
      Committed by Jan Schmidt
      Commit 5b7c665e introduced fs_info->qgroup_ulist, which is allocated during
      btrfs_read_qgroup_config and meant to be used later by the qgroup accounting
      code. However, it is always freed before btrfs_read_qgroup_config returns,
      because the commit mentioned above adds a check for (ret), where a check
      for (ret < 0) would have been the right choice. This commit fixes the check.
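
      Conceptually, the error path check becomes (a sketch, not the exact hunk):

        out:
                if (ret < 0)    /* was: if (ret), which also freed the ulist when
                                 * a helper returned a harmless positive value */
                        ulist_free(fs_info->qgroup_ulist);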
      
      Cc: Wang Shilong <wangsl-fnst@cn.fujitsu.com>
      Signed-off-by: Jan Schmidt <list.btrfs@jan-o-sch.net>
      Signed-off-by: Josef Bacik <jbacik@fusionio.com>
    • Btrfs: add ioctl to wait for qgroup rescan completion · 57254b6e
      Committed by Jan Schmidt
      btrfs_qgroup_wait_for_completion waits until the currently running qgroup
      operation completes. It returns immediately when no rescan process is in
      progress. This is useful to automate things around the rescan process (e.g.
      testing).
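
      From userspace the new ioctl can be driven roughly like this (a sketch;
      error handling trimmed, mount point assumed to be /btrfs):

        #include <fcntl.h>
        #include <sys/ioctl.h>
        #include <unistd.h>
        #include <linux/btrfs.h>

        int main(void)
        {
                int fd = open("/btrfs", O_RDONLY);  /* any fd on the filesystem */
                if (fd < 0)
                        return 1;
                /* blocks until a running rescan finishes; returns immediately
                 * when no rescan is in progress */
                int ret = ioctl(fd, BTRFS_IOC_QUOTA_RESCAN_WAIT);
                close(fd);
                return ret ? 1 : 0;
        }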
      Signed-off-by: Jan Schmidt <list.btrfs@jan-o-sch.net>
      Signed-off-by: Josef Bacik <jbacik@fusionio.com>
    • Btrfs: introduce qgroup_ulist to avoid frequently allocating/freeing ulist · 1e8f9158
      Committed by Wang Shilong
      When doing qgroup accounting, we call ulist_alloc()/ulist_free() every time
      we want to walk the qgroup tree.

      By introducing 'qgroup_ulist', we only need to call ulist_alloc()/ulist_free()
      once. This reduces the sys time spent allocating memory; see the measurements
      below:
      
      fsstress -p 4 -n 10000 -d $dir
      
      With this patch:
      
      real    0m50.153s
      user    0m0.081s
      sys     0m6.294s
      
      real    0m51.113s
      user    0m0.092s
      sys     0m6.220s
      
      real    0m52.610s
      user    0m0.096s
      sys     0m6.125s	avg 6.213
      -----------------------------------------------------
      Without the patch:
      
      real    0m54.825s
      user    0m0.061s
      sys     0m10.665s
      
      real    1m6.401s
      user    0m0.089s
      sys     0m11.218s
      
      real    1m13.768s
      user    0m0.087s
      sys     0m10.665s       avg 10.849
      
      We can see the sys time reduced by ~43%.
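
      The reuse pattern is roughly the following (a sketch; where the single
      allocation happens is illustrative):

        /* allocated once, e.g. when quota is enabled or the config is read */
        fs_info->qgroup_ulist = ulist_alloc(GFP_NOFS);

        /* each walk of the qgroup hierarchy then just resets the preallocated
         * ulist instead of allocating and freeing a fresh one */
        ulist_reinit(fs_info->qgroup_ulist);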
      Signed-off-by: Wang Shilong <wangsl-fnst@cn.fujitsu.com>
      Signed-off-by: Josef Bacik <jbacik@fusionio.com>
  18. 07 May 2013, 3 commits