1. 11 Nov 2013, 12 commits
    • bcache: Add explicit keylist arg to btree_insert() · 4f3d4014
      Committed by Kent Overstreet
      Some refactoring - better to explicitly pass stuff around instead of
      having it all in the "big bag of state", struct btree_op. Going to prune
      struct btree_op quite a bit over time.
      Signed-off-by: Kent Overstreet <kmo@daterainc.com>
    • bcache: Convert btree_insert_check_key() to btree_insert_node() · e7c590eb
      Committed by Kent Overstreet
      This was the main point of all this refactoring - now,
      btree_insert_check_key() won't fail just because the leaf node happened
      to be full.
      Signed-off-by: Kent Overstreet <kmo@daterainc.com>
    • bcache: Insert multiple keys at a time · 403b6cde
      Committed by Kent Overstreet
      We'll often end up with a list of adjacent keys to insert -
      because bch_data_insert() may have to fragment the data it writes.
      
      Originally, to simplify things and avoid having to deal with corner
      cases bch_btree_insert() would pass keys from this list one at a time to
      btree_insert_recurse() - mainly because the list of keys might span leaf
      nodes, so it was easier this way.
      
      With the btree_insert_node() refactoring, it's now a lot easier to just
      pass down the whole list and have btree_insert_recurse() iterate over
      leaf nodes until it's done.
      Signed-off-by: Kent Overstreet <kmo@daterainc.com>
    • bcache: Add btree_insert_node() · 26c949f8
      Committed by Kent Overstreet
      The flow of control in the old btree insertion code was rather
      backwards; we'd recurse down the btree (in btree_insert_recurse()), and
      then if we needed to split, the keys to be inserted into the parent node
      would be effectively returned up to btree_insert_recurse(), which would
      notice there was more work to do and finish the insertion.
      
      The main problem with this was that the full logic for btree insertion
      could only be used by calling btree_insert_recurse(); if you'd gotten to
      a btree leaf some other way and had a key to insert, and it turned out
      that node needed to be split, you were SOL.
      
      This inverts the flow of control so btree_insert_node() does _full_
      btree insertion, including splitting - and takes a (leaf) btree node to
      insert into as a parameter.
      
      This means we can now _correctly_ handle cache misses - for cache
      misses, we need to insert a fake "check" key into the btree when we
      discover we have a cache miss - while we still have the btree locked.
      Previously, if the btree node was full inserting a cache miss would just
      fail.
      Signed-off-by: Kent Overstreet <kmo@daterainc.com>
    • bcache: Explicitly track btree node's parent · d6fd3b11
      Committed by Kent Overstreet
      This is prep work for the reworked btree insertion code.
      
      The way we set b->parent is ugly and hacky... the problem is, when
      btree_split() or garbage collection splits or rewrites a btree node, the
      parent changes for all its (potentially already cached) children.
      
      I may change this later and add some code to look through the btree node
      cache and find all our cached child nodes and change the parent pointer
      then...
      Signed-off-by: Kent Overstreet <kmo@daterainc.com>
    • bcache: Remove unnecessary check in should_split() · 8304ad4d
      Committed by Kent Overstreet
      Checking i->seq was redundant, because since ages ago we have always
      initialized the new bset when advancing b->written.
      Signed-off-by: Kent Overstreet <kmo@daterainc.com>
    • bcache: Stripe size isn't necessarily a power of two · 2d679fc7
      Committed by Kent Overstreet
      Originally I got this right... except that the divides didn't use
      do_div(), which broke 32-bit kernels. When I went to fix that, I forgot
      that the raid stripe size usually isn't a power of two... doh
      Signed-off-by: Kent Overstreet <kmo@daterainc.com>
    • bcache: Add on error panic/unregister setting · 77c320eb
      Committed by Kent Overstreet
      Works kind of like the ext4 setting, to panic or remount read-only on
      errors.
      Signed-off-by: Kent Overstreet <kmo@daterainc.com>
    • bcache: Use blkdev_issue_discard() · 49b1212d
      Committed by Kent Overstreet
      The old asynchronous discard code was really a relic from when all the
      allocation code was asynchronous - now that allocation runs out of a
      dedicated thread there's no point in keeping around all that complicated
      machinery.
      Signed-off-by: Kent Overstreet <kmo@daterainc.com>
    • bcache: Fix a lockdep splat · dd9ec84d
      Committed by Kent Overstreet
      bch_keybuf_del() takes a spinlock that can't be taken in interrupt context -
      whoops. Fortunately, this code isn't enabled by default (you have to toggle a
      sysfs thing).
      Signed-off-by: Kent Overstreet <kmo@daterainc.com>
    • bcache: Fix a journalling performance bug · 7857d5d4
      Committed by Kent Overstreet
    • bcache: Fix dirty_data accounting · 1fa8455d
      Committed by Kent Overstreet
      Dirty data accounting wasn't quite right - firstly, we were adding the key we're
      inserting after it could have merged with another dirty key already in the
      btree, and secondly we could sometimes pass the wrong offset to
      bcache_dev_sectors_dirty_add() for dirty data we were overwriting - which is
      important when tracking dirty data by stripe.
      Signed-off-by: Kent Overstreet <kmo@daterainc.com>
      Cc: linux-stable <stable@vger.kernel.org> # >= v3.10
  2. 09 Nov 2013, 28 commits