1. 11 Nov 2013, 5 commits
  2. 25 Sep 2013, 3 commits
  3. 12 Jul 2013, 1 commit
    • bcache: Journal replay fix · faa56736
      Committed by Kent Overstreet
      The journal replay code starts by finding something that looks like a
      valid journal entry, then it does a binary search over the unchecked
      region of the journal for the journal entries with the highest sequence
      numbers.
      
      Trouble is, the logic was wrong: journal_read_bucket() returns true if
      it found journal entries we need, but when the range of journal entries
      we're looking for wraps around the end of the journal, it could return
      true without having found the highest sequence number seen so far - and
      then the binary search did the wrong thing. Whoops. (A simplified
      illustration of why that breaks the binary search follows this entry.)
      Signed-off-by: Kent Overstreet <kmo@daterainc.com>
      Cc: linux-stable <stable@vger.kernel.org> # >= v3.10
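
      A minimal userspace sketch of the point above - not the bcache code; the
      ring contents and the found_needed() helper are made up for illustration.
      Binary search needs a predicate that flips at most once across the range
      being searched; once the live journal range wraps around the end of the
      ring of buckets, "found entries we need" is true, then false, then true
      again, so a search keyed on it can step the wrong way - the search has to
      be keyed on whether a probe actually raised the highest sequence number
      seen so far.

      #include <stdbool.h>
      #include <stdio.h>

      #define NR_BUCKETS 8

      /* 0 == empty bucket; the live range wraps: 104..106 at the end, 107..108 at the front */
      static const unsigned long long seq[NR_BUCKETS] = { 107, 108, 0, 0, 0, 104, 105, 106 };

      /* Made-up stand-in for "journal_read_bucket() found journal entries we need" */
      static bool found_needed(unsigned b)
      {
              return seq[b] != 0;
      }

      int main(void)
      {
              unsigned b;

              /* Prints 1 1 0 0 0 1 1 1 - true/false/true, so it can't drive a bisection */
              for (b = 0; b < NR_BUCKETS; b++)
                      printf("bucket %u: seq %3llu found_needed=%d\n",
                             b, seq[b], found_needed(b));
              return 0;
      }
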
  4. 02 Jul 2013, 1 commit
  5. 27 Jun 2013, 2 commits
    • bcache: Fix/revamp tracepoints · c37511b8
      Committed by Kent Overstreet
      The tracepoints were reworked to be more sensible, and a null pointer
      deref in one of the tracepoints was fixed.

      Converted some of the pr_debug()s to tracepoints - this is partly a
      performance optimization; it used to be that without DEBUG or
      CONFIG_DYNAMIC_DEBUG, pr_debug() was an empty macro, but at some point
      it was changed to an empty inline function.

      Some of the pr_debug() statements had rather expensive function calls
      as part of their arguments, so those calls were getting run unnecessarily
      even on non-debug kernels - in some fast paths, too. (A small sketch of
      that cost follows this entry.)
      Signed-off-by: Kent Overstreet <koverstreet@google.com>
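
      A hypothetical userspace sketch of the cost described above - dbg_inline(),
      dbg_macro() and expensive() are invented names, not kernel APIs. When the
      disabled debug path is an empty inline function, its arguments still get
      evaluated, so an expensive call buried in them runs anyway; when it is an
      empty macro (or a tracepoint whose disabled branch is skipped), the
      arguments never run.

      #include <stdio.h>

      static unsigned long calls;     /* counts how often expensive() really ran */

      static unsigned long expensive(void)
      {
              calls++;                /* stand-in for a costly call in a fast path */
              return calls;
      }

      /* "Debug off" as an empty inline function: arguments are still evaluated */
      static inline void dbg_inline(const char *fmt, unsigned long v)
      {
              (void)fmt;
              (void)v;
      }

      /* "Debug off" as an empty macro: arguments vanish during preprocessing */
      #define dbg_macro(fmt, v) do { } while (0)

      int main(void)
      {
              dbg_inline("value %lu\n", expensive());
              dbg_macro("value %lu\n", expensive());

              printf("expensive() ran %lu time(s)\n", calls); /* prints 1, not 2 */
              return 0;
      }
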
    • bcache: Refactor btree io · 57943511
      Committed by Kent Overstreet
      The most significant change is that btree reads are now done
      synchronously, instead of asynchronously with the post-read work done
      from a workqueue.
      
      Reads were originally asynchronous because we can't block on IO under
      generic_make_request(). But we already have a mechanism to punt cache
      lookups to a workqueue when needed, so by just using that we don't have
      to deal with the complexity of doing things asynchronously.
      
      The main benefit is this makes the locking situation saner; we can hold
      our write lock on the btree node until we're finished reading it, and we
      don't need that btree_node_read_done() flag anymore.
      
      Also, for writes, btree_write() was broken out into btree_node_write()
      and btree_leaf_dirty() - the old code with the boolean argument was dumb
      and confusing. (A generic sketch of that kind of split follows this
      entry.)
      
      The prio_blocked mechanism was improved a bit too: the only counter is
      now in struct btree_write, and we no longer mess with transferring a
      count from struct btree.
      
      This required changing garbage collection to block prios at the start
      and unblock them when it finishes, which is cleaner than what it was
      doing anyway (the old code had mostly the same effect, but did it in a
      convoluted way).
      
      And the btree iter that btree_node_read_done() uses was converted to a
      real mempool.
      Signed-off-by: Kent Overstreet <koverstreet@google.com>
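
      A generic sketch of the boolean-argument split described above - struct
      node, node_write() and node_mark_dirty() are invented stand-ins, not the
      bcache functions. Replacing one entry point that takes a "write it now?"
      flag with two intention-revealing functions makes each call site say what
      it does.

      #include <stdbool.h>
      #include <stdio.h>

      struct node {
              bool dirty;
      };

      /* Before: callers pass a flag, and the call sites read ambiguously */
      static void node_write_old(struct node *n, bool now)
      {
              if (now) {
                      printf("writing node\n");
                      n->dirty = false;
              } else {
                      n->dirty = true;        /* just mark it for a later write */
              }
      }

      /* After: two entry points whose names say what they do */
      static void node_write(struct node *n)
      {
              printf("writing node\n");
              n->dirty = false;
      }

      static void node_mark_dirty(struct node *n)
      {
              n->dirty = true;
      }

      int main(void)
      {
              struct node n = { .dirty = false };

              node_write_old(&n, false);      /* what does "false" mean here? */
              node_mark_dirty(&n);            /* reads exactly as it behaves */
              node_write(&n);
              return 0;
      }
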
  6. 09 Apr 2013, 1 commit
  7. 29 Mar 2013, 1 commit
  8. 26 Mar 2013, 1 commit
  9. 24 Mar 2013, 1 commit