    bcache: Fix a writeback performance regression · c2a4f318
    Kent Overstreet authored
    Background writeback works by scanning the btree for dirty data, adding
    those keys into a fixed-size buffer (the keybuf), and then writing each
    dirty key in the keybuf out to the backing device.
    
    When read_dirty() finishes and it's time to scan for more dirty data, we
    need to wait for the outstanding writeback IO to finish first - those
    writes still occupy slots in the keybuf (so that foreground writes can
    check against them to avoid races). Without that wait, we continually
    rescan while there is room to add at most a key or two to the keybuf,
    and that rescanning takes locks that starve foreground IO.  Doh.
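    As a rough illustration of the pattern the fix restores (a minimal
    userspace sketch, not the actual bcache code - KEYBUF_SLOTS, DIRTY_KEYS
    and writeback_io() are hypothetical names), the loop below fills a
    fixed-size keybuf, issues the writeback IO, and then waits for all
    in-flight writes to complete before rescanning:

    /*
     * Sketch of "scan, write back, wait, rescan".  Build with -pthread.
     */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    #define KEYBUF_SLOTS 4     /* fixed-size keybuf, as in the description */
    #define DIRTY_KEYS   10    /* pretend the btree holds this many dirty keys */

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  io_done = PTHREAD_COND_INITIALIZER;
    static int in_flight;      /* writeback IOs still holding keybuf slots */

    /* Simulated backing-device write: completes asynchronously. */
    static void *writeback_io(void *arg)
    {
            int key = (int)(long)arg;

            usleep(1000);              /* pretend the write takes a while */
            pthread_mutex_lock(&lock);
            in_flight--;               /* this keybuf slot is free again */
            printf("wrote key %d, %d IOs still in flight\n", key, in_flight);
            pthread_cond_signal(&io_done);
            pthread_mutex_unlock(&lock);
            return NULL;
    }

    int main(void)
    {
            int next_key = 0;

            while (next_key < DIRTY_KEYS) {
                    int batch = 0;

                    /* "Refill": scan for dirty keys, up to KEYBUF_SLOTS of them. */
                    pthread_mutex_lock(&lock);
                    while (batch < KEYBUF_SLOTS && next_key < DIRTY_KEYS) {
                            pthread_t tid;

                            in_flight++;
                            pthread_create(&tid, NULL, writeback_io,
                                           (void *)(long)next_key);
                            pthread_detach(tid);
                            next_key++;
                            batch++;
                    }

                    /*
                     * The point of the fix: before rescanning, wait for the
                     * outstanding writeback IO.  Without this we would loop
                     * straight back to the scan while nearly every slot is
                     * still occupied, adding a key or two per pass and taking
                     * locks that starve foreground IO.
                     */
                    while (in_flight)
                            pthread_cond_wait(&io_done, &lock);
                    pthread_mutex_unlock(&lock);

                    printf("keybuf drained, rescanning for dirty data\n");
            }
            return 0;
    }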
    Signed-off-by: Kent Overstreet <kmo@daterainc.com>
    Cc: linux-stable <stable@vger.kernel.org> # >= v3.10
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>