1. 15 May, 2018: 3 commits
    • block: Convert bio_set to mempool_init() · 8aa6ba2f
      Kent Overstreet authored
      Minor performance improvement by getting rid of pointer indirections
      from allocation/freeing fastpaths.
      Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • mempool: Add mempool_init()/mempool_exit() · c1a67fef
      Kent Overstreet authored
      Allows mempools to be embedded in other structs, getting rid of a
      pointer indirection from allocation fastpaths.
      
      mempool_exit() is safe to call on an uninitialized but zeroed mempool.
      Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
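      The embedding this commit enables can be sketched as follows. This is a hedged illustration of the in-kernel API (not compilable outside a kernel tree); `struct foo` and `foo_cache` are hypothetical names, while `mempool_init()`, `mempool_exit()`, `mempool_alloc_slab`, and `mempool_free_slab` are the real helpers:

      ```c
      #include <linux/mempool.h>
      #include <linux/slab.h>

      /* Before: the mempool is reached through a pointer, so every
       * allocation pays an extra indirection. */
      struct foo_old {
      	mempool_t *pool;	/* set up with mempool_create() */
      };

      /* After: the mempool is embedded directly in the struct. */
      struct foo {
      	mempool_t pool;
      };

      static struct kmem_cache *foo_cache;	/* hypothetical slab cache */

      static int foo_setup(struct foo *f)
      {
      	/* Back the embedded pool with foo_cache, keeping a reserve
      	 * of 16 elements for forward progress under memory pressure. */
      	return mempool_init(&f->pool, 16, mempool_alloc_slab,
      			    mempool_free_slab, foo_cache);
      }

      static void foo_teardown(struct foo *f)
      {
      	/* Per the commit message, safe to call even if f->pool was
      	 * never initialized, as long as it was zeroed. */
      	mempool_exit(&f->pool);
      }
      ```

      The bio_set conversion in 8aa6ba2f follows the same pattern for the pools inside struct bio_set.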
    • sbitmap: fix race in wait batch accounting · c854ab57
      Jens Axboe authored
      If we have multiple callers of sbq_wake_up(), we can end up in a
      situation where the wait_cnt will continually go more and more
      negative. Consider the case where our wake batch is 1, hence
      wait_cnt will start out as 1.
      
      wait_cnt == 1
      
      CPU0				CPU1
      atomic_dec_return(), cnt == 0
      				atomic_dec_return(), cnt == -1
      				cmpxchg(-1, 0) (succeeds)
      				[wait_cnt now 0]
      cmpxchg(0, 1) (fails)
      
      This ends up with wait_cnt being 0, so we'll wake up immediately
      the next time. Going through the same loop as above again then
      leaves wait_cnt at -1.
      
      For the case where we have a larger wake batch, the only
      difference is that the starting point will be higher. We'll
      still end up with continually smaller batch wakeups, which
      defeats the purpose of the rolling wakeups.
      
      Always reset the wait_cnt to the batch value. Then it doesn't
      matter who wins the race. But ensure that whoever does win
      the race is the one that increments the ws index and wakes up
      our batch count; the loser calls __sbq_wake_up() again to
      account its wakeups towards the next active wait state index.
      
      Fixes: 6c0ca7ae ("sbitmap: fix wakeup hang after sbq resize")
      Reviewed-by: Omar Sandoval <osandov@fb.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
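      The reset-to-batch logic described above can be sketched in userspace C. This is a hedged simulation using C11 stdatomic, not the kernel's actual sbitmap code; `sbq_wake_up_sketch` and `wake_batch` are hypothetical stand-ins for `__sbq_wake_up()` and the queue's wake batch:

      ```c
      #include <stdatomic.h>
      #include <stdio.h>

      static int wake_batch = 1;
      static atomic_int wait_cnt;

      /* Returns 1 if this caller won the race and should advance the
       * wait-state index and issue the batch of wakeups; 0 otherwise. */
      static int sbq_wake_up_sketch(void)
      {
      	/* Like the kernel's atomic_dec_return(). */
      	int cnt = atomic_fetch_sub(&wait_cnt, 1) - 1;

      	if (cnt <= 0) {
      		/* The fix: compare against the value we just observed,
      		 * but always reset to the full batch. However far
      		 * negative concurrent decrements pushed the counter,
      		 * the winner lands it back at wake_batch. */
      		int expected = cnt;
      		if (atomic_compare_exchange_strong(&wait_cnt, &expected,
      						   wake_batch))
      			return 1; /* winner: bump ws index, wake batch */
      		/* Loser: someone else reset the counter; in the real
      		 * code the loser calls __sbq_wake_up() again to account
      		 * its wakeup against the next wait state. */
      	}
      	return 0;
      }

      int main(void)
      {
      	/* The batch == 1 scenario from the commit message. */
      	atomic_init(&wait_cnt, 1);
      	wake_batch = 1;

      	int winners = 0;
      	for (int i = 0; i < 8; i++)
      		winners += sbq_wake_up_sketch();

      	/* Serially, every call decrements to 0 and wins the reset,
      	 * so the counter never drifts negative across rounds. */
      	printf("wait_cnt=%d winners=%d\n",
      	       atomic_load(&wait_cnt), winners); /* wait_cnt=1 winners=8 */
      	return 0;
      }
      ```

      A single-threaded run cannot exhibit the race itself, but it shows the invariant the cmpxchg restores: wait_cnt always returns to the batch value, regardless of which observed (possibly negative) value the winning caller compared against.
      
      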
  2. 14 May, 2018: 7 commits
  3. 12 May, 2018: 7 commits
  4. 11 May, 2018: 9 commits
  5. 10 May, 2018: 1 commit
  6. 09 May, 2018: 12 commits
  7. 08 May, 2018: 1 commit