1. 19 Jun, 2017 2 commits
  2. 27 May, 2017 1 commit
  3. 04 May, 2017 1 commit
  4. 02 May, 2017 1 commit
  5. 27 Apr, 2017 1 commit
  6. 21 Apr, 2017 1 commit
  7. 08 Apr, 2017 2 commits
  8. 07 Apr, 2017 4 commits
    • blk-mq-sched: fix crash in switch error path · 54d5329d
      Omar Sandoval authored
      In elevator_switch(), if blk_mq_init_sched() fails, we attempt to fall
      back to the original scheduler. However, at this point, we've already
      torn down the original scheduler's tags, so this causes a crash. Doing
      the fallback like the legacy elevator path is much harder for mq, so fix
      it by just falling back to none, instead.
      Signed-off-by: Omar Sandoval <osandov@fb.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
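      A minimal sketch of the fixed error path, not verbatim kernel code: the
      helper name elevator_switch_mq() is illustrative, and locking, sysfs
      registration, and teardown details are elided.

      /* Sketch: on scheduler-switch failure, fall back to "none". */
      static int elevator_switch_mq(struct request_queue *q,
                                    struct elevator_type *new_e)
      {
              int ret;

              if (q->elevator)
                      elevator_exit(q->elevator);  /* old sched tags are torn down here */

              if (!new_e) {
                      q->elevator = NULL;          /* explicit switch to "none" */
                      return 0;
              }

              ret = blk_mq_init_sched(q, new_e);
              if (ret) {
                      /*
                       * We cannot restore the old scheduler: its tags no
                       * longer exist. Falling back to "none" avoids the
                       * crash the commit describes.
                       */
                      q->elevator = NULL;
              }
              return ret;
      }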
    • blk-mq-sched: set up scheduler tags when bringing up new queues · 93252632
      Omar Sandoval authored
      If a new hardware queue is added at runtime, we don't allocate scheduler
      tags for it, leading to a crash. This hooks up the scheduler framework
      to blk_mq_{init,exit}_hctx() to make sure everything gets properly
      initialized/freed.
      Signed-off-by: Omar Sandoval <osandov@fb.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
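      A sketch of the hook-up under stated assumptions: the helper names
      blk_mq_sched_init_hctx()/blk_mq_sched_exit_hctx() and the elided bodies
      are illustrative, not the verbatim kernel code.

      static int blk_mq_init_hctx(struct request_queue *q,
                                  struct blk_mq_tag_set *set,
                                  struct blk_mq_hw_ctx *hctx,
                                  unsigned int hctx_idx)
      {
              /* ... driver tag and context setup elided ... */

              /* Allocate scheduler tags for a hctx added at runtime, too. */
              if (blk_mq_sched_init_hctx(q, hctx, hctx_idx))
                      return -ENOMEM;

              return 0;
      }

      static void blk_mq_exit_hctx(struct request_queue *q,
                                   struct blk_mq_tag_set *set,
                                   struct blk_mq_hw_ctx *hctx,
                                   unsigned int hctx_idx)
      {
              /* Free scheduler tags symmetrically with init. */
              blk_mq_sched_exit_hctx(q, hctx, hctx_idx);

              /* ... remaining teardown elided ... */
      }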
    • blk-mq-sched: refactor scheduler initialization · 6917ff0b
      Omar Sandoval authored
      Preparation cleanup for the next couple of fixes: push
      blk_mq_sched_setup() and e->ops.mq.init_sched() into a helper.
      Signed-off-by: Omar Sandoval <osandov@fb.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
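      Roughly, the consolidation the commit describes looks like the sketch
      below; the real blk_mq_init_sched() handles tag sizing and error
      unwinding that this omits.

      /* Sketch of the new helper: common setup plus scheduler-private init. */
      int blk_mq_init_sched(struct request_queue *q, struct elevator_type *e)
      {
              int ret;

              if (!e) {
                      q->elevator = NULL;     /* "none" needs no setup */
                      return 0;
              }

              ret = blk_mq_sched_setup(q);    /* per-hctx scheduler tags */
              if (ret)
                      return ret;

              ret = e->ops.mq.init_sched(q, e);  /* e.g. mq-deadline init */
              if (ret)
                      blk_mq_sched_teardown(q);

              return ret;
      }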
    • blk-mq: use the right hctx when getting a driver tag fails · 81380ca1
      Omar Sandoval authored
      While dispatching requests, if we fail to get a driver tag, we mark the
      hardware queue as waiting for a tag and put the requests on a
      hctx->dispatch list to be run later when a driver tag is freed. However,
      blk_mq_dispatch_rq_list() may dispatch requests from multiple hardware
      queues if using a single-queue scheduler with a multiqueue device. If
      blk_mq_get_driver_tag() fails, it doesn't update the hardware queue we
      are processing. This means we end up using the hardware queue of the
      previous request, which may or may not be the same as that of the
      current request. If it isn't, the wrong hardware queue will end up
      waiting for a tag, and the requests will be on the wrong dispatch list,
      leading to a hang.
      
      The fix is twofold (sketched after this entry):
      
      1. Make sure we save which hardware queue we were trying to get a
         request for in blk_mq_get_driver_tag() regardless of whether it
         succeeds or not.
      2. Make blk_mq_dispatch_rq_list() take a request_queue instead of a
         blk_mq_hw_queue to make it clear that it must handle multiple
         hardware queues, since I've already messed this up on a couple of
         occasions.
      
      This didn't appear in testing with nvme and mq-deadline because nvme has
      more driver tags than the default number of scheduler tags. However,
      with the blk_mq_update_nr_hw_queues() fix, it showed up with nbd.
      Signed-off-by: Omar Sandoval <osandov@fb.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
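      A condensed sketch of both changes, simplified from block/blk-mq.c of
      that era; tag-waiting and flag handling are elided.

      static bool blk_mq_get_driver_tag(struct request *rq,
                                        struct blk_mq_hw_ctx **hctx, bool wait)
      {
              struct blk_mq_alloc_data data = {
                      .q = rq->q,
                      .hctx = blk_mq_map_queue(rq->q, rq->mq_ctx->cpu),
                      .flags = wait ? 0 : BLK_MQ_REQ_NOWAIT,
              };

              rq->tag = blk_mq_get_tag(&data);
              /*
               * Fix 1: always report which hctx we tried, so that on
               * failure the caller marks the *right* queue as waiting
               * for a tag.
               */
              if (hctx)
                      *hctx = data.hctx;
              return rq->tag != -1;
      }

      /*
       * Fix 2: take the request_queue, not a single hctx, since the list
       * may carry requests for several hardware queues.
       */
      bool blk_mq_dispatch_rq_list(struct request_queue *q,
                                   struct list_head *list);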
  9. 02 Mar, 2017 3 commits
  10. 24 Feb, 2017 1 commit
    • blk-mq-sched: separate mark hctx and queue restart operations · d38d3515
      Omar Sandoval authored
      In blk_mq_sched_dispatch_requests(), we call blk_mq_sched_mark_restart()
      after we dispatch requests left over on our hardware queue dispatch
      list. This is so we'll go back and dispatch requests from the scheduler.
      In this case, it's only necessary to restart the hardware queue that we
      are running; there's no reason to run other hardware queues just because
      we are using shared tags.
      
      So, split out blk_mq_sched_mark_restart() into two operations, one for
      just the hardware queue and one for the whole request queue. The core
      code only needs the hctx variant, but I/O schedulers will want to use
      both.
      
      This also requires adjusting blk_mq_sched_restart_queues() to always
      check the queue restart flag, not just when using shared tags.
      Signed-off-by: Omar Sandoval <osandov@fb.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
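      The split can be sketched as below; the flag names match the commit
      era, but treat the bodies as illustrative rather than verbatim.

      static inline void blk_mq_sched_mark_restart_hctx(struct blk_mq_hw_ctx *hctx)
      {
              /* Re-run only this hardware queue. */
              if (!test_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state))
                      set_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state);
      }

      static inline void blk_mq_sched_mark_restart_queue(struct request_queue *q)
      {
              /* An I/O scheduler may need every hardware queue re-run. */
              if (!test_bit(QUEUE_FLAG_RESTART, &q->queue_flags))
                      set_bit(QUEUE_FLAG_RESTART, &q->queue_flags);
      }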
  11. 23 Feb, 2017 1 commit
  12. 18 Feb, 2017 2 commits
  13. 11 Feb, 2017 1 commit
  14. 09 Feb, 2017 1 commit
  15. 04 Feb, 2017 1 commit
    • block: free merged request in the caller · e4d750c9
      Jens Axboe authored
      If we end up doing a request-to-request merge when we have completed
      a bio-to-request merge, we free the request from deep down in that
      path. For blk-mq-sched, the merge path has to hold the appropriate
      lock, but we don't need it for freeing the request. And in fact
      holding the lock is problematic, since we are now calling the
      mq sched put_rq_private() hook with the lock held. Other call paths
      do not hold this lock.
      
      Fix this inconsistency by ensuring that the caller frees a merged
      request. Then we can do it outside of the lock, making it both more
      efficient and fixing the blk-mq-sched problem of invoking parts of
      the scheduler with an unknown lock state.
      Reported-by: Paolo Valente <paolo.valente@linaro.org>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      Reviewed-by: Omar Sandoval <osandov@fb.com>
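      The shape of the fix, as a hedged sketch: the out-parameter convention
      matches the commit, but the caller, its name, and the lock used here
      are purely illustrative.

      /*
       * The merge helper now hands back the now-redundant request via
       * *merged_request instead of freeing it while the lock is held.
       */
      bool blk_mq_sched_try_merge(struct request_queue *q, struct bio *bio,
                                  struct request **merged_request);

      static DEFINE_SPINLOCK(example_lock);  /* stand-in for a scheduler lock */

      /* Hypothetical caller, e.g. an I/O scheduler's bio-merge path. */
      static bool example_bio_merge(struct request_queue *q, struct bio *bio)
      {
              struct request *free = NULL;
              bool merged;

              spin_lock(&example_lock);
              merged = blk_mq_sched_try_merge(q, bio, &free);
              spin_unlock(&example_lock);

              /* Free outside the lock: put_rq_private() now runs unlocked. */
              if (free)
                      blk_mq_free_request(free);
              return merged;
      }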
  16. 03 Feb, 2017 1 commit
  17. 28 Jan, 2017 3 commits
  18. 27 Jan, 2017 4 commits
  19. 18 Jan, 2017 2 commits