1. Dec 18, 2018 (2 commits)
    • blk-mq: fix dispatch from sw queue · c16d6b5a
      Committed by Ming Lei
      When a request is added to the rq list of a sw queue (ctx), the rq
      may come from a different type of hctx, especially after multiple
      queue mapping is introduced.

      So when dispatching requests from the sw queue via
      blk_mq_flush_busy_ctxs() or blk_mq_dequeue_from_ctx(), a request
      belonging to a different hctx queue type can be dispatched to the
      current hctx when the read or poll queue is enabled.

      This patch fixes the issue by introducing per-queue-type lists.
      
      Cc: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Ming Lei <ming.lei@redhat.com>

      Changed by me to not use separate cacheline-aligned lists; just place
      them all in the same cacheline where we previously had the one list
      and lock.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
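
      A minimal C sketch of the per-queue-type layout described above,
      assuming the HCTX_TYPE_* naming from the multi queue mapping series;
      the exact field layout is illustrative:

          enum hctx_type {
                  HCTX_TYPE_DEFAULT,      /* all I/O not otherwise accounted for */
                  HCTX_TYPE_READ,         /* just for READ I/O */
                  HCTX_TYPE_POLL,         /* polled I/O of all types */
                  HCTX_MAX_TYPES,
          };

          struct blk_mq_ctx {
                  struct {
                          spinlock_t              lock;
                          /* one sw-queue list per hctx type, sharing the
                           * cacheline that used to hold the single list */
                          struct list_head        rq_lists[HCTX_MAX_TYPES];
                  } ____cacheline_aligned_in_smp;
                  /* ... remaining per-ctx fields unchanged ... */
          };

      With this split, blk_mq_flush_busy_ctxs() and blk_mq_dequeue_from_ctx()
      only touch the list matching the current hctx's type, so a read- or
      poll-queue request can no longer leak into the wrong hctx.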
    • block: mq-deadline: Fix write completion handling · 7211aef8
      Committed by Damien Le Moal
      For a zoned block device using mq-deadline, if a write request for a
      zone is received while another write was already dispatched for the same
      zone, dd_dispatch_request() will return NULL and the newly inserted
      write request is kept in the scheduler queue waiting for the ongoing
      zone write to complete. With this behavior, when no other request has
      been dispatched, rq_list in blk_mq_sched_dispatch_requests() is empty
      and blk_mq_sched_mark_restart_hctx() is not called. This in turn
      causes __blk_mq_free_request()'s call to blk_mq_sched_restart() to
      not run the queue when the already dispatched write request completes.
      The newly inserted request stays stuck in the scheduler queue until
      another request is eventually submitted.

      This problem does not affect SCSI disks, as the SCSI stack handles
      queue restarts on request completion. However, it can be triggered
      with the null_blk driver with zoned mode enabled.
      
      Fix this by always requesting a queue restart in dd_dispatch_request()
      if no request was dispatched while WRITE requests are queued.
      
      Fixes: 5700f691 ("mq-deadline: Introduce zone locking support")
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>

      Add missing export of blk_mq_sched_restart().
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
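
      A hedged sketch of the fix, assuming mq-deadline's
      dd_dispatch_request()/__dd_dispatch_request() split and its
      per-direction fifo_list; the exact condition in the upstream patch
      may differ:

          static struct request *dd_dispatch_request(struct blk_mq_hw_ctx *hctx)
          {
                  struct deadline_data *dd = hctx->queue->elevator->elevator_data;
                  struct request *rq;

                  spin_lock(&dd->lock);
                  rq = __dd_dispatch_request(dd);
                  /*
                   * Nothing was dispatched while writes are still queued
                   * (e.g. blocked on a zone write lock): ask for a restart
                   * so the queue is run again when the in-flight write
                   * completes.
                   */
                  if (!rq && blk_queue_is_zoned(hctx->queue) &&
                      !list_empty(&dd->fifo_list[WRITE]))
                          blk_mq_sched_mark_restart_hctx(hctx);
                  spin_unlock(&dd->lock);

                  return rq;
          }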
  2. Dec 16, 2018 (1 commit)
  3. Nov 21, 2018 (1 commit)
  4. Nov 20, 2018 (1 commit)
  5. Nov 16, 2018 (1 commit)
  6. Nov 08, 2018 (5 commits)
  7. Aug 21, 2018 (1 commit)
    • blk-mq: init hctx sched after update ctx and hctx mapping · d48ece20
      Committed by Jianchao Wang
      Currently, when updating nr_hw_queues, the IO scheduler's init_hctx is
      invoked before the mapping between ctx and hctx has been set up
      correctly by blk_mq_map_swqueue. An IO scheduler's init_hctx (kyber)
      may depend on this mapping, get a wrong result, and finally panic.
      A simple way to fix this is to switch the IO scheduler to 'none'
      before updating nr_hw_queues, and then switch it back afterwards.
      blk_mq_sched_init_/exit_hctx are removed since nobody uses them
      any more.
      Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
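
      A hedged sketch of the resulting ordering, assuming switch helpers
      named as in the description (blk_mq_elv_switch_none/_back); error
      handling and the saved-elevator bookkeeping are omitted:

          static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set,
                                                   int nr_hw_queues)
          {
                  struct request_queue *q;

                  /* detach schedulers so their init_hctx() can't run
                   * against a stale ctx<->hctx mapping */
                  list_for_each_entry(q, &set->tag_list, tag_set_list)
                          blk_mq_elv_switch_none(q);

                  set->nr_hw_queues = nr_hw_queues;
                  list_for_each_entry(q, &set->tag_list, tag_set_list) {
                          blk_mq_realloc_hw_ctxs(set, q);
                          blk_mq_map_swqueue(q);  /* mapping now correct */
                  }

                  /* re-attach: the scheduler's init_hctx() now sees the
                   * final mapping */
                  list_for_each_entry(q, &set->tag_list, tag_set_list)
                          blk_mq_elv_switch_back(q);
          }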
  8. Jul 18, 2018 (1 commit)
    • blk-mq: issue directly if hw queue isn't busy in case of 'none' · 6ce3dd6e
      Committed by Ming Lei
      In the case of the 'none' io scheduler, when the hw queue isn't busy,
      there is no need to enqueue a request to the sw queue and dequeue it
      again, because the request can be submitted to the hw queue right away
      at no extra cost; meanwhile there shouldn't be many requests in the sw
      queue, and we don't need to worry about the effect on IO merging.
      
      There are still some single hw queue SCSI HBAs (HPSA, megaraid_sas,
      ...) which may be connected to high performance devices, so 'none' is
      often required for obtaining good performance.

      This patch improves IOPS and decreases CPU utilization on
      megaraid_sas, per Kashyap's test.
      
      Cc: Kashyap Desai <kashyap.desai@broadcom.com>
      Cc: Laurence Oberman <loberman@redhat.com>
      Cc: Omar Sandoval <osandov@fb.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Bart Van Assche <bart.vanassche@wdc.com>
      Cc: Hannes Reinecke <hare@suse.de>
      Reported-by: Kashyap Desai <kashyap.desai@broadcom.com>
      Tested-by: Kashyap Desai <kashyap.desai@broadcom.com>
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
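
      A hedged sketch of the fast path with illustrative naming; the real
      patch wires this into the insert path rather than adding a helper:

          static void blk_mq_insert_or_issue(struct blk_mq_hw_ctx *hctx,
                                             struct request *rq)
          {
                  struct request_queue *q = rq->q;

                  /* no IO scheduler and an idle hw queue: skip the sw
                   * queue round trip and push to ->queue_rq() directly */
                  if (!q->elevator && !hctx->dispatch_busy) {
                          if (blk_mq_request_issue_directly(rq) !=
                              BLK_STS_RESOURCE)
                                  return;
                  }

                  /* busy hw queue or a scheduler present: normal insert */
                  blk_mq_sched_insert_request(rq, false, true, false);
          }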
  9. Jul 09, 2018 (3 commits)
    • blk-mq: dequeue request one by one from sw queue if hctx is busy · 6e768717
      Committed by Ming Lei
      It isn't efficient to dequeue requests one by one from the sw queue,
      but we have to do that when the queue is busy, for better merge
      performance.

      This patch uses an Exponential Weighted Moving Average (EWMA) to
      figure out whether the queue is busy, and only then dequeues requests
      one by one from the sw queue.
      
      Fixes: b347689f ("blk-mq-sched: improve dispatching from sw queue")
      Cc: Kashyap Desai <kashyap.desai@broadcom.com>
      Cc: Laurence Oberman <loberman@redhat.com>
      Cc: Omar Sandoval <osandov@fb.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Bart Van Assche <bart.vanassche@wdc.com>
      Cc: Hannes Reinecke <hare@suse.de>
      Reported-by: Kashyap Desai <kashyap.desai@broadcom.com>
      Tested-by: Kashyap Desai <kashyap.desai@broadcom.com>
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
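
      The EWMA update is compact; a sketch following the patch, with the
      weight/factor constants as an assumption on my part:

          #define BLK_MQ_DISPATCH_BUSY_EWMA_WEIGHT  8
          #define BLK_MQ_DISPATCH_BUSY_EWMA_FACTOR  4

          static void blk_mq_update_dispatch_busy(struct blk_mq_hw_ctx *hctx,
                                                  bool busy)
          {
                  unsigned int ewma = hctx->dispatch_busy;

                  if (!ewma && !busy)
                          return;         /* idle queue stays idle, lock-free */

                  /* ewma = (ewma * 7 + sample) / 8, sample = 16 when busy */
                  ewma *= BLK_MQ_DISPATCH_BUSY_EWMA_WEIGHT - 1;
                  if (busy)
                          ewma += 1 << BLK_MQ_DISPATCH_BUSY_EWMA_FACTOR;
                  ewma /= BLK_MQ_DISPATCH_BUSY_EWMA_WEIGHT;

                  hctx->dispatch_busy = ewma;
          }

      A non-zero hctx->dispatch_busy is then the "queue is busy" signal that
      selects the one-by-one dequeue path.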
    • blk-mq: only attempt to merge bio if there is rq in sw queue · b04f50ab
      Committed by Ming Lei
      Only attempt to merge a bio if ctx->rq_list isn't empty, because:

      1) for high-performance SSDs, dispatch succeeds most of the time, so
      there may be nothing left in ctx->rq_list; by not trying to merge over
      an empty sw queue we save one acquisition of ctx->lock

      2) we can't expect good merge performance on the per-cpu sw queue
      anyway, and missing one merge on the sw queue is no big deal, since
      tasks can be scheduled from one CPU to another.
      
      Cc: Laurence Oberman <loberman@redhat.com>
      Cc: Omar Sandoval <osandov@fb.com>
      Cc: Bart Van Assche <bart.vanassche@wdc.com>
      Tested-by: Kashyap Desai <kashyap.desai@broadcom.com>
      Reported-by: Kashyap Desai <kashyap.desai@broadcom.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
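
      A hedged sketch of the lockless pre-check, reusing the existing
      blk_mq_attempt_merge() helper:

          static bool blk_mq_bio_merge_sketch(struct request_queue *q,
                                              struct blk_mq_ctx *ctx,
                                              struct bio *bio)
          {
                  bool merged = false;

                  /* peek without the lock: a racy miss only loses one
                   * merge opportunity, per point 2) above */
                  if (!list_empty_careful(&ctx->rq_list)) {
                          spin_lock(&ctx->lock);
                          merged = blk_mq_attempt_merge(q, ctx, bio);
                          spin_unlock(&ctx->lock);
                  }
                  return merged;
          }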
    • blk-mq: remove synchronize_rcu() from blk_mq_del_queue_tag_set() · 97889f9a
      Committed by Ming Lei
      We have to remove synchronize_rcu() from blk_queue_cleanup();
      otherwise it can cause long delays during LUN probe. To remove it, we
      have to avoid iterating set->tag_list in the IO path, e.g. in
      blk_mq_sched_restart().

      This patch reverts 5b79413946d (Revert "blk-mq: don't handle
      TAG_SHARED in restart"). Given we have fixed enough IO hang issues,
      there isn't any reason to restart all queues sharing one tag set any
      more, for the following reasons:

      1) the blk-mq core can deal with the shared-tags case well via
      blk_mq_get_driver_tag(), which can wake up queues waiting for a
      driver tag.

      2) SCSI is a bit special because it may return BLK_STS_RESOURCE when
      the queue, target, or host isn't ready, but SCSI's built-in restart
      covers all of these well; see scsi_end_request(): the queue is rerun
      after any request initiated from this host/target completes.

      In my test on scsi_debug (8 LUNs), this patch improves IOPS by
      20% ~ 30% when running I/O on these 8 LUNs concurrently.
      
      Fixes: 705cda97 ("blk-mq: Make it safe to use RCU to iterate over blk_mq_tag_set.tag_list")
      Cc: Omar Sandoval <osandov@fb.com>
      Cc: Bart Van Assche <bart.vanassche@wdc.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Martin K. Petersen <martin.petersen@oracle.com>
      Cc: linux-scsi@vger.kernel.org
      Reported-by: Andrew Jones <drjones@redhat.com>
      Tested-by: Andrew Jones <drjones@redhat.com>
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  10. Jun 03, 2018 (1 commit)
  11. Jun 01, 2018 (2 commits)
  12. May 31, 2018 (1 commit)
  13. Feb 02, 2018 (1 commit)
  14. Jan 18, 2018 (1 commit)
  15. Jan 05, 2018 (1 commit)
  16. Nov 11, 2017 (2 commits)
    • blk-mq: only run the hardware queue if IO is pending · 79f720a7
      Committed by Jens Axboe
      Currently we are inconsistent in when we decide to run the queue. Using
      blk_mq_run_hw_queues() we check if the hctx has pending IO before
      running it, but we don't do that from the individual queue run function,
      blk_mq_run_hw_queue(). This results in a lot of extra and pointless
      queue runs, potentially on flush requests and (much worse) in tag
      starvation situations. This is observable just by looking at top
      output, with lots of kworkers active. For the !async runs, it just
      adds to the CPU overhead of blk-mq.
      
      Move the has-pending check into the run function instead of having
      callers do it.
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
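
      A sketch of the reshaped run function, assuming the has-pending helper
      and delayed-run internals of this era:

          bool blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async)
          {
                  /* the has-pending check now lives here, not in callers */
                  if (blk_mq_hctx_has_pending(hctx)) {
                          __blk_mq_delay_run_hw_queue(hctx, async, 0);
                          return true;
                  }
                  return false;
          }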
    • Revert "blk-mq: don't handle TAG_SHARED in restart" · 05b79413
      Committed by Jens Axboe
      This reverts commit 358a3a6b.
      
      We have cases that aren't covered 100% in the drivers, so for now
      we have to retain the shared tag restart loops.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  17. Nov 05, 2017 (3 commits)
    • blk-mq: don't allocate driver tag upfront for flush rq · 923218f6
      Committed by Ming Lei
      The idea behind it is simple:
      
      1) for the none scheduler, a driver tag has to be borrowed for the
         flush rq; otherwise we may run out of tags and cause an IO hang.
         Besides, get/put driver tag is actually a noop for none, so
         reordering tags isn't necessary at all.
      
      2) for a real I/O scheduler, we need not allocate a driver tag upfront
         for flush rq. It works just fine to follow the same approach as
         normal requests: allocate driver tag for each rq just before calling
         ->queue_rq().
      
      One driver-visible change is that the driver tag isn't shared in the
      flush request sequence. That won't be a problem, since we always do
      that in the legacy path.

      Then the flush rq need not be treated specially wrt. get/put driver
      tag. This cleans up the code: for instance, reorder_tags_to_front()
      can be removed, and we needn't worry about request ordering in the
      dispatch list for avoiding I/O deadlock.

      Also, we have to put the driver tag before requeueing.
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • blk-mq-sched: decide how to handle flush rq via RQF_FLUSH_SEQ · a6a252e6
      Committed by Ming Lei
      With an IO scheduler we always pre-allocate one driver tag before
      calling blk_insert_flush(), and the flush request is marked
      RQF_FLUSH_SEQ once it is in the flush machinery.

      So if RQF_FLUSH_SEQ isn't set, we call blk_insert_flush() to handle
      the request; otherwise the flush request is dispatched to the
      ->dispatch list directly.

      This is a preparation patch for not preallocating a driver tag for
      flush requests and for not treating flush requests as a special case,
      similar to what the legacy path does.
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
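
      A hedged sketch of the insert-time decision, with an illustrative
      helper name:

          static void blk_mq_sched_insert_flush_sketch(struct blk_mq_hw_ctx *hctx,
                                                       struct request *rq)
          {
                  if (!(rq->rq_flags & RQF_FLUSH_SEQ)) {
                          /* not yet sequenced: hand it to the flush
                           * machinery */
                          blk_insert_flush(rq);
                  } else {
                          /* already part of a flush sequence: bypass the
                           * scheduler, straight to ->dispatch */
                          spin_lock(&hctx->lock);
                          list_add(&rq->queuelist, &hctx->dispatch);
                          spin_unlock(&hctx->lock);
                  }
          }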
    • blk-mq: don't handle failure in .get_budget · 88022d72
      Committed by Ming Lei
      It is enough to just check whether we can get the budget via
      .get_budget(), and we don't need to deal with device state changes in
      .get_budget().

      For SCSI, one issue to be fixed is that we have to call
      scsi_mq_uninit_cmd() to free allocated resources if the SCSI device
      fails to handle the request. It isn't enough to simply call
      blk_mq_end_request() to do that if the request is marked
      RQF_DONTPREP.
      
      Fixes: 0df21c86 ("scsi: implement .get_budget and .put_budget for blk-mq")
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
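
      A hedged sketch of the simplified SCSI-side hook;
      scsi_dev_queue_ready() stands in for the real readiness check:

          static bool scsi_mq_get_budget_sketch(struct blk_mq_hw_ctx *hctx)
          {
                  struct request_queue *q = hctx->queue;
                  struct scsi_device *sdev = q->queuedata;

                  /* plain yes/no: either the device budget was taken, or
                   * the request simply stays in the scheduler queue */
                  return scsi_dev_queue_ready(q, sdev);
          }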
  18. Nov 01, 2017 (6 commits)
    • blk-mq: don't restart queue when .get_budget returns BLK_STS_RESOURCE · 1f460b63
      Committed by Ming Lei
      SCSI restarts its queue in scsi_end_request() automatically, so we don't
      need to handle this case in blk-mq.
      
      Especially since no request will be dequeued in this case, we needn't
      worry about IO hangs caused by restart vs. dispatch.
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • blk-mq: don't handle TAG_SHARED in restart · 358a3a6b
      Committed by Ming Lei
      Restart is now used in the following cases, and TAG_SHARED is for
      SCSI only.

      1) .get_budget() returns BLK_STS_RESOURCE
      - if a resource at the target/host level isn't satisfied, this SCSI
      device will be added to shost->starved_list, and the whole queue will
      be rerun (via SCSI's built-in RESTART) in scsi_end_request() after any
      request initiated from this host/target completes. Worth mentioning,
      host level resources can't be an issue for blk-mq at all.

      - the same is true if a resource at the queue level isn't satisfied.

      - if there is no outstanding request on this queue, then SCSI's
      RESTART can't work (nor can blk-mq's); the queue will be run after
      SCSI_QUEUE_DELAY, and finally all starved sdevs will be handled by
      SCSI's RESTART when this request is finished.

      2) scsi_dispatch_cmd() returns BLK_STS_RESOURCE
      - if there is no in-progress request on this queue, the queue
      will be run after SCSI_QUEUE_DELAY

      - otherwise, SCSI's RESTART covers the rerun.

      3) blk_mq_get_driver_tag() fails
      - BLK_MQ_S_TAG_WAITING covers the cross-queue RESTART for driver tag
      allocation.

      In short, SCSI's built-in RESTART is enough to cover queue reruns, and
      we don't need to pay special attention to TAG_SHARED wrt. restart.

      In my test on scsi_debug (8 LUNs), this patch improves IOPS by
      20% ~ 30% when running I/O on these 8 LUNs concurrently.

      Also, Roman Pen reported that the current RESTART is very expensive,
      especially when there are lots of LUNs attached to one host; in his
      test, RESTART cut IOPS in half.
      
      Fixes: https://marc.info/?l=linux-kernel&m=150832216727524&w=2
      Fixes: 6d8c6c0f ("blk-mq: Restart a single queue if tag sets are shared")
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • blk-mq-sched: improve dispatching from sw queue · b347689f
      Committed by Ming Lei
      SCSI devices use a host-wide tagset, and the shared driver tag space
      is often quite big. However, there is also a queue depth for each LUN
      (.cmd_per_lun), which is often small; for example, on both lpfc and
      qla2xxx, .cmd_per_lun is just 3.

      So lots of requests may stay in the sw queue, and we always flush all
      of those belonging to the same hw queue and dispatch them all to the
      driver. Unfortunately it is easy to make the queue busy because of the
      small .cmd_per_lun. Once these requests are flushed out, they have to
      stay in hctx->dispatch, no bio merging can happen on them, and
      sequential IO performance is harmed.

      This patch introduces blk_mq_dequeue_from_ctx for dequeuing one
      request at a time from a sw queue, so that we can dispatch in the
      scheduler's way and avoid dequeuing too many requests from the sw
      queue, since we don't flush ->dispatch completely.

      This patch improves dispatching from the sw queue by using the
      .get_budget and .put_budget callbacks.
      Reviewed-by: Omar Sandoval <osandov@fb.com>
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
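
      A hedged sketch of the budget-driven loop: take one budget, dequeue
      one request, and stop as soon as either runs out, instead of flushing
      every ctx at once. Names follow the series, details are illustrative:

          static void blk_mq_do_dispatch_ctx_sketch(struct blk_mq_hw_ctx *hctx)
          {
                  struct request_queue *q = hctx->queue;
                  struct blk_mq_ctx *ctx = NULL;
                  LIST_HEAD(rq_list);

                  do {
                          struct request *rq;

                          if (!blk_mq_get_dispatch_budget(hctx))
                                  break;  /* .cmd_per_lun etc. exhausted */

                          rq = blk_mq_dequeue_from_ctx(hctx, ctx);
                          if (!rq) {
                                  blk_mq_put_dispatch_budget(hctx);
                                  break;  /* sw queues drained */
                          }
                          list_add(&rq->queuelist, &rq_list);
                          /* continue round-robin after this rq's ctx */
                          ctx = rq->mq_ctx;
                  } while (blk_mq_dispatch_rq_list(q, &rq_list, true));
          }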
    • blk-mq: introduce .get_budget and .put_budget in blk_mq_ops · de148297
      Committed by Ming Lei
      For SCSI devices, there is often a per-request-queue depth, which needs
      to be respected before queuing one request.
      
      Currently blk-mq always dequeues the request first, then calls
      .queue_rq() to dispatch the request to the LLD. One obvious issue with
      this approach is that I/O merging may not be successful, because when
      the per-request-queue depth can't be respected, .queue_rq() has to
      return BLK_STS_RESOURCE, and the request then has to stay in the
      hctx->dispatch list. This means it never gets a chance to be merged
      with other IO.

      This patch introduces the .get_budget and .put_budget callbacks in
      blk_mq_ops, so we can try to get a reserved budget first, before
      dequeuing a request. If the budget for queueing I/O can't be
      satisfied, we don't need to dequeue the request at all, and it can be
      left in the IO scheduler queue for more merging opportunities.
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
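
      A sketch of the new hooks as this log describes them; the original
      .get_budget returned blk_status_t and was later reduced to a bool by
      commit 88022d72 above:

          struct blk_mq_budget_ops_sketch {
                  /* reserve one unit of per-request-queue depth before a
                   * request is dequeued; failure means "leave the request
                   * in the IO scheduler queue" */
                  bool (*get_budget)(struct blk_mq_hw_ctx *hctx);
                  /* give back a reservation not consumed by ->queue_rq() */
                  void (*put_budget)(struct blk_mq_hw_ctx *hctx);
          };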
    • blk-mq-sched: move actual dispatching into one helper · caf8eb0d
      Committed by Ming Lei
      So that it becomes easy to support dispatching from the sw queue in
      the following patch.
      
      No functional change.
      Reviewed-by: Bart Van Assche <bart.vanassche@wdc.com>
      Reviewed-by: Omar Sandoval <osandov@fb.com>
      Suggested-by: Christoph Hellwig <hch@lst.de> # for simplifying dispatch logic
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • blk-mq-sched: dispatch from scheduler IFF progress is made in ->dispatch · 5e3d02bb
      Committed by Ming Lei
      When the hw queue is busy, we shouldn't take requests from the
      scheduler queue any more; otherwise it is difficult to do IO merging.

      This patch fixes the awful IO performance on some SCSI devices (lpfc,
      qla2xxx, ...) when mq-deadline/kyber is used, by not taking requests
      while the hw queue is busy.
      Reviewed-by: Omar Sandoval <osandov@fb.com>
      Reviewed-by: Bart Van Assche <bart.vanassche@wdc.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
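
      A hedged sketch of the dispatch loop after this change, assuming the
      2-argument blk_mq_dispatch_rq_list() of this era:

          static void blk_mq_do_dispatch_sched_sketch(struct blk_mq_hw_ctx *hctx)
          {
                  struct request_queue *q = hctx->queue;
                  struct elevator_queue *e = q->elevator;
                  LIST_HEAD(rq_list);

                  do {
                          struct request *rq =
                                  e->type->ops.mq.dispatch_request(hctx);

                          if (!rq)
                                  break;  /* scheduler has nothing to offer */
                          list_add(&rq->queuelist, &rq_list);
                          /* stop pulling from the scheduler the moment
                           * the hw queue reports busy (no progress) */
                  } while (blk_mq_dispatch_rq_list(q, &rq_list));
          }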
  19. Jul 04, 2017 (1 commit)
    • blk-mq-sched: fix performance regression of mq-deadline · 32825c45
      Committed by Ming Lei
      When mq-deadline is used, IOPS of sequential reads and sequential
      writes is observed to drop by more than 20% on SATA (scsi-mq) devices,
      compared with the 'none' scheduler.

      The reason is that the default nr_requests for the scheduler is too
      big for small queue-depth devices, and latency is increased a lot.

      Since the principle of using 256 requests for the mq scheduler is
      based on a queue depth of 128, this patch changes nr_requests to
      double min(hw queue depth, 128).
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
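
      The sizing rule in code form (BLKDEV_MAX_RQ is 128 in this era):

          static unsigned int mq_sched_nr_requests_sketch(unsigned int hw_depth)
          {
                  return 2 * min_t(unsigned int, hw_depth, BLKDEV_MAX_RQ);
          }

      A deep device still gets 256 scheduler requests, while e.g. a SATA
      disk with a queue depth of 32 now gets 64 instead of 256.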
  20. Jun 22, 2017 (1 commit)
  21. Jun 21, 2017 (1 commit)
  22. Jun 19, 2017 (3 commits)