1. 04 Sep, 2020 (3 commits)
  2. 01 Jul, 2020 (1 commit)
  3. 29 Jun, 2020 (1 commit)
  4. 15 Jun, 2020 (1 commit)
  5. 07 Jun, 2020 (2 commits)
  6. 30 May, 2020 (4 commits)
  7. 27 Feb, 2020 (1 commit)
  8. 14 Nov, 2019 (1 commit)
  9. 05 Aug, 2019 (1 commit)
    • blk-mq: introduce blk_mq_tagset_wait_completed_request() · f9934a80
      Committed by Ming Lei
      blk-mq may schedule a queue's complete function to run on a remote CPU
      via IPI, but provides no way to synchronize against a request's
      complete fn. The current queue freeze interface can't provide the
      synchronization because aborted requests stay in the blk-mq queues
      during EH.
      
      In some drivers' EH (such as NVMe), a hardware queue's resources may be
      freed and re-allocated. If a completed request's complete fn ends up
      running after the hardware queue's resources have been released, a
      kernel crash is triggered.
      
      Prepare for fixing this kind of issue by introducing
      blk_mq_tagset_wait_completed_request().
      
      Cc: Max Gurtovoy <maxg@mellanox.com>
      Cc: Sagi Grimberg <sagi@grimberg.me>
      Cc: Keith Busch <keith.busch@intel.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      f9934a80
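      The helper can be built as a simple poll loop on top of the existing
      blk_mq_tagset_busy_iter(). A minimal sketch of that shape, paraphrased
      rather than quoted from the patch (blk_mq_request_completed() is
      assumed to be provided by the same series):

          static bool blk_mq_tagset_count_completed_rqs(struct request *rq,
                  void *data, bool reserved)
          {
              unsigned *count = data;

              /* count requests whose completion has fired but whose
               * complete fn may still be running somewhere */
              if (blk_mq_request_completed(rq))
                  (*count)++;
              return true;
          }

          void blk_mq_tagset_wait_completed_request(struct blk_mq_tag_set *tagset)
          {
              while (true) {
                  unsigned count = 0;

                  blk_mq_tagset_busy_iter(tagset,
                          blk_mq_tagset_count_completed_rqs, &count);
                  if (!count)
                      break;
                  msleep(5);  /* poll until every complete fn has drained */
              }
          }

      A driver's EH path would call this after aborting requests and before
      freeing hardware-queue resources, so no complete fn can run against
      freed memory.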
  10. 03 Jul, 2019 (1 commit)
  11. 01 May, 2019 (1 commit)
  12. 01 Feb, 2019 (1 commit)
  13. 01 Dec, 2018 (1 commit)
    • sbitmap: optimize wakeup check · 5d2ee712
      Committed by Jens Axboe
      Even if we have no waiters on any of the sbitmap_queue wait states, we
      still have to loop over every entry to check. We do this for every IO,
      so the cost adds up.
      
      Shift a bit of the cost to the slow path, when we actually have
      waiters. Wrap prepare_to_wait_exclusive() and finish_wait() so we can
      maintain an internal count of how many waiters are currently active.
      Then we can simply check this count in sbq_wake_ptr() and avoid the
      loop when there are no sleepers.
      
      Convert the two users of sbitmap with waiting, blk-mq-tag and iSCSI.
      Reviewed-by: Omar Sandoval <osandov@fb.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      5d2ee712
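      A minimal sketch of that counting scheme, paraphrased in the spirit of
      the patch (the sbq_wait bookkeeping field is an assumption here):

          void sbitmap_prepare_to_wait(struct sbitmap_queue *sbq,
                  struct sbq_wait_state *ws, struct sbq_wait *sbq_wait,
                  int state)
          {
              if (!sbq_wait->sbq) {
                  /* first time this waiter is accounted */
                  atomic_inc(&sbq->ws_active);
                  sbq_wait->sbq = sbq;
              }
              prepare_to_wait_exclusive(&ws->wait, &sbq_wait->wait, state);
          }

          void sbitmap_finish_wait(struct sbitmap_queue *sbq,
                  struct sbq_wait_state *ws, struct sbq_wait *sbq_wait)
          {
              finish_wait(&ws->wait, &sbq_wait->wait);
              if (sbq_wait->sbq) {
                  atomic_dec(&sbq->ws_active);
                  sbq_wait->sbq = NULL;
              }
          }

          /* at the top of sbq_wake_ptr(): */
          if (!atomic_read(&sbq->ws_active))
              return NULL;    /* nobody is sleeping, skip the scan */

      The wakeup path is the per-IO hot path, so a single atomic read there
      replaces a loop over all wait states in the common no-waiter case.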
  14. 09 Nov, 2018 (2 commits)
  15. 08 Nov, 2018 (3 commits)
  16. 26 Sep, 2018 (1 commit)
  17. 22 Sep, 2018 (1 commit)
  18. 21 Aug, 2018 (1 commit)
    • blk-mq: sync the update nr_hw_queues with blk_mq_queue_tag_busy_iter · f5bbbbe4
      Committed by Jianchao Wang
      For blk-mq, part_in_flight/rw invokes blk_mq_in_flight/rw to account
      for in-flight requests, accessing queue_hw_ctx and nr_hw_queues without
      any protection. When an update of nr_hw_queues and blk_mq_in_flight/rw
      run concurrently, a panic results.
      
      Before nr_hw_queues is updated, the queue is frozen, so we can use
      q_usage_counter to avoid the race; percpu_ref_is_zero is used so that
      we will not miss any in-flight request. The accesses to nr_hw_queues
      and queue_hw_ctx in blk_mq_queue_tag_busy_iter are under an RCU
      critical section, so __blk_mq_update_nr_hw_queues can use
      synchronize_rcu to ensure the zeroed q_usage_counter is globally
      visible.
      Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
      Reviewed-by: Ming Lei <ming.lei@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      f5bbbbe4
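      The pairing is compact; a paraphrase of both sides using the kernel's
      real primitives (not a quote of the patch):

          /* reader side: blk_mq_queue_tag_busy_iter(), abridged */
          rcu_read_lock();
          if (percpu_ref_is_zero(&q->q_usage_counter)) {
              /* queue is frozen: no in-flight requests to account */
              rcu_read_unlock();
              return;
          }
          /* safe to walk q->queue_hw_ctx[0 .. q->nr_hw_queues) here */
          rcu_read_unlock();

          /* updater side: __blk_mq_update_nr_hw_queues(), abridged */
          blk_mq_freeze_queue(q);   /* drives q_usage_counter to zero */
          synchronize_rcu();        /* the zeroed counter is now visible to
                                     * every reader; new readers bail out */
          /* safe to change nr_hw_queues and reallocate queue_hw_ctx */
          blk_mq_unfreeze_queue(q);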
  19. 09 Aug, 2018 (1 commit)
    • blk-mq: count the hctx as active before allocating tag · d263ed99
      Committed by Jianchao Wang
      Currently, we count the hctx as active only after a driver tag is
      allocated successfully. If a previously inactive hctx tries to get a
      tag for the first time, it may fail and need to wait. However, due to
      the stale tags->active_queues, the other shared-tags users are still
      able to occupy all driver tags while someone is waiting for a tag.
      Consequently, even when the previously inactive hctx is woken up, it
      may still fail to get a tag and be starved.
      
      To fix this, count the hctx as active before trying to allocate a
      driver tag; then, while it is waiting for a tag, the other shared-tags
      users will reserve budget for it.
      Reviewed-by: Ming Lei <ming.lei@redhat.com>
      Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      d263ed99
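      The fairness check this interacts with looks roughly like the sketch
      below, paraphrased from the hctx_may_queue() logic of that era (field
      names from memory): each active shared-tags user gets about an equal
      slice of the depth, so accounting the hctx before the first allocation
      attempt makes the other users leave budget for it.

          static bool hctx_may_queue(struct blk_mq_hw_ctx *hctx,
                  struct sbitmap_queue *bt)
          {
              unsigned int depth, users;

              if (!hctx || !(hctx->flags & BLK_MQ_F_TAG_SHARED))
                  return true;

              users = atomic_read(&hctx->tags->active_queues);
              if (!users)
                  return true;

              /* allow at least 4 tags, otherwise an equal share */
              depth = max((bt->sb.depth + users - 1) / users, 4U);
              return atomic_read(&hctx->nr_active) < depth;
          }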
  20. 03 Aug, 2018 (2 commits)
    • blk-mq: fix blk_mq_tagset_busy_iter · 2d5ba0e2
      Committed by Ming Lei
      Commit d250bf4e ("blk-mq: only iterate over inflight requests in
      blk_mq_tagset_busy_iter") replaced 'blk_mq_request_started(req)' with
      'blk_mq_rq_state(rq) == MQ_RQ_IN_FLIGHT'. That is wrong, and causes
      lots of test systems to hang during boot.
      
      Fix the issue by using blk_mq_request_started(req) inside bt_tags_iter().
      
      Fixes: d250bf4e ("blk-mq: only iterate over inflight requests in blk_mq_tagset_busy_iter")
      Cc: Josef Bacik <josef@toxicpanda.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Guenter Roeck <linux@roeck-us.net>
      Cc: Mark Brown <broonie@kernel.org>
      Cc: Matt Hart <matthew.hart@linaro.org>
      Cc: Johannes Thumshirn <jthumshirn@suse.de>
      Cc: John Garry <john.garry@huawei.com>
      Cc: Hannes Reinecke <hare@suse.com>
      Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
      Cc: James Bottomley <James.Bottomley@hansenpartnership.com>
      Cc: linux-scsi@vger.kernel.org
      Cc: linux-kernel@vger.kernel.org
      Reviewed-by: Bart Van Assche <bart.vanassche@wdc.com>
      Tested-by: Guenter Roeck <linux@roeck-us.net>
      Reported-by: Mark Brown <broonie@kernel.org>
      Reported-by: Guenter Roeck <linux@roeck-us.net>
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      2d5ba0e2
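      The fix is essentially one condition inside the iterator; a
      paraphrased sketch (the iter_data layout is an assumption for
      illustration):

          static bool bt_tags_iter(struct sbitmap *bitmap, unsigned int bitnr,
                  void *data)
          {
              struct bt_tags_iter_data *iter_data = data;
              struct blk_mq_tags *tags = iter_data->tags;
              struct request *rq;

              if (!iter_data->reserved)
                  bitnr += tags->nr_reserved_tags;
              rq = tags->rqs[bitnr];

              /*
               * The broken variant tested blk_mq_rq_state(rq) ==
               * MQ_RQ_IN_FLIGHT, which misses started requests sitting in
               * other states; test "started" again instead.
               */
              if (rq && blk_mq_request_started(rq))
                  iter_data->fn(rq, iter_data->data, iter_data->reserved);
              return true;
          }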
    • blk-mq: fix updating tags depth · 75d6e175
      Committed by Ming Lei
      The 'nr' passed from userspace represents the total depth; meanwhile,
      inside 'struct blk_mq_tags', 'nr_tags' stores the total tag depth and
      'nr_reserved_tags' stores the reserved part.
      
      There are currently two issues in blk_mq_tag_update_depth():
      
      1) when growing tags, we should use the passed 'nr' and keep the
      number of reserved tags unchanged;
      
      2) the passed 'nr' should be checked against 'tags->nr_tags', not
      against the size of the normal part.
      
      This patch fixes both cases and avoids a kernel crash caused by
      wrongly resizing the sbitmap queue.
      
      Cc: "Ewan D. Milne" <emilne@redhat.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Bart Van Assche <bart.vanassche@sandisk.com>
      Cc: Omar Sandoval <osandov@fb.com>
      Tested-by: Marco Patalano <mpatalan@redhat.com>
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      75d6e175
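      The corrected arithmetic can be sketched as follows (paraphrased;
      error codes and the growth path's allocation details are
      illustrative): 'tdepth' is the new total, growth is detected by
      comparing totals, and only the normal part is ever resized.

          int blk_mq_tag_update_depth(struct blk_mq_hw_ctx *hctx,
                  struct blk_mq_tags **tagsptr, unsigned int tdepth,
                  bool can_grow)
          {
              struct blk_mq_tags *tags = *tagsptr;

              if (tdepth <= tags->nr_reserved_tags)
                  return -EINVAL;

              if (tdepth > tags->nr_tags) {
                  /* growing: compare total against total, then allocate a
                   * new tag set with 'tdepth' total tags while keeping
                   * tags->nr_reserved_tags reserved */
                  if (!can_grow)
                      return -EINVAL;
                  /* ... allocate and swap in the larger tag set ... */
              } else {
                  /* shrinking: resize only the normal part */
                  sbitmap_queue_resize(&tags->bitmap_tags,
                          tdepth - tags->nr_reserved_tags);
              }
              return 0;
          }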
  21. 14 Jun, 2018 (1 commit)
  22. 31 May, 2018 (1 commit)
  23. 25 May, 2018 (1 commit)
    • blk-mq: avoid starving tag allocation after allocating process migrates · e6fc4649
      Committed by Ming Lei
      When the allocating process is scheduled back and the mapped hw queue
      has changed, fake one extra wakeup on the previous queue to compensate
      for the missed wakeup, so other allocations on the previous queue won't
      be starved.
      
      This patch fixes a request allocation hang that is easy to trigger
      with a very low nr_requests.
      
      The race is as follows:
      
      1) there are 2 hw queues, nr_requests is 2, and wake_batch is 1
      
      2) there are 3 waiters on hw queue 0
      
      3) two in-flight requests in hw queue 0 complete, and only two of the
         three waiters are woken up because of wake_batch; but both of those
         waiters can be scheduled to another CPU and end up switching to hw
         queue 1
      
      4) the 3rd waiter then waits forever, since no in-flight request is
         left in hw queue 0
      
      5) this patch fixes it with the fake wakeup when a waiter is scheduled
         to another hw queue
      
      Cc: <stable@vger.kernel.org>
      Reviewed-by: Omar Sandoval <osandov@fb.com>
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      
      Modified commit message to make it clearer, and make it apply on
      top of the 4.18 branch.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      e6fc4649
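      The fix itself is a few lines at the end of the tag allocator's sleep
      loop. A paraphrased sketch, assuming 'bt' and 'bt_prev' are the
      sbitmap_queues of the new and the previous hw queue (helper signatures
      follow the 4.18-era code from memory):

          /* inside blk_mq_get_tag()'s wait loop */
          bt_prev = bt;
          io_schedule();

          /* we may have been migrated: re-resolve ctx, hctx and bitmap */
          data->ctx = blk_mq_get_ctx(data->q);
          data->hctx = blk_mq_map_queue(data->q, data->ctx->cpu);
          bt = &blk_mq_tags_from_data(data)->bitmap_tags;

          /*
           * If the hw queue changed while sleeping, the wakeup that pulled
           * us off the old queue helped nobody there; fake one on it so its
           * remaining waiters are not starved.
           */
          if (bt != bt_prev)
              sbitmap_queue_wake_up(bt_prev);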
  24. 23 Dec, 2017 (1 commit)
    • blk-mq: improve heavily contended tag case · 4e5dff41
      Committed by Jens Axboe
      Even with a number of waitqueues, we can get into a situation where we
      are heavily contended on the waitqueue lock. I got a report on spc1
      where we're spending seconds doing this. Arguably the use case is
      nasty; I reproduce it with one device and 1000 threads banging on the
      device. But that doesn't mean we shouldn't handle it better.
      
      What ends up happening is that a thread will fail to get a tag, add
      itself to the waitqueue, and subsequently get woken up when a tag is
      freed - only to find itself going back to sleep on the waitqueue.
      
      Instead of waking all threads, use an exclusive wait and wake up only
      our sbitmap batch count. This seems to work well for me (massive
      improvement for this use case), and it survives basic testing. But I
      haven't fully verified it yet.
      
      An additional improvement is running the queue and checking for a new
      tag BEFORE needing to add ourselves to the waitqueue.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      4e5dff41
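      Both halves are small. A paraphrased sketch of each side (abridged,
      with names following the era's code from memory):

          /* waiter (blk_mq_get_tag() slow path), abridged */
          prepare_to_wait_exclusive(&ws->wait, &wait, TASK_UNINTERRUPTIBLE);

          /* run the queue and retry once more BEFORE sleeping */
          blk_mq_run_hw_queue(data->hctx, false);
          tag = __blk_mq_get_tag(data, bt);
          if (tag != -1)
              break;

          io_schedule();

          /* waker (sbq_wake_up()), abridged: with exclusive waiters this
           * wakes exactly wake_batch tasks rather than the whole queue */
          wake_up_nr(&ws->wait, wake_batch);

      Exclusive waits mean each freed batch of tags wakes only as many
      threads as can plausibly be satisfied, instead of thundering the whole
      waitqueue back to sleep.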
  25. 19 Oct, 2017 (2 commits)
  26. 18 Aug, 2017 (1 commit)
  27. 10 Aug, 2017 (1 commit)
  28. 15 Apr, 2017 (1 commit)
  29. 13 Mar, 2017 (1 commit)