1. 30 Sep 2015 (1 commit)
    • blk-mq: fix sysfs registration/unregistration race · 4593fdbe
      Committed by Akinobu Mita
      There is a race between cpu hotplug handling and adding/deleting
      gendisk for blk-mq, where both are trying to register and unregister
      the same sysfs entries.
      
      null_add_dev
          --> blk_mq_init_queue
              --> blk_mq_init_allocated_queue
                  --> add to 'all_q_list' (*)
          --> add_disk
              --> blk_register_queue
                  --> blk_mq_register_disk (++)
      
      null_del_dev
          --> del_gendisk
              --> blk_unregister_queue
                  --> blk_mq_unregister_disk (--)
          --> blk_cleanup_queue
              --> blk_mq_free_queue
                  --> del from 'all_q_list' (*)
      
      blk_mq_queue_reinit
          --> blk_mq_sysfs_unregister (-)
          --> blk_mq_sysfs_register (+)
      
      While the request queue is on 'all_q_list' (*),
      blk_mq_queue_reinit() can be invoked for it at any time by the CPU
      hotplug callback.  But blk_mq_sysfs_unregister (-) and
      blk_mq_sysfs_register (+) in blk_mq_queue_reinit must not be called
      before blk_mq_register_disk (++) has run or after
      blk_mq_unregister_disk (--) has finished, because '/sys/block/*/mq/'
      does not exist then.
      
      A BLK_MQ_F_SYSFS_UP flag already exists in hctx->flags and can be
      used to track this sysfs state, but it only fixes the issue
      partially.
      
      To fix it completely, we need a per-queue flag instead of a
      per-hctx flag, with appropriate locking.  So this introduces
      q->mq_sysfs_init_done, which is properly protected by all_q_mutex.
      
      Also, we need to ensure that blk_mq_map_swqueue() is called with
      all_q_mutex held.  Since hctx->nr_ctx is temporarily reset and then
      updated in blk_mq_map_swqueue(), we must prevent
      blk_mq_register_hctx() from seeing the temporary hctx->nr_ctx value
      during CPU hotplug handling or gendisk add/delete.
      Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
      Reviewed-by: Ming Lei <tom.leiming@gmail.com>
      Cc: Ming Lei <tom.leiming@gmail.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@fb.com>
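
      A minimal sketch of the guard this patch introduces, as an
      annotation rather than the patch text (the field and lock names
      follow the patch; function bodies are reduced to comments, and the
      real code lives in block/blk-mq-sysfs.c and block/blk-mq.c):

        #include <linux/blk-mq.h>

        /*
         * Hotplug-driven re-registration bails out unless add_disk has
         * created /sys/block/<dev>/mq/ and del_gendisk has not yet torn
         * it down; all_q_mutex serializes the flag flip against
         * blk_mq_queue_reinit().
         */
        static void sketch_blk_mq_sysfs_register(struct request_queue *q)
        {
                if (!q->mq_sysfs_init_done)
                        return;
                /* (re-)register the hctx/ctx kobjects under .../mq/ */
        }

        static void sketch_register_disk_tail(struct request_queue *q)
        {
                /* after creating the mq sysfs hierarchy in add_disk: */
                q->mq_sysfs_init_done = true;  /* flipped under all_q_mutex */
        }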
  2. 02 Jun 2015 (1 commit)
    • blk-mq: Shared tag enhancements · f26cdc85
      Committed by Keith Busch
      Storage controllers may expose multiple block devices that share
      hardware resources managed by blk-mq.  This patch enhances the
      shared tags so a low-level driver can access the shared resources
      that are not tied to the unshared h/w contexts.  This way the LLD
      can dynamically add and delete disks and request queues without
      having to track every request_queue's hctxs in order to iterate
      outstanding tags.
      Signed-off-by: Keith Busch <keith.busch@intel.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
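
      For illustration, a hedged sketch of what this enables: an LLD
      sweeping all outstanding requests through the shared tag map
      instead of walking every request_queue's hctxs.  The iterator
      shown, blk_mq_all_tag_busy_iter(), follows this era's API; the
      cancel-everything policy is invented for the example:

        #include <linux/blk-mq.h>

        /* invoked once per in-flight request owned by the shared tag map */
        static void sketch_cancel_one(struct request *rq, void *data,
                                      bool reserved)
        {
                blk_mq_end_request(rq, -EIO);  /* e.g. controller is dead */
        }

        static void sketch_cancel_all(struct blk_mq_tags *shared_tags)
        {
                blk_mq_all_tag_busy_iter(shared_tags, sketch_cancel_one, NULL);
        }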
  3. 17 Apr 2015 (1 commit)
  4. 10 Apr 2015 (1 commit)
  5. 13 Mar 2015 (2 commits)
    • blk-mq: export blk_mq_run_hw_queues · b94ec296
      Committed by Mike Snitzer
      Rename blk_mq_run_queues to blk_mq_run_hw_queues, add an async
      argument, and export it.
      
      DM's suspend support must be able to run the queue without starting
      stopped hw queues.
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
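
      Usage is a one-liner; a small sketch of the resume-side call this
      export enables (the surrounding function is invented for
      illustration):

        #include <linux/blk-mq.h>

        static void sketch_dm_resume_queue(struct request_queue *q)
        {
                /* async=true kicks the hw queues without blocking the
                 * caller; hw queues that were explicitly stopped are
                 * skipped, which is what DM's suspend support needs */
                blk_mq_run_hw_queues(q, true);
        }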
    • blk-mq: add blk_mq_init_allocated_queue and export blk_mq_register_disk · b62c21b7
      Committed by Mike Snitzer
      Add a variant of blk_mq_init_queue that allows a previously allocated
      queue to be initialized.  blk_mq_init_allocated_queue models
      blk_init_allocated_queue -- which was also created for DM's use.
      
      DM's approach to device creation requires that a placeholder
      request_queue be allocated for use with alloc_dev(), but the
      decision about what type of request_queue will ultimately be
      created is deferred until all component devices referenced in the
      DM table have been processed to determine the table type
      (request-based, blk-mq request-based, or bio-based).
      
      Also, because DM finalizes the request_queue type late, the call
      to blk_mq_register_disk() doesn't happen during alloc_dev().
      blk_mq_register_disk() must therefore be exported so that DM can
      backfill the 'mq' dir once the blk-mq queue is fully allocated.
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Reviewed-by: Ming Lei <ming.lei@canonical.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
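
      A sketch of the two-phase bring-up described above (error handling
      trimmed; the function name is invented, while the two blk-mq calls
      are the ones this patch adds and exports):

        #include <linux/blk-mq.h>
        #include <linux/err.h>
        #include <linux/genhd.h>

        static int sketch_dm_finalize_mq(struct blk_mq_tag_set *set,
                                         struct request_queue *q,
                                         struct gendisk *disk)
        {
                /* initialize the placeholder queue allocated back in
                 * alloc_dev(), once the table type is known to be blk-mq */
                q = blk_mq_init_allocated_queue(set, q);
                if (IS_ERR(q))
                        return PTR_ERR(q);

                /* backfill /sys/block/<dev>/mq now that the queue is real */
                return blk_mq_register_disk(disk);
        }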
  6. 11 Feb 2015 (1 commit)
  7. 24 Jan 2015 (1 commit)
    • blk-mq: add tag allocation policy · 24391c0d
      Committed by Shaohua Li
      This is the blk-mq part of tag allocation policy support.  The
      default allocation policy isn't changed (though it's not a strict
      FIFO).  The new policy is round-robin for libata, but it's a
      best-effort implementation: if multiple tasks are competing, the
      returned tags will be interleaved (which is unavoidable even with
      !mq, as requests from different tasks can be mixed in the queue).
      
      Cc: Jens Axboe <axboe@fb.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Christoph Hellwig <hch@infradead.org>
      Signed-off-by: Shaohua Li <shli@fb.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
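
      A sketch of how a driver opts into the new policy via its tag_set
      flags (the enum value and helper macro are the ones this patch
      adds; the surrounding function is illustrative):

        #include <linux/blk-mq.h>
        #include <linux/blkdev.h>

        static void sketch_use_rr_tags(struct blk_mq_tag_set *set)
        {
                /* round-robin tag allocation, e.g. for libata; leaving
                 * the policy bits zero keeps the default behavior */
                set->flags |= BLK_ALLOC_POLICY_TO_MQ_FLAG(BLK_TAG_ALLOC_RR);
        }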
  8. 08 Jan 2015 (4 commits)
  9. 03 Jan 2015 (1 commit)
  10. 21 Dec 2014 (1 commit)
  11. 18 Nov 2014 (1 commit)
    • blk-mq: add blk_mq_free_hctx_request() · 7c7f2f2b
      Committed by Jens Axboe
      It's silly to use blk_mq_free_request(), which maps the request
      back to its hardware queue, in places where we already know what
      the hardware queue is.  When the caller already has this
      information, this saves an extra hardware-queue mapping on request
      completion.
      Signed-off-by: Jens Axboe <axboe@fb.com>
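
      A sketch of the completion-side use (the driver context is
      invented; the freeing call is the one this patch adds):

        #include <linux/blk-mq.h>

        static void sketch_complete_rq(struct blk_mq_hw_ctx *hctx,
                                       struct request *rq)
        {
                /* hctx is already known here, so skip the rq -> hctx
                 * remap that blk_mq_free_request() would perform */
                blk_mq_free_hctx_request(hctx, rq);
        }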
  12. 12 Nov 2014 (1 commit)
  13. 30 Oct 2014 (2 commits)
    • blk-mq: add BLK_MQ_F_DEFER_ISSUE support flag · e167dfb5
      Committed by Jens Axboe
      Drivers can now tell blk-mq whether or not they take advantage of
      deferred issue via 'last'.  If they do, blk-mq doesn't do
      queue-direct for sync IO.  This is a preparation patch for the
      nvme conversion.
      Signed-off-by: Jens Axboe <axboe@fb.com>
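
      Opting in is a single flag on the tag_set; a sketch (surrounding
      function invented):

        #include <linux/blk-mq.h>

        static void sketch_init_tag_set(struct blk_mq_tag_set *set)
        {
                /* this driver batches on 'last', so blk-mq should keep
                 * sync IO on the deferred path, not queue-direct */
                set->flags |= BLK_MQ_F_DEFER_ISSUE;
        }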
    • blk-mq: add a 'list' parameter to ->queue_rq() · 74c45052
      Committed by Jens Axboe
      Since we have the notion of a 'last' request in a chain, we can use
      this to have the hardware optimize the issuing of requests. Add
      a list_head parameter to queue_rq that the driver can use to
      temporarily store hw commands for issue when 'last' is true. If we
      are doing a chain of requests, pass in a NULL list for the first
      request to force issue of that immediately, then batch the remainder
      for deferred issue until the last request has been sent.
      
      Instead of adding yet another argument to the hot ->queue_rq path,
      encapsulate the passed arguments in a blk_mq_queue_data structure.
      This is passed as a constant, and has been tested as faster than
      passing 4 (or even 3) args through ->queue_rq. Update drivers for
      the new ->queue_rq() prototype.  There are no functional changes
      for drivers in this patch: if they don't use the passed-in list,
      they simply queue requests individually as before.
      Signed-off-by: Jens Axboe <axboe@fb.com>
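
      A sketch of a ->queue_rq() that acts on the new structure
      (sketch_prep()/sketch_kick() stand in for driver specifics; the
      return code follows this era's BLK_MQ_RQ_QUEUE_* convention):

        #include <linux/blk-mq.h>

        void sketch_prep(struct blk_mq_hw_ctx *hctx, struct request *rq);
        void sketch_kick(struct blk_mq_hw_ctx *hctx);  /* hypothetical stubs */

        static int sketch_queue_rq(struct blk_mq_hw_ctx *hctx,
                                   const struct blk_mq_queue_data *bd)
        {
                blk_mq_start_request(bd->rq);
                sketch_prep(hctx, bd->rq);      /* post to the hw ring */
                if (bd->last)                   /* end of the chain: */
                        sketch_kick(hctx);      /* ring the doorbell once */
                return BLK_MQ_RQ_QUEUE_OK;
        }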
  14. 26 Sep 2014 (1 commit)
    • blk-mq: support per-distpatch_queue flush machinery · f70ced09
      Committed by Ming Lei
      This patch supports running a single flush machinery for each
      blk-mq dispatch queue, so that:
      
      - the current init_request and exit_request callbacks can cover
      the flush request too, so the buggy approach of initializing the
      flush request's pdu by copying can be fixed
      
      - flush performance improves in the multi hw-queue case
      
      In a fio sync write test over virtio-blk (4 hw queues,
      ioengine=sync, iodepth=64, numjobs=4, bs=4K), throughput increased
      substantially in my test environment:
      	- throughput: +70% for virtio-blk over null_blk
      	- throughput: +30% for virtio-blk over an SSD image
      
      The multi-virtqueue feature hasn't been merged into QEMU yet;
      patches for it can be found in the tree below:
      
      	git://kernel.ubuntu.com/ming/qemu.git  	v2.1.0-mq.4
      
      Simply passing 'num_queues=4 vectors=5' should be enough to enable
      the multi-queue (quad queue) feature for QEMU virtio-blk.
      Suggested-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Ming Lei <ming.lei@canonical.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
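
      A sketch of the payoff for drivers: a single init_request callback
      now also initializes the flush request's pdu, replacing the old
      pdu-copying.  The callback signature follows kernels of this era,
      and struct sketch_cmd is illustrative:

        #include <linux/blk-mq.h>

        struct sketch_cmd {
                void *dev;      /* per-command driver state */
        };

        static int sketch_init_request(void *data, struct request *rq,
                                       unsigned int hctx_idx,
                                       unsigned int request_idx,
                                       unsigned int numa_node)
        {
                struct sketch_cmd *cmd = blk_mq_rq_to_pdu(rq);

                cmd->dev = data;        /* set->driver_data */
                return 0;               /* runs for flush requests too */
        }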
  15. 25 Sep 2014 (1 commit)
    • blk-mq, percpu_ref: start q->mq_usage_counter in atomic mode · 17497acb
      Committed by Tejun Heo
      blk-mq uses a percpu_ref for its usage counter, which tracks the
      number of in-flight commands and is used to synchronously drain
      the queue on freeze.  percpu_ref shutdown takes measurable
      wallclock time as it involves a sched RCU grace period.  This
      means that draining a blk-mq queue takes measurable wallclock
      time.  One would think this shouldn't matter, as queue shutdown
      should be a rare event which takes place asynchronously w.r.t.
      userland.
      
      Unfortunately, SCSI probing involves synchronously setting up and then
      tearing down a lot of request_queues back-to-back for non-existent
      LUNs.  This means that SCSI probing may take more than ten seconds
      when scsi-mq is used.
      
        [    0.949892] scsi host0: Virtio SCSI HBA
        [    1.007864] scsi 0:0:0:0: Direct-Access     QEMU     QEMU HARDDISK    1.1. PQ: 0 ANSI: 5
        [    1.021299] scsi 0:0:1:0: Direct-Access     QEMU     QEMU HARDDISK    1.1. PQ: 0 ANSI: 5
        [    1.520356] tsc: Refined TSC clocksource calibration: 2491.910 MHz
      
        <stall>
      
        [   16.186549] sd 0:0:0:0: Attached scsi generic sg0 type 0
        [   16.190478] sd 0:0:1:0: Attached scsi generic sg1 type 0
        [   16.194099] osd: LOADED open-osd 0.2.1
        [   16.203202] sd 0:0:0:0: [sda] 31457280 512-byte logical blocks: (16.1 GB/15.0 GiB)
        [   16.208478] sd 0:0:0:0: [sda] Write Protect is off
        [   16.211439] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
        [   16.218771] sd 0:0:1:0: [sdb] 31457280 512-byte logical blocks: (16.1 GB/15.0 GiB)
        [   16.223264] sd 0:0:1:0: [sdb] Write Protect is off
        [   16.225682] sd 0:0:1:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
      
      This is also the reason why request_queues start in bypass mode,
      which is ended in blk_register_queue(): shutting down a fully
      functional queue also involves an RCU grace period, and the queues
      for non-existent SCSI devices never reach registration.
      
      blk-mq basically needs to do the same thing - start the mq in a
      degraded mode which is faster to shut down and then make it fully
      functional only after the queue reaches registration.  percpu_ref
      recently grew facilities to force atomic operation until explicitly
      switched to percpu mode, which can be used for this purpose.  This
      patch makes blk-mq initialize q->mq_usage_counter in atomic mode and
      switch it to percpu mode only once blk_register_queue() is reached.
      
      Note that this issue was previously worked around by 0a30288d
      ("blk-mq, percpu_ref: implement a kludge for SCSI blk-mq stall
      during probe") for v3.17.  The temp fix was reverted in
      preparation for adding persistent atomic mode to percpu_ref by
      9eca8046 ("Revert "blk-mq, percpu_ref: implement a kludge for SCSI
      blk-mq stall during probe"").  This patch and the prerequisite
      percpu_ref changes will be merged during the v3.18 devel cycle.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reported-by: Christoph Hellwig <hch@infradead.org>
      Link: http://lkml.kernel.org/g/20140919113815.GA10791@lst.de
      Fixes: add703fd ("blk-mq: use percpu_ref for mq usage count")
      Reviewed-by: Kent Overstreet <kmo@daterainc.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
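
      A sketch of the two halves of that dance using percpu_ref's
      then-new facilities (the wrapper functions are invented; the init
      flag and the mode-switch call are the real API):

        #include <linux/gfp.h>
        #include <linux/percpu-refcount.h>

        static int sketch_usage_counter_init(struct percpu_ref *ref,
                                             percpu_ref_func_t *release)
        {
                /* start atomic: cheap to kill if the queue never
                 * survives past probing */
                return percpu_ref_init(ref, release,
                                       PERCPU_REF_INIT_ATOMIC, GFP_KERNEL);
        }

        static void sketch_on_blk_register_queue(struct percpu_ref *ref)
        {
                /* queue reached registration: switch to fast percpu mode */
                percpu_ref_switch_to_percpu(ref);
        }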
  16. 23 Sep 2014 (5 commits)
  17. 16 Aug 2014 (1 commit)
  18. 18 Jun 2014 (1 commit)
  19. 06 Jun 2014 (1 commit)
    • blk-mq: bump max tag depth to 10K tags · a4391c64
      Committed by Jens Axboe
      For some scsi-mq cases, the tag map can be huge. So increase the
      max number of tags we support.
      
      Additionally, don't fail with EINVAL if a user requests too many
      tags. Warn that the tag depth has been adjusted down, and store
      the new value inside the tag_set passed in.
      Signed-off-by: Jens Axboe <axboe@fb.com>
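
      A sketch of the clamp-and-warn pattern this adopts (BLK_MQ_MAX_DEPTH
      is the real constant, 10240 after this patch; the surrounding
      function is illustrative):

        #include <linux/blk-mq.h>
        #include <linux/printk.h>

        static void sketch_clamp_depth(struct blk_mq_tag_set *set)
        {
                if (set->queue_depth > BLK_MQ_MAX_DEPTH) {
                        pr_info("blk-mq: reduced tag depth to %u\n",
                                BLK_MQ_MAX_DEPTH);
                        /* store the adjusted value back in the tag_set
                         * instead of failing with -EINVAL */
                        set->queue_depth = BLK_MQ_MAX_DEPTH;
                }
        }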
  20. 05 Jun 2014 (1 commit)
  21. 30 May 2014 (2 commits)
  22. 29 May 2014 (2 commits)
  23. 28 May 2014 (5 commits)
  24. 24 May 2014 (1 commit)
  25. 22 May 2014 (1 commit)