1. 18 Jan, 2017 (2 commits)
  2. 12 Jan, 2017 (1 commit)
  3. 10 Dec, 2016 (1 commit)
  4. 09 Nov, 2016 (1 commit)
  5. 03 Nov, 2016 (4 commits)
  6. 23 Sep, 2016 (1 commit)
  7. 22 Sep, 2016 (1 commit)
  8. 21 Sep, 2016 (1 commit)
  9. 17 Sep, 2016 (1 commit)
  10. 15 Sep, 2016 (5 commits)
  11. 14 Sep, 2016 (2 commits)
  12. 29 Aug, 2016 (2 commits)
    • blk-mq: improve layout of blk_mq_hw_ctx · 8d354f13
      Committed by Jens Axboe
      Various cache line optimizations:
      
      - Move delay_work towards the end. It's huge, and we don't use it
        a lot (only SCSI).
      
      - Move the atomic state into the same cacheline as the dispatch
        list and lock.
      
      - Rearrange a few members to pack it better.
      
      - Shrink the max-order for dispatch accounting from 10 to 7. This
        means that ->dispatched[] and ->run now take up their own
        cacheline.
      
      This shrinks struct blk_mq_hw_ctx down to 8 cachelines.
      Signed-off-by: Jens Axboe <axboe@fb.com>
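      A rough illustration of the packing described above; this is not the
      real blk_mq_hw_ctx definition, and the member names and sizes here are
      simplified assumptions:

          /* Illustrative only. The point is the grouping: members touched
           * together during dispatch share a cacheline, and the large,
           * rarely-used delay_work sits at the cold end. */
          struct sketch_hw_ctx {
                  /* cacheline 0: dispatch lock, list, and atomic state */
                  spinlock_t              lock;
                  struct list_head        dispatch;
                  unsigned long           state;          /* BLK_MQ_S_* flags */

                  /* dispatch accounting at max order 7 (was 10), so run
                   * and dispatched[] take up their own cacheline */
                  unsigned long           run;
                  unsigned long           dispatched[7 + 1];

                  /* cold tail: huge, and only SCSI uses it */
                  struct delayed_work     delay_work;
          } ____cacheline_aligned_in_smp;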
    • blk-mq: turn hctx->run_work into a regular work struct · 27489a3c
      Committed by Jens Axboe
      We don't need the larger delayed work struct, since we always run it
      immediately.
      Signed-off-by: Jens Axboe <axboe@fb.com>
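      The saving comes from struct delayed_work embedding a work_struct plus
      a timer; when the work is always queued with zero delay, the plain
      struct suffices. A hedged diff-style sketch (the kblockd helper names
      at the call sites are assumptions):

          -       struct delayed_work     run_work;
          +       struct work_struct      run_work;

          -       kblockd_schedule_delayed_work_on(cpu, &hctx->run_work, 0);
          +       kblockd_schedule_work_on(cpu, &hctx->run_work);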
  13. 08 Jul, 2016 (1 commit)
  14. 06 Jul, 2016 (1 commit)
  15. 13 Apr, 2016 (2 commits)
  16. 20 Mar, 2016 (1 commit)
  17. 10 Feb, 2016 (1 commit)
    • blk-mq: dynamic h/w context count · 868f2f0b
      Committed by Keith Busch
      The hardware's provided queue count may change at runtime with resource
      provisioning. This patch allows a block driver to alter the number of
      h/w queues available when its resource count changes.
      
      The main part is a new blk-mq API to request a new number of h/w queues
      for a given live tag set. The new API freezes all queues using that set,
      then adjusts the allocated count prior to remapping these to CPUs.
      
      The bulk of the rest just shifts where h/w contexts and all their
      artifacts are allocated and freed.
      
      The maximum number of h/w contexts is capped at the number of possible
      CPUs, since there is no use for more than that. As such, all
      pre-allocated memory for pointers needs to account for the maximum
      possible rather than the initial number of queues.
      
      A side effect of this is that blk-mq will proceed successfully as
      long as it can allocate at least one h/w context. Previously it would
      fail request queue initialization if fewer than the requested number
      were allocated.
      Signed-off-by: Keith Busch <keith.busch@intel.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Tested-by: Jon Derrick <jonathan.derrick@intel.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
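      The new API referred to above is blk_mq_update_nr_hw_queues(). A
      minimal usage sketch from a driver's reconfiguration path (the
      drv_ctrl type and the callback context are invented):

          #include <linux/blk-mq.h>

          /* Called when the controller grants a different queue count.
           * blk_mq_update_nr_hw_queues() freezes all queues using the set,
           * adjusts the allocated count, then remaps h/w contexts to CPUs. */
          static void drv_adjust_queue_count(struct drv_ctrl *ctrl, int new_count)
          {
                  blk_mq_update_nr_hw_queues(&ctrl->tag_set, new_count);
          }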
  18. 02 12月, 2015 1 次提交
  19. 08 11月, 2015 1 次提交
    • block: add block polling support · 05229bee
      Committed by Jens Axboe
      Add basic support for polling for a specific IO to complete. This uses
      the cookie that blk-mq passes back at submission time; the block layer
      hands that cookie to the driver so it can spin for the specific request.
      
      This will be combined with request latency tracking, so we can make
      qualified decisions about when to poll and when not to. For now, for
      benchmark purposes, we add a sysfs file that controls whether polling
      is enabled or not.
      Signed-off-by: Jens Axboe <axboe@fb.com>
      Acked-by: Christoph Hellwig <hch@lst.de>
      Acked-by: Keith Busch <keith.busch@intel.com>
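      A hedged sketch of the spin loop this enables, using names from
      kernels of roughly this vintage ('done' stands in for a flag set by
      the bio's bi_end_io completion callback):

          blk_qc_t cookie = generic_make_request(bio);  /* cookie names the request */

          while (!READ_ONCE(done)) {
                  if (!blk_poll(bdev_get_queue(bdev), cookie))
                          io_schedule();  /* driver can't poll right now: sleep */
          }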
  20. 22 Oct, 2015 (1 commit)
    • block: generic request_queue reference counting · 3ef28e83
      Committed by Dan Williams
      Allow pmem, and other synchronous/bio-based block drivers, to fall back
      on a per-cpu reference count managed by the core for tracking queue
      live/dead state.
      
      The existing per-cpu reference count for the blk_mq case is promoted to
      be used in all block i/o scenarios.  This involves initializing it by
      default, waiting for it to drop to zero at exit, and holding a live
      reference over the invocation of q->make_request_fn() in
      generic_make_request().  The blk_mq code continues to take its own
      reference per blk_mq request and retains the ability to freeze the
      queue, but the check that the queue is frozen is moved to
      generic_make_request().
      
      This fixes crash signatures like the following:
      
       BUG: unable to handle kernel paging request at ffff880140000000
       [..]
       Call Trace:
        [<ffffffff8145e8bf>] ? copy_user_handle_tail+0x5f/0x70
        [<ffffffffa004e1e0>] pmem_do_bvec.isra.11+0x70/0xf0 [nd_pmem]
        [<ffffffffa004e331>] pmem_make_request+0xd1/0x200 [nd_pmem]
        [<ffffffff811c3162>] ? mempool_alloc+0x72/0x1a0
        [<ffffffff8141f8b6>] generic_make_request+0xd6/0x110
        [<ffffffff8141f966>] submit_bio+0x76/0x170
        [<ffffffff81286dff>] submit_bh_wbc+0x12f/0x160
        [<ffffffff81286e62>] submit_bh+0x12/0x20
        [<ffffffff813395bd>] jbd2_write_superblock+0x8d/0x170
        [<ffffffff8133974d>] jbd2_mark_journal_empty+0x5d/0x90
        [<ffffffff813399cb>] jbd2_journal_destroy+0x24b/0x270
        [<ffffffff810bc4ca>] ? put_pwq_unlocked+0x2a/0x30
        [<ffffffff810bc6f5>] ? destroy_workqueue+0x225/0x250
        [<ffffffff81303494>] ext4_put_super+0x64/0x360
        [<ffffffff8124ab1a>] generic_shutdown_super+0x6a/0xf0
      
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Keith Busch <keith.busch@intel.com>
      Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
      Suggested-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Tested-by: Ross Zwisler <ross.zwisler@linux.intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
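      The promoted counter is the percpu_ref q->q_usage_counter. A
      simplified sketch of the gate around ->make_request_fn(); the real
      logic lives in blk_queue_enter()/blk_queue_exit() and also waits for
      a frozen queue to thaw rather than failing outright:

          if (percpu_ref_tryget_live(&q->q_usage_counter)) {
                  /* the queue cannot be torn down underneath us */
                  q->make_request_fn(q, bio);
                  percpu_ref_put(&q->q_usage_counter);
          } else {
                  /* frozen or dying: wait for the unfreeze, or fail the bio */
                  bio_io_error(bio);
          }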
  21. 01 Oct, 2015 (2 commits)
  22. 30 Sep, 2015 (1 commit)
    • blk-mq: fix sysfs registration/unregistration race · 4593fdbe
      Committed by Akinobu Mita
      There is a race between cpu hotplug handling and adding/deleting
      gendisk for blk-mq, where both are trying to register and unregister
      the same sysfs entries.
      
      null_add_dev
          --> blk_mq_init_queue
              --> blk_mq_init_allocated_queue
                  --> add to 'all_q_list' (*)
          --> add_disk
              --> blk_register_queue
                  --> blk_mq_register_disk (++)
      
      null_del_dev
          --> del_gendisk
              --> blk_unregister_queue
                  --> blk_mq_unregister_disk (--)
          --> blk_cleanup_queue
              --> blk_mq_free_queue
                  --> del from 'all_q_list' (*)
      
      blk_mq_queue_reinit
          --> blk_mq_sysfs_unregister (-)
          --> blk_mq_sysfs_register (+)
      
      While the request queue is on 'all_q_list' (*), blk_mq_queue_reinit()
      can be called for the queue at any time by the CPU hotplug callback.
      But blk_mq_sysfs_unregister (-) and blk_mq_sysfs_register (+) in
      blk_mq_queue_reinit must not be called before blk_mq_register_disk (++)
      or after blk_mq_unregister_disk (--) has finished, because
      '/sys/block/*/mq/' does not exist then.
      
      The BLK_MQ_F_SYSFS_UP flag in hctx->flags can already be used to track
      this sysfs state, but it only fixes the issue partially.
      
      To fix it completely, we need a per-queue flag instead of a per-hctx
      flag, with appropriate locking. This patch therefore introduces
      q->mq_sysfs_init_done, which is properly protected by all_q_mutex.
      
      Also, we need to ensure that blk_mq_map_swqueue() is called with
      all_q_mutex held. Since hctx->nr_ctx is temporarily reset and then
      updated in blk_mq_map_swqueue(), we must prevent blk_mq_register_hctx()
      from seeing the temporary hctx->nr_ctx value during CPU hotplug
      handling or while adding/deleting a gendisk.
      Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
      Reviewed-by: Ming Lei <tom.leiming@gmail.com>
      Cc: Ming Lei <tom.leiming@gmail.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@fb.com>
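      A hedged sketch of the reinit-side guard; the field and mutex names
      match the patch, but the surrounding code is simplified:

          mutex_lock(&all_q_mutex);
          list_for_each_entry(q, &all_q_list, all_q_node) {
                  /* set by blk_mq_register_disk (++), cleared by
                   * blk_mq_unregister_disk (--) */
                  if (!q->mq_sysfs_init_done)
                          continue;
                  /* safe to unregister and re-register hctx kobjects here */
          }
          mutex_unlock(&all_q_mutex);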
  23. 02 Jun, 2015 (1 commit)
    • blk-mq: Shared tag enhancements · f26cdc85
      Committed by Keith Busch
      Storage controllers may expose multiple block devices that share hardware
      resources managed by blk-mq. This patch enhances the shared tags so that
      a low-level driver can access the shared resources that are not tied to
      the unshared h/w contexts. This way the LLD can dynamically add and
      delete disks and request queues without having to track every
      request_queue's hctxs in order to iterate outstanding tags.
      Signed-off-by: Keith Busch <keith.busch@intel.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
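      An illustrative teardown path this enables: visiting every outstanding
      request on the shared set without walking per-queue hctxs. The helper
      shown, blk_mq_tagset_busy_iter(), is the form this interface evolved
      into and is an assumption here, as is the drv_ctrl type:

          static void drv_cancel_rq(struct request *rq, void *data, bool reserved)
          {
                  /* driver-specific abort of one in-flight request */
          }

          static void drv_cancel_all(struct drv_ctrl *ctrl)
          {
                  blk_mq_tagset_busy_iter(&ctrl->tag_set, drv_cancel_rq, ctrl);
          }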
  24. 17 Apr, 2015 (1 commit)
  25. 10 Apr, 2015 (1 commit)
  26. 13 Mar, 2015 (2 commits)
    • blk-mq: export blk_mq_run_hw_queues · b94ec296
      Committed by Mike Snitzer
      Rename blk_mq_run_queues to blk_mq_run_hw_queues, add async argument,
      and export it.
      
      DM's suspend support must be able to run the queue without starting
      stopped hw queues.
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
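      A minimal usage sketch as DM-style resume code might call it, assuming
      q is the device's request_queue:

          /* Kick every hardware queue; 'true' runs them asynchronously via
           * kblockd rather than in the caller's context. */
          blk_mq_run_hw_queues(q, true);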
    • blk-mq: add blk_mq_init_allocated_queue and export blk_mq_register_disk · b62c21b7
      Committed by Mike Snitzer
      Add a variant of blk_mq_init_queue that allows a previously allocated
      queue to be initialized.  blk_mq_init_allocated_queue models
      blk_init_allocated_queue -- which was also created for DM's use.
      
      DM's approach to device creation requires a placeholder request_queue be
      allocated for use with alloc_dev() but the decision about what type of
      request_queue will be ultimately created is deferred until all component
      devices referenced in the DM table are processed to determine the table
      type (request-based, blk-mq request-based, or bio-based).
      
      Also, because DM finalizes the request_queue type late, the call to
      blk_mq_register_disk() doesn't happen during alloc_dev().
      blk_mq_register_disk() must therefore be exported so that DM can
      backfill the 'mq' dir once the blk-mq queue is fully allocated.
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Reviewed-by: Ming Lei <ming.lei@canonical.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
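      A hedged sketch of the two-phase pattern this enables (error handling
      elided; the md context is invented, following DM convention):

          /* Phase 1, alloc_dev(): placeholder queue, final type undecided */
          q = blk_alloc_queue_node(GFP_KERNEL, NUMA_NO_NODE);

          /* Phase 2, once the DM table resolves to blk-mq request-based: */
          q = blk_mq_init_allocated_queue(&md->tag_set, q);

          /* Backfill /sys/block/<dev>/mq now that the queue exists */
          blk_mq_register_disk(md->disk);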
  27. 11 Feb, 2015 (1 commit)