1. 28 Jan, 2017 1 commit
  2. 27 Jan, 2017 2 commits
  3. 18 Jan, 2017 4 commits
  4. 10 Dec, 2016 1 commit
  5. 11 Nov, 2016 1 commit
    • block: add scalable completion tracking of requests · cf43e6be
      Jens Axboe authored
      For legacy block, we simply track them in the request queue. For
      blk-mq, we track them on a per-sw queue basis, which we can then
      sum up through the hardware queues and finally to a per device
      state.
      
      The stats are tracked in, roughly, 0.1s interval windows.
      
      Add sysfs files to display the stats.
      
      The feature is off by default, to avoid any extra overhead. In-kernel
      users of it can turn it on by setting QUEUE_FLAG_STATS in the queue
      flags. We currently don't turn it on if someone just reads any of
      the stats files; that is something we could add as well.
      Signed-off-by: Jens Axboe <axboe@fb.com>
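      As a rough illustration of the windowed tracking described above, here is
      a minimal userspace C sketch. The struct and function names are made up
      for illustration (they are not the kernel's), and locking and the
      per-sw-queue/per-device summation are omitted.

      #include <stdint.h>
      #include <stdio.h>

      #define STAT_WIN_NSEC 100000000ULL  /* ~0.1s window, per the commit */

      struct rq_stat {
          uint64_t win_start;   /* start time of the current window */
          uint64_t nr_samples;  /* completions seen in this window */
          uint64_t total;       /* summed completion latency */
      };

      static void stat_reset(struct rq_stat *s, uint64_t now)
      {
          s->win_start = now;
          s->nr_samples = 0;
          s->total = 0;
      }

      /* fold one completion latency into the current window */
      static void stat_add(struct rq_stat *s, uint64_t now, uint64_t lat)
      {
          if (now - s->win_start >= STAT_WIN_NSEC)
              stat_reset(s, now);  /* window expired: start a new one */
          s->nr_samples++;
          s->total += lat;
      }

      int main(void)
      {
          struct rq_stat s;

          stat_reset(&s, 0);
          stat_add(&s, 1000, 50);
          stat_add(&s, 2000, 70);
          printf("samples=%llu mean=%llu\n",
                 (unsigned long long)s.nr_samples,
                 (unsigned long long)(s.total / s.nr_samples));
          return 0;
      }

      In the real patch, per-software-queue copies of such a structure are
      summed up through the hardware queues into the per-device view exposed
      in sysfs.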
  6. 09 Nov, 2016 1 commit
  7. 03 Nov, 2016 1 commit
  8. 22 Sep, 2016 1 commit
  9. 17 Sep, 2016 2 commits
  10. 15 Sep, 2016 2 commits
  11. 10 Feb, 2016 1 commit
    • blk-mq: dynamic h/w context count · 868f2f0b
      Keith Busch authored
      The hardware's provided queue count may change at runtime with resource
      provisioning. This patch allows a block driver to alter the number of
      h/w queues available when its resource count changes.
      
      The main part is a new blk-mq API to request a new number of h/w queues
      for a given live tag set. The new API freezes all queues using that set,
      then adjusts the allocated count prior to remapping these to CPUs.
      
      The bulk of the rest just shifts where h/w contexts and all their
      artifacts are allocated and freed.
      
      The maximum number of h/w contexts is capped at the number of possible
      CPUs, since there is no use for more than that. As such, all
      pre-allocated memory for pointers needs to account for the maximum
      possible count rather than the initial number of queues.
      
      A side effect of this is that blk-mq will proceed successfully as
      long as it can allocate at least one h/w context. Previously it would
      fail request queue initialization if less than the requested number
      was allocated.
      Signed-off-by: Keith Busch <keith.busch@intel.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Tested-by: Jon Derrick <jonathan.derrick@intel.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
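      A hedged sketch of the flow described above (the names below are
      simplified stand-ins, not the kernel API, though the commit's new entry
      point is blk_mq_update_nr_hw_queues()): freeze every queue using the
      tag set, cap and apply the new count, remap, then unfreeze.

      #include <stdio.h>

      #define NR_POSSIBLE_CPUS 8  /* stand-in for the possible-CPU count */

      struct tag_set {
          int nr_hw_queues;
          /* pre-sized for the max, so growing later needs no reallocation */
          void *hctxs[NR_POSSIBLE_CPUS];
      };

      static void freeze_all(struct tag_set *set)     { (void)set; /* drain in-flight I/O */ }
      static void unfreeze_all(struct tag_set *set)   { (void)set; /* resume processing */ }
      static void remap_sw_to_hw(struct tag_set *set) { (void)set; /* rebuild CPU->hctx map */ }

      static void update_nr_hw_queues(struct tag_set *set, int nr)
      {
          if (nr > NR_POSSIBLE_CPUS)
              nr = NR_POSSIBLE_CPUS;  /* no use for more queues than CPUs */
          freeze_all(set);            /* quiesce all queues sharing this set */
          set->nr_hw_queues = nr;
          remap_sw_to_hw(set);        /* remap before new I/O is admitted */
          unfreeze_all(set);
      }

      int main(void)
      {
          struct tag_set set = { .nr_hw_queues = 4 };

          update_nr_hw_queues(&set, 2);  /* driver lost resources at runtime */
          printf("now %d hw queues\n", set.nr_hw_queues);
          return 0;
      }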
  12. 02 Dec, 2015 1 commit
  13. 12 Nov, 2015 1 commit
  14. 10 Oct, 2015 1 commit
  15. 30 Sep, 2015 1 commit
    • blk-mq: avoid inserting requests before establishing new mapping · 5778322e
      Akinobu Mita authored
      Notifier callbacks for the CPU_ONLINE action can run on a CPU other
      than the one which was just onlined.  So it is possible for a
      process running on the just-onlined CPU to insert a request and run
      the hw queue before the new mapping is established by
      blk_mq_queue_reinit_notify().
      
      This can cause a problem when the CPU is onlined for the first time
      since the request queue was initialized.  At this time ctx->index_hw
      for the CPU, which is the index in hctx->ctxs[] for this ctx, is still
      zero before blk_mq_queue_reinit_notify() is called by notifier
      callbacks for CPU_ONLINE action.
      
      For example, there is a single hw queue (hctx) and two CPU queues
      (ctx0 for CPU0, and ctx1 for CPU1).  Now CPU1 has just been onlined
      and a request is inserted into ctx1->rq_list, setting bit0 in the
      pending bitmap, as ctx1->index_hw is still zero.
      
      And then, while running the hw queue, flush_busy_ctxs() finds bit0
      set in the pending bitmap and tries to retrieve requests from
      hctx->ctxs[0]->rq_list.  But hctx->ctxs[0] is a pointer to ctx0, so
      the request in ctx1->rq_list is ignored.
      
      Fix it by ensuring that the new mapping is established before the
      onlined CPU starts running.
      Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
      Reviewed-by: Ming Lei <tom.leiming@gmail.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Ming Lei <tom.leiming@gmail.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@fb.com>
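      The stale-mapping bug is easy to demonstrate with a toy model (all
      names are illustrative, not kernel structures): because ctx1->index_hw
      is still zero before the remap, the pending bit points at ctx0 and the
      flush walk never sees ctx1's request.

      #include <stdio.h>

      struct ctx { int index_hw; int rq_list_len; };

      int main(void)
      {
          struct ctx ctx0 = { .index_hw = 0 }, ctx1 = { .index_hw = 0 };
          struct ctx *hctx_ctxs[2] = { &ctx0, &ctx1 };
          unsigned long pending = 0;

          /* CPU1 just onlined: a request is inserted before the remap runs */
          ctx1.rq_list_len++;
          pending |= 1UL << ctx1.index_hw;  /* sets bit0, not bit1! */

          /* the flush_busy_ctxs()-style walk: bit0 -> hctx_ctxs[0] == ctx0 */
          for (int i = 0; i < 2; i++)
              if (pending & (1UL << i))
                  printf("flushing hctx_ctxs[%d]: %d requests\n",
                         i, hctx_ctxs[i]->rq_list_len);

          /* prints 0 requests: the request sitting on ctx1 is ignored */
          return 0;
      }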
  16. 30 Jan, 2015 1 commit
  17. 01 Jan, 2015 1 commit
  18. 09 Dec, 2014 1 commit
  19. 26 Sep, 2014 2 commits
  20. 23 Sep, 2014 1 commit
  21. 02 Jul, 2014 1 commit
    • blk-mq: decouple blk-mq freezing from generic bypassing · 780db207
      Tejun Heo authored
      blk_mq freezing is entangled with generic bypassing which bypasses
      blkcg and io scheduler and lets IO requests fall through the block
      layer to the drivers in FIFO order.  This allows forward progress on
      IOs with the advanced features disabled so that those features can be
      configured or altered without worrying about stalling IO which may
      lead to deadlock through memory allocation.
      
      However, generic bypassing doesn't quite fit blk-mq.  blk-mq currently
      doesn't make use of blkcg or I/O schedulers, and it maps bypassing to
      freezing, which blocks request processing and drains all the in-flight
      ones.  This causes problems as bypassing assumes that request
      processing is online.  blk-mq works around this by conditionally
      allowing request processing for the problem case - during queue
      initialization.
      
      Another oddity is that, except during queue cleanup, bypassing
      started on the generic side prevents blk-mq from processing new
      requests but doesn't drain the in-flight ones.  This shouldn't break
      anything but again highlights that something isn't quite right here.
      
      The root cause is conflating blk-mq freezing and generic bypassing
      which are two different mechanisms.  The only intersecting purpose
      that they serve is during queue cleanup.  Let's properly separate
      blk-mq freezing from generic bypassing and simply use freezing where
      necessary.
      
      * request_queue->mq_freeze_depth is added and
        blk_mq_[un]freeze_queue() now operate on this counter instead of
        ->bypass_depth.  The replacement for QUEUE_FLAG_BYPASS isn't added
        but the counter is tested directly.  This will be further updated by
        later changes.
      
      * blk_mq_drain_queue() is dropped and "__" prefix is dropped from
        blk_mq_freeze_queue().  Queue cleanup path now calls
        blk_mq_freeze_queue() directly.
      
      * blk_queue_enter()'s fast path condition is simplified to simply
        check @q->mq_freeze_depth.  Previously, the condition was
      
      	!blk_queue_dying(q) &&
      	    (!blk_queue_bypass(q) || !blk_queue_init_done(q))
      
        mq_freeze_depth is incremented right after dying is set and the
        blk_queue_init_done() exception isn't necessary as blk-mq doesn't
        start frozen; that only leaves the blk_queue_bypass() test, which
        can be replaced by a @q->mq_freeze_depth test.
      
      This change simplifies the code and reduces confusion in the area.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Nicholas A. Bellinger <nab@linux-iscsi.org>
      Signed-off-by: Jens Axboe <axboe@fb.com>
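      A minimal sketch of the separated mechanism, assuming the semantics
      described above; the names and the userspace C11 atomics are stand-ins
      for the kernel's counter and its wait/drain logic.

      #include <stdatomic.h>
      #include <stdio.h>

      static atomic_int mq_freeze_depth;  /* replaces ->bypass_depth for blk-mq */

      static int queue_enter(void)
      {
          /* fast path: only the depth test, per the simplified condition */
          if (atomic_load(&mq_freeze_depth) > 0)
              return -1;  /* frozen: caller must wait or fail */
          return 0;
      }

      static void freeze_queue(void)
      {
          atomic_fetch_add(&mq_freeze_depth, 1);
          /* ... wait here for in-flight requests to drain ... */
      }

      static void unfreeze_queue(void)
      {
          atomic_fetch_sub(&mq_freeze_depth, 1);  /* freezes nest */
      }

      int main(void)
      {
          freeze_queue();
          printf("enter while frozen: %d\n", queue_enter());   /* -1 */
          unfreeze_queue();
          printf("enter after unfreeze: %d\n", queue_enter()); /* 0 */
          return 0;
      }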
  22. 04 Jun, 2014 2 commits
  23. 30 May, 2014 1 commit
    • blk-mq: make the sysfs mq/ layout reflect current mappings · 67aec14c
      Jens Axboe authored
      Currently blk-mq registers all the hardware queues in sysfs,
      regardless of whether it uses them (i.e., whether they have CPU
      mappings) or not. The unused hardware queues lack the cpuX/
      directories, and the other sysfs entries (like active, pending, etc.)
      are all zeroes.
      
      Change this so that sysfs correctly reflects the current mappings
      of the hardware queues.
      Signed-off-by: Jens Axboe <axboe@fb.com>
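      A small sketch of the new registration rule (hypothetical names, not
      the kernel's sysfs code): only hardware queues with at least one
      mapped software context get mq/<n>/ directories.

      #include <stdio.h>

      struct hctx { int nr_ctx; };  /* number of sw contexts mapped to it */

      static void register_hctx_sysfs(int i) { printf("register mq/%d/\n", i); }

      int main(void)
      {
          struct hctx hctxs[4] = { {2}, {0}, {1}, {0} };

          for (int i = 0; i < 4; i++)
              if (hctxs[i].nr_ctx)  /* unused queues stay unregistered */
                  register_hctx_sysfs(i);
          return 0;  /* only mq/0/ and mq/2/ appear */
      }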
  24. 28 May, 2014 1 commit
  25. 22 May, 2014 1 commit
  26. 21 May, 2014 1 commit
    • blk-mq: allow changing of queue depth through sysfs · e3a2b3f9
      Jens Axboe authored
      For request_fn based devices, the block layer exports a 'nr_requests'
      file through sysfs to allow adjusting of queue depth on the fly.
      Currently this returns -EINVAL for blk-mq, since it's not wired up.
      Wire this up for blk-mq, so that it now also allows dynamic
      adjustment of the allowed queue depth for any given block device
      managed by blk-mq.
      Signed-off-by: Jens Axboe <axboe@fb.com>
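      In use this is the usual sysfs knob, e.g.
      "echo 128 > /sys/block/<dev>/queue/nr_requests". Below is a hedged
      sketch of what the store side might do; the function name and the
      minimum of 4 are illustrative assumptions, and the freeze/resize steps
      are elided to a comment.

      #include <stdio.h>
      #include <stdlib.h>

      struct queue { int nr_requests; };

      static int queue_requests_store(struct queue *q, const char *buf)
      {
          long nr = strtol(buf, NULL, 10);

          if (nr < 4)
              nr = 4;  /* keep a sane minimum depth */
          /* a real implementation would freeze the queue, resize the
             per-hctx tag maps, then unfreeze */
          q->nr_requests = (int)nr;
          return 0;
      }

      int main(void)
      {
          struct queue q = { .nr_requests = 64 };

          queue_requests_store(&q, "128");
          printf("nr_requests = %d\n", q.nr_requests);
          return 0;
      }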
  27. 20 May, 2014 1 commit
  28. 09 May, 2014 1 commit
    • blk-mq: implement new and more efficient tagging scheme · 4bb659b1
      Jens Axboe authored
      blk-mq currently uses percpu_ida for tag allocation. But that only
      works well if the ratio between tag space and number of CPUs is
      sufficiently high. For most devices and systems, that is not the
      case. The end result is that we either only utilize the tag space
      partially, or we end up attempting to fully exhaust it and run
      into lots of lock contention with stealing between CPUs. This is
      not optimal.
      
      This new tagging scheme is a hybrid bitmap allocator. It uses
      two tricks to both be SMP friendly and allow full exhaustion
      of the space:
      
      1) We cache the last allocated (or freed) tag on a per blk-mq
         software context basis. This allows us to limit the space
         we have to search. The key element here is not caching it
         in the shared tag structure; otherwise we end up dirtying
         more shared cache lines on each allocate/free operation.
      
      2) The tag space is split into cache-line-sized groups, and
         each context will start off randomly in that space. Even up
         to full utilization of the space, this divides the tag users
         efficiently into cache line groups, avoiding dirtying the same
         one both between allocators and between allocator and freer.
      
      This scheme shows drastically better behaviour on both small and
      large tag spaces. It has been tested extensively and shows better
      performance for all the cases blk-mq cares about.
      Signed-off-by: Jens Axboe <axboe@fb.com>
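      To make the two tricks concrete, here is a toy userspace sketch, as an
      assumption-laden simplification: a single 64-bit word stands in for the
      real multi-word bitmap, and the non-atomic test-and-set stands in for
      an atomic one.

      #include <stdint.h>
      #include <stdio.h>
      #include <stdlib.h>

      #define NR_TAGS 64

      static uint64_t tag_map;  /* shared bitmap: bit set = tag in use */

      struct sw_ctx {
          unsigned last_tag;  /* trick 1: per-ctx cache, not in shared state */
      };

      static int alloc_tag(struct sw_ctx *ctx)
      {
          /* start at the cached/randomized offset and wrap around once */
          for (unsigned i = 0; i < NR_TAGS; i++) {
              unsigned tag = (ctx->last_tag + i) % NR_TAGS;
              uint64_t bit = 1ULL << tag;

              if (!(tag_map & bit)) {  /* would be an atomic test_and_set */
                  tag_map |= bit;
                  ctx->last_tag = tag;
                  return (int)tag;
              }
          }
          return -1;  /* tag space exhausted */
      }

      static void free_tag(struct sw_ctx *ctx, unsigned tag)
      {
          tag_map &= ~(1ULL << tag);
          ctx->last_tag = tag;  /* cache for reuse while it is still hot */
      }

      int main(void)
      {
          /* trick 2: each context starts at a random point in the space */
          struct sw_ctx c0 = { .last_tag = rand() % NR_TAGS };
          struct sw_ctx c1 = { .last_tag = rand() % NR_TAGS };
          int t0 = alloc_tag(&c0);
          int t1 = alloc_tag(&c1);

          printf("c0 got %d, c1 got %d\n", t0, t1);
          free_tag(&c0, (unsigned)t0);
          return 0;
      }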
  29. 25 Apr, 2014 1 commit
    • blk-mq: respect rq_affinity · 38535201
      Christoph Hellwig authored
      The blk-mq code is using its own version of the I/O completion affinity
      tunables, which causes a few issues:
      
       - the rq_affinity sysfs file doesn't work for blk-mq devices, even if it
         still is present, thus breaking existing tuning setups.
       - the rq_affinity = 1 mode, which is the default for legacy request-based
         drivers, isn't implemented at all.
       - blk-mq drivers don't implement any completion affinity with the default
         flag settings.
      
      This patch removes the blk-mq ipi_redirect flag and sysfs file, as well
      as the internal BLK_MQ_F_SHOULD_IPI flag, and replaces them with code that
      respects the queue-wide rq_affinity flags and also implements the
      rq_affinity = 1 mode.
      
      This means I/O completion affinity can now only be tuned block-queue wide
      instead of per context, which seems more sensible to me anyway.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@fb.com>
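      A sketch of the completion-CPU decision, assuming the documented
      rq_affinity semantics (0: complete wherever the IRQ lands; 1: steer to
      the submitting CPU's cache-sharing group; 2: force the exact submitting
      CPU). The grouping function is a toy stand-in for shared-cache topology.

      #include <stdio.h>

      static int group_of(int cpu) { return cpu / 4; }  /* toy: 4 CPUs/group */

      static int completion_cpu(int rq_affinity, int submit_cpu, int irq_cpu)
      {
          switch (rq_affinity) {
          case 0:
              return irq_cpu;  /* no steering */
          case 1:
              /* group-local completion is fine; otherwise IPI back */
              return group_of(submit_cpu) == group_of(irq_cpu)
                      ? irq_cpu : submit_cpu;
          default:
              return submit_cpu;  /* rq_affinity = 2: exact CPU */
          }
      }

      int main(void)
      {
          printf("aff=1: complete on CPU %d\n", completion_cpu(1, 1, 6));
          printf("aff=2: complete on CPU %d\n", completion_cpu(2, 1, 2));
          return 0;
      }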
  30. 24 Apr, 2014 1 commit
    • blk-mq: fix race with timeouts and requeue events · 87ee7b11
      Jens Axboe authored
      If a requeue event races with a timeout, we can get into the
      situation where we attempt to complete a request from the
      timeout handler when it's no longer started. This causes a crash.
      So have the timeout handler check that REQ_ATOM_STARTED is still
      set on the request - if not, we ignore the event. If this happens,
      the request has now been marked as complete. As a consequence, we
      need to ensure that REQ_ATOM_COMPLETE is cleared in
      blk_mq_start_request(), so as to maintain proper request state.
      Signed-off-by: Jens Axboe <axboe@fb.com>
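      A rough sketch of the described state handling, with illustrative names
      and C11 atomics standing in for the kernel's atomic bit operations.

      #include <stdatomic.h>
      #include <stdio.h>

      #define REQ_ATOM_STARTED  0
      #define REQ_ATOM_COMPLETE 1

      struct request { atomic_uint atomic_flags; };

      static int test_bit(struct request *rq, int bit)
      {
          return (atomic_load(&rq->atomic_flags) >> bit) & 1;
      }

      static void set_bit(struct request *rq, int bit)
      {
          atomic_fetch_or(&rq->atomic_flags, 1u << bit);
      }

      static void clear_bit(struct request *rq, int bit)
      {
          atomic_fetch_and(&rq->atomic_flags, ~(1u << bit));
      }

      static void timeout_handler(struct request *rq)
      {
          if (!test_bit(rq, REQ_ATOM_STARTED)) {
              /* a requeue won the race: the request is treated as
                 complete and the timeout event is ignored */
              set_bit(rq, REQ_ATOM_COMPLETE);
              return;
          }
          printf("timing out a started request\n");
      }

      static void start_request(struct request *rq)
      {
          set_bit(rq, REQ_ATOM_STARTED);
          clear_bit(rq, REQ_ATOM_COMPLETE);  /* undo a raced timeout */
      }

      int main(void)
      {
          struct request rq;

          atomic_init(&rq.atomic_flags, 0);
          timeout_handler(&rq);  /* not started: event ignored */
          start_request(&rq);
          timeout_handler(&rq);  /* started: normal timeout path */
          return 0;
      }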
  31. 16 Apr, 2014 2 commits