1. 26 Sep, 2014 (2 commits)
  2. 23 Sep, 2014 (14 commits)
  3. 10 Sep, 2014 (1 commit)
    • blk-mq: scale depth and rq map appropriately if low on memory · a5164405
      Committed by Jens Axboe
      If we are running in a kdump environment, resources are scarce.
      For some SCSI setups with a huge set of shared tags, we run out
      of memory allocating what the driver is asking for. So implement
      scale-back logic to reduce the tag depth in those cases, allowing
      the driver to successfully load.
      
      We should extend this to detect low memory situations, and implement
      a sane fallback for those (1 queue, 64 tags, or something like that).
      Tested-by: Robert Elliott <elliott@hp.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      a5164405
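      In outline, the scale-back amounts to halving the requested depth on
      allocation failure until a floor is reached. A minimal userspace
      sketch, where alloc_rq_map and the 64-tag floor are illustrative
      stand-ins rather than actual blk-mq symbols:

        #include <stdio.h>
        #include <stdlib.h>

        /* Illustrative stand-in for allocating a rq map of a given depth. */
        static void *alloc_rq_map(unsigned int depth)
        {
            return malloc((size_t)depth * 512);  /* pretend a tag costs 512B */
        }

        /* Halve the depth until the allocation succeeds or we reach a
         * floor, mirroring the scale-back behavior described above. */
        static void *alloc_scaled(unsigned int *depth)
        {
            void *map;

            while (!(map = alloc_rq_map(*depth))) {
                if (*depth <= 64)  /* assumed floor, echoing the TODO above */
                    return NULL;
                *depth /= 2;
                fprintf(stderr, "reduced tag depth to %u\n", *depth);
            }
            return map;
        }
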
  4. 04 Sep, 2014 (1 commit)
    • blk-mq: cleanup after blk_mq_init_rq_map failures · 5676e7b6
      Committed by Robert Elliott
      In blk-mq.c blk_mq_alloc_tag_set, if:
      	set->tags = kmalloc_node()
      succeeds, but one of the blk_mq_init_rq_map() calls fails,
      	goto out_unwind;
      needs to free set->tags so the caller is not obligated
      to do so.  None of the current callers (null_blk,
      virtio_blk, or the forthcoming scsi-mq) do so.
      
      set->tags needs to be set to NULL after doing so,
      so other tag cleanup logic doesn't try to free
      a stale pointer later.  Also set it to NULL
      in blk_mq_free_tag_set.
      
      Tested with error injection on the forthcoming
      scsi-mq + hpsa combination.
      Signed-off-by: Robert Elliott <elliott@hp.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      5676e7b6
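      The unwind pattern generalizes: on partial failure, free everything
      allocated so far, free the array itself, and NULL the pointer so a
      later cleanup pass can't free it twice. A self-contained sketch,
      with struct tag_set as a simplified stand-in for blk_mq_tag_set:

        #include <stdlib.h>

        struct tag_set {
            void **tags;
            int nr;
        };

        static int init_tag_set(struct tag_set *set, int nr)
        {
            int i;

            set->nr = nr;
            set->tags = calloc(nr, sizeof(*set->tags));
            if (!set->tags)
                return -1;

            for (i = 0; i < nr; i++) {
                set->tags[i] = malloc(64);  /* per-queue rq map stand-in */
                if (!set->tags[i])
                    goto out_unwind;
            }
            return 0;

        out_unwind:
            while (--i >= 0)          /* free only what was allocated */
                free(set->tags[i]);
            free(set->tags);
            set->tags = NULL;         /* no stale pointer for later cleanup */
            return -1;
        }
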
  5. 29 Aug, 2014 (1 commit)
    • block,scsi: fixup blk_get_request dead queue scenarios · a492f075
      Committed by Joe Lawrence
      The blk_get_request function may fail in low-memory conditions or during
      device removal (even if __GFP_WAIT is set). To distinguish between these
      errors, modify the blk_get_request call stack to return the appropriate
      ERR_PTR. Verify that all callers check the return status via IS_ERR
      instead of a simple NULL pointer check.
      
      For consistency, make a similar change to the blk_mq_alloc_request leg
      of blk_get_request.  It may fail if the queue is dead, or the caller was
      unwilling to wait.
      Signed-off-by: Joe Lawrence <joe.lawrence@stratus.com>
      Acked-by: Jiri Kosina <jkosina@suse.cz> [for pktdvd]
      Acked-by: Boaz Harrosh <bharrosh@panasas.com> [for osd]
      Reviewed-by: Jeff Moyer <jmoyer@redhat.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      a492f075
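      The calling convention being adopted is the kernel's ERR_PTR idiom:
      encode a negative errno in the pointer itself, and have callers test
      IS_ERR() rather than NULL. A userspace approximation of the helpers
      (the real definitions live in include/linux/err.h):

        #include <errno.h>
        #include <stdio.h>

        #define MAX_ERRNO 4095

        static void *ERR_PTR(long error) { return (void *)error; }
        static long PTR_ERR(const void *ptr) { return (long)ptr; }
        static int IS_ERR(const void *ptr)
        {
            return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
        }

        /* A blk_get_request-like function can now report *why* it failed. */
        static void *get_request(int queue_dying)
        {
            return queue_dying ? ERR_PTR(-ENODEV)   /* device removal */
                               : ERR_PTR(-ENOMEM);  /* low on memory */
        }

        int main(void)
        {
            void *rq = get_request(1);

            if (IS_ERR(rq)) {  /* callers test IS_ERR, not NULL */
                printf("blk_get_request failed: %ld\n", PTR_ERR(rq));
                return 1;
            }
            return 0;
        }
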
  6. 23 Aug, 2014 (1 commit)
  7. 22 Aug, 2014 (3 commits)
  8. 16 Aug, 2014 (1 commit)
  9. 02 Jul, 2014 (5 commits)
    • blk-mq: use percpu_ref for mq usage count · add703fd
      Committed by Tejun Heo
      Currently, blk-mq uses a percpu_counter to keep track of how many
      usages are in flight.  The percpu_counter is drained while freezing to
      ensure that no usage is left in-flight after freezing is complete.
      blk_mq_queue_enter/exit() and blk_mq_[un]freeze_queue() implement this
      per-cpu gating mechanism.
      
      This type of code has a relatively high chance of subtle bugs which
      are extremely difficult to trigger, and it's way too hairy to be open
      in blk-mq.  percpu_ref can serve the same purpose after the recent
      changes.  This patch replaces the open-coded per-cpu usage counting
      and draining mechanism with percpu_ref.
      
      blk_mq_queue_enter() performs tryget_live on the ref and exit()
      performs put.  blk_mq_freeze_queue() kills the ref and waits until the
      reference count reaches zero.  blk_mq_unfreeze_queue() revives the ref
      and wakes up the waiters.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Nicholas A. Bellinger <nab@linux-iscsi.org>
      Cc: Kent Overstreet <kmo@daterainc.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      add703fd
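      The protocol is easy to mimic in miniature: enter takes a reference
      only while the queue is live, exit drops it, freeze marks the queue
      dead and waits for the count to drain, and unfreeze revives it. A
      rough userspace analogue with C11 atomics (the real percpu_ref is
      per-cpu and far cheaper on the hot path; this mirrors protocol only):

        #include <stdatomic.h>
        #include <stdbool.h>

        struct usage_gate {
            atomic_long usage;  /* in-flight users */
            atomic_bool dead;   /* set while frozen */
        };

        /* Analogue of percpu_ref_tryget_live(): fail if frozen. */
        static bool gate_enter(struct usage_gate *g)
        {
            atomic_fetch_add(&g->usage, 1);
            if (atomic_load(&g->dead)) {   /* raced with freeze; back out */
                atomic_fetch_sub(&g->usage, 1);
                return false;
            }
            return true;
        }

        static void gate_exit(struct usage_gate *g)      /* put */
        {
            atomic_fetch_sub(&g->usage, 1);
        }

        static void gate_freeze(struct usage_gate *g)    /* kill + drain */
        {
            atomic_store(&g->dead, true);
            while (atomic_load(&g->usage) > 0)
                ;  /* the real code sleeps on a waitqueue, not a spin */
        }

        static void gate_unfreeze(struct usage_gate *g)  /* revive */
        {
            atomic_store(&g->dead, false);
        }
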
    • blk-mq: collapse __blk_mq_drain_queue() into blk_mq_freeze_queue() · 72d6f02a
      Committed by Tejun Heo
      Keeping __blk_mq_drain_queue() as a separate function doesn't buy us
      anything and it's gonna be further simplified.  Let's flatten it into
      its caller.
      
      This patch doesn't make any functional change.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Nicholas A. Bellinger <nab@linux-iscsi.org>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      72d6f02a
    • blk-mq: decouple blk-mq freezing from generic bypassing · 780db207
      Committed by Tejun Heo
      blk_mq freezing is entangled with generic bypassing which bypasses
      blkcg and io scheduler and lets IO requests fall through the block
      layer to the drivers in FIFO order.  This allows forward progress on
      IOs with the advanced features disabled so that those features can be
      configured or altered without worrying about stalling IO which may
      lead to deadlock through memory allocation.
      
      However, generic bypassing doesn't quite fit blk-mq.  blk-mq currently
      doesn't make use of blkcg or ioscheds, and it maps bypassing to
      freezing, which blocks request processing and drains all the in-flight
      ones.  This causes problems as bypassing assumes that request
      processing is online.  blk-mq works around this by conditionally
      allowing request processing for the problem case - during queue
      initialization.
      
      Another oddity is that, except during queue cleanup, bypassing
      started on the generic side prevents blk-mq from processing new
      requests but doesn't drain the in-flight ones.  This shouldn't break
      anything but again highlights that something isn't quite right here.
      
      The root cause is conflating blk-mq freezing and generic bypassing
      which are two different mechanisms.  The only intersecting purpose
      that they serve is during queue cleanup.  Let's properly separate
      blk-mq freezing from generic bypassing and simply use it where
      necessary.
      
      * request_queue->mq_freeze_depth is added and
        blk_mq_[un]freeze_queue() now operate on this counter instead of
        ->bypass_depth.  The replacement for QUEUE_FLAG_BYPASS isn't added
        but the counter is tested directly.  This will be further updated by
        later changes.
      
      * blk_mq_drain_queue() is dropped and "__" prefix is dropped from
        blk_mq_freeze_queue().  Queue cleanup path now calls
        blk_mq_freeze_queue() directly.
      
      * blk_queue_enter()'s fast path condition is simplified to simply
        check @q->mq_freeze_depth.  Previously, the condition was
      
      	!blk_queue_dying(q) &&
      	    (!blk_queue_bypass(q) || !blk_queue_init_done(q))
      
        mq_freeze_depth is incremented right after dying is set, and the
        blk_queue_init_done() exception isn't necessary as blk-mq doesn't
        start frozen.  That only leaves the blk_queue_bypass() test, which
        can be replaced by a @q->mq_freeze_depth test.
      
      This change simplifies the code and reduces confusion in the area.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Nicholas A. Bellinger <nab@linux-iscsi.org>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      780db207
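      The counter's contract can be sketched in a few lines: freezing nests
      freely and every freezer drains, while only the last unfreeze wakes
      blocked submitters. drain_in_flight and wake_waiters below are
      hypothetical helpers standing in for the underlying ref operations:

        #include <stdatomic.h>

        struct mq_queue {
            atomic_int mq_freeze_depth;
        };

        void drain_in_flight(struct mq_queue *q);  /* hypothetical */
        void wake_waiters(struct mq_queue *q);     /* hypothetical */

        void mq_freeze_queue(struct mq_queue *q)
        {
            atomic_fetch_add(&q->mq_freeze_depth, 1);
            /* every freezer drains; see the draining fix below */
            drain_in_flight(q);
        }

        void mq_unfreeze_queue(struct mq_queue *q)
        {
            /* only the last unfreeze (depth 1 -> 0) reopens the queue */
            if (atomic_fetch_sub(&q->mq_freeze_depth, 1) == 1)
                wake_waiters(q);
        }
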
    • block, blk-mq: draining can't be skipped even if bypass_depth was non-zero · 776687bc
      Committed by Tejun Heo
      Currently, both blk_queue_bypass_start() and blk_mq_freeze_queue()
      skip queue draining if bypass_depth was already above zero.  The
      assumption is that the one which bumped the bypass_depth should have
      performed draining already; however, there's nothing which prevents a
      new instance of bypassing/freezing from starting before the previous
      one finishes draining.  The current code may allow the later
      bypassing/freezing instances to complete while there still are
      in-flight requests which haven't finished draining.
      
      Fix it by draining regardless of bypass_depth.  We still skip draining
      from blk_queue_bypass_start() while the queue is initializing to avoid
      introducing excessive delays during boot.  INIT_DONE setting is moved
      above the initial blk_queue_bypass_end() so that bypassing attempts
      can't slip in between.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Nicholas A. Bellinger <nab@linux-iscsi.org>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      776687bc
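      In outline, the fix turns a conditional drain into an unconditional
      one; the sketch below paraphrases blk_queue_bypass_start, with
      drain_queue standing in for the real drain call:

        struct queue {
            int bypass_depth;
            int init_done;   /* INIT_DONE: queue initialization finished */
        };

        void drain_queue(struct queue *q);  /* stand-in, hypothetical */

        void queue_bypass_start(struct queue *q)
        {
            q->bypass_depth++;   /* nesting is fine ... */

            /* ... but each instance must drain for itself: an earlier
             * bypass may still be mid-drain when this one starts.  Only
             * skip while initializing, to keep boot fast. */
            if (q->init_done)
                drain_queue(q);
        }
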
    • blk-mq: fix a memory ordering bug in blk_mq_queue_enter() · 531ed626
      Committed by Tejun Heo
      blk-mq uses a percpu_counter to keep track of how many usages are in
      flight.  The percpu_counter is drained while freezing to ensure that
      no usage is left in-flight after freezing is complete.
      
      blk_mq_queue_enter/exit() and blk_mq_[un]freeze_queue() implement this
      per-cpu gating mechanism; unfortunately, it contains a subtle bug -
      smp_wmb() in blk_mq_queue_enter() doesn't prevent the cpu from
      fetching @q->bypass_depth before incrementing @q->mq_usage_counter,
      and if freezing happens in between, the caller can slip through and
      freezing can complete while there are active users.
      
      Use smp_mb() instead so that bypass_depth and mq_usage_counter
      modifications and tests are properly interlocked.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Nicholas A. Bellinger <nab@linux-iscsi.org>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      531ed626
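      The bug class is a classic store-load reordering: a write barrier
      after the increment doesn't stop the later read of the freeze state
      from being satisfied early. In C11 terms, what smp_mb() provides is
      a full (seq_cst) fence between the increment and the load; a sketch:

        #include <stdatomic.h>

        static atomic_int mq_usage_counter;
        static atomic_int bypass_depth;

        /* Returns 0 on entry, -1 if the queue is freezing/frozen. */
        static int queue_enter(void)
        {
            atomic_fetch_add_explicit(&mq_usage_counter, 1,
                                      memory_order_relaxed);

            /* smp_mb() equivalent: make the increment visible before the
             * load below, so a concurrent freeze cannot miss this user. */
            atomic_thread_fence(memory_order_seq_cst);

            if (atomic_load_explicit(&bypass_depth,
                                     memory_order_relaxed) > 0) {
                atomic_fetch_sub_explicit(&mq_usage_counter, 1,
                                          memory_order_relaxed);
                return -1;   /* back out and take the slow path */
            }
            return 0;
        }
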
  10. 25 Jun, 2014 (1 commit)
  11. 14 Jun, 2014 (2 commits)
  12. 10 Jun, 2014 (1 commit)
  13. 09 Jun, 2014 (1 commit)
  14. 07 Jun, 2014 (2 commits)
  15. 06 Jun, 2014 (1 commit)
    • blk-mq: bump max tag depth to 10K tags · a4391c64
      Committed by Jens Axboe
      For some scsi-mq cases, the tag map can be huge. So increase the
      max number of tags we support.
      
      Additionally, don't fail with EINVAL if a user requests too many
      tags. Warn that the tag depth has been adjusted down, and store
      the new value inside the tag_set passed in.
      Signed-off-by: Jens Axboe <axboe@fb.com>
      a4391c64
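      The clamp-and-warn behavior in sketch form; the 10240 constant is
      illustrative (the title says 10K), and the adjusted depth is written
      back into the set so the caller sees the value actually in effect:

        #include <stdio.h>

        #define MAX_TAG_DEPTH 10240   /* illustrative "10K" cap */

        struct tag_set {
            unsigned int queue_depth;
        };

        static void clamp_depth(struct tag_set *set)
        {
            if (set->queue_depth > MAX_TAG_DEPTH) {
                fprintf(stderr, "blk-mq: reduced tag depth from %u to %d\n",
                        set->queue_depth, MAX_TAG_DEPTH);
                set->queue_depth = MAX_TAG_DEPTH;
            }
        }
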
  16. 05 Jun, 2014 (1 commit)
  17. 04 Jun, 2014 (2 commits)