1. 02 Jun 2015, 1 commit
    • blk-mq: Shared tag enhancements · f26cdc85
      Committed by Keith Busch
      Storage controllers may expose multiple block devices that share hardware
      resources managed by blk-mq. This patch enhances the shared tags so a
      low-level driver can access the shared resources not tied to the unshared
      h/w contexts. This way the LLD can dynamically add and delete disks and
      request queues without having to track every request_queue's hctxs in
      order to iterate outstanding tags.
      Signed-off-by: Keith Busch <keith.busch@intel.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
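      Illustrative sketch (not from the commit) of how an LLD might walk all outstanding
      requests of a shared tag set; the blk_mq_all_tag_busy_iter() name and callback
      signature reflect the interface associated with this change and should be treated
      as assumptions.

        #include <linux/blk-mq.h>

        /* called once per in-flight request of the shared tag set */
        static void my_handle_busy_rq(struct request *rq, void *priv, bool reserved)
        {
                /* driver-private handling, e.g. abort or requeue the request */
        }

        /* walk every outstanding tag without touching any request_queue's hctxs */
        static void my_iterate_shared_tags(struct blk_mq_tags *shared_tags)
        {
                blk_mq_all_tag_busy_iter(shared_tags, my_handle_busy_rq, NULL);
        }
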
  2. 19 Mar 2015, 1 commit
  3. 12 Feb 2015, 1 commit
  4. 24 Jan 2015, 1 commit
    • blk-mq: add tag allocation policy · 24391c0d
      Committed by Shaohua Li
      This is the blk-mq part to support tag allocation policy. The default
      allocation policy isn't changed (though it's not a strict FIFO). The new
      policy is round-robin for libata, but it's a best-effort implementation:
      if multiple tasks are competing, the returned tags will be mixed, which
      is unavoidable even in the non-mq case, as requests from different tasks
      can be mixed in the queue.
      
      Cc: Jens Axboe <axboe@fb.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Christoph Hellwig <hch@infradead.org>
      Signed-off-by: Shaohua Li <shli@fb.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
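      Illustrative sketch (not from the commit) of the difference between the default
      and a round-robin policy when searching a tag bitmap; the helper and its
      parameters are assumptions.

        #include <linux/bitops.h>

        /*
         * Round-robin resumes the search just past the most recently issued
         * tag, cycling through the whole space (what libata wants); the
         * default restarts from a cached position and tends to reuse low tags.
         */
        static int alloc_tag(unsigned long *map, unsigned int depth,
                             unsigned int *cursor, bool round_robin)
        {
                unsigned int start = round_robin ? *cursor : 0;
                unsigned int tag;

                tag = find_next_zero_bit(map, depth, start);
                if (tag >= depth)                       /* wrap around once */
                        tag = find_next_zero_bit(map, depth, 0);
                if (tag >= depth || test_and_set_bit(tag, map))
                        return -1;                      /* full, or lost a race */

                if (round_robin)
                        *cursor = (tag + 1) % depth;
                return tag;
        }
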
  5. 14 Jan 2015, 1 commit
    • blk-mq: fix false negative out-of-tags condition · 0bf36498
      Committed by Jens Axboe
      The blk-mq tagging tries to maintain some locality between CPUs and
      the tags issued. The tags are split into groups of words, and the
      words may not be fully populated. When searching for a new free tag,
      blk-mq may look at partial words, hence it passes an offset and size
      to find_next_zero_bit(). However, it gets this wrong: the size must
      always be the full number of tags in that word, otherwise we can
      potentially miss free tags near the end of the word.
      
      Another issue is when __bt_get() goes from one word set to the next.
      It bumps the index, but not the last_tag associated with the
      previous index. Bump that to be in the range of the new word.
      
      Finally, clean up __bt_get() and __bt_get_word() a bit and get
      rid of the goto in there, and the unnecessary 'wrap' variable.
      Signed-off-by: Jens Axboe <axboe@fb.com>
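      Illustrative sketch (not from the commit) of the find_next_zero_bit() semantics
      at issue: the second argument is an absolute upper bound on the search, so it
      must cover the whole word even when the search starts at an offset. Names are
      assumptions.

        #include <linux/bitops.h>

        /* resume a search inside one word at 'last_tag' */
        static int find_free_in_word(unsigned long *word, unsigned int tags_in_word,
                                     unsigned int last_tag)
        {
                /* correct: size is the full word depth, offset picks the start */
                unsigned int bit = find_next_zero_bit(word, tags_in_word, last_tag);

                /* passing anything smaller than tags_in_word as the size would
                 * silently hide free tags near the end of the word */
                return bit < tags_in_word ? (int)bit : -1;
        }
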
  6. 01 Jan 2015, 1 commit
  7. 15 Dec 2014, 1 commit
  8. 10 Dec 2014, 3 commits
    • blk-mq: Micro-optimize bt_get() · 52f7eb94
      Committed by Bart Van Assche
      Remove a superfluous finish_wait() call. Convert the two bt_wait_ptr()
      calls into a single call.
      Signed-off-by: Bart Van Assche <bvanassche@acm.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Robert Elliott <elliott@hp.com>
      Cc: Ming Lei <ming.lei@canonical.com>
      Cc: Alexander Gordeev <agordeev@redhat.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
    • blk-mq: Fix a race between bt_clear_tag() and bt_get() · c38d185d
      Committed by Bart Van Assche
      What we need is the following two guarantees:
      * Any thread that observes the effect of the test_and_set_bit() by
        __bt_get_word() also observes the preceding addition of 'current'
        to the appropriate wait list. This is guaranteed by the semantics
        of the spin_unlock() operation performed by prepare_and_wait().
        Hence the conversion of test_and_set_bit_lock() into
        test_and_set_bit().
      * The wait lists are examined by bt_clear() after the tag bit has
        been cleared. clear_bit_unlock() guarantees that any thread that
        observes that the bit has been cleared also observes the store
        operations preceding clear_bit_unlock(). However,
        clear_bit_unlock() does not prevent the wait lists from being
        examined before the tag bit is cleared. Hence the addition of a
        memory barrier between clear_bit() and the wait list examination.
      Signed-off-by: Bart Van Assche <bvanassche@acm.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Robert Elliott <elliott@hp.com>
      Cc: Ming Lei <ming.lei@canonical.com>
      Cc: Alexander Gordeev <agordeev@redhat.com>
      Cc: <stable@vger.kernel.org> # v3.13+
      Signed-off-by: Jens Axboe <axboe@fb.com>
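      Illustrative sketch (not from the commit) of the release path described above:
      clear the tag bit, then a full barrier, then examine the wait queues. The
      structure and helper names mirror the blk-mq-tag code of that era and are
      assumptions.

        static void bt_clear_tag_sketch(struct blk_mq_bitmap_tags *bt,
                                        struct blk_align_bitmap *bm,
                                        unsigned int tag)
        {
                struct bt_wait_state *bs;

                clear_bit(tag, &bm->word);      /* hand the tag back */

                /* make the cleared bit visible before the wait lists are read,
                 * pairing with the ordering on the sleeping side */
                smp_mb__after_atomic();

                bs = bt_wake_ptr(bt);           /* only now inspect the wait queues */
                if (bs)
                        wake_up(&bs->wait);
        }
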
    • blk-mq: Avoid that __bt_get_word() wraps multiple times · 9e98e9d7
      Committed by Bart Van Assche
      If __bt_get_word() is called with last_tag != 0, and the first
      find_next_zero_bit() fails, and after the wrap-around the
      test_and_set_bit() call fails while find_next_zero_bit() succeeds,
      and the next test_and_set_bit() call fails while the subsequent
      find_next_zero_bit() does not find a zero bit, then another
      wrap-around will occur. Avoid this by introducing an additional
      local variable.
      Signed-off-by: Bart Van Assche <bvanassche@acm.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Robert Elliott <elliott@hp.com>
      Cc: Ming Lei <ming.lei@canonical.com>
      Cc: Alexander Gordeev <agordeev@redhat.com>
      Cc: <stable@vger.kernel.org> # v3.13+
      Signed-off-by: Jens Axboe <axboe@fb.com>
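      Illustrative sketch (not from the commit): a __bt_get_word()-style loop in which
      a local flag guarantees at most one wrap-around, regardless of how many
      test_and_set_bit() races are lost. Names are assumptions.

        #include <linux/bitops.h>

        static int get_tag_in_word(unsigned long *word, unsigned int depth,
                                   unsigned int last_tag)
        {
                bool wrapped = false;           /* the extra local variable */
                unsigned int tag;

                for (;;) {
                        tag = find_next_zero_bit(word, depth, last_tag);
                        if (tag >= depth) {
                                if (wrapped)
                                        return -1;      /* searched everything once */
                                wrapped = true;
                                last_tag = 0;           /* restart from bit 0 */
                                continue;
                        }
                        if (!test_and_set_bit(tag, word))
                                return tag;             /* got a free tag */
                        last_tag = tag + 1;             /* lost the race, keep looking */
                }
        }
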
  9. 08 Dec 2014, 2 commits
  10. 25 Nov 2014, 1 commit
  11. 12 Nov 2014, 1 commit
  12. 07 Oct 2014, 2 commits
  13. 23 Sep 2014, 1 commit
  14. 18 Jun 2014, 3 commits
    • blk-mq: bitmap tag: fix races in bt_get() function · 86fb5c56
      Committed by Alexander Gordeev
      This update fixes a few issues in the bt_get() function:
      
      - the list_empty(&wait.task_list) check is not protected;
      
      - the was_empty check is always true, which results in *every* thread
        entering the loop resetting the bt_wait_state::wait_cnt counter,
        rather than only every bt->wake_cnt'th thread;
      
      - the 'bt_wait_state::wait_cnt' counter update is redundant, since
        it also gets reset in the bt_clear_tag() function.
      
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Ming Lei <tom.leiming@gmail.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
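      Illustrative sketch (not from the commit) of the sleeping side: register on the
      wait queue with prepare_to_wait() before re-trying the allocation, which makes
      the unprotected list_empty() check unnecessary. Helper names and signatures are
      assumptions.

        static int bt_get_sketch(struct blk_mq_bitmap_tags *bt,
                                 struct bt_wait_state *bs,
                                 unsigned int *last_tag)
        {
                DEFINE_WAIT(wait);
                int tag;

                for (;;) {
                        tag = __bt_get(bt, last_tag);   /* opportunistic attempt */
                        if (tag != -1)
                                break;

                        /* queue ourselves first, then re-check: a concurrent free
                         * either sees us on the list or we see the freed tag */
                        prepare_to_wait(&bs->wait, &wait, TASK_UNINTERRUPTIBLE);

                        tag = __bt_get(bt, last_tag);
                        if (tag != -1)
                                break;

                        io_schedule();                  /* sleep until woken */
                }
                finish_wait(&bs->wait, &wait);
                return tag;
        }
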
    • blk-mq: bitmap tag: fix race on blk_mq_bitmap_tags::wake_cnt · 2971c35f
      Committed by Alexander Gordeev
      This piece of code in the bt_clear_tag() function is racy:
      
      	bs = bt_wake_ptr(bt);
      	if (bs && atomic_dec_and_test(&bs->wait_cnt)) {
      		atomic_set(&bs->wait_cnt, bt->wake_cnt);
       		wake_up(&bs->wait);
      	}
      
      Since nothing prevents bt_wake_ptr() from returning the very
      same 'bs' address on multiple CPUs, the following scenario is
      possible:
      
          CPU1                                CPU2
          ----                                ----
      
      0.  bs = bt_wake_ptr(bt);               bs = bt_wake_ptr(bt);
      1.  atomic_dec_and_test(&bs->wait_cnt)
      2.                                      atomic_dec_and_test(&bs->wait_cnt)
      3.  atomic_set(&bs->wait_cnt, bt->wake_cnt);
      
      If the decrement in [1] yields zero, then for some amount of time
      the decrement in [2] produces a negative/overflowed value, which is
      not expected. The follow-up assignment in [3] overwrites the invalid
      value with the batch value (and likely keeps the issue from being
      severe), but it is still incorrect and should be avoided.
      
      Cc: Ming Lei <tom.leiming@gmail.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
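      Illustrative sketch (not from the commit) of one way to close the window: key the
      refill and the wake-up off the return value of the decrement, so only the CPU
      that drives wait_cnt to zero performs them, and refill with atomic_add() so
      concurrent decrements are not lost.

        bs = bt_wake_ptr(bt);
        if (!bs)
                return;

        /* only the CPU that hits zero refills the batch and wakes waiters */
        if (atomic_dec_return(&bs->wait_cnt) == 0) {
                atomic_add(bt->wake_cnt, &bs->wait_cnt);
                wake_up(&bs->wait);
        }
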
    • blk-mq: bitmap tag: fix races on shared ::wake_index fields · 8537b120
      Committed by Alexander Gordeev
      Fix racy updates of shared blk_mq_bitmap_tags::wake_index
      and blk_mq_hw_ctx::wake_index fields.
      
      Cc: Ming Lei <tom.leiming@gmail.com>
      Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
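      Illustrative sketch (not from the commit) of advancing a shared wake_index with
      atomic_cmpxchg(), so concurrent updaters move it forward by at most one slot per
      observed value; BT_WAIT_QUEUES and the helper name are assumptions.

        static void bt_index_inc_sketch(atomic_t *index)
        {
                int old = atomic_read(index);
                int next = (old + 1) % BT_WAIT_QUEUES;  /* number of wait queues */

                /* succeeds for exactly one of the racing updaters */
                atomic_cmpxchg(index, old, next);
        }
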
  15. 04 Jun 2014, 1 commit
  16. 29 May 2014, 1 commit
  17. 28 May 2014, 1 commit
    • blk-mq: remove blk_mq_wait_for_tags · a3bd7756
      Committed by Christoph Hellwig
      The current logic for blocking tag allocation is rather confusing, as we
      first allocate and then free a tag again in blk_mq_wait_for_tags, just
      to attempt a non-blocking allocation and then repeat if someone else
      managed to grab the tag before us.
      
      Instead change blk_mq_alloc_request_pinned to simply do a blocking tag
      allocation itself and use the request we get back from it.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@fb.com>
  18. 24 May 2014, 1 commit
  19. 21 May 2014, 1 commit
    • blk-mq: allow changing of queue depth through sysfs · e3a2b3f9
      Committed by Jens Axboe
      For request_fn based devices, the block layer exports a 'nr_requests'
      file through sysfs to allow adjusting of queue depth on the fly.
      Currently this returns -EINVAL for blk-mq, since it's not wired up.
      Wire this up for blk-mq, so that it now also allows dynamic
      adjustment of the allowed queue depth for any given block device
      managed by blk-mq.
      Signed-off-by: Jens Axboe <axboe@fb.com>
  20. 20 May 2014, 1 commit
  21. 14 May 2014, 1 commit
    • blk-mq: improve support for shared tags maps · 0d2602ca
      Committed by Jens Axboe
      This adds support for active queue tracking, meaning that the
      blk-mq tagging maintains a count of active users of a tag set.
      This allows us to maintain a notion of fairness between users,
      so that we can distribute the tag depth evenly without starving
      some users while allowing others to try unfair deep queues.
      
      If sharing of a tag set is detected, each hardware queue will
      track the depth of its own queue. And if this exceeds the total
      depth divided by the number of active queues, the user is actively
      throttled down.
      
      The active queue count is done lazily to avoid bouncing that data
      between submitter and completer. Each hardware queue gets marked
      active when it allocates its first tag, and gets marked inactive
      when 1) the last tag is cleared, and 2) the queue timeout grace
      period has passed.
      Signed-off-by: Jens Axboe <axboe@fb.com>
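      Illustrative sketch (not from the commit) of the fairness check described above,
      in the spirit of hctx_may_queue(): once a tag set is shared, each active hardware
      queue is limited to roughly its even share of the total depth. Field and flag
      names mirror the blk-mq code of that era and are assumptions.

        static bool hctx_may_queue_sketch(struct blk_mq_hw_ctx *hctx,
                                          unsigned int total_depth)
        {
                unsigned int users, allowed;

                if (!(hctx->flags & BLK_MQ_F_TAG_SHARED))
                        return true;                    /* not shared: no throttling */

                users = atomic_read(&hctx->tags->active_queues);
                if (!users)
                        return true;

                /* an even slice per active queue, but never fewer than 4 tags */
                allowed = (total_depth + users - 1) / users;
                if (allowed < 4)
                        allowed = 4;

                return atomic_read(&hctx->nr_active) < allowed;
        }
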
  22. 11 May 2014, 4 commits
  23. 10 May 2014, 1 commit
    • blk-mq: use sparser tag layout for lower queue depth · 59d13bf5
      Committed by Jens Axboe
      For best performance, spreading tags over multiple cachelines
      makes the tagging more efficient on multicore systems. But since
      we have 8 * sizeof(unsigned long) tags per cacheline, we don't
      always get a nice spread.
      
      Attempt to spread the tags over at least 4 cachelines, using a
      smaller number of bits per unsigned long if we have to. This improves
      tagging performance in setups with 32-128 tags. For higher depths,
      the spread is the same as before (BITS_PER_LONG tags per cacheline).
      Signed-off-by: Jens Axboe <axboe@fb.com>
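      Illustrative sketch (not from the commit) of the sizing rule: shrink the number
      of tag bits kept in each per-cacheline word until a small tag space spans at
      least four words.

        static unsigned int tags_per_word(unsigned int depth)
        {
                unsigned int per_word = BITS_PER_LONG;

                /* halve the per-word count until the map covers >= 4 words */
                while (per_word > 1 && per_word * 4 > depth)
                        per_word /= 2;

                return per_word;        /* e.g. depth 32 -> 8 tags/word, 128 -> 32 */
        }
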
  24. 09 May 2014, 1 commit
    • blk-mq: implement new and more efficient tagging scheme · 4bb659b1
      Committed by Jens Axboe
      blk-mq currently uses percpu_ida for tag allocation. But that only
      works well if the ratio between tag space and number of CPUs is
      sufficiently high. For most devices and systems, that is not the
      case. The end result is that we either only utilize the tag space
      partially, or we end up attempting to fully exhaust it and run
      into lots of lock contention with stealing between CPUs. This is
      not optimal.
      
      This new tagging scheme is a hybrid bitmap allocator. It uses
      two tricks to both be SMP friendly and allow full exhaustion
      of the space:
      
      1) We cache the last allocated (or freed) tag on a per blk-mq
         software context basis. This allows us to limit the space
         we have to search. The key element here is not caching it
         in the shared tag structure, otherwise we end up dirtying
         more shared cache lines on each allocate/free operation.
      
      2) The tag space is split into cache line sized groups, and
         each context will start off randomly in that space. Even up
         to full utilization of the space, this divides the tag users
         efficiently into cache line groups, avoiding dirtying the same
         one both between allocators and between allocator and freer.
      
      This scheme shows drastically better behaviour, both on small
      tag spaces and on large ones. It has been tested extensively
      to show better performance for all the cases blk-mq cares about.
      Signed-off-by: Jens Axboe <axboe@fb.com>
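      Illustrative sketch (not from the commit) of the two tricks in miniature: a
      per-software-context hint that is never stored in the shared structure, plus a
      random, word-aligned starting group per context. All names are assumptions.

        #include <linux/bitops.h>
        #include <linux/random.h>

        struct ctx_hint {
                unsigned int last_tag;          /* cached per sw context, not shared */
        };

        static void ctx_hint_init(struct ctx_hint *hint, unsigned int depth)
        {
                /* trick 2: start each context in a random word-sized group */
                hint->last_tag = (prandom_u32() % depth) & ~(BITS_PER_LONG - 1);
        }

        static int hint_alloc_tag(unsigned long *map, unsigned int depth,
                                  struct ctx_hint *hint)
        {
                unsigned int tag = find_next_zero_bit(map, depth, hint->last_tag);

                if (tag >= depth)                       /* wrap around once */
                        tag = find_next_zero_bit(map, depth, 0);
                if (tag >= depth || test_and_set_bit(tag, map))
                        return -1;

                hint->last_tag = tag;                   /* trick 1: remember locally */
                return tag;
        }
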
  25. 30 Apr 2014, 1 commit
    • blk-mq: fix waiting for reserved tags · 5810d903
      Committed by Jens Axboe
      blk_mq_wait_for_tags() is only able to wait for "normal" tags,
      not reserved tags. Pass in which one we should attempt to get
      a tag for, so that waiting for reserved tags will work.
      
      Reserved tags are used for internal commands, which are usually
      serialized. Hence no waiting generally takes place, but we should
      ensure that it actually works if users need that functionality.
      Signed-off-by: Jens Axboe <axboe@fb.com>
  26. 16 Apr 2014, 1 commit
  27. 11 Feb 2014, 1 commit
  28. 24 Jan 2014, 1 commit
    • percpu_ida: Make percpu_ida_alloc + callers accept task state bitmask · 6f6b5d1e
      Committed by Kent Overstreet
      This patch changes percpu_ida_alloc() + callers to accept task state
      bitmask for prepare_to_wait() for code like target/iscsi that needs
      it for interruptible sleep, which is provided in a subsequent patch.
      
      It now expects TASK_UNINTERRUPTIBLE when the caller is able to sleep
      waiting for a new tag, or TASK_RUNNING when the caller cannot sleep,
      and is forced to return a negative value when no tags are available.
      
      v2 changes:
        - Include blk-mq + tcm_fc + vhost/scsi + target/iscsi changes
        - Drop signal_pending_state() call
      v3 changes:
        - Only call prepare_to_wait() + finish_wait() when != TASK_RUNNING
          (PeterZ)
      Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Kent Overstreet <kmo@daterainc.com>
      Cc: <stable@vger.kernel.org> #3.12+
      Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
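      Illustrative sketch (not from the commit) of how a caller picks the task state:
      TASK_RUNNING means "don't sleep, return a negative value if the pool is empty";
      TASK_UNINTERRUPTIBLE (or, for callers such as target/iscsi, TASK_INTERRUPTIBLE)
      means "sleep until a tag is freed". The helper around it is an assumption.

        #include <linux/percpu_ida.h>
        #include <linux/sched.h>

        static int grab_tag(struct percpu_ida *pool, bool can_sleep)
        {
                int tag;

                /* opportunistic, non-blocking attempt first */
                tag = percpu_ida_alloc(pool, TASK_RUNNING);
                if (tag >= 0 || !can_sleep)
                        return tag;                     /* may be negative: no tag */

                /* blocking attempt: sleep until a tag becomes available */
                return percpu_ida_alloc(pool, TASK_UNINTERRUPTIBLE);
        }
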
  29. 25 Oct 2013, 1 commit
    • blk-mq: new multi-queue block IO queueing mechanism · 320ae51f
      Committed by Jens Axboe
      Linux currently has two models for block devices:
      
      - The classic request_fn based approach, where drivers use struct
        request units for IO. The block layer provides various helper
        functionalities to let drivers share code, things like tag
        management, timeout handling, queueing, etc.
      
      - The "stacked" approach, where a driver squeezes in between the
        block layer and IO submitter. Since this bypasses the IO stack,
        drivers generally have to manage everything themselves.
      
      With drivers being written for new high IOPS devices, the classic
      request_fn based driver doesn't work well enough. The design dates
      back to when both SMP and high IOPS were rare. It has problems with
      scaling to bigger machines, and runs into scaling issues even on
      smaller machines when you have IOPS in the hundreds of thousands
      per device.
      
      The stacked approach is then most often selected as the model
      for the driver. But this means that everybody has to re-invent
      everything, and along with that we get all the problems again
      that the shared approach solved.
      
      This commit introduces blk-mq, block multi queue support. The
      design is centered around per-cpu queues for queueing IO, which
      then funnel down into x number of hardware submission queues.
      We might have a 1:1 mapping between the two, or it might be
      an N:M mapping. That all depends on what the hardware supports.
      
      blk-mq provides various helper functions, which include:
      
      - Scalable support for request tagging. Most devices need to
        be able to uniquely identify a request both in the driver and
        to the hardware. The tagging uses per-cpu caches for freed
        tags, to enable cache hot reuse.
      
      - Timeout handling without tracking requests on a per-device
        basis. Basically the driver should be able to get a notification
        if a request happens to fail.
      
      - Optional support for non 1:1 mappings between issue and
        submission queues. blk-mq can redirect IO completions to the
        desired location.
      
      - Support for per-request payloads. Drivers almost always need
        to associate a request structure with some driver private
        command structure. Drivers can tell blk-mq this at init time,
        and then any request handed to the driver will have the
        required size of memory associated with it.
      
      - Support for merging of IO, and plugging. The stacked model
        gets neither of these. Even for high IOPS devices, merging
        sequential IO reduces per-command overhead and thus
        increases bandwidth.
      
      For now, this is provided as a potential 3rd queueing model, with
      the hope being that, as it matures, it can replace both the classic
      and stacked model. That would get us back to having just 1 real
      model for block devices, leaving the stacked approach to dm/md
      devices (as it was originally intended).
      
      Contributions in this patch from the following people:
      
      Shaohua Li <shli@fusionio.com>
      Alexander Gordeev <agordeev@redhat.com>
      Christoph Hellwig <hch@infradead.org>
      Mike Christie <michaelc@cs.wisc.edu>
      Matias Bjorling <m@bjorling.me>
      Jeff Moyer <jmoyer@redhat.com>
      Acked-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
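      Illustrative sketch (not from the commit) of the shape of a minimal blk-mq
      driver, showing the per-request payload (cmd_size), the tag space per hardware
      queue, and the queue_rq hook. Note this uses the later blk_mq_tag_set and
      blk_status_t form of the interface rather than the struct blk_mq_reg API this
      commit originally introduced; treat the details as assumptions.

        #include <linux/blk-mq.h>

        struct my_cmd {                 /* per-request payload (cmd_size below) */
                int status;
        };

        static blk_status_t my_queue_rq(struct blk_mq_hw_ctx *hctx,
                                        const struct blk_mq_queue_data *bd)
        {
                struct request *rq = bd->rq;
                struct my_cmd *cmd = blk_mq_rq_to_pdu(rq);  /* driver-private data */

                blk_mq_start_request(rq);
                cmd->status = 0;
                /* ... hand the request to the hardware queue here ... */
                blk_mq_end_request(rq, BLK_STS_OK);         /* complete immediately */
                return BLK_STS_OK;
        }

        static const struct blk_mq_ops my_mq_ops = {
                .queue_rq = my_queue_rq,
        };

        static struct blk_mq_tag_set my_tag_set = {
                .ops            = &my_mq_ops,
                .nr_hw_queues   = 1,            /* sw-to-hw queue mapping handled by blk-mq */
                .queue_depth    = 64,           /* tag space per hardware queue */
                .cmd_size       = sizeof(struct my_cmd),
                .numa_node      = NUMA_NO_NODE,
                .flags          = BLK_MQ_F_SHOULD_MERGE,    /* enable merging/plugging */
        };

        /* blk_mq_alloc_tag_set(&my_tag_set) followed by
         * blk_mq_init_queue(&my_tag_set) would then create the request queue. */
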