1. 21 Mar 2014, 5 commits
    • rt,blk,mq: Make blk_mq_cpu_notify_lock a raw spinlock · 55c816e3
      Mike Galbraith committed
      [  365.164040] BUG: sleeping function called from invalid context at kernel/rtmutex.c:674
      [  365.164041] in_atomic(): 1, irqs_disabled(): 1, pid: 26, name: migration/1
      [  365.164043] no locks held by migration/1/26.
      [  365.164044] irq event stamp: 6648
      [  365.164056] hardirqs last  enabled at (6647): [<ffffffff8153d377>] restore_args+0x0/0x30
      [  365.164062] hardirqs last disabled at (6648): [<ffffffff810ed98d>] multi_cpu_stop+0x9d/0x120
      [  365.164070] softirqs last  enabled at (0): [<ffffffff810543bc>] copy_process.part.28+0x6fc/0x1920
      [  365.164072] softirqs last disabled at (0): [<          (null)>]           (null)
      [  365.164076] CPU: 1 PID: 26 Comm: migration/1 Tainted: GF           N  3.12.12-rt19-0.gcb6c4a2-rt #3
      [  365.164078] Hardware name: QCI QSSC-S4R/QSSC-S4R, BIOS QSSC-S4R.QCI.01.00.S013.032920111005 03/29/2011
      [  365.164091]  0000000000000001 ffff880a42ea7c30 ffffffff815367e6 ffffffff81a086c0
      [  365.164099]  ffff880a42ea7c40 ffffffff8108919c ffff880a42ea7c60 ffffffff8153c24f
      [  365.164107]  ffff880a42ea91f0 00000000ffffffe1 ffff880a42ea7c88 ffffffff81297ec0
      [  365.164108] Call Trace:
      [  365.164119]  [<ffffffff810060b1>] try_stack_unwind+0x191/0x1a0
      [  365.164127]  [<ffffffff81004872>] dump_trace+0x92/0x360
      [  365.164133]  [<ffffffff81006108>] show_trace_log_lvl+0x48/0x60
      [  365.164138]  [<ffffffff81004c18>] show_stack_log_lvl+0xd8/0x1d0
      [  365.164143]  [<ffffffff81006160>] show_stack+0x20/0x50
      [  365.164153]  [<ffffffff815367e6>] dump_stack+0x54/0x9a
      [  365.164163]  [<ffffffff8108919c>] __might_sleep+0xfc/0x140
      [  365.164173]  [<ffffffff8153c24f>] rt_spin_lock+0x1f/0x70
      [  365.164182]  [<ffffffff81297ec0>] blk_mq_main_cpu_notify+0x20/0x70
      [  365.164191]  [<ffffffff81540a1c>] notifier_call_chain+0x4c/0x70
      [  365.164201]  [<ffffffff81083499>] __raw_notifier_call_chain+0x9/0x10
      [  365.164207]  [<ffffffff810567be>] cpu_notify+0x1e/0x40
      [  365.164217]  [<ffffffff81525da2>] take_cpu_down+0x22/0x40
      [  365.164223]  [<ffffffff810ed9c6>] multi_cpu_stop+0xd6/0x120
      [  365.164229]  [<ffffffff810edd97>] cpu_stopper_thread+0xd7/0x1e0
      [  365.164235]  [<ffffffff810863a3>] smpboot_thread_fn+0x203/0x380
      [  365.164241]  [<ffffffff8107cbf8>] kthread+0xc8/0xd0
      [  365.164250]  [<ffffffff8154440c>] ret_from_fork+0x7c/0xb0
      [  365.164429] smpboot: CPU 1 is now offline
      Signed-off-by: Mike Galbraith <bitbucket@online.de>
      Signed-off-by: Jens Axboe <axboe@fb.com>
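      The fix converts the notifier lock to a raw spinlock, which stays a
      spinning lock even on PREEMPT_RT. A minimal sketch of the pattern,
      using names from the commit title; the body is illustrative, not the
      exact diff:

        #include <linux/spinlock.h>
        #include <linux/notifier.h>

        /* On PREEMPT_RT a spinlock_t becomes a sleeping rtmutex, which must
         * not be taken from the atomic CPU-hotplug notifier path shown in
         * the trace above.  A raw_spinlock_t keeps spinning semantics. */
        static DEFINE_RAW_SPINLOCK(blk_mq_cpu_notify_lock);

        static int blk_mq_main_cpu_notify(struct notifier_block *self,
                                          unsigned long action, void *hcpu)
        {
                raw_spin_lock(&blk_mq_cpu_notify_lock);  /* safe in atomic context */
                /* ... invoke the registered per-CPU notifiers ... */
                raw_spin_unlock(&blk_mq_cpu_notify_lock);
                return NOTIFY_OK;
        }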
    • blk-mq: support partial I/O completions · 7237c740
      Christoph Hellwig committed
      Add a new blk_mq_end_io_partial function to partially complete requests
      as needed by the SCSI layer.  We do this by reusing blk_update_request
      to advance the bio instead of having a simplified version of it in
      the blk-mq code.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@fb.com>
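      A hedged sketch of the shape such a helper can take, assuming
      blk_update_request()'s usual contract (it returns true while the
      request still has unfinished bytes); __blk_mq_end_io() stands in for
      the existing full-completion path and is an assumption, not the
      verbatim code:

        void blk_mq_end_io_partial(struct request *rq, int error,
                                   unsigned int nr_bytes)
        {
                /* advance the bio chain by nr_bytes; true means bytes remain */
                if (blk_update_request(rq, error, nr_bytes))
                        return;         /* partially done, request stays alive */

                __blk_mq_end_io(rq, error);     /* assumed full-completion helper */
        }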
    • blk-mq: merge blk_mq_insert_request and blk_mq_run_request · eeabc850
      Christoph Hellwig committed
      It's almost identical to blk_mq_insert_request, so fold the two into one
      slightly more generic function by making the flush special case a bit
      smarter.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@fb.com>
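      The merged entry point plausibly ends up with a single signature
      covering both callers; a sketch with assumed parameter names, the body
      given only in outline:

        void blk_mq_insert_request(struct request *rq, bool at_head,
                                   bool run_queue, bool async)
        {
                /* 1. look up the hw/sw queue for rq
                 * 2. flush/FUA requests take the flush machinery (the
                 *    "smarter" special case)
                 * 3. everything else is queued at head or tail
                 * 4. if (run_queue), kick the hw queue, sync or async */
        }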
    • blk-mq: remove blk_mq_alloc_rq · 081241e5
      Christoph Hellwig committed
      There's only one caller, which is a straight wrapper and fits the naming
      scheme of the related functions a lot better.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@fb.com>
    • blk-mq: don't dump CPU -> hw queue map on driver load · 676141e4
      Jens Axboe committed
      Now that we are out of initial debug/bringup mode, remove
      the verbose dump of the mapping table.
      
      Provide the mapping table in sysfs, under the hardware queue
      directory, in the cpu_list file.
      Signed-off-by: Jens Axboe <axboe@fb.com>
  2. 20 Mar 2014, 1 commit
    • blk-mq: fix wrong usage of hctx->state vs hctx->flags · 5d12f905
      Jens Axboe committed
      BLK_MQ_F_* flags are for hctx->flags, and are non-atomic and
      set at registration time. BLK_MQ_S_* flags are dynamic and
      atomic, and are accessed through hctx->state.
      
      Some of the BLK_MQ_S_STOPPED uses were wrong. Additionally,
      the header file should not use a bit shift for the _S_ flags,
      as they are accessed through the set/test_bit functions.
      Signed-off-by: Jens Axboe <axboe@fb.com>
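      The distinction in miniature (the values are illustrative examples in
      the spirit of the header, not its exact contents):

        /* hctx->flags: non-atomic masks, fixed at registration time */
        enum {
                BLK_MQ_F_SHOULD_MERGE = 1 << 0,
        };

        /* hctx->state: runtime bits, so these are bit *numbers* for the
         * set_bit()/test_bit() family, not pre-shifted masks */
        enum {
                BLK_MQ_S_STOPPED = 0,
        };

        static bool queue_stopped(struct blk_mq_hw_ctx *hctx)
        {
                return test_bit(BLK_MQ_S_STOPPED, &hctx->state); /* atomic */
        }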
  3. 15 Mar 2014, 1 commit
    • blk-mq: allow blk_mq_init_commands() to return failure · 95363efd
      Jens Axboe committed
      If drivers do dynamic allocation in the hardware command init
      path, then we need to be able to handle and return failures.
      
      And if they do allocations or mappings in the init command path,
      then we need a cleanup function to free up that space at exit
      time. So add blk_mq_free_commands() as the cleanup function.
      
      This is required for the mtip32xx driver conversion to blk-mq.
      Signed-off-by: Jens Axboe <axboe@fb.com>
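      A hedged driver-side sketch of what the pair enables; the hook
      signatures are approximated and the mydrv structures are made up:

        struct mydrv_cmd {
                void *dma_buf;
        };

        static int mydrv_init_cmd(void *data, struct request *rq)
        {
                struct mydrv_cmd *cmd = rq->special;

                cmd->dma_buf = kzalloc(4096, GFP_KERNEL);
                return cmd->dma_buf ? 0 : -ENOMEM;      /* failure now propagates */
        }

        /* paired cleanup, run for each command via blk_mq_free_commands() */
        static void mydrv_free_cmd(void *data, struct request *rq)
        {
                struct mydrv_cmd *cmd = rq->special;

                kfree(cmd->dma_buf);
        }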
  4. 13 Mar 2014, 1 commit
    • block: remove old blk_iopoll_enabled variable · 89f8b33c
      Jens Axboe committed
      This was a debugging measure to toggle enabled/disabled
      when testing. But for real production setups, it's not
      safe to toggle this setting without either reloading
      drivers or quiescing IO first, neither of which the toggle
      enforces.
      
      Additionally, it makes drivers deal with the conditional
      state.
      
      Remove it completely. It's up to the driver whether iopoll
      is enabled or not.
      Signed-off-by: Jens Axboe <axboe@fb.com>
  5. 06 Mar 2014, 1 commit
    • blktrace: fix accounting of partially completed requests · af5040da
      Roman Pen committed
      trace_block_rq_complete does not take into account that a request can
      be partially completed, so we can get the following incorrect output
      from blkparse:
      
        C   R 232 + 240 [0]
        C   R 240 + 232 [0]
        C   R 248 + 224 [0]
        C   R 256 + 216 [0]
      
      but should be:
      
        C   R 232 + 8 [0]
        C   R 240 + 8 [0]
        C   R 248 + 8 [0]
        C   R 256 + 8 [0]
      
      Also, the summary statistics of completed requests and the final
      throughput in the output will be incorrect.
      
      This patch takes the real completion size of the request into account
      and fixes the wrong completion accounting.
      Signed-off-by: Roman Pen <r.peniaev@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: linux-kernel@vger.kernel.org
      Cc: stable@kernel.org
      Signed-off-by: Jens Axboe <axboe@fb.com>
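      In essence, the completion path now tells the tracepoint how many
      bytes this step completed instead of letting it read the already
      advanced request; a hedged sketch (the wrapper is illustrative):

        static void trace_partial_completion(struct request_queue *q,
                                             struct request *rq,
                                             unsigned int nr_bytes)
        {
                /* nr_bytes is what blk_update_request() is completing right
                 * now; deriving the length from the request itself is what
                 * produced the bogus 240/232/224 sizes above */
                trace_block_rq_complete(q, rq, nr_bytes);
        }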
  6. 25 Feb 2014, 4 commits
    • smp: Rename __smp_call_function_single() to smp_call_function_single_async() · c46fff2a
      Frederic Weisbecker committed
      The name __smp_call_function_single() doesn't tell much about the
      properties of this function, especially when compared to
      smp_call_function_single().
      
      The comments above the implementation are also misleading. The main
      point of this function is actually not to be able to embed the csd
      in an object. This is actually a requirement that results from the
      purpose of this function, which is to raise an IPI asynchronously.
      
      As such it can be called with interrupts disabled. And this feature
      comes at the cost of the caller who then needs to serialize the
      IPIs on this csd.
      
      Let's rename the function and enhance the comments so that they reflect
      these properties.
      Suggested-by: Christoph Hellwig <hch@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Jens Axboe <axboe@fb.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
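      A usage sketch of the renamed API under its stated contract; the
      object and handler names are made up:

        #include <linux/smp.h>

        struct my_work {
                struct call_single_data csd;    /* embedded in the object */
                int payload;
        };

        static void my_ipi_handler(void *info)
        {
                struct my_work *w = info;

                pr_info("payload %d\n", w->payload);
        }

        /* May be called with IRQs disabled; in exchange, the caller must
         * not reuse w->csd until the previous IPI has run. */
        static void kick_cpu_async(int cpu, struct my_work *w)
        {
                w->csd.func = my_ipi_handler;
                w->csd.info = w;
                smp_call_function_single_async(cpu, &w->csd);
        }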
    • smp: Remove wait argument from __smp_call_function_single() · fce8ad15
      Frederic Weisbecker committed
      The main point of calling __smp_call_function_single() is to send
      an IPI in a purely asynchronous way. By embedding a csd in an object,
      a caller can send the IPI without waiting for a previous one to complete,
      as is required by smp_call_function_single() for example. As such,
      sending this kind of IPI can be safe even when irqs are disabled.
      
      This flexibility comes at the expense of the caller, who then needs to
      synchronize the csd lifecycle on its own and make sure that IPIs on a
      single csd are serialized.
      
      This is how __smp_call_function_single() works when wait = 0, and this
      use case is relevant.
      
      Now there doesn't seem to be any use case with wait = 1 that can't be
      covered by smp_call_function_single() instead, which is safer. Let's look
      at the two possible scenarios:
      
      1) The user calls __smp_call_function_single(wait = 1) on a csd embedded
         in an object. It looks like a nice and convenient pattern at first
         sight because we can then retrieve the object from the IPI handler easily.
      
         But actually it is a waste of memory space in the object since the csd
         can be allocated from the stack by smp_call_function_single(wait = 1)
         and the object can be passed as the IPI argument.
      
         Besides that, embedding the csd in an object is more error prone
         because the caller must take care of the serialization of the IPIs
         for this csd.
      
      2) The user calls __smp_call_function_single(wait = 1) on a csd that
         is allocated on the stack. It's ok, but smp_call_function_single()
         can do it as well, and it already takes care of the allocation on the
         stack. Again it's simpler and less error prone.
      
      Therefore, using the underscore-prefixed API version with wait = 1
      is a bad pattern and a sign that the caller can do something safer and
      simpler.
      
      There was a single user of that, which has just been converted.
      So let's remove this option to discourage further users.
      
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Jens Axboe <axboe@fb.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
    • block: Stop abusing rq->csd.list in blk-softirq · 6d113398
      Jan Kara committed
      Abusing rq->csd.list for a list of requests to complete is rather ugly.
      We use rq->queuelist instead, which is much cleaner. It is safe because
      queuelist is used by the block layer only for requests waiting to be
      submitted to a device. Thus it is unused by the time the IRQ reports
      that the request IO is finished.
      Signed-off-by: Jan Kara <jack@suse.cz>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Jens Axboe <axboe@fb.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
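      A hedged sketch of the blk-softirq pattern after the change; the
      per-cpu list name follows the file, the helper is illustrative:

        static DEFINE_PER_CPU(struct list_head, blk_cpu_done);

        static void queue_for_softirq_completion(struct request *rq)
        {
                unsigned long flags;

                local_irq_save(flags);
                /* rq->queuelist is free once the request has left the
                 * submission queues, so it can carry the completion list
                 * instead of punning rq->csd.list */
                list_add_tail(&rq->queuelist, this_cpu_ptr(&blk_cpu_done));
                raise_softirq_irqoff(BLOCK_SOFTIRQ);
                local_irq_restore(flags);
        }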
    • block: Stop abusing csd.list for fifo_time · 8b4922d3
      Jan Kara committed
      The block layer currently abuses rq->csd.list.next for storing fifo_time.
      That is a terrible hack and completely unnecessary as well. A union
      achieves the same space saving in a cleaner way.
      Signed-off-by: Jan Kara <jack@suse.cz>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Jens Axboe <axboe@fb.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
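      The two members are never live at the same time: csd is used only
      while completing, fifo_time only while the request sits in the
      elevator. Sketched, with unrelated fields abridged:

        struct request {
                /* ... */
                union {
                        struct call_single_data csd;    /* completion IPI */
                        unsigned long fifo_time;        /* elevator deadline */
                };
                /* ... */
        };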
  7. 19 Feb 2014, 2 commits
  8. 13 Feb 2014, 1 commit
  9. 12 Feb 2014, 2 commits
  10. 11 Feb 2014, 3 commits
    • block: Fix type mismatch in ssize_t_blk_mq_tag_sysfs_show · 11c94444
      Masanari Iida committed
      cppcheck detected the following format string mismatch:
      [blk-mq-tag.c:201]: (warning) %u in format string (no. 1) requires
      'unsigned int' but the argument type is 'int'.
      
      Change "cpu" from int to unsigned int, because the cpu number can
      never be negative.
      Signed-off-by: Masanari Iida <standby24x7@gmail.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
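      The warning class in miniature:

        static ssize_t show_cpu(char *page)
        {
                unsigned int cpu = 3;   /* was int: %u requires unsigned int */

                return sprintf(page, "cpu %u: ", cpu);
        }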
    • blk-mq: rework flush sequencing logic · 18741986
      Christoph Hellwig committed
      Switch to using a preallocated flush_rq for blk-mq, similar to what's done
      with the old request path.  This allows us to set up the request properly
      with a tag from the actually allowed range and ->rq_disk as needed by
      some drivers.  To make life easier we also switch to dynamic allocation
      of ->flush_rq for the old path.
      
      This effectively reverts most of
      
          "blk-mq: fix for flush deadlock"
      
      and
      
          "blk-mq: Don't reserve a tag for flush request"
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@fb.com>
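      For the old path, the dynamic allocation amounts to something like
      the following sketch (the helper name and placement are illustrative):

        static int blk_alloc_flush_rq(struct request_queue *q)
        {
                /* ->flush_rq becomes a preallocated request instead of
                 * being carved out of the driver's tag space at flush time */
                q->flush_rq = kzalloc(sizeof(struct request), GFP_KERNEL);
                return q->flush_rq ? 0 : -ENOMEM;
        }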
    • blk-mq: rework I/O completions · 30a91cb4
      Christoph Hellwig committed
      Rework I/O completions to work more like the old code path.  blk_mq_end_io
      now stays out of the business of deferring completions to other CPUs
      and calling blk_mark_rq_complete.  The latter is very important to allow
      completing requests that have timed out and thus are already marked complete;
      the former allows using the IPI callout even for driver-specific completions
      instead of having to reimplement them.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@fb.com>
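      The resulting split, sketched from the description (the inner helper
      name is an assumption):

        /* The outer helper owns blk_mark_rq_complete(), so a request whose
         * timeout handler already claimed it is not completed twice; the
         * inner helper owns the IPI callout to the submitting CPU. */
        void blk_mq_complete_request(struct request *rq)
        {
                if (!blk_mark_rq_complete(rq))
                        __blk_mq_complete_request(rq);
        }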
  11. 08 Feb 2014, 6 commits
  12. 31 Jan 2014, 2 commits
    • block: __elv_next_request() shouldn't call into the elevator if bypassing · 556ee818
      Tejun Heo committed
      request_queue bypassing is used to suppress the higher-level functions of a
      request_queue so that they can be switched, reconfigured and shut
      down.  A request_queue does the following while bypassing:
      
      * bypasses elevator and io_cq association and queues requests directly
        to the FIFO dispatch queue.
      
      * bypasses block cgroup request_list lookup and always uses the root
        request_list.
      
      Once confirmed to be bypassing, specific elevator and block cgroup
      policy implementations can assume that nothing is in flight for them
      and perform various operations which would be dangerous otherwise.
      
      Such confirmation is achieved by short-circuiting all new requests
      directly to the dispatch queue and waiting for all the requests which
      were issued before to finish.  Unfortunately, while the request
      allocating and draining sides were properly handled, we forgot to
      actually plug the request dispatch path.  Even after bypassing mode is
      confirmed, if the attached driver tries to fetch a request and the
      dispatch queue is empty, __elv_next_request() would invoke the current
      elevator's elevator_dispatch_fn() callback.  As all in-flight requests
      were drained, the elevator wouldn't contain any requests, but once
      bypass is confirmed we don't even know whether the elevator is still
      there.  It might be in the process of being switched and half torn
      down.
      
      Frank Mayhar reports that this actually happened while switching
      elevators, leading to an oops.
      
      Let's fix it by making __elv_next_request() avoid invoking the
      elevator_dispatch_fn() callback if the queue is bypassing.  It already
      avoids invoking the callback if the queue is dying.  As a dying queue
      is guaranteed to be bypassing, we can simply replace blk_queue_dying()
      check with blk_queue_bypass().
      Reported-by: Frank Mayhar <fmayhar@google.com>
      References: http://lkml.kernel.org/g/1390319905.20232.38.camel@bobble.lax.corp.google.com
      Cc: stable@vger.kernel.org
      Tested-by: Frank Mayhar <fmayhar@google.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
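      The fix is essentially a one-test substitution in the dispatch path;
      a hedged sketch of the surrounding logic:

        static struct request *elv_dispatch_or_bail(struct request_queue *q)
        {
                /* dying implies bypassing, so this test subsumes the old
                 * blk_queue_dying() check and never calls into an elevator
                 * that may be mid-switch */
                if (unlikely(blk_queue_bypass(q)) ||
                    !q->elevator->type->ops.elevator_dispatch_fn(q, 0))
                        return NULL;

                return list_entry_rq(q->queue_head.next);
        }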
    • blk-mq: Don't reserve a tag for flush request · f0276924
      Shaohua Li committed
      Reserving a tag (request) for flush to avoid deadlock is overkill. A
      tag is a valuable resource. We can track the number of flush requests and
      disallow having too many pending flush requests allocated. With this
      patch, blk_mq_alloc_request_pinned() could do a busy nop (but not a dead
      loop) if too many pending requests are allocated and a new flush request
      is allocated. But this should not be a problem; too many pending flush
      requests is a very rare case.
      
      I verified this can fix the deadlock caused by too many pending flush
      requests.
      Signed-off-by: Shaohua Li <shli@fusionio.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
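      The tracking idea, sketched with assumed names (the counter and the
      bound are illustrative):

        #define MYDRV_MAX_PENDING_FLUSH 8       /* assumed bound */

        /* allow a flush allocation only while the pending count is low;
         * on refusal the caller retries (a busy nop, not a dead loop) */
        static bool may_alloc_flush(atomic_t *pending_flush)
        {
                if (atomic_inc_return(pending_flush) <= MYDRV_MAX_PENDING_FLUSH)
                        return true;

                atomic_dec(pending_flush);
                return false;
        }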
  13. 29 Jan 2014, 1 commit
  14. 24 Jan 2014, 1 commit
    • percpu_ida: Make percpu_ida_alloc + callers accept task state bitmask · 6f6b5d1e
      Kent Overstreet committed
      This patch changes percpu_ida_alloc() + callers to accept a task state
      bitmask for prepare_to_wait(), for code like target/iscsi that needs
      it for interruptible sleep, which is provided in a subsequent patch.
      
      It now expects TASK_UNINTERRUPTIBLE when the caller is able to sleep
      waiting for a new tag, or TASK_RUNNING when the caller cannot sleep,
      and is forced to return a negative value when no tags are available.
      
      v2 changes:
        - Include blk-mq + tcm_fc + vhost/scsi + target/iscsi changes
        - Drop signal_pending_state() call
      v3 changes:
        - Only call prepare_to_wait() + finish_wait() when != TASK_RUNNING
          (PeterZ)
      Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Kent Overstreet <kmo@daterainc.com>
      Cc: <stable@vger.kernel.org> #3.12+
      Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
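      A usage sketch of the new contract:

        #include <linux/percpu_ida.h>
        #include <linux/sched.h>

        /* TASK_RUNNING: never sleeps, returns a negative value when the
         * pool is empty.  TASK_UNINTERRUPTIBLE would sleep for a tag. */
        static int get_tag_atomic(struct percpu_ida *pool)
        {
                int tag = percpu_ida_alloc(pool, TASK_RUNNING);

                return tag < 0 ? -EBUSY : tag;
        }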
  15. 22 Jan 2014, 2 commits
  16. 09 Jan 2014, 3 commits
  17. 04 Jan 2014, 1 commit
  18. 01 Jan 2014, 3 commits