1. 03 Nov 2016, 2 commits
    • blk-mq: Add a kick_requeue_list argument to blk_mq_requeue_request() · 2b053aca
      Committed by Bart Van Assche
      Most blk_mq_requeue_request() and blk_mq_add_to_requeue_list() calls
      are followed by kicking the requeue list. Hence add an argument to
      these two functions that allows the caller to kick the requeue list
      in the same call. This was proposed by Christoph Hellwig. A brief
      illustrative sketch of the resulting call pattern follows this entry.
      Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
      Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Cc: Hannes Reinecke <hare@suse.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      2b053aca
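      A minimal illustrative sketch (not part of the patch) of the resulting
      call pattern, assuming the post-change prototype
      blk_mq_requeue_request(struct request *rq, bool kick_requeue_list);
      the mydrv_* helper is hypothetical:

          #include <linux/blkdev.h>
          #include <linux/blk-mq.h>

          /* Hypothetical driver path that requeues a request after a transient error. */
          static void mydrv_requeue_on_busy(struct request *rq)
          {
                  /*
                   * Before this change two calls were needed:
                   *   blk_mq_requeue_request(rq);
                   *   blk_mq_kick_requeue_list(rq->q);
                   * Now the requeue-list kick can be requested in the same call.
                   */
                  blk_mq_requeue_request(rq, true);
          }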
    • blk-mq: Avoid that requeueing starts stopped queues · 52d7f1b5
      Committed by Bart Van Assche
      Since blk_mq_requeue_work() starts stopped queues, and since execution
      of this function can be scheduled after a queue has been stopped, it is
      not possible to stop queues without using an additional state variable
      to track whether or not the queue has been stopped. Hence modify
      blk_mq_requeue_work() such that it does not start stopped queues. My
      conclusion after a review of the blk_mq_stop_hw_queues() and
      blk_mq_{delay_,}kick_requeue_list() callers is as follows (a brief
      sketch of the driver-side consequence follows this entry):
      * In the dm driver, starting and stopping queues should only happen
        if __dm_suspend() or __dm_resume() is called, not when the
        requeue list is processed.
      * In the SCSI core, queue stopping and starting should only be
        performed by the scsi_internal_device_block() and
        scsi_internal_device_unblock() functions, and not by any other
        function. Although the blk_mq_stop_hw_queue() call in
        scsi_queue_rq() may help to reduce CPU load if an LLD queue is
        full, figuring out whether or not a queue should be restarted
        when requeueing a command would require introducing additional
        locking in scsi_mq_requeue_cmd() to avoid a race with
        scsi_internal_device_block(). Avoid this complexity by removing
        the blk_mq_stop_hw_queue() call from scsi_queue_rq().
      * In the NVMe core, only the functions that call
        blk_mq_start_stopped_hw_queues() explicitly should start stopped
        queues.
      * A blk_mq_start_stopped_hw_queues() call must be added to the
        xen-blkfront driver in its blkif_recover() function.
      Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Roger Pau Monné <roger.pau@citrix.com>
      Cc: Mike Snitzer <snitzer@redhat.com>
      Cc: James Bottomley <jejb@linux.vnet.ibm.com>
      Cc: Martin K. Petersen <martin.petersen@oracle.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      52d7f1b5
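      A minimal illustrative sketch (not the xen-blkfront patch itself) of the
      driver-side consequence, assuming the in-tree prototypes of
      blk_mq_start_stopped_hw_queues() and blk_mq_kick_requeue_list();
      mydrv_recover() is hypothetical:

          #include <linux/blkdev.h>
          #include <linux/blk-mq.h>

          /*
           * Hypothetical recovery path: after this change, requeue processing
           * no longer restarts stopped hardware queues, so a driver that
           * stopped them must restart them explicitly when it recovers.
           */
          static void mydrv_recover(struct request_queue *q)
          {
                  /* Re-enable dispatch explicitly ... */
                  blk_mq_start_stopped_hw_queues(q, true);

                  /* ... then make sure requeued requests actually get processed. */
                  blk_mq_kick_requeue_list(q);
          }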
  2. 28 Oct 2016, 1 commit
  3. 15 Sep 2016, 1 commit
  4. 23 May 2016, 1 commit
  5. 11 May 2016, 1 commit
  6. 16 Apr 2016, 5 commits
  7. 27 Feb 2016, 1 commit
  8. 24 Feb 2016, 1 commit
  9. 03 Dec 2015, 2 commits
  10. 07 Nov 2015, 1 commit
  11. 01 Oct 2015, 1 commit
  12. 29 Aug 2015, 1 commit
  13. 26 Aug 2015, 1 commit
  14. 31 Jul 2015, 2 commits
  15. 09 Apr 2015, 1 commit
    • Defer processing of REQ_PREEMPT requests for blocked devices · bba0bdd7
      Committed by Bart Van Assche
      SCSI transport drivers and SCSI LLDs block a SCSI device if the
      transport layer is not operational. This means that in this state
      no requests should be processed, even if the REQ_PREEMPT flag has
      been set. This patch prevents a rescan shortly after a cable pull
      from sporadically triggering the following kernel oops (a simplified
      sketch of the tightened state check follows this entry):
      
      BUG: unable to handle kernel paging request at ffffc9001a6bc084
      IP: [<ffffffffa04e08f2>] mlx4_ib_post_send+0xd2/0xb30 [mlx4_ib]
      Process rescan-scsi-bus (pid: 9241, threadinfo ffff88053484a000, task ffff880534aae100)
      Call Trace:
       [<ffffffffa0718135>] srp_post_send+0x65/0x70 [ib_srp]
       [<ffffffffa071b9df>] srp_queuecommand+0x1cf/0x3e0 [ib_srp]
       [<ffffffffa0001ff1>] scsi_dispatch_cmd+0x101/0x280 [scsi_mod]
       [<ffffffffa0009ad1>] scsi_request_fn+0x411/0x4d0 [scsi_mod]
       [<ffffffff81223b37>] __blk_run_queue+0x27/0x30
       [<ffffffff8122a8d2>] blk_execute_rq_nowait+0x82/0x110
       [<ffffffff8122a9c2>] blk_execute_rq+0x62/0xf0
       [<ffffffffa000b0e8>] scsi_execute+0xe8/0x190 [scsi_mod]
       [<ffffffffa000b2f3>] scsi_execute_req+0xa3/0x130 [scsi_mod]
       [<ffffffffa000c1aa>] scsi_probe_lun+0x17a/0x450 [scsi_mod]
       [<ffffffffa000ce86>] scsi_probe_and_add_lun+0x156/0x480 [scsi_mod]
       [<ffffffffa000dc2f>] __scsi_scan_target+0xdf/0x1f0 [scsi_mod]
       [<ffffffffa000dfa3>] scsi_scan_host_selected+0x183/0x1c0 [scsi_mod]
       [<ffffffffa000edfb>] scsi_scan+0xdb/0xe0 [scsi_mod]
       [<ffffffffa000ee13>] store_scan+0x13/0x20 [scsi_mod]
       [<ffffffff811c8d9b>] sysfs_write_file+0xcb/0x160
       [<ffffffff811589de>] vfs_write+0xce/0x140
       [<ffffffff81158b53>] sys_write+0x53/0xa0
       [<ffffffff81464592>] system_call_fastpath+0x16/0x1b
       [<00007f611c9d9300>] 0x7f611c9d92ff
      Reported-by: Max Gurtuvoy <maxg@mellanox.com>
      Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
      Reviewed-by: Mike Christie <michaelc@cs.wisc.edu>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: James Bottomley <JBottomley@Odin.com>
      bba0bdd7
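      A simplified sketch (not the verbatim diff) of the kind of device-state
      check this patch tightens, assuming the SDEV_* states, the REQ_PREEMPT
      flag and the BLKPREP_* return values of that era's SCSI/block headers;
      example_prep_state_check() is a hypothetical stand-in:

          #include <linux/blkdev.h>
          #include <scsi/scsi_device.h>

          /* Hypothetical stand-in for the prep-time device-state check. */
          static int example_prep_state_check(struct scsi_device *sdev,
                                              struct request *req)
          {
                  switch (sdev->sdev_state) {
                  case SDEV_QUIESCE:
                          /* A quiesced device still accepts REQ_PREEMPT requests. */
                          return (req->cmd_flags & REQ_PREEMPT) ?
                                  BLKPREP_OK : BLKPREP_DEFER;
                  case SDEV_BLOCK:
                  case SDEV_CREATED_BLOCK:
                          /* Blocked device: defer everything, REQ_PREEMPT included. */
                          return BLKPREP_DEFER;
                  default:
                          return BLKPREP_OK;
                  }
          }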
  16. 24 Jan 2015, 1 commit
    • blk-mq: add tag allocation policy · 24391c0d
      Committed by Shaohua Li
      This is the blk-mq part of tag allocation policy support. The default
      allocation policy isn't changed (though it's not a strict FIFO). The
      new policy is round-robin, intended for libata. It is a best-effort
      implementation: if multiple tasks are competing, the returned tags
      will be interleaved, which is unavoidable even on the non-mq path,
      as requests from different tasks can be interleaved in the queue.
      A hedged sketch of how a blk-mq user selects the policy follows this
      entry.
      
      Cc: Jens Axboe <axboe@fb.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Christoph Hellwig <hch@infradead.org>
      Signed-off-by: Shaohua Li <shli@fb.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      24391c0d
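      A hedged sketch of how a blk-mq user would opt into round-robin tags,
      assuming the BLK_TAG_ALLOC_* values and the BLK_ALLOC_POLICY_TO_MQ_FLAG()
      helper introduced alongside this work; mydrv_init_tag_set() is
      hypothetical:

          #include <linux/blkdev.h>
          #include <linux/blk-mq.h>

          /* Hypothetical tag_set setup requesting the round-robin policy. */
          static void mydrv_init_tag_set(struct blk_mq_tag_set *set)
          {
                  /* Without this flag the default (roughly FIFO) policy is kept. */
                  set->flags |= BLK_ALLOC_POLICY_TO_MQ_FLAG(BLK_TAG_ALLOC_RR);
          }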
  17. 20 Jan 2015, 1 commit
  18. 09 Jan 2015, 1 commit
  19. 15 Dec 2014, 1 commit
  20. 25 Nov 2014, 3 commits
  21. 12 Nov 2014, 7 commits
  22. 30 Oct 2014, 1 commit
    • blk-mq: add a 'list' parameter to ->queue_rq() · 74c45052
      Committed by Jens Axboe
      Since we have the notion of a 'last' request in a chain, we can use
      this to have the hardware optimize the issuing of requests. Add a
      list_head parameter to ->queue_rq() that the driver can use to
      temporarily store hardware commands for issue when 'last' is true.
      If we are doing a chain of requests, pass in a NULL list for the
      first request to force immediate issue, then batch the remainder for
      deferred issue until the last request has been sent.

      Instead of adding yet another argument to the hot ->queue_rq() path,
      encapsulate the passed arguments in a blk_mq_queue_data structure
      that is passed by const pointer; this has been measured as faster
      than passing 4 (or even 3) arguments through ->queue_rq(). Update
      drivers for the new ->queue_rq() prototype. There are no functional
      changes for drivers in this patch: if they don't use the passed-in
      list, they simply queue requests individually as before. A brief
      sketch of the updated prototype follows this entry.
      Signed-off-by: Jens Axboe <axboe@fb.com>
      74c45052
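      A brief sketch of the updated ->queue_rq() prototype, not taken from any
      particular driver; mydrv_queue_rq(), mydrv_ring_doorbell() and the use of
      hctx->driver_data are assumptions for illustration:

          #include <linux/blkdev.h>
          #include <linux/blk-mq.h>

          /* Hypothetical doorbell helper: tell the device new commands are queued. */
          static void mydrv_ring_doorbell(void *hw)
          {
          }

          static int mydrv_queue_rq(struct blk_mq_hw_ctx *hctx,
                                    const struct blk_mq_queue_data *bd)
          {
                  struct request *rq = bd->rq;

                  blk_mq_start_request(rq);

                  /* ... translate rq into a hardware command and queue it ... */

                  /*
                   * bd->last is false while more requests in the chain will
                   * follow immediately, so the doorbell kick can be deferred
                   * until the last request; drivers that ignore bd->list and
                   * bd->last simply behave as before.
                   */
                  if (bd->last)
                          mydrv_ring_doorbell(hctx->driver_data);

                  return BLK_MQ_RQ_QUEUE_OK;
          }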
  23. 28 Oct 2014, 1 commit
  24. 23 Sep 2014, 2 commits