1. 06 Jun, 2015 (1 commit)
    • NVMe: add sysfs and ioctl controller reset · 4cc06521
      Committed by Keith Busch
      We need the ability to perform an nvme controller reset as discussed on
      the mailing list thread:
      
        http://lists.infradead.org/pipermail/linux-nvme/2015-March/001585.html
      
      This adds a sysfs entry that, when written to, performs an NVMe
      controller reset, provided the controller was successfully initialized
      in the first place.
      
      This also adds locking around resetting the device in the async probe
      method so the driver can't schedule two resets.
      Signed-off-by: Keith Busch <keith.busch@intel.com>
      Cc: Brandon Schultz <brandon.schulz@hgst.com>
      Cc: David Sariel <david.sariel@pmcs.com>
      
      Updated by Jens to:
      
      1) Merge this with the ioctl reset patch from David Sariel. The ioctl
         path now shares the reset code from the sysfs path.
      
      2) Don't flush work if we fail issuing the reset.
      Signed-off-by: Jens Axboe <axboe@fb.com>
      4cc06521
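      Both reset paths described above can be exercised from userspace. A
      minimal sketch, assuming the sysfs attribute is named
      reset_controller (the exact sysfs path depends on the kernel version
      and PCI topology) and that nvme-cli is installed:

      ```shell
      # Trigger a reset via the sysfs attribute; requires root.
      # The path shown is an assumption -- it varies with kernel
      # version and PCI topology.
      echo 1 > /sys/class/nvme/nvme0/reset_controller

      # Exercise the ioctl path via nvme-cli, which issues
      # NVME_IOCTL_RESET against the controller character device.
      nvme reset /dev/nvme0
      ```

      Per the commit, a write to the sysfs entry is a no-op unless the
      controller was successfully initialized in the first place.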
  2. 02 Jun, 2015 (3 commits)
    • null_blk: restart request processing on completion handler · 8b70f45e
      Committed by Akinobu Mita
      When irqmode=2 (IRQ completion handler is a timer) and queue_mode=1
      (block interface in use is rq), the completion handler should restart
      request handling for any pending requests on a queue, because request
      processing stops once the number of queued commands exceeds
      hw_queue_depth (null_rq_prep_fn returns BLKPREP_DEFER).
      
      Without this change, the following command cannot finish.
      
      	# modprobe null_blk irqmode=2 queue_mode=1 hw_queue_depth=1
      	# fio --name=t --rw=read --size=1g --direct=1 \
      	  --ioengine=libaio --iodepth=64 --filename=/dev/nullb0
      Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
      Cc: Jens Axboe <axboe@fb.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      8b70f45e
    • null_blk: prevent timer handler running on a different CPU where started · 419c21a3
      Committed by Akinobu Mita
      When irqmode=2 (IRQ completion handler is a timer), the timer handler
      should be called on the same CPU where the timer was started.
      
      Since completion_queues are per-CPU and the completion handler only
      touches the completion_queue of the local CPU, we need to prevent the
      handler from running on a CPU other than the one where the timer was
      started.  Otherwise, the IO cannot be completed until another
      completion handler is executed on that CPU.
      Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
      Cc: Jens Axboe <axboe@fb.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      419c21a3
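      The per-CPU behavior described above can be observed by pinning the
      submitter to a single CPU. A hedged sketch: the module parameters come
      from the commit text, while the taskset pinning and fio job are
      illustrative assumptions, not part of the patch:

      ```shell
      # Load null_blk with timer-based completions (irqmode=2),
      # as in the commits above.
      modprobe null_blk irqmode=2 queue_mode=1 hw_queue_depth=1

      # Pin fio to CPU 0 so that submissions -- and, with this fix,
      # the timer completion handler -- both run on CPU 0.
      taskset -c 0 fio --name=t --rw=read --size=64m --direct=1 \
          --ioengine=libaio --iodepth=64 --filename=/dev/nullb0
      ```

      Without the fix, a completion handler firing on another CPU would
      touch the wrong per-cpu completion_queue and the IO could stall.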
    • NVMe: Remove hctx reliance for multi-namespace · 42483228
      Committed by Keith Busch
      The driver needs to track shared tags to support multiple namespaces
      that may be dynamically allocated or deleted.  Relying on the first
      request_queue's hctx is not appropriate, as we cannot clear outstanding
      tags for all namespaces using this handle, nor can the driver easily
      track every request_queue's hctx as namespaces are attached and
      detached.  Instead, this patch gets the shared tag resources from the
      nvme_dev's tagset rather than through a request_queue hctx.
      Signed-off-by: Keith Busch <keith.busch@intel.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      42483228
  3. 30 May, 2015 (2 commits)
  4. 23 May, 2015 (1 commit)
  5. 22 May, 2015 (9 commits)
  6. 20 May, 2015 (4 commits)
  7. 19 May, 2015 (3 commits)
  8. 06 May, 2015 (4 commits)
  9. 02 May, 2015 (1 commit)
    • rbd: end I/O the entire obj_request on error · 082a75da
      Committed by Ilya Dryomov
      When we end an I/O struct request with an error, we need to pass
      obj_request->length as @nr_bytes so that the entire obj_request worth
      of bytes is completed.  Otherwise the block layer ends up confused and
      we trip on
      
          rbd_assert(more ^ (which == img_request->obj_request_count));
      
      in rbd_img_obj_callback() due to more being true no matter what.  We
      already do it in most cases but we are missing some, in particular
      those where we don't even get a chance to submit any obj_requests, due
      to an early -ENOMEM for example.
      
      A number of obj_request->xferred assignments seem to be redundant but
      I haven't touched any of obj_request->xferred stuff to keep this small
      and isolated.
      
      Cc: Alex Elder <elder@linaro.org>
      Cc: stable@vger.kernel.org # 3.10+
      Reported-by: Shawn Edwards <lesser.evil@gmail.com>
      Reviewed-by: Sage Weil <sage@redhat.com>
      Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
      082a75da
  10. 22 Apr, 2015 (1 commit)
  11. 20 Apr, 2015 (2 commits)
  12. 16 Apr, 2015 (9 commits)