1. 24 Jun, 2015: 1 commit
  2. 21 Jun, 2015: 1 commit
  3. 20 Jun, 2015: 1 commit
  4. 17 Jun, 2015: 1 commit
  5. 16 Jun, 2015: 8 commits
  6. 11 Jun, 2015: 2 commits
  7. 06 Jun, 2015: 3 commits
  8. 02 Jun, 2015: 3 commits
    • null_blk: restart request processing on completion handler · 8b70f45e
      Committed by Akinobu Mita
      When irqmode=2 (the IRQ completion handler is a timer) and queue_mode=1
      (the block interface used is rq), the completion handler should restart
      request handling for any requests still pending on the queue, because
      request processing stops once more than hw_queue_depth commands are
      queued (null_rq_prep_fn returns BLKPREP_DEFER).
      
      Without this change, the following command cannot finish.
      
      	# modprobe null_blk irqmode=2 queue_mode=1 hw_queue_depth=1
      	# fio --name=t --rw=read --size=1g --direct=1 \
      	  --ioengine=libaio --iodepth=64 --filename=/dev/nullb0
      Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
      Cc: Jens Axboe <axboe@fb.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      8b70f45e
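      A minimal sketch of the restart idea for the legacy rq path, assuming
      the null_rq_prep_fn()/end_cmd() structure of drivers/block/null_blk.c
      around v4.1 (helper names such as alloc_cmd/free_cmd come from that
      driver; this illustrates the approach, it is not the verbatim patch):

      	/* prep_fn runs with q->queue_lock held: stop the queue when all
      	 * hw_queue_depth commands are already in flight. */
      	static int null_rq_prep_fn(struct request_queue *q, struct request *req)
      	{
      		struct nullb *nullb = q->queuedata;
      		struct nullb_cmd *cmd = alloc_cmd(nullb_to_queue(nullb), 0);

      		if (cmd) {
      			cmd->rq = req;
      			req->special = cmd;
      			return BLKPREP_OK;
      		}

      		blk_stop_queue(q);	/* restarted from the completion handler */
      		return BLKPREP_DEFER;
      	}

      	/* Timer-driven completion: after freeing the command, restart the
      	 * queue so requests deferred with BLKPREP_DEFER are dispatched again. */
      	static void end_cmd(struct nullb_cmd *cmd)
      	{
      		struct request_queue *q = cmd->rq ? cmd->rq->q : NULL;
      		unsigned long flags;

      		INIT_LIST_HEAD(&cmd->rq->queuelist);
      		blk_end_request_all(cmd->rq, 0);
      		free_cmd(cmd);

      		if (q && blk_queue_stopped(q)) {
      			spin_lock_irqsave(q->queue_lock, flags);
      			if (blk_queue_stopped(q))
      				blk_start_queue(q);
      			spin_unlock_irqrestore(q->queue_lock, flags);
      		}
      	}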
    • null_blk: prevent timer handler running on a different CPU where started · 419c21a3
      Committed by Akinobu Mita
      When irqmode=2 (the IRQ completion handler is a timer), the timer
      handler should run on the same CPU on which the timer was started.
      
      Since completion_queues are per-CPU and the completion handler only
      touches the completion_queue of the local CPU, the handler must be
      prevented from running on a CPU other than the one where the timer was
      started. Otherwise, the I/O cannot be completed until another completion
      handler happens to run on that CPU.
      Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
      Cc: Jens Axboe <axboe@fb.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      419c21a3
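      A minimal sketch of one way to express this with the hrtimer API:
      arming the per-cpu completion timer in pinned mode keeps its expiry
      handler on the CPU that started it. The surrounding per-cpu
      completion_queues/llist code follows the null_blk driver of the time;
      the actual fix may differ in detail.

      	static void null_cmd_end_timer(struct nullb_cmd *cmd)
      	{
      		struct completion_queue *cq = &per_cpu(completion_queues, get_cpu());

      		if (llist_add(&cmd->ll_list, &cq->list)) {
      			ktime_t kt = ktime_set(0, completion_nsec);

      			/* HRTIMER_MODE_REL_PINNED keeps the callback on this CPU,
      			 * so it only ever drains the local completion_queue. */
      			hrtimer_start(&cq->timer, kt, HRTIMER_MODE_REL_PINNED);
      		}

      		put_cpu();
      	}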
    • NVMe: Remove hctx reliance for multi-namespace · 42483228
      Committed by Keith Busch
      The driver needs to track shared tags to support multiple namespaces
      that may be dynamically allocated or deleted. Relying on the first
      request_queue's hctx is not appropriate, as we cannot clear outstanding
      tags for all namespaces through that handle, nor can the driver easily
      track every request_queue's hctx as namespaces are attached and
      detached. Instead, this patch obtains the shared tag resources from the
      nvme_dev's tagset rather than through a request_queue hctx.
      Signed-off-by: Keith Busch <keith.busch@intel.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      42483228
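      A rough sketch of the bookkeeping this implies (field and function
      names are illustrative and simplified, not the driver's exact code):
      each hardware context records a pointer into the device-wide tag set
      when it is initialized, so iterating or cancelling outstanding commands
      never has to go through any particular namespace's request_queue.

      	/* Hypothetical, simplified per-queue state. */
      	struct nvme_queue {
      		struct nvme_dev *dev;
      		struct blk_mq_tags **tags;	/* points into dev->tagset.tags[] */
      		/* ... */
      	};

      	static int nvme_init_hctx(struct blk_mq_hw_ctx *hctx, void *data,
      				  unsigned int hctx_idx)
      	{
      		struct nvme_dev *dev = data;
      		struct nvme_queue *nvmeq = dev->queues[hctx_idx + 1];

      		/* Remember the shared tags for this hw queue; they live as long
      		 * as the tag set, independent of namespace add/remove. */
      		nvmeq->tags = &dev->tagset.tags[hctx_idx];
      		hctx->driver_data = nvmeq;
      		return 0;
      	}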
  9. 30 May, 2015: 2 commits
  10. 23 May, 2015: 1 commit
  11. 22 May, 2015: 9 commits
  12. 20 May, 2015: 4 commits
  13. 19 May, 2015: 3 commits
  14. 06 May, 2015: 1 commit
    • block: loop: avoiding too many pending per work I/O · 4d4e41ae
      Committed by Ming Lei
      If there is too much pending per-work I/O, too many high-priority
      worker threads can be generated, which can hurt system performance.
      
      This patch limits the workqueue's max_active parameter to 16.
      
      This patch fixes a Fedora 22 live-boot performance regression seen when
      booting from squashfs over dm backed by loop; the following factors
      appear to be related to the problem:
      
      - unlike other filesystems (such as ext4), squashfs is a bit special: I
      observed that increasing the number of I/O jobs accessing a file on
      squashfs improves I/O performance only a little, while it makes a big
      difference for ext4
      
      - nested loop: both squashfs.img and ext3fs.img are mounted as loop
      block devices, and ext3fs.img lives inside the squashfs
      
      - during boot, many tasks may run concurrently
      
      Fixes: b5dd2f60
      Cc: stable@vger.kernel.org (v4.0)
      Cc: Justin M. Forbes <jforbes@fedoraproject.org>
      Signed-off-by: Ming Lei <ming.lei@canonical.com>
      Acked-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      4d4e41ae
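      For reference, bounding workqueue concurrency is done through the
      max_active argument of alloc_workqueue(). A sketch of what the cap
      looks like in a per-device setup helper (the helper name and the flag
      combination are assumptions based on the loop driver of that era):

      	/* Hypothetical setup helper for a loop device. */
      	static int loop_prepare_workqueue(struct loop_device *lo)
      	{
      		/*
      		 * Cap concurrently active per-work I/O items at 16. With the
      		 * default (max_active == 0, i.e. a much larger limit), too many
      		 * high-priority workers could be spawned under heavy I/O.
      		 */
      		lo->wq = alloc_workqueue("kloopd%d",
      					 WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_UNBOUND,
      					 16, lo->lo_number);
      		if (!lo->wq)
      			return -ENOMEM;
      		return 0;
      	}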