1. 09 May 2015 (4 commits)
    • blk-mq: do limited block plug for multiple queue case · f984df1f
      Shaohua Li committed
      plug is still helpful for workloads with IO merging, but it can be harmful
      otherwise, especially with multiple hardware queues, as there is
      (supposedly) no lock contention in that case and plugging can introduce
      latency. For multiple queues, we do a limited plug, e.g. plug only if
      there is a request merge. If a request doesn't merge with the following
      request, the request will be dispatched immediately.
      
      V2: check blk_queue_nomerges() as suggested by Jeff.
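
      A loose sketch of the policy, assuming a hypothetical helper
      try_merge_into_plug() and a stand-in dispatch_directly(); this is not the
      actual blk_mq_make_request() hunk:

      	/* plug only when merging is allowed and a merge actually happens */
      	if (plug && !blk_queue_nomerges(q) && try_merge_into_plug(plug, bio))
      		return;			/* merged into a plugged request */

      	/* no merge candidate: bypass the plug and issue the request now */
      	blk_mq_bio_to_request(rq, bio);
      	dispatch_directly(rq);		/* stand-in for the direct-issue path */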
      
      Cc: Jens Axboe <axboe@fb.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Shaohua Li <shli@fb.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
    • blk-mq: avoid re-initialize request which is failed in direct dispatch · 239ad215
      Shaohua Li committed
      If we directly issue a request and it fails, we fall back to
      blk_mq_merge_queue_io(). But the bio has already been assigned to a
      request in blk_mq_bio_to_request(), so blk_mq_merge_queue_io() shouldn't
      run blk_mq_bio_to_request() again.
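
      In other words, the fallback ordering should look roughly like this (a
      sketch with stand-in names direct_issue() and insert_into_sw_queue(); not
      the actual patch):

      	blk_mq_bio_to_request(rq, bio);	/* bio -> request conversion, done once */
      	if (direct_issue(rq) == 0)	/* stand-in for the direct dispatch */
      		return;
      	/* direct dispatch failed: queue rq as-is, without touching the bio again */
      	insert_into_sw_queue(rq);	/* stand-in for the blk_mq_merge_queue_io() fallback */
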
      Signed-off-by: Shaohua Li <shli@fb.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
    • blk-mq: fix plugging in blk_sq_make_request · e6c4438b
      Jeff Moyer committed
      The following appears in blk_sq_make_request:
      
      	/*
      	 * If we have multiple hardware queues, just go directly to
      	 * one of those for sync IO.
      	 */
      
      We clearly don't have multiple hardware queues here!  This comment was
      introduced by commit 07068d5b (blk-mq: split make request
      handler for multi and single queue):
      
          We want slightly different behavior from them:
      
          - On single queue devices, we currently use the per-process plug
            for deferred IO and for merging.
      
          - On multi queue devices, we don't use the per-process plug, but
            we want to go straight to hardware for SYNC IO.
      
      The old code had this:
      
              use_plug = !is_flush_fua && ((q->nr_hw_queues == 1) || !is_sync);
      
      and that was converted to:
      
      	use_plug = !is_flush_fua && !is_sync;
      
      which is not equivalent.  For the single queue case, the second half of
      the && expression is always true.  So, what I think was actually intended
      follows (and this more closely matches what is done in blk_queue_bio).
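
      Spelled out for the single-queue path (a worked simplification of the
      condition, not the patch hunk itself):

      	/* in blk_sq_make_request() we know q->nr_hw_queues == 1, so */
      	use_plug = !is_flush_fua && ((q->nr_hw_queues == 1) || !is_sync);
      	/* the (q->nr_hw_queues == 1) term is always true here, leaving */
      	use_plug = !is_flush_fua;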
      
      V2: delete the 'likely', which should not be a big deal
      Signed-off-by: Jeff Moyer <jmoyer@redhat.com>
      Signed-off-by: Shaohua Li <shli@fb.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
    • blk: clean up plug · dd6cf3e1
      Shaohua Li committed
      The current code makes it look like an inner plug gets flushed with
      blk_finish_plug(). Actually it's a nop: all requests/callbacks are added
      to current->plug, but only the outermost plug is ever assigned to
      current->plug, so an inner plug always has an empty request/callback
      list, which makes blk_flush_plug_list() a nop. This tries to make the
      code clearer.
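
      A condensed sketch of the nesting rule described above (not the full
      functions):

      	void blk_start_plug(struct blk_plug *plug)
      	{
      		/* (list initialization of plug->list etc. omitted) */

      		/* only the outermost plug is ever recorded on the task */
      		if (!current->plug)
      			current->plug = plug;
      	}

      	void blk_finish_plug(struct blk_plug *plug)
      	{
      		/* an inner plug never became current->plug, so finishing it is a nop */
      		if (plug != current->plug)
      			return;

      		blk_flush_plug_list(plug, false);
      		current->plug = NULL;
      	}
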
      Signed-off-by: Shaohua Li <shli@fb.com>
      Reviewed-by: Jeff Moyer <jmoyer@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@fb.com>
  2. 06 May 2015 (3 commits)
  3. 17 April 2015 (1 commit)
  4. 16 April 2015 (1 commit)
    • blk-mq: reduce unnecessary software queue looping · 889fa31f
      Chong Yuan committed
      In flush_busy_ctxs() and blk_mq_hctx_has_pending(), regardless of how many
      ctxs are assigned to one hctx, both loop hctx->ctx_map.map_size times,
      where hctx->ctx_map.map_size is the constant ALIGN(nr_cpu_ids, 8) / 8.
      flush_busy_ctxs() in particular sits in a hot code path, and the extra
      iterations are unnecessary. Change ->map_size to reflect the number of
      software queues actually mapped, so we only loop for as many iterations
      as we have to.
      
      Also remove the cpumask setting and nr_ctx count in blk_mq_init_cpu_queues(),
      since they are all redone in blk_mq_map_swqueue().
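
      The idea, roughly (field usage approximated from the description above,
      not a verbatim hunk; check_word() is a stand-in for the per-word work):

      	/* before: every caller walks the worst-case bitmap size */
      	for (i = 0; i < hctx->ctx_map.map_size; i++)	/* == ALIGN(nr_cpu_ids, 8) / 8 */
      		check_word(i);

      	/* after: map_size is derived from the ctxs actually mapped to this hctx */
      	hctx->ctx_map.map_size = DIV_ROUND_UP(hctx->nr_ctx,
      					      hctx->ctx_map.bits_per_word);
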
      Signed-off-by: Chong Yuan <chong.yuan@memblaze.com>
      Reviewed-by: Wenbo Wang <wenbo.wang@memblaze.com>
      
      Updated by me for formatting and commenting.
      Signed-off-by: NJens Axboe <axboe@fb.com>
  5. 12 April 2015 (3 commits)
  6. 31 March 2015 (1 commit)
    • block: fix blk_stack_limits() regression due to lcm() change · e9637415
      Mike Snitzer committed
      Linux 3.19 commit 69c953c8 ("lib/lcm.c: lcm(n,0)=lcm(0,n) is 0, not n")
      caused blk_stack_limits() to not properly stack queue_limits for stacked
      devices (e.g. DM).
      
      Fix this regression by establishing lcm_not_zero() and switching
      blk_stack_limits() over to using it.
      
      DM uses blk_set_stacking_limits() to establish the initial top-level
      queue_limits that are then built up based on underlying devices' limits
      using blk_stack_limits().  In the case of optimal_io_size (io_opt)
      blk_set_stacking_limits() establishes a default value of 0.  With commit
      69c953c8, lcm(0, n) is no longer n, which compromises proper stacking of
      the underlying devices' io_opt.
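
      The helper's job is just to keep a zero on either side from zeroing out
      the result; a sketch consistent with the fix:

      	unsigned long lcm_not_zero(unsigned long a, unsigned long b)
      	{
      		unsigned long l = lcm(a, b);

      		if (l)
      			return l;
      		return a ? a : b;	/* one side is 0: keep the other side's value */
      	}

      	/* blk_stack_limits() then stacks io_opt with it, e.g.: */
      	t->io_opt = lcm_not_zero(t->io_opt, b->io_opt);
      	/* lcm_not_zero(0, 786432) == 786432, so the DM default of 0 no longer wins */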
      
      Test:
      $ modprobe scsi_debug dev_size_mb=10 num_tgts=1 opt_blks=1536
      $ cat /sys/block/sde/queue/optimal_io_size
      786432
      $ dmsetup create node --table "0 100 linear /dev/sde 0"
      
      Before this fix:
      $ cat /sys/block/dm-5/queue/optimal_io_size
      0
      
      After this fix:
      $ cat /sys/block/dm-5/queue/optimal_io_size
      786432
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Cc: stable@vger.kernel.org # 3.19+
      Acked-by: Martin K. Petersen <martin.petersen@oracle.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
  7. 30 March 2015 (2 commits)
  8. 25 March 2015 (1 commit)
  9. 20 March 2015 (1 commit)
  10. 19 March 2015 (1 commit)
  11. 13 March 2015 (4 commits)
  12. 21 February 2015 (1 commit)
    • blk-throttle: check stats_cpu before reading it from sysfs · 045c47ca
      Thadeu Lima de Souza Cascardo committed
      When reading blkio.throttle.io_serviced in a recently created blkio
      cgroup, it's possible to race against the creation of a throttle policy,
      which delays the allocation of stats_cpu.
      
      As in other functions in the throttle code, just checking for a NULL
      stats_cpu prevents the following oops caused by that race.
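
      The fix boils down to an early NULL check in the sysfs read path, along
      these lines (sketch of the guard, not the full function):

      	/* in tg_prfill_cpu_rwstat(): the policy may not have allocated stats yet */
      	if (tg->stats_cpu == NULL)
      		return 0;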
      
      [ 1117.285199] Unable to handle kernel paging request for data at address 0x7fb4d0020
      [ 1117.285252] Faulting instruction address: 0xc0000000003efa2c
      [ 1137.733921] Oops: Kernel access of bad area, sig: 11 [#1]
      [ 1137.733945] SMP NR_CPUS=2048 NUMA PowerNV
      [ 1137.734025] Modules linked in: bridge stp llc kvm_hv kvm binfmt_misc autofs4
      [ 1137.734102] CPU: 3 PID: 5302 Comm: blkcgroup Not tainted 3.19.0 #5
      [ 1137.734132] task: c000000f1d188b00 ti: c000000f1d210000 task.ti: c000000f1d210000
      [ 1137.734167] NIP: c0000000003efa2c LR: c0000000003ef9f0 CTR: c0000000003ef980
      [ 1137.734202] REGS: c000000f1d213500 TRAP: 0300   Not tainted  (3.19.0)
      [ 1137.734230] MSR: 9000000000009032 <SF,HV,EE,ME,IR,DR,RI>  CR: 42008884  XER: 20000000
      [ 1137.734325] CFAR: 0000000000008458 DAR: 00000007fb4d0020 DSISR: 40000000 SOFTE: 0
      GPR00: c0000000003ed3a0 c000000f1d213780 c000000000c59538 0000000000000000
      GPR04: 0000000000000800 0000000000000000 0000000000000000 0000000000000000
      GPR08: ffffffffffffffff 00000007fb4d0020 00000007fb4d0000 c000000000780808
      GPR12: 0000000022000888 c00000000fdc0d80 0000000000000000 0000000000000000
      GPR16: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
      GPR20: 000001003e120200 c000000f1d5b0cc0 0000000000000200 0000000000000000
      GPR24: 0000000000000001 c000000000c269e0 0000000000000020 c000000f1d5b0c80
      GPR28: c000000000ca3a08 c000000000ca3dec c000000f1c667e00 c000000f1d213850
      [ 1137.734886] NIP [c0000000003efa2c] .tg_prfill_cpu_rwstat+0xac/0x180
      [ 1137.734915] LR [c0000000003ef9f0] .tg_prfill_cpu_rwstat+0x70/0x180
      [ 1137.734943] Call Trace:
      [ 1137.734952] [c000000f1d213780] [d000000005560520] 0xd000000005560520 (unreliable)
      [ 1137.734996] [c000000f1d2138a0] [c0000000003ed3a0] .blkcg_print_blkgs+0xe0/0x1a0
      [ 1137.735039] [c000000f1d213960] [c0000000003efb50] .tg_print_cpu_rwstat+0x50/0x70
      [ 1137.735082] [c000000f1d2139e0] [c000000000104b48] .cgroup_seqfile_show+0x58/0x150
      [ 1137.735125] [c000000f1d213a70] [c0000000002749dc] .kernfs_seq_show+0x3c/0x50
      [ 1137.735161] [c000000f1d213ae0] [c000000000218630] .seq_read+0xe0/0x510
      [ 1137.735197] [c000000f1d213bd0] [c000000000275b04] .kernfs_fop_read+0x164/0x200
      [ 1137.735240] [c000000f1d213c80] [c0000000001eb8e0] .__vfs_read+0x30/0x80
      [ 1137.735276] [c000000f1d213cf0] [c0000000001eb9c4] .vfs_read+0x94/0x1b0
      [ 1137.735312] [c000000f1d213d90] [c0000000001ebb38] .SyS_read+0x58/0x100
      [ 1137.735349] [c000000f1d213e30] [c000000000009218] syscall_exit+0x0/0x98
      [ 1137.735383] Instruction dump:
      [ 1137.735405] 7c6307b4 7f891800 409d00b8 60000000 60420000 3d420004 392a63b0 786a1f24
      [ 1137.735471] 7d49502a e93e01c8 7d495214 7d2ad214 <7cead02a> e9090008 e9490010 e9290018
      
      And here is code that makes it easy to reproduce this (the headers and
      constants below are filled in as assumptions to make it self-contained),
      although the problem was first found by running docker.
      
      /*
       * Note: the includes and the CGPATH, BUFFER_ALIGN, BUFFER_SIZE and
       * NR_TESTS values below are assumptions added to make the reproducer
       * self-contained; adjust them to the local setup (CGPATH must point at
       * the blkio cgroup mount).
       */
      #define _GNU_SOURCE		/* for O_DIRECT */
      #include <fcntl.h>
      #include <malloc.h>
      #include <stdio.h>
      #include <stdlib.h>
      #include <sys/stat.h>
      #include <sys/types.h>
      #include <sys/wait.h>
      #include <unistd.h>
      
      #define CGPATH		"/sys/fs/cgroup/blkio"	/* assumed cgroup mount point */
      #define BUFFER_ALIGN	4096			/* assumed; O_DIRECT needs alignment */
      #define BUFFER_SIZE	4096			/* assumed */
      #define NR_TESTS	1000			/* assumed iteration count */
      
      void run(pid_t pid)
      {
      	int n;
      	int status;
      	int fd;
      	char *buffer;
      	/* O_DIRECT below needs an aligned buffer */
      	buffer = memalign(BUFFER_ALIGN, BUFFER_SIZE);
      	n = snprintf(buffer, BUFFER_SIZE, "%d\n", pid);
      	/* move the calling process into the test cgroup */
      	fd = open(CGPATH "/test/tasks", O_WRONLY);
      	write(fd, buffer, n);
      	close(fd);
      	if (fork() > 0) {
      		/* parent: do IO from inside the cgroup, which instantiates
      		 * the throttle policy data for it */
      		fd = open("/dev/sda", O_RDONLY | O_DIRECT);
      		read(fd, buffer, 512);
      		close(fd);
      		wait(&status);
      	} else {
      		/* child: concurrently read the throttle stats, racing with
      		 * that allocation */
      		fd = open(CGPATH "/test/blkio.throttle.io_serviced", O_RDONLY);
      		n = read(fd, buffer, BUFFER_SIZE);
      		close(fd);
      	}
      	free(buffer);
      	exit(0);
      }
      
      void test(void)
      {
      	int status;
      	mkdir(CGPATH "/test", 0666);
      	if (fork() > 0)
      		wait(&status);
      	else
      		run(getpid());
      	rmdir(CGPATH "/test");
      }
      
      int main(int argc, char **argv)
      {
      	int i;
      	for (i = 0; i < NR_TESTS; i++)
      		test();
      	return 0;
      }
      Reported-by: Ricardo Marin Matinata <rmm@br.ibm.com>
      Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@linux.vnet.ibm.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Jens Axboe <axboe@fb.com>
  13. 12 February 2015 (4 commits)
  14. 11 February 2015 (1 commit)
  15. 10 February 2015 (1 commit)
  16. 06 February 2015 (8 commits)
  17. 05 February 2015 (1 commit)
    • block: Simplify bsg complete all · 2c561246
      Peter Zijlstra committed
      It took me a few tries to figure out what this code did; let's rewrite
      it into a more regular form.
      
      The thing that makes this one 'special' is the BSG_F_BLOCK flag: if
      that is not set we're not supposed/allowed to block and should spin-wait
      for completion.
      
      The (new) io_wait_event() will never see a false condition in the
      spinning case and we will therefore not block.
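
      The resulting shape is roughly the following (condensed sketch, not a
      verbatim hunk):

      	static bool bsg_complete(struct bsg_device *bd)
      	{
      		bool ret = false;
      		bool spin;

      		do {
      			spin_lock_irq(&bd->lock);
      			/* done once everything queued has also completed */
      			if (bd->done_cmds == bd->queued_cmds)
      				ret = true;
      			/* without BSG_F_BLOCK we must not sleep, so keep polling here */
      			spin = !test_bit(BSG_F_BLOCK, &bd->flags);
      			spin_unlock_irq(&bd->lock);
      		} while (!ret && spin);

      		return ret;
      	}

      	/* the spinning case only returns once true, so this never blocks for it */
      	io_wait_event(bd->wq_done, bsg_complete(bd));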
      
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Jens Axboe <axboe@fb.com>
  18. 30 January 2015 (2 commits)