1. 05 Aug 2018, 1 commit
    • Partially revert "block: fail op_is_write() requests to read-only partitions" · a32e236e
      Authored by Linus Torvalds
      It turns out that commit 721c7fc7 ("block: fail op_is_write()
      requests to read-only partitions"), while obviously correct, causes
      problems for some older lvm2 installations.
      
      The reason is that the lvm snapshotting will continue to write to the
      snapshot COW volume, even after the volume has been marked read-only.
      End result: snapshot failure.
      
      This has actually been fixed in newer versions of the lvm2 tools, but the
      old tools still exist, and the breakage was reported both in the kernel
      bugzilla and in the Debian bugzilla:
      
        https://bugzilla.kernel.org/show_bug.cgi?id=200439
        https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=900442
      
      The lvm2 fix is here:
      
        https://sourceware.org/git/?p=lvm2.git;a=commit;h=a6fdb9d9d70f51c49ad11a87ab4243344e6701a3
      
      but until everybody has updated to recent versions, we'll have to weaken
      the "never write to read-only partitions" check.  It now allows the
      write to happen, but causes a warning, something like this:
      
        generic_make_request: Trying to write to read-only block-device dm-3 (partno X)
        Modules linked in: nf_tables xt_cgroup xt_owner kvm_intel iwlmvm kvm irqbypass iwlwifi
        CPU: 1 PID: 77 Comm: kworker/1:1 Not tainted 4.17.9-gentoo #3
        Hardware name: LENOVO 20B6A019RT/20B6A019RT, BIOS GJET91WW (2.41 ) 09/21/2016
        Workqueue: ksnaphd do_metadata
        RIP: 0010:generic_make_request_checks+0x4ac/0x600
        ...
        Call Trace:
         generic_make_request+0x64/0x400
         submit_bio+0x6c/0x140
         dispatch_io+0x287/0x430
         sync_io+0xc3/0x120
         dm_io+0x1f8/0x220
         do_metadata+0x1d/0x30
         process_one_work+0x1b9/0x3e0
         worker_thread+0x2b/0x3c0
         kthread+0x113/0x130
         ret_from_fork+0x35/0x40
      
      Note that this is a "revert" in behavior only.  I'm leaving alone the
      actual code cleanups in commit 721c7fc7, but letting the previously
      uncaught request go through with a warning instead of stopping it.
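      
      To make the behavioral change concrete, here is a minimal sketch of a
      warn-but-allow check in the spirit of the kernel's bio_check_ro(); the
      body is illustrative, not the verbatim patch:
      
        /*
         * Sketch only: warn about, but do not fail, writes to a read-only
         * partition, so old lvm2 snapshot writes still go through.
         */
        static inline bool bio_check_ro(struct bio *bio, struct hd_struct *part)
        {
                if (part->policy && op_is_write(bio_op(bio))) {
                        char b[BDEVNAME_SIZE];

                        WARN_ONCE(1, "Trying to write to read-only block-device %s (partno %d)\n",
                                  bio_devname(bio, b), part->partno);
                        /* Older lvm2 relies on this write succeeding. */
                }
                return false;   /* never fail the bio on this check */
        }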
      
      Fixes: 721c7fc7 ("block: fail op_is_write() requests to read-only partitions")
      Reported-and-tested-by: WGH <wgh@torlan.ru>
      Acked-by: Mike Snitzer <snitzer@redhat.com>
      Cc: Sagi Grimberg <sagi@grimberg.me>
      Cc: Ilya Dryomov <idryomov@gmail.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Zdenek Kabelac <zkabelac@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  2. 28 Jun 2018, 1 commit
  3. 20 Jun 2018, 1 commit
    • Revert "block: Add warning for bi_next not NULL in bio_endio()" · 9c24c10a
      Authored by Bart Van Assche
      Commit 0ba99ca4 ("block: Add warning for bi_next not NULL in
      bio_endio()") breaks the dm driver. end_clone_bio() detects whether
      or not a bio is the last bio associated with a request by checking
      the .bi_next field. Commit 0ba99ca4 clears that field before
      end_clone_bio() has had a chance to inspect it. Hence revert
      commit 0ba99ca4.
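      
      For illustration, a simplified sketch of that dependency (the real
      dm-rq handler differs in detail; the bi_private use here is an
      assumption made for brevity):
      
        /* Sketch: dm-rq's completion handler for one cloned bio. */
        static void end_clone_bio(struct bio *clone)
        {
                struct request *orig = clone->bi_private; /* set at clone time */
                /*
                 * ->bi_next still links to the request's next bio, so NULL
                 * means "last bio". Commit 0ba99ca4 cleared ->bi_next in
                 * bio_endio() before this ran, breaking the test below.
                 */
                bool is_last = !clone->bi_next;

                bio_put(clone);
                if (is_last)
                        blk_mq_complete_request(orig);
        }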
      
      This patch prevents KASAN from reporting the following complaint when
      running the srp-test software (srp-test/run_tests -c -d -r 10 -t 02-mq):
      
      ==================================================================
      BUG: KASAN: use-after-free in bio_advance+0x11b/0x1d0
      Read of size 4 at addr ffff8801300e06d0 by task ksoftirqd/0/9
      
      CPU: 0 PID: 9 Comm: ksoftirqd/0 Not tainted 4.18.0-rc1-dbg+ #1
      Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.0.0-prebuilt.qemu-project.org 04/01/2014
      Call Trace:
       dump_stack+0xa4/0xf5
       print_address_description+0x6f/0x270
       kasan_report+0x241/0x360
       __asan_load4+0x78/0x80
       bio_advance+0x11b/0x1d0
       blk_update_request+0xa7/0x5b0
       scsi_end_request+0x56/0x320 [scsi_mod]
       scsi_io_completion+0x7d6/0xb20 [scsi_mod]
       scsi_finish_command+0x1c0/0x280 [scsi_mod]
       scsi_softirq_done+0x19a/0x230 [scsi_mod]
       blk_mq_complete_request+0x160/0x240
       scsi_mq_done+0x50/0x1a0 [scsi_mod]
       srp_recv_done+0x515/0x1330 [ib_srp]
       __ib_process_cq+0xa0/0xf0 [ib_core]
       ib_poll_handler+0x38/0xa0 [ib_core]
       irq_poll_softirq+0xe8/0x1f0
       __do_softirq+0x128/0x60d
       run_ksoftirqd+0x3f/0x60
       smpboot_thread_fn+0x352/0x460
       kthread+0x1c1/0x1e0
       ret_from_fork+0x24/0x30
      
      Allocated by task 1918:
       save_stack+0x43/0xd0
       kasan_kmalloc+0xad/0xe0
       kasan_slab_alloc+0x11/0x20
       kmem_cache_alloc+0xfe/0x350
       mempool_alloc_slab+0x15/0x20
       mempool_alloc+0xfb/0x270
       bio_alloc_bioset+0x244/0x350
       submit_bh_wbc+0x9c/0x2f0
       __block_write_full_page+0x299/0x5a0
       block_write_full_page+0x16b/0x180
       blkdev_writepage+0x18/0x20
       __writepage+0x42/0x80
       write_cache_pages+0x376/0x8a0
       generic_writepages+0xbe/0x110
       blkdev_writepages+0xe/0x10
       do_writepages+0x9b/0x180
       __filemap_fdatawrite_range+0x178/0x1c0
       file_write_and_wait_range+0x59/0xc0
       blkdev_fsync+0x46/0x80
       vfs_fsync_range+0x66/0x100
       do_fsync+0x3d/0x70
       __x64_sys_fsync+0x21/0x30
       do_syscall_64+0x77/0x230
       entry_SYSCALL_64_after_hwframe+0x49/0xbe
      
      Freed by task 9:
       save_stack+0x43/0xd0
       __kasan_slab_free+0x137/0x190
       kasan_slab_free+0xe/0x10
       kmem_cache_free+0xd3/0x380
       mempool_free_slab+0x17/0x20
       mempool_free+0x63/0x160
       bio_free+0x81/0xa0
       bio_put+0x59/0x60
       end_bio_bh_io_sync+0x5d/0x70
       bio_endio+0x1a7/0x360
       blk_update_request+0xd0/0x5b0
       end_clone_bio+0xa3/0xd0 [dm_mod]
       bio_endio+0x1a7/0x360
       blk_update_request+0xd0/0x5b0
       scsi_end_request+0x56/0x320 [scsi_mod]
       scsi_io_completion+0x7d6/0xb20 [scsi_mod]
       scsi_finish_command+0x1c0/0x280 [scsi_mod]
       scsi_softirq_done+0x19a/0x230 [scsi_mod]
       blk_mq_complete_request+0x160/0x240
       scsi_mq_done+0x50/0x1a0 [scsi_mod]
       srp_recv_done+0x515/0x1330 [ib_srp]
       __ib_process_cq+0xa0/0xf0 [ib_core]
       ib_poll_handler+0x38/0xa0 [ib_core]
       irq_poll_softirq+0xe8/0x1f0
       __do_softirq+0x128/0x60d
      
      The buggy address belongs to the object at ffff8801300e0640
       which belongs to the cache bio-0 of size 200
      The buggy address is located 144 bytes inside of
       200-byte region [ffff8801300e0640, ffff8801300e0708)
      The buggy address belongs to the page:
      page:ffffea0004c03800 count:1 mapcount:0 mapping:ffff88015a563a00 index:0x0 compound_mapcount: 0
      flags: 0x8000000000008100(slab|head)
      raw: 8000000000008100 dead000000000100 dead000000000200 ffff88015a563a00
      raw: 0000000000000000 0000000000330033 00000001ffffffff 0000000000000000
      page dumped because: kasan: bad access detected
      
      Memory state around the buggy address:
       ffff8801300e0580: fb fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc
       ffff8801300e0600: fc fc fc fc fc fc fc fc fb fb fb fb fb fb fb fb
      >ffff8801300e0680: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                                                       ^
       ffff8801300e0700: fb fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
       ffff8801300e0780: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      ==================================================================
      
      Cc: Kent Overstreet <kent.overstreet@gmail.com>
      Fixes: 0ba99ca4 ("block: Add warning for bi_next not NULL in bio_endio()")
      Acked-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  4. 07 Jun 2018, 1 commit
  5. 03 Jun 2018, 1 commit
    • block: don't use blocking queue entered for recursive bio submits · cd4a4ae4
      Authored by Jens Axboe
      If we end up splitting a bio, and the queue goes away between the
      initial submission and the later submission of the split-off bio, then
      we can block forever in blk_queue_enter(), waiting for the queue
      reference to drop to zero. That will never happen, since we ourselves
      still hold a reference.
      
      Mark a split bio as already having entered the queue, so we can
      just use the live non-blocking queue enter variant.
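      
      A sketch of the approach (flag and helper names follow the patch
      description; error handling is simplified):
      
        /* In blk_queue_split(), before resubmitting the remainder: */
        bio_set_flag(*bio, BIO_QUEUE_ENTERED);
        generic_make_request(*bio);

        /* In generic_make_request(), when entering the queue: */
        if (bio_flagged(bio, BIO_QUEUE_ENTERED))
                blk_queue_enter_live(q);        /* we already hold a reference */
        else if (blk_queue_enter(q, flags) < 0)
                bio_io_error(bio);              /* simplified failure path */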
      
      Thanks to Tetsuo Handa for the analysis.
      
      Reported-by: syzbot+c4f9cebf9d651f6e54de@syzkaller.appspotmail.com
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  6. 01 Jun 2018, 3 commits
  7. 31 May 2018, 1 commit
  8. 29 May 2018, 1 commit
    • blk-mq: Remove generation sequence · 12f5b931
      Authored by Keith Busch
      This patch simplifies the timeout handling by relying on the request
      reference counting to ensure the iterator is operating on an inflight
      and truly timed out request. Since the reference counting prevents the
      tag from being reallocated, the block layer no longer needs to prevent
      drivers from completing their requests while the timeout handler is
      operating on it: a driver completing a request is allowed to proceed to
      the next state without additional synchronization with the block layer.
      
      This also removes any need for generation sequence numbers, since the
      request can no longer be reallocated as a new instance while timeout
      handling is operating on it.
      
      To enable this, a refcount is added to struct request so that request
      users can be sure they're operating on the same request without it
      changing while they're processing it.  The request's tag won't be
      released for reuse until both the timeout handler and the completion
      are done with it.
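      
      A simplified sketch of the scheme; the helper names follow blk-mq, but
      the bodies are abbreviated:
      
        struct request {
                /* ... existing fields ... */
                refcount_t ref; /* pins the tag while timeout/completion run */
        };

        /* Timeout iterator callback, simplified: */
        static void blk_mq_check_expired(struct blk_mq_hw_ctx *hctx,
                                         struct request *rq, void *priv,
                                         bool reserved)
        {
                /* A zero refcount means the request is already being freed. */
                if (!refcount_inc_not_zero(&rq->ref))
                        return;

                blk_mq_rq_timed_out(rq, reserved); /* tag can't be reused here */

                if (refcount_dec_and_test(&rq->ref))
                        __blk_mq_free_request(rq); /* last user frees the tag */
        }
      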
      Signed-off-by: Keith Busch <keith.busch@intel.com>
      [hch: slight cleanups, added back submission side hctx lock, use cmpxchg
       for completions]
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  9. 15 May 2018, 2 commits
  10. 14 May 2018, 4 commits
  11. 09 May 2018, 3 commits
  12. 08 May 2018, 2 commits
    • block: Shorten interrupt disabled regions · 50864670
      Authored by Thomas Gleixner
      Commit 9c40cef2 ("sched: Move blk_schedule_flush_plug() out of
      __schedule()") moved the blk_schedule_flush_plug() call out of the
      interrupt/preempt disabled region in the scheduler. This allows the
      local_irq_save/restore(flags) pair in blk_flush_plug_list() to be
      replaced by local_irq_disable/enable().
      
      But it makes more sense to disable interrupts explicitly when the
      request queue is locked and to reenable them when the queue is
      unlocked. This shortens the interrupt disabled section, which is
      important when the plug list contains requests for more than one
      queue. The comment claiming that interrupts must be disabled around
      the loop is misleading, as the called functions can reenable
      interrupts unconditionally anyway, and the detached disabling
      obfuscates the scope badly:
      
       local_irq_save(flags);
         spin_lock(q->queue_lock);
         ...
         queue_unplugged(q...);
           scsi_request_fn();
             spin_unlock_irq(q->queue_lock);
      
      -------------------^^^ ????
      
             spin_lock_irq(q->queue_lock);
           spin_unlock(q->queue_lock);
       local_irq_restore(flags);
      
      Aside from that, the detached interrupt disabling is a constant pain
      for PREEMPT_RT, as it requires patching and special-casing when RT is
      enabled, while with the spin_*_irq() variants this happens
      automatically.
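      
      Schematically, the change in blk_flush_plug_list() amounts to the
      following (simplified, not the verbatim diff):
      
        /* Before: interrupts disabled across the whole flush loop. */
        local_irq_save(flags);
        while (!list_empty(&list)) {
                spin_lock(q->queue_lock);
                /* ... dispatch the requests of this queue ... */
                spin_unlock(q->queue_lock);
        }
        local_irq_restore(flags);

        /* After: interrupts disabled only while the lock is held. */
        while (!list_empty(&list)) {
                spin_lock_irq(q->queue_lock);
                /* ... dispatch the requests of this queue ... */
                spin_unlock_irq(q->queue_lock);
        }
      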
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Link: http://lkml.kernel.org/r/20110622174919.025446432@linutronix.de
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • block: Remove redundant WARN_ON() · 656cb6d0
      Authored by Anna-Maria Gleixner
      Commit 2fff8a92 ("block: Check locking assumptions at runtime") added a
      lockdep_assert_held(q->queue_lock) which makes the WARN_ON() redundant
      because lockdep will detect and warn about context violations.
      
      The unconditional WARN_ON() does not provide real additional value, so it
      can be removed.
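      
      Schematically (the function name here is a hypothetical example):
      
        static void some_queue_op(struct request_queue *q)
        {
                lockdep_assert_held(q->queue_lock);     /* warns on violation */
                /* WARN_ON_ONCE(!spin_is_locked(q->queue_lock)); now redundant */
        }
      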
      Signed-off-by: Anna-Maria Gleixner <anna-maria@linutronix.de>
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  13. 17 Apr 2018, 1 commit
    • blk-mq: start request gstate with gen 1 · f4560231
      Authored by Jianchao Wang
      rq->gstate and rq->aborted_gstate are both zero before a request is
      allocated. If the timeout is small, then when the timer fires there
      may be requests that were never allocated, as well as requests that
      have been allocated but not yet initialized and started. At that
      moment both rq->gstate and rq->aborted_gstate are 0, so
      blk_mq_terminate_expired() will treat such a request as timed out and
      invoke .timeout() too early.
      
      For SCSI, this causes scsi_times_out() to be invoked before the
      scsi_cmnd has been initialized; scsi_cmnd->device is still NULL at
      that point, so we crash.
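      
      A sketch of the fix; the signature follows blk-mq's per-request init
      hook of that era, and the details should be read as illustrative:
      
        static int blk_mq_init_request(struct blk_mq_tag_set *set,
                                       struct request *rq,
                                       unsigned int hctx_idx, int node)
        {
                /*
                 * Start gstate at generation 1. A never-started request then
                 * no longer matches aborted_gstate == 0, so
                 * blk_mq_terminate_expired() can't mistake it for a truly
                 * timed-out request.
                 */
                WRITE_ONCE(rq->gstate, MQ_RQ_GEN_INC);
                return 0;
        }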
      
      Cc: Bart Van Assche <bart.vanassche@wdc.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Ming Lei <ming.lei@redhat.com>
      Cc: Martin Steigerwald <Martin@Lichtvoll.de>
      Cc: stable@vger.kernel.org
      Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  14. 15 Apr 2018, 1 commit
    • block: do not use interruptible wait anywhere · 1dc3039b
      Authored by Alan Jenkins
      When blk_queue_enter() waits for a queue to unfreeze, or unset the
      PREEMPT_ONLY flag, do not allow it to be interrupted by a signal.
      
      The PREEMPT_ONLY flag was introduced later in commit 3a0a5299
      ("block, scsi: Make SCSI quiesce and resume work reliably").  Note the SCSI
      device is resumed asynchronously, i.e. after un-freezing userspace tasks.
      
      So that commit exposed the bug as a regression in v4.15.  A mysterious
      SIGBUS (or -EIO) sometimes happened during the time the device was being
      resumed.  Most frequently, there was no kernel log message, and we saw Xorg
      or Xwayland killed by SIGBUS.[1]
      
      [1] E.g. https://bugzilla.redhat.com/show_bug.cgi?id=1553979
      
      Without this fix, I get an IO error in this test:
      
      # dd if=/dev/sda of=/dev/null iflag=direct & \
        while killall -SIGUSR1 dd; do sleep 0.1; done & \
        echo mem > /sys/power/state ; \
        sleep 5; killall dd  # stop after 5 seconds
      
      The interruptible wait was added to blk_queue_enter in
      commit 3ef28e83 ("block: generic request_queue reference counting").
      Before then, the interruptible wait was only in blk-mq, but I don't think
      it could ever have been correct.
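      
      A simplified before/after sketch of the wait in blk_queue_enter()
      (the PREEMPT_ONLY condition is omitted for brevity):
      
        /* Before: a signal (e.g. the SIGUSR1 above) aborts the wait and
         * surfaces as an IO error to the submitter. */
        ret = wait_event_interruptible(q->mq_freeze_wq,
                                       !atomic_read(&q->mq_freeze_depth) ||
                                       blk_queue_dying(q));
        if (ret)
                return ret;

        /* After: wait unconditionally; only a dying queue ends it early. */
        wait_event(q->mq_freeze_wq,
                   !atomic_read(&q->mq_freeze_depth) || blk_queue_dying(q));
        if (blk_queue_dying(q))
                return -ENODEV;
      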
      Reviewed-by: Bart Van Assche <bart.vanassche@wdc.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Alan Jenkins <alan.christopher.jenkins@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  15. 11 Apr 2018, 1 commit
  16. 20 Mar 2018, 1 commit
    • block: Change a rcu_read_{lock,unlock}_sched() pair into rcu_read_{lock,unlock}() · 818e0fa2
      Authored by Bart Van Assche
      scsi_device_quiesce() uses synchronize_rcu() to guarantee that the
      effect of blk_set_preempt_only() will be visible to percpu_ref_tryget()
      calls that occur after the queue unfreeze, using the approach
      explained in https://lwn.net/Articles/573497/. The RCU read lock and
      unlock calls in blk_queue_enter() form a pair with the synchronize_rcu()
      call in scsi_device_quiesce(). Hence scsi_device_quiesce() and
      blk_queue_enter() must both use regular RCU or both use RCU-sched.
      Since neither the RCU-protected code in blk_queue_enter() nor
      blk_queue_usage_counter_release() sleeps, regular RCU protection
      is sufficient. Note: scsi_device_quiesce() does not have to be
      modified since it already uses synchronize_rcu().
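      
      Schematically, the read side in blk_queue_enter() now looks like this
      (simplified):
      
        rcu_read_lock();        /* was rcu_read_lock_sched() */
        if (percpu_ref_tryget_live(&q->q_usage_counter)) {
                /*
                 * The PREEMPT_ONLY/freeze state is stable here: this read
                 * side pairs with synchronize_rcu() in scsi_device_quiesce().
                 */
                success = true;
        }
        rcu_read_unlock();      /* was rcu_read_unlock_sched() */
      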
      Reported-by: Tejun Heo <tj@kernel.org>
      Fixes: 3a0a5299 ("block, scsi: Make SCSI quiesce and resume work reliably")
      Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
      Acked-by: Tejun Heo <tj@kernel.org>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Hannes Reinecke <hare@suse.com>
      Cc: Ming Lei <ming.lei@redhat.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Johannes Thumshirn <jthumshirn@suse.de>
      Cc: Oleksandr Natalenko <oleksandr@natalenko.name>
      Cc: Martin Steigerwald <martin@lichtvoll.de>
      Cc: stable@vger.kernel.org # v4.15
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  17. 18 Mar 2018, 1 commit
  18. 09 Mar 2018, 2 commits
  19. 01 Mar 2018, 4 commits
  20. 07 Feb 2018, 1 commit
    • block: Add should_fail_bio() for bpf error injection · 30abb3a6
      Authored by Howard McLauchlan
      The classic error injection mechanism, should_fail_request(), does not
      support use cases where more information is required (from the entire
      struct bio, for example).
      
      To that end, this patch introduces should_fail_bio(), which calls
      should_fail_request() under the hood but provides a convenient
      place for kprobes to hook into if they require the entire struct bio.
      This patch also replaces some existing calls to should_fail_request()
      with should_fail_bio(), with no degradation in performance.
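      
      The shape of the new hook, per the description above (close to the
      patch, though the exact error-injection annotation may differ):
      
        /* noinline keeps a stable attach point for kprobes/BPF, which can
         * then inspect the whole struct bio. */
        static noinline int should_fail_bio(struct bio *bio)
        {
                if (should_fail_request(&bio->bi_disk->part0,
                                        bio->bi_iter.bi_size))
                        return -EIO;
                return 0;
        }
        ALLOW_ERROR_INJECTION(should_fail_bio, ERRNO);

        /* Caller, in generic_make_request_checks(): */
        if (should_fail_bio(bio))
                goto end_io;
      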
      Signed-off-by: Howard McLauchlan <hmclauchlan@fb.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  21. 02 Feb 2018, 1 commit
    • blk-mq: fix discard merge with scheduler attached · 445251d0
      Authored by Jens Axboe
      I ran into an issue on my laptop that triggered a bug on the
      discard path:
      
      WARNING: CPU: 2 PID: 207 at drivers/nvme/host/core.c:527 nvme_setup_cmd+0x3d3/0x430
       Modules linked in: rfcomm fuse ctr ccm bnep arc4 binfmt_misc snd_hda_codec_hdmi nls_iso8859_1 nls_cp437 vfat snd_hda_codec_conexant fat snd_hda_codec_generic iwlmvm snd_hda_intel snd_hda_codec snd_hwdep mac80211 snd_hda_core snd_pcm snd_seq_midi snd_seq_midi_event snd_rawmidi snd_seq x86_pkg_temp_thermal intel_powerclamp kvm_intel uvcvideo iwlwifi btusb snd_seq_device videobuf2_vmalloc btintel videobuf2_memops kvm snd_timer videobuf2_v4l2 bluetooth irqbypass videobuf2_core aesni_intel aes_x86_64 crypto_simd cryptd snd glue_helper videodev cfg80211 ecdh_generic soundcore hid_generic usbhid hid i915 psmouse e1000e ptp pps_core xhci_pci xhci_hcd intel_gtt
       CPU: 2 PID: 207 Comm: jbd2/nvme0n1p7- Tainted: G     U           4.15.0+ #176
       Hardware name: LENOVO 20FBCTO1WW/20FBCTO1WW, BIOS N1FET59W (1.33 ) 12/19/2017
       RIP: 0010:nvme_setup_cmd+0x3d3/0x430
       RSP: 0018:ffff880423e9f838 EFLAGS: 00010217
       RAX: 0000000000000000 RBX: ffff880423e9f8c8 RCX: 0000000000010000
       RDX: ffff88022b200010 RSI: 0000000000000002 RDI: 00000000327f0000
       RBP: ffff880421251400 R08: ffff88022b200000 R09: 0000000000000009
       R10: 0000000000000000 R11: 0000000000000000 R12: 000000000000ffff
       R13: ffff88042341e280 R14: 000000000000ffff R15: ffff880421251440
       FS:  0000000000000000(0000) GS:ffff880441500000(0000) knlGS:0000000000000000
       CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
       CR2: 000055b684795030 CR3: 0000000002e09006 CR4: 00000000001606e0
       DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
       DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
       Call Trace:
        nvme_queue_rq+0x40/0xa00
        ? __sbitmap_queue_get+0x24/0x90
        ? blk_mq_get_tag+0xa3/0x250
        ? wait_woken+0x80/0x80
        ? blk_mq_get_driver_tag+0x97/0xf0
        blk_mq_dispatch_rq_list+0x7b/0x4a0
        ? deadline_remove_request+0x49/0xb0
        blk_mq_do_dispatch_sched+0x4f/0xc0
        blk_mq_sched_dispatch_requests+0x106/0x170
        __blk_mq_run_hw_queue+0x53/0xa0
        __blk_mq_delay_run_hw_queue+0x83/0xa0
        blk_mq_run_hw_queue+0x6c/0xd0
        blk_mq_sched_insert_request+0x96/0x140
        __blk_mq_try_issue_directly+0x3d/0x190
        blk_mq_try_issue_directly+0x30/0x70
        blk_mq_make_request+0x1a4/0x6a0
        generic_make_request+0xfd/0x2f0
        ? submit_bio+0x5c/0x110
        submit_bio+0x5c/0x110
        ? __blkdev_issue_discard+0x152/0x200
        submit_bio_wait+0x43/0x60
        ext4_process_freed_data+0x1cd/0x440
        ? account_page_dirtied+0xe2/0x1a0
        ext4_journal_commit_callback+0x4a/0xc0
        jbd2_journal_commit_transaction+0x17e2/0x19e0
        ? kjournald2+0xb0/0x250
        kjournald2+0xb0/0x250
        ? wait_woken+0x80/0x80
        ? commit_timeout+0x10/0x10
        kthread+0x111/0x130
        ? kthread_create_worker_on_cpu+0x50/0x50
        ? do_group_exit+0x3a/0xa0
        ret_from_fork+0x1f/0x30
       Code: 73 89 c1 83 ce 10 c1 e1 10 09 ca 83 f8 04 0f 87 0f ff ff ff 8b 4d 20 48 8b 7d 00 c1 e9 09 48 01 8c c7 00 08 00 00 e9 f8 fe ff ff <0f> ff 4c 89 c7 41 bc 0a 00 00 00 e8 0d 78 d6 ff e9 a1 fc ff ff
       ---[ end trace 50d361cc444506c8 ]---
       print_req_error: I/O error, dev nvme0n1, sector 847167488
      
      Decoding the assembly, the request claims to have 0xffff segments,
      while nvme counts two. This turns out to be because we don't check
      for a data-carrying request on the mq scheduler path, and since
      blk_phys_contig_segment() returns true for a non-data request,
      we decrement the initial segment count of 0 and end up with
      0xffff in the unsigned short.
      
      There are a few issues here:
      
      1) We should initialize the segment count for a discard to 1.
      2) The discard merging is currently using the data limits for
         segments and sectors.
      
      Fix this up by having attempt_merge() correctly identify the
      request, and by initializing the segment count correctly
      for discards.
      
      This can only be triggered with mq-deadline on discard capable
      devices right now, which isn't a common configuration.
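      
      A sketch of the two fixes; the names follow the block-layer merge
      code, but treat this as the idea rather than the exact diff:
      
        /* 1) A discard request carries at least one segment. */
        static inline unsigned short blk_rq_nr_discard_segments(struct request *rq)
        {
                return max_t(unsigned short, rq->nr_phys_segments, 1);
        }

        /* 2) attempt_merge() must recognize discards and apply the discard
         *    limits (max_discard_segments) instead of the data limits. */
        if (req_op(req) == REQ_OP_DISCARD &&
            !req_attempt_discard_merge(q, req, next))
                return NULL;
      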
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  22. 31 Jan 2018, 1 commit
    • blk-mq: introduce BLK_STS_DEV_RESOURCE · 86ff7c2a
      Authored by Ming Lei
      This status is returned from a driver to the block layer if a
      device-related resource is unavailable, but the driver can guarantee
      that IO dispatch will be triggered again in the future when the
      resource is available.
      
      Convert some drivers to return BLK_STS_DEV_RESOURCE. Also, if a driver
      returns BLK_STS_RESOURCE and SCHED_RESTART is set, rerun the queue
      after a delay (BLK_MQ_DELAY_QUEUE) to avoid IO stalls. BLK_MQ_DELAY_QUEUE
      is 3 ms because both scsi-mq and nvmefc are using that magic value.
      
      If a driver can make sure there is in-flight IO, it is safe to return
      BLK_STS_DEV_RESOURCE because:
      
      1) If all in-flight IOs complete before examining SCHED_RESTART in
      blk_mq_dispatch_rq_list(), SCHED_RESTART must be cleared, so the queue
      is run immediately in this case by blk_mq_dispatch_rq_list();
      
      2) if there is any in-flight IO after/when examining SCHED_RESTART
      in blk_mq_dispatch_rq_list():
      - if SCHED_RESTART isn't set, the queue is run immediately as handled in 1)
      - otherwise, this request will be dispatched after any in-flight IO is
        completed via blk_mq_sched_restart()
      
      3) if SCHED_RESTART is set concurrently because of BLK_STS_RESOURCE,
      blk_mq_delay_run_hw_queue() will cover the above two cases and make
      sure an IO hang is avoided.
      
      One invariant is that the queue will be rerun if SCHED_RESTART is set.
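      
      A driver-side sketch of when the new status is safe to return; the
      resource-availability helper is a hypothetical placeholder:
      
        static blk_status_t example_queue_rq(struct blk_mq_hw_ctx *hctx,
                                             const struct blk_mq_queue_data *bd)
        {
                if (!device_resource_available(hctx->driver_data)) { /* hypothetical */
                        /*
                         * Safe only because in-flight IO is guaranteed: its
                         * completion reruns the queue via SCHED_RESTART, so
                         * no BLK_MQ_DELAY_QUEUE rerun is needed.
                         */
                        return BLK_STS_DEV_RESOURCE;
                }
                /* ... map and issue bd->rq ... */
                return BLK_STS_OK;
        }
      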
      Suggested-by: Jens Axboe <axboe@kernel.dk>
      Tested-by: Laurence Oberman <loberman@redhat.com>
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  23. 20 Jan 2018, 2 commits
  24. 19 Jan 2018, 1 commit
  25. 18 Jan 2018, 1 commit
    • blk-mq: improve DM's blk-mq IO merging via blk_insert_cloned_request feedback · 396eaf21
      Authored by Ming Lei
      blk_insert_cloned_request() is called in the fast path of a dm-rq driver
      (e.g. blk-mq request-based DM mpath).  blk_insert_cloned_request() uses
      blk_mq_request_bypass_insert() to directly append the request to the
      blk-mq hctx->dispatch_list of the underlying queue.
      
      1) This way isn't efficient enough, because the hctx spinlock is
      always taken.
      
      2) With blk_insert_cloned_request(), we completely bypass the
      underlying queue's elevator and depend on the upper-level dm-rq
      driver's elevator to schedule IO. But dm-rq currently can't get the
      underlying queue's dispatch feedback at all. Without knowing whether a
      request was issued or not (e.g. due to the underlying queue being
      busy), the dm-rq elevator will
      not be able to provide effective IO merging (as a side-effect of dm-rq
      currently blindly destaging a request from its elevator only to requeue
      it after a delay, which kills any opportunity for merging).  This
      obviously causes very bad sequential IO performance.
      
      Fix this by updating blk_insert_cloned_request() to use
      blk_mq_request_direct_issue().  blk_mq_request_direct_issue() allows a
      request to be issued directly to the underlying queue and returns the
      dispatch feedback (blk_status_t).  If blk_mq_request_direct_issue()
      returns BLK_STS_RESOURCE, the dm-rq driver will now use DM_MAPIO_REQUEUE
      to _not_ destage the request, thereby preserving the opportunity to
      merge IO.
      
      With this, request-based DM's blk-mq sequential IO performance is vastly
      improved (as much as 3X in mpath/virtio-scsi testing).
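      
      Schematically, the dm-rq side of the feedback loop (simplified from
      the description above):
      
        blk_status_t ret = blk_insert_cloned_request(clone->q, clone);
        if (ret == BLK_STS_RESOURCE) {
                /* Underlying queue is busy: don't destage; keep the request
                 * in dm-rq's elevator so later IO can still merge with it. */
                return DM_MAPIO_REQUEUE;
        }
        return DM_MAPIO_REMAPPED;
      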
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      [blk-mq.c changes heavily influenced by Ming Lei's initial solution, but
      they were refactored to make them less fragile and easier to read/review]
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  26. 11 Jan 2018, 1 commit