1. 06 Sep 2018, 1 commit
  2. 18 Aug 2018, 1 commit
  3. 15 Aug 2018, 1 commit
  4. 09 Aug 2018, 1 commit
  5. 05 Aug 2018, 1 commit
    • Partially revert "block: fail op_is_write() requests to read-only partitions" · a32e236e
      Authored by Linus Torvalds
      It turns out that commit 721c7fc7 ("block: fail op_is_write()
      requests to read-only partitions"), while obviously correct, causes
      problems for some older lvm2 installations.
      
      The reason is that the lvm snapshotting will continue to write to the
      snapshot COW volume, even after the volume has been marked read-only.
      End result: snapshot failure.
      
      This has actually been fixed in newer versions of the lvm2 tools, but the
      old tools still exist, and the breakage was reported both in the kernel
      bugzilla and in the Debian bugzilla:
      
        https://bugzilla.kernel.org/show_bug.cgi?id=200439
        https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=900442
      
      The lvm2 fix is here:
      
        https://sourceware.org/git/?p=lvm2.git;a=commit;h=a6fdb9d9d70f51c49ad11a87ab4243344e6701a3
      
      but until everybody has updated to recent versions, we'll have to weaken
      the "never write to read-only partitions" check.  It now allows the
      write to happen, but causes a warning, something like this:
      
        generic_make_request: Trying to write to read-only block-device dm-3 (partno X)
        Modules linked in: nf_tables xt_cgroup xt_owner kvm_intel iwlmvm kvm irqbypass iwlwifi
        CPU: 1 PID: 77 Comm: kworker/1:1 Not tainted 4.17.9-gentoo #3
        Hardware name: LENOVO 20B6A019RT/20B6A019RT, BIOS GJET91WW (2.41 ) 09/21/2016
        Workqueue: ksnaphd do_metadata
        RIP: 0010:generic_make_request_checks+0x4ac/0x600
        ...
        Call Trace:
         generic_make_request+0x64/0x400
         submit_bio+0x6c/0x140
         dispatch_io+0x287/0x430
         sync_io+0xc3/0x120
         dm_io+0x1f8/0x220
         do_metadata+0x1d/0x30
         process_one_work+0x1b9/0x3e0
         worker_thread+0x2b/0x3c0
         kthread+0x113/0x130
         ret_from_fork+0x35/0x40
      
      Note that this is a "revert" in behavior only.  I'm leaving alone the
      actual code cleanups in commit 721c7fc7, but letting the previously
      uncaught request go through with a warning instead of stopping it.
      
      Fixes: 721c7fc7 ("block: fail op_is_write() requests to read-only partitions")
      Reported-and-tested-by: WGH <wgh@torlan.ru>
      Acked-by: Mike Snitzer <snitzer@redhat.com>
      Cc: Sagi Grimberg <sagi@grimberg.me>
      Cc: Ilya Dryomov <idryomov@gmail.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Zdenek Kabelac <zkabelac@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
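
      A minimal C sketch of the weakened check described above (an
      illustration, not the exact diff; the helper shape and names such as
      bio_check_ro() follow the kernel of that era):

        /* Sketch: warn about, but no longer refuse, writes to a read-only
         * partition, so old lvm2 snapshot COW writes keep succeeding.
         */
        static inline bool bio_check_ro(struct bio *bio, struct hd_struct *part)
        {
                if (part->policy && op_is_write(bio_op(bio))) {
                        char b[BDEVNAME_SIZE];

                        WARN_ONCE(1, "generic_make_request: Trying to write to read-only block-device %s (partno %d)\n",
                                  bio_devname(bio, b), part->partno);
                }
                return false;   /* never fail the bio anymore */
        }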
  6. 03 Aug 2018, 1 commit
  7. 30 Jul 2018, 1 commit
  8. 18 Jul 2018, 1 commit
    • block: Add and use op_stat_group() for indexing disk_stat fields. · ddcf35d3
      Authored by Michael Callahan
      Add and use a new op_stat_group() function for indexing partition stat
      fields rather than indexing them by rq_data_dir() or bio_data_dir().
      This function works similarly to op_is_sync() in that it takes the
      request::cmd_flags or bio::bi_opf flags and determines which stats
      should be updated.
      
      In addition, the second parameter to generic_start_io_acct() and
      generic_end_io_acct() is now a REQ_OP rather than simply a read or
      write bit and it uses op_stat_group() on the parameter to determine
      the stat group.
      
      Note that the partition in_flight counts are not part of the per-cpu
      statistics and as such are not indexed via this function. They are
      now indexed by op_is_write().
      
      tj: Refreshed on top of v4.17.  Updated to pass around REQ_OP.
      Signed-off-by: Michael Callahan <michaelcallahan@fb.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Joshua Morris <josh.h.morris@us.ibm.com>
      Cc: Philipp Reisner <philipp.reisner@linbit.com>
      Cc: Matias Bjorling <mb@lightnvm.io>
      Cc: Kent Overstreet <kent.overstreet@gmail.com>
      Cc: Alasdair Kergon <agk@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
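
      A minimal sketch of what such an indexing helper can look like (the
      enum values and the exact definition are assumptions; the real helper
      lives in the block headers):

        /* Sketch: map a REQ_OP_* value onto a read/write stat group. */
        enum stat_group {
                STAT_READ,
                STAT_WRITE,
        };

        static inline int op_stat_group(unsigned int op)
        {
                /* write ops have the low bit set, which op_is_write() tests */
                return op_is_write(op) ? STAT_WRITE : STAT_READ;
        }

      Callers then index the stat arrays with the returned group instead of
      branching on rq_data_dir() or bio_data_dir().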
  9. 09 Jul 2018, 5 commits
  10. 28 Jun 2018, 1 commit
  11. 20 Jun 2018, 1 commit
    • Revert "block: Add warning for bi_next not NULL in bio_endio()" · 9c24c10a
      Authored by Bart Van Assche
      Commit 0ba99ca4 ("block: Add warning for bi_next not NULL in
      bio_endio()") breaks the dm driver. end_clone_bio() detects whether
      or not a bio is the last bio associated with a request by checking
      the .bi_next field. Commit 0ba99ca4 clears that field before
      end_clone_bio() has had a chance to inspect it. Hence revert
      commit 0ba99ca4.
      
      This patch avoids KASAN reporting the following complaint when
      running the srp-test software (srp-test/run_tests -c -d -r 10 -t 02-mq):
      
      ==================================================================
      BUG: KASAN: use-after-free in bio_advance+0x11b/0x1d0
      Read of size 4 at addr ffff8801300e06d0 by task ksoftirqd/0/9
      
      CPU: 0 PID: 9 Comm: ksoftirqd/0 Not tainted 4.18.0-rc1-dbg+ #1
      Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.0.0-prebuilt.qemu-project.org 04/01/2014
      Call Trace:
       dump_stack+0xa4/0xf5
       print_address_description+0x6f/0x270
       kasan_report+0x241/0x360
       __asan_load4+0x78/0x80
       bio_advance+0x11b/0x1d0
       blk_update_request+0xa7/0x5b0
       scsi_end_request+0x56/0x320 [scsi_mod]
       scsi_io_completion+0x7d6/0xb20 [scsi_mod]
       scsi_finish_command+0x1c0/0x280 [scsi_mod]
       scsi_softirq_done+0x19a/0x230 [scsi_mod]
       blk_mq_complete_request+0x160/0x240
       scsi_mq_done+0x50/0x1a0 [scsi_mod]
       srp_recv_done+0x515/0x1330 [ib_srp]
       __ib_process_cq+0xa0/0xf0 [ib_core]
       ib_poll_handler+0x38/0xa0 [ib_core]
       irq_poll_softirq+0xe8/0x1f0
       __do_softirq+0x128/0x60d
       run_ksoftirqd+0x3f/0x60
       smpboot_thread_fn+0x352/0x460
       kthread+0x1c1/0x1e0
       ret_from_fork+0x24/0x30
      
      Allocated by task 1918:
       save_stack+0x43/0xd0
       kasan_kmalloc+0xad/0xe0
       kasan_slab_alloc+0x11/0x20
       kmem_cache_alloc+0xfe/0x350
       mempool_alloc_slab+0x15/0x20
       mempool_alloc+0xfb/0x270
       bio_alloc_bioset+0x244/0x350
       submit_bh_wbc+0x9c/0x2f0
       __block_write_full_page+0x299/0x5a0
       block_write_full_page+0x16b/0x180
       blkdev_writepage+0x18/0x20
       __writepage+0x42/0x80
       write_cache_pages+0x376/0x8a0
       generic_writepages+0xbe/0x110
       blkdev_writepages+0xe/0x10
       do_writepages+0x9b/0x180
       __filemap_fdatawrite_range+0x178/0x1c0
       file_write_and_wait_range+0x59/0xc0
       blkdev_fsync+0x46/0x80
       vfs_fsync_range+0x66/0x100
       do_fsync+0x3d/0x70
       __x64_sys_fsync+0x21/0x30
       do_syscall_64+0x77/0x230
       entry_SYSCALL_64_after_hwframe+0x49/0xbe
      
      Freed by task 9:
       save_stack+0x43/0xd0
       __kasan_slab_free+0x137/0x190
       kasan_slab_free+0xe/0x10
       kmem_cache_free+0xd3/0x380
       mempool_free_slab+0x17/0x20
       mempool_free+0x63/0x160
       bio_free+0x81/0xa0
       bio_put+0x59/0x60
       end_bio_bh_io_sync+0x5d/0x70
       bio_endio+0x1a7/0x360
       blk_update_request+0xd0/0x5b0
       end_clone_bio+0xa3/0xd0 [dm_mod]
       bio_endio+0x1a7/0x360
       blk_update_request+0xd0/0x5b0
       scsi_end_request+0x56/0x320 [scsi_mod]
       scsi_io_completion+0x7d6/0xb20 [scsi_mod]
       scsi_finish_command+0x1c0/0x280 [scsi_mod]
       scsi_softirq_done+0x19a/0x230 [scsi_mod]
       blk_mq_complete_request+0x160/0x240
       scsi_mq_done+0x50/0x1a0 [scsi_mod]
       srp_recv_done+0x515/0x1330 [ib_srp]
       __ib_process_cq+0xa0/0xf0 [ib_core]
       ib_poll_handler+0x38/0xa0 [ib_core]
       irq_poll_softirq+0xe8/0x1f0
       __do_softirq+0x128/0x60d
      
      The buggy address belongs to the object at ffff8801300e0640
       which belongs to the cache bio-0 of size 200
      The buggy address is located 144 bytes inside of
       200-byte region [ffff8801300e0640, ffff8801300e0708)
      The buggy address belongs to the page:
      page:ffffea0004c03800 count:1 mapcount:0 mapping:ffff88015a563a00 index:0x0 compound_mapcount: 0
      flags: 0x8000000000008100(slab|head)
      raw: 8000000000008100 dead000000000100 dead000000000200 ffff88015a563a00
      raw: 0000000000000000 0000000000330033 00000001ffffffff 0000000000000000
      page dumped because: kasan: bad access detected
      
      Memory state around the buggy address:
       ffff8801300e0580: fb fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc
       ffff8801300e0600: fc fc fc fc fc fc fc fc fb fb fb fb fb fb fb fb
      >ffff8801300e0680: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                                                       ^
       ffff8801300e0700: fb fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
       ffff8801300e0780: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      ==================================================================
      
      Cc: Kent Overstreet <kent.overstreet@gmail.com>
      Fixes: 0ba99ca4 ("block: Add warning for bi_next not NULL in bio_endio()")
      Acked-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
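
      To illustrate the pattern the revert restores, here is a simplified
      sketch of a clone completion handler that uses .bi_next to detect the
      final bio (not the exact dm code):

        static void end_clone_bio(struct bio *clone)
        {
                /* Valid only while bio_endio() leaves bi_next alone,
                 * which is exactly what commit 0ba99ca4 stopped doing.
                 */
                bool is_last = !clone->bi_next;

                /* ... account the bytes completed by this clone ... */
                if (is_last) {
                        /* only now is it safe to finish the original request */
                }
        }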
  12. 07 Jun 2018, 1 commit
  13. 03 Jun 2018, 1 commit
    • block: don't use blocking queue entered for recursive bio submits · cd4a4ae4
      Authored by Jens Axboe
      If we end up splitting a bio and the queue goes away between
      the initial submission and the later split submission, then we
      can block forever in blk_queue_enter() waiting for the reference
      to drop to zero. That drop will never happen, since we already
      hold a reference.
      
      Mark a split bio as already having entered the queue, so we can
      just use the live non-blocking queue enter variant.
      
      Thanks to Tetsuo Handa for the analysis.
      
      Reported-by: syzbot+c4f9cebf9d651f6e54de@syzkaller.appspotmail.com
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
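
      A sketch of the idea in C (the flag and helpers follow that era's
      block layer; treat the exact code as illustrative):

        /* In the split path: the parent bio already holds a queue
         * reference, so mark the resubmitted bio as having entered.
         */
        bio_set_flag(bio, BIO_QUEUE_ENTERED);

        /* In generic_make_request(): */
        if (bio_flagged(bio, BIO_QUEUE_ENTERED))
                blk_queue_enter_live(q);   /* non-blocking, ref already held */
        else if (blk_queue_enter(q, flags) < 0)
                goto fail;                 /* hypothetical error label */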
  14. 01 Jun 2018, 3 commits
  15. 31 May 2018, 1 commit
  16. 29 May 2018, 1 commit
    • blk-mq: Remove generation sequence · 12f5b931
      Authored by Keith Busch
      This patch simplifies the timeout handling by relying on the request
      reference counting to ensure the iterator is operating on an inflight
      and truly timed out request. Since the reference counting prevents the
      tag from being reallocated, the block layer no longer needs to prevent
      drivers from completing their requests while the timeout handler is
      operating on it: a driver completing a request is allowed to proceed to
      the next state without additional synchronization with the block layer.
      
      This also removes any need for generation sequence numbers, since the
      request's tag can no longer be reallocated to a new request while
      timeout handling is operating on it.
      
      To enable this, a refcount is added to struct request so that request
      users can be sure they're operating on the same request without it
      changing while they're processing it.  The request's tag won't be
      released for reuse until both the timeout handler and the completion
      are done with it.
      Signed-off-by: Keith Busch <keith.busch@intel.com>
      [hch: slight cleanups, added back submission side hctx lock, use cmpxchg
       for completions]
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
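
      A sketch of the reference-counting scheme (field and function names
      are illustrative rather than the exact patch):

        struct request {
                /* ... */
                refcount_t ref;     /* pins the tag until all users drop it */
        };

        /* Timeout iterator: only touch a request it managed to pin. */
        static void blk_mq_check_expired(struct request *rq)
        {
                if (!refcount_inc_not_zero(&rq->ref))
                        return;     /* completed and recycled, skip it */
                /* ... run ->timeout() without racing the completion ... */
                if (refcount_dec_and_test(&rq->ref))
                        __blk_mq_free_request(rq);  /* last user frees the tag */
        }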
  17. 15 May 2018, 2 commits
  18. 14 May 2018, 4 commits
  19. 09 May 2018, 3 commits
  20. 08 May 2018, 2 commits
    • block: Shorten interrupt disabled regions · 50864670
      Authored by Thomas Gleixner
      Commit 9c40cef2 ("sched: Move blk_schedule_flush_plug() out of
      __schedule()") moved the blk_schedule_flush_plug() call out of the
      interrupt/preempt disabled region in the scheduler. This makes it
      possible to replace local_irq_save/restore(flags) with
      local_irq_disable/enable() in blk_flush_plug_list().
      
      But it makes more sense to disable interrupts explicitly when the request
      queue is locked and reenable them when the request queue is unlocked. This
      shortens the interrupt disabled section, which is important when the plug
      list contains requests for more than one queue. The existing comment,
      which claims that interrupts must stay disabled around the whole loop, is
      misleading: the called functions can reenable interrupts unconditionally
      anyway, and the detached disabling obfuscates the scope badly:
      
       local_irq_save(flags);
         spin_lock(q->queue_lock);
         ...
         queue_unplugged(q...);
           scsi_request_fn();
             spin_unlock_irq(q->queue_lock);
      
      -------------------^^^ ????
      
             spin_lock_irq(q->queue_lock);
           spin_unlock(q->queue_lock);
       local_irq_restore(flags);
      
      Aside from that, the detached interrupt disabling is a constant pain for
      PREEMPT_RT, as it requires patching and special casing when RT is enabled,
      while with the spin_*_irq() variants this happens automatically.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Link: http://lkml.kernel.org/r/20110622174919.025446432@linutronix.de
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
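
      In C, the change boils down to the following pattern (sketch; the
      list and member names are hypothetical):

        /* Before: one detached irq-off region spans every queue in the plug. */
        local_irq_save(flags);
        list_for_each_entry(q, &plug_queues, plug_node) {
                spin_lock(q->queue_lock);
                /* ... dispatch requests for q ... */
                spin_unlock(q->queue_lock);
        }
        local_irq_restore(flags);

        /* After: interrupts are off only while a queue lock is held. */
        list_for_each_entry(q, &plug_queues, plug_node) {
                spin_lock_irq(q->queue_lock);
                /* ... dispatch requests for q ... */
                spin_unlock_irq(q->queue_lock);
        }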
    • block: Remove redundant WARN_ON() · 656cb6d0
      Authored by Anna-Maria Gleixner
      Commit 2fff8a92 ("block: Check locking assumptions at runtime") added a
      lockdep_assert_held(q->queue_lock) which makes the WARN_ON() redundant
      because lockdep will detect and warn about context violations.
      
      The unconditional WARN_ON() does not provide real additional value, so it
      can be removed.
      Signed-off-by: Anna-Maria Gleixner <anna-maria@linutronix.de>
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
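
      The resulting pattern, as a sketch (hypothetical function):

        static void some_blk_helper(struct request_queue *q)
        {
                /* lockdep already reports a missing lock here ... */
                lockdep_assert_held(q->queue_lock);
                /* ... so an unconditional WARN_ON() re-checking the same
                 * assumption adds nothing and can be dropped.
                 */
        }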
  21. 17 Apr 2018, 1 commit
    • blk-mq: start request gstate with gen 1 · f4560231
      Authored by Jianchao Wang
      rq->gstate and rq->aborted_gstate are both zero before rqs are
      allocated. If we have a small timeout, then when the timer fires there
      could be rqs that were never allocated, and also rqs that have been
      allocated but not yet initialized and started. At that moment,
      rq->gstate and rq->aborted_gstate are both 0, so
      blk_mq_terminate_expired will identify such an rq as timed out and
      invoke .timeout early.
      
      For SCSI, this causes scsi_times_out to be invoked before the
      scsi_cmnd is initialized; scsi_cmnd->device is still NULL at that
      point, and we get a crash.
      
      Cc: Bart Van Assche <bart.vanassche@wdc.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Ming Lei <ming.lei@redhat.com>
      Cc: Martin Steigerwald <Martin@Lichtvoll.de>
      Cc: stable@vger.kernel.org
      Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
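
      The fix amounts to one extra initialization at allocation time
      (sketch; MQ_RQ_GEN_INC is quoted as that era's generation increment
      constant, an assumption):

        /* Start gstate at generation 1 so a never-started request cannot
         * compare equal to aborted_gstate == 0 and be expired early.
         */
        WRITE_ONCE(rq->gstate, MQ_RQ_GEN_INC);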
  22. 15 Apr 2018, 1 commit
    • block: do not use interruptible wait anywhere · 1dc3039b
      Authored by Alan Jenkins
      When blk_queue_enter() waits for a queue to unfreeze, or unset the
      PREEMPT_ONLY flag, do not allow it to be interrupted by a signal.
      
      The PREEMPT_ONLY flag was introduced later in commit 3a0a5299
      ("block, scsi: Make SCSI quiesce and resume work reliably").  Note the SCSI
      device is resumed asynchronously, i.e. after un-freezing userspace tasks.
      
      So that commit exposed the bug as a regression in v4.15.  A mysterious
      SIGBUS (or -EIO) sometimes happened during the time the device was being
      resumed.  Most frequently, there was no kernel log message, and we saw Xorg
      or Xwayland killed by SIGBUS.[1]
      
      [1] E.g. https://bugzilla.redhat.com/show_bug.cgi?id=1553979
      
      Without this fix, I get an IO error in this test:
      
      # dd if=/dev/sda of=/dev/null iflag=direct & \
        while killall -SIGUSR1 dd; do sleep 0.1; done & \
        echo mem > /sys/power/state ; \
        sleep 5; killall dd  # stop after 5 seconds
      
      The interruptible wait was added to blk_queue_enter in
      commit 3ef28e83 ("block: generic request_queue reference counting").
      Before then, the interruptible wait was only in blk-mq, but I don't think
      it could ever have been correct.
      Reviewed-by: Bart Van Assche <bart.vanassche@wdc.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Alan Jenkins <alan.christopher.jenkins@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
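
      A sketch of the resulting wait in blk_queue_enter(), simplified from
      that era's code:

        /* wait_event() instead of wait_event_interruptible(): a signal,
         * e.g. the SIGUSR1s sent to dd above, can no longer abort the wait.
         */
        wait_event(q->mq_freeze_wq,
                   (atomic_read(&q->mq_freeze_depth) == 0 &&
                    (preempt || !blk_queue_preempt_only(q))) ||
                   blk_queue_dying(q));
        if (blk_queue_dying(q))
                return -ENODEV;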
  23. 11 Apr 2018, 1 commit
  24. 20 Mar 2018, 1 commit
    • block: Change a rcu_read_{lock,unlock}_sched() pair into rcu_read_{lock,unlock}() · 818e0fa2
      Authored by Bart Van Assche
      scsi_device_quiesce() uses synchronize_rcu() to guarantee that the
      effect of blk_set_preempt_only() will be visible for percpu_ref_tryget()
      calls that occur after the queue unfreeze by using the approach
      explained in https://lwn.net/Articles/573497/. The rcu read lock and
      unlock calls in blk_queue_enter() form a pair with the synchronize_rcu()
      call in scsi_device_quiesce(). Both scsi_device_quiesce() and
      blk_queue_enter() must either use regular RCU or RCU-sched.
      Since neither the RCU-protected code in blk_queue_enter() nor
      blk_queue_usage_counter_release() sleeps, regular RCU protection
      is sufficient. Note: scsi_device_quiesce() does not have to be
      modified since it already uses synchronize_rcu().
      Reported-by: Tejun Heo <tj@kernel.org>
      Fixes: 3a0a5299 ("block, scsi: Make SCSI quiesce and resume work reliably")
      Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
      Acked-by: Tejun Heo <tj@kernel.org>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Hannes Reinecke <hare@suse.com>
      Cc: Ming Lei <ming.lei@redhat.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Johannes Thumshirn <jthumshirn@suse.de>
      Cc: Oleksandr Natalenko <oleksandr@natalenko.name>
      Cc: Martin Steigerwald <martin@lichtvoll.de>
      Cc: stable@vger.kernel.org # v4.15
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
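
      A simplified sketch of the reader side after the change:

        /* Plain RCU now pairs with the synchronize_rcu() in
         * scsi_device_quiesce(); the _sched flavor is no longer needed.
         */
        rcu_read_lock();                /* was: rcu_read_lock_sched() */
        if (percpu_ref_tryget_live(&q->q_usage_counter))
                success = true;
        rcu_read_unlock();              /* was: rcu_read_unlock_sched() */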
  25. 18 Mar 2018, 1 commit
  26. 09 Mar 2018, 2 commits