1. 04 May 2017, 11 commits
  2. 03 May 2017, 1 commit
  3. 02 May 2017, 3 commits
  4. 28 Apr 2017, 4 commits
  5. 27 Apr 2017, 10 commits
  6. 24 Apr 2017, 1 commit
  7. 22 Apr 2017, 2 commits
    • block: get rid of blk_integrity_revalidate() · 19b7ccf8
      Committed by Ilya Dryomov
      Commit 25520d55 ("block: Inline blk_integrity in struct gendisk")
      introduced blk_integrity_revalidate(), which seems to assume ownership
      of the stable pages flag and unilaterally clears it if no blk_integrity
      profile is registered:
      
          if (bi->profile)
                  disk->queue->backing_dev_info->capabilities |=
                          BDI_CAP_STABLE_WRITES;
          else
                  disk->queue->backing_dev_info->capabilities &=
                          ~BDI_CAP_STABLE_WRITES;
      
      It's called from revalidate_disk() and rescan_partitions(), making it
      impossible to enable stable pages for drivers that support partitions
      and don't use blk_integrity: while the call in revalidate_disk() can be
      trivially worked around (see zram, which doesn't support partitions and
      hence gets away with zram_revalidate_disk()), rescan_partitions() can
      be triggered from userspace at any time.  This breaks rbd, where the
      ceph messenger is responsible for generating/verifying CRCs.
      
      Since blk_integrity_{un,}register() "must" be used for (un)registering
      the integrity profile with the block layer, move the BDI_CAP_STABLE_WRITES
      setting there (a sketch follows this entry).  This way drivers that call
      blk_integrity_register() and use the integrity infrastructure won't
      interfere with drivers that don't but still want stable pages.
      
      Fixes: 25520d55 ("block: Inline blk_integrity in struct gendisk")
      Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Mike Snitzer <snitzer@redhat.com>
      Cc: stable@vger.kernel.org # 4.4+, needs backporting
      Tested-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
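      A minimal sketch of the resulting ownership (an editor's illustration,
      not the verbatim patch), assuming the 4.11-era layout where
      backing_dev_info is a pointer and the profile lives in
      disk->queue->integrity; the profile copy/zeroing is elided:

          void blk_integrity_register(struct gendisk *disk,
                                      struct blk_integrity *template)
          {
                  /* ... copy *template into disk->queue->integrity ... */

                  /* Stable pages are claimed here, and only here. */
                  disk->queue->backing_dev_info->capabilities |=
                          BDI_CAP_STABLE_WRITES;
          }

          void blk_integrity_unregister(struct gendisk *disk)
          {
                  disk->queue->backing_dev_info->capabilities &=
                          ~BDI_CAP_STABLE_WRITES;
                  memset(&disk->queue->integrity, 0,
                         sizeof(struct blk_integrity));
          }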
    • blk-mq: Fix preempt count imbalance · abc25a69
      Committed by Bart Van Assche
      Avoid that the following kernel bug gets triggered:
      
      BUG: sleeping function called from invalid context at ./include/linux/buffer_head.h:349
      in_atomic(): 1, irqs_disabled(): 0, pid: 8019, name: find
      CPU: 10 PID: 8019 Comm: find Tainted: G        W I     4.11.0-rc4-dbg+ #2
      Call Trace:
       dump_stack+0x68/0x93
       ___might_sleep+0x16e/0x230
       __might_sleep+0x4a/0x80
       __ext4_get_inode_loc+0x1e0/0x4e0
       ext4_iget+0x70/0xbc0
       ext4_iget_normal+0x2f/0x40
       ext4_lookup+0xb6/0x1f0
       lookup_slow+0x104/0x1e0
       walk_component+0x19a/0x330
       path_lookupat+0x4b/0x100
       filename_lookup+0x9a/0x110
       user_path_at_empty+0x36/0x40
       vfs_statx+0x67/0xc0
       SYSC_newfstatat+0x20/0x40
       SyS_newfstatat+0xe/0x10
       entry_SYSCALL_64_fastpath+0x18/0xad
      
      This happens because the big if/else chain in blk_mq_make_request()
      lacks a final else section that also drops the ctx.  Add that (abridged
      sketch below).
      
      Fixes: b00c53e8 ("blk-mq: fix schedule-while-atomic with scheduler attached")
      Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
      Cc: Omar Sandoval <osandov@fb.com>
      
      Added a bit more to the commit log.
      Signed-off-by: Jens Axboe <axboe@fb.com>
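      An abridged sketch of the tail of that if/else chain after the fix;
      earlier branches and their bodies are elided:

          if (unlikely(is_flush_fua)) {
                  blk_mq_put_ctx(data.ctx);
                  /* ... flush/FUA insertion ... */
          } else if (q->elevator) {
                  blk_mq_put_ctx(data.ctx);
                  /* ... insert via the attached I/O scheduler ... */
          } else {
                  /* Previously missing: without this final else the ctx
                   * (and the preempt count bumped when it was taken)
                   * leaked, leaving the task atomic. */
                  blk_mq_put_ctx(data.ctx);
                  /* ... plain, scheduler-less queueing ... */
          }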
  8. 21 Apr 2017, 8 commits
    • blk-stat: kill blk_stat_rq_ddir() · 99c749a4
      Committed by Jens Axboe
      No point in providing and exporting this helper; there's just one
      (real) user of it, which can call rq_data_dir() directly (see the
      sketch after this entry).
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@fb.com>
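      For illustration (the callback name here is hypothetical), what the
      removed helper boiled down to:

          #include <linux/blkdev.h>

          /* Bucket by data direction, without a dedicated helper. */
          static int example_stat_bkt(const struct request *rq)
          {
                  return rq_data_dir(rq);   /* 0 == READ, 1 == WRITE */
          }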
    • blk-mq: Remove blk_mq_sched_move_to_dispatch() · 246665db
      Committed by Bart Van Assche
      Commit c13660a0 ("blk-mq-sched: change ->dispatch_requests()
      to ->dispatch_request()") removed the last user of this function.
      Hence also remove the function itself.
      Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
      Cc: Omar Sandoval <osandov@fb.com>
      Cc: Hannes Reinecke <hare@suse.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
    • blk-mq: add might_sleep check to blk_mq_get_driver_tag() · 5feeacdd
      Committed by Jens Axboe
      If the caller passes in wait=true, it has to be able to block for a
      driver tag.  We just had a bug where flush insertion would block on
      tag allocation while we had preemption disabled.  Ensure that we
      catch cases like that earlier next time (sketch below).
      Reviewed-by: Bart Van Assche <Bart.VanAssche@sandisk.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
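      A sketch of the guard, abbreviated to the relevant lines
      (might_sleep_if() is the stock kernel annotation):

          bool blk_mq_get_driver_tag(struct request *rq,
                                     struct blk_mq_hw_ctx **hctx, bool wait)
          {
                  /* If the caller claims it may block, assert that now,
                   * while tags are still plentiful, not only once the
                   * pool runs dry. */
                  might_sleep_if(wait);

                  /* ... driver tag allocation elided ... */
                  return true;
          }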
    • blk-mq: Fix poll_stat for new size-based bucketing. · 0206319f
      Committed by Stephen Bates
      Fixes an issue where the size of the poll_stat array in request_queue
      does not match the size expected by the new size-based bucketing for
      IO completion polling (see the sketch after this entry).
      
      Fixes: 720b8ccc ("blk-mq: Add a polling specific stats function")
      Signed-off-by: Stephen Bates <sbates@raithlin.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
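      A sketch of the shape of the fix, assuming 2 directions x 8 size
      buckets; one shared constant sizes both the bucketing and the storage
      so they cannot drift apart again:

          /* Must stay consistent with the bucket callback. */
          #define BLK_MQ_POLL_STATS_BKTS 16

          struct request_queue {
                  /* ... */
                  struct blk_rq_stat poll_stat[BLK_MQ_POLL_STATS_BKTS];
                  /* ... */
          };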
    • blk-mq: fix schedule-while-atomic with scheduler attached · b00c53e8
      Committed by Jens Axboe
      We must drop the ctx before calling blk_mq_sched_insert_request()
      with can_block=true; otherwise a flush request can block on insertion
      if we are currently out of tags (abridged sketch below).
      
      [   47.667190] BUG: scheduling while atomic: jbd2/sda2-8/2089/0x00000002
      [   47.674493] Modules linked in: x86_pkg_temp_thermal btrfs xor zlib_deflate raid6_pq sr_mod cdre
      [   47.690572] Preemption disabled at:
      [   47.690584] [<ffffffff81326c7c>] blk_mq_sched_get_request+0x6c/0x280
      [   47.701764] CPU: 1 PID: 2089 Comm: jbd2/sda2-8 Not tainted 4.11.0-rc7+ #271
      [   47.709630] Hardware name: Dell Inc. PowerEdge T630/0NT78X, BIOS 2.3.4 11/09/2016
      [   47.718081] Call Trace:
      [   47.720903]  dump_stack+0x4f/0x73
      [   47.724694]  ? blk_mq_sched_get_request+0x6c/0x280
      [   47.730137]  __schedule_bug+0x6c/0xc0
      [   47.734314]  __schedule+0x559/0x780
      [   47.738302]  schedule+0x3b/0x90
      [   47.741899]  io_schedule+0x11/0x40
      [   47.745788]  blk_mq_get_tag+0x167/0x2a0
      [   47.750162]  ? remove_wait_queue+0x70/0x70
      [   47.754901]  blk_mq_get_driver_tag+0x92/0xf0
      [   47.759758]  blk_mq_sched_insert_request+0x134/0x170
      [   47.765398]  ? blk_account_io_start+0xd0/0x270
      [   47.770679]  blk_mq_make_request+0x1b2/0x850
      [   47.775766]  generic_make_request+0xf7/0x2d0
      [   47.780860]  submit_bio+0x5f/0x120
      [   47.784979]  ? submit_bio+0x5f/0x120
      [   47.789631]  submit_bh_wbc.isra.46+0x10d/0x130
      [   47.794902]  submit_bh+0xb/0x10
      [   47.798719]  journal_submit_commit_record+0x190/0x210
      [   47.804686]  ? _raw_spin_unlock+0x13/0x30
      [   47.809480]  jbd2_journal_commit_transaction+0x180a/0x1d00
      [   47.815925]  kjournald2+0xb6/0x250
      [   47.820022]  ? kjournald2+0xb6/0x250
      [   47.824328]  ? remove_wait_queue+0x70/0x70
      [   47.829223]  kthread+0x10e/0x140
      [   47.833147]  ? commit_timeout+0x10/0x10
      [   47.837742]  ? kthread_create_on_node+0x40/0x40
      [   47.843122]  ret_from_fork+0x29/0x40
      
      Fixes: a4d907b6 ("blk-mq: streamline blk_mq_make_request")
      Reviewed-by: Omar Sandoval <osandov@fb.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
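      An abridged sketch of the reordered elevator branch, assuming the
      4.11-era blk_mq_sched_insert_request() signature (at_head, run_queue,
      async, can_block); unrelated branches are elided:

          } else if (q->elevator) {
                  /* Drop the ctx (re-enabling preemption) *before* an
                   * insertion that may sleep on tag allocation. */
                  blk_mq_put_ctx(data.ctx);
                  blk_mq_bio_to_request(rq, bio);
                  blk_mq_sched_insert_request(rq, false, true, true, true);
          }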
    • blk-mq: Add a polling specific stats function · 720b8ccc
      Committed by Stephen Bates
      Rather than bucketing IO statistics based on direction only, also
      bucket based on IO size.  This leads to improved polling performance.
      Update the bucket callback function and use it in the polling latency
      estimation (see the sketch after this entry).
      Signed-off-by: Stephen Bates <sbates@raithlin.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
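      A sketch of a direction-plus-size bucket callback in this spirit:
      even buckets for reads, odd for writes, one pair per power of two of
      request size starting at 512 bytes:

          #include <linux/blkdev.h>
          #include <linux/log2.h>

          static int blk_mq_poll_stats_bkt(const struct request *rq)
          {
                  int ddir = rq_data_dir(rq);   /* 0 read, 1 write */
                  int bucket = ddir + 2 * (ilog2(blk_rq_bytes(rq)) - 9);

                  if (bucket < 0)
                          return -1;            /* < 512B: don't count */
                  if (bucket >= BLK_MQ_POLL_STATS_BKTS)
                          /* clamp oversized IO into the top pair */
                          return ddir + BLK_MQ_POLL_STATS_BKTS - 2;
                  return bucket;
          }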
    • blk-stat: convert blk-stat bucket callback to signed · a37244e4
      Committed by Stephen Bates
      In order to allow filtering of IO based on properties of the request
      other than direction, allow the bucket function to return an int.

      If the bucket callback returns a negative value, do not count the
      request in the stats accumulation (see the sketch after this entry).
      Signed-off-by: Stephen Bates <sbates@raithlin.com>
      
      Fixed up the Kyber scheduler stat callback.
      Signed-off-by: Jens Axboe <axboe@fb.com>
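      A sketch of the contract; the helper and typedef names here are
      illustrative, not the file's actual ones:

          /* >= 0 selects a bucket; < 0 means "don't account this rq". */
          typedef int (*blk_stat_bucket_fn)(const struct request *);

          static void stat_account_one(struct blk_rq_stat *stats,
                                       blk_stat_bucket_fn bucket_fn,
                                       const struct request *rq, u64 value)
          {
                  int bucket = bucket_fn(rq);

                  if (bucket < 0)
                          return;   /* filtered out */
                  /* ... add @value into stats[bucket] ... */
          }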
    • blk-mq: fix potential oops with polling and blk-mq scheduler · 3a07bb1d
      Committed by Jens Axboe
      If we have a scheduler attached, blk_mq_tag_to_rq() on the scheduler
      tags will return NULL if a request is no longer in flight.  This is
      different from the normal tags, where it always returns the fixed
      request.  Check for this condition when polling, in case we happen to
      enter polling for a completed request.
      
      The request address remains valid, so this check and return should be
      perfectly safe (abridged sketch below).
      
      Fixes: bd166ef1 ("blk-mq-sched: add framework for MQ capable IO schedulers")
      Tested-by: Stephen Bates <sbates@raithlin.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
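      An abridged sketch of the poll entry after the fix;
      blk_qc_t_is_internal() distinguishes scheduler-tag cookies from
      driver-tag cookies:

          if (!blk_qc_t_is_internal(cookie)) {
                  rq = blk_mq_tag_to_rq(hctx->tags, blk_qc_t_to_tag(cookie));
          } else {
                  rq = blk_mq_tag_to_rq(hctx->sched_tags,
                                        blk_qc_t_to_tag(cookie));
                  /* Scheduler tags are cleared on completion, so a
                   * completed request comes back NULL here; the memory
                   * stays valid, we just must not poll on it. */
                  if (!rq)
                          return false;
          }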