1. 02 Dec 2021, 1 commit
  2. 29 Nov 2021, 2 commits
  3. 03 Nov 2021, 1 commit
  4. 29 Oct 2021, 1 commit
  5. 22 Oct 2021, 1 commit
  6. 19 Oct 2021, 2 commits
  7. 18 Oct 2021, 2 commits
  8. 04 Oct 2021, 1 commit
  9. 22 Jun 2021, 2 commits
  10. 17 Apr 2021, 1 commit
    • blk-mq: Fix spurious debugfs directory creation during initialization · 1e91e28e
      Authored by Saravanan D
      blk_mq_debugfs_register_sched_hctx(), called from the
      device_add_disk()->elevator_init_mq()->blk_mq_init_sched()
      initialization sequence, does not yet have the relevant parent
      directory set up and thus spuriously attempts to create a "sched"
      directory at the root mount of debugfs for every hw queue detected
      on the block device.
      
      dmesg
      ...
      debugfs: Directory 'sched' with parent '/' already present!
      debugfs: Directory 'sched' with parent '/' already present!
      .
      .
      debugfs: Directory 'sched' with parent '/' already present!
      ...
      
      The parent debugfs directory for hw queues is properly set up by
      device_add_disk()->blk_register_queue()->blk_mq_debugfs_register()
      ->blk_mq_debugfs_register_hctx() later in the block device
      initialization sequence.
      
      A simple check for debugfs_dir has been added to thwart premature
      debugfs directory/file creation attempts.
      Signed-off-by: Saravanan D <saravanand@fb.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      1e91e28e
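      A minimal sketch of the kind of guard described above, modelled on
      block/blk-mq-debugfs.c (field and helper names such as
      hctx->debugfs_dir, e->hctx_debugfs_attrs and debugfs_create_files()
      follow that file; this is an illustration, not the verbatim patch):

      void blk_mq_debugfs_register_sched_hctx(struct request_queue *q,
                                              struct blk_mq_hw_ctx *hctx)
      {
              struct elevator_type *e = q->elevator->type;

              /*
               * blk_mq_init_sched() runs before blk_register_queue() has
               * created hctx->debugfs_dir; skip registration for now and
               * rely on blk_mq_debugfs_register_hctx() to call us again
               * once the parent directory is in place.
               */
              if (!hctx->debugfs_dir)
                      return;

              if (!e->hctx_debugfs_attrs)
                      return;

              hctx->sched_debugfs_dir = debugfs_create_dir("sched",
                                                           hctx->debugfs_dir);
              debugfs_create_files(hctx->sched_debugfs_dir, hctx,
                                   e->hctx_debugfs_attrs);
      }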
  11. 03 Apr 2021, 1 commit
  12. 02 Mar 2021, 1 commit
  13. 08 Jan 2021, 1 commit
  14. 30 Dec 2020, 1 commit
  15. 10 Dec 2020, 1 commit
  16. 25 Sep 2020, 1 commit
  17. 04 Sep 2020, 2 commits
  18. 01 Jul 2020, 1 commit
  19. 29 Jun 2020, 1 commit
  20. 24 Jun 2020, 1 commit
    • block: create the request_queue debugfs_dir on registration · 85e0cbbb
      Authored by Luis Chamberlain
      We were creating the request_queue debugfs_dir only for
      make_request block drivers (multiqueue), but never for
      request-based block drivers. We did this because we were only
      creating additional non-blktrace debugfs files in that directory
      for make_request drivers. However, since blktrace *always* creates
      that directory anyway, we special-cased the use of that directory
      in blktrace. Besides being an eyesore, this exposes request-based
      block drivers to the same fragile debugfs race that used to exist
      with make_request block drivers: if we start adding files to that
      directory, we can later run into a double removal of dentries on
      the directory unless blktrace handles this carefully.
      
      Instead, simplify things by always creating the request_queue
      debugfs_dir on request_queue registration. Also rename the mutex to
      reflect the fact that it is now used outside of the blktrace context.
      Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      85e0cbbb
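      A hedged sketch of the approach, assuming field names such as
      q->debugfs_dir and q->debugfs_mutex (the renamed mutex) and the
      existing blk_debugfs_root; the helper name below is invented for
      illustration, as the real change lives inside blk_register_queue():

      /* create the per-queue debugfs directory unconditionally at
       * registration time, for request-based and blk-mq queues alike */
      static void blk_debugfs_register_queue(struct request_queue *q)
      {
              mutex_lock(&q->debugfs_mutex);
              q->debugfs_dir = debugfs_create_dir(kobject_name(q->kobj.parent),
                                                  blk_debugfs_root);
              mutex_unlock(&q->debugfs_mutex);

              /* blk-mq queues additionally get their non-blktrace files */
              if (queue_is_mq(q))
                      blk_mq_debugfs_register(q);
      }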
  21. 30 May 2020, 1 commit
    • blk-mq: drain I/O when all CPUs in a hctx are offline · bf0beec0
      Authored by Ming Lei
      Most blk-mq drivers depend on managed IRQs' automatic affinity to
      set up their queue mapping. Thomas mentioned the following point[1]:
      
      "That was the constraint of managed interrupts from the very beginning:
      
       The driver/subsystem has to quiesce the interrupt line and the associated
       queue _before_ it gets shutdown in CPU unplug and not fiddle with it
       until it's restarted by the core when the CPU is plugged in again."
      
      However, the current blk-mq implementation doesn't quiesce the hw
      queue before the last CPU in the hctx is shut down. Even worse,
      CPUHP_BLK_MQ_DEAD is a cpuhp state handled after the CPU is down, so
      there isn't any chance to quiesce the hctx before shutting down the CPU.
      
      Add a new CPUHP_AP_BLK_MQ_ONLINE state to stop allocating from blk-mq
      hctxs whose last CPU is going away, and to wait for completion of
      in-flight requests. This guarantees that there is no in-flight I/O
      before shutting down the managed IRQ.
      
      Add a BLK_MQ_F_STACKING flag and set it for dm-rq and loop, so we
      don't need to wait for completion of in-flight requests from these
      drivers, avoiding a potential deadlock. It is safe to do this for
      stacking drivers, as they do not use interrupts at all and their I/O
      completions are triggered by the underlying devices' I/O completions.
      
      [1] https://lore.kernel.org/linux-block/alpine.DEB.2.21.1904051331270.1802@nanos.tec.linutronix.de/
      
      [hch: different retry mechanism, merged two patches, minor cleanups]
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Hannes Reinecke <hare@suse.de>
      Reviewed-by: Daniel Wagner <dwagner@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      bf0beec0
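      A simplified, hedged sketch of the mechanism (callback and helper
      names such as blk_mq_hctx_notify_offline(), blk_mq_last_cpu_in_hctx()
      and blk_mq_hctx_has_requests() follow the upstream implementation in
      block/blk-mq.c; the queue-reference/retry details are omitted here):

      /* AP-level teardown callback: runs while the outgoing CPU can still
       * service the hctx's managed interrupt */
      static int blk_mq_hctx_notify_offline(unsigned int cpu,
                                            struct hlist_node *node)
      {
              struct blk_mq_hw_ctx *hctx = hlist_entry_safe(node,
                              struct blk_mq_hw_ctx, cpuhp_online);

              /* only act when the CPU going down is the hctx's last one */
              if (!cpumask_test_cpu(cpu, hctx->cpumask) ||
                  !blk_mq_last_cpu_in_hctx(cpu, hctx))
                      return 0;

              /* stop the tag allocator from handing out new requests */
              set_bit(BLK_MQ_S_INACTIVE, &hctx->state);
              smp_mb__after_atomic();

              /* drain whatever is still in flight on this hctx */
              while (blk_mq_hctx_has_requests(hctx))
                      msleep(5);

              return 0;
      }

      /* counterpart: mark the hctx active again when one of its CPUs
       * comes back online */
      static int blk_mq_hctx_notify_online(unsigned int cpu,
                                           struct hlist_node *node)
      {
              struct blk_mq_hw_ctx *hctx = hlist_entry_safe(node,
                              struct blk_mq_hw_ctx, cpuhp_online);

              if (cpumask_test_cpu(cpu, hctx->cpumask))
                      clear_bit(BLK_MQ_S_INACTIVE, &hctx->state);
              return 0;
      }

      static int __init blk_mq_init(void)
      {
              /* hctxs add themselves as instances of this state, except
               * those flagged BLK_MQ_F_STACKING (dm-rq, loop) */
              cpuhp_setup_state_multi(CPUHP_AP_BLK_MQ_ONLINE, "block/mq:online",
                                      blk_mq_hctx_notify_online,
                                      blk_mq_hctx_notify_offline);
              return 0;
      }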
  22. 23 Apr 2020, 1 commit
    • block: remove RQF_COPY_USER · e64a0e16
      Authored by Christoph Hellwig
      The RQF_COPY_USER flag is set for bios where the passthrough request
      mapping helpers decided that bounce buffering is required. It is then
      used to pad the scatterlist for drivers that require it. But given
      that non-passthrough requests are by definition aligned, and directly
      mapped passthrough requests must be aligned as well, it is not
      actually required at all.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      e64a0e16
  23. 07 Jul 2019, 1 commit
  24. 21 Jun 2019, 1 commit
  25. 20 Jun 2019, 3 commits
  26. 17 Jun 2019, 1 commit
  27. 15 Jun 2019, 1 commit
  28. 13 Jun 2019, 1 commit
  29. 01 May 2019, 1 commit
  30. 15 Feb 2019, 2 commits
  31. 10 Feb 2019, 1 commit
  32. 06 Feb 2019, 1 commit