1. 18 Oct 2021 (9 commits)
  2. 24 Aug 2021 (1 commit)
  3. 06 Aug 2021 (1 commit)
  4. 01 Jul 2021 (1 commit)
  5. 18 Jun 2021 (1 commit)
  6. 12 Jun 2021 (4 commits)
  7. 22 Apr 2021 (1 commit)
  8. 05 Mar 2021 (2 commits)
  9. 10 Feb 2021 (1 commit)
  10. 25 Jan 2021 (2 commits)
  11. 10 Dec 2020 (2 commits)
  12. 08 Dec 2020 (1 commit)
    • blk-mq: add new API of blk_mq_hctx_set_fq_lock_class · fb01a293
      Committed by Ming Lei
      flush_end_io() may be called recursively from some drivers, such as
      nvme-loop, so lockdep may complain about 'possible recursive locking'.
      Commit b3c6a599 ("block: Fix a lockdep complaint triggered by
      request queue flushing") tried to address this issue by assigning a
      dynamically allocated lock class per flush queue. That solution adds
      a synchronize_rcu() to each hctx's release handler and causes a
      horrible SCSI MQ probe delay (more than half an hour on megaraid sas).
      
      Add a new API, blk_mq_hctx_set_fq_lock_class(), for these drivers, so
      they only need to use a driver-specific lock class to avoid the
      'possible recursive locking' lockdep warning.
      Tested-by: Kashyap Desai <kashyap.desai@broadcom.com>
      Reported-by: Qian Cai <cai@redhat.com>
      Cc: Sumit Saxena <sumit.saxena@broadcom.com>
      Cc: John Garry <john.garry@huawei.com>
      Cc: Kashyap Desai <kashyap.desai@broadcom.com>
      Cc: Bart Van Assche <bvanassche@acm.org>
      Cc: Hannes Reinecke <hare@suse.de>
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      Reviewed-by: Hannes Reinecke <hare@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      fb01a293
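      To make the new hook concrete, here is a minimal sketch (not the actual
      nvme-loop change) of where a driver whose flush_end_io() can recurse
      would call blk_mq_hctx_set_fq_lock_class(): from its blk_mq_ops
      ->init_hctx callback, passing a static lock_class_key. All names
      prefixed "example_" are hypothetical; only the API call itself comes
      from the commit above.

          /*
           * Hedged sketch, assuming an nvme-loop-style driver: assign a
           * driver-specific flush-queue lock class from ->init_hctx().
           * "example_" names are hypothetical.
           */
          #include <linux/blk-mq.h>
          #include <linux/lockdep.h>

          /* One static key shared by every hctx of this driver. */
          static struct lock_class_key example_hctx_fq_lock_key;

          static int example_init_hctx(struct blk_mq_hw_ctx *hctx,
                                       void *driver_data,
                                       unsigned int hctx_idx)
          {
                  /*
                   * Give hctx->fq->mq_flush_lock a driver-specific lock class
                   * so lockdep can tell the nested flush queues apart and no
                   * longer reports 'possible recursive locking'.
                   */
                  blk_mq_hctx_set_fq_lock_class(hctx, &example_hctx_fq_lock_key);
                  return 0;
          }

          /* Wired up through the driver's blk_mq_ops, e.g.:
           *      .init_hctx = example_init_hctx,
           */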
  13. 02 Dec 2020 (1 commit)
  14. 29 Oct 2020 (1 commit)
  15. 04 Sep 2020 (4 commits)
  16. 02 Sep 2020 (1 commit)
  17. 29 Jul 2020 (1 commit)
  18. 01 Jul 2020 (1 commit)
  19. 30 Jun 2020 (1 commit)
  20. 29 Jun 2020 (1 commit)
  21. 24 Jun 2020 (2 commits)
  22. 30 May 2020 (1 commit)
    • blk-mq: drain I/O when all CPUs in a hctx are offline · bf0beec0
      Committed by Ming Lei
      Most blk-mq drivers depend on managed IRQs' auto-affinity to set up
      their queue mapping. Thomas made the following point [1]:
      
      "That was the constraint of managed interrupts from the very beginning:
      
       The driver/subsystem has to quiesce the interrupt line and the associated
       queue _before_ it gets shutdown in CPU unplug and not fiddle with it
       until it's restarted by the core when the CPU is plugged in again."
      
      However, the current blk-mq implementation doesn't quiesce the hw queue
      before the last CPU in the hctx is shut down.  Even worse,
      CPUHP_BLK_MQ_DEAD is a cpuhp state handled after the CPU is down, so
      there is no chance to quiesce the hctx before shutting down the CPU.
      
      Add a new CPUHP_AP_BLK_MQ_ONLINE state to stop allocating from blk-mq
      hctxs whose last CPU is going away, and wait for completion of in-flight
      requests.  This guarantees that there is no in-flight I/O before the
      managed IRQ is shut down.
      
      Add a BLK_MQ_F_STACKING flag and set it for dm-rq and loop, so we don't
      wait for completion of in-flight requests from these drivers, avoiding a
      potential deadlock. This is safe for stacking drivers because they do not
      use interrupts at all; their I/O completions are triggered by the
      underlying devices' completions.
      
      [1] https://lore.kernel.org/linux-block/alpine.DEB.2.21.1904051331270.1802@nanos.tec.linutronix.de/
      
      [hch: different retry mechanism, merged two patches, minor cleanups]
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Hannes Reinecke <hare@suse.de>
      Reviewed-by: Daniel Wagner <dwagner@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      bf0beec0
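      For the BLK_MQ_F_STACKING side of the commit above, a hedged sketch of
      how a loop/dm-rq-style stacking driver might mark its tag set so that
      the new CPUHP_AP_BLK_MQ_ONLINE handling skips draining its hctxs when
      their last CPU goes offline. Every "example_" name is hypothetical;
      only the flag and the blk-mq calls are real, and the queue_rq body is
      a placeholder for resubmission to the lower device.

          /*
           * Hedged sketch of a stacking driver setting BLK_MQ_F_STACKING.
           * Completions come from the underlying device, not from a managed
           * IRQ of this driver, so blk-mq need not wait for its in-flight
           * requests on CPU unplug (avoiding the deadlock noted above).
           */
          #include <linux/blk-mq.h>
          #include <linux/numa.h>

          static blk_status_t example_queue_rq(struct blk_mq_hw_ctx *hctx,
                                               const struct blk_mq_queue_data *bd)
          {
                  /* Placeholder: a real stacking driver would resubmit bd->rq
                   * to the lower device and complete it from that device's
                   * completion path. */
                  blk_mq_start_request(bd->rq);
                  blk_mq_end_request(bd->rq, BLK_STS_OK);
                  return BLK_STS_OK;
          }

          static const struct blk_mq_ops example_mq_ops = {
                  .queue_rq = example_queue_rq,
          };

          static struct blk_mq_tag_set example_tag_set;

          static int example_setup_tag_set(void)
          {
                  example_tag_set.ops          = &example_mq_ops;
                  example_tag_set.nr_hw_queues = 1;
                  example_tag_set.queue_depth  = 128;
                  example_tag_set.numa_node    = NUMA_NO_NODE;
                  /* BLK_MQ_F_STACKING: skip the hotplug drain for this driver. */
                  example_tag_set.flags = BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_STACKING;

                  return blk_mq_alloc_tag_set(&example_tag_set);
          }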