1. 23 Jun 2022, 2 commits
  2. 14 Jun 2022, 8 commits
  3. 31 May 2022, 1 commit
  4. 28 May 2022, 1 commit
  5. 16 May 2022, 3 commits
    • nvme-pci: harden drive presence detect in nvme_dev_disable() · b98235d3
      Authored by Stefan Roese
      On our ZynqMP system we observe that an NVMe drive resetting itself
      during a firmware update causes a kernel crash like this:
      
      [ 67.720772] pcieport 0000:02:02.0: pciehp: Slot(2): Link Down
      [ 67.720783] pcieport 0000:02:02.0: pciehp: Slot(2): Card not present
      [ 67.720795] nvme 0000:04:00.0: PME# disabled
      [ 67.720849] Internal error: synchronous external abort: 96000010 [#1] PREEMPT SMP
      [ 67.720853] nwl-pcie fd0e0000.pcie: Slave error
      
      Analysis: when nvme_dev_disable() is called because of this PCIe
      hotplug event, pci_is_enabled() still returns true, and accessing the
      NVMe drive, which is unavailable while it reboots, triggers this
      "synchronous external abort" on this ARM64 platform.
      
      This patch additionally checks pci_device_is_present(), which returns
      false in this "Card not present" hotplug case. With this change, the
      NVMe driver no longer tries to access the NVMe registers and the
      firmware update finishes without any problems.
      Signed-off-by: Stefan Roese <sr@denx.de>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
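      A minimal sketch of the shape of this fix, paraphrasing the guard in
      nvme_dev_disable() rather than quoting the patch verbatim (the
      surrounding CSTS handling is abbreviated):

          struct pci_dev *pdev = to_pci_dev(dev->dev);

          /*
           * pci_device_is_present() re-reads the vendor ID and returns
           * false once the device has dropped off the bus, so the CSTS
           * register is only read while the drive is still reachable.
           */
          if (pci_device_is_present(pdev) && pci_is_enabled(pdev)) {
                  u32 csts = readl(dev->bar + NVME_REG_CSTS);

                  /* ... evaluate controller state and disable the device ... */
          }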
    • nvme-pci: fix a NULL pointer dereference in nvme_alloc_admin_tags · da427611
      Authored by Smith, Kyle Miller (Nimble Kernel)
      In nvme_alloc_admin_tags(), admin_q can be set to an error pointer
      (typically -ENOMEM) if the blk_mq_init_queue() call fails to set up
      the queue; this is checked immediately after the call. However, when
      we return the error up the stack to nvme_reset_work(), the error path
      takes us to:
        nvme_remove_dead_ctrl()
         nvme_dev_disable()
          nvme_suspend_queue(&dev->queues[0])
      
      Here, we only check that admin_q is non-NULL, rather than checking
      that it is neither NULL nor an error pointer, and so we begin
      quiescing a queue that never existed, leading to an invalid/NULL
      pointer dereference.
      Signed-off-by: Kyle Smith <kyles@hpe.com>
      Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
      Reviewed-by: Hannes Reinecke <hare@suse.de>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
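      One way to read the fix is as a sketch along these lines (assuming the
      error path of nvme_alloc_admin_tags(); not an exact copy of the patch):

          dev->ctrl.admin_q = blk_mq_init_queue(&dev->admin_tagset);
          if (IS_ERR(dev->ctrl.admin_q)) {
                  blk_mq_free_tag_set(&dev->admin_tagset);
                  /*
                   * Reset admin_q to NULL so the teardown path's plain
                   * non-NULL check does not mistake the leftover error
                   * pointer for a live queue.
                   */
                  dev->ctrl.admin_q = NULL;
                  return -ENOMEM;
          }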
    • nvme: mark internal passthru request RQF_QUIET · 128126a7
      Authored by Chaitanya Kulkarni
      Most of the internal passthru commands use the __nvme_submit_sync_cmd()
      interface. There are a few places where we open-code the request
      submission:
      
      1. nvme_keep_alive_work(struct work_struct *work)
      2. nvme_timeout(struct request *req, bool reserved)
      3. nvme_delete_queue(struct nvme_queue *nvmeq, u8 opcode)
      
      Mark these internal passthru requests quiet so that we can skip the
      verbose error message from nvme_log_error() in the nvme_end_req()
      completion path; this is consistent with what we already do in
      __nvme_submit_sync_cmd().
      Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
      Reviewed-by: Alan Adamson <alan.adamson@oracle.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
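      The change itself is one line per call site; a hedged sketch (the
      surrounding submission code is unchanged and omitted here):

          /*
           * At each of the three open-coded submission sites listed above,
           * flag the internal request before it is queued:
           */
          rq->rq_flags |= RQF_QUIET;  /* nvme_end_req() skips nvme_log_error() */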
  6. 15 Apr 2022, 2 commits
  7. 23 Mar 2022, 2 commits
  8. 16 Mar 2022, 1 commit
  9. 04 Mar 2022, 1 commit
  10. 27 Jan 2022, 1 commit
  11. 06 Jan 2022, 1 commit
  12. 17 Dec 2021, 3 commits
  13. 29 Nov 2021, 1 commit
  14. 21 Oct 2021, 1 commit
  15. 20 Oct 2021, 1 commit
  16. 19 Oct 2021, 3 commits
    • nvme: wire up completion batching for the IRQ path · 4f502245
      Authored by Jens Axboe
      Trivial to do now: we just need our own io_comp_batch on the stack and
      to pass that in to the usual command completion handling.
      
      I pondered making this dependent on how many entries we had to process,
      but even for a single entry there's no discernible difference in
      performance or latency. Running a sync workload over io_uring:
      
      t/io_uring -b512 -d1 -s1 -c1 -p0 -F1 -B1 -n2 /dev/nvme1n1 /dev/nvme2n1
      
      yields the below performance before the patch:
      
      IOPS=254820, BW=124MiB/s, IOS/call=1/1, inflight=(1 1)
      IOPS=251174, BW=122MiB/s, IOS/call=1/1, inflight=(1 1)
      IOPS=250806, BW=122MiB/s, IOS/call=1/1, inflight=(1 1)
      
      and the following after:
      
      IOPS=255972, BW=124MiB/s, IOS/call=1/1, inflight=(1 1)
      IOPS=251920, BW=123MiB/s, IOS/call=1/1, inflight=(1 1)
      IOPS=251794, BW=122MiB/s, IOS/call=1/1, inflight=(1 1)
      
      which is definitely not slower, and about the same once you factor in
      a bit of variance. For peak performance workloads, benchmarking shows
      a 2% improvement.
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
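      Roughly the shape of the IRQ handler after this change (a sketch based
      on the description above; nvme_poll_cq() and nvme_pci_complete_batch()
      follow the driver's naming conventions at the time):

          static irqreturn_t nvme_irq(int irq, void *data)
          {
                  struct nvme_queue *nvmeq = data;
                  DEFINE_IO_COMP_BATCH(iob);      /* on-stack batch */

                  if (nvme_poll_cq(nvmeq, &iob)) {
                          /* complete everything gathered during the poll */
                          if (!rq_list_empty(iob.req_list))
                                  nvme_pci_complete_batch(&iob);
                          return IRQ_HANDLED;
                  }
                  return IRQ_NONE;
          }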
    • nvme: add support for batched completion of polled IO · c234a653
      Authored by Jens Axboe
      Take advantage of struct io_comp_batch if it is passed in to the nvme
      poll handler. If it's set, rather than completing each request
      individually inline, store them in the io_comp_batch list. We only do
      so for requests that will complete successfully; anything else is
      completed inline as before.
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
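      A sketch of the resulting completion dispatch (paraphrased from the
      description above; blk_mq_add_to_batch() returns false when no batch
      was passed in or the request cannot be batched, so those cases fall
      back to the inline path):

          if (!nvme_try_complete_req(req, cqe->status, cqe->result) &&
              !blk_mq_add_to_batch(req, iob, nvme_req(req)->status,
                                   nvme_pci_complete_batch))
                  nvme_pci_complete_rq(req);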
    • block: add a struct io_comp_batch argument to fops->iopoll() · 5a72e899
      Authored by Jens Axboe
      struct io_comp_batch contains a list head and a completion handler,
      which will allow batches of IO completions to be handled more
      efficiently.
      
      For now, there are no functional changes in this patch; we just define
      the io_comp_batch structure and add the argument to the
      file_operations iopoll handler.
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
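      The structure and the extended iopoll() hook, as described above (a
      sketch; the need_ts field for timestamp bookkeeping is an assumption
      about the full layout):

          struct io_comp_batch {
                  struct request *req_list;   /* batched requests */
                  bool need_ts;               /* any request needs timestamping */
                  void (*complete)(struct io_comp_batch *);  /* batch handler */
          };

          /* fops->iopoll() now takes the (possibly NULL) batch: */
          int (*iopoll)(struct kiocb *kiocb, struct io_comp_batch *iob,
                        unsigned int flags);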
  17. 18 Oct 2021, 1 commit
  18. 07 Oct 2021, 1 commit
  19. 28 Sep 2021, 1 commit
  20. 16 Aug 2021, 5 commits