1. 20 Feb 2019, 3 commits
  2. 06 Feb 2019, 1 commit
  3. 04 Feb 2019, 2 commits
  4. 10 Jan 2019, 3 commits
  5. 19 Dec 2018, 1 commit
  6. 14 Dec 2018, 1 commit
  7. 13 Dec 2018, 3 commits
  8. 12 Dec 2018, 1 commit
  9. 08 Dec 2018, 6 commits
  10. 07 Dec 2018, 1 commit
    • nvme: validate controller state before rescheduling keep alive · 86880d64
      James Smart committed
      Delete operations are seeing NULL pointer references in call_timer_fn.
      Tracking these back, the timer appears to be the keep alive timer.

      nvme_keep_alive_work(), which is tied to the timer that is cancelled
      by nvme_stop_keep_alive(), simply starts the keep alive io but doesn't
      wait for its completion. So nvme_stop_keep_alive() only stops the
      timer when it is pending. When a keep alive is in flight, there is no
      timer running, and nvme_stop_keep_alive() has no effect on the keep
      alive io. Thus, if the io completes successfully, the keep alive timer
      will be rescheduled. In the failure case, delete is called, the
      controller state is changed, nvme_stop_keep_alive() is called while
      the io is outstanding, and the delete path continues on. The keep
      alive happens to complete successfully before the delete path marks it
      as aborted as part of the queue termination, so the timer is
      restarted. The delete path then tears down the controller, and later
      the timer code fires on a timer entry that is now corrupt.

      Fix by validating the controller state before rescheduling the keep
      alive; a sketch of the check follows this entry. Testing with the fix
      has confirmed the condition above was hit.
      Signed-off-by: James Smart <jsmart2021@gmail.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
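      The check described above amounts to re-arming the keep alive work
      from its completion handler only while the controller is still in a
      usable state. The sketch below is a minimal illustration of that idea,
      not the literal diff of 86880d64: the handler name
      (nvme_keep_alive_end_io), the ctrl->state values, ctrl->lock, ka_work
      and kato are taken from the mainline nvme core, but the exact guard in
      the commit may differ in detail.

      /* Sketch: only re-arm the keep alive while the controller is usable. */
      static void nvme_keep_alive_end_io(struct request *rq, blk_status_t status)
      {
          struct nvme_ctrl *ctrl = rq->end_io_data;
          unsigned long flags;
          bool startka = false;

          blk_mq_free_request(rq);

          if (status) {
              dev_err(ctrl->device,
                      "failed nvme_keep_alive_end_io error=%d\n", status);
              return;
          }

          /*
           * A delete may already have changed the controller state and begun
           * tearing the controller down.  Reschedule the keep alive work only
           * while the controller is live (or still connecting); otherwise let
           * the delete path win the race.
           */
          spin_lock_irqsave(&ctrl->lock, flags);
          if (ctrl->state == NVME_CTRL_LIVE ||
              ctrl->state == NVME_CTRL_CONNECTING)
              startka = true;
          spin_unlock_irqrestore(&ctrl->lock, flags);

          if (startka)
              schedule_delayed_work(&ctrl->ka_work, ctrl->kato * HZ);
      }

      Taking ctrl->lock around the state check keeps the re-arm decision
      consistent with a concurrent state change made by the delete path.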
  11. 01 Dec 2018, 1 commit
  12. 28 Nov 2018, 1 commit
    • nvme-pci: fix surprise removal · 751a0cc0
      Igor Konopko committed
      When a PCIe NVMe device is not present, nvme_dev_remove_admin() calls
      blk_cleanup_queue() on the admin queue, which frees the hctx for that
      queue.  Moments later, on the same path, nvme_kill_queues() calls
      blk_mq_unquiesce_queue() on the admin queue and tries to access its
      hctx, which leads to the following oops (a sketch of the affected
      removal path follows this entry):
      
      Oops: 0000 [#1] SMP PTI
      RIP: 0010:sbitmap_any_bit_set+0xb/0x40
      Call Trace:
       blk_mq_run_hw_queue+0xd5/0x150
       blk_mq_run_hw_queues+0x3a/0x50
       nvme_kill_queues+0x26/0x50
       nvme_remove_namespaces+0xb2/0xc0
       nvme_remove+0x60/0x140
       pci_device_remove+0x3b/0xb0
      
      Fixes: cb4bfda6 ("nvme-pci: fix hot removal during error handling")
      Signed-off-by: Igor Konopko <igor.j.konopko@intel.com>
      Reviewed-by: Keith Busch <keith.busch@intel.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
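      For context, the relevant removal path in drivers/nvme/host/pci.c
      looks roughly like the abridged sketch below; it is a reconstruction
      from the commit message, not the literal diff of 751a0cc0. The early
      nvme_dev_remove_admin() call (the one introduced by cb4bfda6) frees
      the admin queue's hctx before nvme_remove_namespaces() reaches
      nvme_kill_queues(), which then unquiesces the freed queue. One
      resolution consistent with the message is to rely only on the later
      nvme_dev_remove_admin() call, which runs after the queues have been
      killed.

      /* Abridged sketch of nvme_remove() in drivers/nvme/host/pci.c. */
      static void nvme_remove(struct pci_dev *pdev)
      {
          struct nvme_dev *dev = pci_get_drvdata(pdev);

          nvme_change_ctrl_state(&dev->ctrl, NVME_CTRL_DELETING);
          pci_set_drvdata(pdev, NULL);

          if (!pci_device_is_present(pdev)) {
              nvme_change_ctrl_state(&dev->ctrl, NVME_CTRL_DEAD);
              nvme_dev_disable(dev, true);
              /*
               * Calling nvme_dev_remove_admin() here (as cb4bfda6 did) runs
               * blk_cleanup_queue() on the admin queue and frees its hctx,
               * which nvme_kill_queues() below still dereferences when it
               * unquiesces the queue -- the oops quoted in this entry.
               */
          }

          flush_work(&dev->ctrl.reset_work);
          nvme_stop_ctrl(&dev->ctrl);
          nvme_remove_namespaces(&dev->ctrl);   /* -> nvme_kill_queues() */
          nvme_dev_disable(dev, true);
          nvme_free_host_mem(dev);
          nvme_dev_remove_admin(dev);           /* safe: queues already dead */
          nvme_free_queues(dev, 0);
          nvme_release_prp_pools(dev);
          nvme_dev_unmap(dev);
          nvme_uninit_ctrl(&dev->ctrl);
      }

      Whatever the exact diff, the constraint is that blk_cleanup_queue() on
      the admin queue must not run before the last blk_mq_unquiesce_queue()
      issued by nvme_kill_queues().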
  13. 27 Nov 2018, 1 commit
  14. 09 Nov 2018, 2 commits
  15. 18 Oct 2018, 1 commit
  16. 17 Oct 2018, 3 commits
  17. 08 Oct 2018, 1 commit
  18. 02 Oct 2018, 3 commits
  19. 28 Sep 2018, 2 commits
  20. 08 Aug 2018, 1 commit
  21. 30 Jul 2018, 2 commits