1. 12 October 2016 (2 commits)
    • nvme: don't schedule multiple resets · c5f6ce97
      Authored by Keith Busch
      queue_work() only fails if the work is pending, but not yet running. If
      the work is running, the work item gets requeued, triggering a
      double reset. If the first reset fails for any reason, the second
      reset triggers:
      
      	WARN_ON(dev->ctrl.state == NVME_CTRL_RESETTING)
      
      Hitting that schedules controller deletion for a second time, which
      potentially takes a reference on the device that is being deleted.
      If the reset occurs at the same time as a hot removal event, this causes
      a double-free.
      
      This patch has the reset helper check whether the work is busy before
      queueing it, and changes every place that schedules a reset to use
      this helper (sketched below). Since most users don't need to sync with
      that work, the flush_work() call is moved to the only caller that does.
      Signed-off-by: Keith Busch <keith.busch@intel.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@fb.com>
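      A minimal sketch of the guarded scheduling pattern this commit describes,
      assuming a driver-private structure with a reset work item and workqueue;
      the my_nvme_* names are illustrative, not the driver's actual identifiers.

      #include <linux/workqueue.h>
      #include <linux/errno.h>

      /* Hypothetical device structure; only the fields the sketch needs. */
      struct my_nvme_dev {
              struct workqueue_struct *wq;
              struct work_struct reset_work;
      };

      /*
       * Schedule a reset only if one is neither pending nor running, so a
       * running reset can never requeue itself and trigger a second reset.
       */
      static int my_nvme_reset(struct my_nvme_dev *dev)
      {
              if (work_busy(&dev->reset_work))
                      return -EBUSY;
              if (!queue_work(dev->wq, &dev->reset_work))
                      return -EBUSY;
              return 0;
      }

      /* The one caller that must wait for the reset to finish syncs here. */
      static int my_nvme_reset_sync(struct my_nvme_dev *dev)
      {
              int ret = my_nvme_reset(dev);

              if (!ret)
                      flush_work(&dev->reset_work);
              return ret;
      }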
    • nvme: Delete created IO queues on reset · 70659060
      Authored by Keith Busch
      The driver was decrementing online_queues before attempting to delete
      those IO queues, so it never actually asked the controller to delete
      any of them. This patch saves the online_queues count before suspending
      the queues and passes that count to the IO queue deletion path
      (sketched below).
      
      Fixes: c21377f8 ("nvme: Suspend all queues before deletion")
      Signed-off-by: Keith Busch <keith.busch@intel.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@fb.com>
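      A rough sketch of the fix's ordering, with hypothetical my_nvme_* helpers
      standing in for the driver's real suspend and delete routines; the point
      is only that the queue count is captured before suspension decrements it.

      struct my_nvme_dev {
              unsigned int online_queues;     /* includes the admin queue (qid 0) */
              unsigned int queue_count;
      };

      /* Stand-ins for the real routines that quiesce one queue and that send
       * the controller delete commands for the first 'nr' IO queues. */
      static void my_nvme_suspend_queue(struct my_nvme_dev *dev, unsigned int qid);
      static void my_nvme_delete_io_queues(struct my_nvme_dev *dev, unsigned int nr);

      static void my_nvme_disable_io_queues(struct my_nvme_dev *dev)
      {
              /* Save the count first: suspending decrements online_queues, so
               * reading it afterwards would ask the controller to delete none. */
              unsigned int io_queues = dev->online_queues - 1;
              unsigned int qid;

              for (qid = dev->queue_count - 1; qid > 0; qid--)
                      my_nvme_suspend_queue(dev, qid);

              my_nvme_delete_io_queues(dev, io_queues);
      }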
  2. 15 September 2016 (2 commits)
  3. 09 September 2016 (1 commit)
  4. 07 September 2016 (1 commit)
  5. 11 August 2016 (1 commit)
    • nvme: Suspend all queues before deletion · c21377f8
      Authored by Gabriel Krisman Bertazi
      When nvme_delete_queue fails in the first pass of the
      nvme_disable_io_queues() loop, we return early, failing to suspend all
      of the IO queues.  Later, on the nvme_pci_disable path, this causes us
      to disable MSI without actually having freed all the IRQs, which
      triggers the BUG_ON in free_msi_irqs(), as shown below.

      This patch refactors nvme_disable_io_queues to suspend all queues before
      it starts submitting delete queue commands (sketched after this entry).
      This way, we ensure that every IRQ has been returned before continuing
      with the removal path.
      
      [  487.529200] kernel BUG at ../drivers/pci/msi.c:368!
      cpu 0x46: Vector: 700 (Program Check) at [c0000078c5b83650]
          pc: c000000000627a50: free_msi_irqs+0x90/0x200
          lr: c000000000627a40: free_msi_irqs+0x80/0x200
          sp: c0000078c5b838d0
         msr: 9000000100029033
        current = 0xc0000078c5b40000
        paca    = 0xc000000002bd7600   softe: 0        irq_happened: 0x01
          pid   = 1376, comm = kworker/70:1H
      kernel BUG at ../drivers/pci/msi.c:368!
      Linux version 4.7.0.mainline+ (root@iod76) (gcc version 5.3.1 20160413
      (Ubuntu/IBM 5.3.1-14ubuntu2.1) ) #104 SMP Fri Jul 29 09:20:17 CDT 2016
      enter ? for help
      [c0000078c5b83920] d0000000363b0cd8 nvme_dev_disable+0x208/0x4f0 [nvme]
      [c0000078c5b83a10] d0000000363b12a4 nvme_timeout+0xe4/0x250 [nvme]
      [c0000078c5b83ad0] c0000000005690e4 blk_mq_rq_timed_out+0x64/0x110
      [c0000078c5b83b40] c00000000056c930 bt_for_each+0x160/0x170
      [c0000078c5b83bb0] c00000000056d928 blk_mq_queue_tag_busy_iter+0x78/0x110
      [c0000078c5b83c00] c0000000005675d8 blk_mq_timeout_work+0xd8/0x1b0
      [c0000078c5b83c50] c0000000000e8cf0 process_one_work+0x1e0/0x590
      [c0000078c5b83ce0] c0000000000e9148 worker_thread+0xa8/0x660
      [c0000078c5b83d80] c0000000000f2090 kthread+0x110/0x130
      [c0000078c5b83e30] c0000000000095f0 ret_from_kernel_thread+0x5c/0x6c
      Signed-off-by: Gabriel Krisman Bertazi <krisman@linux.vnet.ibm.com>
      Cc: Brian King <brking@linux.vnet.ibm.com>
      Cc: Keith Busch <keith.busch@intel.com>
      Cc: linux-nvme@lists.infradead.org
      Signed-off-by: Jens Axboe <axboe@fb.com>
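      A sketch of the reordering, reusing the hypothetical my_nvme_* structure
      and helpers from the sketch above: every IO queue is suspended (returning
      its IRQ) before any delete-queue command is sent, so bailing out of the
      delete loop early can no longer leave IRQs allocated when MSI is disabled.

      /* Stand-in: sends the delete commands for queue 'qid'; non-zero on failure. */
      static int my_nvme_delete_queue(struct my_nvme_dev *dev, unsigned int qid);

      static void my_nvme_disable_io_queues(struct my_nvme_dev *dev, int queues)
      {
              int i;

              /* Pass 1: suspend every IO queue so its IRQ is already freed. */
              for (i = dev->queue_count - 1; i > 0; i--)
                      my_nvme_suspend_queue(dev, i);

              /* Pass 2: ask the controller to delete the queues; an early
               * failure here no longer leaves any IRQ behind. */
              for (i = queues; i > 0; i--)
                      if (my_nvme_delete_queue(dev, i))
                              break;
      }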
  6. 21 July 2016 (1 commit)
  7. 14 July 2016 (1 commit)
  8. 13 July 2016 (1 commit)
    • nvme: Limit command retries · f80ec966
      Authored by Keith Busch
      Many controller implementations will return errors for commands that
      will not succeed, but without the DNR bit set. The driver previously
      retried these commands an unlimited number of times until the command
      timeout was exceeded, which takes an unnecessarily long period of time.

      This patch limits the number of retries a command can have, defaulting
      to 5; the limit is user tunable at load or runtime (sketched below).

      The struct request's 'retries' field is used to track the number of
      retries attempted. This is in contrast with scsi's use of this field,
      which indicates how many retries are allowed.
      Signed-off-by: Keith Busch <keith.busch@intel.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@fb.com>
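      A hedged sketch of the retry cap: a module parameter (default 5) writable
      at runtime, and a check that counts attempted retries in the request. The
      my_nvme_* names and the DNR constant's value are assumptions made for
      illustration; the commit only confirms that struct request's 'retries'
      field tracks attempts rather than the allowance.

      #include <linux/module.h>
      #include <linux/blkdev.h>

      static unsigned char max_retries = 5;
      module_param(max_retries, byte, 0644);  /* tunable at load or runtime */
      MODULE_PARM_DESC(max_retries, "max number of retries a command may have");

      #define MY_NVME_STATUS_DNR      0x4000  /* assumed Do Not Retry bit position */

      /*
       * Unlike SCSI, where req->retries holds how many retries are allowed,
       * here it counts how many retries have already been attempted.
       */
      static bool my_nvme_req_needs_retry(struct request *req, u16 status)
      {
              if (status & MY_NVME_STATUS_DNR)
                      return false;
              if (req->retries >= max_retries)
                      return false;
              req->retries++;
              return true;
      }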
  9. 12 July 2016 (1 commit)
    • nvme/quirk: Add a delay before checking for adapter readiness · 54adc010
      Authored by Guilherme G. Piccoli
      When disabling the controller, the specification says the NVME_REG_CC
      register should be written, and the driver then needs to wait for the
      adapter to become ready, which is checked by reading another register
      bit (NVME_CSTS_RDY). There is a timeout on this check, and if it is
      reached the driver gives up and removes the adapter from the system.

      After a firmware activation procedure, the PCI_DEVICE(0x1c58, 0x0003)
      (HGST adapter) ends up being removed if we issue a reset_controller,
      because the driver keeps polling NVME_REG_CSTS until the timeout is
      reached. This patch adds the necessary quirk for this adapter by
      introducing a delay before nvme_wait_ready(), so the reset procedure
      can complete (sketched below). The quirk is needed because merely
      increasing the timeout is not enough for this adapter: the driver must
      wait before it starts reading the NVME_REG_CSTS register on this
      specific device.
      Signed-off-by: Guilherme G. Piccoli <gpiccoli@linux.vnet.ibm.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@fb.com>
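      A sketch of where such a quirk-driven delay would sit, assuming a
      hypothetical quirks bitmask and my_nvme_* helpers; the 2000 ms figure is
      illustrative and not taken from the commit text.

      #include <linux/delay.h>

      #define MY_NVME_QUIRK_DELAY_BEFORE_CHK_RDY      (1 << 0)
      #define MY_NVME_QUIRK_DELAY_MS                  2000    /* illustrative */

      struct my_nvme_ctrl {
              unsigned long quirks;
      };

      /* Stand-in: polls NVME_CSTS_RDY until it matches 'enabled' or times out. */
      static int my_nvme_wait_ready(struct my_nvme_ctrl *ctrl, bool enabled);

      static int my_nvme_disable_ctrl(struct my_nvme_ctrl *ctrl)
      {
              /* ... clear the enable bit via NVME_REG_CC here ... */

              /* Some devices need a grace period before CSTS can be polled
               * meaningfully; without it the readiness check times out and
               * the device ends up being removed. */
              if (ctrl->quirks & MY_NVME_QUIRK_DELAY_BEFORE_CHK_RDY)
                      msleep(MY_NVME_QUIRK_DELAY_MS);

              return my_nvme_wait_ready(ctrl, false);
      }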
  10. 06 July 2016 (3 commits)
  11. 22 June 2016 (1 commit)
  12. 12 June 2016 (1 commit)
  13. 10 June 2016 (1 commit)
  14. 08 June 2016 (2 commits)
  15. 18 May 2016 (6 commits)
  16. 04 May 2016 (1 commit)
    • NVMe: Fix reset/remove race · 87c32077
      Authored by Keith Busch
      This fixes a scenario where the device is present and being reset when
      a request to unbind the driver occurs.
      
      A previous patch series, addressing a device-failure removal scenario,
      flushed reset_work after disabling the controller in order to unblock a
      reset_work waiting on a completion that would never occur. That isn't
      safe as-is. The broken scenario can potentially be induced with:
      
        modprobe nvme && modprobe -r nvme
      
      To fix this, the reset work is flushed immediately after setting the
      controller-removing flag, and any subsequent reset will not proceed
      with controller initialization if the flag is set (sketched below).

      The controller status must be polled while the device is active, so the
      watchdog timer is also left running until the controller is disabled,
      to clean up requests that may be stuck during namespace removal.
      
      [Fixes: ff23a2a1]
      Signed-off-by: Keith Busch <keith.busch@intel.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Jens Axboe <axboe@fb.com>
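      A sketch of the ordering the fix describes, with a hypothetical flags word
      and bit name: removal marks the controller first and then flushes the reset
      work, and the reset work refuses to (re)initialize once the flag is set.

      #include <linux/pci.h>
      #include <linux/workqueue.h>
      #include <linux/bitops.h>

      #define MY_NVME_CTRL_REMOVING   0       /* hypothetical flag bit */

      struct my_nvme_dev {
              unsigned long flags;
              struct work_struct reset_work;
      };

      static void my_nvme_remove(struct pci_dev *pdev)
      {
              struct my_nvme_dev *dev = pci_get_drvdata(pdev);

              set_bit(MY_NVME_CTRL_REMOVING, &dev->flags);
              flush_work(&dev->reset_work);   /* no reset may race with teardown */
              /* ... disable the controller, remove namespaces, free resources ... */
      }

      static void my_nvme_reset_work(struct work_struct *work)
      {
              struct my_nvme_dev *dev =
                      container_of(work, struct my_nvme_dev, reset_work);

              if (test_bit(MY_NVME_CTRL_REMOVING, &dev->flags))
                      return;         /* driver is unbinding: do not re-initialize */
              /* ... normal controller (re)initialization ... */
      }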
  17. 02 May 2016 (7 commits)
  18. 15 April 2016 (1 commit)
    • NVMe: Always use MSI/MSI-x interrupts · a5229050
      Authored by Keith Busch
      Multiple users have reported device initialization failures due to the
      driver not receiving legacy PCI interrupts. This is not unique to any
      particular controller, but has been observed on multiple platforms.

      There have been no issues reported or observed with message signaled
      interrupts, so this patch attempts to use MSI-x during initialization,
      falling back to MSI. If that also fails, legacy interrupts become the
      default (sketched below).

      The setup_io_queues error handling had to change as a result: the admin
      queue's msix_entry used to be initialized to the legacy IRQ. The case
      where nr_io_queues is 0 would fail request_irq when setting up the
      admin queue's interrupt, since re-enabling MSI-x fails with 0 vectors,
      leaving the admin queue's msix_entry invalid. Instead, return success
      immediately.
      Reported-by: Tim Muhlemmer <muhlemmer@gmail.com>
      Reported-by: Jon Derrick <jonathan.derrick@intel.com>
      Signed-off-by: Keith Busch <keith.busch@intel.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
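      A sketch of the fallback order using the generic PCI MSI API of that era;
      the single-vector sizing and the my_nvme_* name are illustrative, and the
      real driver would size the vector table for its IO queues.

      #include <linux/pci.h>

      static int my_nvme_setup_irq(struct pci_dev *pdev, struct msix_entry *entry,
                                   unsigned int *irq)
      {
              entry->entry = 0;

              /* Prefer MSI-X ... */
              if (pci_enable_msix_range(pdev, entry, 1, 1) > 0) {
                      *irq = entry->vector;
                      return 0;
              }

              /* ... fall back to MSI ... */
              if (!pci_enable_msi(pdev)) {
                      *irq = pdev->irq;
                      return 0;
              }

              /* ... and use the legacy INTx pin only as a last resort. */
              *irq = pdev->irq;
              return 0;
      }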
  19. 13 April 2016 (6 commits)