1. 28 Jan 2014, 6 commits
    • NVMe: Disable admin queue on init failure · a1a5ef99
      Keith Busch committed
      Disable the admin queue if the device fails during initialization so
      that the queue's irq is freed.
      Signed-off-by: Keith Busch <keith.busch@intel.com>
      [rewritten to use nvme_free_queues]
      Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
    • NVMe: Dynamically allocate partition numbers · 469071a3
      Matthew Wilcox committed
      Some users need more than 64 partitions per device.  Rather than simply
      increasing the number of partitions, switch to the dynamic partition
      allocation scheme.
      
      This means that minor numbers are not stable across boots, but since major
      numbers aren't either, I cannot see this being a significant problem.
      Tested-by: Matias Bjørling <m@bjorling.me>
      Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
    • NVMe: Async IO queue deletion · 4d115420
      Keith Busch committed
      This attempts to delete all IO queues asynchronously, at the same
      time, on shutdown. This is necessary for a device that is present but
      not responding; a shutdown previously took 2 minutes per queue-pair
      to time out before moving on to the next queue, making device removal
      appear to take a very long time or to "hang", as reported by users.
      
      In the previous worst case, with more than 32 queue pairs a removal
      could be stuck forever until a kill signal was given, since the
      driver would run out of admin command IDs after over an hour of
      timed-out sync commands (the admin queue depth is 64).
      
      This patch waits one admin command timeout for all the commands to
      complete, so the worst case for an unresponsive controller is now 60
      seconds, though that still seems like a long time.
      
      Since this adds another way to take queues offline, some duplicate
      code resulted, so those paths have been consolidated into more
      convenient functions.
      Signed-off-by: Keith Busch <keith.busch@intel.com>
      [make functions static, correct line length and whitespace issues]
      Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
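The worst-case arithmetic in the message above can be sketched as a hypothetical back-of-envelope model (the names `sync_worst_case_seconds` and `async_worst_case_seconds` are illustrative, not driver functions; the 2-minute and 60-second figures come from the commit text):

```c
#include <assert.h>

/* Hypothetical model of the two shutdown worst cases described above:
 * the old path waited one 2-minute timeout per queue pair in sequence,
 * while the async path issues every deletion at once and waits a single
 * shared 60-second admin command timeout. */
static int sync_worst_case_seconds(int queue_pairs)
{
    return queue_pairs * 120;  /* 2 minutes per queue pair, serialized */
}

static int async_worst_case_seconds(int queue_pairs)
{
    (void)queue_pairs;         /* deletions proceed concurrently */
    return 60;                 /* one admin command timeout in total */
}
```

For 32 queue pairs the serialized path needs 3840 seconds (the "over an hour" in the message), while the async path is bounded by the single 60-second timeout.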
    • NVMe: Surprise removal handling · 0e53d180
      Keith Busch committed
      This adds checks to see whether the nvme pci device has been removed.
      The check reads the status register, looking for the value -1 (all
      ones), which should never occur unless the device is no longer
      present.
      
      If a user performs a surprise removal on an nvme device, the driver will
      be notified either by the pci driver remove callback if the platform's
      slot is capable of this event, or via reading the device BAR status
      register, which will indicate controller failure and trigger a reset.
      
      Either way, the device is no longer present, so all outstanding
      commands would time out. With these checks the driver does not send
      queue deletion commands to a drive that isn't present, and fails
      early after ioremap, significantly speeding up surprise removal:
      previously this took over 2 minutes per IO queue pair created, but
      now removing the device completes within a few seconds.
      Signed-off-by: Keith Busch <keith.busch@intel.com>
      Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
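The all-ones check works because a PCI read from a device that no longer decodes the address returns 0xFFFFFFFF. A minimal userspace sketch of the idea (`read_csts` and `nvme_device_removed` are hypothetical stand-ins; in the driver this would be a `readl()` of the controller's BAR status register):

```c
#include <assert.h>
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical stand-in for a memory-mapped status register read. */
static uint32_t read_csts(const uint32_t *bar)
{
    return *bar;
}

/* A PCI read of all ones means the device no longer decodes the
 * address, i.e. it has been surprise-removed. */
static bool nvme_device_removed(const uint32_t *bar)
{
    return read_csts(bar) == 0xFFFFFFFFu;
}
```

Seeing this sentinel lets the driver skip queue deletion commands entirely rather than waiting for each of them to time out.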
    • NVMe: Abort timed out commands · c30341dc
      Keith Busch committed
      Send an NVMe Abort command for IO requests that have timed out on an
      initialized device. If the aborted command still has not returned
      after another timeout, schedule the controller for reset.
      Signed-off-by: Keith Busch <keith.busch@intel.com>
      [fix endianness issues]
      Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
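The escalation policy described here is a two-step state machine: first timeout sends an Abort, second timeout resets the controller. A hypothetical sketch (the `nvme_cmd_info` bookkeeping and `nvme_cmd_timed_out` helper are illustrative, not the driver's actual structures):

```c
#include <assert.h>
#include <stdbool.h>

/* Possible responses when a command's timeout fires. */
enum timeout_action { SEND_ABORT, SCHEDULE_RESET };

/* Hypothetical per-command bookkeeping: has an Abort been sent yet? */
struct nvme_cmd_info {
    bool aborted;
};

/* First expiry: issue an NVMe Abort and give the command one more
 * timeout period. Second expiry: the Abort didn't help, so escalate
 * to a controller reset. */
static enum timeout_action nvme_cmd_timed_out(struct nvme_cmd_info *info)
{
    if (!info->aborted) {
        info->aborted = true;
        return SEND_ABORT;
    }
    return SCHEDULE_RESET;
}
```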
    • NVMe: Schedule reset for failed controllers · d4b4ff8e
      Keith Busch committed
      Schedule a controller reset when the controller reports a failed
      status. If the device does not become ready after the reset, the pci
      device will be scheduled for removal.
      Signed-off-by: Keith Busch <keith.busch@intel.com>
      [fixed checkpatch issue]
      Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
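The "failed status" the controller reports is the Controller Fatal Status (CFS) bit, bit 1 of the CSTS register in the NVMe spec. A minimal sketch of the check (the `nvme_needs_reset` helper is hypothetical; in the driver such a check would run periodically and queue reset work rather than reset inline):

```c
#include <assert.h>
#include <stdint.h>
#include <stdbool.h>

#define NVME_CSTS_CFS 0x2u  /* Controller Fatal Status bit in CSTS */

/* If the controller reports fatal status, flag it so a workqueue item
 * can perform the reset outside interrupt context. */
static bool nvme_needs_reset(uint32_t csts)
{
    return (csts & NVME_CSTS_CFS) != 0;
}
```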
  2. 17 Dec 2013, 4 commits
  3. 19 Nov 2013, 2 commits
  4. 22 Sep 2013, 1 commit
  5. 07 Sep 2013, 1 commit
  6. 04 Sep 2013, 10 commits
  7. 25 Jun 2013, 2 commits
  8. 24 Jun 2013, 1 commit
    • NVMe: Return correct value from interrupt handler · e9539f47
      Matthew Wilcox committed
      The interrupt handler currently reports whether it found any new
      completion queue entries.  If the completion queue is primarily being
      processed by a method other than the interrupt handler, it may return
      IRQ_NONE so often that Linux thinks that the interrupt is being falsely
      triggered.
      
      To solve this problem, report whether any completion queue entries have
      been seen since the last interrupt was received for this queue.
      Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
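The fix amounts to tracking a "completion entries seen since the last interrupt" flag and basing the handler's return value on it. A hypothetical userspace sketch (the `nvme_queue` struct is reduced to the one relevant field, and the `irqreturn_t` enum stands in for the kernel's type):

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-ins for the kernel's irqreturn_t values. */
typedef enum { IRQ_NONE, IRQ_HANDLED } irqreturn_t;

struct nvme_queue {
    bool cqe_seen;  /* set whenever a completion queue entry is processed */
};

/* Report IRQ_HANDLED if any completion entries were seen since the last
 * interrupt, even if a polling path consumed them rather than the
 * handler itself; this keeps Linux's spurious-IRQ detector from
 * disabling the line. */
static irqreturn_t nvme_irq(struct nvme_queue *nvmeq)
{
    irqreturn_t result = nvmeq->cqe_seen ? IRQ_HANDLED : IRQ_NONE;
    nvmeq->cqe_seen = false;  /* reset for the next interrupt */
    return result;
}
```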
  9. 21 Jun 2013, 1 commit
  10. 20 Jun 2013, 1 commit
    • NVMe: Restructure MSI / MSI-X setup · 063a8096
      Matthew Wilcox committed
      The current code copies 'nr_io_queues' into 'q_count', modifies
      'nr_io_queues' during MSI-X setup, then resets 'nr_io_queues' for
      MSI setup.  Instead, copy 'nr_io_queues' into 'vecs' and modify 'vecs'
      during both MSI-X and MSI setup.
      
      This lets us simplify the for-loops that set up MSI-X and MSI, and opens
      the possibility of using more I/O queues than we have interrupt vectors,
      should future benchmarking prove that to be a useful feature.
      Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
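The retry pattern described here can be sketched in isolation: copy the queue count into `vecs`, shrink only `vecs` when the platform offers fewer vectors, and leave `nr_io_queues` untouched. Everything below is hypothetical; `fake_enable_msix` merely mimics the convention of MSI-X enable calls of that era, which returned 0 on success, a positive count when fewer vectors were available, or a negative error:

```c
#include <assert.h>

/* Hypothetical stand-in for an MSI-X enable call: 0 on success, the
 * number of vectors actually available if fewer were requested, or a
 * negative error when none can be granted. */
static int fake_enable_msix(int requested, int available)
{
    if (requested <= available)
        return 0;
    return available > 0 ? available : -1;
}

/* Copy nr_io_queues into 'vecs' and shrink only 'vecs' while retrying,
 * so nr_io_queues survives intact and more IO queues than vectors
 * could later share interrupts. Returns the vector count obtained. */
static int setup_vectors(int nr_io_queues, int available)
{
    int vecs = nr_io_queues;

    for (;;) {
        int result = fake_enable_msix(vecs, available);
        if (result == 0)
            return vecs;  /* got everything we asked for */
        if (result < 1)
            return 0;     /* no vectors at all */
        vecs = result;    /* retry with what the platform offers */
    }
}
```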
  11. 31 May 2013, 1 commit
  12. 29 May 2013, 1 commit
  13. 24 May 2013, 1 commit
  14. 17 May 2013, 3 commits
  15. 10 May 2013, 1 commit
  16. 08 May 2013, 2 commits
  17. 03 May 2013, 2 commits