1. 05 Nov 2014, 3 commits
  2. 05 Oct 2014, 1 commit
    • block: disable entropy contributions for nonrot devices · b277da0a
      Mike Snitzer authored
      Clear QUEUE_FLAG_ADD_RANDOM in all block drivers that set
      QUEUE_FLAG_NONROT.
      
      Historically, all block devices have automatically made entropy
      contributions.  But as previously stated in commit e2e1a148 ("block: add
      sysfs knob for turning off disk entropy contributions"):
          - On SSD disks, the completion times aren't as random as they
            are for rotational drives. So it's questionable whether they
            should contribute to the random pool in the first place.
          - Calling add_disk_randomness() has a lot of overhead.
      
      There are more reliable sources for randomness than non-rotational block
      devices.  From a security perspective it is better to err on the side of
      caution than to allow entropy contributions from unreliable "random"
      sources.
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
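
      A minimal sketch of the per-driver pattern this commit applies, using the 3.x-era queue-flag helpers from <linux/blkdev.h>; the function name example_setup_queue() is illustrative, not taken from the patch:

          #include <linux/blkdev.h>

          /* Sketch: a driver that marks its queue non-rotational also opts out
           * of entropy contributions, which is the pattern applied tree-wide here. */
          static void example_setup_queue(struct request_queue *q)
          {
                  /* SSD-style device: completion timing carries little randomness... */
                  queue_flag_set_unlocked(QUEUE_FLAG_NONROT, q);
                  /* ...so stop feeding it into the entropy pool */
                  queue_flag_clear_unlocked(QUEUE_FLAG_ADD_RANDOM, q);
          }
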
  3. 13 Jun 2014, 1 commit
    • NVMe: Fix hot cpu notification dead lock · f3db22fe
      Keith Busch authored
      There is a potential deadlock if a cpu hotplug event occurs during nvme
      probe, since the driver registers for hot cpu notification there. This
      fixes the race by having the module register for notification once,
      outside of probe, rather than having each device register.
      
      The actual work is done in a scheduled work queue instead of in the
      notifier since assigning IO queues has the potential to block if the
      driver creates additional queues.
      Signed-off-by: Keith Busch <keith.busch@intel.com>
      Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
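
      A rough sketch of the shape of the fix, using the pre-4.10 hotcpu notifier API; the helper names (example_init, nvme_cpu_workfn) are hypothetical, and the actual per-device queue reassignment is omitted:

          #include <linux/cpu.h>
          #include <linux/notifier.h>
          #include <linux/workqueue.h>

          /* Work that may block (e.g. reassigning IO queues) runs here,
           * not inside the notifier callback itself. */
          static void nvme_cpu_workfn(struct work_struct *work)
          {
                  /* walk the device list and rebalance queue-to-cpu mappings */
          }
          static DECLARE_WORK(nvme_cpu_work, nvme_cpu_workfn);

          static int nvme_cpu_notify(struct notifier_block *nb,
                                     unsigned long action, void *hcpu)
          {
                  if (action == CPU_ONLINE || action == CPU_DEAD)
                          schedule_work(&nvme_cpu_work);
                  return NOTIFY_OK;
          }

          static struct notifier_block nvme_cpu_nb = {
                  .notifier_call = nvme_cpu_notify,
          };

          static int __init example_init(void)
          {
                  /* one registration per module, not one per probed device */
                  return register_hotcpu_notifier(&nvme_cpu_nb);
          }
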
  4. 04 Jun 2014, 6 commits
  5. 28 May 2014, 1 commit
  6. 10 May 2014, 1 commit
  7. 05 May 2014, 6 commits
  8. 11 Apr 2014, 6 commits
  9. 24 Mar 2014, 5 commits
  10. 14 Mar 2014, 1 commit
  11. 07 Mar 2014, 1 commit
    • nvme: don't use PREPARE_WORK · 9ca97374
      Tejun Heo authored
      PREPARE_[DELAYED_]WORK() are being phased out.  They have few users
      and a nasty surprise in terms of reentrancy guarantee as workqueue
      considers work items to be different if they don't have the same work
      function.
      
      nvme_dev->reset_work is multiplexed with multiple work functions.
      Introduce nvme_reset_workfn(), which invokes nvme_dev->reset_workfn;
      always use it as the work function, and update the users to set the
      ->reset_workfn field instead of overriding the work function using
      PREPARE_WORK().
      
      It would probably be best to route this with other related updates
      through the workqueue tree.
      
      Compile tested.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Matthew Wilcox <willy@linux.intel.com>
      Cc: linux-nvme@lists.infradead.org
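
      The core of the change is a small trampoline, roughly along these lines (a sketch reconstructed from the description above, not a verbatim copy of the patch):

          #include <linux/workqueue.h>

          /* Single, fixed work function for dev->reset_work... */
          static void nvme_reset_workfn(struct work_struct *work)
          {
                  struct nvme_dev *dev = container_of(work, struct nvme_dev,
                                                      reset_work);

                  /* ...which dispatches to whatever was stored in ->reset_workfn,
                   * replacing the old PREPARE_WORK() re-pointing trick. */
                  dev->reset_workfn(work);
          }

          /* Callers then do something like:
           *         dev->reset_workfn = nvme_reset_failed_dev;
           *         queue_work(nvme_workq, &dev->reset_work);
           * instead of PREPARE_WORK(&dev->reset_work, nvme_reset_failed_dev). */
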
  12. 03 Feb 2014, 1 commit
  13. 30 Jan 2014, 1 commit
  14. 28 Jan 2014, 6 commits
    • NVMe: Include device and queue numbers in interrupt name · 3193f07b
      Matthew Wilcox authored
      On larger systems with many drives, it may help debugging to know which
      queue is tied to which interrupt, just by looking at /proc/interrupts.
      Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
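
      A sketch of the kind of change involved, so /proc/interrupts shows names like nvme0q1, nvme0q2, and so on; the buffer field (irqname) and the wrapper function are assumptions based on the driver of that era, not quoted from the patch:

          #include <linux/interrupt.h>

          static int example_request_queue_irq(struct nvme_dev *dev,
                                               struct nvme_queue *nvmeq,
                                               int qid, int vector)
          {
                  /* e.g. "nvme0q3": controller instance 0, IO queue 3 */
                  snprintf(nvmeq->irqname, sizeof(nvmeq->irqname), "nvme%dq%d",
                           dev->instance, qid);

                  /* use the per-queue name instead of a flat "nvme" for every vector */
                  return request_irq(vector, nvme_irq, IRQF_SHARED,
                                     nvmeq->irqname, nvmeq);
          }
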
    • NVMe: Add a pci_driver shutdown method · 09ece142
      Keith Busch authored
      We need to shut down the device cleanly when the system is being shut down.
      This was in an earlier patch but was inadvertently lost during a rewrite.
      Signed-off-by: Keith Busch <keith.busch@intel.com>
      Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
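
      A minimal sketch of wiring up such a hook, assuming an existing teardown helper (here called nvme_dev_shutdown()); only the pci_driver fields relevant here are shown:

          #include <linux/pci.h>

          static void nvme_shutdown(struct pci_dev *pdev)
          {
                  struct nvme_dev *dev = pci_get_drvdata(pdev);

                  /* quiesce queues and tell the controller to shut down cleanly */
                  nvme_dev_shutdown(dev);
          }

          static struct pci_driver nvme_driver = {
                  .name     = "nvme",
                  .probe    = nvme_probe,
                  .remove   = nvme_remove,
                  .shutdown = nvme_shutdown,   /* called on system halt/reboot */
          };
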
    • NVMe: Disable admin queue on init failure · a1a5ef99
      Keith Busch authored
      Disable the admin queue if the device fails during initialization so
      that the queue's irq is freed.
      Signed-off-by: Keith Busch <keith.busch@intel.com>
      [rewritten to use nvme_free_queues]
      Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
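
      A sketch of the intended error handling, under the assumption that nvme_free_queues(dev, 0) tears down every queue from index 0 up and releases its irq; the surrounding function and its exact steps are illustrative:

          static int example_dev_start(struct nvme_dev *dev)
          {
                  int result;

                  result = nvme_configure_admin_queue(dev);
                  if (result)
                          return result;

                  result = nvme_setup_io_queues(dev);
                  if (result)
                          goto disable;
                  return 0;

           disable:
                  /* Undo the admin queue setup so its irq is not leaked. */
                  nvme_free_queues(dev, 0);
                  return result;
          }
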
    • NVMe: Dynamically allocate partition numbers · 469071a3
      Matthew Wilcox authored
      Some users need more than 64 partitions per device.  Rather than simply
      increasing the number of partitions, switch to the dynamic partition
      allocation scheme.
      
      This means that minor numbers are not stable across boots, but since major
      numbers aren't either, I cannot see this being a significant problem.
      Tested-by: Matias Bjørling <m@bjorling.me>
      Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
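
      The usual block-layer pattern for dynamic minors looks roughly like this (a sketch assuming the gendisk API of that era; example_alloc_disk() is a made-up wrapper, not the driver's code):

          #include <linux/genhd.h>

          static struct gendisk *example_alloc_disk(int major)
          {
                  /* 0 preallocated minors: partitions get extended dev_t's
                   * assigned on demand instead of a fixed block of 64 */
                  struct gendisk *disk = alloc_disk(0);

                  if (!disk)
                          return NULL;
                  disk->major = major;
                  disk->first_minor = 0;
                  disk->flags |= GENHD_FL_EXT_DEVT;
                  return disk;
          }
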
    • NVMe: Async IO queue deletion · 4d115420
      Keith Busch authored
      This attempts to delete all IO queues at the same time asynchronously on
      shutdown. This is necessary for a device that is present but not
      responding; a shutdown operation previously took 2 minutes per queue
      pair to time out before moving on to the next queue, making a device
      removal appear to take a very long time or to be "hung", as users have
      reported.
      
      In the previous worst case, a removal may be stuck forever until a kill
      signal is given if there are more than 32 queue pairs since it would run
      out of admin command IDs after over an hour of timed out sync commands
      (admin queue depth is 64).
      
      This patch will wait for the admin command timeout for all commands to
      complete, so the worst case now for an unresponsive controller is 60
      seconds, though that still seems like a long time.
      
      Since this adds another way to take queues offline, some duplicate code
      resulted, so I moved the common parts into more convenient functions.
      Signed-off-by: Keith Busch <keith.busch@intel.com>
      [make functions static, correct line length and whitespace issues]
      Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
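
      A conceptual sketch of the async pattern described above, not the driver's actual code: the per-queue submission helper example_submit_delete_async() is hypothetical, and ADMIN_TIMEOUT stands in for the admin command timeout the log refers to:

          #include <linux/atomic.h>
          #include <linux/completion.h>

          struct delete_ctx {
                  atomic_t pending;
                  struct completion done;
          };

          /* Per-command completion callback: the last one out signals the waiter. */
          static void example_delete_done(struct delete_ctx *ctx)
          {
                  if (atomic_dec_and_test(&ctx->pending))
                          complete(&ctx->done);
          }

          static void example_delete_all_io_queues(struct nvme_dev *dev, int count)
          {
                  struct delete_ctx ctx;
                  int qid;

                  atomic_set(&ctx.pending, count);
                  init_completion(&ctx.done);

                  /* Fire off every delete request without waiting in between. */
                  for (qid = 1; qid <= count; qid++)
                          example_submit_delete_async(dev, qid,
                                                      example_delete_done, &ctx);

                  /* One bounded wait for the lot: a dead controller now costs a
                   * single admin timeout, not one timeout per queue pair. */
                  wait_for_completion_timeout(&ctx.done, ADMIN_TIMEOUT);
          }
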
    • NVMe: Surprise removal handling · 0e53d180
      Keith Busch authored
      This adds checks to see if the nvme pci device has been removed. The
      check reads the status register and looks for the value -1 (all ones),
      which should never be returned unless the device is no longer present.
      
      If a user performs a surprise removal on an nvme device, the driver will
      be notified either by the pci driver remove callback if the platform's
      slot is capable of this event, or via reading the device BAR status
      register, which will indicate controller failure and trigger a reset.
      
      Either way, the device is not present, so all outstanding commands would
      time out. With this check the driver does not send queue deletion
      commands to a drive that isn't there, and it fails early after the
      ioremap, significantly speeding up surprise removal; previously this
      took over 2 minutes per IO queue pair created, but now removing the
      device completes within a few seconds.
      Signed-off-by: Keith Busch <keith.busch@intel.com>
      Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
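
      The presence check itself is tiny; a sketch of the idea, assuming the driver's usual BAR mapping (dev->bar) with a 32-bit controller status (CSTS) register:

          #include <linux/io.h>

          /* A PCI read returning all ones from the controller status register
           * means the device has dropped off the bus. */
          static bool example_device_gone(struct nvme_dev *dev)
          {
                  return readl(&dev->bar->csts) == -1;
          }
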