1. 21 Jul 2015, 1 commit
  2. 06 Jun 2015, 1 commit
    • NVMe: Automatic namespace rescan · a5768aa8
      Committed by Keith Busch
      Namespaces may be dynamically allocated and deleted, or attached and
      detached. This patch has the driver rescan the device for namespace
      changes after each device reset or namespace-change asynchronous event.
      
      There could potentially be many detached namespaces that we don't want
      polluting /dev/ with unusable block handles, so this deletes a disk if
      its namespace is not active, as indicated by the Identify Namespace
      response. It also skips adding the disk if no capacity is provisioned
      to the namespace in the first place.
      Signed-off-by: Keith Busch <keith.busch@intel.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
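The rescan decision above can be sketched as a small predicate. This is a hedged userspace illustration, not the driver's code: the function name, the enum, and the three inputs are our inventions standing in for the Identify Namespace result.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative only: decide what to do with a namespace's disk handle
 * after Identify Namespace, per the rules in the commit message. */
enum ns_action { NS_ADD, NS_KEEP, NS_REMOVE, NS_SKIP };

enum ns_action rescan_decide(bool identify_ok, unsigned long long ncap,
                             bool disk_exists)
{
    /* A namespace is usable only if it is active and has capacity. */
    bool usable = identify_ok && ncap > 0;

    if (usable)
        return disk_exists ? NS_KEEP : NS_ADD;
    /* Inactive or empty: delete any stale /dev handle, never create one. */
    return disk_exists ? NS_REMOVE : NS_SKIP;
}
```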
  3. 22 May 2015, 3 commits
  4. 08 Apr 2015, 1 commit
  5. 20 Feb 2015, 5 commits
    • NVMe: Fix potential corruption during shutdown · 07836e65
      Committed by Keith Busch
      The driver has to end unreturned commands at some point, even if the
      controller has not provided a completion. The driver tried to be safe
      by deleting IO queues prior to ending all unreturned commands. That
      should cause the controller to internally abort inflight commands, but
      the IO queue deletion request is not guaranteed to succeed, so all
      bets are off. We still have to make progress, so to be extra safe,
      this patch doesn't clear a queue to release the dma mapping for a
      command until after the pci device has been disabled.
      
      This patch removes the special handling during device initialization
      so controller recovery can be done at any time. This is possible since
      initialization is no longer done inline with pci probe.
      Reported-by: Nilish Choudhury <nilesh.choudhury@oracle.com>
      Signed-off-by: Keith Busch <keith.busch@intel.com>
    • NVMe: Asynchronous controller probe · 2e1d8448
      Committed by Keith Busch
      This performs the longest parts of nvme device probe in scheduled work.
      This speeds up probe significantly when multiple devices are in use.
      Signed-off-by: Keith Busch <keith.busch@intel.com>
    • NVMe: Register management handle under nvme class · b3fffdef
      Committed by Keith Busch
      This creates a new class type for nvme devices to register their
      management character devices with, so that we do not rely on miscdev
      to provide enough minors for as many nvme devices as some people plan
      to use. The previous limit was approximately 60 NVMe controllers,
      depending on the platform and kernel. Now the limit is 1M, which
      ought to be enough for anybody.
      
      Since we have a new device class, it makes sense to attach the block
      devices under it as well, so part of this patch moves the management
      handle initialization prior to namespace discovery.
      Signed-off-by: Keith Busch <keith.busch@intel.com>
    • NVMe: Update SCSI Inquiry VPD 83h translation · 4f1982b4
      Committed by Keith Busch
      The original translation created collisions on Inquiry VPD page 83h
      for many existing devices. Newer specifications provide other ways to
      translate, based on the device's version, that can be used to create
      unique identifiers.
      
      Version 1.1 provides an EUI64 field that uniquely identifies each
      namespace, and 1.2 added the longer NGUID field for the same reason.
      Both follow the IEEE EUI format and readily translate to the SCSI
      device identification EUI designator type 2h. For devices
      implementing either, the translation uses this type, defaulting to
      the 8-byte EUI64 if implemented and falling back to the 16-byte
      NGUID if not. If neither is provided, the 1.0 translation is used,
      updated to use the SCSI String format to guarantee a unique
      identifier.
      
      Knowing when to use the new fields depends on the nvme controller's
      revision. The NVME_VS macro was not decoding this correctly, so this
      patch fixes it and moves it to a more appropriate place.
      
      Since the Identify Namespace structure required an update for the NGUID
      field, this patch adds the remaining new 1.2 fields to the structure.
      Signed-off-by: Keith Busch <keith.busch@intel.com>
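The preference order described above (EUI64, then NGUID, then the 1.0 String fallback) can be sketched as a designator builder. This is a hedged illustration, not the driver's function: `build_eui_designator` and its signature are ours; the byte layout follows the SCSI device-identification descriptor (code set 1h binary, designator type 2h EUI-64).

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>
#include <stdbool.h>

static bool all_zero(const uint8_t *p, int len)
{
    while (len--)
        if (*p++)
            return false;
    return true;
}

/* Build a type-2h (EUI-64 based) designation descriptor for VPD page 83h.
 * Prefers the 8-byte EUI64, falls back to the 16-byte NGUID; returns the
 * descriptor length, or 0 if neither is set (the 1.0 String-format
 * translation would then apply). Illustrative helper, not kernel code. */
int build_eui_designator(const uint8_t eui64[8], const uint8_t nguid[16],
                         uint8_t out[20])
{
    const uint8_t *id;
    int id_len;

    if (!all_zero(eui64, 8)) {
        id = eui64; id_len = 8;
    } else if (!all_zero(nguid, 16)) {
        id = nguid; id_len = 16;
    } else {
        return 0;               /* fall back to the 1.0 translation */
    }

    out[0] = 0x01;              /* code set: binary */
    out[1] = 0x02;              /* designator type 2h: EUI-64 based */
    out[2] = 0x00;              /* reserved */
    out[3] = (uint8_t)id_len;   /* designator length */
    memcpy(&out[4], id, id_len);
    return 4 + id_len;
}
```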
    • NVMe: Metadata format support · e1e5e564
      Committed by Keith Busch
      Adds support for NVMe metadata formats and exposes block devices for
      all namespaces regardless of their format. Namespace formats that are
      unusable have their disk capacity set to 0, but a handle to the block
      device is still created to simplify device management. A namespace is
      not usable when its format requires the host to interleave block data
      and metadata in a single buffer, when it has no provisioned storage,
      or when it has metadata but failed to register with blk-integrity.
      
      The namespace has to be scanned in two phases to support separate
      metadata formats. The first establishes the sector size and capacity
      prior to invoking add_disk. If metadata is required, the capacity will
      be temporarily set to 0 until it can be revalidated and registered
      with the integrity extensions after add_disk completes.
      
      The driver relies on the integrity extensions to provide the metadata
      buffer. NVMe requires this to be a single physically contiguous
      region, so only one integrity segment is allowed per command. If the
      metadata is used for T10 PI, the driver provides mappings to save and
      restore the reftag physical block translation. The driver provides
      no-op generate and verify functions if metadata is not used for
      protection information, so that the setup is always provided by the
      block layer.
      
      If a request does not supply a required metadata buffer, the command
      is failed with a bad-address error. This could only happen if a user
      manually disables verify/generate on such a disk. The only case where
      this is okay is if the controller is capable of stripping or
      generating the metadata itself, which is possible with some formats.
      
      The metadata scatter-gather list now occupies the spot in the
      nvme_iod that used to link retryable IODs; we don't do that anymore,
      so the field was unused.
      Signed-off-by: Keith Busch <keith.busch@intel.com>
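The usability rule summarized above reduces to a small predicate. A hedged sketch follows; the struct and field names are illustrative stand-ins, not the driver's data structures.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative model of when a namespace format is usable by the host.
 * An unusable namespace still gets a block handle, with capacity 0. */
struct ns_format {
    unsigned ms;             /* metadata bytes per sector (0 = none) */
    bool extended_lba;       /* metadata interleaved with data in one buffer */
    unsigned long long ncap; /* provisioned capacity, in sectors */
    bool integrity_ok;       /* blk-integrity registration succeeded */
};

bool ns_usable(const struct ns_format *f)
{
    if (f->ncap == 0)
        return false;        /* nothing provisioned */
    if (f->ms && f->extended_lba)
        return false;        /* host would have to interleave metadata */
    if (f->ms && !f->integrity_ok)
        return false;        /* metadata present, integrity failed */
    return true;
}
```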
  6. 30 Jan 2015, 1 commit
    • NVMe: avoid kmalloc/kfree for smaller IO · ac3dd5bd
      Committed by Jens Axboe
      Currently we allocate an nvme_iod for each IO, which holds the sg
      list, prps, and other IO-related info. Set a threshold of 2 pages
      and/or 8KB of data, at or below which we can embed this in the
      per-command pdu in blk-mq. For any IO at or below NVME_INT_PAGES and
      NVME_INT_BYTES, we save a kmalloc and kfree.
      
      For higher IOPS, this saves up to 1% of CPU time.
      Signed-off-by: Jens Axboe <axboe@fb.com>
      Reviewed-by: Keith Busch <keith.busch@intel.com>
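The embedding threshold above can be sketched as a one-line check. This is an illustration under assumptions: the real driver uses NVME_INT_PAGES/NVME_INT_BYTES; here we hard-code the commit's 2 pages / 8KB (assuming a 4KB page), and `iod_fits_inline` is our name.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define INT_PAGES 2
#define INT_BYTES (2 * 4096)   /* 8KB, assuming a 4KB host page */

/* Small IOs keep their sg list and prps in the per-command pdu that
 * blk-mq already allocates, avoiding a kmalloc/kfree per IO. */
bool iod_fits_inline(size_t nbytes, unsigned nseg)
{
    return nbytes <= INT_BYTES && nseg <= INT_PAGES;
}
```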
  7. 05 Nov 2014, 3 commits
    • NVMe: Convert to blk-mq · a4aea562
      Committed by Matias Bjørling
      This converts the NVMe driver to a blk-mq request-based driver.
      
      The NVMe driver is currently bio-based and implements queue logic within
      itself.  By using blk-mq, a lot of these responsibilities can be moved
      and simplified.
      
      The patch is divided into the following blocks:
      
       * Per-command data and the cmdid have been moved into the struct
         request field. The cmdid_data can be retrieved using
         blk_mq_rq_to_pdu(), and id maintenance is now handled by blk-mq
         through the rq->tag field.
      
       * The logic for splitting bio's has been moved into the blk-mq layer.
         The driver instead notifies the block layer about limited gap support in
         SG lists.
      
       * Timeouts are handled by blk-mq, with the handling reimplemented
         within nvme_timeout(). This includes both abort handling and
         command cancellation.
      
       * Assignment of nvme queues to CPUs is replaced with the blk-mq
         version. The current blk-mq strategy is to assign the number of
         mapped queues and CPUs to provide synergy, while the nvme driver
         assigns as many nvme hw queues as possible. This can be
         implemented in blk-mq if needed.
      
       * NVMe queues are merged with the tags structure of blk-mq.
      
       * blk-mq takes care of setup/teardown of nvme queues and guards invalid
         accesses. Therefore, RCU-usage for nvme queues can be removed.
      
       * IO tracing and accounting are handled by blk-mq and therefore removed.
      
       * Queue suspension logic is replaced with the logic from the block
         layer.
      
      Contributions in this patch from:
      
        Sam Bradshaw <sbradshaw@micron.com>
        Jens Axboe <axboe@fb.com>
        Keith Busch <keith.busch@intel.com>
        Robert Nelson <rlnelson@google.com>
      Acked-by: Keith Busch <keith.busch@intel.com>
      Acked-by: Jens Axboe <axboe@fb.com>
      
      Updated for new ->queue_rq() prototype.
      Signed-off-by: Jens Axboe <axboe@fb.com>
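The per-command pdu mentioned in the first bullet is simply memory that blk-mq allocates directly behind each request; `blk_mq_rq_to_pdu()` is pointer arithmetic past the request. A minimal userspace analog, with mock structs standing in for the kernel's:

```c
#include <assert.h>
#include <stdlib.h>

/* Mock stand-ins for struct request and the driver's per-command data. */
struct request { int tag; /* command id, maintained by blk-mq */ };
struct nvme_cmd_pdu { int nents; };

/* Analog of blk_mq_rq_to_pdu(): the pdu lives immediately after the
 * request in the same allocation. */
static void *rq_to_pdu(struct request *rq)
{
    return rq + 1;
}

/* blk-mq sizes the allocation as sizeof(request) + cmd_size; we fake
 * both the allocation and the tag assignment here. */
struct request *alloc_rq_with_pdu(int tag)
{
    struct request *rq = malloc(sizeof(*rq) + sizeof(struct nvme_cmd_pdu));
    rq->tag = tag;
    return rq;
}
```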
    • NVMe: Mismatched host/device page size support · 1d090624
      Committed by Keith Busch
      Adds support for devices whose maximum page size is smaller than the
      host's. When we encounter such a host/device combination, the driver
      will split a host page into as many PRP entries as necessary for the
      device's page size capabilities. If the device's reported minimum
      page size is greater than the host's, the driver will not attempt to
      enable the device and will return an error instead.
      Signed-off-by: Keith Busch <keith.busch@intel.com>
      Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
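The split and the refusal case reduce to simple arithmetic on the page sizes (all powers of two). A hedged sketch with an illustrative function name, not the driver's code:

```c
#include <assert.h>
#include <stdint.h>

/* Returns the number of PRP entries needed per host page, or -1 when the
 * device cannot be enabled because its minimum page size exceeds the
 * host's. Illustrative helper; page sizes are powers of two. */
int prps_per_host_page(uint32_t host_page, uint32_t dev_min_page,
                       uint32_t dev_max_page)
{
    if (dev_min_page > host_page)
        return -1;                    /* refuse to enable the device */
    if (dev_max_page >= host_page)
        return 1;                     /* no split needed */
    return host_page / dev_max_page;  /* split into device-sized PRPs */
}
```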
    • NVMe: Async event request · 6fccf938
      Committed by Keith Busch
      Submits NVMe asynchronous event requests, up to the smaller of the
      controller maximum and the number of possible distinct event types
      (8). Events successfully returned by the controller are logged.
      Signed-off-by: Keith Busch <keith.busch@intel.com>
      Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
  8. 13 Jun 2014, 1 commit
    • NVMe: Fix hot cpu notification dead lock · f3db22fe
      Committed by Keith Busch
      There is a potential deadlock if a cpu hotplug event occurs during
      nvme probe, since probe registered with hot cpu notification. This
      fixes the race by having the module register for notification once,
      rather than having each device register during probe.
      
      The actual work is done in a scheduled work queue instead of in the
      notifier, since assigning IO queues can block if the driver creates
      additional queues.
      Signed-off-by: Keith Busch <keith.busch@intel.com>
      Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
  9. 04 Jun 2014, 1 commit
  10. 05 May 2014, 3 commits
  11. 11 Apr 2014, 4 commits
  12. 24 Mar 2014, 2 commits
    • NVMe: IOCTL path RCU protect queue access · 4f5099af
      Committed by Keith Busch
      This adds rcu-protected access to a queue in the nvme IOCTL path to
      fix potential races between a surprise removal and queue usage in
      nvme_submit_sync_cmd. The fix holds the rcu_read_lock() to prevent
      the nvme_queue from being freed while this path is executing; since
      the path cannot sleep under the lock, it will no longer wait for an
      available command id should they all be in use at the time a
      passthrough IOCTL request is received.
      Signed-off-by: Keith Busch <keith.busch@intel.com>
      Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
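The no-sleep-under-the-lock rule above can be sketched in userspace. The rcu_* macros below are no-op stand-ins so the shape compiles here; in the kernel they are the real RCU primitives, and the struct and function names are simplified inventions. The point shown: a full tag set fails fast with -EBUSY instead of blocking, and a surprise-removed queue yields -ENODEV.

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* No-op stand-ins for the kernel's RCU primitives (illustration only). */
#define rcu_read_lock()    do { } while (0)
#define rcu_read_unlock()  do { } while (0)
#define rcu_dereference(p) (p)

struct nvme_queue { int free_cmdids; };

int submit_sync_cmd(struct nvme_queue **qp)
{
    struct nvme_queue *nvmeq;
    int ret;

    rcu_read_lock();                 /* queue can't be freed under us */
    nvmeq = rcu_dereference(*qp);
    if (!nvmeq) {
        ret = -ENODEV;               /* surprise-removed */
    } else if (nvmeq->free_cmdids == 0) {
        ret = -EBUSY;                /* can't sleep waiting for a cmdid */
    } else {
        nvmeq->free_cmdids--;        /* issue the command (elided) */
        ret = 0;
    }
    rcu_read_unlock();
    return ret;
}
```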
    • NVMe: RCU protected access to io queues · 5a92e700
      Committed by Keith Busch
      This adds rcu-protected access to nvme_queue to fix a race between a
      surprise removal freeing the queue and a thread holding an open
      reference on an NVMe block device using that queue.
      
      The queues do not need to be rcu-protected during the initialization
      or shutdown phases, so I've added a helper function for raw
      dereferencing to get around the sparse errors.
      
      There is still a hole in the IOCTL path for the same problem, which is
      fixed in a subsequent patch.
      Signed-off-by: Keith Busch <keith.busch@intel.com>
      Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
  13. 07 Mar 2014, 1 commit
    • nvme: don't use PREPARE_WORK · 9ca97374
      Committed by Tejun Heo
      PREPARE_[DELAYED_]WORK() are being phased out.  They have few users
      and a nasty surprise in terms of reentrancy guarantee as workqueue
      considers work items to be different if they don't have the same work
      function.
      
      nvme_dev->reset_work is multiplexed across multiple work functions.
      Introduce nvme_reset_workfn(), which invokes nvme_dev->reset_workfn;
      always use it as the work function, and update the users to set the
      ->reset_workfn field instead of overriding the work function using
      PREPARE_WORK().
      
      It would probably be best to route this with other related updates
      through the workqueue tree.
      
      Compile tested.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Matthew Wilcox <willy@linux.intel.com>
      Cc: linux-nvme@lists.infradead.org
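The indirection Tejun describes keeps one fixed work function that dispatches through a ->reset_workfn pointer, instead of swapping the work function with PREPARE_WORK(). A userspace sketch with the kernel's work_struct machinery simplified away; the two do_* functions are invented examples of users setting the field:

```c
#include <assert.h>

struct nvme_dev;
typedef void (*workfn_t)(struct nvme_dev *);

struct nvme_dev {
    workfn_t reset_workfn;   /* users set this field... */
    int state;               /* illustrative observable effect */
};

/* ...and this single function is always the work item's handler, so the
 * workqueue's reentrancy guarantee keys on one stable work function. */
void nvme_reset_workfn(struct nvme_dev *dev)
{
    dev->reset_workfn(dev);
}

/* Hypothetical users of the multiplexed work item. */
void do_full_reset(struct nvme_dev *dev) { dev->state = 1; }
void do_probe_work(struct nvme_dev *dev) { dev->state = 2; }
```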
  14. 28 Jan 2014, 2 commits
  15. 17 Dec 2013, 2 commits
  16. 19 Nov 2013, 1 commit
  17. 04 Sep 2013, 3 commits
  18. 21 Jun 2013, 1 commit
  19. 08 May 2013, 1 commit
  20. 03 May 2013, 2 commits
  21. 17 Apr 2013, 1 commit