1. 26 Jul, 2017 (1 commit)
  2. 20 Jul, 2017 (3 commits)
  3. 10 Jul, 2017 (2 commits)
  4. 06 Jul, 2017 (2 commits)
  5. 03 Jul, 2017 (1 commit)
  6. 02 Jul, 2017 (4 commits)
  7. 29 Jun, 2017 (1 commit)
  8. 28 Jun, 2017 (6 commits)
  9. 15 Jun, 2017 (9 commits)
  10. 09 Jun, 2017 (2 commits)
    • blk-mq: switch ->queue_rq return value to blk_status_t · fc17b653
      Committed by Christoph Hellwig
      Use the same values for request completion errors as for the return
      value from ->queue_rq.  BLK_STS_RESOURCE is special-cased to cause
      a requeue, and all the others are completed as-is. (A sketch of the
      dispatch handling follows this entry.)
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      fc17b653
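      A minimal sketch of the new contract, assuming a simplified dispatch
      loop rather than the real blk-mq dispatch path; demo_dispatch() is a
      hypothetical helper, while the blk_mq_* calls and blk_status_t values
      are the kernel API this commit targets.

        #include <linux/blk-mq.h>

        /* Simplified illustration: ->queue_rq() now returns blk_status_t.
         * BLK_STS_RESOURCE requeues the request; any other non-OK status
         * completes it with that status.
         */
        static void demo_dispatch(struct blk_mq_hw_ctx *hctx, struct request *rq)
        {
                struct blk_mq_queue_data bd = { .rq = rq, .last = true };
                blk_status_t ret = hctx->queue->mq_ops->queue_rq(hctx, &bd);

                switch (ret) {
                case BLK_STS_OK:
                        break;                            /* driver owns the request now */
                case BLK_STS_RESOURCE:
                        blk_mq_requeue_request(rq, true); /* out of resources, retry later */
                        break;
                default:
                        blk_mq_end_request(rq, ret);      /* complete with the error status */
                }
        }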
    • block: introduce new block status code type · 2a842aca
      Committed by Christoph Hellwig
      Currently we use normal Linux errno values in the block layer, and while
      we accept any error, a few have overloaded magic meanings.  This patch
      instead introduces a new blk_status_t value that holds block layer
      specific status codes and explicitly explains their meaning.  Helpers to
      convert to and from the previous special meanings are provided for now,
      but I suspect we want to get rid of them in the long run - those drivers
      that have an errno input (e.g. networking) usually get errnos that don't
      know about the special block layer overloads, and similarly returning
      them to userspace will usually return something that strictly speaking
      isn't correct for file system operations, but that's left as an exercise
      for later.
      
      For now the set of errors is a very limited set that closely corresponds
      to the previous overloaded errno values, but there is some low hanging
      fruit to improve it.
      
      blk_status_t (ab)uses the sparse __bitwise annotations to allow for
      sparse typechecking, so that we can easily catch places passing the
      wrong values. (A sketch of the annotation follows this entry.)
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      2a842aca
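      A hedged sketch of the new type, modeled on what this commit adds to
      include/linux/blk_types.h; the specific numeric values are quoted from
      memory and should be treated as illustrative.

        #include <linux/types.h>

        /* __bitwise makes sparse (make C=1) treat blk_status_t as a distinct
         * type: passing a plain int or errno where blk_status_t is expected
         * triggers a warning, so misuse is caught at compile-check time.
         */
        typedef u8 __bitwise blk_status_t;

        #define BLK_STS_OK        0
        #define BLK_STS_NOTSUPP   ((__force blk_status_t)1)
        #define BLK_STS_TIMEOUT   ((__force blk_status_t)2)
        #define BLK_STS_RESOURCE  ((__force blk_status_t)9)
        #define BLK_STS_IOERR     ((__force blk_status_t)10)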
  11. 07 Jun, 2017 (1 commit)
    • nvme-pci: fix multiple ctrl removal scheduling · 82b057ca
      Committed by Rakesh Pandit
      Commit c5f6ce97 tries to address multiple resets but fails because
      work_busy() doesn't provide any synchronization and can race.  This is
      easily reproducible, as can be seen in the WARNING below, which is
      triggered by the line:
      
      WARN_ON(dev->ctrl.state == NVME_CTRL_RESETTING)
      
      Allowing multiple resets can also result in multiple controller
      removals if different conditions inside nvme_reset_work fail, which
      might deadlock on device_release_driver.
      
      [  480.327007] WARNING: CPU: 3 PID: 150 at drivers/nvme/host/pci.c:1900 nvme_reset_work+0x36c/0xec0
      [  480.327008] Modules linked in: rfcomm fuse nf_conntrack_netbios_ns nf_conntrack_broadcast...
      [  480.327044]  btusb videobuf2_core ghash_clmulni_intel snd_hwdep cfg80211 acer_wmi hci_uart..
      [  480.327065] CPU: 3 PID: 150 Comm: kworker/u16:2 Not tainted 4.12.0-rc1+ #13
      [  480.327065] Hardware name: Acer Predator G9-591/Mustang_SLS, BIOS V1.10 03/03/2016
      [  480.327066] Workqueue: nvme nvme_reset_work
      [  480.327067] task: ffff880498ad8000 task.stack: ffffc90002218000
      [  480.327068] RIP: 0010:nvme_reset_work+0x36c/0xec0
      [  480.327069] RSP: 0018:ffffc9000221bdb8 EFLAGS: 00010246
      [  480.327070] RAX: 0000000000460000 RBX: ffff880498a98128 RCX: dead000000000200
      [  480.327070] RDX: 0000000000000001 RSI: ffff8804b1028020 RDI: ffff880498a98128
      [  480.327071] RBP: ffffc9000221be50 R08: 0000000000000000 R09: 0000000000000000
      [  480.327071] R10: ffffc90001963ce8 R11: 000000000000020d R12: ffff880498a98000
      [  480.327072] R13: ffff880498a53500 R14: ffff880498a98130 R15: ffff880498a98128
      [  480.327072] FS:  0000000000000000(0000) GS:ffff8804c1cc0000(0000) knlGS:0000000000000000
      [  480.327073] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      [  480.327074] CR2: 00007ffcf3c37f78 CR3: 0000000001e09000 CR4: 00000000003406e0
      [  480.327074] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
      [  480.327075] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
      [  480.327075] Call Trace:
      [  480.327079]  ? __switch_to+0x227/0x400
      [  480.327081]  process_one_work+0x18c/0x3a0
      [  480.327082]  worker_thread+0x4e/0x3b0
      [  480.327084]  kthread+0x109/0x140
      [  480.327085]  ? process_one_work+0x3a0/0x3a0
      [  480.327087]  ? kthread_park+0x60/0x60
      [  480.327102]  ret_from_fork+0x2c/0x40
      [  480.327103] Code: e8 5a dc ff ff 85 c0 41 89 c1 0f.....
      
      This patch addresses the problem by using the controller state to
      decide whether a reset should be queued or not, as state changes are
      synchronized using the controller spinlock.  cancel_work_sync is also
      used to make sure remove cancels the reset_work and waits for it to
      finish.  This patch also changes the return value from -ENODEV to the
      more appropriate -EBUSY if nvme_reset fails to change state. (A sketch
      follows this entry.)
      
      Fixes: c5f6ce97 ("nvme: don't schedule multiple resets")
      Signed-off-by: Rakesh Pandit <rakesh@tuxera.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      82b057ca
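      A hedged sketch of the state-gated reset described above; the names
      follow the nvme PCI driver of that era, but the bodies are simplified
      and reconstructed rather than quoted from the patch.

        /* Only one caller can win the RESETTING transition: state changes
         * are serialized inside nvme_change_ctrl_state(), so a second
         * reset attempt fails cleanly instead of racing.
         */
        static int nvme_reset(struct nvme_dev *dev)
        {
                if (!nvme_change_ctrl_state(&dev->ctrl, NVME_CTRL_RESETTING))
                        return -EBUSY;  /* was -ENODEV before this patch */
                if (!queue_work(nvme_workq, &dev->reset_work))
                        return -EBUSY;
                return 0;
        }

        static void nvme_remove(struct pci_dev *pdev)
        {
                struct nvme_dev *dev = pci_get_drvdata(pdev);

                nvme_change_ctrl_state(&dev->ctrl, NVME_CTRL_DELETING);
                cancel_work_sync(&dev->reset_work); /* wait out an in-flight reset */
                /* ... remaining teardown unchanged ... */
        }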
  12. 26 May, 2017 (3 commits)
  13. 21 May, 2017 (1 commit)
  14. 02 May, 2017 (1 commit)
  15. 21 Apr, 2017 (3 commits)
    • nvme/pci: Poll CQ on timeout · 7776db1c
      Committed by Keith Busch
      If an IO timeout occurs, it's helpful to know whether the controller
      failed to post a completion or the driver missed an interrupt. While
      we never expect the latter, this patch makes it possible to tell the
      difference so we don't have to guess. (A sketch of the poll-on-timeout
      path follows this entry.)
      Signed-off-by: Keith Busch <keith.busch@intel.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Tested-by: Johannes Thumshirn <jthumshirn@suse.de>
      Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
      7776db1c
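      A hedged sketch of the poll-on-timeout idea; __nvme_poll() is the
      helper this patch introduces to consume any completions the controller
      has already posted, and the surrounding details are simplified.

        static enum blk_eh_timer_return nvme_timeout(struct request *req, bool reserved)
        {
                struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
                struct nvme_queue *nvmeq = iod->nvmeq;

                /* If polling finds the completion, the controller did post
                 * it and we merely missed the interrupt: log that fact and
                 * let the polled completion finish the request.
                 */
                if (__nvme_poll(nvmeq, req->tag)) {
                        dev_warn(nvmeq->dev->ctrl.device,
                                 "I/O %d QID %d timeout, completion polled\n",
                                 req->tag, nvmeq->qid);
                        return BLK_EH_HANDLED;
                }

                /* Otherwise the controller really didn't complete the
                 * command; fall through to the usual abort/reset handling.
                 */
                return BLK_EH_RESET_TIMER;
        }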
    • nvme: improve performance for virtual NVMe devices · f9f38e33
      Committed by Helen Koike
      This change provides a mechanism to reduce the number of MMIO doorbell
      writes for the NVMe driver. When running in a virtualized environment
      like QEMU, the cost of an MMIO write is quite hefty. The main idea of
      the patch is to provide the device two memory locations:
       1) to store the doorbell values so they can be looked up without the
          doorbell MMIO write
       2) to store an event index.
      I believe the doorbell value is obvious; the event index, not so much.
      Similar to the virtio specification, the virtual device can tell the
      driver (guest OS) not to write MMIO unless it is writing past this
      value.
      
      FYI: doorbell values are written by the nvme driver (guest OS) and the
      event index is written by the virtual device (host OS).
      
      The patch implements a new admin command that communicates where these
      two memory locations reside. If the command fails, the nvme driver
      works as before, without any optimizations. (A sketch of the
      shadow-doorbell check follows this entry.)
      
      Contributions:
        Eric Northup <digitaleric@google.com>
        Frank Swiderski <fes@google.com>
        Ted Tso <tytso@mit.edu>
        Keith Busch <keith.busch@intel.com>
      
      Just to give an idea of the performance boost with the vendor
      extension: running fio [1], a stock NVMe driver gets about 200K read
      IOPS; with my vendor patch I get about 1000K read IOPS. This was run
      against a null device, i.e. the backing device simply returned success
      on every read IO request.
      
      [1] Running on a 4 core machine:
        fio --time_based --name=benchmark --runtime=30
        --filename=/dev/nvme0n1 --nrfiles=1 --ioengine=libaio --iodepth=32
        --direct=1 --invalidate=1 --verify=0 --verify_fatal=0 --numjobs=4
        --rw=randread --blocksize=4k --randrepeat=false
      Signed-off-by: Rob Nelson <rlnelson@google.com>
      [mlin: port for upstream]
      Signed-off-by: Ming Lin <mlin@kernel.org>
      [koike: updated for upstream]
      Signed-off-by: Helen Koike <helen.koike@collabora.co.uk>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Keith Busch <keith.busch@intel.com>
      f9f38e33
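      A hedged sketch of the shadow-doorbell update, modeled on the
      nvme_dbbuf_* helpers this patch adds; simplified for illustration and
      reconstructed from memory rather than quoted.

        /* True if new_idx passed the device-advertised event index since the
         * last ring, using wrap-safe u16 arithmetic (same trick as virtio).
         */
        static inline bool nvme_dbbuf_need_event(u16 event_idx, u16 new_idx, u16 old)
        {
                return (u16)(new_idx - event_idx - 1) < (u16)(new_idx - old);
        }

        /* Update the shadow doorbell; return true only if the real MMIO
         * doorbell still needs to be written.
         */
        static bool nvme_dbbuf_update_and_check_event(u16 value,
                                                      u32 *dbbuf_db, u32 *dbbuf_ei)
        {
                u16 old_value;

                if (!dbbuf_db)
                        return true;       /* no shadow buffer: always ring */

                wmb();                     /* queue entry visible before doorbell */
                old_value = *dbbuf_db;
                *dbbuf_db = value;         /* publish new value to shared memory */

                return nvme_dbbuf_need_event(*dbbuf_ei, value, old_value);
        }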
    • nvme/pci: Don't set reserved SQ create flags · 81c1cd98
      Committed by Keith Busch
      The QPRIO field is only valid if weighted round robin arbitration is
      used, and this driver doesn't enable that controller configuration
      option. (A sketch follows this entry.)
      Signed-off-by: Keith Busch <keith.busch@intel.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      81c1cd98
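      A hedged sketch of the change, reconstructed from memory of the patch
      to adapter_alloc_sq(); the flag names are the nvme.h constants.

        /* Before: the QPRIO bits were set even though WRR arbitration is
         * never enabled, leaving a reserved field non-zero:
         *
         *     int flags = NVME_QUEUE_PHYS_CONTIG | NVME_SQ_PRIO_MEDIUM;
         */
        int flags = NVME_QUEUE_PHYS_CONTIG;   /* leave reserved QPRIO zero */

        c.create_sq.sq_flags = cpu_to_le16(flags);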