1. 21 Jun, 2019 (2 commits)
  2. 18 May, 2019 (1 commit)
  3. 01 May, 2019 (1 commit)
  4. 14 Mar, 2019 (1 commit)
  5. 20 Feb, 2019 (3 commits)
  6. 06 Feb, 2019 (1 commit)
  7. 04 Feb, 2019 (1 commit)
  8. 10 Jan, 2019 (1 commit)
  9. 19 Dec, 2018 (1 commit)
  10. 13 Dec, 2018 (2 commits)
  11. 12 Dec, 2018 (1 commit)
  12. 08 Dec, 2018 (5 commits)
  13. 01 Dec, 2018 (1 commit)
  14. 09 Nov, 2018 (1 commit)
  15. 18 Oct, 2018 (1 commit)
  16. 02 Oct, 2018 (1 commit)
  17. 28 Sep, 2018 (1 commit)
  18. 30 Jul, 2018 (1 commit)
  19. 28 Jul, 2018 (3 commits)
  20. 23 Jul, 2018 (2 commits)
  21. 22 Jun, 2018 (1 commit)
    • nvme-pci: limit max IO size and segments to avoid high order allocations · 943e942e
      Authored by Jens Axboe
      nvme requires an sg table allocation for each request. If the request
      is large, then the allocation can become quite large. For instance,
      with our default software settings of 1280KB IO size, we'll need
      10248 bytes of sg table. That turns into a 2nd order allocation,
      which we can't always guarantee. If we fail the allocation, blk-mq
      will retry it later. But there's no guarantee that we'll EVER be
      able to allocate that much contiguous memory.
      
      Limit the IO size such that we never need more than a single page
      of memory. That's a lot faster and more reliable. Then back that
      allocation with a mempool, so that we know the allocation will
      always succeed eventually (see the sketch after this entry).
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      Acked-by: Keith Busch <keith.busch@intel.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      943e942e
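
      A minimal sketch of the approach described above, not the actual patch:
      the constants, the nvme_iod_mempool variable, and the helper names are
      assumptions, and the struct nvme_ctrl fields come from the driver's
      private nvme.h.

      #include <linux/mempool.h>
      #include <linux/scatterlist.h>
      #include <linux/slab.h>

      /* Illustrative caps (assumed values): keep the worst-case scatterlist
       * metadata for one request small enough to come from a single page. */
      #define NVME_MAX_KB_SZ  4096
      #define NVME_MAX_SEGS   127

      static mempool_t *nvme_iod_mempool;     /* assumed per-device pool */

      static int nvme_setup_iod_mempool(int numa_node)
      {
              /* Worst-case sg table size for one capped request. */
              size_t alloc_size = sizeof(struct scatterlist) * NVME_MAX_SEGS;

              /*
               * Back the per-request sg allocation with a mempool so that,
               * even under memory pressure, at least one allocation can
               * always be satisfied and the driver keeps making progress.
               */
              nvme_iod_mempool = mempool_create_node(1,
                              mempool_kmalloc, mempool_kfree,
                              (void *)alloc_size, GFP_KERNEL, numa_node);
              return nvme_iod_mempool ? 0 : -ENOMEM;
      }

      static void nvme_cap_request_size(struct nvme_ctrl *ctrl)
      {
              /* Advertise reduced limits so the block layer never builds a
               * request needing more sg entries than one page can hold. */
              ctrl->max_hw_sectors = NVME_MAX_KB_SZ << 1;  /* KB -> 512B sectors */
              ctrl->max_segments = NVME_MAX_SEGS;
      }

      A pool size of one is enough here: the mempool only has to guarantee
      forward progress, so in the worst case requests are satisfied one at a
      time from the reserved element.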
  22. 14 Jun, 2018 (1 commit)
  23. 09 Jun, 2018 (1 commit)
  24. 01 Jun, 2018 (3 commits)
  25. 23 May, 2018 (1 commit)
    • nvme: fix lockdep warning in nvme_mpath_clear_current_path · 978628ec
      Authored by Johannes Thumshirn
      When running blktest's nvme/005 with a lockdep enabled kernel the test
      case fails due to the following lockdep splat in dmesg:
      
       =============================
       WARNING: suspicious RCU usage
       4.17.0-rc5 #881 Not tainted
       -----------------------------
       drivers/nvme/host/nvme.h:457 suspicious rcu_dereference_check() usage!
      
       other info that might help us debug this:
      
       rcu_scheduler_active = 2, debug_locks = 1
       3 locks held by kworker/u32:5/1102:
        #0:         (ptrval) ((wq_completion)"nvme-wq"){+.+.}, at: process_one_work+0x152/0x5c0
        #1:         (ptrval) ((work_completion)(&ctrl->scan_work)){+.+.}, at: process_one_work+0x152/0x5c0
        #2:         (ptrval) (&subsys->lock#2){+.+.}, at: nvme_ns_remove+0x43/0x1c0 [nvme_core]
      
      The only caller of nvme_mpath_clear_current_path() is nvme_ns_remove(),
      which holds the subsys lock, so this is likely a false positive. But by
      using rcu_access_pointer() we tell RCU and lockdep that we are only
      after the pointer value (see the sketch after this entry).
      
      Fixes: 32acab31 ("nvme: implement multipath access to nvme subsystems")
      Signed-off-by: Johannes Thumshirn <jthumshirn@suse.de>
      Suggested-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Keith Busch <keith.busch@intel.com>
      978628ec
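
      A minimal sketch of the RCU pattern the fix relies on, using simplified
      stand-in types rather than the real nvme_ns_head layout:

      #include <linux/rcupdate.h>

      struct demo_path;                               /* illustrative type */

      struct demo_head {
              struct demo_path __rcu *current_path;   /* illustrative field */
      };

      static void demo_clear_current_path(struct demo_head *head,
                                          struct demo_path *p)
      {
              /*
               * Only the pointer value is compared, nothing is dereferenced,
               * so rcu_access_pointer() is enough and needs neither
               * rcu_read_lock() nor an update-side lock annotation. A plain
               * rcu_dereference() here would trip the "suspicious
               * rcu_dereference_check() usage" splat when called without
               * rcu_read_lock() held.
               */
              if (p == rcu_access_pointer(head->current_path))
                      rcu_assign_pointer(head->current_path, NULL);
      }

      rcu_access_pointer() returns the pointer without implying a read-side
      critical section, which is exactly right when the pointer is only
      compared or tested for NULL and never dereferenced.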
  26. 19 May, 2018 (1 commit)
  27. 12 May, 2018 (1 commit)
    • nvme: add quirk to force medium priority for SQ creation · 9abd68ef
      Authored by Jens Axboe
      Some P3100 drives have a bug where they think WRRU (weighted round robin)
      is always enabled, even though the host doesn't set it. Since they think
      it's enabled, they also look at the submission queue creation priority. We
      used to set that to MEDIUM by default, but that was removed in commit
      81c1cd98. This causes various issues on that drive. Add a quirk to
      still set MEDIUM priority for that controller (see the sketch after
      this entry).
      
      Fixes: 81c1cd98 ("nvme/pci: Don't set reserved SQ create flags")
      Cc: stable@vger.kernel.org
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Keith Busch <keith.busch@intel.com>
      9abd68ef
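
      A sketch of how such a quirk is typically wired into the PCI driver. The
      quirk name matches the commit, but the helper functions, the simplified
      flags calculation, and the exact device ID are assumptions for
      illustration; NVME_QUIRK_MEDIUM_PRIO_SQ lives in the driver's private
      nvme.h.

      #include <linux/nvme.h>
      #include <linux/pci.h>

      /* Tag the affected controller with the quirk in the PCI ID table
       * (device ID 0xf1a5 is an assumption for the P3100 family). */
      static const struct pci_device_id demo_nvme_id_table[] = {
              { PCI_VDEVICE(INTEL, 0xf1a5),
                      .driver_data = NVME_QUIRK_MEDIUM_PRIO_SQ, },
              { 0, }
      };

      /* When building the create-SQ admin command, keep MEDIUM priority
       * for controllers that carry the quirk. */
      static u16 demo_sq_flags(unsigned long quirks)
      {
              u16 flags = NVME_QUEUE_PHYS_CONTIG;

              /* Affected drives evaluate the SQ priority field even though
               * the host never enabled weighted round robin, so keep
               * setting MEDIUM for them. */
              if (quirks & NVME_QUIRK_MEDIUM_PRIO_SQ)
                      flags |= NVME_SQ_PRIO_MEDIUM;

              return flags;   /* ends up in c.create_sq.sq_flags as le16 */
      }

      Keeping this behind a quirk bit means well-behaved controllers still get
      the default flags, while the affected drive keeps the priority value it
      silently depends on.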