  1. 17 Oct, 2018 - 1 commit
  2. 02 Oct, 2018 - 3 commits
  3. 28 Sep, 2018 - 2 commits
  4. 08 Aug, 2018 - 1 commit
  5. 30 Jul, 2018 - 2 commits
  6. 28 Jul, 2018 - 3 commits
  7. 23 Jul, 2018 - 2 commits
  8. 20 Jul, 2018 - 1 commit
  9. 17 Jul, 2018 - 2 commits
    • nvme: don't enable AEN if not supported · fa441b71
      Authored by Weiping Zhang
      Avoid executing the set_feature command if there is no supported bit in
      Optional Asynchronous Events Supported (OAES).
      
      Fixes: c0561f82 ("nvme: submit AEN event configuration on startup")
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Weiping Zhang <zhangweiping@didichuxing.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
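      A standalone C model of the check above (not the driver code itself): the
      desired asynchronous-event bits are masked against the controller's OAES
      field, and the Set Features command is skipped when nothing remains. The
      bit positions and helper names are illustrative assumptions.

      #include <stdio.h>
      #include <stdint.h>

      /* Illustrative bit positions; the real values come from the NVMe
       * specification and the kernel's nvme.h. */
      #define AEN_CFG_NS_ATTR  (1u << 8)   /* namespace attribute notices */
      #define AEN_CFG_FW_ACT   (1u << 9)   /* firmware activation notices */
      #define AEN_WANTED_MASK  (AEN_CFG_NS_ATTR | AEN_CFG_FW_ACT)

      /* Stand-in for issuing Set Features (Asynchronous Event Configuration). */
      static void set_async_event_config(uint32_t events)
      {
          printf("Set Features(AEN config) = 0x%08x\n", events);
      }

      /* Mirror of the fix: mask the wanted events against the controller's
       * OAES field and skip the command when no supported bit remains. */
      static void configure_aen(uint32_t oaes)
      {
          uint32_t supported = AEN_WANTED_MASK & oaes;

          if (!supported) {
              printf("no supported AEN bits in OAES, skipping Set Features\n");
              return;
          }
          set_async_event_config(supported);
      }

      int main(void)
      {
          configure_aen(0);               /* nothing supported: command skipped */
          configure_aen(AEN_CFG_NS_ATTR); /* only namespace notices enabled */
          return 0;
      }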
    • nvme: ensure forward progress during Admin passthru · cf39a6bc
      Authored by Scott Bauer
      If the controller supports effects and goes down during the passthru admin
      command, we will deadlock during namespace revalidation.
      
      [  363.488275] INFO: task kworker/u16:5:231 blocked for more than 120 seconds.
      [  363.488290]       Not tainted 4.17.0+ #2
      [  363.488296] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
      [  363.488303] kworker/u16:5   D    0   231      2 0x80000000
      [  363.488331] Workqueue: nvme-reset-wq nvme_reset_work [nvme]
      [  363.488338] Call Trace:
      [  363.488385]  schedule+0x75/0x190
      [  363.488396]  rwsem_down_read_failed+0x1c3/0x2f0
      [  363.488481]  call_rwsem_down_read_failed+0x14/0x30
      [  363.488504]  down_read+0x1d/0x80
      [  363.488523]  nvme_stop_queues+0x1e/0xa0 [nvme_core]
      [  363.488536]  nvme_dev_disable+0xae4/0x1620 [nvme]
      [  363.488614]  nvme_reset_work+0xd1e/0x49d9 [nvme]
      [  363.488911]  process_one_work+0x81a/0x1400
      [  363.488934]  worker_thread+0x87/0xe80
      [  363.488955]  kthread+0x2db/0x390
      [  363.488977]  ret_from_fork+0x35/0x40
      
      Fixes: 84fef62d ("nvme: check admin passthru command effects")
      Signed-off-by: Scott Bauer <scott.bauer@intel.com>
      Reviewed-by: Keith Busch <keith.busch@linux.intel.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
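      A minimal standalone model of the idea, not the actual patch: the
      post-passthru revalidation only runs inline when the controller is live,
      and is otherwise deferred so the reset worker is never waited on. The
      state names and helpers are simplified assumptions.

      #include <stdio.h>
      #include <stdbool.h>

      /* Simplified controller states; the kernel enumerates more. */
      enum ctrl_state { CTRL_LIVE, CTRL_RESETTING, CTRL_DELETING };

      struct ctrl {
          enum ctrl_state state;
          bool scan_requested;      /* stand-in for queuing the async scan work */
      };

      /* Stand-in for the blocking revalidation shown hanging in the trace. */
      static void revalidate_namespaces(struct ctrl *c)
      {
          (void)c;
          printf("revalidating namespaces inline\n");
      }

      /* After a passthru admin command with side effects: only block on
       * revalidation when the controller is live, otherwise defer to the
       * async scan so the reset path can make forward progress. */
      static void passthru_end(struct ctrl *c, bool effects)
      {
          if (!effects)
              return;
          if (c->state == CTRL_LIVE) {
              revalidate_namespaces(c);
          } else {
              c->scan_requested = true;   /* picked up once reset completes */
              printf("controller not live, deferring rescan\n");
          }
      }

      int main(void)
      {
          struct ctrl c = { .state = CTRL_RESETTING, .scan_requested = false };

          passthru_end(&c, true);         /* would previously have blocked */
          c.state = CTRL_LIVE;
          passthru_end(&c, true);         /* safe to revalidate inline now */
          return 0;
      }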
  10. 22 Jun, 2018 - 1 commit
    • nvme-pci: limit max IO size and segments to avoid high order allocations · 943e942e
      Authored by Jens Axboe
      nvme requires an sg table allocation for each request. If the request
      is large, then the allocation can become quite large. For instance,
      with our default software settings of 1280KB IO size, we'll need
      10248 bytes of sg table. That turns into a 2nd order allocation,
      which we can't always guarantee. If we fail the allocation, blk-mq
      will retry it later. But there's no guarantee that we'll EVER be
      able to allocate that much contiguous memory.
      
      Limit the IO size such that we never need more than a single page
      of memory. That's a lot faster and more reliable. Then back that
      allocation with a mempool, so that we know we'll always be able
      to succeed the allocation at some point.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      Acked-by: Keith Busch <keith.busch@intel.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
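      The arithmetic behind the limit can be sketched as a standalone C
      program. The page, scatterlist-entry, and segment sizes below are
      illustrative assumptions; the mempool that backs the allocation in the
      real driver is only noted in a comment.

      #include <stdio.h>

      /* Illustrative sizes; the driver uses PAGE_SIZE and
       * sizeof(struct scatterlist) from the running kernel. */
      #define PAGE_SIZE_BYTES  4096u
      #define SG_ENTRY_BYTES     32u
      #define SEGMENT_BYTES    4096u   /* assume one page per data segment */

      int main(void)
      {
          /* How many scatterlist entries fit in one page of metadata? */
          unsigned int max_segs = PAGE_SIZE_BYTES / SG_ENTRY_BYTES;

          /* Cap the transfer size so the per-request sg table never exceeds
           * a single page, i.e. never needs a higher-order allocation. A
           * mempool of such single-page tables then guarantees the
           * allocation eventually succeeds. */
          unsigned int max_io_bytes = max_segs * SEGMENT_BYTES;

          printf("max segments per request: %u\n", max_segs);
          printf("max IO size per request : %u KiB\n", max_io_bytes / 1024u);

          /* For comparison, the 1280 KiB IO from the commit message needs
           * 320 entries, roughly 10 KiB of sg table - an order-2 allocation
           * with 4 KiB pages. */
          unsigned int big_io_segs = (1280u * 1024u) / SEGMENT_BYTES;
          printf("1280 KiB IO needs %u entries (%u bytes of sg table)\n",
                 big_io_segs, big_io_segs * SG_ENTRY_BYTES);
          return 0;
      }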
  11. 14 Jun, 2018 - 1 commit
  12. 13 Jun, 2018 - 1 commit
  13. 11 Jun, 2018 - 1 commit
  14. 09 Jun, 2018 - 1 commit
  15. 01 Jun, 2018 - 4 commits
  16. 31 May, 2018 - 1 commit
  17. 30 May, 2018 - 1 commit
  18. 25 May, 2018 - 2 commits
  19. 23 May, 2018 - 1 commit
  20. 19 May, 2018 - 1 commit
  21. 16 May, 2018 - 1 commit
  22. 12 May, 2018 - 1 commit
  23. 07 May, 2018 - 1 commit
    • nvme: fix use-after-free in nvme_free_ns_head · 12d9f070
      Authored by Jianchao Wang
      Currently only nvme_ctrl takes a reference on the nvme_subsystem;
      nvme_ns_head needs one as well. Otherwise nvme_free_ns_head can access
      nvme_subsystem.ns_ida after it has been freed by __nvme_release_subsystem,
      once all references to the nvme_subsystem have been released by
      nvme_free_ctrl. This can cause memory corruption.
      
       BUG: KASAN: use-after-free in radix_tree_next_chunk+0x9f/0x4b0
       Read of size 8 at addr ffff88036494d2e8 by task fio/1815
      
       CPU: 1 PID: 1815 Comm: fio Kdump: loaded Tainted: G        W         4.17.0-rc1+ #18
       Hardware name: LENOVO 10MLS0E339/3106, BIOS M1AKT22A 06/27/2017
       Call Trace:
        dump_stack+0x91/0xeb
        print_address_description+0x6b/0x290
        kasan_report+0x261/0x360
        radix_tree_next_chunk+0x9f/0x4b0
        ida_remove+0x8b/0x180
        ida_simple_remove+0x26/0x40
        nvme_free_ns_head+0x58/0xc0
        __blkdev_put+0x30a/0x3a0
        blkdev_close+0x44/0x50
        __fput+0x184/0x380
        task_work_run+0xaf/0xe0
        do_exit+0x501/0x1440
        do_group_exit+0x89/0x140
        __x64_sys_exit_group+0x28/0x30
        do_syscall_64+0x72/0x230
      Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Keith Busch <keith.busch@intel.com>
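      A minimal standalone refcount model of the fix (not the driver code):
      the ns_head takes its own reference on the subsystem when it is
      allocated and drops it only after it has finished touching ns_ida, so
      the subsystem cannot be freed underneath it. Names and helpers are
      simplified assumptions.

      #include <stdio.h>
      #include <stdlib.h>

      struct subsystem {
          int refs;               /* ns_ida lives next to this in the driver */
      };

      static struct subsystem *subsys_get(struct subsystem *s)
      {
          s->refs++;
          return s;
      }

      static void subsys_put(struct subsystem *s)
      {
          if (--s->refs == 0) {
              printf("subsystem freed (ns_ida gone)\n");
              free(s);
          }
      }

      struct ns_head {
          struct subsystem *subsys;
          int id;
      };

      static struct ns_head *alloc_ns_head(struct subsystem *s, int id)
      {
          struct ns_head *h = malloc(sizeof(*h));

          if (!h)
              return NULL;
          h->subsys = subsys_get(s);   /* the reference the fix adds */
          h->id = id;
          return h;
      }

      static void free_ns_head(struct ns_head *h)
      {
          printf("releasing id %d from ns_ida\n", h->id);  /* safe: subsys pinned */
          subsys_put(h->subsys);                           /* drop the ref last */
          free(h);
      }

      int main(void)
      {
          struct subsystem *s = calloc(1, sizeof(*s));
          struct ns_head *h;

          if (!s)
              return 1;
          s->refs = 1;                 /* the controller's reference */
          h = alloc_ns_head(s, 1);
          if (!h)
              return 1;
          subsys_put(s);               /* controller goes away first */
          free_ns_head(h);             /* still safe: the head holds a ref */
          return 0;
      }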
  24. 03 May, 2018 - 3 commits
  25. 12 Apr, 2018 - 2 commits
    • nvme: expand nvmf_check_if_ready checks · bb06ec31
      Authored by James Smart
      The nvmf_check_if_ready() checks that were added are very simplistic.
      As such, the routine allows a lot of cases to fail I/Os during windows
      of reset or re-connection. In cases where no multipath options are
      present, the error goes back to the caller - the filesystem or
      application. Not good.
      
      The common routine was rewritten and calling syntax slightly expanded
      so that per-transport is_ready routines don't need to be present.
      The transports now call the routine directly. The routine is now a
      fabrics routine rather than an inline function.
      
      The routine now looks at controller state to decide the action to
      take. Some states mandate io failure. Others define the condition where
      a command can be accepted.  When the decision is unclear, a generic
      queue-or-reject check is made to look for failfast or multipath ios and
      only fails the io if it is so marked. Otherwise, the io will be queued
      and wait for the controller state to resolve.
      
      Admin commands issued via ioctl share a live admin queue with commands
      from the transport for controller init. The ioctls could be intermixed
      with the initialization commands. It's possible for the ioctl cmd to
      be issued prior to the controller being enabled. To block this, the
      ioctl admin commands need to be distinguished from admin commands used
      for controller init. Added a USERCMD nvme_req(req)->rq_flags bit to
      reflect this division and set it on ioctl requests. As the
      nvmf_check_if_ready() routine is called prior to nvme_setup_cmd(),
      ensure that commands allocated by the ioctl path (actually anything
      in core.c) prep the nvme_req(req) before starting the io. This will
      preserve the USERCMD flag during execution and/or retry.
      Signed-off-by: James Smart <james.smart@broadcom.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
      Signed-off-by: Keith Busch <keith.busch@intel.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
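      A simplified standalone model of the queue-or-reject decision described
      above, written in the spirit of the rewritten routine rather than copied
      from it; the states, flags, and return values are assumptions for
      illustration.

      #include <stdio.h>
      #include <stdbool.h>

      /* Simplified controller states and per-request hints. */
      enum ctrl_state { CTRL_NEW, CTRL_CONNECTING, CTRL_LIVE,
                        CTRL_DELETING, CTRL_DEAD };

      struct request_ctx {
          bool failfast_or_multipath;  /* REQ_FAILFAST_* or a multipath bio */
          bool user_cmd;               /* USERCMD-style flag: ioctl origin */
          bool is_connect_cmd;         /* fabrics Connect during initialization */
      };

      enum verdict { ACCEPT, REQUEUE, FAIL };

      static enum verdict check_if_ready(enum ctrl_state state,
                                         const struct request_ctx *rq)
      {
          switch (state) {
          case CTRL_LIVE:
              return ACCEPT;
          case CTRL_DELETING:
          case CTRL_DEAD:
              return FAIL;                 /* these states mandate failure */
          case CTRL_NEW:
          case CTRL_CONNECTING:
              if (rq->is_connect_cmd)
                  return ACCEPT;           /* init traffic must get through */
              if (rq->user_cmd)
                  return REQUEUE;          /* ioctl admin cmd: wait for enable */
              return rq->failfast_or_multipath ? FAIL : REQUEUE;
          }
          return FAIL;
      }

      int main(void)
      {
          struct request_ctx ioctl_cmd = { .user_cmd = true };
          struct request_ctx mpath_io  = { .failfast_or_multipath = true };

          printf("ioctl during connect   : %d\n",
                 check_if_ready(CTRL_CONNECTING, &ioctl_cmd));
          printf("mpath io during connect: %d\n",
                 check_if_ready(CTRL_CONNECTING, &mpath_io));
          printf("plain io while live    : %d\n",
                 check_if_ready(CTRL_LIVE, &mpath_io));
          return 0;
      }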
    • nvme: Use admin command effects for admin commands · 62843c2e
      Authored by Keith Busch
      Signed-off-by: Keith Busch <keith.busch@intel.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>