1. 07 Dec 2018, 1 commit
  2. 09 Nov 2018, 1 commit
  3. 18 Oct 2018, 2 commits
    • nvmet: Optionally use PCI P2P memory · c6925093
      Committed by Logan Gunthorpe
      Create a configfs attribute in each nvme-fabrics namespace to enable P2P
      memory use.  The attribute may be enabled (with a boolean) or a specific
      P2P device may be given (with the device's PCI name).
      
      When enabled, the namespace will ensure the underlying block device
      supports P2P and is compatible with any specified P2P device.  If no device
      was specified, it will ensure that compatible P2P memory exists somewhere
      in the system.  Enabling a namespace with P2P memory will fail with EINVAL
      (and an appropriate dmesg error) if any of these conditions are not met.
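
      For illustration, the enable-time checks might look roughly like the sketch
      below.  The helper names (blk_queue_pci_p2pdma(), pci_p2pdma_distance()) and
      the use_p2pmem/p2p_dev fields are assumptions modeled on the PCI P2PDMA API
      of that era, not a copy of the actual patch.

        #include <linux/blkdev.h>
        #include <linux/pci-p2pdma.h>

        /* Hypothetical sketch of the enable-time validation described above. */
        static int nvmet_p2pmem_ns_enable_sketch(struct nvmet_ns *ns)
        {
            if (!ns->use_p2pmem)
                return 0;

            /* The underlying block device must advertise P2P DMA support. */
            if (!ns->bdev || !blk_queue_pci_p2pdma(ns->bdev->bd_queue)) {
                pr_err("peer-to-peer DMA is not supported by %s\n",
                       ns->device_path);
                return -EINVAL;
            }

            /* A specifically requested provider must be DMA-reachable from
             * the namespace's device. */
            if (ns->p2p_dev &&
                pci_p2pdma_distance(ns->p2p_dev,
                                    disk_to_dev(ns->bdev->bd_disk), true) < 0)
                return -EINVAL;

            return 0;
        }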
      
      Once a controller is set up on a specific port, the P2P device to use for
      each namespace will be found and stored in a radix tree by namespace ID.
      When memory is allocated for a request, the tree is used to look up the P2P
      device to allocate memory against.  If no device is in the tree (because no
      appropriate device was found), or if allocation of P2P memory fails, the
      allocation falls back to regular memory.
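
      A simplified sketch of that lookup-with-fallback path follows; the
      p2p_ns_map and p2p_dev fields are assumptions that approximate the patch,
      while the allocation helpers come from the PCI P2PDMA and scatterlist APIs.

        #include <linux/radix-tree.h>
        #include <linux/pci-p2pdma.h>
        #include <linux/scatterlist.h>

        /* Sketch only: look up the P2P device stored for this namespace ID and
         * fall back to regular memory if none is found or P2P allocation fails. */
        static int nvmet_req_alloc_sgl_sketch(struct nvmet_req *req)
        {
            struct pci_dev *p2p_dev = NULL;

            if (req->sq->ctrl && req->ns)
                p2p_dev = radix_tree_lookup(&req->sq->ctrl->p2p_ns_map,
                                            req->ns->nsid);

            if (p2p_dev) {
                req->sg = pci_p2pmem_alloc_sgl(p2p_dev, &req->sg_cnt,
                                               req->transfer_len);
                if (req->sg) {
                    req->p2p_dev = p2p_dev;
                    return 0;
                }
                /* P2P allocation failed: fall through to regular memory. */
            }

            req->sg = sgl_alloc(req->transfer_len, GFP_KERNEL, &req->sg_cnt);
            return req->sg ? 0 : -ENOMEM;
        }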
      Signed-off-by: Stephen Bates <sbates@raithlin.com>
      Signed-off-by: Steve Wise <swise@opengridcomputing.com>
      [hch: partial rewrite of the initial code]
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
    • nvmet: Introduce helper functions to allocate and free request SGLs · 5b2322e4
      Committed by Logan Gunthorpe
      Add helpers to allocate and free the SGL in a struct nvmet_req:
      
        int nvmet_req_alloc_sgl(struct nvmet_req *req)
        void nvmet_req_free_sgl(struct nvmet_req *req)
      
      This will be expanded in a future patch to implement peer-to-peer memory
      DMAs and is intended to be common to all target drivers.

      The new helpers are used in nvmet-rdma.  Since req.transfer_len is used as
      the length of the SGL, it is now set earlier and cleared on any error.  It
      also appears unnecessary to accumulate the length, as the map_sgl functions
      should only ever be called once per request.
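
      For illustration, a transport driver would use the helpers roughly as in the
      hedged sketch below (not the nvmet-rdma code itself); it mirrors the
      transfer_len handling described above.

        /* Sketch: set transfer_len before allocating, clear it again on error. */
        static u16 nvmet_map_data_sketch(struct nvmet_req *req, u32 data_len)
        {
            req->transfer_len = data_len;       /* used as the SGL length */

            if (nvmet_req_alloc_sgl(req) < 0) {
                req->transfer_len = 0;          /* cleared on any error */
                return NVME_SC_INTERNAL;
            }

            /* ... map req->sg for DMA / RDMA here ... */
            return 0;
        }

        static void nvmet_unmap_data_sketch(struct nvmet_req *req)
        {
            /* ... unmap ... */
            nvmet_req_free_sgl(req);
        }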
      Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Acked-by: Sagi Grimberg <sagi@grimberg.me>
  4. 17 Oct 2018, 1 commit
  5. 05 Oct 2018, 1 commit
    • nvmet-rdma: use a private workqueue for delete · 2acf70ad
      Committed by Sagi Grimberg
      Queue deletion is done asynchronously when the last reference on the queue
      is dropped.  Thus, in order to make sure we don't over-allocate under a
      connect/disconnect storm, we let queue deletion complete before making
      forward progress.
      
      However, given that we flush the system_wq from rdma_cm context, which
      itself runs from a workqueue context, we can hit a circular locking
      complaint [1].  Fix that by using a private workqueue for queue deletion;
      a sketch follows the trace below.
      
      [1]:
      ======================================================
      WARNING: possible circular locking dependency detected
      4.19.0-rc4-dbg+ #3 Not tainted
      ------------------------------------------------------
      kworker/5:0/39 is trying to acquire lock:
      00000000a10b6db9 (&id_priv->handler_mutex){+.+.}, at: rdma_destroy_id+0x6f/0x440 [rdma_cm]
      
      but task is already holding lock:
      00000000331b4e2c ((work_completion)(&queue->release_work)){+.+.}, at: process_one_work+0x3ed/0xa20
      
      which lock already depends on the new lock.
      
      the existing dependency chain (in reverse order) is:
      
      -> #3 ((work_completion)(&queue->release_work)){+.+.}:
             process_one_work+0x474/0xa20
             worker_thread+0x63/0x5a0
             kthread+0x1cf/0x1f0
             ret_from_fork+0x24/0x30
      
      -> #2 ((wq_completion)"events"){+.+.}:
             flush_workqueue+0xf3/0x970
             nvmet_rdma_cm_handler+0x133d/0x1734 [nvmet_rdma]
             cma_ib_req_handler+0x72f/0xf90 [rdma_cm]
             cm_process_work+0x2e/0x110 [ib_cm]
             cm_req_handler+0x135b/0x1c30 [ib_cm]
             cm_work_handler+0x2b7/0x38cd [ib_cm]
             process_one_work+0x4ae/0xa20
      nvmet_rdma:nvmet_rdma_cm_handler: nvmet_rdma: disconnected (10): status 0 id 0000000040357082
             worker_thread+0x63/0x5a0
             kthread+0x1cf/0x1f0
             ret_from_fork+0x24/0x30
      nvme nvme0: Reconnecting in 10 seconds...
      
      -> #1 (&id_priv->handler_mutex/1){+.+.}:
             __mutex_lock+0xfe/0xbe0
             mutex_lock_nested+0x1b/0x20
             cma_ib_req_handler+0x6aa/0xf90 [rdma_cm]
             cm_process_work+0x2e/0x110 [ib_cm]
             cm_req_handler+0x135b/0x1c30 [ib_cm]
             cm_work_handler+0x2b7/0x38cd [ib_cm]
             process_one_work+0x4ae/0xa20
             worker_thread+0x63/0x5a0
             kthread+0x1cf/0x1f0
             ret_from_fork+0x24/0x30
      
      -> #0 (&id_priv->handler_mutex){+.+.}:
             lock_acquire+0xc5/0x200
             __mutex_lock+0xfe/0xbe0
             mutex_lock_nested+0x1b/0x20
             rdma_destroy_id+0x6f/0x440 [rdma_cm]
             nvmet_rdma_release_queue_work+0x8e/0x1b0 [nvmet_rdma]
             process_one_work+0x4ae/0xa20
             worker_thread+0x63/0x5a0
             kthread+0x1cf/0x1f0
             ret_from_fork+0x24/0x30
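
      A minimal sketch of the fix, assuming a module-scoped workqueue; the
      nvmet_rdma_delete_wq name and the workqueue flags below are approximations,
      not the verbatim patch.

        #include <linux/workqueue.h>

        static struct workqueue_struct *nvmet_rdma_delete_wq;

        static int __init nvmet_rdma_init_sketch(void)
        {
            /* A private workqueue, so flushing it from rdma_cm's workqueue
             * context no longer creates a dependency on system_wq. */
            nvmet_rdma_delete_wq = alloc_workqueue("nvmet-rdma-delete-wq",
                                                   WQ_UNBOUND | WQ_MEM_RECLAIM, 0);
            return nvmet_rdma_delete_wq ? 0 : -ENOMEM;
        }

        static void nvmet_rdma_queue_disconnect_sketch(struct nvmet_rdma_queue *queue)
        {
            /* was: scheduling &queue->release_work on system_wq */
            queue_work(nvmet_rdma_delete_wq, &queue->release_work);
        }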
      
      Fixes: 777dc823 ("nvmet-rdma: occasionally flush ongoing controller teardown")
      Reported-by: Bart Van Assche <bvanassche@acm.org>
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
      Tested-by: Bart Van Assche <bvanassche@acm.org>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
  6. 06 Sep 2018, 1 commit
    • nvmet-rdma: fix possible bogus dereference under heavy load · 8407879c
      Committed by Sagi Grimberg
      Currently we always repost the recv buffer before we send a response
      capsule back to the host.  Since ordering is not guaranteed for send
      and recv completions, it is possible that we will receive a new request
      from the host before we get a send completion for the response capsule.
      
      Today we pre-allocate twice as many rsps as the queue length, but in
      reality, under heavy load, nothing really prevents the gap from growing
      until we exhaust all of our rsps.
      
      To fix this, if we don't have any pre-allocated rsps left, we dynamically
      allocate an rsp and make sure to free it when we are done.  If, under
      memory pressure, we fail to allocate an rsp, we silently drop the command
      and wait for the host to retry.
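
      The get/put path described above could look roughly like the sketch below;
      the allocated flag and the exact locking are assumptions modeled on this
      description, not the exact patch.

        /* Sketch: prefer a pre-allocated rsp, fall back to kzalloc under load,
         * and free only the dynamically allocated ones on completion. */
        static struct nvmet_rdma_rsp *
        nvmet_rdma_get_rsp_sketch(struct nvmet_rdma_queue *queue)
        {
            struct nvmet_rdma_rsp *rsp;
            unsigned long flags;

            spin_lock_irqsave(&queue->rsps_lock, flags);
            rsp = list_first_entry_or_null(&queue->free_rsps,
                                           struct nvmet_rdma_rsp, free_list);
            if (rsp)
                list_del(&rsp->free_list);
            spin_unlock_irqrestore(&queue->rsps_lock, flags);

            if (!rsp) {
                /* Pre-allocated pool exhausted: allocate one dynamically. */
                rsp = kzalloc(sizeof(*rsp), GFP_KERNEL);
                if (!rsp)
                    return NULL;        /* drop silently; the host will retry */
                rsp->allocated = true;
            }
            return rsp;
        }

        static void nvmet_rdma_put_rsp_sketch(struct nvmet_rdma_rsp *rsp,
                                              struct nvmet_rdma_queue *queue)
        {
            unsigned long flags;

            if (rsp->allocated) {
                kfree(rsp);
                return;
            }
            spin_lock_irqsave(&queue->rsps_lock, flags);
            list_add_tail(&rsp->free_list, &queue->free_rsps);
            spin_unlock_irqrestore(&queue->rsps_lock, flags);
        }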
      Reported-by: Steve Wise <swise@opengridcomputing.com>
      Tested-by: Steve Wise <swise@opengridcomputing.com>
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
      [hch: dropped a superfluous assignment]
      Signed-off-by: Christoph Hellwig <hch@lst.de>
  7. 25 Jul 2018, 1 commit
  8. 23 Jul 2018, 3 commits
  9. 19 Jun 2018, 1 commit
  10. 26 Mar 2018, 5 commits
  11. 08 Jan 2018, 2 commits
  12. 07 Jan 2018, 1 commit
  13. 11 Nov 2017, 2 commits
  14. 18 Aug 2017, 1 commit
  15. 28 Jun 2017, 2 commits
  16. 21 May 2017, 1 commit
  17. 04 Apr 2017, 3 commits
  18. 17 Mar 2017, 1 commit
  19. 23 Feb 2017, 2 commits
  20. 26 Jan 2017, 1 commit
  21. 15 Dec 2016, 1 commit
  22. 06 Dec 2016, 2 commits
  23. 14 Nov 2016, 3 commits
  24. 24 Sep 2016, 1 commit