1. 02 Nov, 2018 1 commit
  2. 19 Oct, 2018 2 commits
  3. 18 Oct, 2018 5 commits
  4. 17 Oct, 2018 17 commits
  5. 09 Oct, 2018 3 commits
    • lightnvm: do no update csecs and sos on 1.2 · 6fd05cad
      Javier González authored
      1.2 devices expose their data and metadata sizes through the separate
      identify command. Make sure that the NVMe LBA format does not override
      these values (a sketch of the guard follows this entry).
      Signed-off-by: Javier González <javier@cnexlabs.com>
      Signed-off-by: Matias Bjørling <mb@lightnvm.io>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
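      A minimal sketch of the kind of guard this fix describes, assuming the
      geometry-update hook and field names used by the lightnvm/NVMe glue of
      that era (nvme_nvm_update_nvm_info(), geo->version, NVM_OCSSD_SPEC_12);
      treat the exact names as illustrative rather than a quote of the patch:

          /* Only 2.0 geometries take csecs/sos from the NVMe LBA format;
           * 1.2 already reported them through its own identify command. */
          void nvme_nvm_update_nvm_info(struct nvme_ns *ns)
          {
                  struct nvm_dev *ndev = ns->ndev;
                  struct nvm_geo *geo = &ndev->geo;

                  if (geo->version == NVM_OCSSD_SPEC_12)
                          return;

                  geo->csecs = 1 << ns->lba_shift; /* sector size from LBA format */
                  geo->sos = ns->ms;               /* out-of-band (metadata) size */
          }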
    • lightnvm: use internal allocation for chunk log page · 090ee26f
      Javier González authored
      The lightnvm subsystem provides helpers to retrieve chunk metadata,
      where the target needs to provide a buffer to store the metadata. An
      implicit assumption is that this buffer is physically contiguous, so it
      can be handed directly to the device. If the device exposes many chunks,
      the kmalloc of such a large contiguous buffer can fail, and with it
      target instance creation.

      This patch removes that assumption by implementing an internal buffer in
      the lightnvm subsystem to retrieve chunk metadata, so targets can use
      virtual memory allocations instead. Since this is a target API change,
      adapt pblk accordingly (a sketch of the pattern follows this entry).
      Signed-off-by: Javier González <javier@cnexlabs.com>
      Reviewed-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
      Signed-off-by: Matias Bjørling <mb@lightnvm.io>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
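      A minimal sketch of the internal-buffer pattern described above: the
      core allocates a bounded, physically contiguous buffer per call for the
      device transfer and copies the results into the target's buffer, which
      may now come from kvmalloc()/vmalloc(). Function names are illustrative;
      nvm_dev_op_get_chk_meta() is a hypothetical stand-in for the driver's
      actual get-log-page call:

          static int nvm_get_chunk_meta(struct nvm_dev *dev,
                                        struct nvm_chk_meta *meta,
                                        unsigned long start, unsigned long nchks)
          {
                  size_t max_len = PAGE_SIZE;   /* bounded contiguous slice */
                  struct nvm_chk_meta *buf;
                  int ret = 0;

                  buf = kmalloc(max_len, GFP_KERNEL);
                  if (!buf)
                          return -ENOMEM;

                  while (nchks) {
                          unsigned long step = min_t(unsigned long, nchks,
                                                     max_len / sizeof(*buf));

                          /* driver call that requires a contiguous buffer */
                          ret = nvm_dev_op_get_chk_meta(dev, buf, start, step);
                          if (ret)
                                  break;

                          /* destination may be a vmalloc()ed target buffer */
                          memcpy(meta, buf, step * sizeof(*buf));
                          meta += step;
                          start += step;
                          nchks -= step;
                  }

                  kfree(buf);
                  return ret;
          }

      With the copy done in the core, pblk can allocate its chunk-metadata
      buffer with vmalloc() instead of kmalloc().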
    • lightnvm: move bad block and chunk state logic to core · aff3fb18
      Matias Bjørling authored
      pblk implements two data paths for recovering line state: one for 1.2
      and another for 2.0. Instead of having pblk implement both, combine them
      in the core to reduce complexity and make the logic available to other
      targets.

      The new interface adheres to the 2.0 chunk definition, including
      managing open chunks with an active write pointer. To provide this
      interface, a 1.2 device recovers the state of its chunks by manually
      detecting whether a chunk is free/open/closed/offline and, if open,
      scanning the flash pages sequentially to find the next writable page
      (a sketch of the scan follows this entry). This process takes on average
      ~10 seconds on a device with 64 dies, 1024 blocks and 60us read access
      time. The scan could be parallelized, but that is left out for
      maintenance simplicity, as the 1.2 specification is deprecated. For 2.0
      devices, the logic is maintained internally by the drive and retrieved
      through the 2.0 interface.
      Signed-off-by: Matias Bjørling <mb@lightnvm.io>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
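      A minimal sketch of the 1.2 write-pointer scan described above; read_at()
      is a hypothetical synchronous read of ws_min sectors at a given offset,
      and the "empty page" status is shown here as -ENODATA. The nvm_chk_meta
      fields and NVM_CHK_ST_* states follow the 2.0 chunk definition, but the
      other names are illustrative:

          static int nvm_scan_chunk_wp(struct nvm_dev *dev, struct ppa_addr chk_ppa,
                                       struct nvm_chk_meta *meta)
          {
                  struct nvm_geo *geo = &dev->geo;
                  u64 wp;

                  /* walk the chunk sequentially until the first unwritten sectors */
                  for (wp = 0; wp < geo->clba; wp += geo->ws_min) {
                          int ret = read_at(dev, chk_ppa, wp);

                          if (ret == -ENODATA)    /* empty page: write pointer found */
                                  break;
                          if (ret)
                                  return ret;     /* real media error */
                  }

                  meta->wp = wp;
                  if (wp == geo->clba)
                          meta->state = NVM_CHK_ST_CLOSED;
                  else
                          meta->state = wp ? NVM_CHK_ST_OPEN : NVM_CHK_ST_FREE;

                  return 0;
          }

      Run once per chunk at target creation, this scan accounts for the ~10
      second bring-up on large 1.2 devices; 2.0 devices return the same
      information directly through the 2.0 interface.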
  6. 08 Oct, 2018 1 commit
  7. 05 Oct, 2018 1 commit
    • nvmet-rdma: use a private workqueue for delete · 2acf70ad
      Sagi Grimberg authored
      Queue deletion is done asynchronously when the last reference on the
      queue is dropped.  Thus, in order to make sure we don't over-allocate
      under a connect/disconnect storm, we let queue deletion complete before
      making forward progress.

      However, given that we flush the system_wq from rdma_cm context, which
      itself runs from a workqueue context, we can get a circular locking
      complaint [1]. Fix that by using a private workqueue for queue deletion
      (a sketch follows this entry).
      
      [1]:
      ======================================================
      WARNING: possible circular locking dependency detected
      4.19.0-rc4-dbg+ #3 Not tainted
      ------------------------------------------------------
      kworker/5:0/39 is trying to acquire lock:
      00000000a10b6db9 (&id_priv->handler_mutex){+.+.}, at: rdma_destroy_id+0x6f/0x440 [rdma_cm]
      
      but task is already holding lock:
      00000000331b4e2c ((work_completion)(&queue->release_work)){+.+.}, at: process_one_work+0x3ed/0xa20
      
      which lock already depends on the new lock.
      
      the existing dependency chain (in reverse order) is:
      
      -> #3 ((work_completion)(&queue->release_work)){+.+.}:
             process_one_work+0x474/0xa20
             worker_thread+0x63/0x5a0
             kthread+0x1cf/0x1f0
             ret_from_fork+0x24/0x30
      
      -> #2 ((wq_completion)"events"){+.+.}:
             flush_workqueue+0xf3/0x970
             nvmet_rdma_cm_handler+0x133d/0x1734 [nvmet_rdma]
             cma_ib_req_handler+0x72f/0xf90 [rdma_cm]
             cm_process_work+0x2e/0x110 [ib_cm]
             cm_req_handler+0x135b/0x1c30 [ib_cm]
             cm_work_handler+0x2b7/0x38cd [ib_cm]
             process_one_work+0x4ae/0xa20
      nvmet_rdma:nvmet_rdma_cm_handler: nvmet_rdma: disconnected (10): status 0 id 0000000040357082
             worker_thread+0x63/0x5a0
             kthread+0x1cf/0x1f0
             ret_from_fork+0x24/0x30
      nvme nvme0: Reconnecting in 10 seconds...
      
      -> #1 (&id_priv->handler_mutex/1){+.+.}:
             __mutex_lock+0xfe/0xbe0
             mutex_lock_nested+0x1b/0x20
             cma_ib_req_handler+0x6aa/0xf90 [rdma_cm]
             cm_process_work+0x2e/0x110 [ib_cm]
             cm_req_handler+0x135b/0x1c30 [ib_cm]
             cm_work_handler+0x2b7/0x38cd [ib_cm]
             process_one_work+0x4ae/0xa20
             worker_thread+0x63/0x5a0
             kthread+0x1cf/0x1f0
             ret_from_fork+0x24/0x30
      
      -> #0 (&id_priv->handler_mutex){+.+.}:
             lock_acquire+0xc5/0x200
             __mutex_lock+0xfe/0xbe0
             mutex_lock_nested+0x1b/0x20
             rdma_destroy_id+0x6f/0x440 [rdma_cm]
             nvmet_rdma_release_queue_work+0x8e/0x1b0 [nvmet_rdma]
             process_one_work+0x4ae/0xa20
             worker_thread+0x63/0x5a0
             kthread+0x1cf/0x1f0
             ret_from_fork+0x24/0x30
      
      Fixes: 777dc823 ("nvmet-rdma: occasionally flush ongoing controller teardown")
      Reported-by: Bart Van Assche <bvanassche@acm.org>
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
      Tested-by: Bart Van Assche <bvanassche@acm.org>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
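      A minimal sketch of the fix described above: queue deletion gets its own
      workqueue, so flushing it from the rdma_cm handler no longer creates a
      dependency on the system workqueue that the handler itself runs from.
      Names mirror the nvmet-rdma style but are illustrative:

          static struct workqueue_struct *nvmet_rdma_delete_wq;

          static int __init nvmet_rdma_init(void)
          {
                  nvmet_rdma_delete_wq = alloc_workqueue("nvmet-rdma-delete-wq",
                                                         WQ_UNBOUND | WQ_MEM_RECLAIM, 0);
                  if (!nvmet_rdma_delete_wq)
                          return -ENOMEM;
                  /* ... remaining module init ... */
                  return 0;
          }

          /* schedule the release work on the private queue, not system_wq */
          static void nvmet_rdma_queue_delete(struct nvmet_rdma_queue *queue)
          {
                  queue_work(nvmet_rdma_delete_wq, &queue->release_work);
          }

          /* throttle a connect storm by flushing only the private queue, so the
           * rdma_cm handler never waits on unrelated system_wq work items */
          static void nvmet_rdma_throttle_queue_deletes(void)
          {
                  flush_workqueue(nvmet_rdma_delete_wq);
          }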
  8. 03 Oct, 2018 1 commit
  9. 02 Oct, 2018 9 commits