Commit ca465e1f authored by Tao Liu, committed by Jason Gunthorpe

RDMA/cma: Fix listener leak in rdma_cma_listen_on_all() failure

If cma_listen_on_all() fails it leaves the per-device ID still on the
listen_list but the state is not set to RDMA_CM_ADDR_BOUND.

When the cmid is eventually destroyed, cma_cancel_listens() is not called due
to the wrong state; however, the per-device IDs are still holding the
refcount, preventing the ID from being destroyed and thus deadlocking:

 task:rping state:D stack:   0 pid:19605 ppid: 47036 flags:0x00000084
 Call Trace:
  __schedule+0x29a/0x780
  ? free_unref_page_commit+0x9b/0x110
  schedule+0x3c/0xa0
  schedule_timeout+0x215/0x2b0
  ? __flush_work+0x19e/0x1e0
  wait_for_completion+0x8d/0xf0
  _destroy_id+0x144/0x210 [rdma_cm]
  ucma_close_id+0x2b/0x40 [rdma_ucm]
  __destroy_id+0x93/0x2c0 [rdma_ucm]
  ? __xa_erase+0x4a/0xa0
  ucma_destroy_id+0x9a/0x120 [rdma_ucm]
  ucma_write+0xb8/0x130 [rdma_ucm]
  vfs_write+0xb4/0x250
  ksys_write+0xb5/0xd0
  ? syscall_trace_enter.isra.19+0x123/0x190
  do_syscall_64+0x33/0x40
  entry_SYSCALL_64_after_hwframe+0x44/0xa9

Ensure that cma_listen_on_all() atomically unwinds its action under the
lock during error.
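
In outline (a condensed sketch of the hunks below, with the body of the
listener-teardown loop elided), the unwinding is factored into a helper that
must be called with the global CMA lock held, and the error path of
cma_listen_on_all() now reuses that helper while it still holds the lock:

static void _cma_cancel_listens(struct rdma_id_private *id_priv)
{
	lockdep_assert_held(&lock);	/* caller already holds the lock */

	/* Leave listen_any_list so no new per-device listens are spawned. */
	list_del(&id_priv->list);

	while (!list_empty(&id_priv->listen_list)) {
		/* Detach each per-device listener and destroy it, dropping
		 * and re-taking the lock around rdma_destroy_id(). */
		...
	}
}

static void cma_cancel_listens(struct rdma_id_private *id_priv)
{
	mutex_lock(&lock);
	_cma_cancel_listens(id_priv);
	mutex_unlock(&lock);
}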

Fixes: c80a0c52 ("RDMA/cma: Add missing error handling of listen_id")
Link: https://lore.kernel.org/r/20210913093344.17230-1-thomas.liu@ucloud.cn
Signed-off-by: Tao Liu <thomas.liu@ucloud.cn>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Parent 2cc74e1e
--- a/drivers/infiniband/core/cma.c
+++ b/drivers/infiniband/core/cma.c
@@ -1746,15 +1746,16 @@ static void cma_cancel_route(struct rdma_id_private *id_priv)
 	}
 }
 
-static void cma_cancel_listens(struct rdma_id_private *id_priv)
+static void _cma_cancel_listens(struct rdma_id_private *id_priv)
 {
 	struct rdma_id_private *dev_id_priv;
 
+	lockdep_assert_held(&lock);
+
 	/*
 	 * Remove from listen_any_list to prevent added devices from spawning
 	 * additional listen requests.
 	 */
-	mutex_lock(&lock);
 	list_del(&id_priv->list);
 
 	while (!list_empty(&id_priv->listen_list)) {
@@ -1768,6 +1769,12 @@ static void cma_cancel_listens(struct rdma_id_private *id_priv)
 		rdma_destroy_id(&dev_id_priv->id);
 		mutex_lock(&lock);
 	}
+}
+
+static void cma_cancel_listens(struct rdma_id_private *id_priv)
+{
+	mutex_lock(&lock);
+	_cma_cancel_listens(id_priv);
 	mutex_unlock(&lock);
 }
 
@@ -2579,7 +2586,7 @@ static int cma_listen_on_all(struct rdma_id_private *id_priv)
 	return 0;
 
 err_listen:
-	list_del(&id_priv->list);
+	_cma_cancel_listens(id_priv);
 	mutex_unlock(&lock);
 	if (to_destroy)
 		rdma_destroy_id(&to_destroy->id);
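
For readers without kernel context, the failure mode can be illustrated with a
hypothetical userspace toy model (all names below are invented; it only mirrors
the reference-counting and state-gating logic described in the commit message,
not the actual rdma_cm code). Pre-patch, a failed listen-on-all leaves child
listeners holding references that destroy then waits on forever; post-patch,
the error path unwinds them immediately:

/* toy_cma_leak.c - hypothetical model, not kernel code */
#include <stdbool.h>
#include <stdio.h>

enum toy_state { TOY_IDLE, TOY_ADDR_BOUND };

struct toy_id {
	enum toy_state state;
	int refcount;	/* one reference held by each live child listener */
	int children;	/* per-device listeners still linked to this ID   */
};

/* Unwind all child listeners and drop their references on the parent. */
static void toy_cancel_listens(struct toy_id *id)
{
	while (id->children > 0) {
		id->children--;
		id->refcount--;
	}
}

/* Model of listen-on-all: create child listeners, then possibly fail. */
static int toy_listen_on_all(struct toy_id *id, bool fail, bool patched)
{
	for (int i = 0; i < 2; i++) {	/* pretend two RDMA devices exist */
		id->children++;
		id->refcount++;
	}
	if (fail) {
		if (patched)
			toy_cancel_listens(id);	/* fixed: unwind on error */
		return -1;	/* unpatched: children (and their refs) leak */
	}
	id->state = TOY_ADDR_BOUND;
	return 0;
}

/* Model of destroy: cleanup is gated on the ID state, mirroring the
 * behaviour described in the commit message. */
static void toy_destroy(struct toy_id *id)
{
	if (id->state == TOY_ADDR_BOUND)
		toy_cancel_listens(id);

	if (id->refcount > 0)
		printf("destroy would block forever: %d reference(s) never dropped\n",
		       id->refcount);
	else
		printf("destroy completes\n");
}

int main(void)
{
	struct toy_id broken = { TOY_IDLE, 0, 0 };
	struct toy_id fixed = { TOY_IDLE, 0, 0 };

	toy_listen_on_all(&broken, true, false);	/* pre-patch behaviour  */
	toy_destroy(&broken);

	toy_listen_on_all(&fixed, true, true);		/* post-patch behaviour */
	toy_destroy(&fixed);
	return 0;
}

Built with any C99 compiler, the model prints that the unpatched destroy would
block while the patched one completes.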