Commit b0c057ca authored by Michael S. Tsirkin, committed by David S. Miller

vhost: fix a theoretical race in device cleanup

vhost_zerocopy_callback accesses VQ right after it drops a ubuf
reference.  In theory, this could race with device removal which waits
on the ubuf kref, and crash on use after free.

Do all accesses within rcu read side critical section, and synchronize
on release.

Since callbacks are always invoked from bh, synchronize_rcu_bh seems
enough and will help release complete a bit faster.
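The pattern the patch applies can be sketched in kernel-style C (a simplified illustration of the commit, with hypothetical function names; not the actual drivers/vhost/net.c code): the completion callback always runs in bh context, so an `rcu_read_lock_bh()` section on the reader side, paired with `synchronize_rcu_bh()` on the release path, guarantees the virtqueue cannot be freed while a callback is still touching it.

```c
/* Reader side: the zerocopy completion callback, always invoked
 * from bh context.  All virtqueue accesses happen inside the
 * rcu_read_lock_bh() critical section. */
static void zerocopy_callback_sketch(struct ubuf_info *ubuf, bool success)
{
	rcu_read_lock_bh();
	/* ... mark vq->heads[ubuf->desc] done, maybe queue the poll ... */
	rcu_read_unlock_bh();
}

/* Updater side: device release.  After the last ubuf reference is
 * dropped, wait for any in-flight bh callbacks to finish before
 * freeing the virtqueue memory. */
static void device_release_sketch(void)
{
	/* ... drop the ubuf kref ... */
	synchronize_rcu_bh();	/* no callbacks can be running past here */
	/* ... safe to free vq memory ... */
}
```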
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Parent 0ad8b480
@@ -308,6 +308,8 @@ static void vhost_zerocopy_callback(struct ubuf_info *ubuf, bool success)
 	struct vhost_virtqueue *vq = ubufs->vq;
 	int cnt;
 
+	rcu_read_lock_bh();
+
 	/* set len to mark this desc buffers done DMA */
 	vq->heads[ubuf->desc].len = success ?
 		VHOST_DMA_DONE_LEN : VHOST_DMA_FAILED_LEN;
@@ -322,6 +324,8 @@ static void vhost_zerocopy_callback(struct ubuf_info *ubuf, bool success)
 	 */
 	if (cnt <= 1 || !(cnt % 16))
 		vhost_poll_queue(&vq->poll);
+
+	rcu_read_unlock_bh();
 }
 
 /* Expects to be always run from workqueue - which acts as
@@ -799,6 +803,8 @@ static int vhost_net_release(struct inode *inode, struct file *f)
 		fput(tx_sock->file);
 	if (rx_sock)
 		fput(rx_sock->file);
+	/* Make sure no callbacks are outstanding */
+	synchronize_rcu_bh();
 	/* We do an extra flush before freeing memory,
 	 * since jobs can re-queue themselves. */
 	vhost_net_flush(n);