Commit 61a5ff15 authored by Amos Kong, committed by David S. Miller

tun: do not put self in waitq if doing a nonblock read

Perf shows a relatively high rate (about 8%) of lock contention in
spin_lock_irqsave() when running netperf between an external host and
a guest. It is mainly caused by contention between tun_do_read() and
tun_xmit_skb(), so this patch avoids putting the task on the
waitqueue when the read is nonblocking. After this patch, the rate
drops to 4%.
Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Amos Kong <akong@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Parent 6f7c156c
--- a/drivers/net/tun.c
+++ b/drivers/net/tun.c
@@ -817,7 +817,8 @@ static ssize_t tun_do_read(struct tun_struct *tun,
 
 	tun_debug(KERN_INFO, tun, "tun_chr_read\n");
 
-	add_wait_queue(&tun->wq.wait, &wait);
+	if (unlikely(!noblock))
+		add_wait_queue(&tun->wq.wait, &wait);
 	while (len) {
 		current->state = TASK_INTERRUPTIBLE;
 
@@ -848,7 +849,8 @@ static ssize_t tun_do_read(struct tun_struct *tun,
 	}
 
 	current->state = TASK_RUNNING;
-	remove_wait_queue(&tun->wq.wait, &wait);
+	if (unlikely(!noblock))
+		remove_wait_queue(&tun->wq.wait, &wait);
 
 	return ret;
 }
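
For readers without the full file at hand, below is a condensed sketch of the
read loop after this patch. It is an approximation, not the verbatim driver
source: the kiocb parameter, signal and link-state checks, and the details of
the skb-to-userspace copy are elided, and the helper tun_put_user() is shown
with a simplified call. What it illustrates is the point of the patch: a
nonblocking reader never calls add_wait_queue()/remove_wait_queue(), so it no
longer takes the waitqueue spinlock that the transmit path contends on.

/* Condensed sketch of tun_do_read() after this patch
 * (simplified; not the verbatim kernel source). */
static ssize_t tun_do_read(struct tun_struct *tun,
			   const struct iovec *iv, ssize_t len,
			   int noblock)
{
	DECLARE_WAITQUEUE(wait, current);
	struct sk_buff *skb;
	ssize_t ret = 0;

	/* Only blocking readers register for wake-ups; nonblocking
	 * readers skip the waitqueue and never touch its lock. */
	if (unlikely(!noblock))
		add_wait_queue(&tun->wq.wait, &wait);

	while (len) {
		current->state = TASK_INTERRUPTIBLE;

		skb = skb_dequeue(&tun->socket.sk->sk_receive_queue);
		if (!skb) {
			if (noblock) {
				ret = -EAGAIN;	/* nothing queued: fail fast */
				break;
			}
			/* (signal and link-state checks elided) */
			schedule();	/* sleep until the xmit path wakes us */
			continue;
		}
		ret = tun_put_user(tun, skb, iv, len);	/* copy frame out */
		kfree_skb(skb);
		break;
	}

	current->state = TASK_RUNNING;
	if (unlikely(!noblock))
		remove_wait_queue(&tun->wq.wait, &wait);

	return ret;
}

The wake-up side is unchanged; a nonblocking reader that gets -EAGAIN is
expected to learn about new packets via poll()/select() rather than by
sleeping on tun->wq.wait, which is the usual pattern for userspace consumers
of a tap file descriptor.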