Commit 31432412 authored by David S. Miller

[TCP]: Fix stretch ACK performance killer when doing ucopy.

When we are doing ucopy, we try to defer the ACK generation to
cleanup_rbuf().  This works very well most of the time, but if the
ucopy prequeue is large, this ACKing behavior kills performance.

With TSO, it is possible to fill the prequeue so large that by the
time the ACK is sent and gets back to the sender, most of the window
has emptied of data and performance suffers significantly.

This behavior does help in some cases, so we should think about
re-enabling this trick in the future, using some kind of limit in
order to avoid the bug case.
Signed-off-by: David S. Miller <davem@davemloft.net>
Parent e16fa6b9
@@ -4355,16 +4355,7 @@ int tcp_rcv_established(struct sock *sk, struct sk_buff *skb,
 				goto no_ack;
 			}
 
-			if (eaten) {
-				if (tcp_in_quickack_mode(tp)) {
-					tcp_send_ack(sk);
-				} else {
-					tcp_send_delayed_ack(sk);
-				}
-			} else {
-				__tcp_ack_snd_check(sk, 0);
-			}
-
+			__tcp_ack_snd_check(sk, 0);
 no_ack:
 			if (eaten)
 				__kfree_skb(skb);
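[Editor's note] For context, both code paths now funnel into the receive-side ACK check, shown below in simplified form. This is a standalone, compilable sketch of the kind of decision __tcp_ack_snd_check() makes in kernels of that era, not the kernel source itself; struct rcv_state, its fields, and ack_snd_check() are illustrative names invented for this example.

#include <stdbool.h>
#include <stdio.h>

/* Illustrative stand-in for the receiver state the ACK check
 * consults; these are simplified fields, not the real struct tcp_sock. */
struct rcv_state {
	unsigned int unacked_bytes;   /* bytes received but not yet ACKed    */
	unsigned int rcv_mss;         /* estimated peer segment size         */
	bool window_advanced;         /* window update worth announcing      */
	bool quickack;                /* in quick-ACK mode                   */
	bool ofo_queued;              /* out-of-order data queued            */
};

/* Simplified model of the decision: ACK immediately when more than one
 * full-sized segment is outstanding (avoiding stretch ACKs), when in
 * quick-ACK mode, or when out-of-order data is queued; otherwise defer
 * to a delayed ACK. */
static void ack_snd_check(const struct rcv_state *st, bool ofo_possible)
{
	if ((st->unacked_bytes > st->rcv_mss && st->window_advanced) ||
	    st->quickack ||
	    (ofo_possible && st->ofo_queued))
		printf("send ACK now\n");
	else
		printf("schedule delayed ACK\n");
}

int main(void)
{
	/* A large ucopy prequeue leaves many unacked bytes: the check
	 * above ACKs promptly instead of letting the sender's window
	 * drain while a delayed ACK timer runs. */
	struct rcv_state st = {
		.unacked_bytes = 8 * 1460,
		.rcv_mss = 1460,
		.window_advanced = true,
	};
	ack_snd_check(&st, false);
	return 0;
}

Under the removed code, the "eaten" path skipped this shared check and sent a delayed ACK whenever quick-ACK mode was off, so a large prequeue could drain most of the sender's window before any ACK went out; routing it through the common check means a backlog of more than one full-sized segment triggers an immediate ACK.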