Commit 326f36e9 authored by John Heffner, committed by David S. Miller

[TCP]: receive buffer growth limiting with mixed MTU

This is a patch for discussion addressing some receive buffer growth issues.
It is partially related to last week's thread "Possible BUG in IPv4 TCP window
handling...".

Specifically, it addresses an interaction between rcvbuf moderation (receiver
autotuning) and rcv_ssthresh.  The problem occurs when small packets are sent
to a receiver with a larger MTU.  (A very common case for me is a host with a
1500-byte MTU sending to a host with a 9k MTU.)  In such a case, the
rcv_ssthresh code targets a window size corresponding to filling up the
current rcvbuf, without taking into account that the new rcvbuf moderation may
increase the rcvbuf size.

One hunk makes rcv_ssthresh use tcp_rmem[2] as the size target rather than the
current rcvbuf.  The other changes the behavior when the socket overflows its
memory bounds with in-order data, so that it tries to grow rcvbuf (as it
already does with out-of-order data).

These changes should help my mixed-MTU problem, and I think they should also
help the case from last week's thread.  (In both cases, though, tcp_rmem[2]
still needs to be set much larger than the TCP window.)  One open question is
whether this is too aggressive about trying to increase rcvbuf when the system
is under memory stress.

Originally-from: John Heffner <jheffner@psc.edu>
Signed-off-by: Stephen Hemminger <shemminger@osdl.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Parent 9772efb9
@@ -234,7 +234,7 @@ static int __tcp_grow_window(const struct sock *sk, struct tcp_sock *tp,
 {
 	/* Optimize this! */
 	int truesize = tcp_win_from_space(skb->truesize)/2;
-	int window = tcp_full_space(sk)/2;
+	int window = tcp_win_from_space(sysctl_tcp_rmem[2])/2;
 
 	while (tp->rcv_ssthresh <= window) {
 		if (truesize <= skb->len)
@@ -327,37 +327,18 @@ static void tcp_init_buffer_space(struct sock *sk)
 static void tcp_clamp_window(struct sock *sk, struct tcp_sock *tp)
 {
 	struct inet_connection_sock *icsk = inet_csk(sk);
-	struct sk_buff *skb;
-	unsigned int app_win = tp->rcv_nxt - tp->copied_seq;
-	int ofo_win = 0;
 
 	icsk->icsk_ack.quick = 0;
 
-	skb_queue_walk(&tp->out_of_order_queue, skb) {
-		ofo_win += skb->len;
-	}
-
-	/* If overcommit is due to out of order segments,
-	 * do not clamp window. Try to expand rcvbuf instead.
-	 */
-	if (ofo_win) {
-		if (sk->sk_rcvbuf < sysctl_tcp_rmem[2] &&
-		    !(sk->sk_userlocks & SOCK_RCVBUF_LOCK) &&
-		    !tcp_memory_pressure &&
-		    atomic_read(&tcp_memory_allocated) < sysctl_tcp_mem[0])
-			sk->sk_rcvbuf = min(atomic_read(&sk->sk_rmem_alloc),
-					    sysctl_tcp_rmem[2]);
-	}
-	if (atomic_read(&sk->sk_rmem_alloc) > sk->sk_rcvbuf) {
-		app_win += ofo_win;
-		if (atomic_read(&sk->sk_rmem_alloc) >= 2 * sk->sk_rcvbuf)
-			app_win >>= 1;
-		if (app_win > icsk->icsk_ack.rcv_mss)
-			app_win -= icsk->icsk_ack.rcv_mss;
-		app_win = max(app_win, 2U*tp->advmss);
-
+	if (sk->sk_rcvbuf < sysctl_tcp_rmem[2] &&
+	    !(sk->sk_userlocks & SOCK_RCVBUF_LOCK) &&
+	    !tcp_memory_pressure &&
+	    atomic_read(&tcp_memory_allocated) < sysctl_tcp_mem[0]) {
+		sk->sk_rcvbuf = min(atomic_read(&sk->sk_rmem_alloc),
+				    sysctl_tcp_rmem[2]);
+	}
+	if (atomic_read(&sk->sk_rmem_alloc) > sk->sk_rcvbuf)
 		tp->rcv_ssthresh = min(tp->window_clamp, 2U*tp->advmss);
-	}
 }
 
 /* Receiver "autotuning" code.
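
For context only, not part of the patch: in kernels of this era, tcp_full_space() is simply tcp_win_from_space() applied to the socket's current sk_rcvbuf, so the first hunk changes the rcv_ssthresh target from the window achievable with the buffer the socket has now to the window achievable with the largest buffer autotuning may grant (tcp_rmem[2]). The sketch below paraphrases the include/net/tcp.h helpers of that period; treat it as an approximation rather than the verbatim source, and note it only makes sense in kernel context.

/* Approximate 2.6-era helpers from include/net/tcp.h (sketch, not verbatim).
 * tcp_adv_win_scale apportions receive buffer space between the advertised
 * window and per-skb bookkeeping overhead.
 */
static inline int tcp_win_from_space(int space)
{
	return sysctl_tcp_adv_win_scale <= 0 ?
		(space >> (-sysctl_tcp_adv_win_scale)) :
		space - (space >> sysctl_tcp_adv_win_scale);
}

/* Window usable if the current rcvbuf were completely filled.  The patch
 * stops using this as the rcv_ssthresh target and uses
 * tcp_win_from_space(sysctl_tcp_rmem[2]), the autotuning ceiling, instead.
 */
static inline int tcp_full_space(const struct sock *sk)
{
	return tcp_win_from_space(sk->sk_rcvbuf);
}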