Commit 916e6d1a authored by Eric Dumazet, committed by David S. Miller

tcp: defer xmit timer reset in tcp_xmit_retransmit_queue()

As hinted in a prior change ("tcp: refine tcp_pacing_delay()
for very low pacing rates"), it is probably best to arm
the xmit timer only once all the packets have been scheduled,
rather than when the head of the rtx queue has been re-sent.

This matters for flows with extremely low pacing rates,
since their tp->tcp_wstamp_ns could be far in the future.

Note that the regular xmit path has a stronger limit
in tcp_small_queue_check(), meaning it is less likely to
go beyond the pacing horizon.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Parent 8dc242ad
@@ -3112,6 +3112,7 @@ void tcp_xmit_retransmit_queue(struct sock *sk)
 	const struct inet_connection_sock *icsk = inet_csk(sk);
 	struct sk_buff *skb, *rtx_head, *hole = NULL;
 	struct tcp_sock *tp = tcp_sk(sk);
+	bool rearm_timer = false;
 	u32 max_segs;
 	int mib_idx;
@@ -3134,7 +3135,7 @@ void tcp_xmit_retransmit_queue(struct sock *sk)
 		segs = tp->snd_cwnd - tcp_packets_in_flight(tp);
 		if (segs <= 0)
-			return;
+			break;
 		sacked = TCP_SKB_CB(skb)->sacked;
 		/* In case tcp_shift_skb_data() have aggregated large skbs,
 		 * we need to make sure not sending too bigs TSO packets
@@ -3159,10 +3160,10 @@ void tcp_xmit_retransmit_queue(struct sock *sk)
 			continue;
 
 		if (tcp_small_queue_check(sk, skb, 1))
-			return;
+			break;
 
 		if (tcp_retransmit_skb(sk, skb, segs))
-			return;
+			break;
 
 		NET_ADD_STATS(sock_net(sk), mib_idx, tcp_skb_pcount(skb));
@@ -3171,10 +3172,13 @@ void tcp_xmit_retransmit_queue(struct sock *sk)
 		if (skb == rtx_head &&
 		    icsk->icsk_pending != ICSK_TIME_REO_TIMEOUT)
-			tcp_reset_xmit_timer(sk, ICSK_TIME_RETRANS,
-					     inet_csk(sk)->icsk_rto,
-					     TCP_RTO_MAX);
 	}
+	if (rearm_timer)
+		tcp_reset_xmit_timer(sk, ICSK_TIME_RETRANS,
+				     inet_csk(sk)->icsk_rto,
+				     TCP_RTO_MAX);
 }
 
 /* We allow to exceed memory limits for FIN packets to expedite
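
To make the new control flow easier to see outside the kernel, here is a minimal user-space C sketch of the pattern this commit applies: the loop records the intent to re-arm in a rearm_timer flag (and bails out with break instead of return), and the timer is armed once, only after the whole queue has been walked. Every name below (struct pkt, arm_retransmit_timer, retransmit_queue) is illustrative, not a kernel API.

#include <stdbool.h>
#include <stdio.h>

struct pkt {
	int id;
	bool sendable;	/* stand-in for the cwnd/pacing checks passing */
};

/* Stand-in for tcp_reset_xmit_timer(): just report the re-arm. */
static void arm_retransmit_timer(void)
{
	printf("retransmit timer armed\n");
}

static void retransmit_queue(const struct pkt *q, int n)
{
	bool rearm_timer = false;

	for (int i = 0; i < n; i++) {
		if (!q[i].sendable)
			break;		/* was an early "return" before this change */
		printf("retransmitted pkt %d\n", q[i].id);
		if (i == 0)		/* head of the rtx queue was re-sent */
			rearm_timer = true;
	}
	/* Deferred: armed only after all packets have been scheduled. */
	if (rearm_timer)
		arm_retransmit_timer();
}

int main(void)
{
	const struct pkt q[] = { { 1, true }, { 2, true }, { 3, false } };

	retransmit_queue(q, 3);
	return 0;
}

Because the early exits are now break rather than return, every path out of the loop reaches the single deferred re-arm point, which is the property the commit message relies on for flows whose pacing timestamps sit far in the future.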