tcp: be more careful in tcp_fragment()
mainline inclusion
from mainline-v5.2
commit b617158dc096709d8600c53b6052144d12b89fab
category: bugfix
bugzilla: 18679
CVE: NA
-------------------------------------------------
Some applications set tiny SO_SNDBUF values and expect
TCP to just work. Recent patches to address CVE-2019-11478
broke them in case of losses, since retransmits might
be prevented.
We should allow these flows to make progress.
This patch allows the first and last skb in retransmit queue
to be split even if memory limits are hit.
It also adds some room due to the fact that tcp_sendmsg()
and tcp_sendpage() might overshoot sk_wmem_queued by about one full
TSO skb (64KB size). Note this allowance was already present
in stable backports for kernels < 4.15.
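As a rough sketch, the relaxed check in tcp_fragment() follows the
mainline v5.2 form below; the queue selector and helper names assume
the >= 4.15 rtx rbtree layout and may differ in a given backport:

	/* tcp_sendmsg() can overshoot sk_wmem_queued by one full size skb.
	 * We need some allowance to not penalize applications setting small
	 * SO_SNDBUF values.
	 * Also allow first and last skb in retransmit queue to be split.
	 */
	limit = sk->sk_sndbuf + 2 * SKB_TRUESIZE(GSO_MAX_SIZE);
	if (unlikely((sk->sk_wmem_queued >> 1) > limit &&
		     tcp_queue != TCP_FRAG_IN_WRITE_QUEUE &&
		     skb != tcp_rtx_queue_head(sk) &&
		     skb != tcp_rtx_queue_tail(sk))) {
		NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPWQUEUETOOBIG);
		return -ENOMEM;
	}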
Note for < 4.15 backports:
tcp_rtx_queue_tail() will probably look like:
static inline struct sk_buff *tcp_rtx_queue_tail(const struct sock *sk)
{
	struct sk_buff *skb = tcp_send_head(sk);

	return skb ? tcp_write_queue_prev(sk, skb) : tcp_write_queue_tail(sk);
}
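For comparison, on >= 4.15 kernels (where the retransmit queue is an
rbtree) the helper added by this patch is expected to reduce to
something like:

static inline struct sk_buff *tcp_rtx_queue_tail(const struct sock *sk)
{
	return skb_rb_last(&sk->tcp_rtx_queue);
}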
Fixes: f070ef2ac667 ("tcp: tcp_fragment() should apply sane memory limits")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Andrew Prout <aprout@ll.mit.edu>
Tested-by: Andrew Prout <aprout@ll.mit.edu>
Tested-by: Jonathan Lemon <jonathan.lemon@gmail.com>
Tested-by: Michal Kubecek <mkubecek@suse.cz>
Acked-by: Neal Cardwell <ncardwell@google.com>
Acked-by: Yuchung Cheng <ycheng@google.com>
Acked-by: Christoph Paasch <cpaasch@apple.com>
Cc: Jonathan Looney <jtl@netflix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Wenan Mao <maowenan@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>