Commit b5941f06 authored by Paolo Abeni, committed by Jakub Kicinski

mptcp: fix sk_forward_memory corruption on retransmission

MPTCP sk_forward_memory handling is a bit special: the field is
protected by the msk socket spin_lock rather than by the plain
socket lock.

Currently we have a code path that updates the field without holding
the relevant lock:

__mptcp_retrans() -> __mptcp_clean_una_wakeup()

Several helpers in __mptcp_clean_una_wakeup() update sk_forward_alloc,
possibly corrupting that field, as reported by Matthieu.
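To see how an unlocked update corrupts the field, consider the
userspace sketch below: two threads increment the same counter, but one
path skips the lock, so read-modify-write updates can be lost. This is
purely illustrative; the names and the pthread lock are stand-ins, not
kernel code.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long fwd_alloc;	/* stands in for sk_forward_alloc */

/* Well-behaved path: updates the field under the lock. */
static void *locked_path(void *arg)
{
	(void)arg;
	for (int i = 0; i < 1000000; i++) {
		pthread_mutex_lock(&lock);
		fwd_alloc++;
		pthread_mutex_unlock(&lock);
	}
	return NULL;
}

/* Buggy path, analogous to the retransmit path before this fix: it
 * touches the field without taking the lock. */
static void *unlocked_path(void *arg)
{
	(void)arg;
	for (int i = 0; i < 1000000; i++)
		fwd_alloc++;	/* unprotected read-modify-write */
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, locked_path, NULL);
	pthread_create(&b, NULL, unlocked_path, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	/* Typically prints less than 2000000: updates were lost. */
	printf("fwd_alloc = %ld (expected 2000000)\n", fwd_alloc);
	return 0;
}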

Address the issue by providing and using a new variant of the blamed
function which explicitly acquires the msk spin lock.
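The shape of the fix is the common locked/unlocked helper pair: keep
the double-underscore variant that assumes the lock is already held,
and add a wrapper that acquires and releases the lock around it. Below
is a minimal userspace sketch of that pattern with illustrative names;
the kernel code uses mptcp_data_lock()/mptcp_data_unlock(), which take
sk->sk_lock.slock.

#include <pthread.h>

struct fake_msk {
	pthread_mutex_t data_lock;	/* stands in for the msk data lock */
	long forward_alloc;		/* guarded by data_lock only */
};

/* Double-underscore variant: the caller must already hold data_lock. */
static void __clean_una_wakeup(struct fake_msk *msk)
{
	msk->forward_alloc = 0;		/* touches the guarded field */
}

/* Lock-acquiring wrapper, analogous to the new mptcp_clean_una_wakeup():
 * paths that do not already hold the lock call this instead. */
static void clean_una_wakeup(struct fake_msk *msk)
{
	pthread_mutex_lock(&msk->data_lock);
	__clean_una_wakeup(msk);
	pthread_mutex_unlock(&msk->data_lock);
}

int main(void)
{
	struct fake_msk msk = {
		.data_lock = PTHREAD_MUTEX_INITIALIZER,
		.forward_alloc = 4096,
	};

	clean_una_wakeup(&msk);	/* safe from any calling context */
	return 0;
}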

Fixes: 64b9cea7 ("mptcp: fix spurious retransmissions")
Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/172
Reported-by: Matthieu Baerts <matthieu.baerts@tessares.net>
Tested-by: Matthieu Baerts <matthieu.baerts@tessares.net>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Parent 44991d61
@@ -947,6 +947,10 @@ static void __mptcp_update_wmem(struct sock *sk)
 {
 	struct mptcp_sock *msk = mptcp_sk(sk);
 
+#ifdef CONFIG_LOCKDEP
+	WARN_ON_ONCE(!lockdep_is_held(&sk->sk_lock.slock));
+#endif
+
 	if (!msk->wmem_reserved)
 		return;
@@ -1085,10 +1089,20 @@ static void __mptcp_clean_una(struct sock *sk)
 static void __mptcp_clean_una_wakeup(struct sock *sk)
 {
+#ifdef CONFIG_LOCKDEP
+	WARN_ON_ONCE(!lockdep_is_held(&sk->sk_lock.slock));
+#endif
 	__mptcp_clean_una(sk);
 	mptcp_write_space(sk);
 }
 
+static void mptcp_clean_una_wakeup(struct sock *sk)
+{
+	mptcp_data_lock(sk);
+	__mptcp_clean_una_wakeup(sk);
+	mptcp_data_unlock(sk);
+}
+
 static void mptcp_enter_memory_pressure(struct sock *sk)
 {
 	struct mptcp_subflow_context *subflow;
@@ -2299,7 +2313,7 @@ static void __mptcp_retrans(struct sock *sk)
 	struct sock *ssk;
 	int ret;
 
-	__mptcp_clean_una_wakeup(sk);
+	mptcp_clean_una_wakeup(sk);
 	dfrag = mptcp_rtx_head(sk);
 	if (!dfrag) {
 		if (mptcp_data_fin_enabled(msk)) {
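The CONFIG_LOCKDEP blocks added above make the locking rule
self-checking: if a future caller reaches these helpers without the msk
data lock, lockdep_is_held() returns false and WARN_ON_ONCE() fires. A
userspace analogue of such an assertion, tracking the lock owner by
hand, is sketched below; this is illustrative only, as the kernel
relies on lockdep rather than an owner field.

#include <assert.h>
#include <pthread.h>

struct tracked_lock {
	pthread_mutex_t mutex;
	pthread_t owner;	/* valid only while held */
	int held;
};

static void tracked_lock_acquire(struct tracked_lock *l)
{
	pthread_mutex_lock(&l->mutex);
	l->owner = pthread_self();
	l->held = 1;
}

static void tracked_lock_release(struct tracked_lock *l)
{
	l->held = 0;
	pthread_mutex_unlock(&l->mutex);
}

/* Analogue of WARN_ON_ONCE(!lockdep_is_held(...)): trips if the
 * calling thread does not hold the lock. */
static void assert_held(struct tracked_lock *l)
{
	assert(l->held && pthread_equal(l->owner, pthread_self()));
}

int main(void)
{
	struct tracked_lock l = { .mutex = PTHREAD_MUTEX_INITIALIZER };

	tracked_lock_acquire(&l);
	assert_held(&l);	/* passes: this thread holds the lock */
	tracked_lock_release(&l);
	return 0;
}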