Commit e8d77a1c authored by Guoju Fang, committed by Zheng Zengkai

net: sched: add barrier to fix packet stuck problem for lockless qdisc

stable inclusion
from stable-v5.10.122
commit e05dd93826e1a0f9dedb6bcf8f99ba17462ae08f
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I5W6OE

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=e05dd93826e1a0f9dedb6bcf8f99ba17462ae08f

--------------------------------

[ Upstream commit 2e8728c9 ]

In qdisc_run_end(), the spin_unlock() only has store-release semantics,
which guarantee that all earlier memory accesses are visible before it.
But the subsequent test_bit() has no barrier semantics, so it may be
reordered ahead of the spin_unlock(). This store-load reordering may
cause a packet stuck problem.

The concurrent operations can be described as below:
         CPU 0                      |          CPU 1
   qdisc_run_end()                  |     qdisc_run_begin()
          .                         |           .
 ----> /* may be reordered here */   |           .
|         .                         |           .
|     spin_unlock()                 |         set_bit()
|         .                         |         smp_mb__after_atomic()
 ---- test_bit()                    |         spin_trylock()
          .                         |          .

Consider the following sequence of events:
    CPU 0 reorders test_bit() ahead and sees MISSED = 0
    CPU 1 calls set_bit()
    CPU 1 calls spin_trylock(), which fails
    CPU 0 executes spin_unlock()

At the end of the sequence, CPU 0 calls spin_unlock() and does nothing,
because its reordered test_bit() saw MISSED = 0. The skb on CPU 1 has
been enqueued but no one takes it, until the next CPU pushing to the
qdisc (if ever ...) notices and dequeues it.

This patch fixes this by adding one explicit barrier. As the
spin_unlock() and test_bit() ordering is a store-load ordering, a full
memory barrier smp_mb() is needed here.

Fixes: a90c57f2 ("net: sched: fix packet stuck problem for lockless qdisc")
Signed-off-by: Guoju Fang <gjfang@linux.alibaba.com>
Link: https://lore.kernel.org/r/20220528101628.120193-1-gjfang@linux.alibaba.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Reviewed-by: Wei Li <liwei391@huawei.com>
Parent d82b78c9
@@ -201,6 +201,12 @@ static inline void qdisc_run_end(struct Qdisc *qdisc)
 	if (qdisc->flags & TCQ_F_NOLOCK) {
 		spin_unlock(&qdisc->seqlock);
+		/* spin_unlock() only has store-release semantic. The unlock
+		 * and test_bit() ordering is a store-load ordering, so a full
+		 * memory barrier is needed here.
+		 */
+		smp_mb();
 		if (unlikely(test_bit(__QDISC_STATE_MISSED,
 				      &qdisc->state))) {
 			clear_bit(__QDISC_STATE_MISSED, &qdisc->state);
...