Commit ebda37c2 authored by Eric Dumazet, committed by David S. Miller

rps: avoid one atomic in enqueue_to_backlog

If CONFIG_SMP=y, then we already own a queue spinlock, so we can avoid the
atomic test_and_set_bit() performed by napi_schedule_prep().

We now have the same number of atomic ops per netif_rx() call as with a
pre-RPS kernel.
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Parent 3f78d1f2
@@ -2432,8 +2432,10 @@ static int enqueue_to_backlog(struct sk_buff *skb, int cpu,
 		return NET_RX_SUCCESS;
 	}
 
-	/* Schedule NAPI for backlog device */
-	if (napi_schedule_prep(&sd->backlog)) {
+	/* Schedule NAPI for backlog device
+	 * We can use non atomic operation since we own the queue lock
+	 */
+	if (!__test_and_set_bit(NAPI_STATE_SCHED, &sd->backlog.state)) {
 		if (!rps_ipi_queued(sd))
 			____napi_schedule(sd, &sd->backlog);
 	}
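
The trade-off is easy to illustrate outside the kernel: when every writer of a
flag word already holds a lock, a plain read-modify-write test-and-set gives
the same answer as the bus-locked atomic variant that napi_schedule_prep()
relies on (test_and_set_bit()), only cheaper. The userspace C sketch below is a
hypothetical illustration of that pattern, not kernel code: struct backlog,
queue_lock, plain_test_and_set and atomic_test_and_set are stand-in names, and
a pthread mutex stands in for the per-CPU backlog queue spinlock.

/* build: cc -pthread sketch.c */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

#define STATE_SCHED_BIT 0UL	/* stand-in for NAPI_STATE_SCHED */

/* Hypothetical per-CPU backlog: a state word whose writers all hold queue_lock. */
struct backlog {
	pthread_mutex_t queue_lock;
	unsigned long state;	/* only modified with queue_lock held */
};

/*
 * Atomic variant: safe even for callers that do NOT hold queue_lock,
 * at the cost of a locked read-modify-write. This is the role
 * test_and_set_bit() plays inside napi_schedule_prep().
 */
static bool atomic_test_and_set(unsigned long *word, unsigned long bit)
{
	unsigned long mask = 1UL << bit;
	unsigned long old = __atomic_fetch_or(word, mask, __ATOMIC_SEQ_CST);

	return old & mask;
}

/*
 * Non-atomic variant: an ordinary load/or/store, valid only because every
 * writer of *word holds queue_lock. This is the __test_and_set_bit() idea
 * from the patch.
 */
static bool plain_test_and_set(unsigned long *word, unsigned long bit)
{
	unsigned long mask = 1UL << bit;
	bool was_set = *word & mask;

	*word |= mask;
	return was_set;
}

/* Enqueue-path sketch: the lock is already held, so the cheap variant suffices. */
static void enqueue_sketch(struct backlog *sd)
{
	pthread_mutex_lock(&sd->queue_lock);

	if (!plain_test_and_set(&sd->state, STATE_SCHED_BIT))
		printf("scheduling backlog processing\n");

	pthread_mutex_unlock(&sd->queue_lock);
}

int main(void)
{
	struct backlog sd = { .state = 0 };

	pthread_mutex_init(&sd.queue_lock, NULL);

	enqueue_sketch(&sd);	/* first call sets the bit and "schedules" */
	enqueue_sketch(&sd);	/* second call sees the bit already set */

	/* The atomic variant would also be correct here, just more expensive. */
	(void)atomic_test_and_set(&sd.state, STATE_SCHED_BIT);

	pthread_mutex_destroy(&sd.queue_lock);
	return 0;
}

Under the held lock both variants return the same result; dropping the locked
read-modify-write is the per-call saving the commit message describes for
netif_rx().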