Commit 9cbfea02 authored by Sieng Piaw Liew, committed by Jakub Kicinski

bcm63xx_enet: batch process rx path

Use netif_receive_skb_list to batch process rx skb.
Tested on BCM6328 320 MHz using iperf3 -M 512, increasing performance
by 12.5%.

Before:
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-30.00  sec   120 MBytes  33.7 Mbits/sec  277         sender
[  4]   0.00-30.00  sec   120 MBytes  33.5 Mbits/sec            receiver

After:
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-30.00  sec   136 MBytes  37.9 Mbits/sec  203         sender
[  4]   0.00-30.00  sec   135 MBytes  37.7 Mbits/sec            receiver
Signed-off-by: Sieng Piaw Liew <liew.s.piaw@gmail.com>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Parent 2e423387
@@ -297,10 +297,12 @@ static void bcm_enet_refill_rx_timer(struct timer_list *t)
 static int bcm_enet_receive_queue(struct net_device *dev, int budget)
 {
 	struct bcm_enet_priv *priv;
+	struct list_head rx_list;
 	struct device *kdev;
 	int processed;
 
 	priv = netdev_priv(dev);
+	INIT_LIST_HEAD(&rx_list);
 	kdev = &priv->pdev->dev;
 	processed = 0;
@@ -391,10 +393,12 @@ static int bcm_enet_receive_queue(struct net_device *dev, int budget)
 		skb->protocol = eth_type_trans(skb, dev);
 		dev->stats.rx_packets++;
 		dev->stats.rx_bytes += len;
-		netif_receive_skb(skb);
+		list_add_tail(&skb->list, &rx_list);
 
 	} while (--budget > 0);
 
+	netif_receive_skb_list(&rx_list);
+
 	if (processed || !priv->rx_desc_count) {
 		bcm_enet_refill_rx(dev);
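
For reference, below is a minimal standalone sketch of the same batching pattern, independent of this driver: skbs are queued on a local list_head inside the rx poll loop and handed to the network stack in a single netif_receive_skb_list() call, instead of one netif_receive_skb() per packet. The helpers my_rx_poll() and my_fetch_skb() are hypothetical placeholders, not bcm63xx_enet code.

/*
 * Sketch of rx batching with netif_receive_skb_list().
 * my_fetch_skb() is a hypothetical helper returning the next
 * completed rx skb, or NULL when the ring is empty.
 */
#include <linux/list.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>

static struct sk_buff *my_fetch_skb(struct net_device *dev); /* hypothetical */

static int my_rx_poll(struct net_device *dev, int budget)
{
	struct list_head rx_list;
	struct sk_buff *skb;
	int processed = 0;

	INIT_LIST_HEAD(&rx_list);

	do {
		skb = my_fetch_skb(dev);
		if (!skb)
			break;

		skb->protocol = eth_type_trans(skb, dev);
		/* queue locally instead of calling netif_receive_skb(skb) */
		list_add_tail(&skb->list, &rx_list);
		processed++;
	} while (--budget > 0);

	/* deliver the whole batch in one call, amortizing per-packet overhead */
	netif_receive_skb_list(&rx_list);

	return processed;
}

The batch delivery lets the stack process packets back to back with warmer caches, which is where the reported ~12.5% iperf3 throughput gain on the BCM6328 comes from.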