Commit 3003af9a authored by Juergen Gross, committed by Zheng Zengkai

xen/netback: don't queue unlimited number of packets

stable inclusion
from stable-v5.10.88
commit 88f20cccbeec9a5e83621df5cc2453b5081454dc
bugzilla: 186058 https://gitee.com/openeuler/kernel/issues/I4QW6A

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=88f20cccbeec9a5e83621df5cc2453b5081454dc

--------------------------------

commit be81992f upstream.

In case a guest isn't consuming incoming network traffic as fast as it
is coming in, xen-netback today buffers network packets in unlimited
numbers. This can result in host OOM situations.

Commit f48da8b1 ("xen-netback: fix unlimited guest Rx internal
queue and carrier flapping") meant to introduce a mechanism to limit
the amount of buffered data by stopping the Tx queue when reaching the
data limit, but this doesn't work for cases like UDP.

When hitting the limit, don't queue further SKBs, but drop them instead.
In order to be able to tell that Rx packets have been dropped, increment
the rx_dropped statistics counter in this case.
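
As a minimal standalone sketch of this drop-at-limit pattern (not the
kernel code itself: struct demo_queue and queue_packet() below are
hypothetical, only the rx_queue_len/rx_queue_max/rx_dropped names mirror
the patch):

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for the relevant xenvif_queue fields. */
struct demo_queue {
	unsigned int rx_queue_len;	/* bytes currently buffered */
	unsigned int rx_queue_max;	/* buffering limit in bytes */
	unsigned long rx_dropped;	/* packets dropped at the limit */
};

/* Returns true if the packet was queued, false if it was dropped. */
static bool queue_packet(struct demo_queue *q, unsigned int pkt_len)
{
	if (q->rx_queue_len >= q->rx_queue_max) {
		q->rx_dropped++;	/* at the limit: drop, don't queue */
		return false;
	}
	q->rx_queue_len += pkt_len;	/* below the limit: account and queue */
	return true;
}

int main(void)
{
	struct demo_queue q = { .rx_queue_max = 3000 };

	for (int i = 0; i < 5; i++)
		queue_packet(&q, 1500);	/* packets 3..5 hit the limit */

	printf("buffered=%u dropped=%lu\n", q.rx_queue_len, q.rx_dropped);
	return 0;
}

This prints buffered=3000 dropped=3; as in the patch, the check happens
before queueing, so the backlog can exceed the limit by at most one
packet, and every drop is visible in the counter.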

It should be noted that the old solution of continuing to queue SKBs had
the additional problem that an overflow of the 32-bit rx_queue_len value
would result in intermittent Tx queue enabling.
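
To illustrate that failure mode, a standalone sketch (hypothetical
values; it assumes only that the counter is unsigned 32-bit, as
rx_queue_len is) shows the wraparound making the over-the-limit test
false again:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t rx_queue_len = UINT32_MAX - 1000;	/* backlog counter near wrap */
	uint32_t rx_queue_max = 512 * 1024;		/* hypothetical 512 KiB limit */

	rx_queue_len += 1500;	/* one more 1500-byte packet: wraps to 499 */

	/* After the wrap the over-the-limit test is false again, so the
	 * stopped Tx queue would be re-enabled despite the huge backlog. */
	printf("len=%" PRIu32 " over_limit=%d\n", rx_queue_len,
	       rx_queue_len > rx_queue_max);
	return 0;
}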

This is part of XSA-392

Fixes: f48da8b1 ("xen-netback: fix unlimited guest Rx internal queue and carrier flapping")
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Parent b08e39dd
--- a/drivers/net/xen-netback/rx.c
+++ b/drivers/net/xen-netback/rx.c
@@ -88,16 +88,19 @@ void xenvif_rx_queue_tail(struct xenvif_queue *queue, struct sk_buff *skb)
 
 	spin_lock_irqsave(&queue->rx_queue.lock, flags);
 
-	if (skb_queue_empty(&queue->rx_queue))
-		xenvif_update_needed_slots(queue, skb);
-
-	__skb_queue_tail(&queue->rx_queue, skb);
-
-	queue->rx_queue_len += skb->len;
-	if (queue->rx_queue_len > queue->rx_queue_max) {
+	if (queue->rx_queue_len >= queue->rx_queue_max) {
 		struct net_device *dev = queue->vif->dev;
 
 		netif_tx_stop_queue(netdev_get_tx_queue(dev, queue->id));
+		kfree_skb(skb);
+		queue->vif->dev->stats.rx_dropped++;
+	} else {
+		if (skb_queue_empty(&queue->rx_queue))
+			xenvif_update_needed_slots(queue, skb);
+
+		__skb_queue_tail(&queue->rx_queue, skb);
+
+		queue->rx_queue_len += skb->len;
 	}
 
 	spin_unlock_irqrestore(&queue->rx_queue.lock, flags);
@@ -147,6 +150,7 @@ static void xenvif_rx_queue_drop_expired(struct xenvif_queue *queue)
 			break;
 		xenvif_rx_dequeue(queue);
 		kfree_skb(skb);
+		queue->vif->dev->stats.rx_dropped++;
 	}
 }