Commit fd07160b authored by Vitaly Kuznetsov, committed by David S. Miller

xen-netfront: avoid packet loss when ethernet header crosses page boundary

Small packet loss is reported on complex multi-host network configurations
including tunnels, NAT, ... My investigation led me to the following check
in netback which drops packets:

        if (unlikely(txreq.size < ETH_HLEN)) {
                netdev_err(queue->vif->dev,
                           "Bad packet size: %d\n", txreq.size);
                xenvif_tx_err(queue, &txreq, extra_count, idx);
                break;
        }

But this check itself is legitimate. SKBs consist of a linear part (which
has to contain the ethernet header) and (optionally) a number of frags.
Netfront transmits the head of the linear part up to the page boundary
as the first request and all the rest becomes frags, so when the SKB is
reconstructed in netback there is no way to distinguish between the
original frags and the 'tail' of the linear part. The first request
therefore needs to be at least ETH_HLEN in size, and an SKB whose linear
part starts too close to the page boundary is lost.
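
As a rough illustration only (not part of the patch), here is a minimal
userspace-style sketch of the failing condition, assuming a 4096-byte page
and the usual 14-byte ethernet header (the DEMO_* constants and the helper
name are made up for this example). Netfront's first tx request for the
linear part only covers the bytes up to the page boundary, so its size is
at most PAGE_SIZE - offset_in_page(skb->data); when that is smaller than
ETH_HLEN, netback rejects the whole packet:

        #include <stdio.h>

        /* Assumed values for the illustration only. */
        #define DEMO_PAGE_SIZE 4096   /* typical page size */
        #define DEMO_ETH_HLEN  14     /* ethernet header length */

        /* The first tx request's size is bounded by the distance from
         * skb->data to the end of its page; below ETH_HLEN it is dropped.
         */
        static int first_req_too_small(unsigned int offset_in_page)
        {
                return DEMO_PAGE_SIZE - offset_in_page < DEMO_ETH_HLEN;
        }

        int main(void)
        {
                /* Offsets 4083..4095 leave fewer than 14 bytes in the page. */
                printf("offset 4000 -> %s\n",
                       first_req_too_small(4000) ? "dropped" : "ok");
                printf("offset 4090 -> %s\n",
                       first_req_too_small(4090) ? "dropped" : "ok");
                return 0;
        }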

I see two ways to fix the issue:
- Change the 'wire' protocol between netfront and netback to preserve the
  original SKB structure. We'd have to add a flag indicating that a
  particular request is part of the original linear part and not a frag,
  and we'd need to know the length of the linear part to pre-allocate
  memory.
- Avoid transmitting SKBs with linear parts starting too close to the page
  boundary. That seems preferable in the short term and shouldn't bring
  significant performance degradation, as such packets are rare. That's
  what this patch does, using skb_copy().
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Acked-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Parent 1a21101d
@@ -565,6 +565,7 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	struct netfront_queue *queue = NULL;
 	unsigned int num_queues = dev->real_num_tx_queues;
 	u16 queue_index;
+	struct sk_buff *nskb;
 
 	/* Drop the packet if no queues are set up */
 	if (num_queues < 1)
@@ -593,6 +594,20 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 
 	page = virt_to_page(skb->data);
 	offset = offset_in_page(skb->data);
+
+	/* The first req should be at least ETH_HLEN size or the packet will be
+	 * dropped by netback.
+	 */
+	if (unlikely(PAGE_SIZE - offset < ETH_HLEN)) {
+		nskb = skb_copy(skb, GFP_ATOMIC);
+		if (!nskb)
+			goto drop;
+		dev_kfree_skb_any(skb);
+		skb = nskb;
+		page = virt_to_page(skb->data);
+		offset = offset_in_page(skb->data);
+	}
+
 	len = skb_headlen(skb);
 
 	spin_lock_irqsave(&queue->tx_lock, flags);