Commit 61e800cf authored by Matt Carlson, committed by David S. Miller

tg3: Enforce DMA mapping / skb assignment ordering

Michael Chan noted that there is nothing in the code that would prevent
the compiler from delaying the access of the "mapping" member of the
newly arrived packet until much later.  If this happened after the
skb = NULL assignment, it is possible for the driver to pass a bad
dma_addr value to pci_unmap_single().  To enforce this ordering, we need
a write memory barrier.  The pairing read memory barrier already exists
in tg3_rx_prodring_xfer() under the comments starting with
"Ensure that updates to the...".
Signed-off-by: Matt Carlson <mcarlson@broadcom.com>
Signed-off-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Parent 99405162
...
@@ -4659,11 +4659,16 @@ static int tg3_rx(struct tg3_napi *tnapi, int budget)
 		if (skb_size < 0)
 			goto drop_it;
 
-		ri->skb = NULL;
-
 		pci_unmap_single(tp->pdev, dma_addr, skb_size,
 				 PCI_DMA_FROMDEVICE);
 
+		/* Ensure that the update to the skb happens
+		 * after the usage of the old DMA mapping.
+		 */
+		smp_wmb();
+
+		ri->skb = NULL;
+
 		skb_put(skb, len);
 	} else {
 		struct sk_buff *copy_skb;
...