Commit ea78dc8b authored by Thomas Falcon, committed by Greg Kroah-Hartman

ibmvnic: Unmap DMA address of TX descriptor buffers after use

[ Upstream commit 80f0fe0934cd3daa13a5e4d48a103f469115b160 ]

There's no need to wait until a completion is received to unmap
TX descriptor buffers that have been passed to the hypervisor.
Instead, unmap them once the hypervisor call has completed. This patch
avoids the possibility that a buffer will never be unmapped because
a TX completion is lost or mishandled.
Reported-by: Abdul Haleem <abdhalee@linux.vnet.ibm.com>
Tested-by: Devesh K. Singh <devesh_singh@in.ibm.com>
Signed-off-by: Thomas Falcon <tlfalcon@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Parent 4fcb9b3f
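For illustration only (not part of the patch), here is a minimal sketch of the map/send/unmap pattern the change adopts: the descriptor array is unmapped as soon as the hypervisor call returns instead of waiting for a TX completion. It assumes the driver's real send_subcrq_indirect() hcall wrapper and the standard kernel DMA API; the helper name example_send_indirect() is hypothetical, and the sizing differs slightly from the driver, which maps its fixed-size tx_buff->indir_arr.

#include <linux/dma-mapping.h>

/* Hypothetical illustration of the pattern; not the driver's actual TX path. */
static int example_send_indirect(struct ibmvnic_adapter *adapter,
				 struct device *dev, u64 handle,
				 union sub_crq *indir_arr, int num_entries)
{
	dma_addr_t indir_dma;
	int rc;

	/* Map the descriptor array so the hypervisor can read it. */
	indir_dma = dma_map_single(dev, indir_arr,
				   num_entries * sizeof(*indir_arr),
				   DMA_TO_DEVICE);
	if (dma_mapping_error(dev, indir_dma))
		return -ENOMEM;

	/* The hcall consumes the descriptors during the call itself ... */
	rc = send_subcrq_indirect(adapter, handle,
				  (u64)indir_dma, (u64)num_entries);

	/*
	 * ... so the mapping can be released immediately.  A lost or
	 * mishandled TX completion can no longer leak it.
	 */
	dma_unmap_single(dev, indir_dma,
			 num_entries * sizeof(*indir_arr), DMA_TO_DEVICE);

	return rc;
}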
@@ -1586,6 +1586,8 @@ static int ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
 		lpar_rc = send_subcrq_indirect(adapter, handle_array[queue_num],
 					       (u64)tx_buff->indir_dma,
 					       (u64)num_entries);
+		dma_unmap_single(dev, tx_buff->indir_dma,
+				 sizeof(tx_buff->indir_arr), DMA_TO_DEVICE);
 	} else {
 		tx_buff->num_entries = num_entries;
 		lpar_rc = send_subcrq(adapter, handle_array[queue_num],
@@ -2747,7 +2749,6 @@ static int ibmvnic_complete_tx(struct ibmvnic_adapter *adapter,
 	union sub_crq *next;
 	int index;
 	int i, j;
-	u8 *first;
 
 restart_loop:
 	while (pending_scrq(adapter, scrq)) {
@@ -2777,14 +2778,6 @@ static int ibmvnic_complete_tx(struct ibmvnic_adapter *adapter,
 				txbuff->data_dma[j] = 0;
 			}
-			/* if sub_crq was sent indirectly */
-			first = &txbuff->indir_arr[0].generic.first;
-			if (*first == IBMVNIC_CRQ_CMD) {
-				dma_unmap_single(dev, txbuff->indir_dma,
-						 sizeof(txbuff->indir_arr),
-						 DMA_TO_DEVICE);
-				*first = 0;
-			}
 
 			if (txbuff->last_frag) {
 				dev_kfree_skb_any(txbuff->skb);