Commit a1bfb6eb authored by Vignesh R, committed by Greg Kroah-Hartman

serial: 8250: 8250_omap: Fix race b/w dma completion and RX timeout

The DMA RX completion handler for the UART is called from a tasklet and
hence may be delayed depending on system load. In the meantime, an RX
timeout interrupt may arrive and be serviced before the DMA RX
completion handler runs for the completed transfer.
omap_8250_rx_dma_flush(), which is called on the RX timeout interrupt,
pushes the DMA RX buffer, drains the FIFO, and queues a new DMA
request. But when the delayed DMA RX completion handler finally
executes, it erroneously flushes the currently queued DMA transfer,
which sometimes results in data corruption and double queueing of DMA
RX requests.
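
The pre-patch completion callback (the two removed lines in the diff
below) performed no staleness check at all. A sketch of it with the
racy interleaving annotated in comments (the function names are from
the driver; the annotations are illustrative):

/* Pre-patch completion callback, with the race annotated. */
static void __dma_rx_complete(void *param)
{
	/*
	 * This runs from a tasklet and can be arbitrarily late. An RX
	 * timeout interrupt may already have run omap_8250_rx_dma_flush(),
	 * which pushed the completed buffer, drained the FIFO, and
	 * queued a brand new DMA RX transfer.
	 */
	__dma_rx_do_complete(param);	/* now flushes the NEW transfer: corruption */
	omap_8250_rx_dma(param);	/* and queues RX a second time */
}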

Fix this by checking whether the RX completion is for the currently
queued transfer or for an earlier one. Also hold the port lock in the
DMA completion handler to avoid racing with the RX timeout handler.
Signed-off-by: Vignesh R <vigneshr@ti.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Parent cfde770d
@@ -786,8 +786,27 @@ static void __dma_rx_do_complete(struct uart_8250_port *p)
 
 static void __dma_rx_complete(void *param)
 {
-	__dma_rx_do_complete(param);
-	omap_8250_rx_dma(param);
+	struct uart_8250_port *p = param;
+	struct uart_8250_dma *dma = p->dma;
+	struct dma_tx_state state;
+	unsigned long flags;
+
+	spin_lock_irqsave(&p->port.lock, flags);
+
+	/*
+	 * If the tx status is not DMA_COMPLETE, then this is a delayed
+	 * completion callback. A previous RX timeout flush would have
+	 * already pushed the data, so exit.
+	 */
+	if (dmaengine_tx_status(dma->rxchan, dma->rx_cookie, &state) !=
+			DMA_COMPLETE) {
+		spin_unlock_irqrestore(&p->port.lock, flags);
+		return;
+	}
+	__dma_rx_do_complete(p);
+	omap_8250_rx_dma(p);
+
+	spin_unlock_irqrestore(&p->port.lock, flags);
 }
 
 static void omap_8250_rx_dma_flush(struct uart_8250_port *p)
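
For reference, the cookie interrogated by dmaengine_tx_status() is the
one recorded when the RX transfer is submitted. Below is a minimal
sketch of that submission path, modeled loosely on omap_8250_rx_dma();
the helper name, prep flags, and error handling are assumptions for
illustration, not the driver's exact code.

/*
 * Hypothetical sketch of the RX submission path (not the driver's
 * exact code): shows how dma->rx_cookie comes to identify the
 * currently queued transfer.
 */
static int rx_dma_queue_sketch(struct uart_8250_port *p)
{
	struct uart_8250_dma *dma = p->dma;
	struct dma_async_tx_descriptor *desc;

	desc = dmaengine_prep_slave_single(dma->rxchan, dma->rx_addr,
					   dma->rx_size, DMA_DEV_TO_MEM,
					   DMA_PREP_INTERRUPT);
	if (!desc)
		return -EBUSY;

	desc->callback = __dma_rx_complete;
	desc->callback_param = p;

	/*
	 * dmaengine_submit() returns a fresh cookie. Once the RX timeout
	 * path has queued a new transfer, dma->rx_cookie names that new
	 * in-flight transfer, so a delayed __dma_rx_complete() sees a
	 * status other than DMA_COMPLETE and bails out.
	 */
	dma->rx_cookie = dmaengine_submit(desc);
	dma_async_issue_pending(dma->rxchan);
	return 0;
}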