1. 20 Jan 2012, 1 commit
  2. 12 Jan 2012, 1 commit
  3. 05 Jan 2012, 1 commit
  4. 23 Dec 2011, 2 commits
  5. 02 Dec 2011, 1 commit
  6. 22 Nov 2011, 1 commit
  7. 27 Oct 2011, 2 commits
  8. 19 Sep 2011, 1 commit
  9. 14 Sep 2011, 1 commit
  10. 25 Aug 2011, 3 commits
  11. 26 Jul 2011, 3 commits
  12. 09 Jul 2011, 1 commit
    • amba pl011: workaround for uart registers lockup · def90f42
      Shreshtha Kumar Sahu committed
      This workaround aims to break the deadlock that arises during
      continuous data transfer over the UART for long durations with
      hardware flow control. It is observed that the CTS interrupt
      cannot be cleared in the UART interrupt clear register (ICR),
      so further transfers over the UART are blocked.

      During such a deadlock the ICR does not get cleared even after
      multiple writes. This causes pass_counter to decrease until it
      finally reaches zero, which is taken as the trigger point to
      run this UART_BT_WA workaround.

      The workaround backs up the register configuration, soft-resets
      the UART using bit 0 of the PRCC_K_SOFTRST_SET/CLEAR registers,
      and then restores the registers (a sketch of this flow follows
      this entry).

      This patch also adds support for calling uart init and exit
      functions, if present.
      Signed-off-by: Shreshtha Kumar Sahu <shreshthakumar.sahu@stericsson.com>
      Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
      def90f42
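      A minimal sketch, in C, of the trigger-and-recover flow described in
      this commit. The state structure, register list, helper name
      pl011_lockup_wa() and the PRCC reset polarity are illustrative
      assumptions for this sketch, not the actual patch:

        #include <linux/io.h>
        #include <linux/bits.h>
        #include <linux/delay.h>
        #include <linux/types.h>

        #define NUM_WA_REGS 7

        /* Illustrative PL011 register offsets preserved across the reset. */
        static const unsigned int wa_regs[NUM_WA_REGS] = {
                0x24 /* IBRD */, 0x28 /* FBRD */, 0x2c /* LCRH */, 0x30 /* CR */,
                0x34 /* IFLS */, 0x38 /* IMSC */, 0x48 /* DMACR */,
        };

        struct pl011_wa_state {                   /* hypothetical per-port state */
                void __iomem *base;               /* PL011 register window */
                void __iomem *prcc_softrst_set;   /* PRCC_K_SOFTRST_SET (assumed mapping) */
                void __iomem *prcc_softrst_clr;   /* PRCC_K_SOFTRST_CLEAR (assumed mapping) */
                u16 backup[NUM_WA_REGS];
        };

        /* Called once pass_counter has expired in the interrupt handler. */
        static void pl011_lockup_wa(struct pl011_wa_state *s)
        {
                int i;

                /* 1. Back up the current register configuration. */
                for (i = 0; i < NUM_WA_REGS; i++)
                        s->backup[i] = readw(s->base + wa_regs[i]);

                /*
                 * 2. Pulse bit 0 of PRCC_K_SOFTRST_CLEAR/SET to soft-reset
                 *    the UART block (assert/release order assumed here).
                 */
                writel(BIT(0), s->prcc_softrst_clr);
                udelay(1);
                writel(BIT(0), s->prcc_softrst_set);
                udelay(1);

                /* 3. Restore the saved configuration so the port keeps working. */
                for (i = 0; i < NUM_WA_REGS; i++)
                        writew(s->backup[i], s->base + wa_regs[i]);
        }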
  13. 17 Jun 2011, 1 commit
    • amba pl011: workaround for uart registers lockup · c16d51a3
      Shreshtha Kumar Sahu committed
      This workaround aims to break the deadlock that arises during
      continuous data transfer over the UART for long durations with
      hardware flow control. It is observed that the CTS interrupt
      cannot be cleared in the UART interrupt clear register (ICR),
      so further transfers over the UART are blocked.

      During such a deadlock the ICR does not get cleared even after
      multiple writes. This causes pass_counter to decrease until it
      finally reaches zero, which is taken as the trigger point to
      run this UART_BT_WA workaround.

      The workaround backs up the register configuration, soft-resets
      the UART using bit 0 of the PRCC_K_SOFTRST_SET/CLEAR registers,
      and then restores the registers.

      This patch also adds support for calling uart init and exit
      functions, if present.
      Signed-off-by: Shreshtha Kumar Sahu <shreshthakumar.sahu@stericsson.com>
      Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
      c16d51a3
  14. 31 Mar 2011, 1 commit
  15. 24 Feb 2011, 4 commits
  16. 19 Feb 2011, 2 commits
  17. 04 Feb 2011, 1 commit
    • ARM: mmci: add dmaengine-based DMA support · c8ebae37
      Russell King committed
      Based on a patch from Linus Walleij.
      
      Add dmaengine based support for DMA to the MMCI driver, using the
      Primecell DMA engine interface.  The changes over Linus' driver are:
      
      - rename txsize_threshold to dmasize_threshold, as this reflects the
        purpose more.
      - use 'mmci_dma_' as the function prefix rather than 'dma_mmci_'.
      - clean up requesting of dma channels (see the sketch after this entry).
      - don't release a single channel twice when it's shared between tx and rx.
      - get rid of 'dma_enable' bool - instead check whether the channel is NULL.
      - detect incomplete DMA at the end of a transfer.  Some DMA controllers
        (eg, PL08x) are unable to be configured for scatter DMA and also listen
        to all four DMA request signals [BREQ,SREQ,LBREQ,LSREQ] from the MMCI.
        They can do one or the other, but not both.  As MMCI uses LBREQ/LSREQ for the
        final burst/words, PL08x does not transfer the last few words.
      - map and unmap DMA buffers using the DMA engine struct device, not the
        MMCI struct device - the DMA engine is doing the DMA transfer, not us.
      - avoid double-unmapping of the DMA buffers on MMCI data errors.
      - don't check for negative values from the dmaengine tx submission
        function - Dan says this must never fail.
      - use new dmaengine helper functions rather than using the ugly function
        pointers directly.
      - allow DMA code to be fully optimized away using dma_inprogress() which
        is defined to constant 0 if DMA engine support is disabled.
      - request maximum segment size from the DMA engine struct device and
        set this appropriately.
      - removed checking of buffer alignment - the DMA engine should deal with
        its own restrictions on buffer alignment, not the individual DMA engine
        users.
      - removed setting DMAREQCTL - this confuses some DMA controllers as it
        causes LBREQ to be asserted for the last seven transfers, rather than
        six SREQ and one LSREQ.
      - removed burst setting - the DMA controller should not burst past the
        transfer size required to complete the DMA operation.
      Tested-by: Linus Walleij <linus.walleij@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      c8ebae37
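      The sketch below shows the optional-DMA channel handling this commit
      describes: a NULL channel simply means PIO, and a channel shared
      between rx and tx is released only once. It uses the current
      dmaengine helpers dma_request_chan() and dma_release_channel(); the
      structure and function names are stand-ins, not the driver's code:

        #include <linux/dmaengine.h>
        #include <linux/err.h>

        struct example_mmci_dma {        /* hypothetical holder for the channels */
                struct dma_chan *rx;
                struct dma_chan *tx;
        };

        static void example_request_channels(struct example_mmci_dma *d,
                                             struct device *dev)
        {
                /* DMA stays optional: a NULL channel simply means "use PIO". */
                d->rx = dma_request_chan(dev, "rx");
                if (IS_ERR(d->rx))
                        d->rx = NULL;

                d->tx = dma_request_chan(dev, "tx");
                if (IS_ERR(d->tx))
                        d->tx = NULL;
        }

        static void example_release_channels(struct example_mmci_dma *d)
        {
                /* If one channel serves both directions, release it only once. */
                if (d->tx && d->tx != d->rx)
                        dma_release_channel(d->tx);
                if (d->rx)
                        dma_release_channel(d->rx);
                d->rx = NULL;
                d->tx = NULL;
        }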
  18. 17 Jan 2011, 1 commit
  19. 15 Jan 2011, 1 commit
    • ARM: PL08x: fix a warning · 96a608a4
      Dan Williams committed
      drivers/dma/amba-pl08x.c: In function 'pl08x_start_txd':
      drivers/dma/amba-pl08x.c:205: warning: dereferencing 'void *' pointer
      
      We never dereference llis_va aside from assigning it to a struct
      pl08x_lli pointer or calculating the address of array element 0
      (see the illustration after this entry).
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      96a608a4
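      A minimal stand-alone illustration of the warning and the fix; the
      struct below is a stand-in, not the real struct pl08x_lli layout:

        /* Stand-in descriptor; the real struct pl08x_lli differs. */
        struct example_lli {
                unsigned int src, dst, lli, cctl;
        };

        static unsigned int example_first_cctl(void *llis_va)
        {
                /*
                 * Indexing llis_va directly (e.g. &llis_va[0]) is what gcc
                 * flags as "dereferencing 'void *' pointer".  Assigning it
                 * to a typed pointer first, as the fix does with struct
                 * pl08x_lli, keeps the access well-defined and silences
                 * the warning.
                 */
                struct example_lli *lli = llis_va;

                return lli[0].cctl;
        }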
  20. 06 Jan 2011, 2 commits
    • ARM: PL011: add DMA burst threshold support for ST variants · 38d62436
      Russell King committed
      The ST Micro variants have a specific DMA burst threshold
      compensation, which allows them to make better use of a DMA
      controller.  Add support for setting this up (a hedged sketch
      follows this entry).
      
      Based on a patch from Linus Walleij.
      Acked-by: Linus Walleij <linus.walleij@stericsson.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      38d62436
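      A hedged sketch of what setting such a threshold can look like; the
      register offset, field values and vendor flag below are assumptions
      for illustration, not the driver's actual definitions:

        #include <linux/io.h>
        #include <linux/types.h>

        #define EXAMPLE_ST_DMAWM        0x40      /* assumed ST-only DMA watermark register */
        #define EXAMPLE_DMAWM_RX_8      (2 << 3)  /* assumed: request RX DMA at 8 entries */
        #define EXAMPLE_DMAWM_TX_8      2         /* assumed: request TX DMA at 8 entries */

        static void example_setup_st_dma_threshold(void __iomem *base,
                                                   bool is_st_variant)
        {
                if (!is_st_variant)
                        return;

                /* Raise the DMA request thresholds so larger bursts are used. */
                writew(EXAMPLE_DMAWM_RX_8 | EXAMPLE_DMAWM_TX_8,
                       base + EXAMPLE_ST_DMAWM);
        }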
    • ARM: PL011: Add support for transmit DMA · 68b65f73
      Russell King committed
      Add DMA engine support for transmit to the PL011 driver.  Based on a
      patch from Linus Walleij, with the following changes (a sketch of the
      TX submission path follows this entry):
      
      - remove RX DMA support.  As PL011 doesn't give us receive timeout
        interrupts, we only get notified of received data when the RX DMA
        has completed.  This rather sucks for interactive use of the TTY.
      
      - remove abuse of completions.  Completions are supposed to be for
        events, not to tell what condition buffers are in.  Replace it with
        a simple 'queued' bool.
      
      - fix locking - it is only safe to access the circular buffer with the
        port lock held.
      
      - only map the DMA buffer when required - if we're ever behind an IOMMU
        this helps keep IOMMU usage down, and also ensures that we're legal
        when we change the scatterlist entry length.
      
      - fix XON/XOFF sending - we must send XON/XOFF characters out as soon
        as possible - waiting for up to 4095 characters in the DMA buffer
        to be sent first is not acceptable.
      
      - fix XON/XOFF receive handling - we need to stop DMA when instructed
        to by the TTY layer, and restart it again when instructed to.  There
        is a subtle problem here: we must not completely empty the circular
        buffer with DMA, otherwise we will not be notified of XON.
      
      - change the 'enable_dma' flag into a 'using DMA' flag, and track
        whether we can use TX DMA by whether the channel pointer is non-NULL.
        This gives us more control over whether we use DMA in the driver.
      
      - we don't need to have the TX DMA buffer continually allocated for
        each port - instead, allocate it when the port starts up, and free
        it when it's shut down.  Update the 'using DMA' flag if we get
        the buffer, and adjust the TTY FIFO size appropriately.
      
      - if we're going to use PIO to send characters, use the existing IRQ
        based functionality rather than reimplementing it.  This also ensures
        we call uart_write_wakeup() at the appropriate time, otherwise we'll
        stall.
      
      - use DMA engine helper functions for type safety.
      
      - fix init when built as a module - we can't have two initcall functions,
        so we must settle on one.  This means we can eliminate the deferred
        DMA initialization.
      
      - there is no need to terminate transfers on a failed prep_slave_sg()
        call - nothing has been setup, so nothing needs to be terminated.
        This avoids a potential deadlock in the DMA engine code
        (tasklet->callback->failed prepare->terminate->tasklet_disable
         which then ends up waiting for the tasklet to finish running.)
      
      - Dan says that the submission callback should not return an error:
        | dma_submit_error() is something I should have removed after commit
        | a0587bcf "ioat1: move descriptor allocation from submit to prep" all
        | errors should be notified by prep failing to return a descriptor
        | handle.  Negative dma_cookie_t values are only returned by the
        | dma_async_memcpy* calls which translate a prep failure into -ENOMEM.
        So remove the error handling at that point.  This also solves the
        potential deadlock mentioned in the previous comment.
      Acked-by: Linus Walleij <linus.walleij@stericsson.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      68b65f73
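      A sketch of the TX submission path this message describes: adjust the
      scatterlist entry for the bytes to send, map it against the DMA
      engine's device, prepare and submit a slave transfer, and fall back
      to the existing IRQ-driven PIO path if prep fails, without
      terminating anything and without checking the submit cookie. The
      structure and names are simplified stand-ins, not the driver's code:

        #include <linux/dmaengine.h>
        #include <linux/dma-mapping.h>
        #include <linux/scatterlist.h>

        struct example_uart_dmatx {
                struct dma_chan         *chan;   /* NULL means "TX DMA not in use" */
                struct scatterlist      sg;      /* set up at startup with sg_init_one() */
                bool                    queued;  /* simple flag instead of a completion */
        };

        /* Returns 0 if DMA was started; a negative errno means "use PIO instead". */
        static int example_uart_tx_dma(struct example_uart_dmatx *d,
                                       unsigned int count,
                                       dma_async_tx_callback callback,
                                       void *cb_arg)
        {
                struct dma_chan *chan = d->chan;
                struct dma_async_tx_descriptor *desc;
                struct device *dma_dev;

                if (!chan || d->queued)
                        return -EBUSY;

                dma_dev = chan->device->dev;
                d->sg.length = count;           /* only send what is in the buffer */

                /* Map against the DMA engine's device: it performs the transfer. */
                if (dma_map_sg(dma_dev, &d->sg, 1, DMA_TO_DEVICE) != 1)
                        return -EBUSY;

                desc = dmaengine_prep_slave_sg(chan, &d->sg, 1, DMA_MEM_TO_DEV,
                                               DMA_PREP_INTERRUPT);
                if (!desc) {
                        /* Nothing was set up, so nothing needs terminating. */
                        dma_unmap_sg(dma_dev, &d->sg, 1, DMA_TO_DEVICE);
                        return -EBUSY;
                }

                desc->callback = callback;
                desc->callback_param = cb_arg;

                /* Per the message above, the returned cookie is not checked. */
                dmaengine_submit(desc);
                dma_async_issue_pending(chan);
                d->queued = true;
                return 0;
        }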
  21. 05 Jan 2011, 9 commits