1. 04 Sep 2013 (5 commits)
    • dma: edma: Find missed events and issue them · c5f47990
      Joel Fernandes committed
      In an effort to move to using scatter-gather lists of any size with
      EDMA, as discussed at [1], instead of placing limitations on the driver,
      we work around the limitations of the EDMAC hardware to find missed
      events and issue them.
      
      The sequence of events that requires this is:
      
      For the scenario where MAX slots for an EDMA channel is 3:
      
      SG1 -> SG2 -> SG3 -> SG4 -> SG5 -> SG6 -> Null
      
      The above SG list will have to be DMA'd in 2 sets:
      
      (1) SG1 -> SG2 -> SG3 -> Null
      (2) SG4 -> SG5 -> SG6 -> Null
      
      After (1) is successfully transferred, the events from the MMC controller
      do not stop coming and are missed by the time we have set up the transfer
      for (2). So here, we catch the missed events as an error condition and
      issue them manually.
      
      In the second part of the patch, we handle the NULL-slot cases:
      for the crypto IP, events keep arriving even while the channel sits in
      the NULL slot, and the setup of the next set of SG elements only happens
      after the error handler executes. This results in recursion problems:
      we continuously receive error interrupts when we manually trigger an
      event from the error handler.
      
      We fix this by first detecting whether the channel is currently
      transferring from a NULL slot or not; that is where the edma_read_slot()
      call in the error callback from the interrupt handler comes in. With this
      we can determine whether the setup of the next SG list has completed, and
      we manually trigger only in that case. If the setup has _not_ completed,
      we are still in the NULL slot, so we just set a missed flag and let the
      manual triggering happen in edma_execute(), which will eventually be
      called. This fixes the above-mentioned race conditions seen with the
      crypto drivers (a rough sketch of this check follows this entry).
      
      [1] http://marc.info/?l=linux-omap&m=137416733628831&w=2

      Signed-off-by: Joel Fernandes <joelf@ti.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
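
      A rough sketch of the NULL-slot check described in the commit above,
      written from the commit text rather than quoted from drivers/dma/edma.c;
      the fields (echan->missed, echan->slot[]) and the exact mix of edma_*
      helpers used below are assumptions.

      /* Sketch only: CC error callback deciding whether to replay a missed
       * event now or to defer it to edma_execute(). */
      static void edma_cc_error_sketch(struct edma_chan *echan)
      {
          struct edmacc_param p;

          /* Read back the PaRAM set the channel is currently executing. */
          edma_read_slot(EDMA_CHAN_SLOT(echan->slot[0]), &p);

          if (p.a_b_cnt == 0 && p.ccnt == 0) {
              /* Still in the NULL set: the next batch of SG elements has not
               * been programmed yet.  Record the miss and let edma_execute()
               * replay the event once the new PaRAM sets are in place. */
              echan->missed = 1;
          } else {
              /* The next SG list is already set up, so the missed event can
               * be replayed immediately. */
              edma_clean_channel(echan->ch_num);
              edma_stop(echan->ch_num);
              edma_start(echan->ch_num);
              edma_trigger_channel(echan->ch_num);
          }
      }
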
    • ARM: edma: Add function to manually trigger an EDMA channel · 96874b9a
      Joel Fernandes committed
      Manual trigger for events missed as a result of splitting a
      scatter-gather list and DMA'ing it in batches. Add a helper
      function to trigger a channel in case any such events are missed
      (a sketch of such a helper follows this entry).
      Signed-off-by: Joel Fernandes <joelf@ti.com>
      Acked-by: Sekhar Nori <nsekhar@ti.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
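
      A minimal sketch of what such a manual-trigger helper could look like;
      the function name and the register offsets below are placeholders (the
      real helper lives in arch/arm/common/edma.c and uses the controller's
      shadow-region accessors).

      #include <linux/io.h>
      #include <linux/bits.h>

      /* Placeholder offsets for the Event Set Registers; consult the EDMA
       * TRM for the real shadow-region layout. */
      #define SKETCH_ESR   0x00    /* events 0..31  */
      #define SKETCH_ESRH  0x04    /* events 32..63 */

      /* Sketch: software-trigger an EDMA channel by setting its bit in
       * ESR/ESRH, which makes the controller behave as if the hardware
       * event for that channel had fired. */
      static void edma_trigger_channel_sketch(void __iomem *shadow_base,
                                              unsigned int channel)
      {
          if (channel < 32)
              writel(BIT(channel), shadow_base + SKETCH_ESR);
          else
              writel(BIT(channel - 32), shadow_base + SKETCH_ESRH);
      }
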
    • dma: edma: Write out and handle MAX_NR_SG at a given time · 53407062
      Joel Fernandes committed
      Process SG elements in batches of MAX_NR_SG if there are more than
      MAX_NR_SG of them. Due to this, at any given time only that many slots
      will be used in the given channel, no matter how long the scatter list
      is. We keep track of how much has been written in order to process the
      next batch of elements in the scatter-list and detect completion.

      For such intermediate transfer completions (one batch of MAX_NR_SG),
      make use of the pause and resume functions instead of start and stop
      while such an intermediate transfer is in progress or has completed, as
      we do not want to clear any pending events (see the sketch after this
      entry).
      Signed-off-by: Joel Fernandes <joelf@ti.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
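
      A sketch of the intermediate-completion handling described above; the
      descriptor fields (processed, pset_nr) and helper names are taken from
      the commit text or assumed, not quoted from the driver.

      /* Sketch only: completion-interrupt handling for a transfer that is
       * DMA'd in batches of at most MAX_NR_SG PaRAM sets. */
      static void edma_batch_done_sketch(struct edma_chan *echan,
                                         struct edma_desc *edesc)
      {
          if (edesc->processed == edesc->pset_nr) {
              /* The whole scatterlist has been transferred: it is safe to
               * stop the channel and complete the cookie. */
              edma_stop(echan->ch_num);
              vchan_cookie_complete(&edesc->vdesc);
          } else {
              /* Only an intermediate batch finished.  Pause rather than stop
               * so pending events are not cleared, program the next batch of
               * up to MAX_NR_SG sets, then resume. */
              edma_pause(echan->ch_num);
              edma_execute(echan);
              edma_resume(echan->ch_num);
          }
      }
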
    • dma: edma: Setup parameters to DMA MAX_NR_SG at a time · 6fbe24da
      Joel Fernandes committed
      Changes are made here for configuring the existing parameters to
      support DMA'ing them out in batches as needed.

      Also allocate only as many slots as needed by the SG list, but not more
      than MAX_NR_SG; these slots will then be reused accordingly. For
      example, if MAX_NR_SG=10 and the number of SG entries is 40, still only
      10 slots will be allocated to DMA the entire SG list of size 40 (see
      the sketch after this entry).

      Also enable TC interrupts only for slots that are the last in the
      current iteration or that fall on a MAX_NR_SG boundary.
      Signed-off-by: Joel Fernandes <joelf@ti.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
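
      A sketch of the slot-reuse rule described above: reserve at most
      MAX_NR_SG PaRAM slots up front and reuse them for every batch of the
      scatterlist. Function and field names are illustrative assumptions.

      /* Sketch only: reserve min(sg_len, MAX_NR_SG) PaRAM slots so that a
       * 40-entry SG list with MAX_NR_SG=10 still uses only 10 slots. */
      static int edma_reserve_slots_sketch(struct edma_chan *echan, int sg_len)
      {
          int i, nslots = min(MAX_NR_SG, sg_len);

          for (i = 0; i < nslots; i++) {
              if (echan->slot[i] >= 0)
                  continue;    /* already reserved earlier, reuse it */

              echan->slot[i] = edma_alloc_slot(EDMA_CTLR(echan->ch_num),
                                               EDMA_SLOT_ANY);
              if (echan->slot[i] < 0)
                  return -EBUSY;    /* ran out of PaRAM slots */
          }
          return 0;
      }

      Completion (TC) interrupts would then be enabled only on the PaRAM set
      that ends a batch, i.e. the last SG entry overall or every MAX_NR_SG'th
      one, as the commit describes.
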
    • Merge branch 'topic/api_caps' into for-linus · bd127639
      Vinod Koul committed
  2. 03 Sep 2013 (1 commit)
  3. 02 Sep 2013 (21 commits)
  4. 28 Aug 2013 (2 commits)
    • dma: pl330: Fix handling of TERMINATE_ALL while processing completed descriptors · 39ff8613
      Lars-Peter Clausen committed
      The pl330 DMA driver is broken in regard to handling a terminate-all
      request while it is processing the list of completed descriptors. This
      is most visible when calling dmaengine_terminate_all() from within the
      descriptor callback for cyclic transfers. In this case the
      TERMINATE_ALL operation will clear the work_list and stop the transfer.
      But after all callbacks for all completed descriptors have been
      handled, the descriptors will be re-enqueued into the (now empty)
      work_list. So the next time dma_async_issue_pending() is called for the
      channel, these descriptors will be transferred again, which will cause
      data corruption. Similar issues can occur if dmaengine_terminate_all()
      is not called from within the descriptor callback but runs on a
      different CPU at the same time as the completed-descriptor list is
      processed.

      This patch introduces a new per-channel list which will hold the
      completed descriptors. While processing the list, the channel's lock
      will be held to avoid racing against dmaengine_terminate_all(). The
      lock is released when calling the descriptor's callback, though. Since
      the list of completed descriptors might be modified (e.g. by calling
      dmaengine_terminate_all() from the callback) we can not use the normal
      list iterator macros. Instead we need to check on each loop iteration
      whether there are still items in the list. The driver's TERMINATE_ALL
      implementation is updated to move descriptors from both the work_list
      as well as the new completed_list back to the descriptor pool. This
      makes sure that none of the descriptors finds its way back into the
      work list and also that we do not call any further complete callbacks
      after dmaengine_terminate_all() has been called (a sketch of this loop
      follows this entry).
      Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
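
      A sketch of the locked completed-list processing described above;
      struct and field names follow the pl330 driver from memory and should
      be treated as illustrative.

      /* Sketch only: drain the per-channel completed_list while holding the
       * channel lock, dropping it only around the user callback.  Because
       * the callback may call dmaengine_terminate_all() and empty the list,
       * the emptiness check is repeated on every iteration instead of using
       * a list_for_each_entry_safe() walk. */
      static void pl330_drain_completed_sketch(struct dma_pl330_chan *pch)
      {
          struct dma_pl330_desc *desc;
          dma_async_tx_callback cb;
          void *cb_arg;
          unsigned long flags;

          spin_lock_irqsave(&pch->lock, flags);

          while (!list_empty(&pch->completed_list)) {
              desc = list_first_entry(&pch->completed_list,
                                      struct dma_pl330_desc, node);
              list_del(&desc->node);

              cb = desc->txd.callback;
              cb_arg = desc->txd.callback_param;

              spin_unlock_irqrestore(&pch->lock, flags);
              if (cb)
                  cb(cb_arg);    /* may call dmaengine_terminate_all() */
              spin_lock_irqsave(&pch->lock, flags);
          }

          spin_unlock_irqrestore(&pch->lock, flags);
      }

      The real driver additionally returns each drained descriptor to the
      descriptor pool (or re-enqueues it for cyclic transfers); that
      bookkeeping is omitted here.
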
    • dmaengine: Add hisilicon k3 DMA engine driver · 8e6152bc
      Zhangfei Gao committed
      Add a dmaengine driver for the hisilicon k3 platform, based on virt_dma.
      Signed-off-by: Zhangfei Gao <zhangfei.gao@linaro.org>
      Tested-by: Kai Yang <jean.yangkai@huawei.com>
      Acked-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
  5. 27 Aug 2013 (10 commits)
  6. 26 Aug 2013 (1 commit)