1. 28 Jul 2014, 4 commits
  2. 26 Jul 2014, 2 commits
  3. 25 Jul 2014, 6 commits
  4. 23 Jul 2014, 1 commit
  5. 16 Jul 2014, 19 commits
  6. 15 Jul 2014, 6 commits
    • dmaengine: Use dma_zalloc_coherent · 9f92d223
      Committed by Joe Perches
      Use the zeroing helper dma_zalloc_coherent() instead of dma_alloc_coherent()
      followed by memset(,0,) (see the sketch after this entry).
      Signed-off-by: Joe Perches <joe@perches.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
      9f92d223
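      A minimal sketch of the pattern this commit replaces, not code from any
      particular driver; the device, size and ring names are assumed for
      illustration only.

        #include <linux/dma-mapping.h>
        #include <linux/string.h>

        /* Before: allocate coherent memory, then zero it by hand. */
        static void *alloc_ring_old(struct device *dev, size_t size,
                                    dma_addr_t *phys)
        {
                void *ring = dma_alloc_coherent(dev, size, phys, GFP_KERNEL);

                if (ring)
                        memset(ring, 0, size);
                return ring;
        }

        /* After: a single call that returns already-zeroed coherent memory. */
        static void *alloc_ring_new(struct device *dev, size_t size,
                                    dma_addr_t *phys)
        {
                return dma_zalloc_coherent(dev, size, phys, GFP_KERNEL);
        }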
    • dmaengine: qcom_bam_dma: Add descriptor flags · 89751d0a
      Committed by Andy Gross
      This patch adds support for the end of transaction (EOT) and notify when done
      (NWD) hardware descriptor flags.
      
      The EOT flag requests that the peripheral assert an end-of-transaction interrupt
      when that descriptor is complete.  It also results in a special signaling
      protocol used between the attached peripheral and the core using the DMA
      controller.  Clients specify DMA_PREP_INTERRUPT to enable this flag.
      
      The NWD flag requests that the hardware wait until the data has been fully
      processed by the peripheral before moving on to the next descriptor.  Clients
      specify DMA_PREP_FENCE to enable this flag (see the client-side sketch after
      this entry).
      Signed-off-by: Andy Gross <agross@codeaurora.org>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
      89751d0a
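      A minimal client-side sketch, assuming a generic slave channel; the function,
      scatterlist and transfer direction are illustrative and not taken from the
      patch.  It only shows how DMA_PREP_INTERRUPT and DMA_PREP_FENCE are passed
      through the standard dmaengine prep call.

        #include <linux/dmaengine.h>
        #include <linux/scatterlist.h>

        /* chan and sgl are assumed to have been set up by the client driver. */
        static int queue_bam_transfer(struct dma_chan *chan,
                                      struct scatterlist *sgl, unsigned int sg_len)
        {
                struct dma_async_tx_descriptor *txd;
                unsigned long flags;

                /* DMA_PREP_INTERRUPT maps to the BAM EOT flag and
                 * DMA_PREP_FENCE to the NWD flag, per this commit. */
                flags = DMA_PREP_INTERRUPT | DMA_PREP_FENCE;

                txd = dmaengine_prep_slave_sg(chan, sgl, sg_len,
                                              DMA_MEM_TO_DEV, flags);
                if (!txd)
                        return -ENOMEM;

                dmaengine_submit(txd);
                dma_async_issue_pending(chan);
                return 0;
        }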
    • dmaengine: Freescale: change descriptor release process for supporting async_tx · 43452fad
      Committed by Hongbo Zhang
      Fix a potential race when both NET_DMA and ASYNC_TX are enabled.  The current
      descriptor release process does not support async_tx: every descriptor is
      released regardless of whether async_tx has acked it, so there is a potential
      race condition when the DMA engine is used by other clients (e.g. when NET_DMA
      is enabled to offload TCP).
      
      In our case the race is hit when both talitos and dmaengine are used to offload
      xor: the NAPI scheduler syncs all pending requests in the DMA channels, which
      disturbs the raid operations because fsl-dma does not check the ack flag of a
      tx.  A just-submitted descriptor that is not yet acked gets freed, and when it
      is later used as a dependent tx, the freed descriptor triggers
      BUG_ON(async_tx_test_ack(depend_tx)) in async_tx_submit():
      
      TASK = ee1a94a0[1390] 'md0_raid5' THREAD: ecf40000 CPU: 0
      GPR00: 00000001 ecf41ca0 ee1a94a0 0000003f 00000001 c00593e4 00000000 00000001
      GPR08: 00000000 a7a7a7a7 00000001 00000002 42028042 100a38d4 ed576d98 00000000
      GPR16: ed5a11b0 00000000 2b162000 00000200 00000000 2d555000 ed3015e8 c15a7aa0
      GPR24: 00000000 c155fc40 00000000 ecb63220 ecf41d28 ef640bb0 ef640c30 ecf41ca0
      NIP [c02b048c] async_tx_submit+0x6c/0x2b4
      LR [c02b068c] async_tx_submit+0x26c/0x2b4
      Call Trace:
      [ecf41ca0] [c02b068c] async_tx_submit+0x26c/0x2b4 (unreliable)
      [ecf41cd0] [c02b0a4c] async_memcpy+0x240/0x25c
      [ecf41d20] [c0421064] async_copy_data+0xa0/0x17c
      [ecf41d70] [c0421cf4] __raid_run_ops+0x874/0xe10
      [ecf41df0] [c0426ee4] handle_stripe+0x820/0x25e8
      [ecf41e90] [c0429080] raid5d+0x3d4/0x5b4
      [ecf41f40] [c04329b8] md_thread+0x138/0x16c
      [ecf41f90] [c008277c] kthread+0x8c/0x90
      [ecf41ff0] [c0011630] kernel_thread+0x4c/0x68
      
      The other modification in this patch is how completed descriptors are
      identified.  There is a potential risk caused by exception interrupts: when an
      interrupt is raised, all descriptors on the ld_running list are treated as
      completed.  That works fine under normal conditions, but when an exception
      occurs it does not behave as expected.  Completion should not be inferred from
      a software list; the right way is to read the current descriptor address
      register to find the last completed descriptor.  If an interrupt is raised by
      an error, the descriptors on ld_running must not all be treated as finished,
      otherwise unfinished descriptors on ld_running would be released wrongly.
      
      A simple way to reproduce: enable dmatest first, then insert some bad
      descriptors that trigger Programming Error interrupts ahead of the good
      descriptors.  The good descriptors are then freed before they have been
      processed, because of the exception interrupt.
      
      Note: the bad descriptors are only there to simulate an exception interrupt;
      the case illustrates the potential risk in the current fsl-dma driver very
      well.  (A sketch of the ack check described above follows this entry.)
      Signed-off-by: Hongbo Zhang <hongbo.zhang@freescale.com>
      Signed-off-by: Qiang Liu <qiang.liu@freescale.com>
      Signed-off-by: Ira W. Snyder <iws@ovro.caltech.edu>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
      43452fad
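      A minimal sketch of the kind of ack check this commit describes before a
      descriptor may be freed.  The my_desc structure and list names are invented
      for illustration; they are not the fsldma data structures.

        #include <linux/dmaengine.h>
        #include <linux/list.h>
        #include <linux/slab.h>

        /* Hypothetical driver descriptor embedding the dmaengine tx descriptor. */
        struct my_desc {
                struct dma_async_tx_descriptor async_tx;
                struct list_head node;
        };

        /* Free only descriptors that async_tx has acked; keep the rest queued
         * so a dependent tx never sees a freed descriptor. */
        static void free_completed_descs(struct list_head *completed)
        {
                struct my_desc *desc, *tmp;

                list_for_each_entry_safe(desc, tmp, completed, node) {
                        if (!async_tx_test_ack(&desc->async_tx))
                                continue;       /* still referenced as a dependency */
                        list_del(&desc->node);
                        kfree(desc);
                }
        }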
    • dmaengine: Freescale: add suspend resume functions for DMA driver · 14c6a333
      Committed by Hongbo Zhang
      This patch adds suspend and resume functions for the Freescale DMA driver.
      Signed-off-by: Hongbo Zhang <hongbo.zhang@freescale.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
      14c6a333
    • dmaengine: Freescale: use spin_lock_bh instead of spin_lock_irqsave · 2baff570
      Committed by Hongbo Zhang
      spin_lock_irqsave() is a stronger locking mechanism than the driver requires;
      the minimum locking needed should be used instead.  Disabling interrupts and
      saving the flags is unnecessary here.
      
      This patch changes all instances of spin_lock_irqsave() to spin_lock_bh().  All
      manipulation of the protected fields is done from tasklet context or weaker,
      which makes spin_lock_bh() the correct choice (see the locking sketch after
      this entry).
      Signed-off-by: Hongbo Zhang <hongbo.zhang@freescale.com>
      Signed-off-by: Qiang Liu <qiang.liu@freescale.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
      2baff570
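      A minimal sketch contrasting the two locking styles; the structure and field
      names are invented for illustration, not taken from the fsldma driver.

        #include <linux/spinlock.h>

        struct chan_state {
                spinlock_t lock;
                int pending;    /* touched from tasklet and process context only */
        };

        static void update_pending(struct chan_state *cs, int delta)
        {
                unsigned long flags;

                /* Before: disables all local interrupts and saves the flags. */
                spin_lock_irqsave(&cs->lock, flags);
                cs->pending += delta;
                spin_unlock_irqrestore(&cs->lock, flags);

                /* After: only disables bottom halves (tasklets), which is enough
                 * when the data is never touched from hard-irq context. */
                spin_lock_bh(&cs->lock);
                cs->pending += delta;
                spin_unlock_bh(&cs->lock);
        }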
  7. 09 Jul 2014, 1 commit
  8. 01 Jul 2014, 1 commit
    • Update imx-sdma cyclic handling to report residue · d1a792f3
      Committed by Russell King - ARM Linux
      I received a report this morning from one of the Novena developers that
      the behaviour of the iMX6 ASoC codec driver (using imx-pcm-dma.c) was
      sub-optimal under high system load.
      
      While there are issues relating to system load remaining, upon reviewing
      the ASoC imx-pcm-dma.c driver, it was noticed that it was not using the
      residue support, because SDMA does not provide it.  This has the effect
      that SDMA has to make multiple calls into the ASoC and ALSA code, one
      for each period.
      
      Since ALSA's snd_pcm_elapsed() does not need to be called multiple times
      and it is entirely sufficient to call it once to update ALSA with the
      current buffer position via the pointer method, we can do better here.
      We can also avoid stopping the DMA entirely, just like real cyclic DMA
      implementations behave.  While this means that we replay some old samples,
      this is a nicer behaviour than having audio stop and restart.
      
      The changes to achieve this are relatively minor - imx-sdma.c can track
      where the DMA is to the nearest descriptor boundary - it does this
      already when deciding how many callbacks to issue.  In doing this,
      buf_tail always points at the descriptor which will complete next.
      
      The residue is defined by the bytes remaining to the end of the buffer,
      when the buffer is viewed as a single block of memory [start...end].
      So, when we start out, there's a full buffer worth of residue, and this
      counts down as we approach the end of the buffer, eventually becoming
      zero at the end, before returning to the full buffer worth when we
      wrap back to the start.
      
      Moving the walking of the descriptors into the interrupt handler means
      that we can update the BD_DONE flag at interrupt time, thus avoiding
      a delayed tasklet stopping the cyclic DMA.
      
      This means that the residue can be calculated as (total descriptors -
      buf_tail) * descriptor size; a small sketch of that calculation follows
      this entry.  This is what the change below does.  We update imx-pcm-dma.c
      to remove the NO_RESIDUE flag since we now provide the residue.
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      Tested-by: Shawn Guo <shawn.guo@linaro.org>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
      d1a792f3
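      A minimal sketch of the residue arithmetic described above, using an invented
      ring structure rather than the real imx-sdma fields: buf_tail is the
      descriptor that will complete next, so the residue counts down from a full
      buffer to zero and wraps back when the ring restarts.

        #include <linux/types.h>

        /* Hypothetical cyclic ring state; the real imx-sdma fields differ. */
        struct cyclic_ring {
                unsigned int num_bd;    /* total descriptors in the ring      */
                unsigned int buf_tail;  /* descriptor that will complete next */
                size_t period_len;      /* bytes covered by one descriptor    */
        };

        /* Bytes remaining to the end of the buffer, reported to ALSA as residue. */
        static size_t cyclic_residue(const struct cyclic_ring *ring)
        {
                return (size_t)(ring->num_bd - ring->buf_tail) * ring->period_len;
        }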