1. 26 Jul 2014: 3 commits
  2. 25 Jul 2014: 9 commits
  3. 23 Jul 2014: 1 commit
  4. 21 Jul 2014: 1 commit
  5. 16 Jul 2014: 20 commits
  6. 15 Jul 2014: 6 commits
    • G
    • G
    • G
      dmaengine: Update documentation for inline wrappers · 96cb9898
      Committed by Geert Uytterhoeven
      During the last few years, several inline wrappers for DMA operations have
      been introduced:
        - commit 16052827 ("dmaengine/dma_slave:
          introduce inline wrappers"),
        - commit a14acb4a ("DMAEngine: add
          dmaengine_prep_interleaved_dma wrapper for interleaved api"),
        - commit 6e3ecaf0 ("dmaengine: add
          wrapper functions for device control functions").
      
      Update the documentation to use the wrappers.
      Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
      96cb9898
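      For reference, a minimal sketch (not part of the patch) of what a slave-DMA
      client looks like when written with these inline wrappers; the FIFO address,
      burst size and names such as example_issue_tx are made up for illustration:

          #include <linux/dmaengine.h>

          /* Configure a slave channel and issue one transfer using the inline
           * wrappers instead of calling chan->device->... hooks directly. */
          static int example_issue_tx(struct dma_chan *chan, dma_addr_t buf, size_t len)
          {
                  struct dma_slave_config cfg = {
                          .direction      = DMA_MEM_TO_DEV,
                          .dst_addr       = 0x1000,        /* made-up FIFO address */
                          .dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES,
                          .dst_maxburst   = 4,
                  };
                  struct dma_async_tx_descriptor *desc;
                  dma_cookie_t cookie;

                  if (dmaengine_slave_config(chan, &cfg))
                          return -EINVAL;

                  desc = dmaengine_prep_slave_single(chan, buf, len, DMA_MEM_TO_DEV,
                                                     DMA_PREP_INTERRUPT);
                  if (!desc)
                          return -ENOMEM;

                  cookie = dmaengine_submit(desc);   /* instead of desc->tx_submit(desc) */
                  dma_async_issue_pending(chan);
                  return dma_submit_error(cookie) ? -EIO : 0;
          }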
    • J
      dmaengine: Use dma_zalloc_coherent · 9f92d223
      Committed by Joe Perches
      Use the zeroing function dma_zalloc_coherent() instead of dma_alloc_coherent() followed by memset(..., 0, ...)
      Signed-off-by: Joe Perches <joe@perches.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
      9f92d223
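      The conversion the patch applies follows this pattern (a hypothetical
      allocation; dev, ring, ring_phys and size are placeholder names):

          /* Before: allocate, then clear by hand. */
          ring = dma_alloc_coherent(dev, size, &ring_phys, GFP_KERNEL);
          if (!ring)
                  return -ENOMEM;
          memset(ring, 0, size);

          /* After: dma_zalloc_coherent() returns memory that is already zeroed. */
          ring = dma_zalloc_coherent(dev, size, &ring_phys, GFP_KERNEL);
          if (!ring)
                  return -ENOMEM;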
    • A
      dmaengine: qcom_bam_dma: Add descriptor flags · 89751d0a
      Committed by Andy Gross
      This patch adds support for end of transaction (EOT) and notify when done (NWD)
      hardware descriptor flags.
      
      The EOT flag requests that the peripheral assert an end of transaction interrupt
      when that descriptor is complete.  It also results in a special signaling protocol
      being used between the attached peripheral and the core using the DMA
      controller.  Clients specify DMA_PREP_INTERRUPT to enable this flag.
      
      The NWD flag requests that the peripheral wait until the data has been fully
      processed by the peripheral before moving on to the next descriptor.  Clients
      will specify DMA_PREP_FENCE to enable this flag.
      Signed-off-by: Andy Gross <agross@codeaurora.org>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
      89751d0a
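      From the client's point of view this only changes the flags passed to the
      prep call; a minimal sketch, assuming the channel and scatterlist (chan,
      sgl, sg_len) have already been set up elsewhere:

          struct dma_async_tx_descriptor *desc;

          /* DMA_PREP_INTERRUPT -> BAM EOT flag (end-of-transaction interrupt),
           * DMA_PREP_FENCE     -> BAM NWD flag (wait until the peripheral is done). */
          desc = dmaengine_prep_slave_sg(chan, sgl, sg_len, DMA_MEM_TO_DEV,
                                         DMA_PREP_INTERRUPT | DMA_PREP_FENCE);
          if (!desc)
                  return -ENOMEM;

          dmaengine_submit(desc);
          dma_async_issue_pending(chan);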
    • H
      dmaengine: Freescale: change descriptor release process for supporting async_tx · 43452fad
      Committed by Hongbo Zhang
      Fix a potential risk when both NET_DMA and ASYNC_TX are enabled. The current
      descriptor release process lacks support for async_tx: all descriptors are
      released regardless of whether async_tx has acked them, so there is a
      potential race condition when the DMA engine is used by other clients (e.g.
      when NET_DMA is enabled to offload TCP).
      
      In our case, a race condition arises when both talitos and dmaengine are used
      to offload xor: the NAPI scheduler syncs all pending requests in the DMA
      channels, which disturbs the RAID operations because fsl-dma does not check
      the ack flag. A just-submitted, not-yet-acked descriptor is freed; as a
      dependent tx, this freed descriptor triggers
      BUG_ON(async_tx_test_ack(depend_tx)) in async_tx_submit().
      
      TASK = ee1a94a0[1390] 'md0_raid5' THREAD: ecf40000 CPU: 0
      GPR00: 00000001 ecf41ca0 ee1a94a0 0000003f 00000001 c00593e4 00000000 00000001
      GPR08: 00000000 a7a7a7a7 00000001 00000002 42028042 100a38d4 ed576d98 00000000
      GPR16: ed5a11b0 00000000 2b162000 00000200 00000000 2d555000 ed3015e8 c15a7aa0
      GPR24: 00000000 c155fc40 00000000 ecb63220 ecf41d28 ef640bb0 ef640c30 ecf41ca0
      NIP [c02b048c] async_tx_submit+0x6c/0x2b4
      LR [c02b068c] async_tx_submit+0x26c/0x2b4
      Call Trace:
      [ecf41ca0] [c02b068c] async_tx_submit+0x26c/0x2b4 (unreliable)
      [ecf41cd0] [c02b0a4c] async_memcpy+0x240/0x25c
      [ecf41d20] [c0421064] async_copy_data+0xa0/0x17c
      [ecf41d70] [c0421cf4] __raid_run_ops+0x874/0xe10
      [ecf41df0] [c0426ee4] handle_stripe+0x820/0x25e8
      [ecf41e90] [c0429080] raid5d+0x3d4/0x5b4
      [ecf41f40] [c04329b8] md_thread+0x138/0x16c
      [ecf41f90] [c008277c] kthread+0x8c/0x90
      [ecf41ff0] [c0011630] kernel_thread+0x4c/0x68
      
      Another modification in this patch changes how completed descriptors are
      identified. There is a potential risk caused by exception interrupts: all
      descriptors on the ld_running list are treated as completed when an interrupt
      is raised. That works fine under normal conditions, but when an exception
      occurs it does not behave as expected. The hardware state should not be
      inferred from the software list alone; the right way is to read the current
      descriptor address register to find the last completed descriptor. If an
      interrupt is raised because of an error, the descriptors on ld_running must
      not all be treated as finished, otherwise unfinished descriptors on
      ld_running will be released wrongly.
      
      A simple way to reproduce:
      Enable dmatest first, then insert some bad descriptors which can trigger
      Programming Error interrupts ahead of the good descriptors. As a result, the
      good descriptors are freed before they are processed because of the exception
      interrupt.
      
      Note: the bad descriptors are only for simulating an exception interrupt. This
      case illustrates the potential risk in the current fsl-dma driver very well.
      Signed-off-by: Hongbo Zhang <hongbo.zhang@freescale.com>
      Signed-off-by: Qiang Liu <qiang.liu@freescale.com>
      Signed-off-by: Ira W. Snyder <iws@ovro.caltech.edu>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
      43452fad
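      The core of the ack-aware release path can be sketched as follows; the list
      and field names (ld_completed, desc_pool, async_tx) loosely follow the
      fsl-dma driver, but this is an illustration, not the exact code of the patch:

          /* Free only the completed descriptors that async_tx has acked;
           * unacked descriptors may still be referenced as dependencies. */
          static void release_acked_descriptors(struct fsldma_chan *chan)
          {
                  struct fsl_desc_sw *desc, *tmp;

                  list_for_each_entry_safe(desc, tmp, &chan->ld_completed, node) {
                          if (!async_tx_test_ack(&desc->async_tx))
                                  continue;        /* keep it until the client acks it */
                          list_del(&desc->node);
                          dma_pool_free(chan->desc_pool, desc, desc->async_tx.phys);
                  }
          }

      Deciding what belongs on the completed list by reading the channel's current
      descriptor address register, rather than assuming everything on ld_running is
      done, is what keeps this accounting correct across exception interrupts.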