1. 25 Oct, 2013  2 commits
  2. 21 Oct, 2013  2 commits
  3. 14 Oct, 2013  1 commit
  4. 13 Oct, 2013  3 commits
  5. 11 Oct, 2013  3 commits
  6. 07 Oct, 2013  8 commits
  7. 04 Oct, 2013  3 commits
    • M
      dmaengine: imx-dma: fix callback path in tasklet · fcaaba6c
      Michael Grzeschik authored
      We need to take the finished descriptor off the ld_active list before
      jumping into the callback routine. Otherwise the callback could run into
      issue_pending and change the ld_active list head we are just about to
      free, leaving the channel list in a corrupted and undefined state. (A
      minimal sketch of the reordering follows this entry.)
      Signed-off-by: Michael Grzeschik <m.grzeschik@pengutronix.de>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
      fcaaba6c
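      A minimal sketch of the reordering, assuming the imx-dma structures of the
      time (simplified and illustrative, not the driver's full completion path):

      static void imxdma_tasklet_sketch(struct imxdma_channel *imxdmac)
      {
              struct imxdma_engine *imxdma = imxdmac->imxdma;
              struct imxdma_desc *desc;
              unsigned long flags;

              spin_lock_irqsave(&imxdma->lock, flags);
              desc = list_first_entry(&imxdmac->ld_active, struct imxdma_desc, node);
              /* Take the finished descriptor off ld_active *before* the callback,
               * so a callback that calls issue_pending() cannot touch the entry
               * we are about to recycle. */
              list_move_tail(imxdmac->ld_active.next, &imxdmac->ld_free);
              spin_unlock_irqrestore(&imxdma->lock, flags);

              /* Only now is it safe to run the client callback. */
              if (desc->desc.callback)
                      desc->desc.callback(desc->desc.callback_param);
      }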
    • M
      dmaengine: imx-dma: fix lockdep issue between irqhandler and tasklet · 5a276fa6
      Michael Grzeschik authored
      The tasklet and irq handler use spin_lock while other routines use
      spin_lock_irqsave/restore. This leads to the lockdep issues described
      below. This patch changes the code to use spin_lock_irqsave/restore in
      both code paths. (A locking sketch follows this entry.)

      As imxdma_xfer_desc is always called with the spin_lock_irqsave lock
      already held, this patch also removes the spare locking inside that
      routine to avoid double locking.
      
      [  403.358162] =================================
      [  403.362549] [ INFO: inconsistent lock state ]
      [  403.366945] 3.10.0-20130823+ #904 Not tainted
      [  403.371331] ---------------------------------
      [  403.375721] inconsistent {IN-HARDIRQ-W} -> {HARDIRQ-ON-W} usage.
      [  403.381769] swapper/0 [HC0[0]:SC1[1]:HE1:SE0] takes:
      [  403.386762]  (&(&imxdma->lock)->rlock){?.-...}, at: [<c019d77c>] imxdma_tasklet+0x20/0x134
      [  403.395201] {IN-HARDIRQ-W} state was registered at:
      [  403.400108]   [<c004b264>] mark_lock+0x2a0/0x6b4
      [  403.404798]   [<c004d7c8>] __lock_acquire+0x650/0x1a64
      [  403.410004]   [<c004f15c>] lock_acquire+0x94/0xa8
      [  403.414773]   [<c02f74e4>] _raw_spin_lock+0x54/0x8c
      [  403.419720]   [<c019d094>] dma_irq_handler+0x78/0x254
      [  403.424845]   [<c0061124>] handle_irq_event_percpu+0x38/0x1b4
      [  403.430670]   [<c00612e4>] handle_irq_event+0x44/0x64
      [  403.435789]   [<c0063a70>] handle_level_irq+0xd8/0xf0
      [  403.440903]   [<c0060a20>] generic_handle_irq+0x28/0x38
      [  403.446194]   [<c0009cc4>] handle_IRQ+0x68/0x8c
      [  403.450789]   [<c0008714>] avic_handle_irq+0x3c/0x48
      [  403.455811]   [<c0008f84>] __irq_svc+0x44/0x74
      [  403.460314]   [<c0040b04>] cpu_startup_entry+0x88/0xf4
      [  403.465525]   [<c02f00d0>] rest_init+0xb8/0xe0
      [  403.470045]   [<c03e07dc>] start_kernel+0x28c/0x2d4
      [  403.474986]   [<a0008040>] 0xa0008040
      [  403.478709] irq event stamp: 50854
      [  403.482140] hardirqs last  enabled at (50854): [<c001c6b8>] tasklet_action+0x38/0xdc
      [  403.489954] hardirqs last disabled at (50853): [<c001c6a0>] tasklet_action+0x20/0xdc
      [  403.497761] softirqs last  enabled at (50850): [<c001bc64>] _local_bh_enable+0x14/0x18
      [  403.505741] softirqs last disabled at (50851): [<c001c268>] irq_exit+0x88/0xdc
      [  403.513026]
      [  403.513026] other info that might help us debug this:
      [  403.519593]  Possible unsafe locking scenario:
      [  403.519593]
      [  403.525548]        CPU0
      [  403.528020]        ----
      [  403.530491]   lock(&(&imxdma->lock)->rlock);
      [  403.534828]   <Interrupt>
      [  403.537474]     lock(&(&imxdma->lock)->rlock);
      [  403.541983]
      [  403.541983]  *** DEADLOCK ***
      [  403.541983]
      [  403.547951] no locks held by swapper/0.
      [  403.551813]
      [  403.551813] stack backtrace:
      [  403.556222] CPU: 0 PID: 0 Comm: swapper Not tainted 3.10.0-20130823+ #904
      [  403.563039] Backtrace:
      [  403.565581] [<c000b98c>] (dump_backtrace+0x0/0x10c) from [<c000bb28>] (show_stack+0x18/0x1c)
      [  403.574054]  r6:00000000 r5:c05c51d8 r4:c040bd58 r3:00200000
      [  403.579872] [<c000bb10>] (show_stack+0x0/0x1c) from [<c02f398c>] (dump_stack+0x20/0x28)
      [  403.587955] [<c02f396c>] (dump_stack+0x0/0x28) from [<c02f29c8>] (print_usage_bug.part.28+0x224/0x28c)
      [  403.597340] [<c02f27a4>] (print_usage_bug.part.28+0x0/0x28c) from [<c004b404>] (mark_lock+0x440/0x6b4)
      [  403.606682]  r8:c004a41c r7:00000000 r6:c040bd58 r5:c040c040 r4:00000002
      [  403.613566] [<c004afc4>] (mark_lock+0x0/0x6b4) from [<c004d844>] (__lock_acquire+0x6cc/0x1a64)
      [  403.622244] [<c004d178>] (__lock_acquire+0x0/0x1a64) from [<c004f15c>] (lock_acquire+0x94/0xa8)
      [  403.631010] [<c004f0c8>] (lock_acquire+0x0/0xa8) from [<c02f74e4>] (_raw_spin_lock+0x54/0x8c)
      [  403.639614] [<c02f7490>] (_raw_spin_lock+0x0/0x8c) from [<c019d77c>] (imxdma_tasklet+0x20/0x134)
      [  403.648434]  r6:c3847010 r5:c040e890 r4:c38470d4
      [  403.653194] [<c019d75c>] (imxdma_tasklet+0x0/0x134) from [<c001c70c>] (tasklet_action+0x8c/0xdc)
      [  403.662013]  r8:c0599160 r7:00000000 r6:00000000 r5:c040e890 r4:c3847114 r3:c019d75c
      [  403.670042] [<c001c680>] (tasklet_action+0x0/0xdc) from [<c001bd4c>] (__do_softirq+0xe4/0x1f0)
      [  403.678687]  r7:00000101 r6:c0402000 r5:c059919c r4:00000001
      [  403.684498] [<c001bc68>] (__do_softirq+0x0/0x1f0) from [<c001c268>] (irq_exit+0x88/0xdc)
      [  403.692652] [<c001c1e0>] (irq_exit+0x0/0xdc) from [<c0009cc8>] (handle_IRQ+0x6c/0x8c)
      [  403.700514]  r4:00000030 r3:00000110
      [  403.704192] [<c0009c5c>] (handle_IRQ+0x0/0x8c) from [<c0008714>] (avic_handle_irq+0x3c/0x48)
      [  403.712664]  r5:c0403f28 r4:c0593ebc
      [  403.716343] [<c00086d8>] (avic_handle_irq+0x0/0x48) from [<c0008f84>] (__irq_svc+0x44/0x74)
      [  403.724733] Exception stack(0xc0403f28 to 0xc0403f70)
      [  403.729841] 3f20:                   00000001 00000004 00000000 20000013 c0402000 c04104a8
      [  403.738078] 3f40: 00000002 c0b69620 a0004000 41069264 a03fb5f4 c0403f7c c0403f40 c0403f70
      [  403.746301] 3f60: c004b92c c0009e74 20000013 ffffffff
      [  403.751383]  r6:ffffffff r5:20000013 r4:c0009e74 r3:c004b92c
      [  403.757210] [<c0009e30>] (arch_cpu_idle+0x0/0x4c) from [<c0040b04>] (cpu_startup_entry+0x88/0xf4)
      [  403.766161] [<c0040a7c>] (cpu_startup_entry+0x0/0xf4) from [<c02f00d0>] (rest_init+0xb8/0xe0)
      [  403.774753] [<c02f0018>] (rest_init+0x0/0xe0) from [<c03e07dc>] (start_kernel+0x28c/0x2d4)
      [  403.783051]  r6:c03fc484 r5:ffffffff r4:c040a0e0
      [  403.787797] [<c03e0550>] (start_kernel+0x0/0x2d4) from [<a0008040>] (0xa0008040)
      Signed-off-by: Michael Grzeschik <m.grzeschik@pengutronix.de>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
      5a276fa6
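      A minimal sketch of the locking change, with the function body reduced to
      the locking calls (illustrative only):

      static void imxdma_tasklet_sketch(unsigned long data)
      {
              struct imxdma_channel *imxdmac = (struct imxdma_channel *)data;
              struct imxdma_engine *imxdma = imxdmac->imxdma;
              unsigned long flags;

              /* was: spin_lock(&imxdma->lock), which lockdep flags because the
               * irq handler can take the same lock from hard-irq context */
              spin_lock_irqsave(&imxdma->lock, flags);
              /* ... completion handling; imxdma_xfer_desc() is called with this
               * lock already held, so it no longer takes the lock itself ... */
              spin_unlock_irqrestore(&imxdma->lock, flags);
      }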
    • M
      dmaengine: imx-dma: fix slow path issue in prep_dma_cyclic · edc530fe
      Michael Grzeschik authored
      When the sound layer prepares cyclic DMA buffers, it dumps the following
      lockdep trace: the calling snd_pcm_action_single gets invoked with
      read_lock_irq held. To fix this, we change the kcalloc call from
      GFP_KERNEL to GFP_ATOMIC. (An allocation sketch follows this entry.)
      
      WARNING: at kernel/lockdep.c:2740 lockdep_trace_alloc+0xcc/0x114()
      DEBUG_LOCKS_WARN_ON(irqs_disabled_flags(flags))
      Modules linked in:
      CPU: 0 PID: 832 Comm: aplay Not tainted 3.11.0-20130823+ #903
      Backtrace:
      [<c000b98c>] (dump_backtrace+0x0/0x10c) from [<c000bb28>] (show_stack+0x18/0x1c)
       r6:c004c090 r5:00000009 r4:c2e0bd18 r3:00404000
      [<c000bb10>] (show_stack+0x0/0x1c) from [<c02f397c>] (dump_stack+0x20/0x28)
      [<c02f395c>] (dump_stack+0x0/0x28) from [<c001531c>] (warn_slowpath_common+0x54/0x70)
      [<c00152c8>] (warn_slowpath_common+0x0/0x70) from [<c00153dc>] (warn_slowpath_fmt+0x38/0x40)
       r8:00004000 r7:a3b90000 r6:000080d0 r5:60000093 r4:c2e0a000 r3:00000009
      [<c00153a4>] (warn_slowpath_fmt+0x0/0x40) from [<c004c090>] (lockdep_trace_alloc+0xcc/0x114)
       r3:c03955d8 r2:c03907db
      [<c004bfc4>] (lockdep_trace_alloc+0x0/0x114) from [<c008f16c>] (__kmalloc+0x34/0x118)
       r6:000080d0 r5:c3800120 r4:000080d0 r3:c040a0f8
      [<c008f138>] (__kmalloc+0x0/0x118) from [<c019c95c>] (imxdma_prep_dma_cyclic+0x64/0x168)
       r7:a3b90000 r6:00000004 r5:c39d8420 r4:c3847150
      [<c019c8f8>] (imxdma_prep_dma_cyclic+0x0/0x168) from [<c024618c>] (snd_dmaengine_pcm_trigger+0xa8/0x160)
      [<c02460e4>] (snd_dmaengine_pcm_trigger+0x0/0x160) from [<c0241fa8>] (soc_pcm_trigger+0x90/0xb4)
       r8:c058c7b0 r7:c3b8140c r6:c39da560 r5:00000001 r4:c3b81000
      [<c0241f18>] (soc_pcm_trigger+0x0/0xb4) from [<c022ece4>] (snd_pcm_do_start+0x2c/0x38)
       r7:00000000 r6:00000003 r5:c058c7b0 r4:c3b81000
      [<c022ecb8>] (snd_pcm_do_start+0x0/0x38) from [<c022e958>] (snd_pcm_action_single+0x40/0x6c)
      [<c022e918>] (snd_pcm_action_single+0x0/0x6c) from [<c022ea64>] (snd_pcm_action_lock_irq+0x7c/0x9c)
       r7:00000003 r6:c3b810f0 r5:c3b810f0 r4:c3b81000
      [<c022e9e8>] (snd_pcm_action_lock_irq+0x0/0x9c) from [<c023009c>] (snd_pcm_common_ioctl1+0x7f8/0xfd0)
       r8:c3b7f888 r7:005407b8 r6:c2c991c0 r5:c3b81000 r4:c3b81000 r3:00004142
      [<c022f8a4>] (snd_pcm_common_ioctl1+0x0/0xfd0) from [<c023117c>] (snd_pcm_playback_ioctl1+0x464/0x488)
      [<c0230d18>] (snd_pcm_playback_ioctl1+0x0/0x488) from [<c02311d4>] (snd_pcm_playback_ioctl+0x34/0x40)
       r8:c3b7f888 r7:00004142 r6:00000004 r5:c2c991c0 r4:005407b8
      [<c02311a0>] (snd_pcm_playback_ioctl+0x0/0x40) from [<c00a14a4>] (vfs_ioctl+0x30/0x44)
      [<c00a1474>] (vfs_ioctl+0x0/0x44) from [<c00a1fe8>] (do_vfs_ioctl+0x55c/0x5c0)
      [<c00a1a8c>] (do_vfs_ioctl+0x0/0x5c0) from [<c00a208c>] (SyS_ioctl+0x40/0x68)
      [<c00a204c>] (SyS_ioctl+0x0/0x68) from [<c0009380>] (ret_fast_syscall+0x0/0x44)
       r8:c0009544 r7:00000036 r6:bedeaa58 r5:00000000 r4:000000c0
      Signed-off-by: Michael Grzeschik <m.grzeschik@pengutronix.de>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
      edc530fe
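      A minimal sketch of the allocation change (illustrative; the GFP flag is
      the only point):

      /* imxdma_prep_dma_cyclic() can be reached from snd_pcm_action_lock_irq()
       * with the stream lock held and interrupts off, so the allocation must
       * not sleep. */
      imxdmac->sg_list = kcalloc(periods + 1, sizeof(struct scatterlist),
                                 GFP_ATOMIC);            /* was GFP_KERNEL */
      if (!imxdmac->sg_list)
              return NULL;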
  8. 17 Sep, 2013  2 commits
  9. 13 Sep, 2013  1 commit
  10. 10 Sep, 2013  1 commit
  11. 04 Sep, 2013  6 commits
    • J
      dma: edma: Remove limits on number of slots · 5622ff1a
      Joel Fernandes authored
      With this series, this check is no longer required and we no longer need
      to reject drivers that DMA more than the MAX number of slots.
      Signed-off-by: Joel Fernandes <joelf@ti.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
      5622ff1a
    • J
      dma: edma: Leave linked to Null slot instead of DUMMY slot · b267b3bc
      Joel Fernandes authored
      The dummy slot has been used as a way for missed events not to be
      reported as missing. This has been particularly troublesome for cases
      where we might want to temporarily pause all incoming events.

      For the EDMA DMAC there is no way to pause events, as the occurrence of
      the "next" event is not software controlled. Using "edma_pause" in IRQ
      handlers doesn't help, because by then the event of concern from the
      slave has already been missed.

      Linking to a dummy slot is seen to absorb these events which we didn't
      want to miss. So we don't link to the dummy slot, but instead leave the
      transfer linked to the NULL set, allow an error condition and detect the
      channel that missed it.

      Consider the case where we have a scatter-list like:
      SG1->SG2->SG3->SG4->SG5->SG6->Null

      For example, for a MAX_NR_SG of 2, earlier we were splitting this as:
      SG1->SG2->Null
      SG3->SG4->Null
      SG5->SG6->Null

      Now we split it as:
      SG1->SG2->Null
      SG3->SG4->Null
      SG5->SG6->Dummy

      This approach results in fewer unwanted interrupts for the last list
      split. Unlike the Null slot, the Dummy slot has the property of not
      raising an error condition if events are missed. We are OK with this
      because we are done processing the whole list once we reach Dummy. (An
      illustrative linking sketch follows this entry.)
      Signed-off-by: Joel Fernandes <joelf@ti.com>
      [modified duplicate s-o-b & patch title]
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
      b267b3bc
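      An illustrative sketch of the linking decision, assuming a simplified view
      of the driver's slot bookkeeping (is_last_batch and dummy_slot are
      placeholder names, not the driver's exact code):

      /* Chain the programmed param slots; only the final batch of the whole
       * scatter-list is terminated with the dummy slot.  Intermediate batches
       * stay linked to the NULL set, so a missed event raises an error
       * condition that identifies the channel. */
      for (i = 0; i < nslots - 1; i++)
              edma_link(echan->slot[i], echan->slot[i + 1]);

      if (is_last_batch)
              edma_link(echan->slot[nslots - 1], dummy_slot); /* absorbs trailing events */
      /* else: leave the last slot linked to the NULL set */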
    • J
      dma: edma: Find missed events and issue them · c5f47990
      Joel Fernandes authored
      In an effort to move to using scatter-gather lists of any size with
      EDMA, as discussed at [1], instead of placing limitations on the driver
      we work through the limitations of the EDMAC hardware to find missed
      events and issue them.

      The sequence of events that requires this is:

      For the scenario where the MAX number of slots for an EDMA channel is 3:

      SG1 -> SG2 -> SG3 -> SG4 -> SG5 -> SG6 -> Null

      The above SG list will have to be DMA'd in 2 sets:

      (1) SG1 -> SG2 -> SG3 -> Null
      (2) SG4 -> SG5 -> SG6 -> Null

      After (1) is successfully transferred, the events from the MMC controller
      do not stop coming and are missed by the time we have set up the transfer
      for (2). So here we catch the missed events as an error condition and
      issue them manually.

      In the second part of the patch we handle the NULL slot cases: for the
      crypto IP we keep receiving events continuously while in the NULL slot,
      and the setup of the next set of SG elements happens only after the error
      handler executes. This results in recursion problems: we continuously
      receive error interrupts when we manually trigger an event from the error
      handler.

      We fix this by first detecting whether the channel is currently
      transferring from a NULL slot or not; that is where the edma_read_slot in
      the error callback from the interrupt handler comes in. With this we can
      determine whether the setup of the next SG list has completed, and we
      manually trigger only in that case. If the setup has _not_ completed, we
      are still in NULL, so we just set a missed flag and let the manual
      triggering happen in edma_execute, which will eventually be called. This
      fixes the above-mentioned race conditions seen with the crypto drivers.
      (An illustrative sketch of the error-callback check follows this entry.)

      [1] http://marc.info/?l=linux-omap&m=137416733628831&w=2
      Signed-off-by: Joel Fernandes <joelf@ti.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
      c5f47990
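      An illustrative sketch of the error-callback check, assuming the EDMA
      private API of the time (edma_read_slot and friends) and a 'missed' flag
      on the channel (field names are assumptions):

      /* Error callback: read the channel's first param slot.  An all-zero
       * transfer count means the NULL set is still current (the next SG batch
       * has not been programmed yet), so only record the miss and let
       * edma_execute() do the manual trigger later. */
      struct edmacc_param p;

      edma_read_slot(echan->slot[0], &p);
      if (p.a_b_cnt == 0 && p.ccnt == 0) {
              echan->missed = 1;              /* defer retrigger to edma_execute() */
      } else {
              /* The next batch is already set up: reissue the missed event now. */
              edma_clean_channel(echan->ch_num);
              edma_stop(echan->ch_num);
              edma_start(echan->ch_num);
      }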
    • J
      dma: edma: Write out and handle MAX_NR_SG at a given time · 53407062
      Joel Fernandes authored
      Process SG elements in batches of MAX_NR_SG if there are more than
      MAX_NR_SG of them. Due to this, at any given time only that many slots
      will be used in the given channel, no matter how long the scatter list
      is. We keep track of how much has been written in order to process the
      next batch of elements in the scatter-list and to detect completion.

      For such intermediate transfer completions (one batch of MAX_NR_SG), use
      the pause and resume functions instead of start and stop while such an
      intermediate transfer is in progress or has completed, as we do not want
      to clear any pending events. (A short sketch follows this entry.)
      Signed-off-by: Joel Fernandes <joelf@ti.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
      53407062
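      A minimal sketch of the intermediate-completion handling; the 'processed'
      and 'pset_nr' style bookkeeping is an assumption about the driver's
      fields:

      /* Completion interrupt for one MAX_NR_SG batch: if more SG entries
       * remain, pause the channel instead of stopping it so pending events are
       * not cleared; the next batch of slots is then written out and the
       * channel resumed. */
      if (edesc->processed < edesc->pset_nr) {
              edma_pause(echan->ch_num);
              edma_execute(echan);            /* program and resume next batch */
      } else {
              edma_stop(echan->ch_num);       /* whole scatter-list finished */
      }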
    • J
      dma: edma: Setup parameters to DMA MAX_NR_SG at a time · 6fbe24da
      Joel Fernandes authored
      Changes are made here to configure the existing parameters so that they
      can be DMA'd out in batches as needed.

      Also allocate only as many slots as the SG list needs, but no more than
      MAX_NR_SG; these slots are then reused accordingly. For example, if
      MAX_NR_SG=10 and the number of SG entries is 40, still only 10 slots will
      be allocated to DMA the entire SG list of size 40.

      Also enable TC interrupts for slots that are the last in the current
      iteration or that fall on a MAX_NR_SG boundary. (An illustrative sketch
      follows this entry.)
      Signed-off-by: Joel Fernandes <joelf@ti.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
      6fbe24da
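      An illustrative sketch of the slot setup; the allocation helpers are from
      the EDMA private API, while the surrounding variables (sg_len, sg_idx,
      edesc) are simplified assumptions:

      /* Use at most MAX_NR_SG param slots and reuse them for every batch. */
      nslots = min_t(int, MAX_NR_SG, sg_len);
      for (i = 0; i < nslots; i++) {
              if (echan->slot[i] < 0)
                      echan->slot[i] = edma_alloc_slot(EDMA_CTLR(echan->ch_num),
                                                       EDMA_SLOT_ANY);
      }

      /* While filling param set i for SG element sg_idx: request a transfer
       * completion interrupt on the last element of the list or on a MAX_NR_SG
       * boundary, so we know when to write out the next batch. */
      if (sg_idx == sg_len - 1 || (sg_idx + 1) % MAX_NR_SG == 0)
              edesc->pset[i].opt |= TCINTEN;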
    • A
      coh901318: don't open-code simple_read_from_buffer() · 5d30b427
      Al Viro authored
      ... and BTW, failing copy_to_user() means EFAULT, not EINVAL
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      5d30b427
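      A minimal sketch of what the debugfs read boils down to (the buffer
      variables dev_buf and dev_size are placeholders, not the driver's actual
      names):

      static ssize_t coh901318_debugfs_read(struct file *file, char __user *buf,
                                            size_t count, loff_t *f_pos)
      {
              /* simple_read_from_buffer() handles the offset/length clamping and
               * returns -EFAULT (not -EINVAL) when copy_to_user() fails. */
              return simple_read_from_buffer(buf, count, f_pos,
                                             dev_buf, dev_size);
      }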
  12. 03 Sep, 2013  1 commit
  13. 02 Sep, 2013  7 commits