1. 24 Aug 2015, 1 commit
  2. 23 Aug 2015, 1 commit
      dmaengine: Add support for the Analog Devices AXI-DMAC DMA controller · 0e3b67b3
      Authored by Lars-Peter Clausen
      Add support for the Analog Devices AXI-DMAC DMA controller. This controller
      is a soft peripheral that can be instantiated in an FPGA and is often used
      in Analog Devices' reference designs for FPGA platforms.
      
      The peripheral has various configuration options that can be selected at
      synthesis time and that influence the supported features of the instantiated
      peripheral. Those options are represented as device-tree properties so that
      the driver can behave accordingly.
      
      The peripheral has a zero-latency architecture, which means it is possible
      to switch from one descriptor to the next without any delay. This is
      achieved by having an internal queue which can hold multiple descriptors.
      The driver supports this: it submits new descriptors directly to the
      hardware until the queue is full instead of waiting for a descriptor to
      complete before the next one is submitted. Interrupts are used for the
      descriptor queue flow control.
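
      A minimal sketch of that submission model, assuming hypothetical names
      (struct my_chan, struct my_desc, MAX_HW_DESC, write_desc_to_hw()) rather
      than the real axi-dmac internals:

         #include <linux/list.h>

         #define MAX_HW_DESC 4   /* hypothetical depth of the hardware queue */

         struct my_desc {
             struct list_head node;      /* entry in the software pending list */
             /* ... hardware descriptor fields ... */
         };

         struct my_chan {
             struct list_head pending;   /* submitted, not yet in hardware */
             unsigned int num_hw;        /* descriptors currently queued in hw */
         };

         /* Hypothetical helper that programs one descriptor into the soft core. */
         static void write_desc_to_hw(struct my_chan *chan, struct my_desc *d)
         {
             /* write the descriptor's address/length words to the core here */
         }

         /*
          * Called at submit time and again from the completion interrupt: keep
          * the hardware queue full so the controller can move to the next
          * descriptor with zero latency instead of waiting for software.
          */
         static void push_pending_descs(struct my_chan *chan)
         {
             while (chan->num_hw < MAX_HW_DESC && !list_empty(&chan->pending)) {
                 struct my_desc *d = list_first_entry(&chan->pending,
                                                      struct my_desc, node);

                 list_del(&d->node);
                 write_desc_to_hw(chan, d);
                 chan->num_hw++;
             }
         }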
      
      Currently the driver supports SG, cyclic and interleaved slave DMA.
      Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
  3. 21 Aug 2015, 1 commit
      dmaengine: ioatdma: fix zero day warning on incompatible pointer type · aaecdebc
      Authored by Dave Jiang
      The 32-bit build produces the warning below. Since we don't expect anyone
      to actually use this driver on 32-bit, restrict ioatdma to be built only on
      x86_64. The issue has existed for a long time; it is only surfacing now
      because of code refactoring.
      
         drivers/dma/ioat/dma.c: In function 'ioat_timer_event':
      >> drivers/dma/ioat/dma.c:870:39: warning: passing argument 2 of 'ioat_cleanup_preamble' from incompatible pointer type
           if (ioat_cleanup_preamble(ioat_chan, &phys_complete))
                                                ^
         drivers/dma/ioat/dma.c:577:13: note: expected 'u64 *' but argument is of type 'dma_addr_t *'
          static bool ioat_cleanup_preamble(struct ioatdma_chan *ioat_chan,
                      ^
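
      A schematic illustration of the mismatch (not the actual driver code; the
      function body below is a stub): dma_addr_t is u32 on 32-bit builds without
      CONFIG_ARCH_DMA_ADDR_T_64BIT but u64 on x86_64, so &phys_complete only
      clashes with the u64 * parameter on the 32-bit build.

         #include <linux/types.h>

         struct ioatdma_chan;    /* opaque here; defined by the driver */

         /* Same shape as the function the note above points at. */
         static bool ioat_cleanup_preamble(struct ioatdma_chan *ioat_chan,
                                           u64 *phys_complete)
         {
             *phys_complete = 0;     /* stub body for illustration only */
             return false;
         }

         static void example(struct ioatdma_chan *ioat_chan)
         {
             dma_addr_t phys_complete;   /* u32 on 32-bit, u64 on x86_64 */

             /* warns "incompatible pointer type" only when dma_addr_t is u32 */
             if (ioat_cleanup_preamble(ioat_chan, &phys_complete))
                 ;
         }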
      Signed-off-by: Dave Jiang <dave.jiang@intel.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
  4. 20 Aug 2015, 1 commit
  5. 19 Aug 2015, 1 commit
  6. 28 Jul 2015, 1 commit
  7. 16 Jul 2015, 1 commit
  8. 26 May 2015, 1 commit
      dmaengine: pxa: add pxa dmaengine driver · a57e16cf
      Authored by Robert Jarzmik
      This is a new driver for pxa SoCs, which is also compatible with the former
      mmp_pdma.
      
      The rationale behind a new driver (as opposed to incremental patching) was:
      
       - the new driver relies on virt-dma, which obsoletes all the internal
         structures of mmp_pdma (sw_desc, hw_desc, ...) and, as a consequence,
         all the functions that used them
      
       - mmp_pdma allocates dma coherent descriptors containing not only the
         hardware descriptors but also linked-list information.
         The new driver puts only the dma hardware descriptors (i.e. 4 u32 values)
         into the dma_pool allocated memory, which completely changes the way
         descriptors are handled (see the descriptor sketch after this list)
      
       - the architecture behind the interrupt/tasklet management was rewritten to
         conform more closely to virt-dma
      
       - buffer alignment is handled differently
         The former driver assumed that the DMA channel stopped between each
         descriptor. The new one chains descriptors to keep the channel running,
         which is a necessary guarantee for real-time, high-bandwidth use cases
         such as video capture on "old" architectures such as pxa.
      
       - hot chaining / cold chaining / no chaining
         Whenever possible, submitting a descriptor "hot chains" it to a running
         channel. There is still no guarantee that the descriptor will be issued,
         as the channel might be stopped just before the descriptor is submitted.
         Yet this allows several video buffers to be submitted, and a buffer to be
         resubmitted while another is being handled.
         As before, dma_async_issue_pending() is the only way to guarantee that
         all the buffers are issued.
         When an alignment issue is detected (i.e. one address in a descriptor is
         not a multiple of 8) and the already running channel is in "aligned
         mode", the channel is stopped and restarted in "misaligned mode" to
         finish the issued list.
      
       - descriptor reuse
         A submitted, issued and completed descriptor can be reused, i.e.
         resubmitted, if it was prepared with the proper flag (DMA_PREP_ACK). In
         this case, only a release of the channel resources will release that
         buffer. This allows a rolling ring of buffers to be reused where several
         thousand hardware descriptors are in use (video buffers, for example);
         see the client-side sketch below.
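
      A minimal sketch of the descriptor layout described in the list above,
      with hypothetical names (struct hw_desc, struct sw_desc, my_pool): only
      the four u32 hardware words live in dma_pool memory, loosely mirroring
      the pxa DDADR/DSADR/DTADR/DCMD layout, while the list bookkeeping stays
      in ordinary kernel memory. This is an illustration of the idea, not the
      actual pxa_dma code.

         #include <linux/dmapool.h>
         #include <linux/list.h>
         #include <linux/slab.h>
         #include <linux/types.h>

         /* Hardware descriptor: exactly the 4 u32 words the controller reads.
          * This is the only part placed in dma_pool (coherent) memory. */
         struct hw_desc {
             u32 ddadr;  /* address of the next hardware descriptor */
             u32 dsadr;  /* source address */
             u32 dtadr;  /* target address */
             u32 dcmd;   /* command/length word */
         };

         /* Software bookkeeping lives in normal kernel memory, virt-dma style. */
         struct sw_desc {
             struct list_head node;
             struct hw_desc *hw;     /* CPU view of the descriptor */
             dma_addr_t hw_dma;      /* bus address handed to the controller */
         };

         static struct sw_desc *alloc_desc(struct dma_pool *my_pool)
         {
             struct sw_desc *sw = kzalloc(sizeof(*sw), GFP_NOWAIT);

             if (!sw)
                 return NULL;
             sw->hw = dma_pool_alloc(my_pool, GFP_NOWAIT, &sw->hw_dma);
             if (!sw->hw) {
                 kfree(sw);
                 return NULL;
             }
             return sw;
         }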
      
      Additionally, a set of smaller features is introduced:
       - debugging traces
       - a lockless way to know whether a descriptor has terminated
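
      A rough client-side sketch of the descriptor reuse described above, using
      generic dmaengine calls. It assumes chan, buf and len were set up elsewhere
      (dma_request_chan(), dma_map_single(), ...), omits error handling and
      completion waiting, and relies on the pxa driver behavior described in this
      commit (DMA_PREP_ACK allowing the same descriptor to be resubmitted), which
      is not a general dmaengine guarantee.

         #include <linux/dmaengine.h>

         static void reuse_descriptor(struct dma_chan *chan, dma_addr_t buf,
                                      size_t len)
         {
             struct dma_async_tx_descriptor *tx;
             int i;

             /* Prepare once, marking the descriptor as reusable. */
             tx = dmaengine_prep_slave_single(chan, buf, len, DMA_DEV_TO_MEM,
                                              DMA_PREP_ACK | DMA_PREP_INTERRUPT);
             if (!tx)
                 return;

             for (i = 0; i < 8; i++) {
                 dmaengine_submit(tx);           /* resubmit the same descriptor */
                 dma_async_issue_pending(chan);
                 /* ... wait for the completion callback before resubmitting ... */
             }
         }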
      
      The driver was tested on the zylonite board (pxa3xx) and the mioa701
      (pxa27x), with dmatest, pxa_camera and pxamci.
      Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
  9. 14 May 2015, 1 commit
  10. 27 Apr 2015, 1 commit
  11. 02 Apr 2015, 2 commits
  12. 01 Apr 2015, 1 commit
  13. 17 Mar 2015, 1 commit
  14. 07 Mar 2015, 1 commit
  15. 05 Feb 2015, 1 commit
  16. 24 Jan 2015, 1 commit
  17. 17 Nov 2014, 2 commits
  18. 06 Nov 2014, 1 commit
  19. 28 Sep 2014, 1 commit
  20. 23 Sep 2014, 1 commit
  21. 09 Aug 2014, 1 commit
  22. 07 Aug 2014, 1 commit
  23. 04 Aug 2014, 1 commit
  24. 30 Jul 2014, 1 commit
  25. 25 Jul 2014, 1 commit
  26. 17 Jul 2014, 1 commit
  27. 13 Jul 2014, 1 commit
  28. 02 Jun 2014, 1 commit
  29. 22 May 2014, 1 commit
  30. 30 Apr 2014, 1 commit
  31. 16 Apr 2014, 1 commit
      platform: Fix timberdale dependencies · 2dda47d1
      Authored by Jean Delvare
      VIDEO_TIMBERDALE selects TIMB_DMA, which itself depends on
      MFD_TIMBERDALE, so VIDEO_TIMBERDALE should either select or depend on
      MFD_TIMBERDALE as well. I chose to make it depend on MFD_TIMBERDALE
      because I think it makes more sense and it is consistent with what other
      options do.
      
      Adding a "|| HAS_IOMEM" to the TIMB_DMA dependencies silenced the
      kconfig warning about unmet direct dependencies but it was wrong:
      without MFD_TIMBERDALE, TIMB_DMA is useless as the driver has no
      device to bind to.
      Signed-off-by: Jean Delvare <jdelvare@suse.de>
      Cc: Vinod Koul <vinod.koul@intel.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Mauro Carvalho Chehab <m.chehab@samsung.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
  32. 05 Apr 2014, 1 commit
  33. 18 Feb 2014, 1 commit
  34. 12 Feb 2014, 1 commit
  35. 03 Feb 2014, 1 commit
  36. 20 Jan 2014, 1 commit
  37. 08 Jan 2014, 1 commit
  38. 19 Dec 2013, 1 commit
      net_dma: mark broken · 77873803
      Authored by Dan Williams
      net_dma can cause data to be copied to a stale mapping if a
      copy-on-write fault occurs during dma.  The application sees missing
      data.
      
      The following trace is produced by modifying the kernel to WARN whenever it
      triggers copy-on-write on a page that is undergoing dma:
      
       WARNING: CPU: 24 PID: 2529 at lib/dma-debug.c:485 debug_dma_assert_idle+0xd2/0x120()
       ioatdma 0000:00:04.0: DMA-API: cpu touching an active dma mapped page [pfn=0x16bcd9]
       Modules linked in: iTCO_wdt iTCO_vendor_support ioatdma lpc_ich pcspkr dca
       CPU: 24 PID: 2529 Comm: linbug Tainted: G        W    3.13.0-rc1+ #353
        00000000000001e5 ffff88016f45f688 ffffffff81751041 ffff88017ab0ef70
        ffff88016f45f6d8 ffff88016f45f6c8 ffffffff8104ed9c ffffffff810f3646
        ffff8801768f4840 0000000000000282 ffff88016f6cca10 00007fa2bb699349
       Call Trace:
        [<ffffffff81751041>] dump_stack+0x46/0x58
        [<ffffffff8104ed9c>] warn_slowpath_common+0x8c/0xc0
        [<ffffffff810f3646>] ? ftrace_pid_func+0x26/0x30
        [<ffffffff8104ee86>] warn_slowpath_fmt+0x46/0x50
        [<ffffffff8139c062>] debug_dma_assert_idle+0xd2/0x120
        [<ffffffff81154a40>] do_wp_page+0xd0/0x790
        [<ffffffff811582ac>] handle_mm_fault+0x51c/0xde0
        [<ffffffff813830b9>] ? copy_user_enhanced_fast_string+0x9/0x20
        [<ffffffff8175fc2c>] __do_page_fault+0x19c/0x530
        [<ffffffff8175c196>] ? _raw_spin_lock_bh+0x16/0x40
        [<ffffffff810f3539>] ? trace_clock_local+0x9/0x10
        [<ffffffff810fa1f4>] ? rb_reserve_next_event+0x64/0x310
        [<ffffffffa0014c00>] ? ioat2_dma_prep_memcpy_lock+0x60/0x130 [ioatdma]
        [<ffffffff8175ffce>] do_page_fault+0xe/0x10
        [<ffffffff8175c862>] page_fault+0x22/0x30
        [<ffffffff81643991>] ? __kfree_skb+0x51/0xd0
        [<ffffffff813830b9>] ? copy_user_enhanced_fast_string+0x9/0x20
        [<ffffffff81388ea2>] ? memcpy_toiovec+0x52/0xa0
        [<ffffffff8164770f>] skb_copy_datagram_iovec+0x5f/0x2a0
        [<ffffffff8169d0f4>] tcp_rcv_established+0x674/0x7f0
        [<ffffffff816a68c5>] tcp_v4_do_rcv+0x2e5/0x4a0
        [..]
       ---[ end trace e30e3b01191b7617 ]---
       Mapped at:
        [<ffffffff8139c169>] debug_dma_map_page+0xb9/0x160
        [<ffffffff8142bf47>] dma_async_memcpy_pg_to_pg+0x127/0x210
        [<ffffffff8142cce9>] dma_memcpy_pg_to_iovec+0x119/0x1f0
        [<ffffffff81669d3c>] dma_skb_copy_datagram_iovec+0x11c/0x2b0
        [<ffffffff8169d1ca>] tcp_rcv_established+0x74a/0x7f0:
      
      ...the problem is that the receive path falls back to cpu-copy in
      several locations, and this trace is just one of those areas.  A few
      options were considered to fix this:
      
      1/ sync all dma whenever a cpu copy branch is taken
      
      2/ modify the page fault handler to hold off while dma is in-flight
      
      Option 1 adds yet more cpu overhead to an "offload" that struggles to
      compete with cpu-copy.  Option 2 adds checks for behavior that is already
      documented as broken when using get_user_pages().  At a minimum, a debug
      mode is warranted to catch and flag these violations of the dma-api versus
      get_user_pages().
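
      A schematic of the debug check visible in the trace above, where the
      copy-on-write fault path asserts that the faulting page is not an active
      dma target. debug_dma_assert_idle() is the lib/dma-debug.c helper that
      emits the "cpu touching an active dma mapped page" warning; the wrapper
      below is only an illustration, not the actual do_wp_page() code.

         #include <linux/dma-debug.h>
         #include <linux/mm_types.h>

         /* Illustration only: before the kernel breaks COW and lets the cpu
          * write a private copy of @page, assert that @page is not currently
          * mapped for dma.  If it is, dma-debug WARNs as in the trace above. */
         static void assert_page_not_under_dma(struct page *page)
         {
             debug_dma_assert_idle(page);
         }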
      
      Thanks to David for his reproducer.
      
      Cc: <stable@vger.kernel.org>
      Cc: Dave Jiang <dave.jiang@intel.com>
      Cc: Vinod Koul <vinod.koul@intel.com>
      Cc: Alexander Duyck <alexander.h.duyck@intel.com>
      Reported-by: David Whipple <whipple@securedatainnovations.ch>
      Acked-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>