1. 10 Jul 2018 (1 commit)
    • dmaengine: add support for reporting pause and resume separately · d8095f94
      Authored by Marek Szyprowski
      The 'cmd_pause' DMA channel capability means that the respective DMA engine
      supports both pausing and resuming the given DMA channel. However, in some
      cases it is important to know whether a DMA channel can be paused without
      the need to resume it. This is a typical requirement for proper residue
      reading on transfer timeout in UART drivers. There are also some DMA
      engines with limited hardware which don't really support resuming.
      
      Reporting the pause and resume capabilities separately allows UART drivers
      to check only for the capabilities they really require and to operate in
      DMA mode also on systems with limited DMA hardware. On the other hand,
      drivers which rely on full channel suspend/resume support should now check
      for both the 'pause' and 'resume' features.
      
      Existing clients of dma_get_slave_caps() have been checked, and the only
      driver which relies on proper channel resuming is the
      soc-generic-dmaengine-pcm driver, which has been updated to check the newly
      added capability. The existing 'cmd_pause' now only indicates that the DMA
      engine supports pausing the given DMA channel.
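      
      For illustration, a minimal sketch of a client checking the split
      capabilities (assuming the cmd_pause/cmd_resume fields of
      struct dma_slave_caps after this change):
      
      	struct dma_slave_caps caps;
      	int ret;
      
      	ret = dma_get_slave_caps(chan, &caps);
      	if (ret)
      		return ret;
      
      	if (caps.cmd_pause && caps.cmd_resume) {
      		/* full suspend/resume, e.g. what soc-generic-dmaengine-pcm needs */
      	} else if (caps.cmd_pause) {
      		/* pause-only: enough for residue reading on an RX timeout */
      	}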
      Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
      Acked-by: Mark Brown <broonie@kernel.org>
      Signed-off-by: Vinod Koul <vkoul@kernel.org>
      d8095f94
  2. 16 Jun 2018 (1 commit)
  3. 28 Aug 2017 (1 commit)
  4. 22 Aug 2017 (1 commit)
  5. 14 Mar 2017 (1 commit)
    • dmaengine: Fix array index out of bounds warning in __get_unmap_pool() · 23f963e9
      Authored by Matthias Kaehlcke
      This fixes the following warning when building with clang and
      CONFIG_DMA_ENGINE_RAID=n:
      
      drivers/dma/dmaengine.c:1102:11: error: array index 2 is past the end of the array (which contains 1 element) [-Werror,-Warray-bounds]
                      return &unmap_pool[2];
                              ^          ~
      drivers/dma/dmaengine.c:1083:1: note: array 'unmap_pool' declared here
      static struct dmaengine_unmap_pool unmap_pool[] = {
      ^
      drivers/dma/dmaengine.c:1104:11: error: array index 3 is past the end of the array (which contains 1 element) [-Werror,-Warray-bounds]
                      return &unmap_pool[3];
                              ^          ~
      drivers/dma/dmaengine.c:1083:1: note: array 'unmap_pool' declared here
      static struct dmaengine_unmap_pool unmap_pool[] = {
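      
      For reference, a sketch of the shape of the fix (the exact upstream patch
      may differ): compile out the higher-order cases in __get_unmap_pool() when
      CONFIG_DMA_ENGINE_RAID=n, since unmap_pool[] then has only one element
      ('order' here is the allocation order computed from the requested unmap
      count):
      
      	switch (order) {
      	case 0 ... 1:
      		return &unmap_pool[0];
      #if IS_ENABLED(CONFIG_DMA_ENGINE_RAID)
      	case 2 ... 4:
      		return &unmap_pool[1];
      	case 5 ... 7:
      		return &unmap_pool[2];
      	case 8:
      		return &unmap_pool[3];
      #endif
      	default:
      		BUG();
      		return NULL;
      	}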
      Signed-off-by: Matthias Kaehlcke <mka@chromium.org>
      Reviewed-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
      23f963e9
  6. 02 Jan 2017 (1 commit)
  7. 22 Aug 2016 (1 commit)
    • dmaengine: device must have at least one channel · 76d7b84b
      Authored by Viresh Kumar
      A DMA device can't be registered if it doesn't have any channels registered
      at all. Moreover, attempting to do so leads to a memory leak, reported by
      kmemleak as follows (on a 3.10 kernel; the same happens on mainline):
      
      unreferenced object 0xffffffc09e597240 (size 64):
        comm "swapper/0", pid 1, jiffies 4294877736 (age 7060.280s)
        hex dump (first 32 bytes):
          00 00 00 00 c0 ff ff ff 30 00 00 ff 00 00 00 ff  ........0.......
          00 00 00 ff 00 00 00 ff 00 00 00 ff 00 00 00 ff  ................
        backtrace:
          [<ffffffc0003079ec>] create_object+0x148/0x2a0
          [<ffffffc000cc150c>] kmemleak_alloc+0x80/0xbc
          [<ffffffc000303a7c>] kmem_cache_alloc_trace+0x120/0x1ac
          [<ffffffc00054771c>] dma_async_device_register+0x160/0x46c
          [<ffffffc000548958>] foo_probe+0x1a0/0x264
          [<ffffffc0005d6658>] platform_drv_probe+0x14/0x20
          [<ffffffc0005d50cc>] driver_probe_device+0x160/0x374
          [<ffffffc0005d538c>] __driver_attach+0x60/0x90
          [<ffffffc0005d3e78>] bus_for_each_dev+0x7c/0xb0
          [<ffffffc0005d4a0c>] driver_attach+0x1c/0x28
          [<ffffffc0005d459c>] bus_add_driver+0x124/0x248
          [<ffffffc0005d59cc>] driver_register+0x90/0x110
          [<ffffffc0005d6bf4>] platform_driver_register+0x58/0x64
          [<ffffffc00142a70c>] foo_driver_init+0x10/0x1c
          [<ffffffc000200878>] do_one_initcall+0xac/0x148
          [<ffffffc00140096c>] kernel_init_freeable+0x1a0/0x258
      
      Return -ENODEV from dma_async_device_register() in such a case.
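      
      A minimal sketch of such a check early in dma_async_device_register()
      (the exact error message is an assumption; 'chancnt' is the existing
      channel counter in struct dma_device):
      
      	if (!device->chancnt) {
      		dev_err(device->dev, "%s: device has no channels!\n", __func__);
      		return -ENODEV;
      	}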
      Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
      76d7b84b
  8. 14 May 2016 (1 commit)
  9. 12 May 2016 (1 commit)
  10. 13 Apr 2016 (1 commit)
  11. 06 Apr 2016 (1 commit)
  12. 04 Apr 2016 (1 commit)
  13. 09 Feb 2016 (1 commit)
  14. 18 Dec 2015 (3 commits)
    • dmaengine: core: Introduce new, universal API to request a channel · a8135d0d
      Authored by Peter Ujfalusi
      The two new API functions can cover most, if not all, of the current APIs
      used to request a channel. With minimal effort, dmaengine drivers, platforms
      and dmaengine client drivers can be converted to use them.
      
      struct dma_chan *dma_request_chan_by_mask(const dma_cap_mask_t *mask);
      
      This requests any channel matching the given capabilities. It can be used
      to request a channel for memcpy, memset, xor, etc., where no hardware
      synchronization is needed.
      
      struct dma_chan *dma_request_chan(struct device *dev, const char *name);
      This requests a slave channel. dma_request_chan() will try to find the
      channel via DT or ACPI; if the kernel booted in non-DT/ACPI mode, it will
      use a filter lookup table and retrieve the needed information from the
      dma_slave_map provided by the DMA drivers.
      This legacy mode needs changes in platform code and in dmaengine drivers
      before the dmaengine client drivers can be converted:
      
      For each dmaengine driver, an array of DMA device name, slave name and
      filter function parameter needs to be added:
      
      static const struct dma_slave_map da830_edma_map[] = {
      	{ "davinci-mcasp.0", "rx", EDMA_FILTER_PARAM(0, 0) },
      	{ "davinci-mcasp.0", "tx", EDMA_FILTER_PARAM(0, 1) },
      	{ "davinci-mcasp.1", "rx", EDMA_FILTER_PARAM(0, 2) },
      	{ "davinci-mcasp.1", "tx", EDMA_FILTER_PARAM(0, 3) },
      	{ "davinci-mcasp.2", "rx", EDMA_FILTER_PARAM(0, 4) },
      	{ "davinci-mcasp.2", "tx", EDMA_FILTER_PARAM(0, 5) },
      	{ "spi_davinci.0", "rx", EDMA_FILTER_PARAM(0, 14) },
      	{ "spi_davinci.0", "tx", EDMA_FILTER_PARAM(0, 15) },
      	{ "da830-mmc.0", "rx", EDMA_FILTER_PARAM(0, 16) },
      	{ "da830-mmc.0", "tx", EDMA_FILTER_PARAM(0, 17) },
      	{ "spi_davinci.1", "rx", EDMA_FILTER_PARAM(0, 18) },
      	{ "spi_davinci.1", "tx", EDMA_FILTER_PARAM(0, 19) },
      };
      
      This information is going to be needed by the dmaengine driver, so a
      modification to the platform_data is needed, and the slave map should be
      added to the pdata of the DMA driver:
      
      da8xx_edma0_pdata.slave_map = da830_edma_map;
      da8xx_edma0_pdata.slavecnt = ARRAY_SIZE(da830_edma_map);
      
      The DMA driver then needs to configure the needed device -> filter_fn
      mapping before it registers with dma_async_device_register() :
      
      ecc->dma_slave.filter_map.map = info->slave_map;
      ecc->dma_slave.filter_map.mapcnt = info->slavecnt;
      ecc->dma_slave.filter_map.fn = edma_filter_fn;
      
      When neither DT nor ACPI lookup is available, dma_request_chan() will try
      to match the requester's device name against the filter_map's list of
      device names; when a match is found, it will use the information from the
      dma_slave_map to get the channel with the dma_get_channel() internal
      function.
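      
      For illustration, a sketch of how a client driver would then request its
      channel with the new API (the "tx" channel name is just an example):
      
      	chan = dma_request_chan(&pdev->dev, "tx");
      	if (IS_ERR(chan))
      		return PTR_ERR(chan);
      
      	/* ... use the channel ... */
      
      	dma_release_channel(chan);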
      Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
      Reviewed-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
      a8135d0d
    • dmaengine: core: Move and merge the code paths using private_candidate · 7bd903c5
      Authored by Peter Ujfalusi
      Channel matching with private_candidate() is used in two paths; the error
      checking differs slightly between them and they also duplicate code. Move
      the code into find_candidate() to provide consistent execution and to allow
      us to reuse this mode of channel lookup later.
      Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
      Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com>
      Reviewed-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
      7bd903c5
    • dmaengine: core: Skip mask matching when it is not provided to private_candidate · 26b64256
      Authored by Peter Ujfalusi
      If the mask is NULL, skip the mask matching against the DMA device capabilities.
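      
      A sketch of what such a guard looks like inside private_candidate(),
      assuming the existing __dma_device_satisfies_mask() helper:
      
      	if (mask && !__dma_device_satisfies_mask(dev, mask)) {
      		dev_dbg(dev->dev, "%s: wrong capabilities\n", __func__);
      		return NULL;
      	}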
      Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
      Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com>
      Reviewed-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
      26b64256
  15. 16 Nov 2015 (2 commits)
    • dmaengine: enable DMA_CTRL_REUSE · 9eeacd3a
      Authored by Robert Jarzmik
      In the current state, the capability of transfer reuse can be neither set by
      a slave dmaengine driver nor used by a client driver, because the capability
      is not made available via dma_get_slave_caps().
      
      Fix this by adding a way to declare the capability.
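      
      For illustration, a sketch of the client-side view once the capability is
      exposed ('txd' stands for an already prepared descriptor; descriptor_reuse
      is the dma_slave_caps field backing DMA_CTRL_REUSE):
      
      	struct dma_slave_caps caps;
      
      	if (!dma_get_slave_caps(chan, &caps) && caps.descriptor_reuse)
      		ret = dmaengine_desc_set_reuse(txd);	/* descriptor may be resubmitted */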
      
      Fixes: 27242021 ("dmaengine: Add DMA_CTRL_REUSE")
      Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
      9eeacd3a
    • dmaengine: Add transfer termination synchronization support · b36f09c3
      Authored by Lars-Peter Clausen
      The DMAengine API has a long-standing race condition that is inherent to
      the API itself. Calling dmaengine_terminate_all() is supposed to stop and
      abort any pending or active transfers that have previously been submitted.
      Unfortunately it is possible that this operation races against a currently
      running (or, with some drivers, also scheduled) completion callback.
      
      Since the API allows dmaengine_terminate_all() to be called from atomic
      context as well as from within a completion callback, it is not possible to
      synchronize to the execution of the completion callback from within
      dmaengine_terminate_all() itself.
      
      This means that a user of the DMAengine API does not know when it is safe
      to free resources used in the completion callback, which can result in a
      use-after-free race condition.
      
      This patch addresses the issue by introducing an explicit synchronization
      primitive to the DMAengine API called dmaengine_synchronize().
      
      The existing dmaengine_terminate_all() is deprecated in favor of
      dmaengine_terminate_sync() and dmaengine_terminate_async(). The former
      aborts all pending and active transfers and synchronizes to the current
      context, meaning it will wait until all running completion callbacks have
      finished. This means it is only possible to call this function from
      non-atomic context. The latter function does not synchronize, but can still
      be used in atomic context or from within a completion callback. It has to be
      followed up by dmaengine_synchronize() before a client can free the
      resources used in a completion callback.
      
      In addition to this, the semantics of the device_terminate_all() callback
      are slightly relaxed by this patch. It is now OK for a driver to only
      schedule the termination of the active transfer; it does not necessarily
      have to wait until the DMA controller has completely stopped. The driver
      must ensure, though, that the controller has stopped and no longer accesses
      any memory when the device_synchronize() callback returns.
      
      This was done in part because most drivers do not pay attention to this
      anyway at the moment, and to emphasize that it needs to be done when the
      device_synchronize() callback is implemented. It also helps with
      implementing support for devices where stopping the controller can require
      operations that may sleep.
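      
      For illustration, a sketch of the two resulting client patterns ('chan' is
      a previously requested dma_chan):
      
      	/* Non-atomic context: abort and wait for in-flight callbacks. */
      	dmaengine_terminate_sync(chan);
      
      	/* Atomic context (e.g. an interrupt handler): only request the
      	 * termination here, and synchronize later, before freeing any
      	 * resources used by the completion callback. */
      	dmaengine_terminate_async(chan);
      	/* ... back in non-atomic context ... */
      	dmaengine_synchronize(chan);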
      Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
      b36f09c3
  16. 30 Sep 2015 (1 commit)
    • dmaengine: fix balance of privatecnt · 214fc4e4
      Authored by Peter Ujfalusi
      dma_release_channel() decrements the privatecnt counter, and almost all
      dma_get* functions increment it, with the exception of
      dma_get_slave_channel(). In most cases this does not cause an issue, since
      normally the channel is not requested and then released, but if a driver
      requests a DMA channel via dma_get_slave_channel() and releases it,
      privatecnt becomes unbalanced, which will prevent, for example, getting a
      channel for memcpy.
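      
      A sketch of the shape of such a fix in dma_get_slave_channel() (the exact
      upstream diff may differ): after a successful dma_chan_get(), mark the
      device private and bump privatecnt so the later dma_release_channel()
      decrement is matched:
      
      	if (!dma_chan_get(chan)) {
      		dma_cap_set(DMA_PRIVATE, chan->device->cap_mask);
      		chan->device->privatecnt++;
      	}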
      Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
      214fc4e4
  17. 22 Sep 2015 (1 commit)
  18. 18 Aug 2015 (1 commit)
  19. 12 Jun 2015 (2 commits)
  20. 02 Jun 2015 (1 commit)
  21. 09 May 2015 (1 commit)
    • dmaengine: of_dma: Support for DMA routers · 56f13c0d
      Authored by Peter Ujfalusi
      DMA routers are transparent devices used to mux DMA requests from
      peripherals to DMA controllers. They are used when the SoC integrates more
      devices with DMA requests than its controller can handle.
      DRA7x is one example of such a SoC: its sDMA can handle 128 DMA request
      lines, but at the SoC level there are 205 DMA requests.
      
      The of_dma_router will be registered as an of_dma_controller with a special
      xlate function and additional parameters. The driver for the router is
      responsible for crafting the dma_spec (in the of_dma_route_allocate
      callback) which can be used to request a DMA channel from the real DMA
      controller. This way the router can be transparent to the system while
      remaining generic enough to be used in different environments.
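      
      For illustration, a sketch of how a router driver plugs into this
      interface ('rtr' and the route_allocate/route_free helpers are
      hypothetical; of_dma_router_register() and the dma_router structure are
      the interfaces added here):
      
      	static void *rtr_route_allocate(struct of_phandle_args *dma_spec,
      					struct of_dma *ofdma)
      	{
      		/* pick a free request line on the real DMA controller and
      		 * rewrite dma_spec so its xlate function can resolve it */
      		return route_data;
      	}
      
      	/* in the router driver's probe: */
      	rtr->dmarouter.dev = &pdev->dev;
      	rtr->dmarouter.route_free = rtr_route_free;
      	ret = of_dma_router_register(pdev->dev.of_node, rtr_route_allocate,
      				     &rtr->dmarouter);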
      Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
      56f13c0d
  22. 29 Apr 2015 (1 commit)
  23. 12 Apr 2015 (1 commit)
  24. 17 Mar 2015 (1 commit)
  25. 06 Mar 2015 (1 commit)
  26. 18 Jan 2015 (1 commit)
  27. 22 Dec 2014 (4 commits)
  28. 09 Dec 2014 (1 commit)
  29. 28 Sep 2014 (1 commit)
  30. 22 May 2014 (1 commit)
  31. 12 Feb 2014 (1 commit)
  32. 13 Dec 2013 (2 commits)
    • dmaengine: fix sleep in atomic · 8194ee27
      Authored by Dan Williams
       BUG: sleeping function called from invalid context at mm/mempool.c:203
       in_atomic(): 1, irqs_disabled(): 0, pid: 43502, name: linbug
       no locks held by linbug/43502.
       CPU: 7 PID: 43502 Comm: linbug Not tainted 3.13.0-rc1+ #15
       Hardware name:
        0000000000000010 ffff88005ebd1878 ffffffff8172d512 ffff8801752bc1c0
        ffff8801752bc1c0 ffff88005ebd1898 ffffffff8109d1f6 ffff88005f9a3c58
        ffff880177f0f080 ffff88005ebd1918 ffffffff81161f43 ffff88005ebd18f8
       Call Trace:
        [<ffffffff8172d512>] dump_stack+0x4e/0x68
        [<ffffffff8109d1f6>] __might_sleep+0xe6/0x120
        [<ffffffff81161f43>] mempool_alloc+0x93/0x170
        [<ffffffff810c0c34>] ? mark_held_locks+0x74/0x140
        [<ffffffff8118a826>] ? follow_page_mask+0x556/0x600
        [<ffffffff814107ae>] dmaengine_get_unmap_data+0x2e/0x60
        [<ffffffff81410f11>] dma_async_memcpy_pg_to_pg+0x41/0x1c0
        [<ffffffff814110e0>] dma_async_memcpy_buf_to_pg+0x50/0x60
        [<ffffffff81411bdc>] dma_memcpy_to_iovec+0xfc/0x190
        [<ffffffff816163af>] dma_skb_copy_datagram_iovec+0x6f/0x2b0
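      
      The trace shows dmaengine_get_unmap_data() sleeping in mempool_alloc()
      while the caller is atomic. A sketch of the usual cure for this class of
      bug (an assumption about the actual patch): have the memcpy helpers
      allocate the unmap data with a non-sleeping GFP flag, e.g.:
      
      	unmap = dmaengine_get_unmap_data(dev->dev, 2, GFP_NOWAIT);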
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      8194ee27
    • dmaengine: fix enable for high order unmap pools · 3cc377b9
      Authored by Dan Williams
      The higher-order mempools support raid operations, and we want to disable
      them when raid support is not enabled. Making them conditional on
      ASYNC_TX_DMA is not sufficient, as other users (specifically dmatest) will
      also issue raid operations. Make raid drivers explicitly request that the
      core carry the higher-order pools.
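      
      A sketch of what this looks like in the core (the __UNMAP_POOL() initializer
      name is an assumption; raid-capable DMA drivers would then select the
      CONFIG_DMA_ENGINE_RAID option, making the higher-order pools conditional, as
      the clang warning quoted under commit 23f963e9 above also suggests):
      
      	static struct dmaengine_unmap_pool unmap_pool[] = {
      		__UNMAP_POOL(2),
      	#if IS_ENABLED(CONFIG_DMA_ENGINE_RAID)
      		__UNMAP_POOL(16),
      		__UNMAP_POOL(128),
      		__UNMAP_POOL(256),
      	#endif
      	};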
      Reported-by: Ezequiel Garcia <ezequiel.garcia@free-electrons.com>
      Tested-by: Ezequiel Garcia <ezequiel.garcia@free-electrons.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      3cc377b9