1. 22 Dec 2014 · 4 commits
  2. 09 Dec 2014 · 1 commit
  3. 28 Sep 2014 · 1 commit
  4. 22 May 2014 · 1 commit
  5. 12 Feb 2014 · 1 commit
  6. 13 Dec 2013 · 2 commits
    • dmaengine: fix sleep in atomic · 8194ee27
      Authored by Dan Williams
       BUG: sleeping function called from invalid context at mm/mempool.c:203
       in_atomic(): 1, irqs_disabled(): 0, pid: 43502, name: linbug
       no locks held by linbug/43502.
       CPU: 7 PID: 43502 Comm: linbug Not tainted 3.13.0-rc1+ #15
       Hardware name:
        0000000000000010 ffff88005ebd1878 ffffffff8172d512 ffff8801752bc1c0
        ffff8801752bc1c0 ffff88005ebd1898 ffffffff8109d1f6 ffff88005f9a3c58
        ffff880177f0f080 ffff88005ebd1918 ffffffff81161f43 ffff88005ebd18f8
       Call Trace:
        [<ffffffff8172d512>] dump_stack+0x4e/0x68
        [<ffffffff8109d1f6>] __might_sleep+0xe6/0x120
        [<ffffffff81161f43>] mempool_alloc+0x93/0x170
        [<ffffffff810c0c34>] ? mark_held_locks+0x74/0x140
        [<ffffffff8118a826>] ? follow_page_mask+0x556/0x600
        [<ffffffff814107ae>] dmaengine_get_unmap_data+0x2e/0x60
        [<ffffffff81410f11>] dma_async_memcpy_pg_to_pg+0x41/0x1c0
        [<ffffffff814110e0>] dma_async_memcpy_buf_to_pg+0x50/0x60
        [<ffffffff81411bdc>] dma_memcpy_to_iovec+0xfc/0x190
        [<ffffffff816163af>] dma_skb_copy_datagram_iovec+0x6f/0x2b0
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
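      The trace above shows a mempool allocation that may sleep being reached from atomic context; the fix is to keep the allocation non-blocking on such paths. A minimal user-space C sketch of the rule being violated — all names here (pool_alloc, GFP_MAY_SLEEP, in_atomic_ctx) are illustrative stand-ins, not kernel APIs:

      ```c
      #include <assert.h>
      #include <stdio.h>
      #include <stdlib.h>

      /* Hypothetical model of the kernel rule behind the BUG above: an
       * allocation that may sleep (GFP_KERNEL-like) must not run while the
       * caller is in atomic context; a non-sleeping flag (GFP_NOWAIT-like)
       * is safe in both contexts. */

      enum gfp { GFP_MAY_SLEEP, GFP_NOWAIT_LIKE };

      static int in_atomic_ctx; /* models in_atomic() */

      /* Returns NULL where the kernel would print the "sleeping function
       * called from invalid context" warning. */
      static void *pool_alloc(enum gfp flags)
      {
          if (flags == GFP_MAY_SLEEP && in_atomic_ctx)
              return NULL;
          return malloc(16);
      }

      int main(void)
      {
          in_atomic_ctx = 1;                         /* e.g. under a spinlock */
          assert(pool_alloc(GFP_MAY_SLEEP) == NULL); /* reproduces the bug */
          void *p = pool_alloc(GFP_NOWAIT_LIKE);     /* the fix: never sleeps */
          assert(p != NULL);
          free(p);
          puts("ok");
          return 0;
      }
      ```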
    • dmaengine: fix enable for high order unmap pools · 3cc377b9
      Authored by Dan Williams
      The higher order mempools support raid operations, and we want to
      disable them when raid support is not enabled.  Making them conditional
      on ASYNC_TX_DMA is not sufficient as other users (specifically dmatest)
      will also issue raid operations.  Make raid drivers explicitly request
      that the core carry the higher order pools.
      Reported-by: Ezequiel Garcia <ezequiel.garcia@free-electrons.com>
      Tested-by: Ezequiel Garcia <ezequiel.garcia@free-electrons.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
  7. 10 Dec 2013 · 2 commits
    • dma: add dma_get_any_slave_channel(), for use in of_xlate() · 8010dad5
      Authored by Stephen Warren
      mmp_pdma.c implements a custom of_xlate() function that is 95% identical
      to what Tegra will need. Create a function to implement the common part,
      so everyone doesn't just cut/paste the implementation.
      
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Vinod Koul <vinod.koul@intel.com>
      Cc: Lars-Peter Clausen <lars@metafoo.de>
      Cc: dmaengine@vger.kernel.org
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Stephen Warren <swarren@nvidia.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
    • dma: add channel request API that supports deferred probe · 0ad7c000
      Authored by Stephen Warren
      dma_request_slave_channel() simply returns NULL whenever DMA channel
      lookup fails. Lookup could fail for two distinct reasons:
      
      a) No DMA specification exists for the channel name.
         This includes situations where no DMA specifications exist at all, or
         other general lookup problems.
      
      b) A DMA specification does exist, yet the driver for that channel is not
         yet registered.
      
      Case (b) should trigger deferred probe in client drivers, but since they
      have no way to differentiate the two situations, they cannot act on it.
      
      Implement new function dma_request_slave_channel_reason(), which performs
      identically to dma_request_slave_channel(), except that it returns an
      error-pointer rather than NULL, which allows callers to detect when
      deferred probe should occur.
      
      Eventually, all drivers should be converted to this new API, the old API
      removed, and the new API renamed to the more desirable name. This patch
      doesn't convert the existing API and all drivers in one go, since some
      drivers call dma_request_slave_channel() and then dma_request_channel()
      if that fails. That would require either modifying dma_request_channel()
      in the same way, or adding extra error-handling code to all affected
      drivers; there are close to 100 drivers using the other API, rather than
      just the 15-20 or so that use dma_request_slave_channel(), which might
      have been tenable in a single patch.
      
      acpi_dma_request_slave_chan_by_name() doesn't currently implement
      deferred probe. It should, but this will be addressed later.
      Acked-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Stephen Warren <swarren@nvidia.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
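      The value of returning an error pointer rather than NULL is that the caller can distinguish "no DMA spec" from "provider not probed yet". A user-space C sketch of the idea — ERR_PTR/IS_ERR mimic the kernel macros, and request_chan() is a hypothetical stand-in for dma_request_slave_channel_reason(), with spec_exists/provider_ready as illustrative knobs:

      ```c
      #include <assert.h>
      #include <errno.h>
      #include <stdint.h>
      #include <stdio.h>

      #define EPROBE_DEFER 517 /* the kernel's deferred-probe errno */

      /* User-space imitations of the kernel's error-pointer helpers. */
      static inline void *ERR_PTR(long err) { return (void *)err; }
      static inline long PTR_ERR(const void *p) { return (long)p; }
      static inline int IS_ERR(const void *p)
      {
          return (uintptr_t)p >= (uintptr_t)-4095;
      }

      struct chan { int id; };
      static struct chan the_chan = { 0 };

      /* spec_exists: a DMA specification names the channel;
       * provider_ready: the channel's driver has registered. */
      static struct chan *request_chan(int spec_exists, int provider_ready)
      {
          if (!spec_exists)
              return ERR_PTR(-ENODEV);       /* case (a): no spec at all */
          if (!provider_ready)
              return ERR_PTR(-EPROBE_DEFER); /* case (b): retry probe later */
          return &the_chan;
      }

      int main(void)
      {
          assert(PTR_ERR(request_chan(1, 0)) == -EPROBE_DEFER); /* defer */
          assert(PTR_ERR(request_chan(0, 1)) == -ENODEV);       /* no spec */
          assert(!IS_ERR(request_chan(1, 1)));                  /* success */
          puts("ok");
          return 0;
      }
      ```

      With a bare NULL both failure cases collapse into one, which is exactly why case (b) could not trigger deferred probe before.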
  8. 15 Nov 2013 · 3 commits
  9. 14 Nov 2013 · 2 commits
  10. 25 Oct 2013 · 1 commit
  11. 02 Sep 2013 · 1 commit
  12. 23 Aug 2013 · 1 commit
    • dmaengine: make dma_channel_rebalance() NUMA aware · c4d27c4d
      Authored by Brice Goglin
      dma_channel_rebalance() currently distributes channels by processor ID.
      These IDs often change with the BIOS, and the order isn't related to
      the DMA channel list (related to PCI bus ids).
      * On my SuperMicro dual E5 machine, first socket has processor IDs [0-7]
        (and [16-23] for hyperthreads), second socket has [8-15]+[24-31]
        => channels are properly allocated to local CPUs.
      * On Dells R720 with same processors, first socket has even processor IDs,
        second socket has odd numbers
        => half the processors get channels on the remote socket, causing
           cross-NUMA traffic and lower DMA performance.
      
      Change nth_chan() to return the channel with the minimal table_count that
      is in the NUMA node of the given CPU, if any. If there is none, the
      (non-local) channel with the minimal table_count is returned. nth_chan()
      is therefore renamed to min_chan(), since we no longer iterate up to the
      nth channel. In practice the behavior is unchanged: earlier channels are
      taken first and are then skipped because they gained an extra reference.
      
      The new code has a slightly higher complexity since we always scan the
      entire list of channels for finding the minimal table_count (instead
      of stopping after N chans), and because we check whether the CPU is in the
      DMA device locality mask. Overall we still have time complexity =
      number of chans x number of processors. This rebalance is rarely used,
      so this won't hurt.
      
      On the above SuperMicro machine, channels are still allocated the same.
      On the Dells, there is no locality issue anymore (MEMCPY channel X goes
      to processor X and to its hyperthread sibling).
      Signed-off-by: Brice Goglin <Brice.Goglin@inria.fr>
      Signed-off-by: Dan Williams <djbw@fb.com>
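      The selection rule described above — prefer the least-loaded channel local to the CPU's NUMA node, else fall back to the global minimum — can be sketched in a few lines. This is a user-space model with an illustrative data layout, not the kernel's min_chan() implementation:

      ```c
      #include <assert.h>
      #include <stdio.h>

      struct chan {
          int node;        /* NUMA node of the DMA device */
          int table_count; /* how many CPUs already map to this channel */
      };

      /* Prefer the minimal table_count among channels on cpu_node; if the
       * node has no channel, return the global minimum. First match wins
       * ties, mirroring "first channels are taken first". */
      static int min_chan(const struct chan *c, int n, int cpu_node)
      {
          int best = -1, best_local = -1;

          for (int i = 0; i < n; i++) {
              if (best < 0 || c[i].table_count < c[best].table_count)
                  best = i;
              if (c[i].node == cpu_node &&
                  (best_local < 0 ||
                   c[i].table_count < c[best_local].table_count))
                  best_local = i;
          }
          return best_local >= 0 ? best_local : best;
      }

      int main(void)
      {
          struct chan chans[] = {
              { .node = 0, .table_count = 2 },
              { .node = 1, .table_count = 1 },
              { .node = 1, .table_count = 3 },
          };

          assert(min_chan(chans, 3, 1) == 1); /* local and least loaded */
          assert(min_chan(chans, 3, 0) == 0); /* local beats global min */
          assert(min_chan(chans, 3, 2) == 1); /* no local: global min */
          puts("ok");
          return 0;
      }
      ```

      The second assertion is the interesting case: the local channel is chosen even though a remote one has a lower table_count, which is what avoids the cross-NUMA traffic on the Dell machines.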
  13. 19 Aug 2013 · 1 commit
  14. 13 Aug 2013 · 1 commit
    • dmaengine: add interface of dma_get_slave_channel · 7bb587f4
      Authored by Zhangfei Gao
      As suggested by Arnd, add a dma_get_slave_channel interface.
      A DMA host driver can then hand out the specific channel selected by the
      request line, rather than relying on a filter function.
      
      host example:
      static struct dma_chan *xx_of_dma_simple_xlate(struct of_phandle_args *dma_spec,
      		struct of_dma *ofdma)
      {
      	struct xx_dma_dev *d = ofdma->of_dma_data;
      	unsigned int request = dma_spec->args[0];
      
      	if (request > d->dma_requests)
      		return NULL;
      
      	return dma_get_slave_channel(&(d->chans[request].vc.chan));
      }
      
      probe:
      of_dma_controller_register((&op->dev)->of_node, xx_of_dma_simple_xlate, d);
      Signed-off-by: Zhangfei Gao <zhangfei.gao@linaro.org>
      Acked-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
  15. 26 Jul 2013 · 1 commit
  16. 04 Jul 2013 · 1 commit
  17. 16 Apr 2013 · 1 commit
  18. 15 Apr 2013 · 2 commits
  19. 28 Feb 2013 · 1 commit
  20. 08 Jan 2013 · 1 commit
  21. 07 Jan 2013 · 1 commit
    • dmaengine: add helper function to request a slave DMA channel · 9a6cecc8
      Authored by Jon Hunter
      Currently, slave DMA channels are requested by calling
      dma_request_channel(), which requires DMA clients to pass various filter
      parameters to obtain the appropriate channel.
      
      With device-tree being used by architectures such as arm and the addition of
      device-tree helper functions to extract the relevant DMA client information
      from device-tree, add a new function to request a slave DMA channel using
      device-tree. This function is currently a simple wrapper that calls the
      device-tree of_dma_request_slave_channel() function.
      
      Cc: Nicolas Ferre <nicolas.ferre@atmel.com>
      Cc: Benoit Cousson <b-cousson@ti.com>
      Cc: Stephen Warren <swarren@nvidia.com>
      Cc: Grant Likely <grant.likely@secretlab.ca>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Rob Herring <rob.herring@calxeda.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Vinod Koul <vinod.koul@intel.com>
      Cc: Dan Williams <djbw@fb.com>
      Acked-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Jon Hunter <jon-hunter@ti.com>
      Reviewed-by: Stephen Warren <swarren@wwwdotorg.org>
      Acked-by: Rob Herring <rob.herring@calxeda.com>
      Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
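      The commit describes the helper as a simple wrapper around the device-tree lookup. A user-space C sketch of that shape — struct definitions, of_lookup(), and request_slave_channel() are all illustrative stand-ins, not the kernel's types or of_dma_request_slave_channel():

      ```c
      #include <assert.h>
      #include <stddef.h>
      #include <stdio.h>

      /* Simplified stand-ins for the kernel structures involved. */
      struct dma_chan { const char *name; };
      struct device_node { int dummy; };
      struct device { struct device_node *of_node; };

      static struct dma_chan tx_chan = { "tx" };

      /* Stand-in for the DT lookup: the real code walks the device's
       * "dmas"/"dma-names" properties to resolve the named channel. */
      static struct dma_chan *of_lookup(struct device_node *np,
                                        const char *name)
      {
          (void)np;
          return (name && name[0] == 't') ? &tx_chan : NULL;
      }

      /* The wrapper: use DT information when the device has an of_node,
       * otherwise the client falls back to the filter-based API. */
      static struct dma_chan *request_slave_channel(struct device *dev,
                                                    const char *name)
      {
          if (dev->of_node)
              return of_lookup(dev->of_node, name);
          return NULL;
      }

      int main(void)
      {
          struct device_node np = { 0 };
          struct device with_dt = { &np }, without_dt = { NULL };

          assert(request_slave_channel(&with_dt, "tx") == &tx_chan);
          assert(request_slave_channel(&without_dt, "tx") == NULL);
          puts("ok");
          return 0;
      }
      ```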
  22. 06 Oct 2012 · 1 commit
  23. 20 Jul 2012 · 1 commit
  24. 06 Apr 2012 · 1 commit
  25. 06 Mar 2012 · 1 commit
  26. 18 Nov 2011 · 1 commit
    • DMAEngine: Define interleaved transfer request api · b14dab79
      Authored by Jassi Brar
      Define a new API that can be used for fancy data transfers such as
      interleaved-to-contiguous copy and vice versa.
      Traditional SG-list based transfers tend to be very inefficient in cases
      where the interleave and chunk are only a few bytes, which calls for a
      very condensed API to convey the pattern of the transfer.
      This API supports all 4 variants of scatter-gather and contiguous
      transfer.
      
      Of course, this API cannot help transfers that don't lend themselves to
      DMA by nature, i.e., scattered tiny reads/writes with no periodic
      pattern.
      
      Also, since we now support SLAVE channels that may provide
      device_prep_interleaved_dma instead of the device_prep_slave_sg
      callback, remove the BUG_ON check.
      Signed-off-by: Jassi Brar <jaswinder.singh@linaro.org>
      Acked-by: Barry Song <Baohua.Song@csr.com>
      [renamed dmaxfer_template to dma_interleaved_template
       did fixup after the enum dma_transfer_merge]
      Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
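      The condensed description works because a strided transfer is just a repeating pattern of frames, each made of (payload, gap) chunks, so a few numbers replace thousands of SG entries. A user-space C sketch of that arithmetic — the struct below is a simplified model inspired by dma_interleaved_template, not the kernel definition:

      ```c
      #include <assert.h>
      #include <stddef.h>
      #include <stdio.h>

      struct chunk {
          size_t size; /* payload bytes in this chunk */
          size_t icg;  /* inter-chunk gap, skipped on the strided side */
      };

      struct interleaved_tmpl {
          size_t numf;       /* number of repeated frames */
          size_t frame_size; /* chunks per frame */
          struct chunk sgl[4];
      };

      /* Bytes actually copied (the gaps carry no data). */
      static size_t copied_bytes(const struct interleaved_tmpl *t)
      {
          size_t per_frame = 0;
          for (size_t i = 0; i < t->frame_size; i++)
              per_frame += t->sgl[i].size;
          return per_frame * t->numf;
      }

      /* Bytes spanned on the side that keeps the gaps. */
      static size_t spanned_bytes(const struct interleaved_tmpl *t)
      {
          size_t per_frame = 0;
          for (size_t i = 0; i < t->frame_size; i++)
              per_frame += t->sgl[i].size + t->sgl[i].icg;
          return per_frame * t->numf;
      }

      int main(void)
      {
          /* Example: de-interleave 100 frames of 16 payload bytes each
           * followed by a 48-byte gap, packed contiguously at the other
           * end. One template describes what an SG list would need 100
           * entries for. */
          struct interleaved_tmpl t = {
              .numf = 100, .frame_size = 1,
              .sgl = { { .size = 16, .icg = 48 } },
          };
          assert(copied_bytes(&t) == 1600);
          assert(spanned_bytes(&t) == 6400);
          puts("ok");
          return 0;
      }
      ```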
  27. 04 Aug 2011 · 1 commit
  28. 24 Jun 2011 · 1 commit
  29. 22 Jun 2011 · 1 commit
    • net: remove mm.h inclusion from netdevice.h · b7f080cf
      Authored by Alexey Dobriyan
      Remove the linux/mm.h inclusion from netdevice.h -- it's unused (I've
      checked manually).
      
      To prevent mm.h inclusion via other channels, also extract the
      "enum dma_data_direction" definition into a separate header. This tiny
      piece is what glues netdevice.h to mm.h via
      "netdevice.h => dmaengine.h => dma-mapping.h => scatterlist.h => mm.h".
      Removal of mm.h from scatterlist.h was tried and found not feasible on
      most archs, so the link was cut off earlier.
      
      Hope people are OK with the tiny include file.
      
      Note that mm_types.h is still dragged in, but that is a separate story.
      Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  30. 08 Oct 2010 · 2 commits