1. 01 July 2015, 1 commit
    • genalloc: rename of_get_named_gen_pool() to of_gen_pool_get() · abdd4a70
      Authored by Vladimir Zapolskiy
      To be consistent with other kernel interface namings, rename
      of_get_named_gen_pool() to of_gen_pool_get(). In the original function
      name, the "_named" suffix refers to a device tree property that contains
      a phandle to a device; the corresponding device driver is assumed to
      register a gen_pool object.
      
      Due to this weak relation, and to avoid any confusion (e.g. in a future
      scenario where gen_pool objects themselves are named), the suffix is removed.
      
      [sfr@canb.auug.org.au: crypto/marvell/cesa - fix up for of_get_named_gen_pool() rename]
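      A minimal sketch of a caller after the rename; the "sram" property name
      and the wrapper function are illustrative assumptions, not taken from the
      patch:

      #include <linux/genalloc.h>
      #include <linux/of.h>

      static struct gen_pool *example_get_sram_pool(struct device_node *np)
      {
      	/* was: of_get_named_gen_pool(np, "sram", 0); */
      	return of_gen_pool_get(np, "sram", 0);
      }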
      Signed-off-by: Vladimir Zapolskiy <vladimir_zapolskiy@mentor.com>
      Cc: Nicolas Ferre <nicolas.ferre@atmel.com>
      Cc: Philipp Zabel <p.zabel@pengutronix.de>
      Cc: Shawn Guo <shawn.guo@linaro.org>
      Cc: Sascha Hauer <kernel@pengutronix.de>
      Cc: Alexandre Belloni <alexandre.belloni@free-electrons.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Mauro Carvalho Chehab <mchehab@osg.samsung.com>
      Cc: Vinod Koul <vinod.koul@intel.com>
      Cc: Takashi Iwai <tiwai@suse.de>
      Cc: Jaroslav Kysela <perex@perex.cz>
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Cc: Herbert Xu <herbert@gondor.apana.org.au>
      Cc: Boris BREZILLON <boris.brezillon@free-electrons.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  2. 25 June 2015, 3 commits
  3. 18 June 2015, 1 commit
    • dmaengine: virt-dma: don't always free descriptor upon completion · b9855f03
      Authored by Robert Jarzmik
      This patch addresses the case of a transfer that is submitted multiple
      times and where the cost of creating the descriptor chain is not
      negligible.
      
      This happens with big video buffers (several megabytes, i.e. several
      thousand linked descriptors in one scatter-gather list). In these
      cases, a video driver would want to do:
       - tx = dmaengine_prep_slave_sg()
       - dmaengine_submit(tx)
       - dma_async_issue_pending()
       - wait for video completion
       - read video data (or not, skipping a frame is also possible)
       - dmaengine_submit(tx)
         => here, recalculating the descriptor chain would take time
         => repeated dma coherent allocations might also create holes in
            the dma pool, which is counter-productive
       - dma_async_issue_pending()
       - etc ...
      
      In order to cope with this case, virt-dma is modified so that descriptors
      are not freed upon completion when the DMA_CTRL_ACK flag is set in the
      transfer.
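      A minimal sketch of the completion-time decision this describes, assuming
      virt-dma's struct virt_dma_desc; example_desc_free() is a hypothetical
      driver hook and this is not the actual virt-dma diff:

      #include "virt-dma.h"	/* drivers/dma/virt-dma.h: struct virt_dma_desc */

      static void example_desc_free(struct virt_dma_desc *vd);	/* driver-specific free */

      static void example_on_completion(struct virt_dma_desc *vd)
      {
      	/* DMA_CTRL_ACK set on the transfer: keep the descriptor so the
      	 * client can resubmit it without rebuilding the chain. */
      	if (vd->tx.flags & DMA_CTRL_ACK)
      		return;

      	example_desc_free(vd);
      }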
      Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
  4. 12 June 2015, 4 commits
  5. 11 June 2015, 5 commits
  6. 08 June 2015, 4 commits
    • dmaengine: pl330: fix wording in mcbufsz message · e5489d5e
      Authored by Michal Suchanek
      The kernel is not trying to increase mcbufsz itself; it only suggests that
      you try doing so. Also print the calculated required size of mcbufsz.
      Signed-off-by: Michal Suchanek <hramrach@gmail.com>
      Reviewed-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
    • dmaengine: at_xdmac: rework slave configuration part · 765c37d8
      Authored by Ludovic Desroches
      Rework the slave configuration part in order to better report wrong
      configuration errors.
      Only the maxburst and address width values are checked when doing the
      slave configuration. The validity of the channel configuration is checked
      at prepare time.
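      A minimal sketch of the kind of config-time check described above; the
      helper name and the burst/width limits are assumptions, not the actual
      at_xdmac code:

      #include <linux/dmaengine.h>
      #include <linux/errno.h>

      /* Validate only maxburst and address width when the slave configuration
       * is set; everything else is checked at prepare time. */
      static int example_check_slave_config(struct dma_slave_config *sconfig)
      {
      	if (sconfig->src_maxburst > 16 || sconfig->dst_maxburst > 16)
      		return -EINVAL;

      	if (sconfig->src_addr_width > DMA_SLAVE_BUSWIDTH_4_BYTES ||
      	    sconfig->dst_addr_width > DMA_SLAVE_BUSWIDTH_4_BYTES)
      		return -EINVAL;

      	return 0;
      }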
      Signed-off-by: Ludovic Desroches <ludovic.desroches@atmel.com>
      Cc: stable@vger.kernel.org # 4.0 and later
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
    • dmaengine: at_xdmac: lock fixes · 4c374fc7
      Authored by Ludovic Desroches
      Using the _bh variant of spin locks causes this kind of warning:
      Starting logging: ------------[ cut here ]------------
      WARNING: CPU: 0 PID: 3 at /ssd_drive/linux/kernel/softirq.c:151
      __local_bh_enable_ip+0xe8/0xf4()
      Modules linked in:
      CPU: 0 PID: 3 Comm: ksoftirqd/0 Not tainted 4.1.0-rc2+ #94
      Hardware name: Atmel SAMA5
      [<c0013c04>] (unwind_backtrace) from [<c00118a4>] (show_stack+0x10/0x14)
      [<c00118a4>] (show_stack) from [<c001bbcc>]
      (warn_slowpath_common+0x80/0xac)
      [<c001bbcc>] (warn_slowpath_common) from [<c001bc14>]
      (warn_slowpath_null+0x1c/0x24)
      [<c001bc14>] (warn_slowpath_null) from [<c001e28c>]
      (__local_bh_enable_ip+0xe8/0xf4)
      [<c001e28c>] (__local_bh_enable_ip) from [<c01fdbd0>]
      (at_xdmac_device_terminate_all+0xf4/0x100)
      [<c01fdbd0>] (at_xdmac_device_terminate_all) from [<c02221a4>]
      (atmel_complete_tx_dma+0x34/0xf4)
      [<c02221a4>] (atmel_complete_tx_dma) from [<c01fe4ac>]
      (at_xdmac_tasklet+0x14c/0x1ac)
      [<c01fe4ac>] (at_xdmac_tasklet) from [<c001de58>]
      (tasklet_action+0x68/0xb4)
      [<c001de58>] (tasklet_action) from [<c001dfdc>]
      (__do_softirq+0xfc/0x238)
      [<c001dfdc>] (__do_softirq) from [<c001e140>] (run_ksoftirqd+0x28/0x34)
      [<c001e140>] (run_ksoftirqd) from [<c0033a3c>]
      (smpboot_thread_fn+0x138/0x18c)
      [<c0033a3c>] (smpboot_thread_fn) from [<c0030e7c>] (kthread+0xdc/0xf0)
      [<c0030e7c>] (kthread) from [<c000f480>] (ret_from_fork+0x14/0x34)
      ---[ end trace b57b14a99c1d8812 ]---
      
      It comes from the fact that devices can call into the DMA controller code
      with irqs disabled. The _bh variant is not intended to be used in this
      case since it can enable irqs. Switch to the irqsave/irqrestore variants
      to avoid this situation.
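      A minimal sketch of the locking change, with a generic function rather
      than the actual at_xdmac diff:

      #include <linux/spinlock.h>

      static void example_terminate_all(spinlock_t *lock)
      {
      	unsigned long flags;

      	/* before: spin_lock_bh(lock); ... spin_unlock_bh(lock); */
      	spin_lock_irqsave(lock, flags);
      	/* ... tear down the channel ... */
      	spin_unlock_irqrestore(lock, flags);
      }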
      Signed-off-by: Ludovic Desroches <ludovic.desroches@atmel.com>
      Cc: stable@vger.kernel.org # 4.0 and later
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
    • dmaengine: sirf: add CSRatlas7 SoC support · 0a45dcab
      Authored by Hao Liu
      Add support for the new CSR atlas7 SoC. atlas7 has both V1 and V2 DMA IP.
      atlas7 DMAv1 is basically moved from marco, which has never been
      delivered to customers and is renamed in this patch.
      atlas7 DMAv2 supports chained DMA via a chain table; this patch also adds
      chain DMA support for atlas7.
      
      atlas7 DMAv1 and DMAv2 co-exist in the same chip. There are some HW
      configuration differences (register offsets etc.) with the old prima2
      chips, so we use the compatible string to differentiate old prima2 from
      new atlas7, which results in different HW settings for them.
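      A minimal sketch of compatible-string based differentiation; the exact
      compatible strings and the per-variant data are assumptions, not
      necessarily what the sirf driver uses:

      #include <linux/mod_devicetable.h>

      struct example_dma_variant {
      	bool has_chain_table;		/* atlas7 DMAv2 chained DMA */
      };

      static const struct example_dma_variant prima2_dma = { .has_chain_table = false };
      static const struct example_dma_variant atlas7_dma = { .has_chain_table = true };

      static const struct of_device_id example_dma_match[] = {
      	{ .compatible = "sirf,prima2-dmac", .data = &prima2_dma },
      	{ .compatible = "sirf,atlas7-dmac", .data = &atlas7_dma },
      	{ /* sentinel */ },
      };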
      Signed-off-by: Hao Liu <Hao.Liu@csr.com>
      Signed-off-by: Yanchang Li <Yanchang.Li@csr.com>
      Signed-off-by: Barry Song <Baohua.Song@csr.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
  7. 03 June 2015, 1 commit
    • x86/mm: Decouple <linux/vmalloc.h> from <asm/io.h> · d6472302
      Authored by Stephen Rothwell
      Nothing in <asm/io.h> uses anything from <linux/vmalloc.h>, so
      remove it from there and fix up the resulting build problems
      triggered on x86 {64|32}-bit {def|allmod|allno}configs.
      
      The breakage was triggered in places where x86 builds relied
      on vmalloc() facilities but did not include <linux/vmalloc.h>
      explicitly and relied on the implicit inclusion via <asm/io.h>.
      
      Also add:
      
        - <linux/init.h> to <linux/io.h>
        - <asm/pgtable_types.h> to <asm/io.h>
      
      ... which were two other implicit header file dependencies.
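      A minimal sketch of the typical fallout fix, for a hypothetical file that
      uses vmalloc() and previously picked up its declaration through
      <asm/io.h>:

      #include <linux/vmalloc.h>	/* now required explicitly */
      #include <linux/io.h>

      static void *example_alloc(unsigned long size)
      {
      	return vmalloc(size);
      }

      static void example_free(void *buf)
      {
      	vfree(buf);
      }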
      Suggested-by: David Miller <davem@davemloft.net>
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      [ Tidied up the changelog. ]
      Acked-by: David Miller <davem@davemloft.net>
      Acked-by: Takashi Iwai <tiwai@suse.de>
      Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
      Acked-by: Vinod Koul <vinod.koul@intel.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Anton Vorontsov <anton@enomsg.org>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Colin Cross <ccross@android.com>
      Cc: David Vrabel <david.vrabel@citrix.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Haiyang Zhang <haiyangz@microsoft.com>
      Cc: James E.J. Bottomley <JBottomley@odin.com>
      Cc: Jaroslav Kysela <perex@perex.cz>
      Cc: K. Y. Srinivasan <kys@microsoft.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Kristen Carlson Accardi <kristen@linux.intel.com>
      Cc: Len Brown <lenb@kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
      Cc: Suma Ramars <sramars@cisco.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  8. 02 June 2015, 3 commits
  9. 29 May 2015, 1 commit
    • dmaengine: pxa_dma: add support for legacy transition · c91134d9
      Authored by Robert Jarzmik
      In order to achieve a smooth transition of pxa drivers from the old legacy
      dma handling to the new dmaengine, introduce a function to "hide" dma
      physical channels from dmaengine.
      
      This is a temporary situation where pxa dma will be handled in two places:
       - arch/arm/plat-pxa/dma.c
       - drivers/dma/pxa_dma.c
      
      The resources, i.e. dma channels, will be controlled by pxa_dma. The
      legacy code will request or release a channel with
      pxad_toggle_reserved_channel().
      
      This is not very pretty, but it ensures both legacy and dmaengine
      consumers can live in the same kernel until the conversion is done.
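      A minimal sketch of how the legacy side might use this hand-off; the int
      channel-number signature of pxad_toggle_reserved_channel() is assumed from
      the commit text, not verified against the header:

      /* Toggle semantics assumed: one call reserves the physical channel away
       * from dmaengine, a second call hands it back to pxa_dma. */
      int pxad_toggle_reserved_channel(int legacy_channel);

      static int legacy_request_phy_channel(int chan)
      {
      	return pxad_toggle_reserved_channel(chan);
      }

      static void legacy_release_phy_channel(int chan)
      {
      	pxad_toggle_reserved_channel(chan);
      }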
      Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
      Acked-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
  10. 28 May 2015, 1 commit
    • kernel/params: constify struct kernel_param_ops uses · 9c27847d
      Authored by Luis R. Rodriguez
      Most code already uses const for struct kernel_param_ops; sweep the
      kernel for the last offending stragglers. Other than
      include/linux/moduleparam.h and kernel/params.c, all other changes
      were generated with the following Coccinelle SmPL patch. Merge
      conflicts between trees can be handled with Coccinelle.
      
      In the future git could get Coccinelle merge support to deal with
      patch --> fail --> grammar --> Coccinelle --> new patch conflicts
      automatically for us on patches where the grammar is available and
      the patch is of high confidence. Consider this a feature request.
      
      Test compiled on x86_64 against:
      
      	* allnoconfig
      	* allmodconfig
      	* allyesconfig
      
      @ const_found @
      identifier ops;
      @@
      
      const struct kernel_param_ops ops = {
      };
      
      @ const_not_found depends on !const_found @
      identifier ops;
      @@
      
      -struct kernel_param_ops ops = {
      +const struct kernel_param_ops ops = {
      };
      
      Generated-by: Coccinelle SmPL
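      A minimal sketch of what the converted code looks like; the parameter and
      ops names are illustrative, not taken from the patch:

      #include <linux/moduleparam.h>

      static int example_value;

      static const struct kernel_param_ops example_ops = {
      	.set = param_set_int,
      	.get = param_get_int,
      };

      module_param_cb(example_value, &example_ops, &example_value, 0644);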
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Junio C Hamano <gitster@pobox.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: cocci@systeme.lip6.fr
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Luis R. Rodriguez <mcgrof@suse.com>
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
  11. 26 May 2015, 5 commits
    • dmaengine: pxa_dma: add debug information · c01d1b51
      Authored by Robert Jarzmik
      Reuse the debugging features which were available in the pxa architecture.
      This is a copy of the code from arch/arm/plat-pxa/dma, which is doomed
      to disappear once the conversion to dmaengine is completed.
      
      This is a transfer of the commit "[ARM] pxa/dma: add debugfs
      entries" (d294948c).
      Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
    • dmaengine: pxa: add pxa dmaengine driver · a57e16cf
      Authored by Robert Jarzmik
      This is a new driver for pxa SoCs, which is also compatible with the former
      mmp_pdma.
      
      The rationale behind a new driver (as opposed to incremental patching) was:
      
       - the new driver relies on virt-dma, which obsoletes all the internal
         structures of mmp_pdma (sw_desc, hw_desc, ...) and, as a consequence,
         all the functions
      
       - mmp_pdma allocates dma coherent descriptors containing not only hardware
         descriptors but also linked list information
         The new driver only puts the dma hardware descriptors (i.e. 4 u32) into the
         dma pool allocated memory. This completely changes the way descriptors are
         handled
      
       - the architecture behind the interrupt/tasklet management was rewritten to
         conform better to virt-dma
      
       - the buffer alignment is handled differently
         The former driver assumed that the DMA channel stopped between each
         descriptor. The new one chains descriptors to keep the channel running. This
         is a necessary guarantee for real-time high bandwidth usecases such as video
         capture on "old" architectures such as pxa.
      
       - hot chaining / cold chaining / no chaining
         Whenever possible, submitting a descriptor "hot chains" it to a running
         channel. There is still no guarantee that the descriptor will be issued, as
         the channel might be stopped just before the descriptor is submitted. Yet
         this allows several video buffers to be submitted, and a buffer to be
         resubmitted while another is being handled.
         As before, dma_async_issue_pending() is the only guarantee that all the
         buffers are issued.
         When an alignment issue is detected (i.e. one address in a descriptor is not
         a multiple of 8), if the already running channel is in "aligned mode", the
         channel is stopped and restarted in "misaligned mode" to finish the issued
         list.
      
       - descriptor reuse (see the sketch after this description)
         A submitted, issued and completed descriptor can be reused, i.e. resubmitted,
         if it was prepared with the proper flag (DMA_PREP_ACK). In this case, only a
         channel resources release will free that descriptor.
         This allows a rolling ring of buffers to be reused, where several
         thousand hardware descriptors are in use (video buffers for example).
      
      Additionally, a set of more casual features is introduced:
       - debugging traces
       - a lockless way to know whether a descriptor is terminated or not
      
      The driver was tested on zylonite board (pxa3xx) and mioa701 (pxa27x),
      with dmatest, pxa_camera and pxamci.
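      A minimal sketch of the reusable-descriptor flow, from a hypothetical
      video capture client; wait_for_frame() and prep_flags are assumptions
      (the changelog calls the relevant prepare flag DMA_PREP_ACK):

      #include <linux/dmaengine.h>

      static void example_capture_loop(struct dma_chan *chan,
      				 struct scatterlist *sgl, unsigned int sg_len,
      				 unsigned long prep_flags,
      				 bool (*wait_for_frame)(void))
      {
      	struct dma_async_tx_descriptor *tx;

      	/* Build the (possibly huge) descriptor chain only once. */
      	tx = dmaengine_prep_slave_sg(chan, sgl, sg_len, DMA_DEV_TO_MEM,
      				     prep_flags);
      	if (!tx)
      		return;

      	do {
      		dmaengine_submit(tx);		/* hot-chained if the channel is running */
      		dma_async_issue_pending(chan);	/* the only guarantee of issuance */
      	} while (wait_for_frame());		/* completed descriptor is reused as-is */
      }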
      Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
    • dmaengine: rcar-dmac: Use DECLARE_BITMAP · 08acf38e
      Authored by Joe Perches
      Use the generic mechanism to declare a bitmap instead of unsigned long.
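      A minimal illustration of the change, with made-up names rather than the
      rcar-dmac fields:

      #include <linux/bitmap.h>

      #define EXAMPLE_NR_CHANNELS 32

      struct example_dmac {
      	/* before: unsigned long channels_in_use; */
      	DECLARE_BITMAP(channels_in_use, EXAMPLE_NR_CHANNELS);
      };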
      Signed-off-by: Joe Perches <joe@perches.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
    • dmaengine: pl330: Initialize pl330 for pl330_prep_dma_memcpy after NULL check of pch · f5636854
      Authored by Maninder Singh
      Currently the pch pointer is dereferenced before the NULL check,
      which leads to the warning below:
      warn: variable dereferenced before check 'pch'
      
      So initialize struct pl330_dmac *pl330 after the NULL check of
      dma_pl330_chan *pch.
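      A minimal sketch of the pattern being fixed, with illustrative structures
      instead of the real pl330 ones:

      #include <linux/stddef.h>

      struct example_dmac { int id; };
      struct example_chan { struct example_dmac *dmac; };

      static struct example_dmac *example_get_dmac(struct example_chan *pch)
      {
      	struct example_dmac *dmac;

      	if (!pch)		/* check first... */
      		return NULL;

      	dmac = pch->dmac;	/* ...then dereference */
      	return dmac;
      }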
      Signed-off-by: Maninder Singh <maninder1.s@samsung.com>
      Reviewed-by: Vaneet Narang <v.narang@samsung.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
    • dmaengine: shdma: r8a73a4: Make dma_ts_shift[] static · 8f64b276
      Authored by Geert Uytterhoeven
      dma_ts_shift[] isn't used outside this source file. All other users use
      the definition from arch/arm/mach-shmobile/dma-register.h.
      Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
      Reviewed-by: Simon Horman <horms+renesas@verge.net.au>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
  12. 22 May 2015, 2 commits
  13. 18 May 2015, 4 commits
  14. 14 May 2015, 1 commit
  15. 09 May 2015, 4 commits