1. 23 Jan 2016, 1 commit
  2. 16 Dec 2015, 1 commit
    • Revert "scatterlist: use sg_phys()" · 3e6110fd
      Authored by Dan Williams
      commit db0fa0cb "scatterlist: use sg_phys()" did replacements of
      the form:
      
          phys_addr_t phys = page_to_phys(sg_page(s));
          phys_addr_t phys = sg_phys(s) & PAGE_MASK;
      
      However, this breaks platforms where sizeof(phys_addr_t) >
      sizeof(unsigned long).  Revert for 4.3 and 4.4 to make room for a
      combined helper in 4.5.
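The breakage described above can be shown with a small userspace sketch (illustrative types only, not kernel code): on a 32-bit platform with LPAE, `PAGE_MASK` is derived from a 32-bit `unsigned long`, so applying it to a 64-bit `phys_addr_t` zero-extends the mask and silently discards everything above 4GiB.

```c
#include <stdint.h>

/* Userspace sketch of why the revert was needed: model a 32-bit platform
 * with LPAE, where phys_addr_t is 64-bit but unsigned long (and hence
 * PAGE_MASK) is only 32 bits wide. */
typedef uint64_t phys_addr_t;
typedef uint32_t kern_ulong_t;                       /* 32-bit "unsigned long" */

#define PAGE_SIZE 4096u
#define PAGE_MASK ((kern_ulong_t)~(PAGE_SIZE - 1))   /* 0xfffff000, 32 bits */

/* page_to_phys(sg_page(s)) keeps the high bits of the address ... */
phys_addr_t page_address_of(phys_addr_t phys)
{
    return phys & ~((phys_addr_t)PAGE_SIZE - 1);
}

/* ... but sg_phys(s) & PAGE_MASK zero-extends the 32-bit mask to 64 bits
 * and silently truncates any physical address above 4GiB. */
phys_addr_t masked_sg_phys(phys_addr_t phys)
{
    return phys & PAGE_MASK;
}
```

For an address such as 0x1_0000_2345 the two expressions disagree, which is exactly the bug on sizeof(phys_addr_t) > sizeof(unsigned long) platforms.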
      
      Cc: <stable@vger.kernel.org>
      Cc: Jens Axboe <axboe@fb.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Fixes: db0fa0cb ("scatterlist: use sg_phys()")
      Suggested-by: Joerg Roedel <joro@8bytes.org>
      Reported-by: Vitaly Lavrov <vel21ripn@gmail.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
  3. 07 Nov 2015, 1 commit
    • mm, page_alloc: distinguish between being unable to sleep, unwilling to sleep... · d0164adc
      Authored by Mel Gorman
      mm, page_alloc: distinguish between being unable to sleep, unwilling to sleep and avoiding waking kswapd
      
      __GFP_WAIT has been used to identify atomic context in callers that hold
      spinlocks or are in interrupts.  They are expected to be high priority and
      have access to one of two watermarks lower than "min", which can be referred
      to as the "atomic reserve".  __GFP_HIGH users get access to the first
      lower watermark and can be called the "high priority reserve".
      
      Over time, callers had a requirement to not block when fallback options
      were available.  Some have abused __GFP_WAIT, leading to a situation where
      an optimistic allocation with a fallback option can access atomic
      reserves.
      
      This patch uses __GFP_ATOMIC to identify callers that are truly atomic,
      cannot sleep and have no alternative.  High priority users continue to use
      __GFP_HIGH.  __GFP_DIRECT_RECLAIM identifies callers that can sleep and
      are willing to enter direct reclaim.  __GFP_KSWAPD_RECLAIM identifies
      callers that want to wake kswapd for background reclaim.  __GFP_WAIT is
      redefined as a caller that is willing to enter direct reclaim and wake
      kswapd for background reclaim.
      
      This patch then converts a number of call sites:
      
      o __GFP_ATOMIC is used by callers that are high priority and have memory
        pools for those requests. GFP_ATOMIC uses this flag.
      
      o Callers that have a limited mempool to guarantee forward progress clear
        __GFP_DIRECT_RECLAIM but keep __GFP_KSWAPD_RECLAIM. bio allocations fall
        into this category where kswapd will still be woken but atomic reserves
        are not used as there is a one-entry mempool to guarantee progress.
      
      o Callers that are checking if they are non-blocking should use the
        helper gfpflags_allow_blocking() where possible. This is because
        checking for __GFP_WAIT as was done historically now can trigger false
        positives. Some exceptions like dm-crypt.c exist where the code intent
        is clearer if __GFP_DIRECT_RECLAIM is used instead of the helper due to
        flag manipulations.
      
      o Callers that built their own GFP flags instead of starting with GFP_KERNEL
        and friends now also need to specify __GFP_KSWAPD_RECLAIM.
      
      The first key hazard to watch out for is callers that removed __GFP_WAIT
      and were depending on access to atomic reserves for inconspicuous reasons.
      In some cases it may be appropriate for them to use __GFP_HIGH.
      
      The second key hazard is callers that assembled their own combination of
      GFP flags instead of starting with something like GFP_KERNEL.  They may
      now wish to specify __GFP_KSWAPD_RECLAIM.  It's almost certainly harmless
      if it's missed in most cases as other activity will wake kswapd.
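The relationship between the flags described above can be sketched in userspace C; the bit positions below are illustrative stand-ins, not the kernel's actual values, but the compositions mirror the patch's scheme.

```c
#include <stdbool.h>

/* Illustrative bit positions (NOT the kernel's real values). */
#define __GFP_HIGH            (1u << 0)
#define __GFP_ATOMIC          (1u << 1)
#define __GFP_DIRECT_RECLAIM  (1u << 2)
#define __GFP_KSWAPD_RECLAIM  (1u << 3)

/* __GFP_WAIT is redefined as "may direct-reclaim and may wake kswapd"... */
#define __GFP_WAIT  (__GFP_DIRECT_RECLAIM | __GFP_KSWAPD_RECLAIM)
/* ...while GFP_ATOMIC is high priority, may dip into the atomic reserves,
 * wakes kswapd, but never enters direct reclaim. */
#define GFP_ATOMIC  (__GFP_HIGH | __GFP_ATOMIC | __GFP_KSWAPD_RECLAIM)
#define GFP_KERNEL  __GFP_WAIT

/* The helper callers should use instead of testing __GFP_WAIT directly. */
bool gfpflags_allow_blocking(unsigned int gfp_flags)
{
    return (gfp_flags & __GFP_DIRECT_RECLAIM) != 0;
}
```

With this layout, checking `__GFP_DIRECT_RECLAIM` (via the helper) gives the right answer for both GFP_KERNEL and GFP_ATOMIC, where the historical `__GFP_WAIT` test would now misfire.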
      Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Vitaly Wool <vitalywool@gmail.com>
      Cc: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  4. 03 Oct 2015, 2 commits
  5. 17 Sep 2015, 1 commit
    • ARM: 8437/1: dma-mapping: fix build warning with new DMA_ERROR_CODE definition · 90cde558
      Authored by Andre Przywara
      Commit 96231b26 ("ARM: 8419/1: dma-mapping: harmonize definition
      of DMA_ERROR_CODE") changed the definition of DMA_ERROR_CODE to use
      dma_addr_t, which makes the compiler barf on assigning this to an
      "int" variable on ARM with LPAE enabled:
      *************
      In file included from /src/linux/include/linux/dma-mapping.h:86:0,
                       from /src/linux/arch/arm/mm/dma-mapping.c:21:
      /src/linux/arch/arm/mm/dma-mapping.c: In function '__iommu_create_mapping':
      /src/linux/arch/arm/include/asm/dma-mapping.h:16:24: warning:
      overflow in implicit constant conversion [-Woverflow]
       #define DMA_ERROR_CODE (~(dma_addr_t)0x0)
                              ^
      /src/linux/arch/arm/mm/dma-mapping.c:1252:15: note: in expansion of
      macro 'DMA_ERROR_CODE'
        int i, ret = DMA_ERROR_CODE;
                     ^
      *************
      
      Remove the actually unneeded initialization of "ret" in
      __iommu_create_mapping() and move the variable declaration inside the
      for-loop to make the scope of this variable more clear.
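A userspace sketch of the warning and the fix (types and the loop body are stand-ins, not the kernel's actual code): with LPAE, `dma_addr_t` is 64-bit, so `~(dma_addr_t)0` cannot fit in an `int`.

```c
#include <stdint.h>

/* With LPAE enabled, dma_addr_t is wider than int. */
typedef uint64_t dma_addr_t;
#define DMA_ERROR_CODE (~(dma_addr_t)0x0)

/* Before the fix: "int i, ret = DMA_ERROR_CODE;" -- an implicit
 * constant conversion that overflows, hence the -Woverflow warning.
 * After the fix: ret is never initialized from DMA_ERROR_CODE (the
 * value was unused anyway), and the loop counter is scoped to the
 * loop that uses it. */
int iommu_create_mapping_sketch(int nr_pages)
{
    int ret = 0;
    for (int i = 0; i < nr_pages; i++) {
        /* ... map page i; on failure, set ret to a negative errno ... */
    }
    return ret;
}
```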
      Signed-off-by: Andre Przywara <andre.przywara@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  6. 11 Sep 2015, 1 commit
    • dma-mapping: consolidate dma_{alloc,free}_{attrs,coherent} · 6894258e
      Authored by Christoph Hellwig
      Since 2009 we have a nice asm-generic header implementing lots of DMA API
      functions for architectures using struct dma_map_ops, but unfortunately
      it's still missing a lot of APIs that all architectures still have to
      duplicate.
      
      This series consolidates the remaining functions, although we still need
      arch opt outs for two of them as a few architectures have very
      non-standard implementations.
      
      This patch (of 5):
      
      The coherent DMA allocator works the same over all architectures supporting
      dma_map operations.
      
      This patch consolidates them and converges the minor differences:
      
       - the debug_dma helpers are now called from all architectures, including
         those that were previously missing them
       - dma_alloc_from_coherent and dma_release_from_coherent are now always
         called from the generic alloc/free routines instead of the ops;
         dma-mapping-common.h always includes dma-coherent.h to get the definitions
         for them, or the stubs if the architecture doesn't support this feature
       - checks for ->alloc / ->free presence are removed.  There is only one
         instance of dma_map_ops without them (mic_dma_ops), and that one
         is x86-only anyway.
      
      Besides that, only x86 needs special treatment, to supply a default device
      if none is passed and to tweak the gfp_flags.  An optional arch hook is
      provided for that.
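The consolidated call ordering can be sketched in userspace C. All bodies below are stubs and the names merely mirror the kernel's; only the ordering (coherent pool first, then the arch ops, with the debug hook called unconditionally) reflects the patch.

```c
#include <stddef.h>

struct device;

struct dma_map_ops {
    void *(*alloc)(struct device *dev, size_t size, unsigned long *dma_handle);
};

static char fake_pool[4096];

static void *stub_alloc(struct device *dev, size_t size, unsigned long *dma_handle)
{
    *dma_handle = (unsigned long)fake_pool;
    return size <= sizeof(fake_pool) ? fake_pool : NULL;
}

static struct dma_map_ops stub_ops = { .alloc = stub_alloc };
static struct dma_map_ops *get_dma_ops(struct device *dev) { return &stub_ops; }

/* Stub: no per-device coherent region is configured in this sketch. */
static int dma_alloc_from_coherent(struct device *dev, size_t size, void **ret)
{
    return 0;
}

static void debug_dma_alloc_coherent(struct device *dev, size_t size, void *cpu) {}

void *dma_alloc_attrs(struct device *dev, size_t size, unsigned long *dma_handle)
{
    void *cpu_addr;

    /* The per-device coherent pool is always tried first now ... */
    if (dma_alloc_from_coherent(dev, size, &cpu_addr))
        return cpu_addr;

    /* ... then the arch's dma_map_ops, and the debug helper is called
     * from every architecture, not just the ones that remembered to. */
    cpu_addr = get_dma_ops(dev)->alloc(dev, size, dma_handle);
    debug_dma_alloc_coherent(dev, size, cpu_addr);
    return cpu_addr;
}
```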
      
      [linux@roeck-us.net: fix build]
      [jcmvbkbc@gmail.com: fix xtensa]
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Jonas Bonn <jonas@southpole.se>
      Cc: Chris Metcalf <cmetcalf@ezchip.com>
      Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Andy Shevchenko <andy.shevchenko@gmail.com>
      Signed-off-by: Guenter Roeck <linux@roeck-us.net>
      Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  7. 17 Aug 2015, 1 commit
    • scatterlist: use sg_phys() · db0fa0cb
      Authored by Dan Williams
      Coccinelle cleanup to replace open coded sg to physical address
      translations.  This is in preparation for introducing scatterlists that
      reference __pfn_t.
      
      // sg_phys.cocci: convert usage page_to_phys(sg_page(sg)) to sg_phys(sg)
      // usage: make coccicheck COCCI=sg_phys.cocci MODE=patch
      
      virtual patch
      
      @@
      struct scatterlist *sg;
      @@
      
      - page_to_phys(sg_page(sg)) + sg->offset
      + sg_phys(sg)
      
      @@
      struct scatterlist *sg;
      @@
      
      - page_to_phys(sg_page(sg))
      + sg_phys(sg) & PAGE_MASK
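For reference, sg_phys() is defined in include/linux/scatterlist.h as `page_to_phys(sg_page(sg)) + sg->offset`. The sketch below stubs out the page helpers (the struct layouts are stand-ins) to show why the semantic patch has to mask the offset back off in the second rule:

```c
#include <stdint.h>

typedef uint64_t dma_addr_t;
#define PAGE_SIZE 4096u

/* Minimal stand-ins for the kernel structures, enough to exercise
 * page_to_phys(sg_page(sg)) == sg_phys(sg) & ~(PAGE_SIZE - 1)
 * when the offset stays within one page. */
struct page { dma_addr_t phys; };
struct scatterlist { struct page *page; unsigned int offset; unsigned int length; };

static struct page *sg_page(struct scatterlist *sg) { return sg->page; }
static dma_addr_t page_to_phys(struct page *p) { return p->phys; }

/* This matches the kernel's definition in include/linux/scatterlist.h. */
dma_addr_t sg_phys(struct scatterlist *sg)
{
    return page_to_phys(sg_page(sg)) + sg->offset;
}

/* Demo: a page at 0x12345000 with a 0x40-byte offset into it. */
dma_addr_t demo_sg_phys(void)
{
    struct page pg = { 0x12345000ULL };
    struct scatterlist sg = { &pg, 0x40, 64 };
    return sg_phys(&sg);
}
```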
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
  8. 04 Aug 2015, 1 commit
  9. 02 Aug 2015, 1 commit
    • ARM: reduce visibility of dmac_* functions · 1234e3fd
      Authored by Russell King
      The dmac_* functions are private to the ARM DMA API implementation, and
      should not be used by drivers.  In order to discourage their use, remove
      their prototypes and macros from asm/*.h.
      
      We have to leave dmac_flush_range() behind as Exynos and MSM IOMMU code
      use these; once these sites are fixed, this can be moved also.
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  10. 17 Jul 2015, 1 commit
  11. 06 Jun 2015, 1 commit
    • ARM: 8387/1: arm/mm/dma-mapping.c: Add arm_coherent_dma_mmap · 55af8a91
      Authored by Mike Looijmans
      When dma-coherent transfers are enabled, the mmap call must
      not change the pg_prot flags in the vma struct.
      
      Split arm_dma_mmap into common and specific parts,
      and add an "arm_coherent_dma_mmap" implementation that does
      not alter the page protection flags.
      
      Tested on a topic-miami board (Zynq) using the ACP port
      to transfer data between FPGA and CPU using the Dyplo
      framework. Without this patch, byte-wise access to mmapped
      coherent DMA memory was about 20x slower because of the
      memory being marked as non-cacheable, and transfer speeds
      would not exceed 240MB/s.
      
      After this patch, the mapped memory is cacheable and the
      transfer speed is again 600MB/s (limited by the FPGA) when
      the data is in the L2 cache, while data integrity is being
      maintained.
      
      The patch has no effect on non-coherent DMA.
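The split described above can be sketched in userspace C; everything below is a stand-in (a one-field vma struct and symbolic protection values) except the two entry-point names, which mirror the patch.

```c
/* Stand-ins: a one-field vma and symbolic protection values. */
struct vma_sketch { unsigned long page_prot; };

#define PROT_CACHED    0UL
#define PROT_UNCACHED  1UL

/* The common part shared by both entry points. */
static int common_mmap(struct vma_sketch *vma)
{
    /* ... the actual remap work ... */
    return 0;
}

/* Non-coherent path: must make the user mapping uncacheable. */
int arm_dma_mmap(struct vma_sketch *vma)
{
    vma->page_prot = PROT_UNCACHED;
    return common_mmap(vma);
}

/* Coherent path: leaves vma->page_prot untouched, so the mapping stays
 * cacheable (hence the ~20x byte-access speedup described above). */
int arm_coherent_dma_mmap(struct vma_sketch *vma)
{
    return common_mmap(vma);
}

unsigned long prot_after_coherent_mmap(void)
{
    struct vma_sketch vma = { PROT_CACHED };
    arm_coherent_dma_mmap(&vma);
    return vma.page_prot;
}

unsigned long prot_after_noncoherent_mmap(void)
{
    struct vma_sketch vma = { PROT_CACHED };
    arm_dma_mmap(&vma);
    return vma.page_prot;
}
```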
      Signed-off-by: Mike Looijmans <mike.looijmans@topic.nl>
      Acked-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  12. 04 May 2015, 1 commit
  13. 02 Apr 2015, 1 commit
  14. 18 Mar 2015, 1 commit
  15. 13 Mar 2015, 1 commit
  16. 11 Mar 2015, 1 commit
    • ARM: dma-api: fix off-by-one error in __dma_supported() · 8bf1268f
      Authored by Russell King
      When validating the mask against the amount of memory we have available
      (so that we can trap 32-bit DMA addresses with >32 bits of memory), we had
      not taken into account that max_pfn is one more than the maximum PFN
      present in the system.
      
      There are several references in the code which bear this out:
      
      mm/page_owner.c:
      	for (; pfn < max_pfn; pfn++) {
      	}
      
      arch/x86/kernel/setup.c:
      	high_memory = (void *)__va(max_pfn * PAGE_SIZE - 1)
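The off-by-one can be sketched with illustrative numbers (4GiB of RAM, 4KiB pages; the helper names below are stand-ins, not the kernel's):

```c
#include <stdint.h>
#include <stdbool.h>

typedef uint64_t u64;

#define PAGE_SHIFT 12
/* max_pfn is an exclusive bound, exactly like the iteration bound in
 * "for (pfn = 0; pfn < max_pfn; pfn++)": PFNs 0 .. 0xfffff are valid. */
static const unsigned long max_pfn = 0x100000;

/* Buggy check: treats max_pfn itself as a valid PFN, so a 32-bit mask
 * is wrongly rejected on a machine with exactly 4GiB of RAM. */
bool dma_supported_buggy(u64 mask)
{
    return (mask >> PAGE_SHIFT) >= max_pfn;
}

/* Fixed check: the highest valid PFN is max_pfn - 1. */
bool dma_supported_fixed(u64 mask)
{
    return (mask >> PAGE_SHIFT) >= max_pfn - 1;
}
```

With mask 0xffffffff, `mask >> PAGE_SHIFT` is 0xfffff, which equals max_pfn - 1: the fixed check accepts it while the buggy one rejects it.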
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  17. 23 Feb 2015, 1 commit
  18. 20 Feb 2015, 1 commit
  19. 30 Jan 2015, 1 commit
  20. 29 Jan 2015, 1 commit
  21. 02 Dec 2014, 1 commit
  22. 30 Oct 2014, 1 commit
  23. 10 Oct 2014, 2 commits
  24. 07 Aug 2014, 1 commit
  25. 18 Jul 2014, 1 commit
  26. 22 May 2014, 2 commits
  27. 20 May 2014, 1 commit
  28. 07 May 2014, 1 commit
  29. 23 Apr 2014, 1 commit
  30. 28 Feb 2014, 2 commits
    • arm: dma-mapping: remove order parameter from arm_iommu_create_mapping() · 68efd7d2
      Authored by Marek Szyprowski
      The 'order' parameter of the IOMMU-aware dma-mapping implementation was
      introduced mainly as a hack to reduce the size of the bitmap used for
      tracking IO virtual address space.  Now that the bitmap can be resized
      dynamically, this hack is no longer needed and can be removed without any
      impact on client devices.  This makes the parameters of
      arm_iommu_create_mapping() much easier to understand: the 'size'
      parameter now means the maximum supported IO address space size.
      
      The code will allocate (resize) bitmap in chunks, ensuring that a single
      chunk is not larger than a single memory page to avoid unreliable
      allocations of size larger than PAGE_SIZE in atomic context.
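The chunking rule described above can be sketched as follows (the helper name is hypothetical; the arithmetic is the point): the IOVA space needs one bitmap bit per IO page, and the bitmap is split so that no single chunk allocation exceeds PAGE_SIZE, i.e. 8 * PAGE_SIZE bits per chunk.

```c
#include <stddef.h>

#define PAGE_SIZE  4096u
#define PAGE_SHIFT 12

/* Hypothetical helper: how many page-sized bitmap chunks are needed to
 * track an IOVA space of 'size' bytes, one bit per IO page. */
size_t iova_bitmap_chunks(size_t size)
{
    size_t bits = size >> PAGE_SHIFT;        /* one bit per IO page */
    size_t bits_per_chunk = PAGE_SIZE * 8;   /* one page of bitmap = 32768 bits */
    return (bits + bits_per_chunk - 1) / bits_per_chunk;  /* round up */
}
```

A 128MiB space fits in a single page-sized chunk (32768 pages, 32768 bits); a 1GiB space needs eight.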
      Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
    • arm: dma-mapping: Add support to extend DMA IOMMU mappings · 4d852ef8
      Authored by Andreas Herrmann
      Instead of using just one bitmap to keep track of IO virtual addresses
      (handed out for IOMMU use) introduce an array of bitmaps. This allows
      us to extend existing mappings when running out of iova space in the
      initial mapping etc.
      
      If there is not enough space in the mapping to service an IO virtual
      address allocation request, __alloc_iova() tries to extend the mapping
      -- by allocating another bitmap -- and makes another allocation
      attempt using the freshly allocated bitmap.
      
      This allows ARM IOMMU drivers to start with a decent initial size when
      a dma_iommu_mapping is created and still avoid running out of IO
      virtual addresses for the mapping.
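The retry-then-extend strategy can be sketched in userspace C. The real __alloc_iova does a bitmap search per extent; here a simple per-extent counter stands in for the bitmap, and all names and sizes are illustrative.

```c
#define MAX_EXTENTS      4   /* configured maximum number of bitmaps */
#define SLOTS_PER_EXTENT 2   /* stand-in for "free bits per bitmap" */

struct mapping_sketch {
    int used[MAX_EXTENTS];   /* slots handed out per extent */
    int nr_extents;          /* bitmaps allocated so far */
};

int alloc_iova_sketch(struct mapping_sketch *m)
{
    for (;;) {
        /* Try each existing bitmap in turn. */
        for (int i = 0; i < m->nr_extents; i++) {
            if (m->used[i] < SLOTS_PER_EXTENT) {
                m->used[i]++;
                return i * SLOTS_PER_EXTENT + (m->used[i] - 1);
            }
        }
        /* All full: extend by one more bitmap and retry, unless we are
         * already at the configured maximum. */
        if (m->nr_extents >= MAX_EXTENTS)
            return -1;   /* DMA_ERROR_CODE in the real code */
        m->nr_extents++;
    }
}

/* Demo: start with one extent; the third allocation must extend. */
int third_allocation_slot(void)
{
    struct mapping_sketch m = { {0}, 1 };
    alloc_iova_sketch(&m);
    alloc_iova_sketch(&m);
    return alloc_iova_sketch(&m);   /* lands in the freshly added extent */
}
```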
      Signed-off-by: Andreas Herrmann <andreas.herrmann@calxeda.com>
      [mszyprow: removed extensions parameter to arm_iommu_create_mapping()
       function, which will be modified in the next patch anyway, also some
       debug messages about extending bitmap]
      Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
  31. 19 Feb 2014, 1 commit
    • ARM: 7979/1: mm: Remove hugetlb warning from Coherent DMA allocator · 6ea41c80
      Authored by Steven Capper
      The Coherent DMA allocator allocates pages of high order and then splits
      them up into smaller pages.
      
      This splitting logic would run into problems if the allocator were
      given compound pages.  Thus the Coherent DMA allocator was originally
      incompatible with compound pages existing and, by extension, huge
      pages.  A compile-time #error was put in place whenever huge pages were
      enabled.
      
      Compatibility with compound pages has since been introduced by the
      following commit (which merely excludes GFP_COMP pages from being
      requested by the coherent DMA allocator):
        ea2e7057 ARM: 7172/1: dma: Drop GFP_COMP for DMA memory allocations
      
      When huge page support was introduced to ARM, the compile #error in
      dma-mapping.c was replaced by a #warning when it should have been
      removed instead.
      
      This patch removes the compile #warning in dma-mapping.c when huge
      pages are enabled.
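The compatibility mechanism from ea2e7057 referenced above is simply to strip __GFP_COMP before allocating, so the high-order pages the allocator later splits are never compound. A one-line sketch (the bit value and helper name below are illustrative, not the kernel's):

```c
/* Illustrative bit value, not the kernel's real __GFP_COMP. */
#define __GFP_COMP (1u << 14)

/* Hypothetical helper: drop __GFP_COMP from a caller's gfp mask so the
 * coherent allocator never receives compound pages. */
unsigned int dma_sanitize_gfp(unsigned int gfp)
{
    return gfp & ~__GFP_COMP;
}
```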
      Signed-off-by: Steve Capper <steve.capper@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  32. 11 Feb 2014, 1 commit
  33. 11 Dec 2013, 1 commit
    • ARM: fix executability of CMA mappings · 71b55663
      Authored by Russell King
      The CMA region was being marked executable:
      
      0xdc04e000-0xdc050000           8K     RW x      MEM/CACHED/WBRA
      0xdc060000-0xdc100000         640K     RW x      MEM/CACHED/WBRA
      0xdc4f5000-0xdc500000          44K     RW x      MEM/CACHED/WBRA
      0xdcce9000-0xe0000000       52316K     RW x      MEM/CACHED/WBRA
      
      This is mainly due to the badly worded MT_MEMORY_DMA_READY symbol, but
      there are also a few other places in dma-mapping which should be
      corrected to use the right constant.  Fix all these places:
      
      0xdc04e000-0xdc050000           8K     RW NX     MEM/CACHED/WBRA
      0xdc060000-0xdc100000         640K     RW NX     MEM/CACHED/WBRA
      0xdc280000-0xdc300000         512K     RW NX     MEM/CACHED/WBRA
      0xdc6fc000-0xe0000000       58384K     RW NX     MEM/CACHED/WBRA
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  34. 10 Dec 2013, 1 commit
  35. 30 Nov 2013, 1 commit
  36. 31 Oct 2013, 1 commit
    • ARM: DMA-API: better handling of DMA masks for coherent allocations · 4dcfa600
      Authored by Russell King
      We need to start treating DMA masks as something which is specific to
      the bus that the device resides on, otherwise we're going to hit all
      sorts of nasty issues with LPAE and 32-bit DMA controllers in >32-bit
      systems, where memory is offset from PFN 0.
      
      In order to start doing this, we convert the DMA mask to a PFN using
      the device specific dma_to_pfn() macro.  This is the reverse of the
      pfn_to_dma() macro which is used to get the DMA address for the device.
      
      This gives us a PFN mask, which we can then check against the PFN
      limit of the DMA zone.
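The mask-as-PFN check can be sketched with an illustrative bus (DMA address 0 mapping to physical 1GiB, so PFN offset 0x40000; names and numbers below are stand-ins, not the kernel's):

```c
#include <stdint.h>
#include <stdbool.h>

typedef uint64_t u64;

#define PAGE_SHIFT 12
/* Illustrative bus: DMA address 0 corresponds to physical PFN 0x40000. */
static const u64 pfn_offset = 0x40000;

/* Device-specific conversion: DMA address -> PFN.  Applying it to the
 * DMA *mask* yields the highest PFN the device can reach. */
u64 dma_to_pfn(u64 dma_addr)
{
    return (dma_addr >> PAGE_SHIFT) + pfn_offset;
}

/* Compare the device's reachable-PFN limit against the DMA zone's PFN
 * limit, instead of comparing raw (offset) addresses. */
bool mask_covers(u64 mask, u64 zone_limit_pfn)
{
    return dma_to_pfn(mask) >= zone_limit_pfn - 1;
}
```

With a 32-bit mask, `dma_to_pfn(0xffffffff)` is 0x13ffff, which covers a zone ending at PFN 0x140000 even though the raw mask is far smaller than the physical addresses involved; comparing raw addresses would wrongly reject it.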
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>