1. 22 Sep 2022, 1 commit
    • arm64: dma: Drop cache invalidation from arch_dma_prep_coherent() · c44094ee
      Committed by Will Deacon
      arch_dma_prep_coherent() is called when preparing a non-cacheable region
      for a consistent DMA buffer allocation. Since the buffer pages may
      previously have been written via a cacheable mapping and consequently
      allocated as dirty cachelines, the purpose of this function is to remove
      these dirty lines from the cache, writing them back so that the
      non-coherent device is able to see them.
      
      On arm64, this operation can be achieved with a clean to the point of
      coherency; a subsequent invalidation is not required and serves little
      purpose in the presence of a cacheable alias (e.g. the linear map),
      since clean lines can be speculatively fetched back into the cache after
      the invalidation operation has completed.
      
      Relax the cache maintenance in arch_dma_prep_coherent() so that only a
      clean, and not a clean-and-invalidate, operation is performed.
      
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Robin Murphy <robin.murphy@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Ard Biesheuvel <ardb@kernel.org>
      Signed-off-by: Will Deacon <will@kernel.org>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Link: https://lore.kernel.org/r/20220823122111.17439-1-will@kernel.org
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
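      To make the change concrete, here is a minimal sketch of the relaxed
      maintenance described above. It assumes the arm64 cache helpers
      dcache_clean_poc()/dcache_clean_inval_poc() from <asm/cacheflush.h>;
      treat it as an illustration of the idea rather than the verbatim patch.

          #include <linux/dma-map-ops.h>
          #include <linux/mm.h>
          #include <asm/cacheflush.h>

          void arch_dma_prep_coherent(struct page *page, size_t size)
          {
                  unsigned long start = (unsigned long)page_address(page);

                  /*
                   * A clean to the Point of Coherency pushes any dirty lines
                   * back to memory. Invalidating as well buys nothing here,
                   * because the cacheable linear-map alias can speculatively
                   * refetch clean lines afterwards anyway.
                   */
                  /* Before: dcache_clean_inval_poc(start, start + size); */
                  dcache_clean_poc(start, start + size);
          }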
  2. 05 Jul 2022, 1 commit
  3. 06 Jun 2022, 1 commit
  4. 25 Jun 2021, 1 commit
  5. 23 Apr 2021, 1 commit
  6. 06 Oct 2020, 2 commits
  7. 21 Nov 2019, 1 commit
  8. 11 Sep 2019, 2 commits
  9. 29 Aug 2019, 2 commits
    • dma-mapping: make dma_atomic_pool_init self-contained · 8e3a68fb
      Committed by Christoph Hellwig
      The memory allocated for the atomic pool needs to have the same
      mapping attributes that we use for remapping, so use
      pgprot_dmacoherent instead of open coding it.  Also deduce a
      suitable zone to allocate the memory from based on the presence
      of the DMA zones.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
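      As a rough illustration of the zone selection mentioned above, the
      sketch below picks the allocation zone from whichever DMA zones the
      kernel was built with; atomic_pool_gfp() is a hypothetical name, not
      the kernel's actual helper, and the trailing comment only points at
      the remapping idea.

          #include <linux/gfp.h>

          /* Pick the most restrictive zone that exists in this build. */
          static gfp_t atomic_pool_gfp(void)
          {
                  if (IS_ENABLED(CONFIG_ZONE_DMA))
                          return GFP_DMA;
                  if (IS_ENABLED(CONFIG_ZONE_DMA32))
                          return GFP_DMA32;
                  return GFP_KERNEL;      /* no restricted DMA zones configured */
          }

          /*
           * The pool pages are then remapped with the same attributes the
           * dma_alloc_* remap path uses, i.e. pgprot_dmacoherent(PAGE_KERNEL),
           * instead of open coding an uncached pgprot.
           */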
    • dma-mapping: remove arch_dma_mmap_pgprot · 419e2f18
      Committed by Christoph Hellwig
      arch_dma_mmap_pgprot is used for two things:
      
       1) to override the "normal" uncached page attributes for mapping
          memory coherent to devices that can't snoop the CPU caches
       2) to provide the special DMA_ATTR_WRITE_COMBINE semantics on older
          arm systems and some mips platforms
      
      Replace the first use with the pgprot_dmacoherent macro, which arm
      already provides and which is much simpler to use, and lift the
      DMA_ATTR_WRITE_COMBINE handling into common code behind an explicit
      arch opt-in.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>	# m68k
      Acked-by: Paul Burton <paul.burton@mips.com>		# mips
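      For orientation, here is a hedged sketch of the two mechanisms the
      commit relies on: the generic pgprot_dmacoherent() fallback for
      architectures that do not define their own, and a common-code
      write-combine path gated on an arch opt-in. dma_mmap_prot() is a
      hypothetical name used only for this sketch.

          #include <linux/dma-mapping.h>  /* DMA_ATTR_WRITE_COMBINE */
          #include <linux/pgtable.h>

          /* Generic fallback; arm and arm64 provide their own definitions. */
          #ifndef pgprot_dmacoherent
          #define pgprot_dmacoherent(prot)        pgprot_noncached(prot)
          #endif

          static pgprot_t dma_mmap_prot(pgprot_t prot, unsigned long attrs)
          {
                  /* Write-combine only where the architecture opted in. */
                  if (IS_ENABLED(CONFIG_ARCH_HAS_DMA_WRITE_COMBINE) &&
                      (attrs & DMA_ATTR_WRITE_COMBINE))
                          return pgprot_writecombine(prot);

                  /* Otherwise fall back to the uncached DMA attributes. */
                  return pgprot_dmacoherent(prot);
          }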
  10. 11 Aug 2019, 1 commit
    • dma-mapping: fix page attributes for dma_mmap_* · 33dcb37c
      Committed by Christoph Hellwig
      All the way back to the introduction of dma_common_mmap we have
      defaulted to marking the pages as uncached.  But this is wrong for DMA
      coherent devices.  Later on, DMA_ATTR_WRITE_COMBINE also got incorrect
      treatment, as that flag is only treated specially on the alloc side for
      non-coherent devices.
      
      Introduce a new dma_pgprot helper that deals with the check for coherent
      devices so that only the remapping cases ever reach arch_dma_mmap_pgprot
      and we thus ensure no aliasing of page attributes happens, which makes
      the powerpc version of arch_dma_mmap_pgprot obsolete and simplifies the
      remaining ones.
      
      Note that this means arch_dma_mmap_pgprot is a bit misnamed now, but
      we'll phase it out soon.
      
      Fixes: 64ccc9c0 ("common: dma-mapping: add support for generic dma_mmap_* calls")
      Reported-by: Shawn Anastasio <shawn@anastas.io>
      Reported-by: Gavin Li <git@thegavinli.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com> # arm64
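      A simplified sketch of the helper's core idea follows: coherent devices
      keep their normal cacheable attributes, and only the non-coherent
      (remapping) case falls through to the uncached path. The config gates
      and the arch hook that still existed at the time are omitted, the
      function name is a sketch, and header locations have shifted between
      kernel versions.

          #include <linux/device.h>
          #include <linux/dma-map-ops.h>  /* dev_is_dma_coherent() */
          #include <linux/pgtable.h>

          static pgprot_t dma_pgprot_sketch(struct device *dev, pgprot_t prot,
                                            unsigned long attrs)
          {
                  /* Coherent devices: keep the cacheable attributes as-is. */
                  if (dev_is_dma_coherent(dev))
                          return prot;

                  /* Non-coherent devices: only now may the user mapping be
                   * made uncached (or handed to an arch-specific hook). */
                  return pgprot_dmacoherent(prot);
          }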
  11. 19 Jun 2019, 1 commit
  12. 17 Jun 2019, 1 commit
  13. 27 May 2019, 4 commits
  14. 07 May 2019, 1 commit
  15. 24 Jan 2019, 1 commit
  16. 14 Dec 2018, 3 commits
  17. 11 Dec 2018, 1 commit
  18. 06 Dec 2018, 2 commits
  19. 02 Dec 2018, 1 commit
  20. 03 Nov 2018, 1 commit
  21. 31 Oct 2018, 1 commit
  22. 19 Oct 2018, 3 commits
  23. 26 Sep 2018, 1 commit
  24. 25 Sep 2018, 1 commit
    • arm64/dma-mapping: Mildly optimise non-coherent IOMMU ops · 7adb562c
      Committed by Robin Murphy
      Whilst the symmetry of deferring to the existing sync callback in
      __iommu_map_page() is nice, taking a round-trip through
      iommu_iova_to_phys() is a pretty heavyweight way to get an address we
      can trivially compute from the page we already have. Tweaking it to just
      perform the cache maintenance directly when appropriate doesn't really
      make the code any more complicated, and the runtime efficiency gain can
      only be a benefit.
      
      Furthermore, the sync operations themselves know they can only be
      invoked on a managed DMA ops domain, so can use the fast specific domain
      lookup to avoid excessive manipulation of the group refcount
      (particularly in the scatterlist cases).
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Robin Murphy <robin.murphy@arm.com>
      Tested-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
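      A minimal sketch of the map-side change described above, using the
      generic arch_sync_dma_for_device() as a stand-in for the arm64-internal
      helper the code of that era actually called; the function name and
      exact parameters are illustrative only.

          #include <linux/dma-map-ops.h>  /* arch_sync_dma_for_device() */
          #include <linux/dma-mapping.h>
          #include <linux/io.h>
          #include <linux/mm.h>

          static void sketch_map_page_sync(struct page *page, unsigned long offset,
                                           size_t size, enum dma_data_direction dir,
                                           unsigned long attrs, bool coherent)
          {
                  if (coherent || (attrs & DMA_ATTR_SKIP_CPU_SYNC))
                          return;

                  /*
                   * We already hold the page being mapped, so derive the
                   * physical address from it directly instead of taking a
                   * round trip through iommu_iova_to_phys() on the IOVA we
                   * just created.
                   */
                  arch_sync_dma_for_device(page_to_phys(page) + offset, size, dir);
          }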
  25. 08 Sep 2018, 1 commit
  26. 18 Aug 2018, 1 commit
  27. 19 Jun 2018, 1 commit
  28. 15 May 2018, 1 commit
    • arm64: Increase ARCH_DMA_MINALIGN to 128 · ebc7e21e
      Committed by Catalin Marinas
      This patch increases the ARCH_DMA_MINALIGN to 128 so that it covers the
      currently known Cache Writeback Granule (CTR_EL0.CWG) on arm64 and moves
      the fallback in cache_line_size() from L1_CACHE_BYTES to this constant.
      In addition, it warns (and taints) if the CWG is larger than
      ARCH_DMA_MINALIGN as this is not safe with non-coherent DMA.
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
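      A rough sketch of the two pieces described above: the fallback in
      cache_line_size() and the boot-time sanity check. The helper names are
      hypothetical and the warning text is only approximate.

          #include <linux/init.h>
          #include <linux/kernel.h>
          #include <asm/cache.h>          /* ARCH_DMA_MINALIGN, cache_type_cwg() */

          /* Fall back to ARCH_DMA_MINALIGN, not L1_CACHE_BYTES, when the
           * Cache Writeback Granule field (CTR_EL0.CWG) reads as zero. */
          static inline int sketch_cache_line_size(void)
          {
                  int cwg = cache_type_cwg();

                  return cwg ? 4 << cwg : ARCH_DMA_MINALIGN;
          }

          /* A CWG larger than ARCH_DMA_MINALIGN means two unrelated buffers
           * could share one writeback granule, which is unsafe with
           * non-coherent DMA, so warn and taint the kernel. */
          static void __init sketch_check_cwg(void)
          {
                  WARN_TAINT(sketch_cache_line_size() > ARCH_DMA_MINALIGN,
                             TAINT_CPU_OUT_OF_SPEC,
                             "ARCH_DMA_MINALIGN smaller than CTR_EL0.CWG (%d < %d)",
                             ARCH_DMA_MINALIGN, sketch_cache_line_size());
          }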
  29. 08 May 2018, 1 commit