1. 20 Feb 2021 (1 commit)
  2. 10 Feb 2021 (1 commit)
  3. 18 Nov 2020 (2 commits)
  4. 20 Oct 2020 (1 commit)
  5. 06 Oct 2020 (2 commits)
    • dma-mapping: move dma-debug.h to kernel/dma/ · a1fd09e8
      Christoph Hellwig committed
      Most of dma-debug.h is not required by anything outside of kernel/dma.
      Move the four declarations needed by dma-mapping.h or dma-ops providers
      into dma-mapping.h and dma-map-ops.h, and move the remainder of the
      file to kernel/dma/debug.h.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
    • dma-mapping: split <linux/dma-mapping.h> · 0a0f0d8b
      Christoph Hellwig committed
      Split out all the bits that are purely for dma_map_ops implementations
      and related code into a new <linux/dma-map-ops.h> header so that they
      don't get pulled into all the drivers.  That also means the architecture
      specific <asm/dma-mapping.h> is not pulled in by <linux/dma-mapping.h>
      any more, which exposes missing includes that were previously pulled
      in by the x86 or arm versions in a few not overly portable drivers.
      A short include sketch follows this entry.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
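      A short illustration of who includes what after the split, using only
      the header paths named above (assumed usage, not quoted from the patch):

          /* A regular driver keeps using the mapping API only: */
          #include <linux/dma-mapping.h>

          /* Code implementing dma_map_ops (arch/IOMMU/bus) now uses: */
          #include <linux/dma-map-ops.h>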
  6. 25 Sep 2020 (7 commits)
  7. 18 Sep 2020 (1 commit)
  8. 04 Sep 2020 (2 commits)
    • dma-mapping: set default segment_boundary_mask to ULONG_MAX · 135ba11a
      Nicolin Chen committed
      The default segment_boundary_mask was set to DMA_BIT_MASK(32)
      a decade ago, following the SCSI/block subsystem, as a 32-bit
      mask was good enough for most devices.
      
      Now more and more drivers set dma_masks above DMA_BIT_MASK(32)
      while only a handful of them call dma_set_seg_boundary(). This
      means that most drivers have a 4GB segmentation boundary because
      the DMA API returns a 32-bit default value, though they might not
      really have such a limit.
      
      The default segment_boundary_mask should mean "no limit", since
      the device does not explicitly set the mask. But a 32-bit mask
      certainly limits devices capable of addressing beyond 32 bits.
      
      So this patch sets the default segment_boundary_mask to ULONG_MAX.
      A sketch of the resulting default follows this entry.
      Signed-off-by: Nicolin Chen <nicoleotsuka@gmail.com>
      Acked-by: Niklas Schnelle <schnelle@linux.ibm.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
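      A minimal sketch of the resulting behavior; close in spirit to
      dma_get_seg_boundary() after the patch, though not necessarily
      the verbatim kernel source:

          static inline unsigned long seg_boundary_sketch(struct device *dev)
          {
                  /* Honor an explicitly configured boundary, if any. */
                  if (dev->dma_parms && dev->dma_parms->segment_boundary_mask)
                          return dev->dma_parms->segment_boundary_mask;
                  return ULONG_MAX;       /* previously DMA_BIT_MASK(32) */
          }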
    • dma-mapping: introduce dma_get_seg_boundary_nr_pages() · 1e9d90db
      Nicolin Chen committed
      We found that callers of dma_get_seg_boundary() mostly do an ALIGN
      with the page mask and then a page shift to get the number of pages:
          ALIGN(boundary + 1, 1 << shift) >> shift
      
      However, the boundary might be as large as ULONG_MAX, which means
      that the device has no specific boundary limit. In that case either
      the "+ 1" or the ALIGN() itself can overflow.
      
      According to kernel defines:
          #define ALIGN_MASK(x, mask) (((x) + (mask)) & ~(mask))
          #define ALIGN(x, a)	ALIGN_MASK(x, (typeof(x))(a) - 1)
      
      We can simplify the logic here into a helper function doing:
        ALIGN(boundary + 1, 1 << shift) >> shift
      = ALIGN_MASK(b + 1, (1 << s) - 1) >> s
      = {[b + 1 + (1 << s) - 1] & ~[(1 << s) - 1]} >> s
      = [b + 1 + (1 << s) - 1] >> s
      = [b + (1 << s)] >> s
      = (b >> s) + 1
      
      This patch introduces and applies dma_get_seg_boundary_nr_pages()
      as an overflow-free helper for dma_get_seg_boundary() callers to
      get the number of pages. It also takes care of the NULL dev case
      for non-DMA API callers. A sketch of the helper follows this entry.
      Suggested-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Nicolin Chen <nicoleotsuka@gmail.com>
      Acked-by: Niklas Schnelle <schnelle@linux.ibm.com>
      Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
      Signed-off-by: Christoph Hellwig <hch@lst.de>
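      A sketch of the helper following the (b >> s) + 1 derivation above;
      the NULL-dev fallback to a 32-bit boundary is an assumption based on
      the old default, not quoted from the patch:

          static inline unsigned long
          dma_get_seg_boundary_nr_pages(struct device *dev,
                                        unsigned int page_shift)
          {
                  if (!dev)
                          return (U32_MAX >> page_shift) + 1;
                  /* Overflow-free even when the boundary is ULONG_MAX. */
                  return (dma_get_seg_boundary(dev) >> page_shift) + 1;
          }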
  9. 14 Aug 2020 (1 commit)
  10. 19 Jul 2020 (1 commit)
  11. 16 Jul 2020 (1 commit)
  12. 30 Jun 2020 (1 commit)
  13. 27 Jun 2020 (1 commit)
  14. 13 May 2020 (1 commit)
    • dma-mapping: add generic helpers for mapping sgtable objects · d9d200bc
      Marek Szyprowski committed
      struct sg_table is a common structure used for describing a memory
      buffer. It consists of a scatterlist with memory pages and DMA
      addresses (the sgl entry), together with two entry counts: CPU
      pages (orig_nents) and DMA-mapped pages (nents).
      
      It turned out to be a common mistake to misuse the nents and
      orig_nents entries, calling DMA-mapping functions with the wrong
      number of entries or ignoring the number of mapped entries
      returned by dma_map_sg().
      
      To avoid such issues, let's introduce common wrappers operating
      directly on struct sg_table objects, which take care of the proper
      use of the nents and orig_nents entries. A sketch of the mapping
      wrapper follows this entry.
      Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
      Reviewed-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
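      A minimal sketch of what such a wrapper looks like, assuming it sits
      on top of dma_map_sg_attrs(); close in spirit to the kernel helper,
      though details may differ:

          static inline int dma_map_sgtable(struct device *dev,
                          struct sg_table *sgt, enum dma_data_direction dir,
                          unsigned long attrs)
          {
                  int nents;

                  /* Always map the full CPU-side list (orig_nents) ... */
                  nents = dma_map_sg_attrs(dev, sgt->sgl, sgt->orig_nents,
                                           dir, attrs);
                  if (nents <= 0)
                          return -EINVAL;
                  /* ... and record how many DMA segments came back. */
                  sgt->nents = nents;
                  return 0;
          }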
  15. 20 Apr 2020 (1 commit)
    • dma-pool: add additional coherent pools to map to gfp mask · c84dc6e6
      David Rientjes committed
      The single atomic pool is allocated from the lowest zone possible since
      it is guaranteed to be applicable for any DMA allocation.
      
      Devices may allocate through the DMA API but not have a strict reliance
      on GFP_DMA memory.  Since the atomic pool will be used for all
      non-blockable allocations, returning all memory from ZONE_DMA may
      unnecessarily deplete the zone.
      
      Provision multiple atomic pools that map to the optimal gfp mask
      of the device; a selection sketch follows this entry.
      
      When allocating non-blockable memory, determine the optimal gfp mask of
      the device and use the appropriate atomic pool.
      
      The coherent DMA mask will remain the same between allocation and free
      and, thus, memory will be freed to the same atomic pool it was allocated
      from.
      
      __dma_atomic_pool_init() will be changed to return struct gen_pool *
      later once dynamic expansion is added.
      Signed-off-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
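      A hedged sketch of the pool-selection idea; the pool names and the
      helper name here are illustrative, not the exact kernel symbols:

          static struct gen_pool *pool_for_gfp(gfp_t gfp)
          {
                  if (IS_ENABLED(CONFIG_ZONE_DMA) && (gfp & GFP_DMA))
                          return atomic_pool_dma;        /* lowest zone */
                  if (IS_ENABLED(CONFIG_ZONE_DMA32) && (gfp & GFP_DMA32))
                          return atomic_pool_dma32;
                  return atomic_pool_kernel;             /* no zone limit */
          }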
  16. 22 Nov 2019 (1 commit)
    • dma-mapping: treat dev->bus_dma_mask as a DMA limit · a7ba70f1
      Nicolas Saenz Julienne committed
      Using a mask to represent bus DMA constraints has a set of
      limitations, the biggest one being that it can only encode a power
      of two (minus one). The DMA mapping code is already aware of this
      and treats dev->bus_dma_mask as a limit. This quirk is already
      used by some architectures, although it is still rare.
      
      With the introduction of the Raspberry Pi 4 we've found a new
      contender for the use of bus DMA limits, as its PCIe bus can only
      address the lower 3GB of memory (out of a total of 4GB). This is
      impossible to represent with a mask. To make things worse, the
      device-tree code rounds non-power-of-two bus DMA limits up to the
      next power of two, which is unacceptable in this case.
      
      In light of this, rename dev->bus_dma_mask to dev->bus_dma_limit
      throughout the tree and treat it as such. Note that
      dev->bus_dma_limit should contain the highest accessible DMA
      address. A sketch of how the limit is applied follows this entry.
      Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
      Reviewed-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
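      A minimal sketch of how a limit, rather than a mask, is applied when
      checking whether an address range is reachable; modeled on the
      dma_capable()-style check, with surrounding details elided:

          static bool within_bus_limit(struct device *dev,
                                       dma_addr_t addr, size_t size)
          {
                  dma_addr_t end = addr + size - 1;

                  /* min_not_zero() ignores an unset (zero) bus limit. */
                  return end <= min_not_zero(*dev->dma_mask,
                                             dev->bus_dma_limit);
          }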
  17. 15 Nov 2019 (1 commit)
  18. 31 Oct 2019 (2 commits)
  19. 04 Sep 2019 (5 commits)
  20. 03 Sep 2019 (1 commit)
  21. 29 Aug 2019 (1 commit)
  22. 22 Aug 2019 (1 commit)
  23. 23 Jul 2019 (1 commit)
  24. 17 Jul 2019 (1 commit)
  25. 10 Jul 2019 (1 commit)
  26. 08 Apr 2019 (1 commit)