1. 07 Mar, 2019 1 commit
  2. 08 Jan, 2019 1 commit
  3. 04 Jan, 2019 4 commits
  4. 23 Dec, 2018 1 commit
  5. 20 Dec, 2018 1 commit
  6. 14 Dec, 2018 6 commits
  7. 06 Dec, 2018 3 commits
  8. 02 Dec, 2018 1 commit
  9. 27 Nov, 2018 1 commit
  10. 09 Oct, 2018 1 commit
  11. 08 Oct, 2018 1 commit
    • dma-debug: Check for drivers mapping invalid addresses in dma_map_single() · 99c65fa7
      Stephen Boyd committed
      I recently debugged a DMA mapping oops where a driver was trying to map
      a buffer returned from request_firmware() with dma_map_single(). Memory
      returned from request_firmware() is mapped into the vmalloc region and
      this isn't a valid region to map with dma_map_single() per the DMA
      documentation's "What memory is DMA'able?" section.
      
      Unfortunately, we don't really check that in the DMA debugging code, so
      enabling DMA debugging doesn't help catch this problem. Let's add a new
      DMA debug function to check for a vmalloc address or an invalid virtual
      address and print a warning if this happens. This makes it a little
      easier to debug these sorts of problems, instead of seeing odd behavior
      or crashes when drivers attempt to map the vmalloc space for DMA.
      
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Reviewed-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Stephen Boyd <swboyd@chromium.org>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
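      The check this commit adds can be sketched in plain C. The region bounds below are stand-ins invented for illustration; the kernel itself decides with is_vmalloc_addr() and virt_addr_valid() against the real vmalloc area and linear mapping.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical address ranges standing in for the kernel's linear map
 * and vmalloc area; real values are arch- and config-specific. */
#define LINEAR_MAP_START 0x1000000UL
#define LINEAR_MAP_END   0x2000000UL
#define VMALLOC_START    0x3000000UL
#define VMALLOC_END      0x4000000UL

static bool is_vmalloc_addr_sketch(uintptr_t addr)
{
    return addr >= VMALLOC_START && addr < VMALLOC_END;
}

static bool virt_addr_valid_sketch(uintptr_t addr)
{
    return addr >= LINEAR_MAP_START && addr < LINEAR_MAP_END;
}

/* Mirrors the shape of the new debug check: a dma_map_single() of a
 * vmalloc address, or of anything outside the linear mapping, would
 * trigger a warning instead of silently misbehaving. */
static bool debug_dma_map_single_ok(uintptr_t addr)
{
    if (is_vmalloc_addr_sketch(addr))
        return false;   /* would WARN: vmalloc memory is not DMA'able */
    if (!virt_addr_valid_sketch(addr))
        return false;   /* would WARN: not a valid kernel virtual address */
    return true;
}
```

      A buffer from request_firmware() would land in the vmalloc range and be rejected by the first branch.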
  12. 01 Oct, 2018 1 commit
  13. 26 Sep, 2018 1 commit
  14. 20 Sep, 2018 3 commits
    • dma-mapping: support non-coherent devices in dma_common_get_sgtable · 9406a49f
      Christoph Hellwig committed
      We can use the arch_dma_coherent_to_pfn hook to provide a ->get_sgtable
      implementation.  Note that this isn't an endorsement of this interface
      (which is a horribly bad idea), but it is required to move arm64 over
      to the generic code without a loss of functionality.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
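      The core of the idea is that a coherent buffer may have no linear-map alias, so the pfn for the sgtable entry has to come from the dma address (via the arch hook) rather than from virt_to_page(). A minimal sketch, with the hook reduced to a plain shift and a simplified single-entry table (all names here are stand-ins, not the kernel's types):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define PAGE_SHIFT 12   /* 4 KiB pages, as on common configurations */

/* Stand-in for the arch_dma_coherent_to_pfn() hook: derive the pfn
 * from the dma address, since virt_to_page() on the cpu address is
 * not valid for a remapped coherent buffer. */
static unsigned long coherent_to_pfn(uint64_t dma_addr)
{
    return (unsigned long)(dma_addr >> PAGE_SHIFT);
}

/* Simplified single-entry scatterlist; the real code feeds
 * pfn_to_page(pfn) into sg_set_page(). */
struct sg_entry_sketch {
    unsigned long pfn;
    size_t        len;
};

static struct sg_entry_sketch get_sgtable_sketch(uint64_t dma_addr, size_t size)
{
    struct sg_entry_sketch e = { coherent_to_pfn(dma_addr), size };
    return e;
}
```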
    • dma-mapping: consolidate the dma mmap implementations · 58b04406
      Christoph Hellwig committed
      The only functional difference (modulo a few missing fixes in the arch
      code) is that architectures without coherent caches need a hook to
      convert a virtual or DMA address into a pfn, given that we don't have
      the kernel linear mapping available for the otherwise easy virt_to_page
      call.  As a side effect we can support mmap of the per-device coherent
      area even on architectures not providing the callback, and we make the
      previously dangerous default dma_common_mmap method actually safe for
      non-coherent architectures by rejecting the mmap when the right helper
      is missing.
      
      In addition to that we need a hook so that some architectures can
      override the protection bits when mmapping a DMA coherent allocation.
      Signed-off-by: NChristoph Hellwig <hch@lst.de>
      Acked-by: Paul Burton <paul.burton@mips.com> # MIPS parts
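      The bounds check at the heart of a common dma mmap path can be sketched as follows: the requested window (page offset plus user page count) must fit inside the allocated buffer. The function name and flat return value are stand-ins; the kernel path returns -ENXIO on failure and then remaps the pages.

```c
#include <assert.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)
#define PAGE_ALIGN(x) (((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))

/* Sketch of the common mmap bounds check: user_count is the number of
 * pages the VMA covers, size is the DMA allocation size, pgoff is the
 * page offset requested by userspace. Returns 0 if the request fits. */
static int dma_mmap_check_sketch(unsigned long user_count,
                                 unsigned long size,
                                 unsigned long pgoff)
{
    unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;

    if (pgoff >= count || user_count > count - pgoff)
        return -1;      /* -ENXIO in the kernel: request outside buffer */
    return 0;
}
```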
    • dma-mapping: merge direct and noncoherent ops · bc3ec75d
      Christoph Hellwig committed
      All the cache maintenance is already stubbed out when not enabled,
      but merging the two allows us to nicely handle the case where
      cache maintenance is required for some devices, but not others.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Acked-by: Paul Burton <paul.burton@mips.com> # MIPS parts
  15. 08 Sep, 2018 3 commits
  16. 25 Jul, 2018 1 commit
    • dma-mapping: relax warning for per-device areas · d27fb99f
      Robin Murphy committed
      The reasons why dma_free_attrs() should not be called from IRQ context
      are not necessarily obvious and somewhat buried in the development
      history, so let's start by documenting the warning itself to help anyone
      who does happen to hit it and wonder what the deal is.
      
      However, this check turns out to be slightly over-restrictive for the
      way that per-device memory has been spliced into the general API, since
      for that case we know that dma_declare_coherent_memory() has created an
      appropriate CPU mapping for the entire area and nothing dynamic should
      be happening. Given that the usage model for per-device memory is often
      more akin to streaming DMA than 'real' coherent DMA (e.g. allocating and
      freeing space to copy short-lived packets in and out), it is also
      somewhat more reasonable for those operations to happen in IRQ handlers
      for such devices.
      
      Therefore, let's move the irqs_disabled() check down past the per-device
      area hook, so that the hook gets a chance to resolve the request before
      we reach definite "you're doing it wrong" territory.
      Reported-by: Fredrik Noring <noring@nocrew.org>
      Tested-by: Fredrik Noring <noring@nocrew.org>
      Signed-off-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
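      The reordering can be reduced to a small decision function: only frees that are not satisfied by the per-device hook reach the irqs_disabled() warning. Function name and parameters below are illustrative, not the kernel's signatures.

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch of the reordered dma_free_attrs() flow: returns true if the
 * WARN would fire. A free satisfied by the per-device coherent pool
 * returns before the check, so it is fine even from IRQ context; only
 * the dynamic free path still warns when IRQs are disabled. */
static bool dma_free_would_warn(bool from_dev_coherent, bool irqs_disabled)
{
    if (from_dev_coherent)
        return false;       /* resolved by the per-device hook first */
    return irqs_disabled;   /* dynamic path: still wrong in IRQ context */
}
```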
  17. 28 May, 2018 1 commit
  18. 25 May, 2018 1 commit
  19. 19 May, 2018 1 commit
  20. 09 May, 2018 1 commit
  21. 07 May, 2018 1 commit
    • PCI: remove PCI_DMA_BUS_IS_PHYS · 325ef185
      Christoph Hellwig committed
      This was used by the ide, scsi and networking code in the past to
      determine if they should bounce payloads.  Now that the DMA mapping
      code always has to support DMA to all physical memory (thanks to
      swiotlb for non-iommu systems), there is no need for this crude hack
      any more.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Acked-by: Palmer Dabbelt <palmer@sifive.com> (for riscv)
      Reviewed-by: NJens Axboe <axboe@kernel.dk>
  22. 28 Mar, 2018 1 commit
  23. 17 Mar, 2018 2 commits
  24. 12 Feb, 2018 1 commit
  25. 15 Jan, 2018 1 commit