1. 06 Dec 2022, 1 commit
  2. 10 Nov 2022, 1 commit
    • drm/amdgpu: Set MTYPE in PTE based on BO flags · d1a372af
      Felix Kuehling authored
      The same BO may need different MTYPEs and SNOOP flags in PTEs depending
      on its current location relative to the mapping GPU. Setting MTYPEs from
      clients ahead of time is not practical for coherent memory sharing.
      Instead determine the correct MTYPE for the desired coherence model and
      current BO location when updating the page tables.
      
      To maintain backwards compatibility with MTYPE-selection in
      AMDGPU_VA_OP_MAP, the coherence-model-based MTYPE selection is only
      applied if it chooses an MTYPE other than MTYPE_NC (the default).
      
      Add two AMDGPU_GEM_CREATE_... flags to indicate the coherence model. The
      default if no flag is specified is non-coherent (i.e. coarse-grained
      coherent at dispatch boundaries).
      
      Update amdgpu_amdkfd_gpuvm.c to use this new method to choose the
      correct MTYPE depending on the current memory location. (A hedged
      sketch of the selection logic follows this entry.)
      
      v2:
      * check that bo is not NULL (e.g. PRT mappings)
      * Fix missing ~ bitmask in gmc_v11_0.c
      v3:
      * squash in "drm/amdgpu: Inherit coherence flags on dmabuf import"
      Suggested-by: Christian König <christian.koenig@amd.com>
      Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com>
      Acked-by: Christian König <christian.koenig@amd.com>
      Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
      d1a372af
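      The following is a minimal, hypothetical C sketch of the coherence-model-based
      MTYPE selection described above. The flag names, MTYPE values and helpers are
      illustrative stand-ins for the (elided) AMDGPU_GEM_CREATE_... flags, not the
      actual symbols added by this commit.

      ```c
      /*
       * Hypothetical sketch of coherence-model-based MTYPE selection.
       * All names below are illustrative, not the real amdgpu symbols.
       */
      #include <stdbool.h>
      #include <stdint.h>

      enum mtype { MTYPE_NC, MTYPE_CC, MTYPE_UC };    /* NC is the default */

      #define SKETCH_GEM_CREATE_COHERENT  (1u << 0)   /* hypothetical flag */
      #define SKETCH_GEM_CREATE_UNCACHED  (1u << 1)   /* hypothetical flag */

      /*
       * Pick an MTYPE at page-table update time, based on the BO's coherence
       * flags and whether it currently sits in memory local to the mapping GPU.
       */
      static enum mtype pick_mtype(uint32_t bo_flags, bool bo_is_local)
      {
          if (bo_flags & SKETCH_GEM_CREATE_UNCACHED)
              return MTYPE_UC;                        /* always uncached */
          if (bo_flags & SKETCH_GEM_CREATE_COHERENT)
              return bo_is_local ? MTYPE_CC : MTYPE_UC;
          return MTYPE_NC;    /* non-coherent: coarse-grained at dispatch */
      }

      /*
       * Backwards compatibility with MTYPEs supplied via AMDGPU_VA_OP_MAP:
       * only override the caller's choice when the coherence model selects
       * something other than the MTYPE_NC default.
       */
      static enum mtype final_mtype(enum mtype user_mtype, uint32_t bo_flags,
                                    bool bo_is_local)
      {
          enum mtype m = pick_mtype(bo_flags, bo_is_local);

          return m == MTYPE_NC ? user_mtype : m;
      }
      ```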
  3. 29 Sep 2022, 2 commits
  4. 07 Apr 2022, 1 commit
  5. 17 Dec 2021, 1 commit
  6. 10 Dec 2021, 1 commit
  7. 22 Jun 2021, 2 commits
  8. 16 Jun 2021, 1 commit
  9. 06 Jun 2021, 2 commits
  10. 02 Jun 2021, 1 commit
  11. 27 May 2021, 1 commit
    • drm/amdgpu: Implement mmap as GEM object function · 71df0368
      Thomas Zimmermann authored
      Moving the driver-specific mmap code into a GEM object function allows
      for using DRM helpers for various mmap callbacks.
      
      This change resolves several inconsistencies between regular mmap and
      prime-based mmap. The vm_ops field in the vma is now set for all mmap'ed
      areas; previously it was only set for regular mmap calls, while
      prime-based mmap used TTM's default vm_ops. The function
      amdgpu_verify_access() is no longer called and is therefore removed by
      this patch.
      
      As a side effect, amdgpu_ttm_vm_ops and amdgpu_ttm_fault() are now
      implemented in amdgpu's GEM code. (See the sketch after this entry for
      the general pattern.)
      
      v4:
      	* rebased
      v3:
      	* rename mmap function to amdgpu_gem_object_mmap() (Christian)
      	* remove unnecessary checks from mmap (Christian)
      v2:
      	* rename amdgpu_ttm_vm_ops and amdgpu_ttm_fault() to
      	  amdgpu_gem_vm_ops and amdgpu_gem_fault() (Christian)
      	* the check for kfd_bo has meanwhile been removed
      Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
      Reviewed-by: Christian König <christian.koenig@amd.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20210525151055.8174-3-tzimmermann@suse.de
      71df0368
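      Below is a hedged sketch of the general pattern of implementing mmap as a
      GEM object function in a TTM-based DRM driver. For brevity it delegates to
      the generic drm_gem_ttm_mmap() helper, whereas the commit above has amdgpu
      install its own amdgpu_gem_vm_ops/amdgpu_gem_fault(); the sketch_ names are
      illustrative.

      ```c
      /* Sketch only: routing mmap through a GEM object function. */
      #include <drm/drm_gem.h>
      #include <drm/drm_gem_ttm_helper.h>

      static int sketch_gem_object_mmap(struct drm_gem_object *obj,
                                        struct vm_area_struct *vma)
      {
          /*
           * Delegate to the TTM helper, which sets up vma->vm_ops, so regular
           * and PRIME-based mmap both end up on the same code path.
           */
          return drm_gem_ttm_mmap(obj, vma);
      }

      static const struct drm_gem_object_funcs sketch_gem_object_funcs = {
          /* .free, .vmap, .export, ... omitted in this sketch */
          .mmap = sketch_gem_object_mmap,
      };
      ```

      With the .mmap object function in place, the driver's file_operations can
      use the generic drm_gem_mmap() entry point (for example via
      DEFINE_DRM_GEM_FOPS), which looks up the GEM object behind the mmap offset
      and dispatches to this callback.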
  12. 16 Apr 2021, 1 commit
  13. 24 Mar 2021, 2 commits
  14. 27 Feb 2021, 1 commit
  15. 06 Jan 2021, 1 commit
  16. 14 Dec 2020, 1 commit
  17. 10 Dec 2020, 1 commit
  18. 09 Dec 2020, 1 commit
  19. 09 Nov 2020, 1 commit
  20. 24 Sep 2020, 1 commit
  21. 09 Sep 2020, 1 commit
  22. 25 Aug 2020, 1 commit
  23. 11 Aug 2020, 1 commit
  24. 13 Jul 2020, 1 commit
    • drm: amdgpu: fix common struct sg_table related issues · 39913934
      Marek Szyprowski authored
      The Documentation/DMA-API-HOWTO.txt states that the dma_map_sg() function
      returns the number of entries created in the DMA address space. However,
      the subsequent calls to dma_sync_sg_for_{device,cpu}() and dma_unmap_sg()
      must be made with the original number of entries passed to dma_map_sg().
      
      struct sg_table is a common structure for describing a non-contiguous
      memory buffer and is widely used in the DRM and graphics subsystems. It
      consists of a scatterlist with memory pages and DMA addresses (the sgl
      entry), as well as the number of scatterlist entries for CPU pages (the
      orig_nents entry) and for DMA-mapped pages (the nents entry).
      
      It turned out to be a common mistake to misuse the nents and orig_nents
      entries, calling the DMA-mapping functions with the wrong number of entries
      or ignoring the number of mapped entries returned by dma_map_sg().
      
      To avoid such issues, let's use common dma-mapping wrappers that operate
      directly on struct sg_table objects, together with scatterlist page
      iterators where possible. This almost always hides references to the nents
      and orig_nents entries, making the code robust, easier to follow, and
      copy/paste-safe. (A hedged usage sketch follows this entry.)
      Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
      Reviewed-by: Christian König <christian.koenig@amd.com>
      Link: https://patchwork.freedesktop.org/patch/371142/
      Signed-off-by: Christian König <christian.koenig@amd.com>
      39913934
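      As an illustration of the wrappers this commit refers to, here is a small,
      hypothetical driver snippet using dma_map_sgtable()/dma_unmap_sgtable() and
      the sgtable iterators; the sketch_ function names are made up and error
      handling is trimmed.

      ```c
      /* Illustrative use of the sg_table-based DMA-mapping wrappers. */
      #include <linux/dma-mapping.h>
      #include <linux/printk.h>
      #include <linux/scatterlist.h>

      static int sketch_map_buffer(struct device *dev, struct sg_table *sgt)
      {
          struct scatterlist *sg;
          int i, ret;

          /* Maps all orig_nents entries and updates sgt->nents internally. */
          ret = dma_map_sgtable(dev, sgt, DMA_BIDIRECTIONAL, 0);
          if (ret)
              return ret;

          /* Iterate only over the DMA-mapped entries, never over orig_nents. */
          for_each_sgtable_dma_sg(sgt, sg, i)
              pr_debug("dma addr %pad, len %u\n",
                       &sg_dma_address(sg), sg_dma_len(sg));

          return 0;
      }

      static void sketch_unmap_buffer(struct device *dev, struct sg_table *sgt)
      {
          /* The wrapper passes the correct (original) entry count itself. */
          dma_unmap_sgtable(dev, sgt, DMA_BIDIRECTIONAL, 0);
      }
      ```

      dma_map_sgtable() returns 0 on success and records the mapped-entry count
      in sgt->nents itself, and dma_unmap_sgtable() passes the original entry
      count internally, so the driver never has to juggle nents versus orig_nents.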
  25. 20 May 2020, 2 commits
  26. 01 Apr 2020, 3 commits
  27. 27 Feb 2020, 4 commits
  28. 06 Dec 2019, 1 commit
  29. 28 Oct 2019, 2 commits