- 09 July 2019, 1 commit
By Philip Yang
An upcoming change in the hmm_range_register API requires passing in a pointer to an hmm_mirror instead of an mm_struct. To access the hmm_mirror we need to pass the bo instead of the ttm to amdgpu_ttm_tt_get_user_pages, because the mirror is part of the amdgpu_mn structure, which is accessible from the bo.
v2: fix building without CONFIG_HMM_MIRROR (Arnd)

Signed-off-by: Philip Yang <Philip.Yang@amd.com>
Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
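A minimal sketch of the shape of that change (prototypes only; the parameter lists are an approximation of the driver interface, not a verbatim copy):

```c
/* Before: the helper took the ttm_tt, which has no path back to amdgpu_mn */
int amdgpu_ttm_tt_get_user_pages(struct ttm_tt *ttm, struct page **pages);

/* After (sketch): pass the BO, so the hmm_mirror embedded in the bo's
 * amdgpu_mn structure can be handed to hmm_range_register() */
int amdgpu_ttm_tt_get_user_pages(struct amdgpu_bo *bo, struct page **pages);
```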
- 25 May 2019, 2 commits
By Philip Yang
Selecting only HMM_MIRROR produces kernel config dependency warnings if CONFIG_HMM is missing from the config; adding a dependency on HMM solves the issue. Add conditional compilation to fix the build errors that occur when HMM_MIRROR is not enabled because the HMM config is not enabled. Remove the unused function amdgpu_ttm_tt_mark_user_pages.

Signed-off-by: Philip Yang <Philip.Yang@amd.com>
Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
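A hedged sketch of the conditional-compilation part; the stub body and return value are illustrative assumptions, not necessarily the exact driver code:

```c
/* When HMM_MIRROR is not enabled, callers see a stub instead of a build error. */
#if IS_ENABLED(CONFIG_HMM_MIRROR)
int amdgpu_ttm_tt_get_user_pages(struct ttm_tt *ttm, struct page **pages);
#else
static inline int amdgpu_ttm_tt_get_user_pages(struct ttm_tt *ttm,
                                               struct page **pages)
{
        return -EPERM;  /* userptr support unavailable without HMM_MIRROR */
}
#endif
```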
By Philip Yang
Use the HMM helper function hmm_vma_fault() to get the physical pages backing a userptr and to start tracking CPU page table updates of those pages. Then use hmm_vma_range_done() to check whether those pages were updated before amdgpu_cs_submit for gfx, or before user queues are resumed for kfd. If the userptr pages were updated, amdgpu_cs_ioctl restarts from scratch for gfx, and the restore worker is rescheduled to retry for kfd. HMM simplifies the concurrent CPU page table update check, so remove the guptasklock, mmu_invalidations and last_set_pages fields from the amdgpu_ttm_tt struct. HMM does not pin the pages (it does not raise the page refcount), so remove related operations like release_pages(), put_page() and mark_page_dirty().

Signed-off-by: Philip Yang <Philip.Yang@amd.com>
Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
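A rough sketch of the resulting flow, using the HMM interface of that kernel generation; the wrapper names are made up for illustration and the signatures are approximate:

```c
#include <linux/hmm.h>

/* 1) fault the userptr range in, 2) build the CS / restore the queues,
 * 3) just before committing, ask HMM whether the range stayed valid. */
static int example_userptr_get_pages(struct hmm_range *range)
{
        /* fills range->pfns; unlike get_user_pages() no page refs are taken */
        return hmm_vma_fault(range, true);
}

static bool example_userptr_still_valid(struct hmm_range *range)
{
        /* false: the CPU page tables changed underneath us -> gfx restarts
         * amdgpu_cs_ioctl, KFD reschedules its restore worker and retries */
        return hmm_vma_range_done(range);
}
```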
- 28 March 2019, 2 commits
By Alex Deucher
This reverts commit 915d3eec. It depends on an HMM fix which is not upstream yet.

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
By Alex Deucher
This reverts commit 6b8f7e3d. It depends on an HMM fix which is not upstream yet.

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
- 20 March 2019, 2 commits
By Philip Yang
Selecting only HMM_MIRROR produces kernel config dependency warnings if CONFIG_HMM is missing from the config; adding a dependency on HMM solves the issue. Add conditional compilation to fix the build errors that occur when HMM_MIRROR is not enabled because the HMM config is not enabled. Remove the unused function amdgpu_ttm_tt_mark_user_pages.

Signed-off-by: Philip Yang <Philip.Yang@amd.com>
Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
By Philip Yang
Use the HMM helper function hmm_vma_fault() to get the physical pages backing a userptr and to start tracking CPU page table updates of those pages. Then use hmm_vma_range_done() to check whether those pages were updated before amdgpu_cs_submit for gfx, or before user queues are resumed for kfd. If the userptr pages were updated, amdgpu_cs_ioctl restarts from scratch for gfx, and the restore worker is rescheduled to retry for kfd. HMM simplifies the concurrent CPU page table update check, so remove the guptasklock, mmu_invalidations and last_set_pages fields from the amdgpu_ttm_tt struct. HMM does not pin the pages (it does not raise the page refcount), so remove related operations like release_pages(), put_page() and mark_page_dirty().

Signed-off-by: Philip Yang <Philip.Yang@amd.com>
Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
- 06 November 2018, 2 commits
By Christian König
Make sure that the global BO state is always correctly initialized. This allows removing all the per-device code that initialized it.
v2: fix up vbox (Alex)

Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Junwei Zhang <Jerry.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
By Christian König
As the name says, we only need one global instance of ttm_mem_global. Drop all the driver-side initialization and just use a single exported instance which is initialized during BO global initialization.

Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Junwei Zhang <Jerry.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
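A minimal sketch of the idea, assuming the exported instance is named ttm_mem_glob (initialization details omitted; it is set up alongside the BO global state):

```c
/* One exported instance replaces the per-driver copies. */
struct ttm_mem_global ttm_mem_glob;
EXPORT_SYMBOL(ttm_mem_glob);
```

Drivers then simply reference the single instance instead of allocating and initializing their own copy.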
- 28 August 2018, 1 commit
By Christian König
Helper to get the PDE for a PD/PT.
v2: improve documentation

Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Junwei Zhang <Jerry.Zhang@amd.com>
Reviewed-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
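An illustrative sketch of what such a helper computes; the function name here is hypothetical and the flag handling is simplified (the real helper lets the GMC backend add its ASIC-specific bits):

```c
/* A PDE is essentially the GPU address of the PD/PT's backing BO
 * plus hardware flag bits. */
static uint64_t example_pd_addr(struct amdgpu_bo *bo)
{
        uint64_t pde = amdgpu_bo_gpu_offset(bo);

        pde |= AMDGPU_PTE_VALID;
        return pde;
}
```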
- 14 July 2018, 1 commit
By Michel Dänzer
Track the CPU-visible VRAM instead of the CPU-invisible VRAM. Preparation for the following fix, no functional change intended.
v2: Also change amdgpu_vram_mgr_bo_invisible_size to amdgpu_vram_mgr_bo_visible_size, allowing further simplification (Christian König)

Cc: stable@vger.kernel.org
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
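A standalone sketch of the accounting idea, simplified to a single contiguous range (names are illustrative, not the driver's):

```c
/* Only the part of a VRAM allocation that lies below the CPU-visible
 * boundary counts as visible. */
static u64 example_bo_visible_size(u64 start, u64 size, u64 visible_vram_size)
{
        u64 end = start + size;

        if (start >= visible_vram_size)
                return 0;
        return min(end, visible_vram_size) - start;
}
```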
- 20 June 2018, 1 commit
By Michel Dänzer
Preparation for the following fix, no functional change intended.

Cc: stable@vger.kernel.org
Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
- 16 May 2018, 1 commit
By Andrey Grodzovsky
Reserved VRAM is used to avoid overwriting the pre-OS framebuffer. Once our display stack takes over we don't need the reserved VRAM anymore.
v2: Remove the comment, we actually know why we need to reserve the stolen VRAM. Fix the return type of amdgpu_ttm_late_init.
v3: Return 0 in amdgpu_bo_late_init, rebase on changes to the previous patch
v4: rebase
v5: For GMC9 always reserve just 9M and keep the stolen memory around until the GART table corruption on S3 resume is resolved.

Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
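A hedged sketch of the reserve/release pattern; the field names approximate the driver of that era and are not guaranteed to match exactly:

```c
/* Grab a kernel BO covering the stolen region so no other allocation can
 * land on top of the pre-OS framebuffer. */
static int example_reserve_stolen_vram(struct amdgpu_device *adev)
{
        return amdgpu_bo_create_kernel(adev, adev->gmc.stolen_size, PAGE_SIZE,
                                       AMDGPU_GEM_DOMAIN_VRAM,
                                       &adev->stolen_vga_memory, NULL, NULL);
}

/* Once the driver's display stack has taken over (and, per v5, the S3 GART
 * corruption is resolved), the reservation can be dropped. */
static void example_release_stolen_vram(struct amdgpu_device *adev)
{
        amdgpu_bo_free_kernel(&adev->stolen_vga_memory, NULL, NULL);
}
```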
- 06 March 2018, 3 commits
By Christian König
The ring status can change during a GPU reset, but we still need to be able to schedule TTM buffer moves in the meantime. Otherwise we can run into problems because of aborted move/fill operations during GPU resets.
v2: still check if the ring is available during direct submit.

Signed-off-by: Christian König <christian.koenig@amd.com>
Acked-by: Chunming Zhou <david1.zhou@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
By Christian König
Instead of setting the active VRAM size directly, provide the information whether the buffer functions can be used or not.

Signed-off-by: Christian König <christian.koenig@amd.com>
Acked-by: Chunming Zhou <david1.zhou@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
By Christian König
Those belong to the TTM handling.

Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Chunming Zhou <david1.zhou@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
- 01 March 2018, 1 commit
By Amber Lin
When using the CPU to update page tables, we need to kmap all the PDs/PTs after they are allocated, and that requires a TLB shootdown on each CPU, which is quite heavy. Instead, we map the whole visible VRAM to a kernel address at once. Pages can then be obtained from the offset.
v2: move the mapping base from gmc to the amdgpu_mman structure, and the implementation into the amdgpu_ttm_* functions

Signed-off-by: Amber Lin <Amber.Lin@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
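A hedged sketch of the approach; the field names approximate the amdgpu_mman member added for this, and error handling is trimmed:

```c
/* Map the whole CPU-visible VRAM aperture once, then derive kernel pointers
 * for individual PDs/PTs from their VRAM offset. */
static int example_map_visible_vram(struct amdgpu_device *adev)
{
        adev->mman.aper_base_kaddr = ioremap_wc(adev->gmc.aper_base,
                                                adev->gmc.visible_vram_size);
        return adev->mman.aper_base_kaddr ? 0 : -ENOMEM;
}

static void *example_vram_kaddr(struct amdgpu_device *adev, u64 vram_offset)
{
        /* only valid for offsets below visible_vram_size */
        return (u8 *)adev->mman.aper_base_kaddr + vram_offset;
}
```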
- 20 February 2018, 1 commit
By Christian König
This reverts commit 7bdc53f9 and commit 330df03b. Neither is needed any more.

Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
- 08 December 2017, 1 commit
By Lucas Stach
This moves and renames the AMDGPU scheduler to a common location in DRM in order to facilitate re-use by other drivers. This is mostly a straightforward rename with no code changes. One notable exception is the function to_drm_sched_fence(), which is no longer an inline header function, to avoid the need to export the drm_sched_fence_ops_scheduled and drm_sched_fence_ops_finished structures.

Reviewed-by: Chunming Zhou <david1.zhou@amd.com>
Tested-by: Dieter Nützel <Dieter@nuetzel-hh.de>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
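A simplified sketch of why the helper had to leave the header: it compares against fence-ops tables that stay private to the scheduler (this is not the verbatim DRM code):

```c
struct drm_sched_fence *to_drm_sched_fence(struct dma_fence *f)
{
        /* the ops tables can stay static inside the scheduler module */
        if (f->ops == &drm_sched_fence_ops_scheduled)
                return container_of(f, struct drm_sched_fence, scheduled);
        if (f->ops == &drm_sched_fence_ops_finished)
                return container_of(f, struct drm_sched_fence, finished);
        return NULL;
}
```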
- 05 December 2017, 4 commits
By Christian König
We actually don't bind here, but rather allocate GART space if necessary.

Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
By Christian König
The GTT manager handles the GART address space anyway, so it is completely pointless to keep the same information around twice.
v2: rebased

Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Chunming Zhou <david1.zhou@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
By Christian König
Rename amdgpu_gtt_mgr_is_allocated() to amdgpu_gtt_mgr_has_gart_addr() and use that instead.
v2: rename the function as well.

Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Chunming Zhou <david1.zhou@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
By Christian König
We always use the BO mem now.
v2: minor rebase

Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
- 20 October 2017, 1 commit
By Harish Kasiviswanathan
Add a more generic function amdgpu_copy_ttm_mem_to_mem() that supports arbitrary copy sizes, offsets and two BOs (source & destination). This is useful for the KFD Cross Memory Attach feature, where data needs to be copied between BOs belonging to different processes.
v2: Add struct amdgpu_copy_mem and change the amdgpu_copy_ttm_mem_to_mem() function parameters to use the struct
v3: Minor function name change

Signed-off-by: Harish Kasiviswanathan <Harish.Kasiviswanathan@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
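A hedged sketch of the interface described above; the struct layout and the final function name are approximations based on the description (v3 notes a name change), not the exact merged code:

```c
/* Bundle BO + memory resource + byte offset so a copy can start anywhere
 * inside either buffer object. */
struct amdgpu_copy_mem {
        struct ttm_buffer_object        *bo;
        struct ttm_mem_reg              *mem;
        unsigned long                   offset;
};

int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
                               struct amdgpu_copy_mem *src,
                               struct amdgpu_copy_mem *dst,
                               uint64_t size,
                               struct reservation_object *resv,
                               struct dma_fence **fence);
```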
- 27 September 2017, 1 commit
By Tom St Denis
(v2): add domains and avoid strcmp

Signed-off-by: Tom St Denis <tom.stdenis@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
- 13 September 2017, 1 commit
By Christian König
Just some cleanup.

Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
- 30 August 2017, 1 commit
By Christian König
Use ttm_bo_mem_space instead of manually allocating GART space. This allows us to evict BOs when there isn't enough GART space any more.

Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
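A hedged sketch of the idea using the TTM API of that era; the exact flags and parameters vary between kernel versions, so treat this as an assumption-laden illustration rather than the driver code:

```c
/* Describe where the buffer may go and let TTM find (or make) room in the
 * GART-backed TT domain, evicting other BOs if necessary. */
static int example_alloc_gart_space(struct ttm_buffer_object *bo,
                                    struct ttm_mem_reg *tmp_mem)
{
        struct ttm_place place = {
                .fpfn = 0,
                .lpfn = 0,      /* 0 = no upper limit */
                .flags = TTM_PL_FLAG_TT | TTM_PL_MASK_CACHING,
        };
        struct ttm_placement placement = {
                .num_placement = 1,
                .placement = &place,
        };

        return ttm_bo_mem_space(bo, &placement, tmp_mem, true, false);
}
```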
- 24 August 2017, 1 commit
By Christian König
Use ttm_bo_mem_space instead of manually allocating GART space. This allows us to evict BOs when there isn't enough GART space any more.

Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
- 18 August 2017, 2 commits
By Christian König
Looks like a better place for this.
v2: use atomic64_t members instead

Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
By Christian König
It doesn't make much sense to count those numbers twice.
v2: use an atomic64_t instead

Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
- 16 August 2017, 1 commit
By Yong Zhao
That function will be used later to support setting a page table block with a 64-bit value.

Signed-off-by: Yong Zhao <Yong.Zhao@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
- 14 July 2017, 5 commits
By Christian König
No functional change, just cleanup.
v2: rebased, keep gart name.

Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
By Christian König
This way we don't need to map the full BO at a time any more.
v2: use fixed windows for src/dst

Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
By Christian König
We want to use them as remap address space.

Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
By Christian König
This avoids binding them later on.
v2: fix typo in function name

Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Acked-by: Felix Kuehling <Felix.Kuehling@amd.com>
By Christian König
This allows us to flush the system VM here.

Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Acked-by: Felix Kuehling <Felix.Kuehling@amd.com>
- 28 January 2017, 1 commit
By Christian König
Keeping groups of BOs on the LRU is too time consuming during command submission. Instead use the newly added BO priority to give a certain eviction order.
v2: agd: trivial warning fix

Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Roger.He <Hongbo.He@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
- 26 October 2016, 1 commit
By Christian König
Split VRAM allocations into 4MB blocks.
v2: fix typo in comment, some suggested cleanups
v3: document how to disable the feature, fix rebase issue

Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Edward O'Callaghan <funfunctor@folklore1984.net>
Tested-by: Mike Lothian <mike@fireburn.co.uk>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
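A standalone sketch of the splitting policy, not the driver code: each request is carved into independently placed blocks of at most 4 MiB, which keeps large contiguous runs from fragmenting the VRAM address space manager:

```c
#define EXAMPLE_VRAM_BLOCK_SIZE (4ULL << 20)    /* 4 MiB */

static void example_split_request(u64 size)
{
        while (size) {
                u64 chunk = min_t(u64, size, EXAMPLE_VRAM_BLOCK_SIZE);

                /* place one drm_mm node of 'chunk' bytes here */
                size -= chunk;
        }
}
```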
- 25 October 2016, 1 commit
By Chris Wilson
I plan to usurp the short name of struct fence for a core kernel struct, and so I need to rename the specialised fence/timeline for DMA operations to make room. A consensus was reached in https://lists.freedesktop.org/archives/dri-devel/2016-July/113083.html that making clear this fence applies to DMA operations was a good thing. Since then the patch has grown a bit as usage increases, so hopefully it remains a good thing!
(v2...: rebase, rerun spatch)
v3: Compile on msm, spotted a manual fixup that I broke.
v4: Try again for msm, sorry Daniel

coccinelle script:
@@
@@
- struct fence
+ struct dma_fence
@@
@@
- struct fence_ops
+ struct dma_fence_ops
@@
@@
- struct fence_cb
+ struct dma_fence_cb
@@
@@
- struct fence_array
+ struct dma_fence_array
@@
@@
- enum fence_flag_bits
+ enum dma_fence_flag_bits
@@
@@
(
- fence_init
+ dma_fence_init
|
- fence_release
+ dma_fence_release
|
- fence_free
+ dma_fence_free
|
- fence_get
+ dma_fence_get
|
- fence_get_rcu
+ dma_fence_get_rcu
|
- fence_put
+ dma_fence_put
|
- fence_signal
+ dma_fence_signal
|
- fence_signal_locked
+ dma_fence_signal_locked
|
- fence_default_wait
+ dma_fence_default_wait
|
- fence_add_callback
+ dma_fence_add_callback
|
- fence_remove_callback
+ dma_fence_remove_callback
|
- fence_enable_sw_signaling
+ dma_fence_enable_sw_signaling
|
- fence_is_signaled_locked
+ dma_fence_is_signaled_locked
|
- fence_is_signaled
+ dma_fence_is_signaled
|
- fence_is_later
+ dma_fence_is_later
|
- fence_later
+ dma_fence_later
|
- fence_wait_timeout
+ dma_fence_wait_timeout
|
- fence_wait_any_timeout
+ dma_fence_wait_any_timeout
|
- fence_wait
+ dma_fence_wait
|
- fence_context_alloc
+ dma_fence_context_alloc
|
- fence_array_create
+ dma_fence_array_create
|
- to_fence_array
+ to_dma_fence_array
|
- fence_is_array
+ dma_fence_is_array
|
- trace_fence_emit
+ trace_dma_fence_emit
|
- FENCE_TRACE
+ DMA_FENCE_TRACE
|
- FENCE_WARN
+ DMA_FENCE_WARN
|
- FENCE_ERR
+ DMA_FENCE_ERR
)
(
...
)

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Gustavo Padovan <gustavo.padovan@collabora.co.uk>
Acked-by: Sumit Semwal <sumit.semwal@linaro.org>
Acked-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/20161025120045.28839-1-chris@chris-wilson.co.uk
- 29 September 2016, 1 commit
By Christian König
Only allocate address space when we really need it.
v2: fix a typo, add correct function description, stop leaking the node in the error case.

Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>