- March 28, 2019 (1 commit)
Committed by shaoyunl
The driver votes for a low-to-high pstate switch whenever there is an outstanding XGMI mapping request, and votes high-to-low once all outstanding XGMI mappings are terminated. Signed-off-by: shaoyunl <shaoyun.liu@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
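The voting scheme described above is essentially a reference count on outstanding XGMI mappings. Below is a minimal sketch of that idea; the struct and function names (xgmi_hive_vote, xgmi_mapping_get/put) are hypothetical, not the actual amdgpu XGMI API:

```c
#include <linux/mutex.h>

/* Illustrative reference-counted pstate vote; names are hypothetical. */
struct xgmi_hive_vote {
	struct mutex lock;
	unsigned int mapping_count;	/* outstanding XGMI mappings */
};

static void hive_set_pstate(bool high)
{
	/* stub: would program the XGMI fabric pstate here */
}

void xgmi_mapping_get(struct xgmi_hive_vote *hive)
{
	mutex_lock(&hive->lock);
	if (hive->mapping_count++ == 0)
		hive_set_pstate(true);	/* first mapping: vote high */
	mutex_unlock(&hive->lock);
}

void xgmi_mapping_put(struct xgmi_hive_vote *hive)
{
	mutex_lock(&hive->lock);
	if (--hive->mapping_count == 0)
		hive_set_pstate(false);	/* last mapping gone: vote low */
	mutex_unlock(&hive->lock);
}
```

The first mapping raises the pstate; releasing the last one lowers it again, so the fabric only runs fast while someone actually needs it.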
-
- March 22, 2019 (4 commits)
Committed by Christian König
And remove the existing code when it is unused. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Chunming Zhou <david1.zhou@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Committed by Christian König
Separate out all functions for SDMA- and CPU-based page table updates into separate backends. This way we can keep most of their complexity out of the core VM code. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Chunming Zhou <david1.zhou@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
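Splitting updates into backends usually means a small ops table that the core VM code calls through. A hedged sketch of what such a split can look like; the struct below is illustrative, not the exact definition the patch adds:

```c
#include <linux/types.h>

struct dma_fence;
struct vm_update_params;

/* Hypothetical ops table in the spirit of this split; the driver's
 * real table (amdgpu_vm_update_funcs) may differ in detail. */
struct vm_update_funcs {
	int (*prepare)(struct vm_update_params *p);
	int (*update)(struct vm_update_params *p, u64 pe, u64 addr,
		      unsigned int count, u32 incr, u64 flags);
	int (*commit)(struct vm_update_params *p,
		      struct dma_fence **fence);
};

/* The core VM code calls through the table and never needs to know
 * whether the SDMA or the CPU backend performs the actual writes:
 *
 *	r = update_funcs->update(p, pe, addr, count, incr, flags);
 */
```

The payoff is that the core walker stays identical while the SDMA backend batches writes into a command submission and the CPU backend pokes the mapped page tables directly.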
-
Committed by Christian König
Move the update parameters into the VM header and rename them. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Chunming Zhou <david1.zhou@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Committed by Christian König
Not needed any more. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Chunming Zhou <david1.zhou@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
- March 20, 2019 (4 commits)
Committed by Christian König
Make sure that not only the entities are flushed, but that we also wait for the HW to finish all processing. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Chunming Zhou <david1.zhou@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Committed by Christian König
Remove the chash implementation for now since it isn't used any more. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Committed by Christian König
Not needed any more since we now free PDs/PTs on demand. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Acked-by: Huang Rui <ray.huang@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Committed by Christian König
Let's start to allocate VM PDs/PTs on demand instead of pre-allocating them during mapping. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Acked-by: Huang Rui <ray.huang@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
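On-demand allocation means the page-table walk itself creates missing PDs/PTs the first time a mapping touches them, rather than reserving everything up front. A sketch under assumed names (vm_lookup_pt, vm_alloc_pt, and vm_link_pt are placeholders for the idea, not amdgpu functions):

```c
#include <linux/err.h>
#include <linux/types.h>

struct my_vm;
struct page_table;

struct page_table *vm_lookup_pt(struct my_vm *vm, u64 addr, int level);
struct page_table *vm_alloc_pt(struct my_vm *vm, int level);
void vm_link_pt(struct my_vm *vm, u64 addr, int level,
		struct page_table *pt);

static struct page_table *vm_get_pt(struct my_vm *vm, u64 addr, int level)
{
	struct page_table *pt = vm_lookup_pt(vm, addr, level);

	if (!pt) {
		/* allocate only when a mapping first touches this range */
		pt = vm_alloc_pt(vm, level);
		if (!pt)
			return ERR_PTR(-ENOMEM);
		vm_link_pt(vm, addr, level, pt);	/* hook into the parent PD */
	}
	return pt;
}
```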
-
- January 26, 2019 (1 commit)
Committed by Chunming Zhou
If the LRU is changed, we cannot do bulk moving. v2: the root BO isn't part of the bulk move, so skip changing it. Signed-off-by: Chunming Zhou <david1.zhou@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
- December 8, 2018 (1 commit)
Committed by Christian König
printk_ratelimit() is much better suited to limit the number of reported VM faults. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Acked-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
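printk_ratelimit() is the stock kernel helper for this; a hedged sketch of the usual pattern (the reporting function and its parameters are illustrative, not the actual amdgpu code):

```c
#include <linux/printk.h>
#include <linux/types.h>

static void report_vm_fault(u64 addr, u32 client_id)
{
	/* drop excess messages instead of flooding the log when a
	 * process triggers thousands of faults in a burst */
	if (printk_ratelimit())
		pr_err("VM fault at 0x%016llx (client %u)\n",
		       (unsigned long long)addr, client_id);
}
```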
-
- September 14, 2018 (1 commit)
Committed by Christian König
Instead of the doubly linked list. Gets the size of amdgpu_vm_pt down to 64 bytes again. We could even reduce it down to 32 bytes, but that would require some rather extreme hacks. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Chunming Zhou <david1.zhou@amd.com> Acked-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
- September 13, 2018 (1 commit)
Committed by Oak Zeng
Instead of sharing one fault hash table per device, make it per VM. This avoids inter-process lock issues when the fault hash table is full. Change-Id: I5d1281b7c41eddc8e26113e010516557588d3708 Signed-off-by: Oak Zeng <Oak.Zeng@amd.com> Suggested-by: Christian Konig <Christian.Koenig@amd.com> Suggested-by: Felix Kuehling <Felix.Kuehling@amd.com> Reviewed-by: Christian Konig <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
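Making the table per VM means the table and its lock live in the VM structure itself, so two processes never contend on the same one. A hedged sketch using the generic kernel hashtable; the driver reused its own hash code at the time, and every name below is made up:

```c
#include <linux/hashtable.h>
#include <linux/spinlock.h>
#include <linux/types.h>

struct vm_fault_entry {
	u64 key;			/* encoded fault address + flags */
	struct hlist_node node;
};

struct my_vm {
	DECLARE_HASHTABLE(fault_hash, 8);	/* 256 buckets, one table per VM */
	spinlock_t fault_lock;
};

/* true if this fault was already recorded and can be skipped */
static bool vm_fault_seen(struct my_vm *vm, u64 key)
{
	struct vm_fault_entry *e;
	bool seen = false;

	spin_lock(&vm->fault_lock);
	hash_for_each_possible(vm->fault_hash, e, node, key) {
		if (e->key == key) {
			seen = true;
			break;
		}
	}
	spin_unlock(&vm->fault_lock);
	return seen;
}
```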
-
- September 11, 2018 (2 commits)
Committed by Christian König
Correctly sign extend the GMC addresses to 48 bits. v2: sign extending turned out easier than thought. v3: clean up the defines and move them into amdgpu_gmc.h as well Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Junwei Zhang <Jerry.Zhang@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
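Sign extension here means replicating bit 47 into bits 63..48, so a 48-bit GMC address becomes canonical in 64 bits. A stand-alone version of the arithmetic; amdgpu keeps its own helper and the address-hole defines in amdgpu_gmc.h:

```c
#include <linux/types.h>

static inline u64 gmc_sign_extend(u64 addr)
{
	if (addr & (1ULL << 47))		/* bit 47 set: upper half */
		addr |= ~((1ULL << 48) - 1);	/* fill bits 63..48 with ones */
	return addr;
}
```

For example, 0x0000800000000000 extends to 0xffff800000000000, the first address above the hole between the two canonical halves.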
-
Committed by Christian König
Allows us to avoid taking the spinlock in more places. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Junwei Zhang <Jerry.Zhang@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
- August 30, 2018 (2 commits)
Committed by Oak Zeng
For a compute VM acquired from amdgpu, vm.pasid is managed by KFD. Decouple the PASID from such a VM on process destroy to avoid a duplicate PASID release. Signed-off-by: Oak Zeng <Oak.Zeng@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Committed by Oak Zeng
To turn an amdgpu VM into a compute VM, the old PASID is freed and replaced with a PASID managed by KFD. KFD can't reuse the original PASID allocated by amdgpu because KFD uses a different PASID policy than amdgpu; for example, all graphics devices share one and the same PASID within a process. v2: rebase (Alex) Signed-off-by: Oak Zeng <Oak.Zeng@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
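The conversion boils down to releasing one PASID and installing another. A hypothetical sketch of that flow; pasid_free() and the vm fields are stand-ins, not the real amdgpu PASID management:

```c
#include <linux/types.h>

struct my_vm {
	u32 pasid;
};

void pasid_free(u32 pasid);	/* stand-in for the driver's allocator */

/* Replace the amdgpu-allocated PASID with the KFD-managed one. */
static void vm_make_compute(struct my_vm *vm, u32 kfd_pasid)
{
	if (vm->pasid)
		pasid_free(vm->pasid);	/* release amdgpu's PASID */
	vm->pasid = kfd_pasid;		/* ownership now lies with KFD */
}
```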
-
- August 28, 2018 (6 commits)
Committed by Christian König
Just another leftover from radeon. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Reviewed-by: Junwei Zhang <Jerry.Zhang@amd.com> Acked-by: Huang Rui <ray.huang@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Committed by Felix Kuehling
Set the VM size based on system memory size, between the ASIC-specific limits given by min_vm_size and max_bits. GFXv9 GPUs will keep their default VM size of 256TB (48 bit); only older GPUs will adjust the VM size depending on system memory size. This makes more VM space available for ROCm applications on GFXv8 GPUs that want to map all available VRAM and system memory in their SVM address space. v2: * Clarify comment * Round up memory size before >> 30 * Round up automatic vm_size to power of two Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com> Acked-by: Junwei Zhang <Jerry.Zhang@amd.com> Reviewed-by: Huang Rui <ray.huang@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Committed by Felix Kuehling
Set the VM size based on system memory size, between the ASIC-specific limits given by min_vm_size and max_bits. GFXv9 GPUs will keep their default VM size of 256TB (48 bit); only older GPUs will adjust the VM size depending on system memory size. This makes more VM space available for ROCm applications on GFXv8 GPUs that want to map all available VRAM and system memory in their SVM address space. v2: * Clarify comment * Round up memory size before >> 30 * Round up automatic vm_size to power of two Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com> Acked-by: Junwei Zhang <Jerry.Zhang@amd.com> Reviewed-by: Huang Rui <ray.huang@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
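The sizing policy reads naturally as a small helper: convert system memory to GB (rounding up before the shift, per the v2 note), round up to a power of two, and clamp to the ASIC limits. A sketch under assumed names, not the literal amdgpu_vm_adjust_size() code:

```c
#include <linux/kernel.h>	/* clamp() */
#include <linux/log2.h>		/* roundup_pow_of_two() */
#include <linux/sizes.h>	/* SZ_1G */
#include <linux/types.h>

static u64 pick_vm_size_gb(u64 sys_mem_bytes, u64 min_gb, u64 max_gb)
{
	/* round up to a whole GB *before* shifting */
	u64 gb = (sys_mem_bytes + SZ_1G - 1) >> 30;

	gb = roundup_pow_of_two(gb);	/* VM sizes are powers of two */
	return clamp(gb, min_gb, max_gb);
}
```

With 24GB of system memory this yields 32GB of VM space (rounded up to the next power of two), unless the ASIC minimum is larger.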
-
Committed by Huang Rui
I continued the bulk-move work based on the proposal by Christian. Background: the amdgpu driver moves all PD/PT and per-VM BOs onto an idle list, then moves each of them to the end of the LRU list one by one. This moves so many BOs to the end of the LRU that it seriously impacts performance. Christian provided a workaround to not move PD/PT BOs on the LRU with the patch below: Commit 0bbf32026cf5ba41e9922b30e26e1bed1ecd38ae ("drm/amdgpu: band aid validating VM PTs") However, the final solution should bulk move all PD/PT and per-VM BOs on the LRU instead of one by one. Whenever amdgpu_vm_validate_pt_bos() is called and we have BOs which need to be validated, we move all BOs together to the end of the LRU without dropping the LRU lock. While doing so we note the beginning and end of this block in the LRU list. Now when amdgpu_vm_validate_pt_bos() is called and we don't have anything to do, we don't move every BO one by one; instead we cut the LRU list into pieces so that we bulk move everything to the end in just one operation. Test data:

+----------------+-------------------+-------------+-----------------------------------------+
|                | The Talos         | Clpeak(OCL) | BusSpeedReadback(OCL)                   |
|                | Principle(Vulkan) |             |                                         |
+----------------+-------------------+-------------+-----------------------------------------+
| Original       | 147.7 FPS         | 76.86 us    | 0.319 ms(1K) 0.314 ms(2K) 0.308 ms(4K)  |
|                |                   |             | 0.307 ms(8K) 0.310 ms(16K)              |
+----------------+-------------------+-------------+-----------------------------------------+
| Original + WA  | 162.1 FPS         | 42.15 us    | 0.254 ms(1K) 0.241 ms(2K) 0.230 ms(4K)  |
| (don't move    |                   |             | 0.223 ms(8K) 0.204 ms(16K)              |
| PT BOs on LRU) |                   |             |                                         |
+----------------+-------------------+-------------+-----------------------------------------+
| Bulk move      | 163.1 FPS         | 40.52 us    | 0.244 ms(1K) 0.252 ms(2K) 0.213 ms(4K)  |
|                |                   |             | 0.214 ms(8K) 0.225 ms(16K)              |
+----------------+-------------------+-------------+-----------------------------------------+

After testing with the above three benchmarks (Vulkan and OpenCL), we see a visible improvement over the original, and even better results than the original with the workaround. v2: move all BOs, including the idle, relocated, and moved lists, to the end of the LRU and put them together. v3: remove an unused parameter and use list_for_each_entry instead of the variant with a save entry. v4: move amdgpu_vm_move_to_lru_tail to after command submission; at that time, all BOs will be back on the idle list. v5: remove amdgpu_vm_move_to_lru_tail_by_list(), use bulk_moveable instead of validated, and move ttm_bo_bulk_move_lru_tail() also into amdgpu_vm_move_to_lru_tail(). v6: clean up and fix return value. Signed-off-by: Christian König <christian.koenig@amd.com> Signed-off-by: Huang Rui <ray.huang@amd.com> Tested-by: Mike Lothian <mike@fireburn.co.uk> Tested-by: Dieter Nützel <Dieter@nuetzel-hh.de> Acked-by: Chunming Zhou <david1.zhou@amd.com> Reviewed-by: Junwei Zhang <Jerry.Zhang@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
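The core trick is that a contiguous block on the LRU can be spliced to the tail in O(1) once its first and last nodes are known. A plain list_head illustration of that cut-and-splice; TTM's actual helper for this series is ttm_bo_bulk_move_lru_tail(), and the function below is only a sketch of the pointer surgery:

```c
#include <linux/list.h>

/* Move the contiguous block [first, last] to the tail of a circular
 * list with sentinel head 'lru', in constant time. */
static void lru_bulk_move_tail(struct list_head *lru,
			       struct list_head *first,
			       struct list_head *last)
{
	/* unlink the whole block in one cut */
	first->prev->next = last->next;
	last->next->prev = first->prev;

	/* re-attach it just before the sentinel, i.e. at the tail */
	first->prev = lru->prev;
	lru->prev->next = first;
	last->next = lru;
	lru->prev = last;
}
```

Because the per-VM BOs already sit adjacent on the LRU after the first validated move, later calls only need this one splice instead of touching every BO.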
-
Committed by Christian König
Instead of the fixed round-robin use, let the scheduler balance the load of page table updates. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Chunming Zhou <david1.zhou@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Committed by Huang Rui
Demangle amdgpu.h. Signed-off-by: Huang Rui <ray.huang@amd.com> Acked-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
- August 1, 2018 (1 commit)
Committed by Christian König
This allows us to trace all VM ranges which should be valid inside a CS. v2: dump mappings without a BO as well Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Chunming Zhou <david1.zhou@amd.com> Reviewed-and-tested-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com> (v1) Reviewed-by: Huang Rui <ray.huang@amd.com> (v1) Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
- July 11, 2018 (1 commit)
Committed by Andrey Grodzovsky
Add process and thread names and PIDs, and a function to extract this info from the relevant amdgpu_vm. v2: Add documentation and fix indentation. v3: Add getter and setter functions for amdgpu_task_info. Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com> Acked-by: Jim Qu <Jim.Qu@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
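Capturing the task identity amounts to copying the comm strings and PIDs from the current task. A hedged sketch of such a setter; the real struct amdgpu_task_info and its helpers live in the amdgpu VM code and may differ in detail:

```c
#include <linux/sched.h>

struct vm_task_info {
	char process_name[TASK_COMM_LEN];
	char task_name[TASK_COMM_LEN];
	pid_t pid;
	pid_t tgid;
};

static void vm_set_task_info(struct vm_task_info *info)
{
	if (info->pid)			/* record only once per VM */
		return;

	get_task_comm(info->task_name, current);
	info->pid = current->pid;
	info->tgid = current->tgid;

	/* the group leader carries the process name */
	if (current->group_leader)
		get_task_comm(info->process_name, current->group_leader);
}
```

A fault handler can then print both names, making "which process caused this VM fault" answerable from the log alone.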
-
- May 24, 2018 (2 commits)
Committed by Christian König
Move all BOs belonging to a VM on the LRU with every submission. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Chunming Zhou <david1.zhou@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Committed by Christian König
Only the moved state needs separate spinlock protection; all other states are protected by reserving the VM anyway. v2: fix some more incorrect cases Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Chunming Zhou <david1.zhou@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
- May 19, 2018 (1 commit)
Committed by Christian König
This lock isn't used any more. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Chunming Zhou <david1.zhou@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
- May 16, 2018 (1 commit)
Committed by Yong Zhao
This change prepares for a workaround in amdkfd for a GFX9 HW bug. It requires the control stack memory of compute queues, which is allocated from the second page of MQD GART BOs, to have mtype NC rather than the default UC. Signed-off-by: Yong Zhao <yong.zhao@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
- March 16, 2018 (2 commits)
Committed by Felix Kuehling
v2: Removed updating and checking of vm->vm_context v3: Enable amdgpu_vm_clear_bo in amdgpu_vm_make_compute Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Oded Gabbay <oded.gabbay@gmail.com>
-
Committed by Felix Kuehling
Remove struct amdkfd_vm and move the fields into struct amdgpu_vm. This will allow turning a VM created by a DRM render node into a KFD VM. v2: Removed vm_context field Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Oded Gabbay <oded.gabbay@gmail.com>
-
- February 20, 2018 (1 commit)
Committed by Christian König
1MB should be more than enough; currently we use about 8K. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Acked-by: Monk Liu <monk.liu@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
- December 28, 2017 (2 commits)
Committed by Christian König
Use the fence context from the scheduler entity. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Chunming Zhou <david1.zhou@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Committed by Christian König
Move both into the new files amdgpu_ids.[ch]. No functional change. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Chunming Zhou <david1.zhou@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
- December 19, 2017 (1 commit)
Committed by Christian König
Instead of falling back to 2 levels and a very limited address space, use 2+1 PD support and 128TB + 512GB of virtual address space. v2: cleanup defines, rebase on top of level enum v3: fix inverted check in hardware setup Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-and-Tested-by: Chunming Zhou <david1.zhou@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
- December 15, 2017 (1 commit)
Committed by Chunming Zhou
v2: remove SUBPTB member v3: remove last_level, use AMDGPU_VM_PTB directly instead. Signed-off-by: Chunming Zhou <david1.zhou@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
- December 13, 2017 (2 commits)
Committed by Christian König
No more double housekeeping. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Chunming Zhou <david1.zhou@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Committed by Christian König
Not needed any more. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Chunming Zhou <david1.zhou@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
- February 7, 2018 (1 commit)
Committed by Felix Kuehling
Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Oded Gabbay <oded.gabbay@gmail.com>
-
- December 8, 2017 (1 commit)
Committed by Lucas Stach
This moves and renames the AMDGPU scheduler to a common location in DRM in order to facilitate re-use by other drivers. This is mostly a straightforward rename with no code changes. One notable exception is the function to_drm_sched_fence(), which is no longer an inline header function, to avoid the need to export the drm_sched_fence_ops_scheduled and drm_sched_fence_ops_finished structures. Reviewed-by: Chunming Zhou <david1.zhou@amd.com> Tested-by: Dieter Nützel <Dieter@nuetzel-hh.de> Acked-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Lucas Stach <l.stach@pengutronix.de> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
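For reference, the un-inlined helper roughly checks the fence ops to recover the containing scheduler fence. This is paraphrased from memory, written as it would appear inside the scheduler module where the two ops structures are visible; see drivers/gpu/drm/scheduler for the authoritative version:

```c
#include <drm/gpu_scheduler.h>

struct drm_sched_fence *to_drm_sched_fence(struct dma_fence *f)
{
	/* a scheduler fence is identified by one of its two ops tables */
	if (f->ops == &drm_sched_fence_ops_scheduled)
		return container_of(f, struct drm_sched_fence, scheduled);

	if (f->ops == &drm_sched_fence_ops_finished)
		return container_of(f, struct drm_sched_fence, finished);

	return NULL;	/* not a scheduler fence */
}
```

Keeping this out of the header means the two ops structures can stay private to the scheduler module instead of becoming exported symbols.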
-