- 14 July 2017, 7 commits
-
-
By Evan Quan
v2: added missing spinlock init Signed-off-by: Evan Quan <evan.quan@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
By Christian König
This allows us to write the mapped PTEs into an IB instead of the table directly. v2: fix build with debugfs enabled, remove unused assignment Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
By Ken Wang
Certain MC registers need a delay after being written in order to update properly during the init sequence. Signed-off-by: Ken Wang <Ken.Wang@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
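A minimal sketch of the pattern described here, not the actual patch: the helper name, register access style and the 100us figure are all illustrative assumptions; the real change simply adds delays at specific points of the MC init sequence.

```c
#include <linux/delay.h>
#include <linux/io.h>
#include <linux/types.h>

/* Illustrative only: write an MC register, then give the hardware time to
 * latch the value before the next access in the init sequence. */
static void mc_write_and_settle(void __iomem *mmio, u32 offset, u32 value)
{
	writel(value, mmio + offset);
	udelay(100);	/* settle time before the next register write */
}
```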
-
By Christian König
Keep them where they belong. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Acked-by: Felix Kuehling <Felix.Kuehling@amd.com>
-
By Christian König
Stop spreading the code over all GMC generations. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
By Alex Deucher
These are no longer needed now that we use the fb_location programmed by the vbios. Acked-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
By Alex Deucher
Not used. Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
- 30 June 2017, 2 commits
-
-
By Alex Xie
The function is called only once inside the .c file. v2: update the commit message (Michel) Signed-off-by: Alex Xie <AlexBin.Xie@amd.com> Reviewed-by: Michel Dänzer <michel.daenzer@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
By Flora Cui
Newer asics with 4 SEs cannot fit the entire bitmask in the original field, so use an array instead. v2: keep cu_ao_mask for backward compatibility. Signed-off-by: Flora Cui <Flora.Cui@amd.com> Acked-by: Christian König <christian.koenig@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
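A hedged sketch of the shape of the change: a single 32-bit mask cannot describe CUs across four shader engines, so the per-SE/per-SH masks move into an array while the old field is kept for existing userspace. The struct, field and bound names below are illustrative, not the exact UAPI fields.

```c
#include <stdint.h>

#define EX_MAX_SE		4	/* shader engines (illustrative bound) */
#define EX_MAX_SH_PER_SE	2	/* shader arrays per SE (illustrative) */

struct example_cu_info {
	uint32_t cu_active_number;
	uint32_t cu_ao_mask;				/* kept for old userspace */
	uint32_t cu_bitmap[EX_MAX_SE][EX_MAX_SH_PER_SE];/* new per-SE/SH masks */
};
```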
-
- 20 June 2017, 1 commit
-
-
By Alex Xie
In the original amdgpu_bo_list_get, the wait for result->lock could be quite long while the bo_list_lock mutex was held, which could make other tasks wait a long time for bo_list_lock. Secondly, this patch allows several tasks (readers of the idr) to proceed at the same time. v2: use rcu and kref (Dave Airlie and Christian König) v3: update v1 commit message (Michel Dänzer) v4: rebase on upstream (Alex Deucher) Signed-off-by: Alex Xie <AlexBin.Xie@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com>
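A minimal sketch of the lookup pattern this describes (an RCU-protected idr walk plus a kref taken under rcu_read_lock), with illustrative type and function names; the real amdgpu_bo_list has more fields and its own release path.

```c
#include <linux/idr.h>
#include <linux/kref.h>
#include <linux/rcupdate.h>

struct example_bo_list {
	struct kref refcount;
	/* ... bo array, flags, etc. ... */
};

static struct example_bo_list *example_bo_list_get(struct idr *idr, int id)
{
	struct example_bo_list *list;

	rcu_read_lock();
	list = idr_find(idr, id);
	/* only keep the object if it is still alive */
	if (list && !kref_get_unless_zero(&list->refcount))
		list = NULL;
	rcu_read_unlock();

	return list;
}
```

Multiple readers can run this concurrently; the write side only needs to exclude other writers and free the object after an RCU grace period.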
-
- 17 June 2017, 1 commit
-
-
By Dave Airlie
This creates a new command submission chunk for amdgpu to add in and out sync objects around the submission. Sync objects are managed via the drm syncobj ioctls. The command submission interface is enhanced with two new chunks, one for syncobj pre-submission dependencies and one for post-submission syncobj signalling, and each just takes a list of handles. This is based on work originally done by David Zhou at AMD, with input from Christian König on what things should look like. In theory VkFences could be backed with sync objects and just get passed into the CS as syncobj handles as well. NOTE: this interface addition needs a version bump to expose it to userspace. TODO: update to dep_sync when rebasing onto amdgpu master. (with this - r-b from Christian) v1.1: keep file reference on import. v2: move to using syncobjs v2.1: change some APIs to just use p pointer. v3: make more robust against CS failures; we now add the wait sems but only remove them once the CS job has been submitted. v4: rewrite names of API and base on new syncobj code. v5: move post deps earlier, rename some apis v6: lookup post deps earlier, and just replace fences in post deps stage (Christian) Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Dave Airlie <airlied@redhat.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
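For orientation only: each of the two chunks is roughly a flat array of syncobj handles. The struct and chunk-id names below are approximations for illustration, not a verbatim copy of the amdgpu UAPI header.

```c
#include <stdint.h>

/* One entry per handle in a syncobj chunk (illustrative layout). */
struct example_cs_chunk_syncobj {
	uint32_t handle;	/* drm syncobj handle */
};

/* Illustrative chunk ids:
 *   EXAMPLE_CHUNK_ID_SYNCOBJ_IN  - fences to wait on before the job runs
 *   EXAMPLE_CHUNK_ID_SYNCOBJ_OUT - syncobjs to signal with the job's fence
 */
```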
-
- 15 June 2017, 2 commits
-
-
By Huang Rui
The gpu_info firmware was released once its data had been used. But when the system enters suspend, the upper-layer firmware class caches all firmware names, and at that point gpu_info fails to load. This looks like an upper-layer issue, so we should not release the gpu_info firmware until device teardown.
[ 903.236589] cache_firmware: amdgpu/vega10_sdma1.bin
[ 903.236590] fw_set_page_data: fw-amdgpu/vega10_sdma1.bin buf=ffff88041eee10c0 data=ffffc90002561000 size=17408
[ 903.236591] cache_firmware: amdgpu/vega10_sdma1.bin ret=0
[ 903.464160] __allocate_fw_buf: fw-amdgpu/vega10_gpu_info.bin buf=ffff88041eee2c00
[ 903.471815] (NULL device *): loading /lib/firmware/updates/4.11.0-custom/amdgpu/vega10_gpu_info.bin failed with error -2
[ 903.482870] (NULL device *): loading /lib/firmware/updates/amdgpu/vega10_gpu_info.bin failed with error -2
[ 903.492716] (NULL device *): loading /lib/firmware/4.11.0-custom/amdgpu/vega10_gpu_info.bin failed with error -2
[ 903.503156] (NULL device *): direct-loading amdgpu/vega10_gpu_info.bin
Signed-off-by: Huang Rui <ray.huang@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
By Hawking Zhang
Signed-off-by: Hawking Zhang <Hawking.Zhang@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
- 9 June 2017, 2 commits
-
-
By Alex Xie
There are two identical function prototypes in the same header file. Signed-off-by: Alex Xie <AlexBin.Xie@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
By Harish Kasiviswanathan
Add a VM update mode module parameter (amdgpu.vm_update_mode) that can be used to control how VM PDEs/PTEs are updated for graphics and compute. BIT0 controls graphics and BIT1 compute: BIT0 [= 0] graphics updated by SDMA, [= 1] by CPU; BIT1 [= 0] compute updated by SDMA, [= 1] by CPU. By default vm_update_mode = 2 only on large-BAR systems, meaning graphics VMs are updated via SDMA and compute VMs via CPU; on all other systems the default is vm_update_mode = 0. Signed-off-by: Harish Kasiviswanathan <Harish.Kasiviswanathan@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
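A small sketch of how the two bits can be interpreted; the helper name is made up, only the bit meanings follow the description above.

```c
#include <stdbool.h>

static bool vm_updated_by_cpu(unsigned int vm_update_mode, bool is_compute_vm)
{
	if (is_compute_vm)
		return vm_update_mode & (1u << 1);	/* BIT1: compute via CPU */
	return vm_update_mode & (1u << 0);		/* BIT0: graphics via CPU */
}
```

With the large-BAR default of 2, this returns false for graphics (SDMA) and true for compute (CPU), matching the commit description.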
-
- 8 June 2017, 3 commits
-
-
By Felix Kuehling
If AMDGPU supports SI, add a module parameter to control SI support. It is off by default in amdgpu as long as SI support is experimental, while it is on by default in radeon. Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Acked-by: Michel Dänzer <michel.daenzer@amd.com> [ Michel Dänzer: Squash in amdgpu_si_support initialization fix ] Signed-off-by: Michel Dänzer <michel.daenzer@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
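A minimal sketch of how such an option is typically wired up; the permission bits and description string are assumptions, only the "off by default in amdgpu while SI support is experimental" behaviour follows the text above. The CIK variant in the next commit works the same way with the opposite default.

```c
#include <linux/module.h>
#include <linux/moduleparam.h>

static int amdgpu_si_support;	/* 0 = disabled by default while experimental */
module_param_named(si_support, amdgpu_si_support, int, 0444);
MODULE_PARM_DESC(si_support, "SI support (1 = enabled, 0 = disabled (default))");
```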
-
By Felix Kuehling
If AMDGPU supports CIK, add a module parameter to control CIK support. It is on by default in amdgpu, while it is off by default in radeon. Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Acked-by: Michel Dänzer <michel.daenzer@amd.com>
-
By Alex Deucher
They are gfx related, not general helpers. Reviewed-by: Alex Xie <AlexBin.Xie@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
- 7 June 2017, 1 commit
-
-
By Huang Rui
Something writes over the first 8 MB, so reserve this region on vega10 until we root-cause it. Signed-off-by: Huang Rui <ray.huang@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
- 1 June 2017, 9 commits
-
-
By Andres Rodriguez
Use an LRU policy to map usermode rings to HW compute queues. Most compute clients use one queue, and usually the first queue available. This results in poor pipe/queue work distribution when multiple compute apps are running; in most cases pipe 0 queue 0 is the only queue that gets used. In order to better distribute work across multiple HW queues, we adopt a policy that maps the usermode ring ids to the least recently used HW queue. This fixes the large majority of multi-app compute workloads sharing the same HW queue, even though 7 other queues are available. v2: use ring->funcs->type instead of ring->hw_ip v3: remove amdgpu_queue_mapper_funcs v4: change ring_lru_list_lock to spinlock, grab only once in lru_get() Signed-off-by: Andres Rodriguez <andresx7@gmail.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
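An illustration of the LRU pick with made-up type and field names (the real code keeps the LRU list on the device and walks rings of the requested type): take the ring at the head of the LRU list and move it to the tail so it becomes the most recently used entry.

```c
#include <linux/list.h>
#include <linux/spinlock.h>

struct example_ring {
	struct list_head lru_node;
	/* ... per-ring state ... */
};

/* Assumes the LRU list is never empty once the rings are registered. */
static struct example_ring *example_ring_lru_get(struct list_head *lru,
						 spinlock_t *lock)
{
	struct example_ring *ring;

	spin_lock(lock);
	ring = list_first_entry(lru, struct example_ring, lru_node);
	list_move_tail(&ring->lru_node, lru);	/* now most recently used */
	spin_unlock(lock);

	return ring;
}
```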
-
By Andres Rodriguez
Add amdgpu_queue_mgr, a mechanism that decouples usermode ring ids from the kernel's ring ids. The queue manager maintains a per-file-descriptor map of user ring ids to amdgpu_ring pointers. Once a map is created it is permanent (this is required to maintain FIFO execution guarantees for a context's ring). Different queue-map policies can be configured for each HW IP. Currently all HW IPs use the identity mapper, i.e. the kernel ring id is equal to the user ring id. The purpose of this mechanism is to distribute the load across multiple queues more effectively for HW IPs that support multiple rings. Userspace clients are unable to check whether a specific resource is in use by a different client, therefore it is up to the kernel driver to make the optimal choice. v2: remove amdgpu_queue_mapper_funcs v3: made amdgpu_queue_mgr per context instead of per-fd v4: add context_put on error paths v5: rebase and include new IPs UVD_ENC & VCN_* v6: drop unused amdgpu_ring_is_valid_index (Alex) Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Andres Rodriguez <andresx7@gmail.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
By Andres Rodriguez
Instead of picking an arbitrary queue for the KIQ, search for one according to policy. The queue must be unused. Also report the KIQ as an unavailable resource to KFD. In testing I ran into KCQ initialization issues when using pipes 2/3 of MEC2 for the KIQ, therefore the policy disallows grabbing one of these. v2: fix (ring.me + 1) to (ring.me - 1) in amdgpu_amdkfd_device_init Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Andres Rodriguez <andresx7@gmail.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
By Andres Rodriguez
Pipes provide better concurrency than queues, therefore we want to make sure that apps use queues from different pipes whenever possible. Optimize for the trivial case where an app consumes rings in order; accordingly, adjacent rings should not belong to the same pipe. Reviewed-by: Edward O'Callaghan <funfunctor@folklore1984.net> Acked-by: Felix Kuehling <Felix.Kuehling@amd.com> Acked-by: Christian König <christian.koenig@amd.com> Signed-off-by: Andres Rodriguez <andresx7@gmail.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
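An illustrative mapping that keeps adjacent ring indices on different pipes: the pipe index varies fastest, so ring 0 goes to pipe 0, ring 1 to pipe 1, and so on, wrapping around before the queue index advances. Function and parameter names are assumptions.

```c
static void example_ring_to_pipe_queue(unsigned int ring,
				       unsigned int num_pipes,
				       unsigned int *pipe,
				       unsigned int *queue)
{
	*pipe = ring % num_pipes;	/* spread consecutive rings across pipes */
	*queue = ring / num_pipes;	/* advance the queue only after a full wrap */
}
```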
-
By Andres Rodriguez
Previously the queue/pipe split with kfd operated at pipe granularity. This patch allows amdgpu to take ownership of an arbitrary set of queues. It also consolidates the last few magic numbers in the compute initialization process into mec_init. v2: support for gfx9 v3: renamed AMDGPU_MAX_QUEUES to AMDGPU_MAX_COMPUTE_QUEUES v4: fix off-by-one in num_mec checks in *_compute_queue_acquire Reviewed-by: Edward O'Callaghan <funfunctor@folklore1984.net> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Acked-by: Christian König <christian.koenig@amd.com> Signed-off-by: Andres Rodriguez <andresx7@gmail.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
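A hedged sketch of queue-granularity ownership: a bitmap with one bit per (mec, pipe, queue) slot, where amdgpu marks the queues it keeps for itself and KFD gets whatever stays clear. The bounds and the first-fit policy are placeholders, not the driver's actual policy.

```c
#include <linux/bitmap.h>
#include <linux/bitops.h>

#define EX_MEC_NUM		2
#define EX_PIPES_PER_MEC	4
#define EX_QUEUES_PER_PIPE	8
#define EX_MAX_QUEUES	(EX_MEC_NUM * EX_PIPES_PER_MEC * EX_QUEUES_PER_PIPE)

static DECLARE_BITMAP(ex_queue_bitmap, EX_MAX_QUEUES);

static void example_compute_queue_acquire(unsigned int nr_wanted)
{
	unsigned int i;

	/* simple first-fit policy, purely for illustration */
	for (i = 0; i < nr_wanted && i < EX_MAX_QUEUES; i++)
		set_bit(i, ex_queue_bitmap);
}
```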
-
By Andres Rodriguez
Make amdgpu the owner of all per-pipe state of the HQDs. This change will allow us to split the queues between kfd and amdgpu at queue granularity instead of pipe granularity. This patch fixes kfd allocating an HDP_EOP region for its 3 pipes which went unused. v2: support for gfx9 v3: fix gfx7 HPD initialization Reviewed-by: Edward O'Callaghan <funfunctor@folklore1984.net> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Acked-by: Christian König <christian.koenig@amd.com> Signed-off-by: Andres Rodriguez <andresx7@gmail.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
By Christian König
Rename adjust_mc_addr to get_vm_pde and check the address bits in one place. v2: handle vcn as well, keep setting the valid bit manually, add a BUG_ON() for GMC v6, v7 and v8 as well. v3: handle vcn_v1_0_enc_ring_emit_vm_flush as well. v4: fix the BUG_ON mask for GFX6-8 Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
By Hawking Zhang
Signed-off-by: Hawking Zhang <Hawking.Zhang@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
By Shirish S
amdgpu_device_resume() and amdgpu_device_init() contain a time-consuming, blocking call to amdgpu_late_init(), which sets the clock-gating state of all IP blocks. This patch defers only this clock-gating setup until after the amdgpu driver has resumed, ideally before the UI comes up, or in some cases after it. With this change the resume time of amdgpu_device drops from 1.299 s to 0.199 s, which further helps reduce the overall system resume time. V1: made the optimization applicable during driver load as well. TEST (for ChromiumOS on STONEY only): UI comes up; amdgpu_late_init() gets called consistently and no errors are reported. Signed-off-by: Shirish S <shirish.s@amd.com> Reviewed-by: Huang Rui <ray.huang@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
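A minimal sketch of the deferral idea with made-up names: instead of calling the clock-gating setup synchronously from resume/init, queue it as delayed work so the rest of resume completes first. The 2-second delay is purely illustrative.

```c
#include <linux/jiffies.h>
#include <linux/workqueue.h>

static struct delayed_work late_init_work;

static void late_init_work_func(struct work_struct *work)
{
	/* set the clock/power-gating state of all IP blocks here */
}

static void example_defer_late_init(void)
{
	INIT_DELAYED_WORK(&late_init_work, late_init_work_func);
	schedule_delayed_work(&late_init_work, msecs_to_jiffies(2000));
}
```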
-
- 25 May 2017, 12 commits
-
-
By Marek Olšák
v2: bump the DRM version Signed-off-by: Marek Olšák <marek.olsak@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
By Chunming Zhou
The fence in dep_sync cannot be optimized. Signed-off-by: Chunming Zhou <David1.Zhou@amd.com> Tested and Reviewed-by: Roger.He <Hongbo.He@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
By Chunming Zhou
The following ioctls will return -ENODEV: amdgpu_cs_ioctl, amdgpu_cs_wait_ioctl, amdgpu_cs_wait_fences_ioctl, amdgpu_gem_va_ioctl, amdgpu_info_ioctl. v2: only for map and replace cases in amdgpu_gem_va_ioctl Signed-off-by: Chunming Zhou <David1.Zhou@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
By Chunming Zhou
Back up the first 64 bytes of the GART table as a reset magic, and check whether the magic is still the same after a GPU hardware reset. v2: use memcmp instead of a manual comparison. Signed-off-by: Chunming Zhou <David1.Zhou@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
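A sketch of the check with illustrative names: snapshot the first 64 bytes of the GART table once it is set up, then after a hardware reset compare against the live table to decide whether the GART contents survived.

```c
#include <linux/string.h>
#include <linux/types.h>

#define EX_RESET_MAGIC_NUM 64

static u8 ex_reset_magic[EX_RESET_MAGIC_NUM];

static void example_fill_reset_magic(const void *gart_table_cpu_addr)
{
	memcpy(ex_reset_magic, gart_table_cpu_addr, EX_RESET_MAGIC_NUM);
}

static bool example_gart_survived_reset(const void *gart_table_cpu_addr)
{
	return memcmp(ex_reset_magic, gart_table_cpu_addr,
		      EX_RESET_MAGIC_NUM) == 0;
}
```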
-
By Leo Liu
Signed-off-by: Leo Liu <leo.liu@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
By Leo Liu
VCN is the new media block on Raven. Add core support and the ring and IB tests for decode. Signed-off-by: Leo Liu <leo.liu@amd.com> Acked-by: Alex Deucher <alexander.deucher@amd.com> Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
By Monk Liu
1. TDR will kick out a guilty job if its hang count exceeds the threshold given by the kernel parameter "job_hang_limit", so a bad command stream cannot cause GPU hangs indefinitely. By default this threshold is 1, so a job is kicked out after it hangs once. 2. If a job times out, the TDR routine will not reset all scheds/rings; instead it only resets the given one indicated by @job of amdgpu_sriov_gpu_reset. That way we don't need to reset and recover every sched/ring if we already know which job caused the GPU hang. 3. Unblock sriov_gpu_reset for the AI family. V2: 1: kick out the guilty job after the scheduler is parked. 2: since parking the scheduler prior to kick-out already takes a while, we can do a last check on the job in question before doing the hw_reset. TODO: 1: when a job is considered guilty, we should mark some flag in its fence status, and let the UMD side be aware that this fence signalling is not due to job completion but to a job hang. 2: if a GPU reset causes all video memory to be lost, we need to introduce a new policy to implement TDR, like dropping all jobs not yet signalled, and having all IOCTLs on this device return ERROR DEVICE_LOST. This will be implemented later. Signed-off-by: Monk Liu <Monk.Liu@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
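A heavily hedged sketch of the kick-out decision only; the function and parameter names are assumptions, and the real TDR path also parks the scheduler and re-checks the job before the hardware reset, as described above.

```c
#include <stdbool.h>

/* A job that has already timed out job_hang_limit times is treated as
 * guilty and dropped instead of being resubmitted. With the default
 * limit of 1, the first hang is enough. */
static bool example_job_is_guilty(unsigned int timeouts_seen,
				  unsigned int job_hang_limit)
{
	return timeouts_seen >= job_hang_limit;
}
```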
-
By Chunming Zhou
v2: directly return for the 'if' case. Signed-off-by: Chunming Zhou <David1.Zhou@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
By Chunming Zhou
This is an improvement over the previous patch. sched_sync stores fences that could be skipped once scheduled; when the job is executed, we don't need a pipeline_sync if all fences in sched_sync are signalled, otherwise we still insert a pipeline_sync. v2: handle the error when adding a fence to the sync fails. Signed-off-by: Chunming Zhou <David1.Zhou@amd.com> Reviewed-by: Junwei Zhang <Jerry.Zhang@amd.com> (v1) Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
By Roger.He
This covers the following case: 1. A task does a GART bind/unbind, but the BO has not been added to adev->gtt_list yet. 2. A GPU reset happens at this point, and GART recovery only restores the entries already in adev->gtt_list. Reviewed-by: Chunming Zhou <david1.zhou@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Roger.He <Hongbo.He@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
By Nikola Pajkovsky
The else branch is pointless if it is right at the end of the function; also use unlikely() on the error path. Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Nikola Pajkovsky <npajkovsky@suse.cz> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
By Monk Liu
Signed-off-by: Monk Liu <Monk.Liu@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-