- 30 March 2017, 2 commits
-
-
Committed by Chunming Zhou
A bigger number means a higher priority.
Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Committed by Andres Rodriguez
A unique id is useful for debugging and tracing. It is intended to replace pointers in ftrace output.
Reviewed-by: Chunming Zhou <david1.zhou@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Andres Rodriguez <andresx7@gmail.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
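As a rough illustration of the idea (hypothetical names, not the driver's actual code): a monotonically increasing counter is sampled when a job is created and stored in the job, so trace events can print a stable integer instead of a kernel pointer.

#include <linux/atomic.h>
#include <linux/types.h>

/* global, monotonically increasing id source (illustrative only) */
static atomic64_t example_job_id_count = ATOMIC64_INIT(0);

struct example_sched_job {
        u64 id;         /* printed by trace events instead of the job pointer */
        /* ... other job state ... */
};

static void example_job_init(struct example_sched_job *job)
{
        job->id = atomic64_inc_return(&example_job_id_count);
}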
-
- 02 March 2017, 1 commit
-
-
Committed by Ingo Molnar
We are going to move scheduler ABI details to <uapi/linux/sched/types.h>, which will be used from a number of .c files. Create an empty placeholder header that maps to <linux/types.h>. Include the new header in the files that are going to need it.
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
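A minimal sketch of what such a placeholder header can look like while it only forwards to <linux/types.h>; the include-guard name below is an assumption, not necessarily what upstream used.

/* include/uapi/linux/sched/types.h -- placeholder sketch */
#ifndef _UAPI_LINUX_SCHED_TYPES_H
#define _UAPI_LINUX_SCHED_TYPES_H

#include <linux/types.h>

#endif /* _UAPI_LINUX_SCHED_TYPES_H */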
-
- 01 November 2016, 1 commit
-
-
Committed by Christian König
Some fences might be alive even after we have stopped the scheduler, leading to warnings about leaked objects from the SLUB allocator. Fix this by allocating/freeing the slab cache from the module init/fini functions, just like we do for hw fences.
v2: make variable static, add link to bug
Fixes: https://bugs.freedesktop.org/show_bug.cgi?id=97500
Reported-by: Grazvydas Ignotas <notasas@gmail.com>
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com> (v1)
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
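A minimal sketch of the pattern, with hypothetical names: the slab cache lives for the whole module lifetime instead of the scheduler's, so fences that outlive a scheduler instance can still be freed back into a valid cache.

#include <linux/module.h>
#include <linux/slab.h>

struct example_sched_fence {
        /* ... fence state ... */
        int dummy;
};

static struct kmem_cache *example_fence_slab;

static int __init example_sched_init(void)
{
        example_fence_slab = kmem_cache_create("example_sched_fence",
                        sizeof(struct example_sched_fence), 0,
                        SLAB_HWCACHE_ALIGN, NULL);
        if (!example_fence_slab)
                return -ENOMEM;
        return 0;
}

static void __exit example_sched_exit(void)
{
        /* safe even if individual schedulers were torn down earlier */
        kmem_cache_destroy(example_fence_slab);
}

module_init(example_sched_init);
module_exit(example_sched_exit);
MODULE_LICENSE("GPL");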
-
- 25 October 2016, 2 commits
-
-
Committed by Chris Wilson
I plan to usurp the short name of struct fence for a core kernel struct, and so I need to rename the specialised fence/timeline for DMA operations to make room.

A consensus was reached in https://lists.freedesktop.org/archives/dri-devel/2016-July/113083.html that making clear this fence applies to DMA operations was a good thing. Since then the patch has grown a bit as usage increases, so hopefully it remains a good thing!

(v2...: rebase, rerun spatch)
v3: Compile on msm, spotted a manual fixup that I broke.
v4: Try again for msm, sorry Daniel

coccinelle script:
@@
@@
- struct fence
+ struct dma_fence
@@
@@
- struct fence_ops
+ struct dma_fence_ops
@@
@@
- struct fence_cb
+ struct dma_fence_cb
@@
@@
- struct fence_array
+ struct dma_fence_array
@@
@@
- enum fence_flag_bits
+ enum dma_fence_flag_bits
@@
@@
(
- fence_init
+ dma_fence_init
|
- fence_release
+ dma_fence_release
|
- fence_free
+ dma_fence_free
|
- fence_get
+ dma_fence_get
|
- fence_get_rcu
+ dma_fence_get_rcu
|
- fence_put
+ dma_fence_put
|
- fence_signal
+ dma_fence_signal
|
- fence_signal_locked
+ dma_fence_signal_locked
|
- fence_default_wait
+ dma_fence_default_wait
|
- fence_add_callback
+ dma_fence_add_callback
|
- fence_remove_callback
+ dma_fence_remove_callback
|
- fence_enable_sw_signaling
+ dma_fence_enable_sw_signaling
|
- fence_is_signaled_locked
+ dma_fence_is_signaled_locked
|
- fence_is_signaled
+ dma_fence_is_signaled
|
- fence_is_later
+ dma_fence_is_later
|
- fence_later
+ dma_fence_later
|
- fence_wait_timeout
+ dma_fence_wait_timeout
|
- fence_wait_any_timeout
+ dma_fence_wait_any_timeout
|
- fence_wait
+ dma_fence_wait
|
- fence_context_alloc
+ dma_fence_context_alloc
|
- fence_array_create
+ dma_fence_array_create
|
- to_fence_array
+ to_dma_fence_array
|
- fence_is_array
+ dma_fence_is_array
|
- trace_fence_emit
+ trace_dma_fence_emit
|
- FENCE_TRACE
+ DMA_FENCE_TRACE
|
- FENCE_WARN
+ DMA_FENCE_WARN
|
- FENCE_ERR
+ DMA_FENCE_ERR
)
(
...
)

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Gustavo Padovan <gustavo.padovan@collabora.co.uk>
Acked-by: Sumit Semwal <sumit.semwal@linaro.org>
Acked-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/20161025120045.28839-1-chris@chris-wilson.co.uk
-
Committed by Grazvydas Ignotas
To free fences, call_rcu() is used, which calls amd_sched_fence_free() after a grace period. During teardown, there is no guarantee all callbacks have finished, so sched_fence_slab may be destroyed before all fences have been freed. If we are lucky, this results in some slab warnings; if not, we get a crash in one of the RCU threads because the callback is called after amdgpu has already been unloaded. Fix it with an rcu_barrier().
Fixes: 189e0fb7 ("drm/amdgpu: RCU protected amd_sched_fence_release")
Acked-by: Chunming Zhou <david1.zhou@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Grazvydas Ignotas <notasas@gmail.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
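A minimal sketch of the fix pattern, with hypothetical names: rcu_barrier() waits for every pending call_rcu() callback to run, so the slab cache those callbacks free into is still valid when they fire.

#include <linux/kernel.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct example_fence {
        struct rcu_head rcu;
        /* ... fence state ... */
};

static struct kmem_cache *example_fence_slab;

static void example_fence_free(struct rcu_head *rcu)
{
        struct example_fence *f = container_of(rcu, struct example_fence, rcu);

        kmem_cache_free(example_fence_slab, f);
}

static void example_fence_release(struct example_fence *f)
{
        /* deferred free after an RCU grace period */
        call_rcu(&f->rcu, example_fence_free);
}

static void example_sched_fence_fini(void)
{
        rcu_barrier();  /* wait for all pending example_fence_free() callbacks */
        kmem_cache_destroy(example_fence_slab);
}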
-
- 20 August 2016, 1 commit
-
-
Committed by Christian König
It could be that we don't actually have a timeout set.
Signed-off-by: Christian König <christian.koenig@amd.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Chunming Zhou <david1.zhou@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
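A hypothetical sketch of the kind of guard this implies (names are assumptions): only arm the delayed timeout work when a finite timeout was actually configured.

#include <linux/sched.h>
#include <linux/workqueue.h>

struct example_sched {
        long timeout;                   /* jiffies, or MAX_SCHEDULE_TIMEOUT for "no timeout" */
        struct delayed_work work_tdr;   /* timeout detection/recovery work */
};

static void example_job_begin(struct example_sched *sched)
{
        if (sched->timeout != MAX_SCHEDULE_TIMEOUT)
                schedule_delayed_work(&sched->work_tdr, sched->timeout);
}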
-
- 30 July 2016, 2 commits
-
-
Committed by Chunming Zhou
run_job involves a mutex, which could sleep.
v2: use list_for_each_entry_safe, since the job might complete while we dropped the lock.
Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
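A rough sketch of the pattern described (hypothetical names, not the driver's actual recovery code): walk the job list with the _safe iterator and release the spinlock around run_job(), since run_job() may sleep and the job can drop off the list while the lock is released.

#include <linux/list.h>
#include <linux/spinlock.h>

struct example_job {
        struct list_head node;
};

struct example_sched {
        spinlock_t job_list_lock;
        struct list_head job_list;
        void (*run_job)(struct example_job *job);       /* may sleep */
};

static void example_recover_jobs(struct example_sched *sched)
{
        struct example_job *job, *tmp;

        spin_lock(&sched->job_list_lock);
        list_for_each_entry_safe(job, tmp, &sched->job_list, node) {
                spin_unlock(&sched->job_list_lock);
                sched->run_job(job);    /* sleeping call, no spinlock held */
                spin_lock(&sched->job_list_lock);
        }
        spin_unlock(&sched->job_list_lock);
}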
-
Committed by Chunming Zhou
This means the hw ring is empty after a GPU reset.
Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
- 08 July 2016, 15 commits
-
-
Committed by Chunming Zhou
This is used to recover hw jobs when the GPU is reset.
Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Committed by Chunming Zhou
amd_sched_hw_job_reset removes the callback from the hw fence.
Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Committed by Chunming Zhou
The parent of the sched fence is the hw fence, which signals the sched fence.
Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Committed by Christian König
We return the fence as part of the job structure anyway, no need to do this twice.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Chunming Zhou <david1.zhou@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Committed by Christian König
A regular spin_lock/unlock should do here as well.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Committed by Chunming Zhou
Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Committed by Christian König
Drop the lock before calling cancel_delayed_work_sync().
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=96445
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Christian König <christian.koenig@amd.com>
Tested-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
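A minimal, hypothetical sketch of why this ordering matters: cancel_delayed_work_sync() may sleep while waiting for a running handler to finish, so it must not be called with a spinlock held, especially not one the handler itself takes.

#include <linux/spinlock.h>
#include <linux/workqueue.h>

static void example_job_finish(spinlock_t *job_list_lock,
                               struct delayed_work *work_tdr)
{
        spin_lock(job_list_lock);
        /* ... remove the job from the list under the lock ... */
        spin_unlock(job_list_lock);

        /* sleeps until any running timeout handler returns;
         * never call this with job_list_lock held */
        cancel_delayed_work_sync(work_tdr);
}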
-
Committed by Christian König
Make it two events: one for the job being scheduled and one for when it is finished.
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Christian König <christian.koenig@amd.com>
Acked-by: Chunming Zhou <david1.zhou@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Committed by Christian König
Remove the job reference counting and just properly destroy it from a work item which blocks on any potentially running timeout handler.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Monk.Liu <monk.liu@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Committed by Christian König
Otherwise the locking becomes rather confusing.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Monk.Liu <monk.liu@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Committed by Christian König
The driver shouldn't mess with the scheduler internals.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Monk.Liu <monk.liu@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Committed by Christian König
Remembering the code path in a variable in order to clean up differently is usually not a good idea at all.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Monk.Liu <monk.liu@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Committed by Christian König
No need for double housekeeping here.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Monk.Liu <monk.liu@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Committed by Christian König
It is completely pointless and confusing to use a callback to call into the same code file.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Monk.Liu <monk.liu@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Committed by Christian König
v2: fix even more
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Monk.Liu <monk.liu@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
- 05 May 2016, 1 commit
-
-
Committed by Nils Wallménius
This marks struct amdgpu_sched_ops const and adjusts amd_sched_init to take a const pointer for the ops parameter. The ops member of struct amd_gpu_scheduler is also changed to const.
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Nils Wallménius <nils.wallmenius@gmail.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
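A small, hypothetical sketch of the constification pattern: the ops table is defined const, and both the init parameter and the struct member that store it are pointer-to-const.

struct example_sched_ops {
        void (*run_job)(void *job);
};

struct example_scheduler {
        const struct example_sched_ops *ops;
};

static void example_run_job(void *job)
{
        /* submit the job to the hardware; details elided */
}

static const struct example_sched_ops example_ops = {
        .run_job = example_run_job,
};

static int example_sched_init(struct example_scheduler *sched,
                              const struct example_sched_ops *ops)
{
        sched->ops = ops;       /* callers pass e.g. &example_ops */
        return 0;
}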
-
- 03 May 2016, 6 commits
-
-
Committed by Monk Liu
This fixes a fatal page fault that occurred when a job is signaled/released after its timeout work has already been put on the global queue (in that case cancel_delayed_work() returns false), which leads to an NX-protection page fault during job_timeout_func.
Signed-off-by: Monk Liu <Monk.Liu@amd.com>
Reviewed-by: Chunming Zhou <david1.zhou@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
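A hypothetical sketch of the race being described, not the actual fix in this commit: a false return from cancel_delayed_work() means the timeout work may be queued or already running, so the job must not be freed until the handler can no longer reference it.

#include <linux/workqueue.h>

static void example_job_release(struct delayed_work *work_tdr)
{
        if (!cancel_delayed_work(work_tdr)) {
                /* the handler may be queued or running; wait for it to
                 * finish before freeing anything it could dereference */
                cancel_delayed_work_sync(work_tdr);
        }
        /* now it is safe to free the job */
}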
-
Committed by Monk Liu
Add two callbacks to the scheduler to maintain jobs; they are invoked for the job timeout calculations. TDR now measures the time gap from when the job is processed by the hw.
v2: fix typo
Signed-off-by: Monk Liu <Monk.Liu@amd.com>
Reviewed-by: Chunming Zhou <david1.zhou@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Committed by Monk Liu
The original timeout detection routine is incorrect because it measures the gap from when the job was scheduled, but we should only measure the gap from when the job is processed by the hw.
Signed-off-by: Monk Liu <Monk.Liu@amd.com>
Reviewed-by: Chunming Zhou <david1.zhou@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Committed by Monk Liu
The mirror_list will be used for the upcoming timeout detection feature. This is needed to properly detect a GPU timeout with the scheduler.
Signed-off-by: Monk Liu <Monk.Liu@amd.com>
Reviewed-by: Chunming Zhou <david1.zhou@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
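A rough sketch (hypothetical names) of what such a mirror list amounts to: jobs are added when they go to the hardware and removed when they complete, so a timeout handler can see exactly what the hw should currently be executing.

#include <linux/list.h>
#include <linux/spinlock.h>

struct example_job {
        struct list_head node;
};

struct example_sched {
        struct list_head ring_mirror_list;      /* jobs currently on the hw ring */
        spinlock_t job_list_lock;
};

static void example_job_begin(struct example_sched *s, struct example_job *j)
{
        spin_lock(&s->job_list_lock);
        list_add_tail(&j->node, &s->ring_mirror_list);
        spin_unlock(&s->job_list_lock);
}

static void example_job_finish(struct example_sched *s, struct example_job *j)
{
        spin_lock(&s->job_list_lock);
        list_del_init(&j->node);
        spin_unlock(&s->job_list_lock);
}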
-
Committed by Monk Liu
For jobs submitted through the scheduler, do not free them immediately after they are scheduled; instead free them from the global workqueue via the sched fence's signaling callback function.
v2: call uf's bo_undef after job_run();
call the job's sync free after job_run();
no static inline __amdgpu_job_free() anymore, just use kfree(job) to replace it.
Signed-off-by: Monk Liu <Monk.Liu@amd.com>
Reviewed-by: Chunming Zhou <david1.zhou@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Committed by Monk Liu
Consolidate job initialization in one place rather than duplicating it in multiple places.
Signed-off-by: Monk Liu <Monk.Liu@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
- 11 February 2016, 1 commit
-
-
Committed by Monk Liu
Since the dependency job is scheduled by the same scheduler as the job that depends on it, there is no need to wake up the scheduler when the dependency is scheduled.
Signed-off-by: Monk Liu <Monk.Liu@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
- 15 December 2015, 1 commit
-
-
Committed by Chunming Zhou
The UMD sometimes creates a context for every ring, which means some entities wouldn't be used at all.
Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
- 05 December 2015, 1 commit
-
-
Committed by Nicolai Hähnle
As soon as we leave the spinlock after the job has been added to the job queue, we can no longer rely on the job's data being available. I have seen a null-pointer dereference due to sched == NULL in amd_sched_wakeup via amd_sched_entity_push_job and amd_sched_ib_submit_kernel_helper. Since the latter initializes sched_job->sched with the address of the ring scheduler, which is guaranteed to be non-NULL, this race appears to be a likely culprit.
Signed-off-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
Bugzilla: https://bugs.freedesktop.org/attachment.cgi?bugid=93079
Reviewed-by: Christian König <christian.koenig@amd.com>
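A minimal, hypothetical sketch of the defensive pattern this suggests: copy whatever is needed from the job into locals before publishing it, because once the lock is dropped the consumer thread may already have run and freed the job.

#include <linux/list.h>
#include <linux/spinlock.h>

struct example_sched;

struct example_entity {
        spinlock_t queue_lock;
        struct list_head job_queue;
};

struct example_job {
        struct example_sched *sched;
        struct list_head node;
};

static void example_sched_wakeup(struct example_sched *sched)
{
        /* wake the scheduler thread; details elided */
}

static void example_push_job(struct example_entity *entity,
                             struct example_job *job)
{
        struct example_sched *sched = job->sched;       /* read before publishing */

        spin_lock(&entity->queue_lock);
        list_add_tail(&job->node, &entity->job_queue);
        spin_unlock(&entity->queue_lock);

        /* only the local copy is safe to use here; *job may already be freed */
        example_sched_wakeup(sched);
}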
-
- 03 December 2015, 2 commits
-
-
Committed by Chunming Zhou
This allows us to set priorities in the scheduler.
Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Junwei Zhang <Jerry.Zhang@amd.com>
-
Committed by Nicolai Hähnle
As soon as we leave the spinlock after the job has been added to the job queue, we can no longer rely on the job's data being available. I have seen a null-pointer dereference due to sched == NULL in amd_sched_wakeup via amd_sched_entity_push_job and amd_sched_ib_submit_kernel_helper. Since the latter initializes sched_job->sched with the address of the ring scheduler, which is guaranteed to be non-NULL, this race appears to be a likely culprit.
Signed-off-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
Bugzilla: https://bugs.freedesktop.org/attachment.cgi?bugid=93079
Reviewed-by: Christian König <christian.koenig@amd.com>
-
- 24 November 2015, 2 commits
-
-
Committed by Christian König
This way the driver isn't limited in the dependency handling callback.
v2: remove extra check in amd_sched_entity_pop_job()
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Chunming Zhou <david1.zhou@amd.com>
-
Committed by Christian König
We only need to wait for jobs to be scheduled when the dependency comes from the same scheduler.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Chunming Zhou <david1.zhou@amd.com>
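A hypothetical sketch of the distinction this implies (names and fields are assumptions): a dependency produced by a different scheduler has to be waited on until it completes, while one produced by the same scheduler only needs to have been pushed to the hardware, since ring ordering then takes care of the rest.

#include <linux/types.h>

struct example_sched;

struct example_sched_fence {
        struct example_sched *sched;    /* scheduler that produced this fence */
        bool scheduled;                 /* job already pushed to the hw ring */
};

static bool example_dep_only_needs_scheduling(struct example_sched *sched,
                                              struct example_sched_fence *dep)
{
        /* foreign or non-scheduler dependency: must wait for full completion */
        if (!dep || dep->sched != sched)
                return false;

        /* same scheduler: waiting for the dependency to be scheduled suffices */
        return true;
}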
-
- 17 November 2015, 2 commits
-
-
Committed by Christian König
Before this patch the scheduler fence was created when we pushed the job into the queue, so we could only get the fence after pushing it.

The mutex was necessary to prevent the thread pushing the jobs to the hardware from running faster than the thread pushing the jobs into the queue. Otherwise the thread pushing jobs into the queue would have accessed possibly freed memory when it tried to get a reference to the fence.

So what you get in the end is:

thread A:
        mutex_lock(&job->lock);
        ...
        Kick off thread B.
        ...
        mutex_unlock(&job->lock);

And thread B:
        mutex_lock(&job->lock);
        ....
        mutex_unlock(&job->lock);
        kfree(job);

I'm actually not sure if I'm still up to date on this, but this usage pattern used to be not allowed with mutexes. See https://lwn.net/Articles/575460/ as well.

v2: remove unrelated changes, fix missing owner
v3: rebased, add more commit message

Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
-
Committed by Christian König
The code was correct, but getting two references when the ownership is linearly moved on is a bit awkward and just overhead.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
-