1. 08 Jul 2016, 13 commits
  2. 12 May 2016, 4 commits
  3. 05 May 2016, 1 commit
  4. 03 May 2016, 4 commits
  5. 17 Mar 2016, 3 commits
  6. 09 Mar 2016, 1 commit
  7. 01 Mar 2016, 1 commit
  8. 13 Feb 2016, 2 commits
  9. 11 Feb 2016, 9 commits
  10. 17 Nov 2015, 2 commits
    • drm/amdgpu: fix incorrect mutex usage v3 · e2840221
      Christian König authored
      Before this patch the scheduler fence was created when we pushed the
      job into the queue, so we could only get the fence after pushing it.
      
      The mutex was necessary to prevent the thread pushing the jobs to the
      hardware from running faster than the thread pushing the jobs into
      the queue.
      
      Otherwise the thread pushing jobs into the queue could have accessed
      already freed memory when it tried to get a reference to the fence.
      
      So what you get in the end is thread A:
      mutex_lock(&job->lock);
      ...
      Kick off thread B.
      ...
      mutex_unlock(&job->lock);
      
      And thread B:
      mutex_lock(&job->lock);
      ...
      mutex_unlock(&job->lock);
      kfree(job);
      
      I'm actually not sure if I'm still up to date on this, but this
      usage pattern used to be disallowed with mutexes. See also
      https://lwn.net/Articles/575460/.
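      
      To make the hazard concrete, here is a minimal userspace C sketch of
      the same pattern, using pthreads in place of kernel mutexes; the
      struct job below is hypothetical and only stands in for the amdgpu
      job structure:
      
      #include <pthread.h>
      #include <stdlib.h>
      
      struct job {
          pthread_mutex_t lock;
          int data;
      };
      
      /* Thread B: consumes the job, then frees it. */
      static void *thread_b(void *arg)
      {
          struct job *job = arg;
      
          pthread_mutex_lock(&job->lock);   /* blocks until thread A unlocks */
          /* ... push the job to the hardware ... */
          pthread_mutex_unlock(&job->lock);
          free(job);   /* may race with the tail of A's unlock: use-after-free */
          return NULL;
      }
      
      /* Thread A: prepares the job and kicks off thread B while holding
       * the lock. */
      int main(void)
      {
          struct job *job = malloc(sizeof(*job));
          pthread_t b;
      
          pthread_mutex_init(&job->lock, NULL);
      
          pthread_mutex_lock(&job->lock);
          pthread_create(&b, NULL, thread_b, job);   /* kick off thread B */
          /* ... finish queueing, take a reference to the fence ... */
          pthread_mutex_unlock(&job->lock);   /* B may free the job while
                                                 this call still touches it */
      
          pthread_join(b, NULL);
          return 0;
      }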
      
      v2: remove unrelated changes, fix missing owner
      v3: rebased, expanded the commit message
      Signed-off-by: Christian König <christian.koenig@amd.com>
      Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
    • drm/amdgpu: cleanup scheduler fence get/put dance · 4a562283
      Christian König authored
      The code was correct, but taking two references when ownership is
      simply moved along linearly is awkward and adds unnecessary overhead.
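      
      As an illustration, here is a minimal C sketch of the idea with a
      hypothetical fence_get()/fence_put() refcount API; this is not the
      actual amdgpu code, only the before/after hand-over pattern:
      
      #include <stdatomic.h>
      #include <stdlib.h>
      
      /* Hypothetical refcounted fence standing in for the scheduler fence. */
      struct fence {
          atomic_int refcount;   /* starts at 1, owned by the creator */
      };
      
      static struct fence *fence_get(struct fence *f)
      {
          atomic_fetch_add(&f->refcount, 1);
          return f;
      }
      
      static void fence_put(struct fence *f)
      {
          if (atomic_fetch_sub(&f->refcount, 1) == 1)
              free(f);
      }
      
      /* Before: the get/put dance. Take an extra reference just to hand
       * the fence to the consumer, then drop our own right afterwards. */
      static void hand_over_with_dance(struct fence *f,
                                       void (*consume)(struct fence *))
      {
          consume(fence_get(f));   /* consumer now holds one reference */
          fence_put(f);            /* drop the reference we started with */
      }
      
      /* After: ownership moves on linearly, so the producer's reference
       * is simply passed along with no extra get/put. */
      static void hand_over(struct fence *f, void (*consume)(struct fence *))
      {
          consume(f);   /* consumer inherits our reference */
      }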
      
      Signed-off-by: Christian König <christian.koenig@amd.com>
      Reviewed-by: Alex Deucher <alexander.deucher@amd.com>