1. 24 Apr, 2021 1 commit
  2. 21 Apr, 2021 1 commit
  3. 10 Apr, 2021 1 commit
  4. 24 Mar, 2021 1 commit
  5. 10 Mar, 2021 2 commits
  6. 18 Dec, 2020 1 commit
  7. 30 Oct, 2020 1 commit
  8. 13 Oct, 2020 2 commits
  9. 06 Oct, 2020 1 commit
  10. 26 Sep, 2020 1 commit
  11. 23 Sep, 2020 1 commit
  12. 16 Sep, 2020 1 commit
  13. 01 Sep, 2020 1 commit
  14. 27 Aug, 2020 3 commits
  15. 28 Jul, 2020 1 commit
  16. 16 Jul, 2020 3 commits
  17. 03 Jul, 2020 1 commit
  18. 01 Jul, 2020 4 commits
  19. 29 Apr, 2020 1 commit
  20. 02 Apr, 2020 1 commit
  21. 27 Feb, 2020 1 commit
  22. 13 Feb, 2020 1 commit
    • drm/amdkfd: refactor runtime pm for baco · 9593f4d6
      Authored by Rajneesh Bhardwaj
      So far the kfd driver implemented the same routines for runtime and
      system-wide suspend and resume (s2idle or mem). During system-wide
      suspend the kfd acquires an atomic lock that prevents any more user
      processes from creating queues and interacting with the kfd driver and
      amd gpu. This mechanism created a problem when the amdgpu device is
      runtime suspended with BACO enabled. Any application that relies on the
      kfd driver fails to load, because the driver reports a locked kfd
      device while the gpu is runtime suspended.
      
      However, in the ideal case, when the gpu is runtime suspended the kfd
      driver should be able to:

       - auto-resume the amdgpu driver whenever a client requests compute service
       - prevent runtime suspend of amdgpu while kfd is in use
      
      This change refactors the amdgpu and amdkfd drivers to support BACO and
      runtime power management.
      Reviewed-by: Oak Zeng <oak.zeng@amd.com>
      Reviewed-by: Felix Kuehling <felix.kuehling@amd.com>
      Signed-off-by: Rajneesh Bhardwaj <rajneesh.bhardwaj@amd.com>
      Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
      9593f4d6
  23. 08 Jan, 2020 1 commit
  24. 19 Dec, 2019 1 commit
  25. 14 Nov, 2019 1 commit
  26. 26 Oct, 2019 1 commit
    • drm/amdkfd: don't use dqm lock during device reset/suspend/resume · 2c99a547
      Authored by Philip Yang
      If device reset/suspend/resume fails for some reason, the dqm lock is
      held forever and this causes a deadlock. Below is a kernel backtrace
      from an application opening kfd after a failed suspend/resume.
      
      Instead of holding the dqm lock in pre_reset and releasing it in
      post_reset, add a dqm->sched_running flag which is modified in
      dqm->ops.start and dqm->ops.stop. The flag doesn't need lock protection
      because all writes and reads happen inside the dqm lock.

      For the HWS case, map_queues_cpsch and unmap_queues_cpsch check the
      sched_running flag before sending the updated runlist.

      v2: For the no-HWS case, when the device is stopped, don't call
      load/destroy_mqd for queue eviction, restore and create, and avoid the
      debugfs dump of HQDs.
      
      Backtrace of dqm lock deadlock:
      
      [Thu Oct 17 16:43:37 2019] INFO: task rocminfo:3024 blocked for more
      than 120 seconds.
      [Thu Oct 17 16:43:37 2019]       Not tainted
      5.0.0-rc1-kfd-compute-rocm-dkms-no-npi-1131 #1
      [Thu Oct 17 16:43:37 2019] "echo 0 >
      /proc/sys/kernel/hung_task_timeout_secs" disables this message.
      [Thu Oct 17 16:43:37 2019] rocminfo        D    0  3024   2947
      0x80000000
      [Thu Oct 17 16:43:37 2019] Call Trace:
      [Thu Oct 17 16:43:37 2019]  ? __schedule+0x3d9/0x8a0
      [Thu Oct 17 16:43:37 2019]  schedule+0x32/0x70
      [Thu Oct 17 16:43:37 2019]  schedule_preempt_disabled+0xa/0x10
      [Thu Oct 17 16:43:37 2019]  __mutex_lock.isra.9+0x1e3/0x4e0
      [Thu Oct 17 16:43:37 2019]  ? __call_srcu+0x264/0x3b0
      [Thu Oct 17 16:43:37 2019]  ? process_termination_cpsch+0x24/0x2f0
      [amdgpu]
      [Thu Oct 17 16:43:37 2019]  process_termination_cpsch+0x24/0x2f0
      [amdgpu]
      [Thu Oct 17 16:43:37 2019]
      kfd_process_dequeue_from_all_devices+0x42/0x60 [amdgpu]
      [Thu Oct 17 16:43:37 2019]  kfd_process_notifier_release+0x1be/0x220
      [amdgpu]
      [Thu Oct 17 16:43:37 2019]  __mmu_notifier_release+0x3e/0xc0
      [Thu Oct 17 16:43:37 2019]  exit_mmap+0x160/0x1a0
      [Thu Oct 17 16:43:37 2019]  ? __handle_mm_fault+0xba3/0x1200
      [Thu Oct 17 16:43:37 2019]  ? exit_robust_list+0x5a/0x110
      [Thu Oct 17 16:43:37 2019]  mmput+0x4a/0x120
      [Thu Oct 17 16:43:37 2019]  do_exit+0x284/0xb20
      [Thu Oct 17 16:43:37 2019]  ? handle_mm_fault+0xfa/0x200
      [Thu Oct 17 16:43:37 2019]  do_group_exit+0x3a/0xa0
      [Thu Oct 17 16:43:37 2019]  __x64_sys_exit_group+0x14/0x20
      [Thu Oct 17 16:43:37 2019]  do_syscall_64+0x4f/0x100
      [Thu Oct 17 16:43:37 2019]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
      Suggested-by: Felix Kuehling <Felix.Kuehling@amd.com>
      Signed-off-by: Philip Yang <Philip.Yang@amd.com>
      Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com>
      Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
      2c99a547
  27. 08 Oct, 2019 2 commits
  28. 03 Oct, 2019 3 commits