1. 29 Sep, 2022 2 commits
  2. 20 Sep, 2022 1 commit
  3. 17 Aug, 2022 2 commits
  4. 07 Jun, 2022 1 commit
    • drm/amd/amdgpu: Enable high priority gfx queue · b07d1d73
      Arunpravin Paneer Selvam committed
      Starting with SIENNA CICHLID, the ASIC supports two gfx pipes,
      enabling two graphics queues, one on each pipe: pipe0 queue0 is the
      normal priority queue and pipe1 queue0 is the high priority queue.

      Only one queue per pipe is visible to the SPI; the SPI looks at the
      priority value assigned to CP_GFX_HQD_QUEUE_PRIORITY in each queue's
      HQD/MQD.

      Contexts created with AMDGPU_CTX_PRIORITY_HIGH submit their jobs to
      the high priority queue on GFX pipe1 (see the sketch after this
      entry). Note that low priority workloads can starve if a high
      priority workload is always available.
      
      v2:
        - remove unnecessary check (Nirmoy)
        - make pipe1 hardware support a separate patch (Nirmoy)
        - remove duplicate code (Shashank)
        - add CSA support for the second gfx pipe (Alex)

      v3 (Christian):
        - fix incorrect indentation
        - merge the COMPUTE and GFX switch cases as both call the same
          function
      
      v4:
        - rebase on the latest code base
      Signed-off-by: Arunpravin Paneer Selvam <Arunpravin.PaneerSelvam@amd.com>
      Acked-by: Christian König <christian.koenig@amd.com>
      Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
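      For context, a minimal userspace sketch (not part of the patch) of
      requesting such a high priority context through libdrm's amdgpu
      wrapper. The render node path is an assumption, error handling is
      abbreviated, and priorities above normal generally require
      CAP_SYS_NICE or DRM master. Build with -I/usr/include/libdrm
      -ldrm_amdgpu -ldrm.

          /* high_prio_ctx.c: create a context whose jobs the kernel routes
           * to the high priority gfx queue (pipe1 queue0 on ASICs that
           * expose the second gfx pipe). */
          #include <fcntl.h>
          #include <stdio.h>
          #include <stdint.h>
          #include <amdgpu.h>
          #include <amdgpu_drm.h>

          int main(void)
          {
              uint32_t major, minor;
              amdgpu_device_handle dev;
              amdgpu_context_handle ctx;
              int fd = open("/dev/dri/renderD128", O_RDWR); /* assumed node */

              if (fd < 0 || amdgpu_device_initialize(fd, &major, &minor, &dev))
                  return 1;

              /* AMDGPU_CTX_PRIORITY_HIGH selects the high priority queue */
              if (amdgpu_cs_ctx_create2(dev, AMDGPU_CTX_PRIORITY_HIGH, &ctx)) {
                  fprintf(stderr, "high priority context denied\n");
                  return 1;
              }

              amdgpu_cs_ctx_free(ctx);
              amdgpu_device_deinitialize(dev);
              return 0;
          }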
  5. 07 May, 2022 1 commit
  6. 04 May, 2022 3 commits
  7. 29 Apr, 2022 1 commit
  8. 26 Mar, 2022 1 commit
  9. 03 Mar, 2022 2 commits
  10. 18 Feb, 2022 1 commit
  11. 15 Jan, 2022 2 commits
  12. 02 Sep, 2021 1 commit
  13. 10 Apr, 2021 1 commit
  14. 24 Mar, 2021 3 commits
  15. 10 Feb, 2021 1 commit
  16. 14 Nov, 2020 1 commit
  17. 13 Nov, 2020 1 commit
  18. 17 Oct, 2020 1 commit
  19. 16 Oct, 2020 1 commit
  20. 23 Sep, 2020 1 commit
    • drm/amdgpu: update athub interrupt harvesting handle · 3f975d0f
      Stanley.Yang committed
      A GCEA/MMHUB EA error should not result in a DF freeze. This is
      fixed in the next generation, but for some reason a GCEA/MMHUB EA
      error does result in a DF freeze on the previous generation, so the
      driver should avoid flagging a GCEA/MMHUB EA error as a hw fatal
      error in the kernel message and should instead read the GCEA/MMHUB
      error status registers.

      Changes from V1:
          make the query_ras_error_status function more general
          make reading the mmhub error status register more friendly

      Changes from V2:
          move the ras error status query function into the do_recovery
          workqueue

      Changes from V3:
          remove useless code from V2, print the GCEA error status
          instance number
      Signed-off-by: Stanley.Yang <Stanley.Yang@amd.com>
      Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
      Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
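      To make the approach concrete, a hedged sketch of the idea follows.
      The instance count, register name, and read macro are illustrative
      stand-ins for whichever GCEA/MMHUB status registers the real driver
      walks; they are not the exact patch code.

          /* Walk each GCEA instance, read its error status register, and
           * log it instead of escalating to a hw fatal error / DF freeze.
           * NUM_GCEA_INSTANCES and mmGCEA_ERR_STATUS are assumptions. */
          static void sketch_query_gcea_error_status(struct amdgpu_device *adev)
          {
                  uint32_t i, status;

                  for (i = 0; i < NUM_GCEA_INSTANCES; i++) {
                          status = RREG32_SOC15_OFFSET(GC, 0,
                                                       mmGCEA_ERR_STATUS, i);
                          if (status)
                                  dev_info(adev->dev,
                                           "GCEA err detected at instance: %d, status: 0x%x\n",
                                           i, status);
                  }
          }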
  21. 15 Aug, 2020 1 commit
  22. 22 Jul, 2020 1 commit
  23. 04 Jun, 2020 1 commit
  24. 02 May, 2020 1 commit
  25. 24 Apr, 2020 1 commit
    • drm/amdgpu: request reg_val_offs each kiq read reg · 54208194
      Yintian Tao committed
      With the current KIQ read-register method, there is a race
      condition when multiple clients use KIQ to read registers at the
      same time, as in the example below:
      1. client-A starts to read REG-0 through KIQ
      2. client-A polls seqno-0
      3. client-B starts to read REG-1 through KIQ
      4. client-B polls seqno-1
      5. the KIQ completes both read operations
      6. client-A reads the result from the wb buffer and gets the
         REG-1 value

      Therefore, use amdgpu_device_wb_get() to request a reg_val_offs for
      each KIQ register read, as sketched after this entry.
      
      v2: fix the error remove
      v3: fix the print typo
      v4: remove unused variables
      Signed-off-by: Yintian Tao <yttao@amd.com>
      Reviewed-by: Christian König <christian.koenig@amd.com>
      Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
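      A condensed sketch of the resulting read path, using the helpers
      named in the message; ring locking, packet submission, and fence
      polling are elided, so treat this as the shape of the fix rather
      than the full function.

          /* Each caller now owns a private writeback slot for the whole
           * read, so concurrent KIQ reads can no longer return each
           * other's values. */
          static uint32_t sketch_kiq_rreg(struct amdgpu_device *adev, uint32_t reg)
          {
                  struct amdgpu_ring *ring = &adev->gfx.kiq.ring;
                  uint32_t reg_val_offs = 0, value;

                  if (amdgpu_device_wb_get(adev, &reg_val_offs))
                          return ~0;      /* no free writeback slot */

                  /* ask the KIQ to copy the register into wb[reg_val_offs],
                   * then wait on this submission's seqno (elided) */
                  amdgpu_ring_emit_rreg(ring, reg, reg_val_offs);

                  value = adev->wb.wb[reg_val_offs]; /* private slot, no race */
                  amdgpu_device_wb_free(adev, reg_val_offs);
                  return value;
          }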
  26. 09 Apr, 2020 1 commit
    • drm/amdgpu: rework sched_list generation · 1c6d567b
      Nirmoy Das committed
      Generate each HW IP's sched_list in amdgpu_ring_init() instead of
      in amdgpu_ctx.c. This makes amdgpu_ctx_init_compute_sched(),
      ring.has_high_prio and amdgpu_ctx_init_sched() unnecessary.
      This patch also stores the sched_list for all HW IPs in one big
      array in struct amdgpu_device, which makes amdgpu_ctx_init_entity()
      much leaner.
      
      v2:
      fix a coding style issue
      do not use the drm hw_ip const to populate the amdgpu_ring_type enum

      v3:
      remove the ctx reference and move the sched array and num_sched
      into a struct
      use num_scheds to detect an uninitialized scheduler list

      v4:
      use array_index_nospec for user-space controlled variables
      fix possible checkpatch.pl warnings
      Signed-off-by: Nirmoy Das <nirmoy.das@amd.com>
      Reviewed-by: Christian König <christian.koenig@amd.com>
      Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
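      A hedged sketch of the v4 hardening described above. The gpu_sched
      table layout and field names follow the commit text but are
      simplified, so read this as the pattern rather than the patch.

          #include <linux/nospec.h>

          /* hw_ip arrives from the ioctl, so it is bounds-checked and
           * clamped before indexing the per-device scheduler table. */
          static int sketch_pick_scheds(struct amdgpu_device *adev,
                                        u32 hw_ip, u32 hw_prio,
                                        struct drm_gpu_scheduler ***scheds,
                                        unsigned int *num_scheds)
          {
                  if (hw_ip >= AMDGPU_HW_IP_NUM)
                          return -EINVAL;

                  /* prevent speculative out-of-bounds use of the index */
                  hw_ip = array_index_nospec(hw_ip, AMDGPU_HW_IP_NUM);

                  *scheds = adev->gpu_sched[hw_ip][hw_prio].sched;
                  *num_scheds = adev->gpu_sched[hw_ip][hw_prio].num_scheds;

                  /* v3: zero num_scheds marks an uninitialized list */
                  return *num_scheds ? 0 : -ENOENT;
          }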
  27. 10 Mar, 2020 1 commit
  28. 05 Mar, 2020 1 commit
  29. 29 Feb, 2020 1 commit
  30. 23 Jan, 2020 1 commit
  31. 17 Jan, 2020 1 commit
  32. 19 Dec, 2019 1 commit