1. 07 Jul 2018: 4 commits
  2. 06 Jul 2018: 1 commit
  3. 05 Jul 2018: 2 commits
  4. 03 Jul 2018: 1 commit
  5. 02 Jul 2018: 3 commits
  6. 28 Jun 2018: 1 commit
  7. 23 Jun 2018: 1 commit
  8. 22 Jun 2018: 2 commits
  9. 20 Jun 2018: 5 commits
  10. 19 Jun 2018: 2 commits
  11. 18 Jun 2018: 1 commit
  12. 16 Jun 2018: 2 commits
  13. 12 Jun 2018: 1 commit
  14. 01 Jun 2018: 1 commit
  15. 31 May 2018: 1 commit
    • drm/bridge/synopsys: dw-hdmi: fix dw_hdmi_setup_rx_sense · c32048d9
      Committed by Neil Armstrong
      The exported function dw_hdmi_setup_rx_sense should not take a struct
      device and recover the dw-hdmi context from drvdata; it should take
      struct dw_hdmi directly, like the other exported functions.
      
      This caused a regression using Meson DRM on S905X since v4.17-rc1:
      
      Internal error: Oops: 96000007 [#1] PREEMPT SMP
      [...]
      CPU: 0 PID: 124 Comm: irq/32-dw_hdmi_ Not tainted 4.17.0-rc7 #2
      Hardware name: Libre Technology CC (DT)
      [...]
      pc : osq_lock+0x54/0x188
      lr : __mutex_lock.isra.0+0x74/0x530
      [...]
      Process irq/32-dw_hdmi_ (pid: 124, stack limit = 0x00000000adf418cb)
      Call trace:
        osq_lock+0x54/0x188
        __mutex_lock_slowpath+0x10/0x18
        mutex_lock+0x30/0x38
        __dw_hdmi_setup_rx_sense+0x28/0x98
        dw_hdmi_setup_rx_sense+0x10/0x18
        dw_hdmi_top_thread_irq+0x2c/0x50
        irq_thread_fn+0x28/0x68
        irq_thread+0x10c/0x1a0
        kthread+0x128/0x130
        ret_from_fork+0x10/0x18
       Code: 34000964 d00050a2 51000484 9135c042 (f864d844)
       ---[ end trace 945641e1fbbc07da ]---
       note: irq/32-dw_hdmi_[124] exited with preempt_count 1
       genirq: exiting task "irq/32-dw_hdmi_" (124) is an active IRQ thread (irq 32)
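      The API change behind this fix can be illustrated with a minimal,
      self-contained C sketch. The struct layouts and helper names below are
      hypothetical stand-ins, not the actual driver code: fetching the context
      via drvdata breaks once the glue driver owns drvdata, while passing
      struct dw_hdmi directly stays correct.

      ```c
      #include <assert.h>
      #include <stdbool.h>
      #include <stddef.h>
      #include <stdio.h>

      /* Hypothetical, simplified stand-ins for the kernel types involved. */
      struct dw_hdmi { bool rx_sense; };
      struct device { void *drvdata; };

      /* Before the fix: recover the context from drvdata. After commit
       * eea034af, drvdata belongs to the glue driver, so this cast no longer
       * yields a struct dw_hdmi and later dereferences the wrong object. */
      static struct dw_hdmi *broken_context(struct device *dev)
      {
              return (struct dw_hdmi *)dev->drvdata;
      }

      /* After the fix: the caller passes the context directly, like the
       * other exported dw-hdmi helpers. */
      static void setup_rx_sense(struct dw_hdmi *hdmi, bool hpd, bool rx_sense)
      {
              hdmi->rx_sense = hpd && rx_sense;
      }

      int main(void)
      {
              struct dw_hdmi hdmi = { .rx_sense = false };
              /* Model the regression: the glue driver has taken drvdata. */
              struct device glue_dev = { .drvdata = NULL };

              /* The old path returns garbage (here NULL) and would oops on
               * dereference, as in the trace above. */
              assert(broken_context(&glue_dev) == NULL);

              /* The new path works regardless of what drvdata holds. */
              setup_rx_sense(&hdmi, true, true);
              assert(hdmi.rx_sense);
              printf("ok\n");
              return 0;
      }
      ```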
      
      Fixes: eea034af ("drm/bridge/synopsys: dw-hdmi: don't clobber drvdata")
      Signed-off-by: Neil Armstrong <narmstrong@baylibre.com>
      Tested-by: Koen Kooi <koen@dominion.thruhere.net>
      Signed-off-by: Sean Paul <seanpaul@chromium.org>
      Link: https://patchwork.freedesktop.org/patch/msgid/1527673438-20643-1-git-send-email-narmstrong@baylibre.com
  16. 24 May 2018: 2 commits
  17. 19 May 2018: 1 commit
  18. 18 May 2018: 1 commit
  19. 17 May 2018: 3 commits
  20. 16 May 2018: 3 commits
    • drm/scheduler: remove unused parameter · 8344c53f
      Committed by Nayan Deshmukh
      This patch also affects the amdgpu and etnaviv drivers, which use
      the function drm_sched_entity_init.
      Signed-off-by: Nayan Deshmukh <nayan26deshmukh@gmail.com>
      Suggested-by: Christian König <christian.koenig@amd.com>
      Acked-by: Lucas Stach <l.stach@pengutronix.de>
      Reviewed-by: Christian König <christian.koenig@amd.com>
      Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
    • drm/amdgpu: add VEGAM ASIC type · 48ff108d
      Committed by Leo Liu
      Signed-off-by: Leo Liu <leo.liu@amd.com>
      Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
      Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
    • drm/gpu-sched: fix force APP kill hang (v4) · 8ee3a52e
      Committed by Emily Deng
      Issue:
      VMC page faults occur if the application is force-killed during a
      3dmark test. The cause is that entity_fini() manually signals all
      the jobs in the entity's queue, which confuses the sync/dependency
      mechanism:
      
      1) A page fault occurs in sdma's clear job, which operates on the
      shadow buffer: the shadow buffer's GART table was cleaned by
      ttm_bo_release, since the fence in its reservation was fake-signaled
      by entity_fini() after SIGKILL was received.
      
      2) A page fault occurs in a gfx job because, during the lifetime of
      the gfx job, entity_fini() fake-signals all jobs from its entity;
      the unmapping/clear-PTE job that depends on those result fences is
      thus satisfied, so sdma starts clearing the PTEs and triggers a GFX
      page fault.
      
      Fix:
      1) In the SIGKILL case, entity_fini() should at least wait for all
      already-scheduled jobs to complete.
      
      2) If a signaled fence would clear some entity's dependency, mark
      that entity guilty to prevent its jobs from really running, since
      the dependency was fake-signaled.
      
      v2:
      Split drm_sched_entity_fini() into two functions:
      1) The first does the waiting, removes the entity from the runqueue,
      and returns an error when the process was killed.
      2) The second goes over the entity, installs itself as the
      completion signal for the remaining jobs, and signals all jobs with
      an error code.
      
      v3:
      1) Replace fini1 and fini2 with better names.
      2) Call the first part before the VM teardown in
      amdgpu_driver_postclose_kms() and the second part after the VM
      teardown.
      3) Keep the original function drm_sched_entity_fini to refine the
      code.
      
      v4:
      1) Rename entity->finished to entity->last_scheduled.
      2) Rename drm_sched_entity_fini_job_cb() to
      drm_sched_entity_kill_jobs_cb().
      3) Pass NULL to drm_sched_entity_fini_job_cb() if -ENOENT.
      4) Change the type of entity->fini_status to int.
      5) Remove the check on entity->finished.
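      The two-phase teardown described in v2 and v3 can be sketched with a
      heavily simplified, self-contained C model. All names and types below
      are hypothetical (the real scheduler works with fences, runqueues, and
      dma_fence callbacks): phase one waits for already-scheduled jobs unless
      the process was killed, and phase two fake-signals the leftovers and
      marks the entity guilty so dependent jobs do not really run.

      ```c
      #include <assert.h>
      #include <stdbool.h>
      #include <stdio.h>

      /* Hypothetical, heavily simplified model of a scheduler entity. */
      struct entity {
              int  pending;   /* jobs still queued on this entity */
              bool guilty;    /* set when jobs were fake-signaled */
      };

      /* Phase 1 (the "flush"): stop accepting work and wait for the jobs
       * already scheduled. Returns an error when the process was killed,
       * so the caller does not block forever on SIGKILL. */
      static int entity_flush(struct entity *e, bool killed)
      {
              if (killed)
                      return -1;  /* skip waiting, go straight to phase 2 */
              e->pending = 0;     /* model: queued jobs complete normally */
              return 0;
      }

      /* Phase 2 (the "fini"): fake-signal any remaining jobs with an error
       * and mark the entity guilty so dependent jobs are not really run. */
      static void entity_fini(struct entity *e)
      {
              if (e->pending) {
                      e->pending = 0;
                      e->guilty = true;
              }
      }

      int main(void)
      {
              struct entity normal = { .pending = 3, .guilty = false };
              struct entity killed = { .pending = 3, .guilty = false };

              /* Normal teardown: jobs drain, nothing is fake-signaled. */
              assert(entity_flush(&normal, false) == 0);
              entity_fini(&normal);
              assert(!normal.guilty);

              /* SIGKILL: waiting is skipped, leftovers are fake-signaled
               * and the entity is marked guilty. */
              assert(entity_flush(&killed, true) != 0);
              entity_fini(&killed);
              assert(killed.guilty);
              printf("ok\n");
              return 0;
      }
      ```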
      Signed-off-by: Monk Liu <Monk.Liu@amd.com>
      Signed-off-by: Emily Deng <Emily.Deng@amd.com>
      Reviewed-by: Christian König <christian.koenig@amd.com>
      Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
  21. 15 May 2018: 1 commit
  22. 14 May 2018: 1 commit