  1. 08 Jan 2020, 1 commit
  2. 26 Dec 2019, 2 commits
  3. 22 Dec 2019, 5 commits
  4. 19 Nov 2019, 1 commit
  5. 16 Nov 2019, 1 commit
  6. 05 Nov 2019, 1 commit
  7. 01 Nov 2019, 2 commits
  8. 30 Oct 2019, 1 commit
    • drm/i915/gt: Always track callers to intel_rps_mark_interactive() · a06375a9
      Committed by Chris Wilson
      During startup, we may find ourselves in an interesting position where
      we haven't fully enabled RPS before the display starts trying to use it.
      This may lead to an imbalance in our "interactive" counter:
      
      <3>[    4.813326] intel_rps_mark_interactive:652 GEM_BUG_ON(!rps->power.interactive)
      <4>[    4.813396] ------------[ cut here ]------------
      <2>[    4.813398] kernel BUG at drivers/gpu/drm/i915/gt/intel_rps.c:652!
      <4>[    4.813430] invalid opcode: 0000 [#1] PREEMPT SMP PTI
      <4>[    4.813438] CPU: 1 PID: 18 Comm: kworker/1:0H Not tainted 5.4.0-rc5-CI-CI_DRM_7209+ #1
      <4>[    4.813447] Hardware name:  /NUC7i5BNB, BIOS BNKBL357.86A.0054.2017.1025.1822 10/25/2017
      <4>[    4.813525] Workqueue: events_highpri intel_atomic_cleanup_work [i915]
      <4>[    4.813589] RIP: 0010:intel_rps_mark_interactive+0xb3/0xc0 [i915]
      <4>[    4.813597] Code: bc 3f de e0 48 8b 35 84 2e 24 00 49 c7 c0 f3 d4 4e a0 b9 8c 02 00 00 48 c7 c2 80 9c 48 a0 48 c7 c7 3e 73 34 a0 e8 8d 3b e5 e0 <0f> 0b 90 66 2e 0f 1f 84 00 00 00 00 00 80 bf c0 00 00 00 00 74 32
      <4>[    4.813616] RSP: 0018:ffffc900000efe00 EFLAGS: 00010286
      <4>[    4.813623] RAX: 000000000000000e RBX: ffff8882583cc7f0 RCX: 0000000000000000
      <4>[    4.813631] RDX: 0000000000000001 RSI: 0000000000000008 RDI: ffff888275969c00
      <4>[    4.813639] RBP: 0000000000000000 R08: 0000000000000008 R09: ffff888275ace000
      <4>[    4.813646] R10: ffffc900000efe00 R11: ffff888275969c00 R12: ffff8882583cc8d8
      <4>[    4.813654] R13: ffff888276abce00 R14: 0000000000000000 R15: ffff88825e878860
      <4>[    4.813662] FS:  0000000000000000(0000) GS:ffff888276a80000(0000) knlGS:0000000000000000
      <4>[    4.813672] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      <4>[    4.813678] CR2: 00007f051d5ca0a8 CR3: 0000000262f48001 CR4: 00000000003606e0
      <4>[    4.813686] Call Trace:
      <4>[    4.813755]  intel_cleanup_plane_fb+0x4e/0x60 [i915]
      <4>[    4.813764]  drm_atomic_helper_cleanup_planes+0x4d/0x70
      <4>[    4.813833]  intel_atomic_cleanup_work+0x15/0x80 [i915]
      <4>[    4.813842]  process_one_work+0x26a/0x620
      <4>[    4.813850]  worker_thread+0x37/0x380
      <4>[    4.813857]  ? process_one_work+0x620/0x620
      <4>[    4.813864]  kthread+0x119/0x130
      <4>[    4.813870]  ? kthread_park+0x80/0x80
      <4>[    4.813878]  ret_from_fork+0x3a/0x50
      <4>[    4.813887] Modules linked in: i915(+) mei_hdcp x86_pkg_temp_thermal coretemp crct10dif_pclmul crc32_pclmul btusb btrtl btbcm btintel snd_hda_intel snd_intel_nhlt snd_hda_codec bluetooth snd_hwdep snd_hda_core ghash_clmulni_intel snd_pcm e1000e ecdh_generic ecc ptp pps_core mei_me mei prime_numbers
      <4>[    4.813934] ---[ end trace c13289af88174ffc ]---
      
      The solution employed is to keep the tally of the interactive counter
      separate and not worry about the RPS state. When we do enable RPS, we then
      take the accumulated display activity into account; a minimal sketch of
      this deferred-accounting pattern follows this entry.
      
      Fixes: 3e7abf81 ("drm/i915: Extract GT render power state management")
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Andi Shyti <andi.shyti@intel.com>
      Acked-by: Andi Shyti <andi.shyti@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20191030103827.2413-1-chris@chris-wilson.co.uk
      a06375a9
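
      The following is a minimal sketch of the deferred-accounting pattern the
      commit message describes, under invented names (rps_sketch,
      rps_raise_power, rps_relax_power); it is not the i915 implementation,
      whose locking and power-management details differ.

      #include <linux/mutex.h>
      #include <linux/bug.h>

      struct rps_sketch {
              struct mutex lock;
              bool enabled;             /* has RPS been fully brought up? */
              unsigned int interactive; /* display "interactive" tally */
      };

      /* Hypothetical hooks standing in for the real power adjustments. */
      static void rps_raise_power(struct rps_sketch *rps) { }
      static void rps_relax_power(struct rps_sketch *rps) { }

      /* The display may call this before RPS is enabled; always keep the tally. */
      static void rps_sketch_mark_interactive(struct rps_sketch *rps, bool busy)
      {
              mutex_lock(&rps->lock);
              if (busy) {
                      if (!rps->interactive++ && rps->enabled)
                              rps_raise_power(rps);
              } else {
                      /* Only check that the counter is balanced, not RPS state. */
                      WARN_ON(!rps->interactive);
                      if (!--rps->interactive && rps->enabled)
                              rps_relax_power(rps);
              }
              mutex_unlock(&rps->lock);
      }

      /* On enabling RPS, fold in any display activity recorded so far. */
      static void rps_sketch_enable(struct rps_sketch *rps)
      {
              mutex_lock(&rps->lock);
              rps->enabled = true;
              if (rps->interactive)
                      rps_raise_power(rps);
              mutex_unlock(&rps->lock);
      }
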
  9. 27 Oct 2019, 1 commit
  10. 24 Oct 2019, 1 commit
  11. 18 Oct 2019, 2 commits
  12. 08 Oct 2019, 1 commit
  13. 05 Oct 2019, 1 commit
  14. 04 Oct 2019, 2 commits
    • drm/i915: Move request runtime management onto gt · 66101975
      Committed by Chris Wilson
      Requests are run from the gt and are tied into the gt runtime power
      management, so pull the runtime request management under gt/
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-12-chris@chris-wilson.co.uk
      66101975
    • drm/i915: Pull i915_vma_pin under the vm->mutex · 2850748e
      Committed by Chris Wilson
      Replace the struct_mutex requirement for pinning the i915_vma with the
      local vm->mutex instead. Note that the vm->mutex is tainted by the
      shrinker (we require unbinding from inside fs-reclaim) and so we cannot
      allocate while holding that mutex. Instead we have to preallocate
      workers that allocate and apply the PTE updates after we have
      reserved their slot in the drm_mm, using fences to order the PTE writes
      with the GPU work and with the later unbind (see the sketch after this
      entry).
      
      In adding the asynchronous vma binding, one subtle requirement is to
      avoid coupling the binding fence into the backing object->resv. That is,
      the asynchronous binding only applies to the vma timeline itself and not
      to the pages, as that is a more global timeline (the binding of one vma
      does not need to be ordered with another vma, nor does the implicit GEM
      fencing depend on a vma, only on writes to the backing store). Keeping
      the vma binding distinct from the backing store timelines is verified by
      a number of async gem_exec_fence and gem_exec_schedule tests. The way we
      do this is quite simple: we keep the fence for the vma binding separate,
      only wait on it as required, and never add it to the obj->resv itself.
      
      Another consequence of reducing the locking around the vma is that
      destruction of the vma is no longer globally serialised by struct_mutex.
      A natural solution would be to add a kref to i915_vma, but that requires
      decoupling the reference cycles, possibly by introducing a new
      i915_mm_pages object that is owned by both obj->mm and vma->pages.
      However, we have not taken that route due to the overshadowing lmem/ttm
      discussions, and instead play a series of complicated games with
      trylocks to (hopefully) ensure that only one destruction path is called!
      
      v2: Add some commentary, and some helpers to reduce patch churn.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-4-chris@chris-wilson.co.uk
      2850748e
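
      The two patterns flagged above, preallocating outside a reclaim-tainted
      mutex and keeping the binding fence off obj->resv, can be sketched
      roughly as follows. All names (vm_sketch, vma_sketch, bind_work) are
      hypothetical stand-ins rather than the i915 structures, and the drm_mm
      reservation and fence signalling are elided.

      #include <linux/mutex.h>
      #include <linux/slab.h>
      #include <linux/workqueue.h>
      #include <linux/dma-fence.h>

      struct vm_sketch {
              struct mutex mutex;              /* may be taken from fs-reclaim */
              struct workqueue_struct *wq;
              struct list_head bind_list;
      };

      struct vma_sketch {
              struct dma_fence *binding;       /* async PTE-write fence, or NULL */
      };

      struct bind_work {
              struct work_struct base;         /* applies the PTE updates */
              struct vma_sketch *vma;
              struct list_head link;
      };

      static void bind_worker(struct work_struct *w)
      {
              /* Write the PTEs and signal the vma's binding fence here. */
      }

      static int vma_sketch_bind_async(struct vm_sketch *vm, struct vma_sketch *vma)
      {
              struct bind_work *work;

              /* Allocate before locking: vm->mutex must stay allocation-free. */
              work = kmalloc(sizeof(*work), GFP_KERNEL);
              if (!work)
                      return -ENOMEM;
              INIT_WORK(&work->base, bind_worker);
              work->vma = vma;

              mutex_lock(&vm->mutex);
              /* Reserve the address-space slot; no allocations under the mutex. */
              list_add_tail(&work->link, &vm->bind_list);
              mutex_unlock(&vm->mutex);

              queue_work(vm->wq, &work->base);
              return 0;
      }

      /* The binding fence stays private to the vma; wait only where required. */
      static int vma_sketch_wait_for_bind(struct vma_sketch *vma)
      {
              struct dma_fence *fence = READ_ONCE(vma->binding);

              if (!fence || dma_fence_is_signaled(fence))
                      return 0;
              return dma_fence_wait(fence, true);
      }
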
  15. 27 Sep 2019, 1 commit
  16. 11 Sep 2019, 1 commit
  17. 07 Sep 2019, 1 commit
  18. 16 Aug 2019, 1 commit
  19. 12 Aug 2019, 1 commit
  20. 09 Aug 2019, 1 commit
  21. 03 Aug 2019, 1 commit
    • drm/i915: Hide unshrinkable context objects from the shrinker · 1aff1903
      Committed by Chris Wilson
      The shrinker cannot touch objects used by the contexts (logical state
      and ring). Currently we mark those as "pin_global" to let the shrinker
      skip over them; however, if we remove them from the shrinker lists
      entirely, we don't even have to include them in our shrink accounting
      (a sketch of that bookkeeping follows this entry).
      
      By keeping the unshrinkable objects in our shrinker tracking, we report
      a large number of objects available to be shrunk, and leave the shrinker
      deeply unsatisfied when we fail to reclaim those. The shrinker will
      persist in trying to reclaim the unavailable objects, forcing the system
      into a livelock (not even hitting the dread oomkiller).
      
      v2: Extend unshrinkable protection for perma-pinned scratch and guc
      allocations (Tvrtko)
      v3: Notice that we should be pinned when marking unshrinkable and so the
      link cannot be empty; merge duplicate paths.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Matthew Auld <matthew.auld@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190802212137.22207-1-chris@chris-wilson.co.uk
      1aff1903
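
      A rough sketch of the bookkeeping described above, with invented names
      (drm_sketch, obj_sketch) rather than the real i915 structures: the
      object is dropped from the shrinker's list and subtracted from the
      counts the shrinker reports, so it is never advertised as reclaimable.

      #include <linux/list.h>
      #include <linux/spinlock.h>
      #include <linux/shrinker.h>

      struct drm_sketch {
              spinlock_t obj_lock;
              struct list_head shrink_list;
              unsigned long shrink_count;      /* objects available to shrink */
              unsigned long shrink_pages;      /* pages available to shrink */
              struct shrinker shrinker;
      };

      struct obj_sketch {
              struct list_head shrink_link;    /* empty once unshrinkable */
              unsigned long nr_pages;
      };

      /* Take a pinned, never-reclaimable object off the shrinker's radar. */
      static void obj_sketch_make_unshrinkable(struct drm_sketch *drm,
                                               struct obj_sketch *obj)
      {
              spin_lock(&drm->obj_lock);
              if (!list_empty(&obj->shrink_link)) {
                      list_del_init(&obj->shrink_link);
                      drm->shrink_count--;
                      drm->shrink_pages -= obj->nr_pages;
              }
              spin_unlock(&drm->obj_lock);
      }

      /* Report only what can really be reclaimed, so the shrinker never livelocks. */
      static unsigned long sketch_shrinker_count(struct shrinker *s,
                                                 struct shrink_control *sc)
      {
              struct drm_sketch *drm = container_of(s, struct drm_sketch, shrinker);
              unsigned long pages = READ_ONCE(drm->shrink_pages);

              return pages ? pages : SHRINK_EMPTY;
      }
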
  22. 02 Aug 2019, 2 commits
  23. 31 Jul 2019, 1 commit
  24. 13 Jul 2019, 1 commit
  25. 21 Jun 2019, 7 commits