1. 03 Jan, 2020 1 commit
  2. 01 Jan, 2020 1 commit
  3. 30 Dec, 2019 1 commit
  4. 23 Dec, 2019 1 commit
  5. 22 Dec, 2019 3 commits
  6. 20 Dec, 2019 1 commit
  7. 19 Dec, 2019 1 commit
  8. 17 Dec, 2019 2 commits
  9. 14 Dec, 2019 1 commit
  10. 08 Dec, 2019 1 commit
  11. 07 Dec, 2019 1 commit
  12. 05 Dec, 2019 1 commit
  13. 04 Dec, 2019 2 commits
    • drm/i915/gt: Set the PD again for Haswell · 13bb5b99
      Committed by Chris Wilson
      And Haswell still occasionally forgets it is meant to be using a new
      page directory, so repeat ourselves a little louder.
      
      <7> [509.919864] heartbeat rcs0 heartbeat {prio:-2147483645} not ticking
      <7> [509.919895] heartbeat 	Awake? 8
      <7> [509.919903] heartbeat 	Barriers?: no
      <7> [509.919912] heartbeat 	Heartbeat: 3008 ms ago
      <7> [509.919930] heartbeat 	Reset count: 0 (global 0)
      <7> [509.919937] heartbeat 	Requests:
      <7> [509.921008] heartbeat 		active  a7eb:56e1*  @ 5847ms:
      <7> [509.921157] heartbeat 		ring->start:  0x00001000
      <7> [509.921164] heartbeat 		ring->head:   0x00001610
      <7> [509.921170] heartbeat 		ring->tail:   0x000023d8
      <7> [509.921176] heartbeat 		ring->emit:   0x000023d8
      <7> [509.921182] heartbeat 		ring->space:  0x00002570
      <7> [509.921189] heartbeat 		ring->hwsp:   0x7fffe100
      <7> [509.921197] heartbeat [head 1628, postfix 1738, tail 1750, batch 0xffffffff_ffffffff]:
      <7> [509.921289] heartbeat [0000] 7a000002 00100002 00000000 00000000 7a000002 01154c1e 7ffff080 00000000
      <7> [509.921299] heartbeat [0020] 11000001 00002220 ffffffff 12400001 00002220 7ffff000 00000000 11000001
      <7> [509.921308] heartbeat [0040] 00002228 6e900000 7a000002 00100002 00000000 00000000 7a000002 01154c1e
      <7> [509.921317] heartbeat [0060] 7ffff080 00000000 12400001 00002228 7ffff000 00000000 7a000002 00100002
      <7> [509.921326] heartbeat [0080] 00000000 00000000 7a000002 01154c1e 7ffff080 00000000 7a000002 001010a1
      <7> [509.921335] heartbeat [00a0] 7ffff080 00000000 04000000 11000005 00022050 00010001 00012050 00010001
      <7> [509.921345] heartbeat [00c0] 0001a050 00010001 00000000 0c000000 459a110c 00000000 11000005 00022050
      <7> [509.921354] heartbeat [00e0] 00010000 00012050 00010000 0001a050 00010000 12400001 0001a050 7ffff000
      <7> [509.921363] heartbeat [0100] 00000000 04000001 18802100 00000000 7a000002 011050a1 7fffe100 000056e1
      <7> [509.921370] heartbeat [0120] 01000000 00000000
      <7> [509.921538] heartbeat 	MMIO base:  0x00002000
      <7> [509.921682] heartbeat 	CCID: 0x3fa0110d
      <7> [509.922342] heartbeat 	RING_START: 0x00001000
      <7> [509.922353] heartbeat 	RING_HEAD:  0x00001628
      <7> [509.922366] heartbeat 	RING_TAIL:  0x000023d8
      <7> [509.922381] heartbeat 	RING_CTL:   0x00003001
      <7> [509.922396] heartbeat 	RING_MODE:  0x00004000
      <7> [509.922408] heartbeat 	RING_IMR: ffffffde
      <7> [509.922421] heartbeat 	ACTHD:  0x00000000_30e01628
      <7> [509.922434] heartbeat 	BBADDR: 0x00000000_00004004
      <7> [509.922446] heartbeat 	DMA_FADDR: 0x00000000_00002800
      <7> [509.922458] heartbeat 	IPEIR: 0x00000000
      <7> [509.922470] heartbeat 	IPEHR: 0x780c0000
      <7> [509.922642] heartbeat 	PP_DIR_BASE: 0x6e700000
      <7> [509.922652] heartbeat 	PP_DIR_BASE_READ: 0x00000000
      <7> [509.922662] heartbeat 	PP_DIR_DCLV: 0xffffffff
      <7> [509.922678] heartbeat 		E  a7eb:56e1*  @ 5849ms:
      <7> [509.922689] heartbeat 		E  a7eb:56e2-  @ 5849ms:
      <7> [509.922698] heartbeat 		E  a7eb:56e3  @ 5848ms:
      <7> [509.922707] heartbeat 		E  a7eb:56e4  @ 5848ms:
      <7> [509.922715] heartbeat 		E  a7eb:56e5  @ 5847ms:
      <7> [509.922724] heartbeat 		E  a7eb:56e6  @ 5846ms:
      <7> [509.922735] heartbeat 		E  a7eb:56e7  @ 5846ms:
      <7> [509.922744] heartbeat 		...skipping 4 executing requests...
      <7> [509.922754] heartbeat 		E  a7eb:56ec  @ 3010ms:
      <7> [509.922796] heartbeat HWSP:
      <7> [509.922807] heartbeat [0000] 00000001 00000000 00000000 00000000 00000000 00000000 00000000 00000000
      <7> [509.922817] heartbeat [0020] 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
      <7> [509.922826] heartbeat *
      <7> [509.922836] heartbeat [0100] 000056e0 00000000 00000000 00000000 00000000 00000000 00000000 00000000
      <7> [509.922845] heartbeat [0120] 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
      <7> [509.922851] heartbeat *
      <7> [509.922870] heartbeat Idle? no
      <7> [509.922878] heartbeat Signals:
      <7> [509.923000] heartbeat 	[a7eb:56e2] @ 5850ms
      
      Here, we have a failed context restore after the PD switch, but note
      that the PP_DIR_BASE register does not match the LRI in the ring.
      
      Bump it to 8^W 4 loops, and with that Baytrail starts passing the sanity
      checks.
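      
      The shape of the workaround is simple: emit the page-directory load
      several times instead of once. A minimal sketch of the idea, in the
      style of the ring emitters (emit_pd_load() and its loop structure
      are illustrative, not the exact patch):
      
      /*
       * Illustrative sketch only: re-emit the page-directory load a few
       * times so Haswell does not drop it. MI_LOAD_REGISTER_IMM() and
       * i915_mmio_reg_offset() are real i915 symbols; emit_pd_load() is not.
       */
      static u32 *emit_pd_load(u32 *cs, i915_reg_t reg, u32 pd_offset)
      {
              int loops = 4; /* "bump it to 4 loops" */
      
              while (loops--) {
                      *cs++ = MI_LOAD_REGISTER_IMM(1);
                      *cs++ = i915_mmio_reg_offset(reg); /* RING_PP_DIR_BASE */
                      *cs++ = pd_offset; /* the value that must stick */
              }
      
              return cs;
      }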
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Acked-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20191203211631.3167430-1-chris@chris-wilson.co.uk
    • drm/i915/gt: Track the context validity explicitly · f70de8d2
      Committed by Chris Wilson
      Rather than assume that the context is invalid if and only if the
      engine->default_state is not set, instead track explicitly when we
      know the context has valid state -- either because we have copied
      the default_state, or because we have completed a context switch to
      save the HW state.
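      
      A minimal sketch of what tracking the validity explicitly can look
      like; the flag and helper names below are illustrative rather than
      the exact ones the patch adds:
      
      /* Illustrative only: a context flag set once the state is known good. */
      #define CONTEXT_VALID_BIT 0
      
      static inline bool context_is_valid(const unsigned long *flags)
      {
              return test_bit(CONTEXT_VALID_BIT, flags);
      }
      
      static inline void context_set_valid(unsigned long *flags)
      {
              /* after copying default_state or saving the HW state */
              set_bit(CONTEXT_VALID_BIT, flags);
      }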
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Acked-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20191203124155.3019926-1-chris@chris-wilson.co.uk
  14. 30 Nov, 2019 2 commits
  15. 14 Nov, 2019 1 commit
  16. 13 Nov, 2019 1 commit
  17. 12 Nov, 2019 1 commit
  18. 24 Oct, 2019 1 commit
  19. 18 Oct, 2019 1 commit
  20. 08 Oct, 2019 1 commit
  21. 05 Oct, 2019 1 commit
  22. 04 Oct, 2019 1 commit
    • drm/i915: Pull i915_vma_pin under the vm->mutex · 2850748e
      Committed by Chris Wilson
      Replace the struct_mutex requirement for pinning the i915_vma with the
      local vm->mutex instead. Note that the vm->mutex is tainted by the
      shrinker (we require unbinding from inside fs-reclaim) and so we cannot
      allocate while holding that mutex. Instead we have to preallocate
      workers to do the allocation and apply the PTE updates after we have
      reserved their slot in the drm_mm (using fences to order the PTE writes
      with the GPU work and with later unbind).
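      
      A rough sketch of that split, with hypothetical names: allocate
      everything the bind needs before touching vm->mutex, then let a
      fence-ordered worker write the PTEs without allocating:
      
      /*
       * Illustrative sketch, not the i915 implementation: the worker runs
       * once the drm_mm slot is reserved and must not allocate, because
       * vm->mutex is tainted by fs-reclaim.
       */
      struct pte_work {
              struct work_struct work;
              struct i915_vma *vma;
              void *scratch; /* page-table memory preallocated up front */
      };
      
      static void pte_worker(struct work_struct *w)
      {
              struct pte_work *pw = container_of(w, struct pte_work, work);
      
              apply_pte_updates(pw->vma, pw->scratch); /* hypothetical helper */
      }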
      
      In adding the asynchronous vma binding, one subtle requirement is to
      avoid coupling the binding fence into the backing object->resv. That is,
      the asynchronous binding only applies to the vma timeline itself and not
      to the pages as that is a more global timeline (the binding of one vma
      does not need to be ordered with another vma, nor does the implicit GEM
      fencing depend on a vma, only on writes to the backing store). Keeping
      the vma binding distinct from the backing store timelines is verified by
      a number of async gem_exec_fence and gem_exec_schedule tests. The way we
      do this is quite simple, we keep the fence for the vma binding separate
      and only wait on it as required, and never add it to the obj->resv
      itself.
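      
      In sketch form (the field and helper below are hypothetical), the
      bind fence hangs off the vma and is waited on explicitly, never
      added to obj->resv:
      
      /* Illustrative only: wait on the vma's own bind fence when required.
       * Reference counting and locking around the fence are elided. */
      static long wait_for_bind(struct i915_vma *vma)
      {
              struct dma_fence *fence = READ_ONCE(vma->bind_fence); /* hypothetical */
      
              return fence ? dma_fence_wait(fence, true) : 0;
      }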
      
      Another consequence in reducing the locking around the vma is the
      destruction of the vma is no longer globally serialised by struct_mutex.
      A natural solution would be to add a kref to i915_vma, but that requires
      decoupling the reference cycles, possibly by introducing a new
      i915_mm_pages object that is owned by both obj->mm and vma->pages.
      However, we have not taken that route due to the overshadowing lmem/ttm
      discussions, and instead play a series of complicated games with
      trylocks to (hopefully) ensure that only one destruction path is called!
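      
      The flavour of those games, reduced to a hypothetical sketch:
      whoever wins the trylock performs the destruction, everyone else
      defers to a worker:
      
      /* Illustrative only: a single destruction path chosen by trylock. */
      static void vma_put(struct i915_vma *vma)
      {
              struct i915_address_space *vm = vma->vm;
      
              if (!mutex_trylock(&vm->mutex)) {
                      /* the lock owner (or the worker) destroys it instead */
                      queue_work(system_unbound_wq, &vma->release_work); /* hypothetical */
                      return;
              }
      
              __vma_destroy(vma); /* hypothetical helper */
              mutex_unlock(&vm->mutex);
      }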
      
      v2: Add some commentary, and some helpers to reduce patch churn.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-4-chris@chris-wilson.co.uk
  23. 20 Sep, 2019 1 commit
    • drm/i915: Mark i915_request.timeline as a volatile, rcu pointer · d19d71fc
      Committed by Chris Wilson
      The request->timeline is only valid until the request is retired (i.e.
      before it is completed). Upon retiring the request, the context may be
      unpinned and freed, and along with it the timeline may be freed. We
      therefore need to be very careful when chasing rq->timeline that the
      pointer does not disappear beneath us. The vast majority of users are in
      a protected context, either during request construction or retirement,
      where the timeline->mutex is held and the timeline cannot disappear. It
      is those few off the beaten path (where we access a second timeline) that
      need extra scrutiny -- to be added in the next patch after first adding
      the warnings about dangerous access.
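      
      The access rule, in sketch form (a generic RCU pattern, not the
      exact accessors the series adds): unprotected readers may only peek
      under rcu_read_lock(), while holders of timeline->mutex can
      dereference safely:
      
      /* Illustrative only: rq->timeline treated as an RCU-protected pointer. */
      static struct intel_timeline *peek_timeline(struct i915_request *rq)
      {
              struct intel_timeline *tl;
      
              rcu_read_lock();
              tl = rcu_dereference(rq->timeline); /* may be freed once we unlock */
              rcu_read_unlock();
      
              return tl; /* identity checks only; do not dereference */
      }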
      
      One complication, where we cannot use the timeline->mutex itself, is
      during request submission onto hardware (under spinlocks). Here, we want
      to check on the timeline to finalize the breadcrumb, and so we need to
      impose a second rule to ensure that the request->timeline is indeed
      valid. As we are submitting the request, its context and timeline must
      be pinned, as it will be used by the hardware. Since it is pinned, we
      know the request->timeline must still be valid, and we cannot submit the
      idle barrier until after we release the engine->active.lock, ergo while
      submitting and holding that spinlock, a second thread cannot release the
      timeline.
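      
      Under those conditions the plain pointer is safe; a hedged sketch of
      asserting that rule during submission (the helper is illustrative):
      
      /*
       * Illustrative only: while submitting under engine->active.lock, the
       * context, and with it the timeline, is pinned and cannot disappear.
       */
      static struct intel_timeline *submit_timeline(struct i915_request *rq)
      {
              lockdep_assert_held(&rq->engine->active.lock);
      
              return rcu_dereference_protected(rq->timeline,
                      lockdep_is_held(&rq->engine->active.lock));
      }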
      
      v2: Don't be lazy inside selftests; hold the timeline->mutex for as long
      as we need it, and tidy up acquiring the timeline with a bit of
      refactoring (i915_active_add_request)
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190919111912.21631-1-chris@chris-wilson.co.uk
  24. 18 Sep, 2019 1 commit
  25. 10 Sep, 2019 1 commit
  26. 31 Aug, 2019 1 commit
  27. 19 Aug, 2019 1 commit
    • drm/i915: Always wrap the ring offset before resetting · 6a736ebf
      Committed by Chris Wilson
      We were passing an unwrapped offset into intel_ring_reset() on
      unpinning; sooner or later that offset had to land exactly on
      ring->size (a sketch of the wrap follows the backtrace below).
      
      <3> [314.872147] intel_ring_reset:1237 GEM_BUG_ON(!intel_ring_offset_valid(ring, tail))
      <4> [314.872272] ------------[ cut here ]------------
      <2> [314.872276] kernel BUG at drivers/gpu/drm/i915/gt/intel_ringbuffer.c:1237!
      <4> [314.872320] invalid opcode: 0000 [#1] PREEMPT SMP PTI
      <4> [314.872331] CPU: 1 PID: 3466 Comm: i915_selftest Tainted: G     U            5.3.0-rc4-CI-Patchwork_14061+ #1
      <4> [314.872346] Hardware name: Hewlett-Packard HP Compaq 8000 Elite CMT PC/3647h, BIOS 786G7 v01.02 10/22/2009
      <4> [314.872477] RIP: 0010:intel_ring_reset+0x51/0x70 [i915]
      <4> [314.872487] Code: 9e db 51 e0 48 8b 35 b6 c7 22 00 49 c7 c0 f8 d9 d6 a0 b9 d5 04 00 00 48 c7 c2 70 5b d4 a0 48 c7 c7 6c fc c0 a0 e8 cf be 58 e0 <0f> 0b 89 77 20 89 77 1c 89 77 24 e9 4f ed ff ff 0f 1f 44 00 00 66
      <4> [314.872512] RSP: 0018:ffffc9000034fa98 EFLAGS: 00010282
      <4> [314.872523] RAX: 0000000000000010 RBX: ffff8881019412c8 RCX: 0000000000000000
      <4> [314.872534] RDX: 0000000000000001 RSI: 0000000000000008 RDI: 0000000000000f20
      <4> [314.872545] RBP: ffff888104e0f740 R08: 0000000000000000 R09: 0000000000000f20
      <4> [314.872557] R10: 0000000000000000 R11: ffff888117094518 R12: ffffffffa0d3d2c0
      <4> [314.872569] R13: ffffffffa0e2a250 R14: ffffffffa0e2a1e0 R15: ffffc9000034fe88
      <4> [314.872581] FS:  00007fe6d49f6e40(0000) GS:ffff888117a80000(0000) knlGS:0000000000000000
      <4> [314.872595] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      <4> [314.872605] CR2: 000055e3283e9cc8 CR3: 0000000108842000 CR4: 00000000000406e0
      <4> [314.872616] Call Trace:
      <4> [314.872701]  intel_ring_unpin+0x1a/0x220 [i915]
      <4> [314.872787]  ring_destroy+0x48/0xc0 [i915]
      <4> [314.872870]  intel_engines_cleanup+0x24/0x40 [i915]
      <4> [314.872964]  i915_gem_driver_release+0x1b/0xf0 [i915]
      <4> [314.872984]  i915_driver_release+0x1c/0x80 [i915]
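      
      The fix is to wrap the offset before resetting; in sketch form
      (i915's intel_ring_wrap() does essentially this, relying on the
      power-of-two ring size):
      
      /* Offsets equal to ring->size wrap back to 0, satisfying the check. */
      static inline u32 ring_wrap(const struct intel_ring *ring, u32 pos)
      {
              return pos & (ring->size - 1);
      }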
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190819075835.20065-2-chris@chris-wilson.co.uk
  28. 16 Aug, 2019 1 commit
  29. 12 Aug, 2019 2 commits
  30. 11 Aug, 2019 1 commit
  31. 10 Aug, 2019 3 commits
  32. 09 Aug, 2019 1 commit