1. 27 Sep 2019: 2 commits
  2. 18 Aug 2019: 1 commit
  3. 14 Aug 2019: 1 commit
  4. 13 Aug 2019: 1 commit
  5. 12 Aug 2019: 1 commit
  6. 07 Aug 2019: 1 commit
  7. 08 May 2019: 1 commit
    • drm/i915: Seal races between async GPU cancellation, retirement and signaling · 0152b3b3
      Authored by Chris Wilson
      Currently there is an underlying assumption that i915_request_unsubmit()
      is synchronous wrt the GPU -- that is the request is no longer in flight
      as we remove it. In the near future that may change, and this may upset
      our signaling as we can process an interrupt for that request while it
      is no longer in flight.
      
      CPU0					CPU1
      intel_engine_breadcrumbs_irq
      (queue request completion)
      					i915_request_cancel_signaling
      ...					...
      					i915_request_enable_signaling
      dma_fence_signal
      
      Hence in the time it took us to drop the lock to signal the request, a
      preemption event may have occurred and re-queued the request. In the
      process, that request would have seen I915_FENCE_FLAG_SIGNAL clear and
      so reused the rq->signal_link that was in use on CPU0, leading to bad
      pointer chasing in intel_engine_breadcrumbs_irq.
      
      A related issue was that if someone started listening for a signal on a
      completed but no longer in-flight request, we missed the opportunity to
      immediately signal that request.
      
      Furthermore, as intel_contexts may be immediately released during
      request retirement, in order to be entirely sure that
      intel_engine_breadcrumbs_irq may no longer dereference the intel_context
      (ce->signals and ce->signal_link), we must wait for the irq spinlock.
      
      In order to prevent the race, we use a bit in the fence.flags to signal
      the transfer onto the signal list inside intel_engine_breadcrumbs_irq.
      For simplicity, we use the DMA_FENCE_FLAG_SIGNALED_BIT as it then
      quickly signals to any outside observer that the fence is indeed signaled.
      
      v2: Sketch out potential dma-fence API for manual signaling
      v3: And the test_and_set_bit()
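      The claim-bit scheme described above can be modelled in userspace. This is an illustrative sketch, not the driver's code: `test_and_set_bit()` is re-implemented here on C11 atomics with the same contract as the kernel helper, and `claim_signaling` is a hypothetical name for the "whoever flips the bit owns the signal_link" rule.

```c
/* Userspace model of the claim-bit pattern: the path that wins
 * test_and_set_bit() on the fence flag owns the signal link, so a
 * re-queued request cannot reuse a link still being walked by the
 * irq handler. Names are illustrative, not the kernel's. */
#include <stdatomic.h>
#include <stdbool.h>

#define FENCE_FLAG_SIGNALED_BIT 0

/* Atomically set the bit, returning its previous value -- the same
 * contract as the kernel's test_and_set_bit(). */
static bool test_and_set_bit(int nr, atomic_ulong *addr)
{
    unsigned long mask = 1UL << nr;
    return atomic_fetch_or(addr, mask) & mask;
}

/* Only the caller that flips the bit 0 -> 1 may touch the signal link;
 * everyone else observes the fence as already signaled and backs off. */
static bool claim_signaling(atomic_ulong *fence_flags)
{
    return !test_and_set_bit(FENCE_FLAG_SIGNALED_BIT, fence_flags);
}
```

      Using DMA_FENCE_FLAG_SIGNALED_BIT as the claim bit has the side effect the commit mentions: outside observers polling the flag see the fence as signaled the instant the transfer is claimed.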
      
      Fixes: 52c0fdb2 ("drm/i915: Replace global breadcrumbs with per-context interrupt tracking")
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190508112452.18942-1-chris@chris-wilson.co.uk
  8. 07 May 2019: 1 commit
  9. 25 Apr 2019: 1 commit
  10. 16 Apr 2019: 1 commit
  11. 09 Mar 2019: 1 commit
    • drm/i915: Acquire breadcrumb ref before cancelling · a89c0962
      Authored by Chris Wilson
      We may race the interrupt signaling with retirement, in which case the
      order in which we acquire the reference inside the interrupt is vital to
      provide the correct barrier against the request being freed in
      retirement, i.e. we need to acquire our reference before marking the
      breadcrumb as cancelled (as soon as the breadcrumb is cancelled
      retirement may drop its reference to the request without serialisation
      with the interrupt handler).
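      The acquire-before-cancel ordering can be sketched in userspace with C11 atomics. This is a model under stated assumptions, not the i915 code: `struct request`, `irq_take_signaler`, and the use of bit 0 for the armed breadcrumb are all illustrative.

```c
/* Userspace model of "acquire reference before cancelling": the irq
 * handler pins the request *before* racing to clear the breadcrumb
 * bit, because retirement may free the request as soon as it sees the
 * bit clear. Names and layout are illustrative, not the driver's. */
#include <stdatomic.h>
#include <stdbool.h>

struct request {
    atomic_int refcount;
    atomic_ulong flags;     /* bit 0 models the armed breadcrumb */
};

static bool irq_take_signaler(struct request *rq)
{
    /* 1. Pin the request first, so it cannot be freed under us. */
    atomic_fetch_add(&rq->refcount, 1);
    /* 2. Then race to cancel the breadcrumb; if it was already
     * cancelled, drop our pin and walk away. */
    if (!(atomic_fetch_and(&rq->flags, ~1UL) & 1UL)) {
        atomic_fetch_sub(&rq->refcount, 1);
        return false;
    }
    return true;    /* caller holds a reference across signaling */
}
```

      Reversing steps 1 and 2 reintroduces the bug in the slab dump below: retirement can observe the cleared bit and free the request before the handler's reference lands.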
      
      <3>[  683.372226] BUG i915_request (Tainted: G     U           ): Object already free
      <3>[  683.372269] -----------------------------------------------------------------------------
      
      <4>[  683.372323] Disabling lock debugging due to kernel taint
      <3>[  683.372393] INFO: Allocated in i915_request_alloc+0x169/0x810 [i915] age=0 cpu=2 pid=1420
      <3>[  683.372412] 	kmem_cache_alloc+0x21c/0x280
      <3>[  683.372478] 	i915_request_alloc+0x169/0x810 [i915]
      <3>[  683.372540] 	i915_gem_do_execbuffer+0x84e/0x1ae0 [i915]
      <3>[  683.372603] 	i915_gem_execbuffer2_ioctl+0x11b/0x420 [i915]
      <3>[  683.372617] 	drm_ioctl_kernel+0x83/0xf0
      <3>[  683.372626] 	drm_ioctl+0x2f3/0x3b0
      <3>[  683.372636] 	do_vfs_ioctl+0xa0/0x6e0
      <3>[  683.372645] 	ksys_ioctl+0x35/0x60
      <3>[  683.372654] 	__x64_sys_ioctl+0x11/0x20
      <3>[  683.372664] 	do_syscall_64+0x55/0x190
      <3>[  683.372675] 	entry_SYSCALL_64_after_hwframe+0x49/0xbe
      <3>[  683.372740] INFO: Freed in i915_request_retire_upto+0xfb/0x2e0 [i915] age=0 cpu=0 pid=1419
      <3>[  683.372807] 	i915_request_retire_upto+0xfb/0x2e0 [i915]
      <3>[  683.372870] 	i915_request_add+0x3bd/0x9d0 [i915]
      <3>[  683.372931] 	i915_gem_do_execbuffer+0x141c/0x1ae0 [i915]
      <3>[  683.372991] 	i915_gem_execbuffer2_ioctl+0x11b/0x420 [i915]
      <3>[  683.373001] 	drm_ioctl_kernel+0x83/0xf0
      <3>[  683.373008] 	drm_ioctl+0x2f3/0x3b0
      <3>[  683.373015] 	do_vfs_ioctl+0xa0/0x6e0
      <3>[  683.373023] 	ksys_ioctl+0x35/0x60
      <3>[  683.373030] 	__x64_sys_ioctl+0x11/0x20
      <3>[  683.373037] 	do_syscall_64+0x55/0x190
      <3>[  683.373045] 	entry_SYSCALL_64_after_hwframe+0x49/0xbe
      <3>[  683.373054] INFO: Slab 0x0000000079bcdd71 objects=30 used=2 fp=0x000000006d77b8af flags=0x8000000000010201
      <3>[  683.373069] INFO: Object 0x000000006d77b8af @offset=24000 fp=0x000000007b061eab
      
      <3>[  683.373083] Redzone 00000000ee47ef28: bb bb bb bb bb bb bb bb bb bb bb bb bb bb bb bb  ................
      <3>[  683.373097] Redzone 000000000cb91471: bb bb bb bb bb bb bb bb bb bb bb bb bb bb bb bb  ................
      <3>[  683.373111] Redzone 00000000cf2b86ee: bb bb bb bb bb bb bb bb bb bb bb bb bb bb bb bb  ................
      <3>[  683.373125] Redzone 00000000f1f5a2cd: bb bb bb bb bb bb bb bb bb bb bb bb bb bb bb bb  ................
      <3>[  683.373139] Object 000000006d77b8af: 00 00 00 00 5a 5a 5a 5a 00 3c 49 c0 ff ff ff ff  ....ZZZZ.<I.....
      <3>[  683.373153] Object 000000006f9b6204: 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a  ZZZZZZZZZZZZZZZZ
      <3>[  683.373167] Object 0000000091410ffb: e0 dd 6b fa 87 9f ff ff e0 dd 6b fa 87 9f ff ff  ..k.......k.....
      <3>[  683.373181] Object 000000004cdf799d: 20 de 6b fa 87 9f ff ff 3d 00 00 00 00 00 00 00   .k.....=.......
      <3>[  683.373195] Object 00000000545afebc: aa b3 00 00 00 00 00 00 0f 00 00 00 00 00 00 00  ................
      <3>[  683.373209] Object 00000000e4a394a8: 25 bd bd 1b 9f 00 00 00 00 00 00 00 5a 5a 5a 5a  %...........ZZZZ
      <3>[  683.373223] Object 0000000029a7878a: 00 00 00 00 ad 4e ad de ff ff ff ff 5a 5a 5a 5a  .....N......ZZZZ
      <3>[  683.373237] Object 00000000d37797b3: ff ff ff ff ff ff ff ff e8 6e 57 c0 ff ff ff ff  .........nW.....
      <3>[  683.373251] Object 00000000d50414f6: 00 b3 c8 8e ff ff ff ff 80 b0 c8 8e ff ff ff ff  ................
      <3>[  683.373265] Object 00000000c28e8847: 41 01 4b c0 ff ff ff ff 00 00 88 8e 88 9f ff ff  A.K.............
      <3>[  683.373279] Object 00000000c74212ab: 38 c1 6d 8a 88 9f ff ff 58 21 74 8a 88 9f ff ff  8.m.....X!t.....
      <3>[  683.373293] Object 000000000d8012cf: c0 c1 6d 8a 88 9f ff ff 58 79 dd d9 87 9f ff ff  ..m.....Xy......
      <3>[  683.373306] Object 00000000c9900b91: 98 d0 4e 8a 88 9f ff ff 58 3c e8 9b 88 9f ff ff  ..N.....X<......
      <3>[  683.373320] Object 0000000044bb8c3d: 58 3c e8 9b 88 9f ff ff 64 f5 04 00 00 00 00 00  X<......d.......
      <3>[  683.373334] Object 00000000180c4cca: 00 00 00 00 ad 4e ad de ff ff ff ff 5a 5a 5a 5a  .....N......ZZZZ
      <3>[  683.373348] Object 00000000c9044498: ff ff ff ff ff ff ff ff e0 6e 57 c0 ff ff ff ff  .........nW.....
      <3>[  683.373362] Object 0000000072d0dfb3: 00 00 00 00 00 00 00 00 c0 b1 c8 8e ff ff ff ff  ................
      <3>[  683.373376] Object 0000000081f198b9: 55 01 4b c0 ff ff ff ff d8 de 6b fa 87 9f ff ff  U.K.......k.....
      <3>[  683.373390] Object 000000006a375a13: d8 de 6b fa 87 9f ff ff cc 05 39 c0 ff ff ff ff  ..k.......9.....
      <3>[  683.373404] Object 00000000b8392dd1: ff ff ff ff 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a  ....ZZZZZZZZZZZZ
      <3>[  683.373418] Object 00000000e5c1bbcb: 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a  ZZZZZZZZZZZZZZZZ
      <3>[  683.373432] Object 00000000199feccd: 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a  ZZZZZZZZZZZZZZZZ
      <3>[  683.373446] Object 0000000020f5e08b: 20 df 6b fa 87 9f ff ff 20 df 6b fa 87 9f ff ff   .k..... .k.....
      <3>[  683.373460] Object 0000000090591b0f: 30 df 6b fa 87 9f ff ff 30 df 6b fa 87 9f ff ff  0.k.....0.k.....
      <3>[  683.373473] Object 00000000232f7cd0: 40 df 6b fa 87 9f ff ff 40 df 6b fa 87 9f ff ff  @.k.....@.k.....
      <3>[  683.373487] Object 0000000060458027: 50 df 6b fa 87 9f ff ff 50 df 6b fa 87 9f ff ff  P.k.....P.k.....
      <3>[  683.373501] Object 00000000e3c82ce2: 06 00 00 00 00 00 00 00 5a 5a 5a 5a 5a 5a 5a 5a  ........ZZZZZZZZ
      <3>[  683.373515] Object 00000000ec804eb8: 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a  ZZZZZZZZZZZZZZZZ
      <3>[  683.373529] Object 00000000ce7ccc08: 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a  ZZZZZZZZZZZZZZZZ
      <3>[  683.373543] Object 000000002dbc575c: 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a  ZZZZZZZZZZZZZZZZ
      <3>[  683.373557] Object 00000000b86d3417: 5a 5a 5a 5a 5a 5a 5a 5a 00 de 6b fa 87 9f ff ff  ZZZZZZZZ..k.....
      <3>[  683.373571] Object 00000000d1e82276: b8 61 dd d9 87 9f ff ff a0 06 00 00 d0 06 00 00  .a..............
      <3>[  683.373585] Object 00000000cc53f969: e8 06 00 00 20 07 00 00 28 07 00 00 00 00 00 00  .... ...(.......
      <3>[  683.373599] Object 00000000ea2426d2: 40 0c 8c 7b 88 9f ff ff 00 00 00 00 00 00 00 00  @..{............
      <3>[  683.373613] Object 00000000b860c1c3: 68 0d 8c 7b 88 9f ff ff 68 25 8c 7b 88 9f ff ff  h..{....h%.{....
      <3>[  683.373627] Object 0000000016455ea0: 96 d5 05 00 01 00 00 00 00 5a 5a 5a 5a 5a 5a 5a  .........ZZZZZZZ
      <3>[  683.373640] Object 00000000e66ede82: 00 e0 6b fa 87 9f ff ff 00 e0 6b fa 87 9f ff ff  ..k.......k.....
      <3>[  683.373654] Object 0000000080964939: 10 e0 6b fa 87 9f ff ff 10 e0 6b fa 87 9f ff ff  ..k.......k.....
      <3>[  683.373668] Object 00000000e7ffc5dd: 00 00 00 00 00 00 00 00 00 01 00 00 00 00 ad de  ................
      <3>[  683.373682] Object 000000000ce9d6ca: 00 02 00 00 00 00 ad de 5a 5a 5a 5a 5a 5a 5a 5a  ........ZZZZZZZZ
      <3>[  683.373696] Object 00000000386659d0: 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a  ZZZZZZZZZZZZZZZZ
      <3>[  683.373710] Redzone 0000000075d2069d: bb bb bb bb bb bb bb bb                          ........
      <3>[  683.373723] Padding 0000000054e14c6b: 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a  ZZZZZZZZZZZZZZZZ
      <3>[  683.373737] Padding 00000000425e5b34: 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a  ZZZZZZZZZZZZZZZZ
      <3>[  683.373751] Padding 00000000ad3d4db9: 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a  ZZZZZZZZZZZZZZZZ
      <4>[  683.373767] CPU: 1 PID: 151 Comm: kworker/1:2 Tainted: G    BU            5.0.0-rc8-g39139489403b-drmtip_236+ #1
      <4>[  683.373769] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3087.A00.1902250334 02/25/2019
      <4>[  683.373773] Workqueue: events delayed_fput
      <4>[  683.373775] Call Trace:
      <4>[  683.373777]  <IRQ>
      <4>[  683.373781]  dump_stack+0x67/0x9b
      <4>[  683.373783]  free_debug_processing+0x344/0x370
      <4>[  683.373832]  ? intel_engine_breadcrumbs_irq+0x2e4/0x380 [i915]
      <4>[  683.373836]  __slab_free+0x337/0x4f0
      <4>[  683.373840]  ? _raw_spin_unlock_irqrestore+0x39/0x60
      <4>[  683.373844]  ? debug_check_no_obj_freed+0x132/0x210
      <4>[  683.373889]  ? intel_engine_breadcrumbs_irq+0x2e4/0x380 [i915]
      <4>[  683.373892]  ? kmem_cache_free+0x275/0x2e0
      <4>[  683.373894]  kmem_cache_free+0x275/0x2e0
      <4>[  683.373939]  intel_engine_breadcrumbs_irq+0x2e4/0x380 [i915]
      <4>[  683.373984]  gen8_cs_irq_handler+0x4e/0xa0 [i915]
      <4>[  683.374026]  gen11_irq_handler+0x24b/0x330 [i915]
      <4>[  683.374032]  __handle_irq_event_percpu+0x41/0x2d0
      <4>[  683.374034]  ? handle_irq_event+0x27/0x50
      <4>[  683.374038]  handle_irq_event_percpu+0x2b/0x70
      <4>[  683.374040]  handle_irq_event+0x2f/0x50
      <4>[  683.374044]  handle_edge_irq+0xe7/0x190
      <4>[  683.374048]  handle_irq+0x67/0x160
      <4>[  683.374051]  do_IRQ+0x5e/0x130
      <4>[  683.374054]  common_interrupt+0xf/0xf
      
      Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=109827
      Fixes: 52c0fdb2 ("drm/i915: Replace global breadcrumbs with per-context interrupt tracking")
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190304114113.371-1-chris@chris-wilson.co.uk
      (cherry picked from commit e781a7a3)
      Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
  12. 05 Mar 2019: 1 commit
    • drm/i915: Acquire breadcrumb ref before cancelling · e781a7a3
      Authored by Chris Wilson
      We may race the interrupt signaling with retirement, in which case the
      order in which we acquire the reference inside the interrupt is vital to
      provide the correct barrier against the request being freed in
      retirement, i.e. we need to acquire our reference before marking the
      breadcrumb as cancelled (as soon as the breadcrumb is cancelled
      retirement may drop its reference to the request without serialisation
      with the interrupt handler).
      
      <3>[  683.372226] BUG i915_request (Tainted: G     U           ): Object already free
      <3>[  683.372269] -----------------------------------------------------------------------------
      
      <4>[  683.372323] Disabling lock debugging due to kernel taint
      <3>[  683.372393] INFO: Allocated in i915_request_alloc+0x169/0x810 [i915] age=0 cpu=2 pid=1420
      <3>[  683.372412] 	kmem_cache_alloc+0x21c/0x280
      <3>[  683.372478] 	i915_request_alloc+0x169/0x810 [i915]
      <3>[  683.372540] 	i915_gem_do_execbuffer+0x84e/0x1ae0 [i915]
      <3>[  683.372603] 	i915_gem_execbuffer2_ioctl+0x11b/0x420 [i915]
      <3>[  683.372617] 	drm_ioctl_kernel+0x83/0xf0
      <3>[  683.372626] 	drm_ioctl+0x2f3/0x3b0
      <3>[  683.372636] 	do_vfs_ioctl+0xa0/0x6e0
      <3>[  683.372645] 	ksys_ioctl+0x35/0x60
      <3>[  683.372654] 	__x64_sys_ioctl+0x11/0x20
      <3>[  683.372664] 	do_syscall_64+0x55/0x190
      <3>[  683.372675] 	entry_SYSCALL_64_after_hwframe+0x49/0xbe
      <3>[  683.372740] INFO: Freed in i915_request_retire_upto+0xfb/0x2e0 [i915] age=0 cpu=0 pid=1419
      <3>[  683.372807] 	i915_request_retire_upto+0xfb/0x2e0 [i915]
      <3>[  683.372870] 	i915_request_add+0x3bd/0x9d0 [i915]
      <3>[  683.372931] 	i915_gem_do_execbuffer+0x141c/0x1ae0 [i915]
      <3>[  683.372991] 	i915_gem_execbuffer2_ioctl+0x11b/0x420 [i915]
      <3>[  683.373001] 	drm_ioctl_kernel+0x83/0xf0
      <3>[  683.373008] 	drm_ioctl+0x2f3/0x3b0
      <3>[  683.373015] 	do_vfs_ioctl+0xa0/0x6e0
      <3>[  683.373023] 	ksys_ioctl+0x35/0x60
      <3>[  683.373030] 	__x64_sys_ioctl+0x11/0x20
      <3>[  683.373037] 	do_syscall_64+0x55/0x190
      <3>[  683.373045] 	entry_SYSCALL_64_after_hwframe+0x49/0xbe
      <3>[  683.373054] INFO: Slab 0x0000000079bcdd71 objects=30 used=2 fp=0x000000006d77b8af flags=0x8000000000010201
      <3>[  683.373069] INFO: Object 0x000000006d77b8af @offset=24000 fp=0x000000007b061eab
      
      <3>[  683.373083] Redzone 00000000ee47ef28: bb bb bb bb bb bb bb bb bb bb bb bb bb bb bb bb  ................
      <3>[  683.373097] Redzone 000000000cb91471: bb bb bb bb bb bb bb bb bb bb bb bb bb bb bb bb  ................
      <3>[  683.373111] Redzone 00000000cf2b86ee: bb bb bb bb bb bb bb bb bb bb bb bb bb bb bb bb  ................
      <3>[  683.373125] Redzone 00000000f1f5a2cd: bb bb bb bb bb bb bb bb bb bb bb bb bb bb bb bb  ................
      <3>[  683.373139] Object 000000006d77b8af: 00 00 00 00 5a 5a 5a 5a 00 3c 49 c0 ff ff ff ff  ....ZZZZ.<I.....
      <3>[  683.373153] Object 000000006f9b6204: 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a  ZZZZZZZZZZZZZZZZ
      <3>[  683.373167] Object 0000000091410ffb: e0 dd 6b fa 87 9f ff ff e0 dd 6b fa 87 9f ff ff  ..k.......k.....
      <3>[  683.373181] Object 000000004cdf799d: 20 de 6b fa 87 9f ff ff 3d 00 00 00 00 00 00 00   .k.....=.......
      <3>[  683.373195] Object 00000000545afebc: aa b3 00 00 00 00 00 00 0f 00 00 00 00 00 00 00  ................
      <3>[  683.373209] Object 00000000e4a394a8: 25 bd bd 1b 9f 00 00 00 00 00 00 00 5a 5a 5a 5a  %...........ZZZZ
      <3>[  683.373223] Object 0000000029a7878a: 00 00 00 00 ad 4e ad de ff ff ff ff 5a 5a 5a 5a  .....N......ZZZZ
      <3>[  683.373237] Object 00000000d37797b3: ff ff ff ff ff ff ff ff e8 6e 57 c0 ff ff ff ff  .........nW.....
      <3>[  683.373251] Object 00000000d50414f6: 00 b3 c8 8e ff ff ff ff 80 b0 c8 8e ff ff ff ff  ................
      <3>[  683.373265] Object 00000000c28e8847: 41 01 4b c0 ff ff ff ff 00 00 88 8e 88 9f ff ff  A.K.............
      <3>[  683.373279] Object 00000000c74212ab: 38 c1 6d 8a 88 9f ff ff 58 21 74 8a 88 9f ff ff  8.m.....X!t.....
      <3>[  683.373293] Object 000000000d8012cf: c0 c1 6d 8a 88 9f ff ff 58 79 dd d9 87 9f ff ff  ..m.....Xy......
      <3>[  683.373306] Object 00000000c9900b91: 98 d0 4e 8a 88 9f ff ff 58 3c e8 9b 88 9f ff ff  ..N.....X<......
      <3>[  683.373320] Object 0000000044bb8c3d: 58 3c e8 9b 88 9f ff ff 64 f5 04 00 00 00 00 00  X<......d.......
      <3>[  683.373334] Object 00000000180c4cca: 00 00 00 00 ad 4e ad de ff ff ff ff 5a 5a 5a 5a  .....N......ZZZZ
      <3>[  683.373348] Object 00000000c9044498: ff ff ff ff ff ff ff ff e0 6e 57 c0 ff ff ff ff  .........nW.....
      <3>[  683.373362] Object 0000000072d0dfb3: 00 00 00 00 00 00 00 00 c0 b1 c8 8e ff ff ff ff  ................
      <3>[  683.373376] Object 0000000081f198b9: 55 01 4b c0 ff ff ff ff d8 de 6b fa 87 9f ff ff  U.K.......k.....
      <3>[  683.373390] Object 000000006a375a13: d8 de 6b fa 87 9f ff ff cc 05 39 c0 ff ff ff ff  ..k.......9.....
      <3>[  683.373404] Object 00000000b8392dd1: ff ff ff ff 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a  ....ZZZZZZZZZZZZ
      <3>[  683.373418] Object 00000000e5c1bbcb: 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a  ZZZZZZZZZZZZZZZZ
      <3>[  683.373432] Object 00000000199feccd: 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a  ZZZZZZZZZZZZZZZZ
      <3>[  683.373446] Object 0000000020f5e08b: 20 df 6b fa 87 9f ff ff 20 df 6b fa 87 9f ff ff   .k..... .k.....
      <3>[  683.373460] Object 0000000090591b0f: 30 df 6b fa 87 9f ff ff 30 df 6b fa 87 9f ff ff  0.k.....0.k.....
      <3>[  683.373473] Object 00000000232f7cd0: 40 df 6b fa 87 9f ff ff 40 df 6b fa 87 9f ff ff  @.k.....@.k.....
      <3>[  683.373487] Object 0000000060458027: 50 df 6b fa 87 9f ff ff 50 df 6b fa 87 9f ff ff  P.k.....P.k.....
      <3>[  683.373501] Object 00000000e3c82ce2: 06 00 00 00 00 00 00 00 5a 5a 5a 5a 5a 5a 5a 5a  ........ZZZZZZZZ
      <3>[  683.373515] Object 00000000ec804eb8: 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a  ZZZZZZZZZZZZZZZZ
      <3>[  683.373529] Object 00000000ce7ccc08: 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a  ZZZZZZZZZZZZZZZZ
      <3>[  683.373543] Object 000000002dbc575c: 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a  ZZZZZZZZZZZZZZZZ
      <3>[  683.373557] Object 00000000b86d3417: 5a 5a 5a 5a 5a 5a 5a 5a 00 de 6b fa 87 9f ff ff  ZZZZZZZZ..k.....
      <3>[  683.373571] Object 00000000d1e82276: b8 61 dd d9 87 9f ff ff a0 06 00 00 d0 06 00 00  .a..............
      <3>[  683.373585] Object 00000000cc53f969: e8 06 00 00 20 07 00 00 28 07 00 00 00 00 00 00  .... ...(.......
      <3>[  683.373599] Object 00000000ea2426d2: 40 0c 8c 7b 88 9f ff ff 00 00 00 00 00 00 00 00  @..{............
      <3>[  683.373613] Object 00000000b860c1c3: 68 0d 8c 7b 88 9f ff ff 68 25 8c 7b 88 9f ff ff  h..{....h%.{....
      <3>[  683.373627] Object 0000000016455ea0: 96 d5 05 00 01 00 00 00 00 5a 5a 5a 5a 5a 5a 5a  .........ZZZZZZZ
      <3>[  683.373640] Object 00000000e66ede82: 00 e0 6b fa 87 9f ff ff 00 e0 6b fa 87 9f ff ff  ..k.......k.....
      <3>[  683.373654] Object 0000000080964939: 10 e0 6b fa 87 9f ff ff 10 e0 6b fa 87 9f ff ff  ..k.......k.....
      <3>[  683.373668] Object 00000000e7ffc5dd: 00 00 00 00 00 00 00 00 00 01 00 00 00 00 ad de  ................
      <3>[  683.373682] Object 000000000ce9d6ca: 00 02 00 00 00 00 ad de 5a 5a 5a 5a 5a 5a 5a 5a  ........ZZZZZZZZ
      <3>[  683.373696] Object 00000000386659d0: 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a  ZZZZZZZZZZZZZZZZ
      <3>[  683.373710] Redzone 0000000075d2069d: bb bb bb bb bb bb bb bb                          ........
      <3>[  683.373723] Padding 0000000054e14c6b: 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a  ZZZZZZZZZZZZZZZZ
      <3>[  683.373737] Padding 00000000425e5b34: 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a  ZZZZZZZZZZZZZZZZ
      <3>[  683.373751] Padding 00000000ad3d4db9: 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a  ZZZZZZZZZZZZZZZZ
      <4>[  683.373767] CPU: 1 PID: 151 Comm: kworker/1:2 Tainted: G    BU            5.0.0-rc8-g39139489403b-drmtip_236+ #1
      <4>[  683.373769] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3087.A00.1902250334 02/25/2019
      <4>[  683.373773] Workqueue: events delayed_fput
      <4>[  683.373775] Call Trace:
      <4>[  683.373777]  <IRQ>
      <4>[  683.373781]  dump_stack+0x67/0x9b
      <4>[  683.373783]  free_debug_processing+0x344/0x370
      <4>[  683.373832]  ? intel_engine_breadcrumbs_irq+0x2e4/0x380 [i915]
      <4>[  683.373836]  __slab_free+0x337/0x4f0
      <4>[  683.373840]  ? _raw_spin_unlock_irqrestore+0x39/0x60
      <4>[  683.373844]  ? debug_check_no_obj_freed+0x132/0x210
      <4>[  683.373889]  ? intel_engine_breadcrumbs_irq+0x2e4/0x380 [i915]
      <4>[  683.373892]  ? kmem_cache_free+0x275/0x2e0
      <4>[  683.373894]  kmem_cache_free+0x275/0x2e0
      <4>[  683.373939]  intel_engine_breadcrumbs_irq+0x2e4/0x380 [i915]
      <4>[  683.373984]  gen8_cs_irq_handler+0x4e/0xa0 [i915]
      <4>[  683.374026]  gen11_irq_handler+0x24b/0x330 [i915]
      <4>[  683.374032]  __handle_irq_event_percpu+0x41/0x2d0
      <4>[  683.374034]  ? handle_irq_event+0x27/0x50
      <4>[  683.374038]  handle_irq_event_percpu+0x2b/0x70
      <4>[  683.374040]  handle_irq_event+0x2f/0x50
      <4>[  683.374044]  handle_edge_irq+0xe7/0x190
      <4>[  683.374048]  handle_irq+0x67/0x160
      <4>[  683.374051]  do_IRQ+0x5e/0x130
      <4>[  683.374054]  common_interrupt+0xf/0xf
      
      Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=109827
      Fixes: 52c0fdb2 ("drm/i915: Replace global breadcrumbs with per-context interrupt tracking")
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190304114113.371-1-chris@chris-wilson.co.uk
  13. 30 Jan 2019: 2 commits
    • drm/i915: Drop fake breadcrumb irq · 789659f4
      Authored by Chris Wilson
      Missed breadcrumb detection is defunct due to the tight coupling with
      dma_fence signaling and the myriad ways we may signal fences from
      everywhere but from an interrupt, i.e. we frequently signal a fence
      before we even see its interrupt. This means that even if we miss an
      interrupt for a fence, it still is signaled before our breadcrumb
      hangcheck fires, so simplify the breadcrumb hangchecking by moving it
      into the GPU hangcheck and forgo fake interrupts.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190129205230.19056-3-chris@chris-wilson.co.uk
    • drm/i915: Replace global breadcrumbs with per-context interrupt tracking · 52c0fdb2
      Authored by Chris Wilson
      A few years ago, see commit 688e6c72 ("drm/i915: Slaughter the
      thundering i915_wait_request herd"), the issue of handling multiple
      clients waiting in parallel was brought to our attention. The
      requirement was that every client should be woken immediately upon its
      request being signaled, without incurring any cpu overhead.
      
      Handling certain fragilities of our hw meant that we could not do a
      simple check inside the irq handler (some generations required almost
      unbounded delays before we could be sure of seqno coherency), and so
      request completion checking required delegation.
      
      Before commit 688e6c72, the solution was simple. Every client
      waiting on a request would be woken on every interrupt and each would do
      a heavyweight check to see if their request was complete. Commit
      688e6c72 introduced an rbtree so that only the earliest waiter on
      the global timeline would be woken, and would wake the next and so on.
      (Along with various complications to handle requests being reordered
      along the global timeline, and also a requirement for kthread to provide
      a delegate for fence signaling that had no process context.)
      
      The global rbtree depends on knowing the execution timeline (and global
      seqno). Without knowing that order, we must instead check all contexts
      queued to the HW to see which may have advanced. We trim that list by
      only checking queued contexts that are being waited on, but still we
      keep a list of all active contexts and their active signalers that we
      inspect from inside the irq handler. By moving the waiters onto the fence
      signal list, we can combine the client wakeup with the dma_fence
      signaling (a dramatic reduction in complexity, but does require the HW
      being coherent, the seqno must be visible from the cpu before the
      interrupt is raised - we keep a timer backup just in case).
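      The per-context signal-list walk described above can be sketched in miniature. A minimal userspace model, assuming illustrative structures (the driver's real `intel_context` lists and locking are more involved): lists are kept in seqno order, so the irq-side walk stops at the first request the hardware has not yet passed.

```c
/* Minimal model of walking a seqno-ordered signal list: signal every
 * waiter whose seqno the HW has completed, stop at the first that it
 * has not. Structures are illustrative, not the driver's. */
#include <stdbool.h>
#include <stddef.h>

struct signal {
    unsigned int seqno;
    bool signaled;          /* dma_fence_signal() in the driver */
    struct signal *next;
};

static void signal_breadcrumbs(struct signal *head, unsigned int hw_seqno)
{
    for (struct signal *s = head; s; s = s->next) {
        /* Wraparound-safe "hw_seqno >= s->seqno" comparison. */
        if ((int)(hw_seqno - s->seqno) < 0)
            break;          /* later requests cannot be done yet */
        s->signaled = true;
    }
}
```

      The ordering is what makes the combined client-wakeup/fence-signal path cheap: one coherent seqno read bounds how far the walk proceeds.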
      
      Having previously fixed all the issues with irq-seqno serialisation (by
      inserting delays onto the GPU after each request instead of random delays
      on the CPU after each interrupt), we can rely on the seqno state to
      perform direct wakeups from the interrupt handler. This allows us to
      preserve our single context switch behaviour of the current routine,
      with the only downside that we lose the RT priority sorting of wakeups.
      In general, direct wakeup latency of multiple clients is about the same
      (about 10% better in most cases) with a reduction in total CPU time spent
      in the waiter (about 20-50% depending on gen). Average herd behaviour is
      improved, but at the cost of not delegating wakeups on task_prio.
      
      v2: Capture fence signaling state for error state and add comments to
      warm even the most cold of hearts.
      v3: Check if the request is still active before busywaiting
      v4: Reduce the amount of pointer misdirection with list_for_each_safe
      and using a local i915_request variable inside the loops
      v5: Add a missing pluralisation to a purely informative selftest message.
      
      References: 688e6c72 ("drm/i915: Slaughter the thundering i915_wait_request herd")
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190129205230.19056-2-chris@chris-wilson.co.uk
  14. 22 Jan 2019: 1 commit
  15. 18 Jan 2019: 2 commits
  16. 31 Dec 2018: 1 commit
  17. 03 Dec 2018: 1 commit
  18. 07 Aug 2018: 1 commit
  19. 29 Jun 2018: 1 commit
  20. 19 May 2018: 1 commit
  21. 26 Apr 2018: 1 commit
  22. 25 Apr 2018: 1 commit
  23. 13 Mar 2018: 1 commit
  24. 07 Mar 2018: 2 commits
  25. 06 Mar 2018: 2 commits
  26. 05 Mar 2018: 1 commit
  27. 22 Feb 2018: 1 commit
  28. 14 Feb 2018: 1 commit
  29. 07 Feb 2018: 1 commit
  30. 05 Feb 2018: 1 commit
  31. 01 Feb 2018: 1 commit
    • drm/i915: Always run hangcheck while the GPU is busy · b26a32a8
      Authored by Chris Wilson
      Previously, we relied on only running the hangcheck while somebody was
      waiting on the GPU, in order to minimise the amount of time hangcheck
      had to run. (If nobody was watching the GPU, nobody would notice if the
      GPU wasn't responding -- eventually somebody would care and so kick
      hangcheck into action.) However, this falls apart from around commit
      4680816b ("drm/i915: Wait first for submission, before waiting for
      request completion"), as not all waiters declare themselves to hangcheck
      and so we could switch off hangcheck and miss GPU hangs even when
      waiting under the struct_mutex.
      
      If we enable hangcheck from the first request submission, and let it run
      until the GPU is idle again, we forgo all the complexity involved with
      only enabling around waiters. We just have to remember to be careful that
      we do not declare a GPU hang when idly waiting for the next request to
      become ready, as we will run hangcheck continuously even when the
      engines are stalled waiting for external events. This should be true
      already as we should only be tracking requests submitted to hardware for
      execution as an indicator that the engine is busy.
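      The arm/disarm rule above reduces to a busy-count. A toy sketch with illustrative names (the real driver arms a timer and tracks in-flight requests per engine, not a bare counter): hangcheck runs from the first submitted request until the engine idles, independent of whether anyone is waiting.

```c
/* Toy model of "hangcheck runs while busy": armed by the first
 * request submitted to hardware, disarmed only when the engine goes
 * idle again. Names are illustrative, not the driver's. */
#include <stdbool.h>

struct engine {
    int inflight;           /* requests submitted to hardware */
    bool hangcheck_armed;
};

static void submit_request(struct engine *e)
{
    if (e->inflight++ == 0)
        e->hangcheck_armed = true;   /* first busy request arms it */
}

static void retire_request(struct engine *e)
{
    if (--e->inflight == 0)
        e->hangcheck_armed = false;  /* disarm only when idle */
}
```

      Counting only requests submitted to hardware is what keeps an engine stalled on external events from being misdeclared hung.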
      
      Fixes: 4680816b ("drm/i915: Wait first for submission, before waiting for request completion")
      Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=104840
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20180129144104.3921-1-chris@chris-wilson.co.uk
      Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      (cherry picked from commit 88923048)
      Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
  32. 31 Jan 2018: 1 commit
    • drm/i915: Always run hangcheck while the GPU is busy · 88923048
      Authored by Chris Wilson
      Previously, we relied on only running the hangcheck while somebody was
      waiting on the GPU, in order to minimise the amount of time hangcheck
      had to run. (If nobody was watching the GPU, nobody would notice if the
      GPU wasn't responding -- eventually somebody would care and so kick
      hangcheck into action.) However, this falls apart from around commit
      4680816b ("drm/i915: Wait first for submission, before waiting for
      request completion"), as not all waiters declare themselves to hangcheck
      and so we could switch off hangcheck and miss GPU hangs even when
      waiting under the struct_mutex.
      
      If we enable hangcheck from the first request submission, and let it run
      until the GPU is idle again, we forgo all the complexity involved with
      only enabling around waiters. We just have to remember to be careful that
      we do not declare a GPU hang when idly waiting for the next request to
      become ready, as we will run hangcheck continuously even when the
      engines are stalled waiting for external events. This should be true
      already as we should only be tracking requests submitted to hardware for
      execution as an indicator that the engine is busy.
      
      Fixes: 4680816b ("drm/i915: Wait first for submission, before waiting for request completion")
      Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=104840
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20180129144104.3921-1-chris@chris-wilson.co.uk
      Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
  33. 04 Jan 2018: 1 commit
  34. 14 Dec 2017: 1 commit
    • drm/i915: Stop listening to request resubmission from the signaler kthread · 74c7b078
      Authored by Chris Wilson
      The intent here was that we would be listening to
      i915_gem_request_unsubmit in order to cancel the signaler quickly and
      release the reference on the request. Cancelling the signaler is done
      directly via intel_engine_cancel_signaling (called from unsubmit), but
      that does not directly wake up the signaling thread, and neither does
      setting the request->global_seqno back to zero wake up listeners to the
      request->execute waitqueue. So the only time that listening to the
      request->execute waitqueue would wake up the signaling kthread would be
      on the request resubmission, during which time we would already receive
      wake ups from rejoining the global breadcrumbs wait rbtree.
      
      Trying to wake up to release the request remains an issue. If the
      signaling was cancelled and no other request required signaling, then it
      is possible for us to shutdown with the reference on the request still
      held. To ensure that we do not try to shutdown, leaking that request, we
      kick the signaling threads whenever we disarm the breadcrumbs, i.e. on
      parking the engine when idle.
      
      v2: We do need to be sure to release the last reference on stopping the
      kthread; asserting that it has been dropped already is insufficient.
      
      Fixes: d6a2289d ("drm/i915: Remove the preempted request from the execution queue")
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
      Cc: Michał Winiarski <michal.winiarski@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20171208121033.5236-1-chris@chris-wilson.co.uk
      Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
      (cherry picked from commit 776bc27f)
      Signed-off-by: Jani Nikula <jani.nikula@intel.com>
  35. 12 Dec 2017: 1 commit