1. 14 Oct 2019 (1 commit)
  2. 04 Oct 2019 (3 commits)
  3. 23 Sep 2019 (1 commit)
  4. 20 Sep 2019 (2 commits)
    • drm/i915: Lock signaler timeline while navigating · 6a79d848
      Committed by Chris Wilson
      As we need to walk back along the signaler's timeline to find the
      fence immediately before the one upon which we want to wait, we need
      to lock that timeline to prevent it being modified as we walk.
      Similarly, we also need to acquire a reference to that earlier fence
      while it still exists!
      
      Though we lack the correct locking today, we are saved by the
      overarching struct_mutex -- but that protection is being removed.
      
      v2: Tvrtko made me realise I was being lax and using annotations to
      ignore the AB-BA deadlock from the timeline overlap. As it would be
      possible to construct a second request that was using a semaphore from the
      same timeline as ourselves, we could quite easily end up in a situation
      where we deadlocked in our mutex waits. Avoid that by using a trylock
      and falling back to a normal dma-fence await if contended.
      
      v3: Eek, the signal->timeline is volatile and must be carefully
      dereferenced to ensure it is valid.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190919111912.21631-2-chris@chris-wilson.co.uk
      6a79d848
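
      A minimal C sketch of the trylock-and-fallback pattern described above.
      The types and the emit_semaphore_wait()/slow_fence_await() helpers are
      illustrative stand-ins, not the actual i915 code:

      #include <linux/mutex.h>

      struct timeline { struct mutex mutex; };
      struct request { struct timeline *timeline; };

      int emit_semaphore_wait(struct request *to, struct request *from);
      int slow_fence_await(struct request *to, struct request *from);

      /*
       * Both timelines share a lock class, so an unconditional mutex_lock()
       * here could deadlock AB-BA against a thread awaiting in the opposite
       * direction. Trylock, and if contended fall back to an ordinary
       * dma-fence wait that needs no lock on the signaler's timeline.
       */
      int await_request(struct request *to, struct request *from)
      {
              int err;

              if (!mutex_trylock(&from->timeline->mutex))
                      return slow_fence_await(to, from);

              err = emit_semaphore_wait(to, from);
              mutex_unlock(&from->timeline->mutex);
              return err;
      }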
    • drm/i915: Mark i915_request.timeline as a volatile, rcu pointer · d19d71fc
      Committed by Chris Wilson
      The request->timeline is only valid until the request is retired (i.e.
      before it is completed). Upon retiring the request, the context may be
      unpinned and freed, and along with it the timeline may be freed. We
      therefore need to be very careful when chasing rq->timeline that the
      pointer does not disappear beneath us. The vast majority of users are in
      a protected context, either during request construction or retirement,
      where the timeline->mutex is held and the timeline cannot disappear. It
      is those few off the beaten path (where we access a second timeline) that
      need extra scrutiny -- to be added in the next patch after first adding
      the warnings about dangerous access.
      
      One complication, where we cannot use the timeline->mutex itself, is
      during request submission onto hardware (under spinlocks). Here, we want
      to check on the timeline to finalize the breadcrumb, and so we need to
      impose a second rule to ensure that the request->timeline is indeed
      valid. As we are submitting the request, its context and timeline must
      be pinned, as it will be used by the hardware. Since it is pinned, we
      know the request->timeline must still be valid, and we cannot submit the
      idle barrier until after we release the engine->active.lock, ergo while
      submitting and holding that spinlock, a second thread cannot release the
      timeline.
      
      v2: Don't be lazy inside selftests; hold the timeline->mutex for as long
      as we need it, and tidy up acquiring the timeline with a bit of
      refactoring (i915_active_add_request)
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190919111912.21631-1-chris@chris-wilson.co.uk
      d19d71fc
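
      A hedged sketch of the careful dereference needed on those
      off-the-beaten-path reads; the structures here are generic stand-ins
      for the real i915 types. The pattern is rcu_read_lock() plus
      kref_get_unless_zero(), so the timeline can neither be freed nor
      handed back to us half-destroyed:

      #include <linux/rcupdate.h>
      #include <linux/kref.h>

      struct timeline {
              struct kref kref;
              struct rcu_head rcu;    /* freed via kfree_rcu() */
      };

      struct request {
              struct timeline __rcu *timeline; /* valid only until retirement */
      };

      /* Returns a referenced timeline, or NULL if it vanished under us. */
      struct timeline *request_get_timeline(struct request *rq)
      {
              struct timeline *tl;

              rcu_read_lock();
              tl = rcu_dereference(rq->timeline);
              if (tl && !kref_get_unless_zero(&tl->kref))
                      tl = NULL; /* raced with the final unpin/free */
              rcu_read_unlock();

              return tl; /* caller must kref_put() when done */
      }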
  5. 19 Sep 2019 (1 commit)
    • drm/i915: Verify the engine after acquiring the active.lock · 37fa0de3
      Committed by Chris Wilson
      When using virtual engines, the rq->engine is not stable until we hold
      the engine->active.lock (as the virtual engine may be exchanged with the
      sibling). Since commit 22b7a426 ("drm/i915/execlists: Preempt-to-busy"),
      we may retire a request concurrently with resubmitting it to HW, so we
      need to be extra careful to verify that we are holding the correct lock
      for the request's active list. This is similar to the issue we saw with
      rescheduling the virtual requests, see sched_lock_engine().
      
      Or else:
      
      <4> [876.736126] list_add corruption. prev->next should be next (ffff8883f931a1f8), but was dead000000000100. (prev=ffff888361ffa610).
      <4> [876.736136] WARNING: CPU: 2 PID: 21 at lib/list_debug.c:28 __list_add_valid+0x4d/0x70
      <4> [876.736137] Modules linked in: i915(+) amdgpu gpu_sched ttm vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic mei_hdcp x86_pkg_temp_thermal coretemp crct10dif_pclmul crc32_pclmul snd_intel_nhlt snd_hda_codec snd_hwdep snd_hda_core ghash_clmulni_intel e1000e cdc_ether usbnet mii snd_pcm ptp pps_core mei_me mei prime_numbers btusb btrtl btbcm btintel bluetooth ecdh_generic ecc [last unloaded: i915]
      <4> [876.736154] CPU: 2 PID: 21 Comm: ksoftirqd/2 Tainted: G     U            5.3.0-CI-CI_DRM_6898+ #1
      <4> [876.736156] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake U DDR4 SODIMM PD RVP TLC, BIOS ICLSFWR1.R00.3183.A00.1905020411 05/02/2019
      <4> [876.736157] RIP: 0010:__list_add_valid+0x4d/0x70
      <4> [876.736159] Code: c3 48 89 d1 48 c7 c7 20 33 0e 82 48 89 c2 e8 4a 4a bc ff 0f 0b 31 c0 c3 48 89 c1 4c 89 c6 48 c7 c7 70 33 0e 82 e8 33 4a bc ff <0f> 0b 31 c0 c3 48 89 f2 4c 89 c1 48 89 fe 48 c7 c7 c0 33 0e 82 e8
      <4> [876.736160] RSP: 0018:ffffc9000018bd30 EFLAGS: 00010082
      <4> [876.736162] RAX: 0000000000000000 RBX: ffff888361ffc840 RCX: 0000000000000104
      <4> [876.736163] RDX: 0000000080000104 RSI: 0000000000000000 RDI: 00000000ffffffff
      <4> [876.736164] RBP: ffffc9000018bd68 R08: 0000000000000000 R09: 0000000000000001
      <4> [876.736165] R10: 00000000aed95de3 R11: 000000007fe927eb R12: ffff888361ffca10
      <4> [876.736166] R13: ffff888361ffa610 R14: ffff888361ffc880 R15: ffff8883f931a1f8
      <4> [876.736168] FS:  0000000000000000(0000) GS:ffff88849fd00000(0000) knlGS:0000000000000000
      <4> [876.736169] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      <4> [876.736170] CR2: 00007f093a9173c0 CR3: 00000003bba08005 CR4: 0000000000760ee0
      <4> [876.736171] PKRU: 55555554
      <4> [876.736172] Call Trace:
      <4> [876.736226]  __i915_request_submit+0x152/0x370 [i915]
      <4> [876.736263]  __execlists_submission_tasklet+0x6da/0x1f50 [i915]
      <4> [876.736293]  ? execlists_submission_tasklet+0x29/0x50 [i915]
      <4> [876.736321]  execlists_submission_tasklet+0x34/0x50 [i915]
      <4> [876.736325]  tasklet_action_common.isra.5+0x47/0xb0
      <4> [876.736328]  __do_softirq+0xd8/0x4ae
      <4> [876.736332]  ? smpboot_thread_fn+0x23/0x280
      <4> [876.736334]  ? smpboot_thread_fn+0x6b/0x280
      <4> [876.736336]  run_ksoftirqd+0x2b/0x50
      <4> [876.736338]  smpboot_thread_fn+0x1d3/0x280
      <4> [876.736341]  ? sort_range+0x20/0x20
      <4> [876.736343]  kthread+0x119/0x130
      <4> [876.736345]  ? kthread_park+0xa0/0xa0
      <4> [876.736347]  ret_from_fork+0x24/0x50
      <4> [876.736353] irq event stamp: 2290145
      <4> [876.736356] hardirqs last  enabled at (2290144): [<ffffffff8123cde8>] __slab_free+0x3e8/0x500
      <4> [876.736358] hardirqs last disabled at (2290145): [<ffffffff819cfb4d>] _raw_spin_lock_irqsave+0xd/0x50
      <4> [876.736360] softirqs last  enabled at (2290114): [<ffffffff81c0033e>] __do_softirq+0x33e/0x4ae
      <4> [876.736361] softirqs last disabled at (2290119): [<ffffffff810b815b>] run_ksoftirqd+0x2b/0x50
      <4> [876.736363] WARNING: CPU: 2 PID: 21 at lib/list_debug.c:28 __list_add_valid+0x4d/0x70
      <4> [876.736364] ---[ end trace 3e58d6c7356c65bf ]---
      <4> [876.736406] ------------[ cut here ]------------
      <4> [876.736415] list_del corruption. prev->next should be ffff888361ffca10, but was ffff88840ac2c730
      <4> [876.736421] WARNING: CPU: 2 PID: 5490 at lib/list_debug.c:53 __list_del_entry_valid+0x79/0x90
      <4> [876.736422] Modules linked in: i915(+) amdgpu gpu_sched ttm vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic mei_hdcp x86_pkg_temp_thermal coretemp crct10dif_pclmul crc32_pclmul snd_intel_nhlt snd_hda_codec snd_hwdep snd_hda_core ghash_clmulni_intel e1000e cdc_ether usbnet mii snd_pcm ptp pps_core mei_me mei prime_numbers btusb btrtl btbcm btintel bluetooth ecdh_generic ecc [last unloaded: i915]
      <4> [876.736433] CPU: 2 PID: 5490 Comm: i915_selftest Tainted: G     U  W         5.3.0-CI-CI_DRM_6898+ #1
      <4> [876.736435] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake U DDR4 SODIMM PD RVP TLC, BIOS ICLSFWR1.R00.3183.A00.1905020411 05/02/2019
      <4> [876.736436] RIP: 0010:__list_del_entry_valid+0x79/0x90
      <4> [876.736438] Code: 0b 31 c0 c3 48 89 fe 48 c7 c7 30 34 0e 82 e8 ae 49 bc ff 0f 0b 31 c0 c3 48 89 f2 48 89 fe 48 c7 c7 68 34 0e 82 e8 97 49 bc ff <0f> 0b 31 c0 c3 48 c7 c7 a8 34 0e 82 e8 86 49 bc ff 0f 0b 31 c0 c3
      <4> [876.736439] RSP: 0018:ffffc900003ef758 EFLAGS: 00010086
      <4> [876.736440] RAX: 0000000000000000 RBX: ffff888361ffc840 RCX: 0000000000000002
      <4> [876.736442] RDX: 0000000080000002 RSI: 0000000000000000 RDI: 00000000ffffffff
      <4> [876.736443] RBP: ffffc900003ef780 R08: 0000000000000000 R09: 0000000000000001
      <4> [876.736444] R10: 000000001418e4b7 R11: 000000007f0ea93b R12: ffff888361ffcab8
      <4> [876.736445] R13: ffff88843b6d0000 R14: 000000000000217c R15: 0000000000000001
      <4> [876.736447] FS:  00007f4e6f255240(0000) GS:ffff88849fd00000(0000) knlGS:0000000000000000
      <4> [876.736448] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      <4> [876.736449] CR2: 00007f093a9173c0 CR3: 00000003bba08005 CR4: 0000000000760ee0
      <4> [876.736450] PKRU: 55555554
      <4> [876.736451] Call Trace:
      <4> [876.736488]  i915_request_retire+0x224/0x8e0 [i915]
      <4> [876.736521]  i915_request_create+0x4b/0x1b0 [i915]
      <4> [876.736550]  nop_virtual_engine+0x230/0x4d0 [i915]
      
      Fixes: 22b7a426 ("drm/i915/execlists: Preempt-to-busy")
      Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=111695
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Cc: Matthew Auld <matthew.auld@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190918145453.8800-1-chris@chris-wilson.co.uk
      37fa0de3
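
      The fix amounts to a lock-then-revalidate loop. A simplified sketch
      (field and function names approximated, not the exact i915 layout):

      #include <linux/spinlock.h>

      struct engine { spinlock_t active_lock; };
      struct request { struct engine *engine; /* may move between siblings */ };

      /*
       * rq->engine can be exchanged with a virtual-engine sibling until we
       * hold that engine's active lock, so re-check after acquiring and
       * retry if the request moved under us.
       */
      struct engine *lock_request_engine(struct request *rq)
      {
              struct engine *engine;

              for (;;) {
                      engine = READ_ONCE(rq->engine);
                      spin_lock(&engine->active_lock);
                      if (engine == READ_ONCE(rq->engine))
                              return engine; /* stable while the lock is held */
                      spin_unlock(&engine->active_lock);
              }
      }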
  6. 17 Sep 2019 (1 commit)
  7. 29 Aug 2019 (1 commit)
  8. 24 Aug 2019 (2 commits)
    • drm/i915: Keep drm_i915_file_private around under RCU · 77715906
      Committed by Chris Wilson
      Ensure that the drm_i915_file_private continues to exist as we attempt
      to remove a request from its list, which may race with the destruction
      of the file.
      
      <6> [38.380714] [IGT] gem_ctx_create: starting subtest basic-files
      <0> [42.201329] BUG: spinlock bad magic on CPU#0, kworker/u16:0/7
      <4> [42.201356] general protection fault: 0000 [#1] PREEMPT SMP PTI
      <4> [42.201371] CPU: 0 PID: 7 Comm: kworker/u16:0 Tainted: G     U            5.3.0-rc5-CI-Patchwork_14169+ #1
      <4> [42.201391] Hardware name: Dell Inc.                 OptiPlex 745                 /0GW726, BIOS 2.3.1  05/21/2007
      <4> [42.201594] Workqueue: i915 retire_work_handler [i915]
      <4> [42.201614] RIP: 0010:spin_dump+0x5a/0x90
      <4> [42.201625] Code: 00 48 8d 88 c0 06 00 00 48 c7 c7 00 71 09 82 e8 35 ef 00 00 48 85 db 44 8b 4d 08 41 b8 ff ff ff ff 48 c7 c1 0b cd 0f 82 74 0e <44> 8b 83 e0 04 00 00 48 8d 8b c0 06 00 00 8b 55 04 48 89 ee 48 c7
      <4> [42.201660] RSP: 0018:ffffc9000004bd80 EFLAGS: 00010202
      <4> [42.201673] RAX: 0000000000000031 RBX: 6b6b6b6b6b6b6b6b RCX: ffffffff820fcd0b
      <4> [42.201688] RDX: 0000000000000000 RSI: ffff88803de266f8 RDI: 00000000ffffffff
      <4> [42.201703] RBP: ffff888038381ff8 R08: 00000000ffffffff R09: 000000006b6b6b6b
      <4> [42.201718] R10: 0000000041cb0b89 R11: 646162206b636f6c R12: ffff88802a618500
      <4> [42.201733] R13: ffff88802b32c288 R14: ffff888038381ff8 R15: ffff88802b32c250
      <4> [42.201748] FS:  0000000000000000(0000) GS:ffff88803de00000(0000) knlGS:0000000000000000
      <4> [42.201765] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      <4> [42.201778] CR2: 00007f2cefc6d180 CR3: 00000000381ee000 CR4: 00000000000006f0
      <4> [42.201793] Call Trace:
      <4> [42.201805]  do_raw_spin_lock+0x66/0xb0
      <4> [42.201898]  i915_request_retire+0x548/0x7c0 [i915]
      <4> [42.201989]  retire_requests+0x4d/0x60 [i915]
      <4> [42.202078]  i915_retire_requests+0x144/0x2e0 [i915]
      <4> [42.202169]  retire_work_handler+0x10/0x40 [i915]
      
      Recently, in commit 44c22f3f ("drm/i915: Serialize insertion into the
      file->mm.request_list"), we fixed a race on insertion. Now, it appears
      we also have a race with destruction!
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Matthew Auld <matthew.auld@intel.com>
      Reviewed-by: Matthew Auld <matthew.auld@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190823181455.31910-1-chris@chris-wilson.co.uk
      77715906
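
      A sketch of the shape of the fix, assuming the file-private state is
      freed via kfree_rcu() so that a concurrent retirer can still take its
      lock inside an RCU read-side section (all names illustrative):

      #include <linux/rcupdate.h>
      #include <linux/spinlock.h>
      #include <linux/list.h>
      #include <linux/slab.h>

      struct file_private {
              spinlock_t lock;
              struct list_head request_list;
              struct rcu_head rcu;
      };

      struct request {
              struct file_private *file_priv;
              struct list_head client_link;
      };

      /* destruction: defer the free until RCU readers are done */
      void close_file(struct file_private *fpriv)
      {
              kfree_rcu(fpriv, rcu);
      }

      /* retirement: the RCU read lock keeps fpriv's memory valid */
      void remove_from_client(struct request *rq)
      {
              struct file_private *fpriv;

              rcu_read_lock();
              fpriv = READ_ONCE(rq->file_priv);
              if (fpriv) {
                      spin_lock(&fpriv->lock);
                      list_del(&rq->client_link);
                      spin_unlock(&fpriv->lock);
              }
              rcu_read_unlock();
      }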
    • drm/i915: Hold irq-off for the entire fake lock period · 6dcb85a0
      Committed by Chris Wilson
      Sadly lockdep records when the irqs are re-enabled and then marks up the
      fake lock as being irq-unsafe. Our hand is forced and so we must mark up
      the entire fake lock critical section as irq-off.
      
      Hopefully this is the last tweak required!
      
      v2: Not quite, we need to mark the timeline spinlock as irqsafe. That
      was a genuine bug being hidden by the earlier lockdep splat.
      
      Fixes: d6773926 ("drm/i915/gt: Mark up the nested engine-pm timeline lock as irqsafe")
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190823132700.25286-2-chris@chris-wilson.co.uk
      6dcb85a0
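
      Roughly, the annotation pattern looks like the following hypothetical
      reconstruction (the real code wraps i915's engine-pm parking, not a
      generic helper):

      #include <linux/lockdep.h>
      #include <linux/irqflags.h>

      static struct lockdep_map fake_map =
              STATIC_LOCKDEP_MAP_INIT("fake_timeline_lock", &fake_map);

      /*
       * Keep interrupts off for the whole acquire/release window: if irqs
       * were re-enabled before lock_map_release(), lockdep would record
       * the fake lock as held with irqs on and mark it irq-unsafe.
       */
      void fake_lock_section(void (*body)(void *data), void *data)
      {
              unsigned long flags;

              local_irq_save(flags);
              lock_map_acquire(&fake_map);

              body(data);

              lock_map_release(&fake_map);
              local_irq_restore(flags);
      }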
  9. 20 Aug 2019 (2 commits)
  10. 18 Aug 2019 (1 commit)
  11. 17 Aug 2019 (1 commit)
  12. 16 Aug 2019 (1 commit)
  13. 15 Aug 2019 (1 commit)
  14. 14 Aug 2019 (1 commit)
    • drm/i915: Push the wakeref->count deferral to the backend · a79ca656
      Committed by Chris Wilson
      If the backend wishes to defer the wakeref parking, make it responsible
      for unlocking the wakeref (i.e. bumping the counter). This allows it to
      time the unlock much more carefully in case it happens to need the
      wakeref to be active during its deferral.
      
      For instance, during engine parking we may choose to emit an idle
      barrier (a request). To do so, we borrow the engine->kernel_context
      timeline and to ensure exclusive access we keep the
      engine->wakeref.count as 0. However, to submit that request to HW may
      require an intel_engine_pm_get() (e.g. to keep the submission tasklet
      alive) and before we allow that we have to rewake our wakeref to avoid a
      recursive deadlock.
      
      <4> [257.742916] IRQs not enabled as expected
      <4> [257.742930] WARNING: CPU: 0 PID: 0 at kernel/softirq.c:169 __local_bh_enable_ip+0xa9/0x100
      <4> [257.742936] Modules linked in: vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic i915 btusb btrtl btbcm btintel snd_hda_intel snd_intel_nhlt bluetooth snd_hda_codec coretemp snd_hwdep crct10dif_pclmul snd_hda_core crc32_pclmul ecdh_generic ecc ghash_clmulni_intel snd_pcm r8169 realtek lpc_ich prime_numbers i2c_hid
      <4> [257.742991] CPU: 0 PID: 0 Comm: swapper/0 Tainted: G     U  W         5.3.0-rc3-g5d0a06cd532c-drmtip_340+ #1
      <4> [257.742998] Hardware name: GIGABYTE GB-BXBT-1900/MZBAYAB-00, BIOS F6 02/17/2015
      <4> [257.743008] RIP: 0010:__local_bh_enable_ip+0xa9/0x100
      <4> [257.743017] Code: 37 5b 5d c3 8b 80 50 08 00 00 85 c0 75 a9 80 3d 0b be 25 01 00 75 a0 48 c7 c7 f3 0c 06 ac c6 05 fb bd 25 01 01 e8 77 84 ff ff <0f> 0b eb 89 48 89 ef e8 3b 41 06 00 eb 98 e8 e4 5c f4 ff 5b 5d c3
      <4> [257.743025] RSP: 0018:ffffa78600003cb8 EFLAGS: 00010086
      <4> [257.743035] RAX: 0000000000000000 RBX: 0000000000000200 RCX: 0000000000010302
      <4> [257.743042] RDX: 0000000080010302 RSI: 0000000000000000 RDI: 00000000ffffffff
      <4> [257.743050] RBP: ffffffffc0494bb3 R08: 0000000000000000 R09: 0000000000000001
      <4> [257.743058] R10: 0000000014c8f0e9 R11: 00000000fee2ff8e R12: ffffa23ba8c38008
      <4> [257.743065] R13: ffffa23bacc579c0 R14: ffffa23bb7db0f60 R15: ffffa23b9cc8c430
      <4> [257.743074] FS:  0000000000000000(0000) GS:ffffa23bbba00000(0000) knlGS:0000000000000000
      <4> [257.743082] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      <4> [257.743089] CR2: 00007fe477b20778 CR3: 000000011f72a000 CR4: 00000000001006f0
      <4> [257.743096] Call Trace:
      <4> [257.743104]  <IRQ>
      <4> [257.743265]  __i915_request_commit+0x240/0x5d0 [i915]
      <4> [257.743427]  ? __i915_request_create+0x228/0x4c0 [i915]
      <4> [257.743584]  __engine_park+0x64/0x250 [i915]
      <4> [257.743730]  ____intel_wakeref_put_last+0x1c/0x70 [i915]
      <4> [257.743878]  i915_sample+0x2ee/0x310 [i915]
      <4> [257.744030]  ? i915_pmu_cpu_offline+0xb0/0xb0 [i915]
      <4> [257.744040]  __hrtimer_run_queues+0x11e/0x4b0
      <4> [257.744068]  hrtimer_interrupt+0xea/0x250
      <4> [257.744079]  ? lockdep_hardirqs_off+0x79/0xd0
      <4> [257.744101]  smp_apic_timer_interrupt+0x96/0x280
      <4> [257.744114]  apic_timer_interrupt+0xf/0x20
      <4> [257.744125] RIP: 0010:__do_softirq+0xb3/0x4ae
      
      v2: Keep the priority_hint assert
      v3: That assert was desperately trying to point out my bug. Sorry, little
      assert.
      
      Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=111378
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
      Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190813190705.23869-1-chris@chris-wilson.co.uk
      a79ca656
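
      In outline, with a deliberately simplified wakeref (the helpers
      approximate the pattern, not the exact i915 API):

      #include <linux/atomic.h>
      #include <linux/mutex.h>

      struct wakeref {
              struct mutex mutex;
              atomic_t count;
              void (*park)(struct wakeref *wf); /* backend hook */
      };

      /*
       * The final put leaves count at 0 and calls into the backend with
       * the mutex held, so the backend has exclusive access while parking.
       */
      void wakeref_put(struct wakeref *wf)
      {
              if (atomic_dec_and_mutex_lock(&wf->count, &wf->mutex)) {
                      wf->park(wf); /* no longer bumps count itself */
                      mutex_unlock(&wf->mutex);
              }
      }

      /*
       * Backend-side deferral: "unlock" the wakeref by restoring the
       * count, after which wakeref_get()-style rewakes may proceed
       * without recursing back into parking.
       */
      void wakeref_defer_park(struct wakeref *wf)
      {
              lockdep_assert_held(&wf->mutex);
              atomic_set_release(&wf->count, 1);
      }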
  15. 13 Aug 2019 (1 commit)
  16. 10 Aug 2019 (1 commit)
  17. 07 Aug 2019 (1 commit)
  18. 06 Aug 2019 (1 commit)
  19. 13 Jul 2019 (1 commit)
  20. 10 Jul 2019 (1 commit)
  21. 21 Jun 2019 (1 commit)
  22. 20 Jun 2019 (2 commits)
    • drm/i915/execlists: Preempt-to-busy · 22b7a426
      Committed by Chris Wilson
      When using a global seqno, we required a precise stop-the-world event to
      handle preemption and unwind the global seqno counter. To accomplish
      this, we would preempt to a special out-of-band context and wait for the
      machine to report that it was idle. Given an idle machine, we could very
      precisely see which requests had completed and which we needed to feed
      back into the run queue.
      
      However, now that we have scrapped the global seqno, we no longer need
      to precisely unwind the global counter and only track requests by their
      per-context seqno. This allows us to loosely unwind inflight requests
      while scheduling a preemption, with the enormous caveat that the
      requests we put back on the run queue are still _inflight_ (until the
      preemption request is complete). This makes request tracking much more
      messy, as at any point we can then see a completed request that we
      believe is not currently scheduled for execution. We also have to be
      careful not to rewind RING_TAIL past RING_HEAD on preempting to the
      running context, and for this we use a semaphore to prevent completion
      of the request before continuing.
      
      To accomplish this feat, we change how we track requests scheduled to
      the HW. Instead of appending our requests onto a single list as we
      submit, we track each submission to ELSP as its own block. Then upon
      receiving the CS preemption event, we promote the pending block to the
      inflight block (discarding what was previously being tracked). As normal
      CS completion events arrive, we then remove stale entries from the
      inflight tracker.
      
      v2: Be a tinge paranoid and ensure we flush the write into the HWS page
      for the GPU semaphore to pick up in a timely fashion.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190620142052.19311-1-chris@chris-wilson.co.uk
      22b7a426
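
      A condensed sketch of the pending/inflight split (array sizes and
      names simplified relative to the real execlists structures):

      #include <linux/string.h>

      #define EXECLIST_PORTS 2

      struct request;

      struct execlists {
              /* NULL-terminated blocks of requests submitted to ELSP */
              struct request *inflight[EXECLIST_PORTS + 1];
              struct request *pending[EXECLIST_PORTS + 1];
              struct request **active; /* cursor into inflight[] */
      };

      /* on writing ELSP: record the new block as pending */
      void submit_ports(struct execlists *el, struct request **block, int count)
      {
              memcpy(el->pending, block, count * sizeof(*block));
              el->pending[count] = NULL;
              /* ... write the ELSP registers ... */
      }

      /* on the CS event acking the preemption: promote pending -> inflight */
      void process_preempt_event(struct execlists *el)
      {
              memcpy(el->inflight, el->pending, sizeof(el->pending));
              el->active = el->inflight;
              el->pending[0] = NULL;
      }

      /* on a normal CS completion event: step past the retired head */
      void process_completion_event(struct execlists *el)
      {
              el->active++;
      }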
    • drm/i915: Flush the execution-callbacks on retiring · b87b6c0d
      Committed by Chris Wilson
      In the unlikely case the request completes while we regard it as not even
      executing on the GPU (see the next patch!), we have to flush any pending
      execution callbacks at retirement and ensure that we do not add any
      more.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190618074153.16055-4-chris@chris-wilson.co.uk
      b87b6c0d
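
      The flush itself is a small detach-and-run over a lock-free list. A
      sketch assuming the callbacks live on an llist (illustrative, not the
      exact i915 types):

      #include <linux/llist.h>

      struct execute_cb {
              struct llist_node node;
              void (*fn)(struct execute_cb *cb); /* frees cb when done */
      };

      /*
       * llist_del_all() atomically detaches every pending callback, so a
       * late completion cannot re-run entries we have already flushed;
       * callers must separately refuse to add callbacks once the request
       * has been retired.
       */
      void flush_execute_cbs(struct llist_head *cbs)
      {
              struct execute_cb *cb, *cn;

              llist_for_each_entry_safe(cb, cn, llist_del_all(cbs), node)
                      cb->fn(cb);
      }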
  23. 19 Jun 2019 (3 commits)
  24. 18 Jun 2019 (1 commit)
  25. 15 Jun 2019 (3 commits)
  26. 14 Jun 2019 (2 commits)
  27. 12 Jun 2019 (1 commit)
  28. 11 Jun 2019 (1 commit)
  29. 28 May 2019 (1 commit)