1. 06 Dec, 2019 1 commit
  2. 28 Nov, 2019 1 commit
  3. 25 Nov, 2019 1 commit
    • C
      drm/i915/gt: Schedule request retirement when timeline idles · 4f88f874
      Authored by Chris Wilson
      The major drawback of commit 7e34f4e4 ("drm/i915/gen8+: Add RC6 CTX
      corruption WA") is that it disables RC6 while Skylake (and friends) is
      active, and we do not consider the GPU idle until all outstanding
      requests have been retired and the engine switched over to the kernel
      context. If userspace is idle, this task falls onto our background idle
      worker, which only runs roughly once a second, meaning that userspace has
      to have been idle for a couple of seconds before we enable RC6 again.
      Naturally, this causes us to consume considerably more energy than
      before as powersaving is effectively disabled while a display server
      (here's looking at you Xorg) is running.
      
      As execlists will get a completion event as each context is completed,
      we can use this interrupt to queue a retire worker bound to this engine
      to cleanup idle timelines. We will then immediately notice the idle
      engine (without userspace intervention or the aid of the background
      retire worker) and start parking the GPU. Thus during light workloads,
      we will do much more work to idle the GPU faster...  Hopefully with
      commensurate power saving!
      
      v2: Watch context completions and only look at those local to the engine
      when retiring to reduce the amount of excess work we perform.
      
      Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=112315
      References: 7e34f4e4 ("drm/i915/gen8+: Add RC6 CTX corruption WA")
      References: 2248a283 ("drm/i915/gen8+: Add RC6 CTX corruption WA")
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20191125105858.1718307-3-chris@chris-wilson.co.uk
      4f88f874
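The mechanism above can be modeled with a small self-contained sketch (all names here are illustrative, not the real i915 API): the completion event for a context kicks a retire worker bound to that engine, which reaps only the engine's own completed requests (the v2 refinement) and parks the engine as soon as its queue drains, instead of waiting for a ~1 Hz background worker.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical model of the patch's idea; struct and function names
 * are invented for illustration. */

struct request { bool completed; bool retired; struct request *next; };

struct engine {
    struct request *requests;   /* requests local to this engine */
    bool parked;                /* true once all requests retired */
};

/* Retire worker: reap only requests local to this engine (v2 of the
 * patch restricts the scan to reduce excess work). */
static void engine_retire(struct engine *e)
{
    struct request **p = &e->requests;
    while (*p) {
        if ((*p)->completed) {
            (*p)->retired = true;
            *p = (*p)->next;    /* unlink the retired request */
        } else {
            p = &(*p)->next;
        }
    }
    if (!e->requests)
        e->parked = true;       /* idle: RC6/powersaving may resume */
}

/* Completion event: mark the request done and run the retire worker
 * bound to this engine (called synchronously here; the real driver
 * queues it from the execlists interrupt). */
static void context_complete(struct engine *e, struct request *rq)
{
    rq->completed = true;
    engine_retire(e);
}
```

In the driver the worker runs from a workqueue off the CS interrupt; the point of the model is only the ordering: completion, immediate retire, immediate park.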
  4. 12 Nov, 2019 1 commit
    • C
      drm/i915: Protect context while grabbing its name for the request · d231c15a
      Authored by Chris Wilson
      Inside print_request(), we query the context/timeline name. Nothing
      immediately protects the context from being freed if the request is
      complete -- we rely on serialisation by the caller to keep the name
      valid until they finish using it. Inside intel_engine_dump(), we
      generally only print the requests in the execution queue protected by the
      engine->active.lock, but we also show the pending execlists ports which
      are not protected and so require a rcu_read_lock to keep the pointer
      valid.
      
      [ 1695.700883] BUG: KASAN: use-after-free in i915_fence_get_timeline_name+0x53/0x90 [i915]
      [ 1695.700981] Read of size 8 at addr ffff8887344f4d50 by task gem_ctx_persist/2968
      [ 1695.701068]
      [ 1695.701156] CPU: 1 PID: 2968 Comm: gem_ctx_persist Tainted: G     U            5.4.0-rc6+ #331
      [ 1695.701246] Hardware name: Intel Corporation NUC7i5BNK/NUC7i5BNB, BIOS BNKBL357.86A.0052.2017.0918.1346 09/18/2017
      [ 1695.701334] Call Trace:
      [ 1695.701424]  dump_stack+0x5b/0x90
      [ 1695.701870]  ? i915_fence_get_timeline_name+0x53/0x90 [i915]
      [ 1695.701964]  print_address_description.constprop.7+0x36/0x50
      [ 1695.702408]  ? i915_fence_get_timeline_name+0x53/0x90 [i915]
      [ 1695.702856]  ? i915_fence_get_timeline_name+0x53/0x90 [i915]
      [ 1695.702947]  __kasan_report.cold.10+0x1a/0x3a
      [ 1695.703390]  ? i915_fence_get_timeline_name+0x53/0x90 [i915]
      [ 1695.703836]  i915_fence_get_timeline_name+0x53/0x90 [i915]
      [ 1695.704241]  print_request+0x82/0x2e0 [i915]
      [ 1695.704638]  ? fwtable_read32+0x133/0x360 [i915]
      [ 1695.705042]  ? write_timestamp+0x110/0x110 [i915]
      [ 1695.705133]  ? _raw_spin_lock_irqsave+0x79/0xc0
      [ 1695.705221]  ? refcount_inc_not_zero_checked+0x91/0x110
      [ 1695.705306]  ? refcount_dec_and_mutex_lock+0x50/0x50
      [ 1695.705709]  ? intel_engine_find_active_request+0x202/0x230 [i915]
      [ 1695.706115]  intel_engine_dump+0x2c9/0x900 [i915]
      
      Fixes: c36eebd9 ("drm/i915/gt: execlists->active is serialised by the tasklet")
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20191111114323.5833-1-chris@chris-wilson.co.uk
      (cherry picked from commit fecffa46)
      Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
      d231c15a
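The invariant this commit enforces can be modeled in a self-contained sketch (names are illustrative, not the i915 API): the execlists ports are not covered by engine->active.lock, so the dump path must bracket the name lookup with an RCU read-side section; while a reader is inside that section, the free of the context is deferred.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Userspace stand-ins for rcu_read_lock()/rcu_read_unlock(); a
 * counter models the read-side critical section. */
static int rcu_readers;
static void rcu_read_lock(void)   { rcu_readers++; }
static void rcu_read_unlock(void) { rcu_readers--; }

struct context { const char *name; bool freed; };

/* Modeled kfree_rcu semantics: with a reader active, the free is
 * deferred (here, simply skipped until a later retry with no
 * readers). */
static void context_free(struct context *ctx)
{
    if (rcu_readers == 0)
        ctx->freed = true;
}

/* The fixed dump path: take the read lock before chasing the
 * unprotected port pointer, so the name stays valid while we use it. */
static const char *dump_port_name(struct context *ctx)
{
    const char *name;

    rcu_read_lock();
    name = ctx->freed ? NULL : ctx->name;
    rcu_read_unlock();
    return name;
}
```

Requests printed from the execution queue still rely on engine->active.lock as before; only the port walk needed the extra rcu_read_lock.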
  5. 11 Nov, 2019 1 commit
    • C
      drm/i915: Protect context while grabbing its name for the request · fecffa46
      Authored by Chris Wilson
      Inside print_request(), we query the context/timeline name. Nothing
      immediately protects the context from being freed if the request is
      complete -- we rely on serialisation by the caller to keep the name
      valid until they finish using it. Inside intel_engine_dump(), we
      generally only print the requests in the execution queue protected by the
      engine->active.lock, but we also show the pending execlists ports which
      are not protected and so require a rcu_read_lock to keep the pointer
      valid.
      
      [ 1695.700883] BUG: KASAN: use-after-free in i915_fence_get_timeline_name+0x53/0x90 [i915]
      [ 1695.700981] Read of size 8 at addr ffff8887344f4d50 by task gem_ctx_persist/2968
      [ 1695.701068]
      [ 1695.701156] CPU: 1 PID: 2968 Comm: gem_ctx_persist Tainted: G     U            5.4.0-rc6+ #331
      [ 1695.701246] Hardware name: Intel Corporation NUC7i5BNK/NUC7i5BNB, BIOS BNKBL357.86A.0052.2017.0918.1346 09/18/2017
      [ 1695.701334] Call Trace:
      [ 1695.701424]  dump_stack+0x5b/0x90
      [ 1695.701870]  ? i915_fence_get_timeline_name+0x53/0x90 [i915]
      [ 1695.701964]  print_address_description.constprop.7+0x36/0x50
      [ 1695.702408]  ? i915_fence_get_timeline_name+0x53/0x90 [i915]
      [ 1695.702856]  ? i915_fence_get_timeline_name+0x53/0x90 [i915]
      [ 1695.702947]  __kasan_report.cold.10+0x1a/0x3a
      [ 1695.703390]  ? i915_fence_get_timeline_name+0x53/0x90 [i915]
      [ 1695.703836]  i915_fence_get_timeline_name+0x53/0x90 [i915]
      [ 1695.704241]  print_request+0x82/0x2e0 [i915]
      [ 1695.704638]  ? fwtable_read32+0x133/0x360 [i915]
      [ 1695.705042]  ? write_timestamp+0x110/0x110 [i915]
      [ 1695.705133]  ? _raw_spin_lock_irqsave+0x79/0xc0
      [ 1695.705221]  ? refcount_inc_not_zero_checked+0x91/0x110
      [ 1695.705306]  ? refcount_dec_and_mutex_lock+0x50/0x50
      [ 1695.705709]  ? intel_engine_find_active_request+0x202/0x230 [i915]
      [ 1695.706115]  intel_engine_dump+0x2c9/0x900 [i915]
      
      Fixes: c36eebd9 ("drm/i915/gt: execlists->active is serialised by the tasklet")
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20191111114323.5833-1-chris@chris-wilson.co.uk
      fecffa46
  6. 08 Nov, 2019 1 commit
  7. 30 Oct, 2019 1 commit
  8. 29 Oct, 2019 1 commit
  9. 24 Oct, 2019 4 commits
  10. 22 Oct, 2019 5 commits
  11. 18 Oct, 2019 2 commits
  12. 16 Oct, 2019 1 commit
  13. 10 Oct, 2019 2 commits
  14. 09 Oct, 2019 1 commit
  15. 08 Oct, 2019 2 commits
  16. 28 Sep, 2019 1 commit
    • M
      drm/i915: check for kernel_context · b178a3f6
      Authored by Matthew Auld
      Early driver init can explode on the error path with a NULL pointer
      dereference in intel_context_unpin(), because engine cleanup runs
      before the kernel_context has been created. Make sure we fail
      gracefully.
      
      [ 9547.672258] BUG: kernel NULL pointer dereference, address: 000000000000007c
      [ 9547.672288] #PF: supervisor read access in kernel mode
      [ 9547.672292] #PF: error_code(0x0000) - not-present page
      [ 9547.672296] PGD 8000000846b41067 P4D 8000000846b41067 PUD 797034067 PMD 0
      [ 9547.672303] Oops: 0000 [#1] SMP PTI
      [ 9547.672307] CPU: 1 PID: 25634 Comm: i915_selftest Tainted: G     U            5.3.0-rc8+ #73
      [ 9547.672313] Hardware name:  /NUC6i7KYB, BIOS KYSKLi70.86A.0050.2017.0831.1924 08/31/2017
      [ 9547.672395] RIP: 0010:intel_context_unpin+0x9/0x100 [i915]
      [ 9547.672400] Code: 6b 60 00 e9 17 ff ff ff bd fc ff ff ff e9 7c ff ff ff 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 0f 1f 44 00 00 41 54 55 53 <8b> 47 7c 83 f8 01 74 26 8d 48 ff f0 0f b1 4f 7c 48 8d 57 7c 75 05
      [ 9547.672413] RSP: 0018:ffffae8ac24ff878 EFLAGS: 00010246
      [ 9547.672417] RAX: ffff944a1b7842d0 RBX: ffff944a1b784000 RCX: ffff944a12dd6fa8
      [ 9547.672422] RDX: ffff944a1b7842c0 RSI: ffff944a12dd5328 RDI: 0000000000000000
      [ 9547.672428] RBP: 0000000000000000 R08: ffff944a11e5d840 R09: 0000000000000000
      [ 9547.672433] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
      [ 9547.672438] R13: ffffffffc11aaf00 R14: 00000000ffffffe4 R15: ffff944a0e29bf38
      [ 9547.672443] FS:  00007fc259b88ac0(0000) GS:ffff944a1f880000(0000) knlGS:0000000000000000
      [ 9547.672449] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      [ 9547.672454] CR2: 000000000000007c CR3: 0000000853346003 CR4: 00000000003606e0
      [ 9547.672459] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
      [ 9547.672464] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
      [ 9547.672469] Call Trace:
      [ 9547.672518]  intel_engine_cleanup_common+0xe3/0x270 [i915]
      [ 9547.672567]  execlists_destroy+0xe/0x30 [i915]
      [ 9547.672669]  intel_engines_init+0x94/0xf0 [i915]
      [ 9547.672749]  i915_gem_init+0x191/0x950 [i915]
      Signed-off-by: Matthew Auld <matthew.auld@intel.com>
      Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190927173409.31175-2-matthew.auld@intel.com
      b178a3f6
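The shape of this fix is a plain NULL guard on the teardown path; a minimal self-contained model (struct and function names are invented here, not the actual i915 signatures) shows why cleanup must tolerate a kernel_context that init never created:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative model only: early init can fail before the engine's
 * kernel context exists, yet common cleanup still runs. */

struct intel_context { int pin_count; };
struct engine { struct intel_context *kernel_context; };

static void intel_context_unpin(struct intel_context *ce)
{
    ce->pin_count--;            /* would oops (NULL deref) if ce == NULL */
}

/* The fix: only unpin when init actually created the context, so the
 * error path fails gracefully instead of dereferencing NULL. */
static void engine_cleanup_common(struct engine *e)
{
    if (e->kernel_context)      /* guard added by the patch */
        intel_context_unpin(e->kernel_context);
}
```

The oops in the log (read at offset 0x7c, i.e. a field of a NULL context) is exactly the unguarded unpin on the error path.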
  17. 20 Sep, 2019 1 commit
    • C
      drm/i915: Mark i915_request.timeline as a volatile, rcu pointer · d19d71fc
      Authored by Chris Wilson
      The request->timeline is only valid until the request is retired (i.e.
      before it is completed). Upon retiring the request, the context may be
      unpinned and freed, and along with it the timeline may be freed. We
      therefore need to be very careful when chasing rq->timeline that the
      pointer does not disappear beneath us. The vast majority of users are in
      a protected context, either during request construction or retirement,
      where the timeline->mutex is held and the timeline cannot disappear. It
      is those few off the beaten path (where we access a second timeline) that
      need extra scrutiny -- to be added in the next patch after first adding
      the warnings about dangerous access.
      
      One complication, where we cannot use the timeline->mutex itself, is
      during request submission onto hardware (under spinlocks). Here, we want
      to check on the timeline to finalize the breadcrumb, and so we need to
      impose a second rule to ensure that the request->timeline is indeed
      valid. As we are submitting the request, its context and timeline must
      be pinned, as it will be used by the hardware. Since it is pinned, we
      know the request->timeline must still be valid, and we cannot submit the
      idle barrier until after we release the engine->active.lock, ergo while
      submitting and holding that spinlock, a second thread cannot release the
      timeline.
      
      v2: Don't be lazy inside selftests; hold the timeline->mutex for as long
      as we need it, and tidy up acquiring the timeline with a bit of
      refactoring (i915_active_add_request)
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190919111912.21631-1-chris@chris-wilson.co.uk
      d19d71fc
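The two validity rules described above can be captured in a small lockdep-style sketch (all names here are hypothetical stand-ins, not the i915 API): chasing rq->timeline is only legal while the timeline->mutex is held, or while the request is pinned for submission, since a pinned context keeps its timeline alive until after engine->active.lock is released.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Modeled state; in the patch rq->timeline becomes a __rcu pointer
 * with warnings guarding exactly these two conditions. */
struct timeline { bool mutex_held; const char *name; };

struct request {
    struct timeline *timeline;   /* only valid until retirement */
    bool pinned_for_submission;  /* context/timeline pinned for hw */
};

/* Lockdep-style accessor mirroring the warnings the patch adds:
 * return the timeline only when one of the two rules holds, so an
 * unprotected chase trips the assertion instead of racing a free. */
static struct timeline *request_timeline(const struct request *rq)
{
    assert(rq->timeline->mutex_held || rq->pinned_for_submission);
    return rq->timeline;
}
```

Callers off the beaten path (accessing a second timeline) must therefore either take that timeline's mutex or prove the request is still pinned, which is what the follow-up patch audits.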
  18. 17 Sep, 2019 1 commit
  19. 29 Aug, 2019 1 commit
  20. 24 Aug, 2019 1 commit
  21. 20 Aug, 2019 1 commit
  22. 17 Aug, 2019 1 commit
  23. 16 Aug, 2019 1 commit
  24. 14 Aug, 2019 2 commits
  25. 12 Aug, 2019 1 commit
  26. 10 Aug, 2019 2 commits
  27. 09 Aug, 2019 1 commit
  28. 08 Aug, 2019 1 commit