1. 07 Sep 2020, 4 commits
  2. 18 Aug 2020, 1 commit
  3. 14 Jul 2020, 1 commit
    • drm/i915: Skip signaling a signaled request · 1d9221e9
      Authored by Chris Wilson
      Preempt-to-busy introduces various fascinating complications in that
      requests may complete as we are unsubmitting them from the HW. As they
      may then signal after unsubmission, we may find ourselves having to
      clean up the signaling request from within the signaling callback. This
      causes us to recurse onto the same i915_request.lock.
      
      However, if the request is already signaled (as it will be before we
      enter the signal callbacks), we know we can skip the signaling of that
      request during submission, neatly evading the spinlock recursion.
      
      unsubmit(ve.rq0) # timeslice expiration or other preemption
       -> virtual_submit_request(ve.rq0)
      dma_fence_signal(ve.rq0) # request completed before preemption ack
       -> submit_notify(ve.rq1)
         -> virtual_submit_request(ve.rq1) # sees that we have completed ve.rq0
            -> __i915_request_submit(ve.rq0)
      
      [  264.210142] BUG: spinlock recursion on CPU#2, sample_multi_tr/2093
      [  264.210150]  lock: 0xffff9efd6ac55080, .magic: dead4ead, .owner: sample_multi_tr/2093, .owner_cpu: 2
      [  264.210155] CPU: 2 PID: 2093 Comm: sample_multi_tr Tainted: G     U
      [  264.210158] Hardware name: Intel Corporation CoffeeLake Client Platform/CoffeeLake S UDIMM RVP, BIOS CNLSFWR1.R00.X212.B01.1909060036 09/06/2019
      [  264.210160] Call Trace:
      [  264.210167]  dump_stack+0x98/0xda
      [  264.210174]  spin_dump.cold+0x24/0x3c
      [  264.210178]  do_raw_spin_lock+0x9a/0xd0
      [  264.210184]  _raw_spin_lock_nested+0x6a/0x70
      [  264.210314]  __i915_request_submit+0x10a/0x3c0 [i915]
      [  264.210415]  virtual_submit_request+0x9b/0x380 [i915]
      [  264.210516]  submit_notify+0xaf/0x14c [i915]
      [  264.210602]  __i915_sw_fence_complete+0x8a/0x230 [i915]
      [  264.210692]  i915_sw_fence_complete+0x2d/0x40 [i915]
      [  264.210762]  __dma_i915_sw_fence_wake+0x19/0x30 [i915]
      [  264.210767]  dma_fence_signal_locked+0xb1/0x1c0
      [  264.210772]  dma_fence_signal+0x29/0x50
      [  264.210871]  i915_request_wait+0x5cb/0x830 [i915]
      [  264.210876]  ? dma_resv_get_fences_rcu+0x294/0x5d0
      [  264.210974]  i915_gem_object_wait_fence+0x2f/0x40 [i915]
      [  264.211084]  i915_gem_object_wait+0xce/0x400 [i915]
      [  264.211178]  i915_gem_wait_ioctl+0xff/0x290 [i915]
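
      A minimal userspace sketch of the idea (an analogue, not the i915 code
      itself): test the signaled flag before taking the fence lock, so a
      request that completed before the preemption was acknowledged is
      skipped instead of being re-locked from inside its own signaling
      callback.

        #include <pthread.h>
        #include <stdatomic.h>
        #include <stdio.h>

        struct fence {
                pthread_mutex_t lock;
                atomic_bool signaled;
        };

        /* Called on (re)submission, possibly from within a signaling
         * callback that already holds a fence lock up the call chain. */
        static void submit_fence(struct fence *f)
        {
                /* Already signaled: locking again is what recursed. */
                if (atomic_load(&f->signaled))
                        return;

                pthread_mutex_lock(&f->lock);
                /* ... move the request onto the execution lists ... */
                pthread_mutex_unlock(&f->lock);
        }

        int main(void)
        {
                struct fence f = { PTHREAD_MUTEX_INITIALIZER, false };

                atomic_store(&f.signaled, true); /* completed early */
                submit_fence(&f);                /* safely skipped */
                printf("no spinlock recursion\n");
                return 0;
        }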
      
      Fixes: 22b7a426 ("drm/i915/execlists: Preempt-to-busy")
      References: 6d06779e ("drm/i915: Load balancing across a virtual engine")
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Cc: "Nayana, Venkata Ramana" <venkata.ramana.nayana@intel.com>
      Cc: <stable@vger.kernel.org> # v5.4+
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20200713141636.29326-1-chris@chris-wilson.co.uk
  4. 03 Jun 2020, 1 commit
  5. 01 Jun 2020, 3 commits
  6. 30 May 2020, 1 commit
    • drm/i915: Check for awaits on still currently executing requests · b55230e5
      Authored by Chris Wilson
      With the advent of preempt-to-busy, a request may still be on the GPU as
      we unwind. And in the case of an unpreemptible [due to HW] request, that
      request will remain indefinitely on the GPU even though we have
      returned it to our submission queue, and cleared the active bit.
      
      We only run the execution callbacks on transferring the request from our
      submission queue to the execution queue, but if this is a bonded request
      that the HW is waiting for, we will not submit it (as we wait for a
      fresh execution) even though it is still being executed.
      
      As we know that there are always preemption points between requests, we
      know that only the currently executing request may still be active even
      though we have cleared the flag. However, we do not know precisely which
      request is in ELSP[0] due to a delay in processing events, and
      furthermore we only store the last request of a context in our state
      tracker.
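
      A hedged sketch of that rule (names are illustrative, not the i915
      implementation): with the active flag cleared by preempt-to-busy
      unwinding, only the last request tracked on a context may still be in
      ELSP[0], so an await must treat exactly that request as potentially
      live.

        #include <stdbool.h>

        struct sketch_request {
                bool active;                  /* still on the execution lists? */
                const struct sketch_request *context_last; /* last rq tracked */
        };

        static bool may_still_be_on_gpu(const struct sketch_request *rq)
        {
                if (rq->active)
                        return true;          /* definitely submitted to HW */

                /* Unwound, but possibly still executing: only the last
                 * request tracked for its context can be in ELSP[0]. */
                return rq == rq->context_last;
        }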
      
      Fixes: 22b7a426 ("drm/i915/execlists: Preempt-to-busy")
      Testcase: igt/gem_exec_balancer/bonded-dual
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20200529143926.3245-1-chris@chris-wilson.co.uk
  7. 29 May 2020, 1 commit
  8. 27 May 2020, 2 commits
  9. 26 May 2020, 1 commit
  10. 25 May 2020, 1 commit
  11. 22 May 2020, 1 commit
  12. 14 May 2020, 2 commits
  13. 12 May 2020, 2 commits
  14. 09 May 2020, 3 commits
  15. 08 May 2020, 4 commits
  16. 07 May 2020, 1 commit
  17. 03 Apr 2020, 1 commit
    • drm/i915: Keep a per-engine request pool · 43acd651
      Authored by Chris Wilson
      Add a tiny per-engine request pool so that we should always have a
      request available for power-management allocations from tricky
      contexts. This reserve is expected to be used only for kernel
      contexts, when barriers must be emitted [almost] without fail.
      
      The main consumer for this reserved request is expected to be engine-pm,
      for which we know that there will always be at least the previous pm
      request that we can reuse under memory pressure (so there should always
      be a spare request for engine_park()).
      
      This is an alternative to using a comparatively bulky mempool, which
      would require custom handling both for our reserved allocation
      requirement and for protecting our TYPESAFE_BY_RCU slab cache. The
      advantage of a mempool would be that it would allow us to keep a larger
      per-engine request pool. However, converting over to a mempool is
      straightforward should the need arise.
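
      A userspace sketch of the one-slot reserve (hypothetical, and far
      simpler than the real allocator): prefer the normal allocation path,
      fall back to the cached spare under memory pressure, and refill the
      spare first on free.

        #include <stdlib.h>

        struct request { int payload; };

        struct engine_pool {
                struct request *reserve;  /* single spare, may be NULL */
        };

        static struct request *pool_get(struct engine_pool *p)
        {
                struct request *rq = malloc(sizeof(*rq));

                if (!rq) {                /* allocation failed: use the spare */
                        rq = p->reserve;
                        p->reserve = NULL;
                }
                return rq;
        }

        static void pool_put(struct engine_pool *p, struct request *rq)
        {
                if (!p->reserve)          /* refill the reserve first */
                        p->reserve = rq;
                else
                        free(rq);
        }
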
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Janusz Krzysztofik <janusz.krzysztofik@linux.intel.com>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-and-tested-by: Janusz Krzysztofik <janusz.krzysztofik@linux.intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20200402184037.21630-1-chris@chris-wilson.co.uk
  18. 23 Mar 2020, 1 commit
  19. 12 Mar 2020, 3 commits
  20. 11 Mar 2020, 2 commits
    • drm/i915: Mark up racy read of active rq->engine · 326611dd
      Authored by Chris Wilson
      As a virtual engine may change rq->engine while the request is in
      flight, we need to warn the compiler that an active request's engine
      pointer is volatile.
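
      A sketch of the annotation pattern (the wrapper functions are
      illustrative, not the exact diff): both sides of the intentional race
      are marked, so the compiler may neither tear nor re-load the access,
      and KCSAN knows the race is deliberate.

        /* writer: the virtual-engine submission tasklet moves the request */
        static void set_engine(struct i915_request *rq,
                               struct intel_engine_cs *engine)
        {
                WRITE_ONCE(rq->engine, engine);
        }

        /* racy reader, e.g. on the execbuf commit path */
        static struct intel_engine_cs *get_engine(struct i915_request *rq)
        {
                return READ_ONCE(rq->engine);
        }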
      
      [   95.017686] write (marked) to 0xffff8881e8386b10 of 8 bytes by interrupt on cpu 2:
      [   95.018123]  execlists_dequeue+0x762/0x2150 [i915]
      [   95.018539]  __execlists_submission_tasklet+0x48/0x60 [i915]
      [   95.018955]  execlists_submission_tasklet+0xd3/0x170 [i915]
      [   95.018986]  tasklet_action_common.isra.0+0x42/0xa0
      [   95.019016]  __do_softirq+0xd7/0x2cd
      [   95.019043]  irq_exit+0xbe/0xe0
      [   95.019068]  irq_work_interrupt+0xf/0x20
      [   95.019491]  i915_request_retire+0x2c5/0x670 [i915]
      [   95.019937]  retire_requests+0xa1/0xf0 [i915]
      [   95.020348]  engine_retire+0xa1/0xe0 [i915]
      [   95.020376]  process_one_work+0x3b1/0x690
      [   95.020403]  worker_thread+0x80/0x670
      [   95.020429]  kthread+0x19a/0x1e0
      [   95.020454]  ret_from_fork+0x1f/0x30
      [   95.020476]
      [   95.020498] read to 0xffff8881e8386b10 of 8 bytes by task 8909 on cpu 3:
      [   95.020918]  __i915_request_commit+0x177/0x220 [i915]
      [   95.021329]  i915_gem_do_execbuffer+0x38c4/0x4e50 [i915]
      [   95.021750]  i915_gem_execbuffer2_ioctl+0x2c3/0x580 [i915]
      [   95.021784]  drm_ioctl_kernel+0xe4/0x120
      [   95.021809]  drm_ioctl+0x297/0x4c7
      [   95.021832]  ksys_ioctl+0x89/0xb0
      [   95.021865]  __x64_sys_ioctl+0x42/0x60
      [   95.021901]  do_syscall_64+0x6e/0x2c0
      [   95.021927]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20200310142403.5953-1-chris@chris-wilson.co.uk
    • drm/i915: Defer semaphore priority bumping to a workqueue · 209df10b
      Authored by Chris Wilson
      Since the semaphore fence may be signaled from inside an interrupt
      handler, while a request->lock is already held up the call chain, we
      cannot then take engine->active.lock to process the semaphore priority
      bump, as we may traverse our call tree and end up on another held
      request.
      
      CPU 0:
      [ 2243.218864]  _raw_spin_lock_irqsave+0x9a/0xb0
      [ 2243.218867]  i915_schedule_bump_priority+0x49/0x80 [i915]
      [ 2243.218869]  semaphore_notify+0x6d/0x98 [i915]
      [ 2243.218871]  __i915_sw_fence_complete+0x61/0x420 [i915]
      [ 2243.218874]  ? kmem_cache_free+0x211/0x290
      [ 2243.218876]  i915_sw_fence_complete+0x58/0x80 [i915]
      [ 2243.218879]  dma_i915_sw_fence_wake+0x3e/0x80 [i915]
      [ 2243.218881]  signal_irq_work+0x571/0x690 [i915]
      [ 2243.218883]  irq_work_run_list+0xd7/0x120
      [ 2243.218885]  irq_work_run+0x1d/0x50
      [ 2243.218887]  smp_irq_work_interrupt+0x21/0x30
      [ 2243.218889]  irq_work_interrupt+0xf/0x20
      
      CPU 1:
      [ 2242.173107]  _raw_spin_lock+0x8f/0xa0
      [ 2242.173110]  __i915_request_submit+0x64/0x4a0 [i915]
      [ 2242.173112]  __execlists_submission_tasklet+0x8ee/0x2120 [i915]
      [ 2242.173114]  ? i915_sched_lookup_priolist+0x1e3/0x2b0 [i915]
      [ 2242.173117]  execlists_submit_request+0x2e8/0x2f0 [i915]
      [ 2242.173119]  submit_notify+0x8f/0xc0 [i915]
      [ 2242.173121]  __i915_sw_fence_complete+0x61/0x420 [i915]
      [ 2242.173124]  ? _raw_spin_unlock_irqrestore+0x39/0x40
      [ 2242.173137]  i915_sw_fence_complete+0x58/0x80 [i915]
      [ 2242.173140]  i915_sw_fence_commit+0x16/0x20 [i915]
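
      A hedged sketch of the deferral (struct and helper names are
      illustrative): the semaphore callback, running in irq context with
      request locks held up the call chain, only queues work; the priority
      bump itself runs later in process context, where taking
      engine->active.lock is safe.

        #include <linux/workqueue.h>

        struct my_request {
                struct work_struct semaphore_work;   /* hypothetical field */
        };

        static void semaphore_bump_worker(struct work_struct *wrk)
        {
                struct my_request *rq =
                        container_of(wrk, struct my_request, semaphore_work);

                /* process context: no request->lock held, engine locks OK */
                bump_priority(rq);                   /* hypothetical helper */
        }

        /* fence callback, invoked from signal_irq_work() in irq context */
        static void semaphore_notify(struct my_request *rq)
        {
                INIT_WORK(&rq->semaphore_work, semaphore_bump_worker);
                schedule_work(&rq->semaphore_work);  /* defer the locking */
        }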
      
      Closes: https://gitlab.freedesktop.org/drm/intel/issues/1318
      Fixes: b7404c7e ("drm/i915: Bump ready tasks ahead of busywaits")
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Cc: <stable@vger.kernel.org> # v5.2+
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20200310101720.9944-1-chris@chris-wilson.co.uk
  21. 10 Mar 2020, 2 commits
    • drm/i915: Improve the start alignment of bonded pairs · 798fa870
      Authored by Chris Wilson
      Always wait on the start of the signaler request to reduce the problem
      of dequeueing the bonded pair too early -- we want both payloads to
      start at the same time, with no latency, and yet still allow others to
      make full use of the slack in the system. This reduces the amount of
      time we spend waiting on the semaphore used to synchronise the start of
      the bonded payload.
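
      A loose userspace analogue of the start alignment (condition variables
      standing in for the HW semaphore): the bonded payload waits only for
      the signaler to start, not to finish, so both payloads run concurrently
      from the same point in time.

        #include <pthread.h>
        #include <stdio.h>

        static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
        static pthread_cond_t started = PTHREAD_COND_INITIALIZER;
        static int signaler_started;

        static void *signaler(void *arg)
        {
                pthread_mutex_lock(&lock);
                signaler_started = 1;      /* "request has begun on HW" */
                pthread_cond_broadcast(&started);
                pthread_mutex_unlock(&lock);
                puts("signaler payload running");
                return NULL;
        }

        static void *bonded(void *arg)
        {
                pthread_mutex_lock(&lock);
                while (!signaler_started)  /* await the start, not the end */
                        pthread_cond_wait(&started, &lock);
                pthread_mutex_unlock(&lock);
                puts("bonded payload running");
                return NULL;
        }
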
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20200306133852.3420322-3-chris@chris-wilson.co.uk
    • drm/i915: Mark racy read of intel_engine_cs.saturated · 60900add
      Authored by Chris Wilson
      [ 3783.276728] BUG: KCSAN: data-race in __i915_request_submit [i915] / i915_request_await_dma_fence [i915]
      [ 3783.276766]
      [ 3783.276787] write to 0xffff8881f1bc60a0 of 1 bytes by interrupt on cpu 2:
      [ 3783.277187]  __i915_request_submit+0x47e/0x4a0 [i915]
      [ 3783.277580]  __execlists_submission_tasklet+0x997/0x2780 [i915]
      [ 3783.277973]  execlists_submission_tasklet+0xd3/0x170 [i915]
      [ 3783.278006]  tasklet_action_common.isra.0+0x42/0xa0
      [ 3783.278035]  __do_softirq+0xd7/0x2cd
      [ 3783.278063]  irq_exit+0xbe/0xe0
      [ 3783.278089]  do_IRQ+0x51/0x100
      [ 3783.278114]  ret_from_intr+0x0/0x1c
      [ 3783.278140]  finish_task_switch+0x72/0x260
      [ 3783.278170]  __schedule+0x1e5/0x510
      [ 3783.278198]  schedule+0x45/0xb0
      [ 3783.278226]  smpboot_thread_fn+0x23e/0x300
      [ 3783.278256]  kthread+0x19a/0x1e0
      [ 3783.278283]  ret_from_fork+0x1f/0x30
      [ 3783.278305]
      [ 3783.278327] read to 0xffff8881f1bc60a0 of 1 bytes by task 19440 on cpu 3:
      [ 3783.278724]  i915_request_await_dma_fence+0x2a6/0x530 [i915]
      [ 3783.279130]  i915_request_await_object+0x2fe/0x470 [i915]
      [ 3783.279524]  i915_gem_do_execbuffer+0x45dc/0x4c20 [i915]
      [ 3783.279908]  i915_gem_execbuffer2_ioctl+0x2c3/0x580 [i915]
      [ 3783.279940]  drm_ioctl_kernel+0xe4/0x120
      [ 3783.279968]  drm_ioctl+0x297/0x4c7
      [ 3783.279996]  ksys_ioctl+0x89/0xb0
      [ 3783.280021]  __x64_sys_ioctl+0x42/0x60
      [ 3783.280047]  do_syscall_64+0x6e/0x2c0
      [ 3783.280074]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20200309132726.28358-1-chris@chris-wilson.co.uk
  22. 07 Mar 2020, 1 commit
    • drm/i915: Do not poison i915_request.link on removal · dff2a11b
      Authored by Chris Wilson
      Do not poison the timeline link on the i915_request to allow both
      forward/backward list traversal under RCU.
      
      [ 9759.139229] RIP: 0010:active_request+0x2a/0x90 [i915]
      [ 9759.139240] Code: 41 56 41 55 41 54 55 48 89 fd 53 48 89 f3 48 83 c5 60 e8 49 de dc e0 48 8b 83 e8 01 00 00 48 39 c5 74 12 48 8d 90 20 fe ff ff <48> 8b 80 50 fe ff ff a8 01 74 11 e8 66 20 dd e0 48 89 d8 5b 5d 41
      [ 9759.139251] RSP: 0018:ffffc9000014ce80 EFLAGS: 00010012
      [ 9759.139260] RAX: dead000000000122 RBX: ffff888817cac040 RCX: 0000000000022000
      [ 9759.139267] RDX: deacffffffffff42 RSI: ffff888817cac040 RDI: ffff888851fee900
      [ 9759.139275] RBP: ffff888851fee960 R08: 000000000000001a R09: ffffffffa04702e0
      [ 9759.139282] R10: ffffffff82187ea0 R11: 0000000000000002 R12: 0000000000000004
      [ 9759.139289] R13: ffffffffa04d5179 R14: ffff8887f994ae40 R15: ffff888857b9a068
      [ 9759.139296] FS:  0000000000000000(0000) GS:ffff88885ed80000(0000) knlGS:0000000000000000
      [ 9759.139304] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      [ 9759.139311] CR2: 00007fff5bdec000 CR3: 00000008534fe001 CR4: 00000000001606e0
      [ 9759.139318] Call Trace:
      [ 9759.139325]  <IRQ>
      [ 9759.139389]  execlists_reset+0x14d/0x310 [i915]
      [ 9759.139400]  ? _raw_spin_unlock_irqrestore+0xf/0x30
      [ 9759.139445]  ? fwtable_read32+0x90/0x230 [i915]
      [ 9759.139499]  execlists_submission_tasklet+0xf6/0x150 [i915]
      [ 9759.139508]  tasklet_action_common.isra.17+0x32/0xa0
      [ 9759.139516]  __do_softirq+0x114/0x3dc
      [ 9759.139525]  ? handle_irq_event_percpu+0x59/0x70
      [ 9759.139533]  irq_exit+0xa1/0xc0
      [ 9759.139540]  do_IRQ+0x76/0x150
      [ 9759.139547]  common_interrupt+0xf/0xf
      [ 9759.139554]  </IRQ>
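
      A sketch of the pattern (kernel list API, simplified context): plain
      list_del() writes LIST_POISON1/2 into the removed entry, so an RCU
      reader walking through the dead entry in either direction would
      dereference poison. Removing without poisoning keeps both links valid
      until a grace period has elapsed.

        #include <linux/list.h>

        static void remove_request_link(struct list_head *link)
        {
                /* list_del(link) poisons link->next and link->prev;
                 * even list_del_rcu(link) still poisons link->prev.
                 * Neither is safe for readers traversing in both
                 * directions under RCU. */
                __list_del_entry(link);  /* unlink, leave pointers intact */
        }
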
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20200306140115.3495686-1-chris@chris-wilson.co.uk
  23. 06 Mar 2020, 1 commit