1. 21 Mar, 2017 (2 commits)
  2. 17 Mar, 2017 (2 commits)
  3. 16 Mar, 2017 (1 commit)
    • C
      drm/i915/scheduler: emulate a scheduler for guc · 31de7350
      Chris Wilson authored
      This emulates execlists on top of the GuC in order to defer submission of
      requests to the hardware. This deferral allows time for high priority
      requests to gazump their way to the head of the queue, however it nerfs
      the GuC by converting it back into a simple execlist (where the CPU has
      to wake up after every request to feed new commands into the GuC).
      
      v2: Drop hack status - though iirc there is still a lockdep inversion
      between fence and engine->timeline->lock (which is impossible as the
      nesting only occurs on different fences - hopefully just requires some
      judicious lockdep annotation)
      v3: Apply lockdep nesting to enabling signaling on the request, using
      the pattern we already have in __i915_gem_request_submit();
      v4: Replaying requests after a hang also now needs the timeline
      spinlock, to disable the interrupts at least
      v5: Hold wq lock for completeness, and emit a tracepoint for enabling signal
      v6: Reorder interrupt checking for a happier gcc.
      v7: Only signal the tasklet after a user-interrupt if using guc scheduling
      v8: Restore lost update of rq through the i915_guc_irq_handler (Tvrtko)
      v9: Avoid re-initialising the engine->irq_tasklet from inside a reset
      v10: Hook up the execlists-style tracepoints
      v11: Clear the execlists irq_posted bit after taking over the interrupt/tasklet
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
      Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
      Link: http://patchwork.freedesktop.org/patch/msgid/20170316125619.6856-1-chris@chris-wilson.co.uk
      Reviewed-by: Michał Winiarski <michal.winiarski@intel.com>
      31de7350
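The deferral this commit describes, holding ready requests in a priority-sorted queue and feeding the hardware one element at a time, can be modeled in plain C. This is a hypothetical userspace sketch, not the driver code: `struct fake_request`, `queue_insert()` and `submit_next()` are invented names.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in for an i915 request: only the fields the
 * sketch needs. */
struct fake_request {
    int priority;               /* higher value = more urgent */
    int seqno;                  /* submission order, for FIFO tie-break */
    struct fake_request *next;
};

/* Insert in priority order; equal priorities stay FIFO. This is the
 * window in which a high-priority request can gazump its way to the
 * head of the queue before hw submission. */
static void queue_insert(struct fake_request **head, struct fake_request *rq)
{
    while (*head && (*head)->priority >= rq->priority)
        head = &(*head)->next;
    rq->next = *head;
    *head = rq;
}

/* The CPU wakes after every completion and feeds one request: the
 * "simple execlist" cost the commit message mentions. */
static struct fake_request *submit_next(struct fake_request **head)
{
    struct fake_request *rq = *head;

    if (rq)
        *head = rq->next;
    return rq;
}
```

A later but higher-priority request dequeues first, while equal-priority requests keep their submission order.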
  4. 15 Mar, 2017 (1 commit)
  5. 13 Mar, 2017 (1 commit)
  6. 12 Mar, 2017 (2 commits)
  7. 10 Mar, 2017 (1 commit)
  8. 02 Mar, 2017 (1 commit)
    • C
      drm/i915/guc: Disable irq for __i915_guc_submit wq_lock · 25afdf89
      Chris Wilson authored
      __i915_guc_submit may be, despite my assertion, called from outside of
      an irq-safe spinlock so we need to use a full spin_lock_irqsave and not
      cheat using a spin_lock. (The initial notify callback from the completed
      fence is called before the spinlock is taken to wake up all waiters and
      call their callbacks.)
      
      [   48.166581] kernel BUG at drivers/gpu/drm/i915/i915_guc_submission.c:527!
      [   48.166617] invalid opcode: 0000 [#1] PREEMPT SMP
      [   48.166644] Modules linked in: i915 prime_numbers x86_pkg_temp_thermal intel_powerclamp coretemp crct10dif_pclmul crc32_pclmul ghash_clmulni_intel mei_me mei i2c_i801 netconsole i2c_hid [last unloaded: i915]
      [   48.166733] CPU: 2 PID: 5 Comm: kworker/u8:0 Tainted: G     U          4.10.0nightly-170302-guc_scrub+ #19
      [   48.166778] Hardware name:                  /NUC6i5SYB, BIOS SYSKLi35.86A.0054.2016.0930.1102 09/30/2016
      [   48.166835] Workqueue: i915 __intel_autoenable_gt_powersave [i915]
      [   48.166865] task: ffff88084ab7cf40 task.stack: ffffc90000064000
      [   48.166921] RIP: 0010:__i915_guc_submit+0x1e6/0x2a0 [i915]
      [   48.166953] RSP: 0018:ffffc90000067c80 EFLAGS: 00010202
      [   48.166979] RAX: 0000000000000202 RBX: ffff8808465e0c68 RCX: 0000000000000201
      [   48.167016] RDX: 0000000080000201 RSI: ffff88084ab7d798 RDI: ffff88082b8a8040
      [   48.167054] RBP: ffffc90000067cd8 R08: 0000000000000001 R09: 0000000000000000
      [   48.167085] R10: 0000000000000000 R11: 0000000000000000 R12: ffff88082b8a8148
      [   48.167126] R13: 0000000000000000 R14: ffff88082f440000 R15: ffff88082e85e660
      [   48.167156] FS:  0000000000000000(0000) GS:ffff88086ed00000(0000) knlGS:0000000000000000
      [   48.167195] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      [   48.167226] CR2: 000055862ffcdc2c CR3: 0000000001e0f000 CR4: 00000000003406e0
      [   48.167257] Call Trace:
      [   48.168112]  ? trace_hardirqs_on+0xd/0x10
      [   48.168966]  ? _raw_spin_unlock_irqrestore+0x4a/0x80
      [   48.169831]  i915_guc_submit+0x1a/0x20 [i915]
      [   48.170680]  submit_notify+0x89/0xc0 [i915]
      [   48.171512]  __i915_sw_fence_complete+0x175/0x220 [i915]
      [   48.172340]  i915_sw_fence_complete+0x2a/0x50 [i915]
      [   48.173158]  i915_sw_fence_commit+0x21/0x30 [i915]
      [   48.173968]  __i915_add_request+0x238/0x530 [i915]
      [   48.174764]  __intel_autoenable_gt_powersave+0x8b/0xb0 [i915]
      [   48.175549]  process_one_work+0x218/0x690
      [   48.176318]  ? process_one_work+0x197/0x690
      [   48.177183]  worker_thread+0x4e/0x4a0
      [   48.178039]  kthread+0x10c/0x140
      [   48.178878]  ? process_one_work+0x690/0x690
      [   48.179718]  ? kthread_create_on_node+0x40/0x40
      [   48.180568]  ret_from_fork+0x31/0x40
      [   48.181423] Code: 02 00 00 43 89 84 ae 50 11 00 00 e8 75 01 62 e1 48 83 c4 30 5b 41 5c 41 5d 41 5e 41 5f 5d c3 48 c1 e0 20 48 09 c2 49 89 d0 eb 82 <0f> 0b 0f 0b 0f 0b 0f 0b 0f 0b 0f 0b 49 c1 e8 20 44 89 43 34 4a
      [   48.183336] RIP: __i915_guc_submit+0x1e6/0x2a0 [i915] RSP: ffffc90000067c80
      Reported-by: Arkadiusz Hiler <arkadiusz.hiler@intel.com>
      Fixes: 349ab919 ("drm/i915/guc: Make wq_lock irq-safe")
      Fixes: 67b807a8 ("drm/i915: Delay disabling the user interrupt for breadcrumbs")
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Cc: Arkadiusz Hiler <arkadiusz.hiler@intel.com>
      Link: http://patchwork.freedesktop.org/patch/msgid/20170302145323.12886-1-chris@chris-wilson.co.uk
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Arkadiusz Hiler <arkadiusz.hiler@intel.com>
      Tested-by: Arkadiusz Hiler <arkadiusz.hiler@intel.com>
      25afdf89
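The distinction the fix turns on is that plain `spin_lock` is only safe if interrupts are already disabled on this path, while `spin_lock_irqsave` saves and disables them itself. A userspace model of that contract (every `fake_` name is an illustrative stand-in, not kernel API):

```c
#include <assert.h>
#include <stdbool.h>

/* Userspace model of the CPU interrupt-enable flag. */
static bool irqs_enabled = true;

/* Model of spin_lock_irqsave(): disable interrupts, remembering the
 * previous state in *flags so that unlock can restore it, whatever
 * context the caller was in. */
static void fake_spin_lock_irqsave(unsigned long *flags)
{
    *flags = irqs_enabled;
    irqs_enabled = false;
}

static void fake_spin_unlock_irqrestore(unsigned long flags)
{
    irqs_enabled = flags;
}

/* Model of the buggy path: a plain spin_lock on an irq-safe lock is
 * only legal if the caller already disabled interrupts. Returning
 * false here stands in for the BUG that fired at
 * i915_guc_submission.c:527. */
static bool fake_spin_lock_checked(void)
{
    return !irqs_enabled;
}
```

Called from process context (the autoenable-powersave worker in the oops above), the cheat fails while the irqsave variant both works and restores the previous irq state.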
  9. 28 Feb, 2017 (2 commits)
  10. 21 Feb, 2017 (1 commit)
  11. 19 Jan, 2017 (1 commit)
  12. 18 Jan, 2017 (2 commits)
  13. 13 Jan, 2017 (1 commit)
  14. 07 Jan, 2017 (1 commit)
  15. 28 Dec, 2016 (1 commit)
  16. 19 Dec, 2016 (1 commit)
    • C
      drm/i915: Unify active context tracking between legacy/execlists/guc · e8a9c58f
      Chris Wilson authored
      The requests conversion introduced a nasty bug where we could generate a
      new request in the middle of constructing a request if we needed to idle
      the system in order to evict space for a context. The request to idle
      would be executed (and waited upon) before the current one, wreaking
      minor havoc in the seqno accounting, as we would consider the current
      request already completed (prior to its deferred seqno assignment) while
      ring->last_retired_head had been updated, which could still allow us to
      overwrite the current request before execution.
      
      We also employed two different mechanisms to track the active context
      until it was switched out. The legacy method allowed for waiting upon an
      active context (it could forcibly evict any vma, including context's),
      but the execlists method took a step backwards by pinning the vma for
      the entire active lifespan of the context (the only way to evict was to
      idle the entire GPU, not individual contexts). However, to circumvent
      the tricky issue of locking (i.e. we cannot take struct_mutex at the
      time of i915_gem_request_submit(), where we would want to move the
      previous context onto the active tracker and unpin it), we take the
      execlists approach and keep the contexts pinned until retirement.
      The benefit of the execlists approach, more important for execlists than
      legacy, was the reduced cost of pinning the context for each request: as
      the context was kept pinned until idle, the pinning could be
      short-circuited for all active contexts.
      
      We introduce new engine vfuncs to pin and unpin the context
      respectively. The context is pinned at the start of the request, and
      only unpinned when the following request is retired (this ensures that
      the context is idle and coherent in main memory before we unpin it). We
      move the engine->last_context tracking into the retirement itself
      (rather than during request submission) in order to allow the submission
      to be reordered or unwound without undue difficulty.
      
      And finally an ulterior motive for unifying context handling was to
      prepare for mock requests.
      
      v2: Rename to last_retired_context, split out legacy_context tracking
      for MI_SET_CONTEXT.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: http://patchwork.freedesktop.org/patch/msgid/20161218153724.8439-3-chris@chris-wilson.co.uk
      e8a9c58f
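The pinning rule introduced here, pin at request construction and unpin only once a following request retires, can be sketched as a reference-count exercise. All names are hypothetical; the real driver routes this through the new engine pin/unpin vfuncs.

```c
#include <assert.h>
#include <stddef.h>

struct fake_ctx {
    int pin_count;
};

/* Per-engine slot holding the most recently retired context, which
 * inherits the retiring request's pin (cf. last_retired_context in v2). */
static struct fake_ctx *last_retired_context;

static void ctx_pin(struct fake_ctx *ctx)
{
    ctx->pin_count++;
}

static void ctx_unpin(struct fake_ctx *ctx)
{
    assert(ctx->pin_count-- > 0);
}

/* The context is pinned at the start of the request. */
static void request_alloc(struct fake_ctx *ctx)
{
    ctx_pin(ctx);
}

/* Only at retirement of the *next* request is the previous context
 * known to be idle and coherent in memory, so only then is it safe
 * to unpin it. */
static void request_retire(struct fake_ctx *ctx)
{
    if (last_retired_context)
        ctx_unpin(last_retired_context);
    last_retired_context = ctx;
}
```

Two back-to-back requests on the same context never let its pin count reach zero, which is the short-circuit the commit message credits to the execlists approach.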
  17. 16 Dec, 2016 (1 commit)
  18. 02 Dec, 2016 (2 commits)
  19. 29 Nov, 2016 (4 commits)
    • C
      drm/i915/guc: Split hw submission for replay after GPU reset · 34ba5a80
      Chris Wilson authored
      Something I missed before sending off the partial series was that the
      non-scheduler guc reset path was broken (in the full series, this is
      pushed to the execlists reset handler). The issue is that after a reset,
      we have to refill the GuC workqueues, which we do by resubmitting the
      requests. However, if we already have submitted them, the fences within
      them have already been used and triggering them again is an error.
      Instead, just repopulate the guc workqueue.
      
      [  115.858560] [IGT] gem_busy: starting subtest hang-render
      [  135.839867] [drm] GPU HANG: ecode 9:0:0xe757fefe, in gem_busy [1716], reason: Hang on render ring, action: reset
      [  135.839902] drm/i915: Resetting chip after gpu hang
      [  135.839957] [drm] RC6 on
      [  135.858351] ------------[ cut here ]------------
      [  135.858357] WARNING: CPU: 2 PID: 45 at drivers/gpu/drm/i915/i915_sw_fence.c:108 i915_sw_fence_complete+0x25/0x30
      [  135.858357] Modules linked in: rfcomm bnep binfmt_misc nls_iso8859_1 input_leds snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic snd_hda_intel snd_hda_codec snd_hda_core btusb btrtl snd_hwdep snd_pcm 8250_dw snd_seq_midi hid_lenovo snd_seq_midi_event snd_rawmidi iwlwifi x86_pkg_temp_thermal coretemp snd_seq crct10dif_pclmul snd_seq_device hci_uart snd_timer crc32_pclmul ghash_clmulni_intel idma64 aesni_intel virt_dma btbcm snd btqca aes_x86_64 btintel lrw cfg80211 bluetooth gf128mul glue_helper ablk_helper cryptd soundcore intel_lpss_pci intel_pch_thermal intel_lpss_acpi intel_lpss acpi_als mfd_core kfifo_buf acpi_pad industrialio autofs4 hid_plantronics usbhid dm_mirror dm_region_hash dm_log sdhci_pci ahci sdhci libahci i2c_hid hid
      [  135.858389] CPU: 2 PID: 45 Comm: kworker/2:1 Tainted: G        W       4.9.0-rc4+ #238
      [  135.858389] Hardware name:                  /NUC6i3SYB, BIOS SYSKLi35.86A.0024.2015.1027.2142 10/27/2015
      [  135.858392] Workqueue: events_long i915_hangcheck_elapsed
      [  135.858394]  ffffc900001bf9b8 ffffffff812bb238 0000000000000000 0000000000000000
      [  135.858396]  ffffc900001bf9f8 ffffffff8104f621 0000006c00000000 ffff8808296137f8
      [  135.858398]  0000000000000a00 ffff8808457a0000 ffff880845764e60 ffff880845760000
      [  135.858399] Call Trace:
      [  135.858403]  [<ffffffff812bb238>] dump_stack+0x4d/0x65
      [  135.858405]  [<ffffffff8104f621>] __warn+0xc1/0xe0
      [  135.858406]  [<ffffffff8104f748>] warn_slowpath_null+0x18/0x20
      [  135.858408]  [<ffffffff813f8c15>] i915_sw_fence_complete+0x25/0x30
      [  135.858410]  [<ffffffff813f8fad>] i915_sw_fence_commit+0xd/0x30
      [  135.858412]  [<ffffffff8142e591>] __i915_gem_request_submit+0xe1/0xf0
      [  135.858413]  [<ffffffff8142e5c8>] i915_gem_request_submit+0x28/0x40
      [  135.858415]  [<ffffffff814433e7>] i915_guc_submit+0x47/0x210
      [  135.858417]  [<ffffffff81443e98>] i915_guc_submission_enable+0x468/0x540
      [  135.858419]  [<ffffffff81442495>] intel_guc_setup+0x715/0x810
      [  135.858421]  [<ffffffff8142b6b4>] i915_gem_init_hw+0x114/0x2a0
      [  135.858423]  [<ffffffff813eeaa8>] i915_reset+0xe8/0x120
      [  135.858424]  [<ffffffff813f3937>] i915_reset_and_wakeup+0x157/0x180
      [  135.858426]  [<ffffffff813f79db>] i915_handle_error+0x1ab/0x230
      [  135.858428]  [<ffffffff812c760d>] ? scnprintf+0x4d/0x90
      [  135.858430]  [<ffffffff81435985>] i915_hangcheck_elapsed+0x275/0x3d0
      [  135.858432]  [<ffffffff810668cf>] process_one_work+0x12f/0x410
      [  135.858433]  [<ffffffff81066bf3>] worker_thread+0x43/0x4d0
      [  135.858435]  [<ffffffff81066bb0>] ? process_one_work+0x410/0x410
      [  135.858436]  [<ffffffff81066bb0>] ? process_one_work+0x410/0x410
      [  135.858438]  [<ffffffff8106bbb4>] kthread+0xd4/0xf0
      [  135.858440]  [<ffffffff8106bae0>] ? kthread_park+0x60/0x60
      
      v2: Only resubmit submitted requests
      v3: Don't forget the pending requests have reserved space.
      
      Fixes: d55ac5bf ("drm/i915: Defer transfer onto execution timeline to actual hw submission")
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: http://patchwork.freedesktop.org/patch/msgid/20161129121024.22650-6-chris@chris-wilson.co.uk
      34ba5a80
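The invariant behind the fix is that a sw_fence may complete only once, so a post-reset replay must refill the GuC workqueue without going back through the fence machinery. A sketch with invented names:

```c
#include <assert.h>
#include <stdbool.h>

struct fake_request {
    bool fence_signaled;    /* a sw_fence may only complete once */
    int  wq_entries;        /* copies of this request in the GuC wq */
};

/* First submission: completes the submit fence and fills the wq. */
static bool guc_submit(struct fake_request *rq)
{
    if (rq->fence_signaled)
        return false;       /* the WARN in i915_sw_fence_complete() */
    rq->fence_signaled = true;
    rq->wq_entries++;
    return true;
}

/* Post-reset replay: repopulate the workqueue only, leaving the
 * already-triggered fence untouched. */
static void guc_replay(struct fake_request *rq)
{
    rq->wq_entries++;
}
```

Resubmitting through the normal path after a reset trips the one-shot fence, while the split replay path refills the wq cleanly.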
    • C
      drm/i915/guc: Keep the execbuf client allocated across reset · 4d357af4
      Chris Wilson authored
      In order to avoid some complexity in trying to reconstruct the
      workqueues across reset, remember them instead. The issue comes when we
      have to handle a reset between request allocation and submission, the
      request has reserved space in the wq, but is not in any list so we fail
      to restore the reserved space. By keeping the execbuf client intact
      across the reset, we also keep the reservations.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: http://patchwork.freedesktop.org/patch/msgid/20161129121024.22650-5-chris@chris-wilson.co.uk
      4d357af4
    • C
      drm/i915/guc: Rename client->cookie to match use · 357248bf
      Chris Wilson authored
      The client->cookie is a shadow of the doorbell->cookie value, so rename
      it to indicate its association with the doorbell, like the doorbell id
      and offset.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: http://patchwork.freedesktop.org/patch/msgid/20161129121024.22650-3-chris@chris-wilson.co.uk
      357248bf
  20. 26 Nov, 2016 (5 commits)
  21. 15 Nov, 2016 (2 commits)
    • C
      drm/i915/scheduler: Execute requests in order of priorities · 20311bd3
      Chris Wilson authored
      Track the priority of each request and use it to determine the order in
      which we submit requests to the hardware via execlists.
      
      The priority of the request is determined by the user (eventually via
      the context) but may be overridden at any time by the driver. When we set
      the priority of the request, we bump the priority of all of its
      dependencies to match - so that a high priority drawing operation is not
      stuck behind a background task.
      
      When the request is ready to execute (i.e. we have signaled the submit
      fence following completion of all its dependencies, including third
      party fences), we put the request into a priority sorted rbtree to be
      submitted to the hardware. If the request is higher priority than all
      pending requests, it will be submitted on the next context-switch
      interrupt as soon as the hardware has completed the current request. We
      do not yet preempt an already-executing request in order to run a very
      high priority request immediately.
      
      One more limitation: this first implementation is for execlists only,
      so it is currently limited to gen8/gen9.
      
      v2: Replace recursive priority inheritance bumping with an iterative
      depth-first search list.
      v3: list_next_entry() for walking lists
      v4: Explain how the dfs solves the recursion problem with PI.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: http://patchwork.freedesktop.org/patch/msgid/20161114204105.29171-8-chris@chris-wilson.co.uk
      20311bd3
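The priority-inheritance walk from v2, raising every dependency of a request to at least the new priority with an iterative depth-first search instead of recursion, can be sketched over a toy dependency graph (structure and names are hypothetical):

```c
#include <assert.h>

#define MAX_DEPS  4
#define MAX_STACK 64

struct fake_request {
    int priority;
    int nr_deps;
    struct fake_request *deps[MAX_DEPS]; /* requests we must wait for */
};

/* Iterative depth-first walk: raise each dependency to at least prio,
 * so a high-priority drawing operation is never stuck behind a
 * background task. Iteration with an explicit stack avoids unbounded
 * recursion on deep dependency chains. */
static void bump_priority(struct fake_request *rq, int prio)
{
    struct fake_request *stack[MAX_STACK];
    int top = 0;

    stack[top++] = rq;
    while (top) {
        struct fake_request *cur = stack[--top];
        int i;

        if (cur->priority >= prio)
            continue;           /* already high enough: prune the walk */
        cur->priority = prio;
        for (i = 0; i < cur->nr_deps; i++)
            stack[top++] = cur->deps[i];
    }
}
```

The pruning on already-bumped nodes is what keeps the walk linear even when requests share dependencies.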
    • C
      drm/i915: Defer transfer onto execution timeline to actual hw submission · d55ac5bf
      Chris Wilson authored
      Defer the transfer from the client's timeline onto the execution
      timeline from the point of readiness to the point of actual submission.
      For example, in execlists, a request is finally submitted to hardware
      when the hardware is ready, and only put onto the hardware queue when
      the request is ready. By deferring the transfer, we ensure that the
      timeline is maintained in retirement order if we decide to queue the
      requests onto the hardware in a different order than fifo.
      
      v2: Rebased onto distinct global/user timeline lock classes.
      v3: Play with the position of the spin_lock().
      v4: Nesting finally resolved with distinct sw_fence lock classes.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: http://patchwork.freedesktop.org/patch/msgid/20161114204105.29171-4-chris@chris-wilson.co.uk
      d55ac5bf
  22. 04 Nov, 2016 (1 commit)
  23. 29 Oct, 2016 (2 commits)
  24. 26 Oct, 2016 (1 commit)
    • A
      drm/i915/guc: WA to address the Ringbuffer coherency issue · ed4596ea
      Akash Goel authored
      The driver accesses the ringbuffer pages via the GMADR BAR if the pages
      are pinned in the mappable-aperture portion of the GGTT; for ringbuffer
      pages allocated from stolen memory, access can only be done through the
      GMADR BAR. In the case of GuC-based submission, updates done to the
      ringbuffer via GMADR may not get committed to memory by the time the
      command streamer starts reading them, resulting in fetches of stale data.
      
      For host-based submission no such problem exists, as the write to the
      ring TAIL or ELSP register happens from the host side prior to
      submission. Access to any GFX register from the CPU side goes through
      the GTTMMADR BAR, and the HW already enforces ordering between
      outstanding GMADR writes and new GTTMMADR accesses. MMIO writes from
      the GuC side do not go through the GTTMMADR BAR, as GuC communication
      to registers within the GT is contained within the GT, so the ordering
      is not enforced, resulting in a race which can manifest as a hang.
      
      To ensure that in-flight GMADR writes are flushed, a posting read of a
      GuC register is done prior to ringing the doorbell.
      There is already a similar WA in i915_gem_object_flush_gtt_write_domain(),
      which takes care of GMADR writes from user space to GEM buffers, but not
      of the ringbuffer writes from the KMD.
      This WA is needed on all recent HW.
      
      v2:
      - Use POSTING_READ_FW instead of POSTING_READ as GuC register do not lie
        in any forcewake domain range and so the overhead of spinlock & search
        in the forcewake table is avoidable. (Chris)
      
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Sagar Arun Kamble <sagar.a.kamble@intel.com>
      Signed-off-by: Akash Goel <akash.goel@intel.com>
      Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Link: http://patchwork.freedesktop.org/patch/msgid/1477413323-1880-1-git-send-email-akash.goel@intel.com
      ed4596ea
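The ordering the WA enforces, a read over the same path to flush outstanding GMADR writes before the doorbell write, can be shown with a small op-log model. The `fake_` helpers are illustrative only; the real code issues a POSTING_READ_FW on a GuC register.

```c
#include <assert.h>
#include <string.h>

static const char *oplog[8];
static int nops;

static void op(const char *what)
{
    oplog[nops++] = what;
}

/* Ring tail update through GMADR: may still be in flight. */
static void fake_ring_emit(void)
{
    op("gmadr-write");
}

/* A read back over the same path forces the preceding writes to
 * complete before anything issued after it: the workaround. */
static void fake_posting_read(void)
{
    op("posting-read");
}

static void fake_ring_doorbell(void)
{
    op("doorbell");
}

static void submit_with_wa(void)
{
    fake_ring_emit();
    fake_posting_read();  /* flush GMADR writes before GuC can look */
    fake_ring_doorbell();
}
```

The point of the sketch is purely the sequencing: the flush must sit between the ringbuffer write and the doorbell, because the GuC may start the command streamer as soon as the doorbell rings.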
  25. 25 Oct, 2016 (1 commit)
    • A
      drm/i915: Mark the GuC log buffer flush interrupts handling WQ as freezable · 7ef54de7
      Akash Goel authored
      The GuC log buffer flush work item has to do a register access to send
      the ack to the GuC, and this work item, if not synced before suspend,
      can potentially get executed after the GFX device has been suspended.
      The work item function uses rpm get/put calls around the HW access,
      which covers the rpm-suspend case; but for system suspend a sync would
      be required, as the kernel can potentially schedule work items even
      after some devices, including GFX, have been put to suspend. The sync
      must be done only for the system-suspend case, as a sync combined with
      rpm get/put can cause a deadlock on the rpm-suspend path. To make the
      sync a no-op on the rpm-suspend path as well, this work item could have
      been queued from the irq handler only while the device is runtime
      active, keeping it active while the work item is pending or executing;
      but an interrupt can arrive even after the device is out of use, which
      could lead to the work item being missed.
      
      By marking the workqueue dedicated to handling GuC log buffer flush
      interrupts as freezable, we don't have to bother flushing this work
      item from the suspend hooks; any pending work item will either be
      executed before the suspend or scheduled later on resume. This way the
      handling of the log buffer flush work item can be kept the same between
      system suspend and rpm suspend.
      Suggested-by: Imre Deak <imre.deak@intel.com>
      Cc: Imre Deak <imre.deak@intel.com>
      Signed-off-by: Akash Goel <akash.goel@intel.com>
      Reviewed-by: Imre Deak <imre.deak@intel.com>
      Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      7ef54de7