1. 25 Mar 2021 (2 commits)
  2. 11 Jan 2021 (1 commit)
  3. 09 Jan 2021 (1 commit)
  4. 05 Jan 2021 (2 commits)
  5. 24 Dec 2020 (2 commits)
    • drm/i915/gt: Resubmit the virtual engine on schedule-out · f81475bb
      Chris Wilson authored
      Having recognised that we do not change the sibling until we schedule
      out, we can then defer the decision to resubmit the virtual engine from
      the unwind of the active queue to scheduling out of the virtual context.
      This improves our resilience in virtual engine scheduling, and should
      eliminate the rare cases of gem_exec_balance failing.
      
      By keeping the unwind order intact on the local engine, we can preserve
      data dependency ordering while doing a preempt-to-busy pass until we
      have determined the new ELSP. This means that if we try to timeslice
      between a virtual engine and a data-dependent ordinary request, the pair
      will maintain their relative ordering and we will avoid the
      resubmission, cancelling the timeslicing until further change.
      
      The dilemma though is that we then may end up in a situation where the
      'demotion' of the virtual request to an ordinary request in the engine
      queue results in filling the ELSP[] with virtual requests instead of
      spreading the load across the engines. To compensate for this, we mark
      each virtual request and refuse to resubmit a virtual request in the
      secondary ELSP slots, thus forcing subsequent virtual requests to be
      scheduled out after timeslicing. By delaying the decision until we
      schedule out, we will avoid unnecessary resubmission.
      
      Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2079
      Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2098
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Matthew Auld <matthew.auld@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20201224135544.1713-7-chris@chris-wilson.co.uk
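      A minimal user-space model of the decision described above may help;
      all identifiers here are illustrative stand-ins, not the actual i915
      structures. Virtual requests are marked when pulled onto a physical
      engine, and only the request occupying ELSP[0] is eligible for
      resubmission to its virtual engine at schedule-out.

      ```c
      /* Sketch only: models the "refuse secondary ELSP slots" rule. */
      #include <stdbool.h>
      #include <stdio.h>

      enum { ELSP_SLOTS = 2 };

      struct request {
          const char *name;
          bool is_virtual; /* marked when pulled from a virtual engine */
      };

      /* Decide at schedule-out whether to hand the request back to its
       * virtual engine (spreading load across siblings) or leave it
       * demoted on the local engine queue. */
      static bool resubmit_to_virtual(const struct request *rq, int slot)
      {
          return rq->is_virtual && slot == 0; /* never from ELSP[1] */
      }

      int main(void)
      {
          struct request elsp[ELSP_SLOTS] = {
              { "ve-rq0", true }, /* primary slot: may go back to the ve */
              { "ve-rq1", true }, /* secondary slot: stays demoted       */
          };

          for (int i = 0; i < ELSP_SLOTS; i++)
              printf("%s: %s\n", elsp[i].name,
                     resubmit_to_virtual(&elsp[i], i) ?
                     "resubmit to virtual engine" : "keep on local queue");
          return 0;
      }
      ```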
    • drm/i915/gt: Replace direct submit with direct call to tasklet · 16f2941a
      Chris Wilson authored
      Rather than having special case code for opportunistically calling
      process_csb() and performing a direct submit while holding the engine
      spinlock for submitting the request, simply call the tasklet directly.
      This allows us to retain the direct submission path, including the CS
      draining to allow fast/immediate submissions, without requiring any
      duplicated code paths, and most importantly greatly simplifying the
      control flow by removing reentrancy. This will enable us to close a few
      races in the virtual engines in the next few patches.
      
      The trickiest part here is to ensure that paired operations (such as
      schedule_in/schedule_out) remain under consistent locking domains,
      e.g. when pulled outside of the engine->active.lock.
      
      v2: Use bh kicking, see commit 3c53776e ("Mark HI and TASKLET
      softirq synchronous").
      v3: Update engine-reset to be tasklet aware
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20201224135544.1713-1-chris@chris-wilson.co.uk
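      The shape of the change can be sketched as follows, assuming kernel
      context: instead of open-coding process_csb() plus a direct submit
      under the engine spinlock, the submitter runs the tasklet callback
      inline under local_bh_disable(), falling back to scheduling the
      softirq if the tasklet is already running. This follows the pattern
      of the patch rather than reproducing the i915 code; direct_submit()
      is a hypothetical name.

      ```c
      #include <linux/interrupt.h>
      #include <linux/bottom_half.h>

      static void direct_submit(struct tasklet_struct *t)
      {
          local_bh_disable();          /* exclude the softirq instance    */
          if (tasklet_trylock(t)) {    /* we are the sole runner          */
              t->callback(t);          /* process_csb() + submit, inline  */
              tasklet_unlock(t);
          } else {
              tasklet_hi_schedule(t);  /* someone else owns it: defer     */
          }
          local_bh_enable();           /* runs any pending softirq now
                                        * (bh kicking, per 3c53776e)      */
      }
      ```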
  6. 22 Dec 2020 (2 commits)
  7. 21 Dec 2020 (2 commits)
  8. 10 Dec 2020 (1 commit)
  9. 07 Sep 2020 (3 commits)
  10. 09 Jul 2020 (1 commit)
  11. 18 Jun 2020 (1 commit)
  12. 17 Jun 2020 (5 commits)
  13. 16 Jun 2020 (3 commits)
  14. 13 Jun 2020 (1 commit)
  15. 11 Jun 2020 (1 commit)
  16. 04 Jun 2020 (1 commit)
  17. 28 May 2020 (1 commit)
  18. 21 May 2020 (1 commit)
  19. 20 May 2020 (1 commit)
  20. 19 May 2020 (3 commits)
  21. 18 May 2020 (1 commit)
  22. 07 May 2020 (1 commit)
    • drm/i915/gt: Yield the timeslice if caught waiting on a user semaphore · 220dcfc1
      Chris Wilson authored
      If we find ourselves waiting on a MI_SEMAPHORE_WAIT, either within the
      user batch or in our own preamble, the engine raises a
      GT_WAIT_ON_SEMAPHORE interrupt. We can unmask that interrupt and so
      respond to a semaphore wait by yielding the timeslice, if we have
      another context to yield to!
      
      The only real complication is that the interrupt is only generated for
      the start of the semaphore wait, and is asynchronous to our
      process_csb() -- that is, we may not have registered the timeslice before
      we see the interrupt. To ensure we don't miss a potential semaphore
      blocking forward progress (e.g. selftests/live_timeslice_preempt) we mark
      the interrupt and apply it to the next timeslice regardless of whether it
      was active at the time.
      
      v2: We use semaphores in preempt-to-busy, within the timeslicing
      implementation itself! Ergo, when we do insert a preemption due to an
      expired timeslice, the new context may start with the missed semaphore
      flagged by the retired context and be yielded, ad infinitum. To avoid
      this, read the context id at the time of the semaphore interrupt and
      only yield if that context is still active.
      
      Fixes: 8ee36e04 ("drm/i915/execlists: Minimalistic timeslicing")
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Cc: Kenneth Graunke <kenneth@whitecape.org>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20200407130811.17321-1-chris@chris-wilson.co.uk
      (cherry picked from commit c4e8ba73)
      (cherry picked from commit cd60e4ac4738a6921592c4f7baf87f9a3499f0e2)
      Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
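      A small model of the v2 fix, with hypothetical names: the interrupt
      handler latches the context id (CCID) of the blocked context, and the
      timeslice-expiry path yields only if that context is still the one
      executing, so a stale flag cannot yield the new context ad infinitum.

      ```c
      /* Illustrative sketch; field and function names are assumptions. */
      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      #define INVALID_CCID 0u

      struct execlists_sketch {
          uint32_t yield;       /* CCID latched by the irq handler      */
          uint32_t active_ccid; /* CCID currently running on the engine */
      };

      /* irq path: GT_WAIT_ON_SEMAPHORE fired for context @ccid */
      static void semaphore_irq(struct execlists_sketch *el, uint32_t ccid)
      {
          el->yield = ccid; /* remember who was caught waiting */
      }

      /* timeslice path: preempt early only for the still-active context */
      static bool timeslice_yield(const struct execlists_sketch *el)
      {
          return el->yield != INVALID_CCID && el->yield == el->active_ccid;
      }

      int main(void)
      {
          struct execlists_sketch el = { INVALID_CCID, 2 };

          semaphore_irq(&el, 2); /* blocked context is still active */
          printf("yield: %d\n", timeslice_yield(&el)); /* 1 */

          el.active_ccid = 3;    /* context switched before expiry  */
          printf("yield: %d\n", timeslice_yield(&el)); /* 0 */
          return 0;
      }
      ```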
  23. 06 May 2020 (1 commit)
  24. 04 May 2020 (1 commit)
  25. 30 Apr 2020 (1 commit)
    • drm/i915/gt: Keep a no-frills swappable copy of the default context state · be1cb55a
      Chris Wilson authored
      We need to keep the default context state around to instantiate new
      contexts (aka golden rendercontext), and we also keep it pinned while
      the engine is active so that we can quickly reset a hanging context.
      However, the default contexts are large enough to merit keeping in
      swappable memory as opposed to kernel memory, so we store them inside
      shmemfs. Currently, we use the normal GEM objects to create the default
      context image, but we can throw away all but the shmemfs file.
      
      This greatly simplifies the tricky power management code which wants to
      run underneath the normal GT locking, and we definitely do not want to
      use any high level objects that may appear to recurse back into the GT.
      Perhaps the primary advantage of the complex GEM object is that we
      aggressively cache the mapping, whereas here we recreate the vm_area
      every time we unpark. At worst, we can add a lightweight cache, but
      first we should find a microbenchmark that is impacted.
      
      Having started to create some utility functions to make working with
      shmemfs objects easier, we can start putting them to wider use, where
      GEM objects are overkill, such as storing persistent error state.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Matthew Auld <matthew.auld@intel.com>
      Cc: Ramalingam C <ramalingam.c@intel.com>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Matthew Auld <matthew.auld@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20200429172429.6054-1-chris@chris-wilson.co.uk
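      The shmemfs utility this commit describes can be sketched with the
      generic kernel helpers, assuming kernel context; stash_default_state()
      is a hypothetical name and the error handling is simplified relative
      to the real shmem_utils code.

      ```c
      /* Sketch: keep a blob in a swappable shmemfs file instead of a
       * pinned GEM object. */
      #include <linux/shmem_fs.h>
      #include <linux/fs.h>
      #include <linux/mm.h>
      #include <linux/err.h>

      static struct file *stash_default_state(const void *data, size_t len)
      {
          struct file *file;
          loff_t pos = 0;
          ssize_t ret;

          /* Anonymous shmem file: pageable and swappable, no GEM object */
          file = shmem_file_setup("default-context-state", len, VM_NORESERVE);
          if (IS_ERR(file))
              return file;

          ret = kernel_write(file, data, len, &pos); /* copy golden image */
          if (ret < 0 || (size_t)ret != len) {
              fput(file);
              return ERR_PTR(ret < 0 ? ret : -EIO);
          }
          return file;
      }
      ```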