1. 22 Oct, 2019 1 commit
  2. 18 Oct, 2019 2 commits
  3. 09 Oct, 2019 1 commit
  4. 08 Oct, 2019 1 commit
  5. 04 Oct, 2019 6 commits
  6. 28 Sep, 2019 1 commit
  7. 07 Sep, 2019 1 commit
  8. 09 Aug, 2019 1 commit
  9. 13 Jul, 2019 1 commit
  10. 21 Jun, 2019 4 commits
  11. 15 Jun, 2019 1 commit
    • drm/i915: Keep contexts pinned until after the next kernel context switch · ce476c80
      Committed by Chris Wilson
      We need to keep the context image pinned in memory until after the GPU
      has finished writing into it. Since it continues to write as we signal
      the final breadcrumb, we need to keep it pinned until the request after
      it is complete. Currently we rely on knowing the order in which requests
      execute on each engine; to remove that presumption, we need to identify
      a request/context-switch that we know must occur after our completion.
      Any request queued after the signal must imply a context switch; for
      simplicity, we use a fresh request from the kernel context.
      
      The sequence of operations for keeping the context pinned until saved is:
      
       - On context activation, we preallocate a node for each physical engine
         the context may operate on. This is to avoid allocations during
         unpinning, which may be from inside FS_RECLAIM context (aka the
         shrinker)
      
       - On context deactivation, i.e. on retirement of its last active request
         (which happens before we know the context image has been saved), we
         add the preallocated node onto a barrier list on each engine
      
       - On engine idling, we emit a switch to kernel context. When this
         switch completes, we know that all previous contexts must have been
         saved, and so on retiring this request we can finally unpin all the
         contexts that were marked as deactivated prior to the switch.
      
      We can enhance this in future by flushing all the idle contexts on a
      regular heartbeat pulse of a switch to kernel context, which will also
      be used to check for hung engines.
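      
      Below is a minimal userspace sketch of that barrier scheme. Every name
      in it (barrier_node, context_activate, engine_retire_kernel_context,
      and so on) is illustrative and heavily simplified, not the driver's
      actual API:
      
          #include <stdio.h>
          #include <stdlib.h>
          
          /* Illustrative, simplified types -- not the i915 driver's real
           * structures. */
          struct context;
          
          struct barrier_node {
              struct barrier_node *next;
              struct context *ctx;
          };
          
          struct engine {
              struct barrier_node *barriers; /* contexts awaiting the switch */
          };
          
          struct context {
              int pin_count;
              struct barrier_node *node; /* preallocated at activation */
          };
          
          /* On activation: preallocate the barrier node so that deactivation,
           * which may run under FS_RECLAIM (the shrinker), never allocates. */
          static int context_activate(struct context *ctx)
          {
              ctx->node = malloc(sizeof(*ctx->node));
              if (!ctx->node)
                  return -1;
              ctx->node->ctx = ctx;
              ctx->pin_count++;
              return 0;
          }
          
          /* On retirement of the last active request: the GPU may still be
           * writing the context image, so queue the preallocated node rather
           * than unpinning immediately. */
          static void context_deactivate(struct context *ctx,
                                         struct engine *engine)
          {
              ctx->node->next = engine->barriers;
              engine->barriers = ctx->node;
              ctx->node = NULL;
          }
          
          /* On retiring the switch to the kernel context: every context
           * queued before the switch must have been saved, so it is now
           * safe to unpin. */
          static void engine_retire_kernel_context(struct engine *engine)
          {
              while (engine->barriers) {
                  struct barrier_node *node = engine->barriers;
          
                  engine->barriers = node->next;
                  node->ctx->pin_count--; /* image may now be evicted */
                  free(node);
              }
          }
          
          int main(void)
          {
              struct engine engine = { .barriers = NULL };
              struct context ctx = { .pin_count = 0, .node = NULL };
          
              context_activate(&ctx);                /* pinned */
              context_deactivate(&ctx, &engine);     /* retired, still pinned */
              engine_retire_kernel_context(&engine); /* switch done: unpinned */
              printf("pin_count = %d\n", ctx.pin_count); /* prints 0 */
              return 0;
          }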
      
      v2: intel_context_active_acquire/_release
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190614164606.15633-1-chris@chris-wilson.co.uk
  12. 14 Jun, 2019 3 commits
  13. 12 Jun, 2019 1 commit
  14. 06 Jun, 2019 1 commit
  15. 28 May, 2019 1 commit
  16. 08 May, 2019 2 commits
  17. 27 Apr, 2019 1 commit
  18. 25 Apr, 2019 3 commits
  19. 21 Mar, 2019 1 commit
  20. 08 Mar, 2019 1 commit
  21. 06 Mar, 2019 1 commit
  22. 28 Feb, 2019 2 commits
  23. 09 Feb, 2019 1 commit
    • drm/i915: Revoke mmaps and prevent access to fence registers across reset · 2caffbf1
      Committed by Chris Wilson
      Previously, we were able to rely on the recursive properties of
      struct_mutex to allow us to serialise revoking mmaps and reacquiring the
      FENCE registers with them being clobbered over a global device reset.
      I then proceeded to throw out the baby with the bath water in order to
      pursue a struct_mutex-less reset.
      
      Perusing LWN for alternative strategies, I found the dilemma of how to
      serialise access to a global resource answered by
      https://lwn.net/Articles/202847/ -- Sleepable RCU:
      
          int readside(void)
          {
              int idx;
          
              rcu_read_lock();
              if (nomoresrcu) {
                  rcu_read_unlock();
                  return -EINVAL;
              }
              idx = srcu_read_lock(&ss);
              rcu_read_unlock();
              /* SRCU read-side critical section. */
              srcu_read_unlock(&ss, idx);
              return 0;
          }
          
          void cleanup(void)
          {
              nomoresrcu = 1;
              synchronize_rcu();
              synchronize_srcu(&ss);
              cleanup_srcu_struct(&ss);
          }
      
      No more worrying about stop_machine, just an uber-complex mutex,
      optimised for reads, with the overhead pushed to the rare reset path.
      
      However, we do run the risk of a deadlock as we allocate underneath the
      SRCU read lock, and the allocation may require a GPU reset, causing a
      dependency cycle via the in-flight requests. We resolve that by declaring
      the driver wedged and cancelling all in-flight rendering.
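      
      As a rough kernel-style sketch of how that pattern maps onto the reset
      path (the reset_backoff_srcu name follows the v5 note below, but the
      reset_state container and the trylock/unlock/prepare/finish helpers
      are illustrative, not the driver's verbatim code):
      
          #include <linux/kernel.h>
          #include <linux/srcu.h>
          #include <linux/wait.h>
          #include <linux/bitops.h>
          
          #define RESET_BACKOFF 0
          
          /* Illustrative container; assumes init_waitqueue_head() and
           * init_srcu_struct() are called at setup. */
          struct reset_state {
              unsigned long flags;
              wait_queue_head_t queue;
              struct srcu_struct reset_backoff_srcu;
          };
          
          /* Read side: wrapped around access to the fence registers and
           * GGTT mmaps that a reset would clobber; cheap in the common
           * case. */
          static int reset_trylock(struct reset_state *r)
          {
              might_sleep();
              if (wait_event_interruptible(r->queue,
                                           !test_bit(RESET_BACKOFF,
                                                     &r->flags)))
                  return -EINTR;
              return srcu_read_lock(&r->reset_backoff_srcu);
          }
          
          static void reset_unlock(struct reset_state *r, int tag)
          {
              srcu_read_unlock(&r->reset_backoff_srcu, tag);
          }
          
          /* Write side (the rare reset path): raise the backoff flag, then
           * wait for all read-side critical sections to drain before
           * revoking mmaps and clobbering the fence registers. Expedited,
           * to keep the reset latency close to the old timing. */
          static void reset_prepare(struct reset_state *r)
          {
              set_bit(RESET_BACKOFF, &r->flags);
              synchronize_srcu_expedited(&r->reset_backoff_srcu);
          }
          
          static void reset_finish(struct reset_state *r)
          {
              clear_bit(RESET_BACKOFF, &r->flags);
              wake_up_all(&r->queue);
          }
      
      Under a scheme like this, a fault handler would take reset_trylock()
      before touching a fence register and reset_unlock() afterwards, so
      the synchronisation cost lands on the reset path while readers stay
      near-free.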
      
      v2: Use expedited rcu barriers to match our earlier timing
      characteristics.
      v3: Try to annotate locking contexts for sparse
      v4: Reduce selftest lock duration to avoid a reset deadlock with fences
      v5: s/srcu/reset_backoff_srcu/
      v6: Remove more stale comments
      
      Testcase: igt/gem_mmap_gtt/hang
      Fixes: eb8d0f5a ("drm/i915: Remove GPU reset dependence on struct_mutex")
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Mika Kuoppala <mika.kuoppala@intel.com>
      Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190208153708.20023-2-chris@chris-wilson.co.uk
  24. 29 Jan, 2019 1 commit
  25. 25 Jan, 2019 1 commit