1. 21 Mar 2019, 2 commits
  2. 18 Mar 2019, 1 commit
  3. 09 Mar 2019, 1 commit
  4. 08 Mar 2019, 8 commits
  5. 06 Mar 2019, 3 commits
  6. 28 Feb 2019, 4 commits
  7. 22 Feb 2019, 1 commit
  8. 21 Feb 2019, 2 commits
  9. 13 Feb 2019, 2 commits
  10. 12 Feb 2019, 1 commit
  11. 09 Feb 2019, 1 commit
      drm/i915: Revoke mmaps and prevent access to fence registers across reset · 2caffbf1
      By Chris Wilson
      Previously, we were able to rely on the recursive properties of
      struct_mutex to allow us to serialise revoking mmaps and reacquiring the
      FENCE registers with them being clobbered over a global device reset.
      I then proceeded to throw out the baby with the bath water in order to
      pursue a struct_mutex-less reset.
      
      Perusing LWN for alternative strategies, I found the dilemma of how to
      serialise access to a global resource on one side answered by
      https://lwn.net/Articles/202847/ -- Sleepable RCU:
      
          int readside(void)
          {
              int idx;

              rcu_read_lock();
              if (nomoresrcu) {
                  rcu_read_unlock();
                  return -EINVAL;
              }
              idx = srcu_read_lock(&ss);
              rcu_read_unlock();
              /* SRCU read-side critical section. */
              srcu_read_unlock(&ss, idx);
              return 0;
          }

          void cleanup(void)
          {
              nomoresrcu = 1;
              synchronize_rcu();
              synchronize_srcu(&ss);
              cleanup_srcu_struct(&ss);
          }
      
      No more worrying about stop_machine, just an uber-complex mutex,
      optimised for reads, with the overhead pushed to the rare reset path.
      
      However, we do run the risk of a deadlock as we allocate underneath the
      SRCU read lock, and the allocation may require a GPU reset, causing a
      dependency cycle via the in-flight requests. We resolve that by declaring
      the driver wedged and cancelling all in-flight rendering.
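
      As an illustration only (the struct, helper names and RESET_BACKOFF bit
      below are hypothetical; the patch itself names its SRCU domain
      reset_backoff_srcu, see v5), a reset lock built on this pattern could
      pair the two sides like so: readers such as the fault handler take the
      SRCU read lock unless a reset is pending, while the reset path flips
      the flag, revokes the mmaps, then drains all readers before touching
      the fence registers.

          #include <linux/bitops.h>
          #include <linux/rcupdate.h>
          #include <linux/srcu.h>
          #include <linux/wait.h>

          struct reset_lock {
              unsigned long flags;              /* bit 0: a reset is in progress */
              struct srcu_struct backoff_srcu;  /* init with init_srcu_struct() */
              wait_queue_head_t reset_queue;    /* init with init_waitqueue_head() */
          };
          #define RESET_BACKOFF 0

          /* Read side: returns an SRCU tag for reset_unlock(), or -EINTR. */
          static int reset_trylock(struct reset_lock *rl)
          {
              int tag;

              rcu_read_lock();
              while (test_bit(RESET_BACKOFF, &rl->flags)) {
                  rcu_read_unlock();
                  if (wait_event_interruptible(rl->reset_queue,
                                               !test_bit(RESET_BACKOFF, &rl->flags)))
                      return -EINTR;
                  rcu_read_lock();
              }
              tag = srcu_read_lock(&rl->backoff_srcu);
              rcu_read_unlock();

              return tag;
          }

          static void reset_unlock(struct reset_lock *rl, int tag)
          {
              srcu_read_unlock(&rl->backoff_srcu, tag);
          }

          /* Reset side: shut out new readers, then wait for the old ones. */
          static void reset_quiesce(struct reset_lock *rl)
          {
              set_bit(RESET_BACKOFF, &rl->flags);
              synchronize_rcu_expedited();   /* every reader now sees the flag */
              synchronize_srcu_expedited(&rl->backoff_srcu); /* drain read sections */
          }

          static void reset_finish(struct reset_lock *rl)
          {
              clear_bit(RESET_BACKOFF, &rl->flags);
              wake_up_all(&rl->reset_queue);
          }

      Only the reset path pays for the synchronize_*() calls; the read side
      costs little more than srcu_read_lock(), which is what pushes the
      overhead onto the rare reset.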
      
      v2: Use expedited rcu barriers to match our earlier timing
      characteristics.
      v3: Try to annotate locking contexts for sparse
      v4: Reduce selftest lock duration to avoid a reset deadlock with fences
      v5: s/srcu/reset_backoff_srcu/
      v6: Remove more stale comments
      
      Testcase: igt/gem_mmap_gtt/hang
      Fixes: eb8d0f5a ("drm/i915: Remove GPU reset dependence on struct_mutex")
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Mika Kuoppala <mika.kuoppala@intel.com>
      Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190208153708.20023-2-chris@chris-wilson.co.uk
  12. 07 Feb 2019, 2 commits
  13. 06 Feb 2019, 1 commit
  14. 30 Jan 2019, 1 commit
      drm/i915: Identify active requests · 85474441
      By Chris Wilson
      To allow requests to forgo a common execution timeline, one question we
      need to be able to answer is "is this request running?". To track
      whether a request has started on HW, we can emit a breadcrumb at the
      beginning of the request and check its timeline's HWSP to see if the
      breadcrumb has advanced past the start of this request. (This is in
      contrast to the global timeline where we need only ask if we are on the
      global timeline and if the timeline has advanced past the end of the
      previous request.)
      
      There is still confusion from a preempted request, which has already
      started but relinquished the HW to a high priority request. For the
      common case, this discrepancy should be negligible. However, for
      identification of hung requests, knowing which one was running at the
      time of the hang will be much more important.
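
      As a sketch only (hypothetical names, not the driver's own helpers),
      the started/completed test amounts to a wrap-safe comparison between
      the seqno last written to this timeline's HWSP slot and the breadcrumb
      value emitted at the head (or tail) of the request:

          #include <stdbool.h>
          #include <stdint.h>

          /* Wrap-safe comparison: has seqno a advanced at least as far as b? */
          static inline bool seqno_passed(uint32_t a, uint32_t b)
          {
              return (int32_t)(a - b) >= 0;
          }

          struct sketch_request {
              const volatile uint32_t *hwsp; /* this timeline's HW status page slot */
              uint32_t head_seqno;           /* breadcrumb emitted at the request's start */
              uint32_t tail_seqno;           /* breadcrumb emitted at the request's end */
          };

          /*
           * Has the HW begun executing this request?  Note that "started"
           * does not imply "still running": a preempted request has started
           * but has since relinquished the engine.
           */
          static inline bool request_started(const struct sketch_request *rq)
          {
              return seqno_passed(*rq->hwsp, rq->head_seqno);
          }

          static inline bool request_completed(const struct sketch_request *rq)
          {
              return seqno_passed(*rq->hwsp, rq->tail_seqno);
          }
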
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190129185452.20989-2-chris@chris-wilson.co.uk
  15. 29 Jan 2019, 6 commits
  16. 25 Jan 2019, 1 commit
  17. 24 Jan 2019, 1 commit
  18. 17 Jan 2019, 2 commits