1. 23 Jun, 2015 (23 commits)
  2. 22 Jun, 2015 (1 commit)
  3. 15 Jun, 2015 (1 commit)
  4. 27 May, 2015 (1 commit)
    • drm/i915: Use spinlocks for checking when to waitboost · 8d3afd7d
      Chris Wilson committed
      In commit 1854d5ca
      Author: Chris Wilson <chris@chris-wilson.co.uk>
      Date:   Tue Apr 7 16:20:32 2015 +0100
      
          drm/i915: Deminish contribution of wait-boosting from clients
      
      we removed an atomic, timer-based check for allowing waitboosting and
      moved it below the mutex taken during RPS. However, that mutex can be
      held for long periods of time on Valleyview/Cherryview as communication
      with the PCU is slow. As clients may frequently wait for results (e.g.
      transform feedback), we introduced contention between the client
      and the RPS worker. We can take advantage of the RPS worker by
      switching the wait-boost decision to use spinlocks and deferring the
      actual reclocking to the worker.

      Fixes a regression of up to 45% on Baytrail and Braswell!
      
      v2 (Daniel):
      - Use max_freq_softlimit instead of the not-yet-merged boost
        frequency.
      - Don't inject a fake irq into the boost work, instead treat
        client_boost as just another legit waker.
      
      v3: Drop the now unused mask (Chris).
      
      Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=90112
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> (v1)
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      8d3afd7d
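
      The locking split described in this commit can be pictured with a small
      userspace sketch (not the i915 code itself): the boost decision is taken
      under a cheap spinlock, and only a worker thread takes the long-held
      mutex that stands in for the slow PCU communication. The names
      (rps_state, request_boost, rps_worker) and the pthread plumbing are
      illustrative assumptions.

      /*
       * Sketch only: boost decision under a spinlock, slow reclock deferred
       * to a worker that alone takes the long-held mutex.
       */
      #include <pthread.h>
      #include <stdbool.h>
      #include <stdio.h>
      #include <unistd.h>

      struct rps_state {
          pthread_spinlock_t lock;   /* guards boost bookkeeping only      */
          pthread_mutex_t hw_lock;   /* guards the slow "PCU" reclocking   */
          pthread_mutex_t wake_lock; /* protects the worker wakeup flag    */
          pthread_cond_t wake;
          bool work_pending;
          bool client_boost;
          int cur_freq;
          int max_freq_softlimit;
      };

      /* Fast path, called by a waiting client: never touches hw_lock. */
      static void request_boost(struct rps_state *rps)
      {
          pthread_spin_lock(&rps->lock);
          rps->client_boost = true;      /* just another legitimate waker */
          pthread_spin_unlock(&rps->lock);

          pthread_mutex_lock(&rps->wake_lock);
          rps->work_pending = true;
          pthread_cond_signal(&rps->wake);
          pthread_mutex_unlock(&rps->wake_lock);
      }

      /* Slow path: only the worker pays for the mutex-protected reclock. */
      static void *rps_worker(void *arg)
      {
          struct rps_state *rps = arg;

          for (;;) {
              bool boost;

              pthread_mutex_lock(&rps->wake_lock);
              while (!rps->work_pending)
                  pthread_cond_wait(&rps->wake, &rps->wake_lock);
              rps->work_pending = false;
              pthread_mutex_unlock(&rps->wake_lock);

              pthread_spin_lock(&rps->lock);
              boost = rps->client_boost;
              rps->client_boost = false;
              pthread_spin_unlock(&rps->lock);

              if (boost) {
                  pthread_mutex_lock(&rps->hw_lock);
                  rps->cur_freq = rps->max_freq_softlimit;
                  usleep(1000);  /* stand-in for slow PCU communication */
                  pthread_mutex_unlock(&rps->hw_lock);
              }
          }
          return NULL;
      }

      int main(void)
      {
          struct rps_state rps = { .max_freq_softlimit = 1000 };
          pthread_t worker;

          pthread_spin_init(&rps.lock, PTHREAD_PROCESS_PRIVATE);
          pthread_mutex_init(&rps.hw_lock, NULL);
          pthread_mutex_init(&rps.wake_lock, NULL);
          pthread_cond_init(&rps.wake, NULL);
          pthread_create(&worker, NULL, rps_worker, &rps);

          request_boost(&rps);           /* a client asking to be boosted */
          sleep(1);

          pthread_mutex_lock(&rps.hw_lock);
          printf("cur_freq = %d\n", rps.cur_freq);
          pthread_mutex_unlock(&rps.hw_lock);
          return 0;
      }

      Built with cc -pthread, the sketch prints the boosted frequency; the
      point is that request_boost() never takes hw_lock, so waiting clients
      cannot contend with a long PCU transaction.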
  5. 22 May, 2015 (3 commits)
  6. 21 May, 2015 (4 commits)
    • drm/i915: Convert RPS tracking to an intel_rps_client struct · 2e1b8730
      Chris Wilson committed
      Now that we have internal clients, rather than faking a whole
      drm_i915_file_private just for tracking RPS boosts, create a new struct
      intel_rps_client and pass it along when waiting.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      [danvet: s/rq/req/]
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      2e1b8730
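
      A minimal sketch of the idea, with assumed field names (the driver's
      intel_rps_client may differ): boost bookkeeping lives in a small struct
      that both file-backed and internal clients can hand to the wait path,
      with NULL meaning the waiter never boosts.

      #include <stdbool.h>
      #include <stdio.h>

      /* Stand-in for a reference-counted GPU request. */
      struct request {
          bool completed;
      };

      /* Small, dedicated boost-tracking state owned by each client;
       * the field name is an assumption for illustration. */
      struct intel_rps_client {
          int boosts;             /* boosts issued on behalf of this client */
      };

      /* The client is passed along when waiting; NULL means "never boost". */
      static void wait_request(struct request *req, struct intel_rps_client *rps)
      {
          if (rps && !req->completed)
              rps->boosts++;      /* charge the boost to this client */
          /* ... block until req->completed ... */
      }

      int main(void)
      {
          struct request req = { .completed = false };
          struct intel_rps_client client = { .boosts = 0 };

          wait_request(&req, &client);   /* internal client: may boost     */
          wait_request(&req, NULL);      /* waiter that must never boost   */
          printf("boosts charged to client: %d\n", client.boosts);   /* 1 */
          return 0;
      }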
    • drm/i915: Limit ring synchronisation (sw semaphores) RPS boosts · a6f766f3
      Chris Wilson committed
      Ring switches can occur many times per frame, and are often out of
      control, causing frequent RPS boosting for no practical benefit. Treat
      the sw semaphore synchronisation as a separate client and only allow it
      to boost once per busy/idle cycle.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      [danvet: s/rq/req/]
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      a6f766f3
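
      A short sketch of the once-per-busy/idle-cycle gating described above;
      semaphore_wait_boost and gpu_idle are assumed names, not driver
      functions.

      #include <stdio.h>

      /* One dedicated pseudo-client for ring-to-ring (sw semaphore) waits. */
      static struct { int boosts; } semaphore_client;

      /* Called at an inter-ring synchronisation point. */
      static void semaphore_wait_boost(void)
      {
          if (semaphore_client.boosts == 0) {
              semaphore_client.boosts++;  /* first sync this cycle may boost */
              printf("RPS boost requested\n");
          }
          /* further ring switches in this busy cycle do not boost */
      }

      /* Called on the busy -> idle transition; re-arms the boost budget. */
      static void gpu_idle(void)
      {
          semaphore_client.boosts = 0;
      }

      int main(void)
      {
          semaphore_wait_boost();   /* boosts                          */
          semaphore_wait_boost();   /* ignored                         */
          gpu_idle();
          semaphore_wait_boost();   /* boosts again in the next cycle  */
          return 0;
      }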
    • drm/i915: Implement inter-engine read-read optimisations · b4716185
      Chris Wilson committed
      Currently, we only track the last request globally across all engines.
      This prevents us from issuing concurrent read requests on e.g. the RCS
      and BCS engines (or more likely the render and media engines). Without
      semaphores, we incur costly stalls as we synchronise between rings -
      greatly impacting the current performance of Broadwell versus Haswell in
      certain workloads (like video decode). With the introduction of
      reference counted requests, it is much easier to track the last request
      per ring, as well as the last global write request so that we can
      optimise inter-engine read-read requests (as well as better optimise
      certain CPU waits).
      
      v2: Fix inverted readonly condition for nonblocking waits.
      v3: Handle non-contiguous engine array after waits
      v4: Rebase, tidy, rewrite ring list debugging
      v5: Use obj->active as a bitfield, it looks cool
      v6: Micro-optimise, mostly involving moving code around
      v7: Fix retire-requests-upto for execlists (and multiple rq->ringbuf)
      v8: Rebase
      v9: Refactor i915_gem_object_sync() to allow the compiler to better
      optimise it.
      
      Benchmark: igt/gem_read_read_speed
      hsw:gt3e (with semaphores):
      Before: Time to read-read 1024k:		275.794µs
      After:  Time to read-read 1024k:		123.260µs
      
      hsw:gt3e (w/o semaphores):
      Before: Time to read-read 1024k:		230.433µs
      After:  Time to read-read 1024k:		124.593µs
      
      bdw-u (w/o semaphores):             Before          After
      Time to read-read 1x1:            26.274µs       10.350µs
      Time to read-read 128x128:        40.097µs       21.366µs
      Time to read-read 256x256:        77.087µs       42.608µs
      Time to read-read 512x512:       281.999µs      181.155µs
      Time to read-read 1024x1024:    1196.141µs     1118.223µs
      Time to read-read 2048x2048:    5639.072µs     5225.837µs
      Time to read-read 4096x4096:   22401.662µs    21137.067µs
      Time to read-read 8192x8192:   89617.735µs    85637.681µs
      
      Testcase: igt/gem_concurrent_blit (read-read and friends)
      Cc: Lionel Landwerlin <lionel.g.landwerlin@linux.intel.com>
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> [v8]
      [danvet: s/\<rq\>/req/g]
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      b4716185
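
      The per-engine bookkeeping can be sketched as follows; the engine
      count, struct layout and helpers are assumptions, not the driver's
      structures. The key point is that an object keeps one last read
      request per engine plus a single last write, so reads issued on
      different engines no longer serialise on each other, while a write
      still waits for everything outstanding.

      #include <stdbool.h>
      #include <stddef.h>

      #define NUM_ENGINES 4         /* e.g. render, blitter, video engines */

      struct request {
          int engine;
          bool completed;
      };

      struct gem_object {
          /* one outstanding read per engine instead of one global request */
          struct request *last_read_req[NUM_ENGINES];
          /* writes still serialise, so a single write request is enough */
          struct request *last_write_req;
          unsigned active;          /* bitfield: one bit per busy engine */
      };

      /* Record a read: only the per-engine slot and its activity bit change. */
      static void mark_read(struct gem_object *obj, struct request *req)
      {
          obj->last_read_req[req->engine] = req;
          obj->active |= 1u << req->engine;
      }

      /* A new read only needs to order against the last write; reads already
       * queued on other engines can run concurrently. */
      static struct request *read_dependency(struct gem_object *obj)
      {
          return obj->last_write_req;
      }

      /* A new write must order against every outstanding read and the last
       * write, so it walks all engines. */
      static void write_dependencies(struct gem_object *obj,
                                     void (*wait_for)(struct request *))
      {
          for (int i = 0; i < NUM_ENGINES; i++)
              if (obj->last_read_req[i] && !obj->last_read_req[i]->completed)
                  wait_for(obj->last_read_req[i]);
          if (obj->last_write_req && !obj->last_write_req->completed)
              wait_for(obj->last_write_req);
      }

      /* Stand-in for actually waiting: just mark the request done. */
      static void wait_for_req(struct request *req)
      {
          req->completed = true;
      }

      int main(void)
      {
          struct request render_read = { .engine = 0 };
          struct request video_read = { .engine = 2 };
          struct gem_object obj = { 0 };

          mark_read(&obj, &render_read);
          mark_read(&obj, &video_read);  /* concurrent read, other engine */

          /* A further read waits only on writes; none yet, so no stall. */
          struct request *dep = read_dependency(&obj);

          write_dependencies(&obj, wait_for_req); /* a write waits on both */
          return dep == NULL ? 0 : 1;
      }

      This no-stall behaviour for reads on different engines is what the
      gem_read_read_speed numbers above are measuring.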
    • drm/i915: s/\<rq\>/req/g · eed29a5b
      Daniel Vetter committed
      The merged seqno->request conversion from John called request
      variables req, but some (not all) of Chris' recent patches changed
      those to just rq. We've had a lengthy (and inconclusive) discussion on
      irc about which is the more meaningful name, with maybe at most a
      slight bias towards req.

      Since the "don't change names without good reason, to avoid
      conflicts" rule applies, let's go back to req everywhere for
      consistency. I'll sed any patches for which this will cause conflicts
      before applying.
      
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: John Harrison <John.C.Harrison@Intel.com>
      [danvet: s/origina/merged/ as pointed out by Chris - the first
      mass-conversion patch was from Chris, the merged one from John.]
      Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
      eed29a5b
  7. 20 May, 2015 (2 commits)
  8. 08 May, 2015 (5 commits)