1. 23 Jun 2015 (11 commits)
  2. 03 Jun 2015 (3 commits)
  3. 22 May 2015 (1 commit)
  4. 21 May 2015 (3 commits)
    • drm/i915: Implement inter-engine read-read optimisations · b4716185
      Authored by Chris Wilson
      Currently, we only track the last request globally across all engines.
      This prevents us from issuing concurrent read requests on e.g. the RCS
      and BCS engines (or more likely the render and media engines). Without
      semaphores, we incur costly stalls as we synchronise between rings -
      greatly impacting the current performance of Broadwell versus Haswell in
      certain workloads (like video decode). With the introduction of
      reference counted requests, it is much easier to track the last request
      per ring, as well as the last global write request so that we can
      optimise inter-engine read-read requests (as well as better optimise
      certain CPU waits).
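
      The shape of the change can be sketched in a few lines of C. This is a
      minimal illustrative model, not the actual i915 structures; the type
      and field names below are assumptions for illustration only. Each
      object keeps one last-read request per engine plus a single last-write
      request, so readers on different engines no longer stall on each other:

          #include <stdbool.h>

          #define NUM_ENGINES 5  /* e.g. render, blitter, 2x video, VEBOX */

          struct request {
                  int engine_id;          /* ring this request executes on */
          };

          struct object_tracking {
                  struct request *last_read[NUM_ENGINES]; /* per-engine readers */
                  unsigned active;        /* bitfield: one bit per busy engine */
                  struct request *last_write;     /* writes still serialise */
          };

          /* A read on 'engine' only waits for the last writer, and only if
           * that writer ran on a different engine; reads on other engines
           * proceed concurrently (the inter-engine read-read optimisation). */
          static bool read_needs_sync(const struct object_tracking *t, int engine)
          {
                  return t->last_write && t->last_write->engine_id != engine;
          }

          /* A write must still wait for every outstanding reader on any
           * other engine (and, not shown, for the previous writer). */
          static bool write_needs_sync(const struct object_tracking *t, int engine)
          {
                  int i;

                  for (i = 0; i < NUM_ENGINES; i++)
                          if (t->last_read[i] && i != engine)
                                  return true;
                  return false;
          }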
      
      v2: Fix inverted readonly condition for nonblocking waits.
      v3: Handle non-contiguous engine array after waits
      v4: Rebase, tidy, rewrite ring list debugging
      v5: Use obj->active as a bitfield, it looks cool
      v6: Micro-optimise, mostly involving moving code around
      v7: Fix retire-requests-upto for execlists (and multiple rq->ringbuf)
      v8: Rebase
      v9: Refactor i915_gem_object_sync() to allow the compiler to better
      optimise it.
      
      Benchmark: igt/gem_read_read_speed
      hsw:gt3e (with semaphores):
      Before: Time to read-read 1024k:		275.794µs
      After:  Time to read-read 1024k:		123.260µs
      
      hsw:gt3e (w/o semaphores):
      Before: Time to read-read 1024k:		230.433µs
      After:  Time to read-read 1024k:		124.593µs
      
      bdw-u (w/o semaphores):             Before          After
      Time to read-read 1x1:            26.274µs       10.350µs
      Time to read-read 128x128:        40.097µs       21.366µs
      Time to read-read 256x256:        77.087µs       42.608µs
      Time to read-read 512x512:       281.999µs      181.155µs
      Time to read-read 1024x1024:    1196.141µs     1118.223µs
      Time to read-read 2048x2048:    5639.072µs     5225.837µs
      Time to read-read 4096x4096:   22401.662µs    21137.067µs
      Time to read-read 8192x8192:   89617.735µs    85637.681µs
      
      Testcase: igt/gem_concurrent_blit (read-read and friends)
      Cc: Lionel Landwerlin <lionel.g.landwerlin@linux.intel.com>
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> [v8]
      [danvet: s/\<rq\>/req/g]
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
    • drm/i915/skl: enable WaForceContextSaveRestoreNonCoherent · 8ea6f892
      Authored by Imre Deak
      v2:
      - set the override disable flag too on stepping F0 (Mika)
      Signed-off-by: Imre Deak <imre.deak@intel.com>
      Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
    • drm/i915/bxt: fix WaForceContextSaveRestoreNonCoherent on steppings B0+ · 2a0ee94f
      Authored by Imre Deak
      On B0 and C0 steppings the workaround enable bit would be overridden by
      default, so the override itself must be disabled, as sketched below.
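
      A hedged sketch of what the fix plausibly looks like, using the i915
      workaround helpers of this era; the register and flag names here are
      assumptions based on that code, not quoted from the patch. The
      companion SKL patch above applies the same override-disable on
      stepping F0.

          static int bxt_init_workarounds(struct intel_engine_cs *ring)
          {
                  struct drm_device *dev = ring->dev;
                  struct drm_i915_private *dev_priv = dev->dev_private;

                  /* WaForceContextSaveRestoreNonCoherent:bxt */
                  WA_SET_BIT_MASKED(HDC_CHICKEN0,
                                    HDC_FORCE_CONTEXT_SAVE_RESTORE_NON_COHERENT |
                                    /* On B0+ the enable bit is overridden by
                                     * default, so disable the override too. */
                                    (INTEL_REVID(dev) < BXT_REVID_B0 ? 0 :
                                     HDC_FORCE_CSR_NON_COHERENT_OVR_DISABLE));

                  return 0;
          }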
      
      The WA was added in
      commit 83a24979
      Author: Nick Hoath <nicholas.hoath@intel.com>
      Date:   Fri Apr 10 13:12:26 2015 +0100
      
          drm/i915/bxt: Add WaForceContextSaveRestoreNonCoherent
      Spotted-by: Mika Kuoppala <mika.kuoppala@intel.com>
      Signed-off-by: Imre Deak <imre.deak@intel.com>
      Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
  5. 20 May 2015 (1 commit)
  6. 08 May 2015 (12 commits)
  7. 14 Apr 2015 (2 commits)
  8. 10 Apr 2015 (3 commits)
  9. 09 Apr 2015 (1 commit)
  10. 07 Apr 2015 (1 commit)
  11. 01 Apr 2015 (2 commits)
    • drm/i915: Fix for ringbuf space wait in LRC mode · dbe4646d
      Authored by John Harrison
      The legacy and LRC code paths have an almost identical procedure for waiting for
      space in the ring buffer. They both search for a request in the free list that
      will advance the tail to a point where sufficient space is available. They then
      wait for that request, retire it and recalculate the free space value.
      
      Unfortunately, a bug in the LRC side meant that the resulting free space
      might not be as large as expected, and indeed might not be sufficient.
      This is because it was testing against request->tail rather than
      request->postfix, whereas when a request is retired, ringbuf->tail is
      updated to req->postfix, not req->tail.
      
      Another significant difference between the two is that the LRC version
      did not trust the wait-for-request to work: it re-ran the
      is-there-enough-space test and failed the call if space was still
      insufficient. The legacy version, by contrast, simply returned 0,
      assuming the preceding code had worked. This difference meant that the
      LRC version still worked even with the bug; it just fell back to the
      polling wait path.
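
      A simplified sketch of the unified wait-for-space logic after the fix.
      This is hedged: the function shape and helper names follow the i915
      code of this period but are paraphrased here, not quoted from the
      patch.

          static int ring_wait_for_space(struct intel_ringbuffer *ringbuf,
                                         struct list_head *request_list,
                                         int bytes)
          {
                  struct drm_i915_gem_request *request;
                  int space;
                  int ret;

                  list_for_each_entry(request, request_list, list) {
                          /* The fix: retiring a request advances
                           * ringbuf->tail to request->postfix, so the space
                           * calculation must use postfix, not request->tail. */
                          space = __intel_ring_space(request->postfix,
                                                     ringbuf->tail,
                                                     ringbuf->size);
                          if (space >= bytes)
                                  break;
                  }

                  /* Ran off the end of the list: no request frees enough. */
                  if (&request->list == request_list)
                          return -ENOSPC;

                  ret = i915_wait_request(request);
                  if (ret)
                          return ret;

                  /* Retirement moves ringbuf->tail to the request's postfix,
                   * so the space computed above is now genuinely free; there
                   * is no need to re-test it. */
                  i915_gem_retire_requests_ring(request->ring);
                  ringbuf->space = space;
                  return 0;
          }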
      
      For: VIZ-5115
      Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
      Reviewed-by: Thomas Daniel <thomas.daniel@intel.com>
      Reviewed-by: Tomas Elf <tomas.elf@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
    • drm/i915: Move common request allocation code into a common function · 6689cb2b
      Authored by John Harrison
      The request allocation code is largely duplicated between legacy mode and
      execlist mode. The actual difference between the two versions of the code is
      pretty minimal.
      
      This patch moves the common code out into a separate function, which
      each execution-specific version then calls before setting up the one
      value that differs (see the sketch below).
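
      A hedged sketch of the resulting shape; the helper names follow the
      i915 code of this period but are paraphrased here, not quoted from the
      patch. The shared allocation steps live in one function, and only the
      mode-specific "extras" step differs:

          int i915_gem_request_alloc(struct intel_engine_cs *ring,
                                     struct intel_context *ctx)
          {
                  struct drm_i915_gem_request *request;
                  int ret;

                  /* Common steps: allocate, assign a seqno, take the initial
                   * reference, and bind the request to its engine. */
                  request = kzalloc(sizeof(*request), GFP_KERNEL);
                  if (request == NULL)
                          return -ENOMEM;

                  ret = i915_gem_get_seqno(ring->dev, &request->seqno);
                  if (ret) {
                          kfree(request);
                          return ret;
                  }

                  kref_init(&request->ref);
                  request->ring = ring;

                  /* The one execution-specific step: execlists attach the
                   * context's ringbuffer, legacy mode the engine's. */
                  if (i915.enable_execlists)
                          ret = intel_logical_ring_alloc_request_extras(request, ctx);
                  else
                          ret = intel_ring_alloc_request_extras(request);
                  if (ret) {
                          kfree(request);
                          return ret;
                  }

                  ring->outstanding_lazy_request = request;
                  return 0;
          }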
      
      For: VIZ-5190
      Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
      Reviewed-by: Tomas Elf <tomas.elf@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>