1. 30 May 2015 (1 commit)
  2. 29 May 2015 (2 commits)
    • drm/i915: Another fbdev hack to avoid PSR on fbcon. · d9a946b5
      Authored by Rodrigo Vivi
      With the unified modeset and flip paths introduced recently, PSR was
      being disabled on the fb_set_par path when switching to fbcon, but
      re-enabled on the fb_pan_display one, causing missed screen updates
      and an unusable console.
      
      Regression introduced with:
      
      commit bb546623
      Author: Ander Conselvan de Oliveira <ander.conselvan.de.oliveira@intel.com>
      Date:   Tue Apr 21 17:13:13 2015 +0300
      
          drm/i915: Unify modeset and flip paths of intel_crtc_set_config()
      
      Cc: Ander Conselvan de Oliveira <ander.conselvan.de.oliveira@intel.com>
      Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
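
      A minimal sketch of the shape such a hack could take, assuming the
      intel_fb_obj_invalidate() frontbuffer-tracking helper already used on
      the fb_set_par path; the exact call site and arguments in the actual
      patch may differ:

          static int intel_fbdev_pan_display(struct fb_var_screeninfo *var,
                                             struct fb_info *info)
          {
                  struct drm_fb_helper *fb_helper = info->par;
                  struct intel_fbdev *ifbdev =
                          container_of(fb_helper, struct intel_fbdev, helper);
                  int ret;

                  ret = drm_fb_helper_pan_display(var, info);

                  /* Mirror the fb_set_par hack: invalidate the frontbuffer
                   * on pan as well, so PSR is not silently re-armed behind
                   * fbcon's back. */
                  if (ret == 0)
                          intel_fb_obj_invalidate(ifbdev->fb->obj, NULL,
                                                  ORIGIN_GTT);

                  return ret;
          }
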
    • drm/i915: Return the frontbuffer flip to enable intel_crtc_enable_planes. · 2d847d45
      Authored by Rodrigo Vivi
      Without this frontbuffer flip when enabling planes, PSR was
      compromised: it was never enabled, waiting forever on a flush that
      never arrived.
      
      Another solution would be to create an enable_cursor function and
      split this frontbuffer flip among the different plane enable and
      disable functions. If necessary that can be done as follow-up work;
      for now let's just fix the regression.
      
      It was removed by:
      
      commit 87d4300a
      Author: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
      Date:   Tue Apr 21 17:12:54 2015 +0300
      
          drm/i915: Move intel_(pre_disable/post_enable)_primary to intel_display.c, and use it there.
      
      Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
      Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
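
      A minimal sketch of the restored flip at the end of
      intel_crtc_enable_planes(), assuming the pre-87d4300a call site and
      the frontbuffer-tracking API of this period:

          static void intel_crtc_enable_planes(struct drm_crtc *crtc)
          {
                  struct drm_device *dev = crtc->dev;
                  struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
                  int pipe = intel_crtc->pipe;

                  intel_enable_primary_hw_plane(crtc->primary, crtc);
                  intel_enable_sprite_planes(crtc);
                  intel_crtc_update_cursor(crtc, true);

                  /* Flag a flip on every frontbuffer slot of this pipe:
                   * this flushes the frontbuffer-tracking consumers (PSR,
                   * FBC, DRRS) and lets PSR arm itself again. */
                  intel_frontbuffer_flip(dev, INTEL_FRONTBUFFER_ALL_MASK(pipe));
          }
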
  3. 28 May 2015 (7 commits)
  4. 27 May 2015 (2 commits)
  5. 26 May 2015 (2 commits)
  6. 23 May 2015 (1 commit)
  7. 22 May 2015 (19 commits)
  8. 21 May 2015 (6 commits)
    • drm/i915: Don't downclock whilst we have clients waiting for GPU results · f5a4c67d
      Authored by Chris Wilson
      If we have clients stalled waiting for requests, ignore the GPU when
      it signals that it should downclock due to low load. This helps
      prevent the automatic timeout from making extremely long-running
      batches take even longer.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
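
      A minimal sketch of the idea, assuming a helper that scans the
      engines for waiters from the RPS work handler; the structure of the
      actual patch may differ:

          static bool any_waiters(struct drm_i915_private *dev_priv)
          {
                  struct intel_engine_cs *ring;
                  int i;

                  /* A nonzero irq refcount means at least one client is
                   * blocked waiting on a request from this ring. */
                  for_each_ring(ring, dev_priv, i)
                          if (ring->irq_refcount)
                                  return true;

                  return false;
          }

          /* In gen6_pm_rps_work(): ignore a down-threshold interrupt
           * while clients are still stalled waiting for GPU results. */
          if (adj < 0 && any_waiters(dev_priv))
                  adj = 0;
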
    • drm/i915: Convert RPS tracking to an intel_rps_client struct · 2e1b8730
      Authored by Chris Wilson
      Now that we have internal clients, rather than faking a whole
      drm_i915_file_private just for tracking RPS boosts, create a new struct
      intel_rps_client and pass it along when waiting.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      [danvet: s/rq/req/]
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
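
      A minimal sketch of the new struct and how it threads through the
      wait path; the field names and the __i915_wait_request() signature
      shown here are assumptions based on this series:

          struct intel_rps_client {
                  struct list_head link;  /* node on dev_priv->rps.clients */
                  unsigned boosts;        /* boosts granted to this client */
          };

          /* Waiters now pass their RPS client along instead of a faked
           * drm_i915_file_private: */
          int __i915_wait_request(struct drm_i915_gem_request *req,
                                  unsigned reset_counter,
                                  bool interruptible,
                                  s64 *timeout,
                                  struct intel_rps_client *rps);
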
    • drm/i915: Limit mmio flip RPS boosts · bcafc4e3
      Authored by Chris Wilson
      Since we will often pageflip to an active surface, we will often have to
      wait for the surface to be written before issuing the flip. Also we are
      likely to wait on that surface in plenty of time before the vblank.
      Since we have a mechanism for boosting when a flip misses the
      expected vblank, curtail the number of times we RPS boost when
      simply waiting for an mmio flip.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      [danvet: s/rq/req/]
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
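
      A minimal sketch, assuming the intel_rps_client machinery from the
      patch above: all mmio flip waits are attributed to one shared
      client, so the class as a whole boosts at most once per busy/idle
      cycle. Names are illustrative:

          /* in dev_priv->rps: one client shared by every mmio flip */
          struct intel_rps_client mmioflips;

          /* in the mmio flip worker, when waiting for the render to the
           * target surface to complete: */
          WARN_ON(__i915_wait_request(mmio_flip->req,
                                      mmio_flip->crtc->reset_counter,
                                      false, NULL,
                                      &dev_priv->rps.mmioflips));
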
    • drm/i915: Limit ring synchronisation (sw semaphores) RPS boosts · a6f766f3
      Authored by Chris Wilson
      Ring switches can occur many times per frame, and are often out of
      control, causing frequent RPS boosting for no practical benefit. Treat
      the sw semaphore synchronisation as a separate client and only allow it
      to boost once per busy/idle cycle.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      [danvet: s/rq/req/]
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
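
      A minimal sketch of the once-per-cycle policy, assuming the
      list-based client tracking sketched above; a client stays on the
      boost list until the GPU goes idle and the list is purged, and the
      boost action itself is illustrative:

          /* in dev_priv->rps: one client for all sw semaphore waits */
          struct intel_rps_client semaphores;

          /* in gen6_rps_boost(): a client already on the list has had
           * its boost this busy/idle cycle, so skip it */
          if (rps == NULL || list_empty(&rps->link)) {
                  /* raise the GPU frequency to the softlimit at once */
                  intel_set_rps(dev_priv->dev,
                                dev_priv->rps.max_freq_softlimit);
                  if (rps != NULL) {
                          list_add(&rps->link, &dev_priv->rps.clients);
                          rps->boosts++;
                  }
          }
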
    • drm/i915: Inline check required for object syncing prior to execbuf · 03ade511
      Authored by Chris Wilson
      This trims a little overhead from the common case of not needing to
      synchronize between rings.
      
      v2: execlists is special and likes to duplicate code.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
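
      A minimal sketch of the inlined fast path, assuming the per-ring
      obj->active bitmask introduced by the read-read patch below; the
      surrounding loop is condensed for illustration:

          /* in i915_gem_execbuffer_move_to_gpu(): only take the full
           * synchronisation path when the object is actually busy on a
           * ring other than the one we are about to use */
          const unsigned other_rings = ~intel_ring_flag(ring);
          struct i915_vma *vma;
          int ret;

          list_for_each_entry(vma, vmas, exec_list) {
                  struct drm_i915_gem_object *obj = vma->obj;

                  if (obj->active & other_rings) {
                          ret = i915_gem_object_sync(obj, ring);
                          if (ret)
                                  return ret;
                  }
          }
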
    • drm/i915: Implement inter-engine read-read optimisations · b4716185
      Authored by Chris Wilson
      Currently, we only track the last request globally across all engines.
      This prevents us from issuing concurrent read requests on e.g. the RCS
      and BCS engines (or more likely the render and media engines). Without
      semaphores, we incur costly stalls as we synchronise between rings -
      greatly impacting the current performance of Broadwell versus Haswell in
      certain workloads (like video decode). With the introduction of
      reference-counted requests, it is much easier to track the last
      request per ring, as well as the last global write request, so that
      we can optimise inter-engine read-read requests (as well as better
      optimise certain CPU waits).
      
      v2: Fix inverted readonly condition for nonblocking waits.
      v3: Handle non-contiguous engine array after waits
      v4: Rebase, tidy, rewrite ring list debugging
      v5: Use obj->active as a bitfield; it looks cool
      v6: Micro-optimise, mostly involving moving code around
      v7: Fix retire-requests-upto for execlists (and multiple rq->ringbuf)
      v8: Rebase
      v9: Refactor i915_gem_object_sync() to allow the compiler to better
      optimise it.
      
      Benchmark: igt/gem_read_read_speed
      hsw:gt3e (with semaphores):
      Before: Time to read-read 1024k:		275.794µs
      After:  Time to read-read 1024k:		123.260µs
      
      hsw:gt3e (w/o semaphores):
      Before: Time to read-read 1024k:		230.433µs
      After:  Time to read-read 1024k:		124.593µs
      
      bdw-u (w/o semaphores):             Before          After
      Time to read-read 1x1:            26.274µs       10.350µs
      Time to read-read 128x128:        40.097µs       21.366µs
      Time to read-read 256x256:        77.087µs       42.608µs
      Time to read-read 512x512:       281.999µs      181.155µs
      Time to read-read 1024x1024:    1196.141µs     1118.223µs
      Time to read-read 2048x2048:    5639.072µs     5225.837µs
      Time to read-read 4096x4096:   22401.662µs    21137.067µs
      Time to read-read 8192x8192:   89617.735µs    85637.681µs
      
      Testcase: igt/gem_concurrent_blit (read-read and friends)
      Cc: Lionel Landwerlin <lionel.g.landwerlin@linux.intel.com>
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> [v8]
      [danvet: s/\<rq\>/req/g]
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
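
      A minimal sketch of the bookkeeping this series moves to, together
      with its consequence in the sync path; field names follow the
      naming visible elsewhere in this series, and the rest is
      illustrative:

          /* in struct drm_i915_gem_object: per-ring activity replaces
           * the single global last-request pointer */
          unsigned active;    /* bitmask of busy rings, bit == ring->id */
          struct drm_i915_gem_request *last_read_req[I915_NUM_RINGS];
          struct drm_i915_gem_request *last_write_req;

          /* in i915_gem_object_sync(): a reader on another ring need
           * only wait for the last write, so concurrent reads on e.g.
           * the render and media engines no longer serialise; a writer
           * must still wait for every outstanding read */
          if (readonly) {
                  if (obj->last_write_req)
                          ret = __i915_gem_object_sync(obj, to,
                                                       obj->last_write_req);
          } else {
                  for (i = 0; i < I915_NUM_RINGS; i++) {
                          if (obj->last_read_req[i] == NULL)
                                  continue;
                          ret = __i915_gem_object_sync(obj, to,
                                                       obj->last_read_req[i]);
                          if (ret)
                                  break;
                  }
          }
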