1. 13 Jun 2013, 1 commit
  2. 11 Jun 2013, 1 commit
  3. 07 Jun 2013, 1 commit
    • drm/i915: WA: FBC Render Nuke. · fd3da6c9
      Authored by Rodrigo Vivi
      WaFbcNukeOn3DBlt for IVB, HSW.
      
      According to BSpec: "Workaround: Do not enable Render Command Streamer tracking for FBC.
      Instead insert a LRI to address 0x50380 with data 0x00000004 after the PIPE_CONTROL that
      follows each render submission." (A sketch of such an LRI emission follows this entry.)
      
      v2: Chris noticed that the flush_domains check was missing here and also suggested doing
          the LRI only when FBC is enabled. To avoid an I915_READ on every flush, let's use the
          module parameter check.
      
      v3: Add the Wa name, as Damien suggested.
      
      v4: Ville noticed VLV doesn't support FBC at all, and that the comment was carried over incorrectly from the spec.
      
      v5: Ville noticed that on BLT a Cache Clean LRI should be used instead of the Nuke one.
      
      v6: Check for the flush domain on BLT (by Ville).
          Check for scanout dirty (by Chris).
      
      v7: Apply the proper fbc_dirty handling implemented by Chris.
      
      v8: Remove unused variables.
      
      Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Rodrigo Vivi <rodrigo.vivi@gmail.com>
      Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
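      A minimal sketch of the LRI emission described above, assuming the
      intel_ring_begin()/intel_ring_emit()/intel_ring_advance() helpers of that era; the
      register and bit values match the BSpec quote, but the macro names and the function
      itself are illustrative, not the exact i915 code:

          #define MSG_FBC_REND_STATE      0x50380         /* address quoted from BSpec */
          #define FBC_REND_NUKE           (1 << 2)        /* data 0x00000004 */

          /* Emit an MI_LOAD_REGISTER_IMM to nuke the FBC compressed framebuffer,
           * placed after the PIPE_CONTROL that follows a render submission. */
          static int gen7_render_fbc_nuke(struct intel_ring_buffer *ring)
          {
                  int ret;

                  /* Reserve space for the LRI; intel_ring_begin() can fail. */
                  ret = intel_ring_begin(ring, 4);
                  if (ret)
                          return ret;

                  intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(1));
                  intel_ring_emit(ring, MSG_FBC_REND_STATE);
                  intel_ring_emit(ring, FBC_REND_NUKE);
                  intel_ring_emit(ring, MI_NOOP);   /* pad to an even dword count */
                  intel_ring_advance(ring);

                  return 0;
          }

      Per v5/v6 above, the BLT ring would emit a Cache Clean LRI instead of the Nuke bit,
      gated on the flush domain and on scanout being dirty.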
  4. 01 Jun 2013, 12 commits
  5. 11 May 2013, 1 commit
  6. 20 Feb 2013, 2 commits
  7. 23 Jan 2013, 2 commits
  8. 22 Jan 2013, 1 commit
  9. 20 Jan 2013, 2 commits
  10. 18 Jan 2013, 1 commit
  11. 19 Dec 2012, 2 commits
  12. 18 Dec 2012, 1 commit
    • drm/i915: Implement workaround for broken CS tlb on i830/845 · b45305fc
      Authored by Daniel Vetter
      Now that Chris Wilson has demonstrated that the key to stability on early
      gen2 is to simply _never_ exchange the physical backing storage of
      batch buffers, I've taken a stab at a kernel solution. Doesn't look too
      nefarious imho, now that I no longer try to be too clever for my own
      good.
      
      v2: After discussing the various techniques, we've decided to always blit
      batches on the suspect devices, but to allow userspace to opt out of the
      kernel workaround and assume full responsibility for providing coherent
      batches. The principal reason is that avoiding the blit does improve
      performance in a few key microbenchmarks and also in cairo-trace
      replays. (A conceptual sketch follows this entry.)
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      [danvet:
      - Drop the hunk which uses HAS_BROKEN_CS_TLB to implement the ring
        wrap w/a. Suggested by Chris Wilson.
      - Also add the ACTHD check from Chris Wilson for the error state
        dumping, so that we still catch batches when userspace opts out of
        the w/a.]
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
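      A conceptual sketch of the workaround, under the assumption of a scratch buffer whose
      backing storage stays pinned for the ring's lifetime; the helper names
      (emit_batch_start(), blit_batch_to_scratch()) and the scratch field are hypothetical,
      and the real i830/845 dispatch code is more involved:

          static int i830_dispatch_execbuffer(struct intel_ring_buffer *ring,
                                              u32 offset, u32 len, unsigned flags)
          {
                  /* GTT offset of a scratch bo that is never unbound, so the
                   * broken CS TLB never sees its pages change underneath it.
                   * (Illustrative field name.) */
                  u32 cs_offset = ring->scratch_gtt_offset;
                  int ret;

                  if (flags & I915_DISPATCH_PINNED) {
                          /* Userspace opted out of the w/a and takes full
                           * responsibility for coherent batches: run in place. */
                          ret = emit_batch_start(ring, offset, len);
                  } else {
                          /* Blit the batch (relocations already applied) into
                           * the stable scratch area, then execute the copy. */
                          ret = blit_batch_to_scratch(ring, offset, len, cs_offset);
                          if (ret == 0)
                                  ret = emit_batch_start(ring, cs_offset, len);
                  }

                  return ret;
          }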
  13. 11 Dec 2012, 1 commit
  14. 06 Dec 2012, 2 commits
  15. 04 Dec 2012, 1 commit
  16. 01 Dec 2012, 1 commit
  17. 29 Nov 2012, 2 commits
    • drm/i915: Rearrange code to only have a single method for waiting upon the ring · 3e960501
      Authored by Chris Wilson
      Replace the wait for the ring to be clear with the more common wait for
      the ring to be idle. The principal advantage is one less exported
      intel_ring_wait function, and the removal of a hardcoded value. (See the
      sketch after this entry.)
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
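      The idea, roughly, simplified from the real code and assuming the request list and
      i915_wait_seqno() of that era (the actual version also flushes any outstanding lazy
      request first):

          /* Wait for everything outstanding on the ring to retire, rather
           * than waiting for a hardcoded amount of ring space to drain. */
          static int intel_ring_idle(struct intel_ring_buffer *ring)
          {
                  u32 seqno;

                  /* Nothing in flight: the ring is already idle. */
                  if (list_empty(&ring->request_list))
                          return 0;

                  /* Wait upon the seqno of the most recently emitted request. */
                  seqno = list_entry(ring->request_list.prev,
                                     struct drm_i915_gem_request,
                                     list)->seqno;

                  return i915_wait_seqno(ring, seqno);
          }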
    • drm/i915: Preallocate next seqno before touching the ring · 9d773091
      Authored by Chris Wilson
      Based on the work by Mika Kuoppala, we realised that we need to handle
      seqno wraparound prior to committing our changes to the ring. The most
      obvious point, then, is to grab the seqno inside intel_ring_begin() and
      reuse that seqno for all ring operations until the next request. As
      intel_ring_begin() can fail, the callers must already be prepared to
      handle such failure, so we can safely add further checks. (A simplified
      sketch follows this entry.)
      
      This patch looks like it should be split up into the interface
      changes and the tweaks to move seqno wrapping from the execbuffer into
      the core seqno increment. However, I found no easy way to break it into
      incremental steps without introducing further broken behaviour.
      
      v2: Mika found a silly mistake and a subtle error in the existing code:
      inside i915_gem_retire_requests() we were resetting the sync_seqno of
      the target ring based on the seqno from this ring, even though the two
      are only related by the order of their allocation, not retirement. Hence
      we were applying the optimisation that the rings were synchronised too
      early; fortunately, the only real casualty there is the handling of
      seqno wrapping.
      
      v3: Do not forget to reset the sync_seqno upon module reinitialisation,
      a la resume.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Mika Kuoppala <mika.kuoppala@intel.com>
      Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=863861
      Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com> [v2]
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
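      A simplified sketch of the reordering: the next seqno is reserved (handling wraparound)
      before a single dword is written, so any failure surfaces while the ring is still
      untouched. The outstanding_lazy_request field matches the era's naming; the
      __intel_ring_prepare() helper is illustrative:

          /* Reserve the seqno for the next request before touching the ring.
           * A previously reserved seqno is reused until its request is emitted. */
          static int intel_ring_alloc_seqno(struct intel_ring_buffer *ring)
          {
                  if (ring->outstanding_lazy_request)
                          return 0;

                  return i915_gem_get_seqno(ring->dev,
                                            &ring->outstanding_lazy_request);
          }

          int intel_ring_begin(struct intel_ring_buffer *ring, int num_dwords)
          {
                  int ret;

                  /* Handle seqno wraparound, and any failure doing so,
                   * before any command is written into the ring. */
                  ret = intel_ring_alloc_seqno(ring);
                  if (ret)
                          return ret;

                  /* Only now make space for num_dwords commands. */
                  return __intel_ring_prepare(ring,
                                              num_dwords * sizeof(uint32_t));
          }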
  18. 22 Nov 2012, 1 commit
  19. 16 Nov 2012, 1 commit
  20. 12 Nov 2012, 4 commits