1. 13 Jan 2016, 1 commit
  2. 12 Jan 2016, 1 commit
  3. 09 Jan 2016, 1 commit
  4. 07 Jan 2016, 3 commits
  5. 05 Jan 2016, 3 commits
  6. 30 Dec 2015, 1 commit
  7. 21 Dec 2015, 1 commit
  8. 12 Dec 2015, 1 commit
    • drm/i915: mark GEM object pages dirty when mapped & written by the CPU · 033908ae
      Authored by Dave Gordon
      In various places, a single page of a (regular) GEM object is mapped
      into CPU address space and updated. In each such case, either the
      page or the object should be marked dirty, to ensure that the
      modifications are not discarded if the object is evicted under
      memory pressure.
      
      The typical sequence is:
      	va = kmap_atomic(i915_gem_object_get_page(obj, pageno));
      	*(va+offset) = ...
      	kunmap_atomic(va);
      
      Here we introduce i915_gem_object_get_dirty_page(), which performs the
      same operation as i915_gem_object_get_page() but with the side-effect
      of marking the returned page dirty in the pagecache.  This will ensure
      that if the object is subsequently evicted (due to memory pressure),
      the changes are written to backing store rather than discarded.
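
      A minimal sketch of such a helper (illustrative only; the ops check
      and exact details are assumptions about the i915 code of that era)
      simply wraps the existing page lookup and marks the returned page
      dirty:

      	struct page *
      	i915_gem_object_get_dirty_page(struct drm_i915_gem_object *obj,
      				       int n)
      	{
      		struct page *page;

      		/* Per-page dirty tracking only makes sense for regular
      		 * (shmfs-backed) objects. */
      		if (WARN_ON(obj->ops != &i915_gem_object_ops))
      			return NULL;

      		page = i915_gem_object_get_page(obj, n);
      		set_page_dirty(page);
      		return page;
      	}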
      
      Note that it works only for regular (shmfs-backed) GEM objects, but (at
      least for now) those are the only ones that are updated in this way --
      the objects in question are contexts and batchbuffers, which are always
      shmfs-backed.
      
      Separate patches deal with the cases where whole objects are (or may
      be) dirtied.
      
      v3: Mark two more pages dirty in the page-boundary-crossing
          cases of the execbuffer relocation code [Chris Wilson]
      Signed-off-by: Dave Gordon <david.s.gordon@intel.com>
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Link: http://patchwork.freedesktop.org/patch/msgid/1449773486-30822-2-git-send-email-david.s.gordon@intel.com
      Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
  9. 10 Dec 2015, 1 commit
  10. 05 Dec 2015, 1 commit
  11. 03 Dec 2015, 1 commit
    • drm/i915: Extend LRC pinning to cover GPU context writeback · 6d65ba94
      Authored by Nick Hoath
      Use the first retired request on a new context to unpin the old
      context. This ensures that the hw context remains bound until the
      GPU has finished writing it back. Now that the context stays pinned
      until later in the request/context lifecycle, it no longer needs to
      be pinned from context_queue to retire_requests.
      This fixes an issue with GuC submission where the GPU might not
      have finished writing back the context before it is unpinned,
      resulting in a GPU hang.
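
      Roughly, the scheme could be sketched as follows (a hedged sketch:
      previous_context and intel_lr_context_unpin approximate the
      driver's names and are illustrative here):

      	/* At submission: remember the context being switched away
      	 * from; it must stay pinned until the GPU has saved it. */
      	request->previous_context = engine->last_context;
      	engine->last_context = request->ctx;

      	/* At request retirement: the GPU has completed the context
      	 * save by now, so the old context can finally be unpinned. */
      	if (request->previous_context)
      		intel_lr_context_unpin(request->previous_context, engine);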
      
      v2: Moved the new pin to cover GuC submission (Alex Dai)
          Moved the new unpin to request_retire to fix coverage leak
      v3: Added switch to default context if freeing a still pinned
          context just in case the hw was actually still using it
      v4: Unwrapped context unpin to allow calling without a request
      v5: Only create a switch to idle context if the ring doesn't
          already have a request pending on it (Alex Dai)
          Rename unsaved to dirty to avoid double negatives (Dave Gordon)
          Changed _no_req postfix to __ prefix for consistency (Dave Gordon)
          Split out per engine cleanup from context_free as it
          was getting unwieldy
          Corrected locking (Dave Gordon)
      v6: Removed some bikeshedding (Mika Kuoppala)
          Added explanation of the GuC hang that this fixes (Daniel Vetter)
      v7: Removed extra per request pinning from ring reset code (Alex Dai)
          Added forced ring unpin/clean in error case in context free (Alex Dai)
      Signed-off-by: Nick Hoath <nicholas.hoath@intel.com>
      Issue: VIZ-4277
      Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
      Cc: David Gordon <david.s.gordon@intel.com>
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Alex Dai <yu.dai@intel.com>
      Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Reviewed-by: Alex Dai <yu.dai@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
  12. 20 Nov 2015, 1 commit
  13. 18 Nov 2015, 6 commits
  14. 29 Oct 2015, 2 commits
  15. 21 Oct 2015, 2 commits
  16. 13 Oct 2015, 1 commit
  17. 07 Oct 2015, 1 commit
  18. 28 Sep 2015, 1 commit
    • drm/i915: Consider HW CSB write pointer before resetting the sw read pointer · dfc53c5e
      Authored by Michel Thierry
      A previous commit resets the Context Status Buffer (CSB) read
      pointer in ring init:
          commit c0a03a2e ("drm/i915: Reset CSB read pointer in ring init")
      
      This is generally correct, but on some platforms (CHT) this pointer
      is not reset across suspend/resume. In that case the driver should
      read the register value instead of resetting the sw read counter to
      0; otherwise it processes stale events, leading to unwanted
      pre-emptions or worse.
      
      On other platforms (BDW), and also after a GPU reset or power-up,
      the CSB write pointer (CSBWP) is instead reset to 0x7 (an invalid
      entry index), and in this case the read pointer should be set to 5
      (the interrupt code will increment this counter one more time, and
      will start reading from CSB[0]).
      
      v2: When the CSB registers are reset, the read pointer needs to be set
      to 5, otherwise the first write (CSB[0]) won't be read (Mika).
      Replace magic numbers with GEN8_CSB_ENTRIES (6) and GEN8_CSB_PTR_MASK
      (0x07).
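
      A sketch of the resulting init logic (close to, though not
      necessarily identical to, the actual patch; next_csb is a local
      shorthand used here):

      	/* Prefer the HW write pointer over blindly zeroing the sw
      	 * read pointer, so stale CSB events are not re-processed
      	 * after suspend/resume. */
      	next_csb = I915_READ(RING_CONTEXT_STATUS_PTR(ring)) &
      		   GEN8_CSB_PTR_MASK;

      	/* After GPU reset/power-up the register reads 0x7, which is
      	 * not a valid entry index; start at 5 so the interrupt code
      	 * increments once more and begins reading from CSB[0]. */
      	if (next_csb == GEN8_CSB_PTR_MASK)
      		next_csb = GEN8_CSB_ENTRIES - 1;

      	ring->next_context_status_buffer = next_csb;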
      
      Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Cc: stable@vger.kernel.org # v4.0+
      Signed-off-by: Lei Shen <lei.shen@intel.com>
      Signed-off-by: Deepak S <deepak.s@intel.com>
      Signed-off-by: Michel Thierry <michel.thierry@intel.com>
      Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
      Signed-off-by: Jani Nikula <jani.nikula@intel.com>
  19. 23 Sep 2015, 2 commits
  20. 22 Sep 2015, 1 commit
  21. 14 Sep 2015, 4 commits
  22. 04 Sep 2015, 1 commit
  23. 02 Sep 2015, 2 commits
  24. 26 Aug 2015, 1 commit
    • drm/i915/bxt: work around HW coherency issue when accessing GPU seqno · 319404df
      Authored by Imre Deak
      Running igt/store_dword_loop_render on BXT can hit a coherency
      problem where the seqno written at GPU command completion time is
      not seen by the CPU. This results in __i915_wait_request seeing the
      stale seqno and not completing the request (leaving aside the lost
      interrupt/GPU reset mechanism). I also verified that this isn't a
      case of a lost interrupt, or of the command somehow not completing:
      when the coherency issue occurred I read the seqno via an uncached
      GTT mapping too. While the cached version of the seqno still showed
      the stale value, the one read via the uncached mapping was correct.
      
      Work around this issue by clflushing the corresponding CPU cacheline
      following any store of the seqno and preceding any reading of it. When
      reading it do this only when the caller expects a coherent view.
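
      The workaround could look roughly like this (a hedged sketch; the
      function names are assumptions, and the status-page helpers follow
      the ring API of that era):

      	static u32 bxt_a_get_seqno(struct intel_engine_cs *ring,
      				   bool lazy_coherency)
      	{
      		/* Clflush the cacheline holding the seqno so the CPU
      		 * sees the value the GPU just wrote, but only when the
      		 * caller actually needs a coherent view. */
      		if (!lazy_coherency)
      			intel_flush_status_page(ring, I915_GEM_HWS_INDEX);
      		return intel_read_status_page(ring, I915_GEM_HWS_INDEX);
      	}

      	static void bxt_a_set_seqno(struct intel_engine_cs *ring,
      				    u32 seqno)
      	{
      		intel_write_status_page(ring, I915_GEM_HWS_INDEX, seqno);
      		/* Flush after the CPU-side store for the same reason. */
      		intel_flush_status_page(ring, I915_GEM_HWS_INDEX);
      	}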
      
      v2:
      - fix using the proper logical && instead of a bitwise & (Jani, Mika)
      - limit the workaround to A stepping, on later steppings this HW issue
        is fixed
      v3:
      - use a separate get_seqno/set_seqno vfunc (Chris)
      
      Testcase: igt/store_dword_loop_render
      Signed-off-by: Imre Deak <imre.deak@intel.com>
      Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>