1. 21 Jan 2016, 2 commits
    • drm/i915: abolish separate per-ring default_context pointers · ed54c1a1
      Committed by Dave Gordon
      Now that we've eliminated a lot of uses of ring->default_context,
      we can eliminate the pointer itself.
      
      All the engines share the same default intel_context, so we can just
      keep a single reference to it in the dev_priv structure rather than one
      in each of the engine[] elements. This makes refcounting more sensible
      too, as we now have a refcount of one for the one pointer, rather than
      a refcount of one but multiple pointers.
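      
      As a rough illustration (the field and helper names here are assumptions
      drawn from this description, not the exact upstream structures), the
      change amounts to something like:
      
      	/* Sketch only: simplified stand-ins for the real i915 structures */
      	struct intel_context;
      
      	struct intel_engine_cs {
      		const char *name;
      		/* struct intel_context *default_context;   <-- removed */
      	};
      
      	struct drm_i915_private {
      		struct intel_engine_cs engine[5];	/* illustrative count */
      		struct intel_context *kernel_context;	/* one pointer, one reference */
      	};
      
      	/* code that used to read ring->default_context now asks the device */
      	static inline struct intel_context *
      	i915_default_context(struct drm_i915_private *dev_priv)
      	{
      		return dev_priv->kernel_context;
      	}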
      
      From an idea by Chris Wilson.
      
      v2:	transform an extra instance of ring->default_context introduced by
          42f1cae8 drm/i915: Restore inhibiting the load of the default context
          That patch's commentary includes:
      	v2: Mark the global default context as uninitialized on GPU reset so
      	    that the context-local workarounds are reloaded upon re-enabling
          The code implementing that now also benefits from the replacement of
          the multiple (per-ring) pointers to the default context with a single
          pointer to the unique kernel context.
      
      v4:	Rebased, remove underused local (Nick Hoath)
      Signed-off-by: Dave Gordon <david.s.gordon@intel.com>
      Reviewed-by: Nick Hoath <nicholas.hoath@intel.com>
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Link: http://patchwork.freedesktop.org/patch/msgid/1453230175-19330-3-git-send-email-david.s.gordon@intel.com
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      ed54c1a1
    • drm/i915: simplify allocation of driver-internal requests · 26827088
      Committed by Dave Gordon
      There are a number of places where the driver needs a request, but isn't
      working on behalf of any specific user or in a specific context. At
      present, we associate them with the per-engine default context. A future
      patch will abolish those per-engine context pointers; but we can already
      eliminate a lot of the references to them, just by making the allocator
      allow NULL as a shorthand for "an appropriate context for this ring",
      which will mean that the callers don't need to know anything about how
      the "appropriate context" is found (e.g. per-ring vs per-device, etc).
      
      So this patch renames the existing i915_gem_request_alloc() and makes
      it local (static inline), then replaces it with a wrapper that supplies
      a default when the context is NULL and has a nicer calling convention
      (it doesn't require a pointer to an output parameter). All callers are
      then changed to the new convention:
      OLD:
      	err = i915_gem_request_alloc(ring, user_ctx, &req);
      	if (err) ...
      NEW:
      	req = i915_gem_request_alloc(ring, user_ctx);
      	if (IS_ERR(req)) ...
      OLD:
      	err = i915_gem_request_alloc(ring, ring->default_context, &req);
      	if (err) ...
      NEW:
      	req = i915_gem_request_alloc(ring, NULL);
      	if (IS_ERR(req)) ...
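      
      A minimal sketch of what such a wrapper could look like (the renamed
      internal allocator and the exact types are assumptions based on the
      description above, not verified against the upstream tree); ERR_PTR()
      and IS_ERR() come from <linux/err.h>:
      
      	/* the original allocator, renamed and made file-local; body omitted */
      	static int __i915_gem_request_alloc(struct intel_engine_cs *ring,
      					    struct intel_context *ctx,
      					    struct drm_i915_gem_request **req_out);
      
      	struct drm_i915_gem_request *
      	i915_gem_request_alloc(struct intel_engine_cs *ring,
      			       struct intel_context *ctx)
      	{
      		struct drm_i915_gem_request *req;
      		int err;
      
      		/* NULL is shorthand for "the appropriate default context" */
      		if (ctx == NULL)
      			ctx = ring->default_context;
      
      		err = __i915_gem_request_alloc(ring, ctx, &req);
      		return err ? ERR_PTR(err) : req;
      	}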
      
      v4:	Rebased
      Signed-off-by: Dave Gordon <david.s.gordon@intel.com>
      Reviewed-by: Nick Hoath <nicholas.hoath@intel.com>
      Link: http://patchwork.freedesktop.org/patch/msgid/1453230175-19330-2-git-send-email-david.s.gordon@intel.com
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      26827088
  2. 18 Jan 2016, 3 commits
  3. 15 Jan 2016, 1 commit
  4. 13 Jan 2016, 3 commits
  5. 12 Jan 2016, 1 commit
  6. 09 Jan 2016, 1 commit
  7. 07 Jan 2016, 3 commits
  8. 05 Jan 2016, 3 commits
  9. 30 Dec 2015, 1 commit
  10. 21 Dec 2015, 1 commit
  11. 12 Dec 2015, 1 commit
    • drm/i915: mark GEM object pages dirty when mapped & written by the CPU · 033908ae
      Committed by Dave Gordon
      In various places, a single page of a (regular) GEM object is mapped into
      CPU address space and updated. In each such case, either the page or
      the object should be marked dirty, to ensure that the modifications are
      not discarded if the object is evicted under memory pressure.
      
      The typical sequence is:
      	va = kmap_atomic(i915_gem_object_get_page(obj, pageno));
      	*(va+offset) = ...
      	kunmap_atomic(va);
      
      Here we introduce i915_gem_object_get_dirty_page(), which performs the
      same operation as i915_gem_object_get_page() but with the side-effect
      of marking the returned page dirty in the pagecache.  This will ensure
      that if the object is subsequently evicted (due to memory pressure),
      the changes are written to backing store rather than discarded.
      
      Note that it works only for regular (shmfs-backed) GEM objects, but (at
      least for now) those are the only ones that are updated in this way --
      the objects in question are contexts and batchbuffers, which are always
      shmfs-backed.
      
      Separate patches deal with the cases where whole objects are (or may
      be) dirtied.
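      
      A minimal sketch of the new helper and its use, assuming the ordinary
      pagecache set_page_dirty() call does the marking (the real function may
      differ in detail):
      
      	struct page *
      	i915_gem_object_get_dirty_page(struct drm_i915_gem_object *obj, int n)
      	{
      		struct page *page = i915_gem_object_get_page(obj, n);
      
      		/* record the pending CPU write so shmem writes it back on evict */
      		set_page_dirty(page);
      		return page;
      	}
      
      	/* the mapping sequence above then becomes: */
      	va = kmap_atomic(i915_gem_object_get_dirty_page(obj, pageno));
      	*(va+offset) = ...
      	kunmap_atomic(va);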
      
      v3: Mark two more pages dirty in the page-boundary-crossing
          cases of the execbuffer relocation code [Chris Wilson]
      Signed-off-by: Dave Gordon <david.s.gordon@intel.com>
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Link: http://patchwork.freedesktop.org/patch/msgid/1449773486-30822-2-git-send-email-david.s.gordon@intel.com
      Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      033908ae
  12. 10 Dec 2015, 1 commit
  13. 05 Dec 2015, 1 commit
  14. 03 Dec 2015, 1 commit
    • drm/i915: Extend LRC pinning to cover GPU context writeback · 6d65ba94
      Committed by Nick Hoath
      Use the first retired request on a new context to unpin
      the old context. This ensures that the hw context remains
      bound until it has been written back to by the GPU.
      Now that the context is pinned until later in the request/context
      lifecycle, it no longer needs to be pinned from context_queue to
      retire_requests.
      This fixes an issue with GuC submission where the GPU might not
      have finished writing back the context before it is unpinned. This
      results in a GPU hang.
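      
      A rough sketch of the retire-time unpin described above (the
      last_retired_context field and the unpin helper's signature are
      assumptions used only for illustration):
      
      	static void lrc_retire_unpin(struct drm_i915_gem_request *req)
      	{
      		struct intel_engine_cs *ring = req->ring;
      
      		/*
      		 * The first request to retire on a *new* context shows the GPU
      		 * has moved on from, and written back, the *old* context, so
      		 * only now is it safe to drop the old context's pin.
      		 */
      		if (req->ctx != ring->last_retired_context) {
      			if (ring->last_retired_context)
      				intel_lr_context_unpin(ring->last_retired_context, ring);
      			ring->last_retired_context = req->ctx;
      		}
      	}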
      
      v2: Moved the new pin to cover GuC submission (Alex Dai)
          Moved the new unpin to request_retire to fix coverage leak
      v3: Added switch to default context if freeing a still pinned
          context just in case the hw was actually still using it
      v4: Unwrapped context unpin to allow calling without a request
      v5: Only create a switch to idle context if the ring doesn't
          already have a request pending on it (Alex Dai)
          Rename unsaved to dirty to avoid double negatives (Dave Gordon)
          Changed _no_req postfix to __ prefix for consistency (Dave Gordon)
          Split out per engine cleanup from context_free as it
          was getting unwieldy
          Corrected locking (Dave Gordon)
      v6: Removed some bikeshedding (Mika Kuoppala)
          Added explanation of the GuC hang that this fixes (Daniel Vetter)
      v7: Removed extra per request pinning from ring reset code (Alex Dai)
          Added forced ring unpin/clean in error case in context free (Alex Dai)
      Signed-off-by: Nick Hoath <nicholas.hoath@intel.com>
      Issue: VIZ-4277
      Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
      Cc: David Gordon <david.s.gordon@intel.com>
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Alex Dai <yu.dai@intel.com>
      Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Reviewed-by: Alex Dai <yu.dai@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      6d65ba94
  15. 20 Nov 2015, 1 commit
  16. 18 Nov 2015, 6 commits
  17. 29 Oct 2015, 2 commits
  18. 21 Oct 2015, 2 commits
  19. 13 Oct 2015, 1 commit
  20. 07 Oct 2015, 1 commit
  21. 28 Sep 2015, 1 commit
    • drm/i915: Consider HW CSB write pointer before resetting the sw read pointer · dfc53c5e
      Committed by Michel Thierry
      A previous commit reset the Context Status Buffer (CSB) read pointer in
      ring init:
          commit c0a03a2e ("drm/i915: Reset CSB read pointer in ring init")
      
      This is generally correct, but this pointer is not reset after
      suspend/resume on some platforms (cht). In this case, the driver should
      read the register value instead of resetting the sw read counter to 0.
      Otherwise we process old events, leading to unwanted pre-emptions or
      something worse.
      
      But on other platforms (bdw), and also during GPU reset or power up, the
      CSBWP is reset to 0x7 (an invalid number), and in this case the read
      pointer should be set to 5 (the interrupt code will increment this
      counter one more time, and will start reading from CSB[0]).
      
      v2: When the CSB registers are reset, the read pointer needs to be set
      to 5, otherwise the first write (CSB[0]) won't be read (Mika).
      Replace magic numbers with GEN8_CSB_ENTRIES (6) and GEN8_CSB_PTR_MASK
      (0x07).
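      
      A sketch of the resulting init/resume-time logic (GEN8_CSB_ENTRIES and
      GEN8_CSB_PTR_MASK are the macros named above; the function and field
      names around them are assumptions):
      
      	static void gen8_init_csb_read_pointer(struct intel_engine_cs *ring)
      	{
      		struct drm_i915_private *dev_priv = ring->dev->dev_private;
      		u32 hw_ptr;
      
      		hw_ptr = I915_READ(RING_CONTEXT_STATUS_PTR(ring)) & GEN8_CSB_PTR_MASK;
      
      		/*
      		 * 0x7 cannot be a valid write pointer (there are only 6 CSB
      		 * entries), so the registers were just reset: start at 5 so the
      		 * interrupt handler's pre-increment makes the first read CSB[0].
      		 */
      		if (hw_ptr == GEN8_CSB_PTR_MASK)
      			hw_ptr = GEN8_CSB_ENTRIES - 1;
      
      		ring->next_context_status_buffer = hw_ptr;
      	}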
      
      Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Cc: stable@vger.kernel.org # v4.0+
      Signed-off-by: Lei Shen <lei.shen@intel.com>
      Signed-off-by: Deepak S <deepak.s@intel.com>
      Signed-off-by: Michel Thierry <michel.thierry@intel.com>
      Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
      Signed-off-by: Jani Nikula <jani.nikula@intel.com>
      dfc53c5e
  22. 23 Sep 2015, 2 commits
  23. 22 Sep 2015, 1 commit