1. 25 Oct 2016, 1 commit
  2. 14 Oct 2016, 1 commit
    • drm/i915: Allocate intel_engine_cs structure only for the enabled engines · 3b3f1650
      Akash Goel committed
      With the possibility of many more rings being added in the future, the
      drm_i915_private structure could bloat, as an array of type
      intel_engine_cs is embedded inside it:
      	struct intel_engine_cs engine[I915_NUM_ENGINES];
      Though this is still fine, since generally only a single instance of the
      drm_i915_private structure is used, not all of the possible rings are
      enabled or active on most platforms. Some memory can be saved by
      allocating an intel_engine_cs structure only for the enabled/active
      engines.
      Currently the engine/ring ID is kept static and dev_priv->engine[] is
      simply indexed using the enumerators of enum intel_engine_id.
      To save memory while continuing to use the static engine/ring IDs,
      'engine' is now defined as an array of pointers:
      	struct intel_engine_cs *engine[I915_NUM_ENGINES];
      dev_priv->engine[engine_ID] will be NULL for disabled engine instances.
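
      Purely as an illustration of the layout change, here is a standalone
      sketch with invented names (fake_engine, fake_dev_priv,
      for_each_present_engine; not the kernel code): the engine table becomes
      an array of pointers, and iteration simply skips the NULL slots.

      #include <stdio.h>

      /* Hypothetical engine IDs, loosely modelled on the i915 enum. */
      enum fake_engine_id { RCS = 0, BCS, VCS, VCS2, VECS, NUM_FAKE_ENGINES };

      struct fake_engine {
              enum fake_engine_id id;
              const char *name;
      };

      struct fake_dev_priv {
              /* Array of pointers: a NULL slot means the engine is absent. */
              struct fake_engine *engine[NUM_FAKE_ENGINES];
      };

      /* Iterate over present engines only, skipping NULL slots. */
      #define for_each_present_engine(e, dp, id) \
              for ((id) = 0; (id) < NUM_FAKE_ENGINES; (id)++) \
                      if (!((e) = (dp)->engine[(id)])) {} else

      int main(void)
      {
              static struct fake_engine rcs = { RCS, "rcs0" };
              static struct fake_engine vcs = { VCS, "vcs0" };
              struct fake_dev_priv dp = {
                      .engine = { [RCS] = &rcs, [VCS] = &vcs },
              };
              struct fake_engine *e;
              int id;

              for_each_present_engine(e, &dp, id)
                      printf("engine %d present: %s\n", e->id, e->name);

              return 0;
      }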
      
      There is a text size reduction of 928 bytes, from 1028200 to 1027272, for
      the i915.o file (for the i915.ko file the text size remains the same, at
      1193131 bytes).
      
      v2:
      - Remove the engine iterator field added in the drm_i915_private
        structure; instead pass a local iterator variable to the
        for_each_engine** macros. (Chris)
      - Do away with intel_engine_initialized() and instead directly use a
        NULL check on the engine pointer. (Chris)
      
      v3:
      - Remove the for_each_engine_id() macro, as the updated for_each_engine()
        macro can be used in its place. (Chris)
      - Protect the access to the render engine's fault register with a NULL
        check, as engine-specific init is done later in the driver load
        sequence.
      
      v4:
      - Use !!dev_priv->engine[VCS] style for the engine check in getparam. (Chris)
      - Kill the superfluous init_engine_lists().
      
      v5:
      - Clean up intel_engines_init() & intel_engines_setup() with respect to
        the allocation of the intel_engine_cs structure. (Chris)
      
      v6:
      - Rebase.
      
      v7:
      - Optimize the for_each_engine_masked() macro. (Chris)
      - Change the type of 'iter' local variable to enum intel_engine_id. (Chris)
      - Rebase.
      
      v8: Rebase.
      
      v9: Rebase.
      
      v10:
      - For the index calculation in intel_engine_sync_index(), use the engine
        ID instead of pointer-based arithmetic, as engine pointers are no
        longer contiguous. (Chris)
      - For appropriateness, rename the local enum variable 'iter' to 'id'.
        (Joonas)
      - Use the for_each_engine macro for cleanup in intel_engines_init() and
        remove the check for a NULL engine pointer in the cleanup() routines.
        (Joonas)
      
      v11: Rebase.
      
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Akash Goel <akash.goel@intel.com>
      Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
      Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: http://patchwork.freedesktop.org/patch/msgid/1476378888-7372-1-git-send-email-akash.goel@intel.com
  3. 10 Oct 2016, 1 commit
  4. 07 Oct 2016, 1 commit
  5. 15 Sep 2016, 1 commit
  6. 09 Sep 2016, 3 commits
    • drm/i915/guc: Prepare for nonblocking execbuf submission · dadd481b
      Chris Wilson committed
      Currently the presumption is that the request construction and its
      submission to the GuC are all done under a single hold of struct_mutex.
      We wish to relax this and separate the request construction from the
      later submission to the GuC. This requires us to reserve some space in
      the GuC command queue for the future submission. For flexibility in
      handling out-of-order request submission, we do not preallocate the next
      slot in the GuC command queue during request construction; we just ensure
      that there will be enough space for the submission later.
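
      Purely as a sketch of the reserve-then-consume idea, with invented names
      (fake_cmdq and friends; this is not the i915/GuC code): space is reserved
      in a command queue at request-construction time and only converted into
      real usage at submission time, in whatever order submissions arrive.

      #include <assert.h>
      #include <stdbool.h>
      #include <stdio.h>

      struct fake_cmdq {
              unsigned int size;      /* total queue size in bytes          */
              unsigned int used;      /* bytes holding submitted commands   */
              unsigned int reserved;  /* bytes promised to pending requests */
      };

      /* At request construction: only guarantee that space will exist later. */
      static bool fake_cmdq_reserve(struct fake_cmdq *q, unsigned int len)
      {
              if (q->used + q->reserved + len > q->size)
                      return false;   /* caller must back off or flush */
              q->reserved += len;
              return true;
      }

      /*
       * At submission: convert the reservation into real queue usage.
       * Requests may submit in a different order than they reserved in.
       */
      static void fake_cmdq_submit(struct fake_cmdq *q, unsigned int len)
      {
              assert(q->reserved >= len);
              q->reserved -= len;
              q->used += len;
      }

      int main(void)
      {
              struct fake_cmdq q = { .size = 4096 };

              if (fake_cmdq_reserve(&q, 64))      /* request construction  */
                      fake_cmdq_submit(&q, 64);   /* later, the submission */

              printf("used=%u reserved=%u\n", q.used, q.reserved);
              return 0;
      }
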
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Link: http://patchwork.freedesktop.org/patch/msgid/20160909131201.16673-17-chris@chris-wilson.co.uk
    • drm/i915: Drive request submission through fence callbacks · 5590af3e
      Chris Wilson committed
      Drive final request submission from a callback on the fence. This way
      the request is queued until all of its dependencies are resolved, at
      which point it is handed to the backend for queueing to hardware. For
      now, no dependencies are set on the request, so the callback fires
      immediately.
      
      A side-effect of imposing a heavier, irq-safe spinlock for execlist
      submission is that we lose the implicit softirq enabling after scheduling
      the execlists tasklet. To compensate, we manually kickstart the softirq
      by disabling and re-enabling bottom halves (bh) around the fence
      signaling.
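
      A rough user-space model of the submit-on-fence-signal flow, with
      invented names (fake_fence, fake_submit_request; this is not the kernel's
      dma-fence API): submission runs from a callback that fires once the
      dependency fence signals, or immediately if there is nothing to wait for.

      #include <stdbool.h>
      #include <stdio.h>

      struct fake_fence;
      typedef void (*fake_fence_cb)(struct fake_fence *f, void *data);

      struct fake_fence {
              bool signaled;
              fake_fence_cb cb;
              void *cb_data;
      };

      /*
       * If the fence is already signaled (no outstanding dependencies), the
       * callback runs immediately; otherwise it runs at signal time.
       */
      static void fake_fence_add_callback(struct fake_fence *f,
                                          fake_fence_cb cb, void *data)
      {
              if (f->signaled) {
                      cb(f, data);
              } else {
                      f->cb = cb;
                      f->cb_data = data;
              }
      }

      static void fake_fence_signal(struct fake_fence *f)
      {
              f->signaled = true;
              if (f->cb)
                      f->cb(f, f->cb_data);
      }

      /* Backend hand-off: the real driver would queue to hardware here. */
      static void fake_submit_request(struct fake_fence *f, void *data)
      {
              (void)f;
              printf("submitting request: %s\n", (const char *)data);
      }

      int main(void)
      {
              struct fake_fence deps = { 0 };

              fake_fence_add_callback(&deps, fake_submit_request,
                                      "after-deps-resolved");
              fake_fence_signal(&deps);   /* dependencies now resolved */
              return 0;
      }
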
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
      Reviewed-by: John Harrison <john.c.harrison@intel.com>
      Link: http://patchwork.freedesktop.org/patch/msgid/20160909131201.16673-14-chris@chris-wilson.co.uk
    • drm/i915: Update reset path to fix incomplete requests · 821ed7df
      Chris Wilson committed
      Update the reset path in preparation for engine reset, which requires
      identifying incomplete requests and their associated contexts, and fixing
      up their state so that the engine can resume correctly after the reset.
      
      The request that caused the hang is skipped and the head is reset to the
      start of its breadcrumb. This allows us to resume from where we left off.
      Since this request didn't complete normally, we also need to clean up the
      ELSP queue manually. This is vital if we employ nonblocking request
      submission, where we may have a web of dependencies upon the hung request
      and so advancing the seqno manually is no longer trivial.
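
      A conceptual sketch with made-up types (fake_ring, fake_request), rather
      than the real i915 structures, of the skip-the-hung-request idea above:
      the hung request is marked guilty and the ring head is wound forward to
      its breadcrumb, so the completion is still written and the requests
      queued behind it can run normally.

      #include <stdio.h>

      struct fake_request {
              unsigned int head;      /* start of the request's commands       */
              unsigned int postfix;   /* start of its breadcrumb (seqno write) */
              int guilty;
      };

      struct fake_ring {
              unsigned int head;
      };

      /* Skip the hung request's payload but keep its breadcrumb. */
      static void fake_reset_ring(struct fake_ring *ring,
                                  struct fake_request *hung)
      {
              hung->guilty = 1;
              ring->head = hung->postfix;
      }

      int main(void)
      {
              struct fake_ring ring = { .head = 0x100 };
              struct fake_request hung = { .head = 0x100, .postfix = 0x180 };

              fake_reset_ring(&ring, &hung);
              printf("ring head now 0x%x, guilty=%d\n", ring.head, hung.guilty);
              return 0;
      }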
      
      ABI: gem_reset_stats / DRM_IOCTL_I915_GET_RESET_STATS
      
      We change the way we count pending batches. Only the active context
      involved in the reset is marked as either innocent or guilty, rather
      than marking the entire world as pending. By inspection this only
      affects igt/gem_reset_stats (which assumes implementation details) and
      not piglit.
      
      ARB_robustness gives this guidance on how we expect users of this
      interface to behave:
      
       * Provide a mechanism for an OpenGL application to learn about
         graphics resets that affect the context.  When a graphics reset
         occurs, the OpenGL context becomes unusable and the application
         must create a new context to continue operation. Detecting a
         graphics reset happens through an inexpensive query.
      
      And with regards to the actual meaning of the reset values:
      
         Certain events can result in a reset of the GL context. Such a reset
         causes all context state to be lost. Recovery from such events
         requires recreation of all objects in the affected context. The
         current status of the graphics reset state is returned by
      
      	enum GetGraphicsResetStatusARB();
      
         The symbolic constant returned indicates if the GL context has been
         in a reset state at any point since the last call to
         GetGraphicsResetStatusARB. NO_ERROR indicates that the GL context
         has not been in a reset state since the last call.
         GUILTY_CONTEXT_RESET_ARB indicates that a reset has been detected
         that is attributable to the current GL context.
         INNOCENT_CONTEXT_RESET_ARB indicates a reset has been detected that
         is not attributable to the current GL context.
         UNKNOWN_CONTEXT_RESET_ARB indicates a detected graphics reset whose
         cause is unknown.
      
      The language here is explicit in that we must mark up the guilty batch,
      but is loose enough to let us relax the innocent (i.e. pending)
      accounting, as only the active batches are involved in the reset.
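
      For reference, a minimal sketch of how an application might act on this,
      using the standard GL_ARB_robustness entry point (context creation and
      fetching the function pointer, e.g. via eglGetProcAddress, are omitted):

      #include <GL/gl.h>
      #include <GL/glext.h>

      /* Called once per frame; returns 1 if the context must be recreated. */
      int check_for_gpu_reset(PFNGLGETGRAPHICSRESETSTATUSARBPROC
                              getGraphicsResetStatusARB)
      {
              switch (getGraphicsResetStatusARB()) {
              case GL_NO_ERROR:
                      return 0;   /* no reset since the last query         */
              case GL_GUILTY_CONTEXT_RESET_ARB:
              case GL_INNOCENT_CONTEXT_RESET_ARB:
              case GL_UNKNOWN_CONTEXT_RESET_ARB:
              default:
                      return 1;   /* context is lost: recreate all objects */
              }
      }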
      
      In the future, we are looking towards single engine resetting (with
      minimal locking), where it seems inappropriate to mark the entire world
      as innocent since the reset occurred on a different engine. Reducing the
      information available means we only have to encounter the pain once, and
      also reduces the information leaking from one context to another.
      
      v2: Legacy ringbuffer submission required a reset following hibernation,
      or else we would restore stale values to RING_HEAD and walk over stolen
      garbage.
      
      v3: GuC requires replaying the requests after a reset.
      
      v4: Restore engine IRQ after reset (so waiters will be woken!)
          Rearm hangcheck if resetting with a waiter.
      
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Cc: Mika Kuoppala <mika.kuoppala@intel.com>
      Cc: Arun Siluvery <arun.siluvery@linux.intel.com>
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
      Link: http://patchwork.freedesktop.org/patch/msgid/20160909131201.16673-13-chris@chris-wilson.co.uk
  7. 05 Sep 2016, 1 commit
  8. 27 Aug 2016, 1 commit
  9. 17 Aug 2016, 1 commit
  10. 15 Aug 2016, 5 commits
  11. 10 Aug 2016, 4 commits
  12. 05 Aug 2016, 1 commit
  13. 03 Aug 2016, 3 commits
  14. 20 Jul 2016, 2 commits
  15. 06 Jul 2016, 1 commit
  16. 05 Jul 2016, 1 commit
  17. 04 Jul 2016, 1 commit
  18. 21 Jun 2016, 2 commits
  19. 14 Jun 2016, 6 commits
  20. 13 Jun 2016, 2 commits
  21. 07 Jun 2016, 1 commit
    • drm/i915/guc: disable GuC submission earlier during GuC (re)load · 29fb72c7
      Dave Gordon committed
      When resetting and reloading the GuC, the GuC submission management code
      also needs to destroy and recreate the GuC client(s). Currently this is
      done by a separate call from the GuC loader, but really, it's just an
      internal detail of the submission code. So here we remove the call from
      the loader (which is too late, really, because the GuC has already been
      reloaded at this point) and put it into guc_submission_init() instead.
      This means that any preexisting client is destroyed *before* the GuC
      (re)load and then recreated afterwards, iff the firmware was successfully
      loaded. If the GuC reload fails, we don't recreate the client, so the
      fallback to execlists mode (if active) won't leak the client object
      (previously, the now-unusable client would have been left allocated,
      and leaked if the driver were unloaded).
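
      The ordering described above, as a rough sketch with made-up helper names
      (fake_guc_*; these are not the driver's actual functions): any
      preexisting client is torn down before the firmware (re)load, and a new
      one is created only if the load succeeded.

      #include <stdbool.h>
      #include <stdio.h>
      #include <stdlib.h>

      struct fake_guc_client { int id; };

      static struct fake_guc_client *fake_client;

      /* Tear down any preexisting client *before* the firmware reload. */
      static void fake_guc_client_destroy(void)
      {
              free(fake_client);
              fake_client = NULL;
      }

      static bool fake_guc_load_firmware(void)
      {
              return true;    /* pretend the firmware load succeeded */
      }

      /* Recreate the client only once the firmware is known to be loaded. */
      static int fake_guc_submission_init(void)
      {
              fake_client = calloc(1, sizeof(*fake_client));
              return fake_client ? 0 : -1;
      }

      int main(void)
      {
              fake_guc_client_destroy();            /* old client gone first */

              if (fake_guc_load_firmware())
                      fake_guc_submission_init();   /* recreate on success   */
              else
                      printf("fall back to execlists; no client leaked\n");

              fake_guc_client_destroy();
              return 0;
      }
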
      Signed-off-by: Dave Gordon <david.s.gordon@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>