1. 23 June 2015 (7 commits)
    • drm/i915: Split i915_ppgtt_init_hw() in half - generic and per ring · 4ad2fd88
      Authored by John Harrison
      The i915_gem_init_hw() function calls a bunch of smaller initialisation
      functions, several of which have both generic sections and per-ring
      sections. This means multiple passes are made over the rings. Each pass
      writes data to the ring, which then floats around in that ring's OLR
      until some arbitrary point in the future when an add_request() is done
      by some unrelated piece of code.
      
      This patch splits i915_ppgtt_init_hw() in two, with the per-ring
      initialisation now done in i915_ppgtt_init_ring(). The loop over the
      rings is now driven at the top level, in i915_gem_init_hw().
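
      As a rough sketch of the new shape (the two i915_* function names are
      from this commit; the bodies and stub types below are illustrative
      only, not the driver's real definitions):

          struct intel_engine_cs;              /* stub for illustration */
          #define I915_NUM_RINGS 5             /* illustrative count */

          /* Generic half: global PPGTT state, done once. */
          static int i915_ppgtt_init_hw(void)
          {
                  /* page directories, PDE registers, ... */
                  return 0;
          }

          /* Per-ring half: emits commands into one specific ring. */
          static int i915_ppgtt_init_ring(struct intel_engine_cs *ring)
          {
                  (void)ring;
                  return 0;
          }

          int i915_gem_init_hw(struct intel_engine_cs **rings)
          {
                  int i, ret = i915_ppgtt_init_hw();     /* generic, once */

                  for (i = 0; ret == 0 && i < I915_NUM_RINGS; i++)
                          ret = i915_ppgtt_init_ring(rings[i]);
                  return ret;
          }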
      
      v2: Fix dumb loop variable re-use.
      
      For: VIZ-5115
      Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
      Reviewed-by: Tomas Elf <tomas.elf@intel.com> (v1)
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      4ad2fd88
    • drm/i915: Update i915_gpu_idle() to manage its own request · 73cfa865
      Authored by John Harrison
      Added explicit request creation and submission to the GPU idle code path.
      
      For: VIZ-5115
      Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
      Reviewed-by: Tomas Elf <tomas.elf@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      73cfa865
    • drm/i915: Add flag to i915_add_request() to skip the cache flush · 5b4a60c2
      Authored by John Harrison
      In order to explicitly track all GPU work (and completely remove the
      outstanding lazy request), it is necessary to add extra
      i915_add_request() calls in various places. Some of these do not need
      the implicit cache flush done as part of the standard batch buffer
      submission process.
      
      This patch adds a flag to _add_request() to specify whether the flush is
      required or not.
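
      A sketch of the resulting interface (signature simplified; the
      convenience macros are an assumption about how callers might be kept
      tidy, not confirmed by the commit message):

          #include <stdbool.h>

          struct intel_engine_cs;
          struct drm_file;
          struct drm_i915_gem_object;

          int __i915_add_request(struct intel_engine_cs *ring,
                                 struct drm_file *file,
                                 struct drm_i915_gem_object *batch_obj,
                                 bool flush_caches);

          /* Batch submission keeps the flush; internal bookkeeping
           * requests can skip it. */
          #define i915_add_request(ring) \
                  __i915_add_request(ring, NULL, NULL, true)
          #define i915_add_request_no_flush(ring) \
                  __i915_add_request(ring, NULL, NULL, false)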
      
      For: VIZ-5115
      Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
      Reviewed-by: Tomas Elf <tomas.elf@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      5b4a60c2
    • drm/i915: Update alloc_request to return the allocated request · 217e46b5
      Authored by John Harrison
      The alloc_request() function does not actually return the newly allocated
      request. Instead, it must be pulled from ring->outstanding_lazy_request. This
      patch fixes this so that code can create a request and start using it knowing
      exactly which request it actually owns.
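
      The ownership change might look like this at a call site (the
      out-parameter form is an assumption based on the commit message, and
      example_submit is purely illustrative):

          struct intel_engine_cs;
          struct intel_context;
          struct drm_i915_gem_request;

          int i915_gem_request_alloc(struct intel_engine_cs *ring,
                                     struct intel_context *ctx,
                                     struct drm_i915_gem_request **req_out);

          int example_submit(struct intel_engine_cs *ring,
                             struct intel_context *ctx)
          {
                  struct drm_i915_gem_request *req;
                  int ret = i915_gem_request_alloc(ring, ctx, &req);

                  if (ret)
                          return ret;
                  /* 'req' is unambiguously ours: fill it with work and
                   * submit, without ever touching
                   * ring->outstanding_lazy_request. */
                  return 0;
          }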
      
      v2: Updated for new i915_gem_request_alloc() scheme.
      
      For: VIZ-5115
      Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
      Reviewed-by: Tomas Elf <tomas.elf@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      217e46b5
    • drm/i915: Set context in request from creation even in legacy mode · 40e895ce
      Authored by John Harrison
      In execlist mode, the context object pointer is written in to the request
      structure (and reference counted) at the point of request creation. In legacy
      mode, this only happens inside i915_add_request().
      
      This patch updates the legacy code path to match the execlist version.
      This allows all the intermediate code between request creation and
      request submission to get at the context object given only a request
      structure, removing the need to pass context pointers here, there and
      everywhere.
      
      v2: Moved the context reference so it does not need to be undone if the
      get_seqno() fails.
      
      v3: Fixed execlist mode always hitting a warning about invalid last_contexts
      (which don't exist in execlist mode).
      
      v4: Updated for new i915_gem_request_alloc() scheme.
      
      For: VIZ-5115
      Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
      Reviewed-by: Tomas Elf <tomas.elf@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      40e895ce
    • drm/i915: i915_add_request must not fail · bf7dc5b7
      Authored by John Harrison
      The i915_add_request() function is called to keep track of work that
      has been written to the ring buffer. It adds epilogue commands to track
      progress (seqno updates and such), moves the request structure onto the
      right list, and performs other housekeeping tasks. However, the work
      itself has already been written to the ring and will be executed
      whether or not the add-request call succeeds. So no matter what goes
      wrong, there isn't much point in failing the call.
      
      At the moment, this is fine(ish). If the add request bails out early
      and skips the housekeeping, the request will still float around in the
      ring->outstanding_lazy_request field and be picked up next time. That
      means multiple pieces of work get tagged as the same request, and the
      driver can't actually wait for the first piece of work until something
      else has been submitted. But it all more or less hangs together.
      
      This patch series is all about removing the OLR and guaranteeing that each piece
      of work gets its own personal request. That means that there is no more
      'hoovering up of forgotten requests'. If the request does not get tracked then
      it will be leaked. Thus the add request call _must_ not fail. The previous patch
      should have already ensured that it _will_ not fail by removing the potential
      for running out of ring space. This patch enforces the rule by actually removing
      the early exit paths and the return code.
      
      Note that if something does manage to fail and the epilogue commands don't get
      written to the ring, the driver will still hang together. The request will be
      added to the tracking lists. And as in the old case, any subsequent work will
      generate a new seqno which will suffice for marking the old one as complete.
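
      The enforced rule, in miniature (WARN below stands in for the kernel
      macro, and the helper names are illustrative):

          #include <stdio.h>

          #define WARN(cond, fmt, ...) \
                  do { if (cond) fprintf(stderr, fmt, ##__VA_ARGS__); } while (0)

          struct intel_engine_cs;

          static int emit_epilogue(struct intel_engine_cs *ring)
          {
                  /* cache flush, seqno write, interrupt signalling */
                  (void)ring;
                  return 0;
          }

          /* Note the void return: epilogue failure is warned about and
           * swallowed; the request goes on the tracking lists regardless. */
          void add_request_model(struct intel_engine_cs *ring)
          {
                  int ret = emit_epilogue(ring);

                  WARN(ret, "epilogue emission failed (%d)\n", ret);
                  /* ...always move the request onto the tracking lists... */
          }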
      
      v2: Improved WARNings (Tomas Elf review request).
      
      For: VIZ-5115
      Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
      Reviewed-by: Tomas Elf <tomas.elf@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      bf7dc5b7
    • drm/i915: Reserve ring buffer space for i915_add_request() commands · 29b1b415
      Authored by John Harrison
      It is a bad idea for i915_add_request() to fail. The work will already
      have been sent to the ring and will be processed, but there will not be
      any tracking or management of that work.
      
      The only way the add request call can fail is if it can't write its epilogue
      commands to the ring (cache flushing, seqno updates, interrupt signalling). The
      reasons for that are mostly down to running out of ring buffer space and the
      problems associated with trying to get some more. This patch prevents that
      situation from happening in the first place.
      
      When a request is created, it marks sufficient space as reserved for
      the epilogue commands, guaranteeing that by the time the epilogue is
      written there will be plenty of space for it. Note that a ring_begin()
      call is required to actually reserve the space (and do any potential
      waiting). However, that is not currently done at request creation time,
      because the ring_begin() code can itself allocate a request; calling
      begin() from the request allocation code would therefore lead to
      infinite recursion. Later patches in this series remove the need for
      begin() to do the allocation. At that point, it becomes safe for the
      allocation code to call begin() and really reserve the space.
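
      A toy model of the reservation accounting described above (names are
      illustrative; the real tracking lives on the ringbuffer and includes
      the sanity checks mentioned in v2):

          #include <stdbool.h>

          struct ringbuf_model {
                  int  space;            /* bytes currently free       */
                  int  reserved_size;    /* set aside for the epilogue */
                  bool reserved_in_use;
          };

          /* At request creation: earmark epilogue space, so a later
           * ring_begin() waits for batch + epilogue, not batch alone. */
          static void reserve_epilogue(struct ringbuf_model *rb, int bytes)
          {
                  rb->reserved_size = bytes;
                  rb->reserved_in_use = true;
          }

          /* The 'cancel' operation from v2, for abandoned requests. */
          static void cancel_reservation(struct ringbuf_model *rb)
          {
                  rb->reserved_size = 0;
                  rb->reserved_in_use = false;
          }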
      
      Until then, there is a potential for insufficient space to be available
      at the point of calling i915_add_request(). However, that could only
      happen if the request was created and immediately submitted without
      ever calling ring_begin() and adding any work to it, which should never
      happen. And even if it does, and that request falls into the tiny
      window where it can fail for lack of ring space, does it really matter?
      The request wasn't doing anything in the first place.
      
      v2: Updated the 'reserved space too small' warning to include the offending
      sizes. Added a 'cancel' operation to clean up when a request is abandoned. Added
      re-initialisation of tracking state after a buffer wrap to keep the sanity
      checks accurate.
      
      v3: Incremented the reserved size to accommodate Ironlake (after finally
      managing to run on an ILK system). Also fixed missing wrap code in LRC mode.
      
      v4: Added extra comment and removed duplicate WARN (feedback from Tomas).
      
      For: VIZ-5115
      CC: Tomas Elf <tomas.elf@intel.com>
      Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      29b1b415
  2. 22 June 2015 (1 commit)
  3. 15 June 2015 (1 commit)
  4. 27 May 2015 (1 commit)
    • drm/i915: Use spinlocks for checking when to waitboost · 8d3afd7d
      Authored by Chris Wilson
      In commit 1854d5ca
      Author: Chris Wilson <chris@chris-wilson.co.uk>
      Date:   Tue Apr 7 16:20:32 2015 +0100
      
          drm/i915: Deminish contribution of wait-boosting from clients
      
      we removed an atomic timer based check for allowing waitboosting and
      moved it below the mutex taken during RPS. However, that mutex can be
      held for long periods of time on Valleyview/Cherryview as communication
      with the PCU is slow. As clients may frequently wait for results (e.g.
      transform feedback), we introduced contention between the client and
      the RPS worker. We can take advantage of the RPS worker by switching
      the waitboost decision to use spinlocks and deferring the actual
      reclocking to the worker.
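
      The shape of the new fast path, as a sketch (locking shown as
      comments; field names are modelled on the commit message, not copied
      from the source):

          #include <stdbool.h>

          struct rps_model {
                  bool client_boost;
                  int  cur_freq, max_freq_softlimit;
          };

          /* Called from the wait path. Cheap: no PCU mutex here. */
          static void flag_client_boost(struct rps_model *rps)
          {
                  /* spin_lock(&rps->client_lock); */
                  if (rps->cur_freq < rps->max_freq_softlimit)
                          rps->client_boost = true; /* just another waker */
                  /* spin_unlock(&rps->client_lock); */
                  /* then: kick the RPS worker, which takes the slow
                   * mutex and performs the actual reclock */
          }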
      
      Fixes a regression of up to 45% on Baytrail and Braswell!
      
      v2 (Daniel):
      - Use max_freq_softlimit instead of the not-yet-merged boost
        frequency.
      - Don't inject a fake irq into the boost work, instead treat
        client_boost as just another legit waker.
      
      v3: Drop the now unused mask (Chris).
      
      Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=90112
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> (v1)
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      8d3afd7d
  5. 22 May 2015 (3 commits)
  6. 21 May 2015 (4 commits)
    • drm/i915: Convert RPS tracking to an intel_rps_client struct · 2e1b8730
      Authored by Chris Wilson
      Now that we have internal clients, rather than faking a whole
      drm_i915_file_private just for tracking RPS boosts, create a new struct
      intel_rps_client and pass it along when waiting.
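
      A sketch of such a struct and how it threads through the wait path
      (simplified; the real fields and wait signature may differ):

          struct list_head { struct list_head *prev, *next; };

          struct intel_rps_client {
                  struct list_head link;  /* on the list of boosters   */
                  unsigned boosts;        /* boosts this client caused */
          };

          struct drm_i915_gem_request;

          /* Waits take the client directly instead of a faked-up
           * drm_i915_file_private: */
          int __i915_wait_request(struct drm_i915_gem_request *req,
                                  struct intel_rps_client *rps);
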
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      [danvet: s/rq/req/]
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      2e1b8730
    • drm/i915: Limit ring synchronisation (sw semaphores) RPS boosts · a6f766f3
      Authored by Chris Wilson
      Ring switches can occur many times per frame, and are often out of
      control, causing frequent RPS boosting for no practical benefit. Treat
      the sw semaphore synchronisation as a separate client and only allow it
      to boost once per busy/idle cycle.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      [danvet: s/rq/req/]
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      a6f766f3
    • drm/i915: Implement inter-engine read-read optimisations · b4716185
      Authored by Chris Wilson
      Currently, we only track the last request globally across all engines.
      This prevents us from issuing concurrent read requests on e.g. the RCS
      and BCS engines (or more likely the render and media engines). Without
      semaphores, we incur costly stalls as we synchronise between rings -
      greatly impacting the current performance of Broadwell versus Haswell in
      certain workloads (like video decode). With the introduction of
      reference counted requests, it is much easier to track the last request
      per ring, as well as the last global write request, so that we can
      optimise inter-engine read-read requests (as well as better optimise
      certain CPU waits).
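
      The per-object bookkeeping this enables, sketched (field names follow
      the commit's description, simplified into stub form):

          #define I915_NUM_RINGS 5

          struct drm_i915_gem_request;

          struct gem_object_model {
                  /* One outstanding read per engine, so e.g. RCS and
                   * BCS reads no longer serialise against each other. */
                  struct drm_i915_gem_request *last_read_req[I915_NUM_RINGS];
                  /* Writes still serialise through a single request. */
                  struct drm_i915_gem_request *last_write_req;
                  unsigned active;   /* v5: bitfield of busy engines */
          };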
      
      v2: Fix inverted readonly condition for nonblocking waits.
      v3: Handle non-contiguous engine array after waits
      v4: Rebase, tidy, rewrite ring list debugging
      v5: Use obj->active as a bitfield, it looks cool
      v6: Micro-optimise, mostly involving moving code around
      v7: Fix retire-requests-upto for execlists (and multiple rq->ringbuf)
      v8: Rebase
      v9: Refactor i915_gem_object_sync() to allow the compiler to better
      optimise it.
      
      Benchmark: igt/gem_read_read_speed
      hsw:gt3e (with semaphores):
      Before: Time to read-read 1024k:		275.794µs
      After:  Time to read-read 1024k:		123.260µs
      
      hsw:gt3e (w/o semaphores):
      Before: Time to read-read 1024k:		230.433µs
      After:  Time to read-read 1024k:		124.593µs
      
      bdw-u (w/o semaphores):             Before          After
      Time to read-read 1x1:            26.274µs       10.350µs
      Time to read-read 128x128:        40.097µs       21.366µs
      Time to read-read 256x256:        77.087µs       42.608µs
      Time to read-read 512x512:       281.999µs      181.155µs
      Time to read-read 1024x1024:    1196.141µs     1118.223µs
      Time to read-read 2048x2048:    5639.072µs     5225.837µs
      Time to read-read 4096x4096:   22401.662µs    21137.067µs
      Time to read-read 8192x8192:   89617.735µs    85637.681µs
      
      Testcase: igt/gem_concurrent_blit (read-read and friends)
      Cc: Lionel Landwerlin <lionel.g.landwerlin@linux.intel.com>
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> [v8]
      [danvet: s/\<rq\>/req/g]
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      b4716185
    • drm/i915: s/\<rq\>/req/g · eed29a5b
      Authored by Daniel Vetter
      The merged seqno->request conversion from John called request
      variables req, but some (not all) of Chris' recent patches changed
      those to just rq. We've had a lengthy (and inconclusive) discussion on
      irc about which is the more meaningful name, with maybe at most a
      slight bias towards req.
      
      Given that the "don't change names without good reason to avoid
      conflicts" rule applies, let's go back to req everywhere for
      consistency. I'll sed any patches for which this will cause conflicts
      before applying.
      
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: John Harrison <John.C.Harrison@Intel.com>
      [danvet: s/origina/merged/ as pointed out by Chris - the first
      mass-conversion patch was from Chris, the merged one from John.]
      Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
      eed29a5b
  7. 20 May 2015 (2 commits)
  8. 8 May 2015 (6 commits)
  9. 24 April 2015 (3 commits)
    • drm/i915: Workaround to avoid lite restore with HEAD==TAIL · 53292cdb
      Authored by Michel Thierry
      WaIdleLiteRestore is an execlists-only workaround, and requires the driver
      to ensure that any context always has HEAD!=TAIL when attempting lite
      restore.
      
      Add two extra MI_NOOP instructions at the end of each request, but
      keep the request's tail pointing before the MI_NOOPs. We may not need
      to execute them, which is why request->tail is sampled before these
      extra instructions are added.
      
      If we submit a context to the ELSP which has previously been submitted,
      move the tail pointer past the MI_NOOPs. This ensures HEAD!=TAIL.
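
      Sketched out (helper names and the wa_tail field are illustrative
      stand-ins, not the driver's exact ones):

          #define MI_NOOP 0u

          struct ring_model { unsigned int *buf; unsigned int tail; };

          struct req_model {
                  struct ring_model *ring;
                  unsigned int tail, wa_tail;
          };

          static void emit(struct ring_model *r, unsigned int cmd)
          {
                  r->buf[r->tail++] = cmd;
          }

          static void emit_request_end(struct req_model *req)
          {
                  req->tail = req->ring->tail; /* sampled BEFORE padding */
                  emit(req->ring, MI_NOOP);    /* normally never executed */
                  emit(req->ring, MI_NOOP);
                  req->wa_tail = req->ring->tail;
          }

          /* On resubmission to the ELSP, point past the NOOPs so the
           * context is seen with HEAD != TAIL: */
          static void apply_wa_on_resubmit(struct req_model *req)
          {
                  req->tail = req->wa_tail;
          }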
      
      v2: Move overallocation to gen8_emit_request, and added note about
      sampling request->tail in commit message (Chris).
      
      v3: Remove redundant request->tail assignment in __i915_add_request, in
      lrc mode this is already set in execlists_context_queue.
      Do not add wa implementation details inside gem (Chris).
      
      v4: Apply the wa whenever the req has been resubmitted and update
      comment (Chris).
      
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Thomas Daniel <thomas.daniel@intel.com>
      Signed-off-by: Michel Thierry <michel.thierry@intel.com>
      Signed-off-by: Jani Nikula <jani.nikula@intel.com>
      53292cdb
    • drm/i915: Fix up the vma aliasing ppgtt binding · 0875546c
      Authored by Daniel Vetter
      Currently we have the problem that the decision whether ptes need to
      be (re)written is splattered all over the codebase. Move all that into
      i915_vma_bind. This needs a few changes:
      - Just reuse the PIN_* flags for i915_vma_bind and do the conversion
        to vma->bound in there to avoid duplicating the conversion code all
        over.
      - We need to make binding for EXECBUF (i.e. pick aliasing ppgtt if
        around) explicit, add PIN_USER for that.
      - Two callers want to update ptes, give them a PIN_UPDATE for that.
      
      Of course we still want to avoid double-binding, but that should be
      taken care of:
      - A ppgtt vma will only ever see PIN_USER, so no issue with
        double-binding.
      - A ggtt vma with aliasing ppgtt needs both types of binding, and we
        track that properly now.
      - A ggtt vma without aliasing ppgtt could be bound twice. In the
        lower-level ->bind_vma functions hence unconditionally set
        GLOBAL_BIND when writing the ggtt ptes.
      
      There's still a bit room for cleanup, but that's for follow-up
      patches.
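
      A sketch of the flag plumbing described above (bit values and the
      helper name are illustrative):

          #define PIN_GLOBAL  (1u << 0) /* bind into the global GTT       */
          #define PIN_USER    (1u << 1) /* EXECBUF: aliasing ppgtt if any */
          #define PIN_UPDATE  (1u << 2) /* rewrite already-written ptes   */

          #define GLOBAL_BIND (1u << 0)
          #define LOCAL_BIND  (1u << 1)

          /* The one place the PIN_* request is turned into vma->bound
           * state, instead of every caller doing its own conversion. */
          static unsigned int pin_flags_to_bound(unsigned int flags)
          {
                  unsigned int bound = 0;

                  if (flags & PIN_GLOBAL)
                          bound |= GLOBAL_BIND;
                  if (flags & PIN_USER)
                          bound |= LOCAL_BIND;
                  return bound;
          }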
      
      v2: Fixup fumbles.
      
      v3: s/PIN_EXECBUF/PIN_USER/ for clearer meaning, suggested by Chris.
      
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
      0875546c
    • drm/i915: Remove misleading comment around bind_to_vm · cd102a68
      Authored by Daniel Vetter
      It's true that we might need to context switch, but both the signalling
      and implementation of the same are a few source files away. Remove it.
      Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      cd102a68
  10. 20 April 2015 (2 commits)
  11. 16 April 2015 (1 commit)
    • drm/i915: Fix view type in warning message · 5678ad73
      Authored by Tvrtko Ursulin
      One month passed between posting a patch and it getting merged, and
      unfortunately even though it still applies, it needs fixing to account
      for changes in function parameters since:
      
         commit d385612e15b8b6eb3db328d83f1872ef8a381788
         Author: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
         Date:   Tue Mar 17 14:45:29 2015 +0000
      
             drm/i915: Log view type when printing warnings
      Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
      [danvet: Squash in fixup from Tvrtko to fix the rebase conflict.]
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      5678ad73
  12. 13 April 2015 (2 commits)
    • drm/i915: Remove obj->pin_mappable · 30154650
      Authored by Chris Wilson
      The obj->pin_mappable flag only exists for debug purposes and is a
      hindrance that is mistreated with rotated GGTT views. For debug
      purposes, it suffices to mark objects with pin_display as being of note.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      30154650
    • drm/i915: Optimistically spin for the request completion · 2def4ad9
      Authored by Chris Wilson
      This provides a nice boost to mesa in swap bound scenarios (as mesa
      throttles itself to the previous frame and given the scenario that will
      complete shortly). It will also provide a good boost to systems running
      with semaphores disabled and so frequently waiting on the GPU as it
      switches rings. In the most favourable of microbenchmarks, this can
      increase performance by around 15% - though in practice improvements
      will be marginal and rarely noticeable.
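
      In outline, the wait path now tries something like this before
      sleeping (a minimal sketch; jiffies, time_after and cpu_relax are the
      usual kernel primitives, while the two helper names are illustrative):

          /* Spin briefly polling the seqno, bounded as in v3 below; on
           * timeout, fall back to the usual interruptible sleep. */
          static int spin_for_request(struct drm_i915_gem_request *req)
          {
                  unsigned long timeout = jiffies + 1;

                  while (!request_completed(req)) {
                          if (time_after(jiffies, timeout))
                                  return -EBUSY;   /* go sleep instead */
                          cpu_relax();
                  }
                  return 0;
          }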
      
      v2: Account for user timeouts
      v3: Limit the spinning to a single jiffie (~1us) at most. On an
      otherwise idle system, there is no scheduler contention and so without a
      limit we would spin until the GPU is ready.
      v4: Drop forcewake - the lazy coherent access doesn't require it, and we
      have no reason to believe that the forcewake itself improves seqno
      coherency - it only adds delay.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
      Cc: Eero Tamminen <eero.t.tamminen@intel.com>
      Cc: "Rantala, Valtteri" <valtteri.rantala@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      2def4ad9
  13. 10 4月, 2015 7 次提交