1. 23 Jun 2015 (18 commits)
    • drm/i915: Update init_context() to take a request structure · 8753181e
      John Harrison committed
      Now that everything above has been converted to use requests, it is possible to
      update init_context() to take a request pointer instead of a ring/context pair.
      
      For: VIZ-5115
      Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
      Reviewed-by: Tomas Elf <tomas.elf@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      8753181e
    • drm/i915: Update deferred context creation to do explicit request management · 76c39168
      John Harrison committed
      In execlist mode, context initialisation is deferred until first use of the
      given context. This is because execlist mode has per ring context state and thus
      many more context storage objects than legacy mode and many are never actually
      used. Previously, the initialisation commands were written to the ring and
      tagged with some random request structure via the OLR. This seemed to be causing
      a null pointer dereference bug under certain circumstances (BZ:88865).
      
      This patch adds explicit request creation and submission to the deferred
      initialisation code path. Thus removing any reliance on or randomness caused by
      the OLR.
      
      Note that it should be possible to move the deferred context creation until even
      later - when the context is actually switched to rather than when it is merely
      validated. This would allow the initialisation to be done within the request of
      the work that is wanting to use the context. Hence, the extra request that is
      created, used and retired just for the context init could be removed completely.
      However, this is left for a follow up patch.
      
      For: VIZ-5115
      Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
      Reviewed-by: Tomas Elf <tomas.elf@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      76c39168
    • drm/i915: Add explicit request management to i915_gem_init_hw() · dc4be607
      John Harrison committed
      Now that a single per ring loop is being done for all the different
      initialisation steps in i915_gem_init_hw(), it is possible to add proper request
      management as well. The last remaining issue is that the context enable call
      eventually ends up within *_render_state_init() and this does its own private
      _i915_add_request() call.
      
      This patch adds explicit request creation and submission to the top level loop
      and removes the add_request() from deep within the sub-functions.
      
      v2: Updated for removal of batch_obj from add_request call in previous patch.
      
      For: VIZ-5115
      Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
      Reviewed-by: Tomas Elf <tomas.elf@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      dc4be607
    • drm/i915: Don't tag kernel batches as user batches · a3fbe05a
      John Harrison committed
      The render state initialisation code does an explicit i915_add_request() call to
      commit the init commands. It was passing in the initialisation batch buffer to
      add_request() as the batch object parameter. However, the batch object entry in
      the request structure (which is all that parameter is used for) is meant for
      keeping track of user generated batch buffers for blame tagging during GPU
      hangs.
      
      This patch clears the batch object parameter so that kernel generated batch
      buffers are not tagged as being user generated.
      
      For: VIZ-5115
      Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
      Reviewed-by: Tomas Elf <tomas.elf@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      a3fbe05a
    • drm/i915: Add flag to i915_add_request() to skip the cache flush · 5b4a60c2
      John Harrison committed
      In order to explicitly track all GPU work (and completely remove the outstanding
      lazy request), it is necessary to add extra i915_add_request() calls to various
      places. Some of these do not need the implicit cache flush done as part of the
      standard batch buffer submission process.
      
      This patch adds a flag to _add_request() to specify whether the flush is
      required or not.
      
      For: VIZ-5115
      Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
      Reviewed-by: Tomas Elf <tomas.elf@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      5b4a60c2
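The idea above can be sketched in plain C. This is a minimal userspace model, not the driver's actual code; the names are illustrative. The point is simply that the flush becomes the caller's choice rather than an unconditional side effect of adding a request.

```c
#include <assert.h>
#include <stdbool.h>

static int flushes_done; /* counts cache flushes performed */

static void demo_flush_caches(void)
{
    flushes_done++; /* stand-in for emitting GPU cache-flush commands */
}

/* Hypothetical sketch: add_request() takes a flag so callers that do not
 * need the implicit cache flush (e.g. kernel-internal requests) can skip it. */
static void demo_add_request(bool flush_caches)
{
    if (flush_caches)
        demo_flush_caches(); /* implicit flush for normal batch submission */
    /* ... emit seqno update and move the request onto tracking lists ... */
}
```

A caller submitting a normal user batch would pass `true`; the new internal call sites added by this series pass `false`.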
    • drm/i915: Update execbuffer_move_to_active() to take a request structure · 8a8edb59
      John Harrison committed
      The plan is to pass requests around as the basic submission tracking structure
      rather than rings and contexts. This patch updates the
      execbuffer_move_to_active() code path.
      
      For: VIZ-5115
      Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
      Reviewed-by: Tomas Elf <tomas.elf@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      8a8edb59
    • drm/i915: Update move_to_gpu() to take a request structure · 535fbe82
      John Harrison committed
      The plan is to pass requests around as the basic submission tracking structure
      rather than rings and contexts. This patch updates the move_to_gpu() code paths.
      
      For: VIZ-5115
      Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
      Reviewed-by: Tomas Elf <tomas.elf@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      535fbe82
    • drm/i915: Update the dispatch tracepoint to use params->request · 95c24161
      John Harrison committed
      Updated a couple of trace points to use the now cached request pointer rather
      than extracting it from the ring.
      
      For: VIZ-5115
      Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
      Reviewed-by: Tomas Elf <tomas.elf@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      95c24161
    • drm/i915: Update alloc_request to return the allocated request · 217e46b5
      John Harrison committed
      The alloc_request() function does not actually return the newly allocated
      request. Instead, it must be pulled from ring->outstanding_lazy_request. This
      patch fixes this so that code can create a request and start using it knowing
      exactly which request it actually owns.
      
      v2: Updated for new i915_gem_request_alloc() scheme.
      
      For: VIZ-5115
      Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
      Reviewed-by: Tomas Elf <tomas.elf@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      217e46b5
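The API shape described above can be sketched as follows. This is a hypothetical simplification, not the real i915_gem_request_alloc() signature: handing the new request back through an out-parameter means the caller owns a known request, instead of fishing it out of ring->outstanding_lazy_request afterwards.

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative stand-in for the driver's request structure. */
struct demo_request {
    unsigned int seqno;
};

static unsigned int demo_next_seqno = 1;

/* Allocate a request and return it directly to the caller. */
static int demo_request_alloc(struct demo_request **req_out)
{
    struct demo_request *req = malloc(sizeof(*req));

    if (!req)
        return -1; /* would be -ENOMEM in kernel code */
    req->seqno = demo_next_seqno++;
    *req_out = req; /* caller now knows exactly which request it owns */
    return 0;
}
```

With this shape, code can create a request and start using it immediately, with no hidden handoff through a lazy-request field.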
    • drm/i915: Simplify i915_gem_execbuffer_retire_commands() parameters · adeca76d
      John Harrison committed
      Shrink the parameter list of i915_gem_execbuffer_retire_commands() to a single
      structure, as everything it requires is available in the execbuff_params object.
      
      For: VIZ-5115
      Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
      Reviewed-by: Tomas Elf <tomas.elf@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      adeca76d
    • drm/i915: Merged the many do_execbuf() parameters into a structure · 5f19e2bf
      John Harrison committed
      The do_execbuf() function takes quite a few parameters. The actual set of
      parameters is going to change with the conversion to passing requests around.
      Further, it is due to grow massively with the arrival of the GPU scheduler.
      
      This patch simplifies the prototype by passing a parameter structure instead.
      Changing the parameter set in the future is then simply a matter of
      adding/removing items to the structure.
      
      Note that the structure does not contain absolutely everything that is passed
      in. This is because the intention is to use this structure more extensively
      later in this patch series and more especially in the GPU scheduler that is
      coming soon. The latter requires hanging on to the structure as the final
      hardware submission can be delayed until long after the execbuf IOCTL has
      returned to user land. Thus it is unsafe to put anything in the structure that
      is local to the IOCTL call itself - such as the 'args' parameter. All entries
      must be copies of data or pointers to structures that are reference counted in
      some way and guaranteed to exist for the duration of the batch buffer's life.
      
      v2: Rebased to newer tree and updated for changes to the command parser.
      Specifically, a code shuffle has required saving the batch start address in the
      params structure.
      
      For: VIZ-5115
      Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
      Reviewed-by: Tomas Elf <tomas.elf@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      5f19e2bf
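The parameter-structure pattern described above can be sketched like this. The field names are illustrative, not the driver's actual i915_execbuffer_params layout. The key rule from the commit message is encoded in the comments: only copies or reference-counted pointers belong in the struct, never data local to the ioctl call.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical parameter block: growing the parameter set later means
 * editing this struct, not every prototype in the call chain. */
struct demo_execbuf_params {
    void *ring;                  /* refcounted engine, outlives the ioctl */
    void *ctx;                   /* refcounted context, outlives the ioctl */
    unsigned long batch_start;   /* batch start address saved in v2 */
    unsigned int dispatch_flags; /* copied by value */
    /* NOTE: nothing ioctl-local (like 'args') may live here, since a GPU
     * scheduler could hold this struct long after the ioctl returns. */
};

static int demo_do_execbuf(const struct demo_execbuf_params *params)
{
    if (!params->ring || !params->ctx)
        return -1; /* invalid submission */
    /* ... validate, pin, and dispatch the batch using params ... */
    return 0;
}
```

A caller fills the struct once and passes a single pointer down the whole submission path.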
    • drm/i915: Set context in request from creation even in legacy mode · 40e895ce
      John Harrison committed
      In execlist mode, the context object pointer is written in to the request
      structure (and reference counted) at the point of request creation. In legacy
      mode, this only happens inside i915_add_request().
      
      This patch updates the legacy code path to match the execlist version. This
      allows all the intermediate code between request creation and request submission
      to get at the context object given only a request structure. Thus negating the
      need to pass context pointers here, there and everywhere.
      
      v2: Moved the context reference so it does not need to be undone if the
      get_seqno() fails.
      
      v3: Fixed execlist mode always hitting a warning about invalid last_contexts
      (which don't exist in execlist mode).
      
      v4: Updated for new i915_gem_request_alloc() scheme.
      
      For: VIZ-5115
      Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
      Reviewed-by: Tomas Elf <tomas.elf@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      40e895ce
    • drm/i915: i915_add_request must not fail · bf7dc5b7
      John Harrison committed
      The i915_add_request() function is called to keep track of work that has been
      written to the ring buffer. It adds epilogue commands to track progress (seqno
      updates and such), moves the request structure onto the right list and other
      such house keeping tasks. However, the work itself has already been written to
      the ring and will get executed whether or not the add request call succeeds. So
      no matter what goes wrong, there isn't a whole lot of point in failing the call.
      
      At the moment, this is fine(ish). If the add request does bail early on and not
      do the housekeeping, the request will still float around in the
      ring->outstanding_lazy_request field and be picked up next time. It means
      multiple pieces of work will be tagged as the same request and the driver can't
      actually wait for the first piece of work until something else has been
      submitted. But it all sort of hangs together.
      
      This patch series is all about removing the OLR and guaranteeing that each piece
      of work gets its own personal request. That means that there is no more
      'hoovering up of forgotten requests'. If the request does not get tracked then
      it will be leaked. Thus the add request call _must_ not fail. The previous patch
      should have already ensured that it _will_ not fail by removing the potential
      for running out of ring space. This patch enforces the rule by actually removing
      the early exit paths and the return code.
      
      Note that if something does manage to fail and the epilogue commands don't get
      written to the ring, the driver will still hang together. The request will be
      added to the tracking lists. And as in the old case, any subsequent work will
      generate a new seqno which will suffice for marking the old one as complete.
      
      v2: Improved WARNings (Tomas Elf review request).
      
      For: VIZ-5115
      Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
      Reviewed-by: Tomas Elf <tomas.elf@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      bf7dc5b7
    • drm/i915: Reserve ring buffer space for i915_add_request() commands · 29b1b415
      John Harrison committed
      It is a bad idea for i915_add_request() to fail. The work will already have been
      sent to the ring and will be processed, but there will not be any tracking or
      management of that work.
      
      The only way the add request call can fail is if it can't write its epilogue
      commands to the ring (cache flushing, seqno updates, interrupt signalling). The
      reasons for that are mostly down to running out of ring buffer space and the
      problems associated with trying to get some more. This patch prevents that
      situation from happening in the first place.
      
      When a request is created, it marks sufficient space as reserved for the
      epilogue commands. Thus guaranteeing that by the time the epilogue is written,
      there will be plenty of space for it. Note that a ring_begin() call is required
      to actually reserve the space (and do any potential waiting). However, that is
      not currently done at request creation time. This is because the ring_begin()
      code can allocate a request. Hence calling begin() from the request allocation
      code would lead to infinite recursion! Later patches in this series remove the
      need for begin() to do the allocate. At that point, it becomes safe for the
      allocate to call begin() and really reserve the space.
      
      Until then, there is a potential for insufficient space to be available at the
      point of calling i915_add_request(). However, that would only be in the case
      where the request was created and immediately submitted without ever calling
      ring_begin() and adding any work to that request, which should never happen. And
      even if it does, and that request happens to hit the tiny window of
      opportunity for failing due to being out of ring space, does it really
      matter? The request wasn't doing anything in the first place.
      
      v2: Updated the 'reserved space too small' warning to include the offending
      sizes. Added a 'cancel' operation to clean up when a request is abandoned. Added
      re-initialisation of tracking state after a buffer wrap to keep the sanity
      checks accurate.
      
      v3: Incremented the reserved size to accommodate Ironlake (after finally
      managing to run on an ILK system). Also fixed missing wrap code in LRC mode.
      
      v4: Added extra comment and removed duplicate WARN (feedback from Tomas).
      
      For: VIZ-5115
      Cc: Tomas Elf <tomas.elf@intel.com>
      Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      29b1b415
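The reservation scheme described above can be modelled in a few lines of userspace C. This is a simplified sketch with assumed names, not the kernel code: request creation holds back enough ring space for the epilogue, so by the time i915_add_request() writes its commands there is guaranteed room.

```c
#include <assert.h>

#define RING_SIZE     64 /* tiny ring for demonstration */
#define RESERVED_SIZE 16 /* space kept back for the epilogue commands */

struct demo_ring {
    unsigned int space;    /* bytes currently free */
    unsigned int reserved; /* bytes promised to the epilogue */
};

/* Ordinary work may only consume space that is not reserved. */
static int demo_ring_begin(struct demo_ring *ring, unsigned int bytes)
{
    if (ring->space - ring->reserved < bytes)
        return -1; /* caller would have to wait for space */
    ring->space -= bytes;
    return 0;
}

/* Request creation marks the epilogue space as reserved up front. */
static void demo_request_alloc(struct demo_ring *ring)
{
    ring->reserved = RESERVED_SIZE;
}

/* The epilogue consumes the reserved bytes, so it can never run out. */
static void demo_add_request(struct demo_ring *ring)
{
    ring->reserved = 0;
    ring->space -= RESERVED_SIZE;
}
```

The real driver additionally has to handle ring wraps and a cancel path for abandoned requests (the v2 notes above); this sketch shows only the core invariant.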
    • drm/i915/gen8: Add WaFlushCoherentL3CacheLinesAtContextSwitch workaround · c82435bb
      Arun Siluvery committed
      In Indirect context w/a batch buffer,
      +WaFlushCoherentL3CacheLinesAtContextSwitch:bdw
      
      v2: Add LRI commands to set/reset bit that invalidates coherent lines,
      update WA to include programming restrictions and exclude CHV as
      it is not required (Ville)
      
      v3: Avoid unnecessary read when it can be done by reading register once (Chris).
      
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Dave Gordon <david.s.gordon@intel.com>
      Signed-off-by: Rafael Barbalho <rafael.barbalho@intel.com>
      Signed-off-by: Arun Siluvery <arun.siluvery@linux.intel.com>
      Acked-by: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      c82435bb
    • drm/i915/gen8: Add WaDisableCtxRestoreArbitration workaround · 7ad00d1a
      Arun Siluvery committed
      In Indirect and Per context w/a batch buffer,
      +WaDisableCtxRestoreArbitration
      
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Dave Gordon <david.s.gordon@intel.com>
      Signed-off-by: Rafael Barbalho <rafael.barbalho@intel.com>
      Signed-off-by: Arun Siluvery <arun.siluvery@linux.intel.com>
      Acked-by: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      7ad00d1a
    • drm/i915/gen8: Re-order init pipe_control in lrc mode · c4db7599
      Arun Siluvery committed
      Some of the WA applied using WA batch buffers perform writes to scratch page.
      In the current flow WA are initialized before scratch obj is allocated.
      This patch reorders intel_init_pipe_control() to have a valid scratch obj
      before we initialize WA.
      
      v2: Check for valid scratch page before initializing WA as some of them
      perform writes to it.
      
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Dave Gordon <david.s.gordon@intel.com>
      Signed-off-by: Michel Thierry <michel.thierry@intel.com>
      Signed-off-by: Arun Siluvery <arun.siluvery@linux.intel.com>
      Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      c4db7599
    • drm/i915/gen8: Add infrastructure to initialize WA batch buffers · 17ee950d
      Arun Siluvery committed
      Some of the WA are to be applied during context save but before restore, and
      some at the end of context save/restore but before executing the instructions
      in the ring. WA batch buffers are created for this purpose, as these WA cannot
      be applied using normal means. Each context has two registers to load the
      offsets of these batch buffers; if they are non-zero, the HW understands that
      it needs to execute these batches.
      
      v1: In this version two separate ring_buffer objects were used to load WA
      instructions for indirect and per context batch buffers and they were part
      of every context.
      
      v2: Chris suggested including an additional page in the context and using it to
      load these WA instead of creating separate objects. This simplifies a lot of
      things as we need not explicitly pin/unpin them. Thomas Daniel further pointed
      out that GuC is planning to use a similar setup to share data between GuC and
      the driver, and WA batch buffers could probably share that page. However, after
      discussions with Dave, who is implementing the GuC changes, he suggested using
      an independent page for these reasons: the GuC area might grow, and these WA
      are initialized only once and never changed afterwards, so we can share them
      across all contexts.
      
      The page is updated with WA during render ring init. This has an advantage of
      not adding more special cases to default_context.
      
      We don't know upfront the number of WA we will apply using these batch buffers.
      For this reason the size was fixed earlier, but that is not a good idea. To fix
      this, the functions that load instructions are modified to report the number of
      commands inserted, and the size is now calculated after the batch is updated. A
      macro is introduced to add commands to these batch buffers; it also checks for
      overflow and returns an error.
      We have a full page dedicated for these WA, so that should be sufficient for a
      good number of WA; anything more means we have major issues.
      The list for Gen8 is small, and the same goes for Gen9; maybe a few more get
      added going forward, but nowhere close to filling the entire page. Chris
      suggested a two-pass approach, but we agreed to go with the single-page setup
      as it is a one-off routine and simpler code wins.
      
      One additional option is the offset field, which is helpful if we would like to
      have multiple batches at different offsets within the page and select them
      based on some criteria. This is not a requirement at this point but could
      help in the future (Dave).
      
      Chris provided some helpful macros and suggestions which further simplified
      the code, they will also help in reducing code duplication when WA for
      other Gen are added. Add detailed comments explaining restrictions.
      Use do {} while(0) for wa_ctx_emit() macro.
      
      (Many thanks to Chris, Dave and Thomas for their reviews and inputs)
      
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Dave Gordon <david.s.gordon@intel.com>
      Signed-off-by: Rafael Barbalho <rafael.barbalho@intel.com>
      Signed-off-by: Arun Siluvery <arun.siluvery@linux.intel.com>
      Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      17ee950d
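The overflow-checked emit macro described above can be illustrated like this. It is a simplified reconstruction with assumed names, not the actual wa_ctx_emit() from intel_lrc.c: the macro appends one command dword to the WA batch, fails instead of writing past the buffer, and uses the do {} while (0) form mentioned in the commit message so it behaves like a statement.

```c
#include <assert.h>
#include <stdint.h>

#define WA_BATCH_DWORDS 8 /* tiny buffer for demonstration; the driver uses a page */

struct demo_wa_ctx {
    uint32_t batch[WA_BATCH_DWORDS];
    uint32_t index; /* number of command dwords emitted so far */
};

/* Append one command, bailing out of the enclosing function on overflow. */
#define demo_wa_ctx_emit(ctx, cmd) do {             \
        if ((ctx)->index >= WA_BATCH_DWORDS)        \
            return -1; /* batch buffer overflow */  \
        (ctx)->batch[(ctx)->index++] = (cmd);       \
    } while (0)

/* Load n commands and report how many were inserted, so the final batch
 * size can be calculated after the fact rather than fixed up front. */
static int demo_fill_wa_batch(struct demo_wa_ctx *ctx, int ncmds)
{
    for (int i = 0; i < ncmds; i++)
        demo_wa_ctx_emit(ctx, 0x1000u + (uint32_t)i); /* arbitrary values */
    return (int)ctx->index;
}
```

Because the macro returns from the caller on overflow, every emit site gets the bounds check without cluttering the workaround lists.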
  2. 27 May 2015 (1 commit)
  3. 21 May 2015 (2 commits)
    • drm/i915: Inline check required for object syncing prior to execbuf · 03ade511
      Chris Wilson committed
      This trims a little overhead from the common case of not needing to
      synchronize between rings.
      
      v2: execlists is special and likes to duplicate code.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      03ade511
    • drm/i915: Implement inter-engine read-read optimisations · b4716185
      Chris Wilson committed
      Currently, we only track the last request globally across all engines.
      This prevents us from issuing concurrent read requests on e.g. the RCS
      and BCS engines (or more likely the render and media engines). Without
      semaphores, we incur costly stalls as we synchronise between rings -
      greatly impacting the current performance of Broadwell versus Haswell in
      certain workloads (like video decode). With the introduction of
      reference counted requests, it is much easier to track the last request
      per ring, as well as the last global write request so that we can
      optimise inter-engine read-read requests (as well as better optimise
      certain CPU waits).
      
      v2: Fix inverted readonly condition for nonblocking waits.
      v3: Handle non-contiguous engine array after waits
      v4: Rebase, tidy, rewrite ring list debugging
      v5: Use obj->active as a bitfield, it looks cool
      v6: Micro-optimise, mostly involving moving code around
      v7: Fix retire-requests-upto for execlists (and multiple rq->ringbuf)
      v8: Rebase
      v9: Refactor i915_gem_object_sync() to allow the compiler to better
      optimise it.
      
      Benchmark: igt/gem_read_read_speed
      hsw:gt3e (with semaphores):
      Before: Time to read-read 1024k:		275.794µs
      After:  Time to read-read 1024k:		123.260µs
      
      hsw:gt3e (w/o semaphores):
      Before: Time to read-read 1024k:		230.433µs
      After:  Time to read-read 1024k:		124.593µs
      
      bdw-u (w/o semaphores):             Before          After
      Time to read-read 1x1:            26.274µs       10.350µs
      Time to read-read 128x128:        40.097µs       21.366µs
      Time to read-read 256x256:        77.087µs       42.608µs
      Time to read-read 512x512:       281.999µs      181.155µs
      Time to read-read 1024x1024:    1196.141µs     1118.223µs
      Time to read-read 2048x2048:    5639.072µs     5225.837µs
      Time to read-read 4096x4096:   22401.662µs    21137.067µs
      Time to read-read 8192x8192:   89617.735µs    85637.681µs
      
      Testcase: igt/gem_concurrent_blit (read-read and friends)
      Cc: Lionel Landwerlin <lionel.g.landwerlin@linux.intel.com>
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> [v8]
      [danvet: s/\<rq\>/req/g]
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      b4716185
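The per-engine tracking this commit introduces can be modelled simply. This is an assumed-names sketch, not the driver's structures: an object remembers the last read request on each engine plus the single last write, so reads on two engines no longer serialise against each other; only a write forces synchronisation.

```c
#include <assert.h>

#define NUM_ENGINES 4 /* e.g. render, blitter, video, video-enhance */

struct demo_obj {
    unsigned int last_read_seqno[NUM_ENGINES]; /* per-engine last read */
    unsigned int last_write_seqno;             /* single global last write */
    unsigned int active;                       /* bitfield of busy engines (v5) */
};

static void demo_obj_read(struct demo_obj *obj, int engine, unsigned int seqno)
{
    obj->last_read_seqno[engine] = seqno;
    obj->active |= 1u << engine;
}

/* A write is also a read, and additionally updates the global write marker. */
static void demo_obj_write(struct demo_obj *obj, int engine, unsigned int seqno)
{
    demo_obj_read(obj, engine, seqno);
    obj->last_write_seqno = seqno;
}

/* A new read only has to wait behind the last write, not behind other
 * engines' reads; that is the read-read optimisation. */
static unsigned int demo_read_sync_target(const struct demo_obj *obj)
{
    return obj->last_write_seqno;
}
```

With only one global "last request" (the old scheme), the second read below would have had to wait for the first; here its sync target stays zero until a write appears.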
  4. 20 May 2015 (1 commit)
  5. 08 May 2015 (1 commit)
  6. 24 Apr 2015 (1 commit)
    • drm/i915: Workaround to avoid lite restore with HEAD==TAIL · 53292cdb
      Michel Thierry committed
      WaIdleLiteRestore is an execlists-only workaround, and requires the driver
      to ensure that any context always has HEAD!=TAIL when attempting lite
      restore.
      
      Add two extra MI_NOOP instructions at the end of each request, but keep
      the request's tail pointing before the MI_NOOPs. We may not need to
      execute them, which is why request->tail is sampled before adding
      these extra instructions.
      
      If we submit a context to the ELSP which has previously been submitted,
      move the tail pointer past the MI_NOOPs. This ensures HEAD!=TAIL.
      
      v2: Move overallocation to gen8_emit_request, and added note about
      sampling request->tail in commit message (Chris).
      
      v3: Remove redundant request->tail assignment in __i915_add_request, in
      lrc mode this is already set in execlists_context_queue.
      Do not add wa implementation details inside gem (Chris).
      
      v4: Apply the wa whenever the req has been resubmitted and update
      comment (Chris).
      
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Thomas Daniel <thomas.daniel@intel.com>
      Signed-off-by: Michel Thierry <michel.thierry@intel.com>
      Signed-off-by: Jani Nikula <jani.nikula@intel.com>
      53292cdb
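The tail bookkeeping behind this workaround can be sketched as follows. This is a userspace model with simplified names, not the driver code: each request keeps two tails, the real one sampled before the padding NOOPs and a workaround tail that includes them, and resubmission uses the padded tail so HEAD can never equal TAIL.

```c
#include <assert.h>

struct demo_req {
    unsigned int tail;    /* sampled before the padding NOOPs */
    unsigned int wa_tail; /* tail including the two MI_NOOP dwords */
};

/* Emit a request: sample the tail first, then over-allocate two MI_NOOPs. */
static void demo_emit_request(struct demo_req *req, unsigned int ring_tail)
{
    req->tail = ring_tail;
    req->wa_tail = ring_tail + 2; /* two extra MI_NOOP dwords */
}

/* Choose which tail to write to the ELSP for this submission: a context
 * submitted for the first time uses the real tail; a resubmitted one is
 * moved past the NOOPs to guarantee HEAD != TAIL for the lite restore. */
static unsigned int demo_elsp_tail(const struct demo_req *req, int resubmitted)
{
    return resubmitted ? req->wa_tail : req->tail;
}
```

The two NOOPs cost nothing on the first submission because execution stops at the unpadded tail; they only come into play when the same context is handed to the ELSP again.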
  7. 14 Apr 2015 (1 commit)
  8. 10 Apr 2015 (12 commits)
    • drm/i915: Remove unused variable from execlists_context_queue · a6631bc8
      Michel Thierry committed
      After commit d7b9ca2f
      ("drm/i915: Remove request->uniq")
      
      dev_priv is no longer needed.
      
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Michel Thierry <michel.thierry@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      a6631bc8
    • drm/i915: Allocate context objects from stolen · 149c86e7
      Chris Wilson committed
      As we never expose context objects directly to userspace, we can forgo
      allocating a first-class GEM object for them and prefer to use the
      limited resource of reserved/stolen memory for them. Note this means
      that their initial contents are undefined.
      
      However, a downside of using stolen objects for execlists is that we
      cannot access the physical address directly (thanks MCH!), which prevents
      their use.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      149c86e7
    • drm/i915: Remove request->uniq · d7b9ca2f
      Chris Wilson committed
      We already assign a unique identifier to every request: seqno. That
      someone felt like adding a second one without even mentioning why and
      tweaking ABI smells very fishy.
      
      Fixes regression from
      commit b3a38998
      Author: Nick Hoath <nicholas.hoath@intel.com>
      Date:   Thu Feb 19 16:30:47 2015 +0000
      
          drm/i915: Fix a use after free, and unbalanced refcounting
      
      v2: Rebase
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Nick Hoath <nicholas.hoath@intel.com>
      Cc: Thomas Daniel <thomas.daniel@intel.com>
      Cc: Daniel Vetter <daniel@ffwll.ch>
      Cc: Jani Nikula <jani.nikula@intel.com>
      [danvet: Fixup because different merge order.]
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      d7b9ca2f
    • drm/i915: Reduce locking in execlist command submission · a6111f7b
      Chris Wilson committed
      This eliminates six needless spin lock/unlock pairs when writing out
      ELSP.
      
      v2: Respin with my preferred colour.
      v3: Mostly back to the original colour
      
      Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> [v1]
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      a6111f7b
    • drm/i915: Remove unused variable in intel_lrc.c · 19ee66af
      Daniel Vetter committed
      Already tagged this one and 0-day builder is failing me.
      Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
      19ee66af
    • drm/i915: Remove vestigal DRI1 ring quiescing code · 595e1eeb
      Chris Wilson committed
      After the removal of DRI1, all access to the rings are through requests
      and so we can always be sure that there is a request to wait upon to
      free up available space. The fallback code only existed so that we could
      quiesce the GPU following unmediated access by DRI1.
      
      v2: Rebase
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      595e1eeb
    • drm/i915: Use the global runtime-pm wakelock for a busy GPU for execlists · 4bb1bedb
      Chris Wilson committed
      When we submit a request to the GPU, we first take the rpm wakelock, and
      only release it once the GPU has been idle for a small period of time
      after all requests have been complete. This means that we are sure no
      new interrupt can arrive whilst we do not hold the rpm wakelock and so
      can drop the individual get/put around every single request inside
      execlists.
      
      Note: to close one potential issue we should mark the GPU as busy
      earlier in __i915_add_request.
      
      To elaborate: The issue is that we emit the irq signalling sequence
      before we grab the rpm reference, which means we could miss the
      resulting interrupt (since that's not set up when suspended). The only
      bad side effect is a missed interrupt, gt mmio writes automatically
      wake up the hw itself. But otoh we have an umbrella rpm reference for
      the entirety of execbuf, as long as that's there we're covered.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      [danvet: Explain a bit more about the add_request issue, which after
      some irc chatting with Chris turns out to not be an issue really.]
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      4bb1bedb
    • drm/i915: Use simpler form of spin_lock_irq(execlist_lock) · b5eba372
      Chris Wilson committed
      We can use the simpler spinlock form to disable interrupts as we are
      always outside of an irq/softirq handler.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      b5eba372
    • drm/i915/gen8: Dynamic page table allocations · d7b2633d
      Michel Thierry committed
      This finishes off the dynamic page table allocations, in the legacy 3-level
      style that already exists. Almost everything has already been set up at this
      point; the patch finishes off the enabling by setting the appropriate
      function pointers.
      
      In LRC mode, contexts need to know the PDPs when they are populated. With
      dynamic page table allocations, these PDPs may not exist yet. Check if
      PDPs have been allocated and use the scratch page if they do not exist yet.
      
      Before submission, update the PDPs in the logical ring context as PDPs
      have been allocated.
      
      v2: Update aliasing/true ppgtt allocate/teardown/clear functions for
      gen 6 & 7.
      
      v3: Rebase.
      
      v4: Remove BUG() from ppgtt_unbind_vma, but keep checking that either
      teardown_va_range or clear_range functions exist (Daniel).
      
      v5: Similar to gen6, in init, gen8_ppgtt_clear_range call is only needed
      for aliasing ppgtt. Zombie tracking was originally added for teardown
      function and is no longer required.
      
      v6: Update err_out case in gen8_alloc_va_range (missed from latest
      rebase).
      
      v7: Rebase after s/page_tables/page_table/.
      
      v8: Updated scratch_pt check after scratch flag was removed in previous
      patch.
      
      v9: Note that lrc mode needs to be updated to support init state without
      any PDP.
      
      v10: Unmap correct page_table in gen8_alloc_va_range's error case, clean
      up gen8_aliasing_ppgtt_init (remove duplicated map), and initialize PTs
      during page table allocation.
      
      v11: Squashed LRC enabling commit, otherwise LRC mode would be left broken
      until it was updated to handle the init case without any PDP.
      
      v12: Do not overallocate the new_pts bitmap, make alloc_gen8_temp_bitmaps
      static, and don't abuse inline functions. (Mika)
      
      Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
      Signed-off-by: Michel Thierry <michel.thierry@intel.com> (v2+)
      Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      d7b2633d
    • M
      drm/i915/gen8: Split out mappings · e5815a2e
      Committed by Michel Thierry
      When we do dynamic page table allocations for gen8, we'll need to have
      more control over how and when we map page tables, similar to gen6.
      In particular, DMA mappings for page directories/tables occur at allocation
      time.
      
      This patch adds the functionality and calls it at init; there should be
      no functional change.
      
      The PDPEs are still a special case for now. We'll need a function for
      that in the future as well.
      
      v2: Handle renamed unmap_and_free_page functions.
      v3: Updated after teardown_va logic was removed.
      v4: Rebase after s/page_tables/page_table/.
      v5: No longer allocate all PDPs in GEN8+ systems with less than 4GB of
      memory, and update populate_lr_context to handle this new case (proper
      tracking will be added later in the patch series).
      v6: Assign lrc page directory pointer addresses using a macro. (Mika)
      
      Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
      Signed-off-by: Michel Thierry <michel.thierry@intel.com> (v2+)
      Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      e5815a2e
    • C
      drm/i915: Split the batch pool by engine · 06fbca71
      Committed by Chris Wilson
      I woke up one morning and found 50k objects sitting in the batch pool
      and every search seemed to iterate the entire list... Painting the
      screen in oils would provide a more fluid display.
      
      One issue with the current design is that we only check for retirements
      on the current ring when preparing to submit a new batch. This means
      that we can have thousands of "active" batches on another ring that we
      have to walk over. The simplest way to avoid that is to split the pools
      per ring and then our LRU execution ordering will also ensure that the
      inactive buffers remain at the front.
      
      v2: execlists still requires duplicate code.
      v3: execlists requires more duplicate code
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      06fbca71
    • A
      drm/i915: Do not set L3-LLC Coherency bit in ctx descriptor · 51847fb9
      Committed by Arun Siluvery
      According to the spec, this is a reserved bit for Gen9+ and should not be set.
      
      Change-Id: I0215fb7057b94139b7a2f90ecc7a0201c0c93ad4
      Signed-off-by: Arun Siluvery <arun.siluvery@linux.intel.com>
      Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      51847fb9
  9. 01 Apr, 2015 (3 commits)