- 03 December 2014, 5 commits
-
-
Committed by John Harrison

There is no longer any need to retrieve a seqno value from an i915_add_request() call. The calling code already knows which request structure is being processed (it can only be ring->OLR). And as the request itself is now used in preference to the basic seqno value, the latter is redundant in this situation.

For: VIZ-4377
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Daniel Vetter

Updated i915_wait_seqno() to take a request structure instead of a seqno value and renamed it accordingly. Internally, it just pulls the seqno out of the request and calls on to __wait_seqno() as before. However, all the code further up the stack is now simplified, as it can pass the request object straight through without having to peek inside.

For: VIZ-4377
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com>
[danvet: Squash in hunk from an earlier patch which was rebased wrongly.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
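A minimal sketch of the shape of this change, using made-up names and deliberately simplified types rather than the real i915 prototypes: callers hand over the request object, and only the lowest layer still extracts the raw seqno.

```c
/* Illustrative only: assumed, simplified types, not the i915 prototypes. */
struct example_request {
	unsigned int seqno;	/* assigned when the request was emitted */
};

/* Stand-in for the pre-existing __wait_seqno() core wait loop. */
static int example_wait_seqno(unsigned int seqno)
{
	(void)seqno;
	return 0;
}

/* The renamed entry point: callers pass the request straight through and
 * the seqno is only looked up here, at the bottom of the stack. */
int example_wait_request(struct example_request *req)
{
	return example_wait_seqno(req->seqno);
}
```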
-
Committed by John Harrison

The OLS value is now obsolete. Exactly the same value is guaranteed to always be available as PLR->seqno. Thus it is safe to remove the OLS completely, and also to rename the PLR to OLR to keep the 'outstanding lazy ...' naming convention valid.

For: VIZ-4377
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by John Harrison

The plan is to use request structures everywhere that seqno values were previously used. This means saving pointers to structures in places that used to be simple integers. In turn, that means that the target structure now needs much more stringent lifetime tracking. That is, it must not be freed while some other random object still holds a pointer to it.

To achieve this tracking, a reference count needs to be added. Whenever a pointer to the structure is saved away, the count must be incremented, and the free must only occur when all references have been released.

For: VIZ-4377
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
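A minimal sketch of this kind of lifetime tracking using the kernel's generic kref primitive; the struct and helper names below are illustrative assumptions, not the exact ones the patch introduces.

```c
#include <linux/kernel.h>
#include <linux/kref.h>
#include <linux/slab.h>

/* Illustrative request object with an embedded reference count. */
struct example_request {
	struct kref ref;
	unsigned int seqno;
};

static void example_request_free(struct kref *ref)
{
	struct example_request *req =
		container_of(ref, struct example_request, ref);

	kfree(req);
}

/* Whenever a pointer to the request is stored somewhere, take a reference... */
static inline void example_request_get(struct example_request *req)
{
	kref_get(&req->ref);
}

/* ...and the object is only freed once the last holder has dropped its own. */
static inline void example_request_put(struct example_request *req)
{
	kref_put(&req->ref, example_request_free);
}
```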
-
Committed by John Harrison

The aim is to replace seqno values with request structures. A step along the way is to switch to using the PLR in preference to the OLS. That requires the PLR to be valid if and only if the OLS is also valid, i.e. the two must be kept in lock step. Then, code which was using the OLS can be safely switched over to using the PLR instead.

For: VIZ-4377
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 20 November 2014, 2 commits
-
-
Committed by Chris Wilson

With the deprecation of UMS, and by association DRI1, we have a tough choice when updating the ring access routines. We either rewrite the DRI1 routines blindly without testing (so likely to be broken) or take the liberty of declaring them no longer supported and remove them entirely. This takes the latter approach.

v2: Also remove the DRI1 sarea updates

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
[danvet: Fix rebase conflicts.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Thomas Daniel

Same as with the context, pinning to GGTT regardless is harmful (it badly fragments the GGTT and can even exhaust it). Unfortunately, this case is also more complex than the previous one because we need to map and access the ringbuffer in several places along the execbuffer path (and we cannot make do by leaving the default ringbuffer pinned, as before). Also, the context object itself contains a pointer to the ringbuffer address that we have to keep updated if we are going to allow the ringbuffer to move around.

v2: Same as with the context pinning, we cannot really do it during an interrupt. Also, pin the default ringbuffer objects regardless (makes error capture a lot easier).

v3: Rebased. Take a pin reference of the ringbuffer for each item in the execlist request queue, because the hardware may still be using the ringbuffer after the MI_USER_INTERRUPT that notifies the seqno update is executed. The ringbuffer must remain pinned until the context save is complete. No longer pin and unpin the ringbuffer in populate_lr_context() - this transient address is meaningless and the pinning can cause a sleep while atomic.

v4: Moved ringbuffer pin and unpin into the lr_context_pin functions. Downgraded pinning check BUG_ONs to WARN_ONs.

v5: Reinstated WARN_ONs for unexpected execlist states. Removed unused variable.

Issue: VIZ-4277
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Signed-off-by: Thomas Daniel <thomas.daniel@intel.com>
Reviewed-by: Akash Goel <akash.goels@gmail.com>
Reviewed-by: Deepak S <deepak.s@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 14 November 2014, 4 commits
-
-
Committed by Michel Thierry

Following the legacy ring submission example, update the ring->init_context() hook to support the execlist submission mode.

v2: Update to use the new workaround macros and clean up unused code. This takes care of both bdw and chv workarounds.

v2.1: Add missing call to init_context() during deferred context creation.

v3: Split init_context (emit) in legacy/lrc modes. For lrc, get the ringbuf from the context (Mika/Daniel).

v4: Merge the init_context interfaces back; the legacy mode only needs the ring, but the lrc mode needs the ring and context (Mika).

Issue: VIZ-4092
Issue: GMIN-3475
Change-Id: Ie3d093b2542ab0e2a44b90460533e2f979788d6c
Cc: Deepak S <deepak.s@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Daniel Vetter <daniel@ffwll.ch>
Signed-off-by: Michel Thierry <michel.thierry@intel.com>
Signed-off-by: Arun Siluvery <arun.siluvery@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
[danvet: Align function parameter lists properly.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
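A sketch of the merged hook shape described in v4, using assumed type names rather than the real i915 structs: the legacy path only consumes the ring argument, while the LRC path also needs the context to find its ringbuffer.

```c
/* Assumed, simplified types; the real i915 structs carry much more state. */
struct example_engine;
struct example_context;

struct example_engine_hooks {
	/*
	 * One hook serves both submission modes: legacy ringbuffer callers
	 * ignore ctx, while the execlists (LRC) path uses it to look up the
	 * per-context ringbuffer it must emit into.
	 */
	int (*init_context)(struct example_engine *ring,
			    struct example_context *ctx);
};
```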
-
Committed by Arun Siluvery

+WaForceEnableNonCoherent:chv
+WaHdcDisableFetchWhenMasked:chv

For: VIZ-4090
Signed-off-by: Arun Siluvery <arun.siluvery@linux.intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Arun Siluvery

WaDisablePartialInstShootdown:chv and WaDisableThreadStallDopClockGating:chv are related to the same register, so combine them.

Signed-off-by: Arun Siluvery <arun.siluvery@linux.intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
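Both workarounds program bits in the same chicken register, so they can be folded into a single masked write. A generic sketch of the pattern follows; the register offset and bit names are placeholders, not the real CHV definitions.

```c
/* Placeholder register and bit definitions, for illustration only. */
#define EXAMPLE_ROW_CHICKEN				0xe4f0
#define   EXAMPLE_PARTIAL_INST_SHOOTDOWN_DISABLE	(1u << 8)
#define   EXAMPLE_STALL_DOP_GATING_DISABLE		(1u << 5)

/*
 * Masked registers carry a write-enable mask in their upper 16 bits, so
 * several bits of the same register can be enabled with a single write.
 */
static inline unsigned int masked_bit_enable(unsigned int bits)
{
	return (bits << 16) | bits;
}

/* One combined value instead of two separate writes to the same register. */
static unsigned int example_chv_row_chicken_wa(void)
{
	return masked_bit_enable(EXAMPLE_PARTIAL_INST_SHOOTDOWN_DISABLE |
				 EXAMPLE_STALL_DOP_GATING_DISABLE);
}
```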
-
Committed by Arun Siluvery

-WaDisableDopClockGating:chv
-WaDisableSamplerPowerBypass:chv
-WaDisableGunitClockGating:chv
-WaDisableFfDopClockGating:chv
-WaDisableDopClockGating:chv

v2: Remove pre-production WA instead of restricting them based on revision id (Ville)

For: VIZ-4090
Signed-off-by: Arun Siluvery <arun.siluvery@linux.intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 05 November 2014, 1 commit
-
-
Committed by John Harrison

If a ring failed to initialise for any reason, then the error path would try to clean up all rings, including those that had not yet been allocated. The ring clean-up code did check that the ring was valid before starting its work. Unfortunately, that check came after it had already dereferenced the ring to obtain a dev_private pointer.

Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
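A generic sketch of the bug pattern being fixed, with illustrative names: the cleanup helper must not reach through the ring pointer for dev_private until it has confirmed the ring was actually set up.

```c
struct example_dev {
	void *dev_private;
};

struct example_ring {
	struct example_dev *dev;	/* stays NULL until the ring is initialised */
	void *buffer;
};

void example_ring_cleanup(struct example_ring *ring)
{
	void *dev_priv;

	/* Validity check first: an unallocated ring has nothing to clean. */
	if (!ring->dev)
		return;

	/* Only now is it safe to dereference through the ring. */
	dev_priv = ring->dev->dev_private;
	(void)dev_priv;
	/* ... tear down ring->buffer etc. ... */
}
```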
-
- 24 October 2014, 3 commits
-
-
Committed by Arun Siluvery

The number of DWords should be even when doing ring emits, as command sequences require QWord alignment.

There was some discussion about the maximum length of the MI_LRI command. Quoting Mika:

"I did some test with bdw:

The maximum is 128 writes, resulting the 8 bit length field of the command being 0xff, thus following the spec. The 128'th write went through.

Perhaps the max command length is then less in older gens?

Perhaps WARN_ON(x > 128) in MI_LOAD_REGISTER_IMM would be in place but one needs minor tweak to command parser a bit also then. #define I915_MAX_WA_REGS 16 keeps us safe for now at least."

Ville commented that on pre-gen6 the length field seems to be restricted to 0x3f though. So for all cases we should be ok.

v2: Use the LRI variant that can write multiple regs in one go (Damien). We can simply insert one NOP at the end instead of one per register write.

Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Arun Siluvery <arun.siluvery@linux.intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
[danvet: Add a summary of the MI_LRI length discussion.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
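A sketch of the padding rule using made-up emit helpers (the real driver emits through intel_ring_begin()/intel_ring_emit()): a multi-register LRI is one header DWord plus two DWords per register, which is always an odd total, so a single trailing NOP restores QWord alignment.

```c
/* Illustrative sketch; placeholder names, not the real i915 emit helpers. */
#define EXAMPLE_MI_NOOP				0u
#define EXAMPLE_MI_LOAD_REGISTER_IMM(n)		((0x22u << 23) | (2u * (n) - 1u))

struct example_wa_pair { unsigned int addr, value; };

static unsigned int example_ring[64];
static unsigned int example_pos;

static void emit_dword(unsigned int dw)
{
	example_ring[example_pos++] = dw;	/* stand-in for intel_ring_emit() */
}

/* count must be at least 1. */
static void example_emit_wa_lri(const struct example_wa_pair *wa, unsigned int count)
{
	unsigned int i;

	/* 1 header DWord + 2 per register is always an odd total... */
	emit_dword(EXAMPLE_MI_LOAD_REGISTER_IMM(count));
	for (i = 0; i < count; i++) {
		emit_dword(wa[i].addr);
		emit_dword(wa[i].value);
	}

	/* ...so one NOP at the end (not one per register) keeps the number
	 * of DWords even, i.e. QWord aligned. */
	emit_dword(EXAMPLE_MI_NOOP);
}
```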
-
Committed by Mika Kuoppala

If we build the workaround list during ring initialization and decouple it from the actual writing of the values, we gain the ability to decide where and how we want to apply the values. The advantage of this will become clearer when we need to initialize workarounds on older gens, where it is not possible to write all the registers through ring LRIs.

v2: rebase on newest bdw workarounds

Cc: Arun Siluvery <arun.siluvery@linux.intel.com>
Cc: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Arun Siluvery <arun.siluvery@linux.intel.com>
[danvet: Resolve tiny conflict in comments and ocd alignments a bit.]
[danvet2: Remove bogus force_wake_get call spotted by Paulo and QA.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
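A rough sketch of the build/apply split, with assumed names rather than the driver's actual ones: the table of register, value and mask triplets is recorded once at ring-init time, and a separate apply step replays it through whichever write mechanism the platform supports.

```c
#define EXAMPLE_MAX_WA_REGS 16

/* One recorded workaround: which register, what to write into it, and which
 * bits of it the write is allowed to touch. */
struct example_wa_reg {
	unsigned int addr;
	unsigned int value;
	unsigned int mask;
};

struct example_workarounds {
	struct example_wa_reg reg[EXAMPLE_MAX_WA_REGS];
	unsigned int count;
};

typedef void (*example_apply_fn)(unsigned int addr, unsigned int mask,
				 unsigned int value);

/*
 * Apply phase: the recorded table can be replayed through whatever mechanism
 * fits the platform (ring LRIs on newer gens, direct MMIO elsewhere), and can
 * be replayed again after a GPU reset or resume.
 */
static void example_apply_workarounds(const struct example_workarounds *w,
				      example_apply_fn apply)
{
	unsigned int i;

	for (i = 0; i < w->count; i++)
		apply(w->reg[i].addr, w->reg[i].mask, w->reg[i].value);
}
```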
-
Committed by Rodrigo Vivi

Let's clean this up a bit.

v2: Rebase after Mika's other patch that removed some BDW production workarounds.

v3: Removed stepping info.

Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 30 September 2014, 1 commit
-
-
Committed by Rodrigo Vivi

This WA affects BDW GT3 pre-production steppings.

Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
[danvet: Don't mention steppings ...]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 29 September 2014, 1 commit
-
-
Committed by Rodrigo Vivi

The sw cache clean on BDW is a temporary workaround because we cannot set cache clean on the blt ring without risk of hangs, so we are doing the cache clean in sw. However, we are doing much more than needed, and not only when using the blt ring. So, with this extra w/a we minimize the amount of cache cleans and call it only in the same cases in which it was being called on gen7.

The traditional FBC cache clean happens over LRI on the BLT ring when there is a frontbuffer touch. Frontbuffer tracking sets the fbc_dirty variable to tell the BLT flush that it must clean the FBC cache. fbc.need_sw_cache_clean works in the opposite information direction of ring->fbc_dirty, telling software frontbuffer tracking to perform the cache clean on the sw side.

v2: Clean it a little bit and fully check for Broadwell instead of gen8.

v3: Rebase after frontbuffer reorganization.

v4: Wiggle confused me. So fixing v3!

Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 24 September 2014, 2 commits
-
-
Committed by Imre Deak

The following commit sets the AsyncFlip performance mode for everything above Gen6:

    commit 4790cb36b3eede8fb0cca529dc1d31b9936fa24b
    Author: Chris Wilson <chris@chris-wilson.co.uk>
    Date:   Sun Jan 20 16:11:20 2013 +0000

        drm/i915: Disable AsyncFlip performance optimisations

Starting from Gen9 the MI_MODE register layout changes and no longer includes the above bit.

Reviewed-by: Thomas Wood <thomas.wood@intel.com>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
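A sketch of the resulting gating; the register offset, bit position and helper below are placeholders rather than the real i915 macros.

```c
/* Placeholder definitions for illustration only. */
#define EXAMPLE_MI_MODE				0x0209c
#define EXAMPLE_ASYNC_FLIP_PERF_DISABLE		(1u << 14)

/* Stand-in for a masked MMIO register write. */
static void example_write_masked(unsigned int reg, unsigned int bits)
{
	(void)reg;
	(void)((bits << 16) | bits);
}

static void example_disable_async_flip_perf(int gen)
{
	/* The lower bound is whatever the original change used; the point
	 * here is the new upper bound: from Gen9 the MI_MODE layout changes
	 * and the bit no longer exists, so the write must be skipped. */
	if (gen < 9)
		example_write_masked(EXAMPLE_MI_MODE,
				     EXAMPLE_ASYNC_FLIP_PERF_DISABLE);
}
```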
-
Committed by Mika Kuoppala

Remove these workarounds, as they have been fixed in production hw and hurt performance if applied.

v2: adjust requested ring space (Ville)

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=83482
Tested-by: zhoujian <jianx.zhou@intel.com>
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 19 September 2014, 2 commits
-
-
Committed by Daniel Vetter

Yet another place that wasn't properly transformed when implementing S0ix. While at it, convert the checks to WARN_ON on gen5+ (since we don't have UMS potentially doing stupid things on those platforms). And also add the corresponding checks to the put functions (again with a WARN_ON) for gen5+.

v2: Drop the WARNINGs in the irq_put functions (including the existing one for vebox); Chris convinced me that they're not that terribly useful.

v3: Don't forget about the execlist code.

Cc: Imre Deak <imre.deak@intel.com>
Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
-
Committed by Chris Wilson

gen6 and earlier conflate address space selection (ppgtt vs ggtt) with the security bit (i.e. only privileged batches were allowed to run from ggtt). From Haswell only, you are able to select the security bit separately from the address space - and we always requested to use ppgtt. This breaks the golden render state batch execution with full-ppgtt, as that is only present in the global GTT, and more generally any secure batch that is not colocated in the ppgtt and ggtt. So we need to disable the use of the ppgtt selector bit for secure batches, or else we hang immediately upon boot and thence after every GPU reset...

v2: Only HSW differentiates between secure dispatch and ggtt, so simply ignore the differentiation and always use secure==ggtt.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
[danvet: Rectify commit message as noted by Chris.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 15 September 2014, 1 commit
-
-
Committed by Chris Wilson

One small change I forgot to make in

    commit c4d69da1
    Author: Chris Wilson <chris@chris-wilson.co.uk>
    Date:   Mon Sep 8 14:25:41 2014 +0100

        drm/i915: Evict CS TLBs between batches

was to update the copy width for the compact BLT copy instruction.

Reported-by: Thomas Richter <thor@math.tu-berlin.de>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Thomas Richter <thor@math.tu-berlin.de>
Cc: Jani Nikula <jani.nikula@intel.com>
Cc: stable@vger.kernel.org
Tested-by: Thomas Richter <thor@math.tu-berlin.de>
Acked-by: Daniel Vetter <daniel@ffwll.ch>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
-
- 08 September 2014, 1 commit
-
-
Committed by Chris Wilson

Running igt, I was encountering the invalid TLB bug on my 845g, despite it using the CS workaround. Examining the w/a buffer in the error state showed that the copy from the user batch into the workaround itself was suffering from the invalid TLB bug (the first cacheline was broken, with the first two words reversed). Time to try a fresh approach.

This extends the workaround to write into each page of our scratch buffer in order to overflow the TLB and evict the invalid entries. This could be refined to only do so after we update the GTT, but for simplicity, we do it before each batch. I suspect this supersedes our current workaround, but for safety keep doing both.

v2: The magic number shall be 2.

This doesn't conclusively prove that it is the mythical TLB bug we've been trying to work around for so long. That it requires touching a number of pages to prevent the corruption indicates to me that it is TLB related, but the corruption (the reversed cacheline) is more subtle than a TLB bug, where we would expect it to read the wrong page entirely. Oh well, it prevents a reliable hang for me and so probably for others as well.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: stable@vger.kernel.org
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
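A CPU-side sketch of the touch-every-page idea; the real workaround emits the writes from the command streamer before each batch rather than from the CPU, and the sizes and names here are illustrative.

```c
#define EXAMPLE_PAGE_SIZE	4096u
#define EXAMPLE_SCRATCH_PAGES	64u	/* illustrative scratch object size */

/*
 * Touch one DWord in every page of the scratch buffer: each touch forces the
 * TLB to load a fresh, valid entry for that page, pushing out stale entries
 * before the real batch buffer runs.
 */
static void example_evict_cs_tlbs(volatile unsigned int *scratch)
{
	const unsigned int dwords_per_page =
		EXAMPLE_PAGE_SIZE / sizeof(*scratch);
	unsigned int page;

	for (page = 0; page < EXAMPLE_SCRATCH_PAGES; page++)
		scratch[page * dwords_per_page] = 0;
}
```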
-
- 04 September 2014, 1 commit
-
-
Committed by Chris Wilson

Ville found an old w/a documented for g4x that suggested that we need to reset the HEAD after writing START. This is a useful fixup for some of the g4x ring initialisation woes, but as usual, not all.

v2: Do the rewrite unconditionally anyway

References: https://bugs.freedesktop.org/show_bug.cgi?id=76554
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
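A sketch of the ordering, using placeholder MMIO helpers and register offsets rather than the driver's I915_WRITE_* macros: HEAD is written once more after START so the hardware cannot be left with a stale head pointer.

```c
/* Placeholder register offsets (relative to the ring's mmio base). */
#define EXAMPLE_RING_TAIL	0x30
#define EXAMPLE_RING_HEAD	0x34
#define EXAMPLE_RING_START	0x38

/* Stand-in for an MMIO register write. */
static void example_mmio_write(unsigned int reg, unsigned int val)
{
	(void)reg;
	(void)val;
}

static void example_ring_enable(unsigned int mmio_base, unsigned int gtt_offset)
{
	example_mmio_write(mmio_base + EXAMPLE_RING_HEAD, 0);
	example_mmio_write(mmio_base + EXAMPLE_RING_TAIL, 0);
	example_mmio_write(mmio_base + EXAMPLE_RING_START, gtt_offset);

	/* g4x w/a, applied unconditionally (v2): rewrite HEAD after START so
	 * a stale head value cannot survive the (re)initialisation. */
	example_mmio_write(mmio_base + EXAMPLE_RING_HEAD, 0);
}
```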
-
- 03 September 2014, 7 commits
-
-
Committed by Damien Lespiau

Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Reviewed-by: Arun Siluvery <arun.siluvery@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Damien Lespiau

If we happen to emit more than I915_MAX_WA_REGS workarounds, we will currently discard them without even emitting the LRI. That is not really what we want, so warn loudly.

Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Reviewed-by: Arun Siluvery <arun.siluvery@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Damien Lespiau

When entering intel_ring_emit_wa() with num_wa_regs equal to I915_MAX_WA_REGS, we end up indexing the intel_wa_regs array beyond its allocation. Fix the check, then.

Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Reviewed-by: Arun Siluvery <arun.siluvery@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
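A generic sketch of the two fixes above, with placeholder names: reject index == MAX as well as index > MAX, and complain loudly instead of silently dropping the entry.

```c
#include <stdio.h>

#define EXAMPLE_MAX_WA_REGS 16

struct example_wa_entry { unsigned int addr, value; };

static struct example_wa_entry example_wa_regs[EXAMPLE_MAX_WA_REGS];
static unsigned int example_num_wa_regs;

static int example_record_wa(unsigned int addr, unsigned int value)
{
	/*
	 * ">=" rather than ">": index MAX is already one slot past the end
	 * of the array. And if we ever get here, say so instead of silently
	 * discarding the workaround.
	 */
	if (example_num_wa_regs >= EXAMPLE_MAX_WA_REGS) {
		fprintf(stderr, "WA table full, dropping reg %#x\n", addr);
		return -1;
	}

	example_wa_regs[example_num_wa_regs].addr = addr;
	example_wa_regs[example_num_wa_regs].value = value;
	example_num_wa_regs++;
	return 0;
}
```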
-
Committed by Ville Syrjälä

Follow the BDW example and apply the workarounds touching registers which are saved in the context image through LRIs in the new ring->init_context() hook. This makes Mesa much happier and e.g. glxgears doesn't hang after the first frame.

Cc: Arun Siluvery <arun.siluvery@linux.intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
[danvet: Add missing wa table initialization to avoid a functional conflict with Arun's wa table debugfs support.]
Reviewed-by: "Barbalho, Rafael" <rafael.barbalho@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Arun Siluvery

The workarounds that are applied are exported to a debugfs file; this is used to verify their state after a test case (reset or suspend/resume etc.). This patch is only required to support i-g-t.

Signed-off-by: Arun Siluvery <arun.siluvery@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Arun Siluvery

For BDW, workarounds are currently initialized in init_clock_gating(), but they are lost during reset, suspend/resume etc.; this patch moves the WAs that are part of the register state context to the render ring init function, otherwise the default context ends up with incorrect values as they don't get initialized until the init_clock_gating function runs.

v2: Add workarounds to the golden render state. This method has its own issues: first of all, the state is different for each gen and it is generated using a tool, so adding new workarounds and maintaining them across gens is not a straightforward process.

v3: Use LRIs to emit these workarounds (Ville). Instead of modifying the golden render state, the same LRIs are emitted from within the driver.

v4: Use abstract names when exporting gen-specific routines (Chris).

For: VIZ-4092
Signed-off-by: Arun Siluvery <arun.siluvery@linux.intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Rodrigo Vivi

According to the spec, FBC on BDW and HSW are identical, without any gaps. So let's copy the nuke and let FBC really start compressing stuff. Without this patch we can verify with false color that nothing is being compressed. With the nuke in place and false color, it is possible to see the false color debug output.

Unfortunately, on some rings like the BCS on BDW we have to avoid bits 22:18 on LRIs due to a high risk of hangs. So, when using the BLT ring for frontbuffer rendering, the cache would never be cleaned and FBC would stop compressing the buffer. One alternative is to do the cache clean in the software frontbuffer tracking.

v2: Fix rebase conflict.

v3: Do not clean the cache on the BCS ring. Instead use sw frontbuffer tracking.

Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 20 August 2014, 1 commit
-
-
Committed by Oscar Mateo

If we reset a ring after a hang, we have to make sure that we clear out all queued Execlists requests.

v2: The ring is, at this point, already being correctly re-programmed for Execlists, and the hangcheck counters cleared.

v3: Daniel suggests dropping the "if (execlists)" because the Execlists queue should be empty in legacy mode (which is true, if we do the INIT_LIST_HEAD).

v4: Do the pending intel_runtime_pm_put

Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
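A generic sketch of the cleanup described, using the kernel's standard list helpers but assumed struct, lock and queue names: walk the queued submissions under the relevant lock, unlink them, and leave the queue re-initialised and empty.

```c
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

/* Illustrative queued-submission item; the real i915 struct differs. */
struct example_queued_item {
	struct list_head link;
};

static void example_clear_submission_queue(struct list_head *queue,
					   spinlock_t *lock)
{
	struct example_queued_item *item, *next;
	unsigned long flags;

	spin_lock_irqsave(lock, flags);
	list_for_each_entry_safe(item, next, queue, link) {
		list_del(&item->link);
		kfree(item);	/* drop whatever reference the queue held */
	}
	/* Redundant after the loop, but keeping the queue validly empty is
	 * also what lets the legacy (non-execlists) path skip any special
	 * casing. */
	INIT_LIST_HEAD(queue);
	spin_unlock_irqrestore(lock, flags);
}
```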
-
- 13 August 2014, 1 commit
-
-
Committed by Daniel Vetter

A subsequent patch will no longer initialize the aliasing ppgtt if we have full ppgtt enabled, since we simply don't need that any more. Unfortunately a few places check for the aliasing ppgtt instead of checking for ppgtt in general. Fix them up.

One special case are the gtt offset and size macros, which have some code to remap the aliasing ppgtt to the global gtt. The aliasing ppgtt is _not_ a logical address space, so passing that in as the vm is plain and simple a bug. So just WARN about it and carry on - we have a graceful fall-through anyway if we can't find the vma.

Reviewed-by: Michel Thierry <michel.thierry@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 12 August 2014, 2 commits
-
-
Committed by Oscar Mateo

Same as the legacy-style ring->flush.

v2: The BSD invalidate bit still exists in GEN8! Add it for the VCS rings (but still consolidate the blt and bsd ring flushes into one). This was noticed by Brad Volkin.

v3: The command for BSD and for other rings is slightly different: make it exactly the same as in gen6_ring_flush + gen6_bsd_ring_flush

Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
[danvet: Checkpatch.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Oscar Mateo

Well, new-ish: if all this code looks familiar, that's because it's a clone of the existing submission mechanism (with some modifications here and there to adapt it to LRCs and Execlists).

And why did we do this instead of reusing code, one might wonder? Well, there are some fears that the differences are big enough that they will end up breaking all platforms. Also, Execlists offer several advantages, like control over when the GPU is done with a given workload, that can help simplify the submission mechanism, no doubt. I am interested in getting Execlists to work first and foremost, but in the future this parallel submission mechanism will help us fine-tune the mechanism without affecting old gens.

v2: Pass the ringbuffer only (whenever possible).

Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
[danvet: Appease checkpatch. Again. And drop the legacy sarea gunk that somehow crept in.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 11 August 2014, 5 commits
-
-
Committed by Oscar Mateo

Logical rings do not need most of the initialization their legacy ringbuffer counterparts do: we just need the pipe control object for the render ring, to enable Execlists on the hardware, and a few workarounds.

v2: Squash with: "drm/i915: Extract pipe control fini & make init outside accesible".

Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
[danvet: Make checkpatch happy.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Oscar Mateo

Allocate and populate the default LRC for every ring, call gen-specific init/cleanup, init/fini the command parser and set the status page (now inside the LRC object). These are things all engines/rings have in common.

Stopping the ring before cleanup and initializing the seqnos is left as a TODO task (we need more infrastructure in place before we can achieve this).

v2: Check the ringbuffer backing obj for ring_is_initialized, instead of the context backing obj (similar, but not exactly the same).

Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Daniel Vetter

Any given ringbuffer is unequivocally tied to one context and one engine. By setting the appropriate pointers to them, the ringbuffer struct holds all the information you might need to submit a workload for processing, Execlists style.

v2: Drop ring->ctx since that looks terribly ill-defined for legacy ringbuffer submission.

Signed-off-by: Oscar Mateo <oscar.mateo@intel.com> (v1)
Acked-by: Damien Lespiau <damien.lespiau@intel.com> (v2)
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
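A sketch of the idea with assumed struct and field names, not the real i915 layout: the ringbuffer carries a back-pointer to the engine it feeds, while (per v2) the context association is left to the logical ring context that owns the ringbuffer rather than stored in the ringbuffer itself.

```c
/* Illustrative structs only; the real i915 types carry much more state. */
struct example_engine {
	const char *name;
};

struct example_ringbuffer {
	void *virtual_start;		/* CPU mapping of the ring pages */
	unsigned int head, tail, size;

	/*
	 * Back-pointer to the engine this ring feeds. In Execlists mode each
	 * logical ring context owns one such ringbuffer per engine, so the
	 * (context, engine) pair is implied by which ringbuffer you hold.
	 */
	struct example_engine *engine;
};
```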
-
Committed by Oscar Mateo

As we have said a couple of times by now, logical ring contexts have their own ringbuffers: not only the backing pages, but the whole management struct.

In a previous version of the series, this was achieved with two separate patches:
drm/i915/bdw: Allocate ringbuffer backing objects for default global LRC
drm/i915/bdw: Allocate ringbuffer for user-created LRCs

Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Chris Wilson

During ring initialisation, we sometimes observe, though not in production hardware, that the idle flag is not set even though the ring is empty. Double check before giving up.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Damien Lespiau <damien.lespiau@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
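A sketch of the defensive double check; the register bit, helpers and timeout value below are placeholders rather than the real i915 code.

```c
#include <stdbool.h>

#define EXAMPLE_MODE_IDLE	(1u << 9)	/* placeholder bit position */

/* Stubs standing in for an MMIO read and a poll-with-timeout helper. */
static unsigned int example_read_mi_mode(void)
{
	return EXAMPLE_MODE_IDLE;
}

static bool example_wait_for_idle(int timeout_ms)
{
	(void)timeout_ms;
	return (example_read_mi_mode() & EXAMPLE_MODE_IDLE) != 0;
}

static int example_stop_ring(void)
{
	if (!example_wait_for_idle(1000)) {
		/* The idle flag can lag behind an empty ring: give the
		 * hardware one more read before declaring failure. */
		if (!(example_read_mi_mode() & EXAMPLE_MODE_IDLE))
			return -1;
	}
	return 0;
}
```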
-