- 10 April 2015, 5 commits
-
-
Submitted by Chris Wilson
We can use the simpler spinlock form to disable interrupts as we are always outside of an irq/softirq handler. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
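For illustration, a minimal sketch of the simplification (the lock name here is illustrative, not necessarily the one touched by the patch): when a lock is never taken from irq/softirq context, the `_irq` variants can replace the flag-saving `_irqsave`/`_irqrestore` pair.

```c
unsigned long flags;

/* Before: saves and restores the interrupt flags around the critical section. */
spin_lock_irqsave(&dev_priv->irq_lock, flags);
/* ... critical section ... */
spin_unlock_irqrestore(&dev_priv->irq_lock, flags);

/* After: valid because this path never runs in irq/softirq context,
 * so interrupts are known to be enabled on entry. */
spin_lock_irq(&dev_priv->irq_lock);
/* ... critical section ... */
spin_unlock_irq(&dev_priv->irq_lock);
```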
-
Submitted by Michel Thierry
This finishes off the dynamic page table allocations, in the legacy 3-level style that already exists. Almost everything has already been set up to this point; the patch finishes off the enabling by setting the appropriate function pointers. In LRC mode, contexts need to know the PDPs when they are populated. With dynamic page table allocations, these PDPs may not exist yet. Check if PDPs have been allocated and use the scratch page if they do not exist yet. Before submission, update the PDPs in the logical ring context as PDPs have been allocated. v2: Update aliasing/true ppgtt allocate/teardown/clear functions for gen 6 & 7. v3: Rebase. v4: Remove BUG() from ppgtt_unbind_vma, but keep checking that either teardown_va_range or clear_range functions exist (Daniel). v5: Similar to gen6, in init, gen8_ppgtt_clear_range call is only needed for aliasing ppgtt. Zombie tracking was originally added for the teardown function and is no longer required. v6: Update err_out case in gen8_alloc_va_range (missed from latest rebase). v7: Rebase after s/page_tables/page_table/. v8: Updated scratch_pt check after scratch flag was removed in previous patch. v9: Note that lrc mode needs to be updated to support init state without any PDP. v10: Unmap correct page_table in gen8_alloc_va_range's error case, clean up gen8_aliasing_ppgtt_init (remove duplicated map), and initialize PTs during page table allocation. v11: Squashed LRC enabling commit, otherwise LRC mode would be left broken until it was updated to handle the init case without any PDP. v12: Do not overallocate new_pts bitmap, make alloc_gen8_temp_bitmaps static and don't abuse inline functions. (Mika) Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Signed-off-by: Ben Widawsky <ben@bwidawsk.net> Signed-off-by: Michel Thierry <michel.thierry@intel.com> (v2+) Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Michel Thierry
When we do dynamic page table allocations for gen8, we'll need to have more control over how and when we map page tables, similar to gen6. In particular, DMA mappings for page directories/tables occur at allocation time. This patch adds the functionality and calls it at init, which should have no functional change. The PDPEs are still a special case for now. We'll need a function for that in the future as well. v2: Handle renamed unmap_and_free_page functions. v3: Updated after teardown_va logic was removed. v4: Rebase after s/page_tables/page_table/. v5: No longer allocate all PDPs in GEN8+ systems with less than 4GB of memory, and update populate_lr_context to handle this new case (proper tracking will be added later in the patch series). v6: Assign lrc page directory pointer addresses using a macro. (Mika) Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Signed-off-by: Ben Widawsky <ben@bwidawsk.net> Signed-off-by: Michel Thierry <michel.thierry@intel.com> (v2+) Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
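As a rough sketch of what "DMA mappings occur at allocation time" means in practice (the structure and field names `pt->page`/`pt->daddr` are assumptions for illustration, not the exact driver code):

```c
/* Sketch: allocate a page-table page and map it for DMA immediately,
 * so the bus address is available whenever the table is later wired up. */
pt->page = alloc_page(GFP_KERNEL | __GFP_ZERO);
if (!pt->page)
	return -ENOMEM;

pt->daddr = dma_map_page(&dev->pdev->dev, pt->page, 0, PAGE_SIZE,
			 DMA_BIDIRECTIONAL);
if (dma_mapping_error(&dev->pdev->dev, pt->daddr)) {
	__free_page(pt->page);
	return -ENOMEM;
}
```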
-
Submitted by Chris Wilson
I woke up one morning and found 50k objects sitting in the batch pool and every search seemed to iterate the entire list... Painting the screen in oils would provide a more fluid display. One issue with the current design is that we only check for retirements on the current ring when preparing to submit a new batch. This means that we can have thousands of "active" batches on another ring that we have to walk over. The simplest way to avoid that is to split the pools per ring and then our LRU execution ordering will also ensure that the inactive buffers remain at the front. v2: execlists still requires duplicate code. v3: execlists requires more duplicate code. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
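A sketch of the structural change described above (the pool placement follows the commit; helper usage is simplified and illustrative): the pool moves from the device into each engine, so a search only walks buffers whose retirement is driven by that engine's own LRU ordering.

```c
/* Sketch: one batch pool per engine instead of one device-global pool. */
struct intel_engine_cs {
	/* ... */
	struct i915_gem_batch_pool batch_pool;
};

/* The command-parser shadow batch is now taken from the submitting ring's pool. */
shadow_batch_obj = i915_gem_batch_pool_get(&ring->batch_pool,
					    batch_obj->base.size);
```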
-
Submitted by Arun Siluvery
According to the spec this is a reserved bit for Gen9+ and should not be set. Change-Id: I0215fb7057b94139b7a2f90ecc7a0201c0c93ad4 Signed-off-by: Arun Siluvery <arun.siluvery@linux.intel.com> Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 01 April 2015, 3 commits
-
-
Submitted by John Harrison
The legacy and LRC code paths have an almost identical procedure for waiting for space in the ring buffer. They both search for a request in the free list that will advance the tail to a point where sufficient space is available. They then wait for that request, retire it and recalculate the free space value. Unfortunately, a bug in the LRC side meant that the resulting free space might not be as large as expected and indeed, might not be sufficient. This is because it was testing against the value of request->tail not request->postfix. Whereas, when a request is retired, ringbuf->tail is updated to req->postfix not req->tail. Another significant difference between the two is that the LRC one did not trust the wait for request to work! It redid the 'is there enough space available' test and would fail the call if insufficient. Whereas, the legacy version just said 'return 0' - it assumed the preceding code works. This difference meant that the LRC version still worked even with the bug - it just fell back to the polling wait path. For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Thomas Daniel <thomas.daniel@intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
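A simplified sketch of the corrected space calculation (the surrounding function is heavily condensed): retiring a request advances ringbuf->tail to request->postfix, so that is the value the search must test against.

```c
/* Sketch: find the oldest request whose retirement frees enough space. */
list_for_each_entry(request, &ring->request_list, list) {
	space = __intel_ring_space(request->postfix,	/* not request->tail */
				   ringbuf->tail, ringbuf->size);
	if (space >= bytes)
		break;
}

ret = i915_wait_request(request);
if (ret)
	return ret;

/* Retirement advances ringbuf->tail to the request's postfix. */
i915_gem_retire_requests_ring(ring);

ringbuf->space = space;	/* trust the wait: no re-test needed */
return 0;
```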
-
Submitted by John Harrison
The request allocation code is largely duplicated between legacy mode and execlist mode. The actual difference between the two versions of the code is pretty minimal. This patch moves the common code out into a separate function. This is then called by the execution-specific version prior to setting up the one different value. For: VIZ-5190 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by John Harrison
The only usage of intel_logical_ring_begin() is within intel_lrc.c so it can be made static. To avoid a forward declaration at the top of the file, it and a bunch of other functions have been shuffled upwards. For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 26 February 2015, 3 commits
-
-
Submitted by John Harrison
In execlist mode, the ringbuf is a function of the ring and context whereas in legacy mode, it is derived from the ring alone. Thus the calculation required to determine the ringbuf pointer from the ring (and context) also needs to test execlist mode or not. This is messy. Further, the request structure holds a pointer to both the ring and the context for which it was created. Thus, given a request, it is possible to derive the ringbuf in either legacy or execlist mode. Hence it is necessary to pass just the request in to all the low level functions rather than some combination of request, ring, context and ringbuf. However, rather than recalculating it each time, it is much simpler to just cache the ringbuf pointer in the request structure itself. Caching the pointer means the calculation is done once at request creation time and all further code can simply read it directly from the request structure. OTC-Jira: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> [danvet: Drop contentless comment in lrc alloc request entirely. And spelling fix in the commit message.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
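A sketch of the caching described above (struct layout abbreviated, assignment sites condensed): the ringbuf is computed once when the request is created and the low-level code then reads it straight from the request.

```c
struct drm_i915_gem_request {
	struct intel_engine_cs *ring;
	struct intel_context *ctx;
	struct intel_ringbuffer *ringbuf;	/* cached at creation time */
	/* ... */
};

/* Legacy mode: the ringbuf is a property of the engine alone. */
request->ringbuf = ring->buffer;

/* Execlist mode: the ringbuf depends on both engine and context. */
request->ringbuf = request->ctx->engine[ring->id].ringbuf;
```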
-
Submitted by John Harrison
There is a trace point in the legacy execbuffer execution path that is missing from the execlist path. Trace points are extremely useful for debugging and are used by various automated validation tests. Hence, this patch adds the missing trace point back in. OTC-Jira: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by John Harrison
There is a flags word that is passed through the execbuffer code path all the way from initial decoding of the user parameters down to the very final dispatch buffer call. It is simply called 'flags'. Unfortunately, there are many other flags words floating around in the same blocks of code. Even more once the GPU scheduler arrives. This patch makes it more obvious exactly which flags word is which by renaming 'flags' to 'dispatch_flags'. Note that the bit definitions for this flags word already have an 'I915_DISPATCH_' prefix on them and so are not quite so ambiguous. OTC-Jira: VIZ-1587 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> [danvet: Resolve conflict with Chris' rework of the bb parsing.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 25 February 2015, 2 commits
-
-
Submitted by Ben Widawsky
As we move toward dynamic page table allocation, it becomes much easier to manage our data structures if we do things less coarsely by breaking up all of our actions into individual tasks. This makes the code easier to write, read, and verify. Aside from the dissection of the allocation functions, the patch statically allocates the page table structures without a page directory. This remains the same for all platforms. The patch itself should not have much functional difference. The primary noticeable difference is the fact that page tables are no longer allocated, but rather statically declared as part of the page directory. This has non-zero overhead, but things gain additional complexity as a result. This patch exists for a few reasons: 1. Splitting out the functions allows easily combining GEN6 and GEN8 code. Page tables have no difference based on GEN8. As we'll see in a future patch when we add the DMA mappings to the allocations, it requires only one small change to make work, and error handling should just fall into place. 2. Unless we always want to allocate all page tables under a given PDE, we'll have to eventually break this up into an array of pointers (or pointer to pointer). 3. Having the discrete functions is easier to review and understand. All allocations and frees now take place in just a couple of locations. Reviewing, and catching leaks should be easy. 4. Less important: the GFP flags are confined to one location, which makes playing around with such things trivial. v2: Updated commit message to explain why this patch exists v3: For lrc, s/pdp.page_directory[i].daddr/pdp.page_directory[i]->daddr/ v4: Renamed free_pt/pd_single functions to unmap_and_free_pt/pd (Daniel) v5: Added additional safety checks in gen8 clear/free/unmap. v6: Use WARN_ON and return -EINVAL in alloc_pt_range (Mika). v7: Make err_out loop symmetrical to the way we allocate in alloc_pt_range. Also s/page_tables/page_table and correct commit message (Mika) Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Signed-off-by: Ben Widawsky <ben@bwidawsk.net> Signed-off-by: Michel Thierry <michel.thierry@intel.com> (v3+) Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
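A minimal sketch of the split-out allocator/free pair the commit describes (the struct name and fields are simplified assumptions; the helper names follow the commit message): each helper does exactly one thing, and the GFP flags appear in a single place.

```c
static struct i915_page_table *alloc_pt_single(void)
{
	struct i915_page_table *pt;

	pt = kzalloc(sizeof(*pt), GFP_KERNEL);	/* GFP flags confined here */
	if (!pt)
		return ERR_PTR(-ENOMEM);

	pt->page = alloc_page(GFP_KERNEL | __GFP_ZERO);
	if (!pt->page) {
		kfree(pt);
		return ERR_PTR(-ENOMEM);
	}

	return pt;
}

static void unmap_and_free_pt(struct i915_page_table *pt)
{
	if (WARN_ON(!pt->page))
		return;
	__free_page(pt->page);
	kfree(pt);
}
```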
-
Submitted by Ben Widawsky
Move the remaining members over to the new page table structures. This can be squashed with the previous commit if desired. The reasoning is the same as that patch. I simply felt it is easier to review if split. v2: In lrc: s/ppgtt->pd_dma_addr[i]/ppgtt->pdp.page_directory[i].daddr/ v3: Rebase. v4: Rebased after s/page_tables/page_table/. Signed-off-by: Ben Widawsky <ben@bwidawsk.net> Signed-off-by: Michel Thierry <michel.thierry@intel.com> (v2+) Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 24 February 2015, 3 commits
-
-
Submitted by Nick Hoath
When converting from implicitly tracked execlist queue items to ref counted requests, not all frees of requests were replaced with unrefs, and extraneous refs/unrefs of contexts were added. Correct the unbalanced refcount & replace the frees. Remove a noisy warning when hitting the request creation path. drm_i915_gem_request and intel_context are both kref reference counted structures. Upon allocation, drm_i915_gem_request's ref count should be bumped using kref_init. When a context is assigned to the request, the context's reference count should be bumped using i915_gem_context_reference. i915_gem_request_unreference will reduce the context reference count when the request is freed. Problem introduced in commit 6d3d8274 Author: Nick Hoath <nicholas.hoath@intel.com> AuthorDate: Thu Jan 15 13:10:39 2015 +0000 drm/i915: Subsume intel_ctx_submit_request in to drm_i915_gem_request v2: Added comments explaining how the ctx pointer and the request object should be ref-counted. Removed noisy warning. v3: Cleaned up the language used in the commit & the header description (Thanks David Gordon) Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=88652 Signed-off-by: Nick Hoath <nicholas.hoath@intel.com> Reviewed-by: Thomas Daniel <thomas.daniel@intel.com> Reviewed-by: Daniel Vetter <daniel@ffwll.ch> Signed-off-by: Jani Nikula <jani.nikula@intel.com>
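A sketch of the balanced ref-counting the message describes (the release path is condensed to its essentials): the request's own refcount starts via kref_init, assigning a context takes a context reference, and the matching unreference happens when the request is freed.

```c
/* At allocation: the request starts with one reference of its own. */
kref_init(&request->ref);

/* Assigning a context bumps the context's refcount. */
request->ctx = ctx;
i915_gem_context_reference(request->ctx);

/* In the kref release callback (request free): drop the context reference. */
i915_gem_context_unreference(request->ctx);
kfree(request);
```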
-
Submitted by Thomas Daniel
Work was getting left behind in LRC contexts during reset. This causes a hang if the GPU is reset when HEAD==TAIL because the context's ringbuffer head and tail don't get reset and retiring a request doesn't alter them, so the ring still appears full. Added a function intel_lr_context_reset() to reset head and tail on an LRC and its ringbuffer. Call intel_lr_context_reset() for each context in i915_gem_context_reset() when in execlists mode. Testcase: igt/pm_rps --run-subtest reset #bdw Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=88096 Signed-off-by: Thomas Daniel <thomas.daniel@intel.com> Reviewed-by: Dave Gordon <david.s.gordon@intel.com> [danvet: Flatten control flow in the lrc reset code a notch.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Jeff McGee
On Gen9 the render power gating can leave slice/subslice/EU in a partially enabled state. We must make an explicit request for full SSEU enablement through the Render Power Clock State register when resuming render work. This register is saved/restored in the logical ring context image for execlist submission mode. Initialize its value in each LRC image to request full enablement according to the device SSEU config. Thanks to Sharma Ankitprasad and Akash Goel for highlighting the issue and proposing the initial fix on which this patch is based. v2: Adjusted the names of the power gating support flags to fit update of an earlier patch. Signed-off-by: Jeff McGee <jeff.mcgee@intel.com> Reviewed-by: Akash Goel <akash.goel@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 14 February 2015, 5 commits
-
-
Submitted by Damien Lespiau
WaDisableAsyncFlipPerfMode isn't listed for SKL and INSTPM_FORCE_ORDERING is MBZ so let's make a gen9-specific render init function. Signed-off-by: Damien Lespiau <damien.lespiau@intel.com> Reviewed-by: Nick Hoath <nicholas.hoath@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Damien Lespiau
This function is only used in intel_lrc.c, so restrict it to that file. The function was moved around to avoid a forward declaration and group it with its user. Signed-off-by: Damien Lespiau <damien.lespiau@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Damien Lespiau
This function is only used in intel_lrc.c, so restrict it to that file. The function was moved around to avoid a forward declaration and group it with its user. Signed-off-by: Damien Lespiau <damien.lespiau@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Zhi Wang
This patch introduces two bit definitions for the context save/restore control register. Signed-off-by: Zhi Wang <zhi.a.wang@intel.com> Suggested-by: Dave Gordon <david.s.gordon@intel.com> Cc: Dave Gordon <david.s.gordon@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Nick Hoath
Add: WaEnableForceRestoreInCtxtDescForVCS v2: Add stepping check. v3: Fixed stepping check direction. Cleaned up indentation. Signed-off-by: Nick Hoath <nicholas.hoath@intel.com> Reviewed-by: Damien Lespiau <damien.lespiau@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 10 February 2015, 1 commit
-
-
Submitted by Chris Wilson
This looked like an odd regression from commit ec5cc0f9 Author: Chris Wilson <chris@chris-wilson.co.uk> Date: Thu Jun 12 10:28:55 2014 +0100 drm/i915: Restrict GPU boost to the RCS engine but in reality it uncovered a much older coherency bug. The issue that boosting the GPU frequency on the BCS ring was masking was that we could wake the CPU up after completion of a BCS batch and inspect memory prior to the write cache being fully evicted. In order to serialise the breadcrumb interrupt (and so ensure that the CPU's view of memory is coherent) we need to perform a post-sync operation in the MI_FLUSH_DW. v2: Fix all the MI_FLUSH_DW (bsd plus the duplication in execlists). Also fix the invalidate_domains mask in gen8_emit_flush() for ring != VCS. Testcase: gpuX-rcs-gpu-read-after-write Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: stable@vger.kernel.org Acked-by: Daniel Vetter <daniel@ffwll.ch> Signed-off-by: Jani Nikula <jani.nikula@intel.com>
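A condensed sketch of the fix on the non-render rings (the exact dword layout and command length fields are simplified, and the invalidate handling is illustrative): the flush is given a post-sync store so the breadcrumb interrupt cannot overtake the write-cache eviction.

```c
/* Sketch: MI_FLUSH_DW with a post-sync dword write to the HWSP scratch slot. */
cmd = MI_FLUSH_DW | MI_FLUSH_DW_OP_STOREDW | MI_FLUSH_DW_STORE_INDEX;
if (invalidate & I915_GEM_GPU_DOMAINS)
	cmd |= MI_INVALIDATE_TLB;

intel_ring_emit(ring, cmd);
intel_ring_emit(ring, I915_GEM_HWS_SCRATCH_ADDR | MI_FLUSH_DW_USE_GTT);
intel_ring_emit(ring, 0);	/* upper address dword */
intel_ring_emit(ring, 0);	/* value to store */
intel_ring_advance(ring);
```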
-
- 31 January 2015, 1 commit
-
-
Submitted by Nick Hoath
Remove the request from the list before unreferencing it, in case that is actually the only reference. (Found by Tvrtko Ursulin) This issue was most likely introduced in commit 6d3d8274 Author: Nick Hoath <nicholas.hoath@intel.com> Date: Thu Jan 15 13:10:39 2015 +0000 drm/i915: Subsume intel_ctx_submit_request in to drm_i915_gem_request Signed-off-by: Nick Hoath <nicholas.hoath@intel.com> Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
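The fix boils down to the ordering below (variable names follow the execlist queue code but the surrounding cleanup loop is omitted): unlink the request while it is still known to be alive, then drop what may be the final reference.

```c
/* Unreferencing first could free the request and leave a dangling list
 * entry; unlink first, then drop the (possibly last) reference. */
list_del(&submit_req->execlist_link);
i915_gem_request_unreference(submit_req);
```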
-
- 27 January 2015, 8 commits
-
-
Submitted by Mika Kuoppala
We increase it when we pin, so for the casual reader rename it to cause less confusion. Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com> Reviewed-by: Thomas Daniel <thomas.daniel@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Mika Kuoppala
We pin when we submit to the execlist queue. Balance the pinning when the submitted queue is cleaned on reset. Cc: Dave Gordon <david.s.gordon@intel.com> Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com> Reviewed-by: Thomas Daniel <thomas.daniel@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Mika Kuoppala
We have multiple forcewake domains now on recent gens. Change the function naming to reflect this. v2: More verbose names (Chris) v3: Rebase v4: Rebase v5: Add documentation for forcewake_get/put Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com> Reviewed-by: Deepak S <deepak.s@linux.intel.com> (v2) Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
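A sketch of what a call site looks like after the rename (the register and domain choice are illustrative): callers now name the forcewake domain set explicitly instead of implying a single global forcewake.

```c
/* Hold all forcewake domains around a register access sequence. */
intel_uncore_forcewake_get(dev_priv, FORCEWAKE_ALL);
val = I915_READ(GEN6_RP_STATE_CAP);
intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL);
```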
-
Submitted by Chris Wilson
On user forcewake access, assert that a runtime pm reference is held. Fix and clean up the call sites accordingly. v2: Remove intel_runtime_pm_get() rebase mishap (Deepak) v3: use the driver's own runtime state tracking as pm_runtime_active() will return wrong results when we are in the resume call chain (Mika) Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com> Reviewed-by: Deepak S <deepak.s@linux.intel.com> (v2) Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Nick Hoath
Move all remaining elements that were unique to execlists queue items into the associated request. Issue: VIZ-4274 v2: Rebase. Fixed issue of overzealous freeing of request. v3: Removed re-addition of cleanup work queue (found by Daniel Vetter) v4: Rebase. v5: Actual removal of intel_ctx_submit_request. Update both tail and postfix pointer in __i915_add_request (found by Thomas Daniel) v6: Removed unrelated changes Signed-off-by: Nick Hoath <nicholas.hoath@intel.com> Reviewed-by: Thomas Daniel <thomas.daniel@intel.com> [danvet: Reformat comment with strange linebreaks.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Nick Hoath
The first-pass implementation of execlists required a backpointer to the context to be held in the intel_ringbuffer. However the context pointer is available higher in the call stack. Remove the backpointer from the ring buffer structure and instead pass it down through the call stack. v2: Integrate this changeset with the removal of duplicate request/execlist queue item members. v3: Rebase v4: Rebase. Remove passing of context when the request is passed. Signed-off-by: Nick Hoath <nicholas.hoath@intel.com> Reviewed-by: Thomas Daniel <thomas.daniel@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Nick Hoath
Where there were duplicate variables for the tail, context and ring (engine) in the gem request and the execlist queue item, use the one from the request and remove the duplicate from the execlist queue item. Issue: VIZ-4274 v1: Rebase v2: Fixed build issues. Keep separate postfix & tail pointers as these are used in different ways. Reinserted missing full tail pointer update. Signed-off-by: Nick Hoath <nicholas.hoath@intel.com> Reviewed-by: Thomas Daniel <thomas.daniel@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Nick Hoath
Add a reference and pointer from the execlist queue item to the associated gem request. For execlist queue items that don't have a request, create one as a placeholder. Issue: VIZ-4274 v1: Rebase after upstream of "Replace seqno values with request structures" patchset. Signed-off-by: Nick Hoath <nicholas.hoath@intel.com> Reviewed-by: Thomas Daniel <thomas.daniel@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 13 January 2015, 1 commit
-
-
Submitted by Thomas Daniel
A previous commit enabled execlists by default: commit 27401d12 ("drm/i915/bdw: Enable execlists by default where supported") This allowed routine testing of execlists, which exposed a regression when resuming from suspend. The cause was tracked down to the recent changes to the ring init sequence: commit 35a57ffb ("drm/i915: Only init engines once") During a suspend/resume cycle the hardware Context Status Buffer write pointer is reset. However since the recent changes to the init sequence the software CSB read pointer is no longer reset. This means that context status events are not handled correctly and new contexts are not written to the ELSP, resulting in an apparent GPU hang. Pending further changes to the ring init code, just move the ring->next_context_status_buffer initialization into gen8_init_common_ring to fix this regression. v2: Moved init into gen8_init_common_ring rather than context_enable after feedback from Daniel Vetter. Updated commit msg to reflect this and also cite commits related to the regression. Fixed bz link to correct bug. Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=88096 Cc: Paulo Zanoni <paulo.r.zanoni@intel.com> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: Dave Gordon <david.s.gordon@intel.com> Signed-off-by: Thomas Daniel <thomas.daniel@intel.com> Reviewed-by: Dave Gordon <david.s.gordon@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
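The fix amounts to the placement sketched below (the function body is condensed to the relevant line): resetting the software CSB read pointer in the common ring init means it is re-zeroed on every hardware (re)initialization, including resume, and stays in step with the hardware write pointer.

```c
static int gen8_init_common_ring(struct intel_engine_cs *ring)
{
	/* ... existing hardware init ... */

	/* Re-sync the software CSB read pointer with the hardware write
	 * pointer, which is reset across suspend/resume. */
	ring->next_context_status_buffer = 0;

	return 0;
}
```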
-
- 16 December 2014, 1 commit
-
-
Submitted by Michel Thierry
Otherwise, new platforms without workarounds will hit this warning for every new context created. Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Michel Thierry <michel.thierry@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 15 December 2014, 2 commits
-
-
Submitted by Thomas Daniel
Execlist support in the i915 driver is now considered good enough for the feature to be enabled by default on Gen8 and later and routinely tested. Adjusted the i915 parameters structure initialization to reflect this and updated the comment in intel_sanitize_enable_execlists(). There's still work to do before we can let the wider masses onto it, but there's time left before the 3.20 cutoff. v2: Update the MODULE_PARM_DESC too. Issue: VIZ-2020 Signed-off-by: Thomas Daniel <thomas.daniel@intel.com> [danvet: Add note that there's still some work left to do.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Daniel Vetter
We consistently use the _irq_handler postfix for functions called in hardirq context. Especially when it's a non-static function, hardirq is a crazy enough calling context to warrant this level of OCD. So rename it. Cc: Thomas Daniel <thomas.daniel@intel.com> Reviewed-by: Thomas Daniel <thomas.daniel@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
-
- 06 December 2014, 2 commits
-
-
Submitted by John Harrison
For debugging purposes, it is useful to be able to uniquely identify a given request structure as it works its way through the system. This becomes especially tricky once the seqno value is lazily allocated, as then the request has nothing but its pointer to identify it for much of its life. Change-Id: Ie76b2268b940467f4cdf5a4ba6f5a54cbb96445d For: VIZ-4377 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by John Harrison
There is a general theory that kzalloc is better/safer than kmalloc, especially for interesting data structures. This change updates the request structure allocation to be zero filled. This also fixes crashes in the reset code. Quoting Mika's patch: "Clean the request structure on alloc. Otherwise we might end up referencing uninitialized fields. This is apparent when we try to cleanup the preallocated request on ring reset, before any request has been submitted to the ring. The request->ctx is foobar and we end up freeing the foobarness." Note that this fixes a regression introduced in commit 9eba5d4a Author: John Harrison <John.C.Harrison@Intel.com> Date: Mon Nov 24 18:49:23 2014 +0000 drm/i915: Ensure OLS & PLR are always in sync References: https://bugs.freedesktop.org/show_bug.cgi?id=86959 References: https://bugs.freedesktop.org/show_bug.cgi?id=86962 References: https://bugs.freedesktop.org/show_bug.cgi?id=86992 Change-Id: I68715ef758025fab8db763941ef63bf60d7031e2 For: VIZ-4377 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com> Cc: Mika Kuoppala <mika.kuoppala@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
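The change itself is the one-call swap sketched below: a zero-filled request means fields such as the context pointer are NULL rather than garbage if the request is torn down before it has been fully populated.

```c
/* Before: fields are uninitialized until explicitly set. */
req = kmalloc(sizeof(*req), GFP_KERNEL);

/* After: zero-filled, so early cleanup paths see NULL pointers. */
req = kzalloc(sizeof(*req), GFP_KERNEL);
if (req == NULL)
	return -ENOMEM;
```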
-
- 05 December 2014, 1 commit
-
-
Submitted by Ville Syrjälä
MI_STORE_DWORD_IMM length has been the same ever since gen4. Rename the define to avoid potential confusion if someone tries to use this on pre-gen8. Also correct the comment on the MI_MEM_VIRTUAL bit. It's present on 945, g33 and 965 only. Cc: Oscar Mateo <oscar.mateo@intel.com> Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> [danvet: Add USE_GGTT define for g4x+ too.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 03 December 2014, 2 commits
-
-
Submitted by Thomas Daniel
A previous commit introduced engine init changes: commit 372ee59699d9 ("drm/i915: Only init engines once") This broke execlists, as intel_lr_context_render_state_init was trying to emit commands to the RCS for the default context before ring->init_hw was called. Made a new gen8_init_rcs_context function and assigned it to the render ring's init_context. Moved the call to intel_logical_ring_workarounds_emit into gen8_init_rcs_context to maintain previous functionality. Moved the call to render_state_init from lr_context_deferred_create into gen8_init_rcs_context, and modified deferred_create to call ring->init_context for non-default contexts. Modified i915_gem_context_enable to call ring->init_context for the default context. So init_context will now always be called when the hw is ready - in i915_gem_context_enable for the default context and in lr_context_deferred_create for other contexts. Signed-off-by: Thomas Daniel <thomas.daniel@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Daniel Vetter
Now that sanity prevails and we have the clean split between software init and starting the engines, we can drop all the "have we allocated this struct already?" nonsense. Execlist code could benefit quite a bit more still, but that's for another patch. Reviewed-by: Dave Gordon <david.s.gordon@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
-