- 07 January 2016, 1 commit

Committed by Ben Widawsky

I think this patch is a worthwhile cleanup even if it might look only marginally useful. It gets more useful in upcoming patches and for handling of future GEN platforms. The only non-mechanical part is the removal of the extra & operation on ring->next_context_status_buffer, which is safe because a modulus operation is already applied right above it.

Signed-off-by: Ben Widawsky <benjamin.widawsky@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1452018609-10142-2-git-send-email-benjamin.widawsky@intel.com
Reviewed-by: Michel Thierry <michel.thierry@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

- 05 January 2016, 3 commits

Committed by Dave Gordon

This function was recently renamed and exposed, so now it gets documented.

Signed-off-by: Dave Gordon <david.s.gordon@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/1451996493-16079-1-git-send-email-david.s.gordon@intel.com

Committed by Dave Gordon

The GuC code needs to know the size of a logical context, so we expose get_lr_context_size(), renaming it intel_lr_context_size() to fit the naming conventions for non-static functions.

For: VIZ-2021
Signed-off-by: Dave Gordon <david.s.gordon@intel.com>
Signed-off-by: Alex Dai <yu.dai@intel.com>
Reviewed-by: Dave Gordon <david.s.gordon@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1450468812-4882-2-git-send-email-yu.dai@intel.com
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

Committed by Alex Dai

Split the GuC work queue space check from submission and move it to ring_alloc_request_extras. The reason is that a failure in the later i915_add_request() call cannot be handled; if a timeout happens in the space check, the driver can now return early and handle the error.

v1: Move wq_reserve_space to ring_reserve_space
v2: Move wq_reserve_space to alloc_request_extras (Chris Wilson)
v3: The work queue head pointer is now cached by the driver, so we can return quickly if space is available. s/reserve/check/g (Dave Gordon)
v4: Update the cached wq head after ringing the doorbell; check wq space before ringing the doorbell in case an unexpected error happens; call the wq space check only when GuC submission is enabled. (Dave Gordon)

Signed-off-by: Alex Dai <yu.dai@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1450295155-10050-1-git-send-email-yu.dai@intel.com
Reviewed-by: Dave Gordon <david.s.gordon@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

- 30 December 2015, 1 commit

Committed by Ben Widawsky

Signed-off-by: Ben Widawsky <benjamin.widawsky@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1451427643-7266-1-git-send-email-benjamin.widawsky@intel.com
Signed-off-by: Jani Nikula <jani.nikula@intel.com>

- 21 December 2015, 1 commit

Committed by Ben Widawsky

It is unclear if this is even required on BXT.

v2: Make sure to set the default value to false. Uncertain how my compiler didn't complain with v1.

Cc: Imre Deak <imre.deak@intel.com>
Signed-off-by: Ben Widawsky <benjamin.widawsky@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1450374597-7021-1-git-send-email-benjamin.widawsky@intel.com
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

- 12 December 2015, 1 commit

Committed by Dave Gordon

In various places, a single page of a (regular) GEM object is mapped into CPU address space and updated. In each such case, either the page or the object should be marked dirty, to ensure that the modifications are not discarded if the object is evicted under memory pressure. The typical sequence is:

    va = kmap_atomic(i915_gem_object_get_page(obj, pageno));
    *(va+offset) = ...
    kunmap_atomic(va);

Here we introduce i915_gem_object_get_dirty_page(), which performs the same operation as i915_gem_object_get_page() but with the side effect of marking the returned page dirty in the pagecache. This ensures that if the object is subsequently evicted (due to memory pressure), the changes are written to backing store rather than discarded. Note that it works only for regular (shmfs-backed) GEM objects, but (at least for now) those are the only ones that are updated in this way -- the objects in question are contexts and batchbuffers, which are always shmfs-backed. Separate patches deal with the cases where whole objects are (or may be) dirtied.

v3: Mark two more pages dirty in the page-boundary-crossing cases of the execbuffer relocation code [Chris Wilson]

Signed-off-by: Dave Gordon <david.s.gordon@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Link: http://patchwork.freedesktop.org/patch/msgid/1449773486-30822-2-git-send-email-david.s.gordon@intel.com
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

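A minimal sketch of what such a helper could look like, assuming the existing i915_gem_object_get_page() accessor and the kernel's set_page_dirty(); this illustrates the idea rather than the exact upstream implementation.

```c
/* Hedged sketch: same lookup as i915_gem_object_get_page(), but mark the
 * returned page dirty so CPU modifications survive eviction to backing
 * store. Only meaningful for regular (shmfs-backed) GEM objects. */
static inline struct page *
i915_gem_object_get_dirty_page(struct drm_i915_gem_object *obj, int n)
{
	struct page *page = i915_gem_object_get_page(obj, n);

	set_page_dirty(page);
	return page;
}
```
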
- 10 December 2015, 1 commit

Committed by Dave Gordon

Based on Chris Wilson's patch from 6 months ago, rebased and adapted. The current implementation of intel_ring_initialized() is too heavyweight; it's a non-inlined function that chases several levels of pointers. This wouldn't matter too much if it were rarely called, but it's used inside the iterator test of for_each_ring() and is therefore called quite frequently. So let's make it simple and inline.

The idea here is to use ring->dev as an indicator of which engines have been initialised and are therefore to be included in iterations that use for_each_ring(). This allows us to avoid multiple memory references and a (non-inlined) function call on each iteration of each such loop.

Fixes regression from:

    commit 48d82387
    Author: Oscar Mateo <oscar.mateo@intel.com>
    Date:   Thu Jul 24 17:04:23 2014 +0100

        drm/i915/bdw: Generic logical ring init and cleanup

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Dave Gordon <david.s.gordon@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/1449586956-32360-2-git-send-email-david.s.gordon@intel.com

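With ring->dev used as the "engine is initialised" marker, the check collapses to a single inline pointer test. A sketch, assuming the intel_engine_cs layout of that era:

```c
/* Hedged sketch: ring->dev is only set once the engine has been
 * initialised, so for_each_ring() can test it directly instead of
 * calling a non-inlined helper that chases several pointers. */
static inline bool intel_ring_initialized(struct intel_engine_cs *ring)
{
	return ring->dev != NULL;
}
```
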
- 05 December 2015, 1 commit

Committed by Daniel Vetter

This reverts commit 6d65ba94.

Mika Kuoppala traced a use-after-free crash in module unload down to this commit, because ring->last_context is leaked beyond the point where the context gets destroyed. Mika submitted a quick fix to patch that up in the context destruction code, but that's too much of a hack. The right fix is instead for the ring to hold a full reference onto its last context, like we do for legacy contexts. Since this is causing a regression in BAT it gets reverted before we can close this.

Cc: Nick Hoath <nicholas.hoath@intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: David Gordon <david.s.gordon@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Alex Dai <yu.dai@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=93248
Acked-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>

- 03 December 2015, 1 commit

Committed by Nick Hoath

Use the first retired request on a new context to unpin the old context. This ensures that the hw context remains bound until it has been written back to by the GPU. Now that the context is pinned until later in the request/context lifecycle, it no longer needs to be pinned from context_queue to retire_requests. This fixes an issue with GuC submission where the GPU might not have finished writing back the context before it is unpinned, which results in a GPU hang.

v2: Moved the new pin to cover GuC submission (Alex Dai); moved the new unpin to request_retire to fix a coverage leak
v3: Added a switch to the default context if freeing a still-pinned context, just in case the hw was actually still using it
v4: Unwrapped context unpin to allow calling without a request
v5: Only create a switch to the idle context if the ring doesn't already have a request pending on it (Alex Dai); rename unsaved to dirty to avoid double negatives (Dave Gordon); changed _no_req postfix to __ prefix for consistency (Dave Gordon); split out per-engine cleanup from context_free as it was getting unwieldy; corrected locking (Dave Gordon)
v6: Removed some bikeshedding (Mika Kuoppala); added an explanation of the GuC hang that this fixes (Daniel Vetter)
v7: Removed extra per-request pinning from the ring reset code (Alex Dai); added a forced ring unpin/clean in the error case in context free (Alex Dai)

Signed-off-by: Nick Hoath <nicholas.hoath@intel.com>
Issue: VIZ-4277
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: David Gordon <david.s.gordon@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Alex Dai <yu.dai@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Alex Dai <yu.dai@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

- 20 November 2015, 1 commit

Committed by Arun Siluvery

This reverts commit 2e5356da. It is now redundant, as it is already covered by the commit below, which introduced the changes to reuse initialization of resources in the resume/reset path:

    commit e84fe803
    Author: Nick Hoath <nicholas.hoath@intel.com>
    Date:   Fri Sep 11 12:53:46 2015 +0100

        drm/i915: Split alloc from init for lrc

lrc_setup_hardware_status_page(), called in the same function gen8_init_common_ring(), takes care of this.

Cc: Nick Hoath <nicholas.hoath@intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Arun Siluvery <arun.siluvery@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1447951664-9347-1-git-send-email-arun.siluvery@linux.intel.com
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

- 18 November 2015, 6 commits

Committed by Ville Syrjälä

Make I915_READ and I915_WRITE more type safe by wrapping the register offset in a struct. This should eliminate most of the fumbles we've had with misplaced parens.

This only takes care of normal mmio registers. We could extend the idea to other register types and define each with its own struct; that way you wouldn't be able to accidentally pass the wrong thing to a specific register access function.

The gpio_reg setup is probably the ugliest thing left, but I figure I'd just leave it for now and wait for some divine inspiration to strike before making it nice.

As for the generated code, it's actually a bit better sometimes. E.g. looking at i915_irq_handler(), we can see the following change:

      lea 0x70024(%rdx,%rax,1),%r9d
      mov $0x1,%edx
    - movslq %r9d,%r9
    - mov %r9,%rsi
    - mov %r9,-0x58(%rbp)
    - callq *0xd8(%rbx)
    + mov %r9d,%esi
    + mov %r9d,-0x48(%rbp)
      callq *0xd8(%rbx)

So previously gcc thought the register offset might be signed and decided to sign extend it, just in case. The rest appears to be mostly just minor shuffling of instructions.

v2: i915_mmio_reg_{offset,equal,valid}() helpers added; s/_REG/_MMIO/ in the register defines; no more switch statements left to worry about; ring_emit stuff got sorted in a prep patch; cmd parser, lrc context and w/a batch buildup also in prep patches; vgpu stuff cleaned up and moved to a prep patch; all other unrelated changes split out
v3: Rebased due to BXT DSI/BLC, MOCS, etc.
v4: Rebased due to churn, s/i915_mmio_reg_t/i915_reg_t/

Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: http://patchwork.freedesktop.org/patch/msgid/1447853606-2751-1-git-send-email-ville.syrjala@linux.intel.com

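The core idea is to wrap the raw offset in a one-member struct so that passing a plain integer where a register is expected (or vice versa) becomes a compile error. A hedged sketch of the wrapper and the helpers mentioned in v2:

```c
/* Hedged sketch of the type-safe mmio register wrapper. */
typedef struct {
	u32 reg;	/* raw mmio offset, only reachable via the helpers */
} i915_reg_t;

#define _MMIO(r) ((const i915_reg_t){ .reg = (r) })
#define INVALID_MMIO_REG _MMIO(0)

static inline u32 i915_mmio_reg_offset(i915_reg_t reg)
{
	return reg.reg;
}

static inline bool i915_mmio_reg_equal(i915_reg_t a, i915_reg_t b)
{
	return i915_mmio_reg_offset(a) == i915_mmio_reg_offset(b);
}

static inline bool i915_mmio_reg_valid(i915_reg_t reg)
{
	return !i915_mmio_reg_equal(reg, INVALID_MMIO_REG);
}
```

I915_READ/I915_WRITE then call i915_mmio_reg_offset() internally, so a stray `I915_READ(GEN8_CSB_ENTRIES)`-style mixup of a count and a register no longer compiles.
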
Committed by Ville Syrjälä

We set up a load of LRIs in the logical ring context. Wrap that stuff in a macro to avoid typos in the position of each reg/value pair in the context. This also makes it easier to make the register defines type safe.

Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1446672017-24497-24-git-send-email-ville.syrjala@linux.intel.com
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>

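A hedged sketch of the kind of macro meant here: each reg/value pair occupies two consecutive dwords in the context image, so letting one macro place both halves removes the chance of an off-by-one between offset and value.

```c
/* Hedged sketch: write one LRI reg/value pair into the logical ring
 * context image, offset at dword 'pos' and value at 'pos' + 1. */
#define ASSIGN_CTX_REG(reg_state, pos, reg, val) do { \
	(reg_state)[(pos) + 0] = i915_mmio_reg_offset(reg); \
	(reg_state)[(pos) + 1] = (val); \
} while (0)
```
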
Committed by Ville Syrjälä

The logical render context population has a bunch of raw ring register offsets. Use the names we have for them, and in cases where we don't, give them names.

Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1446672017-24497-23-git-send-email-ville.syrjala@linux.intel.com
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>

Committed by Ville Syrjälä

Add a helper for emitting register offsets (for LRI/SRM) into the w/a batch buffer.

Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1446672017-24497-21-git-send-email-ville.syrjala@linux.intel.com
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>

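Assuming a wa_ctx_emit()-style helper that writes one dword into the workaround batch, the register-aware variant is just a thin wrapper that extracts the offset first; a sketch:

```c
/* Hedged sketch: emit a register offset (for LRI/SRM payloads) into the
 * workaround batch buffer via the plain dword emitter, so callers pass
 * the type-safe i915_reg_t rather than a raw number. */
#define wa_ctx_emit_reg(batch, index, reg) \
	wa_ctx_emit((batch), (index), i915_mmio_reg_offset(reg))
```
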
Committed by Ville Syrjälä

When register type safety happens, we can't just emit the register itself to the ring; instead we'll need to extract the offset from it first. Add some convenience functions that will do that.

v2: Convert MOCS setup too

Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1446672017-24497-20-git-send-email-ville.syrjala@linux.intel.com
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>

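A sketch of such convenience wrappers, one for the legacy ring path and one for the execlist ringbuffer; the parameter types are assumptions about the emit helpers of that era.

```c
/* Hedged sketch: emit the register's offset rather than the i915_reg_t. */
static inline void intel_ring_emit_reg(struct intel_engine_cs *ring,
				       i915_reg_t reg)
{
	intel_ring_emit(ring, i915_mmio_reg_offset(reg));
}

static inline void intel_logical_ring_emit_reg(struct intel_ringbuffer *ringbuf,
					       i915_reg_t reg)
{
	intel_logical_ring_emit(ringbuf, i915_mmio_reg_offset(reg));
}
```
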
- 29 October 2015, 2 commits

Committed by Tim Gore

Since A1 chips use the same GPU as A0, they need all the same workarounds in the i915 driver. Update some conditionals to do this.

Signed-off-by: Tim Gore <tim.gore@intel.com>
Reviewed-by: Arun Siluvery <arun.siluvery@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1445856538-5417-1-git-send-email-tim.gore@intel.com
Signed-off-by: Jani Nikula <jani.nikula@intel.com>

Committed by Chris Wilson

Having flushed all requests from all queues, we know that all ringbuffers must now be empty. However, since we do not reclaim all space when retiring a request (to prevent HEADs colliding with rapid ringbuffer wraparound), the amount of available space on each ringbuffer upon reset is less than when we started. Do one more pass over all the ringbuffers to reset the available space.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Cc: Arun Siluvery <arun.siluvery@linux.intel.com>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Cc: Dave Gordon <david.s.gordon@intel.com>

- 21 October 2015, 2 commits

Committed by Jani Nikula

Revision checks are almost always accompanied by a platform check (the exceptions are in platform-specific code). Add helpers to check for a platform and a revision range: IS_SKL_REVID() and IS_BXT_REVID(). In most places this simplifies and clarifies the code, and it makes it obvious that the revid macros are used for the correct platform. This should make it easier to find all the revision checks for workarounds on each platform, and easier to remove them once we drop support for early hardware revisions. It should also make it easier to differentiate between Skylake and Kabylake revision checks when Kabylake support is added.

v2: rebase

Acked-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1445343722-3312-3-git-send-email-jani.nikula@intel.com

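A hedged sketch of what such helpers look like, assuming the driver's existing INTEL_REVID(), IS_SKYLAKE() and IS_BROXTON() macros; the exact upstream definitions may differ slightly.

```c
/* Hedged sketch: inclusive revision-range check. */
#define IS_REVID(p, since, until) \
	(INTEL_REVID(p) >= (since) && INTEL_REVID(p) <= (until))

/* Platform and revision range in one condition, so a workaround bounded
 * to early steppings cannot accidentally match another platform. */
#define IS_SKL_REVID(p, since, until) \
	(IS_SKYLAKE(p) && IS_REVID(p, since, until))

#define IS_BXT_REVID(p, since, until) \
	(IS_BROXTON(p) && IS_REVID(p, since, until))

/* Typical use: if (IS_SKL_REVID(dev, 0, SKL_REVID_B0)) apply_workaround(); */
```
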
Committed by Jani Nikula

Prefer inclusive ranges for revision checks rather than "below B0". Per the specs A2 is not used, so revid <= A1 matches revid < B0.

Acked-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/1445343722-3312-2-git-send-email-jani.nikula@intel.com
Signed-off-by: Jani Nikula <jani.nikula@intel.com>

- 13 October 2015, 1 commit

Committed by Chris Wilson

In order to flush the results from in-batch pipecontrol writes (used for example in glQuery) before declaring the batch complete (and so declaring the query results coherent), we need to set the FlushEnable bit in our flushing pipecontrol. The FlushEnable bit "waits until all previous writes of immediate data from post-sync circles are complete before executing the next command".

I get GPU hangs on byt without flushing these writes (running ue4), and piglit has examples where the flush is required for correct rendering.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Acked-by: Daniel Vetter <daniel@ffwll.ch>
Cc: stable@vger.kernel.org
Signed-off-by: Jani Nikula <jani.nikula@intel.com>

- 07 October 2015, 1 commit

Committed by Chris Wilson

Passing cliprects into the kernel for it to re-execute the batch buffer with different CMD_DRAWRECT died out long ago. As DRI1 support has been removed from the kernel, we can now simply reject any execbuf trying to use this "feature". To keep Daniel happy with the prospect of being able to reuse these fields in the next decade, continue to ensure that current userspace is not passing garbage in through the dead fields.

v2: Fix the cliprects_ptr check

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Dave Gordon <david.s.gordon@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

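A simplified, hypothetical helper illustrating the kind of check described; the real check lives inline in the execbuffer ioctl path and its exact shape (including quirk handling for old userspace) may differ.

```c
/* Hedged sketch: the DRI1 cliprect fields of drm_i915_gem_execbuffer2
 * are dead, so reject any execbuf that still passes something in them.
 * Helper name is illustrative only. */
static int reject_dead_cliprect_fields(const struct drm_i915_gem_execbuffer2 *args)
{
	if (args->num_cliprects || args->cliprects_ptr ||
	    args->DR1 || args->DR4) {
		DRM_DEBUG("cliprects are no longer supported\n");
		return -EINVAL;
	}
	return 0;
}
```
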
- 28 September 2015, 1 commit

Committed by Michel Thierry

A previous commit resets the Context Status Buffer (CSB) read pointer in ring init:

    commit c0a03a2e ("drm/i915: Reset CSB read pointer in ring init")

This is generally correct, but on some platforms (cht) this pointer is not reset after suspend/resume. In that case, the driver should read the register value instead of resetting the sw read counter to 0; otherwise we process old events, leading to unwanted pre-emptions or something worse.

On other platforms (bdw), and also during GPU reset or power up, the CSBWP is reset to 0x7 (an invalid number), and in this case the read pointer should be set to 5 (the interrupt code will increment this counter one more time and will start reading from CSB[0]).

v2: When the CSB registers are reset, the read pointer needs to be set to 5, otherwise the first write (CSB[0]) won't be read (Mika). Replace magic numbers with GEN8_CSB_ENTRIES (6) and GEN8_CSB_PTR_MASK (0x07).

Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: stable@vger.kernel.org # v4.0+
Signed-off-by: Lei Shen <lei.shen@intel.com>
Signed-off-by: Deepak S <deepak.s@intel.com>
Signed-off-by: Michel Thierry <michel.thierry@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>

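A sketch of the ring-init logic described above: read the hardware write pointer, and if it reads back as the invalid reset value, start the software read pointer at the last entry so the first event consumed is CSB[0]. The wrapper function name is hypothetical; in the driver this lives inline in the gen8 common ring init.

```c
#define GEN8_CSB_ENTRIES	6
#define GEN8_CSB_PTR_MASK	0x07

/* Hedged sketch of the CSB read-pointer setup at ring init time. */
static void lrc_init_csb_read_pointer(struct intel_engine_cs *ring)
{
	struct drm_i915_private *dev_priv = ring->dev->dev_private;
	u32 csb_ptr_hw;

	csb_ptr_hw = I915_READ(RING_CONTEXT_STATUS_PTR(ring)) & GEN8_CSB_PTR_MASK;

	/* After GPU reset/power-up the register reads back as the invalid
	 * value 0x7; start at entry 5 so the interrupt handler's increment
	 * wraps around to CSB[0] for the first event. Otherwise (e.g. on
	 * resume on platforms that keep the register), trust the hw value. */
	if (csb_ptr_hw == GEN8_CSB_PTR_MASK)
		csb_ptr_hw = GEN8_CSB_ENTRIES - 1;

	ring->next_context_status_buffer = csb_ptr_hw;
}
```
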
- 23 September 2015, 2 commits

Committed by Nick Hoath

Remove an extraneous request cancel in the request-allocation failure path in intel_lr_context_deferred_alloc (Tvrtko Ursulin).

Regression from:

    commit e84fe803
    Author: Nick Hoath <nicholas.hoath@intel.com>
    Date:   Fri Sep 11 12:53:46 2015 +0100

        drm/i915: Split alloc from init for lrc

Signed-off-by: Nick Hoath <nicholas.hoath@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

Committed by Ville Syrjälä

Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

- 22 September 2015, 1 commit

Committed by Andrzej Hajda

The function can return a negative value. The problem was detected using the proposed semantic patch scripts/coccinelle/tests/unsigned_lesser_than_zero.cocci [1].

[1]: http://permalink.gmane.org/gmane.linux.kernel/2038576

Signed-off-by: Andrzej Hajda <a.hajda@samsung.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

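The class of bug that coccinelle script catches, in miniature; some_call() is a stand-in, not driver code.

```c
/* Hedged illustration of the "unsigned lesser than zero" pattern. */
int some_call(void);	/* may return a negative errno */

void buggy(void)
{
	unsigned int ret = some_call();	/* negative errno wraps to a huge positive value */

	if (ret < 0)	/* always false for an unsigned type; compilers usually warn here */
		pr_err("error %u\n", ret);
}

void fixed(void)
{
	int ret = some_call();	/* keep the signed type so the check can fire */

	if (ret < 0)
		pr_err("error %d\n", ret);
}
```
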
- 14 September 2015, 4 commits

Committed by Masanari Iida

This patch fixes the following warnings during "make xmldocs":

    .//drivers/gpu/drm/i915/intel_lrc.c:780: warning: No description found for parameter 'req'
    .//drivers/gpu/drm/i915/intel_lrc.c:780: warning: Excess function parameter 'request' description in 'intel_logical_ring_begin'
    .//drivers/gpu/drm/i915/intel_lrc.c:780: warning: Excess function parameter 'ctx' description in 'intel_logical_ring_begin'

Signed-off-by: Masanari Iida <standby24x7@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

Committed by Nick Hoath

Extend the init/init_hw split to context init:
- Move context initialisation into i915_gem_init_hw
- Move one-off initialisation for the render ring to i915_gem_validate_context
- Move default context initialisation to logical_ring_init

Rename intel_lr_context_deferred_create to intel_lr_context_deferred_alloc, to reflect the reduced functionality and the alloc/init split. This patch is intended to split out the allocation of resources from the initialisation, to allow easier reuse of the code for resume/gpu reset.

v2: Removed function-pointer wrapping of do_switch_context (Daniel Vetter); left ->init_context in intel_lr_context_deferred_alloc (Daniel Vetter); removed unnecessary init flag and ring type test (Daniel Vetter); improved commit message (Daniel Vetter)
v3: On init/reinit, set the hw next sequence number to the sw next sequence number, which is set to 1 at driver load time. This prevents the seqno being reset on reinit (Chris Wilson)
v4: Set seqno back to ~0 - 0x1000 at start-of-day, and increment by 0x100 on reset. This makes it obvious which bbs are which after a reset. (David Gordon & John Harrison) Rebase.
v5: Rebase. Fixed rebase breakage. Put context pinning in a separate function. Removed code churn. (Thomas Daniel)
v6: Cleaned up issues introduced in v2 & v5 (Thomas Daniel)

Issue: VIZ-4798
Signed-off-by: Nick Hoath <nicholas.hoath@intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: John Harrison <john.c.harrison@intel.com>
Cc: David Gordon <david.s.gordon@intel.com>
Cc: Thomas Daniel <thomas.daniel@intel.com>
Reviewed-by: Thomas Daniel <thomas.daniel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

Committed by Michel Thierry

When WaEnableForceRestoreInCtxtDescForVCS is required, it is only safe to send new contexts if the last reported event is "active to idle". Otherwise the same context can fully preempt itself, because lite-restore is disabled.

Testcase: igt/gem_concurrent_blit
Reported-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Signed-off-by: Michel Thierry <michel.thierry@intel.com>
Reviewed-by: Arun Siluvery <arun.siluvery@linux.intel.com>
Tested-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

Committed by Michel Thierry

Also check for the correct revision id on each Gen9 platform (SKL until B0 and BXT until A0).

Cc: Nick Hoath <nicholas.hoath@intel.com>
Signed-off-by: Michel Thierry <michel.thierry@intel.com>
Reviewed-by: Arun Siluvery <arun.siluvery@linux.intel.com>
Tested-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

- 04 September 2015, 1 commit

Committed by Chris Wilson

A small, very small, step towards sharing the duplicated code between the execlists and legacy submission engines, starting with the ringbuffer allocation code.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Arun Siluvery <arun.siluvery@linux.intel.com>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Cc: Dave Gordon <david.s.gordon@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

- 02 September 2015, 2 commits

Committed by Zhiyuan Lv

Broadwell hardware supports both ring buffer mode and execlist mode. When i915 runs inside a VM with Intel GVT-g, we allow execlist mode only. The main reason for EXECLIST-only operation is that GVT-g does not support dynamically switching between ring buffer mode and execlist mode when running multiple virtual machines.

v2: Adjust the position of the vgpu check in the sanitize function (Joonas); add a vgpu error check in context initialization (Joonas, Daniel)

Signed-off-by: Zhiyuan Lv <zhiyuan.lv@intel.com>
Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

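In sanitize-options terms the rule is simple: if the driver detects it is running as a GVT-g guest on hardware that has logical ring contexts, force execlist mode on. A hedged, simplified sketch of that branch (the real logic in the driver's sanitize function has more cases):

```c
/* Hedged sketch of the vGPU branch in the execlist sanitize logic. */
static int sanitize_enable_execlists(struct drm_device *dev, int enable_execlists)
{
	/* GVT-g does not support switching between ring-buffer and
	 * execlist mode across VMs, so a vGPU must use execlists. */
	if (HAS_LOGICAL_RING_CONTEXTS(dev) && intel_vgpu_active(dev))
		return 1;

	return enable_execlists;	/* remaining heuristics elided */
}
```
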
Committed by Zhiyuan Lv

This is based on Mika Kuoppala's patch below:
http://article.gmane.org/gmane.comp.freedesktop.xorg.drivers.intel/61104/match=workaround+hw+preload

The patch preallocates the page directories for 32-bit PPGTT when i915 runs inside a virtual machine with Intel GVT-g. With this change, the root pointers in the EXECLIST context always stay the same.

The change is needed for vGPU because Intel GVT-g does page table shadowing and needs to track all the page table changes made by the guest i915 driver. However, if the guest PPGTT is modified through GPU commands like LRI, it is not possible to trap the operations at the right time, so it would be hard to make the shadow PPGTT work correctly. The shadow PPGTT can be much simpler with this change, and meanwhile the hypervisor can simply prohibit any attempt to modify the PPGTT through GPU commands, for security.

The function gen8_preallocate_top_level_pdps() in the patch is from Mika, with only one change: setting "used_pdpes" to avoid duplicated allocation later.

Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Cc: Dave Gordon <david.s.gordon@intel.com>
Cc: Michel Thierry <michel.thierry@intel.com>
Signed-off-by: Zhiyuan Lv <zhiyuan.lv@intel.com>
Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

- 26 August 2015, 2 commits

Committed by Imre Deak

By running igt/store_dword_loop_render on BXT we can hit a coherency problem where the seqno written at GPU command completion time is not seen by the CPU. This results in __i915_wait_request seeing the stale seqno and not completing the request (not considering the lost interrupt/GPU reset mechanism). I also verified that this isn't a case of a lost interrupt, or of the command somehow not completing: when the coherency issue occurred I also read the seqno via an uncached GTT mapping. While the cached version of the seqno still showed the stale value, the one read via the uncached mapping was correct.

Work around this issue by clflushing the corresponding CPU cacheline following any store of the seqno and preceding any read of it. When reading, do this only when the caller expects a coherent view.

v2: fix using the proper logical && instead of a bitwise & (Jani, Mika); limit the workaround to the A stepping, on later steppings this HW issue is fixed
v3: use a separate get_seqno/set_seqno vfunc (Chris)

Testcase: igt/store_dword_loop_render
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

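A sketch of the per-engine vfunc pair that v3 describes, assuming the driver's status-page accessors of that era (intel_read_status_page(), intel_write_status_page(), intel_flush_status_page()):

```c
/* Hedged sketch: BXT A-stepping seqno coherency workaround. */
static u32 bxt_a_get_seqno(struct intel_engine_cs *ring, bool lazy_coherency)
{
	/* Only pay for the clflush when the caller needs a coherent view. */
	if (!lazy_coherency)
		intel_flush_status_page(ring, I915_GEM_HWS_INDEX);
	return intel_read_status_page(ring, I915_GEM_HWS_INDEX);
}

static void bxt_a_set_seqno(struct intel_engine_cs *ring, u32 seqno)
{
	intel_write_status_page(ring, I915_GEM_HWS_INDEX, seqno);
	/* Flush after the store so the CPU-visible copy is pushed out. */
	intel_flush_status_page(ring, I915_GEM_HWS_INDEX);
}
```
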
Committed by Arun Siluvery

MI_STORE_REGISTER_MEM and MI_LOAD_REGISTER_MEM are not really variable-length instructions, unlike MI_LOAD_REGISTER_IMM which expects (reg, addr) pairs, so use a fixed length for these instructions.

v2: rebase

Cc: Dave Gordon <david.s.gordon@intel.com>
Signed-off-by: Arun Siluvery <arun.siluvery@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
[danvet: Appease checkpatch, as Mika spotted in i915_reg.h - it seems terminally unhappy about i915_cmd_parser.c so that would be a separate patch.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

- 17 August 2015, 1 commit

Committed by Chris Wilson

Every time we use the logical context with execlists it becomes dirty (as the hardware will write the new register values afterwards, as well as the GPU state that will be used). We therefore need to flag the context as dirty every time, since after a swap-out/swap-in cycle the dirty flag will be cleared, and a further swap-out cycle would then lose the most recent GPU state.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: stable@vger.kernel.org
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>

- 15 August 2015, 3 commits

Committed by Alex Dai

GuC-based submission is mostly the same as execlist mode, up to intel_logical_ring_advance_and_submit(), where the context being dispatched would be added to the execlist queue; at this point we submit the context to the GuC backend instead. There are, however, a few other changes also required, notably:

1. Contexts must be pinned at GGTT addresses accessible by the GuC, i.e. NOT in the range [0..WOPCM_SIZE), so we have to add the PIN_OFFSET_BIAS flag to the relevant GGTT-pinning calls.

2. The GuC's TLB must be invalidated after a context is pinned at a new GGTT address.

3. GuC firmware uses the one page before the Ring Context as shared data. Therefore, whenever the driver wants the base address of the LRC, it offsets it by one page. LRC_PPHWSP_PN is defined as the page number of the LRCA.

4. In the work queue used to pass requests to the GuC, the GuC firmware requires the ring-tail offset to be represented as an 11-bit value, expressed in QWords. Therefore, the ringbuffer size must be reduced to the representable range (4 pages).

v2: Defer adding #defines until needed [Chris Wilson]; rationalise type declarations [Chris Wilson]
v4: Squashed kerneldoc patch into here [Daniel Vetter]
v5: Update request->tail in code common to both GuC and execlist modes. Add a private version of lr_context_update(), as sharing the execlist version leads to race conditions when the CPU and the GuC both update TAIL in the context image. Conversion of the error-captured HWS page to a string must account for the offset from the start of the object to the actual HWS (LRC_PPHWSP_PN).

Issue: VIZ-4884
Signed-off-by: Alex Dai <yu.dai@intel.com>
Signed-off-by: Dave Gordon <david.s.gordon@intel.com>
Reviewed-by: Tom O'Rourke <Tom.O'Rourke@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

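Point 3 above is usually expressed as a small set of page-number constants for the logical ring context layout; a hedged sketch of what those could look like:

```c
/* Hedged sketch of the LRC page layout implied by point 3: page 0 is
 * shared with the GuC firmware, page 1 is the per-process HWSP, and the
 * register state starts on the page after that. */
#define LRC_GUCSHR_PN	(0)
#define LRC_PPHWSP_PN	(LRC_GUCSHR_PN + 1)
#define LRC_STATE_PN	(LRC_PPHWSP_PN + 1)

/* Code that needs the base of the register state then offsets by
 * LRC_STATE_PN pages from the start of the context object, e.g.
 * lrc_state_addr = ctx_ggtt_addr + LRC_STATE_PN * PAGE_SIZE; */
```
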
Committed by Dave Gordon

GuC submission is basically execlist submission, but with the GuC handling the actual writes to the ELSP and the resulting context switch interrupts. So to describe a context for submission via the GuC, we need one of the same functions used in execlist mode. This commit exposes one such function, changing its name to better describe what it does (it's related to logical ring contexts rather than to execlists per se).

v2: Replaces the previous "drm/i915: Move execlists defines from .c to .h"
v3: Incorporates a change to one of the functions exposed here that was previously part of an internal patch, but which was omitted from the version recently committed to drm-intel-nightly: 7a01a0a2 drm/i915/lrc: Update PDPx registers with lri commands. So we reinstate this change here.
v4: Drop the v3 change, update function parameters due to collision with 8ee36152 drm/i915: Convert execlists_ctx_descriptor() for requests
v5: Don't expose execlists_update_context() after all. The current version is no longer compatible with GuC submission; trying to share the execlist version of this function results in both the GuC and the CPU updating TAIL in the context image, with bad results when they get out of step. The GuC submission path now has its own private version that just updates the ringbuffer start address, and not TAIL or PDPx.
v6: Rebased

Issue: VIZ-4884
Signed-off-by: Dave Gordon <david.s.gordon@intel.com>
Reviewed-by: Tom O'Rourke <Tom.O'Rourke@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

Committed by Michel Thierry

In 64b (48-bit canonical) PPGTT addressing, the PDP0 register contains the base address of the PML4, while the other PDP registers are ignored. In LRC, the addressing mode must be specified in every context descriptor, and the base address of the PML4 is stored in the reg state.

v2: The PML4 update in the legacy context switch is left for historic reasons; the preferred mode of operation is lrc-context-based submission.
v3: s/gen8_map_page_directory/gen8_setup_page_directory and s/gen8_map_page_directory_pointer/gen8_setup_page_directory_pointer. Also, clflush will be needed for bxt. (Akash)
v4: Squashed lrc-specific code and used a macro to set the PML4 register.
v5: Rebase after Mika's ppgtt cleanup / scratch merge patch series. The PDP update in bb_start is only for legacy 32b mode.
v6: Rebase after the final merged version of Mika's ppgtt/scratch patches.
v7: There is no need to update the pml4 register value in execlists_update_context. (Akash)
v8: Moved the pd and pdp setup functions to a previous patch; they do not belong here. (Akash)
v9: Check USES_FULL_48BIT_PPGTT instead of GEN8_CTX_ADDRESSING_MODE in gen8_emit_bb_start to decide whether emitting pdps is needed. (Akash)

Cc: Akash Goel <akash.goel@intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Michel Thierry <michel.thierry@intel.com> (v2+)
Reviewed-by: Akash Goel <akash.goel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

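In the reg state this boils down to writing the PML4 base into the PDP0 dword pair when 48-bit addressing is in use; a hedged sketch of the macro mentioned in v4 (exact upstream form may differ):

```c
/* Hedged sketch: in 48-bit mode only PDP0 matters -- it holds the PML4
 * base address -- so the context image stores that single address split
 * across the PDP0 upper/lower value dwords. */
#define ASSIGN_CTX_PML4(ppgtt, reg_state) do { \
	u64 daddr = px_dma(&(ppgtt)->pml4); \
	(reg_state)[CTX_PDP0_UDW + 1] = upper_32_bits(daddr); \
	(reg_state)[CTX_PDP0_LDW + 1] = lower_32_bits(daddr); \
} while (0)
```
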