- 04 Feb 2014, 1 commit

Submitted by Mika Kuoppala
With full ppgtt, using acthd is not enough to find the guilty batch buffer. We get multiple false positives, as acthd is per vm. Instead of scanning which vm was running on a ring to find the corresponding context, use a different, simpler strategy for finding the batch that caused the gpu hang: if hangcheck has declared a ring to be hung, find the first non-complete request on that ring and claim it was guilty.

v2: Rebase

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=73652
Suggested-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Ben Widawsky <ben@bwidawsk.net> (v1)
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
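
The lookup then reduces to a single ordered walk of the ring's request list. A minimal sketch of the idea, assuming the era's request-list layout and the i915_seqno_passed() helper (the function name here is illustrative):

    struct drm_i915_gem_request *
    find_guilty_request(struct intel_ring_buffer *ring)
    {
        struct drm_i915_gem_request *request;
        u32 completed = ring->get_seqno(ring, false);

        list_for_each_entry(request, &ring->request_list, list) {
            /* Requests retire in order, so the first one the GPU
             * has not completed is the batch that hung. */
            if (!i915_seqno_passed(completed, request->seqno))
                return request;
        }

        return NULL;
    }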

- 10 Sep 2013, 1 commit

Submitted by Chris Wilson
Ignoring the legacy DRI1 code and a couple of special cases (to be discussed later), all access to the ring is mediated through requests. The first write to a ring will grab a seqno and mark the ring as having an outstanding_lazy_request. Either through explicitly adding a request after an execbuffer or through an implicit wait (either by the CPU or by a semaphore), that sequence of writes will be terminated with a request. So we can elide all the intervening writes to the tail register and send the entire command stream to the GPU at once. This will reduce the number of *serialising* writes to the tail register by a factor of 3-5 times (depending upon architecture and number of workarounds, context switches, etc. involved). This becomes even more noticeable when the register write is overloaded with a number of debugging tools.

The astute reader will wonder if it is then possible to overflow the ring with a single command. It is not. When we start a command sequence to the ring, we check for available space and issue a wait if we do not have it. The ring wait will in this case be forced to flush the outstanding register write and then poll the ACTHD for sufficient space to continue.

The exception to the rule where everything is inside a request are a few initialisation cases where we may want to write GPU commands via the CS before userspace wakes up and page flips.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
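
A hedged sketch of the split this creates, with illustrative names: advancing the ring becomes pure bookkeeping, and the single serialising MMIO write is deferred until the request is submitted (write_tail matches the era's ring vfunc):

    static inline void ring_advance(struct intel_ring_buffer *ring)
    {
        /* Cheap: just wrap the software tail, no MMIO involved. */
        ring->tail &= ring->size - 1;
    }

    static void ring_submit(struct intel_ring_buffer *ring)
    {
        /* Expensive: one serialising tail write per request,
         * rather than one per intermediate batch of commands. */
        ring->write_tail(ring, ring->tail);
    }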

- 06 Sep 2013, 1 commit

Submitted by Mika Kuoppala
Score and action reveal what all the rings were doing and why a hang was declared. Add an idle state so that we can distinguish between a waiting and an idle ring.

v2: - add idle as a hangcheck action
    - condensed hangcheck status to a single line (Chris)
    - mark active explicitly when we are making progress (Chris)

Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
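
The resulting action set for hangcheck looks roughly like the following sketch; HANGCHECK_IDLE is the state this commit adds, and the exact names and ordering are assumptions:

    enum intel_ring_hangcheck_action {
        HANGCHECK_IDLE = 0,  /* nothing outstanding: not hung, not waiting */
        HANGCHECK_WAIT,      /* blocked on another ring via semaphore */
        HANGCHECK_ACTIVE,    /* making progress */
        HANGCHECK_KICK,      /* stuck, try kicking the ring */
        HANGCHECK_HUNG,      /* no progress, declare a hang */
    };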

- 05 Sep 2013, 2 commits

Submitted by Chris Wilson
It is possible for us to be forced to perform an allocation for the lazy request whilst running the shrinker. This allocation may fail, leaving us unable to reclaim any memory, leading to premature OOM. A neat solution to the problem is to preallocate the request at the same time as acquiring the seqno for the ring transaction. This means that we can report ENOMEM prior to touching the rings.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
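
A minimal sketch of the preallocation, under assumed names (preallocated_lazy_request reflects the spirit of the change; the helper itself is illustrative):

    static int ring_prepare_request(struct intel_ring_buffer *ring)
    {
        struct drm_i915_gem_request *request;

        if (ring->preallocated_lazy_request != NULL)
            return 0;

        request = kmalloc(sizeof(*request), GFP_KERNEL);
        if (request == NULL)
            return -ENOMEM; /* reported before any ring state is touched */

        ring->preallocated_lazy_request = request;
        return 0;
    }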

Submitted by Chris Wilson
Prior to preallocating a request for lazy emission, rename the existing field to make way (and differentiate the seqno from the request struct).

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

- 04 Sep 2013, 1 commit

Submitted by Chris Wilson
We now have more devices using ring->private than not, and they all want the same structure. Worse, I would like to use a scratch page from outside of intel_ringbuffer.c and so for convenience would like to reuse ring->private. Embed the object into the struct intel_ringbuffer so that we can keep the code clean.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

- 23 Aug 2013, 1 commit

Submitted by Damien Lespiau
The code directly uses the registers and ring->mmio_base.

Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

- 22 Aug 2013, 1 commit

Submitted by Jani Nikula
The short lowercase names are bound to collide. The default warnings don't even warn about shadowing.

Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

- 11 Jul 2013, 2 commits

Submitted by Daniel Vetter
With the simplified locking there's no reason any more to keep the refcounts separate.

v2: Re-add the lost comment that ring->irq_refcount is protected by dev_priv->irq_lock.

Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

Submitted by Daniel Vetter
Now that the rps interrupt locking isn't clearly separated (at least conceptually) from all the other interrupt locking, having a different lock stopped making sense: it protects much more than just the rps workqueue it started out with. But with the addition of VECS the separation started to blur, and resulted in some more complex locking for the ring interrupt refcount. With this we can (again) unify the ringbuffer irq refcounts without causing massive confusion, but that's for the next patch.

v2: Explain better why the rps.lock once made sense and why it no longer does, requested by Ben.

Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

- 13 Jun 2013, 1 commit

Submitted by Mika Kuoppala
For guilty batchbuffer analysis later on when rings are reset, store what state the ring was in when the hang was declared. This helps to weed out the waiting rings from the active ones.

Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

- 11 Jun 2013, 1 commit

Submitted by Chris Wilson
If we detect a ring is in a valid wait for another, just let it be. Eventually it will either begin to progress again, or the entire system will come grinding to a halt and then hangcheck will fire as soon as the deadlock is detected.

This error was foretold by Ben in

commit 05407ff8
Author: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Date:   Thu May 30 09:04:29 2013 +0300

    drm/i915: detect hang using per ring hangcheck_score

"If ring B is waiting on ring A via semaphore, and ring A is making progress, albeit slowly - the hangcheck will fire. The check will determine that A is moving, however ring B will appear hung because the ACTHD doesn't move. I honestly can't say if that's actually a realistic problem to hit; it probably implies the timeout value is too low."

v2: Make sure we don't even incur the KICK cost whilst waiting.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=65394
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
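
In hangcheck terms the rule becomes: a stuck head is only a hang if the ring is not legitimately waiting. A sketch with assumed helper names (the semaphore-inspection helpers here are illustrative, not the patch's actual functions):

    static enum intel_ring_hangcheck_action
    ring_stuck_action(struct intel_ring_buffer *ring, u32 acthd)
    {
        if (acthd != ring->hangcheck.acthd)
            return HANGCHECK_ACTIVE;

        /* A valid semaphore wait is left alone: no kick, no score. */
        if (ring_waiting_on_semaphore(ring) &&
            semaphore_target_is_progressing(ring))
            return HANGCHECK_WAIT;

        return HANGCHECK_HUNG;
    }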

- 07 Jun 2013, 1 commit

Submitted by Chris Wilson
This is required for tracking render damage for use with FBC and will be used in subsequent patches.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

- 03 Jun 2013, 1 commit

Submitted by Mika Kuoppala
Keep track of ring seqno progress, and if no progress is detected, declare a hang. Use the actual head (acthd) to distinguish between a stuck ring and a looping batchbuffer. A stuck ring will be kicked to trigger progress.

This commit adds a hard limit for batchbuffer completion time. If batchbuffer completion time is more than 4.5 seconds, the gpu will be declared hung.

Review comment from Ben which nicely clarifies the semantic change:

"Maybe I'm just stating the functional changes of the patch, but in case they were unintended here is what I see as potential issues:

1. If ring B is waiting on ring A via semaphore, and ring A is making progress, albeit slowly - the hangcheck will fire. The check will determine that A is moving, however ring B will appear hung because the ACTHD doesn't move. I honestly can't say if that's actually a realistic problem to hit; it probably implies the timeout value is too low.

2. There's also another corner case on the kick. If the seqno = 2 (though not stuck), and on the 3rd hangcheck, the ring is stuck, and we try to kick it... we don't actually try to find out if the kick helped."

v2: use acthd to detect a stuck ring from a loop (Ben Widawsky)
v3: Use acthd to check when the ring needs kicking. Declare a hang on the third time, in order to give time for kick_ring to take effect.
v4: Update commit msg

Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
[danvet: Paste in Ben's review comment.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
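
Putting the pieces together, the per-ring check each hangcheck period looks roughly like this sketch; the weights, the threshold and kick_ring() are illustrative, chosen so that sustained zero progress crosses the limit in a few seconds:

    #define HANGCHECK_HUNG_THRESHOLD 30 /* illustrative */

    static void hangcheck_sample(struct intel_ring_buffer *ring)
    {
        u32 seqno = ring->get_seqno(ring, false);
        u32 acthd = intel_ring_get_active_head(ring);

        if (seqno != ring->hangcheck.seqno) {
            ring->hangcheck.score = 0;  /* seqno moved: progress */
        } else if (acthd != ring->hangcheck.acthd) {
            ring->hangcheck.score += 1; /* batch running (or looping) */
        } else {
            ring->hangcheck.score += 5; /* head stuck: kick and penalise */
            kick_ring(ring);
        }

        ring->hangcheck.seqno = seqno;
        ring->hangcheck.acthd = acthd;

        if (ring->hangcheck.score > HANGCHECK_HUNG_THRESHOLD)
            i915_handle_error(ring->dev, true);
    }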

- 01 Jun 2013, 7 commits

Submitted by Ben Widawsky
v2: Use the correct lock to protect PM interrupt regs; this was accidentally lost from earlier (Haihao)
    Fix return types (Ben)

Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

Submitted by Ben Widawsky
It's overkill on older gens, but it's useful for newer gens.

Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

Submitted by Ben Widawsky
v2: Add set_seqno which didn't exist before rebase (Haihao)

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Xiang, Haihao <haihao.xiang@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

Submitted by Ben Widawsky
The video enhancement command streamer is a new ring on HSW which does what it sounds like it does. This patch provides the most minimal inception of the ring.

In order to support a new ring, we need to bump the number. The patch may look trivial to the untrained eye, but bumping the number of rings is a bit scary. As such the patch is not terribly useful by itself, but a pretty nice place to find issues during a bisection.

Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

Submitted by Ben Widawsky
This replaces the existing MBOX update code with a more generalized calculation for emitting mbox updates. We also create a sentinel for doing the updates so we can more abstractly deal with the rings.

When doing MBOX updates the code must be aware of the /other/ rings. Until now the platforms which supported semaphores had a fixed number of rings, and so it made sense for the code to be very specialized (hardcoded).

The patch does contain a functional change, but should have no behavioral changes.

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

Submitted by Ben Widawsky
Semaphores are tied very closely to the rings in the GPU. This trivial patch adds comments to the existing code so that when we add new rings we can include comments there as well. It also helps distinguish the ring-to-semaphore mailbox interactions by using the ring name in the semaphore data structures.

This patch should have no functional impact.

v2: The English parts (as opposed to register names) of the comments were reversed. (Damien)

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

Submitted by Mika Kuoppala
Instead of relying on acthd, track ring seqno progression to detect if the ring has hung.

v2: put hangcheck stuff inside a struct (Chris Wilson)
v3: initialize hangcheck.seqno (Ben Widawsky)

Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

- 06 May 2013, 1 commit

Submitted by Chris Wilson
In order to be notified when the context and all of its associated objects are idle (for if the context maps to a ppgtt) we need a callback from the retire handler. We can arrange this by using the kref get/put of the context for request tracking and by inserting a request to demarcate the switch away from the old context.

[Ben: fixed minor error to make the patch compile, AND s/last_context/from/]

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

- 19 Dec 2012, 2 commits

Submitted by Mika Kuoppala
The hardware status page needs to have a proper seqno set, as our initial seqno can be arbitrary. If the initial seqno is close to the wrap boundary on init and i915_seqno_passed() (31-bit space) refers to a hw status page which contains zero, an erroneous result will be returned.

v2: clear mboxes and set the hws page directly instead of going through the rings. Suggested by Chris Wilson.
v3: hws needs to be updated for all gens. Noticed by Chris Wilson.

References: https://bugs.freedesktop.org/show_bug.cgi?id=58230
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
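
For reference, the wrap-safe comparison the commit refers to is the classic signed-difference test, which is only meaningful within a 31-bit window of the current seqno, hence the sensitivity to a zeroed status page near the wrap boundary:

    static inline bool
    i915_seqno_passed(uint32_t seq1, uint32_t seq2)
    {
        return (int32_t)(seq1 - seq2) >= 0;
    }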

Submitted by Mika Kuoppala
In preparation for setting per-ring initial seqno values, add ring::set_seqno().

Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

- 18 Dec 2012, 1 commit

Submitted by Daniel Vetter
Now that Chris Wilson demonstrated that the key for stability on early gen 2 is to simply _never_ exchange the physical backing storage of batch buffers, I've tried a stab at a kernel solution. Doesn't look too nefarious imho, now that I don't try to be too clever for my own good any more.

v2: After discussing the various techniques, we've decided to always blit batches on the suspect devices, but allow userspace to opt out of the kernel workaround and assume full responsibility for providing coherent batches. The principal reason is that avoiding the blit does improve performance in a few key microbenchmarks and also in cairo-trace replays.

Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
[danvet: - Drop the hunk which uses HAS_BROKEN_CS_TLB to implement the ring wrap w/a. Suggested by Chris Wilson.
 - Also add the ACTHD check from Chris Wilson for the error state dumping, so that we still catch batches when userspace opts out of the w/a.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

- 06 Dec 2012, 1 commit

Submitted by Mika Kuoppala
If there are pre-wrap values in the semaphore mbox registers after a wrap, syncing against some after-wrap request will complete immediately. Fix this by emitting ring commands to set the mbox registers to zero when the wrap happens.

v2: Use __intel_ring_begin to emit ring commands, from Chris Wilson.

Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
[danvet: Add a small comment to handle_seqno_wrap.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
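
A hedged sketch of the fix's shape: emit a register write of zero to each mailbox from the ring itself. Per the v2 note the real patch uses __intel_ring_begin to avoid recursing into seqno allocation; the helper below is illustrative:

    static int zero_mbox(struct intel_ring_buffer *ring, u32 mbox_reg)
    {
        int ret;

        ret = intel_ring_begin(ring, 4);
        if (ret)
            return ret;

        intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(1));
        intel_ring_emit(ring, mbox_reg); /* semaphore mailbox register */
        intel_ring_emit(ring, 0);        /* clear the pre-wrap value */
        intel_ring_emit(ring, MI_NOOP);
        intel_ring_advance(ring);

        return 0;
    }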

- 04 Dec 2012, 1 commit

Submitted by Ville Syrjälä
From BSpec: "If the Ring Buffer Head Pointer and the Tail Pointer are on the same cacheline, the Head Pointer must not be greater than the Tail Pointer." The easiest way to enforce this is to reduce the reported ring space.

References:
Gen2 BSpec "1. Programming Environment" / 1.4.4.6 "Ring Buffer Use"
Gen3 BSpec "vol1c Memory Interface Functions" / 2.3.4.5 "Ring Buffer Use"
Gen4+ BSpec "vol1c Memory Interface and Command Stream" / 5.3.4.5 "Ring Buffer Use"

v2: Include the exact BSpec references in the description
v3: s/64/I915_RING_FREE_SPACE, and add the BSpec information to the code

Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
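
The enforcement amounts to holding back a cacheline's worth of bytes in the free-space calculation, roughly as below (a sketch in the idiom of the era; I915_RING_FREE_SPACE is the constant the v3 note introduces):

    #define I915_RING_FREE_SPACE 64

    static int ring_space(struct intel_ring_buffer *ring)
    {
        /* Reserve a cacheline so head can never catch up to tail
         * within the same cacheline. */
        int space = (ring->head & HEAD_ADDR) -
                    (ring->tail + I915_RING_FREE_SPACE);
        if (space < 0)
            space += ring->size;
        return space;
    }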

- 29 Nov 2012, 2 commits

Submitted by Chris Wilson
Replace the wait for the ring to be clear with the more common wait for the ring to be idle. The principal advantage is one less exported intel_ring_wait function, and the removal of a hardcoded value.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

Submitted by Chris Wilson
Based on the work by Mika Kuoppala, we realised that we need to handle seqno wraparound prior to committing our changes to the ring. The most obvious point then is to grab the seqno inside intel_ring_begin(), and then to reuse that seqno for all ring operations until the next request. As intel_ring_begin() can fail, the callers must already be prepared to handle such failure, and so we can safely add further checks.

This patch looks like it should be split up into the interface changes and the tweaks to move seqno wrapping from the execbuffer into the core seqno increment. However, I found no easy way to break it into incremental steps without introducing further broken behaviour.

v2: Mika found a silly mistake and a subtle error in the existing code; inside i915_gem_retire_requests() we were resetting the sync_seqno of the target ring based on the seqno from this ring - which are only related by the order of their allocation, not retirement. Hence we were applying the optimisation that the rings were synchronised too early; fortunately the only real casualty there is the handling of seqno wrapping.

v3: Do not forget to reset the sync_seqno upon module reinitialisation, ala resume.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=863861
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com> [v2]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

- 12 Nov 2012, 1 commit

Submitted by Jesse Barnes
So store into the scratch space of the HWS to make sure the invalidate occurs.

v2: use GTT address space for the store, clean up #defines (Chris)
v3: use the correct #define in the blt ring flush (Chris)

Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Reviewed-by: Antti Koskipää <antti.koskipaa@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
References: https://bugs.launchpad.net/ubuntu/+source/xserver-xorg-video-intel/+bug/1063252
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

- 18 Oct 2012, 1 commit

Submitted by Chris Wilson
With the introduction of per-process GTT space, the hardware designers thought it wise to also limit the ability to write to MMIO space to only a "secure" batch buffer. The ability to rewrite registers is the only way to program the hardware to perform certain operations like scanline waits (required for tear-free windowed updates). So we either have a choice of adding an interface to perform those synchronized updates inside the kernel, or we permit certain processes the ability to write to the "safe" registers from within their command streams. This patch exposes the ability to submit a SECURE batch buffer to DRM_ROOT_ONLY|DRM_MASTER processes.

v2: Haswell split up bit 8 into a ppgtt bit (still bit 8) and a security bit (bit 13, accidentally not set). Also add a comment explaining why secure batches need a global gtt binding.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> (v1)
[danvet: added hsw fixup.]
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

- 10 Aug 2012, 1 commit

Submitted by Chris Wilson
Avoid the forcewake overhead when simply retiring requests, as often the last-seen seqno is good enough to satisfy the retirement process and will promptly be re-run in any case. Only ensure that we force the coherent seqno read when we are explicitly waiting upon a completion event, to be sure that none go missing, and also for when we are reporting seqno values in case of error or debugging.

This greatly reduces the load for userspace using the busy-ioctl to track active buffers; for instance, halving the CPU used by X in pushing the pixels from a software render (flash). The effect will be even more magnified with userptr and so providing a zero-copy upload path in that instance, or in similar instances where X is simply compositing DRI buffers.

v2: Reverse the polarity of the tachyon stream. Daniel suggested that 'force' was too generic for the parameter name and that 'lazy_coherency' better encapsulated the semantics of it being an optimization and its purpose. Also notice that gen6_get_seqno() is only used by gen6/7 chipsets, and so the test for IS_GEN6 || IS_GEN7 is redundant in that function.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
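
The gen6/7 seqno read after this change looks roughly like the following sketch: only callers that cannot tolerate a stale value pay for the ordering read:

    static u32
    gen6_ring_get_seqno(struct intel_ring_buffer *ring, bool lazy_coherency)
    {
        /* Force ordering between the seqno write and our read by first
         * reading back a CS register (e.g. ACTHD); skipped when a
         * slightly stale seqno is acceptable, i.e. when retiring. */
        if (!lazy_coherency)
            intel_ring_get_active_head(ring);
        return intel_read_status_page(ring, I915_GEM_HWS_INDEX);
    }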

- 26 Jul 2012, 2 commits

Submitted by Chris Wilson
By moving the function to intel_ringbuffer and currying the appropriate parameter, hopefully we make the callsites easier to read and understand.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

Submitted by Chris Wilson
This is now handled by a global flag to ensure we emit a flush before the next serialisation point (if we failed to queue one previously).

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

- 20 Jun 2012, 1 commit

Submitted by Daniel Vetter
This is just the minimal patch to disable all this code so that we can do decent amounts of QA before we rip it all out.

The complicating thing is that we need to flush the gpu caches after the batchbuffer is emitted. Which is past the point of no return where execbuffer can't fail any more (otherwise we risk submitting the same batch multiple times). Hence we need to add a flag to track whether any caches associated with that ring are dirty. And emit the flush in add_request if that's the case.

Note that this has quite a few behaviour changes:
- Caches get flushed/invalidated unconditionally.
- Invalidation now happens after potential inter-ring sync.

I've bantered around a bit with Chris on irc about whether this fixes anything, and it might or might not. The only thing clear is that with these changes it's much easier to reason about correctness.

Also rip out a lone get_next_request_seqno in the execbuffer retire_commands function. I've dug around and I couldn't figure out why that is still there; with the outstanding lazy request stuff it shouldn't be necessary.

v2: Chris Wilson complained that I also invalidate the read caches when flushing after a batchbuffer. Now optimized.

v3: Added some comments to explain the new flushing behaviour.

Cc: Eric Anholt <eric@anholt.net>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
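
A sketch of the dirty-flag mechanism described above, with abbreviated and illustrative helper names (the flush callback signature matches the era's ring interface):

    static void ring_mark_caches_dirty(struct intel_ring_buffer *ring)
    {
        /* A flush is now owed before the next serialisation point. */
        ring->gpu_caches_dirty = true;
    }

    static int ring_flush_if_dirty(struct intel_ring_buffer *ring)
    {
        int ret;

        if (!ring->gpu_caches_dirty)
            return 0;

        ret = ring->flush(ring, 0, I915_GEM_GPU_DOMAINS);
        if (ret)
            return ret;

        ring->gpu_caches_dirty = false;
        return 0;
    }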

- 14 Jun 2012, 3 commits

Submitted by Ben Widawsky
From http://intellinuxgraphics.org/documentation/SNB/IHD_OS_Vol1_Part3.pdf

[DevSNB] If Flush TLB invalidation Mode is enabled, it's the driver's responsibility to invalidate the TLBs at least once after the previous context switch, after any GTT mappings changed (including new GTT entries). This can be done by a pipelined PIPE_CONTROL with TLB inv bit set immediately before MI_SET_CONTEXT.

On GEN7 the invalidation mode is explicitly set, but this appears to be lacking for GEN6. Since I don't know the history on this, I've decided to dynamically read the value at ring init time, and use that value throughout.

v2: better comment (daniel)

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>

Submitted by Ben Widawsky
Implement the context switch code, as well as the interfaces to do the context switch. This patch also doesn't match 1:1 with the RFC patches. The main difference is that, following Daniel's responses, the last context object is now stored instead of the last context. This aids in allowing us to free the context data structure and context object independently.

There is room for optimization: this code will pin the context object until the next context is active. The optimal way to do it is to actually pin the object, move it to the active list, do the context switch, and then unpin it. This allows the eviction code to actually evict the context object if needed.

The context switch code is missing workarounds; they will be implemented in future patches.

v2: actually do obj->dirty=1 in switch (daniel)
    Modified comment around the above
    Remove flags to context switch (daniel)
    Move mi_set_context code to i915_gem_context.c (daniel)
    Remove seqno, use lazy request instead (daniel)

v3: use i915_gem_request_next_seqno instead of outstanding_lazy_request (Daniel)
    remove ids from trace events (Daniel)
    Put the context BO in the instruction domain (Daniel)
    Don't unref the BO if the context switch fails (Chris)

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>

Submitted by Ben Widawsky
Invent an abstraction for a hw context which is passed around through the core functions. The main bit a hw context holds is the buffer object which backs the context. The rest of the members are just helpers. Specifically the ring member could likely go away if we decide to never implement whatever other hw context support exists.

Of note here is the introduction of the 64k alignment constraint for the BO. If contexts become heavily used, we should consider tweaking this down to 4k. Until the contexts are merged and tested a bit, though, I think 64k is a nice start (based on docs).

Since we don't yet switch contexts, there is really not much complexity here. Creation/destruction works pretty much as one would expect. An idr is used to generate the context id numbers, which are unique per file descriptor.

v2: add DRM_DEBUG_DRIVERS to distinguish ENOMEM failures (ben)
    convert a BUG_ON to WARN_ON; default destruction is still fatal (ben)

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>

- 20 May 2012, 1 commit

Submitted by Chris Wilson
In many places we wish to iterate over the rings associated with the GPU, so refactor them to use a common macro.

Along the way, there are a few code removals that should be side-effect free and some rearrangement which should only have a cosmetic impact, such as error-state.

Note that this slightly changes the semantics in the hangcheck code: we now always cycle through all enabled rings instead of short-circuiting the logic.

v2: Pull in a couple of suggestions from Ben and Daniel for intel_ring_initialized() and not removing the warning (just moving them to a new home, closer to the error).

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
[danvet: Added note to commit message about the small behaviour change, suggested by Ben Widawsky.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
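
The macro's shape, as a sketch (the real definition lives in the driver headers of the era): iterate all possible ring slots, but only hand back the initialized ones:

    #define for_each_ring(ring__, dev_priv__, i__) \
        for ((i__) = 0; (i__) < I915_NUM_RINGS; (i__)++) \
            if (((ring__) = &(dev_priv__)->ring[(i__)]), \
                intel_ring_initialized((ring__)))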

- 03 May 2012, 1 commit

Submitted by Daniel Vetter
Two things:
- ring->virtual_start is an __iomem pointer; treat it accordingly.
- dev_priv->status_page.page_addr is now always a cpu addr, so no pointer casting is needed for that.

Take the opportunity to remove the unnecessary drm indirection when setting up the ringbuffer iomapping.

v2: Add a compiler barrier before reading the hw status page.

Acked-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>