- 29 Sep 2010, 2 commits
-
Committed by Chris Wilson
Replaced by tracepoints.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
Committed by Chris Wilson
This has bitrotted through disuse and been superseded by tracing and debugfs.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
- 28 Sep 2010, 1 commit
-
Committed by Chris Wilson
With multiple rings generating requests independently, the outstanding requests must also be tracked independently.
Reported-by: Wang Jinjin <jinjin.wang@intel.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=30380
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
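For illustration only, a minimal sketch (with hypothetical type and field names, not the driver's actual structures) of per-ring request tracking: each ring keeps its own ordered list of outstanding requests rather than sharing a single device-wide list.

    #include <linux/list.h>
    #include <linux/types.h>

    struct gpu_request {
            u32 seqno;               /* seqno emitted on the owning ring */
            struct list_head link;   /* position on that ring's request list */
    };

    struct gpu_ring {
            struct list_head request_list;   /* outstanding requests, oldest first */
    };

    static void ring_add_request(struct gpu_ring *ring, struct gpu_request *request)
    {
            /* Requests retire in order per ring, so append at the tail. */
            list_add_tail(&request->link, &ring->request_list);
    }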
-
- 27 Sep 2010, 1 commit
-
Committed by Chris Wilson
Introduced by 48b956c5; I had thought I had already fixed this. Oh well.
Reported-by: Sitsofe Wheeler <sitsofe@yahoo.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
- 26 Sep 2010, 1 commit
-
Committed by Chris Wilson
Daniel Vetter pointed out that in this case it would be clearer and cleaner to use a spinlock instead of a mutex to protect the per-file request list manipulation. Make it so.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
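A minimal sketch of the pattern, with illustrative names: the per-file list manipulation is short and never sleeps, so a spinlock suffices where a mutex would be heavier than needed.

    #include <linux/list.h>
    #include <linux/spinlock.h>

    struct client_file {
            spinlock_t lock;                 /* protects request_list only;
                                                initialised elsewhere with spin_lock_init() */
            struct list_head request_list;   /* requests issued by this client */
    };

    static void client_retire_request(struct client_file *file,
                                      struct list_head *request_link)
    {
            spin_lock(&file->lock);
            list_del(request_link);          /* O(1) and non-blocking */
            spin_unlock(&file->lock);
    }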
-
- 25 Sep 2010, 3 commits
-
Committed by Chris Wilson
... and combine it with the wedged completion handler.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
Committed by Chris Wilson
Owain Ainsworth reported an issue with the interaction between the hangcheck and userspace immediately (and permanently) falling back to s/w rasterisation. In order to break the mutex and begin resetting the GPU, we must abort the current operation (usually within the wait) and climb sufficiently far back up the call chain to drop the mutex. In his implementation, Owain has a loop within the ioctl handler to detect the hang and then sleep until the error handler has run. I've chosen to return to userspace and report an EAGAIN, which should trigger the userspace ioctl handler to repeat the call (simply because it felt less invasive...). Before hitting a wedged GPU, we then wait upon completion of the error handler.
Reported-by: Owain G. Ainsworth <zerooa@googlemail.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
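For illustration, the userspace side of this contract is the usual retry loop around the ioctl; libdrm's drmIoctl() wrapper behaves along these lines.

    #include <sys/ioctl.h>
    #include <errno.h>

    /* Restart the ioctl if the driver bailed out with EAGAIN (e.g. while the
     * GPU reset handler runs) or the call was interrupted by a signal. */
    static int retrying_ioctl(int fd, unsigned long request, void *arg)
    {
            int ret;

            do {
                    ret = ioctl(fd, request, arg);
            } while (ret == -1 && (errno == EAGAIN || errno == EINTR));

            return ret;
    }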
-
Committed by Chris Wilson
Avoid causing latencies in other clients by not taking the global struct mutex, moving the per-client request manipulation to a local per-client mutex instead. For example, this allows a compositor to schedule a page-flip (through X) whilst an OpenGL application is monopolising the GPU.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
- 24 Sep 2010, 1 commit
-
Committed by Chris Wilson
We need to drain the pending flips prior to disabling the pipe during modeset, and these need to be done in an uninterruptible fashion.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
- 23 Sep 2010, 2 commits
-
Committed by Chris Wilson
This is already performed with the pipelined flush, so by the time we schedule the flush in the page-flip, the ring is NULL and we oops instead.
Reported-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
Committed by Chris Wilson
A minor typo caused a single fence register to be incorrectly programmed, resulting in occasional tiling corruption.
Reported-and-tested-by: Hans de Bruin <bruinjm@xs4all.nl>
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=18962
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: stable@kernel.org
-
- 22 Sep 2010, 2 commits
-
Committed by Chris Wilson
We are not currently using it as intended, so remove the complication.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
Committed by Chris Wilson
Otherwise we will hit a list handling assertion when moving the object to the inactive list.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
- 21 Sep 2010, 12 commits
-
Committed by Chris Wilson
During i915_gem_create_mmap_offset(), if the subsystem reports an error code, use it.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
Committed by Chris Wilson
Keep a list of pinned objects and display it via debugfs. Now all objects that exist in the GTT are always tracked on one of the active, flushing, inactive or pinned lists.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
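A rough sketch of exposing such a list through debugfs (names, fields and the file layout are assumptions for illustration; locking around the list walk is omitted):

    #include <linux/debugfs.h>
    #include <linux/fs.h>
    #include <linux/list.h>
    #include <linux/module.h>
    #include <linux/seq_file.h>
    #include <linux/types.h>

    struct gem_object {
            struct list_head pinned_link;
            u64 gtt_offset;
            u64 size;
    };

    static LIST_HEAD(pinned_list);   /* objects currently pinned in the GTT */

    static int pinned_show(struct seq_file *m, void *unused)
    {
            struct gem_object *obj;

            list_for_each_entry(obj, &pinned_list, pinned_link)
                    seq_printf(m, "pinned: offset=0x%llx size=%llu\n",
                               (unsigned long long)obj->gtt_offset,
                               (unsigned long long)obj->size);
            return 0;
    }

    static int pinned_open(struct inode *inode, struct file *file)
    {
            return single_open(file, pinned_show, NULL);
    }

    static const struct file_operations pinned_fops = {
            .owner   = THIS_MODULE,
            .open    = pinned_open,
            .read    = seq_read,
            .llseek  = seq_lseek,
            .release = single_release,
    };
    /* registered with debugfs_create_file("gem_pinned", 0444, root, NULL, &pinned_fops); */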
-
Committed by Chris Wilson
If we have queued a page flip on the current fb and then request a mode change, wait until the page flip completes before performing the new request.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
Committed by Chris Wilson
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
Committed by Chris Wilson
Track whether the GPU requires the fence for the execution of a batch buffer, and so only wait upon the retirement of the object's last rendering seqno if the fence is in use by the GPU.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
Committed by Chris Wilson
Use the ring abstraction to hide the details of having to choose the appropriate flushing method.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
Committed by Xiang, Haihao
Introduce intel_init_render_ring_buffer() and intel_init_bsd_ring_buffer() for ring initialization.
Signed-off-by: Xiang, Haihao <haihao.xiang@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
Committed by Chris Wilson
Clear the GPU read domain for the inactive objects on a reset so that they are correctly invalidated on reuse.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
Committed by Chris Wilson
Owain Ainsworth noticed that the reset code failed to clear the flushing list, leaving the driver in an inconsistent state following a hung GPU.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
Committed by Chris Wilson
When flushing the GPU domains, we emit a flush on *both* rings, even though they share a unified cache. Only emit the flush on the currently active ring.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
Committed by Chris Wilson
Change the semantics to retire any buffer older than the current seqno, rather than repeatedly calling the function to retire the buffer at the head of the list matching the request seqno. Whilst this should have no semantic impact on the implementation, Daniel was wondering if there was a bug where we might miss a retirement and so end up with a continually growing active list.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
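A sketch of a retirement loop under these semantics, using the usual wrap-safe seqno comparison (names are illustrative, not the driver's code):

    #include <linux/list.h>
    #include <linux/types.h>

    struct gpu_request {
            u32 seqno;
            struct list_head link;
    };

    /* True if 'completed' is at or beyond 'seqno', allowing for 32-bit wrap. */
    static bool seqno_passed(u32 completed, u32 seqno)
    {
            return (s32)(completed - seqno) >= 0;
    }

    static void retire_requests(struct list_head *requests, u32 completed)
    {
            struct gpu_request *request, *next;

            list_for_each_entry_safe(request, next, requests, link) {
                    if (!seqno_passed(completed, request->seqno))
                            break;   /* list is ordered oldest to newest */
                    list_del(&request->link);
                    /* ... free the request and move its buffers to inactive ... */
            }
    }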
-
Committed by Chris Wilson
Avoid confusion between i965g, meaning Broadwater, and the gen4+ chipset families.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
- 17 Sep 2010, 1 commit
-
Committed by Chris Wilson
With 5 places to update when adding handling for fence registers, it is easy to overlook one or two. Correct that oversight, but fence management should be improved before a new set of registers is added.
Bugzilla: https://bugs.freedesktop.org/show_bug?id=30199
Original patch by: Yuanhan Liu <yuanhan.liu@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: stable@kernel.org
-
- 15 Sep 2010, 3 commits
-
Committed by Chris Wilson
This can always be re-added should somebody find a use...
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
Committed by Chris Wilson
As we currently may need to acquire a fence register during a modeset, we need to be able to do so in an uninterruptible manner. So expose that parameter to the callers of the fence management code.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
Committed by Chris Wilson
This ensures that we do wait upon the flushes to complete if necessary and avoid visual tears, whilst enabling pipelined page-flips.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
- 14 Sep 2010, 3 commits
-
Committed by Chris Wilson
I pulled the wrong version of the patch from Daniel Vetter, which was missing the read barriers -- and the one that was causing all the trouble was in i915_gem_object_put_fence_reg(), leading to GPU hangs on gen3.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
Committed by Chris Wilson
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
Committed by Chris Wilson
By reducing the hangcheck frequency we check less often, conserving resources, and still detect a lock up quickly. On a fast machine with a slow GPU (like a Core2 paired with a 945G) it is easy for the hangcheck to misfire as we check too fast. Also, once hung, if we fail to completely reset the chip we have a nasty habit of proclaiming a hang many times a second and generating a strobe-like display.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
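A minimal sketch of re-arming a periodic hangcheck timer with a longer period; the constant and names are assumptions for illustration, and timer setup is omitted.

    #include <linux/jiffies.h>
    #include <linux/timer.h>

    #define HANGCHECK_PERIOD_MS 1500   /* illustrative period, not the driver's constant */

    static struct timer_list hangcheck_timer;   /* initialised elsewhere */

    static void queue_hangcheck(void)
    {
            /* Checking less often still catches a real hang quickly while
             * avoiding false positives when the GPU is merely slow. */
            mod_timer(&hangcheck_timer,
                      jiffies + msecs_to_jiffies(HANGCHECK_PERIOD_MS));
    }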
-
- 08 Sep 2010, 8 commits
-
Committed by Chris Wilson
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
Committed by Chris Wilson
This spinlock only served debugging purposes in a time when we could not be sure of the mutex ever being released upon a GPU hang. As we now should be able to rely on hangcheck to do the job for us (and error reporting should not itself require the struct mutex), we can kill the incomplete attempt at protection.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
Committed by Chris Wilson
By allocating the request prior to writing to the ringbuffer, we can abort the operation without leaving the GPU in an inconsistent state.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
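A sketch of the ordering described here, with illustrative names: allocate the bookkeeping first, so a failure can be reported before anything has been written to the ring.

    #include <linux/errno.h>
    #include <linux/slab.h>
    #include <linux/types.h>

    struct gpu_request {
            u32 seqno;   /* assigned once the commands are actually emitted */
    };

    static int submit_batch(void)
    {
            struct gpu_request *request;

            request = kzalloc(sizeof(*request), GFP_KERNEL);
            if (!request)
                    return -ENOMEM;   /* abort cleanly: the ring is untouched */

            /* ... only now write commands to the ring and queue the request ... */
            return 0;
    }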
-
Committed by Daniel Vetter
... take advantage of the new implicit request issuing of i915_wait_request.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
Committed by Daniel Vetter
One caller (for the pageflip support) wants a purely pipelined flush. Distinguish this case by a new parameter. This will also be useful later on for pipelined fencing.
v2: Simplify the code by depending upon the implicit request emitting of i915_wait_request.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
[ickle: And drop the non-interruptible support in the process.]
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
Committed by Daniel Vetter
By moving one i915_add_request we can solely depend on the new auto-seqno-numbering behaviour.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
Committed by Daniel Vetter
i915_gem_object_move_to_active can handle a zero seqno for us now. And not emitting a request is not fatal here - we'll try to emit a new one if we have to wait for some rendering to complete. In case this assumption ever gets accidentally broken, there's already a BUG_ON to catch it in i915_do_wait_request. So just silently ignore ENOMEM here instead of screwing up the whole drm.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
Committed by Daniel Vetter
... instead of threading flush_domains through the execbuf code to i915_add_request. With this change two small cleanups are possible (likewise the majority of the patch):
- The flush_domains parameter of i915_add_request is always 0. Drop it and the corresponding logic.
- Ditto for the seqno parameter of i915_gem_process_flushing_list.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-