- 10 July 2013, 3 commits
-
-
Submitted by Chris Wilson

This reverts commit 25ff1195 and the follow-on for Valleyview, commit 2dc8aae0.

    commit 25ff1195
    Author: Chris Wilson <chris@chris-wilson.co.uk>
    Date:   Thu Apr 4 21:31:03 2013 +0100

        drm/i915: Workaround incoherence between fences and LLC across multiple CPUs

    commit 2dc8aae0
    Author: Chris Wilson <chris@chris-wilson.co.uk>
    Date:   Wed May 22 17:08:06 2013 +0100

        drm/i915: Workaround incoherence with fence updates on Valleyview

Jon Bloomfield came up with a plausible explanation and cheap fix (drm/i915: Fix incoherence with fence updates on Sandybridge+) for the race condition, so let's run with it.

This is a candidate for stable, as the old workaround incurs a significant cost (calling wbinvd on all CPUs before performing the register write) for some workloads, as noted by Carsten Emde.

Link: http://lists.freedesktop.org/archives/intel-gfx/2013-June/028819.html
References: https://www.osadl.org/?id=1543#c7602
References: https://bugs.freedesktop.org/show_bug.cgi?id=63825
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Jon Bloomfield <jon.bloomfield@intel.com>
Cc: Carsten Emde <C.Emde@osadl.org>
Cc: stable@vger.kernel.org
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Chris Wilson

This hopefully fixes the root cause behind the workaround added in

    commit 25ff1195
    Author: Chris Wilson <chris@chris-wilson.co.uk>
    Date:   Thu Apr 4 21:31:03 2013 +0100

        drm/i915: Workaround incoherence between fences and LLC across multiple CPUs

Thanks to further investigation by Jon Bloomfield, he realised that the 64-bit register write might be broken up by the hardware into two 32-bit writes (a problem we have encountered elsewhere). This non-atomicity would then cause an issue where a second thread would see an intermediate register state (new high dword, old low dword), and this register would randomly be used in preference to its own thread's register. This would cause the second thread to read from and write into a fairly random tiled location.

Breaking the operation into 3 explicit 32-bit updates (first disable the fence, poke the upper bits, then poke the lower bits and enable) ensures that, given proper serialisation between the 32-bit register write and the memory transfer, the fence value is always consistent.

Armed with this knowledge, we can explain how the previous workaround worked. The key to the corruption is that a second thread sees an erroneous fence register that conflicts with and overrides its own. By serialising the fence update across all CPUs, we have a small window where no GTT access is occurring and so hide the potential corruption. This also leads to the conclusion that the earlier workaround was incomplete.

v2: Be overly paranoid about the order in which fence updates become visible to the GPU to make really sure that we turn the fence off before doing the update, and then only switch the fence on afterwards.

Signed-off-by: Jon Bloomfield <jon.bloomfield@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Carsten Emde <C.Emde@osadl.org>
Cc: stable@vger.kernel.org
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
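A minimal user-space sketch of the three-step ordering described above. The register layout, helper name and fence encoding are illustrative assumptions, not the actual i965 fence register definitions, and the posting reads / barriers a real driver would add after each step are not modelled.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative 64-bit fence register split into two 32-bit MMIO slots. */
static volatile uint32_t fence_lo;   /* low dword; bit 0 assumed to be the enable bit */
static volatile uint32_t fence_hi;   /* high dword */

static void write_fence(uint64_t val)
{
    /* 1. Disable the fence first so no access can observe a half-updated value. */
    fence_lo = 0;

    /* 2. Poke the upper bits while the fence is off. */
    fence_hi = (uint32_t)(val >> 32);

    /* 3. Poke the lower bits and re-enable in the final write. */
    fence_lo = (uint32_t)val;
}

int main(void)
{
    write_fence(0xdeadbeef00001001ull);
    printf("hi=%08x lo=%08x\n", fence_hi, fence_lo);
    return 0;
}
```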
-
Submitted by Chris Wilson

Daniel noticed a problem where, if we wrote to an object with ring A in the middle of a very long running batch, then executed a quick batch on ring B before a batch that reads from the same object, its obj->ring would now point to ring B, but its last_write_seqno would still be relative to ring A. This would allow the user to read from the object before the GPU had completed the write, as set_domain would only check that ring B had passed the last_write_seqno.

To fix this simply (and inelegantly), we bump the last_write_seqno when switching rings so that the last_write_seqno is always relative to the current obj->ring.

This fixes igt/tests/gem_write_read_ring_switch.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: stable@vger.kernel.org
[danvet: Add note about the newly created igt which exercises this bug.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
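A rough sketch of the bookkeeping fix, assuming a simplified object/ring model; the struct layout and function name are hypothetical, not the driver's actual types.

```c
#include <stdint.h>

struct ring { const char *name; };

struct gem_object {
    struct ring *ring;           /* ring of the most recent access */
    uint32_t last_read_seqno;    /* interpreted relative to obj->ring */
    uint32_t last_write_seqno;   /* must also be relative to obj->ring */
};

/* Called when a request on 'ring' with 'seqno' touches the object. */
static void move_to_active(struct gem_object *obj, struct ring *ring, uint32_t seqno)
{
    /* If an outstanding write was tracked against a different ring, re-express
     * it as the new ring's seqno so a later wait on obj->ring still covers it. */
    if (obj->last_write_seqno && obj->ring != ring)
        obj->last_write_seqno = seqno;

    obj->ring = ring;
    obj->last_read_seqno = seqno;
}

int main(void) { (void)move_to_active; return 0; }
```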
-
- 09 July 2013, 1 commit
-
-
Submitted by Xiong Zhang

obj->mm_list links to dev_priv->mm.inactive_list/active_list.
obj->global_list links to dev_priv->mm.unbound_list/bound_list.

This regression has been introduced in

    commit 93927ca5
    Author: Daniel Vetter <daniel.vetter@ffwll.ch>
    Date:   Thu Jan 10 18:03:00 2013 +0100

        drm/i915: Revert shrinker changes from "Track unbound pages"

Cc: stable@vger.kernel.org
Signed-off-by: Xiong Zhang <xiong.y.zhang@intel.com>
[danvet: Add regression notice.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 01 July 2013, 4 commits
-
-
Submitted by Chris Wilson

Harmonise the completion logic between the non-blocking and normal wait_rendering paths, and move that logic into a common function. In the process, we note that the last_write_seqno is by definition the earlier of the two read/write seqnos and so all successful waits will have passed the last_write_seqno. Therefore we can unconditionally clear the write seqno and its domains in the completion logic.

v2: Add the missing ring parameter, because sometimes it is good to have things compile.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Chris Wilson

In the introduction of the non-blocking wait, I cut'n'pasted the wait completion code from the normal locked path. Unfortunately, this neglected that the normal path returned early if the wait returned early. The result is that read-only waits may return whilst the GPU is still writing to the bo.

Fixes regression from

    commit 3236f57a [v3.7]
    Author: Chris Wilson <chris@chris-wilson.co.uk>
    Date:   Fri Aug 24 09:35:09 2012 +0100

        drm/i915: Use a non-blocking wait for set-to-domain ioctl

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=66163
Cc: stable@vger.kernel.org
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Jani Nikula

drivers/gpu/drm/i915/i915_gem.c: In function ‘i915_gem_object_bind_to_gtt’:
drivers/gpu/drm/i915/i915_gem.c:3002:3: warning: format ‘%ld’ expects argument of type ‘long int’, but argument 5 has type ‘size_t’ [-Wformat]

v2: Use %zu instead of %d. Two char patch, and 100% wrong. (Ville)

Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
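A tiny stand-alone illustration of the warning class being fixed here: size_t needs the %zu length modifier to print portably on both 32- and 64-bit builds. The variable name is made up for the example.

```c
#include <stdio.h>
#include <stddef.h>

int main(void)
{
    size_t obj_size = 4096;

    /* printf("size %ld\n", obj_size);   -- warns: %ld expects long int, got size_t */
    printf("size %zu\n", obj_size);      /* %zu is the correct conversion for size_t */
    return 0;
}
```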
-
Submitted by Konrad Rzeszutek Wilk

Git commit 90797e6d ("drm/i915: create compact dma scatter lists for gem objects") makes certain assumptions about the underlying DMA API that are not always correct. On a ThinkPad X230 with an Intel HD 4000 with Xen, during bootup I see:

    [drm:intel_pipe_set_base] *ERROR* pin & fence failed
    [drm:intel_crtc_set_config] *ERROR* failed to set mode on [CRTC:3], err = -28

A bit of debugging traced it down to dma_map_sg failing (in i915_gem_gtt_prepare_object) as some of the SG entries were huge (3MB). Those are unfortunately sizes that the SWIOTLB is incapable of handling - the maximum it can handle is an entry of 512KB of virtually contiguous memory for its bounce buffer (see IO_TLB_SEGSIZE).

Previous to the above mentioned git commit the SG entries were of 4KB, and the code introduced by that commit squashed the CPU-contiguous PFNs into one big virtual address provided to the DMA API.

This patch is a simple semi-revert, where we emulate the old behavior if we detect that SWIOTLB is online. If it is not online then we continue on with the new compact scatter gather mechanism.

An alternative solution would be for the '.get_pages' and the i915_gem_gtt_prepare_object to retry with a smaller maximum number of PFNs that can be combined together - but with this issue discovered during rc7 that might be too risky.

Reported-and-Tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
CC: Chris Wilson <chris@chris-wilson.co.uk>
CC: Imre Deak <imre.deak@intel.com>
CC: Daniel Vetter <daniel.vetter@ffwll.ch>
CC: David Airlie <airlied@linux.ie>
CC: <dri-devel@lists.freedesktop.org>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: stable@vger.kernel.org
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
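A self-contained sketch of the idea behind the semi-revert, under the assumption that "SWIOTLB online" simply means falling back to one page per segment while contiguous pages are otherwise coalesced; the types and helper below are illustrative, not the driver's sg_table code.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096u

struct seg { uint64_t first_pfn; size_t len; };

/* Coalesce physically contiguous pages, unless a bounce-buffering layer
 * (SWIOTLB) is active, in which case keep the old one-page-per-entry layout. */
static size_t build_segments(const uint64_t *pfns, size_t n, bool swiotlb_online,
                             struct seg *out)
{
    size_t nsegs = 0;

    for (size_t i = 0; i < n; i++) {
        bool contiguous = nsegs &&
            pfns[i] == out[nsegs - 1].first_pfn + out[nsegs - 1].len / PAGE_SIZE;

        if (contiguous && !swiotlb_online) {
            out[nsegs - 1].len += PAGE_SIZE;      /* grow the compact segment */
        } else {
            out[nsegs].first_pfn = pfns[i];
            out[nsegs].len = PAGE_SIZE;           /* 4KB entries, the old behaviour */
            nsegs++;
        }
    }
    return nsegs;
}

int main(void)
{
    uint64_t pfns[] = { 100, 101, 102, 200 };
    struct seg segs[4];

    printf("compact: %zu segs, swiotlb: %zu segs\n",
           build_segments(pfns, 4, false, segs),    /* 2 segments */
           build_segments(pfns, 4, true, segs));    /* 4 segments */
    return 0;
}
```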
-
- 13 June 2013, 3 commits
-
-
Submitted by Mika Kuoppala

After the hangcheck timer has declared the GPU to be hung, the rings are reset. In the ring reset, when clearing the request list, do a post-mortem analysis to find out the guilty batch buffer.

Select requests for further analysis by inspecting the completed sequence number which has been updated into the HWS page. If a request was completed, it can't be related to the hang.

For non-completed requests, mark the batch as guilty if the ring was not waiting and the ring head was stuck inside the buffer object or in the flush region right after the batch. For everything else, mark them as innocent.

v2: Fixed a typo in commit message (Ville Syrjälä)
v3: - more descriptive function parameters (Chris Wilson)
    - use masked head address when inspecting if request is in ring
    - s/hangcheck.last_action/hangcheck.action
    - added comment about unmasked head hitting batch_obj range

Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Acked-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
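A sketch of the guilt heuristic described above, assuming a request records its batch start, the end of its flush region, and its completion seqno; the structure and helper names are hypothetical, not the driver's own.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct request_snapshot {
    uint32_t batch_start;     /* GTT offset where the batch buffer begins */
    uint32_t flush_end;       /* end of the flush region emitted right after it */
    uint32_t complete_seqno;  /* seqno written to the HWS page on completion */
};

static bool seqno_passed(uint32_t hws_seqno, uint32_t seqno)
{
    return (int32_t)(hws_seqno - seqno) >= 0;   /* wrap-safe comparison */
}

static bool request_guilty(const struct request_snapshot *rq,
                           uint32_t hws_seqno, uint32_t masked_head,
                           bool ring_was_waiting)
{
    if (seqno_passed(hws_seqno, rq->complete_seqno))
        return false;   /* completed work cannot be related to the hang */
    if (ring_was_waiting)
        return false;   /* stuck on a wait, not on the batch itself */

    /* guilty if the head got stuck inside the batch or its flush region */
    return masked_head >= rq->batch_start && masked_head < rq->flush_end;
}

int main(void)
{
    struct request_snapshot rq = { .batch_start = 0x1000, .flush_end = 0x2000,
                                   .complete_seqno = 42 };
    printf("guilty: %d\n", request_guilty(&rq, 40, 0x1800, false));   /* 1 */
    return 0;
}
```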
-
Submitted by Mika Kuoppala

In order to track down a batch buffer and context which caused the ring to hang, store a reference to the bo in the request struct.

A request can also cause the GPU to hang after the batch, in the flush section of the ring. To detect this, also store the offset of the start of the flush portion in the request.

v2: Included comment about request vs batch_obj lifetimes (Chris Wilson)

Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Mika Kuoppala

Only execbuffer needed all the parameters of i915_add_request(). By putting __i915_add_request behind a macro, all current callsites become cleaner. A following patch will introduce a new parameter for __i915_add_request; with this patch, only the relevant callsite will reflect the change, making that commit smaller and easier to understand.

v2: _i915_add_request as function name (Chris Wilson)
v3: change name to __i915_add_request and fix ordering of params (Ben Widawsky)

Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
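A compilable toy version of the wrapper pattern: the full-argument function stays available for the one caller that needs everything, while a macro fills in the defaults for everyone else. The parameter list here is invented for illustration and does not match the real __i915_add_request signature.

```c
#include <stdio.h>
#include <stddef.h>

struct ring { const char *name; };

/* Full-featured entry point: only one caller (execbuffer) needs every argument. */
static int __add_request(struct ring *ring, void *file, void *batch_obj,
                         unsigned int *out_seqno)
{
    static unsigned int next_seqno = 1;
    unsigned int seqno = next_seqno++;

    if (out_seqno)
        *out_seqno = seqno;
    printf("request %u on %s (file=%p batch=%p)\n",
           seqno, ring->name, file, batch_obj);
    return 0;
}

/* Everyone else gets the simple form with the extra parameters defaulted. */
#define add_request(ring) __add_request((ring), NULL, NULL, NULL)

int main(void)
{
    struct ring render = { "render" };
    return add_request(&render);
}
```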
-
- 03 June 2013, 4 commits
-
-
Submitted by Daniel Vetter

Chris Wilson noticed that since

    commit 1f83fee0 [v3.9]
    Author: Daniel Vetter <daniel.vetter@ffwll.ch>
    Date:   Thu Nov 15 17:17:22 2012 +0100

        drm/i915: clear up wedged transitions

X can again get -EIO when it does not expect it, and even worse score a SIGBUS when accessing gtt mmaps. The established ABI is that we _only_ return an -EIO from execbuf - all other ioctls should just work. And since the reset code moves all bos out of gpu domains and clears out all the last_seqno/ring tracking, there really shouldn't be any reason for non-execbuf code to ever touch the hw and see an -EIO.

After some extensive discussions we've noticed that these spurious -EIO are caused by i915_gem_wait_for_error:

http://www.mail-archive.com/intel-gfx@lists.freedesktop.org/msg20540.html

That is easy to fix by returning 0 instead of -EIO, since grabbing the dev->struct_mutex does not yet mean that we actually want to touch the hw. And so there is no reason at all to fail with -EIO.

But that's not the entire story, since often (at least it's easily googleable) dmesg indicates that the reset fails and we declare the gpu wedged. Then, quite a bit later, X wakes up with the "Timed out waiting for the gpu reset to complete" DRM_ERROR message in wait_for_error and brings down the desktop with an -EIO/SIGBUS.

So clearly we're missing a wakeup somewhere, since the gpu reset just doesn't take 10 seconds to complete. And indeed we do handle the terminally wedged state wrong. Fix this all up.

References: https://bugs.freedesktop.org/show_bug.cgi?id=63921
References: https://bugs.freedesktop.org/show_bug.cgi?id=64073
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Damien Lespiau <damien.lespiau@intel.com>
Cc: stable@vger.kernel.org
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Ben Widawsky

Since it will be used for the global bound/unbound list with full PPGTT, this helps clarify things for upcoming code rework.

Recommended-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Ben Widawsky

If we properly keep track of the pages_pin_count, then when we later add multiple address spaces, put_pages doesn't need any special checks to be able to perform its job.

CC: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
[danvet: Rebased on top of the fix for stolen memory pinning.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Ben Widawsky

The way the stolen handling works is we take a pin on the backing pages, but we never actually get a reference to the bo. On freeing objects allocated with stolen memory, the final unref will end up freeing the object with a pinned pages count left over.

To enable an assertion to catch bugs in this code path, this patch cleans up that remaining pin.

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 01 June 2013, 2 commits
-
-
Submitted by Ben Widawsky

v2: Add set_seqno, which didn't exist before the rebase (Haihao)

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Xiang, Haihao <haihao.xiang@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Ben Widawsky

Since I'll need to modify i915_gem_object_bind_to_gtt(), fix the errors now so that checkpatch doesn't complain.

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
[danvet: Resolve conflict with Chris' improved debug output, and bikeshed the new variable with s/max/gtt_max/ a bit while at it.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 23 May 2013, 2 commits
-
-
Submitted by Chris Wilson

In

    commit 25ff1195
    Author: Chris Wilson <chris@chris-wilson.co.uk>
    Date:   Thu Apr 4 21:31:03 2013 +0100

        drm/i915: Workaround incoherence between fences and LLC across multiple CPUs

we introduced an empirical workaround for memory corruption when using fences from multiple CPUs. At the time, we did not have any results for Valleyview, so the presumption was that it was limited to recent generations using LLC.

Now we have evidence that Valleyview also suffers incoherence and requires a similar but different workaround. For Valleyview, the wbinvd instruction is insufficient and we require the serialising register write per-CPU. Conversely, that serialising register write is not enough for SNB/IVB/HSW. To compromise and keep the code relatively clean, employ both serialisation techniques in the same workaround.

Reported-by: Jon Bloomfield <jon.bloomfield@intel.com>
Tested-by: Jon Bloomfield <jon.bloomfield@intel.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=62191
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Chris Wilson

This should help debugging the truly unexpected cases where it occurs - in particular to see which value is garbage.

References: https://bugzilla.kernel.org/show_bug.cgi?id=58511
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
[danvet: s/%ld/%zd/ as spotted by Wu Fengguang's autobuilder.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 22 May 2013, 1 commit
-
-
Submitted by Imre Deak

At the moment wait_event_timeout/wait_event_interruptible_timeout may time out 1 jiffy too early, as the calculated expiry time is 1 less than needed. Besides timing out too early, this also means that the calculation of the remaining time will be incorrect and we will pass a non-zero remaining time to user space in case of a time out. This is one reason for the following bugzilla report:

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=64270
Signed-off-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
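A small stand-alone illustration of the off-by-one being described: converting a relative timeout to an absolute expiry as "now + timeout" can elapse a jiffy short, because the current tick is already partially over, so one extra tick is needed to guarantee at least the requested time. The numbers are made up.

```c
#include <stdio.h>

int main(void)
{
    unsigned long now = 1000;          /* current jiffies value */
    unsigned long timeout = 5;         /* requested timeout, in jiffies */

    unsigned long short_expiry = now + timeout;      /* can fire after only ~4 full ticks */
    unsigned long safe_expiry  = now + timeout + 1;  /* guarantees >= 5 full ticks elapse */

    printf("too-early expiry: %lu, corrected expiry: %lu\n", short_expiry, safe_expiry);
    return 0;
}
```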
-
- 06 May 2013, 1 commit
-
-
Submitted by Mika Kuoppala

Storing a context reference in the request struct allows us to inspect the context and its associated objects when requests are retired. Both the ppgtt and arb-robustness work will need this.

Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 30 April 2013, 1 commit
-
-
Submitted by Chris Wilson

As we recompute the remaining timeout after waiting, there is a potential for that timeout to be less than zero, so it needs sanitizing. The timeout is always returned to userspace and validated, so we should always perform the sanitizing.

v2 [vsyrjala]: Only normalize the timespec if it's invalid
v3: Add a comment to clarify the situation and remove the now useless WARN_ON() (ickle)

Cc: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Tested-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
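A sketch of the kind of sanitizing the message refers to: after computing "deadline - now", a wait that overran would yield a negative remainder, which must be normalized (and clamped to zero) before being handed back to userspace. This is an assumed shape of the calculation, not the driver's exact code.

```c
#include <stdio.h>
#include <time.h>

static struct timespec remaining(struct timespec deadline, struct timespec now)
{
    struct timespec r = {
        .tv_sec  = deadline.tv_sec - now.tv_sec,
        .tv_nsec = deadline.tv_nsec - now.tv_nsec,
    };

    if (r.tv_nsec < 0) {          /* borrow from the seconds field */
        r.tv_sec -= 1;
        r.tv_nsec += 1000000000L;
    }
    if (r.tv_sec < 0) {           /* the wait overran: report zero, never negative */
        r.tv_sec = 0;
        r.tv_nsec = 0;
    }
    return r;
}

int main(void)
{
    struct timespec deadline = { .tv_sec = 10, .tv_nsec = 0 };
    struct timespec now      = { .tv_sec = 10, .tv_nsec = 300000000L };
    struct timespec r = remaining(deadline, now);

    printf("remaining: %ld.%09ld s\n", (long)r.tv_sec, r.tv_nsec);   /* 0.000000000 */
    return 0;
}
```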
-
- 18 April 2013, 4 commits
-
-
Submitted by Ville Syrjälä

Increase the number of fence registers to 32 on IVB/HSW. VLV however only has 16 fence registers according to the docs.

Increasing the number of fences was attempted before [1], but there was some uncertainty about the maximum CPU fence number for FBC. Since then BSpec has been updated to state that there are in fact 32 fence registers, and the CPU fence number field in the SNB_DPFC_CTL_SA register is 5 bits, and the CPU fence number field in the ILK_DPFC_CONTROL register must be zero. So now it all makes sense.

[1] http://lists.freedesktop.org/archives/intel-gfx/2011-October/012865.html

v2: Include some background information based on the previous attempt

Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Ben Widawsky

I'm really not happy that we have to support this, but this will be the simplest way to handle cases where PPGTT init can fail, which I promise will be coming in the future.

v2: Resolve conflicts due to patch series reordering.

Signed-off-by: Ben Widawsky <ben@bwidawsk.net> (v1)
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Ben Widawsky

Since we've already set up a nice vtable to abstract other PPGTT functions, also abstract the actual register programming to enable things. This function will probably need to change a bit as we implement real processes.

v2: Resolve conflicts due to patch series reordering.

Signed-off-by: Ben Widawsky <ben@bwidawsk.net> (v1)
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Chris Wilson

In order to fully serialize access to the fenced region and the update to the fence register, we need to take extreme measures on SNB+ and manually flush writes to memory prior to writing the fence register, in conjunction with the memory barriers placed around the register write.

Fixes i-g-t/gem_fence_thrash

v2: Bring a bigger gun
v3: Switch the bigger gun for heavier bullets (Arjan van de Ven)
v4: Remove changes for working generations.
v5: Reduce to a per-cpu wbinvd() call prior to updating the fences.
v6: Rewrite comments to elide forgotten history.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=62191
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Jon Bloomfield <jon.bloomfield@intel.com>
Tested-by: Jon Bloomfield <jon.bloomfield@intel.com> (v2)
Cc: stable@vger.kernel.org
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
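A kernel-style sketch of the per-CPU cache flush this workaround describes - run wbinvd on every CPU via an IPI before touching the fence register. The wrapper function and the placement of the barriers are assumptions for illustration; only on_each_cpu() and wbinvd() are existing kernel primitives, and this fragment builds only inside a kernel tree.

```c
#include <linux/smp.h>
#include <asm/special_insns.h>

/* IPI callback: write back and invalidate this CPU's caches. */
static void fence_flush_ipi(void *ignored)
{
	wbinvd();
}

/* Hypothetical wrapper: flush every CPU's caches, then perform the actual
 * fence register update under the usual memory barriers. */
static void write_fence_reg_serialised(void)
{
	on_each_cpu(fence_flush_ipi, NULL, 1);	/* wait until all CPUs have flushed */

	/* mb(); ... write the fence register here ...; mb(); */
}
```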
-
- 09 April 2013, 1 commit
-
-
Submitted by Ben Widawsky

BIOS should be setting this, but in case it doesn't...

v2: Define the bits we actually want to clear (Jesse)
    Make it an RMW op (Jesse)

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 28 March 2013, 1 commit
-
-
Submitted by Imre Deak

The i915 driver uses sg lists for memory without backing 'struct page' pages, similarly to other IO memory regions, setting only the DMA address for these. It does this so that it can program the HW MMU tables in a uniform way, both for sg lists with and without backing pages.

Without a valid page pointer we can't call nth_page to get the current page in __sg_page_iter_next, so add a helper that relevant users can call separately. Also add a helper to get the DMA address of the current page (idea from Daniel).

Convert all places in i915 to use the new API.

Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
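A kernel-style usage sketch of the split described above: walk an object's sg table page by page and obtain the page pointer and DMA address through the iterator helpers, instead of assuming one page per sg entry. This only shows the calling pattern; it is not code from the driver and won't build outside a kernel tree.

```c
#include <linux/scatterlist.h>

static void program_gtt_entries(struct sg_table *st)
{
	struct sg_page_iter sg_iter;

	for_each_sg_page(st->sgl, &sg_iter, st->nents, 0) {
		struct page *page = sg_page_iter_page(&sg_iter);      /* helper instead of nth_page */
		dma_addr_t addr = sg_page_iter_dma_address(&sg_iter); /* valid even without a page */

		/* ... write one GTT/PPGTT entry for this page/address ... */
		(void)page;
		(void)addr;
	}
}
```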
-
- 27 March 2013, 1 commit
-
-
Submitted by Chris Wilson

There is a minute window for a race between put-fence removing the fence and a new transaction by an external party on the GTT mmap. That is, we must zap the mmap prior to removing the fence, not afterwards.

Fixes regression from

    commit 61050808
    Author: Chris Wilson <chris@chris-wilson.co.uk>
    Date:   Tue Apr 17 15:31:31 2012 +0100

        drm/i915: Refactor put_fence() to use the common fence writing routine

v2: Remember the fence to remove with a local variable (gcc)

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 23 March 2013, 2 commits
-
-
Submitted by Imre Deak

So far we created a sparse dma scatter list for gem objects, where each scatter list entry represented only a single page. In the future we'll have to handle compact scatter lists too, where each entry can consist of multiple pages, for example for objects imported through PRIME.

The previous patches have already fixed up all other places where the i915 driver _walked_ these lists. Here we have the corresponding fix to _create_ compact lists. It's not a performance or memory footprint improvement, but it helps to better exercise the new logic.

Reference: http://www.spinics.net/lists/dri-devel/msg33917.html
Signed-off-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Imre Deak

So far the assumption was that each dma scatter list entry contains only a single page. This might not hold in the future, when we'll introduce compact scatter lists, so prepare for this everywhere in the i915 code where we walk such a list.

We'll fix the place _creating_ these lists separately in the next patch to help the reviewing/bisectability.

Reference: http://www.spinics.net/lists/dri-devel/msg33917.html
Signed-off-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 19 March 2013, 1 commit
-
-
Submitted by Jesse Barnes

We need to set the 'allow force wake' bit to enable forcewake handling later on.

v2: split from clock gating patch (Jani)
    check for allowwakeack (Ville)

Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 04 March 2013, 1 commit
-
-
Submitted by Ville Syrjälä

to_user_ptr() simply casts a pointer passed as u64 from user space to void __user * correctly. Using this lets us get rid of all the tiresome casts.

The idea came from Chris Wilson <chris@chris-wilson.co.uk>.

Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
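A plausible shape for the helper being described - casting a u64 ioctl field back to a user pointer through uintptr_t so it is correct on both 32- and 64-bit kernels. This is a sketch reconstructed from the description, not quoted from the patch.

```c
/* u64 fields in ioctl structs carry user pointers; convert them in one place. */
static inline void __user *to_user_ptr(u64 address)
{
	return (void __user *)(uintptr_t)address;
}

/* typical use:
 *   if (copy_from_user(exec_list, to_user_ptr(args->buffers_ptr), size))
 *           return -EFAULT;
 */
```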
-
- 23 February 2013, 1 commit
-
-
Submitted by Al Viro

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 20 February 2013, 2 commits
-
-
Submitted by Daniel Vetter

Yet another remnant ... this might explain why l3 remapping didn't really work on HSW.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=57441
Spotted-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: stable@vger.kernel.org
Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Imre Deak

As explained by Chris Wilson, gem objects in stolen memory are always coherent with the GPU, so we don't need to ever flush the CPU caches for these.

This fixes a breakage - at least with the compact sg patches applied - during the resume/restore gtt mappings path, when we tried to clflush an FB object in stolen memory; since stolen objects don't have backing pages, we passed an invalid page pointer to drm_clflush_page().

Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
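A small sketch of the guard this implies, with an invented object type: objects that live in stolen memory have no struct page array to flush and are GPU-coherent anyway, so the cache-flush path simply skips them.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical, simplified object: real i915 objects carry much more state. */
struct gem_object {
	bool  in_stolen_memory;   /* backed by stolen memory, no backing struct pages */
	void *pages;              /* CPU-visible backing pages, NULL for stolen */
};

static void flush_cpu_caches(struct gem_object *obj)
{
	if (obj->in_stolen_memory)
		return;   /* always GPU-coherent, and nothing valid to clflush */

	/* ... clflush each backing page of obj->pages here ... */
	(void)obj->pages;
}

int main(void)
{
	struct gem_object fb = { .in_stolen_memory = true, .pages = NULL };
	flush_cpu_caches(&fb);   /* safe: no invalid page pointer is ever touched */
	return 0;
}
```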
-
- 15 February 2013, 1 commit
-
-
Submitted by Ben Widawsky

The ring initialization will differ a bit in upcoming generations, and this split will prepare the code for what's needed.

This patch also fixes a bug introduced in:

    commit 99433931
    Author: Mika Kuoppala <mika.kuoppala@linux.intel.com>
    Date:   Tue Jan 22 14:12:17 2013 +0200

        drm/i915: use gem_set_seqno() on hardware init

After doing the extraction, the bad error handling became obvious. I acknowledge that this should be two patches, but it's a pretty small/trivial patch. If requested, I can certainly do the fix as a distinct patch.

v2: Should be cleanup blt, not init blt, on failure (Chris)
v3: Forgot to git add on v2

Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 31 January 2013, 1 commit
-
-
Submitted by Chris Wilson

When adding the fb idle detection to mark-inactive, it was forgotten that userspace can drive the processing of retire-requests. We assumed that it would be principally driven by the retire-requests worker, running once every second whilst active, and so we would get the deferred timer for free. Instead we spend too many CPU cycles reclocking the LVDS, preventing real work from being done.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reported-and-tested-by: Alexander Lam <lambchop468@gmail.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=58843
Cc: stable@vger.kernel.org
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 22 January 2013, 2 commits
-
-
Submitted by Mika Kuoppala

When the machine was rebooted or the module was reloaded, gem_hw_init() set last_seqno to be identical to next_seqno. This led to a situation where waits for the first ever request always passed immediately, regardless of whether it was actually executed.

Use gem_set_seqno() to be consistent in how the hw is initialized on init, wrap and resume.

Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Daniel Vetter

With the previous patch the state transition handling of the reset code itself is now (hopefully) race free and solid. But that still leaves out everyone else - with the various lock-free wait paths we have, there's the possibility that the reset happens between the point where we read the seqno we should wait on and the actual wait. And if __wait_seqno then never sees the RESET_IN_PROGRESS state, we'll happily wait for a seqno which will in all likelihood never signal.

In practice this is not a big problem, since the X server gets constantly interrupted and can then submit more work (hopefully) to unblock everyone else: as soon as a new seqno write lands, all waiters will unblock. But running the i-g-t reset testcase ZZ_hangman can expose this race, especially on slower hw with fewer cpu cores.

Now looking forward to ARB_robustness and friends, that's not the best possible behaviour, hence this patch adds a reset_counter to be able to detect any reset, even if a given thread never observed the in-progress state.

The important part is to correctly order things:

- The write side needs to increment the counter after any seqno gets reset. Hence we need to do that at the end of the reset work, and again wake everyone up. We also need to place a barrier in between any possible seqno changes and the counter increment, since any unlock operations only guarantee that nothing leaks out, but not that a later load operation gets moved ahead.

- On the read side we need to ensure that no reset can sneak in and invalidate the seqno. In all cases we can use the one-sided barrier that unlock operations guarantee (of the lock protecting the respective seqno/ring pair) to ensure correct ordering. Hence it is sufficient to place the atomic read before the mutex/spin_unlock and no additional barriers are required.

The end result of all this is that we need to wake up everyone twice in a reset operation:

- First, before the reset starts, to get any lockholders of the locks, so that the reset can proceed.

- Second, after the reset is completed, to allow waiters to properly and reliably detect the reset condition and bail out.

I admit that this entire reset_counter thing smells a bit like overkill, but I think it's justified since it makes it really explicit what the bail-out condition is. And we need a reset counter anyway to implement ARB_robustness, and imo with finer-grained locking on the horizon this is the most resilient scheme I could think of.

v2: Drop spurious change in the wait_for_error EXIT_COND - we only need to wait until we leave the reset-in-progress wedged state.

v3: Don't play tricks with barriers in the throttle ioctl, the spin_unlock is barrier enough. I've also considered using a little helper to grab the current reset_counter, but then decided that hiding the atomic_read isn't a great idea, since having it explicitly show up in the code is a nice reminder for reviewers to check the memory barriers.

v4: Add a comment to explain why we need to fall through in __wait_seqno in the end variable assignments.

v5: Review from Damien:
- s/smb/smp/ in a comment
- don't increment the reset counter after we've set it to WEDGED. Now we (again) properly wedge the gpu when the reset fails.

Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
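A user-space C11 sketch of the read-side/write-side ordering the message lays out: the waiter samples the reset counter before it drops its lock and sleeps, and compares it again afterwards to notice a reset it never observed in flight. The names and the simplified wait are assumptions; the real code uses the driver's locks, wait queues and wedged handling.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_uint reset_counter;

/* Write side: runs at the end of the reset work, after all seqnos have been
 * re-initialised, and is followed by a second wake-up of every waiter. */
static void gpu_reset_complete(void)
{
    atomic_fetch_add_explicit(&reset_counter, 1, memory_order_release);
    /* wake_up_all(...); */
}

/* Read side: sample the counter before sleeping, re-check it afterwards. */
static bool wait_seqno(unsigned int seqno)
{
    unsigned int pre = atomic_load_explicit(&reset_counter, memory_order_acquire);

    /* ... drop the lock and block until 'seqno' signals or a wake-up arrives ... */

    if (atomic_load_explicit(&reset_counter, memory_order_acquire) != pre)
        return false;   /* a reset happened while we slept; bail out */

    return true;
}

int main(void)
{
    unsigned int pre = atomic_load(&reset_counter);
    gpu_reset_complete();                    /* simulate a reset racing a waiter */
    printf("counter changed: %d\n", atomic_load(&reset_counter) != pre);
    printf("wait result: %d\n", wait_seqno(1));
    return 0;
}
```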
-