- 10 January 2014, 2 commits

Committed by Rob Clark
Add a VRAM carveout that is used for systems which do not have an IOMMU. The VRAM carveout uses CMA. The arch code must set up a CMA pool for the device (preferably in highmem; a 256M-512M VRAM pool in lowmem is not cool). The user can configure the VRAM pool size using the msm.vram module param.

Technically, the abstraction of the IOMMU behind msm_mmu is not strictly needed, but it simplifies the GEM code a bit, and will be useful later when I add support for a2xx devices with GPUMMU, so I decided to keep this part.

It appears to be possible to configure the GPU to restrict access to addresses within the VRAM pool, but this is not done yet. So for now the GPU will refuse to load if there is no MMU of any sort. Once address-based limits are supported and tested to confirm that we aren't giving the GPU access to arbitrary memory, this restriction can be lifted.

Signed-off-by: Rob Clark <robdclark@gmail.com>
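As a rough illustration of how such a carveout might be wired up, here is a minimal sketch, assuming a `vram` module parameter and allocation from the device's CMA pool through the DMA API; the helper name and the use of the modern `dma_alloc_attrs()` signature are assumptions, not the driver's actual code:

```c
#include <linux/dma-mapping.h>
#include <linux/kernel.h>
#include <linux/module.h>

/* Size of the VRAM carveout, e.g. msm.vram=16m on the kernel cmdline. */
static char *vram = "16m";
module_param(vram, charp, 0);
MODULE_PARM_DESC(vram, "VRAM carveout size (used when no IOMMU is present)");

/* Hypothetical helper: carve a contiguous pool out of CMA for the GPU. */
static int msm_init_vram_sketch(struct device *dev, void **vaddr,
				dma_addr_t *paddr, size_t *psize)
{
	size_t size = memparse(vram, NULL);

	/* DMA_ATTR_NO_KERNEL_MAPPING: we only need the physical pages;
	 * GEM objects can get kernel mappings on demand later. */
	*vaddr = dma_alloc_attrs(dev, size, paddr, GFP_KERNEL,
				 DMA_ATTR_NO_KERNEL_MAPPING);
	if (!*vaddr)
		return -ENOMEM;

	*psize = size;
	return 0;
}
```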
Committed by Rob Clark
This got a bit broken in the original patches when rearranging things to move the dependencies on mach-msm inside #ifndef OF.

Signed-off-by: Rob Clark <robdclark@gmail.com>
- 2 November 2013, 1 commit

Committed by Rob Clark
Rearrange things a bit so that work requested after a bo fence passes, like a pageflip, gets done before retiring bos. Without any sort of bo cache in userspace, some games can trigger hundreds of transient bos, which can cause retire to take a long time (5-10ms). Obviously we want a bo cache, but this cleanup will make things a bit easier for atomic as well, and makes things a bit cleaner.

Signed-off-by: Rob Clark <robdclark@gmail.com>
Acked-by: David Brown <davidb@codeaurora.org>
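The ordering idea can be sketched as follows; all the type and helper names here are hypothetical stand-ins for driver internals, not the actual msm code:

```c
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/workqueue.h>

struct gpu_ctx {
	struct work_struct fence_work;
	/* ... */
};

/* Hypothetical driver internals. */
u32 read_completed_fence(struct gpu_ctx *gpu);
void dispatch_fence_callbacks(struct gpu_ctx *gpu, u32 fence);
void retire_bos_up_to(struct gpu_ctx *gpu, u32 fence);

static void fence_worker(struct work_struct *work)
{
	struct gpu_ctx *gpu = container_of(work, struct gpu_ctx, fence_work);
	u32 fence = read_completed_fence(gpu);

	/* Run latency-sensitive work first (e.g. a pending pageflip)... */
	dispatch_fence_callbacks(gpu, fence);

	/* ...then the potentially slow bo retire pass (5-10ms when a game
	 * produces hundreds of transient bos with no userspace bo cache). */
	retire_bos_up_to(gpu, fence);
}
```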
- 12 September 2013, 2 commits

Committed by Wei Yongjun
The dereference of 'pdata' should be moved below the NULL test.

Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
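This is the classic dereference-before-check bug class; a generic before/after illustration (not the actual driver code):

```c
struct platform_data { int id; };

int probe_broken(struct platform_data *pdata)
{
	int id = pdata->id;        /* BUG: dereferenced before the check */

	if (!pdata)
		return -EINVAL;    /* too late: we may already have oopsed */

	return id;
}

int probe_fixed(struct platform_data *pdata)
{
	int id;

	if (!pdata)                /* check first ... */
		return -EINVAL;

	id = pdata->id;            /* ... then dereference */
	return id;
}
```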
Committed by Rob Clark
Occasionally we seem to miss an IRQ from the ME (microengine). I'm not entirely sure of the root cause, but for now we can unwedge things by retiring from the hangcheck timer.

Signed-off-by: Rob Clark <robdclark@gmail.com>
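One plausible shape for such a workaround, sketched with hypothetical names and the modern timer API: the periodic hangcheck handler compares the completed fence against the last value it saw, and kicks the retire path whenever progress happened without an IRQ having been delivered:

```c
#include <linux/jiffies.h>
#include <linux/timer.h>
#include <linux/types.h>
#include <linux/workqueue.h>

struct gpu_ctx {
	struct timer_list hangcheck_timer;
	struct work_struct retire_work;
	u32 hangcheck_fence;   /* completed fence seen at the last check */
};

u32 read_completed_fence(struct gpu_ctx *gpu);   /* hypothetical */

static void hangcheck_handler(struct timer_list *t)
{
	struct gpu_ctx *gpu = from_timer(gpu, t, hangcheck_timer);
	u32 fence = read_completed_fence(gpu);

	if (fence != gpu->hangcheck_fence) {
		/* The GPU made progress but we may have missed the IRQ
		 * from the ME: retire anyway to unwedge any waiters. */
		gpu->hangcheck_fence = fence;
		schedule_work(&gpu->retire_work);
	}

	mod_timer(&gpu->hangcheck_timer, jiffies + msecs_to_jiffies(100));
}
```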
- 11 September 2013, 2 commits

Committed by Rob Clark
If the GPU locks up with the rptr shortly beyond the wrap-around point in the ringbuffer, then because the rptr is not reset (while the wptr is, by virtue of resetting rb->cur), we can end up in a scenario where we think there is not enough space in the ringbuffer for the next commands. And since the CP won't reset the rptr until after processing an IB, this leaves things in a sort of deadlock. So reset the rptr too. Also, a bit more spiffing up of hangcheck to make things easier to debug.

Signed-off-by: Rob Clark <robdclark@gmail.com>
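The deadlock follows from the usual circular-buffer free-space formula; a small generic sketch (not the driver's exact code) of why a stale rptr makes the ring look permanently full after a reset rewinds the wptr:

```c
#include <stdint.h>

/* Free space in a circular ring, in dwords, keeping one slot empty so
 * that wptr == rptr unambiguously means "ring is empty". */
static uint32_t ring_freewords(uint32_t wptr, uint32_t rptr, uint32_t size)
{
	return (rptr + (size - 1) - wptr) % size;
}

/*
 * After a reset that rewinds wptr to 0 but leaves rptr stale just past
 * the wrap-around point (say rptr = 16, size = 1024):
 *
 *   ring_freewords(0, 0,  1024) -> 1023  (both reset: ring empty, OK)
 *   ring_freewords(0, 16, 1024) ->   15  (stale rptr: ring looks full)
 *
 * If the CP only updates rptr after processing an IB, and we never
 * submit because the ring "never has room", nothing ever moves: a
 * deadlock. Resetting rptr alongside wptr avoids it.
 */
```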
Committed by Rob Clark
The userspace API already had everything needed to handle read vs. write synchronization. This patch actually bothers to hook it up properly, so that we don't need to (for example) stall on userspace read access to a buffer that the GPU is also still reading.

Signed-off-by: Rob Clark <robdclark@gmail.com>
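The core idea: a CPU read only needs to wait for pending GPU writes, while a CPU write must wait for both GPU reads and writes. A generic sketch of that check, loosely modeled on the MSM_PREP_READ/MSM_PREP_WRITE op flags in the msm UAPI; the struct layout and field names are assumptions:

```c
#include <stdint.h>

#define PREP_READ  0x01
#define PREP_WRITE 0x02

struct gem_bo {
	uint32_t read_fence;   /* last submit that reads this bo  */
	uint32_t write_fence;  /* last submit that writes this bo */
};

/* Fence the CPU must wait on before accessing the bo with 'op'.
 * Assumes fence numbers increase monotonically. */
static uint32_t fence_to_wait(const struct gem_bo *bo, uint32_t op)
{
	/* A CPU read only conflicts with pending GPU *writes*... */
	uint32_t fence = bo->write_fence;

	/* ...while a CPU write also conflicts with pending GPU reads. */
	if ((op & PREP_WRITE) && bo->read_fence > fence)
		fence = bo->read_fence;

	return fence;
}
```

So two processes can read the same buffer concurrently with the GPU, and only a writer actually stalls.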
- 25 August 2013, 2 commits

Committed by Rob Clark
A basic, no-frills recovery mechanism in case the GPU gets wedged. We could try to be a bit fancier and restart at the next submit after the one that got wedged, but for now keep it simple. This is enough to recover things if, for example, the GPU hangs midway through a piglit run.

Signed-off-by: Rob Clark <robdclark@gmail.com>
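A no-frills recovery path typically looks something like the following sketch (all helpers and fields are hypothetical): hangcheck decides the GPU is wedged and queues a worker that fast-forwards the fence, so stuck waiters wake up, and then resets the hardware:

```c
#include <linux/kernel.h>
#include <linux/mutex.h>
#include <linux/types.h>
#include <linux/workqueue.h>

struct gpu_ctx {
	struct mutex lock;
	struct work_struct recover_work;
	u32 submitted_fence;   /* last fence handed to the GPU */
};

/* Hypothetical stand-ins for driver internals. */
void gpu_hw_reset(struct gpu_ctx *gpu);
void signal_fence(struct gpu_ctx *gpu, u32 fence);

static void recover_worker(struct work_struct *work)
{
	struct gpu_ctx *gpu = container_of(work, struct gpu_ctx, recover_work);

	mutex_lock(&gpu->lock);
	/* Declare everything submitted so far done: no replay of the
	 * wedged submit or the ones queued after it (keep it simple). */
	signal_fence(gpu, gpu->submitted_fence);
	gpu_hw_reset(gpu);
	mutex_unlock(&gpu->lock);
}
```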
Committed by Rob Clark
Add initial support for the a3xx 3D core.

So far, with the hardware that I've seen to date, we can have:
+ zero, one, or two z180 2D cores
+ an a3xx or a2xx 3D core, which share a common CP (the firmware for the CP seems to implement some different PM4 packet types, but the basics of cmdstream submission are the same)

Which means that the eventual complete "class" hierarchy, once support for all past and present hardware is in place, becomes:
+ msm_gpu
  + adreno_gpu
    + a3xx_gpu
    + a2xx_gpu
  + z180_gpu

This commit splits out the parts that will eventually be common between a2xx/a3xx into adreno_gpu, and the parts that are common even to the z180 into msm_gpu.

Note that there is no cmdstream validation required. All memory access from the GPU is via IOMMU/MMU. So as long as you don't map silly things to the GPU, there isn't much damage that the GPU can do.

Signed-off-by: Rob Clark <robdclark@gmail.com>
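In kernel C, a "class" hierarchy like this is conventionally expressed by struct embedding, with container_of() to downcast from the generic type; a minimal sketch of the shape (the fields are illustrative, not the driver's actual layout):

```c
#include <linux/kernel.h>
#include <linux/types.h>

struct msm_gpu {
	const char *name;
	/* ops common to all cores: init, submit, irq, ... */
};

struct adreno_gpu {
	struct msm_gpu base;      /* "derives from" msm_gpu */
	u32 revision;             /* parts shared by a2xx/a3xx (common CP) */
};

struct a3xx_gpu {
	struct adreno_gpu base;   /* "derives from" adreno_gpu */
	bool pm_enabled;          /* a3xx-specific state (illustrative) */
};

/* Downcast from the generic type, e.g. inside an msm_gpu callback: */
static inline struct a3xx_gpu *to_a3xx_gpu(struct msm_gpu *gpu)
{
	struct adreno_gpu *adreno = container_of(gpu, struct adreno_gpu, base);

	return container_of(adreno, struct a3xx_gpu, base);
}
```

A z180_gpu would embed struct msm_gpu directly, since it shares nothing with the adreno cores beyond the common base.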