- 20 March 2015, 5 commits
-
-
Committed by Ben Widawsky

This patch was formerly known as "Force pd restore when PDEs change, gen6-7". I had to change the name because it is needed for GEN8 too.

The real issue this is trying to solve is when a new object is mapped into the current address space. The GPU does not snoop the new mapping, so we must do the gen-specific action to reload the page tables.

GEN8 and GEN7 do differ in the way they load page tables for the RCS: GEN8 does so with the context restore, while GEN7 requires the proper load commands in the command streamer. Non-render is similar for both.

Caveat for GEN7: the docs say you cannot change the PDEs of a currently running context. We never map new PDEs of a running context, and expect them to be present, so I think this is okay. (We can unmap, but this should also be okay since we only unmap unreferenced objects that the GPU shouldn't be trying to va->pa xlate.)

The MI_SET_CONTEXT command does have a flag to signal that, even if the context is the same, a reload should be forced. It's unclear exactly what this does, but I have a hunch it's the right thing to do.

The logic assumes that we always emit a context switch after mapping new PDEs, and before we submit a batch. This is the case today, and has been the case since the inception of hardware contexts. A note in the comment lets the user know. It's not just for gen8: if the current context has its mappings changed, we need a context reload to switch.

v2: Rebased after ppgtt clean-up patches. Split the warning for aliasing and true ppgtt options. And do not break aliasing ppgtt, where to->ppgtt is always null.
v3: Invalidate PPGTT TLBs inside alloc_va_range.
v4: Rename ppgtt_invalidate_tlbs to mark_tlbs_dirty and move pd_dirty_rings from i915_address_space to i915_hw_ppgtt. Fixes when neither ctx->ppgtt nor aliasing_ppgtt exist.
v5: Removed references to teardown_va_range.
v6: Updated needs_pd_load_pre/post.
v7: Fix pd_dirty_rings check in needs_pd_load_post, and update/move comment about updated PDEs to object_pin/bind (Mika).

Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Michel Thierry <michel.thierry@intel.com> (v2+)
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
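The mechanism described above boils down to a dirty-ring mask: allocating new PDEs marks every ring stale, and the pre-batch context switch consumes the bit. A minimal sketch, assuming the gen6/7-era structures named in the changelog:

    /* Mark every ring as needing a page-directory reload before its
     * next batch (called from alloc_va_range, per v3/v4). */
    static void mark_tlbs_dirty(struct i915_hw_ppgtt *ppgtt)
    {
            ppgtt->pd_dirty_rings = INTEL_INFO(ppgtt->base.dev)->ring_mask;
    }

    /* Consulted at submission time: force a PD load (and on gen7 the
     * MI_SET_CONTEXT force-restore flag) even for the same context. */
    static bool needs_pd_load_pre(struct intel_engine_cs *ring,
                                  struct i915_hw_ppgtt *ppgtt)
    {
            return ppgtt && (ppgtt->pd_dirty_rings & intel_ring_flag(ring));
    }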
-
Committed by Ben Widawsky

Instead of implementing the full tracking + dynamic allocation, this patch does a bit less than half of the work, by tracking and warning on unexpected conditions. The tracking itself follows which PTEs within a page table are currently being used for objects. The next patch will modify this to actually allocate the page tables only when necessary.

With the current patch there isn't much in the way of making a gen-agnostic range allocation function. However, in the next patch we'll add more specificity, which makes having separate functions a bit easier to manage.

One important change introduced here is that DMA mappings are created/destroyed at the same time as page directories/tables are allocated/deallocated.

Notice that aliasing PPGTT is not managed here. The patch which actually begins dynamic allocation/teardown explains the reasoning for this.

v2: s/pdp.page_directory/pdp.page_directories
Make a scratch page allocation helper
v3: Rebase and expand commit message.
v4: Allocate required pagetables only when it is needed, _bind_to_vm instead of bind_vma (Daniel).
v5: Rebased to remove the unnecessary noise in the diff, also:
- PDE mask is GEN agnostic, renamed GEN6_PDE_MASK to I915_PDE_MASK.
- Removed unnecessary checks in gen6_alloc_va_range.
- Changed map/unmap_px_single macros to use dma functions directly and be part of a static inline function instead.
- Moved drm_device plumbing through page tables operation to its own patch.
- Moved allocate/teardown_va_range calls until they are fully implemented (in subsequent patch).
- Merged pt and scratch_pt unmap_and_free path.
- Moved scratch page allocator helper to the patch that will use it.
v6: Reduce complexity by not tearing down pagetables dynamically, the same can be achieved while freeing empty vms. (Daniel)
v7: s/i915_dma_map_px_single/i915_dma_map_single
s/gen6_write_pdes/gen6_write_pde
Prevent a NULL case when only GGTT is available. (Mika)
v8: Rebased after s/page_tables/page_table/.
v9: Reworked i915_pte_index and i915_pte_count.
Also exercise bitmap allocation here (gen6_alloc_va_range) and fix incorrect write_page_range in i915_gem_restore_gtt_mappings (Mika).

Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Michel Thierry <michel.thierry@intel.com> (v3+)
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
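The i915_pte_index/i915_pte_count helpers reworked in v9 are small address arithmetic. A sketch of their likely shape, assuming 4 KiB GPU pages (PAGE_SHIFT == 12) and a 22-bit PDE shift on gen6, i.e. one page table spans 4 MiB:

    /* Index of the PTE within its page table for GPU address @addr. */
    static inline uint32_t i915_pte_index(uint64_t addr, uint32_t pde_shift)
    {
            const uint32_t mask = (1 << (pde_shift - PAGE_SHIFT)) - 1;

            return (addr >> PAGE_SHIFT) & mask;
    }

    /* Number of PTEs that [addr, addr + length) occupies in the page
     * table containing @addr, clamped at the table boundary so callers
     * can walk a range one table at a time. */
    static inline uint32_t i915_pte_count(uint64_t addr, uint64_t length,
                                          uint32_t pde_shift)
    {
            const uint64_t pd_mask = ~((1ULL << pde_shift) - 1);
            const uint64_t end = addr + length;

            if ((addr & pd_mask) != (end & pd_mask))
                    return (1 << (pde_shift - PAGE_SHIFT)) -
                            i915_pte_index(addr, pde_shift);

            return i915_pte_index(end, pde_shift) -
                    i915_pte_index(addr, pde_shift);
    }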
-
Committed by Daniel Vetter

Two code changes:
- Extract i915_gem_shrinker_init.
- Inline i915_gem_object_is_purgeable, since we open-code it everywhere else too.

This already has the benefit of pulling all the shrinker code together; the next patch adds a bit of kerneldoc.

Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
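For reference, the helper being inlined is a one-line madvise check; everywhere the shrinker asks "can I drop this object's pages?" it compares against the DONTNEED state (sketch based on the i915 code of that era):

    /* True when userspace marked the object I915_MADV_DONTNEED via the
     * madvise ioctl, i.e. the shrinker may discard its backing pages. */
    static inline bool
    i915_gem_object_is_purgeable(struct drm_i915_gem_object *obj)
    {
            return obj->madv == I915_MADV_DONTNEED;
    }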
-
Committed by Tvrtko Ursulin

This makes the interface consistent with the old i915_gem_obj_ggtt_pin.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Joonas Lahtinen

GGTT views are only applicable when dealing with GGTT. Change the code to reject ggtt_view where it should not be used and require it when it should be; a sketch of the resulting check follows below.

v2:
- Dropped _ppgtt_ infixes, allow both types to be passed
- Disregard other but normal views when no view is specified
- More checks that valid parameters are passed
- More readable error checking
v3:
- Prefer WARN_ONCE over BUG_ON when there is a code path for failure

Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
[danvet: Drop unnecessary forward decl from earlier patch iterations.]
[danvet: Remove unused variable spotted by Tvrtko.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
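A minimal sketch of the kind of parameter check described here, using WARN_ONCE per the v3 note (illustrative, not the exact upstream hunk):

    /* GGTT bindings must carry a view; PPGTT bindings must not. */
    if (WARN_ONCE(i915_is_ggtt(vm) != !!ggtt_view,
                  "GGTT view used with mismatched address space\n"))
            return ERR_PTR(-EINVAL);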
-
- 18 March 2015, 2 commits
-
-
Committed by Paulo Zanoni

We need this for FBC, and possibly for PSR too.

v2: Don't only flush: invalidate too (Daniel).

Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Paulo Zanoni

We want to port FBC to the frontbuffer tracking infrastructure, but for that we need to know what caused the object invalidation so we can react accordingly: CPU mmaps need manual handling, GTT mmaps and flips don't need handling, and ring rendering needs nukes.

v2:
- s/ORIGIN_RENDER/ORIGIN_CS/ (Daniel, Rodrigo)
- Fix copy/pasted wrong documentation
- Rebase
v3:
- Rebase
v4:
- Don't pass the operation to flushes (Daniel).

Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
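The origin classification maps naturally onto an enum threaded through the invalidate path. A sketch (names follow the v2 s/ORIGIN_RENDER/ORIGIN_CS/ note; treat the exact set as illustrative):

    /* Who dirtied the frontbuffer? FBC reacts differently to each. */
    enum fb_op_origin {
            ORIGIN_GTT,  /* GTT mmap write: no handling needed */
            ORIGIN_CPU,  /* CPU mmap write: needs manual handling */
            ORIGIN_CS,   /* command-streamer rendering: needs a nuke */
            ORIGIN_FLIP, /* page flip: no handling needed */
    };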
-
- 10 March 2015, 2 commits
-
-
Committed by Chris Wilson

Long ago I found that I was getting sporadic errors when booting SNB, with the symptom being that the first batch died with IPEHR != *ACTHD, typically caused by the TLB being invalid. These magically disappeared if I held the forcewake during the entire ring initialisation sequence. (It can probably be shortened to a short critical section, but the whole initialisation is full of register writes and so we would be taking and releasing forcewake almost continually, and so holding it over the entire sequence will probably be a net win!)

Note some of the kernels on which I encountered the issue already had the deferred forcewake release, so it is still relevant.

I know that there have been a few other reports with similar failure conditions on SNB, such as:

References: https://bugs.freedesktop.org/show_bug.cgi?id=80913

v2: Wrap i915_gem_init_hw() with its own security blanket as we take that path following resume and reset.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: stable@vger.kernel.org
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
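The "security blanket" is a single forcewake grab bracketing the whole init sequence, so every register write lands while the GT is guaranteed awake. A sketch of the pattern, assuming the forcewake API of that era:

    /* One get/put pair around all of ring init beats toggling
     * forcewake across hundreds of individual register writes. */
    intel_uncore_forcewake_get(dev_priv, FORCEWAKE_ALL);

    ret = i915_gem_init_hw(dev);

    intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL);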
-
Committed by Chris Wilson

This fixes a regression from

    commit 5ed0bdf2
    Author: Thomas Gleixner <tglx@linutronix.de>
    Date:   Wed Jul 16 21:05:06 2014 +0000

        drm: i915: Use nsec based interfaces

that made a negative timeout return immediately rather than the previously defined behaviour of waiting indefinitely.

Testcase: igt/gem_wait
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=89494
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Ben Widawsky <benjamin.widawsky@intel.com>
Cc: Kristian Høgsberg <krh@bitplanet.net>
Cc: stable@vger.kernel.org
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
[Jani: fixed a checkpatch complaint about whitespace.]
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
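The restored semantics are simply that a negative timeout means "wait forever". A sketch of the idea in the wait ioctl (illustrative, not the exact upstream hunk):

    /* A negative timeout_ns from userspace selects an indefinite
     * wait; only non-negative values are treated as deadlines. */
    s64 *timeout = args->timeout_ns >= 0 ? &args->timeout_ns : NULL;

    ret = __i915_wait_request(req, reset_counter, true, timeout, NULL);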
-
- 28 February 2015, 1 commit
-
-
Committed by Chris Wilson

For an object right on the boundary of mappable space, as the fenceable size is strictly greater than the actual size, its fence region may extend out of mappable space. Note that only pnv/g33 has fence_size > obj.size and an unmappable range in the gtt, and there the alignment constraints prevent bad things from happening.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
[danvet: Clarify why this shouldn't change anything as per the discussion on intel-gfx.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 27 February 2015, 1 commit
-
-
Committed by Daniel Vetter

Hooray!

Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 26 February 2015, 1 commit
-
-
Committed by John Harrison

In execlist mode, the ringbuf is a function of the ring and context, whereas in legacy mode it is derived from the ring alone. Thus the calculation required to determine the ringbuf pointer from the ring (and context) also needs to test for execlist mode or not. This is messy.

Further, the request structure holds a pointer to both the ring and the context for which it was created. Thus, given a request, it is possible to derive the ringbuf in either legacy or execlist mode. Hence it is necessary to pass just the request in to all the low-level functions rather than some combination of request, ring, context and ringbuf.

However, rather than recalculating it each time, it is much simpler to just cache the ringbuf pointer in the request structure itself. Caching the pointer means the calculation is done once at request creation time, and all further code can simply read it directly from the request structure.

OTC-Jira: VIZ-5115
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
[danvet: Drop contentless comment in lrc alloc request entirely. And spelling fix in the commit message.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
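The caching amounts to one extra field filled in at request creation. A sketch, assuming the era's structure and module-parameter names:

    struct drm_i915_gem_request {
            struct intel_engine_cs *ring;
            struct intel_context *ctx;
            /* Cached once at creation; valid in both legacy and
             * execlist mode, so low-level code never recomputes it. */
            struct intel_ringbuffer *ringbuf;
            /* ... */
    };

    /* At request creation time: */
    request->ringbuf = i915.enable_execlists ?
            request->ctx->engine[ring->id].ringbuf :
            ring->buffer;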
-
- 24 February 2015, 1 commit
-
-
Committed by Nick Hoath

When converting from implicitly tracked execlist queue items to ref-counted requests, not all frees of requests were replaced with unrefs, and extraneous refs/unrefs of contexts were added. Correct the unbalanced refcount and replace the frees. Remove a noisy warning when hitting the request creation path.

drm_i915_gem_request and intel_context are both kref reference-counted structures. Upon allocation, drm_i915_gem_request's ref count should be bumped using kref_init. When a context is assigned to the request, the context's reference count should be bumped using i915_gem_context_reference. i915_gem_request_unreference will reduce the context reference count when the request is freed.

Problem introduced in

    commit 6d3d8274
    Author: Nick Hoath <nicholas.hoath@intel.com>
    AuthorDate: Thu Jan 15 13:10:39 2015 +0000

        drm/i915: Subsume intel_ctx_submit_request in to drm_i915_gem_request

v2: Added comments explaining how the ctx pointer and the request object should be ref-counted. Removed noisy warning.
v3: Cleaned up the language used in the commit & the header description (Thanks David Gordon)

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=88652
Signed-off-by: Nick Hoath <nicholas.hoath@intel.com>
Reviewed-by: Thomas Daniel <thomas.daniel@intel.com>
Reviewed-by: Daniel Vetter <daniel@ffwll.ch>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
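The balanced lifetime rules described above look roughly like this (sketch):

    /* On allocation: the request starts with one reference and pins
     * its context for its whole lifetime. */
    kref_init(&req->ref);
    req->ctx = ctx;
    i915_gem_context_reference(req->ctx);

    /* kref release callback: drop the context pin here, and never
     * kfree() a live request directly. */
    static void i915_gem_request_free(struct kref *req_ref)
    {
            struct drm_i915_gem_request *req =
                    container_of(req_ref, typeof(*req), ref);

            i915_gem_context_unreference(req->ctx);
            kfree(req);
    }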
-
- 14 February 2015, 2 commits
-
-
Committed by Mika Kuoppala

We use the pid of the process which opened our device when we track which was the culprit of the gpu hang. But as that file descriptor might get inherited, we might blame the wrong process when we record the error state. Track process identifiers in requests to always find the correct offender.

v2: Track only user processes (Chris)

Cc: Kenneth Graunke <kenneth@whitecape.org>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
[danvet: drop NULL check before put_pid as suggested by Chris.]
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
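Per-request tracking is a get_pid/put_pid pair over the request's lifetime. A sketch (the file test reflects the v2 "user processes only" note):

    /* At request creation: remember the submitting process. Requests
     * from kernel contexts have no file and stay untracked. */
    request->pid = file ? get_pid(task_pid(current)) : NULL;

    /* At request free: put_pid() tolerates NULL, hence no check
     * (the danvet note above). */
    put_pid(request->pid);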
-
Committed by Yu Zhang

With Intel GVT-g, the fence registers are partitioned among multiple vGPU instances in different VMs. Routine i915_gem_load() is modified to reset num_fence_regs when the driver detects it's running in a VM. Accesses to the fence registers from a vGPU will be trapped and remapped by the host side, and the allocated fence number is provided in the PV INFO page structure. For now the fence number is fixed, but in the future we can relax this limitation and allocate the fence registers dynamically on the host side.

Signed-off-by: Yu Zhang <yu.c.zhang@linux.intel.com>
Signed-off-by: Jike Song <jike.song@intel.com>
Signed-off-by: Eddie Dong <eddie.dong@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
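In i915_gem_load() this comes down to overriding the fence count when running as a vGPU. A sketch; reading the count out of the PV INFO page via a vgtif register accessor is an assumption about the exact field name:

    if (intel_vgpu_active(dev))
            dev_priv->num_fence_regs =
                    I915_READ(vgtif_reg(avail_rs.fence_num));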
-
- 29 January 2015, 2 commits
-
-
Committed by Chris Wilson

The core fix was applied in

    commit a63b03e2
    Author: Chris Wilson <chris@chris-wilson.co.uk>
    Date:   Tue Jan 6 10:29:35 2015 +0000

        mutex: Always clear owner field upon mutex_unlock()

(note the absence of a stable@ tag) so we can now revert our band-aid commit 226e5ae9 for -next.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Chris Wilson

When run as a timer, i915_hangcheck_elapsed() must adhere to all the rules of running in a softirq context. This is advantageous to us as we want to minimise the risk that a driver bug will prevent us from detecting a hung GPU. However, that is irrelevant if the driver bug prevents us from resetting and recovering. Still, it is prudent not to rely on mutexes inside the checker, but given the coarseness of dev->struct_mutex, doing so is extremely hard. Give in and run from a work queue, i.e. outside of softirq.

v2: Use own workqueue to avoid deadlocks (Daniel)
Cleanup commit msg and add comment to i915_queue_hangcheck() (Chris)

Cc: Jani Nikula <jani.nikula@intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> (v1)
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
[danvet: Remove accidental kerneldoc comment starter, to appease the 0 day builder.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
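Scheduling the check then goes through a dedicated delayed work item instead of a timer. A sketch of the queueing side, assuming the era's field names:

    void i915_queue_hangcheck(struct drm_device *dev)
    {
            struct drm_i915_private *dev_priv = dev->dev_private;
            struct i915_gpu_error *e = &dev_priv->gpu_error;

            if (!i915.enable_hangcheck)
                    return;

            /* A private ordered wq (v2): the checker may need
             * dev->struct_mutex, which is forbidden in softirq context
             * and could deadlock other work on the system wq. */
            queue_delayed_work(e->hangcheck_wq, &e->hangcheck_work,
                               round_jiffies_up_relative(DRM_I915_HANGCHECK_JIFFIES));
    }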
-
- 27 January 2015, 6 commits
-
-
Committed by Daniel Vetter

We can push down the decision whether to force flushing into the implementation, since in all places that matter obj->pin_display is accurate already. The only place where the optimization really matters is the sw_finish_ioctl, and that already checks for obj->pin_display on its own.

I suspect that this was simply an artifact of how

    commit 2c22569b
    Author: Chris Wilson <chris@chris-wilson.co.uk>
    Date:   Fri Aug 9 12:26:45 2013 +0100

        drm/i915: Update rules for writing through the LLC with the cpu

evolved - only v2 added the pin_display tracking. Note that we still retain the gist of this logic from the above commit with the explicit force argument for the low-level clflush function.

Ville noted in his review that there's a slight behavioural change in the set_to_gtt_domain function, which now also will flush display plane data. This opens up the potential for userspace to start doing buggy things by omitting the sw_finish_ioctl, which is why I've rejected a functionally equivalent patch from Ville a while ago:

http://lists.freedesktop.org/archives/intel-gfx/2013-November/036421.html

But on second consideration it's not that evil, and in any case the justification here is more clarity, not allowing crazy userspace.

Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Chris Wilson

Currently we are hitting the WARN inside i915_gem_object_set_cache_level() as we can now have an unbound object in the GTT write domain (due to 43566ded "drm/i915: Broaden application of set-domain(GTT)"). To avoid the warning, we need to track when we elided the clflush on a cacheable object and then evict the cache for the object when we move the object out of a cacheable domain.

Reported-by: Jani Nikula <jani.nikula@linux.intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Tested-by: Jani Nikula <jani.nikula@intel.com>
Testcase: igt/gem_mmap_wc/set-cache-level
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=88607
Tested-by: huax.lu@intel.com
[danvet: Split if into nested if as discussion on the m-l.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Mika Kuoppala

We pin when we submit to the execlist queue. Balance the pinning when the submitted queue is cleaned on reset.

Cc: Dave Gordon <david.s.gordon@intel.com>
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Thomas Daniel <thomas.daniel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Nick Hoath

Move all remaining elements that were unique to execlist queue items into the associated request.

Issue: VIZ-4274

v2: Rebase. Fixed issue of overzealous freeing of request.
v3: Removed re-addition of cleanup work queue (found by Daniel Vetter)
v4: Rebase.
v5: Actual removal of intel_ctx_submit_request. Update both tail and postfix pointer in __i915_add_request (found by Thomas Daniel)
v6: Removed unrelated changes

Signed-off-by: Nick Hoath <nicholas.hoath@intel.com>
Reviewed-by: Thomas Daniel <thomas.daniel@intel.com>
[danvet: Reformat comment with strange linebreaks.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Nick Hoath

The first-pass implementation of execlists required a backpointer to the context to be held in the intel_ringbuffer. However, the context pointer is available higher in the call stack. Remove the backpointer from the ring buffer structure and instead pass it down through the call stack.

v2: Integrate this changeset with the removal of duplicate request/execlist queue item members.
v3: Rebase
v4: Rebase. Remove passing of context when the request is passed.

Signed-off-by: Nick Hoath <nicholas.hoath@intel.com>
Reviewed-by: Thomas Daniel <thomas.daniel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Nick Hoath

Where there were duplicate variables for the tail, context and ring (engine) in the gem request and the execlist queue item, use the one from the request and remove the duplicate from the execlist queue item.

Issue: VIZ-4274

v1: Rebase
v2: Fixed build issues. Keep separate postfix & tail pointers as these are used in different ways. Reinserted missing full tail pointer update.

Signed-off-by: Nick Hoath <nicholas.hoath@intel.com>
Reviewed-by: Thomas Daniel <thomas.daniel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 26 January 2015, 2 commits
-
-
Committed by Bob Paauwe

When creating a fence for a tiled object, only fence the area that makes up the actual tiles. The object may be larger than the tiled area, and if we allow those extra addresses to be fenced, they'll get converted to addresses beyond where the object is mapped. This opens up the possibility of writes beyond the end of the object.

To prevent this, we adjust the size of the fence to only encompass the area that makes up the actual tiles. The extra space is considered un-tiled and now behaves as if it was a linear object.

Testcase: igt/gem_tiled_fence_overflow
Reported-by: Dan Hettena <danh@ghs.com>
Signed-off-by: Bob Paauwe <bob.j.paauwe@intel.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: stable@vger.kernel.org
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
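On gen4+ the adjustment can be expressed as rounding the fenced size down to a whole number of tile rows, so anything past the last complete row is treated as linear. A sketch under that assumption:

    /* An X tile row is 8 scanlines of @stride bytes; a Y tile row
     * is 32. Fence only complete rows. */
    u32 row_size = obj->stride *
            (obj->tiling_mode == I915_TILING_Y ? 32 : 8);
    u32 size = rounddown((u32)i915_gem_obj_ggtt_size(obj), row_size);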
-
Committed by David Woodhouse

Commit 82460d97 ("drm/i915: Rework ppgtt init to no require an aliasing ppgtt") introduced a regression on Broadwell, triggering the following IOMMU fault at startup:

    vgaarb: device changed decodes: PCI:0000:00:02.0,olddecodes=io+mem,decodes=io+mem:owns=io+mem
    dmar: DRHD: handling fault status reg 2
    dmar: DMAR:[DMA Write] Request device [00:02.0] fault addr 880000
    DMAR:[fault reason 23] Unknown
    fbcon: inteldrmfb (fb0) is primary device

Further commentary from Daniel:

I suggested this change to David after staring at the offending patch for a while. I have no idea and no theory whatsoever why this would upset the gpu less than the other way round. But it seems to work. David promised to chase hw people a bit more to get a more meaningful answer.

Wrt the comment that this deletes: I've done some digging and afaict loading context before ppgtt enable was once required before our recent restructuring of the context/ppgtt init code. Before that, context sw setup (i.e. allocating the default context) and hw setup were smashed together. Also, the setup of the default context was the bit that actually allocated the aliasing ppgtt structures, which is the reason for the context-before-ppgtt dependency. Or was, since with all the untangling there's no real dependency any more (functionally, at least; who knows what the hw is doing), so the comment is just stale.

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Cc: stable@vger.kernel.org
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
-
- 12 January 2015, 1 commit
-
-
Committed by Chris Wilson

If CONFIG_DEBUG_MUTEXES is set, the mutex->owner field is only cleared if the mutex debugging is enabled, which introduces a race in our mutex_is_locked_by(): we may inspect the old owner value before it is acquired by the new task.

This is the root cause of this error:

    diff --git a/kernel/locking/mutex-debug.c b/kernel/locking/mutex-debug.c
    index 5cf6731..3ef3736 100644
    --- a/kernel/locking/mutex-debug.c
    +++ b/kernel/locking/mutex-debug.c
    @@ -80,13 +80,13 @@ void debug_mutex_unlock(struct mutex *lock)
                    DEBUG_LOCKS_WARN_ON(lock->owner != current);

                    DEBUG_LOCKS_WARN_ON(!lock->wait_list.prev && !lock->wait_list.next);
    -               mutex_clear_owner(lock);
            }

            /*
             * __mutex_slowpath_needs_to_unlock() is explicitly 0 for debug
             * mutexes so that we can do it here after we've verified state.
             */
    +       mutex_clear_owner(lock);
            atomic_set(&lock->count, 1);
     }

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=87955
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: stable@vger.kernel.org
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
-
- 07 January 2015, 1 commit
-
-
Committed by Chris Wilson

This will allow us to set per-file, or even per-context, periods in the future.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 06 January 2015, 2 commits
-
-
Committed by Akash Goel

This patch provides support to create write-combining virtual mappings of a GEM object. It intends to provide the same functionality as the 'mmap_gtt' interface without the constraints and contention of a limited aperture space, but requires that clients handle the linear-to-tile conversion on their own. This is for improving the CPU write operation performance, as with such a mapping, writes and reads are almost 50% faster than with mmap_gtt. Similar to the GTT mmapping, and unlike the regular CPU mmapping, it avoids the cache flush after update from the CPU side, when the object is passed onto the GPU.

This type of mapping is especially useful in case of sub-region updates, i.e. when only a portion of the object is to be updated. Using a CPU mmap in such cases would normally incur a clflush of the whole object, and using a GTT mmapping would likely require eviction of an active object or fence and thus stall. The write-combining CPU mmap avoids both.

To ensure cache coherency, before using this mapping, the GTT domain has been reused here. This provides the required cache flush if the object is in the CPU domain, or synchronization against concurrent rendering. Although access through an uncached mmap should automatically invalidate the cache lines, this may not be true for non-temporal write instructions, and also not all pages of the object may be updated at any given point of time through this mapping. Having a call to get_pages in the set_to_gtt_domain function, as added in the earlier patch 'drm/i915: Broaden application of set-domain(GTT)', guarantees the clflush, and so there will be no cachelines holding data for the object before it is accessed through this map.

The drm_i915_gem_mmap structure (for the DRM_I915_GEM_MMAP_IOCTL) has been extended with a new flags field (defaulting to 0 for existing users). In order for userspace to detect the extended ioctl, a new parameter I915_PARAM_MMAP_VERSION has been added for versioning the ioctl interface.

v2: Fix error handling, invalid flag detection, renaming (ickle)
v3: Rebase to latest drm-intel-nightly codebase

The new mmapping is exercised by igt/gem_mmap_wc, igt/gem_concurrent_blit and igt/gem_gtt_speed.

Change-Id: Ie883942f9e689525f72fe9a8d3780c3a9faa769a
Signed-off-by: Akash Goel <akash.goel@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
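A sketch of the extended uapi shape and the flag validation it implies (the struct follows the description above; treat the handler fragment as illustrative):

    struct drm_i915_gem_mmap {
            __u32 handle;
            __u32 pad;
            __u64 offset;   /* offset into the object to map */
            __u64 size;     /* length of the mapping */
            __u64 addr_ptr; /* out: CPU virtual address */
            __u64 flags;    /* 0 for pre-extension userspace */
    #define I915_MMAP_WC 0x1
    };

    /* In the ioctl handler: reject unknown flags, and WC mappings
     * need PAT so the pages can be mapped write-combining. */
    if (args->flags & ~I915_MMAP_WC)
            return -EINVAL;
    if ((args->flags & I915_MMAP_WC) && !boot_cpu_has(X86_FEATURE_PAT))
            return -ENODEV;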
-
Committed by Chris Wilson

Previously, this was restricted to only operate on bound objects - to make pointer access through the GTT to the object coherent with writes to and from the GPU. A second usecase is drm_intel_bo_wait_rendering(), which at present does not function unless the object also happens to be bound into the GGTT (on current systems that is becoming increasingly rare, especially for the typical requests from mesa). A third usecase is a future patch wishing to extend the coverage of the GTT domain to include objects not bound into the GGTT but still in its coherent cache domain. For the latter pair of requests, we need to operate on the object regardless of its bind state.

v2: After discussion with Akash, we came to the conclusion that the get-pages was required in order for accurate domain tracking in the corner cases (like the shrinker) and also useful for ensuring memory coherency with earlier cached CPU mmaps in case userspace uses exotic cache bypass (non-temporal) instructions.
v3: Fix the inactive object check.
v4: Rebase to latest drm-intel-nightly codebase

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Akash Goel <akash.goel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 24 December 2014, 1 commit
-
-
Committed by Dave Airlie

This reverts commit 355a7018. This had some bad side effects under normal operation, and should have been dropped earlier.

Signed-off-by: Dave Airlie <airlied@redhat.com>
-
- 18 December 2014, 2 commits
-
-
Committed by Imre Deak

Without this RPM ref we can hit the device-suspended WARN via i915_gem_object_pin() -> ggtt_bind_vma() -> gen6_ggtt_insert_entries(). I noticed this on my BYT while keeping the i915 device in runtime-suspended state for a while.

I chose this place to take the ref to avoid the possible deadlock via the mutex_lock taken both later in this function and in the runtime suspend handler. This can happen if an RPM suspend event is queued and needs to be flushed before taking the RPM ref.

Testcase: igt/pm_rpm/gem-evict-pwrite
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=87363
Signed-off-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
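The fix follows the standard runtime-pm bracket, taken before struct_mutex for the deadlock reason given above (sketch):

    /* Grab the RPM ref first: the runtime-suspend handler also takes
     * struct_mutex, so flushing a queued suspend while holding the
     * mutex could deadlock. */
    intel_runtime_pm_get(dev_priv);

    mutex_lock(&dev->struct_mutex);
    /* ... pin/bind work that may write GGTT PTEs ... */
    mutex_unlock(&dev->struct_mutex);

    intel_runtime_pm_put(dev_priv);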
-
Committed by Rodrigo Vivi

Let's be optimistic that for future platforms this will remain the same, and reorganize a bit. Using if blocks instead of a switch makes life easier for future platform support additions.

v2: Jani pointed out I was missing reg_830 for some gen3 platforms. So let's make these platforms subcases of the gen checks.

Cc: Jani Nikula <jani.nikula@intel.com>
Cc: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 16 December 2014, 2 commits
-
-
Committed by Brad Volkin

This patch sets up all of the tracking and copying necessary to use batch pools with the command parser and dispatches the copied (shadow) batch to the hardware.

After this patch, the parser is in 'enabling' mode.

Note that performance takes a hit from the copy in some cases and will likely need some work. At a rough pass, the memcpy appears to be the bottleneck. Without having done a deeper analysis, two ideas that come to mind are:
1) Copy sections of the batch at a time, as they are reached by parsing. Might improve cache locality.
2) Copy only up to the userspace-supplied batch length and memset the rest of the buffer. Reduces the number of reads.

v2:
- Remove setting the capacity of the pool
- One global pool instead of per-ring pools
- Replace batch_obj with shadow_batch_obj and hook into eb->vmas
- Memset any space in the shadow batch beyond what gets copied
- Rebased on execlist prep refactoring
v3:
- Rebase on chained batch handling
- Squash in setting the secure dispatch flag
- Add a note about the interaction w/secure dispatch pinning
- Check for request->batch_obj == NULL in i915_gem_free_request
v4:
- Fix read domains for shadow_batch_obj
- Remove the set_to_gtt_domain call from i915_parse_cmds
- ggtt_pin/unpin in the parser block to simplify error handling
- Check USES_FULL_PPGTT before setting DISPATCH_SECURE flag
- Remove i915_gem_batch_pool_put calls
v5:
- Move 'pending_read_domains |= I915_GEM_DOMAIN_COMMAND' after the parser (danvet, from v4 0/7 feedback)

Issue: VIZ-4719
Signed-off-by: Brad Volkin <bradley.d.volkin@intel.com>
Reviewed-by: Jon Bloomfield <jon.bloomfield@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Brad Volkin

This adds a small module for managing a pool of batch buffers. The only current use case is for the command parser, as described in the kerneldoc in the patch. The code is simple, but separating it out makes it easier to change the underlying algorithms and to extend to future use cases should they arise.

The interface is simple: init to create an empty pool, fini to clean it up, get to obtain a new buffer. Note that all buffers are expected to be inactive before cleaning up the pool. Locking is currently based on the caller holding the struct_mutex. We already do that in the places where we will use the batch pool for the command parser. A sketch of the API shape follows below.

v2:
- s/BUG_ON/WARN_ON/ for locking assertions
- Remove the cap on pool size
- Switch from alloc/free to init/fini
v3:
- Idiomatic looping structure in _fini
- Correct handling of purged objects
- Don't return a buffer that's too much larger than needed
v4:
- Rebased to latest -nightly
v5:
- Remove _put() function and clean up comments to match
v6:
- Move purged check inside the loop (danvet, from v4 1/7 feedback)
v7:
- Use single list instead of two. (Chris W)
- s/active_list/cache_list
- Squashed in debug patches (Chris W):
  drm/i915: Add a batch pool debugfs file
  It provides some useful information about the buffers in the global command parser batch pool.
  v2: rebase on global pool instead of per-ring pools
  v3: rebase
  drm/i915: Add batch pool details to i915_gem_objects debugfs
  To better account for the potentially large memory consumption of the batch pool.
v8:
- Keep cache in LRU order (danvet, from v6 1/5 feedback)

Issue: VIZ-4719
Signed-off-by: Brad Volkin <bradley.d.volkin@intel.com>
Reviewed-by: Jon Bloomfield <jon.bloomfield@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
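The resulting interface is deliberately tiny; a sketch of its shape (signatures illustrative):

    /* Callers must hold dev->struct_mutex for all three. */
    void i915_gem_batch_pool_init(struct drm_device *dev,
                                  struct i915_gem_batch_pool *pool);
    void i915_gem_batch_pool_fini(struct i915_gem_batch_pool *pool);

    /* Returns an inactive buffer of at least @size bytes, reusing a
     * cached object when possible; the cache_list is kept in LRU
     * order (v8) and purged objects are skipped (v3/v6). */
    struct drm_i915_gem_object *
    i915_gem_batch_pool_get(struct i915_gem_batch_pool *pool, size_t size);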
-
- 15 December 2014, 1 commit
-
-
Committed by Tvrtko Ursulin

Things like reliable GGTT mappings and mirrored 2d-on-3d display will need to map objects into the same address space multiple times.

Added a GGTT view concept and linked it with the VMA to distinguish between multiple instances per address space. New objects and GEM functions which do not take this new view as a parameter assume the default of zero (I915_GGTT_VIEW_NORMAL), which preserves the previous behaviour. This now means that objects can have multiple VMA entries, so the code which assumed there will only be one also had to be modified. A sketch of the view plumbing follows below.

Alternative GGTT views are supposed to borrow DMA addresses from obj->pages, which is DMA mapped on first VMA instantiation and unmapped when the last one goes away.

v2:
* Removed per-view special casing in i915_gem_ggtt_prepare/finish_object in favour of creating and destroying DMA mappings on first VMA instantiation and last VMA destruction. (Daniel Vetter)
* Simplified i915_vma_unbind, which does not need to count the GGTT views. (Daniel Vetter)
* Also moved obj->map_and_fenceable reset under the same check.
* Checkpatch cleanups.
v3:
* Only retire objects once the last VMA is unbound.
v4:
* Keep scatter-gather table for alternative views persistent for the lifetime of the VMA.
* Propagate binding errors to callers and handle appropriately.
v5:
* Explicitly look for normal GGTT view in i915_gem_obj_bound to align usage in i915_gem_object_ggtt_unpin. (Michel Thierry)
* Change to single if statement in i915_gem_obj_to_ggtt. (Michel Thierry)
* Removed stray semicolon in i915_gem_object_set_cache_level.

For: VIZ-4544
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Michel Thierry <michel.thierry@intel.com>
[danvet: Drop hunk from i915_gem_shrink since it's just prettification but upsets a __must_check warning.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
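A view is a type tag carried in each VMA, and per-object VMA lookup becomes (object, vm, view) rather than just (object, vm). A sketch, with only the normal view existing at this stage:

    enum i915_ggtt_view_type {
            I915_GGTT_VIEW_NORMAL = 0, /* default; matches old behaviour */
    };

    struct i915_ggtt_view {
            enum i915_ggtt_view_type type;
    };

    struct i915_vma *
    i915_gem_obj_to_ggtt_view(struct drm_i915_gem_object *obj,
                              const struct i915_ggtt_view *view)
    {
            struct i915_vma *vma;

            list_for_each_entry(vma, &obj->vma_list, vma_link)
                    if (i915_is_ggtt(vma->vm) &&
                        vma->ggtt_view.type == view->type)
                            return vma;

            return NULL;
    }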
-
- 05 December 2014, 3 commits
-
-
Committed by Jani Nikula

Release struct_mutex if init_rings() fails. This is a regression introduced in

    commit 35a57ffb
    Author: Daniel Vetter <daniel.vetter@ffwll.ch>
    Date:   Thu Nov 20 00:33:07 2014 +0100

        drm/i915: Only init engines once

Reported-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
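The fix is the usual unlock-on-error pattern (sketch; the gt.init_rings callback name is an assumption based on the referenced refactoring):

    mutex_lock(&dev->struct_mutex);

    ret = dev_priv->gt.init_rings(dev);
    if (ret) {
            /* Previously this returned with struct_mutex still held. */
            mutex_unlock(&dev->struct_mutex);
            return ret;
    }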
-
Committed by Daniel Vetter

So apparently jiffies<->nsec<->ktime isn't accurate or something. At least if we time out, there's occasionally still a few hundred us left (in a 2 second timeout). Stuff I've tried and thrown out again:
- Sampling the before timestamp before jiffies. Doesn't improve the test pass rate at all.
- Using jiffies. Way too inaccurate, which means way too much drift with signals plus automatic ioctl restarting in userspace. In hindsight we should have used an absolute timeout, but hey, we need something for v3 of the i915 gem wait interfaces ;-)
- Trying to figure out where accuracy gets lost.

GL testcases really don't care all that much about this (as long as it's not massively off); it's just that the testcase gets a bit upset if it receives an ETIME with timeout > 0. So as long as we're in the ballpark it's good enough.

So patch everything up if we're at most one jiffy off. It gets me a solid test again.

This regression is probably introduced in

    commit 5ed0bdf2
    Author: Thomas Gleixner <tglx@linutronix.de>
    Date:   Wed Jul 16 21:05:06 2014 +0000

        drm: i915: Use nsec based interfaces

        Use ktime_get_raw_ns() and get rid of the back and forth timespec conversions.

        Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
        Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
        Signed-off-by: John Stultz <john.stultz@linaro.org>

Probably, because I'm too lazy to confirm it myself and still waiting for QA ;-)

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: John Stultz <john.stultz@linaro.org>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=82749
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
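The "patch everything up" amounts to forgiving up to one jiffy of leftover time when the wait expires. A sketch of the fixup (illustrative):

    /* ktime/jiffies round-trips can leave a few hundred usecs on the
     * clock when the wait has really expired; treat anything under
     * one jiffy as a clean timeout so userspace never sees ETIME
     * with a positive remaining timeout. */
    if (ret == -ETIME && *timeout < jiffies_to_usecs(1) * 1000)
            *timeout = 0;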
-
Committed by Daniel Vetter

We've lost the +1 required for correct timeouts in

    commit 5ed0bdf2
    Author: Thomas Gleixner <tglx@linutronix.de>
    Date:   Wed Jul 16 21:05:06 2014 +0000

        drm: i915: Use nsec based interfaces

        Use ktime_get_raw_ns() and get rid of the back and forth timespec conversions.

        Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
        Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
        Signed-off-by: John Stultz <john.stultz@linaro.org>

So fix this up by reinstating our hand-rolled _timeout function. While at it, bother with handling MAX_JIFFIES.

v2: Convert to usecs (we don't care about the accuracy anyway) first to avoid overflow issues Dave Gordon spotted.
v3: Drop the explicit MAX_JIFFY_OFFSET check, usecs_to_jiffies should take care of that already. It might be a bit too enthusiastic about it though.
v4: Chris has a much nicer color, so use his implementation. This requires exporting nsec_to_jiffies from time.c.

Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Dave Gordon <david.s.gordon@intel.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=82749
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: John Stultz <john.stultz@linaro.org>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Acked-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
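Per the v4 note, the reinstated helper sits on top of the now-exported nsecs_to_jiffies and looks roughly like this (sketch):

    static inline unsigned long nsecs_to_jiffies_timeout(const u64 n)
    {
            /* The +1 guard jiffy ensures we never wait less than
             * requested when n is not a whole number of jiffies; the
             * clamp keeps the result a valid relative timeout. */
            return min_t(u64, MAX_JIFFY_OFFSET, nsecs_to_jiffies(n) + 1);
    }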
-
- 04 December 2014, 1 commit
-
-
Committed by Tvrtko Ursulin

Multiple GGTT VMAs per object will be introduced in the near future, which will make it impossible to guarantee that the normal GGTT view is at the head of the list. The purpose of this patch is to break this assumption straight away, so any potential hidden assumptions in the code base can be bisected to this simple patch.

For: VIZ-4544
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Suggested-by: Daniel Vetter <daniel@ffwll.ch>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 03 December 2014, 1 commit
-
-
Committed by Daniel Vetter

We need to do that every time we resume the rings, not just at load. I've overlooked this in my untangling of the ring init code.

Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Reviewed-by: Dave Gordon <david.s.gordon@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-