- 04 March 2020, 1 commit
-
-
Submitted by Chris Wilson
The is-closed flag may be added after we have acquired the vma under the ctx->mutex, but will not take effect until after we release the vm->mutex. That is, the flag may be set on the vma as we attempt to bind it, which will cause the vma to be unbound later, after we unpin it.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200303093157.1153887-1-chris@chris-wilson.co.uk
-
- 22 February 2020, 1 commit
-
-
Submitted by Chris Wilson
As there is a delay before we pin a vma, there is an opportunity for another thread to have closed the vm and its vma (including ours). Check as soon as we acquire the vm->mutex and thus know the vm/vma is stable.
Closes: https://gitlab.freedesktop.org/drm/intel/issues/1291
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200221121940.2741563-1-chris@chris-wilson.co.uk
-
- 11 February 2020, 1 commit
-
-
Submitted by Chris Wilson
Assert that the i915_vma is now idle only if no other pins are present. If another user has the i915_vma pinned, they may submit more work to the i915_vma, skipping the vm->mutex used to serialise the unbind; in that case we need to wait again if we want to continue and unbind this vma. However, if we own the i915_vma (we hold the vm->mutex for the unbind and the pin_count is 0), we can assert that the vma remains idle as we unbind.
Fixes: 2850748e ("drm/i915: Pull i915_vma_pin under the vm->mutex")
Closes: https://gitlab.freedesktop.org/drm/intel/issues/530
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200123224459.38128-1-chris@chris-wilson.co.uk
(cherry picked from commit 60e94557)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
-
- 03 February 2020, 1 commit
-
-
Submitted by Chris Wilson
Take a reference to the previous exclusive fence on the i915_active, as we wish to add an await to it in the caller (and so must prevent it from being freed until we have completed that task).
Fixes: e3793468 ("drm/i915: Use the async worker to avoid reclaim tainting the ggtt->mutex")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200203094152.4150550-1-chris@chris-wilson.co.uk
-
- 31 January 2020, 1 commit
-
-
Submitted by Chris Wilson
On Braswell and Broxton (also known as Valleyview and Apollolake), we need to serialise updates of the GGTT using the big stop_machine() hammer. This has the side effect of appearing to lockdep as a possible reclaim (since it uses the cpuhp mutex, and that is tainted by per-cpu allocations). However, we want to use vm->mutex (including ggtt->mutex) from within the shrinker and so must avoid such possible taints. For this purpose, we introduced the asynchronous vma binding, and we can apply it to PIN_GLOBAL so long as we take care to add the necessary waits for the worker afterwards.
Closes: https://gitlab.freedesktop.org/drm/intel/issues/211
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200130181710.2030251-3-chris@chris-wilson.co.uk
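For reference, stop_machine() is the primitive in question: it runs a callback while every other CPU spins in a known state, so nothing can race the PTE rewrite, at the cost of dragging the cpuhp locks into lockdep's reclaim analysis. A minimal sketch of the pattern, with an illustrative argument struct and callback (not the driver's exact types):

```c
#include <linux/stop_machine.h>

/* Illustrative argument block for the stop_machine() callback. */
struct insert_entries_arg {
	struct i915_address_space *vm;
	struct i915_vma *vma;
	u32 flags;
};

/* Runs with every other CPU held in a known-safe spin loop. */
static int ggtt_insert_entries__cb(void *_arg)
{
	struct insert_entries_arg *arg = _arg;

	/* Safe to rewrite global PTEs: nothing else can touch the GGTT now. */
	gen8_ggtt_insert_entries(arg->vm, arg->vma, 0, arg->flags);
	return 0;
}

static void ggtt_insert_entries__BKL(struct i915_address_space *vm,
				     struct i915_vma *vma, u32 flags)
{
	struct insert_entries_arg arg = { vm, vma, flags };

	/* The "big hammer": implies the cpuhp lock, hence the lockdep taint. */
	stop_machine(ggtt_insert_entries__cb, &arg, NULL);
}
```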
-
- 24 January 2020, 2 commits
-
-
Submitted by Chris Wilson
Optimistically wait for the prior vma activity before taking the mutex, to minimise the mutex hold time while unbinding. We then verify that the vma is idle with a second wait under the mutex, to ensure it is safe to unbind.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200123224459.38128-2-chris@chris-wilson.co.uk
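The shape of that two-stage wait, sketched with approximate driver names (i915_vma_sync here stands for the unlocked wait on the vma's prior activity):

```c
/* Sketch of the two-stage wait-then-unbind flow. */
int i915_vma_unbind(struct i915_vma *vma)
{
	struct i915_address_space *vm = vma->vm;
	int err;

	/*
	 * Optimistic: wait for activity to idle without holding vm->mutex,
	 * keeping the mutex hold time short for everyone else.
	 */
	err = i915_vma_sync(vma);
	if (err)
		return err;

	err = mutex_lock_interruptible(&vm->mutex);
	if (err)
		return err;

	/*
	 * Another thread may have submitted new work meanwhile, so the
	 * idleness must be re-verified under the lock before unbinding.
	 */
	err = __i915_vma_unbind(vma); /* re-waits/checks as needed */
	mutex_unlock(&vm->mutex);

	return err;
}
```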
-
Submitted by Chris Wilson
Assert that the i915_vma is now idle only if no other pins are present. If another user has the i915_vma pinned, they may submit more work to the i915_vma, skipping the vm->mutex used to serialise the unbind; in that case we need to wait again if we want to continue and unbind this vma. However, if we own the i915_vma (we hold the vm->mutex for the unbind and the pin_count is 0), we can assert that the vma remains idle as we unbind.
Fixes: 2850748e ("drm/i915: Pull i915_vma_pin under the vm->mutex")
Closes: https://gitlab.freedesktop.org/drm/intel/issues/530
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200123224459.38128-1-chris@chris-wilson.co.uk
-
- 22 January 2020, 1 commit
-
-
Submitted by Chris Wilson
While we do flush writes to the vma before unbinding (to make sure they go through the right detiling register), we may also be concurrently poking at the GGTT_WRITE bit from set-domain, as we mark all GGTT vma associated with an object. We know this is for another vma, as we are currently unbinding this one -- so if this vma will be reused, it will be refaulted and have its dirty bit set before the next write.
Closes: https://gitlab.freedesktop.org/drm/intel/issues/999
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200121222447.419489-1-chris@chris-wilson.co.uk
-
- 11 January 2020, 1 commit
-
-
Submitted by Chris Wilson
We need to hold the runtime-pm wakeref to update the global PTEs (as they exist behind a PCI BAR). However, some systems invoke ACPI during runtime resume and so require allocations, which are verboten inside the vm->mutex. Ergo, we must not use intel_runtime_pm_get() inside the mutex, but lift the call outside.
Closes: https://gitlab.freedesktop.org/drm/intel/issues/958
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200110144418.1415639-1-chris@chris-wilson.co.uk
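The resulting wakeref/lock ordering looks roughly like this (a fragment-level sketch; the surrounding function and error label are illustrative):

```c
	intel_wakeref_t wakeref;
	int err;

	/*
	 * Acquire the wakeref first: runtime resume may allocate (ACPI),
	 * which must not happen inside vm->mutex (taken from fs-reclaim).
	 */
	wakeref = intel_runtime_pm_get(&i915->runtime_pm);

	err = mutex_lock_interruptible(&vm->mutex);
	if (err)
		goto out_rpm;

	/* ... update the global PTEs behind the PCI BAR ... */

	mutex_unlock(&vm->mutex);
out_rpm:
	intel_runtime_pm_put(&i915->runtime_pm, wakeref);
	return err;
```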
-
- 09 January 2020, 1 commit
-
-
Submitted by Chris Wilson
Access through the GGTT (iomap) into the vma does require the device to be awake. However, we often take i915_vma_pin_iomap() as an early preparatory step, long before we use the iomap. Asserting that the device is awake at pin time does not protect us and is merely a nuisance.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200108153550.3803446-2-chris@chris-wilson.co.uk
-
- 23 December 2019, 2 commits
-
-
Submitted by Chris Wilson
Start introducing a kref on i915_vma in order to protect the vma unbind (i915_gem_object_unbind) from a parallel destruction (i915_vma_parked). Later, we will use the refcount to manage all access and turn i915_vma into a first-class container.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Imre Deak <imre.deak@intel.com>
Acked-by: Imre Deak <imre.deak@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191222210256.2066451-2-chris@chris-wilson.co.uk
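The kref itself follows the standard kernel pattern; a sketch of what the get/put pair looks like (assuming a `ref` member on i915_vma, with the release helper named here purely for illustration):

```c
#include <linux/kref.h>

/* vma lifetime now follows the usual kref pattern (sketch). */
static inline struct i915_vma *i915_vma_get(struct i915_vma *vma)
{
	kref_get(&vma->ref);
	return vma;
}

static void i915_vma_release(struct kref *ref)
{
	struct i915_vma *vma = container_of(ref, typeof(*vma), ref);

	/* Last reference gone: safe to tear down, no racing parked/unbind. */
	__i915_vma_destroy(vma);
}

static inline void i915_vma_put(struct i915_vma *vma)
{
	kref_put(&vma->ref, i915_vma_release);
}
```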
-
Submitted by Chris Wilson
Since obj->frontbuffer is no longer protected by the struct_mutex, it may be removed while we are processing the execbuf. Mark the intel_frontbuffer as RCU protected, and so acquire a reference to the struct as we track activity upon it.
Closes: https://gitlab.freedesktop.org/drm/intel/issues/827
Fixes: 8e7cb179 ("drm/i915: Extract intel_frontbuffer active tracking")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: <stable@vger.kernel.org> # v5.4+
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191218104043.3539458-1-chris@chris-wilson.co.uk
(cherry picked from commit da42104f)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
-
- 22 December 2019, 1 commit
-
-
Submitted by Chris Wilson
Allocate only an internal intel_context for the kernel_context, forgoing a global GEM context for internal use, as we only require a separate address space (for our own protection). Having now weaned the GT from requiring ce->gem_context, we can stop referencing it entirely. This also means we no longer have to create random and unnecessary GEM contexts for internal use: GEM contexts are now entirely for tracking GEM clients, while intel_context is the execution environment on the GPU.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Andi Shyti <andi.shyti@intel.com>
Acked-by: Andi Shyti <andi.shyti@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191221160324.1073045-1-chris@chris-wilson.co.uk
-
- 18 December 2019, 2 commits
-
-
Submitted by Chris Wilson
Since obj->frontbuffer is no longer protected by the struct_mutex, it may be removed while we are processing the execbuf. Mark the intel_frontbuffer as RCU protected, and so acquire a reference to the struct as we track activity upon it.
Closes: https://gitlab.freedesktop.org/drm/intel/issues/827
Fixes: 8e7cb179 ("drm/i915: Extract intel_frontbuffer active tracking")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: <stable@vger.kernel.org> # v5.4+
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191218104043.3539458-1-chris@chris-wilson.co.uk
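The usual way to make such a lookup safe is RCU plus kref_get_unless_zero(); a sketch of the acquisition side (the frontbuffer_get name and the exact struct layout are assumptions):

```c
#include <linux/rcupdate.h>
#include <linux/kref.h>

/*
 * Sketch: safely acquire obj->frontbuffer while it may be freed in
 * parallel. The RCU read lock keeps the memory valid long enough to
 * attempt kref_get_unless_zero(); a zero refcount means the struct is
 * already on its way out and must be treated as absent.
 */
static struct intel_frontbuffer *
frontbuffer_get(struct drm_i915_gem_object *obj)
{
	struct intel_frontbuffer *front;

	rcu_read_lock();
	front = rcu_dereference(obj->frontbuffer);
	if (front && !kref_get_unless_zero(&front->ref))
		front = NULL;
	rcu_read_unlock();

	return front; /* caller owns a reference, or NULL */
}
```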
-
Submitted by Chris Wilson
If we inherit an error along the fence chain, we skip the main work callback and go straight to the error path. In the case of the vma bind worker, the pinned pages were only dropped from the worker itself. In the process, make sure we call the release earlier, rather than waiting until the final reference to the fence is dropped (a reference is kept while the fence is being listened upon).
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191216161717.2688274-1-chris@chris-wilson.co.uk
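A sketch of the worker's two callbacks under this scheme, using i915's internal dma_fence_work ops (the vma_bind_work wrapper and its field names are illustrative):

```c
/* Illustrative wrapper around i915's internal dma_fence_work. */
struct vma_bind_work {
	struct dma_fence_work base;
	struct i915_vma *vma;
	enum i915_cache_level cache_level;
	u32 flags;
};

static void __vma_bind(struct dma_fence_work *work)
{
	struct vma_bind_work *vw = container_of(work, typeof(*vw), base);

	/* Skipped entirely if an error was inherited along the chain. */
	vw->vma->ops->bind_vma(vw->vma, vw->cache_level, vw->flags);
}

static void __vma_release(struct dma_fence_work *work)
{
	struct vma_bind_work *vw = container_of(work, typeof(*vw), base);

	/*
	 * Called on completion, success or error, and now promptly rather
	 * than at the last fence reference: drop the pinned pages here.
	 */
	__i915_gem_object_unpin_pages(vw->vma->obj);
}

static const struct dma_fence_work_ops bind_ops = {
	.name = "bind",
	.work = __vma_bind,
	.release = __vma_release,
};
```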
-
- 09 December 2019, 1 commit
-
-
Submitted by Chris Wilson
If someone else acquires the i915_vma before we complete our wait and unbind it, we currently error out with -EBUSY. Use -EAGAIN instead, so that the caller can be prepared to try again if necessary.
Closes: https://gitlab.freedesktop.org/drm/intel/issues/683
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191208161252.3015727-2-chris@chris-wilson.co.uk
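In code terms the change is just the choice of errno at the contention point; a hedged fragment:

```c
	/* Sketch: inside the wait-and-unbind path, after the wait. */
	if (i915_vma_is_pinned(vma)) {
		/*
		 * Someone re-acquired the vma between our wait and the
		 * unbind. This is a transient condition, so tell the
		 * caller to back off and retry rather than fail hard.
		 */
		return -EAGAIN;
	}
```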
-
- 06 December 2019, 1 commit
-
-
Submitted by Chris Wilson
Remove the vma we wish to destroy from the gt->closed_list, to avoid having two i915_vma_parked() callers trying to free it.
Fixes: aa5e4453 ("drm/i915/gem: Try to flush pending unbind events")
References: 2850748e ("drm/i915: Pull i915_vma_pin under the vm->mutex")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191205214159.829727-1-chris@chris-wilson.co.uk
-
- 05 December 2019, 1 commit
-
-
Submitted by Chris Wilson
It is not acceptable for context pinning to fail with -ENOSPC, as we should always be able to make space in the GGTT. The only reason we may fail is that other "temporary" context pins are reserving their space and we need to wait for an available slot.
Closes: https://gitlab.freedesktop.org/drm/intel/issues/676
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191205113726.413351-2-chris@chris-wilson.co.uk
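One way to express that policy is a retry loop that treats -ENOSPC as "wait for a slot"; in the sketch below, the hypothetical wait_for_ggtt_slot() helper stands in for whatever flush/wait the driver actually performs:

```c
	int err;

	/* Sketch: translate transient GGTT exhaustion into a retry. */
	do {
		err = i915_vma_pin(vma, 0, 0, PIN_GLOBAL | PIN_HIGH);
		if (err != -ENOSPC)
			break;

		/*
		 * Hypothetical helper: wait for a temporary context pin
		 * to be released, freeing its GGTT slot, then try again.
		 */
		err = wait_for_ggtt_slot(vma->vm);
	} while (!err);
```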
-
- 04 December 2019, 1 commit
-
-
Submitted by Abdiel Janulgue
This is really just an alias of mmap_gtt. The 'mmap offset' nomenclature comes from the value returned by this ioctl: the offset into the device fd which userspace uses with mmap(2). mmap_gtt was our initial mmap_offset implementation; this extends our CPU mmap support to allow additional fault handlers that depend on the object's backing pages. Note that we multiplex mmap_gtt and mmap_offset through the same ioctl, and use the zero-extending behaviour of drm to differentiate between them when we inspect the flags. To support multiple mmap types on an object, we need to support multiple mmap_offsets for an object (each offset in the global device address space corresponding to a unique instance of the object for a file + mmap type). As we drop the simplified drm core idea of a single mmap_offset, we need to provide replacement hooks for the dumb mmap interface as well.
Link: https://gitlab.freedesktop.org/mesa/mesa/merge_requests/1675
Testcase: igt/gem_mmap_offset
Signed-off-by: Abdiel Janulgue <abdiel.janulgue@linux.intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20191204120032.3682839-1-chris@chris-wilson.co.uk
-
- 31 October 2019, 1 commit
-
-
Submitted by Chris Wilson
In order to keep assert_bind_count() valid, we need to hold the vma page reference until after we drop the bind count. However, we must also keep the drm_mm_remove_node() as the last action of i915_vma_unbind() so that it serialises with the unlocked check inside i915_vma_destroy(). So we need to split up i915_vma_remove() so that we order the detach, the dropping of the pages, and the removal as required during unbind.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=112067
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191030192159.18404-1-chris@chris-wilson.co.uk
-
- 22 October 2019, 1 commit
-
-
Submitted by Chris Wilson
Currently, even though i915_vma_parked() operates on a per-gt struct, it is called from a global pm notifier. This oddity exists only because the long-term plan is to decouple the vma cache from the pm notification, but right now the oddity stands out like a sore thumb!
Suggested-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191021183236.21790-1-chris@chris-wilson.co.uk
-
- 15 October 2019, 2 commits
-
-
Submitted by Chris Wilson
We now do the page pin count upfront in vma_get_pages/vma_put_pages, so that we perform the allocations before we enter the vm->mutex. Our vma page references are tracked in vma->pages_count, so the extra obj->pages_pin_count taken later in i915_vma_insert and i915_vma_remove is redundant and, worse, throws off the shrinker's logic on when it can free an object by unbinding it.
Reported-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Reported-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191015100155.10376-1-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson
Before we attempt set_pages on the vma, we claim an obj.pages_pin_count for it. If we subsequently fail to set the pages on the vma, we need to drop our pinning before returning the error.
Reported-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191015093915.3995-1-chris@chris-wilson.co.uk
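The fix is the classic claim/unwind shape; a sketch (vma_get_pages here is an approximation of the function being fixed):

```c
/* Sketch of the balanced pin/unpin on the error path. */
static int vma_get_pages(struct i915_vma *vma)
{
	int err;

	/* Claim a page reference up front, before any vm->mutex work. */
	err = i915_gem_object_pin_pages(vma->obj);
	if (err)
		return err;

	err = vma->ops->set_pages(vma);
	if (err)
		goto err_unpin;	/* failed: give back what we claimed */

	return 0;

err_unpin:
	i915_gem_object_unpin_pages(vma->obj);
	return err;
}
```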
-
- 04 October 2019, 6 commits
-
-
Submitted by Chris Wilson
Forgo the struct_mutex serialisation for i915_active, and interpose its own mutex handling for active/retire. This is a multi-layered sleight-of-hand. First, we had to ensure that no active/retire callbacks accidentally inverted the mutex ordering rules, nor assumed that they were themselves serialised by struct_mutex. More challenging, though, is the rule over updating elements of the active rbtree. Instead of the whole i915_active being serialised by struct_mutex, allocations/rotations of the tree are serialised by the i915_active.mutex and individual nodes are serialised by the caller using the i915_timeline.mutex (we need to use nested spinlocks to interact with the dma_fence callback lists). The pain point here is that instead of a single mutex around execbuf, we now have to take a mutex for each active tracker (one for each vma, context, etc.) and a couple of spinlocks for each fence update. The improvement in fine-grained locking, allowing for multiple concurrent clients (eventually!), should be worth it in typical loads. v2: Add some comments that barely elucidate anything :(
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-6-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson
As we need to use a mutex to serialise i915_active activation (because we want to allow the callback to sleep), we need to push the i915_active.retire into a worker callback in case we need to retire from an atomic context.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-5-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson
Replace the struct_mutex requirement for pinning the i915_vma with the local vm->mutex instead. Note that the vm->mutex is tainted by the shrinker (we require unbinding from inside fs-reclaim) and so we cannot allocate while holding that mutex. Instead we have to preallocate workers to do the allocation and apply the PTE updates after we have reserved their slot in the drm_mm (using fences to order the PTE writes with the GPU work and with later unbind). In adding the asynchronous vma binding, one subtle requirement is to avoid coupling the binding fence into the backing object->resv. That is, the asynchronous binding only applies to the vma timeline itself and not to the pages, as that is a more global timeline (the binding of one vma does not need to be ordered with another vma, nor does the implicit GEM fencing depend on a vma, only on writes to the backing store). Keeping the vma binding distinct from the backing store timelines is verified by a number of async gem_exec_fence and gem_exec_schedule tests. The way we do this is quite simple: we keep the fence for the vma binding separate and only wait on it as required, and never add it to the obj->resv itself. Another consequence of reducing the locking around the vma is that the destruction of the vma is no longer globally serialised by struct_mutex. A natural solution would be to add a kref to i915_vma, but that requires decoupling the reference cycles, possibly by introducing a new i915_mm_pages object that is owned by both obj->mm and vma->pages. However, we have not taken that route due to the overshadowing lmem/ttm discussions, and instead play a series of complicated games with trylocks to (hopefully) ensure that only one destruction path is called! v2: Add some commentary, and some helpers to reduce patch churn.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-4-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson
The premise here is simply to avoid having to acquire the vm->mutex inside vma create/destroy to update the vm->unbound_lists, so as to avoid some nasty lock recursions later.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-2-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson
A subset of 71724f70 ("drm/mm: Use helpers for drm_mm_node booleans"), in order to prepare drm-intel-next-queued for subsequent patches before we can backmerge 71724f70 itself.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004142226.13711-1-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson
In preparation for rearranging the booleans into a flags field, ensure all the current users are using the inline helpers and not directly accessing the members.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191003210100.22250-3-chris@chris-wilson.co.uk
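Concretely, the conversion is mechanical; for example (a fragment-level sketch using the real drm_mm_node_allocated() helper):

```c
	/* Before: peeking at the struct member directly. */
	if (vma->node.allocated)
		drm_mm_remove_node(&vma->node);

	/*
	 * After: the inline helper, which keeps working once the
	 * booleans are later folded into a single flags field.
	 */
	if (drm_mm_node_allocated(&vma->node))
		drm_mm_remove_node(&vma->node);
```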
-
- 20 September 2019, 1 commit
-
-
Submitted by Chris Wilson
The request->timeline is only valid until the request is retired (i.e. before it is completed). Upon retiring the request, the context may be unpinned and freed, and along with it the timeline may be freed. We therefore need to be very careful when chasing rq->timeline that the pointer does not disappear beneath us. The vast majority of users are in a protected context, either during request construction or retirement, where the timeline->mutex is held and the timeline cannot disappear. It is those few off the beaten path (where we access a second timeline) that need extra scrutiny -- to be added in the next patch, after first adding the warnings about dangerous access. One complication, where we cannot use the timeline->mutex itself, is during request submission onto hardware (under spinlocks). Here, we want to check on the timeline to finalize the breadcrumb, and so we need to impose a second rule to ensure that the request->timeline is indeed valid. As we are submitting the request, its context and timeline must be pinned, as they will be used by the hardware. Since they are pinned, we know the request->timeline must still be valid, and we cannot submit the idle barrier until after we release the engine->active.lock; ergo, while submitting and holding that spinlock, a second thread cannot release the timeline. v2: Don't be lazy inside selftests; hold the timeline->mutex for as long as we need it, and tidy up acquiring the timeline with a bit of refactoring (i915_active_add_request)
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190919111912.21631-1-chris@chris-wilson.co.uk
-
- 11 September 2019, 1 commit
-
-
Submitted by Chris Wilson
In preparation for reducing the struct_mutex stranglehold around the vm, make the vma.flags atomic so that we can acquire a pin on the vma atomically before deciding if we need to take the mutex.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190911090243.16786-1-chris@chris-wilson.co.uk
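The atomic flags enable a lock-free fast path of roughly this shape (a sketch: the I915_VMA_ERROR name and the assumption that the pin count lives in the low bits of vma->flags are illustrative):

```c
/*
 * Sketch: try to take a pin without vm->mutex. Returns false if the
 * caller must fall back to the locked slow path.
 */
static bool i915_vma_try_pin(struct i915_vma *vma, unsigned int flags)
{
	unsigned int bound;

	bound = atomic_read(&vma->flags);
	do {
		if (unlikely(flags & ~bound))
			return false;	/* not yet bound as required */

		if (unlikely(bound & I915_VMA_ERROR))
			return false;	/* slow path must handle errors */

		/* Count the pin atomically against a stable snapshot. */
	} while (!atomic_try_cmpxchg(&vma->flags, &bound, bound + 1));

	return true;
}
```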
-
- 10 September 2019, 2 commits
-
-
Submitted by Matthew Auld
Try to tidy up the cache-coloring such that we rid the code of any mm.color_adjust assumptions. This should hopefully make it more obvious in the code when we need to actually use the cache-level as the color, and as a bonus should make adding a different color-scheme simpler.
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190909124052.22900-3-matthew.auld@intel.com
-
Submitted by Matthew Auld
Export color_differs so that we can use it elsewhere.
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190909124052.22900-1-matthew.auld@intel.com
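The helper in question is tiny; as a sketch, the exported form amounts to:

```c
/* Sketch: does this node's color differ from the one we want? */
static inline bool
i915_node_color_differs(const struct drm_mm_node *node, unsigned long color)
{
	return drm_mm_node_allocated(node) && node->color != color;
}
```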
-
- 22 August 2019, 2 commits
-
-
Submitted by Chris Wilson
Avoid calling i915_vma_put_fence() by using our alternate paths that bind a secondary vma, avoiding the original fenced vma. For the few instances where we need to release the fence (i.e. on binding when the GGTT range becomes invalid), replace the put_fence with a revoke_fence.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190822061557.18402-1-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson
Since we want to revoke the ggtt vma from only under the ggtt->mutex, we need to move the protection of the userfault tracking from the struct_mutex to the ggtt->mutex.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190822060914.2671-2-chris@chris-wilson.co.uk
-
- 20 August 2019, 1 commit
-
-
Submitted by Chris Wilson
Before we acquire the vma for GPU activity, ensure that the underlying object is not already in the process of being freed.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190820100531.8430-1-chris@chris-wilson.co.uk
-
- 17 August 2019, 1 commit
-
-
Submitted by Chris Wilson
As every i915_active_request should be serialised by a dedicated lock, i915_active consists of a tree of locks: one for each node. Mark up the i915_active_request with the lock that is supposed to be guarding it, so that we can verify that the serialised updates are indeed serialised.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190816121000.8507-2-chris@chris-wilson.co.uk
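A sketch of what such markup typically looks like (the field and helper names here are assumptions, and the debug-only guard mirrors the driver's CONFIG_DRM_I915_DEBUG_GEM convention):

```c
#include <linux/lockdep.h>

/* Sketch: each active request remembers which lock guards it ... */
struct i915_active_request {
	struct i915_request __rcu *request;
	struct list_head link;
#if IS_ENABLED(CONFIG_DRM_I915_DEBUG_GEM)
	struct mutex *lock;	/* the dedicated serialising lock */
#endif
};

/* ... so every update can assert the right lock is actually held. */
static inline void
active_request_assert_held(const struct i915_active_request *active)
{
#if IS_ENABLED(CONFIG_DRM_I915_DEBUG_GEM)
	lockdep_assert_held(active->lock);
#endif
}
```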
-
- 16 August 2019, 1 commit
-
-
Submitted by Chris Wilson
Move the active tracking for the frontbuffer operations out of the i915_gem_object and into its own first-class (refcounted) object. In the process of detangling, we switch from low-level request tracking to the easier i915_active -- with the plan that this avoids any potential atomic callbacks, as the frontbuffer tracking wishes to sleep as it flushes.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190816074635.26062-1-chris@chris-wilson.co.uk
-
- 13 August 2019, 2 commits
-
-
Submitted by Christian König
Be more consistent with the naming of the other DMA-buf objects.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/323401/
-
Submitted by Chris Wilson
We were using the last_fence to track the last request that used this vma that might be interpreted by a fence register, and forced ourselves to wait for this request before modifying any fence register that overlapped our vma. Due to the requirement that we track any XY_BLT command, linear or tiled, this in effect meant that we had to track the vma for its active lifespan anyway, so we can forgo the explicit last_fence tracking and just use the whole vma->active. Another solution would be to pipeline the register updates, which would help resolve some long-running stalls for gen3 (but only gen 2 and 3!)
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190812174804.26180-1-chris@chris-wilson.co.uk
-