- 10 January 2022, 1 commit
-
-
Committed by Matthew Auld

Ditch the writeback hook and drop i915_gem_object_writeback(). We already support the shrinker_release_pages hook, which can just call shmem_writeback directly.

Suggested-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211215110746.865-1-matthew.auld@intel.com
-
- 21 December 2021, 1 commit
-
-
Committed by Maarten Lankhorst

This is required for i915_gem_evict_vm, to be able to evict the entire VM, including objects that are already locked to the current ww ctx.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211216142749.1966107-12-maarten.lankhorst@linux.intel.com
-
- 25 November 2021, 1 commit
-
-
Committed by Maarten Lankhorst

For now, we will only allow async migration when TTM is used, so the paths we care about are related to TTM. The mmap path is handled by having the fence in ttm_bo->moving. When pinning, the binding only becomes available after the moving fence is signaled, and pinning a CPU map will only work after the moving fence signals. This should close all holes where userspace can read a buffer before it's fully migrated.

v2:
- Fix a couple of SPARSE warnings
v3:
- Fix a NULL pointer dereference
v4:
- Ditch the moving fence waiting for i915_vma_pin_iomap() and replace it with a verification that the vma is already bound. (Matthew Auld)
- Squash with a previous patch introducing moving fence waiting and accessing interfaces (Matthew Auld)
- Rename to indicate that we also add support for sync waiting.
v5:
- Fix check for NULL and unreferencing in i915_vma_verify_bind_complete() (Matthew Auld)
- Fix compilation failure if !CONFIG_DRM_I915_DEBUG_GEM
- Fix include ordering. (Matthew Auld)
v7:
- Fix yet another compilation failure with clang if !CONFIG_DRM_I915_DEBUG_GEM

Co-developed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211122214554.371864-2-thomas.hellstrom@linux.intel.com
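To illustrate the gating described above, here is a minimal sketch of waiting on a TTM moving fence before allowing CPU access. It assumes the caller already holds the object (dma_resv) lock, and wait_for_move_then_access() is a hypothetical helper, not the driver's actual entry point.

```c
#include <linux/dma-fence.h>
#include <drm/ttm/ttm_bo_api.h>

/* Hypothetical helper: block CPU access until any in-flight async
 * migration has signalled. The caller holds the object lock, which
 * keeps bo->moving stable while we grab a reference. */
static int wait_for_move_then_access(struct ttm_buffer_object *bo)
{
	struct dma_fence *moving = dma_fence_get(bo->moving);
	long ret = 0;

	if (moving) {
		/* Interruptible wait for the migration blit/copy. */
		ret = dma_fence_wait(moving, true);
		dma_fence_put(moving);
	}
	return ret < 0 ? ret : 0;
}
```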
-
- 02 November 2021, 2 commits
-
-
Committed by Matthew Auld

Move it next to its partner in crime, gpu_write_needs_clflush. For better readability, let's keep gpu vs cpu at least in the same file.

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211027161813.3094681-3-matthew.auld@intel.com
-
Committed by Thomas Hellström

As we start to introduce asynchronous failsafe object migration, where we update the object state and then submit asynchronous commands, we need to record what memory resources are actually used by various parts of the command stream. Initially this is for three purposes:

1) Error capture.
2) Asynchronous migration error recovery.
3) Asynchronous vma bind.

At the time these happen, the object state may have been updated to be several migrations ahead and the object sg-tables discarded. In order to make it possible to keep sg-tables with memory resource information for these operations, introduce refcounted sg-tables that aren't freed until the last user is done with them.

The alternative would be to reference information sitting on the corresponding ttm_resources, which typically have the same lifetime as these refcounted sg_tables, but that leads to other awkward constructs: due to the design direction chosen for TTM resource managers, that would lead to diamond-style inheritance, the LMEM resources may sometimes be prematurely freed, and finally the subclassed struct ttm_resource would have to bleed into the asynchronous vma bind code.

v3:
- Address a number of style issues (Matthew Auld)
v4:
- Don't check for st->sgl being NULL in i915_ttm_tt__shmem_unpopulate(), that should never happen. (Matthew Auld)
v5:
- Fix a potential double-free (Matthew Auld)

Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211101122444.114607-1-thomas.hellstrom@linux.intel.com
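A minimal sketch of the refcounting scheme, using the core kref API; the structure and function names below are illustrative stand-ins, not necessarily the exact ones the patch introduces.

```c
#include <linux/kernel.h>
#include <linux/kref.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>

/* Illustrative refcounted sg-table: error capture, migration recovery
 * and async vma bind each hold a reference, so the table outlives any
 * further object migrations. */
struct refct_sgt {
	struct kref kref;
	struct sg_table table;
};

static void refct_sgt_release(struct kref *ref)
{
	struct refct_sgt *rsgt = container_of(ref, struct refct_sgt, kref);

	sg_free_table(&rsgt->table);
	kfree(rsgt);
}

static inline struct refct_sgt *refct_sgt_get(struct refct_sgt *rsgt)
{
	kref_get(&rsgt->kref);
	return rsgt;
}

static inline void refct_sgt_put(struct refct_sgt *rsgt)
{
	if (rsgt)
		kref_put(&rsgt->kref, refct_sgt_release);
}
```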
-
- 22 October 2021, 2 commits
-
-
Committed by Matthew Auld

We currently just evict lmem objects to system memory when under memory pressure. In this case we might lack the usual object mm.pages, which effectively hides the pages from the i915-gem shrinker until we actually "attach" the TT to the object; in the case of lmem-only objects it just gets migrated back to lmem when touched again. For all cases we can just adjust the i915 shrinker LRU each time we also adjust the TTM LRU. The two cases we care about are:

1) When something is moved by TTM, including when initially populating an object. Importantly this covers the case where TTM moves something from lmem <-> smem, outside of the normal get_pages() interface, which should still ensure the shmem pages underneath are reclaimable.
2) When calling into i915_gem_object_unlock(). The unlock should ensure the object is removed from the shrinker LRU, if it was indeed swapped out or just purged, when the shrinker drops the object lock.

v2(Thomas):
- Handle managing the shrinker LRU in adjust_lru, where it is always safe to touch the object.
v3(Thomas):
- Pretty much a re-write. This time piggyback off the shrink_pin stuff, which actually seems to fit quite well for what we want here.
v4(Thomas):
- Just use a simple boolean for tracking ttm_shrinkable.
v5:
- Ensure we call adjust_lru when faulting the object, to ensure the pages are visible to the shrinker, if needed.
- Add back the adjust_lru when in i915_ttm_move (Thomas)
v6(Reported-by: kernel test robot <lkp@intel.com>):
- Remove unused i915_tt

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> #v4
Link: https://patchwork.freedesktop.org/patch/msgid/20211018091055.1998191-6-matthew.auld@intel.com
-
Committed by Matthew Auld

For cached objects we can allocate our pages directly in shmem. This should make it possible (in a later patch) to utilise the existing i915-gem shrinker code for such objects. For now this is still disabled.

v2(Thomas):
- Add an optional try_to_writeback hook for objects. Importantly we need to check if the object is even still shrinkable; in between us dropping the shrinker LRU lock and acquiring the object lock it could, for example, have been moved. Also we need to differentiate between "lazy" shrinking and the immediate writeback mode. Also, later we need to handle objects which don't even have mm.pages, so bundling this into put_pages() would require somehow handling that edge case; hence just letting the ttm backend handle everything in try_to_writeback doesn't seem too bad.
v3(Thomas):
- Likely a bad idea to touch the object from the unpopulate hook, since it's not possible to hold a reference without also creating a circular dependency, so likely this is too fragile. For now just ensure we at least mark the pages as dirty/accessed when called from the shrinker on WILLNEED objects (see the sketch below).
- s/try_to_writeback/shrinker_release_pages, since this can do more than just writeback.
- Get rid of the do_backup boolean and just set the SWAPPED flag prior to calling unpopulate.
- Keep shmem_tt as lowest priority for the TTM LRU bo_swapout walk, since these just get skipped anyway. We can try to come up with something better later.
v4(Thomas):
- s/PCI_DMA/DMA/. Also drop NO_KERNEL_MAPPING and NO_WARN, which apparently don't do anything with streaming mappings.
- Just pass along the error for ->truncate, and assume nothing.

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: Oak Zeng <oak.zeng@intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Acked-by: Oak Zeng <oak.zeng@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211018091055.1998191-2-matthew.auld@intel.com
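The v3 note mentions marking pages dirty/accessed when the shrinker hits a WILLNEED object; a sketch of that bookkeeping, assuming the pages are tracked in an sg_table (the helper name is illustrative):

```c
#include <linux/mm.h>
#include <linux/scatterlist.h>
#include <linux/swap.h>

/* Keep page state honest when releasing shmem pages without an
 * immediate writeback: dirty so the contents aren't lost, accessed so
 * the core VM keeps them warm on its LRU. */
static void mark_pages_dirty_accessed(struct sg_table *st)
{
	struct sg_page_iter iter;

	for_each_sg_page(st->sgl, &iter, st->nents, 0) {
		struct page *page = sg_page_iter_page(&iter);

		set_page_dirty(page);
		mark_page_accessed(page);
	}
}
```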
-
- 20 October 2021, 1 commit
-
-
Committed by Matthew Auld

It looks like we will need this in some more places, so extract it as a helper.

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211018174508.2137279-3-matthew.auld@intel.com
-
- 05 October 2021, 1 commit
-
-
Committed by Daniele Ceraolo Spurio

This API allows user mode to create protected buffers and to mark contexts as making use of such objects. Only when using contexts marked in such a way is execution guaranteed to work as expected.

Contexts can only be marked as using protected content at creation time (i.e. the parameter is immutable), and they must be both bannable and not recoverable. Given that the protected session gets invalidated on suspend, contexts created this way hold a runtime pm wakeref until they're either destroyed or invalidated.

All protected objects and contexts will be considered invalid when the PXP session is destroyed, and all new submissions using them will be rejected. All intel contexts within the invalidated gem contexts will be marked banned. Userspace can detect that an invalidation has occurred via the RESET_STATS ioctl, where we report it the same way as a ban due to a hang.

v5: squash patches, rebase on proto_ctx, update kerneldoc
v6: rebase on obj create_ext changes
v7: Use a session counter to check if an object is valid, hold the wakeref in the context, don't add a new flag to RESET_STATS (Daniel)
v8: don't increase the guilty count for contexts banned during pxp invalidation (Rodrigo)
v9: better comments, avoid wakeref put race between pxp_inval and context_close, add usage examples (Rodrigo)
v10: modify the internal set/get-protected-context functions to not return -ENODEV when setting the PXP param to false, or when getting the param on pxp-unsupported hw, or when i915 was built with CONFIG_PXP off

Signed-off-by: Alan Previn <alan.previn.teres.alexis@intel.com>
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Signed-off-by: Bommu Krishnaiah <krishnaiah.bommu@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Cc: Jason Ekstrand <jason@jlekstrand.net>
Cc: Daniel Vetter <daniel.vetter@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210924191452.1539378-11-alan.previn.teres.alexis@intel.com
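A userspace sketch of the uAPI described above: chaining two SETPARAM extensions onto CONTEXT_CREATE_EXT to get a protected, non-recoverable context. This follows the i915_drm.h usage pattern; error handling is elided.

```c
#include <stdint.h>
#include <sys/ioctl.h>
#include <drm/i915_drm.h>

/* Create a context flagged for protected (PXP) content. The context
 * must be non-recoverable; bannable is already the default. */
static uint32_t create_protected_ctx(int i915_fd)
{
	struct drm_i915_gem_context_create_ext_setparam p_protected = {
		.base = { .name = I915_CONTEXT_CREATE_EXT_SETPARAM },
		.param = {
			.param = I915_CONTEXT_PARAM_PROTECTED_CONTENT,
			.value = 1,
		},
	};
	struct drm_i915_gem_context_create_ext_setparam p_norecover = {
		.base = {
			.name = I915_CONTEXT_CREATE_EXT_SETPARAM,
			.next_extension = (uintptr_t)&p_protected,
		},
		.param = {
			.param = I915_CONTEXT_PARAM_RECOVERABLE,
			.value = 0,
		},
	};
	struct drm_i915_gem_context_create_ext create = {
		.flags = I915_CONTEXT_CREATE_FLAGS_USE_EXTENSIONS,
		.extensions = (uintptr_t)&p_norecover,
	};

	ioctl(i915_fd, DRM_IOCTL_I915_GEM_CONTEXT_CREATE_EXT, &create);
	return create.ctx_id;
}
```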
-
- 01 October 2021, 1 commit
-
-
Committed by Thomas Hellström

We may end up in i915_ttm_bo_destroy() in an error path before the object is fully initialized. In that case it's not correct to call __i915_gem_free_object(), because that function a) assumes the gem object refcount is 0, which it isn't, and b) frees the placements, which are owned by the caller until the init_object() region ops returns successfully. Fix this by providing a lightweight cleanup function, __i915_gem_object_fini(), which is also called by __i915_gem_free_object().

While doing this, also make sure we call dma_resv_fini() as part of ordinary object destruction and not from the RCU callback that frees the object. This will help track down bugs where the object is incorrectly locked from an RCU lookup.

Finally, make sure the object isn't put on the region list until it's either locked or fully initialized, in order to block list processing of partially initialized objects.

v2:
- The TTM object backend memory was freed before the gem pages were put. Separate this functionality into __i915_gem_object_pages_fini() and call it from the TTM delete_mem_notify() callback.
v3:
- Include i915_gem_object_free_mmaps() in __i915_gem_object_pages_fini() to make sure we don't inadvertently introduce a race.

Fixes: 48b09612 ("drm/i915: Move __i915_gem_free_object to ttm_bo_destroy")
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com> #v1
Link: https://patchwork.freedesktop.org/patch/msgid/20210930113236.583531-1-thomas.hellstrom@linux.intel.com
-
- 22 September 2021, 1 commit
-
-
Committed by Tvrtko Ursulin

It is not very useful to have code which tries to report a rapidly transient state and will not report anything the majority of the time, especially since it is currently only used from <debugfs>/i915_gem_framebuffers.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Acked-by: Christian König <christian.koenig@amd.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20210915114153.951670-1-tvrtko.ursulin@linux.intel.com
-
- 28 July 2021, 1 commit
-
-
Committed by Daniel Vetter

With the global kmem_cache shrink infrastructure gone, there's nothing special left and we can convert them over. I'm doing this split up into one patch per cache because there's quite a bit of noise in going from the static global.slab_objects to just a slab_objects.

v2: Make slab static (Jason, 0day)

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Cc: Jason Ekstrand <jason@jlekstrand.net>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210727121037.2041102-6-daniel.vetter@ffwll.ch
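A sketch of the resulting pattern, with an illustrative stand-in for the object struct: a file-local static slab created at module init, with no global shrink hooks.

```c
#include <linux/errno.h>
#include <linux/slab.h>

struct my_gem_object {		/* stand-in for the real object struct */
	unsigned long flags;
};

static struct kmem_cache *slab_objects;

static int objects_module_init(void)
{
	slab_objects = KMEM_CACHE(my_gem_object, SLAB_HWCACHE_ALIGN);
	return slab_objects ? 0 : -ENOMEM;
}

static void objects_module_exit(void)
{
	kmem_cache_destroy(slab_objects);
}

/* Allocation then simply uses the static cache: */
static struct my_gem_object *object_alloc(void)
{
	return kmem_cache_zalloc(slab_objects, GFP_KERNEL);
}
```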
-
- 26 July 2021, 1 commit
-
-
Committed by Jason Ekstrand

Instead of hand-rolling the same three calls in each function, pull them into an i915_gem_object_create_user helper. Apart from the re-ordering of the placements-array ENOMEM check, there should be no functional change.

v2 (Matthew Auld):
- Add the call to i915_gem_flush_free_objects() from i915_gem_dumb_create() in a separate patch
- Move i915_gem_object_alloc() below the simple error checks
v3 (Matthew Auld):
- Add __ to i915_gem_object_create_user and kerneldoc which warns the caller that it's not validating anything.

Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210723172142.3273510-5-jason@jlekstrand.net
-
- 17 July 2021, 1 commit
-
-
Committed by Jason Ekstrand

This reverts the rest of 0edbb9ba ("drm/i915: Move cmd parser pinning to execbuffer"). Now that the only user of i915_gem_object_get_sg without allow_alloc has been removed, we can drop the parameter. This portion of the revert was broken into its own patch to aid review.

Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Jon Bloomfield <jon.bloomfield@intel.com>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20210714193419.1459723-4-jason@jlekstrand.net
-
- 09 July 2021, 1 commit
-
-
Committed by Matthew Auld

For discrete, users of pin_map() need to obey the same rules as the TTM backend, where we map system-only objects as WB, and everything else as WC. The simplest approach for now is to just force the correct mapping type, as per the new rules for discrete.

Suggested-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Ramalingam C <ramalingam.c@intel.com>
Reviewed-by: Ramalingam C <ramalingam.c@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210705135310.1502437-1-matthew.auld@intel.com
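The rule being enforced, as a standalone sketch (the types here are stand-ins, not the driver's): only objects that can never leave system memory keep a cached mapping.

```c
#include <stdbool.h>

enum map_type { MAP_WB, MAP_WC };

struct obj_placements {
	bool system_only;	/* object can never be placed in LMEM */
};

/* On discrete, anything that can live in (or migrate to) local memory
 * must be mapped write-combined; cached (WB) CPU mappings are only
 * coherent for system-memory-only objects. */
static enum map_type forced_map_type(const struct obj_placements *p)
{
	return p->system_only ? MAP_WB : MAP_WC;
}
```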
-
- 30 June 2021, 2 commits
-
-
Committed by Thomas Hellström

Objects intended to be used as display framebuffers must reside in LMEM on discrete. If they happen not to, migrate them to LMEM before pinning.

Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210629151203.209465-4-thomas.hellstrom@linux.intel.com
-
Committed by Thomas Hellström

Introduce an interface to migrate objects between regions. This is primarily intended to migrate objects to LMEM for display and to SYSTEM for dma-buf, but it might be reused in one form or another for performance-based migration.

v2:
- Verify that the memory region given as an id really exists. (Reported by Matthew Auld)
- Call i915_gem_object_{init,release}_memory_region() when switching region, to handle also switching region lists. (Reported by Matthew Auld)
v3:
- Fix i915_gem_object_can_migrate() to return true if the object is already in the correct region, even if the object ops don't have a migrate() callback.
- Fix a typo in the commit message.
- Fix the kerneldoc of i915_gem_object_wait_migration().
v4:
- Improve documentation (Suggested by Matthew Auld and Michael Ruhl)
- Always assume TTM migration hits a TTM move and unsets the pages through move_notify. (Reported by Matthew Auld)
- Add a dma_fence_might_wait() annotation to i915_gem_object_wait_migration() (Suggested by Daniel Vetter)
v5:
- Re-add might_sleep() instead of __dma_fence_might_wait(); v4 was sent with the wrong version, didn't compile, and __dma_fence_might_wait() is not exported.
- Added an R-B.

Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Michael J. Ruhl <michael.j.ruhl@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210629151203.209465-2-thomas.hellstrom@linux.intel.com
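A sketch of a caller using this interface under a ww transaction, assuming it is compiled inside the i915 tree; signatures are approximated from the description and the driver of this era, and the -EDEADLK backoff is assumed to be handled by the caller owning the ww context.

```c
/* Sketch: migrate an object to LMEM and wait for the (possibly async)
 * move before relying on the new placement. Locks taken through the ww
 * context are released by the caller's backoff/fini. */
static int pin_obj_to_lmem(struct drm_i915_gem_object *obj,
			   struct i915_gem_ww_ctx *ww)
{
	int err;

	err = i915_gem_object_lock(obj, ww);
	if (err)
		return err;

	if (!i915_gem_object_can_migrate(obj, INTEL_REGION_LMEM))
		return -EINVAL;

	err = i915_gem_object_migrate(obj, ww, INTEL_REGION_LMEM);
	if (err)
		return err;

	return i915_gem_object_wait_migration(obj, 0);
}
```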
-
- 25 June 2021, 1 commit
-
-
Committed by Thomas Hellström

The object ops flag I915_GEM_OBJECT_HAS_IOMEM and the object flag I915_BO_ALLOC_STRUCT_PAGE are considered immutable by much of our code. Introduce a new mem_flags member to hold these, and make sure checks for these flags being set are either done under the object lock or with pages properly pinned. The flags will change during migration under the object lock.

Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210624084240.270219-2-thomas.hellstrom@linux.intel.com
-
- 17 June 2021, 2 commits
-
-
Committed by Thomas Hellström

As we're about to add more ww-related functionality, break out the dma_resv ww locking utilities into their own files.

Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210617063018.92802-3-thomas.hellstrom@linux.intel.com
-
Committed by Thomas Hellström

Since the ww transaction endpoint easily ends up far out of scope of the objects on the ww object list, particularly for contending lock objects, make sure we reference objects on the list so they don't disappear under us.

This comes with a performance penalty, so it's been debated whether this is really needed. But I think this is motivated by the fact that locking is typically difficult to get right, and whatever we can do to make it simpler for developers moving forward should be done, unless the performance impact is far too high.

Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210617063018.92802-2-thomas.hellstrom@linux.intel.com
-
- 14 June 2021, 1 commit
-
-
Committed by Thomas Hellström

Use an rwlock instead of a spinlock for the global notifier lock, to reduce the risk of contention in execbuf. Protect object state with the object lock whenever possible rather than with the global notifier lock.

Don't take an explicit page_ref in userptr_submit_init(), but rather call get_pages() after obtaining the page list so that get_pages() holds the page_ref. This means we don't need to call userptr_submit_fini(), which helps us avoid awkward locking in our upcoming VM_BIND code.

Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210610143525.624677-1-thomas.hellstrom@linux.intel.com
-
- 11 June 2021, 3 commits
-
-
Committed by Maarten Lankhorst

Use the ttm handlers for servicing page faults and vm_access. We do our own validation of read-only access; otherwise we use the ttm handlers as much as possible.

Because the ttm handlers expect the vma_node at vma->base, we need to slightly massage the mmap handlers to look at vma_node->driver_private to fetch the bo; if it's NULL, we assume i915's normal mmap_offset uapi is used. This is the easiest way to achieve compatibility without changing ttm's semantics.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210610070152.572423-5-thomas.hellstrom@linux.intel.com
-
Committed by Thomas Hellström

Since objects can be migrated or evicted when not pinned or locked, update the checks for lmem residency, or future residency, so that the value returned is not immediately stale.

Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210610070152.572423-3-thomas.hellstrom@linux.intel.com
-
Committed by Thomas Hellström

The most logical place to introduce TTM buffer objects is as an i915 gem object backend. We need to add some ops to account for added functionality like delayed delete and LRU list manipulation.

Initially we support only LMEM and SYSTEM memory, but SYSTEM (which in this case means evicted LMEM objects) is not visible to i915 GEM yet. The plan is to move the i915 gem system region over to the TTM system memory type in upcoming patches.

We set up GPU bindings directly both from LMEM and from the system region, as there is no need to use the legacy TTM_TT memory type. We reserve that for future porting of GGTT bindings to TTM.

Remove the old lmem backend.

Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210610070152.572423-2-thomas.hellstrom@linux.intel.com
-
- 06 June 2021, 1 commit
-
-
Committed by Christian König

That describes much better what the function is doing here.

Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20210602111714.212426-6-christian.koenig@amd.com
-
- 25 March 2021, 11 commits
-
-
Committed by Matthew Auld

Throw it into a simple helper, and throw a warning if we encounter an object which has been initialised with an object size that exceeds our limit of INT_MAX pages.

Suggested-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20210122181514.541436-2-matthew.auld@intel.com
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
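A sketch of the overflow guard being described; the helper name is illustrative.

```c
#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/limits.h>
#include <linux/mm.h>	/* PAGE_SHIFT */

/* Page counts are routinely carried in ints, so refuse (and warn on)
 * any object whose size would exceed INT_MAX pages. */
static int object_check_size(u64 size)
{
	if (WARN_ON(size >> PAGE_SHIFT > INT_MAX))
		return -E2BIG;
	return 0;
}
```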
-
Committed by Maarten Lankhorst

With all callers and selftests fixed to use ww locking, we can now finally remove this lock.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20210323155059.628690-62-maarten.lankhorst@linux.intel.com
-
Committed by Maarten Lankhorst

In the ucode functions, the calls are done before userspace runs, when debugging using debugfs, or when creating semi-permanent mappings; we can safely use the unlocked versions that do the ww dance for us.

Because there is no pin_pages_unlocked yet, add it as a convenience function. This removes possible lockdep splats about the missing resv lock for ucode.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20210323155059.628690-37-maarten.lankhorst@linux.intel.com
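The convenience wrapper amounts to doing the ww dance internally; a sketch matching the i915 pattern of the time (names per the driver's ww helpers, compiled in-tree):

```c
/* Lock, pin, and back off on -EDEADLK until the transaction succeeds. */
int i915_gem_object_pin_pages_unlocked(struct drm_i915_gem_object *obj)
{
	struct i915_gem_ww_ctx ww;
	int err;

	i915_gem_ww_ctx_init(&ww, true);	/* interruptible */
retry:
	err = i915_gem_object_lock(obj, &ww);
	if (!err)
		err = i915_gem_object_pin_pages(obj);

	if (err == -EDEADLK) {
		err = i915_gem_ww_ctx_backoff(&ww);
		if (!err)
			goto retry;
	}
	i915_gem_ww_ctx_fini(&ww);
	return err;
}
```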
-
Committed by Thomas Hellström

Stolen objects need to take the object lock, and we may call put_pages when the refcount drops to 0; ensure all calls are handled correctly.

Changes since v1:
- Rebase on top of upstream changes.

Idea-from: Thomas Hellström <thomas.hellstrom@intel.com>
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20210323155059.628690-33-maarten.lankhorst@linux.intel.com
-
Committed by Maarten Lankhorst

pin_map needs the ww lock, so ensure we pin both before submission.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
[danvet: Again pick older version just to side-step conflicts.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20210128162612.927917-32-maarten.lankhorst@linux.intel.com
Link: https://patchwork.freedesktop.org/patch/msgid/20210323155059.628690-32-maarten.lankhorst@linux.intel.com
-
Committed by Maarten Lankhorst

We are removing obj->mm.lock, and need to take the reservation lock before we can pin pages. Move the pinning of pages into the helper, and merge the gtt pwrite/pread preparation and cleanup paths.

The fence lock is also removed; it would conflict with fence annotations because of memory allocations done when pagefaulting inside copy_*_user.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
[danvet: Pick the older version to avoid the conflicts]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20210128162612.927917-31-maarten.lankhorst@linux.intel.com
Link: https://patchwork.freedesktop.org/patch/msgid/20210323155059.628690-31-maarten.lankhorst@linux.intel.com
-
Committed by Maarten Lankhorst

Instead of multiple lockings, lock the object once, and perform the ww dance around attach_phys and pin_pages.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20210323155059.628690-23-maarten.lankhorst@linux.intel.com
-
Committed by Maarten Lankhorst

With userptr fixed, there is no need for all the separate lockdep classes now, and we can remove all the lockdep tricks used. A trylock in the shrinker is all we need now to flatten the locking hierarchy.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
[danvet: Resolve conflict because we don't have the patch from Chris to rebrand i915_gem_shrinker_taints_mutex to fs_reclaim_taints_mutex. It's not a bad idea, but if we do it, it should be moved to the right header. See https://lore.kernel.org/intel-gfx/20210202154318.19246-1-chris@chris-wilson.co.uk/]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20210323155059.628690-18-maarten.lankhorst@linux.intel.com
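The flattening boils down to the shrinker never blocking on an object lock; a sketch of the idea, assuming the i915 tree (the helper name is illustrative):

```c
/* The shrinker only ever trylocks: an object locked by a concurrent ww
 * transaction is simply skipped, so no lock-order inversion can form. */
static bool shrinker_try_reclaim(struct drm_i915_gem_object *obj)
{
	if (!i915_gem_object_trylock(obj))
		return false;	/* busy; move on to the next object */

	/* ... unbind the object and release its pages here ... */

	i915_gem_object_unlock(obj);
	return true;
}
```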
-
Committed by Maarten Lankhorst

Instead of doing what we do currently, which will never work with PROVE_LOCKING, do the same as AMD does, and something similar to the relocation slowpath. When all locks are dropped, we acquire the pages for pinning. When the locks are taken, we transfer those pages in .get_pages() to the bo. As a final check before installing the fences, we ensure that the mmu notifier was not called; if it was, we return -EAGAIN to userspace to signal it has to start over.

Changes since v1:
- Unbinding is done in submit_init only. submit_begin() removed.
- MMU_NOTFIER -> MMU_NOTIFIER
Changes since v2:
- Make i915->mm.notifier a spinlock.
Changes since v3:
- Add WARN_ON if there are any page references left, should have been 0.
- Return 0 on success in submit_init(), bug from the spinlock conversion.
- Release pvec outside of notifier_lock (Thomas).
Changes since v4:
- Mention why we're clearing eb->[i + 1].vma in the code. (Thomas)
- Actually check all invalidations in eb_move_to_gpu. (Thomas)
- Do not wait when the process is exiting, to fix gem_ctx_persistence.userptr.
Changes since v5:
- Clarify why the check on PF_EXITING is (temporarily) required.
Changes since v6:
- Ensure userptr validity is checked in set_domain through a special path.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Acked-by: Dave Airlie <airlied@redhat.com>
[danvet: s/kfree/kvfree/ in i915_gem_object_userptr_drop_ref in the previous review round, but which got lost. The other open questions around page refcount are imo better discussed in a separate series, with amdgpu folks involved.]
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20210323155059.628690-17-maarten.lankhorst@linux.intel.com
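A sketch of the AMD-style validation sequence described above, using the core mmu_interval_notifier API; obj_pages_setup() and install_fences() are hypothetical stand-ins for the unlocked and locked phases.

```c
#include <linux/mmu_notifier.h>

static int obj_pages_setup(void) { return 0; }	/* get_user_pages etc. */
static int install_fences(void) { return 0; }	/* final, locked step */

static int userptr_submit(struct mmu_interval_notifier *notifier)
{
	unsigned long seq;
	int err;

	/* Unlocked phase: record the sequence, then acquire the pages. */
	seq = mmu_interval_read_begin(notifier);
	err = obj_pages_setup();
	if (err)
		return err;

	/* Locked phase, just before installing fences: if the range was
	 * invalidated in the meantime, tell userspace to start over. */
	if (mmu_interval_read_retry(notifier, seq))
		return -EAGAIN;

	return install_fences();
}
```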
-
Committed by Maarten Lankhorst

Now that unsynchronized mappings are removed, the only time userptr works is when the MMU notifier is enabled. Put all of the userptr code behind an mmu notifier ifdef.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20210323155059.628690-16-maarten.lankhorst@linux.intel.com
-
Committed by Maarten Lankhorst

There are a couple of ioctls related to tiling and cache placement that make no sense for userptr; reject those:
- i915_gem_set_tiling_ioctl(): Tiling should always be linear for userptr. Changing placement will fail with -ENXIO.
- i915_gem_set_caching_ioctl(): Userptr memory should always be cached. Changing the caching mode will fail with -ENXIO.
- i915_gem_set_domain_ioctl(): Still temporarily allowed to work as intended; it's used to check userptr validity. With the reworked userptr code, it will keep working for this use case.

This plus the previous changes have been tested against beignet by using its own unit tests, and intel-video-compute by using piglit's opencl tests.

Changes since v1:
- set_domain was apparently used in iris for checking userptr validity; keep it working as intended.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Acked-by: Jason Ekstrand <jason@jlekstrand.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20210323155059.628690-14-maarten.lankhorst@linux.intel.com
-
- 24 March 2021, 4 commits
-
-
Committed by Maarten Lankhorst

Simple addition of i915_gem_object_lock. We may start to pass ww to get_pages() in the future, but that won't be the case here; we override shmem's get_pages() handling by calling i915_gem_object_get_pages_phys(), so no ww is needed.

Changes since v1:
- Call shmem put pages directly; the callback would go down the phys free path.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20210323155059.628690-10-maarten.lankhorst@linux.intel.com
-
Committed by Maarten Lankhorst

Instead of creating a separate object type, we make changes to the shmem type to clear the struct page backing. This will allow us to ensure we never run into a race when we exchange obj->ops with other function pointers.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20210323155059.628690-9-maarten.lankhorst@linux.intel.com
-
Committed by Maarten Lankhorst

We want to remove the changing of the ops structure for attaching phys pages, so we need to kill off HAS_STRUCT_PAGE from ops->flags and put it in the bo. This will remove a potential race of dereferencing the wrong obj->ops without the ww mutex held.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
[danvet: apply with wiggle]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20210323155059.628690-8-maarten.lankhorst@linux.intel.com
-
Committed by Maarten Lankhorst

We need to get rid of allocations in the cmd parser, because it needs to be called from a signaling context. First, move all pinning to execbuf, where we already hold all locks. Allocate jump_whitelist in the execbuffer, and add annotations around intel_engine_cmd_parser() to ensure we only call the command parser without allocating any memory or taking any locks we're not supposed to.

Because i915_gem_object_get_page() may also allocate memory, add a path to i915_gem_object_get_sg() that prevents memory allocations, and walk the sg list manually. It should be similarly fast.

This has the added benefit of being able to catch all memory allocation errors before the point of no return, and return -ENOMEM safely to the execbuf submitter.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Acked-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20210323155059.628690-4-maarten.lankhorst@linux.intel.com
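The allocation-free lookup reduces to walking the chain directly; a simplified sketch, byte-offset based and without i915's iterator cache:

```c
#include <linux/scatterlist.h>

/* Find the scatterlist entry containing byte offset 'n'; no radix-tree
 * iterator is built, so nothing here can allocate. */
static struct scatterlist *find_sg(struct sg_table *st, unsigned long n,
				   unsigned int *offset)
{
	struct scatterlist *sg;

	for (sg = st->sgl; sg; sg = sg_next(sg)) {
		if (n < sg->length) {
			*offset = n;
			return sg;
		}
		n -= sg->length;
	}
	return NULL;
}
```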
-