- 23 December 2021, 1 commit
Committed by Maarten Lankhorst
Convert free_work into delayed_work, similar to ttm, to allow converting the blocking lock in __i915_gem_free_objects to a trylock. Unlike ttm, the object should already be idle, as it's kept alive by a reference through struct i915_vma->active, which is dropped after all vmas are idle. Because of this we can use no delay by default, or ttm's 10 ms when the lock is contended.

The trylock should only fail when the object is sharing its resv with other objects, and typically objects are not kept locked for a long time, so we can safely retry on failure.

Fixes: be7612fd ("drm/i915: Require object lock when freeing pages during destruction")
Testcase: igt/gem_exec_alignment/pi*
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211222155622.2960379-1-maarten.lankhorst@linux.intel.com
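For illustration, a minimal sketch of the trylock-plus-delayed-retry pattern described above, using the generic workqueue and dma-resv APIs; the `demo_*` names and list layout are assumptions, not the actual i915 code:

```c
#include <linux/dma-resv.h>
#include <linux/llist.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

struct demo_obj {
	struct dma_resv *resv;
	struct llist_node freed;
};

static struct llist_head free_list;
/* INIT_DELAYED_WORK(&free_work, demo_free_objects) at init time. */
static struct delayed_work free_work;

static void demo_free_objects(struct work_struct *work)
{
	struct demo_obj *obj, *next;

	llist_for_each_entry_safe(obj, next, llist_del_all(&free_list), freed) {
		/* Never block: the resv may be shared and contended. */
		if (!dma_resv_trylock(obj->resv)) {
			/* Requeue and retry after ttm's 10 ms delay. */
			llist_add(&obj->freed, &free_list);
			queue_delayed_work(system_wq, &free_work,
					   msecs_to_jiffies(10));
			continue;
		}
		/* ... release backing pages under the lock ... */
		dma_resv_unlock(obj->resv);
		kfree(obj);
	}
}
```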
-
- 21 December 2021, 1 commit
Committed by Maarten Lankhorst
TTM already requires this, and we require it for delayed destroy.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211216142749.1966107-11-maarten.lankhorst@linux.intel.com
-
- 25 November 2021, 1 commit
Committed by Maarten Lankhorst
For now, we will only allow async migration when TTM is used, so the paths we care about are related to TTM.

The mmap path is handled by having the fence in ttm_bo->moving. When pinning, the binding only becomes available after the moving fence is signaled, and pinning a cpu map will only work after the moving fence signals. This should close all holes where userspace can read a buffer before it's fully migrated.

v2:
- Fix a couple of SPARSE warnings
v3:
- Fix a NULL pointer dereference
v4:
- Ditch the moving fence waiting for i915_vma_pin_iomap() and replace with a verification that the vma is already bound. (Matthew Auld)
- Squash with a previous patch introducing moving fence waiting and accessing interfaces (Matthew Auld)
- Rename to indicate that we also add support for sync waiting.
v5:
- Fix check for NULL and unreferencing in i915_vma_verify_bind_complete() (Matthew Auld)
- Fix compilation failure if !CONFIG_DRM_I915_DEBUG_GEM
- Fix include ordering. (Matthew Auld)
v7:
- Fix yet another compilation failure with clang if !CONFIG_DRM_I915_DEBUG_GEM

Co-developed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211122214554.371864-2-thomas.hellstrom@linux.intel.com
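The rule being enforced can be pictured with a small hedged sketch: before any CPU access path proceeds, wait on the object's pending move, tracked as a dma_fence (passed in directly here; the real code reads it from the TTM object):

```c
#include <linux/dma-fence.h>

/* Returns 0 once any in-flight migration has completed. */
static int demo_wait_for_move(struct dma_fence *moving, bool interruptible)
{
	long ret;

	if (!moving)
		return 0;	/* no migration in flight */

	/* dma_fence_wait() returns 0 on signal or a negative error. */
	ret = dma_fence_wait(moving, interruptible);
	return ret < 0 ? ret : 0;
}
```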
-
- 23 November 2021, 1 commit
Committed by Randy Dunlap
Correct kernel-doc warnings in i915_gem_object.c:

i915_gem_object.c:103: warning: expecting prototype for i915_gem_object_fini(). Prototype was for __i915_gem_object_fini() instead
i915_gem_object.c:110: warning: This comment starts with '/**', but isn't a kernel-doc comment. Refer Documentation/doc-guide/kernel-doc.rst
    * Mark up the object's coherency levels for a given cache_level
i915_gem_object.c:110: warning: missing initial short description on line:
    * Mark up the object's coherency levels for a given cache_level
i915_gem_object.c:457: warning: No description found for return value of 'i915_gem_object_read_from_page'

Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Reported-by: kernel test robot <lkp@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Cc: intel-gfx@lists.freedesktop.org
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211123050928.20434-1-rdunlap@infradead.org
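For reference, the shape of a well-formed kernel-doc block that avoids the classes of warning listed above; the descriptive wording here is illustrative, not the exact comment added by the patch:

```c
/**
 * i915_gem_object_read_from_page - read data from a page of a GEM object
 * @obj: the source GEM object
 * @offset: offset to read from
 * @dst: destination buffer
 * @size: number of bytes to read
 *
 * The block names the documented symbol exactly (leading underscores
 * included), describes every parameter with an @name: line, and
 * documents the return value where there is one.
 *
 * Return: 0 on success or a negative error code on failure.
 */
```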
-
- 02 November 2021, 1 commit
Committed by Matthew Auld
Should not be needed. Even with non-coherent display, we should be using device local-memory there, and not system memory.

v2: also add a warning in i915_gem_clflush_object

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> #v1
Link: https://patchwork.freedesktop.org/patch/msgid/20211027161813.3094681-4-matthew.auld@intel.com
-
- 22 October 2021, 1 commit
Committed by Matthew Auld
The comment here is no longer accurate, since the current shrinker code requires a full ref before touching any objects. Also unset_pages() should already do the required make_unshrinkable() for us, if needed, which is also nicely balanced with set_pages().

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211018091055.1998191-4-matthew.auld@intel.com
-
- 20 October 2021, 1 commit
Committed by Matthew Auld
It looks like we will need this in some more places, so extract it as a helper.

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211018174508.2137279-3-matthew.auld@intel.com
-
- 05 October 2021, 1 commit
Committed by Daniele Ceraolo Spurio
This API allows user mode to create protected buffers and to mark contexts as making use of such objects. Only when using contexts marked in such a way is the execution guaranteed to work as expected.

Contexts can only be marked as using protected content at creation time (i.e. the parameter is immutable), and they must be both bannable and not recoverable. Given that the protected session gets invalidated on suspend, contexts created this way hold a runtime pm wakeref until they're either destroyed or invalidated.

All protected objects and contexts will be considered invalid when the PXP session is destroyed, and all new submissions using them will be rejected. All intel contexts within the invalidated gem contexts will be marked banned. Userspace can detect that an invalidation has occurred via the RESET_STATS ioctl, where we report it the same way as a ban due to a hang.

v5: squash patches, rebase on proto_ctx, update kerneldoc
v6: rebase on obj create_ext changes
v7: Use session counter to check if an object is valid, hold wakeref in context, don't add a new flag to RESET_STATS (Daniel)
v8: don't increase guilty count for contexts banned during pxp invalidation (Rodrigo)
v9: better comments, avoid wakeref put race between pxp_inval and context_close, add usage examples (Rodrigo)
v10: modify internal set/get-protected-context functions to not return -ENODEV when setting the PXP param to false, or when getting the param on pxp-unsupported hw, or when getting the param with i915 built with CONFIG_PXP off

Signed-off-by: Alan Previn <alan.previn.teres.alexis@intel.com>
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Signed-off-by: Bommu Krishnaiah <krishnaiah.bommu@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Cc: Jason Ekstrand <jason@jlekstrand.net>
Cc: Daniel Vetter <daniel.vetter@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210924191452.1539378-11-alan.previn.teres.alexis@intel.com
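A hedged userspace sketch of creating such a context through the uapi described here: chain a setparam extension enabling I915_CONTEXT_PARAM_PROTECTED_CONTENT with one clearing I915_CONTEXT_PARAM_RECOVERABLE (protected contexts must not be recoverable), then issue the create ioctl. Error handling is trimmed:

```c
#include <stdint.h>
#include <sys/ioctl.h>
#include <drm/i915_drm.h>

static struct drm_i915_gem_context_create_ext_setparam p_norecover = {
	.base = { .name = I915_CONTEXT_CREATE_EXT_SETPARAM },
	.param = { .param = I915_CONTEXT_PARAM_RECOVERABLE, .value = 0 },
};
static struct drm_i915_gem_context_create_ext_setparam p_protected = {
	.base = {
		.name = I915_CONTEXT_CREATE_EXT_SETPARAM,
		.next_extension = (uintptr_t)&p_norecover,
	},
	.param = { .param = I915_CONTEXT_PARAM_PROTECTED_CONTENT, .value = 1 },
};

int demo_create_protected_ctx(int fd, uint32_t *ctx_id)
{
	struct drm_i915_gem_context_create_ext create = {
		.flags = I915_CONTEXT_CREATE_FLAGS_USE_EXTENSIONS,
		.extensions = (uintptr_t)&p_protected,
	};
	int ret = ioctl(fd, DRM_IOCTL_I915_GEM_CONTEXT_CREATE_EXT, &create);

	if (!ret)
		*ctx_id = create.ctx_id;
	return ret;
}
```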
-
- 01 October 2021, 1 commit
Committed by Thomas Hellström
We may end up in i915_ttm_bo_destroy() in an error path before the object is fully initialized. In that case it's not correct to call __i915_gem_free_object(), because that function a) assumes the gem object refcount is 0, which it isn't, and b) frees the placements, which are owned by the caller until the init_object() region ops returns successfully. Fix this by providing a lightweight cleanup function __i915_gem_object_fini() which is also called by __i915_gem_free_object().

While doing this, also make sure we call dma_resv_fini() as part of ordinary object destruction and not from the RCU callback that frees the object. This will help track down bugs where the object is incorrectly locked from an RCU lookup.

Finally, make sure the object isn't put on the region list until it's either locked or fully initialized, in order to block list processing of partially initialized objects.

v2:
- The TTM object backend memory was freed before the gem pages were put. Separate this functionality into __i915_gem_object_pages_fini() and call it from the TTM delete_mem_notify() callback.
v3:
- Include i915_gem_object_free_mmaps() in __i915_gem_object_pages_fini() to make sure we don't inadvertently introduce a race.

Fixes: 48b09612 ("drm/i915: Move __i915_gem_free_object to ttm_bo_destroy")
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com> #v1
Link: https://patchwork.freedesktop.org/patch/msgid/20210930113236.583531-1-thomas.hellstrom@linux.intel.com
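A sketch of the split under assumed `demo_*` names; the point is the shared lightweight tail that is safe on a partially initialized object:

```c
#include <linux/dma-resv.h>
#include <drm/drm_gem.h>

struct demo_obj {
	struct drm_gem_object base;
};

/* Safe from error paths before the object is fully set up. */
static void demo_object_fini(struct demo_obj *obj)
{
	/*
	 * Finalize the reservation lock during ordinary destruction,
	 * not from the RCU free callback: an RCU lookup that wrongly
	 * locks a dying object is then caught instead of racing.
	 */
	dma_resv_fini(&obj->base._resv);
}

static void demo_free_object(struct demo_obj *obj)
{
	/* ... put pages, free placements owned by the object ... */
	demo_object_fini(obj);	/* shared tail of both teardown paths */
}
```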
-
- 28 July 2021, 1 commit
Committed by Daniel Vetter
With the global kmem_cache shrink infrastructure gone there's nothing special left, and we can convert them over. I'm splitting this into one patch per cache because there's quite a bit of noise in renaming the static global.slab_objects to just slab_objects.

v2: Make slab static (Jason, 0day)

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Cc: Jason Ekstrand <jason@jlekstrand.net>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210727121037.2041102-6-daniel.vetter@ffwll.ch
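After conversion the pattern is just a file-local slab plus plain init/exit helpers; a generic sketch with assumed names:

```c
#include <linux/init.h>
#include <linux/slab.h>

struct demo_obj {
	unsigned long flags;
};

/* Plain static slab; no global shrink machinery attached. */
static struct kmem_cache *slab_objects;

static int __init demo_objects_init(void)
{
	slab_objects = kmem_cache_create("demo_objects",
					 sizeof(struct demo_obj), 0,
					 SLAB_HWCACHE_ALIGN, NULL);
	return slab_objects ? 0 : -ENOMEM;
}

static void demo_objects_exit(void)
{
	kmem_cache_destroy(slab_objects);
}
```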
-
- 26 July 2021, 2 commits
Committed by Jason Ekstrand
Without TTM, we have no such hook, so we exit early, but this is fine because we use TTM on all LMEM platforms and, on integrated platforms, there is no real migration. If we do have the hook, it's better to just let TTM handle the migration because it knows where things are actually placed.

This fixes a bug where i915_gem_object_migrate fails to migrate newly created LMEM objects. In that scenario, the object has obj->mm.region set to LMEM, but TTM has it in SMEM because that's where all new objects are placed prior to getting actual pages. When we invoke i915_gem_object_migrate, it exits early because, from the point of view of the GEM object, it's already in LMEM and no migration is needed. Then, when we try to pin the pages, __i915_ttm_get_pages is called which, unaware of our failed attempt at a migration, places the object in SMEM.

This only happens on newly created objects because they have this weird state where TTM thinks they're in SMEM, GEM thinks they're in LMEM, and the reality is that they don't exist at all. It's better if GEM just always calls into TTM and lets TTM handle things. That way the lies stay better contained. Once the migration is complete, the object will have pages, obj->mm.region will be correct, and we're done lying.

Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210723172142.3273510-7-jason@jlekstrand.net
-
Committed by Jason Ekstrand
We don't roll them together entirely because there are still a couple cases where we want a separate can_migrate check. For instance, the display code checks that you can migrate a buffer to LMEM before it accepts it in fb_create. The dma-buf import code also uses it to do an early check and return a different error code if someone tries to attach a LMEM-only dma-buf to another driver.

However, no one actually wants to call object_migrate when can_migrate has failed. The stated intention is for self-tests, but none of those actually take advantage of this unsafe migration.

Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
Cc: Daniel Vetter <daniel@ffwll.ch>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210723172142.3273510-2-jason@jlekstrand.net
-
- 22 July 2021, 1 commit
Committed by Daniel Vetter
This essentially reverts

  commit 84a10749
  Author: Chris Wilson <chris@chris-wilson.co.uk>
  Date:   Wed Jan 24 11:36:08 2018 +0000

      drm/i915: Shrink the GEM kmem_caches upon idling

mm/vmscan.c:do_shrink_slab() is a thing; if there's an issue with it then we need to fix that there, not hand-roll our own slab shrinking code in i915.

Also, when this was added there was only one other caller of kmem_cache_shrink (added in 2005 to the acpi code). Now there's a second one outside of i915 code, in a kunit test, which seems legit since that wants to very carefully control what's in the kmem_cache. This is out of a total of over 500 calls to kmem_cache_create. This alone should have been warning sign enough that we were doing something silly.

Noticed while reviewing a patch set from Jason to fix up some issues in our i915_init() and i915_exit() module load/cleanup code. Now that i915_globals.c isn't any different from normal init/exit functions, we should convert them over to one unified table and remove i915_globals.[hc] entirely.

v2: Improve commit message (Jason)

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Cc: David Airlie <airlied@linux.ie>
Cc: Jason Ekstrand <jason@jlekstrand.net>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210721183229.4136488-1-daniel.vetter@ffwll.ch
-
- 09 July 2021, 1 commit
Committed by Matthew Auld
For discrete, users of pin_map() need to obey the same rules as the TTM backend, where we map system-memory-only objects as WB, and everything else as WC. The simplest fix for now is to just force the correct mapping type as per the new rules for discrete.

Suggested-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Ramalingam C <ramalingam.c@intel.com>
Reviewed-by: Ramalingam C <ramalingam.c@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210705135310.1502437-1-matthew.auld@intel.com
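The rule itself is small enough to state in code; a sketch under the assumption that the only input is whether the object lives in system memory (the enum mirrors i915's WB/WC map types but is illustrative):

```c
enum demo_map_type { DEMO_MAP_WB, DEMO_MAP_WC };

/*
 * Discrete parts: system-memory objects are mapped cached (WB);
 * everything else (lmem, stolen) must be write-combined (WC).
 * The caller's requested type is overridden to match.
 */
static enum demo_map_type demo_pin_map_type(bool in_system_memory)
{
	return in_system_memory ? DEMO_MAP_WB : DEMO_MAP_WC;
}
```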
-
- 30 June 2021, 2 commits
Committed by Matthew Auld
A selftest for the gem object migrate functionality. Slightly adapted from the original by Matthew to the new interface and the new fill blit code.

v4:
- Initialize buffers and check contents after migration (Suggested by Matthew Auld)
- Perform async migration (if implemented) in the igt_lmem_pages_migrate test
- Test also migration to the current region.

Co-developed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Michael J. Ruhl <michael.j.ruhl@intel.com> #v3
Link: https://patchwork.freedesktop.org/patch/msgid/20210629151203.209465-3-thomas.hellstrom@linux.intel.com
-
Committed by Thomas Hellström
Introduce an interface to migrate objects between regions. This is primarily intended to migrate objects to LMEM for display and to SYSTEM for dma-buf, but might be reused in one form or another for performance-based migration.

v2:
- Verify that the memory region given as an id really exists. (Reported by Matthew Auld)
- Call i915_gem_object_{init,release}_memory_region() when switching region, to handle also switching region lists. (Reported by Matthew Auld)
v3:
- Fix i915_gem_object_can_migrate() to return true if the object is already in the correct region, even if the object ops doesn't have a migrate() callback.
- Update typo in commit message.
- Fix kerneldoc of i915_gem_object_wait_migration().
v4:
- Improve documentation (Suggested by Matthew Auld and Michael Ruhl)
- Always assume TTM migration hits a TTM move and unsets the pages through move_notify. (Reported by Matthew Auld)
- Add a dma_fence_might_wait() annotation to i915_gem_object_wait_migration() (Suggested by Daniel Vetter)
v5:
- Re-add might_sleep() instead of __dma_fence_might_wait(). Sent v4 with the wrong version; it didn't compile and __dma_fence_might_wait() is not exported.
- Added an R-B.

Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Michael J. Ruhl <michael.j.ruhl@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210629151203.209465-2-thomas.hellstrom@linux.intel.com
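A hedged sketch of driving the interface, based on the function names this series introduces (i915_gem_object_migrate and i915_gem_object_wait_migration); the surrounding ww locking is assumed to be handled by the caller:

```c
/* Move an object to device-local memory, e.g. for display. */
static int demo_move_to_lmem(struct drm_i915_gem_object *obj,
			     struct i915_gem_ww_ctx *ww)
{
	int err;

	err = i915_gem_object_migrate(obj, ww, INTEL_REGION_LMEM);
	if (err)
		return err;

	/* The move may complete asynchronously; wait before relying on it. */
	return i915_gem_object_wait_migration(obj, 0);
}
```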
-
- 25 June 2021, 1 commit
Committed by Thomas Hellström
The object ops I915_GEM_OBJECT_HAS_IOMEM and the object I915_BO_ALLOC_STRUCT_PAGE flags are considered immutable by much of our code. Introduce a new mem_flags member to hold these, and make sure checks for these flags being set are either done under the object lock or with pages properly pinned. The flags will change during migration under the object lock.

Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210624084240.270219-2-thomas.hellstrom@linux.intel.com
-
- 11 June 2021, 2 commits
Committed by Thomas Hellström
Since objects can be migrated or evicted when not pinned or locked, update the checks for lmem residency or future residency so that the value returned is not immediately stale.

Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210610070152.572423-3-thomas.hellstrom@linux.intel.com
-
Committed by Thomas Hellström
The most logical place to introduce TTM buffer objects is as an i915 gem object backend. We need to add some ops to account for added functionality like delayed delete and LRU list manipulation.

Initially we support only LMEM and SYSTEM memory, but SYSTEM (which in this case means evicted LMEM objects) is not visible to i915 GEM yet. The plan is to move the i915 gem system region over to the TTM system memory type in upcoming patches.

We set up GPU bindings directly both from LMEM and from the system region, as there is no need to use the legacy TTM_TT memory type. We reserve that for future porting of GGTT bindings to TTM.

Remove the old lmem backend.

Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210610070152.572423-2-thomas.hellstrom@linux.intel.com
-
- 02 June 2021, 1 commit
Committed by Thomas Hellström
Embed a struct ttm_buffer_object into the i915 gem object, making sure we alias the gem object part. It's a bit unfortunate that struct ttm_buffer_object embeds a gem object, since we otherwise could make the TTM part private to the TTM backend and use the usual i915 gem object for the other backends. To make this a bit more storage-efficient for the other backends, we'd have to use a pointer for the gem object, which would require a lot of changes in the driver. We postpone that for later.

Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Acked-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210602083818.241793-3-thomas.hellstrom@linux.intel.com
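The aliasing trick can be sketched as follows: ttm_buffer_object places its embedded drm_gem_object ('base') first, so a union lets the same storage be viewed as either type. The struct is illustrative, not the real i915 layout:

```c
#include <drm/drm_gem.h>
#include <drm/ttm/ttm_bo_api.h>
#include <linux/build_bug.h>
#include <linux/stddef.h>

struct demo_gem_object {
	union {
		struct drm_gem_object base;	/* generic GEM view */
		struct ttm_buffer_object ttm;	/* TTM backend view */
	};
	/* ... driver-private members follow ... */
};

static_assert(offsetof(struct ttm_buffer_object, base) == 0,
	      "alias requires the gem object at offset 0");
```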
-
- 01 June 2021, 1 commit
Committed by Thomas Hellström
We are currently sharing the VM reservation locks across a number of gem objects with page-table memory. Since TTM will individualize the reservation locks when freeing objects, including accessing the shared locks, make sure that the shared locks are not freed until that is done.

For PPGTT we add an additional refcount; for GGTT we take additional measures to make sure objects sharing the GGTT reservation lock are freed at GGTT takedown.

Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210601074654.3103-3-thomas.hellstrom@linux.intel.com
-
- 04 May 2021, 1 commit
Committed by Matthew Auld
Add a new extension to support setting an immutable priority list of potential placements at creation time. If we use the normal gem_create or gem_create_ext without the extensions/placements, then we still get the old behaviour, with the object placed only in system memory.

v2 (Daniel & Jason):
- Add a bunch of kernel-doc
- Simplify design for placements extension

Testcase: igt/gem_create/create-ext-placement-sanity-check
Testcase: igt/gem_create/create-ext-placement-each
Testcase: igt/gem_create/create-ext-placement-all
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: CQ Tang <cq.tang@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Lionel Landwerlin <lionel.g.landwerlin@linux.intel.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Daniel Vetter <daniel.vetter@intel.com>
Cc: Kenneth Graunke <kenneth@whitecape.org>
Cc: Jason Ekstrand <jason@jlekstrand.net>
Cc: Dave Airlie <airlied@gmail.com>
Cc: dri-devel@lists.freedesktop.org
Cc: mesa-dev@lists.freedesktop.org
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Link: https://patchwork.freedesktop.org/patch/msgid/20210429103056.407067-6-matthew.auld@intel.com
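A hedged userspace sketch of the extension as described, using the uapi names this series adds; the list is ordered by preference, device-local first, and error handling is omitted:

```c
#include <stdint.h>
#include <sys/ioctl.h>
#include <drm/i915_drm.h>

int demo_create_lmem_then_smem(int fd, uint64_t size, uint32_t *handle)
{
	struct drm_i915_gem_memory_class_instance regions[] = {
		{ .memory_class = I915_MEMORY_CLASS_DEVICE, .memory_instance = 0 },
		{ .memory_class = I915_MEMORY_CLASS_SYSTEM, .memory_instance = 0 },
	};
	struct drm_i915_gem_create_ext_memory_regions ext = {
		.base = { .name = I915_GEM_CREATE_EXT_MEMORY_REGIONS },
		.num_regions = 2,
		.regions = (uintptr_t)regions,
	};
	struct drm_i915_gem_create_ext create = {
		.size = size,
		.extensions = (uintptr_t)&ext,
	};
	int ret = ioctl(fd, DRM_IOCTL_I915_GEM_CREATE_EXT, &create);

	if (!ret)
		*handle = create.handle;
	return ret;
}
```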
-
- 25 March 2021, 2 commits
Committed by Maarten Lankhorst
With all callers and selftests fixed to use ww locking, we can now finally remove this lock.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20210323155059.628690-62-maarten.lankhorst@linux.intel.com
-
Committed by Maarten Lankhorst
With userptr fixed, there is no need for all separate lockdep classes now, and we can remove all lockdep tricks used. A trylock in the shrinker is all we need now to flatten the locking hierarchy.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
[danvet: Resolve conflict because we don't have the patch from Chris to rebrand i915_gem_shrinker_taints_mutex to fs_reclaim_taints_mutex. It's not a bad idea, but if we do it, it should be moved to the right header. See https://lore.kernel.org/intel-gfx/20210202154318.19246-1-chris@chris-wilson.co.uk/]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20210323155059.628690-18-maarten.lankhorst@linux.intel.com
-
- 24 March 2021, 1 commit
Committed by Maarten Lankhorst
We want to remove the changing of the ops structure for attaching phys pages, so we need to kill off HAS_STRUCT_PAGE from ops->flags and put it in the bo. This will remove a potential race of dereferencing the wrong obj->ops without the ww mutex held.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
[danvet: apply with wiggle]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20210323155059.628690-8-maarten.lankhorst@linux.intel.com
-
- 22 January 2021, 1 commit
Committed by Imre Deak
Add a simple helper to read data with the CPU from the page of a GEM object. Do the read either via a kmap if the object has struct pages, or via an iomap otherwise. This is needed by the next patch, reading a u64 value from the object (without requiring the object to be mapped to the GPU).

Suggested by Chris.

v2 (Chris):
- Sanitize the type and order of func params.
- Avoid consts requiring too many casts.
- Use BUG_ON instead of WARN_ON, simplify the conditions.
- Fix __iomem sparse errors.
- Leave locking/syncing/pinning up to the caller, require only that the caller has pinned the object pages.
- Check for iomem backing store before reading via an iomap.
v3:
- Fix offset passed to io_mapping_map_wc() missing a mem.region.start delta. (Chris, Matthew)

Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.william.auld@gmail.com>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20210120213834.1435710-1-imre.deak@intel.com
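The two paths reduce to the following hedged sketch, with stand-in parameters instead of the real object internals:

```c
#include <linux/highmem.h>
#include <linux/io.h>
#include <linux/string.h>

static void demo_read_from_page(struct page *page, void __iomem *iomem,
				bool has_struct_page,
				unsigned int offset, void *dst, int size)
{
	if (has_struct_page) {
		/* Backing store has struct pages: read via a kmap. */
		void *src = kmap_atomic(page);

		memcpy(dst, src + offset, size);
		kunmap_atomic(src);
	} else {
		/* iomem backing store (e.g. lmem): read via the iomap. */
		memcpy_fromio(dst, iomem + offset, size);
	}
}
```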
-
- 20 January 2021, 1 commit
Committed by Chris Wilson
flush_write_domain() is only used within the GEM domain management code, so move it to i915_gem_domain.c and drop the export.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210119144912.12653-5-chris@chris-wilson.co.uk
-
- 06 October 2020, 1 commit
Committed by Tvrtko Ursulin
As the previous patch fixed the places where we walk the whole scatterlist for DMA addresses, this patch fixes the random lookup functionality.

To achieve this we have to add a second lookup iterator and an i915_gem_object_get_sg_dma helper, to be used analogously to the existing i915_gem_object_get_sg. Therefore two lookup caches are maintained per object, and they are flushed at the same point for simplicity. (Strictly speaking the DMA cache should be flushed from i915_gem_gtt_finish_pages, but today this coincides with unsetting of the pages in general.)

Partial VMA view is then fixed to use the new DMA lookup and properly query the sg length.

v2:
* Checkpatch.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Lu Baolu <baolu.lu@linux.intel.com>
Cc: Tom Murphy <murphyt7@tcd.ie>
Cc: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20201006092508.1064287-2-tvrtko.ursulin@linux.intel.com
-
- 25 September 2020, 1 commit
Committed by Thomas Zimmermann
GEM object functions deprecate several similar callback interfaces in struct drm_driver. This patch replaces the per-driver callbacks with per-instance callbacks in i915.

v2:
* move object-function instance to i915_gem_object.c (Jani)

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Acked-by: Christian König <christian.koenig@amd.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200923102159.24084-7-tzimmermann@suse.de
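In general form the pattern looks like the sketch below (demo names assumed): each object gets its own funcs table at init instead of the driver filling the deprecated drm_driver callbacks:

```c
#include <drm/drm_gem.h>

static void demo_gem_free(struct drm_gem_object *obj)
{
	/* ... backend-specific teardown ... */
}

/* Per-instance vtable replacing the deprecated drm_driver callbacks. */
static const struct drm_gem_object_funcs demo_gem_object_funcs = {
	.free = demo_gem_free,
};

static void demo_gem_object_init(struct drm_gem_object *obj)
{
	obj->funcs = &demo_gem_object_funcs;
}
```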
-
- 03 July 2020, 2 commits
Committed by Chris Wilson
Rather than reuse the common ctx->mutex for locking the execbuffer LUT, split it into its own lock to avoid being taken [as part of ctx->mutex] at inappropriate times. In particular, this avoids the inversion from taking the timeline->mutex for the whole execbuf submission in the next patch.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Andi Shyti <andi.shyti@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200703004306.11117-1-chris@chris-wilson.co.uk
-
Committed by Chris Wilson
Avoid waking up the device and taking stale locks if we know that the object is not currently mmapped. This is particularly useful as not many objects are actually mmapped, so we can destroy them without waking the device up, and it gives us a little more freedom of workqueue ordering during shutdown.

v2: Pull the release_mmap() into its single user in freeing the objects, where there can not be any race with a concurrent user of the freed object. Or so one hopes!

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200702163623.6402-2-chris@chris-wilson.co.uk
-
- 01 July 2020, 1 commit
Committed by Chris Wilson
The obj->lut_list is traversed when the object is closed, as the file table is destroyed during process termination. As this occurs before we kill any outstanding context, if, due to some bug or another, the closure is blocked, then we fail to shoot down any inflight operations, potentially leaving the GPU spinning forever. As we only need to guard the list against concurrent closures and insertions, the hold is short and merits being treated as a simple spinlock.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Michael J. Ruhl <michael.j.ruhl@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200701084439.17025-1-chris@chris-wilson.co.uk
-
- 30 May 2020, 2 commits
Committed by Chris Wilson
Name the object classes and their offspring for easier lockdep debugging.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200529183204.16850-2-chris@chris-wilson.co.uk
-
Committed by Chris Wilson
If we declare that an object type is shrinkable (any that we can reclaim to recover system pages), make sure we taint the object mutex so that lockdep expects us to use it within fs_reclaim. lockdep will then complain the first time we try to allocate while holding the plain mutex, as doing so invites potential recursion.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200529183204.16850-1-chris@chris-wilson.co.uk
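The taint is generic lockdep priming; a sketch of the usual form of this trick (i915's helper is i915_gem_shrinker_taints_mutex, and the body here is an assumption about its shape, not a copy of it):

```c
#include <linux/gfp.h>
#include <linux/mutex.h>
#include <linux/sched/mm.h>

/*
 * Teach lockdep, once at init, that this mutex may be taken inside
 * fs_reclaim. Any later allocation made while really holding the
 * mutex then triggers a lockdep complaint about potential recursion.
 */
static void demo_shrinker_taints_mutex(struct mutex *mutex)
{
	if (!IS_ENABLED(CONFIG_LOCKDEP))
		return;

	fs_reclaim_acquire(GFP_KERNEL);
	mutex_lock(mutex);
	mutex_unlock(mutex);
	fs_reclaim_release(GFP_KERNEL);
}
```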
-
- 04 May 2020, 1 commit
Committed by Chris Wilson
We only need the device wakeref when freeing the objects if we have to unbind the object from the global GTT, or otherwise update device information. If the objects are clean, we never need the wakeref, so avoid taking it until required.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Janusz Krzysztofik <janusz.krzysztofik@linux.intel.com>
Reviewed-by: Janusz Krzysztofik <janusz.krzysztofik@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200503171513.18704-1-chris@chris-wilson.co.uk
-
- 24 April 2020, 1 commit
Committed by Chris Wilson
The history of i915_vma_close() is confusing, as is its use. As the lifetime of the i915_vma is currently bounded by the object it is attached to, we needed a means of identifying when a vma was no longer in use by userspace (via the user's fd). This is further complicated by the fact that only ppgtt vmas should be closed at the user's behest, as the ggtt vmas were always shared.

Now that we attach the vma to a lut on the user's context, the open count does indicate how many unique and open context/vm are referencing this vma from the user. As such, we can and should just use the open_count to track when the vma is still in use by userspace. It's a poor man's replacement for reference counting.

Closes: https://gitlab.freedesktop.org/drm/intel/issues/1193
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Andi Shyti <andi.shyti@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200422190558.30509-1-chris@chris-wilson.co.uk
-
- 02 April 2020, 1 commit
Committed by Chris Wilson
We cached the number of vmas bound to the object in order to speed up shrinker decisions. This has been superseded by being more proactive in removing objects we cannot shrink from the shrinker lists, and so we can drop the clumsy attempt at atomically counting the bind count and comparing it to the number of pinned mappings of the object. This will only get clumsier with asynchronous binding and unbinding.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200401223924.16667-1-chris@chris-wilson.co.uk
-
- 02 March 2020, 1 commit
Committed by Chris Wilson
Call cond_resched() between each freed object in case we have a really, really long list, and we don't want to block normal processes.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200221100953.2587176-1-chris@chris-wilson.co.uk
(cherry picked from commit deeee411)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
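The shape of the fix, sketched over an assumed lock-free freed list; only the cond_resched() call is the substance of the change:

```c
#include <linux/llist.h>
#include <linux/sched.h>
#include <linux/slab.h>

struct demo_obj {
	struct llist_node free_link;
};

static void demo_free_object_list(struct llist_node *freed)
{
	struct demo_obj *obj, *next;

	llist_for_each_entry_safe(obj, next, freed, free_link) {
		/* ... release the object's resources ... */
		kfree(obj);
		cond_resched();	/* yield so a huge list can't hog the CPU */
	}
}
```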
-
- 22 February 2020, 1 commit
Committed by Chris Wilson
Call cond_resched() between each freed object in case we have a really, really long list, and we don't want to block normal processes.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200221100953.2587176-1-chris@chris-wilson.co.uk
-
- 11 February 2020, 1 commit
Committed by Chris Wilson
Currently we create a new mmap_offset for every call to mmap_offset_ioctl. This exposes us to an abusive client that may simply create new mmap_offsets ad infinitum, which will exhaust physical memory and the virtual address space. In addition to the exhaustion, a very long linear list of mmap_offsets causes other clients using the object to incur long list walks; these long lists can also be generated by simply having many clients generate their own mmap_offset.

However, we can simply use the drm_vma_node itself to manage the file association (allow/revoke), dropping our need to keep an mmo per file. Then, if we keep a small rbtree of per-type mmap_offsets, we can look up duplicate requests quickly.

Fixes: cc662126 ("drm/i915: Introduce DRM_I915_GEM_MMAP_OFFSET")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Abdiel Janulgue <abdiel.janulgue@linux.intel.com>
Reviewed-by: Abdiel Janulgue <abdiel.janulgue@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200120104924.4000706-3-chris@chris-wilson.co.uk
(cherry picked from commit 78655598)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
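The file-association half of the fix leans on existing drm_vma_manager facilities; a hedged sketch of the pattern, not the i915 code:

```c
#include <drm/drm_file.h>
#include <drm/drm_vma_manager.h>

/*
 * One shared offset node per mmap type; per-file access is granted
 * and revoked on the node itself instead of allocating a fresh
 * offset for every ioctl call.
 */
static int demo_mmap_attach(struct drm_vma_offset_node *node,
			    struct drm_file *file)
{
	return drm_vma_node_allow(node, file);	/* on first use by file */
}

static void demo_mmap_detach(struct drm_vma_offset_node *node,
			     struct drm_file *file)
{
	drm_vma_node_revoke(node, file);	/* on file close / revoke */
}
```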
-