- 20 Feb, 2022: 1 commit
-
-
Committed by Matthew Auld

Discrete cards optimise 64K GTT pages for local-memory, since everything should be allocated at 64K granularity. We say goodbye to sparse entries, and instead get a compact 256B page-table for 64K pages, which should be more cache friendly. 4K pages for local-memory are no longer supported by the HW.

v4: don't return uninitialized err in igt_ppgtt_compact

Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Stuart Summers <stuart.summers@intel.com>
Signed-off-by: Ramalingam C <ramalingam.c@intel.com>
Signed-off-by: Robert Beckett <bob.beckett@collabora.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20220218184752.7524-8-ramalingam.c@intel.com
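The arithmetic behind the "compact 256B page-table" figure, worked out from the numbers the commit quotes (the macro names below are purely illustrative):

```c
/*
 * A GTT page table spans 2M of address space. With 4K pages that is
 * 512 entries of 8 bytes each (a full 4K page of PTEs); with 64K pages
 * only 32 entries are needed, so the table shrinks to 256 bytes.
 */
#define PT_SPAN		(2ULL << 20)		/* 2M per page table */
#define PTE_BYTES	8			/* one GTT entry */
#define ENTRIES_4K	(PT_SPAN / SZ_4K)	/* 512 -> 4096B table */
#define ENTRIES_64K	(PT_SPAN / SZ_64K)	/*  32 ->  256B table */
```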
-
- 18 Jan, 2022: 1 commit
-
-
Committed by Maarten Lankhorst

We want to remove more members of i915_vma, which requires the locking to be held more often. Start requiring the gem object lock for i915_vma_unbind, as it's one of the callers that may unpin pages.

Some special care is needed when evicting, because the last reference to the object may be held by the VMA, so after __i915_vma_unbind, vma may be garbage, and we need to cache vma->obj before unlocking.

Changes since v1:
- Make trylock failing a WARN. (Matt)
- Remove double i915_vma_wait_for_bind() (Matt)
- Move atomic_set to right before mutex_unlock(), to make it more clear they belong together. (Matt)

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.william.auld@gmail.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20220114132320.109030-5-maarten.lankhorst@linux.intel.com
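A minimal sketch of the caller-side pattern the commit describes; the helper names follow the i915 conventions of this era, but treat the exact shape as illustrative:

```c
/*
 * i915_vma_unbind() now expects the object lock to be held; the VMA
 * may hold the last reference to its object, so cache vma->obj first -
 * after the unbind, "vma" itself must be treated as garbage.
 */
struct drm_i915_gem_object *obj = vma->obj;
int err;

err = i915_gem_object_lock(obj, NULL);
if (err)
	return err;
err = i915_vma_unbind(vma);	/* vma may be gone after this */
i915_gem_object_unlock(obj);	/* unlock via the cached pointer */
```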
-
- 11 Jan, 2022: 1 commit
-
-
Committed by Thomas Hellström

When introducing asynchronous unbinding, the vma itself may no longer be alive when the actual binding or unbinding takes place.

Update the gtt i915_vma_ops accordingly to take a struct i915_vma_resource instead of a struct i915_vma for the bind_vma() and unbind_vma() ops. Similarly change the insert_entries() op for struct i915_address_space.

Replace a couple of i915_vma_snapshot members with their newly introduced i915_vma_resource counterparts, since they have the same lifetime.

Also make sure to avoid changing the struct i915_vma_flags (in particular the bind flags) async. That should now only be done sync under the vm mutex.

v2:
- Update the vma_res::bound_flags when binding to the aliased ggtt
v6:
- Remove I915_VMA_ALLOC_BIT (Matthew Auld)
- Change some members of struct i915_vma_resource from unsigned long to u64 (Matthew Auld)
v7:
- Fix vma resource size parameters to be u64 rather than unsigned long (Matthew Auld)

Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20220110172219.107131-3-thomas.hellstrom@linux.intel.com
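A sketch of the resulting ops shape; the parameter lists are an assumption reconstructed from the commit text, not copied from the tree:

```c
/*
 * Sketch of the reworked ops: bind/unbind receive the lifetime-stable
 * i915_vma_resource rather than the i915_vma, which may already be
 * gone by the time an async unbind finally runs.
 */
struct i915_vma_ops {
	void (*bind_vma)(struct i915_address_space *vm,
			 struct i915_vm_pt_stash *stash,
			 struct i915_vma_resource *vma_res,
			 enum i915_cache_level cache_level,
			 u32 flags);
	void (*unbind_vma)(struct i915_address_space *vm,
			   struct i915_vma_resource *vma_res);
};
```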
-
- 18 Dec, 2021: 1 commit
-
-
Committed by Michał Winiarski

Use the to_gt() helper consistently throughout the codebase. Purely mechanical s/i915->gt/to_gt(i915)/. No functional changes.

Signed-off-by: Michał Winiarski <michal.winiarski@intel.com>
Signed-off-by: Andi Shyti <andi.shyti@linux.intel.com>
Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211214193346.21231-6-andi.shyti@linux.intel.com
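For context, a sketch of the helper being rolled out; at this point in the driver's history there is a single GT embedded in the device struct:

```c
/*
 * to_gt() just returns the embedded GT for now, but call sites
 * converted to it keep working once multiple GTs (and a level of
 * indirection) arrive later.
 */
static inline struct intel_gt *to_gt(struct drm_i915_private *i915)
{
	return &i915->gt;
}
```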
-
- 25 Nov, 2021: 1 commit
-
-
Committed by Thomas Hellström

There is an interesting refcounting loop: struct intel_memory_region has a struct ttm_resource_manager, ttm_resource_manager->move may hold a reference to i915_request, i915_request may hold a reference to intel_context, intel_context may hold a reference to drm_i915_gem_object, drm_i915_gem_object may hold a reference to intel_memory_region.

Break this loop by dropping region reference counting. In addition, have regions with a manager moving fence make sure that all region objects are released before freeing the region.

v6:
- Fix a code comment.

Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211122214554.371864-4-thomas.hellstrom@linux.intel.com
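The cycle is easier to see drawn out; this simply restates the chain from the commit message:

```c
/*
 * The refcount cycle being broken (each arrow is "may hold a
 * reference to"):
 *
 *   intel_memory_region
 *     -> ttm_resource_manager (its ->move fence)
 *       -> i915_request
 *         -> intel_context
 *           -> drm_i915_gem_object
 *             -> intel_memory_region       <-- closes the loop
 *
 * Removing region refcounting deletes the last edge; regions instead
 * wait for all their objects to be released before being freed.
 */
```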
-
- 05 Nov, 2021: 1 commit
-
-
Committed by Maarten Lankhorst

In the next commit, we don't evict when refcount = 0, so we need to drain the freed objects, because we want to pin new bo's in the same place; stale objects would otherwise cause a test failure. Furthermore, since each subtest is separate, it's a lot better to use i915_live_selftests, so each subtest starts with a clean slate and a clean address space.

v2 (Reported-by: kernel test robot <lkp@intel.com>):
- Make hugepage_ctx static.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211028125855.3281674-9-matthew.auld@intel.com
-
- 20 Oct, 2021: 1 commit
-
-
Committed by Matthew Auld

Just like we do for internal objects. Also just use i915_gem_object_set_cache_coherency() here; no need for over-flushing on LLC platforms.

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211018174508.2137279-9-matthew.auld@intel.com
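The usual call shape, as a sketch; the LLC conditional mirrors how internal objects pick a cache level, but treat the exact arguments as an assumption:

```c
/*
 * Pick cache coherency once at creation time instead of over-flushing
 * later; on LLC platforms the CPU cache is coherent with the GPU's
 * view, so LLC caching is safe.
 */
i915_gem_object_set_cache_coherency(obj, HAS_LLC(i915) ?
				    I915_CACHE_LLC : I915_CACHE_NONE);
```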
-
- 24 Sep, 2021: 2 commits
-
-
Committed by Matthew Auld

In commit:

commit 1e6decf3
Author: Hugh Dickins <hughd@google.com>
Date: Thu Sep 2 14:54:43 2021 -0700

    shmem: shmem_writepage() split unlikely i915 THP

it looks like THP + shmem_writeback was an unexpected combination, and ended up hitting some BUG_ON, but it also looks like that is now fixed. While the IGTs did eventually hit this (although not during pre-merge, it seems), it's likely worthwhile adding some explicit coverage for this scenario in the shrink_thp selftest.

References: https://gitlab.freedesktop.org/drm/intel/-/issues/4166
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210921142116.3807946-1-matthew.auld@intel.com
-
Committed by Thomas Hellström

We really only need memcpy restore for objects that affect the operability of the migrate context. That is, primarily the page-table objects of the migrate VM.

Add an object flag, I915_BO_ALLOC_PM_EARLY, for objects that need early restores using memcpy, and a way to assign LMEM page-table object flags to be used by the vms.

Restore objects without this flag with the gpu blitter, and only objects carrying the flag using TTM memcpy.

Initially mark the migrate, gt, gtt and vgpu vms to use this flag, and defer for a later audit which vms actually need it. Most importantly, user-allocated vms with pinned page-table objects can be restored using the blitter.

Performance-wise, memcpy restore is probably as fast as gpu restore if not faster, but using gpu restore will help tackle future restrictions in mappable LMEM size.

v4:
- Don't mark the aliasing ppgtt page table flags for early resume, but rather the ggtt page table flags as intended. (Matthew Auld)
- The check for user buffer objects during early resume is pointless, since they are never marked I915_BO_ALLOC_PM_EARLY. (Matthew Auld)
v5:
- Mark GuC LMEM objects with I915_BO_ALLOC_PM_EARLY to have them restored before we fire up the migrate context.

Cc: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210922062527.865433-8-thomas.hellstrom@linux.intel.com
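The flag-driven split, as a sketch; I915_BO_ALLOC_PM_EARLY is the flag named in the commit, while the two restore helpers here are hypothetical stand-ins:

```c
/*
 * Resume-time decision: objects the migrate context itself depends on
 * (its VM's page tables, GuC LMEM objects) must be restored by CPU
 * memcpy before the blitter can be trusted; everything else can go
 * through the GPU blit path.
 */
if (obj->flags & I915_BO_ALLOC_PM_EARLY)
	err = lmem_restore_by_memcpy(obj);	/* hypothetical helper */
else
	err = lmem_restore_by_blitter(obj);	/* hypothetical helper */
```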
-
- 08 Sep, 2021: 1 commit
-
-
Committed by Matthew Auld

Since the object might still be active here, shrink_all will simply ignore it, which blows up in the test, since the pages will still be there. Currently THP is disabled, which should result in the test being skipped, but if we ever re-enable THP we might start seeing the failure. Fix this by forcing I915_SHRINK_ACTIVE.

v2: Some machine in the shard runs doesn't seem to have any available swap when running this test. Try to handle this.

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210906091729.2093312-1-matthew.auld@intel.com
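What "forcing I915_SHRINK_ACTIVE" roughly looks like at the call site; a sketch, assuming the i915_gem_shrink() signature of this period:

```c
/*
 * Without I915_SHRINK_ACTIVE the shrinker skips objects still active
 * on the GPU, so the selftest's pages never leave.
 */
i915_gem_shrink(NULL, i915, -1UL, NULL,
		I915_SHRINK_BOUND |
		I915_SHRINK_UNBOUND |
		I915_SHRINK_ACTIVE);
```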
-
- 06 Sep, 2021: 2 commits
-
-
Committed by Daniel Vetter

It's been invariant since

commit ccbc1b97
Author: Jason Ekstrand <jason@jlekstrand.net>
Date: Thu Jul 8 10:48:30 2021 -0500

    drm/i915/gem: Don't allow changing the VM on running contexts (v4)

this just completes the deed. I've tried to split out prep work for more careful review as much as possible; this is what's left:

- get_ppgtt gets simplified since we don't need to grab a temporary reference - we can rely on the temporary reference for the gem_ctx while we inspect the vm. The new vm_id still needs a full i915_vm_open ofc. This also removes the final caller of context_get_vm_rcu.
- A pile of selftests can now just look at ctx->vm instead of rcu_dereference_protected( , true) or similar things.
- All callers of i915_gem_context_vm also disappear.
- I've changed the hugepage selftest to set scrub_64K without any locking, because when we inspect that setting we're also not taking any locks either. It works because it's a selftest that's careful (single-threaded gives you nice ordering) and not a live driver where races can happen from anywhere.

These can only be split up further if we have some intermediate state with a bunch more rcu_dereference_protected(ctx->vm, true), just to shut up lockdep and sparse.

The conversion to __rcu happened in

commit a4e7ccda
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date: Fri Oct 4 14:40:09 2019 +0100

    drm/i915: Move context management under GEM

Note that we're not breaking the actual bugfix in there: the real bugfix is pushing the i915_vm_release onto a separate worker, to avoid locking inversion issues. The rcu conversion was just thrown in for entertainment value on top (no, vm lookup isn't even close to anything that's a hotpath where removing the single spinlock can be measured).

v2: Rebase over the change to move the i915_vm_put() into i915_gem_context_release().

v3: Trivial conflict against repainted shed.

Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Cc: Jon Bloomfield <jon.bloomfield@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Cc: Dave Airlie <airlied@redhat.com>
Cc: Jason Ekstrand <jason@jlekstrand.net>
Link: https://patchwork.freedesktop.org/patch/msgid/20210902142057.929669-9-daniel.vetter@ffwll.ch
-
Committed by Daniel Vetter

The important part isn't so much that this does an rcu lookup - that's more an implementation detail, which will also be removed. The thing that makes this different from other functions is that it's getting you the vm that batchbuffers will run in for that gem context, which is either a full ppgtt stored in the gem ctx, or the ggtt. We'll make more use of this function later on.

Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Cc: Jon Bloomfield <jon.bloomfield@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Cc: Dave Airlie <airlied@redhat.com>
Cc: Jason Ekstrand <jason@jlekstrand.net>
Link: https://patchwork.freedesktop.org/patch/msgid/20210902142057.929669-5-daniel.vetter@ffwll.ch
-
- 30 Jun, 2021: 1 commit
-
-
Committed by Matthew Auld

For some specialised objects we might need something larger than the region's min_page_size due to some hw restriction, and slightly more hairy is needing something smaller with the guarantee that such objects will never be inserted into any GTT, which is the case for the paging structures.

This also fixes how we set up the BO page_alignment if we later migrate the object somewhere else. For example, if the placements are {SMEM, LMEM}, then we might get this wrong. Pushing the min_page_size behaviour into the manager should fix this.

v2(Thomas): push the default page size behaviour into buddy_man, and let the user override it with the page-alignment, which looks cleaner
v3: rebase on ttm sys changes

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210625103824.558481-1-matthew.auld@intel.com
-
- 25 Jun, 2021: 1 commit
-
-
Committed by Thomas Hellström

The object-ops flag I915_GEM_OBJECT_HAS_IOMEM and the object flag I915_BO_ALLOC_STRUCT_PAGE are considered immutable by much of our code. Introduce a new mem_flags member to hold these, and make sure checks for these flags being set are either done under the object lock or with pages properly pinned. The flags will change during migration under the object lock.

Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210624084240.270219-2-thomas.hellstrom@linux.intel.com
-
- 25 Mar, 2021: 1 commit
-
-
Committed by Maarten Lankhorst

Straightforward conversion: just convert a bunch of calls to the unlocked versions.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20210323155059.628690-42-maarten.lankhorst@linux.intel.com
-
- 24 Mar, 2021: 1 commit
-
-
Committed by Maarten Lankhorst

We want to remove the changing of the ops structure for attaching phys pages, so we need to kill off HAS_STRUCT_PAGE from ops->flags and put it in the bo. This will remove a potential race of dereferencing the wrong obj->ops without the ww mutex held.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
[danvet: apply with wiggle]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20210323155059.628690-8-maarten.lankhorst@linux.intel.com
-
- 01 Dec, 2020: 2 commits
-
-
Committed by Matthew Auld

For the LMEM case, if we have suitable alignment and 2M physical pages we should always get 2M GTT pages within the constraints of the hugepages selftest. If we don't, then something might be wrong in our construction of the backing pages.

References: 330b7d33 ("drm/i915/region: fix order when adding blocks")
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20201130141809.65330-2-matthew.auld@intel.com
-
Committed by Matthew Auld

In igt_ppgtt_sanity_check we should also exercise the non-contiguous option for LMEM, since this will give us slightly different sg layouts and alignment.

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20201130141809.65330-1-matthew.auld@intel.com
-
- 21 Sep, 2020: 1 commit
-
-
Committed by Daniel Vetter

Just some prep work before we rework the lifetime handling, which requires replacing all the drm_dev_put calls in selftests by something else.

v2: Don't go with a static inline; it upsets the header tests and separation.

Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200918132505.2316382-2-daniel.vetter@ffwll.ch
-
- 07 Sep, 2020: 3 commits
-
-
Committed by Maarten Lankhorst

Execbuffer submission will perform its own WW locking, and we cannot rely on the implicit lock there. This also makes it clear that the GVT code will get a lockdep splat when multiple batchbuffer shadows need to be performed in the same instance; fix that up.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200819140904.1708856-7-maarten.lankhorst@linux.intel.com
Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
-
Committed by Maarten Lankhorst

i915_gem_ww_ctx is used to lock all gem bo's for pinning and memory eviction. We don't use it yet, but let's start by adding the definition first.

To use it, we have to pass a non-NULL ww to gem_object_lock, and don't unlock directly. That is done in i915_gem_ww_ctx_fini.

Changes since v1:
- Change ww_ctx and obj order in locking functions (Joonas Lahtinen)

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200819140904.1708856-6-maarten.lankhorst@linux.intel.com
Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
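The canonical retry loop this API enables; a sketch of the usage pattern described in the commit, with error handling trimmed:

```c
/*
 * All object locks are taken under one ww context; on -EDEADLK we back
 * off (dropping every lock) and retry, and i915_gem_ww_ctx_fini()
 * releases whatever is still held at the end.
 */
struct i915_gem_ww_ctx ww;
int err;

i915_gem_ww_ctx_init(&ww, true);	/* true: interruptible */
retry:
err = i915_gem_object_lock(obj, &ww);	/* note: non-NULL ww */
if (!err) {
	/* pin pages, evict, etc., all under the ww context */
}
if (err == -EDEADLK) {
	err = i915_gem_ww_ctx_backoff(&ww);
	if (!err)
		goto retry;
}
i915_gem_ww_ctx_fini(&ww);
```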
-
Committed by Chris Wilson

The GEM object is grossly overweight for the practicality of tracking large numbers of individual pages, yet it is currently our only abstraction for tracking DMA allocations. Since those allocations need to be reserved upfront before an operation, and since we need to break away from simple system memory, we need to ditch using plain struct page wrappers.

In the process, we drop the WC mapping as we ended up clflushing everything anyway due to various issues across a wider range of platforms. Though in a future step, we need to drop the kmap_atomic approach, which suggests we need to pre-map all the pages and keep them mapped.

v2: Verify our large scratch page is suitably DMA aligned; and manually clear the scratch since we are allocating plain struct pages full of prior content.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200729164219.5737-2-chris@chris-wilson.co.uk
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
-
- 30 May, 2020: 1 commit
-
-
Committed by Chris Wilson

Name the object classes and their offspring for easier lockdep debugging.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200529183204.16850-2-chris@chris-wilson.co.uk
-
- 22 May, 2020: 1 commit
-
-
Committed by Chris Wilson

As we no longer use PIN_UPDATE (since commit 7d0aa0db ("drm/i915/gem: Unbind all current vma on changing cache-level")) we can remove PIN_UPDATE itself. The benefit is just in simplifying the vma bind.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200521144949.25357-1-chris@chris-wilson.co.uk
-
- 28 Apr, 2020: 1 commit
-
-
Committed by Xiyu Yang

igt_ppgtt_pin_update() invokes i915_gem_context_get_vm_rcu(), which returns a reference to the i915_address_space object in "vm" with increased refcount. When igt_ppgtt_pin_update() returns, "vm" becomes invalid, so the refcount should be decreased to keep the refcount balanced.

The reference counting issue happens in two exception handling paths of igt_ppgtt_pin_update(). When i915_gem_object_create_internal() returns IS_ERR, the refcnt increased by i915_gem_context_get_vm_rcu() is not decreased, causing a refcnt leak.

Fix this issue by jumping to the "out_vm" label when i915_gem_object_create_internal() returns IS_ERR.

Fixes: a4e7ccda ("drm/i915: Move context management under GEM")
Signed-off-by: Xiyu Yang <xiyuyang19@fudan.edu.cn>
Signed-off-by: Xin Tan <tanxin.ctf@gmail.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/1587361342-83494-1-git-send-email-xiyuyang19@fudan.edu.cn
(cherry picked from commit e07c7606)
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
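The shape of the leak and the fix, as a sketch reconstructed from the commit text:

```c
/*
 * The vm reference taken at the top of the selftest must be dropped on
 * every exit path, including early object-creation errors.
 */
vm = i915_gem_context_get_vm_rcu(ctx);		/* takes a vm ref */

obj = i915_gem_object_create_internal(dev_priv, PAGE_SIZE);
if (IS_ERR(obj)) {
	err = PTR_ERR(obj);
	goto out_vm;	/* the fix: previously returned here, leaking */
}
/* ... rest of the test ... */
out_vm:
	i915_vm_put(vm);			/* balances the ref */
```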
-
- 24 Apr, 2020: 1 commit
-
-
Committed by Chris Wilson

The history of i915_vma_close() is confusing, as is its use. As the lifetime of the i915_vma is currently bounded by the object it is attached to, we needed a means of identifying when a vma was no longer in use by userspace (via the user's fd). This is further complicated by the fact that only ppgtt vma should be closed at the user's behest, as the ggtt were always shared.

Now that we attach the vma to a lut on the user's context, the open count does indicate how many unique and open context/vm are referencing this vma from the user. As such, we can and should just use the open_count to track when the vma is still in use by userspace. It's a poor man's replacement for reference counting.

Closes: https://gitlab.freedesktop.org/drm/intel/issues/1193
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Andi Shyti <andi.shyti@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200422190558.30509-1-chris@chris-wilson.co.uk
-
- 21 Apr, 2020: 1 commit
-
-
Committed by Xiyu Yang

igt_ppgtt_pin_update() invokes i915_gem_context_get_vm_rcu(), which returns a reference to the i915_address_space object in "vm" with increased refcount. When igt_ppgtt_pin_update() returns, "vm" becomes invalid, so the refcount should be decreased to keep the refcount balanced.

The reference counting issue happens in two exception handling paths of igt_ppgtt_pin_update(). When i915_gem_object_create_internal() returns IS_ERR, the refcnt increased by i915_gem_context_get_vm_rcu() is not decreased, causing a refcnt leak.

Fix this issue by jumping to the "out_vm" label when i915_gem_object_create_internal() returns IS_ERR.

Fixes: a4e7ccda ("drm/i915: Move context management under GEM")
Signed-off-by: Xiyu Yang <xiyuyang19@fudan.edu.cn>
Signed-off-by: Xin Tan <tanxin.ctf@gmail.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/1587361342-83494-1-git-send-email-xiyuyang19@fudan.edu.cn
-
- 07 Feb, 2020: 1 commit
-
-
Committed by Matthew Auld

We already have tests that exhaustively exercise the most interesting page-size combinations, along with tests that offer randomisation, and so we should already be testing objects (local, system) with a varying mix of page-sizes, which leaves igt_ppgtt_exhaust_huge providing not much in terms of extra coverage.

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20200206170340.102613-1-matthew.auld@intel.com
-
- 08 Jan, 2020: 1 commit
-
-
Committed by Matthew Auld

Attempt to split i915_gem_gtt.[ch] into more manageable chunks.

Suggested-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20200107134009.3255354-1-chris@chris-wilson.co.uk
-
- 03 Jan, 2020: 1 commit
-
-
Committed by Chris Wilson

Create a vmap for discontiguous lmem objects to support i915_gem_object_pin_map().

v2: Offset io address by region.start for fake-lmem

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200102204215.1519103-1-chris@chris-wilson.co.uk
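What this enables at call sites; a sketch, assuming the i915_gem_object_pin_map() signature of this period:

```c
/*
 * With the vmap in place, even an lmem object backed by scattered
 * blocks can be mapped as one contiguous kernel address range and
 * accessed through a plain pointer.
 */
void *vaddr;

vaddr = i915_gem_object_pin_map(obj, I915_MAP_WC);
if (IS_ERR(vaddr))
	return PTR_ERR(vaddr);
memset(vaddr, 0, obj->base.size);	/* CPU access just works */
i915_gem_object_unpin_map(obj);
```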
-
- 23 Dec, 2019: 1 commit
-
-
Committed by Chris Wilson

Start introducing a kref on i915_vma in order to protect the vma unbind (i915_gem_object_unbind) from a parallel destruction (i915_vma_parked). Later, we will use the refcount to manage all access and turn i915_vma into a first-class container.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Imre Deak <imre.deak@intel.com>
Acked-by: Imre Deak <imre.deak@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191222210256.2066451-2-chris@chris-wilson.co.uk
-
- 17 Dec, 2019: 1 commit
-
-
Committed by Chris Wilson

When creating a handle, it is just that: an abstract handle. The fact that we cannot currently support a handle larger than the size of the backing storage is an artifact of our whole-object-at-a-time handling in get_pages(), and being an implementation limitation it is best handled at that point -- similar to shmem, where we only barf when asked to populate the whole object if it is larger than RAM. (Pinning the whole object at a time is a major hindrance that we are likely to have to overcome in the near future.) In the case of the buddy allocator, the late check is preferable as the request size may often be smaller than the required size.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191216122603.2598155-1-chris@chris-wilson.co.uk
-
- 08 Nov, 2019: 2 commits
-
-
Committed by Chris Wilson

Since drm provided us with a real struct file we can use for our anonymous internal clients (mock_file), complete our transition to using that as the primary interface (and not the mocked-up struct drm_file we were previously using).

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191107213929.23286-1-chris@chris-wilson.co.uk
-
Committed by Chris Wilson

As drm now exports a method to create an anonymous struct file around a drm_device for internal use, make use of it to avoid our horrible hacks. Daniel suggested that the mock_file_put() wrapper was suitable for drm-core, along with the mock_drm_getfile() [and that the vestigial mock_drm_file() in this patch should perhaps be the drm interface itself]. However, the eventual goal is to remove mock_drm_file() and use the struct file and fput() directly; in this patch we take a simple transition in that direction.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20191107180601.30815-3-chris@chris-wilson.co.uk
-
- 07 Nov, 2019: 1 commit
-
-
Committed by Daniel Vetter

The trouble with having a plain nesting flag for locks which do not naturally nest (unlike block devices and their partitions, which is the original motivation for nesting levels) is that lockdep will never spot a true deadlock if you screw up.

This patch is an attempt at trying better, by highlighting a bit more of the actual nature of the nesting that's going on. Essentially we have two kinds of objects:

- objects without pages allocated, which cannot be on any lru and are hence inaccessible to the shrinker.
- objects which have pages allocated, which are on an lru, and which the shrinker can decide to throw out.

For the former type of object, memory allocations while holding obj->mm.lock are permissible. For the latter they are not. And get/put_pages transitions between the two types of objects.

This is still not entirely fool-proof since the rules might change. But as long as we run such code ever at runtime lockdep should be able to observe the inconsistency and complain (like with any other lockdep class that we've split up in multiple classes). But there are a few clear benefits:

- We can drop the nesting flag parameter from __i915_gem_object_put_pages, because that function by definition is never going to allocate memory, and calling it on an object which doesn't have its pages allocated would be a bug.
- We strictly catch more bugs, since there's not only one place in the entire tree which is annotated with the special class. All the other places that had explicit lockdep nesting annotations we're now going to leave up to lockdep again.
- Specifically this catches stuff like calling get_pages from put_pages (which isn't really a good idea, if we can call get_pages so could the shrinker). I've seen patches do exactly that.

Of course I fully expect CI will show me for the fool I am with this one here :-)

v2: There can only be one (lockdep only has a cache for the first subclass, not for deeper ones, and we don't want to make these locks even slower). Still separate enums for better documentation. Real fix: don't forget about phys objs and pin_map(), and fix the shrinker to have the right annotations ... silly me.

v3: Forgot userptr too ...

v4: Improve comment for pages_pin_count, drop the IMPORTANT comment and instead prime lockdep (Chris).

v5: Appease checkpatch, no double empty lines (Chris)

v6: More rebasing over selftest changes. Also somehow I forgot to push this patch :-/ Also format comments consistently while at it.

v7: Fix typo in commit message (Joonas)

Also drop the priming; with the lmem merge we now have allocations while holding the lmem lock, which wrecks the generic priming I've done in earlier patches. Should probably be resurrected when lmem is fixed. See

commit 232a6eba
Author: Matthew Auld <matthew.auld@intel.com>
Date: Tue Oct 8 17:01:14 2019 +0100

    drm/i915: introduce intel_memory_region

I'm keeping the priming patch locally so it won't get lost.

Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: "Tang, CQ" <cq.tang@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> (v5)
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> (v6)
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191105090148.30269-1-daniel.vetter@ffwll.ch
[mlankhorst: Fix commit typos pointed out by Michael Ruhl]
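The two object states as a sketch; the enum names below are illustrative of the split described, not necessarily the exact identifiers in the tree:

```c
/*
 * Sketch of the two lock classes for obj->mm.lock described above:
 * whether allocations are allowed under the lock depends on whether
 * the shrinker can reach the object.
 */
enum i915_mm_lock_class {
	I915_MM_NO_PAGES,	/* not on any LRU: the shrinker can't see
				 * it, so allocating under the lock is ok */
	I915_MM_ON_LRU,		/* the shrinker may take this lock to
				 * throw the pages out: no allocations */
};
```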
-
- 26 Oct, 2019: 3 commits
-
-
Committed by Matthew Auld

Now that for all the relevant backends we do randomised testing, we need to make sure we still sanity check the obvious cases that might blow up, such that introducing a temporary regression is less likely. Also, rather than doing this for every backend, just limit it to our two memory types: system and local.

Suggested-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20191025153728.23689-7-chris@chris-wilson.co.uk
-
Committed by Matthew Auld

Ditch the dubious static list of sizes to enumerate, in favour of choosing a random size within the limits of each backing store. With repeated CI runs this should give us a wider range of object sizes, and in turn more page-size combinations, while using less machine time.

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20191025153728.23689-6-chris@chris-wilson.co.uk
-
Committed by Matthew Auld

Add LMEM objects to the list of backends we test for huge-GTT-pages.

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20191025153728.23689-5-chris@chris-wilson.co.uk
-
- 22 Oct, 2019: 1 commit
-
-
Committed by Chris Wilson

Separate each object class into a separate lock type to avoid lockdep cross-contamination between paths (i.e. userptr!).

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191022144501.26486-1-chris@chris-wilson.co.uk
-
- 09 Oct, 2019: 1 commit
-
-
Committed by Matthew Auld

Volatile objects are marked as DONTNEED while pinned; therefore, once unpinned, the backing store can be discarded. This is limited to kernel-internal objects.

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: CQ Tang <cq.tang@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Abdiel Janulgue <abdiel.janulgue@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20191008160116.18379-4-matthew.auld@intel.com
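The lifecycle in sketch form; I915_BO_ALLOC_VOLATILE is the flag from this series, while the surrounding usage is illustrative:

```c
/*
 * A kernel-internal object opts in to having its backing store
 * discarded: the pages are DONTNEED while pinned, and fair game to be
 * reclaimed (contents lost) once unpinned.
 */
obj = i915_gem_object_create_internal(i915, size);
if (IS_ERR(obj))
	return PTR_ERR(obj);

obj->flags |= I915_BO_ALLOC_VOLATILE;	/* discardable once unpinned */
```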
-