- 20 Feb 2022, 2 commits
-
-
Committed by Matthew Auld

Discrete cards optimise 64K GTT pages for local-memory, since everything should be allocated at 64K granularity. We say goodbye to sparse entries, and instead get a compact 256B page-table for 64K pages, which should be more cache friendly. 4K pages for local-memory are no longer supported by the HW.

v4: don't return uninitialized err in igt_ppgtt_compact

Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Stuart Summers <stuart.summers@intel.com>
Signed-off-by: Ramalingam C <ramalingam.c@intel.com>
Signed-off-by: Robert Beckett <bob.beckett@collabora.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20220218184752.7524-8-ramalingam.c@intel.com
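As a quick sanity check of the 256B figure, here is a standalone arithmetic sketch. It assumes the usual GEN8 layout of 2MiB of address space covered per page table and 8-byte PTEs; it is illustrative, not driver code:

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	const uint64_t pt_coverage = 2ull << 20; /* 2 MiB covered per page table */
	const uint64_t pte_size = 8;             /* 64-bit entries */

	/* 512 PTEs * 8 B = 4 KiB page table for 4K pages */
	printf("4K pages:  %llu B page table\n",
	       (unsigned long long)(pt_coverage / (4ull << 10) * pte_size));
	/* 32 PTEs * 8 B = 256 B: the compact page table for 64K pages */
	printf("64K pages: %llu B page table\n",
	       (unsigned long long)(pt_coverage / (64ull << 10) * pte_size));
	return 0;
}
```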
-
Committed by Matthew Auld

For local-memory objects we need to align the GTT addresses to 64K, both for the ppgtt and ggtt. We need to support vm->min_alignment > 4K, depending on the vm itself and the type of object we are inserting. With this in mind, update the GTT selftests to take this into account. For compact-pt we further align and pad lmem object GTT addresses to 2MB to ensure PDEs contain consistent page sizes as required by the HW.

v3:
* use needs_compact_pt flag to discriminate between 64K and 64K with compact-pt
* add i915_vm_obj_min_alignment
* use i915_vm_obj_min_alignment to round up vma reservation if compact-pt instead of hard coding
v5:
* fix i915_vm_obj_min_alignment for internal objects which have no memory region
v6:
* tiled_blits_create correctly picks the largest required alignment
v8:
* i915_vm_min_alignment protects against array overflow for mock region

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Ramalingam C <ramalingam.c@intel.com>
Signed-off-by: Robert Beckett <bob.beckett@collabora.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20220218184752.7524-7-ramalingam.c@intel.com
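A minimal sketch of the alignment rule described here; the helper name and structure are hypothetical stand-ins for i915_vm_obj_min_alignment and friends:

```c
#include <stdbool.h>
#include <stdint.h>

#define SZ_4K  0x1000ull
#define SZ_64K 0x10000ull
#define SZ_2M  0x200000ull

/* Hypothetical helper: smem keeps 4K alignment, lmem needs 64K, and
 * compact-pt platforms pad lmem GTT addresses to 2M so that a single
 * PDE never mixes page sizes. */
static uint64_t vm_min_alignment(bool is_lmem, bool needs_compact_pt)
{
	if (!is_lmem)
		return SZ_4K;
	return needs_compact_pt ? SZ_2M : SZ_64K;
}
```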
-
- 11 Jan 2022, 2 commits
-
-
Committed by Thomas Hellström

Implement async (non-blocking) unbinding by not syncing the vma before calling unbind on the vma_resource. Add the resulting unbind fence to the object's dma_resv, from where it is picked up by the ttm migration code. Ideally these unbind fences should be coalesced with the migration blit fence to avoid stalling the migration blit waiting for unbind, as they can certainly go on in parallel, but since we don't yet have a reasonable data structure to use to coalesce fences and attach the resulting fence to a timeline, we defer that for now.

Note that with async unbinding, even while the unbind waits for the preceding bind to complete before unbinding, the vma itself might have been destroyed in the process, clearing the vma pages. Therefore we can only allow async unbinding if we have a refcounted sg-list and keep a refcount on that for the vma resource pages to stay intact until binding occurs. If this condition is not met, a request for an async unbind is diverted to a sync unbind.

v2:
- Use a separate kmem_cache for vma resources for now to isolate their memory allocation and aid debugging.
- Move the check for vm closed to the actual unbinding thread. Regardless of whether the vm is closed, we need the unbind fence to properly wait for capture.
- Clear vma_res::vm on unbind and update its documentation.
v4:
- Take cache coloring into account when searching for vma resources pending unbind. (Matthew Auld)
v5:
- Fix timeout and error check in i915_vma_resource_bind_dep_await().
- Avoid taking a reference on the object for async binding if async unbind capable.
- Fix braces around a single-line if statement.
v6:
- Fix up the cache coloring adjustment. (Kernel test robot <lkp@intel.com>)
- Don't allow async unbinding if the vma_res pages are not the same as the object pages. (Matthew Auld)
v7:
- s/unsigned long/u64/ in a number of places (Matthew Auld)

Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20220110172219.107131-5-thomas.hellstrom@linux.intel.com
-
Committed by Thomas Hellström

When introducing asynchronous unbinding, the vma itself may no longer be alive when the actual binding or unbinding takes place. Update the gtt i915_vma_ops accordingly to take a struct i915_vma_resource instead of a struct i915_vma for the bind_vma() and unbind_vma() ops. Similarly change the insert_entries() op for struct i915_address_space.

Replace a couple of i915_vma_snapshot members with their newly introduced i915_vma_resource counterparts, since they have the same lifetime.

Also make sure to avoid changing the struct i915_vma_flags (in particular the bind flags) asynchronously. That should now only be done synchronously under the vm mutex.

v2:
- Update the vma_res::bound_flags when binding to the aliased ggtt
v6:
- Remove I915_VMA_ALLOC_BIT (Matthew Auld)
- Change some members of struct i915_vma_resource from unsigned long to u64 (Matthew Auld)
v7:
- Fix vma resource size parameters to be u64 rather than unsigned long (Matthew Auld)

Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20220110172219.107131-3-thomas.hellstrom@linux.intel.com
-
- 20 Dec 2021, 1 commit
-
-
Committed by Maarten Lankhorst

Big delta, but it boils down to moving set_pages to i915_vma.c and removing the special handling; all callers use the defaults anyway. We only remap in ggtt, so the default case will fall through. Because we still don't require locking in i915_vma_unpin(), handle this by using xchg in get_pages(), as it's locked with obj->mutex, and cmpxchg in unpin, which only fails if we race against a new pin.

Changes since v1:
- aliasing gtt sets ZERO_SIZE_PTR, not -ENODEV, remove special case from __i915_vma_get_pages(). (Matt)
Changes since v2:
- Free correct old pages in __i915_vma_get_pages(). (Matt) Remove race of clearing vma->pages accidentally from put, free it but leave it set, as only get has the lock.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211216142749.1966107-4-maarten.lankhorst@linux.intel.com
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
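The xchg/cmpxchg split can be modelled with C11 atomics. A sketch of the pattern described above; the names are invented, not the driver's:

```c
#include <stdatomic.h>
#include <stddef.h>

struct sg_table; /* opaque stand-in for the real scatterlist table */

static _Atomic(struct sg_table *) vma_pages;

/* get_pages() runs under obj->mutex, so a plain exchange is enough to
 * publish the new pages (and hand back any old ones for freeing). */
static struct sg_table *vma_set_pages(struct sg_table *pages)
{
	return atomic_exchange(&vma_pages, pages);
}

/* unpin runs without the lock: only clear the pointer if nobody
 * re-pinned in the meantime; a failed cmpxchg means we lost the race
 * to a new pin, which is fine. */
static void vma_clear_pages(struct sg_table *expected)
{
	atomic_compare_exchange_strong(&vma_pages, &expected,
				       (struct sg_table *)NULL);
}
```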
-
- 10 Dec 2021, 1 commit
-
-
Committed by Matthew Auld

On some platforms the hw has dropped support for 4K GTT pages when dealing with LMEM, and due to the design of 64K GTT pages in the hw, we can only mark the *entire* page-table as operating in 64K GTT mode, since the enable bit is still on the pde, and not the pte. And since we still need to allow 4K GTT pages for SMEM objects, we can't have a "normal" 4K page-table with scratch pointing to LMEM, since that's undefined from the hw pov. The simplest solution is to just move the 64K scratch page to SMEM on such platforms and call it a day, since that should work for all configurations.

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Ramalingam C <ramalingam.c@intel.com>
Reviewed-by: Thomas Hellstrom <thomas.hellstrom@linux.intel.com>
Reviewed-by: Andi Shyti <andi.shyti@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211208141613.7251-4-ramalingam.c@intel.com
-
- 07 Dec 2021, 1 commit
-
-
Committed by Michael Cheng

Certain functions within i915 use macros that are defined for specific architectures by the MMU, such as _PAGE_RW and _PAGE_PRESENT (some architectures don't even have these macros defined, like ARM64). Instead of re-using bits defined for the CPU, we should use bits defined for i915. This patch introduces two new 64-bit macros, GEN8_PAGE_PRESENT and GEN8_PAGE_RW, to check for bits 0 and 1, and to replace all occurrences of _PAGE_RW and _PAGE_PRESENT within i915.

v2 (Michael Cheng): Use GEN8_ instead of I915_

Signed-off-by: Michael Cheng <michael.cheng@intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
[ Move defines together with other GEN8 defines ]
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211206215245.513677-2-michael.cheng@intel.com
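A sketch of what those two macros and a typical check look like; the bit positions follow directly from the commit text, which says they test bits 0 and 1:

```c
#include <stdint.h>

#define BIT_ULL(n)        (1ull << (n))
#define GEN8_PAGE_PRESENT BIT_ULL(0) /* i915-owned stand-in for _PAGE_PRESENT */
#define GEN8_PAGE_RW      BIT_ULL(1) /* i915-owned stand-in for _PAGE_RW */

/* A PTE is usable for writes only if both bits are set. */
static inline int gen8_pte_present_rw(uint64_t pte)
{
	const uint64_t mask = GEN8_PAGE_PRESENT | GEN8_PAGE_RW;

	return (pte & mask) == mask;
}
```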
-
- 03 Nov 2021, 1 commit
-
-
Committed by Imre Deak

Factor out functions that are needed by the next patch to suspend/resume the memory mappings for DPT FBs. No functional change, except reordering, during suspend, the ggtt->invalidate(ggtt) call with respect to atomic_set(&ggtt->vm.open, open) and mutex_unlock(&ggtt->vm.mutex). This shouldn't matter, as the i915 suspend sequence is single-threaded.

Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211101183551.3580546-1-imre.deak@intel.com
-
- 24 Sep 2021, 1 commit
-
-
Committed by Thomas Hellström

We really only need memcpy restore for objects that affect the operability of the migrate context. That is, primarily the page-table objects of the migrate VM. Add an object flag, I915_BO_ALLOC_PM_EARLY, for objects that need early restores using memcpy, and a way to assign LMEM page-table object flags to be used by the vms. Restore objects without this flag with the gpu blitter, and only objects carrying the flag using TTM memcpy.

Initially mark the migrate, gt, gtt and vgpu vms to use this flag, and defer for a later audit which vms actually need it. Most importantly, user-allocated vms with pinned page-table objects can be restored using the blitter.

Performance-wise, memcpy restore is probably as fast as gpu restore if not faster, but using gpu restore will help tackle future restrictions in mappable LMEM size.

v4:
- Don't mark the aliasing ppgtt page table flags for early resume, but rather the ggtt page table flags as intended. (Matthew Auld)
- The check for user buffer objects during early resume is pointless, since they are never marked I915_BO_ALLOC_PM_EARLY. (Matthew Auld)
v5:
- Mark GuC LMEM objects with I915_BO_ALLOC_PM_EARLY to have them restored before we fire up the migrate context.

Cc: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210922062527.865433-8-thomas.hellstrom@linux.intel.com
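The resulting resume-time split can be sketched as below; the flag's bit position and the function names are hypothetical, the point is the dispatch on I915_BO_ALLOC_PM_EARLY:

```c
#include <stdint.h>
#include <stdio.h>

#define I915_BO_ALLOC_PM_EARLY (1u << 3) /* bit position hypothetical */

static void ttm_memcpy_restore(void) { puts("memcpy restore (pre-migrate-ctx)"); }
static void blitter_restore(void)    { puts("GPU blitter restore"); }

/* Objects the migrate context itself depends on must come back before
 * the GPU can run; everything else waits for the blitter, which helps
 * against future limits on mappable LMEM size. */
static void restore_lmem_object(uint32_t bo_flags)
{
	if (bo_flags & I915_BO_ALLOC_PM_EARLY)
		ttm_memcpy_restore();
	else
		blitter_restore();
}
```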
-
- 06 Sep 2021, 1 commit
-
-
Committed by Daniel Vetter

The full audit is quite a bit of work:

- i915_dpt has a very simple lifetime (somehow we create a display pagetable vm per object, so it's _very_ simple, there's only ever a single vma in there), and uses i915_vm_close(), which internally does an i915_vm_put(). No rcu.

  Aside: wtf is i915_dpt doing in the intel_display.c garbage collector as a new feature, instead of being added as a separate file with some clean-ish interface? Also, i915_dpt unfortunately re-introduces some coding patterns from pre-dma_resv_lock conversion times.

- i915_gem_proto_ctx is fully refcounted, no rcu, all protected by fpriv->proto_context_lock.

- i915_gem_context is itself rcu protected, and that might leak to anything it points at. Before

    commit cf977e18
    Author: Chris Wilson <chris@chris-wilson.co.uk>
    Date:   Wed Dec 2 11:21:40 2020 +0000

        drm/i915/gem: Spring clean debugfs

  and

    commit db80a129
    Author: Chris Wilson <chris@chris-wilson.co.uk>
    Date:   Mon Jan 18 11:08:54 2021 +0000

        drm/i915/gem: Remove per-client stats from debugfs/i915_gem_objects

  we had a bunch of debugfs files that relied on rcu protecting everything, but those are gone now; the main one was removed even earlier. There doesn't seem to be anything left that's actually protecting stuff now that ctx->vm itself is invariant. See

    commit ccbc1b97
    Author: Jason Ekstrand <jason@jlekstrand.net>
    Date:   Thu Jul 8 10:48:30 2021 -0500

        drm/i915/gem: Don't allow changing the VM on running contexts (v4)

  Note that we drop the vm refcount before the final release of the gem context refcount, so this is all very dangerous even without rcu. Note that aside from later on creating new engines (a defunct feature) and debug output, we never looked at gem_ctx->vm for anything functional, hence why this is ok. Fingers crossed.

  Preceding patches removed all vestiges of rcu use from gem_ctx->vm dereferencing to make it clear it's really not used.

  The gem_ctx->rcu protection was introduced in

    commit a4e7ccda
    Author: Chris Wilson <chris@chris-wilson.co.uk>
    Date:   Fri Oct 4 14:40:09 2019 +0100

        drm/i915: Move context management under GEM

  The commit message is somewhat entertaining because it fails to mention this fact completely, and compensates for that with an in-commit changelog entry that claims ctx->vm is protected by ctx->mutex. Which was the case _before_ this commit, but no longer after it.

- intel_context holds a full reference. Unfortunately intel_context is also rcu protected, and the reference to the ->vm is dropped before the rcu barrier - only the kfree is delayed. So again we need to check whether that leaks anywhere on the intel_context->vm. RCU is only used to protect intel_context sitting on the breadcrumb lists, which don't look at the vm anywhere, so we are fine. Nothing else relies on rcu protection of intel_context, and hence it is fully protected by the kref refcount alone, which protects intel_context->vm in turn.

  The breadcrumbs rcu usage was added in

    commit c744d503
    Author: Chris Wilson <chris@chris-wilson.co.uk>
    Date:   Thu Nov 26 14:04:06 2020 +0000

        drm/i915/gt: Split the breadcrumb spinlock between global and contexts

  its parent commit added the intel_context rcu protection:

    commit 14d1eaf0
    Author: Chris Wilson <chris@chris-wilson.co.uk>
    Date:   Thu Nov 26 14:04:05 2020 +0000

        drm/i915/gt: Protect context lifetime with RCU

  giving some credence to my claim that I've actually caught them all.

- drm_i915_gem_object's shares_resv_from pointer has a full refcount to the dma_resv, which is a sub-refcount that's released after the final i915_vm_put() has been called. Safe.

  Aside: Maybe we should have a struct dma_resv_shared which is just dma_resv + kref as a stand-alone thing. It's a pretty useful pattern which other drivers might want to copy. For a bit more context see

    commit 4d8151ae
    Author: Thomas Hellström <thomas.hellstrom@linux.intel.com>
    Date:   Tue Jun 1 09:46:41 2021 +0200

        drm/i915: Don't free shared locks while shared

- the fpriv->vm_xa was relying on rcu_read_lock for lookup, but that was updated in a prep patch too to just be a spinlock-protected lookup.

- intel_gt->vm is set at driver load in intel_gt_init() and released in intel_gt_driver_release(). There seems to be some issue that in some error paths this is called twice, but otherwise no rcu to be found anywhere. This was added in the below commit, which unfortunately doesn't explain why this complication exists.

    commit e6ba7648
    Author: Chris Wilson <chris@chris-wilson.co.uk>
    Date:   Sat Dec 21 16:03:24 2019 +0000

        drm/i915: Remove i915->kernel_context

  The proper fix for this is most likely to start using drmm_ at large scale, but that's also a huge amount of work.

- i915_vma->vm is some real pain, because it is rcu protected, at least in the vma lookup in the context lookup cache in eb_lookup_vma(). This was added in

    commit 4ff4b44c
    Author: Chris Wilson <chris@chris-wilson.co.uk>
    Date:   Fri Jun 16 15:05:16 2017 +0100

        drm/i915: Store a direct lookup from object handle to vma

  This was changed from the hashtable to a radix tree, with the locking unchanged, in

    commit d1b48c1e
    Author: Chris Wilson <chris@chris-wilson.co.uk>
    Date:   Wed Aug 16 09:52:08 2017 +0100

        drm/i915: Replace execbuf vma ht with an idr

  In

    commit 93159e12
    Author: Chris Wilson <chris@chris-wilson.co.uk>
    Date:   Mon Mar 23 09:28:41 2020 +0000

        drm/i915/gem: Avoid gem_context->mutex for simple vma lookup

  the locking was changed from dev->struct_mutex to rcu, which added the requirement to rcu protect i915_vma. Somehow this was missed in review (or I'm completely blind).

  Irrespective of all that, the vma lookup cache rcu_read_lock grabs a full reference of the vma and the rcu doesn't leak further. So no impact on i915_address_space from that. I have not found any other rcu use for i915_vma, but given that it seems broken I also didn't bother to do a careful in-depth audit.

Altogether there's nothing left in-tree anymore which requires that a pointer deref to an i915_address_space is safe under rcu_read_lock only.

rcu protection of i915_address_space was introduced in

  commit b32fa811
  Author: Chris Wilson <chris@chris-wilson.co.uk>
  Date:   Thu Jun 20 19:37:05 2019 +0100

      drm/i915/gtt: Defer address space cleanup to an RCU worker

by mixing up a bugfix (i915_address_space needs to be released from a worker) with enabling rcu support. The commit message also seems somewhat confused, because it talks about cleanup of WC pages requiring sleep, while the code and linked bugzilla are about a requirement to take dev->struct_mutex (which yes sleeps, but it's a much more specific problem). Since the final kref_put can be called from pretty much anywhere (including hardirq context through the scheduler's i915_active cleanup), we need a worker here. Hence that part must be kept.

Ideally all these reclaim workers should have some kind of integration with our shrinkers, but for some of these it's rather tricky. Anyway, that's a preexisting condition in the codebase that we won't fix in this patch here.

We also remove the rcu_barrier in ggtt_cleanup_hw added in

  commit 60a4233a
  Author: Chris Wilson <chris@chris-wilson.co.uk>
  Date:   Mon Jul 29 14:24:12 2019 +0100

      drm/i915: Flush the i915_vm_release before ggtt shutdown

Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Cc: Jon Bloomfield <jon.bloomfield@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Cc: Dave Airlie <airlied@redhat.com>
Cc: Jason Ekstrand <jason@jlekstrand.net>
Link: https://patchwork.freedesktop.org/patch/msgid/20210902142057.929669-11-daniel.vetter@ffwll.ch
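The "dma_resv + kref" aside above suggests a small stand-alone pattern. A userspace-flavoured sketch, where the struct contents and a plain counter stand in for the kernel's dma_resv and kref:

```c
#include <stdlib.h>

struct dma_resv { int placeholder; /* stands in for the real lock */ };

/* The proposed stand-alone pattern: a shared reservation object whose
 * lifetime is decoupled from any single buffer object sharing it. */
struct dma_resv_shared {
	struct dma_resv resv;
	unsigned int refcount;
};

static struct dma_resv_shared *dma_resv_shared_get(struct dma_resv_shared *s)
{
	s->refcount++;
	return s;
}

static void dma_resv_shared_put(struct dma_resv_shared *s)
{
	/* the last user frees the lock, so it is never freed while shared */
	if (--s->refcount == 0)
		free(s);
}
```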
-
- 09 Jul 2021, 1 commit
-
-
Committed by Jason Ekstrand

There's a big comment saying how useful it is, but no one is using this for anything anymore. It was added in 2bfa996e ("drm/i915: Store owning file on the i915_address_space") and used for debugfs at the time, as well as for telling the difference between the global GTT and a PPGTT. In f6e8aa38 ("drm/i915: Report the number of closed vma held by each context in debugfs") we removed one use of it by switching to a context walk and comparing with the VM in the context. Finally, VM stats for debugfs were entirely nuked in db80a129 ("drm/i915/gem: Remove per-client stats from debugfs/i915_gem_objects").

v2 (Daniel Vetter):
- Delete a struct drm_i915_file_private pre-declaration
- Add a comment to the commit message about history

Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20210708154835.528166-24-jason@jlekstrand.net
-
- 17 Jun 2021, 1 commit
-
-
Committed by Chris Wilson

In the next patch, we will want to look at the dma addresses of individual page tables, so add a routine to iterate over them.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210617063018.92802-6-thomas.hellstrom@linux.intel.com
-
- 01 Jun 2021, 1 commit
-
-
Committed by Thomas Hellström

We are currently sharing the VM reservation locks across a number of gem objects with page-table memory. Since TTM will individualize the reservation locks when freeing objects, including accessing the shared locks, make sure that the shared locks are not freed until that is done. For PPGTT we add an additional refcount; for GGTT we take additional measures to make sure objects sharing the GGTT reservation lock are freed at GGTT takedown.

Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210601074654.3103-3-thomas.hellstrom@linux.intel.com
-
- 25 May 2021, 1 commit
-
-
Committed by Imre Deak

An object mapped via DPT can have remapped and rotated VMA instances besides the normal VMA instance, similarly to GGTT VMA instances. Adjust the corresponding VMA lookup asserts. While at it, also check if a DPT VM is passed incorrectly to i915_vm_to_ppgtt().

Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210524172703.2113058-2-imre.deak@intel.com
-
- 07 May 2021, 1 commit
-
-
Committed by Ville Syrjälä

Add support for DPT (display page table). DPT is a slightly peculiar two-level page table scheme used for tiled scanout buffers (linear still uses a direct ggtt mapping). The plane surface address will point at a page in the DPT which holds the PTEs for 512 actual pages. Thus we require 1/512 of the ggtt address space compared to a direct ggtt mapping. We create a new DPT address space for each framebuffer and track two vmas (one for the DPT, another for the ggtt).

TODO:
- Is the i915_address_space approach sane?
- Maybe don't map the whole DPT to write the PTEs?
- Deal with remapping/rotation? Need to create a separate DPT for each remapped/rotated plane, I guess. Or else we'd need to make the per-fb DPT large enough to support potentially several remapped/rotated vmas. How large should that be?

Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Bommu Krishnaiah <krishnaiah.bommu@intel.com>
Cc: Wilson Chris P <Chris.P.Wilson@intel.com>
Cc: Tang CQ <cq.tang@intel.com>
Cc: Auld Matthew <matthew.auld@intel.com>
Reviewed-by: Uma Shankar <uma.shankar@intel.com>
Reviewed-by: Wilson Chris P <Chris.P.Wilson@intel.com>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210506161930.309688-5-imre.deak@intel.com
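The 1/512 ratio follows from PTE geometry alone. A quick standalone check, assuming 4KiB pages and 8-byte PTEs as in the GGTT:

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	const uint64_t page_size = 4096, pte_size = 8;
	const uint64_t ptes_per_dpt_page = page_size / pte_size; /* 512 */

	printf("PTEs per DPT page: %llu\n",
	       (unsigned long long)ptes_per_dpt_page);
	/* one 4 KiB DPT page maps 512 * 4 KiB = 2 MiB of scanout buffer */
	printf("bytes mapped per DPT page: %llu\n",
	       (unsigned long long)(ptes_per_dpt_page * page_size));
	return 0;
}
```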
-
- 27 Apr 2021, 2 commits
-
-
Committed by Matthew Auld

It's a requirement that for dgfx we place all the paging structures in device local-memory.

v2: use i915_coherent_map_type()
v3: improve the shared dma-resv object comment

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210427085417.120246-4-matthew.auld@intel.com
-
Committed by Matthew Auld

We need to generalise our accessor for the page directories and tables from using the simple kmap_atomic to support local memory, and this setup must be done on acquisition of the backing storage prior to entering fence execution contexts. Here we replace the kmap with the object mapping code that, for a simple single-page shmemfs object, will return a plain kmap that is then kept for the lifetime of the page directory.

Note that keeping the mapping around is a potential concern here, since while the vma is pinned the mapping remains there for the PDs underneath, or at least until the used_count reaches zero, at which point we can safely destroy the mapping. For 32b this will be even worse, since the address space is more limited, but since this change mostly impacts full ppGTT platforms, the justification is that for modern platforms we shouldn't care too much about 32b.

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210427085417.120246-3-matthew.auld@intel.com
-
- 25 Mar 2021, 4 commits
-
-
Committed by Matthew Auld

For the PTEs we get an LM bit, to signal whether the page resides in SMEM or LMEM. Based on a patch from Michel Thierry.

BSpec: 45015

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20210203171231.551338-3-matthew.auld@intel.com
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Matthew Auld

For the PTEs we get an LM bit, to signal whether the page resides in SMEM or LMEM.

BSpec: 45040

v2: just use gen8_pte_encode for dg1

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
Signed-off-by: Venkata Sandeep Dhanalakota <venkata.s.dhanalakota@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20210203171231.551338-2-matthew.auld@intel.com
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
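A hedged sketch of the LM-bit idea; the exact bit positions here are assumptions for illustration, the authoritative encoding is in BSpec 45040:

```c
#include <stdbool.h>
#include <stdint.h>

#define GEN12_PTE_VALID (1ull << 0)
#define GEN12_PTE_LM    (1ull << 1) /* hypothetical position of the LM bit */

/* Encode a PTE, tagging pages that live in device-local memory. */
static uint64_t pte_encode(uint64_t dma_addr, bool is_lmem)
{
	uint64_t pte = dma_addr | GEN12_PTE_VALID;

	if (is_lmem)
		pte |= GEN12_PTE_LM;
	return pte;
}
```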
-
Committed by Maarten Lankhorst

We may create page table objects on the fly, but we may need to wait with the ww lock held. Instead of waiting on a freed obj lock, ensure we have the same lock for each object to keep -EDEADLK working. This ensures that i915_vma_pin_ww can lock the page tables when required.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20210323155059.628690-41-maarten.lankhorst@linux.intel.com
-
Committed by Maarten Lankhorst

This should be done as part of the ww loop, in order to remove an i915_vma_pin that needs ww held. Now only i915_ggtt_pin() callers remain.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20210323155059.628690-25-maarten.lankhorst@linux.intel.com
-
- 21 Dec 2020, 1 commit
-
-
Committed by Chris Wilson

Primarily used by selftests, but also by runtime debugging of engine w/a, is a routine to create a temporarily bound buffer for readback. Amalgamate the duplicated routines into one.

Suggested-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20201219020343.22681-2-chris@chris-wilson.co.uk
-
- 06 Oct 2020, 1 commit
-
-
Committed by Tvrtko Ursulin

When walking DMA-mapped scatterlists, sg_dma_len has to be used, since it can be different (coalesced) from the backing store entry. This also means we have to end the walk when encountering a zero-length DMA entry and cannot rely on the normal sg list end marker. Both issues were there in theory for some time, but were hidden by the fact that the Intel IOMMU driver was never coalescing entries. As there are ongoing efforts to change this, we need to start handling it.

v2:
* Use unsigned int for locally storing sg_dma_len. (Logan)

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
References: 85d1225e ("drm/i915: Introduce & use new lightweight SGL iterators")
References: b31144c0 ("drm/i915: Micro-optimise gen6_ppgtt_insert_entries()")
Reported-by: Tom Murphy <murphyt7@tcd.ie>
Suggested-by: Tom Murphy <murphyt7@tcd.ie> # __sgt_iter
Suggested-by: Logan Gunthorpe <logang@deltatee.com> # __sgt_iter
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20201006092508.1064287-1-tvrtko.ursulin@linux.intel.com
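The walking rule can be modelled with a tiny stand-in structure; this is illustrative only, the driver's real iterator is __sgt_iter over struct scatterlist:

```c
#include <stddef.h>
#include <stdint.h>

/* Stand-in for the DMA side of a scatterlist entry. */
struct sg_ent {
	uint64_t dma_address;
	unsigned int dma_len; /* sg_dma_len(); may be coalesced by the IOMMU */
};

static uint64_t sum_dma_mapped(const struct sg_ent *sg, size_t nents)
{
	uint64_t total = 0;

	for (size_t i = 0; i < nents; i++) {
		unsigned int len = sg[i].dma_len; /* not the CPU-side length */

		if (!len) /* a coalesced list ends at the first zero entry */
			break;
		total += len;
	}
	return total;
}
```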
-
- 07 Sep 2020, 3 commits
-
-
Committed by Chris Wilson

kmalloc uses power-of-two slab buckets for small allocations (up to a few pages). Since i915_page_directory is a page of pointers, plus a couple more, this is rounded up to 8K, and we waste nearly 50% of that allocation. Long term this leads to poor memory utilisation, bloating the kernel footprint, but the problem is exacerbated by our conservative preallocation scheme for binding VMA. As we are required to allocate all levels for each vma just in case we need to insert them upon binding, this leads to a large multiplication factor for a single-page vma. By halving the allocation we need for the page directory structure, we halve the impact of that factor, bringing workloads that once fitted into memory hopefully back to fitting into memory.

We maintain the split between i915_page_directory and i915_page_table as we only need half the allocation for the lowest, most populous, level.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200729164219.5737-3-chris@chris-wilson.co.uk
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
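The "nearly 50%" waste is simple bucket arithmetic; a sketch, where the 64-byte figure for the "couple more" fields is an assumption:

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	const uint64_t ptrs = 512 * 8; /* a page of child pointers: 4096 B */
	const uint64_t extra = 64;     /* "plus a couple more" (assumed) */
	uint64_t bucket = 1;

	while (bucket < ptrs + extra) /* kmalloc's power-of-two buckets */
		bucket <<= 1;

	printf("need %llu B -> bucket %llu B (%.0f%% wasted)\n",
	       (unsigned long long)(ptrs + extra),
	       (unsigned long long)bucket,
	       100.0 * (double)(bucket - ptrs - extra) / (double)bucket);
	return 0;
}
```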
-
Committed by Chris Wilson

The GEM object is grossly overweight for the practicality of tracking large numbers of individual pages, yet it is currently our only abstraction for tracking DMA allocations. Since those allocations need to be reserved upfront before an operation, and since we need to break away from simple system memory, we need to ditch using plain struct page wrappers.

In the process, we drop the WC mapping as we ended up clflushing everything anyway due to various issues across a wider range of platforms. Though in a future step, we need to drop the kmap_atomic approach, which suggests we need to pre-map all the pages and keep them mapped.

v2: Verify our large scratch page is suitably DMA aligned; and manually clear the scratch since we are allocating plain struct pages full of prior content.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200729164219.5737-2-chris@chris-wilson.co.uk
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
-
Committed by Chris Wilson

We need the DMA allocations used for page directories to be performed up front so that we can include those allocations in our memory reservation pass. The downside is that we have to assume the worst case, even before we know the final layout, and always allocate enough page directories for this object, even when there will be overlap. This unfortunately can be quite expensive, especially as we have to clear/reset the page directories and DMA pages, but it should only be required during early phases of a workload when new objects are being discovered, or after memory/eviction pressure when we need to rebind. Once we reach steady state, the objects should not be moved and we no longer need to preallocate the page tables.

It should be noted that the lifetime of the page directories' DMA is more or less decoupled from individual fences, as they will be shared across objects across timelines.

v2: Only allocate enough PD space for the PTEs we may use; we do not need to allocate PD that will be left as scratch.
v3: Store the shift onto the first PD level to encapsulate the different PTE counts for gen6/gen8.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200729164219.5737-1-chris@chris-wilson.co.uk
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
-
- 03 Jul 2020, 1 commit
-
-
Committed by Chris Wilson

Reuse the ppgtt_bind_vma() for aliasing_ppgtt_bind_vma() so we can reduce some code near-duplication. The catch is that we need to then pass along the i915_address_space and not rely on vma->vm, as they differ with the aliasing-ppgtt.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Andi Shyti <andi.shyti@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200703102519.26539-1-chris@chris-wilson.co.uk
-
- 17 Mar 2020, 2 commits
-
-
Committed by Chris Wilson

Since the number of fence regs can vary dramatically between platforms, allocate the array on demand so we don't waste as much space.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200316113846.4974-4-chris@chris-wilson.co.uk
-
Committed by Chris Wilson

Since the fence registers control HW detiling through the GGTT aperture, make them a part of the intel_ggtt under gt/.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200316113846.4974-1-chris@chris-wilson.co.uk
-
- 28 Feb 2020, 2 commits
-
-
Committed by Chris Wilson

Pull the final atomic_dec of vm->open (marking the vm as closed) underneath the same vm->mutex as used to close it. This is required to correctly serialise with attempting to reuse the vma as the vm is closed by a second thread.

References: 00de702c ("drm/i915: Check that the vma hasn't been closed before we insert it")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200227085723.1961649-10-chris@chris-wilson.co.uk
-
Committed by Daniele Ceraolo Spurio

On TGL, bits 2-4 in the GGTT PTE are not ignored anymore and are instead used for some extra VT-d capabilities. We don't (yet?) have support for those capabilities, but, given that we share the pte_encode function between GGTT and PPGTT, we still set those bits to the PPGTT PPAT values. The DMA engine gets very confused when those bits are set while the iommu is enabled, leading to errors. E.g. when loading the GuC we get:

[ 9.796218] DMAR: DRHD: handling fault status reg 2
[ 9.796235] DMAR: [DMA Write] Request device [00:02.0] PASID ffffffff fault addr 0 [fault reason 02] Present bit in context entry is clear
[ 9.899215] [drm:intel_guc_fw_upload [i915]] *ERROR* GuC firmware signature verification failed

To fix this, just have a dedicated gen8_pte_encode function per type of gtt. Also, explicitly set vm->pte_encode for gen8_ppgtt, even if we don't use it, to make sure we don't accidentally assign it to the GGTT one, like we do for gen6_ppgtt, in case we need it in the future.

Reported-by: "Sodhi, Vunny" <vunny.sodhi@intel.com>
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20200226185657.26445-1-daniele.ceraolospurio@intel.com
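A sketch of the dedicated-encoder fix; all bit values here are illustrative, the point is that the GGTT flavour must leave bits 2-4 clear:

```c
#include <stdint.h>

#define PTE_PRESENT    (1ull << 0)
#define PPAT_ATTR_BITS (0x7ull << 2) /* hypothetical PPAT placement: bits 2-4 */

/* PPGTT entries may carry PPAT cache attributes in bits 2-4. */
static uint64_t gen8_ppgtt_pte_encode(uint64_t addr, uint64_t ppat)
{
	return addr | PTE_PRESENT | (ppat & PPAT_ATTR_BITS);
}

/* GGTT entries must not: on TGL the iommu repurposes bits 2-4 for VT-d,
 * so the dedicated GGTT encoder never sets them. */
static uint64_t gen8_ggtt_pte_encode(uint64_t addr)
{
	return (addr | PTE_PRESENT) & ~PPAT_ATTR_BITS;
}
```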
-
- 31 Jan 2020, 1 commit
-
-
Committed by Chris Wilson

The i915_ggtt now sits beneath gt/, outside of the auspices of gem/, and should be given a fresh name to reflect that. We also want to give it a name that reflects its role in system suspend/resume, with the intention of pulling together all the GGTT operations (e.g. restoring the fence registers once they are pulled under gt/intel_ggtt_detiler.c).

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200130181710.2030251-2-chris@chris-wilson.co.uk
-
- 10 Jan 2020, 1 commit
-
-
Committed by Chris Wilson

In the near future, we will want to start a GPU error capture from a new context, from inside the softirq region of a forced preemption. To do so requires us to break up the monolithic error capture to provide new entry points with finer control; in particular, focusing on one engine/gt, and being able to compose an error state from little pieces of HW capture.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Andi Shyti <andi.shyti@intel.com>
Acked-by: Andi Shyti <andi.shyti@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200110123059.1348712-1-chris@chris-wilson.co.uk
-
- 08 Jan 2020, 1 commit
-
-
Committed by Matthew Auld

Attempt to split i915_gem_gtt.[ch] into more manageable chunks.

Suggested-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20200107134009.3255354-1-chris@chris-wilson.co.uk
-
- 30 Nov 2019, 1 commit
-
-
Committed by Chris Wilson

After much hair pulling, resort to preallocating the ppGTT entries on init to circumvent the apparent lack of PD invalidate following the write to PP_DCLV upon switching mm between contexts (and here the same context after binding new objects). However, the details of that PP_DCLV invalidate are still unknown, and it appears we need to reload the mm twice to cover over a timing issue. Worrying.

Fixes: 3dc007fe ("drm/i915/gtt: Downgrade gen7 (ivb, byt, hsw) back to aliasing-ppgtt")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191129201328.1398583-1-chris@chris-wilson.co.uk
-
- 29 Oct 2019, 1 commit
-
-
Committed by Daniele Ceraolo Spurio

The following patches in the series will use it to avoid certain operations when the mappable aperture is not available in HW.

Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20191029095856.25431-1-matthew.auld@intel.com
-
- 17 Oct 2019, 1 commit
-
-
Committed by Chris Wilson

The HW performs swizzling as part of its fence tiling inside the Global GTT. We already do the probing of the HW settings from the GGTT setup; complete the picture by storing the information as part of the GGTT. The primary benefit is consistency: our probe routines do not break the i915_ggtt encapsulation.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191016143234.4075-2-chris@chris-wilson.co.uk
-
- 04 Oct 2019, 3 commits
-
-
Committed by Chris Wilson

Replace the struct_mutex requirement for pinning the i915_vma with the local vm->mutex instead. Note that the vm->mutex is tainted by the shrinker (we require unbinding from inside fs-reclaim) and so we cannot allocate while holding that mutex. Instead we have to preallocate workers to do the allocation and apply the PTE updates after we have reserved their slot in the drm_mm (using fences to order the PTE writes with the GPU work and with later unbind).

In adding the asynchronous vma binding, one subtle requirement is to avoid coupling the binding fence into the backing object->resv. That is, the asynchronous binding only applies to the vma timeline itself and not to the pages, as that is a more global timeline (the binding of one vma does not need to be ordered with another vma, nor does the implicit GEM fencing depend on a vma, only on writes to the backing store). Keeping the vma binding distinct from the backing store timelines is verified by a number of async gem_exec_fence and gem_exec_schedule tests. The way we do this is quite simple: we keep the fence for the vma binding separate and only wait on it as required, and never add it to the obj->resv itself.

Another consequence of reducing the locking around the vma is that the destruction of the vma is no longer globally serialised by struct_mutex. A natural solution would be to add a kref to i915_vma, but that requires decoupling the reference cycles, possibly by introducing a new i915_mm_pages object that is owned by both obj->mm and vma->pages. However, we have not taken that route due to the overshadowing lmem/ttm discussions, and instead play a series of complicated games with trylocks to (hopefully) ensure that only one destruction path is called!

v2: Add some commentary, and some helpers to reduce patch churn.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-4-chris@chris-wilson.co.uk
-
Committed by Chris Wilson

Since we cannot allocate underneath the vm->mutex (it is used in the direct-reclaim paths), we need to shift the allocations off into a mutexless worker with fence recursion prevention. To know when we need this protection, we mark up the address spaces that do allocate before insertion. In the future, we may wish to extend the async bind scheme to more than just allocations.

v2: s/vm->bind_alloc/vm->bind_async_flags/

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-3-chris@chris-wilson.co.uk
-
Committed by Chris Wilson

The premise here is simply to avoid having to acquire the vm->mutex inside vma create/destroy to update the vm->unbound_lists, to avoid some nasty lock recursions later.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-2-chris@chris-wilson.co.uk
-