- 11 January 2022, 3 commits
Submitted by Thomas Hellström

There is always a struct vma_resource guaranteed to be alive when we access a corresponding struct vma_snapshot. So ditch the latter and, instead of allocating vma_snapshots, reference the already existing vma_resource. This requires a couple of extra members in struct vma_resource, but that's a small price to pay for the simplification.

v2:
- Fix a missing include and declaration (kernel test robot <lkp@intel.com>)

Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20220110172219.107131-7-thomas.hellstrom@linux.intel.com
Submitted by Thomas Hellström

Implement async (non-blocking) unbinding by not syncing the vma before calling unbind on the vma_resource. Add the resulting unbind fence to the object's dma_resv, from where it is picked up by the TTM migration code. Ideally these unbind fences should be coalesced with the migration blit fence to avoid stalling the migration blit waiting for unbind, as they can certainly go on in parallel; but since we don't yet have a reasonable data structure to use to coalesce fences and attach the resulting fence to a timeline, we defer that for now.

Note that with async unbinding, even though the unbind waits for the preceding bind to complete before unbinding, the vma itself might have been destroyed in the process, clearing the vma pages. Therefore we can only allow async unbinding if we have a refcounted sg-list, and keep a refcount on that for the vma resource pages to stay intact until binding occurs. If this condition is not met, a request for an async unbind is diverted to a sync unbind.

v2:
- Use a separate kmem_cache for vma resources for now to isolate their memory allocation and aid debugging.
- Move the check for vm closed to the actual unbinding thread. Regardless of whether the vm is closed, we need the unbind fence to properly wait for capture.
- Clear vma_res::vm on unbind and update its documentation.
v4:
- Take cache coloring into account when searching for vma resources pending unbind. (Matthew Auld)
v5:
- Fix timeout and error check in i915_vma_resource_bind_dep_await().
- Avoid taking a reference on the object for async binding if async unbind capable.
- Fix braces around a single-line if statement.
v6:
- Fix up the cache coloring adjustment. (Kernel test robot <lkp@intel.com>)
- Don't allow async unbinding if the vma_res pages are not the same as the object pages. (Matthew Auld)
v7:
- s/unsigned long/u64/ in a number of places (Matthew Auld)

Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20220110172219.107131-5-thomas.hellstrom@linux.intel.com
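To illustrate the sync fallback described above, a minimal sketch in kernel C; vma_res_has_refcounted_pages(), do_async_unbind() and do_sync_unbind() are hypothetical stand-ins rather than the driver's real helpers:

```c
/* Sketch only: the helpers below are hypothetical stand-ins. */
static struct dma_fence *
vma_resource_unbind_sketch(struct i915_vma_resource *vma_res, bool want_async)
{
	/*
	 * Async unbinding is only safe when the sg-list is refcounted,
	 * so the pages stay intact until the deferred unbind has run;
	 * otherwise divert to a synchronous unbind.
	 */
	if (want_async && !vma_res_has_refcounted_pages(vma_res))
		want_async = false;

	return want_async ? do_async_unbind(vma_res) :
			    do_sync_unbind(vma_res);
}
```

The fence returned here is what gets added to the object's dma_resv, where the TTM migration code then waits on it.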
Submitted by Thomas Hellström

When introducing asynchronous unbinding, the vma itself may no longer be alive when the actual binding or unbinding takes place. Update the gtt i915_vma_ops accordingly to take a struct i915_vma_resource instead of a struct i915_vma for the bind_vma() and unbind_vma() ops. Similarly change the insert_entries() op for struct i915_address_space.

Replace a couple of i915_vma_snapshot members with their newly introduced i915_vma_resource counterparts, since they have the same lifetime. Also make sure to avoid changing the struct i915_vma_flags (in particular the bind flags) asynchronously. That should now only be done synchronously under the vm mutex.

v2:
- Update the vma_res::bound_flags when binding to the aliased ggtt
v6:
- Remove I915_VMA_ALLOC_BIT (Matthew Auld)
- Change some members of struct i915_vma_resource from unsigned long to u64 (Matthew Auld)
v7:
- Fix vma resource size parameters to be u64 rather than unsigned long (Matthew Auld)

Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20220110172219.107131-3-thomas.hellstrom@linux.intel.com
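The shape of the reworked ops is roughly the following (a sketch based on the commit description, not the exact post-patch declaration; the parameter list is an assumption):

```c
/* Rough shape of the reworked ops: they now take the resource, which
 * can outlive the vma, instead of the vma itself. */
struct i915_vma_ops {
	void (*bind_vma)(struct i915_address_space *vm,
			 struct i915_vm_pt_stash *stash,
			 struct i915_vma_resource *vma_res,
			 enum i915_cache_level cache_level,
			 u32 flags);
	void (*unbind_vma)(struct i915_address_space *vm,
			   struct i915_vma_resource *vma_res);
};
```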
- 6 January 2022, 2 commits
Submitted by Andi Shyti

The reference to the GGTT from the private data is not used anymore. Remove it. The ggtt in the root gt will now be dynamically allocated, with the deallocation handled by the drmm_* managed allocation.

Suggested-by: Matt Roper <matthew.d.roper@intel.com>
Signed-off-by: Andi Shyti <andi.shyti@linux.intel.com>
Cc: Michał Winiarski <michal.winiarski@intel.com>
Reviewed-by: Sujaritha Sundaresan <sujaritha.sundaresan@intel.com>
Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211219212500.61432-7-andi.shyti@linux.intel.com
Submitted by Michał Winiarski

GGTT is currently available both through i915->ggtt and gt->ggtt, and we eventually want to get rid of the i915->ggtt one. Use to_gt() for all i915->ggtt accesses to help with the future refactoring.

During the probe of i915, the early initialization of the gt (intel_gt_init_hw_early()) is moved prior to any access to the ggtt. This is because that is the moment we assign the ggtt to the gt, and we want to do so before using it.

Signed-off-by: Michał Winiarski <michal.winiarski@intel.com>
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
Signed-off-by: Andi Shyti <andi.shyti@linux.intel.com>
Reviewed-by: Sujaritha Sundaresan <sujaritha.sundaresan@intel.com>
Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211221195946.3180-1-andi.shyti@linux.intel.com
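For reference, the accessor pattern reads roughly as follows (a sketch; the field the helper returns is an assumption for illustration):

```c
/* Sketch: central accessor for the (root) GT; the embedded gt field
 * name is an assumption. */
static inline struct intel_gt *to_gt(struct drm_i915_private *i915)
{
	return &i915->gt;
}

/* Call sites then change from i915->ggtt to: */
struct i915_ggtt *ggtt = to_gt(i915)->ggtt;
```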
- 24 December 2021, 3 commits
Submitted by John Harrison

A fault injection probe test hit a BUG_ON in a GuC error path. It showed that the GuC code could potentially attempt to do many things when the device is actually wedged. So, add a check to prevent that.

v2:
- Use intel_gt_is_wedged instead of testing bits directly in the GuC submission code (review feedback from Tvrtko).

Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211221210212.1438670-1-John.C.Harrison@Intel.com
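The guard is conceptually a one-liner at the top of the affected GuC paths; intel_gt_is_wedged() and guc_to_gt() are existing i915 helpers, while the surrounding function here is illustrative:

```c
/* Sketch: bail out of GuC work early once the device is wedged. */
static void guc_do_work(struct intel_guc *guc)
{
	/* Nothing to do if the device has already been wedged. */
	if (intel_gt_is_wedged(guc_to_gt(guc)))
		return;

	/* ... normal GuC processing ... */
}
```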
Submitted by Matthew Brost

A weak implementation of parallel submission (multi-bb execbuf IOCTL) for execlists. Doing as little as possible to support this interface for execlists: basically just passing submit fences between each request generated, and virtual engines are not allowed. This is on par with what is there for the existing (hopefully soon to be deprecated) bonding interface. We perma-pin these execlists contexts to align with the GuC implementation.

v2: (John Harrison)
- Drop siblings array as num_siblings must be 1
v3: (John Harrison)
- Drop single submission
v4: (John Harrison)
- Actually drop single submission
- Use IS_ERR check on return value from intel_context_create
- Set last request to NULL on unpin

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: John Harrison <John.C.Harrison@Intel.com>
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211222223532.28698-1-matthew.brost@intel.com
Submitted by John Harrison

Don't silently drop reset notifications from the GuC. It might not be safe to do an error capture, but we still want some kind of report that the reset happened.

Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211223013128.1739792-1-John.C.Harrison@Intel.com
- 22 December 2021, 2 commits
Submitted by Thomas Hellström

Since the gt migration code was using only a single fence for dependencies, these were collected in a dma_fence_array. However, it turns out that it's illegal to use some dma_fences in a dma_fence_array (in particular other dma_fence_arrays and dma_fence_chains), and this causes trouble for us moving forward.

Have the gt migration code instead take a const struct i915_deps for dependencies. This means we can skip the dma_fence_array creation and pass the struct i915_deps instead, circumventing the problem.

v2:
- Make the prev_deps() function static. (kernel test robot <lkp@intel.com>)
- Update the struct i915_deps kerneldoc.
v4:
- Rebase.

Fixes: 5652df82 ("drm/i915/ttm: Update i915_gem_obj_copy_ttm() to be asynchronous")
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211221200050.436316-2-thomas.hellstrom@linux.intel.com
Submitted by Vinay Belgaumkar

By default, the GT (and GuC) run at RPn. Requesting RP0 before firmware load can speed up DMA and HuC auth as well. In addition to writing to 0xA008, we also need to enable swreq in 0xA024 so that the Punit will pay heed to our request. SLPC will restore the frequency back to RPn after initialization, but we need to do that manually for the non-SLPC path. We don't need a manual override in the SLPC disabled case; just use the intel_rps_set function to ensure consistent RPS state.

Signed-off-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
Reviewed-by: Sujaritha Sundaresan <sujaritha.sundaresan@intel.com>
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211216233022.21351-1-vinay.belgaumkar@intel.com
- 21 December 2021, 4 commits
Submitted by Maarten Lankhorst

This is required for i915_gem_evict_vm, to be able to evict the entire VM, including objects that are already locked to the current ww ctx.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211216142749.1966107-12-maarten.lankhorst@linux.intel.com
Submitted by John Harrison

If GuC encounters an error during engine reset, the i915 driver promotes to a full GT reset. This includes an info message about why the reset is happening. However, that is not treated as a failure by any of the CI systems, because resets are an expected occurrence during testing. This kind of failure is a major problem and should never happen. So, complain more loudly and make sure CI notices.

Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211211065859.2248188-4-John.C.Harrison@Intel.com
Submitted by John Harrison

Lots of testing is done with the DEBUG_GEM config option enabled but not the DEBUG_GUC option. That means we only get teeny-tiny GuC logs, which are not hugely useful. Enabling full DEBUG_GUC also spews lots of other detailed output that is not generally desired. However, bigger GuC logs are extremely useful for almost any regression debug. So enable bigger logs for DEBUG_GEM builds as well.

Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211211065859.2248188-3-John.C.Harrison@Intel.com
Submitted by John Harrison

Add support for telling the debugfs interface the size of the GuC log dump in advance. Without that, the underlying framework keeps calling the 'show' function with larger and larger buffer allocations until it fits. That means reading the log from graphics memory many times: 16 times with the full 18MB log size.

v2:
- Don't return error codes from the size query. Report overflow in the error dump as well (review feedback from Daniele).

Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211211065859.2248188-2-John.C.Harrison@Intel.com
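The seq_file framework already supports pre-sizing through single_open_size(); a minimal sketch of the idea, where guc_log_dump_show() and guc_log_dump_size() are illustrative names rather than the driver's actual functions:

```c
#include <linux/seq_file.h>

/* Sketch: pre-size the seq_file buffer so ->show() runs only once,
 * instead of being retried with ever larger allocations. */
static int guc_log_dump_open(struct inode *inode, struct file *file)
{
	struct intel_guc_log *log = inode->i_private;

	return single_open_size(file, guc_log_dump_show, log,
				guc_log_dump_size(log));
}
```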
- 20 December 2021, 1 commit
Submitted by Maarten Lankhorst

Big delta, but it boils down to moving set_pages to i915_vma.c and removing the special handling; all callers use the defaults anyway. We only remap in ggtt, so the default case will fall through. Because we still don't require locking in i915_vma_unpin(), handle this by using xchg in get_pages(), as it's locked with obj->mutex, and cmpxchg in unpin, which only fails if we race against a new pin.

Changes since v1:
- aliasing gtt sets ZERO_SIZE_PTR, not -ENODEV, remove special case from __i915_vma_get_pages(). (Matt)
Changes since v2:
- Free correct old pages in __i915_vma_get_pages(). (Matt) Remove race of clearing vma->pages accidentally from put; free it but leave it set, as only get has the lock.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211216142749.1966107-4-maarten.lankhorst@linux.intel.com
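The locked-get / lockless-unpin split described above boils down to roughly this pattern (a simplified sketch; alloc_vma_pages() is a hypothetical helper and error handling is mostly elided):

```c
/* get_pages() runs under obj->mutex, so a plain xchg suffices to
 * publish the new pages. */
static int vma_get_pages_locked(struct i915_vma *vma)
{
	struct sg_table *pages = alloc_vma_pages(vma); /* hypothetical */

	if (IS_ERR(pages))
		return PTR_ERR(pages);

	xchg(&vma->pages, pages);
	return 0;
}

/* unpin runs without the lock; cmpxchg only clears the pointer if no
 * new pin (and hence no new set of pages) raced in meanwhile. */
static void vma_unpin_pages(struct i915_vma *vma, struct sg_table *pages)
{
	cmpxchg(&vma->pages, pages, NULL);
}
```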
- 18 December 2021, 3 commits
Submitted by Michał Winiarski

Use the to_gt() helper consistently throughout the codebase. Pure mechanical s/i915->gt/to_gt(i915)/. No functional changes.

Signed-off-by: Michał Winiarski <michal.winiarski@intel.com>
Signed-off-by: Andi Shyti <andi.shyti@linux.intel.com>
Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211214193346.21231-5-andi.shyti@linux.intel.com
Submitted by Michał Winiarski

To allow further refactoring, abstract away the fact that the GT is stored inside the i915 private data. No functional changes.

Signed-off-by: Michał Winiarski <michal.winiarski@intel.com>
Signed-off-by: Andi Shyti <andi.shyti@linux.intel.com>
Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211214193346.21231-3-andi.shyti@linux.intel.com
Submitted by Michał Winiarski

We now support a per-gt uncore, yet we're not able to infer which GT we're operating upon. Let's store a backpointer for now.

At this point the early initialization of the gt needs to be split into two parts, where the first part assigns the i915 private data pointer and the uncore to the gt. A temporary function has been made, and the two parts are __intel_gt_init_early() and intel_gt_init_early(). This split will be fixed in the future with the multitile patch.

Signed-off-by: Michał Winiarski <michal.winiarski@intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Andi Shyti <andi.shyti@linux.intel.com>
Signed-off-by: Andi Shyti <andi.shyti@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211214193346.21231-2-andi.shyti@linux.intel.com
- 16 December 2021, 7 commits
Submitted by Matthew Brost

Testing the stealing of guc ids is hard from user space as we have 64k guc_ids. Add a selftest which artificially reduces the number of guc ids, so that the test runs in a timely manner, and forces a steal. The test creates a spinner, which is used to block all subsequent submissions until it completes. Next, a loop creates a context and a NOP request each iteration until the guc_ids are exhausted (request creation returns -EAGAIN). The spinner is ended, unblocking all requests created in the loop. At this point all guc_ids are exhausted but are available to steal. Try to create another request, which should successfully steal a guc_id. Wait on the last request to complete, idle the GPU, verify a guc_id was stolen via a counter, and exit the test.

v2: (John Harrison)
- s/stole/stolen
- Fix some wording in test description
- Rework indexing into context array
- Add test description to commit message
- Fix typo in commit message (Checkpatch)
- s/guc/(guc) in NUMBER_MULTI_LRC_GUC_ID
v3: (John Harrison)
- Set array value to NULL after extracting error
- Fix a few typos in comments / error messages
- Delete redundant comment in commit message

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: John Harrison <John.C.Harrison@Intel.com>
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211214170500.28569-8-matthew.brost@intel.com
Submitted by Matthew Brost

Let's be paranoid and kick the G2H tasklet, which dequeues messages, if G2H credits are exhausted.

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: John Harrison <John.C.Harrison@Intel.com>
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211214170500.28569-7-matthew.brost@intel.com
Submitted by Matthew Brost

Print the CT state (H2G + G2H head / tail pointers, credits) on CT deadlock.

v2: (John Harrison)
- Add units to debug messages

Reviewed-by: John Harrison <John.C.Harrison@Intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211214170500.28569-6-matthew.brost@intel.com
Submitted by John Harrison

While attempting to debug a CT deadlock issue in various CI failures (most easily reproduced with gem_ctx_create/basic-files), I was seeing CPU deadlock errors being reported. This was because the context destroy loop was blocking waiting on H2G space from inside an IRQ spinlock. There was no deadlock as such; it's just that the H2G queue was full of context destroy commands and GuC was taking a long time to process them. However, the kernel was seeing the large amount of time spent inside the IRQ lock as a dead CPU. Various Bad Things(tm) would then happen (heartbeat failures, CT deadlock errors, outstanding H2G WARNs, etc.).

Re-working the loop to only acquire the spinlock around the list management (which is all it is meant to protect), rather than around the entire destroy operation, seems to fix all the above issues.

v2: (John Harrison)
- Fix typo in comment message

Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211214170500.28569-5-matthew.brost@intel.com
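The reworked loop follows roughly this pattern (a sketch; the field names follow the GuC submission code, but the function and the deregister helper are simplified stand-ins):

```c
/* Sketch: the spinlock now guards only the list manipulation; the
 * potentially blocking H2G send runs with the lock dropped. */
static void destroy_worker(struct intel_guc *guc)
{
	unsigned long flags;

	for (;;) {
		struct intel_context *ce;

		spin_lock_irqsave(&guc->submission_state.lock, flags);
		ce = list_first_entry_or_null(&guc->submission_state.destroyed_contexts,
					      struct intel_context,
					      destroyed_link);
		if (ce)
			list_del_init(&ce->destroyed_link);
		spin_unlock_irqrestore(&guc->submission_state.lock, flags);

		if (!ce)
			break;

		/* May block waiting for H2G space; must not hold the lock. */
		deregister_destroyed_context(ce);
	}
}
```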
Submitted by Matthew Brost

A full GT reset can race with the last context put, resulting in the context ref count being zero but the destroyed bit not yet being set. Remove the GEM_BUG_ON in scrub_guc_desc_for_outstanding_g2h that asserts the destroyed bit must be set if the ref count is zero.

Reviewed-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211214170500.28569-4-matthew.brost@intel.com
Submitted by Matthew Brost

Previously the whole guc_id structure (list, spin lock) was assigned, which is incorrect; only assign the guc_id.id.

Fixes: 0f797650 ("drm/i915/guc: Rework and simplify locking")
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: John Harrison <John.C.Harrison@Intel.com>
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211214170500.28569-3-matthew.brost@intel.com
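The essence of the fix, with ce/cn as the two contexts involved (as in the following commit):

```c
/* Before (buggy): copies the whole bookkeeping structure, including
 * the embedded list head and spinlock, from the donor context cn. */
ce->guc_id = cn->guc_id;

/* After: take over only the numeric id. */
ce->guc_id.id = cn->guc_id.id;
```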
Submitted by Matthew Brost

s/ce/cn/ when grabbing guc_state.lock before calling clr_context_registered.

Fixes: 0f797650 ("drm/i915/guc: Rework and simplify locking")
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211214170500.28569-2-matthew.brost@intel.com
- 14 December 2021, 3 commits
Submitted by Daniele Ceraolo Spurio

Some of the newer HW will use bigger RSA keys to authenticate the GuC binary. On those platforms the HW will read the key from memory instead of the RSA registers, so we need to copy it into a dedicated vma, like we do for the HuC. The address of the key is provided to the HW via the first RSA register.

v2:
- clarify that the RSA behavior is hardcoded in the bootrom (Matt)

Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: John Harrison <John.C.Harrison@Intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211211000756.1698923-4-daniele.ceraolospurio@intel.com
Submitted by Michal Wajdeczko

Future GuC/HuC firmwares might be signed with different key sizes. Don't assume that the key must always be 2048 bits long.

Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211211000756.1698923-3-daniele.ceraolospurio@intel.com
Submitted by Daniele Ceraolo Spurio

The FAILURE state of uc_fw currently implies that the fw is loadable (i.e. init completed), so we can't use it for init failures and instead need a dedicated error code. Note that this currently does not cause any issues, because if we fail to init any of the firmwares we abort the load; but better to be accurate anyway in case things change in the future.

Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211211000756.1698923-2-daniele.ceraolospurio@intel.com
- 13 December 2021, 1 commit
Submitted by Sebastian Andrzej Siewior

This is a revert of commits

d6773926 ("drm/i915/gt: Mark up the nested engine-pm timeline lock as irqsafe")
6c69a454 ("drm/i915/gt: Mark context->active_count as protected by timeline->mutex")
6dcb85a0 ("drm/i915: Hold irq-off for the entire fake lock period")

The existing code leads to a different behaviour depending on whether lockdep is enabled or not. Any following lock that is acquired without disabling interrupts (but needs to) will not be noticed by lockdep. This is not just a lockdep annotation: the timeline mutex is an actual mutex_t that is properly used as a lock elsewhere, but in the case of __timeline_mark_lock() lockdep is only told that it is acquired while no lock is actually taken. It appears that its purpose is just to satisfy the lockdep_assert_held() check in intel_context_mark_active(). The other problem with disabling interrupts is that on PREEMPT_RT interrupts are also disabled, which leads to problems, for instance, later during memory allocation.

Add a CONTEXT_IS_PARKING bit to intel_engine_cs and set_bit/clear_bit it instead of mutex_acquire/mutex_release. Use test_bit in the two identified spots which relied on the lockdep annotation.

Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/YbO8Ie1Nj7XcQPNQ@linutronix.de
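The replacement pattern is roughly as follows (a sketch reconstructed from the commit description; the parking function is simplified and the body of the assert is an assumption):

```c
/* Parking path: mark the context instead of faking a lock acquisition
 * (simplified; engine_park() stands in for the real parking code). */
static void engine_park(struct intel_context *ce)
{
	set_bit(CONTEXT_IS_PARKING, &ce->flags);
	/* ... retire the final barrier request without holding
	 * timeline->mutex or disabling interrupts ... */
	clear_bit(CONTEXT_IS_PARKING, &ce->flags);
}

/* The two spots that relied on the fake annotation now accept either
 * the real mutex or the parking flag: */
static inline void intel_context_mark_active(struct intel_context *ce)
{
	lockdep_assert(lockdep_is_held(&ce->timeline->mutex) ||
		       test_bit(CONTEXT_IS_PARKING, &ce->flags));
	++ce->active_count;
}
```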
- 11 December 2021, 2 commits
Submitted by John Harrison

If the GuC has failed to load for any reason and then the user pokes the debugfs GuC log interface, a BUG and/or null pointer deref can occur. Don't let that happen.

Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211210044022.1842938-5-John.C.Harrison@Intel.com
Submitted by John Harrison

It is possible for platforms to require GuC but not HuC firmware. Also, the firmware versions for GuC and HuC advance independently. So split the macros up to allow the lists to be maintained separately.

Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Reviewed-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211210044022.1842938-2-John.C.Harrison@Intel.com
- 10 December 2021, 5 commits
Submitted by Umesh Nerlige Ramappa

GuC PMU busyness gets the gt wakeref if awake, but fails to release the wakeref if a reset is in progress. Release the wakeref if it was acquired successfully.

v2:
- Simplify the fix (Ashutosh)

Fixes: 2a67b18e ("drm/i915/pmu: Fix synchronization of PMU callback with reset")
Signed-off-by: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com>
Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com>
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211207020239.43402-1-umesh.nerlige.ramappa@intel.com
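The fixed flow is roughly the following (a sketch; intel_gt_pm_get_if_awake()/intel_gt_pm_put_async() are existing helpers, while reset_in_progress() is a hypothetical stand-in for the driver's actual check):

```c
/* Sketch: once the wakeref is acquired, every exit path must drop it. */
static void guc_sample_busyness(struct intel_gt *gt)
{
	if (!intel_gt_pm_get_if_awake(gt))
		return;

	if (reset_in_progress(gt))	/* hypothetical predicate */
		goto out;		/* previously a bare return, leaking the ref */

	/* ... read and accumulate the busyness counters ... */

out:
	intel_gt_pm_put_async(gt);
}
```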
Submitted by Umesh Nerlige Ramappa

live_engine_busy_stats waits for busyness to start ticking before sampling busyness for the test sample duration. The wait accesses an MMIO register, and the uncore call to read it takes up to 3 ms in the worst case. This can result in the wait timing out, since the MMIO read itself consumes the 500 us timeout. Increase the timeout to a larger value of 10 ms to account for the MMIO read time.

Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/4536
Fixes: 77cdd054 ("drm/i915/pmu: Connect engine busyness stats from GuC to pmu")
Signed-off-by: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211208183313.13126-1-umesh.nerlige.ramappa@intel.com
Submitted by Matthew Auld

If the device needs 64K minimum GTT pages for device local-memory, as on XEHPSDV, then we need to fail the allocation if we can't meet it, instead of falling back to 4K pages; otherwise we can't safely support the insertion of device local-memory pages for this vm, since the HW expects the correct physical alignment and size for every PTE once we mark the page-table as operating in 64K GTT mode.

v2: s/userpsace/userspace [Thomas]

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Ramalingam C <ramalingam.c@intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Andi Shyti <andi.shyti@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211208141613.7251-5-ramalingam.c@intel.com
Submitted by Matthew Auld

On some platforms the HW has dropped support for 4K GTT pages when dealing with LMEM, and due to the design of 64K GTT pages in the HW, we can only mark the *entire* page-table as operating in 64K GTT mode, since the enable bit is still on the pde and not the pte. And since we still need to allow 4K GTT pages for SMEM objects, we can't have a "normal" 4K page-table with scratch pointing to LMEM, since that's undefined from the HW pov. The simplest solution is to just move the 64K scratch page to SMEM on such platforms and call it a day, since that should work for all configurations.

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Ramalingam C <ramalingam.c@intel.com>
Reviewed-by: Thomas Hellstrom <thomas.hellstrom@linux.intel.com>
Reviewed-by: Andi Shyti <andi.shyti@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211208141613.7251-4-ramalingam.c@intel.com
Submitted by Matthew Auld

Conditionally allocate LMEM with 64K granularity, since 4K page support for LMEM will be dropped on some platforms when using the PPGTT.

v2: updated commit msg [Thomas]

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Stuart Summers <stuart.summers@intel.com>
Signed-off-by: Ramalingam C <ramalingam.c@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Reviewed-by: Thomas Hellstrom <thomas.hellstrom@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211208154854.28037-1-ramalingam.c@intel.com
- 9 December 2021, 1 commit
Submitted by Tejas Upadhyay

We need a way to reset engines by their reset domains. This change sets up a way to fetch the reset domain of each engine globally.

Changes since v1:
- Use static reset domain array - Ville and Tvrtko
- Use BUG_ON at appropriate place - Tvrtko

Signed-off-by: Tejas Upadhyay <tejaskumarx.surendrakumar.upadhyay@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211206081026.4024401-1-tejaskumarx.surendrakumar.upadhyay@intel.com
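A static engine-to-domain table of the kind described might look roughly like this (a sketch; the GEN11_GRDOM_* defines are the existing reset-domain bits, while the function shape and table contents are assumptions for illustration):

```c
/* Sketch: map engine ids to hardware reset-domain bits, with a
 * sanity check per the review feedback. */
static u32 get_reset_domain(u8 id)
{
	static const u32 engine_reset_domains[] = {
		[RCS0]  = GEN11_GRDOM_RENDER,
		[BCS0]  = GEN11_GRDOM_BLT,
		[VCS0]  = GEN11_GRDOM_MEDIA,
		[VECS0] = GEN11_GRDOM_VECS,
	};

	/* Catch engines the table does not know about. */
	BUG_ON(id >= ARRAY_SIZE(engine_reset_domains) ||
	       !engine_reset_domains[id]);

	return engine_reset_domains[id];
}
```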
- 8 December 2021, 3 commits
Submitted by Matthew Auld

Ensure we account for any object rounding due to min_page_size restrictions.

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Ramalingam C <ramalingam.c@intel.com>
Reviewed-by: Ramalingam C <ramalingam.c@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211206112539.3149779-4-matthew.auld@intel.com
Submitted by Matthew Auld

No need to insert PTEs for the PTE window itself. Also, foreach expects a length, not an end offset, which could be gigantic here with a second engine.

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Ramalingam C <ramalingam.c@intel.com>
Reviewed-by: Ramalingam C <ramalingam.c@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211206112539.3149779-3-matthew.auld@intel.com
Submitted by Matthew Auld

Ensure we add the engine base only after we calculate the qword offset into the PTE window.

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Ramalingam C <ramalingam.c@intel.com>
Reviewed-by: Ramalingam C <ramalingam.c@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211206112539.3149779-2-matthew.auld@intel.com