- 09 July 2020, 4 commits
-
-
SSEUs are a GT capability, so track them under gt_info. Signed-off-by: Venkata Sandeep Dhanalakota <venkata.s.dhanalakota@intel.com> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Andi Shyti <andi.shyti@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20200708003952.21831-8-daniele.ceraolospurio@intel.com
-
By Daniele Ceraolo Spurio
Since the engines belong to the GT, move the runtime-updated list of available engines to the intel_gt struct. The original mask has been renamed to indicate it contains the maximum engine list that can be found on a matching device. In preparation for other info being moved to the gt in follow-up patches (sseu), introduce an intel_gt_info structure to group all gt-related runtime info. v2: s/max_engine_mask/platform_engine_mask (tvrtko), fix selftest Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: Andi Shyti <andi.shyti@intel.com> Cc: Venkata Sandeep Dhanalakota <venkata.s.dhanalakota@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> #v1 Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20200708003952.21831-5-daniele.ceraolospurio@intel.com
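A minimal sketch of the kind of grouping the patch describes, with field names approximated rather than copied from the tree:

```c
/* Hedged sketch: runtime GT info collected in one place under the GT.
 * Field names approximate the description above, not the exact layout. */
struct intel_gt_info {
	/* engines actually present on this device, pruned at runtime */
	intel_engine_mask_t engine_mask;
	/* slice/subslice/EU topology, moved under the GT by a follow-up patch */
	struct sseu_dev_info sseu;
};
```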
-
By Daniele Ceraolo Spurio
All the info we read in intel_device_info_init_mmio is engine-related, and since we already have an engine_init_mmio function we can just perform the operations from there. v2: clarify comment about forcewake requirements and pruning (Chris) Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: Andi Shyti <andi.shyti@intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> #v1 Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20200708003952.21831-4-daniele.ceraolospurio@intel.com
-
By Daniele Ceraolo Spurio
A follow-up patch will move the engine mask under the gt structure, so get ready for that. v2: switch the remaining gvt case using dev_priv->gt to gvt->gt (Chris) Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: Andi Shyti <andi.shyti@intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> #v1 Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20200708003952.21831-3-daniele.ceraolospurio@intel.com
-
- 20 June 2020, 1 commit
-
-
By Chris Wilson
Since we always enable the busy-stats, the cumulative runtime should be accurate, and might be useful for diagnosing issues with the engine. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200619191053.9654-1-chris@chris-wilson.co.uk
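A rough sketch of how an always-on busy-stats counter yields a cumulative runtime; the structure and field names below are illustrative, not the driver's actual ones:

```c
#include <linux/ktime.h>

/* Illustrative stats container, not the driver's real layout. */
struct busy_stats {
	ktime_t total;	/* accumulated runtime of completed busy periods */
	ktime_t start;	/* start of the current busy period, if any */
	bool active;
};

/* Total runtime = completed periods plus the currently running slice. */
static ktime_t busy_stats_total(const struct busy_stats *stats)
{
	ktime_t total = stats->total;

	if (stats->active)
		total = ktime_add(total, ktime_sub(ktime_get(), stats->start));

	return total;
}
```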
-
- 18 June 2020, 1 commit
-
-
By Chris Wilson
Return the monotonic timestamp (ktime_get()) at the time of sampling the busy-time. This is used in preference to taking ktime_get() separately before or after the read seqlock, as there can be some large variance in reported timestamps. For selftests trying to ascertain that we are reporting accurately to within a few microseconds, even a small delay leads to the test failing. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200617130916.15261-2-chris@chris-wilson.co.uk
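A sketch of the sampling pattern described, assuming a seqlock-protected counter with illustrative names; the key point is that the timestamp is taken inside the same retry loop as the busy-time:

```c
#include <linux/ktime.h>
#include <linux/seqlock.h>

struct busy_sample {		/* illustrative, not the driver's struct */
	seqlock_t lock;
	u64 busy_ns;
};

/* Return the busy-time together with the ktime at which it was sampled. */
static u64 sample_busy(struct busy_sample *s, ktime_t *now)
{
	unsigned int seq;
	u64 busy;

	do {
		seq = read_seqbegin(&s->lock);
		*now = ktime_get();
		busy = s->busy_ns;
	} while (read_seqretry(&s->lock, seq));

	return busy;
}
```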
-
- 16 June 2020, 3 commits
-
-
By Chris Wilson
In commit 5ba32c7b ("drm/i915/execlists: Always force a context reload when rewinding RING_TAIL"), we placed the check for rewinding a context on actually submitting the next request in that context. This was so that we only had to check once, and could do so with precision, avoiding as many forced restores as possible. For example, to ensure that we can resubmit the same request a couple of times, we include a small wa_tail such that on the next submission, the ring->tail will appear to move forwards when resubmitting the same request. This is very common as it will happen for every lite-restore to fill the second port after a context switch. However, intel_ring_direction() is limited in precision to movements of up to half the ring size. The consequence is that if we tried to unwind many requests, we could exceed half the ring and flip the sense of the direction, so missing a force restore. As no request can be greater than half the ring (i.e. 2048 bytes in the smallest case), we can check for rollback incrementally. As we check against the tail that would be submitted, we do not lose any sensitivity and allow lite restores for the simple case. We still need to double check upon submitting the context, to allow for multiple preemptions and resubmissions. Fixes: 5ba32c7b ("drm/i915/execlists: Always force a context reload when rewinding RING_TAIL") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Cc: <stable@vger.kernel.org> # v5.4+ Reviewed-by: Bruce Chang <yu.bruce.chang@intel.com> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200609151723.12971-1-chris@chris-wilson.co.uk (cherry picked from commit e36ba817) Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
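A heavily simplified sketch of the incremental check described above; intel_ring_direction() is the helper named in the message, but the wrapper and its exact use here are illustrative:

```c
/* Because no single request exceeds half the ring, comparing the new tail
 * against the previously submitted tail one request at a time stays within
 * intel_ring_direction()'s half-ring precision. Not the driver's exact code. */
static bool rollback_needs_restore(struct intel_ring *ring,
				   u32 prev_tail, u32 new_tail)
{
	/* Moving backwards (or not at all) means the HW would skip the
	 * context reload unless we force a restore. */
	return intel_ring_direction(ring, new_tail, prev_tail) <= 0;
}
```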
-
By Chris Wilson
If the tasklet is not being used, don't try and flush it. Fixes: 59489387 ("drm/i915/gt: Add a safety submission flush in the heartbeat") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200615183935.17389-1-chris@chris-wilson.co.uk
-
By Chris Wilson
Just in case everything fails (like, for example, "missed interrupt syndrome" on Sandybridge), always flush the submission tasklet from the heartbeat. This papers over such issues, but will still appear as a second-long glitch, and prevents us from detecting it unless we happen to be performing a timed test. v2: We rely on flush_submission() synchronizing with the tasklet on another CPU. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200615165013.22973-3-chris@chris-wilson.co.uk
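A sketch of the safety net described, using the exported flush helper; the surrounding heartbeat logic is elided and the wrapper name is illustrative:

```c
/* Kick and synchronise with the submission tasklet before judging whether
 * the engine has made forward progress, so a missed interrupt does not
 * masquerade as a hang. */
static void heartbeat_check(struct intel_engine_cs *engine)
{
	intel_engine_flush_submission(engine);

	if (intel_engine_is_idle(engine))
		return; /* nothing outstanding, no need to escalate */

	/* ...continue with the usual heartbeat checks... */
}
```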
-
- 10 June 2020, 2 commits
-
-
By Chris Wilson
This may be useful to identify contexts that are running even though they are supposed to be closed or banned. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200610154046.22449-1-chris@chris-wilson.co.uk
-
By Chris Wilson
In commit 5ba32c7b ("drm/i915/execlists: Always force a context reload when rewinding RING_TAIL"), we placed the check for rewinding a context on actually submitting the next request in that context. This was so that we only had to check once, and could do so with precision, avoiding as many forced restores as possible. For example, to ensure that we can resubmit the same request a couple of times, we include a small wa_tail such that on the next submission, the ring->tail will appear to move forwards when resubmitting the same request. This is very common as it will happen for every lite-restore to fill the second port after a context switch. However, intel_ring_direction() is limited in precision to movements of up to half the ring size. The consequence is that if we tried to unwind many requests, we could exceed half the ring and flip the sense of the direction, so missing a force restore. As no request can be greater than half the ring (i.e. 2048 bytes in the smallest case), we can check for rollback incrementally. As we check against the tail that would be submitted, we do not lose any sensitivity and allow lite restores for the simple case. We still need to double check upon submitting the context, to allow for multiple preemptions and resubmissions. Fixes: 5ba32c7b ("drm/i915/execlists: Always force a context reload when rewinding RING_TAIL") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Cc: <stable@vger.kernel.org> # v5.4+ Reviewed-by: Bruce Chang <yu.bruce.chang@intel.com> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200609151723.12971-1-chris@chris-wilson.co.uk
-
- 05 June 2020, 1 commit
-
-
By Chris Wilson
Add engine->fw_domain/active to the pretty printer for debug dumps and debugfs. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Cc: Venkata Sandeep Dhanalakota <venkata.s.dhanalakota@intel.com> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200605144705.31127-1-chris@chris-wilson.co.uk
-
- 03 June 2020, 1 commit
-
-
By Chris Wilson
We infrequently use the direct i915 backpointer from the i915_request, so do we really need to waste the space in the struct for it? 8 bytes from the most frequently allocated struct vs 3 bytes and pointer chasing in using rq->engine->i915? Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200602220953.21178-1-chris@chris-wilson.co.uk
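The replacement access pattern is a simple pointer chase; a trivial sketch (the helper name is illustrative):

```c
/* One extra dereference instead of an 8-byte backpointer in every request. */
static inline struct drm_i915_private *request_i915(const struct i915_request *rq)
{
	return rq->engine->i915;
}
```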
-
- 01 June 2020, 1 commit
-
-
By Chris Wilson
If we fail during engine setup, we may leave some engines not yet set up. During the error cleanup, we have to be careful not to try and use the uninitialised engines before discarding them. [ 16.136152] RIP: 0010:__flush_work+0x198/0x1b0 [ 16.136168] Code: ff ff 8b 0b 48 8b 53 08 83 e1 08 48 0f ba 2b 03 80 c9 f0 e9 63 ff ff ff 0f 0b 48 83 c4 48 44 89 f0 5b 5d 41 5c 41 5d 41 5e c3 <0f> 0b 45 31 f6 e9 62 ff ff ff 66 66 2e 0f 1f 84 00 00 00 00 00 0f [ 16.136186] RSP: 0018:ffffc900003bb928 EFLAGS: 00010246 [ 16.136201] RAX: 0000000000000000 RBX: ffff88844f392168 RCX: 0000000000000000 [ 16.136216] RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffff88844f392168 [ 16.136231] RBP: ffff88844f392130 R08: 0000000000000000 R09: 0000000000000001 [ 16.136246] R10: ffff888441e31e40 R11: ffff88845e329c70 R12: ffff88844f796988 [ 16.136261] R13: ffff888441e4fb80 R14: 0000000000000001 R15: ffff88844f790000 [ 16.136388] FS: 00007fecbd208880(0000) GS:ffff88845e380000(0000) knlGS:0000000000000000 [ 16.136405] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 16.136420] CR2: 00007ff3ce748f90 CR3: 0000000457a6a001 CR4: 00000000000606e0 [ 16.136437] Call Trace: [ 16.136456] ? try_to_del_timer_sync+0x3a/0x50 [ 16.136529] intel_wakeref_wait_for_idle+0x87/0xb0 [i915] [ 16.136606] ? intel_engines_release+0x68/0xc0 [i915] [ 16.136680] intel_engines_release+0x49/0xc0 [i915] [ 16.136757] intel_gt_init+0x2f4/0x5e0 [i915] Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200601072446.19548-1-chris@chris-wilson.co.uk
-
- 14 May 2020, 1 commit
-
-
By Chris Wilson
By providing the default values configured into the kernel via sysfs, it is much more convenient for userspace to restore those sane defaults, or at least know what is considered a good baseline. This is useful, for example, to clean up after any failed userspace prior to commencing new jobs. /sys/class/drm/card0/engine/rcs0/ ├── capabilities ├── class ├── .defaults │ ├── heartbeat_interval_ms │ ├── max_busywait_duration_ns │ ├── preempt_timeout_ms │ ├── stop_timeout_ms │ └── timeslice_duration_ms ├── heartbeat_interval_ms ├── instance ├── known_capabilities ├── max_busywait_duration_ns ├── mmio_base ├── name ├── preempt_timeout_ms ├── stop_timeout_ms └── timeslice_duration_ms Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Maciej Patelczyk <maciej.patelczyk@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200514062905.28668-1-chris@chris-wilson.co.uk
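A small userspace sketch of the intended use: reading a configured default back from the .defaults directory listed above (paths per the listing; error handling kept minimal, helper name illustrative):

```c
#include <stdio.h>

/* e.g. read_default("rcs0", "preempt_timeout_ms") */
static long read_default(const char *engine, const char *param)
{
	char path[256];
	long val = -1;
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/class/drm/card0/engine/%s/.defaults/%s", engine, param);
	f = fopen(path, "r");
	if (f) {
		if (fscanf(f, "%ld", &val) != 1)
			val = -1;
		fclose(f);
	}
	return val;
}
```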
-
- 07 May 2020, 1 commit
-
-
By Chris Wilson
If we find ourselves waiting on a MI_SEMAPHORE_WAIT, either within the user batch or in our own preamble, the engine raises a GT_WAIT_ON_SEMAPHORE interrupt. We can unmask that interrupt and so respond to a semaphore wait by yielding the timeslice, if we have another context to yield to! The only real complication is that the interrupt is only generated for the start of the semaphore wait, and is asynchronous to our process_csb() -- that is, we may not have registered the timeslice before we see the interrupt. To ensure we don't miss a potential semaphore blocking forward progress (e.g. selftests/live_timeslice_preempt) we mark the interrupt and apply it to the next timeslice regardless of whether it was active at the time. v2: We use semaphores in preempt-to-busy, within the timeslicing implementation itself! Ergo, when we do insert a preemption due to an expired timeslice, the new context may start with the missed semaphore flagged by the retired context and be yielded, ad infinitum. To avoid this, read the context id at the time of the semaphore interrupt and only yield if that context is still active. Fixes: 8ee36e04 ("drm/i915/execlists: Minimalistic timeslicing") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Kenneth Graunke <kenneth@whitecape.org> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200407130811.17321-1-chris@chris-wilson.co.uk (cherry picked from commit c4e8ba73) (cherry picked from commit cd60e4ac4738a6921592c4f7baf87f9a3499f0e2) Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
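A rough sketch of the interrupt side of the v2 mechanism: record the CCID of the context that hit the semaphore so the tasklet can later yield only if that context is still active. Register and field usage approximate the description, not the exact driver code:

```c
static void handle_semaphore_irq(struct intel_engine_cs *engine, u32 iir)
{
	if (iir & GT_WAIT_SEMAPHORE_INTERRUPT) {
		/* remember which context was waiting when the irq fired */
		WRITE_ONCE(engine->execlists.yield,
			   ENGINE_READ_FW(engine, RING_EXECLIST_STATUS_HI));
		tasklet_hi_schedule(&engine->execlists.tasklet);
	}
}
```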
-
- 01 May 2020, 1 commit
-
-
By Chris Wilson
Since the introduction of 'soft-rc6', we aim to park the device quickly and that results in frequent idling of the whole device. Currently upon idling we free the batch buffer pool, and so this renders the cache ineffective for many workloads. If we want to have an effective cache of recently allocated buffers available for reuse, we need to decouple that cache from the engine power management and make it timer-based. As there is no reason then to keep it within the engine (where it once made retirement order easier to track), we can move it up the hierarchy to the owner of the memory allocations. v2: Hook up to debugfs/drop_caches to clear the cache on demand. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200430111819.10262-2-chris@chris-wilson.co.uk
-
- 30 April 2020, 2 commits
-
-
By Chris Wilson
In the near future, we will utilize the busy-stats on each engine to approximate the C0 cycles of each, and use that as an input to a manual RPS mechanism. That entails having busy-stats always enabled, and so we can remove the enable/disable routines and simplify the pmu setup. As a consequence of always having the stats enabled, we can also show the current active time via sysfs/engine/xcs/active_time_ns. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200429205446.3259-1-chris@chris-wilson.co.uk
-
By Chris Wilson
We need to keep the default context state around to instantiate new contexts (aka golden rendercontext), and we also keep it pinned while the engine is active so that we can quickly reset a hanging context. However, the default contexts are large enough to merit keeping in swappable memory as opposed to kernel memory, so we store them inside shmemfs. Currently, we use the normal GEM objects to create the default context image, but we can throw away all but the shmemfs file. This greatly simplifies the tricky power management code which wants to run underneath the normal GT locking, and we definitely do not want to use any high level objects that may appear to recurse back into the GT. Though perhaps the primary advantage of the complex GEM object is that we aggressively cache the mapping, but here we are recreating the vm_area every time we unpark. At the worst, we add a lightweight cache, but first find a microbenchmark that is impacted. Having started to create some utility functions to make working with shmemfs objects easier, we can start putting them to wider use, where GEM objects are overkill, such as storing persistent error state. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Matthew Auld <matthew.auld@intel.com> Cc: Ramalingam C <ramalingam.c@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200429172429.6054-1-chris@chris-wilson.co.uk
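A sketch of the storage switch described, assuming the stock shmem_file_setup() helper; the wrapper name and flags are illustrative:

```c
#include <linux/mm.h>
#include <linux/shmem_fs.h>

/* Keep the golden context image in a swappable shmemfs file instead of a
 * full GEM object. Illustrative only. */
static struct file *create_default_state(loff_t size)
{
	return shmem_file_setup("i915-default-context", size, VM_NORESERVE);
}
```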
-
- 29 April 2020, 1 commit
-
-
By Chris Wilson
The bspec is confusing on the nature of the upper 32 bits of the LRC descriptor. Once upon a time, it said that it uses the upper 32b to decide if it should perform a lite-restore, and so we must ensure that each unique context submitted to HW is given a unique CCID [for the duration of it being on the HW]. Currently, this is achieved by using a small circular tag, and assigning every context submitted to HW a new id. However, this tag is being cleared on repinning an inflight context such that we end up re-using the 0 tag for multiple contexts. To avoid accidentally clearing the CCID in the upper 32 bits of the LRC descriptor, split the descriptor into two dwords so we can update the GGTT address separately from the CCID. Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/1796 Fixes: 2935ed53 ("drm/i915: Remove logical HW ID") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Cc: <stable@vger.kernel.org> # v5.5+ Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200428184751.11257-1-chris@chris-wilson.co.uk
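A sketch of the split being described, with illustrative field names: keeping the GGTT address and the CCID as separate dwords means an update of one can never clobber the other.

```c
#include <linux/types.h>

/* Hedged sketch, not the exact fields in the tree. */
struct lrc_layout {
	u32 lrca;	/* lower dword: GGTT address of the context image + flags */
	u32 ccid;	/* upper dword: context ID the HW uses for lite-restore */
};

/* The full 64-bit descriptor is only composed at submission time. */
static inline u64 lrc_descriptor(const struct lrc_layout *lrc)
{
	return (u64)lrc->ccid << 32 | lrc->lrca;
}
```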
-
- 07 April 2020, 1 commit
-
-
By Chris Wilson
If we find ourselves waiting on a MI_SEMAPHORE_WAIT, either within the user batch or in our own preamble, the engine raises a GT_WAIT_ON_SEMAPHORE interrupt. We can unmask that interrupt and so respond to a semaphore wait by yielding the timeslice, if we have another context to yield to! The only real complication is that the interrupt is only generated for the start of the semaphore wait, and is asynchronous to our process_csb() -- that is, we may not have registered the timeslice before we see the interrupt. To ensure we don't miss a potential semaphore blocking forward progress (e.g. selftests/live_timeslice_preempt) we mark the interrupt and apply it to the next timeslice regardless of whether it was active at the time. v2: We use semaphores in preempt-to-busy, within the timeslicing implementation itself! Ergo, when we do insert a preemption due to an expired timeslice, the new context may start with the missed semaphore flagged by the retired context and be yielded, ad infinitum. To avoid this, read the context id at the time of the semaphore interrupt and only yield if that context is still active. Fixes: 8ee36e04 ("drm/i915/execlists: Minimalistic timeslicing") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Kenneth Graunke <kenneth@whitecape.org> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200407130811.17321-1-chris@chris-wilson.co.uk
-
- 04 April 2020, 1 commit
-
-
By Chris Wilson
While extremely unlikely to be populated, we could capture a request on the virtual engine which we should free along with the virtual engine. Fixes: 43acd651 ("drm/i915: Keep a per-engine request pool") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Janusz Krzysztofik <janusz.krzysztofik@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200403203303.10903-1-chris@chris-wilson.co.uk
-
- 03 April 2020, 1 commit
-
-
By Chris Wilson
Add a tiny per-engine request mempool so that we should always have a request available for power management allocations from tricky contexts. This reserve is expected to be only used for kernel contexts when barriers must be emitted [almost] without fail. The main consumer for this reserved request is expected to be engine-pm, for which we know that there will always be at least the previous pm request that we can reuse under mempressure (so there should always be a spare request for engine_park()). This is an alternative to using a comparatively bulky mempool, which requires custom handling for both our reserved allocation requirement and to protect our TYPESAFE_BY_RCU slab cache. The advantage of mempool would be that it would allow us to keep a larger per-engine request pool. However, converting over to mempool is straightforward should the need arise. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Janusz Krzysztofik <janusz.krzysztofik@linux.intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-and-tested-by: Janusz Krzysztofik <janusz.krzysztofik@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200402184037.21630-1-chris@chris-wilson.co.uk
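A sketch of the one-deep reserve described, assuming an engine->request_pool-style spare pointer; the allocation path only falls back to the spare when the slab allocation fails:

```c
/* Hedged sketch of the fallback path; names approximate the description. */
static struct i915_request *
request_alloc(struct intel_engine_cs *engine, struct kmem_cache *slab, gfp_t gfp)
{
	struct i915_request *rq;

	rq = kmem_cache_alloc(slab, gfp);
	if (!rq)	/* e.g. engine_park() under memory pressure */
		rq = xchg(&engine->request_pool, NULL);

	return rq;
}
```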
-
- 01 April 2020, 2 commits
-
-
By Chris Wilson
Insert a space so that the same fields between active/pending execlists state are aligned. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200401111554.6279-1-chris@chris-wilson.co.uk
-
By Chris Wilson
Since we print out EXECLISTS_STATUS in the dump, also print out the CCID of each context so we can cross-check between the two. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200331094239.23145-1-chris@chris-wilson.co.uk
-
- 26 March 2020, 1 commit
-
-
By Chris Wilson
We've migrated all the heavy users over to the intel_gt, and can finally drop the last few users and, with that, the mirror in dev_priv->engine[]. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Andi Shyti <andi.shyti@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200325234803.6175-1-chris@chris-wilson.co.uk
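With the dev_priv->engine[] mirror gone, iteration goes through the GT; a small sketch of the resulting style (the dump helper call is just an example use):

```c
/* Dump every engine known to this GT. */
static void dump_engines(struct intel_gt *gt)
{
	struct drm_printer p = drm_info_printer(gt->i915->drm.dev);
	struct intel_engine_cs *engine;
	enum intel_engine_id id;

	for_each_engine(engine, gt, id)
		intel_engine_dump(engine, &p, "%s\n", engine->name);
}
```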
-
- 12 March 2020, 1 commit
-
-
By Tvrtko Ursulin
Allow super long OpenCL workloads which cannot be preempted within the default timeout to run out of the box. v2: * Make it stick out more and apply only to RCS. (Chris) v3: * Mention platform override in kconfig. (Joonas) Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Michal Mrozek <michal.mrozek@intel.com> Cc: <stable@vger.kernel.org> # v5.6+ Acked-by: Chris Wilson <chris@chris-wilson.co.uk> Acked-by: Michal Mrozek <Michal.mrozek@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200312115748.29970-1-tvrtko.ursulin@linux.intel.com
-
- 11 March 2020, 1 commit
-
-
By Takashi Iwai
Since snprintf() returns the would-be-output size instead of the actual output size, the succeeding calls may go beyond the given buffer limit. Fix it by replacing with scnprintf(). Signed-off-by: Takashi Iwai <tiwai@suse.de> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20200311073256.6535-1-tiwai@suse.de
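A small sketch of the failure mode and the fix: snprintf() returns the length it would have written, so chaining it can push the offset past the buffer, while scnprintf() returns only what was actually stored:

```c
/* Appending names into a fixed buffer: safe with scnprintf(). */
static int append_name(char *buf, int len, int size, const char *name)
{
	/* with snprintf() here, `len` could grow beyond `size` and the
	 * next call would write out of bounds */
	return len + scnprintf(buf + len, size - len, "%s ", name);
}
```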
-
- 29 February 2020, 1 commit
-
-
By Chris Wilson
We busywait on an inflight request (one that is currently executing on HW, and so might complete quickly) prior to setting up an interrupt and sleeping. The trade-off is that we keep an expensive CPU core busy in order to avoid wake-up latency: where that trade-off should lie is best left to the sysadmin. The busywait mechanism can be compiled out with ./scripts/config --set-val DRM_I915_SPIN_REQUEST 0 The maximum busywait duration can be adjusted per-engine using /sys/class/drm/card?/engine/*/max_busywait_duration_ns Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Reviewed-by: Steve Carbonari <steven.carbonari@intel.com> Tested-by: Steve Carbonari <steven.carbonari@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200228131716.3243616-4-chris@chris-wilson.co.uk
-
- 22 February 2020, 1 commit
-
-
By Colin Ian King
Variable dw is being initialized with a value that is never read; it is assigned a new value later on. The assignment is redundant and can be removed. Addresses-Coverity: ("Unused value") Signed-off-by: Colin Ian King <colin.king@canonical.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20200222134755.134209-1-colin.king@canonical.com
-
- 19 February 2020, 1 commit
-
-
By Chris Wilson
As we have the total runtime known to us, show it when dumping the engine state for debug. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200218162150.1300405-2-chris@chris-wilson.co.uk
-
- 12 February 2020, 1 commit
-
-
By Chris Wilson
In order to support out-of-line error capture, we need to remove the active request from HW and put it to one side while a worker compresses and stores all the details associated with that request. (As that compression may take an arbitrary user-controlled amount of time, we want to let the engine continue running on other workloads while the hanging request is dumped.) Not only do we need to remove the active request, but we also have to remove its context and all requests that were dependent on it (both in flight, queued and future submission). Finally, once the capture is complete, we need to be able to resubmit the request and its dependents and allow them to execute. v2: Replace stack recursion with a simple list. v3: Check all the parents, not just the first, when searching for a stuck ancestor! References: https://gitlab.freedesktop.org/drm/intel/issues/738 Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200116184754.2860848-2-chris@chris-wilson.co.uk (cherry picked from commit 32ff621f) Signed-off-by: Jani Nikula <jani.nikula@intel.com>
-
- 08 February 2020, 1 commit
-
-
By Chris Wilson
We set up a dummy ring in order to measure the size we require for our breadcrumb emission, so that we don't have to manually count dwords! We can pass in the kernel_context to use for this so that, if required, it is known to the breadcrumb emitter, and we can reuse some details from the kernel_context to reduce the number of temporaries we have to mock. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200207125827.2787472-1-chris@chris-wilson.co.uk
-
- 01 February 2020, 1 commit
-
-
By Daniele Ceraolo Spurio
The workarounds are a common "feature" across gens and submission mechanisms, and we already call the other WA-related functions from common engine ones (<setup/cleanup>_common), so it makes sense to do the same with WA application. Medium-term, this will help us reduce the duplication once the GuC resume function is added, but short term it will also allow us to use the workaround lists for pre-gen8 engine workarounds. Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: Matthew Brost <matthew.brost@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20200131075716.2212299-2-chris@chris-wilson.co.uk
-
- 31 January 2020, 1 commit
-
-
By Chris Wilson
On Braswell and Broxton (also known as Valleyview and Apollolake), we need to serialise updates of the GGTT using the big stop_machine() hammer. This has the side effect of appearing to lockdep as a possible reclaim (since it uses the cpuhp mutex and that is tainted by per-cpu allocations). However, we want to use vm->mutex (including ggtt->mutex) from within the shrinker and so must avoid such possible taints. For this purpose, we introduced the asynchronous vma binding and we can apply it to PIN_GLOBAL so long as we take care to add the necessary waits for the worker afterwards. Closes: https://gitlab.freedesktop.org/drm/intel/issues/211 Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200130181710.2030251-3-chris@chris-wilson.co.uk
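A sketch of the pin-then-wait pattern described, using the public vma helpers; the wrapper is illustrative and error handling minimal:

```c
/* Bind into the GGTT (possibly asynchronously on a worker), then wait for
 * the binding to complete before the mapping is relied upon. */
static int pin_and_wait(struct i915_vma *vma)
{
	int err;

	err = i915_vma_pin(vma, 0, 0, PIN_GLOBAL);
	if (err)
		return err;

	return i915_vma_wait_for_bind(vma);
}
```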
-
- 30 January 2020, 2 commits
-
-
By Wambui Karuga
Conversion of the remaining printk-based drm logging macros to the new struct drm_device based logging macros in i915/gt/intel_engine_cs.c. Signed-off-by: Wambui Karuga <wambui.karugax@gmail.com> Signed-off-by: Jani Nikula <jani.nikula@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200128071437.9284-5-wambui.karugax@gmail.com
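The conversion style in question, shown side by side; the message text is made up:

```c
static void report_setup_failure(struct intel_engine_cs *engine)
{
	/* old: bare printk-based macro, no device context in the log */
	DRM_DEBUG_DRIVER("%s: setup failed\n", engine->name);

	/* new: struct drm_device based macro, tagged with the device */
	drm_dbg(&engine->i915->drm, "%s: setup failed\n", engine->name);
}
```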
-
By Lionel Landwerlin
Could be helpful for debugging purposes. Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20200129181638.1528150-1-lionel.g.landwerlin@intel.com
-
- 29 January 2020, 1 commit
-
-
By Chris Wilson
Now that we have offline error capture and can reset an engine from inside an atomic context while also preserving the GPU state for post-mortem analysis, it is time to handle error interrupts thrown by the command parser. This provides a much, much faster mechanism for us to detect known problems than using heartbeats/hangchecks, and also provides a mechanism for when those are disabled. However, it is limited to problems the HW can detect in the CS and so is not a complete solution for detecting lockups. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200128204318.4182039-2-chris@chris-wilson.co.uk
-
- 25 January 2020, 1 commit
-
-
By Chris Wilson
Due to the asynchronous nature of releasing our wakerefs, we can signal the main GT wakeref as complete before the individual engines have cleared their own wakerefs. During shutdown we assert that the engines are indeed parked before we release them, but for this to always be true we need to flush their workers as well. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200124143339.140988-1-chris@chris-wilson.co.uk
-
- 22 January 2020, 1 commit
-
-
By Pankaj Bharadiya
drm specific WARN* calls include device information in the backtrace, so we know what device the warnings originate from. Convert all the calls of WARN* to device specific drm_WARN* variants in functions where a drm_i915_private struct pointer is readily available. The conversion was done automatically with the below coccinelle semantic patch; checkpatch errors/warnings are fixed manually. @rule1@ identifier func, T; @@ func(...) { ... struct drm_i915_private *T = ...; <+... ( -WARN( +drm_WARN(&T->drm, ...) | -WARN_ON( +drm_WARN_ON(&T->drm, ...) | -WARN_ONCE( +drm_WARN_ONCE(&T->drm, ...) | -WARN_ON_ONCE( +drm_WARN_ON_ONCE(&T->drm, ...) ) ...+> } @rule2@ identifier func, T; @@ func(struct drm_i915_private *T,...) { <+... ( -WARN( +drm_WARN(&T->drm, ...) | -WARN_ON( +drm_WARN_ON(&T->drm, ...) | -WARN_ONCE( +drm_WARN_ONCE(&T->drm, ...) | -WARN_ON_ONCE( +drm_WARN_ON_ONCE(&T->drm, ...) ) ...+> } command: spatch --sp-file <script> --dir drivers/gpu/drm/i915/gt \ --linux-spacing --in-place Signed-off-by: Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com> Signed-off-by: Jani Nikula <jani.nikula@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200115034455.17658-7-pankaj.laxminarayan.bharadiya@intel.com
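A hand-written example of the before/after that the semantic patch above produces; the condition is made up:

```c
static void sanity_check(struct drm_i915_private *i915, bool broken)
{
	/* old: generic WARN, backtrace carries no device information */
	WARN_ON(broken);

	/* new: device-specific variant, ties the warning to the drm_device */
	drm_WARN_ON(&i915->drm, broken);
}
```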
-