- 06 Mar, 2019: 1 commit
-
-
Submitted by Chris Wilson
As we allow per-context engines, the legacy concept of I915_EXEC_RING no longer applies universally. We are still exposing the unrelated exec-id in GEM_BUSY, so transition this ioctl (once more slightly changing its ABI, but no one cares) over to only reporting the uabi-class (not instance, as we can not foreseeably fit those into the small bitmask). The only user of the extended ring information from GEM_BUSY is ddx/sna, which tries to use the non-rcs busyness information to guide which engine to use for subsequent operations on foreign bo. All that matters for it is the decision between rcs and !rcs, so it is unaffected by the change in the higher bits. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190305162643.20243-1-chris@chris-wilson.co.uk
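A minimal sketch of how a client such as sna might consume the new report; the bit layout (writer class in the low word, per-class read bits above) is assumed from the commit text rather than taken from the uapi header, so treat it as illustrative only:

	#include <stdbool.h>
	#include <stdint.h>
	#include <xf86drm.h>
	#include <i915_drm.h>

	/* Assumed layout: the low word reports the class of the last
	 * writer (class + 1, 0 meaning idle). All sna needs is the
	 * rcs vs !rcs decision, which survives the ABI transition. */
	static bool bo_busy_on_render(int fd, uint32_t handle)
	{
		struct drm_i915_gem_busy busy = { .handle = handle };

		if (drmIoctl(fd, DRM_IOCTL_I915_GEM_BUSY, &busy))
			return false;

		return (busy.busy & 0xffff) == 1 + I915_ENGINE_CLASS_RENDER;
	}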
-
- 02 Mar, 2019: 1 commit
-
-
Submitted by Chris Wilson
Having introduced per-context seqno, we now have a means to identify progress across the system without fear of rollback, as befell the global_seqno. That is, we can program a MI_SEMAPHORE_WAIT operation in advance of submission safe in the knowledge that our target seqno and address is stable. However, since we are telling the GPU to busy-spin on the target address until it matches the signaling seqno, we only want to do so when we are sure that busy-spin will be completed quickly. To achieve this we only submit the request to HW once the signaler is itself executing (modulo preemption causing us to wait longer), and we only do so for default and above priority requests (so that idle priority tasks never themselves hog the GPU waiting for others). As might be reasonably expected, HW semaphores excel in inter-engine synchronisation microbenchmarks (where the 3x reduced latency / increased throughput more than offset the power cost of spinning on a second ring) and show significant improvement (can be up to ~10%, most see no change) for single clients that utilize multiple engines (typically media players and transcoders), without regressing multiple clients that can saturate the system or changing the power envelope dramatically. v3: Drop the older NEQ branch, now we pin the signaler's HWSP anyway. v4: Tell the world and include it as part of scheduler caps. Testcase: igt/gem_exec_whisper Testcase: igt/benchmarks/gem_wsim Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190301170901.8340-3-chris@chris-wilson.co.uk
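A sketch of the semaphore emission the commit describes, using the MI_SEMAPHORE_* flag names from the i915 command definitions (SAD_GTE_SDD: wait until signal address data >= semaphore data dword); the lookup of the signaler's HWSP offset is elided:

	static u32 *emit_semaphore_wait(u32 *cs, u32 signal_seqno, u32 hwsp_offset)
	{
		/* Busy-spin on the signaler's stable per-context HWSP slot
		 * until its breadcrumb reaches the target seqno. */
		*cs++ = MI_SEMAPHORE_WAIT |
			MI_SEMAPHORE_GLOBAL_GTT |
			MI_SEMAPHORE_POLL |
			MI_SEMAPHORE_SAD_GTE_SDD;
		*cs++ = signal_seqno;	/* semaphore data dword */
		*cs++ = hwsp_offset;	/* signal address, lower dword */
		*cs++ = 0;		/* signal address, upper dword */
		return cs;
	}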
-
- 28 Feb, 2019: 3 commits
-
-
Submitted by Chris Wilson
If we have parked, then we must have passed an idleness test and still be idle. We chose not to use this shortcut in the past so that we could use the idleness test at any time and inspect HW. However, some HW, like Sandybridge, doesn't like being woken up frivolously, so avoid doing so. References: 0b702dca ("drm/i915: Avoid waking the engines just to check if they are idle") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Mika Kuoppala <mika.kuoppala@intel.com> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190227214159.7946-1-chris@chris-wilson.co.uk
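A minimal sketch of the shortcut, assuming the gt.awake tracking of this era and a hypothetical __engines_are_idle() helper standing in for the HW inspection path:

	bool intel_engines_are_idle(struct drm_i915_private *i915)
	{
		/* Parked implies we already passed an idleness test;
		 * answer without touching (and waking) the hardware. */
		if (!READ_ONCE(i915->gt.awake))
			return true;

		return __engines_are_idle(i915);	/* hypothetical slow path */
	}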
-
Submitted by Chris Wilson
This reverts commit 0b702dca. CI reports that this is not as reliable as it first appears, with failures starting to sporadically occur in selftests. Fixes: 0b702dca ("drm/i915: Avoid waking the engines just to check if they are idle") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Mika Kuoppala <mika.kuoppala@intel.com> Cc: Jani Nikula <jani.nikula@linux.intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190227204654.14907-1-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson
Do a pass over all the engines upon starting to determine the global scheduler capability flags (those that are agreed upon by all). Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190226102404.29153-7-chris@chris-wilson.co.uk
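A sketch of the intersection pass, under the simplifying assumption that per-engine capabilities live in a flags mask that maps one-to-one onto the exported scheduler caps (the real translation between the two flag namespaces is elided):

	void intel_engines_set_scheduler_caps(struct drm_i915_private *i915)
	{
		struct intel_engine_cs *engine;
		enum intel_engine_id id;
		u32 caps = ~0u;

		/* Only advertise what every engine agrees upon. */
		for_each_engine(engine, i915, id)
			caps &= engine->flags;

		i915->caps.scheduler = caps;
	}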
-
- 27 Feb, 2019: 1 commit
-
-
Submitted by Chris Wilson
Exploit that reads of the ring registers return 0 from the engine when it is idle and we do not apply forcewake, to know that if the engine is idle then both reads will be identical (and so we interpret the ring as idle). The ulterior motive is to try and reduce the number of spurious wakeups to avoid untimely death, such as: <3> [85.046836] [drm:fw_domains_get [i915]] *ERROR* render: timed out waiting for forcewake ack request. <4> [85.051916] ------------[ cut here ]------------ <4> [85.051917] GT thread status wait timed out <4> [85.051963] WARNING: CPU: 2 PID: 2195 at drivers/gpu/drm/i915/intel_uncore.c:303 __gen6_gt_wait_for_thread_c0+0x6e/0xa0 [i915] <4> [85.051964] Modules linked in: snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic i915 x86_pkg_temp_thermal coretemp mei_hdcp crct10dif_pclmul crc32_pclmul snd_hda_intel ghash_clmulni_intel snd_hda_codec broadcom bcm_phy_lib i2c_i801 snd_hwdep snd_hda_core tg3 snd_pcm ptp pps_core mei_me mei prime_numbers lpc_ich <4> [85.051980] CPU: 2 PID: 2195 Comm: drm_read Tainted: G U 5.0.0-rc8-CI-CI_DRM_5662+ #1 <4> [85.051981] Hardware name: Dell Inc. XPS 8300 /0Y2MRG, BIOS A06 10/17/2011 <4> [85.052012] RIP: 0010:__gen6_gt_wait_for_thread_c0+0x6e/0xa0 [i915] <4> [85.052015] Code: 8b 92 5c 80 13 00 83 e2 07 75 d5 5b 5d c3 80 3d 5b 6a 1a 00 00 75 f4 48 c7 c7 38 21 31 a0 c6 05 4b 6a 1a 00 01 e8 e2 84 ea e0 <0f> 0b eb dd 80 3d 3a 6a 1a 00 00 75 98 48 c7 c6 08 21 31 a0 48 c7 <4> [85.052016] RSP: 0018:ffffc9000043bd00 EFLAGS: 00010086 <4> [85.052019] RAX: 0000000000000000 RBX: ffff888217c50000 RCX: 0000000000000000 <4> [85.052020] RDX: 0000000000000007 RSI: ffffffff820cb141 RDI: 00000000ffffffff <4> [85.052022] RBP: 00000013cd30f2fb R08: 0000000000000000 R09: 0000000000000001 <4> [85.052024] R10: ffffc9000043bce0 R11: 0000000000000000 R12: ffff888217c50ee0 <4> [85.052025] R13: 0000000000000001 R14: 00000000ffffffff R15: ffff888218076530 <4> [85.052028] FS: 00007fc79d049980(0000) GS:ffff888227a80000(0000) knlGS:0000000000000000 <4> [85.052029] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 <4> [85.052031] CR2: 00007f782e2940f8 CR3: 000000022458e006 CR4: 00000000000606e0 <4> [85.052033] Call Trace: <4> [85.052064] gen6_read32+0x14e/0x250 [i915] <4> [85.052096] intel_engine_is_idle+0x7d/0x180 [i915] <4> [85.052126] intel_engines_are_idle+0x29/0x50 [i915] <4> [85.052153] i915_drop_caches_set+0x21c/0x290 [i915] <4> [85.052160] simple_attr_write+0xb0/0xd0 <4> [85.052165] full_proxy_write+0x51/0x80 <4> [85.052170] __vfs_write+0x31/0x190 <4> [85.052176] ? rcu_read_lock_sched_held+0x6f/0x80 <4> [85.052178] ? rcu_sync_lockdep_assert+0x29/0x50 <4> [85.052181] ? __sb_start_write+0x152/0x1f0 <4> [85.052183] ? __sb_start_write+0x163/0x1f0 <4> [85.052187] vfs_write+0xbd/0x1b0 <4> [85.052191] ksys_write+0x50/0xc0 <4> [85.052196] do_syscall_64+0x55/0x190 <4> [85.052200] entry_SYSCALL_64_after_hwframe+0x49/0xbe <4> [85.052202] RIP: 0033:0x7fc79c9d3281 <4> [85.052204] Code: c3 0f 1f 84 00 00 00 00 00 48 8b 05 59 8d 20 00 c3 0f 1f 84 00 00 00 00 00 8b 05 8a d1 20 00 85 c0 75 16 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 57 f3 c3 0f 1f 44 00 00 41 54 55 49 89 d4 53 <4> [85.052206] RSP: 002b:00007fffa4a0a7f8 EFLAGS: 00000246 ORIG_RAX: 0000000000000001 <4> [85.052208] RAX: ffffffffffffffda RBX: 0000000000000001 RCX: 00007fc79c9d3281 <4> [85.052210] RDX: 0000000000000005 RSI: 00007fffa4a0a880 RDI: 0000000000000008 <4> [85.052212] RBP: 00007fffa4a0a820 R08: 0000000000000000 R09: 0000000000000000 <4> [85.052213] R10: 0000000000000000 R11: 0000000000000246 R12: 00007fc79c9bc718 <4> [85.052215] R13: 0000000000000003 R14: 00007fc79c9c1628 R15: 00007fc79c9bdd80 <4> [85.052223] irq event stamp: 71630 <4> [85.052226] hardirqs last enabled at (71629): [<ffffffff8197b64c>] _raw_spin_unlock_irqrestore+0x4c/0x60 <4> [85.052228] hardirqs last disabled at (71630): [<ffffffff8197b4bd>] _raw_spin_lock_irqsave+0xd/0x50 <4> [85.052231] softirqs last enabled at (70444): [<ffffffff81c0033a>] __do_softirq+0x33a/0x4b9 <4> [85.052234] softirqs last disabled at (70433): [<ffffffff810b51b1>] irq_exit+0xd1/0xe0 <4> [85.052264] WARNING: CPU: 2 PID: 2195 at drivers/gpu/drm/i915/intel_uncore.c:303 __gen6_gt_wait_for_thread_c0+0x6e/0xa0 [i915] Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Mika Kuoppala <mika.kuoppala@intel.com> Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190227114958.32438-1-chris@chris-wilson.co.uk
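A sketch of the forcewake-free check, assuming the raw (_FW) register accessors and the HEAD/TAIL address masks from i915_reg.h:

	static bool ring_is_idle(struct intel_engine_cs *engine)
	{
		struct drm_i915_private *dev_priv = engine->i915;
		u32 head, tail;

		/* An idle, powered-down engine reads back 0 for both
		 * registers, so matching raw reads imply an idle ring
		 * without paying for a forcewake dance. */
		head = I915_READ_FW(RING_HEAD(engine->mmio_base));
		tail = I915_READ_FW(RING_TAIL(engine->mmio_base));

		return (head & HEAD_ADDR) == (tail & TAIL_ADDR);
	}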
-
- 26 Feb, 2019: 3 commits
-
-
Submitted by Chris Wilson
Having weaned the interrupt handling off using a single global execution queue, we no longer need to emit a global_seqno. Note that we still have a few assumptions about execution order along engine timelines, but this removes the most obvious artefact! Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190226094922.31617-3-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson
Stop accessing the HWSP to read the global seqno, and stop tracking the mirror in the engine's execution timeline -- it is unused. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190226094922.31617-2-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson
To determine whether an engine has become 'stuck', we simply check whether or not it is still on the same seqno for several seconds. To keep this simple mechanism intact over the loss of a global seqno, we can simply add a new global heartbeat seqno instead. As we cannot know the sequence in which requests will then be completed, we use a primitive random number generator instead (with a cycle long enough to not matter over an interval of a few thousand requests between hangcheck samples). The alternative to using a dedicated seqno on every request is to issue a heartbeat request and query its progress through the system. Sadly this requires us to reduce struct_mutex so that we can issue requests without requiring that bkl. v2: And without the extra CS_STALL for the hangcheck seqno -- we don't need strict serialisation with what comes later, we just need to be sure we don't write the hangcheck seqno before our batch is flushed. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190226094922.31617-1-chris@chris-wilson.co.uk
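A sketch of the heartbeat generator, assuming the next_pseudo_random32() LCG helper from <linux/random.h>; the per-engine field name is illustrative:

	#include <linux/random.h>	/* next_pseudo_random32() */

	static u32 next_hangcheck_seqno(struct intel_engine_cs *engine)
	{
		/* seed * 1664525 + 1013904223 has a full 2^32 cycle, far
		 * longer than the few thousand requests emitted between
		 * hangcheck samples, so a stuck engine cannot alias with
		 * a busy one by accident. */
		engine->hangcheck.next_seqno =
			next_pseudo_random32(engine->hangcheck.next_seqno);

		return engine->hangcheck.next_seqno;
	}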
-
- 21 Feb, 2019: 1 commit
-
-
Submitted by Chris Wilson
At a few points in our uABI, we check to see if the driver is wedged and report -EIO back to the user in that case. However, as we perform the check and reset asynchronously (where once before they were both serialised by the struct_mutex), we may instead see the temporary wedging used to cancel inflight rendering to avoid a deadlock during reset (caused by either us timing out in our reset handler, i915_wedge_on_timeout, or with malice aforethought in intel_reset_prepare for a stuck modeset). If we suspect this is the case, that is we see a wedged driver *and* a reset in progress, then wait until the reset is resolved before reporting upon the wedged status. v2: might_sleep() (Mika) Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=109580 Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190220145637.23503-1-chris@chris-wilson.co.uk
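A sketch of the check, using the I915_WEDGED / I915_RESET_BACKOFF flag and reset_queue names from the i915 error state of this era (treat them as assumptions):

	int i915_terminally_wedged(struct i915_gpu_error *error)
	{
		might_sleep();	/* may wait for an in-flight reset */

		if (!test_bit(I915_WEDGED, &error->flags))
			return 0;

		/* Wedged *and* resetting: the wedge may only be the
		 * temporary one used to cancel inflight rendering, so
		 * wait for the reset to resolve before judging. */
		if (test_bit(I915_RESET_BACKOFF, &error->flags))
			wait_event(error->reset_queue,
				   !test_bit(I915_RESET_BACKOFF, &error->flags));

		return test_bit(I915_WEDGED, &error->flags) ? -EIO : 0;
	}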
-
- 12 Feb, 2019: 1 commit
-
-
Submitted by Chris Wilson
While this is mainly only useful for ELSP[0], it is definitely useful to know the current timeline seqno wrt the queued set of requests for that port, as this carries additional information above and beyond the near-defunct global_seqno and global HWSP. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190211131004.11634-1-chris@chris-wilson.co.uk
-
- 06 Feb, 2019: 1 commit
-
-
Submitted by Chris Wilson
Looking forward, we need to break the struct_mutex dependency on i915_gem_active. In the meantime, external use of i915_gem_active is quite beguiling, little do new users suspect that it implies a barrier as each request it tracks must be ordered wrt the previous one. As one of many, it can be used to track activity across multiple timelines, a shared fence, which fits our unordered request submission much better. We need to steer external users away from the singular, exclusive fence imposed by i915_gem_active to i915_active instead. As part of that process, we move i915_gem_active out of i915_request.c into i915_active.c to start separating the two concepts, and rename it to i915_active_request (both to tie it to the concept of tracking just one request, and to give it a longer, less appealing name). Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190205130005.2807-5-chris@chris-wilson.co.uk
-
- 30 Jan, 2019: 3 commits
-
-
Submitted by Chris Wilson
A few years ago, see commit 688e6c72 ("drm/i915: Slaughter the thundering i915_wait_request herd"), the issue of handling multiple clients waiting in parallel was brought to our attention. The requirement was that every client should be woken immediately upon its request being signaled, without incurring any cpu overhead. To handle certain fragility of our hw meant that we could not do a simple check inside the irq handler (some generations required almost unbounded delays before we could be sure of seqno coherency) and so request completion checking required delegation. Before commit 688e6c72, the solution was simple. Every client waiting on a request would be woken on every interrupt and each would do a heavyweight check to see if their request was complete. Commit 688e6c72 introduced an rbtree so that only the earliest waiter on the global timeline would be woken, and would wake the next and so on. (Along with various complications to handle requests being reordered along the global timeline, and also a requirement for kthread to provide a delegate for fence signaling that had no process context.) The global rbtree depends on knowing the execution timeline (and global seqno). Without knowing that order, we must instead check all contexts queued to the HW to see which may have advanced. We trim that list by only checking queued contexts that are being waited on, but still we keep a list of all active contexts and their active signalers that we inspect from inside the irq handler. By moving the waiters onto the fence signal list, we can combine the client wakeup with the dma_fence signaling (a dramatic reduction in complexity, but does require the HW being coherent, the seqno must be visible from the cpu before the interrupt is raised - we keep a timer backup just in case). Having previously fixed all the issues with irq-seqno serialisation (by inserting delays onto the GPU after each request instead of random delays on the CPU after each interrupt), we can rely on the seqno state to perform direct wakeups from the interrupt handler. This allows us to preserve our single context switch behaviour of the current routine, with the only downside that we lose the RT priority sorting of wakeups. In general, direct wakeup latency of multiple clients is about the same (about 10% better in most cases) with a reduction in total CPU time spent in the waiter (about 20-50% depending on gen). Average herd behaviour is improved, but at the cost of not delegating wakeups on task_prio. v2: Capture fence signaling state for error state and add comments to warm even the most cold of hearts. v3: Check if the request is still active before busywaiting v4: Reduce the amount of pointer misdirection with list_for_each_safe and using a local i915_request variable inside the loops v5: Add a missing pluralisation to a purely informative selftest message. References: 688e6c72 ("drm/i915: Slaughter the thundering i915_wait_request herd") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190129205230.19056-2-chris@chris-wilson.co.uk
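A sketch of the combined wakeup the commit describes: from the irq handler, walk only the contexts that have signalers attached and deliver the dma_fence signals directly; the list and lock names (b->signalers, ce->signals, b->irq_lock) are assumptions drawn from the description:

	static void intel_engine_breadcrumbs_irq(struct intel_engine_cs *engine)
	{
		struct intel_breadcrumbs *b = &engine->breadcrumbs;
		struct intel_context *ce;

		spin_lock(&b->irq_lock);
		list_for_each_entry(ce, &b->signalers, signal_link) {
			struct i915_request *rq, *rn;

			list_for_each_entry_safe(rq, rn, &ce->signals, signal_link) {
				if (!i915_request_completed(rq))
					break;	/* per-context lists stay ordered */

				list_del(&rq->signal_link);
				dma_fence_signal(&rq->fence);
			}
		}
		spin_unlock(&b->irq_lock);
	}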
-
Submitted by Chris Wilson
After noticing that we trigger preemption events for currently executing requests, as well as requests that complete before the preemption, and attempting to suppress those preemption events, it is wise to not consider the queue_priority to be authoritative. As we only track the maximum priority seen between dequeue passes, if the maximum priority request is no longer available for dequeuing (it completed or is even executing on another engine), we have no knowledge of the previous queue_priority as it would require us to keep a full history of enqueued requests -- but we already have that history in the priolists! Rename the queue_priority to queue_priority_hint so that we do not confuse it as being exactly the maximum priority in the queue, but merely an indication that we have seen a new maximum priority value and as such we should check whether it should preempt the currently running request. v2: s/preempt_priority_hint/queue_priority_hint/ as preempt implies it being only used for the singular task of preemption and not the wider question of waking up due to a change in the queue. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190129185452.20989-3-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson
To allow requests to forgo a common execution timeline, one question we need to be able to answer is "is this request running?". To track whether a request has started on HW, we can emit a breadcrumb at the beginning of the request and check its timeline's HWSP to see if the breadcrumb has advanced past the start of this request. (This is in contrast to the global timeline, where we need only ask if we are on the global timeline and if the timeline has advanced past the end of the previous request.) There is still confusion from a preempted request, which has already started but relinquished the HW to a high priority request. For the common case, this discrepancy should be negligible. However, for identification of hung requests, knowing which one was running at the time of the hang will be much more important. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190129185452.20989-2-chris@chris-wilson.co.uk
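A sketch of the CPU-side query this enables, assuming a per-request pointer into the timeline's HWSP slot and the wraparound-safe i915_seqno_passed() comparison:

	static bool i915_request_started(const struct i915_request *rq)
	{
		u32 seqno = READ_ONCE(*rq->hwsp_seqno);	/* this timeline's HWSP */

		/* Our seqno - 1 is the breadcrumb of the preceding request;
		 * once the HWSP passes it, this request has entered the HW. */
		return i915_seqno_passed(seqno, rq->fence.seqno - 1);
	}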
-
- 29 Jan, 2019: 3 commits
-
-
Submitted by Chris Wilson
Now that we have allocated ourselves a cacheline to store a breadcrumb, we can emit a write from the GPU into the timeline's HWSP of the per-context seqno as we complete each request. This drops the mirroring of the per-engine HWSP and allows each context to operate independently. We do not need to unwind the per-context timeline, and so requests are always consistent with the timeline breadcrumb, greatly simplifying the completion checks as we no longer need to be concerned about the global_seqno changing mid check. One complication though is that we have to be wary that the request may outlive the HWSP and so avoid touching the potentially dangling pointer after we have retired the fence. We also have to guard our access of the HWSP with RCU, the release of the obj->mm.pages should already be RCU-safe. At this point, we are emitting both per-context and global seqno and still using the single per-engine execution timeline for resolving interrupts. v2: s/fake_complete/mark_complete/ Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190128181812.22804-5-chris@chris-wilson.co.uk
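A sketch of the GPU-side write this adds on gen8: flush, then a post-sync write of the per-context seqno into the timeline's HWSP slot, then the interrupt. The PIPE_CONTROL flag names are from i915; the exact flush bits vary per platform and are trimmed here:

	static u32 *gen8_emit_breadcrumb(struct i915_request *rq, u32 *cs)
	{
		*cs++ = GFX_OP_PIPE_CONTROL(6);
		*cs++ = PIPE_CONTROL_QW_WRITE |
			PIPE_CONTROL_GLOBAL_GTT_IVB |
			PIPE_CONTROL_CS_STALL;
		*cs++ = rq->timeline->hwsp_offset;	/* per-context HWSP slot */
		*cs++ = 0;
		*cs++ = rq->fence.seqno;		/* per-context seqno */
		*cs++ = 0;
		*cs++ = MI_USER_INTERRUPT;
		*cs++ = MI_NOOP;
		return cs;
	}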
-
Submitted by Chris Wilson
Allocate a page for use as a status page by a group of timelines; as we only need a dword of storage for each (rounded up to the cacheline for safety) we can pack multiple timelines into the same page. Each timeline will then be able to track its own HW seqno. v2: Reuse the common per-engine HWSP for the solitary ringbuffer timeline, so that we do not have to emit (using per-gen specialised vfuncs) the breadcrumb into the distinct timeline HWSP and instead can keep on using the common MI_STORE_DWORD_INDEX. However, to maintain the sleight-of-hand for the global/per-context seqno switchover, we will store both temporarily (and so use a custom offset for the shared timeline HWSP until the switch over). v3: Keep things simple and allocate a page for each timeline, page sharing comes next. v4: I was caught repeating the same MI_STORE_DWORD_IMM over and over again in selftests. v5: And caught red handed copying create timeline + check. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190128181812.22804-3-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson
Currently we only allocate an object and vma if we are using a GGTT virtual HWSP, and a plain struct page for a physical HWSP. For convenience later on with global timelines, it will be useful to always have the status page being tracked by a struct i915_vma. Make it so. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190128102356.15037-4-chris@chris-wilson.co.uk
-
- 25 Jan, 2019: 3 commits
-
-
Submitted by Chris Wilson
Now that the submission backends are controlled via their own spinlocks, with a wave of a magic wand we can lift the struct_mutex requirement around GPU reset. That is, we allow the submission frontend (userspace) to keep on submitting while we process the GPU reset, as we can suspend the backend independently. The major change is around the backoff/handoff strategy for performing the reset. With no mutex deadlock, we no longer have to coordinate with any waiter, and just perform the reset immediately. Testcase: igt/gem_mmap_gtt/hang # regresses Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190125132230.22221-3-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson
Now that we know we measure the size of the engine->emit_breadcrumb() correctly, we can remove the previous manual counting. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190125120005.25191-1-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson
Instead of tediously and fragilely counting up the number of dwords required to emit the breadcrumb to seal a request, fake a request and measure it automatically once during engine setup. The downside is that this requires a fair amount of mocking to create a proper breadcrumb. Still, it should be less error prone in future as the breadcrumb size fluctuates! Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190125100520.20163-1-chris@chris-wilson.co.uk
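A sketch of the measurement under stated assumptions: a mocked-up "frame" bundling a fake request, ring and timeline (the type and its setup are illustrative, not the commit's actual helpers), into which the engine's breadcrumb vfunc is emitted and the dwords counted:

	static int measure_breadcrumb_dw(struct intel_engine_cs *engine)
	{
		struct measure_breadcrumb *frame;	/* fake rq + ring + timeline */
		u32 *cs;
		int dw;

		frame = kzalloc(sizeof(*frame), GFP_KERNEL);
		if (!frame)
			return -ENOMEM;

		/* ...mock up frame->rq to point at a scratch ring... */

		cs = engine->emit_breadcrumb(&frame->rq, frame->cs);
		dw = cs - frame->cs;	/* actual size, not hand-counted */

		kfree(frame);
		return dw;
	}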
-
- 18 Jan, 2019: 1 commit
-
-
Submitted by Chris Wilson
Since commit d4ccceb0 ("drm/i915/icl: Ringbuffer interrupt handling") we have required a mechanism to avoid touching the interrupt hardware for breadcrumbs, superseding our mock interface for selftests. The residual problem (ideas welcome) is in probing the mock ring registers for ring_is_idle. Hmm, maybe we should just install mock handlers for i915->uncore.mmio__write and friends? Only problem is that we would have to truly mock some expected reads. :( References: d4ccceb0 ("drm/i915/icl: Ringbuffer interrupt handling") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190118112225.13780-1-chris@chris-wilson.co.uk
-
- 17 Jan, 2019: 2 commits
-
-
Submitted by Jani Nikula
Mixed C99 and kernel type use is getting ugly. Prefer kernel types. sed -i 's/\buint\(8\|16\|32\|64\)_t\b/u\1/g' Minor checkpatch fixes sprinkled on top of the changed lines. Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Reviewed-by: José Roberto de Souza <jose.souza@intel.com> Signed-off-by: Jani Nikula <jani.nikula@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/14ed72e7f04c9340a057855c5950b54811f8a477.1547629303.git.jani.nikula@intel.com
-
Submitted by Chris Wilson
Currently the code to reset the GPU and our state is spread widely across a few files. Pull the logic together into a common file. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Acked-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190116153304.787-1-chris@chris-wilson.co.uk
-
- 16 Jan, 2019: 1 commit
-
-
Submitted by Chris Wilson
Move the debug pretty printer into a standalone routine prior to extending it in upcoming feature work. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190115212948.10423-1-chris@chris-wilson.co.uk
-
- 15 Jan, 2019: 2 commits
-
-
Submitted by Chris Wilson
Keep track of the temporary rpm wakerefs used for user access to the device, so that we can cancel them upon release and clearly identify any leaks. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Jani Nikula <jani.nikula@intel.com> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190114142129.24398-10-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson
The majority of runtime-pm operations are bounded and scoped within a function; these are easy to verify that the wakerefs are handled correctly. We can employ the compiler to help us, and reduce the number of wakerefs tracked when debugging, by passing around cookies provided by the various rpm_get functions to their rpm_put counterpart. This makes the pairing explicit, and given the required wakeref cookie the compiler can verify that we pass an initialised value to the rpm_put (quite handy for double checking error paths). For regular builds, the compiler should be able to eliminate the unused local variables and the program growth should be minimal. Fwiw, it came out as a net improvement as gcc was able to refactor rpm_get and rpm_get_if_in_use together. v2: Just s/rpm_put/rpm_put_unchecked/ everywhere, leaving the manual mark up for smaller more targeted patches. v3: Mention the cookie in Returns Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Jani Nikula <jani.nikula@intel.com> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190114142129.24398-2-chris@chris-wilson.co.uk
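A sketch of the cookie pairing this introduces: the value returned by the rpm_get must be fed back to the rpm_put, letting the compiler catch unbalanced or uninitialised paths; touch_hardware() is a hypothetical stand-in for the scoped HW access:

	static int example_user(struct drm_i915_private *i915)
	{
		intel_wakeref_t wakeref;
		int err;

		wakeref = intel_runtime_pm_get(i915);
		err = touch_hardware(i915);		/* hypothetical HW access */
		intel_runtime_pm_put(i915, wakeref);	/* explicit pairing */

		return err;
	}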
-
- 03 Jan, 2019: 1 commit
-
-
Submitted by Chris Wilson
When we first introduced the reset to sanitize the GPU on taking over from the BIOS and before returning control to third parties (the BIOS!), we restricted it to only systems utilizing HW contexts as we were uncertain of how stable our reset mechanism truly was. We now have reasonable coverage across all machines that expose a GPU reset method, and so we should be safe to sanitize the GPU state everywhere. v2: We _have_ to skip the reset if it would clobber the display. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190103112104.19561-1-chris@chris-wilson.co.uk
-
- 02 Jan, 2019: 1 commit
-
-
Submitted by Jani Nikula
First move the low hanging fruit, the fields that are only initialized at runtime. Use RUNTIME_INFO() exclusively to access the fields. Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Jani Nikula <jani.nikula@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/c24fe7a4b0492a888690c46814c0ff21ce2f12b1.1546267488.git.jani.nikula@intel.com
-
- 31 Dec, 2018: 1 commit
-
-
Submitted by Chris Wilson
Now that we have eliminated the CPU-side irq_seqno_barrier by moving the delays on the GPU before emitting the MI_USER_INTERRUPT, we can remove the engine->irq_seqno_barrier infrastructure. Though intentionally slowing down the GPU is nasty, so is the code we can now remove! Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20181228171641.16531-6-chris@chris-wilson.co.uk
-
- 28 Dec, 2018: 1 commit
-
-
Submitted by Chris Wilson
The writing is on the wall for the existence of a single execution queue along each engine, and as a consequence we will not be able to track dependencies along the HW queue itself, i.e. we will not be able to use HW semaphores on gen7 as they use a global set of registers (and unlike gen8+ we can not effectively target memory to keep per-context seqno and dependencies). On the positive side, when we implement request reordering for gen7 we also can not presume a simple execution queue and would also require removing the current semaphore generation code. So this brings us another step closer to request reordering for ringbuffer submission! The negative side is that using interrupts to drive inter-engine synchronisation is much slower (4us -> 15us to do a nop on each of the 3 engines on ivb). This is much better than it was at the time of introducing the HW semaphores and, equally important, userspace weaned itself off intermixing dependent BLT/RENDER operations (the prime culprit was glyph rendering in UXA). So while we regress the microbenchmarks, it should not impact the user. References: https://bugs.freedesktop.org/show_bug.cgi?id=108888 Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20181228140736.32606-2-chris@chris-wilson.co.uk
-
- 18 Dec, 2018: 1 commit
-
-
Submitted by Chris Wilson
Having completed a test run of gem_eio across all machines in CI we also observe the phenomenon (of lost interrupts after resetting the GPU) on gen3 machines as well as the previously sighted gen6/gen7. Let's apply the same HWSTAM workaround that was effective for gen6+ to all, as although we haven't seen the same failure on gen4/5 it seems prudent to keep the code the same. As a consequence we can remove the extra setting of HWSTAM and apply the register from a single site. v2: Delazy and move the HWSTAM into its own function v3: Mask off all HWSP writes on driver unload and engine cleanup. v4: And what about the physical hwsp? v5: No, engine->init_hw() is not called from driver_init_hw(), don't be daft. Really scrub HWSTAM as early as we can in driver_init_mmio() v6: Rename set_hwsp as it was setting the mask not the hwsp register. v7: Ville pointed out that although vcs(bsd) was introduced for g4x/ilk, per-engine HWSTAM was not introduced until gen6! References: https://bugs.freedesktop.org/show_bug.cgi?id=108735 Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20181218102712.11058-1-chris@chris-wilson.co.uk
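A sketch of the single-site scrub, assuming the HWSTAM / RING_HWSTAM register macros from i915_reg.h and honouring Ville's note that per-engine HWSTAM only exists from gen6:

	static void intel_engines_sanitize_hwstam(struct drm_i915_private *dev_priv)
	{
		/* Mask off all HWSP writes as early as driver_init_mmio(). */
		if (INTEL_GEN(dev_priv) >= 6) {
			struct intel_engine_cs *engine;
			enum intel_engine_id id;

			for_each_engine(engine, dev_priv, id)
				I915_WRITE(RING_HWSTAM(engine->mmio_base),
					   0xffffffff);
		} else {
			I915_WRITE(HWSTAM, 0xffffffff);	/* single instance pre-gen6 */
		}
	}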
-
- 13 Dec, 2018: 3 commits
-
-
Submitted by Lucas De Marchi
Instead of using IS_GEN() for consecutive gen checks, let's pass the range to IS_GEN_RANGE(). By code inspection these were the ranges deemed necessary for spatch:

@@
expression e;
@@
(
- IS_GEN(e, 3) || IS_GEN(e, 2)
+ IS_GEN_RANGE(e, 2, 3)
|
- IS_GEN(e, 3) || IS_GEN(e, 4)
+ IS_GEN_RANGE(e, 3, 4)
|
- IS_GEN(e, 5) || IS_GEN(e, 6)
+ IS_GEN_RANGE(e, 5, 6)
|
- IS_GEN(e, 6) || IS_GEN(e, 7)
+ IS_GEN_RANGE(e, 6, 7)
|
- IS_GEN(e, 7) || IS_GEN(e, 8)
+ IS_GEN_RANGE(e, 7, 8)
|
- IS_GEN(e, 8) || IS_GEN(e, 9)
+ IS_GEN_RANGE(e, 8, 9)
|
- IS_GEN(e, 10) || IS_GEN(e, 9)
+ IS_GEN_RANGE(e, 9, 10)
|
- IS_GEN(e, 9) || IS_GEN(e, 10)
+ IS_GEN_RANGE(e, 9, 10)
)

After conversion, checking we don't have any missing IS_GEN_RANGE() || IS_GEN() was also done. Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com> Reviewed-by: Jani Nikula <jani.nikula@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20181212181044.15886-3-lucas.demarchi@intel.com
-
Submitted by Lucas De Marchi
Define IS_GEN() similarly to our IS_GEN_RANGE(), but use gen instead of gen_mask to do the comparison. Now callers can pass the gen as a parameter, so we don't require one macro for each gen. The following spatch was used to convert the users of these macros:

@@
expression e;
@@
(
- IS_GEN2(e)
+ IS_GEN(e, 2)
|
- IS_GEN3(e)
+ IS_GEN(e, 3)
|
- IS_GEN4(e)
+ IS_GEN(e, 4)
|
- IS_GEN5(e)
+ IS_GEN(e, 5)
|
- IS_GEN6(e)
+ IS_GEN(e, 6)
|
- IS_GEN7(e)
+ IS_GEN(e, 7)
|
- IS_GEN8(e)
+ IS_GEN(e, 8)
|
- IS_GEN9(e)
+ IS_GEN(e, 9)
|
- IS_GEN10(e)
+ IS_GEN(e, 10)
|
- IS_GEN11(e)
+ IS_GEN(e, 11)
)

v2: use IS_GEN rather than GT_GEN and compare to info.gen rather than using the bitmask Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com> Reviewed-by: Jani Nikula <jani.nikula@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20181212181044.15886-2-lucas.demarchi@intel.com
-
Submitted by Lucas De Marchi
RANGE makes it longer, but clearer. We are also going to add a macro to check an individual gen, so add the _RANGE suffix here. Diff generated with: sed 's/IS_GEN(/IS_GEN_RANGE(/g' drivers/gpu/drm/i915/{*/,}*.{c,h} -i v2: use IS_GEN rather than GT_GEN Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Jani Nikula <jani.nikula@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20181212181044.15886-1-lucas.demarchi@intel.com
-
- 12 Dec, 2018: 1 commit
-
-
Submitted by Chris Wilson
Currently we allocate a scratch page for each engine, but since we only ever write into it for post-sync operations, it is not exposed to userspace nor do we care for coherency. As we then do not care about its contents, we can use one page for all, reducing our allocations and avoiding complications by not assuming per-engine isolation. For later use, it simplifies engine initialisation (by removing the allocation that required struct_mutex!) and means that we can always rely on there being a scratch page. v2: Check that we allocated a large enough scratch for I830 w/a Fixes: 06e562e7f515 ("drm/i915/ringbuffer: Delay after EMIT_INVALIDATE for gen4/gen5") # v4.18.20 Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=108850 Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20181204141522.13640-1-chris@chris-wilson.co.uk Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: <stable@vger.kernel.org> # v4.18.20+ (cherry picked from commit 51797499) [Joonas: Use new function in gen9_init_indirectctx_bb too] Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
-
- 07 Dec, 2018: 1 commit
-
-
Submitted by Christian König
For a lot of use cases we need 64bit sequence numbers. Currently drivers overload the dma_fence structure to store the additional bits. Stop doing that and make the sequence number in the dma_fence always 64bit. For compatibility with hardware which can do only 32bit sequences, the comparison in __dma_fence_is_later only takes the lower 32bits as significant when the upper 32bits are all zero. v2: change the logic in __dma_fence_is_later Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Chunming Zhou <david1.zhou@amd.com> Link: https://patchwork.freedesktop.org/patch/266927/
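A sketch of the compatibility comparison described above: a plain 64bit compare once the upper bits are in use, otherwise a wraparound-safe 32bit compare for hardware limited to 32bit seqnos:

	static inline bool __dma_fence_is_later(u64 f1, u64 f2)
	{
		/* Any non-zero upper bits means both sides are true 64bit. */
		if (upper_32_bits(f1) || upper_32_bits(f2))
			return f1 > f2;

		/* Signed delta handles 32bit wraparound. */
		return (int)(lower_32_bits(f1) - lower_32_bits(f2)) > 0;
	}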
-
- 05 Dec, 2018: 1 commit
-
-
Submitted by Tvrtko Ursulin
We stopped re-applying the GT workarounds after engine reset since commit 59b449d5 ("drm/i915: Split out functions for different kinds of workarounds"). The issue with this is that some of the GT workarounds live in the MMIO space which gets lost during engine resets. So far the registers in the 0x2xxx and 0xbxxx address ranges have been identified to be affected. This losing of applied workarounds has obvious negative effects and can even lead to hard system hangs (see the linked Bugzilla). Rather than just restoring this re-application, because we have also observed that it is not safe to just re-write all GT workarounds after engine resets (the GPU might be live and weird hardware states can happen), we introduce a new class of per-engine workarounds and move only the affected GT workarounds over. Using the framework introduced in the previous patch, after an engine reset we therefore re-apply only the workarounds living in the affected MMIO address ranges. v2: * Move Wa_1406609255:icl to engine workarounds as well. * Rename API. (Chris Wilson) * Drop redundant IS_KABYLAKE. (Chris Wilson) * Re-order engine wa/ init so latest platforms are first. (Rodrigo Vivi) Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Bugzilla: https://bugzilla.freedesktop.org/show_bug.cgi?id=107945 Fixes: 59b449d5 ("drm/i915: Split out functions for different kinds of workarounds") Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Jani Nikula <jani.nikula@linux.intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> Cc: intel-gfx@lists.freedesktop.org Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20181203133341.10258-1-tvrtko.ursulin@linux.intel.com (cherry picked from commit 4a15c75c) Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
-
- 04 Dec, 2018: 2 commits
-
-
Submitted by Chris Wilson
Currently we allocate a scratch page for each engine, but since we only ever write into it for post-sync operations, it is not exposed to userspace nor do we care for coherency. As we then do not care about its contents, we can use one page for all, reducing our allocations and avoiding complications by not assuming per-engine isolation. For later use, it simplifies engine initialisation (by removing the allocation that required struct_mutex!) and means that we can always rely on there being a scratch page. v2: Check that we allocated a large enough scratch for I830 w/a Fixes: 06e562e7f515 ("drm/i915/ringbuffer: Delay after EMIT_INVALIDATE for gen4/gen5") # v4.18.20 Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=108850 Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20181204141522.13640-1-chris@chris-wilson.co.uk Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: <stable@vger.kernel.org> # v4.18.20+
-
Submitted by Tvrtko Ursulin
Convert the per context workaround handling code to run against the newly introduced common workaround framework and fuse the two to use the existing smarter list add helper, the one which does the sorted insert and merges registers where possible. This completes the migration of all four classes of workarounds onto the common framework. Existing macros are kept untouched for smaller code churn. v2: * Rename the list to ctx_wa_list and move it from dev_priv to engine. v3: * API rename and parameter tweaking. (Chris Wilson) Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20181203133357.10341-1-tvrtko.ursulin@linux.intel.com
-