- 04 December 2019, 3 commits
-
-
Submitted by Abdiel Janulgue

This is really just an alias of mmap_gtt. The 'mmap offset' nomenclature comes from the value returned by this ioctl, which is the offset into the device fd that userspace uses with mmap(2). mmap_gtt was our initial mmap_offset implementation; this extends our CPU mmap support to allow additional fault handlers that depend on the object's backing pages. Note that we multiplex mmap_gtt and mmap_offset through the same ioctl, and use the zero-extending behaviour of drm to differentiate between them when we inspect the flags. To support multiple mmap types on an object we need to support multiple mmap_offsets for an object (each offset in the global device address space corresponding to a unique instance of the object for a file + mmap type). As we drop the simplified drm core idea of a single mmap_offset, we need to provide replacement hooks for the dumb mmap interface as well.

Link: https://gitlab.freedesktop.org/mesa/mesa/merge_requests/1675
Testcase: igt/gem_mmap_offset
Signed-off-by: Abdiel Janulgue <abdiel.janulgue@linux.intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20191204120032.3682839-1-chris@chris-wilson.co.uk
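From the userspace side, the flow the commit describes is "ask the ioctl for a fake offset, then feed it to mmap(2) on the DRM fd". A minimal hypothetical sketch follows; the struct and flag names (drm_i915_gem_mmap_offset, I915_MMAP_OFFSET_WB, DRM_IOCTL_I915_GEM_MMAP_OFFSET) are assumptions modelled on the later uapi naming, since this patch itself multiplexes the existing mmap_gtt ioctl rather than defining a new one.

    #include <stdint.h>
    #include <sys/mman.h>
    #include <xf86drm.h>
    #include <drm/i915_drm.h>

    /* Hypothetical helper: map a GEM object through the CPU fault handler. */
    static void *bo_map_cpu(int fd, uint32_t handle, size_t size)
    {
        struct drm_i915_gem_mmap_offset arg = {
            .handle = handle,
            .flags = I915_MMAP_OFFSET_WB, /* ask for a write-back CPU mapping (assumed flag) */
        };

        if (drmIoctl(fd, DRM_IOCTL_I915_GEM_MMAP_OFFSET, &arg))
            return MAP_FAILED;

        /* arg.offset is only meaningful as an offset into the device fd */
        return mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, arg.offset);
    }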
-
Submitted by Chris Wilson

And Haswell still occasionally forgets it is meant to be using a new page directory, so repeat ourselves a little louder.

<7> [509.919864] heartbeat rcs0 heartbeat {prio:-2147483645} not ticking
<7> [509.919895] heartbeat Awake? 8
<7> [509.919903] heartbeat Barriers?: no
<7> [509.919912] heartbeat Heartbeat: 3008 ms ago
<7> [509.919930] heartbeat Reset count: 0 (global 0)
<7> [509.919937] heartbeat Requests:
<7> [509.921008] heartbeat active a7eb:56e1* @ 5847ms:
<7> [509.921157] heartbeat ring->start: 0x00001000
<7> [509.921164] heartbeat ring->head: 0x00001610
<7> [509.921170] heartbeat ring->tail: 0x000023d8
<7> [509.921176] heartbeat ring->emit: 0x000023d8
<7> [509.921182] heartbeat ring->space: 0x00002570
<7> [509.921189] heartbeat ring->hwsp: 0x7fffe100
<7> [509.921197] heartbeat [head 1628, postfix 1738, tail 1750, batch 0xffffffff_ffffffff]:
<7> [509.921289] heartbeat [0000] 7a000002 00100002 00000000 00000000 7a000002 01154c1e 7ffff080 00000000
<7> [509.921299] heartbeat [0020] 11000001 00002220 ffffffff 12400001 00002220 7ffff000 00000000 11000001
<7> [509.921308] heartbeat [0040] 00002228 6e900000 7a000002 00100002 00000000 00000000 7a000002 01154c1e
<7> [509.921317] heartbeat [0060] 7ffff080 00000000 12400001 00002228 7ffff000 00000000 7a000002 00100002
<7> [509.921326] heartbeat [0080] 00000000 00000000 7a000002 01154c1e 7ffff080 00000000 7a000002 001010a1
<7> [509.921335] heartbeat [00a0] 7ffff080 00000000 04000000 11000005 00022050 00010001 00012050 00010001
<7> [509.921345] heartbeat [00c0] 0001a050 00010001 00000000 0c000000 459a110c 00000000 11000005 00022050
<7> [509.921354] heartbeat [00e0] 00010000 00012050 00010000 0001a050 00010000 12400001 0001a050 7ffff000
<7> [509.921363] heartbeat [0100] 00000000 04000001 18802100 00000000 7a000002 011050a1 7fffe100 000056e1
<7> [509.921370] heartbeat [0120] 01000000 00000000
<7> [509.921538] heartbeat MMIO base: 0x00002000
<7> [509.921682] heartbeat CCID: 0x3fa0110d
<7> [509.922342] heartbeat RING_START: 0x00001000
<7> [509.922353] heartbeat RING_HEAD: 0x00001628
<7> [509.922366] heartbeat RING_TAIL: 0x000023d8
<7> [509.922381] heartbeat RING_CTL: 0x00003001
<7> [509.922396] heartbeat RING_MODE: 0x00004000
<7> [509.922408] heartbeat RING_IMR: ffffffde
<7> [509.922421] heartbeat ACTHD: 0x00000000_30e01628
<7> [509.922434] heartbeat BBADDR: 0x00000000_00004004
<7> [509.922446] heartbeat DMA_FADDR: 0x00000000_00002800
<7> [509.922458] heartbeat IPEIR: 0x00000000
<7> [509.922470] heartbeat IPEHR: 0x780c0000
<7> [509.922642] heartbeat PP_DIR_BASE: 0x6e700000
<7> [509.922652] heartbeat PP_DIR_BASE_READ: 0x00000000
<7> [509.922662] heartbeat PP_DIR_DCLV: 0xffffffff
<7> [509.922678] heartbeat E a7eb:56e1* @ 5849ms:
<7> [509.922689] heartbeat E a7eb:56e2- @ 5849ms:
<7> [509.922698] heartbeat E a7eb:56e3 @ 5848ms:
<7> [509.922707] heartbeat E a7eb:56e4 @ 5848ms:
<7> [509.922715] heartbeat E a7eb:56e5 @ 5847ms:
<7> [509.922724] heartbeat E a7eb:56e6 @ 5846ms:
<7> [509.922735] heartbeat E a7eb:56e7 @ 5846ms:
<7> [509.922744] heartbeat ...skipping 4 executing requests...
<7> [509.922754] heartbeat E a7eb:56ec @ 3010ms:
<7> [509.922796] heartbeat HWSP:
<7> [509.922807] heartbeat [0000] 00000001 00000000 00000000 00000000 00000000 00000000 00000000 00000000
<7> [509.922817] heartbeat [0020] 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
<7> [509.922826] heartbeat *
<7> [509.922836] heartbeat [0100] 000056e0 00000000 00000000 00000000 00000000 00000000 00000000 00000000
<7> [509.922845] heartbeat [0120] 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
<7> [509.922851] heartbeat *
<7> [509.922870] heartbeat Idle? no
<7> [509.922878] heartbeat Signals:
<7> [509.923000] heartbeat [a7eb:56e2] @ 5850ms

Here, we have a failed context restore after the PD switch, but note that the PP_DIR_BASE register does not match the LRI in the ring. Bump it to 8^W 4 loops, and with that Baytrail starts passing the sanity checks.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191203211631.3167430-1-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson

Rather than assuming the context is invalid if and only if engine->default_state is not set, track when we know the context has valid state: either because we have copied the default_state, or because we have completed a context switch to save the HW state.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191203124155.3019926-1-chris@chris-wilson.co.uk
-
- 03 December 2019, 5 commits
-
-
Submitted by Chris Wilson

Only along the submission path can we guarantee that the locked request is indeed from a foreign engine, and so the nesting of engine/rq is permissible. On the submission tasklet (process_csb()), we may find ourselves competing with the normal nesting of rq/engine, invalidating our nesting. As we only use the spinlock for debug purposes, skip the debug if we cannot acquire the spinlock for safe validation; catching 99% of the bugs is better than causing a hard lockup.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Fixes: c95d31c3 ("drm/i915/execlists: Lock the request while validating it during promotion")
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191203152631.3107653-2-chris@chris-wilson.co.uk
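The shape of the fix is simply to make the debug-only validation best-effort. A minimal sketch, with the lock choice and the assertion body assumed rather than copied from the patch:

    /* Sketch: only run the sanity checks if the lock can be taken without
     * risking the engine/rq vs rq/engine nesting inversion described above. */
    static void assert_pending_valid(struct i915_request *rq)
    {
        unsigned long flags;

        if (!spin_trylock_irqsave(&rq->lock, flags))
            return; /* contended: skip the debug checks rather than deadlock */

        /* ... GEM_BUG_ON()-style validation of the pending request ... */

        spin_unlock_irqrestore(&rq->lock, flags);
    }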
-
Submitted by Chris Wilson

Check that the pending request submission is valid: that it at least has a reference for the submission and that the request is on the active list.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191203152631.3107653-1-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson

Once inside a request, inside the timeline->mutex, pinning is verboten.

<4> [896.032829] ======================================================
<4> [896.032831] WARNING: possible circular locking dependency detected
<4> [896.032835] 5.4.0-rc8-CI-Patchwork_15533+ #1 Tainted: G U
<4> [896.032838] ------------------------------------------------------
<4> [896.032841] gem_exec_parall/3720 is trying to acquire lock:
<4> [896.032844] ffff888401863270 (&kernel#2){+.+.}, at: i915_request_create+0x16/0x1c0 [i915]
<4> [896.032915] but task is already holding lock:
<4> [896.032917] ffff8883ec1c93c0 (&vm->mutex){+.+.}, at: i915_vma_pin+0xf3/0x11c0 [i915]
<4> [896.032952] which lock already depends on the new lock.
<4> [896.032954] the existing dependency chain (in reverse order) is:
<4> [896.032956] -> #1 (&vm->mutex){+.+.}:
<4> [896.032961] __mutex_lock+0x9a/0x9d0
<4> [896.032995] i915_vma_pin+0xf3/0x11c0 [i915]
<4> [896.033033] intel_renderstate_emit+0xb9/0x9e0 [i915]
<4> [896.033081] i915_gem_init+0x5a9/0xa50 [i915]
<4> [896.033112] i915_driver_probe+0xb00/0x15f0 [i915]
<4> [896.033144] i915_pci_probe+0x43/0x1c0 [i915]
<4> [896.033149] pci_device_probe+0x9e/0x120
<4> [896.033154] really_probe+0xea/0x420
<4> [896.033158] driver_probe_device+0x10b/0x120
<4> [896.033161] device_driver_attach+0x4a/0x50
<4> [896.033164] __driver_attach+0x97/0x130
<4> [896.033168] bus_for_each_dev+0x74/0xc0
<4> [896.033171] bus_add_driver+0x142/0x220
<4> [896.033174] driver_register+0x56/0xf0
<4> [896.033178] do_one_initcall+0x58/0x2ff
<4> [896.033183] do_init_module+0x56/0x1f8
<4> [896.033187] load_module+0x243e/0x29f0
<4> [896.033190] __do_sys_finit_module+0xe9/0x110
<4> [896.033194] do_syscall_64+0x4f/0x210
<4> [896.033197] entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4> [896.033200] -> #0 (&kernel#2){+.+.}:
<4> [896.033206] __lock_acquire+0x1328/0x15d0
<4> [896.033209] lock_acquire+0xa7/0x1c0
<4> [896.033213] __mutex_lock+0x9a/0x9d0
<4> [896.033255] i915_request_create+0x16/0x1c0 [i915]
<4> [896.033287] intel_engine_flush_barriers+0x4c/0x100 [i915]
<4> [896.033327] ggtt_flush+0x37/0x60 [i915]
<4> [896.033366] i915_gem_evict_something+0x46b/0x5a0 [i915]
<4> [896.033407] i915_gem_gtt_insert+0x21d/0x6a0 [i915]
<4> [896.033449] i915_vma_pin+0xb36/0x11c0 [i915]
<4> [896.033488] gen6_ppgtt_pin+0xd5/0x170 [i915]
<4> [896.033523] ring_context_pin+0x2e/0xc0 [i915]
<4> [896.033554] __intel_context_do_pin+0x6b/0x190 [i915]
<4> [896.033591] i915_gem_do_execbuffer+0x1814/0x26c0 [i915]
<4> [896.033627] i915_gem_execbuffer2_ioctl+0x11b/0x460 [i915]
<4> [896.033632] drm_ioctl_kernel+0xa7/0xf0
<4> [896.033635] drm_ioctl+0x2e1/0x390
<4> [896.033638] do_vfs_ioctl+0xa0/0x6f0
<4> [896.033641] ksys_ioctl+0x35/0x60
<4> [896.033644] __x64_sys_ioctl+0x11/0x20
<4> [896.033647] do_syscall_64+0x4f/0x210
<4> [896.033650] entry_SYSCALL_64_after_hwframe+0x49/0xbe

Lift the object allocation and pin prior to the request construction.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191202204316.2665847-1-chris@chris-wilson.co.uk
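In effect the renderstate setup becomes "pin first, then open the request", so vm->mutex is never taken while the kernel timeline mutex is held. A simplified sketch of that ordering (not copied from the patch, error handling trimmed):

    /* Sketch: acquire the GGTT/ppGTT binding up front... */
    err = i915_vma_pin(vma, 0, 0, PIN_GLOBAL | PIN_HIGH);
    if (err)
        return err;

    /* ...and only then enter the request/timeline mutex, where pinning is verboten */
    rq = i915_request_create(engine->kernel_context);
    if (IS_ERR(rq)) {
        i915_vma_unpin(vma);
        return PTR_ERR(rq);
    }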
-
Submitted by Chris Wilson

Quite simply, we only need to check for prior corruption when enabling rc6, i.e. on module load and resume, so do so by hooking into the common entry points.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Andi Shyti <andi.shyti@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191202110836.2342685-2-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson

Now that we have soft-rc6 in place, we can use that instead of forcewake to disable rc6 while active; this is preferred by a few microbenchmarks.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Andi Shyti <andi.shyti@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191202110836.2342685-1-chris@chris-wilson.co.uk
-
- 30 November 2019, 3 commits
-
-
Submitted by Chris Wilson

Move our "wait for the PD load to complete" paranoia to before the MI_SET_CONTEXT, just in case the context restore tries to access local addresses.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191130120503.1609483-1-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson

After much hair pulling, resort to preallocating the ppGTT entries on init to circumvent the apparent lack of PD invalidate following the write to PP_DCLV upon switching mm between contexts (and here the same context after binding new objects). However, the details of that PP_DCLV invalidate are still unknown, and it appears we need to reload the mm twice to cover over a timing issue. Worrying.

Fixes: 3dc007fe ("drm/i915/gtt: Downgrade gen7 (ivb, byt, hsw) back to aliasing-ppgtt")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191129201328.1398583-1-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson

As we only cancel the timers asynchronously, they may still be running on another CPU as we shut down, raising one last softirq. So be safe and make sure the tasklet is flushed before destroying the engine's memory.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191129172542.1222810-1-chris@chris-wilson.co.uk
-
- 29 November 2019, 2 commits
-
-
Submitted by Chris Wilson

Wait on only the last request on the kernel_context after emitting a barrier, so that we do not wait for everything in general and, by doing so, cause an accidental emission of the barrier!

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=112405
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191129103455.744389-1-chris@chris-wilson.co.uk
-
Submitted by Michel Thierry

Implement Wa_1604555607 (set the DS pairing timer to 128 cycles). FF_MODE2 is part of the register state context; that is why it is implemented here. On the TGL A0 stepping, the FF_MODE2 register readback is broken, hence the WA verification is disabled.

v2: Rebased on top of the WA refactoring (Oscar)
v3: Correctly add to ctx_workarounds_init (Michel)
v4: uncore read is used [Tvrtko]; macros are used for the MASK definition [Chris]
v5: Skip the Wa_1604555607 verification [Ram]; i915 ptr retrieved from engine [Tvrtko]
v6: Added wa_add as a wrapper for __wa_add [Chris]; wa_add is called directly instead of the new wrapper [Tvrtko]

BSpec: 19363
HSDES: 1604555607
Signed-off-by: Michel Thierry <michel.thierry@intel.com>
Signed-off-by: Ramalingam C <ramlingam.c@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> [v5]
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191128021005.3350-1-ramalingam.c@intel.com
-
- 28 November 2019, 3 commits
-
-
Submitted by Chris Wilson

We have a case of a mysteriously absent pulse, so dump the engine details to see if we can find out what happened to it.

References: https://bugs.freedesktop.org/show_bug.cgi?id=112405
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191128102546.3857140-1-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson

The design of our interrupt handlers is that we first ack the receipt of the interrupt, inside the critical section where the master interrupt control is off and other CPUs cannot start processing the next interrupt, and then process the interrupt events afterwards. However, Icelake introduced a whole new set of banked GT_IIR that are inherently serialised and slow to retrieve the IIR, and must be processed within the critical section. We can still push our breadcrumbs out of this critical section by using our irq_worker. On bdw+, this should not make too much of a difference as we only slightly defer the breadcrumbs, but on icl+ this should make a big difference to our throughput of interrupts from concurrently executing engines.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191127115813.3345823-1-chris@chris-wilson.co.uk
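A rough sketch of the idea, with the breadcrumbs irq_work field name assumed rather than taken from the patch: while the banked IIRs are being drained, the handler merely kicks a lockless irq_work and returns, and the signalling itself happens outside the critical section.

    /* Sketch: keep the banked-IIR critical section short; defer signalling. */
    static void cs_irq(struct intel_engine_cs *engine, u16 iir)
    {
        if (iir & GT_RENDER_USER_INTERRUPT)
            irq_work_queue(&engine->breadcrumbs.irq_work); /* assumed field */
    }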
-
Submitted by Chris Wilson

The expected downside to commit 58b4c1a0 ("drm/i915: Reduce nested prepare_remote_context() to a trylock") was that it would need to return -EAGAIN to userspace in order to resolve potential mutex inversion. Such an unsightly round trip is unnecessary if we can atomically insert a barrier into the i915_active_fence, so make it happen.

Currently, we use the timeline->mutex (or some other named outer lock) to order insertion into the i915_active_fence (and so individual nodes of i915_active). Inside __i915_active_fence_set, we then only need to serialise with the interrupt handler in order to claim the timeline for ourselves. However, if we remove the outer lock, we need to ensure the order is intact not only between multiple threads trying to insert themselves into the timeline, but also with the interrupt handler completing the previous occupant. We use xchg() on insert so that we have an ordered sequence of insertions (and each caller knows the previous fence on which to wait, preserving the chain of all fences in the timeline), but we then have to cmpxchg() in the interrupt handler to avoid overwriting the new occupant. The only nasty side-effect is having to temporarily strip off the RCU annotations to apply the atomic operations; otherwise the rules are much more conventional!

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=112402
Fixes: 58b4c1a0 ("drm/i915: Reduce nested prepare_remote_context() to a trylock")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191127134527.3438410-1-chris@chris-wilson.co.uk
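The core of that ordering scheme reads roughly as below. This is a condensed sketch of the description above, not the patch itself; __active_fence_slot() stands in for whatever helper strips the RCU annotation from the fence slot and is an assumption.

    /* Writer: xchg() hands each caller the previous fence to wait on, so the
     * chain of fences along the timeline stays intact without an outer lock. */
    prev = xchg(__active_fence_slot(active), fence);

    /* Interrupt handler, on completion of the old fence: only clear our own
     * occupancy; cmpxchg() refuses to overwrite a newer fence that raced in. */
    cmpxchg(__active_fence_slot(active), fence, NULL);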
-
- 27 November 2019, 1 commit
-
-
Submitted by Chris Wilson

Now that we rapidly park the GT when the GPU idles, we often find ourselves idling faster than the RC6 promotion timer. Thus if we tell the GPU to enter RC6 manually as we park, we can do so quicker (by around 50ms, half an EI on average) and marginally increase our powersaving across all execlists platforms.

v2: Now with a selftest to check we can enter RC6 manually

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Andi Shyti <andi.shyti@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Imre Deak <imre.deak@intel.com>
Acked-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Andi Shyti <andi.shyti@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191127095657.3209854-1-chris@chris-wilson.co.uk
-
- 26 November 2019, 1 commit
-
-
Submitted by Chris Wilson

On context retiring, we may invoke the kernel_context to unpin this context. Elsewhere, we may use the kernel_context to modify this context. This currently leads to an AB-BA lock inversion, so we need to back off from the contended lock and repeat.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=111732
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Fixes: a9877da2 ("drm/i915/oa: Reconfigure contexts on the fly")
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191126065521.2331017-1-chris@chris-wilson.co.uk
-
- 25 November 2019, 5 commits
-
-
Submitted by Chris Wilson

The major drawback of commit 7e34f4e4 ("drm/i915/gen8+: Add RC6 CTX corruption WA") is that it disables RC6 while Skylake (and friends) is active, and we do not consider the GPU idle until all outstanding requests have been retired and the engine switched over to the kernel context. If userspace is idle, this task falls onto our background idle worker, which only runs roughly once a second, meaning that userspace has to have been idle for a couple of seconds before we enable RC6 again. Naturally, this causes us to consume considerably more energy than before, as powersaving is effectively disabled while a display server (here's looking at you, Xorg) is running.

As execlists will get a completion event as each context is completed, we can use this interrupt to queue a retire worker bound to this engine to clean up idle timelines. We will then immediately notice the idle engine (without userspace intervention or the aid of the background retire worker) and start parking the GPU. Thus during light workloads, we will do much more work to idle the GPU faster... Hopefully with commensurate power saving!

v2: Watch context completions and only look at those local to the engine when retiring, to reduce the amount of excess work we perform.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=112315
References: 7e34f4e4 ("drm/i915/gen8+: Add RC6 CTX corruption WA")
References: 2248a283 ("drm/i915/gen8+: Add RC6 CTX corruption WA")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191125105858.1718307-3-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson

In the next patch, we will introduce a new asynchronous retirement worker, fed by execlists CS events. Here we may queue a retirement as soon as a request is submitted to HW (and completes instantly), and we also want to process that retirement as early as possible and cannot afford to postpone it (as there may not be another opportunity to retire it for a few seconds). To allow the new async retirer to run in parallel with our submission, pull the __i915_request_queue (that passes the request to HW) inside the timeline's spinlock so that the retirement cannot release the timeline before we have completed the submission.

v2: Actually, to play nicely with engine_retire, we have to raise the timeline.active_lock before releasing the HW. intel_gt_retire_requests() is still serialised by the outer lock so they cannot see this intermediate state, and engine_retire is serialised by HW submission.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191125105858.1718307-2-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson

As the engine->kernel_context is used within the engine-pm barrier, we have to be careful when emitting requests outside of the barrier, as the strict timeline locking rules do not apply. Instead, we must ensure that engine_park() cannot be entered as we build the request, which is simplest by taking an explicit engine-pm wakeref around the request construction.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191125105858.1718307-1-chris@chris-wilson.co.uk
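The pattern amounts to wrapping the request build in an explicit engine wakeref; a simplified sketch using existing i915 entry points, with error handling trimmed:

    intel_engine_pm_get(engine); /* keep engine_park() at bay while we build */

    rq = i915_request_create(engine->kernel_context);
    if (!IS_ERR(rq)) {
        /* ... emit the barrier / payload ... */
        i915_request_add(rq);
    }

    intel_engine_pm_put(engine);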
-
Submitted by Chris Wilson

I rushed a last-minute correction to cancel_port_requests() to prevent the snooping of *execlists->active as the inflight array was being updated, without noticing that we iterated the inflight array starting from active! Oops.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=112387
Fixes: 331bf905 ("drm/i915/gt: Mark the execlists->active as the primary volatile access")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Andi Shyti <andi.shyti@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191125112520.1760492-1-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson

Since we want to do a lockless read of the current active request, and that request is written to by process_csb also without serialisation, we need to instruct gcc to take care in reading the pointer itself. Otherwise, we have observed execlists_active() report 0x40.

[ 2400.760381] igt/para-4098 1..s. 2376479300us : process_csb: rcs0 cs-irq head=3, tail=4
[ 2400.760826] igt/para-4098 1..s. 2376479303us : process_csb: rcs0 csb[4]: status=0x00000001:0x00000000
[ 2400.761271] igt/para-4098 1..s. 2376479306us : trace_ports: rcs0: promote { b9c59:2622, b9c55:2624 }
[ 2400.761726] igt/para-4097 0d... 2376479311us : __i915_schedule: rcs0: -2147483648->3, inflight:0000000000000040, rq:ffff888208c1e940

which is impossible! The answer is that, as we keep the existing execlists->active pointing into the array while we copy over that array, the unserialised read may see a partial pointer value.

Fixes: df403069 ("drm/i915/execlists: Lift process_csb() out of the irq-off spinlock")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191125094318.1630806-1-chris@chris-wilson.co.uk
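The gist of the fix is a single annotated load so the compiler can neither tear nor re-read the pointer; a sketch of the helper along the lines described (exact form assumed):

    static inline struct i915_request *
    execlists_active(const struct intel_engine_execlists *execlists)
    {
        /* active points into the inflight[] array that process_csb() rewrites;
         * READ_ONCE() makes us sample that pointer exactly once, whole. */
        return *READ_ONCE(execlists->active);
    }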
-
- 23 November 2019, 1 commit
-
-
Submitted by Chris Wilson

Before checking the current i915_active state for the asynchronous work we submitted, flush any ongoing callback. This ensures that our sampling is robust and does not sporadically fail due to bad timing as the work is running on another CPU.

v2: Drop the fence callback sync; retiring under the lock should be good enough to synchronize with engine_retire() and the intel_gt_retire_requests() background worker.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191122132404.690440-1-chris@chris-wilson.co.uk
-
- 22 November 2019, 2 commits
-
-
Submitted by Chris Wilson

Bonded request submission is designed to allow requests to execute in parallel as laid out by the user. If the master request is already finished before its bonded pair is submitted, the pair were not destined to run in parallel and we lose the information about the master engine to dictate selection of the secondary. If the second request was required to run on a particular engine in a virtual set, that should have been specified, rather than left to the whims of a random unconnected request!

In the selftest, I made the mistake of not ensuring the master would overlap with its bonded pairs, meaning that it could indeed complete before we submitted the bonds. Those bonds were then free to select any available engine in their virtual set, and not the one expected by the test.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191122112152.660743-1-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson

Whenever we wait on a request, make sure we actually hold a reference to it, so that it cannot be retired/freed on another CPU!

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191121071044.97798-4-chris@chris-wilson.co.uk
-
- 21 November 2019, 9 commits
-
-
Submitted by Chris Wilson

Assume that intel_wakeref_get() may take the mutex and perform other sleeping actions in the course of its callbacks, and so use might_sleep() to ensure that all callers abide. Anything that cannot sleep has to use e.g. intel_wakeref_get_if_active() to guarantee its avoidance of the non-atomic paths.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191121130528.309474-1-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson

Since the request is already on the HW as we perform its validation, it, and even its subsequent barrier, may be concurrently retired before we process the assertions. If it is already retired and so off the HW, our assertions become void and we need to ignore them.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=112363
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191121103546.146487-1-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson

As we wait upon a request, we must be holding a reference to it, and be wary that i915_request_add() consumes the passed-in reference.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191121093326.134774-1-chris@chris-wilson.co.uk
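The safe idiom, using the existing request API, looks like the sketch below (given an intel_context ce and simplified error handling): take a reference before i915_request_add(), which consumes the creation reference, and hold it across the wait.

    rq = i915_request_create(ce);
    if (IS_ERR(rq))
        return PTR_ERR(rq);

    i915_request_get(rq);   /* our reference for the wait */
    i915_request_add(rq);   /* consumes the creation reference */

    if (i915_request_wait(rq, 0, HZ) < 0)
        err = -ETIME;

    i915_request_put(rq);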
-
Submitted by Chris Wilson

From inside an active timeline in the execbuf ioctl, we may try to reclaim some space in the GGTT. We need GGTT space for all objects on !full-ppgtt platforms, and for context images everywhere. However, to free up space in the GGTT we may need to remove some pinned objects (e.g. context images) that require flushing the idle barriers to remove. For this we use the big hammer of intel_gt_wait_for_idle().

However, commit 7936a22d ("drm/i915/gt: Wait for new requests in intel_gt_retire_requests()") will continue spinning on the wait if a timeline is active but lacks requests, as is the case during execbuf reservation. Spinning forever is quite time consuming, so revert that commit and start again.

In practice, the effect commit 7936a22d was trying to achieve is accomplished by commit 1683d24c ("drm/i915/gt: Move new timelines to the end of active_list"), so there is no immediate rush to replace the looping.

Testcase: igt/gem_exec_reloc/basic-range
Fixes: 7936a22d ("drm/i915/gt: Wait for new requests in intel_gt_retire_requests()")
References: 1683d24c ("drm/i915/gt: Move new timelines to the end of active_list")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191121071044.97798-1-chris@chris-wilson.co.uk
-
Submitted by Stuart Summers

The GuC submission path can be called from an interrupt context and so should use a worker to avoid holding a mutex.

References: 07779a76 ("drm/i915: Mark up the calling context for intel_wakeref_put()")
Signed-off-by: Stuart Summers <stuart.summers@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20191120211321.88021-1-stuart.summers@intel.com
-
Submitted by Chris Wilson

pm_suspend_target_state is declared under CONFIG_PM_SLEEP but only defined under CONFIG_SUSPEND. Play safe and only use the symbol if it is both declared and defined.

Reported-by: kbuild-all@lists.01.org
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Fixes: a70a9e99 ("drm/i915: Defer rc6 shutdown to suspend_late")
Cc: Andi Shyti <andi.shyti@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191120182209.3967833-1-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson

Now that we never allow the intel_wakeref callbacks to be invoked from interrupt context, we do not need the irqsafe spinlock for the timeline.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191120170858.3965380-1-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson

In commit a79ca656 ("drm/i915: Push the wakeref->count deferral to the backend"), I erroneously concluded that we last modify the engine inside __i915_request_commit(), meaning that we could enable concurrent submission for userspace as we enqueued this request. However, this falls into a trap with other users of the engine->kernel_context waking up and submitting their request before the idle-switch is queued, with the result that the kernel_context is executed out-of-sequence, most likely upsetting the GPU and certainly ourselves when we try to retire the out-of-sequence requests. As such, we need to hold onto the effective engine->kernel_context mutex lock (via the engine pm mutex proxy) until we have finished queuing the request to the engine.

v2: Serialise against concurrent intel_gt_retire_requests()
v3: Describe the hairy locking scheme with intel_gt_retire_requests() for future reference.
v4: Combine timeline->lock and engine pm release; it's hairy.

Fixes: a79ca656 ("drm/i915: Push the wakeref->count deferral to the backend")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191120165514.3955081-2-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson

The general concept was that intel_timeline.active_count was locked by the intel_timeline.mutex. The exception was for power management, where the engine->kernel_context->timeline could be manipulated under the global wakeref.mutex. This was quite solid, as we always manipulated the timeline only while we held an engine wakeref.

And then we started retiring requests outside of struct_mutex, only using the timelines.active_list and the timeline->mutex. There we started manipulating intel_timeline.active_count outside of an engine wakeref, and so introduced a race between __engine_park() and intel_gt_retire_requests(), a race that could result in the engine->kernel_context not being added to the active timelines and so losing requests, which caused us to keep the system permanently powered up [and unloadable].

The race would be easy to close if we could take the engine wakeref for the timeline before we retire; except timelines are not bound to any engine, and so we would need to keep all active engines awake. The alternative is to guard intel_timeline_enter/intel_timeline_exit for use outside of the timeline->mutex.

Fixes: e5dadff4 ("drm/i915: Protect request retirement with timeline->mutex")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191120165514.3955081-1-chris@chris-wilson.co.uk
-
- 20 November 2019, 5 commits
-
-
Submitted by Chris Wilson

Previously, we assumed we could use mutex_trylock() within an atomic context, falling back to a worker if contended. However, such trickery is illegal inside interrupt context, and so we need to always use a worker under such circumstances. As we normally are in process context, we can typically use a plain mutex and only defer to a worker when we know we are being called from an interrupt path.

Fixes: 51fbd8de ("drm/i915/pmu: Atomically acquire the gt_pm wakeref")
References: a0855d24 ("locking/mutex: Complain upon mutex API misuse in IRQ contexts")
References: https://bugs.freedesktop.org/show_bug.cgi?id=111626
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191120125433.3767149-1-chris@chris-wilson.co.uk
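Conceptually the put path splits into a sleeping flavour and an interrupt-safe flavour, as in the sketch below; the names and fields are modelled on, but not necessarily identical to, the intel_wakeref API.

    /* Process context: may sleep, so we can take the mutex directly. */
    static void wakeref_put(struct intel_wakeref *wf)
    {
        might_sleep();
        if (atomic_dec_and_mutex_lock(&wf->count, &wf->mutex)) {
            wf->ops->put(wf);   /* assumed: run the park callbacks */
            mutex_unlock(&wf->mutex);
        }
    }

    /* Interrupt context (e.g. the PMU sampler): never touch the mutex. */
    static void wakeref_put_async(struct intel_wakeref *wf)
    {
        if (!atomic_add_unless(&wf->count, -1, 1))
            queue_work(system_wq, &wf->work); /* last ref: drop from a worker */
    }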
-
Submitted by Chris Wilson

Reading from CTX_INFO upsets rc6, requiring us to detect and prevent possible rc6 context corruption. Poke at the bear!

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Imre Deak <imre.deak@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Andi Shyti <andi.shyti@intel.com>
Tested-by: Andi Shyti <andi.shyti@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191119154723.3311814-1-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson

Retire all requests if we resort to wedging the driver on suspend. They will now be idle, so we might as well free them before shutting down.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191118230254.2615942-16-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson

As we may park the gt during request retirement, we may cancel the retirement worker only to then program the delayed worker once more. If we schedule the next delayed retirement worker first, then should we park the gt afterwards, the work remains cancelled.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191119162559.3313003-2-chris@chris-wilson.co.uk
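The fix boils down to swapping two calls in the retirement tick, as in this sketch (struct layout assumed):

    static void retire_work_handler(struct work_struct *work)
    {
        struct intel_gt *gt =
            container_of(work, typeof(*gt), requests.retire_work.work);

        /* Re-arm first: if retiring below parks the gt, the cancel sticks. */
        schedule_delayed_work(&gt->requests.retire_work,
                              round_jiffies_up_relative(HZ));
        intel_gt_retire_requests(gt);
    }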
-
Submitted by Chris Wilson

When adding a new active timeline, place it at the end of the list. This allows intel_gt_retire_requests() to pick up the newcomer more quickly and hopefully complete the retirement sooner. A minuscule optimisation.

References: 7936a22d ("drm/i915/gt: Wait for new requests in intel_gt_retire_requests()")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191119162559.3313003-1-chris@chris-wilson.co.uk
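The change itself is essentially a one-liner in the timeline-enter path, appending instead of prepending; the list head names below are assumptions:

    /* New (just-entered) timelines go to the back, so the oldest and most
     * likely already-completed timelines are scanned and retired first. */
    list_add_tail(&tl->link, &timelines->active_list);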
-