- 16 January 2019 (2 commits)
-
-
Committed by Chris Wilson
Make i915_gem_set_wedged() and i915_gem_unset_wedged() behaviour more consistent if called concurrently, and only do the wedging and reporting once, curtailing any possible race where we start unwedging in the middle of a wedge. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190114210408.4561-2-chris@chris-wilson.co.uk
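The once-only behaviour can be pictured with a minimal sketch. Everything below is illustrative (example_gpu, WEDGED_BIT and the helper names are invented here, not the driver's actual symbols): one mutex serialises wedge against unwedge, and an atomic test-and-set makes the report fire exactly once.

```c
#include <linux/bitops.h>
#include <linux/mutex.h>
#include <linux/printk.h>

struct example_gpu {
	unsigned long flags;
#define WEDGED_BIT 0
	struct mutex wedge_mutex;	/* serialises wedge vs unwedge */
};

static void example_set_wedged(struct example_gpu *gpu)
{
	mutex_lock(&gpu->wedge_mutex);
	/* Wedge and report exactly once, even under concurrent callers. */
	if (!test_and_set_bit(WEDGED_BIT, &gpu->flags))
		pr_err("GPU wedged, marking device as lost\n");
	mutex_unlock(&gpu->wedge_mutex);
}

static bool example_unset_wedged(struct example_gpu *gpu)
{
	bool was_wedged;

	/* Holding the same mutex means unwedging can never begin in the
	 * middle of a wedge. */
	mutex_lock(&gpu->wedge_mutex);
	was_wedged = test_and_clear_bit(WEDGED_BIT, &gpu->flags);
	mutex_unlock(&gpu->wedge_mutex);

	return was_wedged;
}
```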
-
Committed by Chris Wilson
Since commit 93065ac7 ("mm, oom: distinguish blockable mode for mmu notifiers") we have been able to report failure from mmu_invalidate_range_start, which allows us to use a trylock on the struct_mutex to avoid potential recursion and report -EBUSY instead. Furthermore, this allows us to pull the work into the main callback and avoid the sleight-of-hand in using a workqueue to avoid lockdep. However, not all paths to mmu_invalidate_range_start are prepared to handle failure, so instead of reporting the recursion, deal with it by propagating the failure upwards to the callers, who can decide themselves to handle it or report it. v2: Mark up the recursive lock behaviour and comment on the various weak points. v3: Follow commit 3824e419 ("drm/i915: Use mutex_lock_killable() from inside the shrinker") and also use mutex_lock_killable(). v3.1: No leak on EINTR. Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=108375 References: 93065ac7 ("mm, oom: distinguish blockable mode for mmu notifiers") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190115124442.3500-1-chris@chris-wilson.co.uk
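A hedged sketch of the trylock pattern the commit describes (struct example_dev, the helper, and the callback signature are illustrative of the mmu-notifier API of that era, not the driver's exact code): a range_start callback that must not block falls back to a trylock and reports -EBUSY, while a blockable one still takes the lock killably.

```c
#include <linux/errno.h>
#include <linux/mmu_notifier.h>
#include <linux/mutex.h>

struct example_dev {
	struct mmu_notifier mn;
	struct mutex lock;
};

void example_cancel_userptr_range(struct example_dev *dev,
				  unsigned long start, unsigned long end);

static int example_invalidate_range_start(struct mmu_notifier *mn,
					  struct mm_struct *mm,
					  unsigned long start,
					  unsigned long end,
					  bool blockable)
{
	struct example_dev *dev = container_of(mn, struct example_dev, mn);

	if (!blockable) {
		/* Non-blockable (e.g. oom) context: only a trylock is
		 * safe, so report -EBUSY rather than risk recursion. */
		if (!mutex_trylock(&dev->lock))
			return -EBUSY;
	} else {
		/* Blockable context: still allow the waiter to be killed. */
		if (mutex_lock_killable(&dev->lock))
			return -EINTR;
	}

	example_cancel_userptr_range(dev, start, end);
	mutex_unlock(&dev->lock);
	return 0;
}
```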
-
- 15 January 2019 (7 commits)
-
-
Committed by Chris Wilson
As we may frequently mark the device as wedged to flush requests off it during the normal course of events, quite often we have a large state dump that is of no interest. Don't bother dumping it all if the engines are all idle. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190115122057.1677-1-chris@chris-wilson.co.uk
-
Committed by Chris Wilson
As the GT_IRQ power domain implies a wakeref, we can use it in place of our existing redundant rpm grab. v2: Drop papering over forgetting to take the runtime wakeref in selftests. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Jani Nikula <jani.nikula@intel.com> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190114142129.24398-20-chris@chris-wilson.co.uk
-
Committed by Chris Wilson
The majority of runtime-pm operations are bounded and scoped within a function; for these it is easy to verify that the wakerefs are handled correctly. We can employ the compiler to help us, and reduce the number of wakerefs tracked when debugging, by passing around cookies provided by the various rpm_get functions to their rpm_put counterpart. This makes the pairing explicit, and given the required wakeref cookie the compiler can verify that we pass an initialised value to the rpm_put (quite handy for double-checking error paths). Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Jani Nikula <jani.nikula@intel.com> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190114142129.24398-16-chris@chris-wilson.co.uk
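The cookie idea can be sketched as follows, with illustrative names throughout (wakeref_t here stands in for whatever opaque type the get functions return; none of these identifiers are claimed to be the driver's exact API): the value returned by the get must be handed back to the matching put, so an uninitialised cookie on an error path becomes a compiler warning.

```c
typedef unsigned long wakeref_t;	/* opaque cookie, assumed non-zero */

struct example_dev;
wakeref_t example_rpm_get(struct example_dev *dev);
void example_rpm_put(struct example_dev *dev, wakeref_t wref);
bool example_hw_ready(struct example_dev *dev);
void example_touch_hw(struct example_dev *dev);

static int example_do_work(struct example_dev *dev)
{
	wakeref_t wref;
	int err = 0;

	wref = example_rpm_get(dev);

	if (!example_hw_ready(dev))
		err = -5;	/* -EIO: the error path still owns a valid cookie */
	else
		example_touch_hw(dev);

	example_rpm_put(dev, wref);	/* pairing is explicit and checkable */
	return err;
}
```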
-
Committed by Chris Wilson
Frequently, we use intel_runtime_pm_get/_put around a small block. Formalise that usage by providing a macro to define such a block with an automatic closure to scope the intel_runtime_pm wakeref to that block, i.e. macro abuse smelling of python. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Jani Nikula <jani.nikula@intel.com> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190114142129.24398-15-chris@chris-wilson.co.uk
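Such a "block with an automatic closure" can be built from a one-shot for-loop. A sketch, reusing the illustrative wakeref_t helpers from the snippet above and assuming the get function returns a non-zero cookie (the macro name is not necessarily the driver's spelling):

```c
/* Runs the body exactly once: acquires the wakeref on entry, releases
 * it and zeroes the cookie on exit, so it cannot leak out of the block. */
#define with_example_rpm(dev, wref) \
	for ((wref) = example_rpm_get(dev); (wref); \
	     example_rpm_put((dev), (wref)), (wref) = 0)

static void example_scoped_access(struct example_dev *dev)
{
	wakeref_t wref;

	with_example_rpm(dev, wref)
		example_touch_hw(dev);	/* device guaranteed awake here */
}
```

The for-loop shape is what lets the macro accept either a braced block or a single statement, which is the python-like scoping the message alludes to.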
-
Committed by Chris Wilson
Keep track of the temporary rpm wakerefs used for user access to the device, so that we can cancel them upon release and clearly identify any leaks. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Jani Nikula <jani.nikula@intel.com> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190114142129.24398-10-chris@chris-wilson.co.uk
-
Committed by Chris Wilson
Record the wakeref used for keeping the device awake as the GPU is executing requests, and be sure to cancel the tracking upon parking. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Jani Nikula <jani.nikula@intel.com> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190114142129.24398-3-chris@chris-wilson.co.uk
-
Committed by Chris Wilson
The majority of runtime-pm operations are bounded and scoped within a function; for these it is easy to verify that the wakerefs are handled correctly. We can employ the compiler to help us, and reduce the number of wakerefs tracked when debugging, by passing around cookies provided by the various rpm_get functions to their rpm_put counterpart. This makes the pairing explicit, and given the required wakeref cookie the compiler can verify that we pass an initialised value to the rpm_put (quite handy for double-checking error paths). For regular builds, the compiler should be able to eliminate the unused local variables and the program growth should be minimal. Fwiw, it came out as a net improvement as gcc was able to refactor rpm_get and rpm_get_if_in_use together. v2: Just s/rpm_put/rpm_put_unchecked/ everywhere, leaving the manual mark up for smaller, more targeted patches. v3: Mention the cookie in Returns. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Jani Nikula <jani.nikula@intel.com> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190114142129.24398-2-chris@chris-wilson.co.uk
-
- 09 January 2019 (1 commit)
-
-
Committed by Jani Nikula
Needs just a few additional includes here and there. Cc: Sam Ravnborg <sam@ravnborg.org> Cc: Daniel Vetter <daniel@ffwll.ch> Cc: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch> Acked-by: Sam Ravnborg <sam@ravnborg.org> Signed-off-by: Jani Nikula <jani.nikula@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190108082709.3748-1-jani.nikula@intel.com
-
- 05 January 2019 (1 commit)
-
-
Committed by Chris Wilson
Our attempt to account for bit17 swizzling of pread/pwrite onto tiled objects was flawed due to the simple fact that we do not always know the swizzling for a particular page (due to the swizzling varying based on location in certain unbalanced configurations). Furthermore, the pread/pwrite paths are now unbalanced in that we are required to use the GTT as in some cases we do not have direct CPU access to the backing physical pages (thus some paths trying to account for the swizzle, but others neglecting, chaos ensues). There are no known users who do use pread/pwrite into a tiled object (you need to manually detile anyway, so why not just use mmap and avoid the copy?) and no user bug reports to indicate that it is being used in the wild. As no one is hitting the buggy path, we can just remove the buggy code. v2: Just use the fault allowing kmap() + normal copy_(to|from)_user v3: Avoid int overflow in computing 'length' from 'remain' (Tvrtko) References: fe115628 ("drm/i915: Implement pwrite without struct-mutex") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Acked-by: Kenneth Graunke <kenneth@whitecape.org> Link: https://patchwork.freedesktop.org/patch/msgid/20190105120758.9237-1-chris@chris-wilson.co.uk
-
- 04 January 2019 (2 commits)
-
-
Committed by Chris Wilson
If we declare the driver wedged during early initialisation, we leave the driver in an undefined state (with respect to GEM execution). As allowing the user to unwedge the device (through debugfs, which igt does at test start) then leads to unexpected behaviour, do not allow it. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190103213340.1669-1-chris@chris-wilson.co.uk
-
Committed by Linus Torvalds
Nobody has actually used the type (VERIFY_READ vs VERIFY_WRITE) argument of the user address range verification function since we got rid of the old racy i386-only code to walk page tables by hand. It existed because the original 80386 would not honor the write protect bit when in kernel mode, so you had to do COW by hand before doing any user access. But we haven't supported that in a long time, and these days the 'type' argument is a purely historical artifact. A discussion about extending 'user_access_begin()' to do the range checking resulted in this patch, because there is no way we're going to move the old VERIFY_xyz interface to that model. And it's best done at the end of the merge window when I've done most of my merges, so let's just get this done once and for all. This patch was mostly done with a sed-script, with manual fix-ups for the cases that weren't of the trivial 'access_ok(VERIFY_xyz' form. There were a couple of notable cases:
 - csky still had the old "verify_area()" name as an alias.
 - the iter_iov code had magical hardcoded knowledge of the actual values of VERIFY_{READ,WRITE} (not that they mattered, since nothing really used it)
 - microblaze used the type argument for a debug printout
Other than those oddities this should be a total no-op patch. I tried to fix up all architectures, did fairly extensive grepping for access_ok() uses, and the changes are trivial, but I may have missed something. Any missed conversion should be trivially fixable, though. Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
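The conversion itself is mechanical; this is the shape of the change at a typical call site (buf and count are placeholder variables):

```c
/* Before the patch: the VERIFY_READ/VERIFY_WRITE argument was ignored. */
if (!access_ok(VERIFY_WRITE, buf, count))
	return -EFAULT;

/* After the patch: same range check, with the dead 'type' argument dropped. */
if (!access_ok(buf, count))
	return -EFAULT;
```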
-
- 03 January 2019 (1 commit)
-
-
Committed by Chris Wilson
When we first introduced the reset to sanitize the GPU on taking over from the BIOS and before returning control to third parties (the BIOS!), we restricted it to only systems utilizing HW contexts as we were uncertain of how stable our reset mechanism truly was. We now have reasonable coverage across all machines that expose a GPU reset method, and so we should be safe to sanitize the GPU state everywhere. v2: We _have_ to skip the reset if it would clobber the display. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190103112104.19561-1-chris@chris-wilson.co.uk
-
- 31 December 2018 (1 commit)
-
-
Committed by Chris Wilson
Now that we have eliminated the CPU-side irq_seqno_barrier by moving the delays on the GPU before emitting the MI_USER_INTERRUPT, we can remove the engine->irq_seqno_barrier infrastructure. Though intentionally slowing down the GPU is nasty, so is the code we can now remove! Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20181228171641.16531-6-chris@chris-wilson.co.uk
-
- 29 December 2018 (1 commit)
-
-
Committed by Arun KS
totalram_pages and totalhigh_pages are made static inline functions. The main motivation was that managed_page_count_lock handling was complicating things. It was discussed at length here, https://lore.kernel.org/patchwork/patch/995739/#1181785 So it seems better to remove the lock and convert the variables to atomic, with preventing potential store-to-read tearing as a bonus. [akpm@linux-foundation.org: coding style fixes] Link: http://lkml.kernel.org/r/1542090790-21750-4-git-send-email-arunks@codeaurora.org Signed-off-by: Arun KS <arunks@codeaurora.org> Suggested-by: Michal Hocko <mhocko@suse.com> Suggested-by: Vlastimil Babka <vbabka@suse.cz> Reviewed-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru> Reviewed-by: Pavel Tatashin <pasha.tatashin@soleen.com> Acked-by: Michal Hocko <mhocko@suse.com> Acked-by: Vlastimil Babka <vbabka@suse.cz> Cc: David Hildenbrand <david@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
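The shape of the conversion, simplified from the mm headers of that era (a sketch rather than the exact upstream diff): the counter becomes an atomic_long_t behind static inline accessors, so no lock is needed and a read cannot observe a torn store.

```c
#include <linux/atomic.h>

extern atomic_long_t _totalram_pages;

static inline unsigned long totalram_pages(void)
{
	/* A single atomic read: no managed_page_count_lock required. */
	return (unsigned long)atomic_long_read(&_totalram_pages);
}

static inline void totalram_pages_add(long count)
{
	atomic_long_add(count, &_totalram_pages);
}
```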
-
- 28 December 2018 (1 commit)
-
-
Committed by Chris Wilson
The writing is on the wall for the existence of a single execution queue along each engine, and as a consequence we will not be able to track dependencies along the HW queue itself, i.e. we will not be able to use HW semaphores on gen7 as they use a global set of registers (and unlike gen8+ we cannot effectively target memory to keep per-context seqno and dependencies). On the positive side, when we implement request reordering for gen7 we also cannot presume a simple execution queue and would also require removing the current semaphore generation code. So this brings us another step closer to request reordering for ringbuffer submission! The negative side is that using interrupts to drive inter-engine synchronisation is much slower (4us -> 15us to do a nop on each of the 3 engines on ivb). This is much better than it was at the time of introducing the HW semaphores, and equally importantly userspace weaned itself off intermixing dependent BLT/RENDER operations (the prime culprit was glyph rendering in UXA). So while we regress the microbenchmarks, it should not impact the user. References: https://bugs.freedesktop.org/show_bug.cgi?id=108888 Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20181228140736.32606-2-chris@chris-wilson.co.uk
-
- 13 December 2018 (1 commit)
-
-
Committed by Lucas De Marchi
Define IS_GEN() similarly to our IS_GEN_RANGE(), but use gen instead of gen_mask to do the comparison. Now callers can pass the gen as a parameter, so we don't require one macro for each gen. The following spatch was used to convert the users of these macros:

@@
expression e;
@@
(
- IS_GEN2(e)
+ IS_GEN(e, 2)
|
- IS_GEN3(e)
+ IS_GEN(e, 3)
|
- IS_GEN4(e)
+ IS_GEN(e, 4)
|
- IS_GEN5(e)
+ IS_GEN(e, 5)
|
- IS_GEN6(e)
+ IS_GEN(e, 6)
|
- IS_GEN7(e)
+ IS_GEN(e, 7)
|
- IS_GEN8(e)
+ IS_GEN(e, 8)
|
- IS_GEN9(e)
+ IS_GEN(e, 9)
|
- IS_GEN10(e)
+ IS_GEN(e, 10)
|
- IS_GEN11(e)
+ IS_GEN(e, 11)
)

v2: use IS_GEN rather than GT_GEN and compare to info.gen rather than using the bitmask Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com> Reviewed-by: Jani Nikula <jani.nikula@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20181212181044.15886-2-lucas.demarchi@intel.com
-
- 12 December 2018 (1 commit)
-
-
Committed by Chris Wilson
Currently we allocate a scratch page for each engine, but since we only ever write into it for post-sync operations, it is not exposed to userspace nor do we care for coherency. As we then do not care about its contents, we can use one page for all, reducing our allocations and avoiding complications by not assuming per-engine isolation. For later use, it simplifies engine initialisation (by removing the allocation that required struct_mutex!) and means that we can always rely on there being a scratch page. v2: Check that we allocated a large enough scratch for I830 w/a Fixes: 06e562e7f515 ("drm/i915/ringbuffer: Delay after EMIT_INVALIDATE for gen4/gen5") # v4.18.20 Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=108850 Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20181204141522.13640-1-chris@chris-wilson.co.uk Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: <stable@vger.kernel.org> # v4.18.20+ (cherry picked from commit 51797499) [Joonas: Use new function in gen9_init_indirectctx_bb too] Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
-
- 05 December 2018 (1 commit)
-
-
Committed by Tvrtko Ursulin
To enable later verification of GT workaround state at various stages of driver lifetime, we record the list of applicable ones per platform to a list, from which they are also applied. The added data structure is a simple array of register, mask and value items, which is allocated on demand as workarounds are added to the list. This is a temporary implementation which later in the series gets fused with the existing per-context workaround list handling. It is separated at this stage since the following patch fixes a bug which needs to be as easy to backport as possible. Also, since in the following patch we will be adding a new class of workarounds (per engine) which can be applied from interrupt context, we straight away make the provision for a safe read-modify-write cycle. v2: * Change dev_priv to i915 along the init path. (Chris Wilson) * API rename. (Chris Wilson) v3: * Remove explicit list size tracking in favour of growing the allocation in power-of-two chunks. (Chris Wilson) v4: Chris Wilson: * Change wa_list_finish to early return. * Copy workarounds using the compiler for static checking. * Do not bother zeroing unused entries. * Re-order struct i915_wa_list. v5: * kmalloc_array. * Whitespace cleanup. Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20181203133319.10174-1-tvrtko.ursulin@linux.intel.com (cherry picked from commit 25d140fa) Fixes: 59b449d5 ("drm/i915: Split out functions for different kinds of workarounds") Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
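The "simple array of register, mask and value items" plausibly looks like the following trimmed sketch (field and struct names follow the i915 convention but are simplified for illustration, not copied from the patch):

```c
struct i915_wa {
	i915_reg_t reg;	/* register the workaround applies to */
	u32 mask;	/* bits the workaround owns */
	u32 val;	/* value those bits must hold */
};

struct i915_wa_list {
	const char *name;	/* for diagnostics */
	struct i915_wa *list;	/* grown on demand, in power-of-two chunks */
	unsigned int count;
};
```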
-
- 04 December 2018 (4 commits)
-
-
Committed by Chris Wilson
Currently we allocate a scratch page for each engine, but since we only ever write into it for post-sync operations, it is not exposed to userspace nor do we care for coherency. As we then do not care about its contents, we can use one page for all, reducing our allocations and avoiding complications by not assuming per-engine isolation. For later use, it simplifies engine initialisation (by removing the allocation that required struct_mutex!) and means that we can always rely on there being a scratch page. v2: Check that we allocated a large enough scratch for I830 w/a Fixes: 06e562e7f515 ("drm/i915/ringbuffer: Delay after EMIT_INVALIDATE for gen4/gen5") # v4.18.20 Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=108850 Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20181204141522.13640-1-chris@chris-wilson.co.uk Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: <stable@vger.kernel.org> # v4.18.20+
-
Committed by Tvrtko Ursulin
Since we now have all the GT workarounds in a table, by adding a simple shared helper function we can now verify that their values are still applied after some interesting events in the lifetime of the driver. Initially we only do this after GPU initialization. v2: Chris Wilson: * Simplify verification by realizing it's a simple xor and and. * Remove verification from engine reset path. * Return bool straight away from the verify API. v3: * API rename. (Chris Wilson) Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20181203125014.3219-4-tvrtko.ursulin@linux.intel.com
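The "simple xor and and" reduces to one expression. A sketch, reusing the illustrative i915_wa layout from the struct sketch above (not the patch's literal helper):

```c
static bool wa_verify(const struct i915_wa *wa, u32 cur)
{
	/* The register still carries the workaround iff the bits under
	 * the mask match; bits outside the mask are free to differ. */
	return ((cur ^ wa->val) & wa->mask) == 0;
}
```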
-
Committed by Tvrtko Ursulin
To enable later verification of GT workaround state at various stages of driver lifetime, we record the list of applicable ones per platform to a list, from which they are also applied. The added data structure is a simple array of register, mask and value items, which is allocated on demand as workarounds are added to the list. This is a temporary implementation which later in the series gets fused with the existing per-context workaround list handling. It is separated at this stage since the following patch fixes a bug which needs to be as easy to backport as possible. Also, since in the following patch we will be adding a new class of workarounds (per engine) which can be applied from interrupt context, we straight away make the provision for a safe read-modify-write cycle. v2: * Change dev_priv to i915 along the init path. (Chris Wilson) * API rename. (Chris Wilson) v3: * Remove explicit list size tracking in favour of growing the allocation in power-of-two chunks. (Chris Wilson) v4: Chris Wilson: * Change wa_list_finish to early return. * Copy workarounds using the compiler for static checking. * Do not bother zeroing unused entries. * Re-order struct i915_wa_list. v5: * kmalloc_array. * Whitespace cleanup. Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20181203133319.10174-1-tvrtko.ursulin@linux.intel.com
-
Committed by Chris Wilson
We inspect the requests under the assumption that they will be marked as completed when they are removed from the queue. Currently however, in the process of wedging the requests will be removed from the queue before they are completed, so rearrange the code to complete the fences before the locks are dropped.
<1>[ 354.473346] BUG: unable to handle kernel NULL pointer dereference at 0000000000000250
<6>[ 354.473363] PGD 0 P4D 0
<4>[ 354.473370] Oops: 0000 [#1] PREEMPT SMP PTI
<4>[ 354.473380] CPU: 0 PID: 4470 Comm: gem_eio Tainted: G U 4.20.0-rc4-CI-CI_DRM_5216+ #1
<4>[ 354.473393] Hardware name: Intel Corporation NUC7CJYH/NUC7JYB, BIOS JYGLKCPX.86A.0027.2018.0125.1347 01/25/2018
<4>[ 354.473480] RIP: 0010:__i915_schedule+0x311/0x5e0 [i915]
<4>[ 354.473490] Code: 49 89 44 24 20 4d 89 4c 24 28 4d 89 29 44 39 b3 a0 04 00 00 7d 3a 41 8b 44 24 78 85 c0 74 13 48 8b 93 78 04 00 00 48 83 e2 fc <39> 82 50 02 00 00 79 1e 44 89 b3 a0 04 00 00 48 8d bb d0 03 00 00
<4>[ 354.473515] RSP: 0018:ffffc900001bba90 EFLAGS: 00010046
<4>[ 354.473524] RAX: 0000000000000003 RBX: ffff8882624c8008 RCX: f34a737800000000
<4>[ 354.473535] RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffff8882624c8048
<4>[ 354.473545] RBP: ffffc900001bbab0 R08: 000000005963f1f1 R09: 0000000000000000
<4>[ 354.473556] R10: ffffc900001bba10 R11: ffff8882624c8060 R12: ffff88824fdd7b98
<4>[ 354.473567] R13: ffff88824fdd7bb8 R14: 0000000000000001 R15: ffff88824fdd7750
<4>[ 354.473578] FS: 00007f44b4b5b980(0000) GS:ffff888277e00000(0000) knlGS:0000000000000000
<4>[ 354.473590] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 354.473599] CR2: 0000000000000250 CR3: 000000026976e000 CR4: 0000000000340ef0
<4>[ 354.473611] Call Trace:
<4>[ 354.473622] ? lock_acquire+0xa6/0x1c0
<4>[ 354.473677] ? i915_schedule_bump_priority+0x57/0xd0 [i915]
<4>[ 354.473736] i915_schedule_bump_priority+0x72/0xd0 [i915]
<4>[ 354.473792] i915_request_wait+0x4db/0x840 [i915]
<4>[ 354.473804] ? get_pwq.isra.4+0x2c/0x50
<4>[ 354.473813] ? ___preempt_schedule+0x16/0x18
<4>[ 354.473824] ? wake_up_q+0x70/0x70
<4>[ 354.473831] ? wake_up_q+0x70/0x70
<4>[ 354.473882] ? gen6_rps_boost+0x118/0x120 [i915]
<4>[ 354.473936] i915_gem_object_wait_fence+0x8a/0x110 [i915]
<4>[ 354.473991] i915_gem_object_wait+0x113/0x500 [i915]
<4>[ 354.474047] i915_gem_wait_ioctl+0x11c/0x2f0 [i915]
<4>[ 354.474101] ? i915_gem_unset_wedged+0x210/0x210 [i915]
<4>[ 354.474113] drm_ioctl_kernel+0x81/0xf0
<4>[ 354.474123] drm_ioctl+0x2de/0x390
<4>[ 354.474175] ? i915_gem_unset_wedged+0x210/0x210 [i915]
<4>[ 354.474187] ? finish_task_switch+0x95/0x260
<4>[ 354.474197] ? lock_acquire+0xa6/0x1c0
<4>[ 354.474207] do_vfs_ioctl+0xa0/0x6e0
<4>[ 354.474217] ? __fget+0xfc/0x1e0
<4>[ 354.474225] ksys_ioctl+0x35/0x60
<4>[ 354.474233] __x64_sys_ioctl+0x11/0x20
<4>[ 354.474241] do_syscall_64+0x55/0x190
<4>[ 354.474251] entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4>[ 354.474260] RIP: 0033:0x7f44b3de65d7
<4>[ 354.474267] Code: b3 66 90 48 8b 05 b1 48 2d 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 81 48 2d 00 f7 d8 64 89 01 48
<4>[ 354.474293] RSP: 002b:00007fff974948e8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
<4>[ 354.474305] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f44b3de65d7
<4>[ 354.474316] RDX: 00007fff97494940 RSI: 00000000c010646c RDI: 0000000000000007
<4>[ 354.474327] RBP: 00007fff97494940 R08: 0000000000000000 R09: 00007f44b40bbc40
<4>[ 354.474337] R10: 0000000000000000 R11: 0000000000000246 R12: 00000000c010646c
<4>[ 354.474348] R13: 0000000000000007 R14: 0000000000000000 R15: 0000000000000000
v2: Avoid floating requests. v3: Can't call dma_fence_signal() under the timeline lock! v4: Can't call dma_fence_signal() from inside another fence either. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20181203113701.12106-2-chris@chris-wilson.co.uk
-
- 09 November 2018 (2 commits)
-
-
Committed by Chris Wilson
While our little rcu worker might be able to be replaced now by the dedicated rcu_work, in the meantime we should mark up the rcu_head for correct debugobjects tracking. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20181108092101.27598-2-chris@chris-wilson.co.uk
-
Committed by Chris Wilson
Make the rcu_head known to the system, in particular for debugobjects. And having declared it for debugobjects, we need to tidy up afterwards. v2: mark the obj->rcu as being destroyed when we reuse its location for the freed list. Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=108691 Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20181109090311.15321-1-chris@chris-wilson.co.uk
-
- 07 November 2018 (1 commit)
-
-
Committed by Kuo-Hsin Yang
The i915 driver uses shmemfs to allocate backing storage for gem objects. These shmemfs pages can be pinned (increased ref count) by shmem_read_mapping_page_gfp(). When a lot of pages are pinned, vmscan wastes a lot of time scanning these pinned pages. In some extreme cases, all pages in the inactive anon lru are pinned, and only the inactive anon lru is scanned due to inactive_ratio; the system cannot swap and invokes the oom-killer. Mark these pinned pages as unevictable to speed up vmscan. Export pagevec API check_move_unevictable_pages(). This patch was inspired by Chris Wilson's change [1]. [1]: https://patchwork.kernel.org/patch/9768741/ Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Dave Hansen <dave.hansen@intel.com> Signed-off-by: Kuo-Hsin Yang <vovoy@chromium.org> Acked-by: Michal Hocko <mhocko@suse.com> # mm part Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Acked-by: Dave Hansen <dave.hansen@intel.com> Acked-by: Andrew Morton <akpm@linux-foundation.org> Link: https://patchwork.freedesktop.org/patch/msgid/20181106132324.17390-1-chris@chris-wilson.co.uk Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
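The usage pattern on the driver side, sketched with an invented wrapper (mapping_set_unevictable() and the newly exported check_move_unevictable_pages() are the real mm interfaces this patch relies on; the surrounding function is illustrative):

```c
#include <linux/pagemap.h>
#include <linux/pagevec.h>
#include <linux/swap.h>

static void example_mark_pinned_pages(struct address_space *mapping,
				      struct pagevec *pvec)
{
	/* Newly instantiated shmemfs pages for this mapping will be
	 * treated as unevictable... */
	mapping_set_unevictable(mapping);

	/* ...and pages already sitting on an LRU are moved to the
	 * unevictable list, so vmscan stops rescanning them. */
	check_move_unevictable_pages(pvec);
}
```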
-
- 06 November 2018 (1 commit)
-
-
Committed by Chris Wilson
As we may have to iterate a few thousand elements to acquire and release the shmemfs backing storage for a GPU object, we need to break up the long loop with cond_resched() to retain a modicum of low latency for other processes. Testcase: igt/benchmarks/gem_syslatency Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Kuo-Hsin Yang <vovoy@chromium.org> Cc: Matthew Auld <matthew.auld@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20181105170640.26905-1-chris@chris-wilson.co.uk
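This is the classic long-loop pattern; a sketch using generic scatterlist iteration (the function name and surrounding structure are illustrative, and it assumes one page per sg entry):

```c
#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/scatterlist.h>
#include <linux/sched.h>

static void example_put_backing_pages(struct sg_table *st)
{
	struct scatterlist *sg;
	int i;

	for_each_sg(st->sgl, sg, st->nents, i) {
		put_page(sg_page(sg));
		cond_resched();	/* break up a potentially huge loop */
	}
	sg_free_table(st);
}
```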
-
- 18 October 2018 (1 commit)
-
-
Committed by Chris Wilson
Handle integer overflow when computing the sub-page length for shmem backed pread/pwrite. Reported-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: stable@vger.kernel.org Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20181012140228.29783-1-chris@chris-wilson.co.uk (cherry picked from commit a5e856a5) Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
- 15 October 2018 (1 commit)
-
-
Committed by Chris Wilson
Handle integer overflow when computing the sub-page length for shmem backed pread/pwrite. Reported-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: stable@vger.kernel.org Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20181012140228.29783-1-chris@chris-wilson.co.uk
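The hazard is a narrowing/wraparound one. A sketch of an overflow-safe sub-page length computation (the helper name and exact types are illustrative, not the patch's literal diff): clamp in the wide type first, then narrow.

```c
#include <linux/kernel.h>
#include <linux/mm.h>

static unsigned int subpage_length(u64 remain, u64 offset)
{
	unsigned int partial = offset_in_page(offset);

	/* min_t(u64, ...) keeps the comparison in 64 bits, so a huge
	 * user-supplied `remain` cannot wrap; the result is bounded by
	 * PAGE_SIZE and therefore fits in an unsigned int. */
	return min_t(u64, remain, PAGE_SIZE - partial);
}
```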
-
- 02 October 2018 (1 commit)
-
-
Committed by Chris Wilson
Latency is in the eye of the beholder. In the case where a client stops and waits for the gpu, give that request chain a small priority boost (not so much that it overtakes higher priority clients, to preserve the external ordering) so that ideally the wait completes earlier. v2: Tvrtko recommends to keep the boost-from-user-stall as small as possible and to allow new client flows to be preferred for interactivity over stalls. Testcase: igt/gem_sync/switch-default Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20181001144755.7978-3-chris@chris-wilson.co.uk
-
- 30 September 2018 (1 commit)
-
-
Committed by Matthew Wilcox
Introduce xarray value entries and tagged pointers to replace radix tree exceptional entries. This is a slight change in encoding to allow the use of an extra bit (we can now store BITS_PER_LONG - 1 bits in a value entry). It is also a change in emphasis; exceptional entries are intimidating and different. As the comment explains, you can choose to store values or pointers in the xarray and they are both first-class citizens. Signed-off-by: Matthew Wilcox <willy@infradead.org> Reviewed-by: Josef Bacik <jbacik@fb.com>
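The encoding behind value entries is a shift-and-tag, simplified here from the xarray header of that era (the real helpers also carry overflow warnings): the low bit distinguishes an integer value from a pointer, leaving BITS_PER_LONG - 1 bits of payload.

```c
static inline void *xa_mk_value(unsigned long v)
{
	/* Tag bit 0; the value itself lives in the upper bits. */
	return (void *)((v << 1) | 1);
}

static inline bool xa_is_value(const void *entry)
{
	return (unsigned long)entry & 1;
}

static inline unsigned long xa_to_value(const void *entry)
{
	return (unsigned long)entry >> 1;
}
```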
-
- 27 September 2018 (2 commits)
-
-
Committed by Tvrtko Ursulin
Partial views are small but there can be many of them, and since the sg list space for them is allocated pessimistically, we can save some slab by trimming the unused tail entries. Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20180926080353.20867-1-tvrtko.ursulin@linux.intel.com
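A sketch of the trimming idea (modelled on, but not claimed to be, the driver's helper): when fewer entries were used than allocated, copy the used entries into a right-sized table and free the oversized one.

```c
#include <linux/gfp.h>
#include <linux/scatterlist.h>

static int example_sg_trim(struct sg_table *orig_st)
{
	struct sg_table new_st;
	struct scatterlist *sg, *new_sg;
	unsigned int i;

	if (orig_st->nents == orig_st->orig_nents)
		return 0;	/* fully used: nothing to reclaim */

	if (sg_alloc_table(&new_st, orig_st->nents, GFP_KERNEL))
		return -ENOMEM;

	new_sg = new_st.sgl;
	for_each_sg(orig_st->sgl, sg, orig_st->nents, i) {
		sg_set_page(new_sg, sg_page(sg), sg->length, 0);
		new_sg = sg_next(new_sg);
	}

	sg_free_table(orig_st);	/* release the pessimistic allocation */
	*orig_st = new_st;
	return 0;
}
```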
-
Committed by José Roberto de Souza
Right now RESET_PCH_HANDSHAKE_ENABLE is enabled all the time inside of intel_power_domains_init_hw(), and if the PCH is NOP it is unset in i915_gem_init_hw(). So make skl_pch_reset_handshake() handle both cases and call it for the missing gens in intel_power_domains_init_hw(). Ivybridge has a different register and bits but with the same objective, so move it too. v2 (Rodrigo): - handling IVYBRIDGE case inside intel_pch_reset_handshake() v4 (Rodrigo and Ville): - moving the enable/disable decision to callers Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> Signed-off-by: José Roberto de Souza <jose.souza@intel.com> Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20180918204714.27306-2-jose.souza@intel.com
-
- 26 September 2018 (1 commit)
-
-
Committed by Chris Wilson
In commit 9144d75e ("include/linux/bitops.h: introduce BITS_PER_TYPE"), we made BITS_PER_TYPE available to all, and now we can use the macro to replace some open-coded computations of sizeof(T) * BITS_PER_BYTE. Suggested-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Cc: Jani Nikula <jani.nikula@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Jani Nikula <jani.nikula@intel.com> Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20180926104707.17410-1-chris@chris-wilson.co.uk
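For reference, BITS_PER_TYPE(T) expands to sizeof(T) * BITS_PER_BYTE, and a typical replacement looks like this:

```c
#include <linux/bitops.h>	/* provides BITS_PER_TYPE since commit 9144d75e */
#include <linux/types.h>

static unsigned int example_width(void)
{
	/* Before: open-coded. */
	unsigned int old = sizeof(u64) * BITS_PER_BYTE;
	/* After: same value, clearer intent. */
	unsigned int new = BITS_PER_TYPE(u64);

	return old == new ? new : 0;	/* both are 64 */
}
```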
-
- 21 September 2018 (1 commit)
-
-
Committed by Chris Wilson
Once we have flushed the first request through the system to both load a context and record the default state, tell the GPU to park and idle itself, putting itself immediately (hopefully at least) into a powersaving state, and allowing ourselves to start from a known state after setting up all our bookkeeping. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20180920161343.1117-1-chris@chris-wilson.co.uk
-
- 20 September 2018 (1 commit)
-
-
Committed by Matthew Auld
If we copy all the contents of the sg across, and not just the page link, we can then also put it to work in fake_get_huge_pages and beyond. Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20180920142707.19659-1-matthew.auld@intel.com
-
- 14 September 2018 (3 commits)
-
-
Committed by Chris Wilson
The prior assumption was that we did not need to reset the CSB on wedging when cancelling the outstanding requests, as it would be cleaned up in the subsequent reset prior to restarting the GPU. However, what was not accounted for was that in preparing for the reset, we would try to process the outstanding CSB entries. If the GPU happened to complete a CS event just as we were performing the cancellation of requests, that event would be kept in the CSB until the reset -- but our bookkeeping was cleared, causing confusion when trying to complete the CS event. v2: Use a sanitize on unwedge to avoid interfering with eio suspend (where we intentionally disable GPU reset). Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=107925 Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20180914080017.30308-3-chris@chris-wilson.co.uk
-
Committed by Chris Wilson
That we use a WB mapping for updating the RING_TAIL register inside the context image even on !llc machines has been a source of consternation for every reader. It appears to work on bsw+, but it may just have been that we have been incredibly bad at detecting the errors. v2: With extra enthusiasm. v3: Drop force of map type for pinned default_state, as by the time we pin it the map type is always WB and doesn't conflict with the earlier use by ce->state. v4: Transfer engine->default_state from MAP_WC to MAP_WB on creation so we do not need the MAP_FORCE littered around the backends. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20180914123504.2062-3-chris@chris-wilson.co.uk
-
Committed by Chris Wilson
Check that we can indeed acquire a WB mapping of the context image on module load. Later this will give us the opportunity to validate that we can switch from WC to WB as required. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20180914123504.2062-2-chris@chris-wilson.co.uk
-