- 04 Jul, 2016: 6 commits
-
-
Submitted by Chris Wilson
Now that we have (near) universal GPU recovery code, we can inject a real hang from userspace and not need any fakery. Not only does this mean that the testing is far more realistic, but we can simplify the kernel in the process. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Arun Siluvery <arun.siluvery@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1467616119-4093-7-git-send-email-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson
Describe the intent of boosting the GPU frequency to maximum before waiting on the GPU. RPS waitboosting was introduced with commit b29c19b6 ("drm/i915: Boost RPS frequency for CPU stalls") but lacked a concise comment in the code to explain itself. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: http://patchwork.freedesktop.org/patch/msgid/1467616119-4093-5-git-send-email-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson
Ideally, we want to automagically have the GPU respond to the instantaneous load by reclocking itself. However, reclocking occurs relatively slowly, and to the client waiting for a result from the GPU, too late. To compensate and reduce the client latency, we allow the first wait from a client to boost the GPU clocks to maximum. This overcomes the lag in autoreclocking, at the expense of forcing the GPU clocks too high. So to offset the excessive power usage, we currently allow a client to boost the clocks only once before we detect the GPU is idle again. This works reasonably for, say, the first frame in a benchmark, but for more synchronous workloads (such as OpenCL) we find the GPU clocks remain too low. By noting a wait which would idle the GPU (i.e. we just waited upon the last known request), we can give that client the idle boost credit (for their next wait) without the 100ms delay required for us to detect the GPU idle state. The intention is to boost clients that are stalling in the process of feeding the GPU more work (and who in doing so let the GPU idle), without granting boost credits to clients that are throttling themselves (such as compositors). Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: "Zou, Nanhai" <nanhai.zou@intel.com> Cc: Jesse Barnes <jbarnes@virtuousgeek.org> Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: http://patchwork.freedesktop.org/patch/msgid/1467616119-4093-4-git-send-email-chris@chris-wilson.co.uk
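A minimal sketch of the boost-credit policy described above, using hypothetical structure and helper names (client_rps, boost_gpu_to_max() and wait_for_request() are illustrative only, not the driver's real code):

```c
struct client_rps {
	bool boost_spent;	/* one boost allowed per GPU idle period */
};

static void client_wait(struct client_rps *rps, bool waiting_on_last_request)
{
	if (!rps->boost_spent) {
		boost_gpu_to_max();	/* hypothetical: kick RPS to max frequency */
		rps->boost_spent = true;
	}

	wait_for_request();		/* hypothetical: sleep until the request completes */

	/*
	 * If this wait drained the last outstanding request, the GPU is about
	 * to idle anyway, so refund the credit immediately instead of waiting
	 * ~100ms for idle detection; the client's next stall can boost again.
	 */
	if (waiting_on_last_request)
		rps->boost_spent = false;
}
```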
-
Submitted by Chris Wilson
We know, by design, that whilst the GPU is active (and thus we are throttling) the retire_worker is queued. Therefore attempting to requeue it with queue_delayed_work() is a no-op and we can safely remove it. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1467616119-4093-3-git-send-email-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson
Rather than persistently postponing the idle-work every time somebody calls i915_gem_retire_requests() (potentially ensuring that we never reach the idle state), queue the work the first time we detect all requests are complete. Then if, within 100ms, more requests have been queued, we will abort the idle-worker and wait again until all the new requests have been completed. Of course, this does depend upon the idle worker cancelling itself gracefully from the previous patch. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1467616119-4093-2-git-send-email-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson
The retire worker is a low frequency task that makes sure we retire outstanding requests if userspace is being lax. We only need to start it once, as it remains active until the GPU is idle, so do a cheap test before the more expensive queue_work(). A consequence of this is that we need correct locking in the worker to make the hot path of request submission cheap. To keep the symmetry and keep hangcheck strictly bound by the GPU's wakelock, we move the cancel_sync(hangcheck) to the idle worker before dropping the wakelock. v2: Guard against RCU fouling the breadcrumbs bottom-half whilst we kick the waiter. v3: Remove the wakeref assertion squelching (now we hold a wakeref for the hangcheck, any rpm error there is genuine). v4: To prevent excess work when retiring requests, we split the busy flag into two, a boolean to denote whether we hold the wakeref and a bitmask of active engines. v5: Reorder cancelling hangcheck upon idling to avoid a race where we might cancel a hangcheck after being preempted by a new task. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> References: https://bugs.freedesktop.org/show_bug.cgi?id=88437 Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1467616119-4093-1-git-send-email-chris@chris-wilson.co.uk
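A minimal sketch of the "cheap test before queue_work()" idea; the gt.awake flag and gt.retire_work names are assumptions for illustration rather than a quote of the patch:

```c
static void mark_busy(struct drm_i915_private *dev_priv)
{
	if (dev_priv->gt.awake)
		return;	/* worker already running: request submission stays cheap */

	/* take the wakeref that the idle worker will later release */
	intel_runtime_pm_get_noresume(dev_priv);
	dev_priv->gt.awake = true;

	queue_delayed_work(dev_priv->wq, &dev_priv->gt.retire_work,
			   round_jiffies_up_relative(HZ));
}
```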
-
- 02 Jul, 2016: 6 commits
-
-
Submitted by Chris Wilson
If we convert the tracing from direct use of ring->irq_get() over to the breadcrumb infrastructure, we only have a single user of ring->irq_get() and so we will be able to simplify the driver routines (eliminating the redundant validation and irq refcounting). Process context is preferred over softirq (or even hardirq) for a couple of reasons: - we already utilize process context to have fast wakeup of a single client (i.e. the client waiting for the GPU inspects the seqno for itself following an interrupt to avoid the overhead of a context switch before it returns to userspace) - engine->irq_seqno() is not suitable for use from a softirq/hardirq context as we may require long waits (100-250us) to ensure the seqno write is posted before we read it from the CPU. A signaling framework is a requirement for enabling dma-fences. v2: Move to a signaling framework based upon the waiter. v3: Track the first-signal to avoid having to walk the rbtree every time. v4: Mark the signaler thread as RT priority to reduce latency in the indirect wakeups. v5: Make failure to allocate the thread fatal. v6: Rename kthreads to i915/signal:%u Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1467390209-3576-16-git-send-email-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson
We have testcases to ensure that seqno wraparound works fine, so we can forgo forcing everyone to encounter seqno wraparound during early uptime. seqno wraparound incurs a full GPU stall, so not forcing it eliminates one source of jitter early in the system's life. With the testcases we have very deterministic coverage, and given how difficult it would be to debug an issue (GPU hang) stemming from a wraparound using pure post-mortem analysis, I see no value in forcing a wrap during boot. Advancing the global next_seqno after a GPU reset is equally pointless. References: https://bugs.freedesktop.org/show_bug.cgi?id=95023 Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: http://patchwork.freedesktop.org/patch/msgid/1467390209-3576-15-git-send-email-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson
When waiting for an interrupt (waiting for the engine to complete some work), we know we are the only waiter to be woken on this engine. We also know when the GPU has nearly completed our request (or at least started processing it), so after being woken, if we detect that the GPU is active and working on our request, allow the bottom-half (the first waiter who wakes up to handle checking the seqno after the interrupt) to spin for a very short while to reduce client latencies. The impact is minimal: there was an improvement to the realtime-vs-many-clients case, but exporting the function proves useful later. However, it is tempting to adjust irq_seqno_barrier to include the spin. The problem is first ensuring that the "start-of-request" seqno is coherent, as we use that as our basis for judging when it is ok to spin. If we could, spinning there could dramatically shorten some sleeps, and allow us to make the barriers more conservative to handle missed seqno writes on more platforms (all gen7+ are known to have the occasional issue, at least). Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1467390209-3576-7-git-send-email-chris@chris-wilson.co.uk
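A hedged sketch of a bounded busy-spin before falling back to sleeping; read_hw_seqno() and seqno_passed() are hypothetical stand-ins for the driver's real seqno helpers:

```c
static bool spin_for_completion(u32 target_seqno, unsigned int timeout_us)
{
	ktime_t end = ktime_add_us(ktime_get_raw(), timeout_us);

	do {
		if (seqno_passed(read_hw_seqno(), target_seqno))
			return true;	/* completed without a full sleep/wake cycle */

		cpu_relax();
	} while (ktime_before(ktime_get_raw(), end));

	return false;	/* fall back to the interrupt-driven sleep */
}
```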
-
Submitted by Chris Wilson
One particularly stressful scenario consists of many independent tasks all competing for GPU time and waiting upon the results (e.g. realtime transcoding of many, many streams). One bottleneck in particular is that each client waits on its own results, but every client is woken up after every batchbuffer - hence the thunder of hooves as every client must then do its heavyweight dance to read a coherent seqno to see if it is the lucky one. Ideally, we only want one client to wake up after the interrupt and check its request for completion. Since the requests must retire in order, we can select the first client on the oldest request to be woken. Once that client has completed its wait, we can then wake up the next client and so on. However, all clients then incur latency as every process in the chain may be delayed for scheduling - this may also then cause some priority inversion. To reduce the latency, when a client is added or removed from the list, we scan the tree for completed seqno and wake up all the completed waiters in parallel. Using igt/benchmarks/gem_latency, we can demonstrate this effect. The benchmark measures the number of GPU cycles between completion of a batch and the client waking up from a call to wait-ioctl. With many concurrent waiters, each on a different request, we observe that the wakeup latency before the patch scales nearly linearly with the number of waiters (before external factors kick in making the scaling much worse). After applying the patch, we can see that only the single waiter for the request is being woken up, providing a constant wakeup latency for every operation. However, the situation is not quite as rosy for many waiters on the same request, though to the best of my knowledge this is much less likely in practice. Here, we can observe that the concurrent waiters incur extra latency from being woken up by the solitary bottom-half, rather than directly by the interrupt. This appears to be scheduler induced (having discounted adverse effects from having an rbtree walk/erase in the wakeup path); each additional wake_up_process() costs approximately 1us on big core. Another effect of performing the secondary wakeups from the first bottom-half is the delay this imposes on high priority threads - rather than immediately returning to userspace and leaving the interrupt handler to wake the others. To offset the delay incurred with additional waiters on a request, we could use a hybrid scheme that did a quick read in the interrupt handler and dequeued all the completed waiters (incurring the overhead in the interrupt handler, not the best plan either as we then incur GPU submission latency) but we would still have to wake up the bottom-half every time to do the heavyweight slow read. Or we could only kick the waiters on the seqno with the same priority as the current task (i.e. in the realtime waiter scenario, only it is woken up immediately by the interrupt and simply queues the next waiter before returning to userspace, minimising its delay at the expense of the chain, and also reducing contention on its scheduler runqueue). This is effective at avoiding long pauses in the interrupt handler and the extra latency in realtime/high-priority waiters. v2: Convert from a kworker per engine into a dedicated kthread for the bottom-half. v3: Rename request members and tweak comments. v4: Use a per-engine spinlock in the breadcrumbs bottom-half. v5: Fix race in locklessly checking waiter status and kicking the task on adding a new waiter. v6: Fix deciding when to force the timer to hide missing interrupts. v7: Move the bottom-half from the kthread to the first client process. v8: Reword a few comments. v9: Break the busy loop when the interrupt is unmasked or has fired. v10: Comments, unnecessary churn, better debugging from Tvrtko. v11: Wake all completed waiters on removing the current bottom-half to reduce the latency of waking up a herd of clients all waiting on the same request. v12: Rearrange missed-interrupt fault injection so that it works with igt/drv_missed_irq_hang. v13: Rename intel_breadcrumb and friends to intel_wait in preparation for signal handling. v14: RCU commentary, assert_spin_locked. v15: Hide BUG_ON behind the compiler; report on gem_latency findings. v16: Sort seqno-groups by priority so that first-waiter has the highest task priority (and so avoid priority inversion). v17: Add waiters to post-mortem GPU hang state. v18: Return early for a completed wait after acquiring the spinlock. Avoids adding ourselves to the tree if the wait is already complete, and skips the awkward question of why we don't do completion wakeups for waits earlier than or equal to ourselves. v19: Prepare for init_breadcrumbs to fail. Later patches may want to allocate during init, so be prepared to propagate back the error code. Testcase: igt/gem_concurrent_blit Testcase: igt/benchmarks/gem_latency Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: "Rogozhkin, Dmitry V" <dmitry.v.rogozhkin@intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: Dave Gordon <david.s.gordon@intel.com> Cc: "Goel, Akash" <akash.goel@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> #v18 Link: http://patchwork.freedesktop.org/patch/msgid/1467390209-3576-6-git-send-email-chris@chris-wilson.co.uk
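A simplified sketch of a seqno-ordered waiter tree, assuming the standard kernel rbtree API; the real intel_breadcrumbs code additionally tracks the current bottom-half and task priorities, and uses wrap-safe seqno comparisons:

```c
struct wait_node {
	struct rb_node node;
	u32 seqno;
	struct task_struct *tsk;
};

static void waiter_insert(struct rb_root *root, struct wait_node *wait)
{
	struct rb_node **p = &root->rb_node, *parent = NULL;

	while (*p) {
		parent = *p;
		if (wait->seqno < rb_entry(parent, struct wait_node, node)->seqno)
			p = &parent->rb_left;
		else
			p = &parent->rb_right;
	}

	rb_link_node(&wait->node, parent, p);
	rb_insert_color(&wait->node, root);
	/* the leftmost node is the oldest waiter, i.e. the next bottom-half */
}
```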
-
Submitted by Chris Wilson
Currently __i915_wait_request uses a per-engine wait_queue_t for the dual purpose of waking after the GPU advances or for waking after an error. In the future, we may add even more wake sources and require greater separation, but for now we can conceptually simplify wakeups by separating the two sources. In particular, this allows us to use different wait-queues (e.g. one on the engine advancement, a global one for errors and one on each request) without any hassle. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1467390209-3576-5-git-send-email-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson
We can forgo queuing the hangcheck at the start of every request and defer it until we actually wait upon a request. This reduces the overhead of every request, but may increase the latency of detecting a hang. However, if nothing ever waits upon a hang, did it ever hang? It also improves the robustness of the wait-request by ensuring that the hangchecker is indeed running before we sleep indefinitely (and thereby ensuring that we never actually sleep forever waiting for a dead GPU). As pointed out by Tvrtko, it is possible for a GPU hang to go unnoticed for as long as nobody is waiting for the GPU. Though this is rare, during that time we may be consuming more power than if we had promptly recovered, and in the most extreme case we may exhaust all memory before forcing the hangcheck. Something to be wary of in future. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1467390209-3576-2-git-send-email-chris@chris-wilson.co.uk
-
- 30 Jun, 2016: 1 commit
-
-
Submitted by Chris Wilson
Since commit 2ed53a94 ("drm/i915: On GPU reset, set the HWS breadcrumb to the last seqno"), once a hang is completed the seqno is advanced past all current requests. With this we know that if we wake up from waiting for a request after a hang has occurred and the reset completed, our request will be considered complete (i.e. i915_gem_request_completed() returns true). Therefore we only need to worry about the situation where a hang has occurred but has not yet been reset, where we may need to release our struct_mutex. Since we don't need to detect the completed reset using the global gpu_error->reset_counter anymore, we do not need to track the reset_counter epoch inside the request. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Arun Siluvery <arun.siluvery@linux.intel.com> Cc: Mika Kuoppala <mika.kuoppala@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1467211874-11552-1-git-send-email-chris@chris-wilson.co.uk Reviewed-by: Arun Siluvery <arun.siluvery@linux.intel.com>
-
- 24 Jun, 2016: 2 commits
-
-
Submitted by Chris Wilson
We only need to force a switch to the kernel context placeholder during eviction. All other uses of i915_gpu_idle() just want to wait until existing work on the GPU has completed. Rename i915_gpu_idle() to i915_gem_wait_for_idle() to avoid any implications about "parking" the context first. v2: Tweak an error message if the wait fails for the ilk vtd w/a. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1466776558-21516-6-git-send-email-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson
During suspend (or module unload), if we have never accessed the engine (i.e. userspace never submitted a batch to it), the engine is idle. Then we attempt to idle the engine by forcing it to the default context, which actually means we submit a render batch to set up the golden context state and then wait for it to complete. We can skip this entirely as we know the engine is idle. v2: Drop incorrect comment. References: https://bugs.freedesktop.org/show_bug.cgi?id=95634 Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1466776558-21516-1-git-send-email-chris@chris-wilson.co.uk
-
- 21 Jun, 2016: 1 commit
-
-
Submitted by Chris Wilson
... instead of the previous ORIGIN_GTT. This should actually invalidate FBC once something is written to the frontbuffer using WC mmaps. The problem with ORIGIN_GTT is that the automatic hardware tracking is not able to detect the WC writes as it can detect the GTT writes. This should help fix the SKL bug where nothing happens when you type your username/password on lightdm. This patch was originally pasted in an email by Chris and converted to an actual git patch by Paulo. v2 (from Paulo): - Make it a full variable instead of a bit-field (Daniel) - Use WRITE_ONCE (Chris) v3 (from Paulo): - Remove huge comment since now we have WRITE_ONCE (Chris) - Remove unneeded new line (Chris) - Add Chris' Signed-off-by, authorized via IRC Cc: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1466185599-26401-1-git-send-email-paulo.r.zanoni@intel.com
-
- 20 Jun, 2016: 2 commits
-
-
Submitted by Chris Wilson
The idea behind relaxing the restriction for pread/pwrite was to handle !obj->base.filp, i.e. non-shmemfs backed objects, which only requires that the object provide struct pages. v2: Remove excess (). Not enough editing after copy'n'paste. v3: Use new i915_gem_object_has_struct_page() Testcase: igt/prime_vgem/read Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Ankitprasad Sharma <ankitprasad.r.sharma@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1466431552-17860-2-git-send-email-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson
Currently, to see if an object is backed by struct pages (as opposed to being a simple pointer to stolen memory, for example) we do a manual check on obj->ops->flags. This is quite shouty and, before adding more checks in future, we should make it a bit calmer. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1466431552-17860-1-git-send-email-chris@chris-wilson.co.uk
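A minimal sketch of the helper this commit introduces, assuming the flag is carried in obj->ops->flags as the message describes:

```c
static inline bool
i915_gem_object_has_struct_page(const struct drm_i915_gem_object *obj)
{
	/* replaces the open-coded obj->ops->flags test at the call sites */
	return obj->ops->flags & I915_GEM_OBJECT_HAS_STRUCT_PAGE;
}
```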
-
- 13 Jun, 2016: 2 commits
-
-
Submitted by Ankitprasad Sharma
This patch adds support for extending the pread/pwrite functionality to objects not backed by shmem. The access will be made through the GTT interface. This will cover objects backed by stolen memory as well as other non-shmem backed objects. v2: Drop locks around slow_user_access, prefault the pages before access (Chris) v3: Rebased to the latest drm-intel-nightly (Ankit) v4: Moved page base & offset calculations outside the copy loop, corrected data types for size and offset variables, corrected if-else braces format (Tvrtko/kerneldocs) v5: Enabled pread/pwrite for all non-shmem backed objects including without tiling restrictions (Ankit) v6: Using pwrite_fast for non-shmem backed objects as well (Chris) v7: Updated commit message, Renamed i915_gem_gtt_read to i915_gem_gtt_copy, added pwrite slow path for non-shmem backed objects (Chris/Tvrtko) v8: Updated v7 commit message, mutex unlock around pwrite slow path for non-shmem backed objects (Tvrtko) v9: Corrected check during pread_ioctl, to avoid shmem_pread being called for non-shmem backed objects (Tvrtko) v10: Moved the write_domain check to needs_clflush and tiling mode check to pwrite_fast (Chris) v11: Use pwrite_fast fallback for all objects (shmem and non-shmem backed), call fast_user_write regardless of pagefault in previous iteration v12: Use page-by-page copy for slow user access too (Chris) v13: Handled EFAULT, Avoid use of WARN_ON, put_fence only if whole obj pinned (Chris) v14: Corrected datatypes/initializations (Tvrtko) Testcase: igt/gem_stolen, igt/gem_pread, igt/gem_pwrite Signed-off-by: Ankitprasad Sharma <ankitprasad.r.sharma@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1465548783-19712-1-git-send-email-ankitprasad.r.sharma@intel.com
-
Submitted by Ankitprasad Sharma
In pwrite_fast, map an object page by page if obj_ggtt_pin fails. First, we try a nonblocking pin for the whole object (since that is fastest if reused), then failing that we try to grab one page in the mappable aperture. It also allows us to handle objects larger than the mappable aperture (e.g. if we need to pwrite with vGPU restricting the aperture to a measly 8MiB or something like that). v2: Pin pages before starting pwrite, Combined duplicate loops (Chris) v3: Combined loops based on local patch by Chris (Chris) v4: Added i915 wrapper function for drm_mm_insert_node_in_range (Chris) v5: Renamed wrapper function for drm_mm_insert_node_in_range (Chris) v5: Added wrapper for drm_mm_remove_node() (Chris) v6: Added get_pages call before pinning the pages (Tvrtko) Added remove_mappable_node() wrapper for drm_mm_remove_node() (Chris) v7: Added size argument for insert_mappable_node (Tvrtko) v8: Do not put_pages after pwrite, do memset of node in the wrapper function (insert_mappable_node) (Chris) v9: Rebase (Ankit) Signed-off-by: Ankitprasad Sharma <ankitprasad.r.sharma@intel.com> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
-
- 07 Jun, 2016: 1 commit
-
-
Submitted by Dave Gordon
The last stage of the GuC loader also sanitises the GuC submission settings, so it should be called unconditionally (even on platforms without a GuC) to ensure consistent settings; in particular, this prevents any attempt to use GuC submission on GuC-less platforms! Also fix error path handling and clarify the DRM_INFO fallback message. Signed-off-by: Dave Gordon <david.s.gordon@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
-
- 06 Jun, 2016: 1 commit
-
-
Submitted by Tvrtko Ursulin
Just a bunch of stale kerneldocs generating warnings when building the docs. Mostly function parameters, so not very useful, but still. v2: Tidy. Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: http://patchwork.freedesktop.org/patch/msgid/1464958937-23344-1-git-send-email-tvrtko.ursulin@linux.intel.com
-
- 24 May, 2016: 2 commits
-
-
Submitted by Chris Wilson
Our goal is to rename the anonymous per-engine struct beneath the current intel_context. However, after a lively debate revolving around the confusion between intel_context_engine and intel_engine_context, the realisation is that the two structs target different users. The outer struct is API / user facing, and so carries the higher level GEM information. The inner struct is hw facing. Thus we want to name the inner struct intel_context and the outer one i915_gem_context. As the first step, we need to rename the current struct: s/struct intel_context/struct i915_gem_context/ which fits much better with its constructors already conveying the i915_gem_context prefix! Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Dave Gordon <david.s.gordon@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1464098023-3294-1-git-send-email-chris@chris-wilson.co.uk
-
Submitted by Michal Hocko
i915_gem_mmap_ioctl relies on mmap_sem for write. If the waiting task gets killed by the oom killer, it would block the oom_reaper from asynchronous address space reclaim and reduce the chances of timely OOM resolution. Wait for the lock in killable mode and return with EINTR if the task got killed while waiting. Signed-off-by: Michal Hocko <mhocko@suse.com> Acked-by: Vlastimil Babka <vbabka@suse.cz> Cc: Daniel Vetter <daniel.vetter@intel.com> Cc: David Airlie <airlied@linux.ie> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
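The change boils down to the killable-lock pattern sketched below; the surrounding ioctl code is elided and the helper is illustrative only:

```c
static int grab_mmap_sem_killable(struct mm_struct *mm)
{
	if (down_write_killable(&mm->mmap_sem))
		return -EINTR;	/* killed (e.g. by the OOM killer) while waiting */

	/* ... update the VMA under the write lock ... */

	up_write(&mm->mmap_sem);
	return 0;
}
```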
-
- 23 May, 2016: 5 commits
-
-
Submitted by Dave Gordon
Split the function of "enable_guc_submission" into two separate options. The new one ("enable_guc_loading") controls only the *fetching and loading* of the GuC firmware image. The existing one is redefined to control only the *use* of the GuC for batch submission once the firmware is loaded. In addition, the degree of control has been refined from a simple bool to an integer key, allowing several options: -1 (default) whatever the platform default is; 0 DISABLE: don't load/use the GuC; 1 BEST EFFORT: try to load/use the GuC, fall back if not available; 2 REQUIRE: must load/use the GuC, else leave the GPU wedged. The new platform default (as coded here) will be to attempt to load the GuC iff the device has a GuC that requires firmware, but not yet to use it for submission. A later patch will change to enable it if appropriate. v4: Changed some error-message levels, mostly ERROR->INFO, per review comments by Tvrtko Ursulin. v5: Dropped one more error message, disabled GuC submission on hypothetical firmware-free devices [Tvrtko Ursulin]. v6: Logging tidy by Tvrtko Ursulin: * Do not log falling back to execlists when wedging the GPU. * Do not log fw load errors when load was disabled by user. * Pass down some error code from fw load for log message to make more sense. Signed-off-by: Dave Gordon <david.s.gordon@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> (v5) Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Tested-by: Fiedorowicz, Lukasz <lukasz.fiedorowicz@intel.com> Reviewed-by: Nick Hoath <nicholas.hoath@intel.com> (v6)
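A hedged sketch of how such an integer module parameter is typically declared; the exact declaration in i915_params.c (permission bits, _unsafe variant) may differ:

```c
module_param_named(enable_guc_loading, i915.enable_guc_loading, int, 0400);
MODULE_PARM_DESC(enable_guc_loading,
	"Load GuC firmware (-1=platform default, 0=never, 1=best effort, 2=required)");
```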
-
Submitted by Dave Gordon
For now, anything with a GuC requires uCode loading, and then supports command submission once loaded. But these are logically distinct from simply "having a GuC", so we need a separate macro for the latter. Then, various tests should use this new macro rather than HAS_GUC_UCODE() or testing enable_guc_submission. v4: Added a couple more uses of the new macro. Signed-off-by: Dave Gordon <david.s.gordon@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
-
Submitted by Dave Gordon
The GuC initialisation code could do other things apart from loading firmware, so here we rename the three primary entry points to remove any specific reference to "ucode" (no functional changes, just renaming). Signed-off-by: Dave Gordon <david.s.gordon@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
-
Submitted by Chris Wilson
Following a GPU hang, we break out of the request loop in order to unlock the struct_mutex for use by the GPU reset. However, if we retire all the requests at that moment, we cannot identify the guilty request after performing the reset. v2: Not automatically retiring requests forces us to recheck for available ringspace. Fixes: f4457ae7 ("drm/i915: Prevent leaking of -EIO from i915_wait_request()") Testcase: igt/gem_reset_stats/ban-* Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: Mika Kuoppala <mika.kuoppala@intel.com> Tested-by: Mika Kuoppala <mika.kuoppala@intel.com> Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1463137042-9669-4-git-send-email-chris@chris-wilson.co.uk (cherry picked from commit e075a32f) Signed-off-by: Jani Nikula <jani.nikula@intel.com>
-
Submitted by Ville Syrjälä
Move the intel_enable_gtt() call to happen before we touch the GTT during resume. Right now it's done way too late. Before commit ebb7c78d ("agp/intel-gtt: Only register fake agp driver for gen1") it was actually done earlier, on account of also getting called from the resume hook of the fake agp driver. With the fake agp driver no longer getting registered we must move the call up. The symptoms I've seen on my 830 machine include lowmem corruption, other kinds of memory corruption, and a straight-up hung machine during or just after resume. Not really sure what causes the memory corruption, but so far I've not seen any with this fix. I think we shouldn't really need to call this during init, but we have been doing that, so I've decided to keep the call. However, moving that call earlier could be prudent as well. Doing it right after the intel-gtt probe seems appropriate. Also tested this on 946gz, elk and ilk, and all seemed quite happy with this change. v2: Reorder init_hw vs. enable_hw functions (Chris) Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: drm-intel-fixes@lists.freedesktop.org Fixes: ebb7c78d ("agp/intel-gtt: Only register fake agp driver for gen1") Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1462559755-353-1-git-send-email-ville.syrjala@linux.intel.com Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> (cherry picked from commit ac840ae5) Signed-off-by: Jani Nikula <jani.nikula@intel.com>
-
- 20 May, 2016: 4 commits
-
-
Submitted by Dave Gordon
The existing for_each_sg_page() iterator is somewhat heavyweight, and is limiting i915 driver performance in a few benchmarks. So here we introduce somewhat lighter weight iterators, primarily for use with GEM objects or other cases where we need only deal with whole aligned pages. Unlike the old iterator, the new iterators use an internal state structure which is not intended to be accessed by the caller; instead each takes as a parameter an output variable which is set before each iteration. This makes them particularly simple to use :) One of the new iterators provides the caller with the DMA address of each page in turn; the other provides the 'struct page' pointer required by many memory management operations. Various uses of for_each_sg_page() are then converted to the new macros. v2: Force inlining of the sg_iter constructor and make the union anonymous. Signed-off-by: Dave Gordon <david.s.gordon@intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: http://patchwork.freedesktop.org/patch/msgid/1463741647-15666-4-git-send-email-chris@chris-wilson.co.uk
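A hedged usage sketch of the new iterators, with the iterator names as introduced around this series; insert_pte() is a hypothetical per-address callback and obj->pages is the object's sg_table:

```c
struct sgt_iter iter;
struct page *page;
dma_addr_t dma;

/* per-struct-page operations, e.g. dirtying pages after a CPU write */
for_each_sgt_page(page, iter, obj->pages)
	set_page_dirty(page);

/* per-DMA-address operations, e.g. filling GTT/PPGTT entries */
for_each_sgt_dma(dma, iter, obj->pages)
	insert_pte(vm, dma);
```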
-
Submitted by Dave Gordon
We're using this function for ringbuffers and other "small" objects, so it's worth avoiding an extra malloc()/free() cycle if the page array is small enough to put on the stack. Here we've chosen an arbitrary cutoff of 32 (4k) pages, which is big enough for a ringbuffer (4 pages) or a context image (currently up to 22 pages). v5: change name of local array [Chris Wilson] Signed-off-by: Dave Gordon <david.s.gordon@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: http://patchwork.freedesktop.org/patch/msgid/1463741647-15666-3-git-send-email-chris@chris-wilson.co.uk
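A minimal sketch of the stack-or-heap choice for the temporary page array, using the 32-page cutoff from the message; the helper names should be read as illustrative rather than a quote of the patch:

```c
struct page *stack_pages[32];
struct page **pages = stack_pages;
void *addr;

if (n_pages > ARRAY_SIZE(stack_pages)) {
	/* too big for the stack, fall back to a temporary allocation */
	pages = drm_malloc_gfp(n_pages, sizeof(*pages), GFP_TEMPORARY);
	if (!pages)
		return NULL;
}

/* ... fill pages[] from the object's sg_table, then map ... */
addr = vmap(pages, n_pages, 0, PAGE_KERNEL);

if (pages != stack_pages)
	drm_free_large(pages);
```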
-
Submitted by Dave Gordon
The recently-added i915_gem_object_pin_map() can be further optimised for "small" objects. To facilitate this, and to simplify the error paths before adding the new code, this patch pulls out the "mapping" part of the operation (involving local allocations which must be undone before return) into its own subfunction. The next patch will then insert the new optimisation into the middle of the now-separated subfunction. This reorganisation will probably not affect the generated code, as the compiler will most likely inline it anyway, but it makes the logical structure a bit clearer and easier to modify. v2: Restructure loop-over-pages & error check [Chris Wilson] v3: Add page count to debug messages [Chris Wilson] Convert WARN_ON() to GEM_BUG_ON() v4: Drop the DEBUG messages [Tvrtko Ursulin] Signed-off-by: Dave Gordon <david.s.gordon@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: http://patchwork.freedesktop.org/patch/msgid/1463741647-15666-2-git-send-email-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson
userptr directly uses drm_device in only a single interface, where it really meant to use drm_i915_private (everywhere else we have to derive it from the drm_i915_gem_object and so require going via drm_device). Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1463671036-3235-1-git-send-email-chris@chris-wilson.co.uk
-
- 17 May, 2016: 1 commit
-
-
Submitted by Chris Wilson
drm_gem_object_lookup() has never required the drm_device for its file-local translation of the user handle to the GEM object. Let's remove the unused parameter and save some space. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: dri-devel@lists.freedesktop.org Cc: Dave Airlie <airlied@redhat.com> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> [danvet: Fixup kerneldoc too.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
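Call sites simply drop the now-unused device argument, for example:

```c
struct drm_gem_object *obj;

obj = drm_gem_object_lookup(file, args->handle);	/* new signature */
/* was: obj = drm_gem_object_lookup(dev, file, args->handle); */
if (!obj)
	return -ENOENT;
```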
-
- 14 May, 2016: 1 commit
-
-
Submitted by Chris Wilson
When creating the hibernation image, the CPU will read the pages of all objects and thus conflict with our domain tracking. We need to update our domain tracking to accurately reflect the state on restoration. v2: Perform the domain tracking inside freeze, before the image is written, rather than upon restoration. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Imre Deak <imre.deak@intel.com> Cc: David Weinehall <david.weinehall@intel.com> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Reviewed-by: Imre Deak <imre.deak@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1463207195-22076-2-git-send-email-chris@chris-wilson.co.uk
-
- 13 May, 2016: 2 commits
-
-
Submitted by Chris Wilson
Following a GPU hang, we break out of the request loop in order to unlock the struct_mutex for use by the GPU reset. However, if we retire all the requests at that moment, we cannot identify the guilty request after performing the reset. v2: Not automatically retiring requests forces us to recheck for available ringspace. Fixes: f4457ae7 ("drm/i915: Prevent leaking of -EIO from i915_wait_request()") Testcase: igt/gem_reset_stats/ban-* Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: Mika Kuoppala <mika.kuoppala@intel.com> Tested-by: Mika Kuoppala <mika.kuoppala@intel.com> Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1463137042-9669-4-git-send-email-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson
In order to reduce the workload of the caller, we do not want to actually have to retire requests of others when checking the busy status of this object. This applies to both the busy and wait ioctls, as the wait ioctl has a semantically equivalent mode to the busy ioctl. At the present time, this is only a minor improvement to reduce the workload of the busy ioctl under the struct_mutex, which helps to reduce its impact upon contention of struct_mutex. However, since it is mostly a victim in highly contended scenarios, the impact is very minor until we can eliminate the struct_mutex requirement for the busy-ioctl in the near future. v2: Mention the patch's intended limited impact. It is just paving the way for greater changes whilst reducing the impact of a bugfix in the next patch. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1463137042-9669-3-git-send-email-chris@chris-wilson.co.uk
-
- 11 May, 2016: 1 commit
-
-
Submitted by Tvrtko Ursulin
This way the optimization from a previous patch works even better. v2: Rebase. Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Jani Nikula <jani.nikula@intel.com>
-
- 10 May, 2016: 1 commit
-
-
Submitted by Ville Syrjälä
Move the intel_enable_gtt() call to happen before we touch the GTT during resume. Right now it's done way too late. Before commit ebb7c78d ("agp/intel-gtt: Only register fake agp driver for gen1") it was actually done earlier, on account of also getting called from the resume hook of the fake agp driver. With the fake agp driver no longer getting registered we must move the call up. The symptoms I've seen on my 830 machine include lowmem corruption, other kinds of memory corruption, and a straight-up hung machine during or just after resume. Not really sure what causes the memory corruption, but so far I've not seen any with this fix. I think we shouldn't really need to call this during init, but we have been doing that, so I've decided to keep the call. However, moving that call earlier could be prudent as well. Doing it right after the intel-gtt probe seems appropriate. Also tested this on 946gz, elk and ilk, and all seemed quite happy with this change. v2: Reorder init_hw vs. enable_hw functions (Chris) Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: drm-intel-fixes@lists.freedesktop.org Fixes: ebb7c78d ("agp/intel-gtt: Only register fake agp driver for gen1") Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1462559755-353-1-git-send-email-ville.syrjala@linux.intel.com Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
-
- 09 May, 2016: 1 commit
-
-
Submitted by Chris Wilson
   text    data     bss      dec    hex filename
6309351 3578714  696320 10584385 a18141 vmlinux
6308391 3578714  696320 10583425 a17d81 vmlinux
Almost 1KiB of code reduction. v2: More s/INTEL_INFO()->gen/INTEL_GEN()/ and IS_GENx() conversions
   text    data     bss      dec    hex filename
6304579 3578778  696320 10579677 a16edd vmlinux
6303427 3578778  696320 10578525 a16a5d vmlinux
Now over 1KiB! Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1462545621-30125-3-git-send-email-chris@chris-wilson.co.uk
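A typical conversion looks like this (an illustrative pattern, not a specific hunk from the patch; do_gen9_thing() and do_gen6_thing() are placeholders):

```c
/* before */
if (INTEL_INFO(dev)->gen >= 9)
	do_gen9_thing();

/* after */
if (INTEL_GEN(dev) >= 9)
	do_gen9_thing();

/* and likewise for exact-generation checks */
if (IS_GEN6(dev))	/* instead of INTEL_INFO(dev)->gen == 6 */
	do_gen6_thing();
```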
-