- 08 Sep 2017, 1 commit
-
-
Submitted by fred gao
Currently the i915 request structure and the shadow ring buffer are allocated before command scan, so the long dispatch_workload path has to restore previous states whenever an error happens later on. This patch introduces a reserved ring buffer created at the beginning of vGPU initialization. A workload is copied into this reserved buffer and scanned first; the i915 request and shadow ring buffer are allocated only after the scan succeeds. To balance memory usage against allocation time, when a bigger guest ring buffer arrives the reserved buffer is reallocated and kept until an even bigger one is needed.
v2: use kmalloc for the smaller ring buffer, realloc if required. (Zhenyu)
v3: remove the dynamically allocated ring buffer. (Zhenyu)
v4: code style polish; kfree the previously allocated buffer once kmalloc fails. (Zhenyu)
Signed-off-by: fred gao <fred.gao@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
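A minimal sketch of the grow-on-demand pattern described above, assuming illustrative field names (reserve_ring_buffer_va/reserve_ring_buffer_size) rather than the exact driver symbols:

    /* Grow the per-ring reserved buffer only when the guest ring buffer
     * outgrows it; keep the larger allocation for later workloads. */
    static void *get_reserve_ring_buffer(struct intel_vgpu *vgpu,
                                         int ring_id, unsigned long size)
    {
            if (size <= vgpu->reserve_ring_buffer_size[ring_id])
                    return vgpu->reserve_ring_buffer_va[ring_id];

            /* need a bigger buffer: drop the old one, keep the new one */
            kfree(vgpu->reserve_ring_buffer_va[ring_id]);
            vgpu->reserve_ring_buffer_va[ring_id] =
                    kmalloc(size, GFP_KERNEL);
            if (!vgpu->reserve_ring_buffer_va[ring_id]) {
                    vgpu->reserve_ring_buffer_size[ring_id] = 0;
                    return NULL;    /* v4: old buffer already freed */
            }
            vgpu->reserve_ring_buffer_size[ring_id] = size;
            return vgpu->reserve_ring_buffer_va[ring_id];
    }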
-
- 10 Aug 2017, 4 commits
-
-
Submitted by Kechen Lu
The current context logic only updates the context descriptor when the context is pinned into graphics memory space, which cannot satisfy the requirements of the shadow context. The addressing mode of the pinned shadow context descriptor may need to change according to the guest addressing mode, but an already pinned shadow context gets no chance to update its descriptor. This leads to a GPU hang, as the shadow context is used with a wrong descriptor. This patch fixes the issue by letting the pinned shadow context update its addressing mode on demand. It fixes a GPU HANG which happens after changing the grub parameter i915.enable_ppgtt from 0x01 to 0x03 (or vice versa) and then rebooting the guest.
Signed-off-by: Tina Zhang <tina.zhang@intel.com>
Signed-off-by: Kechen Lu <kechen.lu@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
Submitted by Ping Gao
The workload scan-and-shadow function must be called with drm.struct_mutex held. To avoid misuse of this function, add a lockdep assert in it.
Signed-off-by: Ping Gao <ping.a.gao@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
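The assertion amounts to a one-liner at the top of the function; a sketch (the exact pointer chain to dev_priv is an assumption):

    static int intel_gvt_scan_and_shadow_workload(struct intel_vgpu_workload *workload)
    {
            struct drm_i915_private *dev_priv = workload->vgpu->gvt->dev_priv;

            lockdep_assert_held(&dev_priv->drm.struct_mutex);
            /* ... scan and shadow the workload ... */
    }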
-
Submitted by Ping Gao
Perform the workload audit and shadow ahead of vGPU scheduling; this eliminates GPU idle time and improves performance for multi-VM setups. The performance of Heaven running simultaneously in 3 VMs improves by 20% after this patch.
v2: remove the condition current->vgpu == vgpu when shadowing during the ELSP write.
Signed-off-by: Ping Gao <ping.a.gao@intel.com>
Reviewed-by: Zhi Wang <zhi.a.wang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
Submitted by Ping Gao
To perform the workload scan and shadow in the ELSP write stage for performance, the scan-and-shadow logic is factored out of dispatch_workload().
v2: put context pin before i915_add_request; refine the comments; rename some APIs.
v3: set workload->status only when an error happens.
v4: i915_add_request is mandatory after i915_gem_request_alloc.
Signed-off-by: Ping Gao <ping.a.gao@intel.com>
Reviewed-by: Zhi Wang <zhi.a.wang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
- 02 Aug 2017, 1 commit
-
-
Submitted by Chuanxiao Dong
Use resetting_eng to identify which engines are resetting, so that workloads on the other engines are not impacted.
v2: use ENGINE_MASK(ring_id) instead of (1 << ring_id). (Zhenyu)
Signed-off-by: Chuanxiao Dong <chuanxiao.dong@intel.com>
Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
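A sketch of the mask test this enables, assuming resetting_eng lives on the vgpu (the exact struct layout may differ):

    /* Only suppress workloads on engines that are actually resetting. */
    if (vgpu->resetting_eng & ENGINE_MASK(ring_id)) {
            /* this engine is under reset: drop its in-flight workload */
    } else {
            /* other engines keep dispatching normally */
    }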
-
- 11 Jul 2017, 2 commits
-
-
Submitted by Chuanxiao Dong
req->fence.error is set if the request caused a GPU hang, so we can propagate this value to workload->status to indicate whether this GVT request caused any problem. If it caused a GPU hang, we shouldn't trigger a context switch back to the guest.
v2: only take -EIO from fence->error. (Zhenyu)
Fixes: 8f1117ab ("drm/i915/gvt: handle workload lifecycle properly")
Signed-off-by: Chuanxiao Dong <chuanxiao.dong@intel.com>
Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
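The v2 filtering can be expressed in a couple of lines; a sketch using the names from the message (the goto label is illustrative):

    /* Propagate only a real hang (-EIO); ignore other fence errors. */
    workload->status = workload->req->fence.error == -EIO ? -EIO : 0;

    if (workload->status == -EIO)
            goto out;   /* skip switching the context back to the guest */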
-
Submitted by Weinan Li
GVT-g now uses a per-vGPU scheduler for vGPU workloads, and the per-ring work_thread only picks workloads belonging to the current vGPU. With the time-slice based scheduler, it also waits for all engines to become idle before a vGPU switch, so each per-ring work_thread can dispatch freely: different rings running in different vGPUs cannot happen. As for workloads between a vGPU and the host, this scheduler_mutex cannot block the host from dispatching workloads to other ring engines anyway. Remove the mutex, since it hurts performance when an application uses more than one ring engine in one vGPU.
ring0 running in vGPU1, ring1 running in the host: can happen.
ring0 running in vGPU1, ring1 running in vGPU2: cannot happen.
Signed-off-by: Weinan Li <weinan.z.li@intel.com>
Signed-off-by: Ping Gao <ping.a.gao@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
- 21 Jun 2017, 1 commit
-
-
Submitted by Chris Wilson
If we move the actual cleanup of the context to a worker, we can allow the final free to be called from any context and avoid undue latency in the caller.
v2: negotiate handling of the delayed context frees by flushing the workqueue before calling i915_gem_context_fini() and performing the final free of the kernel context directly.
v3: flush deferred frees before new context allocations.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170620110547.15947-2-chris@chris-wilson.co.uk
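The deferred-free pattern in miniature: the release path queues contexts on a lockless list, and a worker performs the slow teardown later. This is a hedged sketch using common kernel primitives; the field names (contexts.free_list, free_work, free_link) are assumptions, not the verified driver layout:

    static void contexts_free_worker(struct work_struct *work)
    {
            struct drm_i915_private *i915 =
                    container_of(work, typeof(*i915), contexts.free_work);
            struct i915_gem_context *ctx, *cn;
            struct llist_node *freed;

            /* drain everything queued by the release path in one go */
            freed = llist_del_all(&i915->contexts.free_list);
            llist_for_each_entry_safe(ctx, cn, freed, free_link)
                    i915_gem_context_free(ctx);
    }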
-
- 08 Jun 2017, 2 commits
-
-
Submitted by Ping Gao
The time-based scheduler polls the context busy status every microsecond during a vGPU switch, which leaves the GPU idle for a while when a context is very small and completes before the next poll. Triggering scheduling immediately after a context completes eliminates that idle time and improves performance.
Create two vGPUs of the same type and run Heaven simultaneously:

Before this patch:
+---------+----------+----------+
|         |  vGPU1   |  vGPU2   |
+---------+----------+----------+
| Heaven  |   357    |   354    |
+-------------------------------+

After this patch:
+---------+----------+----------+
|         |  vGPU1   |  vGPU2   |
+---------+----------+----------+
| Heaven  |   397    |   398    |
+-------------------------------+

v2: protect need_reschedule with gvt-lock.
Signed-off-by: Ping Gao <ping.a.gao@intel.com>
Signed-off-by: Weinan Li <weinan.z.li@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
Submitted by Changbin Du
Commit ab9da627906a ("drm/i915: make context status notifier head be per engine") gives us a chance to inspect every single request, so we can eliminate unnecessary mmio switches for the same vGPU: an mmio switch is only needed between different VMs (including the host). This patch introduces a new general API, intel_gvt_switch_mmio(), to replace the old intel_gvt_load/restore_render_mmio(). The function can be further optimized for vGPU-to-vGPU switching. To support switching per ring, we track the owner occupying each ring; when another VM or the host requests a ring, we do the mmio context switch, and otherwise leave the ring alone. This optimization is very useful when only one guest has plenty of workloads and the host is mostly idle: in the best case no mmio switch happens at all.
v2:
o fix missing ring switch issue. (chuanxiao)
o support individual ring switch.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Reviewed-by: Chuanxiao Dong <chuanxiao.dong@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
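A sketch of the per-ring owner tracking described above, with engine_owner as an assumed per-ring field on the scheduler (NULL meaning the host):

    /* Switch mmio state only when the ring changes hands. */
    if (scheduler->engine_owner[ring_id] != workload->vgpu) {
            intel_gvt_switch_mmio(scheduler->engine_owner[ring_id],
                                  workload->vgpu, ring_id);
            scheduler->engine_owner[ring_id] = workload->vgpu;
    }
    /* same owner as last time: no mmio switch needed */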
-
- 04 May 2017, 1 commit
-
-
Submitted by Chris Wilson
Since unifying ringbuffer/execlist submission to use engine->pin_context, we ensure that the intel_ring is available before we start constructing the request. We can therefore move the assignment of request->ring into the central i915_gem_request_alloc() and not require it in every engine->request_alloc() callback. Another small step towards simplification (of the core, at the cost of handling error pointers in less important callers of engine->pin_context).
v2: rearrange a few branches to reduce the impact of PTR_ERR() on gcc's code generation.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Oscar Mateo <oscar.mateo@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Oscar Mateo <oscar.mateo@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170504093308.4137-1-chris@chris-wilson.co.uk
-
- 28 Apr 2017, 1 commit
-
-
Submitted by Joonas Lahtinen
Pre-calculate the engine context size based on engine class and device generation, and store it in the engine instance.
v2: squash and get rid of hw_context_size. (Chris)
v3: move after MMIO init for probing on Gen7 and 8 (Chris); retain rounding. (Tvrtko)
v4: rebase for deferred legacy context allocation.
Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Oscar Mateo <oscar.mateo@intel.com>
Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
Cc: intel-gvt-dev@lists.freedesktop.org
Acked-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
-
- 13 Apr 2017, 1 commit
-
-
Submitted by Zhenyu Wang
These debug messages can appear on every scheduler timer call, which is too noisy, eats too much log space, and isn't meaningful. Remove them.
Cc: Ping Gao <ping.a.gao@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
- 29 Mar 2017, 1 commit
-
-
Submitted by Xu Han
Extend the function dispatch logic to support the KBL platform.
Signed-off-by: Xu Han <xu.han@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
- 22 Mar 2017, 1 commit
-
-
Submitted by Changbin Du
My previous commit ab9da627906a ("drm/i915: make context status notifier head be per engine") relies on scheduler->current_workload[x] to distinguish GVT special requests from i915's own requests. But this is not always true, since there is no synchronization between workload_thread and the lrc irq handler:

    lrc irq handler                    workload_thread
    ----                               ----
    pick i915 requests;
                                       intel_vgpu_submit_execlist();
                                       current_workload[x] = xxx;
    shadow_context_status_change();

Here current_workload[x] is not NULL, but the current request belongs to i915 itself. So instead, check the context flag CONTEXT_FORCE_SINGLE_SUBMISSION: only GVT requests set this flag, and they always set it.
v2: reverse the order of the multi-condition 'if' statement.
Fixes: ab9da6279 ("drm/i915: make context status notifier head be per engine")
Signed-off-by: Changbin Du <changbin.du@intel.com>
Reviewed-by: Yulei Zhang <yulei.zhang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
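The replacement check is small enough to quote in full; a sketch of the helper (the wrapper name is illustrative, while i915_gem_context_force_single_submission() is the flag accessor the i915 core provides):

    /* Only GVT-g contexts set CONTEXT_FORCE_SINGLE_SUBMISSION, so the
     * flag reliably separates gvt requests from i915's own requests. */
    static inline bool is_gvt_request(struct drm_i915_gem_request *req)
    {
            return i915_gem_context_force_single_submission(req->ctx);
    }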
-
- 21 Mar 2017, 1 commit
-
-
Submitted by Changbin Du
GVTg introduced the context status notifier to schedule GVTg workloads. At that time the notifier was bound to the GVTg context only, so GVTg was not aware of host workloads. Now we are going to improve GVTg's guest workload scheduler policy and add GuC emulation support for new Gen graphics. Both features require notification for all contexts running on hardware (but will not alter host workloads), so make the following changes:
1. Move the context status notifier head from i915_gem_context to intel_engine_cs, i.e. one notifier head per engine instead of per context. The execlist driver still calls the notifier for each context sched-in/out event on the current engine.
2. On the GVTg side, bind a notifier_block for each physical engine at GVTg initialization. GVTg can then hear all context status events. In this patch GVTg does nothing for host context events, but a handler will be added later. In any case, the notifier callback is a no-op if there is no active vGPU.
Since intel_gvt_init() is called at an early initialization stage and requires the status notifier head to be ready, initialize it in intel_engine_setup().
v2: remove a redundant newline. (chris)
Fixes: 3c7ba635 ("drm/i915: Introduce execlist context status change notification")
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=100232
Signed-off-by: Changbin Du <changbin.du@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Cc: Zhi Wang <zhi.a.wang@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: http://patchwork.freedesktop.org/patch/msgid/20170313024711.28591-1-changbin.du@intel.com
Acked-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
(cherry picked from commit 3fc03069)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170321144720.17020-1-chris@chris-wilson.co.uk
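A sketch of the per-engine binding that step 2 describes, assuming a shadow_ctx_notifier_block array on the gvt instance (names are close to, but not guaranteed to be, the exact driver fields):

    /* Register one notifier_block per physical engine at init time. */
    for_each_engine(engine, dev_priv, id) {
            gvt->shadow_ctx_notifier_block[id].notifier_call =
                    shadow_context_status_change;
            atomic_notifier_chain_register(&engine->context_status_notifier,
                                           &gvt->shadow_ctx_notifier_block[id]);
    }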
-
- 17 Mar 2017, 6 commits
-
-
Submitted by Chris Wilson
The only time we need to emit a flush inside request emission is after an execbuffer, for which we can use the full __i915_add_request(). All other callers want the simpler i915_add_request() without flushing, so remove the useless helper.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170317114709.8388-1-chris@chris-wilson.co.uk
-
Submitted by Chuanxiao Dong
When handling a guest request, GVT needs to populate/update shadow_ctx with the guest context, which requires the shadow_ctx to stay pinned. The current implementation relies on the i915 request allocation to pin it, but that cannot guarantee i915 won't unpin the shadow_ctx while GVT is updating the guest context from it. GVT should therefore pin/unpin the shadow_ctx itself.
Signed-off-by: Chuanxiao Dong <chuanxiao.dong@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
Submitted by Tina Zhang
The shadow indirect context image should only be scanned when valid. So far only the RCS ring has a shadow indirect context image, so limit the scan logic to the RCS ring.
v2: refine the description in the subject.
v3: fix alignment. (Zhenyu)
Signed-off-by: Tina Zhang <tina.zhang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
Submitted by Chris Wilson
Commit 8f1117ab ("drm/i915/gvt: handle workload lifecycle properly") includes some nonsense to retry an indefinite wait - i915_wait_request() does not return until the request is completed when used from an uninterruptible context.
Fixes: 8f1117ab ("drm/i915/gvt: handle workload lifecycle properly")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Chuanxiao Dong <chuanxiao.dong@intel.com>
Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
Cc: Zhi Wang <zhi.a.wang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
Submitted by Tina Zhang
gvt_err should be used only for the very few critical error messages during host i915 driver initialization. This patch:
1. removes the redundant gvt_err;
2. creates a new gvt_vgpu_err to report errors caused by a vgpu;
3. replaces most gvt_err calls with gvt_vgpu_err;
4. leaves very few gvt_err calls for dumping gvt errors during host gvt initialization.
v2: change the name to gvt_vgpu_err and add the vgpu id to the message (Kevin); add the gpu id to gvt_vgpu_err (Zhi).
v3: remove the gpu id from the gvt_vgpu_err caller. (Zhi)
v4: add a vgpu check to the gvt_vgpu_err macro. (Zhiyuan)
v5: add comments for v3 and v4.
v6: split the big patch into two, with this patch only for checking gvt_vgpu_err. (Zhenyu)
v7: rebase to staging branch.
v8: rebase to fix branch.
Signed-off-by: Tina Zhang <tina.zhang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
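Given the v4 note that the macro checks the vgpu before printing, a sketch of what it can look like; this is an illustration, not the verbatim driver macro:

    #define gvt_vgpu_err(fmt, args...)                                  \
    do {                                                                \
            /* v4: guard against a bad vgpu pointer in the caller */   \
            if (IS_ERR_OR_NULL(vgpu))                                   \
                    gvt_err(fmt, ##args);                               \
            else                                                        \
                    gvt_err("vgpu %d: " fmt, vgpu->id, ##args);         \
    } while (0)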
-
Submitted by Changbin Du
GVTg introduced the context status notifier to schedule GVTg workloads. At that time the notifier was bound to the GVTg context only, so GVTg was not aware of host workloads. Now we are going to improve GVTg's guest workload scheduler policy and add GuC emulation support for new Gen graphics. Both features require notification for all contexts running on hardware (but will not alter host workloads), so make the following changes:
1. Move the context status notifier head from i915_gem_context to intel_engine_cs, i.e. one notifier head per engine instead of per context. The execlist driver still calls the notifier for each context sched-in/out event on the current engine.
2. On the GVTg side, bind a notifier_block for each physical engine at GVTg initialization. GVTg can then hear all context status events. In this patch GVTg does nothing for host context events, but a handler will be added later. In any case, the notifier callback is a no-op if there is no active vGPU.
Since intel_gvt_init() is called at an early initialization stage and requires the status notifier head to be ready, initialize it in intel_engine_setup().
v2: remove a redundant newline. (chris)
Fixes: 3c7ba635 ("drm/i915: Introduce execlist context status change notification")
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=100232
Signed-off-by: Changbin Du <changbin.du@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Cc: Zhi Wang <zhi.a.wang@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: http://patchwork.freedesktop.org/patch/msgid/20170313024711.28591-1-changbin.du@intel.com
Acked-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
- 06 Mar 2017, 1 commit
-
-
Submitted by Chuanxiao Dong
i915 has a request replay mechanism which ensures a request can be replayed after a GPU reset. With this mechanism, gvt should wait until the GVT request seqno has passed before completing the current workload, so that a context switch interrupt arrives before gvt frees the workload. This way the workload lifecycle matches the i915 request lifecycle: the workload can only be freed after the request is completed.
v2: use gvt_dbg_sched instead of gvt_err to print when waiting again.
Signed-off-by: Chuanxiao Dong <chuanxiao.dong@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
- 23 Feb 2017, 1 commit
-
-
Submitted by Chuanxiao Dong
Due to request replay, a context switch interrupt may arrive after gvt has freed the workload, causing a NULL pointer dereference and kernel panic. This patch adds a simple check to avoid this in the short term. In the long term, the gvt workload lifecycle doesn't match the i915 request lifecycle, and a proper way to manage this needs to be found.
v4: simplify the NULL pointer check.
v5: add unlikely to optimize.
Signed-off-by: Chuanxiao Dong <chuanxiao.dong@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
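The short-term guard boils down to an early return in the context status callback; a sketch (NOTIFY_OK as the return value is an assumption based on this being a notifier callback):

    /* The interrupt may fire after the workload was freed; bail out. */
    if (unlikely(!workload))
            return NOTIFY_OK;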
-
- 14 Feb 2017, 1 commit
-
-
Submitted by Zhenyu Wang
We need to be careful to update only the addressing mode in the gvt shadow context descriptor while keeping the other valid configuration bits. This fixes a GPU hang caused by an invalid descriptor submitted for a gvt workload.
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
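A sketch of the mask-then-set update, assuming the shadow context exposes a desc_template field and using i915's GEN8_CTX_ADDRESSING_MODE_SHIFT:

    /* Replace only the 2-bit addressing-mode field; preserve the rest
     * of the descriptor configuration untouched. */
    shadow_ctx->desc_template &= ~(0x3 << GEN8_CTX_ADDRESSING_MODE_SHIFT);
    shadow_ctx->desc_template |= workload->ctx_desc.addressing_mode <<
                                 GEN8_CTX_ADDRESSING_MODE_SHIFT;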
-
- 09 Feb 2017, 1 commit
-
-
Submitted by Changbin Du
Remove a redundant newline in the log message below:
'will complete workload %p\n, status: %d\n'
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
- 09 Jan 2017, 2 commits
-
-
Submitted by Changbin Du
vgpu->running_workload_num is used to determine whether a vgpu has any workload running, so we must make sure the workload is really done before decrementing running_workload_num. complete_current_workload() is not the right place to do it, since that function is still processing the workload. This patch moves the decrement to afterwards.
v2: move the decrement before wake_up(&scheduler->workload_complete_wq). (Min He)
Signed-off-by: Changbin Du <changbin.du@intel.com>
Reviewed-by: Min He <min.he@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
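The resulting ordering at the tail of the completion path, sketched (the surrounding code is illustrative; field names follow the commit message):

    /* The workload is fully processed at this point, so it is now
     * safe to drop the count.  v2: decrement before waking waiters
     * so they observe the updated count. */
    atomic_dec(&vgpu->running_workload_num);
    wake_up(&scheduler->workload_complete_wq);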
-
Submitted by Changbin Du
In workload_thread() we invoke complete_current_workload() to clean up the just-processed workload, and the workload is freed there, so we cannot access workload->req after that. This patch moves complete_current_workload() to afterwards.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
- 19 Dec 2016, 1 commit
-
-
Submitted by Chris Wilson
As the shadow gvt context is not user accessible and does not have an associated vm, we can mark it as closed during its construction. This avoids leaking internal knowledge of i915_gem_context into gvt/.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20161218153724.8439-5-chris@chris-wilson.co.uk
-
- 25 Nov 2016, 1 commit
-
-
Submitted by Zhenyu Wang
Be careful to release struct_mutex when request allocation fails, and handle the return status consistently with the normal exit path. Also ensure the correct workload request is checked in the completion path.
v2: add Fixes note.
Fixes: 90d27a1b ("drm/i915/gvt: fix deadlock in workload_thread")
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Cc: Pei Zhang <pei.zhang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
- 14 Nov 2016, 1 commit
-
-
Submitted by Pei Zhang
This is a classic ABBA deadlock between two mutex objects, gvt.lock (a) and drm.struct_mutex (b). The deadlock happens between these threads:
1. intel_gvt_create/destroy_vgpu: P(a)->P(b)
2. workload_thread: P(b)->P(a)
The fix is to align the lock acquisition order in both threads; this patch adjusts the order in the workload_thread function, which fixes the lockup seen in the guest-reboot stress test.
v2: adjust the order in workload_thread based on Zhenyu's suggestion; adjust the order in the create/destroy_vgpu functions.
v3: fix to still require struct_mutex for dispatch_workload().
Signed-off-by: Pei Zhang <pei.zhang@intel.com>
[zhenyuw: fix unused variables warnings.]
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
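In lock-order terms, the bug and the fix look like this; the sketch is schematic, not the literal driver code:

    /* Before: the two threads acquire the same pair of locks in
     * opposite order, so each can block holding the lock the other
     * needs:
     *
     *   create/destroy_vgpu:  lock(gvt.lock);      lock(struct_mutex);
     *   workload_thread:      lock(struct_mutex);  lock(gvt.lock);
     *
     * After: both paths use one agreed order, e.g. */
    mutex_lock(&gvt->lock);
    mutex_lock(&dev_priv->drm.struct_mutex);
    /* ... critical section ... */
    mutex_unlock(&dev_priv->drm.struct_mutex);
    mutex_unlock(&gvt->lock);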
-
- 10 Nov 2016, 1 commit
-
-
Submitted by Xiaoguang Chen
kmap_atomic doesn't allow sleeping until unmapped, but sleeping is necessary while reading/writing guest memory, so use kmap instead.
Signed-off-by: Bing Niu <bing.niu@intel.com>
Signed-off-by: Xiaoguang Chen <xiaoguang.chen@intel.com>
Signed-off-by: Jike Song <jike.song@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
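The API swap in miniature; the page copy around it is an illustrative example, not driver code:

    /* kmap() tolerates sleeping while the page is mapped;
     * kmap_atomic() forbids sleeping until kunmap_atomic(). */
    void *va = kmap(page);            /* was: kmap_atomic(page) */
    memcpy(dst, va + offset, len);    /* may sleep on guest memory */
    kunmap(page);                     /* was: kunmap_atomic(va) */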
-
- 07 Nov 2016, 1 commit
-
-
Submitted by Zhenyu Wang
Since commit e95433c7, the workload status is set only on the error path, but we need to set it properly on the normal path too; otherwise we fail to complete the workload, which can lead to a guest VM vGPU reset.
v2: use braces and add Fixes tag.
Fixes: e95433c7 ("drm/i915: Rearrange i915_wait_request() accounting with callers")
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
- 29 Oct 2016, 1 commit
-
-
Submitted by Chris Wilson
Our low-level wait routine has evolved from our generic wait interface, which handled unlocked, RPS-boosting waits with time tracking. If we push our GEM fence tracking to use reservation_objects (required for handling multiple timelines), we lose the ability to pass the required information down to i915_wait_request(). However, if we push the extra functionality from i915_wait_request() to the individual callsites (i915_gem_object_wait_rendering and i915_gem_wait_ioctl) that make use of those extras, we can both simplify our low-level wait and prepare for extending the GEM interface for use of reservation_objects.
v2: rewrite the i915_wait_request() kerneldocs.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.william.auld@gmail.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20161028125858.23563-4-chris@chris-wilson.co.uk
-
- 27 Oct 2016, 1 commit
-
-
Submitted by Du, Changbin
We cannot use the blocking mutex_lock inside a wait loop: here we invoke pick_next_workload(), which needs to acquire a mutex, inside our "condition" expression, and then enter another going-to-sleep sequence that changes the task state. This is dangerous nested sleeping, so rewrite the wait sequence to avoid it.
v2: fix the do...while loop exit condition. (zhenyu)
v3: rebase to gvt-staging branch.
Signed-off-by: Du, Changbin <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
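One standard way to structure such a loop without nested sleeping is the wait_woken() pattern; a sketch of what the rewritten wait could look like (the wait queue and helper names are taken from the commit's context and may differ from the final code):

    DEFINE_WAIT_FUNC(wait, woken_wake_function);

    add_wait_queue(&scheduler->waitq[ring_id], &wait);
    do {
            /* the condition check runs in TASK_RUNNING, so it may
             * safely take a mutex internally */
            workload = pick_next_workload(gvt, ring_id);
            if (workload)
                    break;
            wait_woken(&wait, TASK_INTERRUPTIBLE, MAX_SCHEDULE_TIMEOUT);
    } while (!kthread_should_stop());
    remove_wait_queue(&scheduler->waitq[ring_id], &wait);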
-
- 20 Oct 2016, 4 commits
-
-
Submitted by Du, Changbin
Mark all local functions and variables as static.
Signed-off-by: Du, Changbin <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
Submitted by Zhenyu Wang
Switch to the new for_each_engine() helper to properly access enabled intel_engine_cs instances, as the i915 core now manages them dynamically. GVT-g init time still depends on the ring mask to determine the engine list, as it runs earlier.
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
Submitted by Chris Wilson
For whatever reason, the gvt scheduler runs synchronously. At the very least, let's run synchronously without holding struct_mutex.
v2: cut'n'paste mutex_lock instead of unlock; replace the long hold of struct_mutex with a mutex to serialise the worker threads.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
Submitted by Chris Wilson
The kthread will not be interrupted; don't even bother checking.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-