- 04 Dec 2017, 3 commits
-
-
Committed by Zhenyu Wang
We shouldn't mark a vGPU context inactive if it is preempted, since it will still be re-scheduled later; keep it in the active state. Fixes: d6c05113 ("drm/i915/execlists: Distinguish the incomplete context notifies") Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
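A rough sketch of the behaviour this fix restores in the context status notifier; this is a hedged illustration, and the exact identifiers (such as the shadow_ctx_active field) may differ in the driver:

    case CONTEXT_STATUS_PREEMPTED:
        /* preempted: the workload will be re-scheduled later,
         * so leave the vGPU shadow context marked active */
        break;
    case CONTEXT_STATUS_OUT:
        /* really scheduled out: now it is safe to mark it inactive */
        atomic_set(&workload->shadow_ctx_active, 0);
        break;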
-
Committed by Changbin Du
The current scheduling policy relies on a 1 ms timer to execute workloads, which can introduce up to 1 ms of unnecessary latency. This is especially bad for small media workloads. I don't think we need this timer for QoS, but the change is not as simple as removing the code, so I added a new API, intel_gvt_kick_schedule(), for a future change. Signed-off-by: Changbin Du <changbin.du@intel.com> Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
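A minimal sketch of how a caller might use the new kick API when queuing a workload instead of waiting for the next timer tick; only intel_gvt_kick_schedule() is named by the commit, the surrounding helper is a placeholder:

    /* placeholder helper; only intel_gvt_kick_schedule() comes from the commit */
    static void queue_and_kick(struct intel_vgpu_workload *workload)
    {
        queue_workload(workload);                    /* placeholder */
        /* poke the scheduler now rather than waiting up to 1 ms */
        intel_gvt_kick_schedule(workload->vgpu->gvt);
    }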
-
Committed by Changbin Du
Convert the macro to a function, which should always be preferred. Signed-off-by: Changbin Du <changbin.du@intel.com> Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
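As a generic illustration (not the actual GVT-g macro), converting an expression macro into a static inline function adds type checking and guarantees the argument is evaluated exactly once:

    /* before: expression macro, no type checking */
    #define CTX_REG_OFFSET(idx)     ((idx) * 8 + 0x100)

    /* after: equivalent static inline function */
    static inline unsigned int ctx_reg_offset(unsigned int idx)
    {
        return idx * 8 + 0x100;
    }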
-
- 21 Nov 2017, 1 commit
-
-
Committed by Chris Wilson
drivers/gpu/drm/i915/gvt/execlist.c:531:6: warning: symbol 'clean_execlist' was not declared. Should it be static?
drivers/gpu/drm/i915/gvt/execlist.c:545:6: warning: symbol 'reset_execlist' was not declared. Should it be static?
drivers/gpu/drm/i915/gvt/execlist.c:556:5: warning: symbol 'init_execlist' was not declared. Should it be static?
drivers/gpu/drm/i915/gvt/scheduler.c:248:6: warning: symbol 'release_shadow_wa_ctx' was not declared. Should it be static?
References: 06bb372f ("drm/i915/gvt: Introduce intel_vgpu_reset_submission") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Zhi Wang <zhi.a.wang@intel.com> Cc: Zhenyu Wang <zhenyuw@linux.intel.com> Cc: intel-gvt-dev@lists.freedesktop.org Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
- 16 Nov 2017, 17 commits
-
-
Committed by fred gao
Previously, performance was improved by auditing and shadowing the workload ahead of vGPU scheduling. However, if more requests are allocated in submit_context before the previous request is added, the timeline will hold a later seqno. This patch moves the request allocation into dispatch_workload(), the same place where the request is added. It fixes the kernel BUG triggered by the (timeline->seqno != request->fence.seqno) check in add_request. Fixes: 89ea20b9 ("drm/i915/gvt: Factor out scan and shadow from workload dispatch") Signed-off-by: Chuanxiao Dong <chuanxiao.dong@intel.com> Signed-off-by: fred gao <fred.gao@intel.com> Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
Committed by Xiong Zhang
mmio_read_from_hw() lets a vGPU read hardware registers. That is fine while the vGPU's workload is running on the hardware; otherwise the vGPU would read another vGPU's register value, which is unsafe. This patch limits such hardware access to the active vGPU. If a vGPU isn't running on the hardware, its register reads return the last active value saved at schedule_out. v2: the ring timestamp keeps running even when the ring is idle, so read it from the hardware directly. (Zhenyu) Signed-off-by: Xiong Zhang <xiong.y.zhang@intel.com> Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
Committed by Zhi Wang
As there is already an I915_GTT_PAGE_SIZE macro in i915, let GVT-g use it as well. This patch also renames some GTT macros with an additional prefix. Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
-
Committed by Zhi Wang
1) Use the standard i915 GEM object sequence to access the shadow batch buffer. 2) Manage the i915 vma life cycle to resolve one FIXME. v2: - Refine code structure. - Refine the usage of GEM APIs. - Add the missing lock/unlock in release_shadow_batch_buffer. Tested on my SKL NUC. Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
-
Committed by Zhi Wang
Move clean_workloads() into scheduler.c since it is not specific to execlist emulation. v2: - Remove clean_workloads() from intel_vgpu_select_submission_ops(). (Zhenyu) Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
-
Committed by Zhi Wang
Introduce a generic API to reset the vGPU virtual submission interface. Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
-
Committed by Zhi Wang
Introduce vGPU submission ops so that the submission mode of a vGPU can easily be switched between different OSes. Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
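The submission ops are essentially a per-mode table of function pointers that each submission interface implements; a hedged sketch, where the exact member list in the driver may differ:

    /* hedged sketch of a per-mode submission interface */
    struct intel_vgpu_submission_ops {
        const char *name;
        int  (*init)(struct intel_vgpu *vgpu);
        void (*clean)(struct intel_vgpu *vgpu);
        void (*reset)(struct intel_vgpu *vgpu, unsigned long engine_mask);
    };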
-
Committed by Zhi Wang
Move the common vGPU workload creation functions into scheduler.c since they are not specific to execlist emulation. Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
-
Committed by Zhi Wang
Move common workload preparation into prepare_workload() in scheduler.c, as it is not specific to execlist emulation. Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
-
Committed by Zhi Wang
Factor out prepare_workload() in preparation for the following refactor. Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
-
Committed by Zhi Wang
Factor out the vGPU workload creation/destruction functions since they are not specific to execlist emulation. Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
-
Committed by fred gao
When a scan error occurs in dispatch_workload, check the health state and free all queued workloads before entering failsafe mode. Signed-off-by: fred gao <fred.gao@intel.com> Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
Committed by fred gao
Generally, there are three types of errors during command scanning: a) an unknown command, reported as EBADRQC; b) a command that accesses an invalid address, reported as EFAULT; c) an unexpected force-nonpriv command, reported as EPERM. The health state can later be judged from the returned error.
v2: - remove some internal i915 error ratings. (Zhenyu)
v3: - judge the health state through the internally defined return error. (Zhenyu) - a force-nonpriv command error can be ignored. (Kevin)
v4: - reuse standard errno values instead of defining new ones, e.g. EBADRQC for an unknown command, EFAULT for an invalid address, EPERM for nonpriv. (Zhenyu)
v5: - remove some code irrelevant to this patch. - fix typo of vgpu_is_vm_unhealthy. (Zhenyu)
v6: - move the health check and failsafe code into another patch. (Zhenyu)
v7: - polish the title and commit message. (Zhenyu)
Signed-off-by: fred gao <fred.gao@intel.com> Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
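With the errors mapped to standard errno values, the health check mentioned in v5 can be a simple predicate; a hedged sketch of the idea:

    /* hedged sketch: unknown commands and invalid addresses are fatal,
     * while a force-nonpriv error (-EPERM) is ignored */
    static inline bool vgpu_is_vm_unhealthy(int ret)
    {
        return ret == -EBADRQC || ret == -EFAULT;
    }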
-
Committed by Zhi Wang
Move tlb_handle_pending into intel_vgpu_submission since it is part of the vGPU submission logic. Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
-
Committed by Zhi Wang
Introduce intel_vgpu_submission to hold all submission-related members that previously lived directly in struct intel_vgpu. Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
-
Committed by Zhi Wang
Move vGPU workload cache initialization/de-initialization into intel_vgpu_{setup, clean}_submission() since it is not specific to execlist emulation. Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
-
Committed by Zhi Wang
To move the workload-related functions into scheduler.c, the expected approach is to collect all init/clean functions related to vGPU workload submission into fewer functions. Rename intel_vgpu_{init, clean}_gvt_context() accordingly for this future use. Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
-
- 11 Nov 2017, 1 commit
-
-
Committed by Chris Wilson
Take a copy of the HW state after a reset upon module loading by executing a context switch from a blank context to the kernel context, thus saving the default hw state over the blank context image. We can then use the default hw state to initialise any future context, ensuring that each starts with the default view of hw state. v2: Unmap our default state from the GTT after stealing it from the context. This should stop us from accidentally overwriting it via the GTT (and frees up some precious GTT space). Testcase: igt/gem_ctx_isolation Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20171110142634.10551-7-chris@chris-wilson.co.uk
-
- 05 Oct 2017, 1 commit
-
-
Committed by Chris Wilson
Let the listener know that the context we just scheduled out was not complete, and will be scheduled back in at a later point. v2: Handle CONTEXT_STATUS_PREEMPTED in gvt by aliasing it to CONTEXT_STATUS_OUT for the moment; gvt can expand upon the difference later. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: "Zhenyu Wang" <zhenyuw@linux.intel.com> Cc: "Wang, Zhi A" <zhi.a.wang@intel.com> Cc: Michał Winiarski <michal.winiarski@intel.com> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20171003203453.15692-3-chris@chris-wilson.co.uk
-
- 13 Sep 2017, 1 commit
-
-
Committed by Michel Thierry
Not only does the context image consist of two parts (the PPHWSP and the logical context state), but we also allocate a header at the start for sharing data with the GuC. Thus every lrc looks like this:

| [guc]          | [hwsp] [logical state] |
|<- our header ->|<-  context image     ->|

So far we have oversimplified wherever we use these parts of the context, just because the GuC header happens to be in page 0 and the (PP)HWSP is in page 1. But this has led to the same define being used for more than one meaning (as a page index in the lrc and as 1 page). This patch adds defines for the GuC shared page, the PPHWSP page and the start of the logical state. It also updates the places where the old define was being used. Since we are not changing the size (or format) of the context, there are no functional changes. v2: Use the PPHWSP index for hws again. Suggested-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Michel Thierry <michel.thierry@intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Michal Wajdeczko <michal.wajdeczko@intel.com> Cc: Oscar Mateo <oscar.mateo@intel.com> Cc: intel-gvt-dev@lists.freedesktop.org Link: http://patchwork.freedesktop.org/patch/msgid/20170712193032.27080-1-michel.thierry@intel.com Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20170913085605.18299-1-chris@chris-wilson.co.uk
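The page-index defines described above are roughly of this shape; this is a hedged reconstruction, and the exact names and values live in intel_lrc.h:

    /* hedged reconstruction of the lrc page-index defines */
    #define LRC_GUCSHR_PN   (0)                     /* GuC shared data page */
    #define LRC_PPHWSP_PN   (LRC_GUCSHR_PN + 1)     /* per-process HWSP     */
    #define LRC_STATE_PN    (LRC_PPHWSP_PN + 1)     /* logical state start  */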
-
- 08 Sep 2017, 3 commits
-
-
Committed by fred gao
When an error occurs in dispatch_workload, do the proper cleanup and roll back to the original state before the workload is abandoned.
v2: - split the several mixed error paths for easier review. (Zhenyu)
v3: - the original PTR_ERR(cs) is fine; code cleanup. (Zhenyu)
v4: - reuse the existing i915_add_request for error handling. (Zhenyu)
v5: - remove the duplicate error-handling call to release_shadow_wa_ctx and move engine->context_unpin up. (Zhenyu)
v6: - keep the old label "out". (Zhenyu)
Signed-off-by: fred gao <fred.gao@intel.com> Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
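The cleanup follows the usual kernel unwind pattern; a generic sketch with placeholder helpers and labels, not the driver's actual code:

    /* generic illustration of error unwinding with goto labels */
    static int dispatch_workload_sketch(struct intel_vgpu_workload *w)
    {
        int ret;

        ret = shadow_workload(w);          /* placeholder */
        if (ret)
            goto err_unpin;

        ret = allocate_request(w);         /* placeholder */
        if (ret)
            goto err_shadow;
        return 0;

    err_shadow:
        release_shadow(w);                 /* undo the shadowing */
    err_unpin:
        unpin_context(w);                  /* back to the original state */
        return ret;
    }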
-
Committed by fred gao
When an error occurs after shadow_indirect_ctx, do the proper cleanup and roll the shadowed indirect context back to its original state before the workload is abandoned.
v2: - split the several mixed error paths for easier review. (Zhenyu)
v3: - no return check for cleanup functions. (Changbin)
v4: - expose and reuse the existing release_shadow_wa_ctx. (Zhenyu)
v5: - move the release function to the scheduler.c file. (Zhenyu)
v6: - move the error-handling code of intel_gvt_scan_and_shadow_workload here. (Zhenyu)
Signed-off-by: fred gao <fred.gao@intel.com> Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
Committed by fred gao
Currently the i915 request structure and shadow ring buffer are allocated before the command scan, so the code has to restore the previous state once any error happens later in the long dispatch_workload path. This patch introduces a reserved ring buffer created at vGPU initialization. The workload is copied to this reserved buffer and scanned first; the i915 request and shadow ring buffer are only allocated after the scan succeeds. To balance memory usage against buffer allocation time, when a bigger ring buffer is needed it is reallocated and kept until an even bigger one is required.
v2: - use kmalloc for the smaller ring buffer, realloc if required. (Zhenyu)
v3: - remove the dynamically allocated ring buffer. (Zhenyu)
v4: - code style polish. - kfree the previously allocated buffer once kmalloc fails. (Zhenyu)
Signed-off-by: fred gao <fred.gao@intel.com> Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
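A hedged sketch of the grow-only reserved buffer; the field names are assumptions used only for illustration:

    /* hedged sketch; field names are illustrative */
    static int reserve_ring_buffer(struct intel_vgpu *vgpu, int ring_id,
                                   unsigned long size)
    {
        void *p;

        if (size <= vgpu->reserve_ring_buffer_size[ring_id])
            return 0;        /* current buffer is already big enough */

        p = kmalloc(size, GFP_KERNEL);
        if (!p)
            return -ENOMEM;

        kfree(vgpu->reserve_ring_buffer_va[ring_id]);   /* drop smaller one */
        vgpu->reserve_ring_buffer_va[ring_id] = p;
        vgpu->reserve_ring_buffer_size[ring_id] = size;
        return 0;
    }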
-
- 10 Aug 2017, 4 commits
-
-
Committed by Kechen Lu
The current context logic only updates the descriptor of a context when it is being pinned to graphics memory space. But this cannot satisfy the requirement of the shadow context: the addressing mode of the pinned shadow context descriptor may need to change according to the guest addressing mode, and it won't be updated, since the already-pinned shadow context has no chance to update its descriptor. This leads to GPU hangs, as the shadow context is used with a wrong descriptor. This patch fixes the issue by letting the pinned shadow context descriptor update its addressing mode on demand. It fixes a GPU HANG that happens after changing the grub parameter i915.enable_ppgtt from 0x01 to 0x03 or vice versa and then rebooting the guest. Signed-off-by: Tina Zhang <tina.zhang@intel.com> Signed-off-by: Kechen Lu <kechen.lu@intel.com> Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
Committed by Ping Gao
The workload scan-and-shadow function must be called with drm.struct_mutex held. To avoid misuse of this function, add a lockdep assertion to it. Signed-off-by: Ping Gao <ping.a.gao@intel.com> Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
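The assertion would sit at the top of the scan-and-shadow function, roughly like this hedged sketch; lockdep_assert_held() is a no-op when lockdep is disabled:

    /* hedged sketch of the locking assertion */
    int intel_gvt_scan_and_shadow_workload(struct intel_vgpu_workload *workload)
    {
        struct drm_i915_private *dev_priv = workload->vgpu->gvt->dev_priv;

        lockdep_assert_held(&dev_priv->drm.struct_mutex);
        /* ... scan and shadow the workload ... */
        return 0;
    }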
-
Committed by Ping Gao
Audit and shadow the workload ahead of vGPU scheduling; this eliminates GPU idle time and improves performance for multi-VM use. The performance of Heaven running simultaneously in 3 VMs improves by 20% with this patch. v2: Remove the condition current->vgpu == vgpu when shadowing during the ELSP write. Signed-off-by: Ping Gao <ping.a.gao@intel.com> Reviewed-by: Zhi Wang <zhi.a.wang@intel.com> Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
Committed by Ping Gao
To perform the workload scan and shadow in the ELSP writing stage for performance reasons, the scan-and-shadow work should be factored out of dispatch_workload().
v2: Put context pin before i915_add_request; refine the comments; rename some APIs.
v3: workload->status should be set only when an error happens.
v4: i915_add_request is mandatory after i915_gem_request_alloc.
Signed-off-by: Ping Gao <ping.a.gao@intel.com> Reviewed-by: Zhi Wang <zhi.a.wang@intel.com> Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
- 02 Aug 2017, 1 commit
-
-
Committed by Chuanxiao Dong
Use resetting_eng to identify which engine is being reset so that workloads on the other engines are not impacted. v2: - use ENGINE_MASK(ring_id) instead of (1 << ring_id). (Zhenyu) Signed-off-by: Chuanxiao Dong <chuanxiao.dong@intel.com> Cc: Zhenyu Wang <zhenyuw@linux.intel.com> Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
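The per-engine decision is a simple mask test; a hedged sketch of the check described above:

    /* hedged sketch: only touch workloads for engines under reset */
    if (vgpu->resetting_eng & ENGINE_MASK(ring_id)) {
        /* this engine is being reset: clean up its workloads */
    } else {
        /* other engines keep their workloads untouched */
    }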
-
- 11 Jul 2017, 2 commits
-
-
Committed by Chuanxiao Dong
req->fence.error is set if the request caused a GPU hang, so we can propagate this value to workload->status to indicate whether this GVT request caused any problem. If it caused a GPU hang, we shouldn't trigger any context switch back to the guest. v2: - only take -EIO from fence->error. (Zhenyu) Fixes: 8f1117ab ("drm/i915/gvt: handle workload lifecycle properly") Signed-off-by: Chuanxiao Dong <chuanxiao.dong@intel.com> Cc: Zhenyu Wang <zhenyuw@linux.intel.com> Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
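A hedged sketch of the v2 behaviour when completing the workload; only the GPU-hang error is propagated:

    /* hedged sketch: only a GPU hang (-EIO) is taken from the fence error;
     * other fence error values are ignored for the workload status */
    if (workload->req->fence.error == -EIO)
        workload->status = -EIO;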
-
Committed by Weinan Li
GVT-g now uses a per-vGPU scheduler for vGPU workloads, and the per-ring work_thread only picks workloads belonging to the current vGPU. With the time-slice based scheduler, it waits for all engines to become idle before doing a vGPU switch, so we can dispatch freely in each per-ring work_thread; different rings running in different vGPUs won't happen. For workloads split between a vGPU and the host, this scheduler_mutex cannot block the host from dispatching workloads to other ring engines anyway. Remove this mutex, since it hurts performance when an application uses more than one ring engine in one vGPU. ring0 running in vGPU1 while ring1 runs in the host: will happen. ring0 running in vGPU1 while ring1 runs in vGPU2: won't happen. Signed-off-by: Weinan Li <weinan.z.li@intel.com> Signed-off-by: Ping Gao <ping.a.gao@intel.com> Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
- 21 Jun 2017, 1 commit
-
-
Committed by Chris Wilson
If we move the actual cleanup of the context to a worker, we can allow the final free to be called from any context and avoid undue latency in the caller. v2: Negotiate handling the delayed context frees by flushing the workqueue before calling i915_gem_context_fini() and performing the final free of the kernel context directly. v3: Flush deferred frees before new context allocations. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20170620110547.15947-2-chris@chris-wilson.co.uk
-
- 08 Jun 2017, 2 commits
-
-
Committed by Ping Gao
The time-based scheduler polls the context busy status every microsecond during a vGPU switch, which leaves the GPU idle for a while when a context is very small and completes before the next microsecond arrives. Triggering scheduling immediately after the context completes eliminates this GPU idle time and improves performance. Create two vGPUs of the same type and run Heaven simultaneously:

Before this patch:
+---------+----------+----------+
|         | vGPU1    | vGPU2    |
+---------+----------+----------+
| Heaven  | 357      | 354      |
+-------------------------------+

After this patch:
+---------+----------+----------+
|         | vGPU1    | vGPU2    |
+---------+----------+----------+
| Heaven  | 397      | 398      |
+-------------------------------+

v2: Protect need_reschedule with the gvt lock. Signed-off-by: Ping Gao <ping.a.gao@intel.com> Signed-off-by: Weinan Li <weinan.z.li@intel.com> Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
Committed by Changbin Du
Commit ab9da627906a ("drm/i915: make context status notifier head be per engine") gives us a chance to inspect every single request, so we can eliminate unnecessary mmio switching for the same vGPU; mmio switching is only needed between different VMs (including the host). This patch introduces a new general API, intel_gvt_switch_mmio(), to replace the old intel_gvt_load/restore_render_mmio(). This function can be further optimized for vGPU-to-vGPU switching. To support per-ring switching, we track the owner who occupies each ring. When another VM or the host requests a ring, we do the mmio context switch; otherwise the ring does not need to be switched. This optimization is very useful if only one guest has plenty of workloads and the host is mostly idle; in the best case no mmio switching happens at all. v2: - fix a missing ring switch issue. (chuanxiao) - support individual ring switch. Signed-off-by: Changbin Du <changbin.du@intel.com> Reviewed-by: Chuanxiao Dong <chuanxiao.dong@intel.com> Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
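A hedged sketch of the owner-tracking idea behind intel_gvt_switch_mmio(); this is simplified, and the real function also saves and restores the actual register state:

    /* hedged sketch: skip the MMIO switch when the ring owner is unchanged */
    void intel_gvt_switch_mmio(struct intel_vgpu *pre,
                               struct intel_vgpu *next, int ring_id)
    {
        if (pre == next)
            return;          /* same VM (or host) keeps the ring */

        /* otherwise: save 'pre' state (host state if pre is NULL),
         * then load the MMIO context for 'next' on this ring */
    }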
-
- 04 May 2017, 1 commit
-
-
Committed by Chris Wilson
Since unifying ringbuffer/execlist submission to use engine->pin_context, we ensure that the intel_ring is available before we start constructing the request. We can therefore move the assignment of request->ring to the central i915_gem_request_alloc() and not require it in every engine->request_alloc() callback. Another small step towards simplification (of the core, at the cost of handling error pointers in less important callers of engine->pin_context). v2: Rearrange a few branches to reduce the impact of PTR_ERR() on gcc's code generation. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Oscar Mateo <oscar.mateo@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Reviewed-by: Oscar Mateo <oscar.mateo@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20170504093308.4137-1-chris@chris-wilson.co.uk
-
- 28 Apr 2017, 1 commit
-
-
Committed by Joonas Lahtinen
Pre-calculate the engine context size based on engine class and device generation and store it in the engine instance.
v2: - Squash and get rid of hw_context_size (Chris)
v3: - Move after MMIO init for probing on Gen7 and 8 (Chris) - Retained rounding (Tvrtko)
v4: - Rebase for deferred legacy context allocation
Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Paulo Zanoni <paulo.r.zanoni@intel.com> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Oscar Mateo <oscar.mateo@intel.com> Cc: Zhenyu Wang <zhenyuw@linux.intel.com> Cc: intel-gvt-dev@lists.freedesktop.org Acked-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
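A hedged sketch of the per-class, per-generation calculation; the size constants and rounding here illustrate the idea rather than copy the driver:

    /* hedged sketch: pick the logical context size by class and gen,
     * then store it on the engine instance */
    switch (engine->class) {
    case RENDER_CLASS:
        cxt_size = INTEL_GEN(dev_priv) >= 9 ? GEN9_LR_CONTEXT_RENDER_SIZE :
                                              GEN8_LR_CONTEXT_RENDER_SIZE;
        break;
    default:
        cxt_size = GEN8_LR_CONTEXT_OTHER_SIZE;
        break;
    }
    engine->context_size = round_up(cxt_size, PAGE_SIZE);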
-
- 13 Apr 2017, 1 commit
-
-
Committed by Zhenyu Wang
These debug messages can appear on every timer call for the scheduler; they are too noisy, flood the log, and aren't meaningful, so remove them. Cc: Ping Gao <ping.a.gao@intel.com> Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-