- 12 November 2018, 1 commit
-
-
By Xinyun Liu
Some engines are not available on all Gens. For example, Gen11 introduced VCS3/VCS4/VECS2, and VCS2 is not supported on some Gen9 machines, so checks are needed before accessing them.
Signed-off-by: Xinyun Liu <xinyun.liu@intel.com>
Signed-off-by: Yakui Zhao <Yakui.Zhao@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
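A minimal sketch of the kind of guard this implies when walking a per-engine MMIO list; the list and field names (engine_mmio_list.mmio, ring_id) are illustrative, not quoted from the patch:

```c
/* Skip MMIO list entries for engines that this GPU does not expose. */
for (mmio = gvt->engine_mmio_list.mmio;
     i915_mmio_reg_valid(mmio->reg); mmio++) {
	if (!HAS_ENGINE(dev_priv, mmio->ring_id))
		continue;
	/* safe to touch mmio->reg on this engine now */
}
```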
-
- 31 October 2018, 1 commit
-
-
By Xinyun Liu
CSFE_CHICKEN1 (0x20d4) needs to be accessed with a mask. This was caught by the AcrnGT conformance check test:
[drm:intel_gvt_vgpu_conformance_check] *ERROR* gvt: vgpu1 unconformance mmio 0x20d4:0x40004,0x4
Signed-off-by: Xinyun Liu <xinyun.liu@intel.com>
Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
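For context, masked registers on these GPUs use the upper 16 bits of the written value as a per-bit write-enable mask. A hedged illustration of writing one bit of such a register (the bit chosen here is arbitrary, not the one from the conformance report):

```c
/* Masked register write: bits 31:16 select which of bits 15:0 take effect.
 * _MASKED_BIT_ENABLE(BIT(2)) expands to (BIT(2) << 16) | BIT(2). */
I915_WRITE(_MMIO(0x20d4), _MASKED_BIT_ENABLE(BIT(2)));
```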
-
- 30 August 2018, 1 commit
-
-
By Hang Yuan
pm_runtime_get_sync() in intel_runtime_pm_get() might sleep if the i915 device is not active. When stopping the vgpu schedule, the device may be inactive, so runtime_pm_get must be moved outside the spin_lock/unlock section.
Fixes: b24881e0 ("drm/i915/gvt: Add runtime_pm_get/put into gvt_switch_mmio")
Cc: <stable@vger.kernel.org>
Signed-off-by: Hang Yuan <hang.yuan@linux.intel.com>
Signed-off-by: Xiong Zhang <xiong.y.zhang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
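A minimal sketch of the ordering this fix implies; the lock name used here is illustrative, not necessarily the one in the driver:

```c
/* pm_runtime_get_sync() may sleep, so take the RPM reference before
 * entering the atomic (spinlocked) section, and drop it afterwards. */
intel_runtime_pm_get(dev_priv);

spin_lock_bh(&scheduler->mmio_context_lock);
/* ... stop the vGPU schedule / switch the MMIO context ... */
spin_unlock_bh(&scheduler->mmio_context_lock);

intel_runtime_pm_put(dev_priv);
```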
-
- 07 August 2018, 1 commit
-
-
By Zhenyu Wang
To consolidate all GVT-private MMIO definitions in one place, this moves the ones not yet used by i915 into reg.h.
Reviewed-by: Hang Yuan <hang.yuan@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
- 05 July 2018, 1 commit
-
-
By Hang Yuan
Commit cd7e61b9 ("init mmio by lri command in vgpu inhibit context") initializes the registers saved/restored in a context with their vreg values through lri commands in the ring buffer. It relies on the vreg being updated on every guest access. A case was found where a Linux guest uses lri commands in the inhibit context itself to update the register, so this patch adds a vreg update for that case.
v2: move mmio_attribute functions to gvt.h (Zhenyu)
v3: use mask_mmio_write in vreg update
v4: refine code and add more comments (Zhenyu)
Fixes: cd7e61b9 ("drm/i915/gvt: init mmio by lri command in vgpu inhibit context")
Signed-off-by: Hang Yuan <hang.yuan@linux.intel.com>
Signed-off-by: Weinan Li <weinan.z.li@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
- 13 June 2018, 1 commit
-
-
By Colin Xu
Handle pending TLB flush, mocs/mmio switch and context the same way as on KBL.
Signed-off-by: Colin Xu <colin.xu@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
- 18 May 2018, 1 commit
-
-
By Chris Wilson
To ease the frequent and ugly pointer dance of &request->gem_context->engine[request->engine->id] during request submission, store that pointer as request->hw_context. One major advantage that we will exploit later is that this decouples the logical context state from the engine itself.
v2: Set mock_context->ops so we don't crash and burn in selftests. Cleanups from Tvrtko.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Acked-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180517212633.24934-3-chris@chris-wilson.co.uk
-
- 30 April 2018, 1 commit
-
-
By Chris Wilson
Make life easier in upcoming patches by moving the context_pin and context_unpin vfuncs into inline helpers.
v2: Fixup mock_engine to mark the context as pinned on use.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180430131503.5375-2-chris@chris-wilson.co.uk
-
- 09 March 2018, 1 commit
-
-
By Xiong Zhang
If a user continuously creates a vgpu, boots the guest, shuts down the guest and destroys the vgpu from remote, the following call trace sometimes appears in dmesg:
[ 6412.954721] RPM wakelock ref not held during HW access
[ 6412.954795] WARNING: CPU: 7 PID: 11941 at linux/drivers/gpu/drm/i915/intel_drv.h:1800 intel_uncore_forcewake_get.part.7+0x96/0xa0 [i915]
[ 6412.954915] Call Trace:
[ 6412.954951] intel_uncore_forcewake_get+0x18/0x20 [i915]
[ 6412.954989] intel_gvt_switch_mmio+0x8e/0x770 [i915]
[ 6412.954996] ? __slab_free+0x14d/0x2c0
[ 6412.955001] ? __slab_free+0x14d/0x2c0
[ 6412.955006] ? __slab_free+0x14d/0x2c0
[ 6412.955041] intel_vgpu_stop_schedule+0x92/0xd0 [i915]
[ 6412.955073] intel_gvt_deactivate_vgpu+0x48/0x60 [i915]
[ 6412.955078] __intel_vgpu_release+0x55/0x260 [kvmgt]
When this happens, gvt_switch_mmio is called at vgpu destroy time, host i915 is idle and doesn't hold the RPM wakelock, and the igd is in powersave mode; but gvt_switch_mmio requires the igd to be powered on to access registers, so intel_runtime_pm_get should be added to make sure the igd is powered on before gvt_switch_mmio.
v2: Move runtime_pm_get/put into gvt_switch_mmio. (Zhenyu)
Signed-off-by: Xiong Zhang <xiong.y.zhang@intel.com>
Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
-
- 06 March 2018, 3 commits
-
-
By Weinan Li
There is an issue related to Coarse Power Gating (CPG) on KBL NUC in GVT-g: the vgpu can't get the correct default context by updating the registers before inhibit-context submission. It always gets back the hardware default value unless the inhibit-context submission happens before the first forcewake put. With this wrong default context, the vgpu runs in an incorrect state and hits unknown issues. The solution is to initialize these mmios by adding lri commands in the ring buffer of the inhibit context; the GPU hardware then has no chance to drop into RC6 while the lri commands are being executed, and the vgpu gets a correct default context for further use.
v3: fix code fault, use 'for' to loop through the mmio render list (Zhenyu)
v4: save the count of engine mmios to be restored for the inhibit context and refine some comments (Kevin)
v5: code rebase
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Weinan Li <weinan.z.li@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
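A hedged sketch of what emitting such lri (MI_LOAD_REGISTER_IMM) commands into the inhibit context's ring buffer could look like; the list name, in_context flag and iteration details are assumptions for illustration:

```c
/* Emit one MI_LOAD_REGISTER_IMM block so the engine itself programs the
 * saved vreg values when the inhibit context is submitted, leaving no
 * window for RC6 to throw them away. */
*cs++ = MI_LOAD_REGISTER_IMM(count);
for (mmio = mmio_list; i915_mmio_reg_valid(mmio->reg); mmio++) {
	if (!mmio->in_context)
		continue;
	*cs++ = i915_mmio_reg_offset(mmio->reg);
	*cs++ = vgpu_vreg_t(vgpu, mmio->reg);
}
*cs++ = MI_NOOP;
```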
-
By Weinan Li
No functional change, just for ease of use.
v4: refine comment (Kevin)
Signed-off-by: Weinan Li <weinan.z.li@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
By Weinan Li
No functional change. This definition will also be used in future patches.
v4: refine patch description (Kevin)
Signed-off-by: Weinan Li <weinan.z.li@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
- 14 February 2018, 1 commit
-
-
By Weinan Li
A guest may set this register on the KBL platform, and it can impact hardware behavior, so add it to the gen9 render list. Otherwise a GPU hang may happen when switching between different vgpus.
v2: separate it from the patch set.
Cc: Zhi Wang <zhi.a.wang@intel.com>
Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Weinan Li <weinan.z.li@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
- 01 February 2018, 2 commits
-
-
By Michel Thierry
The mocs reg array is defined locally, but we then iterate over its elements using I915_NUM_ENGINES. There is no hard connection between I915_NUM_ENGINES and the regs array, and there will be problems if either of them grows. Use the size of the mocs reg array instead to iterate over it safely.
Signed-off-by: Michel Thierry <michel.thierry@intel.com>
Cc: Weinan Li <weinan.z.li@intel.com>
Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
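An illustration of the pattern the fix describes; the array name and MOCS base offsets below are illustrative, not quoted from the patch:

```c
static const i915_reg_t regs[] = {
	[RCS]  = _MMIO(0xc800),	/* per-engine MOCS bases, values illustrative */
	[VCS]  = _MMIO(0xc900),
	[BCS]  = _MMIO(0xcc00),
	[VECS] = _MMIO(0xcb00),
};

/* Iterate over what the array actually contains, not I915_NUM_ENGINES,
 * so adding engines elsewhere cannot walk past the end of regs[]. */
for (ring_id = 0; ring_id < ARRAY_SIZE(regs); ring_id++) {
	if (!HAS_ENGINE(dev_priv, ring_id))
		continue;
	/* ... save/restore MOCS entries starting at regs[ring_id] ... */
}
```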
-
By Xiong Zhang
while (mmio++) advances mmio to the next entry before it is used, so mmio[0] never takes effect in the while loop. This patch changes the while loop to a for loop to fix the issue.
v2: Correct Fixes format. (Zhenyu)
v3: Rebase to latest staging. (Zhenyu)
Fixes: 83164886 ("drm/i915/gvt: Select appropriate mmio list at initialization time")
Signed-off-by: Xiong Zhang <xiong.y.zhang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
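A hedged illustration of this class of bug and its fix, not the exact driver code:

```c
/* Buggy shape: the post-increment in the loop condition advances the
 * cursor before the body runs, so mmio[0] is never processed. */
while (i915_mmio_reg_valid((mmio++)->reg)) {
	/* body only ever sees mmio[1], mmio[2], ... */
}

/* Fixed shape: test the current entry, use it, then advance. */
for (; i915_mmio_reg_valid(mmio->reg); mmio++) {
	/* body sees every entry, including mmio[0] */
}
```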
-
- 22 December 2017, 1 commit
-
-
By Zhenyu Wang
We had a previous hack that tried to accept either an i915_reg_t or an offset value to access vGPU virtual/shadow regs, which broke the purpose of being type safe in context. This one tries to explicitly separate the usage of typed mmio regs from raw offsets. The old vgpu_vreg(offset) helper is now used only for offsets, while the new vgpu_vreg_t(reg) is used for i915_reg_t only. Convert the remaining users to the new helper. Also fixed remaining KASAN warnings caused by the previous hack.
v2: rebase, fixup against recent mmio switch change
Reviewed-by: Zhi Wang <zhi.a.wang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
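A sketch of the shape these two helpers take after the split (simplified; the real macros may differ in detail):

```c
/* Raw-offset access into the vGPU's virtual register file. */
#define vgpu_vreg(vgpu, offset) \
	(*(u32 *)(vgpu->mmio.vreg + (offset)))

/* Type-safe variant: only accepts an i915_reg_t. */
#define vgpu_vreg_t(vgpu, reg) \
	vgpu_vreg(vgpu, i915_mmio_reg_offset(reg))
```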
-
- 18 December 2017, 4 commits
-
-
By Weinan Li
Load the host render mocs registers once for the delta update of the mocs switch. This noticeably reduces the number of mmio reads and brings a performance improvement when switching between multiple VMs.
Signed-off-by: Weinan Li <weinan.z.li@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
By Weinan Li
Saving and restoring the mocs regs of one VM in GVT-g burns too much CPU time. Add LRI command scanning to monitor changes to the mocs registers, save the state in the vreg, and use a delta-update policy to restore them. This clearly reduces the MMIO r/w count and improves context switch performance.
Signed-off-by: Weinan Li <weinan.z.li@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
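A hedged sketch of what a delta-update restore loop could look like; the cached host table, the table size constant and the use of vgpu_vreg_t are assumptions for illustration:

```c
/* Only touch MOCS entries whose desired value differs from what the
 * hardware currently holds, instead of rewriting the whole table. */
for (i = 0; i < GEN9_MOCS_SIZE; i++) {
	u32 new_v = vgpu_vreg_t(vgpu, offset);

	if (new_v != cached_host_mocs[ring_id][i])
		I915_WRITE_FW(offset, new_v);

	offset.reg += 4;	/* MOCS entries are consecutive dwords */
}
```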
-
By Weinan Li
Currently an mmio switch between vGPUs needs to switch to the host first and then to the expected vGPU, which wastes one round of mmio save/restore. MMIO r/w is usually time-consuming, and there are many mocs registers to save/restore during a vGPU switch. Combining switch_to_host and switch_to_vgpu saves one mmio save/restore pass, reducing CPU utilization and improving performance when multiple VMs run heavy workloads.
Signed-off-by: Weinan Li <weinan.z.li@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
By Weinan Li
Refine trace_render_mmio to show the VM ids before and after a vgpu switch, tagging the host id as '0'. This will be used by a future patch refining the mocs switch policy.
Signed-off-by: Weinan Li <weinan.z.li@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
- 08 December 2017, 4 commits
-
-
By Changbin Du
Rename the files to reflect their real role: switching the mmio context of each vGPU engine.
v2: update Makefile.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
By Changbin Du
After the engine mmio is switched, software still needs to write the workload submission registers, so we can remove the MMIO barrier in the MMIO switch.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
By Changbin Du
Select the appropriate mmio list at initialization time, so we don't need to do duplicated work everywhere the mmio list is required.
v2: Add a termination mark to the mmio list.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
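A minimal sketch of the idea, under the assumption that the per-platform arrays are named roughly as below and are terminated by an invalid register entry:

```c
/* Choose the platform's engine MMIO list once, at GVT init time.
 * Users later just walk gvt->engine_mmio_list until they hit the
 * terminating entry (an invalid register). */
void intel_gvt_init_engine_mmio_context(struct intel_gvt *gvt)
{
	struct drm_i915_private *dev_priv = gvt->dev_priv;

	if (IS_SKYLAKE(dev_priv) || IS_KABYLAKE(dev_priv))
		gvt->engine_mmio_list = gen9_engine_mmio_list;
	else
		gvt->engine_mmio_list = gen8_engine_mmio_list;
}
```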
-
By Changbin Du
To improve readability, remove the hard-coded offsets from the mmio definitions. The raw offset remains as a comment, which still gives us an offset-based view. This refinement makes it convenient to enable new platforms.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
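The flavor of the change, shown on a couple of illustrative list entries (the exact struct layout is assumed, not quoted):

```c
/* Before: raw offsets scattered through the list. */
{RCS, _MMIO(0x229c), 0xffff, false},

/* After: symbolic names, with the raw offset preserved as a comment. */
{RCS, GFX_MODE_GEN7,        0xffff, false},	/* 0x229c */
{RCS, GEN9_CTX_PREEMPT_REG, 0x0,    false},	/* 0x2248 */
```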
-
- 21 November 2017, 1 commit
-
-
By Chris Wilson
Execlists and legacy ringbuffer submission are no longer feature comparable (execlists now offer greater functionality that should overcome their performance hit) and this obsoletes the unsafe module parameter, i.e. comparing the two modes of execution is no longer useful, so remove the debug tool.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com> #i915_perf.c
Link: https://patchwork.freedesktop.org/patch/msgid/20171120205504.21892-1-chris@chris-wilson.co.uk
-
- 16 November 2017, 3 commits
-
-
By Changbin Du
Use I915_WRITE_FW instead of I915_WRITE to reduce overhead. The overall mmio switch latency lowers from ~600us to ~180us.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
By Zhi Wang
Move tlb_handle_pending into intel_vgpu_submission, since it is part of the vGPU submission state.
Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
-
By Zhi Wang
Introduce intel_vgpu_submission to hold all submission-related members that previously lived directly in struct intel_vgpu.
Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
-
- 22 September 2017, 1 commit
-
-
By Michal Wajdeczko
Our global struct with params is named exactly the same way as the new preferred name for the drm_i915_private function parameter. To avoid such name reuse, let's use a different name for the global.
v5: pure rename
v6: fix
Credits-to: Coccinelle
@@
identifier n;
@@
(
- i915.n
+ i915_modparams.n
)
Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Jani Nikula <jani.nikula@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Ville Syrjala <ville.syrjala@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Acked-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20170919193846.38060-1-michal.wajdeczko@intel.com
-
- 10 August 2017, 2 commits
-
-
By Changbin Du
I915_READ/WRITE is not just an mmio read/write; it also contains debug checking and a forcewake domain lookup. This is too heavy for the GVT ring switch case, which accesses a batch of mmio registers on every ring switch. We can handle forcewake manually and use the raw i915 read/write instead. The benefit is roughly 2x faster mmio switch performance:
            Before     After
  cycles   ~550000   ~250000
v2: Use existing I915_READ_FW/I915_WRITE_FW macro. (zhenyu)
Signed-off-by: Changbin Du <changbin.du@intel.com>
Reviewed-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
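A hedged sketch of the manual-forcewake pattern this describes; the loop body, list name and use of vgpu_vreg_t are illustrative:

```c
/* Take all forcewake domains once for the whole batch, then use the
 * _FW accessors, which skip per-access checking and domain lookup. */
intel_uncore_forcewake_get(dev_priv, FORCEWAKE_ALL);

for (mmio = mmio_list; i915_mmio_reg_valid(mmio->reg); mmio++)
	I915_WRITE_FW(mmio->reg, vgpu_vreg_t(vgpu, mmio->reg));

intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL);
```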
-
By Changbin Du
There are lots of POSTING_READs alongside each mmio write op, but they are not actually necessary. They just add too much latency, since a PCIe read op is very slow (it is a non-posted transaction). For a PCIe device, the strong-ordering rules for memory transactions are:
o A PCIe mmio write sequence is FIFO: a posted request cannot pass a previous posted request.
o A PCIe mmio read will not go ahead of a previous write.
Intel graphics doesn't support relaxed ordering (RO), so we can apply the rules above. In our case we only need one POSTING_READ at the end. This removes half of the mmio read ops, and the average ring switch performance is nearly doubled:
            Before     After
  cycles   ~970000   ~550000
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
- 08 June 2017, 2 commits
-
-
By Xiong Zhang
Currently gvt dmesg is so heavy at drm.debug=0x2 that guest and host can barely run on xengt. This patch moves these repeated messages into trace events, so dmesg stays light at drm.debug=0x2 and users can get the target messages through trace events and trace filters.
Suggested-by: Zhi Wang <zhi.a.wang@intel.com>
Signed-off-by: Xiong Zhang <xiong.y.zhang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
By Changbin Du
Commit ab9da627906a ("drm/i915: make context status notifier head be per engine") gives us a chance to inspect every single request, so we can eliminate unnecessary mmio switching for the same vGPU. We only need mmio switching between different VMs (including the host). This patch introduces a new general API, intel_gvt_switch_mmio(), to replace the old intel_gvt_load/restore_render_mmio(). The function can be further optimized for vGPU-to-vGPU switching. To support switching individual rings, we track the owner who occupies each ring; when another VM or the host requests a ring, we do the mmio context switch, otherwise there is no need to switch the ring. This optimization is very useful when only one guest has plenty of workloads and the host is mostly idle; in the best case no mmio switching happens at all.
v2:
o fix missing ring switch issue. (chuanxiao)
o support individual ring switch.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Reviewed-by: Chuanxiao Dong <chuanxiao.dong@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
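A hedged sketch of the per-ring owner tracking described above; field names such as engine_owner are assumptions for illustration:

```c
/* Switch the MMIO context only when the ring changes owner;
 * a NULL owner stands for the host. */
if (scheduler->engine_owner[ring_id] != workload->vgpu) {
	intel_gvt_switch_mmio(scheduler->engine_owner[ring_id],
			      workload->vgpu, ring_id);
	scheduler->engine_owner[ring_id] = workload->vgpu;
}
```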
-
- 08 May 2017, 1 commit
-
-
By Chuanxiao Dong
There is no need to restore the in-context MMIO on SCHEDULE_OUT. With the in-context MMIO restore in place, GPU hangs can sometimes be observed, so remove it.
Signed-off-by: Chuanxiao Dong <chuanxiao.dong@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
- 12 April 2017, 2 commits
-
-
By Changbin Du
The platform check is done outside, so there is no need to check again; platforms without mocs should not invoke these two functions.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
By Changbin Du
Make the global mmio list cacheline aligned to improve performance.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
- 29 March 2017, 2 commits
-
-
By Xu Han
Extend the function dispatch logic to support the KBL platform.
Signed-off-by: Xu Han <xu.han@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
By Xu Han
Add some KBL-specific registers to the save/restore list.
Signed-off-by: Xu Han <xu.han@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
- 21 March 2017, 1 commit
-
-
By Chuanxiao Dong
Fix the wrong offset of the RCS-specific mocs registers.
Fixes: 17865713 ("drm/i915/gvt: vGPU context switch")
Signed-off-by: Chuanxiao Dong <chuanxiao.dong@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
- 17 March 2017, 1 commit
-
-
By Tina Zhang
gvt_err should be used only for the very few critical error messages during host i915 driver initialization. This patch:
1. removes the redundant gvt_err;
2. creates a new gvt_vgpu_err to show errors caused by a vgpu;
3. replaces most gvt_err calls with gvt_vgpu_err;
4. leaves very few gvt_err calls for dumping gvt errors during host gvt initialization.
v2: change name to gvt_vgpu_err and add the vgpu id to the message (Kevin); add the gpu id to gvt_vgpu_err (Zhi)
v3: remove the gpu id from the gvt_vgpu_err caller (Zhi)
v4: add a vgpu check to the gvt_vgpu_err macro (Zhiyuan)
v5: add comments for v3 and v4
v6: split the big patch into two, with this patch only for checking gvt_vgpu_err (Zhenyu)
v7: rebase to staging branch
v8: rebase to fix branch
Signed-off-by: Tina Zhang <tina.zhang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
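One plausible shape for such a macro, assuming a vgpu pointer is in scope at the call site; the logging backend shown here (pr_err) is an assumption, not quoted from the driver:

```c
/* Report an error attributed to a specific vGPU; fall back to a plain
 * gvt-prefixed message when no valid vgpu pointer is available. */
#define gvt_vgpu_err(fmt, args...)					\
do {									\
	if (IS_ERR_OR_NULL(vgpu))					\
		pr_err("gvt: " fmt, ##args);				\
	else								\
		pr_err("gvt: vgpu %d: " fmt, vgpu->id, ##args);		\
} while (0)
```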
-