- 18 October 2019, 1 commit
-
-
Submitted by Tvrtko Ursulin

The medium-term goal is to eliminate the i915->engine[] array, and to get there we have recently introduced an equivalent array in intel_gt. Now we need to migrate the code further towards this state. The next step is to eliminate usage of i915->engines[] from the for_each_engine_masked iterator. For this to work we also need to use engine->id as the index when populating the gt->engine[] array, and adjust the default engine set indexing to use engine->legacy_idx instead of assuming gt->engines[] indexing.

v2:
 * Populate gt->engine[] earlier.
 * Check that we don't duplicate engine->legacy_idx.

v3:
 * Work around the initialization order issue between default_engines() and intel_engines_driver_register(), which sets engine->legacy_idx, for now. It will be fixed properly later.

v4:
 * Merge with forgotten v2.5.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20191017161852.8836-1-tvrtko.ursulin@linux.intel.com
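A rough, self-contained sketch of the direction described above: an iterator that walks an engine mask and indexes a gt-local engine array directly. The macro body, struct layout and names below are simplified stand-ins (and assume the mask only selects populated slots), not the actual i915 definitions.

#include <stdio.h>

#define MAX_ENGINES 8

struct engine { int id; const char *name; };
struct gt { struct engine *engine[MAX_ENGINES]; };

/* Visit each engine selected by the mask, indexing gt->engine[] by engine id. */
#define for_each_engine_masked(e, gt, mask, tmp) \
    for ((tmp) = (mask); \
         (tmp) && ((e) = (gt)->engine[__builtin_ctz(tmp)], \
                   (tmp) &= (tmp) - 1, 1);)

int main(void)
{
    struct engine rcs = { 0, "rcs0" }, vcs = { 2, "vcs0" };
    struct gt gt = { .engine = { [0] = &rcs, [2] = &vcs } };
    struct engine *e;
    unsigned int tmp;

    /* Visit only engines 0 and 2. */
    for_each_engine_masked(e, &gt, (1u << 0) | (1u << 2), tmp)
        printf("%s\n", e->name);
    return 0;
}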
-
- 04 October 2019, 2 commits
-
-
Submitted by Chris Wilson

Keep track of the GEM contexts underneath i915->gem.contexts and assign them their own lock for the purposes of list management.

v2: Focus on lock tracking; ctx->vm is protected by ctx->mutex
v3: Correct split with removal of logical HW ID

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-15-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson

Forgo the struct_mutex serialisation for i915_active, and interpose its own mutex handling for active/retire. This is a multi-layered sleight-of-hand. First, we had to ensure that no active/retire callbacks accidentally inverted the mutex ordering rules, nor assumed that they were themselves serialised by struct_mutex. More challenging, though, is the rule for updating elements of the active rbtree. Instead of the whole i915_active being serialised by struct_mutex, allocations/rotations of the tree are serialised by the i915_active.mutex and individual nodes are serialised by the caller using the i915_timeline.mutex (we need to use nested spinlocks to interact with the dma_fence callback lists). The pain point here is that instead of a single mutex around execbuf, we now have to take a mutex for each active tracker (one for each vma, context, etc.) and a couple of spinlocks for each fence update. The improvement in fine-grained locking, allowing for multiple concurrent clients (eventually!), should be worth it in typical loads.

v2: Add some comments that barely elucidate anything :(

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-6-chris@chris-wilson.co.uk
-
- 06 September 2019, 2 commits
-
-
Submitted by Weinan Li

The guest may use this register to identify the running state of one context. Emulate it with the value in the context image, as if the context were running on the GPU hardware.

Signed-off-by: Weinan Li <weinan.z.li@intel.com>
Reviewed-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
Submitted by Xiaolin Zhang

When creating a vGPU workload, the guest context head pointer should be updated correctly by comparing it with the existing workloads in the guest workload queue, including the currently running context. In some situations there is a running context A, and then two new vGPU workloads arrive for contexts B and A. In the new workload for context A, its head pointer should be updated with the running context A's tail.

v2: walk through the guest workload list in a backward way.

Cc: stable@vger.kernel.org
Signed-off-by: Xiaolin Zhang <xiaolin.zhang@intel.com>
Reviewed-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
- 10 August 2019, 2 commits
-
-
Submitted by Chris Wilson

Push the ring creation flags from the outer GEM context to the inner intel_context to avoid an unsightly back-reference from inside the backend.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Andi Shyti <andi.shyti@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190809182518.20486-3-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson

As we are phasing out using the GEM context for internal clients that need to manipulate logical context state directly, remove the constructor for the GVT context. We are not using it for anything other than default setup and allocation of an i915_ppgtt.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190809182518.20486-1-chris@chris-wilson.co.uk
-
- 09 August 2019, 1 commit
-
-
Submitted by Dan Carpenter

We can't free "workload" until after the printk, or it's a use-after-free.

Fixes: 2089a76a ("drm/i915/gvt: Checking workload's gma earlier")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
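The bug class is simple to illustrate in isolation: the message has to be emitted while the object is still alive. A minimal stand-alone sketch with hypothetical names (not the actual GVT code):

#include <stdio.h>
#include <stdlib.h>

struct workload {
    unsigned long gma;  /* guest graphics address that failed validation */
};

/* Fixed ordering: log the offending value first, then release the object.
 * Freeing before the message would make the w->gma read a use-after-free. */
static void reject_workload(struct workload *w)
{
    fprintf(stderr, "invalid guest gma %lx, workload rejected\n", w->gma);
    free(w);
}

int main(void)
{
    struct workload *w = malloc(sizeof(*w));

    if (!w)
        return 1;
    w->gma = 0xdead000;
    reject_workload(w);
    return 0;
}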
-
- 08 August 2019, 2 commits
-
-
Submitted by Chris Wilson

Ignore the central i915->kernel_context for allocating an engine, as that GEM context is being phased out. For internal clients, we just need the per-engine logical state, so allocate it at the point of use.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190808110612.23539-1-chris@chris-wilson.co.uk
-
Submitted by Umesh Nerlige Ramappa

The oa object manages the oa buffer and must be allocated when the user intends to read performance counter snapshots. This can be achieved by making the oa object part of the stream object, which is allocated when a stream is opened by the user. Attributes in the oa object that are gen-specific are moved to the perf object so that they can be initialized on driver load. The split provides a better separation of the objects used in the perf implementation of the i915 driver, so that resources are allocated and initialized only when needed.

v2: Fix checkpatch warnings
v3: Addressed Lionel's review comment
v4: Rebase
v5: Fix rebase/merge issue with ratelimit_state_init

Signed-off-by: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190806233002.984-1-umesh.nerlige.ramappa@intel.com
-
- 02 August 2019, 1 commit
-
-
Submitted by Chris Wilson

We only compute the lrc_descriptor() on pinning the context, i.e. infrequently, so we do not benefit from storing the template, as the addressing mode is also fixed for the lifetime of the intel_context.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Prathap Kumar Valsan <prathap.kumar.valsan@intel.com>
Acked-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190730133035.1977-9-chris@chris-wilson.co.uk
-
- 30 July 2019, 4 commits
-
-
Submitted by Chris Wilson

Track the currently bound address space used by the HW context. Minor conversions to use the local intel_context.vm are made, leaving behind some more surgery required to make intel_context the primary through the selftests.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190730143209.4549-2-chris@chris-wilson.co.uk
-
Submitted by Colin Xu

A Windows guest can't run after a forced TDR, with the host log:
  ...
  gvt: vgpu 1: workload shadow ppgtt isn't ready
  gvt: vgpu 1: fail to dispatch workload, skip
  ...

The error is raised by set_context_ppgtt_from_shadow() when it checks and finds that the shadow_mm isn't marked as shadowed. In the work thread, before each submission, a shadow_mm is set to shadowed in:
  shadow_ppgtt_mm()
    <- intel_vgpu_pin_mm()
    <- prepare_workload()
    <- dispatch_workload()
    <- workload_thread()
However, the check of whether the shadow_mm is shadowed happens before that:
  set_context_ppgtt_from_shadow()
    <- dispatch_workload()
    <- workload_thread()

In the normal case, workload creation checks for the existence of the shadow_mm; if it doesn't exist, a new one is created and marked as shadowed, and if it already exists it is reused. Since the shadow_mm is reused, the shadowed check in set_context_ppgtt_from_shadow() actually always sees the state set at creation, not the state set in intel_vgpu_pin_mm(). On a forced TDR, all engines are reset; since it's not a dmlr-level reset, all ppgtt_mm are invalidated but not destroyed. Invalidation marks all reused shadow_mm as not shadowed, but they are still kept on ppgtt_mm_list_head. If those shadow_mm are then reused in the workload submission phase with shadowed not set, set_context_ppgtt_from_shadow() reports the error.

Pin for the context after the shadow_mm is pinned and the shadow pdps have settled.

v2: Move set_context_ppgtt_from_shadow() after prepare_workload(). (zhenyu)
v3: Move set_context_ppgtt_from_shadow() after shadow pdps updated. (zhenyu)

Fixes: 4f15665c ("drm/i915: Add ppgtt to GVT GEM context")
Cc: stable@vger.kernel.org
Signed-off-by: Colin Xu <colin.xu@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
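A schematic of the reordering described in v2/v3, using simplified stand-in structures and helpers; the real dispatch_workload() does considerably more than this:

#include <stdbool.h>
#include <stdio.h>

struct shadow_mm { bool shadowed; };
struct workload { struct shadow_mm *mm; };

/* Pinning (re)shadows the mm; after a reset a reused mm starts unshadowed. */
static int pin_shadow_mm(struct workload *w)
{
    w->mm->shadowed = true;  /* shadow page tables rebuilt here */
    return 0;
}

static int set_context_ppgtt_from_shadow(struct workload *w)
{
    if (!w->mm->shadowed) {
        fprintf(stderr, "workload shadow ppgtt isn't ready\n");
        return -1;
    }
    return 0;  /* copy the shadow pdps into the context image */
}

static int dispatch_workload(struct workload *w)
{
    /* The fix: pin (and thus shadow) the mm before consulting it. */
    int ret = pin_shadow_mm(w);

    if (ret)
        return ret;
    return set_context_ppgtt_from_shadow(w);
}

int main(void)
{
    struct shadow_mm mm = { .shadowed = false };  /* as left after a TDR */
    struct workload w = { .mm = &mm };

    return dispatch_workload(&w);
}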
-
Submitted by Xiaolin Zhang

In workload_thread, we should grab a runtime pm wakelock, so that the later uncore forcewake get sees the rpm wakelock held. Otherwise, sometimes, the rpm wakelock is not held and the call trace below is printed:

  Call Trace:
   intel_uncore_forcewake_get+0x15/0x20 [i915]
   workload_thread+0x5f9/0x16f0 [i915]
   ? __switch_to_asm+0x34/0x70
   ? __switch_to_asm+0x40/0x70
   ? __switch_to_asm+0x34/0x70
   ? __switch_to_asm+0x40/0x70
   ? __switch_to_asm+0x34/0x70
   ? __switch_to+0x85/0x3f0
   ? __switch_to_asm+0x40/0x70
   ? do_wait_intr_irq+0x90/0x90
   kthread+0x121/0x140
   ? intel_vgpu_clean_workloads+0x100/0x100 [i915]
   ? kthread_park+0x90/0x90
   ret_from_fork+0x35/0x40
  --[ end trace 86525f742a02e12c ]--

v2: adapted to use the rpm structure.

Fixes: 251d46b0 ("drm/i915/gvt: Pin the per-engine GVT shadow contexts")
Reviewed-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Xiaolin Zhang <xiaolin.zhang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
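The required ordering — hold a runtime-pm wakeref before touching forcewake in the worker — can be sketched with simplified stand-in types; the assert mirrors the warning that produced the trace above. None of the helpers below are the real i915 API.

#include <assert.h>

struct device_rpm { int wakeref_count; };
struct uncore { struct device_rpm *rpm; int forcewake_count; };

static void runtime_pm_get(struct device_rpm *rpm) { rpm->wakeref_count++; }
static void runtime_pm_put(struct device_rpm *rpm) { rpm->wakeref_count--; }

/* Mirrors the check that fired above: forcewake requires a held wakeref. */
static void forcewake_get(struct uncore *uncore)
{
    assert(uncore->rpm->wakeref_count > 0);
    uncore->forcewake_count++;
}

static void forcewake_put(struct uncore *uncore)
{
    uncore->forcewake_count--;
}

/* One iteration of a workload_thread-like worker, in the fixed order. */
static void dispatch_one(struct uncore *uncore)
{
    runtime_pm_get(uncore->rpm);  /* hold a wakeref for the whole dispatch */
    forcewake_get(uncore);

    /* ... submit the workload and wait for it to complete ... */

    forcewake_put(uncore);
    runtime_pm_put(uncore->rpm);
}

int main(void)
{
    struct device_rpm rpm = { 0 };
    struct uncore uncore = { &rpm, 0 };

    dispatch_one(&uncore);
    return rpm.wakeref_count || uncore.forcewake_count;
}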
-
Submitted by Xiong Zhang

A workload contains an RB and WA_CTX, which are in ggtt space; if they aren't in valid ggtt space, the workload shouldn't be shadowed and scanned. So check them earlier to avoid shadowing them.

Reviewed-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Xiong Zhang <xiong.y.zhang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
- 17 June 2019, 1 commit
-
-
Submitted by Mika Kuoppala

All page directories are identical in function; only their position in the hierarchy differs. Use the same base type for directory functionality.

v2: cleanup, size always 512, init to null

Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Matthew Auld <matthew.william.auld@gmail.com>
Cc: Abdiel Janulgue <abdiel.janulgue@linux.intel.com>
Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190614164350.30415-2-mika.kuoppala@linux.intel.com
-
- 14 June 2019, 1 commit
-
-
Submitted by Daniele Ceraolo Spurio

The functions were internally already using only the structure, so we just need to flip the interface.

v2: rebase

Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Imre Deak <imre.deak@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190613232156.34940-7-daniele.ceraolospurio@intel.com
-
- 11 June 2019, 2 commits
-
-
Submitted by Chris Wilson

Keeping the _hw_ in there does not help to distinguish it from its only sibling, i915_ggtt, so drop it.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190611091238.15808-2-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson

Make the kref common to both derived structs (i915_ggtt and i915_ppgtt) so that we can safely reference-count an abstract ctx->vm address space.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190611091238.15808-1-chris@chris-wilson.co.uk
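The underlying pattern — a reference count living in the common base type so that both derived structs can be held through the same abstract pointer — sketched with a plain counter instead of the kernel's struct kref, and with simplified stand-in types:

#include <stdlib.h>

struct address_space {
    int refcount;
    void (*release)(struct address_space *vm);
};

struct ggtt  { struct address_space vm; /* global GTT state */ };
struct ppgtt { struct address_space vm; /* per-process GTT state */ };

static void vm_get(struct address_space *vm) { vm->refcount++; }

static void vm_put(struct address_space *vm)
{
    if (--vm->refcount == 0)
        vm->release(vm);
}

static void ppgtt_release(struct address_space *vm)
{
    /* the kernel would use container_of(); here vm is the first member */
    free((struct ppgtt *)vm);
}

int main(void)
{
    struct ppgtt *p = calloc(1, sizeof(*p));

    if (!p)
        return 1;
    p->vm.refcount = 1;
    p->vm.release = ppgtt_release;

    vm_get(&p->vm);  /* e.g. ctx->vm takes its own reference */
    vm_put(&p->vm);
    vm_put(&p->vm);  /* the last put releases the derived struct */
    return 0;
}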
-
- 03 June 2019, 1 commit
-
-
Submitted by Xiaolin Zhang

Save RING_HEAD into the vgpu reg when the vgpu is switched out, and report its value back to the guest.

v6: addressed comment on ring head wrap count support. (Zhenyu)
v5: ring head wrap count support.
v4: updated HEAD/TAIL with guest value, not host value. (Yan Zhao)
v3: save RING HEAD/TAIL vgpu reg in save_ring_hw_state. (Zhenyu Wang)
v2: save RING_TAIL as well during vgpu mmio switch to meet the ring_is_idle condition. (Fred Gao)
v1: based on input from Weinan. (Weinan Li)

[zhenyuw: Include this fix for a possible future guest kernel that would utilize RING_HEAD for hangcheck.]

Reviewed-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Xiaolin Zhang <xiaolin.zhang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
- 28 May 2019, 4 commits
-
-
Submitted by Chris Wilson

An old optimisation to reduce the number of atomics per batch sadly relies on struct_mutex for coordination. In order to remove struct_mutex from serialising object/context closing, always taking and releasing an active reference on first use / last use greatly simplifies the locking.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190528092956.14910-15-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson

Use the per-object local lock to control the cache domain of the individual GEM objects, not struct_mutex. This is a huge leap forward for us in terms of object-level synchronisation; execbuffers are coordinated using the ww_mutex and pread/pwrite is finally fully serialised again.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190528092956.14910-10-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson

Continuing the theme of separating out the GEM clutter.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190528092956.14910-8-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson

Continuing the decluttering of i915_gem.c, that of the read/write domains, perhaps the biggest of GEM's follies?

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190528092956.14910-7-chris@chris-wilson.co.uk
-
- 21 May 2019, 2 commits
-
-
Submitted by Yan Zhao

For a restore-inhibit context, hardware will not load the in-context mmios (the engine context part) into hardware, but it will save the mmio values from hardware back into the context image. So, in order to save correct vGPU values back into the context image, the vGPU mmio values have to be loaded into hardware first for a restore-inhibit context.

In this patch, the mechanism is applied to all gen9 platforms. Gen8 platforms are excluded only because of a lack of testing on them.

v3: for mocs registers, go through the in-context mmio save-restore path for the skl platform as well (weinan li)
v2: update vreg when scanning the indirect context for an inhibit context for gen9

Cc: Weinan Li <weinan.z.li@intel.com>
Acked-by: Weinan Li <weinan.z.li@intel.com>
Signed-off-by: Yan Zhao <yan.y.zhao@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
Submitted by Weinan

"To track whether a request has started on HW, we can emit a breadcrumb at the beginning of the request and check its timeline's HWSP to see if the breadcrumb has advanced past the start of this request."

This means that for every request whose timeline has has_init_breadcrumb set, the emit_init_breadcrumb step must happen before the real commands are emitted; otherwise the scheduler might see a wrong state for this request during reset. If the request is exactly the guilty one, the scheduler won't terminate it, because of the wrong state. To avoid this, do emit_init_breadcrumb for all the requests from gvt.

v2: cc to stable kernel

Fixes: 85474441 ("drm/i915: Identify active requests")
Cc: stable@vger.kernel.org
Acked-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Weinan <weinan.z.li@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
- 27 April 2019, 3 commits
-
-
Submitted by Chris Wilson

We switched to a tree of per-engine HW contexts to accommodate the introduction of virtual engines. However, we plan to also support multiple instances of the same engine within the GEM context, defeating our use of the engine as a key for looking up the HW context. Just allocate a logical per-engine instance and always use an index into ctx->engines[]. Later on, this ctx->engines[] may be replaced by a user-specified map.

v2: Add for_each_gem_engine() helper to iterate within the engines lock
v3: intel_context_create_request() helper
v4: s/unsigned long/unsigned int/ 4 billion engines is quite enough.
v5: Push iterator locking to caller

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190426163336.15906-7-chris@chris-wilson.co.uk
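The shape of the change — per-context HW state looked up by an index into the context's own engine map rather than keyed by the physical engine — in a simplified, stand-alone form (names and layout are illustrative, not the real ctx->engines[]):

#include <stdio.h>

#define NUM_SLOTS 4

/* Each slot owns its own logical context even if it targets the same
 * physical engine as another slot. */
struct intel_context { int engine_id; /* plus per-instance HW state */ };

struct gem_engines {
    unsigned int num_engines;
    struct intel_context *engines[NUM_SLOTS];
};

static struct intel_context *
lookup_context(struct gem_engines *e, unsigned int idx)
{
    return idx < e->num_engines ? e->engines[idx] : NULL;
}

int main(void)
{
    struct intel_context slot0 = { .engine_id = 0 };
    struct intel_context slot1 = { .engine_id = 0 };  /* same engine, own state */
    struct gem_engines e = {
        .num_engines = 2,
        .engines = { &slot0, &slot1 },
    };

    printf("slot 1 targets engine %d\n", lookup_context(&e, 1)->engine_id);
    return 0;
}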
-
Submitted by Chris Wilson

We want to pass an intel_context into intel_context_pin(), and that requires us to first be able to look up the intel_context!

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190426163336.15906-2-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson

Our eventual goal is to rid request construction of struct_mutex, with the short-term step of lifting the struct_mutex requirements into the higher levels (i.e. the caller must ensure that the context is already pinned into the GTT). In this patch, we pin GVT's shadow contexts upon allocation and so keep them pinned into the GGTT for as long as the virtual machine is alive, and so we can use the simpler request construction path, safe in the knowledge that the hard work is already done.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
Acked-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190426163336.15906-1-chris@chris-wilson.co.uk
-
- 25 April 2019, 1 commit
-
-
Submitted by Aleksei Gimbitskii

Typedefs are not recommended in the Linux kernel. The Klocwork static code analyzer takes the enumeration as the full range of intel_gvt_gtt_type_t, but intel_gvt_gtt_type_t will never be used across its full range. For example, GTT_TYPE_INVALID will never be used as an index of an array. Remove the typedef and let the enumeration start from zero to pass the Klocwork analysis.

This patch fixes the critical issues #483, #551, #665 reported by Klocwork.

v3:
- Remove the typedef and let the enumeration start from zero.

Signed-off-by: Aleksei Gimbitskii <aleksei.gimbitskii@intel.com>
Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
Cc: Zhi Wang <zhi.a.wang@intel.com>
Cc: Colin Xu <colin.xu@intel.com>
Reviewed-by: Colin Xu <colin.xu@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
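A sketch of the shape of the change, with an abbreviated, illustrative set of enumerators (the real list in the GVT gtt code is much longer, and the exact "before" form is assumed here); the point is simply that without the typedef, and starting at zero, every valid enumerator can index an array directly:

/* Before (schematic, assumed): a typedef'd enum whose first value sat
 * outside the usable range, e.g.
 *   typedef enum { GTT_TYPE_INVALID = -1, GTT_TYPE_GGTT_PTE, ... } intel_gvt_gtt_type_t;
 */

/* After: a plain enum starting from zero. */
enum intel_gvt_gtt_type {
    GTT_TYPE_INVALID = 0,  /* still never used as an array index */
    GTT_TYPE_GGTT_PTE,
    GTT_TYPE_PPGTT_PTE_4K_ENTRY,
    GTT_TYPE_MAX,
};

/* Per-type tables can now be sized and indexed by the enum itself. */
static const char *const gtt_type_name[GTT_TYPE_MAX] = {
    [GTT_TYPE_GGTT_PTE]           = "ggtt pte",
    [GTT_TYPE_PPGTT_PTE_4K_ENTRY] = "ppgtt 4K pte",
};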
-
- 02 April 2019, 1 commit
-
-
Submitted by Chris Wilson

We want to use intel_engine_mask_t inside i915_request.h, which means extracting it from the general header file mess and placing it inside a types.h. A knock-on effect is that the compiler wants to warn about type-contraction of ALL_ENGINES into intel_engine_mask_t, so prepare for the worst.

v2: Use intel_engine_mask_t consistently
v3: Move I915_NUM_ENGINES to its natural home at the end of the enum

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: John Harrison <John.C.Harrison@Intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190401162641.10963-1-chris@chris-wilson.co.uk
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
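What a minimal types-only header for this could look like, assuming a 32-bit mask; the actual width, guard name and ALL_ENGINES definition in i915 may differ:

#ifndef INTEL_ENGINE_TYPES_SKETCH_H
#define INTEL_ENGINE_TYPES_SKETCH_H

#include <stdint.h>

typedef uint32_t intel_engine_mask_t;

/* Explicitly typed so assigning it to an intel_engine_mask_t never narrows. */
#define ALL_ENGINES ((intel_engine_mask_t)~0u)

#endif /* INTEL_ENGINE_TYPES_SKETCH_H */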
-
- 29 March 2019, 1 commit
-
-
Submitted by Yan Zhao

In the workload creation routine, if any failure occurs, do not queue this workload for delivery. If the failure is fatal, enter failsafe mode.

Fixes: 6d763035 ("drm/i915/gvt: Move common vGPU workload creation into scheduler.c")
Cc: stable@vger.kernel.org #4.19+
Cc: zhenyuw@linux.intel.com
Signed-off-by: Yan Zhao <yan.y.zhao@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
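The control flow being fixed is easy to sketch with stand-in helpers: on any preparation failure the workload is destroyed instead of queued, and a fatal failure additionally flips the vGPU into failsafe mode. Everything below is illustrative, not the actual GVT routine.

#include <stdbool.h>
#include <stdlib.h>

#define ERR_FATAL (-2)  /* stand-in for an error treated as fatal */

struct vgpu { bool failsafe; };
struct workload { struct vgpu *vgpu; };

static int prepare_workload(struct workload *w)
{
    (void)w;
    return 0;  /* command scan / shadowing would happen here */
}

static void enter_failsafe_mode(struct vgpu *vgpu) { vgpu->failsafe = true; }
static void queue_workload(struct workload *w)     { (void)w; /* hand off to the scheduler */ }

static int create_workload(struct vgpu *vgpu)
{
    struct workload *w = calloc(1, sizeof(*w));
    int ret;

    if (!w)
        return -1;
    w->vgpu = vgpu;

    ret = prepare_workload(w);
    if (ret) {
        /* never queue a half-built workload for delivery */
        if (ret == ERR_FATAL)
            enter_failsafe_mode(vgpu);
        free(w);
        return ret;
    }

    queue_workload(w);
    return 0;
}

int main(void)
{
    struct vgpu vgpu = { .failsafe = false };

    return create_workload(&vgpu);
}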
-
- 21 March 2019, 1 commit
-
-
Submitted by Daniele Ceraolo Spurio

Now that the internal code all works on intel_uncore, flip the external-facing interface.

v2: fix GVT.

Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190319183543.13679-4-daniele.ceraolospurio@intel.com
-
- 15 March 2019, 1 commit
-
-
Submitted by Chris Wilson

Large ppGTTs are differentiated by the requirement to go to four levels to address more than 32b. Given the introduction of more 4-level ppGTTs with different sizes of addressable bits, rename i915_vm_is_48b() to better reflect the commonality of using 4 levels.

Based on a patch by Bob Paauwe.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Bob Paauwe <bob.j.paauwe@intel.com>
Cc: Matthew Auld <matthew.william.auld@gmail.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190314223839.28258-4-chris@chris-wilson.co.uk
-
- 06 March 2019, 1 commit
-
-
Submitted by Chris Wilson

In the next patch, we are introducing a broad virtual engine to encompass multiple physical engines, losing the 1:1 nature of BIT(engine->id). To reflect the broader set of engines implied by the virtual instance, let's store the full bitmask.

v2: Use intel_engine_mask_t (s/ring_mask/engine_mask/)
v3: Tvrtko voted for moah churn so teach everyone to not mention ring and use $class$instance throughout.
v4: Comment upon the disparity in bspec for using VCS1,VCS2 in gen8 and VCS[0-4] in later gen. We opt to keep the code consistent and use 0-index naming throughout.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190305180332.30900-1-chris@chris-wilson.co.uk
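The representation change can be shown with simplified stand-in types: a physical engine still carries a single-bit mask, while a virtual instance carries the union of the physical engines it may run on. Names and widths below are illustrative only.

#include <stdio.h>

typedef unsigned int engine_mask_t;

struct engine {
    const char *name;
    engine_mask_t mask;  /* all physical engines backing this instance */
};

int main(void)
{
    /* A physical engine still has exactly one bit set... */
    struct engine vcs0 = { "vcs0", 1u << 2 };
    /* ...while a virtual engine spans several physical instances. */
    struct engine veng = { "virtual", (1u << 2) | (1u << 3) };

    printf("%s mask=%#x\n", vcs0.name, vcs0.mask);
    printf("%s mask=%#x\n", veng.name, veng.mask);
    return 0;
}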
-
- 04 March 2019, 1 commit
-
-
Submitted by Zhenyu Wang

This moves the ppgtt root hook out of the scan-and-shadow function, as it's only required at dispatch time. Also make sure this checks that the shadow mm is ready, otherwise bail out to fail earlier.

Reviewed-by: Xiong Zhang <xiong.y.zhang@intel.com>
Cc: Xiong Zhang <xiong.y.zhang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
- 01 March 2019, 2 commits
-
-
Submitted by Zhenyu Wang

As the vGPU shadow ctx is loaded with guest context state, arbitrarily submitting a request in the error path of workload dispatch would cause trouble. So, as in the previous code, don't try to submit in the error path. This fixes a VM failure when a GPU hang happens.

Fixes: f0e99437 ("drm/i915/gvt: Fix workload request allocation before request add")
Reviewed-by: Xiong Zhang <xiong.y.zhang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
Submitted by Weinan Li

There is one corner case in which workload_thread may pick and dispatch a workload of a vgpu after it has already been deactivated. Below is the scenario:
1. deactive_vgpu got the vgpu_lock; it found a pending workload had been submitted, so it released the vgpu_lock and waited for the vgpu to go idle.
2. Before deactive_vgpu got the vgpu_lock back, workload_thread might pick one new valid workload, and then be blocked by the vgpu_lock.
3. deactive_vgpu got the vgpu_lock again, finished the last steps of deactivating, then released the vgpu_lock.
4. workload_thread got the vgpu_lock, then tried to dispatch the fetched workload.
It's not expected that a workload of a deactivated vgpu is dispatched. The solution is to add a check of the vgpu's active flag and stop scheduling when it's inactive.

Reviewed-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Weinan Li <weinan.z.li@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
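The race and its fix can be sketched with simplified stand-ins: the worker re-checks the active flag under the same lock before dispatching, so a workload picked just before deactivation is dropped rather than run. The pthread mutex stands in for the vgpu_lock; none of this is the real GVT code.

#include <pthread.h>
#include <stdbool.h>

struct vgpu {
    pthread_mutex_t lock;  /* stands in for the vgpu_lock above */
    bool active;
    int pending;           /* number of queued workloads */
};

/* Worker side: only dispatch while the vgpu is still active. */
static bool try_dispatch_one(struct vgpu *v)
{
    bool dispatched = false;

    pthread_mutex_lock(&v->lock);
    if (v->active && v->pending > 0) {
        v->pending--;
        /* ... actually dispatch the workload here ... */
        dispatched = true;
    }
    pthread_mutex_unlock(&v->lock);
    return dispatched;
}

int main(void)
{
    struct vgpu v = {
        .lock = PTHREAD_MUTEX_INITIALIZER,
        .active = false,  /* deactivated between pick and dispatch */
        .pending = 1,
    };

    return try_dispatch_one(&v) ? 1 : 0;
}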
-
- 23 January 2019, 1 commit
-
-
Submitted by Weinan Li

GVT-g shadows the privileged batch buffer and the indirect context during command scan; move the release process into intel_vgpu_destroy_workload() to ensure the resources are recycled properly.

Fixes: 0cce2823 ("drm/i915/gvt/kvmgt:Refine error handling for prepare_execlist_workload")
Reviewed-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Weinan Li <weinan.z.li@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
-
- 15 January 2019, 1 commit
-
-
Submitted by Chris Wilson

The majority of runtime-pm operations are bounded and scoped within a function; for these it is easy to verify that the wakerefs are handled correctly. We can employ the compiler to help us, and reduce the number of wakerefs tracked when debugging, by passing around cookies provided by the various rpm_get functions to their rpm_put counterpart. This makes the pairing explicit, and given the required wakeref cookie the compiler can verify that we pass an initialised value to the rpm_put (quite handy for double-checking error paths).

For regular builds, the compiler should be able to eliminate the unused local variables and the program growth should be minimal. Fwiw, it came out as a net improvement as gcc was able to refactor rpm_get and rpm_get_if_in_use together.

v2: Just s/rpm_put/rpm_put_unchecked/ everywhere, leaving the manual mark-up for smaller, more targeted patches.
v3: Mention the cookie in Returns

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Jani Nikula <jani.nikula@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190114142129.24398-2-chris@chris-wilson.co.uk
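The cookie idea in miniature, with simplified stand-in types: the get side returns an opaque token that the put side must be handed back, making each acquire/release pair explicit to the reader and the compiler. These helpers are illustrative, not the i915 runtime-pm API.

#include <assert.h>
#include <stdint.h>

typedef uintptr_t wakeref_t;

struct runtime_pm { int count; };

static wakeref_t rpm_get(struct runtime_pm *rpm)
{
    rpm->count++;
    return (wakeref_t)rpm;  /* a real implementation returns a tracking handle */
}

static void rpm_put(struct runtime_pm *rpm, wakeref_t wref)
{
    assert(wref);  /* catches error paths that never did the get */
    rpm->count--;
}

static void scoped_hw_access(struct runtime_pm *rpm)
{
    wakeref_t wref = rpm_get(rpm);

    /* ... touch the hardware while the wakeref is held ... */

    rpm_put(rpm, wref);  /* the cookie makes the pairing explicit */
}

int main(void)
{
    struct runtime_pm rpm = { 0 };

    scoped_hw_access(&rpm);
    return rpm.count;
}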
-