- 02 Jul 2016, 1 commit
-
-
By Chris Wilson

One particularly stressful scenario consists of many independent tasks all competing for GPU time and waiting upon the results (e.g. realtime transcoding of many, many streams). One bottleneck in particular is that each client waits on its own results, but every client is woken up after every batchbuffer - hence the thunder of hooves as every client must then do its heavyweight dance to read a coherent seqno to see if it is the lucky one.

Ideally, we only want one client to wake up after the interrupt and check its request for completion. Since the requests must retire in order, we can select the first client on the oldest request to be woken. Once that client has completed its wait, we can then wake up the next client and so on. However, all clients then incur latency as every process in the chain may be delayed for scheduling - this may also then cause some priority inversion. To reduce the latency, when a client is added or removed from the list, we scan the tree for completed seqno and wake up all the completed waiters in parallel.

Using igt/benchmarks/gem_latency, we can demonstrate this effect. The benchmark measures the number of GPU cycles between completion of a batch and the client waking up from a call to wait-ioctl. With many concurrent waiters, each on a different request, we observe that the wakeup latency before the patch scales nearly linearly with the number of waiters (before external factors kick in making the scaling much worse). After applying the patch, we can see that only the single waiter for the request is being woken up, providing a constant wakeup latency for every operation. However, the situation is not quite as rosy for many waiters on the same request, though to the best of my knowledge this is much less likely in practice. Here, we can observe that the concurrent waiters incur extra latency from being woken up by the solitary bottom-half rather than directly by the interrupt. This appears to be scheduler-induced (having discounted adverse effects from having an rbtree walk/erase in the wakeup path); each additional wake_up_process() costs approximately 1us on a big core. Another effect of performing the secondary wakeups from the first bottom-half is the delay this imposes on high-priority threads - rather than immediately returning to userspace and leaving the interrupt handler to wake the others.

To offset the delay incurred with additional waiters on a request, we could use a hybrid scheme that did a quick read in the interrupt handler and dequeued all the completed waiters (incurring the overhead in the interrupt handler, not the best plan either as we then incur GPU submission latency), but we would still have to wake up the bottom-half every time to do the heavyweight slow read. Or we could only kick the waiters on the seqno with the same priority as the current task (i.e. in the realtime-waiter scenario, only it is woken up immediately by the interrupt and simply queues the next waiter before returning to userspace, minimising its delay at the expense of the chain, and also reducing contention on its scheduler runqueue). This is effective at avoiding long pauses in the interrupt handler and at avoiding the extra latency in realtime/high-priority waiters.

v2: Convert from a kworker per engine into a dedicated kthread for the bottom-half.
v3: Rename request members and tweak comments.
v4: Use a per-engine spinlock in the breadcrumbs bottom-half.
v5: Fix race in locklessly checking waiter status and kicking the task on adding a new waiter.
v6: Fix deciding when to force the timer to hide missing interrupts.
v7: Move the bottom-half from the kthread to the first client process.
v8: Reword a few comments.
v9: Break the busy loop when the interrupt is unmasked or has fired.
v10: Comments, unnecessary churn, better debugging from Tvrtko.
v11: Wake all completed waiters on removing the current bottom-half to reduce the latency of waking up a herd of clients all waiting on the same request.
v12: Rearrange missed-interrupt fault injection so that it works with igt/drv_missed_irq_hang.
v13: Rename intel_breadcrumb and friends to intel_wait in preparation for signal handling.
v14: RCU commentary, assert_spin_locked.
v15: Hide BUG_ON behind the compiler; report on gem_latency findings.
v16: Sort seqno-groups by priority so that the first waiter has the highest task priority (and so avoid priority inversion).
v17: Add waiters to post-mortem GPU hang state.
v18: Return early for a completed wait after acquiring the spinlock. Avoids adding ourselves to the tree if the wait is already complete, and skips the awkward question of why we don't do completion wakeups for waits earlier than or equal to ourselves.
v19: Prepare for init_breadcrumbs to fail. Later patches may want to allocate during init, so be prepared to propagate back the error code.

Testcase: igt/gem_concurrent_blit
Testcase: igt/benchmarks/gem_latency
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: "Rogozhkin, Dmitry V" <dmitry.v.rogozhkin@intel.com>
Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Cc: Dave Gordon <david.s.gordon@intel.com>
Cc: "Goel, Akash" <akash.goel@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> #v18
Link: http://patchwork.freedesktop.org/patch/msgid/1467390209-3576-6-git-send-email-chris@chris-wilson.co.uk
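
To illustrate the scheme described above (this is a minimal sketch of the idea, not the driver's actual intel_breadcrumbs code; the structure and helper names are simplified assumptions): waiters sit in a seqno-ordered rbtree, the interrupt handler wakes only the oldest waiter (the bottom-half), and that task hands over to its successor when it finishes.

```c
#include <linux/rbtree.h>
#include <linux/sched.h>
#include <linux/spinlock.h>
#include <linux/types.h>

struct waiter {
	struct rb_node node;
	struct task_struct *tsk;
	u32 seqno;
};

struct breadcrumbs {
	spinlock_t lock;
	struct rb_root waiters;		/* sorted by seqno */
	struct waiter *first_waiter;	/* current bottom-half, if any */
};

/* seqno comparison that tolerates wrap-around */
static inline bool seqno_passed(u32 seq1, u32 seq2)
{
	return (s32)(seq1 - seq2) >= 0;
}

/* register a waiter; if it becomes the oldest, it is the new bottom-half */
static void add_waiter(struct breadcrumbs *b, struct waiter *wait)
{
	struct rb_node **p = &b->waiters.rb_node, *parent = NULL;
	bool first = true;

	spin_lock(&b->lock);
	while (*p) {
		parent = *p;
		if (seqno_passed(rb_entry(parent, struct waiter, node)->seqno,
				 wait->seqno)) {
			p = &parent->rb_left;
		} else {
			p = &parent->rb_right;
			first = false;
		}
	}
	rb_link_node(&wait->node, parent, p);
	rb_insert_color(&wait->node, &b->waiters);
	if (first)
		b->first_waiter = wait;
	spin_unlock(&b->lock);
}

/* called from the user-interrupt handler: wake only the bottom-half */
static void irq_wake_bottom_half(struct breadcrumbs *b)
{
	struct waiter *wait = READ_ONCE(b->first_waiter);

	if (wait)
		wake_up_process(wait->tsk);
}

/* called by the bottom-half once its seqno has passed: hand over to the
 * next-oldest waiter, so that exactly one task polls the hardware seqno */
static void remove_waiter(struct breadcrumbs *b, struct waiter *wait)
{
	spin_lock(&b->lock);
	if (b->first_waiter == wait) {
		struct rb_node *next = rb_next(&wait->node);

		b->first_waiter =
			next ? rb_entry(next, struct waiter, node) : NULL;
		if (b->first_waiter)
			wake_up_process(b->first_waiter->tsk);
	}
	rb_erase(&wait->node, &b->waiters);
	spin_unlock(&b->lock);
}
```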
-
- 01 Jul 2016, 15 commits
-
-
By Chris Wilson

Just plonk all the default irq vfuncs together in one function to keep the initialisers of reasonable size.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1467361093-20209-2-git-send-email-chris@chris-wilson.co.uk
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
-
By Chris Wilson

Consolidate the block of default vfuncs for dispatching the batchbuffer. Just a minor tweak on top of Tvrtko's great job of tidying up the vfunc initialisation.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1467361093-20209-1-git-send-email-chris@chris-wilson.co.uk
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
-
By Tvrtko Ursulin

Just a bit of cleanup after the previous refactoring.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: http://patchwork.freedesktop.org/patch/msgid/1467212972-861-1-git-send-email-tvrtko.ursulin@linux.intel.com
-
By Tvrtko Ursulin

Replace per-engine initialization with common, half-programmatic, half-data-driven code for ease of maintenance and compactness.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
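
A plausible shape for this kind of half-data-driven setup is sketched below; the table contents, field names and offsets are illustrative assumptions, not the driver's actual definitions.

```c
#include <linux/kernel.h>	/* ARRAY_SIZE */
#include <linux/types.h>

/* Illustrative sketch: constant per-engine data lives in one table, and a
 * single loop applies the common setup to every engine. */
struct engine {
	const char *name;
	unsigned int hw_id;
	u32 mmio_base;
	unsigned int irq_shift;
	void (*write_tail)(struct engine *engine, u32 value);
};

static const struct {
	const char *name;
	unsigned int hw_id;
	u32 mmio_base;
	unsigned int irq_shift;
} engine_table[] = {
	/* placeholder values, not authoritative hardware offsets */
	{ "render",        0, 0x02000,  0 },
	{ "video",         1, 0x12000,  8 },
	{ "blitter",       2, 0x22000, 16 },
	{ "video enhance", 3, 0x1a000, 24 },
};

static void default_write_tail(struct engine *engine, u32 value) { /* mmio write */ }

static void setup_engines(struct engine *engines)
{
	unsigned int i;

	for (i = 0; i < ARRAY_SIZE(engine_table); i++) {
		struct engine *engine = &engines[i];

		/* data-driven part: copy the per-engine constants */
		engine->name      = engine_table[i].name;
		engine->hw_id     = engine_table[i].hw_id;
		engine->mmio_base = engine_table[i].mmio_base;
		engine->irq_shift = engine_table[i].irq_shift;

		/* programmatic part: shared defaults, overridden later only
		 * where an engine genuinely differs */
		engine->write_tail = default_write_tail;
	}
}
```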
-
By Tvrtko Ursulin

Store the semaphore offset in a temporary variable to avoid having to get the VMA offset twice.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
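
This is the classic "compute once, use twice" micro-cleanup; schematically it looks like the fragment below, where semaphore_ggtt_offset() and emit_dword() are purely hypothetical stand-ins for the real lookup and command-emission helpers.

```c
/* illustrative only: fetch the GGTT offset of the semaphore slot once */
u64 offset = semaphore_ggtt_offset(engine, target);	/* hypothetical helper */

emit_dword(ring, lower_32_bits(offset));
emit_dword(ring, upper_32_bits(offset));
```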
-
By Tvrtko Ursulin

Replace the macro initializer with a programmatic loop, which results in smaller code and is hopefully just as clear.

v2: Rebase.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
-
By Tvrtko Ursulin

The object needs to be created before semaphores can be initialized on any ring, and it makes sense to pull it out into this dedicated semaphore helper.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
-
By Tvrtko Ursulin

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
-
By Tvrtko Ursulin

v2: Put dispatch_execbuffer before add_request. (Chris Wilson)
v3: Fix add_request and irq_seqno_barrier for gen8+.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
-
By Tvrtko Ursulin

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
-
By Tvrtko Ursulin

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
-
By Tvrtko Ursulin

v2: Consistent INTEL_GEN vs IS_GEN usage. (Chris Wilson)

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
-
By Tvrtko Ursulin

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
-
By Tvrtko Ursulin

All engines apart from the render engine select this based on Gen, so move it to the common helper as well.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
-
By Tvrtko Ursulin

Introduce a function which initializes the vfuncs that are mostly common across engines, and move write_tail initialization into it, since only one engine overrides the default.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
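
The general shape of such a helper is sketched below; the struct fields and function names are placeholders rather than the driver's exact identifiers. One place sets every default, and each engine constructor overrides only what genuinely differs.

```c
#include <linux/types.h>

/* Illustrative sketch of a "common defaults first, overrides after" setup. */
struct engine_ops {
	void (*write_tail)(void *engine, u32 value);
	int  (*add_request)(void *request);
	int  (*dispatch_execbuffer)(void *request, u64 offset, u32 len);
};

static void default_write_tail(void *engine, u32 value) { /* plain mmio tail write */ }
static void special_write_tail(void *engine, u32 value) { /* e.g. extra flush first */ }
static int  default_add_request(void *request) { return 0; }
static int  default_dispatch(void *request, u64 offset, u32 len) { return 0; }

/* one helper sets every default vfunc... */
static void set_default_vfuncs(struct engine_ops *ops)
{
	ops->write_tail          = default_write_tail;
	ops->add_request         = default_add_request;
	ops->dispatch_execbuffer = default_dispatch;
}

/* ...and the one engine that differs overrides just its own entry */
static void init_quirky_engine(struct engine_ops *ops)
{
	set_default_vfuncs(ops);
	ops->write_tail = special_write_tail;
}
```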
-
- 30 Jun 2016, 4 commits
-
-
By Chris Wilson

Since we have a sequence of register reads and writes, we can reduce the latency of starting the BSD ring by performing all the mmio operations under the same forcewake wakeref.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1467297225-21379-62-git-send-email-chris@chris-wilson.co.uk
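
The shape of the optimisation is roughly the fragment below (a sketch from inside the gen6 BSD tail-write path, with dev_priv, engine and value assumed in scope; treat the exact register sequence as an assumption rather than the patch's literal diff). One explicit forcewake reference brackets the whole run of accesses, and the raw _FW accessors are used inside so each read/write does not take and drop forcewake on its own.

```c
intel_uncore_forcewake_get(dev_priv, FORCEWAKE_ALL);

I915_WRITE_FW(GEN6_BSD_SLEEP_PSMI_CONTROL,
	      _MASKED_BIT_ENABLE(GEN6_BSD_SLEEP_MSG_DISABLE));

/* ... poll / program further registers with I915_READ_FW / I915_WRITE_FW ... */

I915_WRITE_FW(RING_TAIL(engine->mmio_base), value);
(void)I915_READ_FW(RING_TAIL(engine->mmio_base));	/* posting read */

I915_WRITE_FW(GEN6_BSD_SLEEP_PSMI_CONTROL,
	      _MASKED_BIT_DISABLE(GEN6_BSD_SLEEP_MSG_DISABLE));

intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL);
```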
-
By Chris Wilson

By using the out-of-line intel_wait_for_register(), not only do we gain efficiency from using the hybrid wait_for() contained within, but we also avoid code bloat from the numerous inlined loops. In total (over all patches):

   text    data     bss     dec     hex filename
1078551    4557     416 1083524  108884 drivers/gpu/drm/i915/i915.ko
1070775    4557     416 1075748  106a24 drivers/gpu/drm/i915/i915.ko

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1467297225-21379-48-git-send-email-chris@chris-wilson.co.uk
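
Conceptually the conversion replaces an open-coded polling macro with the shared helper, along these lines. SOME_STATUS_REG and SOME_READY_BIT are placeholders, and the helper's parameter order (register, mask, value, timeout in ms) should be treated as an assumption about the API of that era.

```c
/* before: wait_for() expands a full polling loop at every call site */
if (wait_for((I915_READ(SOME_STATUS_REG) & SOME_READY_BIT) != 0, 10))
	DRM_ERROR("timed out waiting for SOME_STATUS_REG\n");

/* after: one shared, out-of-line helper */
if (intel_wait_for_register(dev_priv,
			    SOME_STATUS_REG, SOME_READY_BIT, SOME_READY_BIT,
			    10))
	DRM_ERROR("timed out waiting for SOME_STATUS_REG\n");
```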
-
By Chris Wilson

By using the out-of-line intel_wait_for_register(), not only do we gain efficiency from using the hybrid wait_for() contained within, but we also avoid code bloat from the numerous inlined loops. In total (over all patches):

   text    data     bss     dec     hex filename
1078551    4557     416 1083524  108884 drivers/gpu/drm/i915/i915.ko
1070775    4557     416 1075748  106a24 drivers/gpu/drm/i915/i915.ko

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1467297225-21379-47-git-send-email-chris@chris-wilson.co.uk
-
By Chris Wilson

By using the out-of-line intel_wait_for_register(), not only do we gain efficiency from using the hybrid wait_for() contained within, but we also avoid code bloat from the numerous inlined loops. In total (over all patches):

   text    data     bss     dec     hex filename
1078551    4557     416 1083524  108884 drivers/gpu/drm/i915/i915.ko
1070775    4557     416 1075748  106a24 drivers/gpu/drm/i915/i915.ko

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1467297225-21379-46-git-send-email-chris@chris-wilson.co.uk
-
- 24 Jun 2016, 2 commits
-
-
By Chris Wilson

The kernel context exists simply as a placeholder and should never be executed with a render context. It does not need the golden render state, as that will always be applied to a user context. By skipping the initialisation we can avoid issues in attempting to program the golden render context when trying to make the hardware idle.

v2: Rebase

Testcase: igt/drm_module_reload_basic #byt
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=95634
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1466776558-21516-3-git-send-email-chris@chris-wilson.co.uk
-
By Chris Wilson

This is so that we have symmetry with intel_lrc.c and avoid a source of "if (i915.enable_execlists)" layering violation within i915_gem_context.c - that is, we move the specific handling of dev_priv->kernel_context for legacy submission into the legacy submission code. This depends upon the init/fini ordering between contexts and engines already defined by intel_lrc.c, and also on exporting the context alignment required for pinning the legacy context.

v2: Separate out pin/unpin context funcs for greater symmetry with intel_lrc. One more step towards unifying behaviour between the two classes of engines and towards fixing another bug in i915_switch_context vs requests.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1466776558-21516-2-git-send-email-chris@chris-wilson.co.uk
-
- 14 Jun 2016, 1 commit
-
-
This is a WA affecting pooled EU, which is a bxt-specific feature.

Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Cc: Winiarski, Michal <michal.winiarski@intel.com>
Cc: Zou, Nanhai <nanhai.zou@intel.com>
Cc: Yang, Rong R <rong.r.yang@intel.com>
Cc: Jeff McGee <jeff.mcgee@intel.com>
Signed-off-by: Arun Siluvery <arun.siluvery@linux.intel.com>
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
-
- 13 Jun 2016, 1 commit
-
-
By Tim Gore

This patch enables a workaround for a mid-thread preemption issue where a hardware timing problem can prevent the context restore from happening, leading to a hang.

v2: move to gen9_init_workarounds (Arun)
v3: move to start of gen9_init_workarounds (Arun)

Signed-off-by: Tim Gore <tim.gore@intel.com>
Reviewed-by: Arun Siluvery <arun.siluvery@linux.intel.com>
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1465816501-25557-1-git-send-email-tim.gore@intel.com
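
Workarounds of this kind usually amount to setting a chicken bit from the engine's workaround setup, conceptually as in the fragment below. Gen chicken registers are "masked": the upper 16 bits of the write select which of the lower 16 bits actually change. The register and bit names here are placeholders, not the specific ones this patch programs.

```c
/* placeholder register and bit, applied from gen9_init_workarounds() */
#define HYP_CHICKEN_REG			_MMIO(0xe000)	/* hypothetical offset */
#define   HYP_PREEMPT_TIMING_FIX	(1 << 7)	/* hypothetical bit */

I915_WRITE(HYP_CHICKEN_REG, _MASKED_BIT_ENABLE(HYP_PREEMPT_TIMING_FIX));
```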
-
- 08 Jun 2016, 12 commits
-
-
By Mika Kuoppala

There is ambiguity in the documentation between D0 and E0. Extend this workaround to E0.

References: BSID#779
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1465309159-30531-23-git-send-email-mika.kuoppala@intel.com
-
By Mika Kuoppala

This is needed for all kbl revisions.

v2: Don't add revid checks to the generic gen9 init (Arun)

References: HSD#2135593
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1465309159-30531-21-git-send-email-mika.kuoppala@intel.com
-
By Mika Kuoppala

We need to disable clock gating in this unit to work around a hardware issue causing possible corruption/hang.

v2: name the bit (Ville)
v3: leave the fix enabled for 2227050 and set the correct bit (Matthew)
v4: Split out the skl part into a separate commit for easier backporting

References: HSD#2227156, HSD#2227050
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Matthew Auld <matthew.william.auld@gmail.com>
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1465309159-30531-20-git-send-email-mika.kuoppala@intel.com
-
By Mika Kuoppala

Add this workaround for both bxt and kbl, up to rev B0.

References: HSD#2136703
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1465309159-30531-16-git-send-email-mika.kuoppala@intel.com
-
By Mika Kuoppala

Bspec states that we need to turn off dynamic credit sharing on kbl revid A0 and B0. This is done by setting bit 28 of the register at 0x4ab8.

References: HSD#2225601, HSD#2226938, HSD#2225763
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1465309159-30531-15-git-send-email-mika.kuoppala@intel.com
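
As a sketch of how such a revid-gated workaround is typically wired into the KBL setup path: the offset and bit come from the text above, but the register name, the workaround tag and the exact revid macros are assumptions, and the real patch may record the write through the workaround list rather than a direct read-modify-write.

```c
/* placeholder name for the register at 0x4ab8, bit 28 per the note above */
#define HYP_GAMT_CHKN_BIT_REG			_MMIO(0x4ab8)
#define   HYP_DISABLE_DYNAMIC_CREDIT_SHARING	(1 << 28)

static int kbl_init_workarounds(struct intel_engine_cs *engine)
{
	struct drm_i915_private *dev_priv = engine->i915;
	int ret;

	/* the generic gen9 workarounds apply to kbl as well */
	ret = gen9_init_workarounds(engine);
	if (ret)
		return ret;

	/* disable dynamic credit sharing on early revisions only */
	if (IS_KBL_REVID(dev_priv, 0, KBL_REVID_B0))
		I915_WRITE(HYP_GAMT_CHKN_BIT_REG,
			   I915_READ(HYP_GAMT_CHKN_BIT_REG) |
			   HYP_DISABLE_DYNAMIC_CREDIT_SHARING);

	return 0;
}
```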
-
By Mika Kuoppala

Extend the scope of this workaround, already used on skl, to also take effect on kbl.

v2: Fix KBL_REVID_E0 (Matthew)

References: HSD#2132677
Cc: Matthew Auld <matthew.william.auld@gmail.com>
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1465309159-30531-12-git-send-email-mika.kuoppala@intel.com
-
By Mika Kuoppala

Add this workaround for kbl revid A0 only.

v2: rebase
v3: carve out an unrelated workaround (Chris)

References: HSD#1911714
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1465309159-30531-9-git-send-email-mika.kuoppala@intel.com
-
By Mika Kuoppala

We need this crucial workaround from skl on all kbl revisions as well. Lack of it was causing system hangs during skl enabling, so this is a must-have.

v2: Don't add revid checks to the gen9 init workarounds (Arun)

References: HSD#2126660
Cc: Arun Siluvery <arun.siluvery@linux.intel.com>
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1465309159-30531-8-git-send-email-mika.kuoppala@intel.com
-
By Mika Kuoppala

Past evidence with system hangs and hsds ties WaForceEnableNonCoherent and WaDisableHDCInvalidation to WaForceContextSaveRestoreNonCoherent. Documentation states that WaForceContextSaveRestoreNonCoherent would not be needed on skl past E0, but evidence proved otherwise. See commit <510650e8> ("drm/i915/skl: Fix spurious gpu hang with gt3/gt4 revs"). In this scope, consider kbl to be skl with a bigger revision than E0, so play it safe: bind these two workarounds to WaForceContextSaveRestoreNonCoherent and apply them to all gen9.

v2: fix comment (Matthew)

References: HSD#2134449, HSD#2131413
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1465309159-30531-7-git-send-email-mika.kuoppala@intel.com
-
By Mika Kuoppala

The revision id range for this workaround has changed, so apply it to all revids on all gen9.

References: HSD#2134449
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1465309159-30531-6-git-send-email-mika.kuoppala@intel.com
-
By Mika Kuoppala

Kabylake is part of the gen9 family, so initialize the generic gen9 workarounds for it.

v2: rebase

Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1465309159-30531-3-git-send-email-mika.kuoppala@intel.com
-
By Mika Kuoppala

We need to disable clock gating in this unit to work around a hardware issue causing possible corruption/hang.

v2: name the bit (Ville)
v3: leave the fix enabled for 2227050 and set the correct bit (Matthew)

References: HSD#2227156, HSD#2227050
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Matthew Auld <matthew.william.auld@gmail.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1465309159-30531-2-git-send-email-mika.kuoppala@intel.com
-
- 06 Jun 2016, 1 commit
-
-
The kernel only needs to add a register to the HW whitelist, required for a preemption-related issue.

Reference: HSD#2131039
Reviewed-by: Jeff McGee <jeff.mcgee@intel.com>
Signed-off-by: Arun Siluvery <arun.siluvery@linux.intel.com>
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1465203169-16591-1-git-send-email-arun.siluvery@linux.intel.com
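
"Whitelisting" a register means programming one of the engine's RING_FORCE_TO_NONPRIV slots with its offset so that unprivileged batch buffers may access it. The sketch below shows the general pattern; the helper name is illustrative, the slot is chosen by the caller here for simplicity, and the register being whitelisted is a placeholder.

```c
/* Illustrative sketch of whitelisting a register on one engine. */
static int whitelist_example_reg(struct intel_engine_cs *engine,
				 i915_reg_t reg, unsigned int slot)
{
	struct drm_i915_private *dev_priv = engine->i915;

	if (slot >= RING_MAX_NONPRIV_SLOTS)
		return -EINVAL;

	/* program the offset of 'reg' into the chosen force-to-nonpriv slot */
	I915_WRITE(RING_FORCE_TO_NONPRIV(engine->mmio_base, slot),
		   i915_mmio_reg_offset(reg));

	return 0;
}
```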
-
- 23 May 2016, 3 commits
-
-
By Chris Wilson

Following a GPU hang, we break out of the request loop in order to unlock the struct_mutex for use by the GPU reset. However, if we retire all the requests at that moment, we cannot identify the guilty request after performing the reset.

v2: Not automatically retiring requests forces us to recheck for available ringspace.

Fixes: f4457ae7 ("drm/i915: Prevent leaking of -EIO from i915_wait_request()")
Testcase: igt/gem_reset_stats/ban-*
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Tested-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1463137042-9669-4-git-send-email-chris@chris-wilson.co.uk
(cherry picked from commit e075a32f)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
-
By Chris Wilson

Combine the near-identical implementations of intel_logical_ring_begin() and intel_ring_begin() - the only difference is that the logical wait has to check for a matching ring (which is assumed by legacy). In the process some debug messages are culled, as they were following a WARN if we hit an actual error.

v2: Updated commentary

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1461833819-3991-12-git-send-email-chris@chris-wilson.co.uk
(cherry picked from commit 987046ad)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
-
By Chris Wilson

With the introduction of a distinct engine->id vs the hardware id, we need to fix up the value we use for selecting the target engine when signaling a semaphore. Note that these values can be merged with engine->guc_id.

Fixes: de1add36
Cc: stable@vger.kernel.org # v4.6
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1461932305-14637-3-git-send-email-chris@chris-wilson.co.uk
(cherry picked from commit 215a7e32)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
-