- 05 July 2018, 4 commits
-
-
Submitted by Chris Wilson
Currently, the wc-stash used for providing flushed WC pages ready for constructing the page directories is assumed to be protected by the struct_mutex. However, we want to remove this global lock and so must install a replacement global lock for accessing the global wc-stash (the per-vm stash continues to be guarded by the vm). We need to push ahead on this patch due to an oversight in hastily removing the struct_mutex guard around the igt_ppgtt_alloc selftest. No matter, it will prove very useful (i.e. will be required) in the near future. v2: Restore the onstack stash so that we can drop the vm->mutex in future across the allocation. v3: Restore the lost pagevec_init of the onstack allocation, and repaint function names. v4: Reorder init so that we don't try and use i915_address_space before it is initialised. Fixes: 1f6f0023 ("drm/i915/selftests: Drop struct_mutex around lowlevel pggtt allocation") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20180704185518.4193-1-chris@chris-wilson.co.uk
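A minimal sketch of the shape such a locked stash could take, assuming (as the changelog hints) a spinlock paired with a pagevec; the struct and helper names here are illustrative, not necessarily those used in the patch:

    struct pagestash {
        spinlock_t lock;
        struct pagevec pvec;
    };

    static void stash_init(struct pagestash *stash)
    {
        pagevec_init(&stash->pvec);
        spin_lock_init(&stash->lock);
    }

    static struct page *stash_pop_page(struct pagestash *stash)
    {
        struct page *page = NULL;

        spin_lock(&stash->lock);
        if (stash->pvec.nr)
            page = stash->pvec.pages[--stash->pvec.nr];
        spin_unlock(&stash->lock);

        return page;
    }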
-
Submitted by Ville Syrjälä
For whatever reason we only unmask and enable the master error interrupt on gen4. With the EIR handling fixed let's do that on gen2/3 as well. Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20180611200258.27121-4-ville.syrjala@linux.intel.com Reviewed-by: Imre Deak <imre.deak@intel.com>
-
Submitted by Ville Syrjälä
Adjust the EIR clearing to cope with the edge triggered IIR on i965/g4x. To guarantee an edge in the ISR master error bit we temporarily mask everything in EMR. As some of the EIR bits can't even be directly cleared we also borrow a trick from i915_clear_error_registers() and permanently mask any bit that remains high. No real thought given to how we might unmask them again once the cause for the error has been cleared. I suppose on pre-g4x GPU reset will reinitialize EMR from scratch. Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20180611200258.27121-3-ville.syrjala@linux.intel.com Reviewed-by: Imre Deak <imre.deak@intel.com>
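A hedged sketch of the sequence just described; EIR/EMR are the real gen2-4 error registers, but the helper name and exact flow are an assumption:

    static void i9xx_error_irq_ack(struct drm_i915_private *dev_priv,
                                   u32 *eir, u32 *eir_stuck)
    {
        u32 emr;

        *eir = I915_READ(EIR);
        I915_WRITE(EIR, *eir);          /* clear whatever we can */

        *eir_stuck = I915_READ(EIR);    /* bits still high are stuck */
        if (*eir_stuck == 0)
            return;

        /*
         * Briefly mask everything in EMR to force an edge in the
         * ISR master error bit, then leave the stuck bits masked.
         */
        emr = I915_READ(EMR);
        I915_WRITE(EMR, 0xffffffff);
        I915_WRITE(EMR, emr | *eir_stuck);
    }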
-
Submitted by Ville Syrjälä
Just like with PIPESTAT, the edge triggered IIR on i965/g4x also causes problems for hotplug interrupts. To make sure we don't get the IIR port interrupt bit stuck low with the ISR bit high we must force an edge in ISR. Unfortunately we can't borrow the PIPESTAT trick and toggle the enable bits in PORT_HOTPLUG_EN as that act itself generates hotplug interrupts. Instead we just have to loop until we've cleared PORT_HOTPLUG_STAT, or we just give up and WARN. v2: Don't frob with PORT_HOTPLUG_EN Cc: stable@vger.kernel.org Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20180614175625.1615-1-ville.syrjala@linux.intel.com Reviewed-by: Imre Deak <imre.deak@intel.com>
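A sketch of the bounded clear-until-zero loop described above; the retry count and warning text are assumptions:

    static u32 i9xx_hpd_irq_ack(struct drm_i915_private *dev_priv)
    {
        u32 hotplug_status = 0;
        int i;

        /* Force an edge in ISR by re-reading until the status clears. */
        for (i = 0; i < 10; i++) {
            u32 tmp = I915_READ(PORT_HOTPLUG_STAT);
            if (tmp == 0)
                return hotplug_status;

            hotplug_status |= tmp;
            I915_WRITE(PORT_HOTPLUG_STAT, tmp);
        }

        WARN_ONCE(1, "PORT_HOTPLUG_STAT did not clear (0x%08x)\n",
                  I915_READ(PORT_HOTPLUG_STAT));

        return hotplug_status;
    }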
-
- 04 July 2018, 2 commits
-
-
Submitted by Chris Wilson
For a ppgtt that we are constructing, there is no struct_mutex dependence so skip it. In the process, also ping the scheduler frequently to try and avoid the NMI watchdog. v2: gen6 requires struct_mutex to clean up (currently) Suggested-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> References: https://bugs.freedesktop.org/show_bug.cgi?id=107094 Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20180703135331.12265-1-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson
live_gtt is a very slow test to run, simply because it tries to allocate and use as much of the 48b address space as it possibly can and in the process will try to own all of the system memory. This leads to resource exhaustion and CPU starvation; the latter impacts us when the NMI watchdog declares a task hung due to a mutex contention with ourselves. This we can prevent by releasing the struct_mutex and forcing our i915/rcu workers to run, and in particular flushing the freed object worker that is the cause for concern. References: https://bugs.freedesktop.org/show_bug.cgi?id=107094 Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20180703101829.7360-1-chris@chris-wilson.co.uk
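The relief valve amounts to something like the following between allocation phases (placement is an assumption; i915_gem_drain_freed_objects() is the existing helper that flushes the freed-object worker and RCU):

    /* Give the background reapers a chance to run before continuing. */
    mutex_unlock(&i915->drm.struct_mutex);

    cond_resched();                      /* keep the NMI watchdog happy */
    i915_gem_drain_freed_objects(i915);  /* flush the free worker + RCU */

    mutex_lock(&i915->drm.struct_mutex);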
-
- 03 July 2018, 5 commits
-
-
Submitted by Tarun Vyas
The PIPEDSL freezes on PSR entry and if PSR hasn't fully exited, then the pipe_update_start call schedules itself out to check back later. On ChromeOS-4.4 kernel, which is fairly up-to-date w.r.t drm/i915 but lags w.r.t core kernel code, hot plugging an external display triggers tons of "potential atomic update errors" in the dmesg, on *pipe A*. A closer analysis reveals that we try to read the scanline 3 times and eventually timeout, b/c PSR hasn't exited fully leading to a PIPEDSL stuck @ 1599. This issue is not seen on upstream kernels, b/c for *some* reason we loop inside intel_pipe_update_start for ~2+ msec which in this case is more than enough to exit PSR fully, hence an *unstuck* PIPEDSL counter, hence no error. On the other hand, the ChromeOS kernel spends ~1.1 msec looping inside intel_pipe_update_start and hence errors out b/c the source is still in PSR. Regardless, we should wait for PSR exit (if PSR is disabled, we incur a ~1-2 usec penalty) before reading the PIPEDSL, b/c if we haven't fully exited PSR, then checking for vblank evasion isn't actually applicable. v4: Comment explaining psr_wait after enabling VBL interrupts (DK) v5: CAN_PSR() to handle platforms that don't support PSR. v6: Handle local_irq_disable on early return (Chris) Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch> Reviewed-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com> Signed-off-by: Tarun Vyas <tarun.vyas@intel.com> Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20180627200250.1515-2-tarun.vyas@intel.com
-
Submitted by Tarun Vyas
This is a lockless version of the existing psr_wait_for_idle(). We want to wait for PSR to idle out inside intel_pipe_update_start. At the time of a pipe update, we should never race with any psr enable or disable code, which is a part of crtc enable/disable. The follow up patch will use this lockless wait inside pipe_update_start to wait for PSR to idle out before checking for vblank evasion. We need to keep the wait in pipe_update_start as short as it can be. So, we can live and flourish w/o taking any psr locks at all. Even if psr is never enabled, psr2_enabled will be false and this function will wait for PSR1 to idle out, which should just return immediately, so a very short (~1-2 usec) wait for cases where PSR is disabled. v2: Add comment to explain the 25msec timeout (DK) v3: Rename psr_wait_for_idle to __psr_wait_for_idle_locked to avoid naming conflicts and propagate err (if any) to the caller (Chris) v5: Form a series with the next patch v7: Better explain the need for lockless wait and increase the max timeout to handle refresh rates < 60 Hz (Daniel Vetter) v8: Rebase Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch> Reviewed-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com> Signed-off-by: Tarun Vyas <tarun.vyas@intel.com> Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20180627200250.1515-1-tarun.vyas@intel.com
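A hedged sketch of the lockless variant built on intel_wait_for_register(); the function name and the 50 ms bound (reflecting the v7 note about sub-60 Hz refresh rates) are assumptions:

    static int psr_wait_for_idle(struct drm_i915_private *dev_priv)
    {
        i915_reg_t reg;
        u32 mask;

        if (dev_priv->psr.psr2_enabled) {
            reg = EDP_PSR2_STATUS;
            mask = EDP_PSR2_STATUS_STATE_MASK;
        } else {
            reg = EDP_PSR_STATUS;
            mask = EDP_PSR_STATUS_STATE_MASK;
        }

        /*
         * No psr lock taken: a pipe update cannot race with PSR
         * enable/disable, which is part of crtc enable/disable.
         */
        return intel_wait_for_register(dev_priv, reg, mask, 0, 50);
    }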
-
Submitted by Dhinakaran Pandiyan
There is already a check to allow only RGB8888 formats with CCS modifiers. Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20180628061854.6430-1-dhinakaran.pandiyan@intel.com
-
Submitted by Vathsala Nagaraju
Prints live state of psr1. Extends the existing PSR2 live state function to cover psr1. Tested on KBL with psr2 and psr1 panel. v2: rebase v3: DK Rename psr2_live_status to psr_source_status. v4: DK Move EDP_PSR_STATUS_STATE_SHIFT below EDP_PSR_STATUS_STATE_MASK. Pass seq to psr_source_status, handle source status prints in psr_source_status. v5: Fixed CI warning messages v6: Remove extra space in the title before the colon. (DK) Rebase. (Jani) v7: Use tabs for indenting the values. (Jani) v8: Addressed DK's review comments. Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> Cc: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com> Reviewed-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com> Signed-off-by: Vathsala Nagaraju <vathsala.nagaraju@intel.com> Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/1530086910-15914-1-git-send-email-vathsala.nagaraju@intel.com
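A sketch of what the extended helper might look like (PSR1 path shown, using the EDP_PSR_STATUS_STATE_MASK/SHIFT names from the changelog; the state-name table is abbreviated and illustrative):

    static void psr_source_status(struct drm_i915_private *dev_priv,
                                  struct seq_file *m)
    {
        static const char * const live_status[] = {
            "IDLE", "SRDONACK", "SRDENT", "BUFOFF",
            "BUFON", "AUXACK", "SRDOFFACK",
        };
        const char *status = "unknown";
        u32 val;

        val = (I915_READ(EDP_PSR_STATUS) & EDP_PSR_STATUS_STATE_MASK) >>
              EDP_PSR_STATUS_STATE_SHIFT;
        if (val < ARRAY_SIZE(live_status))
            status = live_status[val];

        seq_printf(m, "Source PSR status: 0x%x [%s]\n", val, status);
    }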
-
Submitted by Chris Wilson
If the whole object is already pinned by HW for use as scanout, we will fail to move it to the mappable region and so must resort to using a partial VMA covering the whole object. Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=104513 Fixes: aa136d9d ("drm/i915: Convert partial ggtt vma to full ggtt if it spans the entire object") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Matthew Auld <matthew.william.auld@gmail.com> Reviewed-by: Matthew Auld <matthew.william.auld@gmail.com> Link: https://patchwork.freedesktop.org/patch/msgid/20180630090509.469-1-chris@chris-wilson.co.uk
-
- 02 July 2018, 1 commit
-
-
Submitted by Jani Nikula
Try to describe what the pick variants do, and which to prefer. No functional changes. Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Signed-off-by: Jani Nikula <jani.nikula@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20180629102039.2435-1-jani.nikula@intel.com
-
- 30 June 2018, 3 commits
-
-
Submitted by Michal Wajdeczko
While debugging we may want to examine params passed to GuC. v2: drop #ifdef DEBUG_GUC - Michal Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com> Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Michel Thierry <michel.thierry@intel.com> Reviewed-by: Michel Thierry <michel.thierry@intel.com> #1 Cc: Michal Winiarski <michal.winiarski@intel.com> Reviewed-by: Michał Winiarski <michal.winiarski@intel.com> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20180618111821.47088-1-michal.wajdeczko@intel.com
-
Submitted by Chris Wilson
make_obj_busy() makes a dummy busy object, but did not attach the fence to the reservation object, so it would not have registered as busy. For completeness, attach the dummy request as the exclusive fence and mark the object as written (in i915_vma_move_to_active) Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20180629133717.11761-2-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson
We correctly attach the exclusive fence for the scratch object when emitting a request that writes into it, but for completeness we should also declare the write to i915_vma_move_to_active() Reported-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20180629133717.11761-1-chris@chris-wilson.co.uk
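The declaration boils down to passing EXEC_OBJECT_WRITE when moving the vma to the active list; a minimal sketch, where emit_store_dw() is a hypothetical emitter standing in for the selftest's batch:

    /* The request writes into the scratch vma... */
    err = emit_store_dw(rq, scratch, value);    /* hypothetical helper */
    if (err)
        return err;

    /* ...so declare the write; this attaches the exclusive fence. */
    i915_vma_move_to_active(scratch, rq, EXEC_OBJECT_WRITE);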
-
- 29 June 2018, 23 commits
-
-
Submitted by Maarten Lankhorst
The only time we should start FBC is when we have waited a vblank after the atomic update. We've already forced a vblank wait by doing wait_for_flip_done before intel_post_plane_update(), so we don't need to wait a second time before enabling. Removing the worker simplifies the code and removes possible race conditions, like the one seen in bug 103167. Cc: Paulo Zanoni <paulo.r.zanoni@intel.com> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=103167 Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20180625163758.10871-2-maarten.lankhorst@linux.intel.com Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
-
Submitted by Maarten Lankhorst
There is a small race window in which FBC can be enabled after pre_plane_update is called, but before the page flip has been queued or completed. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=103167 Link: https://patchwork.freedesktop.org/patch/msgid/20180625163758.10871-1-maarten.lankhorst@linux.intel.com Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
-
Submitted by Chris Wilson
Back in commit 27af5eea ("drm/i915: Move execlists irq handler to a bottom half"), we came to the conclusion that running our CSB processing and ELSP submission from inside the irq handler was a bad idea. A really bad idea as we could impose nearly 1s latency on other users of the system, on average! Deferring our work to a tasklet allowed us to do the processing with irqs enabled, reducing the impact to an average of about 50us. We have since eradicated the use of forcewaked mmio from inside the CSB processing and ELSP submission, bringing the impact down to around 5us (on Kabylake); an order of magnitude better than our measurements 2 years ago on Broadwell and only about 2x worse on average than the gem_syslatency on an unladen system. In this iteration of the tasklet-vs-direct submission debate, we seek a compromise whereby we submit new requests immediately to the HW but defer processing the CS interrupt onto a tasklet. We gain the advantage of low-latency and ksoftirqd avoidance when waking up the HW, while avoiding the system-wide starvation of our CS irq-storms. Comparing the impact on the maximum latency observed (that is the time stolen from an RT process) over a 120s interval, repeated several times (using gem_syslatency, similar to RT's cyclictest) while the system is fully laden with i915 nops, we see that direct submission can actually improve the worst case.

Maximum latency in microseconds of a third party RT thread (gem_syslatency -t 120 -f 2)
  x Always using tasklets (a couple of >1000us outliers removed)
  + Only using tasklets from CS irq, direct submission of requests
  [ASCII distribution plot omitted]
        N    Min  Max  Median  Avg        Stddev
  x     118  91   186  124     125.28814  16.279137
  +     120  92   187  109     112.00833  13.458617
  Difference at 95.0% confidence
        -13.2798 +/- 3.79219
        -10.5994% +/- 3.02677%
        (Student's t, pooled s = 14.9237)

However the mean latency is adversely affected:

Mean latency in microseconds of a third party RT thread (gem_syslatency -t 120 -f 1)
  x Always using tasklets
  + Only using tasklets from CS irq, direct submission of requests
  [ASCII distribution plot omitted]
        N    Min    Max    Median  Avg        Stddev
  x     120  3.506  3.727  3.631   3.6321417  0.02773109
  +     120  3.834  4.149  4.039   4.0375167  0.041221676
  Difference at 95.0% confidence
        0.405375 +/- 0.00888913
        11.1608% +/- 0.244735%
        (Student's t, pooled s = 0.03513)

However, since the mean latency corresponds to the amount of irqsoff processing we have to do for a CS interrupt, we only need to speed that up to benefit not just system latency but our own throughput. v2: Remember to defer submissions when under reset. v4: Only use direct submission for new requests v5: Be aware that with mixing direct tasklet evaluation and deferred tasklets, we may end up idling before running the deferred tasklet. v6: Remove the redundant likely() from tasklet_is_enabled(), restrict the annotation to reset_in_progress(). v7: Take the full timeline.lock when enabling perf_pmu stats as the tasklet is no longer a valid guard. A consequence is that the stats are now only valid for engines also using the timeline.lock to process state. Testcase: igt/gem_exec_latency/*rthog* References: 27af5eea ("drm/i915: Move execlists irq handler to a bottom half") Suggested-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-9-chris@chris-wilson.co.uk
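A heavily simplified sketch of the compromise described above: submission happens directly under the engine timeline lock, while CSB processing stays on the tasklet. reset_in_progress() and tasklet_is_enabled() come from the changelog; the rest is illustrative:

    static void __submit_queue_imm(struct intel_engine_cs *engine)
    {
        struct intel_engine_execlists * const execlists = &engine->execlists;

        if (reset_in_progress(execlists))
            return; /* queue, but defer submission until after reset */

        /*
         * Submit directly for low latency (and to avoid waking
         * ksoftirqd), unless the tasklet has been disabled, e.g.
         * while wedging, in which case let it do the work later.
         */
        if (tasklet_is_enabled(&execlists->tasklet))
            __execlists_submission_tasklet(engine); /* direct, irqs off */
        else
            tasklet_hi_schedule(&execlists->tasklet);
    }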
-
Submitted by Chris Wilson
Now that we use the CSB stored in the CPU friendly HWSP, we do not need to track interrupts for when the mmio CSB registers are valid and can just check where we read up to last from the cached HWSP. This means we can forgo the atomic bit tracking from interrupt, and in the next patch it means we can check the CSB at any time. v2: Change the splitting inside reset_prepare, we only want to lose testing the interrupt in this patch, the next patch requires the change in locking Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-8-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson
As we now never read back our current head position from the CSB pointers register, and the HW itself doesn't use it to prevent overwriting unread CSB entries, we do not need to keep updating the register. As it turns out this register is not listed as being shadowed, and so requires forcewake -- but we haven't been taking forcewake around it so the writes have probably been regularly dropped. Fortuitously, we only read the value after a reset where it did not matter, and zero was the right answer (well, close enough). Mika pointed out that this was how we used to do it (accidentally!) before he fixed it in commit cc53699b ("drm/i915: Use masked write for Context Status Buffer Pointer"). References: cc53699b ("drm/i915: Use masked write for Context Status Buffer Pointer") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-7-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson
On HW reset, the HW clears the write pointer (to 0). But since it also writes its first CSB entry to slot 0, we need to reset the write pointer back to the element before (so the first entry we read is 0). This is required for the next patch, where we trust the CSB completely! v2: Use _MASKED_FIELD v3: Store the reset value, so that we differentiate between mmio/hwsp transparently and without pretense. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-6-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson
Following the removal of the last workarounds, the only CSB mmio access is for the old vGPU interface. The mmio registers presented by vGPU do not require forcewake and can be treated as ordinary volatile memory, i.e. they behave just like the HWSP access just at a different location. We can reduce the CSB access to a set of read/write/buffer pointers and treat the various paths identically and not worry about forcewake. (Forcewake is a nightmare for worst-case latency, and we want to process this all with irqsoff -- no latency allowed!) v2: Comments, comments, comments. Well, 2 bonus comments. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-5-chris@chris-wilson.co.uk
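Conceptually the CSB access then reduces to a trio of pointers that behave the same whether backed by the HWSP or by vGPU's mmio-like memory. A sketch (field names are assumptions; real CSB entries are two dwords each, condensed to one here):

    struct intel_engine_execlists {
        /* ... */
        const u32 *csb_status;  /* the CSB entries (HWSP or vGPU backed) */
        u32 *csb_write;         /* where HW publishes its write pointer */
        u8 csb_head;            /* last entry we have processed */
    };

    static void process_csb(struct intel_engine_cs *engine)
    {
        struct intel_engine_execlists * const execlists = &engine->execlists;
        u8 head = execlists->csb_head;
        const u8 tail = READ_ONCE(*execlists->csb_write);

        while (head != tail) {
            head = (head + 1) % GEN8_CSB_ENTRIES;
            handle_csb_event(engine, execlists->csb_status[head]); /* hypothetical */
        }

        execlists->csb_head = head;
    }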
-
Submitted by Chris Wilson
In the next patch, we will process the CSB events directly from the submission path, rather than only after a CS interrupt. Hence, we will no longer have the need for a loop until the has-interrupt bit is clear, and in the meantime can remove that small optimisation. v2: Tvrtko pointed out it was safer to unconditionally kick the tasklet after each irq, when assuming that the tasklet is called for each irq. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-4-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson
In the following patch, we will process the CSB events under the timeline.lock and not serialised by the tasklet. This also means that we will need to protect access to common variables such as execlists->csb_head with the timeline.lock during reset. v2: Move sync_irq to avoid deadlocks between taking timeline.lock from our interrupt handler. v3: Kill off the synchronize_hardirq as it raises more questions than answers; now we use the timeline.lock entirely for CSB serialisation between the irq and elsewhere, we don't need to be so heavy handed with flushing v4: Treat request cancellation (wedging after failed reset) similarly Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-3-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson
In the next patch, we will begin processing the CSB from inside the submission path (underneath an irqsoff section, and even from inside interrupt handlers). This means that updating the execlists->port[] will no longer be serialised by the tasklet but needs to be locked by the engine->timeline.lock instead. Pull dequeue and submit under the same lock for protection. (An alternate future plan is to keep the in/out arrays separate for concurrent processing and reduced lock coverage.) Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-2-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson
We do not need to do a posting read of our uncached mmio write to re-enable the master interrupt lines after handling an interrupt, so don't. This saves us a slow UC read before we can process the interrupt, most noticeable in execlists where any stall imposes extra latency on GPU command execution. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Ville Syrjala <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-1-chris@chris-wilson.co.uk
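The change amounts to dropping the posting read after the master-enable write, e.g. on gen8 (illustrative; the patch touches the equivalent paths across generations):

    /* before */
    I915_WRITE_FW(GEN8_MASTER_IRQ, GEN8_MASTER_IRQ_CONTROL);
    POSTING_READ_FW(GEN8_MASTER_IRQ);   /* slow UC read stalls the handler */

    /* after */
    I915_WRITE_FW(GEN8_MASTER_IRQ, GEN8_MASTER_IRQ_CONTROL);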
-
Submitted by Michal Wajdeczko
We're fetching GuC/HuC firmwares directly from uc level during init_early stage but this breaks guc/huc struct isolation and also strict SW-only initialization rule for init_early. Move fw fetching to init phase and do it separately per guc/huc struct. v2: don't forget to move wopcm_init - Michele v3: fetch in init_misc phase - Michal Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com> Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Michel Thierry <michel.thierry@intel.com> Reviewed-by: Michel Thierry <michel.thierry@intel.com> #2 Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20180628141522.62788-2-michal.wajdeczko@intel.com
-
Submitted by Michal Wajdeczko
We will add more init steps to misc phase and there is no need to expose them separately for use in uc_init_misc function. Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com> Cc: Michel Thierry <michel.thierry@intel.com> Reviewed-by: Michel Thierry <michel.thierry@intel.com> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20180628141522.62788-1-michal.wajdeczko@intel.com
-
Submitted by Chris Wilson
Avoid calling dma_fence_signal() from inside the interrupt if we haven't enabled signaling on the request. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20180627201304.15817-4-chris@chris-wilson.co.uk
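The guard is the standard dma-fence test for whether anyone has enabled signaling, along the lines of:

    /* Only signal from the interrupt if someone is listening. */
    if (test_bit(DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT, &rq->fence.flags))
        dma_fence_signal(&rq->fence);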
-
Submitted by Chris Wilson
Rather than have multiple locked instructions inside the notify_ring() irq handler, move them inside the spinlock and reduce their intrinsic locking. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20180627201304.15817-3-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson
If we have more interrupts pending (because we know there are more breadcrumb signals before the completion), then we do not need to trigger an irq_seqno_barrier or even wakeup the task on this interrupt as there will be another. To allow some margin of error (we are trying to work around incoherent seqno after all), we wakeup the breadcrumb before the target as well as on the target. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20180627201304.15817-2-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson
By taking advantage of the RCU protection of the task struct, we can find the appropriate signaler under the spinlock and then release the spinlock before waking the task and signaling the fence. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20180627201304.15817-1-chris@chris-wilson.co.uk
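The pattern relies on task_struct being RCU-freed, so a reference found under the spinlock stays valid after the lock is dropped. A sketch, with a hypothetical lookup helper standing in for the rbtree walk:

    struct task_struct *tsk = NULL;

    rcu_read_lock();    /* task_struct is RCU-freed; this pins it */

    spin_lock_irq(&b->rb_lock);
    wait = breadcrumbs_lookup_signaler(b, seqno); /* hypothetical */
    if (wait)
        tsk = wait->tsk;
    spin_unlock_irq(&b->rb_lock);

    /* Wake outside the spinlock, shrinking the critical section. */
    if (tsk)
        wake_up_process(tsk);

    rcu_read_unlock();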
-
Submitted by Chris Wilson
At the moment, gem_exec_gttfill fails with a sporadic EBUSY due to us wanting to unbind a pinned batch. Let's dump who first bound that vma to see if that helps us identify who still unexpectedly has it pinned. v2: We cannot allocate inside the printer (as it may be on an fs-reclaim path), so hope for the best and build the string on the stack v3: stack depth of 16 routinely overflows a 512 character string, limit it to 12 to avoid unsightly truncation. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20180628132206.8329-1-chris@chris-wilson.co.uk
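Since the printer may run under fs-reclaim, the trace is rendered into a stack buffer using the stacktrace helpers of that era. A sketch (capture shown inline for brevity; in the patch the trace would be recorded at first bind and printed later):

    unsigned long entries[12];      /* depth 12 fits a 512-char buffer */
    struct stack_trace trace = {
        .entries = entries,
        .max_entries = ARRAY_SIZE(entries),
    };
    char buf[512];

    save_stack_trace(&trace);
    snprint_stack_trace(buf, sizeof(buf), &trace, 0);
    pr_err("vma first pinned at:\n%s", buf);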
-
Submitted by Thomas Zimmermann
This patch unifies the naming of DRM functions for reference counting of struct drm_device. The resulting code is more aligned with the rest of the Linux kernel interfaces. Signed-off-by: Thomas Zimmermann <tdz@users.sourceforge.net> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: https://patchwork.freedesktop.org/patch/msgid/20180618110154.30462-6-tdz@users.sourceforge.net
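Presumably this is the kernel-wide get/put convention replacing the older ref/unref spellings, e.g.:

    /* old */
    drm_dev_unref(dev);

    /* new, matching kref-style naming used across the kernel */
    drm_dev_put(dev);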
-
Submitted by Thomas Zimmermann
This patch unifies the naming of DRM functions for reference counting of struct drm_gem_object. The resulting code is more aligned with the rest of the Linux kernel interfaces. Signed-off-by: Thomas Zimmermann <tdz@users.sourceforge.net> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: https://patchwork.freedesktop.org/patch/msgid/20180618110154.30462-5-tdz@users.sourceforge.net
-
Submitted by Thomas Zimmermann
This patch unifies the naming of DRM functions for reference counting of struct drm_gem_object. The resulting code is more aligned with the rest of the Linux kernel interfaces. Signed-off-by: Thomas Zimmermann <tdz@users.sourceforge.net> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: https://patchwork.freedesktop.org/patch/msgid/20180618110154.30462-4-tdz@users.sourceforge.net
-
Submitted by Thomas Zimmermann
This patch unifies the naming of DRM functions for reference counting of struct drm_gem_object. The resulting code is more aligned with the rest of the Linux kernel interfaces. Signed-off-by: Thomas Zimmermann <tdz@users.sourceforge.net> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: https://patchwork.freedesktop.org/patch/msgid/20180618110154.30462-3-tdz@users.sourceforge.net
-
Submitted by Thomas Zimmermann
This patch unifies the naming of DRM functions for reference counting of struct drm_connector. The resulting code is more aligned with the rest of the Linux kernel interfaces. Signed-off-by: Thomas Zimmermann <tdz@users.sourceforge.net> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: https://patchwork.freedesktop.org/patch/msgid/20180618110154.30462-2-tdz@users.sourceforge.net
-
- 28 June 2018, 2 commits
-
-
Submitted by Anusha Srivatsa
This patch addresses interrupts from the south display engine (SDE). ICP has two registers - SHOTPLUG_CTL_DDI and SHOTPLUG_CTL_TC. Introduce these registers and their intended values. Introduce icp_irq_handler(). The icp_irq_postinstall() takes care of enabling all PCH interrupt sources, to unmask them as needed with SDEIMR, as is done by ibx_irq_pre_postinstall() for earlier platforms. We do not need to explicitly call the ibx_irq_pre_postinstall(). Also, while changing these, s/CPT/PPT/ in the CPT-CNP comment. v2: - remove redundant register defines. (Lucas) - Change register names to be more consistent with previous platforms (Lucas) v3: - Reorder bit defines to a more appropriate location. Change the comments. Confirm in the commit message that icp_irq_postinstall() need not go to ibx_irq_pre_postinstall() and ibx_irq_postinstall() as in earlier platforms. (Paulo) Cc: Lucas De Marchi <lucas.de.marchi@gmail.com> Cc: Paulo Zanoni <paulo.r.zanoni@intel.com> Cc: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com> Cc: Ville Syrjala <ville.syrjala@linux.intel.com> Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com> Signed-off-by: Anusha Srivatsa <anusha.srivatsa@intel.com> [Paulo: coding style bikesheds and rebases]. Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/1530046343-30649-1-git-send-email-anusha.srivatsa@intel.com
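A skeletal sketch of the handler shape; SHOTPLUG_CTL_DDI/SHOTPLUG_CTL_TC come from the changelog, while the SDE_*_ICP mask names and the decode details are assumptions:

    static void icp_irq_handler(struct drm_i915_private *dev_priv, u32 pch_iir)
    {
        u32 ddi_hotplug_trigger = pch_iir & SDE_DDI_MASK_ICP; /* assumed mask */
        u32 tc_hotplug_trigger = pch_iir & SDE_TC_MASK_ICP;   /* assumed mask */

        if (ddi_hotplug_trigger) {
            u32 dig_hotplug_reg = I915_READ(SHOTPLUG_CTL_DDI);
            I915_WRITE(SHOTPLUG_CTL_DDI, dig_hotplug_reg);    /* ack */
            /* ...decode long/short pulses per combo-phy port... */
        }

        if (tc_hotplug_trigger) {
            u32 dig_hotplug_reg = I915_READ(SHOTPLUG_CTL_TC);
            I915_WRITE(SHOTPLUG_CTL_TC, dig_hotplug_reg);     /* ack */
            /* ...decode long/short pulses per Type-C port... */
        }
    }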
-
Submitted by Chris Wilson
In the next^W forthcoming patch, we will start to defer retiring the request from the engine list if it is still active on the submission backend. To preserve the semantics that after wait-for-idle completes the system is idle and fully retired, we need to therefore wait for the backends to idle before calling i915_retire_requests(). Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20180627115334.16282-1-chris@chris-wilson.co.uk
-