- 14 Jul 2015, 1 commit
-
-
Submitted by Tomas Elf

The hang checker needs to inspect whether or not the ring request list is empty, as well as whether the given engine has reached or passed the most recently submitted request. The problem is that the hang checker cannot grab the struct_mutex, which is required in order to safely inspect requests, since requests might be deallocated during inspection. In the past we've had kernel panics due to this very unsynchronized access in the hang checker.

One solution is to not inspect the requests directly, since we're only interested in the seqno of the most recently submitted request - not the request itself. Instead, the seqno of the most recently submitted request is stored separately and the hang checker inspects that, circumventing the synchronization issue entirely.

This fixes a regression introduced in

    commit 44cdd6d2
    Author: John Harrison <John.C.Harrison@Intel.com>
    Date:   Mon Nov 24 18:49:40 2014 +0000

        drm/i915: Convert 'ring_idle()' to use requests not seqnos

v2 (Chris Wilson):
- Pass current engine seqno to ring_idle() from i915_hangcheck_elapsed() rather than compute it over again.
- Remove extra whitespace.

Issue: VIZ-5998
Signed-off-by: Tomas Elf <tomas.elf@intel.com>
Cc: stable@vger.kernel.org
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
[danvet: Add regressing commit citation provided by Chris.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
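A minimal sketch of the idea (field and helper names are hypothetical, not the actual patch): record the seqno at submission time, while struct_mutex is already held, so the hang checker can later read it with a plain lock-free comparison.

	/* Sketch only: store the last submitted seqno so the hang checker
	 * never walks the request list. Names are illustrative. */
	struct engine_state {
		u32 last_submitted_seqno;	/* written on the submission path */
	};

	/* Submission path, struct_mutex held. */
	static void note_request_submitted(struct engine_state *engine, u32 seqno)
	{
		engine->last_submitted_seqno = seqno;
	}

	/* Hang checker: no struct_mutex, just a wrap-safe seqno comparison. */
	static bool ring_idle_sketch(const struct engine_state *engine, u32 seqno)
	{
		return (s32)(seqno - engine->last_submitted_seqno) >= 0;
	}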
-
- 27 May 2015, 1 commit
-
-
Submitted by Chris Wilson

In

    commit 1854d5ca
    Author: Chris Wilson <chris@chris-wilson.co.uk>
    Date:   Tue Apr 7 16:20:32 2015 +0100

        drm/i915: Deminish contribution of wait-boosting from clients

we removed an atomic timer based check for allowing waitboosting and moved it below the mutex taken during RPS. However, that mutex can be held for long periods of time on Valleyview/Cherryview, as communication with the PCU is slow. As clients may frequently wait for results (e.g. transform feedback) we introduced contention between the client and the RPS worker. We can take advantage of the RPS worker by switching the wait boost decision to use spin locks and deferring the actual reclocking to the worker.

Fixes a regression of up to 45% on Baytrail and Braswell!

v2 (Daniel):
- Use max_freq_softlimit instead of the not-yet-merged boost frequency.
- Don't inject a fake irq into the boost work, instead treat client_boost as just another legit waker.
v3: Drop the now unused mask (Chris).

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=90112
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> (v1)
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
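A hedged sketch of the approach (all names hypothetical): the stalled client only flags the boost under a spinlock and kicks the existing RPS worker, which then performs the slow PCU reclock under its own mutex.

	#include <linux/spinlock.h>
	#include <linux/workqueue.h>

	/* Sketch: the waiter never takes the RPS mutex itself. */
	struct rps_state {
		spinlock_t lock;		/* cheap, safe from wait paths */
		bool client_boost;		/* boost requested by a waiter */
		struct work_struct work;	/* existing RPS reclock worker */
	};

	static void client_boost_sketch(struct rps_state *rps)
	{
		spin_lock(&rps->lock);
		if (!rps->client_boost) {
			rps->client_boost = true;	/* just another legit waker */
			schedule_work(&rps->work);	/* reclock deferred to worker */
		}
		spin_unlock(&rps->lock);
	}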
-
- 21 May 2015, 1 commit
-
-
Submitted by Chris Wilson

If we have clients stalled waiting for requests, ignore the GPU if it signals that it should downclock due to low load. This helps prevent the automatic downclock timeout from making extremely long-running batches take even longer.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 20 May 2015, 2 commits
-
-
Submitted by Ville Syrjälä

Use HOTPLUG_INT_STATUS_G4X instead of HOTPLUG_INT_STATUS_I915 on VLV/CHV so that we don't confuse the AUX status bits with SDVO status bits. Avoid pointless log spam as below while handling AUX interrupts:

[drm:intel_hpd_irq_handler] hotplug event received, stat 0x00000040, dig 0x00000000
[drm:intel_hpd_irq_handler] hotplug event received, stat 0x00000040, dig 0x00000000
[drm:intel_hpd_irq_handler] hotplug event received, stat 0x00000040, dig 0x00000000
[drm:intel_hpd_irq_handler] hotplug event received, stat 0x00000040, dig 0x00000000
[drm:intel_hpd_irq_handler] hotplug event received, stat 0x00000040, dig 0x00000000
[drm:intel_dp_aux_ch] dp_aux_ch timeout status 0x71450064

Note that there's no functional issue, it's just that the SDVO bits overlap with the DP AUX bits. Hence every time we receive an AUX interrupt we also think there's an SDVO HPD interrupt, but due to the lack of any SDVO encoders nothing ever happens because of that.

Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
[danvet: Add Ville's explanation why nothing functional really changes.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Ville Syrjälä

Remove some inline keywords. One of the functions has clearly outgrown it anyway, so let's just leave it to the compiler.

Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 14 Apr 2015, 6 commits
-
-
Submitted by Daniel Vetter

We stopped handling them in

    commit aaecdf61
    Author: Daniel Vetter <daniel.vetter@ffwll.ch>
    Date:   Tue Nov 4 15:52:22 2014 +0100

        drm/i915: Stop gathering error states for CS error interrupts

but just clearing is apparently not enough: a sufficiently dead GPU left behind by firmware (*cough* coreboot *cough*) can keep the GPU in an endless loop of such interrupts, eventually leading to the NMI firing. And definitely to what looks like a machine hang.

Since we don't even enable these interrupts on gen5+, let's do the same on earlier platforms.

Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=93171
Tested-by: Mono <mono-for-kernel-org@donderklumpen.de>
Tested-by: info@gluglug.org.uk
Cc: stable@vger.kernel.org
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
-
Submitted by Shashank Sharma

The GMBUS interrupt has been moved to the CPU side in BXT. What this patch does is:
1. Enable the GMBUS IRQ in the de_post_install function.
2. Handle this interrupt as a port interrupt in the display irq handler.

v2: Rebase on top of the for_each_pipe() change adding dev_priv as first argument (Damien).
v3: Read the BXT_DE_PORT_GMBUS IIR flag only on BXT; on other platforms it's reserved (imre).
v4: (jani)
- remove redundant 'BXT GMBUS' comment
- fix formatting of BXT_DE_PORT_GMBUS definition

Reviewed-by: Satheeshakrishna M <satheeshakrishna.m@intel.com>
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Shashank Sharma <shashank.sharma@intel.com> (v1)
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Shashank Sharma

This patch adds conditional checks in the gen8_irq functions to support BXT. Most of the checks just look for PCH split availability, and block the call to the PCH interrupt functions if it is not available.

v2: (jani)
- drop redundant TODO comment about PCH IRQ flags on BXT
- check HAS_PCH_SPLIT instead of IS_BROXTON when handling PCH specific IRQ events in gen8_irq_handler()
- check HAS_PCH_SPLIT before calling the function instead of a corresponding early return within the called function for ibx_irq_reset(), ibx_irq_pre_postinstall(), ibx_irq_postinstall()
v3: (jani)
- in ironlake_irq_postinstall() and ironlake_irq_reset() HAS_PCH_SPLIT is always true, so drop the check for it

Reviewed-by: Satheeshakrishna M <satheeshakrishna.m@intel.com>
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Shashank Sharma <shashank.sharma@intel.com> (v1)
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
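The shape of the change, as a hedged sketch (the helper names are the ones listed above; the call sites shown are illustrative, not quotes of the patch):

	/* Sketch only: guard the PCH IRQ helpers so the shared gen8 paths
	 * also work on Broxton, which has no PCH. */
	if (HAS_PCH_SPLIT(dev))
		ibx_irq_reset(dev);		/* in the gen8 reset path */

	if (HAS_PCH_SPLIT(dev))
		ibx_irq_postinstall(dev);	/* in the gen8 postinstall path */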
-
Submitted by Shashank Sharma

This patch adds a hot plug interrupt handler function for BXT. What this function typically does is:
1. Check if hot plug is enabled from the hot plug control register.
2. Call hpd_irq_handler with the appropriate trigger to detect a plug storm and schedule a bottom half.
3. Clear the sticky status bits in the hot plug control register.

v2: (jani)
- drop redundant unlikely()
- s/Todo/FIXME:/ in code comment
- declare 'found' var in the scope where it's used
- check for IS_BROXTON before handling BXT_DE_PORT_HOTPLUG_MASK

Reviewed-by: Satheeshakrishna M <satheeshakrishna.m@intel.com>
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Shashank Sharma <shashank.sharma@intel.com> (v1)
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
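A hedged sketch of the three steps (the call into intel_hpd_irq_handler() illustrates the flow rather than its exact signature; local names are made up, BXT_HOTPLUG_CTL and BXT_DE_PORT_HOTPLUG_MASK are the bits named in this series):

	/* Sketch only, not the actual handler. */
	static void bxt_hpd_handler_sketch(struct drm_device *dev, u32 iir)
	{
		struct drm_i915_private *dev_priv = dev->dev_private;
		u32 hp_control = I915_READ(BXT_HOTPLUG_CTL);
		u32 hp_trigger = iir & BXT_DE_PORT_HOTPLUG_MASK;

		if (!hp_control)			/* 1. hotplug not enabled */
			return;

		/* 2. let the common HPD code detect storms and schedule the
		 *    bottom half for the triggered ports */
		intel_hpd_irq_handler(dev, hp_trigger, hp_control, hpd_bxt);

		/* 3. clear the sticky status bits by writing them back */
		I915_WRITE(BXT_HOTPLUG_CTL, hp_control);
	}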
-
Submitted by Imre Deak

All non-GMCH platforms have the same register layout for the HPD long/short status, so let's use this condition instead of HAS_PCH_SPLIT, as the latter doesn't apply to BXT. Noticed by Daniel.

Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Shashank Sharma

In BXT, DDI hotplug control has been moved from the PCH to the CPU. This patch adds a new IRQ setup function for BXT which:
1. Checks which HPD ports the encoders have requested to be enabled.
2. Enables those ports in the hot plug control register.
3. Unmasks these port interrupts in the IMR register.
4. Enables these port interrupts in the IER register.

v3: Kept the HPD filter count at its default (500 us) as per Satheesh's comment.
v4: Remove unused HPD filter defines (Damien).
v5: Warn if trying to set up HPD on port A (imre).
v6: Fix order of definitions for register bitfields (Daniel).
v7: (jani)
- define the size of the hpd_bxt array explicitly for bounds checking
- use for_each_intel_encoder instead of open coding it
- fix format/order of definitions for the BXT_HOTPLUG_CTL reg bitfields

Reviewed-by: Satheeshakrishna M <satheeshakrishna.m@intel.com>
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Shashank Sharma <shashank.sharma@intel.com> (v4)
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
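A hedged sketch of the four steps (GEN8_DE_PORT_IMR/IER exist in the driver; the loop body, hpd_enable_bits_for() and other locals are stand-ins, not the actual bxt_hpd_irq_setup()):

	u32 hotplug_port = 0, hotplug_ctrl;
	struct intel_encoder *encoder;

	/* 1. collect the HPD bits for every port an encoder asked for */
	for_each_intel_encoder(dev, encoder)
		if (encoder->hpd_pin != HPD_NONE)
			hotplug_port |= hpd_bxt[encoder->hpd_pin];

	/* 2. enable those ports in the hot plug control register
	 *    (hpd_enable_bits_for() stands in for the per-port enable bits) */
	hotplug_ctrl = I915_READ(BXT_HOTPLUG_CTL);
	I915_WRITE(BXT_HOTPLUG_CTL, hotplug_ctrl | hpd_enable_bits_for(hotplug_port));

	/* 3. unmask the port interrupts in IMR */
	I915_WRITE(GEN8_DE_PORT_IMR,
		   I915_READ(GEN8_DE_PORT_IMR) & ~hotplug_port);

	/* 4. enable them in IER */
	I915_WRITE(GEN8_DE_PORT_IER,
		   I915_READ(GEN8_DE_PORT_IER) | hotplug_port);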
-
- 10 Apr 2015, 4 commits
-
-
Submitted by Chris Wilson

Remove some needless variables and parameter passing.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Chris Wilson

Similar in vein to reducing the number of unrequired spinlocks used for execlist command submission (where the forcewake is required but manually controlled), we know that the IRQ registers are outside of the powerwell and so we can access them directly. Since we now have direct access exported via I915_READ_FW/I915_WRITE_FW, let's put those to use in the irq handlers as well. In the process, reorder the execlist submission to happen as early as possible.

v2: Restrict the untraced register mmio to just the GT path (i.e. the hotpath for execlists).

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
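A minimal sketch of what the untraced accessors buy in the GT interrupt hot path (register choice illustrative; I915_READ_FW/I915_WRITE_FW are the accessors named above):

	/* Sketch: ack and process GT IIR bits without the tracing and
	 * forcewake bookkeeping of the regular accessors. */
	u32 iir = I915_READ_FW(GEN8_GT_IIR(0));
	if (iir) {
		I915_WRITE_FW(GEN8_GT_IIR(0), iir);
		/* ... submit completed execlist work as early as possible ... */
	}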
-
Submitted by Chris Wilson

The issue is that by computing the last_adj value after applying the clamping, we can end up with a bogus value for feeding into the next RPS autotuning step.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Deepak S <deepak.s@linux.intel.com>
Reviewed-by: Deepak S <deepak.s@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Chris Wilson

Reuse the same reclocking strategy for Baytrail as on its bigger brethren, Sandybridge and Ivybridge. In particular, this makes the device quicker to reclock (both up and down), though the tendency now is to downclock more aggressively to compensate for the RPS boosts.

v2: Rebase
v3: Exclude Cherrytrail, as Deepak was concerned that the increased number of register writes would wake the common powerwell too often.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Deepak S <deepak.s@linux.intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Deepak S <deepak.s@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 24 Mar 2015, 2 commits
-
-
Submitted by Imre Deak

The logical place for clearing the RPS latched interrupt bits is when resetting the RPS interrupts, so move the corresponding part from the RPS disable function to the reset function. During resetting we already cleared the IIR bits, so the only thing missing there was clearing pm_iir.

Note that we call gen6_disable_rps_interrupts() also during driver load and resume time via intel_uncore_sanitize(), when i915 interrupts are still not installed. If there are any pending RPS bits at this point (which after this patch wouldn't be cleared) they will be cleared by the reset code via the interrupt preinstall hooks.

Signed-off-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Imre Deak

When disabling RPS interrupts there is a race where we disable RPS interrupts while the interrupt handler is running and the handler has already latched the pending RPS interrupt from the master IIR register. Afterwards the disabling path clears the PM IIR bits, making the state of pending interrupts inconsistent from the interrupt handler's point of view. This triggers the following warning: "The master control interrupt lied (PM)!".

To fix this, make sure that any running interrupt handler (which may have already latched the master IIR) finishes before clearing the IIR bits.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=87347
Signed-off-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
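A hedged sketch of the ordering (the function body is illustrative; synchronize_irq() is the kernel primitive that waits for in-flight handlers):

	/* Sketch only: mask RPS interrupts, wait for any handler that may
	 * have latched the master IIR to finish, then clear the PM IIR bits. */
	static void disable_rps_interrupts_sketch(struct drm_i915_private *dev_priv)
	{
		I915_WRITE(GEN6_PMINTRMSK, ~0u);		/* mask at the source */
		synchronize_irq(dev_priv->dev->irq);		/* drain running handlers */
		I915_WRITE(GEN6_PMIIR, dev_priv->pm_rps_events);/* now safe to clear */
	}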
-
- 20 Mar 2015, 2 commits
-
-
Submitted by Chris Wilson

Use both up/down manual ei calculations for symmetry and greater flexibility for reclocking, instead of faking the down interrupt based on a fixed integer number of up interrupts.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Deepak S <deepak.s@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Chris Wilson

Rewrite

    commit 31685c25
    Author: Deepak S <deepak.s@linux.intel.com>
    Date:   Thu Jul 3 17:33:01 2014 -0400

        drm/i915/vlv: WA for Turbo and RC6 to work together.

Other than code clarity, the major improvement is to disable the extra interrupts generated when idle. However, the reclocking remains rather slow under the new manual regime; in particular it fails to downclock as quickly as desired.

The second major improvement is that for certain workloads, like games, we need to combine render+media activity counters, as the work of displaying the frame is split across the engines and both need to be taken into account when deciding the global GPU frequency, because memory cycles are shared.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Deepak S <deepak.s@linux.intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Deepak S <deepak.s@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 18 Mar 2015, 3 commits
-
-
Submitted by Akash Goel

Earlier, Turbo interrupts were not being processed for SKL, as something was amiss in the turbo programming for SKL. Now that the missing changes have been added, enable Turbo interrupt processing for SKL.

Signed-off-by: Akash Goel <akash.goel@intel.com>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Damien Lespiau

The pipe interrupt registers are in the actual pipe power well, so we need to restore them when re-enabling the corresponding power well. I've also copied what we do on HSW/BDW for VGA, even if we haven't enabled unclaimed registers just yet.

v2: Don't run skl_power_well_post_enable() if the power well is already enabled (Paulo).

Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Damien Lespiau

While we only need to restore the pipe B/C interrupt registers on BDW when enabling the power well, Skylake is a bit more flexible and we'll also need to restore the pipe A registers, as it has its own power well that can be toggled.

Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 26 Feb 2015, 1 commit
-
-
Submitted by Matt Roper

As vendors transition their drivers from legacy to atomic there's some duplication of data between drm_crtc and drm_crtc_state (since unconverted drivers likely won't have a state structure). i915 is partially converted and does have a crtc->state structure, but still uses direct crtc fields internally in many places, which causes the two sets of data to get out of sync. As of

    commit 31c946e8
    Author: Daniel Vetter <daniel.vetter@ffwll.ch>
    Date:   Sun Feb 22 12:24:17 2015 +0100

        drm: If available use atomic state in getcrtc ioctl

        This way drivers fully converted to atomic don't need to update these
        legacy state variables in their modeset code any more.

        Reviewed-by: Rob Clark <robdclark@gmail.com>
        Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>

the DRM core starts assuming that the presence of a ->state structure implies that it should make use of the values stored there, which, on i915, leads to the core code using stale values for CRTC 'enabled' status. Let's switch over to using the state value of 'enable' internally rather than using the drm_crtc field. This ensures that our driver internals are working from the same data that the DRM core is, avoiding mismatches.

This patch was generated with Coccinelle using the following semantic patch:

<smpl>
@@
struct drm_crtc C;
struct drm_crtc *CP;
@@
(
- C.enabled
+ C.state->enable
|
- CP->enabled
+ CP->state->enable
)

// For assignments, we still update the legacy value as well as the state value
// so add an extra assignment statement for that.
@@
struct drm_crtc C;
struct drm_crtc *CP;
expression E;
@@
(
C.state->enable = E;
+ C.enabled = E;
|
CP->state->enable = E;
+ CP->enabled = E;
)
</smpl>

The crtc->mode and crtc->hwmode fields should probably be transitioned over as well eventually, but we seem to do an okay job of keeping those up-to-date already, so I want to minimize the changes that will clash with Ander's in-progress atomic work.

v2: Don't remove the assignments to the legacy value when we assign to the state value. A second cocci stanza takes care of adding the legacy assignment back where appropriate. (Daniel)

Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Ander Conselvan de Oliveira <ander.conselvan.de.oliveira@intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 24 Feb 2015, 1 commit
-
-
Submitted by Imre Deak

Atm, it's possible that the interrupt handler is called when the device is in D3 or some other low-power state. It can be due to another device that is still in D0 state and shares the interrupt line with i915, or on some platforms there could be spurious interrupts even without sharing the interrupt line.

The latter case was reported by Klaus Ethgen using a Lenovo x61p machine (gen 4). He noticed this issue via a system suspend/resume hang and bisected it to the following commit:

    commit e11aa362
    Author: Jesse Barnes <jbarnes@virtuousgeek.org>
    Date:   Wed Jun 18 09:52:55 2014 -0700

        drm/i915: use runtime irq suspend/resume in freeze/thaw

This is a problem, since in low-power states IIR will always read 0xffffffff, resulting in an endless IRQ servicing loop. Fix this by handling interrupts only when the driver explicitly enables them, so that it's guaranteed that the interrupt registers return a valid value.

Note that this issue existed even before the above commit, since during runtime suspend/resume we never unregistered the handler.

v2:
- clarify the purpose of smp_mb() vs. synchronize_irq() in the code comment (Chris)
v3:
- no need for an explicit smp_mb(), we can assume that synchronize_irq() and the mmio read/writes in the install hooks provide for this (Daniel)
- remove code comment as the remaining synchronize_irq() is self explanatory (Daniel)
v4:
- drm_irq_uninstall() implies synchronize_irq(), so no need to call it explicitly (Daniel)

Reference: https://lkml.org/lkml/2015/2/11/205
Reported-and-bisected-by: Klaus Ethgen <Klaus@Ethgen.ch>
Cc: stable@vger.kernel.org
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
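A minimal sketch of the guard (intel_irqs_enabled() stands in for however the driver tracks that it has enabled interrupts; the rest of the handler is elided):

	/* Sketch only: never touch IIR unless the driver has explicitly
	 * enabled interrupts, since a powered-down device reads 0xffffffff. */
	static irqreturn_t i915_irq_handler_sketch(int irq, void *arg)
	{
		struct drm_device *dev = arg;
		struct drm_i915_private *dev_priv = dev->dev_private;

		if (!intel_irqs_enabled(dev_priv))
			return IRQ_NONE;	/* shared line or spurious: not ours */

		/* ... normal IIR read/ack/dispatch ... */
		return IRQ_HANDLED;
	}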
-
- 23 Feb 2015, 2 commits
-
-
Submitted by Daniel Vetter

UMS is no more!

Cc: Imre Deak <imre.deak@intel.com>
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
-
Submitted by Daniel Vetter

With Ville's rework to use drm_crtc_vblank_on/off, the core will take care of rejecting drm_vblank_get calls when the pipe is off. The core also won't call the get_vblank_counter hooks in that case. And since we've dropped UMS support recently, we can now remove these hacks, yay!

Noticed while trying to answer questions Laurent had about how the new atomic helpers work.

Cc: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
-
- 14 Feb 2015, 1 commit
-
-
Submitted by Ville Syrjälä

Replace the valleyview_set_rps() and gen6_set_rps() calls with intel_set_rps(), which itself does the IS_VALLEYVIEW() check. The code becomes simpler since the callers don't have to do this check themselves.

Most of the change was performed with the following semantic patch:

@@
expression E1, E2, E3;
@@
- if (IS_VALLEYVIEW(E1)) {
-   valleyview_set_rps(E2, E3);
- } else {
-   gen6_set_rps(E2, E3);
- }
+ intel_set_rps(E2, E3);

Adding intel_set_rps() and making valleyview_set_rps() and gen6_set_rps() static was done manually. Also, valleyview_set_rps() had to be moved a bit to avoid a forward declaration.

v2: Use a less greedy semantic patch

Cc: Chris Wilson <chris@chris-wilson.co.uk>
Suggested-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
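The manually added wrapper presumably looks roughly like this (a sketch matching the semantic patch above, not a quote of the actual code):

	/* Sketch: the dispatch the callers used to open-code. */
	void intel_set_rps(struct drm_device *dev, u8 val)
	{
		if (IS_VALLEYVIEW(dev))
			valleyview_set_rps(dev, val);
		else
			gen6_set_rps(dev, val);
	}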
-
- 04 Feb 2015, 1 commit
-
-
Submitted by Daniel Vetter

You can _never_ assert that a lock is not held, except in some very restricted corner cases where it's guaranteed that your code is running single-threaded (e.g. driver load, before you've published any pointers leading to that lock).

In addition the early return breaks a bunch of testcases, since with highly concurrent hangcheck stress tests the reset fails to work and the test doesn't recover and times out. This regression has been introduced in

    commit b8d24a06
    Author: Mika Kuoppala <mika.kuoppala@linux.intel.com>
    Date:   Wed Jan 28 17:03:14 2015 +0200

        drm/i915: Remove nested work in gpu error handling

Aside: It is possible to check whether a given task doesn't hold a lock, but only when lockdep is enabled, using the lockdep_assert_held stuff.

Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=88908
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
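For reference, a tiny example of the safe direction (asserting that a lock is held), using the kernel's lockdep_assert_held(); this is generic kernel API usage, not a quote from the patch:

	#include <linux/lockdep.h>
	#include <linux/mutex.h>

	/* WARNs (when lockdep is enabled) if the caller does not hold the lock. */
	static void must_be_locked(struct mutex *lock)
	{
		lockdep_assert_held(lock);
	}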
-
- 30 Jan 2015, 1 commit
-
-
Submitted by Mika Kuoppala

Now that we declare GPU errors only through our own dedicated hangcheck workqueue, there is no need to have a separate workqueue for handling the resetting and waking up the clients, as the deadlock concerns are no more.

The only exception is i915_debugfs::i915_set_wedged, which triggers error handling through process context. However, as this is only used through the test harness, it is the test harness's responsibility not to introduce hangs through both the debug interface and the hangcheck mechanism at the same time.

Remove gpu_error.work and let the hangcheck work do the tasks it used to.

v2: Add a big warning sign into i915_debugfs::i915_set_wedged (Chris)

Cc: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 29 Jan 2015, 1 commit
-
-
Submitted by Chris Wilson

When run as a timer, i915_hangcheck_elapsed() must adhere to all the rules of running in a softirq context. This is advantageous to us as we want to minimise the risk that a driver bug will prevent us from detecting a hung GPU. However, that is irrelevant if the driver bug prevents us from resetting and recovering. Still, it is prudent not to rely on mutexes inside the checker, but given the coarseness of dev->struct_mutex, doing so is extremely hard. Give in and run from a work queue, i.e. outside of softirq.

v2: Use own workqueue to avoid deadlocks (Daniel)
    Cleanup commit msg and add comment to i915_queue_hangcheck() (Chris)

Cc: Jani Nikula <jani.nikula@intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> (v1)
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
[danvet: Remove accidental kerneldoc comment starter, to appease the 0 day builder.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
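A hedged sketch of scheduling hangcheck as delayed work on a dedicated workqueue (names and the 1.5 s period illustrate the intent rather than quote the driver):

	#include <linux/workqueue.h>
	#include <linux/jiffies.h>

	static struct workqueue_struct *hangcheck_wq;	/* own wq: no deadlocks */
	static struct delayed_work hangcheck_work;

	static void queue_hangcheck_sketch(void)
	{
		/* re-arm roughly every 1.5 s, rounded up to batch wakeups */
		queue_delayed_work(hangcheck_wq, &hangcheck_work,
				   round_jiffies_up_relative(msecs_to_jiffies(1500)));
	}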
-
- 27 Jan 2015, 3 commits
-
-
Submitted by Daniel Vetter

Self-explanatory code is better code.

Cc: Dave Airlie <airlied@redhat.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Ander Conselvan de Oliveira

To match the semantics of drm_crtc->state, which this will eventually become. The allocation of the memory for config will be fixed in a followup patch. By adding the extra _config field to intel_crtc it was possible to generate this entire patch with the cocci script below.

@@
@@
struct intel_crtc {
...
-struct intel_crtc_state config;
+struct intel_crtc_state _config;
+struct intel_crtc_state *config;
...
}

@@
struct intel_crtc *crtc;
@@
-memset(&crtc->config, 0, sizeof(crtc->config));
+memset(crtc->config, 0, sizeof(*crtc->config));

@@
@@
__intel_set_mode(...)
{
<...
-to_intel_crtc(crtc)->config = *pipe_config;
+(*(to_intel_crtc(crtc)->config)) = *pipe_config;
...>
}

@@
@@
intel_crtc_init(...)
{
...
WARN_ON(drm_crtc_index(&intel_crtc->base) != intel_crtc->pipe);
+intel_crtc->config = &intel_crtc->_config;
return;
...
}

@@
struct intel_crtc *crtc;
@@
-&crtc->config
+crtc->config

@@
struct intel_crtc *crtc;
identifier member;
@@
-crtc->config.member
+crtc->config->member

@@
expression E;
@@
-&(to_intel_crtc(E)->config)
+to_intel_crtc(E)->config

@@
expression E;
identifier member;
@@
-to_intel_crtc(E)->config.member
+to_intel_crtc(E)->config->member

v2: Clarify manual changes by splitting them into another patch. (Matt)
    Improve cocci script to generate even more of the changes. (Ander)

Signed-off-by: Ander Conselvan de Oliveira <ander.conselvan.de.oliveira@intel.com>
Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Ander Conselvan de Oliveira

And get rid of the duplicate mode structures. This patch was generated with the following semantic patch:

@@
@@
struct intel_crtc_state {
+struct drm_crtc_state base;
+
...
-struct drm_display_mode requested_mode;
-struct drm_display_mode adjusted_mode;
...
}

@@
struct intel_crtc_state *state;
@@
-state->adjusted_mode
+state->base.adjusted_mode

@@
struct intel_crtc_state *state;
@@
-state->requested_mode
+state->base.mode

@@
struct intel_crtc_state state;
@@
-state.adjusted_mode
+state.base.adjusted_mode

@@
struct intel_crtc_state state;
@@
-state.requested_mode
+state.base.mode

@@
struct drm_crtc *crtc;
@@
-to_intel_crtc(crtc)->config.adjusted_mode
+to_intel_crtc(crtc)->config.base.adjusted_mode

@@
identifier member;
expression E;
@@
-PIPE_CONF_CHECK_FLAGS(adjusted_mode.member, E);
+PIPE_CONF_CHECK_FLAGS(base.adjusted_mode.member, E);

@@
identifier member;
@@
-PIPE_CONF_CHECK_I(adjusted_mode.member);
+PIPE_CONF_CHECK_I(base.adjusted_mode.member);

@@
identifier member;
@@
-PIPE_CONF_CHECK_CLOCK_FUZZY(adjusted_mode.member);
+PIPE_CONF_CHECK_CLOCK_FUZZY(base.adjusted_mode.member);

v2: Completely generate the patch with cocci. (Ander)

Signed-off-by: Ander Conselvan de Oliveira <ander.conselvan.de.oliveira@intel.com>
Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 13 Jan 2015, 2 commits
-
-
Submitted by Ville Syrjälä

The dev_priv->display.hpd_irq_setup hook is optional, so we can move the I915_HAS_HOTPLUG() check out of i915_hpd_irq_setup() and only set up the hook when hotplug support is present.

Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
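In code this amounts to roughly the following at init time (a sketch; the exact spot in the irq init code is assumed):

	/* Sketch: register the hook only where hotplug exists, instead of
	 * checking inside the hook on every call. */
	if (I915_HAS_HOTPLUG(dev))
		dev_priv->display.hpd_irq_setup = i915_hpd_irq_setup;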
-
Submitted by Ville Syrjälä

intel_hpd_irq_handler() walks the passed-in hpd[] array assuming it contains HPD_NUM_PINS elements. Currently that's not true, as we don't specify an explicit size for the arrays when initializing them. Avoid the out of bounds accesses by specifying the size for the arrays.

Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
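A sketch of the fix's shape (one per-platform table with illustrative initializer values; the point is the explicit HPD_NUM_PINS bound):

	/* Sketch: with an explicit size, pins without an entry read as 0
	 * instead of running past the end of the array. */
	static const u32 hpd_status_g4x[HPD_NUM_PINS] = {
		[HPD_PORT_B] = PORTB_HOTPLUG_INT_STATUS_G4X,
		[HPD_PORT_C] = PORTC_HOTPLUG_INT_STATUS_G4X,
		[HPD_PORT_D] = PORTD_HOTPLUG_INT_STATUS_G4X,
	};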
-
- 12 Jan 2015, 2 commits
-
-
Submitted by Imre Deak

We apply the RPS interrupt workaround on VLV everywhere except when writing the mask directly during idling the GPU. For consistency, do this also there. While at it, also extend the code comment about affected platforms.

I couldn't reproduce the issue on VLV (the one fixed by this workaround) by removing the workaround from everywhere, while it's 100% reproducible on SNB using igt/gem_reset_stats/ban-ctx-render. So also add a note that it hasn't been verified whether the workaround really applies to VLV/CHV.

Suggested-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
-
Submitted by Imre Deak

In

    commit dbea3cea
    Author: Imre Deak <imre.deak@intel.com>
    Date:   Mon Dec 15 18:59:28 2014 +0200

        drm/i915: sanitize RPS resetting during GPU reset

we disable RPS interrupts during GPU reset, but don't apply the necessary GEN6 HW workaround. This leads to a HW lockup during a subsequent "looping batchbuffer" workload. This is triggered by the testcase that submits exactly this kind of workload after a simulated GPU reset. I'm not sure how likely the bug would have triggered otherwise, since we would have applied the workaround anyway shortly after the GPU reset, when enabling GT powersaving from the deferred work.

This may also fix unrelated issues, since during driver loading / suspending we also disable RPS interrupts, and so we also had a short window during the rest of the loading / resuming where a similar workload could run without the workaround applied.

v2:
- separate the fix to route RPS interrupts to the CPU on GEN9 too to a separate patch (Daniel)

Bisected-by: Ander Conselvan de Oliveira <ander.conselvan.de.oliveira@intel.com>
Testcase: igt/gem_reset_stats/ban-ctx-render
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=87429
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
-
- 18 Dec 2014, 1 commit
-
-
Submitted by Ville Syrjälä

The flip stall detector kicks in when pending>=INTEL_FLIP_COMPLETE. That means if we first call intel_prepare_page_flip() but don't call intel_finish_page_flip(), the next stall check will erroneously think the page flip was somehow stuck.

With enough debug spew emitted from the interrupt handler my 830 hangs when this happens. My theory is that the previous vblank interrupt gets sufficiently delayed that the handler will see the pending bit set in IIR, but ISR still has the bit set as well (ie. the flip was processed by CS but didn't complete yet). In this case the handler will proceed to call intel_check_page_flip() immediately after intel_prepare_page_flip(). It then tries to print a backtrace for the stuck flip WARN, which apparently results in way too much debug spew, delaying interrupt processing further. That then seems to cause an endless loop in the interrupt handler, and the machine is dead until the watchdog kicks in and reboots. At least limiting the number of iterations of the loop in the interrupt handler also prevented the hang.

So it seems better to not call intel_prepare_page_flip() without immediately calling intel_finish_page_flip(). The IIR/ISR trickery avoids races here, so this is a perfectly safe thing to do.

v2: Fix typo in commit message (checkpatch)

Cc: stable@vger.kernel.org
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=88381
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=85888
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
-
- 16 Dec 2014, 1 commit
-
-
Submitted by Imre Deak

Paulo noticed that we don't enable RPS interrupts via PM_IER in gen6_enable_rps_interrupts(). This wasn't a problem so far, since the only place we disabled RPS interrupts was during system/runtime suspend, and after that we reenable all interrupts in the IRQ pre/postinstall hooks. In the next patch we'll disable/reenable RPS interrupts during GPU reset too, but without calling the IRQ uninstall or pre/postinstall hooks, so there the above wouldn't work. The logical place for programming PM_IER is gen6_enable_rps_interrupts(), and this also makes the function more symmetric with gen6_disable_rps_interrupts(), so move the programming there from the postinstall hooks.

Note that these changes don't affect the ILK RPS interrupt code, which could be sanitized in a similar way. But that can be done as a follow-up.

Credits-to: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
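A hedged sketch of the symmetric enable path (GEN6_PMIIR/GEN6_PMIER and the pm_rps_events mask exist in the driver; the function body and the IMR helper call are illustrative):

	/* Sketch only: enable RPS interrupts fully in one place, so a later
	 * gen6_disable_rps_interrupts() can undo exactly this. */
	static void enable_rps_interrupts_sketch(struct drm_i915_private *dev_priv)
	{
		spin_lock_irq(&dev_priv->irq_lock);
		I915_WRITE(GEN6_PMIIR, dev_priv->pm_rps_events);	/* clear stale bits */
		gen6_enable_pm_irq(dev_priv, dev_priv->pm_rps_events);	/* unmask in IMR */
		I915_WRITE(GEN6_PMIER,
			   I915_READ(GEN6_PMIER) | dev_priv->pm_rps_events); /* and IER */
		spin_unlock_irq(&dev_priv->irq_lock);
	}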
-
- 15 Dec 2014, 1 commit
-
-
Submitted by Daniel Vetter

We consistently use the _irq_handler postfix for functions called in hardirq context. Especially when it's a non-static function, hardirq is a crazy enough calling context to warrant this level of OCD. So rename it.

Cc: Thomas Daniel <thomas.daniel@intel.com>
Reviewed-by: Thomas Daniel <thomas.daniel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
-