1. 03 Dec 2014, 2 commits
    • drm/i915: Grab modeset locks for GPU reset on pre-ctg · 7514747d
      Committed by Ville Syrjälä
      On gen4 and earlier the GPU reset also resets the display, so we should
      protect against concurrent modeset operations. Grab all the modeset locks
      around the entire GPU reset dance, remembering first to dislodge any
      pending page flip to make sure we don't deadlock. Any pageflip coming
      in between these two steps should fail anyway due to reset_in_progress,
      so this should be safe.
      
      This fixes a lot of failed asserts in the modeset code when there's a
      modeset racing with the reset. Naturally the asserts aren't happy when
      the expected state has disappeared.
      
      v2: Drop UMS checks, complete pending flips after the reset (Daniel)
      
      Cc: Daniel Vetter <daniel@ffwll.ch>
      Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      7514747d
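      A minimal sketch of the bracketing described above, in kernel C; the
      helper names are illustrative, not the driver's exact entry points:

          /* Pre-ctg (gen4 and earlier): a GPU reset also resets the
           * display, so hold every modeset lock across the whole dance. */
          static void gpu_reset_with_display(struct drm_device *dev)
          {
                  drm_modeset_lock_all(dev);

                  do_gpu_reset(dev);  /* illustrative: the reset itself */

                  /* Per v2: complete any page flip still pending across
                   * the reset; flips racing with us fail anyway on
                   * reset_in_progress. */
                  complete_pending_page_flips(dev);  /* illustrative */

                  drm_modeset_unlock_all(dev);
          }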
    • drm/i915: Stop gathering error states for CS error interrupts · aaecdf61
      Committed by Daniel Vetter
      There are quite a few bug reports with error states where the error
      reason makes just about no sense at all, like dying on TLBs for a
      display plane that's not even there. Also, users generally don't
      report many bad side effects, just the error states.
      
      Furthermore we don't even enable these interrupts any more on gen5+
      (though the handling code is still there). So this mostly concerns old
      platforms.
      
      Given all that, let's make our lives a bit easier and stop capturing
      error states, in the hope that we can just ignore them. In case
      that's not true and the GPU indeed dies, the hangcheck should
      eventually kick in. I've left some debug logging in to make this
      case noticeable. The referenced bug is just an example.
      
      v2: Fix missing \n that Jani spotted.
      
      References: https://bugs.freedesktop.org/show_bug.cgi?id=82095
      References: https://bugs.freedesktop.org/show_bug.cgi?id=85944
      Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      aaecdf61
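      A minimal sketch of the changed behaviour, assuming an interrupt
      handler shaped like the i915 code of this era (names illustrative):

          /* On a Command Stream error interrupt, log a debug message
           * instead of capturing a full error state; if the GPU really
           * died, hangcheck catches it later. */
          static void handle_cs_error(struct intel_engine_cs *ring, u32 eir)
          {
                  DRM_DEBUG("CS error on %s, EIR 0x%08x (not capturing "
                            "error state, relying on hangcheck)\n",
                            ring->name, eir);
          }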
  2. 21 Nov 2014, 1 commit
  3. 20 Nov 2014, 1 commit
  4. 19 Nov 2014, 4 commits
    • drm/i915: sanitize rps irq disabling · d4d70aa5
      Committed by Imre Deak
      When disabling the RPS interrupts there is a tricky dependency between
      the thread disabling the interrupts, the RPS interrupt handler and the
      corresponding RPS work. The RPS work can reenable the interrupts, so
      there is no straightforward order in the disabling thread to (1) make
      sure that any RPS work is flushed and to (2) disable all RPS
      interrupts. Currently this is solved by masking the interrupts using two
      separate mask registers (first level display IMR and PM IMR) and doing
      the disabling when all first level interrupts are disabled.
      
      This works, but the requirement to run with all first level interrupts
      disabled is unnecessary, making the suspend / unload time ordering of
      RPS disabling wrt. other uninitialization steps difficult and error prone.
      Removing this restriction allows us to disable RPS early during suspend
      / unload and forget about it for the rest of the sequence. By adding a
      more explicit method for avoiding the above race, it also becomes easier
      to prove its correctness. Finally, we can currently hit the WARN in
      snb_update_pm_irq(), when a final RPS work runs with the first level
      interrupts already disabled. This won't lead to any problem (due to the
      separate interrupt masks), but with the change in this and the next
      patch we can get rid of the WARN, while leaving it in place for other
      scenarios.
      
      To address the above points, add a new RPS interrupts_enabled flag and
      use this during RPS disabling to avoid requeuing the RPS work and
      reenabling of the RPS interrupts. Since the interrupt disabling happens
      now in intel_suspend_gt_powersave(), we will disable RPS interrupts
      explicitly during suspend (and not just through the first level mask),
      but there is no problem doing so, it's also more consistent and allows
      us to unify more of the RPS disabling during suspend and unload time in
      the next patch.
      
      v2/v3:
      - rebase on patch "drm/i915: move rps irq disable one level up" in the
        patchset
      Signed-off-by: Imre Deak <imre.deak@intel.com>
      Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      d4d70aa5
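      A minimal sketch of the interrupts_enabled handshake described above;
      field and register names mirror the i915 style but are illustrative:

          void gen6_disable_rps_interrupts(struct drm_device *dev)
          {
                  struct drm_i915_private *dev_priv = dev->dev_private;

                  /* Clear the flag under the IRQ lock so the interrupt
                   * handler stops queuing new RPS work from here on. */
                  spin_lock_irq(&dev_priv->irq_lock);
                  dev_priv->rps.interrupts_enabled = false;
                  spin_unlock_irq(&dev_priv->irq_lock);

                  /* Cancel queued RPS work, or wait for a running one; it
                   * sees the cleared flag and won't re-enable anything. */
                  cancel_work_sync(&dev_priv->rps.work);

                  /* Nothing can race us now: mask the PM interrupts. */
                  I915_WRITE(GEN6_PMIER, 0);
                  I915_WRITE(GEN6_PMIMR, 0xffffffff);
          }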
    • drm/i915: sanitize rps irq enabling · 3cc134e3
      Committed by Imre Deak
      Atm, we first enable the RPS interrupts and then clear any pending
      ones. This way we could lose an interrupt arriving after we unmasked
      it. This may not be a problem, as the caller should handle such a
      race, but logic still calls for the opposite order. Also, we can
      delay enabling the interrupts until after all the RPS initialization
      is done, using the following order:
      
      1. disable left-over RPS (earlier via intel_uncore_sanitize)
      2. clear any pending RPS interrupts
      3. initialize RPS
      4. enable RPS interrupts
      
      This also allows us to do steps 2 and 4 the same way for all
      platforms, so let's follow this order to simplify things.
      
      Also make sure any queued interrupts are also cleared.
      
      v2:
      - rebase on the GEN9 patches where we don't support RPS yet, so we
        mustn't enable RPS interrupts on it (Paulo)
      v3:
      - avoid enabling RPS interrupts on GEN>9 too (Paulo)
      - clarify the RPS init sequence in the log message (Chris)
      - add POSTING_READ to gen6_reset_rps_interrupts() (Paulo)
      - WARN if any PM_IIR bits are set in gen6_enable_rps_interrupts()
        (Paulo)
      Signed-off-by: Imre Deak <imre.deak@intel.com>
      Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      3cc134e3
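      A minimal sketch of steps 2 and 4 above; register and helper names
      mirror i915 conventions but are illustrative:

          static void gen6_reset_rps_interrupts(struct drm_device *dev)
          {
                  struct drm_i915_private *dev_priv = dev->dev_private;
                  u32 mask = dev_priv->pm_rps_events;

                  spin_lock_irq(&dev_priv->irq_lock);
                  /* Write IIR twice: it is double-buffered, so the first
                   * write may just expose a second queued interrupt. */
                  I915_WRITE(GEN6_PMIIR, mask);
                  I915_WRITE(GEN6_PMIIR, mask);
                  POSTING_READ(GEN6_PMIIR);
                  spin_unlock_irq(&dev_priv->irq_lock);
          }

          void gen6_enable_rps_interrupts(struct drm_device *dev)
          {
                  struct drm_i915_private *dev_priv = dev->dev_private;

                  spin_lock_irq(&dev_priv->irq_lock);
                  /* Step 2 cleared everything pending; scream if not. */
                  WARN_ON(I915_READ(GEN6_PMIIR) & dev_priv->pm_rps_events);
                  dev_priv->rps.interrupts_enabled = true;
                  gen6_enable_pm_irq(dev_priv, dev_priv->pm_rps_events);
                  spin_unlock_irq(&dev_priv->irq_lock);
          }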
    • drm/i915: WARN if we receive any rps interrupts on gen>9 · 4a74de82
      Committed by Imre Deak
      This extends
      
      commit 132f3f17
      Author: Imre Deak <imre.deak@intel.com>
      Date:   Mon Nov 10 15:34:33 2014 +0200
      
          drm/i915: WARN if we receive any gen9 rps interrupts
      
      to GEN>9 platforms as suggested by Paulo.
      Signed-off-by: Imre Deak <imre.deak@intel.com>
      Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      4a74de82
    • drm/i915: Don't continually defer the hangcheck · 672e7b7c
      Committed by Chris Wilson
      With multiple rings, we may continue to render on the blitter whilst
      executing an infinite shader on the render ring. As we currently rearm
      the timer with each execbuf, in this scenario the hangcheck will never
      fire and we will never detect the lockup on the render ring. Instead,
      only arm the timer once per hangcheck, so that hangcheck runs more
      frequently.
      
      v2: Rearrange code to avoid triggering a BUG_ON in add_timer from
      softirq context.
      
      Testcase: igt/gem_reset_stats/defer-hangcheck*
      Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=86225
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Mika Kuoppala <mika.kuoppala@intel.com>
      Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      672e7b7c
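      A minimal sketch of the "arm once per hangcheck" idea; names follow
      the i915 style of the period but are illustrative:

          void i915_queue_hangcheck(struct drm_device *dev)
          {
                  struct drm_i915_private *dev_priv = dev->dev_private;
                  struct timer_list *timer =
                          &dev_priv->gpu_error.hangcheck_timer;

                  if (!i915.enable_hangcheck)
                          return;

                  /* Don't continually defer: if the timer is already
                   * armed, leave its expiry alone, so a stream of
                   * execbufs can no longer push the check endlessly
                   * into the future. */
                  if (timer_pending(timer))
                          return;

                  mod_timer(timer, round_jiffies_up(jiffies +
                                    DRM_I915_HANGCHECK_JIFFIES));
          }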
  5. 15 Nov 2014, 2 commits
  6. 14 Nov 2014, 5 commits
  7. 08 Nov 2014, 13 commits
  8. 24 Oct 2014, 4 commits
  9. 16 Oct 2014, 1 commit
  10. 08 Oct 2014, 1 commit
  11. 03 Oct 2014, 3 commits
  12. 01 Oct 2014, 1 commit
  13. 29 Sep 2014, 1 commit
  14. 24 Sep 2014, 1 commit