  19 November 2014 (4 commits)
    • drm/i915: sanitize rps irq disabling · d4d70aa5
      Committed by Imre Deak
      When disabling the RPS interrupts there is a tricky dependency between
      the thread disabling the interrupts, the RPS interrupt handler and the
      corresponding RPS work. The RPS work can reenable the interrupts, so
      there is no straightforward order in the disabling thread to (1) make
      sure that any RPS work is flushed and to (2) disable all RPS
      interrupts. Currently this is solved by masking the interrupts using two
      separate mask registers (first level display IMR and PM IMR) and doing
      the disabling when all first level interrupts are disabled.
      
      This works, but the requirement to run with all first level interrupts
      disabled is unnecessary and makes the suspend / unload time ordering of
      RPS disabling with respect to other deinitialization steps difficult
      and error prone.
      Removing this restriction allows us to disable RPS early during suspend
      / unload and forget about it for the rest of the sequence. By adding a
      more explicit method for avoiding the above race, it also becomes easier
      to prove its correctness. Finally, we can currently hit the WARN in
      snb_update_pm_irq() when a final RPS work runs with the first level
      interrupts already disabled. This won't lead to any problem (due to the
      separate interrupt masks), but with the change in this and the next
      patch we can get rid of the WARN, while leaving it in place for other
      scenarios.
      
      To address the above points, add a new RPS interrupts_enabled flag and
      use it during RPS disabling to avoid requeuing the RPS work and
      re-enabling the RPS interrupts (a simplified sketch of this scheme
      follows this entry). Since the interrupt disabling now happens in
      intel_suspend_gt_powersave(), we will disable the RPS interrupts
      explicitly during suspend (and not just through the first level mask),
      but there is no problem doing so; it is also more consistent and allows
      us to unify more of the RPS disabling during suspend and unload time in
      the next patch.
      
      v2/v3:
      - rebase on patch "drm/i915: move rps irq disable one level up" in the
        patchset
      Signed-off-by: Imre Deak <imre.deak@intel.com>
      Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      d4d70aa5
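      A minimal C sketch of the interrupts_enabled scheme described above,
      assuming placeholder names (rps_state, rps_disable_interrupts,
      mask_pm_irq, rps_work_fn); the real driver code is structured
      differently, this only illustrates how the flag stops the work from
      requeuing itself or re-enabling the interrupts while they are being
      torn down.

      #include <linux/kernel.h>
      #include <linux/spinlock.h>
      #include <linux/workqueue.h>

      struct rps_state {                       /* illustrative container, not the driver's struct */
              spinlock_t lock;                 /* protects interrupts_enabled */
              bool interrupts_enabled;         /* the new flag introduced by this patch */
              struct work_struct work;         /* RPS bottom-half work */
      };

      static void mask_pm_irq(struct rps_state *rps)
      {
              /* placeholder for masking the RPS bits in the PM IMR register */
      }

      static void rps_disable_interrupts(struct rps_state *rps)
      {
              spin_lock_irq(&rps->lock);
              rps->interrupts_enabled = false; /* the work/irq paths stop re-enabling */
              spin_unlock_irq(&rps->lock);

              mask_pm_irq(rps);                /* mask at the PM IMR level */
              cancel_work_sync(&rps->work);    /* flush any already-queued RPS work */
      }

      static void rps_work_fn(struct work_struct *work)
      {
              struct rps_state *rps = container_of(work, struct rps_state, work);

              spin_lock_irq(&rps->lock);
              if (!rps->interrupts_enabled) {  /* disabling in progress: do nothing */
                      spin_unlock_irq(&rps->lock);
                      return;
              }
              /* ... handle the RPS event and unmask the interrupt again ... */
              spin_unlock_irq(&rps->lock);
      }

      The interrupt handler would take the same lock and only queue the work
      while interrupts_enabled is set, so once the flag is cleared and
      cancel_work_sync() has returned, no further work can run or unmask
      anything.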
    • drm/i915: sanitize rps irq enabling · 3cc134e3
      Committed by Imre Deak
      At the moment we first enable the RPS interrupts and then clear any
      pending ones. This way we could lose an interrupt arriving after we
      unmasked it. This may not be a problem, as the caller should handle
      such a race, but logic still calls for the opposite order. We can also
      delay enabling the interrupts until all the RPS initialization is done,
      giving the following order:
      
      1. disable left-over RPS (earlier via intel_uncore_sanitize)
      2. clear any pending RPS interrupts
      3. initialize RPS
      4. enable RPS interrupts
      
      This also allows us to do steps 2 and 4 the same way for all
      platforms, so let's follow this order to simplify things; a sketch of
      the resulting sequence follows this entry.
      
      Also make sure any queued interrupts are cleared.
      
      v2:
      - rebase on the GEN9 patches, where we don't support RPS yet, so we
        mustn't enable RPS interrupts on it (Paulo)
      v3:
      - avoid enabling RPS interrupts on GEN>9 too (Paulo)
      - clarify the RPS init sequence in the log message (Chris)
      - add POSTING_READ to gen6_reset_rps_interrupts() (Paulo)
      - WARN if any PM_IIR bits are set in gen6_enable_rps_interrupts()
        (Paulo)
      Signed-off-by: Imre Deak <imre.deak@intel.com>
      Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      3cc134e3
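      A sketch of the resulting sequence, assuming placeholder hooks
      (disable_leftover_rps, reset_rps_interrupts, init_rps, pending_rps_iir,
      enable_rps_interrupts) that stand in for the per-platform functions
      mentioned above; only WARN_ON is a real kernel macro here.

      #include <linux/bug.h>

      /* Placeholder hooks; the real driver spreads these across several functions. */
      static void disable_leftover_rps(void)  { /* 1. done earlier via uncore sanitize */ }
      static void reset_rps_interrupts(void)  { /* 2. clear pending IIR bits, then a posting read */ }
      static void init_rps(void)              { /* 3. program RPS thresholds, work handler, etc. */ }
      static bool pending_rps_iir(void)       { return false; /* would read the PM IIR bits */ }
      static void enable_rps_interrupts(void) { /* 4. unmask the RPS bits last */ }

      static void rps_setup(void)
      {
              disable_leftover_rps();
              reset_rps_interrupts();
              init_rps();

              WARN_ON(pending_rps_iir());     /* nothing may be pending before enabling */
              enable_rps_interrupts();
      }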
    • drm/i915: WARN if we receive any rps interrupts on gen>9 · 4a74de82
      Committed by Imre Deak
      This extends
      
      commit 132f3f17
      Author: Imre Deak <imre.deak@intel.com>
      Date:   Mon Nov 10 15:34:33 2014 +0200
      
          drm/i915: WARN if we receive any gen9 rps interrupts
      
      to GEN>9 platforms as suggested by Paulo.
      Signed-off-by: Imre Deak <imre.deak@intel.com>
      Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      4a74de82
    • drm/i915: Don't continually defer the hangcheck · 672e7b7c
      Committed by Chris Wilson
      With multiple rings, we may continue to render on the blitter whilst
      executing an infinite shader on the render ring. As we currently rearm
      the timer with each execbuf, in this scenario the hangcheck will never
      fire and we will never detect the lockup on the render ring. Instead,
      only arm the timer once per hangcheck, so that the check cannot be
      deferred indefinitely and actually gets to run (a minimal sketch of the
      idea follows this entry).
      
      v2: Rearrange code to avoid triggering a BUG_ON in add_timer from
      softirq context.
      
      Testcase: igt/gem_reset_stats/defer-hangcheck*
      Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=86225
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Mika Kuoppala <mika.kuoppala@intel.com>
      Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      672e7b7c
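      A minimal sketch of the arm-once-per-hangcheck idea, assuming a
      hypothetical armed flag, the modern timer_setup()/from_timer() API and
      an illustrative 1.5 s interval; locking and the driver's actual
      bookkeeping are left out.

      #include <linux/jiffies.h>
      #include <linux/timer.h>

      struct hangcheck {
              struct timer_list timer;
              bool armed;                      /* true while a check is already scheduled */
      };

      /* Called from the execbuf path: never push an already-armed deadline out. */
      static void queue_hangcheck(struct hangcheck *hc)
      {
              if (hc->armed)
                      return;                  /* keep the pending deadline, don't defer it */

              hc->armed = true;
              mod_timer(&hc->timer, jiffies + msecs_to_jiffies(1500));
      }

      /* Timer callback: run the per-ring checks, then allow re-arming. */
      static void hangcheck_elapsed(struct timer_list *t)
      {
              struct hangcheck *hc = from_timer(hc, t, timer);

              /* ... check each ring for progress, declare a hang if stuck ... */
              hc->armed = false;
      }

      static void hangcheck_init(struct hangcheck *hc)
      {
              timer_setup(&hc->timer, hangcheck_elapsed, 0);
              hc->armed = false;
      }

      Because the flag is cleared only inside the timer callback,
      back-to-back execbufs can no longer keep pushing the deadline out,
      which is exactly the deferral the patch removes.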