1. 20 Feb, 2018 1 commit
  2. 14 Feb, 2018 2 commits
  3. 10 Feb, 2018 2 commits
  4. 08 Feb, 2018 1 commit
  5. 07 Feb, 2018 1 commit
  6. 02 Feb, 2018 2 commits
  7. 01 Feb, 2018 5 commits
  8. 23 Jan, 2018 1 commit
      drm/i915: Increase render/media power gating hysteresis for gen9+ · c1beabcf
      Committed by Chris Wilson
      On gen9+, after an idle period the HW will disable the entire power well
      to conserve power (by preventing current leakage). It takes around 100
      microseconds to bring the power well back online afterwards. With the
      current hysteresis value of 25us (really 25 * 1280ns), we do not have
      sufficient time to respond to an interrupt and schedule the next execution
      before the HW powers itself down. (At present, we prevent this by
      grabbing the forcewake for prolonged periods of time, but that overkill is
      fixed in the next patch.) The minimum we want to set the power gating
      hysteresis to is the length of time it takes us to service the GPU, which
      across a broad spectrum of machines is about 250us.
      
      (Note this also brings guc latency into the same ballpark as execlists.)
      
      v2: Include some notes on where I plucked the numbers from.
      
      Testcase: igt/gem_exec_nop/sequential
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
      Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
      Cc: Sagar Arun Kamble <sagar.a.kamble@intel.com>
      Cc: Michel Thierry <michel.thierry@intel.com>
      Cc: Michal Winiarski <michal.winiarski@intel.com>
      Reviewed-by: Sagar Arun Kamble <sagar.a.kamble@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20180122135541.32222-1-chris@chris-wilson.co.uk
  9. 20 Dec, 2017 1 commit
  10. 12 Dec, 2017 3 commits
  11. 07 Dec, 2017 1 commit
  12. 06 Dec, 2017 2 commits
  13. 01 Dec, 2017 1 commit
  14. 25 Nov, 2017 1 commit
      drm/i915: Use exponential backoff for wait_for() · a54b1873
      Committed by Chris Wilson
      Instead of sleeping for a fixed 1ms (roughly, depending on timer slack),
      start with a small sleep and exponentially increase the sleep on each
      cycle.
      
      A good example of a beneficiary is the guc mmio communication channel.
      Typically we expect (and so spin) for 10us for a quick response, but this
      doesn't cover everything and so sometimes we fallback to the millisecond+
      sleep. This incurs a significant delay in time-critical operations like
      preemption (igt/gem_exec_latency), which can be improved significantly by
      using a small sleep after the spin fails.
      
      We've made this suggestion many times, but had little experimental data
      to support adding the complexity.
      
      v2: Bump the minimum usleep to 10us on advice of
      Documentation/timers/timers-howto.txt (Tvrtko)
      v3: Specify a min, max range for usleep intervals -- some code may
      crucially depend on the sleep pattern and so want to specify it.
      
      References: 1758b90e ("drm/i915: Use a hybrid scheme for fast register waits")
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Cc: John Harrison <John.C.Harrison@intel.com>
      Cc: Michał Winiarski <michal.winiarski@intel.com>
      Cc: Ville Syrjala <ville.syrjala@linux.intel.com>
      Reviewed-by: Michał Winiarski <michal.winiarski@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20171124130031.20761-2-chris@chris-wilson.co.uk
  15. 23 Nov, 2017 1 commit
  16. 22 Nov, 2017 3 commits
  17. 21 Nov, 2017 1 commit
  18. 18 Nov, 2017 1 commit
  19. 17 Nov, 2017 2 commits
  20. 14 Nov, 2017 1 commit
  21. 12 Nov, 2017 2 commits
  22. 11 Nov, 2017 1 commit
  23. 09 Nov, 2017 3 commits
  24. 08 Nov, 2017 1 commit