1. 18 April 2020, 2 commits
    • drm/i915/tc/tgl: Implement TC cold sequences · 3c02934b
      Committed by José Roberto de Souza
      TC ports can enter TCCOLD to save power, and the driver is required to
      ask PCODE to exit this state before using a port or reading TC
      registers.
      
      For TGL there is a new MBOX command to do that, with a parameter asking
      PCODE either to exit TCCOLD and block further entry, or to unblock
      TCCOLD entry.
      
      So add a new power domain to reuse the refcount and only allow TC cold
      when no TC port is in use.
      
      v2:
      - fixed missing case in intel_display_power_domain_str()
      - moved tgl_tc_cold_request to intel_display_power.c
      - renamed TGL_TC_COLD_OFF to TGL_TC_COLD_OFF_POWER_DOMAINS
      - added all TC and TBT aux power domains to
      TGL_TC_COLD_OFF_POWER_DOMAINS
      
      v3:
      - added a 1 msec sleep when PCODE returns -EAGAIN
      - added a 5 msec timeout to avoid looping forever if
      sandybridge_pcode_write_timeout() keeps returning -EAGAIN
      
      v4:
      - made failure to block or unblock TC cold an error
      - removed the 5 msec timeout; instead give PCODE 1 msec, up to 3 times,
      to recover from the internal error
      
      v5:
      - only sleep 1 msec when ret is -EAGAIN
      (A sketch of this PCODE retry handshake follows after this entry.)
      
      BSpec: 49294
      Cc: Imre Deak <imre.deak@intel.com>
      Cc: Cooper Chiou <cooper.chiou@intel.com>
      Cc: Kai-Heng Feng <kai.heng.feng@canonical.com>
      Reviewed-by: Imre Deak <imre.deak@intel.com>
      Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20200414194956.164323-6-jose.souza@intel.com
      3c02934b
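
      The retry policy described in v3-v5 amounts to: issue the MBOX request,
      and if PCODE answers -EAGAIN, sleep 1 msec and try again, up to 3 times,
      before treating the failure as an error. Below is a minimal,
      self-contained C sketch of that policy; the command opcode, parameter
      encoding and pcode_mbox_write() helper are placeholders for
      illustration, not the real i915/PCODE interface.

      /* Hypothetical, self-contained model of the retry policy above;
       * pcode_mbox_write() and the opcode/parameter values are stand-ins,
       * not the real i915 mailbox interface. */
      #include <errno.h>
      #include <stdbool.h>
      #include <stdio.h>
      #include <unistd.h>

      /* Stand-in for the PCODE mailbox write; returns 0 on success or
       * -EAGAIN while PCODE is still recovering from an internal error. */
      static int pcode_mbox_write(unsigned int cmd, unsigned int param)
      {
              static int busy = 1;    /* pretend the first request is rejected */
              (void)cmd;
              (void)param;
              return busy-- > 0 ? -EAGAIN : 0;
      }

      /* Ask PCODE to exit TCCOLD and block (or unblock) further entry,
       * giving PCODE 1 msec, up to 3 times, to recover before failing. */
      static int tc_cold_request(bool block)
      {
              const unsigned int TC_COLD_CMD = 0x26;  /* placeholder opcode */
              unsigned int param = block ? 0 : 1;     /* placeholder encoding */
              int tries = 3;
              int ret;

              do {
                      ret = pcode_mbox_write(TC_COLD_CMD, param);
                      if (ret != -EAGAIN)
                              break;
                      usleep(1000);   /* sleep 1 msec only on -EAGAIN */
              } while (--tries);

              if (ret)
                      fprintf(stderr, "Failed to %s TC cold: %d\n",
                              block ? "block" : "unblock", ret);
              return ret;
      }

      int main(void)
      {
              return tc_cold_request(true) ? 1 : 0;
      }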
    • drm/i915/tc/icl: Implement TC cold sequences · feb7e0ef
      Committed by José Roberto de Souza
      This is required for legacy/static TC ports, as the IOM is not aware of
      the connection and will not trigger the TC cold exit.
      
      Just requesting PCODE to exit TCCOLD is not enough, as the port could
      enter it again before the driver makes use of it; to prevent that,
      BSpec states that the AUX power well should be held.
      
      So embed the TC cold exit sequence into the ICL AUX enable: it will
      enable AUX and then request the TC cold exit.
      
      TC cold block (exit and AUX hold) and unblock calls were added to some
      exported TC functions; for the others, and for PHY register access,
      callers should enable the AUX power well and keep it enabled during the
      access. (A sketch of this block/unblock pattern follows after this
      entry.)
      
      Also add a TC cold check and warning in tc_port_load_fia_params(), as
      at this point of driver initialization we can't request power wells; if
      we ever hit this warning we will need to figure out how to handle it.
      
      v2:
      - moved the ICL TC cold exit function to intel_display_power
      - using dig_port->tc_legacy_port to execute the sequences only for
      legacy ports; hopefully VBTs will have this right
      - fixed the check to call _hsw_power_well_continue_enable()
      - calling _hsw_power_well_continue_enable() unconditionally in
      icl_tc_phy_aux_power_well_enable(); if needed we will suppress timeout
      warnings for TC legacy ports
      - only blocking TC cold around FIA access
      
      v3:
      - added a 5 msec timeout to avoid looping forever if
      sandybridge_pcode_write_timeout() keeps returning -EAGAIN in
      icl_tc_cold_exit()
      - removed a leftover tc_cold_wakeref
      - added a 1 msec sleep when PCODE returns -EAGAIN
      
      v4:
      - removed the 5 msec timeout; instead give whoever is using PCODE
      1 msec to finish, up to 3 times
      - added a comment about turning TC cold exit failure into an error in
      the future
      
      BSpec: 21750
      Closes: https://gitlab.freedesktop.org/drm/intel/issues/1296
      Cc: Imre Deak <imre.deak@intel.com>
      Cc: Cooper Chiou <cooper.chiou@intel.com>
      Cc: Kai-Heng Feng <kai.heng.feng@canonical.com>
      Reviewed-by: Imre Deak <imre.deak@intel.com>
      Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20200414194956.164323-4-jose.souza@intel.com
      feb7e0ef
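
      The block/unblock pattern described above boils down to taking a
      reference that keeps the AUX power well enabled (its enable sequence is
      what performs the TC cold exit) before touching FIA/PHY registers, and
      dropping it afterwards. The following self-contained C sketch models
      that refcounting; tc_cold_block()/tc_cold_unblock(), the tc_port struct
      and read_fia_register() are illustrative names only, not the actual
      driver API.

      /* Illustrative refcount model of "TC cold block/unblock" around
       * register access; names and types are hypothetical, not the i915
       * implementation. */
      #include <assert.h>
      #include <stdint.h>
      #include <stdio.h>

      struct tc_port {
              int aux_power_refcount; /* >0: AUX power well held, TC cold blocked */
      };

      /* Grab the AUX power well; enabling it is what requests the TCCOLD
       * exit, and holding it keeps TCCOLD blocked. */
      static void tc_cold_block(struct tc_port *tc)
      {
              if (tc->aux_power_refcount++ == 0)
                      printf("enable AUX power well + request TCCOLD exit\n");
      }

      static void tc_cold_unblock(struct tc_port *tc)
      {
              assert(tc->aux_power_refcount > 0);
              if (--tc->aux_power_refcount == 0)
                      printf("release AUX power well, TCCOLD may be re-entered\n");
      }

      /* FIA/PHY registers must only be touched while TC cold is blocked;
       * warn (here: assert) if a caller forgot to block first. */
      static uint32_t read_fia_register(struct tc_port *tc, uint32_t reg)
      {
              assert(tc->aux_power_refcount > 0);
              (void)reg;
              return 0;       /* placeholder register value */
      }

      int main(void)
      {
              struct tc_port tc = { 0 };

              tc_cold_block(&tc);
              (void)read_fia_register(&tc, 0x1234);
              tc_cold_unblock(&tc);
              return 0;
      }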
  2. 17 April 2020, 1 commit
  3. 16 April 2020, 1 commit
  4. 08 April 2020, 1 commit
  5. 07 April 2020, 1 commit
    • drm/i915/gt: Yield the timeslice if caught waiting on a user semaphore · c4e8ba73
      Committed by Chris Wilson
      If we find ourselves waiting on a MI_SEMAPHORE_WAIT, either within the
      user batch or in our own preamble, the engine raises a
      GT_WAIT_ON_SEMAPHORE interrupt. We can unmask that interrupt and so
      respond to a semaphore wait by yielding the timeslice, if we have
      another context to yield to!
      
      The only real complication is that the interrupt is only generated for
      the start of the semaphore wait, and is asynchronous to our
      process_csb() -- that is, we may not have registered the timeslice before
      we see the interrupt. To ensure we don't miss a potential semaphore
      blocking forward progress (e.g. selftests/live_timeslice_preempt) we mark
      the interrupt and apply it to the next timeslice regardless of whether it
      was active at the time.
      
      v2: We use semaphores in preempt-to-busy, within the timeslicing
      implementation itself! Ergo, when we do insert a preemption due to an
      expired timeslice, the new context may start with the missed semaphore
      flagged by the retired context and be yielded, ad infinitum. To avoid
      this, read the context id at the time of the semaphore interrupt and
      only yield if that context is still active. (A sketch of this check
      follows after this entry.)
      
      Fixes: 8ee36e04 ("drm/i915/execlists: Minimalistic timeslicing")
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Cc: Kenneth Graunke <kenneth@whitecape.org>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20200407130811.17321-1-chris@chris-wilson.co.uk
      c4e8ba73
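
      The v2 fix above boils down to latching which context was executing
      when the GT_WAIT_ON_SEMAPHORE interrupt fired, and yielding the
      timeslice only if that same context is still the one running. A
      simplified, self-contained C sketch of that decision follows; the
      struct, field names and INVALID_CONTEXT sentinel are illustrative,
      not the actual execlists code.

      /* Simplified model of the "yield on semaphore wait" decision from v2;
       * names are illustrative, not the actual execlists implementation. */
      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      #define INVALID_CONTEXT ((uint32_t)~0u)

      struct engine_execlists {
              uint32_t active_ccid;   /* context currently on the hardware */
              uint32_t yield_ccid;    /* context latched by the semaphore irq */
      };

      /* GT_WAIT_ON_SEMAPHORE interrupt: remember which context was waiting
       * at the moment the interrupt fired. */
      static void semaphore_wait_irq(struct engine_execlists *el)
      {
              el->yield_ccid = el->active_ccid;
      }

      /* Timeslice decision: only yield if the context that hit the semaphore
       * wait is still the active one, so a freshly promoted context is not
       * immediately kicked out again by a stale interrupt. */
      static bool should_yield_timeslice(const struct engine_execlists *el)
      {
              return el->yield_ccid != INVALID_CONTEXT &&
                     el->yield_ccid == el->active_ccid;
      }

      int main(void)
      {
              struct engine_execlists el = {
                      .active_ccid = 1,
                      .yield_ccid = INVALID_CONTEXT,
              };

              semaphore_wait_irq(&el);        /* context 1 stalls on MI_SEMAPHORE_WAIT */
              printf("yield: %d\n", should_yield_timeslice(&el));     /* 1 */

              el.active_ccid = 2;             /* preemption already switched contexts */
              printf("yield: %d\n", should_yield_timeslice(&el));     /* 0 */
              return 0;
      }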
  6. 04 April 2020, 1 commit
  7. 28 March 2020, 2 commits
  8. 20 March 2020, 1 commit
  9. 18 March 2020, 2 commits
  10. 14 March 2020, 2 commits
  11. 12 March 2020, 1 commit
  12. 10 March 2020, 1 commit
  13. 03 March 2020, 2 commits
  14. 02 March 2020, 1 commit
  15. 28 February 2020, 2 commits
  16. 22 February 2020, 1 commit
  17. 21 February 2020, 2 commits
  18. 08 February 2020, 3 commits
  19. 06 February 2020, 1 commit
  20. 31 January 2020, 1 commit
  21. 30 January 2020, 1 commit
  22. 29 January 2020, 1 commit
  23. 17 January 2020, 1 commit
  24. 16 January 2020, 1 commit
  25. 15 January 2020, 1 commit
  26. 09 January 2020, 1 commit
  27. 07 January 2020, 3 commits
  28. 01 January 2020, 1 commit
  29. 28 December 2019, 1 commit