1. 10 Aug 2016, 1 commit
  2. 04 Aug 2016, 4 commits
  3. 22 Jul 2016, 2 commits
  4. 20 Jul 2016, 1 commit
  5. 19 Jul 2016, 1 commit
    • drm/i915: Enable polling when we don't have hpd · 84c8e096
      Lyude authored
      Unfortunately, there are two situations where we lose hpd right now:
      - Runtime suspend
      - When we've shut off all of the power wells on Valleyview/Cherryview
      
      While it would be nice if this didn't cause issues, it can leave us in
      some awkward states where a user won't be able to get their display to
      turn on. For instance, if we boot a Valleyview system without any
      monitors connected, it won't need any of its power wells and will thus
      shut them off. Since this causes us to lose HPD, this means that unless
      the user knows how to ssh into their machine and do a manual reprobe
      for monitors, none of the monitors they connect after booting will
      actually work.
      
      Eventually we should come up with a better fix than having to enable
      polling for this, since it makes rpm a lot less useful, but for now the
      infrastructure in i915 just isn't there yet to get hpd in these
      situations. (A sketch of the resulting poll worker follows this entry.)
      
      Changes since v1:
       - Add comment explaining the addition of the if
         (!mode_config->poll_running) in intel_hpd_init()
       - Remove unneeded if (!dev->mode_config.poll_enabled) in
         i915_hpd_poll_init_work()
       - Call to drm_helper_hpd_irq_event() after we disable polling
       - Add cancel_work_sync() call to intel_hpd_cancel_work()
      
      Changes since v2:
       - Apparently dev->mode_config.poll_running doesn't actually reflect
         whether or not a poll is currently in progress, and is actually used
         for dynamic module parameter enabling/disabling. So now we instead
         keep track of our own poll_running variable in dev_priv->hotplug
       - Clean i915_hpd_poll_init_work() a little bit
      
      Changes since v3:
       - Remove the now-redundant connector loop in intel_hpd_init(), just
         rely on intel_hpd_poll_enable() for setting connector->polled
         correctly on each connector
       - Get rid of poll_running
       - Don't assign enabled in i915_hpd_poll_init_work before we actually
         lock dev->mode_config.mutex
       - Wrap enabled assignment in i915_hpd_poll_init_work() in READ_ONCE()
         for doc purposes
       - Do the same for dev_priv->hotplug.poll_enabled with WRITE_ONCE in
         intel_hpd_poll_enable()
       - Add some comments about racing not mattering in intel_hpd_poll_enable
      
      Changes since v4:
       - Rename intel_hpd_poll_enable() to intel_hpd_poll_init()
       - Drop the bool argument from intel_hpd_poll_init()
       - Remove redundant calls to intel_hpd_poll_init()
       - Rename poll_enable_work to poll_init_work
       - Add some kerneldoc for intel_hpd_poll_init()
       - Cross-reference intel_hpd_poll_init() in intel_hpd_init()
       - Just copy the loop from intel_hpd_init() in intel_hpd_poll_init()
      
      Changes since v5:
       - Minor kerneldoc nitpicks
      
      Cc: stable@vger.kernel.org
      Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
      Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Signed-off-by: Lyude <cpaul@redhat.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      (cherry picked from commit 19625e85)
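
      A hedged sketch of the poll-init worker this changelog describes: the
      hotplug.poll_enabled flag and poll_init_work item are taken from the
      notes above, while the connector loop and helper calls are simplified
      assumptions, not the literal patch.

      static void i915_hpd_poll_init_work(struct work_struct *work)
      {
              struct drm_i915_private *dev_priv =
                      container_of(work, struct drm_i915_private,
                                   hotplug.poll_init_work);
              struct drm_device *dev = dev_priv->dev;
              struct drm_connector *connector;
              bool enabled;

              mutex_lock(&dev->mode_config.mutex);

              /* READ_ONCE purely for documentation, as the v3 notes say. */
              enabled = READ_ONCE(dev_priv->hotplug.poll_enabled);

              list_for_each_entry(connector,
                                  &dev->mode_config.connector_list, head) {
                      /* While HPD is unavailable, fall back to full polling;
                       * once it is back, rely on HPD interrupts again. */
                      if (enabled)
                              connector->polled = DRM_CONNECTOR_POLL_CONNECT |
                                                  DRM_CONNECTOR_POLL_DISCONNECT;
                      else
                              connector->polled = DRM_CONNECTOR_POLL_HPD;
              }

              if (enabled)
                      drm_kms_helper_poll_enable(dev);

              mutex_unlock(&dev->mode_config.mutex);

              /* Reprobe after polling is disabled so hotplug events that
               * happened while HPD was off are not lost. */
              if (!enabled)
                      drm_helper_hpd_irq_event(dev);
      }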
  6. 16 Jul 2016, 1 commit
  7. 15 Jul 2016, 2 commits
    • drm/i915: Introduce Kabypoint PCH for Kabylake H/DT. · bc7135b9
      Rodrigo Vivi authored
      Some Kabylake SKUs are going to use Kabypoint PCH.
      It is mainly for Halo and DT ones.
      
      From our specs it doesn't seem that KBP brings any change to the
      display south engine, so let's consider it a continuation of
      SunrisePoint, i.e., SPT+. (A sketch of this treatment follows this entry.)
      
      Since it is easy to get confused by a letter change:
      KBL = Kabylake - CPU/GPU codename.
      KBP = Kabypoint - PCH codename.
      Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
      Reviewed-by: Ander Conselvan de Oliveira <conselvan2@gmail.com>
      Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=96826
      Link: http://patchwork.freedesktop.org/patch/msgid/1467418032-15167-1-git-send-email-rodrigo.vivi@intel.com
      Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
      (cherry picked from commit 22dea0be)
      Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
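
      A small sketch of what treating KBP as SPT+ means in practice;
      HAS_PCH_SPT()/INTEL_PCH_TYPE() follow the usual i915 naming, while
      intel_pch_is_spt_plus() is a hypothetical helper written only for
      illustration.

      #define HAS_PCH_SPT(dev) (INTEL_PCH_TYPE(dev) == PCH_SPT)
      #define HAS_PCH_KBP(dev) (INTEL_PCH_TYPE(dev) == PCH_KBP)

      /* "SPT+": display south-engine code that used to check only for
       * SunrisePoint accepts KabyPoint too, since KBP changes nothing on
       * the display side. */
      static bool intel_pch_is_spt_plus(struct drm_device *dev)
      {
              return HAS_PCH_SPT(dev) || HAS_PCH_KBP(dev);
      }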
    • drm/i915: Enable polling when we don't have hpd · 19625e85
      Lyude authored
      Unfortunately, there are two situations where we lose hpd right now:
      - Runtime suspend
      - When we've shut off all of the power wells on Valleyview/Cherryview
      
      While it would be nice if this didn't cause issues, it can leave us in
      some awkward states where a user won't be able to get their display to
      turn on. For instance, if we boot a Valleyview system without any
      monitors connected, it won't need any of its power wells and will thus
      shut them off. Since this causes us to lose HPD, this means that unless
      the user knows how to ssh into their machine and do a manual reprobe
      for monitors, none of the monitors they connect after booting will
      actually work.
      
      Eventually we should come up with a better fix than having to enable
      polling for this, since it makes rpm a lot less useful, but for now the
      infrastructure in i915 just isn't there yet to get hpd in these
      situations. (A sketch of the enable-side helper, intel_hpd_poll_init(),
      follows this entry.)
      
      Changes since v1:
       - Add comment explaining the addition of the if
         (!mode_config->poll_running) in intel_hpd_init()
       - Remove unneeded if (!dev->mode_config.poll_enabled) in
         i915_hpd_poll_init_work()
       - Call to drm_helper_hpd_irq_event() after we disable polling
       - Add cancel_work_sync() call to intel_hpd_cancel_work()
      
      Changes since v2:
       - Apparently dev->mode_config.poll_running doesn't actually reflect
         whether or not a poll is currently in progress, and is actually used
         for dynamic module parameter enabling/disabling. So now we instead
         keep track of our own poll_running variable in dev_priv->hotplug
       - Clean i915_hpd_poll_init_work() a little bit
      
      Changes since v3:
       - Remove the now-redundant connector loop in intel_hpd_init(), just
         rely on intel_hpd_poll_enable() for setting connector->polled
         correctly on each connector
       - Get rid of poll_running
       - Don't assign enabled in i915_hpd_poll_init_work before we actually
         lock dev->mode_config.mutex
       - Wrap enabled assignment in i915_hpd_poll_init_work() in READ_ONCE()
         for doc purposes
       - Do the same for dev_priv->hotplug.poll_enabled with WRITE_ONCE in
         intel_hpd_poll_enable()
       - Add some comments about racing not mattering in intel_hpd_poll_enable
      
      Changes since v4:
       - Rename intel_hpd_poll_enable() to intel_hpd_poll_init()
       - Drop the bool argument from intel_hpd_poll_init()
       - Remove redundant calls to intel_hpd_poll_init()
       - Rename poll_enable_work to poll_init_work
       - Add some kerneldoc for intel_hpd_poll_init()
       - Cross-reference intel_hpd_poll_init() in intel_hpd_init()
       - Just copy the loop from intel_hpd_init() in intel_hpd_poll_init()
      
      Changes since v5:
       - Minor kerneldoc nitpicks
      
      Cc: stable@vger.kernel.org
      Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
      Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Signed-off-by: Lyude <cpaul@redhat.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
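
      To complement the worker sketched under the cherry-picked copy above
      (84c8e096), this is the enable side implied by the v3-v5 notes; field
      and function names follow the changelog, the body is illustrative only.

      static void intel_hpd_poll_init(struct drm_i915_private *dev_priv)
      {
              /* WRITE_ONCE pairs with the READ_ONCE in the worker (v3 notes);
               * racing here is harmless, it only means an extra worker run. */
              WRITE_ONCE(dev_priv->hotplug.poll_enabled, true);

              /* We might already hold dev->mode_config.mutex, so the actual
               * connector update happens from the worker. */
              schedule_work(&dev_priv->hotplug.poll_init_work);
      }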
  8. 14 Jul 2016, 1 commit
    • drm/i915: Defer enabling rc6 til after we submit the first batch/context · b7137e0c
      Chris Wilson authored
      Some hardware requires a valid render context before it can initiate
      rc6 power gating of the GPU; the default state of the GPU is not
      sufficient and may lead to undefined behaviour. The first execution of
      any batch will load the "golden render state", at which point it is safe
      to enable rc6. As we do not forcibly load the kernel context at resume,
      we have to hook into the batch submission to be sure that the render
      state is set up before enabling rc6.
      
      However, since we don't enable powersaving until that first batch, we
      queue a delayed task in order to guarantee that the batch is indeed
      submitted. (A sketch of this autoenable scheme follows this entry.)
      
      v2: Rearrange intel_disable_gt_powersave() to match.
      v3: Apply user specified cur_freq (or idle_freq if not set).
      v4: Give in, and supply a delayed work to autoenable rc6
      v5: Mika suggested a couple of better names for delayed_resume_work
      v6: Rebalance rpm_put around the autoenable task
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Mika Kuoppala <mika.kuoppala@intel.com>
      Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
      Link: http://patchwork.freedesktop.org/patch/msgid/1468397438-21226-7-git-send-email-chris@chris-wilson.co.uk
      Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
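
      A condensed sketch of the autoenable scheme described above; the
      rps.autoenable_work field and the exact helper signatures are
      assumptions for illustration, and the kernel-context submission that
      actually guarantees the golden render state is elided.

      static void intel_autoenable_gt_powersave_work(struct work_struct *work)
      {
              struct drm_i915_private *dev_priv =
                      container_of(to_delayed_work(work),
                                   struct drm_i915_private,
                                   rps.autoenable_work);

              /* A real implementation would first ensure a request carrying
               * the golden render state has been submitted (e.g. via the
               * kernel context); that step is elided in this sketch. */
              intel_enable_gt_powersave(dev_priv);

              /* Balance the rpm reference taken when the work was queued
               * (see v6). */
              intel_runtime_pm_put(dev_priv);
      }

      static void intel_autoenable_gt_powersave(struct drm_i915_private *dev_priv)
      {
              /* Keep the device awake until the deferred enable has run. */
              intel_runtime_pm_get_noresume(dev_priv);

              /* Fallback timer: rc6 is eventually enabled even if userspace
               * never submits a batch shortly after load/resume. */
              schedule_delayed_work(&dev_priv->rps.autoenable_work,
                                    round_jiffies_up_relative(HZ));
      }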
  9. 08 Jul 2016, 1 commit
  10. 05 Jul 2016, 4 commits
  11. 04 Jul 2016, 4 commits
  12. 03 Jul 2016, 1 commit
  13. 02 Jul 2016, 1 commit
  14. 01 Jul 2016, 1 commit
  15. 30 Jun 2016, 3 commits
  16. 24 Jun 2016, 8 commits
  17. 14 Jun 2016, 2 commits
    • drm/i915/bxt: Add WaEnablePooledEuFor2x6 · e015dd69
      arun.siluvery@linux.intel.com authored
      Pooled EU is enabled by default for BXT, but for fused-down 2x6 parts
      it is advised to turn it off. However, there is another HW issue in
      those 2x6 parts before C0 that requires Pooled EU to be enabled as a
      workaround. In this case the pool configuration changes depending upon
      which subslice is disabled. Devices with all 3 subslices enabled are
      unaffected. (A sketch of this decision follows this entry.)
      
      Userspace needs to know the minimum number of EUs in a pool, as it
      varies based on which subslice is disabled. This is not yet exported
      because userspace support is not available; once it is, this needs to
      be exported using getparam ioctls.
      
      v2: s/subslice_total/subslice_per_slice as it is a more logical field (Mika)
      Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
      Cc: Winiarski, Michal <michal.winiarski@intel.com>
      Cc: Zou, Nanhai <nanhai.zou@intel.com>
      Cc: Yang, Rong R <rong.r.yang@intel.com>
      Cc: Tim Gore <tim.gore@intel.com>
      Cc: Jeff McGee <jeff.mcgee@intel.com>
      Cc: Mika Kuoppala <mika.kuoppala@intel.com>
      Signed-off-by: Arun Siluvery <arun.siluvery@linux.intel.com>
      Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
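
      A sketch of the decision logic described above; has_pooled_eu,
      min_eu_in_pool and subslice_per_slice follow the changelog, while the
      function itself and the elided fuse read are illustrative assumptions.

      static void bxt_init_pooled_eu_info(struct intel_device_info *info, u8 revid)
      {
              /* WaEnablePooledEuFor2x6:bxt -- 3x6 parts keep pooled EU on;
               * fused-down 2x6 parts only keep it on before C0, as the
               * workaround requires. */
              info->has_pooled_eu = info->subslice_per_slice == 3 ||
                                    (info->subslice_per_slice == 2 &&
                                     revid < BXT_REVID_C0);

              /* The minimum number of EUs per pool depends on which subslice
               * was fused off; a real implementation reads the fuse registers
               * here. Not yet exposed to userspace via getparam. */
              info->min_eu_in_pool = 0;
      }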
    • drm/i915:bxt: Enable Pooled EU support · 33e141ed
      arun.siluvery@linux.intel.com authored
      This mode allows EUs to be assigned to pools which can process work
      collectively. The command to enable this mode should be issued as part
      of context initialization.
      
      The pooled mode is global: once enabled, it has to stay the same across
      all contexts until a HW reset, hence it is sent in the auxiliary golden
      context batch (sketched after this entry). Thanks to Mika for the
      preliminary review and comments.
      
      v2: explain why this is enabled in golden context, use feature flag while
      enabling the support (Chris)
      
      v3: Include only kernel support as userspace support is not available yet.
      
      User space clients need to know when the pooled EU feature is present
      and enabled on the hardware so that they can adapt work submissions.
      Create a new device info flag for this purpose.
      
      Set has_pooled_eu to true in the Broxton static device info - Broxton
      supports the feature in hardware and the driver will enable it by
      default.
      
      We need to add getparam ioctls to enable userspace to query availability
      of this feature and to retrieve the minimum number of EUs in a pool, but
      we will expose them once userspace support is available. Open source
      users for this feature are mesa, libva and beignet.
      
      Beignet team is currently working on adding userspace support.
      
      Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> (v2)
      Cc: Winiarski, Michal <michal.winiarski@intel.com>
      Cc: Zou, Nanhai <nanhai.zou@intel.com>
      Cc: Yang, Rong R <rong.r.yang@intel.com>
      Cc: Mika Kuoppala <mika.kuoppala@intel.com>
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Armin Reese <armin.c.reese@intel.com>
      Cc: Tim Gore <tim.gore@intel.com>
      Signed-off-by: Jeff McGee <jeff.mcgee@intel.com>
      Signed-off-by: Arun Siluvery <arun.siluvery@linux.intel.com>
      Reviewed-by: Michał Winiarski <michal.winiarski@intel.com>
      Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
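
      A sketch of the device info flag and its use while building the
      auxiliary golden context batch; HAS_POOLED_EU() and the Broxton flag
      follow the commit text, the device-info entry is trimmed, and the
      actual enable command encoding is deliberately left out.

      #define HAS_POOLED_EU(dev) (INTEL_INFO(dev)->has_pooled_eu)

      /* Broxton supports pooled EU in hardware, so the flag defaults to on. */
      static const struct intel_device_info intel_broxton_info = {
              .gen = 9,
              .is_broxton = 1,
              .has_pooled_eu = 1,
              /* ... remaining fields elided ... */
      };

      /* Called while building the auxiliary golden context batch. */
      static u32 *emit_pooled_eu_enable(struct drm_i915_private *dev_priv, u32 *cmd)
      {
              if (!HAS_POOLED_EU(dev_priv))
                      return cmd;

              /* Emit the hardware-specific pooled-EU enable command here; the
               * exact opcode/payload is deliberately omitted in this sketch.
               * It lives in the golden context batch because the setting is
               * global and persists until a HW reset. */
              return cmd;
      }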
  18. 13 Jun 2016, 1 commit
  19. 31 May 2016, 1 commit
    • vga_switcheroo: Add helper for deferred probing · b00e5334
      Lukas Wunner authored
      So far we've got one condition where DRM drivers need to defer probing
      on a dual GPU system, and it is coded separately into each of the
      relevant drivers. As suggested by Daniel Vetter, deduplicate that code
      in the drivers and move it to a new vga_switcheroo helper. This yields
      better encapsulation of concepts and lets us add further checks in a
      central place. (The existing check pertains to pre-retina MacBook Pros
      and an additional check is expected to be needed for retinas.)
      
      One might be tempted to check deferred probing conditions in
      vga_switcheroo_register_client(), but this is usually called fairly late
      during driver load. The GPU is fully brought up and ready for switching
      at that point. On boot the ->probe hook is potentially called dozens of
      times until it finally succeeds, and each time we'd repeat bringup and
      teardown of the GPU, lengthening boot time considerably and cluttering
      logfiles. A separate helper is therefore needed which can be called
      right at the beginning of the ->probe hook (a usage sketch follows this
      entry).
      
      Note that amdgpu currently does not call this helper as the AMD GPUs
      built into MacBook Pros are only supported by radeon so far.
      
      v2: This helper could eventually be used by audio clients as well,
          so rephrase kerneldoc to refer to "client" instead of "GPU"
          and move the single existing check in an if block specific
          to PCI_CLASS_DISPLAY_VGA devices. Move documentation on
          that check from kerneldoc to a comment. (Daniel Vetter)
      
      v3: Mandate in kerneldoc that registration of client shall only
          happen after calling this helper. (Daniel Vetter)
      
      v4: Rebase on 412c8f7d ("drm/radeon: Return -EPROBE_DEFER when
          amdkfd not loaded")
      
      v5: Some Optimus GPUs use PCI_CLASS_DISPLAY_3D, make sure those are
          matched as well. (Emil Velikov)
      
      v6: The if-condition referring to PCI_BASE_CLASS_DISPLAY may be
          considered a functional change. Move to a separate commit to
          keep this a pure refactoring change. (Emil Velikov, Jani Nikula)
      
      Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
      Cc: Ben Skeggs <bskeggs@redhat.com>
      Cc: Alex Deucher <alexander.deucher@amd.com>
      Signed-off-by: Lukas Wunner <lukas@wunner.de>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Link: http://patchwork.freedesktop.org/patch/msgid/575885fd440c2b13c3f19ddf44360cfbbff35f50.1464685538.git.lukas@wunner.de
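
      A usage sketch for the new helper: the probe function below is a
      made-up example, and the helper is assumed to take the pci_dev and
      report whether probing should be deferred.

      static int example_gpu_pci_probe(struct pci_dev *pdev,
                                       const struct pci_device_id *ent)
      {
              /* Check deferral conditions before any hardware bringup, so a
               * deferred probe stays cheap and quiet. */
              if (vga_switcheroo_client_probe_defer(pdev))
                      return -EPROBE_DEFER;

              /* ... normal bringup; vga_switcheroo_register_client() is only
               * called later, once the GPU is ready for switching. */
              return 0;
      }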