1. 26 Aug 2016, 1 commit
  2. 25 Aug 2016, 2 commits
  3. 24 Aug 2016, 2 commits
    • drm/amdgpu: fix lru size grouping v2 · 56615387
      Christian König committed
      Adding a BO can make it the insertion point for larger sizes as well.
      
      v2: add a comment about the guard structure.
      Signed-off-by: Christian König <christian.koenig@amd.com>
      Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
      Reviewed-by: Felix Kuehling <felix.kuehling@amd.com>
      Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
      Cc: stable@vger.kernel.org
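      Below is a minimal, self-contained sketch of the idea described in this commit message. The names, structures and size-class handling are made up for illustration and are not the actual amdgpu code; it only shows why adding a BO at its size class's insertion point can also make it the insertion point for larger size classes, and how a guard entry keeps those pointers valid for empty classes.

      /*
       * Hypothetical size-grouped LRU: one doubly linked list, one insertion
       * point per size class, and a guard entry that is always on the list.
       * Illustration only; not the amdgpu data structures.
       */
      #include <stdio.h>

      #define NUM_CLASSES 4

      struct node {
              struct node *prev, *next;
      };

      struct lru {
              struct node guard;               /* never a real BO */
              struct node *tail[NUM_CLASSES];  /* insertion point per size class */
      };

      static void lru_init(struct lru *lru)
      {
              int i;

              lru->guard.prev = lru->guard.next = &lru->guard;
              for (i = 0; i < NUM_CLASSES; i++)
                      lru->tail[i] = &lru->guard;  /* empty classes point at the guard */
      }

      static void lru_add(struct lru *lru, struct node *bo, int size_class)
      {
              struct node *pos = lru->tail[size_class];
              int i;

              /* insert the BO right after the insertion point of its class */
              bo->prev = pos;
              bo->next = pos->next;
              pos->next->prev = bo;
              pos->next = bo;

              /*
               * The new BO becomes the insertion point for its own class and
               * for every larger class that still pointed at the same
               * position, so larger BOs keep landing behind the smaller
               * groups.
               */
              for (i = size_class; i < NUM_CLASSES; i++)
                      if (lru->tail[i] == pos)
                              lru->tail[i] = bo;
      }

      int main(void)
      {
              struct lru lru;
              struct node a, b;

              lru_init(&lru);
              lru_add(&lru, &a, 0);  /* small BO */
              lru_add(&lru, &b, 2);  /* larger BO ends up after it */
              printf("class 3 insertion point is %s\n",
                     lru.tail[3] == &b ? "the newly added BO" : "elsewhere");
              return 0;
      }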
    • drm/tegra: dsi: Enhance runtime power management · 87904c3e
      Thierry Reding committed
      The MIPI DSI output on Tegra SoCs requires some external logic to
      calibrate the MIPI pads before a video signal can be transmitted. This
      MIPI calibration logic needs to be powered on while the MIPI pads are
      in use, which is currently done as part of the DSI driver's probe
      implementation.
      
      This is suboptimal because it will leave the MIPI calibration logic
      powered up even if the DSI output is never used.
      
      On Tegra114 and earlier this behaviour also causes the driver to hang
      while trying to power up the MIPI calibration logic, because the power
      partition that contains the MIPI calibration logic is only powered on
      by the display controller when the output pipeline is configured. The
      power-up sequence for the MIPI calibration logic therefore runs before
      its power partition is guaranteed to be enabled.
      
      Fix this by splitting up the API into a request/free pair of functions
      that manage the runtime dependency between the DSI and the calibration
      modules (no registers are accessed) and a set of enable, calibrate and
      disable functions that program the MIPI calibration logic at points in
      time when the power partition is really enabled.
      
      While at it, make sure that runtime power management also works in
      ganged mode, which is currently broken.
      Reported-by: Jonathan Hunter <jonathanh@nvidia.com>
      Tested-by: Jonathan Hunter <jonathanh@nvidia.com>
      Signed-off-by: Thierry Reding <treding@nvidia.com>
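      A rough sketch of the request/free versus enable/calibrate/disable split described above follows. The function names are placeholders rather than the exact Tegra MIPI API, and hardware access is reduced to bookkeeping; the point is only which calls are safe at probe time and which must wait until the power partition is on.

      #include <stdio.h>

      /* Handle that only records the dependency; no register access. */
      struct mipi_calib {
              int requested;
              int enabled;
      };

      /* Safe at probe()/remove() time: nothing is powered up here. */
      static struct mipi_calib *mipi_request(void)
      {
              static struct mipi_calib calib;

              calib.requested = 1;
              return &calib;
      }

      static void mipi_free(struct mipi_calib *calib)
      {
              calib->requested = 0;
      }

      /* These touch hardware and may only run while the partition is on. */
      static void mipi_enable(struct mipi_calib *calib)
      {
              calib->enabled = 1;
      }

      static void mipi_calibrate(struct mipi_calib *calib)
      {
              printf("calibrating MIPI pads (enabled=%d)\n", calib->enabled);
      }

      static void mipi_disable(struct mipi_calib *calib)
      {
              calib->enabled = 0;
      }

      int main(void)
      {
              /* probe(): record the runtime dependency only */
              struct mipi_calib *calib = mipi_request();

              /* output enable path: the partition is guaranteed on here */
              mipi_enable(calib);
              mipi_calibrate(calib);

              /* output disable path */
              mipi_disable(calib);

              /* remove(): drop the dependency */
              mipi_free(calib);
              return 0;
      }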
  4. 22 Aug 2016, 10 commits
  5. 20 Aug 2016, 4 commits
  6. 18 Aug 2016, 2 commits
  7. 16 Aug 2016, 1 commit
  8. 15 Aug 2016, 1 commit
    • drm/etnaviv: take GPU lock later in the submit process · d9853490
      Lucas Stach committed
      Both the fence and event allocation can safely be done without holding
      the GPU lock, as they either need no locking (fences) or are protected
      by their own lock (events).
      
      This solves a bad locking interaction between the submit path and the
      recover worker. If userspace manages to exhaust all available events
      while the GPU is hung, the submit will wait for events to become
      available while holding the GPU lock. The recover worker waits for this
      lock to become available before trying to recover the GPU, which frees
      up the allocated events. Essentially both paths are deadlocked until
      the submit path times out waiting for available events, failing a
      submit that could otherwise be handled just fine if the recover worker
      had the chance to bring the GPU back into a working state.
      Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
      Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
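      A small sketch of the reordering this commit describes, with illustrative names rather than the actual etnaviv functions. The only point it makes is the ordering: anything that can block waiting for resources the recover worker has to free must happen before the GPU lock is taken.

      #include <pthread.h>
      #include <stdio.h>

      static pthread_mutex_t gpu_lock = PTHREAD_MUTEX_INITIALIZER;

      static int alloc_fence(void) { return 1; }  /* needs no locking */
      static int alloc_event(void) { return 2; }  /* has its own lock; may wait */

      static void submit(void)
      {
              /*
               * Allocate the fence and event before taking gpu_lock: if
               * alloc_event() has to wait for a free event, the recover
               * worker can still grab gpu_lock, reset the hung GPU and
               * release events, instead of deadlocking against this path.
               */
              int fence = alloc_fence();
              int event = alloc_event();

              pthread_mutex_lock(&gpu_lock);
              printf("queue submit with fence %d on event %d\n", fence, event);
              pthread_mutex_unlock(&gpu_lock);
      }

      int main(void)
      {
              submit();
              return 0;
      }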
  9. 11 Aug 2016, 16 commits
  10. 10 Aug 2016, 1 commit