1. 20 Feb, 2019 (2 commits)
  2. 01 Feb, 2019 (4 commits)
  3. 29 Jan, 2019 (1 commit)
  4. 12 Dec, 2018 (12 commits)
  5. 03 Dec, 2018 (1 commit)
  6. 06 Oct, 2018 (1 commit)
    • drm/msm: Use drm_atomic_helper_shutdown · 3ea4b1e1
      Committed by Daniel Vetter
      drm_plane_helper_disable() is a function for non-atomic drivers only,
      and will blow up here (since no one passes the locking context it needs).
      
      Atomic drivers that want to quiesce their hw on unload should
      use drm_atomic_helper_shutdown() instead.
      Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Cc: Rob Clark <robdclark@gmail.com>
      Cc: Rajesh Yadav <ryadav@codeaurora.org>
      Cc: Chandan Uddaraju <chandanu@codeaurora.org>
      Cc: Archit Taneja <architt@codeaurora.org>
      Cc: Jeykumar Sankaran <jsanka@codeaurora.org>
      Cc: Sean Paul <seanpaul@chromium.org>
      Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
      Cc: Sinclair Yeh <syeh@vmware.com>
      Cc: "Ville Syrjälä" <ville.syrjala@linux.intel.com>
      Cc: Russell King <rmk+kernel@armlinux.org.uk>
      Cc: Gustavo Padovan <gustavo.padovan@collabora.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: linux-arm-msm@vger.kernel.org
      Cc: freedreno@lists.freedesktop.org
      Link: https://patchwork.freedesktop.org/patch/msgid/20181004202446.22905-12-daniel.vetter@ffwll.ch
      3ea4b1e1
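      To make the replacement concrete, here is a minimal sketch of an atomic
      driver's unload path; the example_unbind() hook name is hypothetical and
      not taken from the msm driver, while drm_dev_unregister() and
      drm_atomic_helper_shutdown() are the real helpers involved.

      ```c
      /* Minimal sketch: quiescing hardware on unload in an atomic driver.
       * Unlike the legacy drm_plane_helper_disable(), the shutdown helper
       * builds and commits a full atomic state and handles locking itself.
       */
      #include <drm/drm_atomic_helper.h>
      #include <drm/drm_drv.h>

      static void example_unbind(struct drm_device *ddev)
      {
              drm_dev_unregister(ddev);         /* stop new userspace access first */
              drm_atomic_helper_shutdown(ddev); /* disable all planes/CRTCs/encoders */
              /* ... release driver-private resources ... */
      }
      ```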
  7. 04 Oct, 2018 (1 commit)
  8. 11 Aug, 2018 (1 commit)
  9. 26 Jul, 2018 (7 commits)
  10. 25 Jul, 2018 (1 commit)
  11. 05 Jun, 2018 (2 commits)
  12. 04 Jun, 2018 (1 commit)
  13. 20 Feb, 2018 (1 commit)
  14. 08 Dec, 2017 (1 commit)
  15. 29 Oct, 2017 (1 commit)
  16. 28 Oct, 2017 (3 commits)
    • drm/msm: Support multiple ringbuffers · f97decac
      Committed by Jordan Crouse
      Add the infrastructure to support the idea of multiple ringbuffers.
      Assign each ringbuffer an id and use that as an index for the various
      ring specific operations.
      
      The biggest delta is to support legacy fences. Each fence gets its own
      sequence number, but the legacy functions expect a single unique integer.
      To handle this, we return a unique identifier for each submission but
      map it to a specific ring/sequence under the covers. Newer users use
      a dma_fence pointer anyway, so they don't care about the actual sequence
      ID or ring.
      
      The actual mechanics for multiple ringbuffers are very target specific
      so this code just allows for the possibility but still only defines
      one ringbuffer for each target family.
      Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
      Signed-off-by: Rob Clark <robdclark@gmail.com>
      f97decac
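      The legacy-fence mapping described above can be pictured with a short
      sketch. The packing scheme and names below are invented for illustration
      and are not the commit's actual implementation; they only show how a
      single integer can stand in for a (ring, seqno) pair.

      ```c
      /* Illustrative only: pack the ring id into the top bits of the value
       * handed to legacy userspace, and unpack it when the fence is queried.
       */
      #include <linux/types.h>

      #define EXAMPLE_RING_SHIFT  28  /* assumed split: 16 rings, 28-bit seqno */
      #define EXAMPLE_SEQNO_MASK  ((1u << EXAMPLE_RING_SHIFT) - 1)

      static inline u32 example_fence_encode(u32 ring_id, u32 seqno)
      {
              return (ring_id << EXAMPLE_RING_SHIFT) | (seqno & EXAMPLE_SEQNO_MASK);
      }

      static inline void example_fence_decode(u32 fence, u32 *ring_id, u32 *seqno)
      {
              *ring_id = fence >> EXAMPLE_RING_SHIFT;
              *seqno = fence & EXAMPLE_SEQNO_MASK;
      }
      ```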
    • drm/msm: Add per-instance submit queues · f7de1545
      Committed by Jordan Crouse
      Currently the behavior of a command stream is provided by the user
      application during submission, and the application is expected to
      internally maintain the settings for each 'context' or 'rendering queue'
      and specify the correct ones.
      
      This works okay for simple cases but as applications become more
      complex we will want to set context specific flags and do various
      permission checks to allow certain contexts to enable additional
      privileges.
      
      Add kernel-side submit queues to be analogous to 'contexts' or
      'rendering queues' on the application side. Each file descriptor
      instance will maintain its own list of queues. Queues cannot be
      shared between file descriptors.
      
      For backwards compatibility, context id '0' is defined as a default
      context specifying no priority and no special flags. This is
      intended to be the usual configuration for 99% of applications, so
      that a garden-variety application can function correctly without
      creating a queue. Only those applications requiring the specific
      benefit of different queues need to create one.
      Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
      Signed-off-by: Rob Clark <robdclark@gmail.com>
      f7de1545
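      As a rough picture of the per-instance bookkeeping described above, here
      is a sketch with invented struct and field names (the real driver's
      definitions may differ): each open drm file keeps its own queue list, and
      queue id 0 acts as the default context with no priority and no flags.

      ```c
      #include <linux/list.h>
      #include <linux/types.h>

      struct example_submitqueue {
              u32 id;                 /* 0 = implicit default queue */
              u32 prio;
              u32 flags;
              struct list_head node;  /* linked into the owning file's list */
      };

      struct example_file_private {
              struct list_head submitqueues;  /* never shared across fds */
              u32 queueid_counter;            /* next id handed out on create */
      };
      ```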
    • drm/msm/adreno: load gpu at probe/bind time · eec874ce
      Committed by Rob Clark
      Previously, in an effort to defer initializing the gpu until firmware
      was available (ie. rootfs mounted), the gpu was not loaded when the
      subdevice was bound.  As a result, clks/etc were requested in a
      place where devm couldn't really help unwind if something failed.
      
      Instead, move request_firmware() to gpu->hw_init() and construct the gpu
      earlier, in adreno_bind().  To avoid the rest of the driver needing to
      be aware of a gpu that hasn't yet managed to load firmware and complete
      hw_init(), stash the gpu ptr in the adreno device's drvdata, and don't
      set priv->gpu until hw_init() succeeds.
      Signed-off-by: Rob Clark <robdclark@gmail.com>
      eec874ce
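      The new ordering can be sketched as follows; the struct and function
      names here are invented for illustration, while request_firmware() and
      dev_set_drvdata() are the real kernel interfaces being leaned on.

      ```c
      #include <linux/device.h>
      #include <linux/err.h>
      #include <linux/firmware.h>

      struct example_gpu {
              struct device *dev;
              const char *fwname;
              const struct firmware *fw;
      };

      struct example_drm_private {
              struct example_gpu *gpu;  /* stays NULL until hw_init() succeeds */
      };

      /* Bind time: the gpu object already exists (clks/irqs acquired via devm_*
       * so they unwind automatically on failure); stash it without making it
       * visible to the rest of the driver yet.
       */
      static int example_bind(struct device *dev, struct example_gpu *gpu)
      {
              dev_set_drvdata(dev, gpu);
              return 0;
      }

      /* hw_init time: firmware is requested here, once rootfs is expected to
       * be mounted, and the gpu pointer is only published on success.
       */
      static int example_hw_init(struct example_drm_private *priv,
                                 struct example_gpu *gpu)
      {
              int ret;

              ret = request_firmware(&gpu->fw, gpu->fwname, gpu->dev);
              if (ret)
                      return ret;

              /* ... clock up and program the hardware ... */

              priv->gpu = gpu;  /* now visible to the rest of the driver */
              return 0;
      }
      ```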