1. 23 June 2017, 1 commit
  2. 17 May 2017, 1 commit
  3. 09 March 2017, 1 commit
  4. 07 March 2017, 1 commit
  5. 01 March 2017, 3 commits
  6. 02 February 2017, 3 commits
  7. 20 January 2017, 2 commits
  8. 09 January 2017, 1 commit
  9. 18 December 2016, 1 commit
  10. 15 November 2016, 3 commits
  11. 25 October 2016, 1 commit
    • dma-buf: Rename struct fence to dma_fence · f54d1867
      Authored by Chris Wilson
      I plan to usurp the short name of struct fence for a core kernel struct,
      and so I need to rename the specialised fence/timeline for DMA
      operations to make room.
      
      A consensus was reached in
      https://lists.freedesktop.org/archives/dri-devel/2016-July/113083.html
      that making clear this fence applies to DMA operations was a good thing.
      Since then the patch has grown a bit as usage increases, so hopefully it
      remains a good thing!
      
      (v2...: rebase, rerun spatch)
      v3: Compile on msm, spotted a manual fixup that I broke.
      v4: Try again for msm, sorry Daniel
      
      coccinelle script:
      @@
      
      @@
      - struct fence
      + struct dma_fence
      @@
      
      @@
      - struct fence_ops
      + struct dma_fence_ops
      @@
      
      @@
      - struct fence_cb
      + struct dma_fence_cb
      @@
      
      @@
      - struct fence_array
      + struct dma_fence_array
      @@
      
      @@
      - enum fence_flag_bits
      + enum dma_fence_flag_bits
      @@
      
      @@
      (
      - fence_init
      + dma_fence_init
      |
      - fence_release
      + dma_fence_release
      |
      - fence_free
      + dma_fence_free
      |
      - fence_get
      + dma_fence_get
      |
      - fence_get_rcu
      + dma_fence_get_rcu
      |
      - fence_put
      + dma_fence_put
      |
      - fence_signal
      + dma_fence_signal
      |
      - fence_signal_locked
      + dma_fence_signal_locked
      |
      - fence_default_wait
      + dma_fence_default_wait
      |
      - fence_add_callback
      + dma_fence_add_callback
      |
      - fence_remove_callback
      + dma_fence_remove_callback
      |
      - fence_enable_sw_signaling
      + dma_fence_enable_sw_signaling
      |
      - fence_is_signaled_locked
      + dma_fence_is_signaled_locked
      |
      - fence_is_signaled
      + dma_fence_is_signaled
      |
      - fence_is_later
      + dma_fence_is_later
      |
      - fence_later
      + dma_fence_later
      |
      - fence_wait_timeout
      + dma_fence_wait_timeout
      |
      - fence_wait_any_timeout
      + dma_fence_wait_any_timeout
      |
      - fence_wait
      + dma_fence_wait
      |
      - fence_context_alloc
      + dma_fence_context_alloc
      |
      - fence_array_create
      + dma_fence_array_create
      |
      - to_fence_array
      + to_dma_fence_array
      |
      - fence_is_array
      + dma_fence_is_array
      |
      - trace_fence_emit
      + trace_dma_fence_emit
      |
      - FENCE_TRACE
      + DMA_FENCE_TRACE
      |
      - FENCE_WARN
      + DMA_FENCE_WARN
      |
      - FENCE_ERR
      + DMA_FENCE_ERR
      )
       (
       ...
       )
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Gustavo Padovan <gustavo.padovan@collabora.co.uk>
      Acked-by: Sumit Semwal <sumit.semwal@linaro.org>
      Acked-by: Christian König <christian.koenig@amd.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Link: http://patchwork.freedesktop.org/patch/msgid/20161025120045.28839-1-chris@chris-wilson.co.uk
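      A minimal sketch of what the rename means at a driver call site; the
      example_signal_and_put() helper is hypothetical, while the dma_fence_*
      calls are the renamed equivalents listed in the coccinelle script above.

      #include <linux/dma-fence.h>

      /* Signal a fence and drop our reference.
       * Before this patch these calls were fence_signal()/fence_put(). */
      static void example_signal_and_put(struct dma_fence *fence)
      {
              dma_fence_signal(fence);
              dma_fence_put(fence);
      }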
  12. 20 September 2016, 1 commit
  13. 12 July 2016, 1 commit
    • drm/qxl: Remove deprecated create_singlethread_workqueue · 7b2d16f5
      Authored by Bhaktipriya Shridhar
      System workqueues have been able to handle high level of concurrency
      for a long time now and there's no reason to use dedicated workqueues
      just to gain concurrency. Since the workqueue in the QXL graphics device
      driver is involved in freeing and processing the release ring
      (workitem &qdev->gc_work, maps to gc_work which calls
      qxl_garbage_collect) and is not being used on a memory reclaim path,
      dedicated gc_queue has been replaced with the use of system_wq.
      
      Unlike a dedicated per-cpu workqueue created with create_workqueue(),
      system_wq allows multiple work items to overlap executions even on
      the same CPU; however, a per-cpu workqueue doesn't provide any CPU
      locality or global ordering guarantee unless the target CPU is
      explicitly specified, so the increase in local concurrency shouldn't
      make any difference.
      
      flush_work() has been called in qxl_device_fini() to ensure that there
      are no pending tasks while disconnecting the driver.
      Signed-off-by: Bhaktipriya Shridhar <bhaktipriya96@gmail.com>
      Acked-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Link: http://patchwork.freedesktop.org/patch/msgid/20160702110209.GA3560@Karyakshetra
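      A rough sketch of the pattern described in the commit above, using
      hypothetical example_* names rather than the actual qxl code: work is
      queued on system_wq via schedule_work() and flushed at teardown.

      #include <linux/workqueue.h>

      /* Hypothetical device state mirroring the qxl gc_work pattern. */
      struct example_device {
              struct work_struct gc_work;
      };

      static void example_gc_work_fn(struct work_struct *work)
      {
              /* free and process the release ring here */
      }

      static void example_device_init(struct example_device *edev)
      {
              INIT_WORK(&edev->gc_work, example_gc_work_fn);
      }

      static void example_kick_gc(struct example_device *edev)
      {
              /* was: queue_work() on a dedicated workqueue */
              schedule_work(&edev->gc_work);  /* runs on system_wq */
      }

      static void example_device_fini(struct example_device *edev)
      {
              /* no garbage-collection work may be pending at teardown */
              flush_work(&edev->gc_work);
      }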
  14. 02 May 2016, 1 commit
  15. 31 March 2016, 1 commit
  16. 24 November 2015, 1 commit
  17. 11 September 2015, 1 commit
    • drm/qxl: validate monitors config modes · bd3e1c7c
      Authored by Jonathon Jongsma
      Due to some recent changes in
      drm_helper_probe_single_connector_modes_merge_bits(), old custom modes
      were not being pruned properly. In current kernels,
      drm_mode_validate_basic() is called to sanity-check each mode in the
      list. If the sanity check passes, the mode's status gets set to
      MODE_OK. In older kernels this check was not done, so old custom modes
      would still have a status of MODE_UNVERIFIED at this point, and would
      therefore be pruned later in the function.
      
      As a result of this new behavior, the list of modes for a device always
      includes every custom mode ever configured for the device, with the
      largest one listed first. Since desktop environments usually choose the
      first preferred mode when a hotplug event is emitted, this had the
      result of making it very difficult for the user to reduce the size of
      the display.
      
      The qxl driver did implement the mode_valid connector function, but it
      was empty. In order to restore the old behavior where old custom modes
      are pruned, we implement a proper mode_valid function for the qxl
      driver. This function now checks each mode against the last configured
      custom mode and the list of standard modes. If the mode doesn't match
      any of these, its status is set to MODE_BAD so that it will be pruned as
      expected.
      Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Dave Airlie <airlied@redhat.com>
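      A sketch of the kind of mode_valid hook the commit describes; the
      fixed sizes below are assumptions standing in for the driver state
      (last custom mode, standard mode list) that a real implementation
      would consult.

      #include <linux/kernel.h>
      #include <drm/drm_crtc.h>

      static const struct { int w, h; } example_std_modes[] = {
              { 1024, 768 }, { 1280, 1024 }, { 1920, 1080 },
      };

      static enum drm_mode_status
      example_conn_mode_valid(struct drm_connector *connector,
                              struct drm_display_mode *mode)
      {
              int i;

              /* The last configured custom mode stays valid; 800x600 stands
               * in for the value the driver would remember. */
              if (mode->hdisplay == 800 && mode->vdisplay == 600)
                      return MODE_OK;

              /* Standard modes stay valid too. */
              for (i = 0; i < ARRAY_SIZE(example_std_modes); i++)
                      if (mode->hdisplay == example_std_modes[i].w &&
                          mode->vdisplay == example_std_modes[i].h)
                              return MODE_OK;

              /* Anything else is a stale custom mode: mark it bad so the
               * probe helper prunes it. */
              return MODE_BAD;
      }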
  18. 05 June 2015, 1 commit
  19. 30 September 2014, 1 commit
  20. 24 September 2014, 1 commit
  21. 03 September 2014, 1 commit
  22. 02 September 2014, 2 commits
  23. 27 August 2014, 1 commit
  24. 18 December 2013, 1 commit
  25. 06 November 2013, 1 commit
    • qxl: add a connector property to denote hotplug should rescan modes. · 4695b039
      Authored by Dave Airlie
      GNOME userspace has an issue with when it rescans for modes on hotplug
      events: if the monitor has no EDID, it assumes nothing has changed,
      since with real hardware we'd never get new modes without a new EDID,
      and it now relies on that behaviour. With virtual GPUs, however, we
      would like to rescan the modes and get a new preferred mode on hotplug
      events to handle dynamic guest resizing (where you resize the host
      window and the guest resizes with it).
      
      This is a simple property we can make userspace watch for to trigger
      the new behaviour, and it can be used to replace EDID hacks in virtual
      drivers.
      
      Acked-by: Marc-André Lureau <marcandre.lureau@gmail.com> (on irc)
      Signed-off-by: Dave Airlie <airlied@redhat.com>
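      A sketch of how such a connector property might be created and
      attached; the property name and flags here are illustrative
      assumptions, not necessarily what the patch itself uses.

      #include <drm/drm_crtc.h>

      /* Create an immutable 0/1 range property and hang it off the
       * connector so userspace can detect it and rescan modes on hotplug. */
      static int example_attach_rescan_property(struct drm_device *dev,
                                                struct drm_connector *connector)
      {
              struct drm_property *prop;

              prop = drm_property_create_range(dev, DRM_MODE_PROP_IMMUTABLE,
                                               "hotplug_mode_update", 0, 1);
              if (!prop)
                      return -ENOMEM;

              drm_object_attach_property(&connector->base, prop, 1);
              return 0;
      }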
  26. 09 October 2013, 1 commit
    • drm: kill ->gem_init_object() and friends · 16eb5f43
      Authored by David Herrmann
      All drivers embed gem-objects into their own buffer objects. There is no
      reason to keep drm_gem_object_alloc(), gem->driver_private and
      ->gem_init_object() anymore.
      
      New drivers are highly encouraged to do the same. There is no benefit in
      allocating gem-objects separately.
      
      Cc: Dave Airlie <airlied@gmail.com>
      Cc: Alex Deucher <alexdeucher@gmail.com>
      Cc: Daniel Vetter <daniel@ffwll.ch>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: Rob Clark <robdclark@gmail.com>
      Cc: Inki Dae <inki.dae@samsung.com>
      Cc: Ben Skeggs <skeggsb@gmail.com>
      Cc: Patrik Jakobsson <patrik.r.jakobsson@gmail.com>
      Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
      Acked-by: Alex Deucher <alexander.deucher@amd.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
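      A minimal sketch of the embedding pattern the commit advocates, with
      hypothetical example_* names: the GEM object lives inside the driver's
      buffer object and is initialised in place with drm_gem_object_init()
      instead of being allocated separately.

      #include <linux/slab.h>
      #include <drm/drm_gem.h>

      /* Driver buffer object with the GEM object embedded rather than
       * obtained from drm_gem_object_alloc(). */
      struct example_bo {
              struct drm_gem_object gem_base;
              /* driver-private fields follow */
      };

      static int example_bo_create(struct drm_device *dev, size_t size,
                                   struct example_bo **bo_out)
      {
              struct example_bo *bo;
              int ret;

              bo = kzalloc(sizeof(*bo), GFP_KERNEL);
              if (!bo)
                      return -ENOMEM;

              ret = drm_gem_object_init(dev, &bo->gem_base, size);
              if (ret) {
                      kfree(bo);
                      return ret;
              }

              *bo_out = bo;
              return 0;
      }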
  27. 19 August 2013, 1 commit
  28. 07 August 2013, 2 commits
  29. 24 July 2013, 2 commits
    • qxl: convert qxl driver to proper use for reservations · 8002db63
      Authored by Dave Airlie
      The recent addition of lockdep support to reservations and their subsequent
      use by TTM showed up a number of potential problems with the way qxl was using
      TTM objects.
      
      a) it was allocating objects, and reserving them later without validating
      underneath the reservation, which meant in extreme conditions the objects could
      be evicted before the reservation ever used them.
      
      b) it was reserving objects straight after allocating them, but with no
      ability to back off should the reservations fail. It now allocates the necessary
      objects then does a complete reservation pass on them to avoid deadlocks.
      
      c) it had two lists per release tracking objects, unnecessarily
      complicating the reservation process.

      This patch removes the dual object tracking and adds reservation
      ticket support to the release and fence object handling. It then
      ports the internal fb drawing code and the userspace-facing ioctl to
      use the new interfaces properly, along with cleaning up the error
      path handling in some code paths.
      Signed-off-by: Dave Airlie <airlied@redhat.com>
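      A rough sketch of ticketed reservation with back-off, using a
      ww_acquire_ctx as the ticket; this is a generic two-object example
      under assumed example_* types, not the qxl release code itself.

      #include <linux/kernel.h>
      #include <linux/ww_mutex.h>

      static DEFINE_WW_CLASS(example_ww_class);

      struct example_obj {
              struct ww_mutex lock;
      };

      /* Reserve two objects under one ticket, backing off on -EDEADLK
       * instead of deadlocking. */
      static int example_reserve_pair(struct example_obj *a,
                                      struct example_obj *b,
                                      struct ww_acquire_ctx *ticket)
      {
              int ret;

              ww_acquire_init(ticket, &example_ww_class);

              ret = ww_mutex_lock(&a->lock, ticket);
              if (ret)
                      goto out_fini;

              for (;;) {
                      ret = ww_mutex_lock(&b->lock, ticket);
                      if (ret != -EDEADLK)
                              break;
                      /* Back off: drop what we hold, wait for the contended
                       * lock, then retry the remaining one. */
                      ww_mutex_unlock(&a->lock);
                      ww_mutex_lock_slow(&b->lock, ticket);
                      swap(a, b);
              }
              if (ret) {
                      ww_mutex_unlock(&a->lock);
                      goto out_fini;
              }

              ww_acquire_done(ticket);        /* both objects now reserved */
              return 0;

      out_fini:
              ww_acquire_fini(ticket);
              return ret;
      }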
    • drm/qxl: add delayed fb operations · 0665f9f8
      Authored by Dave Airlie
      Due to the nature of qxl hw we cannot queue operations while in an irq
      context, so we queue these operations as best we can until atomic allocations
      fail, and dequeue them later in a work queue.
      
      Daniel looked over the locking on the list and agrees it should be
      sufficient.

      The atomic allocations use the no-warn flag, since the last thing we
      want when we lack the memory to allocate space for a printk in an irq
      context is more printks.
      Signed-off-by: Dave Airlie <airlied@redhat.com>
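      A sketch of the deferral pattern described above, under assumed
      example_* names: operations are queued from irq context with atomic,
      no-warn allocations and drained later from a work item.

      #include <linux/kernel.h>
      #include <linux/list.h>
      #include <linux/slab.h>
      #include <linux/spinlock.h>
      #include <linux/workqueue.h>

      struct example_deferred_op {
              struct list_head head;
              /* parameters of the deferred drawing operation go here */
      };

      struct example_fbdev {
              spinlock_t delayed_lock;
              struct list_head delayed_ops;
              struct work_struct delayed_work;
      };

      /* Called from irq context: allocate atomically and never warn on
       * failure, since printing from here is what we are trying to avoid. */
      static bool example_defer_op(struct example_fbdev *fb)
      {
              struct example_deferred_op *op;
              unsigned long flags;

              op = kmalloc(sizeof(*op), GFP_ATOMIC | __GFP_NOWARN);
              if (!op)
                      return false;   /* caller drops or retries the op */

              spin_lock_irqsave(&fb->delayed_lock, flags);
              list_add_tail(&op->head, &fb->delayed_ops);
              spin_unlock_irqrestore(&fb->delayed_lock, flags);

              schedule_work(&fb->delayed_work);
              return true;
      }

      /* Runs later in process context and drains whatever was queued. */
      static void example_delayed_work_fn(struct work_struct *work)
      {
              struct example_fbdev *fb =
                      container_of(work, struct example_fbdev, delayed_work);
              struct example_deferred_op *op, *tmp;
              unsigned long flags;
              LIST_HEAD(done);

              spin_lock_irqsave(&fb->delayed_lock, flags);
              list_splice_init(&fb->delayed_ops, &done);
              spin_unlock_irqrestore(&fb->delayed_lock, flags);

              list_for_each_entry_safe(op, tmp, &done, head) {
                      /* perform the queued drawing operation, then free */
                      list_del(&op->head);
                      kfree(op);
              }
      }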
  30. 05 July 2013, 1 commit