1. 05 Mar 2015, 1 commit
  2. 04 Dec 2014, 1 commit
  3. 30 Sep 2014, 1 commit
  4. 11 Sep 2014, 1 commit
  5. 02 Sep 2014, 1 commit
  6. 01 Sep 2014, 3 commits
  7. 27 Aug 2014, 1 commit
  8. 09 Aug 2014, 1 commit
  9. 08 Jul 2014, 2 commits
  10. 26 May 2014, 1 commit
  11. 04 Apr 2014, 2 commits
  12. 28 Mar 2014, 1 commit
  13. 16 Mar 2014, 1 commit
    • drm: init TTM dev_mapping in ttm_bo_device_init() · 44d847b7
      Committed by David Herrmann
      With dev->anon_inode we have a global address_space ready for operation
      right from the beginning. Therefore, there is no need to do a delayed
      setup with TTM. Instead, set dev_mapping during initialization in
      ttm_bo_device_init() and remove any "if (dev_mapping)" conditions.
      
      Cc: Dave Airlie <airlied@redhat.com>
      Cc: Ben Skeggs <bskeggs@redhat.com>
      Cc: Maarten Lankhorst <maarten.lankhorst@canonical.com>
      Cc: Alex Deucher <alexdeucher@gmail.com>
      Cc: Thomas Hellstrom <thellstrom@vmware.com>
      Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
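
      A minimal sketch of the resulting driver init path, assuming the post-patch
      ttm_bo_device_init() parameter list of this kernel era (which takes the
      address_space up front); the foo_* names and FOO_FILE_PAGE_OFFSET are
      illustrative, not taken from any real driver:

      #include <drm/drmP.h>
      #include <drm/ttm/ttm_bo_driver.h>

      /* Illustrative per-driver bits; none of these names come from a real driver. */
      #define FOO_FILE_PAGE_OFFSET (0x100000000ULL >> PAGE_SHIFT)

      static struct ttm_bo_driver foo_bo_driver;      /* callbacks omitted in this sketch */

      struct foo_device {
              struct drm_device *ddev;
              struct ttm_bo_device bdev;
              struct ttm_bo_global *bo_glob;
              bool need_dma32;
      };

      static int foo_ttm_init(struct foo_device *fdev)
      {
              struct drm_device *dev = fdev->ddev;

              /* dev->anon_inode->i_mapping is valid from device creation onwards,
               * so TTM gets its dev_mapping here instead of via a delayed setup. */
              return ttm_bo_device_init(&fdev->bdev, fdev->bo_glob, &foo_bo_driver,
                                        dev->anon_inode->i_mapping,
                                        FOO_FILE_PAGE_OFFSET, fdev->need_dma32);
      }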
  14. 18 Feb 2014, 1 commit
  15. 08 Jan 2014, 2 commits
  16. 20 Nov 2013, 1 commit
    • drm/ttm: Remove set_need_resched from the ttm fault handler · c58f009e
      Committed by Thomas Hellstrom
      Addresses
      "[BUG] completely bonkers use of set_need_resched + VM_FAULT_NOPAGE".
      
      In the first occurrence it was used to try to be nice while releasing the
      mmap_sem and retrying the fault to work around a locking inversion.
      The second occurrence was never used.
      
      There has been some discussion about whether we should change the locking order
      to mmap_sem -> bo_reserve. This patch doesn't address that issue and leaves the
      locking order undefined. Releasing the mmap_sem when tryreserve fails and
      waiting for the buffer to become unreserved is something we want in any case,
      and it follows how the core VM waits for pages to become unlocked while
      releasing the mmap_sem.
      
      The code also outlines what needs to be changed if we want to establish the
      locking order as mmap_sem -> bo::reserve.
      
      One slight issue that remains with this code is that the fault handler might
      be prone to starvation if another thread continuously reserves the buffer.
      IMO that usage pattern is highly unlikely.
      Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
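
      A minimal sketch of the fault-handler pattern described above (not the patch
      itself): the reserve is attempted without blocking, and on contention the
      mmap_sem is dropped before waiting so the mmap_sem vs. bo::reserve ordering
      stays unconstrained. The exact ttm_bo_reserve() argument list is assumed from
      this kernel era, and foo_bo_wait_unreserved() is a hypothetical stand-in for
      the real wait helper:

      static int foo_ttm_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
      {
              struct ttm_buffer_object *bo = vma->vm_private_data;
              int ret;

              /* Non-blocking try-reserve: never sleep on bo::reserve with mmap_sem held. */
              ret = ttm_bo_reserve(bo, true, true, false, NULL);
              if (unlikely(ret == -EBUSY)) {
                      if ((vmf->flags & FAULT_FLAG_ALLOW_RETRY) &&
                          !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT)) {
                              /* Real code takes a reference on bo before dropping
                               * the mmap_sem; omitted to keep the sketch short. */
                              up_read(&vma->vm_mm->mmap_sem);
                              foo_bo_wait_unreserved(bo);     /* hypothetical wait helper */
                              return VM_FAULT_RETRY;
                      }
                      return VM_FAULT_NOPAGE;  /* cannot wait here; the VM retries the fault */
              }
              if (unlikely(ret != 0))
                      return VM_FAULT_NOPAGE;

              /* ... fault the pages in while the reservation is held ... */

              ttm_bo_unreserve(bo);
              return VM_FAULT_NOPAGE;
      }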
  17. 18 Nov 2013, 2 commits
  18. 06 Nov 2013, 1 commit
  19. 25 Jul 2013, 1 commit
    • drm/ttm: convert to unified vma offset manager · 72525b3f
      Committed by David Herrmann
      Use the new vma-manager infrastructure. This doesn't change any
      implementation details as the vma-offset-manager is nearly copied 1-to-1
      from TTM.
      
      The vm_lock is moved into the offset manager so we can drop it from TTM.
      During lookup, we use the vma locking helpers to take a reference to the
      found object.
      In all other scenarios, locking stays the same as before. We always
      guarantee that drm_vma_offset_remove() is called only during destruction.
      Hence, helpers like drm_vma_node_offset_addr() are always safe as long as
      the node has a valid offset.
      
      This also drops the addr_space_offset member as it is a copy of vm_start
      in vma_node objects. Use the accessor functions instead.
      
      v4:
       - remove vm_lock
       - use drm_vma_offset_lock_lookup() to protect lookup (instead of vm_lock)
      
      Cc: Dave Airlie <airlied@redhat.com>
      Cc: Ben Skeggs <bskeggs@redhat.com>
      Cc: Maarten Lankhorst <maarten.lankhorst@canonical.com>
      Cc: Martin Peres <martin.peres@labri.fr>
      Cc: Alex Deucher <alexander.deucher@amd.com>
      Cc: Thomas Hellstrom <thellstrom@vmware.com>
      Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
      Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Signed-off-by: Dave Airlie <airlied@gmail.com>
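
      A minimal sketch of an mmap-offset lookup after the conversion, assuming the
      bo_device embeds the manager as bdev->vma_manager and the buffer object embeds
      its node as bo->vma_node; the lookup helpers named in the message replace the
      old vm_lock, and the object reference is taken while the lookup lock is held:

      static struct ttm_buffer_object *
      foo_bo_lookup(struct ttm_bo_device *bdev, unsigned long pgoff, unsigned long npages)
      {
              struct drm_vma_offset_node *node;
              struct ttm_buffer_object *bo = NULL;

              drm_vma_offset_lock_lookup(&bdev->vma_manager);
              node = drm_vma_offset_lookup_locked(&bdev->vma_manager, pgoff, npages);
              if (node) {
                      bo = container_of(node, struct ttm_buffer_object, vma_node);
                      kref_get(&bo->kref);    /* reference taken under the lookup lock */
              }
              drm_vma_offset_unlock_lookup(&bdev->vma_manager);

              return bo;
      }

      Callers that previously read bo->addr_space_offset directly would now use the
      accessor instead, e.g. offset = drm_vma_node_offset_addr(&bo->vma_node).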
  20. 28 Jun 2013, 4 commits
  21. 12 Apr 2013, 1 commit
  22. 15 Jan 2013, 3 commits
  23. 10 Dec 2012, 1 commit
    • drm/ttm: remove no_wait_reserve, v3 · 97a875cb
      Committed by Maarten Lankhorst
      All items on the lru list are always reservable, so this is a stupid
      thing to keep. Not only that, it is used in a way which would
      guarantee deadlocks if it were ever to be set to block on reserve.
      
      This is a lot of churn, mostly because the removed argument was nested
      arbitrarily deeply in many places.
      
      No code changes in this patch other than the removal of the no_wait_reserve
      argument; the previous patch removed the last use of no_wait_reserve.
      
      v2:
       - Warn if -EBUSY is returned on reservation; all objects on the list
         should be reservable. Adjusted the patch slightly due to conflicts.
      v3:
       - Focus on no_wait_reserve removal only.
      Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
      Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
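
      A minimal sketch of the LRU-walk reasoning above, with hypothetical helper
      names (foo_bo_reserve_trylock() stands in for the era's non-blocking reserve):
      the walker never blocks on reserve, which is what the removed argument would
      have had to allow and which, per the message above, would guarantee deadlocks,
      and it warns on -EBUSY because, per v2, every object on the LRU should be
      reservable:

      static struct ttm_buffer_object *foo_evict_candidate(struct list_head *lru)
      {
              struct ttm_buffer_object *bo;

              list_for_each_entry(bo, lru, lru) {
                      int ret = foo_bo_reserve_trylock(bo);   /* hypothetical non-blocking reserve */

                      if (ret == 0)
                              return bo;      /* reserved; caller evicts and then unreserves */

                      /* Per the v2 note: everything on the LRU should be reservable,
                       * so -EBUSY points at a bug rather than expected contention. */
                      WARN_ON_ONCE(ret == -EBUSY);
              }

              return NULL;
      }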
  24. 20 Nov 2012, 6 commits