1. 05 Mar, 2015 1 commit
  2. 30 Sep, 2014 1 commit
  3. 02 Sep, 2014 1 commit
  4. 01 Sep, 2014 1 commit
  5. 27 Aug, 2014 1 commit
  6. 26 May, 2014 1 commit
  7. 20 Nov, 2013 1 commit
    • drm/ttm: Remove set_need_resched from the ttm fault handler · c58f009e
      Committed by Thomas Hellstrom
      Addresses
      "[BUG] completely bonkers use of set_need_resched + VM_FAULT_NOPAGE".
      
      In the first occurrence it was used to try to be nice while releasing the
      mmap_sem and retrying the fault to work around a locking inversion.
      The second occurrence was never used.
      
      There has been some discussion about whether we should change the locking
      order to mmap_sem -> bo_reserve. This patch doesn't address that issue and
      leaves the locking order undefined. The solution of releasing the mmap_sem
      if tryreserve fails and waiting for the buffer to become unreserved is
      something we want in any case, and it follows how the core vm system waits
      for pages to become unlocked while releasing the mmap_sem.
      
      The code also outlines what needs to be changed if we want to establish the
      locking order as mmap_sem -> bo::reserve.
      
      One slight issue that remains with this code is that the fault handler might
      be prone to starvation if another thread continuously reserves the buffer.
      IMO that usage pattern is highly unlikely.
      Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
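
      A minimal sketch of the retry pattern this commit describes, assuming the
      fault-handler environment of that era (mmap_sem, ttm_bo_reserve(),
      ttm_bo_wait_unreserved(); exact signatures varied across kernel versions,
      and error handling is trimmed):

          static int ttm_bo_vm_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
          {
                  struct ttm_buffer_object *bo = vma->vm_private_data;
                  int ret;

                  /* Try-reserve only; never block on bo::reserve with mmap_sem held. */
                  ret = ttm_bo_reserve(bo, true, true, false, NULL);
                  if (unlikely(ret != 0)) {
                          if (ret != -EBUSY)
                                  return VM_FAULT_NOPAGE;
                          if ((vmf->flags & FAULT_FLAG_ALLOW_RETRY) &&
                              !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT)) {
                                  /* Hold a reference, drop mmap_sem, wait for the
                                   * buffer to become unreserved, then let the core
                                   * VM retry the fault. */
                                  ttm_bo_reference(bo);
                                  up_read(&vma->vm_mm->mmap_sem);
                                  (void) ttm_bo_wait_unreserved(bo);
                                  ttm_bo_unref(&bo);
                                  return VM_FAULT_RETRY;
                          }
                          return VM_FAULT_NOPAGE;
                  }

                  /* ... fault in pages under the reservation ... */
                  ttm_bo_unreserve(bo);
                  return VM_FAULT_NOPAGE;
          }
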
  8. 25 Jul, 2013 1 commit
    • drm/ttm: convert to unified vma offset manager · 72525b3f
      Committed by David Herrmann
      Use the new vma-manager infrastructure. This doesn't change any
      implementation details as the vma-offset-manager is nearly copied 1-to-1
      from TTM.
      
      The vm_lock is moved into the offset manager so we can drop it from TTM.
      During lookup, we use the vma locking helpers to take a reference to the
      found object.
      In all other scenarios, locking stays the same as before. We always
      guarantee that drm_vma_offset_remove() is called only during destruction.
      Hence, helpers like drm_vma_node_offset_addr() are always safe as long as
      the node has a valid offset.
      
      This also drops the addr_space_offset member as it is a copy of vm_start
      in vma_node objects. Use the accessor functions instead.
      
      v4:
       - remove vm_lock
       - use drm_vma_offset_lock_lookup() to protect lookup (instead of vm_lock)
      
      Cc: Dave Airlie <airlied@redhat.com>
      Cc: Ben Skeggs <bskeggs@redhat.com>
      Cc: Maarten Lankhorst <maarten.lankhorst@canonical.com>
      Cc: Martin Peres <martin.peres@labri.fr>
      Cc: Alex Deucher <alexander.deucher@amd.com>
      Cc: Thomas Hellstrom <thellstrom@vmware.com>
      Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
      Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Signed-off-by: Dave Airlie <airlied@gmail.com>
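
      A short sketch of the lookup pattern described above. The
      drm_vma_offset_lock_lookup() / drm_vma_offset_lookup_locked() helpers are
      the real drm_vma_manager API; the manager pointer (mgr) and the embedding
      of vma_node in ttm_buffer_object are assumed from context:

          struct drm_vma_offset_node *node;
          struct ttm_buffer_object *bo = NULL;

          drm_vma_offset_lock_lookup(mgr);        /* replaces TTM's old vm_lock */
          node = drm_vma_offset_lookup_locked(mgr, vma->vm_pgoff,
                                              vma_pages(vma));
          if (node) {
                  bo = container_of(node, struct ttm_buffer_object, vma_node);
                  /* Take a reference while still under the lookup lock so the
                   * object cannot be destroyed between lookup and use. */
                  kref_get(&bo->kref);
          }
          drm_vma_offset_unlock_lookup(mgr);
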
  9. 28 Jun, 2013 3 commits
  10. 10 Dec, 2012 1 commit
    • drm/ttm: remove no_wait_reserve, v3 · 97a875cb
      Committed by Maarten Lankhorst
      All items on the lru list are always reservable, so this is a stupid
      thing to keep. Not only that, it is used in a way which would
      guarantee deadlocks if it were ever to be set to block on reserve.
      
      This is a lot of churn, but mostly because the removed argument was
      passed down arbitrarily deeply in many places.

      No code changes in this patch other than the removal of the
      no_wait_reserve argument; the previous patch removed its last users.
      
      v2:
       - Warn if -EBUSY is returned on reservation; all objects on the list
         should be reservable. Adjusted the patch slightly due to conflicts.
      v3:
       - Focus on no_wait_reserve removal only.
      Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
      Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
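
      A hedged illustration of the invariant behind the v2 warning (the list and
      helper names are simplified from the TTM code of that era):

          struct ttm_buffer_object *bo;
          int ret = -EBUSY;

          /* Walk the LRU looking for a buffer to evict. A reserved buffer is
           * taken off the LRU lists, so everything still on the list must be
           * reservable without blocking; -EBUSY indicates a broken invariant,
           * not a condition to wait on. */
          list_for_each_entry(bo, &man->lru, lru) {
                  ret = ttm_bo_reserve(bo, false, true, false, 0);
                  if (ret == 0)
                          break;                  /* reserved; evict this one */
                  WARN_ON_ONCE(ret == -EBUSY);
          }
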
  11. 20 Nov, 2012 3 commits
  12. 07 Nov, 2012 1 commit
  13. 03 Oct, 2012 1 commit
  14. 23 May, 2012 1 commit
  15. 06 Dec, 2011 2 commits
  16. 28 Oct, 2011 1 commit
  17. 01 Sep, 2011 1 commit
    • drm/ttm: add a way to bo_wait for either the last read or last write · dfadbbdb
      Committed by Marek Olšák
      Sometimes we want to know whether a buffer is busy and wait for it (bo_wait).
      However, sometimes it would be more useful to be able to query whether
      a buffer is busy and being either read or written, and wait until it's stopped
      being either read or written. The point of this is to be able to avoid
      unnecessary waiting, e.g. if a GPU has written something to a buffer and is now
      reading that buffer, and a CPU wants to map that buffer for read, it only
      needs to wait for the last write. If there was no write, no waiting is
      needed at all.
      
      This, of course, requires user space drivers to send read/write flags
      with each relocation (like we have read/write domains in radeon, so we can
      actually use those for something useful now).
      
      Now how this patch works:
      
      The read/write flags should be passed to ttm_validate_buffer. TTM maintains
      separate sync objects for the last read and the last write of each buffer,
      in addition to the sync object for the last use of a buffer. ttm_bo_wait
      then operates on one of these sync objects.
      Signed-off-by: Marek Olšák <maraeo@gmail.com>
      Reviewed-by: Jerome Glisse <jglisse@redhat.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
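
      A hypothetical sketch of the idea (the struct and helper names below are
      illustrative, not the patch's actual identifiers):

          struct my_bo {
                  void *sync_obj;         /* fence for the last use, read or write */
                  void *sync_obj_write;   /* fence for the last write only */
          };

          /* Pick what a CPU mapping must wait on: a CPU read only conflicts
           * with GPU writes, while a CPU write conflicts with any prior GPU
           * use (reads and writes). */
          static void *sync_obj_for_cpu_access(struct my_bo *bo, bool cpu_writes)
          {
                  return cpu_writes ? bo->sync_obj : bo->sync_obj_write;
          }
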
  18. 21 Jun, 2011 1 commit
  19. 05 Apr, 2011 1 commit
  20. 31 Mar, 2011 1 commit
  21. 22 Nov, 2010 3 commits
  22. 10 Nov, 2010 1 commit
  23. 06 Oct, 2010 1 commit
  24. 05 Oct, 2010 1 commit
  25. 18 May, 2010 1 commit
  26. 20 Apr, 2010 1 commit
    • drm/ttm: ttm_fault callback to allow driver to handle bo placement V6 · 82c5da6b
      Committed by Jerome Glisse
      On fault the driver is given the opportunity to perform any operation
      it sees fit in order to place the buffer into a CPU visible area of
      memory. This patch doesn't break TTM users; nouveau, vmwgfx and radeon
      should keep working properly. A future patch will take advantage of this
      infrastructure and remove the old path from TTM once drivers are
      converted.
      
      V2 return VM_FAULT_NOPAGE if the callback returns -EBUSY or -ERESTARTSYS
      V3 balance io_mem_reserve and io_mem_free calls; fault_reserve_notify
         is responsible for performing any task necessary for the mapping to
         succeed
      V4 minor cleanup, atomic_t -> bool as the member is protected from
         concurrent access by the reserve mechanism
      V5 the callback is now responsible for iomapping the bo and providing
         a virtual address; this simplifies TTM and will allow getting rid of
         TTM_MEMTYPE_FLAG_NEEDS_IOREMAP
      V6 use the bus addr data to decide whether an ioremap is needed; the
         callback doesn't necessarily need to ioremap itself, so drivers can
         still use a static mapping
      Signed-off-by: Jerome Glisse <jglisse@redhat.com>
      Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
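
      A driver-side sketch of the V5/V6 scheme, loosely modeled on how drivers
      filled in the bus placement at the time; the field names follow the
      historical struct ttm_bus_placement, and my_vram_bus_base (the PCI BAR
      base) is an assumption:

          static int my_io_mem_reserve(struct ttm_bo_device *bdev,
                                       struct ttm_mem_reg *mem)
          {
                  mem->bus.addr = NULL;   /* NULL: TTM ioremaps for us; a driver
                                           * with a static mapping fills this in */
                  mem->bus.offset = 0;
                  mem->bus.size = mem->num_pages << PAGE_SHIFT;
                  mem->bus.base = 0;
                  mem->bus.is_iomem = false;

                  switch (mem->mem_type) {
                  case TTM_PL_SYSTEM:
                          return 0;       /* system RAM: nothing to map */
                  case TTM_PL_VRAM:
                          mem->bus.offset = mem->mm_node->start << PAGE_SHIFT;
                          mem->bus.base = my_vram_bus_base;
                          mem->bus.is_iomem = true;
                          break;
                  default:
                          return -EINVAL;
                  }
                  return 0;
          }

          static void my_io_mem_free(struct ttm_bo_device *bdev,
                                     struct ttm_mem_reg *mem)
          {
                  /* nothing to undo in this sketch */
          }
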
  27. 08 Apr, 2010 1 commit
  28. 11 Dec, 2009 1 commit
  29. 10 Dec, 2009 2 commits
    • drm/ttm: Have the TTM code return -ERESTARTSYS instead of -ERESTART. · 98ffc415
      Committed by Thomas Hellstrom
      Return -ERESTARTSYS instead of -ERESTART when interrupted by a signal.
      The -ERESTARTSYS is converted to an -EINTR by the kernel signal layer
      before being returned to user-space.
      Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
      Signed-off-by: Jerome Glisse <jglisse@redhat.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
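
      The pattern in practice (a generic sketch, not the patch's exact code;
      bo_is_idle() is a hypothetical condition): interruptible kernel waits
      already return -ERESTARTSYS, so TTM simply propagates it:

          ret = wait_event_interruptible(bo->event_queue, bo_is_idle(bo));
          if (unlikely(ret != 0))
                  return ret;     /* -ERESTARTSYS; the signal layer converts it
                                   * to -EINTR (or restarts the syscall) before
                                   * user-space sees it */
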
    • drm/ttm: Rework validation & memory space allocation (V3) · ca262a99
      Committed by Jerome Glisse
      This change allows drivers to pass sorted memory placements,
      from most preferred to least preferred. To avoid a long
      function prototype, a structure is used to gather memory
      placement information such as range restrictions (if you need
      a buffer to be in a given range). A range restriction is
      determined by fpfn & lpfn, the first and last page numbers
      between which allocation can happen. If those fields are set
      to 0, TTM assumes the buffer can be put anywhere in the
      address space (avoiding the burden on drivers of always
      having to set those fields properly).
      
      This patch also factors out a few functions, such as evicting
      the first entry of the LRU list or getting memory space, to
      avoid code duplication.
      
      V2: Change API to use placement flags and array instead
          of packing placement order into a quadword.
      V3: Make sure we set the appropriate mem.placement flag
          when validating or allocating memory space.
      
      [Pending Thomas Hellstrom's further review, but okay from
      preliminary review so far.]
      Signed-off-by: Jerome Glisse <jglisse@redhat.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
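
      An illustrative use of the resulting API (the fields match the historical
      struct ttm_placement layout; the ttm_bo_validate() signature shifted
      across versions, so treat the call as a sketch):

          /* placements ordered from most to least preferred */
          static const uint32_t placements[] = {
                  TTM_PL_FLAG_VRAM | TTM_PL_FLAG_WC,
                  TTM_PL_FLAG_TT   | TTM_PL_FLAG_CACHED,
          };

          struct ttm_placement placement = {
                  .fpfn = 0,      /* fpfn == lpfn == 0: no range restriction */
                  .lpfn = 0,
                  .num_placement = ARRAY_SIZE(placements),
                  .placement = placements,
          };

          ret = ttm_bo_validate(bo, &placement, true /* interruptible */,
                                false /* no_wait */);
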
  30. 19 Aug, 2009 2 commits
  31. 15 Jun, 2009 1 commit
    • drm: Add the TTM GPU memory manager subsystem. · ba4e7d97
      Committed by Thomas Hellstrom
      TTM is a GPU memory manager subsystem designed for use with GPU
      devices with various memory types (On-card VRAM, AGP,
      PCI apertures etc.). It's essentially a helper library that assists
      the DRM driver in creating and managing persistent buffer objects.
      
      TTM manages placement of data and CPU map setup and teardown on
      data movement. It can also optionally manage synchronization of
      data on a per-buffer-object level.
      
      TTM takes care to provide an always valid virtual user-space address
      to a buffer object which makes user-space sub-allocation of
      big buffer objects feasible.
      
      TTM uses a fine-grained per buffer-object locking scheme, taking
      care to release all relevant locks when waiting for the GPU.
      Although this implies some locking overhead, it's probably a big
      win for devices with multiple command submission mechanisms, since
      the lock contention will be minimal.
      
      TTM can be used with whatever user-space interface the driver
      chooses, including GEM. It's used by the upcoming Radeon KMS DRM driver
      and is also the GPU memory management core of various new experimental
      DRM drivers.
      Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
      Signed-off-by: Jerome Glisse <jglisse@redhat.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>