1. 06 Dec 2011 (2 commits)
  2. 28 Oct 2011 (1 commit)
  3. 01 Sep 2011 (1 commit)
    • drm/ttm: add a way to bo_wait for either the last read or last write · dfadbbdb
      Authored by Marek Olšák
      Sometimes we want to know whether a buffer is busy and wait for it (bo_wait).
      However, sometimes it would be more useful to be able to query whether
      a buffer is busy being either read or written, and wait until it has
      stopped being read or written. The point of this is to avoid unnecessary
      waiting, e.g. if a GPU has written something to a buffer and is now
      reading that buffer, and a CPU wants to map that buffer for read, it only
      needs to wait for the last write. If there had been no write, no waiting
      would be needed.
      
      This, of course, requires user space drivers to send read/write flags
      with each relocation (like we have read/write domains in radeon, so we can
      actually use those for something useful now).
      
      Now how this patch works:
      
      The read/write flags should be passed to ttm_validate_buffer. TTM maintains
      separate sync objects for the last read and the last write of each buffer,
      in addition to the sync object for the last use of the buffer. ttm_bo_wait
      then operates on one of these sync objects.
      Signed-off-by: Marek Olšák <maraeo@gmail.com>
      Reviewed-by: Jerome Glisse <jglisse@redhat.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
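The read/write tracking described in this commit can be illustrated with a small standalone sketch. This is a minimal userspace approximation, not the actual TTM code: busy_fence, demo_bo and demo_bo_wait are illustrative stand-ins for TTM's sync objects, buffer objects and ttm_bo_wait.

```c
#include <stdbool.h>
#include <stdio.h>

/* Illustrative stand-in for a sync object (fence). */
struct busy_fence {
    bool signaled;
};

/* The buffer tracks its last read and last write separately,
 * in addition to its last use overall. */
struct demo_bo {
    struct busy_fence *last_read;
    struct busy_fence *last_write;
    struct busy_fence *last_use;
};

enum demo_wait_mode { WAIT_LAST_USE, WAIT_LAST_READ, WAIT_LAST_WRITE };

/* Pick one of the sync objects and wait on it.  A CPU that only wants
 * to read the buffer just has to wait for the last GPU write. */
static void demo_bo_wait(struct demo_bo *bo, enum demo_wait_mode mode)
{
    struct busy_fence *fence =
        (mode == WAIT_LAST_READ)  ? bo->last_read  :
        (mode == WAIT_LAST_WRITE) ? bo->last_write : bo->last_use;

    if (fence && !fence->signaled) {
        /* Real code would block on the fence; here we just note it. */
        printf("waiting on fence %p\n", (void *)fence);
        fence->signaled = true;
    }
}

int main(void)
{
    struct busy_fence write_fence = { .signaled = false };
    struct demo_bo bo = { .last_write = &write_fence, .last_use = &write_fence };

    /* Map-for-read path: only the last write matters, so if the GPU is
     * merely reading the buffer there is nothing to wait for. */
    demo_bo_wait(&bo, WAIT_LAST_WRITE);
    return 0;
}
```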
  4. 27 Jul 2011 (1 commit)
  5. 21 Jun 2011 (1 commit)
  6. 05 Apr 2011 (1 commit)
  7. 31 Mar 2011 (1 commit)
  8. 23 Feb 2011 (2 commits)
  9. 28 Jan 2011 (2 commits)
  10. 22 Nov 2010 (6 commits)
  11. 10 Nov 2010 (1 commit)
  12. 09 Nov 2010 (1 commit)
  13. 21 Oct 2010 (1 commit)
  14. 19 Oct 2010 (1 commit)
  15. 06 Oct 2010 (1 commit)
  16. 05 Oct 2010 (2 commits)
  17. 04 Aug 2010 (1 commit)
  18. 07 Jul 2010 (1 commit)
    • drm/ttm: Allocate the page pool manager in the heap. · 5870a4d9
      Authored by Francisco Jerez
      Repeated ttm_page_alloc_init/fini cycles fail noisily because the pool
      manager kobj isn't zeroed out between uses (we could just zero it, but
      statically allocated kobjects are generally considered a bad thing).
      Move it to kzalloc'ed memory.
      
      Note that this patch drops the refcounting behavior of the pool
      allocator init/fini functions: it would have led to a race condition
      in its current form, and anyway it was never exploited.
      
      This fixes a regression with reloading KMS modules at runtime that has
      existed since the page allocator was introduced.
      Signed-off-by: Francisco Jerez <currojerez@riseup.net>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
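The allocation change above boils down to giving the pool manager fresh, zeroed storage on every init. Below is a minimal userspace sketch of that pattern, with calloc/free standing in for kzalloc/kfree and demo_pool_manager as a hypothetical stand-in for the real pool manager:

```c
#include <stdlib.h>

/* Stand-in for the pool manager; in the kernel it embeds a kobject,
 * which must start out zeroed every time the allocator initializes. */
struct demo_pool_manager {
    int kobj_state;          /* placeholder for the embedded kobject */
    unsigned int max_pages;
};

static struct demo_pool_manager *manager;

/* Heap allocation gives fresh, zeroed state on every init; a statically
 * allocated object would keep stale kobject state after fini. */
static int demo_page_alloc_init(unsigned int max_pages)
{
    manager = calloc(1, sizeof(*manager));   /* kzalloc() in the kernel */
    if (!manager)
        return -1;
    manager->max_pages = max_pages;
    return 0;
}

static void demo_page_alloc_fini(void)
{
    free(manager);                           /* kfree() in the kernel */
    manager = NULL;
}

int main(void)
{
    /* Repeated init/fini cycles now work, e.g. when a KMS module is
     * unloaded and reloaded at runtime. */
    for (int i = 0; i < 2; i++) {
        if (demo_page_alloc_init(64) == 0)
            demo_page_alloc_fini();
    }
    return 0;
}
```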
  19. 18 May 2010 (1 commit)
  20. 07 May 2010 (1 commit)
  21. 20 Apr 2010 (2 commits)
    • drm/ttm: remove io_ field from TTM V6 · 0c321c79
      Authored by Jerome Glisse
      All TTM drivers have been converted to the new io_mem_reserve/free
      interface, which allows a driver to choose and return the proper io
      base and offset to core TTM for ioremapping if necessary. This patch
      removes what is now dead code.
      
      V2 adapt to match the change in the first patch of the patchset
      V3 update after io_mem_reserve/io_mem_free callback balancing
      V4 adjust to minor cleanups
      V5 remove the needs-ioremap flag
      V6 keep the ioremapping facility in TTM
      
      [airlied- squashed driver removals in here also]
      Signed-off-by: Jerome Glisse <jglisse@redhat.com>
      Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
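The io_mem_reserve/io_mem_free interface mentioned above lets the driver, rather than fixed io_ fields, report where a buffer lives on the bus. The sketch below is a simplified standalone illustration of that idea; demo_bus_placement, demo_driver and the BAR address are hypothetical, not the real TTM structures:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative bus-placement record the driver fills in on demand. */
struct demo_bus_placement {
    uint64_t base;      /* physical base of the aperture */
    uint64_t offset;    /* offset of this buffer within the aperture */
    bool     is_iomem;  /* whether the core needs to ioremap it */
};

/* Illustrative driver callbacks replacing fixed io_ fields kept in the
 * per-memory-type description. */
struct demo_driver {
    int  (*io_mem_reserve)(uint64_t bo_offset, struct demo_bus_placement *pl);
    void (*io_mem_free)(struct demo_bus_placement *pl);
};

/* Example driver implementation: VRAM reachable through a PCI BAR. */
static int demo_vram_reserve(uint64_t bo_offset, struct demo_bus_placement *pl)
{
    pl->base = 0xd0000000ull;   /* hypothetical BAR base address */
    pl->offset = bo_offset;
    pl->is_iomem = true;
    return 0;
}

static void demo_vram_free(struct demo_bus_placement *pl)
{
    (void)pl;   /* nothing to release in this sketch */
}

static const struct demo_driver demo_drv = {
    .io_mem_reserve = demo_vram_reserve,
    .io_mem_free    = demo_vram_free,
};

int main(void)
{
    struct demo_bus_placement pl = { 0 };

    /* Core code asks the driver where the buffer lives on the bus and
     * ioremaps base + offset only if is_iomem says it has to. */
    if (demo_drv.io_mem_reserve(0x1000, &pl) == 0)
        demo_drv.io_mem_free(&pl);
    return 0;
}
```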
    • drm/ttm: ttm_fault callback to allow driver to handle bo placement V6 · 82c5da6b
      Authored by Jerome Glisse
      On fault, the driver is given the opportunity to perform any operation
      it sees fit in order to place the buffer into a CPU-visible area of
      memory. This patch doesn't break TTM users; nouveau, vmwgfx and radeon
      should keep working properly. A future patch will take advantage of this
      infrastructure and remove the old path from TTM once drivers are
      converted.
      
      V2 return VM_FAULT_NOPAGE if the callback returns -EBUSY or -ERESTARTSYS
      V3 balance io_mem_reserve and io_mem_free calls; fault_reserve_notify
         is responsible for performing any task necessary for the mapping to succeed
      V4 minor cleanup, atomic_t -> bool as the member is protected from
         concurrent access by the reserve mechanism
      V5 the callback is now responsible for ioremapping the bo and providing
         a virtual address; this simplifies TTM and will allow getting rid of
         TTM_MEMTYPE_FLAG_NEEDS_IOREMAP
      V6 use the bus address data to decide whether ioremapping is needed; we
         don't necessarily need to ioremap in the callback, but we still allow
         the driver to use a static mapping
      Signed-off-by: Jerome Glisse <jglisse@redhat.com>
      Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
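The fault-time flow described in the V2/V3 notes can be sketched in isolation: the driver hook gets a chance to place the buffer, and a busy or interrupted driver makes the handler ask for the fault to be retried. All names below (demo_bo, demo_bo_fault, demo_busy_notify) are illustrative stand-ins, and DEMO_ERESTARTSYS is defined locally because ERESTARTSYS is kernel-internal:

```c
#include <errno.h>
#include <stdio.h>

/* ERESTARTSYS is a kernel-internal errno value, so define a local
 * stand-in for this userspace sketch. */
#define DEMO_ERESTARTSYS 512

/* Simplified fault-handler results, mirroring VM_FAULT_* in spirit only. */
enum demo_fault_ret { DEMO_FAULT_OK, DEMO_FAULT_NOPAGE, DEMO_FAULT_SIGBUS };

struct demo_bo;

/* Illustrative driver hook: move or pin the buffer somewhere CPU-visible
 * before the fault handler maps it. */
typedef int (*demo_fault_reserve_notify_t)(struct demo_bo *bo);

struct demo_bo {
    demo_fault_reserve_notify_t fault_reserve_notify;
};

static enum demo_fault_ret demo_bo_fault(struct demo_bo *bo)
{
    int ret = bo->fault_reserve_notify ? bo->fault_reserve_notify(bo) : 0;

    /* If the driver cannot place the buffer right now, ask the MM layer
     * to retry the fault instead of failing it. */
    if (ret == -EBUSY || ret == -DEMO_ERESTARTSYS)
        return DEMO_FAULT_NOPAGE;
    if (ret)
        return DEMO_FAULT_SIGBUS;

    /* ...a real handler would insert the page mapping here... */
    return DEMO_FAULT_OK;
}

/* Example hook that is temporarily unable to place the buffer. */
static int demo_busy_notify(struct demo_bo *bo)
{
    (void)bo;
    return -EBUSY;
}

int main(void)
{
    struct demo_bo bo = { .fault_reserve_notify = demo_busy_notify };

    printf("fault result: %d\n", (int)demo_bo_fault(&bo));
    return 0;
}
```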
  22. 08 Apr 2010 (1 commit)
  23. 06 Apr 2010 (3 commits)
  24. 15 Mar 2010 (1 commit)
  25. 01 Mar 2010 (1 commit)
  26. 14 Jan 2010 (1 commit)
  27. 15 Dec 2009 (1 commit)
  28. 11 Dec 2009 (1 commit)