1. 02 Jun 2012, 1 commit
  2. 23 May 2012, 1 commit
  3. 20 Mar 2012, 1 commit
  4. 26 Jan 2012, 1 commit
    • drm/ttm: fix two regressions since move_notify changes · 9f1feed2
      Authored by Ben Skeggs
      Both changes in dc97b340 cause serious
      regressions in the nouveau driver.
      
      move_notify() was originally able to presume that bo->mem is the old node
      and new_mem is the new node.  The above commit moves the call to
      move_notify() to after move() has been done, which means that, sometimes,
      new_mem isn't the new node at all: bo->mem is, and new_mem points at a
      stale node that may just have been destroyed by the move.
      
      This is clearly not a good situation.  This patch reverts this change, and
      replaces it with a cleanup in the move() failure path instead.
      
      The second issue is that the call to move_notify() from cleanup_memtype_use()
      causes the TTM ghost objects to get passed into the driver.  This is clearly
      bad as the driver knows nothing about these "fake" TTM BOs, and ends up
      accessing uninitialised memory.
      
      I worked around this in nouveau's move_notify() hook by ensuring the BO
      destructor was nouveau's (see the sketch after this entry).  I don't
      particularly like this solution, and would rather TTM never passed the
      driver these objects.  However, I don't clearly understand why we're
      calling move_notify() here in the first place, and am happy to work
      around the problem in nouveau instead of breaking the behaviour other
      drivers expect.
      Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
      Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
      Cc: Jerome Glisse <j.glisse@gmail.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
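      A minimal sketch of the nouveau workaround described above, assuming the
      entry-point and destructor names used by the nouveau driver of that era
      (nouveau_bo_move_ntfy, nouveau_bo_del_ttm); an illustration, not the
      verbatim patch:

          /* Ignore TTM "ghost" BOs: they carry TTM's own destructor, and
           * nouveau knows nothing about them, so bail out before touching
           * any driver-private state. */
          static void
          nouveau_bo_move_ntfy(struct ttm_buffer_object *bo,
                               struct ttm_mem_reg *new_mem)
          {
                  if (bo->destroy != nouveau_bo_del_ttm)
                          return;

                  /* From here on it is safe to cast to nouveau's own BO
                   * type and update per-BO driver state for new_mem. */
          }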
  5. 06 Dec 2011, 4 commits
  6. 23 Nov 2011, 1 commit
  7. 28 Oct 2011, 1 commit
  8. 05 Oct 2011, 1 commit
  9. 14 Sep 2011, 1 commit
  10. 01 Sep 2011, 1 commit
    • drm/ttm: add a way to bo_wait for either the last read or last write · dfadbbdb
      Authored by Marek Olšák
      Sometimes we want to know whether a buffer is busy and wait for it (bo_wait).
      However, sometimes it would be more useful to be able to query whether
      a buffer is busy and being either read or written, and wait until it's stopped
      being either read or written. The point is to avoid unnecessary waiting:
      e.g. if the GPU has written something to a buffer and is now reading that
      buffer, and the CPU wants to map that buffer for read, it only needs to
      wait for the last write.  If there had been no write, no waiting would be
      needed at all.
      
      This, of course, requires userspace drivers to send read/write flags
      with each relocation (we already have read/write domains in radeon, so we
      can actually use those for something useful now).
      
      Now, how this patch works:
      
      The read/write flags should be passed to ttm_validate_buffer.  TTM
      maintains separate sync objects for the last read and the last write of
      each buffer, in addition to the sync object for the last use of a buffer.
      ttm_bo_wait then operates on one of these sync objects (see the sketch
      after this entry).
      Signed-off-by: Marek Olšák <maraeo@gmail.com>
      Reviewed-by: Jerome Glisse <jglisse@redhat.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
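      A rough illustration of the scheme; the enum and field names below are
      assumptions made for the sketch, not necessarily the patch's exact API:

          /* Which kind of access to wait on. */
          enum ttm_buffer_usage {
                  TTM_USAGE_READ      = 1,
                  TTM_USAGE_WRITE     = 2,
                  TTM_USAGE_READWRITE = TTM_USAGE_READ | TTM_USAGE_WRITE,
          };

          /* Pick the sync object that ttm_bo_wait should wait on: the
           * last read, the last write, or the last use of any kind. */
          static void *ttm_bo_select_sync_obj(struct ttm_buffer_object *bo,
                                              enum ttm_buffer_usage usage)
          {
                  switch (usage) {
                  case TTM_USAGE_READ:
                          return bo->sync_obj_read;
                  case TTM_USAGE_WRITE:
                          return bo->sync_obj_write;
                  default:
                          return bo->sync_obj;
                  }
          }

      A CPU mapping the buffer for read would then wait with TTM_USAGE_WRITE,
      blocking only until the last GPU write has finished.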
  11. 23 Aug 2011, 2 commits
  12. 27 Jul 2011, 1 commit
  13. 05 Apr 2011, 1 commit
  14. 23 Feb 2011, 1 commit
  15. 24 Dec 2010, 1 commit
    • drm/ttm: use cancel_delayed_work_sync() in ttm_bo · f094cfc6
      Authored by Tejun Heo
      Make ttm_bo::ttm_bo_device_release call cancel_delayed_work_sync()
      instead of calling cancel_delayed_work() followed by
      flush_scheduled_work().
      
      This is to prepare for the deprecation and removal of
      flush_scheduled_work().
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Thomas Hellstrom <thellstrom@vmware.com>
      Cc: Dave Airlie <airlied@redhat.com>
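      In code terms the change amounts to something like the following in
      ttm_bo_device_release() (a sketch reconstructed from the description
      above, assuming the delayed work lives in bdev->wq; not a verbatim diff):

          /* before: cancel, then flush the global system workqueue if the
           * delayed work was still pending */
          if (cancel_delayed_work(&bdev->wq))
                  flush_scheduled_work();

          /* after: a single call that cancels the work and waits for any
           * in-flight instance to finish, without touching the global
           * system workqueue */
          cancel_delayed_work_sync(&bdev->wq);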
  16. 22 Nov 2010, 7 commits
  17. 18 Nov 2010, 1 commit
  18. 10 Nov 2010, 1 commit
  19. 09 Nov 2010, 6 commits
  20. 21 Oct 2010, 2 commits
  21. 19 Oct 2010, 2 commits
  22. 06 Oct 2010, 1 commit
  23. 05 Oct 2010, 1 commit