1. 07 Jul, 2018 1 commit
  2. 06 Jul, 2018 1 commit
  3. 03 Jul, 2018 1 commit
  4. 29 Jun, 2018 1 commit
  5. 08 Jun, 2018 2 commits
  6. 06 Jun, 2018 1 commit
  7. 05 Jun, 2018 1 commit
  8. 04 May, 2018 1 commit
    • drm/i915: Lazily unbind vma on close · 3365e226
      Committed by Chris Wilson
      When userspace is passing around swapbuffers using DRI, we frequently
      have to open and close the same object in the foreign address space.
      This shows itself as the same object being rebound at roughly 30fps
      (with a second object also being rebound at 30fps), which involves us
      having to rewrite the page tables and maintain the drm_mm range manager
      every time.
      
      However, since the object still exists and it is only the local handle
      that disappears, if we are lazy and do not unbind the VMA immediately
      when the local user closes the object but defer it until the GPU is
      idle, then we can reuse the same VMA binding. We still have to be
      careful to mark the handle and lookup tables as closed to maintain the
      uABI, just allowing the underlying VMA to be resurrected if the user is
      able to access the same object from the same context again.
      
      If the object itself is destroyed (with userspace no longer keeping a
      handle to it), the VMA will be reaped immediately as usual.
      
      In the future, this will be even more useful as instantiating a new VMA
      for use on the GPU will become heavier. A nuisance indeed, so nip it in
      the bud.
      
      v2: s/__i915_vma_final_close/i915_vma_destroy/ etc.
      v3: Leave a hint as to why we deferred the unbind on close.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20180503195115.22309-1-chris@chris-wilson.co.uk
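      To make the mechanism concrete, below is a minimal, self-contained C
      sketch of the deferred-unbind idea described in this commit message.
      Every name in it (struct vma, vma_close(), reap_closed_vmas(), and so
      on) is a hypothetical simplification, not the driver's API; the real
      code operates on struct i915_vma and ties the reap to actual GPU
      idling.

      #include <stdbool.h>
      #include <stdio.h>
      #include <stdlib.h>

      struct vma {
              int obj_id;           /* id of the backing object */
              bool closed;          /* handle closed, binding kept alive */
              struct vma *next;     /* link on the deferred-unbind list */
      };

      static struct vma *closed_list;   /* vmas awaiting reap at GPU idle */

      /* Handle closed: do not unbind yet, just mark the vma and defer. */
      static void vma_close(struct vma *v)
      {
              if (v->closed)
                      return;
              v->closed = true;
              v->next = closed_list;
              closed_list = v;
      }

      /* Same object reused from the same context: resurrect the binding. */
      static void vma_reopen(struct vma *v)
      {
              v->closed = false;    /* reap will skip it; no rebind needed */
      }

      /* GPU idle: tear down bindings whose handles stayed closed. */
      static void reap_closed_vmas(void)
      {
              struct vma *v = closed_list;

              closed_list = NULL;
              while (v) {
                      struct vma *next = v->next;

                      if (v->closed) {
                              printf("unbinding vma of object %d\n", v->obj_id);
                              free(v);
                      }
                      v = next;
              }
      }

      int main(void)
      {
              struct vma *v = calloc(1, sizeof(*v));

              v->obj_id = 42;
              vma_close(v);         /* DRI client drops its handle */
              vma_reopen(v);        /* same buffer comes back next frame */
              reap_closed_vmas();   /* GPU idle: binding survives, no rebind */

              vma_close(v);
              reap_closed_vmas();   /* handle stayed closed: now unbound */
              return 0;
      }

      The point of the design is visible in the first reap: because the
      swapbuffers client reopened the object before the GPU went idle, the
      binding is never torn down, so the 30fps rebind cycle disappears.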
  9. 22 Feb, 2018 1 commit
  10. 12 Dec, 2017 1 commit
  11. 08 Dec, 2017 1 commit
  12. 07 Dec, 2017 2 commits
  13. 10 Nov, 2017 1 commit
  14. 09 Nov, 2017 1 commit
  15. 08 Nov, 2017 1 commit
  16. 06 Nov, 2017 1 commit
  17. 17 Oct, 2017 1 commit
  18. 10 Oct, 2017 4 commits
  19. 07 Oct, 2017 3 commits
  20. 18 Aug, 2017 1 commit
    • drm/i915: Replace execbuf vma ht with an idr · d1b48c1e
      Committed by Chris Wilson
      This was the competing idea long ago, but it was only with the rewrite
      of the idr as a radixtree, using the radixtree directly ourselves, and
      the realisation that we can store the vma directly in the radixtree
      and only need a list for the reverse mapping, that the patch became
      performant enough to displace the hashtable. Though the vma ht is fast
      and doesn't require any extra allocation (as we can embed the node
      inside the vma), it does require a thread for resizing and
      serialization and will have the occasional slow lookup. That is hairy
      enough to justify investigating alternatives, and favouring them if
      they are equivalent in peak performance.
      One advantage of allocating an indirection entry is that we can support
      a single shared bo between many clients, something that was previously
      done on a first-come, first-served basis for shared GGTT vma. To offset
      the extra allocations, we create yet another kmem_cache for them.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20170816085210.4199-5-chris@chris-wilson.co.uk
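      As a rough illustration of the indirection this commit describes, here
      is a self-contained C sketch in which a flat array stands in for the
      per-context radixtree and a small lut node provides the reverse
      mapping from object back to handle. The names (lookup_vma, insert_vma,
      struct lut_handle, MAX_HANDLES) are illustrative assumptions, not the
      driver's actual interfaces.

      #include <stdio.h>
      #include <stdlib.h>

      #define MAX_HANDLES 64        /* stand-in for an unbounded radixtree */

      struct vma {
              int id;
      };

      /* Reverse-mapping node: one allocated per (object, handle) pair. */
      struct lut_handle {
              unsigned int handle;
              struct lut_handle *next;  /* chained off the backing object */
      };

      /* Forward map: direct index from handle to vma, no hashing. */
      static struct vma *handles_vma[MAX_HANDLES];

      /* Hot execbuf path: handle -> vma lookup in O(1). */
      static struct vma *lookup_vma(unsigned int handle)
      {
              return handle < MAX_HANDLES ? handles_vma[handle] : NULL;
      }

      /* Insertion pays for one small indirection node per binding. */
      static void insert_vma(unsigned int handle, struct vma *vma,
                             struct lut_handle **obj_luts)
      {
              struct lut_handle *lut = malloc(sizeof(*lut));

              lut->handle = handle;
              lut->next = *obj_luts;
              *obj_luts = lut;              /* reverse map, walked on close */
              handles_vma[handle] = vma;    /* forward map, used by execbuf */
      }

      int main(void)
      {
              struct vma vma = { 1 };
              struct lut_handle *obj_luts = NULL, *l, *n;

              insert_vma(3, &vma, &obj_luts);
              printf("handle 3 -> vma %d\n", lookup_vma(3)->id);

              /* Object closed: walk its luts to clear every handle slot. */
              for (l = obj_luts; l; l = n) {
                      n = l->next;
                      handles_vma[l->handle] = NULL;
                      free(l);
              }
              return 0;
      }

      The lut nodes are what the commit's extra kmem_cache would back in the
      kernel: the lookup itself needs no allocation, only insertion does,
      which is the trade against the hashtable's resize thread.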
  21. 26 Jun, 2017 1 commit
  22. 21 Jun, 2017 2 commits
  23. 16 Jun, 2017 5 commits
  24. 15 Jun, 2017 1 commit
  25. 09 Mar, 2017 1 commit
  26. 27 Feb, 2017 1 commit
  27. 26 Feb, 2017 1 commit
  28. 15 Feb, 2017 1 commit