1. 15 Mar 2019 · 3 commits
  2. 06 Mar 2019 · 1 commit
  3. 28 Feb 2019 · 1 commit
  4. 21 Feb 2019 · 1 commit
  5. 18 Jan 2019 · 1 commit
  6. 15 Jan 2019 · 2 commits
  7. 08 Jan 2019 · 1 commit
  8. 05 Dec 2018 · 1 commit
  9. 06 Nov 2018 · 1 commit
  10. 31 Oct 2018 · 1 commit
  11. 26 Oct 2018 · 1 commit
  12. 27 Sep 2018 · 1 commit
  13. 20 Sep 2018 · 1 commit
  14. 07 Aug 2018 · 1 commit
  15. 17 Jul 2018 · 1 commit
  16. 07 Jul 2018 · 3 commits
  17. 06 Jul 2018 · 1 commit
  18. 05 Jul 2018 · 1 commit
      drm/i915/gtt: Pull global wc page stash under its own locking · 63fd659f
      Authored by Chris Wilson
      Currently, the wc-stash used for providing flushed WC pages ready for
      constructing the page directories is assumed to be protected by the
      struct_mutex. However, we want to remove this global lock and so must
      install a replacement global lock for accessing the global wc-stash (the
      per-vm stash continues to be guarded by the vm).
      
      We need to push ahead on this patch due to an oversight in hastily
      removing the struct_mutex guard around the igt_ppgtt_alloc selftest. No
      matter, it will prove very useful (i.e. will be required) in the near
      future.
      
      v2: Restore the onstack stash so that we can drop the vm->mutex in
      future across the allocation.
      v3: Restore the lost pagevec_init of the onstack allocation, and repaint
      function names.
      v4: Reorder init so that we don't try and use i915_address_space before
      it is initialised.
      
      Fixes: 1f6f0023 ("drm/i915/selftests: Drop struct_mutex around lowlevel pggtt allocation")
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20180704185518.4193-1-chris@chris-wilson.co.uk
  19. 29 Jun 2018 · 1 commit
  20. 14 Jun 2018 · 1 commit
  21. 06 Jun 2018 · 1 commit
  22. 13 May 2018 · 1 commit
  23. 04 May 2018 · 1 commit
      drm/i915: Lazily unbind vma on close · 3365e226
      Authored by Chris Wilson
      When userspace is passing around swapbuffers using DRI, we frequently
      have to open and close the same object in the foreign address space.
      This shows itself as the same object being rebound at roughly 30fps
      (with a second object also being rebound at 30fps), which involves us
      having to rewrite the page tables and maintain the drm_mm range manager
      every time.
      
      However, since the object still exists and it is only the local handle
      that disappears, if we are lazy and do not unbind the VMA immediately
      when the local user closes the object but defer it until the GPU is
      idle, then we can reuse the same VMA binding. We still have to be
      careful to mark the handle and lookup tables as closed to maintain the
      uABI, just allowing the underlying VMA to be resurrected if the user is
      able to access the same object from the same context again.
      
      If the object itself is destroyed (with userspace no longer keeping any
      handle to it), the VMA will be reaped immediately as usual.
      
      In the future, this will be even more useful as instantiating a new VMA
      for use on the GPU will become heavier. A nuisance indeed, so nip it in
      the bud.
      
      v2: s/__i915_vma_final_close/i915_vma_destroy/ etc.
      v3: Leave a hint as to why we deferred the unbind on close.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20180503195115.22309-1-chris@chris-wilson.co.uk
  24. 22 Feb 2018 · 1 commit
  25. 16 Feb 2018 · 1 commit
  26. 18 Dec 2017 · 1 commit
  27. 24 Nov 2017 · 3 commits
  28. 20 Nov 2017 · 2 commits
  29. 08 Nov 2017 · 1 commit
  30. 17 Oct 2017 · 1 commit
  31. 11 Oct 2017 · 1 commit
  32. 10 Oct 2017 · 1 commit