1. 07 Jul 2018 - 2 commits
  2. 06 Jul 2018 - 1 commit
  3. 05 Jul 2018 - 1 commit
  4. 03 Jul 2018 - 1 commit
  5. 29 Jun 2018 - 1 commit
  6. 28 Jun 2018 - 1 commit
  7. 18 Jun 2018 - 2 commits
  8. 16 Jun 2018 - 1 commit
  9. 15 Jun 2018 - 1 commit
  10. 14 Jun 2018 - 1 commit
  11. 11 Jun 2018 - 2 commits
  12. 08 Jun 2018 - 1 commit
  13. 07 Jun 2018 - 2 commits
  14. 06 Jun 2018 - 1 commit
  15. 05 Jun 2018 - 3 commits
  16. 04 Jun 2018 - 1 commit
  17. 02 Jun 2018 - 2 commits
  18. 01 Jun 2018 - 5 commits
  19. 30 May 2018 - 1 commit
  20. 24 May 2018 - 1 commit
    • drm/i915: Look for an active kernel context before switching · 09a4c02e
      Authored by Chris Wilson
      We were not checking carefully whether an older request on the
      engine was already an earlier switch-to-kernel-context before
      deciding to emit a new switch. The end result was that we could get
      into a permanent loop, emitting a new request to perform the switch
      simply to flush the previous switch.
      
      What we need is a means of tracking the completion of each timeline
      against the kernel context, that is, a way to detect whether a more
      recent request has been submitted that would switch away from the
      kernel context. To realise this, we need only look in the syncmap on
      the kernel context and check that we have synchronized against all
      active rings (see the toy model after this entry).
      
      v2: Since all ringbuffer clients currently share the same timeline, we do
      have to use the gem_context to distinguish clients.
      
      As a bonus, include all the tracing that was used to debug the hang
      inside suspend.
      
      v3: Test, test, test. Construct a selftest to exercise and assert
      the expected behaviour: multiple switch-to-kernel-context calls must
      not emit redundant requests.
      Reported-by: Mika Kuoppala <mika.kuoppala@intel.com>
      Fixes: a89d1f92 ("drm/i915: Split i915_gem_timeline into individual timelines")
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Mika Kuoppala <mika.kuoppala@intel.com>
      Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20180524081135.15278-1-chris@chris-wilson.co.uk
      09a4c02e
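
      The core idea here is easy to model outside the driver. Below is a
      self-contained C sketch of it; the type, helper, and ring count are
      invented for illustration and are not the i915 syncmap API. Per
      ring, it records the last seqno the kernel context synchronized
      against; a new switch is needed only when some ring has seen a
      newer request, and a repeated switch is recognised as redundant.

      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      /* Toy model of the syncmap check (illustrative names only). */
      #define NUM_RINGS 4

      struct syncmap_model {
              uint32_t synced_seqno[NUM_RINGS]; /* last seqno the kernel ctx waited on */
      };

      static bool needs_switch(const struct syncmap_model *map,
                               const uint32_t current_seqno[NUM_RINGS])
      {
              for (int i = 0; i < NUM_RINGS; i++) {
                      /* A ring with a newer request has switched away
                       * from the kernel context since our last sync. */
                      if (current_seqno[i] > map->synced_seqno[i])
                              return true;
              }
              return false; /* already idling in the kernel context */
      }

      int main(void)
      {
              struct syncmap_model map = { .synced_seqno = { 10, 4, 0, 7 } };
              uint32_t now[NUM_RINGS] = { 10, 4, 0, 7 };

              printf("switch needed: %d\n", needs_switch(&map, now)); /* 0 */

              now[2] = 3; /* a new request lands on ring 2 */
              printf("switch needed: %d\n", needs_switch(&map, now)); /* 1 */

              map.synced_seqno[2] = now[2]; /* emit the switch, record the sync */
              /* Mirrors the v3 selftest's assertion: repeating the
               * switch-to-kernel-context must now be seen as redundant. */
              printf("switch needed: %d\n", needs_switch(&map, now)); /* 0 */
              return 0;
      }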
  21. 18 May 2018 - 2 commits
  22. 17 May 2018 - 4 commits
  23. 14 May 2018 - 1 commit
  24. 08 May 2018 - 1 commit
  25. 04 May 2018 - 1 commit
    • drm/i915: Lazily unbind vma on close · 3365e226
      Authored by Chris Wilson
      When userspace is passing swapbuffers around using DRI, we
      frequently have to open and close the same object in the foreign
      address space. This shows up as the same object being rebound at
      roughly 30fps (with a second object also being rebound at 30fps),
      which forces us to rewrite the page tables and update the drm_mm
      range manager every time.
      
      However, since the object still exists and only the local handle
      disappears, we can be lazy: instead of unbinding the VMA immediately
      when the local user closes the object, defer the unbind until the
      GPU is idle and reuse the same VMA binding. We still have to mark
      the handle and lookup tables as closed to preserve the uABI, while
      allowing the underlying VMA to be resurrected if the user accesses
      the same object from the same context again (see the toy model
      after this entry).
      
      If the object itself is destroyed (with userspace no longer keeping
      a handle to it), the VMA is reaped immediately as usual.
      
      In the future, this will be even more useful as instantiating a new VMA
      for use on the GPU will become heavier. A nuisance indeed, so nip it in
      the bud.
      
      v2: s/__i915_vma_final_close/i915_vma_destroy/ etc.
      v3: Leave a hint as to why we deferred the unbind on close.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20180503195115.22309-1-chris@chris-wilson.co.uk
      3365e226
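
      The close/reopen/reap lifecycle described above can be modelled in
      a few lines of self-contained C. This is an illustrative sketch
      only; the struct and function names are invented and do not
      correspond to the i915 functions touched by the patch (e.g.
      i915_vma_destroy()).

      #include <stdbool.h>
      #include <stdio.h>

      /* Toy model of lazy unbind: closing the handle only marks the
       * binding closed; the expensive unbind is deferred until the GPU
       * idles, so a reopen from the same context resurrects it cheaply. */
      struct toy_vma {
              bool bound;  /* page tables written, drm_mm node allocated */
              bool closed; /* handle/lookup tables marked closed (uABI) */
      };

      static void vma_close(struct toy_vma *vma)
      {
              vma->closed = true; /* hide from lookups immediately */
              /* vma->bound stays true: the unbind is deferred */
      }

      static bool vma_reopen(struct toy_vma *vma)
      {
              if (vma->bound) {
                      vma->closed = false; /* resurrect existing binding */
                      return true;         /* no page-table rewrite */
              }
              return false; /* must rebind from scratch */
      }

      static void gpu_idle_reap(struct toy_vma *vma)
      {
              if (vma->closed && vma->bound)
                      vma->bound = false; /* the real unbind happens here */
      }

      int main(void)
      {
              struct toy_vma vma = { .bound = true, .closed = false };

              vma_close(&vma);
              printf("resurrected: %d\n", vma_reopen(&vma)); /* 1: reused */

              vma_close(&vma);
              gpu_idle_reap(&vma); /* GPU went idle before the reopen */
              printf("resurrected: %d\n", vma_reopen(&vma)); /* 0: rebind */
              return 0;
      }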