1. 08 June 2018: 1 commit
  2. 07 June 2018: 2 commits
  3. 06 June 2018: 1 commit
  4. 05 June 2018: 3 commits
  5. 04 June 2018: 1 commit
  6. 02 June 2018: 2 commits
  7. 01 June 2018: 5 commits
  8. 30 May 2018: 1 commit
  9. 24 May 2018: 1 commit
    • drm/i915: Look for an active kernel context before switching · 09a4c02e
      Authored by Chris Wilson
      We were not checking very carefully whether an older request on the
      engine was already an earlier switch-to-kernel-context before deciding
      to emit a new switch. The end result was that we could get into a
      permanent loop of emitting a new request to perform the switch simply
      to flush the existing switch.
      
      What we need is a means of tracking the completion of each timeline
      versus the kernel context, that is, of detecting whether a more recent
      request has been submitted that would result in a switch away from the
      kernel context. To realise this, we need only look in our syncmap on
      the kernel context and check that we have synchronized against all
      active rings (a rough sketch of this check follows this entry).
      
      v2: Since all ringbuffer clients currently share the same timeline, we do
      have to use the gem_context to distinguish clients.
      
      As a bonus, include all the tracing used to debug the death inside
      suspend.
      
      v3: Test, test, test. Construct a selftest to exercise and assert the
      expected behaviour that multiple switch-to-contexts do not emit
      redundant requests.
      Reported-by: Mika Kuoppala <mika.kuoppala@intel.com>
      Fixes: a89d1f92 ("drm/i915: Split i915_gem_timeline into individual timelines")
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Mika Kuoppala <mika.kuoppala@intel.com>
      Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20180524081135.15278-1-chris@chris-wilson.co.uk
      09a4c02e
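      As a rough, userspace-only sketch of the idea (not the driver's actual
      code; struct engine_model and kernel_ctx_covers_all_engines are names
      invented for this illustration), the check boils down to asking, for
      every active engine, whether the kernel context has already
      synchronized against the last request submitted on that engine's
      timeline:

      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      #define NUM_ENGINES 4

      /* Hypothetical per-engine bookkeeping: the last seqno submitted on the
       * engine's timeline, and the last seqno of that timeline which the
       * kernel context has synchronized against (its "syncmap" entry). */
      struct engine_model {
          uint32_t last_submitted_seqno;
          uint32_t kernel_ctx_synced_seqno;
      };

      /* Returns true if emitting another switch-to-kernel-context would be
       * redundant: the kernel context has already synchronized against every
       * request submitted so far on every engine. */
      static bool kernel_ctx_covers_all_engines(const struct engine_model *e,
                                                int count)
      {
          for (int i = 0; i < count; i++) {
              if (e[i].kernel_ctx_synced_seqno < e[i].last_submitted_seqno)
                  return false; /* a newer request switches us away */
          }
          return true;
      }

      int main(void)
      {
          struct engine_model engines[NUM_ENGINES] = {
              { .last_submitted_seqno = 10, .kernel_ctx_synced_seqno = 10 },
              { .last_submitted_seqno =  7, .kernel_ctx_synced_seqno =  7 },
              { .last_submitted_seqno =  3, .kernel_ctx_synced_seqno =  2 },
              { .last_submitted_seqno =  0, .kernel_ctx_synced_seqno =  0 },
          };

          /* Engine 2 has an unsynchronized request, so a switch is needed;
           * the permanent-loop bug was emitting one even when it was not. */
          printf("switch redundant: %s\n",
                 kernel_ctx_covers_all_engines(engines, NUM_ENGINES) ?
                 "yes" : "no");
          return 0;
      }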
  10. 18 May 2018: 2 commits
  11. 17 May 2018: 4 commits
  12. 14 May 2018: 1 commit
  13. 08 May 2018: 1 commit
  14. 04 May 2018: 1 commit
    • drm/i915: Lazily unbind vma on close · 3365e226
      Authored by Chris Wilson
      When userspace is passing around swapbuffers using DRI, we frequently
      have to open and close the same object in the foreign address space.
      This shows itself as the same object being rebound at roughly 30fps
      (with a second object also being rebound at 30fps), which involves us
      having to rewrite the page tables and maintain the drm_mm range manager
      every time.
      
      However, since the object still exists and it is only the local handle
      that disappears, if we are lazy and do not unbind the VMA immediately
      when the local user closes the object, but instead defer it until the
      GPU is idle, then we can reuse the same VMA binding. We still have to
      be careful to mark the handle and lookup tables as closed to maintain
      the uABI, while allowing the underlying VMA to be resurrected if the
      user is able to access the same object from the same context again
      (a rough model of this lifecycle follows this entry).
      
      If the object itself is destroyed (i.e. userspace no longer holds a
      handle to it), the VMA will be reaped immediately as usual.
      
      In the future, this will be even more useful as instantiating a new VMA
      for use on the GPU will become heavier. A nuisance indeed, so nip it in
      the bud.
      
      v2: s/__i915_vma_final_close/i915_vma_destroy/ etc.
      v3: Leave a hint as to why we deferred the unbind on close.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20180503195115.22309-1-chris@chris-wilson.co.uk
      3365e226
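      A minimal model of that lifecycle, using names invented for this sketch
      (vma_model, vma_close, vma_reopen, vma_reap) rather than the driver's
      real API: closing the handle hides the binding from lookup, reopening
      resurrects it, and the actual unbind is deferred until the GPU is idle
      or the object itself is destroyed:

      #include <stdbool.h>
      #include <stdio.h>

      /* Hypothetical state of a lazily-unbound VMA: "closed" means the local
       * handle is gone (the binding must no longer be visible via lookup),
       * "bound" means the page tables / drm_mm node are still populated. */
      struct vma_model {
          bool bound;
          bool closed;
      };

      static void vma_close(struct vma_model *vma)
      {
          /* uABI: mark the handle and lookup tables closed immediately,
           * but keep the underlying binding for possible reuse. */
          vma->closed = true;
      }

      static bool vma_reopen(struct vma_model *vma)
      {
          /* Same object, same context: resurrect the binding instead of
           * rewriting page tables and drm_mm state all over again. */
          if (!vma->bound)
              return false;
          vma->closed = false;
          return true;
      }

      static void vma_reap(struct vma_model *vma, bool gpu_idle,
                           bool object_destroyed)
      {
          /* Deferred unbind: only once the GPU is idle, or immediately if
           * the object itself has been destroyed. */
          if (vma->closed && (gpu_idle || object_destroyed))
              vma->bound = false;
      }

      int main(void)
      {
          struct vma_model vma = { .bound = true, .closed = false };

          vma_close(&vma);                          /* user closes handle */
          printf("reused binding: %d\n", vma_reopen(&vma));  /* prints 1  */

          vma_close(&vma);
          vma_reap(&vma, true, false);              /* GPU idle: unbind   */
          printf("still bound: %d\n", vma.bound);            /* prints 0  */
          return 0;
      }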
  15. 03 May 2018: 2 commits
    • drm/i915: Split i915_gem_timeline into individual timelines · a89d1f92
      Authored by Chris Wilson
      We need to move to a more flexible timeline that doesn't assume one
      fence context per engine, and so allows a single timeline to be used
      across a combination of engines. This means that preallocating a fence
      context per engine is now a hindrance, and so we want to introduce the
      singular timeline. From the code perspective, this has the notable
      advantage of clearing up a lot of murky semantics and some clumsy
      pointer chasing.
      
      By replacing the array of per-engine timelines with individual,
      self-contained timelines, we can realise the goal of the previous
      patch of tracking the timeline alongside the ring (a simplified
      sketch of the change in shape follows this entry).
      
      v2: Tweak wait_for_idle to stop the compiler thinking that ret may be
      uninitialised.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20180502163839.3248-2-chris@chris-wilson.co.uk
      a89d1f92
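      A simplified, hypothetical sketch of the change in shape (these structs
      are illustrative only, not the driver's definitions): the container
      holding an array of per-engine timelines gives way to a single,
      self-contained timeline:

      #include <stdint.h>
      #include <stdio.h>

      #define NUM_ENGINES 4

      /* Before (simplified): one container object with a preallocated
       * fence context and seqno per engine. */
      struct old_gem_timeline {
          struct {
              uint64_t fence_context;
              uint32_t seqno;
          } engine[NUM_ENGINES];
      };

      /* After (simplified): a single, self-contained timeline that can be
       * shared across a combination of engines and tracked alongside
       * whatever owns it (e.g. a ring), with no per-engine assumption. */
      struct new_timeline {
          uint64_t fence_context;
          uint32_t seqno;
          struct new_timeline *link;  /* e.g. membership of a global list */
      };

      int main(void)
      {
          printf("old: %zu bytes, new: %zu bytes\n",
                 sizeof(struct old_gem_timeline),
                 sizeof(struct new_timeline));
          return 0;
      }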
    • drm/i915: Move timeline from GTT to ring · 65fcb806
      Authored by Chris Wilson
      In the future, we want to move a request between engines. To achieve
      this, we first realise that we have two timelines in effect here. The
      first runs through the GTT and is required for ordering vma access,
      which is currently tracked per engine. The second is implied by the
      sequential execution of commands inside the ringbuffer. This timeline
      is the one that maps to userspace's expectations when submitting
      requests (i.e. given the same context, batch A is executed before
      batch B). As the ring's timelines map to userspace and the GTT
      timeline is an implementation detail, move the timeline from the GTT
      into the ring itself (per-context in logical-ring-contexts/execlists,
      or a global per-engine timeline for the shared ringbuffers in legacy
      submission). A sketch of the resulting ownership follows this entry.
      
      The two timelines are still assumed to be equivalent at the moment (no
      migrating requests between engines yet) and so we can simply move from
      one to the other without adding extra ordering.
      
      v2: Reinforce that one isn't allowed to mix the engine execution
      timeline with the client timeline from userspace (on the ring).
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20180502163839.3248-1-chris@chris-wilson.co.uk
      65fcb806
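      A hypothetical sketch of that ownership (all names invented for the
      illustration): the execution timeline lives with the ring and hands
      out seqnos in submission order, while the GTT keeps only its
      vma-access bookkeeping:

      #include <stdint.h>
      #include <stdio.h>

      struct timeline_model {
          uint64_t fence_context;    /* identity of the ordering domain */
          uint32_t last_seqno;       /* monotonic submission order      */
      };

      /* The ring owns the userspace-visible execution timeline (per-context
       * with execlists, or per-engine for the legacy shared ringbuffer). */
      struct ring_model {
          struct timeline_model timeline;
      };

      /* The GTT no longer carries the execution timeline; it only needs
       * whatever it uses to order vma access. */
      struct gtt_model {
          uint32_t vma_generation;
      };

      int main(void)
      {
          struct ring_model ring = { .timeline = { .fence_context = 1 } };
          struct gtt_model ggtt = { .vma_generation = 0 };

          /* Given the same context/ring, batch A is submitted before batch
           * B and so receives the earlier seqno on the ring's timeline. */
          unsigned batch_a = ++ring.timeline.last_seqno;
          unsigned batch_b = ++ring.timeline.last_seqno;
          printf("A=%u runs before B=%u (ctx %llu, gtt gen %u)\n",
                 batch_a, batch_b,
                 (unsigned long long)ring.timeline.fence_context,
                 (unsigned)ggtt.vma_generation);
          return 0;
      }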
  16. 30 April 2018: 3 commits
  17. 26 April 2018: 1 commit
  18. 19 April 2018: 1 commit
  19. 12 April 2018: 1 commit
  20. 11 April 2018: 1 commit
  21. 07 April 2018: 3 commits
  22. 24 March 2018: 1 commit
  23. 16 March 2018: 1 commit