1. 03 Jun, 2020 (1 commit)
  2. 25 May, 2020 (1 commit)
  3. 19 May, 2020 (1 commit)
  4. 14 May, 2020 (2 commits)
  5. 08 May, 2020 (1 commit)
    • drm/i915: Remove wait priority boosting · eec39e44
      Committed by Chris Wilson
      Upon waiting on a request (when asked), we gave that request a small
      priority boost, not enough for it to cause preemption, but enough for it
      to be scheduled next before all equals. We also used that bit to give
      new clients a small priority boost, similar to FQ_CODEL, such that we
      favoured short interactive tasks ahead of long running streams.
      
      However, this is causing lots of complications with timeslicing where we
      both want to honour the boost and yet ignore it. Those complications
      cause unexpected user behaviour (tasks not being timesliced and run
      concurrently as expected), and the easiest way to resolve that is to
      remove the boost. Hopefully, we can find a compromise again if we need
      to, but in theory timeslicing itself and future more advanced schedulers
      should give us the interactivity boost we seek.
      
      Testcase: igt/gem_exec_schedule/lateslice
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20200507152338.7452-3-chris@chris-wilson.co.uk
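
      As a rough illustration of the behaviour being removed, here is a
      minimal sketch in C; effective_priority, USER_PRIO_SHIFT and
      WAIT_BOOST_BIT are invented names, not the driver's actual fields.
      The boost lives in bits below the user-priority shift, so it can
      reorder equals but never outrank a higher user priority, which is
      why it could not cause preemption on its own:

          #include <stdbool.h>
          #include <stdio.h>

          #define USER_PRIO_SHIFT 8        /* user priority sits above the hint bits */
          #define WAIT_BOOST_BIT  (1 << 0) /* hint: run first among equals */

          static int effective_priority(int user_prio, bool waited_on)
          {
                  int prio = user_prio << USER_PRIO_SHIFT;

                  if (waited_on)           /* the branch this patch removes */
                          prio |= WAIT_BOOST_BIT;

                  return prio;             /* higher value runs earlier */
          }

          int main(void)
          {
                  /* tie-break among equals: 1 beats 0 ... */
                  printf("%d vs %d\n", effective_priority(0, true),
                         effective_priority(0, false));
                  /* ... but no preemption across user levels: 1 loses to 256 */
                  printf("%d vs %d\n", effective_priority(0, true),
                         effective_priority(1, false));
                  return 0;
          }

      Dropping the waited_on branch gives the post-patch behaviour:
      ordering by user priority alone, with timeslicing expected to
      supply the interactivity instead.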
  6. 04 May, 2020 (2 commits)
  7. 02 May, 2020 (3 commits)
    • drm/i915/gem: Try an alternate engine for relocations · 6f576d62
      Committed by Chris Wilson
      If at first we don't succeed, try, try again.
      
      Not all engines may support the MI ops we need to perform asynchronous
      relocation patching, and so we end up falling back to a synchronous
      operation that has a liability of blocking. However, Tvrtko pointed out
      we don't need to use the same engine to perform the relocations as we
      are planning to execute the execbuf on, and so if we switch over to a
      working engine, we can perform the relocation asynchronously. The user
      execbuf will be queued after the relocations by virtue of fencing.
      
      This patch creates a new context per execbuf requiring asynchronous
      relocations on an unusable engine. This is perhaps a bit excessive and
      can be ameliorated by a small context cache, but for the moment we only
      need it for working around a little used engine on Sandybridge, and only
      if relocations are actually required to an active batch buffer.
      
      Now we just need to teach the relocation code to handle physical
      addressing for gen2/3, and we should then have universal support!
      Suggested-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Testcase: igt/gem_exec_reloc/basic-spin # snb
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20200501192945.22215-3-chris@chris-wilson.co.uk
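
      A minimal sketch of the fallback, assuming an illustrative engine
      table; struct engine, can_async_reloc and pick_reloc_engine are
      invented stand-ins, not the i915 API:

          #include <stdbool.h>
          #include <stddef.h>

          struct engine {
                  const char *name;
                  bool can_async_reloc;  /* has the MI ops for async patching */
          };

          /* Prefer the execbuf's engine; otherwise any engine that can
           * patch relocations asynchronously will do, since fencing
           * orders the user execbuf after the relocations regardless of
           * which engine ran them. */
          static struct engine *
          pick_reloc_engine(struct engine *engines, size_t count,
                            struct engine *preferred)
          {
                  if (preferred->can_async_reloc)
                          return preferred;

                  for (size_t i = 0; i < count; i++)
                          if (engines[i].can_async_reloc)
                                  return &engines[i];

                  return NULL;  /* fall back to the synchronous, blocking path */
          }

          int main(void)
          {
                  struct engine engines[] = {
                          { "bcs0", false },  /* the engine execbuf targets */
                          { "rcs0", true },
                  };

                  /* rcs0 is picked, so the relocations stay asynchronous */
                  return pick_reloc_engine(engines, 2, &engines[0]) ? 0 : 1;
          }

      The per-execbuf context mentioned above would wrap whichever
      fallback engine is chosen; the small context cache the message
      suggests would amortise that setup cost.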
    • drm/i915/gem: Use a single chained reloc batches for a single execbuf · 0e97fbb0
      Committed by Chris Wilson
      As we can now keep chaining together a relocation batch to process any
      number of relocations, we can keep building that relocation batch for
      all of the target vma. This avoids emitting a new request into the
      ring for each target, which would consume precious ring space and
      risk a stall.
      
      v2: Propagate the failure from submitting the relocation batch.
      
      Testcase: igt/gem_exec_reloc/basic-wide-active
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20200501192945.22215-2-chris@chris-wilson.co.uk
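
      A hedged sketch of the v2 shape: every target vma feeds one chained
      batch, a single request is submitted at the end, and the submit
      error is propagated. reloc_batch, append_relocs and relocate_all
      are illustrative stand-ins for the driver's structures:

          #include <stddef.h>

          struct vma { int placeholder; };

          struct reloc_batch {
                  size_t entries;  /* grows by chaining pages, not requests */
          };

          static int append_relocs(struct reloc_batch *rb, struct vma *target)
          {
                  (void)target;    /* real code emits relocation commands */
                  rb->entries++;
                  return 0;
          }

          static int submit_reloc_batch(struct reloc_batch *rb)
          {
                  (void)rb;
                  return 0;        /* one request for the whole execbuf */
          }

          static int relocate_all(struct vma **targets, size_t count)
          {
                  struct reloc_batch rb = { 0 };

                  for (size_t i = 0; i < count; i++) {
                          int err = append_relocs(&rb, targets[i]);
                          if (err)
                                  return err;
                  }

                  /* v2: the submission error must reach the caller */
                  return submit_reloc_batch(&rb);
          }

          int main(void)
          {
                  struct vma a, b;
                  struct vma *targets[] = { &a, &b };

                  return relocate_all(targets, 2);
          }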
    • drm/i915/gem: Use chained reloc batches · 964a9b0f
      Committed by Chris Wilson
      The ring is a precious resource: we anticipate using only a few hundred
      bytes for a request, and reserve only that much before we start. If we
      go beyond our guess in building the request, then instead of waiting at
      the start of execbuf before we hold any locks or other resources, we
      may trigger a wait inside a critical region. One example is in using gpu
      relocations, where currently we emit a new MI_BB_START from the ring
      every time we overflow a page of relocation entries. However, instead of
      inserting the command into the precious ring, we can chain to the next
      page of relocation entries with an MI_BB_START from the end of the
      previous page.
      
      v2: Delay the emit_bb_start until after all the chained vma
      synchronisation is complete. Since the buffer pool batches are idle, this
      _should_ be a no-op, but one day we may add some fancy async GPU bindings
      for new vma!
      
      v3: Use pool/batch consistently; once we start thinking in terms of the
      batch vma, use batch->obj.
      v4: Explain the magic number 4.
      
      Tvrtko spotted that we lose propagation of the error for failing to
      submit the relocation request; that's easier to fix up in the next
      patch.
      
      Testcase: igt/gem_exec_reloc/basic-many-active
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20200501192945.22215-1-chris@chris-wilson.co.uk
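
      A minimal sketch of the chaining step; the types, the RELOC_TAIL
      reservation and the MI_BB_START value are assumptions for
      illustration, and a CPU pointer stands in for the batch's GPU
      address. The point is that the jump is written at the tail of the
      full relocation page, not into the ring:

          #include <stdint.h>

          #define PAGE_DWORDS 1024u  /* a 4KiB page of u32 commands */
          #define RELOC_TAIL  4u     /* dwords kept free for the jump
                                      * (the "magic number 4", assumed) */
          #define MI_BB_START 0x31u  /* illustrative, not the real encoding */

          struct reloc_page {
                  uint32_t cmd[PAGE_DWORDS];
                  unsigned int used;
                  struct reloc_page *next;
          };

          /* On overflow, emit MI_BB_START at the end of the full page so
           * execution jumps straight into the next page of relocation
           * entries, consuming no ring space at all. */
          static void chain_pages(struct reloc_page *full, struct reloc_page *next)
          {
                  uint64_t addr = (uint64_t)(uintptr_t)next->cmd;

                  full->cmd[full->used++] = MI_BB_START;
                  full->cmd[full->used++] = (uint32_t)addr;          /* low  */
                  full->cmd[full->used++] = (uint32_t)(addr >> 32);  /* high */
                  full->next = next;
          }

          int main(void)
          {
                  static struct reloc_page a, b;

                  a.used = PAGE_DWORDS - RELOC_TAIL;  /* page is "full" */
                  chain_pages(&a, &b);
                  return a.next == &b ? 0 : 1;
          }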
  8. 01 May, 2020 (1 commit)
  9. 24 Apr, 2020 (1 commit)
  10. 07 Apr, 2020 (3 commits)
  11. 06 Apr, 2020 (2 commits)
  12. 02 Apr, 2020 (1 commit)
  13. 01 Apr, 2020 (1 commit)
  14. 31 Mar, 2020 (1 commit)
  15. 27 Mar, 2020 (2 commits)
  16. 25 Mar, 2020 (1 commit)
  17. 23 Mar, 2020 (2 commits)
  18. 20 Mar, 2020 (1 commit)
  19. 13 Mar, 2020 (1 commit)
  20. 12 Mar, 2020 (1 commit)
  21. 06 Mar, 2020 (2 commits)
  22. 04 Mar, 2020 (5 commits)
  23. 27 Feb, 2020 (1 commit)
  24. 26 Feb, 2020 (1 commit)
  25. 25 Feb, 2020 (1 commit)
  26. 10 Feb, 2020 (1 commit)