1. 12 Jul, 2019 (2 commits)
  2. 03 Jul, 2019 (1 commit)
  3. 27 Jun, 2019 (1 commit)
  4. 26 Jun, 2019 (1 commit)
  5. 21 Jun, 2019 (16 commits)
  6. 17 Jun, 2019 (1 commit)
  7. 15 Jun, 2019 (1 commit)
    • drm/i915: Keep contexts pinned until after the next kernel context switch · ce476c80
      Committed by Chris Wilson
      We need to keep the context image pinned in memory until after the GPU
      has finished writing into it. Since it continues to write as we signal
      the final breadcrumb, we need to keep it pinned until the request after
      it is complete. Currently we rely on knowing the order in which requests
      execute on each engine, so to remove that presumption we need to identify
      a request/context-switch that we know must occur after our completion.
      Any request queued after the signal must imply a context switch; for
      simplicity we use a fresh request from the kernel context.
      
      The sequence of operations for keeping the context pinned until saved is:
      
       - On context activation, we preallocate a node for each physical engine
         the context may operate on. This is to avoid allocations during
         unpinning, which may be from inside FS_RECLAIM context (aka the
         shrinker)
      
       - On context deactivation on retirement of the last active request (which
         is before we know the context has been saved), we add the
         preallocated node onto a barrier list on each engine
      
       - On engine idling, we emit a switch to kernel context. When this
         switch completes, we know that all previous contexts must have been
         saved, and so on retiring this request we can finally unpin all the
         contexts that were marked as deactivated prior to the switch.
      
      We can enhance this in future by flushing all the idle contexts on a
      regular heartbeat pulse of a switch to kernel context, which will also
      be used to check for hung engines.
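
      To illustrate the sequence above, a minimal self-contained C sketch
      (hypothetical names, a single engine, a userspace model rather than the
      i915 code): the node is preallocated at activation, parked on the engine's
      barrier list at deactivation, and the contexts are only unpinned once the
      later switch to the kernel context is known to have retired.

          #include <stdio.h>

          struct model_context;

          struct barrier_node {
              struct barrier_node *next;
              struct model_context *ctx;
          };

          struct model_engine {
              const char *name;
              struct barrier_node *barriers;   /* contexts awaiting a switch */
          };

          struct model_context {
              int pin_count;
              struct barrier_node node;        /* preallocated at activation */
          };

          /* Activation: pin the image and preallocate the barrier node now,
           * so deactivation (possibly under FS_RECLAIM) never allocates. */
          static void context_activate(struct model_context *ctx)
          {
              ctx->pin_count++;
              ctx->node.ctx = ctx;
          }

          /* Last request retired: the image may still be being saved, so park
           * the preallocated node on the engine instead of unpinning here. */
          static void context_deactivate(struct model_context *ctx,
                                         struct model_engine *engine)
          {
              ctx->node.next = engine->barriers;
              engine->barriers = &ctx->node;
          }

          /* Retirement of the switch to the kernel context: every context
           * queued before the switch must have been saved, so unpin them. */
          static void engine_flush_barriers(struct model_engine *engine)
          {
              struct barrier_node *node = engine->barriers;

              engine->barriers = NULL;
              while (node) {
                  struct barrier_node *next = node->next;

                  node->ctx->pin_count--;
                  printf("%s: unpinned, pin_count=%d\n",
                         engine->name, node->ctx->pin_count);
                  node = next;
              }
          }

          int main(void)
          {
              struct model_engine rcs0 = { .name = "rcs0" };
              struct model_context ctx = { 0 };

              context_activate(&ctx);
              context_deactivate(&ctx, &rcs0);  /* still pinned here */
              engine_flush_barriers(&rcs0);     /* kernel-context switch retired */
              return 0;
          }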
      
      v2: intel_context_active_acquire/_release
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190614164606.15633-1-chris@chris-wilson.co.uk
  8. 14 Jun, 2019 (3 commits)
  9. 13 Jun, 2019 (1 commit)
  10. 12 Jun, 2019 (3 commits)
  11. 11 Jun, 2019 (1 commit)
  12. 06 Jun, 2019 (1 commit)
  13. 01 Jun, 2019 (2 commits)
    • drm/i915: Report all objects with allocated pages to the shrinker · d82b4b26
      Committed by Chris Wilson
      Currently, we try to report to the shrinker the precise number of
      objects (pages) that are available to be reaped at this moment. This
      requires searching all objects with allocated pages to see if they
      fulfill the search criteria, and this count is performed quite
      frequently. (The shrinker tries to free ~128 pages on each invocation,
      before which we count all the objects; counting takes longer than
      unbinding the objects!) If we take the pragmatic view that with
      sufficient desire, all objects are eventually reapable (they become
      inactive, or no longer used as a framebuffer, etc.), we can simply return
      the count of pinned pages maintained during get_pages/put_pages rather
      than walk the lists every time.
      
      The downside is that we may (slightly) over-report the number of
      objects/pages we could shrink and so penalize ourselves by shrinking
      more than required. This is mitigated by keeping the order in which we
      shrink objects such that active and frequently used objects are shrunk
      last, and if memory is so tight that we do need to free them, we would
      have had to free them anyway.
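
      A minimal C sketch of the counting idea (hypothetical names, a userspace
      model rather than the driver code): the totals are maintained as pages are
      acquired and released, so the shrinker's count callback becomes a
      constant-time read instead of a walk over every object.

          #include <stdio.h>

          struct model_i915 {
              unsigned long shrink_count;    /* objects with shrinkable pages */
              unsigned long shrink_memory;   /* pages available to the shrinker */
          };

          struct model_object {
              unsigned long npages;
              int shrinkable;                /* e.g. excludes stolen/foreign objects */
          };

          /* Called when an object's pages are allocated (get_pages). */
          static void obj_pages_acquired(struct model_i915 *i915,
                                         struct model_object *obj)
          {
              if (obj->shrinkable) {
                  i915->shrink_count++;
                  i915->shrink_memory += obj->npages;
              }
          }

          /* Called when an object's pages are released (put_pages). */
          static void obj_pages_released(struct model_i915 *i915,
                                         struct model_object *obj)
          {
              if (obj->shrinkable) {
                  i915->shrink_count--;
                  i915->shrink_memory -= obj->npages;
              }
          }

          /* The shrinker's count callback: report the cached total, O(1). */
          static unsigned long shrinker_count(const struct model_i915 *i915)
          {
              return i915->shrink_memory;
          }

          int main(void)
          {
              struct model_i915 i915 = { 0 };
              struct model_object fb = { .npages = 1024, .shrinkable = 1 };
              struct model_object stolen = { .npages = 256, .shrinkable = 0 };

              obj_pages_acquired(&i915, &fb);
              obj_pages_acquired(&i915, &stolen);   /* not reported to the shrinker */
              printf("reported to shrinker: %lu objects, %lu pages\n",
                     i915.shrink_count, shrinker_count(&i915));
              obj_pages_released(&i915, &fb);
              return 0;
          }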
      
      v2: Only expose shrinkable objects to the shrinker; a small reduction
      from no longer considering stolen and foreign objects.
      v3: Restore the tracking from a "backup" copy from before the gem/ split
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
      Cc: Matthew Auld <matthew.auld@intel.com>
      Reviewed-by: Matthew Auld <matthew.auld@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190530203500.26272-2-chris@chris-wilson.co.uk
    • drm/i915: Track the purgeable objects on a separate eviction list · 3b4fa964
      Committed by Chris Wilson
      Currently the purgeable objects, I915_MADV_DONTNEED, are mixed in the
      normal bound/unbound lists. Every shrinker pass starts with an attempt
      to purge from this set of unneeded objects, which entails us doing a
      walk over both lists looking for any candidates. If there are none (and
      since we are shrinking, we can reasonably assume that the lists are
      full!), this becomes a very slow, futile walk.
      
      If we separate out the purgeable objects into their own list, this search
      becomes its own phase that is preferentially handled during shrinking.
      The cost instead becomes that we then need to filter the purgeable list
      if we want to distinguish between bound and unbound objects.
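
      A minimal C sketch of the split (hypothetical names, a userspace model
      rather than the driver's list handling): marking an object DONTNEED moves
      it onto a dedicated purgeable list, so the purge phase of shrinking only
      ever walks genuine candidates.

          #include <stdio.h>

          struct model_obj {
              struct model_obj *next;
              unsigned long npages;
          };

          struct model_lists {
              struct model_obj *bound;
              struct model_obj *unbound;
              struct model_obj *purgeable;   /* everything here is reapable */
          };

          static void list_push(struct model_obj **head, struct model_obj *obj)
          {
              obj->next = *head;
              *head = obj;
          }

          static void list_del(struct model_obj **head, struct model_obj *obj)
          {
              while (*head && *head != obj)
                  head = &(*head)->next;
              if (*head)
                  *head = obj->next;
          }

          /* Userspace marks the object DONTNEED: move it to the purgeable list. */
          static void obj_mark_dontneed(struct model_lists *lists,
                                        struct model_obj *obj)
          {
              list_del(&lists->unbound, obj);
              list_push(&lists->purgeable, obj);
          }

          /* Shrinker purge phase: walk only the purgeable list, never bound/unbound. */
          static unsigned long purge(struct model_lists *lists)
          {
              unsigned long freed = 0;
              struct model_obj *obj = lists->purgeable;

              lists->purgeable = NULL;
              for (; obj; obj = obj->next)
                  freed += obj->npages;   /* stand-in for discarding the backing store */
              return freed;
          }

          int main(void)
          {
              struct model_lists lists = { 0 };
              struct model_obj a = { .npages = 64 };
              struct model_obj b = { .npages = 32 };

              list_push(&lists.unbound, &a);
              list_push(&lists.unbound, &b);
              obj_mark_dontneed(&lists, &a);
              printf("purged %lu pages\n", purge(&lists));   /* prints 64 */
              return 0;
          }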
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
      Cc: Matthew Auld <matthew.william.auld@gmail.com>
      Reviewed-by: Matthew Auld <matthew.william.auld@gmail.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190530203500.26272-1-chris@chris-wilson.co.uk
  14. 31 May, 2019 (1 commit)
  15. 28 May, 2019 (5 commits)