1. 22 Mar 2019 (1 commit)
  2. 21 Mar 2019 (1 commit)
  3. 15 Mar 2019 (3 commits)
  4. 06 Mar 2019 (2 commits)
  5. 05 Mar 2019 (2 commits)
  6. 28 Feb 2019 (1 commit)
  7. 22 Feb 2019 (1 commit)
  8. 06 Feb 2019 (2 commits)
  9. 29 Jan 2019 (2 commits)
    • drm/i915: Pull VM lists under the VM mutex. · 09d7e46b
      Committed by Chris Wilson
      A starting point to counter the pervasive struct_mutex. For the goal of
      avoiding (or at least blocking under them!) global locks during user
      request submission, a simple but important step is being able to manage
      each client's GTT separately. To that end, we want to stop using the
      struct_mutex as the guard for all things GTT/VM and switch instead to a
      specific mutex inside i915_address_space.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190128102356.15037-2-chris@chris-wilson.co.uk
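      A minimal C sketch of the locking shape described above; the names
      example_address_space and example_vm_track_bound are illustrative
      stand-ins, not the actual i915 definitions:

      #include <linux/list.h>
      #include <linux/mutex.h>

      /*
       * Stand-in for struct i915_address_space: the VM lists are guarded
       * by a mutex embedded in the address space itself instead of the
       * device-wide struct_mutex.
       */
      struct example_address_space {
              struct mutex mutex;             /* guards the lists below */
              struct list_head bound_list;    /* vmas bound into this VM */
              struct list_head unbound_list;  /* vmas allocated, not bound */
      };

      static void example_vm_track_bound(struct example_address_space *vm,
                                         struct list_head *vma_link)
      {
              mutex_lock(&vm->mutex);         /* per-VM lock, not struct_mutex */
              list_move_tail(vma_link, &vm->bound_list);
              mutex_unlock(&vm->mutex);
      }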
    • drm/i915: Stop tracking MRU activity on VMA · 499197dc
      Committed by Chris Wilson
      Our goal is to remove struct_mutex and replace it with fine grained
      locking. One of the thorny issues is our eviction logic for reclaiming
      space for an execbuffer (or GTT mmapping, among a few other examples).
      While eviction itself is easy to move under a per-VM mutex, performing
      the activity tracking is less agreeable. One solution is not to do any
      MRU tracking and do a simple coarse evaluation during eviction of
      active/inactive, with a loose temporal ordering of last
      insertion/evaluation. That keeps all the locking constrained to when we
      are manipulating the VM itself, neatly avoiding the tricky handling of
      possible recursive locking during execbuf and elsewhere.
      
      Note that discarding the MRU (currently implemented as a pair of lists,
      to avoid scanning the active list for a NONBLOCKING search) is unlikely
      to impact upon our efficiency to reclaim VM space (where we think an LRU
      model is best) as our current strategy is to use random idle replacement
      first before doing a search, and over time the use of softpinned 48b
      per-ppGTT is growing (thereby eliminating any need to perform eviction
      searches, in theory at least) with the remaining users being found on
      much older devices (gen2-gen6).
      
      v2: Changelog and commentary rewritten to elaborate on the duality of a
      single list being both an inactive and active list.
      v3: Consolidate bool parameters into a single set of flags; don't
      comment on the duality of a single variable being a multiplicity of
      bits.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190128102356.15037-1-chris@chris-wilson.co.uk
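      A minimal C sketch of the coarse eviction described above, reusing the
      example_address_space sketch from the previous entry; example_vma and
      EXAMPLE_EVICT_NONBLOCKING are illustrative assumptions, not the actual
      i915 code:

      #include <linux/bits.h>
      #include <linux/list.h>
      #include <linux/mutex.h>

      #define EXAMPLE_EVICT_NONBLOCKING BIT(0)  /* skip GPU-active vmas */

      struct example_vma {
              struct list_head vm_link;       /* entry in the single bound list */
              bool active;                    /* still referenced by the GPU */
      };

      /*
       * Walk the single bound list in loose insertion order and evaluate
       * active/inactive at scan time, rather than maintaining separate MRU
       * active/inactive lists. All list manipulation stays under the
       * per-VM mutex.
       */
      static struct example_vma *
      example_evict_something(struct example_address_space *vm, unsigned int flags)
      {
              struct example_vma *vma, *candidate = NULL;

              mutex_lock(&vm->mutex);
              list_for_each_entry(vma, &vm->bound_list, vm_link) {
                      if (vma->active && (flags & EXAMPLE_EVICT_NONBLOCKING))
                              continue;

                      candidate = vma;
                      break;
              }
              mutex_unlock(&vm->mutex);

              return candidate;
      }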
  10. 17 Jan 2019 (1 commit)
  11. 15 Jan 2019 (5 commits)
  12. 10 Jan 2019 (1 commit)
  13. 09 Jan 2019 (1 commit)
  14. 08 Jan 2019 (1 commit)
    • drm/i915: Return immediately if trylock fails for direct-reclaim · d25f71a1
      Committed by Chris Wilson
      Ignore trying to shrink from i915 if we fail to acquire the struct_mutex
      in the shrinker while performing direct-reclaim. The trade-off being
      (much) lower latency for non-i915 clients at an increased risk of being
      unable to obtain a page from direct-reclaim without hitting the
      oom-notifier. The proviso being that we still try hard to obtain the
      lock for kswapd so that we can reap under heavy memory
      pressure.
      
      v2: Taint all mutexes taken within the shrinker with the struct_mutex
      subclass as an early warning system, and drop I915_SHRINK_ACTIVE from
      vmap to reduce the number of dangerous paths. We also have to drop
      I915_SHRINK_ACTIVE from oom-notifier to be able to make the same claim
      that ACTIVE is only used from outside context, which fits in with a
      longer strategy of avoiding stalls due to scanning active during
      shrinking.
      
      The danger in using the subclass struct_mutex is that we declare
      ourselves more knowledgeable than lockdep and deprive ourselves of
      automatic coverage. Instead, we require ourselves to mark up any mutex
      taken inside the shrinker in order to detect lock-inversion, and if we
      miss any we are doomed to a deadlock at the worst possible moment.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190107115509.12523-1-chris@chris-wilson.co.uk
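      A minimal C sketch of the trylock behaviour described above; the mutex
      example_dev_lock stands in for struct_mutex, and the callback follows
      the generic struct shrinker scan signature rather than the actual i915
      shrinker code:

      #include <linux/mutex.h>
      #include <linux/shrinker.h>
      #include <linux/swap.h>                 /* current_is_kswapd() */

      static DEFINE_MUTEX(example_dev_lock);  /* stand-in for struct_mutex */

      static unsigned long example_shrink_scan(struct shrinker *shrinker,
                                               struct shrink_control *sc)
      {
              unsigned long freed = 0;

              if (!mutex_trylock(&example_dev_lock)) {
                      /*
                       * Direct-reclaim: return immediately instead of
                       * stalling other processes behind our lock.
                       */
                      if (!current_is_kswapd())
                              return SHRINK_STOP;

                      /* kswapd: keep trying hard so we can reap under pressure. */
                      mutex_lock(&example_dev_lock);
              }

              /* ... scan up to sc->nr_to_scan objects, accumulating 'freed' ... */

              mutex_unlock(&example_dev_lock);
              return freed;
      }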
  15. 27 Dec 2018 (1 commit)
  16. 22 Dec 2018 (1 commit)
  17. 13 Dec 2018 (1 commit)
  18. 07 Dec 2018 (1 commit)
  19. 20 Nov 2018 (2 commits)
  20. 06 Nov 2018 (1 commit)
  21. 31 Oct 2018 (1 commit)
  22. 30 Oct 2018 (2 commits)
  23. 26 Oct 2018 (1 commit)
  24. 19 Oct 2018 (1 commit)
  25. 27 Sep 2018 (2 commits)
  26. 25 Sep 2018 (1 commit)
  27. 20 Sep 2018 (1 commit)