1. 12 Nov 2019 (3 commits)
  2. 11 Nov 2019 (6 commits)
  3. 08 Nov 2019 (8 commits)
  4. 07 Nov 2019 (3 commits)
    • drm/i915: FB backing gem obj should reside in LMEM · 9e678fc9
      Committed by Ramalingam C
      If local memory is supported by the hardware, we want framebuffer-backing
      gem objects to come from local memory.
      
      If the backing obj is not from LMEM, pin_to_display fails.
      
      v2:
        memory regions are correctly assigned to obj->memory_regions [Tvrtko]
        migration failure is reported as a debug log [Tvrtko]
      v3:
        Migration is dropped; only an error is reported [Daniel]
        mem region check is moved to pin_to_display [Chris]
      v4:
        s/dev_priv/i915 [Chris]
      v5:
        i915_gem_object_is_lmem is used for detecting the obj mem type [Matt]
      
      cc: Matthew Auld <matthew.auld@intel.com>
      Signed-off-by: Ramalingam C <ramalingam.c@intel.com>
      Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Link: https://patchwork.freedesktop.org/patch/msgid/20191105144414.30470-1-ramalingam.c@intel.com
    • drm/i915: use might_lock_nested in get_pages annotation · 74ceefd1
      Committed by Daniel Vetter
      So strictly speaking the existing annotation is also ok, because we
      have a chain of
      
      obj->mm.lock#I915_MM_GET_PAGES -> fs_reclaim -> obj->mm.lock
      
      (the shrinker cannot get at an object while we're in get_pages, hence
      this is safe). But it's confusing, so try to take the right subclass
      of the lock.
      
      This does reduce our lockdep-based checking a bit, but then it's also
      less fragile in case we ever change the nesting around.
      Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20191104173720.2696-3-daniel.vetter@ffwll.ch
      74ceefd1
    • drm/i915: Switch obj->mm.lock lockdep annotations on its head · f86dbacb
      Committed by Daniel Vetter
      The trouble with having a plain nesting flag for locks which do not
      naturally nest (unlike block devices and their partitions, which are
      the original motivation for nesting levels) is that lockdep will
      never spot a true deadlock if you screw up.
      
      This patch is an attempt at trying better, by highlighting a bit more
      of the actual nature of the nesting that's going on. Essentially we
      have two kinds of objects:
      
      - objects without pages allocated, which cannot be on any lru and are
        hence inaccessible to the shrinker.
      
      - objects which have pages allocated, which are on an lru, and which
        the shrinker can decide to throw out.
      
      For the former type of object, memory allocations while holding
      obj->mm.lock are permissible. For the latter they are not. And
      get/put_pages transitions between the two types of objects.
      
      This is still not entirely fool-proof since the rules might change.
      But as long as we ever run such code at runtime, lockdep should be
      able to observe the inconsistency and complain (like with any other
      lockdep class that we've split up into multiple classes). But there
      are a few clear benefits:
      
      - We can drop the nesting flag parameter from
        __i915_gem_object_put_pages, because that function by definition is
        never going to allocate memory, and calling it on an object which
        doesn't have its pages allocated would be a bug.
      
      - We strictly catch more bugs, since there's now only one place in the
        entire tree which is annotated with the special class. All the
        other places that had explicit lockdep nesting annotations we're now
        going to leave up to lockdep again.
      
      - Specifically this catches stuff like calling get_pages from
        put_pages (which isn't really a good idea: if we can call get_pages,
        so could the shrinker). I've seen patches do exactly that.
      
      Of course I fully expect CI will show me up for the fool I am with
      this one here :-)
      
      v2: There can only be one (lockdep only has a cache for the first
      subclass, not for deeper ones, and we don't want to make these locks
      even slower). Still separate enums for better documentation.
      
      Real fix: don't forget about phys objs and pin_map(), and fix the
      shrinker to have the right annotations ... silly me.
      
      v3: Forgot userptr too ...
      
      v4: Improve comment for pages_pin_count, drop the IMPORTANT comment
      and instead prime lockdep (Chris).
      
      v5: Appease checkpatch, no double empty lines (Chris)
      
      v6: More rebasing over selftest changes. Also somehow I forgot to
      push this patch :-/
      
      Also format comments consistently while at it.
      
      v7: Fix typo in commit message (Joonas)
      
      Also drop the priming; with the lmem merge we now have allocations
      while holding the lmem lock, which wrecks the generic priming I've
      done in earlier patches. It should probably be resurrected when lmem
      is fixed. See
      
      commit 232a6eba
      Author: Matthew Auld <matthew.auld@intel.com>
      Date:   Tue Oct 8 17:01:14 2019 +0100
      
          drm/i915: introduce intel_memory_region
      
      I'm keeping the priming patch locally so it won't get lost.
      
      Cc: Matthew Auld <matthew.auld@intel.com>
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: "Tang, CQ" <cq.tang@intel.com>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
      Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> (v5)
      Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> (v6)
      Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20191105090148.30269-1-daniel.vetter@ffwll.ch
      [mlankhorst: Fix commit typos pointed out by Michael Ruhl]
  5. 06 Nov 2019 (2 commits)
  6. 01 Nov 2019 (7 commits)
  7. 30 Oct 2019 (1 commit)
    • drm/i915/gem: Make context persistence optional · a0e04715
      Committed by Chris Wilson
      Our existing behaviour is to allow contexts and their GPU requests to
      persist past the point of closure until the requests are complete. This
      allows clients to operate in a 'fire-and-forget' manner where they can
      setup a rendering pipeline and hand it over to the display server and
      immediately exit. As the rendering pipeline is kept alive until
      completion, the display server (or other consumer) can use the results
      in the future and present them to the user.
      
      The compute model is a little different. They have little to no buffer
      sharing between processes as their kernels tend to operate on a
      continuous stream, feeding the results back to the client application.
      These kernels operate for an indeterminate length of time, with many
      clients wishing that the kernel was always running for as long as they
      keep feeding in the data, i.e. acting like a DSP.
      
      Not all clients want this persistent "desktop" behaviour and would prefer
      that the contexts are cleaned up immediately upon closure. This ensures
      that when clients are run without hangchecking (e.g. for compute kernels
      of indeterminate runtime), any GPU hangs or other unexpected workloads
      are terminated with the process and do not continue to hog resources.
      
      The default behaviour for new contexts is the legacy persistence mode,
      as some desktop applications are dependent upon the existing behaviour.
      New clients will have to opt in to immediate cleanup on context
      closure. If the hangchecking modparam is disabled, so is persistent
      context support -- all contexts will be terminated on closure.
      
      We expect this behaviour change to be welcomed by compute users, who
      have often been caught between a rock and a hard place. They disable
      hangchecking to avoid their kernels being "unfairly" declared hung, but
      have also experienced true hangs that the system was then unable to
      clean up. Naturally, this leads to bug reports.
      
      Testcase: igt/gem_ctx_persistence
      Link: https://github.com/intel/compute-runtime/pull/228
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
      Cc: Michał Winiarski <michal.winiarski@intel.com>
      Cc: Jon Bloomfield <jon.bloomfield@intel.com>
      Reviewed-by: Jon Bloomfield <jon.bloomfield@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
      Acked-by: Jason Ekstrand <jason@jlekstrand.net>
      Link: https://patchwork.freedesktop.org/patch/msgid/20191029202338.8841-1-chris@chris-wilson.co.uk
  8. 29 Oct 2019 (4 commits)
  9. 28 Oct 2019 (4 commits)
  10. 26 Oct 2019 (2 commits)