1. 22 Mar 2019, 5 commits
    • drm/i915/guc: GuC suspend path cleanup · b9d52d38
      Committed by Sujaritha Sundaresan
      Adding a call to intel_uc_suspend in i915_gem_suspend, which
      is a common point for the suspend/resume and hibernate paths.
      This fixes an unbalanced call that causes issues with the CTB
      register/deregister.
      
      v2: Making the call unconditional (Daniele)
      	Moving the call to after the GEM_BUG_ON (Chris)
      
      Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
      Signed-off-by: Sujaritha Sundaresan <sujaritha.sundaresan@intel.com>
      Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190321203804.6845-1-sujaritha.sundaresan@intel.com
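
      The fix works because both power paths now funnel through one common
      point, so every CTB deregister pairs with exactly one register. Below is
      a minimal standalone C model of that pairing invariant -- not the i915
      code itself; all names in it are hypothetical:

      /*
       * Standalone model: suspend and hibernate share one common suspend
       * point, so the CTB deregister always pairs with exactly one register.
       */
      #include <assert.h>
      #include <stdbool.h>
      #include <stdio.h>

      static bool ctb_registered;

      static void guc_ct_register(void)
      {
              assert(!ctb_registered);        /* catches a double register */
              ctb_registered = true;
      }

      static void guc_ct_deregister(void)
      {
              assert(ctb_registered);         /* catches a double deregister */
              ctb_registered = false;
      }

      /* Common point shared by the suspend/resume and hibernate paths. */
      static void gem_suspend(void) { guc_ct_deregister(); }
      static void gem_resume(void)  { guc_ct_register(); }

      int main(void)
      {
              guc_ct_register();      /* driver load */
              gem_suspend();          /* S3 suspend ... */
              gem_resume();           /* ... and resume */
              gem_suspend();          /* hibernate goes through the same */
              gem_resume();           /* point, so pairing stays balanced */
              puts("register/deregister balanced");
              return 0;
      }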
    • drm/i915/selftests: Mark up preemption tests for hang detection · e70d3d80
      Committed by Chris Wilson
      Use the igt_live_test framework to detect whether an unwanted hang
      occurred during test execution, and report failure if one did.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190321194031.20240-2-chris@chris-wilson.co.uk
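
      The igt_live_test framework used here works by snapshotting the GPU
      reset counts around the test body and failing if they moved. The
      standalone C sketch below models that before/after-counter pattern; the
      names are hypothetical stand-ins for the real selftest helpers:

      #include <stdio.h>

      /* Bumped by the (modelled) hang handler whenever a reset occurs. */
      static unsigned int global_reset_count;

      struct live_test {
              unsigned int resets_before;
      };

      static void live_test_begin(struct live_test *t)
      {
              t->resets_before = global_reset_count;
      }

      static int live_test_end(const struct live_test *t)
      {
              if (global_reset_count != t->resets_before) {
                      fprintf(stderr, "unwanted hang during test\n");
                      return -1;
              }
              return 0;
      }

      static void run_preemption_test(void)
      {
              /* test body; a real hang would bump global_reset_count */
      }

      int main(void)
      {
              struct live_test t;

              live_test_begin(&t);
              run_preemption_test();
              return live_test_end(&t) ? 1 : 0;
      }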
    • drm/i915/selftests: Calculate maximum ring size for preemption chain · d067994c
      Committed by Chris Wilson
      32 is too many for the likes of kbl, and inserting that many requests
      into the ring requires us to declare the first few hung --
      understandably a slow and unexpected process. Instead, measure the size
      of a single request and use that to estimate the upper bound on the
      chain length we can use for our test, remembering to flush the previous
      chain between tests for safety.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: "Yokoyama, Caz" <caz.yokoyama@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190321194031.20240-1-chris@chris-wilson.co.uk
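
      A standalone sketch of the sizing logic described above, with
      illustrative numbers (the real selftest measures how much ring space an
      actual request consumes; every value and name here is hypothetical):

      #include <stdio.h>

      int main(void)
      {
              const unsigned int ring_size = 16 << 10; /* e.g. a 16 KiB ring */

              /*
               * Emit one request and see how much ring space it consumed;
               * 192 bytes stands in for that measurement.
               */
              const unsigned int per_request = 192;

              /*
               * Cap the chain well below what strictly fits, so we never
               * have to declare earlier requests hung to make room.
               */
              unsigned int max_chain = ring_size / per_request / 2;

              printf("one request ~%u bytes; cap chain at %u requests\n",
                     per_request, max_chain);
              return 0;
      }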
    • drm/i915: Skip object locking around a no-op set-domain ioctl · 754a2544
      Committed by Chris Wilson
      If we are already in the desired write domain of a set-domain ioctl,
      then there is nothing for us to do and we can quickly return to
      userspace, avoiding any lock contention. By recognising that the
      write_domain is always a subset of the read_domains, and excluding the
      no-op case of 0 read_domains in the ioctl, we can infer that if the
      current write_domain matches the target read_domains, there is nothing
      for us to do.
      
      A secondary aspect of this is that we undo the arbitrary fetching and
      potential flushing of all pages for a set-domain(.write=CPU) call on a
      fresh object -- which was introduced simply because we do the get-pages
      before taking the struct_mutex.
      
      References: 40e62d5d ("drm/i915: Acquire the backing storage outside of struct_mutex in set-domain")
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
      Cc: Matthew Auld <matthew.william.auld@gmail.com>
      Reviewed-by: Matthew Auld <matthew.william.auld@gmail.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190321161908.8007-2-chris@chris-wilson.co.uk
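
      A standalone model of the early-out described above. The invariant that
      the write_domain is always a subset of the read_domains is from the
      commit text; the domain flags and helper below are hypothetical:

      #include <stdbool.h>
      #include <stdio.h>

      #define DOMAIN_CPU (1u << 0)
      #define DOMAIN_GTT (1u << 1)

      struct obj {
              unsigned int read_domains;
              unsigned int write_domain; /* always a subset of read_domains */
      };

      static bool set_domain_is_noop(const struct obj *o,
                                     unsigned int read_domains)
      {
              if (!read_domains)      /* read_domains == 0 is excluded */
                      return false;
              /*
               * If the current write_domain equals the requested
               * read_domains, we are already in the right domain and can
               * return to userspace without taking any locks.
               */
              return o->write_domain == read_domains;
      }

      int main(void)
      {
              struct obj o = {
                      .read_domains = DOMAIN_CPU,
                      .write_domain = DOMAIN_CPU,
              };

              printf("set-domain(CPU) no-op? %d\n",
                     set_domain_is_noop(&o, DOMAIN_CPU));
              printf("set-domain(GTT) no-op? %d\n",
                     set_domain_is_noop(&o, DOMAIN_GTT));
              return 0;
      }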
    • drm/i915: Flush pages on acquisition · a679f58d
      Committed by Chris Wilson
      When we return pages to the system, we ensure that they are marked as
      being in the CPU domain, since any external access is uncontrolled and
      we must assume the worst. This means that we always need to flush the
      pages on acquisition if we need to use them on the GPU, and we have
      used set-domain for this from the beginning. Set-domain is overkill for
      the purpose, as it is a general synchronisation barrier, but our intent
      is only to flush the pages being swapped in. If we move that flush into
      the page-acquisition phase, we then know that whenever we have
      obj->mm.pages, they are coherent with the GPU and need only maintain
      that status without resorting to heavy-handed use of set-domain.
      
      The principal knock-on effect for userspace is through mmap-gtt
      pagefaulting. Our uAPI has always implied that the GTT mmap was async
      (especially as when any pagefault occurs is unpredictable to userspace)
      and so userspace had to apply explicit domain control itself
      (set-domain). However, swapping is transparent to the kernel, and so on
      first fault we need to acquire the pages and make them coherent for
      access through the GTT. Our use of set-domain here leaks into the uABI
      the fact that the first pagefault was synchronous. This is
      unintentional and, barring a few igt tests, should go unnoticed;
      nevertheless we bump the uABI version for mmap-gtt to reflect the
      change in behaviour.
      
      Another implication of the change is that gem_create() is presumed to
      create an object that is coherent with the CPU and is in the CPU write
      domain, so a set-domain(CPU) following a gem_create() would be a minor
      operation that merely checked whether we could allocate all pages for
      the object. With this change applied, a set-domain(CPU) causes a
      clflush as we acquire the pages. This will have a small impact on mesa,
      as we move the clflush on !llc from execbuf time to create time, but it
      should have minimal performance impact: the same clflush exists, only
      done earlier, and because of the clflush issue userspace already
      recycles bos and so should rarely allocate fresh objects.
      
      Internally, the presumption that objects are created in the CPU
      write-domain and remain so through writes to obj->mm.mapping is more
      prevalent than I expected; but it is easy enough to catch and apply a
      manual flush.
      
      For the future, we should push the page flush from the central
      set_pages() into the callers so that we can more finely control when it
      is applied, but for now doing it in one location is easier to validate,
      at the cost of sometimes flushing when there is no need.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Matthew Auld <matthew.william.auld@gmail.com>
      Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
      Cc: Antonio Argenziano <antonio.argenziano@intel.com>
      Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
      Reviewed-by: Matthew Auld <matthew.william.auld@gmail.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190321161908.8007-1-chris@chris-wilson.co.uk
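
      A standalone model of the invariant this commit establishes: pages come
      back from the system assumed dirty, are flushed once on acquisition,
      and then stay coherent with the GPU until released again. All names
      below are hypothetical:

      #include <stdbool.h>
      #include <stdio.h>

      struct obj {
              bool has_pages;
              bool cache_dirty; /* CPU caches may hold data the GPU misses */
      };

      static void clflush(struct obj *o)
      {
              puts("clflush");
              o->cache_dirty = false;
      }

      static void get_pages(struct obj *o)
      {
              if (!o->has_pages) {
                      o->has_pages = true;
                      o->cache_dirty = true; /* external access: assume worst */
              }
              if (o->cache_dirty)
                      clflush(o); /* the flush now lives in acquisition */
      }

      static void put_pages(struct obj *o)
      {
              o->has_pages = false; /* returned to the system */
      }

      int main(void)
      {
              struct obj o = { 0 };

              get_pages(&o); /* first acquire: flushes once */
              get_pages(&o); /* already coherent: no set-domain needed */
              put_pages(&o);
              get_pages(&o); /* swapped back in: flushes again */
              return 0;
      }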
2. 21 Mar 2019, 16 commits
3. 20 Mar 2019, 19 commits