1. 11 Jun 2019, 3 commits
  2. 10 Jun 2019, 1 commit
  3. 07 Jun 2019, 4 commits
  4. 06 Jun 2019, 1 commit
  5. 05 Jun 2019, 1 commit
  6. 04 Jun 2019, 1 commit
  7. 30 May 2019, 1 commit
  8. 29 May 2019, 1 commit
  9. 28 May 2019, 2 commits
  10. 27 May 2019, 1 commit
  11. 24 May 2019, 2 commits
    • drm/i915/gtt: Neuter the deferred unbind callback from gen6_ppgtt_cleanup · 63e8dcdb
      Committed by Chris Wilson
      Having deferred the vma destruction to a worker where we can acquire the
      struct_mutex, we have to avoid chasing back into the now-destroyed
      ppgtt. The pd_vma is special in having a custom unbind function to scan
      for unused pages despite the VMA itself being notionally part of the
      GGTT. As such, we need to disable that callback to avoid a
      use-after-free.
      
      This unfortunately blew up so early during boot that CI declared the
      machine unreachable, rather than reporting it as the major failure it
      was. Oops.
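
      A minimal sketch of that idea, using entirely hypothetical type and
      field names (sketch_vma, ops, sketch_noop_ops are assumptions for
      illustration, not the actual i915 structures): before the deferred
      worker can run, the custom unbind hook is swapped for a no-op so it
      can never dereference the ppgtt being freed.

          struct sketch_vma;

          struct sketch_vma_ops {
                  void (*unbind)(struct sketch_vma *vma);
          };

          struct sketch_vma {
                  const struct sketch_vma_ops *ops;
                  /* ... */
          };

          /* Intentionally empty: the ppgtt this vma would normally scan for
           * unused pages is already being torn down. */
          static void sketch_unbind_noop(struct sketch_vma *vma)
          {
          }

          static const struct sketch_vma_ops sketch_noop_ops = {
                  .unbind = sketch_unbind_noop,
          };

          /* Called before queuing the deferred cleanup worker. */
          static void sketch_neuter_pd_vma(struct sketch_vma *pd_vma)
          {
                  pd_vma->ops = &sketch_noop_ops;
          }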
      
      Fixes: d3622099 ("drm/i915/gtt: Always acquire struct_mutex for gen6_ppgtt_cleanup")
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Cc: Tomi Sarvela <tomi.p.sarvela@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190524064529.20514-1-chris@chris-wilson.co.uk
    • drm/i915/gtt: Always acquire struct_mutex for gen6_ppgtt_cleanup · d3622099
      Committed by Chris Wilson
      We rearranged the vm_destroy_ioctl to avoid taking struct_mutex, little
      realising that buried underneath the gen6 ppgtt release path was a
      struct_mutex requirement (to remove its GGTT vma). Until that
      struct_mutex is vanquished, take a detour in gen6_ppgtt_cleanup to do
      the i915_vma_destroy from inside a worker under the struct_mutex.
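
      As a rough sketch of such a detour (the struct and function names below
      are assumptions, not the actual driver code; only the workqueue/mutex
      primitives and i915_vma_destroy are real kernel/driver symbols), the
      destruction is packaged into a work item that takes struct_mutex in
      process context:

          #include <linux/workqueue.h>
          #include <linux/mutex.h>
          #include <linux/slab.h>

          struct i915_vma;                         /* opaque for this sketch */
          void i915_vma_destroy(struct i915_vma *vma);

          struct ppgtt_cleanup_work {
                  struct work_struct work;
                  struct mutex *struct_mutex;      /* the device's struct_mutex */
                  struct i915_vma *pd_vma;
          };

          static void ppgtt_cleanup_fn(struct work_struct *work)
          {
                  struct ppgtt_cleanup_work *w =
                          container_of(work, struct ppgtt_cleanup_work, work);

                  /* The vma teardown still needs struct_mutex, so take it in
                   * the worker rather than in the vm_destroy ioctl path. */
                  mutex_lock(w->struct_mutex);
                  i915_vma_destroy(w->pd_vma);
                  mutex_unlock(w->struct_mutex);

                  kfree(w);
          }

          static void queue_ppgtt_cleanup(struct ppgtt_cleanup_work *w)
          {
                  INIT_WORK(&w->work, ppgtt_cleanup_fn);
                  schedule_work(&w->work);
          }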
      
      <4> [257.740160] WARN_ON(debug_locks && !lock_is_held(&(&vma->vm->i915->drm.struct_mutex)->dep_map))
      <4> [257.740213] WARNING: CPU: 3 PID: 1507 at drivers/gpu/drm/i915/i915_vma.c:841 i915_vma_destroy+0x1ae/0x3a0 [i915]
      <4> [257.740214] Modules linked in: snd_hda_codec_hdmi i915 x86_pkg_temp_thermal mei_hdcp coretemp crct10dif_pclmul crc32_pclmul ghash_clmulni_intel snd_hda_codec_realtek snd_hda_codec_generic snd_hda_intel snd_hda_codec snd_hwdep snd_hda_core r8169 realtek snd_pcm mei_me mei prime_numbers lpc_ich
      <4> [257.740224] CPU: 3 PID: 1507 Comm: gem_vm_create Tainted: G     U            5.2.0-rc1-CI-CI_DRM_6118+ #1
      <4> [257.740225] Hardware name: MSI MS-7924/Z97M-G43(MS-7924), BIOS V1.12 02/15/2016
      <4> [257.740249] RIP: 0010:i915_vma_destroy+0x1ae/0x3a0 [i915]
      <4> [257.740250] Code: 00 00 00 48 81 c7 c8 00 00 00 e8 ed 08 f0 e0 85 c0 0f 85 78 fe ff ff 48 c7 c6 e8 ec 30 a0 48 c7 c7 da 55 33 a0 e8 42 8c e9 e0 <0f> 0b 8b 83 40 01 00 00 85 c0 0f 84 63 fe ff ff 48 c7 c1 c1 58 33
      <4> [257.740251] RSP: 0018:ffffc90000aafc68 EFLAGS: 00010282
      <4> [257.740252] RAX: 0000000000000000 RBX: ffff8883f7957840 RCX: 0000000000000003
      <4> [257.740253] RDX: 0000000000000046 RSI: 0000000000000006 RDI: ffffffff8212d1b9
      <4> [257.740254] RBP: ffffc90000aafcc8 R08: 0000000000000000 R09: 0000000000000000
      <4> [257.740255] R10: 0000000000000000 R11: 0000000000000000 R12: ffff8883f4d5c2a8
      <4> [257.740256] R13: ffff8883f4d5d680 R14: ffff8883f4d5c668 R15: ffff8883f4d5c2f0
      <4> [257.740257] FS:  00007f777fa8fe40(0000) GS:ffff88840f780000(0000) knlGS:0000000000000000
      <4> [257.740258] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      <4> [257.740259] CR2: 00007f777f6522b0 CR3: 00000003c612a006 CR4: 00000000001606e0
      <4> [257.740260] Call Trace:
      <4> [257.740283]  gen6_ppgtt_cleanup+0x25/0x60 [i915]
      <4> [257.740306]  i915_ppgtt_release+0x102/0x290 [i915]
      <4> [257.740330]  i915_gem_vm_destroy_ioctl+0x7c/0xa0 [i915]
      <4> [257.740376]  ? i915_gem_vm_create_ioctl+0x160/0x160 [i915]
      <4> [257.740379]  drm_ioctl_kernel+0x83/0xf0
      <4> [257.740382]  drm_ioctl+0x2f3/0x3b0
      <4> [257.740422]  ? i915_gem_vm_create_ioctl+0x160/0x160 [i915]
      <4> [257.740426]  ? _raw_spin_unlock_irqrestore+0x39/0x60
      <4> [257.740430]  do_vfs_ioctl+0xa0/0x6e0
      <4> [257.740433]  ? lock_acquire+0xa6/0x1c0
      <4> [257.740436]  ? __task_pid_nr_ns+0xb9/0x1f0
      <4> [257.740439]  ksys_ioctl+0x35/0x60
      <4> [257.740441]  __x64_sys_ioctl+0x11/0x20
      <4> [257.740443]  do_syscall_64+0x55/0x1c0
      <4> [257.740445]  entry_SYSCALL_64_after_hwframe+0x49/0xbe
      
      References: e0695db7 ("drm/i915: Create/destroy VM (ppGTT) for use with contexts")
      Fixes: 7f3f317a ("drm/i915: Restore control over ppgtt for context creation ABI")
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190523064933.23604-1-chris@chris-wilson.co.uk
  12. 20 May 2019, 1 commit
  13. 25 Apr 2019, 1 commit
  14. 20 Apr 2019, 2 commits
  15. 12 Apr 2019, 1 commit
  16. 22 Mar 2019, 1 commit
  17. 21 Mar 2019, 1 commit
  18. 15 Mar 2019, 3 commits
  19. 06 Mar 2019, 2 commits
  20. 05 Mar 2019, 2 commits
  21. 28 Feb 2019, 1 commit
  22. 22 Feb 2019, 1 commit
  23. 06 Feb 2019, 2 commits
  24. 29 Jan 2019, 2 commits
    • drm/i915: Pull VM lists under the VM mutex. · 09d7e46b
      Committed by Chris Wilson
      A starting point to counter the pervasive struct_mutex. For the goal of
      avoiding global locks (or at least not blocking under them!) during
      user request submission, a simple but important step is being able to
      manage each client's GTT separately. To that end, we want to stop using
      the struct_mutex as the guard for all things GTT/VM and switch instead
      to a specific mutex inside i915_address_space.
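
      A minimal sketch of that direction, assuming hypothetical
      example_address_space/example_vma types (the real driver structures
      differ): the VMA lists hang off the address space and are only touched
      with that VM's own mutex held.

          #include <linux/mutex.h>
          #include <linux/list.h>

          struct example_address_space {
                  struct mutex mutex;              /* guards the lists below */
                  struct list_head bound_list;
                  struct list_head unbound_list;
          };

          struct example_vma {
                  struct example_address_space *vm;
                  struct list_head vm_link;
          };

          static void example_vma_move_to_bound(struct example_vma *vma)
          {
                  struct example_address_space *vm = vma->vm;

                  mutex_lock(&vm->mutex);          /* per-VM lock, not struct_mutex */
                  list_move_tail(&vma->vm_link, &vm->bound_list);
                  mutex_unlock(&vm->mutex);
          }
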
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190128102356.15037-2-chris@chris-wilson.co.uk
    • drm/i915: Stop tracking MRU activity on VMA · 499197dc
      Committed by Chris Wilson
      Our goal is to remove struct_mutex and replace it with fine-grained
      locking. One of the thorny issues is our eviction logic for reclaiming
      space for an execbuffer (or GTT mmapping, among a few other examples).
      While eviction itself is easy to move under a per-VM mutex, performing
      the activity tracking is less agreeable. One solution is to drop MRU
      tracking altogether and do a simple, coarse active/inactive evaluation
      during eviction, with only a loose temporal ordering of last
      insertion/evaluation. That keeps all the locking constrained to when we
      are manipulating the VM itself, neatly avoiding the tricky handling of
      possible recursive locking during execbuf and elsewhere.
      
      Note that discarding the MRU (currently implemented as a pair of lists,
      to avoid scanning the active list for a NONBLOCKING search) is unlikely
      to impact our ability to reclaim VM space efficiently (where we think an
      LRU model is best), as our current strategy is to use random idle
      replacement first before doing a search, and over time the use of
      softpinned 48b per-ppGTT is growing (thereby eliminating any need to
      perform eviction searches, in theory at least), with the remaining users
      being found on much older devices (gen2-gen6).
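
      Reusing the hypothetical example_address_space/example_vma types
      sketched under the previous commit, and assuming helper functions
      example_vma_is_active/example_vma_unbind, eviction might then look
      roughly like this: a single bound list is walked under the per-VM
      mutex and still-active VMAs are simply skipped, with no MRU ordering
      consulted.

          #include <linux/types.h>

          bool example_vma_is_active(const struct example_vma *vma);
          u64 example_vma_unbind(struct example_vma *vma);  /* returns bytes freed */

          static bool example_evict_something(struct example_address_space *vm,
                                              u64 needed)
          {
                  struct example_vma *vma, *next;
                  u64 freed = 0;

                  mutex_lock(&vm->mutex);
                  list_for_each_entry_safe(vma, next, &vm->bound_list, vm_link) {
                          if (example_vma_is_active(vma))
                                  continue;        /* coarse active/inactive check */

                          freed += example_vma_unbind(vma);
                          if (freed >= needed)
                                  break;
                  }
                  mutex_unlock(&vm->mutex);

                  return freed >= needed;
          }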
      
      v2: Changelog and commentary rewritten to elaborate on the duality of a
      single list being both an inactive and active list.
      v3: Consolidate bool parameters into a single set of flags; don't
      comment on the duality of a single variable being a multiplicity of
      bits.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190128102356.15037-1-chris@chris-wilson.co.uk
  25. 17 Jan 2019, 1 commit
  26. 15 Jan 2019, 1 commit
    • drm/i915: Prevent concurrent GGTT update and use on Braswell (again) · 8cd99918
      Committed by Chris Wilson
      On Braswell, under heavy stress, if we update the GGTT while
      simultaneously accessing another region inside the GTT, we read back
      the wrong values. To prevent this, we stop the machine while updating
      the GGTT entries so that no other memory traffic can occur at the same
      time.
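
      A rough sketch of that mechanism, in which only stop_machine() is the
      real kernel API and the wrapper names are assumptions for illustration:
      the PTE writes are funnelled through stop_machine() so that every other
      CPU is parked while the GGTT entries change.

          #include <linux/stop_machine.h>

          struct example_ggtt_update {
                  /* description of which PTEs to write, and with what */
                  void (*apply)(struct example_ggtt_update *update);
          };

          static int example_ggtt_update__cb(void *data)
          {
                  struct example_ggtt_update *update = data;

                  /* Every other CPU is parked inside stop_machine(), so these
                   * PTE writes cannot race with concurrent GTT accesses. */
                  update->apply(update);
                  return 0;
          }

          static void example_serialised_ggtt_update(struct example_ggtt_update *update)
          {
                  /* NULL cpumask: stop every online CPU while the update runs */
                  stop_machine(example_ggtt_update__cb, update, NULL);
          }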
      
      This was first spotted in
      
      commit 5bab6f60
      Author: Chris Wilson <chris@chris-wilson.co.uk>
      Date:   Fri Oct 23 18:43:32 2015 +0100
      
          drm/i915: Serialise updates to GGTT with access through GGTT on Braswell
      
      but removed again in forlorn hope with
      
      commit 4509276e
      Author: Chris Wilson <chris@chris-wilson.co.uk>
      Date:   Mon Feb 20 12:47:18 2017 +0000
      
          drm/i915: Remove Braswell GGTT update w/a
      
      However, gem_concurrent_blit is once again only stable with the patch
      applied, and CI is detecting the odd failure in forked gem_mmap_gtt
      tests (which smell like the same issue). Fwiw, a wide variety of CPU
      memory barriers (around GGTT flushing, fence updates, PTE updates) and
      GPU flushes/invalidates (between requests, after PTE updates) were tried
      as part of the investigation to find an alternate cause; nothing came
      close to serialised GGTT updates.
      
      Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=105591
      Testcase: igt/gem_concurrent_blit
      Testcase: igt/gem_mmap_gtt/*forked*
      References: 5bab6f60 ("drm/i915: Serialise updates to GGTT with access through GGTT on Braswell")
      References: 4509276e ("drm/i915: Remove Braswell GGTT update w/a")
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190114211729.30352-1-chris@chris-wilson.co.uk