1. 28 May 2019, 2 commits
  2. 27 May 2019, 1 commit
  3. 24 May 2019, 2 commits
    • drm/i915/gtt: Neuter the deferred unbind callback from gen6_ppgtt_cleanup · 63e8dcdb
      Committed by Chris Wilson
      Having deferred the vma destruction to a worker where we can acquire the
      struct_mutex, we have to avoid chasing back into the now destroyed
      ppgtt. The pd_vma is special in having a custom unbind function to scan
      for unused pages despite the VMA itself being notionally part of the
      GGTT. As such, we need to disable that callback to avoid a
      use-after-free.
      
      This unfortunately blew up so early during boot that CI declared the
      machine unreachable as opposed to being the major failure it was. Oops.
      
      Fixes: d3622099 ("drm/i915/gtt: Always acquire struct_mutex for gen6_ppgtt_cleanup")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Cc: Tomi Sarvela <tomi.p.sarvela@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190524064529.20514-1-chris@chris-wilson.co.uk
    • drm/i915/gtt: Always acquire struct_mutex for gen6_ppgtt_cleanup · d3622099
      Committed by Chris Wilson
      We rearranged the vm_destroy_ioctl to avoid taking struct_mutex, little
      realising that buried underneath the gen6 ppgtt release path was a
      struct_mutex requirement (to remove its GGTT vma). Until that
      struct_mutex is vanquished, take a detour in gen6_ppgtt_cleanup to do
      the i915_vma_destroy from inside a worker under the struct_mutex.
      
      <4> [257.740160] WARN_ON(debug_locks && !lock_is_held(&(&vma->vm->i915->drm.struct_mutex)->dep_map))
      <4> [257.740213] WARNING: CPU: 3 PID: 1507 at drivers/gpu/drm/i915/i915_vma.c:841 i915_vma_destroy+0x1ae/0x3a0 [i915]
      <4> [257.740214] Modules linked in: snd_hda_codec_hdmi i915 x86_pkg_temp_thermal mei_hdcp coretemp crct10dif_pclmul crc32_pclmul ghash_clmulni_intel snd_hda_codec_realtek snd_hda_codec_generic snd_hda_intel snd_hda_codec snd_hwdep snd_hda_core r8169 realtek snd_pcm mei_me mei prime_numbers lpc_ich
      <4> [257.740224] CPU: 3 PID: 1507 Comm: gem_vm_create Tainted: G     U            5.2.0-rc1-CI-CI_DRM_6118+ #1
      <4> [257.740225] Hardware name: MSI MS-7924/Z97M-G43(MS-7924), BIOS V1.12 02/15/2016
      <4> [257.740249] RIP: 0010:i915_vma_destroy+0x1ae/0x3a0 [i915]
      <4> [257.740250] Code: 00 00 00 48 81 c7 c8 00 00 00 e8 ed 08 f0 e0 85 c0 0f 85 78 fe ff ff 48 c7 c6 e8 ec 30 a0 48 c7 c7 da 55 33 a0 e8 42 8c e9 e0 <0f> 0b 8b 83 40 01 00 00 85 c0 0f 84 63 fe ff ff 48 c7 c1 c1 58 33
      <4> [257.740251] RSP: 0018:ffffc90000aafc68 EFLAGS: 00010282
      <4> [257.740252] RAX: 0000000000000000 RBX: ffff8883f7957840 RCX: 0000000000000003
      <4> [257.740253] RDX: 0000000000000046 RSI: 0000000000000006 RDI: ffffffff8212d1b9
      <4> [257.740254] RBP: ffffc90000aafcc8 R08: 0000000000000000 R09: 0000000000000000
      <4> [257.740255] R10: 0000000000000000 R11: 0000000000000000 R12: ffff8883f4d5c2a8
      <4> [257.740256] R13: ffff8883f4d5d680 R14: ffff8883f4d5c668 R15: ffff8883f4d5c2f0
      <4> [257.740257] FS:  00007f777fa8fe40(0000) GS:ffff88840f780000(0000) knlGS:0000000000000000
      <4> [257.740258] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      <4> [257.740259] CR2: 00007f777f6522b0 CR3: 00000003c612a006 CR4: 00000000001606e0
      <4> [257.740260] Call Trace:
      <4> [257.740283]  gen6_ppgtt_cleanup+0x25/0x60 [i915]
      <4> [257.740306]  i915_ppgtt_release+0x102/0x290 [i915]
      <4> [257.740330]  i915_gem_vm_destroy_ioctl+0x7c/0xa0 [i915]
      <4> [257.740376]  ? i915_gem_vm_create_ioctl+0x160/0x160 [i915]
      <4> [257.740379]  drm_ioctl_kernel+0x83/0xf0
      <4> [257.740382]  drm_ioctl+0x2f3/0x3b0
      <4> [257.740422]  ? i915_gem_vm_create_ioctl+0x160/0x160 [i915]
      <4> [257.740426]  ? _raw_spin_unlock_irqrestore+0x39/0x60
      <4> [257.740430]  do_vfs_ioctl+0xa0/0x6e0
      <4> [257.740433]  ? lock_acquire+0xa6/0x1c0
      <4> [257.740436]  ? __task_pid_nr_ns+0xb9/0x1f0
      <4> [257.740439]  ksys_ioctl+0x35/0x60
      <4> [257.740441]  __x64_sys_ioctl+0x11/0x20
      <4> [257.740443]  do_syscall_64+0x55/0x1c0
      <4> [257.740445]  entry_SYSCALL_64_after_hwframe+0x49/0xbe
      
      References: e0695db7 ("drm/i915: Create/destroy VM (ppGTT) for use with contexts")
      Fixes: 7f3f317a ("drm/i915: Restore control over ppgtt for context creation ABI")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190523064933.23604-1-chris@chris-wilson.co.uk
  4. 20 May 2019, 1 commit
  5. 25 Apr 2019, 1 commit
  6. 20 Apr 2019, 2 commits
  7. 12 Apr 2019, 1 commit
  8. 22 Mar 2019, 1 commit
  9. 21 Mar 2019, 1 commit
  10. 15 Mar 2019, 3 commits
  11. 06 Mar 2019, 2 commits
  12. 05 Mar 2019, 2 commits
  13. 28 Feb 2019, 1 commit
  14. 22 Feb 2019, 1 commit
  15. 06 Feb 2019, 2 commits
  16. 29 Jan 2019, 2 commits
    • drm/i915: Pull VM lists under the VM mutex. · 09d7e46b
      Committed by Chris Wilson
A starting point for countering the pervasive struct_mutex. Towards the
      goal of avoiding global locks during user request submission (or at
      least not blocking under them), a simple but important step is being
      able to manage each client's GTT separately. For that, we want to stop
      using struct_mutex as the guard for all things GTT/VM and switch
      instead to a specific mutex inside i915_address_space.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190128102356.15037-2-chris@chris-wilson.co.uk
    • drm/i915: Stop tracking MRU activity on VMA · 499197dc
      Committed by Chris Wilson
      Our goal is to remove struct_mutex and replace it with fine grained
      locking. One of the thorny issues is our eviction logic for reclaiming
space for an execbuffer (or GTT mmapping, among a few other examples).
      While eviction itself is easy to move under a per-VM mutex, performing
      the activity tracking is less agreeable. One solution is not to do any
      MRU tracking and do a simple coarse evaluation during eviction of
      active/inactive, with a loose temporal ordering of last
      insertion/evaluation. That keeps all the locking constrained to when we
      are manipulating the VM itself, neatly avoiding the tricky handling of
      possible recursive locking during execbuf and elsewhere.
      
      Note that discarding the MRU (currently implemented as a pair of lists,
      to avoid scanning the active list for a NONBLOCKING search) is unlikely
to impact upon our efficiency to reclaim VM space (where we think an LRU
      model is best) as our current strategy is to use random idle replacement
      first before doing a search, and over time the use of softpinned 48b
      per-ppGTT is growing (thereby eliminating any need to perform any eviction
      searches, in theory at least) with the remaining users being found on
      much older devices (gen2-gen6).
      
      v2: Changelog and commentary rewritten to elaborate on the duality of a
      single list being both an inactive and active list.
      v3: Consolidate bool parameters into a single set of flags; don't
      comment on the duality of a single variable being a multiplicity of
      bits.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190128102356.15037-1-chris@chris-wilson.co.uk
  17. 17 Jan 2019, 1 commit
  18. 15 Jan 2019, 5 commits
  19. 10 Jan 2019, 1 commit
  20. 09 Jan 2019, 1 commit
  21. 08 Jan 2019, 1 commit
    • drm/i915: Return immediately if trylock fails for direct-reclaim · d25f71a1
      Committed by Chris Wilson
      Ignore trying to shrink from i915 if we fail to acquire the struct_mutex
      in the shrinker while performing direct-reclaim. The trade-off being
      (much) lower latency for non-i915 clients at an increased risk of being
      unable to obtain a page from direct-reclaim without hitting the
oom-notifier. The proviso being that we still keep trying hard to
      obtain the lock for kswapd so that we can reap under heavy memory
      pressure.
      
      v2: Taint all mutexes taken within the shrinker with the struct_mutex
      subclass as an early warning system, and drop I915_SHRINK_ACTIVE from
      vmap to reduce the number of dangerous paths. We also have to drop
      I915_SHRINK_ACTIVE from oom-notifier to be able to make the same claim
      that ACTIVE is only used from outside context, which fits in with a
      longer strategy of avoiding stalls due to scanning active during
      shrinking.
      
      The danger in using the subclass struct_mutex is that we declare
ourselves more knowledgeable than lockdep and deprive ourselves of
      automatic coverage. Instead, we require ourselves to mark up any mutex
      taken inside the shrinker in order to detect lock-inversion, and if we
      miss any we are doomed to a deadlock at the worst possible moment.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190107115509.12523-1-chris@chris-wilson.co.uk
  22. 27 Dec 2018, 1 commit
  23. 22 Dec 2018, 1 commit
  24. 13 Dec 2018, 1 commit
  25. 07 Dec 2018, 1 commit
  26. 20 Nov 2018, 2 commits