- 03 Jul, 2020 (1 commit)
-
-
Committed by Chris Wilson
We need to release the vm.ref we acquired for the vma not only on the duplicate insert branch, but also on the normal error paths, so roll them all into one.
Reported-by: Andi Shyti <andi.shyti@intel.com>
Suggested-by: Andi Shyti <andi.shyti@intel.com>
Fixes: 2850748e ("drm/i915: Pull i915_vma_pin under the vm->mutex")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Andi Shyti <andi.shyti@intel.com>
Cc: <stable@vger.kernel.org> # v5.5+
Reviewed-by: Andi Shyti <andi.shyti@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200702211015.29604-1-chris@chris-wilson.co.uk
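The point of the fix is that every failure path, not just the duplicate-insert branch, drops the vm reference taken for the vma. Below is a minimal C sketch of that single unwind point; setup_vma() and insert_into_tree() are hypothetical placeholders for the real construction steps, so read it as an outline of the pattern rather than the driver code.

    static struct i915_vma *vma_create_sketch(struct drm_i915_gem_object *obj,
                                              struct i915_address_space *vm)
    {
            struct i915_vma *vma;
            int err;

            vma = kzalloc(sizeof(*vma), GFP_KERNEL);
            if (!vma)
                    return ERR_PTR(-ENOMEM);

            vma->vm = i915_vm_get(vm);      /* the vm.ref we must not leak */

            err = setup_vma(vma, obj);      /* hypothetical construction step */
            if (err)
                    goto err_vma;

            err = insert_into_tree(obj, vma);  /* may lose to a concurrent insert */
            if (err)
                    goto err_vma;

            return vma;

    err_vma:
            i915_vm_put(vma->vm);           /* one exit shared by all error paths */
            kfree(vma);
            return ERR_PTR(err);
    }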
-
- 02 Jul, 2020 (1 commit)
-
-
Committed by Chris Wilson
As we allow parallel threads to create the same vma instance concurrently, and we only filter out the duplicates upon reacquiring the spinlock for the rbtree, we have to free the loser of the constructors' race. When freeing, we should also drop any resource references acquired for the redundant vma.
Fixes: 2850748e ("drm/i915: Pull i915_vma_pin under the vm->mutex")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: <stable@vger.kernel.org> # v5.5+
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200702083225.20044-1-chris@chris-wilson.co.uk
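A sketch of the "free the loser" pattern this fixes: each thread builds its own candidate, only the first insert under obj->vma.lock wins, and the losing thread must drop every reference it acquired before freeing its candidate. vma_create_sketch(), vma_lookup_locked() and vma_insert_locked() are hypothetical helpers used only to illustrate the flow.

    static struct i915_vma *vma_instance_sketch(struct drm_i915_gem_object *obj,
                                                struct i915_address_space *vm)
    {
            struct i915_vma *vma, *pos;

            vma = vma_create_sketch(obj, vm);       /* may race with other threads */
            if (IS_ERR(vma))
                    return vma;

            spin_lock(&obj->vma.lock);
            pos = vma_lookup_locked(obj, vm);       /* did someone beat us to it? */
            if (pos) {
                    spin_unlock(&obj->vma.lock);
                    i915_vm_put(vma->vm);           /* drop the loser's resource refs */
                    kfree(vma);
                    return pos;
            }
            vma_insert_locked(obj, vma);            /* we won: publish our vma */
            spin_unlock(&obj->vma.lock);

            return vma;
    }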
-
- 12 Jun, 2020 (1 commit)
-
-
Committed by Chris Wilson
If we find ourselves trying to reuse a misplaced but active vma, we currently try to discard it to avoid having to wait to unbind it (upsetting the current user of the vma). An alternative to marking it as a discarded vma and keeping it in both the obj->vma.list and obj->vma.tree is to simply remove it from the lookup rbtree. While it remains in the list of vma, it will be unbound under eviction pressure and freed along with the object. We will never reuse it again for new instances. As before, with no pruning, the list may continually grow, but eventually we will have the most constrained version of the ggtt view that meets all requirements -- so the list of vma should not grow without bound.
Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2012
Fixes: 9bdcaa5e ("drm/i915: Discard a misplaced GGTT vma")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200611180421.23262-1-chris@chris-wilson.co.uk
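As a rough illustration of the "unpublish but keep" idea above: the vma drops out of the obj->vma.tree lookup so no new pin can find it, while its obj->vma.list membership keeps it visible to eviction and the final object free. The field names follow the i915_vma layout, but the helper itself is only a sketch.

    static void vma_unpublish_sketch(struct drm_i915_gem_object *obj,
                                     struct i915_vma *vma)
    {
            spin_lock(&obj->vma.lock);
            if (!RB_EMPTY_NODE(&vma->obj_node)) {
                    rb_erase(&vma->obj_node, &obj->vma.tree);  /* no new lookups */
                    RB_CLEAR_NODE(&vma->obj_node);
            }
            /* vma->obj_link stays on obj->vma.list for eventual unbind and free */
            spin_unlock(&obj->vma.lock);
    }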
-
- 28 May, 2020 (1 commit)
-
-
Committed by Chris Wilson
Across suspend/resume, we clear the entire GGTT and rebuild it from scratch. In particular, we want to preserve only the global entries for use by the HW, and delay reinstating the local binds until required by the user. This means that we can evict any local binds in the global GTT, saving the time spent preserving their state, as they will be rebound on demand.
References: https://gitlab.freedesktop.org/drm/intel/-/issues/1947
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200528082427.21402-2-chris@chris-wilson.co.uk
-
- 22 May, 2020 (1 commit)
-
-
Committed by Chris Wilson
As we no longer use PIN_UPDATE (since commit 7d0aa0db ("drm/i915/gem: Unbind all current vma on changing cache-level")), we can remove PIN_UPDATE itself. The benefit is just in simplifying the vma bind.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200521144949.25357-1-chris@chris-wilson.co.uk
-
- 07 May, 2020 (1 commit)
-
-
Committed by Chris Wilson
Do an early rejection of an i915_vma_unbind() attempt if the i915_vma is currently pinned, without waiting to see if the inflight operations may unpin it. We see this problem with the shrinker trying to unbind the active vma from inside its bind worker:
<6> [472.618968] Workqueue: events_unbound fence_work [i915]
<4> [472.618970] Call Trace:
<4> [472.618974] ? __schedule+0x2e5/0x810
<4> [472.618978] schedule+0x37/0xe0
<4> [472.618982] schedule_preempt_disabled+0xf/0x20
<4> [472.618984] __mutex_lock+0x281/0x9c0
<4> [472.618987] ? mark_held_locks+0x49/0x70
<4> [472.618989] ? _raw_spin_unlock_irqrestore+0x47/0x60
<4> [472.619038] ? i915_vma_unbind+0xae/0x110 [i915]
<4> [472.619084] ? i915_vma_unbind+0xae/0x110 [i915]
<4> [472.619122] i915_vma_unbind+0xae/0x110 [i915]
<4> [472.619165] i915_gem_object_unbind+0x1dc/0x400 [i915]
<4> [472.619208] i915_gem_shrink+0x328/0x660 [i915]
<4> [472.619250] ? i915_gem_shrink_all+0x38/0x60 [i915]
<4> [472.619282] i915_gem_shrink_all+0x38/0x60 [i915]
<4> [472.619325] vm_alloc_page.constprop.25+0x1aa/0x240 [i915]
<4> [472.619330] ? rcu_read_lock_sched_held+0x4d/0x80
<4> [472.619363] ? __alloc_pd+0xb/0x30 [i915]
<4> [472.619366] ? module_assert_mutex_or_preempt+0xf/0x30
<4> [472.619368] ? __module_address+0x23/0xe0
<4> [472.619371] ? is_module_address+0x26/0x40
<4> [472.619374] ? static_obj+0x34/0x50
<4> [472.619376] ? lockdep_init_map+0x4d/0x1e0
<4> [472.619407] setup_page_dma+0xd/0x90 [i915]
<4> [472.619437] alloc_pd+0x29/0x50 [i915]
<4> [472.619470] __gen8_ppgtt_alloc+0x443/0x6b0 [i915]
<4> [472.619503] gen8_ppgtt_alloc+0xd7/0x300 [i915]
<4> [472.619535] ppgtt_bind_vma+0x2a/0xe0 [i915]
<4> [472.619577] __vma_bind+0x26/0x40 [i915]
<4> [472.619611] fence_work+0x1c/0x90 [i915]
<4> [472.619617] process_one_work+0x26a/0x620
Fixes: 2850748e ("drm/i915: Pull i915_vma_pin under the vm->mutex")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200403120150.17091-1-chris@chris-wilson.co.uk
(cherry picked from commit 614654ab)
(cherry picked from commit dd086cf516d9bea3878abb267f62ccc53acd764b)
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
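The shape of the early bail-out is simple: if the vma is pinned, no amount of waiting under (or for) vm->mutex will unpin it here, and in the shrinker-from-bind-worker case above the wait deadlocks, so report busy immediately. The sketch below is a simplification with a hypothetical locked-unbind helper, not the exact driver flow.

    static int vma_unbind_sketch(struct i915_vma *vma)
    {
            if (i915_vma_is_pinned(vma))    /* in-flight users still hold pins */
                    return -EAGAIN;         /* bail out now; the caller may retry or skip */

            /* Only now is it worth waiting for idleness and taking vm->mutex. */
            return vma_unbind_locked_sketch(vma);   /* hypothetical */
    }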
-
- 28 Apr, 2020 (1 commit)
-
-
Committed by Chris Wilson
While the ggtt vma are protected by their object lifetime, the list continues until it hits a non-ggtt vma, and that vma is not protected and may be freed as we inspect it. Hence, we require the obj->vma.lock to protect the list as we iterate. An example of forgetting to hold the obj->vma.lock is [1642834.464973] general protection fault, probably for non-canonical address 0xdead000000000122: 0000 [#1] SMP PTI [1642834.464977] CPU: 3 PID: 1954 Comm: Xorg Not tainted 5.6.0-300.fc32.x86_64 #1 [1642834.464979] Hardware name: LENOVO 20ARS25701/20ARS25701, BIOS GJET94WW (2.44 ) 09/14/2017 [1642834.465021] RIP: 0010:i915_gem_object_set_tiling+0x2c0/0x3e0 [i915] [1642834.465024] Code: 8b 84 24 18 01 00 00 f6 c4 80 74 59 49 8b 94 24 a0 00 00 00 49 8b 84 24 e0 00 00 00 49 8b 74 24 10 48 8b 92 30 01 00 00 89 c7 <80> ba 0a 06 00 00 03 0f 87 86 00 00 00 ba 00 00 08 00 b9 00 00 10 [1642834.465025] RSP: 0018:ffffa98780c77d60 EFLAGS: 00010282 [1642834.465028] RAX: ffff8d232bfb2578 RBX: 0000000000000002 RCX: ffff8d25873a0000 [1642834.465029] RDX: dead000000000122 RSI: fffff0af8ac6e408 RDI: 000000002bfb2578 [1642834.465030] RBP: ffff8d25873a0000 R08: ffff8d252bfb5638 R09: 0000000000000000 [1642834.465031] R10: 0000000000000000 R11: ffff8d252bfb5640 R12: ffffa987801cb8f8 [1642834.465032] R13: 0000000000001000 R14: ffff8d233e972e50 R15: ffff8d233e972d00 [1642834.465034] FS: 00007f6a3d327f00(0000) GS:ffff8d25926c0000(0000) knlGS:0000000000000000 [1642834.465036] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [1642834.465037] CR2: 00007f6a2064d000 CR3: 00000002fb57c001 CR4: 00000000001606e0 [1642834.465038] Call Trace: [1642834.465083] i915_gem_set_tiling_ioctl+0x122/0x230 [i915] [1642834.465121] ? i915_gem_object_set_tiling+0x3e0/0x3e0 [i915] [1642834.465151] drm_ioctl_kernel+0x86/0xd0 [drm] [1642834.465156] ? avc_has_perm+0x3b/0x160 [1642834.465178] drm_ioctl+0x206/0x390 [drm] [1642834.465216] ? i915_gem_object_set_tiling+0x3e0/0x3e0 [i915] [1642834.465221] ? selinux_file_ioctl+0x122/0x1c0 [1642834.465226] ? __do_munmap+0x24b/0x4d0 [1642834.465231] ksys_ioctl+0x82/0xc0 [1642834.465235] __x64_sys_ioctl+0x16/0x20 [1642834.465238] do_syscall_64+0x5b/0xf0 [1642834.465243] entry_SYSCALL_64_after_hwframe+0x44/0xa9 [1642834.465245] RIP: 0033:0x7f6a3d7b047b [1642834.465247] Code: 0f 1e fa 48 8b 05 1d aa 0c 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d ed a9 0c 00 f7 d8 64 89 01 48 [1642834.465249] RSP: 002b:00007ffe71adba28 EFLAGS: 00000246 ORIG_RAX: 0000000000000010 [1642834.465251] RAX: ffffffffffffffda RBX: 000055f99048fa40 RCX: 00007f6a3d7b047b [1642834.465253] RDX: 00007ffe71adba30 RSI: 00000000c0106461 RDI: 000000000000000e [1642834.465254] RBP: 0000000000000002 R08: 000055f98f3f1798 R09: 0000000000000002 [1642834.465255] R10: 0000000000001000 R11: 0000000000000246 R12: 0000000000000080 [1642834.465257] R13: 000055f98f3f1690 R14: 00000000c0106461 R15: 00007ffe71adba30 Now to take the spinlock during the list iteration, we need to break it down into two phases. In the first phase under the lock, we cannot sleep and so must defer the actual work to a second list, protected by the ggtt->mutex. We also need to hold the spinlock during creation of a new vma to serialise with updates of the tiling on the object. 
Reported-by: Dave Airlie <airlied@redhat.com>
Fixes: 2850748e ("drm/i915: Pull i915_vma_pin under the vm->mutex")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Dave Airlie <airlied@redhat.com>
Cc: <stable@vger.kernel.org> # v5.5+
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200422072805.17340-1-chris@chris-wilson.co.uk
(cherry picked from commit cb593e5d)
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
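The two-phase split described above looks roughly like this in C: phase one walks obj->vma.list under the spinlock without sleeping and merely collects the ggtt vma onto a private list; phase two runs under the ggtt mutex, where sleeping is allowed, and does the real work. The rework_link member and the helpers are hypothetical, used only to show the structure.

    static void ggtt_rework_sketch(struct drm_i915_gem_object *obj,
                                   struct i915_ggtt *ggtt)
    {
            struct i915_vma *vma, *next;
            LIST_HEAD(deferred);

            /* Phase 1: under the spinlock, no sleeping, just collect. */
            spin_lock(&obj->vma.lock);
            list_for_each_entry(vma, &obj->vma.list, obj_link) {
                    if (!i915_vma_is_ggtt(vma))
                            break;          /* ggtt vma sit first in the list */
                    list_add_tail(&vma->rework_link, &deferred);    /* hypothetical link */
            }
            spin_unlock(&obj->vma.lock);

            /* Phase 2: under the mutex, sleeping (and rebinding) is allowed. */
            mutex_lock(&ggtt->vm.mutex);
            list_for_each_entry_safe(vma, next, &deferred, rework_link) {
                    list_del_init(&vma->rework_link);
                    rework_one_vma_sketch(vma);     /* hypothetical */
            }
            mutex_unlock(&ggtt->vm.mutex);
    }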
-
- 24 Apr, 2020 (1 commit)
-
-
Committed by Chris Wilson
The history of i915_vma_close() is confusing, as is its use. As the lifetime of the i915_vma is currently bounded by the object it is attached to, we needed a means of identifying when a vma was no longer in use by userspace (via the user's fd). This is further complicated by the fact that only ppgtt vma should be closed at the user's behest, as the ggtt vma were always shared. Now that we attach the vma to a lut on the user's context, the open count does indicate how many unique and open context/vm are referencing this vma from the user. As such, we can and should just use the open_count to track when the vma is still in use by userspace. It's a poor man's replacement for reference counting.
Closes: https://gitlab.freedesktop.org/drm/intel/issues/1193
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Andi Shyti <andi.shyti@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200422190558.30509-1-chris@chris-wilson.co.uk
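In practice the open count behaves like a tiny reference count scoped to userspace handles, along the lines of this sketch (the queueing helper is hypothetical):

    static void vma_open_sketch(struct i915_vma *vma)
    {
            atomic_inc(&vma->open_count);           /* another context/vm LUT entry */
    }

    static void vma_close_sketch(struct i915_vma *vma)
    {
            if (atomic_dec_and_test(&vma->open_count))
                    queue_vma_for_destroy_sketch(vma);  /* hypothetical: last user gone */
    }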
-
- 22 Apr, 2020 (1 commit)
-
-
Committed by Chris Wilson
While the ggtt vma are protected by their object lifetime, the list continues until it hits a non-ggtt vma, and that vma is not protected and may be freed as we inspect it. Hence, we require the obj->vma.lock to protect the list as we iterate. An example of forgetting to hold the obj->vma.lock is [1642834.464973] general protection fault, probably for non-canonical address 0xdead000000000122: 0000 [#1] SMP PTI [1642834.464977] CPU: 3 PID: 1954 Comm: Xorg Not tainted 5.6.0-300.fc32.x86_64 #1 [1642834.464979] Hardware name: LENOVO 20ARS25701/20ARS25701, BIOS GJET94WW (2.44 ) 09/14/2017 [1642834.465021] RIP: 0010:i915_gem_object_set_tiling+0x2c0/0x3e0 [i915] [1642834.465024] Code: 8b 84 24 18 01 00 00 f6 c4 80 74 59 49 8b 94 24 a0 00 00 00 49 8b 84 24 e0 00 00 00 49 8b 74 24 10 48 8b 92 30 01 00 00 89 c7 <80> ba 0a 06 00 00 03 0f 87 86 00 00 00 ba 00 00 08 00 b9 00 00 10 [1642834.465025] RSP: 0018:ffffa98780c77d60 EFLAGS: 00010282 [1642834.465028] RAX: ffff8d232bfb2578 RBX: 0000000000000002 RCX: ffff8d25873a0000 [1642834.465029] RDX: dead000000000122 RSI: fffff0af8ac6e408 RDI: 000000002bfb2578 [1642834.465030] RBP: ffff8d25873a0000 R08: ffff8d252bfb5638 R09: 0000000000000000 [1642834.465031] R10: 0000000000000000 R11: ffff8d252bfb5640 R12: ffffa987801cb8f8 [1642834.465032] R13: 0000000000001000 R14: ffff8d233e972e50 R15: ffff8d233e972d00 [1642834.465034] FS: 00007f6a3d327f00(0000) GS:ffff8d25926c0000(0000) knlGS:0000000000000000 [1642834.465036] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [1642834.465037] CR2: 00007f6a2064d000 CR3: 00000002fb57c001 CR4: 00000000001606e0 [1642834.465038] Call Trace: [1642834.465083] i915_gem_set_tiling_ioctl+0x122/0x230 [i915] [1642834.465121] ? i915_gem_object_set_tiling+0x3e0/0x3e0 [i915] [1642834.465151] drm_ioctl_kernel+0x86/0xd0 [drm] [1642834.465156] ? avc_has_perm+0x3b/0x160 [1642834.465178] drm_ioctl+0x206/0x390 [drm] [1642834.465216] ? i915_gem_object_set_tiling+0x3e0/0x3e0 [i915] [1642834.465221] ? selinux_file_ioctl+0x122/0x1c0 [1642834.465226] ? __do_munmap+0x24b/0x4d0 [1642834.465231] ksys_ioctl+0x82/0xc0 [1642834.465235] __x64_sys_ioctl+0x16/0x20 [1642834.465238] do_syscall_64+0x5b/0xf0 [1642834.465243] entry_SYSCALL_64_after_hwframe+0x44/0xa9 [1642834.465245] RIP: 0033:0x7f6a3d7b047b [1642834.465247] Code: 0f 1e fa 48 8b 05 1d aa 0c 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d ed a9 0c 00 f7 d8 64 89 01 48 [1642834.465249] RSP: 002b:00007ffe71adba28 EFLAGS: 00000246 ORIG_RAX: 0000000000000010 [1642834.465251] RAX: ffffffffffffffda RBX: 000055f99048fa40 RCX: 00007f6a3d7b047b [1642834.465253] RDX: 00007ffe71adba30 RSI: 00000000c0106461 RDI: 000000000000000e [1642834.465254] RBP: 0000000000000002 R08: 000055f98f3f1798 R09: 0000000000000002 [1642834.465255] R10: 0000000000001000 R11: 0000000000000246 R12: 0000000000000080 [1642834.465257] R13: 000055f98f3f1690 R14: 00000000c0106461 R15: 00007ffe71adba30 Now to take the spinlock during the list iteration, we need to break it down into two phases. In the first phase under the lock, we cannot sleep and so must defer the actual work to a second list, protected by the ggtt->mutex. We also need to hold the spinlock during creation of a new vma to serialise with updates of the tiling on the object. 
Reported-by: Dave Airlie <airlied@redhat.com>
Fixes: 2850748e ("drm/i915: Pull i915_vma_pin under the vm->mutex")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Dave Airlie <airlied@redhat.com>
Cc: <stable@vger.kernel.org> # v5.5+
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200422072805.17340-1-chris@chris-wilson.co.uk
-
- 07 Apr, 2020 (1 commit)
-
-
Committed by Chris Wilson
Later use will require asynchronous waits on the active timelines, but will not utilize an async wait on the exclusive channel. Make the await on the exclusive fence explicit in the selection flags.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200406155840.1728-1-chris@chris-wilson.co.uk
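A hedged sketch of what an explicit selection flag looks like; the flag name, prototype and helper calls below are assumptions chosen for illustration, not the exact interface the patch adds.

    #define VMA_AWAIT_EXCL  BIT(0)  /* also wait on the exclusive (write) fence */

    static int await_vma_activity_sketch(struct i915_request *rq,
                                         struct i915_vma *vma,
                                         unsigned int flags)
    {
            int err;

            if (flags & VMA_AWAIT_EXCL) {   /* opt-in, no longer implicit */
                    err = await_exclusive_sketch(rq, &vma->active);   /* hypothetical */
                    if (err)
                            return err;
            }

            return await_timelines_sketch(rq, &vma->active);          /* hypothetical */
    }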
-
- 04 Apr, 2020 (2 commits)
-
-
Committed by Chris Wilson
Do an early rejection of an i915_vma_unbind() attempt if the i915_vma is currently pinned, without waiting to see if the inflight operations may unpin it. We see this problem with the shrinker trying to unbind the active vma from inside its bind worker:
<6> [472.618968] Workqueue: events_unbound fence_work [i915]
<4> [472.618970] Call Trace:
<4> [472.618974] ? __schedule+0x2e5/0x810
<4> [472.618978] schedule+0x37/0xe0
<4> [472.618982] schedule_preempt_disabled+0xf/0x20
<4> [472.618984] __mutex_lock+0x281/0x9c0
<4> [472.618987] ? mark_held_locks+0x49/0x70
<4> [472.618989] ? _raw_spin_unlock_irqrestore+0x47/0x60
<4> [472.619038] ? i915_vma_unbind+0xae/0x110 [i915]
<4> [472.619084] ? i915_vma_unbind+0xae/0x110 [i915]
<4> [472.619122] i915_vma_unbind+0xae/0x110 [i915]
<4> [472.619165] i915_gem_object_unbind+0x1dc/0x400 [i915]
<4> [472.619208] i915_gem_shrink+0x328/0x660 [i915]
<4> [472.619250] ? i915_gem_shrink_all+0x38/0x60 [i915]
<4> [472.619282] i915_gem_shrink_all+0x38/0x60 [i915]
<4> [472.619325] vm_alloc_page.constprop.25+0x1aa/0x240 [i915]
<4> [472.619330] ? rcu_read_lock_sched_held+0x4d/0x80
<4> [472.619363] ? __alloc_pd+0xb/0x30 [i915]
<4> [472.619366] ? module_assert_mutex_or_preempt+0xf/0x30
<4> [472.619368] ? __module_address+0x23/0xe0
<4> [472.619371] ? is_module_address+0x26/0x40
<4> [472.619374] ? static_obj+0x34/0x50
<4> [472.619376] ? lockdep_init_map+0x4d/0x1e0
<4> [472.619407] setup_page_dma+0xd/0x90 [i915]
<4> [472.619437] alloc_pd+0x29/0x50 [i915]
<4> [472.619470] __gen8_ppgtt_alloc+0x443/0x6b0 [i915]
<4> [472.619503] gen8_ppgtt_alloc+0xd7/0x300 [i915]
<4> [472.619535] ppgtt_bind_vma+0x2a/0xe0 [i915]
<4> [472.619577] __vma_bind+0x26/0x40 [i915]
<4> [472.619611] fence_work+0x1c/0x90 [i915]
<4> [472.619617] process_one_work+0x26a/0x620
Fixes: 2850748e ("drm/i915: Pull i915_vma_pin under the vm->mutex")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200403120150.17091-1-chris@chris-wilson.co.uk
-
Committed by Chris Wilson
Make sure we revoke the user's mmaps of this vma to force them to take a pagefault *before* we remove the associated aperture detiling register.
Fixes: 0d86ee35 ("drm/i915/gt: Make fence revocation unequivocal")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200403160951.8271-1-chris@chris-wilson.co.uk
-
- 02 Apr, 2020 (3 commits)
-
-
Committed by Chris Wilson
We cached the number of vma bound to the object in order to speed up shrinker decisions. This has been superseded by being more proactive in removing objects we cannot shrink from the shrinker lists, and so we can drop the clumsy attempt at atomically counting the bind count and comparing it to the number of pinned mappings of the object. This would only get clumsier with asynchronous binding and unbinding.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200401223924.16667-1-chris@chris-wilson.co.uk
-
Committed by Chris Wilson
If we must revoke the fence because the VMA is no longer present, or because the fence no longer applies, ensure that we do so, and convert it into an error if we try but cannot.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200401210104.15907-3-chris@chris-wilson.co.uk
-
Committed by Chris Wilson
Only GPU activity via the GGTT fence is asynchronous; we know that we control the CPU access directly, so we only need to wait for the GPU to stop using the fence before we relinquish it.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200401210104.15907-1-chris@chris-wilson.co.uk
-
- 28 Mar, 2020 (1 commit)
-
-
Committed by Chris Wilson
Userptr causes lockdep to complain when we are using the aliasing-ppgtt (and the ggtt, though for the ggtt the complaint is justified), in that when we revoke the userptr we take a mutex which we also use to revoke the mmaps. However, we only revoke mmaps for GGTT bindings and we never allow userptr to create a GGTT binding, so the warning should be false and is simply caused by our conflation of the aliasing-ppgtt with the ggtt. So let's try treating the binding into the aliasing-ppgtt as a separate lockclass from the ggtt. The downside is that we are deliberately suppressing lockdep's ability to warn us of cycles.
Closes: https://gitlab.freedesktop.org/drm/intel/issues/478
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Andi Shyti <andi.shyti@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200326142727.31962-1-chris@chris-wilson.co.uk
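The stock lockdep way to split two users of structurally identical locks into separate classes is a subclass annotation at init time, roughly as below; the enum values and init helper are illustrative, but lockdep_set_subclass() is the standard kernel mechanism for this kind of separation.

    enum { VM_CLASS_GGTT = 0, VM_CLASS_ALIASING_PPGTT };

    static void vm_init_lock_sketch(struct i915_address_space *vm, int subclass)
    {
            mutex_init(&vm->mutex);
            /*
             * Distinct classes: a cycle through the aliasing-ppgtt mutex is no
             * longer reported as if it involved the ggtt mutex, at the cost of
             * lockdep not seeing cycles that span both.
             */
            lockdep_set_subclass(&vm->mutex, subclass);
    }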
-
- 27 Mar, 2020 (1 commit)
-
-
Committed by Chris Wilson
Abuse^W Take advantage of the fact that we know we are inside the GT wakeref, and that prevents any client execbuf from reopening the i915_vma, in order to claim all the vma to close without having to drop the spinlock to free each one individually. By keeping the spinlock, we do not have to restart if we run concurrently with i915_gem_free_objects -- which causes them both to restart continually and make very, very slow progress.
Closes: https://gitlab.freedesktop.org/drm/intel/issues/1361
Fixes: 77853186 ("drm/i915: Claim vma while under closed_lock in i915_vma_parked()")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200323092841.22240-2-chris@chris-wilson.co.uk
(cherry picked from commit 3447c4c5)
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
- 25 Mar, 2020 (1 commit)
-
-
Committed by Chris Wilson
If the caller allows and we do not have to wait for any signals, immediately execute the work within the caller's process. By doing so we avoid the overhead of scheduling a new task, and the latency in executing it, at the cost of pulling that work back into the immediate context. (Sometimes we still prefer to offload the task to another cpu, especially if we plan on executing many such tasks in parallel for this client.)
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200325120227.8044-2-chris@chris-wilson.co.uk
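Stripped of the i915 specifics, the scheduling decision reduces to a pattern like the following, where the task structure and the pending-wait check are illustrative stand-ins:

    struct deferred_task {
            struct work_struct work;
            void (*fn)(struct deferred_task *task);
            bool has_waits;                 /* must we wait for signals first? */
    };

    static void task_commit_sketch(struct deferred_task *task, bool allow_inline)
    {
            if (allow_inline && !task->has_waits) {
                    task->fn(task);         /* run immediately in the caller's context */
                    return;
            }
            queue_work(system_unbound_wq, &task->work);     /* usual offload path */
    }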
-
- 23 Mar, 2020 (1 commit)
-
-
Committed by Chris Wilson
Abuse^W Take advantage of the fact that we know we are inside the GT wakeref, and that prevents any client execbuf from reopening the i915_vma, in order to claim all the vma to close without having to drop the spinlock to free each one individually. By keeping the spinlock, we do not have to restart if we run concurrently with i915_gem_free_objects -- which causes them both to restart continually and make very, very slow progress.
Closes: https://gitlab.freedesktop.org/drm/intel/issues/1361
Fixes: 77853186 ("drm/i915: Claim vma while under closed_lock in i915_vma_parked()")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200323092841.22240-2-chris@chris-wilson.co.uk
-
- 11 Mar, 2020 (1 commit)
-
-
Committed by Chris Wilson
Extend i915_request_await_active() to be able to asynchronously wait on all the tracked timelines simultaneously.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200311092044.16353-1-chris@chris-wilson.co.uk
-
- 04 Mar, 2020 (1 commit)
-
-
Committed by Chris Wilson
The is-closed flag may be added after we have acquired the vma under the ctx->mutex, but will not take effect until after we release the vm->mutex. That is, the flag may be set on the vma as we attempt to bind it, which will cause the vma to be unbound later, after we unpin it.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200303093157.1153887-1-chris@chris-wilson.co.uk
-
- 22 Feb, 2020 (1 commit)
-
-
Committed by Chris Wilson
As there is a delay before we pin a vma, there is an opportunity for another thread to have closed the vm and its vma (including ours). Check as soon as we acquire the vm->mutex and know the vm/vma is stable.
Closes: https://gitlab.freedesktop.org/drm/intel/issues/1291
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200221121940.2741563-1-chris@chris-wilson.co.uk
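The check amounts to re-validating the vma once the lock is held, along these lines (error handling abbreviated, the pin helper is hypothetical):

    static int vma_pin_checked_sketch(struct i915_vma *vma)
    {
            int err;

            err = mutex_lock_interruptible(&vma->vm->mutex);
            if (err)
                    return err;

            if (i915_vma_is_closed(vma)) {  /* closed while we waited for the mutex */
                    err = -ENOENT;
                    goto unlock;
            }

            err = vma_do_pin_sketch(vma);   /* hypothetical */
    unlock:
            mutex_unlock(&vma->vm->mutex);
            return err;
    }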
-
- 11 Feb, 2020 (1 commit)
-
-
Committed by Chris Wilson
Only assert that the i915_vma is now idle if no other pins are present. If another user has the i915_vma pinned, they may submit more work to the i915_vma, skipping the vm->mutex used to serialise the unbind. We need to wait again, if we want to continue and unbind this vma. However, if we own the i915_vma (we hold the vm->mutex for the unbind and the pin_count is 0), we can assert that the vma remains idle as we unbind.
Fixes: 2850748e ("drm/i915: Pull i915_vma_pin under the vm->mutex")
Closes: https://gitlab.freedesktop.org/drm/intel/issues/530
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200123224459.38128-1-chris@chris-wilson.co.uk
(cherry picked from commit 60e94557)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
-
- 03 Feb, 2020 (1 commit)
-
-
Committed by Chris Wilson
Take a reference to the previous exclusive fence on the i915_active, as we wish to add an await to it in the caller (and so must prevent it from being freed until we have completed that task).
Fixes: e3793468 ("drm/i915: Use the async worker to avoid reclaim tainting the ggtt->mutex")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200203094152.4150550-1-chris@chris-wilson.co.uk
-
- 31 Jan, 2020 (1 commit)
-
-
Committed by Chris Wilson
On Braswell and Broxton (also known as Valleyview and Apollolake), we need to serialise updates of the GGTT using the big stop_machine() hammer. This has the side effect of appearing to lockdep as a possible reclaim (since it uses the cpuhp mutex and that is tainted by per-cpu allocations). However, we want to use vm->mutex (including ggtt->mutex) from within the shrinker and so must avoid such possible taints. For this purpose, we introduced the asynchronous vma binding, and we can apply it to PIN_GLOBAL so long as we take care to add the necessary waits for the worker afterwards.
Closes: https://gitlab.freedesktop.org/drm/intel/issues/211
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200130181710.2030251-3-chris@chris-wilson.co.uk
-
- 24 Jan, 2020 (2 commits)
-
-
Committed by Chris Wilson
Optimistically wait for the prior vma activity before taking the mutex, to minimise the mutex hold time while unbinding. We then verify that the vma is idle with a second wait under the mutex, to ensure it is safe to unbind.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200123224459.38128-2-chris@chris-wilson.co.uk
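The structure is a classic optimistic double-check: pay the long wait without the lock, then take vm->mutex and confirm the vma is still idle before unbinding. A hedged sketch, assuming i915_active_wait()/i915_active_is_idle() for the activity checks and a hypothetical locked-unbind helper:

    static int vma_unbind_optimistic_sketch(struct i915_vma *vma)
    {
            int err;

            /* Long wait first, without blocking other users of the vm. */
            err = i915_active_wait(&vma->active);
            if (err)
                    return err;

            mutex_lock(&vma->vm->mutex);
            if (!i915_active_is_idle(&vma->active))
                    err = -EAGAIN;          /* new activity snuck in; wait again */
            else
                    err = vma_unbind_locked_sketch(vma);    /* hypothetical */
            mutex_unlock(&vma->vm->mutex);

            return err;
    }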
-
Committed by Chris Wilson
Only assert that the i915_vma is now idle if no other pins are present. If another user has the i915_vma pinned, they may submit more work to the i915_vma, skipping the vm->mutex used to serialise the unbind. We need to wait again, if we want to continue and unbind this vma. However, if we own the i915_vma (we hold the vm->mutex for the unbind and the pin_count is 0), we can assert that the vma remains idle as we unbind.
Fixes: 2850748e ("drm/i915: Pull i915_vma_pin under the vm->mutex")
Closes: https://gitlab.freedesktop.org/drm/intel/issues/530
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200123224459.38128-1-chris@chris-wilson.co.uk
-
- 22 Jan, 2020 (1 commit)
-
-
Committed by Chris Wilson
While we do flush writes to the vma before unbinding (to make sure they go through the right detiling register), we may also be concurrently poking at the GGTT_WRITE bit from set-domain, as we mark all GGTT vma associated with an object. We know this is for another vma, as we are currently unbinding this one -- so if this vma will be reused, it will be refaulted and have its dirty bit set before the next write.
Closes: https://gitlab.freedesktop.org/drm/intel/issues/999
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200121222447.419489-1-chris@chris-wilson.co.uk
-
- 11 Jan, 2020 (1 commit)
-
-
Committed by Chris Wilson
We need to hold the runtime-pm wakeref to update the global PTEs (as they exist behind a PCI BAR). However, some systems invoke ACPI during runtime resume and so require allocations, which are verboten inside the vm->mutex. Ergo, we must not use intel_runtime_pm_get() inside the mutex, but lift the call outside it.
Closes: https://gitlab.freedesktop.org/drm/intel/issues/958
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200110144418.1415639-1-chris@chris-wilson.co.uk
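The resulting ordering is simply wakeref first, mutex second, as in this sketch (the PTE-writing helper is hypothetical; the runtime-pm calls follow the usual intel_runtime_pm_get()/intel_runtime_pm_put() pairing):

    static void ggtt_update_sketch(struct intel_runtime_pm *rpm,
                                   struct i915_ggtt *ggtt,
                                   struct i915_vma *vma)
    {
            intel_wakeref_t wakeref;

            /* Resume (and any ACPI allocation) happens before the mutex. */
            wakeref = intel_runtime_pm_get(rpm);

            mutex_lock(&ggtt->vm.mutex);
            write_global_ptes_sketch(ggtt, vma);    /* hypothetical: touches the PCI BAR */
            mutex_unlock(&ggtt->vm.mutex);

            intel_runtime_pm_put(rpm, wakeref);
    }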
-
- 09 Jan, 2020 (1 commit)
-
-
Committed by Chris Wilson
Access through the GGTT (iomap) into the vma does require the device to be awake. However, we often take i915_vma_pin_iomap() as an early preparatory step, long before we use the iomap. Asserting that the device is awake at pin time does not protect us, and is merely a nuisance.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200108153550.3803446-2-chris@chris-wilson.co.uk
-
- 23 Dec, 2019 (2 commits)
-
-
Committed by Chris Wilson
Start introducing a kref on i915_vma in order to protect the vma unbind (i915_gem_object_unbind) from a parallel destruction (i915_vma_parked). Later, we will use the refcount to manage all access and turn i915_vma into a first-class container.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Imre Deak <imre.deak@intel.com>
Acked-by: Imre Deak <imre.deak@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191222210256.2066451-2-chris@chris-wilson.co.uk
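A kref lifecycle in miniature, showing why a parallel i915_vma_parked() can no longer free the vma out from under an unbind that holds a reference. The field and helper names are illustrative; kref_get()/kref_put() are the standard kernel primitives.

    static void vma_release_sketch(struct kref *ref)
    {
            struct i915_vma *vma = container_of(ref, struct i915_vma, ref);

            i915_vm_put(vma->vm);
            kfree(vma);
    }

    static struct i915_vma *vma_get_sketch(struct i915_vma *vma)
    {
            kref_get(&vma->ref);            /* the unbind path now keeps the vma alive */
            return vma;
    }

    static void vma_put_sketch(struct i915_vma *vma)
    {
            kref_put(&vma->ref, vma_release_sketch);        /* last ref frees it */
    }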
-
Committed by Chris Wilson
Since obj->frontbuffer is no longer protected by the struct_mutex, it may be removed while we are processing the execbuf. Mark the intel_frontbuffer as rcu protected, and so acquire a reference to the struct as we track activity upon it.
Closes: https://gitlab.freedesktop.org/drm/intel/issues/827
Fixes: 8e7cb179 ("drm/i915: Extract intel_frontbuffer active tracking")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: <stable@vger.kernel.org> # v5.4+
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191218104043.3539458-1-chris@chris-wilson.co.uk
(cherry picked from commit da42104f)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
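The lookup then follows the usual RCU plus get-unless-zero idiom sketched below; the reference-count field name is an assumption for illustration.

    static struct intel_frontbuffer *
    frontbuffer_get_sketch(struct drm_i915_gem_object *obj)
    {
            struct intel_frontbuffer *front;

            rcu_read_lock();
            front = rcu_dereference(obj->frontbuffer);
            if (front && !kref_get_unless_zero(&front->ref))        /* already being freed */
                    front = NULL;
            rcu_read_unlock();

            return front;   /* caller drops the reference once activity tracking is done */
    }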
-
- 22 Dec, 2019 (1 commit)
-
-
Committed by Chris Wilson
Allocate only an internal intel_context for the kernel_context, forgoing a global GEM context for internal use, as we only require a separate address space (for our own protection). Now, having weaned the GT from requiring ce->gem_context, we can stop referencing it entirely. This also means we no longer have to create random and unnecessary GEM contexts for internal use. GEM contexts are now entirely for tracking GEM clients, and intel_context is the execution environment on the GPU.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Andi Shyti <andi.shyti@intel.com>
Acked-by: Andi Shyti <andi.shyti@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191221160324.1073045-1-chris@chris-wilson.co.uk
-
- 18 Dec, 2019 (2 commits)
-
-
Committed by Chris Wilson
Since obj->frontbuffer is no longer protected by the struct_mutex, it may be removed while we are processing the execbuf. Mark the intel_frontbuffer as rcu protected, and so acquire a reference to the struct as we track activity upon it.
Closes: https://gitlab.freedesktop.org/drm/intel/issues/827
Fixes: 8e7cb179 ("drm/i915: Extract intel_frontbuffer active tracking")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: <stable@vger.kernel.org> # v5.4+
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191218104043.3539458-1-chris@chris-wilson.co.uk
-
Committed by Chris Wilson
If we inherit an error along the fence chain, we skip the main work callback and go straight to the error. In the case of the vma bind worker, we only dropped the pinned pages from the worker. In the process, make sure we call the release earlier rather than waiting until the final reference to the fence is dropped (as a reference is kept while being listened upon).
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191216161717.2688274-1-chris@chris-wilson.co.uk
-
- 09 Dec, 2019 (1 commit)
-
-
Committed by Chris Wilson
If someone else acquires the i915_vma before we complete our wait and unbind it, we currently error out with -EBUSY. Use -EAGAIN instead, so that, if necessary, the caller is prepared to try again.
Closes: https://gitlab.freedesktop.org/drm/intel/issues/683
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191208161252.3015727-2-chris@chris-wilson.co.uk
-
- 06 Dec, 2019 (1 commit)
-
-
Committed by Chris Wilson
Remove the vma we wish to destroy from the gt->closed_list to avoid having two i915_vma_parked() callers try to free it.
Fixes: aa5e4453 ("drm/i915/gem: Try to flush pending unbind events")
References: 2850748e ("drm/i915: Pull i915_vma_pin under the vm->mutex")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191205214159.829727-1-chris@chris-wilson.co.uk
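In outline, the fix is a guarded unlink before we free the vma ourselves, so that i915_vma_parked() can never see it on the closed list again; the lock and link names mirror the closed-list scheme but are simplified here and assume the link was initialised.

    static void vma_unlink_closed_sketch(struct intel_gt *gt, struct i915_vma *vma)
    {
            spin_lock_irq(&gt->closed_lock);
            list_del_init(&vma->closed_link);       /* parked() will no longer find it */
            spin_unlock_irq(&gt->closed_lock);
    }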
-
- 05 Dec, 2019 (1 commit)
-
-
Committed by Chris Wilson
It is not acceptable for context pinning to fail with -ENOSPC, as we should always be able to make space in the GGTT. The only reason we may fail is that other "temporary" context pins are reserving their space and we need to wait for an available slot.
Closes: https://gitlab.freedesktop.org/drm/intel/issues/676
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191205113726.413351-2-chris@chris-wilson.co.uk
-
- 04 Dec, 2019 (1 commit)
-
-
Committed by Abdiel Janulgue
This is really just an alias of mmap_gtt. The 'mmap offset' nomenclature comes from the value returned by this ioctl, which is the offset into the device fd that userspace uses with mmap(2). mmap_gtt was our initial mmap_offset implementation; this extends our CPU mmap support to allow additional fault handlers that depend on the object's backing pages. Note that we multiplex mmap_gtt and mmap_offset through the same ioctl, and use the zero-extending behaviour of drm to differentiate between them when we inspect the flags. To support multiple mmap types on an object we need to support multiple mmap_offsets for an object (each offset in the global device address space corresponding to a unique instance of the object for a file + mmap type). As we drop the simplified drm core idea of a single mmap_offset, we need to provide replacement hooks for the dumb mmap interface as well.
Link: https://gitlab.freedesktop.org/mesa/mesa/merge_requests/1675
Testcase: igt/gem_mmap_offset
Signed-off-by: Abdiel Janulgue <abdiel.janulgue@linux.intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20191204120032.3682839-1-chris@chris-wilson.co.uk
-
- 31 Oct, 2019 (1 commit)
-
-
Committed by Chris Wilson
In order to keep the assert_bind_count() valid, we need to hold the vma page reference until after we drop the bind count. However, we must also keep the drm_mm_remove_node() as the last action of i915_vma_unbind() so that it serialises with the unlocked check inside i915_vma_destroy(). So we need to split up i915_vma_remove() so that we order the detach, drop pages and remove as required during unbind.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=112067
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191030192159.18404-1-chris@chris-wilson.co.uk
-