Commit 454a325a authored by Chris Wilson

drm/i915: Remove leftover vma->obj->pages_pin_count on insert/remove

We now do the page pin count upfront in vma_get_pages/vma_put_pages, so
that we do the allocations before we enter the vm->mutex. Our vma page
references are tracked in vma->pages_count, so the extra
obj->pages_pin_count taken later in i915_vma_insert and i915_vma_remove
is redundant and, worse, throws off the shrinker's logic on when it can
free an object by unbinding it.
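
As a rough illustration of the shrinker problem above, consider a small
self-contained model. The struct obj counters, the
can_reclaim_by_unbinding() heuristic and the vma_* helpers below are
hypothetical stand-ins, not the driver's real structures or code; the
sketch only assumes a shrinker that treats an object as freeable by
unbinding when every page pin is accounted for by a binding.

/* Illustrative model only: simplified stand-ins for the i915 counters
 * and shrinker heuristic, not the driver's real structures or code. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct obj {
	atomic_int pages_pin_count;	/* pins on the backing pages */
	atomic_int bind_count;		/* number of bound VMAs */
};

/* Hypothetical heuristic: the object may be reclaimed by unbinding it
 * only if every remaining page pin is explained by a binding. */
static bool can_reclaim_by_unbinding(struct obj *o)
{
	return atomic_load(&o->pages_pin_count) <=
	       atomic_load(&o->bind_count);
}

/* Pin taken up front, before the bind (the vma_get_pages step). */
static void vma_get_pages(struct obj *o)
{
	atomic_fetch_add(&o->pages_pin_count, 1);
}

/* Old behaviour: the bind takes a second, redundant page pin. */
static void vma_insert_leftover_pin(struct obj *o)
{
	atomic_fetch_add(&o->pages_pin_count, 1);
	atomic_fetch_add(&o->bind_count, 1);
}

/* New behaviour: the bind only bumps the bind count. */
static void vma_insert_fixed(struct obj *o)
{
	atomic_fetch_add(&o->bind_count, 1);
}

int main(void)
{
	struct obj old = {0}, fixed = {0};

	vma_get_pages(&old);
	vma_insert_leftover_pin(&old);	/* pin = 2, bind = 1 */

	vma_get_pages(&fixed);
	vma_insert_fixed(&fixed);	/* pin = 1, bind = 1 */

	printf("leftover pin: reclaimable = %d\n", can_reclaim_by_unbinding(&old));
	printf("fixed:        reclaimable = %d\n", can_reclaim_by_unbinding(&fixed));
	return 0;
}

With the leftover pin, pages_pin_count stays ahead of bind_count for as
long as the VMA is bound, so this model's shrinker never considers the
bound object freeable by unbinding; after the fix the counters match and
it does.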
Reported-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Reported-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191015100155.10376-1-chris@chris-wilson.co.uk
Parent 56184a20
@@ -703,7 +703,6 @@ i915_vma_insert(struct i915_vma *vma, u64 size, u64 alignment, u64 flags)
 	list_add_tail(&vma->vm_link, &vma->vm->bound_list);
 
 	if (vma->obj) {
-		atomic_inc(&vma->obj->mm.pages_pin_count);
 		atomic_inc(&vma->obj->bind_count);
 		assert_bind_count(vma->obj);
 	}
@@ -726,14 +725,12 @@ i915_vma_remove(struct i915_vma *vma)
 	if (vma->obj) {
 		struct drm_i915_gem_object *obj = vma->obj;
 
+		atomic_dec(&obj->bind_count);
 		/*
 		 * And finally now the object is completely decoupled from this
 		 * vma, we can drop its hold on the backing storage and allow
 		 * it to be reaped by the shrinker.
 		 */
-		i915_gem_object_unpin_pages(obj);
-		atomic_dec(&obj->bind_count);
 		assert_bind_count(obj);
 	}
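
The second hunk is the matching half of the same accounting change:
i915_vma_remove now only drops bind_count, and the page pin taken up
front is released later by the corresponding vma_put_pages. Below is a
minimal sketch of that lifecycle, again with hypothetical stand-ins
rather than the driver's real code.

/* Illustrative model of the unbind side, matching the sketch in the
 * commit message above; simplified stand-ins, not real i915 code. */
#include <assert.h>
#include <stdatomic.h>

struct obj {
	atomic_int pages_pin_count;
	atomic_int bind_count;
};

static void vma_get_pages(struct obj *o)
{
	atomic_fetch_add(&o->pages_pin_count, 1);
}

static void vma_put_pages(struct obj *o)
{
	atomic_fetch_sub(&o->pages_pin_count, 1);
}

static void vma_insert(struct obj *o)
{
	atomic_fetch_add(&o->bind_count, 1);
}

/* Fixed remove path: only the bind count drops here; the page pin is
 * released later by the matching vma_put_pages. */
static void vma_remove(struct obj *o)
{
	atomic_fetch_sub(&o->bind_count, 1);
	/* Pins always cover bindings and counters never go negative. */
	assert(atomic_load(&o->bind_count) >= 0);
	assert(atomic_load(&o->pages_pin_count) >=
	       atomic_load(&o->bind_count));
}

int main(void)
{
	struct obj o = {0};

	vma_get_pages(&o);	/* pin taken before taking the vm mutex */
	vma_insert(&o);		/* bind:   pin = 1, bind = 1 */
	vma_remove(&o);		/* unbind: pin = 1, bind = 0 */
	vma_put_pages(&o);	/* release: pin = 0, bind = 0 */
	return 0;
}

In this model each pin is taken and released by the same layer, which is
the pairing that the removed atomic_inc/i915_gem_object_unpin_pages
calls were duplicating.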