Commit 23f54483 authored by Ben Widawsky, committed by Daniel Vetter

drm/i915: Synchronize pread/pwrite with wait_rendering

lifted from Daniel:
pread/pwrite isn't about the object's domain at all, but purely about
synchronizing for outstanding rendering. Replacing the call to
set_to_gtt_domain with a wait_rendering would imo improve code
readability. Furthermore we could pimp pread to only block for
outstanding writes and not for reads.

Since you're not the first one to trip over this: Can I volunteer you
for a follow-up patch to fix this?
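A minimal sketch of that idea, with made-up names (sketch_obj, wait_for_req) rather than the real i915 structures, assuming a per-object record of the last pending GPU read and write: a CPU read only has to wait for an outstanding GPU write, while a CPU write also has to wait for outstanding GPU reads.

#include <stdbool.h>

/* Illustrative only -- not the i915 implementation. A request id of 0
 * means there is no outstanding work of that kind on the object. */
struct sketch_obj {
	int last_write_req;	/* pending GPU write to the object */
	int last_read_req;	/* pending GPU read from the object */
};

/* Stand-in for blocking until a GPU request retires. */
static int wait_for_req(int req)
{
	(void)req;
	return 0;		/* pretend the wait always succeeds */
}

static int wait_rendering_sketch(struct sketch_obj *obj, bool readonly)
{
	int ret;

	/* A pending GPU write must finish before the CPU can safely
	 * read or write the backing pages. */
	ret = wait_for_req(obj->last_write_req);
	if (ret)
		return ret;

	/* Pending GPU reads only matter when the CPU is about to write;
	 * a pread (readonly == true) can skip this wait. */
	if (!readonly)
		ret = wait_for_req(obj->last_read_req);

	return ret;
}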

v2: Switch the pwrite patch to use !read_only. This was a typo in the
original code. (Chris, Daniel)
Recommended-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
[danvet: Fix up the logic fumble - wait_rendering has a bool readonly
parameter, set_to_gtt_domain otoh has bool write. Breakage reported by
Jani Nikula, I've double-checked that igt/gem_concurrent_blt/prw-*
would have caught this.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
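For reference, the polarity of the two helpers as described in the bracketed note above: set_to_gtt_domain() takes a bool write, wait_rendering() takes a bool readonly, so each converted call site in the diff below ends up with readonly == !write (fragment only, taken from the hunks below):

	/* pread: the GPU only needs to be done writing the object. */
	ret = i915_gem_object_wait_rendering(obj, true);   /* was set_to_gtt_domain(obj, false) */

	/* pwrite: outstanding GPU reads must retire as well. */
	ret = i915_gem_object_wait_rendering(obj, false);  /* was set_to_gtt_domain(obj, true) */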
Parent 6c4a8962
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -41,6 +41,9 @@ static void i915_gem_object_flush_gtt_write_domain(struct drm_i915_gem_object *obj);
 static void i915_gem_object_flush_cpu_write_domain(struct drm_i915_gem_object *obj,
 						    bool force);
 static __must_check int
+i915_gem_object_wait_rendering(struct drm_i915_gem_object *obj,
+			       bool readonly);
+static __must_check int
 i915_gem_object_bind_to_vm(struct drm_i915_gem_object *obj,
 			   struct i915_address_space *vm,
 			   unsigned alignment,
@@ -430,11 +433,9 @@ i915_gem_shmem_pread(struct drm_device *dev,
 		 * optimizes for the case when the gpu will dirty the data
 		 * anyway again before the next pread happens. */
 		needs_clflush = !cpu_cache_is_coherent(dev, obj->cache_level);
-		if (i915_gem_obj_bound_any(obj)) {
-			ret = i915_gem_object_set_to_gtt_domain(obj, false);
-			if (ret)
-				return ret;
-		}
+		ret = i915_gem_object_wait_rendering(obj, true);
+		if (ret)
+			return ret;
 	}

 	ret = i915_gem_object_get_pages(obj);
@@ -746,11 +747,9 @@ i915_gem_shmem_pwrite(struct drm_device *dev,
 		 * optimizes for the case when the gpu will use the data
 		 * right away and we therefore have to clflush anyway. */
 		needs_clflush_after = cpu_write_needs_clflush(obj);
-		if (i915_gem_obj_bound_any(obj)) {
-			ret = i915_gem_object_set_to_gtt_domain(obj, true);
-			if (ret)
-				return ret;
-		}
+		ret = i915_gem_object_wait_rendering(obj, false);
+		if (ret)
+			return ret;
 	}
 	/* Same trick applies to invalidate partially written cachelines read
 	 * before writing. */