Commit 98e1bd4a, authored by John Harrison, committed by Daniel Vetter

drm/i915: Cache ringbuf pointer in request structure

In execlist mode, the ringbuf is a function of the ring and context whereas in
legacy mode, it is derived from the ring alone. Thus the calculation required to
determine the ringbuf pointer from the ring (and context) also needs to test
whether or not execlist mode is enabled. This is messy.

Further, the request structure holds a pointer to both the ring and the context
for which it was created. Thus, given a request, it is possible to derive the
ringbuf in either legacy or execlist mode. Hence it is sufficient to pass just
the request to all the low level functions rather than some combination of
request, ring, context and ringbuf. However, rather than recalculating it each
time, it is much simpler to just cache the ringbuf pointer in the request
structure itself.

Caching the pointer means the calculation is done once at request creation time
and all further code simply reads it directly from the request structure.

OTC-Jira: VIZ-5115
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
[danvet: Drop contentless comment in lrc alloc request entirely. And
spelling fix in the commit message.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Parent 5e4be7bd
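For illustration only, a minimal sketch of the idea behind the patch; the helper names get_ringbuf_before() and get_ringbuf_after() are hypothetical and not part of the diff below, but the lookup they wrap mirrors the code removed from i915_gem_retire_requests_ring():

/* Sketch only -- hypothetical helpers, not part of this patch. */
static struct intel_ringbuffer *
get_ringbuf_before(struct drm_i915_gem_request *request,
		   struct intel_engine_cs *ring)
{
	/* Every call site had to test for execlist mode. */
	if (i915.enable_execlists)
		return request->ctx->engine[ring->id].ringbuf;
	else
		return ring->buffer;
}

static struct intel_ringbuffer *
get_ringbuf_after(struct drm_i915_gem_request *request)
{
	/* Computed once at request creation, then cached. */
	return request->ringbuf;
}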
@@ -2156,8 +2156,9 @@ struct drm_i915_gem_request {
 	/** Position in the ringbuffer of the end of the whole request */
 	u32 tail;
 
-	/** Context related to this request */
+	/** Context and ring buffer related to this request */
 	struct intel_context *ctx;
+	struct intel_ringbuffer *ringbuf;
 
 	/** Batch buffer related to this request if any */
 	struct drm_i915_gem_object *batch_obj;
@@ -2763,7 +2763,6 @@ i915_gem_retire_requests_ring(struct intel_engine_cs *ring)
 	while (!list_empty(&ring->request_list)) {
 		struct drm_i915_gem_request *request;
-		struct intel_ringbuffer *ringbuf;
 
 		request = list_first_entry(&ring->request_list,
 					   struct drm_i915_gem_request,
@@ -2774,23 +2773,12 @@ i915_gem_retire_requests_ring(struct intel_engine_cs *ring)
 		trace_i915_gem_request_retire(request);
 
-		/* This is one of the few common intersection points
-		 * between legacy ringbuffer submission and execlists:
-		 * we need to tell them apart in order to find the correct
-		 * ringbuffer to which the request belongs to.
-		 */
-		if (i915.enable_execlists) {
-			struct intel_context *ctx = request->ctx;
-			ringbuf = ctx->engine[ring->id].ringbuf;
-		} else
-			ringbuf = ring->buffer;
-
 		/* We know the GPU must have read the request to have
 		 * sent us the seqno + interrupt, so use the position
 		 * of tail of the request to update the last known position
 		 * of the GPU head.
 		 */
-		ringbuf->last_retired_head = request->postfix;
+		request->ringbuf->last_retired_head = request->postfix;
 
 		i915_gem_free_request(request);
 	}
@@ -888,12 +888,9 @@ static int logical_ring_alloc_request(struct intel_engine_cs *ring,
 		return ret;
 	}
 
-	/* Hold a reference to the context this request belongs to
-	 * (we will need it when the time comes to emit/retire the
-	 * request).
-	 */
 	request->ctx = ctx;
 	i915_gem_context_reference(request->ctx);
+	request->ringbuf = ctx->engine[ring->id].ringbuf;
 
 	ring->outstanding_lazy_request = request;
 	return 0;
@@ -2230,6 +2230,7 @@ intel_ring_alloc_request(struct intel_engine_cs *ring)
 	kref_init(&request->ref);
 	request->ring = ring;
+	request->ringbuf = ring->buffer;
 	request->uniq = dev_private->request_uniq++;
 
 	ret = i915_gem_get_seqno(ring->dev, &request->seqno);