Commit bfa01200 authored by Chris Wilson

drm/i915: Manually unwind after a failed request allocation

In the next patches, we want to move the work out of freeing the request
and into its retirement (so that we can free the request without
requiring the struct_mutex). This means that we can no longer rely on
unreferencing the request to completely tear the request down, and so we
need to manually unwind the failed allocation. In doing so, we reorder
the allocation in order to keep the unwind simple (and to ensure that we
don't try to unwind a partial request that may have modified global
state), and so we end up pushing the initial preallocation down into the
engine request initialisation functions, where we have the requisite
control over the state of the request.

Moving the initial preallocation into the engine is less than ideal: it
moves logic for handling a specific request-construction problem out of
the common code. On the other hand, it does allow those backends
significantly more flexibility in performing their allocations.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1461833819-3991-14-git-send-email-chris@chris-wilson.co.uk
Parent 0251a963
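The ordering the message describes is the classic kernel unwind ladder: acquire resources in increasing order of difficulty to undo, make the fallible backend call last, and release in reverse order via labelled gotos. A minimal self-contained sketch of that shape follows; all names here are illustrative stand-ins, not the i915 code itself.

#include <errno.h>
#include <stdlib.h>

/* Illustrative stand-ins for the driver types; not the i915 code. */
struct context { int refcount; };
struct request { struct context *ctx; };

static void context_get(struct context *ctx) { ctx->refcount++; }
static void context_put(struct context *ctx) { ctx->refcount--; }

/* Backend step that may fail; ordered last so that a failure here is
 * the only thing the common code still has to unwind. */
static int backend_alloc_extras(struct request *req)
{
	(void)req;
	return 0; /* or a negative errno on failure */
}

static int request_alloc(struct context *ctx, struct request **out)
{
	struct request *req;
	int ret;

	req = calloc(1, sizeof(*req));
	if (!req)
		return -ENOMEM;

	req->ctx = ctx;
	context_get(ctx);

	ret = backend_alloc_extras(req);
	if (ret)
		goto err_ctx; /* undo only what was acquired above */

	*out = req;
	return 0;

err_ctx:
	context_put(ctx);
	free(req);
	return ret;
}

int main(void)
{
	struct context ctx = { 0 };
	struct request *req = NULL;

	if (request_alloc(&ctx, &req))
		return 1;
	free(req);
	return 0;
}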
@@ -2769,15 +2769,6 @@ __i915_gem_request_alloc(struct intel_engine_cs *engine,
 	req->ctx = ctx;
 	i915_gem_context_reference(req->ctx);
 
-	if (i915.enable_execlists)
-		ret = intel_logical_ring_alloc_request_extras(req);
-	else
-		ret = intel_ring_alloc_request_extras(req);
-	if (ret) {
-		i915_gem_context_unreference(req->ctx);
-		goto err;
-	}
-
 	/*
	 * Reserve space in the ring buffer for all the commands required to
	 * eventually emit this request. This is to guarantee that the
@@ -2786,20 +2777,19 @@ __i915_gem_request_alloc(struct intel_engine_cs *engine,
	 * away, e.g. because a GPU scheduler has deferred it.
	 */
 	req->reserved_space = MIN_SPACE_FOR_ADD_REQUEST;
 
-	ret = intel_ring_begin(req, 0);
-	if (ret) {
-		/*
-		 * At this point, the request is fully allocated even if not
-		 * fully prepared. Thus it can be cleaned up using the proper
-		 * free code, along with any reserved space.
-		 */
-		i915_gem_request_unreference(req);
-		return ret;
-	}
+	if (i915.enable_execlists)
+		ret = intel_logical_ring_alloc_request_extras(req);
+	else
+		ret = intel_ring_alloc_request_extras(req);
+	if (ret)
+		goto err_ctx;
 
 	*req_out = req;
 	return 0;
 
+err_ctx:
+	i915_gem_context_unreference(ctx);
 err:
 	kmem_cache_free(dev_priv->requests, req);
 	return ret;
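Taken together, the two hunks above leave the tail of __i915_gem_request_alloc() in roughly the following shape (a condensed reading of the diff, not a verbatim copy). Note that req->reserved_space is assigned before the backend hooks run, since those hooks now call intel_ring_begin() themselves and the reservation must already be in place.

	req->ctx = ctx;
	i915_gem_context_reference(req->ctx);

	req->reserved_space = MIN_SPACE_FOR_ADD_REQUEST;

	if (i915.enable_execlists)
		ret = intel_logical_ring_alloc_request_extras(req);
	else
		ret = intel_ring_alloc_request_extras(req);
	if (ret)
		goto err_ctx;

	*req_out = req;
	return 0;

err_ctx:
	i915_gem_context_unreference(ctx);
err:	/* reached by earlier failure paths in the function */
	kmem_cache_free(dev_priv->requests, req);
	return ret;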
@@ -698,7 +698,7 @@ static int execlists_move_to_gpu(struct drm_i915_gem_request *req,
 int intel_logical_ring_alloc_request_extras(struct drm_i915_gem_request *request)
 {
-	int ret = 0;
+	int ret;
 
 	request->ringbuf = request->ctx->engine[request->engine->id].ringbuf;
@@ -715,9 +715,21 @@ int intel_logical_ring_alloc_request_extras(struct drm_i915_gem_request *request
 		return ret;
 	}
 
-	if (request->ctx != request->i915->kernel_context)
+	if (request->ctx != request->i915->kernel_context) {
 		ret = intel_lr_context_pin(request->ctx, request->engine);
+		if (ret)
+			return ret;
+	}
 
-	return ret;
+	ret = intel_ring_begin(request, 0);
+	if (ret)
+		goto err_unpin;
+
+	return 0;
+
+err_unpin:
+	if (request->ctx != request->i915->kernel_context)
+		intel_lr_context_unpin(request->ctx, request->engine);
+	return ret;
 }
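On the execlists side the hook thus becomes a two-step acquisition, pinning the context and then reserving ring space, with a matching unwind. Condensed from the hunks above (code between the shown regions is elided):

int intel_logical_ring_alloc_request_extras(struct drm_i915_gem_request *request)
{
	int ret;

	request->ringbuf = request->ctx->engine[request->engine->id].ringbuf;

	/* ... earlier setup elided ... */

	if (request->ctx != request->i915->kernel_context) {
		ret = intel_lr_context_pin(request->ctx, request->engine);
		if (ret)
			return ret;
	}

	ret = intel_ring_begin(request, 0);
	if (ret)
		goto err_unpin;

	return 0;

err_unpin:
	if (request->ctx != request->i915->kernel_context)
		intel_lr_context_unpin(request->ctx, request->engine);
	return ret;
}

The kernel_context is never pinned or unpinned here, so the pin and the error-path unpin are guarded by the same predicate: a conditionally acquired resource must be conditionally released under the same condition.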
@@ -2340,7 +2340,7 @@ int intel_engine_idle(struct intel_engine_cs *engine)
 int intel_ring_alloc_request_extras(struct drm_i915_gem_request *request)
 {
 	request->ringbuf = request->engine->buffer;
-	return 0;
+	return intel_ring_begin(request, 0);
 }
 
 static int wait_for_space(struct drm_i915_gem_request *req, int bytes)
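The legacy ringbuffer path needs no unwinding of its own: the engine has a single shared buffer, so the hook simply records it and tail-calls intel_ring_begin(). If that fails, nothing engine-side has been acquired, and the caller's err_ctx path releases the context reference and frees the request.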