Commit 41f425ad authored by Matt Roper, committed by Rodrigo Vivi

drm/i915/gt: Manage uncore->lock while waiting on MCR register

The GT MCR code currently relies on uncore->lock to avoid race
conditions on the steering control register during MCR operations.  The
*_fw() versions of MCR operations expect the caller to already hold
uncore->lock, while the non-fw variants manage the lock internally.
However, the sole callsite of intel_gt_mcr_wait_for_reg_fw() does not
currently hold uncore->lock, allowing a potential race condition
(and triggering an assertion on lockdep builds).  Furthermore, since
'wait for register value' requests may not return immediately, it is
undesirable to hold a fundamental lock like uncore->lock for the entire
wait and block all other MMIO for the duration; rather the lock is only
needed around the MCR read operations and can be released during the
delays.
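
For illustration, here is a minimal sketch of the two locking contracts
described above. It is not the driver's actual implementation;
example_steer_and_read() is a hypothetical helper standing in for the steered
MMIO access. The _fw flavor assumes the caller already holds uncore->lock,
while the non-fw flavor takes the lock only around each read, so it is never
held across a polling delay.

/* Illustrative sketch only; example_steer_and_read() is hypothetical. */
static u32 example_mcr_read_fw(struct intel_gt *gt, i915_mcr_reg_t reg)
{
	/* _fw contract: the caller already holds uncore->lock (and forcewake) */
	lockdep_assert_held(&gt->uncore->lock);
	return example_steer_and_read(gt, reg);
}

static u32 example_mcr_read(struct intel_gt *gt, i915_mcr_reg_t reg)
{
	unsigned long flags;
	u32 val;

	/* non-fw contract: take the lock only around the steered read... */
	spin_lock_irqsave(&gt->uncore->lock, flags);
	val = example_steer_and_read(gt, reg);
	spin_unlock_irqrestore(&gt->uncore->lock, flags);

	/* ...so any delay between polls happens with the lock dropped */
	return val;
}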

Convert intel_gt_mcr_wait_for_reg_fw() to a non-fw variant that will
manage uncore->lock internally.  This does have the side effect of
causing an unnecessary lookup in the forcewake table on each read
operation, but since the caller is still holding the relevant forcewake
domain, this will ultimately just increment the reference count and
won't actually cause any additional MMIO traffic.
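
As a rough sketch of why that redundant lookup stays cheap, assuming (as the
paragraph above states) that the caller still holds the relevant forcewake
domain: the structure and helpers below are simplified stand-ins, not i915's
actual forcewake bookkeeping.

/* Simplified stand-in for a forcewake domain; names are hypothetical. */
struct example_fw_domain {
	unsigned int wake_count;
};

static void example_fw_get(struct example_fw_domain *d)
{
	/* Domain already held by the caller: only the refcount is bumped, no MMIO wake. */
	if (d->wake_count++ == 0)
		example_wake_hardware(d);	/* hypothetical wake write, skipped while held */
}

static void example_fw_put(struct example_fw_domain *d)
{
	/* The caller's outer reference keeps the count nonzero, so no MMIO sleep either. */
	if (--d->wake_count == 0)
		example_sleep_hardware(d);	/* hypothetical sleep write */
}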

In the future we plan to switch to a dedicated MCR lock to protect the
steering critical section rather than using the overloaded and
high-traffic uncore->lock; on MTL and beyond the new lock can be
implemented on top of the hardware-provided synchronization mechanism for
steering.

Fixes: 3068bec8 ("drm/i915/gt: Add intel_gt_mcr_wait_for_reg_fw()")
Cc: Lucas De Marchi <lucas.demarchi@intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20221117173358.1980230-1-matthew.d.roper@intel.com
(cherry picked from commit 192bb40f)
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Parent 3d335a52
@@ -1035,9 +1035,9 @@ get_reg_and_bit(const struct intel_engine_cs *engine, const bool gen8,
 static int wait_for_invalidate(struct intel_gt *gt, struct reg_and_bit rb)
 {
 	if (GRAPHICS_VER_FULL(gt->i915) >= IP_VER(12, 50))
-		return intel_gt_mcr_wait_for_reg_fw(gt, rb.mcr_reg, rb.bit, 0,
-						    TLB_INVAL_TIMEOUT_US,
-						    TLB_INVAL_TIMEOUT_MS);
+		return intel_gt_mcr_wait_for_reg(gt, rb.mcr_reg, rb.bit, 0,
+						 TLB_INVAL_TIMEOUT_US,
+						 TLB_INVAL_TIMEOUT_MS);
 	else
 		return __intel_wait_for_register_fw(gt->uncore, rb.reg, rb.bit, 0,
 						    TLB_INVAL_TIMEOUT_US,
@@ -730,17 +730,19 @@ void intel_gt_mcr_get_ss_steering(struct intel_gt *gt, unsigned int dss,
  *
  * Return: 0 if the register matches the desired condition, or -ETIMEDOUT.
  */
-int intel_gt_mcr_wait_for_reg_fw(struct intel_gt *gt,
-				 i915_mcr_reg_t reg,
-				 u32 mask,
-				 u32 value,
-				 unsigned int fast_timeout_us,
-				 unsigned int slow_timeout_ms)
+int intel_gt_mcr_wait_for_reg(struct intel_gt *gt,
+			      i915_mcr_reg_t reg,
+			      u32 mask,
+			      u32 value,
+			      unsigned int fast_timeout_us,
+			      unsigned int slow_timeout_ms)
 {
-	u32 reg_value = 0;
-#define done (((reg_value = intel_gt_mcr_read_any_fw(gt, reg)) & mask) == value)
 	int ret;
 
+	lockdep_assert_not_held(&gt->uncore->lock);
+
+#define done ((intel_gt_mcr_read_any(gt, reg) & mask) == value)
+
 	/* Catch any overuse of this function */
 	might_sleep_if(slow_timeout_ms);
 	GEM_BUG_ON(fast_timeout_us > 20000);
@@ -37,12 +37,12 @@ void intel_gt_mcr_report_steering(struct drm_printer *p, struct intel_gt *gt,
 void intel_gt_mcr_get_ss_steering(struct intel_gt *gt, unsigned int dss,
 				  unsigned int *group, unsigned int *instance);
 
-int intel_gt_mcr_wait_for_reg_fw(struct intel_gt *gt,
-				 i915_mcr_reg_t reg,
-				 u32 mask,
-				 u32 value,
-				 unsigned int fast_timeout_us,
-				 unsigned int slow_timeout_ms);
+int intel_gt_mcr_wait_for_reg(struct intel_gt *gt,
+			      i915_mcr_reg_t reg,
+			      u32 mask,
+			      u32 value,
+			      unsigned int fast_timeout_us,
+			      unsigned int slow_timeout_ms);
 
 /*
  * Helper for for_each_ss_steering loop. On pre-Xe_HP platforms, subslice